diff --git a/MANUAL.html b/MANUAL.html
index 008d79094..19d38aa09 100644
--- a/MANUAL.html
+++ b/MANUAL.html
@@ -12,7 +12,7 @@
Rclone
@@ -31,6 +31,7 @@
Google Drive
HTTP
Hubic
+IBM COS S3
Memset Memstore
Microsoft Azure Blob Storage
Microsoft OneDrive
@@ -82,7 +83,7 @@
See below for some expanded Linux / macOS instructions.
See the Usage section of the docs for how to use rclone, or run rclone -h
.
Script installation
-To install rclone on Linux/MacOs/BSD systems, run:
+To install rclone on Linux/macOS/BSD systems, run:
curl https://rclone.org/install.sh | sudo bash
For beta installation, run:
curl https://rclone.org/install.sh | sudo bash -s beta
@@ -136,6 +137,7 @@ sudo mv rclone /usr/local/bin/
rclone config
See the following for detailed instructions for
destpath/sourcepath/one.txt
destpath/sourcepath/two.txt
If you are familiar with rsync
, rclone always works as if you had written a trailing / - meaning "copy the contents of this directory". This applies to all commands and whether you are talking about the source or destination.
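The trailing-slash behaviour can be sketched with a pair of hypothetical remotes (the remote and path names below are illustrative, not from the manual):

```shell
# Both invocations copy the *contents* of sourcepath into destpath,
# as if a trailing / had been written - no extra sourcepath/ level
# (destpath/sourcepath/one.txt) is created on the destination:
rclone copy remote:sourcepath remote2:destpath
rclone copy remote:sourcepath/ remote2:destpath   # identical result
```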
-See the --no-traverse
option for controlling whether rclone lists the destination directory or not.
rclone copy source:path dest:path [flags]
Options
-h, --help help for copy
@@ -268,23 +269,59 @@ rclone --dry-run --min-size 100M delete remote:path
--download Check by downloading rather than with hash.
-h, --help help for check
rclone ls
-List all the objects in the path with size and path.
+List the objects in the path with size and path.
Synopsis
-List all the objects in the path with size and path.
+Lists the objects in the source path to standard output in a human readable format with size and path. Recurses by default.
+Any of the filtering options can be applied to this command.
+There are several related list commands
+
+ls
to list size and path of objects only
+lsl
to list modification time, size and path of objects only
+lsd
to list directories only
+lsf
to list objects and directories in easy to parse format
+lsjson
to list objects and directories in JSON format
+
+ls
,lsl
,lsd
are designed to be human readable. lsf
is designed to be human and machine readable. lsjson
is designed to be machine readable.
+Note that ls
,lsl
,lsd
all recurse by default - use "--max-depth 1" to stop the recursion.
+The other list commands lsf
,lsjson
do not recurse by default - use "-R" to make them recurse.
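The recursion defaults described above can be adjusted from the command line; for example (remote name is hypothetical):

```shell
# ls, lsl and lsd recurse by default - limit them to the top level:
rclone ls --max-depth 1 remote:path
# lsf and lsjson do not recurse by default - ask them to:
rclone lsf -R remote:path
```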
rclone ls remote:path [flags]
Options
-h, --help help for ls
rclone lsd
List all directories/containers/buckets in the path.
Synopsis
-List all directories/containers/buckets in the path.
+Lists the directories in the source path to standard output. Recurses by default.
+Any of the filtering options can be applied to this command.
+There are several related list commands
+
+ls
to list size and path of objects only
+lsl
to list modification time, size and path of objects only
+lsd
to list directories only
+lsf
to list objects and directories in easy to parse format
+lsjson
to list objects and directories in JSON format
+
+ls
,lsl
,lsd
are designed to be human readable. lsf
is designed to be human and machine readable. lsjson
is designed to be machine readable.
+Note that ls
,lsl
,lsd
all recurse by default - use "--max-depth 1" to stop the recursion.
+The other list commands lsf
,lsjson
do not recurse by default - use "-R" to make them recurse.
rclone lsd remote:path [flags]
Options
-h, --help help for lsd
rclone lsl
-List all the objects path with modification time, size and path.
+List the objects in path with modification time, size and path.
Synopsis
-List all the objects path with modification time, size and path.
+Lists the objects in the source path to standard output in a human readable format with modification time, size and path. Recurses by default.
+Any of the filtering options can be applied to this command.
+There are several related list commands
+
+ls
to list size and path of objects only
+lsl
to list modification time, size and path of objects only
+lsd
to list directories only
+lsf
to list objects and directories in easy to parse format
+lsjson
to list objects and directories in JSON format
+
+ls
,lsl
,lsd
are designed to be human readable. lsf
is designed to be human and machine readable. lsjson
is designed to be machine readable.
+Note that ls
,lsl
,lsd
all recurse by default - use "--max-depth 1" to stop the recursion.
+The other list commands lsf
,lsjson
do not recurse by default - use "-R" to make them recurse.
rclone lsl remote:path [flags]
Options
-h, --help help for lsl
@@ -526,11 +563,15 @@ if src is directory
Cryptdecode returns unencrypted file names.
Synopsis
rclone cryptdecode returns unencrypted file names when provided with a list of encrypted file names. List limit is 10 items.
+If you supply the --reverse flag, it will return encrypted file names.
use it like this
-rclone cryptdecode encryptedremote: encryptedfilename1 encryptedfilename2
+rclone cryptdecode encryptedremote: encryptedfilename1 encryptedfilename2
+
+rclone cryptdecode --reverse encryptedremote: filename1 filename2
rclone cryptdecode encryptedremote: encryptedfilename [flags]
Options
- -h, --help help for cryptdecode
+ -h, --help help for cryptdecode
+ --reverse Reverse cryptdecode, encrypts filenames
rclone dbhashsum
Produces a Dropbox hash file for all the objects in the path.
Synopsis
@@ -584,25 +625,77 @@ if src is directory
Options
-h, --help help for listremotes
-l, --long Show the type as well as names.
+rclone lsf
+List directories and objects in remote:path formatted for parsing
+Synopsis
+List the contents of the source path (directories and objects) to standard output in a form which is easy to parse by scripts. By default this will just be the names of the objects and directories, one per line. The directories will have a / suffix.
+Use the --format option to control what gets listed. By default this is just the path, but you can use these parameters to control the output:
+p - path
+s - size
+t - modification time
+h - hash
+So if you wanted the path, size and modification time, you would use --format "pst", or maybe --format "tsp" to put the path last.
+If you specify "h" in the format you will get the MD5 hash by default, use the "--hash" flag to change which hash you want. Note that this can be returned as an empty string if it isn't available on the object (and for directories), "ERROR" if there was an error reading it from the object and "UNSUPPORTED" if that object does not support that hash type.
+For example to emulate the md5sum command you can use
+rclone lsf -R --hash MD5 --format hp --separator " " --files-only .
+(Though "rclone md5sum ." is an easier way of typing this.)
+By default the separator is ";". This can be changed with the --separator flag. Note that separators aren't escaped in the path so putting it last is a good strategy.
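Putting the --format and --separator flags together, a listing with time, size and path separated by commas could look like this (remote name is hypothetical):

```shell
# Modification time, size and path, comma separated, path last:
rclone lsf --format "tsp" --separator "," remote:path
```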
+Any of the filtering options can be applied to this command.
+There are several related list commands
+
+ls
to list size and path of objects only
+lsl
to list modification time, size and path of objects only
+lsd
to list directories only
+lsf
to list objects and directories in easy to parse format
+lsjson
to list objects and directories in JSON format
+
+ls
,lsl
,lsd
are designed to be human readable. lsf
is designed to be human and machine readable. lsjson
is designed to be machine readable.
+Note that ls
,lsl
,lsd
all recurse by default - use "--max-depth 1" to stop the recursion.
+The other list commands lsf
,lsjson
do not recurse by default - use "-R" to make them recurse.
+rclone lsf remote:path [flags]
+Options
+ -d, --dir-slash Append a slash to directory names. (default true)
+ --dirs-only Only list directories.
+ --files-only Only list files.
+ -F, --format string Output format - see help for details (default "p")
+ --hash h Use this hash when h is used in the format MD5|SHA-1|DropboxHash (default "MD5")
+ -h, --help help for lsf
+ -R, --recursive Recurse into the listing.
+ -s, --separator string Separator for the items in the format. (default ";")
rclone lsjson
List directories and objects in the path in JSON format.
-Synopsis
+Synopsis
List directories and objects in the path in JSON format.
The output is an array of Items, where each Item looks like this
-{ "Hashes" : { "SHA-1" : "f572d396fae9206628714fb2ce00f72e94f2258f", "MD5" : "b1946ac92492d2347c6235b4d2611184", "DropboxHash" : "ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc" }, "IsDir" : false, "ModTime" : "2017-05-31T16:15:57.034468261+01:00", "Name" : "file.txt", "Path" : "full/path/goes/here/file.txt", "Size" : 6 }
-If --hash is not specified the the Hashes property won't be emitted.
+{ "Hashes" : { "SHA-1" : "f572d396fae9206628714fb2ce00f72e94f2258f", "MD5" : "b1946ac92492d2347c6235b4d2611184", "DropboxHash" : "ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc" }, "IsDir" : false, "ModTime" : "2017-05-31T16:15:57.034468261+01:00", "Name" : "file.txt", "Encrypted" : "v0qpsdq8anpci8n929v3uu9338", "Path" : "full/path/goes/here/file.txt", "Size" : 6 }
+If --hash is not specified the Hashes property won't be emitted.
If --no-modtime is specified then ModTime will be blank.
+If --encrypted is not specified the Encrypted property won't be emitted.
+The Path field will only show folders below the remote path being listed. If "remote:path" contains the file "subfolder/file.txt", the Path for "file.txt" will be "subfolder/file.txt", not "remote:path/subfolder/file.txt". When used without --recursive the Path will always be the same as Name.
The time is in RFC3339 format with nanosecond precision.
The whole output can be processed as a JSON blob, or alternatively it can be processed line by line as each item is written one to a line.
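Because each item is written on its own line, the output is easy to consume one record at a time - for example with the jq tool (assuming jq is installed; the remote name is hypothetical):

```shell
# Print the Path of every object in the listing:
rclone lsjson -R remote:path | jq -r '.[].Path'
```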
+Any of the filtering options can be applied to this command.
+There are several related list commands
+
+ls
to list size and path of objects only
+lsl
to list modification time, size and path of objects only
+lsd
to list directories only
+lsf
to list objects and directories in easy to parse format
+lsjson
to list objects and directories in JSON format
+
+ls
,lsl
,lsd
are designed to be human readable. lsf
is designed to be human and machine readable. lsjson
is designed to be machine readable.
+Note that ls
,lsl
,lsd
all recurse by default - use "--max-depth 1" to stop the recursion.
+The other list commands lsf
,lsjson
do not recurse by default - use "-R" to make them recurse.
rclone lsjson remote:path [flags]
-Options
- --hash Include hashes in the output (may take longer).
+Options
+ -M, --encrypted Show the encrypted names.
+ --hash Include hashes in the output (may take longer).
-h, --help help for lsjson
--no-modtime Don't read the modification time (can speed things up).
-R, --recursive Recurse into the listing.
rclone mount
Mount the remote as a mountpoint. EXPERIMENTAL
-Synopsis
+Synopsis
rclone mount allows Linux, FreeBSD, macOS and Windows to mount any of Rclone's cloud storage systems as a file system with FUSE.
This is EXPERIMENTAL - use with care.
First set up your remote using rclone config
. Check it works with rclone ls
etc.
@@ -621,13 +714,18 @@ umount /path/to/local/mount
WinFsp is an open source Windows File System Proxy which makes it easy to write user space file systems for Windows. It provides a FUSE emulation layer which rclone uses in combination with cgofuse. Both of these packages are by Bill Zissimopoulos who was very helpful during the implementation of rclone mount for Windows.
Windows caveats
Note that drives created as Administrator are not visible by other accounts (including the account that was elevated as Administrator). So if you start a Windows drive from an Administrative Command Prompt and then try to access the same drive from Explorer (which does not run as Administrator), you will not be able to see the new drive.
-The easiest way around this is to start the drive from a normal command prompt. It is also possible to start a drive from the SYSTEM account (using the WinFsp.Launcher infrastructure) which creates drives accessible for everyone on the system.
+The easiest way around this is to start the drive from a normal command prompt. It is also possible to start a drive from the SYSTEM account (using the WinFsp.Launcher infrastructure) which creates drives accessible for everyone on the system, or alternatively by using the nssm service manager.
Limitations
-This can only write files seqentially, it can only seek when reading. This means that many applications won't work with their files on an rclone mount.
+Without the use of "--vfs-cache-mode" this can only write files sequentially, it can only seek when reading. This means that many applications won't work with their files on an rclone mount without "--vfs-cache-mode writes" or "--vfs-cache-mode full". See the File Caching section for more info.
The bucket based remotes (eg Swift, S3, Google Cloud Storage, B2, Hubic) won't work from the root - you will need to specify a bucket, or a path within the bucket. So swift:
won't work whereas swift:bucket
will as will swift:bucket/path
. None of these support the concept of directories, so empty directories will have a tendency to disappear once they fall out of the directory cache.
Only supported on Linux, FreeBSD, OS X and Windows at the moment.
rclone mount vs rclone sync/copy
-File systems expect things to be 100% reliable, whereas cloud storage systems are a long way from 100% reliable. The rclone sync/copy commands cope with this with lots of retries. However rclone mount can't use retries in the same way without making local copies of the uploads. This might happen in the future, but for the moment rclone mount won't do that, so will be less reliable than the rclone command.
+File systems expect things to be 100% reliable, whereas cloud storage systems are a long way from 100% reliable. The rclone sync/copy commands cope with this with lots of retries. However rclone mount can't use retries in the same way without making local copies of the uploads. Look at the EXPERIMENTAL file caching for solutions to make mount more reliable.
+Attribute caching
+You can use the flag --attr-timeout to set the time the kernel caches the attributes (size, modification time etc) for directory entries.
+The default is 0s - no caching - which is recommended for filesystems which can change outside the control of the kernel.
+If you set it higher ('1s' or '1m' say) then the kernel will call back to rclone less often making it more efficient, however there may be strange effects when files change on the remote.
+This is the same as setting the attr_timeout option in mount.fuse.
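For example, allowing the kernel to cache attributes for one second trades a little metadata freshness for fewer callbacks into rclone (paths are illustrative):

```shell
# Cache file/directory attributes for 1s - more efficient, but files
# changed on the remote may briefly show stale size/mtime:
rclone mount remote:path /path/to/mountpoint --attr-timeout 1s
```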
Filters
Note that all the rclone filters can be used to select a subset of the files to be visible in the mount.
systemd
@@ -636,12 +734,16 @@ umount /path/to/local/mount
Using the --dir-cache-time
flag, you can set how long a directory should be considered up to date and not refreshed from the backend. Changes made locally in the mount may appear immediately or invalidate the cache. However, changes done on the remote will only be picked up once the cache expires.
Alternatively, you can send a SIGHUP
signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:
kill -SIGHUP $(pidof rclone)
+If you configure rclone with a remote control then you can use rclone rc to flush the whole directory cache:
+rclone rc vfs/forget
+Or individual files or directories:
+rclone rc vfs/forget file=path/to/file dir=path/to/dir
File Caching
NB File caching is EXPERIMENTAL - use with care!
-These flags control the VFS file caching options. The VFS layer is used by rclone mount to make a cloud storage systm work more like a normal file system.
+These flags control the VFS file caching options. The VFS layer is used by rclone mount to make a cloud storage system work more like a normal file system.
You'll need to enable VFS caching if you want, for example, to read and write simultaneously to a file. See below for more details.
Note that the VFS cache works in addition to the cache backend and you may find that you need one or the other or both.
---vfs-cache-dir string Directory rclone will use for caching.
+--cache-dir string Directory rclone will use for caching.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
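As a sketch of how these flags are used together (paths are illustrative):

```shell
# Enable write caching so applications that need random-access writes
# work on the mount; cached files expire after 2 hours:
rclone mount remote:path /path/to/mountpoint \
  --vfs-cache-mode writes --vfs-cache-max-age 2h
```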
@@ -675,15 +777,17 @@ umount /path/to/local/mount
If an upload fails it will be retried up to --low-level-retries times.
--vfs-cache-mode full
In this mode all reads and writes are buffered to and from disk. When a file is opened for read it will be downloaded in its entirety first.
-This may be appropriate for your needs, or you may prefer to look at the cache backend which does a much more sophisticated job of caching, including caching directory heirachies and chunks of files.q
+This may be appropriate for your needs, or you may prefer to look at the cache backend which does a much more sophisticated job of caching, including caching directory hierarchies and chunks of files.
In this mode, unlike the others, when a file is written to the disk, it will be kept on the disk after it is written to the remote. It will be purged on a schedule according to --vfs-cache-max-age
.
This mode should support all normal file system operations.
If an upload or download fails it will be retried up to --low-level-retries times.
rclone mount remote:path /path/to/mountpoint [flags]
-Options
+Options
--allow-non-empty Allow mounting over a non-empty directory.
--allow-other Allow access to other users.
--allow-root Allow access to root user.
+ --attr-timeout duration Time for which file/directory attributes are cached.
+ --daemon Run mount as a daemon (background mode).
--debug-fuse Debug the FUSE internals - needs -v.
--default-permissions Makes kernel enforce access control based on the file mode.
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
@@ -705,7 +809,7 @@ umount /path/to/local/mount
--write-back-cache Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used.
rclone moveto
Move file or directory from source to dest.
-Synopsis
+Synopsis
If source:path is a file or directory then it moves it to a file or directory named dest:path.
This can be used to rename files or upload single files to other than their existing name. If the source is a directory then it acts exactly like the move command.
So
@@ -720,12 +824,13 @@ if src is directory
This doesn't transfer unchanged files, testing by size and modification time or MD5SUM. src will be deleted on successful transfer.
Important: Since this can cause data loss, test first with the --dry-run flag.
rclone moveto source:path dest:path [flags]
-Options
+Options
-h, --help help for moveto
rclone ncdu
Explore a remote with a text based user interface.
-Synopsis
+Synopsis
This displays a text based user interface allowing the navigation of a remote. It is most useful for answering the question - "What is using all my disk space?".
+
To make the user interface it first scans the entire remote given and builds an in memory representation. rclone ncdu can be used during this scanning phase and you will see it building up the directory structure as it goes along.
Here are the keys - press '?' to toggle the help on and off
↑,↓ or k,j to Move
@@ -738,18 +843,30 @@ if src is directory
q/ESC/c-C to quit
This is an homage to the ncdu tool but for rclone remotes. It is missing lots of features at the moment, most importantly deleting files, but is useful as it stands.
rclone ncdu remote:path [flags]
-Options
+Options
-h, --help help for ncdu
rclone obscure
Obscure password for use in the rclone.conf
-Synopsis
+Synopsis
Obscure password for use in the rclone.conf
rclone obscure password [flags]
-Options
+Options
-h, --help help for obscure
+rclone rc
+Run a command against a running rclone.
+Synopsis
+This runs a command against a running rclone. By default it will use the address specified by the --rc-addr flag.
+Arguments should be passed in as parameter=value.
+The result will be returned as a JSON object by default.
+Use "rclone rc list" to see a list of all possible commands.
+rclone rc commands parameter [flags]
+Options
+ -h, --help help for rc
+ --no-output If set don't output the JSON result.
+ --url string URL to connect to rclone remote control. (default "http://localhost:5572/")
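A typical session might look like this (vfs/forget is the command documented elsewhere in this manual; the custom URL is illustrative):

```shell
# Discover the available remote control commands:
rclone rc list
# Call one of them, passing arguments as parameter=value:
rclone rc vfs/forget file=path/to/file
# Target a server listening on a non-default address:
rclone rc --url http://localhost:5572/ vfs/forget
```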
rclone rcat
Copies standard input to file on remote.
-Synopsis
+Synopsis
rclone rcat reads from standard input (stdin) and copies it to a single remote file.
echo "hello world" | rclone rcat remote:path/to/file
ffmpeg - | rclone rcat --checksum remote:path/to/file
@@ -757,45 +874,66 @@ ffmpeg - | rclone rcat --checksum remote:path/to/file
rcat will try to upload small files in a single request, which is usually more efficient than the streaming/chunked upload endpoints, which use multiple requests. Exact behaviour depends on the remote. What is considered a small file may be set through --streaming-upload-cutoff
. Uploading only starts after the cutoff is reached or if the file ends before that. The data must fit into RAM. The cutoff needs to be small enough to adhere to the limits of your remote - please see its documentation. Generally speaking, setting this cutoff too high will decrease your performance.
Note that the upload can also not be retried because the data is not kept around until the upload succeeds. If you need to transfer a lot of data, you're better off caching locally and then rclone move
it to the destination.
rclone rcat remote:path [flags]
-Options
+Options
-h, --help help for rcat
rclone rmdirs
Remove empty directories under the path.
-Synopsis
+Synopsis
This removes any empty directories (or directories that only contain empty directories) under the path that it finds, including the path if it has nothing in it.
If you supply the --leave-root flag, it will not remove the root directory.
This is useful for tidying up remotes that rclone has left a lot of empty directories in.
rclone rmdirs remote:path [flags]
-Options
+Options
-h, --help help for rmdirs
--leave-root Do not remove root directory if empty
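For example, to tidy a remote without deleting the top-level directory itself (remote name is hypothetical):

```shell
# Remove empty directories under remote:path, keeping the root:
rclone rmdirs --leave-root remote:path
```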
rclone serve
Serve a remote over a protocol.
-Synopsis
+Synopsis
rclone serve is used to serve a remote over a given protocol. This command requires the use of a subcommand to specify the protocol, eg
rclone serve http remote:
Each subcommand has its own options which you can see in their help.
rclone serve <protocol> [opts] <remote> [flags]
-Options
+Options
-h, --help help for serve
rclone serve http
Serve the remote over HTTP.
-Synopsis
+Synopsis
rclone serve http implements a basic web server to serve the remote over HTTP. This can be viewed in a web browser or you can make a remote of type http read from it.
-Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost.
You can use the filter flags (eg --include, --exclude) to control what is served.
The server will log errors. Use -v to see access logs.
--bwlimit will be respected for file transfers. Use --stats to control the stats printing.
+Server options
+Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost.
+If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.
+--server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.
+--max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.
+Authentication
+By default this will serve files without needing a login.
+You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user and --pass flags.
+Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard Apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.
+To create an htpasswd file:
+touch htpasswd
+htpasswd -B htpasswd user
+htpasswd -B htpasswd anotherUser
+The password file can be updated while rclone is running.
+Use --realm to set the authentication realm.
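Putting the authentication options together (credentials and paths below are illustrative):

```shell
# Single username/password pair instead of an htpasswd file:
rclone serve http --user myuser --pass mypassword remote:path
# Or use an htpasswd file with a custom realm:
rclone serve http --htpasswd /path/to/htpasswd --realm "private" remote:path
```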
+SSL/TLS
+By default this will serve over http. If you want, you can serve over https. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.
+--cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.
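For testing, a self-signed certificate can be generated and used like this (the openssl invocation is an illustration, not part of the rclone docs):

```shell
# Generate a throwaway self-signed key/certificate pair:
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout key.pem -out cert.pem -subj "/CN=localhost"
# Serve over https using the generated pair:
rclone serve http --cert cert.pem --key key.pem remote:path
```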
Directory Cache
Using the --dir-cache-time
flag, you can set how long a directory should be considered up to date and not refreshed from the backend. Changes made locally in the mount may appear immediately or invalidate the cache. However, changes done on the remote will only be picked up once the cache expires.
Alternatively, you can send a SIGHUP
signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:
kill -SIGHUP $(pidof rclone)
+If you configure rclone with a remote control then you can use rclone rc to flush the whole directory cache:
+rclone rc vfs/forget
+Or individual files or directories:
+rclone rc vfs/forget file=path/to/file dir=path/to/dir
File Caching
NB File caching is EXPERIMENTAL - use with care!
-These flags control the VFS file caching options. The VFS layer is used by rclone mount to make a cloud storage systm work more like a normal file system.
+These flags control the VFS file caching options. The VFS layer is used by rclone mount to make a cloud storage system work more like a normal file system.
You'll need to enable VFS caching if you want, for example, to read and write simultaneously to a file. See below for more details.
Note that the VFS cache works in addition to the cache backend and you may find that you need one or the other or both.
---vfs-cache-dir string Directory rclone will use for caching.
+--cache-dir string Directory rclone will use for caching.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
@@ -829,41 +967,146 @@ ffmpeg - | rclone rcat --checksum remote:path/to/file
If an upload fails it will be retried up to --low-level-retries times.
--vfs-cache-mode full
In this mode all reads and writes are buffered to and from disk. When a file is opened for read it will be downloaded in its entirety first.
-This may be appropriate for your needs, or you may prefer to look at the cache backend which does a much more sophisticated job of caching, including caching directory heirachies and chunks of files.q
+This may be appropriate for your needs, or you may prefer to look at the cache backend which does a much more sophisticated job of caching, including caching directory hierarchies and chunks of files.
In this mode, unlike the others, when a file is written to the disk, it will be kept on the disk after it is written to the remote. It will be purged on a schedule according to --vfs-cache-max-age
.
This mode should support all normal file system operations.
If an upload or download fails it will be retried up to --low-level-retries times.
rclone serve http remote:path [flags]
-Options
- --addr string IPaddress:Port to bind server to. (default "localhost:8080")
+Options
+ --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080")
+ --cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --client-ca string Client certificate authority to verify clients with
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
--gid uint32 Override the gid field set by the filesystem. (default 502)
-h, --help help for http
+ --htpasswd string htpasswd file - if not provided no authentication is done
+ --key string SSL PEM Private key
+ --max-header-bytes int Maximum size of request header (default 4096)
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
+ --pass string Password for authentication.
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
--read-only Mount read-only.
+ --realm string realm for authentication (default "rclone")
+ --server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--uid uint32 Override the uid field set by the filesystem. (default 502)
--umask int Override the permission bits set by the filesystem. (default 2)
+ --user string User name for authentication.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
+rclone serve restic
+Serve the remote for restic's REST API.
+Synopsis
+rclone serve restic implements restic's REST backend API over HTTP. This allows restic to use rclone as a data storage mechanism for cloud providers that restic does not support directly.
+Restic is a command line program for doing backups.
+The server will log errors. Use -v to see access logs.
+--bwlimit will be respected for file transfers. Use --stats to control the stats printing.
+Setting up rclone for use by restic
+First set up a remote for your chosen cloud provider.
+Once you have set up the remote, check it is working with, for example "rclone lsd remote:". You may have called the remote something other than "remote:" - just substitute whatever you called it in the following instructions.
+Now start the rclone restic server
+rclone serve restic -v remote:backup
+Where you can replace "backup" in the above with whatever path in the remote you wish to use.
+By default this will serve on "localhost:8080". You can change this with the "--addr" flag.
+You might wish to start this server on boot.
+Setting up restic to use rclone
+Now you can follow the restic instructions on setting up restic.
+Note that you will need restic 0.8.2 or later to interoperate with rclone.
+For the example above you will want to use "http://localhost:8080/" as the URL for the REST server.
+For example:
+$ export RESTIC_REPOSITORY=rest:http://localhost:8080/
+$ export RESTIC_PASSWORD=yourpassword
+$ restic init
+created restic backend 8b1a4b56ae at rest:http://localhost:8080/
+
+Please note that knowledge of your password is required to access
+the repository. Losing your password means that your data is
+irrecoverably lost.
+$ restic backup /path/to/files/to/backup
+scan [/path/to/files/to/backup]
+scanned 189 directories, 312 files in 0:00
+[0:00] 100.00% 38.128 MiB / 38.128 MiB 501 / 501 items 0 errors ETA 0:00
+duration: 0:00
+snapshot 45c8fdd8 saved
+Multiple repositories
+Note that you can use the endpoint to host multiple repositories. Do this by adding a directory name or path after the URL. Note that these must end with /. Eg
+$ export RESTIC_REPOSITORY=rest:http://localhost:8080/user1repo/
+# backup user1 stuff
+$ export RESTIC_REPOSITORY=rest:http://localhost:8080/user2repo/
+# backup user2 stuff
+Server options
+Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost.
+If you set --addr to listen on a public or LAN accessible IP address then using authentication is advised - see the next section for info.
+--server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.
+--max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.
+Authentication
+By default this will serve files without needing a login.
+You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user and --pass flags.
+Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.
+To create an htpasswd file:
+touch htpasswd
+htpasswd -B htpasswd user
+htpasswd -B htpasswd anotherUser
+The password file can be updated while rclone is running.
+Use --realm to set the authentication realm.
+SSL/TLS
+By default this will serve over http. If you want you can serve over https. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.
+--cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.
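+For example, to serve over https with a self-signed certificate (the file names and the openssl invocation below are illustrative, not part of rclone):

```shell
# Generate a self-signed certificate and key (example file names)
openssl req -x509 -newkey rsa:2048 -nodes -keyout server.key -out server.crt \
  -days 365 -subj "/CN=localhost"

# Serve restic over https on port 8443 using them
rclone serve restic --addr :8443 --cert server.crt --key server.key remote:backup
```

+A self-signed certificate like this is only suitable for testing; clients will need to be told to trust it.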
+rclone serve restic remote:path [flags]
+Options
+ --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080")
+ --cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --client-ca string Client certificate authority to verify clients with
+ -h, --help help for restic
+ --htpasswd string htpasswd file - if not provided no authentication is done
+ --key string SSL PEM Private key
+ --max-header-bytes int Maximum size of request header (default 4096)
+ --pass string Password for authentication.
+ --realm string realm for authentication (default "rclone")
+ --server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --stdio run an HTTP2 server on stdin/stdout
+ --user string User name for authentication.
rclone serve webdav
Serve remote:path over webdav.
-Synopsis
+Synopsis
rclone serve webdav implements a basic webdav server to serve the remote over HTTP via the webdav protocol. This can be viewed with a webdav client or you can make a remote of type webdav to read and write it.
NB at the moment each directory listing reads the start of each file which is undesirable: see https://github.com/golang/go/issues/22577
+Server options
+Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost.
+If you set --addr to listen on a public or LAN accessible IP address then using authentication is advised - see the next section for info.
+--server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.
+--max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.
+Authentication
+By default this will serve files without needing a login.
+You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user and --pass flags.
+Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.
+To create an htpasswd file:
+touch htpasswd
+htpasswd -B htpasswd user
+htpasswd -B htpasswd anotherUser
+The password file can be updated while rclone is running.
+Use --realm to set the authentication realm.
+SSL/TLS
+By default this will serve over http. If you want you can serve over https. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.
+--cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.
Directory Cache
Using the --dir-cache-time
flag, you can set how long a directory should be considered up to date and not refreshed from the backend. Changes made locally in the mount may appear immediately or invalidate the cache. However, changes done on the remote will only be picked up once the cache expires.
Alternatively, you can send a SIGHUP
signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:
kill -SIGHUP $(pidof rclone)
+If you configure rclone with a remote control then you can use rclone rc to flush the whole directory cache:
+rclone rc vfs/forget
+Or individual files or directories:
+rclone rc vfs/forget file=path/to/file dir=path/to/dir
File Caching
NB File caching is EXPERIMENTAL - use with care!
-These flags control the VFS file caching options. The VFS layer is used by rclone mount to make a cloud storage systm work more like a normal file system.
+These flags control the VFS file caching options. The VFS layer is used by rclone mount to make a cloud storage system work more like a normal file system.
You'll need to enable VFS caching if you want, for example, to read and write simultaneously to a file. See below for more details.
Note that the VFS cache works in addition to the cache backend and you may find that you need one or the other or both.
---vfs-cache-dir string Directory rclone will use for caching.
+--cache-dir string Directory rclone will use for caching.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
@@ -897,38 +1140,48 @@ ffmpeg - | rclone rcat --checksum remote:path/to/file
If an upload fails it will be retried up to --low-level-retries times.
--vfs-cache-mode full
In this mode all reads and writes are buffered to and from disk. When a file is opened for read it will be downloaded in its entirety first.
-This may be appropriate for your needs, or you may prefer to look at the cache backend which does a much more sophisticated job of caching, including caching directory heirachies and chunks of files.q
+This may be appropriate for your needs, or you may prefer to look at the cache backend which does a much more sophisticated job of caching, including caching directory hierarchies and chunks of files.
In this mode, unlike the others, when a file is written to the disk, it will be kept on the disk after it is written to the remote. It will be purged on a schedule according to --vfs-cache-max-age
.
This mode should support all normal file system operations.
If an upload or download fails it will be retried up to --low-level-retries times.
rclone serve webdav remote:path [flags]
-Options
- --addr string IPaddress:Port to bind server to. (default "localhost:8081")
+Options
+ --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080")
+ --cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --client-ca string Client certificate authority to verify clients with
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
--gid uint32 Override the gid field set by the filesystem. (default 502)
-h, --help help for webdav
+ --htpasswd string htpasswd file - if not provided no authentication is done
+ --key string SSL PEM Private key
+ --max-header-bytes int Maximum size of request header (default 4096)
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
+ --pass string Password for authentication.
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
--read-only Mount read-only.
+ --realm string realm for authentication (default "rclone")
+ --server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--uid uint32 Override the uid field set by the filesystem. (default 502)
--umask int Override the permission bits set by the filesystem. (default 2)
+ --user string User name for authentication.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
rclone touch
Create new file or change file modification time.
-Synopsis
+Synopsis
Create new file or change file modification time.
rclone touch remote:path [flags]
-Options
+Options
-h, --help help for touch
-C, --no-create Do not create the file if it does not exist.
-t, --timestamp string Change the modification times to the specified time instead of the current time of day. The argument is of the form 'YYMMDD' (ex. 17.10.30) or 'YYYY-MM-DDTHH:MM:SS' (ex. 2006-01-02T15:04:05)
rclone tree
List the contents of the remote in a tree like fashion.
-Synopsis
+Synopsis
rclone tree lists the contents of a remote in a similar way to the unix tree command.
For example
$ rclone tree remote:path
@@ -944,7 +1197,7 @@ ffmpeg - | rclone rcat --checksum remote:path/to/file
You can use any of the filtering options with the tree command (eg --include and --exclude). You can also use --fast-list.
The tree command has many options for controlling the listing which are compatible with the tree command. Note that not all of them have short options as they conflict with rclone's short options.
rclone tree remote:path [flags]
-Options
+Options
-a, --all All files are listed (list . files too).
-C, --color Turn colorization on always.
-d, --dirs-only List directories only.
@@ -972,7 +1225,7 @@ ffmpeg - | rclone rcat --checksum remote:path/to/file
rclone copy remote:test.jpg /tmp/download
The file test.jpg
will be placed inside /tmp/download
.
This is equivalent to specifying
-rclone copy --no-traverse --files-from /tmp/files remote: /tmp/download
+rclone copy --files-from /tmp/files remote: /tmp/download
Where /tmp/files
contains the single line
test.jpg
It is recommended to use copy
when copying individual files, not sync
. They have pretty much the same effect but copy
will use a lot less memory.
@@ -1008,7 +1261,7 @@ ffmpeg - | rclone rcat --checksum remote:path/to/file
This can be used when scripting to make aged backups efficiently, eg
rclone sync remote:current-backup remote:previous-backup
rclone sync /path/to/files remote:current-backup
-Options
+Options
Rclone has a number of options to control its behaviour.
Options which use TIME use the go time parser. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
Options which use SIZE use kByte by default. However, a suffix of b
for bytes, k
for kBytes, M
for MBytes and G
for GBytes may be used. These are the binary units, eg 1, 2**10, 2**20, 2**30 respectively.
@@ -1034,6 +1287,8 @@ rclone sync /path/to/files remote:current-backup
Note that the units are Bytes/s, not Bits/s. Typically connections are measured in Bits/s - to convert divide by 8. For example, let's say you have a 10 Mbit/s connection and you wish rclone to use half of it - 5 Mbit/s. This is 5/8 = 0.625MByte/s so you would use a --bwlimit 0.625M
parameter for rclone.
On Unix systems (Linux, MacOS, …) the bandwidth limiter can be toggled by sending a SIGUSR2
signal to rclone. This allows to remove the limitations of a long running rclone transfer and to restore it back to the value specified with --bwlimit
quickly when needed. Assuming there is only one rclone instance running, you can toggle the limiter like this:
kill -SIGUSR2 $(pidof rclone)
+If you configure rclone with a remote control then you can change the bwlimit dynamically:
+rclone rc core/bwlimit rate=1M
--buffer-size=SIZE
Use this sized buffer to speed up file transfers. Each --transfer
will use this much memory for buffering.
Set to 0 to disable the buffering for the minimum memory usage.
@@ -1099,6 +1354,8 @@ rclone sync /path/to/files remote:current-backup
A low level retry is used to retry a failing operation - typically one HTTP request. This might be uploading a chunk of a big file for example. You will see low level retries in the log with the -v
flag.
This shouldn't need to be changed from the default in normal operations. However, if you get a lot of low level retries you may wish to reduce the value so rclone moves on to a high level retry (see the --retries
flag) quicker.
Disable low level retries with --low-level-retries 1
.
+--max-delete=N
+This tells rclone not to delete more than N files. If that limit is exceeded then a fatal error will be generated and rclone will stop the operation in progress.
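+For example, to abort a sync rather than let it delete more than 10 files on the destination (paths here are illustrative):

```shell
# Sync, but fail with a fatal error if more than 10 deletes would be needed
rclone --max-delete 10 sync /path/to/source remote:dest
```

+This can act as a safety net against accidentally syncing from an empty or wrong source directory.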
--max-depth=N
This modifies the recursion depth for all the commands except purge.
So if you do rclone --max-depth 1 ls remote:path
you will see only the files in the top level directory. Using --max-depth 2
means you will see all the files in first two directory levels and so on.
@@ -1130,8 +1387,10 @@ rclone sync /path/to/files remote:current-backup
The default is 1m
. Use 0 to disable.
If you set the stats interval then all commands can show stats. This can be useful when running other commands, check
or mount
for example.
Stats are logged at INFO
level by default which means they won't show at default log level NOTICE
. Use --stats-log-level NOTICE
or -v
to make them show. See the Logging section for more info on log levels.
+--stats-file-name-length integer
+By default, the --stats
output will truncate file names and paths longer than 40 characters. This is equivalent to providing --stats-file-name-length 40
. Use --stats-file-name-length 0
to disable any truncation of file names printed by stats.
--stats-log-level string
-Log level to show --stats
output at. This can be DEBUG
, INFO
, NOTICE
, or ERROR
. The default is INFO
. This means at the default level of logging which is NOTICE
the stats won't show - if you want them to then use -stats-log-level NOTICE
. See the Logging section for more info on log levels.
+Log level to show --stats
output at. This can be DEBUG
, INFO
, NOTICE
, or ERROR
. The default is INFO
. This means at the default level of logging which is NOTICE
the stats won't show - if you want them to then use --stats-log-level NOTICE
. See the Logging section for more info on log levels.
--stats-unit=bits|bytes
By default, data transfer rates will be printed in bytes/second.
This option allows the data rate to be printed in bits/second.
@@ -1162,7 +1421,7 @@ rclone sync /path/to/files remote:current-backup
If you use this flag, and the remote supports server side copy or server side move, and the source and destination have a compatible hash, then this will track renames during sync
, copy
, and move
operations and perform renaming server-side.
Files will be matched by size and hash - if both match then a rename will be considered.
If the destination does not support server-side copy or move, rclone will fall back to the default behaviour and log an error level message to the console.
-Note that --track-renames
is incompatible with --no-traverse
and that it uses extra memory to keep track of all the rename candidates.
+Note that --track-renames
uses extra memory to keep track of all the rename candidates.
Note also that --track-renames
is incompatible with --delete-before
and will select --delete-after
instead of --delete-during
.
--delete-(before,during,after)
This option allows you to specify when files on your destination are deleted when you sync folders.
@@ -1266,11 +1525,6 @@ export RCLONE_CONFIG_PASS
--no-check-certificate
controls whether a client verifies the server's certificate chain and host name. If --no-check-certificate
is true, TLS accepts any certificate presented by the server and any host name in that certificate. In this mode, TLS is susceptible to man-in-the-middle attacks.
This option defaults to false
.
This should be used only for testing.
---no-traverse
-The --no-traverse
flag controls whether the destination file system is traversed when using the copy
or move
commands. --no-traverse
is not compatible with sync
and will be ignored if you supply it with sync
.
-If you are only copying a small number of files and/or have a large number of files on the destination then --no-traverse
will stop rclone listing the destination and save time.
-However, if you are copying a large number of files, especially if you are doing a copy where lots of the files haven't changed and won't need copying then you shouldn't use --no-traverse
.
-It can also be used to reduce the memory usage of rclone when copying - rclone --no-traverse copy src dst
won't load either the source or destination listings into memory so will use the minimum amount of memory.
Filtering
For the filtering options
@@ -1289,8 +1543,15 @@ export RCLONE_CONFIG_PASS
--dump filters
See the filtering section.
+Remote control
+For the remote control options and for instructions on how to remote control rclone
+
+--rc
+- and anything starting with
--rc-
+
+See the remote control section.
Logging
-rclone has 4 levels of logging, Error
, Notice
, Info
and Debug
.
+rclone has 4 levels of logging, ERROR
, NOTICE
, INFO
and DEBUG
.
By default, rclone logs to standard error. This means you can redirect standard error and still see the normal output of rclone commands (eg rclone ls
).
By default, rclone will produce Error
and Notice
level messages.
If you use the -q
flag, rclone will only produce Error
messages.
@@ -1317,7 +1578,7 @@ export RCLONE_CONFIG_PASS
Environment Variables
Rclone can be configured entirely using environment variables. These can be used to set defaults for options or config file entries.
-Options
+Options
Every option in rclone can have its default set by environment variable.
To find the name of the environment variable, first, take the long option name, strip the leading --
, change -
to _
, make upper case and prepend RCLONE_
.
For example, to always set --stats 5s
, set the environment variable RCLONE_STATS=5s
. If you set stats on the command line this will override the environment variable setting.
@@ -1534,14 +1795,19 @@ file2.avi
Then use as --filter-from filter-file.txt
. The rules are processed in the order that they are defined.
This example will include all jpg
and png
files, exclude any files matching secret*.jpg
and include file2.avi
. It will also include everything in the directory dir
at the root of the sync, except dir/Trash
which it will exclude. Everything else will be excluded from the sync.
--files-from
- Read list of source-file names
-This reads a list of file names from the file passed in and only these files are transferred. The filtering rules are ignored completely if you use this option.
+This reads a list of file names from the file passed in and only these files are transferred. The filtering rules are ignored completely if you use this option.
This option can be repeated to read from more than one file. These are read in the order that they are placed on the command line.
-Prepare a file like this files-from.txt
+Paths within the --files-from
file will be interpreted as starting with the root specified in the command. Leading /
characters are ignored.
+For example, suppose you had files-from.txt
with this content:
# comment
file1.jpg
-file2.jpg
-Then use as --files-from files-from.txt
. This will only transfer file1.jpg
and file2.jpg
providing they exist.
-For example, let's say you had a few files you want to back up regularly with these absolute paths:
+subdir/file2.jpg
+You could then use it like this:
+rclone copy --files-from files-from.txt /home/me/pics remote:pics
+This will transfer these files only (if they exist)
+/home/me/pics/file1.jpg → remote:pics/file1.jpg
+/home/me/pics/subdir/file2.jpg → remote:pics/subdir/file2.jpg
+To take a more complicated example, let's say you had a few files you want to back up regularly with these absolute paths:
/home/user1/important
/home/user1/dir/file
/home/user2/stuff
@@ -1551,14 +1817,20 @@ user1/dir/file
user2/stuff
You could then copy these to a remote like this
rclone copy --files-from files-from.txt /home remote:backup
-The 3 files will arrive in remote:backup
with the paths as in the files-from.txt
.
+The 3 files will arrive in remote:backup
with the paths as in the files-from.txt
like this:
+/home/user1/important → remote:backup/user1/important
+/home/user1/dir/file → remote:backup/user1/dir/file
+/home/user2/stuff → remote:backup/user2/stuff
You could of course choose /
as the root too in which case your files-from.txt
might look like this.
/home/user1/important
/home/user1/dir/file
/home/user2/stuff
And you would transfer it like this
rclone copy --files-from files-from.txt / remote:backup
-In this case there will be an extra home
directory on the remote.
+In this case there will be an extra home
directory on the remote:
+/home/user1/important → remote:backup/home/user1/important
+/home/user1/dir/file → remote:backup/home/user1/dir/file
+/home/user2/stuff → remote:backup/home/user2/stuff
--min-size
- Don't transfer any file smaller than this
This option controls the minimum size file which will be transferred. This defaults to kBytes
but a suffix of k
, M
, or G
can be used.
For example --min-size 50k
means no files smaller than 50kByte will be transferred.
@@ -1615,6 +1887,125 @@ dir1/dir2/dir3/.ignore
You can exclude dir3
from sync by running the following command:
rclone sync --exclude-if-present .ignore dir1 remote:backup
Currently only one filename is supported, i.e. --exclude-if-present
should not be used multiple times.
+Remote controlling rclone
+If rclone is run with the --rc
flag then it starts an http server which can be used to remote control rclone.
+NB this is experimental and everything here is subject to change!
+Supported parameters
+--rc
+Flag to start the http server to listen for remote requests
+--rc-addr=IP
+IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+--rc-cert=KEY
+SSL PEM key (concatenation of certificate and CA certificate)
+--rc-client-ca=PATH
+Client certificate authority to verify clients with
+--rc-htpasswd=PATH
+htpasswd file - if not provided no authentication is done
+--rc-key=PATH
+SSL PEM Private key
+--rc-max-header-bytes=VALUE
+Maximum size of request header (default 4096)
+--rc-user=VALUE
+User name for authentication.
+--rc-pass=VALUE
+Password for authentication.
+--rc-realm=VALUE
+Realm for authentication (default "rclone")
+--rc-server-read-timeout=DURATION
+Timeout for server reading data (default 1h0m0s)
+--rc-server-write-timeout=DURATION
+Timeout for server writing data (default 1h0m0s)
+Accessing the remote control via the rclone rc command
+Rclone itself implements the remote control protocol in its rclone rc
command.
+You can use it like this
+$ rclone rc rc/noop param1=one param2=two
+{
+ "param1": "one",
+ "param2": "two"
+}
+Run rclone rc
on its own to see the help for the installed remote control commands.
+Supported commands
+core/bwlimit: Set the bandwidth limit.
+This sets the bandwidth limit to that passed in.
+Eg
+rclone rc core/bwlimit rate=1M
+rclone rc core/bwlimit rate=off
+cache/expire: Purge a remote from cache
+Purge a remote from the cache backend. Supports either a directory or a file. Params:
+
+- remote = path to remote (required)
+- withData = true/false to delete cached data (chunks) as well (optional)
+
+vfs/forget: Forget files or directories in the directory cache.
+This forgets the paths in the directory cache causing them to be re-read from the remote when needed.
+If no paths are passed in then it will forget all the paths in the directory cache.
+rclone rc vfs/forget
+Otherwise pass files or dirs in as file=path or dir=path. Any parameter key starting with file will forget that file and any starting with dir will forget that dir, eg
+rclone rc vfs/forget file=hello file2=goodbye dir=home/junk
+rc/noop: Echo the input to the output parameters
+This echoes the input parameters to the output parameters for testing purposes. It can be used to check that rclone is still alive and to check that parameter passing is working properly.
+rc/error: This returns an error
+This returns an error with the input as part of its error string. Useful for testing error handling.
+rc/list: List all the registered remote control commands
+This lists all the registered remote control commands as a JSON map in the commands response.
+Accessing the remote control via HTTP
+Rclone implements a simple HTTP based protocol.
+Each endpoint takes a JSON object and returns a JSON object or an error. The JSON objects are essentially a map of string names to values.
+All calls must be made using POST.
+The input objects can be supplied using URL parameters, POST parameters or by supplying "Content-Type: application/json" and a JSON blob in the body. There are examples of these below using curl
.
+The response will be a JSON blob in the body of the response. This is formatted to be reasonably human readable.
+If an error occurs then there will be an HTTP error status (usually 400) and the body of the response will contain a JSON encoded error object.
+Using POST with URL parameters only
+curl -X POST 'http://localhost:5572/rc/noop/?potato=1&sausage=2'
+Response
+{
+ "potato": "1",
+ "sausage": "2"
+}
+Here is what an error response looks like:
+curl -X POST 'http://localhost:5572/rc/error/?potato=1&sausage=2'
+{
+ "error": "arbitrary error on input map[potato:1 sausage:2]",
+ "input": {
+ "potato": "1",
+ "sausage": "2"
+ }
+}
+Note that curl doesn't return errors to the shell unless you use the -f
option
+$ curl -f -X POST 'http://localhost:5572/rc/error/?potato=1&sausage=2'
+curl: (22) The requested URL returned error: 400 Bad Request
+$ echo $?
+22
+Using POST with a form
+curl --data "potato=1" --data "sausage=2" http://localhost:5572/rc/noop/
+Response
+{
+ "potato": "1",
+ "sausage": "2"
+}
+Note that you can combine these with URL parameters too with the POST parameters taking precedence.
+curl --data "potato=1" --data "sausage=2" "http://localhost:5572/rc/noop/?rutabaga=3&sausage=4"
+Response
+{
+ "potato": "1",
+ "rutabaga": "3",
+ "sausage": "4"
+}
+
+Using POST with a JSON blob
+curl -H "Content-Type: application/json" -X POST -d '{"potato":2,"sausage":1}' http://localhost:5572/rc/noop/
+Response
+{
+    "potato": 2,
+    "sausage": 1
+}
+This can be combined with URL parameters too if required. The JSON blob takes precedence.
+curl -H "Content-Type: application/json" -X POST -d '{"potato":2,"sausage":1}' 'http://localhost:5572/rc/noop/?rutabaga=3&potato=4'
+{
+ "potato": 2,
+ "rutabaga": "3",
+ "sausage": 1
+}
Overview of cloud storage systems
Each cloud storage system is slightly different. Rclone attempts to provide a unified interface to them, but some underlying differences show through.
Features
@@ -2039,6 +2430,101 @@ dir1/dir2/dir3/.ignore
The remote supports a recursive list to list all the contents beneath a directory quickly. This enables the --fast-list
flag to work. See the rclone docs for more details.
StreamUpload
Some remotes allow files to be uploaded without knowing the file size in advance. This allows certain operations to work without spooling the file to local disk first, e.g. rclone rcat
.
+Alias
+The alias
remote provides a new name for another remote.
+Paths may be as deep as required or a local path, eg remote:directory/subdirectory
or /directory/subdirectory
.
+During the initial setup with rclone config
you will specify the target remote. The target remote can either be a local path or another remote.
+Subfolders can be used in the target remote. Assume an alias remote named backup
with the target mydrive:private/backup
. Invoking rclone mkdir backup:desktop
is exactly the same as invoking rclone mkdir mydrive:private/backup/desktop
.
+There will be no special handling of paths containing ..
segments. Invoking rclone mkdir backup:../desktop
is exactly the same as invoking rclone mkdir mydrive:private/backup/../desktop
. The empty path is not allowed as a remote. To alias the current directory use .
instead.
+Here is an example of how to make an alias called remote for a local folder. First run:
+ rclone config
+This will guide you through an interactive setup process:
+No remotes found - make a new one
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+ 1 / Alias for a existing remote
+ \ "alias"
+ 2 / Amazon Drive
+ \ "amazon cloud drive"
+ 3 / Amazon S3 (also Dreamhost, Ceph, Minio)
+ \ "s3"
+ 4 / Backblaze B2
+ \ "b2"
+ 5 / Box
+ \ "box"
+ 6 / Cache a remote
+ \ "cache"
+ 7 / Dropbox
+ \ "dropbox"
+ 8 / Encrypt/Decrypt a remote
+ \ "crypt"
+ 9 / FTP Connection
+ \ "ftp"
+10 / Google Cloud Storage (this is not Google Drive)
+ \ "google cloud storage"
+11 / Google Drive
+ \ "drive"
+12 / Hubic
+ \ "hubic"
+13 / Local Disk
+ \ "local"
+14 / Microsoft Azure Blob Storage
+ \ "azureblob"
+15 / Microsoft OneDrive
+ \ "onedrive"
+16 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+ \ "swift"
+17 / Pcloud
+ \ "pcloud"
+18 / QingCloud Object Storage
+ \ "qingstor"
+19 / SSH/SFTP Connection
+ \ "sftp"
+20 / Webdav
+ \ "webdav"
+21 / Yandex Disk
+ \ "yandex"
+22 / http Connection
+ \ "http"
+Storage> 1
+Remote or path to alias.
+Can be "myremote:path/to/dir", "myremote:bucket", "myremote:" or "/local/path".
+remote> /mnt/storage/backup
+Remote config
+--------------------
+[remote]
+remote = /mnt/storage/backup
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+Current remotes:
+
+Name Type
+==== ====
+remote alias
+
+e) Edit existing remote
+n) New remote
+d) Delete remote
+r) Rename remote
+c) Copy remote
+s) Set configuration password
+q) Quit config
+e/n/d/r/c/s/q> q
+Once configured you can then use rclone
like this,
+List directories in top level in /mnt/storage/backup
+rclone lsd remote:
+List all the files in /mnt/storage/backup
+rclone ls remote:
+Copy another local directory to the alias directory called source
+rclone copy /home/source remote:source
Amazon Drive
Paths are specified as remote:path
Paths may be as deep as required, eg remote:directory/subdirectory
.
@@ -2161,37 +2647,23 @@ y/e/d> y
No remotes found - make a new one
n) New remote
s) Set configuration password
-n/s> n
+q) Quit config
+n/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
- 1 / Amazon Drive
+ 1 / Alias for a existing remote
+ \ "alias"
+ 2 / Amazon Drive
\ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
+ 3 / Amazon S3 (also Dreamhost, Ceph, Minio)
\ "s3"
- 3 / Backblaze B2
+ 4 / Backblaze B2
\ "b2"
- 4 / Dropbox
- \ "dropbox"
- 5 / Encrypt/Decrypt a remote
- \ "crypt"
- 6 / Google Cloud Storage (this is not Google Drive)
- \ "google cloud storage"
- 7 / Google Drive
- \ "drive"
- 8 / Hubic
- \ "hubic"
- 9 / Local Disk
- \ "local"
-10 / Microsoft OneDrive
- \ "onedrive"
-11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
- \ "swift"
-12 / SSH/SFTP Connection
- \ "sftp"
-13 / Yandex Disk
- \ "yandex"
-Storage> 2
+[snip]
+23 / http Connection
+ \ "http"
+Storage> s3
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
1 / Enter AWS credentials in the next step
@@ -2200,80 +2672,91 @@ Choose a number from below, or type in your own value
\ "true"
env_auth> 1
AWS Access Key ID - leave blank for anonymous access or runtime credentials.
-access_key_id> access_key
+access_key_id> XXX
AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
-secret_access_key> secret_key
-Region to connect to.
+secret_access_key> YYY
+Region to connect to. Leave blank if you are using an S3 clone and you don't have a region.
Choose a number from below, or type in your own value
/ The default endpoint - a good choice if you are unsure.
1 | US Region, Northern Virginia or Pacific Northwest.
| Leave location constraint empty.
\ "us-east-1"
+ / US East (Ohio) Region
+ 2 | Needs location constraint us-east-2.
+ \ "us-east-2"
/ US West (Oregon) Region
- 2 | Needs location constraint us-west-2.
+ 3 | Needs location constraint us-west-2.
\ "us-west-2"
/ US West (Northern California) Region
- 3 | Needs location constraint us-west-1.
+ 4 | Needs location constraint us-west-1.
\ "us-west-1"
- / EU (Ireland) Region Region
- 4 | Needs location constraint EU or eu-west-1.
+ / Canada (Central) Region
+ 5 | Needs location constraint ca-central-1.
+ \ "ca-central-1"
+ / EU (Ireland) Region
+ 6 | Needs location constraint EU or eu-west-1.
\ "eu-west-1"
+ / EU (London) Region
+ 7 | Needs location constraint eu-west-2.
+ \ "eu-west-2"
/ EU (Frankfurt) Region
- 5 | Needs location constraint eu-central-1.
+ 8 | Needs location constraint eu-central-1.
\ "eu-central-1"
/ Asia Pacific (Singapore) Region
- 6 | Needs location constraint ap-southeast-1.
+ 9 | Needs location constraint ap-southeast-1.
\ "ap-southeast-1"
/ Asia Pacific (Sydney) Region
- 7 | Needs location constraint ap-southeast-2.
+10 | Needs location constraint ap-southeast-2.
\ "ap-southeast-2"
/ Asia Pacific (Tokyo) Region
- 8 | Needs location constraint ap-northeast-1.
+11 | Needs location constraint ap-northeast-1.
\ "ap-northeast-1"
/ Asia Pacific (Seoul)
- 9 | Needs location constraint ap-northeast-2.
+12 | Needs location constraint ap-northeast-2.
\ "ap-northeast-2"
/ Asia Pacific (Mumbai)
-10 | Needs location constraint ap-south-1.
+13 | Needs location constraint ap-south-1.
\ "ap-south-1"
/ South America (Sao Paulo) Region
-11 | Needs location constraint sa-east-1.
+14 | Needs location constraint sa-east-1.
\ "sa-east-1"
- / If using an S3 clone that only understands v2 signatures
-12 | eg Ceph/Dreamhost
- | set this and make sure you set the endpoint.
+ / Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH.
+15 | Set this and make sure you set the endpoint.
\ "other-v2-signature"
- / If using an S3 clone that understands v4 signatures set this
-13 | and make sure you set the endpoint.
- \ "other-v4-signature"
region> 1
Endpoint for S3 API.
Leave blank if using AWS to use the default endpoint for the region.
Specify if using an S3 clone such as Ceph.
-endpoint>
+endpoint>
Location constraint - must be set to match the Region. Used when creating buckets only.
Choose a number from below, or type in your own value
1 / Empty for US Region, Northern Virginia or Pacific Northwest.
\ ""
- 2 / US West (Oregon) Region.
+ 2 / US East (Ohio) Region.
+ \ "us-east-2"
+ 3 / US West (Oregon) Region.
\ "us-west-2"
- 3 / US West (Northern California) Region.
+ 4 / US West (Northern California) Region.
\ "us-west-1"
- 4 / EU (Ireland) Region.
+ 5 / Canada (Central) Region.
+ \ "ca-central-1"
+ 6 / EU (Ireland) Region.
\ "eu-west-1"
- 5 / EU Region.
+ 7 / EU (London) Region.
+ \ "eu-west-2"
+ 8 / EU Region.
\ "EU"
- 6 / Asia Pacific (Singapore) Region.
+ 9 / Asia Pacific (Singapore) Region.
\ "ap-southeast-1"
- 7 / Asia Pacific (Sydney) Region.
+10 / Asia Pacific (Sydney) Region.
\ "ap-southeast-2"
- 8 / Asia Pacific (Tokyo) Region.
+11 / Asia Pacific (Tokyo) Region.
\ "ap-northeast-1"
- 9 / Asia Pacific (Seoul)
+12 / Asia Pacific (Seoul)
\ "ap-northeast-2"
-10 / Asia Pacific (Mumbai)
+13 / Asia Pacific (Mumbai)
\ "ap-south-1"
-11 / South America (Sao Paulo) Region.
+14 / South America (Sao Paulo) Region.
\ "sa-east-1"
location_constraint> 1
Canned ACL used when creating buckets and/or storing objects in S3.
@@ -2294,14 +2777,14 @@ Choose a number from below, or type in your own value
/ Both the object owner and the bucket owner get FULL_CONTROL over the object.
6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
\ "bucket-owner-full-control"
-acl> private
+acl> 1
The server-side encryption algorithm used when storing this object in S3.
Choose a number from below, or type in your own value
1 / None
\ ""
2 / AES256
\ "AES256"
-server_side_encryption>
+server_side_encryption> 1
The storage class to use when storing objects in S3.
Choose a number from below, or type in your own value
1 / Default
@@ -2312,19 +2795,19 @@ Choose a number from below, or type in your own value
\ "REDUCED_REDUNDANCY"
4 / Standard Infrequent Access storage class
\ "STANDARD_IA"
-storage_class>
+storage_class> 1
Remote config
--------------------
[remote]
env_auth = false
-access_key_id = access_key
-secret_access_key = secret_key
+access_key_id = XXX
+secret_access_key = YYY
region = us-east-1
-endpoint =
-location_constraint =
+endpoint =
+location_constraint =
acl = private
-server_side_encryption =
-storage_class =
+server_side_encryption =
+storage_class =
--------------------
y) Yes this is OK
e) Edit this remote
@@ -2344,10 +2827,10 @@ y/e/d> y
Modified time
The modified time is stored as metadata on the object as X-Amz-Meta-Mtime
as floating point since the epoch accurate to 1 ns.
Multipart uploads
-rclone supports multipart uploads with S3 which means that it can upload files bigger than 5GB. Note that files uploaded with multipart upload don't have an MD5SUM.
+rclone supports multipart uploads with S3 which means that it can upload files bigger than 5GB. Note that files uploaded both with multipart upload and through crypt remotes do not have MD5 sums.
Buckets and Regions
With Amazon S3 you can list buckets (rclone lsd
) using any region, but you can only access the content of a bucket from the region it was created in. If you attempt to access a bucket from the wrong region, you will get an error, incorrect region, the bucket is not in 'XXX' region
.
-Authentication
+Authentication
There are two ways to supply rclone
with a set of AWS credentials. In order of precedence:
- Directly in the rclone configuration file (as configured by
rclone config
)
@@ -2402,6 +2885,9 @@ y/e/d> y
- The Resource entry must include both resource ARNs, as one implies the bucket and the other implies the bucket's objects.
For reference, here's an Ansible script that will generate one or more buckets that will work with rclone sync
.
+Key Management System (KMS)
+If you are using server side encryption with KMS then you will find you can't transfer small objects. As a work-around you can use the --ignore-checksum
flag.
+A proper fix is being worked on in issue #1824.
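As a sketch, the workaround looks like this (the bucket and path names here are made up):

```shell
# SSE-KMS objects don't expose usable MD5 checksums, so skip the check
rclone copy --ignore-checksum /path/to/files remote:bucket
```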
Glacier
You can transition objects to glacier storage using a lifecycle policy. The bucket can still be synced or copied into normally, but if rclone tries to access the data you will see an error like below.
2017/09/11 19:07:43 Failed to sync: failed to open source object: Object in GLACIER, restore first: path/to/file
@@ -2456,12 +2942,19 @@ secret_access_key>
rclone lsd anons3:1000genomes
You will be able to list and copy data but not upload it.
Ceph
-Ceph is an object storage system which presents an Amazon S3 interface.
-To use rclone with ceph, you need to set the following parameters in the config.
-access_key_id = Whatever
-secret_access_key = Whatever
-endpoint = https://ceph.endpoint.goes.here/
-region = other-v2-signature
+Ceph is an open source unified, distributed storage system designed for excellent performance, reliability and scalability. It has an S3 compatible object storage interface.
+To use rclone with Ceph, configure as above but leave the region blank and set the endpoint. You should end up with something like this in your config:
+[ceph]
+type = s3
+env_auth = false
+access_key_id = XXX
+secret_access_key = YYY
+region =
+endpoint = https://ceph.endpoint.example.com
+location_constraint =
+acl =
+server_side_encryption =
+storage_class =
Note also that Ceph sometimes puts /
in the passwords it gives users. If you read the secret access key using the command line tools you will get a JSON blob with the /
escaped as \/
. Make sure you only write /
in the secret access key.
Eg the dump from Ceph looks something like this (irrelevant keys removed).
{
@@ -2476,12 +2969,25 @@ region = other-v2-signature
],
}
Because this is a json dump, it is encoding the /
as \/
, so if you use the secret key as xxxxxx/xxxx
it will work fine.
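If you would rather pull the key out mechanically than edit it by hand, any JSON-aware tool will do the unescaping for you. A sketch using python3 from the shell, with a made-up dump in the same shape as the Ceph output above:

```shell
# A made-up dump in the same shape as the Ceph output above;
# note the secret key contains an escaped slash (\/)
dump='{"keys":[{"user":"test","access_key":"XXX","secret_key":"xxxxxx\/xxxx"}]}'
# Any JSON parser unescapes \/ back to a plain / - here python3 from the shell
secret=$(printf '%s' "$dump" | python3 -c 'import json,sys; print(json.load(sys.stdin)["keys"][0]["secret_key"])')
echo "$secret"   # xxxxxx/xxxx
```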
+Dreamhost
+Dreamhost DreamObjects is an object storage system based on CEPH.
+To use rclone with Dreamhost, configure as above but leave the region blank and set the endpoint. You should end up with something like this in your config:
+[dreamobjects]
+env_auth = false
+access_key_id = your_access_key
+secret_access_key = your_secret_key
+region =
+endpoint = objects-us-west-1.dream.io
+location_constraint =
+acl = private
+server_side_encryption =
+storage_class =
DigitalOcean Spaces
Spaces is an S3-interoperable object storage service from cloud provider DigitalOcean.
To connect to DigitalOcean Spaces you will need an access key and secret key. These can be retrieved on the "Applications & API" page of the DigitalOcean control panel. They will be needed when prompted by rclone config
for your access_key_id
and secret_access_key
.
When prompted for a region
or location_constraint
, press enter to use the default value. The region must be included in the endpoint
setting (e.g. nyc3.digitaloceanspaces.com
). The default values can be used for other settings.
Going through the whole process of creating a new remote by running rclone config
, each prompt should be answered as shown below:
-Storage> 2
+Storage> s3
env_auth> 1
access_key_id> YOUR_ACCESS_KEY
secret_access_key> YOUR_SECRET_KEY
@@ -2505,6 +3011,165 @@ storage_class =
Once configured, you can create a new Space and begin copying files. For example:
rclone mkdir spaces:my-new-space
rclone copy /path/to/files spaces:my-new-space
+IBM COS (S3)
+Information stored with IBM Cloud Object Storage is encrypted and dispersed across multiple geographic locations, and accessed through an implementation of the S3 API. This service makes use of the distributed storage technologies provided by IBM’s Cloud Object Storage System (formerly Cleversafe). For more information visit https://www.ibm.com/cloud/object-storage
+To configure access to IBM COS S3, follow the steps below:
+
+Run rclone config and select n for a new remote.
+2018/02/14 14:13:11 NOTICE: Config file "C:\\Users\\a\\.config\\rclone\\rclone.conf" not found - using defaults
+No remotes found - make a new one
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+Enter the name for the configuration
+name> IBM-COS-XREGION
+Select "s3" storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value
+ 1 / Amazon Drive
+\ "amazon cloud drive"
+2 / Amazon S3 (also Dreamhost, Ceph, Minio, IBM COS(S3))
+\ "s3"
+3 / Backblaze B2
+Storage> 2
+Select "Enter AWS credentials…"
+Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
+Choose a number from below, or type in your own value
+ 1 / Enter AWS credentials in the next step
+\ "false"
+ 2 / Get AWS credentials from the environment (env vars or IAM)
+\ "true"
+env_auth> 1
+Enter the Access Key and Secret.
+AWS Access Key ID - leave blank for anonymous access or runtime credentials.
+access_key_id> <>
+AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
+secret_access_key> <>
+Select "other-v4-signature" region.
+Region to connect to.
+Choose a number from below, or type in your own value
+/ The default endpoint - a good choice if you are unsure.
+ 1 | US Region, Northern Virginia or Pacific Northwest.
+| Leave location constraint empty.
+\ "us-east-1"
+/ US East (Ohio) Region
+2 | Needs location constraint us-east-2.
+\ "us-east-2"
+/ US West (Oregon) Region
+…<omitted>…
+15 | eg Ceph/Dreamhost
+| set this and make sure you set the endpoint.
+\ "other-v2-signature"
+/ If using an S3 clone that understands v4 signatures set this
+16 | and make sure you set the endpoint.
+\ "other-v4-signature"
+region> 16
+Enter the endpoint FQDN.
+Leave blank if using AWS to use the default endpoint for the region.
+Specify if using an S3 clone such as Ceph.
+endpoint> s3-api.us-geo.objectstorage.softlayer.net
+- Specify an IBM COS Location Constraint.
+
+Currently, the only IBM COS values for LocationConstraint are: us-standard / us-vault / us-cold / us-flex, us-east-standard / us-east-vault / us-east-cold / us-east-flex, us-south-standard / us-south-vault / us-south-cold / us-south-flex, eu-standard / eu-vault / eu-cold / eu-flex
+Location constraint - must be set to match the Region. Used when creating buckets only.
+Choose a number from below, or type in your own value
+ 1 / Empty for US Region, Northern Virginia or Pacific Northwest.
+\ ""
+ 2 / US East (Ohio) Region.
+\ "us-east-2"
+ …<omitted>…
+location_constraint> us-standard
+
+Specify a canned ACL.
+Canned ACL used when creating buckets and/or storing objects in S3.
+For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
+Choose a number from below, or type in your own value
+1 / Owner gets FULL_CONTROL. No one else has access rights (default).
+\ "private"
+2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.
+\ "public-read"
+/ Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
+ 3 | Granting this on a bucket is generally not recommended.
+\ "public-read-write"
+ 4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
+\ "authenticated-read"
+/ Object owner gets FULL_CONTROL. Bucket owner gets READ access.
+5 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
+\ "bucket-owner-read"
+/ Both the object owner and the bucket owner get FULL_CONTROL over the object.
+ 6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
+\ "bucket-owner-full-control"
+acl> 1
+Set the SSE option to "None".
+Choose a number from below, or type in your own value
+ 1 / None
+\ ""
+2 / AES256
+\ "AES256"
+server_side_encryption> 1
+Set the storage class to "None" (IBM COS uses the LocationConstraint at the bucket level).
+The storage class to use when storing objects in S3.
+Choose a number from below, or type in your own value
+1 / Default
+\ ""
+ 2 / Standard storage class
+\ "STANDARD"
+ 3 / Reduced redundancy storage class
+\ "REDUCED_REDUNDANCY"
+ 4 / Standard Infrequent Access storage class
+ \ "STANDARD_IA"
+storage_class>
+Review the displayed configuration and accept to save the remote, then quit.
+Remote config
+--------------------
+[IBM-COS-XREGION]
+env_auth = false
+access_key_id = <>
+secret_access_key = <>
+region = other-v4-signature
+endpoint = s3-api.us-geo.objectstorage.softlayer.net
+location_constraint = us-standard
+acl = private
+server_side_encryption =
+storage_class =
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+Remote config
+Current remotes:
+
+Name Type
+==== ====
+IBM-COS-XREGION s3
+
+e) Edit existing remote
+n) New remote
+d) Delete remote
+r) Rename remote
+c) Copy remote
+s) Set configuration password
+q) Quit config
+e/n/d/r/c/s/q> q
+Execute rclone commands
+1) Create a bucket.
+ rclone mkdir IBM-COS-XREGION:newbucket
+2) List available buckets.
+ rclone lsd IBM-COS-XREGION:
+ -1 2017-11-08 21:16:22 -1 test
+ -1 2018-02-14 20:16:39 -1 newbucket
+3) List contents of a bucket.
+ rclone ls IBM-COS-XREGION:newbucket
+ 18685952 test.exe
+4) Copy a file from local to remote.
+ rclone copy /Users/file.txt IBM-COS-XREGION:newbucket
+5) Copy a file from remote to local.
+ rclone copy IBM-COS-XREGION:newbucket/file.txt .
+6) Delete a file on remote.
+ rclone delete IBM-COS-XREGION:newbucket/file.txt
+
Minio
Minio is an object storage server built for cloud application developers and devops.
It is very easy to install and provides an S3 compatible server which can be used by rclone.
@@ -3056,8 +3721,21 @@ chunk_total_size = 10G
rclone ls test-cache:
To start a cached mount
rclone mount --allow-other test-cache: /var/tmp/test-cache
+Write Features
+Offline uploading
+In an effort to make writing through cache more reliable, the backend now supports this feature which can be activated by specifying a cache-tmp-upload-path
.
+A file goes through these states when using this feature:
+
+- An upload is started (usually by copying a file on the cache remote)
+- When the copy to the temporary location is complete the file is part of the cached remote and looks and behaves like any other file (reading included)
+- After
cache-tmp-wait-time
passes and the file is next in line, rclone move
is used to move the file to the cloud provider
+- Reading the file still works during the upload but most modifications on it will be prohibited
+- Once the move is complete the file is unlocked for modifications as it becomes like any other regular file
+- If the file is being read through
cache
when it's actually deleted from the temporary path then cache
will simply swap the source to the cloud provider without interrupting the reading (a small blip can happen though)
+
+Files are uploaded in sequence and only one file is uploaded at a time. Uploads will be stored in a queue and be processed based on the order they were added. The queue and the temporary storage are persistent across restarts and even purges of the cache.
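As a sketch (the temporary path here is made up), offline uploading is enabled by adding the two flags to the mount from the earlier example:

```shell
# Queue new files in /var/tmp/rclone-upload and move them to the
# cloud provider once they have sat unchanged for 5 minutes
rclone mount --allow-other \
  --cache-tmp-upload-path /var/tmp/rclone-upload \
  --cache-tmp-wait-time 5m \
  test-cache: /var/tmp/test-cache
```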
Write Support
-Writes are supported through cache
. One caveat is that a mounted cache remote does not add any retry or fallback mechanism to the upload operation. This will depend on the implementation of the wrapped remote.
+Writes are supported through cache
. One caveat is that a mounted cache remote does not add any retry or fallback mechanism to the upload operation. This will depend on the implementation of the wrapped remote. Consider using Offline uploading
for reliable writes.
One special case is covered with cache-writes
which will cache the file data at the same time as the upload when it is enabled making it available from the cache store immediately once the upload is finished.
Read Features
Multiple connections
@@ -3071,6 +3749,9 @@ chunk_total_size = 10G
How to enable? Run rclone config
and add all the Plex options (endpoint, username and password) in your remote and it will be automatically enabled.
Affected settings: - cache-workers
: Configured value during confirmed playback or 1 all the other times
Known issues
+Mount and --dir-cache-time
+--dir-cache-time controls the first layer of directory caching which works at the mount layer. Being an independent caching mechanism from the cache
backend, it will manage its own entries based on the configured time.
+To avoid getting in a scenario where dir cache has obsolete data and cache would have the correct one, try to set --dir-cache-time
to a lower time than --cache-info-age
. Default values are already configured in this way.
Windows support - Experimental
There are a couple of issues with Windows mount
functionality that still require some investigation. It should be considered experimental for now while fixes come in for this OS.
Most of the issues seem to be related to the difference between filesystems on Linux flavors and Windows as cache is heavily dependent on them.
@@ -3093,6 +3774,11 @@ chunk_total_size = 10G
One common scenario is to keep your data encrypted in the cloud provider using the crypt
remote. crypt
uses a similar technique to wrap around an existing remote and handles this translation in a seamless way.
There is an issue with wrapping the remotes in this order: cloud remote -> crypt -> cache
During testing, I experienced a lot of bans with the remotes in this order. I suspect it might be related to how crypt opens files on the cloud provider which makes it think we're downloading the full file instead of small chunks. Organizing the remotes in this order yields better results: cloud remote -> cache -> crypt
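With made-up remote names, that recommended order looks something like this in the config file (a minimal sketch; the type-specific options for each remote are omitted):

```ini
[mydrive]
type = drive

[mycache]
type = cache
remote = mydrive:

[mycrypt]
type = crypt
remote = mycache:encrypted
```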
+Cache and Remote Control (--rc)
+Cache supports the new --rc
mode in rclone and can be remote controlled through the following endpoints. Note that the listener is disabled by default; it only starts when you add the flag.
+rc cache/expire
+Purge a remote from the cache backend. Supports either a directory or a file. It supports both encrypted and unencrypted file names if cache is wrapped by crypt.
+Params:
+- remote = path to remote (required)
+- withData = true/false to delete cached data (chunks) as well (optional, false by default)
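For example, to expire a directory and also drop its cached chunks (the path here is a placeholder):

```shell
rclone rc cache/expire remote=path/to/sub/folder/ withData=true
```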
Specific options
Here are the command line options specific to this cloud storage system.
--cache-chunk-path=PATH
@@ -3141,6 +3827,18 @@ chunk_total_size = 10G
--cache-writes
If you need to read files immediately after you upload them through cache
you can enable this flag to have their data stored in the cache store at the same time during upload.
Default: not set
+--cache-tmp-upload-path=PATH
+This is the path that cache
will use as temporary storage for new files that need to be uploaded to the cloud provider.
+Specifying a value will enable this feature. Without it, it is completely disabled and files will be uploaded directly to the cloud provider.
+Default: empty
+--cache-tmp-wait-time=DURATION
+This is the duration that a file must wait in the temporary location cache-tmp-upload-path before it is selected for upload.
+Note that only one file is uploaded at a time and it can take longer for the upload to start if a queue has formed.
+Default: 15m
+--cache-db-wait-time=DURATION
+Only one process can have the DB open at any one time, so rclone waits for this duration for the DB to become available before it gives an error.
+If you set it to 0 then it will wait forever.
+Default: 1s
Crypt
The crypt
remote encrypts and decrypts another remote.
To use it first set up the underlying remote following the config instructions for that remote. You can also use a local pathname instead of a remote which will encrypt and decrypt from that directory which might be useful for encrypting onto a USB stick for example.
@@ -3296,7 +3994,7 @@ $ rclone -q ls secret:
Standard
- file names encrypted
-- file names can't be as long (~156 characters)
+- file names can't be as long (~143 characters)
- can use sub paths and copy single files
- directory structure visible
- identical files names will have identical uploaded names
@@ -3320,7 +4018,7 @@ $ rclone -q ls secret:
True
Encrypts the whole file path including directory names Example: 1/12/123.txt
is encrypted to p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng/qgm4avr35m5loi1th53ato71v0
False
-Only encrypts file names, skips directory names Example: 1/12/123/txt
is encrypted to 1/12/qgm4avr35m5loi1th53ato71v0
+Only encrypts file names, skips directory names Example: 1/12/123.txt
is encrypted to 1/12/qgm4avr35m5loi1th53ato71v0
Modified time and hashes
Crypt stores modification times using the underlying remote so support depends on that.
Hashes are not stored for crypt. However the data integrity is protected by an extremely strong crypto authenticator.
@@ -3746,39 +4444,34 @@ n/r/c/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
- 1 / Amazon Drive
- \ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
- \ "s3"
- 3 / Backblaze B2
- \ "b2"
- 4 / Dropbox
- \ "dropbox"
- 5 / Encrypt/Decrypt a remote
- \ "crypt"
- 6 / FTP Connection
- \ "ftp"
- 7 / Google Cloud Storage (this is not Google Drive)
- \ "google cloud storage"
- 8 / Google Drive
+[snip]
+10 / Google Drive
\ "drive"
- 9 / Hubic
- \ "hubic"
-10 / Local Disk
- \ "local"
-11 / Microsoft OneDrive
- \ "onedrive"
-12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
- \ "swift"
-13 / SSH/SFTP Connection
- \ "sftp"
-14 / Yandex Disk
- \ "yandex"
-Storage> 8
+[snip]
+Storage> drive
Google Application Client Id - leave blank normally.
client_id>
Google Application Client Secret - leave blank normally.
client_secret>
+Scope that rclone should use when requesting access from drive.
+Choose a number from below, or type in your own value
+ 1 / Full access all files, excluding Application Data Folder.
+ \ "drive"
+ 2 / Read-only access to file metadata and file contents.
+ \ "drive.readonly"
+ / Access to files created by rclone only.
+ 3 | These are visible in the drive website.
+ | File authorization is revoked when the user deauthorizes the app.
+ \ "drive.file"
+ / Allows read and write access to the Application Data folder.
+ 4 | This is not visible in the drive website.
+ \ "drive.appfolder"
+ / Allows read-only access to file metadata but
+ 5 | does not allow any access to read or download file content.
+ \ "drive.metadata.readonly"
+scope> 1
+ID of the root folder - leave blank normally. Fill in to access "Computers" folders. (see docs).
+root_folder_id>
Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login.
service_account_file>
Remote config
@@ -3798,9 +4491,12 @@ n) No
y/n> n
--------------------
[remote]
-client_id =
-client_secret =
-token = {"AccessToken":"xxxx.x.xxxxx_xxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"1/xxxxxxxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxx","Expiry":"2014-03-16T13:57:58.955387075Z","Extra":null}
+client_id =
+client_secret =
+scope = drive
+root_folder_id =
+service_account_file =
+token = {"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"2014-03-16T13:57:58.955387075Z"}
--------------------
y) Yes this is OK
e) Edit this remote
@@ -3814,10 +4510,81 @@ y/e/d> y
rclone ls remote:
To copy a local directory to a drive directory called backup
rclone copy /home/source remote:backup
+Scopes
+Rclone allows you to select which scope you would like rclone to use. This changes what type of token is granted to rclone. The scopes are defined here.
+The scopes are:
+drive
+This is the default scope and allows full access to all files, except for the Application Data Folder (see below).
+Choose this one if you aren't sure.
+drive.readonly
+This allows read only access to all files. Files may be listed and downloaded but not uploaded, renamed or deleted.
+drive.file
+With this scope rclone can read/view/modify only those files and folders it creates.
+So if you uploaded files to drive via the web interface (or any other means) they will not be visible to rclone.
+This can be useful if you are using rclone to backup data and you want to be sure confidential data on your drive is not visible to rclone.
+Files created with this scope are visible in the web interface.
+drive.appfolder
+This gives rclone its own private area to store files. Rclone will not be able to see any other files on your drive and you won't be able to see rclone's files from the web interface either.
+drive.metadata.readonly
+This allows read only access to file names only. It does not allow rclone to download or upload data, or rename or delete files or directories.
+Root folder ID
+You can set the root_folder_id
for rclone. This is the directory (identified by its Folder ID
) that rclone considers to be the root of your drive.
+Normally you will leave this blank and rclone will determine the correct root to use itself.
+However you can set this to restrict rclone to a specific folder hierarchy or to access data within the "Computers" tab on the drive web interface (where files from Google's Backup and Sync desktop program go).
+In order to do this you will have to find the Folder ID
of the directory you wish rclone to display. This will be the last segment of the URL when you open the relevant folder in the drive web interface.
+So if the folder you want rclone to use has a URL which looks like https://drive.google.com/drive/folders/1XyfxxxxxxxxxxxxxxxxxxxxxxxxxKHCh
in the browser, then you use 1XyfxxxxxxxxxxxxxxxxxxxxxxxxxKHCh
as the root_folder_id
in the config.
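Since the ID is just the last path segment, you can pull it out with plain shell parameter expansion (the URL here reuses the placeholder ID from the example above):

```shell
# Folder URL as shown in the browser (placeholder ID from the example above)
url="https://drive.google.com/drive/folders/1XyfxxxxxxxxxxxxxxxxxxxxxxxxxKHCh"
# Everything after the last / is the root_folder_id
root_folder_id="${url##*/}"
echo "$root_folder_id"   # 1XyfxxxxxxxxxxxxxxxxxxxxxxxxxKHCh
```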
+NB folders under the "Computers" tab seem to be read only (drive gives a 500 error) when using rclone.
+There doesn't appear to be an API to discover the folder IDs of the "Computers" tab - please contact us if you know otherwise!
+Note also that rclone can't access any data under the "Backups" tab on the google drive web interface yet.
Service Account support
You can set up rclone with Google Drive in an unattended mode, i.e. not tied to a specific end-user Google account. This is useful when you want to synchronise files onto machines that don't have actively logged-in users, for example build machines.
-To create a service account and obtain its credentials, go to the Google Developer Console and use the "Create Credentials" button. After creating an account, a JSON file containing the Service Account's credentials will be downloaded onto your machine. These credentials are what rclone will use for authentication.
-To use a Service Account instead of OAuth2 token flow, enter the path to your Service Account credentials at the service_account_file
prompt and rclone won't use the browser based authentication flow.
+To use a Service Account instead of OAuth2 token flow, enter the path to your Service Account credentials at the service_account_file
prompt during rclone config
and rclone won't use the browser based authentication flow.
+Use case - Google Apps/G-suite account and individual Drive
+Let's say that you are the administrator of a Google Apps (old) or G-suite account. The goal is to store data on an individual's Drive account, who IS a member of the domain. We'll call the domain example.com, and the user foo@example.com.
+There are a few steps we need to go through to accomplish this:
+1. Create a service account for example.com
+
+- To create a service account and obtain its credentials, go to the Google Developer Console.
+- You must have a project - create one if you don't.
+- Then go to "IAM & admin" -> "Service Accounts".
+- Use the "Create Credentials" button. Fill in "Service account name" with something that identifies your client. "Role" can be empty.
+- Tick "Furnish a new private key" - select "Key type JSON".
+- Tick "Enable G Suite Domain-wide Delegation". This option makes "impersonation" possible, as documented here: Delegating domain-wide authority to the service account
+- These credentials are what rclone will use for authentication. If you ever need to remove access, press the "Delete service account key" button.
+
+
+
+2. Allowing API access to example.com Google Drive
+- Go to example.com's admin console
+- Go into "Security" (or use the search bar)
+- Select "Show more" and then "Advanced settings"
+- Select "Manage API client access" in the "Authentication" section
+- In the "Client Name" field enter the service account's "Client ID" - this can be found in the Developer Console under "IAM & Admin" -> "Service Accounts", then "View Client ID" for the newly created service account. It is a ~21 character numerical string.
+- In the next field, "One or More API Scopes", enter
https://www.googleapis.com/auth/drive
to grant access to Google Drive specifically.
+
+
+3. Configure rclone, assuming a new install
+rclone config
+
+n/s/q> n # New
+name>gdrive # Gdrive is an example name
+Storage> # Select the number shown for Google Drive
+client_id> # Can be left blank
+client_secret> # Can be left blank
+scope> # Select your scope, 1 for example
+root_folder_id> # Can be left blank
+service_account_file> /home/foo/myJSONfile.json # This is where the JSON file goes!
+y/n> # Auto config, y
+
+4. Verify that it's working
+
+rclone -v --drive-impersonate foo@example.com lsf gdrive:backup
+- The arguments do:
+
+-v
- verbose logging
+--drive-impersonate foo@example.com
- this is what does the magic, pretending to be user foo.
+lsf
- list files in a parsing friendly way
+gdrive:backup
- use the remote called gdrive, work in the folder named backup.
+
+
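Once verified, impersonation works with any rclone command. For example, a backup copy might look like this (the local path is illustrative):

```
rclone copy -v --drive-impersonate foo@example.com /home/foo/Documents gdrive:backup
```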
Team drives
If you want to configure the remote to point to a Google Team Drive then answer y
to the question Configure this as a team drive?
.
This will fetch the list of Team Drives from Google and allow you to configure which one you want to use. You can also type in a team drive ID if you prefer.
@@ -3990,10 +4757,13 @@ y/e/d> y
+--drive-impersonate user
+When using a service account, this instructs rclone to impersonate the user passed in.
--drive-list-chunk int
Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me
-Only show files that are shared with me
+Instructs rclone to operate on your "Shared with me" folder (where Google Drive lets you access the files and folders others have shared with you).
+This works both with the "list" (lsd, lsl, etc) and the "copy" commands (copy, sync, etc), and with all other commands too.
--drive-skip-gdocs
Skip google documents in all listings. If given, gdocs practically become invisible to rclone.
--drive-trashed-only
@@ -4002,20 +4772,27 @@ y/e/d> y
File size cutoff for switching to chunked upload. Default is 8 MB.
--drive-use-trash
Controls whether files are sent to the trash or deleted permanently. Defaults to true, namely sending files to the trash. Use --drive-use-trash=false
to delete files permanently instead.
+--drive-use-created-date
+Use the file creation date in place of the modification date. Defaults to false.
+Useful when downloading data and you want the creation date used in place of the last modified date.
+WARNING: This flag may have some unexpected consequences.
+When uploading to your drive, all files will be overwritten unless they haven't been modified since their creation. The inverse will occur while downloading. This side effect can be avoided by using the --checksum
flag.
+This feature was implemented to retain the capture date of photos as recorded by Google Photos. You will first need to check the "Create a Google Photos folder" option in your Google Drive settings. You can then copy or move the photos locally, using the date the image was taken (created) as the modification date.
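For example, a download that keeps the capture dates while sidestepping the overwrite issue could look like this (the remote name and paths are illustrative):

```
rclone copy -v --drive-use-created-date --checksum gdrive:"Google Photos" /local/photos
```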
Limitations
Drive has quite a lot of rate limiting. This causes rclone to be limited to transferring about 2 files per second. Individual files may be transferred much faster at 100s of MBytes/s but lots of small files can take a long time.
Server side copies are also subject to a separate rate limit. If you see User rate limit exceeded errors, wait at least 24 hours and retry. You can disable server side copies with --disable copy
to download and upload the files if you prefer.
+Limitations of Google Docs
+Google docs will appear as size -1 in rclone ls
and as size 0 in anything which uses the VFS layer, eg rclone mount
, rclone serve
.
+This is because rclone can't find out the size of the Google docs without downloading them.
+Google docs will transfer correctly with rclone sync
, rclone copy
etc as rclone knows to ignore the size when doing the transfer.
+However an unfortunate consequence of this is that you can't download Google docs using rclone mount
- you will get a 0 sized file. If you try again the doc may gain its correct size and be downloadable.
Duplicated files
Sometimes, for no reason I've been able to track down, drive will duplicate a file that rclone uploads. Drive, unlike all the other remotes, can have duplicated files.
Duplicated files cause problems with the syncing and you will see messages in the log about duplicates.
Use rclone dedupe
to fix duplicated files.
Note that this isn't just a problem with rclone, even Google Photos on Android duplicates files on drive sometimes.
Rclone appears to be re-copying files it shouldn't
-There are two possible reasons for rclone to recopy files which haven't changed to Google Drive.
-The first is the duplicated file issue above - run rclone dedupe
and check your logs for duplicate object or directory messages.
-The second is that sometimes Google reports different sizes for the Google Docs exports which will cause rclone to re-download Google Docs for no apparent reason. --ignore-size
is a not very satisfactory work-around for this if it is causing you a lot of problems.
-Google docs downloads sometimes fail with "Failed to copy: read X bytes expecting Y"
-This is the same problem as above. Google reports the google doc is one size, but rclone downloads a different size. Work-around with the --ignore-size
flag or wait for rclone to retry the download which it will.
+The most likely cause of this is the duplicated file issue above - run rclone dedupe
and check your logs for duplicate object or directory messages.
Making your own client_id
When you use rclone with Google Drive in its default configuration you are using rclone's client_id. This is shared between all the rclone users. There is a global rate limit, set by Google, on the number of queries per second that each client_id can make. rclone already has a high quota and I will continue to make sure it is high enough by contacting Google.
However you might find you get better performance making your own client_id if you are a heavy user. Or you may not, depending on exactly how Google has been raising rclone's rate limit.
@@ -4398,12 +5175,27 @@ b/p>
Here are the command line options specific to this cloud storage system.
--onedrive-chunk-size=SIZE
Above this size files will be chunked - must be a multiple of 320k. The default is 10MB. Note that the chunks will be buffered into memory.
---onedrive-upload-cutoff=SIZE
-Cutoff for switching to chunked upload - must be <= 100MB. The default is 10MB.
Limitations
Note that OneDrive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
There are quite a few characters that can't be in OneDrive file names. These can't occur on Windows platforms, but on non-Windows platforms they are common. Rclone will map these names to and from an identical looking unicode equivalent. For example, if a file has a ?
in it, it will be mapped to ？
instead.
The largest allowed file size is 10GiB (10,737,418,240 bytes).
+Versioning issue
+Every change in OneDrive causes the service to create a new version. This counts against a user's quota.
+For example changing the modification time of a file creates a second version, so the file is using twice the space.
+The copy
is the only rclone command affected by this as we copy the file and then afterwards set the modification time to match the source file.
+User Weropol has found a method to disable versioning on OneDrive
+
+- Open the settings menu by clicking on the gear symbol at the top of the OneDrive Business page.
+- Click Site settings.
+- Once on the Site settings page, navigate to Site Administration > Site libraries and lists.
+- Click Customize "Documents".
+- Click General Settings > Versioning Settings.
+- Under Document Version History select the option No versioning.
+Note: This will disable the creation of new file versions, but will not remove any previous versions. Your documents are safe.
+- Apply the changes by clicking OK.
+- Use rclone to upload or modify files. (I also use the --no-update-modtime flag)
+- Restore the versioning settings after using rclone. (Optional)
+
QingStor
Paths are specified as remote:bucket
(or remote:
for the lsd
command.) You may put subdirectories in too, eg remote:bucket/path/to/dir
.
Here is an example of making a QingStor configuration. First run
@@ -4505,7 +5297,7 @@ y/e/d> y
rclone supports multipart uploads with QingStor which means that it can upload files bigger than 5GB. Note that files uploaded with multipart upload don't have an MD5SUM.
Buckets and Zone
With QingStor you can list buckets (rclone lsd
) using any zone, but you can only access the content of a bucket from the zone it was created in. If you attempt to access a bucket from the wrong zone, you will get an error, incorrect zone, the bucket is not in 'XXX' zone
.
-Authentication
+Authentication
There are two ways to supply rclone
with a set of QingStor credentials. In order of precedence:
- Directly in the rclone configuration file (as configured by
rclone config
)
@@ -4901,17 +5693,23 @@ y/e/d> y
Key files should be unencrypted PEM-encoded private key files. For instance /home/$USER/.ssh/id_rsa
.
If you don't specify pass
or key_file
then rclone will attempt to contact an ssh-agent.
+If you set the --sftp-ask-password
option, rclone will prompt for a password when one is needed and none has been configured.
ssh-agent on macOS
Note that there seem to be various problems with using an ssh-agent on macOS due to recent changes in the OS. The most effective work-around seems to be to start an ssh-agent in each session, eg
eval `ssh-agent -s` && ssh-add -A
And then at the end of the session
eval `ssh-agent -k`
These commands can be used in scripts of course.
+Specific options
+Here are the command line options specific to this remote.
+--sftp-ask-password
+Ask for the SFTP password if needed when no password has been configured.
Modified time
Modified times are stored on the server to 1 second precision.
Modified times are used in syncing and are fully supported.
+Some SFTP servers disable setting/modifying the file modification time after upload (for example, certain configurations of ProFTPD with mod_sftp). If you are using one of these servers, you can set the option set_modtime = false
in your rclone backend configuration to disable this behaviour.
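A minimal sketch of such a backend section in rclone.conf (the remote name, host and user are placeholders):

```
[mysftp]
type = sftp
host = example.com
user = foo
set_modtime = false
```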
Limitations
-SFTP supports checksums if the same login has shell access and md5sum
or sha1sum
as well as echo
are in the remote's PATH.
+SFTP supports checksums if the same login has shell access and md5sum
or sha1sum
as well as echo
are in the remote's PATH. This remote check can be disabled by setting the configuration option disable_hashcheck
. This may be required if you're connecting to SFTP servers which are not under your control and on which the execution of remote commands is prohibited.
The only ssh agent supported under Windows is Putty's pageant.
The Go SSH library disables the use of the aes128-cbc cipher by default, due to security concerns. This can be re-enabled on a per-connection basis by setting the use_insecure_cipher
setting in the configuration file to true
. Further details on the insecurity of this cipher can be found [in this paper](http://www.isg.rhul.ac.uk/~kp/SandPfinal.pdf).
SFTP isn't supported under plan9 until this issue is fixed.
@@ -5146,7 +5944,7 @@ nounc = true
And use rclone like this:
rclone copy c:\src nounc:z:\dst
This will use UNC paths on c:\src
but not on z:\dst
. Of course this will cause problems if the absolute path length of a file exceeds 258 characters on z, so only use this option if you have to.
-Specific options
+Specific options
Here are the command line options specific to local storage
--copy-links, -L
Normally rclone will ignore symlinks or junction points (which behave like symlinks under Windows).
@@ -5198,6 +5996,196 @@ nounc = true
This flag disables warning messages on skipped symlinks or junction points, as you explicitly acknowledge that they should be skipped.
Changelog
+- v1.40 - 2018-03-19
+
+- New backends
+- Alias backend to create aliases for existing remote names (Fabian Möller)
+- New commands
+lsf
: list for parsing purposes (Jakub Tasiemski)
+
+- by default this is a simple non recursive list of files and directories
+- it can be configured to add more info in an easy to parse way
+
+serve restic
: for serving a remote as a Restic REST endpoint
+
+- This enables restic to use any backends that rclone can access
+- Thanks Alexander Neumann for help, patches and review
+
+rc
: enable the remote control of a running rclone
+
+- The running rclone must be started with --rc and related flags.
+- Currently there is support for bwlimit, and flushing for mount and cache.
+
+- New Features
+--max-delete
flag to add a delete threshold (Bjørn Erik Pedersen)
+- All backends now support RangeOption for ranged Open
+
+cat
: Use RangeOption for limited fetches to make more efficient
+cryptcheck
: make reading of nonce more efficient with RangeOption
+
+- serve http/webdav/restic
+
+- support SSL/TLS
+- add
--user
--pass
and --htpasswd
for authentication
+
+copy
/move
: detect file size change during copy/move and abort transfer (ishuah)
+cryptdecode
: added option to return encrypted file names. (ishuah)
+lsjson
: add --encrypted
to show encrypted name (Jakub Tasiemski)
+- Add
--stats-file-name-length
to specify the printed file name length for stats (Will Gunn)
+- Compile
+- Code base was shuffled and factored
+
+- backends moved into a backend directory
+- large packages split up
+- See the CONTRIBUTING.md doc for info as to what lives where now
+
+- Update to using go1.10 as the default go version
+- Implement daily full integration tests
+- Release
+- Include a source tarball and sign it and the binaries
+- Sign the git tags as part of the release process
+- Add .deb and .rpm packages as part of the build
+- Make a beta release for all branches on the main repo (but not pull requests)
+- Bug Fixes
+- config: fixes errors on non existing config by loading config file only on first access
+- config: retry saving the config after failure (Mateusz)
+- sync: when using
--backup-dir
don't delete files if we can't set their modtime
+
+- this fixes odd behaviour with Dropbox and
--backup-dir
+
+- fshttp: fix idle timeouts for HTTP connections
+serve http
: fix serving files with : in - fixes
+- Fix
--exclude-if-present
to ignore directories which it doesn't have permission for (Iakov Davydov)
+- Make accounting work properly with crypt and b2
+- remove
--no-traverse
flag because it is obsolete
+- Mount
+- Add
--attr-timeout
flag to control attribute caching in kernel
+
+- this now defaults to 0 which is correct but less efficient
+- see the mount docs for more info
+
+- Add
--daemon
flag to allow mount to run in the background (ishuah)
+- Fix: Return ENOSYS rather than EIO on attempted link
+
+- This fixes FileZilla accessing an rclone mount served over sftp.
+
+- Fix setting modtime twice
+- Mount tests now run on CI for Linux (mount & cmount)/Mac/Windows
+- Many bugs fixed in the VFS layer - see below
+- VFS
+- Many fixes for
--vfs-cache-mode
writes and above
+
+- Update cached copy if we know it has changed (fixes stale data)
+- Clean path names before using them in the cache
+- Disable cache cleaner if
--vfs-cache-poll-interval=0
+- Fill and clean the cache immediately on startup
+
+- Fix Windows opening every file when it stats the file
+- Fix applying modtime for an open Write Handle
+- Fix creation of files when truncating
+- Write 0 bytes when flushing unwritten handles to avoid race conditions in FUSE
+- Downgrade "poll-interval is not supported" message to Info
+- Make OpenFile and friends return EINVAL if O_RDONLY and O_TRUNC
+- Local
+- Downgrade "invalid cross-device link: trying copy" to debug
+- Make DirMove return fs.ErrorCantDirMove to allow fallback to Copy for cross device
+- Fix race conditions updating the hashes
+- Cache
+- Add support for polling - cache will update when remote changes on supported backends
+- Reduce log level for Plex api
+- Fix dir cache issue
+- Implement
--cache-db-wait-time
flag
+- Improve efficiency with RangeOption and RangeSeek
+- Fix dirmove with temp fs enabled
+- Notify vfs when using temp fs
+- Offline uploading
+- Remote control support for path flushing
+- Amazon cloud drive
+- Rclone no longer has any working keys - disable integration tests
+- Implement DirChangeNotify to notify cache/vfs/mount of changes
+- Azureblob
+- Don't check for bucket/container presence if listing was OK
+
+- this makes rclone do one less request per invocation
+
+- Improve accounting for chunked uploads
+- Backblaze B2
+- Don't check for bucket/container presence if listing was OK
+
+- this makes rclone do one less request per invocation
+
+- Box
+- Improve accounting for chunked uploads
+- Dropbox
+- Fix custom oauth client parameters
+- Google Cloud Storage
+- Don't check for bucket/container presence if listing was OK
+
+- this makes rclone do one less request per invocation
+
+- Google Drive
+- Migrate to api v3 (Fabian Möller)
+- Add scope configuration and root folder selection
+- Add
--drive-impersonate
for service accounts
+
+- thanks to everyone who tested, explored and contributed docs
+
+- Add
--drive-use-created-date
to use created date as modified date (nbuchanan)
+- Request the export formats only when required
+
+- This makes rclone quicker when there are no google docs
+
+- Fix finding paths with latin1 chars (a workaround for a drive bug)
+- Fix copying of a single Google doc file
+- Fix
--drive-auth-owner-only
to look in all directories
+- HTTP
+- Fix handling of directories with & in
+- Onedrive
+- Removed upload cutoff and always do session uploads
+
+- this stops the creation of multiple versions on business onedrive
+
+- Overwrite object size value with real size when reading file. (Victor)
+
+- this fixes oddities when onedrive misreports the size of images
+
+- Pcloud
+- Remove unused chunked upload flag and code
+- Qingstor
+- Don't check for bucket/container presence if listing was OK
+
+- this makes rclone do one less request per invocation
+
+- S3
+- Support hashes for multipart files (Chris Redekop)
+- Initial support for IBM COS (S3) (Giri Badanahatti)
+- Update docs to discourage use of v2 auth with CEPH and others
+- Don't check for bucket/container presence if listing was OK
+
+- this makes rclone do one less request per invocation
+
+- Fix server side copy and set modtime on files with + in
+- SFTP
+- Add option to disable remote hash check command execution (Jon Fautley)
+- Add
--sftp-ask-password
flag to prompt for password when needed (Leo R. Lundgren)
+- Add
set_modtime
configuration option
+- Fix following of symlinks
+- Fix reading config file outside of Fs setup
+- Fix reading $USER in username fallback not $HOME
+- Fix running under crontab - Use correct OS way of reading username
+- Swift
+- Fix refresh of authentication token
+
+- in v1.39 a bug was introduced which ignored new tokens - this fixes it
+
+- Fix extra HEAD transaction when uploading a new file
+- Don't check for bucket/container presence if listing was OK
+
+- this makes rclone do one less request per invocation
+
+- Webdav
+- Add new time formats to support mydrive.ch and others
+
- v1.39 - 2017-12-23
- New backends
@@ -6276,6 +7264,7 @@ Server B> rclone copy /tmp/whatever remote:Backup
mkdir -p /etc/ssl/certs/
curl -o /etc/ssl/certs/ca-certificates.crt https://raw.githubusercontent.com/bagder/ca-bundle/master/ca-bundle.crt
ntpclient -s -h pool.ntp.org
+The two environment variables SSL_CERT_FILE
and SSL_CERT_DIR
, mentioned in the x509 package, provide an additional way to supply the SSL root certificates.
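For example, assuming the bundle was downloaded to the path used above, you could set:

```shell
# Point Go's TLS stack (and therefore rclone) at the CA bundle
export SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt
export SSL_CERT_DIR=/etc/ssl/certs
```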
Note that you may need to add the --insecure
option to the curl
command line if it doesn't work without.
curl --insecure -o /etc/ssl/certs/ca-certificates.crt https://raw.githubusercontent.com/bagder/ca-bundle/master/ca-bundle.crt
Rclone gives Failed to load config file: function not implemented error
@@ -6289,6 +7278,7 @@ ntpclient -s -h pool.ntp.org
dig www.googleapis.com # resolve using your default DNS
dig www.googleapis.com @8.8.8.8 # resolve with Google's DNS server
If you are using systemd-resolved
(default on Arch Linux), ensure it is at version 233 or higher. Previous releases contain a bug which causes not all domains to be resolved properly.
+Additionally, the GODEBUG=netdns=
environment variable can be used to influence the Go resolver decision. This can also help resolve certain issues with DNS resolution. See the name resolution section in the Go docs.
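For example, to force the pure-Go resolver for a single run (any rclone command works here; lsd is just an example):

```
GODEBUG=netdns=go rclone lsd remote:
```

Setting netdns=cgo selects the cgo-based resolver instead.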
License
This is free software under the terms of the MIT license (check the COPYING file included with the source code).
Copyright (C) 2012 by Nick Craig-Wood https://www.craig-wood.com/nick/
@@ -6385,7 +7375,7 @@ THE SOFTWARE.
- Steven Lu tacticalazn@gmail.com
- Sjur Fredriksen sjurtf@ifi.uio.no
- Ruwbin hubus12345@gmail.com
-- Fabian Möller fabianm88@gmail.com
+- Fabian Möller fabianm88@gmail.com f.moeller@nynex.de
- Edward Q. Bridges github@eqbridges.com
- Vasiliy Tolstov v.tolstov@selfip.ru
- Harshavardhana harsha@minio.io
@@ -6395,7 +7385,7 @@ THE SOFTWARE.
- John Papandriopoulos jpap@users.noreply.github.com
- Zhiming Wang zmwangx@gmail.com
- Andy Pilate cubox@cubox.me
-- Oliver Heyme olihey@googlemail.com
+- Oliver Heyme olihey@googlemail.com olihey@users.noreply.github.com
- wuyu wuyu@yunify.com
- Andrei Dragomir adragomi@adobe.com
- Christian Brüggemann mail@cbruegg.com
@@ -6419,8 +7409,7 @@ THE SOFTWARE.
- Pierre Carlson mpcarl@us.ibm.com
- Ernest Borowski er.borowski@gmail.com
- Remus Bunduc remus.bunduc@gmail.com
-- Iakov Davydov iakov.davydov@unil.ch
-- Fabian Möller f.moeller@nynex.de
+- Iakov Davydov iakov.davydov@unil.ch dav05.gith@myths.ru
- Jakub Tasiemski tasiemski@gmail.com
- David Minor dminor@saymedia.com
- Tim Cooijmans cooijmans.tim@gmail.com
@@ -6430,6 +7419,24 @@ THE SOFTWARE.
- Jon Fautley jon@dead.li
- lewapm 32110057+lewapm@users.noreply.github.com
- Yassine Imounachen yassine256@gmail.com
+- Chris Redekop chris-redekop@users.noreply.github.com
+- Jon Fautley jon@adenoid.appstal.co.uk
+- Will Gunn WillGunn@users.noreply.github.com
+- Lucas Bremgartner lucas@bremis.ch
+- Jody Frankowski jody.frankowski@gmail.com
+- Andreas Roussos arouss1980@gmail.com
+- nbuchanan nbuchanan@utah.gov
+- Durval Menezes rclone@durval.com
+- Victor vb-github@viblo.se
+- Mateusz pabian.mateusz@gmail.com
+- Daniel Loader spicypixel@gmail.com
+- David0rk davidork@gmail.com
+- Alexander Neumann alexander@bumpern.de
+- Giri Badanahatti gbadanahatti@us.ibm.com@Giris-MacBook-Pro.local
+- Leo R. Lundgren leo@finalresort.org
+- wolfv wolfv6@users.noreply.github.com
+- Dave Pedu dave@davepedu.com
+- Stefan Lindblom lindblom@spotify.com
Forum
diff --git a/MANUAL.md b/MANUAL.md
index c82059a48..b7da0e536 100644
--- a/MANUAL.md
+++ b/MANUAL.md
@@ -1,6 +1,6 @@
% rclone(1) User Manual
% Nick Craig-Wood
-% Dec 23, 2017
+% Mar 19, 2018
Rclone
======
@@ -22,6 +22,7 @@ Rclone is a command line program to sync files and directories to and from:
* Google Drive
* HTTP
* Hubic
+* IBM COS S3
* Memset Memstore
* Microsoft Azure Blob Storage
* Microsoft OneDrive
@@ -79,7 +80,7 @@ run `rclone -h`.
## Script installation ##
-To install rclone on Linux/MacOs/BSD systems, run:
+To install rclone on Linux/macOS/BSD systems, run:
curl https://rclone.org/install.sh | sudo bash
@@ -183,6 +184,7 @@ option:
See the following for detailed instructions for
+ * [Alias](https://rclone.org/alias/)
* [Amazon Drive](https://rclone.org/amazonclouddrive/)
* [Amazon S3](https://rclone.org/s3/)
* [Backblaze B2](https://rclone.org/b2/)
@@ -236,7 +238,6 @@ Enter an interactive configuration session.
### Synopsis
-
Enter an interactive configuration session where you can setup new
remotes and manage existing ones. You may also set or remove a
password to protect your configuration.
@@ -259,7 +260,6 @@ Copy files from source to dest, skipping already copied
### Synopsis
-
Copy the source to the destination. Doesn't transfer
unchanged files, testing by size and modification time or
MD5SUM. Doesn't delete files from the destination.
@@ -296,9 +296,6 @@ written a trailing / - meaning "copy the contents of this directory".
This applies to all commands and whether you are talking about the
source or destination.
-See the `--no-traverse` option for controlling whether rclone lists
-the destination directory or not.
-
```
rclone copy source:path dest:path [flags]
@@ -317,7 +314,6 @@ Make source and dest identical, modifying destination only.
### Synopsis
-
Sync the source to the destination, changing the destination
only. Doesn't transfer unchanged files, testing by size and
modification time or MD5SUM. Destination is updated to match
@@ -355,7 +351,6 @@ Move files from source to dest.
### Synopsis
-
Moves the contents of the source directory to the destination
directory. Rclone will error if the source and destination overlap and
the remote does not support a server side directory move operation.
@@ -394,7 +389,6 @@ Remove the contents of path.
### Synopsis
-
Remove the contents of path. Unlike `purge` it obeys include/exclude
filters so can be used to selectively delete files.
@@ -430,7 +424,6 @@ Remove the path and all of its contents.
### Synopsis
-
Remove the path and all of its contents. Note that this does not obey
include/exclude filters - everything will be removed. Use `delete` if
you want to selectively delete files.
@@ -452,7 +445,6 @@ Make the path if it doesn't already exist.
### Synopsis
-
Make the path if it doesn't already exist.
```
@@ -472,7 +464,6 @@ Remove the path if empty.
### Synopsis
-
Remove the path. Note that you can't remove a path with
objects in it, use purge for that.
@@ -493,7 +484,6 @@ Checks the files in the source and destination match.
### Synopsis
-
Checks the files in the source and destination match. It compares
sizes and hashes (MD5 or SHA1) and logs a report of files which don't
match. It doesn't alter the source or destination.
@@ -520,12 +510,32 @@ rclone check source:path dest:path [flags]
## rclone ls
-List all the objects in the path with size and path.
+List the objects in the path with size and path.
### Synopsis
-List all the objects in the path with size and path.
+Lists the objects in the source path to standard output in a human
+readable format with size and path. Recurses by default.
+
+Any of the filtering options can be applied to this command.
+
+There are several related list commands
+
+ * `ls` to list size and path of objects only
+ * `lsl` to list modification time, size and path of objects only
+ * `lsd` to list directories only
+ * `lsf` to list objects and directories in easy to parse format
+ * `lsjson` to list objects and directories in JSON format
+
+`ls`,`lsl`,`lsd` are designed to be human readable.
+`lsf` is designed to be human and machine readable.
+`lsjson` is designed to be machine readable.
+
+Note that `ls`,`lsl`,`lsd` all recurse by default - use "--max-depth 1" to stop the recursion.
+
+The other list commands `lsf`,`lsjson` do not recurse by default - use "-R" to make them recurse.
+
```
rclone ls remote:path [flags]
@@ -544,7 +554,27 @@ List all directories/containers/buckets in the path.
### Synopsis
-List all directories/containers/buckets in the path.
+Lists the directories in the source path to standard output. Recurses
+by default.
+
+Any of the filtering options can be applied to this command.
+
+There are several related list commands
+
+ * `ls` to list size and path of objects only
+ * `lsl` to list modification time, size and path of objects only
+ * `lsd` to list directories only
+ * `lsf` to list objects and directories in easy to parse format
+ * `lsjson` to list objects and directories in JSON format
+
+`ls`,`lsl`,`lsd` are designed to be human readable.
+`lsf` is designed to be human and machine readable.
+`lsjson` is designed to be machine readable.
+
+Note that `ls`,`lsl`,`lsd` all recurse by default - use "--max-depth 1" to stop the recursion.
+
+The other list commands `lsf`,`lsjson` do not recurse by default - use "-R" to make them recurse.
+
```
rclone lsd remote:path [flags]
@@ -558,12 +588,32 @@ rclone lsd remote:path [flags]
## rclone lsl
-List all the objects path with modification time, size and path.
+List the objects in path with modification time, size and path.
### Synopsis
-List all the objects path with modification time, size and path.
+Lists the objects in the source path to standard output in a human
+readable format with modification time, size and path. Recurses by default.
+
+Any of the filtering options can be applied to this command.
+
+There are several related list commands
+
+ * `ls` to list size and path of objects only
+ * `lsl` to list modification time, size and path of objects only
+ * `lsd` to list directories only
+ * `lsf` to list objects and directories in easy to parse format
+ * `lsjson` to list objects and directories in JSON format
+
+`ls`,`lsl`,`lsd` are designed to be human readable.
+`lsf` is designed to be human and machine readable.
+`lsjson` is designed to be machine readable.
+
+Note that `ls`,`lsl`,`lsd` all recurse by default - use "--max-depth 1" to stop the recursion.
+
+The other list commands `lsf`,`lsjson` do not recurse by default - use "-R" to make them recurse.
+
```
rclone lsl remote:path [flags]
@@ -582,7 +632,6 @@ Produces an md5sum file for all the objects in the path.
### Synopsis
-
Produces an md5sum file for all the objects in the path. This
is in the same format as the standard md5sum tool produces.
@@ -604,7 +653,6 @@ Produces an sha1sum file for all the objects in the path.
### Synopsis
-
Produces an sha1sum file for all the objects in the path. This
is in the same format as the standard sha1sum tool produces.
@@ -625,7 +673,6 @@ Prints the total size and number of objects in remote:path.
### Synopsis
-
Prints the total size and number of objects in remote:path.
```
@@ -644,7 +691,6 @@ Show the version number.
### Synopsis
-
Show the version number.
```
@@ -664,7 +710,6 @@ Clean up the remote if possible
### Synopsis
-
Clean up the remote if possible. Empty the trash or delete old file
versions. Not supported by all remotes.
@@ -686,7 +731,6 @@ Interactively find duplicate files and delete/rename them.
### Synopsis
-
By default `dedupe` interactively finds duplicate files and offers to
delete all but one or rename them to be different. Only useful with
Google Drive which can have duplicate file names.
@@ -785,7 +829,6 @@ Remote authorization.
### Synopsis
-
Remote authorization. Used to authorize a remote or headless
rclone from a machine with a browser - use as instructed by
rclone config.
@@ -807,7 +850,6 @@ Print cache stats for a remote
### Synopsis
-
Print cache stats for a remote in JSON format
@@ -828,7 +870,6 @@ Concatenates any files and sends them to stdout.
### Synopsis
-
rclone cat sends any files to standard output.
You can use it like this to output a single file
@@ -871,7 +912,6 @@ Create a new remote with name, type and options.
### Synopsis
-
Create a new remote of <name> with <type> and options. The options
should be passed in pairs of <key> <value>.
@@ -897,7 +937,6 @@ Delete an existing remote .
### Synopsis
-
Delete an existing remote .
```
@@ -916,7 +955,6 @@ Dump the config file as JSON.
### Synopsis
-
Dump the config file as JSON.
```
@@ -935,7 +973,6 @@ Enter an interactive configuration session.
### Synopsis
-
Enter an interactive configuration session where you can setup new
remotes and manage existing ones. You may also set or remove a
password to protect your configuration.
@@ -957,7 +994,6 @@ Show path of configuration file in use.
### Synopsis
-
Show path of configuration file in use.
```
@@ -977,7 +1013,6 @@ Update password in an existing remote.
### Synopsis
-
Update an existing remote's password. The password
should be passed in pairs of <key> <password>.
@@ -1002,7 +1037,6 @@ List in JSON format all the providers and options.
### Synopsis
-
List in JSON format all the providers and options.
```
@@ -1021,7 +1055,6 @@ Print (decrypted) config file, or the config for a single remote.
### Synopsis
-
Print (decrypted) config file, or the config for a single remote.
```
@@ -1041,7 +1074,6 @@ Update options in an existing remote.
### Synopsis
-
Update an existing remote's options. The options should be passed
in pairs of <key> <value>.
@@ -1067,7 +1099,6 @@ Copy files from source to dest, skipping already copied
### Synopsis
-
If source:path is a file or directory then it copies it to a file or
directory named dest:path.
@@ -1112,7 +1143,6 @@ Cryptcheck checks the integrity of a crypted remote.
### Synopsis
-
rclone cryptcheck checks a remote against a crypted remote. This is
the equivalent of running rclone check, but able to check the
checksums of the crypted remote.
@@ -1154,14 +1184,17 @@ Cryptdecode returns unencrypted file names.
### Synopsis
-
rclone cryptdecode returns unencrypted file names when provided with
a list of encrypted file names. List limit is 10 items.
+If you supply the --reverse flag, it will return encrypted file names.
+
use it like this
rclone cryptdecode encryptedremote: encryptedfilename1 encryptedfilename2
+ rclone cryptdecode --reverse encryptedremote: filename1 filename2
+
```
rclone cryptdecode encryptedremote: encryptedfilename [flags]
@@ -1170,7 +1203,8 @@ rclone cryptdecode encryptedremote: encryptedfilename [flags]
### Options
```
- -h, --help help for cryptdecode
+ -h, --help help for cryptdecode
+ --reverse Reverse cryptdecode, encrypts filenames
```
## rclone dbhashsum
@@ -1180,7 +1214,6 @@ Produces a Dropbox hash file for all the objects in the path.
### Synopsis
-
Produces a Dropbox hash file for all the objects in the path. The
hashes are calculated according to [Dropbox content hash
rules](https://www.dropbox.com/developers/reference/content-hash).
@@ -1204,7 +1237,6 @@ Output completion script for a given shell.
### Synopsis
-
Generates a shell completion script for rclone.
Run with --help to list the supported shells.
@@ -1222,7 +1254,6 @@ Output bash completion script for rclone.
### Synopsis
-
Generates a bash shell autocompletion script for rclone.
This writes to /etc/bash_completion.d/rclone by default so will
@@ -1256,7 +1287,6 @@ Output zsh completion script for rclone.
### Synopsis
-
Generates a zsh autocompletion script for rclone.
This writes to /usr/share/zsh/vendor-completions/_rclone by default so will
@@ -1290,7 +1320,6 @@ Output markdown docs for rclone to the directory supplied.
### Synopsis
-
This produces markdown docs for the rclone commands to the directory
supplied. These are in a format suitable for hugo to render into the
rclone.org website.
@@ -1312,7 +1341,6 @@ List all the remotes in the config file.
### Synopsis
-
rclone listremotes lists all the available remotes from the config file.
When used with the -l flag it lists the types too.
@@ -1329,13 +1357,89 @@ rclone listremotes [flags]
-l, --long Show the type as well as names.
```
+## rclone lsf
+
+List directories and objects in remote:path formatted for parsing
+
+### Synopsis
+
+
+List the contents of the source path (directories and objects) to
+standard output in a form which is easy to parse by scripts. By
+default this will just be the names of the objects and directories,
+one per line. The directories will have a / suffix.
+
+Use the --format option to control what gets listed. By default this
+is just the path, but you can use these parameters to control the
+output:
+
+ p - path
+ s - size
+ t - modification time
+ h - hash
+
+So if you wanted the path, size and modification time, you would use
+--format "pst", or maybe --format "tsp" to put the path last.
+
+If you specify "h" in the format you will get the MD5 hash by default,
+use the "--hash" flag to change which hash you want. Note that this
+can be returned as an empty string if it isn't available on the object
+(and for directories), "ERROR" if there was an error reading it from
+the object and "UNSUPPORTED" if that object does not support that hash
+type.
+
+For example to emulate the md5sum command you can use
+
+ rclone lsf -R --hash MD5 --format hp --separator " " --files-only .
+
+(Though "rclone md5sum ." is an easier way of typing this.)
+
+By default the separator is ";". This can be changed with the
+--separator flag. Note that separators aren't escaped in the path so
+putting it last is a good strategy.
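
For scripts consuming this output, splitting from the left a fixed
number of times keeps an unescaped separator inside the path intact.
A minimal Python sketch (the sample line and helper name are
illustrative, not part of rclone):

```python
# Hypothetical line from: rclone lsf --format "tsp" --separator ";" remote:path
line = "2017-05-31 16:15:57;6;books;old/file.txt"

def parse_lsf(line, nfields=3, sep=";"):
    # With the path last, split at most (nfields - 1) times so any ";"
    # inside the path itself survives in the final field.
    return line.split(sep, nfields - 1)

mtime, size, path = parse_lsf(line)
print(path)  # books;old/file.txt
```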
+
+Any of the filtering options can be applied to this command.
+
+There are several related list commands
+
+ * `ls` to list size and path of objects only
+ * `lsl` to list modification time, size and path of objects only
+ * `lsd` to list directories only
+ * `lsf` to list objects and directories in easy to parse format
+ * `lsjson` to list objects and directories in JSON format
+
+`ls`,`lsl`,`lsd` are designed to be human readable.
+`lsf` is designed to be human and machine readable.
+`lsjson` is designed to be machine readable.
+
+Note that `ls`,`lsl`,`lsd` all recurse by default - use "--max-depth 1" to stop the recursion.
+
+The other list commands `lsf`,`lsjson` do not recurse by default - use "-R" to make them recurse.
+
+
+```
+rclone lsf remote:path [flags]
+```
+
+### Options
+
+```
+ -d, --dir-slash Append a slash to directory names. (default true)
+ --dirs-only Only list directories.
+ --files-only Only list files.
+ -F, --format string Output format - see help for details (default "p")
+ --hash h Use this hash when h is used in the format MD5|SHA-1|DropboxHash (default "MD5")
+ -h, --help help for lsf
+ -R, --recursive Recurse into the listing.
+ -s, --separator string Separator for the items in the format. (default ";")
+```
+
## rclone lsjson
List directories and objects in the path in JSON format.
### Synopsis
-
List directories and objects in the path in JSON format.
The output is an array of Items, where each Item looks like this
@@ -1349,19 +1453,45 @@ The output is an array of Items, where each Item looks like this
"IsDir" : false,
"ModTime" : "2017-05-31T16:15:57.034468261+01:00",
"Name" : "file.txt",
+ "Encrypted" : "v0qpsdq8anpci8n929v3uu9338",
"Path" : "full/path/goes/here/file.txt",
"Size" : 6
}
-If --hash is not specified the the Hashes property won't be emitted.
+If --hash is not specified the Hashes property won't be emitted.
If --no-modtime is specified then ModTime will be blank.
+If --encrypted is not specified the Encrypted property won't be emitted.
+
+The Path field will only show folders below the remote path being listed.
+If "remote:path" contains the file "subfolder/file.txt", the Path for "file.txt"
+will be "subfolder/file.txt", not "remote:path/subfolder/file.txt".
+When used without --recursive the Path will always be the same as Name.
+
The time is in RFC3339 format with nanosecond precision.
The whole output can be processed as a JSON blob, or alternatively it
can be processed line by line as each item is written one to a line.
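
A sketch of line-by-line processing in Python (the captured output
below is hypothetical; item lines carry the enclosing array's trailing
commas, which are stripped before decoding, and the nanosecond
fraction is trimmed to microseconds for Python's datetime):

```python
import json
import re
from datetime import datetime

# Hypothetical captured output of: rclone lsjson remote:path
raw = """[
{"Path":"file.txt","Name":"file.txt","Size":6,"IsDir":false,"ModTime":"2017-05-31T16:15:57.034468261+01:00"},
{"Path":"sub","Name":"sub","Size":-1,"IsDir":true,"ModTime":"2017-05-31T16:15:57.034468261+01:00"}
]"""

items = []
for line in raw.splitlines():
    line = line.strip().rstrip(",")      # drop the array punctuation
    if line in ("[", "]", ""):
        continue
    items.append(json.loads(line))

# RFC3339 with nanoseconds: keep only 6 fractional digits before parsing
ts = re.sub(r"\.(\d{6})\d+", r".\1", items[0]["ModTime"])
mod = datetime.fromisoformat(ts)
```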
+Any of the filtering options can be applied to this command.
+
+There are several related list commands
+
+ * `ls` to list size and path of objects only
+ * `lsl` to list modification time, size and path of objects only
+ * `lsd` to list directories only
+ * `lsf` to list objects and directories in easy to parse format
+ * `lsjson` to list objects and directories in JSON format
+
+`ls`,`lsl`,`lsd` are designed to be human readable.
+`lsf` is designed to be human and machine readable.
+`lsjson` is designed to be machine readable.
+
+Note that `ls`,`lsl`,`lsd` all recurse by default - use "--max-depth 1" to stop the recursion.
+
+The other list commands `lsf`,`lsjson` do not recurse by default - use "-R" to make them recurse.
+
```
rclone lsjson remote:path [flags]
@@ -1370,6 +1500,7 @@ rclone lsjson remote:path [flags]
### Options
```
+ -M, --encrypted Show the encrypted names.
--hash Include hashes in the output (may take longer).
-h, --help help for lsjson
--no-modtime Don't read the modification time (can speed things up).
@@ -1383,7 +1514,6 @@ Mount the remote as a mountpoint. **EXPERIMENTAL**
### Synopsis
-
rclone mount allows Linux, FreeBSD, macOS and Windows to
mount any of Rclone's cloud storage systems as a file system with
FUSE.
@@ -1411,7 +1541,7 @@ When that happens, it is the user's responsibility to stop the mount manually wi
# OS X
umount /path/to/local/mount
-### Installing on Windows ###
+### Installing on Windows
To run rclone mount on Windows, you will need to
download and install [WinFsp](http://www.secfs.net/winfsp/).
@@ -1424,7 +1554,7 @@ uses combination with
packages are by Bill Zissimopoulos who was very helpful during the
implementation of rclone mount for Windows.
-#### Windows caveats ####
+#### Windows caveats
Note that drives created as Administrator are not visible by other
accounts (including the account that was elevated as
@@ -1437,13 +1567,16 @@ The easiest way around this is to start the drive from a normal
command prompt. It is also possible to start a drive from the SYSTEM
account (using [the WinFsp.Launcher
infrastructure](https://github.com/billziss-gh/winfsp/wiki/WinFsp-Service-Architecture))
-which creates drives accessible for everyone on the system.
+which creates drives accessible for everyone on the system or
+alternatively using [the nssm service manager](https://nssm.cc/usage).
-### Limitations ###
+### Limitations
-This can only write files seqentially, it can only seek when reading.
-This means that many applications won't work with their files on an
-rclone mount.
+Without the use of "--vfs-cache-mode" this can only write files
+sequentially, it can only seek when reading. This means that many
+applications won't work with their files on an rclone mount without
+"--vfs-cache-mode writes" or "--vfs-cache-mode full". See the [File
+Caching](#file-caching) section for more info.
The bucket based remotes (eg Swift, S3, Google Compute Storage, B2,
Hubic) won't work from the root - you will need to specify a bucket,
@@ -1455,29 +1588,43 @@ the directory cache.
Only supported on Linux, FreeBSD, OS X and Windows at the moment.
-### rclone mount vs rclone sync/copy ##
+### rclone mount vs rclone sync/copy
File systems expect things to be 100% reliable, whereas cloud storage
systems are a long way from 100% reliable. The rclone sync/copy
commands cope with this with lots of retries. However rclone mount
can't use retries in the same way without making local copies of the
-uploads. This might happen in the future, but for the moment rclone
-mount won't do that, so will be less reliable than the rclone command.
+uploads. Look at the **EXPERIMENTAL** [file caching](#file-caching)
+section for solutions to make mount more reliable.
-### Filters ###
+### Attribute caching
+
+You can use the flag --attr-timeout to set the time the kernel caches
+the attributes (size, modification time etc) for directory entries.
+
+The default is 0s - no caching - which is recommended for filesystems
+which can change outside the control of the kernel.
+
+If you set it higher ('1s' or '1m' say) then the kernel will call back
+to rclone less often making it more efficient, however there may be
+strange effects when files change on the remote.
+
+This is the same as setting the attr_timeout option in mount.fuse.
+
+### Filters
Note that all the rclone filters can be used to select a subset of the
files to be visible in the mount.
-### systemd ###
+### systemd
When running rclone mount as a systemd service, it is possible
-to use Type=notify. In this case the service will enter the started state
+to use Type=notify. In this case the service will enter the started state
after the mountpoint has been successfully set up.
Units having the rclone mount service specified as a requirement
will see all files and folders immediately in this mode.
-### Directory Cache ###
+### Directory Cache
Using the `--dir-cache-time` flag, you can set how long a
directory should be considered up to date and not refreshed from the
@@ -1492,12 +1639,21 @@ like this:
kill -SIGHUP $(pidof rclone)
-### File Caching ###
+If you configure rclone with a [remote control](/rc) then you can use
+rclone rc to flush the whole directory cache:
+
+ rclone rc vfs/forget
+
+Or individual files or directories:
+
+ rclone rc vfs/forget file=path/to/file dir=path/to/dir
+
+### File Caching
**NB** File caching is **EXPERIMENTAL** - use with care!
These flags control the VFS file caching options. The VFS layer is
-used by rclone mount to make a cloud storage systm work more like a
+used by rclone mount to make a cloud storage system work more like a
normal file system.
You'll need to enable VFS caching if you want, for example, to read
@@ -1506,7 +1662,7 @@ and write simultaneously to a file. See below for more details.
Note that the VFS cache works in addition to the cache backend and you
may find that you need one or the other or both.
- --vfs-cache-dir string Directory rclone will use for caching.
+ --cache-dir string Directory rclone will use for caching.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
@@ -1525,7 +1681,7 @@ closed so if rclone is quit or dies with open files then these won't
get written back to the remote. However they will still be in the on
disk cache.
-#### --vfs-cache-mode off ####
+#### --vfs-cache-mode off
In this mode the cache will read directly from the remote and write
directly to the remote without caching anything on disk.
@@ -1540,7 +1696,7 @@ This will mean some operations are not possible
* Open modes O_APPEND, O_TRUNC are ignored
* If an upload fails it can't be retried
-#### --vfs-cache-mode minimal ####
+#### --vfs-cache-mode minimal
This is very similar to "off" except that files opened for read AND
write will be buffered to disks. This means that files opened for
@@ -1553,7 +1709,7 @@ These operations are not possible
* Files opened for write only will ignore O_APPEND, O_TRUNC
* If an upload fails it can't be retried
-#### --vfs-cache-mode writes ####
+#### --vfs-cache-mode writes
In this mode files opened for read only are still read directly from
the remote, write only and read/write files are buffered to disk
@@ -1563,14 +1719,14 @@ This mode should support all normal file system operations.
If an upload fails it will be retried up to --low-level-retries times.
-#### --vfs-cache-mode full ####
+#### --vfs-cache-mode full
In this mode all reads and writes are buffered to and from disk. When
a file is opened for read it will be downloaded in its entirety first.
This may be appropriate for your needs, or you may prefer to look at
the cache backend which does a much more sophisticated job of caching,
-including caching directory heirachies and chunks of files.q
+including caching directory hierarchies and chunks of files.
In this mode, unlike the others, when a file is written to the disk,
it will be kept on the disk after it is written to the remote. It
@@ -1592,6 +1748,8 @@ rclone mount remote:path /path/to/mountpoint [flags]
--allow-non-empty Allow mounting over a non-empty directory.
--allow-other Allow access to other users.
--allow-root Allow access to root user.
+ --attr-timeout duration Time for which file/directory attributes are cached.
+ --daemon Run mount as a daemon (background mode).
--debug-fuse Debug the FUSE internals - needs -v.
--default-permissions Makes kernel enforce access control based on the file mode.
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
@@ -1620,7 +1778,6 @@ Move file or directory from source to dest.
### Synopsis
-
If source:path is a file or directory then it moves it to a file or
directory named dest:path.
@@ -1668,11 +1825,12 @@ Explore a remote with a text based user interface.
### Synopsis
-
This displays a text based user interface allowing the navigation of a
remote. It is most useful for answering the question - "What is using
all my disk space?".
+
+
To make the user interface it first scans the entire remote given and
builds an in memory representation. rclone ncdu can be used during
this scanning phase and you will see it building up the directory
@@ -1710,7 +1868,6 @@ Obscure password for use in the rclone.conf
### Synopsis
-
Obscure password for use in the rclone.conf
```
@@ -1723,6 +1880,34 @@ rclone obscure password [flags]
-h, --help help for obscure
```
+## rclone rc
+
+Run a command against a running rclone.
+
+### Synopsis
+
+
+This runs a command against a running rclone. By default it will use
+that specified in the --rc-addr command line flag.
+
+Arguments should be passed in as parameter=value.
+
+The result will be returned as a JSON object by default.
+
+Use "rclone rc list" to see a list of all possible commands.
+
+```
+rclone rc commands parameter [flags]
+```
+
+### Options
+
+```
+ -h, --help help for rc
+ --no-output If set don't output the JSON result.
+ --url string URL to connect to rclone remote control. (default "http://localhost:5572/")
+```
+
## rclone rcat
Copies standard input to file on remote.
@@ -1730,7 +1915,6 @@ Copies standard input to file on remote.
### Synopsis
-
rclone rcat reads from standard input (stdin) and copies it to a
single remote file.
@@ -1770,7 +1954,6 @@ Remove empty directories under the path.
### Synopsis
-
This removes any empty directories (or directories that only contain
empty directories) under the path that it finds, including the path if
it has nothing in.
@@ -1799,7 +1982,6 @@ Serve a remote over a protocol.
### Synopsis
-
rclone serve is used to serve a remote over a given protocol. This
command requires the use of a subcommand to specify the protocol, eg
@@ -1824,15 +2006,10 @@ Serve the remote over HTTP.
### Synopsis
-
rclone serve http implements a basic web server to serve the remote
over HTTP. This can be viewed in a web browser or you can make a
remote of type http read from it.
-Use --addr to specify which IP address and port the server should
-listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all
-IPs. By default it only listens on localhost.
-
You can use the filter flags (eg --include, --exclude) to control what
is served.
@@ -1841,7 +2018,56 @@ The server will log errors. Use -v to see access logs.
--bwlimit will be respected for file transfers. Use --stats to
control the stats printing.
-### Directory Cache ###
+### Server options
+
+Use --addr to specify which IP address and port the server should
+listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all
+IPs. By default it only listens on localhost.
+
+If you set --addr to listen on a public or LAN accessible IP address
+then using Authentication is advised - see the next section for info.
+
+--server-read-timeout and --server-write-timeout can be used to
+control the timeouts on the server. Note that this is the total time
+for a transfer.
+
+--max-header-bytes controls the maximum number of bytes the server will
+accept in the HTTP header.
+
+#### Authentication
+
+By default this will serve files without needing a login.
+
+You can either use an htpasswd file which can take lots of users, or
+set a single username and password with the --user and --pass flags.
+
+Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is
+in standard apache format and supports MD5, SHA1 and BCrypt for basic
+authentication. Bcrypt is recommended.
+
+To create an htpasswd file:
+
+ touch htpasswd
+ htpasswd -B htpasswd user
+ htpasswd -B htpasswd anotherUser
+
+The password file can be updated while rclone is running.
+
+Use --realm to set the authentication realm.
+
+#### SSL/TLS
+
+By default this will serve over http. If you want you can serve over
+https. You will need to supply the --cert and --key flags. If you
+wish to do client side certificate validation then you will need to
+supply --client-ca also.
+
+--cert should be either a PEM encoded certificate or a concatenation
+of that with the CA certificate. --key should be the PEM encoded
+private key and --client-ca should be the PEM encoded client
+certificate authority certificate.
+
+### Directory Cache
Using the `--dir-cache-time` flag, you can set how long a
directory should be considered up to date and not refreshed from the
@@ -1856,12 +2082,21 @@ like this:
kill -SIGHUP $(pidof rclone)
-### File Caching ###
+If you configure rclone with a [remote control](/rc) then you can use
+rclone rc to flush the whole directory cache:
+
+ rclone rc vfs/forget
+
+Or individual files or directories:
+
+ rclone rc vfs/forget file=path/to/file dir=path/to/dir
+
+### File Caching
**NB** File caching is **EXPERIMENTAL** - use with care!
These flags control the VFS file caching options. The VFS layer is
-used by rclone mount to make a cloud storage systm work more like a
+used by rclone mount to make a cloud storage system work more like a
normal file system.
You'll need to enable VFS caching if you want, for example, to read
@@ -1870,7 +2105,7 @@ and write simultaneously to a file. See below for more details.
Note that the VFS cache works in addition to the cache backend and you
may find that you need one or the other or both.
- --vfs-cache-dir string Directory rclone will use for caching.
+ --cache-dir string Directory rclone will use for caching.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
@@ -1889,7 +2124,7 @@ closed so if rclone is quit or dies with open files then these won't
get written back to the remote. However they will still be in the on
disk cache.
-#### --vfs-cache-mode off ####
+#### --vfs-cache-mode off
In this mode the cache will read directly from the remote and write
directly to the remote without caching anything on disk.
@@ -1904,7 +2139,7 @@ This will mean some operations are not possible
* Open modes O_APPEND, O_TRUNC are ignored
* If an upload fails it can't be retried
-#### --vfs-cache-mode minimal ####
+#### --vfs-cache-mode minimal
This is very similar to "off" except that files opened for read AND
write will be buffered to disks. This means that files opened for
@@ -1917,7 +2152,7 @@ These operations are not possible
* Files opened for write only will ignore O_APPEND, O_TRUNC
* If an upload fails it can't be retried
-#### --vfs-cache-mode writes ####
+#### --vfs-cache-mode writes
In this mode files opened for read only are still read directly from
the remote, write only and read/write files are buffered to disk
@@ -1927,14 +2162,14 @@ This mode should support all normal file system operations.
If an upload fails it will be retried up to --low-level-retries times.
-#### --vfs-cache-mode full ####
+#### --vfs-cache-mode full
In this mode all reads and writes are buffered to and from disk. When
a file is opened for read it will be downloaded in its entirety first.
This may be appropriate for your needs, or you may prefer to look at
the cache backend which does a much more sophisticated job of caching,
-including caching directory heirachies and chunks of files.q
+including caching directory hierarchies and chunks of files.
In this mode, unlike the others, when a file is written to the disk,
it will be kept on the disk after it is written to the remote. It
@@ -1953,22 +2188,184 @@ rclone serve http remote:path [flags]
### Options
```
- --addr string IPaddress:Port to bind server to. (default "localhost:8080")
+ --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080")
+ --cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --client-ca string Client certificate authority to verify clients with
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
--gid uint32 Override the gid field set by the filesystem. (default 502)
-h, --help help for http
+ --htpasswd string htpasswd file - if not provided no authentication is done
+ --key string SSL PEM Private key
+ --max-header-bytes int Maximum size of request header (default 4096)
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
+ --pass string Password for authentication.
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
--read-only Mount read-only.
+ --realm string realm for authentication (default "rclone")
+ --server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--uid uint32 Override the uid field set by the filesystem. (default 502)
--umask int Override the permission bits set by the filesystem. (default 2)
+ --user string User name for authentication.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
```
+## rclone serve restic
+
+Serve the remote for restic's REST API.
+
+### Synopsis
+
+rclone serve restic implements restic's REST backend API
+over HTTP. This allows restic to use rclone as a data storage
+mechanism for cloud providers that restic does not support directly.
+
+[Restic](https://restic.net/) is a command line program for doing
+backups.
+
+The server will log errors. Use -v to see access logs.
+
+--bwlimit will be respected for file transfers. Use --stats to
+control the stats printing.
+
+### Setting up rclone for use by restic ###
+
+First [set up a remote for your chosen cloud provider](/docs/#configure).
+
+Once you have set up the remote, check it is working with, for example
+"rclone lsd remote:". You may have called the remote something other
+than "remote:" - just substitute whatever you called it in the
+following instructions.
+
+Now start the rclone restic server
+
+ rclone serve restic -v remote:backup
+
+Where you can replace "backup" in the above by whatever path in the
+remote you wish to use.
+
+By default this will serve on "localhost:8080" but you can change this
+with the "--addr" flag.
+
+You might wish to start this server on boot.
+
+### Setting up restic to use rclone ###
+
+Now you can [follow the restic
+instructions](http://restic.readthedocs.io/en/latest/030_preparing_a_new_repo.html#rest-server)
+on setting up restic.
+
+Note that you will need restic 0.8.2 or later to interoperate with
+rclone.
+
+For the example above you will want to use "http://localhost:8080/" as
+the URL for the REST server.
+
+For example:
+
+ $ export RESTIC_REPOSITORY=rest:http://localhost:8080/
+ $ export RESTIC_PASSWORD=yourpassword
+ $ restic init
+ created restic backend 8b1a4b56ae at rest:http://localhost:8080/
+
+ Please note that knowledge of your password is required to access
+ the repository. Losing your password means that your data is
+ irrecoverably lost.
+ $ restic backup /path/to/files/to/backup
+ scan [/path/to/files/to/backup]
+ scanned 189 directories, 312 files in 0:00
+ [0:00] 100.00% 38.128 MiB / 38.128 MiB 501 / 501 items 0 errors ETA 0:00
+ duration: 0:00
+ snapshot 45c8fdd8 saved
+
+#### Multiple repositories ####
+
+Note that you can use the endpoint to host multiple repositories. Do
+this by adding a directory name or path after the URL. Note that
+these **must** end with /. Eg
+
+ $ export RESTIC_REPOSITORY=rest:http://localhost:8080/user1repo/
+ # backup user1 stuff
+ $ export RESTIC_REPOSITORY=rest:http://localhost:8080/user2repo/
+ # backup user2 stuff
+
+
+### Server options
+
+Use --addr to specify which IP address and port the server should
+listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all
+IPs. By default it only listens on localhost.
+
+If you set --addr to listen on a public or LAN accessible IP address
+then using Authentication is advised - see the next section for info.
+
+--server-read-timeout and --server-write-timeout can be used to
+control the timeouts on the server. Note that this is the total time
+for a transfer.
+
+--max-header-bytes controls the maximum number of bytes the server will
+accept in the HTTP header.
+
+#### Authentication
+
+By default this will serve files without needing a login.
+
+You can either use an htpasswd file which can take lots of users, or
+set a single username and password with the --user and --pass flags.
+
+Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is
+in standard apache format and supports MD5, SHA1 and BCrypt for basic
+authentication. Bcrypt is recommended.
+
+To create an htpasswd file:
+
+ touch htpasswd
+ htpasswd -B htpasswd user
+ htpasswd -B htpasswd anotherUser
+
+The password file can be updated while rclone is running.
+
+Use --realm to set the authentication realm.
+
+#### SSL/TLS
+
+By default this will serve over http. If you want you can serve over
+https. You will need to supply the --cert and --key flags. If you
+wish to do client side certificate validation then you will need to
+supply --client-ca also.
+
+--cert should be either a PEM encoded certificate or a concatenation
+of that with the CA certificate. --key should be the PEM encoded
+private key and --client-ca should be the PEM encoded client
+certificate authority certificate.
+
+
+```
+rclone serve restic remote:path [flags]
+```
+
+### Options
+
+```
+ --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080")
+ --cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --client-ca string Client certificate authority to verify clients with
+ -h, --help help for restic
+ --htpasswd string htpasswd file - if not provided no authentication is done
+ --key string SSL PEM Private key
+ --max-header-bytes int Maximum size of request header (default 4096)
+ --pass string Password for authentication.
+ --realm string realm for authentication (default "rclone")
+ --server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --stdio run an HTTP2 server on stdin/stdout
+ --user string User name for authentication.
+```
+
## rclone serve webdav
Serve remote:path over webdav.
@@ -1976,7 +2373,6 @@ Serve remote:path over webdav.
### Synopsis
-
rclone serve webdav implements a basic webdav server to serve the
remote over HTTP via the webdav protocol. This can be viewed with a
webdav client or you can make a remote of type webdav to read and
@@ -1985,8 +2381,56 @@ write it.
NB at the moment each directory listing reads the start of each file
which is undesirable: see https://github.com/golang/go/issues/22577
+### Server options
-### Directory Cache ###
+Use --addr to specify which IP address and port the server should
+listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all
+IPs. By default it only listens on localhost.
+
+If you set --addr to listen on a public or LAN accessible IP address
+then using Authentication is advised - see the next section for info.
+
+--server-read-timeout and --server-write-timeout can be used to
+control the timeouts on the server. Note that this is the total time
+for a transfer.
+
+--max-header-bytes controls the maximum number of bytes the server will
+accept in the HTTP header.
+
+#### Authentication
+
+By default this will serve files without needing a login.
+
+You can either use an htpasswd file which can take lots of users, or
+set a single username and password with the --user and --pass flags.
+
+Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is
+in standard apache format and supports MD5, SHA1 and BCrypt for basic
+authentication. Bcrypt is recommended.
+
+To create an htpasswd file:
+
+ touch htpasswd
+ htpasswd -B htpasswd user
+ htpasswd -B htpasswd anotherUser
+
+The password file can be updated while rclone is running.
+
+Use --realm to set the authentication realm.
+
+#### SSL/TLS
+
+By default this will serve over http. If you want you can serve over
+https. You will need to supply the --cert and --key flags. If you
+wish to do client side certificate validation then you will need to
+supply --client-ca also.
+
+--cert should be either a PEM encoded certificate or a concatenation
+of that with the CA certificate. --key should be the PEM encoded
+private key and --client-ca should be the PEM encoded client
+certificate authority certificate.
+
+### Directory Cache
Using the `--dir-cache-time` flag, you can set how long a
directory should be considered up to date and not refreshed from the
@@ -2001,12 +2445,21 @@ like this:
kill -SIGHUP $(pidof rclone)
-### File Caching ###
+If you configure rclone with a [remote control](/rc) then you can use
+rclone rc to flush the whole directory cache:
+
+ rclone rc vfs/forget
+
+Or individual files or directories:
+
+ rclone rc vfs/forget file=path/to/file dir=path/to/dir
+
+### File Caching
**NB** File caching is **EXPERIMENTAL** - use with care!
These flags control the VFS file caching options. The VFS layer is
-used by rclone mount to make a cloud storage systm work more like a
+used by rclone mount to make a cloud storage system work more like a
normal file system.
You'll need to enable VFS caching if you want, for example, to read
@@ -2015,7 +2468,7 @@ and write simultaneously to a file. See below for more details.
Note that the VFS cache works in addition to the cache backend and you
may find that you need one or the other or both.
- --vfs-cache-dir string Directory rclone will use for caching.
+ --cache-dir string Directory rclone will use for caching.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
@@ -2034,7 +2487,7 @@ closed so if rclone is quit or dies with open files then these won't
get written back to the remote. However they will still be in the on
disk cache.
-#### --vfs-cache-mode off ####
+#### --vfs-cache-mode off
In this mode the cache will read directly from the remote and write
directly to the remote without caching anything on disk.
@@ -2049,7 +2502,7 @@ This will mean some operations are not possible
* Open modes O_APPEND, O_TRUNC are ignored
* If an upload fails it can't be retried
-#### --vfs-cache-mode minimal ####
+#### --vfs-cache-mode minimal
This is very similar to "off" except that files opened for read AND
write will be buffered to disk. This means that files opened for
@@ -2062,7 +2515,7 @@ These operations are not possible
* Files opened for write only will ignore O_APPEND, O_TRUNC
* If an upload fails it can't be retried
-#### --vfs-cache-mode writes ####
+#### --vfs-cache-mode writes
In this mode files opened for read only are still read directly from
the remote, write only and read/write files are buffered to disk
@@ -2072,14 +2525,14 @@ This mode should support all normal file system operations.
If an upload fails it will be retried up to --low-level-retries times.
-#### --vfs-cache-mode full ####
+#### --vfs-cache-mode full
In this mode all reads and writes are buffered to and from disk. When
a file is opened for read it will be downloaded in its entirety first.
This may be appropriate for your needs, or you may prefer to look at
the cache backend which does a much more sophisticated job of caching,
-including caching directory heirachies and chunks of files.q
+including caching directory hierarchies and chunks of files.
In this mode, unlike the others, when a file is written to the disk,
it will be kept on the disk after it is written to the remote. It
@@ -2098,17 +2551,27 @@ rclone serve webdav remote:path [flags]
### Options
```
- --addr string IPaddress:Port to bind server to. (default "localhost:8081")
+ --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080")
+ --cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --client-ca string Client certificate authority to verify clients with
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
--gid uint32 Override the gid field set by the filesystem. (default 502)
-h, --help help for webdav
+ --htpasswd string htpasswd file - if not provided no authentication is done
+ --key string SSL PEM Private key
+ --max-header-bytes int Maximum size of request header (default 4096)
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
+ --pass string Password for authentication.
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
--read-only Mount read-only.
+ --realm string realm for authentication (default "rclone")
+ --server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--uid uint32 Override the uid field set by the filesystem. (default 502)
--umask int Override the permission bits set by the filesystem. (default 2)
+ --user string User name for authentication.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
@@ -2120,7 +2583,6 @@ Create new file or change file modification time.
### Synopsis
-
Create new file or change file modification time.
```
@@ -2142,7 +2604,6 @@ List the contents of the remote in a tree like fashion.
### Synopsis
-
rclone tree lists the contents of a remote in a similar way to the
unix tree command.
@@ -2216,7 +2677,7 @@ The file `test.jpg` will be placed inside `/tmp/download`.
This is equivalent to specifying
- rclone copy --no-traverse --files-from /tmp/files remote: /tmp/download
+ rclone copy --files-from /tmp/files remote: /tmp/download
Where `/tmp/files` contains the single line
@@ -2403,6 +2864,11 @@ running, you can toggle the limiter like this:
kill -SIGUSR2 $(pidof rclone)
+If you configure rclone with a [remote control](/rc) then you can use
+rclone rc to change the bwlimit dynamically:
+
+ rclone rc core/bwlimit rate=1M
+
### --buffer-size=SIZE ###
Use this sized buffer to speed up file transfers. Each `--transfer`
@@ -2598,6 +3064,12 @@ to reduce the value so rclone moves on to a high level retry (see the
Disable low level retries with `--low-level-retries 1`.
+### --max-delete=N ###
+
+This tells rclone not to delete more than N files. If that limit is
+exceeded then a fatal error will be generated and rclone will stop the
+operation in progress.
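
For example, a sketch with placeholder paths - if the sync would delete more than 100 files it stops with a fatal error:

    rclone sync --max-delete 100 /path/to/local remote:backup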
+
### --max-depth=N ###
This modifies the recursion depth for all the commands except purge.
@@ -2690,12 +3162,18 @@ show at default log level `NOTICE`. Use `--stats-log-level NOTICE` or
`-v` to make them show. See the [Logging section](#logging) for more
info on log levels.
+### --stats-file-name-length integer ###
+
+By default, the `--stats` output will truncate file names and paths longer
+than 40 characters. This is equivalent to providing
+`--stats-file-name-length 40`. Use `--stats-file-name-length 0` to disable
+any truncation of file names printed by stats.
+
### --stats-log-level string ###
Log level to show `--stats` output at. This can be `DEBUG`, `INFO`,
`NOTICE`, or `ERROR`. The default is `INFO`. This means at the
default level of logging which is `NOTICE` the stats won't show - if
-you want them to then use `-stats-log-level NOTICE`. See the [Logging
+you want them to then use `--stats-log-level NOTICE`. See the [Logging
section](#logging) for more info on log levels.
### --stats-unit=bits|bytes ###
@@ -2783,8 +3261,8 @@ If the destination does not support server-side copy or move, rclone
will fall back to the default behaviour and log an error level message
to the console.
-Note that `--track-renames` is incompatible with `--no-traverse` and
-that it uses extra memory to keep track of all the rename candidates.
+Note that `--track-renames` uses extra memory to keep track of all
+the rename candidates.
Note also that `--track-renames` is incompatible with
`--delete-before` and will select `--delete-after` instead of
@@ -3049,26 +3527,6 @@ This option defaults to `false`.
**This should be used only for testing.**
-### --no-traverse ###
-
-The `--no-traverse` flag controls whether the destination file system
-is traversed when using the `copy` or `move` commands.
-`--no-traverse` is not compatible with `sync` and will be ignored if
-you supply it with `sync`.
-
-If you are only copying a small number of files and/or have a large
-number of files on the destination then `--no-traverse` will stop
-rclone listing the destination and save time.
-
-However, if you are copying a large number of files, especially if you
-are doing a copy where lots of the files haven't changed and won't
-need copying then you shouldn't use `--no-traverse`.
-
-It can also be used to reduce the memory usage of rclone when copying
-- `rclone --no-traverse copy src dst` won't load either the source or
-destination listings into memory so will use the minimum amount of
-memory.
-
Filtering
---------
@@ -3090,10 +3548,20 @@ For the filtering options
See the [filtering section](https://rclone.org/filtering/).
+Remote control
+--------------
+
+For the remote control options and for instructions on how to remote control rclone
+
+ * `--rc`
+ * and anything starting with `--rc-`
+
+See [the remote control section](https://rclone.org/rc/).
+
Logging
-------
-rclone has 4 levels of logging, `Error`, `Notice`, `Info` and `Debug`.
+rclone has 4 levels of logging, `ERROR`, `NOTICE`, `INFO` and `DEBUG`.
By default, rclone logs to standard error. This means you can redirect
standard error and still see the normal output of rclone commands (eg
@@ -3581,23 +4049,33 @@ from the sync.
### `--files-from` - Read list of source-file names ###
This reads a list of file names from the file passed in and **only**
-these files are transferred. The filtering rules are ignored
+these files are transferred. The **filtering rules are ignored**
completely if you use this option.
This option can be repeated to read from more than one file. These
are read in the order that they are placed on the command line.
-Prepare a file like this `files-from.txt`
+Paths within the `--files-from` file will be interpreted as starting
+with the root specified in the command. Leading `/` characters are
+ignored.
+
+For example, suppose you had `files-from.txt` with this content:
# comment
file1.jpg
- file2.jpg
+ subdir/file2.jpg
-Then use as `--files-from files-from.txt`. This will only transfer
-`file1.jpg` and `file2.jpg` providing they exist.
+You could then use it like this:
-For example, let's say you had a few files you want to back up
-regularly with these absolute paths:
+ rclone copy --files-from files-from.txt /home/me/pics remote:pics
+
+This will transfer these files only (if they exist)
+
+ /home/me/pics/file1.jpg → remote:pics/file1.jpg
+    /home/me/pics/subdir/file2.jpg → remote:pics/subdir/file2.jpg
+
+To take a more complicated example, let's say you had a few files you
+want to back up regularly with these absolute paths:
/home/user1/important
/home/user1/dir/file
@@ -3616,7 +4094,11 @@ You could then copy these to a remote like this
rclone copy --files-from files-from.txt /home remote:backup
The 3 files will arrive in `remote:backup` with the paths as in the
-`files-from.txt`.
+`files-from.txt` like this:
+
+ /home/user1/important → remote:backup/user1/important
+ /home/user1/dir/file → remote:backup/user1/dir/file
+    /home/user2/stuff → remote:backup/user2/stuff
You could of course choose `/` as the root too in which case your
`files-from.txt` might look like this.
@@ -3629,7 +4111,11 @@ And you would transfer it like this
rclone copy --files-from files-from.txt / remote:backup
-In this case there will be an extra `home` directory on the remote.
+In this case there will be an extra `home` directory on the remote:
+
+    /home/user1/important → remote:backup/home/user1/important
+    /home/user1/dir/file → remote:backup/home/user1/dir/file
+    /home/user2/stuff → remote:backup/home/user2/stuff
### `--min-size` - Don't transfer any file smaller than this ###
@@ -3737,6 +4223,244 @@ You can exclude `dir3` from sync by running the following command:
Currently only one filename is supported, i.e. `--exclude-if-present`
should not be used multiple times.
+# Remote controlling rclone #
+
+If rclone is run with the `--rc` flag then it starts an http server
+which can be used to remote control rclone.
+
+**NB** this is experimental and everything here is subject to change!
+
+## Supported parameters
+
+#### --rc ####
+Flag to start the http server to listen for remote control requests.
+
+#### --rc-addr=IP ####
+IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+
+#### --rc-cert=KEY ####
+SSL PEM key (concatenation of certificate and CA certificate)
+
+#### --rc-client-ca=PATH ####
+Client certificate authority to verify clients with
+
+#### --rc-htpasswd=PATH ####
+htpasswd file - if not provided no authentication is done
+
+#### --rc-key=PATH ####
+SSL PEM Private key
+
+#### --rc-max-header-bytes=VALUE ####
+Maximum size of request header (default 4096)
+
+#### --rc-user=VALUE ####
+User name for authentication.
+
+#### --rc-pass=VALUE ####
+Password for authentication.
+
+#### --rc-realm=VALUE ####
+Realm for authentication (default "rclone")
+
+#### --rc-server-read-timeout=DURATION ####
+Timeout for server reading data (default 1h0m0s)
+
+#### --rc-server-write-timeout=DURATION ####
+Timeout for server writing data (default 1h0m0s)
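
For example, to run a mount with the remote control enabled on the default port (the remote name and mount point here are placeholders):

    rclone mount remote: /mnt/remote --rc --rc-addr localhost:5572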
+
+## Accessing the remote control via the rclone rc command
+
+Rclone itself implements the remote control protocol in its `rclone
+rc` command.
+
+You can use it like this
+
+```
+$ rclone rc rc/noop param1=one param2=two
+{
+ "param1": "one",
+ "param2": "two"
+}
+```
+
+Run `rclone rc` on its own to see the help for the installed remote
+control commands.
+
+## Supported commands
+
+### core/bwlimit: Set the bandwidth limit.
+
+This sets the bandwidth limit to that passed in.
+
+Eg
+
+    rclone rc core/bwlimit rate=1M
+    rclone rc core/bwlimit rate=off
+
+### cache/expire: Purge a remote from cache
+
+Purge a remote from the cache backend. Supports either a directory or a file.
+Params:
+
+ - remote = path to remote (required)
+ - withData = true/false to delete cached data (chunks) as well (optional)
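
For example, assuming rclone is running with `--rc` and a cache remote is in use (the paths are placeholders):

    rclone rc cache/expire remote=path/to/sub/folder/
    rclone rc cache/expire remote=/ withData=true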
+
+### vfs/forget: Forget files or directories in the directory cache.
+
+This forgets the paths in the directory cache causing them to be
+re-read from the remote when needed.
+
+If no paths are passed in then it will forget all the paths in the
+directory cache.
+
+ rclone rc vfs/forget
+
+Otherwise pass files or dirs in as file=path or dir=path. Any
+parameter key starting with file will forget that file and any
+starting with dir will forget that dir, eg
+
+ rclone rc vfs/forget file=hello file2=goodbye dir=home/junk
+
+### rc/noop: Echo the input to the output parameters
+
+This echoes the input parameters to the output parameters for testing
+purposes. It can be used to check that rclone is still alive and to
+check that parameter passing is working properly.
+
+### rc/error: This returns an error
+
+This returns an error with the input as part of its error string.
+Useful for testing error handling.
+
+### rc/list: List all the registered remote control commands
+
+This lists all the registered remote control commands as a JSON map in
+the commands response.
+
+## Accessing the remote control via HTTP
+
+Rclone implements a simple HTTP based protocol.
+
+Each endpoint takes a JSON object and returns a JSON object or an
+error. The JSON objects are essentially a map of string names to
+values.
+
+All calls must be made using POST.
+
+The input objects can be supplied using URL parameters, POST
+parameters or by supplying "Content-Type: application/json" and a JSON
+blob in the body. There are examples of these below using `curl`.
+
+The response will be a JSON blob in the body of the response. This is
+formatted to be reasonably human readable.
+
+If an error occurs then there will be an HTTP error status (usually
+400) and the body of the response will contain a JSON encoded error
+object.
+
+### Using POST with URL parameters only
+
+```
+curl -X POST 'http://localhost:5572/rc/noop/?potato=1&sausage=2'
+```
+
+Response
+
+```
+{
+ "potato": "1",
+ "sausage": "2"
+}
+```
+
+Here is what an error response looks like:
+
+```
+curl -X POST 'http://localhost:5572/rc/error/?potato=1&sausage=2'
+```
+
+```
+{
+ "error": "arbitrary error on input map[potato:1 sausage:2]",
+ "input": {
+ "potato": "1",
+ "sausage": "2"
+ }
+}
+```
+
+Note that curl doesn't return errors to the shell unless you use the `-f` option
+
+```
+$ curl -f -X POST 'http://localhost:5572/rc/error/?potato=1&sausage=2'
+curl: (22) The requested URL returned error: 400 Bad Request
+$ echo $?
+22
+```
+
+### Using POST with a form
+
+```
+curl --data "potato=1" --data "sausage=2" http://localhost:5572/rc/noop/
+```
+
+Response
+
+```
+{
+ "potato": "1",
+ "sausage": "2"
+}
+```
+
+Note that you can combine these with URL parameters too, with the POST
+parameters taking precedence.
+
+```
+curl --data "potato=1" --data "sausage=2" "http://localhost:5572/rc/noop/?rutabaga=3&sausage=4"
+```
+
+Response
+
+```
+{
+ "potato": "1",
+ "rutabaga": "3",
+ "sausage": "4"
+}
+```
+
+### Using POST with a JSON blob
+
+```
+curl -H "Content-Type: application/json" -X POST -d '{"potato":2,"sausage":1}' http://localhost:5572/rc/noop/
+```
+
+Response
+
+```
+{
+	"potato": 2,
+	"sausage": 1
+}
+```
+
+This can be combined with URL parameters too if required. The JSON
+blob takes precedence.
+
+```
+curl -H "Content-Type: application/json" -X POST -d '{"potato":2,"sausage":1}' 'http://localhost:5572/rc/noop/?rutabaga=3&potato=4'
+```
+
+```
+{
+ "potato": 2,
+ "rutabaga": "3",
+ "sausage": 1
+}
+```
+
# Overview of cloud storage systems #
Each cloud storage system is slightly different. Rclone attempts to
@@ -3929,6 +4653,130 @@ Some remotes allow files to be uploaded without knowing the file size
in advance. This allows certain operations to work without spooling the
file to local disk first, e.g. `rclone rcat`.
+Alias
+-----------------------------------------
+
+The `alias` remote provides a new name for another remote.
+
+Paths may be as deep as required or a local path,
+eg `remote:directory/subdirectory` or `/directory/subdirectory`.
+
+During the initial setup with `rclone config` you will specify the target
+remote. The target remote can either be a local path or another remote.
+
+Subfolders can be used in the target remote. Assume an alias remote named `backup`
+with the target `mydrive:private/backup`. Invoking `rclone mkdir backup:desktop`
+is exactly the same as invoking `rclone mkdir mydrive:private/backup/desktop`.
+
+There will be no special handling of paths containing `..` segments.
+Invoking `rclone mkdir backup:../desktop` is exactly the same as invoking
+`rclone mkdir mydrive:private/backup/../desktop`.
+
+The empty path is not allowed as a remote. To alias the current directory
+use `.` instead.
+
+Here is an example of how to make an alias called `remote` for a local folder.
+First run:
+
+ rclone config
+
+This will guide you through an interactive setup process:
+
+```
+No remotes found - make a new one
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+ 1 / Alias for a existing remote
+ \ "alias"
+ 2 / Amazon Drive
+ \ "amazon cloud drive"
+ 3 / Amazon S3 (also Dreamhost, Ceph, Minio)
+ \ "s3"
+ 4 / Backblaze B2
+ \ "b2"
+ 5 / Box
+ \ "box"
+ 6 / Cache a remote
+ \ "cache"
+ 7 / Dropbox
+ \ "dropbox"
+ 8 / Encrypt/Decrypt a remote
+ \ "crypt"
+ 9 / FTP Connection
+ \ "ftp"
+10 / Google Cloud Storage (this is not Google Drive)
+ \ "google cloud storage"
+11 / Google Drive
+ \ "drive"
+12 / Hubic
+ \ "hubic"
+13 / Local Disk
+ \ "local"
+14 / Microsoft Azure Blob Storage
+ \ "azureblob"
+15 / Microsoft OneDrive
+ \ "onedrive"
+16 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+ \ "swift"
+17 / Pcloud
+ \ "pcloud"
+18 / QingCloud Object Storage
+ \ "qingstor"
+19 / SSH/SFTP Connection
+ \ "sftp"
+20 / Webdav
+ \ "webdav"
+21 / Yandex Disk
+ \ "yandex"
+22 / http Connection
+ \ "http"
+Storage> 1
+Remote or path to alias.
+Can be "myremote:path/to/dir", "myremote:bucket", "myremote:" or "/local/path".
+remote> /mnt/storage/backup
+Remote config
+--------------------
+[remote]
+remote = /mnt/storage/backup
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+Current remotes:
+
+Name Type
+==== ====
+remote alias
+
+e) Edit existing remote
+n) New remote
+d) Delete remote
+r) Rename remote
+c) Copy remote
+s) Set configuration password
+q) Quit config
+e/n/d/r/c/s/q> q
+```
+
+Once configured you can then use `rclone` like this,
+
+List directories in top level in `/mnt/storage/backup`
+
+ rclone lsd remote:
+
+List all the files in `/mnt/storage/backup`
+
+ rclone ls remote:
+
+Copy another local directory to the alias directory called source
+
+ rclone copy /home/source remote:source
+
Amazon Drive
-----------------------------------------
@@ -4162,37 +5010,23 @@ This will guide you through an interactive setup process.
No remotes found - make a new one
n) New remote
s) Set configuration password
-n/s> n
+q) Quit config
+n/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
- 1 / Amazon Drive
+ 1 / Alias for a existing remote
+ \ "alias"
+ 2 / Amazon Drive
\ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
+ 3 / Amazon S3 (also Dreamhost, Ceph, Minio)
\ "s3"
- 3 / Backblaze B2
+ 4 / Backblaze B2
\ "b2"
- 4 / Dropbox
- \ "dropbox"
- 5 / Encrypt/Decrypt a remote
- \ "crypt"
- 6 / Google Cloud Storage (this is not Google Drive)
- \ "google cloud storage"
- 7 / Google Drive
- \ "drive"
- 8 / Hubic
- \ "hubic"
- 9 / Local Disk
- \ "local"
-10 / Microsoft OneDrive
- \ "onedrive"
-11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
- \ "swift"
-12 / SSH/SFTP Connection
- \ "sftp"
-13 / Yandex Disk
- \ "yandex"
-Storage> 2
+[snip]
+23 / http Connection
+ \ "http"
+Storage> s3
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
1 / Enter AWS credentials in the next step
@@ -4201,80 +5035,91 @@ Choose a number from below, or type in your own value
\ "true"
env_auth> 1
AWS Access Key ID - leave blank for anonymous access or runtime credentials.
-access_key_id> access_key
+access_key_id> XXX
AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
-secret_access_key> secret_key
-Region to connect to.
+secret_access_key> YYY
+Region to connect to. Leave blank if you are using an S3 clone and you don't have a region.
Choose a number from below, or type in your own value
/ The default endpoint - a good choice if you are unsure.
1 | US Region, Northern Virginia or Pacific Northwest.
| Leave location constraint empty.
\ "us-east-1"
+ / US East (Ohio) Region
+ 2 | Needs location constraint us-east-2.
+ \ "us-east-2"
/ US West (Oregon) Region
- 2 | Needs location constraint us-west-2.
+ 3 | Needs location constraint us-west-2.
\ "us-west-2"
/ US West (Northern California) Region
- 3 | Needs location constraint us-west-1.
+ 4 | Needs location constraint us-west-1.
\ "us-west-1"
- / EU (Ireland) Region Region
- 4 | Needs location constraint EU or eu-west-1.
+ / Canada (Central) Region
+ 5 | Needs location constraint ca-central-1.
+ \ "ca-central-1"
+ / EU (Ireland) Region
+ 6 | Needs location constraint EU or eu-west-1.
\ "eu-west-1"
+ / EU (London) Region
+ 7 | Needs location constraint eu-west-2.
+ \ "eu-west-2"
/ EU (Frankfurt) Region
- 5 | Needs location constraint eu-central-1.
+ 8 | Needs location constraint eu-central-1.
\ "eu-central-1"
/ Asia Pacific (Singapore) Region
- 6 | Needs location constraint ap-southeast-1.
+ 9 | Needs location constraint ap-southeast-1.
\ "ap-southeast-1"
/ Asia Pacific (Sydney) Region
- 7 | Needs location constraint ap-southeast-2.
+10 | Needs location constraint ap-southeast-2.
\ "ap-southeast-2"
/ Asia Pacific (Tokyo) Region
- 8 | Needs location constraint ap-northeast-1.
+11 | Needs location constraint ap-northeast-1.
\ "ap-northeast-1"
/ Asia Pacific (Seoul)
- 9 | Needs location constraint ap-northeast-2.
+12 | Needs location constraint ap-northeast-2.
\ "ap-northeast-2"
/ Asia Pacific (Mumbai)
-10 | Needs location constraint ap-south-1.
+13 | Needs location constraint ap-south-1.
\ "ap-south-1"
/ South America (Sao Paulo) Region
-11 | Needs location constraint sa-east-1.
+14 | Needs location constraint sa-east-1.
\ "sa-east-1"
- / If using an S3 clone that only understands v2 signatures
-12 | eg Ceph/Dreamhost
- | set this and make sure you set the endpoint.
+ / Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH.
+15 | Set this and make sure you set the endpoint.
\ "other-v2-signature"
- / If using an S3 clone that understands v4 signatures set this
-13 | and make sure you set the endpoint.
- \ "other-v4-signature"
region> 1
Endpoint for S3 API.
Leave blank if using AWS to use the default endpoint for the region.
Specify if using an S3 clone such as Ceph.
-endpoint>
+endpoint>
Location constraint - must be set to match the Region. Used when creating buckets only.
Choose a number from below, or type in your own value
1 / Empty for US Region, Northern Virginia or Pacific Northwest.
\ ""
- 2 / US West (Oregon) Region.
+ 2 / US East (Ohio) Region.
+ \ "us-east-2"
+ 3 / US West (Oregon) Region.
\ "us-west-2"
- 3 / US West (Northern California) Region.
+ 4 / US West (Northern California) Region.
\ "us-west-1"
- 4 / EU (Ireland) Region.
+ 5 / Canada (Central) Region.
+ \ "ca-central-1"
+ 6 / EU (Ireland) Region.
\ "eu-west-1"
- 5 / EU Region.
+ 7 / EU (London) Region.
+ \ "eu-west-2"
+ 8 / EU Region.
\ "EU"
- 6 / Asia Pacific (Singapore) Region.
+ 9 / Asia Pacific (Singapore) Region.
\ "ap-southeast-1"
- 7 / Asia Pacific (Sydney) Region.
+10 / Asia Pacific (Sydney) Region.
\ "ap-southeast-2"
- 8 / Asia Pacific (Tokyo) Region.
+11 / Asia Pacific (Tokyo) Region.
\ "ap-northeast-1"
- 9 / Asia Pacific (Seoul)
+12 / Asia Pacific (Seoul)
\ "ap-northeast-2"
-10 / Asia Pacific (Mumbai)
+13 / Asia Pacific (Mumbai)
\ "ap-south-1"
-11 / South America (Sao Paulo) Region.
+14 / South America (Sao Paulo) Region.
\ "sa-east-1"
location_constraint> 1
Canned ACL used when creating buckets and/or storing objects in S3.
@@ -4295,14 +5140,14 @@ Choose a number from below, or type in your own value
/ Both the object owner and the bucket owner get FULL_CONTROL over the object.
6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
\ "bucket-owner-full-control"
-acl> private
+acl> 1
The server-side encryption algorithm used when storing this object in S3.
Choose a number from below, or type in your own value
1 / None
\ ""
2 / AES256
\ "AES256"
-server_side_encryption>
+server_side_encryption> 1
The storage class to use when storing objects in S3.
Choose a number from below, or type in your own value
1 / Default
@@ -4313,19 +5158,19 @@ Choose a number from below, or type in your own value
\ "REDUCED_REDUNDANCY"
4 / Standard Infrequent Access storage class
\ "STANDARD_IA"
-storage_class>
+storage_class> 1
Remote config
--------------------
[remote]
env_auth = false
-access_key_id = access_key
-secret_access_key = secret_key
+access_key_id = XXX
+secret_access_key = YYY
region = us-east-1
-endpoint =
-location_constraint =
+endpoint =
+location_constraint =
acl = private
-server_side_encryption =
-storage_class =
+server_side_encryption =
+storage_class =
--------------------
y) Yes this is OK
e) Edit this remote
@@ -4366,8 +5211,8 @@ The modified time is stored as metadata on the object as
### Multipart uploads ###
rclone supports multipart uploads with S3 which means that it can
-upload files bigger than 5GB. Note that files uploaded with multipart
-upload don't have an MD5SUM.
+upload files bigger than 5GB. Note that files uploaded *both* with
+multipart upload *and* through crypt remotes do not have MD5 sums.
### Buckets and Regions ###
@@ -4444,6 +5289,14 @@ Notes on above:
For reference, [here's an Ansible script](https://gist.github.com/ebridges/ebfc9042dd7c756cd101cfa807b7ae2b)
that will generate one or more buckets that will work with `rclone sync`.
+### Key Management System (KMS) ###
+
+If you are using server side encryption with KMS then you will find
+you can't transfer small objects. As a work-around you can use the
+`--ignore-checksum` flag.
+
+A proper fix is being worked on in [issue #1824](https://github.com/ncw/rclone/issues/1824).
+
### Glacier ###
You can transition objects to glacier storage using a [lifecycle policy](http://docs.aws.amazon.com/AmazonS3/latest/user-guide/create-lifecycle.html).
@@ -4523,16 +5376,27 @@ You will be able to list and copy data but not upload it.
### Ceph ###
-Ceph is an object storage system which presents an Amazon S3 interface.
+[Ceph](https://ceph.com/) is an open source unified, distributed
+storage system designed for excellent performance, reliability and
+scalability. It has an S3 compatible object storage interface.
+
+To use rclone with Ceph, configure as above but leave the region blank
+and set the endpoint. You should end up with something like this in
+your config:
-To use rclone with ceph, you need to set the following parameters in
-the config.
```
-access_key_id = Whatever
-secret_access_key = Whatever
-endpoint = https://ceph.endpoint.goes.here/
-region = other-v2-signature
+[ceph]
+type = s3
+env_auth = false
+access_key_id = XXX
+secret_access_key = YYY
+region =
+endpoint = https://ceph.endpoint.example.com
+location_constraint =
+acl =
+server_side_encryption =
+storage_class =
```
Note also that Ceph sometimes puts `/` in the passwords it gives
@@ -4560,6 +5424,29 @@ removed).
Because this is a json dump, it is encoding the `/` as `\/`, so if you
use the secret key as `xxxxxx/xxxx` it will work fine.
+### Dreamhost ###
+
+Dreamhost [DreamObjects](https://www.dreamhost.com/cloud/storage/) is
+an object storage system based on CEPH.
+
+To use rclone with Dreamhost, configure as above but leave the region blank
+and set the endpoint. You should end up with something like this in
+your config:
+
+```
+[dreamobjects]
+env_auth = false
+access_key_id = your_access_key
+secret_access_key = your_secret_key
+region =
+endpoint = objects-us-west-1.dream.io
+location_constraint =
+acl = private
+server_side_encryption =
+storage_class =
+```
+
+
### DigitalOcean Spaces ###
[Spaces](https://www.digitalocean.com/products/object-storage/) is an [S3-interoperable](https://developers.digitalocean.com/documentation/spaces/) object storage service from cloud provider DigitalOcean.
@@ -4571,7 +5458,7 @@ When prompted for a `region` or `location_constraint`, press enter to use the de
Going through the whole process of creating a new remote by running `rclone config`, each prompt should be answered as shown below:
```
-Storage> 2
+Storage> s3
env_auth> 1
access_key_id> YOUR_ACCESS_KEY
secret_access_key> YOUR_SECRET_KEY
@@ -4605,6 +5492,209 @@ rclone mkdir spaces:my-new-space
rclone copy /path/to/files spaces:my-new-space
```
+### IBM COS (S3) ###
+Information stored with IBM Cloud Object Storage is encrypted and dispersed across multiple geographic locations, and accessed through an implementation of the S3 API. This service makes use of the distributed storage technologies provided by IBM’s Cloud Object Storage System (formerly Cleversafe). For more information visit https://www.ibm.com/cloud/object-storage.
+
+To configure access to IBM COS S3, follow the steps below:
+
+1. Run rclone config and select n for a new remote.
+```
+ 2018/02/14 14:13:11 NOTICE: Config file "C:\\Users\\a\\.config\\rclone\\rclone.conf" not found - using defaults
+ No remotes found - make a new one
+ n) New remote
+ s) Set configuration password
+ q) Quit config
+ n/s/q> n
+```
+
+2. Enter the name for the configuration
+```
+ name> IBM-COS-XREGION
+```
+
+3. Select "s3" storage.
+```
+ Type of storage to configure.
+ Choose a number from below, or type in your own value
+ 1 / Amazon Drive
+ \ "amazon cloud drive"
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio, IBM COS(S3))
+ \ "s3"
+ 3 / Backblaze B2
+ Storage> 2
+```
+
+4. Select "Enter AWS credentials…"
+```
+ Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
+ Choose a number from below, or type in your own value
+ 1 / Enter AWS credentials in the next step
+ \ "false"
+ 2 / Get AWS credentials from the environment (env vars or IAM)
+ \ "true"
+ env_auth> 1
+```
+
+5. Enter the Access Key and Secret.
+```
+ AWS Access Key ID - leave blank for anonymous access or runtime credentials.
+ access_key_id> <>
+ AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
+ secret_access_key> <>
+```
+
+6. Select "other-v4-signature" region.
+```
+ Region to connect to.
+ Choose a number from below, or type in your own value
+ / The default endpoint - a good choice if you are unsure.
+ 1 | US Region, Northern Virginia or Pacific Northwest.
+ | Leave location constraint empty.
+ \ "us-east-1"
+ / US East (Ohio) Region
+ 2 | Needs location constraint us-east-2.
+ \ "us-east-2"
+ / US West (Oregon) Region
+ ……
+ 15 | eg Ceph/Dreamhost
+ | set this and make sure you set the endpoint.
+ \ "other-v2-signature"
+ / If using an S3 clone that understands v4 signatures set this
+ 16 | and make sure you set the endpoint.
+ \ "other-v4-signature
+ region> 16
+```
+
+7. Enter the endpoint FQDN.
+```
+ Leave blank if using AWS to use the default endpoint for the region.
+ Specify if using an S3 clone such as Ceph.
+ endpoint> s3-api.us-geo.objectstorage.softlayer.net
+```
+
+8. Specify an IBM COS Location Constraint.
+ a. Currently, the only IBM COS values for LocationConstraint are:
+ us-standard / us-vault / us-cold / us-flex
+ us-east-standard / us-east-vault / us-east-cold / us-east-flex
+ us-south-standard / us-south-vault / us-south-cold / us-south-flex
+ eu-standard / eu-vault / eu-cold / eu-flex
+```
+ Location constraint - must be set to match the Region. Used when creating buckets only.
+ Choose a number from below, or type in your own value
+ 1 / Empty for US Region, Northern Virginia or Pacific Northwest.
+ \ ""
+ 2 / US East (Ohio) Region.
+ \ "us-east-2"
+ ……
+ location_constraint> us-standard
+```
+
+9. Specify a canned ACL.
+```
+ Canned ACL used when creating buckets and/or storing objects in S3.
+ For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
+ Choose a number from below, or type in your own value
+ 1 / Owner gets FULL_CONTROL. No one else has access rights (default).
+ \ "private"
+ 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.
+ \ "public-read"
+ / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
+ 3 | Granting this on a bucket is generally not recommended.
+ \ "public-read-write"
+ 4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
+ \ "authenticated-read"
+ / Object owner gets FULL_CONTROL. Bucket owner gets READ access.
+ 5 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
+ \ "bucket-owner-read"
+ / Both the object owner and the bucket owner get FULL_CONTROL over the object.
+ 6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
+ \ "bucket-owner-full-control"
+ acl> 1
+```
+
+10. Set the SSE option to "None".
+```
+ Choose a number from below, or type in your own value
+ 1 / None
+ \ ""
+ 2 / AES256
+ \ "AES256"
+ server_side_encryption> 1
+```
+
+11. Set the storage class to "None" (IBM COS uses the LocationConstraint at the bucket level).
+```
+ The storage class to use when storing objects in S3.
+ Choose a number from below, or type in your own value
+ 1 / Default
+ \ ""
+ 2 / Standard storage class
+ \ "STANDARD"
+ 3 / Reduced redundancy storage class
+ \ "REDUCED_REDUNDANCY"
+ 4 / Standard Infrequent Access storage class
+ \ "STANDARD_IA"
+ storage_class>
+```
+
+12. Review the displayed configuration and accept to save the "remote" then quit.
+```
+ Remote config
+ --------------------
+ [IBM-COS-XREGION]
+ env_auth = false
+ access_key_id = <>
+ secret_access_key = <>
+ region = other-v4-signature
+ endpoint = s3-api.us-geo.objectstorage.softlayer.net
+ location_constraint = us-standard
+ acl = private
+ server_side_encryption =
+ storage_class =
+ --------------------
+ y) Yes this is OK
+ e) Edit this remote
+ d) Delete this remote
+ y/e/d> y
+ Remote config
+ Current remotes:
+
+ Name Type
+ ==== ====
+ IBM-COS-XREGION s3
+
+ e) Edit existing remote
+ n) New remote
+ d) Delete remote
+ r) Rename remote
+ c) Copy remote
+ s) Set configuration password
+ q) Quit config
+ e/n/d/r/c/s/q> q
+```
+
+13. Execute rclone commands
+```
+ 1) Create a bucket.
+ rclone mkdir IBM-COS-XREGION:newbucket
+ 2) List available buckets.
+ rclone lsd IBM-COS-XREGION:
+ -1 2017-11-08 21:16:22 -1 test
+ -1 2018-02-14 20:16:39 -1 newbucket
+ 3) List contents of a bucket.
+ rclone ls IBM-COS-XREGION:newbucket
+ 18685952 test.exe
+ 4) Copy a file from local to remote.
+ rclone copy /Users/file.txt IBM-COS-XREGION:newbucket
+ 5) Copy a file from remote to local.
+ rclone copy IBM-COS-XREGION:newbucket/file.txt .
+ 6) Delete a file on remote.
+ rclone delete IBM-COS-XREGION:newbucket/file.txt
+```
+
+
### Minio ###
[Minio](https://minio.io/) is an object storage server built for cloud application developers and devops.
@@ -5428,12 +6518,38 @@ To start a cached mount
rclone mount --allow-other test-cache: /var/tmp/test-cache
+### Write Features ###
+
+### Offline uploading ###
+
+In an effort to make writing through cache more reliable, the backend
+now supports this feature which can be activated by specifying a
+`cache-tmp-upload-path`.
+
+A file goes through these states when using this feature:
+
+1. An upload is started (usually by copying a file on the cache remote)
+2. When the copy to the temporary location is complete the file is part
+of the cached remote and looks and behaves like any other file (reading included)
+3. After `cache-tmp-wait-time` passes and the file is next in line, `rclone move`
+is used to move the file to the cloud provider
+4. Reading the file still works during the upload but most modifications on it will be prohibited
+5. Once the move is complete the file is unlocked for modifications as it
+becomes as any other regular file
+6. If the file is being read through `cache` when it's actually
+deleted from the temporary path then `cache` will simply swap the source
+to the cloud provider without interrupting the reading (a small blip can happen though)
+
+Files are uploaded in sequence and only one file is uploaded at a time.
+Uploads are stored in a queue and processed in the order they were added.
+The queue and the temporary storage are persistent across restarts and even purges of the cache.
+
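+For example, to start a cached mount with offline uploading enabled
+(the temporary path and wait time below are only examples):
+
+```
+rclone mount --cache-tmp-upload-path /var/tmp/cache-upload \
+    --cache-tmp-wait-time 1h test-cache: /var/tmp/test-cache
+```
+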
### Write Support ###
Writes are supported through `cache`.
One caveat is that a mounted cache remote does not add any retry or fallback
mechanism to the upload operation. This will depend on the implementation
-of the wrapped remote.
+of the wrapped remote. Consider using `Offline uploading` for reliable writes.
One special case is covered with `cache-writes` which will cache the file
data at the same time as the upload when it is enabled making it available
@@ -5476,6 +6592,16 @@ Affected settings:
### Known issues ###
+#### Mount and --dir-cache-time ####
+
+--dir-cache-time controls the first layer of directory caching which works at the mount layer.
+Being an independent caching mechanism from the `cache` backend, it will manage its own entries
+based on the configured time.
+
+To avoid getting into a scenario where the dir cache has obsolete data and
+cache has the correct one, try to set `--dir-cache-time` to a lower value than
+`--cache-info-age`. Default values are already configured in this way.
+
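+For example, the following keeps the mount's dir cache shorter lived than
+the cache backend's info age (the values here are illustrative only):
+
+```
+rclone mount --dir-cache-time 30s --cache-info-age 60s test-cache: /var/tmp/test-cache
+```
+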
#### Windows support - Experimental ####
There are a couple of issues with Windows `mount` functionality that still require some investigations.
@@ -5525,6 +6651,18 @@ which makes it think we're downloading the full file instead of small chunks.
Organizing the remotes in this order yields better results:
**cloud remote** -> **cache** -> **crypt**
+### Cache and Remote Control (--rc) ###
+Cache supports the new `--rc` mode in rclone and can be remote controlled through the following endpoints.
+By default, the listener is disabled if you do not add the `--rc` flag.
+
+### rc cache/expire
+Purge a remote from the cache backend. Supports either a directory or a file.
+It supports both encrypted and unencrypted file names if cache is wrapped by crypt.
+
+Params:
+ - **remote** = path to remote **(required)**
+ - **withData** = true/false to delete cached data (chunks) as well _(optional, false by default)_
+
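+For example, assuming rclone was started with `--rc` on the default
+listener, a directory and its cached chunks could be expired with:
+
+```
+rclone rc cache/expire remote=path/to/sub/folder/ withData=true
+```
+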
### Specific options ###
Here are the command line options specific to this cloud storage
@@ -5661,6 +6799,36 @@ same time during upload.
**Default**: not set
+#### --cache-tmp-upload-path=PATH ####
+
+This is the path that `cache` will use as temporary storage for new files
+that need to be uploaded to the cloud provider.
+
+Specifying a value will enable this feature. Without it, the feature is completely
+disabled and files will be uploaded directly to the cloud provider.
+
+**Default**: empty
+
+#### --cache-tmp-wait-time=DURATION ####
+
+This is the duration that a file must wait in the temporary location
+_cache-tmp-upload-path_ before it is selected for upload.
+
+Note that only one file is uploaded at a time and it can take longer to
+start the upload if a queue has formed for this purpose.
+
+**Default**: 15m
+
+#### --cache-db-wait-time=DURATION ####
+
+Only one process can have the DB open at any one time, so rclone waits
+for this duration for the DB to become available before it gives an
+error.
+
+If you set it to 0 then it will wait forever.
+
+**Default**: 1s
+
Crypt
----------------------------------------
@@ -5885,7 +7053,7 @@ Off
Standard
* file names encrypted
- * file names can't be as long (~156 characters)
+ * file names can't be as long (~143 characters)
* can use sub paths and copy single files
* directory structure visible
* identical files names will have identical uploaded names
@@ -5935,7 +7103,7 @@ False
Only encrypts file names, skips directory names
Example:
-`1/12/123/txt` is encrypted to
+`1/12/123.txt` is encrypted to
`1/12/qgm4avr35m5loi1th53ato71v0`
@@ -6601,39 +7769,34 @@ n/r/c/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
- 1 / Amazon Drive
- \ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
- \ "s3"
- 3 / Backblaze B2
- \ "b2"
- 4 / Dropbox
- \ "dropbox"
- 5 / Encrypt/Decrypt a remote
- \ "crypt"
- 6 / FTP Connection
- \ "ftp"
- 7 / Google Cloud Storage (this is not Google Drive)
- \ "google cloud storage"
- 8 / Google Drive
+[snip]
+10 / Google Drive
\ "drive"
- 9 / Hubic
- \ "hubic"
-10 / Local Disk
- \ "local"
-11 / Microsoft OneDrive
- \ "onedrive"
-12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
- \ "swift"
-13 / SSH/SFTP Connection
- \ "sftp"
-14 / Yandex Disk
- \ "yandex"
-Storage> 8
+[snip]
+Storage> drive
Google Application Client Id - leave blank normally.
client_id>
Google Application Client Secret - leave blank normally.
client_secret>
+Scope that rclone should use when requesting access from drive.
+Choose a number from below, or type in your own value
+ 1 / Full access all files, excluding Application Data Folder.
+ \ "drive"
+ 2 / Read-only access to file metadata and file contents.
+ \ "drive.readonly"
+ / Access to files created by rclone only.
+ 3 | These are visible in the drive website.
+ | File authorization is revoked when the user deauthorizes the app.
+ \ "drive.file"
+ / Allows read and write access to the Application Data folder.
+ 4 | This is not visible in the drive website.
+ \ "drive.appfolder"
+ / Allows read-only access to file metadata but
+ 5 | does not allow any access to read or download file content.
+ \ "drive.metadata.readonly"
+scope> 1
+ID of the root folder - leave blank normally. Fill in to access "Computers" folders. (see docs).
+root_folder_id>
Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login.
service_account_file>
Remote config
@@ -6653,9 +7816,12 @@ n) No
y/n> n
--------------------
[remote]
-client_id =
-client_secret =
-token = {"AccessToken":"xxxx.x.xxxxx_xxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"1/xxxxxxxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxx","Expiry":"2014-03-16T13:57:58.955387075Z","Extra":null}
+client_id =
+client_secret =
+scope = drive
+root_folder_id =
+service_account_file =
+token = {"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"2014-03-16T13:57:58.955387075Z"}
--------------------
y) Yes this is OK
e) Edit this remote
@@ -6684,6 +7850,85 @@ To copy a local directory to a drive directory called backup
rclone copy /home/source remote:backup
+### Scopes ###
+
+Rclone allows you to select which scope you would like rclone to
+use. This changes what type of token is granted to rclone. The
+scopes are [defined
+here](https://developers.google.com/drive/v3/web/about-auth).
+
+The scopes are:
+
+#### drive ####
+
+This is the default scope and allows full access to all files, except
+for the Application Data Folder (see below).
+
+Choose this one if you aren't sure.
+
+#### drive.readonly ####
+
+This allows read only access to all files. Files may be listed and
+downloaded but not uploaded, renamed or deleted.
+
+#### drive.file ####
+
+With this scope rclone can read/view/modify only those files and
+folders it creates.
+
+So if you uploaded files to drive via the web interface (or any other
+means) they will not be visible to rclone.
+
+This can be useful if you are using rclone to backup data and you want
+to be sure confidential data on your drive is not visible to rclone.
+
+Files created with this scope are visible in the web interface.
+
+#### drive.appfolder ####
+
+This gives rclone its own private area to store files. Rclone will
+not be able to see any other files on your drive and you won't be able
+to see rclone's files from the web interface either.
+
+#### drive.metadata.readonly ####
+
+This allows read only access to file names only. It does not allow
+rclone to download or upload data, or rename or delete files or
+directories.
+
+### Root folder ID ###
+
+You can set the `root_folder_id` for rclone. This is the directory
+(identified by its `Folder ID`) that rclone considers to be the root
+of your drive.
+
+Normally you will leave this blank and rclone will determine the
+correct root to use itself.
+
+However you can set this to restrict rclone to a specific folder
+hierarchy or to access data within the "Computers" tab on the drive
+web interface (where files from Google's Backup and Sync desktop
+program go).
+
+In order to do this you will have to find the `Folder ID` of the
+directory you wish rclone to display. This will be the last segment
+of the URL when you open the relevant folder in the drive web
+interface.
+
+So if the folder you want rclone to use has a URL which looks like
+`https://drive.google.com/drive/folders/1XyfxxxxxxxxxxxxxxxxxxxxxxxxxKHCh`
+in the browser, then you use `1XyfxxxxxxxxxxxxxxxxxxxxxxxxxKHCh` as
+the `root_folder_id` in the config.
+
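+For example, with the folder ID above, the drive section of your config
+file might look something like this:
+
+```
+[remote]
+type = drive
+scope = drive
+root_folder_id = 1XyfxxxxxxxxxxxxxxxxxxxxxxxxxKHCh
+```
+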
+**NB** folders under the "Computers" tab seem to be read only (drive
+gives a 500 error) when using rclone.
+
+There doesn't appear to be an API to discover the folder IDs of the
+"Computers" tab - please contact us if you know otherwise!
+
+Note also that rclone can't access any data under the "Backups" tab on
+the google drive web interface yet.
+
### Service Account support ###
You can set up rclone with Google Drive in an unattended mode,
@@ -6691,17 +7936,77 @@ i.e. not tied to a specific end-user Google account. This is useful
when you want to synchronise files onto machines that don't have
actively logged-in users, for example build machines.
-To create a service account and obtain its credentials, go to the
-[Google Developer Console](https://console.developers.google.com) and
-use the "Create Credentials" button. After creating an account, a JSON
-file containing the Service Account's credentials will be downloaded
-onto your machine. These credentials are what rclone will use for
-authentication.
-
To use a Service Account instead of OAuth2 token flow, enter the path
to your Service Account credentials at the `service_account_file`
-prompt and rclone won't use the browser based authentication
-flow.
+prompt during `rclone config` and rclone won't use the browser based
+authentication flow.
+
+#### Use case - Google Apps/G-suite account and individual Drive ####
+
+Let's say that you are the administrator of a Google Apps (old) or
+G-suite account.
+The goal is to store data on the Drive account of an individual who IS
+a member of the domain.
+We'll call the domain **example.com**, and the user
+**foo@example.com**.
+
+There are a few steps we need to go through to accomplish this:
+
+##### 1. Create a service account for example.com #####
+ - To create a service account and obtain its credentials, go to the
+[Google Developer Console](https://console.developers.google.com).
+ - You must have a project - create one if you don't.
+ - Then go to "IAM & admin" -> "Service Accounts".
+ - Use the "Create Credentials" button. Fill in "Service account name"
+with something that identifies your client. "Role" can be empty.
+ - Tick "Furnish a new private key" - select "Key type JSON".
+ - Tick "Enable G Suite Domain-wide Delegation". This option makes
+"impersonation" possible, as documented here:
+[Delegating domain-wide authority to the service account](https://developers.google.com/identity/protocols/OAuth2ServiceAccount#delegatingauthority)
+ - These credentials are what rclone will use for authentication.
+If you ever need to remove access, press the "Delete service
+account key" button.
+
+##### 2. Allowing API access to example.com Google Drive #####
+ - Go to example.com's admin console
+ - Go into "Security" (or use the search bar)
+ - Select "Show more" and then "Advanced settings"
+ - Select "Manage API client access" in the "Authentication" section
+ - In the "Client Name" field enter the service account's
+"Client ID" - this can be found in the Developer Console under
+"IAM & Admin" -> "Service Accounts", then "View Client ID" for
+the newly created service account.
+It is a ~21 character numerical string.
+ - In the next field, "One or More API Scopes", enter
+`https://www.googleapis.com/auth/drive`
+to grant access to Google Drive specifically.
+
+##### 3. Configure rclone, assuming a new install #####
+
+```
+rclone config
+
+n/s/q> n # New
+name> gdrive # "gdrive" is an example name
+Storage> # Select the number shown for Google Drive
+client_id> # Can be left blank
+client_secret> # Can be left blank
+scope> # Select your scope, 1 for example
+root_folder_id> # Can be left blank
+service_account_file> /home/foo/myJSONfile.json # This is where the JSON file goes!
+y/n> # Auto config, y
+
+```
+
+##### 4. Verify that it's working #####
+ - `rclone -v --drive-impersonate foo@example.com lsf gdrive:backup`
+ - The arguments do:
+ - `-v` - verbose logging
+ - `--drive-impersonate foo@example.com` - this is what does
+the magic, pretending to be user foo.
+ - `lsf` - list files in a parsing friendly way
+ - `gdrive:backup` - use the remote called gdrive, work in
+the folder named backup.
### Team drives ###
@@ -6835,13 +8140,22 @@ Here are the possible extensions with their corresponding mime types.
| xlsx | application/vnd.openxmlformats-officedocument.spreadsheetml.sheet | Microsoft Office Spreadsheet |
| zip | application/zip | A ZIP file of HTML, Images CSS |
+#### --drive-impersonate user ####
+
+When using a service account, this instructs rclone to impersonate the user passed in.
+
#### --drive-list-chunk int ####
Size of listing chunk 100-1000. 0 to disable. (default 1000)
#### --drive-shared-with-me ####
-Only show files that are shared with me
+Instructs rclone to operate on your "Shared with me" folder (where
+Google Drive lets you access the files and folders others have shared
+with you).
+
+This works both with the "list" (lsd, lsl, etc) and the "copy"
+commands (copy, sync, etc), and with all other commands too.
#### --drive-skip-gdocs ####
@@ -6862,6 +8176,27 @@ Controls whether files are sent to the trash or deleted
permanently. Defaults to true, namely sending files to the trash. Use
`--drive-use-trash=false` to delete files permanently instead.
+#### --drive-use-created-date ####
+
+Use the file creation date in place of the modification date. Defaults
+to false.
+
+Useful when downloading data and you want the creation date used in
+place of the last modified date.
+
+**WARNING**: This flag may have some unexpected consequences.
+
+When uploading to your drive, all files will be overwritten unless they
+haven't been modified since their creation. The inverse will occur
+when downloading. This side effect can be avoided by using the
+`--checksum` flag.
+
+This feature was implemented to retain the capture date of photos as
+recorded by Google Photos. You will first need to check the "Create a
+Google Photos folder" option in your Google Drive settings. You can
+then copy or move the photos locally, using the date the image was
+taken (created) as the modification date.
+
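+For example, to copy your Google Photos folder locally with the capture
+date preserved (the paths here are only examples):
+
+```
+rclone copy --drive-use-created-date --checksum "remote:Google Photos" /path/to/photos
+```
+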
### Limitations ###
Drive has quite a lot of rate limiting. This causes rclone to be
@@ -6874,6 +8209,21 @@ see User rate limit exceeded errors, wait at least 24 hours and retry.
You can disable server side copies with `--disable copy` to download
and upload the files if you prefer.
+#### Limitations of Google Docs ####
+
+Google docs will appear as size -1 in `rclone ls` and as size 0 in
+anything which uses the VFS layer, eg `rclone mount`, `rclone serve`.
+
+This is because rclone can't find out the size of the Google docs
+without downloading them.
+
+Google docs will transfer correctly with `rclone sync`, `rclone copy`
+etc as rclone knows to ignore the size when doing the transfer.
+
+However an unfortunate consequence of this is that you can't download
+Google docs using `rclone mount` - you will get a 0 sized file. If
+you try again the doc may gain its correct size and be downloadable.
+
### Duplicated files ###
Sometimes, for no reason I've been able to track down, drive will
@@ -6890,23 +8240,9 @@ Android duplicates files on drive sometimes.
### Rclone appears to be re-copying files it shouldn't ###
-There are two possible reasons for rclone to recopy files which
-haven't changed to Google Drive.
-
-The first is the duplicated file issue above - run `rclone dedupe` and
-check your logs for duplicate object or directory messages.
-
-The second is that sometimes Google reports different sizes for the
-Google Docs exports which will cause rclone to re-download Google Docs
-for no apparent reason. `--ignore-size` is a not very satisfactory
-work-around for this if it is causing you a lot of problems.
-
-### Google docs downloads sometimes fail with "Failed to copy: read X bytes expecting Y" ###
-
-This is the same problem as above. Google reports the google doc is
-one size, but rclone downloads a different size. Work-around with the
-`--ignore-size` flag or wait for rclone to retry the download which it
-will.
+The most likely cause of this is the duplicated file issue above - run
+`rclone dedupe` and check your logs for duplicate object or directory
+messages.
### Making your own client_id ###
@@ -7516,11 +8852,6 @@ system.
Above this size files will be chunked - must be multiple of 320k. The
default is 10MB. Note that the chunks will be buffered into memory.
-#### --onedrive-upload-cutoff=SIZE ####
-
-Cutoff for switching to chunked upload - must be <= 100MB. The default
-is 10MB.
-
### Limitations ###
Note that OneDrive is case insensitive so you can't have a
@@ -7534,6 +8865,31 @@ in it will be mapped to `?` instead.
The largest allowed file size is 10GiB (10,737,418,240 bytes).
+### Versioning issue ###
+
+Every change in OneDrive causes the service to create a new version.
+This counts against a user's quota.
+For example changing the modification time of a file creates a second
+version, so the file is using twice the space.
+
+`copy` is the only rclone command affected by this, as we copy
+the file and then afterwards set the modification time to match the
+source file.
+
+User [Weropol](https://github.com/Weropol) has found a method to disable
+versioning on OneDrive
+
+1. Open the settings menu by clicking on the gear symbol at the top of the OneDrive Business page.
+2. Click Site settings.
+3. Once on the Site settings page, navigate to Site Administration > Site libraries and lists.
+4. Click Customize "Documents".
+5. Click General Settings > Versioning Settings.
+6. Under Document Version History select the option No versioning.
+Note: This will disable the creation of new file versions, but will not remove any previous versions. Your documents are safe.
+7. Apply the changes by clicking OK.
+8. Use rclone to upload or modify files. (I also use the `--no-update-modtime` flag)
+9. Restore the versioning settings after using rclone. (Optional)
+
QingStor
---------------------------------------
@@ -8223,6 +9579,9 @@ instance `/home/$USER/.ssh/id_rsa`.
If you don't specify `pass` or `key_file` then rclone will attempt to
contact an ssh-agent.
+If you set the `--sftp-ask-password` option, rclone will prompt for a
+password when needed and no password has been configured.
+
### ssh-agent on macOS ###
Note that there seem to be various problems with using an ssh-agent on
@@ -8237,16 +9596,33 @@ And then at the end of the session
These commands can be used in scripts of course.
+### Specific options ###
+
+Here are the command line options specific to this remote.
+
+#### --sftp-ask-password ####
+
+Ask for the SFTP password if needed when no password has been configured.
+
### Modified time ###
Modified times are stored on the server to 1 second precision.
Modified times are used in syncing and are fully supported.
+Some SFTP servers disable setting/modifying the file modification time after
+upload (for example, certain configurations of ProFTPd with mod_sftp). If you
+are using one of these servers, you can set the option `set_modtime = false` in
+your rclone backend configuration to disable this behaviour.
+
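+For example, an SFTP remote with this option set might look like this in
+your config file (the host and user are examples only):
+
+```
+[remote]
+type = sftp
+host = example.com
+user = sftpuser
+set_modtime = false
+```
+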
### Limitations ###
SFTP supports checksums if the same login has shell access and `md5sum`
or `sha1sum` as well as `echo` are in the remote's PATH.
+This remote check can be disabled by setting the configuration option
+`disable_hashcheck`. This may be required if you're connecting to SFTP servers
+which are not under your control, and to which the execution of remote commands
+is prohibited.
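+
+For example (the host and user are examples only):
+
+```
+[remote]
+type = sftp
+host = example.com
+user = sftpuser
+disable_hashcheck = true
+```
+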
The only ssh agent supported under Windows is Putty's pageant.
@@ -8728,6 +10104,152 @@ points, as you explicitly acknowledge that they should be skipped.
Changelog
---------
+ * v1.40 - 2018-03-19
+ * New backends
+ * Alias backend to create aliases for existing remote names (Fabian Möller)
+ * New commands
+ * `lsf`: list for parsing purposes (Jakub Tasiemski)
+ * by default this is a simple non recursive list of files and directories
+ * it can be configured to add more info in an easy to parse way
+ * `serve restic`: for serving a remote as a Restic REST endpoint
+ * This enables restic to use any backends that rclone can access
+ * Thanks Alexander Neumann for help, patches and review
+ * `rc`: enable the remote control of a running rclone
+ * The running rclone must be started with --rc and related flags.
+ * Currently there is support for bwlimit, and flushing for mount and cache.
+ * New Features
+ * `--max-delete` flag to add a delete threshold (Bjørn Erik Pedersen)
+ * All backends now support RangeOption for ranged Open
+ * `cat`: Use RangeOption for limited fetches to make more efficient
+ * `cryptcheck`: make reading of nonce more efficient with RangeOption
+ * serve http/webdav/restic
+ * support SSL/TLS
+ * add `--user` `--pass` and `--htpasswd` for authentication
+ * `copy`/`move`: detect file size change during copy/move and abort transfer (ishuah)
+ * `cryptdecode`: added option to return encrypted file names. (ishuah)
+ * `lsjson`: add `--encrypted` to show encrypted name (Jakub Tasiemski)
+ * Add `--stats-file-name-length` to specify the printed file name length for stats (Will Gunn)
+ * Compile
+ * Code base was shuffled and factored
+ * backends moved into a backend directory
+ * large packages split up
+ * See the CONTRIBUTING.md doc for info as to what lives where now
+ * Update to using go1.10 as the default go version
+ * Implement daily [full integration tests](https://pub.rclone.org/integration-tests/)
+ * Release
+ * Include a source tarball and sign it and the binaries
+ * Sign the git tags as part of the release process
+ * Add .deb and .rpm packages as part of the build
+ * Make a beta release for all branches on the main repo (but not pull requests)
+ * Bug Fixes
+ * config: fixes errors on non existing config by loading config file only on first access
+ * config: retry saving the config after failure (Mateusz)
+ * sync: when using `--backup-dir` don't delete files if we can't set their modtime
+ * this fixes odd behaviour with Dropbox and `--backup-dir`
+ * fshttp: fix idle timeouts for HTTP connections
+ * `serve http`: fix serving files with : in - fixes
+ * Fix `--exclude-if-present` to ignore directories which it doesn't have permission for (Iakov Davydov)
+ * Make accounting work properly with crypt and b2
+ * remove `--no-traverse` flag because it is obsolete
+ * Mount
+ * Add `--attr-timeout` flag to control attribute caching in kernel
+ * this now defaults to 0 which is correct but less efficient
+ * see [the mount docs](/commands/rclone_mount/#attribute-caching) for more info
+ * Add `--daemon` flag to allow mount to run in the background (ishuah)
+ * Fix: Return ENOSYS rather than EIO on attempted link
+ * This fixes FileZilla accessing an rclone mount served over sftp.
+ * Fix setting modtime twice
+ * Mount tests now run on CI for Linux (mount & cmount)/Mac/Windows
+ * Many bugs fixed in the VFS layer - see below
+ * VFS
+ * Many fixes for `--vfs-cache-mode` writes and above
+ * Update cached copy if we know it has changed (fixes stale data)
+ * Clean path names before using them in the cache
+ * Disable cache cleaner if `--vfs-cache-poll-interval=0`
+ * Fill and clean the cache immediately on startup
+ * Fix Windows opening every file when it stats the file
+ * Fix applying modtime for an open Write Handle
+ * Fix creation of files when truncating
+ * Write 0 bytes when flushing unwritten handles to avoid race conditions in FUSE
+ * Downgrade "poll-interval is not supported" message to Info
+ * Make OpenFile and friends return EINVAL if O_RDONLY and O_TRUNC
+ * Local
+ * Downgrade "invalid cross-device link: trying copy" to debug
+ * Make DirMove return fs.ErrorCantDirMove to allow fallback to Copy for cross device
+ * Fix race conditions updating the hashes
+ * Cache
+ * Add support for polling - cache will update when remote changes on supported backends
+ * Reduce log level for Plex api
+ * Fix dir cache issue
+ * Implement `--cache-db-wait-time` flag
+ * Improve efficiency with RangeOption and RangeSeek
+ * Fix dirmove with temp fs enabled
+ * Notify vfs when using temp fs
+ * Offline uploading
+ * Remote control support for path flushing
+ * Amazon cloud drive
+ * Rclone no longer has any working keys - disable integration tests
+ * Implement DirChangeNotify to notify cache/vfs/mount of changes
+ * Azureblob
+ * Don't check for bucket/container presence if listing was OK
+ * this makes rclone do one less request per invocation
+ * Improve accounting for chunked uploads
+ * Backblaze B2
+ * Don't check for bucket/container presence if listing was OK
+ * this makes rclone do one less request per invocation
+ * Box
+ * Improve accounting for chunked uploads
+ * Dropbox
+ * Fix custom oauth client parameters
+ * Google Cloud Storage
+ * Don't check for bucket/container presence if listing was OK
+ * this makes rclone do one less request per invocation
+ * Google Drive
+ * Migrate to api v3 (Fabian Möller)
+ * Add scope configuration and root folder selection
+ * Add `--drive-impersonate` for service accounts
+ * thanks to everyone who tested, explored and contributed docs
+ * Add `--drive-use-created-date` to use created date as modified date (nbuchanan)
+ * Request the export formats only when required
+ * This makes rclone quicker when there are no google docs
+ * Fix finding paths with latin1 chars (a workaround for a drive bug)
+ * Fix copying of a single Google doc file
+ * Fix `--drive-auth-owner-only` to look in all directories
+ * HTTP
+ * Fix handling of directories with & in
+ * Onedrive
+ * Removed upload cutoff and always do session uploads
+ * this stops the creation of multiple versions on business onedrive
+ * Overwrite object size value with real size when reading file. (Victor)
+ * this fixes oddities when onedrive misreports the size of images
+ * Pcloud
+ * Remove unused chunked upload flag and code
+ * Qingstor
+ * Don't check for bucket/container presence if listing was OK
+ * this makes rclone do one less request per invocation
+ * S3
+ * Support hashes for multipart files (Chris Redekop)
+ * Initial support for IBM COS (S3) (Giri Badanahatti)
+ * Update docs to discourage use of v2 auth with CEPH and others
+ * Don't check for bucket/container presence if listing was OK
+ * this makes rclone do one less request per invocation
+ * Fix server side copy and set modtime on files with + in
+ * SFTP
+ * Add option to disable remote hash check command execution (Jon Fautley)
+ * Add `--sftp-ask-password` flag to prompt for password when needed (Leo R. Lundgren)
+ * Add `set_modtime` configuration option
+ * Fix following of symlinks
+ * Fix reading config file outside of Fs setup
+ * Fix reading $USER in username fallback not $HOME
+ * Fix running under crontab - Use correct OS way of reading username
+ * Swift
+ * Fix refresh of authentication token
+ * in v1.39 a bug was introduced which ignored new tokens - this fixes it
+ * Fix extra HEAD transaction when uploading a new file
+ * Don't check for bucket/container presence if listing was OK
+ * this makes rclone do one less request per invocation
+ * Webdav
+ * Add new time formats to support mydrive.ch and others
* v1.39 - 2017-12-23
* New backends
* WebDAV
@@ -9721,6 +11243,9 @@ curl -o /etc/ssl/certs/ca-certificates.crt https://raw.githubusercontent.com/bag
ntpclient -s -h pool.ntp.org
```
+The two environment variables `SSL_CERT_FILE` and `SSL_CERT_DIR`, mentioned in the [x509 package](https://godoc.org/crypto/x509),
+offer an additional way to supply the SSL root certificates.
+
Note that you may need to add the `--insecure` option to the `curl` command line if it doesn't work without.
```
@@ -9758,6 +11283,10 @@ If you are using `systemd-resolved` (default on Arch Linux), ensure it
is at version 233 or higher. Previous releases contain a bug which
causes not all domains to be resolved properly.
+Additionally, the `GODEBUG=netdns=` environment variable can be used to
+influence the Go resolver decision, which can also help resolve certain
+issues with DNS resolution. See the [name resolution section in the go docs](https://golang.org/pkg/net/#hdr-Name_Resolution).
+
License
-------
@@ -9863,7 +11392,7 @@ Contributors
* Steven Lu
* Sjur Fredriksen
* Ruwbin
- * Fabian Möller
+ * Fabian Möller
* Edward Q. Bridges
* Vasiliy Tolstov
* Harshavardhana
@@ -9873,7 +11402,7 @@ Contributors
* John Papandriopoulos
* Zhiming Wang
* Andy Pilate
- * Oliver Heyme
+ * Oliver Heyme
* wuyu
* Andrei Dragomir
* Christian Brüggemann
@@ -9897,8 +11426,7 @@ Contributors
* Pierre Carlson
* Ernest Borowski
* Remus Bunduc
- * Iakov Davydov
- * Fabian Möller
+ * Iakov Davydov
* Jakub Tasiemski
* David Minor
* Tim Cooijmans
@@ -9908,6 +11436,24 @@ Contributors
* Jon Fautley
* lewapm <32110057+lewapm@users.noreply.github.com>
* Yassine Imounachen
+ * Chris Redekop
+ * Jon Fautley
+ * Will Gunn
+ * Lucas Bremgartner
+ * Jody Frankowski
+ * Andreas Roussos
+ * nbuchanan
+ * Durval Menezes
+ * Victor
+ * Mateusz
+ * Daniel Loader
+ * David0rk
+ * Alexander Neumann
+ * Giri Badanahatti
+ * Leo R. Lundgren
+ * wolfv
+ * Dave Pedu
+ * Stefan Lindblom
# Contact the rclone project #
diff --git a/MANUAL.txt b/MANUAL.txt
index 2ce9d0808..b340e2a26 100644
--- a/MANUAL.txt
+++ b/MANUAL.txt
@@ -1,6 +1,6 @@
rclone(1) User Manual
Nick Craig-Wood
-Dec 23, 2017
+Mar 19, 2018
@@ -25,6 +25,7 @@ from:
- Google Drive
- HTTP
- Hubic
+- IBM COS S3
- Memset Memstore
- Microsoft Azure Blob Storage
- Microsoft OneDrive
@@ -87,7 +88,7 @@ rclone -h.
Script installation
-To install rclone on Linux/MacOs/BSD systems, run:
+To install rclone on Linux/macOS/BSD systems, run:
curl https://rclone.org/install.sh | sudo bash
@@ -174,6 +175,7 @@ Instructions
into your local roles-directory
2. add the role to the hosts you want rclone installed to:
+
- hosts: rclone-hosts
roles:
- rclone
@@ -193,6 +195,7 @@ option:
See the following for detailed instructions for
+- Alias
- Amazon Drive
- Amazon S3
- Backblaze B2
@@ -299,9 +302,6 @@ written a trailing / - meaning "copy the contents of this directory".
This applies to all commands and whether you are talking about the
source or destination.
-See the --no-traverse option for controlling whether rclone lists the
-destination directory or not.
-
rclone copy source:path dest:path [flags]
Options
@@ -480,11 +480,31 @@ Options
rclone ls
-List all the objects in the path with size and path.
+List the objects in the path with size and path.
Synopsis
-List all the objects in the path with size and path.
+Lists the objects in the source path to standard output in a human
+readable format with size and path. Recurses by default.
+
+Any of the filtering options can be applied to this command.
+
+There are several related list commands
+
+- ls to list size and path of objects only
+- lsl to list modification time, size and path of objects only
+- lsd to list directories only
+- lsf to list objects and directories in easy to parse format
+- lsjson to list objects and directories in JSON format
+
+ls,lsl,lsd are designed to be human readable. lsf is designed to be
+human and machine readable. lsjson is designed to be machine readable.
+
+Note that ls,lsl,lsd all recurse by default - use "--max-depth 1" to
+stop the recursion.
+
+The other list commands lsf,lsjson do not recurse by default - use "-R"
+to make them recurse.
rclone ls remote:path [flags]
@@ -499,7 +519,27 @@ List all directories/containers/buckets in the path.
Synopsis
-List all directories/containers/buckets in the path.
+Lists the directories in the source path to standard output. Recurses by
+default.
+
+Any of the filtering options can be applied to this command.
+
+There are several related list commands
+
+- ls to list size and path of objects only
+- lsl to list modification time, size and path of objects only
+- lsd to list directories only
+- lsf to list objects and directories in easy to parse format
+- lsjson to list objects and directories in JSON format
+
+ls,lsl,lsd are designed to be human readable. lsf is designed to be
+human and machine readable. lsjson is designed to be machine readable.
+
+Note that ls,lsl,lsd all recurse by default - use "--max-depth 1" to
+stop the recursion.
+
+The other list commands lsf,lsjson do not recurse by default - use "-R"
+to make them recurse.
rclone lsd remote:path [flags]
@@ -510,11 +550,32 @@ Options
rclone lsl
-List all the objects path with modification time, size and path.
+List the objects in path with modification time, size and path.
Synopsis
-List all the objects path with modification time, size and path.
+Lists the objects in the source path to standard output in a human
+readable format with modification time, size and path. Recurses by
+default.
+
+Any of the filtering options can be applied to this command.
+
+There are several related list commands
+
+- ls to list size and path of objects only
+- lsl to list modification time, size and path of objects only
+- lsd to list directories only
+- lsf to list objects and directories in easy to parse format
+- lsjson to list objects and directories in JSON format
+
+ls,lsl,lsd are designed to be human readable. lsf is designed to be
+human and machine readable. lsjson is designed to be machine readable.
+
+Note that ls,lsl,lsd all recurse by default - use "--max-depth 1" to
+stop the recursion.
+
+The other list commands lsf,lsjson do not recurse by default - use "-R"
+to make them recurse.
rclone lsl remote:path [flags]
@@ -673,14 +734,14 @@ Dedupe can be run non interactively using the --dedupe-mode flag or by
using an extra parameter with the same value
- --dedupe-mode interactive - interactive as above.
-- --dedupe-mode skip - removes identical files then skips
- anything left.
-- --dedupe-mode first - removes identical files then keeps the
- first one.
-- --dedupe-mode newest - removes identical files then keeps the
- newest one.
-- --dedupe-mode oldest - removes identical files then keeps the
- oldest one.
+- --dedupe-mode skip - removes identical files then skips anything
+ left.
+- --dedupe-mode first - removes identical files then keeps the first
+ one.
+- --dedupe-mode newest - removes identical files then keeps the newest
+ one.
+- --dedupe-mode oldest - removes identical files then keeps the oldest
+ one.
- --dedupe-mode rename - removes identical files then renames the rest
to be different.
@@ -1006,15 +1067,20 @@ Synopsis
rclone cryptdecode returns unencrypted file names when provided with a
list of encrypted file names. List limit is 10 items.
+If you supply the --reverse flag, it will return encrypted file names.
+
use it like this
rclone cryptdecode encryptedremote: encryptedfilename1 encryptedfilename2
+ rclone cryptdecode --reverse encryptedremote: filename1 filename2
+
rclone cryptdecode encryptedremote: encryptedfilename [flags]
Options
- -h, --help help for cryptdecode
+ -h, --help help for cryptdecode
+ --reverse Reverse cryptdecode, encrypts filenames
rclone dbhashsum
@@ -1137,6 +1203,77 @@ Options
-l, --long Show the type as well as names.
+rclone lsf
+
+List directories and objects in remote:path formatted for parsing
+
+Synopsis
+
+List the contents of the source path (directories and objects) to
+standard output in a form which is easy to parse by scripts. By default
+this will just be the names of the objects and directories, one per
+line. The directories will have a / suffix.
+
+Use the --format option to control what gets listed. By default this is
+just the path, but you can use these parameters to control the output:
+
+ p - path
+ s - size
+ t - modification time
+ h - hash
+
+So if you wanted the path, size and modification time, you would use
+--format "pst", or maybe --format "tsp" to put the path last.
+
+If you specify "h" in the format you will get the MD5 hash by default,
+use the "--hash" flag to change which hash you want. Note that this can
+be returned as an empty string if it isn't available on the object (and
+for directories), "ERROR" if there was an error reading it from the
+object and "UNSUPPORTED" if that object does not support that hash type.
+
+For example to emulate the md5sum command you can use
+
+ rclone lsf -R --hash MD5 --format hp --separator " " --files-only .
+
+(Though "rclone md5sum ." is an easier way of typing this.)
+
+By default the separator is ";", but this can be changed with the
+--separator flag. Note that separators aren't escaped in the path, so
+putting it last is a good strategy.
+
+Any of the filtering options can be applied to this command.
+
+There are several related list commands
+
+- ls to list size and path of objects only
+- lsl to list modification time, size and path of objects only
+- lsd to list directories only
+- lsf to list objects and directories in easy to parse format
+- lsjson to list objects and directories in JSON format
+
+ls,lsl,lsd are designed to be human readable. lsf is designed to be
+human and machine readable. lsjson is designed to be machine readable.
+
+Note that ls,lsl,lsd all recurse by default - use "--max-depth 1" to
+stop the recursion.
+
+The other list commands lsf,lsjson do not recurse by default - use "-R"
+to make them recurse.
+
+ rclone lsf remote:path [flags]
+
+Options
+
+ -d, --dir-slash Append a slash to directory names. (default true)
+ --dirs-only Only list directories.
+ --files-only Only list files.
+ -F, --format string Output format - see help for details (default "p")
+ --hash h Use this hash when h is used in the format MD5|SHA-1|DropboxHash (default "MD5")
+ -h, --help help for lsf
+ -R, --recursive Recurse into the listing.
+ -s, --separator string Separator for the items in the format. (default ";")
+
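+As a sketch of how such --format/--separator output might be consumed by
+a script, here is a minimal Python example. The sample lines are made up
+for illustration (they are not real listing output), and assume
+--format "pst" with the default ";" separator:

```python
# Parse illustrative `rclone lsf --format pst` style lines into records.
# With the path first, a ";" inside a path would break this naive split -
# which is why the docs suggest putting the path last in the format.
sample = """\
docs/;0;2018-03-19 10:00:00
docs/manual.txt;52428;2018-03-19 10:05:00
backup.tar;1048576;2018-03-18 22:15:00
"""

entries = []
for line in sample.splitlines():
    path, size, mtime = line.split(";")
    entries.append({
        "path": path,
        "is_dir": path.endswith("/"),  # directories carry a / suffix
        "size": int(size),
        "mtime": mtime,
    })

for e in entries:
    print(e["path"], e["size"], e["is_dir"])
```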
+
rclone lsjson
List directories and objects in the path in JSON format.
@@ -1151,22 +1288,50 @@ The output is an array of Items, where each Item looks like this
"MD5" : "b1946ac92492d2347c6235b4d2611184", "DropboxHash" :
"ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc" },
"IsDir" : false, "ModTime" : "2017-05-31T16:15:57.034468261+01:00",
-"Name" : "file.txt", "Path" : "full/path/goes/here/file.txt", "Size" : 6
-}
+"Name" : "file.txt", "Encrypted" : "v0qpsdq8anpci8n929v3uu9338", "Path"
+: "full/path/goes/here/file.txt", "Size" : 6 }
-If --hash is not specified the the Hashes property won't be emitted.
+If --hash is not specified the Hashes property won't be emitted.
If --no-modtime is specified then ModTime will be blank.
+If --encrypted is not specified the Encrypted property won't be emitted.
+
+The Path field will only show folders below the remote path being
+listed. If "remote:path" contains the file "subfolder/file.txt", the
+Path for "file.txt" will be "subfolder/file.txt", not
+"remote:path/subfolder/file.txt". When used without --recursive the Path
+will always be the same as Name.
+
The time is in RFC3339 format with nanosecond precision.
The whole output can be processed as a JSON blob, or alternatively it
can be processed line by line as each item is written one to a line.
+Any of the filtering options can be applied to this command.
+
+There are several related list commands
+
+- ls to list size and path of objects only
+- lsl to list modification time, size and path of objects only
+- lsd to list directories only
+- lsf to list objects and directories in easy to parse format
+- lsjson to list objects and directories in JSON format
+
+ls,lsl,lsd are designed to be human readable. lsf is designed to be
+human and machine readable. lsjson is designed to be machine readable.
+
+Note that ls,lsl,lsd all recurse by default - use "--max-depth 1" to
+stop the recursion.
+
+The other list commands lsf,lsjson do not recurse by default - use "-R"
+to make them recurse.
+
rclone lsjson remote:path [flags]
Options
+ -M, --encrypted Show the encrypted names.
--hash Include hashes in the output (may take longer).
-h, --help help for lsjson
--no-modtime Don't read the modification time (can speed things up).
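+As a sketch of the two processing styles described above (whole JSON
+blob, or line by line), assuming illustrative sample output rather than
+a live remote - the field values here are made up:

```python
import json

# Illustrative lsjson-style output: a JSON array with one item per line.
# Real output would come from `rclone lsjson remote:path`.
sample = '''[
{"Path":"file.txt","Name":"file.txt","Size":6,"IsDir":false,"ModTime":"2017-05-31T16:15:57.034468261+01:00"},
{"Path":"subdir","Name":"subdir","Size":-1,"IsDir":true,"ModTime":"2017-05-31T16:15:57+01:00"}
]'''

# Style 1: parse the whole output as one JSON blob.
items = json.loads(sample)

# Style 2: parse line by line, skipping the enclosing brackets and
# stripping the trailing comma from each item line.
line_items = []
for line in sample.splitlines():
    line = line.strip().rstrip(",")
    if line in ("[", "]", ""):
        continue
    line_items.append(json.loads(line))

for item in items:
    print(item["Path"], item["Size"], item["IsDir"])
```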
@@ -1229,13 +1394,16 @@ Administrator), you will not be able to see the new drive.
The easiest way around this is to start the drive from a normal command
prompt. It is also possible to start a drive from the SYSTEM account
(using the WinFsp.Launcher infrastructure) which creates drives
-accessible for everyone on the system.
+accessible for everyone on the system, or alternatively using the nssm
+service manager.
Limitations
-This can only write files seqentially, it can only seek when reading.
-This means that many applications won't work with their files on an
-rclone mount.
+Without the use of "--vfs-cache-mode" this can only write files
+sequentially, it can only seek when reading. This means that many
+applications won't work with their files on an rclone mount without
+"--vfs-cache-mode writes" or "--vfs-cache-mode full". See the File
+Caching section for more info.
The bucket based remotes (eg Swift, S3, Google Compute Storage, B2,
Hubic) won't work from the root - you will need to specify a bucket, or
@@ -1251,9 +1419,23 @@ rclone mount vs rclone sync/copy
File systems expect things to be 100% reliable, whereas cloud storage
systems are a long way from 100% reliable. The rclone sync/copy commands
cope with this with lots of retries. However rclone mount can't use
-retries in the same way without making local copies of the uploads. This
-might happen in the future, but for the moment rclone mount won't do
-that, so will be less reliable than the rclone command.
+retries in the same way without making local copies of the uploads. Look
+at the EXPERIMENTAL file caching for solutions to make mount more
+reliable.
+
+Attribute caching
+
+You can use the flag --attr-timeout to set the time the kernel caches
+the attributes (size, modification time etc) for directory entries.
+
+The default is 0s - no caching - which is recommended for filesystems
+which can change outside the control of the kernel.
+
+If you set it higher ('1s' or '1m' say) then the kernel will call back
+to rclone less often making it more efficient, however there may be
+strange effects when files change on the remote.
+
+This is the same as setting the attr_timeout option in mount.fuse.
Filters
@@ -1282,12 +1464,21 @@ rclone instance is running, you can reset the cache like this:
kill -SIGHUP $(pidof rclone)
+If you configure rclone with a remote control then you can use rclone rc
+to flush the whole directory cache:
+
+ rclone rc vfs/forget
+
+Or individual files or directories:
+
+ rclone rc vfs/forget file=path/to/file dir=path/to/dir
+
File Caching
NB File caching is EXPERIMENTAL - use with care!
These flags control the VFS file caching options. The VFS layer is used
-by rclone mount to make a cloud storage systm work more like a normal
+by rclone mount to make a cloud storage system work more like a normal
file system.
You'll need to enable VFS caching if you want, for example, to read and
@@ -1296,7 +1487,7 @@ write simultaneously to a file. See below for more details.
Note that the VFS cache works in addition to the cache backend and you
may find that you need one or the other or both.
- --vfs-cache-dir string Directory rclone will use for caching.
+ --cache-dir string Directory rclone will use for caching.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
@@ -1359,7 +1550,7 @@ file is opened for read it will be downloaded in its entirety first.
This may be appropriate for your needs, or you may prefer to look at the
cache backend which does a much more sophisticated job of caching,
-including caching directory heirachies and chunks of files.q
+including caching directory hierarchies and chunks of files.
In this mode, unlike the others, when a file is written to the disk, it
will be kept on the disk after it is written to the remote. It will be
@@ -1377,6 +1568,8 @@ Options
--allow-non-empty Allow mounting over a non-empty directory.
--allow-other Allow access to other users.
--allow-root Allow access to root user.
+ --attr-timeout duration Time for which file/directory attributes are cached.
+ --daemon Run mount as a daemon (background mode).
--debug-fuse Debug the FUSE internals - needs -v.
--default-permissions Makes kernel enforce access control based on the file mode.
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
@@ -1491,6 +1684,30 @@ Options
-h, --help help for obscure
+rclone rc
+
+Run a command against a running rclone.
+
+Synopsis
+
+This runs a command against a running rclone. By default it will use the
+address specified by the --rc-addr flag.
+
+Arguments should be passed in as parameter=value.
+
+The result will be returned as a JSON object by default.
+
+Use "rclone rc list" to see a list of all possible commands.
+
+ rclone rc commands parameter [flags]
+
+Options
+
+ -h, --help help for rc
+ --no-output If set don't output the JSON result.
+ --url string URL to connect to rclone remote control. (default "http://localhost:5572/")
+
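+As a sketch of how the parameter=value arguments map onto the remote
+control URL, assuming the default address shown above - this only
+constructs the request target, it does not contact a running rclone,
+and the exact encoding the server accepts may differ:

```python
from urllib.parse import urlencode, urljoin

# The command path is appended to the remote control base URL,
# and parameter=value pairs become the request parameters.
base = "http://localhost:5572/"   # default --rc-addr URL
command = "vfs/forget"            # e.g. flush a path from the dir cache
params = {"file": "path/to/file"}

url = urljoin(base, command)
body = urlencode(params)
print(url, body)
```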
+
rclone rcat
Copies standard input to file on remote.
@@ -1580,10 +1797,6 @@ rclone serve http implements a basic web server to serve the remote over
HTTP. This can be viewed in a web browser or you can make a remote of
type http read from it.
-Use --addr to specify which IP address and port the server should listen
-on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By
-default it only listens on localhost.
-
You can use the filter flags (eg --include, --exclude) to control what
is served.
@@ -1592,6 +1805,55 @@ The server will log errors. Use -v to see access logs.
--bwlimit will be respected for file transfers. Use --stats to control
the stats printing.
+Server options
+
+Use --addr to specify which IP address and port the server should listen
+on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By
+default it only listens on localhost.
+
+If you set --addr to listen on a public or LAN accessible IP address
+then using authentication is advised - see the next section for info.
+
+--server-read-timeout and --server-write-timeout can be used to control
+the timeouts on the server. Note that this is the total time for a
+transfer.
+
+--max-header-bytes controls the maximum number of bytes the server will
+accept in the HTTP header.
+
+Authentication
+
+By default this will serve files without needing a login.
+
+You can either use an htpasswd file which can take lots of users, or set
+a single username and password with the --user and --pass flags.
+
+Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is in
+standard Apache format and supports MD5, SHA1 and BCrypt for basic
+authentication. BCrypt is recommended.
+
+To create an htpasswd file:
+
+ touch htpasswd
+ htpasswd -B htpasswd user
+ htpasswd -B htpasswd anotherUser
+
+The password file can be updated while rclone is running.
+
+Use --realm to set the authentication realm.
+
+SSL/TLS
+
+By default this will serve over http. If you want you can serve over
+https. You will need to supply the --cert and --key flags. If you wish
+to do client side certificate validation then you will need to supply
+--client-ca also.
+
+--cert should be either a PEM encoded certificate or a concatenation
+of that with the CA certificate. --key should be the PEM encoded private
+key and --client-ca should be the PEM encoded client certificate
+authority certificate.
+
Directory Cache
Using the --dir-cache-time flag, you can set how long a directory should
@@ -1606,12 +1868,21 @@ rclone instance is running, you can reset the cache like this:
kill -SIGHUP $(pidof rclone)
+If you configure rclone with a remote control then you can use rclone rc
+to flush the whole directory cache:
+
+ rclone rc vfs/forget
+
+Or individual files or directories:
+
+ rclone rc vfs/forget file=path/to/file dir=path/to/dir
+
File Caching
NB File caching is EXPERIMENTAL - use with care!
These flags control the VFS file caching options. The VFS layer is used
-by rclone mount to make a cloud storage systm work more like a normal
+by rclone mount to make a cloud storage system work more like a normal
file system.
You'll need to enable VFS caching if you want, for example, to read and
@@ -1620,7 +1891,7 @@ write simultaneously to a file. See below for more details.
Note that the VFS cache works in addition to the cache backend and you
may find that you need one or the other or both.
- --vfs-cache-dir string Directory rclone will use for caching.
+ --cache-dir string Directory rclone will use for caching.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
@@ -1683,7 +1954,7 @@ file is opened for read it will be downloaded in its entirety first.
This may be appropriate for your needs, or you may prefer to look at the
cache backend which does a much more sophisticated job of caching,
-including caching directory heirachies and chunks of files.q
+including caching directory hierarchies and chunks of files.
In this mode, unlike the others, when a file is written to the disk, it
will be kept on the disk after it is written to the remote. It will be
@@ -1698,22 +1969,176 @@ If an upload or download fails it will be retried up to
Options
- --addr string IPaddress:Port to bind server to. (default "localhost:8080")
+ --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080")
+ --cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --client-ca string Client certificate authority to verify clients with
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
--gid uint32 Override the gid field set by the filesystem. (default 502)
-h, --help help for http
+ --htpasswd string htpasswd file - if not provided no authentication is done
+ --key string SSL PEM Private key
+ --max-header-bytes int Maximum size of request header (default 4096)
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
+ --pass string Password for authentication.
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
--read-only Mount read-only.
+ --realm string realm for authentication (default "rclone")
+ --server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--uid uint32 Override the uid field set by the filesystem. (default 502)
--umask int Override the permission bits set by the filesystem. (default 2)
+ --user string User name for authentication.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
+rclone serve restic
+
+Serve the remote for restic's REST API.
+
+Synopsis
+
+rclone serve restic implements restic's REST backend API over HTTP. This
+allows restic to use rclone as a data storage mechanism for cloud
+providers that restic does not support directly.
+
+Restic is a command line program for doing backups.
+
+The server will log errors. Use -v to see access logs.
+
+--bwlimit will be respected for file transfers. Use --stats to control
+the stats printing.
+
+Setting up rclone for use by restic
+
+First set up a remote for your chosen cloud provider.
+
+Once you have set up the remote, check it is working with, for example
+"rclone lsd remote:". You may have called the remote something other
+than "remote:" - just substitute whatever you called it in the following
+instructions.
+
+Now start the rclone restic server
+
+ rclone serve restic -v remote:backup
+
+Where you can replace "backup" in the above with whatever path in the
+remote you wish to use.
+
+By default this will serve on "localhost:8080"; you can change this
+with the "--addr" flag.
+
+You might wish to start this server on boot.
+
+Setting up restic to use rclone
+
+Now you can follow the restic instructions on setting up restic.
+
+Note that you will need restic 0.8.2 or later to interoperate with
+rclone.
+
+For the example above you will want to use "http://localhost:8080/" as
+the URL for the REST server.
+
+For example:
+
+ $ export RESTIC_REPOSITORY=rest:http://localhost:8080/
+ $ export RESTIC_PASSWORD=yourpassword
+ $ restic init
+ created restic backend 8b1a4b56ae at rest:http://localhost:8080/
+
+ Please note that knowledge of your password is required to access
+ the repository. Losing your password means that your data is
+ irrecoverably lost.
+ $ restic backup /path/to/files/to/backup
+ scan [/path/to/files/to/backup]
+ scanned 189 directories, 312 files in 0:00
+ [0:00] 100.00% 38.128 MiB / 38.128 MiB 501 / 501 items 0 errors ETA 0:00
+ duration: 0:00
+ snapshot 45c8fdd8 saved
+
+Multiple repositories
+
+Note that you can use the endpoint to host multiple repositories. Do
+this by adding a directory name or path after the URL. These MUST end
+with /. Eg
+
+ $ export RESTIC_REPOSITORY=rest:http://localhost:8080/user1repo/
+ # backup user1 stuff
+ $ export RESTIC_REPOSITORY=rest:http://localhost:8080/user2repo/
+ # backup user2 stuff
+
+Server options
+
+Use --addr to specify which IP address and port the server should listen
+on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By
+default it only listens on localhost.
+
+If you set --addr to listen on a public or LAN accessible IP address
+then using authentication is advised - see the next section for info.
+
+--server-read-timeout and --server-write-timeout can be used to control
+the timeouts on the server. Note that this is the total time for a
+transfer.
+
+--max-header-bytes controls the maximum number of bytes the server will
+accept in the HTTP header.
+
+Authentication
+
+By default this will serve files without needing a login.
+
+You can either use an htpasswd file which can take lots of users, or set
+a single username and password with the --user and --pass flags.
+
+Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is in
+standard Apache format and supports MD5, SHA1 and BCrypt for basic
+authentication. BCrypt is recommended.
+
+To create an htpasswd file:
+
+ touch htpasswd
+ htpasswd -B htpasswd user
+ htpasswd -B htpasswd anotherUser
+
+The password file can be updated while rclone is running.
+
+Use --realm to set the authentication realm.
+
+SSL/TLS
+
+By default this will serve over http. If you want you can serve over
+https. You will need to supply the --cert and --key flags. If you wish
+to do client side certificate validation then you will need to supply
+--client-ca also.
+
+--cert should be either a PEM encoded certificate or a concatenation
+of that with the CA certificate. --key should be the PEM encoded private
+key and --client-ca should be the PEM encoded client certificate
+authority certificate.
+
+ rclone serve restic remote:path [flags]
+
+Options
+
+ --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080")
+ --cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --client-ca string Client certificate authority to verify clients with
+ -h, --help help for restic
+ --htpasswd string htpasswd file - if not provided no authentication is done
+ --key string SSL PEM Private key
+ --max-header-bytes int Maximum size of request header (default 4096)
+ --pass string Password for authentication.
+ --realm string realm for authentication (default "rclone")
+ --server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --stdio run an HTTP2 server on stdin/stdout
+ --user string User name for authentication.
+
+
rclone serve webdav
Serve remote:path over webdav.
@@ -1727,6 +2152,55 @@ client or you can make a remote of type webdav to read and write it.
NB at the moment each directory listing reads the start of each file
which is undesirable: see https://github.com/golang/go/issues/22577
+Server options
+
+Use --addr to specify which IP address and port the server should listen
+on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By
+default it only listens on localhost.
+
+If you set --addr to listen on a public or LAN accessible IP address
+then using Authentication is advised - see the next section for info.
+
+--server-read-timeout and --server-write-timeout can be used to control
+the timeouts on the server. Note that this is the total time for a
+transfer.
+
+--max-header-bytes controls the maximum number of bytes the server will
+accept in the HTTP header.
+
+Authentication
+
+By default this will serve files without needing a login.
+
+You can either use an htpasswd file which can take lots of users, or set
+a single username and password with the --user and --pass flags.
+
+Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is in
+standard apache format and supports MD5, SHA1 and BCrypt for basic
+authentication. Bcrypt is recommended.
+
+To create an htpasswd file:
+
+ touch htpasswd
+ htpasswd -B htpasswd user
+ htpasswd -B htpasswd anotherUser
+
+The password file can be updated while rclone is running.
+
+Use --realm to set the authentication realm.
+
+SSL/TLS
+
+By default this will serve over http. If you want you can serve over
+https. You will need to supply the --cert and --key flags. If you wish
+to do client side certificate validation then you will need to supply
+--client-ca also.
+
+--cert should be either a PEM encoded certificate or a concatenation
+of that with the CA certificate. --key should be the PEM encoded private
+key and --client-ca should be the PEM encoded client certificate
+authority certificate.
+
Directory Cache
Using the --dir-cache-time flag, you can set how long a directory should
@@ -1741,12 +2215,21 @@ rclone instance is running, you can reset the cache like this:
kill -SIGHUP $(pidof rclone)
+If you configure rclone with a remote control then you can use rclone rc
+to flush the whole directory cache:
+
+ rclone rc vfs/forget
+
+Or individual files or directories:
+
+ rclone rc vfs/forget file=path/to/file dir=path/to/dir
+
File Caching
NB File caching is EXPERIMENTAL - use with care!
These flags control the VFS file caching options. The VFS layer is used
-by rclone mount to make a cloud storage systm work more like a normal
+by rclone mount to make a cloud storage system work more like a normal
file system.
You'll need to enable VFS caching if you want, for example, to read and
@@ -1755,7 +2238,7 @@ write simultaneously to a file. See below for more details.
Note that the VFS cache works in addition to the cache backend and you
may find that you need one or the other or both.
- --vfs-cache-dir string Directory rclone will use for caching.
+ --cache-dir string Directory rclone will use for caching.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
@@ -1818,7 +2301,7 @@ file is opened for read it will be downloaded in its entirety first.
This may be appropriate for your needs, or you may prefer to look at the
cache backend which does a much more sophisticated job of caching,
-including caching directory heirachies and chunks of files.q
+including caching directory hierarchies and chunks of files.
In this mode, unlike the others, when a file is written to the disk, it
will be kept on the disk after it is written to the remote. It will be
@@ -1833,17 +2316,27 @@ If an upload or download fails it will be retried up to
Options
- --addr string IPaddress:Port to bind server to. (default "localhost:8081")
+ --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080")
+ --cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --client-ca string Client certificate authority to verify clients with
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
--gid uint32 Override the gid field set by the filesystem. (default 502)
-h, --help help for webdav
+ --htpasswd string htpasswd file - if not provided no authentication is done
+ --key string SSL PEM Private key
+ --max-header-bytes int Maximum size of request header (default 4096)
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
+ --pass string Password for authentication.
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
--read-only Mount read-only.
+ --realm string realm for authentication (default "rclone")
+ --server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--uid uint32 Override the uid field set by the filesystem. (default 502)
--umask int Override the permission bits set by the filesystem. (default 2)
+ --user string User name for authentication.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
@@ -1940,7 +2433,7 @@ The file test.jpg will be placed inside /tmp/download.
This is equivalent to specifying
- rclone copy --no-traverse --files-from /tmp/files remote: /tmp/download
+ rclone copy --files-from /tmp/files remote: /tmp/download
Where /tmp/files contains the single line
@@ -2125,6 +2618,11 @@ this:
kill -SIGUSR2 $(pidof rclone)
+If you configure rclone with a remote control then you can change the
+bwlimit dynamically:
+
+ rclone rc core/bwlimit rate=1M
+
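The rate passed to core/bwlimit is a size suffix string such as 1M or 10k, or off to disable the limit. As a rough sketch of how such a value could be interpreted (assuming conventional 1024-based suffixes; rclone's own parser is authoritative and may differ):

```python
def parse_rate(rate: str) -> float:
    """Return bytes per second, or float("inf") when the limit is "off"."""
    if rate == "off":
        return float("inf")
    multipliers = {"k": 1024, "M": 1024 ** 2, "G": 1024 ** 3}
    if rate and rate[-1] in multipliers:
        return float(rate[:-1]) * multipliers[rate[-1]]
    return float(rate)  # bare number: bytes per second
```

For example, parse_rate("1M") gives 1048576.0 bytes per second under these assumptions.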
--buffer-size=SIZE
Use this sized buffer to speed up file transfers. Each --transfer will
@@ -2319,6 +2817,12 @@ flag) quicker.
Disable low level retries with --low-level-retries 1.
+--max-delete=N
+
+This tells rclone not to delete more than N files. If that limit is
+exceeded then a fatal error will be generated and rclone will stop the
+operation in progress.
+
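The semantics described above can be sketched as a counter that turns fatal once the limit would be exceeded. This is an illustration only, not rclone's implementation:

```python
class MaxDeleteExceeded(Exception):
    """Raised once the deletion limit would be exceeded."""

def make_deleter(max_delete: int):
    """Return a delete function that refuses to delete more than max_delete files."""
    deleted = 0
    def delete(path: str) -> None:
        nonlocal deleted
        if deleted >= max_delete:
            # Mirrors the fatal error: stop the operation in progress.
            raise MaxDeleteExceeded(f"--max-delete={max_delete} reached, stopping")
        deleted += 1
        print(f"deleted {path}")
    return delete
```

With make_deleter(2), the first two calls succeed and a third raises, mirroring the fatal error described above.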
--max-depth=N
This modifies the recursion depth for all the commands except purge.
@@ -2407,12 +2911,19 @@ Stats are logged at INFO level by default which means they won't show at
default log level NOTICE. Use --stats-log-level NOTICE or -v to make
them show. See the Logging section for more info on log levels.
+--stats-file-name-length integer
+
+By default, the --stats output will truncate file names and paths longer
+than 40 characters. This is equivalent to providing
+--stats-file-name-length 40. Use --stats-file-name-length 0 to disable
+any truncation of file names printed by stats.
+
--stats-log-level string
Log level to show --stats output at. This can be DEBUG, INFO, NOTICE, or
ERROR. The default is INFO. This means at the default level of logging
which is NOTICE the stats won't show - if you want them to then use
--stats-log-level NOTICE. See the Logging section for more info on log
+--stats-log-level NOTICE. See the Logging section for more info on log
levels.
--stats-unit=bits|bytes
@@ -2499,8 +3010,8 @@ If the destination does not support server-side copy or move, rclone
will fall back to the default behaviour and log an error level message
to the console.
-Note that --track-renames is incompatible with --no-traverse and that it
-uses extra memory to keep track of all the rename candidates.
+Note that --track-renames uses extra memory to keep track of all the
+rename candidates.
Note also that --track-renames is incompatible with --delete-before and
will select --delete-after instead of --delete-during.
@@ -2545,8 +3056,8 @@ listing directories. This will have the following consequences for the
listing:
- It WILL use fewer transactions (important if you pay for them)
-- It WILL use more memory. Rclone has to load the whole listing
- into memory.
+- It WILL use more memory. Rclone has to load the whole listing into
+ memory.
- It _may_ be faster because it uses fewer transactions
- It _may_ be slower because it can't be parallelized
@@ -2754,25 +3265,6 @@ This option defaults to false.
THIS SHOULD BE USED ONLY FOR TESTING.
---no-traverse
-
-The --no-traverse flag controls whether the destination file system is
-traversed when using the copy or move commands. --no-traverse is not
-compatible with sync and will be ignored if you supply it with sync.
-
-If you are only copying a small number of files and/or have a large
-number of files on the destination then --no-traverse will stop rclone
-listing the destination and save time.
-
-However, if you are copying a large number of files, especially if you
-are doing a copy where lots of the files haven't changed and won't need
-copying then you shouldn't use --no-traverse.
-
-It can also be used to reduce the memory usage of rclone when copying -
-rclone --no-traverse copy src dst won't load either the source or
-destination listings into memory so will use the minimum amount of
-memory.
-
Filtering
@@ -2795,9 +3287,20 @@ For the filtering options
See the filtering section.
+Remote control
+
+For the remote control options and for instructions on how to remote
+control rclone
+
+- --rc
+- and anything starting with --rc-
+
+See the remote control section.
+
+
Logging
-rclone has 4 levels of logging, Error, Notice, Info and Debug.
+rclone has 4 levels of logging, ERROR, NOTICE, INFO and DEBUG.
By default, rclone logs to standard error. This means you can redirect
standard error and still see the normal output of rclone commands (eg
@@ -2853,10 +3356,10 @@ List of exit codes
- 3 - Directory not found
- 4 - File not found
- 5 - Temporary error (one that more retries might fix) (Retry errors)
-- 6 - Less serious errors (like 461 errors from dropbox)
- (NoRetry errors)
-- 7 - Fatal error (one that more retries won't fix, like
- account suspended) (Fatal errors)
+- 6 - Less serious errors (like 461 errors from dropbox) (NoRetry
+ errors)
+- 7 - Fatal error (one that more retries won't fix, like account
+ suspended) (Fatal errors)
Environment Variables
@@ -2913,8 +3416,8 @@ Other environment variables
- RCLONE_CONFIG_PASS` set to contain your config file password (see
Configuration Encryption section)
-- HTTP_PROXY, HTTPS_PROXY and NO_PROXY (or the lowercase
- versions thereof).
+- HTTP_PROXY, HTTPS_PROXY and NO_PROXY (or the lowercase versions
+ thereof).
- HTTPS_PROXY takes precedence over HTTP_PROXY for https requests.
- The environment values may be either a complete URL or a
"host[:port]" for, in which case the "http" scheme is assumed.
@@ -3288,23 +3791,32 @@ the sync.
--files-from - Read list of source-file names
This reads a list of file names from the file passed in and ONLY these
-files are transferred. The filtering rules are ignored completely if you
+files are transferred. The FILTERING RULES ARE IGNORED completely if you
use this option.
This option can be repeated to read from more than one file. These are
read in the order that they are placed on the command line.
-Prepare a file like this files-from.txt
+Paths within the --files-from file will be interpreted as starting with
+the root specified in the command. Leading / characters are ignored.
+
+For example, suppose you had files-from.txt with this content:
# comment
file1.jpg
- file2.jpg
+ subdir/file2.jpg
-Then use as --files-from files-from.txt. This will only transfer
-file1.jpg and file2.jpg providing they exist.
+You could then use it like this:
-For example, let's say you had a few files you want to back up regularly
-with these absolute paths:
+ rclone copy --files-from files-from.txt /home/me/pics remote:pics
+
+This will transfer these files only (if they exist)
+
+ /home/me/pics/file1.jpg → remote:pics/file1.jpg
+    /home/me/pics/subdir/file2.jpg → remote:pics/subdir/file2.jpg
+
+To take a more complicated example, let's say you had a few files you
+want to back up regularly with these absolute paths:
/home/user1/important
/home/user1/dir/file
@@ -3322,7 +3834,11 @@ You could then copy these to a remote like this
rclone copy --files-from files-from.txt /home remote:backup
The 3 files will arrive in remote:backup with the paths as in the
-files-from.txt.
+files-from.txt like this:
+
+ /home/user1/important → remote:backup/user1/important
+ /home/user1/dir/file → remote:backup/user1/dir/file
+    /home/user2/stuff → remote:backup/user2/stuff
You could of course choose / as the root too in which case your
files-from.txt might look like this.
@@ -3335,7 +3851,11 @@ And you would transfer it like this
rclone copy --files-from files-from.txt / remote:backup
-In this case there will be an extra home directory on the remote.
+In this case there will be an extra home directory on the remote:
+
+    /home/user1/important → remote:backup/home/user1/important
+    /home/user1/dir/file → remote:backup/home/user1/dir/file
+    /home/user2/stuff → remote:backup/home/user2/stuff
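The mapping described above - entries joined to the source root and to the destination, with leading / characters and comment lines ignored - can be sketched with a hypothetical helper (rclone's real logic also applies filtering; this only illustrates the path mapping):

```python
def map_files_from(entries, source_root, dest):
    """Pair each --files-from entry with its source and destination paths."""
    pairs = []
    for line in entries:
        line = line.strip()
        if not line or line.startswith("#"):  # skip blanks and comments
            continue
        rel = line.lstrip("/")  # leading / characters are ignored
        pairs.append((source_root.rstrip("/") + "/" + rel, dest + "/" + rel))
    return pairs

for src, dst in map_files_from(
    ["# comment", "file1.jpg", "subdir/file2.jpg"], "/home/me/pics", "remote:pics"
):
    print(src, "->", dst)
```

With source root / and destination remote:backup, an entry /home/user2/stuff maps to remote:backup/home/user2/stuff, showing where the extra home directory comes from.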
--min-size - Don't transfer any file smaller than this
@@ -3446,6 +3966,234 @@ should not be used multiple times.
+REMOTE CONTROLLING RCLONE
+
+
+If rclone is run with the --rc flag then it starts an http server which
+can be used to remote control rclone.
+
+NB this is experimental and everything here is subject to change!
+
+
+Supported parameters
+
+--rc
+
+Flag to start the http server to listen for remote requests
+
+--rc-addr=IP
+
+IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+
+--rc-cert=KEY
+
+SSL PEM key (concatenation of certificate and CA certificate)
+
+--rc-client-ca=PATH
+
+Client certificate authority to verify clients with
+
+--rc-htpasswd=PATH
+
+htpasswd file - if not provided no authentication is done
+
+--rc-key=PATH
+
+SSL PEM Private key
+
+--rc-max-header-bytes=VALUE
+
+Maximum size of request header (default 4096)
+
+--rc-user=VALUE
+
+User name for authentication.
+
+--rc-pass=VALUE
+
+Password for authentication.
+
+--rc-realm=VALUE
+
+Realm for authentication (default "rclone")
+
+--rc-server-read-timeout=DURATION
+
+Timeout for server reading data (default 1h0m0s)
+
+--rc-server-write-timeout=DURATION
+
+Timeout for server writing data (default 1h0m0s)
+
+
+Accessing the remote control via the rclone rc command
+
+Rclone itself implements the remote control protocol in its rclone rc
+command.
+
+You can use it like this
+
+ $ rclone rc rc/noop param1=one param2=two
+ {
+ "param1": "one",
+ "param2": "two"
+ }
+
+Run rclone rc on its own to see the help for the installed remote
+control commands.
+
+
+Supported commands
+
+core/bwlimit: Set the bandwidth limit.
+
+This sets the bandwidth limit to that passed in.
+
+Eg
+
+    rclone rc core/bwlimit rate=1M
+    rclone rc core/bwlimit rate=off
+
+cache/expire: Purge a remote from cache
+
+Purge a remote from the cache backend. Supports either a directory or a
+file. Params:
+
+- remote = path to remote (required)
+- withData = true/false to delete cached data (chunks) as well
+ (optional)
+
+vfs/forget: Forget files or directories in the directory cache.
+
+This forgets the paths in the directory cache causing them to be re-read
+from the remote when needed.
+
+If no paths are passed in then it will forget all the paths in the
+directory cache.
+
+ rclone rc vfs/forget
+
+Otherwise pass files or dirs in as file=path or dir=path. Any parameter
+key starting with file will forget that file and any starting with dir
+will forget that dir, eg
+
+ rclone rc vfs/forget file=hello file2=goodbye dir=home/junk
+
+rc/noop: Echo the input to the output parameters
+
+This echoes the input parameters to the output parameters for testing
+purposes. It can be used to check that rclone is still alive and to
+check that parameter passing is working properly.
+
+rc/error: This returns an error
+
+This returns an error with the input as part of its error string. Useful
+for testing error handling.
+
+rc/list: List all the registered remote control commands
+
+This lists all the registered remote control commands as a JSON map in
+the commands response.
+
+
+Accessing the remote control via HTTP
+
+Rclone implements a simple HTTP based protocol.
+
+Each endpoint takes a JSON object and returns a JSON object or an
+error. The JSON objects are essentially a map of string names to values.
+
+All calls must be made using POST.
+
+The input objects can be supplied using URL parameters, POST parameters
+or by supplying "Content-Type: application/json" and a JSON blob in the
+body. There are examples of these below using curl.
+
+The response will be a JSON blob in the body of the response. This is
+formatted to be reasonably human readable.
+
+If an error occurs then there will be an HTTP error status (usually 400)
+and the body of the response will contain a JSON encoded error object.
+
+Using POST with URL parameters only
+
+ curl -X POST 'http://localhost:5572/rc/noop/?potato=1&sausage=2'
+
+Response
+
+ {
+ "potato": "1",
+ "sausage": "2"
+ }
+
+Here is what an error response looks like:
+
+ curl -X POST 'http://localhost:5572/rc/error/?potato=1&sausage=2'
+
+ {
+ "error": "arbitrary error on input map[potato:1 sausage:2]",
+ "input": {
+ "potato": "1",
+ "sausage": "2"
+ }
+ }
+
+Note that curl doesn't return errors to the shell unless you use the -f
+option
+
+ $ curl -f -X POST 'http://localhost:5572/rc/error/?potato=1&sausage=2'
+ curl: (22) The requested URL returned error: 400 Bad Request
+ $ echo $?
+ 22
+
+Using POST with a form
+
+ curl --data "potato=1" --data "sausage=2" http://localhost:5572/rc/noop/
+
+Response
+
+ {
+ "potato": "1",
+ "sausage": "2"
+ }
+
+Note that you can combine these with URL parameters too with the POST
+parameters taking precedence.
+
+ curl --data "potato=1" --data "sausage=2" "http://localhost:5572/rc/noop/?rutabaga=3&sausage=4"
+
+Response
+
+ {
+ "potato": "1",
+ "rutabaga": "3",
+ "sausage": "4"
+ }
+
+Using POST with a JSON blob
+
+ curl -H "Content-Type: application/json" -X POST -d '{"potato":2,"sausage":1}' http://localhost:5572/rc/noop/
+
+Response
+
+    {
+        "potato": 2,
+        "sausage": 1
+    }
+
+This can be combined with URL parameters too if required. The JSON blob
+takes precedence.
+
+ curl -H "Content-Type: application/json" -X POST -d '{"potato":2,"sausage":1}' 'http://localhost:5572/rc/noop/?rutabaga=3&potato=4'
+
+ {
+ "potato": 2,
+ "rutabaga": "3",
+ "sausage": 1
+ }
+
+
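The curl examples above can also be reproduced from code. Here is a minimal client sketch for the protocol, using only the Python standard library; it assumes rclone is running with --rc on the default localhost:5572, and the rc/noop endpoint name is taken from the text above:

```python
import json
import urllib.request

def rc_call(endpoint, params=None, base="http://localhost:5572"):
    """POST a JSON object to an rc endpoint and return the decoded reply."""
    req = urllib.request.Request(
        base + "/" + endpoint + "/",
        data=json.dumps(params or {}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Example: rc_call("rc/noop", {"param1": "one"}) should echo the
# parameters back, just like the curl examples above.
```

An HTTP error status (usually 400) surfaces as a urllib.error.HTTPError, whose body contains the JSON encoded error object.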
+
OVERVIEW OF CLOUD STORAGE SYSTEMS
@@ -3639,6 +4387,128 @@ advance. This allows certain operations to work without spooling the
file to local disk first, e.g. rclone rcat.
+Alias
+
+The alias remote provides a new name for another remote.
+
+Paths may be as deep as required or a local path, eg
+remote:directory/subdirectory or /directory/subdirectory.
+
+During the initial setup with rclone config you will specify the target
+remote. The target remote can either be a local path or another remote.
+
+Subfolders can be used in the target remote. Assume an alias remote named
+backup with the target mydrive:private/backup. Invoking
+rclone mkdir backup:desktop is exactly the same as invoking
+rclone mkdir mydrive:private/backup/desktop.
+
+There will be no special handling of paths containing .. segments.
+Invoking rclone mkdir backup:../desktop is exactly the same as invoking
+rclone mkdir mydrive:private/backup/../desktop. The empty path is not
+allowed as a remote. To alias the current directory use . instead.
+
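The expansion described above is plain concatenation of the target and the requested path, with no normalisation of .. segments. An illustrative sketch (not rclone's actual implementation):

```python
def expand_alias(target: str, path: str) -> str:
    """Join an alias target and a requested path by plain concatenation."""
    if not path:
        return target
    return target.rstrip("/") + "/" + path

print(expand_alias("mydrive:private/backup", "desktop"))
# mydrive:private/backup/desktop
print(expand_alias("mydrive:private/backup", "../desktop"))
# mydrive:private/backup/../desktop
```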
+Here is an example of how to make an alias called remote for a local
+folder. First run:
+
+ rclone config
+
+This will guide you through an interactive setup process:
+
+ No remotes found - make a new one
+ n) New remote
+ s) Set configuration password
+ q) Quit config
+ n/s/q> n
+ name> remote
+ Type of storage to configure.
+ Choose a number from below, or type in your own value
+ 1 / Alias for a existing remote
+ \ "alias"
+ 2 / Amazon Drive
+ \ "amazon cloud drive"
+ 3 / Amazon S3 (also Dreamhost, Ceph, Minio)
+ \ "s3"
+ 4 / Backblaze B2
+ \ "b2"
+ 5 / Box
+ \ "box"
+ 6 / Cache a remote
+ \ "cache"
+ 7 / Dropbox
+ \ "dropbox"
+ 8 / Encrypt/Decrypt a remote
+ \ "crypt"
+ 9 / FTP Connection
+ \ "ftp"
+ 10 / Google Cloud Storage (this is not Google Drive)
+ \ "google cloud storage"
+ 11 / Google Drive
+ \ "drive"
+ 12 / Hubic
+ \ "hubic"
+ 13 / Local Disk
+ \ "local"
+ 14 / Microsoft Azure Blob Storage
+ \ "azureblob"
+ 15 / Microsoft OneDrive
+ \ "onedrive"
+ 16 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+ \ "swift"
+ 17 / Pcloud
+ \ "pcloud"
+ 18 / QingCloud Object Storage
+ \ "qingstor"
+ 19 / SSH/SFTP Connection
+ \ "sftp"
+ 20 / Webdav
+ \ "webdav"
+ 21 / Yandex Disk
+ \ "yandex"
+ 22 / http Connection
+ \ "http"
+ Storage> 1
+ Remote or path to alias.
+ Can be "myremote:path/to/dir", "myremote:bucket", "myremote:" or "/local/path".
+ remote> /mnt/storage/backup
+ Remote config
+ --------------------
+ [remote]
+ remote = /mnt/storage/backup
+ --------------------
+ y) Yes this is OK
+ e) Edit this remote
+ d) Delete this remote
+ y/e/d> y
+ Current remotes:
+
+ Name Type
+ ==== ====
+ remote alias
+
+ e) Edit existing remote
+ n) New remote
+ d) Delete remote
+ r) Rename remote
+ c) Copy remote
+ s) Set configuration password
+ q) Quit config
+ e/n/d/r/c/s/q> q
+
+Once configured you can then use rclone like this,
+
+List directories in top level in /mnt/storage/backup
+
+ rclone lsd remote:
+
+List all the files in /mnt/storage/backup
+
+ rclone ls remote:
+
+Copy another local directory to the alias directory called source
+
+ rclone copy /home/source remote:source
+
+
Amazon Drive
Paths are specified as remote:path
@@ -3862,37 +4732,23 @@ This will guide you through an interactive setup process.
No remotes found - make a new one
n) New remote
s) Set configuration password
- n/s> n
+ q) Quit config
+ n/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
- 1 / Amazon Drive
+ 1 / Alias for a existing remote
+ \ "alias"
+ 2 / Amazon Drive
\ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
+ 3 / Amazon S3 (also Dreamhost, Ceph, Minio)
\ "s3"
- 3 / Backblaze B2
+ 4 / Backblaze B2
\ "b2"
- 4 / Dropbox
- \ "dropbox"
- 5 / Encrypt/Decrypt a remote
- \ "crypt"
- 6 / Google Cloud Storage (this is not Google Drive)
- \ "google cloud storage"
- 7 / Google Drive
- \ "drive"
- 8 / Hubic
- \ "hubic"
- 9 / Local Disk
- \ "local"
- 10 / Microsoft OneDrive
- \ "onedrive"
- 11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
- \ "swift"
- 12 / SSH/SFTP Connection
- \ "sftp"
- 13 / Yandex Disk
- \ "yandex"
- Storage> 2
+ [snip]
+ 23 / http Connection
+ \ "http"
+ Storage> s3
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
1 / Enter AWS credentials in the next step
@@ -3901,80 +4757,91 @@ This will guide you through an interactive setup process.
\ "true"
env_auth> 1
AWS Access Key ID - leave blank for anonymous access or runtime credentials.
- access_key_id> access_key
+ access_key_id> XXX
AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
- secret_access_key> secret_key
- Region to connect to.
+ secret_access_key> YYY
+ Region to connect to. Leave blank if you are using an S3 clone and you don't have a region.
Choose a number from below, or type in your own value
/ The default endpoint - a good choice if you are unsure.
1 | US Region, Northern Virginia or Pacific Northwest.
| Leave location constraint empty.
\ "us-east-1"
+ / US East (Ohio) Region
+ 2 | Needs location constraint us-east-2.
+ \ "us-east-2"
/ US West (Oregon) Region
- 2 | Needs location constraint us-west-2.
+ 3 | Needs location constraint us-west-2.
\ "us-west-2"
/ US West (Northern California) Region
- 3 | Needs location constraint us-west-1.
+ 4 | Needs location constraint us-west-1.
\ "us-west-1"
- / EU (Ireland) Region Region
- 4 | Needs location constraint EU or eu-west-1.
+ / Canada (Central) Region
+ 5 | Needs location constraint ca-central-1.
+ \ "ca-central-1"
+ / EU (Ireland) Region
+ 6 | Needs location constraint EU or eu-west-1.
\ "eu-west-1"
+ / EU (London) Region
+ 7 | Needs location constraint eu-west-2.
+ \ "eu-west-2"
/ EU (Frankfurt) Region
- 5 | Needs location constraint eu-central-1.
+ 8 | Needs location constraint eu-central-1.
\ "eu-central-1"
/ Asia Pacific (Singapore) Region
- 6 | Needs location constraint ap-southeast-1.
+ 9 | Needs location constraint ap-southeast-1.
\ "ap-southeast-1"
/ Asia Pacific (Sydney) Region
- 7 | Needs location constraint ap-southeast-2.
+ 10 | Needs location constraint ap-southeast-2.
\ "ap-southeast-2"
/ Asia Pacific (Tokyo) Region
- 8 | Needs location constraint ap-northeast-1.
+ 11 | Needs location constraint ap-northeast-1.
\ "ap-northeast-1"
/ Asia Pacific (Seoul)
- 9 | Needs location constraint ap-northeast-2.
+ 12 | Needs location constraint ap-northeast-2.
\ "ap-northeast-2"
/ Asia Pacific (Mumbai)
- 10 | Needs location constraint ap-south-1.
+ 13 | Needs location constraint ap-south-1.
\ "ap-south-1"
/ South America (Sao Paulo) Region
- 11 | Needs location constraint sa-east-1.
+ 14 | Needs location constraint sa-east-1.
\ "sa-east-1"
- / If using an S3 clone that only understands v2 signatures
- 12 | eg Ceph/Dreamhost
- | set this and make sure you set the endpoint.
+ / Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH.
+ 15 | Set this and make sure you set the endpoint.
\ "other-v2-signature"
- / If using an S3 clone that understands v4 signatures set this
- 13 | and make sure you set the endpoint.
- \ "other-v4-signature"
region> 1
Endpoint for S3 API.
Leave blank if using AWS to use the default endpoint for the region.
Specify if using an S3 clone such as Ceph.
- endpoint>
+ endpoint>
Location constraint - must be set to match the Region. Used when creating buckets only.
Choose a number from below, or type in your own value
1 / Empty for US Region, Northern Virginia or Pacific Northwest.
\ ""
- 2 / US West (Oregon) Region.
+ 2 / US East (Ohio) Region.
+ \ "us-east-2"
+ 3 / US West (Oregon) Region.
\ "us-west-2"
- 3 / US West (Northern California) Region.
+ 4 / US West (Northern California) Region.
\ "us-west-1"
- 4 / EU (Ireland) Region.
+ 5 / Canada (Central) Region.
+ \ "ca-central-1"
+ 6 / EU (Ireland) Region.
\ "eu-west-1"
- 5 / EU Region.
+ 7 / EU (London) Region.
+ \ "eu-west-2"
+ 8 / EU Region.
\ "EU"
- 6 / Asia Pacific (Singapore) Region.
+ 9 / Asia Pacific (Singapore) Region.
\ "ap-southeast-1"
- 7 / Asia Pacific (Sydney) Region.
+ 10 / Asia Pacific (Sydney) Region.
\ "ap-southeast-2"
- 8 / Asia Pacific (Tokyo) Region.
+ 11 / Asia Pacific (Tokyo) Region.
\ "ap-northeast-1"
- 9 / Asia Pacific (Seoul)
+ 12 / Asia Pacific (Seoul)
\ "ap-northeast-2"
- 10 / Asia Pacific (Mumbai)
+ 13 / Asia Pacific (Mumbai)
\ "ap-south-1"
- 11 / South America (Sao Paulo) Region.
+ 14 / South America (Sao Paulo) Region.
\ "sa-east-1"
location_constraint> 1
Canned ACL used when creating buckets and/or storing objects in S3.
@@ -3995,14 +4862,14 @@ This will guide you through an interactive setup process.
/ Both the object owner and the bucket owner get FULL_CONTROL over the object.
6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
\ "bucket-owner-full-control"
- acl> private
+ acl> 1
The server-side encryption algorithm used when storing this object in S3.
Choose a number from below, or type in your own value
1 / None
\ ""
2 / AES256
\ "AES256"
- server_side_encryption>
+ server_side_encryption> 1
The storage class to use when storing objects in S3.
Choose a number from below, or type in your own value
1 / Default
@@ -4013,19 +4880,19 @@ This will guide you through an interactive setup process.
\ "REDUCED_REDUNDANCY"
4 / Standard Infrequent Access storage class
\ "STANDARD_IA"
- storage_class>
+ storage_class> 1
Remote config
--------------------
[remote]
env_auth = false
- access_key_id = access_key
- secret_access_key = secret_key
+ access_key_id = XXX
+ secret_access_key = YYY
region = us-east-1
- endpoint =
- location_constraint =
+ endpoint =
+ location_constraint =
acl = private
- server_side_encryption =
- storage_class =
+ server_side_encryption =
+ storage_class =
--------------------
y) Yes this is OK
e) Edit this remote
@@ -4065,8 +4932,8 @@ X-Amz-Meta-Mtime as floating point since the epoch accurate to 1 ns.
Multipart uploads
rclone supports multipart uploads with S3 which means that it can upload
-files bigger than 5GB. Note that files uploaded with multipart upload
-don't have an MD5SUM.
+files bigger than 5GB. Note that files uploaded _both_ with multipart
+upload _and_ through crypt remotes do not have MD5 sums.
Buckets and Regions
@@ -4142,6 +5009,14 @@ Notes on above:
For reference, here's an Ansible script that will generate one or more
buckets that will work with rclone sync.
+Key Management System (KMS)
+
+If you are using server side encryption with KMS then you will find you
+can't transfer small objects. As a work-around you can use the
+--ignore-checksum flag.
+
+A proper fix is being worked on in issue #1824.
+
Glacier
You can transition objects to glacier storage using a lifecycle policy.
@@ -4219,15 +5094,25 @@ You will be able to list and copy data but not upload it.
Ceph
-Ceph is an object storage system which presents an Amazon S3 interface.
+Ceph is an open source unified, distributed storage system designed for
+excellent performance, reliability and scalability. It has an S3
+compatible object storage interface.
-To use rclone with ceph, you need to set the following parameters in the
-config.
+To use rclone with Ceph, configure as above but leave the region blank
+and set the endpoint. You should end up with something like this in your
+config:
- access_key_id = Whatever
- secret_access_key = Whatever
- endpoint = https://ceph.endpoint.goes.here/
- region = other-v2-signature
+ [ceph]
+ type = s3
+ env_auth = false
+ access_key_id = XXX
+ secret_access_key = YYY
+ region =
+ endpoint = https://ceph.endpoint.example.com
+ location_constraint =
+ acl =
+ server_side_encryption =
+ storage_class =
Note also that Ceph sometimes puts / in the passwords it gives users. If
you read the secret access key using the command line tools you will get
@@ -4252,6 +5137,25 @@ removed).
Because this is a json dump, it is encoding the / as \/, so if you use
the secret key as xxxxxx/xxxx it will work fine.
+Dreamhost
+
+Dreamhost DreamObjects is an object storage system based on CEPH.
+
+To use rclone with Dreamhost, configure as above but leave the region
+blank and set the endpoint. You should end up with something like this
+in your config:
+
+ [dreamobjects]
+ env_auth = false
+ access_key_id = your_access_key
+ secret_access_key = your_secret_key
+ region =
+ endpoint = objects-us-west-1.dream.io
+ location_constraint =
+ acl = private
+ server_side_encryption =
+ storage_class =
+
DigitalOcean Spaces
Spaces is an S3-interoperable object storage service from cloud provider
@@ -4270,7 +5174,7 @@ other settings.
Going through the whole process of creating a new remote by running
rclone config, each prompt should be answered as shown below:
- Storage> 2
+ Storage> s3
env_auth> 1
access_key_id> YOUR_ACCESS_KEY
secret_access_key> YOUR_SECRET_KEY
@@ -4300,6 +5204,201 @@ example:
rclone mkdir spaces:my-new-space
rclone copy /path/to/files spaces:my-new-space
+IBM COS (S3)
+
+Information stored with IBM Cloud Object Storage is encrypted and
+dispersed across multiple geographic locations, and accessed through an
+implementation of the S3 API. This service makes use of the distributed
+storage technologies provided by IBM’s Cloud Object Storage System
+(formerly Cleversafe). For more information, visit
+https://www.ibm.com/cloud/object-storage.
+
+To configure access to IBM COS S3, follow the steps below:
+
+1. Run rclone config and select n for a new remote.
+
+ 2018/02/14 14:13:11 NOTICE: Config file "C:\\Users\\a\\.config\\rclone\\rclone.conf" not found - using defaults
+ No remotes found - make a new one
+ n) New remote
+ s) Set configuration password
+ q) Quit config
+ n/s/q> n
+
+2. Enter the name for the configuration
+
+ name> IBM-COS-XREGION
+
+3. Select "s3" storage.
+
+ Type of storage to configure.
+ Choose a number from below, or type in your own value
+ 1 / Amazon Drive
+ \ "amazon cloud drive"
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio, IBM COS(S3))
+ \ "s3"
+ 3 / Backblaze B2
+ Storage> 2
+
+4. Select "Enter AWS credentials…"
+
+ Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
+ Choose a number from below, or type in your own value
+ 1 / Enter AWS credentials in the next step
+ \ "false"
+ 2 / Get AWS credentials from the environment (env vars or IAM)
+ \ "true"
+ env_auth> 1
+
+5. Enter the Access Key and Secret.
+
+ AWS Access Key ID - leave blank for anonymous access or runtime credentials.
+ access_key_id> <>
+ AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
+ secret_access_key> <>
+
+6. Select "other-v4-signature" region.
+
+ Region to connect to.
+ Choose a number from below, or type in your own value
+ / The default endpoint - a good choice if you are unsure.
+ 1 | US Region, Northern Virginia or Pacific Northwest.
+ | Leave location constraint empty.
+ \ "us-east-1"
+ / US East (Ohio) Region
+ 2 | Needs location constraint us-east-2.
+ \ "us-east-2"
+ / US West (Oregon) Region
+ ……
+ 15 | eg Ceph/Dreamhost
+ | set this and make sure you set the endpoint.
+ \ "other-v2-signature"
+ / If using an S3 clone that understands v4 signatures set this
+ 16 | and make sure you set the endpoint.
+       \ "other-v4-signature"
+ region> 16
+
+7. Enter the endpoint FQDN.
+
+ Leave blank if using AWS to use the default endpoint for the region.
+ Specify if using an S3 clone such as Ceph.
+ endpoint> s3-api.us-geo.objectstorage.softlayer.net
+
+8. Specify an IBM COS Location Constraint.
+    a. Currently, the only IBM COS values for LocationConstraint are:
+       us-standard / us-vault / us-cold / us-flex /
+       us-east-standard / us-east-vault / us-east-cold / us-east-flex /
+       us-south-standard / us-south-vault / us-south-cold / us-south-flex /
+       eu-standard / eu-vault / eu-cold / eu-flex
+
+ Location constraint - must be set to match the Region. Used when creating buckets only.
+ Choose a number from below, or type in your own value
+ 1 / Empty for US Region, Northern Virginia or Pacific Northwest.
+ \ ""
+ 2 / US East (Ohio) Region.
+ \ "us-east-2"
+ ……
+ location_constraint> us-standard
+
+9. Specify a canned ACL.
+
+ Canned ACL used when creating buckets and/or storing objects in S3.
+ For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
+ Choose a number from below, or type in your own value
+ 1 / Owner gets FULL_CONTROL. No one else has access rights (default).
+ \ "private"
+ 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.
+ \ "public-read"
+ / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
+ 3 | Granting this on a bucket is generally not recommended.
+ \ "public-read-write"
+ 4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
+ \ "authenticated-read"
+ / Object owner gets FULL_CONTROL. Bucket owner gets READ access.
+ 5 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
+ \ "bucket-owner-read"
+ / Both the object owner and the bucket owner get FULL_CONTROL over the object.
+ 6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
+ \ "bucket-owner-full-control"
+ acl> 1
+
+10. Set the SSE option to "None".
+
+ Choose a number from below, or type in your own value
+ 1 / None
+ \ ""
+ 2 / AES256
+ \ "AES256"
+ server_side_encryption> 1
+
+11. Set the storage class to "None" (IBM COS uses the LocationConstraint
+ at the bucket level).
+
+ The storage class to use when storing objects in S3.
+ Choose a number from below, or type in your own value
+ 1 / Default
+ \ ""
+ 2 / Standard storage class
+ \ "STANDARD"
+ 3 / Reduced redundancy storage class
+ \ "REDUCED_REDUNDANCY"
+ 4 / Standard Infrequent Access storage class
+ \ "STANDARD_IA"
+ storage_class>
+
+12. Review the displayed configuration and accept to save the "remote"
+ then quit.
+
+ Remote config
+ --------------------
+ [IBM-COS-XREGION]
+ env_auth = false
+ access_key_id = <>
+ secret_access_key = <>
+ region = other-v4-signature
+ endpoint = s3-api.us-geo.objectstorage.softlayer.net
+ location_constraint = us-standard
+ acl = private
+ server_side_encryption =
+ storage_class =
+ --------------------
+ y) Yes this is OK
+ e) Edit this remote
+ d) Delete this remote
+ y/e/d> y
+ Remote config
+ Current remotes:
+
+ Name Type
+ ==== ====
+ IBM-COS-XREGION s3
+
+ e) Edit existing remote
+ n) New remote
+ d) Delete remote
+ r) Rename remote
+ c) Copy remote
+ s) Set configuration password
+ q) Quit config
+ e/n/d/r/c/s/q> q
+
+13. Execute rclone commands
+
+ 1) Create a bucket.
+ rclone mkdir IBM-COS-XREGION:newbucket
+ 2) List available buckets.
+ rclone lsd IBM-COS-XREGION:
+ -1 2017-11-08 21:16:22 -1 test
+ -1 2018-02-14 20:16:39 -1 newbucket
+ 3) List contents of a bucket.
+ rclone ls IBM-COS-XREGION:newbucket
+ 18685952 test.exe
+ 4) Copy a file from local to remote.
+ rclone copy /Users/file.txt IBM-COS-XREGION:newbucket
+ 5) Copy a file from remote to local.
+ rclone copy IBM-COS-XREGION:newbucket/file.txt .
+ 6) Delete a file on remote.
+ rclone delete IBM-COS-XREGION:newbucket/file.txt
+
Minio
Minio is an object storage server built for cloud application developers
@@ -5080,11 +6179,42 @@ To start a cached mount
rclone mount --allow-other test-cache: /var/tmp/test-cache
+Write Features
+
+Offline uploading
+
+In an effort to make writing through cache more reliable, the backend
+now supports offline uploading, which can be activated by specifying a
+cache-tmp-upload-path.
+
+A file goes through these states when using this feature:
+
+1. An upload is started (usually by copying a file on the cache remote)
+2. When the copy to the temporary location is complete the file is part
+ of the cached remote and looks and behaves like any other file
+ (reading included)
+3. After cache-tmp-wait-time passes and the file is next in line,
+ rclone move is used to move the file to the cloud provider
+4. Reading the file still works during the upload but most
+ modifications on it will be prohibited
+5. Once the move is complete the file is unlocked for modifications as
+   it becomes like any other regular file
+6. If the file is being read through cache when it is actually deleted
+   from the temporary path then cache will simply swap the source to
+   the cloud provider without interrupting the reading (though a small
+   blip can happen)
+
+Files are uploaded in sequence and only one file is uploaded at a time.
+Uploads will be stored in a queue and be processed based on the order
+they were added. The queue and the temporary storage are persistent
+across restarts and even purges of the cache.
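
As a sketch, the cached mount example from earlier could be started with offline uploading enabled; the temporary path and wait time below are illustrative values, not defaults:

```shell
# Queue new files in a local temporary path and upload them in sequence;
# each file waits 10m in the queue before being moved to the cloud provider.
rclone mount --allow-other \
    --cache-tmp-upload-path /var/tmp/rclone-upload-queue \
    --cache-tmp-wait-time 10m \
    test-cache: /var/tmp/test-cache
```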
+
Write Support
Writes are supported through cache. One caveat is that a mounted cache
remote does not add any retry or fallback mechanism to the upload
operation. This will depend on the implementation of the wrapped remote.
+Consider using Offline uploading for reliable writes.
One special case is covered with cache-writes which will cache the file
data at the same time as the upload when it is enabled making it
@@ -5128,6 +6258,18 @@ playback or _1_ all the other times
Known issues
+Mount and --dir-cache-time
+
+--dir-cache-time controls the first layer of directory caching which
+works at the mount layer. Being an independent caching mechanism from
+the cache backend, it will manage its own entries based on the
+configured time.
+
+To avoid getting in a scenario where dir cache has obsolete data and
+cache would have the correct one, try to set --dir-cache-time to a lower
+time than --cache-info-age. Default values are already configured in
+this way.
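
As an illustrative sketch, a cached mount honouring this advice keeps --dir-cache-time below --cache-info-age (the durations here are examples, not the defaults):

```shell
# Mount-layer dir cache (1m) expires before the cache backend's
# directory entries (2m), so the mount never outlives the backend data.
rclone mount --dir-cache-time 1m --cache-info-age 2m test-cache: /var/tmp/test-cache
```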
+
Windows support - Experimental
There are a couple of issues with Windows mount functionality that still
@@ -5181,6 +6323,21 @@ cloud provider which makes it think we're downloading the full file
instead of small chunks. Organizing the remotes in this order yields
better results: CLOUD REMOTE -> CACHE -> CRYPT
+Cache and Remote Control (--rc)
+
+Cache supports the new --rc mode in rclone and can be remote controlled
+through the following end points. By default, the listener is disabled
+if you do not add the --rc flag.
+
+rc cache/expire
+
+Purge a remote from the cache backend. Supports either a directory or a
+file. It supports both encrypted and unencrypted file names if cache is
+wrapped by crypt.
+
+Params: - REMOTE = path to remote (REQUIRED) - WITHDATA = true/false to
+delete cached data (chunks) as well _(optional, false by default)_
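
Assuming rclone was started with --rc and is listening on the default localhost:5572, the end point can be exercised with curl; the remote path is an example, and the query-string form shown here is just one way of passing the parameters:

```shell
# Expire a directory from the cache and delete its cached chunks too
curl -X POST 'http://localhost:5572/cache/expire?remote=/path/to/dir&withData=true'
```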
+
Specific options
Here are the command line options specific to this cloud storage system.
@@ -5316,6 +6473,37 @@ store at the same time during upload.
DEFAULT: not set
+--cache-tmp-upload-path=PATH
+
+This is the path cache will use as temporary storage for new files that
+need to be uploaded to the cloud provider.
+
+Specifying a value will enable this feature. Without it, the feature is
+completely disabled and files will be uploaded directly to the cloud
+provider.
+
+DEFAULT: empty
+
+--cache-tmp-wait-time=DURATION
+
+This is the duration that a file must wait in the temporary location
+_cache-tmp-upload-path_ before it is selected for upload.
+
+Note that only one file is uploaded at a time and it can take longer to
+start the upload if a queue has formed for this purpose.
+
+DEFAULT: 15m
+
+--cache-db-wait-time=DURATION
+
+Only one process can have the DB open at any one time, so rclone waits
+for this duration for the DB to become available before it gives an
+error.
+
+If you set it to 0 then it will wait forever.
+
+DEFAULT: 1s
+
Crypt
@@ -5530,7 +6718,7 @@ Off
Standard
- file names encrypted
-- file names can't be as long (~156 characters)
+- file names can't be as long (~143 characters)
- can use sub paths and copy single files
- directory structure visible
- identical file names will have identical uploaded names
@@ -5578,7 +6766,7 @@ p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng/qgm4avr35m5loi1th53ato71v0
False
-Only encrypts file names, skips directory names Example: 1/12/123/txt is
+Only encrypts file names, skips directory names Example: 1/12/123.txt is
encrypted to 1/12/qgm4avr35m5loi1th53ato71v0
Modified time and hashes
@@ -6227,39 +7415,34 @@ This will guide you through an interactive setup process:
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
- 1 / Amazon Drive
- \ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
- \ "s3"
- 3 / Backblaze B2
- \ "b2"
- 4 / Dropbox
- \ "dropbox"
- 5 / Encrypt/Decrypt a remote
- \ "crypt"
- 6 / FTP Connection
- \ "ftp"
- 7 / Google Cloud Storage (this is not Google Drive)
- \ "google cloud storage"
- 8 / Google Drive
+ [snip]
+ 10 / Google Drive
\ "drive"
- 9 / Hubic
- \ "hubic"
- 10 / Local Disk
- \ "local"
- 11 / Microsoft OneDrive
- \ "onedrive"
- 12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
- \ "swift"
- 13 / SSH/SFTP Connection
- \ "sftp"
- 14 / Yandex Disk
- \ "yandex"
- Storage> 8
+ [snip]
+ Storage> drive
Google Application Client Id - leave blank normally.
client_id>
Google Application Client Secret - leave blank normally.
client_secret>
+ Scope that rclone should use when requesting access from drive.
+ Choose a number from below, or type in your own value
+ 1 / Full access all files, excluding Application Data Folder.
+ \ "drive"
+ 2 / Read-only access to file metadata and file contents.
+ \ "drive.readonly"
+ / Access to files created by rclone only.
+ 3 | These are visible in the drive website.
+ | File authorization is revoked when the user deauthorizes the app.
+ \ "drive.file"
+ / Allows read and write access to the Application Data folder.
+ 4 | This is not visible in the drive website.
+ \ "drive.appfolder"
+ / Allows read-only access to file metadata but
+ 5 | does not allow any access to read or download file content.
+ \ "drive.metadata.readonly"
+ scope> 1
+ ID of the root folder - leave blank normally. Fill in to access "Computers" folders. (see docs).
+ root_folder_id>
Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login.
service_account_file>
Remote config
@@ -6279,9 +7462,12 @@ This will guide you through an interactive setup process:
y/n> n
--------------------
[remote]
- client_id =
- client_secret =
- token = {"AccessToken":"xxxx.x.xxxxx_xxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"1/xxxxxxxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxx","Expiry":"2014-03-16T13:57:58.955387075Z","Extra":null}
+ client_id =
+ client_secret =
+ scope = drive
+ root_folder_id =
+ service_account_file =
+ token = {"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"2014-03-16T13:57:58.955387075Z"}
--------------------
y) Yes this is OK
e) Edit this remote
@@ -6309,6 +7495,83 @@ To copy a local directory to a drive directory called backup
rclone copy /home/source remote:backup
+Scopes
+
+Rclone allows you to select which scope you would like rclone to use.
+This changes what type of token is granted to rclone. The scopes are
+defined here.
+
+The scopes are:
+
+drive
+
+This is the default scope and allows full access to all files, except
+for the Application Data Folder (see below).
+
+Choose this one if you aren't sure.
+
+drive.readonly
+
+This allows read only access to all files. Files may be listed and
+downloaded but not uploaded, renamed or deleted.
+
+drive.file
+
+With this scope rclone can read/view/modify only those files and folders
+it creates.
+
+So if you uploaded files to drive via the web interface (or any other
+means) they will not be visible to rclone.
+
+This can be useful if you are using rclone to back up data and you want
+to be sure confidential data on your drive is not visible to rclone.
+
+Files created with this scope are visible in the web interface.
+
+drive.appfolder
+
+This gives rclone its own private area to store files. Rclone will not
+be able to see any other files on your drive and you won't be able to
+see rclone's files from the web interface either.
+
+drive.metadata.readonly
+
+This allows read only access to file names only. It does not allow
+rclone to download or upload data, or rename or delete files or
+directories.
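
As a sketch, whichever scope you pick simply ends up as the scope value in the remote's configuration; the remote name below is an example, and the token is the placeholder format used elsewhere in this manual:

```ini
[gdrive-readonly]
type = drive
scope = drive.readonly
token = {"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"2014-03-16T13:57:58.955387075Z"}
```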
+
+Root folder ID
+
+You can set the root_folder_id for rclone. This is the directory
+(identified by its Folder ID) that rclone considers to be the root of
+your drive.
+
+Normally you will leave this blank and rclone will determine the correct
+root to use itself.
+
+However you can set this to restrict rclone to a specific folder
+hierarchy or to access data within the "Computers" tab on the drive web
+interface (where files from Google's Backup and Sync desktop program
+go).
+
+In order to do this you will have to find the Folder ID of the directory
+you wish rclone to display. This will be the last segment of the URL
+when you open the relevant folder in the drive web interface.
+
+So if the folder you want rclone to use has a URL which looks like
+https://drive.google.com/drive/folders/1XyfxxxxxxxxxxxxxxxxxxxxxxxxxKHCh
+in the browser, then you use 1XyfxxxxxxxxxxxxxxxxxxxxxxxxxKHCh as the
+root_folder_id in the config.
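
Using the example URL above, the relevant part of the remote's configuration would look like this (other fields omitted):

```ini
[remote]
type = drive
root_folder_id = 1XyfxxxxxxxxxxxxxxxxxxxxxxxxxKHCh
```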
+
+NB folders under the "Computers" tab seem to be read only (drive gives a
+500 error) when using rclone.
+
+There doesn't appear to be an API to discover the folder IDs of the
+"Computers" tab - please contact us if you know otherwise!
+
+Note also that rclone can't access any data under the "Backups" tab on
+the google drive web interface yet.
+
Service Account support
You can set up rclone with Google Drive in an unattended mode, i.e. not
@@ -6316,15 +7579,74 @@ tied to a specific end-user Google account. This is useful when you want
to synchronise files onto machines that don't have actively logged-in
users, for example build machines.
-To create a service account and obtain its credentials, go to the Google
-Developer Console and use the "Create Credentials" button. After
-creating an account, a JSON file containing the Service Account's
-credentials will be downloaded onto your machine. These credentials are
-what rclone will use for authentication.
-
To use a Service Account instead of OAuth2 token flow, enter the path to
-your Service Account credentials at the service_account_file prompt and
-rclone won't use the browser based authentication flow.
+your Service Account credentials at the service_account_file prompt
+during rclone config and rclone won't use the browser based
+authentication flow.
+
+Use case - Google Apps/G-suite account and individual Drive
+
+Let's say that you are the administrator of a Google Apps (old) or
+G-suite account. The goal is to store data on an individual's Drive
+account, who IS a member of the domain. We'll call the domain
+EXAMPLE.COM, and the user FOO@EXAMPLE.COM.
+
+There's a few steps we need to go through to accomplish this:
+
+1. Create a service account for example.com
+
+- To create a service account and obtain its credentials, go to the
+ Google Developer Console.
+- You must have a project - create one if you don't.
+- Then go to "IAM & admin" -> "Service Accounts".
+- Use the "Create Credentials" button. Fill in "Service account name"
+ with something that identifies your client. "Role" can be empty.
+- Tick "Furnish a new private key" - select "Key type JSON".
+- Tick "Enable G Suite Domain-wide Delegation". This option makes
+ "impersonation" possible, as documented here: Delegating domain-wide
+ authority to the service account
+- These credentials are what rclone will use for authentication. If
+ you ever need to remove access, press the "Delete service account
+ key" button.
+
+2. Allowing API access to example.com Google Drive
+
+- Go to example.com's admin console
+- Go into "Security" (or use the search bar)
+- Select "Show more" and then "Advanced settings"
+- Select "Manage API client access" in the "Authentication" section
+- In the "Client Name" field enter the service account's "Client ID" -
+ this can be found in the Developer Console under "IAM & Admin" ->
+ "Service Accounts", then "View Client ID" for the newly created
+ service account. It is a ~21 character numerical string.
+- In the next field, "One or More API Scopes", enter
+ https://www.googleapis.com/auth/drive to grant access to Google
+ Drive specifically.
+
+3. Configure rclone, assuming a new install
+
+ rclone config
+
+ n/s/q> n # New
+    name> gdrive # gdrive is an example name
+ Storage> # Select the number shown for Google Drive
+ client_id> # Can be left blank
+ client_secret> # Can be left blank
+ scope> # Select your scope, 1 for example
+ root_folder_id> # Can be left blank
+ service_account_file> /home/foo/myJSONfile.json # This is where the JSON file goes!
+ y/n> # Auto config, y
+
+4. Verify that it's working
+
+- rclone -v --drive-impersonate foo@example.com lsf gdrive:backup
+- The arguments do:
+ - -v - verbose logging
+ - --drive-impersonate foo@example.com - this is what does the
+ magic, pretending to be user foo.
+ - lsf - list files in a parsing friendly way
+ - gdrive:backup - use the remote called gdrive, work in the folder
+ named backup.
Team drives
@@ -6374,8 +7696,8 @@ of that file.
Revisions follow the standard google policy which at time of writing was
-- They are deleted after 30 days or 100 revisions (whatever
- comes first).
+- They are deleted after 30 days or 100 revisions (whatever comes
+ first).
- They do not count towards a user storage quota.
Deleting files
@@ -6432,96 +7754,33 @@ My Spreadsheet.xlsx or My Spreadsheet.pdf etc.
Here are the possible extensions with their corresponding mime types.
- -------------------------------------
- Extension Mime Type Description
- ---------- ------------ -------------
- csv text/csv Standard CSV
- format for
- Spreadsheets
+ Extension Mime Type Description
+ --------------------------------------------------------------------------- ------------------------------------------------------------------------------------------ -------------------------------------------------------------------------------------------------
+ csv text/csv Standard CSV format for Spreadsheets
+  doc                                                                         application/msword                                                                         Microsoft Office Document
+ docx application/vnd.openxmlformats-officedocument.wordprocessingml.document Microsoft Office Document
+ epub application/epub+zip E-book format
+ html text/html An HTML Document
+ jpg image/jpeg A JPEG Image File
+ odp application/vnd.oasis.opendocument.presentation Openoffice Presentation
+ ods application/vnd.oasis.opendocument.spreadsheet Openoffice Spreadsheet
+ ods application/x-vnd.oasis.opendocument.spreadsheet Openoffice Spreadsheet
+ odt application/vnd.oasis.opendocument.text Openoffice Document
+ pdf application/pdf Adobe PDF Format
+ png image/png PNG Image Format
+ pptx application/vnd.openxmlformats-officedocument.presentationml.presentation Microsoft Office Powerpoint
+ rtf application/rtf Rich Text Format
+ svg image/svg+xml Scalable Vector Graphics Format
+ tsv text/tab-separated-values Standard TSV format for spreadsheets
+ txt text/plain Plain Text
+ xls application/vnd.ms-excel Microsoft Office Spreadsheet
+ xlsx application/vnd.openxmlformats-officedocument.spreadsheetml.sheet Microsoft Office Spreadsheet
+  zip                                                                         application/zip                                                                            A ZIP file of HTML, Images and CSS
- doc application/ Micosoft
- msword Office
- Document
+--drive-impersonate user
- docx application/ Microsoft
- vnd.openxmlf Office
- ormats-offic Document
- edocument.wo
- rdprocessing
- ml.document
-
- epub application/ E-book format
- epub+zip
-
- html text/html An HTML
- Document
-
- jpg image/jpeg A JPEG Image
- File
-
- odp application/ Openoffice
- vnd.oasis.op Presentation
- endocument.p
- resentation
-
- ods application/ Openoffice
- vnd.oasis.op Spreadsheet
- endocument.s
- preadsheet
-
- ods application/ Openoffice
- x-vnd.oasis. Spreadsheet
- opendocument
- .spreadsheet
-
- odt application/ Openoffice
- vnd.oasis.op Document
- endocument.t
- ext
-
- pdf application/ Adobe PDF
- pdf Format
-
- png image/png PNG Image
- Format
-
- pptx application/ Microsoft
- vnd.openxmlf Office
- ormats-offic Powerpoint
- edocument.pr
- esentationml
- .presentatio
- n
-
- rtf application/ Rich Text
- rtf Format
-
- svg image/svg+xm Scalable
- l Vector
- Graphics
- Format
-
- tsv text/tab-sep Standard TSV
- arated-value format for
- s spreadsheets
-
- txt text/plain Plain Text
-
- xls application/ Microsoft
- vnd.ms-excel Office
- Spreadsheet
-
- xlsx application/ Microsoft
- vnd.openxmlf Office
- ormats-offic Spreadsheet
- edocument.sp
- readsheetml.
- sheet
-
- zip application/ A ZIP file of
- zip HTML, Images
- CSS
- -------------------------------------
+When using a service account, this instructs rclone to impersonate the
+user passed in.
--drive-list-chunk int
@@ -6529,7 +7788,12 @@ Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me
-Only show files that are shared with me
+Instructs rclone to operate on your "Shared with me" folder (where
+Google Drive lets you access the files and folders others have shared
+with you).
+
+This works both with the "list" (lsd, lsl, etc) and the "copy" commands
+(copy, sync, etc), and with all other commands too.
--drive-skip-gdocs
@@ -6551,6 +7815,27 @@ Controls whether files are sent to the trash or deleted permanently.
Defaults to true, namely sending files to the trash. Use
--drive-use-trash=false to delete files permanently instead.
+--drive-use-created-date
+
+Use the file creation date in place of the modification date. Defaults
+to false.
+
+Useful when downloading data and you want the creation date used in
+place of the last modified date.
+
+WARNING: This flag may have some unexpected consequences.
+
+When uploading to your drive all files will be overwritten unless they
+haven't been modified since their creation. And the inverse will occur
+while downloading. This side effect can be avoided by using the
+--checksum flag.
+
+This feature was implemented to retain photos' capture date as recorded
+by google photos. You will first need to check the "Create a Google
+Photos folder" option in your google drive settings. You can then copy
+or move the photos locally and have the date the image was taken
+(created) set as the modification date.
+
Limitations
Drive has quite a lot of rate limiting. This causes rclone to be limited
@@ -6563,6 +7848,21 @@ User rate limit exceeded errors, wait at least 24 hours and retry. You
can disable server side copies with --disable copy to download and
upload the files if you prefer.
+Limitations of Google Docs
+
+Google docs will appear as size -1 in rclone ls and as size 0 in
+anything which uses the VFS layer, eg rclone mount, rclone serve.
+
+This is because rclone can't find out the size of the Google docs
+without downloading them.
+
+Google docs will transfer correctly with rclone sync, rclone copy etc as
+rclone knows to ignore the size when doing the transfer.
+
+However an unfortunate consequence of this is that you can't download
+Google docs using rclone mount - you will get a 0 sized file. If you try
+again the doc may gain its correct size and be downloadable.
+
Duplicated files
Sometimes, for no reason I've been able to track down, drive will
@@ -6579,23 +7879,9 @@ Android duplicates files on drive sometimes.
Rclone appears to be re-copying files it shouldn't
-There are two possible reasons for rclone to recopy files which haven't
-changed to Google Drive.
-
-The first is the duplicated file issue above - run rclone dedupe and
-check your logs for duplicate object or directory messages.
-
-The second is that sometimes Google reports different sizes for the
-Google Docs exports which will cause rclone to re-download Google Docs
-for no apparent reason. --ignore-size is a not very satisfactory
-work-around for this if it is causing you a lot of problems.
-
-Google docs downloads sometimes fail with "Failed to copy: read X bytes expecting Y"
-
-This is the same problem as above. Google reports the google doc is one
-size, but rclone downloads a different size. Work-around with the
---ignore-size flag or wait for rclone to retry the download which it
-will.
+The most likely cause of this is the duplicated file issue above - run
+rclone dedupe and check your logs for duplicate object or directory
+messages.
Making your own client_id
@@ -7189,11 +8475,6 @@ Here are the command line options specific to this cloud storage system.
Above this size files will be chunked - must be multiple of 320k. The
default is 10MB. Note that the chunks will be buffered into memory.
---onedrive-upload-cutoff=SIZE
-
-Cutoff for switching to chunked upload - must be <= 100MB. The default
-is 10MB.
-
Limitations
Note that OneDrive is case insensitive so you can't have a file called
@@ -7207,6 +8488,33 @@ mapped to ? instead.
The largest allowed file size is 10GiB (10,737,418,240 bytes).
+Versioning issue
+
+Every change in OneDrive causes the service to create a new version.
+This counts against a user's quota.
+For example changing the modification time of a file creates a second
+version, so the file is using twice the space.
+
+The copy command is the only rclone command affected by this, as we copy
+the file and then afterwards set the modification time to match the
+source file.
+
+User Weropol has found a method to disable versioning on OneDrive
+
+1. Open the settings menu by clicking on the gear symbol at the top of
+ the OneDrive Business page.
+2. Click Site settings.
+3. Once on the Site settings page, navigate to Site Administration >
+ Site libraries and lists.
+4. Click Customize "Documents".
+5. Click General Settings > Versioning Settings.
+6. Under Document Version History select the option No versioning.
+ Note: This will disable the creation of new file versions, but will
+ not remove any previous versions. Your documents are safe.
+7. Apply the changes by clicking OK.
+8. Use rclone to upload or modify files. (I also use the
+ --no-update-modtime flag)
+9. Restore the versioning settings after using rclone. (Optional)
+
QingStor
@@ -7878,6 +9186,9 @@ instance /home/$USER/.ssh/id_rsa.
If you don't specify pass or key_file then rclone will attempt to
contact an ssh-agent.
+If you set the --sftp-ask-password option, rclone will prompt for a
+password when needed and no password has been configured.
+
ssh-agent on macOS
Note that there seem to be various problems with using an ssh-agent on
@@ -7892,16 +9203,35 @@ And then at the end of the session
These commands can be used in scripts of course.
+Specific options
+
+Here are the command line options specific to this remote.
+
+--sftp-ask-password
+
+Ask for the SFTP password if needed when no password has been
+configured.
+
Modified time
Modified times are stored on the server to 1 second precision.
Modified times are used in syncing and are fully supported.
+Some SFTP servers disable setting/modifying the file modification time
+after upload (for example, certain configurations of ProFTPd with
+mod_sftp). If you are using one of these servers, you can set the option
+set_modtime = false in your rclone backend configuration to disable this
+behaviour.
+
Limitations
SFTP supports checksums if the same login has shell access and md5sum or
-sha1sum as well as echo are in the remote's PATH.
+sha1sum as well as echo are in the remote's PATH. This remote check can
+be disabled by setting the configuration option disable_hashcheck. This
+may be required if you're connecting to SFTP servers which are not under
+your control, and to which the execution of remote commands is
+prohibited.
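
A minimal sketch of an SFTP remote with the set_modtime and disable_hashcheck options described above; the host and user values are examples:

```ini
[mysftp]
type = sftp
host = sftp.example.com
user = backup
# Skip the remote md5sum/sha1sum check (server forbids remote commands)
disable_hashcheck = true
# Server rejects setting the modification time after upload
set_modtime = false
```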
The only ssh agent supported under Windows is Putty's pageant.
@@ -8356,6 +9686,185 @@ points, as you explicitly acknowledge that they should be skipped.
Changelog
+- v1.40 - 2018-03-19
+ - New backends
+ - Alias backend to create aliases for existing remote names
+ (Fabian Möller)
+ - New commands
+ - lsf: list for parsing purposes (Jakub Tasiemski)
+ - by default this is a simple non recursive list of files and
+ directories
+ - it can be configured to add more info in an easy to parse
+ way
+ - serve restic: for serving a remote as a Restic REST endpoint
+ - This enables restic to use any backends that rclone can
+ access
+ - Thanks Alexander Neumann for help, patches and review
+ - rc: enable the remote control of a running rclone
+ - The running rclone must be started with --rc and related
+ flags.
+ - Currently there is support for bwlimit, and flushing for
+ mount and cache.
+ - New Features
+ - --max-delete flag to add a delete threshold (Bjørn Erik
+ Pedersen)
+ - All backends now support RangeOption for ranged Open
+ - cat: Use RangeOption for limited fetches to make more
+ efficient
+ - cryptcheck: make reading of nonce more efficient with
+ RangeOption
+ - serve http/webdav/restic
+ - support SSL/TLS
+ - add --user --pass and --htpasswd for authentication
+ - copy/move: detect file size change during copy/move and abort
+ transfer (ishuah)
+ - cryptdecode: added option to return encrypted file names.
+ (ishuah)
+ - lsjson: add --encrypted to show encrypted name (Jakub Tasiemski)
+ - Add --stats-file-name-length to specify the printed file name
+ length for stats (Will Gunn)
+ - Compile
+ - Code base was shuffled and factored
+ - backends moved into a backend directory
+ - large packages split up
+ - See the CONTRIBUTING.md doc for info as to what lives where
+ now
+ - Update to using go1.10 as the default go version
+ - Implement daily full integration tests
+ - Release
+ - Include a source tarball and sign it and the binaries
+ - Sign the git tags as part of the release process
+ - Add .deb and .rpm packages as part of the build
+ - Make a beta release for all branches on the main repo (but not
+ pull requests)
+ - Bug Fixes
+ - config: fixes errors on non existing config by loading config
+ file only on first access
+ - config: retry saving the config after failure (Mateusz)
+ - sync: when using --backup-dir don't delete files if we can't set
+ their modtime
+ - this fixes odd behaviour with Dropbox and --backup-dir
+ - fshttp: fix idle timeouts for HTTP connections
+ - serve http: fix serving files with : in - fixes
+ - Fix --exclude-if-present to ignore directories which it doesn't
+ have permission for (Iakov Davydov)
+ - Make accounting work properly with crypt and b2
+ - remove --no-traverse flag because it is obsolete
+ - Mount
+ - Add --attr-timeout flag to control attribute caching in kernel
+ - this now defaults to 0 which is correct but less efficient
+ - see the mount docs for more info
+ - Add --daemon flag to allow mount to run in the background
+ (ishuah)
+ - Fix: Return ENOSYS rather than EIO on attempted link
+ - This fixes FileZilla accessing an rclone mount served over
+ sftp.
+ - Fix setting modtime twice
+ - Mount tests now run on CI for Linux (mount & cmount)/Mac/Windows
+ - Many bugs fixed in the VFS layer - see below
+ - VFS
+ - Many fixes for --vfs-cache-mode writes and above
+ - Update cached copy if we know it has changed (fixes stale
+ data)
+ - Clean path names before using them in the cache
+ - Disable cache cleaner if --vfs-cache-poll-interval=0
+ - Fill and clean the cache immediately on startup
+ - Fix Windows opening every file when it stats the file
+ - Fix applying modtime for an open Write Handle
+ - Fix creation of files when truncating
+ - Write 0 bytes when flushing unwritten handles to avoid race
+ conditions in FUSE
+ - Downgrade "poll-interval is not supported" message to Info
+ - Make OpenFile and friends return EINVAL if O_RDONLY and O_TRUNC
+ - Local
+ - Downgrade "invalid cross-device link: trying copy" to debug
+ - Make DirMove return fs.ErrorCantDirMove to allow fallback to
+ Copy for cross device
+ - Fix race conditions updating the hashes
+ - Cache
+ - Add support for polling - cache will update when remote changes
+ on supported backends
+ - Reduce log level for Plex api
+ - Fix dir cache issue
+ - Implement --cache-db-wait-time flag
+ - Improve efficiency with RangeOption and RangeSeek
+ - Fix dirmove with temp fs enabled
+ - Notify vfs when using temp fs
+ - Offline uploading
+ - Remote control support for path flushing
+ - Amazon cloud drive
+ - Rclone no longer has any working keys - disable integration
+ tests
+ - Implement DirChangeNotify to notify cache/vfs/mount of changes
+ - Azureblob
+ - Don't check for bucket/container presence if listing was OK
+ - this makes rclone do one less request per invocation
+ - Improve accounting for chunked uploads
+ - Backblaze B2
+ - Don't check for bucket/container presence if listing was OK
+ - this makes rclone do one less request per invocation
+ - Box
+ - Improve accounting for chunked uploads
+ - Dropbox
+ - Fix custom oauth client parameters
+ - Google Cloud Storage
+ - Don't check for bucket/container presence if listing was OK
+ - this makes rclone do one less request per invocation
+ - Google Drive
+ - Migrate to api v3 (Fabian Möller)
+ - Add scope configuration and root folder selection
+ - Add --drive-impersonate for service accounts
+ - thanks to everyone who tested, explored and contributed docs
+ - Add --drive-use-created-date to use created date as modified
+ date (nbuchanan)
+ - Request the export formats only when required
+ - This makes rclone quicker when there are no google docs
+ - Fix finding paths with latin1 chars (a workaround for a drive
+ bug)
+ - Fix copying of a single Google doc file
+ - Fix --drive-auth-owner-only to look in all directories
+ - HTTP
+ - Fix handling of directories with & in
+ - Onedrive
+ - Removed upload cutoff and always do session uploads
+ - this stops the creation of multiple versions on business
+ onedrive
+ - Overwrite object size value with real size when reading file.
+ (Victor)
+ - this fixes oddities when onedrive misreports the size of
+ images
+ - Pcloud
+ - Remove unused chunked upload flag and code
+ - Qingstor
+ - Don't check for bucket/container presence if listing was OK
+ - this makes rclone do one less request per invocation
+ - S3
+ - Support hashes for multipart files (Chris Redekop)
+ - Initial support for IBM COS (S3) (Giri Badanahatti)
+ - Update docs to discourage use of v2 auth with CEPH and others
+ - Don't check for bucket/container presence if listing was OK
+ - this makes rclone do one less request per invocation
+ - Fix server side copy and set modtime on files with + in
+ - SFTP
+ - Add option to disable remote hash check command execution (Jon
+ Fautley)
+ - Add --sftp-ask-password flag to prompt for password when needed
+ (Leo R. Lundgren)
+ - Add set_modtime configuration option
+ - Fix following of symlinks
+ - Fix reading config file outside of Fs setup
+ - Fix reading $USER in username fallback not $HOME
+ - Fix running under crontab - Use correct OS way of reading
+ username
+ - Swift
+ - Fix refresh of authentication token
+ - in v1.39 a bug was introduced which ignored new tokens -
+ this fixes it
+ - Fix extra HEAD transaction when uploading a new file
+ - Don't check for bucket/container presence if listing was OK
+ - this makes rclone do one less request per invocation
+ - Webdav
+ - Add new time formats to support mydrive.ch and others
- v1.39 - 2017-12-23
- New backends
- WebDAV
@@ -8366,13 +9875,13 @@ Changelog
- NB this feature is in beta so use with care
- New commands
- serve command with subcommands:
- - serve webdav: this implements a webdav server for any
- rclone remote.
+ - serve webdav: this implements a webdav server for any rclone
+ remote.
- serve http: command to serve a remote over HTTP
- config: add sub commands for full config file management
- create/delete/dump/edit/file/password/providers/show/update
- - touch: to create or update the timestamp of a file
- (Jakub Tasiemski)
+ - touch: to create or update the timestamp of a file (Jakub
+ Tasiemski)
- New Features
- curl install for rclone (Filip Bartodziej)
- --stats now shows percentage, size, rate and ETA in condensed
@@ -8404,10 +9913,10 @@ Changelog
- --vfs-cache mode to make writes into mounts more reliable.
- this requires caching files on the disk (see --cache-dir)
- As this is a new feature, use with care
- - Use sdnotify to signal systemd the mount is ready
- (Fabian Möller)
- - Check if directory is not empty before mounting
- (Ernest Borowski)
+ - Use sdnotify to signal systemd the mount is ready (Fabian
+ Möller)
+ - Check if directory is not empty before mounting (Ernest
+ Borowski)
- Local
- Add error message for cross file system moves
- Fix equality check for times
@@ -8424,22 +9933,22 @@ Changelog
- Google Drive
- Add service account support (Tim Cooijmans)
- S3
- - Make it work properly with Digital Ocean Spaces
- (Andrew Starr-Bochicchio)
+ - Make it work properly with Digital Ocean Spaces (Andrew
+ Starr-Bochicchio)
- Fix crash if a bad listing is received
- Add support for ECS task IAM roles (David Minor)
- Backblaze B2
- Fix multipart upload retries
- Fix --hard-delete to make it work 100% of the time
- Swift
- - Allow authentication with storage URL and auth key
- (Giovanni Pizzi)
+ - Allow authentication with storage URL and auth key (Giovanni
+ Pizzi)
- Add new fields for swift configuration to support IBM Bluemix
Swift (Pierre Carlson)
- Add OS_TENANT_ID and OS_USER_ID to config
- Allow configs with user id instead of user name
- - Check if swift segments container exists before creating
- (John Leach)
+ - Check if swift segments container exists before creating (John
+ Leach)
- Fix memory leak in swift transfers (upstream fix)
- SFTP
- Add option to enable the use of aes128-cbc cipher (Jon Fautley)
@@ -8469,8 +9978,8 @@ Changelog
- dedupe - implement merging of duplicate directories
- check and cryptcheck made more consistent and use less memory
- cleanup for remaining remotes (thanks ishuah)
- - --immutable for ensuring that files don't change (thanks
- Jacob McNamee)
+ - --immutable for ensuring that files don't change (thanks Jacob
+ McNamee)
- --user-agent option (thanks Alex McGrath Kraak)
- --disable flag to disable optional features
- --bind flag for choosing the local addr on outgoing connections
@@ -8483,8 +9992,8 @@ Changelog
- Improve retriable error detection which makes multipart uploads
better
- Make check obey --ignore-size
- - Fix bwlimit toggle in conjunction with schedules
- (thanks cbruegg)
+ - Fix bwlimit toggle in conjunction with schedules (thanks
+ cbruegg)
- config ensures newly written config is on the same mount
- Local
- Revert to copy when moving file across file system boundaries
@@ -8530,8 +10039,8 @@ Changelog
- FTP - thanks to Antonio Messina
- HTTP - thanks to Vasiliy Tolstov
- New commands
- - rclone ncdu - for exploring a remote with a text based
- user interface.
+ - rclone ncdu - for exploring a remote with a text based user
+ interface.
- rclone lsjson - for listing with a machine readable output
- rclone dbhashsum - to show Dropbox style hashes of files (local
or Dropbox)
@@ -8656,8 +10165,8 @@ Changelog
- -vv is for full debug
- --syslog to log to syslog on capable platforms
- Implement --backup-dir and --suffix
- - Implement --track-renames (initial implementation by Bjørn
- Erik Pedersen)
+ - Implement --track-renames (initial implementation by Bjørn Erik
+ Pedersen)
- Add time-based bandwidth limits (Lukas Loesche)
- rclone cryptcheck: checks integrity of crypt remotes
- Allow all config file variables and options to be set from
@@ -8815,8 +10324,8 @@ Changelog
- --default-permissions, --write-back-cache, --max-read-ahead,
--umask, --uid, --gid
- Add --dir-cache-time to control caching of directory entries
- - Implement seek for files opened for read (useful for
- video players)
+ - Implement seek for files opened for read (useful for video
+ players)
- with -no-seek flag to disable
- Fix crash on 32 bit ARM (alignment of 64 bit counter)
- ...and many more internal fixes and improvements!
@@ -8859,10 +10368,10 @@ Changelog
- data encrypted in NACL secretbox format
- with optional file name encryption
- New commands
- - rclone mount - implements FUSE mounting of
- remotes (EXPERIMENTAL)
- - works on Linux, FreeBSD and OS X (need testers for the
- last 2!)
+ - rclone mount - implements FUSE mounting of remotes
+ (EXPERIMENTAL)
+ - works on Linux, FreeBSD and OS X (need testers for the last
+ 2!)
- rclone cat - outputs remote file or files to the terminal
- rclone genautocomplete - command to make a bash completion
script for rclone
@@ -8880,8 +10389,8 @@ Changelog
- New B2 API endpoint (thanks Per Cederberg)
- Set maximum backoff to 5 Minutes
- onedrive
- - Fix URL escaping in file names - eg uploading files with +
- in them.
+ - Fix URL escaping in file names - eg uploading files with + in
+ them.
- amazon cloud drive
- Fix token expiry during large uploads
- Work around 408 REQUEST_TIMEOUT and 504 GATEWAY_TIMEOUT errors
@@ -8896,8 +10405,8 @@ Changelog
- Reduce memory on sync by about 50%
- Implement --no-traverse flag to stop copy traversing the
destination remote.
- - This can be used to reduce memory usage down to the
- smallest possible.
+ - This can be used to reduce memory usage down to the smallest
+ possible.
- Useful to copy a small number of files into a large
destination folder.
- Implement cleanup command for emptying trash / removing old
@@ -8918,16 +10427,16 @@ Changelog
- Rename Amazon Cloud Drive to Amazon Drive - no changes to config
file needed.
- Swift
- - Add support for non-default project domain - thanks
- Antonio Messina.
+ - Add support for non-default project domain - thanks Antonio
+ Messina.
- S3
- Add instructions on how to use rclone with minio.
- Add ap-northeast-2 (Seoul) and ap-south-1 (Mumbai) regions.
- - Skip setting the modified time for objects > 5GB as it
- isn't possible.
+ - Skip setting the modified time for objects > 5GB as it isn't
+ possible.
- Backblaze B2
- - Add --b2-versions flag so old versions can be listed
- and retreived.
+ - Add --b2-versions flag so old versions can be listed and
+ retrieved.
- Treat 403 errors (eg cap exceeded) as fatal.
- Implement cleanup command for deleting old file versions.
- Make error handling compliant with B2 integrations notes.
@@ -8997,8 +10506,8 @@ Changelog
the rest to be different.
- Bug fixes
- Make rclone check obey the --size-only flag.
- - Use "application/octet-stream" if discovered mime type
- is invalid.
+ - Use "application/octet-stream" if discovered mime type is
+ invalid.
- Fix missing "quit" option when there are no remotes.
- Google Drive
- Increase default chunk size to 8 MB - increases upload speed of
@@ -9032,8 +10541,8 @@ Changelog
- Don't make directories if --dry-run set
- Fix and document the move command
- Fix redirecting stderr on unix-like OSes when using --log-file
- - Fix delete command to wait until all finished - fixes
- missing deletes.
+ - Fix delete command to wait until all finished - fixes missing
+ deletes.
- Backblaze B2
- Use one upload URL per go routine fixes
more than one upload using auth token
@@ -9061,10 +10570,10 @@ Changelog
- Add support for multiple hash types - we now check SHA1 as well
as MD5 hashes.
- delete command which does obey the filters (unlike purge)
- - dedupe command to deduplicate a remote. Useful with
- Google Drive.
- - Add --ignore-existing flag to skip all files that exist
- on destination.
+ - dedupe command to deduplicate a remote. Useful with Google
+ Drive.
+ - Add --ignore-existing flag to skip all files that exist on
+ destination.
- Add --delete-before, --delete-during, --delete-after flags.
- Add --memprofile flag to debug memory use.
- Warn the user about files with same name but different case
@@ -9107,8 +10616,8 @@ Changelog
- Re-enable server side copy
- Don't mask HTTP error codes with JSON decode error
- S3
- - Fix corrupting Content-Type on mod time update (thanks
- Joseph Spurrier)
+ - Fix corrupting Content-Type on mod time update (thanks Joseph
+ Spurrier)
- v1.25 - 2015-11-14
- New features
- Implement Hubic storage system
@@ -9477,6 +10986,10 @@ time which is important for SSL to work properly.
curl -o /etc/ssl/certs/ca-certificates.crt https://raw.githubusercontent.com/bagder/ca-bundle/master/ca-bundle.crt
ntpclient -s -h pool.ntp.org
+The two environment variables SSL_CERT_FILE and SSL_CERT_DIR, mentioned
+in the x509 package, provide an additional way to supply the SSL root
+certificates.
+
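For example, a CA bundle can be selected for rclone (or any other Go program) like this; the certificate path shown is an assumption and depends on your distribution:

```shell
# Point Go's TLS stack at an explicit CA bundle. The path below is an
# assumption - substitute wherever your system keeps its certificates.
export SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt

# Alternatively, SSL_CERT_DIR can name a directory of individual certs.
# Any rclone command run from this shell now uses the chosen bundle, e.g.
#   rclone lsd remote:
echo "using CA bundle: $SSL_CERT_FILE"
```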
Note that you may need to add the --insecure option to the curl command
line if it doesn't work without.
@@ -9511,6 +11024,11 @@ If you are using systemd-resolved (default on Arch Linux), ensure it is
at version 233 or higher. Previous releases contain a bug which causes
not all domains to be resolved properly.
+Additionally, the GODEBUG=netdns= environment variable can influence
+which resolver Go uses, which also makes it possible to work around
+certain DNS resolution issues. See the name resolution section in the
+go docs.
+
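As a sketch, the variable is set like any other environment variable; the value go forces the pure Go resolver, while cgo selects the system resolver:

```shell
# Select the pure Go DNS resolver for programs started from this shell;
# GODEBUG=netdns=cgo would select the cgo (system) resolver instead.
export GODEBUG=netdns=go

# rclone is one example of an affected Go binary, e.g.
#   rclone lsd remote:
echo "resolver setting: $GODEBUG"
```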
License
@@ -9614,7 +11132,7 @@ Contributors
- Steven Lu tacticalazn@gmail.com
- Sjur Fredriksen sjurtf@ifi.uio.no
- Ruwbin hubus12345@gmail.com
-- Fabian Möller fabianm88@gmail.com
+- Fabian Möller fabianm88@gmail.com f.moeller@nynex.de
- Edward Q. Bridges github@eqbridges.com
- Vasiliy Tolstov v.tolstov@selfip.ru
- Harshavardhana harsha@minio.io
@@ -9624,7 +11142,7 @@ Contributors
- John Papandriopoulos jpap@users.noreply.github.com
- Zhiming Wang zmwangx@gmail.com
- Andy Pilate cubox@cubox.me
-- Oliver Heyme olihey@googlemail.com
+- Oliver Heyme olihey@googlemail.com olihey@users.noreply.github.com
- wuyu wuyu@yunify.com
- Andrei Dragomir adragomi@adobe.com
- Christian Brüggemann mail@cbruegg.com
@@ -9648,8 +11166,7 @@ Contributors
- Pierre Carlson mpcarl@us.ibm.com
- Ernest Borowski er.borowski@gmail.com
- Remus Bunduc remus.bunduc@gmail.com
-- Iakov Davydov iakov.davydov@unil.ch
-- Fabian Möller f.moeller@nynex.de
+- Iakov Davydov iakov.davydov@unil.ch dav05.gith@myths.ru
- Jakub Tasiemski tasiemski@gmail.com
- David Minor dminor@saymedia.com
- Tim Cooijmans cooijmans.tim@gmail.com
@@ -9659,6 +11176,24 @@ Contributors
- Jon Fautley jon@dead.li
- lewapm 32110057+lewapm@users.noreply.github.com
- Yassine Imounachen yassine256@gmail.com
+- Chris Redekop chris-redekop@users.noreply.github.com
+- Jon Fautley jon@adenoid.appstal.co.uk
+- Will Gunn WillGunn@users.noreply.github.com
+- Lucas Bremgartner lucas@bremis.ch
+- Jody Frankowski jody.frankowski@gmail.com
+- Andreas Roussos arouss1980@gmail.com
+- nbuchanan nbuchanan@utah.gov
+- Durval Menezes rclone@durval.com
+- Victor vb-github@viblo.se
+- Mateusz pabian.mateusz@gmail.com
+- Daniel Loader spicypixel@gmail.com
+- David0rk davidork@gmail.com
+- Alexander Neumann alexander@bumpern.de
+- Giri Badanahatti gbadanahatti@us.ibm.com@Giris-MacBook-Pro.local
+- Leo R. Lundgren leo@finalresort.org
+- wolfv wolfv6@users.noreply.github.com
+- Dave Pedu dave@davepedu.com
+- Stefan Lindblom lindblom@spotify.com
diff --git a/docs/content/changelog.md b/docs/content/changelog.md
index 551fbc4db..ca7533640 100644
--- a/docs/content/changelog.md
+++ b/docs/content/changelog.md
@@ -1,12 +1,158 @@
---
title: "Documentation"
description: "Rclone Changelog"
-date: "2017-12-23"
+date: "2018-03-19"
---
Changelog
---------
+ * v1.40 - 2018-03-19
+ * New backends
+ * Alias backend to create aliases for existing remote names (Fabian Möller)
+ * New commands
+ * `lsf`: list for parsing purposes (Jakub Tasiemski)
+ * by default this is a simple non recursive list of files and directories
+ * it can be configured to add more info in an easy to parse way
+ * `serve restic`: for serving a remote as a Restic REST endpoint
+ * This enables restic to use any backends that rclone can access
+ * Thanks Alexander Neumann for help, patches and review
+ * `rc`: enable the remote control of a running rclone
+ * The running rclone must be started with --rc and related flags.
+ * Currently there is support for bwlimit, and flushing for mount and cache.
+ * New Features
+ * `--max-delete` flag to add a delete threshold (Bjørn Erik Pedersen)
+ * All backends now support RangeOption for ranged Open
+ * `cat`: Use RangeOption for limited fetches to make more efficient
+ * `cryptcheck`: make reading of nonce more efficient with RangeOption
+ * serve http/webdav/restic
+ * support SSL/TLS
+ * add `--user` `--pass` and `--htpasswd` for authentication
+ * `copy`/`move`: detect file size change during copy/move and abort transfer (ishuah)
+ * `cryptdecode`: added option to return encrypted file names. (ishuah)
+ * `lsjson`: add `--encrypted` to show encrypted name (Jakub Tasiemski)
+ * Add `--stats-file-name-length` to specify the printed file name length for stats (Will Gunn)
+ * Compile
+ * Code base was shuffled and factored
+ * backends moved into a backend directory
+ * large packages split up
+ * See the CONTRIBUTING.md doc for info as to what lives where now
+ * Update to using go1.10 as the default go version
+ * Implement daily [full integration tests](https://pub.rclone.org/integration-tests/)
+ * Release
+ * Include a source tarball and sign it and the binaries
+ * Sign the git tags as part of the release process
+ * Add .deb and .rpm packages as part of the build
+ * Make a beta release for all branches on the main repo (but not pull requests)
+ * Bug Fixes
+ * config: fixes errors on non existing config by loading config file only on first access
+ * config: retry saving the config after failure (Mateusz)
+ * sync: when using `--backup-dir` don't delete files if we can't set their modtime
+ * this fixes odd behaviour with Dropbox and `--backup-dir`
+ * fshttp: fix idle timeouts for HTTP connections
+ * `serve http`: fix serving files with : in - fixes
+ * Fix `--exclude-if-present` to ignore directories which it doesn't have permission for (Iakov Davydov)
+ * Make accounting work properly with crypt and b2
+ * remove `--no-traverse` flag because it is obsolete
+ * Mount
+ * Add `--attr-timeout` flag to control attribute caching in kernel
+ * this now defaults to 0 which is correct but less efficient
+ * see [the mount docs](/commands/rclone_mount/#attribute-caching) for more info
+ * Add `--daemon` flag to allow mount to run in the background (ishuah)
+ * Fix: Return ENOSYS rather than EIO on attempted link
+ * This fixes FileZilla accessing an rclone mount served over sftp.
+ * Fix setting modtime twice
+ * Mount tests now run on CI for Linux (mount & cmount)/Mac/Windows
+ * Many bugs fixed in the VFS layer - see below
+ * VFS
+ * Many fixes for `--vfs-cache-mode` writes and above
+ * Update cached copy if we know it has changed (fixes stale data)
+ * Clean path names before using them in the cache
+ * Disable cache cleaner if `--vfs-cache-poll-interval=0`
+ * Fill and clean the cache immediately on startup
+ * Fix Windows opening every file when it stats the file
+ * Fix applying modtime for an open Write Handle
+ * Fix creation of files when truncating
+ * Write 0 bytes when flushing unwritten handles to avoid race conditions in FUSE
+ * Downgrade "poll-interval is not supported" message to Info
+ * Make OpenFile and friends return EINVAL if O_RDONLY and O_TRUNC
+ * Local
+ * Downgrade "invalid cross-device link: trying copy" to debug
+ * Make DirMove return fs.ErrorCantDirMove to allow fallback to Copy for cross device
+ * Fix race conditions updating the hashes
+ * Cache
+ * Add support for polling - cache will update when remote changes on supported backends
+ * Reduce log level for Plex api
+ * Fix dir cache issue
+ * Implement `--cache-db-wait-time` flag
+ * Improve efficiency with RangeOption and RangeSeek
+ * Fix dirmove with temp fs enabled
+ * Notify vfs when using temp fs
+ * Offline uploading
+ * Remote control support for path flushing
+ * Amazon cloud drive
+ * Rclone no longer has any working keys - disable integration tests
+ * Implement DirChangeNotify to notify cache/vfs/mount of changes
+ * Azureblob
+ * Don't check for bucket/container presence if listing was OK
+ * this makes rclone do one less request per invocation
+ * Improve accounting for chunked uploads
+ * Backblaze B2
+ * Don't check for bucket/container presence if listing was OK
+ * this makes rclone do one less request per invocation
+ * Box
+ * Improve accounting for chunked uploads
+ * Dropbox
+ * Fix custom oauth client parameters
+ * Google Cloud Storage
+ * Don't check for bucket/container presence if listing was OK
+ * this makes rclone do one less request per invocation
+ * Google Drive
+ * Migrate to api v3 (Fabian Möller)
+ * Add scope configuration and root folder selection
+ * Add `--drive-impersonate` for service accounts
+ * thanks to everyone who tested, explored and contributed docs
+ * Add `--drive-use-created-date` to use created date as modified date (nbuchanan)
+ * Request the export formats only when required
+ * This makes rclone quicker when there are no google docs
+ * Fix finding paths with latin1 chars (a workaround for a drive bug)
+ * Fix copying of a single Google doc file
+ * Fix `--drive-auth-owner-only` to look in all directories
+ * HTTP
+ * Fix handling of directories with & in
+ * Onedrive
+ * Removed upload cutoff and always do session uploads
+ * this stops the creation of multiple versions on business onedrive
+ * Overwrite object size value with real size when reading file. (Victor)
+ * this fixes oddities when onedrive misreports the size of images
+ * Pcloud
+ * Remove unused chunked upload flag and code
+ * Qingstor
+ * Don't check for bucket/container presence if listing was OK
+ * this makes rclone do one less request per invocation
+ * S3
+ * Support hashes for multipart files (Chris Redekop)
+ * Initial support for IBM COS (S3) (Giri Badanahatti)
+ * Update docs to discourage use of v2 auth with CEPH and others
+ * Don't check for bucket/container presence if listing was OK
+ * this makes rclone do one less request per invocation
+ * Fix server side copy and set modtime on files with + in
+ * SFTP
+ * Add option to disable remote hash check command execution (Jon Fautley)
+ * Add `--sftp-ask-password` flag to prompt for password when needed (Leo R. Lundgren)
+ * Add `set_modtime` configuration option
+ * Fix following of symlinks
+ * Fix reading config file outside of Fs setup
+ * Fix reading $USER in username fallback not $HOME
+ * Fix running under crontab - Use correct OS way of reading username
+ * Swift
+ * Fix refresh of authentication token
+ * in v1.39 a bug was introduced which ignored new tokens - this fixes it
+ * Fix extra HEAD transaction when uploading a new file
+ * Don't check for bucket/container presence if listing was OK
+ * this makes rclone do one less request per invocation
+ * Webdav
+ * Add new time formats to support mydrive.ch and others
* v1.39 - 2017-12-23
* New backends
* WebDAV
diff --git a/docs/content/commands/rclone.md b/docs/content/commands/rclone.md
index 70088867a..36d30a518 100644
--- a/docs/content/commands/rclone.md
+++ b/docs/content/commands/rclone.md
@@ -1,17 +1,16 @@
---
-date: 2017-12-23T13:05:26Z
+date: 2018-03-19T10:05:30Z
title: "rclone"
slug: rclone
url: /commands/rclone/
---
## rclone
-Sync files and directories to and from local and remote object stores - v1.39
+Sync files and directories to and from local and remote object stores - v1.40
### Synopsis
-
Rclone is a command line program to sync files and directories to and
from various cloud storage systems and using file transfer services, such as:
@@ -80,10 +79,13 @@ rclone [flags]
--cache-chunk-size string The size of a chunk (default "5M")
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
+ --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+ --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
@@ -102,17 +104,19 @@ rclone [flags]
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-impersonate string Impersonate this user when using a service account.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from:
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP headers - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
@@ -134,29 +138,41 @@ rclone [flags]
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
+ --no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
-q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
@@ -170,12 +186,13 @@ rclone [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ -v, --verbose count Print lots more stuff (repeat for more)
-V, --version Print the version number
```
### SEE ALSO
+
* [rclone authorize](/commands/rclone_authorize/) - Remote authorization.
* [rclone cachestats](/commands/rclone_cachestats/) - Print cache stats for a remote
* [rclone cat](/commands/rclone_cat/) - Concatenates any files and sends them to stdout.
@@ -192,10 +209,11 @@ rclone [flags]
* [rclone genautocomplete](/commands/rclone_genautocomplete/) - Output completion script for a given shell.
* [rclone gendocs](/commands/rclone_gendocs/) - Output markdown docs for rclone to the directory supplied.
* [rclone listremotes](/commands/rclone_listremotes/) - List all the remotes in the config file.
-* [rclone ls](/commands/rclone_ls/) - List all the objects in the path with size and path.
+* [rclone ls](/commands/rclone_ls/) - List the objects in the path with size and path.
* [rclone lsd](/commands/rclone_lsd/) - List all directories/containers/buckets in the path.
+* [rclone lsf](/commands/rclone_lsf/) - List directories and objects in remote:path formatted for parsing
* [rclone lsjson](/commands/rclone_lsjson/) - List directories and objects in the path in JSON format.
-* [rclone lsl](/commands/rclone_lsl/) - List all the objects path with modification time, size and path.
+* [rclone lsl](/commands/rclone_lsl/) - List the objects in path with modification time, size and path.
* [rclone md5sum](/commands/rclone_md5sum/) - Produces an md5sum file for all the objects in the path.
* [rclone mkdir](/commands/rclone_mkdir/) - Make the path if it doesn't already exist.
* [rclone mount](/commands/rclone_mount/) - Mount the remote as a mountpoint. **EXPERIMENTAL**
@@ -204,6 +222,7 @@ rclone [flags]
* [rclone ncdu](/commands/rclone_ncdu/) - Explore a remote with a text based user interface.
* [rclone obscure](/commands/rclone_obscure/) - Obscure password for use in the rclone.conf
* [rclone purge](/commands/rclone_purge/) - Remove the path and all of its contents.
+* [rclone rc](/commands/rclone_rc/) - Run a command against a running rclone.
* [rclone rcat](/commands/rclone_rcat/) - Copies standard input to file on remote.
* [rclone rmdir](/commands/rclone_rmdir/) - Remove the path if empty.
* [rclone rmdirs](/commands/rclone_rmdirs/) - Remove empty directories under the path.
@@ -215,4 +234,4 @@ rclone [flags]
* [rclone tree](/commands/rclone_tree/) - List the contents of the remote in a tree like fashion.
* [rclone version](/commands/rclone_version/) - Show the version number.
-###### Auto generated by spf13/cobra on 23-Dec-2017
+###### Auto generated by spf13/cobra on 19-Mar-2018
diff --git a/docs/content/commands/rclone_authorize.md b/docs/content/commands/rclone_authorize.md
index 768bf7041..e0e822cd6 100644
--- a/docs/content/commands/rclone_authorize.md
+++ b/docs/content/commands/rclone_authorize.md
@@ -1,5 +1,5 @@
---
-date: 2017-12-23T13:05:26Z
+date: 2018-03-19T10:05:30Z
title: "rclone authorize"
slug: rclone_authorize
url: /commands/rclone_authorize/
@@ -11,7 +11,6 @@ Remote authorization.
### Synopsis
-
Remote authorization. Used to authorize a remote or headless
rclone from a machine with a browser - use as instructed by
rclone config.
@@ -51,10 +50,13 @@ rclone authorize [flags]
--cache-chunk-size string The size of a chunk (default "5M")
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
+ --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+ --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
@@ -73,17 +75,19 @@ rclone authorize [flags]
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-impersonate string Impersonate this user when using a service account.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from:
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP headers - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
@@ -104,29 +108,41 @@ rclone authorize [flags]
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
+ --no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
-q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
@@ -140,11 +156,12 @@ rclone authorize [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ -v, --verbose count Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 23-Dec-2017
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40
+
+###### Auto generated by spf13/cobra on 19-Mar-2018
diff --git a/docs/content/commands/rclone_cachestats.md b/docs/content/commands/rclone_cachestats.md
index 3d13b9a87..2960aacd5 100644
--- a/docs/content/commands/rclone_cachestats.md
+++ b/docs/content/commands/rclone_cachestats.md
@@ -1,5 +1,5 @@
---
-date: 2017-12-23T13:05:26Z
+date: 2018-03-19T10:05:30Z
title: "rclone cachestats"
slug: rclone_cachestats
url: /commands/rclone_cachestats/
@@ -11,7 +11,6 @@ Print cache stats for a remote
### Synopsis
-
Print cache stats for a remote in JSON format
@@ -50,10 +49,13 @@ rclone cachestats source: [flags]
--cache-chunk-size string The size of a chunk (default "5M")
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
+ --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+ --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
@@ -72,17 +74,19 @@ rclone cachestats source: [flags]
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-impersonate string Impersonate this user when using a service account.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from:
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP headers - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
@@ -103,29 +107,41 @@ rclone cachestats source: [flags]
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
+ --no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
-q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
@@ -139,11 +155,12 @@ rclone cachestats source: [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ -v, --verbose count Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 23-Dec-2017
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40
+
+###### Auto generated by spf13/cobra on 19-Mar-2018
diff --git a/docs/content/commands/rclone_cat.md b/docs/content/commands/rclone_cat.md
index 06ac6fbc6..e72590000 100644
--- a/docs/content/commands/rclone_cat.md
+++ b/docs/content/commands/rclone_cat.md
@@ -1,5 +1,5 @@
---
-date: 2017-12-23T13:05:26Z
+date: 2018-03-19T10:05:30Z
title: "rclone cat"
slug: rclone_cat
url: /commands/rclone_cat/
@@ -11,7 +11,6 @@ Concatenates any files and sends them to stdout.
### Synopsis
-
rclone cat sends any files to standard output.
You can use it like this to output a single file
@@ -72,10 +71,13 @@ rclone cat remote:path [flags]
--cache-chunk-size string The size of a chunk (default "5M")
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
+ --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+ --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
@@ -94,17 +96,19 @@ rclone cat remote:path [flags]
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-impersonate string Impersonate this user when using a service account.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from:
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP headers - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
@@ -125,29 +129,41 @@ rclone cat remote:path [flags]
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
+ --no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
-q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
@@ -161,11 +177,12 @@ rclone cat remote:path [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ -v, --verbose count Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 23-Dec-2017
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40
+
+###### Auto generated by spf13/cobra on 19-Mar-2018
diff --git a/docs/content/commands/rclone_check.md b/docs/content/commands/rclone_check.md
index c1ce2d68b..f4bc2bc53 100644
--- a/docs/content/commands/rclone_check.md
+++ b/docs/content/commands/rclone_check.md
@@ -1,5 +1,5 @@
---
-date: 2017-12-23T13:05:26Z
+date: 2018-03-19T10:05:30Z
title: "rclone check"
slug: rclone_check
url: /commands/rclone_check/
@@ -11,7 +11,6 @@ Checks the files in the source and destination match.
### Synopsis
-
Checks the files in the source and destination match. It compares
sizes and hashes (MD5 or SHA1) and logs a report of files which don't
match. It doesn't alter the source or destination.
@@ -61,10 +60,13 @@ rclone check source:path dest:path [flags]
--cache-chunk-size string The size of a chunk (default "5M")
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
+ --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+ --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
@@ -83,17 +85,19 @@ rclone check source:path dest:path [flags]
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-impersonate string Impersonate this user when using a service account.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from:
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP headers - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
@@ -114,29 +118,41 @@ rclone check source:path dest:path [flags]
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
+ --no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
-q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
@@ -150,11 +166,12 @@ rclone check source:path dest:path [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ -v, --verbose count Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 23-Dec-2017
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40
+
+###### Auto generated by spf13/cobra on 19-Mar-2018
diff --git a/docs/content/commands/rclone_cleanup.md b/docs/content/commands/rclone_cleanup.md
index c179b759c..3653be1f3 100644
--- a/docs/content/commands/rclone_cleanup.md
+++ b/docs/content/commands/rclone_cleanup.md
@@ -1,5 +1,5 @@
---
-date: 2017-12-23T13:05:26Z
+date: 2018-03-19T10:05:30Z
title: "rclone cleanup"
slug: rclone_cleanup
url: /commands/rclone_cleanup/
@@ -11,7 +11,6 @@ Clean up the remote if possible
### Synopsis
-
Clean up the remote if possible. Empty the trash or delete old file
versions. Not supported by all remotes.
@@ -51,10 +50,13 @@ rclone cleanup remote:path [flags]
--cache-chunk-size string The size of a chunk (default "5M")
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
+ --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+ --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
@@ -73,17 +75,19 @@ rclone cleanup remote:path [flags]
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-impersonate string Impersonate this user when using a service account.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from:
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP headers - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
@@ -104,29 +108,41 @@ rclone cleanup remote:path [flags]
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
+ --no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
-q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
@@ -140,11 +156,12 @@ rclone cleanup remote:path [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ -v, --verbose count Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 23-Dec-2017
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40
+
+###### Auto generated by spf13/cobra on 19-Mar-2018
diff --git a/docs/content/commands/rclone_config.md b/docs/content/commands/rclone_config.md
index bfe834845..1a5b9932d 100644
--- a/docs/content/commands/rclone_config.md
+++ b/docs/content/commands/rclone_config.md
@@ -1,5 +1,5 @@
---
-date: 2017-12-23T13:05:26Z
+date: 2018-03-19T10:05:30Z
title: "rclone config"
slug: rclone_config
url: /commands/rclone_config/
@@ -10,7 +10,6 @@ Enter an interactive configuration session.
### Synopsis
-
Enter an interactive configuration session where you can setup new
remotes and manage existing ones. You may also set or remove a
password to protect your configuration.
@@ -51,10 +50,13 @@ rclone config [flags]
--cache-chunk-size string The size of a chunk (default "5M")
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
+ --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+ --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
@@ -73,17 +75,19 @@ rclone config [flags]
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-impersonate string Impersonate this user when using a service account.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from:
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP headers - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
@@ -104,29 +108,41 @@ rclone config [flags]
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
+ --no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
-q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
@@ -140,12 +156,13 @@ rclone config [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ -v, --verbose count Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
+
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40
* [rclone config create](/commands/rclone_config_create/) - Create a new remote with name, type and options.
* [rclone config delete](/commands/rclone_config_delete/) - Delete an existing remote <name>.
* [rclone config dump](/commands/rclone_config_dump/) - Dump the config file as JSON.
@@ -156,4 +173,4 @@ rclone config [flags]
* [rclone config show](/commands/rclone_config_show/) - Print (decrypted) config file, or the config for a single remote.
* [rclone config update](/commands/rclone_config_update/) - Update options in an existing remote.
-###### Auto generated by spf13/cobra on 23-Dec-2017
+###### Auto generated by spf13/cobra on 19-Mar-2018
diff --git a/docs/content/commands/rclone_config_create.md b/docs/content/commands/rclone_config_create.md
index 33c1b6843..e8ec0edac 100644
--- a/docs/content/commands/rclone_config_create.md
+++ b/docs/content/commands/rclone_config_create.md
@@ -1,5 +1,5 @@
---
-date: 2017-12-23T13:05:26Z
+date: 2018-03-19T10:05:30Z
title: "rclone config create"
slug: rclone_config_create
url: /commands/rclone_config_create/
@@ -11,7 +11,6 @@ Create a new remote with name, type and options.
### Synopsis
-
Create a new remote of <name> with <type> and options. The options
should be passed in pairs of <key> <value>.
@@ -56,10 +55,13 @@ rclone config create <name> <type> [<key> <value>]* [flags]
--cache-chunk-size string The size of a chunk (default "5M")
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
+ --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+ --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
@@ -78,17 +80,19 @@ rclone config create [ ]* [flags]
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-impersonate string Impersonate this user when using a service account.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from:
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP headers - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
@@ -109,29 +113,41 @@ rclone config create <name> <type> [<key> <value>]* [flags]
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
+ --no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
-q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
@@ -145,11 +161,12 @@ rclone config create <name> <type> [<key> <value>]* [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ -v, --verbose count Print lots more stuff (repeat for more)
```
### SEE ALSO
+
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
-###### Auto generated by spf13/cobra on 23-Dec-2017
+###### Auto generated by spf13/cobra on 19-Mar-2018
diff --git a/docs/content/commands/rclone_config_delete.md b/docs/content/commands/rclone_config_delete.md
index 18bc6cdd7..2d0719dd0 100644
--- a/docs/content/commands/rclone_config_delete.md
+++ b/docs/content/commands/rclone_config_delete.md
@@ -1,5 +1,5 @@
---
-date: 2017-12-23T13:05:26Z
+date: 2018-03-19T10:05:30Z
title: "rclone config delete"
slug: rclone_config_delete
url: /commands/rclone_config_delete/
@@ -10,7 +10,6 @@ Delete an existing remote <name>.
### Synopsis
-
Delete an existing remote <name>.
```
@@ -48,10 +47,13 @@ rclone config delete <name> [flags]
--cache-chunk-size string The size of a chunk (default "5M")
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
+ --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+ --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
@@ -70,17 +72,19 @@ rclone config delete <name> [flags]
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-impersonate string Impersonate this user when using a service account.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from:
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP headers - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
@@ -101,29 +105,41 @@ rclone config delete [flags]
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
+ --no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
-q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
@@ -137,11 +153,12 @@ rclone config delete [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ -v, --verbose count Print lots more stuff (repeat for more)
```
### SEE ALSO
+
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
-###### Auto generated by spf13/cobra on 23-Dec-2017
+###### Auto generated by spf13/cobra on 19-Mar-2018
diff --git a/docs/content/commands/rclone_config_dump.md b/docs/content/commands/rclone_config_dump.md
index 26ecb989c..160d006a6 100644
--- a/docs/content/commands/rclone_config_dump.md
+++ b/docs/content/commands/rclone_config_dump.md
@@ -1,5 +1,5 @@
---
-date: 2017-12-23T13:05:26Z
+date: 2018-03-19T10:05:30Z
title: "rclone config dump"
slug: rclone_config_dump
url: /commands/rclone_config_dump/
@@ -10,7 +10,6 @@ Dump the config file as JSON.
### Synopsis
-
Dump the config file as JSON.
```
@@ -48,10 +47,13 @@ rclone config dump [flags]
--cache-chunk-size string The size of a chunk (default "5M")
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
+ --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+ --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
@@ -70,17 +72,19 @@ rclone config dump [flags]
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-impersonate string Impersonate this user when using a service account.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from:
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP headers - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
@@ -101,29 +105,41 @@ rclone config dump [flags]
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
+ --no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
-q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
@@ -137,11 +153,12 @@ rclone config dump [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ -v, --verbose count Print lots more stuff (repeat for more)
```
### SEE ALSO
+
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
-###### Auto generated by spf13/cobra on 23-Dec-2017
+###### Auto generated by spf13/cobra on 19-Mar-2018
diff --git a/docs/content/commands/rclone_config_edit.md b/docs/content/commands/rclone_config_edit.md
index fa5bfc8cd..9c47362fa 100644
--- a/docs/content/commands/rclone_config_edit.md
+++ b/docs/content/commands/rclone_config_edit.md
@@ -1,5 +1,5 @@
---
-date: 2017-12-23T13:05:26Z
+date: 2018-03-19T10:05:30Z
title: "rclone config edit"
slug: rclone_config_edit
url: /commands/rclone_config_edit/
@@ -10,7 +10,6 @@ Enter an interactive configuration session.
### Synopsis
-
Enter an interactive configuration session where you can setup new
remotes and manage existing ones. You may also set or remove a
password to protect your configuration.
@@ -51,10 +50,13 @@ rclone config edit [flags]
--cache-chunk-size string The size of a chunk (default "5M")
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
+ --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+ --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
@@ -73,17 +75,19 @@ rclone config edit [flags]
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-impersonate string Impersonate this user when using a service account.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from:
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP headers - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
@@ -104,29 +108,41 @@ rclone config edit [flags]
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
+ --no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
-q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
@@ -140,11 +156,12 @@ rclone config edit [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ -v, --verbose count Print lots more stuff (repeat for more)
```
### SEE ALSO
+
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
-###### Auto generated by spf13/cobra on 23-Dec-2017
+###### Auto generated by spf13/cobra on 19-Mar-2018
diff --git a/docs/content/commands/rclone_config_file.md b/docs/content/commands/rclone_config_file.md
index 0e06df4d8..f56433151 100644
--- a/docs/content/commands/rclone_config_file.md
+++ b/docs/content/commands/rclone_config_file.md
@@ -1,5 +1,5 @@
---
-date: 2017-12-23T13:05:26Z
+date: 2018-03-19T10:05:30Z
title: "rclone config file"
slug: rclone_config_file
url: /commands/rclone_config_file/
@@ -10,7 +10,6 @@ Show path of configuration file in use.
### Synopsis
-
Show path of configuration file in use.
```
@@ -48,10 +47,13 @@ rclone config file [flags]
--cache-chunk-size string The size of a chunk (default "5M")
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
+ --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+ --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
@@ -70,17 +72,19 @@ rclone config file [flags]
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-impersonate string Impersonate this user when using a service account.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from:
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP headers - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
@@ -101,29 +105,41 @@ rclone config file [flags]
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
+ --no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
-q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
@@ -137,11 +153,12 @@ rclone config file [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ -v, --verbose count Print lots more stuff (repeat for more)
```
### SEE ALSO
+
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
-###### Auto generated by spf13/cobra on 23-Dec-2017
+###### Auto generated by spf13/cobra on 19-Mar-2018
diff --git a/docs/content/commands/rclone_config_password.md b/docs/content/commands/rclone_config_password.md
index 04f7b1313..3fb51f5df 100644
--- a/docs/content/commands/rclone_config_password.md
+++ b/docs/content/commands/rclone_config_password.md
@@ -1,5 +1,5 @@
---
-date: 2017-12-23T13:05:26Z
+date: 2018-03-19T10:05:30Z
title: "rclone config password"
slug: rclone_config_password
url: /commands/rclone_config_password/
@@ -11,7 +11,6 @@ Update password in an existing remote.
### Synopsis
-
Update an existing remote's password. The password
should be passed in in pairs of <key> <value>.
@@ -55,10 +54,13 @@ rclone config password <name> [<key> <value>]+ [flags]
--cache-chunk-size string The size of a chunk (default "5M")
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
+ --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+ --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
@@ -77,17 +79,19 @@ rclone config password <name> [<key> <value>]+ [flags]
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-impersonate string Impersonate this user when using a service account.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from:
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP headers - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
@@ -108,29 +112,41 @@ rclone config password <name> [<key> <value>]+ [flags]
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
+ --no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
-q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
@@ -144,11 +160,12 @@ rclone config password [ ]+ [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ -v, --verbose count Print lots more stuff (repeat for more)
```
### SEE ALSO
+
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
-###### Auto generated by spf13/cobra on 23-Dec-2017
+###### Auto generated by spf13/cobra on 19-Mar-2018
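
The hunks above change `--max-age` and `--min-age` from `string` to `duration`, taking a number of seconds or a suffix from `ms|s|m|h|d|w|M|y`. As an illustration of that suffix arithmetic only (a hypothetical helper, not rclone's implementation; the month/year factors are approximations):

```python
import re

# Seconds per documented suffix (ms|s|m|h|d|w|M|y); a bare number
# means seconds. M (month) and y (year) are approximate here.
SUFFIXES = {
    "ms": 0.001, "s": 1, "m": 60, "h": 3600,
    "d": 86400, "w": 7 * 86400, "M": 30 * 86400, "y": 365 * 86400,
}

def parse_age(text):
    """Parse a duration like '90m' or '2d' into seconds."""
    m = re.fullmatch(r"(\d+(?:\.\d+)?)(ms|[smhdwMy])?", text)
    if not m:
        raise ValueError("bad duration: %r" % text)
    value, suffix = m.groups()
    return float(value) * SUFFIXES[suffix or "s"]
```

For example, `parse_age("90m")` yields `5400.0` seconds, matching an invocation such as `rclone copy --max-age 90m source:path dest:path`.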
diff --git a/docs/content/commands/rclone_config_providers.md b/docs/content/commands/rclone_config_providers.md
index ea9c96fd6..f60d042ff 100644
--- a/docs/content/commands/rclone_config_providers.md
+++ b/docs/content/commands/rclone_config_providers.md
@@ -1,5 +1,5 @@
---
-date: 2017-12-23T13:05:26Z
+date: 2018-03-19T10:05:30Z
title: "rclone config providers"
slug: rclone_config_providers
url: /commands/rclone_config_providers/
@@ -10,7 +10,6 @@ List in JSON format all the providers and options.
### Synopsis
-
List in JSON format all the providers and options.
```
@@ -48,10 +47,13 @@ rclone config providers [flags]
--cache-chunk-size string The size of a chunk (default "5M")
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
+ --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+ --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
@@ -70,17 +72,19 @@ rclone config providers [flags]
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-impersonate string Impersonate this user when using a service account.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from:
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP headers - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
@@ -101,29 +105,41 @@ rclone config providers [flags]
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
+ --no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
-q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
@@ -137,11 +153,12 @@ rclone config providers [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ -v, --verbose count Print lots more stuff (repeat for more)
```
### SEE ALSO
+
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
-###### Auto generated by spf13/cobra on 23-Dec-2017
+###### Auto generated by spf13/cobra on 19-Mar-2018
diff --git a/docs/content/commands/rclone_config_show.md b/docs/content/commands/rclone_config_show.md
index 76f8d0db2..913daaa6d 100644
--- a/docs/content/commands/rclone_config_show.md
+++ b/docs/content/commands/rclone_config_show.md
@@ -1,5 +1,5 @@
---
-date: 2017-12-23T13:05:26Z
+date: 2018-03-19T10:05:30Z
title: "rclone config show"
slug: rclone_config_show
url: /commands/rclone_config_show/
@@ -10,7 +10,6 @@ Print (decrypted) config file, or the config for a single remote.
### Synopsis
-
Print (decrypted) config file, or the config for a single remote.
```
@@ -48,10 +47,13 @@ rclone config show [] [flags]
--cache-chunk-size string The size of a chunk (default "5M")
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
+ --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+ --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
@@ -70,17 +72,19 @@ rclone config show [] [flags]
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-impersonate string Impersonate this user when using a service account.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from:
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP headers - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
@@ -101,29 +105,41 @@ rclone config show [] [flags]
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
+ --no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
-q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
@@ -137,11 +153,12 @@ rclone config show [] [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ -v, --verbose count Print lots more stuff (repeat for more)
```
### SEE ALSO
+
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
-###### Auto generated by spf13/cobra on 23-Dec-2017
+###### Auto generated by spf13/cobra on 19-Mar-2018
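
The `--max-size`/`--min-size` flags listed above take sizes "in k or suffix b|k|M|G", i.e. a bare number is read as kBytes. A sketch of that convention (hypothetical helper, not rclone code; assumes binary multiples):

```python
def parse_size(text):
    """Parse '100M', '1G', '750b' or a bare number (kBytes) into bytes."""
    # rclone documents: "in k or suffix b|k|M|G"
    units = {"b": 1, "k": 1024, "M": 1024 ** 2, "G": 1024 ** 3}
    if text and text[-1] in units:
        return int(float(text[:-1]) * units[text[-1]])
    return int(float(text) * units["k"])  # bare number means kBytes
```

So `--min-size 100` and `--min-size 100k` describe the same threshold, while `--max-size 1G` caps transfers at 1073741824 bytes.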
diff --git a/docs/content/commands/rclone_config_update.md b/docs/content/commands/rclone_config_update.md
index 1c5a90bb6..fb15b8900 100644
--- a/docs/content/commands/rclone_config_update.md
+++ b/docs/content/commands/rclone_config_update.md
@@ -1,5 +1,5 @@
---
-date: 2017-12-23T13:05:26Z
+date: 2018-03-19T10:05:30Z
title: "rclone config update"
slug: rclone_config_update
url: /commands/rclone_config_update/
@@ -11,7 +11,6 @@ Update options in an existing remote.
### Synopsis
-
Update an existing remote's options. The options should be passed in
in pairs of .
@@ -55,10 +54,13 @@ rclone config update [ ]+ [flags]
--cache-chunk-size string The size of a chunk (default "5M")
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
+ --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+ --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
@@ -77,17 +79,19 @@ rclone config update [ ]+ [flags]
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-impersonate string Impersonate this user when using a service account.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from:
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP headers - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
@@ -108,29 +112,41 @@ rclone config update [ ]+ [flags]
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
+ --no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
-q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
@@ -144,11 +160,12 @@ rclone config update [ ]+ [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ -v, --verbose count Print lots more stuff (repeat for more)
```
### SEE ALSO
+
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
-###### Auto generated by spf13/cobra on 23-Dec-2017
+###### Auto generated by spf13/cobra on 19-Mar-2018
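
Among the flags added above is `--stats-file-name-length int` (default 40, 0 for no limit), which bounds how much of a file name appears in `--stats` output. The effect can be sketched with a middle-ellipsis truncation (illustrative only; the exact shortening rclone applies may differ):

```python
def shorten(name, limit=40):
    """Truncate a file name to `limit` characters with a middle ellipsis.

    Mimics the effect of --stats-file-name-length: limit 0 disables
    truncation, and names at or under the limit pass through unchanged.
    """
    if limit == 0 or len(name) <= limit:
        return name
    head = (limit - 1) // 2          # characters kept from the front
    tail = limit - 1 - head          # characters kept from the back
    return name[:head] + "…" + name[-tail:]
```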
diff --git a/docs/content/commands/rclone_copy.md b/docs/content/commands/rclone_copy.md
index 5f84793bb..2a65eb031 100644
--- a/docs/content/commands/rclone_copy.md
+++ b/docs/content/commands/rclone_copy.md
@@ -1,5 +1,5 @@
---
-date: 2017-12-23T13:05:26Z
+date: 2018-03-19T10:05:30Z
title: "rclone copy"
slug: rclone_copy
url: /commands/rclone_copy/
@@ -11,7 +11,6 @@ Copy files from source to dest, skipping already copied
### Synopsis
-
Copy the source to the destination. Doesn't transfer
unchanged files, testing by size and modification time or
MD5SUM. Doesn't delete files from the destination.
@@ -48,9 +47,6 @@ written a trailing / - meaning "copy the contents of this directory".
This applies to all commands and whether you are talking about the
source or destination.
-See the `--no-traverse` option for controlling whether rclone lists
-the destination directory or not.
-
```
rclone copy source:path dest:path [flags]
@@ -87,10 +83,13 @@ rclone copy source:path dest:path [flags]
--cache-chunk-size string The size of a chunk (default "5M")
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
+ --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+ --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
@@ -109,17 +108,19 @@ rclone copy source:path dest:path [flags]
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-impersonate string Impersonate this user when using a service account.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from:
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP headers - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
@@ -140,29 +141,41 @@ rclone copy source:path dest:path [flags]
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
+ --no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
-q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
@@ -176,11 +189,12 @@ rclone copy source:path dest:path [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ -v, --verbose count Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 23-Dec-2017
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40
+
+###### Auto generated by spf13/cobra on 19-Mar-2018
diff --git a/docs/content/commands/rclone_copyto.md b/docs/content/commands/rclone_copyto.md
index ecaa58567..276ff178d 100644
--- a/docs/content/commands/rclone_copyto.md
+++ b/docs/content/commands/rclone_copyto.md
@@ -1,5 +1,5 @@
---
-date: 2017-12-23T13:05:26Z
+date: 2018-03-19T10:05:30Z
title: "rclone copyto"
slug: rclone_copyto
url: /commands/rclone_copyto/
@@ -11,7 +11,6 @@ Copy files from source to dest, skipping already copied
### Synopsis
-
If source:path is a file or directory then it copies it to a file or
directory named dest:path.
@@ -74,10 +73,13 @@ rclone copyto source:path dest:path [flags]
--cache-chunk-size string The size of a chunk (default "5M")
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
+ --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+ --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
@@ -96,17 +98,19 @@ rclone copyto source:path dest:path [flags]
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-impersonate string Impersonate this user when using a service account.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from:
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP headers - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
@@ -127,29 +131,41 @@ rclone copyto source:path dest:path [flags]
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
+ --no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
-q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
@@ -163,11 +179,12 @@ rclone copyto source:path dest:path [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ -v, --verbose count Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 23-Dec-2017
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40
+
+###### Auto generated by spf13/cobra on 19-Mar-2018
diff --git a/docs/content/commands/rclone_cryptcheck.md b/docs/content/commands/rclone_cryptcheck.md
index 08d82d0d4..f79529668 100644
--- a/docs/content/commands/rclone_cryptcheck.md
+++ b/docs/content/commands/rclone_cryptcheck.md
@@ -1,5 +1,5 @@
---
-date: 2017-12-23T13:05:26Z
+date: 2018-03-19T10:05:30Z
title: "rclone cryptcheck"
slug: rclone_cryptcheck
url: /commands/rclone_cryptcheck/
@@ -11,7 +11,6 @@ Cryptcheck checks the integrity of a crypted remote.
### Synopsis
-
rclone cryptcheck checks a remote against a crypted remote. This is
the equivalent of running rclone check, but able to check the
checksums of the crypted remote.
@@ -71,10 +70,13 @@ rclone cryptcheck remote:path cryptedremote:path [flags]
--cache-chunk-size string The size of a chunk (default "5M")
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
+ --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+ --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
@@ -93,17 +95,19 @@ rclone cryptcheck remote:path cryptedremote:path [flags]
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-impersonate string Impersonate this user when using a service account.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from:
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP headers - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
@@ -124,29 +128,41 @@ rclone cryptcheck remote:path cryptedremote:path [flags]
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
+ --no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
-q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
@@ -160,11 +176,12 @@ rclone cryptcheck remote:path cryptedremote:path [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ -v, --verbose count Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 23-Dec-2017
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40
+
+###### Auto generated by spf13/cobra on 19-Mar-2018
diff --git a/docs/content/commands/rclone_cryptdecode.md b/docs/content/commands/rclone_cryptdecode.md
index d6e23a41a..18ba88e50 100644
--- a/docs/content/commands/rclone_cryptdecode.md
+++ b/docs/content/commands/rclone_cryptdecode.md
@@ -1,5 +1,5 @@
---
-date: 2017-12-23T13:05:26Z
+date: 2018-03-19T10:05:30Z
title: "rclone cryptdecode"
slug: rclone_cryptdecode
url: /commands/rclone_cryptdecode/
@@ -11,14 +11,17 @@ Cryptdecode returns unencrypted file names.
### Synopsis
-
rclone cryptdecode returns unencrypted file names when provided with
a list of encrypted file names. List limit is 10 items.
+If you supply the --reverse flag, it will return encrypted file names.
+
use it like this
rclone cryptdecode encryptedremote: encryptedfilename1 encryptedfilename2
+ rclone cryptdecode --reverse encryptedremote: filename1 filename2
+
```
rclone cryptdecode encryptedremote: encryptedfilename [flags]
@@ -27,7 +30,8 @@ rclone cryptdecode encryptedremote: encryptedfilename [flags]
### Options
```
- -h, --help help for cryptdecode
+ -h, --help help for cryptdecode
+ --reverse Reverse cryptdecode, encrypts filenames
```
### Options inherited from parent commands
@@ -55,10 +59,13 @@ rclone cryptdecode encryptedremote: encryptedfilename [flags]
--cache-chunk-size string The size of a chunk (default "5M")
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
+ --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+ --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
@@ -77,17 +84,19 @@ rclone cryptdecode encryptedremote: encryptedfilename [flags]
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-impersonate string Impersonate this user when using a service account.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from:
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP headers - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
@@ -108,29 +117,41 @@ rclone cryptdecode encryptedremote: encryptedfilename [flags]
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
+ --no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
-q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
@@ -144,11 +165,12 @@ rclone cryptdecode encryptedremote: encryptedfilename [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ -v, --verbose count Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 23-Dec-2017
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40
+
+###### Auto generated by spf13/cobra on 19-Mar-2018
diff --git a/docs/content/commands/rclone_dbhashsum.md b/docs/content/commands/rclone_dbhashsum.md
index d6606b5d6..458404400 100644
--- a/docs/content/commands/rclone_dbhashsum.md
+++ b/docs/content/commands/rclone_dbhashsum.md
@@ -1,5 +1,5 @@
---
-date: 2017-12-23T13:05:26Z
+date: 2018-03-19T10:05:30Z
title: "rclone dbhashsum"
slug: rclone_dbhashsum
url: /commands/rclone_dbhashsum/
@@ -11,7 +11,6 @@ Produces a Dropbox hash file for all the objects in the path.
### Synopsis
-
Produces a Dropbox hash file for all the objects in the path. The
hashes are calculated according to [Dropbox content hash
rules](https://www.dropbox.com/developers/reference/content-hash).
@@ -53,10 +52,13 @@ rclone dbhashsum remote:path [flags]
--cache-chunk-size string The size of a chunk (default "5M")
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
+ --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+ --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
@@ -75,17 +77,19 @@ rclone dbhashsum remote:path [flags]
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-impersonate string Impersonate this user when using a service account.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from:
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP headers - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
@@ -106,29 +110,41 @@ rclone dbhashsum remote:path [flags]
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
+ --no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
-q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
@@ -142,11 +158,12 @@ rclone dbhashsum remote:path [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ -v, --verbose count Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 23-Dec-2017
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40
+
+###### Auto generated by spf13/cobra on 19-Mar-2018
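The `--rc-*` flags added in the hunks above configure the new remote control server. A minimal invocation might look like the following sketch (the command, credentials, and remote name are illustrative assumptions, not taken from the diff):

```sh
# Run a long-lived rclone command with the remote control server enabled,
# bound to the default localhost:5572 and protected by basic auth.
# "admin"/"secret" and "remote:" are placeholders for illustration only.
rclone --rc \
       --rc-addr localhost:5572 \
       --rc-user admin --rc-pass secret \
       sync source:path remote:path
```

The server only runs for as long as the rclone command does, so it is most useful with long-running operations such as `sync` or `mount`.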
diff --git a/docs/content/commands/rclone_dedupe.md b/docs/content/commands/rclone_dedupe.md
index 742f023e8..71cf4e2b0 100644
--- a/docs/content/commands/rclone_dedupe.md
+++ b/docs/content/commands/rclone_dedupe.md
@@ -1,5 +1,5 @@
---
-date: 2017-12-23T13:05:26Z
+date: 2018-03-19T10:05:30Z
title: "rclone dedupe"
slug: rclone_dedupe
url: /commands/rclone_dedupe/
@@ -11,7 +11,6 @@ Interactively find duplicate files and delete/rename them.
### Synopsis
-
By default `dedupe` interactively finds duplicate files and offers to
delete all but one or rename them to be different. Only useful with
Google Drive which can have duplicate file names.
@@ -128,10 +127,13 @@ rclone dedupe [mode] remote:path [flags]
--cache-chunk-size string The size of a chunk (default "5M")
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
+ --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+ --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
@@ -150,17 +152,19 @@ rclone dedupe [mode] remote:path [flags]
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-impersonate string Impersonate this user when using a service account.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from:
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP headers - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
@@ -181,29 +185,41 @@ rclone dedupe [mode] remote:path [flags]
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
+ --no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
-q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
@@ -217,11 +233,12 @@ rclone dedupe [mode] remote:path [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ -v, --verbose count Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 23-Dec-2017
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40
+
+###### Auto generated by spf13/cobra on 19-Mar-2018
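For context on the command these flags attach to, a typical non-interactive `dedupe` run looks like the sketch below (the mode names come from the command's synopsis; the remote name is illustrative):

```sh
# Preview, then keep only the newest copy of each set of duplicate names.
# dedupe is only useful with backends that allow duplicate file names,
# such as Google Drive. "gdrive:path" is a placeholder remote.
rclone dedupe newest gdrive:path --dry-run
rclone dedupe newest gdrive:path
```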
diff --git a/docs/content/commands/rclone_delete.md b/docs/content/commands/rclone_delete.md
index da127a738..7a84c38d4 100644
--- a/docs/content/commands/rclone_delete.md
+++ b/docs/content/commands/rclone_delete.md
@@ -1,5 +1,5 @@
---
-date: 2017-12-23T13:05:26Z
+date: 2018-03-19T10:05:30Z
title: "rclone delete"
slug: rclone_delete
url: /commands/rclone_delete/
@@ -11,7 +11,6 @@ Remove the contents of path.
### Synopsis
-
Remove the contents of path. Unlike `purge` it obeys include/exclude
filters so can be used to selectively delete files.
@@ -65,10 +64,13 @@ rclone delete remote:path [flags]
--cache-chunk-size string The size of a chunk (default "5M")
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
+ --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+ --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
@@ -87,17 +89,19 @@ rclone delete remote:path [flags]
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-impersonate string Impersonate this user when using a service account.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from:
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP headers - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
@@ -118,29 +122,41 @@ rclone delete remote:path [flags]
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
+ --no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
-q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
@@ -154,11 +170,12 @@ rclone delete remote:path [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ -v, --verbose count Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 23-Dec-2017
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40
+
+###### Auto generated by spf13/cobra on 19-Mar-2018
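Since `delete` obeys the include/exclude filters, the filter flags listed above combine with it directly. The pattern below is the one used in the command's own docs (the remote name is illustrative):

```sh
# First check what would be deleted, then delete all files bigger than 100M.
rclone --dry-run --min-size 100M delete remote:path
rclone --min-size 100M delete remote:path
```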
diff --git a/docs/content/commands/rclone_genautocomplete.md b/docs/content/commands/rclone_genautocomplete.md
index 148c1ca59..df89c23ce 100644
--- a/docs/content/commands/rclone_genautocomplete.md
+++ b/docs/content/commands/rclone_genautocomplete.md
@@ -1,5 +1,5 @@
---
-date: 2017-12-23T13:05:26Z
+date: 2018-03-19T10:05:30Z
title: "rclone genautocomplete"
slug: rclone_genautocomplete
url: /commands/rclone_genautocomplete/
@@ -11,7 +11,6 @@ Output completion script for a given shell.
### Synopsis
-
Generates a shell completion script for rclone.
Run with --help to list the supported shells.
@@ -47,10 +46,13 @@ Run with --help to list the supported shells.
--cache-chunk-size string The size of a chunk (default "5M")
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
+ --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+ --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
@@ -69,17 +71,19 @@ Run with --help to list the supported shells.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-impersonate string Impersonate this user when using a service account.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from:
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP headers - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
@@ -100,29 +104,41 @@ Run with --help to list the supported shells.
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
+ --no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
-q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
@@ -136,13 +152,14 @@ Run with --help to list the supported shells.
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ -v, --verbose count Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
+
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40
* [rclone genautocomplete bash](/commands/rclone_genautocomplete_bash/) - Output bash completion script for rclone.
* [rclone genautocomplete zsh](/commands/rclone_genautocomplete_zsh/) - Output zsh completion script for rclone.
-###### Auto generated by spf13/cobra on 23-Dec-2017
+###### Auto generated by spf13/cobra on 19-Mar-2018
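A quick sketch of how the completion generator is used in practice (the custom output path is an illustrative assumption; the default location is the one stated in the synopsis above):

```sh
# Write the bash completion script to the default location,
# /etc/bash_completion.d/rclone - this needs root:
sudo rclone genautocomplete bash

# Or write to a user-writable file and source it for the current shell:
rclone genautocomplete bash ~/.rclone-completion
. ~/.rclone-completion
```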
diff --git a/docs/content/commands/rclone_genautocomplete_bash.md b/docs/content/commands/rclone_genautocomplete_bash.md
index c15d02b7c..50f56f00b 100644
--- a/docs/content/commands/rclone_genautocomplete_bash.md
+++ b/docs/content/commands/rclone_genautocomplete_bash.md
@@ -1,5 +1,5 @@
---
-date: 2017-12-23T13:05:26Z
+date: 2018-03-19T10:05:30Z
title: "rclone genautocomplete bash"
slug: rclone_genautocomplete_bash
url: /commands/rclone_genautocomplete_bash/
@@ -11,7 +11,6 @@ Output bash completion script for rclone.
### Synopsis
-
Generates a bash shell autocompletion script for rclone.
This writes to /etc/bash_completion.d/rclone by default so will
@@ -63,10 +62,13 @@ rclone genautocomplete bash [output_file] [flags]
--cache-chunk-size string The size of a chunk (default "5M")
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
+ --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+ --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
@@ -85,17 +87,19 @@ rclone genautocomplete bash [output_file] [flags]
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-impersonate string Impersonate this user when using a service account.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from:
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
      --dump-headers                         Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
@@ -116,29 +120,41 @@ rclone genautocomplete bash [output_file] [flags]
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
+ --no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
-q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
@@ -152,11 +168,12 @@ rclone genautocomplete bash [output_file] [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ -v, --verbose count Print lots more stuff (repeat for more)
```
### SEE ALSO
+
* [rclone genautocomplete](/commands/rclone_genautocomplete/) - Output completion script for a given shell.
-###### Auto generated by spf13/cobra on 23-Dec-2017
+###### Auto generated by spf13/cobra on 19-Mar-2018
diff --git a/docs/content/commands/rclone_genautocomplete_zsh.md b/docs/content/commands/rclone_genautocomplete_zsh.md
index 95d531a4e..b39a5bae0 100644
--- a/docs/content/commands/rclone_genautocomplete_zsh.md
+++ b/docs/content/commands/rclone_genautocomplete_zsh.md
@@ -1,5 +1,5 @@
---
-date: 2017-12-23T13:05:26Z
+date: 2018-03-19T10:05:30Z
title: "rclone genautocomplete zsh"
slug: rclone_genautocomplete_zsh
url: /commands/rclone_genautocomplete_zsh/
@@ -11,7 +11,6 @@ Output zsh completion script for rclone.
### Synopsis
-
Generates a zsh autocompletion script for rclone.
This writes to /usr/share/zsh/vendor-completions/_rclone by default so will
@@ -63,10 +62,13 @@ rclone genautocomplete zsh [output_file] [flags]
--cache-chunk-size string The size of a chunk (default "5M")
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
+ --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+ --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
@@ -85,17 +87,19 @@ rclone genautocomplete zsh [output_file] [flags]
--drive-auth-owner-only Only consider files owned by the authenticated user.
      --drive-chunk-size int                 Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-impersonate string Impersonate this user when using a service account.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from:
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
      --dump-headers                         Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
@@ -116,29 +120,41 @@ rclone genautocomplete zsh [output_file] [flags]
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
+ --no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
-q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
@@ -152,11 +168,12 @@ rclone genautocomplete zsh [output_file] [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ -v, --verbose count Print lots more stuff (repeat for more)
```
### SEE ALSO
+
* [rclone genautocomplete](/commands/rclone_genautocomplete/) - Output completion script for a given shell.
-###### Auto generated by spf13/cobra on 23-Dec-2017
+###### Auto generated by spf13/cobra on 19-Mar-2018
diff --git a/docs/content/commands/rclone_gendocs.md b/docs/content/commands/rclone_gendocs.md
index fe8242124..446be4537 100644
--- a/docs/content/commands/rclone_gendocs.md
+++ b/docs/content/commands/rclone_gendocs.md
@@ -1,5 +1,5 @@
---
-date: 2017-12-23T13:05:26Z
+date: 2018-03-19T10:05:30Z
title: "rclone gendocs"
slug: rclone_gendocs
url: /commands/rclone_gendocs/
@@ -11,7 +11,6 @@ Output markdown docs for rclone to the directory supplied.
### Synopsis
-
This produces markdown docs for the rclone commands to the directory
supplied. These are in a format suitable for hugo to render into the
rclone.org website.
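
As a quick illustration of the command described above, the invocation below would regenerate the markdown pages into a local Hugo content directory (the output path is illustrative, not mandated by rclone):

```shell
# Write one markdown page per rclone command into the given directory,
# ready for hugo to render into the rclone.org site.
rclone gendocs docs/content
```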
@@ -51,10 +50,13 @@ rclone gendocs output_directory [flags]
--cache-chunk-size string The size of a chunk (default "5M")
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
+ --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+ --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
@@ -73,17 +75,19 @@ rclone gendocs output_directory [flags]
--drive-auth-owner-only Only consider files owned by the authenticated user.
      --drive-chunk-size int                 Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-impersonate string Impersonate this user when using a service account.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from:
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
      --dump-headers                         Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
@@ -104,29 +108,41 @@ rclone gendocs output_directory [flags]
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
+ --no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
-q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
@@ -140,11 +156,12 @@ rclone gendocs output_directory [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ -v, --verbose count Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 23-Dec-2017
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40
+
+###### Auto generated by spf13/cobra on 19-Mar-2018
diff --git a/docs/content/commands/rclone_listremotes.md b/docs/content/commands/rclone_listremotes.md
index 596b77a91..ec0013a9b 100644
--- a/docs/content/commands/rclone_listremotes.md
+++ b/docs/content/commands/rclone_listremotes.md
@@ -1,5 +1,5 @@
---
-date: 2017-12-23T13:05:26Z
+date: 2018-03-19T10:05:30Z
title: "rclone listremotes"
slug: rclone_listremotes
url: /commands/rclone_listremotes/
@@ -11,7 +11,6 @@ List all the remotes in the config file.
### Synopsis
-
rclone listremotes lists all the available remotes from the config file.
When used with the -l flag it lists the types too.
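
For example (the remote names shown are illustrative):

```shell
# Print the configured remote names only, e.g. "gdrive:" and "s3:"
rclone listremotes

# Include each remote's type alongside its name
rclone listremotes -l
```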
@@ -53,10 +52,13 @@ rclone listremotes [flags]
--cache-chunk-size string The size of a chunk (default "5M")
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
+ --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+ --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
@@ -75,17 +77,19 @@ rclone listremotes [flags]
--drive-auth-owner-only Only consider files owned by the authenticated user.
      --drive-chunk-size int                 Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-impersonate string Impersonate this user when using a service account.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from:
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
      --dump-headers                         Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
@@ -106,29 +110,41 @@ rclone listremotes [flags]
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
+ --no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
-q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
@@ -142,11 +158,12 @@ rclone listremotes [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ -v, --verbose count Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 23-Dec-2017
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40
+
+###### Auto generated by spf13/cobra on 19-Mar-2018
diff --git a/docs/content/commands/rclone_ls.md b/docs/content/commands/rclone_ls.md
index c7dcd8492..d8d43346e 100644
--- a/docs/content/commands/rclone_ls.md
+++ b/docs/content/commands/rclone_ls.md
@@ -1,17 +1,37 @@
---
-date: 2017-12-23T13:05:26Z
+date: 2018-03-19T10:05:30Z
title: "rclone ls"
slug: rclone_ls
url: /commands/rclone_ls/
---
## rclone ls
-List all the objects in the path with size and path.
+List the objects in the path with size and path.
### Synopsis
-List all the objects in the path with size and path.
+Lists the objects in the source path to standard output in a human
+readable format with size and path. Recurses by default.
+
+Any of the filtering options can be applied to this command.
+
+There are several related list commands
+
+ * `ls` to list size and path of objects only
+ * `lsl` to list modification time, size and path of objects only
+ * `lsd` to list directories only
+ * `lsf` to list objects and directories in easy to parse format
+ * `lsjson` to list objects and directories in JSON format
+
+`ls`, `lsl`, `lsd` are designed to be human readable.
+`lsf` is designed to be human and machine readable.
+`lsjson` is designed to be machine readable.
+
+Note that `ls`, `lsl`, `lsd` all recurse by default - use "--max-depth 1" to stop the recursion.
+
+The other list commands `lsf`, `lsjson` do not recurse by default - use "-R" to make them recurse.
+
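
The recursion defaults above can be seen in a few concrete invocations (the remote and path are illustrative):

```shell
# Recursive listing of size and path - the default for ls
rclone ls remote:path

# Restrict ls to the top level only
rclone ls --max-depth 1 remote:path

# lsf stays at the top level unless -R is given
rclone lsf -R remote:path
```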
```
rclone ls remote:path [flags]
@@ -48,10 +68,13 @@ rclone ls remote:path [flags]
--cache-chunk-size string The size of a chunk (default "5M")
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
+ --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+ --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
@@ -70,17 +93,19 @@ rclone ls remote:path [flags]
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-impersonate string Impersonate this user when using a service account.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from:
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP headers - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
@@ -101,29 +126,41 @@ rclone ls remote:path [flags]
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
+ --no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
-q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
@@ -137,11 +174,12 @@ rclone ls remote:path [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ -v, --verbose count Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 23-Dec-2017
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40
+
+###### Auto generated by spf13/cobra on 19-Mar-2018
diff --git a/docs/content/commands/rclone_lsd.md b/docs/content/commands/rclone_lsd.md
index f093992b4..5582d5967 100644
--- a/docs/content/commands/rclone_lsd.md
+++ b/docs/content/commands/rclone_lsd.md
@@ -1,5 +1,5 @@
---
-date: 2017-12-23T13:05:26Z
+date: 2018-03-19T10:05:30Z
title: "rclone lsd"
slug: rclone_lsd
url: /commands/rclone_lsd/
@@ -11,7 +11,27 @@ List all directories/containers/buckets in the path.
### Synopsis
-List all directories/containers/buckets in the path.
+Lists the directories in the source path to standard output. Recurses
+by default.
+
+Any of the filtering options can be applied to this command.
+
+There are several related list commands
+
+ * `ls` to list size and path of objects only
+ * `lsl` to list modification time, size and path of objects only
+ * `lsd` to list directories only
+ * `lsf` to list objects and directories in easy to parse format
+ * `lsjson` to list objects and directories in JSON format
+
+`ls`, `lsl`, `lsd` are designed to be human readable.
+`lsf` is designed to be human and machine readable.
+`lsjson` is designed to be machine readable.
+
+Note that `ls`, `lsl`, `lsd` all recurse by default - use "--max-depth 1" to stop the recursion.
+
+The other list commands `lsf`, `lsjson` do not recurse by default - use "-R" to make them recurse.
+
```
rclone lsd remote:path [flags]
@@ -48,10 +68,13 @@ rclone lsd remote:path [flags]
--cache-chunk-size string The size of a chunk (default "5M")
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
+ --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+ --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
@@ -70,17 +93,19 @@ rclone lsd remote:path [flags]
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-impersonate string Impersonate this user when using a service account.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from:
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP headers - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
@@ -101,29 +126,41 @@ rclone lsd remote:path [flags]
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
+ --no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
-q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
@@ -137,11 +174,12 @@ rclone lsd remote:path [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ -v, --verbose count Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 23-Dec-2017
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40
+
+###### Auto generated by spf13/cobra on 19-Mar-2018
diff --git a/docs/content/commands/rclone_lsf.md b/docs/content/commands/rclone_lsf.md
new file mode 100644
index 000000000..b331c8ffc
--- /dev/null
+++ b/docs/content/commands/rclone_lsf.md
@@ -0,0 +1,223 @@
+---
+date: 2018-03-19T10:05:30Z
+title: "rclone lsf"
+slug: rclone_lsf
+url: /commands/rclone_lsf/
+---
+## rclone lsf
+
+List directories and objects in remote:path formatted for parsing
+
+### Synopsis
+
+
+List the contents of the source path (directories and objects) to
+standard output in a form which is easy to parse by scripts. By
+default this will just be the names of the objects and directories,
+one per line. The directories will have a / suffix.
+
+Use the --format option to control what gets listed. By default this
+is just the path, but you can use these parameters to control the
+output:
+
+ p - path
+ s - size
+ t - modification time
+ h - hash
+
+So if you wanted the path, size and modification time, you would use
+--format "pst", or maybe --format "tsp" to put the path last.
+
+If you specify "h" in the format you will get the MD5 hash by default,
+use the "--hash" flag to change which hash you want. Note that this
+can be returned as an empty string if it isn't available on the object
+(and for directories), "ERROR" if there was an error reading it from
+the object and "UNSUPPORTED" if that object does not support that hash
+type.
+
+For example to emulate the md5sum command you can use
+
+ rclone lsf -R --hash MD5 --format hp --separator " " --files-only .
+
+(Though "rclone md5sum ." is an easier way of typing this.)
+
+By default the separator is ";", but this can be changed with the
+--separator flag. Note that separators aren't escaped in the path so
+putting it last is a good strategy.
+
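To illustrate why putting the path last is a good strategy, here is a minimal Python sketch of parsing lsf output with the default ";" separator. The sample lines are hypothetical, not taken from a real remote:

```python
# Parse hypothetical `rclone lsf --format "tsp"` output. With the path
# in the last position, splitting on at most two ";" separators keeps
# any ";" that happens to appear inside the path intact.
sample = """2018-03-19 10:05:30;6;full/path/one.txt
2018-03-19 10:05:31;0;subdir/"""

for line in sample.splitlines():
    modtime, size, path = line.split(";", 2)
    print(modtime, int(size), path)
```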
+Any of the filtering options can be applied to this command.
+
+There are several related list commands
+
+ * `ls` to list size and path of objects only
+ * `lsl` to list modification time, size and path of objects only
+ * `lsd` to list directories only
+ * `lsf` to list objects and directories in easy to parse format
+ * `lsjson` to list objects and directories in JSON format
+
+`ls`, `lsl`, `lsd` are designed to be human readable.
+`lsf` is designed to be human and machine readable.
+`lsjson` is designed to be machine readable.
+
+Note that `ls`, `lsl`, `lsd` all recurse by default - use "--max-depth 1" to stop the recursion.
+
+The other list commands `lsf`, `lsjson` do not recurse by default - use "-R" to make them recurse.
+
+
+```
+rclone lsf remote:path [flags]
+```
+
+### Options
+
+```
+ -d, --dir-slash Append a slash to directory names. (default true)
+ --dirs-only Only list directories.
+ --files-only Only list files.
+ -F, --format string Output format - see help for details (default "p")
+ --hash h Use this hash when h is used in the format MD5|SHA-1|DropboxHash (default "MD5")
+ -h, --help help for lsf
+ -R, --recursive Recurse into the listing.
+ -s, --separator string Separator for the items in the format. (default ";")
+```
+
+### Options inherited from parent commands
+
+```
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+ --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size string The size of a chunk (default "5M")
+ --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Purge the cache DB before
+ --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age string How much time should object info be stored in cache (default "6h")
+ --cache-read-retries int How many times to retry a read from a cache storage (default 10)
+ --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+ --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
+ --cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+ --cache-writes Will cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
+ --log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ -v, --verbose count Print lots more stuff (repeat for more)
+```
+
+### SEE ALSO
+
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40
+
+###### Auto generated by spf13/cobra on 19-Mar-2018
diff --git a/docs/content/commands/rclone_lsjson.md b/docs/content/commands/rclone_lsjson.md
index 2aaf1a02c..83ed405b2 100644
--- a/docs/content/commands/rclone_lsjson.md
+++ b/docs/content/commands/rclone_lsjson.md
@@ -1,5 +1,5 @@
---
-date: 2017-12-23T13:05:26Z
+date: 2018-03-19T10:05:30Z
title: "rclone lsjson"
slug: rclone_lsjson
url: /commands/rclone_lsjson/
@@ -10,7 +10,6 @@ List directories and objects in the path in JSON format.
### Synopsis
-
List directories and objects in the path in JSON format.
The output is an array of Items, where each Item looks like this
@@ -24,19 +23,45 @@ The output is an array of Items, where each Item looks like this
"IsDir" : false,
"ModTime" : "2017-05-31T16:15:57.034468261+01:00",
"Name" : "file.txt",
+ "Encrypted" : "v0qpsdq8anpci8n929v3uu9338",
"Path" : "full/path/goes/here/file.txt",
"Size" : 6
}
-If --hash is not specified the the Hashes property won't be emitted.
+If --hash is not specified the Hashes property won't be emitted.
If --no-modtime is specified then ModTime will be blank.
+If --encrypted is not specified the Encrypted property won't be emitted.
+
+The Path field will only show folders below the remote path being listed.
+If "remote:path" contains the file "subfolder/file.txt", the Path for "file.txt"
+will be "subfolder/file.txt", not "remote:path/subfolder/file.txt".
+When used without --recursive the Path will always be the same as Name.
+
The time is in RFC3339 format with nanosecond precision.
The whole output can be processed as a JSON blob, or alternatively it
can be processed line by line as each item is written one to a line.
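As a sketch of the line-by-line approach, a script can decode each item independently using the fields documented above. The sample output below is hypothetical, not captured from a real remote:

```python
import json

# Hypothetical `rclone lsjson remote:path` output: an array opener, one
# JSON object per line (with trailing commas between items), and a
# closing bracket. Strip the brackets and commas, then decode each line.
sample_output = """[
{"Path":"file.txt","Name":"file.txt","Size":6,"ModTime":"2017-05-31T16:15:57.034468261+01:00","IsDir":false},
{"Path":"subdir","Name":"subdir","Size":0,"ModTime":"2017-05-31T16:15:57+01:00","IsDir":true}
]"""

items = []
for line in sample_output.splitlines():
    line = line.strip().rstrip(",")
    if line in ("[", "]", ""):
        continue  # skip the array delimiters
    items.append(json.loads(line))

for item in items:
    kind = "dir" if item["IsDir"] else "file"
    print(f'{kind}\t{item["Size"]}\t{item["Path"]}')
```

Alternatively the whole output is itself valid JSON, so `json.loads` on the complete blob gives the same list of items.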
+Any of the filtering options can be applied to this command.
+
+There are several related list commands
+
+ * `ls` to list size and path of objects only
+ * `lsl` to list modification time, size and path of objects only
+ * `lsd` to list directories only
+ * `lsf` to list objects and directories in easy to parse format
+ * `lsjson` to list objects and directories in JSON format
+
+`ls`, `lsl`, `lsd` are designed to be human readable.
+`lsf` is designed to be human and machine readable.
+`lsjson` is designed to be machine readable.
+
+Note that `ls`, `lsl`, `lsd` all recurse by default - use "--max-depth 1" to stop the recursion.
+
+The other list commands `lsf`, `lsjson` do not recurse by default - use "-R" to make them recurse.
+
```
rclone lsjson remote:path [flags]
@@ -45,6 +70,7 @@ rclone lsjson remote:path [flags]
### Options
```
+ -M, --encrypted Show the encrypted names.
--hash Include hashes in the output (may take longer).
-h, --help help for lsjson
--no-modtime Don't read the modification time (can speed things up).
@@ -76,10 +102,13 @@ rclone lsjson remote:path [flags]
--cache-chunk-size string The size of a chunk (default "5M")
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
+ --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+ --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
@@ -98,17 +127,19 @@ rclone lsjson remote:path [flags]
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-impersonate string Impersonate this user when using a service account.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from:
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP headers - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
@@ -129,29 +160,41 @@ rclone lsjson remote:path [flags]
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
+ --no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
-q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
@@ -165,11 +208,12 @@ rclone lsjson remote:path [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ -v, --verbose count Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 23-Dec-2017
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40
+
+###### Auto generated by spf13/cobra on 19-Mar-2018
diff --git a/docs/content/commands/rclone_lsl.md b/docs/content/commands/rclone_lsl.md
index c84d5cc9d..2d10b8cb2 100644
--- a/docs/content/commands/rclone_lsl.md
+++ b/docs/content/commands/rclone_lsl.md
@@ -1,17 +1,37 @@
---
-date: 2017-12-23T13:05:26Z
+date: 2018-03-19T10:05:30Z
title: "rclone lsl"
slug: rclone_lsl
url: /commands/rclone_lsl/
---
## rclone lsl
-List all the objects path with modification time, size and path.
+List the objects in path with modification time, size and path.
### Synopsis
-List all the objects path with modification time, size and path.
+Lists the objects in the source path to standard output in a human
+readable format with modification time, size and path. Recurses by default.
+
+Any of the filtering options can be applied to this command.
+
+There are several related list commands
+
+ * `ls` to list size and path of objects only
+ * `lsl` to list modification time, size and path of objects only
+ * `lsd` to list directories only
+ * `lsf` to list objects and directories in easy to parse format
+ * `lsjson` to list objects and directories in JSON format
+
+`ls`, `lsl`, `lsd` are designed to be human readable.
+`lsf` is designed to be human and machine readable.
+`lsjson` is designed to be machine readable.
+
+Note that `ls`, `lsl`, `lsd` all recurse by default - use "--max-depth 1" to stop the recursion.
+
+The other list commands `lsf`, `lsjson` do not recurse by default - use "-R" to make them recurse.
+
```
rclone lsl remote:path [flags]
@@ -48,10 +68,13 @@ rclone lsl remote:path [flags]
--cache-chunk-size string The size of a chunk (default "5M")
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
+ --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+ --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
@@ -70,17 +93,19 @@ rclone lsl remote:path [flags]
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-impersonate string Impersonate this user when using a service account.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from:
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP headers - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
@@ -101,29 +126,41 @@ rclone lsl remote:path [flags]
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
+ --no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
-q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
@@ -137,11 +174,12 @@ rclone lsl remote:path [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ -v, --verbose count Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 23-Dec-2017
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40
+
+###### Auto generated by spf13/cobra on 19-Mar-2018
diff --git a/docs/content/commands/rclone_md5sum.md b/docs/content/commands/rclone_md5sum.md
index 007b22dea..a8633ec54 100644
--- a/docs/content/commands/rclone_md5sum.md
+++ b/docs/content/commands/rclone_md5sum.md
@@ -1,5 +1,5 @@
---
-date: 2017-12-23T13:05:26Z
+date: 2018-03-19T10:05:30Z
title: "rclone md5sum"
slug: rclone_md5sum
url: /commands/rclone_md5sum/
@@ -11,7 +11,6 @@ Produces an md5sum file for all the objects in the path.
### Synopsis
-
Produces an md5sum file for all the objects in the path. This
is in the same format as the standard md5sum tool produces.
@@ -51,10 +50,13 @@ rclone md5sum remote:path [flags]
--cache-chunk-size string The size of a chunk (default "5M")
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
+ --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+ --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
@@ -73,17 +75,19 @@ rclone md5sum remote:path [flags]
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-impersonate string Impersonate this user when using a service account.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from:
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP headers - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
@@ -104,29 +108,41 @@ rclone md5sum remote:path [flags]
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
+ --no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
-q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
@@ -140,11 +156,12 @@ rclone md5sum remote:path [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ -v, --verbose count Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 23-Dec-2017
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40
+
+###### Auto generated by spf13/cobra on 19-Mar-2018
diff --git a/docs/content/commands/rclone_mkdir.md b/docs/content/commands/rclone_mkdir.md
index a66810f6a..9a584b934 100644
--- a/docs/content/commands/rclone_mkdir.md
+++ b/docs/content/commands/rclone_mkdir.md
@@ -1,5 +1,5 @@
---
-date: 2017-12-23T13:05:26Z
+date: 2018-03-19T10:05:30Z
title: "rclone mkdir"
slug: rclone_mkdir
url: /commands/rclone_mkdir/
@@ -10,7 +10,6 @@ Make the path if it doesn't already exist.
### Synopsis
-
Make the path if it doesn't already exist.
```
@@ -48,10 +47,13 @@ rclone mkdir remote:path [flags]
--cache-chunk-size string The size of a chunk (default "5M")
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
+ --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+ --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
@@ -70,17 +72,19 @@ rclone mkdir remote:path [flags]
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-impersonate string Impersonate this user when using a service account.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from:
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP headers - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
@@ -101,29 +105,41 @@ rclone mkdir remote:path [flags]
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
+ --no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
-q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
@@ -137,11 +153,12 @@ rclone mkdir remote:path [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ -v, --verbose count Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 23-Dec-2017
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40
+
+###### Auto generated by spf13/cobra on 19-Mar-2018
diff --git a/docs/content/commands/rclone_mount.md b/docs/content/commands/rclone_mount.md
index 8fa9ea3b2..48a776df3 100644
--- a/docs/content/commands/rclone_mount.md
+++ b/docs/content/commands/rclone_mount.md
@@ -1,5 +1,5 @@
---
-date: 2017-12-23T13:05:26Z
+date: 2018-03-19T10:05:30Z
title: "rclone mount"
slug: rclone_mount
url: /commands/rclone_mount/
@@ -11,7 +11,6 @@ Mount the remote as a mountpoint. **EXPERIMENTAL**
### Synopsis
-
rclone mount allows Linux, FreeBSD, macOS and Windows to
mount any of Rclone's cloud storage systems as a file system with
FUSE.
@@ -20,11 +19,6 @@ This is **EXPERIMENTAL** - use with care.
First set up your remote using `rclone config`. Check it works with `rclone ls` etc.
-You can either run mount in foreground mode or background(daemon) mode. Mount runs in
-foreground mode by default, use the `--daemon` flag to specify background mode mode.
-Background mode is only supported on Linux and OSX, you can only run mount in
-foreground mode on Windows.
-
Start the mount like this
rclone mount remote:path/to/files /path/to/local/mount
@@ -33,21 +27,18 @@ Or on Windows like this where X: is an unused drive letter
rclone mount remote:path/to/files X:
-When running in background mode the user will have to stop the mount manually (specified below).
-
-When the program ends while in foreground mode, either via Ctrl+C or receiving
-a SIGINT or SIGTERM signal, the mount is automatically stopped.
+When the program ends, either via Ctrl+C or receiving a SIGINT or SIGTERM signal,
+the mount is automatically stopped.
The umount operation can fail, for example when the mountpoint is busy.
-When that happens, it is the user's responsibility to stop the mount manually.
+When that happens, it is the user's responsibility to stop the mount manually with
-Stopping the mount manually:
# Linux
fusermount -u /path/to/local/mount
# OS X
umount /path/to/local/mount
-### Installing on Windows ###
+### Installing on Windows
To run rclone mount on Windows, you will need to
download and install [WinFsp](http://www.secfs.net/winfsp/).
@@ -60,7 +51,7 @@ uses combination with
packages are by Bill Zissimopoulos who was very helpful during the
implementation of rclone mount for Windows.
-#### Windows caveats ####
+#### Windows caveats
Note that drives created as Administrator are not visible by other
accounts (including the account that was elevated as
@@ -73,13 +64,16 @@ The easiest way around this is to start the drive from a normal
command prompt. It is also possible to start a drive from the SYSTEM
account (using [the WinFsp.Launcher
infrastructure](https://github.com/billziss-gh/winfsp/wiki/WinFsp-Service-Architecture))
-which creates drives accessible for everyone on the system.
+which creates drives accessible for everyone on the system or
+alternatively using [the nssm service manager](https://nssm.cc/usage).
-### Limitations ###
+### Limitations
-This can only write files seqentially, it can only seek when reading.
-This means that many applications won't work with their files on an
-rclone mount.
+Without the use of "--vfs-cache-mode" this can only write files
+sequentially and can only seek when reading. This means that many
+applications won't work with their files on an rclone mount without
+"--vfs-cache-mode writes" or "--vfs-cache-mode full". See the [File
+Caching](#file-caching) section for more info.
The bucket based remotes (eg Swift, S3, Google Cloud Storage, B2,
Hubic) won't work from the root - you will need to specify a bucket,
@@ -91,29 +85,43 @@ the directory cache.
Only supported on Linux, FreeBSD, OS X and Windows at the moment.
-### rclone mount vs rclone sync/copy ##
+### rclone mount vs rclone sync/copy
File systems expect things to be 100% reliable, whereas cloud storage
systems are a long way from 100% reliable. The rclone sync/copy
commands cope with this with lots of retries. However rclone mount
can't use retries in the same way without making local copies of the
-uploads. This might happen in the future, but for the moment rclone
-mount won't do that, so will be less reliable than the rclone command.
+uploads. Look at the **EXPERIMENTAL** [file caching](#file-caching)
+for solutions to make rclone mount more reliable.
-### Filters ###
+### Attribute caching
+
+You can use the flag --attr-timeout to set the time the kernel caches
+the attributes (size, modification time etc) for directory entries.
+
+The default is 0s - no caching - which is recommended for filesystems
+which can change outside the control of the kernel.
+
+If you set it higher ('1s' or '1m' say) then the kernel will call back
+to rclone less often, making it more efficient; however, there may be
+strange effects when files change on the remote.
+
+This is the same as setting the attr_timeout option in mount.fuse.
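As an illustrative sketch of the above (the remote name `remote:` and the mountpoint are placeholders, not from this document):

```sh
# Cache file/directory attributes for 1 second to reduce kernel
# callbacks to rclone; 0s (the default) disables attribute caching.
rclone mount remote:path /path/to/local/mount --attr-timeout 1s
```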
+
+### Filters
Note that all the rclone filters can be used to select a subset of the
files to be visible in the mount.
-### systemd ###
+### systemd
When running rclone mount as a systemd service, it is possible
-to use Type=notify. In this case the service will enter the started state
+to use Type=notify. In this case the service will enter the started state
after the mountpoint has been successfully set up.
Units having the rclone mount service specified as a requirement
will see all files and folders immediately in this mode.
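A minimal unit sketch for the Type=notify setup described above (unit name, binary path, remote and mountpoint are all assumptions to adapt for your system):

```ini
# /etc/systemd/system/rclone-mount.service - illustrative sketch only
[Unit]
Description=rclone mount of remote:path
After=network-online.target

[Service]
Type=notify
ExecStart=/usr/local/bin/rclone mount remote:path /path/to/local/mount
ExecStop=/bin/fusermount -u /path/to/local/mount

[Install]
WantedBy=multi-user.target
```

With Type=notify, dependent units that Require/After this service will only start once the mountpoint is up.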
-### Directory Cache ###
+### Directory Cache
Using the `--dir-cache-time` flag, you can set how long a
directory should be considered up to date and not refreshed from the
@@ -128,12 +136,21 @@ like this:
kill -SIGHUP $(pidof rclone)
-### File Caching ###
+If you configure rclone with a [remote control](/rc) then you can use
+rclone rc to flush the whole directory cache:
+
+ rclone rc vfs/forget
+
+Or individual files or directories:
+
+ rclone rc vfs/forget file=path/to/file dir=path/to/dir
+
+### File Caching
**NB** File caching is **EXPERIMENTAL** - use with care!
These flags control the VFS file caching options. The VFS layer is
-used by rclone mount to make a cloud storage systm work more like a
+used by rclone mount to make a cloud storage system work more like a
normal file system.
You'll need to enable VFS caching if you want, for example, to read
@@ -142,7 +159,7 @@ and write simultaneously to a file. See below for more details.
Note that the VFS cache works in addition to the cache backend and you
may find that you need one or the other or both.
- --vfs-cache-dir string Directory rclone will use for caching.
+ --cache-dir string Directory rclone will use for caching.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
@@ -161,7 +178,7 @@ closed so if rclone is quit or dies with open files then these won't
get written back to the remote. However they will still be in the on
disk cache.
-#### --vfs-cache-mode off ####
+#### --vfs-cache-mode off
In this mode the cache will read directly from the remote and write
directly to the remote without caching anything on disk.
@@ -176,7 +193,7 @@ This will mean some operations are not possible
* Open modes O_APPEND, O_TRUNC are ignored
* If an upload fails it can't be retried
-#### --vfs-cache-mode minimal ####
+#### --vfs-cache-mode minimal
This is very similar to "off" except that files opened for read AND
write will be buffered to disks. This means that files opened for
@@ -189,7 +206,7 @@ These operations are not possible
* Files opened for write only will ignore O_APPEND, O_TRUNC
* If an upload fails it can't be retried
-#### --vfs-cache-mode writes ####
+#### --vfs-cache-mode writes
In this mode files opened for read only are still read directly from
the remote, write only and read/write files are buffered to disk
@@ -199,14 +216,14 @@ This mode should support all normal file system operations.
If an upload fails it will be retried up to --low-level-retries times.
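A hedged usage sketch of this mode (remote and mountpoint are placeholders):

```sh
# Buffer written files through the on-disk cache so applications that
# seek while writing work; read-only opens still stream from the remote.
rclone mount remote:path /path/to/local/mount --vfs-cache-mode writes
```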
-#### --vfs-cache-mode full ####
+#### --vfs-cache-mode full
In this mode all reads and writes are buffered to and from disk. When
a file is opened for read it will be downloaded in its entirety first.
This may be appropriate for your needs, or you may prefer to look at
the cache backend which does a much more sophisticated job of caching,
-including caching directory heirachies and chunks of files.q
+including caching directory hierarchies and chunks of files.
In this mode, unlike the others, when a file is written to the disk,
it will be kept on the disk after it is written to the remote. It
@@ -228,6 +245,8 @@ rclone mount remote:path /path/to/mountpoint [flags]
--allow-non-empty Allow mounting over a non-empty directory.
--allow-other Allow access to other users.
--allow-root Allow access to root user.
+ --attr-timeout duration Time for which file/directory attributes are cached.
+ --daemon Run mount as a daemon (background mode).
--debug-fuse Debug the FUSE internals - needs -v.
--default-permissions Makes kernel enforce access control based on the file mode.
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
@@ -274,10 +293,13 @@ rclone mount remote:path /path/to/mountpoint [flags]
--cache-chunk-size string The size of a chunk (default "5M")
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
+ --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+ --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
@@ -296,17 +318,19 @@ rclone mount remote:path /path/to/mountpoint [flags]
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-impersonate string Impersonate this user when using a service account.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from:
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP headers - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
@@ -327,29 +351,41 @@ rclone mount remote:path /path/to/mountpoint [flags]
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
+ --no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
-q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
@@ -363,11 +399,12 @@ rclone mount remote:path /path/to/mountpoint [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ -v, --verbose count Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 23-Dec-2017
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40
+
+###### Auto generated by spf13/cobra on 19-Mar-2018
diff --git a/docs/content/commands/rclone_move.md b/docs/content/commands/rclone_move.md
index 14c3b0d83..3cc3f35aa 100644
--- a/docs/content/commands/rclone_move.md
+++ b/docs/content/commands/rclone_move.md
@@ -1,5 +1,5 @@
---
-date: 2017-12-23T13:05:26Z
+date: 2018-03-19T10:05:30Z
title: "rclone move"
slug: rclone_move
url: /commands/rclone_move/
@@ -11,7 +11,6 @@ Move files from source to dest.
### Synopsis
-
Moves the contents of the source directory to the destination
directory. Rclone will error if the source and destination overlap and
the remote does not support a server side directory move operation.
@@ -68,10 +67,13 @@ rclone move source:path dest:path [flags]
--cache-chunk-size string The size of a chunk (default "5M")
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
+ --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+ --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
@@ -90,17 +92,19 @@ rclone move source:path dest:path [flags]
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-impersonate string Impersonate this user when using a service account.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from:
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP headers - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
@@ -121,29 +125,41 @@ rclone move source:path dest:path [flags]
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
+ --no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
-q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
@@ -157,11 +173,12 @@ rclone move source:path dest:path [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ -v, --verbose count Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 23-Dec-2017
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40
+
+###### Auto generated by spf13/cobra on 19-Mar-2018
diff --git a/docs/content/commands/rclone_moveto.md b/docs/content/commands/rclone_moveto.md
index ba1d0ebfa..3ffb1bfc7 100644
--- a/docs/content/commands/rclone_moveto.md
+++ b/docs/content/commands/rclone_moveto.md
@@ -1,5 +1,5 @@
---
-date: 2017-12-23T13:05:26Z
+date: 2018-03-19T10:05:30Z
title: "rclone moveto"
slug: rclone_moveto
url: /commands/rclone_moveto/
@@ -11,7 +11,6 @@ Move file or directory from source to dest.
### Synopsis
-
If source:path is a file or directory then it moves it to a file or
directory named dest:path.
@@ -77,10 +76,13 @@ rclone moveto source:path dest:path [flags]
--cache-chunk-size string The size of a chunk (default "5M")
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
+ --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+ --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
@@ -99,17 +101,19 @@ rclone moveto source:path dest:path [flags]
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-impersonate string Impersonate this user when using a service account.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from:
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP headers - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
@@ -130,29 +134,41 @@ rclone moveto source:path dest:path [flags]
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
+ --no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
-q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
@@ -166,11 +182,12 @@ rclone moveto source:path dest:path [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ -v, --verbose count Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 23-Dec-2017
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40
+
+###### Auto generated by spf13/cobra on 19-Mar-2018
diff --git a/docs/content/commands/rclone_ncdu.md b/docs/content/commands/rclone_ncdu.md
index 53caf6e02..35ae0e4e9 100644
--- a/docs/content/commands/rclone_ncdu.md
+++ b/docs/content/commands/rclone_ncdu.md
@@ -1,5 +1,5 @@
---
-date: 2017-12-23T13:05:26Z
+date: 2018-03-19T10:05:30Z
title: "rclone ncdu"
slug: rclone_ncdu
url: /commands/rclone_ncdu/
@@ -11,11 +11,12 @@ Explore a remote with a text based user interface.
### Synopsis
-
This displays a text based user interface allowing the navigation of a
remote. It is most useful for answering the question - "What is using
all my disk space?".
+
+
To make the user interface it first scans the entire remote given and
builds an in memory representation. rclone ncdu can be used during
this scanning phase and you will see it building up the directory
@@ -72,10 +73,13 @@ rclone ncdu remote:path [flags]
--cache-chunk-size string The size of a chunk (default "5M")
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
+ --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+ --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
@@ -94,17 +98,19 @@ rclone ncdu remote:path [flags]
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-impersonate string Impersonate this user when using a service account.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from:
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP headers - may contain sensitive info
+      --dump-headers                      Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
@@ -125,29 +131,41 @@ rclone ncdu remote:path [flags]
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
+ --no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
-q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
@@ -161,11 +179,12 @@ rclone ncdu remote:path [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ -v, --verbose count Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 23-Dec-2017
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40
+
+###### Auto generated by spf13/cobra on 19-Mar-2018
diff --git a/docs/content/commands/rclone_obscure.md b/docs/content/commands/rclone_obscure.md
index d9399ad46..a7b17ec8a 100644
--- a/docs/content/commands/rclone_obscure.md
+++ b/docs/content/commands/rclone_obscure.md
@@ -1,5 +1,5 @@
---
-date: 2017-12-23T13:05:26Z
+date: 2018-03-19T10:05:30Z
title: "rclone obscure"
slug: rclone_obscure
url: /commands/rclone_obscure/
@@ -10,7 +10,6 @@ Obscure password for use in the rclone.conf
### Synopsis
-
Obscure password for use in the rclone.conf
```
@@ -48,10 +47,13 @@ rclone obscure password [flags]
--cache-chunk-size string The size of a chunk (default "5M")
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
+ --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+ --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
@@ -70,17 +72,19 @@ rclone obscure password [flags]
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-impersonate string Impersonate this user when using a service account.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from:
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP headers - may contain sensitive info
+      --dump-headers                      Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
@@ -101,29 +105,41 @@ rclone obscure password [flags]
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
+ --no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
-q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
@@ -137,11 +153,12 @@ rclone obscure password [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ -v, --verbose count Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 23-Dec-2017
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40
+
+###### Auto generated by spf13/cobra on 19-Mar-2018
diff --git a/docs/content/commands/rclone_purge.md b/docs/content/commands/rclone_purge.md
index 4ede2ea8c..cc2f3e769 100644
--- a/docs/content/commands/rclone_purge.md
+++ b/docs/content/commands/rclone_purge.md
@@ -1,5 +1,5 @@
---
-date: 2017-12-23T13:05:26Z
+date: 2018-03-19T10:05:30Z
title: "rclone purge"
slug: rclone_purge
url: /commands/rclone_purge/
@@ -11,7 +11,6 @@ Remove the path and all of its contents.
### Synopsis
-
Remove the path and all of its contents. Note that this does not obey
include/exclude filters - everything will be removed. Use `delete` if
you want to selectively delete files.
@@ -52,10 +51,13 @@ rclone purge remote:path [flags]
--cache-chunk-size string The size of a chunk (default "5M")
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
+ --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+ --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
@@ -74,17 +76,19 @@ rclone purge remote:path [flags]
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-impersonate string Impersonate this user when using a service account.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from:
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP headers - may contain sensitive info
+      --dump-headers                      Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
@@ -105,29 +109,41 @@ rclone purge remote:path [flags]
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
+ --no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
-q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
@@ -141,11 +157,12 @@ rclone purge remote:path [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ -v, --verbose count Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 23-Dec-2017
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40
+
+###### Auto generated by spf13/cobra on 19-Mar-2018
diff --git a/docs/content/commands/rclone_rc.md b/docs/content/commands/rclone_rc.md
new file mode 100644
index 000000000..3116309b4
--- /dev/null
+++ b/docs/content/commands/rclone_rc.md
@@ -0,0 +1,174 @@
+---
+date: 2018-03-19T10:05:30Z
+title: "rclone rc"
+slug: rclone_rc
+url: /commands/rclone_rc/
+---
+## rclone rc
+
+Run a command against a running rclone.
+
+### Synopsis
+
+
+This runs a command against a running rclone. By default it will
+connect to the address specified by the --rc-addr flag.
+
+Arguments should be passed in as parameter=value.
+
+The result will be returned as a JSON object by default.
+
+Use "rclone rc list" to see a list of all possible commands.
+
+```
+rclone rc commands parameter [flags]
+```
+
+### Options
+
+```
+ -h, --help help for rc
+ --no-output If set don't output the JSON result.
+ --url string URL to connect to rclone remote control. (default "http://localhost:5572/")
+```
+
+### Options inherited from parent commands
+
+```
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+      --cache-chunk-path string                Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size string The size of a chunk (default "5M")
+ --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Purge the cache DB before
+ --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age string How much time should object info be stored in cache (default "6h")
+ --cache-read-retries int How many times to retry a read from a cache storage (default 10)
+ --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+ --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
+ --cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+ --cache-writes Will cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
+      --delete-after                           When synchronizing, delete files on destination after transferring
+      --delete-before                          When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+      --drive-chunk-size int                   Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+      --dump-headers                           Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+      --gcs-location string                    Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-existing Skip all files that exist on destination
+      --ignore-size                            Ignore size when skipping, use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
+ --log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ -v, --verbose count Print lots more stuff (repeat for more)
+```
+
+### SEE ALSO
+
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40
+
+###### Auto generated by spf13/cobra on 19-Mar-2018
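Several of the flags above (`--max-age`, `--min-age`, `--stats`) take durations with the suffixes ms|s|m|h|d|w|M|y. A quick sanity sketch of the common conversions, assuming the conventional 1w = 7d:

```shell
# Duration suffix arithmetic as used by --max-age / --min-age (all in seconds)
s=1
m=$((60 * s))    # 1m = 60s
h=$((60 * m))    # 1h = 3600s
d=$((24 * h))    # 1d = 86400s
w=$((7 * d))     # 1w = 604800s
echo "1w = ${w}s"
```

For instance, `rclone copy --max-age 1w source:path dest:path` copies only files modified within the last week.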
diff --git a/docs/content/commands/rclone_rcat.md b/docs/content/commands/rclone_rcat.md
index aa6bfe17c..69f3e578a 100644
--- a/docs/content/commands/rclone_rcat.md
+++ b/docs/content/commands/rclone_rcat.md
@@ -1,5 +1,5 @@
---
-date: 2017-12-23T13:05:26Z
+date: 2018-03-19T10:05:30Z
title: "rclone rcat"
slug: rclone_rcat
url: /commands/rclone_rcat/
@@ -11,7 +11,6 @@ Copies standard input to file on remote.
### Synopsis
-
rclone rcat reads from standard input (stdin) and copies it to a
single remote file.
@@ -70,10 +69,13 @@ rclone rcat remote:path [flags]
--cache-chunk-size string The size of a chunk (default "5M")
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
+ --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+ --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
@@ -92,17 +94,19 @@ rclone rcat remote:path [flags]
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-impersonate string Impersonate this user when using a service account.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from:
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP headers - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
@@ -123,29 +127,41 @@ rclone rcat remote:path [flags]
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
+ --no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
-q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
@@ -159,11 +175,12 @@ rclone rcat remote:path [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ -v, --verbose count Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 23-Dec-2017
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40
+
+###### Auto generated by spf13/cobra on 19-Mar-2018
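The rcat flags above stream standard input to a single remote object; a minimal sketch (the remote name and paths are illustrative):

```shell
# Pipe data straight to a remote file; rclone buffers up to
# --streaming-upload-cutoff before choosing simple vs chunked upload
echo "hello world" | rclone rcat remote:tmp/hello.txt

# For larger streams, lower the cutoff so chunked upload starts sooner
tar czf - ./dir | rclone --streaming-upload-cutoff 1M rcat remote:backups/dir.tgz
```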
diff --git a/docs/content/commands/rclone_rmdir.md b/docs/content/commands/rclone_rmdir.md
index 9a90a9856..c01d7bb4e 100644
--- a/docs/content/commands/rclone_rmdir.md
+++ b/docs/content/commands/rclone_rmdir.md
@@ -1,5 +1,5 @@
---
-date: 2017-12-23T13:05:26Z
+date: 2018-03-19T10:05:30Z
title: "rclone rmdir"
slug: rclone_rmdir
url: /commands/rclone_rmdir/
@@ -11,7 +11,6 @@ Remove the path if empty.
### Synopsis
-
Remove the path. Note that you can't remove a path with
objects in it, use purge for that.
@@ -50,10 +49,13 @@ rclone rmdir remote:path [flags]
--cache-chunk-size string The size of a chunk (default "5M")
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
+ --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+ --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
@@ -72,17 +74,19 @@ rclone rmdir remote:path [flags]
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-impersonate string Impersonate this user when using a service account.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from:
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP headers - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
@@ -103,29 +107,41 @@ rclone rmdir remote:path [flags]
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
+ --no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
-q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
@@ -139,11 +155,12 @@ rclone rmdir remote:path [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ -v, --verbose count Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 23-Dec-2017
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40
+
+###### Auto generated by spf13/cobra on 19-Mar-2018
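rmdir removes only the named directory, and only when it is empty; a minimal sketch (the remote name is illustrative):

```shell
# Preview the removal first
rclone --dry-run rmdir remote:path/to/empty-dir

# Remove for real; this fails if the directory still contains objects
# (use purge to delete a directory together with its contents)
rclone rmdir remote:path/to/empty-dir
```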
diff --git a/docs/content/commands/rclone_rmdirs.md b/docs/content/commands/rclone_rmdirs.md
index 79e2ce767..7f699bd9d 100644
--- a/docs/content/commands/rclone_rmdirs.md
+++ b/docs/content/commands/rclone_rmdirs.md
@@ -1,5 +1,5 @@
---
-date: 2017-12-23T13:05:26Z
+date: 2018-03-19T10:05:30Z
title: "rclone rmdirs"
slug: rclone_rmdirs
url: /commands/rclone_rmdirs/
@@ -10,7 +10,6 @@ Remove empty directories under the path.
### Synopsis
-
This removes any empty directories (or directories that only contain
empty directories) under the path that it finds, including the path if
it has nothing in.
@@ -58,10 +57,13 @@ rclone rmdirs remote:path [flags]
--cache-chunk-size string The size of a chunk (default "5M")
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
+ --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+ --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
@@ -80,17 +82,19 @@ rclone rmdirs remote:path [flags]
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-impersonate string Impersonate this user when using a service account.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from:
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP headers - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
@@ -111,29 +115,41 @@ rclone rmdirs remote:path [flags]
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
+ --no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
-q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
@@ -147,11 +163,12 @@ rclone rmdirs remote:path [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ -v, --verbose count Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 23-Dec-2017
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40
+
+###### Auto generated by spf13/cobra on 19-Mar-2018
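The `--rc*` flags listed in each command above start an HTTP remote control server alongside the running command; a hedged sketch (the `rc/noop` endpoint is assumed to be available in this build, and the credentials and remotes are illustrative):

```shell
# Run a long sync with the remote control server listening on localhost:5572
rclone --rc --rc-user admin --rc-pass secret sync source:path dest:path &

# Query the server while the sync runs; rc/noop echoes its parameters back
curl -s -u admin:secret -X POST 'http://localhost:5572/rc/noop?echo=1'
```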
diff --git a/docs/content/commands/rclone_serve.md b/docs/content/commands/rclone_serve.md
index 4e86fa98d..a9015fbde 100644
--- a/docs/content/commands/rclone_serve.md
+++ b/docs/content/commands/rclone_serve.md
@@ -1,5 +1,5 @@
---
-date: 2017-12-23T13:05:26Z
+date: 2018-03-19T10:05:30Z
title: "rclone serve"
slug: rclone_serve
url: /commands/rclone_serve/
@@ -10,7 +10,6 @@ Serve a remote over a protocol.
### Synopsis
-
rclone serve is used to serve a remote over a given protocol. This
command requires the use of a subcommand to specify the protocol, eg
@@ -54,10 +53,13 @@ rclone serve [opts] [flags]
--cache-chunk-size string The size of a chunk (default "5M")
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
+ --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+ --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
@@ -76,17 +78,19 @@ rclone serve [opts] [flags]
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-impersonate string Impersonate this user when using a service account.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from:
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP headers - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
@@ -107,29 +111,41 @@ rclone serve [opts] [flags]
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
+ --no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
-q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
@@ -143,13 +159,15 @@ rclone serve [opts] [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ -v, --verbose count Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
+
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40
* [rclone serve http](/commands/rclone_serve_http/) - Serve the remote over HTTP.
+* [rclone serve restic](/commands/rclone_serve_restic/) - Serve the remote for restic's REST API.
* [rclone serve webdav](/commands/rclone_serve_webdav/) - Serve remote:path over webdav.
-###### Auto generated by spf13/cobra on 23-Dec-2017
+###### Auto generated by spf13/cobra on 19-Mar-2018
diff --git a/docs/content/commands/rclone_serve_http.md b/docs/content/commands/rclone_serve_http.md
index 432918632..4db8b7fae 100644
--- a/docs/content/commands/rclone_serve_http.md
+++ b/docs/content/commands/rclone_serve_http.md
@@ -1,5 +1,5 @@
---
-date: 2017-12-23T13:05:26Z
+date: 2018-03-19T10:05:30Z
title: "rclone serve http"
slug: rclone_serve_http
url: /commands/rclone_serve_http/
@@ -10,15 +10,10 @@ Serve the remote over HTTP.
### Synopsis
-
rclone serve http implements a basic web server to serve the remote
over HTTP. This can be viewed in a web browser or you can make a
remote of type http read from it.
-Use --addr to specify which IP address and port the server should
-listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all
-IPs. By default it only listens on localhost.
-
You can use the filter flags (eg --include, --exclude) to control what
is served.
@@ -27,7 +22,56 @@ The server will log errors. Use -v to see access logs.
--bwlimit will be respected for file transfers. Use --stats to
control the stats printing.
-### Directory Cache ###
+### Server options
+
+Use --addr to specify which IP address and port the server should
+listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all
+IPs. By default it only listens on localhost.
+
+If you set --addr to listen on a public or LAN accessible IP address
+then using Authentication is advised - see the next section for info.
+
+--server-read-timeout and --server-write-timeout can be used to
+control the timeouts on the server. Note that this is the total time
+for a transfer.
+
+--max-header-bytes controls the maximum number of bytes the server will
+accept in the HTTP header.
+
+#### Authentication
+
+By default this will serve files without needing a login.
+
+You can either use an htpasswd file which can take lots of users, or
+set a single username and password with the --user and --pass flags.
+
+Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is
+in standard apache format and supports MD5, SHA1 and BCrypt for basic
+authentication. Bcrypt is recommended.
+
+To create an htpasswd file:
+
+ touch htpasswd
+ htpasswd -B htpasswd user
+ htpasswd -B htpasswd anotherUser
+
+The password file can be updated while rclone is running.
+
+Use --realm to set the authentication realm.
+
+#### SSL/TLS
+
+By default this will serve over http. If you want you can serve over
+https. You will need to supply the --cert and --key flags. If you
+wish to do client side certificate validation then you will need to
+supply --client-ca also.
+
+--cert should be either a PEM encoded certificate or a concatenation
+of that with the CA certificate. --key should be the PEM encoded
+private key and --client-ca should be the PEM encoded client
+certificate authority certificate.
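For local testing, a self-signed certificate and key can be generated with openssl (a sketch; the use of openssl and the file names here are illustrative assumptions, not part of rclone):

```shell
# Generate a self-signed certificate (cert.pem) and private key (key.pem)
# valid for 30 days, suitable only for local testing.
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
    -subj "/CN=localhost" -keyout key.pem -out cert.pem

# For a certificate chain you would concatenate the server certificate
# with the CA certificate, eg:
#   cat cert.pem ca.pem > fullchain.pem
```

The resulting files could then be passed as `--cert cert.pem --key key.pem`.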
+
+### Directory Cache
Using the `--dir-cache-time` flag, you can set how long a
directory should be considered up to date and not refreshed from the
@@ -42,12 +86,21 @@ like this:
kill -SIGHUP $(pidof rclone)
-### File Caching ###
+If you configure rclone with a [remote control](/rc) then you can use
+rclone rc to flush the whole directory cache:
+
+ rclone rc vfs/forget
+
+Or individual files or directories:
+
+ rclone rc vfs/forget file=path/to/file dir=path/to/dir
+
+### File Caching
**NB** File caching is **EXPERIMENTAL** - use with care!
These flags control the VFS file caching options. The VFS layer is
-used by rclone mount to make a cloud storage systm work more like a
+used by rclone mount to make a cloud storage system work more like a
normal file system.
You'll need to enable VFS caching if you want, for example, to read
@@ -56,7 +109,7 @@ and write simultaneously to a file. See below for more details.
Note that the VFS cache works in addition to the cache backend and you
may find that you need one or the other or both.
- --vfs-cache-dir string Directory rclone will use for caching.
+ --cache-dir string Directory rclone will use for caching.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
@@ -75,7 +128,7 @@ closed so if rclone is quit or dies with open files then these won't
get written back to the remote. However they will still be in the on
disk cache.
-#### --vfs-cache-mode off ####
+#### --vfs-cache-mode off
In this mode the cache will read directly from the remote and write
directly to the remote without caching anything on disk.
@@ -90,7 +143,7 @@ This will mean some operations are not possible
* Open modes O_APPEND, O_TRUNC are ignored
* If an upload fails it can't be retried
-#### --vfs-cache-mode minimal ####
+#### --vfs-cache-mode minimal
This is very similar to "off" except that files opened for read AND
write will be buffered to disks. This means that files opened for
@@ -103,7 +156,7 @@ These operations are not possible
* Files opened for write only will ignore O_APPEND, O_TRUNC
* If an upload fails it can't be retried
-#### --vfs-cache-mode writes ####
+#### --vfs-cache-mode writes
In this mode files opened for read only are still read directly from
the remote, write only and read/write files are buffered to disk
@@ -113,14 +166,14 @@ This mode should support all normal file system operations.
If an upload fails it will be retried up to --low-level-retries times.
-#### --vfs-cache-mode full ####
+#### --vfs-cache-mode full
In this mode all reads and writes are buffered to and from disk. When
a file is opened for read it will be downloaded in its entirety first.
This may be appropriate for your needs, or you may prefer to look at
the cache backend which does a much more sophisticated job of caching,
-including caching directory heirachies and chunks of files.q
+including caching directory hierarchies and chunks of files.
In this mode, unlike the others, when a file is written to the disk,
it will be kept on the disk after it is written to the remote. It
@@ -139,17 +192,27 @@ rclone serve http remote:path [flags]
### Options
```
- --addr string IPaddress:Port to bind server to. (default "localhost:8080")
+ --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080")
+ --cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --client-ca string Client certificate authority to verify clients with
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
--gid uint32 Override the gid field set by the filesystem. (default 502)
-h, --help help for http
+ --htpasswd string htpasswd file - if not provided no authentication is done
+ --key string SSL PEM Private key
+ --max-header-bytes int Maximum size of request header (default 4096)
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
+ --pass string Password for authentication.
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
--read-only Mount read-only.
+ --realm string realm for authentication (default "rclone")
+ --server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--uid uint32 Override the uid field set by the filesystem. (default 502)
--umask int Override the permission bits set by the filesystem. (default 2)
+ --user string User name for authentication.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
@@ -180,10 +243,13 @@ rclone serve http remote:path [flags]
--cache-chunk-size string The size of a chunk (default "5M")
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
+ --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+ --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
@@ -202,17 +268,19 @@ rclone serve http remote:path [flags]
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-impersonate string Impersonate this user when using a service account.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from:
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP headers - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
@@ -233,29 +301,41 @@ rclone serve http remote:path [flags]
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
+ --no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
-q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
@@ -269,11 +349,12 @@ rclone serve http remote:path [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ -v, --verbose count Print lots more stuff (repeat for more)
```
### SEE ALSO
+
* [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol.
-###### Auto generated by spf13/cobra on 23-Dec-2017
+###### Auto generated by spf13/cobra on 19-Mar-2018
diff --git a/docs/content/commands/rclone_serve_restic.md b/docs/content/commands/rclone_serve_restic.md
new file mode 100644
index 000000000..f53c98c0f
--- /dev/null
+++ b/docs/content/commands/rclone_serve_restic.md
@@ -0,0 +1,298 @@
+---
+date: 2018-03-19T10:05:30Z
+title: "rclone serve restic"
+slug: rclone_serve_restic
+url: /commands/rclone_serve_restic/
+---
+## rclone serve restic
+
+Serve the remote for restic's REST API.
+
+### Synopsis
+
+rclone serve restic implements restic's REST backend API
+over HTTP. This allows restic to use rclone as a data storage
+mechanism for cloud providers that restic does not support directly.
+
+[Restic](https://restic.net/) is a command line program for doing
+backups.
+
+The server will log errors. Use -v to see access logs.
+
+--bwlimit will be respected for file transfers. Use --stats to
+control the stats printing.
+
+### Setting up rclone for use by restic ###
+
+First [set up a remote for your chosen cloud provider](/docs/#configure).
+
+Once you have set up the remote, check it is working with, for example
+"rclone lsd remote:". You may have called the remote something other
+than "remote:" - just substitute whatever you called it in the
+following instructions.
+
+Now start the rclone restic server
+
+ rclone serve restic -v remote:backup
+
+Where you can replace "backup" in the above with whatever path in the
+remote you wish to use.
+
+By default this will serve on "localhost:8080", but you can change this
+with the "--addr" flag.
+
+You might wish to start this server on boot.
+
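On a systemd-based Linux system, one way to start the server on boot is a unit file along these lines (a sketch; the binary path, user, and remote name are assumptions for illustration):

```
[Unit]
Description=rclone restic REST server
After=network-online.target

[Service]
ExecStart=/usr/local/bin/rclone serve restic -v remote:backup
Restart=on-failure
User=rclone

[Install]
WantedBy=multi-user.target
```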
+### Setting up restic to use rclone ###
+
+Now you can [follow the restic
+instructions](http://restic.readthedocs.io/en/latest/030_preparing_a_new_repo.html#rest-server)
+on setting up restic.
+
+Note that you will need restic 0.8.2 or later to interoperate with
+rclone.
+
+For the example above you will want to use "http://localhost:8080/" as
+the URL for the REST server.
+
+For example:
+
+ $ export RESTIC_REPOSITORY=rest:http://localhost:8080/
+ $ export RESTIC_PASSWORD=yourpassword
+ $ restic init
+ created restic backend 8b1a4b56ae at rest:http://localhost:8080/
+
+ Please note that knowledge of your password is required to access
+ the repository. Losing your password means that your data is
+ irrecoverably lost.
+ $ restic backup /path/to/files/to/backup
+ scan [/path/to/files/to/backup]
+ scanned 189 directories, 312 files in 0:00
+ [0:00] 100.00% 38.128 MiB / 38.128 MiB 501 / 501 items 0 errors ETA 0:00
+ duration: 0:00
+ snapshot 45c8fdd8 saved
+
+#### Multiple repositories ####
+
+Note that you can use the endpoint to host multiple repositories. Do
+this by adding a directory name or path after the URL. These paths
+**must** end with a /. Eg
+
+ $ export RESTIC_REPOSITORY=rest:http://localhost:8080/user1repo/
+ # backup user1 stuff
+ $ export RESTIC_REPOSITORY=rest:http://localhost:8080/user2repo/
+ # backup user2 stuff
+
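Since a missing trailing / is an easy mistake to make, a small (hypothetical) shell helper can normalise repository URLs before exporting them:

```shell
# Hypothetical helper: append a trailing "/" to a repository URL if it
# is missing, since restic REST repository paths served this way must
# end with "/".
repo_url() {
    case "$1" in
        */) printf '%s\n' "$1" ;;
        *)  printf '%s/\n' "$1" ;;
    esac
}

repo_url "rest:http://localhost:8080/user1repo"
# -> rest:http://localhost:8080/user1repo/
```

It could be used as `export RESTIC_REPOSITORY="$(repo_url rest:http://localhost:8080/user1repo)"`.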
+
+### Server options
+
+Use --addr to specify which IP address and port the server should
+listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all
+IPs. By default it only listens on localhost.
+
+If you set --addr to listen on a public or LAN accessible IP address
+then using Authentication is advised - see the next section for info.
+
+--server-read-timeout and --server-write-timeout can be used to
+control the timeouts on the server. Note that this is the total time
+for a transfer.
+
+--max-header-bytes controls the maximum number of bytes the server will
+accept in the HTTP header.
+
+#### Authentication
+
+By default this will serve files without needing a login.
+
+You can either use an htpasswd file which can take lots of users, or
+set a single username and password with the --user and --pass flags.
+
+Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is
+in standard apache format and supports MD5, SHA1 and BCrypt for basic
+authentication. Bcrypt is recommended.
+
+To create an htpasswd file:
+
+ touch htpasswd
+ htpasswd -B htpasswd user
+ htpasswd -B htpasswd anotherUser
+
+The password file can be updated while rclone is running.
+
+Use --realm to set the authentication realm.
+
+#### SSL/TLS
+
+By default this will serve over http. If you want you can serve over
+https. You will need to supply the --cert and --key flags. If you
+wish to do client side certificate validation then you will need to
+supply --client-ca also.
+
+--cert should be either a PEM encoded certificate or a concatenation
+of that with the CA certificate. --key should be the PEM encoded
+private key and --client-ca should be the PEM encoded client
+certificate authority certificate.
+
+
+```
+rclone serve restic remote:path [flags]
+```
+
+### Options
+
+```
+ --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080")
+ --cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --client-ca string Client certificate authority to verify clients with
+ -h, --help help for restic
+ --htpasswd string htpasswd file - if not provided no authentication is done
+ --key string SSL PEM Private key
+ --max-header-bytes int Maximum size of request header (default 4096)
+ --pass string Password for authentication.
+ --realm string realm for authentication (default "rclone")
+ --server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --stdio run an HTTP2 server on stdin/stdout
+ --user string User name for authentication.
+```
+
+### Options inherited from parent commands
+
+```
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+ --cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size string The size of a chunk (default "5M")
+ --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Purge the cache DB before
+ --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age string How much time should object info be stored in cache (default "6h")
+ --cache-read-retries int How many times to retry a read from a cache storage (default 10)
+ --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+ --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
+ --cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+ --cache-writes Will cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use created date instead of modified date.
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
+ --log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Obsolete - does nothing.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ -v, --verbose count Print lots more stuff (repeat for more)
+```
+
+### SEE ALSO
+
+* [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol.
+
+###### Auto generated by spf13/cobra on 19-Mar-2018
diff --git a/docs/content/commands/rclone_serve_webdav.md b/docs/content/commands/rclone_serve_webdav.md
index 883c1c1e7..6ccd9eebf 100644
--- a/docs/content/commands/rclone_serve_webdav.md
+++ b/docs/content/commands/rclone_serve_webdav.md
@@ -1,5 +1,5 @@
---
-date: 2017-12-23T13:05:26Z
+date: 2018-03-19T10:05:30Z
title: "rclone serve webdav"
slug: rclone_serve_webdav
url: /commands/rclone_serve_webdav/
@@ -11,7 +11,6 @@ Serve remote:path over webdav.
### Synopsis
-
rclone serve webdav implements a basic webdav server to serve the
remote over HTTP via the webdav protocol. This can be viewed with a
webdav client or you can make a remote of type webdav to read and
@@ -20,8 +19,56 @@ write it.
NB at the moment each directory listing reads the start of each file
which is undesirable: see https://github.com/golang/go/issues/22577
+### Server options
-### Directory Cache ###
+Use --addr to specify which IP address and port the server should
+listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all
+IPs. By default it only listens on localhost.
+
+If you set --addr to listen on a public or LAN accessible IP address
+then using Authentication is advised - see the next section for info.
+
+--server-read-timeout and --server-write-timeout can be used to
+control the timeouts on the server. Note that this is the total time
+for a transfer.
+
+--max-header-bytes controls the maximum number of bytes the server will
+accept in the HTTP header.
+
+#### Authentication
+
+By default this will serve files without needing a login.
+
+You can either use an htpasswd file which can take lots of users, or
+set a single username and password with the --user and --pass flags.
+
+Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is
+in standard Apache format and supports MD5, SHA1 and BCrypt for basic
+authentication. BCrypt is recommended.
+
+To create an htpasswd file:
+
+ touch htpasswd
+ htpasswd -B htpasswd user
+ htpasswd -B htpasswd anotherUser
+
+The password file can be updated while rclone is running.
+
+Use --realm to set the authentication realm.
+
+#### SSL/TLS
+
+By default this will serve over http. If you want you can serve over
+https. You will need to supply the --cert and --key flags. If you
+wish to do client side certificate validation then you will need to
+supply --client-ca also.
+
+--cert should be either a PEM encoded certificate or a concatenation
+of that with the CA certificate. --key should be the PEM encoded
+private key and --client-ca should be the PEM encoded client
+certificate authority certificate.
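+
+As a quick sketch of setting this up for local testing (the file names,
+key size and certificate subject below are illustrative choices, not
+rclone defaults), you can generate a self-signed certificate and key
+with openssl and pass them to the serve command:

```shell
# Generate a self-signed certificate and key for local testing only.
# cert.pem, key.pem and the CN are illustrative; use real certificates
# in production.
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout key.pem -out cert.pem \
    -days 365 -subj "/CN=localhost"

# Then serve over https (remote:path is a placeholder remote):
# rclone serve webdav remote:path --cert cert.pem --key key.pem --addr :8080
```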
+
+### Directory Cache
Using the `--dir-cache-time` flag, you can set how long a
directory should be considered up to date and not refreshed from the
@@ -36,12 +83,21 @@ like this:
kill -SIGHUP $(pidof rclone)
-### File Caching ###
+If you configure rclone with a [remote control](/rc) then you can use
+rclone rc to flush the whole directory cache:
+
+ rclone rc vfs/forget
+
+Or individual files or directories:
+
+ rclone rc vfs/forget file=path/to/file dir=path/to/dir
+
+### File Caching
**NB** File caching is **EXPERIMENTAL** - use with care!
These flags control the VFS file caching options. The VFS layer is
-used by rclone mount to make a cloud storage systm work more like a
+used by rclone mount to make a cloud storage system work more like a
normal file system.
You'll need to enable VFS caching if you want, for example, to read
@@ -50,7 +106,7 @@ and write simultaneously to a file. See below for more details.
Note that the VFS cache works in addition to the cache backend and you
may find that you need one or the other or both.
- --vfs-cache-dir string Directory rclone will use for caching.
+ --cache-dir string Directory rclone will use for caching.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
@@ -69,7 +125,7 @@ closed so if rclone is quit or dies with open files then these won't
get written back to the remote. However they will still be in the on
disk cache.
-#### --vfs-cache-mode off ####
+#### --vfs-cache-mode off
In this mode the cache will read directly from the remote and write
directly to the remote without caching anything on disk.
@@ -84,7 +140,7 @@ This will mean some operations are not possible
* Open modes O_APPEND, O_TRUNC are ignored
* If an upload fails it can't be retried
-#### --vfs-cache-mode minimal ####
+#### --vfs-cache-mode minimal
This is very similar to "off" except that files opened for read AND
write will be buffered to disks. This means that files opened for
@@ -97,7 +153,7 @@ These operations are not possible
* Files opened for write only will ignore O_APPEND, O_TRUNC
* If an upload fails it can't be retried
-#### --vfs-cache-mode writes ####
+#### --vfs-cache-mode writes
In this mode files opened for read only are still read directly from
the remote, write only and read/write files are buffered to disk
@@ -107,14 +163,14 @@ This mode should support all normal file system operations.
If an upload fails it will be retried up to --low-level-retries times.
-#### --vfs-cache-mode full ####
+#### --vfs-cache-mode full
In this mode all reads and writes are buffered to and from disk. When
a file is opened for read it will be downloaded in its entirety first.
This may be appropriate for your needs, or you may prefer to look at
the cache backend which does a much more sophisticated job of caching,
-including caching directory heirachies and chunks of files.q
+including caching directory hierarchies and chunks of files.
In this mode, unlike the others, when a file is written to the disk,
it will be kept on the disk after it is written to the remote. It
@@ -133,17 +189,27 @@ rclone serve webdav remote:path [flags]
### Options
```
- --addr string IPaddress:Port to bind server to. (default "localhost:8081")
+ --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080")
+ --cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --client-ca string Client certificate authority to verify clients with
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
--gid uint32 Override the gid field set by the filesystem. (default 502)
-h, --help help for webdav
+ --htpasswd string htpasswd file - if not provided no authentication is done
+ --key string SSL PEM Private key
+ --max-header-bytes int Maximum size of request header (default 4096)
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
+ --pass string Password for authentication.
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
--read-only Mount read-only.
+ --realm string realm for authentication (default "rclone")
+ --server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--uid uint32 Override the uid field set by the filesystem. (default 502)
--umask int Override the permission bits set by the filesystem. (default 2)
+ --user string User name for authentication.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
@@ -174,10 +240,13 @@ rclone serve webdav remote:path [flags]
--cache-chunk-size string The size of a chunk (default "5M")
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
+ --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+ --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
@@ -196,17 +265,19 @@ rclone serve webdav remote:path [flags]
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-impersonate string Impersonate this user when using a service account.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from:
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP headers - may contain sensitive info
+      --dump-headers                      Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
@@ -227,29 +298,41 @@ rclone serve webdav remote:path [flags]
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
+ --no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
-q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
@@ -263,11 +346,12 @@ rclone serve webdav remote:path [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ -v, --verbose count Print lots more stuff (repeat for more)
```
### SEE ALSO
+
* [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol.
-###### Auto generated by spf13/cobra on 23-Dec-2017
+###### Auto generated by spf13/cobra on 19-Mar-2018
diff --git a/docs/content/commands/rclone_sha1sum.md b/docs/content/commands/rclone_sha1sum.md
index 2d22a3383..2a1b9217c 100644
--- a/docs/content/commands/rclone_sha1sum.md
+++ b/docs/content/commands/rclone_sha1sum.md
@@ -1,5 +1,5 @@
---
-date: 2017-12-23T13:05:26Z
+date: 2018-03-19T10:05:30Z
title: "rclone sha1sum"
slug: rclone_sha1sum
url: /commands/rclone_sha1sum/
@@ -11,7 +11,6 @@ Produces an sha1sum file for all the objects in the path.
### Synopsis
-
Produces an sha1sum file for all the objects in the path. This
is in the same format as the standard sha1sum tool produces.
@@ -51,10 +50,13 @@ rclone sha1sum remote:path [flags]
--cache-chunk-size string The size of a chunk (default "5M")
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
+ --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+ --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
@@ -73,17 +75,19 @@ rclone sha1sum remote:path [flags]
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-impersonate string Impersonate this user when using a service account.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from:
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP headers - may contain sensitive info
+      --dump-headers                      Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
@@ -104,29 +108,41 @@ rclone sha1sum remote:path [flags]
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
+ --no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
-q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
@@ -140,11 +156,12 @@ rclone sha1sum remote:path [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ -v, --verbose count Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 23-Dec-2017
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40
+
+###### Auto generated by spf13/cobra on 19-Mar-2018
diff --git a/docs/content/commands/rclone_size.md b/docs/content/commands/rclone_size.md
index 15d442052..66c85a15a 100644
--- a/docs/content/commands/rclone_size.md
+++ b/docs/content/commands/rclone_size.md
@@ -1,5 +1,5 @@
---
-date: 2017-12-23T13:05:26Z
+date: 2018-03-19T10:05:30Z
title: "rclone size"
slug: rclone_size
url: /commands/rclone_size/
@@ -10,7 +10,6 @@ Prints the total size and number of objects in remote:path.
### Synopsis
-
Prints the total size and number of objects in remote:path.
```
@@ -48,10 +47,13 @@ rclone size remote:path [flags]
--cache-chunk-size string The size of a chunk (default "5M")
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
+ --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+ --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
@@ -70,17 +72,19 @@ rclone size remote:path [flags]
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-impersonate string Impersonate this user when using a service account.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from:
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP headers - may contain sensitive info
+      --dump-headers                      Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
@@ -101,29 +105,41 @@ rclone size remote:path [flags]
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
+ --no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
-q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
@@ -137,11 +153,12 @@ rclone size remote:path [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ -v, --verbose count Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 23-Dec-2017
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40
+
+###### Auto generated by spf13/cobra on 19-Mar-2018
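
> Reviewer note: the `--rc-*` flags added throughout this diff configure rclone's new remote control server. A minimal usage sketch (remote paths, user name, and password are placeholder values; endpoint name is indicative of the rc API, not exhaustive):
>
> ```sh
> # Run a transfer with the remote control server enabled and protected
> # by basic authentication (listens on the default localhost:5572).
> rclone copy --rc --rc-user admin --rc-pass secret source:path dest:path
>
> # From another shell, query the running instance; rc endpoints
> # are invoked with POST requests.
> curl -X POST --user admin:secret http://localhost:5572/core/pid
> ```
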
diff --git a/docs/content/commands/rclone_sync.md b/docs/content/commands/rclone_sync.md
index f83e0f832..5abf9a2f5 100644
--- a/docs/content/commands/rclone_sync.md
+++ b/docs/content/commands/rclone_sync.md
@@ -1,5 +1,5 @@
---
-date: 2017-12-23T13:05:26Z
+date: 2018-03-19T10:05:30Z
title: "rclone sync"
slug: rclone_sync
url: /commands/rclone_sync/
@@ -11,7 +11,6 @@ Make source and dest identical, modifying destination only.
### Synopsis
-
Sync the source to the destination, changing the destination
only. Doesn't transfer unchanged files, testing by size and
modification time or MD5SUM. Destination is updated to match
@@ -67,10 +66,13 @@ rclone sync source:path dest:path [flags]
--cache-chunk-size string The size of a chunk (default "5M")
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
+ --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+ --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
@@ -89,17 +91,19 @@ rclone sync source:path dest:path [flags]
--drive-auth-owner-only Only consider files owned by the authenticated user.
    --drive-chunk-size int                Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-impersonate string Impersonate this user when using a service account.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from:
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP headers - may contain sensitive info
+      --dump-headers                        Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
@@ -120,29 +124,41 @@ rclone sync source:path dest:path [flags]
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
+ --no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
-q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
@@ -156,11 +172,12 @@ rclone sync source:path dest:path [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ -v, --verbose count Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 23-Dec-2017
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40
+
+###### Auto generated by spf13/cobra on 19-Mar-2018
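
> Reviewer note: the new `--max-delete` flag in this file limits how destructive a sync can be. A hedged example (remote names are placeholders):
>
> ```sh
> # Fail the sync rather than remove more than 10 destination files;
> # combine with --dry-run first to preview what would change.
> rclone sync --max-delete 10 --dry-run source:path dest:path
> ```
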
diff --git a/docs/content/commands/rclone_touch.md b/docs/content/commands/rclone_touch.md
index b9675f8de..73e243e98 100644
--- a/docs/content/commands/rclone_touch.md
+++ b/docs/content/commands/rclone_touch.md
@@ -1,5 +1,5 @@
---
-date: 2017-12-23T13:05:26Z
+date: 2018-03-19T10:05:30Z
title: "rclone touch"
slug: rclone_touch
url: /commands/rclone_touch/
@@ -10,7 +10,6 @@ Create new file or change file modification time.
### Synopsis
-
Create new file or change file modification time.
```
@@ -50,10 +49,13 @@ rclone touch remote:path [flags]
--cache-chunk-size string The size of a chunk (default "5M")
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
+ --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+ --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
@@ -72,17 +74,19 @@ rclone touch remote:path [flags]
--drive-auth-owner-only Only consider files owned by the authenticated user.
    --drive-chunk-size int                Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-impersonate string Impersonate this user when using a service account.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from:
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP headers - may contain sensitive info
+      --dump-headers                        Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
@@ -103,29 +107,41 @@ rclone touch remote:path [flags]
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
+ --no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
-q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
@@ -139,11 +155,12 @@ rclone touch remote:path [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ -v, --verbose count Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 23-Dec-2017
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40
+
+###### Auto generated by spf13/cobra on 19-Mar-2018
diff --git a/docs/content/commands/rclone_tree.md b/docs/content/commands/rclone_tree.md
index e08cb05c3..1c09d1079 100644
--- a/docs/content/commands/rclone_tree.md
+++ b/docs/content/commands/rclone_tree.md
@@ -1,5 +1,5 @@
---
-date: 2017-12-23T13:05:26Z
+date: 2018-03-19T10:05:30Z
title: "rclone tree"
slug: rclone_tree
url: /commands/rclone_tree/
@@ -11,7 +11,6 @@ List the contents of the remote in a tree like fashion.
### Synopsis
-
rclone tree lists the contents of a remote in a similar way to the
unix tree command.
@@ -91,10 +90,13 @@ rclone tree remote:path [flags]
--cache-chunk-size string The size of a chunk (default "5M")
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
+ --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+ --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
@@ -113,17 +115,19 @@ rclone tree remote:path [flags]
--drive-auth-owner-only Only consider files owned by the authenticated user.
    --drive-chunk-size int                Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-impersonate string Impersonate this user when using a service account.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from:
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP headers - may contain sensitive info
+      --dump-headers                        Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
@@ -144,29 +148,41 @@ rclone tree remote:path [flags]
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
+ --no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
-q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
@@ -180,11 +196,12 @@ rclone tree remote:path [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ -v, --verbose count Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 23-Dec-2017
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40
+
+###### Auto generated by spf13/cobra on 19-Mar-2018
diff --git a/docs/content/commands/rclone_version.md b/docs/content/commands/rclone_version.md
index 9350c0349..7073ab4fb 100644
--- a/docs/content/commands/rclone_version.md
+++ b/docs/content/commands/rclone_version.md
@@ -1,5 +1,5 @@
---
-date: 2017-12-23T13:05:26Z
+date: 2018-03-19T10:05:30Z
title: "rclone version"
slug: rclone_version
url: /commands/rclone_version/
@@ -10,7 +10,6 @@ Show the version number.
### Synopsis
-
Show the version number.
```
@@ -48,10 +47,13 @@ rclone version [flags]
--cache-chunk-size string The size of a chunk (default "5M")
--cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
--cache-db-purge Purge the cache DB before
+ --cache-db-wait-time duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
--cache-info-age string How much time should object info be stored in cache (default "6h")
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded to the cloud storage
+ --cache-tmp-wait-time string How long should files be stored in local cache before being uploaded (default "15m")
--cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Will cache file data on writes through the FS
@@ -70,17 +72,19 @@ rclone version [flags]
--drive-auth-owner-only Only consider files owned by the authenticated user.
    --drive-chunk-size int                Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-impersonate string Impersonate this user when using a service account.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use created date instead of modified date.
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from:
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-headers Dump HTTP headers - may contain sensitive info
+      --dump-headers                        Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
@@ -101,29 +105,41 @@ rclone version [flags]
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-age duration Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-age duration Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
+ --no-traverse Obsolete - does nothing.
--no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
-q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --sftp-ask-password Allow asking for SFTP password when needed.
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
@@ -137,11 +153,12 @@ rclone version [flags]
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.40")
+ -v, --verbose count Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 23-Dec-2017
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.40
+
+###### Auto generated by spf13/cobra on 19-Mar-2018
diff --git a/docs/layouts/chrome/navbar.html b/docs/layouts/chrome/navbar.html
index 19ddb4d3d..3bbbe8a84 100644
--- a/docs/layouts/chrome/navbar.html
+++ b/docs/layouts/chrome/navbar.html
@@ -43,6 +43,10 @@
rclone lsd
rclone delete
rclone size
+ rclone mount
+ rclone ncdu
+ rclone cat
+ rclone rcat
...and the rest
diff --git a/docs/layouts/partials/version.html b/docs/layouts/partials/version.html
index fe40bd64f..7eb7a1fe1 100644
--- a/docs/layouts/partials/version.html
+++ b/docs/layouts/partials/version.html
@@ -1 +1 @@
-v1.39
\ No newline at end of file
+v1.40
\ No newline at end of file
diff --git a/fs/version.go b/fs/version.go
index 173ef2b66..f3992cb1e 100644
--- a/fs/version.go
+++ b/fs/version.go
@@ -1,4 +1,4 @@
package fs
// Version of rclone
-var Version = "v1.39-DEV"
+var Version = "v1.40"
diff --git a/rclone.1 b/rclone.1
index 5472e4a4c..1f0c13509 100644
--- a/rclone.1
+++ b/rclone.1
@@ -1,7 +1,7 @@
.\"t
-.\" Automatically generated by Pandoc 1.17.2
+.\" Automatically generated by Pandoc 1.19.2.1
.\"
-.TH "rclone" "1" "Dec 23, 2017" "User Manual" ""
+.TH "rclone" "1" "Mar 19, 2018" "User Manual" ""
.hy
.SH Rclone
.PP
@@ -36,6 +36,8 @@ HTTP
.IP \[bu] 2
Hubic
.IP \[bu] 2
+IBM COS S3
+.IP \[bu] 2
Memset Memstore
.IP \[bu] 2
Microsoft Azure Blob Storage
@@ -128,7 +130,7 @@ See the Usage section (https://rclone.org/docs/) of the docs for how to
use rclone, or run \f[C]rclone\ \-h\f[].
.SS Script installation
.PP
-To install rclone on Linux/MacOs/BSD systems, run:
+To install rclone on Linux/macOS/BSD systems, run:
.IP
.nf
\f[C]
@@ -291,6 +293,8 @@ rclone\ config
.PP
See the following for detailed instructions for
.IP \[bu] 2
+Alias (https://rclone.org/alias/)
+.IP \[bu] 2
Amazon Drive (https://rclone.org/amazonclouddrive/)
.IP \[bu] 2
Amazon S3 (https://rclone.org/s3/)
@@ -442,9 +446,6 @@ had written a trailing / \- meaning "copy the contents of this
directory".
This applies to all commands and whether you are talking about the
source or destination.
-.PP
-See the \f[C]\-\-no\-traverse\f[] option for controlling whether rclone
-lists the destination directory or not.
.IP
.nf
\f[C]
@@ -673,10 +674,36 @@ rclone\ check\ source:path\ dest:path\ [flags]
.fi
.SS rclone ls
.PP
-List all the objects in the path with size and path.
+List the objects in the path with size and path.
.SS Synopsis
.PP
-List all the objects in the path with size and path.
+Lists the objects in the source path to standard output in a human
+readable format with size and path.
+Recurses by default.
+.PP
+Any of the filtering options can be applied to this command.
+.PP
+There are several related list commands
+.IP \[bu] 2
+\f[C]ls\f[] to list size and path of objects only
+.IP \[bu] 2
+\f[C]lsl\f[] to list modification time, size and path of objects only
+.IP \[bu] 2
+\f[C]lsd\f[] to list directories only
+.IP \[bu] 2
+\f[C]lsf\f[] to list objects and directories in easy to parse format
+.IP \[bu] 2
+\f[C]lsjson\f[] to list objects and directories in JSON format
+.PP
+\f[C]ls\f[],\f[C]lsl\f[],\f[C]lsd\f[] are designed to be human readable.
+\f[C]lsf\f[] is designed to be human and machine readable.
+\f[C]lsjson\f[] is designed to be machine readable.
+.PP
+Note that \f[C]ls\f[],\f[C]lsl\f[],\f[C]lsd\f[] all recurse by default
+\- use "\-\-max\-depth 1" to stop the recursion.
+.PP
+The other list commands \f[C]lsf\f[],\f[C]lsjson\f[] do not recurse by
+default \- use "\-R" to make them recurse.
.IP
.nf
\f[C]
@@ -695,7 +722,32 @@ rclone\ ls\ remote:path\ [flags]
List all directories/containers/buckets in the path.
.SS Synopsis
.PP
-List all directories/containers/buckets in the path.
+Lists the directories in the source path to standard output.
+Recurses by default.
+.PP
+Any of the filtering options can be applied to this command.
+.PP
+There are several related list commands
+.IP \[bu] 2
+\f[C]ls\f[] to list size and path of objects only
+.IP \[bu] 2
+\f[C]lsl\f[] to list modification time, size and path of objects only
+.IP \[bu] 2
+\f[C]lsd\f[] to list directories only
+.IP \[bu] 2
+\f[C]lsf\f[] to list objects and directories in easy to parse format
+.IP \[bu] 2
+\f[C]lsjson\f[] to list objects and directories in JSON format
+.PP
+\f[C]ls\f[],\f[C]lsl\f[],\f[C]lsd\f[] are designed to be human readable.
+\f[C]lsf\f[] is designed to be human and machine readable.
+\f[C]lsjson\f[] is designed to be machine readable.
+.PP
+Note that \f[C]ls\f[],\f[C]lsl\f[],\f[C]lsd\f[] all recurse by default
+\- use "\-\-max\-depth 1" to stop the recursion.
+.PP
+The other list commands \f[C]lsf\f[],\f[C]lsjson\f[] do not recurse by
+default \- use "\-R" to make them recurse.
.IP
.nf
\f[C]
@@ -711,10 +763,36 @@ rclone\ lsd\ remote:path\ [flags]
.fi
.SS rclone lsl
.PP
-List all the objects path with modification time, size and path.
+List the objects in path with modification time, size and path.
.SS Synopsis
.PP
-List all the objects path with modification time, size and path.
+Lists the objects in the source path to standard output in a human
+readable format with modification time, size and path.
+Recurses by default.
+.PP
+Any of the filtering options can be applied to this command.
+.PP
+There are several related list commands
+.IP \[bu] 2
+\f[C]ls\f[] to list size and path of objects only
+.IP \[bu] 2
+\f[C]lsl\f[] to list modification time, size and path of objects only
+.IP \[bu] 2
+\f[C]lsd\f[] to list directories only
+.IP \[bu] 2
+\f[C]lsf\f[] to list objects and directories in easy to parse format
+.IP \[bu] 2
+\f[C]lsjson\f[] to list objects and directories in JSON format
+.PP
+\f[C]ls\f[],\f[C]lsl\f[],\f[C]lsd\f[] are designed to be human readable.
+\f[C]lsf\f[] is designed to be human and machine readable.
+\f[C]lsjson\f[] is designed to be machine readable.
+.PP
+Note that \f[C]ls\f[],\f[C]lsl\f[],\f[C]lsd\f[] all recurse by default
+\- use "\-\-max\-depth 1" to stop the recursion.
+.PP
+The other list commands \f[C]lsf\f[],\f[C]lsjson\f[] do not recurse by
+default \- use "\-R" to make them recurse.
.IP
.nf
\f[C]
@@ -1362,11 +1440,15 @@ rclone cryptdecode returns unencrypted file names when provided with a
list of encrypted file names.
List limit is 10 items.
.PP
+If you supply the \-\-reverse flag, it will return encrypted file names.
+.PP
use it like this
.IP
.nf
\f[C]
rclone\ cryptdecode\ encryptedremote:\ encryptedfilename1\ encryptedfilename2
+
+rclone\ cryptdecode\ \-\-reverse\ encryptedremote:\ filename1\ filename2
\f[]
.fi
.IP
@@ -1379,7 +1461,8 @@ rclone\ cryptdecode\ encryptedremote:\ encryptedfilename\ [flags]
.IP
.nf
\f[C]
-\ \ \-h,\ \-\-help\ \ \ help\ for\ cryptdecode
+\ \ \-h,\ \-\-help\ \ \ \ \ \ help\ for\ cryptdecode
+\ \ \ \ \ \ \-\-reverse\ \ \ Reverse\ cryptdecode,\ encrypts\ filenames
\f[]
.fi
.SS rclone dbhashsum
@@ -1540,6 +1623,98 @@ rclone\ listremotes\ [flags]
\ \ \-l,\ \-\-long\ \ \ Show\ the\ type\ as\ well\ as\ names.
\f[]
.fi
+.SS rclone lsf
+.PP
+List directories and objects in remote:path formatted for parsing
+.SS Synopsis
+.PP
+List the contents of the source path (directories and objects) to
+standard output in a form which is easy to parse by scripts.
+By default this will just be the names of the objects and directories,
+one per line.
+The directories will have a / suffix.
+.PP
+Use the \-\-format option to control what gets listed.
+By default this is just the path, but you can use these parameters to
+control the output:
+.IP
+.nf
+\f[C]
+p\ \-\ path
+s\ \-\ size
+t\ \-\ modification\ time
+h\ \-\ hash
+\f[]
+.fi
+.PP
+So if you wanted the path, size and modification time, you would use
+\-\-format "pst", or maybe \-\-format "tsp" to put the path last.
+.PP
+If you specify "h" in the format you will get the MD5 hash by default,
+use the "\-\-hash" flag to change which hash you want.
+Note that this can be returned as an empty string if it isn\[aq]t
+available on the object (and for directories), "ERROR" if there was an
+error reading it from the object and "UNSUPPORTED" if that object does
+not support that hash type.
+.PP
+For example to emulate the md5sum command you can use
+.IP
+.nf
+\f[C]
+rclone\ lsf\ \-R\ \-\-hash\ MD5\ \-\-format\ hp\ \-\-separator\ "\ \ "\ \-\-files\-only\ .
+\f[]
+.fi
+.PP
+(Though "rclone md5sum ." is an easier way of typing this.)
+.PP
+By default the separator is ";".
+This can be changed with the \-\-separator flag.
+Note that separators aren\[aq]t escaped in the path so putting it last
+is a good strategy.
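Since the default separator is ";" and it goes unescaped in paths, lsf output splits cleanly in a shell loop. A minimal sketch (the sample lines below are hypothetical stand-ins for real `rclone lsf --format "ps"` output):

```shell
# Parse lsf-style "path;size" lines with the default ";" separator.
# The sample input is hypothetical; pipe real `rclone lsf --format "ps"`
# output in instead.
printf 'one.txt;6\nsub/two.txt;200\n' |
while IFS=';' read -r path size; do
  echo "path=$path size=$size"
done
# prints:
# path=one.txt size=6
# path=sub/two.txt size=200
```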
+.PP
+Any of the filtering options can be applied to this command.
+.PP
+There are several related list commands
+.IP \[bu] 2
+\f[C]ls\f[] to list size and path of objects only
+.IP \[bu] 2
+\f[C]lsl\f[] to list modification time, size and path of objects only
+.IP \[bu] 2
+\f[C]lsd\f[] to list directories only
+.IP \[bu] 2
+\f[C]lsf\f[] to list objects and directories in easy to parse format
+.IP \[bu] 2
+\f[C]lsjson\f[] to list objects and directories in JSON format
+.PP
+\f[C]ls\f[],\f[C]lsl\f[],\f[C]lsd\f[] are designed to be human readable.
+\f[C]lsf\f[] is designed to be human and machine readable.
+\f[C]lsjson\f[] is designed to be machine readable.
+.PP
+Note that \f[C]ls\f[],\f[C]lsl\f[],\f[C]lsd\f[] all recurse by default
+\- use "\-\-max\-depth 1" to stop the recursion.
+.PP
+The other list commands \f[C]lsf\f[],\f[C]lsjson\f[] do not recurse by
+default \- use "\-R" to make them recurse.
+.IP
+.nf
+\f[C]
+rclone\ lsf\ remote:path\ [flags]
+\f[]
+.fi
+.SS Options
+.IP
+.nf
+\f[C]
+\ \ \-d,\ \-\-dir\-slash\ \ \ \ \ \ \ \ \ \ Append\ a\ slash\ to\ directory\ names.\ (default\ true)
+\ \ \ \ \ \ \-\-dirs\-only\ \ \ \ \ \ \ \ \ \ Only\ list\ directories.
+\ \ \ \ \ \ \-\-files\-only\ \ \ \ \ \ \ \ \ Only\ list\ files.
+\ \ \-F,\ \-\-format\ string\ \ \ \ \ \ Output\ format\ \-\ see\ \ help\ for\ details\ (default\ "p")
+\ \ \ \ \ \ \-\-hash\ h\ \ \ \ \ \ \ \ \ \ \ \ \ Use\ this\ hash\ when\ h\ is\ used\ in\ the\ format\ MD5|SHA\-1|DropboxHash\ (default\ "MD5")
+\ \ \-h,\ \-\-help\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ help\ for\ lsf
+\ \ \-R,\ \-\-recursive\ \ \ \ \ \ \ \ \ \ Recurse\ into\ the\ listing.
+\ \ \-s,\ \-\-separator\ string\ \ \ Separator\ for\ the\ items\ in\ the\ format.\ (default\ ";")
+\f[]
+.fi
.SS rclone lsjson
.PP
List directories and objects in the path in JSON format.
@@ -1553,18 +1728,51 @@ The output is an array of Items, where each Item looks like this
"MD5" : "b1946ac92492d2347c6235b4d2611184", "DropboxHash" :
"ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc" },
"IsDir" : false, "ModTime" : "2017\-05\-31T16:15:57.034468261+01:00",
-"Name" : "file.txt", "Path" : "full/path/goes/here/file.txt", "Size" : 6
-}
+"Name" : "file.txt", "Encrypted" : "v0qpsdq8anpci8n929v3uu9338", "Path"
+: "full/path/goes/here/file.txt", "Size" : 6 }
.PP
-If \-\-hash is not specified the the Hashes property won\[aq]t be
-emitted.
+If \-\-hash is not specified the Hashes property won\[aq]t be emitted.
.PP
If \-\-no\-modtime is specified then ModTime will be blank.
.PP
+If \-\-encrypted is not specified the Encrypted property won\[aq]t be
+emitted.
+.PP
+The Path field will only show folders below the remote path being
+listed.
+If "remote:path" contains the file "subfolder/file.txt", the Path for
+"file.txt" will be "subfolder/file.txt", not
+"remote:path/subfolder/file.txt".
+When used without \-\-recursive the Path will always be the same as
+Name.
+.PP
The time is in RFC3339 format with nanosecond precision.
.PP
The whole output can be processed as a JSON blob, or alternatively it
can be processed line by line as each item is written one to a line.
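Because each item is written on its own line, the output can be filtered with ordinary text tools. A rough sketch extracting the Path field (the sample objects are hypothetical; pipe real `rclone lsjson remote:path` output in instead, and prefer a proper JSON parser such as jq when available):

```shell
# Extract the Path field from line-delimited lsjson-style output.
# The sample objects are hypothetical stand-ins for rclone lsjson output;
# with jq installed, `jq -r .Path` is the more robust equivalent.
printf '%s\n' \
  '{"Path":"one.txt","Size":6,"IsDir":false}' \
  '{"Path":"sub","Size":-1,"IsDir":true}' |
sed -n 's/.*"Path":"\([^"]*\)".*/\1/p'
# prints:
# one.txt
# sub
```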
+.PP
+Any of the filtering options can be applied to this command.
+.PP
+There are several related list commands
+.IP \[bu] 2
+\f[C]ls\f[] to list size and path of objects only
+.IP \[bu] 2
+\f[C]lsl\f[] to list modification time, size and path of objects only
+.IP \[bu] 2
+\f[C]lsd\f[] to list directories only
+.IP \[bu] 2
+\f[C]lsf\f[] to list objects and directories in easy to parse format
+.IP \[bu] 2
+\f[C]lsjson\f[] to list objects and directories in JSON format
+.PP
+\f[C]ls\f[],\f[C]lsl\f[],\f[C]lsd\f[] are designed to be human readable.
+\f[C]lsf\f[] is designed to be human and machine readable.
+\f[C]lsjson\f[] is designed to be machine readable.
+.PP
+Note that \f[C]ls\f[],\f[C]lsl\f[],\f[C]lsd\f[] all recurse by default
+\- use "\-\-max\-depth 1" to stop the recursion.
+.PP
+The other list commands \f[C]lsf\f[],\f[C]lsjson\f[] do not recurse by
+default \- use "\-R" to make them recurse.
.IP
.nf
\f[C]
@@ -1575,6 +1783,7 @@ rclone\ lsjson\ remote:path\ [flags]
.IP
.nf
\f[C]
+\ \ \-M,\ \-\-encrypted\ \ \ \ Show\ the\ encrypted\ names.
\ \ \ \ \ \ \-\-hash\ \ \ \ \ \ \ \ \ Include\ hashes\ in\ the\ output\ (may\ take\ longer).
\ \ \-h,\ \-\-help\ \ \ \ \ \ \ \ \ help\ for\ lsjson
\ \ \ \ \ \ \-\-no\-modtime\ \ \ Don\[aq]t\ read\ the\ modification\ time\ (can\ speed\ things\ up).
@@ -1651,12 +1860,16 @@ prompt.
It is also possible to start a drive from the SYSTEM account (using the
WinFsp.Launcher
infrastructure (https://github.com/billziss-gh/winfsp/wiki/WinFsp-Service-Architecture))
-which creates drives accessible for everyone on the system.
+which creates drives accessible for everyone on the system or
+alternatively using the nssm service manager (https://nssm.cc/usage).
.SS Limitations
.PP
-This can only write files seqentially, it can only seek when reading.
+Without the use of "\-\-vfs\-cache\-mode" this can only write files
+sequentially; it can only seek when reading.
This means that many applications won\[aq]t work with their files on an
-rclone mount.
+rclone mount without "\-\-vfs\-cache\-mode writes" or
+"\-\-vfs\-cache\-mode full".
+See the File Caching (#file-caching) section for more info.
.PP
The bucket based remotes (eg Swift, S3, Google Compute Storage, B2,
Hubic) won\[aq]t work from the root \- you will need to specify a
@@ -1675,8 +1888,21 @@ systems are a long way from 100% reliable.
The rclone sync/copy commands cope with this with lots of retries.
However rclone mount can\[aq]t use retries in the same way without
making local copies of the uploads.
-This might happen in the future, but for the moment rclone mount
-won\[aq]t do that, so will be less reliable than the rclone command.
+Look at the \f[B]EXPERIMENTAL\f[] file caching (#file-caching) for
+solutions to make rclone mount more reliable.
+.SS Attribute caching
+.PP
+You can use the flag \-\-attr\-timeout to set the time the kernel caches
+the attributes (size, modification time etc) for directory entries.
+.PP
+The default is 0s \- no caching \- which is recommended for filesystems
+which can change outside the control of the kernel.
+.PP
+If you set it higher (\[aq]1s\[aq] or \[aq]1m\[aq] say) then the kernel
+will call back to rclone less often making it more efficient, however
+there may be strange effects when files change on the remote.
+.PP
+This is the same as setting the attr_timeout option in mount.fuse.
.SS Filters
.PP
Note that all the rclone filters can be used to select a subset of the
@@ -1709,13 +1935,30 @@ like this:
kill\ \-SIGHUP\ $(pidof\ rclone)
\f[]
.fi
+.PP
+If you configure rclone with a remote control (/rc) then you can use
+rclone rc to flush the whole directory cache:
+.IP
+.nf
+\f[C]
+rclone\ rc\ vfs/forget
+\f[]
+.fi
+.PP
+Or individual files or directories:
+.IP
+.nf
+\f[C]
+rclone\ rc\ vfs/forget\ file=path/to/file\ dir=path/to/dir
+\f[]
+.fi
.SS File Caching
.PP
\f[B]NB\f[] File caching is \f[B]EXPERIMENTAL\f[] \- use with care!
.PP
These flags control the VFS file caching options.
-The VFS layer is used by rclone mount to make a cloud storage systm work
-more like a normal file system.
+The VFS layer is used by rclone mount to make a cloud storage system
+work more like a normal file system.
.PP
You\[aq]ll need to enable VFS caching if you want, for example, to read
and write simultaneously to a file.
@@ -1726,7 +1969,7 @@ may find that you need one or the other or both.
.IP
.nf
\f[C]
-\-\-vfs\-cache\-dir\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Directory\ rclone\ will\ use\ for\ caching.
+\-\-cache\-dir\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Directory\ rclone\ will\ use\ for\ caching.
\-\-vfs\-cache\-max\-age\ duration\ \ \ \ \ \ \ \ \ Max\ age\ of\ objects\ in\ the\ cache.\ (default\ 1h0m0s)
\-\-vfs\-cache\-mode\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ Cache\ mode\ off|minimal|writes|full\ (default\ "off")
\-\-vfs\-cache\-poll\-interval\ duration\ \ \ Interval\ to\ poll\ the\ cache\ for\ stale\ objects.\ (default\ 1m0s)
@@ -1801,7 +2044,7 @@ first.
.PP
This may be appropriate for your needs, or you may prefer to look at the
cache backend which does a much more sophisticated job of caching,
-including caching directory heirachies and chunks of files.q
+including caching directory hierarchies and chunks of files.
.PP
In this mode, unlike the others, when a file is written to the disk, it
will be kept on the disk after it is written to the remote.
@@ -1825,6 +2068,8 @@ rclone\ mount\ remote:path\ /path/to/mountpoint\ [flags]
\ \ \ \ \ \ \-\-allow\-non\-empty\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Allow\ mounting\ over\ a\ non\-empty\ directory.
\ \ \ \ \ \ \-\-allow\-other\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Allow\ access\ to\ other\ users.
\ \ \ \ \ \ \-\-allow\-root\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Allow\ access\ to\ root\ user.
+\ \ \ \ \ \ \-\-attr\-timeout\ duration\ \ \ \ \ \ \ \ \ \ \ \ \ \ Time\ for\ which\ file/directory\ attributes\ are\ cached.
+\ \ \ \ \ \ \-\-daemon\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Run\ mount\ as\ a\ daemon\ (background\ mode).
\ \ \ \ \ \ \-\-debug\-fuse\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Debug\ the\ FUSE\ internals\ \-\ needs\ \-v.
\ \ \ \ \ \ \-\-default\-permissions\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Makes\ kernel\ enforce\ access\ control\ based\ on\ the\ file\ mode.
\ \ \ \ \ \ \-\-dir\-cache\-time\ duration\ \ \ \ \ \ \ \ \ \ \ \ Time\ to\ cache\ directory\ entries\ for.\ (default\ 5m0s)
@@ -1966,6 +2211,34 @@ rclone\ obscure\ password\ [flags]
\ \ \-h,\ \-\-help\ \ \ help\ for\ obscure
\f[]
.fi
+.SS rclone rc
+.PP
+Run a command against a running rclone.
+.SS Synopsis
+.PP
+This runs a command against a running rclone.
+By default it will use that specified in the \-\-rc\-addr flag.
+.PP
+Arguments should be passed in as parameter=value.
+.PP
+The result will be returned as a JSON object by default.
+.PP
+Use "rclone rc list" to see a list of all possible commands.
+.IP
+.nf
+\f[C]
+rclone\ rc\ commands\ parameter\ [flags]
+\f[]
+.fi
+.SS Options
+.IP
+.nf
+\f[C]
+\ \ \-h,\ \-\-help\ \ \ \ \ \ \ \ \ help\ for\ rc
+\ \ \ \ \ \ \-\-no\-output\ \ \ \ If\ set\ don\[aq]t\ output\ the\ JSON\ result.
+\ \ \ \ \ \ \-\-url\ string\ \ \ URL\ to\ connect\ to\ rclone\ remote\ control.\ (default\ "http://localhost:5572/")
+\f[]
+.fi
.SS rclone rcat
.PP
Copies standard input to file on remote.
@@ -2081,11 +2354,6 @@ HTTP.
This can be viewed in a web browser or you can make a remote of type
http read from it.
.PP
-Use \-\-addr to specify which IP address and port the server should
-listen on, eg \-\-addr 1.2.3.4:8000 or \-\-addr :8080 to listen to all
-IPs.
-By default it only listens on localhost.
-.PP
You can use the filter flags (eg \-\-include, \-\-exclude) to control
what is served.
.PP
@@ -2094,6 +2362,59 @@ Use \-v to see access logs.
.PP
\-\-bwlimit will be respected for file transfers.
Use \-\-stats to control the stats printing.
+.SS Server options
+.PP
+Use \-\-addr to specify which IP address and port the server should
+listen on, eg \-\-addr 1.2.3.4:8000 or \-\-addr :8080 to listen to all
+IPs.
+By default it only listens on localhost.
+.PP
+If you set \-\-addr to listen on a public or LAN accessible IP address
+then using Authentication is advised \- see the next section for info.
+.PP
+\-\-server\-read\-timeout and \-\-server\-write\-timeout can be used to
+control the timeouts on the server.
+Note that this is the total time for a transfer.
+.PP
+\-\-max\-header\-bytes controls the maximum number of bytes the server
+will accept in the HTTP header.
+.SS Authentication
+.PP
+By default this will serve files without needing a login.
+.PP
+You can either use an htpasswd file which can take lots of users, or set
+a single username and password with the \-\-user and \-\-pass flags.
+.PP
+Use \-\-htpasswd /path/to/htpasswd to provide an htpasswd file.
+This is in standard apache format and supports MD5, SHA1 and BCrypt for
+basic authentication.
+Bcrypt is recommended.
+.PP
+To create an htpasswd file:
+.IP
+.nf
+\f[C]
+touch\ htpasswd
+htpasswd\ \-B\ htpasswd\ user
+htpasswd\ \-B\ htpasswd\ anotherUser
+\f[]
+.fi
+.PP
+The password file can be updated while rclone is running.
+.PP
+Use \-\-realm to set the authentication realm.
+.SS SSL/TLS
+.PP
+By default this will serve over http.
+If you want you can serve over https.
+You will need to supply the \-\-cert and \-\-key flags.
+If you wish to do client side certificate validation then you will need
+to supply \-\-client\-ca also.
+.PP
+\-\-cert should be either a PEM encoded certificate or a concatenation
+of that with the CA certificate.
+\-\-key should be the PEM encoded private key and \-\-client\-ca should
+be the PEM encoded client certificate authority certificate.
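For testing, a self-signed certificate and key in the PEM format these flags expect can be generated with openssl (assuming openssl is installed; the file names here are arbitrary):

```shell
# Generate a throwaway self-signed certificate/key pair for testing
# the --cert/--key flags. Do not use a self-signed cert in production.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=localhost" \
  -keyout key.pem -out cert.pem
# Both files are PEM encoded:
head -1 cert.pem   # -----BEGIN CERTIFICATE-----
```

You could then start the server with, for example, `rclone serve http --cert cert.pem --key key.pem remote:path` to serve over https.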
.SS Directory Cache
.PP
Using the \f[C]\-\-dir\-cache\-time\f[] flag, you can set how long a
@@ -2114,13 +2435,30 @@ like this:
kill\ \-SIGHUP\ $(pidof\ rclone)
\f[]
.fi
+.PP
+If you configure rclone with a remote control (/rc) then you can use
+rclone rc to flush the whole directory cache:
+.IP
+.nf
+\f[C]
+rclone\ rc\ vfs/forget
+\f[]
+.fi
+.PP
+Or individual files or directories:
+.IP
+.nf
+\f[C]
+rclone\ rc\ vfs/forget\ file=path/to/file\ dir=path/to/dir
+\f[]
+.fi
.SS File Caching
.PP
\f[B]NB\f[] File caching is \f[B]EXPERIMENTAL\f[] \- use with care!
.PP
These flags control the VFS file caching options.
-The VFS layer is used by rclone mount to make a cloud storage systm work
-more like a normal file system.
+The VFS layer is used by rclone mount to make a cloud storage system
+work more like a normal file system.
.PP
You\[aq]ll need to enable VFS caching if you want, for example, to read
and write simultaneously to a file.
@@ -2131,7 +2469,7 @@ may find that you need one or the other or both.
.IP
.nf
\f[C]
-\-\-vfs\-cache\-dir\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Directory\ rclone\ will\ use\ for\ caching.
+\-\-cache\-dir\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Directory\ rclone\ will\ use\ for\ caching.
\-\-vfs\-cache\-max\-age\ duration\ \ \ \ \ \ \ \ \ Max\ age\ of\ objects\ in\ the\ cache.\ (default\ 1h0m0s)
\-\-vfs\-cache\-mode\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ Cache\ mode\ off|minimal|writes|full\ (default\ "off")
\-\-vfs\-cache\-poll\-interval\ duration\ \ \ Interval\ to\ poll\ the\ cache\ for\ stale\ objects.\ (default\ 1m0s)
@@ -2206,7 +2544,7 @@ first.
.PP
This may be appropriate for your needs, or you may prefer to look at the
cache backend which does a much more sophisticated job of caching,
-including caching directory heirachies and chunks of files.q
+including caching directory hierarchies and chunks of files.
.PP
In this mode, unlike the others, when a file is written to the disk, it
will be kept on the disk after it is written to the remote.
@@ -2227,22 +2565,198 @@ rclone\ serve\ http\ remote:path\ [flags]
.IP
.nf
\f[C]
-\ \ \ \ \ \ \-\-addr\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ IPaddress:Port\ to\ bind\ server\ to.\ (default\ "localhost:8080")
+\ \ \ \ \ \ \-\-addr\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ IPaddress:Port\ or\ :Port\ to\ bind\ server\ to.\ (default\ "localhost:8080")
+\ \ \ \ \ \ \-\-cert\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ SSL\ PEM\ key\ (concatenation\ of\ certificate\ and\ CA\ certificate)
+\ \ \ \ \ \ \-\-client\-ca\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Client\ certificate\ authority\ to\ verify\ clients\ with
\ \ \ \ \ \ \-\-dir\-cache\-time\ duration\ \ \ \ \ \ \ \ \ \ \ \ Time\ to\ cache\ directory\ entries\ for.\ (default\ 5m0s)
\ \ \ \ \ \ \-\-gid\ uint32\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ the\ gid\ field\ set\ by\ the\ filesystem.\ (default\ 502)
\ \ \-h,\ \-\-help\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ help\ for\ http
+\ \ \ \ \ \ \-\-htpasswd\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ htpasswd\ file\ \-\ if\ not\ provided\ no\ authentication\ is\ done
+\ \ \ \ \ \ \-\-key\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ SSL\ PEM\ Private\ key
+\ \ \ \ \ \ \-\-max\-header\-bytes\ int\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Maximum\ size\ of\ request\ header\ (default\ 4096)
\ \ \ \ \ \ \-\-no\-checksum\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ compare\ checksums\ on\ up/download.
\ \ \ \ \ \ \-\-no\-modtime\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ read/write\ the\ modification\ time\ (can\ speed\ things\ up).
\ \ \ \ \ \ \-\-no\-seek\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ allow\ seeking\ in\ files.
+\ \ \ \ \ \ \-\-pass\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Password\ for\ authentication.
\ \ \ \ \ \ \-\-poll\-interval\ duration\ \ \ \ \ \ \ \ \ \ \ \ \ Time\ to\ wait\ between\ polling\ for\ changes.\ Must\ be\ smaller\ than\ dir\-cache\-time.\ Only\ on\ supported\ remotes.\ Set\ to\ 0\ to\ disable.\ (default\ 1m0s)
\ \ \ \ \ \ \-\-read\-only\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Mount\ read\-only.
+\ \ \ \ \ \ \-\-realm\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ realm\ for\ authentication\ (default\ "rclone")
+\ \ \ \ \ \ \-\-server\-read\-timeout\ duration\ \ \ \ \ \ \ Timeout\ for\ server\ reading\ data\ (default\ 1h0m0s)
+\ \ \ \ \ \ \-\-server\-write\-timeout\ duration\ \ \ \ \ \ Timeout\ for\ server\ writing\ data\ (default\ 1h0m0s)
\ \ \ \ \ \ \-\-uid\ uint32\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ the\ uid\ field\ set\ by\ the\ filesystem.\ (default\ 502)
\ \ \ \ \ \ \-\-umask\ int\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ the\ permission\ bits\ set\ by\ the\ filesystem.\ (default\ 2)
+\ \ \ \ \ \ \-\-user\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ User\ name\ for\ authentication.
\ \ \ \ \ \ \-\-vfs\-cache\-max\-age\ duration\ \ \ \ \ \ \ \ \ Max\ age\ of\ objects\ in\ the\ cache.\ (default\ 1h0m0s)
\ \ \ \ \ \ \-\-vfs\-cache\-mode\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ Cache\ mode\ off|minimal|writes|full\ (default\ "off")
\ \ \ \ \ \ \-\-vfs\-cache\-poll\-interval\ duration\ \ \ Interval\ to\ poll\ the\ cache\ for\ stale\ objects.\ (default\ 1m0s)
\f[]
.fi
+.SS rclone serve restic
+.PP
+Serve the remote for restic\[aq]s REST API.
+.SS Synopsis
+.PP
+rclone serve restic implements restic\[aq]s REST backend API over HTTP.
+This allows restic to use rclone as a data storage mechanism for cloud
+providers that restic does not support directly.
+.PP
+Restic (https://restic.net/) is a command line program for doing
+backups.
+.PP
+The server will log errors.
+Use \-v to see access logs.
+.PP
+\-\-bwlimit will be respected for file transfers.
+Use \-\-stats to control the stats printing.
+.SS Setting up rclone for use by restic
+.PP
+First set up a remote for your chosen cloud provider (/docs/#configure).
+.PP
+Once you have set up the remote, check it is working with, for example
+"rclone lsd remote:".
+You may have called the remote something other than "remote:" \- just
+substitute whatever you called it in the following instructions.
+.PP
+Now start the rclone restic server
+.IP
+.nf
+\f[C]
+rclone\ serve\ restic\ \-v\ remote:backup
+\f[]
+.fi
+.PP
+Where you can replace "backup" in the above by whatever path in the
+remote you wish to use.
+.PP
+By default this will serve on "localhost:8080".
+You can change this with the "\-\-addr" flag.
+.PP
+You might wish to start this server on boot.
+.SS Setting up restic to use rclone
+.PP
+Now you can follow the restic
+instructions (http://restic.readthedocs.io/en/latest/030_preparing_a_new_repo.html#rest-server)
+on setting up restic.
+.PP
+Note that you will need restic 0.8.2 or later to interoperate with
+rclone.
+.PP
+For the example above you will want to use "http://localhost:8080/" as
+the URL for the REST server.
+.PP
+For example:
+.IP
+.nf
+\f[C]
+$\ export\ RESTIC_REPOSITORY=rest:http://localhost:8080/
+$\ export\ RESTIC_PASSWORD=yourpassword
+$\ restic\ init
+created\ restic\ backend\ 8b1a4b56ae\ at\ rest:http://localhost:8080/
+
+Please\ note\ that\ knowledge\ of\ your\ password\ is\ required\ to\ access
+the\ repository.\ Losing\ your\ password\ means\ that\ your\ data\ is
+irrecoverably\ lost.
+$\ restic\ backup\ /path/to/files/to/backup
+scan\ [/path/to/files/to/backup]
+scanned\ 189\ directories,\ 312\ files\ in\ 0:00
+[0:00]\ 100.00%\ \ 38.128\ MiB\ /\ 38.128\ MiB\ \ 501\ /\ 501\ items\ \ 0\ errors\ \ ETA\ 0:00\
+duration:\ 0:00
+snapshot\ 45c8fdd8\ saved
+\f[]
+.fi
+.SS Multiple repositories
+.PP
+Note that you can use the endpoint to host multiple repositories.
+Do this by adding a directory name or path after the URL.
+Note that these \f[B]must\f[] end with /.
+Eg
+.IP
+.nf
+\f[C]
+$\ export\ RESTIC_REPOSITORY=rest:http://localhost:8080/user1repo/
+#\ backup\ user1\ stuff
+$\ export\ RESTIC_REPOSITORY=rest:http://localhost:8080/user2repo/
+#\ backup\ user2\ stuff
+\f[]
+.fi
+.SS Server options
+.PP
+Use \-\-addr to specify which IP address and port the server should
+listen on, eg \-\-addr 1.2.3.4:8000 or \-\-addr :8080 to listen to all
+IPs.
+By default it only listens on localhost.
+.PP
+If you set \-\-addr to listen on a public or LAN accessible IP address
+then using Authentication is advised \- see the next section for info.
+.PP
+\-\-server\-read\-timeout and \-\-server\-write\-timeout can be used to
+control the timeouts on the server.
+Note that this is the total time for a transfer.
+.PP
+\-\-max\-header\-bytes controls the maximum number of bytes the server
+will accept in the HTTP header.
+.SS Authentication
+.PP
+By default this will serve files without needing a login.
+.PP
+You can either use an htpasswd file which can take lots of users, or set
+a single username and password with the \-\-user and \-\-pass flags.
+.PP
+Use \-\-htpasswd /path/to/htpasswd to provide an htpasswd file.
+This is in standard apache format and supports MD5, SHA1 and BCrypt for
+basic authentication.
+Bcrypt is recommended.
+.PP
+To create an htpasswd file:
+.IP
+.nf
+\f[C]
+touch\ htpasswd
+htpasswd\ \-B\ htpasswd\ user
+htpasswd\ \-B\ htpasswd\ anotherUser
+\f[]
+.fi
+.PP
+The password file can be updated while rclone is running.
+.PP
+Use \-\-realm to set the authentication realm.
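+.PP
+For example, to require a single username and password (the
+credentials shown are placeholders):
+.IP
+.nf
+\f[C]
+rclone\ serve\ restic\ \-\-user\ alice\ \-\-pass\ secret\ remote:backup
+\f[]
+.fi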
+.SS SSL/TLS
+.PP
+By default this will serve over http.
+If you want you can serve over https.
+You will need to supply the \-\-cert and \-\-key flags.
+If you wish to do client side certificate validation then you will need
+to supply \-\-client\-ca also.
+.PP
+\-\-cert should be either a PEM encoded certificate or a concatenation
+of that with the CA certificate.
+\-\-key should be the PEM encoded private key and \-\-client\-ca should
+be the PEM encoded client certificate authority certificate.
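+.PP
+For example, to serve over https (the certificate and key file names
+are placeholders):
+.IP
+.nf
+\f[C]
+rclone\ serve\ restic\ \-\-addr\ :8443\ \-\-cert\ server.crt\ \-\-key\ server.key\ remote:backup
+\f[]
+.fi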
+.IP
+.nf
+\f[C]
+rclone\ serve\ restic\ remote:path\ [flags]
+\f[]
+.fi
+.SS Options
+.IP
+.nf
+\f[C]
+\ \ \ \ \ \ \-\-addr\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ IPaddress:Port\ or\ :Port\ to\ bind\ server\ to.\ (default\ "localhost:8080")
+\ \ \ \ \ \ \-\-cert\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ SSL\ PEM\ key\ (concatenation\ of\ certificate\ and\ CA\ certificate)
+\ \ \ \ \ \ \-\-client\-ca\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Client\ certificate\ authority\ to\ verify\ clients\ with
+\ \ \-h,\ \-\-help\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ help\ for\ restic
+\ \ \ \ \ \ \-\-htpasswd\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ htpasswd\ file\ \-\ if\ not\ provided\ no\ authentication\ is\ done
+\ \ \ \ \ \ \-\-key\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ SSL\ PEM\ Private\ key
+\ \ \ \ \ \ \-\-max\-header\-bytes\ int\ \ \ \ \ \ \ \ \ \ \ \ Maximum\ size\ of\ request\ header\ (default\ 4096)
+\ \ \ \ \ \ \-\-pass\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Password\ for\ authentication.
+\ \ \ \ \ \ \-\-realm\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ realm\ for\ authentication\ (default\ "rclone")
+\ \ \ \ \ \ \-\-server\-read\-timeout\ duration\ \ \ \ Timeout\ for\ server\ reading\ data\ (default\ 1h0m0s)
+\ \ \ \ \ \ \-\-server\-write\-timeout\ duration\ \ \ Timeout\ for\ server\ writing\ data\ (default\ 1h0m0s)
+\ \ \ \ \ \ \-\-stdio\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ run\ an\ HTTP2\ server\ on\ stdin/stdout
+\ \ \ \ \ \ \-\-user\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ User\ name\ for\ authentication.
+\f[]
+.fi
.SS rclone serve webdav
.PP
Serve remote:path over webdav.
@@ -2255,6 +2769,59 @@ webdav to read and write it.
.PP
NB at the moment each directory listing reads the start of each file
which is undesirable: see https://github.com/golang/go/issues/22577
+.SS Server options
+.PP
+Use \-\-addr to specify which IP address and port the server should
+listen on, eg \-\-addr 1.2.3.4:8000 or \-\-addr :8080 to listen to all
+IPs.
+By default it only listens on localhost.
+.PP
+If you set \-\-addr to listen on a public or LAN accessible IP address
+then using Authentication is advised \- see the next section for info.
+.PP
+\-\-server\-read\-timeout and \-\-server\-write\-timeout can be used to
+control the timeouts on the server.
+Note that this is the total time for a transfer.
+.PP
+\-\-max\-header\-bytes controls the maximum number of bytes the server
+will accept in the HTTP header.
+.SS Authentication
+.PP
+By default this will serve files without needing a login.
+.PP
+You can either use an htpasswd file which can take lots of users, or set
+a single username and password with the \-\-user and \-\-pass flags.
+.PP
+Use \-\-htpasswd /path/to/htpasswd to provide an htpasswd file.
+This is in standard apache format and supports MD5, SHA1 and BCrypt for
+basic authentication.
+Bcrypt is recommended.
+.PP
+To create an htpasswd file:
+.IP
+.nf
+\f[C]
+touch\ htpasswd
+htpasswd\ \-B\ htpasswd\ user
+htpasswd\ \-B\ htpasswd\ anotherUser
+\f[]
+.fi
+.PP
+The password file can be updated while rclone is running.
+.PP
+Use \-\-realm to set the authentication realm.
+.SS SSL/TLS
+.PP
+By default this will serve over http.
+If you want you can serve over https.
+You will need to supply the \-\-cert and \-\-key flags.
+If you wish to do client side certificate validation then you will need
+to supply \-\-client\-ca also.
+.PP
+\-\-cert should be either a PEM encoded certificate or a concatenation
+of that with the CA certificate.
+\-\-key should be the PEM encoded private key and \-\-client\-ca should
+be the PEM encoded client certificate authority certificate.
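+.PP
+For example, to serve over https with basic authentication (the file
+names and credentials are placeholders):
+.IP
+.nf
+\f[C]
+rclone\ serve\ webdav\ \-\-cert\ server.crt\ \-\-key\ server.key\ \-\-user\ alice\ \-\-pass\ secret\ remote:path
+\f[]
+.fi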
.SS Directory Cache
.PP
Using the \f[C]\-\-dir\-cache\-time\f[] flag, you can set how long a
@@ -2275,13 +2842,30 @@ like this:
kill\ \-SIGHUP\ $(pidof\ rclone)
\f[]
.fi
+.PP
+If you configure rclone with a remote control (/rc) then you can use
+rclone rc to flush the whole directory cache:
+.IP
+.nf
+\f[C]
+rclone\ rc\ vfs/forget
+\f[]
+.fi
+.PP
+Or individual files or directories:
+.IP
+.nf
+\f[C]
+rclone\ rc\ vfs/forget\ file=path/to/file\ dir=path/to/dir
+\f[]
+.fi
.SS File Caching
.PP
\f[B]NB\f[] File caching is \f[B]EXPERIMENTAL\f[] \- use with care!
.PP
These flags control the VFS file caching options.
-The VFS layer is used by rclone mount to make a cloud storage systm work
-more like a normal file system.
+The VFS layer is used by rclone mount to make a cloud storage system
+work more like a normal file system.
.PP
You\[aq]ll need to enable VFS caching if you want, for example, to read
and write simultaneously to a file.
@@ -2292,7 +2876,7 @@ may find that you need one or the other or both.
.IP
.nf
\f[C]
-\-\-vfs\-cache\-dir\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Directory\ rclone\ will\ use\ for\ caching.
+\-\-cache\-dir\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Directory\ rclone\ will\ use\ for\ caching.
\-\-vfs\-cache\-max\-age\ duration\ \ \ \ \ \ \ \ \ Max\ age\ of\ objects\ in\ the\ cache.\ (default\ 1h0m0s)
\-\-vfs\-cache\-mode\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ Cache\ mode\ off|minimal|writes|full\ (default\ "off")
\-\-vfs\-cache\-poll\-interval\ duration\ \ \ Interval\ to\ poll\ the\ cache\ for\ stale\ objects.\ (default\ 1m0s)
@@ -2367,7 +2951,7 @@ first.
.PP
This may be appropriate for your needs, or you may prefer to look at the
cache backend which does a much more sophisticated job of caching,
-including caching directory heirachies and chunks of files.q
+including caching directory hierarchies and chunks of files.
.PP
In this mode, unlike the others, when a file is written to the disk, it
will be kept on the disk after it is written to the remote.
@@ -2388,17 +2972,27 @@ rclone\ serve\ webdav\ remote:path\ [flags]
.IP
.nf
\f[C]
-\ \ \ \ \ \ \-\-addr\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ IPaddress:Port\ to\ bind\ server\ to.\ (default\ "localhost:8081")
+\ \ \ \ \ \ \-\-addr\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ IPaddress:Port\ or\ :Port\ to\ bind\ server\ to.\ (default\ "localhost:8080")
+\ \ \ \ \ \ \-\-cert\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ SSL\ PEM\ key\ (concatenation\ of\ certificate\ and\ CA\ certificate)
+\ \ \ \ \ \ \-\-client\-ca\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Client\ certificate\ authority\ to\ verify\ clients\ with
\ \ \ \ \ \ \-\-dir\-cache\-time\ duration\ \ \ \ \ \ \ \ \ \ \ \ Time\ to\ cache\ directory\ entries\ for.\ (default\ 5m0s)
\ \ \ \ \ \ \-\-gid\ uint32\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ the\ gid\ field\ set\ by\ the\ filesystem.\ (default\ 502)
\ \ \-h,\ \-\-help\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ help\ for\ webdav
+\ \ \ \ \ \ \-\-htpasswd\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ htpasswd\ file\ \-\ if\ not\ provided\ no\ authentication\ is\ done
+\ \ \ \ \ \ \-\-key\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ SSL\ PEM\ Private\ key
+\ \ \ \ \ \ \-\-max\-header\-bytes\ int\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Maximum\ size\ of\ request\ header\ (default\ 4096)
\ \ \ \ \ \ \-\-no\-checksum\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ compare\ checksums\ on\ up/download.
\ \ \ \ \ \ \-\-no\-modtime\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ read/write\ the\ modification\ time\ (can\ speed\ things\ up).
\ \ \ \ \ \ \-\-no\-seek\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ allow\ seeking\ in\ files.
+\ \ \ \ \ \ \-\-pass\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Password\ for\ authentication.
\ \ \ \ \ \ \-\-poll\-interval\ duration\ \ \ \ \ \ \ \ \ \ \ \ \ Time\ to\ wait\ between\ polling\ for\ changes.\ Must\ be\ smaller\ than\ dir\-cache\-time.\ Only\ on\ supported\ remotes.\ Set\ to\ 0\ to\ disable.\ (default\ 1m0s)
\ \ \ \ \ \ \-\-read\-only\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Mount\ read\-only.
+\ \ \ \ \ \ \-\-realm\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ realm\ for\ authentication\ (default\ "rclone")
+\ \ \ \ \ \ \-\-server\-read\-timeout\ duration\ \ \ \ \ \ \ Timeout\ for\ server\ reading\ data\ (default\ 1h0m0s)
+\ \ \ \ \ \ \-\-server\-write\-timeout\ duration\ \ \ \ \ \ Timeout\ for\ server\ writing\ data\ (default\ 1h0m0s)
\ \ \ \ \ \ \-\-uid\ uint32\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ the\ uid\ field\ set\ by\ the\ filesystem.\ (default\ 502)
\ \ \ \ \ \ \-\-umask\ int\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ the\ permission\ bits\ set\ by\ the\ filesystem.\ (default\ 2)
+\ \ \ \ \ \ \-\-user\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ User\ name\ for\ authentication.
\ \ \ \ \ \ \-\-vfs\-cache\-max\-age\ duration\ \ \ \ \ \ \ \ \ Max\ age\ of\ objects\ in\ the\ cache.\ (default\ 1h0m0s)
\ \ \ \ \ \ \-\-vfs\-cache\-mode\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ Cache\ mode\ off|minimal|writes|full\ (default\ "off")
\ \ \ \ \ \ \-\-vfs\-cache\-poll\-interval\ duration\ \ \ Interval\ to\ poll\ the\ cache\ for\ stale\ objects.\ (default\ 1m0s)
@@ -2516,7 +3110,7 @@ This is equivalent to specifying
.IP
.nf
\f[C]
-rclone\ copy\ \-\-no\-traverse\ \-\-files\-from\ /tmp/files\ remote:\ /tmp/download
+rclone\ copy\ \-\-files\-from\ /tmp/files\ remote:\ /tmp/download
\f[]
.fi
.PP
@@ -2754,6 +3348,15 @@ limiter like this:
kill\ \-SIGUSR2\ $(pidof\ rclone)
\f[]
.fi
+.PP
+If you configure rclone with a remote control (/rc) then you can
+change the bwlimit dynamically:
+.IP
+.nf
+\f[C]
+rclone\ rc\ core/bwlimit\ rate=1M
+\f[]
+.fi
.SS \-\-buffer\-size=SIZE
.PP
Use this sized buffer to speed up file transfers.
@@ -2963,6 +3566,11 @@ the value so rclone moves on to a high level retry (see the
\f[C]\-\-retries\f[] flag) quicker.
.PP
Disable low level retries with \f[C]\-\-low\-level\-retries\ 1\f[].
+.SS \-\-max\-delete=N
+.PP
+This tells rclone not to delete more than N files.
+If that limit is exceeded then a fatal error will be generated and
+rclone will stop the operation in progress.
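+.PP
+For example, to abort a sync if it would delete more than 10 files:
+.IP
+.nf
+\f[C]
+rclone\ sync\ \-\-max\-delete\ 10\ source:path\ dest:path
+\f[]
+.fi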
.SS \-\-max\-depth=N
.PP
This modifies the recursion depth for all the commands except purge.
@@ -3054,6 +3662,14 @@ won\[aq]t show at default log level \f[C]NOTICE\f[].
Use \f[C]\-\-stats\-log\-level\ NOTICE\f[] or \f[C]\-v\f[] to make them
show.
See the Logging section (#logging) for more info on log levels.
+.SS \-\-stats\-file\-name\-length integer
+.PP
+By default, the \f[C]\-\-stats\f[] output will truncate file names and
+paths longer than 40 characters.
+This is equivalent to providing
+\f[C]\-\-stats\-file\-name\-length\ 40\f[].
+Use \f[C]\-\-stats\-file\-name\-length\ 0\f[] to disable any truncation
+of file names printed by stats.
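+.PP
+For example, to show full, untruncated file names in the stats:
+.IP
+.nf
+\f[C]
+rclone\ copy\ \-\-stats\-file\-name\-length\ 0\ source:path\ dest:path
+\f[]
+.fi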
.SS \-\-stats\-log\-level string
.PP
Log level to show \f[C]\-\-stats\f[] output at.
@@ -3062,7 +3678,7 @@ This can be \f[C]DEBUG\f[], \f[C]INFO\f[], \f[C]NOTICE\f[], or
The default is \f[C]INFO\f[].
This means at the default level of logging which is \f[C]NOTICE\f[] the
stats won\[aq]t show \- if you want them to then use
-\f[C]\-stats\-log\-level\ NOTICE\f[].
+\f[C]\-\-stats\-log\-level\ NOTICE\f[].
See the Logging section (#logging) for more info on log levels.
.SS \-\-stats\-unit=bits|bytes
.PP
@@ -3149,8 +3765,7 @@ If the destination does not support server\-side copy or move, rclone
will fall back to the default behaviour and log an error level message
to the console.
.PP
-Note that \f[C]\-\-track\-renames\f[] is incompatible with
-\f[C]\-\-no\-traverse\f[] and that it uses extra memory to keep track of
+Note that \f[C]\-\-track\-renames\f[] uses extra memory to keep track of
all the rename candidates.
.PP
Note also that \f[C]\-\-track\-renames\f[] is incompatible with
@@ -3434,27 +4049,6 @@ In this mode, TLS is susceptible to man\-in\-the\-middle attacks.
This option defaults to \f[C]false\f[].
.PP
\f[B]This should be used only for testing.\f[]
-.SS \-\-no\-traverse
-.PP
-The \f[C]\-\-no\-traverse\f[] flag controls whether the destination file
-system is traversed when using the \f[C]copy\f[] or \f[C]move\f[]
-commands.
-\f[C]\-\-no\-traverse\f[] is not compatible with \f[C]sync\f[] and will
-be ignored if you supply it with \f[C]sync\f[].
-.PP
-If you are only copying a small number of files and/or have a large
-number of files on the destination then \f[C]\-\-no\-traverse\f[] will
-stop rclone listing the destination and save time.
-.PP
-However, if you are copying a large number of files, especially if you
-are doing a copy where lots of the files haven\[aq]t changed and
-won\[aq]t need copying then you shouldn\[aq]t use
-\f[C]\-\-no\-traverse\f[].
-.PP
-It can also be used to reduce the memory usage of rclone when copying \-
-\f[C]rclone\ \-\-no\-traverse\ copy\ src\ dst\f[] won\[aq]t load either
-the source or destination listings into memory so will use the minimum
-amount of memory.
.SS Filtering
.PP
For the filtering options
@@ -3486,10 +4080,20 @@ For the filtering options
\f[C]\-\-dump\ filters\f[]
.PP
See the filtering section (https://rclone.org/filtering/).
+.SS Remote control
+.PP
+For the remote control options and for instructions on how to remote
+control rclone
+.IP \[bu] 2
+\f[C]\-\-rc\f[]
+.IP \[bu] 2
+and anything starting with \f[C]\-\-rc\-\f[]
+.PP
+See the remote control section (https://rclone.org/rc/).
.SS Logging
.PP
-rclone has 4 levels of logging, \f[C]Error\f[], \f[C]Notice\f[],
-\f[C]Info\f[] and \f[C]Debug\f[].
+rclone has 4 levels of logging, \f[C]ERROR\f[], \f[C]NOTICE\f[],
+\f[C]INFO\f[] and \f[C]DEBUG\f[].
.PP
By default, rclone logs to standard error.
This means you can redirect standard error and still see the normal
@@ -4082,27 +4686,45 @@ Everything else will be excluded from the sync.
.PP
This reads a list of file names from the file passed in and
\f[B]only\f[] these files are transferred.
-The filtering rules are ignored completely if you use this option.
+The \f[B]filtering rules are ignored\f[] completely if you use this
+option.
.PP
This option can be repeated to read from more than one file.
These are read in the order that they are placed on the command line.
.PP
-Prepare a file like this \f[C]files\-from.txt\f[]
+Paths within the \f[C]\-\-files\-from\f[] file will be interpreted as
+starting with the root specified in the command.
+Leading \f[C]/\f[] characters are ignored.
+.PP
+For example, suppose you had \f[C]files\-from.txt\f[] with this content:
.IP
.nf
\f[C]
#\ comment
file1.jpg
-file2.jpg
+subdir/file2.jpg
\f[]
.fi
.PP
-Then use as \f[C]\-\-files\-from\ files\-from.txt\f[].
-This will only transfer \f[C]file1.jpg\f[] and \f[C]file2.jpg\f[]
-providing they exist.
+You could then use it like this:
+.IP
+.nf
+\f[C]
+rclone\ copy\ \-\-files\-from\ files\-from.txt\ /home/me/pics\ remote:pics
+\f[]
+.fi
.PP
-For example, let\[aq]s say you had a few files you want to back up
-regularly with these absolute paths:
+This will transfer these files only (if they exist)
+.IP
+.nf
+\f[C]
+/home/me/pics/file1.jpg\ \ \ \ \ \ \ \ →\ remote:pics/file1.jpg
+/home/me/pics/subdir/file2.jpg\ →\ remote:pics/subdir/file2.jpg
+\f[]
+.fi
+.PP
+To take a more complicated example, let\[aq]s say you had a few files
+you want to back up regularly with these absolute paths:
.IP
.nf
\f[C]
@@ -4133,7 +4755,15 @@ rclone\ copy\ \-\-files\-from\ files\-from.txt\ /home\ remote:backup
.fi
.PP
The 3 files will arrive in \f[C]remote:backup\f[] with the paths as in
-the \f[C]files\-from.txt\f[].
+the \f[C]files\-from.txt\f[] like this:
+.IP
+.nf
+\f[C]
+/home/user1/important\ →\ remote:backup/user1/important
+/home/user1/dir/file\ \ →\ remote:backup/user1/dir/file
+/home/user2/stuff\ \ \ \ \ →\ remote:backup/stuff
+\f[]
+.fi
.PP
You could of course choose \f[C]/\f[] as the root too in which case your
\f[C]files\-from.txt\f[] might look like this.
@@ -4155,7 +4785,15 @@ rclone\ copy\ \-\-files\-from\ files\-from.txt\ /\ remote:backup
.fi
.PP
In this case there will be an extra \f[C]home\f[] directory on the
-remote.
+remote:
+.IP
+.nf
+\f[C]
+/home/user1/important\ →\ remote:home/backup/user1/important
+/home/user1/dir/file\ \ →\ remote:home/backup/user1/dir/file
+/home/user2/stuff\ \ \ \ \ →\ remote:home/backup/stuff
+\f[]
+.fi
.SS \f[C]\-\-min\-size\f[] \- Don\[aq]t transfer any file smaller than
this
.PP
@@ -4289,6 +4927,275 @@ rclone\ sync\ \-\-exclude\-if\-present\ .ignore\ dir1\ remote:backup
.PP
Currently only one filename is supported, i.e.
\f[C]\-\-exclude\-if\-present\f[] should not be used multiple times.
+.SH Remote controlling rclone
+.PP
+If rclone is run with the \f[C]\-\-rc\f[] flag then it starts an http
+server which can be used to remote control rclone.
+.PP
+\f[B]NB\f[] this is experimental and everything here is subject to
+change!
+.SS Supported parameters
+.SS \-\-rc
+.PP
+Flag to start the http server to listen for remote requests.
+.SS \-\-rc\-addr=IP
+.PP
+IPaddress:Port or :Port to bind server to.
+(default "localhost:5572")
+.SS \-\-rc\-cert=KEY
+.PP
+SSL PEM key (concatenation of certificate and CA certificate)
+.SS \-\-rc\-client\-ca=PATH
+.PP
+Client certificate authority to verify clients with
+.SS \-\-rc\-htpasswd=PATH
+.PP
+htpasswd file \- if not provided no authentication is done
+.SS \-\-rc\-key=PATH
+.PP
+SSL PEM Private key
+.SS \-\-rc\-max\-header\-bytes=VALUE
+.PP
+Maximum size of request header (default 4096)
+.SS \-\-rc\-user=VALUE
+.PP
+User name for authentication.
+.SS \-\-rc\-pass=VALUE
+.PP
+Password for authentication.
+.SS \-\-rc\-realm=VALUE
+.PP
+Realm for authentication (default "rclone")
+.SS \-\-rc\-server\-read\-timeout=DURATION
+.PP
+Timeout for server reading data (default 1h0m0s)
+.SS \-\-rc\-server\-write\-timeout=DURATION
+.PP
+Timeout for server writing data (default 1h0m0s)
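+.PP
+For example, to run a mount with the remote control enabled on the
+default port (the mount point is illustrative):
+.IP
+.nf
+\f[C]
+rclone\ mount\ \-\-rc\ remote:\ /mnt/remote
+\f[]
+.fi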
+.SS Accessing the remote control via the rclone rc command
+.PP
+Rclone itself implements the remote control protocol in its
+\f[C]rclone\ rc\f[] command.
+.PP
+You can use it like this
+.IP
+.nf
+\f[C]
+$\ rclone\ rc\ rc/noop\ param1=one\ param2=two
+{
+\ \ \ \ "param1":\ "one",
+\ \ \ \ "param2":\ "two"
+}
+\f[]
+.fi
+.PP
+Run \f[C]rclone\ rc\f[] on its own to see the help for the installed
+remote control commands.
+.SS Supported commands
+.SS core/bwlimit: Set the bandwidth limit.
+.PP
+This sets the bandwidth limit to that passed in.
+.PP
+Eg
+.IP
+.nf
+\f[C]
+rclone\ rc\ core/bwlimit\ rate=1M
+rclone\ rc\ core/bwlimit\ rate=off
+\f[]
+.fi
+.SS cache/expire: Purge a remote from cache
+.PP
+Purge a remote from the cache backend.
+Supports either a directory or a file.
+Params:
+.IP \[bu] 2
+remote = path to remote (required)
+.IP \[bu] 2
+withData = true/false to delete cached data (chunks) as well (optional)
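+.PP
+Eg (with illustrative paths)
+.IP
+.nf
+\f[C]
+rclone\ rc\ cache/expire\ remote=path/to/sub/folder/
+rclone\ rc\ cache/expire\ remote=/\ withData=true
+\f[]
+.fi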
+.SS vfs/forget: Forget files or directories in the directory cache.
+.PP
+This forgets the paths in the directory cache causing them to be
+re\-read from the remote when needed.
+.PP
+If no paths are passed in then it will forget all the paths in the
+directory cache.
+.IP
+.nf
+\f[C]
+rclone\ rc\ vfs/forget
+\f[]
+.fi
+.PP
+Otherwise pass files or dirs in as file=path or dir=path.
+Any parameter key starting with file will forget that file and any
+starting with dir will forget that dir, eg
+.IP
+.nf
+\f[C]
+rclone\ rc\ vfs/forget\ file=hello\ file2=goodbye\ dir=home/junk
+\f[]
+.fi
+.SS rc/noop: Echo the input to the output parameters
+.PP
+This echoes the input parameters to the output parameters for testing
+purposes.
+It can be used to check that rclone is still alive and to check that
+parameter passing is working properly.
+.SS rc/error: This returns an error
+.PP
+This returns an error with the input as part of its error string.
+Useful for testing error handling.
+.SS rc/list: List all the registered remote control commands
+.PP
+This lists all the registered remote control commands as a JSON map in
+the commands response.
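+.PP
+Eg
+.IP
+.nf
+\f[C]
+rclone\ rc\ rc/list
+\f[]
+.fi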
+.SS Accessing the remote control via HTTP
+.PP
+Rclone implements a simple HTTP based protocol.
+.PP
+Each endpoint takes a JSON object and returns a JSON object or an
+error.
+The JSON objects are essentially a map of string names to values.
+.PP
+All calls must be made using POST.
+.PP
+The input objects can be supplied using URL parameters, POST parameters
+or by supplying "Content\-Type: application/json" and a JSON blob in the
+body.
+There are examples of these below using \f[C]curl\f[].
+.PP
+The response will be a JSON blob in the body of the response.
+This is formatted to be reasonably human readable.
+.PP
+If an error occurs then there will be an HTTP error status (usually 400)
+and the body of the response will contain a JSON encoded error object.
+.SS Using POST with URL parameters only
+.IP
+.nf
+\f[C]
+curl\ \-X\ POST\ \[aq]http://localhost:5572/rc/noop/?potato=1&sausage=2\[aq]
+\f[]
+.fi
+.PP
+Response
+.IP
+.nf
+\f[C]
+{
+\ \ \ \ "potato":\ "1",
+\ \ \ \ "sausage":\ "2"
+}
+\f[]
+.fi
+.PP
+Here is what an error response looks like:
+.IP
+.nf
+\f[C]
+curl\ \-X\ POST\ \[aq]http://localhost:5572/rc/error/?potato=1&sausage=2\[aq]
+\f[]
+.fi
+.IP
+.nf
+\f[C]
+{
+\ \ \ \ "error":\ "arbitrary\ error\ on\ input\ map[potato:1\ sausage:2]",
+\ \ \ \ "input":\ {
+\ \ \ \ \ \ \ \ "potato":\ "1",
+\ \ \ \ \ \ \ \ "sausage":\ "2"
+\ \ \ \ }
+}
+\f[]
+.fi
+.PP
+Note that curl doesn\[aq]t return errors to the shell unless you use the
+\f[C]\-f\f[] option.
+.IP
+.nf
+\f[C]
+$\ curl\ \-f\ \-X\ POST\ \[aq]http://localhost:5572/rc/error/?potato=1&sausage=2\[aq]
+curl:\ (22)\ The\ requested\ URL\ returned\ error:\ 400\ Bad\ Request
+$\ echo\ $?
+22
+\f[]
+.fi
+.SS Using POST with a form
+.IP
+.nf
+\f[C]
+curl\ \-\-data\ "potato=1"\ \-\-data\ "sausage=2"\ http://localhost:5572/rc/noop/
+\f[]
+.fi
+.PP
+Response
+.IP
+.nf
+\f[C]
+{
+\ \ \ \ "potato":\ "1",
+\ \ \ \ "sausage":\ "2"
+}
+\f[]
+.fi
+.PP
+Note that you can combine these with URL parameters too, with the POST
+parameters taking precedence.
+.IP
+.nf
+\f[C]
+curl\ \-\-data\ "potato=1"\ \-\-data\ "sausage=2"\ "http://localhost:5572/rc/noop/?rutabaga=3&sausage=4"
+\f[]
+.fi
+.PP
+Response
+.IP
+.nf
+\f[C]
+{
+\ \ \ \ "potato":\ "1",
+\ \ \ \ "rutabaga":\ "3",
+\ \ \ \ "sausage":\ "4"
+}
+\f[]
+.fi
+.SS Using POST with a JSON blob
+.IP
+.nf
+\f[C]
+curl\ \-H\ "Content\-Type:\ application/json"\ \-X\ POST\ \-d\ \[aq]{"potato":2,"sausage":1}\[aq]\ http://localhost:5572/rc/noop/
+\f[]
+.fi
+.PP
+Response
+.IP
+.nf
+\f[C]
+{
+\ \ \ \ "potato":\ 2,
+\ \ \ \ "sausage":\ 1
+}
+\f[]
+.fi
+.PP
+This can be combined with URL parameters too if required.
+The JSON blob takes precedence.
+.IP
+.nf
+\f[C]
+curl\ \-H\ "Content\-Type:\ application/json"\ \-X\ POST\ \-d\ \[aq]{"potato":2,"sausage":1}\[aq]\ \[aq]http://localhost:5572/rc/noop/?rutabaga=3&potato=4\[aq]
+\f[]
+.fi
+.IP
+.nf
+\f[C]
+{
+\ \ \ \ "potato":\ 2,
+\ \ \ \ "rutabaga":\ "3",
+\ \ \ \ "sausage":\ 1
+}
+\f[]
+.fi
.SH Overview of cloud storage systems
.PP
Each cloud storage system is slightly different.
@@ -5052,6 +5959,152 @@ advance.
This allows certain operations to work without spooling the file to
local disk first, e.g.
\f[C]rclone\ rcat\f[].
+.SS Alias
+.PP
+The \f[C]alias\f[] remote provides a new name for another remote.
+.PP
+Paths may be as deep as required or a local path, eg
+\f[C]remote:directory/subdirectory\f[] or
+\f[C]/directory/subdirectory\f[].
+.PP
+During the initial setup with \f[C]rclone\ config\f[] you will specify
+the target remote.
+The target remote can either be a local path or another remote.
+.PP
+Subfolders can be used in the target remote.
+Assume an alias remote named \f[C]backup\f[] with the target
+\f[C]mydrive:private/backup\f[].
+Invoking \f[C]rclone\ mkdir\ backup:desktop\f[] is exactly the same as
+invoking \f[C]rclone\ mkdir\ mydrive:private/backup/desktop\f[].
+.PP
+There will be no special handling of paths containing \f[C]\&..\f[]
+segments.
+Invoking \f[C]rclone\ mkdir\ backup:../desktop\f[] is exactly the same
+as invoking \f[C]rclone\ mkdir\ mydrive:private/backup/../desktop\f[].
+The empty path is not allowed as a remote.
+To alias the current directory use \f[C]\&.\f[] instead.
+.PP
+Here is an example of how to make an alias called \f[C]remote\f[] for
+a local folder.
+First run:
+.IP
+.nf
+\f[C]
+\ rclone\ config
+\f[]
+.fi
+.PP
+This will guide you through an interactive setup process:
+.IP
+.nf
+\f[C]
+No\ remotes\ found\ \-\ make\ a\ new\ one
+n)\ New\ remote
+s)\ Set\ configuration\ password
+q)\ Quit\ config
+n/s/q>\ n
+name>\ remote
+Type\ of\ storage\ to\ configure.
+Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
+\ 1\ /\ Alias\ for\ a\ existing\ remote
+\ \ \ \\\ "alias"
+\ 2\ /\ Amazon\ Drive
+\ \ \ \\\ "amazon\ cloud\ drive"
+\ 3\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio)
+\ \ \ \\\ "s3"
+\ 4\ /\ Backblaze\ B2
+\ \ \ \\\ "b2"
+\ 5\ /\ Box
+\ \ \ \\\ "box"
+\ 6\ /\ Cache\ a\ remote
+\ \ \ \\\ "cache"
+\ 7\ /\ Dropbox
+\ \ \ \\\ "dropbox"
+\ 8\ /\ Encrypt/Decrypt\ a\ remote
+\ \ \ \\\ "crypt"
+\ 9\ /\ FTP\ Connection
+\ \ \ \\\ "ftp"
+10\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive)
+\ \ \ \\\ "google\ cloud\ storage"
+11\ /\ Google\ Drive
+\ \ \ \\\ "drive"
+12\ /\ Hubic
+\ \ \ \\\ "hubic"
+13\ /\ Local\ Disk
+\ \ \ \\\ "local"
+14\ /\ Microsoft\ Azure\ Blob\ Storage
+\ \ \ \\\ "azureblob"
+15\ /\ Microsoft\ OneDrive
+\ \ \ \\\ "onedrive"
+16\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH)
+\ \ \ \\\ "swift"
+17\ /\ Pcloud
+\ \ \ \\\ "pcloud"
+18\ /\ QingCloud\ Object\ Storage
+\ \ \ \\\ "qingstor"
+19\ /\ SSH/SFTP\ Connection
+\ \ \ \\\ "sftp"
+20\ /\ Webdav
+\ \ \ \\\ "webdav"
+21\ /\ Yandex\ Disk
+\ \ \ \\\ "yandex"
+22\ /\ http\ Connection
+\ \ \ \\\ "http"
+Storage>\ 1
+Remote\ or\ path\ to\ alias.
+Can\ be\ "myremote:path/to/dir",\ "myremote:bucket",\ "myremote:"\ or\ "/local/path".
+remote>\ /mnt/storage/backup
+Remote\ config
+\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
+[remote]
+remote\ =\ /mnt/storage/backup
+\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
+y)\ Yes\ this\ is\ OK
+e)\ Edit\ this\ remote
+d)\ Delete\ this\ remote
+y/e/d>\ y
+Current\ remotes:
+
+Name\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Type
+====\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ ====
+remote\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ alias
+
+e)\ Edit\ existing\ remote
+n)\ New\ remote
+d)\ Delete\ remote
+r)\ Rename\ remote
+c)\ Copy\ remote
+s)\ Set\ configuration\ password
+q)\ Quit\ config
+e/n/d/r/c/s/q>\ q
+\f[]
+.fi
+.PP
+Once configured you can then use \f[C]rclone\f[] like this,
+.PP
+List directories in top level in \f[C]/mnt/storage/backup\f[]
+.IP
+.nf
+\f[C]
+rclone\ lsd\ remote:
+\f[]
+.fi
+.PP
+List all the files in \f[C]/mnt/storage/backup\f[]
+.IP
+.nf
+\f[C]
+rclone\ ls\ remote:
+\f[]
+.fi
+.PP
+Copy another local directory to the alias directory called source
+.IP
+.nf
+\f[C]
+rclone\ copy\ /home/source\ remote:source
+\f[]
+.fi
.SS Amazon Drive
.PP
Paths are specified as \f[C]remote:path\f[]
@@ -5309,37 +6362,23 @@ This will guide you through an interactive setup process.
No\ remotes\ found\ \-\ make\ a\ new\ one
n)\ New\ remote
s)\ Set\ configuration\ password
-n/s>\ n
+q)\ Quit\ config
+n/s/q>\ n
name>\ remote
Type\ of\ storage\ to\ configure.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
-\ 1\ /\ Amazon\ Drive
+\ 1\ /\ Alias\ for\ a\ existing\ remote
+\ \ \ \\\ "alias"
+\ 2\ /\ Amazon\ Drive
\ \ \ \\\ "amazon\ cloud\ drive"
-\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio)
+\ 3\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio)
\ \ \ \\\ "s3"
-\ 3\ /\ Backblaze\ B2
+\ 4\ /\ Backblaze\ B2
\ \ \ \\\ "b2"
-\ 4\ /\ Dropbox
-\ \ \ \\\ "dropbox"
-\ 5\ /\ Encrypt/Decrypt\ a\ remote
-\ \ \ \\\ "crypt"
-\ 6\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive)
-\ \ \ \\\ "google\ cloud\ storage"
-\ 7\ /\ Google\ Drive
-\ \ \ \\\ "drive"
-\ 8\ /\ Hubic
-\ \ \ \\\ "hubic"
-\ 9\ /\ Local\ Disk
-\ \ \ \\\ "local"
-10\ /\ Microsoft\ OneDrive
-\ \ \ \\\ "onedrive"
-11\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH)
-\ \ \ \\\ "swift"
-12\ /\ SSH/SFTP\ Connection
-\ \ \ \\\ "sftp"
-13\ /\ Yandex\ Disk
-\ \ \ \\\ "yandex"
-Storage>\ 2
+[snip]
+23\ /\ http\ Connection
+\ \ \ \\\ "http"
+Storage>\ s3
Get\ AWS\ credentials\ from\ runtime\ (environment\ variables\ or\ EC2/ECS\ meta\ data\ if\ no\ env\ vars).\ Only\ applies\ if\ access_key_id\ and\ secret_access_key\ is\ blank.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ 1\ /\ Enter\ AWS\ credentials\ in\ the\ next\ step
@@ -5348,80 +6387,91 @@ Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ \ \ \\\ "true"
env_auth>\ 1
AWS\ Access\ Key\ ID\ \-\ leave\ blank\ for\ anonymous\ access\ or\ runtime\ credentials.
-access_key_id>\ access_key
+access_key_id>\ XXX
AWS\ Secret\ Access\ Key\ (password)\ \-\ leave\ blank\ for\ anonymous\ access\ or\ runtime\ credentials.
-secret_access_key>\ secret_key
-Region\ to\ connect\ to.
+secret_access_key>\ YYY
+Region\ to\ connect\ to.\ \ Leave\ blank\ if\ you\ are\ using\ an\ S3\ clone\ and\ you\ don\[aq]t\ have\ a\ region.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ \ \ /\ The\ default\ endpoint\ \-\ a\ good\ choice\ if\ you\ are\ unsure.
\ 1\ |\ US\ Region,\ Northern\ Virginia\ or\ Pacific\ Northwest.
\ \ \ |\ Leave\ location\ constraint\ empty.
\ \ \ \\\ "us\-east\-1"
+\ \ \ /\ US\ East\ (Ohio)\ Region
+\ 2\ |\ Needs\ location\ constraint\ us\-east\-2.
+\ \ \ \\\ "us\-east\-2"
\ \ \ /\ US\ West\ (Oregon)\ Region
-\ 2\ |\ Needs\ location\ constraint\ us\-west\-2.
+\ 3\ |\ Needs\ location\ constraint\ us\-west\-2.
\ \ \ \\\ "us\-west\-2"
\ \ \ /\ US\ West\ (Northern\ California)\ Region
-\ 3\ |\ Needs\ location\ constraint\ us\-west\-1.
+\ 4\ |\ Needs\ location\ constraint\ us\-west\-1.
\ \ \ \\\ "us\-west\-1"
-\ \ \ /\ EU\ (Ireland)\ Region\ Region
-\ 4\ |\ Needs\ location\ constraint\ EU\ or\ eu\-west\-1.
+\ \ \ /\ Canada\ (Central)\ Region
+\ 5\ |\ Needs\ location\ constraint\ ca\-central\-1.
+\ \ \ \\\ "ca\-central\-1"
+\ \ \ /\ EU\ (Ireland)\ Region
+\ 6\ |\ Needs\ location\ constraint\ EU\ or\ eu\-west\-1.
\ \ \ \\\ "eu\-west\-1"
+\ \ \ /\ EU\ (London)\ Region
+\ 7\ |\ Needs\ location\ constraint\ eu\-west\-2.
+\ \ \ \\\ "eu\-west\-2"
\ \ \ /\ EU\ (Frankfurt)\ Region
-\ 5\ |\ Needs\ location\ constraint\ eu\-central\-1.
+\ 8\ |\ Needs\ location\ constraint\ eu\-central\-1.
\ \ \ \\\ "eu\-central\-1"
\ \ \ /\ Asia\ Pacific\ (Singapore)\ Region
-\ 6\ |\ Needs\ location\ constraint\ ap\-southeast\-1.
+\ 9\ |\ Needs\ location\ constraint\ ap\-southeast\-1.
\ \ \ \\\ "ap\-southeast\-1"
\ \ \ /\ Asia\ Pacific\ (Sydney)\ Region
-\ 7\ |\ Needs\ location\ constraint\ ap\-southeast\-2.
+10\ |\ Needs\ location\ constraint\ ap\-southeast\-2.
\ \ \ \\\ "ap\-southeast\-2"
\ \ \ /\ Asia\ Pacific\ (Tokyo)\ Region
-\ 8\ |\ Needs\ location\ constraint\ ap\-northeast\-1.
+11\ |\ Needs\ location\ constraint\ ap\-northeast\-1.
\ \ \ \\\ "ap\-northeast\-1"
\ \ \ /\ Asia\ Pacific\ (Seoul)
-\ 9\ |\ Needs\ location\ constraint\ ap\-northeast\-2.
+12\ |\ Needs\ location\ constraint\ ap\-northeast\-2.
\ \ \ \\\ "ap\-northeast\-2"
\ \ \ /\ Asia\ Pacific\ (Mumbai)
-10\ |\ Needs\ location\ constraint\ ap\-south\-1.
+13\ |\ Needs\ location\ constraint\ ap\-south\-1.
\ \ \ \\\ "ap\-south\-1"
\ \ \ /\ South\ America\ (Sao\ Paulo)\ Region
-11\ |\ Needs\ location\ constraint\ sa\-east\-1.
+14\ |\ Needs\ location\ constraint\ sa\-east\-1.
\ \ \ \\\ "sa\-east\-1"
-\ \ \ /\ If\ using\ an\ S3\ clone\ that\ only\ understands\ v2\ signatures
-12\ |\ eg\ Ceph/Dreamhost
-\ \ \ |\ set\ this\ and\ make\ sure\ you\ set\ the\ endpoint.
+\ \ \ /\ Use\ this\ only\ if\ v4\ signatures\ don\[aq]t\ work,\ eg\ pre\ Jewel/v10\ CEPH.
+15\ |\ Set\ this\ and\ make\ sure\ you\ set\ the\ endpoint.
\ \ \ \\\ "other\-v2\-signature"
-\ \ \ /\ If\ using\ an\ S3\ clone\ that\ understands\ v4\ signatures\ set\ this
-13\ |\ and\ make\ sure\ you\ set\ the\ endpoint.
-\ \ \ \\\ "other\-v4\-signature"
region>\ 1
Endpoint\ for\ S3\ API.
Leave\ blank\ if\ using\ AWS\ to\ use\ the\ default\ endpoint\ for\ the\ region.
Specify\ if\ using\ an\ S3\ clone\ such\ as\ Ceph.
-endpoint>
+endpoint>\
Location\ constraint\ \-\ must\ be\ set\ to\ match\ the\ Region.\ Used\ when\ creating\ buckets\ only.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ 1\ /\ Empty\ for\ US\ Region,\ Northern\ Virginia\ or\ Pacific\ Northwest.
\ \ \ \\\ ""
-\ 2\ /\ US\ West\ (Oregon)\ Region.
+\ 2\ /\ US\ East\ (Ohio)\ Region.
+\ \ \ \\\ "us\-east\-2"
+\ 3\ /\ US\ West\ (Oregon)\ Region.
\ \ \ \\\ "us\-west\-2"
-\ 3\ /\ US\ West\ (Northern\ California)\ Region.
+\ 4\ /\ US\ West\ (Northern\ California)\ Region.
\ \ \ \\\ "us\-west\-1"
-\ 4\ /\ EU\ (Ireland)\ Region.
+\ 5\ /\ Canada\ (Central)\ Region.
+\ \ \ \\\ "ca\-central\-1"
+\ 6\ /\ EU\ (Ireland)\ Region.
\ \ \ \\\ "eu\-west\-1"
-\ 5\ /\ EU\ Region.
+\ 7\ /\ EU\ (London)\ Region.
+\ \ \ \\\ "eu\-west\-2"
+\ 8\ /\ EU\ Region.
\ \ \ \\\ "EU"
-\ 6\ /\ Asia\ Pacific\ (Singapore)\ Region.
+\ 9\ /\ Asia\ Pacific\ (Singapore)\ Region.
\ \ \ \\\ "ap\-southeast\-1"
-\ 7\ /\ Asia\ Pacific\ (Sydney)\ Region.
+10\ /\ Asia\ Pacific\ (Sydney)\ Region.
\ \ \ \\\ "ap\-southeast\-2"
-\ 8\ /\ Asia\ Pacific\ (Tokyo)\ Region.
+11\ /\ Asia\ Pacific\ (Tokyo)\ Region.
\ \ \ \\\ "ap\-northeast\-1"
-\ 9\ /\ Asia\ Pacific\ (Seoul)
+12\ /\ Asia\ Pacific\ (Seoul)
\ \ \ \\\ "ap\-northeast\-2"
-10\ /\ Asia\ Pacific\ (Mumbai)
+13\ /\ Asia\ Pacific\ (Mumbai)
\ \ \ \\\ "ap\-south\-1"
-11\ /\ South\ America\ (Sao\ Paulo)\ Region.
+14\ /\ South\ America\ (Sao\ Paulo)\ Region.
\ \ \ \\\ "sa\-east\-1"
location_constraint>\ 1
Canned\ ACL\ used\ when\ creating\ buckets\ and/or\ storing\ objects\ in\ S3.
@@ -5442,14 +6492,14 @@ Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ \ \ /\ Both\ the\ object\ owner\ and\ the\ bucket\ owner\ get\ FULL_CONTROL\ over\ the\ object.
\ 6\ |\ If\ you\ specify\ this\ canned\ ACL\ when\ creating\ a\ bucket,\ Amazon\ S3\ ignores\ it.
\ \ \ \\\ "bucket\-owner\-full\-control"
-acl>\ private
+acl>\ 1
The\ server\-side\ encryption\ algorithm\ used\ when\ storing\ this\ object\ in\ S3.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ 1\ /\ None
\ \ \ \\\ ""
\ 2\ /\ AES256
\ \ \ \\\ "AES256"
-server_side_encryption>
+server_side_encryption>\ 1
The\ storage\ class\ to\ use\ when\ storing\ objects\ in\ S3.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ 1\ /\ Default
@@ -5460,19 +6510,19 @@ Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ \ \ \\\ "REDUCED_REDUNDANCY"
\ 4\ /\ Standard\ Infrequent\ Access\ storage\ class
\ \ \ \\\ "STANDARD_IA"
-storage_class>
+storage_class>\ 1
Remote\ config
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
[remote]
env_auth\ =\ false
-access_key_id\ =\ access_key
-secret_access_key\ =\ secret_key
+access_key_id\ =\ XXX
+secret_access_key\ =\ YYY
region\ =\ us\-east\-1
-endpoint\ =
-location_constraint\ =
+endpoint\ =\
+location_constraint\ =\
acl\ =\ private
-server_side_encryption\ =
-storage_class\ =
+server_side_encryption\ =\
+storage_class\ =\
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
y)\ Yes\ this\ is\ OK
e)\ Edit\ this\ remote
@@ -5529,7 +6579,8 @@ to 1 ns.
.PP
rclone supports multipart uploads with S3 which means that it can upload
files bigger than 5GB.
-Note that files uploaded with multipart upload don\[aq]t have an MD5SUM.
+Note that files uploaded \f[I]both\f[] with multipart upload
+\f[I]and\f[] through crypt remotes do not have MD5 sums.
.SS Buckets and Regions
.PP
With Amazon S3 you can list buckets (\f[C]rclone\ lsd\f[]) using any
@@ -5630,6 +6681,14 @@ For reference, here\[aq]s an Ansible
script (https://gist.github.com/ebridges/ebfc9042dd7c756cd101cfa807b7ae2b)
that will generate one or more buckets that will work with
\f[C]rclone\ sync\f[].
+.SS Key Management System (KMS)
+.PP
+If you are using server side encryption with KMS then you will find you
+can\[aq]t transfer small objects.
+As a work\-around you can use the \f[C]\-\-ignore\-checksum\f[] flag.
+.PP
+A proper fix is being worked on in issue
+#1824 (https://github.com/ncw/rclone/issues/1824).
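As a sketch of the work-around (the remote and bucket names here are hypothetical), a transfer to a KMS-encrypted bucket might look like:

```shell
# Copy to a bucket using SSE-KMS; --ignore-checksum works around the
# small-object transfer failure described above.
rclone copy --ignore-checksum /path/to/files s3:my-kms-bucket
```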
.SS Glacier
.PP
You can transition objects to glacier storage using a lifecycle
@@ -5720,17 +6779,27 @@ rclone\ lsd\ anons3:1000genomes
You will be able to list and copy data but not upload it.
.SS Ceph
.PP
-Ceph is an object storage system which presents an Amazon S3 interface.
+Ceph (https://ceph.com/) is an open source unified, distributed storage
+system designed for excellent performance, reliability and scalability.
+It has an S3 compatible object storage interface.
.PP
-To use rclone with ceph, you need to set the following parameters in the
-config.
+To use rclone with Ceph, configure as above but leave the region blank
+and set the endpoint.
+You should end up with something like this in your config:
.IP
.nf
\f[C]
-access_key_id\ =\ Whatever
-secret_access_key\ =\ Whatever
-endpoint\ =\ https://ceph.endpoint.goes.here/
-region\ =\ other\-v2\-signature
+[ceph]
+type\ =\ s3
+env_auth\ =\ false
+access_key_id\ =\ XXX
+secret_access_key\ =\ YYY
+region\ =\
+endpoint\ =\ https://ceph.endpoint.example.com
+location_constraint\ =\
+acl\ =\
+server_side_encryption\ =\
+storage_class\ =\
\f[]
.fi
.PP
@@ -5762,6 +6831,29 @@ removed).
Because this is a json dump, it is encoding the \f[C]/\f[] as
\f[C]\\/\f[], so if you use the secret key as \f[C]xxxxxx/xxxx\f[] it
will work fine.
+.SS Dreamhost
+.PP
+Dreamhost DreamObjects (https://www.dreamhost.com/cloud/storage/) is an
+object storage system based on CEPH.
+.PP
+To use rclone with Dreamhost, configure as above but leave the region
+blank and set the endpoint.
+You should end up with something like this in your config:
+.IP
+.nf
+\f[C]
+[dreamobjects]
+env_auth\ =\ false
+access_key_id\ =\ your_access_key
+secret_access_key\ =\ your_secret_key
+region\ =
+endpoint\ =\ objects\-us\-west\-1.dream.io
+location_constraint\ =
+acl\ =\ private
+server_side_encryption\ =
+storage_class\ =
+\f[]
+.fi
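With a config like the above in place (using the `dreamobjects` remote name from the example), typical operations might be:

```shell
# List buckets on the DreamObjects remote
rclone lsd dreamobjects:
# Make a new bucket and copy files into it
rclone mkdir dreamobjects:my-bucket
rclone copy /path/to/files dreamobjects:my-bucket
```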
.SS DigitalOcean Spaces
.PP
Spaces (https://www.digitalocean.com/products/object-storage/) is an
@@ -5787,7 +6879,7 @@ Going through the whole process of creating a new remote by running
.IP
.nf
\f[C]
-Storage>\ 2
+Storage>\ s3
env_auth>\ 1
access_key_id>\ YOUR_ACCESS_KEY
secret_access_key>\ YOUR_SECRET_KEY
@@ -5826,6 +6918,281 @@ rclone\ mkdir\ spaces:my\-new\-space
rclone\ copy\ /path/to/files\ spaces:my\-new\-space
\f[]
.fi
+.SS IBM COS (S3)
+.PP
+Information stored with IBM Cloud Object Storage is encrypted and
+dispersed across multiple geographic locations, and accessed through an
+implementation of the S3 API.
+This service makes use of the distributed storage technologies provided
+by IBM's Cloud Object Storage System (formerly Cleversafe).
+For more information visit: (https://www.ibm.com/cloud/object\-storage)
+.PP
+To configure access to IBM COS S3, follow the steps below:
+.IP " 1." 4
+Run rclone config and select n for a new remote.
+.RS 4
+.IP
+.nf
+\f[C]
+2018/02/14\ 14:13:11\ NOTICE:\ Config\ file\ "C:\\\\Users\\\\a\\\\.config\\\\rclone\\\\rclone.conf"\ not\ found\ \-\ using\ defaults
+No\ remotes\ found\ \-\ make\ a\ new\ one
+n)\ New\ remote
+s)\ Set\ configuration\ password
+q)\ Quit\ config
+n/s/q>\ n
+\f[]
+.fi
+.RE
+.IP " 2." 4
+Enter the name for the configuration
+.RS 4
+.IP
+.nf
+\f[C]
+name>\ IBM\-COS\-XREGION
+\f[]
+.fi
+.RE
+.IP " 3." 4
+Select "s3" storage.
+.RS 4
+.IP
+.nf
+\f[C]
+Type\ of\ storage\ to\ configure.
+Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
+\ 1\ /\ Amazon\ Drive
+\\\ "amazon\ cloud\ drive"
+2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio,\ IBM\ COS(S3))
+\\\ "s3"
+3\ /\ Backblaze\ B2
+Storage>\ 2
+\f[]
+.fi
+.RE
+.IP " 4." 4
+Select "Enter AWS credentials\&..."
+.RS 4
+.IP
+.nf
+\f[C]
+Get\ AWS\ credentials\ from\ runtime\ (environment\ variables\ or\ EC2/ECS\ meta\ data\ if\ no\ env\ vars).\ Only\ applies\ if\ access_key_id\ and\ secret_access_key\ is\ blank.
+Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
+\ 1\ /\ Enter\ AWS\ credentials\ in\ the\ next\ step
+\\\ "false"
+\ 2\ /\ Get\ AWS\ credentials\ from\ the\ environment\ (env\ vars\ or\ IAM)
+\\\ "true"
+env_auth>\ 1
+\f[]
+.fi
+.RE
+.IP " 5." 4
+Enter the Access Key and Secret.
+.RS 4
+.IP
+.nf
+\f[C]
+AWS\ Access\ Key\ ID\ \-\ leave\ blank\ for\ anonymous\ access\ or\ runtime\ credentials.
+access_key_id>\ <>
+AWS\ Secret\ Access\ Key\ (password)\ \-\ leave\ blank\ for\ anonymous\ access\ or\ runtime\ credentials.
+secret_access_key>\ <>
+\f[]
+.fi
+.RE
+.IP " 6." 4
+Select "other\-v4\-signature" region.
+.RS 4
+.IP
+.nf
+\f[C]
+Region\ to\ connect\ to.
+Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
+/\ The\ default\ endpoint\ \-\ a\ good\ choice\ if\ you\ are\ unsure.
+\ 1\ |\ US\ Region,\ Northern\ Virginia\ or\ Pacific\ Northwest.
+|\ Leave\ location\ constraint\ empty.
+\\\ "us\-east\-1"
+/\ US\ East\ (Ohio)\ Region
+2\ |\ Needs\ location\ constraint\ us\-east\-2.
+\\\ "us\-east\-2"
+/\ US\ West\ (Oregon)\ Region
+\&...\&...
+15\ |\ eg\ Ceph/Dreamhost
+|\ set\ this\ and\ make\ sure\ you\ set\ the\ endpoint.
+\\\ "other\-v2\-signature"
+/\ If\ using\ an\ S3\ clone\ that\ understands\ v4\ signatures\ set\ this
+16\ |\ and\ make\ sure\ you\ set\ the\ endpoint.
+\\\ "other\-v4\-signature"
+region>\ 16
+\f[]
+.fi
+.RE
+.IP " 7." 4
+Enter the endpoint FQDN.
+.RS 4
+.IP
+.nf
+\f[C]
+Leave\ blank\ if\ using\ AWS\ to\ use\ the\ default\ endpoint\ for\ the\ region.
+Specify\ if\ using\ an\ S3\ clone\ such\ as\ Ceph.
+endpoint>\ s3\-api.us\-geo.objectstorage.softlayer.net
+\f[]
+.fi
+.RE
+.IP " 8." 4
+Specify a IBM COS Location Constraint.
+.RS 4
+.IP "a." 3
+Currently,\ the\ only\ IBM\ COS\ values\ for\ LocationConstraint\ are:
+us\-standard\ /\ us\-vault\ /\ us\-cold\ /\ us\-flex,
+us\-east\-standard\ /\ us\-east\-vault\ /\ us\-east\-cold\ /\ us\-east\-flex,
+us\-south\-standard\ /\ us\-south\-vault\ /\ us\-south\-cold\ /\ us\-south\-flex,
+eu\-standard\ /\ eu\-vault\ /\ eu\-cold\ /\ eu\-flex
+.RS 4
+.IP
+.nf
+\f[C]
+Location\ constraint\ \-\ must\ be\ set\ to\ match\ the\ Region.\ Used\ when\ creating\ buckets\ only.
+Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
+\ 1\ /\ Empty\ for\ US\ Region,\ Northern\ Virginia\ or\ Pacific\ Northwest.
+\\\ ""
+\ 2\ /\ US\ East\ (Ohio)\ Region.
+\\\ "us\-east\-2"
+\ \&...\&...
+location_constraint>\ us\-standard
+\f[]
+.fi
+.RE
+.RE
+.IP " 9." 4
+Specify a canned ACL.
+.RS 4
+.IP
+.nf
+\f[C]
+Canned\ ACL\ used\ when\ creating\ buckets\ and/or\ storing\ objects\ in\ S3.
+For\ more\ info\ visit\ https://docs.aws.amazon.com/AmazonS3/latest/dev/acl\-overview.html#canned\-acl
+Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
+1\ /\ Owner\ gets\ FULL_CONTROL.\ No\ one\ else\ has\ access\ rights\ (default).
+\\\ "private"
+2\ /\ Owner\ gets\ FULL_CONTROL.\ The\ AllUsers\ group\ gets\ READ\ access.
+\\\ "public\-read"
+/\ Owner\ gets\ FULL_CONTROL.\ The\ AllUsers\ group\ gets\ READ\ and\ WRITE\ access.
+\ 3\ |\ Granting\ this\ on\ a\ bucket\ is\ generally\ not\ recommended.
+\\\ "public\-read\-write"
+\ 4\ /\ Owner\ gets\ FULL_CONTROL.\ The\ AuthenticatedUsers\ group\ gets\ READ\ access.
+\\\ "authenticated\-read"
+/\ Object\ owner\ gets\ FULL_CONTROL.\ Bucket\ owner\ gets\ READ\ access.
+5\ |\ If\ you\ specify\ this\ canned\ ACL\ when\ creating\ a\ bucket,\ Amazon\ S3\ ignores\ it.
+\\\ "bucket\-owner\-read"
+/\ Both\ the\ object\ owner\ and\ the\ bucket\ owner\ get\ FULL_CONTROL\ over\ the\ object.
+\ 6\ |\ If\ you\ specify\ this\ canned\ ACL\ when\ creating\ a\ bucket,\ Amazon\ S3\ ignores\ it.
+\\\ "bucket\-owner\-full\-control"
+acl>\ 1
+\f[]
+.fi
+.RE
+.IP "10." 4
+Set the SSE option to "None".
+.RS 4
+.IP
+.nf
+\f[C]
+Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
+\ 1\ /\ None
+\\\ ""
+2\ /\ AES256
+\\\ "AES256"
+server_side_encryption>\ 1
+\f[]
+.fi
+.RE
+.IP "11." 4
+Set the storage class to "None" (IBM COS uses the LocationConstraint at
+the bucket level).
+.RS 4
+.IP
+.nf
+\f[C]
+The\ storage\ class\ to\ use\ when\ storing\ objects\ in\ S3.
+Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
+1\ /\ Default
+\\\ ""
+\ 2\ /\ Standard\ storage\ class
+\\\ "STANDARD"
+\ 3\ /\ Reduced\ redundancy\ storage\ class
+\\\ "REDUCED_REDUNDANCY"
+\ 4\ /\ Standard\ Infrequent\ Access\ storage\ class
+\ \\\ "STANDARD_IA"
+storage_class>
+\f[]
+.fi
+.RE
+.IP "12." 4
+Review the displayed configuration and accept to save the "remote" then
+quit.
+.RS 4
+.IP
+.nf
+\f[C]
+Remote\ config
+\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
+[IBM\-COS\-XREGION]
+env_auth\ =\ false
+access_key_id\ =\ <>
+secret_access_key\ =\ <>
+region\ =\ other\-v4\-signature
+endpoint\ =\ s3\-api.us\-geo.objectstorage.softlayer.net
+location_constraint\ =\ us\-standard
+acl\ =\ private
+server_side_encryption\ =\
+storage_class\ =
+\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
+y)\ Yes\ this\ is\ OK
+e)\ Edit\ this\ remote
+d)\ Delete\ this\ remote
+y/e/d>\ y
+Remote\ config
+Current\ remotes:
+
+Name\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Type
+====\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ ====
+IBM\-COS\-XREGION\ \ \ \ \ \ s3
+
+e)\ Edit\ existing\ remote
+n)\ New\ remote
+d)\ Delete\ remote
+r)\ Rename\ remote
+c)\ Copy\ remote
+s)\ Set\ configuration\ password
+q)\ Quit\ config
+e/n/d/r/c/s/q>\ q
+\f[]
+.fi
+.RE
+.IP "13." 4
+Execute rclone commands
+.RS 4
+.IP
+.nf
+\f[C]
+1)\ \ Create\ a\ bucket.
+\ \ \ \ rclone\ mkdir\ IBM\-COS\-XREGION:newbucket
+2)\ \ List\ available\ buckets.
+\ \ \ \ rclone\ lsd\ IBM\-COS\-XREGION:
+\ \ \ \ \-1\ 2017\-11\-08\ 21:16:22\ \ \ \ \ \ \ \ \-1\ test
+\ \ \ \ \-1\ 2018\-02\-14\ 20:16:39\ \ \ \ \ \ \ \ \-1\ newbucket
+3)\ \ List\ contents\ of\ a\ bucket.
+\ \ \ \ rclone\ ls\ IBM\-COS\-XREGION:newbucket
+\ \ \ \ 18685952\ test.exe
+4)\ \ Copy\ a\ file\ from\ local\ to\ remote.
+\ \ \ \ rclone\ copy\ /Users/file.txt\ IBM\-COS\-XREGION:newbucket
+5)\ \ Copy\ a\ file\ from\ remote\ to\ local.
+\ \ \ \ rclone\ copy\ IBM\-COS\-XREGION:newbucket/file.txt\ .
+6)\ \ Delete\ a\ file\ on\ remote.
+\ \ \ \ rclone\ delete\ IBM\-COS\-XREGION:newbucket/file.txt
+\f[]
+.fi
+.RE
.SS Minio
.PP
Minio (https://minio.io/) is an object storage server built for cloud
@@ -6741,12 +8108,48 @@ To start a cached mount
rclone\ mount\ \-\-allow\-other\ test\-cache:\ /var/tmp/test\-cache
\f[]
.fi
+.SS Write Features
+.SS Offline uploading
+.PP
+In an effort to make writing through cache more reliable, the backend
+now supports this feature which can be activated by specifying a
+\f[C]cache\-tmp\-upload\-path\f[].
+.PP
+A files goes through these states when using this feature:
+.IP "1." 3
+An upload is started (usually by copying a file on the cache remote)
+.IP "2." 3
+When the copy to the temporary location is complete the file is part of
+the cached remote and looks and behaves like any other file (reading
+included)
+.IP "3." 3
+After \f[C]cache\-tmp\-wait\-time\f[] passes and the file is next in
+line, \f[C]rclone\ move\f[] is used to move the file to the cloud
+provider
+.IP "4." 3
+Reading the file still works during the upload but most modifications on
+it will be prohibited
+.IP "5." 3
+Once the move is complete the file is unlocked for modifications as it
+becomes as any other regular file
+.IP "6." 3
+If the file is being read through \f[C]cache\f[] when it\[aq]s actually
+deleted from the temporary path then \f[C]cache\f[] will simply swap the
+source to the cloud provider without interrupting the reading (small
+blip can happen though)
+.PP
+Files are uploaded in sequence and only one file is uploaded at a time.
+Uploads will be stored in a queue and be processed based on the order
+they were added.
+The queue and the temporary storage is persistent across restarts and
+even purges of the cache.
.SS Write Support
.PP
Writes are supported through \f[C]cache\f[].
One caveat is that a mounted cache remote does not add any retry or
fallback mechanism to the upload operation.
This will depend on the implementation of the wrapped remote.
+Consider using \f[C]Offline\ uploading\f[] for reliable writes.
.PP
One special case is covered with \f[C]cache\-writes\f[] which will cache
the file data at the same time as the upload when it is enabled making
@@ -6789,6 +8192,18 @@ enabled.
Affected settings: \- \f[C]cache\-workers\f[]: \f[I]Configured value\f[]
during confirmed playback or \f[I]1\f[] all the other times
.SS Known issues
+.SS Mount and \-\-dir\-cache\-time
+.PP
+\-\-dir\-cache\-time controls the first layer of directory caching which
+works at the mount layer.
+Being an independent caching mechanism from the \f[C]cache\f[] backend,
+it will manage its own entries based on the configured time.
+.PP
+To avoid getting in a scenario where dir cache has obsolete data and
+cache would have the correct one, try to set
+\f[C]\-\-dir\-cache\-time\f[] to a lower time than
+\f[C]\-\-cache\-info\-age\f[].
+Default values are already configured in this way.
.SS Windows support \- Experimental
.PP
There are a couple of issues with Windows \f[C]mount\f[] functionality
@@ -6847,6 +8262,21 @@ provider which makes it think we\[aq]re downloading the full file
instead of small chunks.
Organizing the remotes in this order yelds better results: \f[B]cloud
remote\f[] \-> \f[B]cache\f[] \-> \f[B]crypt\f[]
+.SS Cache and Remote Control (\-\-rc)
+.PP
+Cache supports the new \f[C]\-\-rc\f[] mode in rclone and can be remote
+controlled\ through\ the\ following\ end\ points.
+By\ default,\ the\ listener\ is\ disabled\ if\ you\ do\ not\ add\ the\ flag.
+.SS rc cache/expire
+.PP
+Purge a remote from the cache backend.
+Supports either a directory or a file.
+It supports both encrypted and unencrypted file names if cache is
+wrapped by crypt.
+.PP
+Params: \- \f[B]remote\f[] = path to remote \f[B](required)\f[] \-
+\f[B]withData\f[] = true/false to delete cached data (chunks) as well
+\f[I](optional, false by default)\f[]
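Assuming rclone was started with `--rc`, the end point can be called like this sketch (the path is a placeholder):

```shell
# Expire a directory from the cache backend, deleting its cached
# data chunks as well
rclone rc cache/expire remote=path/to/dir/ withData=true
```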
.SS Specific options
.PP
Here are the command line options specific to this cloud storage system.
@@ -6978,6 +8408,34 @@ If you need to read files immediately after you upload them through
cache store at the same time during upload.
.PP
\f[B]Default\f[]: not set
+.SS \-\-cache\-tmp\-upload\-path=PATH
+.PP
+This is the path where \f[C]cache\f[] will use as a temporary storage
+for new files that need to be uploaded to the cloud provider.
+.PP
+Specifying a value will enable this feature.
+Without it, it is completely disabled and files will be uploaded
+directly\ to\ the\ cloud\ provider.
+.PP
+\f[B]Default\f[]: empty
+.SS \-\-cache\-tmp\-wait\-time=DURATION
+.PP
+This is the duration that a file must wait in the temporary location
+\f[I]cache\-tmp\-upload\-path\f[] before it is selected for upload.
+.PP
+Note that only one file is uploaded at a time and it can take longer to
+start the upload if a queue formed for this purpose.
+.PP
+\f[B]Default\f[]: 15m
+.SS \-\-cache\-db\-wait\-time=DURATION
+.PP
+Only one process can have the DB open at any one time, so rclone waits
+for this duration for the DB to become available before it gives an
+error.
+.PP
+If you set it to 0 then it will wait forever.
+.PP
+\f[B]Default\f[]: 1s
.SS Crypt
.PP
The \f[C]crypt\f[] remote encrypts and decrypts another remote.
@@ -7222,7 +8680,7 @@ Standard
.IP \[bu] 2
file names encrypted
.IP \[bu] 2
-file names can\[aq]t be as long (~156 characters)
+file names can\[aq]t be as long (~143 characters)
.IP \[bu] 2
can use sub paths and copy single files
.IP \[bu] 2
@@ -7281,7 +8739,7 @@ Encrypts the whole file path including directory names Example:
False
.PP
Only encrypts file names, skips directory names Example:
-\f[C]1/12/123/txt\f[] is encrypted to
+\f[C]1/12/123.txt\f[] is encrypted to
\f[C]1/12/qgm4avr35m5loi1th53ato71v0\f[]
.SS Modified time and hashes
.PP
@@ -8022,39 +9480,34 @@ n/r/c/s/q>\ n
name>\ remote
Type\ of\ storage\ to\ configure.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
-\ 1\ /\ Amazon\ Drive
-\ \ \ \\\ "amazon\ cloud\ drive"
-\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio)
-\ \ \ \\\ "s3"
-\ 3\ /\ Backblaze\ B2
-\ \ \ \\\ "b2"
-\ 4\ /\ Dropbox
-\ \ \ \\\ "dropbox"
-\ 5\ /\ Encrypt/Decrypt\ a\ remote
-\ \ \ \\\ "crypt"
-\ 6\ /\ FTP\ Connection
-\ \ \ \\\ "ftp"
-\ 7\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive)
-\ \ \ \\\ "google\ cloud\ storage"
-\ 8\ /\ Google\ Drive
+[snip]
+10\ /\ Google\ Drive
\ \ \ \\\ "drive"
-\ 9\ /\ Hubic
-\ \ \ \\\ "hubic"
-10\ /\ Local\ Disk
-\ \ \ \\\ "local"
-11\ /\ Microsoft\ OneDrive
-\ \ \ \\\ "onedrive"
-12\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH)
-\ \ \ \\\ "swift"
-13\ /\ SSH/SFTP\ Connection
-\ \ \ \\\ "sftp"
-14\ /\ Yandex\ Disk
-\ \ \ \\\ "yandex"
-Storage>\ 8
+[snip]
+Storage>\ drive
Google\ Application\ Client\ Id\ \-\ leave\ blank\ normally.
client_id>
Google\ Application\ Client\ Secret\ \-\ leave\ blank\ normally.
client_secret>
+Scope\ that\ rclone\ should\ use\ when\ requesting\ access\ from\ drive.
+Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
+\ 1\ /\ Full\ access\ all\ files,\ excluding\ Application\ Data\ Folder.
+\ \ \ \\\ "drive"
+\ 2\ /\ Read\-only\ access\ to\ file\ metadata\ and\ file\ contents.
+\ \ \ \\\ "drive.readonly"
+\ \ \ /\ Access\ to\ files\ created\ by\ rclone\ only.
+\ 3\ |\ These\ are\ visible\ in\ the\ drive\ website.
+\ \ \ |\ File\ authorization\ is\ revoked\ when\ the\ user\ deauthorizes\ the\ app.
+\ \ \ \\\ "drive.file"
+\ \ \ /\ Allows\ read\ and\ write\ access\ to\ the\ Application\ Data\ folder.
+\ 4\ |\ This\ is\ not\ visible\ in\ the\ drive\ website.
+\ \ \ \\\ "drive.appfolder"
+\ \ \ /\ Allows\ read\-only\ access\ to\ file\ metadata\ but
+\ 5\ |\ does\ not\ allow\ any\ access\ to\ read\ or\ download\ file\ content.
+\ \ \ \\\ "drive.metadata.readonly"
+scope>\ 1
+ID\ of\ the\ root\ folder\ \-\ leave\ blank\ normally.\ \ Fill\ in\ to\ access\ "Computers"\ folders.\ (see\ docs).
+root_folder_id>\
Service\ Account\ Credentials\ JSON\ file\ path\ \-\ needed\ only\ if\ you\ want\ use\ SA\ instead\ of\ interactive\ login.
service_account_file>
Remote\ config
@@ -8074,9 +9527,12 @@ n)\ No
y/n>\ n
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
[remote]
-client_id\ =
-client_secret\ =
-token\ =\ {"AccessToken":"xxxx.x.xxxxx_xxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"1/xxxxxxxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxx","Expiry":"2014\-03\-16T13:57:58.955387075Z","Extra":null}
+client_id\ =\
+client_secret\ =\
+scope\ =\ drive
+root_folder_id\ =\
+service_account_file\ =
+token\ =\ {"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"2014\-03\-16T13:57:58.955387075Z"}
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
y)\ Yes\ this\ is\ OK
e)\ Edit\ this\ remote
@@ -8118,6 +9574,80 @@ To copy a local directory to a drive directory called backup
rclone\ copy\ /home/source\ remote:backup
\f[]
.fi
+.SS Scopes
+.PP
+Rclone allows you to select which scope you would like for rclone to
+use.
+This changes what type of token is granted to rclone.
+The scopes are defined
+here\ (https://developers.google.com/drive/v3/web/about-auth).
+.PP
+The scope are
+.SS drive
+.PP
+This is the default scope and allows full access to all files, except
+for the Application Data Folder (see below).
+.PP
+Choose this one if you aren\[aq]t sure.
+.SS drive.readonly
+.PP
+This allows read only access to all files.
+Files may be listed and downloaded but not uploaded, renamed or deleted.
+.SS drive.file
+.PP
+With this scope rclone can read/view/modify only those files and folders
+it creates.
+.PP
+So if you uploaded files to drive via the web interface (or any other
+means) they will not be visible to rclone.
+.PP
+This can be useful if you are using rclone to backup data and you want
+to be sure confidential data on your drive is not visible to rclone.
+.PP
+Files created with this scope are visible in the web interface.
+.SS drive.appfolder
+.PP
+This gives rclone its own private area to store files.
+Rclone will not be able to see any other files on your drive and you
+won\[aq]t be able to see rclone\[aq]s files from the web interface
+either.
+.SS drive.metadata.readonly
+.PP
+This allows read only access to file names only.
+It does not allow rclone to download or upload data, or rename or delete
+files or directories.
+.SS Root folder ID
+.PP
+You can set the \f[C]root_folder_id\f[] for rclone.
+This is the directory (identified by its \f[C]Folder\ ID\f[]) that
+rclone considers to be a the root of your drive.
+.PP
+Normally you will leave this blank and rclone will determine the correct
+root to use itself.
+.PP
+However you can set this to restrict rclone to a specific folder
+hierarchy or to access data within the "Computers" tab on the drive web
+interface (where files from Google\[aq]s Backup and Sync desktop program
+go).
+.PP
+In order to do this you will have to find the \f[C]Folder\ ID\f[] of the
+directory you wish rclone to display.
+This will be the last segment of the URL when you open the relevant
+folder in the drive web interface.
+.PP
+So if the folder you want rclone to use has a URL which looks like
+\f[C]https://drive.google.com/drive/folders/1XyfxxxxxxxxxxxxxxxxxxxxxxxxxKHCh\f[]
+in the browser, then you use \f[C]1XyfxxxxxxxxxxxxxxxxxxxxxxxxxKHCh\f[]
+as the \f[C]root_folder_id\f[] in the config.
+.PP
+\f[B]NB\f[] folders under the "Computers" tab seem to be read only
+(drive gives a 500 error) when using rclone.
+.PP
+There doesn\[aq]t appear to be an API to discover the folder IDs of the
+"Computers" tab \- please contact us if you know otherwise!
+.PP
+Note also that rclone can\[aq]t access any data under the "Backups" tab
+on the google drive web interface yet.
.SS Service Account support
.PP
You can set up rclone with Google Drive in an unattended mode, i.e.
@@ -8125,16 +9655,97 @@ not tied to a specific end\-user Google account.
This is useful when you want to synchronise files onto machines that
don\[aq]t have actively logged\-in users, for example build machines.
.PP
-To create a service account and obtain its credentials, go to the Google
-Developer Console (https://console.developers.google.com) and use the
-"Create Credentials" button.
-After creating an account, a JSON file containing the Service
-Account\[aq]s credentials will be downloaded onto your machine.
-These credentials are what rclone will use for authentication.
-.PP
To use a Service Account instead of OAuth2 token flow, enter the path to
your Service Account credentials at the \f[C]service_account_file\f[]
-prompt and rclone won\[aq]t use the browser based authentication flow.
+prompt during \f[C]rclone\ config\f[] and rclone won\[aq]t use the
+browser based authentication flow.
+.SS Use case \- Google Apps/G\-suite account and individual Drive
+.PP
+Let\[aq]s say that you are the administrator of a Google Apps (old) or
+G\-suite account.
+The goal is to store data on an individual\[aq]s Drive account, who IS a
+member of the domain.
+We\[aq]ll call the domain \f[B]example.com\f[], and the user
+\f[B]foo\@example.com\f[].
+.PP
+There\ are\ a\ few\ steps\ we\ need\ to\ go\ through\ to\ accomplish\ this:
+.SS 1. Create a service account for example.com
+.IP \[bu] 2
+To create a service account and obtain its credentials, go to the Google
+Developer Console (https://console.developers.google.com).
+.IP \[bu] 2
+You must have a project \- create one if you don\[aq]t.
+.IP \[bu] 2
+Then go to "IAM & admin" \-> "Service Accounts".
+.IP \[bu] 2
+Use the "Create Credentials" button.
+Fill in "Service account name" with something that identifies your
+client.
+"Role" can be empty.
+.IP \[bu] 2
+Tick "Furnish a new private key" \- select "Key type JSON".
+.IP \[bu] 2
+Tick "Enable G Suite Domain\-wide Delegation".
+This option makes "impersonation" possible, as documented here:
+Delegating domain\-wide authority to the service
+account (https://developers.google.com/identity/protocols/OAuth2ServiceAccount#delegatingauthority)
+.IP \[bu] 2
+These credentials are what rclone will use for authentication.
+If you ever need to remove access, press the "Delete service account
+key" button.
+.SS 2. Allowing API access to example.com Google Drive
+.IP \[bu] 2
+Go to example.com\[aq]s admin console
+.IP \[bu] 2
+Go into "Security" (or use the search bar)
+.IP \[bu] 2
+Select "Show more" and then "Advanced settings"
+.IP \[bu] 2
+Select "Manage API client access" in the "Authentication" section
+.IP \[bu] 2
+In the "Client Name" field enter the service account\[aq]s "Client ID"
+\- this can be found in the Developer Console under "IAM & Admin" \->
+"Service Accounts", then "View Client ID" for the newly created service
+account.
+It is a ~21 character numerical string.
+.IP \[bu] 2
+In the next field, "One or More API Scopes", enter
+\f[C]https://www.googleapis.com/auth/drive\f[] to grant access to Google
+Drive specifically.
+.SS 3. Configure rclone, assuming a new install
+.IP
+.nf
+\f[C]
+rclone\ config
+
+n/s/q>\ n\ \ \ \ \ \ \ \ \ #\ New
+name>gdrive\ \ \ \ \ \ #\ Gdrive\ is\ an\ example\ name
+Storage>\ \ \ \ \ \ \ \ \ #\ Select\ the\ number\ shown\ for\ Google\ Drive
+client_id>\ \ \ \ \ \ \ #\ Can\ be\ left\ blank
+client_secret>\ \ \ #\ Can\ be\ left\ blank
+scope>\ \ \ \ \ \ \ \ \ \ \ #\ Select\ your\ scope,\ 1\ for\ example
+root_folder_id>\ \ #\ Can\ be\ left\ blank
+service_account_file>\ /home/foo/myJSONfile.json\ #\ This\ is\ where\ the\ JSON\ file\ goes!
+y/n>\ \ \ \ \ \ \ \ \ \ \ \ \ #\ Auto\ config,\ y
+\f[]
+.fi
+.SS 4. Verify that it\[aq]s working
+.IP \[bu] 2
+\f[C]rclone\ \-v\ \-\-drive\-impersonate\ foo\@example.com\ lsf\ gdrive:backup\f[]
+.IP \[bu] 2
+The arguments do:
+.RS 2
+.IP \[bu] 2
+\f[C]\-v\f[] \- verbose logging
+.IP \[bu] 2
+\f[C]\-\-drive\-impersonate\ foo\@example.com\f[] \- this is what does
+the magic, pretending to be user foo.
+.IP \[bu] 2
+\f[C]lsf\f[] \- list files in a parsing friendly way
+.IP \[bu] 2
+\f[C]gdrive:backup\f[] \- use the remote called gdrive, work in the
+folder named backup.
+.RE
.SS Team drives
.PP
If you want to configure the remote to point to a Google Team Drive then
@@ -8394,6 +10005,10 @@ T}@T{
A ZIP file of HTML, Images CSS
T}
.TE
+.SS \-\-drive\-impersonate user
+.PP
+When using a service account, this instructs rclone to impersonate the
+user passed in.
.SS \-\-drive\-list\-chunk int
.PP
Size of listing chunk 100\-1000.
@@ -8401,7 +10016,12 @@ Size of listing chunk 100\-1000.
(default 1000)
.SS \-\-drive\-shared\-with\-me
.PP
-Only show files that are shared with me
+Instructs rclone to operate on your "Shared with me" folder (where
+Google Drive lets you access the files and folders others have shared
+with you).
+.PP
+This works with the "list" commands (lsd, lsl, etc), the "copy"
+commands (copy, sync, etc), and all other commands too.
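+.PP
+For example, assuming a remote named \f[C]gdrive\f[], this lists the
+top level of your "Shared with me" folder:
+.IP
+.nf
+\f[C]
+rclone\ \-\-drive\-shared\-with\-me\ lsd\ gdrive:
+\f[]
+.fi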
.SS \-\-drive\-skip\-gdocs
.PP
Skip google documents in all listings.
@@ -8420,6 +10040,27 @@ Controls whether files are sent to the trash or deleted permanently.
Defaults to true, namely sending files to the trash.
Use \f[C]\-\-drive\-use\-trash=false\f[] to delete files permanently
instead.
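+.PP
+For example, assuming a remote named \f[C]gdrive\f[], this permanently
+deletes large files under a path rather than sending them to the trash:
+.IP
+.nf
+\f[C]
+rclone\ \-\-drive\-use\-trash=false\ \-\-min\-size\ 100M\ delete\ gdrive:path
+\f[]
+.fi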
+.SS \-\-drive\-use\-created\-date
+.PP
+Use the file creation date in place of the modification date.
+Defaults to false.
+.PP
+Useful when downloading data and you want the creation date used in
+place of the last modified date.
+.PP
+\f[B]WARNING\f[]: This flag may have some unexpected consequences.
+.PP
+When uploading to your drive, all files will be overwritten unless they
+haven\[aq]t been modified since their creation, and the inverse will
+occur while downloading.
+This side effect can be avoided by using the \f[C]\-\-checksum\f[]
+flag.
+.PP
+This feature was implemented to retain the capture date of photos as
+recorded by Google Photos.
+You will first need to check the "Create a Google Photos folder" option
+in your Google Drive settings.
+You can then copy or move the photos locally, using the date the image
+was taken (created) as the modification date.
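+.PP
+For example, assuming a remote named \f[C]gdrive\f[], this downloads
+photos with their creation dates used as modification times, while
+\f[C]\-\-checksum\f[] avoids the re\-transfer side effect:
+.IP
+.nf
+\f[C]
+rclone\ \-\-drive\-use\-created\-date\ \-\-checksum\ copy\ gdrive:photos\ /local/photos
+\f[]
+.fi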
.SS Limitations
.PP
Drive has quite a lot of rate limiting.
@@ -8433,6 +10074,23 @@ If you see User rate limit exceeded errors, wait at least 24 hours and
retry.
You can disable server side copies with \f[C]\-\-disable\ copy\f[] to
download and upload the files if you prefer.
+.SS Limitations of Google Docs
+.PP
+Google docs will appear as size \-1 in \f[C]rclone\ ls\f[] and as size 0
+in anything which uses the VFS layer, eg \f[C]rclone\ mount\f[],
+\f[C]rclone\ serve\f[].
+.PP
+This is because rclone can\[aq]t find out the size of the Google docs
+without downloading them.
+.PP
+Google docs will transfer correctly with \f[C]rclone\ sync\f[],
+\f[C]rclone\ copy\f[] etc as rclone knows to ignore the size when doing
+the transfer.
+.PP
+However an unfortunate consequence of this is that you can\[aq]t
+download Google docs using \f[C]rclone\ mount\f[] \- you will get a 0
+sized file.
+If you try again the doc may gain its correct size and be downloadable.
.SS Duplicated files
.PP
Sometimes, for no reason I\[aq]ve been able to track down, drive will
@@ -8448,26 +10106,9 @@ Note that this isn\[aq]t just a problem with rclone, even Google Photos
on Android duplicates files on drive sometimes.
.SS Rclone appears to be re\-copying files it shouldn\[aq]t
.PP
-There are two possible reasons for rclone to recopy files which
-haven\[aq]t changed to Google Drive.
-.PP
-The first is the duplicated file issue above \- run
+The most likely cause of this is the duplicated file issue above \- run
\f[C]rclone\ dedupe\f[] and check your logs for duplicate object or
directory messages.
-.PP
-The second is that sometimes Google reports different sizes for the
-Google Docs exports which will cause rclone to re\-download Google Docs
-for no apparent reason.
-\f[C]\-\-ignore\-size\f[] is a not very satisfactory work\-around for
-this if it is causing you a lot of problems.
-.SS Google docs downloads sometimes fail with "Failed to copy: read X
-bytes expecting Y"
-.PP
-This is the same problem as above.
-Google reports the google doc is one size, but rclone downloads a
-different size.
-Work\-around with the \f[C]\-\-ignore\-size\f[] flag or wait for rclone
-to retry the download which it will.
.SS Making your own client_id
.PP
When you use rclone with Google drive in its default configuration you
@@ -9157,10 +10798,6 @@ Here are the command line options specific to this cloud storage system.
Above this size files will be chunked \- must be multiple of 320k.
The default is 10MB.
Note that the chunks will be buffered into memory.
-.SS \-\-onedrive\-upload\-cutoff=SIZE
-.PP
-Cutoff for switching to chunked upload \- must be <= 100MB.
-The default is 10MB.
.SS Limitations
.PP
Note that OneDrive is case insensitive so you can\[aq]t have a file
@@ -9176,6 +10813,50 @@ For example if a file has a \f[C]?\f[] in it will be mapped to
\f[C]?\f[] instead.
.PP
The largest allowed file size is 10GiB (10,737,418,240 bytes).
+.SS Versioning issue
+.PP
+Every change in OneDrive causes the service to create a new version.
+This counts against a user\[aq]s quota.
+.PD 0
+.P
+.PD
+For example changing the modification time of a file creates a second
+version, so the file is using twice the space.
+.PP
+\f[C]copy\f[] is the only rclone command affected by this, as rclone
+copies the file and then afterwards sets the modification time to match
+the source file.
+.PP
+User Weropol (https://github.com/Weropol) has found a method to disable
+versioning on OneDrive:
+.IP "1." 3
+Open the settings menu by clicking on the gear symbol at the top of the
+OneDrive Business page.
+.IP "2." 3
+Click Site settings.
+.IP "3." 3
+Once on the Site settings page, navigate to Site Administration > Site
+libraries and lists.
+.IP "4." 3
+Click Customize "Documents".
+.IP "5." 3
+Click General Settings > Versioning Settings.
+.IP "6." 3
+Under Document Version History select the option No versioning.
+.PD 0
+.P
+.PD
+Note: This will disable the creation of new file versions, but will not
+remove any previous versions.
+Your documents are safe.
+.IP "7." 3
+Apply the changes by clicking OK.
+.IP "8." 3
+Use rclone to upload or modify files.
+(I also use the \-\-no\-update\-modtime flag)
+.IP "9." 3
+Restore the versioning settings after using rclone.
+(Optional)
.SS QingStor
.PP
Paths are specified as \f[C]remote:bucket\f[] (or \f[C]remote:\f[] for
@@ -9963,6 +11644,9 @@ For instance \f[C]/home/$USER/.ssh/id_rsa\f[].
.PP
If you don\[aq]t specify \f[C]pass\f[] or \f[C]key_file\f[] then rclone
will attempt to contact an ssh\-agent.
+.PP
+If you set the \f[C]\-\-sftp\-ask\-password\f[] option, rclone will
+prompt for a password when one is needed and none has been configured.
.SS ssh\-agent on macOS
.PP
Note that there seem to be various problems with using an ssh\-agent on
@@ -9985,16 +11669,35 @@ eval\ `ssh\-agent\ \-k`
.fi
.PP
These commands can be used in scripts of course.
+.SS Specific options
+.PP
+Here are the command line options specific to this remote.
+.SS \-\-sftp\-ask\-password
+.PP
+Ask for the SFTP password if one is needed and none has been
+configured.
.SS Modified time
.PP
Modified times are stored on the server to 1 second precision.
.PP
Modified times are used in syncing and are fully supported.
+.PP
+Some SFTP servers disable setting/modifying the file modification time
+after upload (for example, certain configurations of ProFTPd with
+mod_sftp).
+If you are using one of these servers, you can set the option
+\f[C]set_modtime\ =\ false\f[] in your rclone backend configuration to
+disable this behaviour.
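+.PP
+For example, a backend section in your rclone configuration file (the
+remote name and host below are illustrative) might look like:
+.IP
+.nf
+\f[C]
+[myserver]
+type\ =\ sftp
+host\ =\ example.com
+user\ =\ foo
+set_modtime\ =\ false
+\f[]
+.fi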
.SS Limitations
.PP
SFTP supports checksums if the same login has shell access and
\f[C]md5sum\f[] or \f[C]sha1sum\f[] as well as \f[C]echo\f[] are in the
remote\[aq]s PATH.
+This remote check can be disabled by setting the configuration option
+\f[C]disable_hashcheck\f[].
+This may be required if you\[aq]re connecting to SFTP servers which
+are not under your control, and on which the execution of remote
+commands is prohibited.
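+.PP
+For example, a backend configuration with the hash check disabled (the
+remote name and host below are illustrative):
+.IP
+.nf
+\f[C]
+[restricted]
+type\ =\ sftp
+host\ =\ sftp.example.com
+user\ =\ foo
+disable_hashcheck\ =\ true
+\f[]
+.fi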
.PP
The only ssh agent supported under Windows is Putty\[aq]s pageant.
.PP
@@ -10543,6 +12246,366 @@ This flag disables warning messages on skipped symlinks or junction
points, as you explicitly acknowledge that they should be skipped.
.SS Changelog
.IP \[bu] 2
+v1.40 \- 2018\-03\-19
+.RS 2
+.IP \[bu] 2
+New backends
+.IP \[bu] 2
+Alias backend to create aliases for existing remote names (Fabian
+Möller)
+.IP \[bu] 2
+New commands
+.IP \[bu] 2
+\f[C]lsf\f[]: list for parsing purposes (Jakub Tasiemski)
+.RS 2
+.IP \[bu] 2
+by default this is a simple non recursive list of files and directories
+.IP \[bu] 2
+it can be configured to add more info in an easy to parse way
+.RE
+.IP \[bu] 2
+\f[C]serve\ restic\f[]: for serving a remote as a Restic REST endpoint
+.RS 2
+.IP \[bu] 2
+This enables restic to use any backends that rclone can access
+.IP \[bu] 2
+Thanks Alexander Neumann for help, patches and review
+.RE
+.IP \[bu] 2
+\f[C]rc\f[]: enable the remote control of a running rclone
+.RS 2
+.IP \[bu] 2
+The running rclone must be started with \-\-rc and related flags.
+.IP \[bu] 2
+Currently there is support for bwlimit, and flushing for mount and
+cache.
+.RE
+.IP \[bu] 2
+New Features
+.IP \[bu] 2
+\f[C]\-\-max\-delete\f[] flag to add a delete threshold (Bjørn Erik
+Pedersen)
+.IP \[bu] 2
+All backends now support RangeOption for ranged Open
+.RS 2
+.IP \[bu] 2
+\f[C]cat\f[]: Use RangeOption for limited fetches to make more efficient
+.IP \[bu] 2
+\f[C]cryptcheck\f[]: make reading of nonce more efficient with
+RangeOption
+.RE
+.IP \[bu] 2
+serve http/webdav/restic
+.RS 2
+.IP \[bu] 2
+support SSL/TLS
+.IP \[bu] 2
+add \f[C]\-\-user\f[] \f[C]\-\-pass\f[] and \f[C]\-\-htpasswd\f[] for
+authentication
+.RE
+.IP \[bu] 2
+\f[C]copy\f[]/\f[C]move\f[]: detect file size change during copy/move
+and abort transfer (ishuah)
+.IP \[bu] 2
+\f[C]cryptdecode\f[]: added option to return encrypted file names.
+(ishuah)
+.IP \[bu] 2
+\f[C]lsjson\f[]: add \f[C]\-\-encrypted\f[] to show encrypted name
+(Jakub Tasiemski)
+.IP \[bu] 2
+Add \f[C]\-\-stats\-file\-name\-length\f[] to specify the printed file
+name length for stats (Will Gunn)
+.IP \[bu] 2
+Compile
+.IP \[bu] 2
+Code base was shuffled and factored
+.RS 2
+.IP \[bu] 2
+backends moved into a backend directory
+.IP \[bu] 2
+large packages split up
+.IP \[bu] 2
+See the CONTRIBUTING.md doc for info as to what lives where now
+.RE
+.IP \[bu] 2
+Update to using go1.10 as the default go version
+.IP \[bu] 2
+Implement daily full integration
+tests (https://pub.rclone.org/integration-tests/)
+.IP \[bu] 2
+Release
+.IP \[bu] 2
+Include a source tarball and sign it and the binaries
+.IP \[bu] 2
+Sign the git tags as part of the release process
+.IP \[bu] 2
+Add .deb and .rpm packages as part of the build
+.IP \[bu] 2
+Make a beta release for all branches on the main repo (but not pull
+requests)
+.IP \[bu] 2
+Bug Fixes
+.IP \[bu] 2
+config: fixes errors on non existing config by loading config file only
+on first access
+.IP \[bu] 2
+config: retry saving the config after failure (Mateusz)
+.IP \[bu] 2
+sync: when using \f[C]\-\-backup\-dir\f[] don\[aq]t delete files if we
+can\[aq]t set their modtime
+.RS 2
+.IP \[bu] 2
+this fixes odd behaviour with Dropbox and \f[C]\-\-backup\-dir\f[]
+.RE
+.IP \[bu] 2
+fshttp: fix idle timeouts for HTTP connections
+.IP \[bu] 2
+\f[C]serve\ http\f[]: fix serving files with : in \- fixes
+.IP \[bu] 2
+Fix \f[C]\-\-exclude\-if\-present\f[] to ignore directories which it
+doesn\[aq]t have permission for (Iakov Davydov)
+.IP \[bu] 2
+Make accounting work properly with crypt and b2
+.IP \[bu] 2
+remove \f[C]\-\-no\-traverse\f[] flag because it is obsolete
+.IP \[bu] 2
+Mount
+.IP \[bu] 2
+Add \f[C]\-\-attr\-timeout\f[] flag to control attribute caching in
+kernel
+.RS 2
+.IP \[bu] 2
+this now defaults to 0 which is correct but less efficient
+.IP \[bu] 2
+see the mount docs (/commands/rclone_mount/#attribute-caching) for more
+info
+.RE
+.IP \[bu] 2
+Add \f[C]\-\-daemon\f[] flag to allow mount to run in the background
+(ishuah)
+.IP \[bu] 2
+Fix: Return ENOSYS rather than EIO on attempted link
+.RS 2
+.IP \[bu] 2
+This fixes FileZilla accessing an rclone mount served over sftp.
+.RE
+.IP \[bu] 2
+Fix setting modtime twice
+.IP \[bu] 2
+Mount tests now run on CI for Linux (mount & cmount)/Mac/Windows
+.IP \[bu] 2
+Many bugs fixed in the VFS layer \- see below
+.IP \[bu] 2
+VFS
+.IP \[bu] 2
+Many fixes for \f[C]\-\-vfs\-cache\-mode\f[] writes and above
+.RS 2
+.IP \[bu] 2
+Update cached copy if we know it has changed (fixes stale data)
+.IP \[bu] 2
+Clean path names before using them in the cache
+.IP \[bu] 2
+Disable cache cleaner if \f[C]\-\-vfs\-cache\-poll\-interval=0\f[]
+.IP \[bu] 2
+Fill and clean the cache immediately on startup
+.RE
+.IP \[bu] 2
+Fix Windows opening every file when it stats the file
+.IP \[bu] 2
+Fix applying modtime for an open Write Handle
+.IP \[bu] 2
+Fix creation of files when truncating
+.IP \[bu] 2
+Write 0 bytes when flushing unwritten handles to avoid race conditions
+in FUSE
+.IP \[bu] 2
+Downgrade "poll\-interval is not supported" message to Info
+.IP \[bu] 2
+Make OpenFile and friends return EINVAL if O_RDONLY and O_TRUNC
+.IP \[bu] 2
+Local
+.IP \[bu] 2
+Downgrade "invalid cross\-device link: trying copy" to debug
+.IP \[bu] 2
+Make DirMove return fs.ErrorCantDirMove to allow fallback to Copy for
+cross device
+.IP \[bu] 2
+Fix race conditions updating the hashes
+.IP \[bu] 2
+Cache
+.IP \[bu] 2
+Add support for polling \- cache will update when remote changes on
+supported backends
+.IP \[bu] 2
+Reduce log level for Plex api
+.IP \[bu] 2
+Fix dir cache issue
+.IP \[bu] 2
+Implement \f[C]\-\-cache\-db\-wait\-time\f[] flag
+.IP \[bu] 2
+Improve efficiency with RangeOption and RangeSeek
+.IP \[bu] 2
+Fix dirmove with temp fs enabled
+.IP \[bu] 2
+Notify vfs when using temp fs
+.IP \[bu] 2
+Offline uploading
+.IP \[bu] 2
+Remote control support for path flushing
+.IP \[bu] 2
+Amazon cloud drive
+.IP \[bu] 2
+Rclone no longer has any working keys \- disable integration tests
+.IP \[bu] 2
+Implement DirChangeNotify to notify cache/vfs/mount of changes
+.IP \[bu] 2
+Azureblob
+.IP \[bu] 2
+Don\[aq]t check for bucket/container presence if listing was OK
+.RS 2
+.IP \[bu] 2
+this makes rclone do one less request per invocation
+.RE
+.IP \[bu] 2
+Improve accounting for chunked uploads
+.IP \[bu] 2
+Backblaze B2
+.IP \[bu] 2
+Don\[aq]t check for bucket/container presence if listing was OK
+.RS 2
+.IP \[bu] 2
+this makes rclone do one less request per invocation
+.RE
+.IP \[bu] 2
+Box
+.IP \[bu] 2
+Improve accounting for chunked uploads
+.IP \[bu] 2
+Dropbox
+.IP \[bu] 2
+Fix custom oauth client parameters
+.IP \[bu] 2
+Google Cloud Storage
+.IP \[bu] 2
+Don\[aq]t check for bucket/container presence if listing was OK
+.RS 2
+.IP \[bu] 2
+this makes rclone do one less request per invocation
+.RE
+.IP \[bu] 2
+Google Drive
+.IP \[bu] 2
+Migrate to api v3 (Fabian Möller)
+.IP \[bu] 2
+Add scope configuration and root folder selection
+.IP \[bu] 2
+Add \f[C]\-\-drive\-impersonate\f[] for service accounts
+.RS 2
+.IP \[bu] 2
+thanks to everyone who tested, explored and contributed docs
+.RE
+.IP \[bu] 2
+Add \f[C]\-\-drive\-use\-created\-date\f[] to use created date as
+modified date (nbuchanan)
+.IP \[bu] 2
+Request the export formats only when required
+.RS 2
+.IP \[bu] 2
+This makes rclone quicker when there are no google docs
+.RE
+.IP \[bu] 2
+Fix finding paths with latin1 chars (a workaround for a drive bug)
+.IP \[bu] 2
+Fix copying of a single Google doc file
+.IP \[bu] 2
+Fix \f[C]\-\-drive\-auth\-owner\-only\f[] to look in all directories
+.IP \[bu] 2
+HTTP
+.IP \[bu] 2
+Fix handling of directories with & in
+.IP \[bu] 2
+Onedrive
+.IP \[bu] 2
+Removed upload cutoff and always do session uploads
+.RS 2
+.IP \[bu] 2
+this stops the creation of multiple versions on business onedrive
+.RE
+.IP \[bu] 2
+Overwrite object size value with real size when reading file.
+(Victor)
+.RS 2
+.IP \[bu] 2
+this fixes oddities when onedrive misreports the size of images
+.RE
+.IP \[bu] 2
+Pcloud
+.IP \[bu] 2
+Remove unused chunked upload flag and code
+.IP \[bu] 2
+Qingstor
+.IP \[bu] 2
+Don\[aq]t check for bucket/container presence if listing was OK
+.RS 2
+.IP \[bu] 2
+this makes rclone do one less request per invocation
+.RE
+.IP \[bu] 2
+S3
+.IP \[bu] 2
+Support hashes for multipart files (Chris Redekop)
+.IP \[bu] 2
+Initial support for IBM COS (S3) (Giri Badanahatti)
+.IP \[bu] 2
+Update docs to discourage use of v2 auth with CEPH and others
+.IP \[bu] 2
+Don\[aq]t check for bucket/container presence if listing was OK
+.RS 2
+.IP \[bu] 2
+this makes rclone do one less request per invocation
+.RE
+.IP \[bu] 2
+Fix server side copy and set modtime on files with + in
+.IP \[bu] 2
+SFTP
+.IP \[bu] 2
+Add option to disable remote hash check command execution (Jon Fautley)
+.IP \[bu] 2
+Add \f[C]\-\-sftp\-ask\-password\f[] flag to prompt for password when
+needed (Leo R.
+Lundgren)
+.IP \[bu] 2
+Add \f[C]set_modtime\f[] configuration option
+.IP \[bu] 2
+Fix following of symlinks
+.IP \[bu] 2
+Fix reading config file outside of Fs setup
+.IP \[bu] 2
+Fix reading $USER in username fallback not $HOME
+.IP \[bu] 2
+Fix running under crontab \- Use correct OS way of reading username
+.IP \[bu] 2
+Swift
+.IP \[bu] 2
+Fix refresh of authentication token
+.RS 2
+.IP \[bu] 2
+in v1.39 a bug was introduced which ignored new tokens \- this fixes it
+.RE
+.IP \[bu] 2
+Fix extra HEAD transaction when uploading a new file
+.IP \[bu] 2
+Don\[aq]t check for bucket/container presence if listing was OK
+.RS 2
+.IP \[bu] 2
+this makes rclone do one less request per invocation
+.RE
+.IP \[bu] 2
+Webdav
+.IP \[bu] 2
+Add new time formats to support mydrive.ch and others
+.RE
+.IP \[bu] 2
v1.39 \- 2017\-12\-23
.RS 2
.IP \[bu] 2
@@ -12682,6 +14745,11 @@ ntpclient\ \-s\ \-h\ pool.ntp.org
\f[]
.fi
.PP
+The two environment variables \f[C]SSL_CERT_FILE\f[] and
+\f[C]SSL_CERT_DIR\f[], mentioned in the x509
+package (https://godoc.org/crypto/x509), provide an additional way to
+supply the SSL root certificates.
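+.PP
+For example (the certificate bundle path below is illustrative):
+.IP
+.nf
+\f[C]
+SSL_CERT_FILE=/path/to/ca\-bundle.crt\ rclone\ lsd\ remote:
+\f[]
+.fi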
+.PP
Note that you may need to add the \f[C]\-\-insecure\f[] option to the
\f[C]curl\f[] command line if it doesn\[aq]t work without.
.IP
@@ -12722,6 +14790,12 @@ If you are using \f[C]systemd\-resolved\f[] (default on Arch Linux),
ensure it is at version 233 or higher.
Previous releases contain a bug which causes not all domains to be
resolved properly.
+.PP
+Additionally, the \f[C]GODEBUG=netdns=\f[] environment variable can be
+used to influence the Go resolver decision, which can also resolve
+certain issues with DNS resolution.
+See the name resolution section in the go
+docs (https://golang.org/pkg/net/#hdr-Name_Resolution).
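+.PP
+For example, to force the pure Go resolver for a single invocation:
+.IP
+.nf
+\f[C]
+GODEBUG=netdns=go\ rclone\ lsd\ remote:
+\f[]
+.fi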
.SS License
.PP
This is free software under the terms of the MIT license (check the
@@ -12897,7 +14971,7 @@ Sjur Fredriksen
.IP \[bu] 2
Ruwbin
.IP \[bu] 2
-Fabian Möller
+Fabian Möller
.IP \[bu] 2
Edward Q.
Bridges
@@ -12918,7 +14992,7 @@ Zhiming Wang
.IP \[bu] 2
Andy Pilate
.IP \[bu] 2
-Oliver Heyme
+Oliver Heyme
.IP \[bu] 2
wuyu
.IP \[bu] 2
@@ -12966,9 +15040,7 @@ Ernest Borowski
.IP \[bu] 2
Remus Bunduc
.IP \[bu] 2
-Iakov Davydov
-.IP \[bu] 2
-Fabian Möller
+Iakov Davydov
.IP \[bu] 2
Jakub Tasiemski
.IP \[bu] 2
@@ -12987,6 +15059,43 @@ Jon Fautley
lewapm <32110057+lewapm@users.noreply.github.com>
.IP \[bu] 2
Yassine Imounachen
+.IP \[bu] 2
+Chris Redekop
+.IP \[bu] 2
+Jon Fautley
+.IP \[bu] 2
+Will Gunn
+.IP \[bu] 2
+Lucas Bremgartner
+.IP \[bu] 2
+Jody Frankowski
+.IP \[bu] 2
+Andreas Roussos
+.IP \[bu] 2
+nbuchanan
+.IP \[bu] 2
+Durval Menezes
+.IP \[bu] 2
+Victor
+.IP \[bu] 2
+Mateusz
+.IP \[bu] 2
+Daniel Loader
+.IP \[bu] 2
+David0rk
+.IP \[bu] 2
+Alexander Neumann
+.IP \[bu] 2
+Giri Badanahatti
+.IP \[bu] 2
+Leo R.
+Lundgren
+.IP \[bu] 2
+wolfv
+.IP \[bu] 2
+Dave Pedu
+.IP \[bu] 2
+Stefan Lindblom
.SH Contact the rclone project
.SS Forum
.PP