diff --git a/MANUAL.html b/MANUAL.html index 231384eb1..f07d0007b 100644 --- a/MANUAL.html +++ b/MANUAL.html @@ -12,10 +12,10 @@
Rclone is a command line program to sync files and directories to and from
Features
Links
Rclone is a Go program and comes as a single binary file.
Run rclone config to set up. See rclone config docs for more details.
See below for some expanded Linux / macOS instructions.
See the Usage section of the docs for how to use rclone, or run rclone -h.
Fetch and unpack
curl -O https://downloads.rclone.org/rclone-current-linux-amd64.zip
unzip rclone-current-linux-amd64.zip
cd rclone-*-linux-amd64
Copy binary file
sudo chmod 755 /usr/bin/rclone
sudo mkdir -p /usr/local/share/man/man1
sudo cp rclone.1 /usr/local/share/man/man1/
sudo mandb
Run rclone config to set up. See rclone config docs for more details.
rclone config
Download the latest version of rclone.
cd && curl -O https://downloads.rclone.org/rclone-current-osx-amd64.zip
Unzip the download and cd to the extracted folder.
unzip -a rclone-current-osx-amd64.zip && cd rclone-*-osx-amd64
Move rclone to your $PATH. You will be prompted for your password.
sudo mv rclone /usr/local/bin/
Remove the leftover files.
cd .. && rm -rf rclone-*-osx-amd64 rclone-current-osx-amd64.zip
Run rclone config to set up. See rclone config docs for more details.
rclone config
Make sure you have at least Go 1.6 installed. Make sure your GOPATH is set, then:
go get -u -v github.com/ncw/rclone
and this will build the binary in $GOPATH/bin
. If you have built rclone before then you will want to update its dependencies first with this
go get -u -v github.com/ncw/rclone/...
Run rclone config to set up. See rclone config docs for more details.
See below for how to install snapd if it isn't already installed.
Install the snap meta layer.
sudo zypper addrepo https://download.opensuse.org/repositories/system:/snappy/openSUSE_Leap_42.2/ snappy
sudo zypper install snapd
OpenWrt
Enable the snap-openwrt feed.
rclone config
See the following for detailed instructions for
Rclone syncs a directory tree from one storage system to another.
rclone --dry-run --min-size 100M delete remote:path
Checks the files in the source and destination match. It compares sizes and hashes (MD5 or SHA1) and logs a report of files which don't match. It doesn't alter the source or destination.
If you supply the --size-only flag, it will only compare the sizes not the hashes as well. Use this for a quick check.
If you supply the --download flag, it will download the data from both remotes and check them against each other on the fly. This can be useful for remotes that don't support hashes or if you really want to check all the data.
rclone check source:path dest:path [flags]
--download Check by downloading rather than with hash.
rclone dedupe --dedupe-mode rename "drive:Google Photos"
Or
rclone dedupe rename "drive:Google Photos"
rclone dedupe [mode] remote:path [flags]
--dedupe-mode string Dedupe mode interactive|skip|first|newest|oldest|rename. (default "interactive")
Or like this to output any .txt files in dir or subdirectories.
rclone --include "*.txt" cat remote:path/to/dir
Use the --head flag to print characters only at the start, --tail for the end and --offset and --count to print a section in the middle. Note that if offset is negative it will count from the end, so --offset -1 --count 1 is equivalent to --tail 1.
rclone cat remote:path [flags]
--count int Only print N characters. (default -1)
--discard Discard the output instead of printing.
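The --offset/--count semantics mirror ordinary byte slicing; as a quick illustration (using standard shell tools here, not rclone itself), printing only the last byte of a stream is the same idea as --tail 1:

```shell
# Print only the final byte of a stream - the same idea as --tail 1.
printf 'hello' | tail -c 1
# prints: o
```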
rclone cryptcheck remote:path encryptedremote:path
After it has run it will log the status of the encryptedremote:.
rclone cryptcheck remote:path cryptedremote:path
rclone dbhashsum
Produces a Dropbox hash file for all the objects in the path.
Synopsis
Produces a Dropbox hash file for all the objects in the path. The hashes are calculated according to Dropbox content hash rules. The output is in the same format as md5sum and sha1sum.
rclone dbhashsum remote:path
rclone genautocomplete
Output bash completion script for rclone.
Synopsis
Generates a bash shell autocompletion script for rclone.
This writes to /etc/bash_completion.d/rclone by default so will probably need to be run with sudo or as root, eg
sudo rclone genautocomplete
rclone genautocomplete [output_file]
rclone gendocs
Output markdown docs for rclone to the directory supplied.
Synopsis
This produces markdown docs for the rclone commands to the directory supplied. These are in a format suitable for hugo to render into the rclone.org website.
rclone gendocs output_directory [flags]
Options
  -h, --help   help for gendocs
rclone listremotes
List all the remotes in the config file.
Synopsis
rclone listremotes lists all the available remotes from the config file.
When used with the -l flag it lists the types too.
rclone listremotes [flags]
Options
-l, --long Show the type as well as names.
rclone lsjson
List directories and objects in the path in JSON format.
Synopsis
List directories and objects in the path in JSON format.
The output is an array of Items, where each Item looks like this
{
  "Hashes" : {
    "SHA-1" : "f572d396fae9206628714fb2ce00f72e94f2258f",
    "MD5" : "b1946ac92492d2347c6235b4d2611184",
    "DropboxHash" : "ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc"
  },
  "IsDir" : false,
  "ModTime" : "2017-05-31T16:15:57.034468261+01:00",
  "Name" : "file.txt",
  "Path" : "full/path/goes/here/file.txt",
  "Size" : 6
}
If --hash is not specified the Hashes property won't be emitted.
If --no-modtime is specified then ModTime will be blank.
The time is in RFC3339 format with nanosecond precision.
The whole output can be processed as a JSON blob, or alternatively it can be processed line by line as each item is written on its own line.
rclone lsjson remote:path [flags]
Options
      --hash         Include hashes in the output (may take longer).
      --no-modtime   Don't read the modification time (can speed things up).
  -R, --recursive    Recurse into the listing.
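Because each item is written on its own line, the output can be consumed by line-oriented tools. The record below is a hand-written sample in the format shown above, not live rclone lsjson output; in practice a JSON-aware tool such as jq is preferable to sed:

```shell
# Pull Name and Size out of one lsjson-style record.
line='{ "IsDir" : false, "Name" : "file.txt", "Size" : 6 }'
name=$(printf '%s' "$line" | sed -n 's/.*"Name" : "\([^"]*\)".*/\1/p')
size=$(printf '%s' "$line" | sed -n 's/.*"Size" : \([0-9]*\).*/\1/p')
echo "$name is $size bytes"
# prints: file.txt is 6 bytes
```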
rclone mount
Mount the remote as a mountpoint. EXPERIMENTAL
Synopsis
rclone mount allows Linux, FreeBSD and macOS to mount any of Rclone's cloud storage systems as a file system with FUSE.
This is EXPERIMENTAL - use with care.
First set up your remote using rclone config. Check it works with rclone ls etc.
Start the mount like this
rclone mount remote:path/to/files /path/to/local/mount
When the program ends, either via Ctrl+C or receiving a SIGINT or SIGTERM signal, the mount is automatically stopped.
The umount operation can fail, for example when the mountpoint is busy. When that happens, it is the user's responsibility to stop the mount manually with
# Linux
fusermount -u /path/to/local/mount
# OS X
umount /path/to/local/mount
Limitations
This can only write files sequentially, it can only seek when reading. This means that many applications won't work with their files on an rclone mount.
The bucket based remotes (eg Swift, S3, Google Compute Storage, B2, Hubic) won't work from the root - you will need to specify a bucket, or a path within the bucket. So swift: won't work whereas swift:bucket will as will swift:bucket/path. None of these support the concept of directories, so empty directories will have a tendency to disappear once they fall out of the directory cache.
File systems expect things to be 100% reliable, whereas cloud storage systems are a long way from 100% reliable. The rclone sync/copy commands cope with this with lots of retries. However rclone mount can't use retries in the same way without making local copies of the uploads. This might happen in the future, but for the moment rclone mount won't do that, so will be less reliable than the rclone command.
Filters
Note that all the rclone filters can be used to select a subset of the files to be visible in the mount.
Directory Cache
Using the --dir-cache-time flag, you can set how long a directory should be considered up to date and not refreshed from the backend. Changes made locally in the mount may appear immediately or invalidate the cache. However, changes done on the remote will only be picked up once the cache expires.
Alternatively, you can send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:
kill -SIGHUP $(pidof rclone)
Bugs
- All the remotes should work for read, but some may not for write
- Or put in an upload cache to cache the files on disk first
rclone mount remote:path /path/to/mountpoint [flags]
Options
--allow-non-empty Allow mounting over a non-empty directory.
--allow-other Allow access to other users.
--allow-root Allow access to root user.
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
--gid uint32 Override the gid field set by the filesystem. (default 502)
--max-read-ahead int The number of bytes that can be prefetched for sequential reads. (default 128k)
  --no-checksum              Don't compare checksums on up/download.
  --no-modtime               Don't read/write the modification time (can speed things up).
  --no-seek                  Don't allow seeking in files.
  --poll-interval duration   Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
--read-only Mount read-only.
--uid uint32 Override the uid field set by the filesystem. (default 502)
--umask int Override the permission bits set by the filesystem. (default 2)
--write-back-cache Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used.
rclone moveto
Move file or directory from source to dest.
Synopsis
If source:path is a file or directory then it moves it to a file or directory named dest:path.
This can be used to rename files or upload single files to other than their existing name. If the source is a directory then it acts exactly like the move command.
So
This doesn't transfer unchanged files, testing by size and modification time or MD5SUM. src will be deleted on successful transfer.
Important: Since this can cause data loss, test first with the --dry-run flag.
rclone moveto source:path dest:path
rclone ncdu
Explore a remote with a text based user interface.
Synopsis
This displays a text based user interface allowing the navigation of a remote. It is most useful for answering the question - "What is using all my disk space?".
To make the user interface it first scans the entire remote given and builds an in memory representation. rclone ncdu can be used during this scanning phase and you will see it building up the directory structure as it goes along.
Here are the keys - press '?' to toggle the help on and off
 ↑,↓ or k,j to Move
 →,l to enter
 ←,h to return
 c toggle counts
 g toggle graph
 n,s,C sort by name,size,count
 ? to toggle help on and off
 q/ESC/c-C to quit
This is an homage to the ncdu tool but for rclone remotes. It is missing lots of features at the moment, most importantly deleting files, but is useful as it stands.
rclone ncdu remote:path
rclone obscure
Obscure password for use in the rclone.conf
Synopsis
Obscure password for use in the rclone.conf
rclone obscure password
rclone rmdirs
Remove any empty directories under the path.
Synopsis
This removes any empty directories (or directories that only contain empty directories) under the path that it finds, including the path if it has nothing in it.
This is useful for tidying up remotes that rclone has left a lot of empty directories in.
rclone rmdirs remote:path
This can be used when scripting to make aged backups efficiently, eg
rclone sync remote:current-backup remote:previous-backup
rclone sync /path/to/files remote:current-backup
Options
Rclone has a number of options to control its behaviour.
Options which use TIME use the go time parser. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
Options which use SIZE use kByte by default. However a suffix of b for bytes, k for kBytes, M for MBytes and G for GBytes may be used. These are the binary units, eg 1, 2**10, 2**20, 2**30 respectively.
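Since the suffixes are binary, the multipliers can be checked with shell arithmetic (a quick illustration, not an rclone command):

```shell
# The k, M and G multipliers as powers of two.
echo "$((1 << 10)) $((1 << 20)) $((1 << 30))"
# prints: 1024 1048576 1073741824
```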
@@ -556,7 +596,7 @@ rclone sync /path/to/files remote:current-backup
An example of a typical timetable to avoid link saturation during daytime working hours could be:
--bwlimit "08:00,512 12:00,10M 13:00,512 18:00,30M 23:00,off"
In this example, the transfer bandwidth will be set to 512kBytes/sec at 8am. At noon, it will raise to 10Mbytes/s, and drop back to 512kBytes/sec at 1pm. At 6pm, the bandwidth limit will be set to 30MBytes/s, and at 11pm it will be completely disabled (full speed). Anything between 11pm and 8am will remain unlimited.
Bandwidth limits only apply to the data transfer. They don't apply to the bandwidth of the directory listings etc.
Note that the units are Bytes/s not Bits/s. Typically connections are measured in Bits/s - to convert divide by 8. For example let's say you have a 10 Mbit/s connection and you wish rclone to use half of it - 5 Mbit/s. This is 5/8 = 0.625MByte/s so you would use a --bwlimit 0.625M parameter for rclone.
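The bits-to-bytes arithmetic can also be scripted; this sketch computes the --bwlimit value for using half of a 10 Mbit/s link (the variable name is illustrative):

```shell
# MByte/s = Mbit/s / 8; take half the link speed first.
link_mbit=10
awk -v mbit="$link_mbit" 'BEGIN { printf "--bwlimit %gM\n", mbit / 2 / 8 }'
# prints: --bwlimit 0.625M
```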
Use this sized buffer to speed up file transfers. Each --transfer will use this much memory for buffering.
Normally rclone will look at modification time and size of files to see if they are equal. If you set this flag then rclone will check the file hash and size to determine if files are equal.
This is useful when the remote doesn't support setting modified time and a more accurate sync is desired than just checking the file size.
This is very useful when transferring between remotes which store the same hash type on the object, eg Drive and Swift. For details of which remotes support which hash type see the table in the overview section.
Eg rclone --checksum sync s3:/bucket swift:/bucket would run much quicker than without the --checksum flag.
When using this flag, rclone won't update mtimes of remote files if they are incorrect as it would normally.
This option allows you to specify when files on your destination are deleted when you sync folders.
Specifying the value --delete-before will delete all files present on the destination, but not on the source before starting the transfer of any new or updated files. This uses two passes through the file systems, one for the deletions and one for the copies.
Specifying --delete-during will delete files while checking and uploading files. This is the fastest option and uses the least memory.
Specifying --delete-after (the default value) will delay deletion of files until all new/updated files have been successfully transferred. The files to be deleted are collected in the copy pass then deleted after the copy pass has completed successfully. The files to be deleted are held in memory so this mode may use more memory. This is the safest mode as it will only delete files if there have been no errors subsequent to that. If there have been errors before the deletions start then you will get the message not deleting files as there were IO errors.
When doing anything which involves a directory listing (eg sync, copy, ls - in fact nearly every command), rclone normally lists a directory and processes it before using more directory lists to process any subdirectories. This can be parallelised and works very quickly using the least amount of memory.
However some remotes have a way of listing all files beneath a directory in one (or a small number) of transactions. These tend to be the bucket based remotes (eg s3, b2, gcs, swift, hubic).
If you use the --fast-list flag then rclone will use this method for listing directories. This will have the following consequences for the listing:
rclone should always give identical results with and without --fast-list.
If you pay for transactions and can fit your entire sync listing into memory then --fast-list is recommended. If you have a very big sync to do then don't use --fast-list otherwise you will run out of memory.
If you use --fast-list on a remote which doesn't support it, then rclone will just ignore it.
This sets the IO idle timeout. If a transfer has started but then becomes idle for this long it is considered broken and disconnected.
The default is 5m. Set to 0 to disable.
This forces rclone to skip any files which exist on the destination and have a modified time that is newer than the source file.
If an existing destination file has a modification time equal (within the computed modify window precision) to the source file's, it will be updated if the sizes are different.
On remotes which don't support mod time directly the time checked will be the uploaded time. This means that if uploading to one of these remotes, rclone will skip any files which exist on the destination and have an uploaded time that is newer than the modification time of the source file.
This can be useful when transferring to a remote which doesn't support mod times directly as it is more accurate than a --size-only check and faster than using --checksum.
With -v rclone will tell you about each file that is transferred and a small number of significant events.
Then source the file when you want to use it. From the shell you would do source set-rclone-password. It will then ask you for the password and set it in the environment variable.
If you are running rclone inside a script, you might want to disable password prompts. To do that, pass the parameter --ask-password=false to rclone. This will make rclone fail instead of asking for a password if RCLONE_CONFIG_PASS doesn't contain a valid password.
These options are useful when developing or debugging rclone. There are also some more remote specific options which aren't documented here which are used for testing. These start with remote name eg --drive-test-option - see the docs for the remote in question.
The --no-traverse flag controls whether the destination file system is traversed when using the copy or move commands. --no-traverse is not compatible with sync and will be ignored if you supply it with sync.
If you are only copying a small number of files and/or have a large number of files on the destination then --no-traverse will stop rclone listing the destination and save time.
However if you are copying a large number of files, especially if you are doing a copy where lots of the files haven't changed and won't need copying then you shouldn't use --no-traverse.
It can also be used to reduce the memory usage of rclone when copying - rclone --no-traverse copy src dst won't load either the source or destination listings into memory so will use the minimum amount of memory.
For the filtering options
--max-age
--dump-filters
See the filtering section.
rclone has 4 levels of logging, Error, Notice, Info and Debug.
By default rclone logs to standard error. This means you can redirect standard error and still see the normal output of rclone commands (eg rclone ls).
If any errors occurred during the command, rclone will exit with a non-zero exit code. This allows scripts to detect when rclone operations have failed.
During the startup phase rclone will exit immediately if an error is detected in the configuration. There will always be a log message immediately before exiting.
When rclone is running it will accumulate errors as it goes along, and only exit with a non-zero exit code if (after retries) there were still failed transfers. For every error counted there will be a high priority log message (visible with -q) showing the message and which file caused the problem. A high priority message is also shown when starting a retry so the user can see that any previous error messages may not be valid after the retry. If rclone has done a retry it will log a high priority message if the retry was successful.
Rclone can be configured entirely using environment variables. These can be used to set defaults for options or config file entries.
Every option in rclone can have its default set by environment variable.
To find the name of the environment variable, first take the long option name, strip the leading --, change - to _, make upper case and prepend RCLONE_.
For example to always set --stats 5s, set the environment variable RCLONE_STATS=5s. If you set stats on the command line this will override the environment variable setting.
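The renaming rule can be applied mechanically. This hypothetical snippet (not part of rclone) derives the variable name for --dir-cache-time:

```shell
# Strip the leading "--", uppercase, map "-" to "_", prepend RCLONE_.
opt="--dir-cache-time"
var="RCLONE_$(printf '%s' "${opt#--}" | tr 'a-z-' 'A-Z_')"
echo "$var"
# prints: RCLONE_DIR_CACHE_TIME
```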
Name | Hash | ModTime | Case Insensitive | Duplicate Files | MIME Type
---|---|---|---|---|---
Google Drive | MD5 | Yes | No | Yes | R/W
Amazon S3 | MD5 | Yes | No | No | R/W
Openstack Swift | MD5 | Yes | No | No | R/W
Dropbox | DBHASH † | Yes | Yes | No | -
Google Cloud Storage | MD5 | Yes | No | No | R/W
Amazon Drive | MD5 | No | Yes | No | R
Microsoft OneDrive | SHA1 | Yes | Yes | No | R
Hubic | MD5 | Yes | No | No | R/W
Backblaze B2 | SHA1 | Yes | No | No | R/W
Yandex Disk | MD5 | Yes | No | No | R/W
SFTP | - | Yes | Depends | No | -
FTP | - | No | Yes | No | -
The local filesystem | All | Yes | Depends | No | -
Name | Purge | Copy | Move | DirMove | CleanUp | ListR
---|---|---|---|---|---|---
Google Drive | Yes | Yes | Yes | Yes | No #575 | No
Amazon S3 | No | Yes | No | No | No | Yes
Openstack Swift | Yes † | Yes | No | No | No | Yes
Dropbox | Yes | Yes | Yes | Yes | No #575 | No
Google Cloud Storage | Yes | Yes | No | No | No | Yes
Amazon Drive | Yes | No | Yes | Yes | No #575 | No
Microsoft OneDrive | Yes | Yes | Yes | No #197 | No #575 | No
Hubic | Yes † | Yes | No | No | No | Yes
Backblaze B2 | No | No | No | No | Yes | Yes
Yandex Disk | Yes | No | No | No | No #575 | Yes
SFTP | No | No | Yes | Yes | No | No
FTP | No | No | Yes | Yes | No | No
The local filesystem | Yes | No | Yes | Yes | No | No
This is used for emptying the trash for a remote by rclone cleanup.
If the server can't do CleanUp then rclone cleanup will return an error.
The remote supports a recursive list to list all the contents beneath a directory quickly. This enables the --fast-list flag to work. See the rclone docs for more details.
Paths are specified as drive:path
Drive paths may be as deep as required, eg drive:directory/subdirectory.
Here is an example of how to make a remote called remote. First run:
rclone config
This will guide you through an interactive setup process:
No remotes found - make a new one
n) New remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
n/r/c/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
\ "dropbox"
5 / Encrypt/Decrypt a remote
\ "crypt"
 6 / FTP Connection
   \ "ftp"
 7 / Google Cloud Storage (this is not Google Drive)
   \ "google cloud storage"
 8 / Google Drive
   \ "drive"
 9 / Hubic
   \ "hubic"
10 / Local Disk
   \ "local"
11 / Microsoft OneDrive
   \ "onedrive"
12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
   \ "swift"
13 / SSH/SFTP Connection
   \ "sftp"
14 / Yandex Disk
   \ "yandex"
Storage> 8
Google Application Client Id - leave blank normally.
client_id>
Google Application Client Secret - leave blank normally.
client_secret>
Remote config
Use auto config?
* Say Y if not sure
If your browser doesn't open automatically go to the following link: http://
Log in and authorize rclone for access
Waiting for code...
Got code
Configure this as a team drive?
y) Yes
n) No
y/n> n
--------------------
[remote]
client_id =
client_secret =
token = {"AccessToken":"xxxx.x.xxxxx_xxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"1/xxxxxxxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxx","Expiry":"2014-03-16T13:57:58.955387075Z","Extra":null}
--------------------
y) Yes this is OK
y/e/d> y
rclone ls remote:
To copy a local directory to a drive directory called backup
rclone copy /home/source remote:backup
Team drives
If you want to configure the remote to point to a Google Team Drive then answer y to the question Configure this as a team drive?.
This will fetch the list of Team Drives from google and allow you to configure which one you want to use. You can also type in a team drive ID if you prefer.
For example:
Configure this as a team drive?
y) Yes
n) No
y/n> y
Fetching team drive list...
Choose a number from below, or type in your own value
 1 / Rclone Test
   \ "xxxxxxxxxxxxxxxxxxxx"
 2 / Rclone Test 2
   \ "yyyyyyyyyyyyyyyyyyyy"
 3 / Rclone Test 3
   \ "zzzzzzzzzzzzzzzzzzzz"
Enter a Team Drive ID> 1
--------------------
[remote]
client_id =
client_secret =
token = {"AccessToken":"xxxx.x.xxxxx_xxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"1/xxxxxxxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxx","Expiry":"2014-03-16T13:57:58.955387075Z","Extra":null}
team_drive = xxxxxxxxxxxxxxxxxxxx
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
Modified time
Google drive stores modification times accurate to 1 ms.
Revisions
@@ -1445,111 +1568,111 @@ y/e/d> y
rclone ls remote:bucket
Sync /home/local/directory
to the remote bucket, deleting any excess files in the bucket.
rclone sync /home/local/directory remote:bucket
+This remote supports --fast-list
which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
The modified time is stored as metadata on the object as X-Amz-Meta-Mtime
as floating point since the epoch accurate to 1 ns.
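As a sketch of what such a metadata value looks like (illustrative Python, not rclone's actual Go code; the helper name `mtime_metadata` is made up), the string is built from whole seconds plus nanoseconds, since a binary float cannot hold full nanosecond precision:

```python
# Illustrative sketch: an X-Amz-Meta-Mtime value as floating-point seconds
# since the epoch, accurate to 1 ns. Assembled as a string because a float
# cannot represent nanosecond precision exactly.

def mtime_metadata(seconds: int, nanoseconds: int) -> str:
    """Format whole seconds plus nanoseconds as 'ssssssssss.nnnnnnnnn'."""
    return f"{seconds}.{nanoseconds:09d}"

print(mtime_metadata(1400000000, 123456789))  # 1400000000.123456789
```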
Running rclone
on an EC2 instance with an IAM role
If none of these options actually ends up providing rclone
with AWS credentials then S3 interaction will be non-authenticated (see below).
When using the sync
subcommand of rclone
the following minimum permissions are required to be available on the bucket being written to:
ListBucket
DeleteObject
GetObject
PutObject
PutObjectACL
Example policy:
+{
+ "Version": "2012-10-17",
+ "Statement": [
+ {
+ "Effect": "Allow",
+ "Principal": {
+ "AWS": "arn:aws:iam::USER_SID:user/USER_NAME"
+ },
+ "Action": [
+ "s3:ListBucket",
+ "s3:DeleteObject",
+ "s3:GetObject",
+ "s3:PutObject",
+ "s3:PutObjectAcl"
+ ],
+ "Resource": [
+ "arn:aws:s3:::BUCKET_NAME/*",
+ "arn:aws:s3:::BUCKET_NAME"
+ ]
+ }
+ ]
+}
+Notes on above:
+This policy assumes that the user USER_NAME
has been created.
For reference, here's an Ansible script that will generate one or more buckets that will work with rclone sync
.
Here are the command line options specific to this cloud storage system.
Canned ACL used when creating buckets and/or storing objects in S3.
-For more info visit the canned ACL docs.
+For more info visit the canned ACL docs.
Storage class to upload new objects with.
Available options include:
@@ -1882,10 +2045,10 @@ server_side_encryption =
So once set up, for example to copy files into a bucket
rclone --size-only copy /path/to/files minio:bucket
-Swift refers to Openstack Object Storage. Commercial implementations of that being:
+Swift refers to Openstack Object Storage. Commercial implementations of that being:
Paths are specified as remote:container
(or remote:
for the lsd
command.) You may put subdirectories in too, eg remote:container/path/to/dir
.
Here is an example of making a swift configuration. First run
@@ -2001,6 +2164,8 @@ key = $OS_PASSWORD auth = $OS_AUTH_URL tenant = $OS_TENANT_NAME
Note that you may (or may not) need to set region
too - try without first.
This remote supports --fast-list
which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
Here are the command line options specific to this cloud storage system.
rclone ls remote:
To copy a local directory to a dropbox directory called backup
rclone copy /home/source remote:backup
-Dropbox doesn't provide the ability to set modification times in the V1 public API, so rclone can't support modified time with Dropbox.
-This may change in the future - see these issues for details:
-Dropbox doesn't return any sort of checksum (MD5 or SHA1).
-Together that means that syncs to dropbox will effectively have the --size-only
flag set.
+Dropbox supports modified times, but the only way to set a modification time is to re-upload the file.
+This means that if you uploaded your data with an older version of rclone which didn't support the v2 API and modified times, rclone will decide to upload all your old data to fix the modification times. If you don't want this to happen use --size-only
or --checksum
flag to stop it.
Dropbox supports its own hash type which is checked for all transfers.
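Dropbox's content hash is published in its API documentation: each 4 MiB block is hashed with SHA-256, the binary digests are concatenated, and the concatenation is hashed again. A minimal sketch assuming that published algorithm (this is not taken from rclone's source):

```python
# Sketch of Dropbox's published content-hash scheme (assumed from the
# Dropbox API docs, not from rclone's source): SHA-256 each 4 MiB block,
# concatenate the binary digests, then SHA-256 the result.
import hashlib

BLOCK = 4 * 1024 * 1024  # 4 MiB

def dropbox_content_hash(data: bytes) -> str:
    block_digests = b"".join(
        hashlib.sha256(data[i:i + BLOCK]).digest()
        for i in range(0, max(len(data), 1), BLOCK)
    )
    return hashlib.sha256(block_digests).hexdigest()
```

For files under 4 MiB this reduces to SHA-256 applied twice, which makes the scheme easy to verify by hand.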
Here are the command line options specific to this cloud storage system.
Note that Dropbox is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
There are some file names such as thumbs.db
which Dropbox can't store. There is a full list of them in the "Ignored Files" section of this document. Rclone will issue an error message File name disallowed - not uploading
if it attempts to upload one of those file names, but the sync won't fail.
-If you have more than 10,000 files in a directory then rclone purge dropbox:dir
will return the error Failed to purge: There are too many files involved in this operation
. As a work-around do an rclone delete dropbix:dir
followed by an rclone rmdir dropbox:dir
.
+If you have more than 10,000 files in a directory then rclone purge dropbox:dir
will return the error Failed to purge: There are too many files involved in this operation
. As a work-around do an rclone delete dropbox:dir
followed by an rclone rmdir dropbox:dir
.
Paths are specified as remote:bucket
(or remote:
for the lsd
command.) You may put subdirectories in too, eg remote:bucket/path/to/dir
.
The initial setup for google cloud storage involves getting a token from Google Cloud Storage which you need to do in your browser. rclone config
walks you through it.
You can set up rclone with Google Cloud Storage in an unattended mode, i.e. not tied to a specific end-user Google account. This is useful when you want to synchronise files onto machines that don't have actively logged-in users, for example build machines.
To get credentials for Google Cloud Platform IAM Service Accounts, please head to the Service Account section of the Google Developer Console. Service Accounts behave just like normal User
permissions in Google Cloud Storage ACLs, so you can limit their access (e.g. make them read only). After creating an account, a JSON file containing the Service Account's credentials will be downloaded onto your machines. These credentials are what rclone will use for authentication.
To use a Service Account instead of OAuth2 token flow, enter the path to your Service Account credentials at the service_account_file
prompt and rclone won't use the browser based authentication flow.
This remote supports --fast-list
which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
Google Cloud Storage stores md5sums natively and rclone stores modification times as metadata on the object, under the "mtime" key in RFC3339 format accurate to 1ns.
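As an illustration of the mtime format described above (illustrative Python, not rclone's code; the helper name is made up), an RFC3339 timestamp with nanosecond precision can be assembled from whole seconds and a nanosecond remainder:

```python
# Illustrative sketch: an RFC3339 "mtime" value accurate to 1 ns, built
# from whole seconds since the epoch plus a nanosecond remainder.
from datetime import datetime, timezone

def rfc3339_ns(seconds: int, nanoseconds: int) -> str:
    base = datetime.fromtimestamp(seconds, tz=timezone.utc)
    return base.strftime("%Y-%m-%dT%H:%M:%S") + f".{nanoseconds:09d}Z"

print(rfc3339_ns(0, 5))  # 1970-01-01T00:00:00.000000005Z
```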
Paths are specified as remote:path
Paths may be as deep as required, eg remote:directory/subdirectory
.
The initial setup for Amazon Drive involves getting a token from Amazon which you need to do in your browser. rclone config
walks you through it.
The configuration process for Amazon Drive may involve using an oauth proxy. This is used to keep the Amazon credentials out of the source code. The proxy runs in Google's very secure App Engine environment and doesn't store any credentials which pass through it.
+NB rclone doesn't currently have its own Amazon Drive credentials (see the forum for why) so you will either need to have your own client_id
and client_secret
with Amazon Drive, or use a third party oauth proxy, in which case you will need to enter client_id
, client_secret
, auth_url
and token_url
.
Note also that if you are not using Amazon's auth_url
and token_url
(ie you filled in something for those), then when setting up on a remote machine you can only use the copy-the-config method of configuration - rclone authorize
will not work.
Here is an example of how to make a remote called remote
. First run:
rclone config
This will guide you through an interactive setup process:
-n) New remote
-d) Delete remote
+No remotes found - make a new one
+n) New remote
+r) Rename remote
+c) Copy remote
+s) Set configuration password
q) Quit config
-e/n/d/q> n
+n/r/c/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
@@ -2241,28 +2409,35 @@ Choose a number from below, or type in your own value
\ "dropbox"
5 / Encrypt/Decrypt a remote
\ "crypt"
- 6 / Google Cloud Storage (this is not Google Drive)
+ 6 / FTP Connection
+ \ "ftp"
+ 7 / Google Cloud Storage (this is not Google Drive)
\ "google cloud storage"
- 7 / Google Drive
+ 8 / Google Drive
\ "drive"
- 8 / Hubic
+ 9 / Hubic
\ "hubic"
- 9 / Local Disk
+10 / Local Disk
\ "local"
-10 / Microsoft OneDrive
+11 / Microsoft OneDrive
\ "onedrive"
-11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
\ "swift"
-12 / SSH/SFTP Connection
+13 / SSH/SFTP Connection
\ "sftp"
-13 / Yandex Disk
+14 / Yandex Disk
\ "yandex"
Storage> 1
-Amazon Application Client Id - leave blank normally.
-client_id>
-Amazon Application Client Secret - leave blank normally.
-client_secret>
+Amazon Application Client Id - required.
+client_id> your client ID goes here
+Amazon Application Client Secret - required.
+client_secret> your client secret goes here
+Auth server URL - leave blank to use Amazon's.
+auth_url> Optional auth URL
+Token server url - leave blank to use Amazon's.
+token_url> Optional token URL
Remote config
+Make sure your Redirect URL is set to "http://127.0.0.1:53682/" in your custom config.
Use auto config?
* Say Y if not sure
* Say N if you are working on a remote or headless machine
@@ -2275,15 +2450,17 @@ Waiting for code...
Got code
--------------------
[remote]
-client_id =
-client_secret =
+client_id = your client ID goes here
+client_secret = your client secret goes here
+auth_url = Optional auth URL
+token_url = Optional token URL
token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","refresh_token":"xxxxxxxxxxxxxxxxxx","expiry":"2015-09-06T16:07:39.658438471+01:00"}
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
-See the remote setup docs for how to set it up on a machine with no Internet browser available.
+See the remote setup docs for how to set it up on a machine with no Internet browser available.
Note that rclone runs a webserver on your local machine to collect the token as returned from Amazon. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/
and it may require you to unblock it temporarily if you are running a host firewall.
Once configured you can then use rclone
like this,
List directories in top level of your Amazon Drive
@@ -2292,7 +2469,7 @@ y/e/d> y
rclone ls remote:
To copy a local directory to an Amazon Drive directory called backup
rclone copy /home/source remote:backup
-Amazon Drive doesn't allow modification times to be changed via the API so these won't be accurate or used for syncing.
It does store MD5SUMs so for a more accurate sync, you can use the --checksum
flag.
Amazon Drive has rate limiting so you may notice errors in the sync (429 errors). rclone will automatically retry the sync up to 3 times by default (see --retries
flag) which should hopefully work around this problem.
Amazon Drive has an internal limit of file sizes that can be uploaded to the service. This limit is not officially published, but all files larger than this will fail.
At the time of writing (Jan 2016) it is in the area of 50GB per file. This means that larger files are likely to fail.
-Unfortunatly there is no way for rclone to see that this failure is because of file size, so it will retry the operation, as any other failure. To avoid this problem, use --max-size 50000M
option to limit the maximum size of uploaded files. Note that --max-size
does not split files into segments, it only ignores files over this size.
+Unfortunately there is no way for rclone to see that this failure is because of file size, so it will retry the operation, as any other failure. To avoid this problem, use --max-size 50000M
option to limit the maximum size of uploaded files. Note that --max-size
does not split files into segments, it only ignores files over this size.
Paths are specified as remote:path
Paths may be as deep as required, eg remote:directory/subdirectory
.
-See the remote setup docs for how to set it up on a machine with no Internet browser available.
+See the remote setup docs for how to set it up on a machine with no Internet browser available.
Note that rclone runs a webserver on your local machine to collect the token as returned from Microsoft. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/
and it may require you to unblock it temporarily if you are running a host firewall.
Once configured you can then use rclone
like this,
List directories in top level of your OneDrive
@@ -2391,7 +2568,7 @@ y/e/d> y
rclone ls remote:
To copy a local directory to an OneDrive directory called backup
rclone copy /home/source remote:backup
-OneDrive allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not.
OneDrive supports SHA1 type hashes, so you can use the --checksum
flag.
-See the remote setup docs for how to set it up on a machine with no Internet browser available.
+See the remote setup docs for how to set it up on a machine with no Internet browser available.
Note that rclone runs a webserver on your local machine to collect the token as returned from Hubic. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/
and it may require you to unblock it temporarily if you are running a host firewall.
Once configured you can then use rclone
like this,
List containers in the top level of your Hubic
@@ -2483,6 +2660,8 @@ y/e/d> y
rclone copy /home/source remote:backup
If you want the directory to be visible in the official Hubic browser, you need to copy your files to the default
directory
rclone copy /home/source remote:default/backup
+This remote supports --fast-list
which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
The modified time is stored as metadata on the object as X-Object-Meta-Mtime
as floating point since the epoch accurate to 1 ns.
This is a de facto standard (used in the official python-swiftclient amongst others) for storing the modification time for an object.
@@ -2556,6 +2735,8 @@ y/e/d> y
rclone ls remote:bucket
Sync /home/local/directory
to the remote bucket, deleting any excess files in the bucket.
rclone sync /home/local/directory remote:bucket
+This remote supports --fast-list
which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
The modified time is stored as metadata on the object as X-Bz-Info-src_last_modified_millis
as milliseconds since 1970-01-01 in the Backblaze standard. Other tools should be able to use this as a modified time.
Modified times are used in syncing and are fully supported except in the case of updating a modification time on an existing object. In this case the object will be uploaded again as B2 doesn't have an API method to set the modification time independent of doing an upload.
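As a sketch of the value format (illustrative Python, not rclone's code; the helper name is an assumption), a POSIX mtime in fractional seconds converts to the millisecond count B2 stores like this:

```python
# Illustrative sketch: converting a POSIX mtime in fractional seconds to
# the milliseconds-since-1970 value stored in
# X-Bz-Info-src_last_modified_millis.

def to_b2_millis(mtime_seconds: float) -> int:
    # Round to the nearest whole millisecond.
    return round(mtime_seconds * 1000)

print(to_b2_millis(1400000000.123))  # 1400000000123
```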
@@ -2642,7 +2823,7 @@ $ rclone -q --b2-versions ls b2:cleanup-test
Showing that the current version is unchanged but older versions can be seen. These have the UTC date that they were uploaded to the server to the nearest millisecond appended to them.
Note that when using --b2-versions
no file write operations are permitted, so you can't upload files or delete them.
-Yandex Disk is a cloud storage solution created by Yandex.
+Yandex Disk is a cloud storage solution created by Yandex.
Yandex paths may be as deep as required, eg remote:directory/subdirectory
.
Here is an example of making a yandex configuration. First run
rclone config
@@ -2706,7 +2887,7 @@ y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
-See the remote setup docs for how to set it up on a machine with no Internet browser available.
+See the remote setup docs for how to set it up on a machine with no Internet browser available.
Note that rclone runs a webserver on your local machine to collect the token as returned from Yandex Disk. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/
and it may require you to unblock it temporarily if you are running a host firewall.
Once configured you can then use rclone
like this,
See top level directories
@@ -2717,6 +2898,8 @@ y/e/d> y
rclone ls remote:directory
Sync /home/local/directory
to the remote path, deleting any excess files in the path.
rclone sync /home/local/directory remote:directory
+This remote supports --fast-list
which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
Modified times are supported and are stored accurate to 1 ns in custom metadata called rclone_modified
in RFC3339 with nanoseconds format.
Modified times are used in syncing and are fully supported.
SFTP does not support any checksums.
+The only ssh agent supported under Windows is PuTTY's Pageant.
SFTP isn't supported under plan9 until this issue is fixed.
Note that since SFTP isn't HTTP based the following flags don't work with it: --dump-headers
, --dump-bodies
, --dump-auth
Note that --timeout
isn't supported (but --contimeout
is).
Obfuscation
+This is a simple "rotate" of the filename, with each file having a rot distance based on the filename. We store the distance at the beginning of the filename. So a file called "hello" may become "53.jgnnq"
+This is not a strong encryption of filenames, but it may stop automated scanning tools from picking up on filename patterns. As such it's an intermediate between "off" and "standard". The advantage is that it allows for longer path segment names.
+There is a possibility with some unicode-based filenames that the obfuscation is weak and may map lower case characters to upper case equivalents. You cannot rely on this for strong protection.
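A toy sketch of such a rotate scheme follows. The distance rule used here (sum of the name's bytes mod 26) and the ASCII-only rotation are assumptions for illustration; this is NOT rclone's actual obfuscation algorithm.

```python
# Toy filename "rotate" obfuscation in the spirit described above.
# ASSUMPTION: distance = sum of UTF-8 bytes mod 26; only a-z are rotated.
# This is an illustration, not rclone's real algorithm.

def obfuscate(name: str) -> str:
    distance = sum(name.encode("utf-8")) % 26
    rotated = "".join(
        chr((ord(c) - ord("a") + distance) % 26 + ord("a"))
        if "a" <= c <= "z" else c
        for c in name
    )
    # The rot distance is stored at the beginning of the filename.
    return f"{distance}.{rotated}"

print(obfuscate("hello"))  # 12.tqxxa
```

Because the distance is stored in the name, the mapping is trivially reversible, which is exactly why this mode only deters casual scanning rather than providing real filename secrecy.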
+Cloud storage systems have various limits on file name length and total path length which you are more likely to hit using "Standard" file name encryption. If you keep your file names to below 156 characters in length then you should be OK on all providers.
There may be an even more secure file name encryption mode in the future which will address the long file name problem.
-Crypt stores modification times using the underlying remote so support depends on that.
Hashes are not stored for crypt. However the data integrity is protected by an extremely strong crypto authenticator.
Note that you should use the rclone cryptcheck
command to check the integrity of a crypted remote instead of rclone check
which can't check the checksums properly.
Rclone uses scrypt
with parameters N=16384, r=8, p=1
with an optional user-supplied salt (password2) to derive the 32+32+16 = 80 bytes of key material required. If the user doesn't supply a salt then rclone uses an internal one.
scrypt
makes it impractical to mount a dictionary attack on rclone encrypted data. For full protection against this you should always use a salt.
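The derivation described above can be sketched with Python's standard library scrypt. The example passwords and the labels given to the 32/32/16 byte split are illustrative assumptions; only the parameters (N=16384, r=8, p=1, 80 bytes out) come from the text.

```python
# Sketch of the stated key derivation: scrypt with N=16384, r=8, p=1
# producing 32+32+16 = 80 bytes of key material. Passwords and the split
# labels below are example assumptions, not rclone's source.
import hashlib

key = hashlib.scrypt(
    b"example main password",
    salt=b"example second password",  # password2 acts as the salt
    n=16384, r=8, p=1,
    dklen=80,
)
# Split into the 32+32+16 bytes mentioned above.
key_a, key_b, key_c = key[:32], key[32:64], key[64:80]
```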
FTP is the File Transfer Protocol. FTP support is provided using the github.com/jlaffaye/ftp package.
+Here is an example of making an FTP configuration. First run
+rclone config
+This will guide you through an interactive setup process. An FTP remote only needs a host together with a username and a password. For an anonymous FTP server, use anonymous
as the username and your email address as the password.
+No remotes found - make a new one
+n) New remote
+r) Rename remote
+c) Copy remote
+s) Set configuration password
+q) Quit config
+n/r/c/s/q> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+ 1 / Amazon Drive
+ \ "amazon cloud drive"
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
+ \ "s3"
+ 3 / Backblaze B2
+ \ "b2"
+ 4 / Dropbox
+ \ "dropbox"
+ 5 / Encrypt/Decrypt a remote
+ \ "crypt"
+ 6 / FTP Connection
+ \ "ftp"
+ 7 / Google Cloud Storage (this is not Google Drive)
+ \ "google cloud storage"
+ 8 / Google Drive
+ \ "drive"
+ 9 / Hubic
+ \ "hubic"
+10 / Local Disk
+ \ "local"
+11 / Microsoft OneDrive
+ \ "onedrive"
+12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+ \ "swift"
+13 / SSH/SFTP Connection
+ \ "sftp"
+14 / Yandex Disk
+ \ "yandex"
+Storage> ftp
+FTP host to connect to
+Choose a number from below, or type in your own value
+ 1 / Connect to ftp.example.com
+ \ "ftp.example.com"
+host> ftp.example.com
+FTP username, leave blank for current username, ncw
+user>
+FTP port, leave blank to use default (21)
+port>
+FTP password
+y) Yes type in my own password
+g) Generate random password
+y/g> y
+Enter the password:
+password:
+Confirm the password:
+password:
+Remote config
+--------------------
+[remote]
+host = ftp.example.com
+user =
+port =
+pass = *** ENCRYPTED ***
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+This remote is called remote
and can now be used like this
See all directories in the home directory
+rclone lsd remote:
+Make a new directory
+rclone mkdir remote:path/to/directory
+List the contents of a directory
+rclone ls remote:path/to/directory
+Sync /home/local/directory
to the remote directory, deleting any excess files in the directory.
rclone sync /home/local/directory remote:directory
+FTP does not support modified times. Any times you see on the server will be time of upload.
+FTP does not support any checksums.
+Note that since FTP isn't HTTP based the following flags don't work with it: --dump-headers
, --dump-bodies
, --dump-auth
Note that --timeout
isn't supported (but --contimeout
is).
FTP could support server side move but doesn't yet.
Local paths are specified as normal filesystem paths, eg /path/to/wherever
, so
rclone sync /home/source /tmp/destination
Will sync /home/source
to /tmp/destination
These can be configured into the config file for consistency's sake, but it is probably easier not to.
-Rclone reads and writes the modified time using an accuracy determined by the OS. Typically this is 1ns on Linux, 10 ns on Windows and 1 Second on OS X.
Filenames are expected to be encoded in UTF-8 on disk. This is the normal case for Windows and OS X.
@@ -3085,6 +3374,10 @@ nounc = true 6 two/three 6 b/two 6 b/one
+By default rclone normalizes (NFC) the unicode representation of filenames and directories. This flag disables that normalization and uses the same representation as the local filesystem.
+This can be useful if you need to retain the local unicode representation and you are using a cloud provider which supports unnormalized names (e.g. S3 or ACD).
+This should also work with any provider if you are using crypt and have file name encryption (the default) or obfuscation turned on.
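A quick illustration of the NFC normalization in question: "é" can be stored either precomposed (U+00E9) or decomposed ("e" plus the combining accent U+0301), and NFC maps the decomposed form to the precomposed one.

```python
# NFC normalization: the decomposed form 'e' + U+0301 becomes the single
# precomposed code point U+00E9.
import unicodedata

decomposed = "e\u0301"               # 'e' followed by a combining acute accent
precomposed = unicodedata.normalize("NFC", decomposed)

print(precomposed == "\u00e9")       # True
print(len(decomposed), len(precomposed))  # 2 1
```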
This tells rclone to stay in the filesystem specified by the root and not to recurse into different file systems.
For example if you have a directory hierarchy like this
@@ -3557,7 +3850,7 @@ nounc = true
--no-check-certificate
option to disable server certificate verification
Yes they do. All the rclone commands (eg sync
, copy
etc) will work on all the remote storage systems.
Sure! Rclone stores all of its config in a single file. If you want to find this file, the simplest way is to run rclone -h
and look at the help for the --config
flag which will tell you where it is.
-See the remote setup docs for more info.
+See the remote setup docs for more info.
-This has now been documented in its own remote setup page.
+This has now been documented in its own remote setup page.
Rclone can sync between two remote cloud storage systems just fine.
Note that it effectively downloads the file and uploads it again, so the node running rclone would need to have lots of bandwidth.
@@ -3918,7 +4211,7 @@ ntpclient -s -h pool.ntp.orgThis is caused by uploading these files from a Windows computer which hasn't got the Microsoft Office suite installed. The easiest way to fix is to install the Word viewer and the Microsoft Office Compatibility Pack for Word, Excel, and PowerPoint 2007 and later versions' file formats
This is free software under the terms of the MIT license (check the COPYING file included with the source code).
-Copyright (C) 2012 by Nick Craig-Wood http://www.craig-wood.com/nick/
+Copyright (C) 2012 by Nick Craig-Wood https://www.craig-wood.com/nick/
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
@@ -3939,309 +4232,81 @@ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
Authors
-- Nick Craig-Wood
+- Nick Craig-Wood nick@craig-wood.com
Contributors
-- Alex Couper
-- Leonid Shalupov
-- Shimon Doodkin
-- Colin Nicholson
-- Klaus Post
-- Sergey Tolmachev
-- Adriano Aurélio Meirelles
-- C. Bess
-- Dmitry Burdeev
-- Joseph Spurrier
-- Björn Harrtell
-- Xavier Lucas
-- Werner Beroux
-- Brian Stengaard
-- Jakub Gedeon
-- Jim Tittsler
-- Michal Witkowski
-- Fabian Ruff
-- Leigh Klotz
-- Romain Lapray
-- Justin R. Wilson
-- Antonio Messina
-- Stefan G. Weichinger
-- Per Cederberg
-- Radek Šenfeld
-- Fredrik Fornwall
-- Asko Tamm
-- xor-zz
-- Tomasz Mazur
-- Marco Paganini
-- Felix Bünemann
-- Durval Menezes
-- Luiz Carlos Rumbelsperger Viana
-- Stefan Breunig
-- Alishan Ladhani
-- 0xJAKE
-- Thibault Molleman
-- Scott McGillivray
-- Bjørn Erik Pedersen
-- Lukas Loesche
-- emyarod
-- T.C. Ferguson
-- Brandur
-- Dario Giovannetti
-- Károly Oláh
-- Jon Yergatian
-- Jack Schmidt
-- Dedsec1
-- Hisham Zarka
+- Alex Couper amcouper@gmail.com
+- Leonid Shalupov leonid@shalupov.com shalupov@diverse.org.ru
+- Shimon Doodkin helpmepro1@gmail.com
+- Colin Nicholson colin@colinn.com
+- Klaus Post klauspost@gmail.com
+- Sergey Tolmachev tolsi.ru@gmail.com
+- Adriano Aurélio Meirelles adriano@atinge.com
+- C. Bess cbess@users.noreply.github.com
+- Dmitry Burdeev dibu28@gmail.com
+- Joseph Spurrier github@josephspurrier.com
+- Björn Harrtell bjorn@wololo.org
+- Xavier Lucas xavier.lucas@corp.ovh.com
+- Werner Beroux werner@beroux.com
+- Brian Stengaard brian@stengaard.eu
+- Jakub Gedeon jgedeon@sofi.com
+- Jim Tittsler jwt@onjapan.net
+- Michal Witkowski michal@improbable.io
+- Fabian Ruff fabian.ruff@sap.com
+- Leigh Klotz klotz@quixey.com
+- Romain Lapray lapray.romain@gmail.com
+- Justin R. Wilson jrw972@gmail.com
+- Antonio Messina antonio.s.messina@gmail.com
+- Stefan G. Weichinger office@oops.co.at
+- Per Cederberg cederberg@gmail.com
+- Radek Šenfeld rush@logic.cz
+- Fredrik Fornwall fredrik@fornwall.net
+- Asko Tamm asko@deekit.net
+- xor-zz xor@gstocco.com
+- Tomasz Mazur tmazur90@gmail.com
+- Marco Paganini paganini@paganini.net
+- Felix Bünemann buenemann@louis.info
+- Durval Menezes jmrclone@durval.com
+- Luiz Carlos Rumbelsperger Viana maxd13_luiz_carlos@hotmail.com
+- Stefan Breunig stefan-github@yrden.de
+- Alishan Ladhani ali-l@users.noreply.github.com
+- 0xJAKE 0xJAKE@users.noreply.github.com
+- Thibault Molleman thibaultmol@users.noreply.github.com
+- Scott McGillivray scott.mcgillivray@gmail.com
+- Bjørn Erik Pedersen bjorn.erik.pedersen@gmail.com
+- Lukas Loesche lukas@mesosphere.io
+- emyarod allllaboutyou@gmail.com
+- T.C. Ferguson tcf909@gmail.com
+- Brandur brandur@mutelight.org
+- Dario Giovannetti dev@dariogiovannetti.net
+- Károly Oláh okaresz@aol.com
+- Jon Yergatian jon@macfanatic.ca
+- Jack Schmidt github@mowsey.org
+- Dedsec1 Dedsec1@users.noreply.github.com
+- Hisham Zarka hzarka@gmail.com
+- Jérôme Vizcaino jerome.vizcaino@gmail.com
+- Mike Tesch mjt6129@rit.edu
+- Marvin Watson marvwatson@users.noreply.github.com
+- Danny Tsai danny8376@gmail.com
+- Yoni Jah yonjah+git@gmail.com yonjah+github@gmail.com
+- Stephen Harris github@spuddy.org
+- Ihor Dvoretskyi ihor.dvoretskyi@gmail.com
+- Jon Craton jncraton@gmail.com
+- Hraban Luyat hraban@0brg.net
+- Michael Ledin mledin89@gmail.com
+- Martin Kristensen me@azgul.com
+- Too Much IO toomuchio@users.noreply.github.com
+- Anisse Astier anisse@astier.eu
+- Zahiar Ahmed zahiar@live.com
+- Igor Kharin igorkharin@gmail.com
+- Bill Zissimopoulos billziss@navimatics.com
+- Bob Potter bobby.potter@gmail.com
+- Steven Lu tacticalazn@gmail.com
+- Sjur Fredriksen sjurtf@ifi.uio.no
+- Ruwbin hubus12345@gmail.com
+- Fabian Möller fabianm88@gmail.com
+- Edward Q. Bridges github@eqbridges.com
Contact the rclone project
Forum
@@ -4266,11 +4331,6 @@ document.write(''+e+'<\
[@njcw](https://twitter.com/njcw)
Email
-Or if all else fails or you want to ask something private or confidential email
+Or if all else fails or you want to ask something private or confidential email Nick Craig-Wood