diff --git a/MANUAL.html b/MANUAL.html
index 24e852353..751a82c2d 100644
--- a/MANUAL.html
+++ b/MANUAL.html
@@ -12,7 +12,7 @@
Rclone is a Go program and comes as a single binary file.
-Download the relevant binary.
-Or alternatively if you have Go 1.5+ installed use
-go get github.com/ncw/rclone
-and this will build the binary in $GOPATH/bin
. If you have built rclone before then you will want to update its dependencies first with this
go get -u -v github.com/ncw/rclone/...
+Download the relevant binary.
+Run rclone config
to set up. See rclone config docs for more details.
+See below for some expanded Linux / macOS instructions.
See the Usage section of the docs for how to use rclone, or run rclone -h
.
unzip rclone-v1.17-linux-amd64.zip
-cd rclone-v1.17-linux-amd64
-#copy binary file
-sudo cp rclone /usr/sbin/
+Linux installation from precompiled binary
+Fetch and unpack
+curl -O http://downloads.rclone.org/rclone-current-linux-amd64.zip
+unzip rclone-current-linux-amd64.zip
+cd rclone-*-linux-amd64
+Copy binary file
+sudo cp rclone /usr/sbin/
sudo chown root:root /usr/sbin/rclone
-sudo chmod 755 /usr/sbin/rclone
-#install manpage
-sudo mkdir -p /usr/local/share/man/man1
+sudo chmod 755 /usr/sbin/rclone
+Install manpage
+sudo mkdir -p /usr/local/share/man/man1
sudo cp rclone.1 /usr/local/share/man/man1/
sudo mandb
+Run rclone config
to set up. See rclone config docs for more details.
+rclone config
+macOS installation from precompiled binary
+Download the latest version of rclone.
+cd && curl -O http://downloads.rclone.org/rclone-current-osx-amd64.zip
+Unzip the download and cd to the extracted folder.
+unzip -a rclone-current-osx-amd64.zip && cd rclone-*-osx-amd64
+Move rclone to your $PATH. You will be prompted for your password.
+sudo mv rclone /usr/local/bin/
+Remove the leftover files.
+cd .. && rm -rf rclone-*-osx-amd64 rclone-current-osx-amd64.zip
+Run rclone config
to set up. See rclone config docs for more details.
+rclone config
+Install from source
+Make sure you have at least Go 1.5 installed. Make sure your GOPATH
is set, then:
+go get -u -v github.com/ncw/rclone
+and this will build the binary in $GOPATH/bin
. If you have built rclone before then you will want to update its dependencies first with this
+go get -u -v github.com/ncw/rclone/...
Installation with Ansible
This can be done with Stefan Weichinger's ansible role.
Instructions
@@ -286,7 +308,7 @@ two-3.txt: renamed from: two.txt
rclone dedupe rename "drive:Google Photos"
rclone dedupe [mode] remote:path
--dedupe-mode string Dedupe mode interactive|skip|first|newest|oldest|rename.
+ --dedupe-mode string Dedupe mode interactive|skip|first|newest|oldest|rename. (default "interactive")
Remote authorization.
This produces markdown docs for the rclone commands to the directory supplied. These are in a format suitable for hugo to render into the rclone.org website.
rclone gendocs output_directory
+List all the remotes in the config file.
+rclone listremotes lists all the available remotes from the config file.
+When used with the -l flag it lists the types too.
+rclone listremotes
+ -l, --long Show the type as well as names.
Mount the remote as a mountpoint. EXPERIMENTAL
-rclone mount allows Linux, FreeBSD and macOS to mount any of Rclone's cloud storage systems as a file system with FUSE.
This is EXPERIMENTAL - use with care.
First set up your remote using rclone config
. Check it works with rclone ls
etc.
Or with OS X
umount -u /path/to/local/mount
This can only read files sequentially, or write files sequentially. It can't read and write or seek in files.
-rclonefs inherits rclone's directory handling. In rclone's world directories don't really exist. This means that empty directories will have a tendency to disappear once they fall out of the directory cache.
+This can only write files sequentially, it can only seek when reading.
+Rclone mount inherits rclone's directory handling. In rclone's world directories don't really exist. This means that empty directories will have a tendency to disappear once they fall out of the directory cache.
The bucket based FSes (eg swift, s3, google compute storage, b2) won't work from the root - you will need to specify a bucket, or a path within the bucket. So swift:
won't work whereas swift:bucket
will as will swift:bucket/path
.
Only supported on Linux, FreeBSD and OS X at the moment.
rclone mount remote:path /path/to/mountpoint
- --debug-fuse Debug the FUSE internals - needs -v.
- --no-modtime Don't read the modification time (can speed things up).
+ --allow-non-empty Allow mounting over a non-empty directory.
+ --allow-other Allow access to other users.
+ --allow-root Allow access to root user.
+ --debug-fuse Debug the FUSE internals - needs -v.
+ --default-permissions Makes kernel enforce access control based on the file mode.
+ --dir-cache-time duration Time to cache directory entries for. (default 5m0s)
+ --gid uint32 Override the gid field set by the filesystem. (default 502)
+ --max-read-ahead int The number of bytes that can be prefetched for sequential reads. (default 128k)
+ --no-modtime Don't read the modification time (can speed things up).
+ --no-seek Don't allow seeking in files.
+ --read-only Mount read-only.
+ --uid uint32 Override the uid field set by the filesystem. (default 502)
+ --umask int Override the permission bits set by the filesystem. (default 2)
+ --write-back-cache Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used.
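Putting a few of these flags together, a typical invocation might look like the following sketch (the remote name and mountpoint are illustrative assumptions, not from this manual):

```shell
# Mount the remote read-only, let other local users access it,
# and cache directory entries for 15 minutes
rclone mount --read-only --allow-other --dir-cache-time 15m remote:path /mnt/rclone
```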
rclone normally syncs or copies directories. However if the source remote points to a file, rclone will just copy that file. The destination remote must point to a directory - rclone will give the error Failed to create file system for "remote:file": is a file not a directory
if it isn't.
For example, suppose you have a remote with a file in called test.jpg
, then you could copy just that file like this
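A minimal sketch of that command (the local destination directory is an assumption for illustration):

```shell
# Copy just the single file test.jpg from the remote to a local directory
rclone copy remote:test.jpg /tmp/download
```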
This can be used when scripting to make aged backups efficiently, eg
rclone sync remote:current-backup remote:previous-backup
rclone sync /path/to/files remote:current-backup
-Rclone has a number of options to control its behaviour.
Options which use TIME use the go time parser. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
Options which use SIZE use kByte by default. However a suffix of b
for bytes, k
for kBytes, M
for MBytes and G
for GBytes may be used. These are the binary units, eg 1, 2**10, 2**20, 2**30 respectively.
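The binary multipliers behind those suffixes can be checked with plain shell arithmetic:

```shell
# The SIZE suffixes are binary multiples of 1024
echo $((1024))                  # 1k in bytes: 1024
echo $((1024 * 1024))           # 1M in bytes: 1048576
echo $((1024 * 1024 * 1024))    # 1G in bytes: 1073741824
```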
Bandwidth limit in kBytes/s, or use suffix b|k|M|G. The default is 0
which means to not limit bandwidth.
For example to limit bandwidth usage to 10 MBytes/s use --bwlimit 10M
This only limits the bandwidth of the data transfer, it doesn't limit the bandwidth of the directory listings etc.
+Note that the units are Bytes/s not Bits/s. Typically connections are measured in Bits/s - to convert divide by 8. For example let's say you have a 10 Mbit/s connection and you wish rclone to use half of it - 5 Mbit/s. This is 5/8 = 0.625MByte/s so you would use a --bwlimit 0.625M
parameter for rclone.
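The bits-to-bytes division above can be checked with a quick one-liner:

```shell
# 5 Mbit/s divided by 8 bits per byte = 0.625 MByte/s
awk 'BEGIN { printf "%.3f\n", 5 / 8 }'
```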
The number of checkers to run in parallel. Checkers do the equality checking of files during a sync. For some storage systems (eg s3, swift, dropbox) this can take a significant amount of time so they are run in parallel.
The default is to run 8 checkers in parallel.
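To raise the number of checkers, for example (the source and destination paths are illustrative):

```shell
# Run 16 equality checkers in parallel during the sync
rclone sync --checkers 16 /path/to/src remote:dst
```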
@@ -524,12 +568,15 @@ c/u/q>
These options are useful when developing or debugging rclone. There are also some more remote specific options which aren't documented here which are used for testing. These start with remote name eg --drive-test-option
- see the docs for the remote in question.
Write CPU profile to file. This can be analysed with go tool pprof
.
Dump HTTP headers - will contain sensitive info such as Authorization:
headers - use --dump-headers
to dump without Authorization:
headers. Can be very verbose. Useful for debugging only.
Dump HTTP headers and bodies - may contain sensitive info. Can be very verbose. Useful for debugging only.
Dump the filters to the output. Useful to see exactly what include and exclude options are filtering on.
Dump HTTP headers - may contain sensitive info. Can be very verbose. Useful for debugging only.
+Dump HTTP headers with Authorization:
lines removed. May still contain sensitive info. Can be very verbose. Useful for debugging only.
Use --dump-auth
if you do want the Authorization:
headers.
Write memory profile to file. This can be analysed with go tool pprof
.
If you use the -v
flag, rclone will produce Error
, Info
and Debug
messages.
If you use the --log-file=FILE
option, rclone will redirect Error
, Info
and Debug
messages along with standard error to FILE.
If any errors occurred during the command, rclone will set a non zero exit code. This allows scripts to detect when rclone operations have failed.
+If any errors occurred during the command, rclone will exit with an exit code of 1
. This allows scripts to detect when rclone operations have failed.
During the startup phase rclone will exit immediately if an error is detected in the configuration. There will always be a log message immediately before exiting.
+When rclone is running it will accumulate errors as it goes along, and only exit with a non-zero exit code if (after retries) there were still errors remaining. For every error counted there will be a high priority log message (visible with -q
) showing the message and which file caused the problem. A high priority message is also shown when starting a retry so the user can see that any previous error messages may not be valid after the retry. If rclone has done a retry it will log a high priority message if the retry was successful.
Some of the configurations (those involving oauth2) require an Internet connected web browser.
If you are trying to set rclone up on a remote or headless box with no browser available on it (eg a NAS or a server in a datacenter) then you will need to use an alternative means of configuration. There are two ways of doing it, described below.
@@ -664,10 +713,10 @@ y/e/d>
Rclone keeps track of directories that could match any file patterns.
Eg if you add the include rule
-\a\*.jpg
+/a/*.jpg
Rclone will synthesize the directory include rule
-\a\
-If you put any rules which end in \
then it will only match directories.
+/a/
+If you put any rules which end in /
then it will only match directories.
Directory matches are only used to optimise directory access patterns - you must still match the files that you want to match. Directory matches won't optimise anything on bucket based remotes (eg s3, swift, google compute storage, b2) which don't have a concept of directory.
Rclone implements bash style {a,b,c}
glob matching which rsync doesn't.
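For example (the pattern and paths here are chosen purely for illustration):

```shell
# {png,jpg} expands to match either extension
rclone copy --include "*.{png,jpg}" remote:path /tmp/photos
```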
If a cloud storage system allows duplicate files then it can have two objects with the same name.
This confuses rclone greatly when syncing - use the rclone dedupe
command to rename or remove duplicates.
MIME types (also known as media types) classify types of documents using a simple text classification, eg text/html
or application/pdf
.
Some cloud storage systems support reading (R
) the MIME type of objects and some support writing (W
) the MIME type of objects.
The MIME type can be important if you are serving files directly to HTTP from the storage system.
+If you are copying from a remote which supports reading (R
) to a remote which supports writing (W
) then rclone will preserve the MIME types. Otherwise they will be guessed from the extension, or the remote itself may assign the MIME type.
All the remotes support a basic set of features, but there are some optional features supported by some remotes used to make some operations more efficient.
| Name | Purge | Copy | Move | DirMove | CleanUp |
|---|---|---|---|---|---|
| Google Drive | Yes | Yes | Yes | Yes | No #575 |
| Amazon S3 | No | Yes | No | No | No |
| Openstack Swift | Yes † | Yes | No | No | No |
| Dropbox | Yes | Yes | Yes | Yes | No #575 |
| Google Cloud Storage | Yes | Yes | No | No | No |
| Amazon Drive | Yes | No | No #721 | No #721 | No #575 |
| Microsoft One Drive | Yes | Yes | No #197 | No #197 | No #575 |
| Hubic | Yes † | Yes | No | No | No |
| Backblaze B2 | No | No | No | No | Yes |
| Yandex Disk | Yes | No | No | No | No #575 |
| The local filesystem | Yes | No | Yes | Yes | No |
This deletes a directory quicker than just deleting all the files in the directory.
+† Note Swift and Hubic implement this in order to delete directory markers but they don't actually have a quicker way of deleting files other than deleting them individually.
+Used when copying an object to and from the same remote. This is known as a server side copy so you can copy a file without downloading it and uploading it again. It is used if you use rclone copy
or rclone move
if the remote doesn't support Move
directly.
If the server doesn't support Copy
directly then for copy operations the file is downloaded then re-uploaded.
Used when moving/renaming an object on the same remote. This is known as a server side move of a file. This is used in rclone move
if the server doesn't support DirMove
.
If the server isn't capable of Move
then rclone simulates it with Copy
then delete. If the server doesn't support Copy
then rclone will download the file and re-upload it.
This is used to implement rclone move
to move a directory if possible. If it isn't then it will use Move
on each file (which falls back to Copy
then download and upload - see Move
section).
This is used for emptying the trash for a remote by rclone cleanup
.
If the server can't do CleanUp
then rclone cleanup
will return an error.
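A sketch of invoking it (the remote name is an assumption):

```shell
# Empty the trash for the remote, if the backend supports CleanUp
rclone cleanup remote:
```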
Paths are specified as drive:path
Drive paths may be as deep as required, eg drive:directory/subdirectory
.
rclone
on an EC2 instance with an IAM role
If none of these options actually end up providing rclone
with AWS credentials then S3 interaction will be non-authenticated (see below).
Here are the command line options specific to this cloud storage system.
+Canned ACL used when creating buckets and/or storing objects in S3.
+For more info visit the canned ACL docs.
+Storage class to upload new objects with.
+Available options include:
+If you want to use rclone to access a public bucket, configure with a blank access_key_id
and secret_access_key
. Eg
No remotes found - make a new one
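Equivalently, the resulting stanza in the config file might look something like this (the remote name is an assumption for illustration):

```
[anons3]
type = s3
access_key_id =
secret_access_key =
```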
@@ -1503,7 +1726,26 @@ y/e/d> y
rclone ls remote:container
Sync /home/local/directory
to the remote container, deleting any excess files in the container.
rclone sync /home/local/directory remote:container
-An OpenStack credentials file typically looks something like this (without the comments)
+export OS_AUTH_URL=https://a.provider.net/v2.0
+export OS_TENANT_ID=ffffffffffffffffffffffffffffffff
+export OS_TENANT_NAME="1234567890123456"
+export OS_USERNAME="123abc567xy"
+echo "Please enter your OpenStack Password: "
+read -sr OS_PASSWORD_INPUT
+export OS_PASSWORD=$OS_PASSWORD_INPUT
+export OS_REGION_NAME="SBG1"
+if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi
+The config file needs to look something like this where $OS_USERNAME
represents the value of the OS_USERNAME
variable - 123abc567xy
in the example above.
[remote]
+type = swift
+user = $OS_USERNAME
+key = $OS_PASSWORD
+auth = $OS_AUTH_URL
+tenant = $OS_TENANT_NAME
+Note that you may (or may not) need to set region
too - try without first.
Here are the command line options specific to this cloud storage system.
Above this size files will be chunked into a _segments container. The default for this is 5GB which is its maximum value.
@@ -1516,6 +1758,7 @@ y/e/d> y
Due to an oddity of the underlying swift library, it gives a "Bad Request" error rather than a more sensible error when the authentication fails for Swift.
So this most likely means your username / password is wrong. You can investigate further with the --dump-bodies
flag.
This may also be caused by specifying the region when you shouldn't have (eg OVH).
This is most likely caused by forgetting to specify your tenant when setting up a swift remote.
Dropbox doesn't return any sort of checksum (MD5 or SHA1).
Together that means that syncs to dropbox will effectively have the --size-only
flag set.
Here are the command line options specific to this cloud storage system.
Upload chunk size. Max 150M. The default is 128MB. Note that this isn't buffered into memory.
@@ -1781,19 +2024,25 @@ y/e/d> y
It does store MD5SUMs so for a more accurate sync, you can use the --checksum
flag.
Any files you delete with rclone will end up in the trash. Amazon don't provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Amazon's apps or via the Amazon Drive website.
-.com
Amazon accounts
Let's say you usually use amazon.co.uk
. When you authenticate with rclone it will take you to an amazon.com
page to log in. Your amazon.co.uk
email and password should work here just fine.
Here are the command line options specific to this cloud storage system.
Files this size or more will be downloaded via their tempLink
. This is to work around a problem with Amazon Drive which blocks downloads of files bigger than about 10GB. The default for this is 9GB which shouldn't need to be changed.
To download files above this threshold, rclone requests a tempLink
which downloads the file through a temporary URL directly from the underlying S3 storage.
Sometimes Amazon Drive gives an error when a file has been fully uploaded but the file appears anyway after a little while. This controls the time rclone waits - 2 minutes by default. You might want to increase the time if you are having problems with very big files. Upload with the -v
flag for more info.
Sometimes Amazon Drive gives an error when a file has been fully uploaded but the file appears anyway after a little while. This happens sometimes for files over 1GB in size and nearly every time for files bigger than 10GB. This parameter controls the time rclone waits for the file to appear.
+The default value for this parameter is 3 minutes per GB, so by default it will wait 3 minutes for every GB uploaded to see if the file appears.
+You can disable this feature by setting it to 0. This may cause conflict errors as rclone retries the failed upload but the file will most likely appear correctly eventually.
+These values were determined empirically by observing lots of uploads of big files for a range of file sizes.
+Upload with the -v
flag to see more info about what rclone is doing in this situation.
Note that Amazon Drive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
Amazon Drive has rate limiting so you may notice errors in the sync (429 errors). rclone will automatically retry the sync up to 3 times by default (see --retries
flag) which should hopefully work around this problem.
Amazon Drive has an internal limit of file sizes that can be uploaded to the service. This limit is not officially published, but all files larger than this will fail.
At the time of writing (Jan 2016) is in the area of 50GB per file. This means that larger files are likely to fail.
-Unfortunatly there is no way for rclone to see that this failure is because of file size, so it will retry the operation, as any other failure. To avoid this problem, use --max-size=50GB
option to limit the maximum size of uploaded files.
Unfortunately there is no way for rclone to see that this failure is because of file size, so it will retry the operation, as any other failure. To avoid this problem, use --max-size 50G
option to limit the maximum size of uploaded files.
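For example (the source path and remote name are illustrative):

```shell
# Skip files of 50 GBytes or larger during the sync
rclone sync --max-size 50G /path/to/src acd:backup
```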
Paths are specified as remote:path
Paths may be as deep as required, eg remote:directory/subdirectory
.
One drive supports SHA1 type hashes, so you can use --checksum
flag.
Any files you delete with rclone will end up in the trash. Microsoft doesn't provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Microsoft's apps or via the One Drive website.
-Here are the command line options specific to this cloud storage system.
Above this size files will be chunked - must be multiple of 320k. The default is 10MB. Note that the chunks will be buffered into memory.
@@ -2059,7 +2308,25 @@
$ rclone -q ls b2:cleanup-test
$ rclone -q --b2-versions ls b2:cleanup-test
        9 one.txt
-It is useful to know how many requests are sent to the server in different scenarios.
+All copy commands send the following 4 requests:
+/b2api/v1/b2_authorize_account
+/b2api/v1/b2_create_bucket
+/b2api/v1/b2_list_buckets
+/b2api/v1/b2_list_file_names
+The b2_list_file_names
request will be sent once for every 1k files in the remote path, providing the checksum and modification time of the listed files. As of version 1.33 issue #818 causes extra requests to be sent when using B2 with Crypt. When a copy operation does not require any files to be uploaded, no more requests will be sent.
Uploading files that do not require chunking will send 2 requests per file upload:
+/b2api/v1/b2_get_upload_url
+/b2api/v1/b2_upload_file/
+Uploading files requiring chunking will send 2 requests (one each to start and finish the upload) and another 2 requests for each chunk:
+/b2api/v1/b2_start_large_file
+/b2api/v1/b2_get_upload_part_url
+/b2api/v1/b2_upload_part/
+/b2api/v1/b2_finish_large_file
+When using B2 with crypt
files are encrypted into a temporary location and streamed from there. This is required to calculate the encrypted file's checksum before beginning the upload. On Windows the %TMPDIR% environment variable is used as the temporary location. If the file requires chunking, both the chunking and encryption will take place in memory.
Here are the command line options specific to this cloud storage system.
When uploading large files chunk the file into this size. Note that these chunks are buffered in memory and there might be a maximum of --transfers
chunks in progress at once. 100,000,000 Bytes is the minimum size (default 96M).
Important The password stored in the config file is lightly obscured so it isn't immediately obvious what it is. It is in no way secure unless you use config file encryption.
A long passphrase is recommended, or you can use a random one. Note that if you reconfigure rclone with the same passwords/passphrases elsewhere it will be compatible - all the secrets used are derived from those two passwords/passphrases.
Note that rclone does not encrypt
* file length - this can be calculated within 16 bytes
* modification time - used for syncing
+In normal use, make sure the remote has a :
in. If you specify the remote without a :
then rclone will use a local directory of that name. So if you use a remote of /path/to/secret/files
then rclone will encrypt stuff to that directory. If you use a remote of name
then rclone will put files in a directory called name
in the current directory.
If you specify the remote as remote:path/to/dir
then rclone will store encrypted files in path/to/dir
on the remote. If you are using file name encryption, then when you save files to secret:subdir/subfile
this will store them in the unencrypted path path/to/dir
but the subdir/subpath
bit will be encrypted.
Note that unless you want encrypted bucket names (which are difficult to manage because you won't know what directory they represent in web interfaces etc), you should probably specify a bucket, eg remote:secretbucket
when using bucket based remotes such as S3, Swift, Hubic, B2, GCS.
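A sketch of such a crypt remote stanza in the config file (the names are placeholders and the passwords are stored obscured, not as shown):

```
[secret]
type = crypt
remote = remote:secretbucket
filename_encryption = standard
password = *** obscured password ***
password2 = *** obscured password ***
```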
To test I made a little directory of files using "standard" file name encryption.
plaintext/
@@ -2293,6 +2566,9 @@ $ rclone -q ls secret:
Standard
* file names encrypted
* file names can't be as long (~156 characters)
* can use sub paths and copy single files
* directory structure visible
* identical file names will have identical uploaded names
* can use shortcuts to shorten the directory recursion
Cloud storage systems have various limits on file name length and total path length which you are more likely to hit using "Standard" file name encryption. If you keep your file names to below 156 characters in length then you should be OK on all providers.
There may be an even more secure file name encryption mode in the future which will address the long file name problem.
+Modified time and hashes
+Crypt stores modification times using the underlying remote so support depends on that.
+Hashes are not stored for crypt. However the data integrity is protected by an extremely strong crypto authenticator.
File formats
File encryption
Files are encrypted 1:1 source file to destination object. The file has a header and is divided into chunks.
@@ -2369,8 +2645,101 @@ nounc = true
And use rclone like this:
rclone copy c:\src nounc:z:\dst
This will use UNC paths on c:\src
but not on z:\dst
. Of course this will cause problems if the absolute path length of a file exceeds 258 characters on z, so only use this option if you have to.
Here are the command line options specific to local storage
+This tells rclone to stay in the filesystem specified by the root and not to recurse into different file systems.
+For example if you have a directory hierarchy like this
+root
+├── disk1 - disk1 mounted on the root
+│ └── file3 - stored on disk1
+├── disk2 - disk2 mounted on the root
+│ └── file4 - stored on disk2
+├── file1 - stored on the root disk
+└── file2 - stored on the root disk
+Using rclone --one-file-system copy root remote:
will only copy file1
and file2
. Eg
$ rclone -q --one-file-system ls root
+ 0 file1
+ 0 file2
+$ rclone -q ls root
+ 0 disk1/file3
+ 0 disk2/file4
+ 0 file1
+ 0 file2
+NB Rclone (like most unix tools such as du
, rsync
and tar
) treats a bind mount to the same device as being on the same filesystem.
NB This flag is only available on Unix based systems. On systems where it isn't supported (eg Windows) it will not appear as a valid flag.
--files-from
operations iterating through the source bucket.rclone check
shows count of hashes that couldn't be checkedrclone listremotes
commandAuthorization:
lines from --dump-headers
outputrclone move
command
+rclone check
on crypted file systems-q
rclone mount
- FUSE--no-modtime
, --debug-fuse
, --read-only
, --allow-non-empty
, --allow-root
, --allow-other
--default-permissions
, --write-back-cache
, --max-read-ahead
, --umask
, --uid
, --gid
--dir-cache-time
to control caching of directory entries-no-seek
flag to disable--acd-upload-wait-per-gb
+-x
/--one-file-system
to stay on a single file system
+.epub
, .odp
and .tsv
as export formats.Forum for general discussions and questions:
+The project website is at:
There you can file bug reports, ask for help or contribute pull requests.
-See also
+Rclone has a Google+ page to which announcements are posted
-Or email