forked from TrueCloudLab/rclone

docs: cleanup backend hashes sections

This commit is contained in:
parent 98a96596df
commit a7faf05393

40 changed files with 115 additions and 96 deletions
@@ -127,13 +127,13 @@ To copy a local directory to an Amazon Drive directory called backup
 
 rclone copy /home/source remote:backup
 
-### Modified time and MD5SUMs
+### Modification times and hashes
 
 Amazon Drive doesn't allow modification times to be changed via
 the API so these won't be accurate or used for syncing.
 
-It does store MD5SUMs so for a more accurate sync, you can use the
-`--checksum` flag.
+It does support the MD5 hash algorithm, so for a more accurate sync,
+you can use the `--checksum` flag.
 
 ### Restricted filename characters
@@ -75,10 +75,10 @@ This remote supports `--fast-list` which allows you to use fewer
 transactions in exchange for more memory. See the [rclone
 docs](/docs/#fast-list) for more details.
 
-### Modified time
+### Modification times and hashes
 
-The modified time is stored as metadata on the object with the `mtime`
-key. It is stored using RFC3339 Format time with nanosecond
+The modification time is stored as metadata on the object with the
+`mtime` key. It is stored using RFC3339 Format time with nanosecond
 precision. The metadata is supplied during directory listings so
 there is no performance overhead to using it.
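As an aside, the `mtime` metadata format described in this hunk (RFC3339 with nanosecond precision) can be sketched in Python. This is an illustrative assumption about the rendering, not rclone's actual implementation; the function name is made up:

```python
from datetime import datetime, timezone

def mtime_metadata_value(mtime_ns: int) -> str:
    """Render a nanosecond epoch timestamp as RFC3339 with nanosecond
    precision, as stored under the `mtime` metadata key (sketch)."""
    secs, nanos = divmod(mtime_ns, 1_000_000_000)
    base = datetime.fromtimestamp(secs, tz=timezone.utc)
    return base.strftime("%Y-%m-%dT%H:%M:%S") + f".{nanos:09d}Z"

print(mtime_metadata_value(1_700_000_000_123_456_789))
# 2023-11-14T22:13:20.123456789Z
```

Nanoseconds are formatted manually because `datetime` only carries microsecond precision.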
@@ -88,6 +88,10 @@ flag. Note that rclone can't set `LastModified`, so using the
 `--update` flag when syncing is recommended if using
 `--use-server-modtime`.
 
+MD5 hashes are stored with blobs. However blobs that were uploaded in
+chunks only have an MD5 if the source remote was capable of MD5
+hashes, e.g. the local disk.
+
 ### Performance
 
 When uploading large files, increasing the value of
@@ -116,12 +120,6 @@ These only get replaced if they are the last character in the name:
 Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8),
 as they can't be used in JSON strings.
 
-### Hashes
-
-MD5 hashes are stored with blobs. However blobs that were uploaded in
-chunks only have an MD5 if the source remote was capable of MD5
-hashes, e.g. the local disk.
-
 ### Authentication {#authentication}
 
 There are a number of ways of supplying credentials for Azure Blob
@@ -96,9 +96,9 @@ This remote supports `--fast-list` which allows you to use fewer
 transactions in exchange for more memory. See the [rclone
 docs](/docs/#fast-list) for more details.
 
-### Modified time
+### Modification times
 
-The modified time is stored as metadata on the object as
+The modification time is stored as metadata on the object as
 `X-Bz-Info-src_last_modified_millis` as milliseconds since 1970-01-01
 in the Backblaze standard. Other tools should be able to use this as
 a modified time.
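The `X-Bz-Info-src_last_modified_millis` value described in this hunk is just a file's modification time as whole milliseconds since the epoch. A hedged sketch (the helper name is illustrative, not rclone code):

```python
import os

def src_last_modified_millis(path: str) -> str:
    """Value for X-Bz-Info-src_last_modified_millis: the file's
    modification time as integer milliseconds since 1970-01-01 UTC."""
    return str(int(os.stat(path).st_mtime * 1000))
```

Other B2 tools reading this key can recover the timestamp by dividing by 1000.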
@@ -298,7 +298,7 @@ while `--ignore-checksum` controls whether checksums are considered during the c
 if there ARE diffs.
 * Unless `--ignore-listing-checksum` is passed, bisync currently computes hashes for one path
 *even when there's no common hash with the other path*
-(for example, a [crypt](/crypt/#modified-time-and-hashes) remote.)
+(for example, a [crypt](/crypt/#modification-times-and-hashes) remote.)
 * If both paths support checksums and have a common hash,
 AND `--ignore-listing-checksum` was not specified when creating the listings,
 `--check-sync=only` can be used to compare Path1 vs. Path2 checksums (as of the time the previous listings were created.)
@@ -402,7 +402,7 @@ Alternately, a `--resync` may be used (Path1 versions will be pushed
 to Path2). Consider the situation carefully and perhaps use `--dry-run`
 before you commit to the changes.
 
-### Modification time
+### Modification times
 
 Bisync relies on file timestamps to identify changed files and will
 _refuse_ to operate if backend lacks the modification time support.
@@ -199,7 +199,7 @@ d) Delete this remote
 y/e/d> y
 ```
 
-### Modified time and hashes
+### Modification times and hashes
 
 Box allows modification times to be set on objects accurate to 1
 second. These will be used to detect whether objects need syncing or
@@ -244,7 +244,7 @@ revert (sometimes silently) to time/size comparison if compatible hashsums
 between source and target are not found.
 
 
-### Modified time
+### Modification times
 
 Chunker stores modification times using the wrapped remote so support
 depends on that. For a small non-chunked file the chunker overlay simply
@@ -405,7 +405,7 @@ Example:
 `1/12/qgm4avr35m5loi1th53ato71v0`
 
 
-### Modified time and hashes
+### Modification times and hashes
 
 Crypt stores modification times using the underlying remote so support
 depends on that.
@@ -361,10 +361,14 @@ large folder (10600 directories, 39000 files):
 - without `--fast-list`: 22:05 min
 - with `--fast-list`: 58s
 
-### Modified time
+### Modification times and hashes
 
 Google drive stores modification times accurate to 1 ms.
 
+Hash algorithms MD5, SHA1 and SHA256 are supported. Note, however,
+that a small fraction of files uploaded may not have SHA1 or SHA256
+hashes especially if they were uploaded before 2018.
+
 ### Restricted filename characters
 
 Only Invalid UTF-8 bytes will be [replaced](/overview/#invalid-utf8),
@@ -1528,9 +1532,10 @@ Waiting a moderate period of time between attempts (estimated to be
 approximately 1 hour) and/or not using --fast-list both seem to be
 effective in preventing the problem.
 
-### Hashes
+### SHA1 or SHA256 hashes may be missing
 
-We need to say that all files have MD5 hashes, but a small fraction of files uploaded may not have SHA1 or SHA256 hashes especially if they were uploaded before 2018.
+All files have MD5 hashes, but a small fraction of files uploaded may
+not have SHA1 or SHA256 hashes especially if they were uploaded before 2018.
 
 ## Making your own client_id
@@ -97,7 +97,7 @@ You can then use team folders like this `remote:/TeamFolder` and
 A leading `/` for a Dropbox personal account will do nothing, but it
 will take an extra HTTP transaction so it should be avoided.
 
-### Modified time and Hashes
+### Modification times and hashes
 
 Dropbox supports modified times, but the only way to set a
 modification time is to re-upload the file.
@@ -76,11 +76,11 @@ To copy a local directory to a 1Fichier directory called backup
 
 rclone copy /home/source remote:backup
 
-### Modified time and hashes ###
+### Modification times and hashes
 
 1Fichier does not support modification times. It supports the Whirlpool hash algorithm.
 
-### Duplicated files ###
+### Duplicated files
 
 1Fichier can have two files with exactly the same name and path (unlike a
 normal file system).
@@ -101,7 +101,7 @@ To copy a local directory to an Enterprise File Fabric directory called backup
 
 rclone copy /home/source remote:backup
 
-### Modified time and hashes
+### Modification times and hashes
 
 The Enterprise File Fabric allows modification times to be set on
 files accurate to 1 second. These will be used to detect whether
@@ -486,7 +486,7 @@ at present.
 
 The `ftp_proxy` environment variable is not currently supported.
 
-#### Modified time
+### Modification times
 
 File modification time (timestamps) is supported to 1 second resolution
 for major FTP servers: ProFTPd, PureFTPd, VsFTPd, and FileZilla FTP server.
@@ -247,7 +247,7 @@ Eg `--header-upload "Content-Type text/potato"`
 Note that the last of these is for setting custom metadata in the form
 `--header-upload "x-goog-meta-key: value"`
 
-### Modification time
+### Modification times
 
 Google Cloud Storage stores md5sum natively.
 Google's [gsutil](https://cloud.google.com/storage/docs/gsutil) tool stores modification time
@@ -428,7 +428,7 @@ if you uploaded an image to `upload` then uploaded the same image to
 what it was uploaded with initially, not what you uploaded it with to
 `album`. In practise this shouldn't cause too many problems.
 
-### Modified time
+### Modification times
 
 The date shown of media in Google Photos is the creation date as
 determined by the EXIF information, or the upload date if that is not
@@ -126,7 +126,7 @@ username = root
 You can stop this image with `docker kill rclone-hdfs` (**NB** it does not use volumes, so all data
 uploaded will be lost.)
 
-### Modified time
+### Modification times
 
 Time accurate to 1 second is stored.
 
@@ -123,7 +123,7 @@ Using
 
 the process is very similar to the process of initial setup exemplified before.
 
-### Modified time and hashes
+### Modification times and hashes
 
 HiDrive allows modification times to be set on objects accurate to 1 second.
 
@@ -105,7 +105,7 @@ Sync the remote `directory` to `/home/local/directory`, deleting any excess file
 
 This remote is read only - you can't upload files to an HTTP server.
 
-### Modified time
+### Modification times
 
 Most HTTP servers store time accurate to 1 second.
 
@@ -245,7 +245,7 @@ Note also that with rclone version 1.58 and newer, information about
 [MIME types](/overview/#mime-type) and metadata item [utime](#metadata)
 are not available when using `--fast-list`.
 
-### Modified time and hashes
+### Modification times and hashes
 
 Jottacloud allows modification times to be set on objects accurate to 1
 second. These will be used to detect whether objects need syncing or
@@ -19,10 +19,10 @@ For consistencies sake one can also configure a remote of type
 rclone remote paths, e.g. `remote:path/to/wherever`, but it is probably
 easier not to.
 
-### Modified time ###
+### Modification times
 
-Rclone reads and writes the modified time using an accuracy determined by
-the OS. Typically this is 1ns on Linux, 10 ns on Windows and 1 Second
+Rclone reads and writes the modification times using an accuracy determined
+by the OS. Typically this is 1ns on Linux, 10 ns on Windows and 1 Second
 on OS X.
 
 ### Filenames ###
@@ -123,17 +123,15 @@ excess files in the path.
 
 rclone sync --interactive /home/local/directory remote:directory
 
-### Modified time
+### Modification times and hashes
 
 Files support a modification time attribute with up to 1 second precision.
 Directories do not have a modification time, which is shown as "Jan 1 1970".
 
-### Hash checksums
-
-Hash sums use a custom Mail.ru algorithm based on SHA1.
+File hashes are supported, with a custom Mail.ru algorithm based on SHA1.
 If file size is less than or equal to the SHA1 block size (20 bytes),
 its hash is simply its data right-padded with zero bytes.
-Hash sum of a larger file is computed as a SHA1 sum of the file data
+Hashes of a larger file is computed as a SHA1 of the file data
 bytes concatenated with a decimal representation of the data length.
 
 ### Emptying Trash
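The custom Mail.ru hash described in this hunk is simple enough to sketch directly. This is a hedged reading of the documented scheme (the function name is made up, and the real backend may differ in details):

```python
import hashlib

def mailru_hash(data: bytes) -> bytes:
    """Sketch of the Mail.ru hash as documented: data up to 20 bytes is
    right-padded with zeros; larger data is SHA1(data + decimal length)."""
    if len(data) <= 20:
        return data.ljust(20, b"\0")
    return hashlib.sha1(data + str(len(data)).encode()).digest()

print(mailru_hash(b"abc").hex())  # small file: the data itself, zero-padded
```

Note the small-file case means two short files with the same bytes always share a hash, just like a real checksum would.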
@@ -82,7 +82,7 @@ To copy a local directory to an Mega directory called backup
 
 rclone copy /home/source remote:backup
 
-### Modified time and hashes
+### Modification times and hashes
 
 Mega does not support modification times or hashes yet.
 
@@ -54,7 +54,7 @@ testing or with an rclone server or rclone mount, e.g.
 rclone serve webdav :memory:
 rclone serve sftp :memory:
 
-### Modified time and hashes
+### Modification times and hashes
 
 The memory backend supports MD5 hashes and modification times accurate to 1 nS.
 
@@ -162,7 +162,7 @@ You may try to [verify you account](https://docs.microsoft.com/en-us/azure/activ
 Note: If you have a special region, you may need a different host in step 4 and 5. Here are [some hints](https://github.com/rclone/rclone/blob/bc23bf11db1c78c6ebbf8ea538fbebf7058b4176/backend/onedrive/onedrive.go#L86).
 
 
-### Modification time and hashes
+### Modification times and hashes
 
 OneDrive allows modification times to be set on objects accurate to 1
 second. These will be used to detect whether objects need syncing or
@@ -64,12 +64,14 @@ To copy a local directory to an OpenDrive directory called backup
 
 rclone copy /home/source remote:backup
 
-### Modified time and MD5SUMs
+### Modification times and hashes
 
 OpenDrive allows modification times to be set on objects accurate to 1
 second. These will be used to detect whether objects need syncing or
 not.
 
+The MD5 hash algorithm is supported.
+
 ### Restricted filename characters
 
 | Character | Value | Replacement |
@@ -154,6 +154,7 @@ Rclone supports the following OCI authentication provider.
 No authentication
 
+
 ### User Principal
 
 Sample rclone config file for Authentication Provider User Principal:
 
 [oos]
@@ -174,6 +175,7 @@ Considerations:
 - If the user is deleted, the config file will no longer work and may cause automation regressions that use the user's credentials.
 
+
 ### Instance Principal
 
 An OCI compute instance can be authorized to use rclone by using it's identity and certificates as an instance principal.
 With this approach no credentials have to be stored and managed.
@@ -203,6 +205,7 @@ Considerations:
 - It is applicable for oci compute instances only. It cannot be used on external instance or resources.
 
+
 ### Resource Principal
 
 Resource principal auth is very similar to instance principal auth but used for resources that are not
 compute instances such as [serverless functions](https://docs.oracle.com/en-us/iaas/Content/Functions/Concepts/functionsoverview.htm).
 To use resource principal ensure Rclone process is started with these environment variables set in its process.
@@ -222,6 +225,7 @@ Sample rclone configuration file for Authentication Provider Resource Principal:
 provider = resource_principal_auth
 
+
 ### No authentication
 
 Public buckets do not require any authentication mechanism to read objects.
 Sample rclone configuration file for No authentication:
@@ -232,10 +236,9 @@ Sample rclone configuration file for No authentication:
 region = us-ashburn-1
 provider = no_auth
 
-## Options
-### Modified time
+### Modification times and hashes
 
-The modified time is stored as metadata on the object as
+The modification time is stored as metadata on the object as
 `opc-meta-mtime` as floating point since the epoch, accurate to 1 ns.
 
 If the modification time needs to be updated rclone will attempt to perform a server
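The `opc-meta-mtime` value in this hunk is a decimal seconds-since-epoch string with nanosecond precision. A hedged sketch of that encoding (the exact formatting here is an assumption; a binary float cannot hold 9 fractional digits for a modern epoch, so the value is built as a string):

```python
def opc_meta_mtime(mtime_ns: int) -> str:
    """Render a nanosecond epoch timestamp as a decimal string,
    e.g. '1700000000.123456789' (illustrative, not rclone code)."""
    secs, nanos = divmod(mtime_ns, 1_000_000_000)
    return f"{secs}.{nanos:09d}"

print(opc_meta_mtime(1_700_000_000_123_456_789))  # 1700000000.123456789
```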
@@ -245,6 +248,8 @@ In the case the object is larger than 5Gb, the object will be uploaded rather th
 Note that reading this from the object takes an additional `HEAD` request as the metadata
 isn't returned in object listings.
 
+The MD5 hash algorithm is supported.
+
 ### Multipart uploads
 
 rclone supports multipart uploads with OOS which means that it can
@@ -90,7 +90,7 @@ mistake or an unsupported feature.
 ⁹ QingStor does not support SetModTime for objects bigger than 5 GiB.
 
 ¹⁰ FTP supports modtimes for the major FTP servers, and also others
-if they advertised required protocol extensions. See [this](/ftp/#modified-time)
+if they advertised required protocol extensions. See [this](/ftp/#modification-times)
 for more details.
 
 ¹¹ Internet Archive requires option `wait_archive` to be set to a non-zero value
@@ -86,7 +86,7 @@ To copy a local directory to a pCloud directory called backup
 
 rclone copy /home/source remote:backup
 
-### Modified time and hashes ###
+### Modification times and hashes
 
 pCloud allows modification times to be set on objects accurate to 1
 second. These will be used to detect whether objects need syncing or
@@ -71,6 +71,13 @@ d) Delete this remote
 y/e/d> y
 ```
 
+### Modification times and hashes
+
+PikPak keeps modification times on objects, and updates them when uploading objects,
+but it does not support changing only the modification time
+
+The MD5 hash algorithm is supported.
+
 {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/pikpak/pikpak.go then run make backenddocs" >}}
 ### Standard options
 
@@ -294,12 +301,13 @@ Result:
 
 {{< rem autogenerated options stop >}}
 
-## Limitations ##
+## Limitations
 
-### Hashes ###
+### Hashes may be empty
 
 PikPak supports MD5 hash, but sometimes given empty especially for user-uploaded files.
 
-### Deleted files ###
+### Deleted files still visible with trashed-only
 
-Deleted files will still be visible with `--pikpak-trashed-only` even after the trash emptied. This goes away after few days.
+Deleted files will still be visible with `--pikpak-trashed-only` even after the
+trash emptied. This goes away after few days.
@@ -84,7 +84,7 @@ To copy a local directory to an premiumize.me directory called backup
 
 rclone copy /home/source remote:backup
 
-### Modified time and hashes
+### Modification times and hashes
 
 premiumize.me does not support modification times or hashes, therefore
 syncing will default to `--size-only` checking. Note that using
@@ -95,10 +95,12 @@ To copy a local directory to an Proton Drive directory called backup
 
 rclone copy /home/source remote:backup
 
-### Modified time
+### Modification times and hashes
 
 Proton Drive Bridge does not support updating modification times yet.
 
+The SHA1 hash algorithm is supported.
+
 ### Restricted filename characters
 
 Invalid UTF-8 bytes will be [replaced](/overview/#invalid-utf8), also left and
@@ -121,7 +121,7 @@ d) Delete this remote
 y/e/d> y
 ```
 
-### Modified time and hashes
+### Modification times and hashes
 
 Quatrix allows modification times to be set on objects accurate to 1 microsecond.
 These will be used to detect whether objects need syncing or not.
@@ -271,7 +271,9 @@ d) Delete this remote
 y/e/d>
 ```
 
-### Modified time
+### Modification times and hashes
 
+#### Modification times
+
 The modified time is stored as metadata on the object as
 `X-Amz-Meta-Mtime` as floating point since the epoch, accurate to 1 ns.
@ -284,6 +286,29 @@ storage the object will be uploaded rather than copied.
|
||||||
Note that reading this from the object takes an additional `HEAD`
|
Note that reading this from the object takes an additional `HEAD`
|
||||||
request as the metadata isn't returned in object listings.
|
request as the metadata isn't returned in object listings.
|
||||||
|
|
||||||
|
#### Hashes
|
||||||
|
|
||||||
|
For small objects which weren't uploaded as multipart uploads (objects
|
||||||
|
sized below `--s3-upload-cutoff` if uploaded with rclone) rclone uses
|
||||||
|
the `ETag:` header as an MD5 checksum.
|
||||||
|
|
||||||
|
However for objects which were uploaded as multipart uploads or with
|
||||||
|
server side encryption (SSE-AWS or SSE-C) the `ETag` header is no
|
||||||
|
longer the MD5 sum of the data, so rclone adds an additional piece of
|
||||||
|
metadata `X-Amz-Meta-Md5chksum` which is a base64 encoded MD5 hash (in
|
||||||
|
the same format as is required for `Content-MD5`). You can use base64 -d and hexdump to check this value manually:
|
||||||
|
|
||||||
|
echo 'VWTGdNx3LyXQDfA0e2Edxw==' | base64 -d | hexdump
|
||||||
|
|
||||||
|
or you can use `rclone check` to verify the hashes are OK.
|
||||||
|
|
||||||
|
For large objects, calculating this hash can take some time so the
|
||||||
|
addition of this hash can be disabled with `--s3-disable-checksum`.
|
||||||
|
This will mean that these objects do not have an MD5 checksum.
|
||||||
|
|
||||||
|
Note that reading this from the object takes an additional `HEAD`
|
||||||
|
request as the metadata isn't returned in object listings.
|
||||||
|
|
||||||
### Reducing costs
|
### Reducing costs
|
||||||
|
|
||||||
#### Avoiding HEAD requests to read the modification time
|
#### Avoiding HEAD requests to read the modification time
|
||||||
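The relationship between the hex MD5 digest and the base64 form used for `Content-MD5` / `X-Amz-Meta-Md5chksum` can be sketched locally. This is a hedged illustration, not rclone code; it assumes GNU coreutils and `xxd`, and the file contents and path are made up:

```shell
# Compute a file's MD5, express it in the base64 form used for
# Content-MD5, then decode it back and confirm it round-trips to hex.
printf 'hello' > /tmp/md5-demo
hex_md5=$(md5sum /tmp/md5-demo | cut -d' ' -f1)        # hex digest
b64_md5=$(printf '%s' "$hex_md5" | xxd -r -p | base64) # base64 of raw digest bytes
decoded=$(printf '%s' "$b64_md5" | base64 -d | xxd -p) # round-trip back to hex
echo "$decoded"
```

The `base64 -d | hexdump` one-liner in the diff above performs the same decode step on a value read from object metadata.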
@@ -375,29 +400,6 @@ there for more details.

 Setting this flag increases the chance for undetected upload failures.

-### Hashes
-
-For small objects which weren't uploaded as multipart uploads (objects
-sized below `--s3-upload-cutoff` if uploaded with rclone) rclone uses
-the `ETag:` header as an MD5 checksum.
-
-However for objects which were uploaded as multipart uploads or with
-server side encryption (SSE-AWS or SSE-C) the `ETag` header is no
-longer the MD5 sum of the data, so rclone adds an additional piece of
-metadata `X-Amz-Meta-Md5chksum` which is a base64 encoded MD5 hash (in
-the same format as is required for `Content-MD5`). You can use base64 -d and hexdump to check this value manually:
-
-    echo 'VWTGdNx3LyXQDfA0e2Edxw==' | base64 -d | hexdump
-
-or you can use `rclone check` to verify the hashes are OK.
-
-For large objects, calculating this hash can take some time so the
-addition of this hash can be disabled with `--s3-disable-checksum`.
-This will mean that these objects do not have an MD5 checksum.
-
-Note that reading this from the object takes an additional `HEAD`
-request as the metadata isn't returned in object listings.
-
 ### Versions

 When bucket versioning is enabled (this can be done with rclone with
@@ -660,7 +662,8 @@ According to AWS's [documentation on S3 Object Lock](https://docs.aws.amazon.com

 > If you configure a default retention period on a bucket, requests to upload objects in such a bucket must include the Content-MD5 header.

-As mentioned in the [Hashes](#hashes) section, small files that are not uploaded as multipart, use a different tag, causing the upload to fail.
+As mentioned in the [Modification times and hashes](#modification-times-and-hashes) section,
+small files that are not uploaded as multipart, use a different tag, causing the upload to fail.
 A simple solution is to set the `--s3-upload-cutoff 0` and force all the files to be uploaded as multipart.

 {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/s3/s3.go then run make backenddocs" >}}
@@ -359,7 +359,7 @@ commands is prohibited. Set the configuration option `disable_hashcheck`
 to `true` to disable checksumming entirely, or set `shell_type` to `none`
 to disable all functionality based on remote shell command execution.

-### Modified time
+### Modification times and hashes

 Modified times are stored on the server to 1 second precision.
@@ -105,7 +105,7 @@ To copy a local directory to an ShareFile directory called backup

 Paths may be as deep as required, e.g. `remote:directory/subdirectory`.

-### Modified time and hashes
+### Modification times and hashes

 ShareFile allows modification times to be set on objects accurate to 1
 second. These will be used to detect whether objects need syncing or
@@ -98,7 +98,7 @@ Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
 create a folder, which rclone will create as a "Sync Folder" with
 SugarSync.

-### Modified time and hashes
+### Modification times and hashes

 SugarSync does not support modification times or hashes, therefore
 syncing will default to `--size-only` checking. Note that using
@@ -227,7 +227,7 @@ sufficient to determine if it is "dirty". By using `--update` along with
 `--use-server-modtime`, you can avoid the extra API call and simply upload
 files whose local modtime is newer than the time it was last uploaded.

-### Modified time
+### Modification times and hashes

 The modified time is stored as metadata on the object as
 `X-Object-Meta-Mtime` as floating point since the epoch accurate to 1
@@ -236,6 +236,8 @@ ns.
 This is a de facto standard (used in the official python-swiftclient
 amongst others) for storing the modification time for an object.

+The MD5 hash algorithm is supported.
+
 ### Restricted filename characters

 | Character | Value | Replacement |
@@ -82,7 +82,7 @@ To copy a local directory to an Uptobox directory called backup

     rclone copy /home/source remote:backup

-### Modified time and hashes
+### Modification times and hashes

 Uptobox supports neither modified times nor checksums. All timestamps
 will read as that set by `--default-time`.
@@ -101,7 +101,7 @@ To copy a local directory to an WebDAV directory called backup

     rclone copy /home/source remote:backup

-### Modified time and hashes ###
+### Modification times and hashes

 Plain WebDAV does not support modified times. However when used with
 Fastmail Files, Owncloud or Nextcloud rclone will support modified times.
@@ -87,14 +87,12 @@ excess files in the path.

 Yandex paths may be as deep as required, e.g. `remote:directory/subdirectory`.

-### Modified time
+### Modification times and hashes

 Modified times are supported and are stored accurate to 1 ns in custom
 metadata called `rclone_modified` in RFC3339 with nanoseconds format.

-### MD5 checksums
-
-MD5 checksums are natively supported by Yandex Disk.
+The MD5 hash algorithm is natively supported by Yandex Disk.

 ### Emptying Trash
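The "RFC3339 with nanoseconds" format mentioned in the Yandex hunk above can be illustrated locally. A hedged sketch assuming GNU `date`; the timestamp itself is made up:

```shell
# Render a timestamp in RFC3339-with-nanoseconds shape, as used by the
# rclone_modified metadata key (GNU date assumed).
date -u -d '2001-02-03 04:05:06.123456789 UTC' +%Y-%m-%dT%H:%M:%S.%NZ
```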
@@ -107,13 +107,11 @@ excess files in the path.

 Zoho paths may be as deep as required, eg `remote:directory/subdirectory`.

-### Modified time
+### Modification times and hashes

 Modified times are currently not supported for Zoho Workdrive

-### Checksums
-
-No checksums are supported.
+No hash algorithms are supported.

 ### Usage information