diff --git a/docs/content/s3.md b/docs/content/s3.md
index ccb63c3f5..d3cef8d84 100644
--- a/docs/content/s3.md
+++ b/docs/content/s3.md
@@ -2873,7 +2873,6 @@ y/n> n
 Note that s3 credentials are generated when you [create an access
 grant](https://docs.storj.io/dcs/api-reference/s3-compatible-gateway#usage).
 
-
 #### Backend quirks
 
 - `--chunk-size` is forced to be 64 MiB or greater. This will use more
@@ -2882,6 +2881,31 @@ grant](https://docs.storj.io/dcs/api-reference/s3-compatible-gateway#usage).
   gateway.
 - GetTier and SetTier are not supported.
 
+#### Backend bugs
+
+Due to [issue #39](https://github.com/storj/gateway-mt/issues/39),
+uploading multipart files via the S3 gateway causes them to lose their
+metadata. For rclone's purposes this means that the modification time
+is not stored, nor is any MD5SUM (if one is available from the
+source).
+
+This has the following consequences:
+
+- Using `rclone rcat` will fail as the metadata doesn't match after upload
+- Uploading files with `rclone mount` will fail for the same reason
+  - This can be worked around by using `--vfs-cache-mode writes` or `--vfs-cache-mode full` or setting `--s3-upload-cutoff` large
+- Files uploaded via a multipart upload won't have their modtimes set
+  - This means that `rclone sync` will likely keep trying to upload files bigger than `--s3-upload-cutoff`
+  - This can be worked around with `--checksum` or `--size-only` or setting `--s3-upload-cutoff` large
+  - The maximum value for `--s3-upload-cutoff` is 5 GiB though
+
+One general purpose workaround is to set `--s3-upload-cutoff 5G`. This
+means that rclone will upload files smaller than 5 GiB as single parts.
+Note that this can be set in the config file with `upload_cutoff = 5G`
+or configured in the advanced settings. If you regularly transfer
+files larger than 5 GiB then using `--checksum` or `--size-only` in
+`rclone sync` is the recommended workaround.
+
 #### Comparison with the native protocol
 
 Use the [the native protocol](/tardigrade) to take advantage of
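The config-file form of the `upload_cutoff = 5G` workaround described in the added section might look like the sketch below. The remote name `storj-gateway` and the credential values are placeholders, not from the source; the other keys follow rclone's documented s3 backend options.

```ini
# rclone.conf entry for an S3-compatible Storj gateway remote (sketch).
# upload_cutoff = 5G keeps files below 5 GiB as single-part uploads,
# so their modtime and MD5SUM metadata are preserved (multipart
# uploads via the gateway lose metadata, per gateway-mt issue #39).
[storj-gateway]
type = s3
provider = Other
access_key_id = XXXXXXXX          # placeholder
secret_access_key = XXXXXXXX      # placeholder
endpoint = gateway.storjshare.io
upload_cutoff = 5G
```

For files larger than 5 GiB, which must still be uploaded multipart, pairing this with `rclone sync --checksum` (or `--size-only`) avoids the repeated re-uploads caused by missing modtimes.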