From 2e91287b2eec71d8a57bd2b8e1958a9d9bf19d43 Mon Sep 17 00:00:00 2001
From: Maciej Radzikowski
Date: Thu, 16 Jun 2022 22:29:36 +0200
Subject: [PATCH] docs/s3: add note about chunk size decreasing progress
 accuracy

---
 backend/s3/s3.go | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/backend/s3/s3.go b/backend/s3/s3.go
index cd4f3e06b..ac5e112cb 100644
--- a/backend/s3/s3.go
+++ b/backend/s3/s3.go
@@ -1677,7 +1677,14 @@ Files of unknown size are uploaded with the configured
 chunk_size. Since the default chunk size is 5 MiB and there can be at
 most 10,000 chunks, this means that by default the maximum size of
 a file you can stream upload is 48 GiB. If you wish to stream upload
-larger files then you will need to increase chunk_size.`,
+larger files then you will need to increase chunk_size.
+
+Increasing the chunk size decreases the accuracy of the progress
+statistics displayed with the "-P" flag. Rclone treats a chunk as
+sent when it is buffered by the AWS SDK, when in fact it may still
+be uploading. A bigger chunk size means a bigger AWS SDK buffer and
+progress reporting that deviates further from the truth.
+`,
 			Default:  minChunkSize,
 			Advanced: true,
 		}, {
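
The 48 GiB figure in the hunk above follows from S3's limit of 10,000
parts per multipart upload: a streamed file of unknown length can use at
most 10,000 chunks, so the largest stream upload is chunk_size * 10,000.
Below is a minimal, self-contained Go sketch of that arithmetic. It is
illustrative only, not rclone code; maxParts and maxStreamSizeMiB are
names invented for this example.

package main

import "fmt"

// maxParts is S3's limit on the number of parts in a multipart upload.
// Illustrative sketch only; these names are not taken from rclone.
const maxParts = 10000

// maxStreamSizeMiB returns the largest file (in MiB) that can be
// stream-uploaded with the given chunk size, since a file of unknown
// length cannot use more than maxParts chunks.
func maxStreamSizeMiB(chunkSizeMiB int64) int64 {
	return chunkSizeMiB * maxParts
}

func main() {
	// Default 5 MiB chunks: 5 * 10,000 = 50,000 MiB ~= 48.8 GiB,
	// which the documentation rounds to 48 GiB.
	fmt.Printf("5 MiB chunks   -> %d MiB (~%.1f GiB)\n",
		maxStreamSizeMiB(5), float64(maxStreamSizeMiB(5))/1024)

	// To stream a 1 TiB file you would need chunks of at least
	// ceil(1,048,576 MiB / 10,000) = 105 MiB.
	fmt.Printf("105 MiB chunks -> %d MiB (~%.1f GiB)\n",
		maxStreamSizeMiB(105), float64(maxStreamSizeMiB(105))/1024)
}

The flip side, which is the point of this patch, is that larger chunks
mean a larger AWS SDK buffer, so the "-P" progress figures drift further
from what has actually reached S3.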