docs: spelling: high-speed

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>
Josh Soref 2020-10-13 17:50:53 -04:00 committed by Nick Craig-Wood
parent bbe7eb35f1
commit d4f38d45a5
4 changed files with 8 additions and 8 deletions


@@ -93,7 +93,7 @@ as multipart uploads using this chunk size.
 Note that "--qingstor-upload-concurrency" chunks of this size are buffered
 in memory per transfer.
-If you are transferring large files over high speed links and you have
+If you are transferring large files over high-speed links and you have
 enough memory, then increasing this will speed up the transfers.`,
 Default: minChunkSize,
 Advanced: true,
@@ -107,7 +107,7 @@ concurrently.
 NB if you set this to > 1 then the checksums of multipart uploads
 become corrupted (the uploads themselves are not corrupted though).
-If you are uploading small numbers of large file over high speed link
+If you are uploading small numbers of large file over high-speed link
 and these uploads do not fully utilize your bandwidth, then increasing
 this may help to speed up the transfers.`,
 Default: 1,
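The help text being corrected above describes a memory/throughput trade-off: roughly chunk-size × upload-concurrency bytes are buffered per transfer. A sketch of how these QingStor flags might be used (the values are illustrative, not recommendations, and assume a `qingstor:` remote is already configured; note the help text's warning that concurrency > 1 corrupts multipart checksums on this backend):

```shell
# Illustrative only: per-transfer buffering is roughly
# chunk-size x upload-concurrency (16M x 1 = 16M here).
# upload-concurrency is left at 1 because, per the help text,
# values > 1 corrupt multipart checksums on QingStor.
rclone copy ./bigfile qingstor:mybucket \
  --qingstor-chunk-size 16M \
  --qingstor-upload-concurrency 1
```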


@@ -1023,7 +1023,7 @@ using this chunk size.
 Note that "--s3-upload-concurrency" chunks of this size are buffered
 in memory per transfer.
-If you are transferring large files over high speed links and you have
+If you are transferring large files over high-speed links and you have
 enough memory, then increasing this will speed up the transfers.
 Rclone will automatically increase the chunk size when uploading a
@@ -1107,7 +1107,7 @@ If empty it will default to the environment variable "AWS_PROFILE" or
 This is the number of chunks of the same file that are uploaded
 concurrently.
-If you are uploading small numbers of large file over high speed link
+If you are uploading small numbers of large file over high-speed link
 and these uploads do not fully utilize your bandwidth, then increasing
 this may help to speed up the transfers.`,
 Default: 4,


@@ -244,7 +244,7 @@ as multipart uploads using this chunk size.
 Note that "--qingstor-upload-concurrency" chunks of this size are buffered
 in memory per transfer.
-If you are transferring large files over high speed links and you have
+If you are transferring large files over high-speed links and you have
 enough memory, then increasing this will speed up the transfers.
 - Config: chunk_size
@@ -262,7 +262,7 @@ concurrently.
 NB if you set this to > 1 then the checksums of multipart uploads
 become corrupted (the uploads themselves are not corrupted though).
-If you are uploading small numbers of large file over high speed link
+If you are uploading small numbers of large file over high-speed link
 and these uploads do not fully utilize your bandwidth, then increasing
 this may help to speed up the transfers.


@@ -1213,7 +1213,7 @@ using this chunk size.
 Note that "--s3-upload-concurrency" chunks of this size are buffered
 in memory per transfer.
-If you are transferring large files over high speed links and you have
+If you are transferring large files over high-speed links and you have
 enough memory, then increasing this will speed up the transfers.
 Rclone will automatically increase the chunk size when uploading a
@@ -1328,7 +1328,7 @@ Concurrency for multipart uploads.
 This is the number of chunks of the same file that are uploaded
 concurrently.
-If you are uploading small numbers of large file over high speed link
+If you are uploading small numbers of large file over high-speed link
 and these uploads do not fully utilize your bandwidth, then increasing
 this may help to speed up the transfers.
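For the S3 backend, the help text touched by this commit describes when raising these two options pays off on high-speed links. A sketch of such an invocation (flag values are illustrative, and the `s3:mybucket` remote is an assumed example):

```shell
# Illustrative: raise per-transfer parallelism for a fast link.
# Memory cost is roughly chunk-size x upload-concurrency per
# transfer (32M x 8 = 256M here), so only increase these if you
# have the RAM to spare.
rclone copy ./large.iso s3:mybucket/path \
  --s3-chunk-size 32M \
  --s3-upload-concurrency 8
```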