Before this change the expect/continue timeout was set to
--conntimeout, which defaults to 60s and is far too long to wait.
This was noticed when using S3 via a proxy which apparently didn't
support expect/continue properly.
Set --expect-continue-timeout 0 to disable expect/continue.
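In Go terms this corresponds to zeroing the transport's
ExpectContinueTimeout; a minimal sketch (the wrapper function is
illustrative, the field is the standard net/http one):

    package httpsketch

    import "net/http"

    // newTransport returns a transport with expect/continue disabled: a
    // zero ExpectContinueTimeout makes the client send the request body
    // immediately instead of waiting for the server's "100 Continue".
    func newTransport() *http.Transport {
        return &http.Transport{
            ExpectContinueTimeout: 0,
        }
    }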
Before this change we used non-multipart uploads for files of unknown
size (streaming and uploads in mount). This is slower and less
reliable, and Google only recommends it for files smaller than 5 MB.
After this change we use multipart resumable uploads for all files of
unknown length. This uses an extra transaction so it is less
efficient for files under the chunk size; however, the natural
buffering in the operations.Rcat call controlled by
`--streaming-upload-cutoff` will overcome this.
See: https://forum.rclone.org/t/upload-behaviour-and-speed-when-using-vfs-cache/9920/
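A rough sketch of the decision this makes; the helper names are
illustrative rather than rclone's actual internals:

    package uploadsketch

    import (
        "io"
        "io/ioutil"
    )

    // uploadObject chooses the upload method: rclone passes size < 0 for
    // streamed data of unknown length, and after this change that path
    // always goes through the resumable (multipart) upload.
    func uploadObject(in io.Reader, size int64) error {
        if size < 0 {
            return resumableUpload(in) // unknown length: chunked, resumable
        }
        return singleShotUpload(in, size) // known length: one request
    }

    // The helpers below are placeholders for the real backend calls.
    func resumableUpload(in io.Reader) error {
        _, err := io.Copy(ioutil.Discard, in)
        return err
    }

    func singleShotUpload(in io.Reader, size int64) error {
        _, err := io.CopyN(ioutil.Discard, in, size)
        return err
    }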
Unfortunately bcrypt only hashes the first 72 bytes of its input,
which meant that using it on SSH keys, which are longer than 72 bytes,
was incorrect.
This change swaps over to SHA-256, which should be adequate for
protecting in-memory passwords, given that the unencrypted password
is likely in memory too.
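A sketch of the replacement, assuming a small helper around the
standard library:

    package cryptsketch

    import "crypto/sha256"

    // keyHash hashes the whole input, however long it is. bcrypt silently
    // uses only the first 72 bytes, which is why it was unsuitable for
    // hashing SSH keys; SHA-256 has no such limit.
    func keyHash(data []byte) [sha256.Size]byte {
        return sha256.Sum256(data)
    }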
Before this change, when uploading files from the VFS cache which were
pending a rename, rclone would use the new path of the object when
specifying the destination remote. This didn't cause a problem with
most backends, as the subsequent rename did nothing, but with the
drive backend, which updates objects in place, the incorrect Remote was
embedded in the object. This caused the rename to apparently succeed
while leaving the object at the wrong location.
The fix is to make sure we upload to the path stored in the
object, if available.
This problem was spotted by the new rename tests for the VFS layer.
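The shape of the fix, as an illustrative sketch (fs.Object is rclone's
object interface, the helper name is made up):

    package vfssketch

    import "github.com/rclone/rclone/fs"

    // uploadRemote returns the path to upload to: if the cache item
    // already has a backing object, use the remote stored in that object
    // rather than the item's current (possibly renamed) name, and do the
    // rename afterwards as a separate step.
    func uploadRemote(o fs.Object, currentName string) string {
        if o != nil {
            return o.Remote() // path stored in the object, if available
        }
        return currentName
    }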
This error started happening after updating golang.org/x/crypto, which
was done as a side effect of:
3801b8109 vendor: update termbox-go to fix ncdu command on FreeBSD
This turned out to be a deliberate policy of making
ssh.ParsePrivateKeyWithPassphrase fail if the passphrase was empty.
See: https://go-review.googlesource.com/c/crypto/+/207599
This fix calls ssh.ParsePrivateKey if the passphrase is empty and
ssh.ParsePrivateKeyWithPassphrase otherwise.
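In outline (the wrapper name is illustrative; the two parse functions
are golang.org/x/crypto/ssh's):

    package sftpsketch

    import "golang.org/x/crypto/ssh"

    // parseKey picks the right x/crypto/ssh parser: with an empty
    // passphrase ParsePrivateKeyWithPassphrase now returns an error, so
    // it is only called when a passphrase was actually supplied.
    func parseKey(pemBytes, passphrase []byte) (ssh.Signer, error) {
        if len(passphrase) == 0 {
            return ssh.ParsePrivateKey(pemBytes)
        }
        return ssh.ParsePrivateKeyWithPassphrase(pemBytes, passphrase)
    }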
This has been reported to Wasabi and they've confirmed it as a known
issue: multipart uploads can't be 0 sized, even though that is
incompatible with AWS S3.
If the --drive-stop-on-upload-limit flag is in effect, this checks the
error string from Google Drive to see if it is the error you get when
you've breached your 750 GB/day upload limit.
If so, it turns this error into a Fatal error, which stops the sync.
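Roughly, as a sketch (the substring matched here is a stand-in for
Google's actual quota error text, and fserrors.FatalError is assumed to
be the helper for marking an error fatal):

    package drivesketch

    import (
        "strings"

        "github.com/rclone/rclone/fs/fserrors"
    )

    // maybeStopOnUploadLimit turns the daily upload quota error into a
    // fatal error when the flag is set, so the sync stops instead of
    // retrying every remaining file against an exhausted quota.
    func maybeStopOnUploadLimit(err error, stopOnUploadLimit bool) error {
        if err == nil || !stopOnUploadLimit {
            return err
        }
        if strings.Contains(err.Error(), "uploadLimitExceeded") { // stand-in match
            return fserrors.FatalError(err)
        }
        return err
    }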
Fixes #3857
The failure is the following, which is not reproducible locally, only
on the CI servers.
    --- FAIL: TestMount/CacheMode=minimal/TestWriteFileOverwrite (1.01s)
        fs.go:351:
            Error Trace:    fs.go:351
                            write.go:65
            Error:          Received unexpected error:
                            open E:testwrite: The request could not be performed because of an I/O device error.
            Test:           TestMount/CacheMode=minimal/TestWriteFileOverwrite
The corresponding ERROR from the log is this:
    ERROR : IO error: truncate C:\Users\runneradmin\AppData\Local\rclone\vfs\local\C\Users\RUNNER~1\AppData\Local\Temp\rclone298719627\testwrite: Access is denied.
Instead of using ioutil.WriteFile, this fix uses an equivalent based on
rclone's lib/file, which doesn't set the exclusive flag on Windows.
This allows files to be deleted while they are open. It also deletes
the existing file and retries if an error is received.
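A sketch of the shape of that helper, assuming lib/file mirrors
os.OpenFile (the name and the retry structure are illustrative):

    package testsketch

    import (
        "os"

        "github.com/rclone/rclone/lib/file"
    )

    // writeFile replaces ioutil.WriteFile: lib/file doesn't set the
    // exclusive flag on Windows, so the file can still be deleted while
    // it is open. On error it removes whatever is there and retries once.
    func writeFile(name string, data []byte, perm os.FileMode) error {
        write := func() error {
            f, err := file.OpenFile(name, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, perm)
            if err != nil {
                return err
            }
            _, err = f.Write(data)
            if closeErr := f.Close(); err == nil {
                err = closeErr
            }
            return err
        }
        err := write()
        if err != nil {
            _ = os.Remove(name) // delete the existing file and retry
            err = write()
        }
        return err
    }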
Statistics of transfers which were interrupted were not cleared before
the next retry iteration, so these transfers could show as more than
100% complete. This change clears transfer accounting before each
retry iteration in order to keep the numbers on track.
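A sketch of the shape of the fix; the reset callback stands in for the
real accounting reset call:

    package syncsketch

    // retryLoop clears per-transfer accounting before each retry pass, so
    // transfers interrupted in the previous pass can't push the completed
    // percentage past 100%. resetTransferStats stands in for the real
    // accounting reset call.
    func retryLoop(retries int, doSync func() error, resetTransferStats func()) error {
        var err error
        for try := 1; try <= retries; try++ {
            if try > 1 {
                resetTransferStats()
            }
            if err = doSync(); err == nil {
                return nil
            }
        }
        return err
    }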
Fixes #3861