As reported in
https://github.com/rclone/rclone/issues/4660#issuecomment-705502792
After switching to a password callback function, if the SSH connection
aborts and needs to be reconnected then the user is re-prompted for their
password. Instead we now remember the password they entered and just give
that back. We do lose the ability for them to correct mistakes, but that
was the situation before switching to callbacks. We keep the benefits
of not asking for passwords until the SSH connection succeeds (right
known_hosts entry, for example).
This required a small refactor of how `f := &Fs{}` was built, so we can
store the saved password in the Fs object.
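The idea is roughly as follows (a sketch using golang.org/x/crypto/ssh; the field and function names are illustrative, not the actual rclone code):

```go
package sftp

import "golang.org/x/crypto/ssh"

// Fs keeps the password the user typed so that a reconnection of the
// SSH session does not prompt again.
type Fs struct {
	savedPassword string
}

// passwordCallback asks the user once via ask and remembers the answer;
// any later reconnection just hands the saved password back.
func (f *Fs) passwordCallback(ask func() (string, error)) ssh.AuthMethod {
	return ssh.PasswordCallback(func() (string, error) {
		if f.savedPassword != "" {
			return f.savedPassword, nil
		}
		password, err := ask()
		if err != nil {
			return "", err
		}
		f.savedPassword = password
		return password, nil
	})
}
```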
Before this change rclone returned the size from the Stat call of the
link. On Windows this always reads as 0, however on Unix it reads as
the length of the text in the link. This caused errors like this when
syncing:
Failed to copy: corrupted on transfer: sizes differ 0 vs 13
This change causes Windows platforms to read the link and use the
length of the link target as the size instead of 0, which fixes the
problem.
Based on Issue 4087
https://github.com/rclone/rclone/issues/4087
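A sketch of the Windows-side idea, assuming the link text is read with os.Readlink (the function name and package are illustrative, not the actual rclone code):

```go
package local

import "os"

// linkSize returns the size to report for a symlink. On Windows the
// Lstat size of a link is always 0, so read the link and report the
// length of its target instead, matching what Unix reports.
func linkSize(path string, info os.FileInfo) (int64, error) {
	if info.Size() != 0 {
		return info.Size(), nil
	}
	target, err := os.Readlink(path)
	if err != nil {
		return 0, err
	}
	return int64(len(target)), nil
}
```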
Current behaviour is insecure. If the user specifies this value then we
switch to validating the server hostkey and so can detect server changes
or MITM-type attacks.
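With golang.org/x/crypto/ssh the general approach looks like this (a sketch, not the exact rclone implementation; the known_hosts path would come from the user-specified option):

```go
package sftp

import (
	"golang.org/x/crypto/ssh"
	"golang.org/x/crypto/ssh/knownhosts"
)

// hostKeyCallback builds a callback that validates the server's host key
// against the given known_hosts file, replacing the insecure
// ssh.InsecureIgnoreHostKey() behaviour so changed keys and MITM attempts
// are detected.
func hostKeyCallback(knownHostsFile string) (ssh.HostKeyCallback, error) {
	return knownhosts.New(knownHostsFile)
}
```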
This allows files to be copied by ID from google drive. These can be
copied to any rclone remote and if the remote is a google drive then
server side copy will be attempted.
Fixes #3625
This type of error is unlikely to be resolved by a retry, and is
triggered in #2296 by files with a timestamp before the Unix epoch.
The maximum value for the --s3-copy-cutoff should be 5 GiB as tested
with AWS S3.
However b2 have implemented this as 5 GB rather than 5 GiB, so having the
default at 5 GiB makes the b2 S3 server side copy of a large file fail by
default.
This patch sets the default to 4768 MiB which is slightly less than
5 GB.
This should have very little effect on anything.
In future rclone could lower this limit further if Copy can multithread.
See: https://forum.rclone.org/t/copying-files-within-a-b2-bucket/16680/76
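For reference, the arithmetic (a standalone check, not rclone code) showing that 4768 MiB stays below both limits:

```go
package main

import "fmt"

func main() {
	const mib = 1024 * 1024
	copyCutoff := int64(4768) * mib   // new default: 4768 MiB
	b2Limit := int64(5_000_000_000)   // 5 GB as implemented by b2
	awsLimit := int64(5) * 1024 * mib // 5 GiB as allowed by AWS S3

	fmt.Println(copyCutoff)            // 4999610368
	fmt.Println(copyCutoff < b2Limit)  // true
	fmt.Println(copyCutoff < awsLimit) // true
}
```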
Before this change the s3 multipart server side copy was not
preserving the metadata of the object. This was most noticeable
because the modtime was not preserved.
This change fetches the metadata from the object before starting the
copy and overrides it if required.
This also means any other metadata is preserved.
See: https://forum.rclone.org/t/copying-files-within-a-b2-bucket/16680/70
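A minimal sketch of the approach using the AWS SDK for Go v1 (illustrative only; the function name and surrounding plumbing are not rclone's actual code):

```go
package s3meta

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/s3"
)

// startCopyWithMetadata reads the source object's metadata with HEAD and
// supplies it when creating the multipart upload for the copy, so user
// metadata (including the modtime key) is carried over to the new object.
func startCopyWithMetadata(svc *s3.S3, bucket, srcKey, dstKey string) (*s3.CreateMultipartUploadOutput, error) {
	head, err := svc.HeadObject(&s3.HeadObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(srcKey),
	})
	if err != nil {
		return nil, err
	}
	return svc.CreateMultipartUpload(&s3.CreateMultipartUploadInput{
		Bucket:      aws.String(bucket),
		Key:         aws.String(dstKey),
		Metadata:    head.Metadata,
		ContentType: head.ContentType,
	})
}
```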
Before this change, when the above backends created a new backend they
didn't put it into the backend cache.
This meant that rc commands acting on those backends did not work.
This was fixed by making sure the backends use the backend cache.
See: https://forum.rclone.org/t/rclone-rc-backend-command-not-working-as-expected/18834
Before this change writing with the `all` policy deadlocked while
uploading.
This change fixes the problem by fixing the multi reader, closing the
pipes at the correct time with the correct error. This is factored out
into a new function as it was needed in two places.
This patch also adds a new test which tests the `all` policies.
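The new helper is roughly of this shape (a sketch; the name closeWriters is an assumption, not the function added by the patch):

```go
package union

import "io"

// closeWriters closes every pipe feeding an upload with the given error
// (or nil on success). Doing this at the right moment unblocks the
// readers on the other end of the pipes instead of leaving them waiting
// forever, which is what caused the deadlock.
func closeWriters(writers []*io.PipeWriter, err error) {
	for _, w := range writers {
		_ = w.CloseWithError(err) // CloseWithError(nil) behaves like Close()
	}
}
```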
Before this fix we were reading the hash from the upload using the
string "ETag", however Go's net/http library normalises header names
into their canonical form "Etag", so we were in fact always reading an
empty string.
This bug was introduced in
aeea4430d5 swift: efficiency: slim Object and reduce requests on upload
It was spotted by the integration tests.
The fix was just to use the canonical form "Etag" instead of "ETag".
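The canonicalisation can be seen with net/http directly (an illustration of the cause, not the swift backend's code):

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	h := http.Header{}
	h.Set("ETag", `"d41d8cd98f00b204e9800998ecf8427e"`)

	fmt.Println(http.CanonicalHeaderKey("ETag")) // Etag
	fmt.Println(h["Etag"])                       // the stored value
	fmt.Println(h["ETag"])                       // [] - nothing under the non-canonical key
}
```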
In this commit:
a2afa9aadd fs: Add directory to optional Purge interface
We failed to encrypt the directory name so the Purge failed.
This was spotted by the integration tests.
In this commit:
cbf3d43561 drive: fix missing items when listing using --fast-list / ListR
We introduced a bug where under specific circumstances it could cause
a "panic: send on closed channel".
This was caused by:
- rclone engaging the workaround from the commit above
- one of the listing routines returning an error
- this caused the `in` channel to be closed to stop the readers
- however the workaround was recycling entries back into the `in` channel at the time
- hence the panic on closed channel
This fix factors out the sending to the `in` channel into `sendJob`
and calls this both from the master goroutine and the list
runners. `sendJob` properly detects when the `in` channel has been
closed and also deals correctly with contention on the `in` channel.
Fixes #4511
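In outline the safe pattern looks like this (a sketch of the shape of `sendJob`; the names and the quit channel are assumptions for illustration, not the exact code in the fix):

```go
package listing

import (
	"errors"
	"sync"
)

var errListingStopped = errors.New("listing stopped")

// lister shows the safe shape: an error is signalled by closing quit
// rather than by closing in directly, and in is only closed once every
// sender has finished, so sendJob can never send on a closed channel.
type lister struct {
	in      chan string
	quit    chan struct{}
	senders sync.WaitGroup
}

// sendJob is the single place jobs are sent from, used both by the
// master goroutine and by the list runners recycling work back into in.
func (l *lister) sendJob(job string) error {
	select {
	case <-l.quit:
		return errListingStopped
	case l.in <- job:
		return nil
	}
}

// stop tells the senders to finish, waits for them, and only then closes
// in so the workers reading from it terminate cleanly.
func (l *lister) stop() {
	close(l.quit)
	l.senders.Wait()
	close(l.in)
}
```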
When using `rclone authorize` the hostname doesn't get set in the
config file.
This commit allows it to be set in the configurator and gives the user
a hint that it needs setting.
This reverts part of
151f03378f s3: fix upload of single files into buckets without create permission
This erroneously assumed that a HEAD request on a non-existent object
would return "NotFound" only if the bucket was found. In fact it returns
"NotFound" when the bucket isn't found as well.
This will break the fix for #4297, however that can be made to work
using the new --s3-assume-bucket-exists flag.
Before this change, rclone was looking for the file without the
extension to see if it existed, which meant that it never did.
This change checks that the destination file exists first, before
removing the extension.
Google Drive appears to no longer be copying the modification time of
Google Docs.
Setting the mod time immediately after the copy doesn't work either,
so this patch copies the object, waits for 1 second and then sets the
modtime.
Fixes #4517
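In outline the workaround is (a sketch; the object interface and the doCopy parameter stand in for rclone's fs.Object and the backend's copy call):

```go
package drive

import (
	"context"
	"time"
)

// object is the minimal slice of rclone's fs.Object used here
// (illustrative, not the real interface definition).
type object interface {
	ModTime(ctx context.Context) time.Time
	SetModTime(ctx context.Context, t time.Time) error
}

// copyDocWithModTime copies a Google Doc via the supplied copy function,
// waits a second, and then sets the modtime, because Drive ignores a
// SetModTime issued immediately after the copy.
func copyDocWithModTime(ctx context.Context, src object, doCopy func(context.Context) (object, error)) (object, error) {
	dst, err := doCopy(ctx)
	if err != nil {
		return nil, err
	}
	time.Sleep(time.Second)
	if err := dst.SetModTime(ctx, src.ModTime(ctx)); err != nil {
		return nil, err
	}
	return dst, nil
}
```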
This was only working for files in the root directory and wasn't
taking the encoding into account.
This is fixed by using NewObject, which takes both things into account,
and by making the share by ID instead of by path.
This problem was spotted by the integration tests.