Before this change --compare-dest and --copy-dest would check whether
the compare/copy object existed before checking whether the
destination object was present.
This is inefficient, because in most --copy-dest syncs the destination
will be present and the compare/copy object need never be tested.
--compare-dest syncs may also be sped up if they are done to the
same directory repeatedly.
This fixes the problem by rearranging the logic so that if the
transfer is not needed the compare/copy object is never tested.
See: https://forum.rclone.org/t/union-with-copy-dest-enabled-is-slower-than-expected/32172
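A minimal sketch of the reordered decision (hypothetical helper shapes, not the actual sync code):

```go
package example

// shouldTransfer is a sketch of the reordered logic: dstUpToDate is the
// (cheap) result of checking the destination, while compareDestMatch is only
// evaluated when a transfer would otherwise be needed, so the
// --compare-dest/--copy-dest object is never tested in the common case.
func shouldTransfer(dstUpToDate bool, compareDestMatch func() bool) bool {
	if dstUpToDate {
		return false // destination already good: skip the compare/copy lookup
	}
	if compareDestMatch() {
		return false // satisfied by the compare/copy destination
	}
	return true
}
```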
Before this change using --max-duration and --cutoff-mode soft would
work like --cutoff-mode hard.
This bug was introduced in the commit below, which made transfers
cancelable - before that, transfers couldn't be canceled.
122a47fba6 accounting: Allow transfers to be canceled with context #3257
This change adds the timeout to the input context used for reading
files rather than to the transfer context, so the file transfers
themselves aren't canceled if --cutoff-mode soft is in action.
This also adds a test for --cutoff-mode soft with --max-duration,
which was missing.
See: https://forum.rclone.org/t/max-duration-and-retries-not-working-correctly/27738
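A hedged sketch of the split between the two contexts (the function and parameter names are illustrative, not rclone's internals):

```go
package example

import (
	"context"
	"time"
)

// runSync sketches the split: maxDuration only bounds the context used to
// read/queue input files (inCtx); transfers run on the parent context with
// no deadline, so in soft cutoff mode in-flight transfers are not canceled.
func runSync(ctx context.Context, maxDuration time.Duration,
	listInput func(context.Context) []string,
	transfer func(context.Context, string)) {

	inCtx, cancel := context.WithTimeout(ctx, maxDuration)
	defer cancel()

	for _, item := range listInput(inCtx) { // stops producing work when inCtx expires
		transfer(ctx, item) // transfer context: no --max-duration deadline
	}
}
```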
In commit
3ccf222acb sync: overlap check is now filter-sensitive
the tests were attempting to write invalid objects on some backends
due to a leading / on the object name.
This fix also adds a few more test cases and makes sure the tests can
be run individually.
Previously, the overlap check was based on simple prefix checks of
the source and destination paths. Now it actually checks whether the
destination is excluded via any filter rule or an --exclude-if-present
marker file.
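A rough sketch of old versus new behaviour, with a hypothetical included() callback standing in for rclone's real filter API:

```go
package example

import "strings"

// overlapsNaive is the old behaviour: a pure prefix test on the two roots.
func overlapsNaive(srcRoot, dstRoot string) bool {
	return strings.HasPrefix(dstRoot, srcRoot) || strings.HasPrefix(srcRoot, dstRoot)
}

// overlapsFiltered sketches the new behaviour: a prefix match only counts as
// an overlap if the overlapping destination is actually included by the
// active filters (it could instead be excluded by a rule or by an
// --exclude-if-present marker file).
func overlapsFiltered(srcRoot, dstRoot string, included func(path string) bool) bool {
	return overlapsNaive(srcRoot, dstRoot) && included(dstRoot)
}
```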
Before this change, if the --max-duration limit was reached then
rclone would retry the sync as a fatal error wasn't raised.
This fix checks the deadline at the end of the sync and raises a
fatal error if it has been exceeded.
Fixes #6002
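A minimal sketch of the end-of-sync check, assuming the context carries the --max-duration deadline and using rclone's fserrors.FatalError to mark the error as non-retryable (not the actual sync code):

```go
package example

import (
	"context"
	"errors"

	"github.com/rclone/rclone/fs/fserrors"
)

// checkMaxDuration sketches the end-of-sync check: if the context that
// carried the --max-duration deadline has expired, return a fatal error so
// the sync is not retried.
func checkMaxDuration(ctx context.Context, syncErr error) error {
	if errors.Is(ctx.Err(), context.DeadlineExceeded) {
		return fserrors.FatalError(errors.New("max transfer duration reached as set by --max-duration"))
	}
	return syncErr
}
```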
Previously only the fs being checked was passed to
GetModifyWindow(). However, in most tests the test files are
generated in the local fs and transferred to the remote fs, so the
local fs time precision has to be taken into account.
This meant that on Windows the time tests failed, because the
local fs has a time precision of 100ns. Checking remote items uploaded
from the local fs on Windows also requires a modify window of 100ns.
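For illustration, a sketch of such a comparison; it assumes fs.GetModifyWindow accepts several filesystems and returns a window covering the coarsest precision, and the surrounding test helper is hypothetical:

```go
package example

import (
	"context"
	"testing"
	"time"

	"github.com/rclone/rclone/fs"
)

// checkTimes sketches the fixed comparison: both the local fs (where the
// test files are generated) and the remote under test are passed to
// fs.GetModifyWindow, so the window reflects the coarser precision of the
// two - 100ns for the local filesystem on Windows.
func checkTimes(ctx context.Context, t *testing.T, fLocal, fRemote fs.Info, want, got time.Time) {
	window := fs.GetModifyWindow(ctx, fLocal, fRemote)
	diff := got.Sub(want)
	if diff < 0 {
		diff = -diff
	}
	if diff > window {
		t.Errorf("times differ by %v, more than the modify window %v", diff, window)
	}
}
```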
This is possible now that we no longer support go1.12 and brings
rclone into line with standard practices in the Go world.
This also removes errors.New and errors.Errorf from lib/errors and
prefers the stdlib errors package over lib/errors.
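In practice that typically means wrapping with fmt.Errorf and the %w verb and inspecting with the stdlib errors.Is/errors.As, for example:

```go
package example

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

// readFile wraps its error with the stdlib %w verb instead of
// lib/errors / pkg/errors.
func readFile(path string) ([]byte, error) {
	b, err := os.ReadFile(path)
	if err != nil {
		return nil, fmt.Errorf("failed to read %q: %w", path, err)
	}
	return b, nil
}

func example() {
	_, err := readFile("/does/not/exist")
	// errors.Is/errors.As see through %w wrapping.
	if errors.Is(err, fs.ErrNotExist) {
		fmt.Println("file is missing")
	}
}
```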
The test is not applicable to uptobox, which can't upload empty files.
The test was not skipped as intended because the error was compared
directly. This fix compares the error's Cause instead, because Sync
wraps the error.
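A sketch of the idea; the unwrapping helper (github.com/pkg/errors Cause) and the sentinel name fs.ErrorCantUploadEmptyFiles are assumptions here, not a quote of the actual test:

```go
package example

import (
	"testing"

	"github.com/pkg/errors"
	"github.com/rclone/rclone/fs"
)

// skipIfEmptyUploadsUnsupported sketches the fixed check: Sync wraps the
// backend error, so the comparison is made against the root cause rather
// than the error value returned by Sync itself.
func skipIfEmptyUploadsUnsupported(t *testing.T, err error) {
	if errors.Cause(err) == fs.ErrorCantUploadEmptyFiles {
		t.Skip("backend can't upload empty files")
	}
}
```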
This change fixes the bug described below:
if a file is removed while the local backend List() runs,
the call will flag an accounting error.
The bug manifests itself if the local backend is the Sync target,
due to intrinsic concurrency.
The odds of hitting this bug depend on --checkers and --transfers.
Chunker over the local backend is affected even more, because
updating a composite object with smaller content
translates into removing chunks on the underlying file system
and involves a number of List() calls.
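A hedged sketch of how a listing can tolerate files vanishing underneath it; this is illustrative code, not the actual local backend fix:

```go
package example

import (
	"errors"
	"io/fs"
	"os"
	"path/filepath"
)

// listDir sketches a race-tolerant listing: an entry that disappears between
// reading the directory and stat-ing it (for example because the sync itself
// just deleted it) is skipped instead of being reported as an error.
func listDir(dir string) ([]fs.FileInfo, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var infos []fs.FileInfo
	for _, entry := range entries {
		info, err := os.Stat(filepath.Join(dir, entry.Name()))
		if errors.Is(err, fs.ErrNotExist) {
			continue // removed concurrently
		}
		if err != nil {
			return nil, err
		}
		infos = append(infos, info)
	}
	return infos, nil
}
```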
Before this change, a sync which was finished with a graceful transfer
cutoff could return "context canceled" instead of the correct error.
This fixes the problem by ignoring "context canceled" errors if we
have done a graceful stop.
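A minimal sketch of the idea, with gracefulStop standing in for the state the real sync tracks:

```go
package example

import (
	"context"
	"errors"
)

// finalError sketches the fix: a "context canceled" error is ignored when the
// cancellation was our own graceful stop, so the sync reports its real result
// instead.
func finalError(err error, gracefulStop bool) error {
	if gracefulStop && errors.Is(err, context.Canceled) {
		return nil
	}
	return err
}
```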
This is done by making fs.Config private and attaching it to the
context instead.
The Config should be obtained with fs.GetConfig, and fs.AddConfig
should be used to get a new mutable copy of the config.
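For example (a sketch; the ConfigInfo field used here, Transfers, is just an assumed illustration):

```go
package example

import (
	"context"

	"github.com/rclone/rclone/fs"
)

func tweakConfig(ctx context.Context) context.Context {
	// Read the config currently attached to (or defaulted for) the context.
	ci := fs.GetConfig(ctx)
	_ = ci.Transfers

	// Get a new context carrying a mutable copy of the config; changes only
	// affect operations that use newCtx.
	newCtx, newCI := fs.AddConfig(ctx)
	newCI.Transfers = 8
	return newCtx
}
```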
This adds a context.Context parameter to NewFs and related calls.
This is necessary as part of reading config from the context -
backends need to be able to read the global config.
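For example, callers now pass a context which carries the config the backend should use (a sketch with a placeholder remote name):

```go
package example

import (
	"context"

	"github.com/rclone/rclone/fs"
)

// openRemote shows the new call shape: the context carries the config (see
// fs.GetConfig above), so the backend can read it while constructing the Fs.
// "remote:path" is a placeholder remote.
func openRemote(ctx context.Context) (fs.Fs, error) {
	return fs.NewFs(ctx, "remote:path")
}
```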
Before this fix --cutoff-mode soft and cautious would emit a Fatal
error which stopped the sync immediately.
This fix introduces a new error which is checked in the sync error
processing which stops the sync gracefully.
Fixes #4576
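A hedged sketch of the pattern; the sentinel name is illustrative, not rclone's actual identifier:

```go
package example

import "errors"

// errSoftCutoff is an illustrative sentinel: it is returned when
// --cutoff-mode soft or cautious trips, and the sync error processing
// recognises it and winds the sync down gracefully instead of treating it
// as a fatal error.
var errSoftCutoff = errors.New("max transfer limit reached, stopping gracefully")

func handleSyncError(err error) (stopGracefully bool, out error) {
	if errors.Is(err, errSoftCutoff) {
		return true, nil
	}
	return false, err
}
```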
Before this change `--track-renames-strategy` was broken. The hashing
method it used could declare times that were very close together to be
different.
The time hash was discarded and instead we check the modification time
window on every hash match.
Provided that the user doesn't use `--track-renames-strategy` on a
huge number of identically sized files, this will perform just fine.
See: https://forum.rclone.org/t/track-renames-strategy-modtime-doesnt-work/16992/5
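A sketch of the window check done on each hash match, assuming a helper of this shape rather than the real sync internals:

```go
package example

import "time"

// sameModTime sketches the check done on every hash match: two modification
// times count as equal when they differ by no more than the modify window,
// rather than relying on a hash of the time (which split times that were
// very close together).
func sameModTime(a, b time.Time, window time.Duration) bool {
	diff := a.Sub(b)
	if diff < 0 {
		diff = -diff
	}
	return diff <= window
}
```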
If your filenames contain two near-identical Unicode characters,
rclone will normalize these, making them identical. This flag
gives you the ability to keep them unique. This might
create unintended side effects, such as duplicating files that
contain certain Unicode characters, when downloading them from
certain cloud providers to a macOS filesystem.
Fixes #4228
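For reference, the normalization in question is Unicode composition folding; a small sketch with golang.org/x/text/unicode/norm shows how two visually identical names compare equal once normalized:

```go
package example

import (
	"fmt"

	"golang.org/x/text/unicode/norm"
)

func normalizationExample() {
	a := "caf\u00e9"  // 'é' as one precomposed code point (NFC form)
	b := "cafe\u0301" // 'e' followed by a combining acute accent (NFD form)
	fmt.Println(a == b)                                   // false: different bytes
	fmt.Println(norm.NFC.String(a) == norm.NFC.String(b)) // true once normalized
}
```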