This change ensures we call the Shutdown method on backends when
they drop out of the fs/cache and at program exit.
Some backends implement the optional fs.Shutdowner interface. Until now,
Shutdown was only checked and called when a backend was wrapped (e.g.
by crypt, compress, ...).
To have a general way to perform operations at the end of the backend
lifecycle with proper error handling, we can call Shutdown at cache
clear time.
We add a finalize hook to the cache which will be called when values
drop out of the cache.
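A minimal sketch of the finalize-hook idea, using a simplified cache type rather than rclone's lib/cache (names here are illustrative):

```go
package example

import (
	"context"
	"log"
)

// Shutdowner mirrors the optional interface mentioned above.
type Shutdowner interface {
	Shutdown(ctx context.Context) error
}

// Cache holds values and calls a finalize hook when they drop out.
type Cache struct {
	entries  map[string]interface{}
	finalize func(value interface{})
}

// SetFinalizer registers the hook called on eviction.
func (c *Cache) SetFinalizer(fn func(value interface{})) { c.finalize = fn }

// evict removes a value and runs the finalize hook on it.
func (c *Cache) evict(key string) {
	if v, ok := c.entries[key]; ok {
		delete(c.entries, key)
		if c.finalize != nil {
			c.finalize(v)
		}
	}
}

// installShutdownFinalizer wires the hook up to Shutdown as described above.
func installShutdownFinalizer(c *Cache) {
	c.SetFinalizer(func(value interface{}) {
		if s, ok := value.(Shutdowner); ok {
			if err := s.Shutdown(context.Background()); err != nil {
				log.Printf("Shutdown failed: %v", err)
			}
		}
	})
}
```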
Previous discussion: https://forum.rclone.org/t/31336
This was caused by nested calls to NewTransfer/Done.
This fixes the problem by only incrementing transfers if the remote is
present in the transferMap which means we only increment it once.
Before this change using --max-duration and --cutoff-mode soft would
work like --cutoff-mode hard.
This bug was introduced in this commit which made transfers be
cancelable - before that transfers couldn't be canceled.
122a47fba6 accounting: Allow transfers to be canceled with context #3257
This change adds the timeout to the input context for reading files
rather than the transfer context so the file transfers themselves
aren't canceled if --cutoff-mode soft is in action.
This also adds a test for cutoff mode soft and max duration which was
missing.
See: https://forum.rclone.org/t/max-duration-and-retries-not-working-correctly/27738
In commit
3ccf222acb sync: overlap check is now filter-sensitive
The tests were attempting to write invalid objects on some backends
due to a leading / on the object name.
This fix also adds a few more test cases and makes sure the tests can
be run individually.
In commit
da404dc0f2 sync,copy: Fix --fast-list --create-empty-src-dirs and --exclude
The fix caused DirTree.AddDir to be called with the root directory.
This in turn caused a spurious directory entry in the DirTree which
caused tests with the --fast-list flag to fail with directory not found
errors.
Differentiate the output of the 'config show remote' command from the listing
of options shown during interactive config, for consistency: 'config show remote'
is now consistent with 'config show', while the listing in interactive config is
consistent with other interactive output.
See #6211
Previously, the overlap check was based on simple prefix checks of the source and destination paths. Now it actually checks whether the destination is excluded by any filter rule or by an --exclude-if-present file.
Before this change, if --fast-list was in use while doing a sync or
copy with --create-empty-src-dirs and --exclude excluded all the files
from the directory (but not the directory), then the directory would
not be created.
This is also visible with `rclone tree` which uses the same tree
building approach as `rclone sync --fast-list` where the directories
would go missing from the tree view.
This was caused by not adding the parents of excluded files to the
directory tree.
See: https://forum.rclone.org/t/create-empty-src-dirs-issue-with-b2/30856
strings.ReplaceAll(s, old, new) is a wrapper function for
strings.Replace(s, old, new, -1). But strings.ReplaceAll is more
readable and removes the hardcoded -1.
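For illustration, the two calls are equivalent:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	s := "a/b/c"
	fmt.Println(strings.Replace(s, "/", "-", -1)) // a-b-c, -1 means "all"
	fmt.Println(strings.ReplaceAll(s, "/", "-"))  // a-b-c, no magic count
}
```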
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
Cloudflare R2 doesn't support range options like `Range: bytes=21-`.
This patch makes FixRangeOption turn a SeekOption into an absolute
RangeOption like this `Range: bytes=21-25` to interoperate with R2.
See: #5642
Before this change FixRangeOption was leaving `Range: bytes=21-`
alone, thus not fulfilling its contract of making Range requests
absolute.
As it happens this form isn't supported by Cloudflare R2.
After this change the request is normalised to `Range: bytes=21-25`.
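A minimal sketch of the normalisation described above, assuming the total object size is known at this point (the helper name is illustrative, not rclone's FixRangeOption itself):

```go
package main

import "fmt"

// absoluteRange turns an open-ended offset into an absolute byte range,
// e.g. offset 21 on a 26 byte object becomes "bytes=21-25".
func absoluteRange(offset, size int64) string {
	return fmt.Sprintf("bytes=%d-%d", offset, size-1)
}

func main() {
	fmt.Println("Range: " + absoluteRange(21, 26)) // Range: bytes=21-25
}
```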
See: #5642
Similar to fs.Duration but parses into a timestamp instead.
Supports parsing from:
* Any of the date formats in parseTimeDates
* A time.Duration offset from now
* parseDurationSuffixes offset from now
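A rough sketch of the fallback order, with illustrative layouts standing in for parseTimeDates and a plain time.ParseDuration standing in for the suffix parser; interpreting the offset as going back from now is an assumption:

```go
package main

import (
	"fmt"
	"time"
)

// parseTime accepts an absolute date or a duration offset back from now.
func parseTime(s string, now time.Time) (time.Time, error) {
	// Try a couple of absolute date layouts first.
	for _, layout := range []string{time.RFC3339, "2006-01-02"} {
		if t, err := time.Parse(layout, s); err == nil {
			return t, nil
		}
	}
	// Fall back to a time.Duration, e.g. "1h30m".
	if d, err := time.ParseDuration(s); err == nil {
		return now.Add(-d), nil // assumption: offsets count back from now
	}
	return time.Time{}, fmt.Errorf("couldn't parse %q as a time", s)
}

func main() {
	t, err := parseTime("1h30m", time.Now())
	fmt.Println(t, err)
}
```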
When this code was originally implemented os.UserCacheDir wasn't
public so this used a copy of the code. This commit replaces that now
out of date copy with a call to the now public stdlib function.
golang.org/x/crypto/ssh/terminal is deprecated in favor of
golang.org/x/term, see https://pkg.go.dev/golang.org/x/crypto/ssh/terminal
The latter also supports ReadPassword on solaris, so enable the
respective functionality in fs/config for solaris as well.
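For example, reading a password without echo through the replacement package:

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/term"
)

func main() {
	fmt.Print("Password: ")
	pw, err := term.ReadPassword(int(os.Stdin.Fd()))
	fmt.Println()
	if err != nil {
		fmt.Fprintln(os.Stderr, "reading password failed:", err)
		os.Exit(1)
	}
	fmt.Printf("read %d bytes\n", len(pw))
}
```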
Before this change if the timezone was omitted in a
--min-age/--max-age time specifier then rclone defaulted to a UTC
timezone.
This is documented as using the local timezone if the time zone
specifier is omitted, which is a much more useful default, and this
patch corrects the implementation to agree with the documentation.
See: https://forum.rclone.org/t/problem-utc-windows-europe-1-summer-problem/29917
There has been a desire from more advanced rclone users to have regexp
filtering as well as the glob filtering.
This patch adds regexp filtering using this syntax `{{ regexp }}`
which was previously a syntax error, so the change is backwards compatible.
This means regexps can be used everywhere globs can be used, and that
they also can be mixed with globs in the same pattern, eg `*.{{jpe?g}}`
Before this change, if the --max-duration limit was reached then
rclone would retry the sync as a fatal error wasn't raised.
This checks the deadline and raises a fatal error if necessary at the
end of the sync.
Fixes #6002
This allows a backend to have multiple aliases. These aliases are
hidden from `rclone config` and the command line flags are hidden from
the user. However the flags, environment variables and config for the
alias will work just fine.
The directory created by `T.TempDir` is automatically removed when the
test and all its subtests complete.
Reference: https://pkg.go.dev/testing#T.TempDir
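For illustration, a test using it looks like this:

```go
package example

import (
	"os"
	"path/filepath"
	"testing"
)

func TestWritesTempFile(t *testing.T) {
	dir := t.TempDir() // removed automatically when the test completes
	path := filepath.Join(dir, "file.txt")
	if err := os.WriteFile(path, []byte("hello"), 0o600); err != nil {
		t.Fatal(err)
	}
}
```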
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
Previously an empty input (just pressing enter) was only allowed for multiple-choice
options that did not have the Exclusive property set. With this change the existing
Required property is introduced into the multiple choice handling, so that one can have
Exclusive and Required options where only a value from the list is allowed, and one can
have Exclusive but not Required options where an empty value is accepted but any
non-empty value must still match an item from the list.
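A simplified sketch of that rule; Option here is a local stand-in keeping only the two properties under discussion, not the real config option type:

```go
package example

import "errors"

type Option struct {
	Required  bool     // empty input is rejected
	Exclusive bool     // non-empty input must match one of Examples
	Examples  []string // the multiple-choice values offered to the user
}

// validateInput applies the Exclusive/Required combination to user input.
func validateInput(opt Option, in string) error {
	if in == "" {
		if opt.Required {
			return errors.New("this option is required - enter a value")
		}
		return nil // Exclusive but not Required: empty is accepted
	}
	if opt.Exclusive {
		for _, ex := range opt.Examples {
			if in == ex {
				return nil
			}
		}
		return errors.New("value must be one of the listed choices")
	}
	return nil
}
```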
Fixes #5549
See #5551
This patch adds an rclone_http_status_code counter vector labeled by
* host,
* method,
* code.
It makes it possible to see HTTP errors, backoffs etc.
The Metrics struct is designed for extensibility.
Adding new metrics is a matter of adding them to the Metrics struct and including them in the response handling.
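A sketch of the counter vector using the Prometheus Go client; how it is wired into rclone's Metrics struct and HTTP transport is not shown here:

```go
package metrics

import (
	"strconv"

	"github.com/prometheus/client_golang/prometheus"
)

var httpStatusCodes = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Name: "rclone_http_status_code",
		Help: "Count of HTTP responses by host, method and status code",
	},
	[]string{"host", "method", "code"},
)

func init() {
	prometheus.MustRegister(httpStatusCodes)
}

// recordResponse increments the counter for one HTTP response.
func recordResponse(host, method string, statusCode int) {
	httpStatusCodes.WithLabelValues(host, method, strconv.Itoa(statusCode)).Inc()
}
```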
This feature has been discussed in the forum [1].
[1] https://forum.rclone.org/t/prometheus-metrics/14484
Whenever transfer.Account() is called, a new goroutine acc.averageLoop()
is started. This goroutine exits only when the channel acc.exit is closed.
acc.exit is closed when acc.Done() is called, which happens during tr.Done().
However, if tr.Reset is called during a copy low level retry, it replaces
the tr.acc, without calling acc.Done(), which results in the goroutine
mentioned above never exiting.
This commit calls acc.Done() during a tr.Reset().
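A minimal sketch of the lifecycle being fixed, with a simplified Account type (not rclone's real implementation):

```go
package example

import "time"

type Account struct {
	exit chan struct{} // closed by Done() to stop averageLoop
}

func newAccount() *Account {
	acc := &Account{exit: make(chan struct{})}
	go acc.averageLoop()
	return acc
}

// averageLoop runs until acc.exit is closed; if Done() is never called the
// goroutine leaks, which is what happened when Reset() swapped the Account.
func (acc *Account) averageLoop() {
	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			// update the moving average of the transfer speed
		case <-acc.exit:
			return
		}
	}
}

// Done stops averageLoop.
func (acc *Account) Done() { close(acc.exit) }
```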
This was necessary because go1.14 seems to have a modules related bug
which means it tries to build modules even though the uses of them are
all disabled with build constraints. This seems to be fixed in go1.15.
Previously only the fs being checked got passed to
GetModifyWindow(). However, in most tests, the test files are
generated in the local fs and transferred to the remote fs. So the
local fs time precision has to be taken into account.
This meant that on Windows the time tests failed because the
local fs has a time precision of 100ns. Checking remote items uploaded
from local fs on Windows also requires a modify window of 100ns.
Using the stdlib errors package is possible now that we no longer
support go1.12 and brings rclone into line with standard practice in
the Go world.
This also removes errors.New and errors.Errorf from lib/errors and
prefers the stdlib errors package over lib/errors.
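For example, wrapping and inspecting errors with the stdlib:

```go
package main

import (
	"errors"
	"fmt"
	"os"
)

func main() {
	_, err := os.Open("/no/such/file")
	wrapped := fmt.Errorf("failed to open config: %w", err)
	if errors.Is(wrapped, os.ErrNotExist) {
		fmt.Println("file is missing:", wrapped)
	}
}
```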
The test is not applicable to uptobox, which can't upload empty files.
The test was not skipped as intended because the error was compared directly.
This fix compares the error Cause instead, because Sync wraps the error.
This was caused by
7a1cab57b6 cmd/hashsum: dont put ERROR or UNSUPPORTED in output
And was picked up in the integration tests.
This patch no longer calls the HashLister for unsupported hash types.
This changes the interface to NewObject so that if NewObject is called
on a directory then it should return fs.ErrorIsDir if possible without
doing any extra work, otherwise fs.ErrorObjectNotFound.
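A sketch of that contract using simplified local types and error values standing in for fs.ErrorIsDir and fs.ErrorObjectNotFound:

```go
package example

import (
	"context"
	"errors"
)

var (
	errIsDir          = errors.New("is a directory not a file") // stands in for fs.ErrorIsDir
	errObjectNotFound = errors.New("object not found")          // stands in for fs.ErrorObjectNotFound
)

type entry struct {
	remote string
	isDir  bool
}

type Fs struct {
	entries map[string]entry
}

// NewObject returns errIsDir for directories without doing any extra work,
// otherwise errObjectNotFound when nothing matches.
func (f *Fs) NewObject(ctx context.Context, remote string) (*entry, error) {
	e, ok := f.entries[remote]
	if !ok {
		return nil, errObjectNotFound
	}
	if e.isDir {
		return nil, errIsDir
	}
	return &e, nil
}
```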
Tested on integration test server with:
go run integration-test.go -tests backend -run TestIntegration/FsMkdir/FsPutFiles/FsNewObjectDir -branch fix-stat -maxtries 1
Google Drive API allows for clauses like "modifiedTime > '2012-06-04T12:00:00'"
in the query param, so the filter flags --max-age and --min-age can be applied
directly at the directory listing phase rather than in a filter.
This is extremely helpful when we want to do an incremental backup of a remote
drive with many files but the number of recently changed files is small.
Co-authored-by: fotile96 <fotile96@users.noreply.github.com>
This patch will:
- add --daemon-wait flag to control the time to wait for background mount
- remove dependency on sevlyar/go-daemon and implement backgrounding directly
- avoid setsid during backgrounding as it can result in a race under Automount
- provide a fallback PATH to correctly run `fusermount` under systemd as it
runs mount units without standard environment variables
- correctly handle ^C pressed while the background process is being set up
This replaces the built-in os.MkdirAll with a patched version that stops the recursion
when reaching the volume part of the path. The original version would continue recursion,
and for extended length paths end up with \\? as the top-level directory, and the error
message would then be something like:
mkdir \\?: The filename, directory name, or volume label syntax is incorrect.
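A simplified sketch of the stop condition, using filepath.VolumeName to detect the volume part; this is illustrative, not the exact patched implementation:

```go
package example

import (
	"os"
	"path/filepath"
)

// mkdirAll recurses like os.MkdirAll but stops once only the volume is left.
func mkdirAll(path string, perm os.FileMode) error {
	if path == filepath.VolumeName(path) {
		return nil // reached the volume, e.g. `\\?\C:` - stop recursing
	}
	parent := filepath.Dir(path)
	if parent != path {
		if err := mkdirAll(parent, perm); err != nil {
			return err
		}
	}
	err := os.Mkdir(path, perm)
	if err != nil && !os.IsExist(err) {
		return err
	}
	return nil
}
```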
Before this fix, on Windows, the --bwlimit would max out at 2.5Gbps
even when set to 10 Gbps.
This turned out to be because of the maximum token bucket size.
This fix scales up the token bucket size linearly above a bwlimit of
2Gbps.
Fixes #5507
Before this fix, saving a :backend config gave the error
Can't save config "token" = "XXX" for on the fly backend ":backend"
Even when using the in-memory config `--config ""`
This fixes the problem by
- always using the in memory config if it is configured
- moving the check for a :backend config save to the file config backend
It also removes the contents of the config items being saved from the
log which saves confidential tokens being logged.
Fixes #5451
Currently rclone check supports matching two file trees by sizes and hashes.
This change adds support for SUM files produced by GNU utilities like sha1sum.
Fixes #1005
Note: checksum by default checks, hashsum by default prints sums.
The new flag is named "--checkfile" but carries the hash name.
Summary of introduced command forms:
```
rclone check sums.sha1 remote:path --checkfile sha1
rclone checksum sha1 sums.sha1 remote:path
rclone hashsum sha1 remote:path --checkfile sums.sha1
rclone sha1sum remote:path --checkfile sums.sha1
rclone md5sum remote:path --checkfile sums.md5
```
This change fixes the bug described below:
if a file is removed while the local backend List() runs,
the call will flag an accounting error.
The bug manifests itself if local backend is the Sync target
due to intrinsic concurrency.
The odds of hitting this bug depend on --checkers and --transfers.
Chunker over local backend is affected even more because
updating a composite object with a smaller size content
translates into removing chunks on the underlying file system
and involves a number of List() calls.
Some environment variables didn't behave like their corresponding
command line flags. The affected flags were --stats, --log-level,
--separator, --multi-thread-streams, --rc-addr, --rc-user and --rc-pass.
Example:
RCLONE_STATS='10s'
rclone check remote: remote: --progress
# Expected: rclone check remote: remote: --progress --stats=10s
# Actual: rclone check remote: remote: --progress
Remote specific options set by environment variables were overruled by
less specific backend options set by environment variables. Example:
RCLONE_DRIVE_USE_TRASH='false'
RCLONE_CONFIG_MYDRIVE_USE_TRASH='true'
rclone deletefile myDrive:my-test-file
# Expected: my-test-file is recoverable in the trash folder
# Actual: my-test-file is permanently deleted (not recoverable)
Backend specific options set by environment variables were overruled by
general backend options set by environment variables. Example:
RCLONE_SKIP_LINKS='true'
RCLONE_LOCAL_SKIP_LINKS='false'
rclone lsd local:
# Expected result: Warnings when symlinks are skipped
# Actual result: No warnings when symlinks are skipped
# That is RCLONE_SKIP_LINKS takes precedence
The above issues have been fixed.
The debug logging (-vv) has been enhanced to show when flags are set by
environment variables.
The documentation has been enhanced with details on the precedence of
configuration options.
See pull request #5341 for more information.
Nothing is added or removed and no package is renamed by this change.
Just rearrange definitions between source files in the fs directory.
New source files:
- types.go Filesystem types and interfaces
- features.go Features and optional interfaces
- registry.go Filesystem registry and backend options
- newfs.go NewFs and its helpers
- configmap.go Getters and Setters for ConfigMap
- pacer.go Pacer with logging and calculator
The final fs.go contains what is left.
Also rename options.go to open_options.go
to dissociate from registry options.
- Unify all hash names as lowercase alphanumerics without punctuation.
- Legacy names continue to work but disappear from docs; they can be deprecated or dropped later.
- Make rclone hashsum print supported hash list in case of wrong spelling.
- Update documentation.
Fixes #5071
Fixes #4841
This also factors the config questions into a state based mechanism so
a backend can be configured using the same dialog as rclone config but
remotely.
This is a very large change which turns the post Config function in
backends into a state based call and response system so that
alternative user interfaces can be added.
The existing config logic has been converted, but it is quite
complicated and follow-up commits will likely be needed to fix it!
Follow up commits will add a command line and API based way of using
this configuration system.
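A generic sketch of the call-and-response shape this describes; the real fs.ConfigIn/ConfigOut types differ in detail:

```go
package example

// ConfigIn carries the current state plus the user's answer to the last question.
type ConfigIn struct {
	State  string
	Result string
}

// ConfigOut either asks another question (with the next State) or, when State
// is empty, signals that configuration is complete.
type ConfigOut struct {
	State    string
	Question string
}

// configStep is one round of the state machine a backend implements.
func configStep(in ConfigIn) ConfigOut {
	switch in.State {
	case "":
		return ConfigOut{State: "region", Question: "Enter the region"}
	case "region":
		// in.Result holds the answer to the previous question.
		return ConfigOut{} // done
	default:
		return ConfigOut{}
	}
}
```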
Includes adding support for additional size input suffix Mi and MiB, treated equivalent to M.
Extends binary suffix output with letter i, e.g. Ki and Mi.
Centralizes creation of bit/byte unit strings.
Restructuring of the config code in v1.55 resulted in the config
file being loaded early at process startup. If the configuration
file is encrypted this means the user will need to supply the password,
even when running commands that do not use the config.
This also led to an issue where mount with --daemon failed to
decrypt the config file when it had to prompt the user for the password.
Fixes #5236
Fixes #5228
Use %AppData% as the primary default for the configuration file on Windows,
which is more in line with Windows standards, while the existing default
of using the home directory is more in line with Unix standards - though that
made rclone more consistent across different OSes.
Fixes #4667
Before this change any backends which required extra config in the
oauth phase (like the `region` for zoho) didn't work with `rclone
authorize`.
This change serializes the extra config and passes it to `rclone
authorize` and returns new config items to be set from rclone
authorize.
`rclone authorize` will still accept its previous configuration
parameters for use with old rclones.
Fixes #5178
Before this change, a sync which was finished with a graceful transfer
cutoff could return "context canceled" instead of the correct error.
This fixes the problem by ignoring "context canceled" errors if we
have done a graceful stop.
Before this change, on the first attempt to create a backend we used a
non-canonicalized string. When the backend expired the second attempt
to create it would use the canonicalized string (because it was in the
remap cache) which would fail because it was now `name{XXXX}:`
This change makes sure that whenever we create a backend we always use
the non-canonicalized string.
See: https://forum.rclone.org/t/connection-string-inconsistencies-on-beta/23171
This commit makes the previously statically configured fs cache configurable.
It introduces two parameters `--fs-cache-expire-duration` and
`--fs-cache-expire-interval` to control the caching of the items.
It also adds new interfaces to lib/cache to set these.
Before this change when the context was cancelled (due to
--max-duration for example) this could deadlock when uploading
multipart uploads.
This change fixes the problem by introducing another goroutine to
monitor the context and close the pipe with an error when the context
errors.
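A minimal sketch of the approach: a goroutine watches the context and closes the write end of the pipe with the context's error so the reader unblocks (names are illustrative):

```go
package example

import (
	"context"
	"io"
)

// uploadViaPipe streams data through an io.Pipe and aborts the pipe when ctx
// errors so a stalled multipart upload can't deadlock.
func uploadViaPipe(ctx context.Context, write func(w io.Writer) error, upload func(r io.Reader) error) error {
	pr, pw := io.Pipe()

	// Close the pipe with an error as soon as the context errors.
	done := make(chan struct{})
	defer close(done)
	go func() {
		select {
		case <-ctx.Done():
			_ = pw.CloseWithError(ctx.Err())
		case <-done:
		}
	}()

	go func() {
		_ = pw.CloseWithError(write(pw)) // propagate writer errors to the reader
	}()

	return upload(pr)
}
```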
In this commit
8a46dd1b57 fspath: Implement a connection string parser #4996
The parsing code was re-written. This didn't quite work as before,
failing to adjust local paths on Windows when it should.
This patch fixes the problem and implements tests for it.
This patch modifies the output of `rclone version`.
The `os/arch` line is split into `os/type` and `os/arch`.
The `go version` line is now tagged as `go/version` for consistency.
Additionally the `go/linking` line tells whether rclone
was linked as a static or dynamic executable.
The new `go/tags` line shows a space separated list of build tags.
The info about linking and build tags is also added to the output
of the `core/version` RC endpoint.
This patch adds the missing stats to the output of core/stats
- totalChecks
- totalTransfers
- totalBytes
- eta
This now includes enough information to rebuild the normal stats
output from rclone including percentage completions and ETAs.
Fixes #5116
Users have noticed that backends created via the rc have been failing
to refresh their tokens with this error:
Token refresh failed try 1/5: context canceled
This is because the rc server cancels the context used to make the
backend when the request has finished. This same context is used to
refresh the token and the oauth library checks to see if the context
has been cancelled.
This patch creates a new context for the cached backends and copies
the global and filter config into the new context.
See: https://forum.rclone.org/t/google-drive-token-refresh-failed/22283
Backends for which additional config is detected (in the config string
or on the command line or as environment variables) will gain a suffix
`{XXXX}` where `XXXX` is a base64 encoded md5 hash of the config
string.
This fixes backend caching with config string remotes.
This much requested feature now works properly:
rclone copy -vv drive,shared_with_me:file.txt drive:
This adds AddOverrideGetter and GetOverride methods to config map and
uses them in fs.ConfigMap.
This enables us to tell which values have been set and which are just
read from the config file or at their defaults.
This also deletes the unused AddGetters method in configmap.
This is implemented as a state machine parser so it can emit sensible
error messages.
It does not use the connection strings elsewhere in rclone yet - see
subsequent commits.
An optional fuzzer is implemented for the Parse function.
Before this change if a config was altered via the rc then when a new
backend was created from that config, if there was a backend already
running from the old config in the cache then it would be used instead
of creating a new backend with the new config, thus leading to
confusion.
This change flushes the fs cache of any backends based off a config
when that config is changed over the rc.
Before this change the config file needed to be explicitly reloaded.
This coupled the config file implementation with the backends
needlessly.
This change stats the config file to see if it needs to be reloaded on
every config file operation.
This allows us to remove calls to
- config.SaveConfig
- config.GetFresh
Which now makes the only needed interface to the config file be
that provided by configmap.Map when rclone is not being configured.
This also adds tests for configfile.
This change checks the context whenever rclone might retry, and
doesn't retry if the current context has an error.
This fixes the pathological behaviour of `--max-duration` refusing to
exit because all the context deadline exceeded errors were being
retried.
This unfortunately meant changing the shouldRetry logic in every
backend and doing a lot of context propagation.
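A generic sketch of the check; the real per-backend shouldRetry helpers differ:

```go
package example

import (
	"context"
	"net/http"
)

// shouldRetry refuses to retry once the context has an error (cancelled or
// deadline exceeded), otherwise falls back to the usual retry heuristics.
func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, error) {
	if ctxErr := ctx.Err(); ctxErr != nil {
		return false, ctxErr
	}
	if resp != nil && (resp.StatusCode == 429 || resp.StatusCode >= 500) {
		return true, err
	}
	return err != nil, err
}
```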
See: https://forum.rclone.org/t/add-flag-to-exit-immediately-when-max-duration-reached/22723
This change makes dedupe recursively count elements in same-named directories
and make the largest one primary. This makes it possible to minimize the amount of data
moved (or at least the number of API calls) when dedupe merges them.
It also adds a new fs.Object interface `ParentIDer` with function `ParentID` and
implements it for the drive and opendrive backends. This function returns
the parent directory ID for objects on filesystems that allow same-named dirs.
We use it to correctly count sizes of same-named directories.
Fixes #2568
Co-authored-by: Ivan Andreev <ivandeex@gmail.com>
This splits config.go into ui.go for the user interface functions and
authorize.go for the implementation of `rclone authorize`.
It also moves the tests into the correct places (including one from
obscure which was in the wrong place).
If you are using rclone as a library you can decide whether to use the
rclone config file system by calling
configfile.LoadConfig(ctx)
If you don't you will need to set `config.Data` to an implementation
of `config.Storage`.
Other changes
- change interface of config.FileGet to remove unused default
- remove MustValue from config.Storage interface
- change GetValue to return string or bool like elsewhere in rclone
- implement a default config file system which panics with helpful error
- implement getWithDefault to replace the removed MustValue
- don't embed goconfig.ConfigFile so we can change the methods
Before this change the core bandwidth limit was limited to upload or
download value if the other value was off.
This fix only applies a core bandwidth limit when both values are set.
Reapply missing bwlimiting which was inserted in
0a932dc1f2 Add --bwlimit for upload and download #1873
But accidentally removed when merging
edfe183ba2 fshttp: add DSCP support with --dscp for QoS with differentiated services
Rclone uses directory exclusions to cut down the listing it has to do,
so before this fix `--exclude dir/` would make sure nothing in `dir/`
was scanned, **except** if --fast-list was used, in which case only
the directory was excluded and everything within it was included.
This is rather unexpected, so this patch makes `--exclude dir/` be
equivalent to `--exclude dir/**`, meaning that excluding a directory
excludes it and its contents.
We can't do the same for --include without changing the semantics of
filtering slightly.
Fixes #3375
Before this change options were read and set in native format. This
means for example nanoseconds for durations or an integer for
enumerated types, which isn't very convenient for humans.
This change enables these types to be set with a string with the
syntax as used in the command line instead, so `"10s"` rather than
`10000000000` or `"DEBUG"` rather than `8` for log level.
This change decreases the edge limiter burst size which dramatically
increases the smoothness of the bandwidth limiting.
The core bandwidth limiter remains with a large burst so it isn't
affected by double rate limiting on the edge limiters.
See: #4395
See: https://forum.rclone.org/t/bwlimit-is-not-really-smooth/20947
This change uses the bwlimit code to apply limits to the receive and
transmit data functions in the HTTP Transport.
This means that all HTTP transactions will have limiting applied -
this includes listings for example.
For HTTP based transports this makes the limiting in Accounting
redundant and possibly counterproductive.
TestParseDuration relied on an elapsed time calculation which
would vary based on the system local time. Fix the test by not relying
on the system time location. Also make the test more deterministic
by injecting time in tests rather than using system time.
Fixes #4529.
Before this change attempting to return an error from core/command
failed with a 500 error and a message about unmarshalable types.
This is because it was attempting to marshal the input parameters
which get _response added to them which contains an unmarshalable
field.
This was fixed by using the original parameters in the error response
rather than the one modified during the error handling.
This also adds end to end tests for the streaming facilities as used
in core/command.
Before this change calling core/command gave the error
error: response object is required expecting *http.ResponseWriter value for key "_response" (was *http.response)
This was because the http.ResponseWriter is an interface not an object.
Removing the `*` fixes the problem.
This also indicates that this bit of code wasn't properly tested.
The message now includes the flag name to help the user work out what
is happening.
Invalid value for environment variable "RCLONE_VERSION" when setting default
for --version: strconv.ParseBool: parsing "yes": invalid syntax
This commit modifies the operations.hashSum function by adding an alternate code path. This code path is triggered by passing downloadFlag = true. When activated, rclone will download files from the remote and hash them locally. downloadFlag = false preserves the existing behavior of using the remote to retrieve the hash.
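A minimal sketch of the download-and-hash path: read the object's data stream and hash it locally (MD5 here purely for illustration) instead of asking the remote for a precomputed hash:

```go
package example

import (
	"crypto/md5"
	"encoding/hex"
	"io"
)

// localHash hashes the contents of r (e.g. a download stream from the remote).
func localHash(r io.Reader) (string, error) {
	h := md5.New()
	if _, err := io.Copy(h, r); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}
```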
This commit modifies HashLister to support the new hashSum method as well as consolidating the roles of HashLister, HashListerBase64, Md5sum, and Sha1sum. The printing of hashes from the function defined in HashLister has been revised to work with --progress. There are light changes to operations.syncFprintf and cmd.startProgress for this.
The unit test operations_test.TestHashSums is modified to support this change and test the download functionality.
The command functions hashsum, md5sum, sha1sum, and dbhashsum are modified to support this change. A download flag and an output-file flag have been added. The output-file flag writes hashes to a file instead of stdout to avoid the need to redirect stdout.
Before this fix setting an alias of `s3:bucket` then using `alias:..`
would use the current working directory!
This fix corrects the path parsing. This parsing is also used in
wrapping backends like crypt, chunker, union etc.
It does not allow looking above the root of the alias, so `alias:..`
now lists `s3:bucket` as you might expect if you did `cd /` then
`ls ..`.
Before this change rclone would upload the whole of multipart files
before receiving a message from dropbox that the path was too long.
This change hard codes the 255 rune limit and checks that before
uploading any files.
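For illustration, the limit is counted in runes rather than bytes, so a check along these lines is needed (names are illustrative, and whether it applies to each path component or the whole path is not shown here):

```go
package main

import (
	"fmt"
	"unicode/utf8"
)

const maxNameRunes = 255 // the hard coded limit mentioned above

func tooLong(name string) bool {
	return utf8.RuneCountInString(name) > maxNameRunes
}

func main() {
	fmt.Println(tooLong("short.txt")) // false
}
```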
Fixes #4805
This is done by making fs.Config private and attaching it to the
context instead.
The Config should be obtained with fs.GetConfig and fs.AddConfig
should be used to get a new mutable config that can be changed.
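A sketch of the resulting pattern, assuming the fs.GetConfig and fs.AddConfig signatures shown here; check the rclone source for the exact API:

```go
package example

import (
	"context"

	"github.com/rclone/rclone/fs"
)

func example(ctx context.Context) {
	// Read the config attached to the context.
	ci := fs.GetConfig(ctx)
	_ = ci.Transfers

	// Get a new context carrying a mutable copy and change it.
	newCtx, newCi := fs.AddConfig(ctx)
	newCi.Transfers = 8
	_ = newCtx
}
```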
This adds a context.Context parameter to NewFs and related calls.
This is necessary as part of reading config from the context -
backends need to be able to read the global config.
Fix the copy and move operations that broke in 127f0fc when copying directly
to a remote without a specific destination.
Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
Before this change we counted the final summary error as an error,
producing confusing log messages like:
Failed to check with 54 errors: last error was: 53 differences found
This change marks the summary error as already being counted, so the
error message becomes:
Failed to check with 53 errors: last error was: 53 differences found
This change also returns a listing failure in preference to a summary error.
See: https://forum.rclone.org/t/slow-checksum-validation/19763/22
Before this change
RCLONE_DRIVE_CHUNK_SIZE=111M rclone help flags | grep drive-chunk-size
Would show the default value, not the setting of RCLONE_DRIVE_CHUNK_SIZE
as the non backend flags do.
This change makes it work as expected by setting the default of the
option to the environment variable.
Fixes #4659
Adds a flag, --progress-terminal-title, that when used with --progress,
will print the string `ETA: %s` to the terminal title.
This also adds WriteTerminalTitle to lib/terminal
Before this change rclone would emit the message
--checksum is in use but the source and destination have no hashes in common; falling back to --size-only
When the source or destination hash was missing as well as when the
source and destination had no hashes in common.
This first case is very confusing for users when the source and
destination do have a hash in common.
This change fixes that and makes sure the error message is not emitted
on missing hashes even when there is a hash in common.
See: https://forum.rclone.org/t/source-and-destination-have-no-hashes-in-common-for-unencrypted-drive-to-local-sync/19531
Before this change we sorted transfers in the stats list solely on
time started. However if --check-first was in use then lots of
transfers could be started in the same millisecond. Because Windows
time resolution is only 1ms this caused the entries to sort equal and
bounce around in the list.
This change fixes the sort so that if the time is equal it uses the
name which should stabilize the order.
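For illustration, a comparison with the name as tie-breaker when start times are equal (simplified types):

```go
package example

import (
	"sort"
	"time"
)

type transfer struct {
	name    string
	started time.Time
}

func sortTransfers(ts []transfer) {
	sort.Slice(ts, func(i, j int) bool {
		if ts[i].started.Equal(ts[j].started) {
			return ts[i].name < ts[j].name // tie-break on name for a stable order
		}
		return ts[i].started.Before(ts[j].started)
	})
}
```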
Fixes #4599
Before this change the code which summed up the existing transfers
over all the stats groups forgot to add the old transfer time and old
transfers in.
This meant that the speed and elapsedTime got increasingly inaccurate
over time due to the transfers being culled from the list but their
time not being accounted for.
This change adds the old transfers into the sum which fixes the
problem.
This was only a problem over the rc.
Fixes #4569
Before this fix --cutoff-mode soft and cautious would emit a Fatal
error which stopped the sync immediately.
This fix introduces a new error which is checked in the sync error
processing which stops the sync gracefully.
Fixes #4576
This patch adds support for synchronous cache space recovery
to allow read threads to recover from ENOSPC errors when cache space
can be recovered from cache items that are not in use or are safe to be
reset/emptied.
The patch complements the existing cache cleaning process in two ways.
Firstly, the existing cache cleaning process is time-driven and runs
periodically. The cache space can run out while the cache cleaner
thread is still waiting for its next scheduled run. In this case the
io threads encountering ENOSPC return an internal error to the
applications even though cache space could be recovered to avoid the
error. This patch addresses this problem by having the read threads
kick the cache cleaner thread in this condition to recover cache
space, preventing unnecessary ENOSPC errors from being seen by the
applications.
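A minimal sketch of the kick mechanism: read threads that hit ENOSPC nudge the cleaner without blocking, while the scheduled runs continue (illustrative only):

```go
package example

import "time"

type cache struct {
	kick chan struct{} // buffered so a kick never blocks the read thread
}

func newCache() *cache {
	c := &cache{kick: make(chan struct{}, 1)}
	go c.cleanerLoop()
	return c
}

// kickCleaner asks the cleaner to run now; duplicate kicks are dropped.
func (c *cache) kickCleaner() {
	select {
	case c.kick <- struct{}{}:
	default:
	}
}

func (c *cache) cleanerLoop() {
	ticker := time.NewTicker(time.Minute)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C: // normal scheduled run
		case <-c.kick: // early run requested by a read thread on ENOSPC
		}
		// purge not-in-use items, then reset clean-but-open items if needed
	}
}
```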
Secondly, this patch enhances the cache cleaner to support cache
item reset. Currently the cache purge process removes cache
items that are not in use. This may not be sufficient when the
total size of the working set exceeds the cache directory's
capacity. Like in the current code, this patch starts the purge
process by removing cache files that are not in use. Cache items
whose access times are older than vfs-cache-max-age are removed first.
After that, other not-in-use items are removed in LRU order until
vfs-cache-max-size is reached. If the vfs-cache-max-size (the quota)
is still not reached at this time, this patch adds a cache reset
step to reset/empty cache files that are still in use but not
dirtied. This enables application processes to continue without
seeing an error even when the working set depletes the cache space
as long as there is not a large write working set hoarding the
entire cache space.
By design this patch does not add ENOSPC error recovery for write
IOs. Rclone does not empty a write cache item until the file data
is written back to the backend upon close. Allowing more cache
space to be consumed by dirty cache items when the cache space is
already running low would increase the risk of exhausting the cache
space in a way that makes the vfs mount unreadable.