Add a new generic_attributes attribute to Node.
Use the generic attributes to add support for creation time and file attributes like hidden, readonly, encrypted on Windows. Handle permission errors for readonly files on Windows.
Handle backup and restore of encrypted attributes using Windows system calls.
Some tests have to explicitly create pack files with blobs that don't
match their ID. For those blobs the builtin verification of the
repository must be disabled.
The method is not available on the restic.Repository interface that is
used for testing. Drop the call, as a small number of additional index
writes is not a problem.
Currently, the cmd/restic package contains a significant amount of code
that modifies repository internals. This code should in the mid-term
move into the repository package.
`writeStatus` also cleans up status lines that are no longer used.
The old code actually cleaned one line too many. However, as that line
was never used, it makes no difference.
If a lock could not be loaded, then restic would check all lock files
again. These repeated checks are not useful as the status of a lock file
cannot change unless its ID changes too. Thus, skip already checked lock
files on retries.
LoadBlobsFromPack is now part of the repository struct. This ensures
that users of that method don't have to deal with internals of the
repository implementation.
The filerestorer tests now also contain far fewer pack file
implementation details.
To stream the content of a pack file only once, the check command used
StreamPack with a custom pack load function. This combination was always
brittle and complicates using StreamPack everywhere else. Now that
StreamPack internally uses PackBlobIterator, use that primitive instead;
it is a much better fit for what the check command requires.
If client.Do returns an error, then there's no body that has to be
closed. For requests for which we are not interested in the response
body, immediately drain and close the body to make sure it isn't
forgotten later on.
This change in particular adds the missing `Close()` call for the
`List()` command.
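A minimal sketch of the drain-and-close pattern (the helper name is hypothetical, not restic's actual code):
```
package main

import (
	"io"
	"net/http"
)

// doDiscard performs req and throws the response body away.
func doDiscard(client *http.Client, req *http.Request) error {
	resp, err := client.Do(req)
	if err != nil {
		// client.Do failed: resp is nil, there is no body to close.
		return err
	}
	// Drain and close immediately so the connection can be reused
	// and the Close() call cannot be forgotten on a later code path.
	_, _ = io.Copy(io.Discard, resp.Body)
	return resp.Body.Close()
}
```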
It was only used in two places:
- stats: apparently as a minor performance optimization, which is
unlikely to be important
- find: filtered directories would be ignored. However, this
optimization missed that it is possible that two directories have the
exact same content. Such directories would be incorrectly ignored too.
Example:
```
mkdir test test/a test/b
restic backup test
restic find latest test/b
-> incorrectly does not return anything
```
Thus, remove the functionality as it's apparently too complex to use
correctly.
rclone returns a "not found" error if an internal error occurs while
listing a folder. Ignoring this error lets restic erroneously think that
there are no files, which can cause `prune` to wipe the whole
repository.
Writing these blobs to their files can take a long time and consequently
cause the backend connection to time out. Avoid that by retrieving these
blobs separately.
If an error occurred while streaming a pack file, this could result in
passing some of the blobs multiple times to the callback function. This
significantly complicates using StreamPack correctly and is unnecessary.
Retries do not change the content of a blob and thus only deliver the
same result over and over again.
Since Go 1.21, most reparse points are considered irregular files.
Depending on the underlying driver these can exhibit nearly arbitrary
behavior. When encountering such a file, restic returned an
indecipherable error message: `error: invalid node type ""`.
Add the filepath to the error message and state that the file type is
not supported.
The behavior of the new option should reflect the behavior of normal backups: when the command exit code is zero and there is no output on stdout, emit a warning but create the snapshot. This commit fixes the integration tests and the ReadCloserCommand struct.
Return with an error containing the stderr of the given command in case it fails. No new snapshot will be created and future prune operations on the repository will remove the unreferenced data.
Signed-off-by: Sebastian Hoß <seb@xn--ho-hia.de>
In order to determine whether to save a snapshot, we need to capture the exit code returned by a command. To provide a nice error message, we supply stderr as well.
Signed-off-by: Sebastian Hoß <seb@xn--ho-hia.de>
This removes code that is only used within a backend implementation from
the backend package. The latter now only contains code that also has
external users.
For now, the guide is only shown if the blob content does not match its
hash. The main intended usage is to handle data corruption errors when
using maximum compression in restic 0.16.0.
Allow setting custom arguments for the `sftp` backend, by using the
`sftp.args` option. This is similar to the approach already implemented
in the `rclone` backend, to support new arguments without requiring
future code changes for each different SSH argument.
Closes #4241
Store oversized blobs in separate pack files, as such a blob is large
enough to warrant its own pack file. This simplifies the garbage
collection of such blobs and keeps the cache smaller, as oversize (tree)
blobs only have to be downloaded if they are actually used.
The old version was taken from an MPL-licensed library. This is a
cleanroom implementation. The code is shorter and it's now explicit that
only Linux ACLs are supported.
The test uses `WithTimeout` to create a context that cancels the List
operation after a given delay. Several backends internally use a derived
child context created using WithCancel.
The cancellation of a context first closes the done channel of the
context (here: the `WithTimeout` context) and _afterwards_ propagates
the cancellation to child contexts (here: the `WithCancel` context).
Therefore, if the List implementation uses a child context, it may
take a moment until that context is also cancelled. Thus give the
context cancellation a moment to propagate.
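The effect can be demonstrated with a small sketch (timings illustrative):
```
package main

import (
	"context"
	"fmt"
	"time"
)

func main() {
	// The test context, cancelled via timeout.
	parent, cancel := context.WithTimeout(context.Background(), 10*time.Millisecond)
	defer cancel()

	// A backend's List implementation may derive a child context.
	child, childCancel := context.WithCancel(parent)
	defer childCancel()

	// The parent's done channel closes first; the cancellation only
	// propagates to the child afterwards. Give it a moment instead
	// of checking child.Err() immediately.
	<-parent.Done()
	select {
	case <-child.Done():
		fmt.Println("child cancelled:", child.Err())
	case <-time.After(time.Second):
		fmt.Println("cancellation did not propagate in time")
	}
}
```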
A stale lock may be refreshed if it continues to exist until after a
replacement lock has been created. This ensures that a repository was
not unlocked in the meantime.
When transferring a repository from S3 to, for example, a local disk,
all empty folders will be missing.
When saving files, the missing intermediate folders are created
automatically. Therefore, missing directories can be ignored by the
`List()` operation.
Linux allows the use of non-`user.` extended attributes on symlinks. One
of the main users of this functionality is SELinux's `security.selinux`
xattr for storing a path's label. By storing symlink xattrs, restic is
now suitable for backing up the root filesystem on Linux distributions
that use SELinux.
This commit adds support for symlink xattrs when backing up data,
restoring data, and mounting snapshots via a fuse mount. All calls to
the xattr library have been updated to use the `L` variants of the
various functions, which always operate on the path given, without
following symlinks.
Fixes: #4375
Signed-off-by: Andrew Gunnerson <accounts+github@chiller3.com>
Conceptually the backend configuration should be validated when creating
or opening the backend, but not when filling in information from
environment variables into the configuration.
This unified construction removes most backend-specific code from
global.go. The backend registry will also enable integration tests to
use custom backends if necessary.
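A sketch of the registry idea, with heavily simplified interfaces (not the actual API):
```
package main

import "fmt"

type Backend interface{}

// Factory creates a backend from its parsed configuration.
type Factory interface {
	ParseConfig(s string) (interface{}, error)
	Open(cfg interface{}) (Backend, error)
}

// registry maps a scheme like "s3" or "sftp" to its factory, so
// global.go no longer needs a switch over every backend type.
var registry = map[string]Factory{}

func Register(scheme string, f Factory) {
	registry[scheme] = f
}

func Open(scheme, config string) (Backend, error) {
	f, ok := registry[scheme]
	if !ok {
		return nil, fmt.Errorf("unknown backend scheme %q", scheme)
	}
	cfg, err := f.ParseConfig(config)
	if err != nil {
		return nil, err
	}
	return f.Open(cfg)
}
```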
The ETA restic displays was based on a rate computed across the entire
backup operation. Often restic can progress at uneven rates. In the worst
case, restic progresses over most of the backup at a very high rate and
then finds new data to back up. The displayed ETA is then unrealistic and
never adapts.
Restic now estimates the transfer rate based on a sliding window, with the
goal of adapting to observed changes in rate. To avoid wild changes in the
estimate, several heuristics are used to keep the sliding window wide
enough to be relatively stable.
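The following is an illustration of the sliding-window idea, not restic's actual estimator:
```
package main

import "time"

type sample struct {
	when  time.Time
	bytes uint64
}

// windowRate reports the transfer rate over a trailing time window.
type windowRate struct {
	window  time.Duration
	samples []sample // ordered by time
}

func (w *windowRate) Add(now time.Time, bytes uint64) {
	w.samples = append(w.samples, sample{now, bytes})
	// Drop samples that fell out of the window, but keep at least
	// two so the estimate stays defined when progress is slow.
	for len(w.samples) > 2 && now.Sub(w.samples[0].when) > w.window {
		w.samples = w.samples[1:]
	}
}

// PerSecond estimates the current rate in bytes per second.
func (w *windowRate) PerSecond() float64 {
	if len(w.samples) < 2 {
		return 0
	}
	elapsed := w.samples[len(w.samples)-1].when.Sub(w.samples[0].when).Seconds()
	if elapsed <= 0 {
		return 0
	}
	var total uint64
	for _, s := range w.samples[1:] { // bytes since the first sample
		total += s.bytes
	}
	return float64(total) / elapsed
}
```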
In order to change the backend initialization in `global.go` to be able
to generically call cfg.ApplyEnvironment() for supported backends, the
`interface{}` returned by `ParseConfig` must contain a pointer to the
configuration.
An alternative would be to use reflection to convert the type from
`interface{}(Config)` to `interface{}(*Config)` (from value to pointer
type). However, this would just complicate the type mess further.
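A small sketch of why the pointer matters (names illustrative):
```
package main

import "os"

type Config struct{ Host string }

// ApplyEnvironment has a pointer receiver, so it is only part of the
// method set of *Config, not of Config.
func (c *Config) ApplyEnvironment() {
	if h := os.Getenv("HOST"); h != "" { // variable name illustrative
		c.Host = h
	}
}

func main() {
	var v interface{} = &Config{} // must hold *Config, not Config
	if cfg, ok := v.(interface{ ApplyEnvironment() }); ok {
		cfg.ApplyEnvironment() // works only for the pointer type
	}
}
```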
Iterating through the indexmap according to the bucket order has the
problem that all indexEntries are accessed in random order which is
rather cache inefficient.
As we already keep a list of all allocated blocks, just iterate through
it. This allows iterating through a batch of indexEntries without random
memory accesses. In addition, the packID will likely remain similar
across multiple blobs as all blobs of a pack file are added as a single
batch.
This data structure reduces the wasted memory to O(sqrt(n)). The
top-layer of the hashed array tree (HAT) also has a size of O(sqrt(n)),
which makes it cache efficient. The top-layer should be small enough to
easily fit into the CPU cache and thus only adds little overhead
compared to directly accessing an index entry via a pointer.
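A minimal hashed array tree sketch (not restic's actual code); with a block size on the order of sqrt(n), both the top-level slice and the single partially filled block waste at most O(sqrt(n)) memory:
```
package main

const blockSize = 1024 // would be chosen near sqrt(n) in practice

type hat[T any] struct {
	blocks [][]T
	size   int
}

// Append stores v and returns its stable index. Blocks are never
// moved or resized, so previously returned indices stay valid.
func (h *hat[T]) Append(v T) int {
	if h.size%blockSize == 0 {
		h.blocks = append(h.blocks, make([]T, 0, blockSize))
	}
	b := h.size / blockSize
	h.blocks[b] = append(h.blocks[b], v)
	h.size++
	return h.size - 1
}

func (h *hat[T]) At(i int) *T {
	return &h.blocks[i/blockSize][i%blockSize]
}
```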
The indexEntry objects are now allocated in a separate array. References
to an indexEntry are now stored as array indices. This has the benefit
of allowing the garbage collector to ignore the indexEntry objects as
these do not contain pointers and are part of a single large allocation.
New and its helpers used to create the cache directories several times
over. They now only do so once. The added test ensures that the cache is
produced in a consistent state when parts are deleted.
Use the logging methods from testing.TB to make use of tb.Helper(). This
allows the tests to log the filename and line number in which the test
helper was called. Previously the file and line of the test helper
itself were logged, which is rarely useful.
This function casts its argument to int32 before passing it to the
system call, so that big-endian CPUs read the lower rather than the
upper 32 bits of the pid.
This also gets rid of the last import of "unsafe" in the Unix build.
I changed syscall to x/sys/unix while I was at it, to remove one more
import line. The constants and types there are aliases for their syscall
counterparts.
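A sketch of the existence check with the cast, assuming x/sys/unix (not necessarily the exact code):
```
package main

import (
	"syscall"

	"golang.org/x/sys/unix"
)

// processExists reports whether a process with the given pid exists.
// The int32 cast keeps only the lower 32 bits of the pid, which is
// what the kernel expects on big-endian platforms as well.
func processExists(pid int) bool {
	err := unix.Kill(int(int32(pid)), syscall.Signal(0))
	// ESRCH: no such process; EPERM: it exists but is not ours.
	return err == nil || err == unix.EPERM
}
```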
As the `Fatal` error type only includes a string, it becomes impossible
to inspect the contained error. This is, for example, a problem for the
fuse implementation, which must be able to detect context.Canceled
errors.
Co-authored-by: greatroar <61184462+greatroar@users.noreply.github.com>
For hardlinked files, only the first instance of that file increases the
amount of bytes to restore. All later instances only increase the file
count but not the restore size.
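A sketch of the accounting (names illustrative):
```
package main

// hardlinkKey identifies a file independent of its paths.
type hardlinkKey struct{ device, inode uint64 }

type restoreProgress struct {
	seen  map[hardlinkKey]struct{}
	files uint
	bytes uint64
}

// addFile counts every instance, but only the first hard link to an
// inode increases the number of bytes to restore.
func (p *restoreProgress) addFile(dev, ino, size uint64) {
	p.files++
	k := hardlinkKey{dev, ino}
	if _, ok := p.seen[k]; ok {
		return // size was already counted for an earlier link
	}
	p.seen[k] = struct{}{}
	p.bytes += size
}
```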
restic must be able to refresh lock files in time. However, large
uploads over slow connections can cause the lock refresh to be stuck
behind the large uploads and thus time out.
The test had a 4% chance of not modifying the data read from the
repository, in which case the test would fail. Change the data
manipulation to just modify each read operation.
This adds support for caching already rewritten trees, handling of load
errors and disabling the check that the serialization doesn't lead to
data loss.
The more generic RewriteNode callback replaces the SelectByName and
PrintExclude functions. The main part of this change is a preparation to
allow using the TreeRewriter for the `repair snapshots` command.
The builtin mechanism to capture a stacktrace in Go is to send a SIGQUIT
to the running process. However, this mechanism is not available on
Windows. Thus, tweak the SIGINT handler to dump a stacktrace if the
environment variable `RESTIC_DEBUG_STACKTRACE_SIGINT` is set.
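A sketch of such a handler; the environment variable name is from this change, the rest is illustrative:
```
package main

import (
	"os"
	"os/signal"
	"runtime"
)

func installSigintHandler() {
	c := make(chan os.Signal, 1)
	signal.Notify(c, os.Interrupt)
	go func() {
		<-c
		if os.Getenv("RESTIC_DEBUG_STACKTRACE_SIGINT") != "" {
			buf := make([]byte, 1<<20)
			n := runtime.Stack(buf, true) // all goroutines
			os.Stderr.Write(buf[:n])
		}
		os.Exit(130)
	}()
}
```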
The SemaphoreBackend now uniformly enforces the limit of concurrent
backend operations. In addition, it unifies the parameter validation.
The List() method no longer uses a semaphore. Restic already never runs
multiple list operations in parallel.
By managing the semaphore in a wrapper backend, the sections that hold a
semaphore token grow slightly. However, the main bottleneck is IO, so
this shouldn't make much of a difference.
The key insight that enables the SemaphoreBackend is that all of the
complex semaphore handling in `openReader()` still happens within the
original call to `Load()`. Thus, getting and releasing the semaphore
tokens can be refactored to happen directly in `Load()`. This eliminates
the need for wrapping the reader in `openReader()` to release the token.
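A sketch of the wrapper, with the backend interface reduced to a single method (not the real interface):
```
package main

import "context"

type backend interface {
	Load(ctx context.Context, name string, fn func(data []byte) error) error
}

// semaphoreBackend holds a token for the full duration of each
// operation, uniformly limiting concurrent backend calls.
type semaphoreBackend struct {
	backend
	tokens chan struct{}
}

func newSemaphoreBackend(be backend, limit int) *semaphoreBackend {
	return &semaphoreBackend{backend: be, tokens: make(chan struct{}, limit)}
}

func (b *semaphoreBackend) Load(ctx context.Context, name string, fn func([]byte) error) error {
	select {
	case b.tokens <- struct{}{}:
	case <-ctx.Done():
		return ctx.Err()
	}
	defer func() { <-b.tokens }()
	return b.backend.Load(ctx, name, fn)
}
```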
The snapshot filtering internally converts relative paths to absolute
ones to ensure that the parent snapshots selection works for backups of
relative paths.
x/text/width.LookupRune has to re-encode its argument as UTF-8,
while LookupString operates on the UTF-8 directly.
The uint casts get rid of a bounds check.
Benchmark results, with b.ResetTimer introduced first:
name old time/op new time/op delta
TruncateASCII-8 69.7ns ± 1% 55.2ns ± 1% -20.90% (p=0.000 n=20+18)
TruncateUnicode-8 350ns ± 1% 171ns ± 1% -51.05% (p=0.000 n=20+19)
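For illustration, a simplified truncation loop using LookupString (not restic's exact code):
```
package main

import "golang.org/x/text/width"

// truncate cuts s to at most maxWidth display cells. LookupString
// consumes the UTF-8 bytes directly instead of re-encoding a rune.
func truncate(s string, maxWidth int) string {
	w := 0
	for i := 0; i < len(s); {
		p, size := width.LookupString(s[i:])
		cells := 1
		switch p.Kind() {
		case width.EastAsianWide, width.EastAsianFullwidth:
			cells = 2
		}
		if w+cells > maxWidth {
			return s[:i]
		}
		w += cells
		i += size
	}
	return s
}
```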
Since 0.15 (#4020), inodes are generated as hashes of names, xor'd with
the parent inode. That means that the inode of a/b/b is
h(a/b/b) = h(a) ^ h(b) ^ h(b) = h(a).
I.e., the grandchild has the same inode as the grandparent. GNU find
trips over this because it thinks it has encountered a loop in the
filesystem, and fails to search a/b/b. This happens more generally when
the same name occurs an even number of times.
Fix this by multiplying the parent by a large prime, so the combining
operation is no longer symmetric in its arguments. This is what the FNV
hash does, which we used prior to 0.15. The hash is now
h(a/b/b) = h(b) ^ p*(h(b) ^ p*h(a))
Note that we already ensure that h(x) is never zero.
Collisions can still occur, but they should be much less likely to occur
within a single path.
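A sketch of the combining step (constants and helper names are illustrative, not restic's actual values):
```
package main

import "hash/fnv"

const p = 1099511628211 // the 64-bit FNV prime, as an example

// h hashes a single path component and is never zero.
func h(name string) uint64 {
	f := fnv.New64()
	f.Write([]byte(name))
	if s := f.Sum64(); s != 0 {
		return s
	}
	return 1
}

// childInode multiplies the parent inode by p before xor-ing, so
// repeated path components no longer cancel out:
// inode(a/b/b) = h(b) ^ p*(h(b) ^ p*h(a)) != h(a) in general.
func childInode(parentInode uint64, name string) uint64 {
	return h(name) ^ p*parentInode
}
```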
Fixes #4253.
Tests in cmd_forget_test.go need the same convenience function
that was implemented in snapshot_policy_test.go.
Function parseDuration(...) was moved to testing.go and renamed to
ParseDurationOrPanic(...).
This turns snapshotFilterOptions from cmd into a restic.SnapshotFilter
type and makes restic.FindFilteredSnapshot and FindFilteredSnapshots
methods on that type. This fixes #4211 by ensuring that hosts and paths
are named struct fields instead of unnamed function arguments in long
lists of such.
Timestamp limits are also included in the new type. To avoid too much
pointer handling, the convention is that time zero means no limit.
That's January 1st, year 1, 00:00 UTC, which is so unlikely a date that
we can sacrifice it for simpler code.
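A sketch of the convention (field and method names illustrative):
```
package main

import "time"

type SnapshotFilter struct {
	// TimestampLimit of zero means no limit.
	TimestampLimit time.Time
}

func (f *SnapshotFilter) matches(snapshotTime time.Time) bool {
	return f.TimestampLimit.IsZero() || !snapshotTime.After(f.TimestampLimit)
}
```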
This method had a buffer argument, but that was nil at all call sites.
That's removed, and instead LoadUnpacked now reuses whatever it
allocates inside its retry loop.
With debug logging enabled this method would take a lock and then
format the lock as a string. Since PR #4022 landed, the string
formatting method has also taken the lock, so this deadlocks.
Instead just record the lock ID, as is done elsewhere.
The code always assumed that the upgrade happens in place. Thus,
writing the upgrade to a separate file failed when trying to remove the
file stored at that location.
Added the missing `scanFinished = true` assignment.
This was causing the percentage and ETA to never get printed for backup
progress, even after the scan was finished.
The retry backend does not return the original error, if its execution
is interrupted by canceling the context. Thus, we have to manually
ensure that the invalid data error gets returned.
Additionally, use the retry backend for some of the repository tests, as
this is the configuration which will be used by restic.
The retry printed the filename twice:
```
Load(<lock/04804cba82>, 0, 0) returned error, retrying after 720.254544ms: load(<lock/04804cba82>): invalid data returned
```
now the warning has changed to
```
Load(<lock/04804cba82>, 0, 0) returned error, retrying after 720.254544ms: invalid data returned
```
The StdioWrapper is not used at all by the ProgressPrinters. It is
called a bit earlier than previously. However, as the password prompt
directly accessed stdin/stdout this doesn't cause problems.
Mostly changed the ones that repeat the name of a system call, which is
already contained in os.PathError.Op. internal/fs.Reader had to be
changed to actually return such errors.
TestRepository and its variants always returned no-op cleanup functions.
If they ever do need to do cleanup, using testing.T.Cleanup is easier
than passing these functions around.
We now check for space that is not reserved for the root user on the
remote, and the check is no longer in a defer block because it wouldn't
fire. Some change in the surrounding code may have led the deferred
function to capture the wrong err variable.
Fixes #3336.
IDs.Less can be rewritten as
string(list[i][:]) < string(list[j][:])
Note that this does not copy the IDs.
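In context, the rewritten method looks roughly like this (types simplified):
```
package main

type ID [32]byte

type IDs []ID

// Less compares the fixed-size arrays as strings; the conversion is
// only used for the comparison and does not copy the IDs.
func (ids IDs) Less(i, j int) bool {
	return string(ids[i][:]) < string(ids[j][:])
}
```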
The Uniq method was no longer used.
The String method has been reimplemented without first copying into a
separate slice of a custom type.
The Test method was only used in exactly one place: when trying to
create a new repository, it was used to check whether a config file
already exists.
Use a combination of Stat() and IsNotExist() instead.
Since #3940 the rclone backend returns the command's exit code if it
fails to start. The list of expected errors was missing the "file
already closed"-error which can occur if the http test request first
learns about the closed pipe to rclone before noticing the canceled
context.
Go internally makes sure that a file descriptor is unusable once it was
closed, thus this cannot have unintended side effects (like accidentally
reading from the wrong file due to a reused file descriptor).
The ioutil functions are deprecated since Go 1.17 and only wrap another
library function. Thus directly call the underlying function.
This commit only mechanically replaces the function calls.
Without comma-ok, the runtime inserts the same check with a similar
enough panic message:
interface conversion: interface {} is nil, not *syscall.Stat_t
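A sketch of the assertion in question (Unix-only; names illustrative):
```
package main

import (
	"fmt"
	"syscall"
)

// statT extracts the platform-specific stat data. Without the
// comma-ok form, a nil or mismatched value panics with "interface
// conversion: interface {} is nil, not *syscall.Stat_t".
func statT(sys interface{}) (*syscall.Stat_t, error) {
	st, ok := sys.(*syscall.Stat_t)
	if !ok {
		return nil, fmt.Errorf("unexpected type %T", sys)
	}
	return st, nil
}
```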
The new genericized LRU cache no longer needs to have the IDs separately
allocated:
name old time/op new time/op delta
Add-8 494ns ± 2% 388ns ± 2% -21.46% (p=0.000 n=10+9)
name old alloc/op new alloc/op delta
Add-8 176B ± 0% 152B ± 0% -13.64% (p=0.000 n=10+10)
name old allocs/op new allocs/op delta
Add-8 5.00 ± 0% 3.00 ± 0% -40.00% (p=0.000 n=10+10)
Hard links to the same file now get the same inode within the FUSE
mount. Also, inode generation is faster and, more importantly, no longer
allocates.
Benchmarked on Linux/amd64. Old means the benchmark with
sink = fs.GenerateDynamicInode(1, sub.node.Name)
instead of calling inodeFromNode. Results:
name old time/op new time/op delta
Inode/no_hard_links-8 137ns ± 4% 34ns ± 1% -75.20% (p=0.000 n=10+10)
Inode/hard_link-8 33.6ns ± 1% 9.5ns ± 0% -71.82% (p=0.000 n=9+8)
name old alloc/op new alloc/op delta
Inode/no_hard_links-8 48.0B ± 0% 0.0B -100.00% (p=0.000 n=10+10)
Inode/hard_link-8 0.00B 0.00B ~ (all equal)
name old allocs/op new allocs/op delta
Inode/no_hard_links-8 1.00 ± 0% 0.00 -100.00% (p=0.000 n=10+10)
Inode/hard_link-8 0.00 0.00 ~ (all equal)
In principle, the JSON format of Tree objects is extensible without
requiring a format change. In order to not lose information, just play
it safe and reject rewriting trees for which we could lose data.
The lock test creates a lock and checks that it is not stale. However,
it is possible that the lock is refreshed concurrently, which updates
the lock timestamp. Checking the timestamp in `Stale()` without
synchronization results in a data race. Thus add a lock to prevent
concurrent accesses.
The lock test creates a lock and checks that it is not stale. This also
tests whether the corresponding process still exists. However, it is
possible that the lock is refreshed concurrently, which updates the lock
timestamp. Calling `processExists()` with a value receiver, however,
creates an unsynchronized copy of this field. Thus call the method using
a pointer receiver.
In some rare cases files could be created which contain null IDs (all
zero) in their content list. This was caused by a race condition between
growing the `Content` slice and inserting the blob IDs into it. In some
cases the blob ID was written to the old slice, which a short time
afterwards was replaced with a larger copy, that did not yet contain the
blob ID.
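A sketch of how this kind of race can be avoided (illustrative, not restic's actual code):
```
package main

import "sync"

type file struct {
	mu      sync.Mutex
	content [][32]byte
}

// reserve grows Content under the lock and returns a stable index
// that can be filled in later.
func (f *file) reserve() int {
	f.mu.Lock()
	defer f.mu.Unlock()
	f.content = append(f.content, [32]byte{})
	return len(f.content) - 1
}

// set writes under the same lock, so the write cannot land in a
// stale backing array that a concurrent append has just replaced.
func (f *file) set(i int, id [32]byte) {
	f.mu.Lock()
	defer f.mu.Unlock()
	f.content[i] = id
}
```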
Automatically fall back to hiding files if not authorized to permanently
delete files. This allows using restic with an append-only application
key with B2. Thus, an attacker cannot directly delete backups with the
API key used by restic.
To use this feature create an application key without the deleteFiles
capability. It is recommended to restrict the key to just one bucket.
For example using the b2 command line tool:
b2 create-key --bucket <bucketName> <keyName> listBuckets,readFiles,writeFiles,listFiles
Suggested-by: Daniel Gröber <dxld@darkboxed.org>
We previously checked whether the set of snapshots might have changed
based only on their number, which fails when as many snapshots are
forgotten as are added. Check the SHA-256 of their IDs instead.
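A sketch of the fingerprint (not necessarily the exact code):
```
package main

import (
	"crypto/sha256"
	"sort"
)

// setFingerprint hashes the sorted snapshot IDs. Exchanging one
// snapshot for another changes the fingerprint even if the count
// stays the same.
func setFingerprint(ids [][32]byte) [32]byte {
	sort.Slice(ids, func(i, j int) bool {
		return string(ids[i][:]) < string(ids[j][:])
	})
	h := sha256.New()
	for _, id := range ids {
		h.Write(id[:])
	}
	var sum [32]byte
	copy(sum[:], h.Sum(nil))
	return sum
}
```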
The status bar got stuck once the first error was reported, the scanner
completed or some file was backed up. Either case sets a flag that the
scanner has started.
This flag is used to hide the progress bar until the flag is set. Due to
an inverted condition, the opposite happened and the status stopped
refreshing once the flag was set.
In addition, the scannerStarted flag was not set when the scanner just
reported progress information.
As the FileSaver is asynchronously waiting for all blobs of a file to be
stored, the number of active files is higher than the number of files
from which restic is reading concurrently. Thus to not confuse users,
only display files in the status from which restic is currently reading.
After reading and chunking all data in a file, the FutureFile still has
to wait until the FutureBlobs are completed. This was done synchronously
which results in blocking the file saver and prevents the next file from
being read.
By replacing the FutureBlob with a callback, it becomes possible to
complete the FutureFile asynchronously.
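A sketch of the callback-based completion (names illustrative, not the actual FileSaver code):
```
package main

import "sync/atomic"

// fileInProgress completes asynchronously once all blobs are stored.
type fileInProgress struct {
	pending  atomic.Int64 // holds one extra token until finish()
	complete func()
}

func newFileInProgress(complete func()) *fileInProgress {
	f := &fileInProgress{complete: complete}
	f.pending.Store(1)
	return f
}

func (f *fileInProgress) done() {
	if f.pending.Add(-1) == 0 {
		f.complete()
	}
}

// addBlob registers one blob; saveBlob must invoke the passed
// callback exactly once when the blob has been stored.
func (f *fileInProgress) addBlob(saveBlob func(onStored func())) {
	f.pending.Add(1)
	saveBlob(f.done)
}

// finish is called after the file was fully read and chunked; the
// extra token prevents completion before all blobs are registered.
func (f *fileInProgress) finish() {
	f.done()
}
```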