Previously, NodeFromFileInfo used the original file path to create the
node, which also meant that extended metadata was read from the original
path instead of from within the VSS snapshot.
Add two new test cases, TestBackendAzureAccountToken and
TestBackendAzureContainerToken, that verify that authorization works
with both types of token.
This introduces two new environment variables,
RESTIC_TEST_AZURE_ACCOUNT_SAS and RESTIC_TEST_AZURE_CONTAINER_SAS, that
contain the tokens to use when testing restic. If an environment
variable is missing, the related test is skipped.
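
As a rough sketch of how such a guard typically looks (the backend setup is
elided; only the test name and the environment variable name come from this
change):

```go
package azure_test

import (
	"os"
	"testing"
)

// TestBackendAzureContainerToken is skipped when no container-level SAS
// token is configured; the actual backend setup is omitted in this sketch.
func TestBackendAzureContainerToken(t *testing.T) {
	token := os.Getenv("RESTIC_TEST_AZURE_CONTAINER_SAS")
	if token == "" {
		t.Skip("RESTIC_TEST_AZURE_CONTAINER_SAS not set, skipping test")
	}
	_ = token // configure the Azure backend with the container-level SAS token here
}
```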
Ignore the AuthorizationFailure caused by using a container-level
SAS/SAT token when calling GetProperties during the Create() call. The
GetProperties call expects an account-level token, and a container-level
token simply lacks the appropriate permissions. Suppressing the
AuthorizationFailure is OK, because if the token is actually invalid,
this is caught elsewhere when we actually try to use the token to do
work.
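
A sketch of the kind of error filtering this describes, assuming the azblob
SDK's bloberror helpers; the ensureContainer function and its surroundings
are simplified stand-ins for the real Create() code:

```go
package azure

import (
	"context"
	"fmt"

	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/bloberror"
	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/container"
)

// ensureContainer tolerates an AuthorizationFailure from GetProperties,
// because a container-level SAS token is not permitted to make that
// account-level call. A genuinely invalid token still fails later, when
// it is used for real work.
func ensureContainer(ctx context.Context, client *container.Client) error {
	_, err := client.GetProperties(ctx, nil)
	if err != nil && !bloberror.HasCode(err, bloberror.AuthorizationFailure) {
		return fmt.Errorf("GetProperties: %w", err)
	}
	return nil
}
```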
This does not produce exactly the same messages, as it inserts newlines
instead of "; ". But given how long our error messages can be, that
might be a good thing.
One place where IDSet.Clone is useful was reinventing it using a
conversion to a list, a sort, and a conversion back to a map.
Also, use the stdlib "maps" package to implement as much of IDSet as
possible. This requires changing one caller, which assumed that cloning
nil would return a non-nil IDSet.
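
A simplified sketch of a map-backed ID set using the stdlib "maps" package
(the ID type and method set are reduced compared to restic's real IDSet):

```go
package idset

import "maps"

// ID stands in for restic's 32-byte ID type in this sketch.
type ID [32]byte

// IDSet is a set of IDs, stored as map keys with empty struct values.
type IDSet map[ID]struct{}

// Clone returns a copy of the set. Like maps.Clone, it returns nil for a
// nil set, which is why the one caller mentioned above had to be adjusted.
func (s IDSet) Clone() IDSet {
	return maps.Clone(s)
}

// Equals reports whether both sets contain exactly the same IDs.
func (s IDSet) Equals(other IDSet) bool {
	return maps.Equal(s, other)
}
```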
This changes Dumper.writeNode to spawn loader goroutines as needed
instead of as a pool. The code is shorter, fewer goroutines are spawned
for small files, and crash dumps (also for unrelated errors) should be
smaller.
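
The pattern can be illustrated with errgroup's SetLimit, which caps
concurrency while only creating goroutines when there is work to do;
loadBlob below is a placeholder, not the dumper's real loader:

```go
package main

import (
	"context"
	"fmt"

	"golang.org/x/sync/errgroup"
)

// loadBlob is a placeholder for the real blob loader.
func loadBlob(ctx context.Context, id int) ([]byte, error) {
	return []byte(fmt.Sprintf("blob-%d", id)), nil
}

func main() {
	wg, ctx := errgroup.WithContext(context.Background())
	// Cap concurrency without pre-spawning a fixed worker pool: goroutines
	// are only started when there is actually a blob to load.
	wg.SetLimit(4)

	results := make([][]byte, 8)
	for i := range results {
		i := i
		wg.Go(func() error {
			buf, err := loadBlob(ctx, i)
			if err != nil {
				return err
			}
			results[i] = buf
			return nil
		})
	}
	if err := wg.Wait(); err != nil {
		fmt.Println("error:", err)
	}
	fmt.Println("loaded", len(results), "blobs")
}
```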
A particular node should always be represented by a single instance.
This is necessary to allow the fuse library to assign a stable nodeId to
a node. macOS Sonoma trips over the previous, unstable behavior when
using fuse-t.
Now, a snapshot is only marked as oldest if it's the last in the list AND its values matches the last seen value for that bucket.
Also, updated the corresponding golden files for the tests.
Depending on parameters, the paths in a snapshot do not directly
correspond to real paths on the filesystem. Therefore, reject funcs must
use the FS interface to work correctly.
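
Illustrative only: a reject func that stats through a filesystem
abstraction instead of the os package, so it also works when snapshot paths
do not exist on the real filesystem; the Lstater interface and
rejectEmptyFiles helper are made up for this sketch:

```go
package archiver

import "os"

// Lstater is a minimal stand-in for the FS interface used by reject funcs.
type Lstater interface {
	Lstat(name string) (os.FileInfo, error)
}

// RejectFunc reports whether a path should be excluded from the backup.
type RejectFunc func(path string, fs Lstater) bool

// rejectEmptyFiles is a toy reject func: it must stat via fs rather than
// the os package, because path may not refer to a real on-disk location.
func rejectEmptyFiles(path string, fs Lstater) bool {
	fi, err := fs.Lstat(path)
	if err != nil {
		return false // let the archiver surface the error instead
	}
	return fi.Mode().IsRegular() && fi.Size() == 0
}
```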
The temp files used by the packer manager are either deleted after
creation (Unix) or marked as delete-on-close (Windows). Thus, no
explicit cleanup is necessary.
The retry code path did not filter `ERROR_NOT_SUPPORTED`. Just call the
original function a second time to correctly follow the low privilege
code path.
Calling `Load()` twice for an atomic variable can return different
values each time. This resulted in trying to read the security
descriptor with high privileges, but then not entering the code path to
switch to low privileges when another thread had already done so
concurrently.
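
A sketch of the fix pattern: read the atomic flag exactly once into a local
variable so that the privilege choice and the fallback decision stay
consistent; readSD and errAccessDenied are placeholders for the real
Windows calls and errors:

```go
package fs

import (
	"errors"
	"sync/atomic"
)

// lowerPrivileges records whether we have fallen back to the low-privilege
// code path; it is shared between goroutines, hence atomic.
var lowerPrivileges atomic.Bool

var errAccessDenied = errors.New("access denied")

// readSD is a placeholder for the real security descriptor call.
func readSD(path string, lowPriv bool) ([]byte, error) {
	if !lowPriv {
		return nil, errAccessDenied
	}
	return []byte("sd for " + path), nil
}

// getSecurityDescriptor reads the flag exactly once, so the same value is
// used both for choosing the code path and for the fallback decision.
func getSecurityDescriptor(path string) ([]byte, error) {
	useLowerPrivileges := lowerPrivileges.Load() // single Load, reused below

	sd, err := readSD(path, useLowerPrivileges)
	if err != nil && !useLowerPrivileges && errors.Is(err, errAccessDenied) {
		// Fall back to the low-privilege path and remember that choice.
		lowerPrivileges.Store(true)
		return readSD(path, true)
	}
	return sd, err
}
```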
Failed locking attempts were immediately retried up to three times
without any delay between the retries. With the reworked backend
retries, there is also no delay between retries when a lock file is not
found while checking for other locks. This is a problem if a backend
requires a few seconds to reflect file deletions in its file listings.
To work around this, introduce a short, exponentially increasing delay
between the retries. The number of retries is now increased to 4, which
results in delays of 5, 10 and 20 seconds between the retries.
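
A sketch of the retry schedule described above, with a placeholder check
function standing in for the lock file lookup:

```go
package lock

import (
	"context"
	"time"
)

const (
	retries      = 4
	initialDelay = 5 * time.Second
)

// withRetries retries check with an exponentially increasing delay between
// attempts: 5s, 10s and 20s between the four tries.
func withRetries(ctx context.Context, check func(context.Context) error) error {
	delay := initialDelay
	var err error
	for i := 0; i < retries; i++ {
		err = check(ctx)
		if err == nil {
			return nil
		}
		if i == retries-1 {
			break // no delay after the final attempt
		}
		select {
		case <-time.After(delay):
		case <-ctx.Done():
			return ctx.Err()
		}
		delay *= 2 // 5s -> 10s -> 20s
	}
	return err
}
```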
When the context used for a load operation is canceled, the result is
always an error, regardless of whether the file could be retrieved from
the backend. Do not spuriously trip the circuit breaker in this case.
The old behavior was problematic when trying to lock a repository: when
`Lock.checkForOtherLocks` listed multiple lock files in parallel and one
of them failed to load, all other loads were canceled. This cancelation
was remembered by the circuit breaker, so that subsequent locking
retries would fail.
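
A sketch of the check in this spirit (the circuitBreaker type and
recordResult method are illustrative, not restic's actual implementation):

```go
package backend

import (
	"context"
	"errors"
)

type circuitBreaker struct {
	failures int
}

// recordResult only counts a failure towards the circuit breaker if it was
// not caused by the caller canceling the context.
func (cb *circuitBreaker) recordResult(ctx context.Context, err error) {
	if err == nil {
		return
	}
	if ctx.Err() != nil || errors.Is(err, context.Canceled) {
		// The load was aborted by the caller; this says nothing about
		// whether the file exists, so do not trip the breaker.
		return
	}
	cb.failures++
}
```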
The HTTP client can only retry HTTP2 requests after receiving a GOAWAY
response if it can rewind the body. As we use a custom data type,
explicitly provide an implementation of `GetBody`.
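
A sketch of explicitly providing GetBody for a body type that net/http
cannot rewind on its own; rewindReader and newUploadRequest are
placeholders for the custom data type mentioned above:

```go
package backend

import (
	"bytes"
	"io"
	"net/http"
)

// rewindReader stands in for the custom data type; because it is not one of
// the types http.NewRequest recognizes, GetBody must be set explicitly.
type rewindReader struct{ data []byte }

func (r *rewindReader) open() io.ReadCloser {
	return io.NopCloser(bytes.NewReader(r.data))
}

// newUploadRequest sets GetBody so the HTTP/2 client can rewind and resend
// the body after receiving a GOAWAY response.
func newUploadRequest(url string, rd *rewindReader) (*http.Request, error) {
	req, err := http.NewRequest(http.MethodPut, url, rd.open())
	if err != nil {
		return nil, err
	}
	req.ContentLength = int64(len(rd.data))
	req.GetBody = func() (io.ReadCloser, error) {
		// Return a fresh reader over the same data for each retry.
		return rd.open(), nil
	}
	return req, nil
}
```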
* removes index, snapshot and pack files from the cache when they are no longer present in the repository.
cache: fix ID set initialisation with NewIDSet()