Previously, NodeFromFileInfo used the original file path to create the
node, which also meant that extended metadata was read from there
instead of from within the VSS snapshot.
This change is a temporary solution for restic 0.17.2 and will be
replaced with a clean fix in restic 0.18.0.
Add two new test cases, TestBackendAzureAccountToken and
TestBackendAzureContainerToken, which ensure that authorization works
with both types of token.
This introduces two new environment variables,
RESTIC_TEST_AZURE_ACCOUNT_SAS and RESTIC_TEST_AZURE_CONTAINER_SAS, that
contain the tokens to use when testing restic. If an environment
variable is missing, the related test is skipped.
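The skip-on-missing-variable behaviour follows the standard Go testing pattern; a minimal sketch (the test body itself is a placeholder, not restic's actual test code):

```go
package azure_test

import (
	"os"
	"testing"
)

// TestBackendAzureContainerToken is skipped unless a container-level SAS
// token is provided via the environment; the account-level test follows the
// same pattern with RESTIC_TEST_AZURE_ACCOUNT_SAS.
func TestBackendAzureContainerToken(t *testing.T) {
	token := os.Getenv("RESTIC_TEST_AZURE_CONTAINER_SAS")
	if token == "" {
		t.Skip("RESTIC_TEST_AZURE_CONTAINER_SAS is not set, skipping test")
	}

	// The real test configures the Azure backend with the SAS token and
	// exercises basic operations against the container (omitted here).
	_ = token
}
```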
Ignore the AuthorizationFailure error caused by using a container-level
SAS/SAT token when calling GetProperties during the Create() call. This is
because the GetProperties call expects an account-level token, and the
container-level token simply lacks the appropriate permissions. Suppressing
the AuthorizationFailure is OK, because if the token is actually invalid,
this is caught elsewhere when the token is actually used to do work.
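A minimal sketch of the suppression, assuming the Azure SDK for Go's `azcore.ResponseError` type; the helper name is illustrative and not restic's actual code:

```go
package azure

import (
	"errors"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
)

// ignoreAuthorizationFailure drops the AuthorizationFailure error that a
// container-level SAS token triggers on the account-level GetProperties call
// and passes every other error through unchanged.
func ignoreAuthorizationFailure(err error) error {
	var respErr *azcore.ResponseError
	if errors.As(err, &respErr) && respErr.ErrorCode == "AuthorizationFailure" {
		// The token may still be valid for container-level operations; a
		// genuinely invalid token is caught later when it is used to do work.
		return nil
	}
	return err
}
```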
The retry code path did not filter `ERROR_NOT_SUPPORTED`. Just call the
original function a second time to correctly follow the low-privilege
code path.
Calling `Load()` twice on an atomic variable can return different
values each time. This resulted in trying to read the security
descriptor with high privileges, but then not entering the code path to
switch to low privileges when another thread had already done so
concurrently.
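The race boils down to reading an atomic flag twice and acting on two possibly different values. A minimal, generic sketch of the single-read pattern (all names are illustrative, not restic's actual code):

```go
package fs

import "sync/atomic"

// lowPrivilege records whether some goroutine has already switched to the
// low-privilege code path.
var lowPrivilege atomic.Bool

func readSecurityDescriptor(path string) error {
	// Read the flag exactly once and reuse that single value; calling Load()
	// again later could observe a different value if another goroutine flips
	// the flag concurrently.
	useLowPriv := lowPrivilege.Load()

	err := querySecurityDescriptor(path, useLowPriv)
	if err != nil && !useLowPriv && isPrivilegeError(err) {
		// Remember the fallback for other goroutines and retry this call on
		// the low-privilege path.
		lowPrivilege.Store(true)
		return querySecurityDescriptor(path, true)
	}
	return err
}

// Placeholders standing in for the Windows-specific calls used by restic.
func querySecurityDescriptor(path string, lowPriv bool) error { return nil }
func isPrivilegeError(err error) bool                         { return false }
```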
The HTTP client can only retry HTTP/2 requests after receiving a GOAWAY
response if it can rewind the request body. As we use a custom data type
for the body, explicitly provide an implementation of `GetBody`.
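`GetBody` is a standard field on `net/http`'s `Request`; a minimal sketch, assuming a simple custom reader type (restic's actual body type differs):

```go
package backend

import (
	"bytes"
	"io"
	"net/http"
)

// uploadReader stands in for a custom body type that net/http cannot rewind
// on its own, so GetBody has to be provided explicitly.
type uploadReader struct{ *bytes.Reader }

func newUploadRequest(url string, data []byte) (*http.Request, error) {
	req, err := http.NewRequest(http.MethodPut, url, &uploadReader{bytes.NewReader(data)})
	if err != nil {
		return nil, err
	}
	// GetBody lets the client recreate the body when it retries an HTTP/2
	// request after a GOAWAY response.
	req.GetBody = func() (io.ReadCloser, error) {
		return io.NopCloser(&uploadReader{bytes.NewReader(data)}), nil
	}
	return req, nil
}
```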
Split the description for non-Amazon S3 providers into a separate section.
The section now also covers the `s3.bucket-lookup` extended option. Switch
to using regional URLs for Amazon S3, which removes the need to set the
region.
Failed locking attempts were immediately retried up to three times
without any delay between the retries. With the reworked backend retries,
there is also no delay between retries if a lock file is not found while
checking for other locks. This is a problem if a backend needs a few
seconds before file deletions show up in its file listings. To work
around this, introduce a short, exponentially increasing delay between
the retries. The number of retries is increased to 4, which results in
delays of 5, 10 and 20 seconds between them.
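A minimal sketch of such a retry loop; `attemptLock` is a placeholder for the actual locking call, and the delays match those described above:

```go
package lock

import (
	"context"
	"time"
)

// retryLock retries a failed locking attempt with an exponentially increasing
// delay: 5s before the second attempt, 10s before the third and 20s before
// the fourth.
func retryLock(ctx context.Context, attemptLock func(context.Context) error) error {
	const attempts = 4
	delay := 5 * time.Second

	var err error
	for i := 0; i < attempts; i++ {
		if i > 0 {
			select {
			case <-time.After(delay):
				delay *= 2
			case <-ctx.Done():
				return ctx.Err()
			}
		}
		if err = attemptLock(ctx); err == nil {
			return nil
		}
	}
	return err
}
```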