Compare commits

307 commits

Author SHA1 Message Date
Aneesh N
6808004ad1
Refactor extended attributes and security descriptor helpers to use go-winio (#5040)
* Refactor ea and sd helpers to use go-winio

Import go-winio and, instead of copying the functions to encode/decode extended attributes and enable process privileges for security descriptors, call the functions defined in go-winio.
2024-12-09 21:48:38 +01:00
Srigovind Nayak
d7d9af4c9f
ui: restore --delete indicates number of deleted files (#5100)
* ui: restore --delete indicates number of deleted files

* adds new field `FilesDeleted` to the State struct, JSON and text progress updaters
* increment FilesDeleted count in ReportDeletedFile

* ui: collect the files to be deleted, delete, then update the count post deletion

* docs: update scripting output fields for restore command

ui: report deleted directories and refactor function name to ReportDeletion
2024-12-01 15:29:11 +01:00
Michael Eischer
2f0049cd6c
Merge pull request #5141 from richgrov/missing-azure-env-error
Return error if AZURE_ACCOUNT_NAME not set
2024-12-01 14:01:56 +01:00
Michael Eischer
72c02fa759
Merge pull request #5167 from restic/dependabot/go_modules/github.com/pkg/sftp-1.13.7
build(deps): bump github.com/pkg/sftp from 1.13.6 to 1.13.7
2024-12-01 13:14:03 +01:00
dependabot[bot]
770841f95d
build(deps): bump github.com/pkg/sftp from 1.13.6 to 1.13.7
Bumps [github.com/pkg/sftp](https://github.com/pkg/sftp) from 1.13.6 to 1.13.7.
- [Release notes](https://github.com/pkg/sftp/releases)
- [Commits](https://github.com/pkg/sftp/compare/v1.13.6...v1.13.7)

---
updated-dependencies:
- dependency-name: github.com/pkg/sftp
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-12-01 12:02:01 +00:00
Michael Eischer
5e0a045481
Merge pull request #5163 from restic/dependabot/go_modules/golang.org/x/sys-0.27.0
build(deps): bump golang.org/x/sys from 0.26.0 to 0.27.0
2024-12-01 13:00:28 +01:00
Michael Eischer
3fecddafe8
Merge pull request #5165 from restic/dependabot/go_modules/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob-1.5.0
build(deps): bump github.com/Azure/azure-sdk-for-go/sdk/storage/azblob from 1.4.0 to 1.5.0
2024-12-01 12:58:24 +01:00
dependabot[bot]
40987a5f80
build(deps): bump golang.org/x/sys from 0.26.0 to 0.27.0
Bumps [golang.org/x/sys](https://github.com/golang/sys) from 0.26.0 to 0.27.0.
- [Commits](https://github.com/golang/sys/compare/v0.26.0...v0.27.0)

---
updated-dependencies:
- dependency-name: golang.org/x/sys
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-12-01 11:48:44 +00:00
Michael Eischer
875976f4a8
Merge pull request #5166 from restic/dependabot/go_modules/golang.org/x/text-0.20.0
build(deps): bump golang.org/x/text from 0.19.0 to 0.20.0
2024-12-01 12:47:55 +01:00
dependabot[bot]
2dc00cfd36
build(deps): bump github.com/Azure/azure-sdk-for-go/sdk/storage/azblob
Bumps [github.com/Azure/azure-sdk-for-go/sdk/storage/azblob](https://github.com/Azure/azure-sdk-for-go) from 1.4.0 to 1.5.0.
- [Release notes](https://github.com/Azure/azure-sdk-for-go/releases)
- [Changelog](https://github.com/Azure/azure-sdk-for-go/blob/main/documentation/release.md)
- [Commits](https://github.com/Azure/azure-sdk-for-go/compare/sdk/azcore/v1.4.0...sdk/azcore/v1.5.0)

---
updated-dependencies:
- dependency-name: github.com/Azure/azure-sdk-for-go/sdk/storage/azblob
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-12-01 11:45:54 +00:00
Michael Eischer
45d2b4cd3c
Merge pull request #5161 from restic/bump-backblaze-library
bump backblaze/blazer to v0.7.1
2024-12-01 12:45:00 +01:00
dependabot[bot]
a4d776ec8f
build(deps): bump golang.org/x/text from 0.19.0 to 0.20.0
Bumps [golang.org/x/text](https://github.com/golang/text) from 0.19.0 to 0.20.0.
- [Release notes](https://github.com/golang/text/releases)
- [Commits](https://github.com/golang/text/compare/v0.19.0...v0.20.0)

---
updated-dependencies:
- dependency-name: golang.org/x/text
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-12-01 01:41:13 +00:00
Michael Eischer
ead57ec501 bump backblaze/blazer to v0.7.1 2024-11-30 21:17:06 +01:00
Michael Eischer
8f9d755b44
Merge pull request #5158 from dnnr/clarify-max-repack-size
Reword description of --max-repack-size for clarity
2024-11-30 19:19:01 +01:00
Daniel Danner
1062546563
Mention size 2024-11-30 17:52:29 +01:00
Michael Eischer
0bf8af7188
Merge pull request #5138 from vmlemon/issue-5131
Implement basic DragonFlyBSD support
2024-11-30 17:32:59 +01:00
Michael Eischer
9a674ecc34
Merge pull request #5146 from MichaelEischer/inline-extended-stat
fs: Inline ExtendedFileInfo
2024-11-30 17:23:34 +01:00
Michael Eischer
9a99141a5f fs: remove os.FileInfo from fs.ExtendedFileInfo
Only the `Sys()` value from os.FileInfo is kept as field `sys` to
support Windows. The os.FileInfo removal ensures that for values like
`ModTime`, which existed in both data structures, there is no more
confusion about which value is actually used.
2024-11-30 17:07:36 +01:00
Michael Eischer
847b2efba2 archiver: remove fs parameter from fileChanged function 2024-11-30 16:19:16 +01:00
Michael Eischer
641390103d fs: inline ExtendedStat 2024-11-30 16:19:16 +01:00
Michael Eischer
806fa534ce
Merge pull request #5145 from MichaelEischer/ignore-disappeared-files
backup: Ignore disappeared files
2024-11-30 16:15:31 +01:00
Michael Eischer
5df6bf80b1 fs: retry vss creation on VSS_E_SNAPSHOT_SET_IN_PROGRESS error
Depending on the changed packages, the VSS tests from ./cmd/restic and
the fs package may overlap in time. This causes the snapshot creation to
fail. Add retries in that case.
2024-11-30 16:07:18 +01:00
Michael Eischer
dc89aad722 build dragonflybsd binaries 2024-11-30 15:47:39 +01:00
Tyson Key
3c0ceda536 Add basic support for DragonFlyBSD 2024-11-30 15:42:15 +01:00
Michael Eischer
c5fb46da53 archiver: ignore files removed in the meantime 2024-11-30 15:30:42 +01:00
Michael Eischer
8642049532
Merge pull request #5143 from MichaelEischer/fs-handle-interface
fs: rework FS interface to be handle based
2024-11-30 15:29:31 +01:00
Michael Eischer
8644bb145b
Merge pull request #5134 from MichaelEischer/better-time-restore-error
restore: improve error if timestamp fails to restore
2024-11-30 13:09:33 +01:00
Daniel Danner
0997f26461 Reword description of --max-repack-size for clarity 2024-11-29 23:29:43 +01:00
Michael Eischer
a5c49e5340
Merge pull request #5142 from MichaelEischer/fix-not-ordered-error-message
restic: add missing space in error message
2024-11-29 22:48:16 +01:00
Michael Eischer
b51bf0c0c4 fs: test File implementation of Local FS 2024-11-16 16:09:17 +01:00
Michael Eischer
6cb19e0190 archiver: fix file type change test
The test did not test the case that the type of a file changed
unexpectedly.
2024-11-16 16:09:17 +01:00
Michael Eischer
d7f4b9db60 fs: deduplicate placeholders for generic and xattrs 2024-11-16 16:09:17 +01:00
Michael Eischer
087f95a298 fs: make generic and extended attrs independent of each other 2024-11-16 15:38:56 +01:00
Michael Eischer
6084848e5a fs: fix O_NOFOLLOW for metadata handles on Windows 2024-11-16 15:38:56 +01:00
Michael Eischer
48dbefc37e fs / archiver: convert to handle based interface
The actual implementation still relies on file paths, but with the
abstraction layer in place, an FS implementation can ensure atomic file
accesses in the future.
2024-11-16 12:56:23 +01:00
Michael Eischer
2f2ce9add2 fs: remove Stat from FS interface 2024-11-16 12:56:23 +01:00
Michael Eischer
623ba92b98 fs: drop unused permission parameter from OpenFile 2024-11-16 12:56:23 +01:00
Michael Eischer
b402e8a6fc fs: stricter enforcement to only call readdir on a directory
Use O_DIRECTORY to prevent opening any other than a directory in
readdirnames.
2024-11-16 12:56:23 +01:00
Richard Grover
548fa07577 Add changelog info 2024-11-15 14:46:34 -07:00
Michael Eischer
f8031561f2 archiver: deduplicate error filtering 2024-11-15 17:58:06 +01:00
Michael Eischer
49ef3ebec3 restic: add missing space in error message 2024-11-15 17:52:09 +01:00
Richard Grover
dfbd4fb983 Error if AZURE_ACCOUNT_NAME not set 2024-11-13 08:02:22 -07:00
Michael Eischer
1133498ef8
Merge pull request #5046 from konidev20/fix-gh-4521-azure-blob-storage-add-support-for-access-tiers
azure: add support for access tiers hot, cool and cold
2024-11-11 22:01:52 +01:00
Michael Eischer
9c758313e3
Merge pull request #5119 from MichaelEischer/backup-json-start-end-time
backup: include start and end time in json output
2024-11-11 21:50:30 +01:00
Michael Eischer
82c5043fc9
Reduce checkboxes in PR checklist (#5120)
The basics around how to format commits and PR settings are primarily
relevant when opening a PR for the first time. But for repeated
contributors it is tedious to always tick those checkboxes.

Co-authored-by: rawtaz <rawtaz@users.noreply.github.com>
2024-11-11 21:49:26 +01:00
Michael Eischer
a73ae7ba1a restore: improve error if timestamp fails to restore 2024-11-11 21:37:28 +01:00
Michael Eischer
bd16804812 Merge branch 'patch-release' 2024-11-09 11:43:01 +01:00
Alexander Neumann
e2a98aa955 Set development version for 0.17.3 2024-11-08 20:36:48 +01:00
Michael Eischer
408ec41a1d
Merge pull request #5123 from MichaelEischer/fix-removable-media-handling
fs: fallback to low privilege security descriptors on access denied
2024-11-03 21:35:38 +01:00
Michael Eischer
270e7b7679
Merge pull request #5122 from restic/bump-golangci-lint
Bump go and golangci lint version
2024-11-03 21:34:25 +01:00
Michael Eischer
97f3e15039
Merge pull request #5121 from MichaelEischer/improve-release-helper
prepare-release: improve handling of release from non-master branch
2024-11-03 21:31:33 +01:00
Michael Eischer
d5bd3fcda5
Merge pull request #5112 from MichaelEischer/fix-vss-root-volume
Fix VSS metadata error (master)
2024-11-03 21:30:39 +01:00
Michael Eischer
f9a90aae89 fs: fallback to low privilege security descriptors on access denied 2024-11-01 19:10:52 +01:00
Michael Eischer
289159beaf fs: remove redundant fixpath in vss code 2024-11-01 19:03:45 +01:00
Michael Eischer
4052a5927c fs: move getVolumePathName function 2024-11-01 19:03:45 +01:00
Michael Eischer
d3c3390a51 ls: proper error handling if output is not possible 2024-11-01 17:07:43 +01:00
Michael Eischer
569a117a1d improve fprintf related error handling 2024-11-01 17:07:43 +01:00
Michael Eischer
41fa41b28b fix double printf usage 2024-11-01 16:36:23 +01:00
Michael Eischer
3eb9556f6a CI: add go 1.23 2024-11-01 16:34:00 +01:00
Michael Eischer
f5b1f9c8b1 CI: bump golangci-lint to latest version 2024-11-01 16:33:47 +01:00
Michael Eischer
e65f4e2231 backup: include start and end time in json output
The timestamps were already stored in the created snapshot.
2024-11-01 16:31:34 +01:00
Michael Eischer
bcf5fbe498 prepare-release: improve handling of release from non-master branch
The final push command now states the correct branch to push.
2024-11-01 16:22:32 +01:00
Michael Eischer
ded9fc7690
Merge pull request #5101 from MichaelEischer/sftp-load-error
sftp: check for broken connection in Load/List operation
2024-11-01 16:05:29 +01:00
Michael Eischer
b3b173a47c fs: use non-existing vss path to avoid flaky test
The test used \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy1, which,
if it exists and supports extended attributes, can cause the test to fail.
2024-11-01 15:38:05 +01:00
Michael Eischer
e18a2a0072
Merge pull request #5096 from MichaelEischer/prune-allow-dry-run
prune: allow dry-run without taking a lock
2024-11-01 15:34:15 +01:00
Michael Eischer
1eea41c49e
Merge pull request #5095 from MichaelEischer/retry-load-config
Retry loading or creating repository config
2024-11-01 15:33:45 +01:00
Michael Eischer
71c185313e sftp: check for broken connection in Load/List operation 2024-11-01 15:33:27 +01:00
Michael Eischer
868efe4968 prune: allow dry-run without taking a lock 2024-11-01 15:27:25 +01:00
Michael Eischer
3be2b8a54b add config retry changelog 2024-11-01 15:22:55 +01:00
Michael Eischer
b5bc76cdc7 test retry on repo opening 2024-11-01 15:17:54 +01:00
Michael Eischer
58dc4a6892 backend/retry: hide final log for stat() method
stat is only used to check the config file's existence. We don't want
log output in this case.
2024-11-01 15:17:54 +01:00
Michael Eischer
74c783b850 retry load or creating repository config
Missing files are no longer endlessly retried by the retry backend,
so it can be enabled right from the start.

In addition, this change also enables the retry backend for the `init`
command.
2024-11-01 15:17:54 +01:00
Michael Eischer
fc92a04284
Merge pull request #5116 from restic/dependabot/go_modules/github.com/Azure/azure-sdk-for-go/sdk/azidentity-1.8.0
build(deps): bump github.com/Azure/azure-sdk-for-go/sdk/azidentity from 1.7.0 to 1.8.0
2024-11-01 15:07:23 +01:00
Michael Eischer
2f698d1cff
Merge pull request #5117 from restic/dependabot/go_modules/google.golang.org/api-0.204.0
build(deps): bump google.golang.org/api from 0.199.0 to 0.204.0
2024-11-01 15:01:10 +01:00
dependabot[bot]
d8bf327d8b
build(deps): bump github.com/Azure/azure-sdk-for-go/sdk/azidentity
Bumps [github.com/Azure/azure-sdk-for-go/sdk/azidentity](https://github.com/Azure/azure-sdk-for-go) from 1.7.0 to 1.8.0.
- [Release notes](https://github.com/Azure/azure-sdk-for-go/releases)
- [Changelog](https://github.com/Azure/azure-sdk-for-go/blob/main/documentation/release.md)
- [Commits](https://github.com/Azure/azure-sdk-for-go/compare/sdk/azcore/v1.7.0...sdk/azcore/v1.8.0)

---
updated-dependencies:
- dependency-name: github.com/Azure/azure-sdk-for-go/sdk/azidentity
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-01 13:54:05 +00:00
Michael Eischer
2b3672198c
Merge pull request #5115 from restic/dependabot/go_modules/github.com/Azure/azure-sdk-for-go/sdk/azcore-1.16.0
build(deps): bump github.com/Azure/azure-sdk-for-go/sdk/azcore from 1.14.0 to 1.16.0
2024-11-01 14:53:13 +01:00
dependabot[bot]
de847a48bf
build(deps): bump google.golang.org/api from 0.199.0 to 0.204.0
Bumps [google.golang.org/api](https://github.com/googleapis/google-api-go-client) from 0.199.0 to 0.204.0.
- [Release notes](https://github.com/googleapis/google-api-go-client/releases)
- [Changelog](https://github.com/googleapis/google-api-go-client/blob/main/CHANGES.md)
- [Commits](https://github.com/googleapis/google-api-go-client/compare/v0.199.0...v0.204.0)

---
updated-dependencies:
- dependency-name: google.golang.org/api
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-01 13:52:51 +00:00
Michael Eischer
d1d8ae7368
Merge pull request #5113 from restic/dependabot/go_modules/golang.org/x/time-0.7.0
build(deps): bump golang.org/x/time from 0.6.0 to 0.7.0
2024-11-01 14:52:18 +01:00
Michael Eischer
a32c98a39c
Merge pull request #5114 from restic/dependabot/go_modules/golang.org/x/sys-0.26.0
build(deps): bump golang.org/x/sys from 0.25.0 to 0.26.0
2024-11-01 14:51:58 +01:00
dependabot[bot]
53cb6200fa
build(deps): bump github.com/Azure/azure-sdk-for-go/sdk/azcore
Bumps [github.com/Azure/azure-sdk-for-go/sdk/azcore](https://github.com/Azure/azure-sdk-for-go) from 1.14.0 to 1.16.0.
- [Release notes](https://github.com/Azure/azure-sdk-for-go/releases)
- [Changelog](https://github.com/Azure/azure-sdk-for-go/blob/main/documentation/release.md)
- [Commits](https://github.com/Azure/azure-sdk-for-go/compare/sdk/azcore/v1.14.0...sdk/azcore/v1.16.0)

---
updated-dependencies:
- dependency-name: github.com/Azure/azure-sdk-for-go/sdk/azcore
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-01 01:43:23 +00:00
dependabot[bot]
ae9268dadf
build(deps): bump golang.org/x/sys from 0.25.0 to 0.26.0
Bumps [golang.org/x/sys](https://github.com/golang/sys) from 0.25.0 to 0.26.0.
- [Commits](https://github.com/golang/sys/compare/v0.25.0...v0.26.0)

---
updated-dependencies:
- dependency-name: golang.org/x/sys
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-01 01:43:12 +00:00
dependabot[bot]
a494bf661d
build(deps): bump golang.org/x/time from 0.6.0 to 0.7.0
Bumps [golang.org/x/time](https://github.com/golang/time) from 0.6.0 to 0.7.0.
- [Commits](https://github.com/golang/time/compare/v0.6.0...v0.7.0)

---
updated-dependencies:
- dependency-name: golang.org/x/time
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-01 01:43:08 +00:00
Michael Eischer
51cd1c847b backup: log error if test backup fails 2024-10-31 22:06:50 +01:00
Michael Eischer
14370fbf9e add vss metadata changelog 2024-10-31 22:06:50 +01:00
Michael Eischer
62af5f0b4a restic: test path handling of volume shadow copy root path 2024-10-31 22:06:50 +01:00
Michael Eischer
cb9247530e backup: run test with absolute path 2024-10-31 22:06:50 +01:00
Michael Eischer
1d0d5d87bc fs: fix error in fillGenericAttributes for vss volumes
Extended attributes and security descriptors apparently cannot be
retrieved from a vss volume. Fix the volume check to correctly detect
vss volumes and just completely disable extended attributes for volumes.
2024-10-31 22:06:50 +01:00
Michael Eischer
03aad742d3 fs: add correct vss support to fixpath
Paths that only contain the volume shadow copy snapshot name require
special treatment. These paths must end with a slash for regular file
operations to work.
2024-10-31 22:06:50 +01:00
Michael Eischer
15b7fb784f fs: cleanup fixpath 2024-10-31 21:49:03 +01:00
rawtaz
33da501c35
Merge pull request #5105 from joram-berger/patch-2
doc: Clarify number of blobs are added
2024-10-27 19:11:56 +00:00
Joram Berger
cd44b2bf8b doc: Clarify number of blobs are added
The numbers reported as `data_blobs` and `tree_blobs` are not the total numbers of blobs, but the numbers of blobs added with the given snapshot.
2024-10-27 19:58:21 +01:00
Michael Eischer
1f0f6ad63d Merge branch 'patch-release' 2024-10-27 18:35:32 +01:00
Michael Eischer
ca4bd1b8ca
Merge pull request #5094 from MichaelEischer/document-restore-delete-safety
doc: document safety feature for --target / --delete
2024-10-27 18:21:47 +01:00
Michael Eischer
e320edd416
Merge pull request #5048 from MichaelEischer/fix-macos-fuse
Fix unusable `mount` on macOS Sonoma
2024-10-23 22:51:00 +02:00
Michael Eischer
821000cb68
Merge pull request #5097 from MichaelEischer/fix-vss-metadata
backup: read extended metadata from snapshot
2024-10-22 19:23:06 +02:00
Srigovind Nayak
db686592a1
debug: azure add debug log to show access-tier 2024-10-20 20:24:49 +05:30
Srigovind Nayak
bff3341d10
azure: add support for hot, cool, or cold access tiers 2024-10-20 15:27:21 +05:30
Michael Eischer
5fe6607127
Merge pull request #5084 from greatroar/utimesnano
Simplify and refactor restoring of timestamps
2024-10-19 12:47:13 +00:00
greatroar
8f20d5dcd5 fs: Refactor UtimesNano replacements
Previously, nodeRestoreTimestamps would do something like

	if node.Type == restic.NodeTypeSymlink {
	    return nodeRestoreSymlinkTimestamps(...)
	}
	return syscall.UtimesNano(...)

where nodeRestoreSymlinkTimestamps was either a no-op or a
reimplementation of syscall.UtimesNano that handles symlinks, with some
repeated converting between timestamp types. The Linux implementation
was a bit clumsy, requiring three syscalls to set the timestamps.

In this new setup, there is a function utimesNano that has three
implementations:

* on Linux, it's a modified syscall.UtimesNano that uses
  AT_SYMLINK_NOFOLLOW and AT_FDCWD so it can handle any type in a single
  call;
* on other Unix platforms, it just calls the syscall function after
  skipping symlinks;
* on Windows, it's the modified UtimesNano that was previously called
  nodeRestoreSymlinkTimestamps, except with different arguments.
2024-10-19 12:04:09 +02:00
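
A sketch of the Linux variant described above, assuming the
golang.org/x/sys/unix package; utimesNano is the name from the commit
message, but the body here is illustrative, not restic's actual code:

    package fsutil

    import (
        "time"

        "golang.org/x/sys/unix"
    )

    // utimesNano sets access and modification time in a single syscall
    // and, thanks to AT_SYMLINK_NOFOLLOW, handles symlinks as well.
    func utimesNano(path string, atime, mtime time.Time) error {
        ts := []unix.Timespec{
            unix.NsecToTimespec(atime.UnixNano()),
            unix.NsecToTimespec(mtime.UnixNano()),
        }
        return unix.UtimesNanoAt(unix.AT_FDCWD, path, ts, unix.AT_SYMLINK_NOFOLLOW)
    }
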
greatroar
f967a33ccc fs: Use AT_FDCWD in Linux nodeRestoreSymlinkTimestamps
There's no need to open the containing directory. This is exactly what
syscall.UtimesNano does, except for the AT_SYMLINK_NOFOLLOW flag.
2024-10-19 11:29:35 +02:00
Michael Eischer
ec43594003 add vss metadata changelog 2024-10-18 22:36:03 +02:00
Michael Eischer
e1faf7b18c backup: work around file deletion error in test 2024-10-18 22:08:10 +02:00
Michael Eischer
fc6f1b4b06 redirect test log output to t.Log() 2024-10-18 21:43:46 +02:00
Michael Eischer
9f206601af backup: test that vss backups work if underlying data was removed 2024-10-18 21:43:46 +02:00
Michael Eischer
ca79cb92e3 fs/vss: test that vss functions actually read from snapshot 2024-10-18 21:43:46 +02:00
Michael Eischer
352605d9f0 fs: remove file.Name() from interface
The only user was archiver.fileSaver.
2024-10-18 21:43:23 +02:00
Michael Eischer
26b77a543d archiver: use correct filepath in fileSaver for vss
When using the VSS FS, `f.Name()` contained the filename in the
snapshot. This caused a double mapping when calling NodeFromFileInfo.
2024-10-18 21:41:02 +02:00
Michael Eischer
b988754a6d fs/vss: reuse functions from underlying FS
OpenFile, Stat and Lstat should reuse the underlying FS implementation
to avoid diverging behavior.
2024-10-18 19:30:05 +02:00
Michael Eischer
60960d2405 fs/vss: properly create node from vss path
Previously, NodeFromFileInfo used the original file path to create the
node, which also meant that extended metadata was read from there
instead of within the vss snapshot.
2024-10-18 19:27:44 +02:00
Michael Eischer
7c02141548
Merge pull request #5093 from Seefin/fix-containerSAS
Fix Azure Container Token Auth
2024-10-17 18:45:06 +00:00
Connor Findlay
b434f560cc backend/azure: Add tests for both token types
Add two new test cases, TestBackendAzureAccountToken and
TestBackendAzureContainerToken, that ensure that the authorization using
both types of token works.

This introduces two new environment variables,
RESTIC_TEST_AZURE_ACCOUNT_SAS and RESTIC_TEST_AZURE_CONTAINER_SAS, that
contain the tokens to use when testing restic. If an environment
variable is missing, the related test is skipped.
2024-10-17 20:38:03 +02:00
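
The shape of such an environment-gated test, as a minimal sketch (the
backend setup itself is elided):

    package azure_test

    import (
        "os"
        "testing"
    )

    func TestBackendAzureContainerToken(t *testing.T) {
        sas := os.Getenv("RESTIC_TEST_AZURE_CONTAINER_SAS")
        if sas == "" {
            t.Skip("RESTIC_TEST_AZURE_CONTAINER_SAS not set, skipping")
        }
        // Open the backend with the container-level SAS token and run
        // the backend test suite against it (elided here).
    }
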
Connor Findlay
7bdfcf13fb changelog: Add changes in issue-4004
Add changelog entry in the 'unreleased' sub-folder for changes
introduced when fixing issue #4004.
2024-10-17 20:38:03 +02:00
Connor Findlay
2e704c69ac backend/azure: Handle Container SAS/SAT
Ignore AuthorizationFailure caused by using a container-level SAS/SAT
token when calling GetProperties during the Create() call. This is
because the GetProperties call expects an account-level token, while the
container-level token simply lacks the appropriate permissions.
Suppressing the AuthorizationFailure is OK, because if the token is
actually invalid, this is caught elsewhere when we try to actually use
the token to do work.
2024-10-17 20:38:03 +02:00
Michael Eischer
5838896962 doc: document safety feature for --target / --delete 2024-10-17 19:45:03 +02:00
Michael Eischer
bcd5ac34bb
Merge pull request #5060 from MichaelEischer/proper-nodefromfileinfo
fs: move NodeFromFileInfo into FS interface
2024-10-16 21:34:37 +02:00
Michael Eischer
618f306f13
Merge pull request #5054 from phillipp/dump-compress-zip
dump: add --compress flag to compress archives
2024-10-16 19:17:47 +00:00
Michael Eischer
75711446e1 fs: move NodeFromFileInfo into FS interface 2024-10-16 21:17:21 +02:00
Michael Eischer
c3b3120e10
Merge pull request #5057 from MichaelEischer/fix-backup-irregular
backup: fix handling of files with type irregular
2024-10-16 21:13:08 +02:00
Michael Eischer
e29d38f8bf dump/zip: test that files are compressed 2024-10-16 21:11:24 +02:00
Michael Eischer
da3c02405b dump/zip: only compress regular files 2024-10-16 21:09:05 +02:00
Michael Eischer
55c150054d add irregular files bug changelog 2024-10-16 20:54:08 +02:00
Michael Eischer
012cb06fe9 repair snapshots: remove irregular files 2024-10-16 20:54:08 +02:00
Michael Eischer
f44b7cdf8c backup: exclude irregular files from backup
restic cannot back up irregular files, as those don't behave like
normal files. Thus, skip them with an error.
2024-10-16 20:54:08 +02:00
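
In Go, such files carry the fs.ModeIrregular mode bit; a hedged sketch
of the check (checkRegular is an illustrative name, not restic's code):

    package archiver

    import (
        "fmt"
        "io/fs"
    )

    // checkRegular rejects files whose type the archiver cannot handle.
    func checkRegular(fi fs.FileInfo) error {
        if fi.Mode()&fs.ModeIrregular != 0 {
            return fmt.Errorf("%q: irregular file type cannot be backed up", fi.Name())
        }
        return nil
    }
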
Michael Eischer
e91a456656
Merge pull request #5061 from MichaelEischer/fix-timestamp-restore-windows
fs: fix restoring timestamps on older Windows versions for long paths
2024-10-16 20:47:17 +02:00
Michael Eischer
e21496f217
Merge pull request #5074 from greatroar/dump
dump: Simplify writeNode and use fewer goroutines
2024-10-16 18:33:35 +00:00
Michael Eischer
0c0d8b8cfd
Merge pull request #5083 from greatroar/errors
Some error handling patches
2024-10-16 18:22:49 +00:00
Michael Eischer
60cba55647
Merge pull request #5079 from restic/dependabot/go_modules/google.golang.org/api-0.199.0
build(deps): bump google.golang.org/api from 0.195.0 to 0.199.0
2024-10-09 20:35:03 +00:00
dependabot[bot]
221fa0fa7c
build(deps): bump google.golang.org/api from 0.195.0 to 0.199.0
Bumps [google.golang.org/api](https://github.com/googleapis/google-api-go-client) from 0.195.0 to 0.199.0.
- [Release notes](https://github.com/googleapis/google-api-go-client/releases)
- [Changelog](https://github.com/googleapis/google-api-go-client/blob/main/CHANGES.md)
- [Commits](https://github.com/googleapis/google-api-go-client/compare/v0.195.0...v0.199.0)

---
updated-dependencies:
- dependency-name: google.golang.org/api
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-10-09 20:26:34 +00:00
Michael Eischer
7cfd8a6715
Merge pull request #5080 from restic/dependabot/go_modules/golang.org/x/oauth2-0.23.0
build(deps): bump golang.org/x/oauth2 from 0.22.0 to 0.23.0
2024-10-09 20:15:43 +00:00
Michael Eischer
0ada0b56b6
Merge pull request #5078 from restic/dependabot/go_modules/github.com/minio/minio-go/v7-7.0.77
build(deps): bump github.com/minio/minio-go/v7 from 7.0.76 to 7.0.77
2024-10-09 20:09:05 +00:00
Michael Eischer
7c12bd59a0
Merge pull request #5053 from rominf/rominf-generate-stdout
generate: allow passing `-` for stdout output
2024-10-09 20:06:54 +00:00
Michael Eischer
888abff7e0
Merge pull request #5058 from MichaelEischer/clarify-changelog
Changelogs should omit problem if its description duplicates the new behavior
2024-10-09 22:06:41 +02:00
Michael Eischer
783901726e
Merge pull request #5056 from MichaelEischer/fix-tag-error-handling
tag: fix swallowed error if repository cannot be opened
2024-10-09 22:06:26 +02:00
dependabot[bot]
eac00eb933
build(deps): bump golang.org/x/oauth2 from 0.22.0 to 0.23.0
Bumps [golang.org/x/oauth2](https://github.com/golang/oauth2) from 0.22.0 to 0.23.0.
- [Commits](https://github.com/golang/oauth2/compare/v0.22.0...v0.23.0)

---
updated-dependencies:
- dependency-name: golang.org/x/oauth2
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-10-09 19:58:42 +00:00
Michael Eischer
96c1c1a0fc
Merge pull request #5075 from greatroar/idset
internal/restic: Use IDSet.Clone + use maps package
2024-10-09 19:55:26 +00:00
Michael Eischer
8d7f4574b4
Merge pull request #5077 from restic/dependabot/go_modules/go.uber.org/automaxprocs-1.6.0
build(deps): bump go.uber.org/automaxprocs from 1.5.3 to 1.6.0
2024-10-09 19:51:15 +00:00
Michael Eischer
ddf65b04f3
Merge pull request #5076 from restic/dependabot/go_modules/golang.org/x/sys-0.25.0
build(deps): bump golang.org/x/sys from 0.24.0 to 0.25.0
2024-10-09 19:50:45 +00:00
greatroar
2b609d3e77 errors, fs: Replace CombineErrors with stdlib Join
This does not produce exactly the same messages, as it inserts newlines
instead of "; ". But given how long our error messages can be, that
might be a good thing.
2024-10-05 10:56:40 +02:00
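
For illustration, the stdlib behavior the commit relies on (errors.Join
is available since Go 1.20):

    package main

    import (
        "errors"
        "fmt"
    )

    func main() {
        errClose := errors.New("close failed")
        errSync := errors.New("fsync failed")

        // Join concatenates the messages with newlines instead of "; ",
        // and errors.Is still matches each wrapped error.
        err := errors.Join(errSync, errClose)
        fmt.Println(err)
        fmt.Println(errors.Is(err, errClose)) // true
    }
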
greatroar
19653f9e06 fs: Simplify NodeCreateAt 2024-10-05 10:56:39 +02:00
greatroar
e10e2bb50f fs: Include filename in mknod errors 2024-10-05 10:56:39 +02:00
greatroar
b5c28a7ba2 internal/restic: Use IDSet.Clone + use maps package
One place where IDSet.Clone is useful was reinventing it, using a
conversion to list, a sort, and a conversion back to map.

Also, use the stdlib "maps" package to implement as much of IDSet as
possible. This requires changing one caller, which assumed that cloning
nil would return a non-nil IDSet.
2024-10-03 21:14:29 +02:00
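
A sketch of the resulting pattern; the ID and IDSet definitions mirror
restic's types, the rest is illustrative:

    package main

    import (
        "fmt"
        "maps"
    )

    type ID [32]byte
    type IDSet map[ID]struct{}

    // Clone replaces the old list/sort/rebuild detour with one call.
    func (s IDSet) Clone() IDSet { return maps.Clone(s) }

    func main() {
        s := IDSet{{1}: {}, {2}: {}}
        fmt.Println(len(s.Clone())) // 2
        // Caveat from the commit message: cloning a nil set yields nil.
        fmt.Println(IDSet(nil).Clone() == nil) // true
    }
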
dependabot[bot]
f3f629bb69
build(deps): bump github.com/minio/minio-go/v7 from 7.0.76 to 7.0.77
Bumps [github.com/minio/minio-go/v7](https://github.com/minio/minio-go) from 7.0.76 to 7.0.77.
- [Release notes](https://github.com/minio/minio-go/releases)
- [Commits](https://github.com/minio/minio-go/compare/v7.0.76...v7.0.77)

---
updated-dependencies:
- dependency-name: github.com/minio/minio-go/v7
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-10-01 01:49:46 +00:00
dependabot[bot]
e90085b375
build(deps): bump go.uber.org/automaxprocs from 1.5.3 to 1.6.0
Bumps [go.uber.org/automaxprocs](https://github.com/uber-go/automaxprocs) from 1.5.3 to 1.6.0.
- [Release notes](https://github.com/uber-go/automaxprocs/releases)
- [Changelog](https://github.com/uber-go/automaxprocs/blob/master/CHANGELOG.md)
- [Commits](https://github.com/uber-go/automaxprocs/compare/v1.5.3...v1.6.0)

---
updated-dependencies:
- dependency-name: go.uber.org/automaxprocs
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-10-01 01:49:41 +00:00
dependabot[bot]
3f08dee685
build(deps): bump golang.org/x/sys from 0.24.0 to 0.25.0
Bumps [golang.org/x/sys](https://github.com/golang/sys) from 0.24.0 to 0.25.0.
- [Commits](https://github.com/golang/sys/compare/v0.24.0...v0.25.0)

---
updated-dependencies:
- dependency-name: golang.org/x/sys
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-10-01 01:49:38 +00:00
greatroar
8c7a6daa47 dump: Simplify writeNode and use fewer goroutines
This changes Dumper.writeNode to spawn loader goroutines as needed
instead of as a pool. The code is shorter, fewer goroutines are spawned
for small files, and crash dumps (also for unrelated errors) should be
smaller.
2024-09-30 17:24:05 +02:00
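
A rough sketch of the goroutine-per-load shape (names are illustrative;
the real code also has to stream the results in order):

    package dump

    import "sync"

    // loadAll fetches each blob in its own goroutine instead of keeping
    // a fixed worker pool alive: small files spawn few goroutines, and
    // crash dumps contain fewer idle workers.
    func loadAll(ids []string, load func(id string)) {
        var wg sync.WaitGroup
        for _, id := range ids {
            wg.Add(1)
            go func(id string) {
                defer wg.Done()
                load(id)
            }(id)
        }
        wg.Wait()
    }
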
Roman Inflianskas
3d976562fa
generate: allow passing - for stdout output
Since generating completions to stdout for multiple shells does not make
sense, enforce `-` is supplied only once.
2024-09-16 10:54:00 +03:00
Phillipp Röll
1a7fafc7eb dump: compress zip archives 2024-09-15 21:04:54 +02:00
Michael Eischer
4469fe1575 fs: fix restoring timestamps on Windows for long paths 2024-09-15 18:28:11 +02:00
Phillipp Röll
bad6c54a33 dump: add --compress-zip flag to compress zip archives 2024-09-15 14:25:02 +02:00
Michael Eischer
7680f48258 Changelogs should omit the problem if it duplicates the new behavior
When adding a new feature, the problem description often just says that
feature Y was missing, followed by saying that feature Y is now
supported.

This duplication just makes the changelog entries unnecessarily verbose.
2024-09-14 20:54:27 +02:00
Michael Eischer
efec1a5e96
Merge pull request #5045 from MichaelEischer/fix-preallocate-eintr
Linux: retry preallocate if interrupted by signal
2024-09-14 19:17:51 +02:00
Michael Eischer
bd2c986592
Merge pull request #5051 from rominf/rominf-list-subcommands
list: complete and validate subcommand
2024-09-14 16:43:04 +00:00
Michael Eischer
cab6b15603 tag: fix swallowed error if repository cannot be opened 2024-09-14 18:38:48 +02:00
Michael Eischer
4105e4a356
Merge pull request #5047 from damoclark/patch-1
cache: fix race condition in cache cleanup or similar.
2024-09-14 16:14:48 +00:00
Michael Eischer
ccf5be235a add changelog for fuse fix 2024-09-14 18:11:44 +02:00
Michael Eischer
5ce6ca2219 fuse: test that the same fs.Node is used for the same file 2024-09-14 18:11:44 +02:00
Michael Eischer
51173c5003 fuse: forget fs.Node instances on request by the kernel
Forget fs.Node instances once the kernel frees the corresponding nodeId.
This ensures that restic does not run out of memory on large snapshots.
2024-09-14 18:11:44 +02:00
Michael Eischer
e9940f39dc fuse: add missing type assertion for optional interfaces 2024-09-14 18:11:44 +02:00
Michael Eischer
6ec2b62ec5 fuse: cache fs.Node instances
A particular node should always be represented by a single instance.
This is necessary to allow the fuse library to assign a stable nodeId to
a node. macOS Sonoma trips over the previous, unstable behavior when
using fuse-t.
2024-09-14 18:11:44 +02:00
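
An illustrative sketch of such a node cache (not restic's actual types);
the point is that repeated lookups return the identical instance until
the kernel's FORGET arrives:

    package fuse

    import "sync"

    type Node interface{}

    type nodeCache struct {
        mu    sync.Mutex
        nodes map[uint64]Node // keyed by inode number
    }

    // node returns the cached instance for inode, creating it once, so
    // the fuse library can assign a stable nodeId.
    func (c *nodeCache) node(inode uint64, create func() Node) Node {
        c.mu.Lock()
        defer c.mu.Unlock()
        if c.nodes == nil {
            c.nodes = make(map[uint64]Node)
        }
        if n, ok := c.nodes[inode]; ok {
            return n
        }
        n := create()
        c.nodes[inode] = n
        return n
    }

    // forget drops the instance once the kernel frees the nodeId, so
    // memory stays bounded on large snapshots.
    func (c *nodeCache) forget(inode uint64) {
        c.mu.Lock()
        defer c.mu.Unlock()
        delete(c.nodes, inode)
    }
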
Damien Clark
4795143d6d cache: fix race condition in cache cleanup
Fix multiple restic processes executing concurrently and racing to remove obsolete snapshots.

Co-authored-by: Michael Eischer <michael.eischer@fau.de>
2024-09-14 18:07:46 +02:00
Roman Inflianskas
a84e65b7f9
list: validate subcommand 2024-09-13 12:23:26 +03:00
Roman Inflianskas
6f08dbb2d7
list: add subcommand completion 2024-09-13 12:22:53 +03:00
Michael Eischer
c1532179d4
Merge pull request #5043 from MichaelEischer/fix-github-release-note-formatting
Fix indentation of blockquotes in github release notes
2024-09-07 17:11:22 +02:00
Michael Eischer
34fe73ea42 fs: retry preallocate on Linux if interrupted by signal 2024-09-07 16:39:40 +02:00
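
A plausible shape of that retry, assuming golang.org/x/sys/unix
(illustrative, not the actual commit):

    package fsutil

    import "golang.org/x/sys/unix"

    // preallocate reserves size bytes for fd and retries whenever the
    // syscall is interrupted by a signal (EINTR).
    func preallocate(fd int, size int64) error {
        for {
            err := unix.Fallocate(fd, 0, 0, size)
            if err != unix.EINTR {
                return err
            }
        }
    }
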
Michael Eischer
37d5bd61a0
Merge pull request #5042 from solracsf/patch-1
docs: Recommend setting up B2 versions lifecycle rules
2024-09-07 14:36:29 +00:00
Michael Eischer
7b1a15916d
Merge pull request #5039 from konidev20/fix-gh-4806-forget-add-reason-for-oldest-snapshot-retained
forget: indicate why the oldest snapshot in a group is kept
2024-09-07 14:31:47 +00:00
Git'Fellow
113439c69b
fix: shorten sentence 2024-09-07 15:27:15 +02:00
Srigovind Nayak
5468e85222
docs: mention that the oldest snapshot is marked oldest in the reasons of the forget command 2024-09-07 15:07:23 +05:30
Srigovind Nayak
b69c6408a6
forget: make oldest snapshot marker more strict
Now, a snapshot is only marked as oldest if it's the last in the list AND its value matches the last seen value for that bucket.

Also, updated the corresponding golden files for the tests.
2024-09-07 15:07:23 +05:30
Srigovind Nayak
d656a50852
forget: update tests to reflect specific reasons for keeping oldest snapshots in a group 2024-09-07 15:07:23 +05:30
Srigovind Nayak
87f30bc787
forget: indicate why the oldest snapshot in a group is kept
When the oldest snapshot in the
list is retained, the reason is now prefixed with "oldest" to clearly
indicate why it's being kept.
2024-09-07 15:07:23 +05:30
Michael Eischer
4f0affd4f7 Merge branch 'patch-release' 2024-09-06 22:32:22 +02:00
Michael Eischer
3df8337d63 Fix indentation of blockquotes in github release notes 2024-09-05 22:33:57 +02:00
Git'Fellow
00ca0b371b
docs: Recommend setting up B2 versions lifecycle rules 2024-09-04 13:21:37 +02:00
Michael Eischer
8a0edde407
Merge pull request #5038 from restic/dependabot/go_modules/google.golang.org/api-0.195.0
build(deps): bump google.golang.org/api from 0.191.0 to 0.195.0
2024-09-01 22:36:39 +00:00
Michael Eischer
0a225049d8
Merge pull request #5035 from restic/dependabot/go_modules/github.com/minio/minio-go/v7-7.0.76
build(deps): bump github.com/minio/minio-go/v7 from 7.0.74 to 7.0.76
2024-09-01 22:14:47 +00:00
Michael Eischer
3023b2f566
Merge pull request #5033 from MichaelEischer/s3-clarify-docs
docs: make s3-compatible section standalone
2024-09-02 00:14:31 +02:00
dependabot[bot]
a6490feab2
build(deps): bump github.com/minio/minio-go/v7 from 7.0.74 to 7.0.76
Bumps [github.com/minio/minio-go/v7](https://github.com/minio/minio-go) from 7.0.74 to 7.0.76.
- [Release notes](https://github.com/minio/minio-go/releases)
- [Commits](https://github.com/minio/minio-go/compare/v7.0.74...v7.0.76)

---
updated-dependencies:
- dependency-name: github.com/minio/minio-go/v7
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-09-01 22:00:55 +00:00
Michael Eischer
daa6448a77
Merge pull request #5034 from restic/dependabot/go_modules/golang.org/x/sys-0.24.0
build(deps): bump golang.org/x/sys from 0.23.0 to 0.24.0
2024-09-01 21:52:56 +00:00
Michael Eischer
07a8b73f25
Merge pull request #5037 from restic/dependabot/go_modules/github.com/ncw/swift/v2-2.0.3
build(deps): bump github.com/ncw/swift/v2 from 2.0.2 to 2.0.3
2024-09-01 21:52:41 +00:00
Michael Eischer
9a6059eb71
Merge pull request #5032 from dropbigfish/master
chore: fix some function name comments
2024-09-01 21:52:26 +00:00
dependabot[bot]
790dbd442b
build(deps): bump google.golang.org/api from 0.191.0 to 0.195.0
Bumps [google.golang.org/api](https://github.com/googleapis/google-api-go-client) from 0.191.0 to 0.195.0.
- [Release notes](https://github.com/googleapis/google-api-go-client/releases)
- [Changelog](https://github.com/googleapis/google-api-go-client/blob/main/CHANGES.md)
- [Commits](https://github.com/googleapis/google-api-go-client/compare/v0.191.0...v0.195.0)

---
updated-dependencies:
- dependency-name: google.golang.org/api
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-09-01 01:33:45 +00:00
dependabot[bot]
daf156a76a
build(deps): bump github.com/ncw/swift/v2 from 2.0.2 to 2.0.3
Bumps [github.com/ncw/swift/v2](https://github.com/ncw/swift) from 2.0.2 to 2.0.3.
- [Release notes](https://github.com/ncw/swift/releases)
- [Changelog](https://github.com/ncw/swift/blob/master/RELEASE.md)
- [Commits](https://github.com/ncw/swift/compare/v2.0.2...v2.0.3)

---
updated-dependencies:
- dependency-name: github.com/ncw/swift/v2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-09-01 01:33:35 +00:00
dependabot[bot]
154ca4d9e8
build(deps): bump golang.org/x/sys from 0.23.0 to 0.24.0
Bumps [golang.org/x/sys](https://github.com/golang/sys) from 0.23.0 to 0.24.0.
- [Commits](https://github.com/golang/sys/compare/v0.23.0...v0.24.0)

---
updated-dependencies:
- dependency-name: golang.org/x/sys
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-09-01 01:33:18 +00:00
Michael Eischer
ebd8f0c74a docs: make s3-compatible section standalone 2024-08-31 19:39:30 +02:00
dropbigfish
6f9513d88c chore: fix some function names
Signed-off-by: dropbigfish <fillfish@foxmail.com>
2024-09-01 00:54:39 +08:00
Michael Eischer
d8be8f1e06
Merge pull request #5024 from MichaelEischer/move-node-to-fs
Cleanup FS package
2024-08-31 18:47:11 +02:00
Michael Eischer
b91ef3f1ff fs: remove dead code 2024-08-31 18:40:36 +02:00
Michael Eischer
e2bce1b9ee fs: move WindowsAttributes definition back to restic package 2024-08-31 18:40:36 +02:00
Michael Eischer
ebdd946ac1 fs: unexport nodeRestoreTimestamps 2024-08-31 18:40:36 +02:00
Michael Eischer
2aa1e2615b fs: fix comments 2024-08-31 18:40:36 +02:00
Michael Eischer
6c16733dfd fs: remove unused methods from File interface 2024-08-31 18:40:36 +02:00
Michael Eischer
f0329bb4e6 fs: replace statT with ExtendedFileInfo 2024-08-31 18:40:36 +02:00
Michael Eischer
6d3a5260d3 fs: unexport several windows functions 2024-08-31 18:40:36 +02:00
Michael Eischer
cf051e777a fs: remove Readdir method from File interface 2024-08-31 18:20:41 +02:00
Michael Eischer
cc7f99125a minimize usage of internal/fs in tests 2024-08-31 18:20:41 +02:00
Michael Eischer
65a7157383 mount: use os instead of fs package 2024-08-31 18:20:41 +02:00
Michael Eischer
24f4e780f1 backend: consistently use os package for filesystem access
The go std library should be good enough to manage the files in the
backend and cache folders.
2024-08-31 18:20:40 +02:00
Michael Eischer
ca1e5e10b6 add proper constants for node type 2024-08-31 18:20:01 +02:00
Michael Eischer
3b438e5c7c
Merge pull request #5023 from MichaelEischer/cleanup-archiver
archiver: use FS interface nearly everywhere and cleanup exports
2024-08-31 18:14:47 +02:00
Michael Eischer
7bb92dc7bd archiver: use ExtendedStat from FS interface
With this change, NodeFromFileInfo is the last function that bypasses
the FS interface in the archiver.
2024-08-31 18:05:09 +02:00
Michael Eischer
e79dca644e fs: unexport DeviceID 2024-08-31 18:04:53 +02:00
Michael Eischer
70fbad6623 archiver: minimize imports 2024-08-31 18:04:37 +02:00
Michael Eischer
6fd5d5f2d5 archiver: move helper functions to combine rejects 2024-08-31 18:04:22 +02:00
Michael Eischer
f1585af0f2 move include/exclude options to filter package 2024-08-31 18:04:07 +02:00
Michael Eischer
5d58945718 cleanup include / exclude option setup 2024-08-31 18:03:53 +02:00
Michael Eischer
41c031a19e backup: move RejectFuncs to archiver package 2024-08-31 18:03:35 +02:00
Michael Eischer
f9dbcd2531 backup: convert reject funcs to use FS interface
Depending on parameters the paths in a snapshot do not directly
correspond to real paths on the filesystem. Therefore, reject funcs must
use the FS interface to work correctly.
2024-08-31 18:03:02 +02:00
Michael Eischer
c6fae0320e archiver: hide implementation details 2024-08-31 17:52:45 +02:00
Michael Eischer
e5cdae9c84
Merge pull request #5022 from MichaelEischer/extract-fs-code
Extract filesystem code from restic.Node
2024-08-31 17:52:11 +02:00
Michael Eischer
507842b614 fs: remove Open method from FS interface 2024-08-31 17:37:25 +02:00
Michael Eischer
263709da8c fs: unexport isListxattrPermissionError 2024-08-31 17:37:25 +02:00
Michael Eischer
80ed863aab repository: remove redundant cleanup code
The temp files used by the packer manager are either deleted after
creation (unix) or marked as delete-on-close (windows). Thus, no
explicit cleanup is necessary.
2024-08-31 17:37:25 +02:00
Michael Eischer
0ddb4441d7 fs: clean up helper functions 2024-08-31 17:37:25 +02:00
Michael Eischer
fc549c9462 cleanup imports 2024-08-31 17:37:25 +02:00
Michael Eischer
b9b32e5647 restic: extract Node filesystem code to fs package 2024-08-31 17:37:25 +02:00
Michael Eischer
a2e54eac64 restic: simplify nodeCreateFileAt
The code to write the file content is never used.
2024-08-31 17:37:25 +02:00
Michael Eischer
5644079707 restic: prepare extraction of fs code from Node 2024-08-31 17:37:25 +02:00
Michael Eischer
3e0c081bed
Merge pull request #5020 from MichaelEischer/remove-legacy-formats
Remove support for legacy index format and s3 layout
2024-08-31 17:37:09 +02:00
Michael Eischer
97f696b937 backend: remove dead code 2024-08-31 17:25:24 +02:00
Michael Eischer
af989aab4e backend/layout: unexport fields and simplify rest layout 2024-08-31 17:25:24 +02:00
Michael Eischer
6024597028 drop support for s3legacy layout 2024-08-31 17:25:24 +02:00
Michael Eischer
943b6ccfba index: remove support for legacy index format 2024-08-31 17:12:43 +02:00
Michael Eischer
a5533344f9
Merge pull request #5028 from MichaelEischer/windows-allow-specifying-volumes
backup: support specifying volume instead of path on Windows
2024-08-31 16:43:20 +02:00
Michael Eischer
ddf35a60ad
Merge pull request #5026 from MichaelEischer/fix-handling-invalid-filenames
cache: Fix handling of invalid filenames
2024-08-31 16:42:13 +02:00
Michael Eischer
4fcedb4bae backup: support specifying volume instead of path on Windows
"C:" (volume name) versus "C:\" (path)
2024-08-30 11:35:43 +02:00
Michael Eischer
a0f2dfbc19
Merge pull request #5019 from MichaelEischer/fix-windows-sd-race
backup: Fix spurious "A Required Privilege Is Not Held by the Client" error
2024-08-29 16:59:06 +02:00
Michael Eischer
0aadfe32bb
Merge pull request #5018 from MichaelEischer/rest-retry-http2-goaway
rest: improve handling of HTTP2 goaway
2024-08-29 16:58:04 +02:00
Michael Eischer
dab3e549af
Merge pull request #5017 from MichaelEischer/rewrite-data-loss
rewrite: Document handling of "cannot encode tree" errors
2024-08-29 16:57:13 +02:00
Michael Eischer
5c238ea359
Merge pull request #5016 from MichaelEischer/s3-doc-rework
Rework documentation for s3-compatible storages
2024-08-29 16:55:40 +02:00
Michael Eischer
2c85d2468a
Merge pull request #5015 from MichaelEischer/update-exit-code-docs
Update exit code docs
2024-08-29 16:53:14 +02:00
Michael Eischer
7bbf75237d
Merge pull request #5014 from MichaelEischer/configurable-slow-request-timeout
Make timeout for slow requests configurable
2024-08-29 16:52:24 +02:00
Michael Eischer
dd90e1926b use OrderedListOnceBackend where possible 2024-08-29 16:35:48 +02:00
Michael Eischer
d19f706d50 Add temporary files to repositories in integration tests
This is intended to catch problems with temporary files stored in the
backend, even if the responsible component forgets to test for those.
2024-08-29 16:33:18 +02:00
Michael Eischer
8eff4e0e5c cache: correctly ignore files whose filename is not an ID
This can, for example, be the case for temporary files created by the
backend implementation.
2024-08-29 16:32:15 +02:00
Michael Eischer
45d05eb691 add changelog for security descriptor race condition 2024-08-26 19:43:18 +02:00
Michael Eischer
9c70794886 fs: fix error handling for retried get/set of security descriptor
The retry code path did not filter `ERROR_NOT_SUPPORTED`. Just call the
original function a second time to correctly follow the low privilege
code path.
2024-08-26 19:36:43 +02:00
Michael Eischer
6fbfccc2d3 fs: fix race condition in get/set security descriptor
Calling `Load()` twice for an atomic variable can return different
values each time. This resulted in trying to read the security
descriptor with high privileges, but then not entering the code path to
switch to low privileges when another thread has already done so
concurrently.
2024-08-26 19:31:21 +02:00
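
An illustrative sketch of the race and its fix; the helper functions
are hypothetical stand-ins for the Windows API calls:

    package fsutil

    import "sync/atomic"

    var useLowPrivilege atomic.Bool

    func getSecurityDescriptor(path string) error {
        // Fix: load the flag once. Two separate Load() calls could
        // observe different values if another goroutine flips the flag
        // in between, skipping the low-privilege fallback entirely.
        if useLowPrivilege.Load() {
            return getWithLowPrivileges(path)
        }
        if err := getWithHighPrivileges(path); err != nil {
            useLowPrivilege.Store(true)
            return getWithLowPrivileges(path)
        }
        return nil
    }

    // Hypothetical stand-ins for the actual privilege-specific calls.
    func getWithHighPrivileges(path string) error { return nil }
    func getWithLowPrivileges(path string) error  { return nil }
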
Michael Eischer
1931beab8e
Merge pull request #5012 from MichaelEischer/fix-lock-retries
lock: introduce short delay between failed locking retries
2024-08-26 18:10:30 +02:00
Michael Eischer
2296fdf668 lock: introduce short delay between failed locking retries
Failed locking attempts were immediately retried up to three times
without any delay between the retries. If a lock file is not found while
checking for other locks, the reworked backend retries introduce no
delay of their own. This is a problem if a backend requires a few
seconds to reflect file deletions in its file listings. To work around
this problem, introduce a short, exponentially increasing delay between
the retries. The number of retries is now increased to 4, which results
in delays of 5, 10 and 20 seconds between the retries.
2024-08-26 16:31:42 +02:00
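
A sketch of the described schedule: four attempts with 5, 10 and 20
second pauses in between (names are illustrative):

    package lock

    import "time"

    func retryLocking(tryLock func() error) error {
        delay := 5 * time.Second
        var err error
        for attempt := 0; attempt < 4; attempt++ {
            if err = tryLock(); err == nil {
                return nil
            }
            if attempt < 3 {
                time.Sleep(delay) // 5s, 10s, 20s
                delay *= 2
            }
        }
        return err
    }
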
Michael Eischer
89d216ca76
Merge pull request #5011 from MichaelEischer/fix-canceled-retry
backend/retry: don't trip circuit breaker if context is canceled
2024-08-26 16:30:03 +02:00
Michael Eischer
5cffd40002
Merge pull request #5013 from MichaelEischer/group-cli-commands
Group CLI commands and show features/options
2024-08-26 16:23:39 +02:00
Michael Eischer
e24dd5a162 backend/retry: don't trip circuit breaker if context is canceled
When the context used for a load operation is canceled, then the result
is always an error independent of whether the file could be retrieved
from the backend. Do not false positively trip the circuit breaker in
this case.

The old behavior was problematic when trying to lock a repository. When
`Lock.checkForOtherLocks` listed multiple lock files in parallel and one
of them failed to load, all other loads were canceled. This cancellation
was remembered by the circuit breaker, such that locking retries would
fail.
2024-08-26 16:22:21 +02:00
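
Conceptually the fix amounts to a guard like this sketch (illustrative,
not the actual retry backend code):

    package retry

    import (
        "context"
        "errors"
    )

    type breaker struct{ failures int }

    // record counts only genuine backend failures: a canceled context
    // always produces an error, regardless of backend health, so it
    // must not contribute to tripping the circuit breaker.
    func (b *breaker) record(ctx context.Context, err error) {
        if err == nil || ctx.Err() != nil || errors.Is(err, context.Canceled) {
            return
        }
        b.failures++
    }
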
Michael Eischer
2063bf5de4
Merge pull request #5006 from MichaelEischer/restore-time-last
restic: restore timestamps after extended attributes
2024-08-26 16:21:02 +02:00
Michael Eischer
36c4475ad9 rest: improve handling of HTTP2 goaway
The HTTP client can only retry HTTP2 requests after receiving a GOAWAY
response if it can rewind the body. As we use a custom data type,
explicitly provide an implementation of `GetBody`.
2024-08-26 15:44:17 +02:00
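
A sketch of providing GetBody for a request body. For common types like
bytes.Reader the stdlib sets GetBody automatically; the commit's point
is that restic's custom reader type needs it set explicitly:

    package rest

    import (
        "bytes"
        "io"
        "net/http"
    )

    func newUploadRequest(url string, payload []byte) (*http.Request, error) {
        req, err := http.NewRequest(http.MethodPost, url, bytes.NewReader(payload))
        if err != nil {
            return nil, err
        }
        // After an HTTP2 GOAWAY, the client calls GetBody to obtain a
        // fresh body and transparently retries the request.
        req.GetBody = func() (io.ReadCloser, error) {
            return io.NopCloser(bytes.NewReader(payload)), nil
        }
        return req, nil
    }
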
Michael Eischer
dc5d3fc473 doc: full tree blob data structure is in the code 2024-08-26 14:41:09 +02:00
Michael Eischer
05077eaa20 doc: JSON encoder must be deterministic 2024-08-26 14:41:09 +02:00
Michael Eischer
908d097904 doc: mark S3 layout as deprecated 2024-08-26 14:41:09 +02:00
Michael Eischer
828c8bc1e8 doc: describe how to handle rewrite encoding error 2024-08-26 14:41:09 +02:00
Michael Eischer
b8f409723d make timeout for slow requests configurable 2024-08-26 14:14:43 +02:00
Michael Eischer
8a8f5f3986 doc: fix typos 2024-08-26 12:24:02 +02:00
Michael Eischer
7de53a51b8 doc: shrink wasabi / alibaba cloud example
Remove descriptions for both providers and shorten the example to the
minimum.
2024-08-26 12:21:13 +02:00
Michael Eischer
9649a9c62b doc: use regional urls for Amazon S3 and add generic s3 provider section
Split description for non-Amazon S3 providers into separate section. The
section now also includes the `s3.bucket-lookup` extended option. Switch
to using regional URLs for Amazon S3 to replace the need for setting the
region.
2024-08-26 12:17:43 +02:00
Michael Eischer
354c2c38cc doc/backup: move exit status codes section up 2024-08-25 23:53:12 +02:00
Michael Eischer
ff9ef08f65 doc/backup: link to exit code for scripting section 2024-08-25 23:52:33 +02:00
Michael Eischer
311b27ced8 restic: cleanup redundant code in test case 2024-08-25 23:18:55 +02:00
Michael Eischer
43b36ad2b0 restore: test timestamps for macOS resource forks are restored correctly 2024-08-25 23:18:55 +02:00
Michael Eischer
2e55209b34 restic: restore timestamps after extended attributes
Restoring the xattr containing resource forks on macOS apparently
modifies the file modification timestamps. Thus, restore the timestamps
after the xattrs.
2024-08-25 23:18:55 +02:00
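
The resulting restore order, sketched with hypothetical helpers:

    package restorer

    type Node struct{ /* metadata fields elided */ }

    // restoreMetadata writes extended attributes first and timestamps
    // last: on macOS, writing the resource-fork xattr touches the
    // file's modification time and would otherwise overwrite it.
    func restoreMetadata(path string, node *Node) error {
        if err := restoreXattrs(path, node); err != nil {
            return err
        }
        return restoreTimestamps(path, node)
    }

    // Hypothetical stand-ins for the actual restore helpers.
    func restoreXattrs(path string, node *Node) error     { return nil }
    func restoreTimestamps(path string, node *Node) error { return nil }
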
Michael Eischer
e7db5febcf update docs 2024-08-23 23:52:21 +02:00
Michael Eischer
7739aa685c Add missing DisableAutoGenTag flag for commands 2024-08-23 23:49:20 +02:00
Michael Eischer
5988d825b7 group commands and make features/options visible 2024-08-23 23:48:45 +02:00
Michael Eischer
a8efaee03c
Merge pull request #5010 from MichaelEischer/cleanup-cli-help
Improve description for  --from-insecure-no-password option
2024-08-23 23:41:08 +02:00
Michael Eischer
8672cef972
Merge pull request #5009 from restic/document-restic-host
Mention RESTIC_HOST environment variable in docs
2024-08-23 23:40:48 +02:00
Michael Eischer
551dfee707 Improve description for no password on secondary repo 2024-08-18 19:45:54 +02:00
Michael Eischer
1b8ca32e7d Mention RESTIC_HOST environment variable in docs 2024-08-18 19:41:58 +02:00
Michael Eischer
489af2a670
Merge pull request #5008 from mikix/doc-typo
docs: correct wrong exit_error message field name
2024-08-18 17:38:31 +00:00
Michael Terry
97df01b9b8 docs: correct wrong exit_error message field name 2024-08-17 15:00:39 -04:00
Michael Eischer
68f7abcff1
Merge pull request #5007 from deining/fix-warnings
GitHub test actions: fix warnings 'Restore cache failed'
2024-08-17 14:31:41 +00:00
Andreas Deininger
ceb45d9816 GitHub test actions: fix warnings 'Restore cache failed' 2024-08-17 12:39:41 +02:00
Michael Eischer
5cca6e66be
Merge pull request #4981 from konidev20/fix-gh-4934-cleanup-removed-snaphots-from-cache
cache: clear snapshot files from cache during load index
2024-08-16 19:04:59 +00:00
Srigovind Nayak
c9097994b9
changelog: update changelog 2024-08-17 00:24:19 +05:30
Michael Eischer
c636ad51a8
Merge pull request #4959 from mikix/fatal-wrap
main: return an exit code (12) for "bad password" errors
2024-08-16 18:52:36 +00:00
Srigovind Nayak
88174cd0a4
cache: remove redundant index file cleanup
addressing code review comments
2024-08-17 00:21:49 +05:30
Srigovind Nayak
b7d014b685
Revert "repository: removed redundant prepareCache method from Repository"
This reverts commit 720609f8ba.
2024-08-17 00:18:13 +05:30
Michael Terry
56f28c9bd5 main: return an exit code (12) for "bad password" errors 2024-08-15 16:55:45 -04:00
Michael Eischer
7462471c6b
Merge pull request #4952 from mikix/json-exit
Format exit errors as JSON if requested
2024-08-15 20:19:38 +00:00
Michael Eischer
74d3f92cc7
Merge pull request #4993 from MichaelEischer/fix-timeout-error
backend: return correct error on upload/request timeout
2024-08-15 22:07:37 +02:00
Michael Eischer
80f24584a5
Merge pull request #4998 from zmanda/ea_vss_fix
Fix extended attributes handling for VSS snapshots
2024-08-15 19:51:35 +00:00
Michael Eischer
8e00158c34
Merge pull request #5000 from deining/fix-typo
Fix typos
2024-08-15 19:42:14 +00:00
Michael Eischer
36b5580c1c
Merge pull request #4989 from plant99/progress-bar-for-restore-verify
restore: Add progress bar to 'restore --verify'
2024-08-15 19:34:05 +00:00
aneesh-n
19f487750e
Add test cases and handle volume GUID paths
Gracefully handle errors while checking for EA and add debug logs.
2024-08-11 19:25:58 -06:00
Shivashis Padhi
f1407afd1f
restore: Add progress bar to 'restore --verify' 2024-08-11 22:25:21 +02:00
Andreas Deininger
4401265e36 Fix typos 2024-08-11 21:38:15 +02:00
Srigovind Nayak
5fd984ba6f
cache: add test for the automated cache clear to cache backend 2024-08-11 23:41:07 +05:30
Srigovind Nayak
506e07127f
changelog: add unreleased changelog 2024-08-11 23:41:07 +05:30
Srigovind Nayak
720609f8ba
repository: removed redundant prepareCache method from Repository
* remove the prepareCache method from the Repository
* changed the signature of the SetIndex function to no longer return an error
2024-08-11 23:41:07 +05:30
Srigovind Nayak
a23e7bfb82
cache: check for context cancellation before clearing cache 2024-08-11 23:41:07 +05:30
Srigovind Nayak
f66624f5bf
cache: backend add List method and a cache clear functionality
* removes files which are no longer in the repository from the cache, including index files, snapshot files and pack files.

cache: fix ids set initialisation with NewIDSet()
2024-08-11 23:40:52 +05:30
Michael Terry
d3f9c05312 docs: update scripting documentation 2024-08-11 12:52:54 -04:00
Michael Terry
6283915f86 main: format exit errors as JSON when using --json 2024-08-11 12:52:50 -04:00
Michael Terry
2d250a9135 version: add message_type in --json mode 2024-08-11 12:51:15 -04:00
Michael Eischer
33c670dd7a
Merge pull request #4996 from restic/dependabot/go_modules/google.golang.org/api-0.191.0
build(deps): bump google.golang.org/api from 0.189.0 to 0.191.0
2024-08-11 09:25:19 +00:00
aneesh-n
849c441455
Gracefully handle invalid prepared volume names 2024-08-11 01:48:25 -06:00
aneesh-n
b5b5c1fe8e
Add changelog 2024-08-11 01:32:55 -06:00
aneesh-n
1d392a36f9
Fix extended attributes handling for VSS snapshots 2024-08-11 01:23:47 -06:00
dependabot[bot]
049186371f
build(deps): bump google.golang.org/api from 0.189.0 to 0.191.0
Bumps [google.golang.org/api](https://github.com/googleapis/google-api-go-client) from 0.189.0 to 0.191.0.
- [Release notes](https://github.com/googleapis/google-api-go-client/releases)
- [Changelog](https://github.com/googleapis/google-api-go-client/blob/main/CHANGES.md)
- [Commits](https://github.com/googleapis/google-api-go-client/compare/v0.189.0...v0.191.0)

---
updated-dependencies:
- dependency-name: google.golang.org/api
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-08-10 18:17:20 +00:00
Michael Eischer
910f64ce47
Merge pull request #4997 from restic/dependabot/go_modules/github.com/Azure/azure-sdk-for-go/sdk/azcore-1.14.0
build(deps): bump github.com/Azure/azure-sdk-for-go/sdk/azcore from 1.13.0 to 1.14.0
2024-08-10 18:11:28 +00:00
Michael Eischer
b3b71e78cd
Merge pull request #4995 from restic/dependabot/go_modules/golang.org/x/crypto-0.26.0
build(deps): bump golang.org/x/crypto from 0.25.0 to 0.26.0
2024-08-10 18:08:20 +00:00
dependabot[bot]
f2e2e5f5ab
build(deps): bump github.com/Azure/azure-sdk-for-go/sdk/azcore
Bumps [github.com/Azure/azure-sdk-for-go/sdk/azcore](https://github.com/Azure/azure-sdk-for-go) from 1.13.0 to 1.14.0.
- [Release notes](https://github.com/Azure/azure-sdk-for-go/releases)
- [Changelog](https://github.com/Azure/azure-sdk-for-go/blob/main/documentation/release.md)
- [Commits](https://github.com/Azure/azure-sdk-for-go/compare/sdk/azcore/v1.13.0...sdk/azcore/v1.14.0)

---
updated-dependencies:
- dependency-name: github.com/Azure/azure-sdk-for-go/sdk/azcore
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-08-10 17:58:58 +00:00
dependabot[bot]
ecd03b4fc6
build(deps): bump golang.org/x/crypto from 0.25.0 to 0.26.0
Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.25.0 to 0.26.0.
- [Commits](https://github.com/golang/crypto/compare/v0.25.0...v0.26.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-08-10 17:58:50 +00:00
Michael Eischer
3f5e2160de
Merge pull request #4938 from MichaelEischer/bump-go-version
Bump go version to 1.21
2024-08-10 19:57:59 +02:00
Michael Eischer
400ae55940 replace deprecated usages of math/rand 2024-08-10 19:34:49 +02:00
Michael Eischer
84c79f1456 bump required go version to 1.21 2024-08-10 19:16:10 +02:00
Michael Eischer
0b19f6cf5a Switch back to sha256 from the std library
The std library now also supports the sha assembly instructions on
ARM64. Thus, sha256-simd no longer provides a performance benefit.
2024-08-10 19:16:10 +02:00
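
For illustration, call sites simply use the standard library package; crypto/sha256 selects the hardware SHA routines automatically on amd64 and arm64:

```
package main

import (
	"crypto/sha256"
	"fmt"
)

func main() {
	// crypto/sha256 dispatches to the SHA assembly routines at runtime,
	// so a third-party package such as sha256-simd is no longer needed.
	sum := sha256.Sum256([]byte("restic"))
	fmt.Printf("%x\n", sum)
}
```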
Michael Eischer
fbecc9db66 upgrade all direct dependencies 2024-08-10 19:16:10 +02:00
Michael Eischer
ad48751adb bump required go version to 1.21 2024-08-10 19:16:10 +02:00
Michael Eischer
853a686994 backend: return correct error on upload/request timeout 2024-08-10 18:06:24 +02:00
233 changed files with 4670 additions and 5776 deletions

View file

@ -28,13 +28,15 @@ Checklist
You do not need to check all the boxes below all at once. Feel free to take
your time and add more commits. If you're done and ready for review, please
check the last box. Enable a checkbox by replacing [ ] with [x].
Please always follow these steps:
- Read the [contribution guidelines](https://github.com/restic/restic/blob/master/CONTRIBUTING.md#providing-patches).
- Enable [maintainer edits](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/allowing-changes-to-a-pull-request-branch-created-from-a-fork).
- Run `gofmt` on the code in all commits.
- Format all commit messages in the same style as [the other commits in the repository](https://github.com/restic/restic/blob/master/CONTRIBUTING.md#git-commits).
-->
- [ ] I have read the [contribution guidelines](https://github.com/restic/restic/blob/master/CONTRIBUTING.md#providing-patches).
- [ ] I have [enabled maintainer edits](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/allowing-changes-to-a-pull-request-branch-created-from-a-fork).
- [ ] I have added tests for all code changes.
- [ ] I have added documentation for relevant changes (in the manual).
- [ ] There's a new file in `changelog/unreleased/` that describes the changes for our users (see [template](https://github.com/restic/restic/blob/master/changelog/TEMPLATE)).
- [ ] I have run `gofmt` on the code in all commits.
- [ ] All commit messages are formatted in the same style as [the other commits in the repo](https://github.com/restic/restic/blob/master/CONTRIBUTING.md#git-commits).
- [ ] I'm done! This pull request is ready for review.

View file

@ -13,7 +13,7 @@ permissions:
contents: read
env:
latest_go: "1.22.x"
latest_go: "1.23.x"
GO111MODULE: on
jobs:
@ -23,42 +23,37 @@ jobs:
# list of jobs to run:
include:
- job_name: Windows
go: 1.22.x
go: 1.23.x
os: windows-latest
- job_name: macOS
go: 1.22.x
go: 1.23.x
os: macOS-latest
test_fuse: false
- job_name: Linux
go: 1.22.x
go: 1.23.x
os: ubuntu-latest
test_cloud_backends: true
test_fuse: true
check_changelog: true
- job_name: Linux (race)
go: 1.22.x
go: 1.23.x
os: ubuntu-latest
test_fuse: true
test_opts: "-race"
- job_name: Linux
go: 1.22.x
os: ubuntu-latest
test_fuse: true
- job_name: Linux
go: 1.21.x
os: ubuntu-latest
test_fuse: true
- job_name: Linux
go: 1.20.x
os: ubuntu-latest
test_fuse: true
- job_name: Linux
go: 1.19.x
os: ubuntu-latest
test_fuse: true
name: ${{ matrix.job_name }} Go ${{ matrix.go }}
runs-on: ${{ matrix.os }}
@ -264,7 +259,7 @@ jobs:
uses: golangci/golangci-lint-action@v6
with:
# Required: the version of golangci-lint is required and must be specified without patch version: we always use the latest patch version.
version: v1.57.1
version: v1.61.0
args: --verbose --timeout 5m
# only run golangci-lint for pull requests, otherwise ALL hints get

View file

@ -1 +1 @@
0.17.3
0.17.3-dev

View file

@ -58,7 +58,7 @@ var config = Config{
Main: "./cmd/restic", // package name for the main package
DefaultBuildTags: []string{"selfupdate"}, // specify build tags which are always used
Tests: []string{"./..."}, // tests to run
MinVersion: GoVersion{Major: 1, Minor: 18, Patch: 0}, // minimum Go version supported
MinVersion: GoVersion{Major: 1, Minor: 21, Patch: 0}, // minimum Go version supported
}
// Config configures the build.

View file

@ -5,6 +5,8 @@ Enhancement: Allow custom bar in the foo command
# Describe the problem in the past tense, the new behavior in the present
# tense. Mention the affected commands, backends, operating systems, etc.
# If the problem description just says that a feature was missing, then
# only explain the new behavior.
# Focus on user-facing behavior, not the implementation.
# Use "Restic now ..." instead of "We have changed ...".

View file

@ -0,0 +1,9 @@
Bugfix: Correctly restore timestamp on long filepaths on old Windows versions
The `restore` command did not restore timestamps on file paths longer than 256
characters on Windows versions before Windows 10 1607.
This issue is now resolved.
https://github.com/restic/restic/issues/1843
https://github.com/restic/restic/pull/5061
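
As a rough sketch of the underlying technique (hypothetical helper, not the exact restic code): Win32 file APIs only accept paths longer than the classic limit in extended-length form, so the path is rewritten before metadata calls such as setting timestamps.

```
package main

import (
	"fmt"
	"strings"
)

// extendedLengthPath is a hypothetical helper that converts an absolute
// Windows path into extended-length form so that Win32 calls accept
// paths longer than the classic limit.
func extendedLengthPath(p string) string {
	if strings.HasPrefix(p, `\\?\`) {
		return p // already extended-length
	}
	if strings.HasPrefix(p, `\\`) {
		// UNC path: \\server\share -> \\?\UNC\server\share
		return `\\?\UNC` + p[1:]
	}
	return `\\?\` + p
}

func main() {
	fmt.Println(extendedLengthPath(`C:\a\very\long\path`))
}
```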

View file

@ -0,0 +1,16 @@
Bugfix: Ignore disappeared backup source files
If files were removed during a backup, between restic listing the directory
contents and backing up the file in question, the following error could occur:
```
error: lstat /some/file/name: no such file or directory
```
The backup command now ignores this particular error and silently skips the
removed file.
https://github.com/restic/restic/issues/2165
https://github.com/restic/restic/issues/3098
https://github.com/restic/restic/pull/5143
https://github.com/restic/restic/pull/5145
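
The skip logic boils down to treating a not-exist error from Lstat as "file vanished, continue" rather than as a failure; a minimal sketch with a hypothetical helper name:

```
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

// lstatOrSkip reports ok=false without an error when the file vanished
// between the directory listing and the Lstat call.
func lstatOrSkip(name string) (fi os.FileInfo, ok bool, err error) {
	fi, err = os.Lstat(name)
	if errors.Is(err, fs.ErrNotExist) {
		return nil, false, nil // file disappeared, silently skip it
	}
	if err != nil {
		return nil, false, err
	}
	return fi, true, nil
}

func main() {
	if fi, ok, err := lstatOrSkip("/some/file/name"); err == nil && ok {
		fmt.Println(fi.Name())
	}
}
```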

View file

@ -0,0 +1,6 @@
Enhancement: Allow generating shell completions to stdout
Restic `generate` now supports passing `-` as the file name to the `--[shell]-completion` options.
https://github.com/restic/restic/issues/2511
https://github.com/restic/restic/pull/5053

View file

@ -0,0 +1,21 @@
Enhancement: Add config option to set Microsoft Blob Storage Access Tier
The `azure.access-tier` option can be passed to Restic (using `-o`) to
specify the access tier for Microsoft Blob Storage objects created by Restic.
The access tier is passed as-is to Microsoft Blob Storage, so it needs to be
understood by the API. The allowed values are `Hot`, `Cool`, or `Cold`.
If unspecified, the default is inferred from the default configured on the
storage account.
You can mix access tiers in the same container, and the setting isn't
stored in the restic repository, so be sure to specify it with each
command that writes to Microsoft Blob Storage.
There is no official `Archive` storage support in restic; use this option at
your own risk. To restore any data, it is still necessary to manually warm up
your own risk. To restore any data, it is still necessary to manually warm up
the required data in the `Archive` tier.
https://github.com/restic/restic/issues/4521
https://github.com/restic/restic/pull/5046
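
A sketch of how the tier can be forwarded to the SDK, under the assumption that the azblob field and constant names (UploadStreamOptions.AccessTier, blob.AccessTierCool) match the vendored SDK version; this is not restic's exact backend code.

```
package azureexample

import (
	"context"
	"strings"

	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blob"
	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blockblob"
)

// upload forwards the configured tier (here Cool, as selected by
// -o azure.access-tier=Cool) unchanged to the Blob Storage API.
func upload(ctx context.Context, client *blockblob.Client, data string) error {
	tier := blob.AccessTierCool
	_, err := client.UploadStream(ctx, strings.NewReader(data), &blockblob.UploadStreamOptions{
		AccessTier: &tier,
	})
	return err
}
```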

View file

@ -0,0 +1,6 @@
Enhancement: Format exit errors as JSON with --json
Restic now prints any exit error messages as JSON when requested.
https://github.com/restic/restic/issues/4948
https://github.com/restic/restic/pull/4952
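
The emitted object is a small JSON document; the field set sketched below ("message_type", "code", "message") is illustrative and should be checked against the scripting documentation.

```
package main

import (
	"encoding/json"
	"os"
)

// exitError mirrors the rough shape of the JSON printed on failure.
type exitError struct {
	MessageType string `json:"message_type"` // "exit_error"
	Code        int    `json:"code"`
	Message     string `json:"message"`
}

func main() {
	_ = json.NewEncoder(os.Stderr).Encode(exitError{
		MessageType: "exit_error",
		Code:        1,
		Message:     "unable to open repository",
	})
}
```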

View file

@ -0,0 +1,7 @@
Enhancement: Retry loading repository config
Restic now retries loading the repository config file when opening a repository.
In addition, the `init` command now also retries backend operations.
https://github.com/restic/restic/issues/5081
https://github.com/restic/restic/pull/5095
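
Conceptually this is a bounded retry around the load operation; a generic sketch (restic's real implementation builds on the github.com/cenkalti/backoff dependency rather than this fixed delay):

```
package main

import (
	"context"
	"fmt"
	"time"
)

// loadWithRetry retries load a few times with a growing delay before
// giving up, honouring context cancellation between attempts.
func loadWithRetry(ctx context.Context, load func() error) error {
	var err error
	for attempt := 0; attempt < 5; attempt++ {
		if err = load(); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(time.Duration(attempt+1) * time.Second):
		}
	}
	return fmt.Errorf("giving up after retries: %w", err)
}

func main() {
	err := loadWithRetry(context.Background(), func() error {
		return fmt.Errorf("transient backend error")
	})
	fmt.Println(err)
}
```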

View file

@ -0,0 +1,8 @@
Enhancement: Indicate the number of deleted files/directories during restore
Restic now indicates the number of deleted files/directories during restore.
The `--json` output now includes a `files_deleted` field that shows the number
of files and directories that were deleted during restore.
https://github.com/restic/restic/issues/5092
https://github.com/restic/restic/pull/5100
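
An illustrative subset of the progress message with the new counter; all other fields are omitted, and everything besides `files_deleted` is abbreviated for the sketch.

```
package main

import (
	"encoding/json"
	"fmt"
)

// restoreStatus shows only the new counter; the real message carries
// further progress fields.
type restoreStatus struct {
	MessageType  string `json:"message_type"` // "status"
	FilesDeleted uint64 `json:"files_deleted"`
}

func main() {
	out, _ := json.Marshal(restoreStatus{MessageType: "status", FilesDeleted: 3})
	fmt.Println(string(out))
}
```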

View file

@ -0,0 +1,6 @@
Enhancement: Add DragonflyBSD support
Restic can now be compiled on DragonflyBSD.
https://github.com/restic/restic/issues/5131
https://github.com/restic/restic/pull/5138

View file

@ -0,0 +1,7 @@
Change: Update dependencies and require Go 1.21 or newer
We have updated all dependencies. Since some libraries require newer Go standard
library features, support for Go 1.19 and 1.20 has been dropped, which means that
restic now requires at least Go 1.21 to build.
https://github.com/restic/restic/pull/4938

View file

@ -0,0 +1,7 @@
Enhancement: Compress ZIP archives created by `dump` command
Restic did not compress the archives created by the `dump` command. It now
saves some disk space when exporting archives by using the DEFLATE algorithm
for "zip" archives.
https://github.com/restic/restic/pull/5054
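
With Go's archive/zip this amounts to writing entries with Method set to zip.Deflate instead of zip.Store; a minimal sketch, not the dump command's actual writer:

```
package main

import (
	"archive/zip"
	"log"
	"os"
)

func main() {
	f, err := os.Create("example.zip")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	zw := zip.NewWriter(f)
	// zip.Deflate compresses the entry; zip.Store would write it verbatim.
	w, err := zw.CreateHeader(&zip.FileHeader{Name: "file.txt", Method: zip.Deflate})
	if err != nil {
		log.Fatal(err)
	}
	if _, err := w.Write([]byte("hello restic")); err != nil {
		log.Fatal(err)
	}
	if err := zw.Close(); err != nil {
		log.Fatal(err)
	}
}
```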

View file

@ -0,0 +1,6 @@
Enhancement: Include backup start and end in JSON output
The JSON output of the backup command now also includes the `backup_start`
and `backup_end` timestamps.
https://github.com/restic/restic/pull/5119
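
An illustrative subset of the summary message with the new fields; the real summary contains many more fields.

```
package main

import (
	"encoding/json"
	"os"
	"time"
)

// backupSummary sketches only the new timestamps.
type backupSummary struct {
	MessageType string    `json:"message_type"` // "summary"
	BackupStart time.Time `json:"backup_start"`
	BackupEnd   time.Time `json:"backup_end"`
}

func main() {
	start := time.Now().Add(-time.Minute)
	_ = json.NewEncoder(os.Stdout).Encode(backupSummary{
		MessageType: "summary",
		BackupStart: start,
		BackupEnd:   time.Now(),
	})
}
```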

View file

@ -0,0 +1,7 @@
Enhancement: Provide clear error message if AZURE_ACCOUNT_NAME is not set
If AZURE_ACCOUNT_NAME is not set, any command related to an Azure repository
would result in a misleading networking error. Restic will now detect this and
provide a clear warning that the variable is not defined.
https://github.com/restic/restic/pull/5141
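
The check itself is a straightforward fail-fast on the environment variable; a sketch with a hypothetical function name, not the backend's exact code:

```
package main

import (
	"errors"
	"fmt"
	"os"
)

// azureAccountName fails fast with a clear message instead of letting an
// empty account name surface later as a confusing networking error.
func azureAccountName() (string, error) {
	name := os.Getenv("AZURE_ACCOUNT_NAME")
	if name == "" {
		return "", errors.New("unable to open Azure backend: AZURE_ACCOUNT_NAME is not set")
	}
	return name, nil
}

func main() {
	if _, err := azureAccountName(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```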

View file

@ -20,10 +20,12 @@ import (
"github.com/restic/restic/internal/archiver"
"github.com/restic/restic/internal/debug"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/filter"
"github.com/restic/restic/internal/fs"
"github.com/restic/restic/internal/repository"
"github.com/restic/restic/internal/restic"
"github.com/restic/restic/internal/textfile"
"github.com/restic/restic/internal/ui"
"github.com/restic/restic/internal/ui/backup"
"github.com/restic/restic/internal/ui/termstatus"
)
@ -66,7 +68,7 @@ Exit status is 12 if the password is incorrect.
// BackupOptions bundles all options for the backup command.
type BackupOptions struct {
excludePatternOptions
filter.ExcludePatternOptions
Parent string
GroupBy restic.SnapshotGroupByOptions
@ -109,7 +111,7 @@ func init() {
f.VarP(&backupOptions.GroupBy, "group-by", "g", "`group` snapshots by host, paths and/or tags, separated by comma (disable grouping with '')")
f.BoolVarP(&backupOptions.Force, "force", "f", false, `force re-reading the source files/directories (overrides the "parent" flag)`)
initExcludePatternOptions(f, &backupOptions.excludePatternOptions)
backupOptions.ExcludePatternOptions.Add(f)
f.BoolVarP(&backupOptions.ExcludeOtherFS, "one-file-system", "x", false, "exclude other file systems, don't cross filesystem boundaries and subvolumes")
f.StringArrayVar(&backupOptions.ExcludeIfPresent, "exclude-if-present", nil, "takes `filename[:header]`, exclude contents of directories containing filename (except filename itself) if header of that file is as provided (can be specified multiple times)")
@ -298,7 +300,7 @@ func (opts BackupOptions) Check(gopts GlobalOptions, args []string) error {
// collectRejectByNameFuncs returns a list of all functions which may reject data
// from being saved in a snapshot based on path only
func collectRejectByNameFuncs(opts BackupOptions, repo *repository.Repository) (fs []RejectByNameFunc, err error) {
func collectRejectByNameFuncs(opts BackupOptions, repo *repository.Repository) (fs []archiver.RejectByNameFunc, err error) {
// exclude restic cache
if repo.Cache != nil {
f, err := rejectResticCache(repo)
@ -309,23 +311,12 @@ func collectRejectByNameFuncs(opts BackupOptions, repo *repository.Repository) (
fs = append(fs, f)
}
fsPatterns, err := opts.excludePatternOptions.CollectPatterns()
fsPatterns, err := opts.ExcludePatternOptions.CollectPatterns(Warnf)
if err != nil {
return nil, err
}
fs = append(fs, fsPatterns...)
if opts.ExcludeCaches {
opts.ExcludeIfPresent = append(opts.ExcludeIfPresent, "CACHEDIR.TAG:Signature: 8a477f597d28d172789f06886806bc55")
}
for _, spec := range opts.ExcludeIfPresent {
f, err := rejectIfPresent(spec)
if err != nil {
return nil, err
}
fs = append(fs, f)
for _, pat := range fsPatterns {
fs = append(fs, archiver.RejectByNameFunc(pat))
}
return fs, nil
@ -333,25 +324,43 @@ func collectRejectByNameFuncs(opts BackupOptions, repo *repository.Repository) (
// collectRejectFuncs returns a list of all functions which may reject data
// from being saved in a snapshot based on path and file info
func collectRejectFuncs(opts BackupOptions, targets []string) (fs []RejectFunc, err error) {
func collectRejectFuncs(opts BackupOptions, targets []string, fs fs.FS) (funcs []archiver.RejectFunc, err error) {
// allowed devices
if opts.ExcludeOtherFS && !opts.Stdin {
f, err := rejectByDevice(targets)
if opts.ExcludeOtherFS && !opts.Stdin && !opts.StdinCommand {
f, err := archiver.RejectByDevice(targets, fs)
if err != nil {
return nil, err
}
fs = append(fs, f)
funcs = append(funcs, f)
}
if len(opts.ExcludeLargerThan) != 0 && !opts.Stdin {
f, err := rejectBySize(opts.ExcludeLargerThan)
if len(opts.ExcludeLargerThan) != 0 && !opts.Stdin && !opts.StdinCommand {
maxSize, err := ui.ParseBytes(opts.ExcludeLargerThan)
if err != nil {
return nil, err
}
fs = append(fs, f)
f, err := archiver.RejectBySize(maxSize)
if err != nil {
return nil, err
}
funcs = append(funcs, f)
}
return fs, nil
if opts.ExcludeCaches {
opts.ExcludeIfPresent = append(opts.ExcludeIfPresent, "CACHEDIR.TAG:Signature: 8a477f597d28d172789f06886806bc55")
}
for _, spec := range opts.ExcludeIfPresent {
f, err := archiver.RejectIfPresent(spec, Warnf)
if err != nil {
return nil, err
}
funcs = append(funcs, f)
}
return funcs, nil
}
// collectTargets returns a list of target files/dirs from several sources.
@ -506,12 +515,6 @@ func runBackup(ctx context.Context, opts BackupOptions, gopts GlobalOptions, ter
return err
}
// rejectFuncs collect functions that can reject items from the backup based on path and file info
rejectFuncs, err := collectRejectFuncs(opts, targets)
if err != nil {
return err
}
var parentSnapshot *restic.Snapshot
if !opts.Stdin {
parentSnapshot, err = findParentSnapshot(ctx, repo, opts, targets, timeStamp)
@ -533,30 +536,11 @@ func runBackup(ctx context.Context, opts BackupOptions, gopts GlobalOptions, ter
}
bar := newIndexTerminalProgress(gopts.Quiet, gopts.JSON, term)
err = repo.LoadIndex(ctx, bar)
if err != nil {
return err
}
selectByNameFilter := func(item string) bool {
for _, reject := range rejectByNameFuncs {
if reject(item) {
return false
}
}
return true
}
selectFilter := func(item string, fi os.FileInfo) bool {
for _, reject := range rejectFuncs {
if reject(item, fi) {
return false
}
}
return true
}
var targetFS fs.FS = fs.Local{}
if runtime.GOOS == "windows" && opts.UseFsSnapshot {
if err = fs.HasSufficientPrivilegesForVSS(); err != nil {
@ -603,6 +587,15 @@ func runBackup(ctx context.Context, opts BackupOptions, gopts GlobalOptions, ter
targetFS = backupFSTestHook(targetFS)
}
// rejectFuncs collect functions that can reject items from the backup based on path and file info
rejectFuncs, err := collectRejectFuncs(opts, targets, targetFS)
if err != nil {
return err
}
selectByNameFilter := archiver.CombineRejectByNames(rejectByNameFuncs)
selectFilter := archiver.CombineRejects(rejectFuncs)
wg, wgCtx := errgroup.WithContext(ctx)
cancelCtx, cancel := context.WithCancel(wgCtx)
defer cancel()

View file

@ -31,7 +31,7 @@ func testRunBackupAssumeFailure(t testing.TB, dir string, target []string, opts
func testRunBackup(t testing.TB, dir string, target []string, opts BackupOptions, gopts GlobalOptions) {
err := testRunBackupAssumeFailure(t, dir, target, opts, gopts)
rtest.Assert(t, err == nil, "Error while backing up")
rtest.Assert(t, err == nil, "Error while backing up: %v", err)
}
func TestBackup(t *testing.T) {
@ -132,7 +132,7 @@ type vssDeleteOriginalFS struct {
hasRemoved bool
}
func (f *vssDeleteOriginalFS) Lstat(name string) (os.FileInfo, error) {
func (f *vssDeleteOriginalFS) Lstat(name string) (*fs.ExtendedFileInfo, error) {
if !f.hasRemoved {
// call Lstat to trigger snapshot creation
_, _ = f.FS.Lstat(name)
@ -365,12 +365,7 @@ func TestBackupExclude(t *testing.T) {
for _, filename := range backupExcludeFilenames {
fp := filepath.Join(datadir, filename)
rtest.OK(t, os.MkdirAll(filepath.Dir(fp), 0755))
f, err := os.Create(fp)
rtest.OK(t, err)
fmt.Fprint(f, filename)
rtest.OK(t, f.Close())
rtest.OK(t, os.WriteFile(fp, []byte(filename), 0o666))
}
snapshots := make(map[string]struct{})

View file

@ -39,21 +39,24 @@ func TestCollectTargets(t *testing.T) {
f1, err := os.Create(filepath.Join(dir, "fromfile"))
rtest.OK(t, err)
// Empty lines should be ignored. A line starting with '#' is a comment.
fmt.Fprintf(f1, "\n%s*\n # here's a comment\n", f1.Name())
_, err = fmt.Fprintf(f1, "\n%s*\n # here's a comment\n", f1.Name())
rtest.OK(t, err)
rtest.OK(t, f1.Close())
f2, err := os.Create(filepath.Join(dir, "fromfile-verbatim"))
rtest.OK(t, err)
for _, filename := range []string{fooSpace, barStar} {
// Empty lines should be ignored. CR+LF is allowed.
fmt.Fprintf(f2, "%s\r\n\n", filepath.Join(dir, filename))
_, err = fmt.Fprintf(f2, "%s\r\n\n", filepath.Join(dir, filename))
rtest.OK(t, err)
}
rtest.OK(t, f2.Close())
f3, err := os.Create(filepath.Join(dir, "fromfile-raw"))
rtest.OK(t, err)
for _, filename := range []string{"baz", "quux"} {
fmt.Fprintf(f3, "%s\x00", filepath.Join(dir, filename))
_, err = fmt.Fprintf(f3, "%s\x00", filepath.Join(dir, filename))
rtest.OK(t, err)
}
rtest.OK(t, err)
rtest.OK(t, f3.Close())

View file

@ -10,7 +10,6 @@ import (
"github.com/restic/restic/internal/backend/cache"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/fs"
"github.com/restic/restic/internal/ui"
"github.com/restic/restic/internal/ui/table"
"github.com/spf13/cobra"
@ -89,7 +88,7 @@ func runCache(opts CacheOptions, gopts GlobalOptions, args []string) error {
for _, item := range oldDirs {
dir := filepath.Join(cachedir, item.Name())
err = fs.RemoveAll(dir)
err = os.RemoveAll(dir)
if err != nil {
Warnf("unable to remove %v: %v\n", dir, err)
}

View file

@ -14,7 +14,6 @@ import (
"github.com/restic/restic/internal/backend/cache"
"github.com/restic/restic/internal/checker"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/fs"
"github.com/restic/restic/internal/repository"
"github.com/restic/restic/internal/restic"
"github.com/restic/restic/internal/ui"
@ -202,7 +201,7 @@ func prepareCheckCache(opts CheckOptions, gopts *GlobalOptions, printer progress
printer.P("using temporary cache in %v\n", tempdir)
cleanup = func() {
err := fs.RemoveAll(tempdir)
err := os.RemoveAll(tempdir)
if err != nil {
printer.E("error removing temporary cache directory: %v\n", err)
}
@ -245,17 +244,12 @@ func runCheck(ctx context.Context, opts CheckOptions, gopts GlobalOptions, args
errorsFound := false
suggestIndexRebuild := false
suggestLegacyIndexRebuild := false
mixedFound := false
for _, hint := range hints {
switch hint.(type) {
case *checker.ErrDuplicatePacks:
term.Print(hint.Error())
suggestIndexRebuild = true
case *checker.ErrOldIndexFormat:
printer.E("error: %v\n", hint)
suggestLegacyIndexRebuild = true
errorsFound = true
case *checker.ErrMixedPack:
term.Print(hint.Error())
mixedFound = true
@ -268,9 +262,6 @@ func runCheck(ctx context.Context, opts CheckOptions, gopts GlobalOptions, args
if suggestIndexRebuild {
term.Print("Duplicate packs are non-critical, you can run `restic repair index' to correct this.\n")
}
if suggestLegacyIndexRebuild {
printer.E("error: Found indexes using the legacy format, you must run `restic repair index' to correct this.\n")
}
if mixedFound {
term.Print("Mixed packs with tree and data blobs are non-critical, you can run `restic prune` to correct this.\n")
}
@ -304,9 +295,6 @@ func runCheck(ctx context.Context, opts CheckOptions, gopts GlobalOptions, args
errorsFound = true
printer.E("%v\n", err)
}
} else if err == checker.ErrLegacyLayout {
errorsFound = true
printer.E("error: repository still uses the S3 legacy layout\nYou must run `restic migrate s3legacy` to correct this.\n")
} else {
errorsFound = true
printer.E("%v\n", err)

View file

@ -143,7 +143,7 @@ func printPacks(ctx context.Context, repo *repository.Repository, wr io.Writer)
}
func dumpIndexes(ctx context.Context, repo restic.ListerLoaderUnpacked, wr io.Writer) error {
return index.ForAllIndexes(ctx, repo, repo, func(id restic.ID, idx *index.Index, oldFormat bool, err error) error {
return index.ForAllIndexes(ctx, repo, repo, func(id restic.ID, idx *index.Index, err error) error {
Printf("index_id: %v\n", id)
if err != nil {
return err

View file

@ -108,9 +108,9 @@ func (s *DiffStat) Add(node *restic.Node) {
}
switch node.Type {
case "file":
case restic.NodeTypeFile:
s.Files++
case "dir":
case restic.NodeTypeDir:
s.Dirs++
default:
s.Others++
@ -124,7 +124,7 @@ func addBlobs(bs restic.BlobSet, node *restic.Node) {
}
switch node.Type {
case "file":
case restic.NodeTypeFile:
for _, blob := range node.Content {
h := restic.BlobHandle{
ID: blob,
@ -132,7 +132,7 @@ func addBlobs(bs restic.BlobSet, node *restic.Node) {
}
bs.Insert(h)
}
case "dir":
case restic.NodeTypeDir:
h := restic.BlobHandle{
ID: *node.Subtree,
Type: restic.TreeBlob,
@ -184,14 +184,14 @@ func (c *Comparer) printDir(ctx context.Context, mode string, stats *DiffStat, b
}
name := path.Join(prefix, node.Name)
if node.Type == "dir" {
if node.Type == restic.NodeTypeDir {
name += "/"
}
c.printChange(NewChange(name, mode))
stats.Add(node)
addBlobs(blobs, node)
if node.Type == "dir" {
if node.Type == restic.NodeTypeDir {
err := c.printDir(ctx, mode, stats, blobs, name, *node.Subtree)
if err != nil && err != context.Canceled {
Warnf("error: %v\n", err)
@ -216,7 +216,7 @@ func (c *Comparer) collectDir(ctx context.Context, blobs restic.BlobSet, id rest
addBlobs(blobs, node)
if node.Type == "dir" {
if node.Type == restic.NodeTypeDir {
err := c.collectDir(ctx, blobs, *node.Subtree)
if err != nil && err != context.Canceled {
Warnf("error: %v\n", err)
@ -284,12 +284,12 @@ func (c *Comparer) diffTree(ctx context.Context, stats *DiffStatsContainer, pref
mod += "T"
}
if node2.Type == "dir" {
if node2.Type == restic.NodeTypeDir {
name += "/"
}
if node1.Type == "file" &&
node2.Type == "file" &&
if node1.Type == restic.NodeTypeFile &&
node2.Type == restic.NodeTypeFile &&
!reflect.DeepEqual(node1.Content, node2.Content) {
mod += "M"
stats.ChangedFiles++
@ -311,7 +311,7 @@ func (c *Comparer) diffTree(ctx context.Context, stats *DiffStatsContainer, pref
c.printChange(NewChange(name, mod))
}
if node1.Type == "dir" && node2.Type == "dir" {
if node1.Type == restic.NodeTypeDir && node2.Type == restic.NodeTypeDir {
var err error
if (*node1.Subtree).Equal(*node2.Subtree) {
err = c.collectDir(ctx, stats.BlobsCommon, *node1.Subtree)
@ -324,13 +324,13 @@ func (c *Comparer) diffTree(ctx context.Context, stats *DiffStatsContainer, pref
}
case t1 && !t2:
prefix := path.Join(prefix, name)
if node1.Type == "dir" {
if node1.Type == restic.NodeTypeDir {
prefix += "/"
}
c.printChange(NewChange(prefix, "-"))
stats.Removed.Add(node1)
if node1.Type == "dir" {
if node1.Type == restic.NodeTypeDir {
err := c.printDir(ctx, "-", &stats.Removed, stats.BlobsBefore, prefix, *node1.Subtree)
if err != nil && err != context.Canceled {
Warnf("error: %v\n", err)
@ -338,13 +338,13 @@ func (c *Comparer) diffTree(ctx context.Context, stats *DiffStatsContainer, pref
}
case !t1 && t2:
prefix := path.Join(prefix, name)
if node2.Type == "dir" {
if node2.Type == restic.NodeTypeDir {
prefix += "/"
}
c.printChange(NewChange(prefix, "+"))
stats.Added.Add(node2)
if node2.Type == "dir" {
if node2.Type == restic.NodeTypeDir {
err := c.printDir(ctx, "+", &stats.Added, stats.BlobsAfter, prefix, *node2.Subtree)
if err != nil && err != context.Canceled {
Warnf("error: %v\n", err)

View file

@ -95,15 +95,15 @@ func printFromTree(ctx context.Context, tree *restic.Tree, repo restic.BlobLoade
// first item it finds and dump that according to the switch case below.
if node.Name == pathComponents[0] {
switch {
case l == 1 && dump.IsFile(node):
case l == 1 && node.Type == restic.NodeTypeFile:
return d.WriteNode(ctx, node)
case l > 1 && dump.IsDir(node):
case l > 1 && node.Type == restic.NodeTypeDir:
subtree, err := restic.LoadTree(ctx, repo, *node.Subtree)
if err != nil {
return errors.Wrapf(err, "cannot load subtree for %q", item)
}
return printFromTree(ctx, subtree, repo, item, pathComponents[1:], d, canWriteArchiveFunc)
case dump.IsDir(node):
case node.Type == restic.NodeTypeDir:
if err := canWriteArchiveFunc(); err != nil {
return err
}
@ -114,7 +114,7 @@ func printFromTree(ctx context.Context, tree *restic.Tree, repo restic.BlobLoade
return d.DumpTree(ctx, subtree, item)
case l > 1:
return fmt.Errorf("%q should be a dir, but is a %q", item, node.Type)
case !dump.IsFile(node):
case node.Type != restic.NodeTypeFile:
return fmt.Errorf("%q should be a file, but is a %q", item, node.Type)
}
}

View file

@ -298,7 +298,7 @@ func (f *Finder) findInSnapshot(ctx context.Context, sn *restic.Snapshot) error
}
var errIfNoMatch error
if node.Type == "dir" {
if node.Type == restic.NodeTypeDir {
var childMayMatch bool
for _, pat := range f.pat.pattern {
mayMatch, err := filter.ChildMatch(pat, normalizedNodepath)
@ -357,7 +357,7 @@ func (f *Finder) findIDs(ctx context.Context, sn *restic.Snapshot) error {
return nil
}
if node.Type == "dir" && f.treeIDs != nil {
if node.Type == restic.NodeTypeDir && f.treeIDs != nil {
treeID := node.Subtree
found := false
if _, ok := f.treeIDs[treeID.Str()]; ok {
@ -377,7 +377,7 @@ func (f *Finder) findIDs(ctx context.Context, sn *restic.Snapshot) error {
}
}
if node.Type == "file" && f.blobIDs != nil {
if node.Type == restic.NodeTypeFile && f.blobIDs != nil {
for _, id := range node.Content {
if ctx.Err() != nil {
return ctx.Err()

View file

@ -1,6 +1,8 @@
package main
import (
"io"
"os"
"time"
"github.com/restic/restic/internal/errors"
@ -41,10 +43,10 @@ func init() {
cmdRoot.AddCommand(cmdGenerate)
fs := cmdGenerate.Flags()
fs.StringVar(&genOpts.ManDir, "man", "", "write man pages to `directory`")
fs.StringVar(&genOpts.BashCompletionFile, "bash-completion", "", "write bash completion `file`")
fs.StringVar(&genOpts.FishCompletionFile, "fish-completion", "", "write fish completion `file`")
fs.StringVar(&genOpts.ZSHCompletionFile, "zsh-completion", "", "write zsh completion `file`")
fs.StringVar(&genOpts.PowerShellCompletionFile, "powershell-completion", "", "write powershell completion `file`")
fs.StringVar(&genOpts.BashCompletionFile, "bash-completion", "", "write bash completion `file` (`-` for stdout)")
fs.StringVar(&genOpts.FishCompletionFile, "fish-completion", "", "write fish completion `file` (`-` for stdout)")
fs.StringVar(&genOpts.ZSHCompletionFile, "zsh-completion", "", "write zsh completion `file` (`-` for stdout)")
fs.StringVar(&genOpts.PowerShellCompletionFile, "powershell-completion", "", "write powershell completion `file` (`-` for stdout)")
}
func writeManpages(dir string) error {
@ -65,32 +67,44 @@ func writeManpages(dir string) error {
return doc.GenManTree(cmdRoot, header, dir)
}
func writeBashCompletion(file string) error {
func writeCompletion(filename string, shell string, generate func(w io.Writer) error) (err error) {
if stdoutIsTerminal() {
Verbosef("writing bash completion file to %v\n", file)
Verbosef("writing %s completion file to %v\n", shell, filename)
}
return cmdRoot.GenBashCompletionFile(file)
var outWriter io.Writer
if filename != "-" {
var outFile *os.File
outFile, err = os.Create(filename)
if err != nil {
return
}
defer func() { err = outFile.Close() }()
outWriter = outFile
} else {
outWriter = globalOptions.stdout
}
func writeFishCompletion(file string) error {
if stdoutIsTerminal() {
Verbosef("writing fish completion file to %v\n", file)
}
return cmdRoot.GenFishCompletionFile(file, true)
err = generate(outWriter)
return
}
func writeZSHCompletion(file string) error {
if stdoutIsTerminal() {
Verbosef("writing zsh completion file to %v\n", file)
func checkStdoutForSingleShell(opts generateOptions) error {
completionFileOpts := []string{
opts.BashCompletionFile,
opts.FishCompletionFile,
opts.ZSHCompletionFile,
opts.PowerShellCompletionFile,
}
return cmdRoot.GenZshCompletionFile(file)
seenIsStdout := false
for _, completionFileOpt := range completionFileOpts {
if completionFileOpt == "-" {
if seenIsStdout {
return errors.Fatal("the generate command can generate shell completions to stdout for single shell only")
}
func writePowerShellCompletion(file string) error {
if stdoutIsTerminal() {
Verbosef("writing powershell completion file to %v\n", file)
seenIsStdout = true
}
return cmdRoot.GenPowerShellCompletionFile(file)
}
return nil
}
func runGenerate(opts generateOptions, args []string) error {
@ -105,29 +119,34 @@ func runGenerate(opts generateOptions, args []string) error {
}
}
err := checkStdoutForSingleShell(opts)
if err != nil {
return err
}
if opts.BashCompletionFile != "" {
err := writeBashCompletion(opts.BashCompletionFile)
err := writeCompletion(opts.BashCompletionFile, "bash", cmdRoot.GenBashCompletion)
if err != nil {
return err
}
}
if opts.FishCompletionFile != "" {
err := writeFishCompletion(opts.FishCompletionFile)
err := writeCompletion(opts.FishCompletionFile, "fish", func(w io.Writer) error { return cmdRoot.GenFishCompletion(w, true) })
if err != nil {
return err
}
}
if opts.ZSHCompletionFile != "" {
err := writeZSHCompletion(opts.ZSHCompletionFile)
err := writeCompletion(opts.ZSHCompletionFile, "zsh", cmdRoot.GenZshCompletion)
if err != nil {
return err
}
}
if opts.PowerShellCompletionFile != "" {
err := writePowerShellCompletion(opts.PowerShellCompletionFile)
err := writeCompletion(opts.PowerShellCompletionFile, "powershell", cmdRoot.GenPowerShellCompletion)
if err != nil {
return err
}

View file

@ -0,0 +1,40 @@
package main
import (
"bytes"
"strings"
"testing"
rtest "github.com/restic/restic/internal/test"
)
func TestGenerateStdout(t *testing.T) {
testCases := []struct {
name string
opts generateOptions
}{
{"bash", generateOptions{BashCompletionFile: "-"}},
{"fish", generateOptions{FishCompletionFile: "-"}},
{"zsh", generateOptions{ZSHCompletionFile: "-"}},
{"powershell", generateOptions{PowerShellCompletionFile: "-"}},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
buf := bytes.NewBuffer(nil)
globalOptions.stdout = buf
err := runGenerate(tc.opts, []string{})
rtest.OK(t, err)
completionString := buf.String()
rtest.Assert(t, strings.Contains(completionString, "# "+tc.name+" completion for restic"), "has no expected completion header")
})
}
t.Run("Generate shell completions to stdout for two shells", func(t *testing.T) {
buf := bytes.NewBuffer(nil)
globalOptions.stdout = buf
opts := generateOptions{BashCompletionFile: "-", FishCompletionFile: "-"}
err := runGenerate(opts, []string{})
rtest.Assert(t, err != nil, "generate shell completions to stdout for two shells fails")
})
}

View file

@ -66,7 +66,7 @@ func runList(ctx context.Context, gopts GlobalOptions, args []string) error {
case "locks":
t = restic.LockFile
case "blobs":
return index.ForAllIndexes(ctx, repo, repo, func(_ restic.ID, idx *index.Index, _ bool, err error) error {
return index.ForAllIndexes(ctx, repo, repo, func(_ restic.ID, idx *index.Index, err error) error {
if err != nil {
return err
}

View file

@ -75,17 +75,17 @@ func init() {
}
type lsPrinter interface {
Snapshot(sn *restic.Snapshot)
Node(path string, node *restic.Node, isPrefixDirectory bool)
LeaveDir(path string)
Close()
Snapshot(sn *restic.Snapshot) error
Node(path string, node *restic.Node, isPrefixDirectory bool) error
LeaveDir(path string) error
Close() error
}
type jsonLsPrinter struct {
enc *json.Encoder
}
func (p *jsonLsPrinter) Snapshot(sn *restic.Snapshot) {
func (p *jsonLsPrinter) Snapshot(sn *restic.Snapshot) error {
type lsSnapshot struct {
*restic.Snapshot
ID *restic.ID `json:"id"`
@ -94,27 +94,21 @@ func (p *jsonLsPrinter) Snapshot(sn *restic.Snapshot) {
StructType string `json:"struct_type"` // "snapshot", deprecated
}
err := p.enc.Encode(lsSnapshot{
return p.enc.Encode(lsSnapshot{
Snapshot: sn,
ID: sn.ID(),
ShortID: sn.ID().Str(),
MessageType: "snapshot",
StructType: "snapshot",
})
if err != nil {
Warnf("JSON encode failed: %v\n", err)
}
}
// Print node in our custom JSON format, followed by a newline.
func (p *jsonLsPrinter) Node(path string, node *restic.Node, isPrefixDirectory bool) {
func (p *jsonLsPrinter) Node(path string, node *restic.Node, isPrefixDirectory bool) error {
if isPrefixDirectory {
return
}
err := lsNodeJSON(p.enc, path, node)
if err != nil {
Warnf("JSON encode failed: %v\n", err)
return nil
}
return lsNodeJSON(p.enc, path, node)
}
func lsNodeJSON(enc *json.Encoder, path string, node *restic.Node) error {
@ -137,7 +131,7 @@ func lsNodeJSON(enc *json.Encoder, path string, node *restic.Node) error {
size uint64 // Target for Size pointer.
}{
Name: node.Name,
Type: node.Type,
Type: string(node.Type),
Path: path,
UID: node.UID,
GID: node.GID,
@ -153,15 +147,15 @@ func lsNodeJSON(enc *json.Encoder, path string, node *restic.Node) error {
}
// Always print size for regular files, even when empty,
// but never for other types.
if node.Type == "file" {
if node.Type == restic.NodeTypeFile {
n.Size = &n.size
}
return enc.Encode(n)
}
func (p *jsonLsPrinter) LeaveDir(_ string) {}
func (p *jsonLsPrinter) Close() {}
func (p *jsonLsPrinter) LeaveDir(_ string) error { return nil }
func (p *jsonLsPrinter) Close() error { return nil }
type ncduLsPrinter struct {
out io.Writer
@ -171,16 +165,17 @@ type ncduLsPrinter struct {
// lsSnapshotNcdu prints a restic snapshot in Ncdu save format.
// It opens the JSON list. Nodes are added with lsNodeNcdu and the list is closed by lsCloseNcdu.
// Format documentation: https://dev.yorhel.nl/ncdu/jsonfmt
func (p *ncduLsPrinter) Snapshot(sn *restic.Snapshot) {
func (p *ncduLsPrinter) Snapshot(sn *restic.Snapshot) error {
const NcduMajorVer = 1
const NcduMinorVer = 2
snapshotBytes, err := json.Marshal(sn)
if err != nil {
Warnf("JSON encode failed: %v\n", err)
return err
}
p.depth++
fmt.Fprintf(p.out, "[%d, %d, %s, [{\"name\":\"/\"}", NcduMajorVer, NcduMinorVer, string(snapshotBytes))
_, err = fmt.Fprintf(p.out, "[%d, %d, %s, [{\"name\":\"/\"}", NcduMajorVer, NcduMinorVer, string(snapshotBytes))
return err
}
func lsNcduNode(_ string, node *restic.Node) ([]byte, error) {
@ -208,7 +203,7 @@ func lsNcduNode(_ string, node *restic.Node) ([]byte, error) {
Dev: node.DeviceID,
Ino: node.Inode,
NLink: node.Links,
NotReg: node.Type != "dir" && node.Type != "file",
NotReg: node.Type != restic.NodeTypeDir && node.Type != restic.NodeTypeFile,
UID: node.UID,
GID: node.GID,
Mode: uint16(node.Mode & os.ModePerm),
@ -232,27 +227,30 @@ func lsNcduNode(_ string, node *restic.Node) ([]byte, error) {
return json.Marshal(outNode)
}
func (p *ncduLsPrinter) Node(path string, node *restic.Node, _ bool) {
func (p *ncduLsPrinter) Node(path string, node *restic.Node, _ bool) error {
out, err := lsNcduNode(path, node)
if err != nil {
Warnf("JSON encode failed: %v\n", err)
return err
}
if node.Type == "dir" {
fmt.Fprintf(p.out, ",\n%s[\n%s%s", strings.Repeat(" ", p.depth), strings.Repeat(" ", p.depth+1), string(out))
if node.Type == restic.NodeTypeDir {
_, err = fmt.Fprintf(p.out, ",\n%s[\n%s%s", strings.Repeat(" ", p.depth), strings.Repeat(" ", p.depth+1), string(out))
p.depth++
} else {
fmt.Fprintf(p.out, ",\n%s%s", strings.Repeat(" ", p.depth), string(out))
_, err = fmt.Fprintf(p.out, ",\n%s%s", strings.Repeat(" ", p.depth), string(out))
}
return err
}
func (p *ncduLsPrinter) LeaveDir(_ string) {
func (p *ncduLsPrinter) LeaveDir(_ string) error {
p.depth--
fmt.Fprintf(p.out, "\n%s]", strings.Repeat(" ", p.depth))
_, err := fmt.Fprintf(p.out, "\n%s]", strings.Repeat(" ", p.depth))
return err
}
func (p *ncduLsPrinter) Close() {
fmt.Fprint(p.out, "\n]\n]\n")
func (p *ncduLsPrinter) Close() error {
_, err := fmt.Fprint(p.out, "\n]\n]\n")
return err
}
type textLsPrinter struct {
@ -261,17 +259,23 @@ type textLsPrinter struct {
HumanReadable bool
}
func (p *textLsPrinter) Snapshot(sn *restic.Snapshot) {
func (p *textLsPrinter) Snapshot(sn *restic.Snapshot) error {
Verbosef("%v filtered by %v:\n", sn, p.dirs)
return nil
}
func (p *textLsPrinter) Node(path string, node *restic.Node, isPrefixDirectory bool) {
func (p *textLsPrinter) Node(path string, node *restic.Node, isPrefixDirectory bool) error {
if !isPrefixDirectory {
Printf("%s\n", formatNode(path, node, p.ListLong, p.HumanReadable))
}
return nil
}
func (p *textLsPrinter) LeaveDir(_ string) {}
func (p *textLsPrinter) Close() {}
func (p *textLsPrinter) LeaveDir(_ string) error {
return nil
}
func (p *textLsPrinter) Close() error {
return nil
}
func runLs(ctx context.Context, opts LsOptions, gopts GlobalOptions, args []string) error {
if len(args) == 0 {
@ -374,7 +378,9 @@ func runLs(ctx context.Context, opts LsOptions, gopts GlobalOptions, args []stri
return err
}
printer.Snapshot(sn)
if err := printer.Snapshot(sn); err != nil {
return err
}
processNode := func(_ restic.ID, nodepath string, node *restic.Node, err error) error {
if err != nil {
@ -387,7 +393,9 @@ func runLs(ctx context.Context, opts LsOptions, gopts GlobalOptions, args []stri
printedDir := false
if withinDir(nodepath) {
// if we're within a target path, print the node
printer.Node(nodepath, node, false)
if err := printer.Node(nodepath, node, false); err != nil {
return err
}
printedDir = true
// if recursive listing is requested, signal the walker that it
@ -402,17 +410,19 @@ func runLs(ctx context.Context, opts LsOptions, gopts GlobalOptions, args []stri
if approachingMatchingTree(nodepath) {
// print node leading up to the target paths
if !printedDir {
printer.Node(nodepath, node, true)
return printer.Node(nodepath, node, true)
}
return nil
}
// otherwise, signal the walker to not walk recursively into any
// subdirs
if node.Type == "dir" {
if node.Type == restic.NodeTypeDir {
// immediately generate leaveDir if the directory is skipped
if printedDir {
printer.LeaveDir(nodepath)
if err := printer.LeaveDir(nodepath); err != nil {
return err
}
}
return walker.ErrSkipNode
}
@ -421,11 +431,12 @@ func runLs(ctx context.Context, opts LsOptions, gopts GlobalOptions, args []stri
err = walker.Walk(ctx, repo, *sn.Tree, walker.WalkVisitor{
ProcessNode: processNode,
LeaveDir: func(path string) {
LeaveDir: func(path string) error {
// the root path `/` has no corresponding node and is thus also skipped by processNode
if path != "/" {
printer.LeaveDir(path)
return printer.LeaveDir(path)
}
return nil
},
})
@ -433,6 +444,5 @@ func runLs(ctx context.Context, opts LsOptions, gopts GlobalOptions, args []stri
return err
}
printer.Close()
return nil
return printer.Close()
}

View file

@ -23,7 +23,7 @@ var lsTestNodes = []lsTestNode{
path: "/bar/baz",
Node: restic.Node{
Name: "baz",
Type: "file",
Type: restic.NodeTypeFile,
Size: 12345,
UID: 10000000,
GID: 20000000,
@ -39,7 +39,7 @@ var lsTestNodes = []lsTestNode{
path: "/foo/empty",
Node: restic.Node{
Name: "empty",
Type: "file",
Type: restic.NodeTypeFile,
Size: 0,
UID: 1001,
GID: 1001,
@ -56,7 +56,7 @@ var lsTestNodes = []lsTestNode{
path: "/foo/link",
Node: restic.Node{
Name: "link",
Type: "symlink",
Type: restic.NodeTypeSymlink,
Mode: os.ModeSymlink | 0777,
LinkTarget: "not printed",
},
@ -66,7 +66,7 @@ var lsTestNodes = []lsTestNode{
path: "/some/directory",
Node: restic.Node{
Name: "directory",
Type: "dir",
Type: restic.NodeTypeDir,
Mode: os.ModeDir | 0755,
ModTime: time.Date(2020, 1, 2, 3, 4, 5, 0, time.UTC),
AccessTime: time.Date(2021, 2, 3, 4, 5, 6, 7, time.UTC),
@ -79,7 +79,7 @@ var lsTestNodes = []lsTestNode{
path: "/some/sticky",
Node: restic.Node{
Name: "sticky",
Type: "dir",
Type: restic.NodeTypeDir,
Mode: os.ModeDir | 0755 | os.ModeSetuid | os.ModeSetgid | os.ModeSticky,
},
},
@ -134,29 +134,29 @@ func TestLsNcdu(t *testing.T) {
}
modTime := time.Date(2020, 1, 2, 3, 4, 5, 0, time.UTC)
printer.Snapshot(&restic.Snapshot{
rtest.OK(t, printer.Snapshot(&restic.Snapshot{
Hostname: "host",
Paths: []string{"/example"},
})
printer.Node("/directory", &restic.Node{
Type: "dir",
}))
rtest.OK(t, printer.Node("/directory", &restic.Node{
Type: restic.NodeTypeDir,
Name: "directory",
ModTime: modTime,
}, false)
printer.Node("/directory/data", &restic.Node{
Type: "file",
}, false))
rtest.OK(t, printer.Node("/directory/data", &restic.Node{
Type: restic.NodeTypeFile,
Name: "data",
Size: 42,
ModTime: modTime,
}, false)
printer.LeaveDir("/directory")
printer.Node("/file", &restic.Node{
Type: "file",
}, false))
rtest.OK(t, printer.LeaveDir("/directory"))
rtest.OK(t, printer.Node("/file", &restic.Node{
Type: restic.NodeTypeFile,
Name: "file",
Size: 12345,
ModTime: modTime,
}, false)
printer.Close()
}, false))
rtest.OK(t, printer.Close())
rtest.Equals(t, `[1, 2, {"time":"0001-01-01T00:00:00Z","tree":null,"paths":["/example"],"hostname":"host"}, [{"name":"/"},
[

View file

@ -15,7 +15,6 @@ import (
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/restic"
resticfs "github.com/restic/restic/internal/fs"
"github.com/restic/restic/internal/fuse"
systemFuse "github.com/anacrolix/fuse"
@ -122,7 +121,7 @@ func runMount(ctx context.Context, opts MountOptions, gopts GlobalOptions, args
// Check the existence of the mount point at the earliest stage to
// prevent unnecessary computations while opening the repository.
if _, err := resticfs.Stat(mountpoint); errors.Is(err, os.ErrNotExist) {
if _, err := os.Stat(mountpoint); errors.Is(err, os.ErrNotExist) {
Verbosef("Mountpoint %s doesn't exist\n", mountpoint)
return err
}

View file

@ -74,7 +74,7 @@ func init() {
func addPruneOptions(c *cobra.Command, pruneOptions *PruneOptions) {
f := c.Flags()
f.StringVar(&pruneOptions.MaxUnused, "max-unused", "5%", "tolerate given `limit` of unused data (absolute value in bytes with suffixes k/K, m/M, g/G, t/T, a value in % or the word 'unlimited')")
f.StringVar(&pruneOptions.MaxRepackSize, "max-repack-size", "", "maximum `size` to repack (allowed suffixes: k/K, m/M, g/G, t/T)")
f.StringVar(&pruneOptions.MaxRepackSize, "max-repack-size", "", "stop after repacking this much data in total (allowed suffixes for `size`: k/K, m/M, g/G, t/T)")
f.BoolVar(&pruneOptions.RepackCacheableOnly, "repack-cacheable-only", false, "only repack packs which are cacheable")
f.BoolVar(&pruneOptions.RepackSmall, "repack-small", false, "repack pack files below 80% of target pack size")
f.BoolVar(&pruneOptions.RepackUncompressed, "repack-uncompressed", false, "repack all uncompressed data")

View file

@ -88,7 +88,7 @@ func runRecover(ctx context.Context, gopts GlobalOptions) error {
}
for _, node := range tree.Nodes {
if node.Type == "dir" && node.Subtree != nil {
if node.Type == restic.NodeTypeDir && node.Subtree != nil {
trees[*node.Subtree] = true
}
}
@ -128,7 +128,7 @@ func runRecover(ctx context.Context, gopts GlobalOptions) error {
for id := range roots {
var subtreeID = id
node := restic.Node{
Type: "dir",
Type: restic.NodeTypeDir,
Name: id.Str(),
Mode: 0755,
Subtree: &subtreeID,

View file

@ -92,11 +92,11 @@ func runRepairSnapshots(ctx context.Context, gopts GlobalOptions, opts RepairOpt
// - files whose contents are not fully available (-> file will be modified)
rewriter := walker.NewTreeRewriter(walker.RewriteOpts{
RewriteNode: func(node *restic.Node, path string) *restic.Node {
if node.Type == "irregular" || node.Type == "" {
if node.Type == restic.NodeTypeIrregular || node.Type == restic.NodeTypeInvalid {
Verbosef(" file %q: removed node with invalid type %q\n", path, node.Type)
return nil
}
if node.Type != "file" {
if node.Type != restic.NodeTypeFile {
return node
}

View file

@ -7,6 +7,7 @@ import (
"github.com/restic/restic/internal/debug"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/filter"
"github.com/restic/restic/internal/restic"
"github.com/restic/restic/internal/restorer"
"github.com/restic/restic/internal/ui"
@ -49,8 +50,8 @@ Exit status is 12 if the password is incorrect.
// RestoreOptions collects all options for the restore command.
type RestoreOptions struct {
excludePatternOptions
includePatternOptions
filter.ExcludePatternOptions
filter.IncludePatternOptions
Target string
restic.SnapshotFilter
DryRun bool
@ -68,8 +69,8 @@ func init() {
flags := cmdRestore.Flags()
flags.StringVarP(&restoreOptions.Target, "target", "t", "", "directory to extract data to")
initExcludePatternOptions(flags, &restoreOptions.excludePatternOptions)
initIncludePatternOptions(flags, &restoreOptions.includePatternOptions)
restoreOptions.ExcludePatternOptions.Add(flags)
restoreOptions.IncludePatternOptions.Add(flags)
initSingleSnapshotFilter(flags, &restoreOptions.SnapshotFilter)
flags.BoolVar(&restoreOptions.DryRun, "dry-run", false, "do not write any data, just show what would be done")
@ -82,12 +83,12 @@ func init() {
func runRestore(ctx context.Context, opts RestoreOptions, gopts GlobalOptions,
term *termstatus.Terminal, args []string) error {
excludePatternFns, err := opts.excludePatternOptions.CollectPatterns()
excludePatternFns, err := opts.ExcludePatternOptions.CollectPatterns(Warnf)
if err != nil {
return err
}
includePatternFns, err := opts.includePatternOptions.CollectPatterns()
includePatternFns, err := opts.IncludePatternOptions.CollectPatterns(Warnf)
if err != nil {
return err
}

View file

@ -12,7 +12,6 @@ import (
"testing"
"time"
"github.com/restic/restic/internal/feature"
"github.com/restic/restic/internal/restic"
rtest "github.com/restic/restic/internal/test"
"github.com/restic/restic/internal/ui/termstatus"
@ -403,28 +402,14 @@ func TestRestoreNoMetadataOnIgnoredIntermediateDirs(t *testing.T) {
"meta data of intermediate directory hasn't been restore")
}
func TestRestoreLocalLayout(t *testing.T) {
defer feature.TestSetFlag(t, feature.Flag, feature.DeprecateS3LegacyLayout, false)()
func TestRestoreDefaultLayout(t *testing.T) {
env, cleanup := withTestEnvironment(t)
defer cleanup()
var tests = []struct {
filename string
layout string
}{
{"repo-layout-default.tar.gz", ""},
{"repo-layout-s3legacy.tar.gz", ""},
{"repo-layout-default.tar.gz", "default"},
{"repo-layout-s3legacy.tar.gz", "s3legacy"},
}
for _, test := range tests {
datafile := filepath.Join("..", "..", "internal", "backend", "testdata", test.filename)
datafile := filepath.Join("..", "..", "internal", "backend", "testdata", "repo-layout-default.tar.gz")
rtest.SetupTarTestFixture(t, env.base, datafile)
env.gopts.extended["local.layout"] = test.layout
// check the repo
testRunCheck(t, env.gopts)
@ -435,4 +420,3 @@ func TestRestoreLocalLayout(t *testing.T) {
rtest.RemoveAll(t, filepath.Join(env.base, "repo"))
rtest.RemoveAll(t, target)
}
}

View file

@ -9,6 +9,7 @@ import (
"github.com/restic/restic/internal/debug"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/filter"
"github.com/restic/restic/internal/repository"
"github.com/restic/restic/internal/restic"
"github.com/restic/restic/internal/walker"
@ -87,7 +88,7 @@ type RewriteOptions struct {
Metadata snapshotMetadataArgs
restic.SnapshotFilter
excludePatternOptions
filter.ExcludePatternOptions
}
var rewriteOptions RewriteOptions
@ -102,7 +103,7 @@ func init() {
f.StringVar(&rewriteOptions.Metadata.Time, "new-time", "", "replace time of the backup")
initMultiSnapshotFilter(f, &rewriteOptions.SnapshotFilter, true)
initExcludePatternOptions(f, &rewriteOptions.excludePatternOptions)
rewriteOptions.ExcludePatternOptions.Add(f)
}
type rewriteFilterFunc func(ctx context.Context, sn *restic.Snapshot) (restic.ID, error)
@ -112,7 +113,7 @@ func rewriteSnapshot(ctx context.Context, repo *repository.Repository, sn *resti
return false, errors.Errorf("snapshot %v has nil tree", sn.ID().Str())
}
rejectByNameFuncs, err := opts.excludePatternOptions.CollectPatterns()
rejectByNameFuncs, err := opts.ExcludePatternOptions.CollectPatterns(Warnf)
if err != nil {
return false, err
}
@ -262,7 +263,7 @@ func filterAndReplaceSnapshot(ctx context.Context, repo restic.Repository, sn *r
}
func runRewrite(ctx context.Context, opts RewriteOptions, gopts GlobalOptions, args []string) error {
if opts.excludePatternOptions.Empty() && opts.Metadata.empty() {
if opts.ExcludePatternOptions.Empty() && opts.Metadata.empty() {
return errors.Fatal("Nothing to do: no excludes provided and no new metadata provided")
}

View file

@ -5,6 +5,7 @@ import (
"path/filepath"
"testing"
"github.com/restic/restic/internal/filter"
"github.com/restic/restic/internal/restic"
rtest "github.com/restic/restic/internal/test"
"github.com/restic/restic/internal/ui"
@ -12,7 +13,7 @@ import (
func testRunRewriteExclude(t testing.TB, gopts GlobalOptions, excludes []string, forget bool, metadata snapshotMetadataArgs) {
opts := RewriteOptions{
excludePatternOptions: excludePatternOptions{
ExcludePatternOptions: filter.ExcludePatternOptions{
Excludes: excludes,
},
Forget: forget,

View file

@ -296,7 +296,9 @@ func PrintSnapshotGroupHeader(stdout io.Writer, groupKeyJSON string) error {
}
// Info
fmt.Fprintf(stdout, "snapshots")
if _, err := fmt.Fprintf(stdout, "snapshots"); err != nil {
return err
}
var infoStrings []string
if key.Hostname != "" {
infoStrings = append(infoStrings, "host ["+key.Hostname+"]")
@ -308,11 +310,13 @@ func PrintSnapshotGroupHeader(stdout io.Writer, groupKeyJSON string) error {
infoStrings = append(infoStrings, "paths ["+strings.Join(key.Paths, ", ")+"]")
}
if infoStrings != nil {
fmt.Fprintf(stdout, " for (%s)", strings.Join(infoStrings, ", "))
if _, err := fmt.Fprintf(stdout, " for (%s)", strings.Join(infoStrings, ", ")); err != nil {
return err
}
fmt.Fprintf(stdout, ":\n")
}
_, err = fmt.Fprintf(stdout, ":\n")
return nil
return err
}
// Snapshot helps to print Snapshots as JSON with their ID included.
@ -329,7 +333,7 @@ type SnapshotGroup struct {
Snapshots []Snapshot `json:"snapshots"`
}
// printSnapshotsJSON writes the JSON representation of list to stdout.
// printSnapshotGroupJSON writes the JSON representation of list to stdout.
func printSnapshotGroupJSON(stdout io.Writer, snGroups map[string]restic.Snapshots, grouped bool) error {
if grouped {
snapshotGroups := []SnapshotGroup{}

View file

@ -2,6 +2,7 @@ package main
import (
"context"
"crypto/sha256"
"encoding/json"
"fmt"
"path/filepath"
@ -16,7 +17,6 @@ import (
"github.com/restic/restic/internal/ui/table"
"github.com/restic/restic/internal/walker"
"github.com/minio/sha256-simd"
"github.com/spf13/cobra"
)
@ -276,7 +276,7 @@ func statsWalkTree(repo restic.Loader, opts StatsOptions, stats *statsContainer,
// will still be restored
stats.TotalFileCount++
if node.Links == 1 || node.Type == "dir" {
if node.Links == 1 || node.Type == restic.NodeTypeDir {
stats.TotalSize += node.Size
} else {
// if hardlinks are present only count each deviceID+inode once

View file

@ -25,6 +25,7 @@ Exit status is 1 if there was any error.
Run: func(_ *cobra.Command, _ []string) {
if globalOptions.JSON {
type jsonVersion struct {
MessageType string `json:"message_type"` // version
Version string `json:"version"`
GoVersion string `json:"go_version"`
GoOS string `json:"go_os"`
@ -32,6 +33,7 @@ Exit status is 1 if there was any error.
}
jsonS := jsonVersion{
MessageType: "version",
Version: version,
GoVersion: runtime.Version(),
GoOS: runtime.GOOS,

View file

@ -1,347 +1,16 @@
package main
import (
"bufio"
"bytes"
"fmt"
"io"
"os"
"path/filepath"
"strings"
"sync"
"github.com/restic/restic/internal/archiver"
"github.com/restic/restic/internal/debug"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/filter"
"github.com/restic/restic/internal/fs"
"github.com/restic/restic/internal/repository"
"github.com/restic/restic/internal/textfile"
"github.com/restic/restic/internal/ui"
"github.com/spf13/pflag"
)
type rejectionCache struct {
m map[string]bool
mtx sync.Mutex
}
// Lock locks the mutex in rc.
func (rc *rejectionCache) Lock() {
if rc != nil {
rc.mtx.Lock()
}
}
// Unlock unlocks the mutex in rc.
func (rc *rejectionCache) Unlock() {
if rc != nil {
rc.mtx.Unlock()
}
}
// Get returns the last stored value for dir and a second boolean that
// indicates whether that value was actually written to the cache. It is the
// callers responsibility to call rc.Lock and rc.Unlock before using this
// method, otherwise data races may occur.
func (rc *rejectionCache) Get(dir string) (bool, bool) {
if rc == nil || rc.m == nil {
return false, false
}
v, ok := rc.m[dir]
return v, ok
}
// Store stores a new value for dir. It is the caller's responsibility to call
// rc.Lock and rc.Unlock before using this method, otherwise data races may
// occur.
func (rc *rejectionCache) Store(dir string, rejected bool) {
if rc == nil {
return
}
if rc.m == nil {
rc.m = make(map[string]bool)
}
rc.m[dir] = rejected
}
// RejectByNameFunc is a function that takes a filename of a
// file that would be included in the backup. The function returns true if it
// should be excluded (rejected) from the backup.
type RejectByNameFunc func(path string) bool
// RejectFunc is a function that takes a filename and os.FileInfo of a
// file that would be included in the backup. The function returns true if it
// should be excluded (rejected) from the backup.
type RejectFunc func(path string, fi os.FileInfo) bool
// rejectByPattern returns a RejectByNameFunc which rejects files that match
// one of the patterns.
func rejectByPattern(patterns []string) RejectByNameFunc {
parsedPatterns := filter.ParsePatterns(patterns)
return func(item string) bool {
matched, err := filter.List(parsedPatterns, item)
if err != nil {
Warnf("error for exclude pattern: %v", err)
}
if matched {
debug.Log("path %q excluded by an exclude pattern", item)
return true
}
return false
}
}
// Same as `rejectByPattern` but case insensitive.
func rejectByInsensitivePattern(patterns []string) RejectByNameFunc {
for index, path := range patterns {
patterns[index] = strings.ToLower(path)
}
rejFunc := rejectByPattern(patterns)
return func(item string) bool {
return rejFunc(strings.ToLower(item))
}
}
// rejectIfPresent returns a RejectByNameFunc which itself returns whether a path
// should be excluded. The RejectByNameFunc considers a file to be excluded when
// it resides in a directory with an exclusion file, that is specified by
// excludeFileSpec in the form "filename[:content]". The returned error is
// non-nil if the filename component of excludeFileSpec is empty. If rc is
// non-nil, it is going to be used in the RejectByNameFunc to expedite the evaluation
// of a directory based on previous visits.
func rejectIfPresent(excludeFileSpec string) (RejectByNameFunc, error) {
if excludeFileSpec == "" {
return nil, errors.New("name for exclusion tagfile is empty")
}
colon := strings.Index(excludeFileSpec, ":")
if colon == 0 {
return nil, fmt.Errorf("no name for exclusion tagfile provided")
}
tf, tc := "", ""
if colon > 0 {
tf = excludeFileSpec[:colon]
tc = excludeFileSpec[colon+1:]
} else {
tf = excludeFileSpec
}
debug.Log("using %q as exclusion tagfile", tf)
rc := &rejectionCache{}
fn := func(filename string) bool {
return isExcludedByFile(filename, tf, tc, rc)
}
return fn, nil
}
// isExcludedByFile interprets filename as a path and returns true if that file
// is in an excluded directory. A directory is identified as excluded if it contains a
// tagfile which bears the name specified in tagFilename and starts with
// header. If rc is non-nil, it is used to expedite the evaluation of a
// directory based on previous visits.
func isExcludedByFile(filename, tagFilename, header string, rc *rejectionCache) bool {
if tagFilename == "" {
return false
}
dir, base := filepath.Split(filename)
if base == tagFilename {
return false // do not exclude the tagfile itself
}
rc.Lock()
defer rc.Unlock()
rejected, visited := rc.Get(dir)
if visited {
return rejected
}
rejected = isDirExcludedByFile(dir, tagFilename, header)
rc.Store(dir, rejected)
return rejected
}
func isDirExcludedByFile(dir, tagFilename, header string) bool {
tf := filepath.Join(dir, tagFilename)
_, err := fs.Lstat(tf)
if os.IsNotExist(err) {
return false
}
if err != nil {
Warnf("could not access exclusion tagfile: %v", err)
return false
}
// when no signature is given, the mere presence of tf is enough reason
// to exclude filename
if len(header) == 0 {
return true
}
// From this stage, errors mean tagFilename exists but it is malformed.
// Warnings will be generated so that the user is informed that the
// intended ignore-action is not performed.
f, err := os.Open(tf)
if err != nil {
Warnf("could not open exclusion tagfile: %v", err)
return false
}
defer func() {
_ = f.Close()
}()
buf := make([]byte, len(header))
_, err = io.ReadFull(f, buf)
// EOF gets a dedicated message, as the generic warning would be too cryptic
if err == io.EOF {
Warnf("invalid (too short) signature in exclusion tagfile %q\n", tf)
return false
}
if err != nil {
Warnf("could not read signature from exclusion tagfile %q: %v\n", tf, err)
return false
}
if !bytes.Equal(buf, []byte(header)) {
Warnf("invalid signature in exclusion tagfile %q\n", tf)
return false
}
return true
}
// DeviceMap is used to track allowed source devices for backup. This is used to
// check for crossing mount points during backup (for --one-file-system). It
// maps the name of a source path to its device ID.
type DeviceMap map[string]uint64
// NewDeviceMap creates a new device map from the list of source paths.
func NewDeviceMap(allowedSourcePaths []string) (DeviceMap, error) {
deviceMap := make(map[string]uint64)
for _, item := range allowedSourcePaths {
item, err := filepath.Abs(filepath.Clean(item))
if err != nil {
return nil, err
}
fi, err := fs.Lstat(item)
if err != nil {
return nil, err
}
id, err := fs.DeviceID(fi)
if err != nil {
return nil, err
}
deviceMap[item] = id
}
if len(deviceMap) == 0 {
return nil, errors.New("zero allowed devices")
}
return deviceMap, nil
}
// IsAllowed returns true if the path is located on an allowed device.
func (m DeviceMap) IsAllowed(item string, deviceID uint64) (bool, error) {
for dir := item; ; dir = filepath.Dir(dir) {
debug.Log("item %v, test dir %v", item, dir)
// find a parent directory that is on an allowed device (otherwise
// we would not traverse the directory at all)
allowedID, ok := m[dir]
if !ok {
if dir == filepath.Dir(dir) {
// arrived at root, no allowed device found. this should not happen.
break
}
continue
}
// if the item has a different device ID than the parent directory,
// we crossed a file system boundary
if allowedID != deviceID {
debug.Log("item %v (dir %v) on disallowed device %d", item, dir, deviceID)
return false, nil
}
// item is on allowed device, accept it
debug.Log("item %v allowed", item)
return true, nil
}
return false, fmt.Errorf("item %v (device ID %v) not found, deviceMap: %v", item, deviceID, m)
}
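A hedged usage sketch, assuming the same package and hypothetical paths, showing how the map answers the one-file-system question for a single item:

func exampleDeviceMap() (bool, error) {
	m, err := NewDeviceMap([]string{"/home"})
	if err != nil {
		return false, err
	}
	fi, err := fs.Lstat("/home/user/file")
	if err != nil {
		return false, err
	}
	id, err := fs.DeviceID(fi)
	if err != nil {
		return false, err
	}
	// true if the file is on the same device as /home, false if a mount
	// point was crossed somewhere between /home and the file
	return m.IsAllowed("/home/user/file", id)
}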
// rejectByDevice returns a RejectFunc that rejects files which are on a
// different file system than the files/dirs in samples.
func rejectByDevice(samples []string) (RejectFunc, error) {
deviceMap, err := NewDeviceMap(samples)
if err != nil {
return nil, err
}
debug.Log("allowed devices: %v\n", deviceMap)
return func(item string, fi os.FileInfo) bool {
id, err := fs.DeviceID(fi)
if err != nil {
// This should never happen because gatherDevices() would have
// errored out earlier. If it still does that's a reason to panic.
panic(err)
}
allowed, err := deviceMap.IsAllowed(filepath.Clean(item), id)
if err != nil {
// this should not happen
panic(fmt.Sprintf("error checking device ID of %v: %v", item, err))
}
if allowed {
// accept item
return false
}
// reject everything except directories
if !fi.IsDir() {
return true
}
// special case: make sure we keep mountpoints (directories which
// contain a mounted file system). Test this by checking if the parent
// directory would be included.
parentDir := filepath.Dir(filepath.Clean(item))
parentFI, err := fs.Lstat(parentDir)
if err != nil {
debug.Log("item %v: error running lstat() on parent directory: %v", item, err)
// if in doubt, reject
return true
}
parentDeviceID, err := fs.DeviceID(parentFI)
if err != nil {
debug.Log("item %v: getting device ID of parent directory: %v", item, err)
// if in doubt, reject
return true
}
parentAllowed, err := deviceMap.IsAllowed(parentDir, parentDeviceID)
if err != nil {
debug.Log("item %v: error checking parent directory: %v", item, err)
// if in doubt, reject
return true
}
if parentAllowed {
// we found a mount point, so accept the directory
return false
}
// reject everything else
return true
}, nil
}
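The mount point special case is easiest to see with an example. A sketch under the assumption that /mnt/usb is a separate file system mounted below the source path / (hypothetical setup, same package):

func exampleRejectByDevice() error {
	reject, err := rejectByDevice([]string{"/"})
	if err != nil {
		return err
	}
	fi, err := fs.Lstat("/mnt/usb")
	if err != nil {
		return err
	}
	// the mount point directory itself is kept (false), so it appears as
	// an empty directory in the snapshot; anything inside it lives on a
	// different device and would be rejected (true)
	_ = reject("/mnt/usb", fi)
	return nil
}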
// rejectResticCache returns a RejectByNameFunc that rejects the restic cache
// directory (if set).
func rejectResticCache(repo *repository.Repository) (RejectByNameFunc, error) {
func rejectResticCache(repo *repository.Repository) (archiver.RejectByNameFunc, error) {
if repo.Cache == nil {
return func(string) bool {
return false
@ -362,137 +31,3 @@ func rejectResticCache(repo *repository.Repository) (RejectByNameFunc, error) {
return false
}, nil
}
func rejectBySize(maxSizeStr string) (RejectFunc, error) {
maxSize, err := ui.ParseBytes(maxSizeStr)
if err != nil {
return nil, err
}
return func(item string, fi os.FileInfo) bool {
// directory will be ignored
if fi.IsDir() {
return false
}
filesize := fi.Size()
if filesize > maxSize {
debug.Log("file %s is oversize: %d", item, filesize)
return true
}
return false
}, nil
}
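The size string is parsed by ui.ParseBytes, so suffixes such as "100M" work. A short same-package sketch with a hypothetical path:

func exampleRejectBySize() error {
	reject, err := rejectBySize("100M")
	if err != nil {
		return err
	}
	fi, err := fs.Lstat("/var/log/huge.bin")
	if err != nil {
		return err
	}
	// true only for regular files larger than 100M; directories always pass
	_ = reject("/var/log/huge.bin", fi)
	return nil
}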
// readPatternsFromFiles reads all files and returns the list of
// patterns. For each line, leading and trailing white space is removed
// and comment lines are ignored. For each remaining pattern, environment
// variables are resolved. For adding a literal dollar sign ($), write $$ to
// the file.
func readPatternsFromFiles(files []string) ([]string, error) {
getenvOrDollar := func(s string) string {
if s == "$" {
return "$"
}
return os.Getenv(s)
}
var patterns []string
for _, filename := range files {
err := func() (err error) {
data, err := textfile.Read(filename)
if err != nil {
return err
}
scanner := bufio.NewScanner(bytes.NewReader(data))
for scanner.Scan() {
line := strings.TrimSpace(scanner.Text())
// ignore empty lines
if line == "" {
continue
}
// strip comments
if strings.HasPrefix(line, "#") {
continue
}
line = os.Expand(line, getenvOrDollar)
patterns = append(patterns, line)
}
return scanner.Err()
}()
if err != nil {
return nil, fmt.Errorf("failed to read patterns from file %q: %w", filename, err)
}
}
return patterns, nil
}
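The expansion rule deserves a standalone, runnable sketch: $VAR is resolved from the environment while $$ collapses to a literal dollar sign, exactly as getenvOrDollar above implements it.

package main

import (
	"fmt"
	"os"
)

func main() {
	getenvOrDollar := func(s string) string {
		if s == "$" {
			return "$" // os.Expand passes "$" through for the sequence $$
		}
		return os.Getenv(s)
	}
	_ = os.Setenv("HOME", "/home/user")
	fmt.Println(os.Expand("$HOME/scratch", getenvOrDollar)) // /home/user/scratch
	fmt.Println(os.Expand("price_$$USD/*", getenvOrDollar)) // price_$USD/*
}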
type excludePatternOptions struct {
Excludes []string
InsensitiveExcludes []string
ExcludeFiles []string
InsensitiveExcludeFiles []string
}
func initExcludePatternOptions(f *pflag.FlagSet, opts *excludePatternOptions) {
f.StringArrayVarP(&opts.Excludes, "exclude", "e", nil, "exclude a `pattern` (can be specified multiple times)")
f.StringArrayVar(&opts.InsensitiveExcludes, "iexclude", nil, "same as --exclude `pattern` but ignores the casing of filenames")
f.StringArrayVar(&opts.ExcludeFiles, "exclude-file", nil, "read exclude patterns from a `file` (can be specified multiple times)")
f.StringArrayVar(&opts.InsensitiveExcludeFiles, "iexclude-file", nil, "same as --exclude-file but ignores casing of `file`names in patterns")
}
func (opts *excludePatternOptions) Empty() bool {
return len(opts.Excludes) == 0 && len(opts.InsensitiveExcludes) == 0 && len(opts.ExcludeFiles) == 0 && len(opts.InsensitiveExcludeFiles) == 0
}
func (opts excludePatternOptions) CollectPatterns() ([]RejectByNameFunc, error) {
var fs []RejectByNameFunc
// add patterns from file
if len(opts.ExcludeFiles) > 0 {
excludePatterns, err := readPatternsFromFiles(opts.ExcludeFiles)
if err != nil {
return nil, err
}
if err := filter.ValidatePatterns(excludePatterns); err != nil {
return nil, errors.Fatalf("--exclude-file: %s", err)
}
opts.Excludes = append(opts.Excludes, excludePatterns...)
}
if len(opts.InsensitiveExcludeFiles) > 0 {
excludes, err := readPatternsFromFiles(opts.InsensitiveExcludeFiles)
if err != nil {
return nil, err
}
if err := filter.ValidatePatterns(excludes); err != nil {
return nil, errors.Fatalf("--iexclude-file: %s", err)
}
opts.InsensitiveExcludes = append(opts.InsensitiveExcludes, excludes...)
}
if len(opts.InsensitiveExcludes) > 0 {
if err := filter.ValidatePatterns(opts.InsensitiveExcludes); err != nil {
return nil, errors.Fatalf("--iexclude: %s", err)
}
fs = append(fs, rejectByInsensitivePattern(opts.InsensitiveExcludes))
}
if len(opts.Excludes) > 0 {
if err := filter.ValidatePatterns(opts.Excludes); err != nil {
return nil, errors.Fatalf("--exclude: %s", err)
}
fs = append(fs, rejectByPattern(opts.Excludes))
}
return fs, nil
}
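A sketch of the intended wiring, assuming the same package: bind the flags to a flag set, parse the command line, then turn the collected options into reject functions.

func exampleCollectPatterns(args []string) ([]RejectByNameFunc, error) {
	f := pflag.NewFlagSet("example", pflag.ContinueOnError)
	var opts excludePatternOptions
	initExcludePatternOptions(f, &opts)
	if err := f.Parse(args); err != nil {
		return nil, err
	}
	return opts.CollectPatterns()
}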


@ -24,20 +24,20 @@ func formatNode(path string, n *restic.Node, long bool, human bool) string {
}
switch n.Type {
case "file":
case restic.NodeTypeFile:
mode = 0
case "dir":
case restic.NodeTypeDir:
mode = os.ModeDir
case "symlink":
case restic.NodeTypeSymlink:
mode = os.ModeSymlink
target = fmt.Sprintf(" -> %v", n.LinkTarget)
case "dev":
case restic.NodeTypeDev:
mode = os.ModeDevice
case "chardev":
case restic.NodeTypeCharDev:
mode = os.ModeDevice | os.ModeCharDevice
case "fifo":
case restic.NodeTypeFifo:
mode = os.ModeNamedPipe
case "socket":
case restic.NodeTypeSocket:
mode = os.ModeSocket
}


@ -19,7 +19,7 @@ func TestFormatNode(t *testing.T) {
testPath := "/test/path"
node := restic.Node{
Name: "baz",
Type: "file",
Type: restic.NodeTypeFile,
Size: 14680064,
UID: 1000,
GID: 2000,


@ -29,7 +29,6 @@ import (
"github.com/restic/restic/internal/backend/sftp"
"github.com/restic/restic/internal/backend/swift"
"github.com/restic/restic/internal/debug"
"github.com/restic/restic/internal/fs"
"github.com/restic/restic/internal/options"
"github.com/restic/restic/internal/repository"
"github.com/restic/restic/internal/restic"
@ -47,7 +46,7 @@ import (
// to a missing backend storage location or config file
var ErrNoRepository = errors.New("repository does not exist")
var version = "0.17.3"
var version = "0.17.3-dev (compiled manually)"
// TimeFormat is the format used for all timestamps printed by restic.
const TimeFormat = "2006-01-02 15:04:05"
@ -309,7 +308,7 @@ func readPasswordTerminal(ctx context.Context, in *os.File, out *os.File, prompt
fd := int(out.Fd())
state, err := term.GetState(fd)
if err != nil {
fmt.Fprintf(os.Stderr, "unable to get terminal state: %v\n", err)
_, _ = fmt.Fprintf(os.Stderr, "unable to get terminal state: %v\n", err)
return "", err
}
@ -318,16 +317,22 @@ func readPasswordTerminal(ctx context.Context, in *os.File, out *os.File, prompt
go func() {
defer close(done)
fmt.Fprint(out, prompt)
_, err = fmt.Fprint(out, prompt)
if err != nil {
return
}
buf, err = term.ReadPassword(int(in.Fd()))
fmt.Fprintln(out)
if err != nil {
return
}
_, err = fmt.Fprintln(out)
}()
select {
case <-ctx.Done():
err := term.Restore(fd, state)
if err != nil {
fmt.Fprintf(os.Stderr, "unable to restore terminal state: %v\n", err)
_, _ = fmt.Fprintf(os.Stderr, "unable to restore terminal state: %v\n", err)
}
return "", ctx.Err()
case <-done:
@ -440,26 +445,6 @@ func OpenRepository(ctx context.Context, opts GlobalOptions) (*repository.Reposi
return nil, err
}
report := func(msg string, err error, d time.Duration) {
if d >= 0 {
Warnf("%v returned error, retrying after %v: %v\n", msg, d, err)
} else {
Warnf("%v failed: %v\n", msg, err)
}
}
success := func(msg string, retries int) {
Warnf("%v operation successful after %d retries\n", msg, retries)
}
be = retry.New(be, 15*time.Minute, report, success)
// wrap backend if a test specified a hook
if opts.backendTestHook != nil {
be, err = opts.backendTestHook(be)
if err != nil {
return nil, err
}
}
s, err := repository.New(be, repository.Options{
Compression: opts.Compression,
PackSize: opts.PackSize * 1024 * 1024,
@ -548,7 +533,7 @@ func OpenRepository(ctx context.Context, opts GlobalOptions) (*repository.Reposi
}
for _, item := range oldCacheDirs {
dir := filepath.Join(c.Base, item.Name())
err = fs.RemoveAll(dir)
err = os.RemoveAll(dir)
if err != nil {
Warnf("unable to remove %v: %v\n", dir, err)
}
@ -630,12 +615,31 @@ func innerOpen(ctx context.Context, s string, gopts GlobalOptions, opts options.
}
}
report := func(msg string, err error, d time.Duration) {
if d >= 0 {
Warnf("%v returned error, retrying after %v: %v\n", msg, d, err)
} else {
Warnf("%v failed: %v\n", msg, err)
}
}
success := func(msg string, retries int) {
Warnf("%v operation successful after %d retries\n", msg, retries)
}
be = retry.New(be, 15*time.Minute, report, success)
// wrap backend if a test specified a hook
if gopts.backendTestHook != nil {
be, err = gopts.backendTestHook(be)
if err != nil {
return nil, err
}
}
return be, nil
}
// Open the backend specified by a location config.
func open(ctx context.Context, s string, gopts GlobalOptions, opts options.Options) (backend.Backend, error) {
be, err := innerOpen(ctx, s, gopts, opts, false)
if err != nil {
return nil, err


@ -5,6 +5,7 @@ import (
"path/filepath"
"testing"
"github.com/restic/restic/internal/filter"
rtest "github.com/restic/restic/internal/test"
)
@ -17,14 +18,14 @@ func TestBackupFailsWhenUsingInvalidPatterns(t *testing.T) {
var err error
// Test --exclude
err = testRunBackupAssumeFailure(t, filepath.Dir(env.testdata), []string{"testdata"}, BackupOptions{excludePatternOptions: excludePatternOptions{Excludes: []string{"*[._]log[.-][0-9]", "!*[._]log[.-][0-9]"}}}, env.gopts)
err = testRunBackupAssumeFailure(t, filepath.Dir(env.testdata), []string{"testdata"}, BackupOptions{ExcludePatternOptions: filter.ExcludePatternOptions{Excludes: []string{"*[._]log[.-][0-9]", "!*[._]log[.-][0-9]"}}}, env.gopts)
rtest.Equals(t, `Fatal: --exclude: invalid pattern(s) provided:
*[._]log[.-][0-9]
!*[._]log[.-][0-9]`, err.Error())
// Test --iexclude
err = testRunBackupAssumeFailure(t, filepath.Dir(env.testdata), []string{"testdata"}, BackupOptions{excludePatternOptions: excludePatternOptions{InsensitiveExcludes: []string{"*[._]log[.-][0-9]", "!*[._]log[.-][0-9]"}}}, env.gopts)
err = testRunBackupAssumeFailure(t, filepath.Dir(env.testdata), []string{"testdata"}, BackupOptions{ExcludePatternOptions: filter.ExcludePatternOptions{InsensitiveExcludes: []string{"*[._]log[.-][0-9]", "!*[._]log[.-][0-9]"}}}, env.gopts)
rtest.Equals(t, `Fatal: --iexclude: invalid pattern(s) provided:
*[._]log[.-][0-9]
@ -47,14 +48,14 @@ func TestBackupFailsWhenUsingInvalidPatternsFromFile(t *testing.T) {
var err error
// Test --exclude-file:
err = testRunBackupAssumeFailure(t, filepath.Dir(env.testdata), []string{"testdata"}, BackupOptions{excludePatternOptions: excludePatternOptions{ExcludeFiles: []string{excludeFile}}}, env.gopts)
err = testRunBackupAssumeFailure(t, filepath.Dir(env.testdata), []string{"testdata"}, BackupOptions{ExcludePatternOptions: filter.ExcludePatternOptions{ExcludeFiles: []string{excludeFile}}}, env.gopts)
rtest.Equals(t, `Fatal: --exclude-file: invalid pattern(s) provided:
*[._]log[.-][0-9]
!*[._]log[.-][0-9]`, err.Error())
// Test --iexclude-file
err = testRunBackupAssumeFailure(t, filepath.Dir(env.testdata), []string{"testdata"}, BackupOptions{excludePatternOptions: excludePatternOptions{InsensitiveExcludeFiles: []string{excludeFile}}}, env.gopts)
err = testRunBackupAssumeFailure(t, filepath.Dir(env.testdata), []string{"testdata"}, BackupOptions{ExcludePatternOptions: filter.ExcludePatternOptions{InsensitiveExcludeFiles: []string{excludeFile}}}, env.gopts)
rtest.Equals(t, `Fatal: --iexclude-file: invalid pattern(s) provided:
*[._]log[.-][0-9]
@ -70,28 +71,28 @@ func TestRestoreFailsWhenUsingInvalidPatterns(t *testing.T) {
var err error
// Test --exclude
err = testRunRestoreAssumeFailure("latest", RestoreOptions{excludePatternOptions: excludePatternOptions{Excludes: []string{"*[._]log[.-][0-9]", "!*[._]log[.-][0-9]"}}}, env.gopts)
err = testRunRestoreAssumeFailure("latest", RestoreOptions{ExcludePatternOptions: filter.ExcludePatternOptions{Excludes: []string{"*[._]log[.-][0-9]", "!*[._]log[.-][0-9]"}}}, env.gopts)
rtest.Equals(t, `Fatal: --exclude: invalid pattern(s) provided:
*[._]log[.-][0-9]
!*[._]log[.-][0-9]`, err.Error())
// Test --iexclude
err = testRunRestoreAssumeFailure("latest", RestoreOptions{excludePatternOptions: excludePatternOptions{InsensitiveExcludes: []string{"*[._]log[.-][0-9]", "!*[._]log[.-][0-9]"}}}, env.gopts)
err = testRunRestoreAssumeFailure("latest", RestoreOptions{ExcludePatternOptions: filter.ExcludePatternOptions{InsensitiveExcludes: []string{"*[._]log[.-][0-9]", "!*[._]log[.-][0-9]"}}}, env.gopts)
rtest.Equals(t, `Fatal: --iexclude: invalid pattern(s) provided:
*[._]log[.-][0-9]
!*[._]log[.-][0-9]`, err.Error())
// Test --include
err = testRunRestoreAssumeFailure("latest", RestoreOptions{includePatternOptions: includePatternOptions{Includes: []string{"*[._]log[.-][0-9]", "!*[._]log[.-][0-9]"}}}, env.gopts)
err = testRunRestoreAssumeFailure("latest", RestoreOptions{IncludePatternOptions: filter.IncludePatternOptions{Includes: []string{"*[._]log[.-][0-9]", "!*[._]log[.-][0-9]"}}}, env.gopts)
rtest.Equals(t, `Fatal: --include: invalid pattern(s) provided:
*[._]log[.-][0-9]
!*[._]log[.-][0-9]`, err.Error())
// Test --iinclude
err = testRunRestoreAssumeFailure("latest", RestoreOptions{includePatternOptions: includePatternOptions{InsensitiveIncludes: []string{"*[._]log[.-][0-9]", "!*[._]log[.-][0-9]"}}}, env.gopts)
err = testRunRestoreAssumeFailure("latest", RestoreOptions{IncludePatternOptions: filter.IncludePatternOptions{InsensitiveIncludes: []string{"*[._]log[.-][0-9]", "!*[._]log[.-][0-9]"}}}, env.gopts)
rtest.Equals(t, `Fatal: --iinclude: invalid pattern(s) provided:
*[._]log[.-][0-9]
@ -111,22 +112,22 @@ func TestRestoreFailsWhenUsingInvalidPatternsFromFile(t *testing.T) {
t.Fatalf("Could not write include file: %v", fileErr)
}
err := testRunRestoreAssumeFailure("latest", RestoreOptions{includePatternOptions: includePatternOptions{IncludeFiles: []string{patternsFile}}}, env.gopts)
err := testRunRestoreAssumeFailure("latest", RestoreOptions{IncludePatternOptions: filter.IncludePatternOptions{IncludeFiles: []string{patternsFile}}}, env.gopts)
rtest.Equals(t, `Fatal: --include-file: invalid pattern(s) provided:
*[._]log[.-][0-9]
!*[._]log[.-][0-9]`, err.Error())
err = testRunRestoreAssumeFailure("latest", RestoreOptions{excludePatternOptions: excludePatternOptions{ExcludeFiles: []string{patternsFile}}}, env.gopts)
err = testRunRestoreAssumeFailure("latest", RestoreOptions{ExcludePatternOptions: filter.ExcludePatternOptions{ExcludeFiles: []string{patternsFile}}}, env.gopts)
rtest.Equals(t, `Fatal: --exclude-file: invalid pattern(s) provided:
*[._]log[.-][0-9]
!*[._]log[.-][0-9]`, err.Error())
err = testRunRestoreAssumeFailure("latest", RestoreOptions{includePatternOptions: includePatternOptions{InsensitiveIncludeFiles: []string{patternsFile}}}, env.gopts)
err = testRunRestoreAssumeFailure("latest", RestoreOptions{IncludePatternOptions: filter.IncludePatternOptions{InsensitiveIncludeFiles: []string{patternsFile}}}, env.gopts)
rtest.Equals(t, `Fatal: --iinclude-file: invalid pattern(s) provided:
*[._]log[.-][0-9]
!*[._]log[.-][0-9]`, err.Error())
err = testRunRestoreAssumeFailure("latest", RestoreOptions{excludePatternOptions: excludePatternOptions{InsensitiveExcludeFiles: []string{patternsFile}}}, env.gopts)
err = testRunRestoreAssumeFailure("latest", RestoreOptions{ExcludePatternOptions: filter.ExcludePatternOptions{InsensitiveExcludeFiles: []string{patternsFile}}}, env.gopts)
rtest.Equals(t, `Fatal: --iexclude-file: invalid pattern(s) provided:
*[._]log[.-][0-9]
!*[._]log[.-][0-9]`, err.Error())


@ -13,17 +13,17 @@ import (
func (e *dirEntry) equals(out io.Writer, other *dirEntry) bool {
if e.path != other.path {
fmt.Fprintf(out, "%v: path does not match (%v != %v)\n", e.path, e.path, other.path)
_, _ = fmt.Fprintf(out, "%v: path does not match (%v != %v)\n", e.path, e.path, other.path)
return false
}
if e.fi.Mode() != other.fi.Mode() {
fmt.Fprintf(out, "%v: mode does not match (%v != %v)\n", e.path, e.fi.Mode(), other.fi.Mode())
_, _ = fmt.Fprintf(out, "%v: mode does not match (%v != %v)\n", e.path, e.fi.Mode(), other.fi.Mode())
return false
}
if !sameModTime(e.fi, other.fi) {
fmt.Fprintf(out, "%v: ModTime does not match (%v != %v)\n", e.path, e.fi.ModTime(), other.fi.ModTime())
_, _ = fmt.Fprintf(out, "%v: ModTime does not match (%v != %v)\n", e.path, e.fi.ModTime(), other.fi.ModTime())
return false
}
@ -31,17 +31,17 @@ func (e *dirEntry) equals(out io.Writer, other *dirEntry) bool {
stat2, _ := other.fi.Sys().(*syscall.Stat_t)
if stat.Uid != stat2.Uid {
fmt.Fprintf(out, "%v: UID does not match (%v != %v)\n", e.path, stat.Uid, stat2.Uid)
_, _ = fmt.Fprintf(out, "%v: UID does not match (%v != %v)\n", e.path, stat.Uid, stat2.Uid)
return false
}
if stat.Gid != stat2.Gid {
fmt.Fprintf(out, "%v: GID does not match (%v != %v)\n", e.path, stat.Gid, stat2.Gid)
_, _ = fmt.Fprintf(out, "%v: GID does not match (%v != %v)\n", e.path, stat.Gid, stat2.Gid)
return false
}
if stat.Nlink != stat2.Nlink {
fmt.Fprintf(out, "%v: Number of links do not match (%v != %v)\n", e.path, stat.Nlink, stat2.Nlink)
_, _ = fmt.Fprintf(out, "%v: Number of links do not match (%v != %v)\n", e.path, stat.Nlink, stat2.Nlink)
return false
}


@ -177,3 +177,47 @@ func TestFindListOnce(t *testing.T) {
// the snapshots can only be listed once; if both lists match, there has been only a single List() call
rtest.Equals(t, thirdSnapshot, snapshotIDs)
}
type failConfigOnceBackend struct {
backend.Backend
failedOnce bool
}
func (be *failConfigOnceBackend) Load(ctx context.Context, h backend.Handle,
length int, offset int64, fn func(rd io.Reader) error) error {
if !be.failedOnce && h.Type == restic.ConfigFile {
be.failedOnce = true
return fmt.Errorf("oops")
}
return be.Backend.Load(ctx, h, length, offset, fn)
}
func (be *failConfigOnceBackend) Stat(ctx context.Context, h backend.Handle) (backend.FileInfo, error) {
if !be.failedOnce && h.Type == restic.ConfigFile {
be.failedOnce = true
return backend.FileInfo{}, fmt.Errorf("oops")
}
return be.Backend.Stat(ctx, h)
}
func TestBackendRetryConfig(t *testing.T) {
env, cleanup := withTestEnvironment(t)
defer cleanup()
var wrappedBackend *failConfigOnceBackend
// cause config loading to fail once
env.gopts.backendInnerTestHook = func(r backend.Backend) (backend.Backend, error) {
wrappedBackend = &failConfigOnceBackend{Backend: r}
return wrappedBackend, nil
}
testSetupBackupData(t, env)
rtest.Assert(t, wrappedBackend != nil, "backend not wrapped on init")
rtest.Assert(t, wrappedBackend != nil && wrappedBackend.failedOnce, "config loading was not retried on init")
wrappedBackend = nil
testRunBackup(t, "", []string{filepath.Join(env.testdata, "0", "0", "9")}, BackupOptions{}, env.gopts)
rtest.Assert(t, wrappedBackend != nil, "backend not wrapped on backup")
rtest.Assert(t, wrappedBackend != nil && wrappedBackend.failedOnce, "config loading was not retried on backup")
}


@ -4,6 +4,7 @@ import (
"bufio"
"bytes"
"context"
"encoding/json"
"fmt"
"log"
"os"
@ -119,6 +120,30 @@ func tweakGoGC() {
}
}
func printExitError(code int, message string) {
if globalOptions.JSON {
type jsonExitError struct {
MessageType string `json:"message_type"` // exit_error
Code int `json:"code"`
Message string `json:"message"`
}
jsonS := jsonExitError{
MessageType: "exit_error",
Code: code,
Message: message,
}
err := json.NewEncoder(globalOptions.stderr).Encode(jsonS)
if err != nil {
Warnf("JSON encode failed: %v\n", err)
return
}
} else {
_, _ = fmt.Fprintf(globalOptions.stderr, "%v\n", message)
}
}
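As a hedged illustration of the two output paths (same-package sketch, hypothetical message):

func exampleExitError() {
	// with globalOptions.JSON set, this writes one JSON line to stderr:
	//   {"message_type":"exit_error","code":1,"message":"Fatal: wrong password"}
	// otherwise it prints just the plain message followed by a newline
	printExitError(1, "Fatal: wrong password")
}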
func main() {
tweakGoGC()
// install custom global logger into a buffer, if an error occurs
@ -127,10 +152,10 @@ func main() {
log.SetOutput(logBuffer)
err := feature.Flag.Apply(os.Getenv("RESTIC_FEATURES"), func(s string) {
fmt.Fprintln(os.Stderr, s)
_, _ = fmt.Fprintln(os.Stderr, s)
})
if err != nil {
fmt.Fprintln(os.Stderr, err)
_, _ = fmt.Fprintln(os.Stderr, err)
Exit(1)
}
@ -148,23 +173,24 @@ func main() {
err = nil
}
var exitMessage string
switch {
case restic.IsAlreadyLocked(err):
fmt.Fprintf(os.Stderr, "%v\nthe `unlock` command can be used to remove stale locks\n", err)
exitMessage = fmt.Sprintf("%v\nthe `unlock` command can be used to remove stale locks", err)
case err == ErrInvalidSourceData:
fmt.Fprintf(os.Stderr, "Warning: %v\n", err)
exitMessage = fmt.Sprintf("Warning: %v", err)
case errors.IsFatal(err):
fmt.Fprintf(os.Stderr, "%v\n", err)
exitMessage = err.Error()
case errors.Is(err, repository.ErrNoKeyFound):
fmt.Fprintf(os.Stderr, "Fatal: %v\n", err)
exitMessage = fmt.Sprintf("Fatal: %v", err)
case err != nil:
fmt.Fprintf(os.Stderr, "%+v\n", err)
exitMessage = fmt.Sprintf("%+v", err)
if logBuffer.Len() > 0 {
fmt.Fprintf(os.Stderr, "also, the following messages were logged by a library:\n")
exitMessage += "also, the following messages were logged by a library:\n"
sc := bufio.NewScanner(logBuffer)
for sc.Scan() {
fmt.Fprintln(os.Stderr, sc.Text())
exitMessage += fmt.Sprintln(sc.Text())
}
}
}
@ -186,5 +212,9 @@ func main() {
default:
exitCode = 1
}
if exitCode != 0 {
printExitError(exitCode, exitMessage)
}
Exit(exitCode)
}


@ -29,7 +29,7 @@ func calculateProgressInterval(show bool, json bool) time.Duration {
return interval
}
// newTerminalProgressMax returns a progress.Counter that prints to stdout or terminal if provided.
// newGenericProgressMax returns a progress.Counter that prints to stdout or terminal if provided.
func newGenericProgressMax(show bool, max uint64, description string, print func(status string, final bool)) *progress.Counter {
if !show {
return nil


@ -284,8 +284,7 @@ From Source
***********
restic is written in the Go programming language and you need at least
Go version 1.19. Building for Solaris requires at least Go version 1.20.
Building restic may also work with older versions of Go,
Go version 1.21. Building restic may also work with older versions of Go,
but that's not supported. See the `Getting
started <https://go.dev/doc/install>`__ guide of the Go project for
instructions how to install Go.


@ -314,9 +314,17 @@ this command.
S3-compatible Storage
*********************
For an S3-compatible server that is not Amazon, you can specify the URL to the server
For an S3-compatible storage service that is not Amazon, you can specify the URL to the server
like this: ``s3:https://server:port/bucket_name``.
You must also set credentials for authentication to the service.
.. code-block:: console
$ export AWS_ACCESS_KEY_ID=<YOUR-ACCESS-KEY-ID>
$ export AWS_SECRET_ACCESS_KEY=<YOUR-SECRET-ACCESS-KEY>
$ restic -r s3:https://server:port/bucket_name init
If needed, you can manually specify the region to use by either setting the
environment variable ``AWS_DEFAULT_REGION`` or calling restic with an option
parameter like ``-o s3.region="us-east-1"``. If the region is not specified,
@ -560,6 +568,10 @@ The number of concurrent connections to the Azure Blob Storage service can be se
``-o azure.connections=10`` switch. By default, at most five parallel connections are
established.
The access tier of the blobs uploaded to the Azure Blob Storage service can be set with the
``-o azure.access-tier=Cool`` switch. The allowed values are ``Hot``, ``Cool`` or ``Cold``.
If unspecified, the default is inferred from the default configured on the storage account.
Google Cloud Storage
********************


@ -214,7 +214,8 @@ The ``forget`` command accepts the following policy options:
run) and these snapshots will hence not be removed.
.. note:: If there are not enough snapshots to keep one for each duration related
``--keep-{within-,}*`` option, the oldest snapshot is kept additionally.
``--keep-{within-,}*`` option, the oldest snapshot is kept additionally and
marked as ``oldest`` in the output (e.g. ``oldest hourly snapshot``).
.. note:: Specifying ``--keep-tag ''`` will match untagged snapshots only.


@ -87,12 +87,33 @@ JSON output of most restic commands are documented here.
list of allowed values is documented and may be extended at any time.
Exit errors
-----------
Fatal errors will result in a final JSON message on ``stderr`` before the process exits.
It will hold the error message and the exit code.
.. note::
Some errors cannot be caught and reported this way,
such as Go runtime errors or command line parsing errors.
+----------------------+-------------------------------------------+
| ``message_type`` | Always "exit_error" |
+----------------------+-------------------------------------------+
| ``code`` | Exit code (see above chart) |
+----------------------+-------------------------------------------+
| ``message`` | Error message |
+----------------------+-------------------------------------------+
Output formats
--------------
Currently only the output on ``stdout`` is JSON formatted. Errors printed on ``stderr``
are still printed as plain text messages. The generated JSON output uses one of the
following two formats.
Commands print their main JSON output on ``stdout``.
The generated JSON output uses one of the following two formats.
.. note::
Not all messages and errors have been converted to JSON yet.
Feel free to submit a pull request!
Single JSON document
^^^^^^^^^^^^^^^^^^^^
@ -140,6 +161,8 @@ Status
Error
^^^^^
These errors are printed on ``stderr``.
+----------------------+-------------------------------------------+
| ``message_type`` | Always "error" |
+----------------------+-------------------------------------------+
@ -203,6 +226,10 @@ Summary is the last output line in a successful backup.
+---------------------------+---------------------------------------------------------+
| ``total_bytes_processed`` | Total number of bytes processed |
+---------------------------+---------------------------------------------------------+
| ``backup_start`` | Time at which the backup was started |
+---------------------------+---------------------------------------------------------+
| ``backup_end`` | Time at which the backup was completed |
+---------------------------+---------------------------------------------------------+
| ``total_duration`` | Total time it took for the operation to complete |
+---------------------------+---------------------------------------------------------+
| ``snapshot_id`` | ID of the new snapshot. Field is omitted if snapshot |
@ -536,6 +563,8 @@ Status
+----------------------+------------------------------------------------------------+
|``files_skipped`` | Files skipped due to overwrite setting |
+----------------------+------------------------------------------------------------+
|``files_deleted`` | Files deleted |
+----------------------+------------------------------------------------------------+
|``total_bytes`` | Total number of bytes in restore set |
+----------------------+------------------------------------------------------------+
|``bytes_restored`` | Number of bytes restored |
@ -546,6 +575,8 @@ Status
Error
^^^^^
These errors are printed on ``stderr``.
+----------------------+-------------------------------------------+
| ``message_type`` | Always "error" |
+----------------------+-------------------------------------------+
@ -586,6 +617,8 @@ Summary
+----------------------+------------------------------------------------------------+
|``files_skipped`` | Files skipped due to overwrite setting |
+----------------------+------------------------------------------------------------+
|``files_deleted`` | Files deleted |
+----------------------+------------------------------------------------------------+
|``total_bytes`` | Total number of bytes in restore set |
+----------------------+------------------------------------------------------------+
|``bytes_restored`` | Number of bytes restored |
@ -695,12 +728,14 @@ version
The version command returns a single JSON object.
+------------------+--------------------+
| ``message_type`` | Always "version"   |
+------------------+--------------------+
| ``version``      | restic version     |
+------------------+--------------------+
| ``go_version``   | Go compile version |
+------------------+--------------------+
| ``go_os``        | Go OS              |
+------------------+--------------------+
| ``go_arch``      | Go architecture    |
+------------------+--------------------+


@ -119,16 +119,11 @@ A local repository can be initialized with the ``restic init`` command, e.g.:
$ restic -r /tmp/restic-repo init
The local and sftp backends will auto-detect and accept all layouts described
in the following sections, so that remote repositories mounted locally e.g. via
fuse can be accessed. The layout auto-detection can be overridden by specifying
the option ``-o local.layout=default``, valid values are ``default`` and
``s3legacy``. The option for the sftp backend is named ``sftp.layout``, for the
s3 backend ``s3.layout``.
S3 Legacy Layout (deprecated)
-----------------------------
Restic 0.17 is the last version that supports the legacy layout.
Unfortunately during development the Amazon S3 backend uses slightly different
paths (directory names use singular instead of plural for ``key``,
``lock``, and ``snapshot`` files), and the pack files are stored directly below
@ -152,8 +147,6 @@ the ``data`` directory. The S3 Legacy repository layout looks like this:
/snapshot
└── 22a5af1bdc6e616f8a29579458c49627e01b32210d09adb288d1ecda7c5711ec
Restic 0.17 is the last version that supports the legacy layout.
Pack Format
===========

go.mod

@ -2,10 +2,11 @@ module github.com/restic/restic
require (
cloud.google.com/go/storage v1.43.0
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.12.0
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.7.0
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.3.2
github.com/Backblaze/blazer v0.6.1
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.16.0
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.8.0
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.5.0
github.com/Backblaze/blazer v0.7.1
github.com/Microsoft/go-winio v0.6.2
github.com/anacrolix/fuse v0.3.1
github.com/cenkalti/backoff/v4 v4.3.0
github.com/cespare/xxhash/v2 v2.3.0
@ -14,76 +15,71 @@ require (
github.com/google/go-cmp v0.6.0
github.com/hashicorp/golang-lru/v2 v2.0.7
github.com/klauspost/compress v1.17.9
github.com/minio/minio-go/v7 v7.0.66
github.com/minio/sha256-simd v1.0.1
github.com/ncw/swift/v2 v2.0.2
github.com/minio/minio-go/v7 v7.0.77
github.com/ncw/swift/v2 v2.0.3
github.com/peterbourgon/unixtransport v0.0.4
github.com/pkg/errors v0.9.1
github.com/pkg/profile v1.7.0
github.com/pkg/sftp v1.13.6
github.com/pkg/sftp v1.13.7
github.com/pkg/xattr v0.4.10
github.com/restic/chunker v0.4.0
github.com/spf13/cobra v1.8.1
github.com/spf13/pflag v1.0.5
go.uber.org/automaxprocs v1.5.3
golang.org/x/crypto v0.24.0
golang.org/x/net v0.26.0
golang.org/x/oauth2 v0.21.0
golang.org/x/sync v0.7.0
golang.org/x/sys v0.22.0
golang.org/x/term v0.22.0
golang.org/x/text v0.16.0
golang.org/x/time v0.5.0
google.golang.org/api v0.187.0
go.uber.org/automaxprocs v1.6.0
golang.org/x/crypto v0.28.0
golang.org/x/net v0.30.0
golang.org/x/oauth2 v0.23.0
golang.org/x/sync v0.9.0
golang.org/x/sys v0.27.0
golang.org/x/term v0.25.0
golang.org/x/text v0.20.0
golang.org/x/time v0.7.0
google.golang.org/api v0.204.0
)
require (
cloud.google.com/go v0.115.0 // indirect
cloud.google.com/go/auth v0.6.1 // indirect
cloud.google.com/go/auth/oauth2adapt v0.2.2 // indirect
cloud.google.com/go/compute/metadata v0.3.0 // indirect
cloud.google.com/go/iam v1.1.8 // indirect
github.com/Azure/azure-sdk-for-go/sdk/internal v1.9.0 // indirect
cloud.google.com/go v0.116.0 // indirect
cloud.google.com/go/auth v0.10.0 // indirect
cloud.google.com/go/auth/oauth2adapt v0.2.5 // indirect
cloud.google.com/go/compute/metadata v0.5.2 // indirect
cloud.google.com/go/iam v1.2.1 // indirect
github.com/Azure/azure-sdk-for-go/sdk/internal v1.10.0 // indirect
github.com/AzureAD/microsoft-authentication-library-for-go v1.2.2 // indirect
github.com/cpuguy83/go-md2man/v2 v2.0.4 // indirect
github.com/dustin/go-humanize v1.0.1 // indirect
github.com/felixge/fgprof v0.9.3 // indirect
github.com/felixge/httpsnoop v1.0.4 // indirect
github.com/go-logr/logr v1.4.1 // indirect
github.com/go-ini/ini v1.67.0 // indirect
github.com/go-logr/logr v1.4.2 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/goccy/go-json v0.10.3 // indirect
github.com/golang-jwt/jwt/v5 v5.2.1 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/golang/protobuf v1.5.4 // indirect
github.com/google/pprof v0.0.0-20230926050212-f7f687d19a98 // indirect
github.com/google/s2a-go v0.1.7 // indirect
github.com/google/s2a-go v0.1.8 // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/googleapis/enterprise-certificate-proxy v0.3.2 // indirect
github.com/googleapis/gax-go/v2 v2.12.5 // indirect
github.com/googleapis/enterprise-certificate-proxy v0.3.4 // indirect
github.com/googleapis/gax-go/v2 v2.13.0 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/klauspost/cpuid/v2 v2.2.6 // indirect
github.com/klauspost/cpuid/v2 v2.2.8 // indirect
github.com/kr/fs v0.1.0 // indirect
github.com/kylelemons/godebug v1.1.0 // indirect
github.com/minio/md5-simd v1.1.2 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c // indirect
github.com/rs/xid v1.5.0 // indirect
github.com/rs/xid v1.6.0 // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect
github.com/sirupsen/logrus v1.9.3 // indirect
go.opencensus.io v0.24.0 // indirect
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.49.0 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.49.0 // indirect
go.opentelemetry.io/otel v1.24.0 // indirect
go.opentelemetry.io/otel/metric v1.24.0 // indirect
go.opentelemetry.io/otel/trace v1.24.0 // indirect
google.golang.org/genproto v0.0.0-20240624140628-dc46fd24d27d // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20240617180043-68d350f18fd4 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20240624140628-dc46fd24d27d // indirect
google.golang.org/grpc v1.64.1 // indirect
google.golang.org/protobuf v1.34.2 // indirect
gopkg.in/ini.v1 v1.67.0 // indirect
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.54.0 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.54.0 // indirect
go.opentelemetry.io/otel v1.29.0 // indirect
go.opentelemetry.io/otel/metric v1.29.0 // indirect
go.opentelemetry.io/otel/trace v1.29.0 // indirect
google.golang.org/genproto v0.0.0-20241021214115-324edc3d5d38 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20241015192408-796eee8c2d53 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20241021214115-324edc3d5d38 // indirect
google.golang.org/grpc v1.67.1 // indirect
google.golang.org/protobuf v1.35.1 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
)
go 1.19
go 1.21

go.sum

@ -1,32 +1,40 @@
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.115.0 h1:CnFSK6Xo3lDYRoBKEcAtia6VSC837/ZkJuRduSFnr14=
cloud.google.com/go v0.115.0/go.mod h1:8jIM5vVgoAEoiVxQ/O4BFTfHqulPZgs/ufEzMcFMdWU=
cloud.google.com/go/auth v0.6.1 h1:T0Zw1XM5c1GlpN2HYr2s+m3vr1p2wy+8VN+Z1FKxW38=
cloud.google.com/go/auth v0.6.1/go.mod h1:eFHG7zDzbXHKmjJddFG/rBlcGp6t25SwRUiEQSlO4x4=
cloud.google.com/go/auth/oauth2adapt v0.2.2 h1:+TTV8aXpjeChS9M+aTtN/TjdQnzJvmzKFt//oWu7HX4=
cloud.google.com/go/auth/oauth2adapt v0.2.2/go.mod h1:wcYjgpZI9+Yu7LyYBg4pqSiaRkfEK3GQcpb7C/uyF1Q=
cloud.google.com/go/compute/metadata v0.3.0 h1:Tz+eQXMEqDIKRsmY3cHTL6FVaynIjX2QxYC4trgAKZc=
cloud.google.com/go/compute/metadata v0.3.0/go.mod h1:zFmK7XCadkQkj6TtorcaGlCW1hT1fIilQDwofLpJ20k=
cloud.google.com/go/iam v1.1.8 h1:r7umDwhj+BQyz0ScZMp4QrGXjSTI3ZINnpgU2nlB/K0=
cloud.google.com/go/iam v1.1.8/go.mod h1:GvE6lyMmfxXauzNq8NbgJbeVQNspG+tcdL/W8QO1+zE=
cloud.google.com/go/longrunning v0.5.7 h1:WLbHekDbjK1fVFD3ibpFFVoyizlLRl73I7YKuAKilhU=
cloud.google.com/go v0.116.0 h1:B3fRrSDkLRt5qSHWe40ERJvhvnQwdZiHu0bJOpldweE=
cloud.google.com/go v0.116.0/go.mod h1:cEPSRWPzZEswwdr9BxE6ChEn01dWlTaF05LiC2Xs70U=
cloud.google.com/go/auth v0.10.0 h1:tWlkvFAh+wwTOzXIjrwM64karR1iTBZ/GRr0S/DULYo=
cloud.google.com/go/auth v0.10.0/go.mod h1:xxA5AqpDrvS+Gkmo9RqrGGRh6WSNKKOXhY3zNOr38tI=
cloud.google.com/go/auth/oauth2adapt v0.2.5 h1:2p29+dePqsCHPP1bqDJcKj4qxRyYCcbzKpFyKGt3MTk=
cloud.google.com/go/auth/oauth2adapt v0.2.5/go.mod h1:AlmsELtlEBnaNTL7jCj8VQFLy6mbZv0s4Q7NGBeQ5E8=
cloud.google.com/go/compute/metadata v0.5.2 h1:UxK4uu/Tn+I3p2dYWTfiX4wva7aYlKixAHn3fyqngqo=
cloud.google.com/go/compute/metadata v0.5.2/go.mod h1:C66sj2AluDcIqakBq/M8lw8/ybHgOZqin2obFxa/E5k=
cloud.google.com/go/iam v1.2.1 h1:QFct02HRb7H12J/3utj0qf5tobFh9V4vR6h9eX5EBRU=
cloud.google.com/go/iam v1.2.1/go.mod h1:3VUIJDPpwT6p/amXRC5GY8fCCh70lxPygguVtI0Z4/g=
cloud.google.com/go/longrunning v0.6.1 h1:lOLTFxYpr8hcRtcwWir5ITh1PAKUD/sG2lKrTSYjyMc=
cloud.google.com/go/longrunning v0.6.1/go.mod h1:nHISoOZpBcmlwbJmiVk5oDRz0qG/ZxPynEGs1iZ79s0=
cloud.google.com/go/storage v1.43.0 h1:CcxnSohZwizt4LCzQHWvBf1/kvtHUn7gk9QERXPyXFs=
cloud.google.com/go/storage v1.43.0/go.mod h1:ajvxEa7WmZS1PxvKRq4bq0tFT3vMd502JwstCcYv0Q0=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.12.0 h1:1nGuui+4POelzDwI7RG56yfQJHCnKvwfMoU7VsEp+Zg=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.12.0/go.mod h1:99EvauvlcJ1U06amZiksfYz/3aFGyIhWGHVyiZXtBAI=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.7.0 h1:tfLQ34V6F7tVSwoTf/4lH5sE0o6eCJuNDTmH09nDpbc=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.7.0/go.mod h1:9kIvujWAA58nmPmWB1m23fyWic1kYZMxD9CxaWn4Qpg=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.9.0 h1:H+U3Gk9zY56G3u872L82bk4thcsy2Gghb9ExT4Zvm1o=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.9.0/go.mod h1:mgrmMSgaLp9hmax62XQTd0N4aAqSE5E0DulSpVYK7vc=
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.5.0 h1:AifHbc4mg0x9zW52WOpKbsHaDKuRhlI7TVl47thgQ70=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.3.2 h1:YUUxeiOWgdAQE3pXt2H7QXzZs0q8UBjgRbl56qo8GYM=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.3.2/go.mod h1:dmXQgZuiSubAecswZE+Sm8jkvEa7kQgTPVRvwL/nd0E=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.16.0 h1:JZg6HRh6W6U4OLl6lk7BZ7BLisIzM9dG1R50zUk9C/M=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.16.0/go.mod h1:YL1xnZ6QejvQHWJrX/AvhFl4WW4rqHVoKspWNVwFk0M=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.8.0 h1:B/dfvscEQtew9dVuoxqxrUKKv8Ih2f55PydknDamU+g=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.8.0/go.mod h1:fiPSssYvltE08HJchL04dOy+RD4hgrjph0cwGGMntdI=
github.com/Azure/azure-sdk-for-go/sdk/azidentity/cache v0.3.0 h1:+m0M/LFxN43KvULkDNfdXOgrjtg6UYJPFBJyuEcRCAw=
github.com/Azure/azure-sdk-for-go/sdk/azidentity/cache v0.3.0/go.mod h1:PwOyop78lveYMRs6oCxjiVyBdyCgIYH6XHIVZO9/SFQ=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.10.0 h1:ywEEhmNahHBihViHepv3xPBn1663uRv2t2q/ESv9seY=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.10.0/go.mod h1:iZDifYGJTIgIIkYRNWPENUnqx6bJ2xnSDFI2tjwZNuY=
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.6.0 h1:PiSrjRPpkQNjrM8H0WwKMnZUdu1RGMtd/LdGKUrOo+c=
github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/storage/armstorage v1.6.0/go.mod h1:oDrbWx4ewMylP7xHivfgixbfGBT6APAwsSoHRKotnIc=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.5.0 h1:mlmW46Q0B79I+Aj4azKC6xDMFN9a9SyZWESlGWYXbFs=
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.5.0/go.mod h1:PXe2h+LKcWTX9afWdZoHyODqR4fBa5boUM/8uJfZ0Jo=
github.com/AzureAD/microsoft-authentication-extensions-for-go/cache v0.1.1 h1:WJTmL004Abzc5wDB5VtZG2PJk5ndYDgVacGqfirKxjM=
github.com/AzureAD/microsoft-authentication-extensions-for-go/cache v0.1.1/go.mod h1:tCcJZ0uHAmvjsVYzEFivsRTN00oz5BEsRgQHu5JZ9WE=
github.com/AzureAD/microsoft-authentication-library-for-go v1.2.2 h1:XHOnouVk1mxXfQidrMEnLlPk9UMeRtyBTnEFtxkV0kU=
github.com/AzureAD/microsoft-authentication-library-for-go v1.2.2/go.mod h1:wP83P5OoQ5p6ip3ScPr0BAq0BvuPAvacpEuSzyouqAI=
github.com/Backblaze/blazer v0.6.1 h1:xC9HyC7OcxRzzmtfRiikIEvq4HZYWjU6caFwX2EXw1s=
github.com/Backblaze/blazer v0.6.1/go.mod h1:7/jrGx4O6OKOto6av+hLwelPR8rwZ+PLxQ5ZOiYAjwY=
github.com/Backblaze/blazer v0.7.1 h1:J43PbFj6hXLg1jvCNr+rQoAsxzKK0IP7ftl1ReCwpcQ=
github.com/Backblaze/blazer v0.7.1/go.mod h1:MhntL1nMpIuoqrPP6TnZu/xTydMgOAe/Xm6KongbjKs=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/Julusian/godocdown v0.0.0-20170816220326-6d19f8ff2df8/go.mod h1:INZr5t32rG59/5xeltqoCJoNY7e5x/3xoY9WSWVWg74=
github.com/Microsoft/go-winio v0.6.2 h1:F2VQgta7ecxGYO8k3ZZz3RS8fVIXVxONVUPlNERoyfY=
github.com/Microsoft/go-winio v0.6.2/go.mod h1:yd8OoFMLzJbo9gZq8j5qaps8bJ9aShtEA8Ipt1oGCvU=
github.com/anacrolix/envpprof v1.3.0 h1:WJt9bpuT7A/CDCxPOv/eeZqHWlle/Y0keJUvc6tcJDk=
github.com/anacrolix/envpprof v1.3.0/go.mod h1:7QIG4CaX1uexQ3tqd5+BRa/9e2D02Wcertl6Yh0jCB0=
github.com/anacrolix/fuse v0.3.1 h1:oT8s3B5HFkBdLe/WKJO5MNo9iIyEtc+BhvTZYp4jhDM=
@ -52,6 +60,8 @@ github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ3
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f h1:lO4WD4F/rVNCu3HqELle0jiPLLBs70cWOduZpkS1E78=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f/go.mod h1:cuUVRXasLTGF7a8hSLbxyZXjz+1KgoB3wDUb6vlszIc=
github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
github.com/dvyukov/go-fuzz v0.0.0-20220726122315-1d375ef9f9f6/go.mod h1:11Gm+ccJnvAhCNLlf5+cS9KjtbaD5I5zaZpFMsTHWTw=
@ -67,13 +77,17 @@ github.com/felixge/fgprof v0.9.3/go.mod h1:RdbpDgzqYVh/T9fPELJyV7EYJuHB55UTEULNu
github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=
github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
github.com/frankban/quicktest v1.14.4/go.mod h1:4ptaffx2x8+WTWXmUCuVU6aPUX1/Mz7zb5vbUoiM6w0=
github.com/go-ini/ini v1.67.0 h1:z6ZrTEZqSWOTyH2FlglNbNgARyHG8oLW9gMELqKr06A=
github.com/go-ini/ini v1.67.0/go.mod h1:ByCAeIL28uOIIG0E3PJtZPDL8WnHpFKFOtgjp+3Ies8=
github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/logr v1.4.1 h1:pKouT5E8xu9zeFC39JXRDukb6JFQPXM5p5I91188VAQ=
github.com/go-logr/logr v1.4.1/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/logr v1.4.2 h1:6pFjapn8bFcIbiKo3XT4j/BhANplGihG6tvd+8rYgrY=
github.com/go-logr/logr v1.4.2/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
github.com/go-ole/go-ole v1.3.0 h1:Dt6ye7+vXGIKZ7Xtk4s6/xVdGDQynvom7xCFEdWr6uE=
github.com/go-ole/go-ole v1.3.0/go.mod h1:5LS6F96DhAwUc7C+1HLexzMXY1xGRSryjyPPKW6zv78=
github.com/goccy/go-json v0.10.3 h1:KZ5WoDbxAIgm2HNbYckL0se1fHD6rz5j4ywS6ebzDqA=
github.com/goccy/go-json v0.10.3/go.mod h1:oq7eo15ShAhp70Anwd5lgX2pLfOS3QCiwU/PULtXL6M=
github.com/golang-jwt/jwt/v5 v5.2.1 h1:OuVbFODueb089Lh128TAcimifWaLhJwVflnrgM17wHk=
github.com/golang-jwt/jwt/v5 v5.2.1/go.mod h1:pqrtFR0X4osieyHYxtmOUWsAWrfe1Q5UVIyoH402zdk=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
@ -101,32 +115,32 @@ github.com/google/go-cmp v0.5.3/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/
github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/martian/v3 v3.3.3 h1:DIhPTQrbPkgs2yJYdXU/eNACCG5DVQjySNRNlflZ9Fc=
github.com/google/martian/v3 v3.3.3/go.mod h1:iEPrYcgCF7jA9OtScMFQyAlZZ4YXTKEtJ1E6RWzmBA0=
github.com/google/pprof v0.0.0-20211214055906-6f57359322fd/go.mod h1:KgnwoLYCZ8IQu3XUZ8Nc/bM9CCZFOyjUNOSygVozoDg=
github.com/google/pprof v0.0.0-20230926050212-f7f687d19a98 h1:pUa4ghanp6q4IJHwE9RwLgmVFfReJN+KbQ8ExNEUUoQ=
github.com/google/pprof v0.0.0-20230926050212-f7f687d19a98/go.mod h1:czg5+yv1E0ZGTi6S6vVK1mke0fV+FaUhNGcd6VRS9Ik=
github.com/google/s2a-go v0.1.7 h1:60BLSyTrOV4/haCDW4zb1guZItoSq8foHCXrAnjBo/o=
github.com/google/s2a-go v0.1.7/go.mod h1:50CgR4k1jNlWBu4UfS4AcfhVe1r6pdZPygJ3R8F0Qdw=
github.com/google/s2a-go v0.1.8 h1:zZDs9gcbt9ZPLV0ndSyQk6Kacx2g/X+SKYovpnz3SMM=
github.com/google/s2a-go v0.1.8/go.mod h1:6iNWHTpQ+nfNRN5E00MSdfDwVesa8hhS32PhPO8deJA=
github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/googleapis/enterprise-certificate-proxy v0.3.2 h1:Vie5ybvEvT75RniqhfFxPRy3Bf7vr3h0cechB90XaQs=
github.com/googleapis/enterprise-certificate-proxy v0.3.2/go.mod h1:VLSiSSBs/ksPL8kq3OBOQ6WRI2QnaFynd1DCjZ62+V0=
github.com/googleapis/gax-go/v2 v2.12.5 h1:8gw9KZK8TiVKB6q3zHY3SBzLnrGp6HQjyfYBYGmXdxA=
github.com/googleapis/gax-go/v2 v2.12.5/go.mod h1:BUDKcWo+RaKq5SC9vVYL0wLADa3VcfswbOMMRmB9H3E=
github.com/googleapis/enterprise-certificate-proxy v0.3.4 h1:XYIDZApgAnrN1c855gTgghdIA6Stxb52D5RnLI1SLyw=
github.com/googleapis/enterprise-certificate-proxy v0.3.4/go.mod h1:YKe7cfqYXjKGpGvmSg28/fFvhNzinZQm8DGnaburhGA=
github.com/googleapis/gax-go/v2 v2.13.0 h1:yitjD5f7jQHhyDsnhKEBU52NdvvdSeGzlAnDPT0hH1s=
github.com/googleapis/gax-go/v2 v2.13.0/go.mod h1:Z/fvTZXF8/uw7Xu5GuslPw+bplx6SS338j1Is2S+B7A=
github.com/hashicorp/golang-lru/v2 v2.0.7 h1:a+bsQ5rvGLjzHuww6tVxozPZFVghXaHOwFs4luLUK2k=
github.com/hashicorp/golang-lru/v2 v2.0.7/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM=
github.com/ianlancetaylor/demangle v0.0.0-20210905161508-09a460cdf81d/go.mod h1:aYm2/VgdVmcIU8iMfdMvDMsRAQjcfZSKFby6HOFvi/w=
github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=
github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
github.com/keybase/go-keychain v0.0.0-20231219164618-57a3676c3af6 h1:IsMZxCuZqKuao2vNdfD82fjjgPLfyHLpR41Z88viRWs=
github.com/keybase/go-keychain v0.0.0-20231219164618-57a3676c3af6/go.mod h1:3VeWNIJaW+O5xpRQbPp0Ybqu1vJd/pm7s2F473HRrkw=
github.com/klauspost/compress v1.17.9 h1:6KIumPrER1LHsvBVuDa0r5xaG0Es51mhhB9BQB2qeMA=
github.com/klauspost/compress v1.17.9/go.mod h1:Di0epgTjJY877eYKx5yC51cX2A2Vl2ibi7bDH9ttBbw=
github.com/klauspost/cpuid/v2 v2.0.1/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg=
github.com/klauspost/cpuid/v2 v2.2.6 h1:ndNyv040zDGIDh8thGkXYjnFtiN02M1PVVF+JE/48xc=
github.com/klauspost/cpuid/v2 v2.2.6/go.mod h1:Lcz8mBdAVJIBVzewtcLocK12l3Y+JytZYpaMropDUws=
github.com/klauspost/cpuid/v2 v2.2.8 h1:+StwCXwm9PdpiEkPyzBXIy+M9KUb4ODm0Zarf1kS5BM=
github.com/klauspost/cpuid/v2 v2.2.8/go.mod h1:Lcz8mBdAVJIBVzewtcLocK12l3Y+JytZYpaMropDUws=
github.com/kr/fs v0.1.0 h1:Jskdu9ieNAYnjxsi0LbQp1ulIKZV1LAFgK1tWhpZgl8=
github.com/kr/fs v0.1.0/go.mod h1:FFnZGqtBN9Gxj7eW1uZ42v5BccTP0vu6NEaFoC2HwRg=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
@ -142,17 +156,10 @@ github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+
github.com/miekg/dns v1.1.54/go.mod h1:uInx36IzPl7FYnDcMeVWxj9byh7DutNykX4G9Sj60FY=
github.com/minio/md5-simd v1.1.2 h1:Gdi1DZK69+ZVMoNHRXJyNcxrMA4dSxoYHZSQbirFg34=
github.com/minio/md5-simd v1.1.2/go.mod h1:MzdKDxYpY2BT9XQFocsiZf/NKVtR7nkE4RoEpN+20RM=
github.com/minio/minio-go/v7 v7.0.66 h1:bnTOXOHjOqv/gcMuiVbN9o2ngRItvqE774dG9nq0Dzw=
github.com/minio/minio-go/v7 v7.0.66/go.mod h1:DHAgmyQEGdW3Cif0UooKOyrT3Vxs82zNdV6tkKhRtbs=
github.com/minio/sha256-simd v1.0.1 h1:6kaan5IFmwTNynnKKpDHe6FWHohJOHhCPchzK49dzMM=
github.com/minio/sha256-simd v1.0.1/go.mod h1:Pz6AKMiUdngCLpeTL/RJY1M9rUuPMYujV5xJjtbRSN8=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M=
github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
github.com/ncw/swift/v2 v2.0.2 h1:jx282pcAKFhmoZBSdMcCRFn9VWkoBIRsCpe+yZq7vEk=
github.com/ncw/swift/v2 v2.0.2/go.mod h1:z0A9RVdYPjNjXVo2pDOPxZ4eu3oarO1P91fTItcb+Kg=
github.com/minio/minio-go/v7 v7.0.77 h1:GaGghJRg9nwDVlNbwYjSDJT1rqltQkBFDsypWX1v3Bw=
github.com/minio/minio-go/v7 v7.0.77/go.mod h1:AVM3IUN6WwKzmwBxVdjzhH8xq+f57JSbbvzqvUzR6eg=
github.com/ncw/swift/v2 v2.0.3 h1:8R9dmgFIWs+RiVlisCEfiQiik1hjuR0JnOkLxaP9ihg=
github.com/ncw/swift/v2 v2.0.3/go.mod h1:cbAO76/ZwcFrFlHdXPjaqWZ9R7Hdar7HpjRXBfbjigk=
github.com/oklog/run v1.1.0/go.mod h1:sVPdnTZT1zYwAJeCMu2Th4T21pA3FPOQRfWjQlk7DVU=
github.com/pelletier/go-toml v1.9.5/go.mod h1:u1nR/EPcESfeI/szUZKdtJ0xRNbUoANCkoOuaOx1Y+c=
github.com/peterbourgon/ff/v3 v3.3.1/go.mod h1:zjJVUhx+twciwfDl0zBcFzl4dW8axCRyXE/eKY9RztQ=
@ -165,14 +172,17 @@ github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/profile v1.7.0 h1:hnbDkaNWPCLMO9wGLdBFTIZvzDrDfBM2072E1S9gJkA=
github.com/pkg/profile v1.7.0/go.mod h1:8Uer0jas47ZQMJ7VD+OHknK4YDY07LPUC6dEvqDjvNo=
github.com/pkg/sftp v1.13.6 h1:JFZT4XbOU7l77xGSpOdW+pwIMqP044IyjXX6FGyEKFo=
github.com/pkg/sftp v1.13.6/go.mod h1:tz1ryNURKu77RL+GuCzmoJYxQczL3wLNNpPWagdg4Qk=
github.com/pkg/sftp v1.13.7 h1:uv+I3nNJvlKZIQGSr8JVQLNHFU9YhhNpvC14Y6KgmSM=
github.com/pkg/sftp v1.13.7/go.mod h1:KMKI0t3T6hfA+lTR/ssZdunHo+uwq7ghoN09/FSu3DY=
github.com/pkg/xattr v0.4.10 h1:Qe0mtiNFHQZ296vRgUjRCoPHPqH7VdTOrZx3g0T+pGA=
github.com/pkg/xattr v0.4.10/go.mod h1:di8WF84zAKk8jzR1UBTEWh9AUlIZZ7M/JNt8e9B6ktU=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prashantv/gostub v1.1.0 h1:BTyx3RfQjRHnUWaGF9oQos79AlQ5k8WNktv7VGvVH4g=
github.com/prashantv/gostub v1.1.0/go.mod h1:A5zLQHz7ieHGG7is6LLXLz7I8+3LZzsrV0P1IAHhP5U=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/redis/go-redis/v9 v9.6.1 h1:HHDteefn6ZkTtY5fGUE8tj8uy85AHk6zP7CpzIAM0y4=
github.com/redis/go-redis/v9 v9.6.1/go.mod h1:0C0c6ycQsdpVNQpxb1njEQIqkx5UcsM8FJCQLgE9+RA=
github.com/restic/chunker v0.4.0 h1:YUPYCUn70MYP7VO4yllypp2SjmsRhRJaad3xKu1QFRw=
github.com/restic/chunker v0.4.0/go.mod h1:z0cH2BejpW636LXw0R/BGyv+Ey8+m9QGiOanDHItzyw=
github.com/robertkrimen/godocdown v0.0.0-20130622164427-0bfa04905481/go.mod h1:C9WhFzY47SzYBIvzFqSvHIR6ROgDo4TtdTuRaOMjF/s=
@ -180,12 +190,11 @@ github.com/rogpeppe/go-internal v1.6.1/go.mod h1:xXDCJY+GAPziupqXw64V24skbSoqbTE
github.com/rogpeppe/go-internal v1.8.0/go.mod h1:WmiCO8CzOY8rg0OYDC4/i/2WRWAB6poM+XZ2dLUbcbE=
github.com/rogpeppe/go-internal v1.9.0/go.mod h1:WtVeX8xhTBvf0smdhujwtBcq4Qrzq/fJaraNFVN+nFs=
github.com/rogpeppe/go-internal v1.12.0 h1:exVL4IDcn6na9z1rAb56Vxr+CgyK3nn3O+epU5NdKM8=
github.com/rs/xid v1.5.0 h1:mKX4bl4iPYJtEIxp6CYiUuLQ/8DYMoz0PUdtGgMFRVc=
github.com/rs/xid v1.5.0/go.mod h1:trrq9SKmegXys3aeAKXMUTdJsYXVwGY3RLcfgqegfbg=
github.com/rogpeppe/go-internal v1.12.0/go.mod h1:E+RYuTGaKKdloAfM02xzb0FW3Paa99yedzYV+kq4uf4=
github.com/rs/xid v1.6.0 h1:fV591PaemRlL6JfRxGDEPl69wICngIQ3shQtzfy2gxU=
github.com/rs/xid v1.6.0/go.mod h1:7XoLgs4eV+QndskICGsho+ADou8ySMSjJKDIan90Nz0=
github.com/russross/blackfriday/v2 v2.1.0 h1:JIOH55/0cWyOuilr9/qlrm0BSXldqnqwMsf35Ld67mk=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ=
github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
github.com/spf13/cobra v1.8.1 h1:e5/vxKd/rZsfSJMUX1agtjeTDf+qv1/JdBF8gg5k9ZM=
github.com/spf13/cobra v1.8.1/go.mod h1:wHxEcudfqmLYa8iTfL+OuZPbBZkmvliBWKIezN3kD9Y=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
@ -200,32 +209,34 @@ github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.9.0 h1:HtqpIVDClZ4nwg75+f6Lvsy/wHu+3BoSGCbBAcpTsTg=
github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/tv42/httpunix v0.0.0-20191220191345-2ba4b9c3382c h1:u6SKchux2yDvFQnDHS3lPnIRmfVJ5Sxy3ao2SIdysLQ=
github.com/tv42/httpunix v0.0.0-20191220191345-2ba4b9c3382c/go.mod h1:hzIxponao9Kjc7aWznkXaL4U4TWaDSs8zcsY4Ka08nM=
github.com/yuin/goldmark v1.4.1/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
go.opencensus.io v0.24.0 h1:y73uSU6J157QMP2kn2r30vwW1A2W2WFwSCGnAVxeaD0=
go.opencensus.io v0.24.0/go.mod h1:vNK8G9p7aAivkbmorf4v+7Hgx+Zs0yY+0fOtgBfjQKo=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.49.0 h1:4Pp6oUg3+e/6M4C0A/3kJ2VYa++dsWVTtGgLVj5xtHg=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.49.0/go.mod h1:Mjt1i1INqiaoZOMGR1RIUJN+i3ChKoFRqzrRQhlkbs0=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.49.0 h1:jq9TW8u3so/bN+JPT166wjOI6/vQPF6Xe7nMNIltagk=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.49.0/go.mod h1:p8pYQP+m5XfbZm9fxtSKAbM6oIllS7s2AfxrChvc7iw=
go.opentelemetry.io/otel v1.24.0 h1:0LAOdjNmQeSTzGBzduGe/rU4tZhMwL5rWgtp9Ku5Jfo=
go.opentelemetry.io/otel v1.24.0/go.mod h1:W7b9Ozg4nkF5tWI5zsXkaKKDjdVjpD4oAt9Qi/MArHo=
go.opentelemetry.io/otel/metric v1.24.0 h1:6EhoGWWK28x1fbpA4tYTOWBkPefTDQnb8WSGXlc88kI=
go.opentelemetry.io/otel/metric v1.24.0/go.mod h1:VYhLe1rFfxuTXLgj4CBiyz+9WYBA8pNGJgDcSFRKBco=
go.opentelemetry.io/otel/sdk v1.24.0 h1:YMPPDNymmQN3ZgczicBY3B6sf9n62Dlj9pWD3ucgoDw=
go.opentelemetry.io/otel/trace v1.24.0 h1:CsKnnL4dUAr/0llH9FKuc698G04IrpWV0MQA/Y1YELI=
go.opentelemetry.io/otel/trace v1.24.0/go.mod h1:HPc3Xr/cOApsBI154IU0OI0HJexz+aw5uPdbs3UCjNU=
go.uber.org/automaxprocs v1.5.3 h1:kWazyxZUrS3Gs4qUpbwo5kEIMGe/DAvi5Z4tl2NW4j8=
go.uber.org/automaxprocs v1.5.3/go.mod h1:eRbA25aqJrxAbsLO0xy5jVwPt7FQnRgjW+efnwa1WM0=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.54.0 h1:r6I7RJCN86bpD/FQwedZ0vSixDpwuWREjW9oRMsmqDc=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.54.0/go.mod h1:B9yO6b04uB80CzjedvewuqDhxJxi11s7/GtiGa8bAjI=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.54.0 h1:TT4fX+nBOA/+LUkobKGW1ydGcn+G3vRw9+g5HwCphpk=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.54.0/go.mod h1:L7UH0GbB0p47T4Rri3uHjbpCFYrVrwc1I25QhNPiGK8=
go.opentelemetry.io/otel v1.29.0 h1:PdomN/Al4q/lN6iBJEN3AwPvUiHPMlt93c8bqTG5Llw=
go.opentelemetry.io/otel v1.29.0/go.mod h1:N/WtXPs1CNCUEx+Agz5uouwCba+i+bJGFicT8SR4NP8=
go.opentelemetry.io/otel/metric v1.29.0 h1:vPf/HFWTNkPu1aYeIsc98l4ktOQaL6LeSoeV2g+8YLc=
go.opentelemetry.io/otel/metric v1.29.0/go.mod h1:auu/QWieFVWx+DmQOUMgj0F8LHWdgalxXqvp7BII/W8=
go.opentelemetry.io/otel/sdk v1.29.0 h1:vkqKjk7gwhS8VaWb0POZKmIEDimRCMsopNYnriHyryo=
go.opentelemetry.io/otel/sdk v1.29.0/go.mod h1:pM8Dx5WKnvxLCb+8lG1PRNIDxu9g9b9g59Qr7hfAAok=
go.opentelemetry.io/otel/trace v1.29.0 h1:J/8ZNK4XgR7a21DZUAsbF8pZ5Jcw1VhACmnYt39JTi4=
go.opentelemetry.io/otel/trace v1.29.0/go.mod h1:eHl3w0sp3paPkYstJOmAimxhiFXPg+MMTlEh3nsQgWQ=
go.uber.org/automaxprocs v1.6.0 h1:O3y2/QNTOdbF+e/dpXNNW7Rx2hZ4sTIPyybbxyNqTUs=
go.uber.org/automaxprocs v1.6.0/go.mod h1:ifeIMSnPZuznNm6jmdzmU3/bfk01Fe2fotchwEFJ8r8=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.1.0/go.mod h1:RecgLatLF4+eUMCP1PoPZQb+cVrJcOPbHkTkbkB9sbw=
golang.org/x/crypto v0.24.0 h1:mnl8DM0o513X8fdIkmyFE/5hTYxbwYOjDS/+rK6qpRI=
golang.org/x/crypto v0.24.0/go.mod h1:Z1PMYSOR5nyMcyAVAIQSKCDwalqy85Aqn1x3Ws4L5DM=
golang.org/x/crypto v0.17.0/go.mod h1:gCAAfMLgwOJRpTjQ2zCCt2OcSfYMTeZVSRtQlPC7Nq4=
golang.org/x/crypto v0.28.0 h1:GBDwsMXVQi34v5CCYUm2jkJvu4cbtru2U4TN2PSyQnw=
golang.org/x/crypto v0.28.0/go.mod h1:rmgy+3RHxRZMyY0jjAJShp2zgEdOqj2AO7U0pYmeQ7U=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20220428152302-39d4317da171 h1:TfdoLivD44QwvssI9Sv1xwa5DcL5XQr4au4sZ2F2NV4=
golang.org/x/exp v0.0.0-20220428152302-39d4317da171/go.mod h1:lgLbSvA5ygNOMpwM/9anMpWVlVJ7Z+cHWq/eFuinpGE=
@ -236,6 +247,7 @@ golang.org/x/mod v0.5.1/go.mod h1:5OXOZSfqPIIbmVBIIKWRFfZjPR0E5r58TLhUjH0a2Ro=
golang.org/x/mod v0.6.0-dev.0.20211013180041-c96bc1413d57/go.mod h1:3p9vT2HGsQu2K1YbXdKPJLVgG5VJdoTa1poYQBtP1AY=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/mod v0.7.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
@ -246,21 +258,22 @@ golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwY
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20211015210444-4f30a5c0130f/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.1.0/go.mod h1:Cx3nUiGt4eDBEyega/BKRp+/AlGL8hYe7U9odMt2Cco=
golang.org/x/net v0.2.0/go.mod h1:KqCZLdyyvdV855qA2rE3GC2aiw5xGR5TEjj8smXukLY=
golang.org/x/net v0.26.0 h1:soB7SVo0PWrY4vPW/+ay0jKDNScG2X9wFeYlXIvJsOQ=
golang.org/x/net v0.26.0/go.mod h1:5YKkiSynbBIh3p6iOc/vibscux0x38BZDkn8sCUPxHE=
golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg=
golang.org/x/net v0.30.0 h1:AcW1SDZMkb8IpzCdQUaIq2sP4sZ4zw+55h6ynffypl4=
golang.org/x/net v0.30.0/go.mod h1:2wGyMJ5iFasEhkwi13ChkO/t1ECNC4X4eBKkVFyYFlU=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.21.0 h1:tsimM75w1tF/uws5rbeHzIWxEqElMehnc+iW793zsZs=
golang.org/x/oauth2 v0.21.0/go.mod h1:XYTD2NtWslqkgxebSiOHnXEap4TF09sJSc7H1sXbhtI=
golang.org/x/oauth2 v0.23.0 h1:PbgcYx2W7i4LvjJWEbf0ngHV6qJYr86PkAV3bXdLEbs=
golang.org/x/oauth2 v0.23.0/go.mod h1:XYTD2NtWslqkgxebSiOHnXEap4TF09sJSc7H1sXbhtI=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.7.0 h1:YsImfSBoP9QPYL0xyKJPq0gcaJdG3rInoqxTWbfQu9M=
golang.org/x/sync v0.7.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.9.0 h1:fEo0HyrW1GIgZdpbhCRO0PkJajUS5H9IFUztCgEo2jQ=
golang.org/x/sync v0.9.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@ -272,29 +285,35 @@ golang.org/x/sys v0.0.0-20211007075335-d3039528d8ac/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.0.0-20211019181941-9d821ace8654/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220408201424-a24fb2fb8a0f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.2.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.10.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.22.0 h1:RI27ohtqKCnwULzJLqkv897zojh5/DwS/ENaMzUOaWI=
golang.org/x/sys v0.22.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.15.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.27.0 h1:wBqf8DvsY9Y/2P8gAfPDEYNuS30J4lPHJxXSb/nJZ+s=
golang.org/x/sys v0.27.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.1.0/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.2.0/go.mod h1:TVmDHMZPmdnySmBfhjOoOdhjzdE1h4u1VwSiw2l1Nuc=
golang.org/x/term v0.22.0 h1:BbsgPEJULsl2fV/AT3v15Mjva5yXKQDyKf+TbDz7QJk=
golang.org/x/term v0.22.0/go.mod h1:F3qCibpT5AMpCRfhfT53vVJwhLtIVHhB9XDjfFvnMI4=
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo=
golang.org/x/term v0.15.0/go.mod h1:BDl952bC7+uMoWR75FIrCDx79TPU9oHkTZ9yRbYOrX0=
golang.org/x/term v0.25.0 h1:WtHI/ltw4NvSUig5KARz9h521QvRC8RmF/cuYqifU24=
golang.org/x/term v0.25.0/go.mod h1:RPyXicDX+6vLxogjjRxjgD2TKtmAO6NZBsBRfrOLu7M=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.16.0 h1:a94ExnEXNtEwYLGJSIUxnWoxoRz/ZcCsV63ROupILh4=
golang.org/x/text v0.16.0/go.mod h1:GhwF1Be+LQoKShO3cGOHzqOgRrGaYc9AvblQOmPVHnI=
golang.org/x/time v0.5.0 h1:o7cqy6amK/52YcAKIPlM3a+Fpj35zvRj2TP+e1xFSfk=
golang.org/x/time v0.5.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/text v0.20.0 h1:gK/Kv2otX8gz+wn7Rmb3vT96ZwuoxnQlY+HlJVj7Qug=
golang.org/x/text v0.20.0/go.mod h1:D4IsuqiFMhST5bX19pQ9ikHC2GsaKyk/oF+pn3ducp4=
golang.org/x/time v0.7.0 h1:ntUhktv3OPE6TgYxXWv9vKvUSJyIFJlyohwbkEwPrKQ=
golang.org/x/time v0.7.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
@ -304,30 +323,31 @@ golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtn
golang.org/x/tools v0.1.8-0.20211029000441-d6a9af8af023/go.mod h1:nABZi5QlRsZVlzPpHl034qft6wpY4eDcsTt5AaioBiU=
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
golang.org/x/tools v0.3.0/go.mod h1:/rWhSS2+zyEVwoJf8YAX6L2f0ntZ7Kn/mGgAWcipA5k=
golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/api v0.187.0 h1:Mxs7VATVC2v7CY+7Xwm4ndkX71hpElcvx0D1Ji/p1eo=
google.golang.org/api v0.187.0/go.mod h1:KIHlTc4x7N7gKKuVsdmfBXN13yEEWXWFURWY6SBp2gk=
google.golang.org/api v0.204.0 h1:3PjmQQEDkR/ENVZZwIYB4W/KzYtN8OrqnNcHWpeR8E4=
google.golang.org/api v0.204.0/go.mod h1:69y8QSoKIbL9F94bWgWAq6wGqGwyjBgi2y8rAK8zLag=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo=
google.golang.org/genproto v0.0.0-20240624140628-dc46fd24d27d h1:PksQg4dV6Sem3/HkBX+Ltq8T0ke0PKIRBNBatoDTVls=
google.golang.org/genproto v0.0.0-20240624140628-dc46fd24d27d/go.mod h1:s7iA721uChleev562UJO2OYB0PPT9CMFjV+Ce7VJH5M=
google.golang.org/genproto/googleapis/api v0.0.0-20240617180043-68d350f18fd4 h1:MuYw1wJzT+ZkybKfaOXKp5hJiZDn2iHaXRw0mRYdHSc=
google.golang.org/genproto/googleapis/api v0.0.0-20240617180043-68d350f18fd4/go.mod h1:px9SlOOZBg1wM1zdnr8jEL4CNGUBZ+ZKYtNPApNQc4c=
google.golang.org/genproto/googleapis/rpc v0.0.0-20240624140628-dc46fd24d27d h1:k3zyW3BYYR30e8v3x0bTDdE9vpYFjZHK+HcyqkrppWk=
google.golang.org/genproto/googleapis/rpc v0.0.0-20240624140628-dc46fd24d27d/go.mod h1:Ue6ibwXGpU+dqIcODieyLOcgj7z8+IcskoNIgZxtrFY=
google.golang.org/genproto v0.0.0-20241021214115-324edc3d5d38 h1:Q3nlH8iSQSRUwOskjbcSMcF2jiYMNiQYZ0c2KEJLKKU=
google.golang.org/genproto v0.0.0-20241021214115-324edc3d5d38/go.mod h1:xBI+tzfqGGN2JBeSebfKXFSdBpWVQ7sLW40PTupVRm4=
google.golang.org/genproto/googleapis/api v0.0.0-20241015192408-796eee8c2d53 h1:fVoAXEKA4+yufmbdVYv+SE73+cPZbbbe8paLsHfkK+U=
google.golang.org/genproto/googleapis/api v0.0.0-20241015192408-796eee8c2d53/go.mod h1:riSXTwQ4+nqmPGtobMFyW5FqVAmIs0St6VPp4Ug7CE4=
google.golang.org/genproto/googleapis/rpc v0.0.0-20241021214115-324edc3d5d38 h1:zciRKQ4kBpFgpfC5QQCVtnnNAcLIqweL7plyZRQHVpI=
google.golang.org/genproto/googleapis/rpc v0.0.0-20241021214115-324edc3d5d38/go.mod h1:GX3210XPVPUjJbTUbvwI8f2IpZDMZuPJWDzDuebbviI=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY=
google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.33.2/go.mod h1:JMHMWHQWaTccqQQlmk3MJZS+GWXOdAesneDmEnv2fbc=
google.golang.org/grpc v1.64.1 h1:LKtvyfbX3UGVPFcGqJ9ItpVWW6oN/2XqTxfAnwRRXiA=
google.golang.org/grpc v1.64.1/go.mod h1:hiQF4LFZelK2WKaP6W0L92zGHtiQdZxk8CrSdvyjeP0=
google.golang.org/grpc v1.67.1 h1:zWnc1Vrcno+lHZCOofnIMvycFcc0QRGIzm9dhnDX68E=
google.golang.org/grpc v1.67.1/go.mod h1:1gLDyUQU7CTLJI90u3nXZ9ekeghjeM7pTDZlqFNg2AA=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
@ -337,15 +357,14 @@ google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2
google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
google.golang.org/protobuf v1.34.2 h1:6xV6lTsCfpGD21XK49h7MhtcApnLqkfYgPcdHftf6hg=
google.golang.org/protobuf v1.34.2/go.mod h1:qYOHts0dSfpeUzUFpOMr/WGzszTmLH+DiWniOlNbLDw=
google.golang.org/protobuf v1.35.1 h1:m3LfL6/Ca+fqnjnlqQXNpFPABW1UD7mjh8KO2mKFytA=
google.golang.org/protobuf v1.35.1/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
gopkg.in/ini.v1 v1.67.0 h1:Dgnx+6+nfE+IfzjUEISNeydPJh9AXNNsWbGP9KzCsOA=
gopkg.in/ini.v1 v1.67.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=


@ -245,6 +245,7 @@ func buildTargets(sourceDir, outputDir string, targets map[string][]string) {
var defaultBuildTargets = map[string][]string{
"aix": {"ppc64"},
"darwin": {"amd64", "arm64"},
"dragonfly": {"amd64"},
"freebsd": {"386", "amd64", "arm"},
"linux": {"386", "amd64", "arm", "arm64", "ppc64le", "mips", "mipsle", "mips64", "mips64le", "riscv64", "s390x"},
"netbsd": {"386", "amd64"},


@ -25,7 +25,7 @@ type SelectByNameFunc func(item string) bool
// SelectFunc returns true for all items that should be included (files and
// dirs). If false is returned, files are ignored and dirs are not even walked.
type SelectFunc func(item string, fi os.FileInfo) bool
type SelectFunc func(item string, fi *fs.ExtendedFileInfo, fs fs.FS) bool
// ErrorFunc is called when an error during archiving occurs. When nil is
// returned, the archiver continues, otherwise it aborts and passes the error
@ -49,6 +49,8 @@ type ChangeStats struct {
}
type Summary struct {
BackupStart time.Time
BackupEnd time.Time
Files, Dirs ChangeStats
ProcessedBytes uint64
ItemStats
@ -64,6 +66,11 @@ func (s *ItemStats) Add(other ItemStats) {
s.TreeSizeInRepo += other.TreeSizeInRepo
}
// ToNoder returns a restic.Node for a File.
type ToNoder interface {
ToNode(ignoreXattrListError bool) (*restic.Node, error)
}
type archiverRepo interface {
restic.Loader
restic.BlobSaver
@ -75,6 +82,14 @@ type archiverRepo interface {
}
// Archiver saves a directory structure to the repo.
//
// An Archiver has a number of worker goroutines handling saving the different
// data structures to the repository; the details are implemented by the
// fileSaver, blobSaver, and treeSaver types.
//
// The main goroutine (the one calling Snapshot()) traverses the directory tree
// and delegates all work to these worker pools. They return a futureNode which
// can be resolved later by calling Wait() on it.
type Archiver struct {
Repo archiverRepo
SelectByName SelectByNameFunc
@ -82,9 +97,9 @@ type Archiver struct {
FS fs.FS
Options Options
blobSaver *BlobSaver
fileSaver *FileSaver
treeSaver *TreeSaver
blobSaver *blobSaver
fileSaver *fileSaver
treeSaver *treeSaver
mu sync.Mutex
summary *Summary
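
As a reader's aside (not part of this change): the worker-pool layout the Archiver comment above describes can be sketched roughly as follows. The errgroup-based pool and the jobs channel here are illustrative assumptions, modeled loosely on how runWorkers wires up the savers further down in this diff.

package main

import (
	"context"
	"fmt"

	"golang.org/x/sync/errgroup"
)

func main() {
	wg, ctx := errgroup.WithContext(context.Background())
	jobs := make(chan int)

	// worker pool, analogous to the blob/file/tree savers
	for i := 0; i < 4; i++ {
		wg.Go(func() error {
			for j := range jobs {
				_ = j // a real worker would save a blob, file, or tree here
			}
			return ctx.Err()
		})
	}

	// the main goroutine only delegates work to the pool
	for j := 0; j < 10; j++ {
		jobs <- j
	}
	close(jobs)
	fmt.Println(wg.Wait()) // <nil> on success
}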
@ -160,7 +175,7 @@ func (o Options) ApplyDefaults() Options {
if o.SaveTreeConcurrency == 0 {
// can either wait for a file, wait for a tree, serialize a tree or wait for saveblob
// the last two are cpu-bound and thus mutually exclusive.
// Also allow waiting for FileReadConcurrency files, this is the maximum of FutureFiles
// Also allow waiting for FileReadConcurrency files; this is the maximum number of files
// which can currently be in progress. The main backup loop blocks when trying to queue
// more files to read.
o.SaveTreeConcurrency = uint(runtime.GOMAXPROCS(0)) + o.ReadConcurrency
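
To make the default above concrete, a small example with hypothetical values: on an 8-core machine with ReadConcurrency set to 2, SaveTreeConcurrency defaults to 8 + 2 = 10.

package main

import (
	"fmt"
	"runtime"
)

func main() {
	readConcurrency := uint(2) // hypothetical ReadConcurrency value
	saveTreeConcurrency := uint(runtime.GOMAXPROCS(0)) + readConcurrency
	fmt.Println(saveTreeConcurrency) // prints 10 when GOMAXPROCS(0) == 8
}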
@ -170,12 +185,12 @@ func (o Options) ApplyDefaults() Options {
}
// New initializes a new archiver.
func New(repo archiverRepo, fs fs.FS, opts Options) *Archiver {
func New(repo archiverRepo, filesystem fs.FS, opts Options) *Archiver {
arch := &Archiver{
Repo: repo,
SelectByName: func(_ string) bool { return true },
Select: func(_ string, _ os.FileInfo) bool { return true },
FS: fs,
Select: func(_ string, _ *fs.ExtendedFileInfo, _ fs.FS) bool { return true },
FS: filesystem,
Options: opts.ApplyDefaults(),
CompleteItem: func(string, *restic.Node, *restic.Node, ItemStats, time.Duration) {},
@ -224,7 +239,7 @@ func (arch *Archiver) trackItem(item string, previous, current *restic.Node, s I
}
switch current.Type {
case "dir":
case restic.NodeTypeDir:
switch {
case previous == nil:
arch.summary.Dirs.New++
@ -234,7 +249,7 @@ func (arch *Archiver) trackItem(item string, previous, current *restic.Node, s I
arch.summary.Dirs.Changed++
}
case "file":
case restic.NodeTypeFile:
switch {
case previous == nil:
arch.summary.Files.New++
@ -247,14 +262,13 @@ func (arch *Archiver) trackItem(item string, previous, current *restic.Node, s I
}
// nodeFromFileInfo returns the restic node from an os.FileInfo.
func (arch *Archiver) nodeFromFileInfo(snPath, filename string, fi os.FileInfo, ignoreXattrListError bool) (*restic.Node, error) {
mappedFilename := arch.FS.MapFilename(filename)
node, err := restic.NodeFromFileInfo(mappedFilename, fi, ignoreXattrListError)
func (arch *Archiver) nodeFromFileInfo(snPath, filename string, meta ToNoder, ignoreXattrListError bool) (*restic.Node, error) {
node, err := meta.ToNode(ignoreXattrListError)
if !arch.WithAtime {
node.AccessTime = node.ModTime
}
if feature.Flag.Enabled(feature.DeviceIDForHardlinks) {
if node.Links == 1 || node.Type == "dir" {
if node.Links == 1 || node.Type == restic.NodeTypeDir {
// the DeviceID is only necessary for hardlinked files
// when using subvolumes or snapshots their deviceIDs tend to change which causes
// restic to upload new tree blobs
@ -264,7 +278,7 @@ func (arch *Archiver) nodeFromFileInfo(snPath, filename string, fi os.FileInfo,
// overwrite name to match that within the snapshot
node.Name = path.Base(snPath)
// do not filter error for nodes of irregular or invalid type
if node.Type != "irregular" && node.Type != "" && err != nil {
if node.Type != restic.NodeTypeIrregular && node.Type != restic.NodeTypeInvalid && err != nil {
err = fmt.Errorf("incomplete metadata for %v: %w", filename, err)
return node, arch.error(filename, err)
}
@ -274,7 +288,7 @@ func (arch *Archiver) nodeFromFileInfo(snPath, filename string, fi os.FileInfo,
// loadSubtree tries to load the subtree referenced by node. In case of an error, nil is returned.
// If there is no node to load, then nil is returned without an error.
func (arch *Archiver) loadSubtree(ctx context.Context, node *restic.Node) (*restic.Tree, error) {
if node == nil || node.Type != "dir" || node.Subtree == nil {
if node == nil || node.Type != restic.NodeTypeDir || node.Subtree == nil {
return nil, nil
}
@ -299,27 +313,21 @@ func (arch *Archiver) wrapLoadTreeError(id restic.ID, err error) error {
// saveDir stores a directory in the repo and returns the node. snPath is the
// path within the current snapshot.
func (arch *Archiver) saveDir(ctx context.Context, snPath string, dir string, fi os.FileInfo, previous *restic.Tree, complete CompleteFunc) (d FutureNode, err error) {
func (arch *Archiver) saveDir(ctx context.Context, snPath string, dir string, meta fs.File, previous *restic.Tree, complete fileCompleteFunc) (d futureNode, err error) {
debug.Log("%v %v", snPath, dir)
treeNode, err := arch.nodeFromFileInfo(snPath, dir, fi, false)
treeNode, names, err := arch.dirToNodeAndEntries(snPath, dir, meta)
if err != nil {
return FutureNode{}, err
return futureNode{}, err
}
names, err := fs.Readdirnames(arch.FS, dir, fs.O_NOFOLLOW)
if err != nil {
return FutureNode{}, err
}
sort.Strings(names)
nodes := make([]FutureNode, 0, len(names))
nodes := make([]futureNode, 0, len(names))
for _, name := range names {
// test if context has been cancelled
if ctx.Err() != nil {
debug.Log("context has been cancelled, aborting")
return FutureNode{}, ctx.Err()
return futureNode{}, ctx.Err()
}
pathname := arch.FS.Join(dir, name)
@ -335,7 +343,7 @@ func (arch *Archiver) saveDir(ctx context.Context, snPath string, dir string, fi
continue
}
return FutureNode{}, err
return futureNode{}, err
}
if excluded {
@ -350,11 +358,34 @@ func (arch *Archiver) saveDir(ctx context.Context, snPath string, dir string, fi
return fn, nil
}
// FutureNode holds a reference to a channel that returns a FutureNodeResult
func (arch *Archiver) dirToNodeAndEntries(snPath, dir string, meta fs.File) (node *restic.Node, names []string, err error) {
err = meta.MakeReadable()
if err != nil {
return nil, nil, fmt.Errorf("openfile for readdirnames failed: %w", err)
}
node, err = arch.nodeFromFileInfo(snPath, dir, meta, false)
if err != nil {
return nil, nil, err
}
if node.Type != restic.NodeTypeDir {
return nil, nil, fmt.Errorf("directory %q changed type, refusing to archive", snPath)
}
names, err = meta.Readdirnames(-1)
if err != nil {
return nil, nil, fmt.Errorf("readdirnames %v failed: %w", dir, err)
}
sort.Strings(names)
return node, names, nil
}
// futureNode holds a reference to a channel that returns a FutureNodeResult
// or a reference to an already existing result. If the result is available
// immediately, then storing a reference directly requires less memory than
// using the indirection via a channel.
type FutureNode struct {
type futureNode struct {
ch <-chan futureNodeResult
res *futureNodeResult
}
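
The memory trade-off described in the comment above can be shown in isolation. A minimal sketch, not restic's code: an already-resolved future stores its result inline, while a pending one is resolved by receiving from a one-element buffered channel, mirroring newFutureNode, newFutureNodeWithResult, and take below.

package main

import "fmt"

type result struct{ v int }

type future struct {
	ch  <-chan result
	res *result
}

func pending() (future, chan<- result) {
	ch := make(chan result, 1) // buffered: the producer never blocks
	return future{ch: ch}, ch
}

func resolved(r result) future {
	return future{res: &r} // no channel allocation needed
}

func (f *future) take() result {
	if f.res != nil {
		return *f.res // result was available immediately
	}
	return <-f.ch // wait for the worker to deliver the result
}

func main() {
	f1 := resolved(result{v: 1})
	f2, ch := pending()
	ch <- result{v: 2}
	fmt.Println(f1.take().v, f2.take().v) // 1 2
}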
@ -367,18 +398,18 @@ type futureNodeResult struct {
err error
}
func newFutureNode() (FutureNode, chan<- futureNodeResult) {
func newFutureNode() (futureNode, chan<- futureNodeResult) {
ch := make(chan futureNodeResult, 1)
return FutureNode{ch: ch}, ch
return futureNode{ch: ch}, ch
}
func newFutureNodeWithResult(res futureNodeResult) FutureNode {
return FutureNode{
func newFutureNodeWithResult(res futureNodeResult) futureNode {
return futureNode{
res: &res,
}
}
func (fn *FutureNode) take(ctx context.Context) futureNodeResult {
func (fn *futureNode) take(ctx context.Context) futureNodeResult {
if fn.res != nil {
res := fn.res
// free result
@ -417,38 +448,64 @@ func (arch *Archiver) allBlobsPresent(previous *restic.Node) bool {
// Errors and completion need to be handled by the caller.
//
// snPath is the path within the current snapshot.
func (arch *Archiver) save(ctx context.Context, snPath, target string, previous *restic.Node) (fn FutureNode, excluded bool, err error) {
func (arch *Archiver) save(ctx context.Context, snPath, target string, previous *restic.Node) (fn futureNode, excluded bool, err error) {
start := time.Now()
debug.Log("%v target %q, previous %v", snPath, target, previous)
abstarget, err := arch.FS.Abs(target)
if err != nil {
return FutureNode{}, false, err
return futureNode{}, false, err
}
filterError := func(err error) (futureNode, bool, error) {
err = arch.error(abstarget, err)
if err != nil {
return futureNode{}, false, errors.WithStack(err)
}
return futureNode{}, true, nil
}
filterNotExist := func(err error) error {
if errors.Is(err, os.ErrNotExist) {
return nil
}
return err
}
// exclude files by path before running Lstat to reduce number of lstat calls
if !arch.SelectByName(abstarget) {
debug.Log("%v is excluded by path", target)
return FutureNode{}, true, nil
return futureNode{}, true, nil
}
meta, err := arch.FS.OpenFile(target, fs.O_NOFOLLOW, true)
if err != nil {
debug.Log("open metadata for %v returned error: %v", target, err)
// ignore if file disappeared since it was returned by readdir
return filterError(filterNotExist(err))
}
closeFile := true
defer func() {
if closeFile {
cerr := meta.Close()
if err == nil {
err = cerr
}
}
}()
// get file info and run remaining select functions that require file information
fi, err := arch.FS.Lstat(target)
fi, err := meta.Stat()
if err != nil {
debug.Log("lstat() for %v returned error: %v", target, err)
err = arch.error(abstarget, err)
if err != nil {
return FutureNode{}, false, errors.WithStack(err)
// ignore if file disappeared since it was returned by readdir
return filterError(filterNotExist(err))
}
return FutureNode{}, true, nil
}
if !arch.Select(abstarget, fi) {
if !arch.Select(abstarget, fi, arch.FS) {
debug.Log("%v is excluded", target)
return FutureNode{}, true, nil
return futureNode{}, true, nil
}
switch {
case fs.IsRegularFile(fi):
case fi.Mode.IsRegular():
debug.Log(" %v regular file", target)
// check if the file has not changed before performing a fopen operation (more expensive, especially
@ -458,9 +515,9 @@ func (arch *Archiver) save(ctx context.Context, snPath, target string, previous
debug.Log("%v hasn't changed, using old list of blobs", target)
arch.trackItem(snPath, previous, previous, ItemStats{}, time.Since(start))
arch.CompleteBlob(previous.Size)
node, err := arch.nodeFromFileInfo(snPath, target, fi, false)
node, err := arch.nodeFromFileInfo(snPath, target, meta, false)
if err != nil {
return FutureNode{}, false, err
return futureNode{}, false, err
}
// copy list of blobs
@ -479,46 +536,34 @@ func (arch *Archiver) save(ctx context.Context, snPath, target string, previous
err := errors.Errorf("parts of %v not found in the repository index; storing the file again", target)
err = arch.error(abstarget, err)
if err != nil {
return FutureNode{}, false, err
return futureNode{}, false, err
}
}
// reopen file and do an fstat() on the open file to check it is still
// a file (and has not been exchanged for e.g. a symlink)
file, err := arch.FS.OpenFile(target, fs.O_RDONLY|fs.O_NOFOLLOW, 0)
err := meta.MakeReadable()
if err != nil {
debug.Log("Openfile() for %v returned error: %v", target, err)
err = arch.error(abstarget, err)
if err != nil {
return FutureNode{}, false, errors.WithStack(err)
}
return FutureNode{}, true, nil
debug.Log("MakeReadable() for %v returned error: %v", target, err)
return filterError(err)
}
fi, err = file.Stat()
fi, err := meta.Stat()
if err != nil {
debug.Log("stat() on opened file %v returned error: %v", target, err)
_ = file.Close()
err = arch.error(abstarget, err)
if err != nil {
return FutureNode{}, false, errors.WithStack(err)
}
return FutureNode{}, true, nil
return filterError(err)
}
// make sure it's still a file
if !fs.IsRegularFile(fi) {
err = errors.Errorf("file %v changed type, refusing to archive", fi.Name())
_ = file.Close()
err = arch.error(abstarget, err)
if err != nil {
return FutureNode{}, false, err
}
return FutureNode{}, true, nil
if !fi.Mode.IsRegular() {
err = errors.Errorf("file %q changed type, refusing to archive", target)
return filterError(err)
}
closeFile = false
// Save will close the file; we don't need to do that
fn = arch.fileSaver.Save(ctx, snPath, target, file, fi, func() {
fn = arch.fileSaver.Save(ctx, snPath, target, meta, func() {
arch.StartFile(snPath)
}, func() {
arch.trackItem(snPath, nil, nil, ItemStats{}, 0)
@ -526,7 +571,7 @@ func (arch *Archiver) save(ctx context.Context, snPath, target string, previous
arch.trackItem(snPath, previous, node, stats, time.Since(start))
})
case fi.IsDir():
case fi.Mode.IsDir():
debug.Log(" %v dir", target)
snItem := snPath + "/"
@ -535,28 +580,28 @@ func (arch *Archiver) save(ctx context.Context, snPath, target string, previous
err = arch.error(abstarget, err)
}
if err != nil {
return FutureNode{}, false, err
return futureNode{}, false, err
}
fn, err = arch.saveDir(ctx, snPath, target, fi, oldSubtree,
fn, err = arch.saveDir(ctx, snPath, target, meta, oldSubtree,
func(node *restic.Node, stats ItemStats) {
arch.trackItem(snItem, previous, node, stats, time.Since(start))
})
if err != nil {
debug.Log("SaveDir for %v returned error: %v", snPath, err)
return FutureNode{}, false, err
return futureNode{}, false, err
}
case fi.Mode()&os.ModeSocket > 0:
case fi.Mode&os.ModeSocket > 0:
debug.Log(" %v is a socket, ignoring", target)
return FutureNode{}, true, nil
return futureNode{}, true, nil
default:
debug.Log(" %v other", target)
node, err := arch.nodeFromFileInfo(snPath, target, fi, false)
node, err := arch.nodeFromFileInfo(snPath, target, meta, false)
if err != nil {
return FutureNode{}, false, err
return futureNode{}, false, err
}
fn = newFutureNodeWithResult(futureNodeResult{
snPath: snPath,
@ -573,27 +618,26 @@ func (arch *Archiver) save(ctx context.Context, snPath, target string, previous
// fileChanged tries to detect whether a file's content has changed compared
// to the contents of node, which describes the same path in the parent backup.
// It should only be run for regular files.
func fileChanged(fi os.FileInfo, node *restic.Node, ignoreFlags uint) bool {
func fileChanged(fi *fs.ExtendedFileInfo, node *restic.Node, ignoreFlags uint) bool {
switch {
case node == nil:
return true
case node.Type != "file":
case node.Type != restic.NodeTypeFile:
// We're only called for regular files, so this is a type change.
return true
case uint64(fi.Size()) != node.Size:
case uint64(fi.Size) != node.Size:
return true
case !fi.ModTime().Equal(node.ModTime):
case !fi.ModTime.Equal(node.ModTime):
return true
}
checkCtime := ignoreFlags&ChangeIgnoreCtime == 0
checkInode := ignoreFlags&ChangeIgnoreInode == 0
extFI := fs.ExtendedStat(fi)
switch {
case checkCtime && !extFI.ChangeTime.Equal(node.ChangeTime):
case checkCtime && !fi.ChangeTime.Equal(node.ChangeTime):
return true
case checkInode && node.Inode != extFI.Inode:
case checkInode && node.Inode != fi.Inode:
return true
}
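
A hypothetical call site for fileChanged, assuming the ChangeIgnoreCtime and ChangeIgnoreInode flags referenced above; a sketch within the archiver package, not part of this diff:

// sketch: skip the ctime and inode checks, e.g. for filesystems with
// unstable inode numbers, leaving only the size and mtime comparison
func contentLikelyChanged(fi *fs.ExtendedFileInfo, node *restic.Node) bool {
	return fileChanged(fi, node, ChangeIgnoreCtime|ChangeIgnoreInode)
}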
@ -605,43 +649,20 @@ func join(elem ...string) string {
return path.Join(elem...)
}
// statDir returns the file info for the directory. Symbolic links are
// resolved. If the target directory is not a directory, an error is returned.
func (arch *Archiver) statDir(dir string) (os.FileInfo, error) {
fi, err := arch.FS.Stat(dir)
if err != nil {
return nil, errors.WithStack(err)
}
tpe := fi.Mode() & (os.ModeType | os.ModeCharDevice)
if tpe != os.ModeDir {
return fi, errors.Errorf("path is not a directory: %v", dir)
}
return fi, nil
}
// saveTree stores a Tree in the repo, returned is the tree. snPath is the path
// within the current snapshot.
func (arch *Archiver) saveTree(ctx context.Context, snPath string, atree *Tree, previous *restic.Tree, complete CompleteFunc) (FutureNode, int, error) {
func (arch *Archiver) saveTree(ctx context.Context, snPath string, atree *tree, previous *restic.Tree, complete fileCompleteFunc) (futureNode, int, error) {
var node *restic.Node
if snPath != "/" {
if atree.FileInfoPath == "" {
return FutureNode{}, 0, errors.Errorf("FileInfoPath for %v is empty", snPath)
return futureNode{}, 0, errors.Errorf("FileInfoPath for %v is empty", snPath)
}
fi, err := arch.statDir(atree.FileInfoPath)
var err error
node, err = arch.dirPathToNode(snPath, atree.FileInfoPath)
if err != nil {
return FutureNode{}, 0, err
}
debug.Log("%v, dir node data loaded from %v", snPath, atree.FileInfoPath)
// in some cases reading xattrs for directories above the backup source is not allowed
// thus ignore errors for such folders.
node, err = arch.nodeFromFileInfo(snPath, atree.FileInfoPath, fi, true)
if err != nil {
return FutureNode{}, 0, err
return futureNode{}, 0, err
}
} else {
// fake root node
@ -650,7 +671,7 @@ func (arch *Archiver) saveTree(ctx context.Context, snPath string, atree *Tree,
debug.Log("%v (%v nodes), parent %v", snPath, len(atree.Nodes), previous)
nodeNames := atree.NodeNames()
nodes := make([]FutureNode, 0, len(nodeNames))
nodes := make([]futureNode, 0, len(nodeNames))
// iterate over the nodes of atree in lexicographic (=deterministic) order
for _, name := range nodeNames {
@ -658,7 +679,7 @@ func (arch *Archiver) saveTree(ctx context.Context, snPath string, atree *Tree,
// test if context has been cancelled
if ctx.Err() != nil {
return FutureNode{}, 0, ctx.Err()
return futureNode{}, 0, ctx.Err()
}
// this is a leaf node
@ -671,11 +692,11 @@ func (arch *Archiver) saveTree(ctx context.Context, snPath string, atree *Tree,
// ignore error
continue
}
return FutureNode{}, 0, err
return futureNode{}, 0, err
}
if err != nil {
return FutureNode{}, 0, err
return futureNode{}, 0, err
}
if !excluded {
@ -693,7 +714,7 @@ func (arch *Archiver) saveTree(ctx context.Context, snPath string, atree *Tree,
err = arch.error(join(snPath, name), err)
}
if err != nil {
return FutureNode{}, 0, err
return futureNode{}, 0, err
}
// not a leaf node, archive subtree
@ -701,7 +722,7 @@ func (arch *Archiver) saveTree(ctx context.Context, snPath string, atree *Tree,
arch.trackItem(snItem, oldNode, n, is, time.Since(start))
})
if err != nil {
return FutureNode{}, 0, err
return futureNode{}, 0, err
}
nodes = append(nodes, fn)
}
@ -710,6 +731,31 @@ func (arch *Archiver) saveTree(ctx context.Context, snPath string, atree *Tree,
return fn, len(nodes), nil
}
func (arch *Archiver) dirPathToNode(snPath, target string) (node *restic.Node, err error) {
meta, err := arch.FS.OpenFile(target, 0, true)
if err != nil {
return nil, err
}
defer func() {
cerr := meta.Close()
if err == nil {
err = cerr
}
}()
debug.Log("%v, reading dir node data from %v", snPath, target)
// in some cases reading xattrs for directories above the backup source is not allowed
// thus ignore errors for such folders.
node, err = arch.nodeFromFileInfo(snPath, target, meta, true)
if err != nil {
return nil, err
}
if node.Type != restic.NodeTypeDir {
return nil, errors.Errorf("path is not a directory: %v", target)
}
return node, err
}
// resolveRelativeTargets replaces targets that only contain relative
// directories ("." or "../../") with the contents of the directory. Each
// element of target is processed with fs.Clean().
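
A hedged illustration of the expansion described above, with hypothetical directory contents:

// with a working directory containing the entries "a" and "b":
//   resolveRelativeTargets(filesystem, []string{"."})         -> []string{"a", "b"}
//   resolveRelativeTargets(filesystem, []string{"../x", "y"}) -> []string{"../x", "y"} (cleaned, otherwise unchanged)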
@ -781,16 +827,16 @@ func (arch *Archiver) loadParentTree(ctx context.Context, sn *restic.Snapshot) *
// runWorkers starts the worker pools, which are stopped when the context is cancelled.
func (arch *Archiver) runWorkers(ctx context.Context, wg *errgroup.Group) {
arch.blobSaver = NewBlobSaver(ctx, wg, arch.Repo, arch.Options.SaveBlobConcurrency)
arch.blobSaver = newBlobSaver(ctx, wg, arch.Repo, arch.Options.SaveBlobConcurrency)
arch.fileSaver = NewFileSaver(ctx, wg,
arch.fileSaver = newFileSaver(ctx, wg,
arch.blobSaver.Save,
arch.Repo.Config().ChunkerPolynomial,
arch.Options.ReadConcurrency, arch.Options.SaveBlobConcurrency)
arch.fileSaver.CompleteBlob = arch.CompleteBlob
arch.fileSaver.NodeFromFileInfo = arch.nodeFromFileInfo
arch.treeSaver = NewTreeSaver(ctx, wg, arch.Options.SaveTreeConcurrency, arch.blobSaver.Save, arch.Error)
arch.treeSaver = newTreeSaver(ctx, wg, arch.Options.SaveTreeConcurrency, arch.blobSaver.Save, arch.Error)
}
func (arch *Archiver) stopWorkers() {
@ -804,14 +850,16 @@ func (arch *Archiver) stopWorkers() {
// Snapshot saves several targets and returns a snapshot.
func (arch *Archiver) Snapshot(ctx context.Context, targets []string, opts SnapshotOptions) (*restic.Snapshot, restic.ID, *Summary, error) {
arch.summary = &Summary{}
arch.summary = &Summary{
BackupStart: opts.BackupStart,
}
cleanTargets, err := resolveRelativeTargets(arch.FS, targets)
if err != nil {
return nil, restic.ID{}, nil, err
}
atree, err := NewTree(arch.FS, cleanTargets)
atree, err := newTree(arch.FS, cleanTargets)
if err != nil {
return nil, restic.ID{}, nil, err
}
@ -887,9 +935,10 @@ func (arch *Archiver) Snapshot(ctx context.Context, targets []string, opts Snaps
sn.Parent = opts.ParentSnapshot.ID()
}
sn.Tree = &rootTreeID
arch.summary.BackupEnd = time.Now()
sn.Summary = &restic.SnapshotSummary{
BackupStart: opts.BackupStart,
BackupEnd: time.Now(),
BackupStart: arch.summary.BackupStart,
BackupEnd: arch.summary.BackupEnd,
FilesNew: arch.summary.Files.New,
FilesChanged: arch.summary.Files.Changed,


@ -76,17 +76,12 @@ func saveFile(t testing.TB, repo archiverRepo, filename string, filesystem fs.FS
startCallback = true
}
file, err := arch.FS.OpenFile(filename, fs.O_RDONLY|fs.O_NOFOLLOW, 0)
file, err := arch.FS.OpenFile(filename, fs.O_NOFOLLOW, false)
if err != nil {
t.Fatal(err)
}
fi, err := file.Stat()
if err != nil {
t.Fatal(err)
}
res := arch.fileSaver.Save(ctx, "/", filename, file, fi, start, completeReading, complete)
res := arch.fileSaver.Save(ctx, "/", filename, file, start, completeReading, complete)
fnr := res.take(ctx)
if fnr.err != nil {
@ -521,13 +516,13 @@ func chmodTwice(t testing.TB, name string) {
rtest.OK(t, err)
}
func lstat(t testing.TB, name string) os.FileInfo {
func lstat(t testing.TB, name string) *fs.ExtendedFileInfo {
fi, err := os.Lstat(name)
if err != nil {
t.Fatal(err)
}
return fi
return fs.ExtendedStat(fi)
}
func setTimestamp(t testing.TB, filename string, atime, mtime time.Time) {
@ -556,11 +551,12 @@ func rename(t testing.TB, oldname, newname string) {
}
}
func nodeFromFI(t testing.TB, filename string, fi os.FileInfo) *restic.Node {
node, err := restic.NodeFromFileInfo(filename, fi, false)
if err != nil {
t.Fatal(err)
}
func nodeFromFile(t testing.TB, localFs fs.FS, filename string) *restic.Node {
meta, err := localFs.OpenFile(filename, fs.O_NOFOLLOW, true)
rtest.OK(t, err)
node, err := meta.ToNode(false)
rtest.OK(t, err)
rtest.OK(t, meta.Close())
return node
}
@ -664,7 +660,7 @@ func TestFileChanged(t *testing.T) {
rename(t, filename, tempname)
save(t, filename, defaultContent)
remove(t, tempname)
setTimestamp(t, filename, fi.ModTime(), fi.ModTime())
setTimestamp(t, filename, fi.ModTime, fi.ModTime)
},
ChangeIgnore: ChangeIgnoreCtime | ChangeIgnoreInode,
SameFile: true,
@ -686,8 +682,10 @@ func TestFileChanged(t *testing.T) {
}
save(t, filename, content)
fiBefore := lstat(t, filename)
node := nodeFromFI(t, filename, fiBefore)
fs := &fs.Local{}
fiBefore, err := fs.Lstat(filename)
rtest.OK(t, err)
node := nodeFromFile(t, fs, filename)
if fileChanged(fiBefore, node, 0) {
t.Fatalf("unchanged file detected as changed")
@ -728,8 +726,8 @@ func TestFilChangedSpecialCases(t *testing.T) {
t.Run("type-change", func(t *testing.T) {
fi := lstat(t, filename)
node := nodeFromFI(t, filename, fi)
node.Type = "symlink"
node := nodeFromFile(t, &fs.Local{}, filename)
node.Type = restic.NodeTypeSymlink
if !fileChanged(fi, node, 0) {
t.Fatal("node with changed type detected as unchanged")
}
@ -833,7 +831,8 @@ func TestArchiverSaveDir(t *testing.T) {
wg, ctx := errgroup.WithContext(context.Background())
repo.StartPackUploader(ctx, wg)
arch := New(repo, fs.Track{FS: fs.Local{}}, Options{})
testFS := fs.Track{FS: fs.Local{}}
arch := New(repo, testFS, Options{})
arch.runWorkers(ctx, wg)
arch.summary = &Summary{}
@ -845,15 +844,11 @@ func TestArchiverSaveDir(t *testing.T) {
back := rtest.Chdir(t, chdir)
defer back()
fi, err := fs.Lstat(test.target)
if err != nil {
t.Fatal(err)
}
ft, err := arch.saveDir(ctx, "/", test.target, fi, nil, nil)
if err != nil {
t.Fatal(err)
}
meta, err := testFS.OpenFile(test.target, fs.O_NOFOLLOW, true)
rtest.OK(t, err)
ft, err := arch.saveDir(ctx, "/", test.target, meta, nil, nil)
rtest.OK(t, err)
rtest.OK(t, meta.Close())
fnr := ft.take(ctx)
node, stats := fnr.node, fnr.stats
@ -915,19 +910,16 @@ func TestArchiverSaveDirIncremental(t *testing.T) {
wg, ctx := errgroup.WithContext(context.TODO())
repo.StartPackUploader(ctx, wg)
arch := New(repo, fs.Track{FS: fs.Local{}}, Options{})
testFS := fs.Track{FS: fs.Local{}}
arch := New(repo, testFS, Options{})
arch.runWorkers(ctx, wg)
arch.summary = &Summary{}
fi, err := fs.Lstat(tempdir)
if err != nil {
t.Fatal(err)
}
ft, err := arch.saveDir(ctx, "/", tempdir, fi, nil, nil)
if err != nil {
t.Fatal(err)
}
meta, err := testFS.OpenFile(tempdir, fs.O_NOFOLLOW, true)
rtest.OK(t, err)
ft, err := arch.saveDir(ctx, "/", tempdir, meta, nil, nil)
rtest.OK(t, err)
rtest.OK(t, meta.Close())
fnr := ft.take(ctx)
node, stats := fnr.node, fnr.stats
@ -1121,7 +1113,7 @@ func TestArchiverSaveTree(t *testing.T) {
test.prepare(t)
}
atree, err := NewTree(testFS, test.targets)
atree, err := newTree(testFS, test.targets)
if err != nil {
t.Fatal(err)
}
@ -1529,7 +1521,7 @@ func TestArchiverSnapshotSelect(t *testing.T) {
},
"other": TestFile{Content: "another file"},
},
selFn: func(item string, fi os.FileInfo) bool {
selFn: func(item string, fi *fs.ExtendedFileInfo, _ fs.FS) bool {
return true
},
},
@ -1546,7 +1538,7 @@ func TestArchiverSnapshotSelect(t *testing.T) {
},
"other": TestFile{Content: "another file"},
},
selFn: func(item string, fi os.FileInfo) bool {
selFn: func(item string, fi *fs.ExtendedFileInfo, _ fs.FS) bool {
return false
},
err: "snapshot is empty",
@ -1573,7 +1565,7 @@ func TestArchiverSnapshotSelect(t *testing.T) {
},
"other": TestFile{Content: "another file"},
},
selFn: func(item string, fi os.FileInfo) bool {
selFn: func(item string, fi *fs.ExtendedFileInfo, _ fs.FS) bool {
return filepath.Ext(item) != ".txt"
},
},
@ -1597,8 +1589,8 @@ func TestArchiverSnapshotSelect(t *testing.T) {
},
"other": TestFile{Content: "another file"},
},
selFn: func(item string, fi os.FileInfo) bool {
return filepath.Base(item) != "subdir"
selFn: func(item string, fi *fs.ExtendedFileInfo, fs fs.FS) bool {
return fs.Base(item) != "subdir"
},
},
{
@ -1606,8 +1598,8 @@ func TestArchiverSnapshotSelect(t *testing.T) {
src: TestDir{
"foo": TestFile{Content: "foo"},
},
selFn: func(item string, fi os.FileInfo) bool {
return filepath.IsAbs(item)
selFn: func(item string, fi *fs.ExtendedFileInfo, fs fs.FS) bool {
return fs.IsAbs(item)
},
},
}
@ -1664,17 +1656,8 @@ type MockFS struct {
bytesRead map[string]int // tracks bytes read from all opened files
}
func (m *MockFS) Open(name string) (fs.File, error) {
f, err := m.FS.Open(name)
if err != nil {
return f, err
}
return MockFile{File: f, fs: m, filename: name}, nil
}
func (m *MockFS) OpenFile(name string, flag int, perm os.FileMode) (fs.File, error) {
f, err := m.FS.OpenFile(name, flag, perm)
func (m *MockFS) OpenFile(name string, flag int, metadataOnly bool) (fs.File, error) {
f, err := m.FS.OpenFile(name, flag, metadataOnly)
if err != nil {
return f, err
}
@ -1700,14 +1683,17 @@ func (f MockFile) Read(p []byte) (int, error) {
}
func checkSnapshotStats(t *testing.T, sn *restic.Snapshot, stat Summary) {
rtest.Equals(t, stat.Files.New, sn.Summary.FilesNew)
rtest.Equals(t, stat.Files.Changed, sn.Summary.FilesChanged)
rtest.Equals(t, stat.Files.Unchanged, sn.Summary.FilesUnmodified)
rtest.Equals(t, stat.Dirs.New, sn.Summary.DirsNew)
rtest.Equals(t, stat.Dirs.Changed, sn.Summary.DirsChanged)
rtest.Equals(t, stat.Dirs.Unchanged, sn.Summary.DirsUnmodified)
rtest.Equals(t, stat.ProcessedBytes, sn.Summary.TotalBytesProcessed)
rtest.Equals(t, stat.Files.New+stat.Files.Changed+stat.Files.Unchanged, sn.Summary.TotalFilesProcessed)
t.Helper()
rtest.Equals(t, stat.BackupStart, sn.Summary.BackupStart, "BackupStart")
// BackupEnd is set to time.Now() and can't be compared to a fixed value
rtest.Equals(t, stat.Files.New, sn.Summary.FilesNew, "FilesNew")
rtest.Equals(t, stat.Files.Changed, sn.Summary.FilesChanged, "FilesChanged")
rtest.Equals(t, stat.Files.Unchanged, sn.Summary.FilesUnmodified, "FilesUnmodified")
rtest.Equals(t, stat.Dirs.New, sn.Summary.DirsNew, "DirsNew")
rtest.Equals(t, stat.Dirs.Changed, sn.Summary.DirsChanged, "DirsChanged")
rtest.Equals(t, stat.Dirs.Unchanged, sn.Summary.DirsUnmodified, "DirsUnmodified")
rtest.Equals(t, stat.ProcessedBytes, sn.Summary.TotalBytesProcessed, "TotalBytesProcessed")
rtest.Equals(t, stat.Files.New+stat.Files.Changed+stat.Files.Unchanged, sn.Summary.TotalFilesProcessed, "TotalFilesProcessed")
bothZeroOrNeither(t, uint64(stat.DataBlobs), uint64(sn.Summary.DataBlobs))
bothZeroOrNeither(t, uint64(stat.TreeBlobs), uint64(sn.Summary.TreeBlobs))
bothZeroOrNeither(t, uint64(stat.DataSize+stat.TreeSize), uint64(sn.Summary.DataAdded))
@ -2061,20 +2047,12 @@ type TrackFS struct {
m sync.Mutex
}
func (m *TrackFS) Open(name string) (fs.File, error) {
func (m *TrackFS) OpenFile(name string, flag int, metadataOnly bool) (fs.File, error) {
m.m.Lock()
m.opened[name]++
m.m.Unlock()
return m.FS.Open(name)
}
func (m *TrackFS) OpenFile(name string, flag int, perm os.FileMode) (fs.File, error) {
m.m.Lock()
m.opened[name]++
m.m.Unlock()
return m.FS.OpenFile(name, flag, perm)
return m.FS.OpenFile(name, flag, metadataOnly)
}
type failSaveRepo struct {
@ -2223,48 +2201,51 @@ func snapshot(t testing.TB, repo archiverRepo, fs fs.FS, parent *restic.Snapshot
return snapshot, node
}
// StatFS allows overwriting what is returned by the Lstat function.
type StatFS struct {
type overrideFS struct {
fs.FS
OverrideLstat map[string]os.FileInfo
OnlyOverrideStat bool
overrideFI *fs.ExtendedFileInfo
resetFIOnRead bool
overrideNode *restic.Node
overrideErr error
}
func (fs *StatFS) Lstat(name string) (os.FileInfo, error) {
if !fs.OnlyOverrideStat {
if fi, ok := fs.OverrideLstat[fixpath(name)]; ok {
return fi, nil
}
}
return fs.FS.Lstat(name)
}
func (fs *StatFS) OpenFile(name string, flags int, perm os.FileMode) (fs.File, error) {
if fi, ok := fs.OverrideLstat[fixpath(name)]; ok {
f, err := fs.FS.OpenFile(name, flags, perm)
func (m *overrideFS) OpenFile(name string, flag int, metadataOnly bool) (fs.File, error) {
f, err := m.FS.OpenFile(name, flag, metadataOnly)
if err != nil {
return nil, err
return f, err
}
wrappedFile := fileStat{
File: f,
fi: fi,
if filepath.Base(name) == "testfile" || filepath.Base(name) == "testdir" {
return &overrideFile{f, m}, nil
}
return wrappedFile, nil
return f, nil
}
return fs.FS.OpenFile(name, flags, perm)
}
type fileStat struct {
type overrideFile struct {
fs.File
fi os.FileInfo
ofs *overrideFS
}
func (f fileStat) Stat() (os.FileInfo, error) {
return f.fi, nil
func (f overrideFile) Stat() (*fs.ExtendedFileInfo, error) {
if f.ofs.overrideFI == nil {
return f.File.Stat()
}
return f.ofs.overrideFI, nil
}
func (f overrideFile) MakeReadable() error {
if f.ofs.resetFIOnRead {
f.ofs.overrideFI = nil
}
return f.File.MakeReadable()
}
func (f overrideFile) ToNode(ignoreXattrListError bool) (*restic.Node, error) {
if f.ofs.overrideNode == nil {
return f.File.ToNode(ignoreXattrListError)
}
return f.ofs.overrideNode, f.ofs.overrideErr
}
// used by wrapFileInfo, use untyped const in order to avoid having a version
@ -2291,17 +2272,19 @@ func TestMetadataChanged(t *testing.T) {
// get metadata
fi := lstat(t, "testfile")
want, err := restic.NodeFromFileInfo("testfile", fi, false)
if err != nil {
t.Fatal(err)
}
localFS := &fs.Local{}
meta, err := localFS.OpenFile("testfile", fs.O_NOFOLLOW, true)
rtest.OK(t, err)
want, err := meta.ToNode(false)
rtest.OK(t, err)
rtest.OK(t, meta.Close())
fs := &StatFS{
FS: fs.Local{},
OverrideLstat: map[string]os.FileInfo{
"testfile": fi,
},
fs := &overrideFS{
FS: localFS,
overrideFI: fi,
overrideNode: &restic.Node{},
}
*fs.overrideNode = *want
sn, node2 := snapshot(t, repo, fs, nil, "testfile")
@ -2320,26 +2303,31 @@ func TestMetadataChanged(t *testing.T) {
t.Fatalf("metadata does not match:\n%v", cmp.Diff(want, node2))
}
// modify the mode by wrapping it in a new struct, uses the consts defined above
fs.OverrideLstat["testfile"] = wrapFileInfo(fi)
// modify the mode and UID/GID
modFI := *fi
modFI.Mode = mockFileInfoMode
if runtime.GOOS != "windows" {
modFI.UID = mockFileInfoUID
modFI.GID = mockFileInfoGID
}
fs.overrideFI = &modFI
rtest.Assert(t, !fileChanged(fs.overrideFI, node2, 0), "testfile must not be considered as changed")
// set the override values in the 'want' node
want.Mode = 0400
want.Mode = mockFileInfoMode
// ignore UID and GID on Windows
if runtime.GOOS != "windows" {
want.UID = 51234
want.GID = 51235
want.UID = mockFileInfoUID
want.GID = mockFileInfoGID
}
// no user and group name
want.User = ""
want.Group = ""
// update mock node accordingly
fs.overrideNode.Mode = want.Mode
fs.overrideNode.UID = want.UID
fs.overrideNode.GID = want.GID
// make another snapshot
_, node3 := snapshot(t, repo, fs, sn, "testfile")
// Override username and group to empty string - in case underlying system has user with UID 51234
// See https://github.com/restic/restic/issues/2372
node3.User = ""
node3.Group = ""
// make sure that metadata was recorded successfully
if !cmp.Equal(want, node3) {
@ -2352,28 +2340,42 @@ func TestMetadataChanged(t *testing.T) {
checker.TestCheckRepo(t, repo, false)
}
func TestRacyFileSwap(t *testing.T) {
func TestRacyFileTypeSwap(t *testing.T) {
files := TestDir{
"file": TestFile{
"testfile": TestFile{
Content: "foo bar test file",
},
"testdir": TestDir{},
}
for _, dirError := range []bool{false, true} {
desc := "file changed type"
if dirError {
desc = "dir changed type"
}
t.Run(desc, func(t *testing.T) {
tempdir, repo := prepareTempdirRepoSrc(t, files)
back := rtest.Chdir(t, tempdir)
defer back()
// get metadata of current folder
fi := lstat(t, ".")
tempfile := filepath.Join(tempdir, "file")
var fakeName, realName string
if dirError {
// lstat claims this is a directory, but it's actually a file
fakeName = "testdir"
realName = "testfile"
} else {
fakeName = "testfile"
realName = "testdir"
}
fakeFI := lstat(t, fakeName)
tempfile := filepath.Join(tempdir, realName)
statfs := &StatFS{
statfs := &overrideFS{
FS: fs.Local{},
OverrideLstat: map[string]os.FileInfo{
tempfile: fi,
},
OnlyOverrideStat: true,
overrideFI: fakeFI,
resetFIOnRead: true,
}
ctx, cancel := context.WithCancel(context.Background())
@ -2391,23 +2393,30 @@ func TestRacyFileSwap(t *testing.T) {
// fs.Track will panic if the file was not closed
_, excluded, err := arch.save(ctx, "/", tempfile, nil)
if err == nil {
t.Errorf("Save() should have failed")
rtest.Assert(t, err != nil && strings.Contains(err.Error(), "changed type, refusing to archive"), "save() returned wrong error: %v", err)
tpe := "file"
if dirError {
tpe = "directory"
}
rtest.Assert(t, strings.Contains(err.Error(), tpe+" "), "unexpected item type in error: %v", err)
rtest.Assert(t, !excluded, "Save() excluded the node, that's unexpected")
})
}
}
if excluded {
t.Errorf("Save() excluded the node, that's unexpected")
type mockToNoder struct {
node *restic.Node
err error
}
func (m *mockToNoder) ToNode(_ bool) (*restic.Node, error) {
return m.node, m.err
}
func TestMetadataBackupErrorFiltering(t *testing.T) {
tempdir := t.TempDir()
repo := repository.TestRepository(t)
filename := filepath.Join(tempdir, "file")
rtest.OK(t, os.WriteFile(filename, []byte("example"), 0o600))
fi, err := os.Stat(filename)
rtest.OK(t, err)
repo := repository.TestRepository(t)
arch := New(repo, fs.Local{}, Options{})
@ -2418,15 +2427,24 @@ func TestMetadataBackupErrorFiltering(t *testing.T) {
return replacementErr
}
nonExistNoder := &mockToNoder{
node: &restic.Node{Type: restic.NodeTypeFile},
err: fmt.Errorf("not found"),
}
// check that errors from reading extended metadata are properly filtered
node, err := arch.nodeFromFileInfo("file", filename+"invalid", fi, false)
node, err := arch.nodeFromFileInfo("file", filename+"invalid", nonExistNoder, false)
rtest.Assert(t, node != nil, "node is missing")
rtest.Assert(t, err == replacementErr, "expected %v got %v", replacementErr, err)
rtest.Assert(t, filteredErr != nil, "missing inner error")
// check that errors from reading irregular file are not filtered
filteredErr = nil
node, err = arch.nodeFromFileInfo("file", filename, wrapIrregularFileInfo(fi), false)
nonExistNoder = &mockToNoder{
node: &restic.Node{Type: restic.NodeTypeIrregular},
err: fmt.Errorf(`unsupported file type "irregular"`),
}
node, err = arch.nodeFromFileInfo("file", filename, nonExistNoder, false)
rtest.Assert(t, node != nil, "node is missing")
rtest.Assert(t, filteredErr == nil, "error for irregular node should not have been filtered")
rtest.Assert(t, strings.Contains(err.Error(), "irregular"), "unexpected error %q does not warn about irregular file mode", err)
@ -2445,18 +2463,22 @@ func TestIrregularFile(t *testing.T) {
tempfile := filepath.Join(tempdir, "testfile")
fi := lstat(t, "testfile")
// patch mode to irregular
fi.Mode = (fi.Mode &^ os.ModeType) | os.ModeIrregular
statfs := &StatFS{
override := &overrideFS{
FS: fs.Local{},
OverrideLstat: map[string]os.FileInfo{
tempfile: wrapIrregularFileInfo(fi),
overrideFI: fi,
overrideNode: &restic.Node{
Type: restic.NodeTypeIrregular,
},
overrideErr: fmt.Errorf(`unsupported file type "irregular"`),
}
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
arch := New(repo, fs.Track{FS: statfs}, Options{})
arch := New(repo, fs.Track{FS: override}, Options{})
_, excluded, err := arch.save(ctx, "/", tempfile, nil)
if err == nil {
t.Fatalf("Save() should have failed")
@ -2467,3 +2489,48 @@ func TestIrregularFile(t *testing.T) {
t.Errorf("Save() excluded the node, that's unexpected")
}
}
type missingFS struct {
fs.FS
errorOnOpen bool
}
func (fs *missingFS) OpenFile(name string, flag int, metadataOnly bool) (fs.File, error) {
if fs.errorOnOpen {
return nil, os.ErrNotExist
}
return &missingFile{}, nil
}
type missingFile struct {
fs.File
}
func (f *missingFile) Stat() (*fs.ExtendedFileInfo, error) {
return nil, os.ErrNotExist
}
func (f *missingFile) Close() error {
// prevent segfault in test
return nil
}
func TestDisappearedFile(t *testing.T) {
tempdir, repo := prepareTempdirRepoSrc(t, TestDir{})
back := rtest.Chdir(t, tempdir)
defer back()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// depending on the underlying FS implementation, a missing file may be detected by OpenFile or
// the subsequent file.Stat() call. Thus test both cases.
for _, errorOnOpen := range []bool{false, true} {
arch := New(repo, fs.Track{FS: &missingFS{FS: &fs.Local{}, errorOnOpen: errorOnOpen}}, Options{})
_, excluded, err := arch.save(ctx, "/", filepath.Join(tempdir, "testdir"), nil)
rtest.OK(t, err)
rtest.Assert(t, excluded, "testfile should have been excluded")
}
}


@ -4,8 +4,6 @@
package archiver
import (
"os"
"syscall"
"testing"
"github.com/restic/restic/internal/feature"
@ -14,54 +12,9 @@ import (
rtest "github.com/restic/restic/internal/test"
)
type wrappedFileInfo struct {
os.FileInfo
sys interface{}
mode os.FileMode
}
func (fi wrappedFileInfo) Sys() interface{} {
return fi.sys
}
func (fi wrappedFileInfo) Mode() os.FileMode {
return fi.mode
}
// wrapFileInfo returns a new os.FileInfo with the mode, owner, and group fields changed.
func wrapFileInfo(fi os.FileInfo) os.FileInfo {
// get the underlying stat_t and modify the values
stat := fi.Sys().(*syscall.Stat_t)
stat.Mode = mockFileInfoMode
stat.Uid = mockFileInfoUID
stat.Gid = mockFileInfoGID
// wrap the os.FileInfo so we can return a modified stat_t
res := wrappedFileInfo{
FileInfo: fi,
sys: stat,
mode: mockFileInfoMode,
}
return res
}
// wrapIrregularFileInfo returns a new os.FileInfo with the mode changed to irregular file
func wrapIrregularFileInfo(fi os.FileInfo) os.FileInfo {
// wrap the os.FileInfo so we can return a modified stat_t
return wrappedFileInfo{
FileInfo: fi,
sys: fi.Sys().(*syscall.Stat_t),
mode: (fi.Mode() &^ os.ModeType) | os.ModeIrregular,
}
}
func statAndSnapshot(t *testing.T, repo archiverRepo, name string) (*restic.Node, *restic.Node) {
fi := lstat(t, name)
want, err := restic.NodeFromFileInfo(name, fi, false)
rtest.OK(t, err)
_, node := snapshot(t, repo, fs.Local{}, nil, name)
want := nodeFromFile(t, &fs.Local{}, name)
_, node := snapshot(t, repo, &fs.Local{}, nil, name)
return want, node
}


@ -1,36 +0,0 @@
//go:build windows
// +build windows
package archiver
import (
"os"
)
type wrappedFileInfo struct {
os.FileInfo
mode os.FileMode
}
func (fi wrappedFileInfo) Mode() os.FileMode {
return fi.mode
}
// wrapFileInfo returns a new os.FileInfo with the mode, owner, and group fields changed.
func wrapFileInfo(fi os.FileInfo) os.FileInfo {
// wrap the os.FileInfo and return the modified mode, uid and gid are ignored on Windows
res := wrappedFileInfo{
FileInfo: fi,
mode: mockFileInfoMode,
}
return res
}
// wrapIrregularFileInfo returns a new os.FileInfo with the mode changed to irregular file
func wrapIrregularFileInfo(fi os.FileInfo) os.FileInfo {
return wrappedFileInfo{
FileInfo: fi,
mode: (fi.Mode() &^ os.ModeType) | os.ModeIrregular,
}
}


@ -9,22 +9,22 @@ import (
"golang.org/x/sync/errgroup"
)
// Saver allows saving a blob.
type Saver interface {
// saver allows saving a blob.
type saver interface {
SaveBlob(ctx context.Context, t restic.BlobType, data []byte, id restic.ID, storeDuplicate bool) (restic.ID, bool, int, error)
}
// BlobSaver concurrently saves incoming blobs to the repo.
type BlobSaver struct {
repo Saver
// blobSaver concurrently saves incoming blobs to the repo.
type blobSaver struct {
repo saver
ch chan<- saveBlobJob
}
// NewBlobSaver returns a new blob. A worker pool is started, it is stopped
// newBlobSaver returns a new blob saver. A worker pool is started; it is stopped
// when ctx is cancelled.
func NewBlobSaver(ctx context.Context, wg *errgroup.Group, repo Saver, workers uint) *BlobSaver {
func newBlobSaver(ctx context.Context, wg *errgroup.Group, repo saver, workers uint) *blobSaver {
ch := make(chan saveBlobJob)
s := &BlobSaver{
s := &blobSaver{
repo: repo,
ch: ch,
}
@ -38,13 +38,13 @@ func NewBlobSaver(ctx context.Context, wg *errgroup.Group, repo Saver, workers u
return s
}
func (s *BlobSaver) TriggerShutdown() {
func (s *blobSaver) TriggerShutdown() {
close(s.ch)
}
// Save stores a blob in the repo. It checks the index and the known blobs
// before saving anything. It takes ownership of the buffer passed in.
func (s *BlobSaver) Save(ctx context.Context, t restic.BlobType, buf *Buffer, filename string, cb func(res SaveBlobResponse)) {
func (s *blobSaver) Save(ctx context.Context, t restic.BlobType, buf *buffer, filename string, cb func(res saveBlobResponse)) {
select {
case s.ch <- saveBlobJob{BlobType: t, buf: buf, fn: filename, cb: cb}:
case <-ctx.Done():
@ -54,26 +54,26 @@ func (s *BlobSaver) Save(ctx context.Context, t restic.BlobType, buf *Buffer, fi
type saveBlobJob struct {
restic.BlobType
buf *Buffer
buf *buffer
fn string
cb func(res SaveBlobResponse)
cb func(res saveBlobResponse)
}
type SaveBlobResponse struct {
type saveBlobResponse struct {
id restic.ID
length int
sizeInRepo int
known bool
}
func (s *BlobSaver) saveBlob(ctx context.Context, t restic.BlobType, buf []byte) (SaveBlobResponse, error) {
func (s *blobSaver) saveBlob(ctx context.Context, t restic.BlobType, buf []byte) (saveBlobResponse, error) {
id, known, sizeInRepo, err := s.repo.SaveBlob(ctx, t, buf, restic.ID{}, false)
if err != nil {
return SaveBlobResponse{}, err
return saveBlobResponse{}, err
}
return SaveBlobResponse{
return saveBlobResponse{
id: id,
length: len(buf),
sizeInRepo: sizeInRepo,
@ -81,7 +81,7 @@ func (s *BlobSaver) saveBlob(ctx context.Context, t restic.BlobType, buf []byte)
}, nil
}
func (s *BlobSaver) worker(ctx context.Context, jobs <-chan saveBlobJob) error {
func (s *blobSaver) worker(ctx context.Context, jobs <-chan saveBlobJob) error {
for {
var job saveBlobJob
var ok bool
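With these types unexported, all wiring now happens inside the archiver package. A minimal lifecycle sketch (assuming ctx and a repo value satisfying the saver interface are in scope):

wg, wgCtx := errgroup.WithContext(ctx)
bs := newBlobSaver(wgCtx, wg, repo, uint(runtime.NumCPU()))

buf := &buffer{Data: []byte("example data")}
bs.Save(wgCtx, restic.DataBlob, buf, "example/file", func(res saveBlobResponse) {
	// res carries the blob ID, its length, its size in the repo, and
	// whether the blob was already known
})

bs.TriggerShutdown() // close the job channel ...
_ = wg.Wait()        // ... then wait for the workers to drain it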


@ -38,20 +38,20 @@ func TestBlobSaver(t *testing.T) {
wg, ctx := errgroup.WithContext(ctx)
saver := &saveFail{}
b := NewBlobSaver(ctx, wg, saver, uint(runtime.NumCPU()))
b := newBlobSaver(ctx, wg, saver, uint(runtime.NumCPU()))
var wait sync.WaitGroup
var results []SaveBlobResponse
var results []saveBlobResponse
var lock sync.Mutex
wait.Add(20)
for i := 0; i < 20; i++ {
buf := &Buffer{Data: []byte(fmt.Sprintf("foo%d", i))}
buf := &buffer{Data: []byte(fmt.Sprintf("foo%d", i))}
idx := i
lock.Lock()
results = append(results, SaveBlobResponse{})
results = append(results, saveBlobResponse{})
lock.Unlock()
b.Save(ctx, restic.DataBlob, buf, "file", func(res SaveBlobResponse) {
b.Save(ctx, restic.DataBlob, buf, "file", func(res saveBlobResponse) {
lock.Lock()
results[idx] = res
lock.Unlock()
@ -95,11 +95,11 @@ func TestBlobSaverError(t *testing.T) {
failAt: int32(test.failAt),
}
b := NewBlobSaver(ctx, wg, saver, uint(runtime.NumCPU()))
b := newBlobSaver(ctx, wg, saver, uint(runtime.NumCPU()))
for i := 0; i < test.blobs; i++ {
buf := &Buffer{Data: []byte(fmt.Sprintf("foo%d", i))}
b.Save(ctx, restic.DataBlob, buf, "errfile", func(res SaveBlobResponse) {})
buf := &buffer{Data: []byte(fmt.Sprintf("foo%d", i))}
b.Save(ctx, restic.DataBlob, buf, "errfile", func(res saveBlobResponse) {})
}
b.TriggerShutdown()


@ -1,14 +1,14 @@
package archiver
// Buffer is a reusable buffer. After the buffer has been used, Release should
// buffer is a reusable buffer. After the buffer has been used, Release should
// be called so the underlying slice is put back into the pool.
type Buffer struct {
type buffer struct {
Data []byte
pool *BufferPool
pool *bufferPool
}
// Release puts the buffer back into the pool it came from.
func (b *Buffer) Release() {
func (b *buffer) Release() {
pool := b.pool
if pool == nil || cap(b.Data) > pool.defaultSize {
return
@ -20,32 +20,32 @@ func (b *Buffer) Release() {
}
}
// BufferPool implements a limited set of reusable buffers.
type BufferPool struct {
ch chan *Buffer
// bufferPool implements a limited set of reusable buffers.
type bufferPool struct {
ch chan *buffer
defaultSize int
}
// NewBufferPool initializes a new buffer pool. The pool stores at most max
// newBufferPool initializes a new buffer pool. The pool stores at most max
// items. New buffers are created with defaultSize. Buffers that have grown
// larger are not put back.
func NewBufferPool(max int, defaultSize int) *BufferPool {
b := &BufferPool{
ch: make(chan *Buffer, max),
func newBufferPool(max int, defaultSize int) *bufferPool {
b := &bufferPool{
ch: make(chan *buffer, max),
defaultSize: defaultSize,
}
return b
}
// Get returns a new buffer, either from the pool or newly allocated.
func (pool *BufferPool) Get() *Buffer {
func (pool *bufferPool) Get() *buffer {
select {
case buf := <-pool.ch:
return buf
default:
}
b := &Buffer{
b := &buffer{
Data: make([]byte, pool.defaultSize),
pool: pool,
}
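The intended acquire/release cycle around the pool, sketched (the worker count, payload, and consumer are illustrative):

pool := newBufferPool(int(fileWorkers+blobWorkers), chunker.MaxSize)

buf := pool.Get()                      // reused from the pool, or freshly allocated
n := copy(buf.Data, []byte("payload")) // fill at most len(buf.Data) bytes
process(buf.Data[:n])                  // hypothetical consumer
buf.Release()                          // back into the pool, unless the slice outgrew defaultSize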


@ -1,12 +1,3 @@
// Package archiver contains the code which reads files, splits them into
// chunks and saves the data to the repository.
//
// An Archiver has a number of worker goroutines handling saving the different
// data structures to the repository, the details are implemented by the
// FileSaver, BlobSaver, and TreeSaver types.
//
// The main goroutine (the one calling Snapshot()) traverses the directory tree
// and delegates all work to these worker pools. They return a type
// (FutureFile, FutureBlob, and FutureTree) which can be resolved later, by
// calling Wait() on it.
package archiver


@ -0,0 +1,318 @@
package archiver
import (
"bytes"
"fmt"
"io"
"os"
"runtime"
"strings"
"sync"
"github.com/restic/restic/internal/debug"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/fs"
)
// RejectByNameFunc is a function that takes a filename of a
// file that would be included in the backup. The function returns true if it
// should be excluded (rejected) from the backup.
type RejectByNameFunc func(path string) bool
// RejectFunc is a function that takes a filename and fs.ExtendedFileInfo of a
// file that would be included in the backup. The function returns true if it
// should be excluded (rejected) from the backup.
type RejectFunc func(path string, fi *fs.ExtendedFileInfo, fs fs.FS) bool
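// CombineRejectByNames combines multiple RejectByNameFuncs into a single
// SelectByNameFunc: an item is selected only if no reject function matches it.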
func CombineRejectByNames(funcs []RejectByNameFunc) SelectByNameFunc {
return func(item string) bool {
for _, reject := range funcs {
if reject(item) {
return false
}
}
return true
}
}
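// CombineRejects combines multiple RejectFuncs into a single SelectFunc:
// an item is selected only if no reject function matches it.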
func CombineRejects(funcs []RejectFunc) SelectFunc {
return func(item string, fi *fs.ExtendedFileInfo, fs fs.FS) bool {
for _, reject := range funcs {
if reject(item, fi, fs) {
return false
}
}
return true
}
}
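// rejectionCache remembers, per directory, whether it was rejected on a
// previous visit; callers must hold the lock around Get and Store.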
type rejectionCache struct {
m map[string]bool
mtx sync.Mutex
}
func newRejectionCache() *rejectionCache {
return &rejectionCache{m: make(map[string]bool)}
}
// Lock locks the mutex in rc.
func (rc *rejectionCache) Lock() {
rc.mtx.Lock()
}
// Unlock unlocks the mutex in rc.
func (rc *rejectionCache) Unlock() {
rc.mtx.Unlock()
}
// Get returns the last stored value for dir and a second boolean that
// indicates whether that value was actually written to the cache. It is the
// caller's responsibility to call rc.Lock and rc.Unlock before using this
// method, otherwise data races may occur.
func (rc *rejectionCache) Get(dir string) (bool, bool) {
v, ok := rc.m[dir]
return v, ok
}
// Store stores a new value for dir. It is the caller's responsibility to call
// rc.Lock and rc.Unlock before using this method, otherwise data races may
// occur.
func (rc *rejectionCache) Store(dir string, rejected bool) {
rc.m[dir] = rejected
}
// RejectIfPresent returns a RejectFunc which itself returns whether a path
// should be excluded. The RejectFunc considers a file to be excluded when
// it resides in a directory with an exclusion file, that is specified by
// excludeFileSpec in the form "filename[:content]". The returned error is
// non-nil if the filename component of excludeFileSpec is empty. An internal
// rejection cache is used to expedite the evaluation of a directory based on
// previous visits.
func RejectIfPresent(excludeFileSpec string, warnf func(msg string, args ...interface{})) (RejectFunc, error) {
if excludeFileSpec == "" {
return nil, errors.New("name for exclusion tagfile is empty")
}
colon := strings.Index(excludeFileSpec, ":")
if colon == 0 {
return nil, fmt.Errorf("no name for exclusion tagfile provided")
}
tf, tc := "", ""
if colon > 0 {
tf = excludeFileSpec[:colon]
tc = excludeFileSpec[colon+1:]
} else {
tf = excludeFileSpec
}
debug.Log("using %q as exclusion tagfile", tf)
rc := newRejectionCache()
return func(filename string, _ *fs.ExtendedFileInfo, fs fs.FS) bool {
return isExcludedByFile(filename, tf, tc, rc, fs, warnf)
}, nil
}
// isExcludedByFile interprets filename as a path and returns true if that file
// is in an excluded directory. A directory is identified as excluded if it contains a
// tagfile which bears the name specified in tagFilename and starts with
// header. If rc is non-nil, it is used to expedite the evaluation of a
// directory based on previous visits.
func isExcludedByFile(filename, tagFilename, header string, rc *rejectionCache, fs fs.FS, warnf func(msg string, args ...interface{})) bool {
if tagFilename == "" {
return false
}
if fs.Base(filename) == tagFilename {
return false // do not exclude the tagfile itself
}
rc.Lock()
defer rc.Unlock()
dir := fs.Dir(filename)
rejected, visited := rc.Get(dir)
if visited {
return rejected
}
rejected = isDirExcludedByFile(dir, tagFilename, header, fs, warnf)
rc.Store(dir, rejected)
return rejected
}
func isDirExcludedByFile(dir, tagFilename, header string, fsInst fs.FS, warnf func(msg string, args ...interface{})) bool {
tf := fsInst.Join(dir, tagFilename)
_, err := fsInst.Lstat(tf)
if errors.Is(err, os.ErrNotExist) {
return false
}
if err != nil {
warnf("could not access exclusion tagfile: %v", err)
return false
}
// when no signature is given, the mere presence of tf is enough reason
// to exclude filename
if len(header) == 0 {
return true
}
// From this stage, errors mean tagFilename exists but it is malformed.
// Warnings will be generated so that the user is informed that the
// intended ignore-action is not performed.
f, err := fsInst.OpenFile(tf, fs.O_RDONLY, false)
if err != nil {
warnf("could not open exclusion tagfile: %v", err)
return false
}
defer func() {
_ = f.Close()
}()
buf := make([]byte, len(header))
_, err = io.ReadFull(f, buf)
// EOF is handled with a dedicated message, otherwise the warning would be too cryptic
if err == io.EOF {
warnf("invalid (too short) signature in exclusion tagfile %q\n", tf)
return false
}
if err != nil {
warnf("could not read signature from exclusion tagfile %q: %v\n", tf, err)
return false
}
if !bytes.Equal(buf, []byte(header)) {
warnf("invalid signature in exclusion tagfile %q\n", tf)
return false
}
return true
}
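For illustration, a spec following the CACHEDIR.TAG convention (the signature below is the standard cache-directory one, not something introduced by this changeset; warnf is assumed to be in scope):

reject, err := RejectIfPresent("CACHEDIR.TAG:Signature: 8a477f597d28d172789f06886806bc55", warnf)
if err != nil {
	return err
}
// directories containing a CACHEDIR.TAG file that starts with this
// signature are excluded; the tagfile itself is still archived
_ = reject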
// deviceMap is used to track allowed source devices for backup. This is used to
// check for crossing mount points during backup (for --one-file-system). It
// maps the name of a source path to its device ID.
type deviceMap map[string]uint64
// newDeviceMap creates a new device map from the list of source paths.
func newDeviceMap(allowedSourcePaths []string, fs fs.FS) (deviceMap, error) {
if runtime.GOOS == "windows" {
return nil, errors.New("Device IDs are not supported on Windows")
}
deviceMap := make(map[string]uint64)
for _, item := range allowedSourcePaths {
item, err := fs.Abs(fs.Clean(item))
if err != nil {
return nil, err
}
fi, err := fs.Lstat(item)
if err != nil {
return nil, err
}
deviceMap[item] = fi.DeviceID
}
if len(deviceMap) == 0 {
return nil, errors.New("zero allowed devices")
}
return deviceMap, nil
}
// IsAllowed returns true if the path is located on an allowed device.
func (m deviceMap) IsAllowed(item string, deviceID uint64, fs fs.FS) (bool, error) {
for dir := item; ; dir = fs.Dir(dir) {
debug.Log("item %v, test dir %v", item, dir)
// find a parent directory that is on an allowed device (otherwise
// we would not traverse the directory at all)
allowedID, ok := m[dir]
if !ok {
if dir == fs.Dir(dir) {
// arrived at root, no allowed device found. this should not happen.
break
}
continue
}
// if the item has a different device ID than the parent directory,
// we crossed a file system boundary
if allowedID != deviceID {
debug.Log("item %v (dir %v) on disallowed device %d", item, dir, deviceID)
return false, nil
}
// item is on allowed device, accept it
debug.Log("item %v allowed", item)
return true, nil
}
return false, fmt.Errorf("item %v (device ID %v) not found, deviceMap: %v", item, deviceID, m)
}
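A quick worked example of the lookup semantics (paths and device IDs made up):

m := deviceMap{"/": 1, "/mnt/data": 2}

allowed, _ := m.IsAllowed("/home/user/file", 1, &fs.Local{})
// true: the nearest mapped parent is "/" with device ID 1

allowed, _ = m.IsAllowed("/mnt/data/sub/file", 3, &fs.Local{})
// false: the nearest mapped parent "/mnt/data" has device ID 2, so a
// file system boundary was crossed
_ = allowed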
// RejectByDevice returns a RejectFunc that rejects files which are on a
// different file system than the files/dirs in samples.
func RejectByDevice(samples []string, filesystem fs.FS) (RejectFunc, error) {
deviceMap, err := newDeviceMap(samples, filesystem)
if err != nil {
return nil, err
}
debug.Log("allowed devices: %v\n", deviceMap)
return func(item string, fi *fs.ExtendedFileInfo, fs fs.FS) bool {
allowed, err := deviceMap.IsAllowed(fs.Clean(item), fi.DeviceID, fs)
if err != nil {
// this should not happen
panic(fmt.Sprintf("error checking device ID of %v: %v", item, err))
}
if allowed {
// accept item
return false
}
// reject everything except directories
if !fi.Mode.IsDir() {
return true
}
// special case: make sure we keep mountpoints (directories which
// contain a mounted file system). Test this by checking if the parent
// directory would be included.
parentDir := fs.Dir(fs.Clean(item))
parentFI, err := fs.Lstat(parentDir)
if err != nil {
debug.Log("item %v: error running lstat() on parent directory: %v", item, err)
// if in doubt, reject
return true
}
parentAllowed, err := deviceMap.IsAllowed(parentDir, parentFI.DeviceID, fs)
if err != nil {
debug.Log("item %v: error checking parent directory: %v", item, err)
// if in doubt, reject
return true
}
if parentAllowed {
// we found a mount point, so accept the directory
return false
}
// reject everything else
return true
}, nil
}
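// RejectBySize returns a RejectFunc that rejects regular files larger than
// maxSize bytes; directories are never rejected by size.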
func RejectBySize(maxSize int64) (RejectFunc, error) {
return func(item string, fi *fs.ExtendedFileInfo, _ fs.FS) bool {
// directories are ignored by the size check
if fi.Mode.IsDir() {
return false
}
filesize := fi.Size
if filesize > maxSize {
debug.Log("file %s is oversize: %d", item, filesize)
return true
}
return false
}, nil
}
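Putting the pieces together, a backup-side caller could assemble its filters roughly like this (the variable names are hypothetical):

var rejects []RejectFunc

if bySize, err := RejectBySize(1 << 20); err == nil { // skip files over 1 MiB
	rejects = append(rejects, bySize)
}
if byTag, err := RejectIfPresent("CACHEDIR.TAG", warnf); err == nil {
	rejects = append(rejects, byTag)
}

selectFn := CombineRejects(rejects) // true = include the item, false = reject it
_ = selectFn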


@ -1,67 +1,14 @@
package main
package archiver
import (
"os"
"path/filepath"
"testing"
"github.com/restic/restic/internal/fs"
"github.com/restic/restic/internal/test"
)
func TestRejectByPattern(t *testing.T) {
var tests = []struct {
filename string
reject bool
}{
{filename: "/home/user/foo.go", reject: true},
{filename: "/home/user/foo.c", reject: false},
{filename: "/home/user/foobar", reject: false},
{filename: "/home/user/foobar/x", reject: true},
{filename: "/home/user/README", reject: false},
{filename: "/home/user/README.md", reject: true},
}
patterns := []string{"*.go", "README.md", "/home/user/foobar/*"}
for _, tc := range tests {
t.Run("", func(t *testing.T) {
reject := rejectByPattern(patterns)
res := reject(tc.filename)
if res != tc.reject {
t.Fatalf("wrong result for filename %v: want %v, got %v",
tc.filename, tc.reject, res)
}
})
}
}
func TestRejectByInsensitivePattern(t *testing.T) {
var tests = []struct {
filename string
reject bool
}{
{filename: "/home/user/foo.GO", reject: true},
{filename: "/home/user/foo.c", reject: false},
{filename: "/home/user/foobar", reject: false},
{filename: "/home/user/FOObar/x", reject: true},
{filename: "/home/user/README", reject: false},
{filename: "/home/user/readme.md", reject: true},
}
patterns := []string{"*.go", "README.md", "/home/user/foobar/*"}
for _, tc := range tests {
t.Run("", func(t *testing.T) {
reject := rejectByInsensitivePattern(patterns)
res := reject(tc.filename)
if res != tc.reject {
t.Fatalf("wrong result for filename %v: want %v, got %v",
tc.filename, tc.reject, res)
}
})
}
}
func TestIsExcludedByFile(t *testing.T) {
const (
tagFilename = "CACHEDIR.TAG"
@ -102,7 +49,7 @@ func TestIsExcludedByFile(t *testing.T) {
if tc.content == "" {
h = ""
}
if got := isExcludedByFile(foo, tagFilename, h, nil); tc.want != got {
if got := isExcludedByFile(foo, tagFilename, h, newRejectionCache(), &fs.Local{}, func(msg string, args ...interface{}) { t.Logf(msg, args...) }); tc.want != got {
t.Fatalf("expected %v, got %v", tc.want, got)
}
})
@ -153,8 +100,8 @@ func TestMultipleIsExcludedByFile(t *testing.T) {
// create two rejection functions, one that tests for the NOFOO file
// and one for the NOBAR file
fooExclude, _ := rejectIfPresent("NOFOO")
barExclude, _ := rejectIfPresent("NOBAR")
fooExclude, _ := RejectIfPresent("NOFOO", nil)
barExclude, _ := RejectIfPresent("NOBAR", nil)
// To mock the archiver scanning walk, we create filepath.WalkFn
// that tests against the two rejection functions and stores
@ -164,8 +111,8 @@ func TestMultipleIsExcludedByFile(t *testing.T) {
if err != nil {
return err
}
excludedByFoo := fooExclude(p)
excludedByBar := barExclude(p)
excludedByFoo := fooExclude(p, nil, &fs.Local{})
excludedByBar := barExclude(p, nil, &fs.Local{})
excluded := excludedByFoo || excludedByBar
// the log message helps debugging in case the test fails
t.Logf("%q: %v || %v = %v", p, excludedByFoo, excludedByBar, excluded)
@ -192,9 +139,6 @@ func TestMultipleIsExcludedByFile(t *testing.T) {
func TestIsExcludedByFileSize(t *testing.T) {
tempDir := test.TempDir(t)
// Max size of file is set to be 1k
maxSizeStr := "1k"
// Create some files in a temporary directory.
// Files in UPPERCASE will be used as exclusion triggers later on.
// We will test the inclusion later, so we add the expected value as
@ -238,7 +182,7 @@ func TestIsExcludedByFileSize(t *testing.T) {
test.OKs(t, errs) // see if anything went wrong during the creation
// create rejection function
sizeExclude, _ := rejectBySize(maxSizeStr)
sizeExclude, _ := RejectBySize(1024)
// To mock the archiver scanning walk, we create filepath.WalkFn
// that tests against the two rejection functions and stores
@ -249,7 +193,7 @@ func TestIsExcludedByFileSize(t *testing.T) {
return err
}
excluded := sizeExclude(p, fi)
excluded := sizeExclude(p, fs.ExtendedStat(fi), nil)
// the log message helps debugging in case the test fails
t.Logf("%q: dir:%t; size:%d; excluded:%v", p, fi.IsDir(), fi.Size(), excluded)
m[p] = !excluded
@ -268,7 +212,7 @@ func TestIsExcludedByFileSize(t *testing.T) {
}
func TestDeviceMap(t *testing.T) {
deviceMap := DeviceMap{
deviceMap := deviceMap{
filepath.FromSlash("/"): 1,
filepath.FromSlash("/usr/local"): 5,
}
@ -299,7 +243,7 @@ func TestDeviceMap(t *testing.T) {
for _, test := range tests {
t.Run("", func(t *testing.T) {
res, err := deviceMap.IsAllowed(filepath.FromSlash(test.item), test.deviceID)
res, err := deviceMap.IsAllowed(filepath.FromSlash(test.item), test.deviceID, &fs.Local{})
if err != nil {
t.Fatal(err)
}


@ -4,7 +4,6 @@ import (
"context"
"fmt"
"io"
"os"
"sync"
"github.com/restic/chunker"
@ -15,13 +14,13 @@ import (
"golang.org/x/sync/errgroup"
)
// SaveBlobFn saves a blob to a repo.
type SaveBlobFn func(context.Context, restic.BlobType, *Buffer, string, func(res SaveBlobResponse))
// saveBlobFn saves a blob to a repo.
type saveBlobFn func(context.Context, restic.BlobType, *buffer, string, func(res saveBlobResponse))
// FileSaver concurrently saves incoming files to the repo.
type FileSaver struct {
saveFilePool *BufferPool
saveBlob SaveBlobFn
// fileSaver concurrently saves incoming files to the repo.
type fileSaver struct {
saveFilePool *bufferPool
saveBlob saveBlobFn
pol chunker.Pol
@ -29,21 +28,21 @@ type FileSaver struct {
CompleteBlob func(bytes uint64)
NodeFromFileInfo func(snPath, filename string, fi os.FileInfo, ignoreXattrListError bool) (*restic.Node, error)
NodeFromFileInfo func(snPath, filename string, meta ToNoder, ignoreXattrListError bool) (*restic.Node, error)
}
// NewFileSaver returns a new file saver. A worker pool with fileWorkers is
// newFileSaver returns a new file saver. A worker pool with fileWorkers is
// started; it is stopped when ctx is cancelled.
func NewFileSaver(ctx context.Context, wg *errgroup.Group, save SaveBlobFn, pol chunker.Pol, fileWorkers, blobWorkers uint) *FileSaver {
func newFileSaver(ctx context.Context, wg *errgroup.Group, save saveBlobFn, pol chunker.Pol, fileWorkers, blobWorkers uint) *fileSaver {
ch := make(chan saveFileJob)
debug.Log("new file saver with %v file workers and %v blob workers", fileWorkers, blobWorkers)
poolSize := fileWorkers + blobWorkers
s := &FileSaver{
s := &fileSaver{
saveBlob: save,
saveFilePool: NewBufferPool(int(poolSize), chunker.MaxSize),
saveFilePool: newBufferPool(int(poolSize), chunker.MaxSize),
pol: pol,
ch: ch,
@ -60,24 +59,23 @@ func NewFileSaver(ctx context.Context, wg *errgroup.Group, save SaveBlobFn, pol
return s
}
func (s *FileSaver) TriggerShutdown() {
func (s *fileSaver) TriggerShutdown() {
close(s.ch)
}
// CompleteFunc is called when the file has been saved.
type CompleteFunc func(*restic.Node, ItemStats)
// fileCompleteFunc is called when the file has been saved.
type fileCompleteFunc func(*restic.Node, ItemStats)
// Save stores the file f and returns the data once it has been completed. The
// file is closed by Save. completeReading is only called if the file was read
// successfully. complete is always called. If completeReading is called, then
// this will always happen before calling complete.
func (s *FileSaver) Save(ctx context.Context, snPath string, target string, file fs.File, fi os.FileInfo, start func(), completeReading func(), complete CompleteFunc) FutureNode {
func (s *fileSaver) Save(ctx context.Context, snPath string, target string, file fs.File, start func(), completeReading func(), complete fileCompleteFunc) futureNode {
fn, ch := newFutureNode()
job := saveFileJob{
snPath: snPath,
target: target,
file: file,
fi: fi,
ch: ch,
start: start,
@ -100,16 +98,15 @@ type saveFileJob struct {
snPath string
target string
file fs.File
fi os.FileInfo
ch chan<- futureNodeResult
start func()
completeReading func()
complete CompleteFunc
complete fileCompleteFunc
}
// saveFile stores the file f in the repo, then closes it.
func (s *FileSaver) saveFile(ctx context.Context, chnker *chunker.Chunker, snPath string, target string, f fs.File, fi os.FileInfo, start func(), finishReading func(), finish func(res futureNodeResult)) {
func (s *fileSaver) saveFile(ctx context.Context, chnker *chunker.Chunker, snPath string, target string, f fs.File, start func(), finishReading func(), finish func(res futureNodeResult)) {
start()
fnr := futureNodeResult{
@ -156,14 +153,14 @@ func (s *FileSaver) saveFile(ctx context.Context, chnker *chunker.Chunker, snPat
debug.Log("%v", snPath)
node, err := s.NodeFromFileInfo(snPath, target, fi, false)
node, err := s.NodeFromFileInfo(snPath, target, f, false)
if err != nil {
_ = f.Close()
completeError(err)
return
}
if node.Type != "file" {
if node.Type != restic.NodeTypeFile {
_ = f.Close()
completeError(errors.Errorf("node type %q is wrong", node.Type))
return
@ -205,7 +202,7 @@ func (s *FileSaver) saveFile(ctx context.Context, chnker *chunker.Chunker, snPat
node.Content = append(node.Content, restic.ID{})
lock.Unlock()
s.saveBlob(ctx, restic.DataBlob, buf, target, func(sbr SaveBlobResponse) {
s.saveBlob(ctx, restic.DataBlob, buf, target, func(sbr saveBlobResponse) {
lock.Lock()
if !sbr.known {
fnr.stats.DataBlobs++
@ -246,7 +243,7 @@ func (s *FileSaver) saveFile(ctx context.Context, chnker *chunker.Chunker, snPat
completeBlob()
}
func (s *FileSaver) worker(ctx context.Context, jobs <-chan saveFileJob) {
func (s *fileSaver) worker(ctx context.Context, jobs <-chan saveFileJob) {
// a worker has one chunker which is reused for each file (because it contains a rather large buffer)
chnker := chunker.New(nil, s.pol)
@ -262,7 +259,7 @@ func (s *FileSaver) worker(ctx context.Context, jobs <-chan saveFileJob) {
}
}
s.saveFile(ctx, chnker, job.snPath, job.target, job.file, job.fi, job.start, func() {
s.saveFile(ctx, chnker, job.snPath, job.target, job.file, job.start, func() {
if job.completeReading != nil {
job.completeReading()
}


@ -30,11 +30,11 @@ func createTestFiles(t testing.TB, num int) (files []string) {
return files
}
func startFileSaver(ctx context.Context, t testing.TB) (*FileSaver, context.Context, *errgroup.Group) {
func startFileSaver(ctx context.Context, t testing.TB, fsInst fs.FS) (*fileSaver, context.Context, *errgroup.Group) {
wg, ctx := errgroup.WithContext(ctx)
saveBlob := func(ctx context.Context, tpe restic.BlobType, buf *Buffer, _ string, cb func(SaveBlobResponse)) {
cb(SaveBlobResponse{
saveBlob := func(ctx context.Context, tpe restic.BlobType, buf *buffer, _ string, cb func(saveBlobResponse)) {
cb(saveBlobResponse{
id: restic.Hash(buf.Data),
length: len(buf.Data),
sizeInRepo: len(buf.Data),
@ -48,9 +48,9 @@ func startFileSaver(ctx context.Context, t testing.TB) (*FileSaver, context.Cont
t.Fatal(err)
}
s := NewFileSaver(ctx, wg, saveBlob, pol, workers, workers)
s.NodeFromFileInfo = func(snPath, filename string, fi os.FileInfo, ignoreXattrListError bool) (*restic.Node, error) {
return restic.NodeFromFileInfo(filename, fi, ignoreXattrListError)
s := newFileSaver(ctx, wg, saveBlob, pol, workers, workers)
s.NodeFromFileInfo = func(snPath, filename string, meta ToNoder, ignoreXattrListError bool) (*restic.Node, error) {
return meta.ToNode(ignoreXattrListError)
}
return s, ctx, wg
@ -67,22 +67,17 @@ func TestFileSaver(t *testing.T) {
completeFn := func(*restic.Node, ItemStats) {}
testFs := fs.Local{}
s, ctx, wg := startFileSaver(ctx, t)
s, ctx, wg := startFileSaver(ctx, t, testFs)
var results []FutureNode
var results []futureNode
for _, filename := range files {
f, err := testFs.Open(filename)
f, err := testFs.OpenFile(filename, os.O_RDONLY, false)
if err != nil {
t.Fatal(err)
}
fi, err := f.Stat()
if err != nil {
t.Fatal(err)
}
ff := s.Save(ctx, filename, filename, f, fi, startFn, completeReadingFn, completeFn)
ff := s.Save(ctx, filename, filename, f, startFn, completeReadingFn, completeFn)
results = append(results, ff)
}


@ -2,8 +2,6 @@ package archiver
import (
"context"
"os"
"path/filepath"
"sort"
"github.com/restic/restic/internal/debug"
@ -22,11 +20,11 @@ type Scanner struct {
}
// NewScanner initializes a new Scanner.
func NewScanner(fs fs.FS) *Scanner {
func NewScanner(filesystem fs.FS) *Scanner {
return &Scanner{
FS: fs,
FS: filesystem,
SelectByName: func(_ string) bool { return true },
Select: func(_ string, _ os.FileInfo) bool { return true },
Select: func(_ string, _ *fs.ExtendedFileInfo, _ fs.FS) bool { return true },
Error: func(_ string, err error) error { return err },
Result: func(_ string, _ ScanStats) {},
}
@ -38,7 +36,7 @@ type ScanStats struct {
Bytes uint64
}
func (s *Scanner) scanTree(ctx context.Context, stats ScanStats, tree Tree) (ScanStats, error) {
func (s *Scanner) scanTree(ctx context.Context, stats ScanStats, tree tree) (ScanStats, error) {
// traverse the path in the file system for all leaf nodes
if tree.Leaf() {
abstarget, err := s.FS.Abs(tree.Path)
@ -83,7 +81,7 @@ func (s *Scanner) Scan(ctx context.Context, targets []string) error {
debug.Log("clean targets %v", cleanTargets)
// we're using the same tree representation as the archiver does
tree, err := NewTree(s.FS, cleanTargets)
tree, err := newTree(s.FS, cleanTargets)
if err != nil {
return err
}
@ -115,15 +113,15 @@ func (s *Scanner) scan(ctx context.Context, stats ScanStats, target string) (Sca
}
// run remaining select functions that require file information
if !s.Select(target, fi) {
if !s.Select(target, fi, s.FS) {
return stats, nil
}
switch {
case fi.Mode().IsRegular():
case fi.Mode.IsRegular():
stats.Files++
stats.Bytes += uint64(fi.Size())
case fi.Mode().IsDir():
stats.Bytes += uint64(fi.Size)
case fi.Mode.IsDir():
names, err := fs.Readdirnames(s.FS, target, fs.O_NOFOLLOW)
if err != nil {
return stats, s.Error(target, err)
@ -131,7 +129,7 @@ func (s *Scanner) scan(ctx context.Context, stats ScanStats, target string) (Sca
sort.Strings(names)
for _, name := range names {
stats, err = s.scan(ctx, stats, filepath.Join(target, name))
stats, err = s.scan(ctx, stats, s.FS.Join(target, name))
if err != nil {
return stats, err
}
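A minimal sketch of driving the reworked Scanner with the new signatures (the callbacks are illustrative; ctx is assumed to be in scope):

sc := NewScanner(fs.Local{})
sc.Select = func(_ string, _ *fs.ExtendedFileInfo, _ fs.FS) bool {
	return true // accept everything; a SelectFunc from CombineRejects would slot in here
}
sc.Result = func(item string, stats ScanStats) {
	fmt.Printf("%v: %d files, %d bytes\n", item, stats.Files, stats.Bytes)
}
_ = sc.Scan(ctx, []string{"/home/user"})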


@ -56,8 +56,8 @@ func TestScanner(t *testing.T) {
},
},
},
selFn: func(item string, fi os.FileInfo) bool {
if fi.IsDir() {
selFn: func(item string, fi *fs.ExtendedFileInfo, fs fs.FS) bool {
if fi.Mode.IsDir() {
return true
}


@ -95,17 +95,17 @@ func TestCreateFiles(t testing.TB, target string, dir TestDir) {
t.Fatal(err)
}
case TestSymlink:
err := fs.Symlink(filepath.FromSlash(it.Target), targetPath)
err := os.Symlink(filepath.FromSlash(it.Target), targetPath)
if err != nil {
t.Fatal(err)
}
case TestHardlink:
err := fs.Link(filepath.Join(target, filepath.FromSlash(it.Target)), targetPath)
err := os.Link(filepath.Join(target, filepath.FromSlash(it.Target)), targetPath)
if err != nil {
t.Fatal(err)
}
case TestDir:
err := fs.Mkdir(targetPath, 0755)
err := os.Mkdir(targetPath, 0755)
if err != nil {
t.Fatal(err)
}
@ -157,7 +157,7 @@ func TestEnsureFiles(t testing.TB, target string, dir TestDir) {
// first, test that all items are there
TestWalkFiles(t, target, dir, func(path string, item interface{}) error {
fi, err := fs.Lstat(path)
fi, err := os.Lstat(path)
if err != nil {
return err
}
@ -169,7 +169,7 @@ func TestEnsureFiles(t testing.TB, target string, dir TestDir) {
}
return nil
case TestFile:
if !fs.IsRegularFile(fi) {
if !fi.Mode().IsRegular() {
t.Errorf("is not a regular file: %v", path)
return nil
}
@ -188,7 +188,7 @@ func TestEnsureFiles(t testing.TB, target string, dir TestDir) {
return nil
}
target, err := fs.Readlink(path)
target, err := os.Readlink(path)
if err != nil {
return err
}
@ -208,7 +208,7 @@ func TestEnsureFiles(t testing.TB, target string, dir TestDir) {
})
// then, traverse the directory again, looking for additional files
err := fs.Walk(target, func(path string, fi os.FileInfo, err error) error {
err := filepath.Walk(target, func(path string, fi os.FileInfo, err error) error {
if err != nil {
return err
}
@ -289,7 +289,7 @@ func TestEnsureTree(ctx context.Context, t testing.TB, prefix string, repo resti
switch e := entry.(type) {
case TestDir:
if node.Type != "dir" {
if node.Type != restic.NodeTypeDir {
t.Errorf("tree node %v has wrong type %q, want %q", nodePrefix, node.Type, "dir")
return
}
@ -301,13 +301,13 @@ func TestEnsureTree(ctx context.Context, t testing.TB, prefix string, repo resti
TestEnsureTree(ctx, t, path.Join(prefix, node.Name), repo, *node.Subtree, e)
case TestFile:
if node.Type != "file" {
if node.Type != restic.NodeTypeFile {
t.Errorf("tree node %v has wrong type %q, want %q", nodePrefix, node.Type, "file")
}
TestEnsureFileContent(ctx, t, repo, nodePrefix, node, e)
case TestSymlink:
if node.Type != "symlink" {
t.Errorf("tree node %v has wrong type %q, want %q", nodePrefix, node.Type, "file")
if node.Type != restic.NodeTypeSymlink {
t.Errorf("tree node %v has wrong type %q, want %q", nodePrefix, node.Type, "symlink")
}
if e.Target != node.LinkTarget {


@ -54,7 +54,7 @@ func (t *MockT) Errorf(msg string, args ...interface{}) {
func createFilesAt(t testing.TB, targetdir string, files map[string]interface{}) {
for name, item := range files {
target := filepath.Join(targetdir, filepath.FromSlash(name))
err := fs.MkdirAll(filepath.Dir(target), 0700)
err := os.MkdirAll(filepath.Dir(target), 0700)
if err != nil {
t.Fatal(err)
}
@ -66,7 +66,7 @@ func createFilesAt(t testing.TB, targetdir string, files map[string]interface{})
t.Fatal(err)
}
case TestSymlink:
err := fs.Symlink(filepath.FromSlash(it.Target), target)
err := os.Symlink(filepath.FromSlash(it.Target), target)
if err != nil {
t.Fatal(err)
}
@ -105,7 +105,7 @@ func TestTestCreateFiles(t *testing.T) {
t.Run("", func(t *testing.T) {
tempdir := filepath.Join(tempdir, fmt.Sprintf("test-%d", i))
err := fs.MkdirAll(tempdir, 0700)
err := os.MkdirAll(tempdir, 0700)
if err != nil {
t.Fatal(err)
}
@ -114,7 +114,7 @@ func TestTestCreateFiles(t *testing.T) {
for name, item := range test.files {
targetPath := filepath.Join(tempdir, filepath.FromSlash(name))
fi, err := fs.Lstat(targetPath)
fi, err := os.Lstat(targetPath)
if err != nil {
t.Error(err)
continue
@ -122,7 +122,7 @@ func TestTestCreateFiles(t *testing.T) {
switch node := item.(type) {
case TestFile:
if !fs.IsRegularFile(fi) {
if !fi.Mode().IsRegular() {
t.Errorf("is not regular file: %v", name)
continue
}
@ -142,7 +142,7 @@ func TestTestCreateFiles(t *testing.T) {
continue
}
target, err := fs.Readlink(targetPath)
target, err := os.Readlink(targetPath)
if err != nil {
t.Error(err)
continue
@ -455,7 +455,7 @@ func TestTestEnsureSnapshot(t *testing.T) {
tempdir := rtest.TempDir(t)
targetDir := filepath.Join(tempdir, "target")
err := fs.Mkdir(targetDir, 0700)
err := os.Mkdir(targetDir, 0700)
if err != nil {
t.Fatal(err)
}


@ -9,7 +9,7 @@ import (
"github.com/restic/restic/internal/fs"
)
// Tree recursively defines how a snapshot should look like when
// tree recursively defines what a snapshot should look like when
// archived.
//
// When `Path` is set, this is a leaf node and the contents of `Path` should be
@ -20,8 +20,8 @@ import (
//
// `FileInfoPath` is used to extract metadata for intermediate (=non-leaf)
// trees.
type Tree struct {
Nodes map[string]Tree
type tree struct {
Nodes map[string]tree
Path string // where the files/dirs to be saved are found
FileInfoPath string // where the dir can be found that is not included itself, but its subdirs
Root string // parent directory of the tree
@ -95,13 +95,13 @@ func rootDirectory(fs fs.FS, target string) string {
}
// Add adds a new file or directory to the tree.
func (t *Tree) Add(fs fs.FS, path string) error {
func (t *tree) Add(fs fs.FS, path string) error {
if path == "" {
panic("invalid path (empty string)")
}
if t.Nodes == nil {
t.Nodes = make(map[string]Tree)
t.Nodes = make(map[string]tree)
}
pc, virtualPrefix := pathComponents(fs, path, false)
@ -111,7 +111,7 @@ func (t *Tree) Add(fs fs.FS, path string) error {
name := pc[0]
root := rootDirectory(fs, path)
tree := Tree{Root: root}
tree := tree{Root: root}
origName := name
i := 0
@ -152,63 +152,63 @@ func (t *Tree) Add(fs fs.FS, path string) error {
}
// add adds a new target path into the tree.
func (t *Tree) add(fs fs.FS, target, root string, pc []string) error {
func (t *tree) add(fs fs.FS, target, root string, pc []string) error {
if len(pc) == 0 {
return errors.Errorf("invalid path %q", target)
}
if t.Nodes == nil {
t.Nodes = make(map[string]Tree)
t.Nodes = make(map[string]tree)
}
name := pc[0]
if len(pc) == 1 {
tree, ok := t.Nodes[name]
node, ok := t.Nodes[name]
if !ok {
t.Nodes[name] = Tree{Path: target}
t.Nodes[name] = tree{Path: target}
return nil
}
if tree.Path != "" {
if node.Path != "" {
return errors.Errorf("path is already set for target %v", target)
}
tree.Path = target
t.Nodes[name] = tree
node.Path = target
t.Nodes[name] = node
return nil
}
tree := Tree{}
node := tree{}
if other, ok := t.Nodes[name]; ok {
tree = other
node = other
}
subroot := fs.Join(root, name)
tree.FileInfoPath = subroot
node.FileInfoPath = subroot
err := tree.add(fs, target, subroot, pc[1:])
err := node.add(fs, target, subroot, pc[1:])
if err != nil {
return err
}
t.Nodes[name] = tree
t.Nodes[name] = node
return nil
}
func (t Tree) String() string {
func (t tree) String() string {
return formatTree(t, "")
}
// Leaf returns true if this is a leaf node, which means Path is set to a
// non-empty string and the contents of Path should be inserted at this point
// in the tree.
func (t Tree) Leaf() bool {
func (t tree) Leaf() bool {
return t.Path != ""
}
// NodeNames returns the sorted list of subtree names.
func (t Tree) NodeNames() []string {
func (t tree) NodeNames() []string {
// iterate over the nodes of a tree in lexicographic (=deterministic) order
names := make([]string, 0, len(t.Nodes))
for name := range t.Nodes {
@ -219,7 +219,7 @@ func (t Tree) NodeNames() []string {
}
// formatTree returns a text representation of the tree t.
func formatTree(t Tree, indent string) (s string) {
func formatTree(t tree, indent string) (s string) {
for name, node := range t.Nodes {
s += fmt.Sprintf("%v/%v, root %q, path %q, meta %q\n", indent, name, node.Root, node.Path, node.FileInfoPath)
s += formatTree(node, indent+" ")
@ -228,7 +228,7 @@ func formatTree(t Tree, indent string) (s string) {
}
// unrollTree unrolls the tree so that only leaf nodes have Path set.
func unrollTree(f fs.FS, t *Tree) error {
func unrollTree(f fs.FS, t *tree) error {
// if the current tree is a leaf node (Path is set) and has additional
// nodes, add the contents of Path to the nodes.
if t.Path != "" && len(t.Nodes) > 0 {
@ -252,7 +252,7 @@ func unrollTree(f fs.FS, t *Tree) error {
return errors.Errorf("tree unrollTree: collision on path, node %#v, path %q", node, f.Join(t.Path, entry))
}
t.Nodes[entry] = Tree{Path: f.Join(t.Path, entry)}
t.Nodes[entry] = tree{Path: f.Join(t.Path, entry)}
}
t.Path = ""
}
@ -269,10 +269,10 @@ func unrollTree(f fs.FS, t *Tree) error {
return nil
}
// NewTree creates a Tree from the target files/directories.
func NewTree(fs fs.FS, targets []string) (*Tree, error) {
// newTree creates a tree from the target files/directories.
func newTree(fs fs.FS, targets []string) (*tree, error) {
debug.Log("targets: %v", targets)
tree := &Tree{}
tree := &tree{}
seen := make(map[string]struct{})
for _, target := range targets {
target = fs.Clean(target)
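To make the now-unexported tree semantics concrete, a short in-package sketch:

tr, err := newTree(fs.Local{}, []string{"foo/user1", "foo/user2"})
if err != nil {
	panic(err)
}
// tr holds one intermediate node "foo" (FileInfoPath "foo") with two leaf
// nodes whose Path fields point at the targets; tr.String() prints the layout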


@ -9,20 +9,20 @@ import (
"golang.org/x/sync/errgroup"
)
// TreeSaver concurrently saves incoming trees to the repo.
type TreeSaver struct {
saveBlob SaveBlobFn
// treeSaver concurrently saves incoming trees to the repo.
type treeSaver struct {
saveBlob saveBlobFn
errFn ErrorFunc
ch chan<- saveTreeJob
}
// NewTreeSaver returns a new tree saver. A worker pool with treeWorkers is
// newTreeSaver returns a new tree saver. A worker pool with treeWorkers is
// started; it is stopped when ctx is cancelled.
func NewTreeSaver(ctx context.Context, wg *errgroup.Group, treeWorkers uint, saveBlob SaveBlobFn, errFn ErrorFunc) *TreeSaver {
func newTreeSaver(ctx context.Context, wg *errgroup.Group, treeWorkers uint, saveBlob saveBlobFn, errFn ErrorFunc) *treeSaver {
ch := make(chan saveTreeJob)
s := &TreeSaver{
s := &treeSaver{
ch: ch,
saveBlob: saveBlob,
errFn: errFn,
@ -37,12 +37,12 @@ func NewTreeSaver(ctx context.Context, wg *errgroup.Group, treeWorkers uint, sav
return s
}
func (s *TreeSaver) TriggerShutdown() {
func (s *treeSaver) TriggerShutdown() {
close(s.ch)
}
// Save stores the dir d and returns the data once it has been completed.
func (s *TreeSaver) Save(ctx context.Context, snPath string, target string, node *restic.Node, nodes []FutureNode, complete CompleteFunc) FutureNode {
func (s *treeSaver) Save(ctx context.Context, snPath string, target string, node *restic.Node, nodes []futureNode, complete fileCompleteFunc) futureNode {
fn, ch := newFutureNode()
job := saveTreeJob{
snPath: snPath,
@ -66,13 +66,13 @@ type saveTreeJob struct {
snPath string
target string
node *restic.Node
nodes []FutureNode
nodes []futureNode
ch chan<- futureNodeResult
complete CompleteFunc
complete fileCompleteFunc
}
// save stores the nodes as a tree in the repo.
func (s *TreeSaver) save(ctx context.Context, job *saveTreeJob) (*restic.Node, ItemStats, error) {
func (s *treeSaver) save(ctx context.Context, job *saveTreeJob) (*restic.Node, ItemStats, error) {
var stats ItemStats
node := job.node
nodes := job.nodes
@ -84,7 +84,7 @@ func (s *TreeSaver) save(ctx context.Context, job *saveTreeJob) (*restic.Node, I
for i, fn := range nodes {
// fn is a copy, so clear the original value explicitly
nodes[i] = FutureNode{}
nodes[i] = futureNode{}
fnr := fn.take(ctx)
// return the error if it wasn't ignored
@ -128,9 +128,9 @@ func (s *TreeSaver) save(ctx context.Context, job *saveTreeJob) (*restic.Node, I
return nil, stats, err
}
b := &Buffer{Data: buf}
ch := make(chan SaveBlobResponse, 1)
s.saveBlob(ctx, restic.TreeBlob, b, job.target, func(res SaveBlobResponse) {
b := &buffer{Data: buf}
ch := make(chan saveBlobResponse, 1)
s.saveBlob(ctx, restic.TreeBlob, b, job.target, func(res saveBlobResponse) {
ch <- res
})
@ -149,7 +149,7 @@ func (s *TreeSaver) save(ctx context.Context, job *saveTreeJob) (*restic.Node, I
}
}
func (s *TreeSaver) worker(ctx context.Context, jobs <-chan saveTreeJob) error {
func (s *treeSaver) worker(ctx context.Context, jobs <-chan saveTreeJob) error {
for {
var job saveTreeJob
var ok bool


@ -12,8 +12,8 @@ import (
"golang.org/x/sync/errgroup"
)
func treeSaveHelper(_ context.Context, _ restic.BlobType, buf *Buffer, _ string, cb func(res SaveBlobResponse)) {
cb(SaveBlobResponse{
func treeSaveHelper(_ context.Context, _ restic.BlobType, buf *buffer, _ string, cb func(res saveBlobResponse)) {
cb(saveBlobResponse{
id: restic.NewRandomID(),
known: false,
length: len(buf.Data),
@ -21,7 +21,7 @@ func treeSaveHelper(_ context.Context, _ restic.BlobType, buf *Buffer, _ string,
})
}
func setupTreeSaver() (context.Context, context.CancelFunc, *TreeSaver, func() error) {
func setupTreeSaver() (context.Context, context.CancelFunc, *treeSaver, func() error) {
ctx, cancel := context.WithCancel(context.Background())
wg, ctx := errgroup.WithContext(ctx)
@ -29,7 +29,7 @@ func setupTreeSaver() (context.Context, context.CancelFunc, *TreeSaver, func() e
return err
}
b := NewTreeSaver(ctx, wg, uint(runtime.NumCPU()), treeSaveHelper, errFn)
b := newTreeSaver(ctx, wg, uint(runtime.NumCPU()), treeSaveHelper, errFn)
shutdown := func() error {
b.TriggerShutdown()
@ -43,7 +43,7 @@ func TestTreeSaver(t *testing.T) {
ctx, cancel, b, shutdown := setupTreeSaver()
defer cancel()
var results []FutureNode
var results []futureNode
for i := 0; i < 20; i++ {
node := &restic.Node{
@ -83,13 +83,13 @@ func TestTreeSaverError(t *testing.T) {
ctx, cancel, b, shutdown := setupTreeSaver()
defer cancel()
var results []FutureNode
var results []futureNode
for i := 0; i < test.trees; i++ {
node := &restic.Node{
Name: fmt.Sprintf("file-%d", i),
}
nodes := []FutureNode{
nodes := []futureNode{
newFutureNodeWithResult(futureNodeResult{node: &restic.Node{
Name: fmt.Sprintf("child-%d", i),
}}),
@ -128,7 +128,7 @@ func TestTreeSaverDuplicates(t *testing.T) {
node := &restic.Node{
Name: "file",
}
nodes := []FutureNode{
nodes := []futureNode{
newFutureNodeWithResult(futureNodeResult{node: &restic.Node{
Name: "child",
}}),


@ -12,7 +12,7 @@ import (
)
// debug.Log requires tree.String.
var _ fmt.Stringer = Tree{}
var _ fmt.Stringer = tree{}
func TestPathComponents(t *testing.T) {
var tests = []struct {
@ -142,20 +142,20 @@ func TestTree(t *testing.T) {
var tests = []struct {
targets []string
src TestDir
want Tree
want tree
unix bool
win bool
mustError bool
}{
{
targets: []string{"foo"},
want: Tree{Nodes: map[string]Tree{
want: tree{Nodes: map[string]tree{
"foo": {Path: "foo", Root: "."},
}},
},
{
targets: []string{"foo", "bar", "baz"},
want: Tree{Nodes: map[string]Tree{
want: tree{Nodes: map[string]tree{
"foo": {Path: "foo", Root: "."},
"bar": {Path: "bar", Root: "."},
"baz": {Path: "baz", Root: "."},
@ -163,8 +163,8 @@ func TestTree(t *testing.T) {
},
{
targets: []string{"foo/user1", "foo/user2", "foo/other"},
want: Tree{Nodes: map[string]Tree{
"foo": {Root: ".", FileInfoPath: "foo", Nodes: map[string]Tree{
want: tree{Nodes: map[string]tree{
"foo": {Root: ".", FileInfoPath: "foo", Nodes: map[string]tree{
"user1": {Path: filepath.FromSlash("foo/user1")},
"user2": {Path: filepath.FromSlash("foo/user2")},
"other": {Path: filepath.FromSlash("foo/other")},
@ -173,9 +173,9 @@ func TestTree(t *testing.T) {
},
{
targets: []string{"foo/work/user1", "foo/work/user2"},
want: Tree{Nodes: map[string]Tree{
"foo": {Root: ".", FileInfoPath: "foo", Nodes: map[string]Tree{
"work": {FileInfoPath: filepath.FromSlash("foo/work"), Nodes: map[string]Tree{
want: tree{Nodes: map[string]tree{
"foo": {Root: ".", FileInfoPath: "foo", Nodes: map[string]tree{
"work": {FileInfoPath: filepath.FromSlash("foo/work"), Nodes: map[string]tree{
"user1": {Path: filepath.FromSlash("foo/work/user1")},
"user2": {Path: filepath.FromSlash("foo/work/user2")},
}},
@ -184,50 +184,50 @@ func TestTree(t *testing.T) {
},
{
targets: []string{"foo/user1", "bar/user1", "foo/other"},
want: Tree{Nodes: map[string]Tree{
"foo": {Root: ".", FileInfoPath: "foo", Nodes: map[string]Tree{
want: tree{Nodes: map[string]tree{
"foo": {Root: ".", FileInfoPath: "foo", Nodes: map[string]tree{
"user1": {Path: filepath.FromSlash("foo/user1")},
"other": {Path: filepath.FromSlash("foo/other")},
}},
"bar": {Root: ".", FileInfoPath: "bar", Nodes: map[string]Tree{
"bar": {Root: ".", FileInfoPath: "bar", Nodes: map[string]tree{
"user1": {Path: filepath.FromSlash("bar/user1")},
}},
}},
},
{
targets: []string{"../work"},
want: Tree{Nodes: map[string]Tree{
want: tree{Nodes: map[string]tree{
"work": {Root: "..", Path: filepath.FromSlash("../work")},
}},
},
{
targets: []string{"../work/other"},
want: Tree{Nodes: map[string]Tree{
"work": {Root: "..", FileInfoPath: filepath.FromSlash("../work"), Nodes: map[string]Tree{
want: tree{Nodes: map[string]tree{
"work": {Root: "..", FileInfoPath: filepath.FromSlash("../work"), Nodes: map[string]tree{
"other": {Path: filepath.FromSlash("../work/other")},
}},
}},
},
{
targets: []string{"foo/user1", "../work/other", "foo/user2"},
want: Tree{Nodes: map[string]Tree{
"foo": {Root: ".", FileInfoPath: "foo", Nodes: map[string]Tree{
want: tree{Nodes: map[string]tree{
"foo": {Root: ".", FileInfoPath: "foo", Nodes: map[string]tree{
"user1": {Path: filepath.FromSlash("foo/user1")},
"user2": {Path: filepath.FromSlash("foo/user2")},
}},
"work": {Root: "..", FileInfoPath: filepath.FromSlash("../work"), Nodes: map[string]Tree{
"work": {Root: "..", FileInfoPath: filepath.FromSlash("../work"), Nodes: map[string]tree{
"other": {Path: filepath.FromSlash("../work/other")},
}},
}},
},
{
targets: []string{"foo/user1", "../foo/other", "foo/user2"},
want: Tree{Nodes: map[string]Tree{
"foo": {Root: ".", FileInfoPath: "foo", Nodes: map[string]Tree{
want: tree{Nodes: map[string]tree{
"foo": {Root: ".", FileInfoPath: "foo", Nodes: map[string]tree{
"user1": {Path: filepath.FromSlash("foo/user1")},
"user2": {Path: filepath.FromSlash("foo/user2")},
}},
"foo-1": {Root: "..", FileInfoPath: filepath.FromSlash("../foo"), Nodes: map[string]Tree{
"foo-1": {Root: "..", FileInfoPath: filepath.FromSlash("../foo"), Nodes: map[string]tree{
"other": {Path: filepath.FromSlash("../foo/other")},
}},
}},
@ -240,11 +240,11 @@ func TestTree(t *testing.T) {
},
},
targets: []string{"foo", "foo/work"},
want: Tree{Nodes: map[string]Tree{
want: tree{Nodes: map[string]tree{
"foo": {
Root: ".",
FileInfoPath: "foo",
Nodes: map[string]Tree{
Nodes: map[string]tree{
"file": {Path: filepath.FromSlash("foo/file")},
"work": {Path: filepath.FromSlash("foo/work")},
},
@ -261,11 +261,11 @@ func TestTree(t *testing.T) {
},
},
targets: []string{"foo/work", "foo"},
want: Tree{Nodes: map[string]Tree{
want: tree{Nodes: map[string]tree{
"foo": {
Root: ".",
FileInfoPath: "foo",
Nodes: map[string]Tree{
Nodes: map[string]tree{
"file": {Path: filepath.FromSlash("foo/file")},
"work": {Path: filepath.FromSlash("foo/work")},
},
@ -282,11 +282,11 @@ func TestTree(t *testing.T) {
},
},
targets: []string{"foo/work", "foo/work/user2"},
want: Tree{Nodes: map[string]Tree{
"foo": {Root: ".", FileInfoPath: "foo", Nodes: map[string]Tree{
want: tree{Nodes: map[string]tree{
"foo": {Root: ".", FileInfoPath: "foo", Nodes: map[string]tree{
"work": {
FileInfoPath: filepath.FromSlash("foo/work"),
Nodes: map[string]Tree{
Nodes: map[string]tree{
"user1": {Path: filepath.FromSlash("foo/work/user1")},
"user2": {Path: filepath.FromSlash("foo/work/user2")},
},
@ -304,10 +304,10 @@ func TestTree(t *testing.T) {
},
},
targets: []string{"foo/work/user2", "foo/work"},
want: Tree{Nodes: map[string]Tree{
"foo": {Root: ".", FileInfoPath: "foo", Nodes: map[string]Tree{
want: tree{Nodes: map[string]tree{
"foo": {Root: ".", FileInfoPath: "foo", Nodes: map[string]tree{
"work": {FileInfoPath: filepath.FromSlash("foo/work"),
Nodes: map[string]Tree{
Nodes: map[string]tree{
"user1": {Path: filepath.FromSlash("foo/work/user1")},
"user2": {Path: filepath.FromSlash("foo/work/user2")},
},
@ -332,12 +332,12 @@ func TestTree(t *testing.T) {
},
},
targets: []string{"foo/work/user2/data/secret", "foo"},
want: Tree{Nodes: map[string]Tree{
"foo": {Root: ".", FileInfoPath: "foo", Nodes: map[string]Tree{
want: tree{Nodes: map[string]tree{
"foo": {Root: ".", FileInfoPath: "foo", Nodes: map[string]tree{
"other": {Path: filepath.FromSlash("foo/other")},
"work": {FileInfoPath: filepath.FromSlash("foo/work"), Nodes: map[string]Tree{
"user2": {FileInfoPath: filepath.FromSlash("foo/work/user2"), Nodes: map[string]Tree{
"data": {FileInfoPath: filepath.FromSlash("foo/work/user2/data"), Nodes: map[string]Tree{
"work": {FileInfoPath: filepath.FromSlash("foo/work"), Nodes: map[string]tree{
"user2": {FileInfoPath: filepath.FromSlash("foo/work/user2"), Nodes: map[string]tree{
"data": {FileInfoPath: filepath.FromSlash("foo/work/user2/data"), Nodes: map[string]tree{
"secret": {
Path: filepath.FromSlash("foo/work/user2/data/secret"),
},
@ -368,10 +368,10 @@ func TestTree(t *testing.T) {
},
unix: true,
targets: []string{"mnt/driveA", "mnt/driveA/work/driveB"},
want: Tree{Nodes: map[string]Tree{
"mnt": {Root: ".", FileInfoPath: filepath.FromSlash("mnt"), Nodes: map[string]Tree{
"driveA": {FileInfoPath: filepath.FromSlash("mnt/driveA"), Nodes: map[string]Tree{
"work": {FileInfoPath: filepath.FromSlash("mnt/driveA/work"), Nodes: map[string]Tree{
want: tree{Nodes: map[string]tree{
"mnt": {Root: ".", FileInfoPath: filepath.FromSlash("mnt"), Nodes: map[string]tree{
"driveA": {FileInfoPath: filepath.FromSlash("mnt/driveA"), Nodes: map[string]tree{
"work": {FileInfoPath: filepath.FromSlash("mnt/driveA/work"), Nodes: map[string]tree{
"driveB": {
Path: filepath.FromSlash("mnt/driveA/work/driveB"),
},
@ -384,9 +384,9 @@ func TestTree(t *testing.T) {
},
{
targets: []string{"foo/work/user", "foo/work/user"},
want: Tree{Nodes: map[string]Tree{
"foo": {Root: ".", FileInfoPath: "foo", Nodes: map[string]Tree{
"work": {FileInfoPath: filepath.FromSlash("foo/work"), Nodes: map[string]Tree{
want: tree{Nodes: map[string]tree{
"foo": {Root: ".", FileInfoPath: "foo", Nodes: map[string]tree{
"work": {FileInfoPath: filepath.FromSlash("foo/work"), Nodes: map[string]tree{
"user": {Path: filepath.FromSlash("foo/work/user")},
}},
}},
@ -394,9 +394,9 @@ func TestTree(t *testing.T) {
},
{
targets: []string{"./foo/work/user", "foo/work/user"},
want: Tree{Nodes: map[string]Tree{
"foo": {Root: ".", FileInfoPath: "foo", Nodes: map[string]Tree{
"work": {FileInfoPath: filepath.FromSlash("foo/work"), Nodes: map[string]Tree{
want: tree{Nodes: map[string]tree{
"foo": {Root: ".", FileInfoPath: "foo", Nodes: map[string]tree{
"work": {FileInfoPath: filepath.FromSlash("foo/work"), Nodes: map[string]tree{
"user": {Path: filepath.FromSlash("foo/work/user")},
}},
}},
@ -405,10 +405,10 @@ func TestTree(t *testing.T) {
{
win: true,
targets: []string{`c:\users\foobar\temp`},
want: Tree{Nodes: map[string]Tree{
"c": {Root: `c:\`, FileInfoPath: `c:\`, Nodes: map[string]Tree{
"users": {FileInfoPath: `c:\users`, Nodes: map[string]Tree{
"foobar": {FileInfoPath: `c:\users\foobar`, Nodes: map[string]Tree{
want: tree{Nodes: map[string]tree{
"c": {Root: `c:\`, FileInfoPath: `c:\`, Nodes: map[string]tree{
"users": {FileInfoPath: `c:\users`, Nodes: map[string]tree{
"foobar": {FileInfoPath: `c:\users\foobar`, Nodes: map[string]tree{
"temp": {Path: `c:\users\foobar\temp`},
}},
}},
@ -445,7 +445,7 @@ func TestTree(t *testing.T) {
back := rtest.Chdir(t, tempdir)
defer back()
tree, err := NewTree(fs.Local{}, test.targets)
tree, err := newTree(fs.Local{}, test.targets)
if test.mustError {
if err == nil {
t.Fatal("expected error, got nil")


@ -37,6 +37,8 @@ type Backend struct {
prefix string
listMaxItems int
layout.Layout
accessTier blob.AccessTier
}
const saveLargeSize = 256 * 1024 * 1024
@ -60,6 +62,11 @@ func open(cfg Config, rt http.RoundTripper) (*Backend, error) {
} else {
endpointSuffix = "core.windows.net"
}
if cfg.AccountName == "" {
return nil, errors.Fatalf("unable to open Azure backend: Account name ($AZURE_ACCOUNT_NAME) is empty")
}
url := fmt.Sprintf("https://%s.blob.%s/%s", cfg.AccountName, endpointSuffix, cfg.Container)
opts := &azContainer.ClientOptions{
ClientOptions: azcore.ClientOptions{
@ -124,20 +131,33 @@ func open(cfg Config, rt http.RoundTripper) (*Backend, error) {
}
}
var accessTier blob.AccessTier
// if the access tier is not supported, then we will not set the access tier; during the upload process,
// the value will be inferred from the default configured on the storage account.
for _, tier := range supportedAccessTiers() {
if strings.EqualFold(string(tier), cfg.AccessTier) {
accessTier = tier
debug.Log(" - using access tier %v", accessTier)
break
}
}
be := &Backend{
container: client,
cfg: cfg,
connections: cfg.Connections,
Layout: &layout.DefaultLayout{
Path: cfg.Prefix,
Join: path.Join,
},
Layout: layout.NewDefaultLayout(cfg.Prefix, path.Join),
listMaxItems: defaultListMaxItems,
accessTier: accessTier,
}
return be, nil
}
func supportedAccessTiers() []blob.AccessTier {
return []blob.AccessTier{blob.AccessTierHot, blob.AccessTierCool, blob.AccessTierCold, blob.AccessTierArchive}
}
// Open opens the Azure backend at specified container.
func Open(_ context.Context, cfg Config, rt http.RoundTripper) (*Backend, error) {
return open(cfg, rt)
@ -197,11 +217,6 @@ func (be *Backend) IsPermanentError(err error) bool {
return false
}
// Join combines path components with slashes.
func (be *Backend) Join(p ...string) string {
return path.Join(p...)
}
func (be *Backend) Connections() uint {
return be.connections
}
@ -221,25 +236,39 @@ func (be *Backend) Path() string {
return be.prefix
}
// useAccessTier determines whether to apply the configured access tier to a given file.
// For archive access tier, only data files are stored using that class; metadata
// must remain instantly accessible.
func (be *Backend) useAccessTier(h backend.Handle) bool {
notArchiveClass := !strings.EqualFold(be.cfg.AccessTier, "archive")
isDataFile := h.Type == backend.PackFile && !h.IsMetadata
return isDataFile || notArchiveClass
}
// Save stores data in the backend at the handle.
func (be *Backend) Save(ctx context.Context, h backend.Handle, rd backend.RewindReader) error {
objName := be.Filename(h)
debug.Log("InsertObject(%v, %v)", be.cfg.AccountName, objName)
var accessTier blob.AccessTier
if be.useAccessTier(h) {
accessTier = be.accessTier
}
var err error
if rd.Length() < saveLargeSize {
// if it's smaller than 256 MiB, then just create the file directly from the reader
err = be.saveSmall(ctx, objName, rd)
err = be.saveSmall(ctx, objName, rd, accessTier)
} else {
// otherwise use the more complicated method
err = be.saveLarge(ctx, objName, rd)
err = be.saveLarge(ctx, objName, rd, accessTier)
}
return err
}
func (be *Backend) saveSmall(ctx context.Context, objName string, rd backend.RewindReader) error {
func (be *Backend) saveSmall(ctx context.Context, objName string, rd backend.RewindReader, accessTier blob.AccessTier) error {
blockBlobClient := be.container.NewBlockBlobClient(objName)
// upload it as a new "block", use the base64 hash for the ID
@ -260,11 +289,13 @@ func (be *Backend) saveSmall(ctx context.Context, objName string, rd backend.Rew
}
blocks := []string{id}
_, err = blockBlobClient.CommitBlockList(ctx, blocks, &blockblob.CommitBlockListOptions{})
_, err = blockBlobClient.CommitBlockList(ctx, blocks, &blockblob.CommitBlockListOptions{
Tier: &accessTier,
})
return errors.Wrap(err, "CommitBlockList")
}
func (be *Backend) saveLarge(ctx context.Context, objName string, rd backend.RewindReader) error {
func (be *Backend) saveLarge(ctx context.Context, objName string, rd backend.RewindReader, accessTier blob.AccessTier) error {
blockBlobClient := be.container.NewBlockBlobClient(objName)
buf := make([]byte, 100*1024*1024)
@ -311,7 +342,9 @@ func (be *Backend) saveLarge(ctx context.Context, objName string, rd backend.Rew
return errors.Errorf("wrote %d bytes instead of the expected %d bytes", uploadedBytes, rd.Length())
}
_, err := blockBlobClient.CommitBlockList(ctx, blocks, &blockblob.CommitBlockListOptions{})
_, err := blockBlobClient.CommitBlockList(ctx, blocks, &blockblob.CommitBlockListOptions{
Tier: &accessTier,
})
debug.Log("uploaded %d parts: %v", len(blocks), blocks)
return errors.Wrap(err, "CommitBlockList")
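The access-tier support above has two halves: open() validates the configured tier against the tiers the SDK knows (anything else stays unset, so the storage account's default applies), and useAccessTier keeps metadata out of the archive tier because archived blobs are not instantly readable. A standalone sketch of that decision logic, with simplified stand-ins for backend.Handle and the SDK's blob.AccessTier:

```go
package main

import (
	"fmt"
	"strings"
)

// handle is a simplified stand-in for restic's backend.Handle.
type handle struct {
	isPackFile bool
	isMetadata bool
}

var supportedTiers = []string{"Hot", "Cool", "Cold", "Archive"}

// validateTier mirrors the loop in open(): an unknown or empty tier stays
// unset, so the storage account's default applies at upload time.
func validateTier(cfg string) string {
	for _, t := range supportedTiers {
		if strings.EqualFold(t, cfg) {
			return t
		}
	}
	return ""
}

// useTier mirrors useAccessTier: with the archive tier configured, only
// data files get it; metadata must remain instantly accessible.
func useTier(cfgTier string, h handle) bool {
	notArchive := !strings.EqualFold(cfgTier, "archive")
	isDataFile := h.isPackFile && !h.isMetadata
	return isDataFile || notArchive
}

func main() {
	tier := validateTier("archive") // case-insensitive match -> "Archive"
	fmt.Println(useTier(tier, handle{isPackFile: true})) // true: pack data
	fmt.Println(useTier(tier, handle{isMetadata: true})) // false: metadata
}
```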

View file

@ -23,6 +23,7 @@ type Config struct {
Prefix string
Connections uint `option:"connections" help:"set a limit for the number of concurrent connections (default: 5)"`
AccessTier string `option:"access-tier" help:"set the access tier for the blob storage (default: inferred from the storage account defaults)"`
}
// NewConfig returns a new Config with the default values filled in.
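Assuming this option is wired through restic's usual extended-option mechanism, the tier would be chosen at invocation time with something like `-o azure.access-tier=Cool`; the exact spelling is inferred from the option tag above and is not shown in this diff.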

View file

@ -110,10 +110,7 @@ func Open(ctx context.Context, cfg Config, rt http.RoundTripper) (backend.Backen
client: client,
bucket: bucket,
cfg: cfg,
Layout: &layout.DefaultLayout{
Join: path.Join,
Path: cfg.Prefix,
},
Layout: layout.NewDefaultLayout(cfg.Prefix, path.Join),
listMaxItems: defaultListMaxItems,
canDelete: true,
}
@ -146,10 +143,7 @@ func Create(ctx context.Context, cfg Config, rt http.RoundTripper) (backend.Back
client: client,
bucket: bucket,
cfg: cfg,
Layout: &layout.DefaultLayout{
Join: path.Join,
Path: cfg.Prefix,
},
Layout: layout.NewDefaultLayout(cfg.Prefix, path.Join),
listMaxItems: defaultListMaxItems,
}
return be, nil

View file

@ -12,7 +12,6 @@ import (
"github.com/pkg/errors"
"github.com/restic/restic/internal/backend"
"github.com/restic/restic/internal/debug"
"github.com/restic/restic/internal/fs"
"github.com/restic/restic/internal/restic"
)
@ -54,7 +53,7 @@ const cachedirTagSignature = "Signature: 8a477f597d28d172789f06886806bc55\n"
func writeCachedirTag(dir string) error {
tagfile := filepath.Join(dir, "CACHEDIR.TAG")
f, err := fs.OpenFile(tagfile, os.O_CREATE|os.O_EXCL|os.O_WRONLY, fileMode)
f, err := os.OpenFile(tagfile, os.O_CREATE|os.O_EXCL|os.O_WRONLY, fileMode)
if err != nil {
if errors.Is(err, os.ErrExist) {
return nil
@ -85,7 +84,7 @@ func New(id string, basedir string) (c *Cache, err error) {
}
}
err = fs.MkdirAll(basedir, dirMode)
err = os.MkdirAll(basedir, dirMode)
if err != nil {
return nil, errors.WithStack(err)
}
@ -113,7 +112,7 @@ func New(id string, basedir string) (c *Cache, err error) {
case errors.Is(err, os.ErrNotExist):
// Create the repo cache dir. The parent exists, so Mkdir suffices.
err := fs.Mkdir(cachedir, dirMode)
err := os.Mkdir(cachedir, dirMode)
switch {
case err == nil:
created = true
@ -134,7 +133,7 @@ func New(id string, basedir string) (c *Cache, err error) {
}
for _, p := range cacheLayoutPaths {
if err = fs.MkdirAll(filepath.Join(cachedir, p), dirMode); err != nil {
if err = os.MkdirAll(filepath.Join(cachedir, p), dirMode); err != nil {
return nil, errors.WithStack(err)
}
}
@ -152,7 +151,7 @@ func New(id string, basedir string) (c *Cache, err error) {
// directory d to the current time.
func updateTimestamp(d string) error {
t := time.Now()
return fs.Chtimes(d, t, t)
return os.Chtimes(d, t, t)
}
// MaxCacheAge is the default age (30 days) after which cache directories are considered old.
@ -165,7 +164,7 @@ func validCacheDirName(s string) bool {
// listCacheDirs returns the list of cache directories.
func listCacheDirs(basedir string) ([]os.FileInfo, error) {
f, err := fs.Open(basedir)
f, err := os.Open(basedir)
if err != nil {
if errors.Is(err, os.ErrNotExist) {
err = nil
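writeCachedirTag above relies on os.O_CREATE|os.O_EXCL so that only one process creates the tag file; a concurrent loser of the race sees os.ErrExist, which is treated as success. The create-once pattern in isolation (a sketch; the path and mode are placeholders):

```go
package main

import (
	"errors"
	"fmt"
	"os"
	"path/filepath"
)

const cachedirTag = "Signature: 8a477f597d28d172789f06886806bc55\n"

// writeOnce creates the tag file only if it does not exist yet; a process
// losing the creation race gets os.ErrExist, which counts as success.
func writeOnce(path string) error {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0644)
	if err != nil {
		if errors.Is(err, os.ErrExist) {
			return nil // another process already wrote the tag
		}
		return err
	}
	if _, err := f.WriteString(cachedirTag); err != nil {
		_ = f.Close()
		return err
	}
	return f.Close()
}

func main() {
	path := filepath.Join(os.TempDir(), "CACHEDIR.TAG")
	fmt.Println(writeOnce(path), writeOnce(path)) // second call is a no-op
}
```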

View file

@ -12,7 +12,6 @@ import (
"github.com/restic/restic/internal/backend/util"
"github.com/restic/restic/internal/crypto"
"github.com/restic/restic/internal/debug"
"github.com/restic/restic/internal/fs"
"github.com/restic/restic/internal/restic"
)
@ -44,7 +43,7 @@ func (c *Cache) load(h backend.Handle, length int, offset int64) (io.ReadCloser,
return nil, false, errors.New("cannot be cached")
}
f, err := fs.Open(c.filename(h))
f, err := os.Open(c.filename(h))
if err != nil {
return nil, false, errors.WithStack(err)
}
@ -91,7 +90,7 @@ func (c *Cache) save(h backend.Handle, rd io.Reader) error {
finalname := c.filename(h)
dir := filepath.Dir(finalname)
err := fs.Mkdir(dir, 0700)
err := os.Mkdir(dir, 0700)
if err != nil && !errors.Is(err, os.ErrExist) {
return err
}
@ -106,26 +105,26 @@ func (c *Cache) save(h backend.Handle, rd io.Reader) error {
n, err := io.Copy(f, rd)
if err != nil {
_ = f.Close()
_ = fs.Remove(f.Name())
_ = os.Remove(f.Name())
return errors.Wrap(err, "Copy")
}
if n <= int64(crypto.CiphertextLength(0)) {
_ = f.Close()
_ = fs.Remove(f.Name())
_ = os.Remove(f.Name())
debug.Log("trying to cache truncated file %v, removing", h)
return nil
}
// Close, then rename. Windows doesn't like the reverse order.
if err = f.Close(); err != nil {
_ = fs.Remove(f.Name())
_ = os.Remove(f.Name())
return errors.WithStack(err)
}
err = fs.Rename(f.Name(), finalname)
err = os.Rename(f.Name(), finalname)
if err != nil {
_ = fs.Remove(f.Name())
_ = os.Remove(f.Name())
}
if runtime.GOOS == "windows" && errors.Is(err, os.ErrPermission) {
// On Windows, renaming over an existing file is ok
@ -162,7 +161,7 @@ func (c *Cache) remove(h backend.Handle) (bool, error) {
return false, nil
}
err := fs.Remove(c.filename(h))
err := os.Remove(c.filename(h))
removed := err == nil
if errors.Is(err, os.ErrNotExist) {
err = nil
@ -189,7 +188,7 @@ func (c *Cache) Clear(t restic.FileType, valid restic.IDSet) error {
}
// ignore ErrNotExist to gracefully handle multiple processes running Clear() concurrently
if err = fs.Remove(c.filename(backend.Handle{Type: t, Name: id.String()})); err != nil && !errors.Is(err, os.ErrNotExist) {
if err = os.Remove(c.filename(backend.Handle{Type: t, Name: id.String()})); err != nil && !errors.Is(err, os.ErrNotExist) {
return err
}
}
@ -240,6 +239,6 @@ func (c *Cache) Has(h backend.Handle) bool {
return false
}
_, err := fs.Stat(c.filename(h))
_, err := os.Stat(c.filename(h))
return err == nil
}
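The save path above is the classic write-to-temp, close, then rename recipe, removing the temporary file on every failure branch; the close has to precede the rename because Windows refuses to rename an open file. A condensed sketch of the pattern (the real code additionally validates the ciphertext length and works around a Windows permission quirk):

```go
package main

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
	"strings"
)

// atomicWrite stages rd into a temp file in the target directory and
// renames it into place, so readers never observe a partial file.
func atomicWrite(finalname string, rd io.Reader) error {
	f, err := os.CreateTemp(filepath.Dir(finalname), "tmp-")
	if err != nil {
		return err
	}
	if _, err := io.Copy(f, rd); err != nil {
		_ = f.Close()
		_ = os.Remove(f.Name())
		return err
	}
	// Close, then rename. Windows doesn't like the reverse order.
	if err := f.Close(); err != nil {
		_ = os.Remove(f.Name())
		return err
	}
	if err := os.Rename(f.Name(), finalname); err != nil {
		_ = os.Remove(f.Name())
		return err
	}
	return nil
}

func main() {
	dst := filepath.Join(os.TempDir(), "cache-demo")
	fmt.Println(atomicWrite(dst, strings.NewReader("payload")))
}
```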

View file

@ -12,17 +12,16 @@ import (
"github.com/restic/restic/internal/backend"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/fs"
"github.com/restic/restic/internal/restic"
rtest "github.com/restic/restic/internal/test"
"golang.org/x/sync/errgroup"
)
func generateRandomFiles(t testing.TB, tpe backend.FileType, c *Cache) restic.IDSet {
func generateRandomFiles(t testing.TB, random *rand.Rand, tpe backend.FileType, c *Cache) restic.IDSet {
ids := restic.NewIDSet()
for i := 0; i < rand.Intn(15)+10; i++ {
buf := rtest.Random(rand.Int(), 1<<19)
for i := 0; i < random.Intn(15)+10; i++ {
buf := rtest.Random(random.Int(), 1<<19)
id := restic.Hash(buf)
h := backend.Handle{Type: tpe, Name: id.String()}
@ -88,7 +87,7 @@ func clearFiles(t testing.TB, c *Cache, tpe restic.FileType, valid restic.IDSet)
func TestFiles(t *testing.T) {
seed := time.Now().Unix()
t.Logf("seed is %v", seed)
rand.Seed(seed)
random := rand.New(rand.NewSource(seed))
c := TestNewCache(t)
@ -100,7 +99,7 @@ func TestFiles(t *testing.T) {
for _, tpe := range tests {
t.Run(tpe.String(), func(t *testing.T) {
ids := generateRandomFiles(t, tpe, c)
ids := generateRandomFiles(t, random, tpe, c)
id := randomID(ids)
h := backend.Handle{Type: tpe, Name: id.String()}
@ -140,12 +139,12 @@ func TestFiles(t *testing.T) {
func TestFileLoad(t *testing.T) {
seed := time.Now().Unix()
t.Logf("seed is %v", seed)
rand.Seed(seed)
random := rand.New(rand.NewSource(seed))
c := TestNewCache(t)
// save about 5 MiB of data in the cache
data := rtest.Random(rand.Int(), 5234142)
data := rtest.Random(random.Int(), 5234142)
id := restic.ID{}
copy(id[:], data)
h := backend.Handle{
@ -223,6 +222,10 @@ func TestFileSaveConcurrent(t *testing.T) {
t.Skip("may not work due to FILE_SHARE_DELETE issue")
}
seed := time.Now().Unix()
t.Logf("seed is %v", seed)
random := rand.New(rand.NewSource(seed))
const nproc = 40
var (
@ -231,7 +234,8 @@ func TestFileSaveConcurrent(t *testing.T) {
g errgroup.Group
id restic.ID
)
rand.Read(id[:])
random.Read(id[:])
h := backend.Handle{
Type: restic.PackFile,
@ -273,7 +277,7 @@ func TestFileSaveConcurrent(t *testing.T) {
func TestFileSaveAfterDamage(t *testing.T) {
c := TestNewCache(t)
rtest.OK(t, fs.RemoveAll(c.path))
rtest.OK(t, os.RemoveAll(c.path))
// save a few bytes of data in the cache
data := rtest.Random(123456789, 42)
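These tests replace the deprecated global rand.Seed with a private generator, which keeps failing runs replayable via the logged seed without mutating process-wide state that parallel tests might share. The pattern in isolation:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func main() {
	seed := time.Now().Unix()
	fmt.Println("seed is", seed) // log the seed so a failing run can be replayed

	// A private generator: deterministic for a given seed, and it never
	// touches the global source that other tests may depend on.
	random := rand.New(rand.NewSource(seed))
	fmt.Println(random.Intn(15)+10, random.Int())
}
```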

View file

@ -112,10 +112,7 @@ func open(cfg Config, rt http.RoundTripper) (*Backend, error) {
region: cfg.Region,
bucket: gcsClient.Bucket(cfg.Bucket),
prefix: cfg.Prefix,
Layout: &layout.DefaultLayout{
Path: cfg.Prefix,
Join: path.Join,
},
Layout: layout.NewDefaultLayout(cfg.Prefix, path.Join),
listMaxItems: defaultListMaxItems,
}
@ -189,11 +186,6 @@ func (be *Backend) IsPermanentError(err error) bool {
return false
}
// Join combines path components with slashes.
func (be *Backend) Join(p ...string) string {
return path.Join(p...)
}
func (be *Backend) Connections() uint {
return be.connections
}

View file

@ -1,18 +1,7 @@
package layout
import (
"context"
"fmt"
"os"
"path/filepath"
"regexp"
"github.com/restic/restic/internal/backend"
"github.com/restic/restic/internal/debug"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/feature"
"github.com/restic/restic/internal/fs"
"github.com/restic/restic/internal/restic"
)
// Layout computes paths for file name storage.
@ -23,159 +12,3 @@ type Layout interface {
Paths() []string
Name() string
}
// Filesystem is the abstraction of a file system used for a backend.
type Filesystem interface {
Join(...string) string
ReadDir(context.Context, string) ([]os.FileInfo, error)
IsNotExist(error) bool
}
// ensure statically that *LocalFilesystem implements Filesystem.
var _ Filesystem = &LocalFilesystem{}
// LocalFilesystem implements Filesystem in a local path.
type LocalFilesystem struct {
}
// ReadDir returns all entries of a directory.
func (l *LocalFilesystem) ReadDir(_ context.Context, dir string) ([]os.FileInfo, error) {
f, err := fs.Open(dir)
if err != nil {
return nil, err
}
entries, err := f.Readdir(-1)
if err != nil {
return nil, errors.Wrap(err, "Readdir")
}
err = f.Close()
if err != nil {
return nil, errors.Wrap(err, "Close")
}
return entries, nil
}
// Join combines several path components to one.
func (l *LocalFilesystem) Join(paths ...string) string {
return filepath.Join(paths...)
}
// IsNotExist returns true for errors that are caused by not existing files.
func (l *LocalFilesystem) IsNotExist(err error) bool {
return os.IsNotExist(err)
}
var backendFilenameLength = len(restic.ID{}) * 2
var backendFilename = regexp.MustCompile(fmt.Sprintf("^[a-fA-F0-9]{%d}$", backendFilenameLength))
func hasBackendFile(ctx context.Context, fs Filesystem, dir string) (bool, error) {
entries, err := fs.ReadDir(ctx, dir)
if err != nil && fs.IsNotExist(err) {
return false, nil
}
if err != nil {
return false, errors.Wrap(err, "ReadDir")
}
for _, e := range entries {
if backendFilename.MatchString(e.Name()) {
return true, nil
}
}
return false, nil
}
// ErrLayoutDetectionFailed is returned by DetectLayout() when the layout
// cannot be detected automatically.
var ErrLayoutDetectionFailed = errors.New("auto-detecting the filesystem layout failed")
var ErrLegacyLayoutFound = errors.New("detected legacy S3 layout. Use `RESTIC_FEATURES=deprecate-s3-legacy-layout=false restic migrate s3_layout` to migrate your repository")
// DetectLayout tries to find out which layout is used in a local (or sftp)
// filesystem at the given path. If repo is nil, an instance of LocalFilesystem
// is used.
func DetectLayout(ctx context.Context, repo Filesystem, dir string) (Layout, error) {
debug.Log("detect layout at %v", dir)
if repo == nil {
repo = &LocalFilesystem{}
}
// key file in the "keys" dir (DefaultLayout)
foundKeysFile, err := hasBackendFile(ctx, repo, repo.Join(dir, defaultLayoutPaths[backend.KeyFile]))
if err != nil {
return nil, err
}
// key file in the "key" dir (S3LegacyLayout)
foundKeyFile, err := hasBackendFile(ctx, repo, repo.Join(dir, s3LayoutPaths[backend.KeyFile]))
if err != nil {
return nil, err
}
if foundKeysFile && !foundKeyFile {
debug.Log("found default layout at %v", dir)
return &DefaultLayout{
Path: dir,
Join: repo.Join,
}, nil
}
if foundKeyFile && !foundKeysFile {
if feature.Flag.Enabled(feature.DeprecateS3LegacyLayout) {
return nil, ErrLegacyLayoutFound
}
debug.Log("found s3 layout at %v", dir)
return &S3LegacyLayout{
Path: dir,
Join: repo.Join,
}, nil
}
debug.Log("layout detection failed")
return nil, ErrLayoutDetectionFailed
}
// ParseLayout parses the config string and returns a Layout. When layout is
// the empty string, DetectLayout is used. If that fails, defaultLayout is used.
func ParseLayout(ctx context.Context, repo Filesystem, layout, defaultLayout, path string) (l Layout, err error) {
debug.Log("parse layout string %q for backend at %v", layout, path)
switch layout {
case "default":
l = &DefaultLayout{
Path: path,
Join: repo.Join,
}
case "s3legacy":
if feature.Flag.Enabled(feature.DeprecateS3LegacyLayout) {
return nil, ErrLegacyLayoutFound
}
l = &S3LegacyLayout{
Path: path,
Join: repo.Join,
}
case "":
l, err = DetectLayout(ctx, repo, path)
// use the default layout if auto detection failed
if errors.Is(err, ErrLayoutDetectionFailed) && defaultLayout != "" {
debug.Log("error: %v, use default layout %v", err, defaultLayout)
return ParseLayout(ctx, repo, defaultLayout, "", path)
}
if err != nil {
return nil, err
}
debug.Log("layout detected: %v", l)
default:
return nil, errors.Errorf("unknown backend layout string %q, may be one of: default, s3legacy", layout)
}
return l, nil
}

View file

@ -11,8 +11,8 @@ import (
// subdirs, two characters each (taken from the first two characters of the
// file name).
type DefaultLayout struct {
Path string
Join func(...string) string
path string
join func(...string) string
}
var defaultLayoutPaths = map[backend.FileType]string{
@ -23,6 +23,13 @@ var defaultLayoutPaths = map[backend.FileType]string{
backend.KeyFile: "keys",
}
func NewDefaultLayout(path string, join func(...string) string) *DefaultLayout {
return &DefaultLayout{
path: path,
join: join,
}
}
func (l *DefaultLayout) String() string {
return "<DefaultLayout>"
}
@ -37,32 +44,32 @@ func (l *DefaultLayout) Dirname(h backend.Handle) string {
p := defaultLayoutPaths[h.Type]
if h.Type == backend.PackFile && len(h.Name) > 2 {
p = l.Join(p, h.Name[:2]) + "/"
p = l.join(p, h.Name[:2]) + "/"
}
return l.Join(l.Path, p) + "/"
return l.join(l.path, p) + "/"
}
// Filename returns a path to a file, including its name.
func (l *DefaultLayout) Filename(h backend.Handle) string {
name := h.Name
if h.Type == backend.ConfigFile {
return l.Join(l.Path, "config")
return l.join(l.path, "config")
}
return l.Join(l.Dirname(h), name)
return l.join(l.Dirname(h), name)
}
// Paths returns all directory names needed for a repo.
func (l *DefaultLayout) Paths() (dirs []string) {
for _, p := range defaultLayoutPaths {
dirs = append(dirs, l.Join(l.Path, p))
dirs = append(dirs, l.join(l.path, p))
}
// also add subdirs
for i := 0; i < 256; i++ {
subdir := hex.EncodeToString([]byte{byte(i)})
dirs = append(dirs, l.Join(l.Path, defaultLayoutPaths[backend.PackFile], subdir))
dirs = append(dirs, l.join(l.path, defaultLayoutPaths[backend.PackFile], subdir))
}
return dirs
@ -74,6 +81,6 @@ func (l *DefaultLayout) Basedir(t backend.FileType) (dirname string, subdirs boo
subdirs = true
}
dirname = l.Join(l.Path, defaultLayoutPaths[t])
dirname = l.join(l.path, defaultLayoutPaths[t])
return
}
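With the constructor in place, callers can no longer reach into the layout's fields; the path scheme itself is unchanged: pack files fan out into 256 two-hex-character subdirectories, while every other type lives in a flat directory. A toy version of that routing (illustrative only, not the exported API):

```go
package main

import (
	"fmt"
	"path"
)

// toyLayout mimics DefaultLayout after the refactor: unexported fields,
// populated through a constructor instead of struct literals.
type toyLayout struct {
	path string
	join func(...string) string
}

func newToyLayout(p string, join func(...string) string) *toyLayout {
	return &toyLayout{path: p, join: join}
}

// dirname fans pack files out by the first two characters of the name.
func (l *toyLayout) dirname(typ, name string) string {
	p := map[string]string{"pack": "data", "snapshot": "snapshots"}[typ]
	if typ == "pack" && len(name) > 2 {
		p = l.join(p, name[:2]) + "/"
	}
	return l.join(l.path, p) + "/"
}

func main() {
	l := newToyLayout("/repo", path.Join)
	fmt.Println(l.dirname("pack", "fc919a3b")) // /repo/data/fc/
	fmt.Println(l.dirname("snapshot", "1234")) // /repo/snapshots/
}
```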

View file

@ -1,18 +1,24 @@
package layout
import (
"path"
"github.com/restic/restic/internal/backend"
)
// RESTLayout implements the default layout for the REST protocol.
type RESTLayout struct {
URL string
Path string
Join func(...string) string
url string
}
var restLayoutPaths = defaultLayoutPaths
func NewRESTLayout(url string) *RESTLayout {
return &RESTLayout{
url: url,
}
}
func (l *RESTLayout) String() string {
return "<RESTLayout>"
}
@ -25,10 +31,10 @@ func (l *RESTLayout) Name() string {
// Dirname returns the directory path for a given file type and name.
func (l *RESTLayout) Dirname(h backend.Handle) string {
if h.Type == backend.ConfigFile {
return l.URL + l.Join(l.Path, "/")
return l.url + "/"
}
return l.URL + l.Join(l.Path, "/", restLayoutPaths[h.Type]) + "/"
return l.url + path.Join("/", restLayoutPaths[h.Type]) + "/"
}
// Filename returns a path to a file, including its name.
@ -39,18 +45,18 @@ func (l *RESTLayout) Filename(h backend.Handle) string {
name = "config"
}
return l.URL + l.Join(l.Path, "/", restLayoutPaths[h.Type], name)
return l.url + path.Join("/", restLayoutPaths[h.Type], name)
}
// Paths returns all directory names
func (l *RESTLayout) Paths() (dirs []string) {
for _, p := range restLayoutPaths {
dirs = append(dirs, l.URL+l.Join(l.Path, p))
dirs = append(dirs, l.url+path.Join("/", p))
}
return dirs
}
// Basedir returns the base dir name for files of type t.
func (l *RESTLayout) Basedir(t backend.FileType) (dirname string, subdirs bool) {
return l.URL + l.Join(l.Path, restLayoutPaths[t]), false
return l.url + path.Join("/", restLayoutPaths[t]), false
}
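Hard-wiring path.Join works here because REST URLs always use forward slashes regardless of the host OS, and path.Join collapses duplicate slashes, so prepending "/" is always safe. For example:

```go
package main

import (
	"fmt"
	"path"
)

func main() {
	url := "https://hostname.foo:1234/prefix/repo"
	// path.Join normalizes separators, so the leading "/" never doubles up.
	fmt.Println(url + path.Join("/", "locks", "foobar")) // .../prefix/repo/locks/foobar
	fmt.Println(url + path.Join("/", "data") + "/")      // .../prefix/repo/data/
}
```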

View file

@ -1,79 +0,0 @@
package layout
import (
"github.com/restic/restic/internal/backend"
)
// S3LegacyLayout implements the old layout used for s3 cloud storage backends, as
// described in the Design document.
type S3LegacyLayout struct {
URL string
Path string
Join func(...string) string
}
var s3LayoutPaths = map[backend.FileType]string{
backend.PackFile: "data",
backend.SnapshotFile: "snapshot",
backend.IndexFile: "index",
backend.LockFile: "lock",
backend.KeyFile: "key",
}
func (l *S3LegacyLayout) String() string {
return "<S3LegacyLayout>"
}
// Name returns the name for this layout.
func (l *S3LegacyLayout) Name() string {
return "s3legacy"
}
// join calls Join with the first empty elements removed.
func (l *S3LegacyLayout) join(url string, items ...string) string {
for len(items) > 0 && items[0] == "" {
items = items[1:]
}
path := l.Join(items...)
if path == "" || path[0] != '/' {
if url != "" && url[len(url)-1] != '/' {
url += "/"
}
}
return url + path
}
// Dirname returns the directory path for a given file type and name.
func (l *S3LegacyLayout) Dirname(h backend.Handle) string {
if h.Type == backend.ConfigFile {
return l.URL + l.Join(l.Path, "/")
}
return l.join(l.URL, l.Path, s3LayoutPaths[h.Type]) + "/"
}
// Filename returns a path to a file, including its name.
func (l *S3LegacyLayout) Filename(h backend.Handle) string {
name := h.Name
if h.Type == backend.ConfigFile {
name = "config"
}
return l.join(l.URL, l.Path, s3LayoutPaths[h.Type], name)
}
// Paths returns all directory names
func (l *S3LegacyLayout) Paths() (dirs []string) {
for _, p := range s3LayoutPaths {
dirs = append(dirs, l.Join(l.Path, p))
}
return dirs
}
// Basedir returns the base dir name for type t.
func (l *S3LegacyLayout) Basedir(t backend.FileType) (dirname string, subdirs bool) {
return l.Join(l.Path, s3LayoutPaths[t]), false
}

View file

@ -1,16 +1,15 @@
package layout
import (
"context"
"fmt"
"path"
"path/filepath"
"reflect"
"sort"
"strings"
"testing"
"github.com/restic/restic/internal/backend"
"github.com/restic/restic/internal/feature"
rtest "github.com/restic/restic/internal/test"
)
@ -99,8 +98,8 @@ func TestDefaultLayout(t *testing.T) {
t.Run("Paths", func(t *testing.T) {
l := &DefaultLayout{
Path: tempdir,
Join: filepath.Join,
path: tempdir,
join: filepath.Join,
}
dirs := l.Paths()
@ -128,8 +127,8 @@ func TestDefaultLayout(t *testing.T) {
for _, test := range tests {
t.Run(fmt.Sprintf("%v/%v", test.Type, test.Handle.Name), func(t *testing.T) {
l := &DefaultLayout{
Path: test.path,
Join: test.join,
path: test.path,
join: test.join,
}
filename := l.Filename(test.Handle)
@ -141,7 +140,7 @@ func TestDefaultLayout(t *testing.T) {
}
func TestRESTLayout(t *testing.T) {
path := rtest.TempDir(t)
url := `https://hostname.foo`
var tests = []struct {
backend.Handle
@ -149,44 +148,43 @@ func TestRESTLayout(t *testing.T) {
}{
{
backend.Handle{Type: backend.PackFile, Name: "0123456"},
filepath.Join(path, "data", "0123456"),
strings.Join([]string{url, "data", "0123456"}, "/"),
},
{
backend.Handle{Type: backend.ConfigFile, Name: "CFG"},
filepath.Join(path, "config"),
strings.Join([]string{url, "config"}, "/"),
},
{
backend.Handle{Type: backend.SnapshotFile, Name: "123456"},
filepath.Join(path, "snapshots", "123456"),
strings.Join([]string{url, "snapshots", "123456"}, "/"),
},
{
backend.Handle{Type: backend.IndexFile, Name: "123456"},
filepath.Join(path, "index", "123456"),
strings.Join([]string{url, "index", "123456"}, "/"),
},
{
backend.Handle{Type: backend.LockFile, Name: "123456"},
filepath.Join(path, "locks", "123456"),
strings.Join([]string{url, "locks", "123456"}, "/"),
},
{
backend.Handle{Type: backend.KeyFile, Name: "123456"},
filepath.Join(path, "keys", "123456"),
strings.Join([]string{url, "keys", "123456"}, "/"),
},
}
l := &RESTLayout{
Path: path,
Join: filepath.Join,
url: url,
}
t.Run("Paths", func(t *testing.T) {
dirs := l.Paths()
want := []string{
filepath.Join(path, "data"),
filepath.Join(path, "snapshots"),
filepath.Join(path, "index"),
filepath.Join(path, "locks"),
filepath.Join(path, "keys"),
strings.Join([]string{url, "data"}, "/"),
strings.Join([]string{url, "snapshots"}, "/"),
strings.Join([]string{url, "index"}, "/"),
strings.Join([]string{url, "locks"}, "/"),
strings.Join([]string{url, "keys"}, "/"),
}
sort.Strings(want)
@ -215,59 +213,23 @@ func TestRESTLayoutURLs(t *testing.T) {
dir string
}{
{
&RESTLayout{URL: "https://hostname.foo", Path: "", Join: path.Join},
&RESTLayout{url: "https://hostname.foo"},
backend.Handle{Type: backend.PackFile, Name: "foobar"},
"https://hostname.foo/data/foobar",
"https://hostname.foo/data/",
},
{
&RESTLayout{URL: "https://hostname.foo:1234/prefix/repo", Path: "/", Join: path.Join},
&RESTLayout{url: "https://hostname.foo:1234/prefix/repo"},
backend.Handle{Type: backend.LockFile, Name: "foobar"},
"https://hostname.foo:1234/prefix/repo/locks/foobar",
"https://hostname.foo:1234/prefix/repo/locks/",
},
{
&RESTLayout{URL: "https://hostname.foo:1234/prefix/repo", Path: "/", Join: path.Join},
&RESTLayout{url: "https://hostname.foo:1234/prefix/repo"},
backend.Handle{Type: backend.ConfigFile, Name: "foobar"},
"https://hostname.foo:1234/prefix/repo/config",
"https://hostname.foo:1234/prefix/repo/",
},
{
&S3LegacyLayout{URL: "https://hostname.foo", Path: "/", Join: path.Join},
backend.Handle{Type: backend.PackFile, Name: "foobar"},
"https://hostname.foo/data/foobar",
"https://hostname.foo/data/",
},
{
&S3LegacyLayout{URL: "https://hostname.foo:1234/prefix/repo", Path: "", Join: path.Join},
backend.Handle{Type: backend.LockFile, Name: "foobar"},
"https://hostname.foo:1234/prefix/repo/lock/foobar",
"https://hostname.foo:1234/prefix/repo/lock/",
},
{
&S3LegacyLayout{URL: "https://hostname.foo:1234/prefix/repo", Path: "/", Join: path.Join},
backend.Handle{Type: backend.ConfigFile, Name: "foobar"},
"https://hostname.foo:1234/prefix/repo/config",
"https://hostname.foo:1234/prefix/repo/",
},
{
&S3LegacyLayout{URL: "", Path: "", Join: path.Join},
backend.Handle{Type: backend.PackFile, Name: "foobar"},
"data/foobar",
"data/",
},
{
&S3LegacyLayout{URL: "", Path: "", Join: path.Join},
backend.Handle{Type: backend.LockFile, Name: "foobar"},
"lock/foobar",
"lock/",
},
{
&S3LegacyLayout{URL: "", Path: "/", Join: path.Join},
backend.Handle{Type: backend.ConfigFile, Name: "foobar"},
"/config",
"/",
},
}
for _, test := range tests {
@ -284,165 +246,3 @@ func TestRESTLayoutURLs(t *testing.T) {
})
}
}
func TestS3LegacyLayout(t *testing.T) {
path := rtest.TempDir(t)
var tests = []struct {
backend.Handle
filename string
}{
{
backend.Handle{Type: backend.PackFile, Name: "0123456"},
filepath.Join(path, "data", "0123456"),
},
{
backend.Handle{Type: backend.ConfigFile, Name: "CFG"},
filepath.Join(path, "config"),
},
{
backend.Handle{Type: backend.SnapshotFile, Name: "123456"},
filepath.Join(path, "snapshot", "123456"),
},
{
backend.Handle{Type: backend.IndexFile, Name: "123456"},
filepath.Join(path, "index", "123456"),
},
{
backend.Handle{Type: backend.LockFile, Name: "123456"},
filepath.Join(path, "lock", "123456"),
},
{
backend.Handle{Type: backend.KeyFile, Name: "123456"},
filepath.Join(path, "key", "123456"),
},
}
l := &S3LegacyLayout{
Path: path,
Join: filepath.Join,
}
t.Run("Paths", func(t *testing.T) {
dirs := l.Paths()
want := []string{
filepath.Join(path, "data"),
filepath.Join(path, "snapshot"),
filepath.Join(path, "index"),
filepath.Join(path, "lock"),
filepath.Join(path, "key"),
}
sort.Strings(want)
sort.Strings(dirs)
if !reflect.DeepEqual(dirs, want) {
t.Fatalf("wrong paths returned, want:\n %v\ngot:\n %v", want, dirs)
}
})
for _, test := range tests {
t.Run(fmt.Sprintf("%v/%v", test.Type, test.Handle.Name), func(t *testing.T) {
filename := l.Filename(test.Handle)
if filename != test.filename {
t.Fatalf("wrong filename, want %v, got %v", test.filename, filename)
}
})
}
}
func TestDetectLayout(t *testing.T) {
defer feature.TestSetFlag(t, feature.Flag, feature.DeprecateS3LegacyLayout, false)()
path := rtest.TempDir(t)
var tests = []struct {
filename string
want string
}{
{"repo-layout-default.tar.gz", "*layout.DefaultLayout"},
{"repo-layout-s3legacy.tar.gz", "*layout.S3LegacyLayout"},
}
var fs = &LocalFilesystem{}
for _, test := range tests {
for _, fs := range []Filesystem{fs, nil} {
t.Run(fmt.Sprintf("%v/fs-%T", test.filename, fs), func(t *testing.T) {
rtest.SetupTarTestFixture(t, path, filepath.Join("../testdata", test.filename))
layout, err := DetectLayout(context.TODO(), fs, filepath.Join(path, "repo"))
if err != nil {
t.Fatal(err)
}
if layout == nil {
t.Fatal("wanted some layout, but detect returned nil")
}
layoutName := fmt.Sprintf("%T", layout)
if layoutName != test.want {
t.Fatalf("want layout %v, got %v", test.want, layoutName)
}
rtest.RemoveAll(t, filepath.Join(path, "repo"))
})
}
}
}
func TestParseLayout(t *testing.T) {
defer feature.TestSetFlag(t, feature.Flag, feature.DeprecateS3LegacyLayout, false)()
path := rtest.TempDir(t)
var tests = []struct {
layoutName string
defaultLayoutName string
want string
}{
{"default", "", "*layout.DefaultLayout"},
{"s3legacy", "", "*layout.S3LegacyLayout"},
{"", "", "*layout.DefaultLayout"},
}
rtest.SetupTarTestFixture(t, path, filepath.Join("..", "testdata", "repo-layout-default.tar.gz"))
for _, test := range tests {
t.Run(test.layoutName, func(t *testing.T) {
layout, err := ParseLayout(context.TODO(), &LocalFilesystem{}, test.layoutName, test.defaultLayoutName, filepath.Join(path, "repo"))
if err != nil {
t.Fatal(err)
}
if layout == nil {
t.Fatal("wanted some layout, but detect returned nil")
}
// test that the functions work (and don't panic)
_ = layout.Dirname(backend.Handle{Type: backend.PackFile})
_ = layout.Filename(backend.Handle{Type: backend.PackFile, Name: "1234"})
_ = layout.Paths()
layoutName := fmt.Sprintf("%T", layout)
if layoutName != test.want {
t.Fatalf("want layout %v, got %v", test.want, layoutName)
}
})
}
}
func TestParseLayoutInvalid(t *testing.T) {
path := rtest.TempDir(t)
var invalidNames = []string{
"foo", "bar", "local",
}
for _, name := range invalidNames {
t.Run(name, func(t *testing.T) {
layout, err := ParseLayout(context.TODO(), nil, name, "", path)
if err == nil {
t.Fatalf("expected error not found for layout name %v, layout is %v", name, layout)
}
})
}
}

View file

@ -10,7 +10,6 @@ import (
// Config holds all information needed to open a local repository.
type Config struct {
Path string
Layout string `option:"layout" help:"use this backend directory layout (default: auto-detect) (deprecated)"`
Connections uint `option:"connections" help:"set a limit for the number of concurrent operations (default: 2)"`
}

View file

@ -6,30 +6,22 @@ import (
"testing"
"github.com/restic/restic/internal/backend"
"github.com/restic/restic/internal/feature"
rtest "github.com/restic/restic/internal/test"
)
func TestLayout(t *testing.T) {
defer feature.TestSetFlag(t, feature.Flag, feature.DeprecateS3LegacyLayout, false)()
path := rtest.TempDir(t)
var tests = []struct {
filename string
layout string
failureExpected bool
packfiles map[string]bool
}{
{"repo-layout-default.tar.gz", "", false, map[string]bool{
{"repo-layout-default.tar.gz", false, map[string]bool{
"aa464e9fd598fe4202492ee317ffa728e82fa83a1de1a61996e5bd2d6651646c": false,
"fc919a3b421850f6fa66ad22ebcf91e433e79ffef25becf8aef7c7b1eca91683": false,
"c089d62788da14f8b7cbf77188305c0874906f0b73d3fce5a8869050e8d0c0e1": false,
}},
{"repo-layout-s3legacy.tar.gz", "", false, map[string]bool{
"fc919a3b421850f6fa66ad22ebcf91e433e79ffef25becf8aef7c7b1eca91683": false,
"c089d62788da14f8b7cbf77188305c0874906f0b73d3fce5a8869050e8d0c0e1": false,
"aa464e9fd598fe4202492ee317ffa728e82fa83a1de1a61996e5bd2d6651646c": false,
}},
}
for _, test := range tests {
@ -39,7 +31,6 @@ func TestLayout(t *testing.T) {
repo := filepath.Join(path, "repo")
be, err := Open(context.TODO(), Config{
Path: repo,
Layout: test.layout,
Connections: 2,
})
if err != nil {

View file

@ -37,15 +37,10 @@ func NewFactory() location.Factory {
return location.NewLimitedBackendFactory("local", ParseConfig, location.NoPassword, limiter.WrapBackendConstructor(Create), limiter.WrapBackendConstructor(Open))
}
const defaultLayout = "default"
func open(cfg Config) (*Local, error) {
l := layout.NewDefaultLayout(cfg.Path, filepath.Join)
func open(ctx context.Context, cfg Config) (*Local, error) {
l, err := layout.ParseLayout(ctx, &layout.LocalFilesystem{}, cfg.Layout, defaultLayout, cfg.Path)
if err != nil {
return nil, err
}
fi, err := fs.Stat(l.Filename(backend.Handle{Type: backend.ConfigFile}))
fi, err := os.Stat(l.Filename(backend.Handle{Type: backend.ConfigFile}))
m := util.DeriveModesFromFileInfo(fi, err)
debug.Log("using (%03O file, %03O dir) permissions", m.File, m.Dir)
@ -57,30 +52,30 @@ func open(ctx context.Context, cfg Config) (*Local, error) {
}
// Open opens the local backend as specified by config.
func Open(ctx context.Context, cfg Config) (*Local, error) {
debug.Log("open local backend at %v (layout %q)", cfg.Path, cfg.Layout)
return open(ctx, cfg)
func Open(_ context.Context, cfg Config) (*Local, error) {
debug.Log("open local backend at %v", cfg.Path)
return open(cfg)
}
// Create creates all the necessary files and directories for a new local
// backend at dir. Afterwards a new config blob should be created.
func Create(ctx context.Context, cfg Config) (*Local, error) {
debug.Log("create local backend at %v (layout %q)", cfg.Path, cfg.Layout)
func Create(_ context.Context, cfg Config) (*Local, error) {
debug.Log("create local backend at %v", cfg.Path)
be, err := open(ctx, cfg)
be, err := open(cfg)
if err != nil {
return nil, err
}
// test if config file already exists
_, err = fs.Lstat(be.Filename(backend.Handle{Type: backend.ConfigFile}))
_, err = os.Lstat(be.Filename(backend.Handle{Type: backend.ConfigFile}))
if err == nil {
return nil, errors.New("config file already exists")
}
// create paths for data and refs
for _, d := range be.Paths() {
err := fs.MkdirAll(d, be.Modes.Dir)
err := os.MkdirAll(d, be.Modes.Dir)
if err != nil {
return nil, errors.WithStack(err)
}
@ -132,7 +127,7 @@ func (b *Local) Save(_ context.Context, h backend.Handle, rd backend.RewindReade
debug.Log("error %v: creating dir", err)
// error is caused by a missing directory, try to create it
mkdirErr := fs.MkdirAll(dir, b.Modes.Dir)
mkdirErr := os.MkdirAll(dir, b.Modes.Dir)
if mkdirErr != nil {
debug.Log("error creating dir %v: %v", dir, mkdirErr)
} else {
@ -152,7 +147,7 @@ func (b *Local) Save(_ context.Context, h backend.Handle, rd backend.RewindReade
// temporary's name and no other goroutine will get the same data to
// Save, so the temporary name should never be reused by another
// goroutine.
_ = fs.Remove(f.Name())
_ = os.Remove(f.Name())
}
}(f)
@ -216,7 +211,7 @@ func (b *Local) Load(ctx context.Context, h backend.Handle, length int, offset i
}
func (b *Local) openReader(_ context.Context, h backend.Handle, length int, offset int64) (io.ReadCloser, error) {
f, err := fs.Open(b.Filename(h))
f, err := os.Open(b.Filename(h))
if err != nil {
return nil, err
}
@ -250,7 +245,7 @@ func (b *Local) openReader(_ context.Context, h backend.Handle, length int, offs
// Stat returns information about a blob.
func (b *Local) Stat(_ context.Context, h backend.Handle) (backend.FileInfo, error) {
fi, err := fs.Stat(b.Filename(h))
fi, err := os.Stat(b.Filename(h))
if err != nil {
return backend.FileInfo{}, errors.WithStack(err)
}
@ -263,12 +258,12 @@ func (b *Local) Remove(_ context.Context, h backend.Handle) error {
fn := b.Filename(h)
// reset read-only flag
err := fs.Chmod(fn, 0666)
err := os.Chmod(fn, 0666)
if err != nil && !os.IsPermission(err) {
return errors.WithStack(err)
}
return fs.Remove(fn)
return os.Remove(fn)
}
// List runs fn for each file in the backend which has the type t. When an
@ -294,7 +289,7 @@ func (b *Local) List(ctx context.Context, t backend.FileType, fn func(backend.Fi
// Also, visitDirs assumes it sees a directory full of directories, while
// visitFiles wants a directory full of regular files.
func visitDirs(ctx context.Context, dir string, fn func(backend.FileInfo) error) error {
d, err := fs.Open(dir)
d, err := os.Open(dir)
if err != nil {
return err
}
@ -321,7 +316,7 @@ func visitDirs(ctx context.Context, dir string, fn func(backend.FileInfo) error)
}
func visitFiles(ctx context.Context, dir string, fn func(backend.FileInfo) error, ignoreNotADirectory bool) error {
d, err := fs.Open(dir)
d, err := os.Open(dir)
if err != nil {
return err
}
@ -367,7 +362,7 @@ func visitFiles(ctx context.Context, dir string, fn func(backend.FileInfo) error
// Delete removes the repository and all files.
func (b *Local) Delete(_ context.Context) error {
return fs.RemoveAll(b.Path)
return os.RemoveAll(b.Path)
}
// Close closes all open files.

View file

@ -8,8 +8,6 @@ import (
"os"
"runtime"
"syscall"
"github.com/restic/restic/internal/fs"
)
// fsyncDir flushes changes to the directory dir.
@ -45,5 +43,5 @@ func isMacENOTTY(err error) bool {
// set file to readonly
func setFileReadonly(f string, mode os.FileMode) error {
return fs.Chmod(f, mode&^0222)
return os.Chmod(f, mode&^0222)
}

View file

@ -94,7 +94,7 @@ func run(command string, args ...string) (*StdioConn, *sync.WaitGroup, chan stru
err = errW
}
if err != nil {
if util.IsErrDot(err) {
if errors.Is(err, exec.ErrDot) {
return nil, nil, nil, nil, errors.Errorf("cannot implicitly run relative executable %v found in current directory, use -o rclone.program=./<program> to override", cmd.Path)
}
return nil, nil, nil, nil, err
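Since Go 1.19, os/exec refuses to implicitly run an executable resolved from the current directory and reports exec.ErrDot; the error arrives wrapped, so errors.Is is the correct test, which is what replaces the old util helper here. A minimal reproduction of the check (the program name is hypothetical):

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Resolving a bare name can find a file in ".", which Go refuses to
	// run implicitly; the error is wrapped, so use errors.Is to detect it.
	cmd := exec.Command("some-program") // hypothetical program name
	err := cmd.Start()
	if errors.Is(err, exec.ErrDot) {
		fmt.Println("refusing to run a relative executable from the current directory")
		return
	}
	fmt.Println(err)
}
```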

View file

@ -8,7 +8,6 @@ import (
"io"
"net/http"
"net/url"
"path"
"strings"
"github.com/restic/restic/internal/backend"
@ -66,7 +65,7 @@ func Open(_ context.Context, cfg Config, rt http.RoundTripper) (*Backend, error)
be := &Backend{
url: cfg.URL,
client: http.Client{Transport: rt},
Layout: &layout.RESTLayout{URL: url, Join: path.Join},
Layout: layout.NewRESTLayout(url),
connections: cfg.Connections,
}

View file

@ -1,6 +1,3 @@
//go:build go1.20
// +build go1.20
package rest_test
import (
@ -109,7 +106,7 @@ func runRESTServer(ctx context.Context, t testing.TB, dir, reqListenAddr string)
matched = true
}
}
fmt.Fprintln(os.Stdout, line) // print all output to console
_, _ = fmt.Fprintln(os.Stdout, line) // print all output to console
}
}()

View file

@ -1,5 +1,5 @@
//go:build !windows && go1.20
// +build !windows,go1.20
//go:build !windows
// +build !windows
package rest_test

View file

@ -221,12 +221,19 @@ func (be *Backend) Load(ctx context.Context, h backend.Handle, length int, offse
// Stat returns information about the File identified by h.
func (be *Backend) Stat(ctx context.Context, h backend.Handle) (fi backend.FileInfo, err error) {
err = be.retry(ctx, fmt.Sprintf("Stat(%v)", h),
// see the call to `cancel()` below for why this context exists
statCtx, cancel := context.WithCancel(ctx)
defer cancel()
err = be.retry(statCtx, fmt.Sprintf("Stat(%v)", h),
func() error {
var innerError error
fi, innerError = be.Backend.Stat(ctx, h)
if be.Backend.IsNotExist(innerError) {
// stat is only used to check the existence of the config file.
// cancel the context to suppress the final error message if the file is not found.
cancel()
// do not retry if file is not found, as stat is usually used to check whether a file exists
return backoff.Permanent(innerError)
}
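Two mechanisms combine in this hunk: wrapping the not-found error in backoff.Permanent aborts the retry loop immediately, and cancelling the derived context suppresses the retrier's final error report, since Stat on a missing config file is an expected probe rather than a failure. A sketch of the permanent-error half, assuming the github.com/cenkalti/backoff/v4 module that the retry backend builds on:

```go
package main

import (
	"errors"
	"fmt"
	"os"

	"github.com/cenkalti/backoff/v4"
)

func main() {
	attempts := 0
	err := backoff.Retry(func() error {
		attempts++
		// A not-found result will not change on retry, so mark it
		// permanent to leave the retry loop right away.
		if _, err := os.Stat("/no/such/config"); err != nil {
			return backoff.Permanent(err)
		}
		return nil
	}, backoff.NewExponentialBackOff())

	fmt.Println(attempts, errors.Is(err, os.ErrNotExist)) // 1 true
}
```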

Some files were not shown because too many files have changed in this diff.