Commit graph

36 commits

Author SHA1 Message Date
Michael Eischer
4ccd5e806b backend: split layout code into own subpackage 2022-10-21 21:36:05 +02:00
greatroar
07e5c38361 errors: Drop Cause in favor of Go 1.13 error handling
The only use cases in the code were in errors.IsFatal, backend/b2,
which needs a workaround, and backend.ParseLayout. The last of these
requires all backends to implement error unwrapping in IsNotExist.
All backends except gs already did that.
2022-10-08 13:08:08 +02:00
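A minimal sketch of the unwrapping this asks of a backend's IsNotExist (hedged: the 404 check against *googleapi.Error is an assumption about how a gs-style backend detects a missing object, not restic's exact code):

    package gs

    import (
        "errors"

        "google.golang.org/api/googleapi"
    )

    // IsNotExist reports whether err denotes a missing object. errors.As
    // walks the wrapped error chain, which is the Go 1.13 replacement for
    // reaching the root cause via errors.Cause.
    func IsNotExist(err error) bool {
        var gerr *googleapi.Error
        return errors.As(err, &gerr) && gerr.Code == 404
    }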
Michael Eischer
f414db987d gofmt all files
Apparently the rules for comment formatting have changed with Go 1.19.
2022-08-19 19:12:26 +02:00
greatroar
910d917b71 backend: Move semaphores to a dedicated package
... called backend/sema. I resisted the temptation to call the main
type sema.Phore. Also, semaphores are now passed by value to skip a
level of indirection when using them.
2022-06-18 10:01:58 +02:00
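A hypothetical sketch of such a package (identifiers assumed, not copied from backend/sema): the struct wraps nothing but a channel, and since a channel is a reference type, copies share the same buffer, which is what makes passing the semaphore by value safe:

    package sema

    // Semaphore limits concurrent operations against a backend.
    type Semaphore struct{ ch chan struct{} }

    // New returns a semaphore admitting at most n concurrent token holders.
    func New(n uint) Semaphore { return Semaphore{ch: make(chan struct{}, n)} }

    // GetToken blocks until one of the n slots is free.
    func (s Semaphore) GetToken() { s.ch <- struct{}{} }

    // ReleaseToken frees a slot acquired with GetToken.
    func (s Semaphore) ReleaseToken() { <-s.ch }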
Michael Eischer
e36a40db10 upgrade_repo_v2: Use atomic replace for supported backends 2022-05-09 22:31:30 +02:00
Michael Eischer
4f97492d28 Backend: Expose connections parameter 2022-04-23 11:13:08 +02:00
Michael Eischer
0b258cc054 backends: clean reader closing 2022-04-09 12:21:38 +02:00
Michael Eischer
a009b39e4c gs/swift: calculate md5 content hash for upload 2021-08-04 22:17:46 +02:00
Michael Eischer
9aa2eff384 Add plumbing to calculate backend specific file hash for upload
This enables the backends to request the calculation of a
backend-specific hash. For the currently supported backends this will
always be MD5. The hash calculation happens as early as possible; for
pack files this is during assembly of the pack file. That way the hash
even captures corruption of the temporary pack file on disk.
2021-08-04 22:17:46 +02:00
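A rough sketch of that plumbing (function and variable names are illustrative): the hash is fed from the same stream that assembles the temporary pack file, so verifying the upload against it also catches corruption that hits the temp file after assembly:

    package pack

    import (
        "crypto/md5"
        "io"
        "os"
    )

    // assemble writes the pack data to tmpfile while hashing it, and
    // returns the MD5 the backend should verify the upload against.
    func assemble(tmpfile *os.File, data io.Reader) ([]byte, error) {
        h := md5.New()
        // Every byte written to disk also feeds the hash, so the hash
        // reflects the data before it ever touches the disk.
        w := io.MultiWriter(tmpfile, h)
        if _, err := io.Copy(w, data); err != nil {
            return nil, err
        }
        return h.Sum(nil), nil
    }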
Michael Eischer
8a486eafed gs: Don't drop error when finishing upload
The error returned when finishing the upload of an object was dropped.
This could cause silent upload failures and thus data loss in certain
cases. When an MD5 hash for the uploaded blob is specified, a wrong
hash or a damaged upload would report its error via Close(), but that
error was dropped.
2021-01-30 13:31:32 +01:00
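Sketched against the cloud.google.com/go/storage API (names are illustrative, not the actual patch): the server verifies the MD5 set on the writer only when the upload is finalized, so Close() is precisely where a damaged upload reports its error:

    package gs

    import (
        "context"
        "io"

        "cloud.google.com/go/storage"
    )

    func save(ctx context.Context, obj *storage.ObjectHandle, rd io.Reader, md5sum []byte) error {
        wr := obj.NewWriter(ctx)
        wr.MD5 = md5sum // server rejects the object on hash mismatch
        if _, err := io.Copy(wr, rd); err != nil {
            _ = wr.Close() // best effort; the copy error takes precedence
            return err
        }
        return wr.Close() // previously dropped; surfaces failed uploads
    }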
Michael Eischer
c73316a111 backends: add sanity check for the uploaded file size
Bugs in the error handling while uploading a file to the backend could
cause incomplete files, e.g. https://github.com/golang/go/issues/42400,
which could affect the local backend.

Proactively add sanity checks which will treat an upload as failed if
the reported upload size does not match the actual file size.
2021-01-29 13:51:51 +01:00
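A minimal sketch of such a check, assuming a reader that knows its expected length, as restic's RewindReader does:

    package backend

    import (
        "fmt"
        "io"
    )

    // lengthReader stands in for restic's RewindReader, which knows the
    // expected size of the upload in advance.
    type lengthReader interface {
        io.Reader
        Length() int64
    }

    // checkedCopy treats an upload as failed when the number of bytes
    // written differs from the expected file size.
    func checkedCopy(wr io.Writer, rd lengthReader) error {
        written, err := io.Copy(wr, rd)
        if err != nil {
            return err
        }
        if written != rd.Length() {
            return fmt.Errorf("wrote %d bytes instead of the expected %d bytes", written, rd.Length())
        }
        return nil
    }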
eleith
a24e986b2b do not require gs bucket permissions to init repository
A gs service account may only have object permissions on an existing
bucket, but no bucket create/get permissions.

Such service accounts are currently blocked from initializing a restic
repository, because restic cannot determine whether the bucket exists.

This PR updates the logic to assume the bucket exists when the bucket
attributes request results in a permission-denied error.

This way, restic can still initialize a repository if the service
account does have object permissions.

fixes: https://github.com/restic/restic/issues/3100
2020-11-18 06:14:11 -08:00
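A sketch of the described logic with the cloud.google.com/go/storage client (helper name illustrative): a 403 from the bucket attributes request means restic may not ask about the bucket, not that the bucket is missing, so it is treated as existing:

    package gs

    import (
        "context"
        "errors"

        "cloud.google.com/go/storage"
        "google.golang.org/api/googleapi"
    )

    func bucketExists(ctx context.Context, bucket *storage.BucketHandle) (bool, error) {
        _, err := bucket.Attrs(ctx)
        if err == nil {
            return true, nil
        }
        if errors.Is(err, storage.ErrBucketNotExist) {
            return false, nil
        }
        var gerr *googleapi.Error
        if errors.As(err, &gerr) && gerr.Code == 403 {
            // No storage.buckets.get permission: assume the bucket
            // exists and let a later object operation fail if not.
            return true, nil
        }
        return false, err
    }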
Ingo Gottwald
8b8e230771 Swap deprecated GCS lib with replacement 2020-10-03 18:55:56 +02:00
Ingo Gottwald
00cedd22aa Replace deprecated method in gs backend 2020-10-01 10:02:42 +02:00
MichaelEischer
fd02407863
Merge pull request #2849 from classmarkets/gcs-access-token
gs: support authentication with access token
2020-09-30 17:42:56 +02:00
aawsome
0fed6a8dfc
Use "pack file" instead of "data file" (#2885)
- changed variable names, especially DataFile into PackFile
- changed some comments
- always use "pack file" in the documentation
2020-08-16 11:16:38 +02:00
Peter Schultz
758b44b9c0 gs: support authentication with access token
In the Google Cloud Storage backend, support specifying access tokens
directly, as an alternative to a credentials file. This is useful when
restic is used non-interactively by some other program that is already
authenticated, and it eliminates the need to store long-lived credentials.

The access token is specified in the GOOGLE_ACCESS_TOKEN environment
variable and takes precedence over GOOGLE_APPLICATION_CREDENTIALS.
2020-07-22 16:23:03 +02:00
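A hedged sketch of that precedence (helper name and OAuth scope are assumptions):

    package gs

    import (
        "context"
        "os"

        "golang.org/x/oauth2"
        "golang.org/x/oauth2/google"
        "google.golang.org/api/option"
    )

    // clientOption prefers a token from GOOGLE_ACCESS_TOKEN and only
    // falls back to the default credentials lookup (which honors
    // GOOGLE_APPLICATION_CREDENTIALS) when no token is set.
    func clientOption(ctx context.Context) (option.ClientOption, error) {
        if token := os.Getenv("GOOGLE_ACCESS_TOKEN"); token != "" {
            ts := oauth2.StaticTokenSource(&oauth2.Token{AccessToken: token, TokenType: "Bearer"})
            return option.WithTokenSource(ts), nil
        }
        creds, err := google.FindDefaultCredentials(ctx, "https://www.googleapis.com/auth/devstorage.read_write")
        if err != nil {
            return nil, err
        }
        return option.WithCredentials(creds), nil
    }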
Alexander Neumann
c9745cd47e gs: Respect bandwidth limiting
In 0dfdc11ed9, we accidentally dropped
using the provided http.RoundTripper; this commit adds it back.

Closes #1989
2018-11-25 18:52:32 +01:00
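The repair amounts to something like this sketch (rt being restic's rate-limited http.RoundTripper; the storage/v1 service construction is illustrative):

    package gs

    import (
        "context"
        "net/http"

        "google.golang.org/api/option"
        storage "google.golang.org/api/storage/v1"
    )

    // newService builds the GCS client on top of restic's transport, so
    // --limit-upload and --limit-download apply to GCS traffic as well.
    func newService(ctx context.Context, rt http.RoundTripper) (*storage.Service, error) {
        return storage.NewService(ctx, option.WithHTTPClient(&http.Client{Transport: rt}))
    }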
Lawrence Jones
0dfdc11ed9
Automatically load Google auth
This change removes the hardcoded Google auth mechanism for the GCS
backend, instead using Google's provided client library to discover and
generate credential material.

Google recommend that client libraries use their common auth mechanism
in order to authorise requests against Google services. Doing so means
you automatically support various types of authentication, from the
standard GOOGLE_APPLICATION_CREDENTIALS environment variable to making
use of Google's metadata API if running within Google Container Engine.
2018-03-11 17:11:25 +00:00
Alexander Neumann
99f7fd74e3 backend: Improve Save()
As mentioned in issue [#1560](https://github.com/restic/restic/pull/1560#issuecomment-364689346)
this changes the signature for `backend.Save()`. It now takes a
parameter of interface type `RewindReader`, so that the backend
implementations or our `RetryBackend` middleware can reset the reader to
the beginning and then retry an upload operation.

The `RewindReader` interface also provides a `Length()` method, which is
used in the backend to get the size of the data to be saved. This
removes several ugly hacks we had to do to pull the size back out of the
`io.Reader` passed to `Save()` before. In the `s3` and `rest` backends
this is actively used.
2018-03-03 15:49:44 +01:00
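A minimal sketch of the interface as described here (restic's actual RewindReader may carry additional methods):

    package backend

    import "io"

    // RewindReader lets an upload be restarted from the beginning after
    // a failure, and exposes the total size of the data up front.
    type RewindReader interface {
        io.Reader

        // Rewind resets the reader to the start of the data.
        Rewind() error

        // Length returns the number of bytes to be saved.
        Length() int64
    }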
Alexander Neumann
29da86b473 Merge pull request #1623 from restic/backend-relax-restrictions
backend: Relax requirement for new files
2018-02-18 12:56:52 +01:00
Alexander Neumann
b5062959c8 backend: Relax requirement for new files
Before, all backend implementations were required to return an error if
the file that is to be written already exists in the backend. For most
backends, that means making a request (e.g. via HTTP) and returning an
error when the file already exists.

This is not accurate: the file could have been created between the HTTP
request testing for it and the start of the write. In addition, apart
from the `config` file in the repo, all other files have pseudo-random
names with a very low probability of a collision. And even if a file
name is written again, the way the restic repo is structured means that
the same content is placed there again, which is not a problem, just
not very efficient.

So, this commit relaxes the requirement to return an error when the file
in the backend already exists, which allows reducing the number of API
requests and thereby the latency for remote backends.
2018-02-17 22:39:18 +01:00
Igor Fedorenko
d58ae43317 Reworked Backend.Load API to retry errors during ongoing download
Signed-off-by: Igor Fedorenko <igor@ifedorenko.com>
2018-02-16 21:12:14 -05:00
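The reworked shape is callback-based; roughly (Handle is a stand-in type and the signature a sketch of the idea, not the exact API): the backend owns the reader, so the retry layer can re-open the file and invoke the callback again when a download fails partway through:

    package backend

    import (
        "context"
        "io"
    )

    // Handle is a stand-in for restic's file handle type.
    type Handle struct{ Type, Name string }

    // Loader sketches the callback-based Load: fn consumes the download,
    // and the implementation may call fn again with a fresh reader to
    // retry errors that occur mid-stream.
    type Loader interface {
        Load(ctx context.Context, h Handle, length int, offset int64, fn func(rd io.Reader) error) error
    }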
Alexander Neumann
5dc8d3588d GS: Use generic http transport
During the development of #1524 I discovered that the Google Cloud
Storage backend did not yet use the HTTP transport, so things such as
bandwidth limiting did not work. This commit does the necessary magic to
make the GS library use our HTTP transport.
2018-01-27 20:12:34 +01:00
Alexander Neumann
e9ea268847 Change List() implementation for all backends 2018-01-21 21:15:09 +01:00
Alexander Neumann
7d8765a937 backend: Only return top-level files for most dirs
Fixes #1478
2017-12-14 19:14:16 +01:00
George Armhold
d069ee31b2 GS backend: limit http concurrency in Save(), Stat(), Test(), Remove(), List()
as per discussion in PR #1399
2017-10-31 08:01:43 -04:00
Michael Pratt
9fa4f5eb6b gs: disable resumable uploads
By default, the GCS Go packages have an internal "chunk size" of 8MB,
used for blob uploads.

Media().Do() will buffer a full 8MB from the io.Reader (or less if EOF
is reached), then write that full 8MB to the network all at once.

This behavior does not play nicely with --limit-upload, which only
limits the Reader passed to Media. While the long-term average upload
rate will be correctly limited, the actual network bandwidth will be
very spiky.

e.g., if an 8MB/s connection is limited to 1MB/s, Media().Do() will
spend 8s reading from the rate-limited reader (performing no network
requests), then 1s writing to the network at 8MB/s.

This is bad for network connections hurt by full-speed uploads,
particularly when writing 8MB will take several seconds.

Disable resumable uploads entirely by setting the chunk size to zero.
This causes the io.Reader to be passed further down the request stack,
where there is less (but still some) buffering.

My connection is around 1.5MB/s up, with nominal ~15ms ping times to
8.8.8.8.

Without this change, --limit-upload 1024 results in several seconds of
~200ms ping times (uploading), followed by several seconds of ~15ms ping
times (reading from rate-limited reader). A bandwidth monitor reports
this as several seconds of ~1.5MB/s followed by several seconds of
0.0MB/s.

With this change, --limit-upload 1024 results in ~20ms ping times and
the bandwidth monitor reports a constant ~1MB/s.

I've elected to make this change unconditional rather than tied to
--limit-upload, because resumable uploads shouldn't be providing much
benefit anyway: restic already uploads mostly small blobs and already
has a retry mechanism.

--limit-download is not affected by this problem, as Get().Download()
returns the real http.Response.Body without any internal buffering.

Updates #1216
2017-10-17 21:12:04 -07:00
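In client code the change boils down to a single media option; a sketch against the google.golang.org/api/storage/v1 client of the time (function and parameter names illustrative):

    package gs

    import (
        "io"

        "google.golang.org/api/googleapi"
        storage "google.golang.org/api/storage/v1"
    )

    // upload streams rd in one request: googleapi.ChunkSize(0) disables
    // resumable (chunked) uploads, so the reader is passed down the
    // request stack instead of being buffered 8MB at a time.
    func upload(svc *storage.Service, bucket, name string, rd io.Reader) error {
        _, err := svc.Objects.Insert(bucket, &storage.Object{Name: name}).
            Media(rd, googleapi.ChunkSize(0)).
            Do()
        return err
    }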
Michael Pratt
fa0be82da8 gs: allow backend creation without storage.buckets.get
If the service account used with restic does not have the
storage.buckets.get permission (in the "Storage Admin" role), Create
cannot use Get to determine if the bucket is accessible.

Rather than always trying to create the bucket on Get error, gracefully
fall back to assuming the bucket is accessible. If it is, restic init
will complete successfully. If it is not, it will fail on a later call.

Here is what init looks like now in different cases.

Service account without "Storage Admin":

Bucket exists and is accessible (this is the case that didn't work
before):

$ ./restic init -r gs:this-bucket-does-exist:/
enter password for new backend:
enter password again:
created restic backend c02e2edb67 at gs:this-bucket-does-exist:/

Please note that knowledge of your password is required to access
the repository. Losing your password means that your data is
irrecoverably lost.

Bucket exists but is not accessible:

$ ./restic init -r gs:this-bucket-does-exist:/
enter password for new backend:
enter password again:
create key in backend at gs:this-bucket-does-exist:/ failed:
service.Objects.Insert: googleapi: Error 403:
my-service-account@myproject.iam.gserviceaccount.com does not have
storage.objects.create access to object this-bucket-exists/keys/0fa714e695c8ecd58cb467cdeb04d36f3b710f883496a90f23cae0315daf0b93., forbidden

Bucket does not exist:

$ ./restic init -r gs:this-bucket-does-not-exist:/
create backend at gs:this-bucket-does-not-exist:/ failed:
service.Buckets.Insert: googleapi: Error 403:
my-service-account@myproject.iam.gserviceaccount.com does not have storage.buckets.create access to bucket this-bucket-does-not-exist., forbidden

Service account with "Storage Admin":

Bucket exists and is accessible: Same

Bucket exists but is not accessible: Same. Previously this would fail
when Create tried to create the bucket. Now it fails when trying to
create the keys.

Bucket does not exist:

$ ./restic init -r gs:this-bucket-does-not-exist:/
enter password for new backend:
enter password again:
created restic backend c3c48b481d at gs:this-bucket-does-not-exist:/

Please note that knowledge of your password is required to access
the repository. Losing your password means that your data is
irrecoverably lost.
2017-09-25 22:25:51 -07:00
Michael Pratt
3b2106ed30 gs: document required permissions
In the manual, state which standard roles the service account must
have to work correctly, as well as the specific permissions required
for creating even more specific custom roles.
2017-09-24 11:25:57 -07:00
Michael Pratt
5f4f997126 gs: minor comment cleanups
* Remove a reference to S3.
* Config can only be used for GCS, not other "GCS-compatible servers".
* Make comments complete sentences.
2017-09-24 10:10:56 -07:00
Alexander Neumann
3b6a580b32 backend: Make pagination for List configurable 2017-09-18 12:01:54 +02:00
Alexander Neumann
40edf00182 gs: implement pagination 2017-09-17 11:08:51 +02:00
Alexander Neumann
c35518a865 Azure/GS: Remove ReadDir() 2017-09-17 11:05:30 +02:00
Michael Pratt
9537bc561d gs: fix nil dereference
info can be nil if err != nil, resulting in a nil dereference while
logging:

$ # GCS config
$ ./restic init
debug enabled
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x935947]

goroutine 1 [running]:
github.com/restic/restic/internal/backend/gs.(*Backend).Save(0xc420012690, 0xe84e80, 0xc420010448, 0xb57149, 0x3, 0xc4203fc140, 0x40, 0xe7be40, 0xc4201d8f90, 0xa0, ...)
	src/github.com/restic/restic/internal/backend/gs/gs.go:226 +0x6d7
github.com/restic/restic/internal/repository.AddKey(0xe84e80, 0xc420010448, 0xc4202f0360, 0xc42000a1b0, 0x4, 0x0, 0xa55b60, 0xc4203043e0, 0xa55420)
	src/github.com/restic/restic/internal/repository/key.go:235 +0x4a1
github.com/restic/restic/internal/repository.createMasterKey(0xc4202f0360, 0xc42000a1b0, 0x4, 0xa55420, 0xc420304370, 0x6a6070)
	src/github.com/restic/restic/internal/repository/key.go:62 +0x60
github.com/restic/restic/internal/repository.(*Repository).init(0xc4202f0360, 0xe84e80, 0xc420010448, 0xc42000a1b0, 0x4, 0x1, 0xc42030a440, 0x40, 0x32a4573d3d9eb5, 0x0, ...)
	src/github.com/restic/restic/internal/repository/repository.go:403 +0x5d
github.com/restic/restic/internal/repository.(*Repository).Init(0xc4202f0360, 0xe84e80, 0xc420010448, 0xc42000a1b0, 0x4, 0xe84e40, 0xc42004ad80)
	src/github.com/restic/restic/internal/repository/repository.go:397 +0x12c
main.runInit(0xc420018072, 0x16, 0x0, 0x0, 0x0, 0xe84e40, 0xc42004ad80, 0xc42000a1b0, 0x4, 0xe7dac0, ...)
	src/github.com/restic/restic/cmd/restic/cmd_init.go:47 +0x2a4
main.glob..func9(0xeb5000, 0xedad70, 0x0, 0x0, 0x0, 0x0)
	src/github.com/restic/restic/cmd/restic/cmd_init.go:20 +0x8e
github.com/restic/restic/vendor/github.com/spf13/cobra.(*Command).execute(0xeb5000, 0xedad70, 0x0, 0x0, 0xeb5000, 0xedad70)
	src/github.com/restic/restic/vendor/github.com/spf13/cobra/command.go:649 +0x457
github.com/restic/restic/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xeb3e00, 0xc420011650, 0xa55b60, 0xc420011660)
	src/github.com/restic/restic/vendor/github.com/spf13/cobra/command.go:728 +0x339
github.com/restic/restic/vendor/github.com/spf13/cobra.(*Command).Execute(0xeb3e00, 0x25, 0xc4201a7eb8)
	src/github.com/restic/restic/vendor/github.com/spf13/cobra/command.go:687 +0x2b
main.main()
	src/github.com/restic/restic/cmd/restic/main.go:72 +0x268

(The error was likely because I had just enabled the GCS API. Subsequent
runs were fine.)
2017-08-27 21:36:04 -07:00
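The fix is simply to check err before touching info; schematically (the logging call is illustrative):

    package gs

    import (
        "fmt"
        "io"

        storage "google.golang.org/api/storage/v1"
    )

    func save(svc *storage.Service, bucket string, obj *storage.Object, rd io.Reader) error {
        info, err := svc.Objects.Insert(bucket, obj).Media(rd).Do()
        if err != nil {
            return err // info may be nil here; reading info.Size would panic
        }
        fmt.Printf("saved %v, %d bytes\n", info.Name, info.Size)
        return nil
    }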
Dipta Das
ba75a3884c Add Google Cloud Storage as backend
Environment variables:
GOOGLE_PROJECT_ID=gcp-project-id
GOOGLE_APPLICATION_CREDENTIALS=path-to-json-file

Environment variables for test:
RESTIC_TEST_GS_PROJECT_ID=gcp-project-id
RESTIC_TEST_GS_APPLICATION_CREDENTIALS=path-to-json-file
RESTIC_TEST_GS_REPOSITORY=gs:us-central1/test-bucket

Init repository:
$ restic -r gs:bucket:/[prefix] init
2017-08-06 21:47:55 +02:00