Commit graph

512 commits

Author SHA1 Message Date
Michael Eischer
097ed659b2 rest: test that zero-length replies over HTTP2 work correctly
The first test function ensures that the workaround works as expected.
And the second test function is intended to fail as soon as the issue
has been fixed in golang to allow us to eventually remove the
workaround.
2021-07-10 17:22:42 +02:00
Michael Eischer
185a55026b rest: workaround for HTTP2 zero-length replies bug
The golang http client does not return an error when an HTTP2 reply
includes a non-zero content length but does not return any data at all.
This scenario can occur e.g. when using rclone when a file stored in a
backend seems to be accessible but then fails to download.
2021-07-10 16:59:01 +02:00
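
The workaround amounts to comparing the announced Content-Length with the number of bytes actually read. A minimal sketch of that check using only net/http; the function name and structure are illustrative, not restic's actual code:

```go
package example

import (
	"fmt"
	"io"
	"net/http"
)

// loadChecked downloads a URL and treats an unexpectedly empty body as an
// error: the Go HTTP/2 client may silently return no data at all even though
// Content-Length announces some.
func loadChecked(client *http.Client, url string) ([]byte, error) {
	resp, err := client.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	data, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, err
	}
	if resp.ContentLength > 0 && len(data) == 0 {
		return nil, fmt.Errorf("empty HTTP/2 reply despite Content-Length %d", resp.ContentLength)
	}
	return data, nil
}
```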
MichaelEischer
c1eb7ac1a1
Merge pull request #3420 from greatroar/local-errs
Modernize error handling in local backend
2021-06-20 14:20:40 +02:00
greatroar
e5f0f67ba0 Modernize error handling in local backend
* Stop prepending the operation name: it's already part of os.PathError,
  leading to repetitive errors like "Chmod: chmod /foo/bar: operation not
  permitted".

* Use errors.Is to check for specific errors.
2021-06-18 11:13:27 +02:00
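
A small, self-contained illustration of the two points above, using only the standard library; the function is a stand-in, not the local backend's real code:

```go
package example

import (
	"errors"
	"os"
)

// removeIfExists returns the *os.PathError from os.Remove unwrapped (it
// already reads like "remove /foo/bar: ..."), so no extra operation name is
// prepended, and uses errors.Is instead of string matching to detect the
// "file does not exist" case.
func removeIfExists(path string) error {
	err := os.Remove(path)
	if errors.Is(err, os.ErrNotExist) {
		return nil // nothing to do
	}
	return err
}
```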
MichaelEischer
a476752962
Merge pull request #3421 from greatroar/s3-fileinfo
Return s3.fileInfos by pointer
2021-06-12 18:55:18 +02:00
greatroar
0d4f16b6ba Return s3.fileInfos by pointer
Since the fileInfos are returned in a []interface, they're already
allocated on the heap. Making them pointers explicitly means the
compiler doesn't need to generate fileInfo and *fileInfo versions of the
methods on this type. The binary becomes about 7KiB smaller on
Linux/amd64.
2021-06-07 19:48:43 +02:00
Michael Eischer
75c990504d azure/gs: Fix default value in connections help text 2021-05-17 20:56:51 +02:00
MichaelEischer
64b00d28b1
Merge pull request #3345 from greatroar/sftp-enospc
Check for ENOSPC and remove broken files in SFTP
2021-05-13 20:09:38 +02:00
greatroar
dc88ca79b6 Handle lack of space and remove broken files in SFTP backend 2021-03-27 15:02:14 +01:00
Alexander Neumann
88a23521dd
Merge pull request #3327 from MichaelEischer/fix-s3-sanity-check
s3: Fix sanity check
2021-03-11 13:13:46 +01:00
greatroar
187a77fb27 Upgrade pkg/sftp to 1.13 and simplify SFTP.IsNotExist 2021-03-10 21:05:24 +01:00
Michael Eischer
2a9f0f19b6 s3: Fix sanity check
The sanity check shouldn't replace the error message if there is already
one.
2021-03-08 20:23:57 +01:00
Michael Eischer
a293fd9aef local: Ignore files in intermediate folders
For example the data folder of a repository might contain hidden files
which caused list operations to fail.
2021-02-27 13:47:55 +01:00
Lorenz Brun
5427119205 Allow HTTP/2 2021-01-31 02:44:30 +01:00
Michael Eischer
e0867c9682 backend: try to cleanup test leftovers 2021-01-30 21:23:20 +01:00
Michael Eischer
f740b2fb23 mem: check upload length before storing upload 2021-01-30 21:23:20 +01:00
Alexander Neumann
f867e65bcd Fix issues reported by staticcheck 2021-01-30 20:43:53 +01:00
Alexander Neumann
0858fbf6aa Add more error handling 2021-01-30 20:19:47 +01:00
Alexander Neumann
aef3658a5f Address review comments 2021-01-30 20:02:37 +01:00
Alexander Neumann
3c753c071c errcheck: More error handling 2021-01-30 20:02:37 +01:00
Alexander Neumann
75f53955ee errcheck: Add error checks
Most added checks are straight forward.
2021-01-30 20:02:37 +01:00
Michael Eischer
a53778cd83 rest: handle dropped error in save operation 2021-01-30 19:25:04 +01:00
Michael Eischer
8a486eafed gs: Don't drop error when finishing upload
The error returned when finishing the upload of an object was dropped.
This could cause silent upload failures and thus data loss in certain
cases. When an MD5 hash for the uploaded blob is specified, a wrong
hash or a damaged upload would only surface its error via Close(), whose
error was dropped.
2021-01-30 13:31:32 +01:00
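
The gist is that for many storage clients the upload (including any server-side hash verification) is only finalized when Close() returns, so its error must not be discarded. A hedged sketch with a generic io.WriteCloser; names are illustrative:

```go
package example

import (
	"fmt"
	"io"
)

// finishUpload copies data into the storage writer and, crucially, checks the
// error from Close(): the upload is only finalized when Close returns.
func finishUpload(w io.WriteCloser, rd io.Reader) error {
	if _, err := io.Copy(w, rd); err != nil {
		_ = w.Close() // best effort; the copy error takes precedence
		return err
	}
	if err := w.Close(); err != nil {
		return fmt.Errorf("finishing upload failed: %w", err)
	}
	return nil
}
```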
Alexander Neumann
cdd704920d azure: Pass data length to Azure library
The azureAdapter was used directly without a pointer, but the Len()
method was only defined with a pointer receiver (which means Len() is
not present on an azureAdapter{}, only on a pointer to it).
2021-01-29 21:08:41 +01:00
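
This is the Go method-set rule at work: a method with a pointer receiver is only part of the pointer type's method set. A self-contained illustration with a stand-in type, not restic's actual adapter:

```go
package example

type azureAdapter struct {
	length int64
}

// Len has a pointer receiver, so it is part of the method set of
// *azureAdapter but not of azureAdapter itself.
func (a *azureAdapter) Len() int { return int(a.length) }

type hasLen interface{ Len() int }

// demo shows why the adapter must be passed as a pointer: the interface
// check succeeds only for *azureAdapter.
func demo() (byValue, byPointer bool) {
	a := azureAdapter{length: 42}
	_, byValue = interface{}(a).(hasLen)    // false
	_, byPointer = interface{}(&a).(hasLen) // true
	return byValue, byPointer
}
```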
Michael Eischer
1f583b3d8e backend: test that incomplete uploads fail 2021-01-29 13:51:53 +01:00
Michael Eischer
c73316a111 backends: add sanity check for the uploaded file size
Bugs in the error handling while uploading a file to the backend could
cause incomplete files, e.g. https://github.com/golang/go/issues/42400
which could affect the local backend.

Proactively add sanity checks which will treat an upload as failed if
the reported upload size does not match the actual file size.
2021-01-29 13:51:51 +01:00
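
For the local backend, such a sanity check can be as simple as comparing os.Stat's size with the expected number of bytes. A minimal sketch, not the actual implementation:

```go
package example

import (
	"fmt"
	"os"
)

// verifySize compares the size reported by the filesystem with the number of
// bytes that were supposed to be written and treats any mismatch as a failed
// upload.
func verifySize(path string, expected int64) error {
	fi, err := os.Stat(path)
	if err != nil {
		return err
	}
	if fi.Size() != expected {
		return fmt.Errorf("wrote %d bytes instead of the expected %d bytes", fi.Size(), expected)
	}
	return nil
}
```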
Michael Eischer
4526d5d197 swift: explicitly pass upload size to library
This allows properly setting the content-length, which could help the
server side detect incomplete uploads.
2021-01-29 13:50:46 +01:00
Michael Eischer
dca9b6f5db azure: explicitly pass upload size
Previously the fallback from the azure library was to read the whole
blob into memory and use that to determine the upload size.
2021-01-29 13:50:46 +01:00
Michael Eischer
678e75e1c2 sftp: enforce use of optimized upload method
ReadFrom was already used by Save before; this just ensures that it
won't accidentally change in the future.
2021-01-03 22:23:53 +01:00
Alexander Neumann
b2efa0af39
Merge pull request #3164 from MichaelEischer/improve-context-cancel
Improve context cancel handling in archiver and backends
2020-12-29 17:03:42 +01:00
greatroar
3b09ae9074 AIX port 2020-12-29 01:35:01 +01:00
Michael Eischer
d0ca8fb0b8 backend: test that a canceled context prevents RetryBackend operations 2020-12-28 21:06:47 +01:00
Michael Eischer
e483b63c40 retrybackend: Fail operations when context is already canceled
Depending on the used backend, operations started with a canceled
context may fail or not. For example the local backend still works in
large parts when called with a canceled context. Backends transferring
data via HTTP don't work.
operations in that state as the RetryBackend will abort with a 'context
canceled' error.

Ensure uniform behavior across all backends by checking for a canceled
context as a first step in the RetryBackend. As backends are always
wrapped in a RetryBackend, this check applies everywhere.
2020-12-28 21:06:47 +01:00
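
A minimal sketch of the idea: check ctx.Err() before the first attempt (and between retries), so a canceled context fails immediately regardless of how the wrapped backend behaves. Illustrative only, not restic's RetryBackend:

```go
package example

import "context"

// retry runs fn a few times, but fails immediately if the context is already
// canceled, so that all backends behave the same regardless of how they react
// to a canceled context themselves.
func retry(ctx context.Context, attempts int, fn func() error) error {
	if ctx.Err() != nil {
		return ctx.Err()
	}
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		if ctx.Err() != nil {
			return ctx.Err()
		}
	}
	return err
}
```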
tWido
7dab113035 Don't retry when "Permission denied" occurs in local backend 2020-12-22 23:41:12 +01:00
greatroar
66d904c905 Make invalid handles permanent errors 2020-12-17 12:47:53 +01:00
greatroar
746dbda413 Mark "ssh exited" errors in SFTP as permanent 2020-12-17 12:43:09 +01:00
greatroar
f7784bddb3 Don't retry when "no space left on device" in local backend
Also adds relevant documentation to the restic.Backend interface.
2020-12-17 12:43:09 +01:00
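
A hedged sketch of the detection side on a local filesystem: treat ENOSPC as not worth retrying and remove the partially written file. The permanent/transient split is simplified compared to restic's backend interface:

```go
package example

import (
	"errors"
	"os"
	"syscall"
)

// save writes data to path; when the disk is full it removes the broken,
// partially written file and reports that retrying is pointless.
func save(path string, data []byte) (retriable bool, err error) {
	err = os.WriteFile(path, data, 0o600)
	if errors.Is(err, syscall.ENOSPC) {
		_ = os.Remove(path) // drop the incomplete file
		return false, err   // "no space left on device" won't go away by retrying
	}
	return err != nil, err
}
```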
Alexander Neumann
e96677cafb
Merge pull request #3158 from MichaelEischer/support-swift-auth-id-variables
swift: Add support for id based keystone v3 auth parameters
2020-12-12 16:27:38 +01:00
Michael Eischer
1d69341e88 swift: Add support for id based keystone v3 auth parameters
This adds support for the following environment variables, which were
previously missing:

OS_USER_ID            User ID for keystone v3 authentication
OS_USER_DOMAIN_ID     User domain ID for keystone v3 authentication
OS_PROJECT_DOMAIN_ID  Project domain ID for keystone v3 authentication
OS_TRUST_ID           Trust ID for keystone v3 authentication
2020-12-11 19:22:34 +01:00
Alexander Neumann
36c5d39c2c Fix issues reported by semgrep 2020-12-11 09:41:59 +01:00
MichaelEischer
f2959127b6
Merge pull request #3065 from greatroar/local-subdirs
Don't recurse in local backend's List if not required
2020-11-29 19:03:59 +01:00
greatroar
8e213e82fc backend/local: replace fs.Walk with custom walker
This code is more strict in what it expects to find in the backend:
depending on the layout, either a directory full of files or a directory
full of such directories.
2020-11-19 16:46:42 +01:00
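
A rough sketch of such a strict, non-recursive walker using os.ReadDir; the layout assumption (one level of subdirectories full of files) follows the description above, the function itself is illustrative:

```go
package example

import (
	"os"
	"path/filepath"
)

// listDataFiles expects exactly the layout described above: a directory
// containing subdirectories, each of which holds regular files. It does not
// recurse any deeper and skips anything unexpected.
func listDataFiles(dir string) ([]string, error) {
	subdirs, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var names []string
	for _, sd := range subdirs {
		if !sd.IsDir() {
			continue // ignore stray files in intermediate folders
		}
		entries, err := os.ReadDir(filepath.Join(dir, sd.Name()))
		if err != nil {
			return nil, err
		}
		for _, e := range entries {
			if e.Type().IsRegular() {
				names = append(names, e.Name())
			}
		}
	}
	return names, nil
}
```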
Alexander Neumann
75eff92b56
Merge pull request #3107 from eleith/do-not-require-bucket-permissions-for-init
do not require gs bucket permissions to init repository
2020-11-18 16:53:45 +01:00
eleith
a24e986b2b do not require gs bucket permissions to init repository
a gs service account may only have object permissions on an existing
bucket but no bucket create/get permissions.

these service accounts are currently blocked from initializing a
restic repository because restic cannot determine if the bucket exists.

this PR updates the logic to assume the bucket exists when the bucket
attribute request results in a permissions denied error.

this way, restic can still initialize a repository if the service
account does have object permissions

fixes: https://github.com/restic/restic/issues/3100
2020-11-18 06:14:11 -08:00
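
A hedged sketch of the fallback described above, assuming the cloud.google.com/go/storage client: a 403 from the bucket-attributes request is treated as "assume the bucket exists" and a later object operation decides the outcome. Illustrative, not the gs backend's actual code:

```go
package example

import (
	"context"
	"errors"

	"cloud.google.com/go/storage"
	"google.golang.org/api/googleapi"
)

// bucketUsable decides whether init may proceed: if asking for the bucket's
// attributes is forbidden, assume the bucket exists and let a later object
// operation fail if it does not.
func bucketUsable(ctx context.Context, bkt *storage.BucketHandle) (bool, error) {
	_, err := bkt.Attrs(ctx)
	if err == nil {
		return true, nil
	}
	var gerr *googleapi.Error
	if errors.As(err, &gerr) && gerr.Code == 403 {
		return true, nil // no bucket.get permission: assume the bucket exists
	}
	if errors.Is(err, storage.ErrBucketNotExist) {
		return false, nil
	}
	return false, err
}
```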
tofran
94a154c7ca Remove --drive-use-trash=false from rclone param
Google Drive's trash retention policy changed, making this
no longer a good default.

Issue #3095
2020-11-13 22:58:48 +00:00
Alexander Neumann
4a0b7328ec s3: Remove dots for config description 2020-11-11 20:20:35 +01:00
Nick Douma
829959390a Provide UseV1 parameter to minio.ListObjectsOptions based on s3.list-objects-v1 2020-11-11 11:54:38 +01:00
Nick Douma
ccd55d529d Add s3.list-objects-v1 extended option and default to false 2020-11-11 11:54:36 +01:00
Alexander Weiss
fef408a8bd Return context error in mem backend 2020-11-08 00:05:53 +01:00
greatroar
a2d4209322 Don't recurse in local backend's List if not required
Due to the return if !isFile, the IsDir branch in List was never taken
and subdirectories were traversed recursively.

Also replaced isFile by an IsRegular check, which has been equivalent
since Go 1.12 (golang/go@a2a3dd00c9).
2020-11-07 08:54:13 +01:00
Ivan Andreev
ab2790d9de Fix http2 stream reset between restic and rest backends #3014 2020-11-05 15:57:40 +03:00
Nick Craig-Wood
86b5d8ffaa s3: add bucket-lookup parameter to select path or dns style bucket lookup
This is to enable restic working with Alibaba cloud

Fixes #2528
2020-11-05 12:20:10 +01:00
Alexander Neumann
9a88fb253b
Merge pull request #3051 from greatroar/sanitize-env
Sanitize environment before starting backend processes (rclone, ssh)
2020-11-02 21:18:57 +01:00
greatroar
11fbaaae9a Sanitize environment before starting backend processes (rclone, ssh)
The restic security model includes full trust of the local machine, so
this should not fix any actual security problems, but it's better to be
safe than sorry.

Fixes #2192.
2020-11-02 16:41:23 +01:00
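
A minimal sketch of environment sanitizing before starting a helper process; the list of variables to strip is illustrative, not restic's exact filter:

```go
package example

import (
	"os"
	"os/exec"
	"strings"
)

// startHelper starts a subprocess such as rclone or ssh with secret-carrying
// variables removed from its environment.
func startHelper(name string, args ...string) (*exec.Cmd, error) {
	secrets := []string{"RESTIC_PASSWORD", "AWS_SECRET_ACCESS_KEY", "B2_ACCOUNT_KEY"}

	env := make([]string, 0, len(os.Environ()))
	for _, kv := range os.Environ() {
		keep := true
		for _, key := range secrets {
			if strings.HasPrefix(kv, key+"=") {
				keep = false
				break
			}
		}
		if keep {
			env = append(env, kv)
		}
	}

	cmd := exec.Command(name, args...)
	cmd.Env = env
	return cmd, cmd.Start()
}
```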
Alexander Neumann
3ff37215df
Merge pull request #2935 from MichaelEischer/upgrade-minio
Upgrade minio SDK to version 7
2020-11-02 09:09:10 +01:00
Michael Eischer
37a5e2d681 rest: use global context on repository creation 2020-10-09 22:39:06 +02:00
Michael Eischer
45e9a55c62 Wire context into backend layout detection 2020-10-09 22:37:24 +02:00
Michael Eischer
307a6ba3a3 Upgrade minio sdk to v7
These changes are primarily straightforward modifications to pass the
parameters in the now expected way.
2020-10-09 22:37:24 +02:00
Alexander Neumann
30cb553c8d
Merge pull request #2932 from MichaelEischer/proper-rclone-create
Call rclone.Create to create a new repository for the rclone backend
2020-10-09 21:29:15 +02:00
MichaelEischer
88cc444779
Merge pull request #2934 from MichaelEischer/upgrade-backoff
Upgrade github.com/cenkalti/backoff module
2020-10-08 20:33:30 +02:00
Michael Eischer
b79f18209f Upgrade github.com/cenkalti/backoff module
We now use v4 of the module. `backoff.WithMaxRetries` no longer repeats
an operation endlessly when a retry count of 0 is specified. This
required a few fixes for the tests.
2020-10-07 22:04:59 +02:00
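
A short usage sketch of the v4 module, assuming github.com/cenkalti/backoff/v4; the retry-count semantics follow the commit message above:

```go
package example

import (
	"time"

	"github.com/cenkalti/backoff/v4"
)

// retryOperation retries op with exponential backoff. With backoff/v4,
// passing 0 as maxRetries no longer repeats the operation endlessly
// (see the commit message above).
func retryOperation(maxRetries uint64, op func() error) error {
	b := backoff.NewExponentialBackOff()
	b.InitialInterval = 100 * time.Millisecond
	return backoff.Retry(op, backoff.WithMaxRetries(b, maxRetries))
}
```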
Michael Eischer
f4282aa6fd local: mark repository files as read-only
This is intended to prevent accidental modifications of data files.
Marking the files as read-only was accidentally removed in #1258.
2020-10-07 12:29:37 +02:00
Michael Eischer
40ee17167e local: Ignore permission errors on chmod call in Save/Remove operation
The file is already created with the proper permissions, thus the chmod
call is not critical. However, some file systems don't allow
modifications of the file permissions. Similarly, the chmod call in the Remove
operation should not prevent it from working.
2020-10-07 12:29:37 +02:00
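
Both chmod-related changes above amount to: make the file read-only, but never fail because of it. A minimal sketch:

```go
package example

import "os"

// finalize marks a freshly written repository file as read-only to guard
// against accidental modification. The Chmod error is deliberately ignored:
// the file was already created with sensible permissions, and some
// filesystems do not support changing permissions at all.
func finalize(path string) {
	_ = os.Chmod(path, 0o400)
}
```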
Ingo Gottwald
8b8e230771 Swap deprecated GCS lib with replacement 2020-10-03 18:55:56 +02:00
Ingo Gottwald
00cedd22aa Replace deprecated method in gs backend 2020-10-01 10:02:42 +02:00
MichaelEischer
fd02407863
Merge pull request #2849 from classmarkets/gcs-access-token
gs: support authentication with access token
2020-09-30 17:42:56 +02:00
Fred
206cadfab4 Hide password from repository URLs 2020-09-22 22:00:51 +02:00
Michael Eischer
8c36317b71 rclone: use configured number of connections during create 2020-09-19 19:11:43 +02:00
Michael Eischer
60a5c35de9 rclone: close connection to rclone if open fails 2020-09-19 19:11:43 +02:00
Michael Eischer
b4a7ce86cf uint cannot be less than zero 2020-09-05 10:07:16 +02:00
Hristo Trendev
51c22f4223 local backend: ignore not supported error on sync()
Closes #2395
2020-08-22 01:27:07 +02:00
aawsome
0fed6a8dfc
Use "pack file" instead of "data file" (#2885)
- changed variable names, especially changed DataFile into PackFile
- changed in some comments
- always use "pack file" in docu
2020-08-16 11:16:38 +02:00
Michael Eischer
c81b122374 rclone: Don't panic after unexpected subprocess exit
As the connection to the rclone child process is now closed after an
unexpected subprocess exit, later requests will cause the http2
transport to try to establish a new connection. As this previously should
never have happened, the connection code called panic in that case. This
panic is now replaced with a simple error message, as it no longer
indicates an internal problem.
2020-08-01 12:17:40 +02:00
Michael Eischer
ea97ff1ba4 rclone: Skip crash test when rclone is not found 2020-07-26 12:06:18 +02:00
Michael Eischer
01b9581453 rclone: Better field names for stdio conn 2020-07-26 00:29:25 +02:00
Michael Eischer
3cd927d180 rclone: Give rclone time to finish before closing stdin pipe
Calling `Close()` on the rclone backend sometimes failed during test
execution with 'signal: Broken pipe'. The stdio connection closed both
the stdin and stdout file descriptors at the same moment, therefore
giving rclone no chance to properly send any final http2 data frames.

Now the stdin connection to rclone is closed first and will only be
forcefully closed after a timeout. In case rclone exits before the
timeout then the stdio connection will be closed normally.
2020-07-26 00:29:25 +02:00
greatroar
bf7b1f12ea rclone: Add test for pipe handling when rclone exits 2020-07-26 00:29:25 +02:00
Michael Eischer
8554332894 rclone: Close rclone side of stdio_conn pipes
restic did not notice when the rclone subprocess exited unexpectedly.

restic manually created pipes for stdin and stdout and used these for the
connection to the rclone subprocess. The process creating a pipe gets
file descriptors for the sender and receiver side of a pipe and passes
them on to the subprocess. The expected behavior would be that reads or
writes in the parent process fail / return once the child process dies
as a pipe would now just have a reader or writer but not both.

However, this never happened as restic kept the reader and writer
file descriptors of the pipes. `cmd.StdinPipe` and `cmd.StdoutPipe`
close the subprocess side of pipes once the child process was started
and close the parent process side of pipes once wait has finished. We
can't use these functions as we need access to the raw `os.File` so just
replicate that behavior.
2020-07-26 00:29:25 +02:00
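
A sketch of the pipe handling described above: create the pipes manually, hand one end of each to the child, and close those ends in the parent once the child has started. Error handling is trimmed and the helper is illustrative:

```go
package example

import (
	"os"
	"os/exec"
)

// startWithPipes wires the child process to manually created pipes and closes
// the child's ends in the parent once the child has started. Without that,
// the parent keeps both ends open and never notices when the child dies.
func startWithPipes(cmd *exec.Cmd) (stdin *os.File, stdout *os.File, err error) {
	stdinRd, stdinWr, err := os.Pipe()
	if err != nil {
		return nil, nil, err
	}
	stdoutRd, stdoutWr, err := os.Pipe()
	if err != nil {
		return nil, nil, err
	}

	cmd.Stdin, cmd.Stdout = stdinRd, stdoutWr
	if err := cmd.Start(); err != nil {
		return nil, nil, err
	}

	// The child has inherited its ends of the pipes; closing them here makes
	// reads and writes in the parent fail once the child exits.
	_ = stdinRd.Close()
	_ = stdoutWr.Close()

	return stdinWr, stdoutRd, nil
}
```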
greatroar
3e93b36ca4 Make rclone.New private 2020-07-26 00:28:45 +02:00
Peter Schultz
758b44b9c0 gs: support authentication with access token
In the Google Cloud Storage backend, support specifying access tokens
directly, as an alternative to a credentials file. This is useful when
restic is used non-interactively by some other program that is already
authenticated and eliminates the need to store long lived credentials.

The access token is specified in the GOOGLE_ACCESS_TOKEN environment
variable and takes precedence over GOOGLE_APPLICATION_CREDENTIALS.
2020-07-22 16:23:03 +02:00
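
A hedged sketch of how such a token can be wired into the Go storage client via a static token source; it assumes the cloud.google.com/go/storage and golang.org/x/oauth2 packages and simplifies the fallback path:

```go
package example

import (
	"context"
	"os"

	"cloud.google.com/go/storage"
	"golang.org/x/oauth2"
	"google.golang.org/api/option"
)

// newStorageClient prefers a raw access token from GOOGLE_ACCESS_TOKEN and
// otherwise falls back to the library's default credential discovery
// (credentials file, metadata server, ...).
func newStorageClient(ctx context.Context) (*storage.Client, error) {
	if token := os.Getenv("GOOGLE_ACCESS_TOKEN"); token != "" {
		ts := oauth2.StaticTokenSource(&oauth2.Token{AccessToken: token})
		return storage.NewClient(ctx, option.WithTokenSource(ts))
	}
	return storage.NewClient(ctx)
}
```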
Alexander Neumann
8074879c5f Remove 'go generate' 2020-07-19 17:28:42 +02:00
greatroar
190d8e2f51 Flatten backend.LimitedReadCloser structure
This inlines the io.LimitedReader into the LimitedReadCloser body to
achieve fewer allocations. Results on linux/amd64:

name                                      old time/op    new time/op    delta
Backend/BenchmarkLoadPartialFile-8           412µs ± 4%     413µs ± 4%    ~     (p=0.634 n=17+17)
Backend/BenchmarkLoadPartialFileOffset-8     455µs ±13%     441µs ±10%    ~     (p=0.072 n=20+18)

name                                      old speed      new speed      delta
Backend/BenchmarkLoadPartialFile-8        10.2GB/s ± 3%  10.2GB/s ± 4%    ~     (p=0.817 n=16+17)
Backend/BenchmarkLoadPartialFileOffset-8  9.25GB/s ±12%  9.54GB/s ± 9%    ~     (p=0.072 n=20+18)

name                                      old alloc/op   new alloc/op   delta
Backend/BenchmarkLoadPartialFile-8            888B ± 0%      872B ± 0%  -1.80%  (p=0.000 n=15+15)
Backend/BenchmarkLoadPartialFileOffset-8      888B ± 0%      872B ± 0%  -1.80%  (p=0.000 n=15+15)

name                                      old allocs/op  new allocs/op  delta
Backend/BenchmarkLoadPartialFile-8            18.0 ± 0%      17.0 ± 0%  -5.56%  (p=0.000 n=15+15)
Backend/BenchmarkLoadPartialFileOffset-8      18.0 ± 0%      17.0 ± 0%  -5.56%  (p=0.000 n=15+15)
2020-06-17 13:11:45 +02:00
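
The structural change is easy to show: embed the io.LimitedReader by value so the wrapper needs a single allocation. An illustrative version, not necessarily identical to restic's type:

```go
package example

import "io"

// limitedReadCloser embeds the io.LimitedReader by value instead of pointing
// to a separately allocated one, so wrapping a backend reader needs one
// allocation instead of two.
type limitedReadCloser struct {
	io.Closer
	io.LimitedReader
}

// limitReadCloser returns rd with reads capped at n bytes.
func limitReadCloser(rd io.ReadCloser, n int64) *limitedReadCloser {
	return &limitedReadCloser{
		Closer:        rd,
		LimitedReader: io.LimitedReader{R: rd, N: n},
	}
}
```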
greatroar
f4cd2a7120 Make backend benchmarks fairer by removing checks
Checking whether the right data is returned takes up half the time in
some benchmarks. Results for local backend benchmarks on linux/amd64:

name                                      old time/op    new time/op    delta
Backend/BenchmarkLoadFile-8                 4.89ms ± 0%    2.72ms ± 1%   -44.26%  (p=0.008 n=5+5)
Backend/BenchmarkLoadPartialFile-8           936µs ± 6%     439µs ±15%   -53.07%  (p=0.008 n=5+5)
Backend/BenchmarkLoadPartialFileOffset-8     940µs ± 1%     456µs ±10%   -51.50%  (p=0.008 n=5+5)
Backend/BenchmarkSave-8                     23.9ms ±14%    24.8ms ±41%      ~     (p=0.690 n=5+5)

name                                      old speed      new speed      delta
Backend/BenchmarkLoadFile-8               3.43GB/s ± 0%  6.16GB/s ± 1%   +79.40%  (p=0.008 n=5+5)
Backend/BenchmarkLoadPartialFile-8        4.48GB/s ± 6%  9.63GB/s ±14%  +114.78%  (p=0.008 n=5+5)
Backend/BenchmarkLoadPartialFileOffset-8  4.46GB/s ± 1%  9.22GB/s ±10%  +106.74%  (p=0.008 n=5+5)
Backend/BenchmarkSave-8                    706MB/s ±13%   698MB/s ±31%      ~     (p=0.690 n=5+5)
2020-06-17 13:11:45 +02:00
greatroar
df66daa5c9 Fix context usage in backend tests
Found by go vet. This is also the only complaint it has.
2020-04-18 17:39:06 +02:00
greatroar
8cf3bb8737 Revert "Put host last in SSH command line"
This reverts commit e1969d1e33.
2020-03-08 16:45:33 +01:00
Alexander Neumann
0c03a80fc4
Merge pull request #2592 from greatroar/sftp-ipv6
Support IPv6 in SFTP backend
2020-03-01 19:38:44 +01:00
Alexander Neumann
99fd80a585 Remove all workarounds for Go < 1.11 2020-02-26 20:35:13 +01:00
greatroar
e1969d1e33 Put host last in SSH command line
This is how the SSH manpage says the command line should look, and the
"--" prevents mistakes in hostnames from being interpreted as options.
2020-02-19 15:53:20 +01:00
greatroar
6ac6bca7a1 Support IPv6 in SFTP backend
The previous code was doing its own hostname:port splitting, which
caused IPv6 addresses to be misinterpreted.
2020-02-19 15:42:12 +01:00
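
The standard-library fix for this class of bug is net.SplitHostPort, which understands bracketed IPv6 literals. A minimal sketch; the default-port fallback is an assumption for illustration:

```go
package example

import "net"

// splitAddr splits "host:port" with net.SplitHostPort, which handles IPv6
// literals such as "[::1]:2222" correctly, instead of hand-rolled splitting.
// When no port is given, the address is used as-is with the default SSH port.
func splitAddr(addr string) (host, port string) {
	host, port, err := net.SplitHostPort(addr)
	if err != nil {
		return addr, "22"
	}
	return host, port
}
```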
rawtaz
680a14afa1
Merge pull request #2530 from restic/fix-sftp-mkdirall
sftp: Use MkdirAll provided by the client
2020-02-13 01:12:59 +01:00
Lars Lehtonen
3ed54e762e
internal/backend/sftp: fix dropped test error 2020-02-12 13:36:21 -08:00
Alexander Neumann
2cd9c7ef16 sftp: Use MkdirAll provided by the client
Closes #2518
2020-01-01 17:26:38 +01:00
streambinder
97e5ce4344 internal: backend: sftp: support user@domain parsing as user 2019-12-19 13:15:37 +01:00
rawtaz
e14c4b1737
Merge pull request #2484 from restic/add-s3-region
s3: Allow specifying region
2019-11-22 15:51:17 +01:00
Alexander Neumann
409909a7f5 Add option description for Region 2019-11-22 15:09:09 +01:00
mdauphin
df500a372d Add AWS_REGION env var to specify s3 region 2019-11-22 15:04:48 +01:00
Alexander Neumann
a6e8af7e0f Update minio-go 2019-11-22 14:50:46 +01:00
Alexandr Bruyako
38ea7ed4f6 remove unused code 2019-07-01 00:24:45 +03:00
Alexandr Bruyako
02014be76c simplified prefix removal, removed unnecessary if-else statements 2019-06-30 23:34:47 +03:00
Alexandr Bruyako
16eeed2ad5 simplified string sorting by using a more suitable function 2019-06-30 23:20:32 +03:00
Alexander Neumann
6b700d02f5 Merge pull request #2217 from restic/improve-memory-usage
WIP: improve memory usage
2019-04-13 15:07:07 +02:00
Alexander Neumann
d51e9d1b98 Add []byte to repo.LoadAndDecrypt and utils.LoadAll
This commit changes the signatures for repository.LoadAndDecrypt and
utils.LoadAll to allow passing in a []byte as the buffer to use. This
buffer is enlarged as needed, and returned back to the caller for
further use.

In later commits, this allows reducing allocations by reusing a buffer
for multiple calls, e.g. in a worker function.
2019-04-13 13:38:39 +02:00
Benoît Knecht
3c112d9cae s3: Add config option to set storage class
The `s3.storage-class` option can be passed to restic (using `-o`) to
specify the storage class to be used for S3 objects created by restic.

The storage class is passed as-is to S3, so it needs to be understood by
the API. On AWS, it can be one of `STANDARD`, `STANDARD_IA`,
`ONEZONE_IA`, `INTELLIGENT_TIERING` and `REDUCED_REDUNDANCY`. If
unspecified, the default storage class is used (`STANDARD` on AWS).

You can mix storage classes in the same bucket, and the setting isn't
stored in the restic repository, so be sure to specify it with each
command that writes to S3.

Closes #706
2019-03-26 16:37:07 +01:00
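
A sketch of how such an option maps onto an S3 client call, assuming minio-go v7 (the code base has since moved to that version); the helper and parameter names are illustrative:

```go
package example

import (
	"context"

	"github.com/minio/minio-go/v7"
)

// putWithStorageClass uploads a local file as an object with an explicit
// storage class; the value (e.g. "STANDARD_IA") is passed to S3 as-is.
func putWithStorageClass(ctx context.Context, client *minio.Client, bucket, key, file, class string) error {
	opts := minio.PutObjectOptions{StorageClass: class}
	_, err := client.FPutObject(ctx, bucket, key, file, opts)
	return err
}
```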
Alexander Neumann
aaa1cc2c26 Merge pull request 2193 from restic/allow-empty-rclone-args
rclone: Rework backend option parsing
2019-03-16 12:17:38 +01:00
Alexander Neumann
3865b59716 rclone: Rework backend option parsing
This change allows passing no arguments to rclone, using `-o
rclone.args=""`. It is helpful when running rclone remotely via SSH
using a key with a forced command (via `command=` in `authorized_keys`).
2019-03-02 10:36:42 +01:00
kayrus
6ebcfe7c18 Swift: introduce application credential auth support 2019-02-14 14:19:05 +01:00
Alexander Neumann
c9745cd47e gs: Respect bandwidth limiting
In 0dfdc11ed9, we accidentally dropped
using the provided http.RoundTripper; this commit adds it back.

Closes #1989
2018-11-25 18:52:32 +01:00
Alexander Neumann
7ac683c360 rclone: Inject debug logger for HTTP 2018-10-21 19:58:40 +02:00
Toby Burress
8ceda538ef b2: simplify object iteration
Blazer is moving to a simpler object list interface, so I'm changing
this here as well.
2018-10-05 11:39:02 -07:00
George Armhold
bfc1bc6ee6 clean up some errors from 'go vet ./...' 2018-09-05 08:04:55 -04:00
denis.uzvik
1e42f4f300 S3 backend: accept AWS_SESSION_TOKEN 2018-07-12 16:18:19 +03:00
Alexander Neumann
141fabdd09 s3: Pass list errors up to the caller 2018-06-01 22:15:23 +02:00
Alexander Neumann
465700595c azure: Support uploading large files
Closes #1822
2018-06-01 14:52:16 +02:00
Jakob Unterwurzacher
dd3b9910ee sftp: persist "ssh command exited" error
If our ssh process has died, not only the next, but all subsequent
calls to clientError() should indicate the error.

restic output when the ssh process is killed with "kill -9":

  Save(<data/afb68adbf9>) returned error, retrying after 253.661803ms: Write: failed to send packet header: write |1: file already closed
  Save(<data/afb68adbf9>) returned error, retrying after 580.752212ms: ssh command exited: signal: killed
  Save(<data/afb68adbf9>) returned error, retrying after 790.150468ms: ssh command exited: signal: killed
  Save(<data/afb68adbf9>) returned error, retrying after 1.769595051s: ssh command exited: signal: killed
  [...]
  error in cleanup handler: ssh command exited: signal: killed

Before this patch:

  Save(<data/de698d934f>) returned error, retrying after 252.84163ms: Write: failed to send packet header: write |1: file already closed
  Save(<data/de698d934f>) returned error, retrying after 660.236963ms: OpenFile: failed to send packet header: write |1: file already closed
  Save(<data/de698d934f>) returned error, retrying after 568.049909ms: OpenFile: failed to send packet header: write |1: file already closed
  Save(<data/de698d934f>) returned error, retrying after 2.428813824s: OpenFile: failed to send packet header: write |1: file already closed
  [...]
  error in cleanup handler: failed to send packet header: write |1: file already closed
2018-05-30 19:28:14 +02:00
Alexander Neumann
2a976d795f b2: Remove extra error check 2018-05-26 10:12:30 +02:00
Alexander Neumann
bfd923e81e rclone: Respect bandwidth limits 2018-05-22 20:48:17 +02:00
Steve Kriss
b358dd369b S3: rearrange credentials chain to be standard
Signed-off-by: Steve Kriss <steve@heptio.com>
2018-05-16 16:49:33 -07:00
Steve Kriss
d67b9a32c6 S3: add file credentials to chain
Signed-off-by: Steve Kriss <steve@heptio.com>
2018-05-16 16:35:14 -07:00
Bryce Chidester
e9f1721678 http backend: Parse the correct argument when loading --tls-client-cert
Previously, the function read from ARGV[1] (hardcoded) rather than the
value passed to it (the command-line argument as it exists in globalOptions).

Resolves #1745
2018-04-30 15:21:09 -07:00
Alexander Neumann
577faa7570 local/sftp: Handling non-existing dirs in List() 2018-04-10 21:35:30 +02:00
Alexander Neumann
3f48e0e0f4 Add extra options to rclone
For details see https://github.com/restic/restic/pull/1657#issuecomment-377707486
2018-04-01 10:34:30 +02:00
Alexander Neumann
86f4b03730 Remove unneeded byte counters 2018-04-01 10:18:38 +02:00
Alexander Neumann
c43c94776b rclone: Make concurrent connections configurable 2018-04-01 10:18:38 +02:00
Alexander Neumann
0b776e63e7 backend/rclone: Request random file name
When `/` is requested, rclone returns the list of all files in the
remote, which is not what we want (and it can take quite some time).
2018-04-01 10:18:38 +02:00
Alexander Neumann
737d93860a Extend first timeout to 60 seconds. 2018-04-01 10:18:38 +02:00
Alexander Neumann
17312d3a98 backend/rest: Ensure base URL ends with slash
This makes it easier for rclone.
2018-04-01 10:18:38 +02:00
Alexander Neumann
4d5c7a8749 backend/rclone: Make sure rclone terminates 2018-04-01 10:18:38 +02:00
Alexander Neumann
fc0295016a Address code review comments 2018-04-01 10:18:38 +02:00
Alexander Neumann
99b62c11b8 backend/rclone: Stop rclone in case of errors 2018-04-01 10:18:38 +02:00
Alexander Neumann
6d9a029e09 backend/rclone: Prefix all error messages 2018-04-01 10:18:38 +02:00
Alexander Neumann
065fe1e54f backend/rclone: Skip test if binary is unavailable 2018-04-01 10:16:31 +02:00
Alexander Neumann
4dc0f24b38 backend/tests: Drain reader before returning error 2018-04-01 10:16:31 +02:00
Alexander Neumann
fe99340e40 Add rclone backend 2018-04-01 10:16:31 +02:00
Alexander Neumann
e377759c81 rest: Export Backend struct 2018-04-01 10:16:31 +02:00
Alexander Neumann
cabbbd2b14 backend/rest: Export Content-Types 2018-04-01 10:16:31 +02:00
Alexander Neumann
cf4cf94418 Move backend/sftp.StartForeground to backend/ 2018-04-01 10:16:31 +02:00
Alexander Neumann
34f27edc03 Refactor SplitShellStrings 2018-04-01 10:16:31 +02:00
Alexander Neumann
345b6c4694 Move backend/sftp.SplitShellArgs to backend/ 2018-04-01 10:16:31 +02:00
Alexander Neumann
673f0bbd6c Update vendored library github.com/cenkalti/backoff 2018-03-30 11:45:27 +02:00
Alexander Neumann
e5c929b793 Fix rest-server tests
Since today, the rest-server needs to be explicitly told (via
`--no-auth`) that authentication is not necessary.
2018-03-24 18:06:21 +01:00
Lawrence Jones
0dfdc11ed9
Automatically load Google auth
This change removes the hardcoded Google auth mechanism for the GCS
backend, instead using Google's provided client library to discover and
generate credential material.

Google recommend that client libraries use their common auth mechanism
in order to authorise requests against Google services. Doing so means
you automatically support various types of authentication, from the
standard GOOGLE_APPLICATION_CREDENTIALS environment variable to making
use of Google's metadata API if running within Google Container Engine.
2018-03-11 17:11:25 +00:00
Alexander Neumann
fcc9ce81ba rest: Really set Content-Length HTTP header 2018-03-09 20:21:34 +01:00
Nick Craig-Wood
04c4033695 backend/rest: check HTTP error response for List
Before this change restic would attempt to JSON decode the error
message resulting in confusing `Decode: invalid character 'B' looking
for beginning of value` messages.  Afterwards it will return `List
failed, server response: 400 Bad Request (400)`
2018-03-08 10:22:43 +00:00
Alexander Neumann
be0a5b7f06 Merge pull request #1649 from jasperla/solaris
Minimal set of patches to get restic working on Solaris
2018-03-05 20:00:17 +01:00
Jasper Lievisse Adriaanse
96311d1a2b Add support for illumos/Solaris
This does come without xattr/fuse support at this point.

NB: not hooking up the integration tests as restic won't compile without
    cgo with Go < 1.10.
2018-03-04 20:11:29 +00:00
Alexander Neumann
da77f4a2e2 Merge pull request #1647 from duzvik/aws-session-token
Change priority of AWS credential providers to accept AWS_SESSION_TOKEN
2018-03-04 20:54:56 +01:00
denis.uzvik
6bb1bcce03 Change priority of AWS credential providers to accept AWS_SESSION_TOKEN 2018-03-04 19:58:27 +02:00
Alexander Neumann
929afc63d5 Use int64 for the length in the RewindReader 2018-03-04 10:40:42 +01:00
Alexander Neumann
99f7fd74e3 backend: Improve Save()
As mentioned in issue [#1560](https://github.com/restic/restic/pull/1560#issuecomment-364689346)
this changes the signature for `backend.Save()`. It now takes a
parameter of interface type `RewindReader`, so that the backend
implementations or our `RetryBackend` middleware can reset the reader to
the beginning and then retry an upload operation.

The `RewindReader` interface also provides a `Length()` method, which is
used in the backend to get the size of the data to be saved. This
removes several ugly hacks we had to do to pull the size back out of the
`io.Reader` passed to `Save()` before. In the `s3` and `rest` backend
this is actively used.
2018-03-03 15:49:44 +01:00
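
A simplified stand-in for such an interface and one implementation backed by a byte slice; method names follow the commit message, but this is not restic's exact definition:

```go
package example

import (
	"bytes"
	"io"
)

// RewindReader is a reader that can be reset to the beginning for retried
// uploads and that knows the total length of its data.
type RewindReader interface {
	io.Reader
	Rewind() error
	Length() int64
}

// byteReader is the simplest possible implementation, backed by a byte slice.
type byteReader struct {
	*bytes.Reader
	size int64
}

func NewByteReader(buf []byte) RewindReader {
	return &byteReader{Reader: bytes.NewReader(buf), size: int64(len(buf))}
}

func (b *byteReader) Rewind() error {
	_, err := b.Seek(0, io.SeekStart)
	return err
}

func (b *byteReader) Length() int64 { return b.size }
```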
denis.uzvik
5873ab4031 Ignore s3 AccessDenied error, during creation of repository 2018-03-02 10:47:20 +02:00
Alexander Neumann
93210614f4 backend/retry: return worker function error and abort
This is a bug fix: Before, when the worker function fn in List() of the
RetryBackend returned an error, the operation was retried with the next
file. This is not consistent with the documentation, the intention was
that when fn returns an error, this is passed on to the caller and the
List() operation is aborted. Only errors happening on the underlying
backend are retried.

The error leads to restic ignoring exclusive locks that are present in
the repo, so it may happen that a new backup is written which references
data that is going to be removed by a concurrently running `prune`
operation.

The bug was reported by a user here:
https://forum.restic.net/t/restic-backup-returns-0-exit-code-when-already-locked/484
2018-02-24 13:26:13 +01:00
Alexander Neumann
29da86b473 Merge pull request #1623 from restic/backend-relax-restrictions
backend: Relax requirement for new files
2018-02-18 12:56:52 +01:00
Alexander Neumann
b5062959c8 backend: Relax requirement for new files
Before, all backend implementations were required to return an error if
the file that is to be written already exists in the backend. For most
backends, that means making a request (e.g. via HTTP) and returning an
error when the file already exists.

This is not accurate: the file could have been created between the HTTP
request testing for it and the moment writing starts. In addition, apart from
the `config` file in the repo, all other file names have pseudo-random
names with a very very low probability of a collision. And even if a
file name is written again, the way the restic repo is structured means
that the same content is placed there again, which is not a problem, just
not very efficient.

So, this commit relaxes the requirement to return an error when the file
in the backend already exists, which allows reducing the number of API
requests and thereby the latency for remote backends.
2018-02-17 22:39:18 +01:00
Igor Fedorenko
d58ae43317 Reworked Backend.Load API to retry errors during ongoing download
Signed-off-by: Igor Fedorenko <igor@ifedorenko.com>
2018-02-16 21:12:14 -05:00
Alexander Neumann
514f1b8917 Relax timeout backend test 2018-02-10 12:53:38 +01:00
Igor Fedorenko
aa333f4d49 Implement RetryBackend.List()
Signed-off-by: Igor Fedorenko <igor@ifedorenko.com>
2018-01-29 22:14:12 -05:00
Alexander Neumann
2369da158f Merge pull request #1592 from ncw/helpful-tests
Make backend tests more helpful
2018-01-28 10:09:35 +01:00
Nick Craig-Wood
fb62da1748 Make backend tests more helpful
* In TestList check that backend is empty first
  * Improve error message in TestBackend
2018-01-27 21:36:35 +00:00
Alexander Neumann
5dc8d3588d GS: Use generic http transport
During the development of #1524 I discovered that the Google Cloud
Storage backend did not yet use the HTTP transport, so things such as
bandwidth limiting did not work. This commit does the necessary magic to
make the GS library use our HTTP transport.
2018-01-27 20:12:34 +01:00
Alexander Neumann
c34db983d8 Read TLS client cert and key from the same file 2018-01-27 14:02:01 +01:00
Bryce Chidester
e805b968b1 Support for TLS client certificate authentication
This adds --tls-client-cert and --tls-client-key parameters and enables use
of that certificate/key pair when connecting to https servers.
2018-01-27 13:18:22 +01:00
Alexander Neumann
e11a183578 Merge pull request #1588 from restic/fix-sftp-without-tty
sftp: Allow running ssh without a tty
2018-01-26 21:56:41 +01:00
Alexander Neumann
7719cf88d9 b2: Check timeout 2018-01-26 21:07:05 +01:00
Alexander Neumann
00e905ebe6 sftp: Allow running ssh without a tty 2018-01-26 19:21:14 +01:00
Igor Fedorenko
abc4027083 Use errors.Cause in backend TestListCancel
Signed-off-by: Igor Fedorenko <igor@ifedorenko.com>
2018-01-25 08:53:50 -05:00
Alexander Neumann
7e6bfdae79 backend/rest: Implement REST API v2 2018-01-23 23:15:26 +01:00
Alexander Neumann
e835abeceb backend/test: Reliably trigger timeout error 2018-01-23 23:14:05 +01:00
Alexander Neumann
b0c6e53241 Fix calls to repo/backend.List() everywhere 2018-01-21 21:15:09 +01:00
Alexander Neumann
e9ea268847 Change List() implementation for all backends 2018-01-21 21:15:09 +01:00
Alexander Neumann
c4e9d5d11e backend: Add tests for new List() function 2018-01-21 18:35:37 +01:00
Alexander Neumann
52230b8f07 backend: Rework List()
For a discussion see #1567
2018-01-21 18:35:37 +01:00
Alexander Neumann
2130897ce0 rest: Add test for external server 2018-01-20 10:25:47 +01:00
Alexander Neumann
67da240068 rest: Refactor backend tests 2018-01-20 10:25:37 +01:00
Alexander Neumann
1046eabf95 rest: Remove unneeded tempdir 2018-01-20 10:13:04 +01:00
Alexander Neumann
0bdb131521 Remove SuspendSignalHandler 2018-01-17 23:14:47 +01:00
Alexander Neumann
05958caf6e sftp: Prompt for password, don't terminate on SIGINT
This is a follow-up on fb9729fdb9, which
runs `ssh` in its own process group and selects that process group
as the foreground group. After the sftp connection is established,
restic switches back to the previous foreground process group.

This allows `ssh` to prompt for the password, but it won't receive
the interrupt signal (SIGINT, ^C) later on, because it is not in the
foreground process group any more, allowing a clean tear down.
2018-01-17 23:02:47 +01:00
Igor Fedorenko
8c550ca011 fixed: restic check does not retry backend.Test failures
added missing RetryBackend.Test implementation

Signed-off-by: Igor Fedorenko <igor@ifedorenko.com>
2018-01-06 23:22:35 -05:00
Alexander Neumann
b45fc89512 local/sftp: Create repo dirs on demand in Save() 2018-01-05 17:51:09 +01:00
Alexander Neumann
6c2b2a58ad backend: Retry deletes 2017-12-22 22:41:28 +01:00
Alexander Neumann
7d8765a937 backend: Only return top-level files for most dirs
Fixes #1478
2017-12-14 19:14:16 +01:00
Harshavardhana
27ccea6371 Since upgrade to minio-go 4.0 remove workaround
We previously added code to fix the issue of chaining
credentials; we do not need this anymore since the
upstream minio-go already has the relevant change.
2017-12-09 02:01:42 -08:00
Alexander Neumann
8b3b7bc5ef s3: Use context 2017-12-08 22:04:55 +01:00
Alexander Neumann
934ae1b559 Update to minio-go 4 2017-12-08 21:52:50 +01:00
George Armhold
1695c8ed55 use global context for check, debug, dump, find, forget, init, key,
list, mount, tag, unlock commands

gh-1434
2017-12-06 07:02:55 -05:00
George Armhold
0dc31c03e1 remove check for context.Canceled
gh-1434
2017-12-06 05:38:29 -05:00
George Armhold
be24237063 make retry code context-aware.
detect cancellation in backend, so that retry code does not keep trying
once user has hit ^c

gh-1434
2017-12-03 07:22:14 -05:00
George Armhold
d886cb5c27 replace ad-hoc context.TODO() with gopts.ctx, so that cancellation
can properly trickle down from cmd_*.

gh-1434
2017-12-03 07:22:14 -05:00
Alexander Neumann
0b44c629f2 retry: Remove file after failed save 2017-11-30 22:05:14 +01:00
Alexander Neumann
134abbd82b rest: Use client for creating the repository
Before, creating a new repo via REST would use the default HTTP client,
which is not a problem unless the server uses HTTPS and a TLS
certificate which isn't signed by a CA in the system's CA store. In this
case, all commands work except the 'init' command, which fails with a
message like "invalid certificate".
2017-11-25 20:56:40 +01:00
Alexander Neumann
1ebf0e8de8 Merge pull request #1437 from restic/fix-1292
s3: Document and remove default prefix
2017-11-25 11:34:26 +01:00
Alexander Neumann
47b326b7b5 Merge pull request #1423 from harshavardhana/creds
Fix chaining of credentials for minio-go
2017-11-24 21:57:52 +01:00
Alexander Neumann
262b0cd9d4 s3: Remove default prefix "/restic" 2017-11-21 21:33:09 +01:00
Alexander Neumann
e83ec17e95 s3: Correct comment 2017-11-20 22:21:39 +01:00
Harshavardhana
41c8c946ba Fix chaining of credentials for minio-go
Chaining failed because the chaining provider
was only looking for the subsequent credentials
provider after an error. Wrote a new
chaining provider which proceeds to fetch
new credentials also in situations where
providers do not return an error but instead return
no keys at all.

Fixes https://github.com/restic/restic/issues/1422
2017-11-18 02:51:12 -08:00
George Armhold
0268d0e7d6 swift backend: limit http concurrency in Save(), Stat(), Test(), Remove(),
List().

move comment regarding problematic List() backend api (it's s3's ListObjects
that has a problem, NOT swift's ObjectsWalk).

As per discussion in PR #1399.
2017-11-02 18:29:32 -04:00
George Armhold
8515d093e0 swift backend: fix premature release of semaphore in Load() & document
concurrency issue in List().

refactor wrapReader from b2 -> semaphore so it can be used elsewhere.

As per discussion in PR #1399.
2017-11-02 12:38:17 -04:00
George Armhold
99ac0da4bc s3 backend: limit http concurrency in Save(), Stat(), Test(), Remove()
NB: List() is NOT currently limited, as it would cause deadlock due to
be.client.ListObjects() implementation.

as per discussion in PR #1399
2017-11-01 09:40:54 -04:00
George Armhold
d069ee31b2 GS backend: limit http concurrency in Save(), Stat(), Test(), Remove(), List()
as per discussion in PR #1399
2017-10-31 08:01:43 -04:00
George Armhold
981752ade0 Azure backend: limit http concurrency in Stat(), Test(), Remove()
as per discussion in PR #1399
2017-10-31 07:32:30 -04:00
George Armhold
2f8147af59 log unexpected errs from b2 ListCurrentObject()
gh-1385
2017-10-29 08:53:39 -04:00
Alexander Neumann
f854a41ba9 Merge pull request #1399 from armhold/deadlock2
prevent deadlock in List() for B2 when b2.connections=1
2017-10-29 09:26:46 +01:00
George Armhold
3304b0fcf0 prevent deadlock in List() for B2 when b2.connections=1
This is a fix for the following situation (gh-1188):

List() grabs a semaphore token upon entry, starts a goroutine, and
does not release the token until the routine exits (via a defer).

The goroutine iterates over the results from ListCurrentObjects(),
sending them one at a time to a channel, where they are ultimately
processed by be.Load().

Since be.Load() also needs a token, this will result in deadlock if
b2.connections=1.

This fix changes List() so that the token is only held during the call
to ListCurrentObjects().
2017-10-28 18:46:47 -04:00
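
A minimal sketch of the fix using a buffered-channel semaphore: the token is held only while the remote listing runs, not while the results are being consumed:

```go
package example

// list illustrates the fix: the semaphore token is held only while the remote
// listing runs and is released before the results are handed out, so that a
// consumer calling Load (which needs its own token) cannot deadlock even with
// a single connection.
func list(sem chan struct{}, listRemote func() []string) <-chan string {
	ch := make(chan string)
	go func() {
		defer close(ch)

		sem <- struct{}{} // acquire
		names := listRemote()
		<-sem // release before sending results

		for _, name := range names {
			ch <- name
		}
	}()
	return ch
}
```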
George Armhold
d8938e259a sftp ReadDir: add path to return error messages (gh-1323)
fix missing "Close" string in debug log fmt
2017-10-28 14:16:27 -04:00
Alexander Neumann
7a99418dc5 Merge pull request #1393 from armhold/lint-errcheck
detect errors from fs.Walk() in local backend List()
2017-10-28 09:56:11 +02:00
Alexander Neumann
c71ba466ea Merge pull request #1391 from armhold/b2-listmax
pass in defaultListMaxItems to b2Backend constructor
2017-10-28 09:54:57 +02:00
George Armhold
8a37c07295 send errors from fs.Walk() to debug log
clarify non-err returns from Walk where err is already proved to be nil
2017-10-27 08:41:17 -04:00
George Armhold
bd0ada7842 go fmt 2017-10-26 16:37:11 -04:00
George Armhold
eea96f652d go fmt 2017-10-26 16:22:10 -04:00
George Armhold
38c3061df7 pass in defaultListMaxItems to b2Backend constructor
gh-1385
2017-10-26 14:22:16 -04:00
George Armhold
bcdebfb84e small cleanup:
- be explicit when discarding returned errors from .Close(), etc.
- remove named return values from funcs when naked return not used
- fix some "err" shadowing when redeclaration not needed
2017-10-25 12:03:55 -04:00
Alexander Neumann
90b96d19cd Merge pull request #1365 from felix9/fix_1068
Fix failure to detect some legacy s3 repos
2017-10-21 12:19:58 +02:00
Alexander Neumann
d63ab4e9a4 Merge pull request #1358 from prattmic/chunk_size
gs: add option to set chunk size
2017-10-21 11:13:48 +02:00
Felix Lee
944fc857eb Fix failure to detect some legacy s3 repos
Sometimes s3 listobjects for a directory includes an entry for that
directory. The restic s3 backend doesn't expect that and returns
an error.

Symptom is:
  ReadDir: invalid key name restic/key/, removing prefix
     restic/key/ yielded empty string

I'm not sure when s3 does that; I'm unable to reproduce it myself.

But in any case, it seems correct to ignore that when it happens.

Fixes #1068
2017-10-18 13:45:31 -07:00
Michael Pratt
9fa4f5eb6b gs: disable resumable uploads
By default, the GCS Go packages have an internal "chunk size" of 8MB,
used for blob uploads.

Media().Do() will buffer a full 8MB from the io.Reader (or less if EOF
is reached) then write that full 8MB to the network all at once.

This behavior does not play nicely with --limit-upload, which only
limits the Reader passed to Media. While the long-term average upload
rate will be correctly limited, the actual network bandwidth will be
very spikey.

e.g., if an 8MB/s connection is limited to 1MB/s, Media().Do() will
spend 8s reading from the rate-limited reader (performing no network
requests), then 1s writing to the network at 8MB/s.

This is bad for network connections hurt by full-speed uploads,
particularly when writing 8MB will take several seconds.

Disable resumable uploads entirely by setting the chunk size to zero.
This causes the io.Reader to be passed further down the request stack,
where there is less (but still some) buffering.

My connection is around 1.5MB/s up, with nominal ~15ms ping times to
8.8.8.8.

Without this change, --limit-upload 1024 results in several seconds of
~200ms ping times (uploading), followed by several seconds of ~15ms ping
times (reading from rate-limited reader). A bandwidth monitor reports
this as several seconds of ~1.5MB/s followed by several seconds of
0.0MB/s.

With this change, --limit-upload 1024 results in ~20ms ping times and
the bandwidth monitor reports a constant ~1MB/s.

I've elected to make this change unconditional of --limit-upload because
the resumable uploads shouldn't be providing much benefit anyways, as
restic already uploads mostly small blobs and already has a retry
mechanism.

--limit-download is not affected by this problem, as Get().Download()
returns the real http.Response.Body without any internal buffering.

Updates #1216
2017-10-17 21:12:04 -07:00
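
A hedged sketch of the same idea with the current cloud.google.com/go/storage client (the commit itself predates that client and used the older API): setting the writer's ChunkSize to zero disables the buffered, resumable-upload path:

```go
package example

import (
	"context"
	"io"

	"cloud.google.com/go/storage"
)

// upload streams an object without resumable-upload buffering: with ChunkSize
// set to zero the data is sent in a single request, so rate limiting applied
// to rd takes effect on the wire immediately instead of in 8 MiB bursts.
func upload(ctx context.Context, obj *storage.ObjectHandle, rd io.Reader) error {
	w := obj.NewWriter(ctx)
	w.ChunkSize = 0
	if _, err := io.Copy(w, rd); err != nil {
		_ = w.Close()
		return err
	}
	return w.Close()
}
```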
Alexander Neumann
ce4d71d626 backend: Add partial read failure to error backend 2017-10-17 22:11:38 +02:00
Alexander Neumann
8dc952775e backend: Correctly retry Save() calls
Make sure the given reader is an io.Seeker and rewind it properly each
time.
2017-10-17 21:46:38 +02:00
Alexander Neumann
4a995105a9 sftp: Fix Delete() 2017-10-14 16:08:15 +02:00
Alexander Neumann
7fe496f983 Ensure TestDelete runs last 2017-10-14 16:04:29 +02:00
Alexander Neumann
e56370eb5b Remove Deleter interface 2017-10-14 16:04:29 +02:00
Alexander Neumann
b8af7f63a0 backend test: Always remove files for TestList 2017-10-14 15:56:25 +02:00
Alexander Neumann
897c923cc9 Retry failed backend requests 2017-10-14 15:56:25 +02:00
Alexander Neumann
0e722efb09 backend: Add Delete() to restic.Backend interface 2017-10-14 15:56:25 +02:00
Harshavardhana
042adeb5d0 Refactor credentials management to support multiple mechanisms.
This PR adds the ability to chain credentials providers, such that
restic as a tool attempts to honor credentials provided in multiple
different ways.

Currently supported mechanisms are

 - static (user-provided)
 - IAM profile (only valid inside configured ec2 instances)
 - Standard AWS envs (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
 - Standard Minio envs (MINIO_ACCESS_KEY, MINIO_SECRET_KEY)

Refer https://github.com/restic/restic/issues/1341
2017-10-09 12:51:39 -07:00
Alexander Neumann
c5553ec855 Merge pull request #1276 from fawick/supply_ca_cert
Add REST backend option to use CA root certificate
2017-10-08 09:47:23 +02:00
Fabian Wickborn
6da9bfbbce Create missing lock dir when saving lock 2017-10-05 00:07:48 +02:00
Fabian Wickborn
69a6e622d0 Add REST backend option to use CA root certificate
Closes #1114.
2017-10-04 22:14:10 +02:00
Herbert
3473c3f7b6 Remove all dot-imports 2017-10-02 15:06:39 +02:00
Alexander Neumann
556a63de19 sftp: Return error when path starts with a tilde (~) 2017-09-30 10:34:23 +02:00
Michael Pratt
fa0be82da8 gs: allow backend creation without storage.buckets.get
If the service account used with restic does not have the
storage.buckets.get permission (in the "Storage Admin" role), Create
cannot use Get to determine if the bucket is accessible.

Rather than always trying to create the bucket on Get error, gracefully
fall back to assuming the bucket is accessible. If it is, restic init
will complete successfully. If it is not, it will fail on a later call.

Here is what init looks like now in different cases.

Service account without "Storage Admin":

Bucket exists and is accessible (this is the case that didn't work
before):

$ ./restic init -r gs:this-bucket-does-exist:/
enter password for new backend:
enter password again:
created restic backend c02e2edb67 at gs:this-bucket-does-exist:/

Please note that knowledge of your password is required to access
the repository. Losing your password means that your data is
irrecoverably lost.

Bucket exists but is not accessible:

$ ./restic init -r gs:this-bucket-does-exist:/
enter password for new backend:
enter password again:
create key in backend at gs:this-bucket-does-exist:/ failed:
service.Objects.Insert: googleapi: Error 403:
my-service-account@myproject.iam.gserviceaccount.com does not have
storage.objects.create access to object this-bucket-exists/keys/0fa714e695c8ecd58cb467cdeb04d36f3b710f883496a90f23cae0315daf0b93., forbidden

Bucket does not exist:

$ ./restic init -r gs:this-bucket-does-not-exist:/
create backend at gs:this-bucket-does-not-exist:/ failed:
service.Buckets.Insert: googleapi: Error 403:
my-service-account@myproject.iam.gserviceaccount.com does not have storage.buckets.create access to bucket this-bucket-does-not-exist., forbidden

Service account with "Storage Admin":

Bucket exists and is accessible: Same

Bucket exists but is not accessible: Same. Previously this would fail
when Create tried to create the bucket. Now it fails when trying to
create the keys.

Bucket does not exist:

$ ./restic init -r gs:this-bucket-does-not-exist:/
enter password for new backend:
enter password again:
created restic backend c3c48b481d at gs:this-bucket-does-not-exist:/

Please note that knowledge of your password is required to access
the repository. Losing your password means that your data is
irrecoverably lost.
2017-09-25 22:25:51 -07:00
Michael Pratt
3b2106ed30 gs: document required permissions
In the manual, state which standard roles the service account must
have to work correctly, as well as the specific permissions required,
for creating even more specific custom roles.
2017-09-24 11:25:57 -07:00
Michael Pratt
5f4f997126 gs: minor comment cleanups
* Remove a reference to S3.
* Config can only be used for GCS, not other "gcs compatible servers".
* Make comments complete sentences.
2017-09-24 10:10:56 -07:00
Alexander Neumann
9c6b7f688e Merge pull request #1270 from restic/sftp-allow-password-prompt
sftp: Allow password entry
2017-09-23 22:13:04 +02:00
Alexander Neumann
429106340f Merge pull request #1267 from harshavardhana/possible-fix-memory
Implement Size() and Len() to know the optimal size.
2017-09-23 14:04:15 +02:00
Alexander Neumann
fb9729fdb9 sftp: Allow password entry
This was a bit tricky: We start the ssh binary, but we want it to ignore
SIGINT. In contrast, restic itself should process SIGINT and clean up
properly. Before, we used `setsid()` to give the ssh process its own
process group, but that means it cannot prompt the user for a password
because the tty is gone.

So, now we're passing in two functions that ignore SIGINT just before
the ssh process is started and re-install it after start.
2017-09-23 11:43:33 +02:00
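
A rough sketch of the mechanism the commit message describes: ignore SIGINT in restic just before ssh is started and restore normal handling right afterwards. The actual implementation differs; this only illustrates the ordering:

```go
package example

import (
	"os"
	"os/exec"
	"os/signal"
)

// startSSH temporarily ignores SIGINT in the parent while ssh is being
// started, then restores normal handling, so that a later ^C is handled by
// restic itself while ssh can still use the terminal to prompt for a
// password.
func startSSH(args ...string) (*exec.Cmd, error) {
	cmd := exec.Command("ssh", args...)
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr

	signal.Ignore(os.Interrupt)
	err := cmd.Start()
	signal.Reset(os.Interrupt)

	return cmd, err
}
```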
Harshavardhana
98369f6a5d Implement Size() and Len() to know the optimal size. 2017-09-22 12:09:17 -07:00
Alexander Neumann
9842eff887 local/sftp: Remove unneeded stat() call 2017-09-21 21:47:03 +02:00
Alexander Neumann
4c6b626db6 backend: Improve TestList 2017-09-18 13:18:42 +02:00
Alexander Neumann
835ba16c27 b2: Add pagination for List() 2017-09-18 12:13:35 +02:00
Alexander Neumann
3b6a580b32 backend: Make pagination for List configurable 2017-09-18 12:01:54 +02:00
Alexander Neumann
649c536250 backend: Improve test for pagination in list 2017-09-17 11:36:45 +02:00
Alexander Neumann
dd49e2b12d Azure: Fix List(), use pagination marker 2017-09-17 11:32:05 +02:00
Alexander Neumann
f61dab1774 backend: Add test for List() 2017-09-17 11:09:16 +02:00
Alexander Neumann
40edf00182 gs: implement pagination 2017-09-17 11:08:51 +02:00
Alexander Neumann
c35518a865 Azure/GS: Remove ReadDir() 2017-09-17 11:05:30 +02:00
Alexander Neumann
2a1633621b Ignore "not exist" errors for swift backend tests 2017-09-16 13:59:55 +02:00
Alexander Neumann
5bf2228596 local: Fix creating data dirs 2017-09-11 21:48:25 +02:00
Alexander Neumann
227b01395f local: Add test for open non-existing dir 2017-09-11 21:34:26 +02:00
Alexander Neumann
48b1ab5aaf Merge pull request #1182 from restic/fix-1167
local: do not create dirs below data/ for non-existing dir
2017-08-28 21:13:24 +02:00
Michael Pratt
9537bc561d gs: fix nil dereference
info can be nil if err != nil, resulting in a nil dereference while
logging:

$ # GCS config
$ ./restic init
debug enabled
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x935947]

goroutine 1 [running]:
github.com/restic/restic/internal/backend/gs.(*Backend).Save(0xc420012690, 0xe84e80, 0xc420010448, 0xb57149, 0x3, 0xc4203fc140, 0x40, 0xe7be40, 0xc4201d8f90, 0xa0, ...)
	src/github.com/restic/restic/internal/backend/gs/gs.go:226 +0x6d7
github.com/restic/restic/internal/repository.AddKey(0xe84e80, 0xc420010448, 0xc4202f0360, 0xc42000a1b0, 0x4, 0x0, 0xa55b60, 0xc4203043e0, 0xa55420)
	src/github.com/restic/restic/internal/repository/key.go:235 +0x4a1
github.com/restic/restic/internal/repository.createMasterKey(0xc4202f0360, 0xc42000a1b0, 0x4, 0xa55420, 0xc420304370, 0x6a6070)
	src/github.com/restic/restic/internal/repository/key.go:62 +0x60
github.com/restic/restic/internal/repository.(*Repository).init(0xc4202f0360, 0xe84e80, 0xc420010448, 0xc42000a1b0, 0x4, 0x1, 0xc42030a440, 0x40, 0x32a4573d3d9eb5, 0x0, ...)
	src/github.com/restic/restic/internal/repository/repository.go:403 +0x5d
github.com/restic/restic/internal/repository.(*Repository).Init(0xc4202f0360, 0xe84e80, 0xc420010448, 0xc42000a1b0, 0x4, 0xe84e40, 0xc42004ad80)
	src/github.com/restic/restic/internal/repository/repository.go:397 +0x12c
main.runInit(0xc420018072, 0x16, 0x0, 0x0, 0x0, 0xe84e40, 0xc42004ad80, 0xc42000a1b0, 0x4, 0xe7dac0, ...)
	src/github.com/restic/restic/cmd/restic/cmd_init.go:47 +0x2a4
main.glob..func9(0xeb5000, 0xedad70, 0x0, 0x0, 0x0, 0x0)
	src/github.com/restic/restic/cmd/restic/cmd_init.go:20 +0x8e
github.com/restic/restic/vendor/github.com/spf13/cobra.(*Command).execute(0xeb5000, 0xedad70, 0x0, 0x0, 0xeb5000, 0xedad70)
	src/github.com/restic/restic/vendor/github.com/spf13/cobra/command.go:649 +0x457
github.com/restic/restic/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xeb3e00, 0xc420011650, 0xa55b60, 0xc420011660)
	src/github.com/restic/restic/vendor/github.com/spf13/cobra/command.go:728 +0x339
github.com/restic/restic/vendor/github.com/spf13/cobra.(*Command).Execute(0xeb3e00, 0x25, 0xc4201a7eb8)
	src/github.com/restic/restic/vendor/github.com/spf13/cobra/command.go:687 +0x2b
main.main()
	src/github.com/restic/restic/cmd/restic/main.go:72 +0x268

(The error was likely because I had just enabled the GCS API. Subsequent
runs were fine.)
2017-08-27 21:36:04 -07:00
Alexander Neumann
f9a934759f sftp: Improve error handling for non-existing dir 2017-08-27 20:53:04 +02:00
Alexander Neumann
3686b1ffe5 local: Create directories below data/ if it exists 2017-08-27 20:52:58 +02:00
Alexander Neumann
ea017a49c3 local: Add test for #1167
It was discovered that restic creates directories when a non-existing
directory is specified as a local repository.
2017-08-27 20:38:46 +02:00
Alexander Neumann
b67c178672 Merge pull request #1149 from restic/azure-support
Add Azure blob storage as backend
2017-08-09 21:30:35 +02:00
Alexander Neumann
d9a5b9178e gs: Rework path initialization 2017-08-06 21:47:56 +02:00
Dipta Das
ba75a3884c Add Google Cloud Storage as backend
Environment variables:
GOOGLE_PROJECT_ID=gcp-project-id
GOOGLE_APPLICATION_CREDENTIALS=path-to-json-file

Environment variables for test:
RESTIC_TEST_GS_PROJECT_ID=gcp-project-id
RESTIC_TEST_GS_APPLICATION_CREDENTIALS=path-to-json-file
RESTIC_TEST_GS_REPOSITORY=gs:us-central1/test-bucket

Init repository:
$ restic -r gs:bucket:/[prefix] init
2017-08-06 21:47:55 +02:00
Alexander Neumann
a726c91116 azure: Rework path initialization 2017-08-06 21:47:04 +02:00
Alexander Neumann
072b7a014e azure: Use internal errors package 2017-08-06 21:47:04 +02:00
Alexander Neumann
618ce115d7 Azure: Use default HTTP transport 2017-08-06 21:47:04 +02:00
Dipta Das
3a85b6b7c6 Add Azure Blob Storage as backend
Environment variables:
AZURE_ACCOUNT_NAME=storage-account-name
AZURE_ACCOUNT_KEY=storage-account-key

Environment variables for test:
RESTIC_TEST_AZURE_ACCOUNT_NAME=storage-account-name
RESTIC_TEST_AZURE_ACCOUNT_KEY=storage-account-key
RESTIC_TEST_AZURE_REPOSITORY=azure:restic-test-container

Init repository:
$ restic -r azure:container-name:/prefix/dir init
2017-08-06 21:47:04 +02:00
Alexander Neumann
23c903074c Move restic package to internal/restic 2017-07-24 17:43:32 +02:00
Alexander Neumann
6caeff2408 Run goimports 2017-07-23 14:21:03 +02:00
Alexander Neumann
83d1a46526 Moves files 2017-07-23 14:19:13 +02:00