Commit graph

4423 commits

Robert-André Mauchin
e2e400e63c Use proper import path go.etcd.io/bbolt
Signed-off-by: Robert-André Mauchin <zebob.m@gmail.com>
2020-03-03 12:40:52 +00:00
Nick Craig-Wood
4d8d1e287b googlephotos: fix "concurrent map write" error - fixes #4003
This adds a bit of missed locking around the uploaded info to fix the
concurrent map write.

All the other accesses have locking - this one must have been missed.
2020-03-02 18:12:46 +00:00
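
A minimal sketch of the locking pattern this fix describes, with hypothetical names (not rclone's actual googlephotos types): every access to the shared map must take the same mutex.

```
package main

import "sync"

// uploadedCache records uploaded file info; Go maps are not safe for
// concurrent writes, so every access must hold the mutex.
type uploadedCache struct {
	mu       sync.Mutex
	uploaded map[string]bool
}

func (c *uploadedCache) add(name string) {
	c.mu.Lock()
	defer c.mu.Unlock() // a missed lock here is what causes "concurrent map write"
	c.uploaded[name] = true
}

func (c *uploadedCache) seen(name string) bool {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.uploaded[name]
}

func main() {
	c := &uploadedCache{uploaded: map[string]bool{}}
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() { defer wg.Done(); c.add("photo.jpg") }()
	}
	wg.Wait()
	_ = c.seen("photo.jpg")
}
```
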
Nick Craig-Wood
452fdbf1c1 Add Franklyn Tackitt to contributors 2020-03-02 17:31:23 +00:00
Nick Craig-Wood
51686bd1ef Add Shing Kit Chan to contributors 2020-03-02 17:31:23 +00:00
Gary Kim
38a4d50e73 rcd: Add Prometheus metrics support - fixes #3858
Signed-off-by: Gary Kim <gary@garykim.dev>
2020-03-01 09:58:34 +00:00
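
For context, exposing metrics with prometheus/client_golang generally looks like the sketch below. The metric name and port are assumptions for illustration, not rcd's actual wiring (rcd serves on --rc-addr, default localhost:5572).

```
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

var requests = prometheus.NewCounter(prometheus.CounterOpts{
	Name: "myapp_requests_total", // hypothetical metric name
	Help: "Total number of requests handled.",
})

func main() {
	prometheus.MustRegister(requests)
	// Expose all registered metrics on a /metrics endpoint.
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":5572", nil))
}
```
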
Gary Kim
3fd38cbe8d vendor: add github.com/prometheus/client_golang/prometheus
Signed-off-by: Gary Kim <gary@garykim.dev>
2020-03-01 09:58:34 +00:00
Franklyn Tackitt
2b3d13a841 fs: Use --cutoff-mode hard,soft,cautious instead of 3 --max-transfer-mode flags
Fixes #2672
2020-03-01 09:49:55 +00:00
Shing Kit Chan
6f1766dd9e fs: Add support for --max-transfer-cutoff modes #2672
This also adds a max transfer cutoff check for server-side copies.
2020-03-01 09:49:55 +00:00
Nick Craig-Wood
7d70eb0346 ftp: attempt to work-around pureftpd sending spurious 150 messages
pureftpd has a bug where it sends messages like this

```
    150-Accepted data connection\r\n
        Response code: File status okay; about to open data connection (150)
        Response arg: Accepted data connection
    150 32768.0 kbytes to download\r\n
    150 0.014 seconds (measured here), 1665.27 Mbytes per second\r\n
```

The last `150` is treated as a new response - the previous `150` should have been `150-`.

This means that rclone sees the `150 0.014 seconds (measured here),
1665.27 Mbytes per second` as a reply to the next message and reports
it as an error.

This fix ignores that specific message when it is received in the
`Close` method, and discards the FTP connection afterwards as it is
out of sync.

See: #3984
Fixes #3445
2020-03-01 09:17:51 +00:00
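
A sketch of the work-around's shape, with a hypothetical matcher (the real detection in rclone's `Close` may differ):

```
package main

import (
	"fmt"
	"strings"
)

// isSpurious150 reports whether an FTP reply seen in Close looks like
// pureftpd's bogus transfer-statistics line, which should have been a
// "150-" continuation rather than a new "150" response. A sketch only.
func isSpurious150(code int, msg string) bool {
	return code == 150 && strings.Contains(msg, "seconds (measured here)")
}

func main() {
	fmt.Println(isSpurious150(150, "0.014 seconds (measured here), 1665.27 Mbytes per second"))
	// After ignoring the message the connection is out of sync, so the
	// caller should discard it rather than reuse it.
}
```
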
Nick Craig-Wood
bae2644667 Add Yves G to contributors 2020-03-01 09:17:51 +00:00
Valeriy.Vyrva
f6f95822c1 doc: fix links in generated documentation 2020-03-01 09:14:37 +00:00
Yves G
b1b5e09081 vfs: make df output more consistent on a rclone mount.
When two of the three values vfs:{free,used,total} are known, compute the third
2020-03-01 08:54:07 +00:00
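
The computation is simple arithmetic; a sketch using -1 to mean "unknown" (an assumption for illustration, not necessarily the VFS's convention):

```
package main

import "fmt"

// fillInMissing computes whichever of total, used, free is unknown (-1)
// when the other two are known, mirroring the df consistency fix above.
func fillInMissing(total, used, free int64) (int64, int64, int64) {
	switch {
	case total < 0 && used >= 0 && free >= 0:
		total = used + free
	case used < 0 && total >= 0 && free >= 0:
		used = total - free
	case free < 0 && total >= 0 && used >= 0:
		free = total - used
	}
	return total, used, free
}

func main() {
	fmt.Println(fillInMissing(-1, 100, 900)) // total inferred as 1000
}
```
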
Nick Craig-Wood
2b268f9724 build: fixup formatting after go1.14 go fmt changes 2020-02-28 16:58:33 +00:00
Nick Craig-Wood
7a5a74cecb crypt: clarify that directory_name_encryption depends on filename_encryption
See: https://forum.rclone.org/t/directory-name-encryption-is-set-to-always-false-when-choosing-filename-encryption-off/14600
2020-02-28 16:26:45 +00:00
Nick Craig-Wood
54a0c6b8ad Add valery1707 to contributors 2020-02-28 16:26:45 +00:00
valery1707
1ad23c4dc8 mailru: Describe 2FA requirements (#4015)
Fair enough
2020-02-28 16:54:09 +03:00
Dan Walters
7586a345ff dlna: cds: use modification time as date in dlna metadata
We haven't been outputting anything for this until now, which leads to my
Samsung showing an epoch/1970 date for all files.
2020-02-27 18:05:18 +01:00
Nick Craig-Wood
393b94bb70 vfs: add --vfs-read-wait and --vfs-write-wait flags
--vfs-read-wait duration    Time to wait for in-sequence read before seeking. (default 5ms)
--vfs-write-wait duration   Time to wait for in-sequence write before giving error. (default 1s)

See: https://forum.rclone.org/t/constantly-high-iowait-add-log/14156
2020-02-27 16:12:33 +00:00
Nick Craig-Wood
e3c11c9ca1 mount: add --async-read flag to disable asynchronous reads
See: https://forum.rclone.org/t/constantly-high-iowait-add-log/14156
2020-02-27 16:12:33 +00:00
Nick Craig-Wood
3c91abce74 vfs: fix race condition caused by unlocked reading of Dir.path 2020-02-27 15:50:41 +00:00
Nick Craig-Wood
87d856d71b cache: disable race tests until bbolt is fixed
bbolt fails with "unsafe pointer conversion" under the go1.14 race
detector.

Disable race tests until https://github.com/etcd-io/bbolt/issues/187
is fixed.
2020-02-27 08:05:28 +00:00
Nick Craig-Wood
3855c003ce build: update to use go1.14 for the build 2020-02-26 21:26:47 +00:00
Nick Craig-Wood
abb9f89f65 vendor: update all dependencies 2020-02-26 21:26:46 +00:00
Nick Craig-Wood
17b4058ee9 mount: constrain to go1.13 or above otherwise bazil.org/fuse fails to compile 2020-02-26 21:26:46 +00:00
Nick Craig-Wood
9663f9b2ab mount: ignore --allow-root flag with a warning as it has been removed upstream
For background see: https://github.com/bazil/fuse/issues/144
2020-02-26 21:11:25 +00:00
Nick Craig-Wood
d6e10dba33 docs: fix confusion over processor names in download table
See: https://forum.rclone.org/t/intel-processor-download-help/14558
2020-02-26 16:39:46 +00:00
Nick Craig-Wood
da5cbc194a ftp: fix lockup on Close failures when using concurrency limit #3984
Before this change if rclone failed to close a file download for some
reason it would leak a concurrency token. When all the tokens were
leaked then rclone would lock up.

This fix returns the concurrency token regardless of the error status.
2020-02-25 14:38:12 +00:00
Nick Craig-Wood
e8eb658ba5 ftp: fix lockup on failed upload when using concurrency limit #3984
Before this change if rclone failed to upload a file for some reason
it would leak a concurrency token. When all the tokens were leaked
then rclone would lock up.

The fix returns the concurrency token regardless of the error state.
2020-02-25 14:38:12 +00:00
Nick Craig-Wood
28f69f25a0 ftp: fix lockup when using concurrency limit on failed connections #3984
Before this change if rclone failed to make an FTP connection for some
reason it would leak a concurrency token. When all the tokens were
leaked then rclone would lock up.

The fix returns the concurrency token if creating the FTP connection
returns an error.
2020-02-25 14:38:12 +00:00
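
All three FTP fixes above share one pattern: a token taken from the concurrency limiter must be returned on every path, including error paths, or the limiter eventually starves and the program locks up. A generic sketch:

```
package main

import (
	"errors"
	"fmt"
)

// tokens limits concurrency; every acquire must be paired with a
// release, even when the operation fails.
var tokens = make(chan struct{}, 4)

func withToken(op func() error) error {
	tokens <- struct{}{}        // acquire
	defer func() { <-tokens }() // release on all paths, error or not
	return op()
}

func main() {
	for i := 0; i < 100; i++ {
		_ = withToken(func() error { return errors.New("connection failed") })
	}
	fmt.Println("no lockup: tokens were returned despite errors")
}
```
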
Nick Craig-Wood
07e4b9bb7f operations: fix multithread copy test to use the correct modify window
In bde0334bd8 "operations: fix setting the timestamp on Windows
for multithread copy" the test for multithread copy failed to take
into account the modify window of the remote under test.
2020-02-25 13:30:35 +00:00
Aleksandar Jankovic
708b967f15 backend/s3: fix multipart abort context
S3 couldn't abort a multipart upload when the context was canceled,
because the canceled context prevented the abort request from being sent.
2020-02-25 12:11:32 +01:00
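
The pattern behind this fix is general: cleanup that must run after cancellation cannot be sent with the already-canceled context; it needs a fresh one. A sketch (abortUpload is a stand-in, not the SDK call):

```
package main

import (
	"context"
	"fmt"
)

// abortUpload stands in for the S3 AbortMultipartUpload call; like any
// context-aware request, it fails immediately on a canceled context.
func abortUpload(ctx context.Context) error { return ctx.Err() }

func upload(ctx context.Context) error {
	if err := ctx.Err(); err != nil {
		// The request context is canceled, so the abort must be sent
		// with a fresh context or it will never reach the server.
		return abortUpload(context.Background())
	}
	return nil
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	cancel()
	fmt.Println(upload(ctx)) // abort succeeds: <nil>
}
```
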
Dan Walters
7e2568a312 dlna: cds: don't specify childCount at all when unknown
Basically, solving #3541 with a different approach: bringing in the
upstream upnpav module, and changing ChildCount from int to *int to
avoid childCount="0" in the XML output when that value is simply
unknown.

The current approach leads to some recursion issues, and according to
the DLNA spec it shouldn't be necessary anyway.
2020-02-25 08:41:00 +01:00
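
In Go's encoding/xml, a nil pointer attribute marked omitempty is dropped entirely, which is what makes *int suitable for an unknown childCount. A sketch:

```
package main

import (
	"encoding/xml"
	"fmt"
)

type Container struct {
	XMLName    xml.Name `xml:"container"`
	ChildCount *int     `xml:"childCount,attr,omitempty"` // nil => attribute omitted
}

func main() {
	unknown, _ := xml.Marshal(Container{})
	n := 3
	known, _ := xml.Marshal(Container{ChildCount: &n})
	fmt.Println(string(unknown)) // <container></container>
	fmt.Println(string(known))   // <container childCount="3"></container>
}
```
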
Nick Craig-Wood
bde0334bd8 operations: fix setting the timestamp on Windows for multithread copy
Before this fix we attempted to set the modification time on the file
while it was open. This works fine on Linux but not on Windows. The
test was also incorrect, checking the source file rather than the
destination file.

This closes the file before setting the modification time and fixes
the tests.

Fixes #3994
2020-02-24 17:30:09 +00:00
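
The fix reduces to ordering: close the handle first, then set the timestamp with os.Chtimes. A sketch with a hypothetical file name:

```
package main

import (
	"log"
	"os"
	"time"
)

func main() {
	f, err := os.Create("dest.bin") // hypothetical destination file
	if err != nil {
		log.Fatal(err)
	}
	// ... write the data ...

	// Close first: on Windows, setting the mtime on a still-open file
	// does not behave as it does on Linux.
	if err := f.Close(); err != nil {
		log.Fatal(err)
	}
	mtime := time.Now().Add(-time.Hour)
	if err := os.Chtimes("dest.bin", mtime, mtime); err != nil {
		log.Fatal(err)
	}
}
```
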
Aleksandar Janković
5470d34740 backend/s3: use low-level-retries as the number of SDK retries
Amazon S3 is built to handle different kinds of workloads. In rare
cases where S3 is not able to scale, users will face status 500
errors. The main mechanism for handling these errors is retries, and
the number of retries needed varies between use cases.

This change makes retries for the s3 backend configurable via the
--low-level-retries option.
2020-02-24 16:43:44 +01:00
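
With aws-sdk-go v1, the retry count is carried in aws.Config.MaxRetries; a sketch of passing the option through (the region value is an assumption for the sketch):

```
package main

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

// newS3Client wires an rclone-style --low-level-retries value through
// as the SDK's built-in retry count instead of a hard-coded default.
func newS3Client(lowLevelRetries int) *s3.S3 {
	sess := session.Must(session.NewSession(&aws.Config{
		Region:     aws.String("us-east-1"), // assumed region
		MaxRetries: aws.Int(lowLevelRetries),
	}))
	return s3.New(sess)
}

func main() {
	_ = newS3Client(10) // e.g. the default --low-level-retries value
}
```
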
Maciej Zimnoch
ac9cb50fdb backend/s3: use memory pool for buffer allocations
Currently each multipart upload allocated his own buffers, which after
file upload was garbaged. Next files couldn't leverage already allocated
memory which resulted in inefficent memory management. This change
introduces backend memory pool keeping memory chunks which can be
used during object operations.

Fixes #3967
2020-02-24 13:32:32 +01:00
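
rclone grew its own pool package for this; the core idea is the same as the standard library's sync.Pool, sketched here with an assumed chunk size:

```
package main

import (
	"fmt"
	"sync"
)

const chunkSize = 5 * 1024 * 1024 // assumed multipart chunk size, 5 MiB

// bufPool hands out reusable upload buffers so consecutive file
// uploads don't each allocate (and then garbage-collect) their own.
var bufPool = sync.Pool{
	New: func() interface{} { return make([]byte, chunkSize) },
}

func uploadPart(data []byte) { /* ... send to backend ... */ }

func main() {
	buf := bufPool.Get().([]byte)
	uploadPart(buf)
	bufPool.Put(buf) // return the chunk for the next upload
	fmt.Println("buffer reused, len =", chunkSize)
}
```
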
buengese
4a8b548add jottacloud: use RawURLEncoding when decoding base64 encoded login token - fixes #3945 2020-02-22 23:12:56 +01:00
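
The distinction: such login tokens are unpadded URL-safe base64, which base64.StdEncoding rejects; base64.RawURLEncoding decodes them. A sketch with a hypothetical payload:

```
package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	// Login tokens of this kind are unpadded URL-safe base64.
	token := base64.RawURLEncoding.EncodeToString([]byte(`{"user":"bob"}`)) // hypothetical payload
	if _, err := base64.StdEncoding.DecodeString(token); err != nil {
		fmt.Println("StdEncoding fails:", err) // unpadded input => error
	}
	out, _ := base64.RawURLEncoding.DecodeString(token)
	fmt.Println("RawURLEncoding:", string(out))
}
```
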
Lars Lehtonen
481c8a40ea backend/azureblob: fmt nit 2020-02-20 15:50:53 +01:00
Lars Lehtonen
25ef3a281b backend/azureblob: remove unused Object.parseTimeString() 2020-02-20 15:50:53 +01:00
Lars Lehtonen
219bd97e8a backend/qingstor: lint fix 2020-02-14 18:11:01 +00:00
Lars Lehtonen
8b14cd24aa backend/qingstor: prune multiUploader.list() 2020-02-14 18:11:01 +00:00
Nick Craig-Wood
3893c14889 operations: make rcat obey --ignore-checksum 2020-02-14 12:47:11 +00:00
Nick Craig-Wood
c41fbc0f90 operations: move Rcat tests back to main test file now S3 prob is fixed 2020-02-14 12:26:52 +00:00
Nick Craig-Wood
f45425e5a9 operations: factor CommonHash out of Copy for re-use elsewhere 2020-02-14 12:12:10 +00:00
Nick Craig-Wood
bd9fd629bc test: add TestS3MinioEdge to test leading edge minio too #3934 2020-02-13 11:01:06 +00:00
Nick Craig-Wood
3b19f48929 testserver: add provider to TestS3Minio #3934
This is necessary to pass the TestIntegration/FsMkdir/FsEncoding
listing tests.
2020-02-13 11:01:06 +00:00
Outvi V
4996edc030 oauthutil: make instructions for rclone authorize more robust
This appends "--" to the rclone authorize command line before client_id,
in case the client_id or client_secret starts with "-" (OneDrive's does),
which would otherwise break argument parsing.
2020-02-13 10:18:23 +00:00
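
Go's flag package, like most getopt-style parsers, stops option parsing at "--", so arguments after it are positional even if they begin with "-". A sketch:

```
package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	// Simulate: prog -- -AbCdEf s3cret  (hypothetical client_id/secret)
	os.Args = []string{"prog", "--", "-AbCdEf", "s3cret"}
	flag.Parse()
	// Without the "--", "-AbCdEf" would be rejected as an unknown flag;
	// with it, both values arrive as positional arguments.
	fmt.Println(flag.Args()) // [-AbCdEf s3cret]
}
```
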
Michał Matczuk
964f1f6a7e fs/accounting: Restore "Max number of stats groups reached" log line
Changed log level to debug.
2020-02-12 21:21:25 +00:00
Michał Matczuk
e75c1f70bb backend/s3: Added 500 as retryErrorCode
The error code 500 Internal Error indicates that Amazon S3 is unable to handle the request at that time. The error code 503 Slow Down typically indicates that the requests to the S3 bucket are very high, exceeding the request rates described in Request Rate and Performance Guidelines.

Because Amazon S3 is a distributed service, a very small percentage of 5xx errors are expected during normal use of the service. All requests that return 5xx errors from Amazon S3 can and should be retried, so we recommend that applications making requests to Amazon S3 have a fault-tolerance mechanism to recover from these errors.

https://aws.amazon.com/premiumsupport/knowledge-center/http-5xx-errors-s3/
2020-02-12 11:43:18 +00:00
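
A sketch of classifying 500 alongside 503 as retryable (rclone's actual retryErrorCodes list in the s3 backend may differ):

```
package main

import (
	"fmt"
	"net/http"
)

// retryErrorCodes mirrors the idea above: 5xx responses from S3 are
// transient and should be retried.
var retryErrorCodes = map[int]bool{
	http.StatusInternalServerError: true, // 500, added by this change
	http.StatusServiceUnavailable:  true, // 503 Slow Down
}

func shouldRetry(statusCode int) bool {
	return retryErrorCodes[statusCode]
}

func main() {
	fmt.Println(shouldRetry(500), shouldRetry(503), shouldRetry(404)) // true true false
}
```
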
Michał Matczuk
19a4d74ee7 backend/s3: Fail fast multipart upload
When a part upload request fails, an error is returned and gCtx is cancelled.
This does not prevent other parts from being tried.
They immediately fail due to the canceled context, but are retried by rclone anyway...

Example AWS debug output

```
-----------------------------------------------------
2020/02/11 14:12:17 DEBUG: Retrying Request s3/UploadPart, attempt 4
2020/02/11 14:12:17 DEBUG: Request s3/UploadPart Details:
---[ REQUEST POST-SIGN ]-----------------------------
PUT /backuptest-rclone/huge/file.db?partNumber=11&uploadId=190939b4-3c43-4b98-ac11-92303e3f11b0 HTTP/1.1
Host: 192.168.100.99:9000
User-Agent: aws-sdk-go/1.23.8 (go1.13.1; linux; amd64)
Content-Length: 5242880
Authorization: AWS4-HMAC-SHA256 Credential=miniouser/20200211/us-east-1/s3/aws4_request, SignedHeaders=content-length;content-md5;expect;host;x-amz-content-sha256;x-amz-date, Signature=3fc03a01f651cec09b05290459e9ceb26db9a8aa00c4e1b16e8cf5617eb81da8
Content-Md5: XzY+DlipXwbL6bvGYsXftg==
Expect: 100-Continue
X-Amz-Content-Sha256: c036cbb7553a909f8b8877d4461924307f27ecb66cff928eeeafd569c3887e29
X-Amz-Date: 20200211T131217Z
Accept-Encoding: gzip

-----------------------------------------------------
http://192.168.100.99:9000/backuptest-rclone/huge/file.db?partNumber=11&uploadId=190939b4-3c43-4b98-ac11-92303e3f11b0
2020/02/11 14:12:17 DEBUG: Response s3/UploadPart Details:
---[ RESPONSE ]--------------------------------------
HTTP/1.1 500 InternalServerError
Content-Length: 0
-----------------------------------------------------
UploadPartWithContext() error InternalError: We encountered an internal error. Please try again
	status code: 500, request id: , host id:

2020/02/11 14:12:18 DEBUG ERROR: Request s3/UploadPart:
---[ REQUEST DUMP ERROR ]-----------------------------
context canceled
------------------------------------------------------
UploadPartWithContext() error RequestCanceled: request context canceled
caused by: context canceled
2020/02/11 14:12:20 DEBUG ERROR: Request s3/UploadPart:
---[ REQUEST DUMP ERROR ]-----------------------------
context canceled
------------------------------------------------------
UploadPartWithContext() error RequestCanceled: request context canceled
caused by: context canceled
2020/02/11 14:12:22 DEBUG ERROR: Request s3/UploadPart:
---[ REQUEST DUMP ERROR ]-----------------------------
context canceled
------------------------------------------------------
UploadPartWithContext() error RequestCanceled: request context canceled
caused by: context canceled
```

This adds fail-fast behaviour when the context has been cancelled.
2020-02-12 11:40:34 +00:00
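
The fail-fast behaviour amounts to consulting the group context before launching each part, so no further parts are scheduled once one has failed. A sketch with errgroup (the part count and failure are contrived):

```
package main

import (
	"context"
	"errors"
	"fmt"

	"golang.org/x/sync/errgroup"
)

func uploadPart(ctx context.Context, n int) error {
	if n == 3 {
		return errors.New("part 3: 500 InternalServerError")
	}
	return nil
}

func main() {
	g, gCtx := errgroup.WithContext(context.Background())
	for part := 1; part <= 10; part++ {
		part := part
		// Fail fast: once any part has failed, gCtx is canceled and we
		// stop scheduling further parts instead of retrying dead ones.
		if gCtx.Err() != nil {
			break
		}
		g.Go(func() error { return uploadPart(gCtx, part) })
	}
	fmt.Println(g.Wait())
}
```
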
Nick Craig-Wood
55b5eded23 Add Frederick Zhang to contributors 2020-02-12 11:33:56 +00:00