Users can now input a comma-separated list of namenodes when writing
the config for hdfs remotes.
This is required when you have multiple namenodes in your hdfs cluster
and cannot be certain which namenodes will be in 'standby' or 'active'
states.
This was available before but wasn't documented and didn't use the
correct rclone interfaces.
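For example (the remote name, hostnames and port below are
placeholders):

    [hdfs]
    type = hdfs
    namenode = namenode-1:8020,namenode-2:8020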
On a 404 error, b2 returns an empty body which, before this change,
caused the error handler to try to parse an empty string and give the
following DEBUG message:
Couldn't decode error response: EOF
This is confusing as it is expected in normal operations and isn't an
error.
This change reads the body of an error response first then tries to
decode it only if it isn't empty, which avoids the confusing DEBUG
message.
This also upgrades failures to read the body or to decode the JSON to
ERROR messages, as we are now certain that there should be something
to read and decode.
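A minimal Go sketch of the approach (illustrative only, not the actual
backend code; the function and variable names are made up):

    package b2sketch

    import (
        "encoding/json"
        "io"
        "log"
        "net/http"
    )

    // decodeAPIError reads the whole error body first and only tries to
    // decode it when it is non-empty, so an empty 404 body no longer
    // triggers "Couldn't decode error response: EOF".
    func decodeAPIError(resp *http.Response, apiErr any) {
        body, err := io.ReadAll(resp.Body)
        if err != nil {
            // we expected a body, so failing to read it is an ERROR
            log.Printf("ERROR: couldn't read error response body: %v", err)
            return
        }
        if len(body) == 0 {
            // an empty body (eg a plain 404) is expected - nothing to decode
            return
        }
        if err := json.Unmarshal(body, apiErr); err != nil {
            // we have a body, so failing to decode it is an ERROR
            log.Printf("ERROR: couldn't decode error response: %v", err)
        }
    }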
For some files the Windows Volume Shadow Service (VSS) advertises the
file size as X in the directory listing but returns a different number
Y on stat-ing the file. If the file is opened and read there are Y
bytes available for reading.
Existing copy tools copy Y bytes rather than X so for consistency
rclone should do the same.
This fixes the problem by stat-ing the file immediately before opening
it. This will also reduce the unnecessary occurrence of "can't copy -
source file is being updated" errors; if the file has finished
changing by the time we come to copy it then we can now copy it
successfully.
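A rough Go sketch of the stat-before-open idea (names are
illustrative, not the local backend's actual code):

    package localsketch

    import "os"

    // openForCopy re-stats the file immediately before opening it so the
    // size used for the copy reflects what can actually be read rather
    // than the stale size from the directory listing.
    func openForCopy(path string) (*os.File, int64, error) {
        fi, err := os.Stat(path) // refresh the size just before opening
        if err != nil {
            return nil, 0, err
        }
        f, err := os.Open(path)
        if err != nil {
            return nil, 0, err
        }
        return f, fi.Size(), nil
    }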
See: https://forum.rclone.org/t/consistently-getting-corrupted-on-transfer-sizes-differ-syncing-to-an-smb-share/42218/
Streaming uploads are used by rclone rcat and rclone mount
--vfs-cache-mode off.
After the multipart chunker refactor the multipart chunked streaming
upload was accidentally mixing up the first and second parts, which
caused corrupted uploads.
This was caused by a simple off-by-one error in the refactoring where
we went from 1-based to 0-based part number counting.
Fixing this revealed that the metadata wasn't being re-read for the
copied object either.
This fixes both of those issues and adds an integration test so it
won't happen again.
Fixes #7367
After the multipart chunker refactor the multipart chunked server side
copy was accidentally sending one part too many. The last part was 0
length which was rejected by b2.
This was caused by a simple off-by-one error in the refactoring where
we went from 1-based to 0-based part number counting.
Fixing this revealed that the metadata wasn't being re-read for the
copied object either.
This fixes both of those issues and adds an integration test so it
won't happen again.
See: https://forum.rclone.org/t/large-server-side-copy-in-b2-fails-due-to-bad-byte-range/42294
Before this change, if you tried to create a bucket that already
existed but was owned by someone else, rclone did not return an error.
Rclone will now return an error on providers that return the
AlreadyOwnedByYou error code, and no error when creating an existing
bucket that you own.
This introduces a new provider quirk which has been set or cleared for
as many providers as could be tested. It can be overridden with the
--s3-use-already-exists flag.
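The override can be applied on the command line, for example (remote
and bucket names are placeholders):

    # force the quirk on or off if rclone's default for your provider is wrong
    rclone mkdir remote:my-bucket --s3-use-already-exists=true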
Fixes #7351
Before this change, attempting to server side copy a google form would
give this error:
No export formats found for "application/vnd.google-apps.form"
Adding this flag allows the form to be server side copied but not
downloaded.
Fixes #6302
In v1.63 memory usage in the b2 backend was limited to `--transfers` *
`--b2-chunk-size`
However in v1.64 this was raised to `--transfers` * `--b2-chunk-size`
* `--b2-upload-concurrency`.
The default value for this was accidentally set quite high at 16,
which means that by default rclone could use up to 6.4 GB of memory!
The new default sets a more reasonable (but still high) max memory of
1.6 GB.
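Assuming the default `--transfers` of 4 and the default
`--b2-chunk-size` of 96M, that works out roughly as:

    4 transfers * 96 MiB * 16 (old default concurrency) ≈ 6.4 GB
    4 transfers * 96 MiB *  4 (new default concurrency) ≈ 1.6 GB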
Before this change, the lock was held while the upload URL was being
fetched from the server.
This meant that any other threads were blocked from getting upload
URLs unnecessarily.
It also increased the potential for deadlock.
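A rough Go sketch of the pattern (all names here are made up, not the
actual backend code): only hold the lock while touching the pool, and
fetch a new upload URL with the lock released.

    package b2sketch

    import "sync"

    type uploadURLs struct {
        mu   sync.Mutex
        pool []string
    }

    // get holds the lock only while touching the pool, never while
    // talking to the server, so other uploaders are not blocked on the
    // network round trip.
    func (u *uploadURLs) get(fetch func() (string, error)) (string, error) {
        u.mu.Lock()
        if n := len(u.pool); n > 0 {
            url := u.pool[n-1]
            u.pool = u.pool[:n-1]
            u.mu.Unlock()
            return url, nil
        }
        u.mu.Unlock()
        // fetch a new upload URL with the lock released
        return fetch()
    }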
Most useful is the addition of the file created timestamp, but there
is also a timestamp for when the file was uploaded.
This currently supports a rather minimal set of metadata items; see
PR #6359 for some thoughts about possible extensions.
In this commit:
5f938fb9ed s3: fix "Entry doesn't belong in directory" errors when using directory markers
We checked that the remote has the prefix and then changed the remote
before removing the prefix. This sometimes causes:
panic: runtime error: slice bounds out of range [56:55]
The fix is to do the modification of the remote after removing the
prefix.
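A minimal Go sketch of the corrected ordering (names are
illustrative):

    package cryptsketch

    import "strings"

    // stripThenTransform removes the prefix first and only then modifies
    // the remote. Doing the modification before the trim can change the
    // string's length, making remote[len(prefix):] slice out of range
    // and causing the panic above.
    func stripThenTransform(remote, prefix string, transform func(string) string) (string, bool) {
        if !strings.HasPrefix(remote, prefix) {
            return remote, false
        }
        remote = remote[len(prefix):]  // remove the prefix first
        return transform(remote), true // then modify the remote
    }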
See: https://forum.rclone.org/t/cryptcheck-panic-runtime-error-slice-bounds-out-of-range/41977
Before this change the b2 backend listed all the buckets to turn a
single bucket name into an ID.
However, on July 26, 2018 a parameter was added to the list buckets
API which makes listing all the buckets unnecessary.
This code sets the bucketName parameter so that only the results for
the desired bucket are returned.
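Roughly, the request now names the bucket rather than asking for all
of them, something like this (values are placeholders; see the B2
b2_list_buckets documentation for the exact request shape):

    POST /b2api/v2/b2_list_buckets
    {
        "accountId": "<account id>",
        "bucketName": "my-bucket"
    }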
The improved upload logic is active by default in uplink v1.12.0, so
the `testuplink.WithConcurrentSegmentUploadsDefaultConfig(ctx)` call
is no longer required.
See https://github.com/rclone/rclone/pull/7198
Before this change uploaded files could return the error "replication
in progress".
This error is harmless though and means the Close should be retried,
which is what this patch does.
In this commit:
65b2e378e0 drive: fix incorrect remote after Update on object
we discovered a problem with objects being uploaded to the incorrect
object name and added an integration test for it.
This test was tripped by the hdfs backend and this patch fixes the
problem.
Sometimes opendrive reports "403 Folder is already deleted" on
directories which should exist.
This might be a bug in opendrive or in rclone; however, we work around
it here sufficiently to get the tests passing.
ChangeNotify has been broken on the compress backend for a long time!
Before this change it was wrapping the file names received rather than
unwrapping them to discover the original names.
It is likely though that ChangeNotify was working adequately for
users, as the VFS just uses the directories rather than the file
names.
Before this change the concurrency used for an upload was rather
inconsistent.
- if the size is below `--backend-upload-cutoff` (default 200M), do a
  single part upload.
- if the size is below `--multi-thread-cutoff` (default 256M) or
  streaming uploads are being used (eg `rclone rcat`), do a multipart
  upload using `--backend-upload-concurrency` to set the concurrency
  used by the uploader.
- otherwise do a multipart upload using `--multi-thread-streams` to
  set the concurrency.
This change makes `--backend-upload-concurrency` the default for the
concurrency used. If `--multi-thread-streams` is set and is larger
than `--backend-upload-concurrency` then that will be used instead.
This means that if the user sets `--backend-upload-concurrency` then
it will be obeyed for all multipart/multi-thread transfers, and the
user can override them all with `--multi-thread-streams`.
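For example, with the s3 backend (remote, bucket and file names are
placeholders):

    # concurrency of 8 is used for the multipart/multi-thread transfer
    rclone copy bigfile remote:bucket --s3-upload-concurrency 8

    # --multi-thread-streams wins here because it is larger
    rclone copy bigfile remote:bucket --s3-upload-concurrency 8 --multi-thread-streams 16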
See: #7056
Before this change, b2 would return an error when opening a link
generated by `rclone link`. The following error occurs when the object
path contains an ampersand that is not percent encoded:
    {
        "code": "bad_request",
        "message": "Bad character in percent-encoded string: 38 (0x26)",
        "status": 400
    }
Before this change the box backend could produce errors like:
Error "not_found" (404): On-Behalf-Of User not found ([123 34 105 110
118 97 108 105 100 95 117 115 101 114 95 105 100 34 58 123 34 105 100
34 58 34 48 48 48 48 48 48 48 48 48 48 48 34 125 125])
This fixes it to produce this instead:
Error "not_found" (404): On-Behalf-Of User not found ({"invalid_user_id":{"id":"00000000000"}})
This implements the OpenChunkWriter interface for b2 which
enables multi-thread uploads.
This makes the memory controls of the b2 backend inoperative; they are
replaced with the global ones:
--b2-memory-pool-flush-time
--b2-memory-pool-use-mmap
By using the buffered reader this fixes excessive memory use when
uploading large files as it will share memory pages between all
readers.
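For example, a large file copy to b2 can now use multi-thread uploads
(remote, bucket and file names are placeholders):

    rclone copy ./bigfile.bin remote:bucket --multi-thread-streams 4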
This implements the OpenChunkWriter interface for azureblob which
enables multi-thread uploads.
This makes the memory controls of the azureblob backend inoperative;
they are replaced with the global ones:
--azureblob-memory-pool-flush-time
--azureblob-memory-pool-use-mmap
By using the buffered reader this fixes excessive memory use when
uploading large files as it will share memory pages between all
readers.
This makes the memory controls of the s3 backend inoperative; they are
replaced with the global ones:
--s3-memory-pool-flush-time
--s3-memory-pool-use-mmap
By using the buffered reader this fixes excessive memory use when
uploading large files as it will share memory pages between all
readers.
Fixes #7141
As an extra security feature some FTP servers (eg FileZilla) require
that the data connection re-uses the TLS session of the control
connection. This is a good thing for security.
The message "TLS session of data connection not resumed" means that it
was not done.
The problem turned out to be that rclone was re-using the TLS session
cache between concurrent connections, so the resumed TLS data
connection could have come from any of the control connections.
This patch makes each TLS connection have its own session cache which
should fix the problem.
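A minimal Go sketch of the idea (not the actual library code):

    package ftpsketch

    import "crypto/tls"

    // perConnConfig clones the base TLS config and gives the clone its
    // own session cache, so a data connection can only resume the TLS
    // session of its own control connection.
    func perConnConfig(base *tls.Config) *tls.Config {
        c := base.Clone()
        c.ClientSessionCache = tls.NewLRUClientSessionCache(0)
        return c
    }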
This also reverts the ftp library to the upstream version which now
contains all of our patches.
Fixes #7234
storj.io/uplink v1.11.0 comes with an improved logic for uploading large
files where file segments are uploaded concurrently instead of serially.
This allows rclone to fully utilize the network connection during the
entire upload process.
This change enables the new upload logic.
The Swift backend does not always respect the flag telling it to skip
HEADing zero-length objects. This commit fixes that for ls/lsl/lsf.
Swift returns zero length for dynamic large object files when they're
included in a file listing, which means that determining their size
requires HEADing each file that returns a size of zero. rclone's
--swift-no-large-objects instructs rclone that no large objects are
present and accordingly rclone should not HEAD files that return zero
length.
When rclone is performing an ls / lsf / lsl type lookup, however, it
continues to HEAD any zero length objects it encounters, even with
this flag set. Accordingly, this change causes rclone to respect the
flag in these situations.
NB: It is worth noting that this will cause rclone to incorrectly
report zero length for any dynamic large objects encountered with the
--swift-no-large-objects flag set.
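For example (remote and container names are placeholders):

    rclone lsl remote:container --swift-no-large-objects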
Before this change, rclone always expected --sftp-path-override to be
the absolute SSH path to remote:path/subpath, which effectively made
it unusable for wrapped remotes (for example, when used with a crypt
remote, the user would need to provide the full decrypted path).
After this change, the old behavior remains the default, but dynamic
paths are now also supported, if the user adds '@' as the first
character of --sftp-path-override. Rclone will ignore the '@' and
treat the rest of the string as the path to the SFTP remote's root.
Rclone will then add any relative subpaths automatically (including
unwrapping/decrypting remotes as necessary).
In other words, the path_override config parameter can now be used to
specify the difference between the SSH and SFTP paths. Once specified
in the config, it is no longer necessary to re-specify for each
command.
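For example, with a crypt remote wrapping an sftp remote (remote
names, host and paths are all placeholders), the '@' marks the value
as the SSH path to the SFTP remote's root and rclone adds the
decrypted subpaths itself:

    [mysftp]
    type = sftp
    host = example.com
    path_override = @/volume1/home/user

    [mycrypt]
    type = crypt
    remote = mysftp:encrypted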
See: https://forum.rclone.org/t/sftp-path-override-breaks-on-wrapped-remotes/40025