Fix https://github.com/rclone/rclone/issues/7103
Before this change the RegExp validating the endpoint URL was a bit
too strict, allowing only /dav/files/USER due to chunking limitations.
This patch adds back support for /dav/files/USER/dir/subdir etc.
Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
This reverts commit 9065e921c1.
It turns out the problem for the failing fs/sync tests was the
policies being different for search and create which meant that the
file was being created in one union branch but a different one was
found in another branch.
The API seems to have changed and the `totalFileCount` item no longer
tracks the number of files in the directory, so it is useless for seeing
if the directory is empty.
This patch fixes the problem by seeing whether there are any files or
directories in the folder instead.
This problem was detected by the integration tests.
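A rough sketch of the new check (listFolder is a hypothetical stand-in
for the backend's listing call):

    // Sketch only: decide emptiness from the actual entries rather than
    // trusting a count field such as totalFileCount.
    func folderIsEmpty(ctx context.Context, folderID int64) (bool, error) {
        files, dirs, err := listFolder(ctx, folderID) // hypothetical helper
        if err != nil {
            return false, err
        }
        return len(files) == 0 && len(dirs) == 0, nil
    }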
For some unknown reason the API sometimes returns a "Name already
exist" error on a server side copy:
    {
        "error_id": null,
        "error_message": "Name already exist",
        "error_type": "NAME_ALREADY_EXIST",
        "error_uri": "http://api.put.io/v2/docs",
        "extra": {},
        "status": "ERROR",
        "status_code": 400
    }
This patch uploads to a temporary name then renames it which works
around the problem.
This was spotted by the integration tests.
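A rough sketch of the workaround (tempName, putObject and renameObject
are hypothetical stand-ins for the backend's real calls):

    // Sketch only: do the transfer under a temporary name, then rename it
    // into place so the API never rejects the final name as existing.
    tmp := tempName(leaf)
    obj, err := putObject(ctx, in, parentID, tmp)
    if err != nil {
        return nil, err
    }
    return renameObject(ctx, obj, leaf)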
The integration tests spotted that modification times are no longer
being preserved by the putio API in server side move and copy.
This patch explicitly sets the modtime after the server side move or
copy.
In this commit we enabled PartialUploads for the union backend:
3faa84b47c combine,compress,crypt,hasher,union: support wrapping backends with PartialUploads
This turns out to cause test failures in fs/sync so this commit
disables them again pending further investigation.
At some point the sharefile API changed to require the size of the
file in the initial transaction which makes the streaming upload fail
with this error:
    upload failed: file size does not match (-2)
This was discovered by the integration tests.
In this commit we discovered a problem with objects being uploaded to
the incorrect object name and added an integration test for the
problem:
65b2e378e0 drive: fix incorrect remote after Update on object
This test was tripped by the putio backend and this patch fixes the
problem.
Before this patch the Update method had a 50/50 chance of returning
the old object rather than the new updated object.
This was discovered in the integration tests.
This patch fixes the problem by deleting the duplicate object before
we look for the new object.
In this commit we discovered a problem with objects being uploaded to
the incorrect object name and added an integration test for the
problem:
65b2e378e0 drive: fix incorrect remote after Update on object
This test was tripped by the Storj backend and this patch fixes the
problem.
Storj has a rate limit of 1 per second when uploading to the same
file.
This was being tripped by the integration tests.
This patch fixes it by detecting the error and sleeping for 1 second
before retrying.
See: https://github.com/storj/uplink/issues/149
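A rough sketch of the retry (isRateLimited is a hypothetical check for
the uplink rate limit error):

    // Sketch only: back off for a second when the per-file rate limit is
    // hit, then ask the caller to retry the upload.
    func shouldRetry(err error) (bool, error) {
        if err != nil && isRateLimited(err) {
            time.Sleep(time.Second)
            return true, err
        }
        return false, err
    }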
Before this change a server side copy did not preserve the modtime.
This used to work on nextcloud but at some point it started ignoring
the `X-Oc-Mtime` header.
This patch sets the modtime explicitly after a server side copy if the
`X-Oc-Mtime` header wasn't accepted.
This problem was discovered in the integration tests.
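A sketch of the fallback, with copyObject and mtimeAccepted standing in
for the real COPY call and header check:

    // Sketch only: if the server did not accept the X-Oc-Mtime header sent
    // with the COPY, set the modification time explicitly afterwards.
    dst, resp, err := f.copyObject(ctx, src, remote)
    if err != nil {
        return nil, err
    }
    if !mtimeAccepted(resp) { // e.g. no "X-Oc-Mtime: accepted" in the reply
        if err := dst.SetModTime(ctx, src.ModTime(ctx)); err != nil {
            return nil, err
        }
    }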
Before this change we were incorrectly identifying the root directory
of the listing and adding it into the listing.
This caused higher layers of rclone to emit the error above.
See #7038
Before this change we were incorrectly identifying the root directory
of the listing and adding it into the listing.
This caused higher layers of rclone to emit the error above.
See #7038
Before this change we were incorrectly identifying the root directory
of the listing and adding it into the listing.
This caused higher layers of rclone to emit the error above.
Fixes #7038
This commit
3567a47258 fs: make ConfigString properly reverse suffixed file systems
made fs.ConfigString() return the full config of the backend. Because
mount was using this to make a volume name, it started to make volume
names with illegal characters in them, which couldn't be mounted by
macOS.
This fixes the problem by making a separate fs.ConfigStringFull() and
using that where appropriate and leaving the original
fs.ConfigString() function untouched.
Fixes #7063
See: https://forum.rclone.org/t/1-63-beta-fails-to-mount-on-macos-with-on-the-fly-crypt-remote/39090
Zoho has started returning the results from Range: requests with a 200
response code rather than the technically correct 206 status code.
Before this change this triggered workaround code to deal with Zoho
not obeying Range: requests properly.
This fix checks the response for a Content-Range: header and, if
present, assumes it is a valid reply to the Range: request despite the
status being 200.
This problem was spotted by the integration tests.
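A sketch of the check (illustrative, not the backend's exact code):

    // Sketch only: a 200 reply that still carries a Content-Range header
    // is treated as a valid answer to our Range: request, so the
    // workaround for ignored ranges is skipped.
    if resp.StatusCode == http.StatusOK && resp.Header.Get("Content-Range") != "" {
        return resp.Body, nil
    }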
Before this fix, if the upload failed for some reason the yandex
backend would attempt to retry the upload itself, which would fail
immediately with a 400 Bad Request.
Normally we retry uploads at a higher level so they can be done with
new data and this patch does that.
See #7044
Before this change the Errors type in the union backend produced
errors which could not be Unwrapped to test their type.
This adds the (go1.20) Unwrap method to the Errors type which allows
errors.Is to work on these errors.
It also adds unit tests for the Errors type and fixes a couple of
minor bugs thrown up in the process.
See: https://forum.rclone.org/t/failed-to-set-modification-time-1-error-pcloud-cant-set-modified-time/38596
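A minimal sketch of the shape of the change: the multi-error type gains
a go1.20-style Unwrap returning []error, which errors.Is and errors.As
know how to traverse.

    // Sketch only.
    type Errors []error

    func (e Errors) Error() string {
        s := make([]string, len(e))
        for i, err := range e {
            s[i] = err.Error()
        }
        return strings.Join(s, "; ")
    }

    // Unwrap lets errors.Is / errors.As inspect the wrapped errors, e.g.
    // one returned by a single upstream branch.
    func (e Errors) Unwrap() []error { return e }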
Set this automatically for any backend which implements UnWrap and
manually for combine and union which can't implement UnWrap but do
overlay other backends.
Before this change, the Pikpak backend would always download
the first media item whenever possible, regardless of whether
or not it was the original contents.
Now we check the validity of a media link using the `fid`
parameter in the link URL.
Fixes #6992
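A sketch of the validity check (names are illustrative):

    // Sketch only: a media link is only used if its "fid" query parameter
    // matches the file it is supposed to serve.
    func linkMatchesFile(link, fileID string) bool {
        u, err := url.Parse(link)
        if err != nil {
            return false
        }
        return u.Query().Get("fid") == fileID
    }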
Before this change the pikpak backend changed the global
--multi-thread-streams flag which wasn't desirable.
Now the machinery is in place to use the NoMultiThreading feature flag
instead.
Fixes #6915
Some servers return a 501 error when using MLST on a non-existing
directory. This patch allows it.
I don't think this is correct usage according to the RFC, but the RFC
doesn't explicitly state which error code should be returned for
file/directory not found.
Before this fix a blank line in the MLST output from the FTP server
would cause the "unsupported LIST line" error.
This fixes the problem in the upstream fork.
Fixes #6879
Implement a PartialUploads feature flag to mark backends for which
uploads are not atomic.
This is set for the following backends
- local
- ftp
- sftp
See #3770
Before this change rclone used a normal SFTP rename if present to
implement Move.
However the normal SFTP rename won't overwrite existing files.
This fixes it to either use the POSIX rename extension
("posix-rename@openssh.com") or to delete the source first before
renaming using the normal SFTP rename.
This isn't normally a problem as rclone always removes any existing
objects first; however, to implement non --inplace operations we do
need to overwrite an existing file.
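A sketch of the new behaviour using github.com/pkg/sftp (the client
library the backend is built on); detection of the
"posix-rename@openssh.com" extension is simplified here:

    // Sketch only: prefer the POSIX rename extension, which overwrites
    // the destination; otherwise do a best-effort delete of the
    // destination and fall back to the plain SFTP rename.
    func moveFile(c *sftp.Client, oldName, newName string, havePosixRename bool) error {
        if havePosixRename {
            return c.PosixRename(oldName, newName)
        }
        _ = c.Remove(newName) // destination may not exist
        return c.Rename(oldName, newName)
    }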
This also produces a warning when rclone detects files have been
blocked because of virus content:
    server reports this file is infected with a virus - use --onedrive-av-override to download anyway
Fixes #557
Before this change, when Object.Update was called in the drive
backend, it overwrote the remote with that of the object info.
This is incorrect - the remote doesn't change on Update and this patch
fixes that and introduces a new test to make sure it is correct for
all backends.
This was noticed when doing Update of objects in a nested combine
backend.
See: https://forum.rclone.org/t/rclone-runtime-goroutine-stack-exceeds-1000000000-byte-limit/37912
Before this change if the storage class wasn't set on the object, we
didn't set the "tier" metadata.
This made it impossible to filter on tier using the metadata filters.
This returns the "tier" metadata as STANDARD if the storage class
isn't set on the object.
See: https://forum.rclone.org/t/copy-from-s3-to-another-s3-filter-by-storage-class/37861
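A tiny sketch of the fallback (field and map names are assumptions):

    // Sketch only: report STANDARD when no storage class is set so that
    // metadata filters on "tier" can still match.
    tier := o.storageClass
    if tier == "" {
        tier = "STANDARD"
    }
    metadata["tier"] = tier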
- Report correct feature flag
- Fix test failures due to that
- Don't output the root directory marker
- Don't create the directory marker if it is the bucket or root
- Create directories when uploading files
- Report correct feature flag
- Fix test failures due to that
- Don't output the root directory marker
- Don't create the directory marker if it is the bucket or root
- Create directories when uploading files
Before this change, drive would mistakenly identify a folder with a
trailing slash as a file when passed to NewObject.
This was picked up by the integration tests.
An earlier patch
d5afcf9e34 crypt: try not to return "unexpected EOF" error
introduced a bug for 0 length files which this change fixes. The bug
only manifests if the io.Reader returns data together with EOF, which
not all readers do.
This was failing in the integration tests.
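The case being handled, sketched generically rather than as crypt's
actual code: an io.Reader may return n > 0 together with io.EOF, and a
0 length file may return (0, io.EOF) straight away, so the data has to
be consumed before acting on the error.

    // Sketch only: read loop that copes with (n > 0, io.EOF) in one call.
    for {
        n, err := r.Read(buf)
        if n > 0 {
            if werr := consume(buf[:n]); werr != nil { // hypothetical sink
                return werr
            }
        }
        if err == io.EOF {
            return nil
        }
        if err != nil {
            return err
        }
    }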
Before this change using /path/to/file.rclonelink would not find the
file when using -l/--links.
This fixes the problem by doing another stat call if the file wasn't
found without the suffix if -l/--links is in use.
It will also give an error if you refer to a symlink without its
suffix, which will not work because the single file filter will be
using the file name without the .rclonelink suffix:
    need ".rclonelink" suffix to refer to symlink when using -l/--links
Before this change it would use the symlink as a directory, which would
then fail when listed.
See: #6855
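A sketch of the extra stat call, with translateLinks standing in for
the -l/--links setting:

    // Sketch only: if the path with the ".rclonelink" suffix does not
    // exist and link translation is on, try again with the suffix removed
    // so the underlying symlink is found.
    fi, err := os.Lstat(localPath)
    if os.IsNotExist(err) && translateLinks && strings.HasSuffix(localPath, ".rclonelink") {
        fi, err = os.Lstat(strings.TrimSuffix(localPath, ".rclonelink"))
    }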
golang.org/x/oauth2/jws is deprecated: this package is not intended for public use and
might be removed in the future. It exists for internal use only. Please switch to another
JWS package or copy this package into your own source tree.
github.com/golang-jwt/jwt/v4 seems to be a good alternative, and was already
an implicit dependency.
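For illustration, signing a token with github.com/golang-jwt/jwt/v4
looks roughly like this (claims and key handling are placeholders, not
the actual code):

    // Sketch only.
    claims := jwt.MapClaims{
        "iss": "service-account@example.com",
        "aud": "https://example.com/oauth2/token",
        "iat": time.Now().Unix(),
        "exp": time.Now().Add(time.Hour).Unix(),
    }
    token := jwt.NewWithClaims(jwt.SigningMethodRS256, claims)
    signed, err := token.SignedString(privateKey) // *rsa.PrivateKey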
This changes crypt's use of sync.Pool: Instead of storing slices
it now stores pointers to fixed-size arrays.
This issue was reported by staticcheck:
SA6002 - Storing non-pointer values in sync.Pool allocates memory
A sync.Pool is used to avoid unnecessary allocations and reduce
the amount of work the garbage collector has to do.
When passing a value that is not a pointer to a function that accepts
an interface, the value needs to be placed on the heap, which means
an additional allocation. Slices are a common thing to put in sync.Pools,
and they're structs with 3 fields (length, capacity, and a pointer to
an array). In order to avoid the extra allocation, one should store
a pointer to the slice instead.
See: https://staticcheck.io/docs/checks#SA6002
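The pattern, sketched outside crypt's real buffer handling:

    // Sketch only: keep *[N]byte in the pool rather than []byte, so the
    // interface conversion in Put/Get does not allocate (SA6002).
    const bufSize = 1024 * 1024 // illustrative size, not crypt's real one

    var bufPool = sync.Pool{
        New: func() interface{} { return new([bufSize]byte) },
    }

    func getBuf() *[bufSize]byte  { return bufPool.Get().(*[bufSize]byte) }
    func putBuf(b *[bufSize]byte) { bufPool.Put(b) }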
This change provides the ability to pass `env_auth` as a parameter to
the drive provider. This enables the provider to pull IAM
credentials from the environment or instance metadata. Previously if no
auth method was given it would default to requesting oauth.
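Conceptually this is the Application Default Credentials flow; a rough
sketch using golang.org/x/oauth2/google, not the backend's actual
option plumbing:

    // Sketch only: pick up credentials from the environment or instance
    // metadata instead of running the interactive oauth flow.
    creds, err := google.FindDefaultCredentials(ctx, "https://www.googleapis.com/auth/drive")
    if err != nil {
        return nil, err
    }
    client := oauth2.NewClient(ctx, creds.TokenSource) // used for drive API calls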