Before this change, rclone checked whether the file existed by looking for it
without its extension, which meant it never did.
This change checks that the destination file exists first, before removing
the extension.
Google Drive appears to no longer be copying the modification time of
Google Docs.
Setting the mod time immediately after the copy doesn't work either,
so this patch copies the object, waits for 1 second and then sets the
modtime.
Fixes #4517
This was only working for files in the root directory and wasn't
looking at the encoding.
This is fixed by using NewObject, which takes both things into account,
and by making the share by ID instead of by path.
This problem was spotted by the integration tests.
Before this change we errored out if one upstream errored in Purge or
About.
This change checks for fs.ErrorDirNotFound and skips that backend in
this case.
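A minimal sketch of that behaviour, iterating the upstreams one by one; the helper and the purge callback are illustrative, not the union backend's actual code:

```
package unionsketch

import (
	"context"
	"errors"

	"github.com/rclone/rclone/fs"
)

// purgeUpstreams skips any upstream that reports fs.ErrorDirNotFound instead
// of failing the whole operation, as described above.
func purgeUpstreams(ctx context.Context, upstreams []fs.Fs, purge func(ctx context.Context, f fs.Fs) error) error {
	for _, u := range upstreams {
		err := purge(ctx, u)
		if errors.Is(err, fs.ErrorDirNotFound) {
			continue // this upstream doesn't have the directory - skip it
		}
		if err != nil {
			return err
		}
	}
	return nil
}
```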
1. adds SharedOptions data structure to oauthutil
2. adds config.ConfigToken option to oauthutil.SharedOptions
3. updates the backends that have oauth functionality
Fixes #2849
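A sketch of the registration pattern the updated backends follow; the backend name and the extra option here are made up, but prepending oauthutil.SharedOptions is how the shared OAuth options (including the config token option) become available:

```
package example

import (
	"github.com/rclone/rclone/fs"
	"github.com/rclone/rclone/lib/oauthutil"
)

func init() {
	fs.Register(&fs.RegInfo{
		Name:        "example",
		Description: "Illustrative OAuth backend",
		// NewFs: NewFs, // the backend's constructor would go here
		Options: append(oauthutil.SharedOptions, []fs.Option{{
			Name: "extra_option",
			Help: "A backend specific option.",
		}}...),
	})
}
```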
After uploading a multipart object, rclone deletes any unused parts.
Probably as part of the listing unification, the detection of the
parts belonging to the current upload was failing, so calling Update
was deleting the parts for the current object.
This change fixes the detection so that all the old parts are deleted
but none of the new ones.
Fixes #4075
Prior to this change rclone cached the looked up root_folder_id in
the root_folder_id config variable.
This caused a lot of confusion and a few attempts at workarounds,
and ultimately was a mistake.
This change stops rclone caching anything in root_folder_id and
returns that variable to being entirely user controlled.
It gives a little hint in the debug that rclone could be sped up
slightly by setting it, but it is up to the user to think about
whether that would be OK or not.
Google drive root '': root_folder_id = "XXX" - save this in the config to speed up startup
It does not change root_folder_id itself, leaving this to the user.
See: https://forum.rclone.org/t/rclone-gdrive-no-longer-returning-anything/17215
- add a directory parameter to the optional Purge interface
- fix up all the backends
- add an additional integration test to test for the feature
- use the new feature in operations.Purge
Many of the backends had been prepared in advance for this so the
change was trivial for them.
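The reworked optional interface looks roughly like this (a sketch; rclone's real definition lives in the fs package):

```
package fs // sketch only - the real interface is defined in rclone's fs package

import "context"

// Purger is the optional interface for fast recursive delete. After this
// change it is given the directory to purge rather than always operating on
// the root of the remote.
type Purger interface {
	// Purge deletes all the files and directories in dir, including dir itself.
	Purge(ctx context.Context, dir string) error
}
```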
If this option is enabled, rclone will not set the modtime of uploaded files
and the backend will return ModTimeNotSupported as its Precision.
Normally rclone updates the modification time of files after they have
finished uploading. This can cause permissions issues on Linux platforms when
rclone is copying to a CIFS mount where the user rclone is running as does
not own the uploaded file. If this option is enabled, rclone will no longer
update the modtime after copying a file.
See: https://forum.rclone.org/t/chtimes-error-on-local-mounted-copy/17784
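A minimal sketch of the behaviour, assuming a boolean wired through from the new option (this is not rclone's actual code):

```
package localsketch

import (
	"os"
	"time"
)

// setModTime skips the chtimes call that can fail on CIFS mounts when the
// "don't set modtime" option is enabled.
func setModTime(path string, modTime time.Time, noSetModTime bool) error {
	if noSetModTime {
		return nil // leave the uploaded file's timestamps alone
	}
	return os.Chtimes(path, modTime, modTime)
}
```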
This implements `rclone cleanup` to remove multipart uploads over 24
hours old. It also implements the backend commands
`list-multipart-uploads` to see which ones are available and `cleanup`
to delete them with a configurable expiry interval.
See #4302
Before this fix, if an object had ID set and download_url was in use,
downloading the object would give this error:
failed to open for download: bucket example_bucket does not have file: /b2api/v1/b2_download_file_by_id (404 not_found)
After this fix we only download by ID if download_url is not set.
See: https://forum.rclone.org/t/correct-format-for-rclone-b2-download-url-variable/15498
When we run MKCOL on 4shared on a directory that already exists, this
returns a 409/Conflict error. However this error code usually means
that the intermediate collections need creating.
The actual error code to return when trying to create a directory that
already exists isn't specified in the RFC, only that an error MUST be
returned and there are already 3 statuses checked in the code.
However using 409 makes rclone's usual strategy for making directories
fail and return the 409 error.
This patch tries the MKCOL and, if it returns an unrecognised error
code, calls PROPFIND on the directory to discover whether the
directory really exists or not.
This should also cover other WebDAV servers returning other error
messages we haven't accounted for in the code yet.
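In the abstract the strategy looks like this (a sketch; the callbacks stand in for the real MKCOL and PROPFIND calls):

```
package webdavsketch

// mkdirWithCheck tries MKCOL and, if it fails with something unrecognised,
// asks the server via PROPFIND whether the directory already exists before
// treating the MKCOL failure as an error.
func mkdirWithCheck(mkcol func() error, dirExists func() (bool, error)) error {
	err := mkcol()
	if err == nil {
		return nil
	}
	if exists, checkErr := dirExists(); checkErr == nil && exists {
		return nil // e.g. 4shared's 409/Conflict for an existing directory
	}
	return err
}
```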
Before this change the cache backend contained its own routines for
mounting and testing on that mount.
These tests are never run on the CI and cause a maintenance burden.
This commit removes the tests.
Prior to this fix, if Region was not set and Endpoint was not set,
then we set the endpoint to "https://s3.amazonaws.com/".
This is unnecessary because if the Region alone isn't set then we set
it to "us-east-1" which has the same endpoint.
Having the endpoint set breaks the bucket region auto detection with
the error "Failed to update region for bucket: can't set region to
"xxx" as endpoint is set".
This fix removes that check.
At some point Purge stopped deleting directory markers. We don't have
an integration test for this so it went unnoticed.
This patch fixes the problem but doesn't introduce an integration test
as we don't have a framework for making directory markers yet.
Before this change, large objects which had had their contents deleted
would return "Object not found" and break the listing.
This change makes these objects appear as 0 sized entities so they can
be listed and deleted.
pCloud appears to have opened up a new region and they are returning
the hostname in the oauth callback, thus
GET /?code=XXX&locationid=1&hostname=api.pcloud.com&state=XXX HTTP/1.1
GET /?code=XXX&locationid=2&hostname=eapi.pcloud.com&state=XXX HTTP/1.1
This isn't documented yet, however pCloud have confirmed that this is
the correct interpretation.
Rclone now reads the "hostname" parameter in the oauth callback and
stores it in the config file. It uses it for all subsequent API calls.
Prior to this change a dangling shortcut would cause the directory
listing to return an error.
This patch makes dangling shortcuts appear as 0 sized objects in the
directory listing so they can be deleted. These objects can't be read
though.
For some objects the onedrive backend has been doing a server side
copy and a delete when a server side move would have worked OK.
This was caused by not detecting the home drive correctly (when it was
an empty string) and assuming that these transfers were cross drive.
This is fixed by canonicalizing drive IDs before comparing them.
Currently credentials are required to download a public bucket file
which is not really necessary and makes automated usage more complex.
Add a new option "anonymous" which when enabled configures the gcs
backend to use an anonymous HTTP client. This of course only works
for read access and trying to write will lead to errors like that:
"googleapi: Error 401: Anonymous caller does not not have
storage.objects.create access to the Google Cloud Storage object.",
as expected. By default the anonymous access option is disabled so that
the GCS Application Default Credentials are still used by default as
before and an error is given if they can't be found.
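One way to build such a client with the Google API library (a sketch, not necessarily how rclone wires it up):

```
package gcssketch

import (
	"context"
	"net/http"

	"google.golang.org/api/option"
	storage "google.golang.org/api/storage/v1"
)

// newAnonymousService builds a GCS API client backed by a plain,
// unauthenticated HTTP client - enough for reading public buckets, while
// writes fail with 401 errors as described above.
func newAnonymousService(ctx context.Context) (*storage.Service, error) {
	return storage.NewService(ctx, option.WithHTTPClient(&http.Client{}))
}
```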
Before this change rclone used the relative path from the current
working directory.
It appears that WS FTP doesn't like this and the openssh sftp tool
also uses absolute paths which is a good reason for switching to
absolute paths.
This change reads the current working directory at startup and bases
all file requests from there.
See: https://forum.rclone.org/t/sftp-ssh-fx-failure-directory-not-found/17436
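A sketch of resolving the configured root against the working directory reported by the server at connection time, using github.com/pkg/sftp (the library rclone's sftp backend is built on); the helper itself is illustrative:

```
package sftpsketch

import (
	"path"

	"github.com/pkg/sftp"
)

// absoluteRoot turns a relative remote root into an absolute one by asking
// the server for its current working directory once, at startup.
func absoluteRoot(c *sftp.Client, root string) (string, error) {
	if path.IsAbs(root) {
		return root, nil
	}
	cwd, err := c.Getwd()
	if err != nil {
		return "", err
	}
	return path.Join(cwd, root), nil
}
```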
Before this change the --local-no-check-updated flag would not error if the
files changed in size during the transfer. The file could still be
read beyond the advertised size, though, which caused problems with
certain backends.
After this change we attempt to provide a consistent view of the file
once it has been opened.
Once the file has had stat() called on it for the first time we:
- Only transfer the size that stat gave
- Only checksum the size that stat gave
- Don't update the stat info for the file
This means that files that are extending can be transferred - rclone
will transfer the length it saw the first time it listed the file.
See: https://forum.rclone.org/t/transport-connection-broken/16494/21
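The "only transfer the size that stat gave" part might look roughly like this (a sketch with illustrative names, not the local backend's actual code):

```
package localsketch

import (
	"io"
	"os"
)

// openAtStatSize opens the file and caps reads at the size recorded when the
// file was first statted, so a file that is still growing is transferred at
// the length rclone originally listed.
func openAtStatSize(name string) (io.ReadCloser, int64, error) {
	fi, err := os.Stat(name)
	if err != nil {
		return nil, 0, err
	}
	f, err := os.Open(name)
	if err != nil {
		return nil, 0, err
	}
	rc := struct {
		io.Reader
		io.Closer
	}{io.LimitReader(f, fi.Size()), f}
	return rc, fi.Size(), nil
}
```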
In this commit
5c5ad6220 drive: fix --drive-impersonate with cached root_folder_id
We disabled the use of root_folder_id with --drive-impersonate to fix
a problem with a cached root_folder_id giving the wrong results.
This, alas, broke one user's setup with a root_folder_id of
appDataFolder. Since this is identifiable and definitely couldn't have
been cached, we can safely skip this check in this case.
See: https://forum.rclone.org/t/rclone-gdrive-no-longer-returning-anything/17215/10
Before this change there was lots of duplicated code in all the
dircache-using backends to support DirMove.
This change factors this code into the dircache library.
Dircache was changed to:
- Remove special cases for the root directory
- Remove Fatal errors
- Call FindRoot on behalf of the user wherever possible
- Bring up to modern Go standards
Backends were changed to:
- Remove calls to FindRoot
- Change calls to FindRootAndPath to FindPath
- Don't make special cases for the root
This fixes several corner cases, for example removing a non-existent
directory if FindRoot hasn't been called.
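After the refactor a backend's lookup typically collapses to a single call (a sketch assuming the FindPath signature described above; the wrapper name is made up):

```
package dircachesketch

import (
	"context"

	"github.com/rclone/rclone/lib/dircache"
)

// findParent shows the post-refactor calling convention: no explicit FindRoot
// call any more - FindPath resolves the root on the caller's behalf and
// returns the leaf name plus the ID of the parent directory.
func findParent(ctx context.Context, dc *dircache.DirCache, remote string, create bool) (leaf, parentID string, err error) {
	return dc.FindPath(ctx, remote, create)
}
```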
Before this fix rclone would continually try to delete non-empty
segment containers which made deleting lots of files very slow.
This fix makes rclone just try the delete once and then carry on which
was the original intent of the code before the retry logic got put in.
Before this change if the server sent us XML like this
```
<D:propstat>
<D:prop>
<g0:quota-available-bytes/>
<g0:quota-used-bytes/>
</D:prop>
<D:status>HTTP/1.1 404 Not Found</D:status>
</D:propstat>
```
Rclone would read the empty XML items as containing 0.
After this fix we make sure that we have a value before using it.
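A minimal sketch of the guard, assuming the quota values arrive as strings from the XML decoder (names are illustrative, not rclone's webdav code):

```
package webdavsketch

import (
	"strconv"
	"strings"
)

// parseQuota returns the quota value and whether it was actually present.
// An empty element, as in the 404 propstat above, is treated as "no value"
// rather than as zero bytes.
func parseQuota(s string) (int64, bool) {
	s = strings.TrimSpace(s)
	if s == "" {
		return 0, false
	}
	n, err := strconv.ParseInt(s, 10, 64)
	if err != nil {
		return 0, false
	}
	return n, true
}
```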
Before this fix rclone v1.51 and 1.52 would incorrectly use the cached
root_folder_id when the --drive-impersonate flag was in use. This
meant that rclone could be looking up the wrong directory ID with
unpredictable results - usually all files apparently being missing.
This fix makes rclone look up the root_folder_id always when using
--drive-impersonate. It does this by clearing the root_folder_id and
logging a NOTICE message that it is ignoring the cached value.
It also stops rclone caching the root_folder_id when using
--drive-impersonate.
See: https://forum.rclone.org/t/rclone-gdrive-no-longer-returning-anything/17215
Adding the expires parameter gives settings_error/not_authorized/.. errors.
The expires setting isn't in the documentation so this commit removes
it for now.
For SSH authentication, `key_pem` should both override `key_file`
and not require other SSH authentication methods to be set.
Prior to this fix, rclone would attempt to use an ssh-agent
when `key_pem` was the only SSH authentication method set.
Fixes #4240
Before this change we were setting the headers on the PUT
request for normal and multipart uploads. For normal uploads this caused the error
403 Forbidden: There were headers present in the request which were not signed
After this fix we set the headers in the object upload request itself
as the s3 SDK expects.
This means that we only support a limited range of headers:
- Cache-Control
- Content-Disposition
- Content-Encoding
- Content-Language
- Content-Type
- X-Amz-Tagging
- X-Amz-Meta-
Note that the last of these is for setting custom metadata in the form
"X-Amz-Meta-Key: value".
This now works for both multipart and single part uploads.
See also #59
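A sketch of where those headers land in the SDK's upload input (field names are from aws-sdk-go's s3manager.UploadInput; the helper and the values are illustrative, not rclone's actual mapping code):

```
package s3sketch

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

// exampleUploadInput shows the supported headers set on the object upload
// request itself rather than on the raw HTTP request.
func exampleUploadInput(bucket, key string) *s3manager.UploadInput {
	return &s3manager.UploadInput{
		Bucket:             aws.String(bucket),
		Key:                aws.String(key),
		CacheControl:       aws.String("max-age=3600"), // Cache-Control
		ContentDisposition: aws.String("attachment"),   // Content-Disposition
		ContentEncoding:    aws.String("gzip"),         // Content-Encoding
		ContentLanguage:    aws.String("en"),           // Content-Language
		ContentType:        aws.String("text/plain"),   // Content-Type
		Tagging:            aws.String("project=demo"), // X-Amz-Tagging
		Metadata: map[string]*string{ // becomes X-Amz-Meta-Key: value
			"Key": aws.String("value"),
		},
	}
}
```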
This provides two things:
* It gives Storj insight into which uplink clients are using the
network.
* It facilitates rclone's participation in the Tardigrade Open Source
Partner Program https://tardigrade.io/partner/
* s3: add `max_upload_parts` support
This allows configuring the maximum number of chunks used to upload a file:
- Support Scaleway, which currently has a limit of 1k chunks
- Reduce cost on S3, where each request costs money, at the expense of memory used
Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
This adds expire and unlink parameters to the PublicLink interface.
This fixes up the affected backends and removes unlink parameters
where they are present.
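The reworked interface looks roughly like this (a sketch; the real definition, including rclone's own Duration type, lives in the fs package):

```
package fs // sketch only - rclone's fs package defines the real versions

import (
	"context"
	"time"
)

// Duration stands in for rclone's fs.Duration in this sketch.
type Duration time.Duration

// PublicLinker is the optional interface for generating public links, now
// carrying an expiry for the link and an unlink flag to remove an existing one.
type PublicLinker interface {
	PublicLink(ctx context.Context, remote string, expire Duration, unlink bool) (string, error)
}
```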
This factors copy out of SetModTime and Copy so it can be called from
both places.
This also reworks all the multipart uploading to use sync/errgroup and
memory pooling like the other backends. This makes it more memory
efficient and handles errors better.
See: https://forum.rclone.org/t/copying-files-within-a-b2-bucket/16680/10
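A minimal sketch of that multipart pattern: parts uploaded concurrently under an errgroup with buffers recycled through a pool (the function and the upload callback are illustrative, not the b2 backend's code):

```
package b2sketch

import (
	"context"
	"sync"

	"golang.org/x/sync/errgroup"
)

// bufPool recycles part-sized buffers between uploads to keep memory use flat.
var bufPool = sync.Pool{New: func() interface{} { return make([]byte, 4<<20) }}

// uploadParts runs the per-part uploads concurrently; the first error cancels
// the context shared by the remaining parts and is returned from Wait.
func uploadParts(ctx context.Context, parts int, upload func(ctx context.Context, part int, buf []byte) error) error {
	g, gCtx := errgroup.WithContext(ctx)
	for part := 0; part < parts; part++ {
		part := part // capture the loop variable for the goroutine
		g.Go(func() error {
			buf := bufPool.Get().([]byte)
			defer bufPool.Put(buf)
			return upload(gCtx, part, buf)
		})
	}
	return g.Wait()
}
```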
Before this change, attempting to upload a single file into an s3
bucket which did not have create permission gave an AccessDenied: Access
Denied error when it tried to create the bucket.
This was masked until e2bf91452a was
fixed.
This fix marks the bucket as OK if a fetch on an object indicates it
is OK. This stops rclone thinking it has to create the bucket in the
first place.
Fixes #4297
This is caused by a bug in Google Drive where, in some circumstances
querying for "(A in parents) or (B in parents)" returns nothing
whereas querying for "A in parents" and "B in parents" separately
works fine.
This has been reported here:
https://issuetracker.google.com/issues/149522397
This workaround detects this condition by seeing if a listing for more
than one directory at once returns nothing.
If it does then it retries each one individually.
This can potentially have a false positive if the user has multiple
empty directories which are queried at once. The consequence of this
will be that ListR is disabled for a while until the directories are
found to be actually empty in which case it will be re-enabled.
Fixes #3114 and Fixes #4289
This reverts commit 9e4b68a364.
This does not work as intended - it only changes docs files and to
make it change drive files would take an extra roundtrip.
I think the semantics of server side copy are now correct - additional
features should be added with a new flag.
See #4230
When wrapping a backend that supports Server Side Copy (e.g. `b2`, `s3`)
and configuring the `tmp_upload_path` option, the `cache` backend would
erroneously report that Server Side Copy/Move was not supported, causing
operations such as file moves to fail. This change fixes this issue
under these circumstances such that Server Side Copy will now be used
when the wrapped backend supports it.
Fixes #3206
Before this change we exited the SetModTime call early, which meant we
skipped reading the info about the file.
This change reads info about the file in the SetModTime call even if
we are skipping setting the modtime.
See: https://forum.rclone.org/t/sftp-and-set-modtime-false-error/16362