Minor wording change to the help for the explicit and implicit FTPS
flags, making them more consistent with each other. Change 'request' to
'requests' because only one 'client' is mentioned.
As reported in
https://github.com/rclone/rclone/issues/4660#issuecomment-705502792
After switching to a password callback function, if the ssh connection
aborts and needs to be reconnected then the user is re-prompted for their
password. Instead we now remember the password they entered and just give
that back. We do lose the ability for them to correct mistakes, but that's
the situation from before switching to callbacks. We keep the benefit
of not asking for passwords until the SSH connection succeeds (with the
right known_hosts entry, for example).
This required a small refactor of how `f := &Fs{}` was built, so that we
can store the saved password in the Fs object.
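A minimal sketch of the pattern, assuming a hypothetical `savedPassword`
field and using `golang.org/x/term` for the prompt (not rclone's exact
code):

```go
package sftp

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
	"golang.org/x/term"
)

// Fs stands in for the sftp backend's Fs; savedPassword remembers what
// the user typed so that reconnections don't re-prompt.
type Fs struct {
	savedPassword string
}

// passwordCallback prompts on first use and replays the remembered
// password on any subsequent connection attempt.
func (f *Fs) passwordCallback() (string, error) {
	if f.savedPassword != "" {
		return f.savedPassword, nil
	}
	fmt.Fprint(os.Stderr, "Enter SFTP password: ")
	pw, err := term.ReadPassword(int(os.Stdin.Fd()))
	fmt.Fprintln(os.Stderr)
	if err != nil {
		return "", err
	}
	f.savedPassword = string(pw)
	return f.savedPassword, nil
}

// authMethods wires the callback into the ssh client config.
func (f *Fs) authMethods() []ssh.AuthMethod {
	return []ssh.AuthMethod{ssh.PasswordCallback(f.passwordCallback)}
}
```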
Before this change rclone returned the size from the Stat call of the
link. On Windows this always reads as 0, whereas on unix it reads as
the length of the text in the link. This caused errors like this when
syncing:
    Failed to copy: corrupted on transfer: sizes differ 0 vs 13
This change causes Windows platforms to read the link and use the
length of its target as the size instead of 0, which fixes the problem.
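A simplified sketch of the approach (error handling around the link
translation omitted):

```go
package local

import (
	"os"
	"runtime"
)

// linkSize returns the size to report for a symlink. Windows Lstat
// reports 0 for links, so read the target and use its length; on unix
// Lstat already reports the length of the target text.
func linkSize(path string, info os.FileInfo) (int64, error) {
	if runtime.GOOS != "windows" {
		return info.Size(), nil
	}
	target, err := os.Readlink(path)
	if err != nil {
		return 0, err
	}
	return int64(len(target)), nil
}
```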
Based on Issue 4087
https://github.com/rclone/rclone/issues/4087
The current behaviour is insecure. If the user specifies this value
then we switch to validating the server's host key and so can detect
server changes or MITM-type attacks.
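A sketch of the idea using `golang.org/x/crypto/ssh/knownhosts` (the
option plumbing is illustrative):

```go
package sftp

import (
	"golang.org/x/crypto/ssh"
	"golang.org/x/crypto/ssh/knownhosts"
)

// hostKeyCallback validates the server against a known_hosts file when
// one is configured, otherwise keeps the old insecure behaviour of
// accepting any host key.
func hostKeyCallback(knownHostsFile string) (ssh.HostKeyCallback, error) {
	if knownHostsFile == "" {
		return ssh.InsecureIgnoreHostKey(), nil
	}
	// Fails the connection if the key doesn't match known_hosts,
	// detecting server changes or MITM-type attacks.
	return knownhosts.New(knownHostsFile)
}
```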
This allows files to be copied by ID from google drive. These can be
copied to any rclone remote and if the remote is a google drive then
server side copy will be attempted.
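Usage is along the lines of `rclone backend copyid drive: <ID> <path>`
(see the drive backend docs for the exact form).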
Fixes #3625
This type of error is unlikely to be resolved by a retry, and is
triggered in #2296 by files with a timestamp before the Unix epoch.
The maximum value for --s3-copy-cutoff should be 5 GiB as tested
with AWS S3.
However b2 have implemented this as 5 GB rather than 5 GiB, so having
the default at 5 GiB makes the b2 S3-compatible server side copy of a
large file fail by default.
This patch sets the default to 4768 MiB which is slightly less than
5 GB.
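As a check: 4768 MiB = 4768 × 2^20 bytes = 4,999,610,368 bytes, just
under b2's 5 GB (5,000,000,000 byte) limit.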
This should have very little effect on anything.
In future rclone can lower this limit more if Copy can be multithreaded.
See: https://forum.rclone.org/t/copying-files-within-a-b2-bucket/16680/76
Before this change the s3 multipart server side copy was not
preserving the metadata of the object. This was most noticeable
because the modtime was not preserved.
This change fetches the metadata from the source object before
starting the copy and sets it on the destination, overriding it where
required.
It also means any other metadata is preserved.
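A sketch of the idea with aws-sdk-go (not the exact rclone code):

```go
package s3

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/s3"
)

// startCopyWithMetadata reads the source object's metadata and sets it
// on the multipart copy, since multipart copies don't carry metadata
// over automatically.
func startCopyWithMetadata(svc *s3.S3, srcBucket, srcKey, dstBucket, dstKey string) (*s3.CreateMultipartUploadOutput, error) {
	head, err := svc.HeadObject(&s3.HeadObjectInput{
		Bucket: aws.String(srcBucket),
		Key:    aws.String(srcKey),
	})
	if err != nil {
		return nil, err
	}
	return svc.CreateMultipartUpload(&s3.CreateMultipartUploadInput{
		Bucket:      aws.String(dstBucket),
		Key:         aws.String(dstKey),
		Metadata:    head.Metadata, // includes e.g. the modtime key
		ContentType: head.ContentType,
	})
}
```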
See: https://forum.rclone.org/t/copying-files-within-a-b2-bucket/16680/70
Before this change, when the above backends created a new backend they
didn't put it into the backend cache.
This meant that rc commands acting on those backends did not work.
This was fixed by making sure the backends use the backend cache.
See: https://forum.rclone.org/t/rclone-rc-backend-command-not-working-as-expected/18834
Before this change writing with the all policy deadlocked while
uploading.
This change fixes the problem by fixing the multi reader, closing the
pipes at the correct time with the correct error. This is factored
into a new function as it was used twice.
This patch also adds a new test which tests the 'all' policies.
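The multi reader pattern, roughly (a sketch, not the refactored
function itself):

```go
package union

import "io"

// fanOut duplicates src to n readers via pipes. The single copier
// goroutine closes every pipe exactly once with the error from the
// copy (a nil error reads as io.EOF), so no uploader blocks forever.
func fanOut(src io.Reader, n int) []io.Reader {
	readers := make([]io.Reader, n)
	writers := make([]io.Writer, n)
	closers := make([]*io.PipeWriter, n)
	for i := range readers {
		pr, pw := io.Pipe()
		readers[i], writers[i], closers[i] = pr, pw, pw
	}
	go func() {
		_, err := io.Copy(io.MultiWriter(writers...), src)
		for _, pw := range closers {
			pw.CloseWithError(err) // nil err behaves like Close
		}
	}()
	return readers
}
```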
Before this fix we were reading the hash from the upload using the
key "ETag". However Go's net/http normalises header keys into the
canonical form "Etag", so we were in fact always reading an empty
string.
This bug was introduced in
aeea4430d5 swift: efficiency: slim Object and reduce requests on upload
It was spotted by the integration tests.
The fix was just to use the canonical form "Etag" instead of "ETag".
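The canonicalisation can be seen directly:

```go
package main

import (
	"fmt"
	"net/http"
	"net/textproto"
)

func main() {
	fmt.Println(textproto.CanonicalMIMEHeaderKey("ETag")) // "Etag"

	h := http.Header{}
	h.Set("ETag", `"d41d8cd98f00b204e9800998ecf8427e"`)
	fmt.Println(h["Etag"]) // found - keys are stored canonicalised
	fmt.Println(h["ETag"]) // empty - direct map access misses, the bug here
}
```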
In this commit
a2afa9aadd fs: Add directory to optional Purge interface
We failed to encrypt the directory name so the Purge failed.
This was spotted by the integration tests.
In this commit:
cbf3d43561 drive: fix missing items when listing using --fast-list / ListR
We introduced a bug where under specific circumstances it could cause
a "panic: send on closed channel".
This was caused by:
- rclone engaging the workaround from the commit above
- one of the listing routines returning an error
- this caused the `in` channel to be closed to stop the readers
- however the workaround was recycling stuff into the `in` channel at the time
- hence the panic on closed channel
This fix factors out the sending to the `in` channel into `sendJob`
and calls this both from the master go routine and the list
runners. `sendJob` detects the `in` channel being closed properly and
also deals correctly with contention on the `in` channel.
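The shape of the fix, roughly (a sketch; the real code differs in
detail): instead of sending straight into `in`, which panics if `in`
has been closed, `sendJob` selects against an abort signal:

```go
package drive

import "errors"

type listJob struct{ dir string }

type listR struct {
	in   chan listJob
	quit chan struct{} // closed to abort, instead of closing in under senders
}

var errListAborted = errors.New("listing aborted")

// sendJob queues a job without risking a send on a closed channel and
// copes with contention on in.
func (l *listR) sendJob(job listJob) error {
	select {
	case <-l.quit:
		return errListAborted
	case l.in <- job:
		return nil
	}
}
```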
Fixes #4511
When using `rclone authorize` the hostname doesn't get set in the
config file.
This commit allows it to be set in the configurator and gives the user
a hint that it needs setting.
This reverts part of
151f03378f s3: fix upload of single files into buckets without create permission
This erroneously assumed that a HEAD request on a non-existent object
would return "NotFound" only if the bucket was found. In fact it also
returns "NotFound" when the bucket isn't found.
This will break the fix for #4297 - however that can be made to work
using the new --s3-assume-bucket-exists flag.
Before this change, rclone was looking for the file without the
extension to see if it existed, which meant that the check never
succeeded.
This change checks that the destination file exists first, before
removing the extension.
Google drive appears to no longer be copying the modification time of
google docs.
Setting the mod time immediately after the copy doesn't work either,
so this patch copies the object, waits for 1 second and then sets the
modtime.
Fixes #4517
This was only working for files in the root directory and wasn't
looking at the encoding.
This is fixed by using NewObject, which takes both things into
account, and by making the share by ID instead of by path.
This problem was spotted by the integration tests.
Before this change we errored out if one upstream errored in Purge or
About.
This change checks for fs.ErrorDirNotFound and skips that backend in
this case.
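The check amounts to (sketch):

```go
package union

import (
	"errors"

	"github.com/rclone/rclone/fs"
)

// skipUpstream reports whether an upstream's error just means the
// directory doesn't exist there, in which case that backend is skipped
// rather than failing the whole Purge or About.
func skipUpstream(err error) bool {
	return errors.Is(err, fs.ErrorDirNotFound)
}
```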
1. adds a SharedOptions data structure to oauthutil (see the sketch below)
2. adds the config.ConfigToken option to oauthutil.SharedOptions
3. updates the backends that have oauth functionality to use it
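Roughly (a sketch; the exact fields differ):

```go
package oauthutil

import (
	"github.com/rclone/rclone/fs"
	"github.com/rclone/rclone/fs/config"
)

// SharedOptions are the config options common to every oauth backend;
// each backend appends them to its own option list instead of
// redefining them.
var SharedOptions = []fs.Option{{
	Name: config.ConfigToken,
	Help: "OAuth Access Token as a JSON blob.",
}}
```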
Fixes #2849
After uploading a multipart object, rclone deletes any unused parts.
Probably as part of the listing unification, the detection of the
parts belonging to the current upload was failing, so calling Update
was deleting the parts for the current object.
This change fixes the detection and deletes all the old parts but none
of the new ones now.
Fixes #4075
Previous to this change rclone cached the looked up root_folder_id in
the root_folder_id config variable.
This has caused a lot of confusion and a few attempts at workarounds
and ultimately was a mistake.
This change stops rclone attempting to cache anything in
root_folder_id and returns that variable to being entirely user
modified.
It gives a little hint in the debug that rclone could be sped up
slightly by setting it, but it is up to the user to think about
whether that would be OK or not.
Google drive root '': root_folder_id = "XXX" - save this in the config to speed up startup
It does not change root_folder_id itself, leaving this to the user.
See: https://forum.rclone.org/t/rclone-gdrive-no-longer-returning-anything/17215
- add a directory parameter to the optional Purge interface
- fix up all the backends
- add an additional integration test to test for the feature
- use the new feature in operations.Purge
Many of the backends had been prepared in advance for this so the
change was trivial for them.
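The optional interface now looks roughly like this:

```go
package fs

import "context"

// Purger is an optional interface for backends that can delete a
// directory and all its contents more efficiently than deleting each
// object individually.
type Purger interface {
	// Purge all files in the directory specified
	Purge(ctx context.Context, dir string) error
}
```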
If this option is enabled, rclone will not set the modtime of uploaded
files and the backend will return ModTimeNotSupported as its Precision.
Normally rclone updates the modification time of files after they are
done uploading. This can cause permissions issues on Linux platforms
when rclone is copying to a CIFS mount where the user rclone is running
as does not own the file uploaded. If this option is enabled, rclone
will no longer update the modtime after copying a file.
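In effect (a sketch of the local backend change; the option field name
is illustrative):

```go
package local

import (
	"time"

	"github.com/rclone/rclone/fs"
)

// Options holds the backend options; NoSetModTime is the new flag.
type Options struct {
	NoSetModTime bool
}

// Fs is a stand-in for the local backend's Fs.
type Fs struct {
	opt Options
}

// Precision advertises that modtimes can't be set when the option is
// enabled, so rclone stops trying to update them after upload.
func (f *Fs) Precision() time.Duration {
	if f.opt.NoSetModTime {
		return fs.ModTimeNotSupported
	}
	return time.Nanosecond
}
```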
See: https://forum.rclone.org/t/chtimes-error-on-local-mounted-copy/17784
This implements `rclone cleanup` to remove multipart uploads over 24
hours old. It also implements the backend commands
`list-multipart-uploads`, to see which ones are available, and
`cleanup`, to delete them with a configurable expiry interval.
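Usage is along these lines (see the s3 backend docs for the exact
options): `rclone backend list-multipart-uploads s3:bucket` to list
pending uploads, and `rclone cleanup s3:bucket` or
`rclone backend cleanup s3:bucket -o max-age=24h` to remove old ones.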
See #4302
Before this fix, if an object had ID set and download_url was in use,
downloading the object would give this error:
failed to open for download: bucket example_bucket does not have file: /b2api/v1/b2_download_file_by_id (404 not_found)
After this fix we only download by ID if download_url is not set.
See: https://forum.rclone.org/t/correct-format-for-rclone-b2-download-url-variable/15498
When we run MKCOL on 4shared on a directory that already exists, this
returns a 409/Conflict error. However this error code usually means
that the intermediate collections need creating.
The actual error code to return when trying to create a directory that
already exists isn't specified in the RFC, only that an error MUST be
returned and there are already 3 statuses checked in the code.
However using 409 makes rclone's usual strategy for making directories
fail and return the 409 error.
This patch tries the MKCOL and, if it returns an unrecognised error
code, calls PROPFIND on the directory to discover whether the
directory really exists or not.
This should also cover other WebDAV servers returning other error
messages we haven't accounted for in the code yet.
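A sketch of the logic, with the MKCOL and PROPFIND calls abstracted as
function parameters (the real code only falls back on unrecognised
status codes):

```go
package webdav

import "context"

// mkdirWithFallback runs mkcol and, if it fails with something like
// 4shared's 409/Conflict, uses propfindExists to see whether the
// directory is already there, in which case the error is swallowed.
func mkdirWithFallback(
	ctx context.Context,
	dirPath string,
	mkcol func(ctx context.Context, path string) error,
	propfindExists func(ctx context.Context, path string) (bool, error),
) error {
	err := mkcol(ctx, dirPath)
	if err == nil {
		return nil
	}
	if exists, perr := propfindExists(ctx, dirPath); perr == nil && exists {
		return nil // directory already exists: treat MKCOL as a success
	}
	return err
}
```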
Before this change the cache backend contained its own routines for
mounting and testing on that mount.
These tests are never run on the CI and cause a maintenance burden.
This commit removes the tests.
Previous to this fix if Region was not set and Endpoint was not set
then we set the endpoint to "https://s3.amazonaws.com/".
This is unnecessary because if the Region alone isn't set then we set
it to "us-east-1" which has the same endpoint.
Having the endpoint set breaks the bucket region auto detection with
the error "Failed to update region for bucket: can't set region to
"xxx" as endpoint is set".
This fix removes that default.
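The defaulting now amounts to (sketch):

```go
package s3

// Options holds the relevant backend options.
type Options struct {
	Region   string
	Endpoint string
}

// setDefaults gives only the Region a default; the Endpoint is left
// empty so that bucket region auto-detection can still adjust it.
func setDefaults(opt *Options) {
	if opt.Region == "" {
		opt.Region = "us-east-1" // same endpoint as s3.amazonaws.com
	}
	// Removed: if opt.Endpoint == "" { opt.Endpoint = "https://s3.amazonaws.com/" }
}
```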
At some point Purge stopped deleting directory markers. We don't have
an integration test for this so it went unnoticed.
This patch fixes the problem but doesn't introduce an integration test
as we don't have a framework for making directory markers yet.
Before this change, large objects which had had their contents deleted
would return "Object not found" and break the listing.
This change makes these objects appear as 0 sized entities so they can
be listed and deleted.
Pcloud appears to have opened up a new region and they are returning
the hostname in the oauth callback, thus
GET /?code=XXX&locationid=1&hostname=api.pcloud.com&state=XXX HTTP/1.1
GET /?code=XXX&locationid=2&hostname=eapi.pcloud.com&state=XXX HTTP/1.1
This isn't documented yet, however pCloud have confirmed that this is
the correct interpretation.
Rclone now reads the "hostname" parameter in the oauth callback and
stores it in the config file. It uses it for all subsequent API calls.
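Extracting it looks like this (sketch):

```go
package pcloud

import "net/http"

// apiHostname pulls the API hostname out of the oauth redirect
// request, e.g. "api.pcloud.com" or "eapi.pcloud.com". It is then
// saved to the config file and used for subsequent API calls.
func apiHostname(req *http.Request) string {
	return req.URL.Query().Get("hostname")
}
```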
Previous to this fix, a dangling shortcut would cause an error in the
directory listing.
This patch makes dangling shortcuts appear as 0 sized objects in the
directory listing so they can be deleted. These objects can't be read
though.
For some objects the onedrive backend has been doing a server side
copy and a delete when a server side move would have worked OK.
This was caused by not detecting the home drive correctly (when it was
an empty string) and assuming that these transfers were cross drive.
This is fixed by canonicalizing the drive IDs before comparing them.
Currently credentials are required to download a public bucket file
which is not really necessary and makes automated usage more complex.
Add a new option "anonymous" which when enabled configures the gcs
backend to use an anonymous HTTP client. This of course only works
for read access, and trying to write will lead to errors like:
"googleapi: Error 401: Anonymous caller does not have
storage.objects.create access to the Google Cloud Storage object.",
as expected. By default the anonymous access option is disabled so that
the GCS Application Default Credentials are still used by default as
before and an error is given if they can't be found.
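A sketch of the client selection (using the scope constant from
google.golang.org/api/storage/v1):

```go
package googlecloudstorage

import (
	"context"
	"net/http"

	"golang.org/x/oauth2/google"
	storage "google.golang.org/api/storage/v1"
)

// getClient returns a plain unauthenticated client when anonymous
// access is requested (read access to public buckets only), otherwise
// one backed by the Application Default Credentials as before.
func getClient(ctx context.Context, anonymous bool) (*http.Client, error) {
	if anonymous {
		return &http.Client{}, nil
	}
	return google.DefaultClient(ctx, storage.DevstorageReadWriteScope)
}
```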
Before this change rclone used the relative path from the current
working directory.
It appears that WS FTP doesn't like this, and the openssh sftp tool
also uses absolute paths, which is a good reason for switching to
absolute paths.
This change reads the current working directory at startup and bases
all file requests from there.
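With github.com/pkg/sftp this is roughly (in rclone the working
directory is read once at connection time and cached):

```go
package sftp

import (
	gopath "path"

	"github.com/pkg/sftp"
)

// absolutePath joins a request path onto the remote working directory
// so the server always sees absolute paths.
func absolutePath(client *sftp.Client, relPath string) (string, error) {
	cwd, err := client.Getwd()
	if err != nil {
		return "", err
	}
	return gopath.Join(cwd, relPath), nil
}
```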
See: https://forum.rclone.org/t/sftp-ssh-fx-failure-directory-not-found/17436
Before this change the --local-no-updated flag would not error if the
files changed in size during the transfer, but the file could still be
read beyond the size advertised, which caused problems with certain
backends.
After this change we attempt to provide a consistent view of the file
once it has been opened.
Once the file has had stat() called on it for the first time we:
- Only transfer the size that stat gave
- Only checksum the size that stat gave
- Don't update the stat info for the file
This means that files that are extending can be transferred - rclone
will transfer the length it saw the first time it listed the file.
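The core of the pinned-size read looks like (sketch):

```go
package local

import (
	"io"
	"os"
)

// openPinned opens a file but only ever exposes the size captured by
// the first stat, so a file growing mid-transfer can't be read beyond
// the length rclone advertised in the listing.
func openPinned(name string, statSize int64) (io.ReadCloser, error) {
	fd, err := os.Open(name)
	if err != nil {
		return nil, err
	}
	return struct {
		io.Reader
		io.Closer
	}{io.LimitReader(fd, statSize), fd}, nil
}
```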
See: https://forum.rclone.org/t/transport-connection-broken/16494/21
In this commit
5c5ad6220 drive: fix --drive-impersonate with cached root_folder_id
We disabled the use of root_folder_id with --drive-impersonate to fix
a problem with a cached root_folder_id giving the wrong results.
This, alas, broke one user's setup with a root_folder_id of
appDataFolder. Since this is identifiable and definitely couldn't have
been cached, we can safely skip this check in this case.
See: https://forum.rclone.org/t/rclone-gdrive-no-longer-returning-anything/17215/10