This was started by Fionera, finished off by Laura with fixes and more
docs from Nick.
Co-authored-by: Fionera <fionera@fionera.de>
Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
* azureblob - Add support for Azure Storage Emulator to test things locally.
Testing - Verified changes by testing manually.
* docs: update azureblob docs to reflect support of storage emulator
This bug was introduced as part of adding context to the backends and
slipped through the net because the About call did not have an
interface assertion in the sftp backend.
I checked there were no other missing interface assertions on all the
optional methods on all the backends.
- Change rclone/fs interfaces to accept context.Context
- Update interface implementations to use context.Context
- Change top level usage to propagate context to lower level functions
Context propagation is needed for stopping transfers and passing other
request-scoped values.
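A minimal sketch of what the interface change looks like (the Lister
interface and memFs type below are illustrative stand-ins, not rclone's
actual types):

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// Lister stands in for an rclone/fs style interface whose methods now
// take a context.Context as their first argument.
type Lister interface {
	List(ctx context.Context, dir string) ([]string, error)
}

type memFs struct{ entries []string }

// List honours cancellation via the context passed down from the top level.
func (f *memFs) List(ctx context.Context, dir string) ([]string, error) {
	select {
	case <-ctx.Done():
		return nil, ctx.Err() // transfer stopped or timed out
	default:
		return f.entries, nil
	}
}

func main() {
	// The top level creates the context and propagates it downwards.
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	var l Lister = &memFs{entries: []string{"a.txt", "b.txt"}}
	entries, err := l.List(ctx, "/")
	fmt.Println(entries, err)
}
```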
Before this change rclone attempted to set the "updated" field in
uploaded objects to the modification time.
However when this modification time was before 1970, google cloud
storage would return the rather cryptic error:
googleapi: Error 400: Invalid value for UnsignedLong: -42000, invalid
However API docs: https://cloud.google.com/storage/docs/json_api/v1/objects#resource
state the "updated" field is read only and tests confirm that. Even
though the field is read only, it looks like Google parses it.
This change therefore removes the attempt to set the "updated" field
(which was doing nothing anyway) and fixes the problem uploading
pre-1970 files.
See #3196 and https://forum.rclone.org/t/invalid-value-for-unsignedlong-file-missing-date-modified/3466
In #2728 and 55b9a4e we decided to allow server side operations
between google drives with different configurations.
This works in some cases (eg between teamdrives) but does not work in
the general case, and this caused breakage in quite a number of
people's workflows.
This change makes the feature conditional on the
--drive-server-side-across-configs flag which defaults to off.
See: https://forum.rclone.org/t/gdrive-to-gdrive-error-404-file-not-found/9621/10
Fixes#3119
Under Linux, rclone attempts to preallocate files for efficiency.
Before this change, pre-allocation would fail on ZFS with the error
Failed to pre-allocate: operation not supported
After this change rclone tries a different flag combination for ZFS
then disables pre-allocate if that doesn't work.
Fixes#3066
Before this change rclone would fail with
Failed to set modification time: InvalidObjectState: Operation is not valid for the source object's storage class
when attempting to set the modification time of an object in GLACIER.
After this change rclone will re-upload the object as part of a sync if it needs to change the modification time.
See: https://forum.rclone.org/t/suspected-bug-in-s3-or-compatible-sync-logic-to-glacier/10187
Before this change, rclone would return an error from the listing if
there was an unreadable directory, or if there was a problem stat-ing
a directory entry. This was frustrating because the command
completely aborts at that point when there is work it could do.
After this change rclone lists the directories and reports ERRORs for
unreadable directories or problems stat-ing files, but does not return
an error from the listing. It does set the error flag, which means the
command will fail (and objects won't be deleted with `rclone sync`).
This brings rclone's behaviour exactly into line with rsync's
behaviour. It does as much as possible, but doesn't let the errors
pass silently.
Fixes#3179
Before this change we calculated all possible hashes for the file when
the `Hashes` method was called.
After this change we only calculate the hash requested.
Almost all uses of `Hash` just need one checksum. This will slow down
`rclone lsjson` with the `--hash` flag. Perhaps lsjson should have a
`--hash-type` flag.
However it will speed up sync/copy/move/check/md5sum/sha1sum etc.
Before, it took 12.4 seconds to md5sum a 1GB file; after, it takes 3.1
seconds, which is the same time the md5sum utility takes.
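A rough sketch of the single-hash approach, hashing only the requested
checksum type (MD5 here) in one pass over the data; hashFile is a
hypothetical helper, not rclone's code:

```go
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"os"
)

// hashFile computes only the checksum that was asked for (MD5 here),
// rather than feeding the data through every supported hasher at once.
func hashFile(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	sum, err := hashFile("testfile")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(sum)
}
```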
This fixes rclone returning `listing failed: strconv.ParseInt` errors
when listing files which have a malformed `src_last_modified_millis`.
This is uploaded by the client so care is needed in interpreting it as
it can be malformed.
Fixes#3065
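A sketch of the defensive parsing, with a hypothetical parseMillis
helper; a malformed value falls back to a zero time rather than failing
the listing:

```go
package main

import (
	"fmt"
	"strconv"
	"time"
)

// parseMillis interprets a client supplied millisecond timestamp such
// as src_last_modified_millis. The value is uploaded by clients and
// may be malformed, so a parse failure returns a zero time instead of
// failing the whole listing.
func parseMillis(s string) time.Time {
	ms, err := strconv.ParseInt(s, 10, 64)
	if err != nil {
		return time.Time{} // malformed: treat the modtime as unknown
	}
	return time.Unix(ms/1000, (ms%1000)*int64(time.Millisecond))
}

func main() {
	fmt.Println(parseMillis("1546300800000"))
	fmt.Println(parseMillis("not-a-number"))
}
```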
In as many methods as possible we attempt to obey the Retry-After
header where it is provided.
This means that when objects are being requested from OVH cold storage
rclone will sleep the correct amount of time before retrying.
If the sleeps are short it does them immediately; if long, it returns
an ErrorRetryAfter which will cause the outer retry to sleep before
retrying.
Fixes#3041
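A sketch of the Retry-After handling; retryAfter is a hypothetical
helper, and the short/long split mirrors the behaviour described above:

```go
package main

import (
	"fmt"
	"net/http"
	"strconv"
	"time"
)

// retryAfter reads the Retry-After header and returns how long to wait
// before retrying, or zero if the header is absent or unparseable.
func retryAfter(resp *http.Response) time.Duration {
	value := resp.Header.Get("Retry-After")
	if value == "" {
		return 0
	}
	if secs, err := strconv.Atoi(value); err == nil {
		return time.Duration(secs) * time.Second
	}
	// Retry-After may also be an HTTP date.
	if t, err := http.ParseTime(value); err == nil {
		return time.Until(t)
	}
	return 0
}

func main() {
	resp := &http.Response{Header: http.Header{"Retry-After": []string{"120"}}}
	wait := retryAfter(resp)
	if wait > 10*time.Second {
		// Long wait: hand it back to the outer retry loop (rclone
		// signals this with an ErrorRetryAfter) rather than sleeping here.
		fmt.Println("return ErrorRetryAfter so the outer retry sleeps", wait)
	} else {
		time.Sleep(wait)
		fmt.Println("slept", wait, "and retried inline")
	}
}
```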
This implements the Expiry interface so token expiry works properly.
It also makes sure that this change from the swift library works
correctly with rclone's custom authenticator:
> Renew the token 60s before the expiry time
>
> The v2 and v3 auth schemes both return the expiry time of the token,
> so instead of waiting for a 401 error, renew the token 60s before this
> time.
>
> This makes transfers more efficient and also works around a bug in
> CEPH which returns 403 instead of 401 when the token expires.
>
> http://tracker.ceph.com/issues/22223
Some WebDAV servers return an empty Available and Used which parses as 0.
This caused About to return the Total as 0 which can confuse mounted
file systems.
After this change we ignore the result if Available and Used are both 0.
See: https://forum.rclone.org/t/windows-mounted-webdav-drive-has-no-free-space/8938
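A sketch of the guard, using a hypothetical Usage type:

```go
package main

import "fmt"

// Usage is a hypothetical stand-in for the quota numbers parsed from
// the WebDAV quota-available-bytes and quota-used-bytes properties.
type Usage struct {
	Available int64
	Used      int64
}

// total derives the Total figure for About. When the server reports
// zero for both values the result is treated as unknown (-1) rather
// than reporting a filesystem of size 0.
func total(u Usage) int64 {
	if u.Available == 0 && u.Used == 0 {
		return -1 // unknown
	}
	return u.Available + u.Used
}

func main() {
	fmt.Println(total(Usage{Available: 1 << 30, Used: 1 << 20})) // normal server
	fmt.Println(total(Usage{}))                                  // empty response: unknown
}
```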
Before this change a race condition existed in mkdir
- the directory was attempted to be created
- the parent didn't exist so it failed
- the parent was created
- the directory was created again
The last step failed as the directory was created in a different thread.
This was fixed by checking the error messages of MKCOL for both
directory creations, rather than only the first.
Before this change a range request on a 0 length file would fail
$ rclone cat --head 128 drive:test/emptyfile
ERROR : open file failed: googleapi: Error 416: Request range not satisfiable, requestedRangeNotSatisfiable
To fix this we remove Range: headers on requests for zero length files.
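A sketch of the fix; addRange is a hypothetical helper showing the
zero-length special case:

```go
package main

import (
	"fmt"
	"net/http"
)

// addRange sets a Range header for a partial read, but skips it for
// zero length objects, where any byte range would be unsatisfiable.
func addRange(req *http.Request, offset, count, size int64) {
	if size == 0 {
		return // a Range: header on an empty file provokes a 416
	}
	req.Header.Set("Range", fmt.Sprintf("bytes=%d-%d", offset, offset+count-1))
}

func main() {
	req, _ := http.NewRequest("GET", "https://example.com/file", nil)
	addRange(req, 0, 128, 0) // empty file: no Range header added
	fmt.Printf("%q\n", req.Header.Get("Range"))
}
```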
This introduces a new config variable bucket_policy_only. If this is
set then rclone:
- ignores ACLs set on buckets
- ignores ACLs set on objects
- creates buckets with Bucket Policy Only set
Fall back to default application credentials when all other credential sources fail
This change allows users with default application credentials
configured (notably when running on google compute instances) to
dispense with explicitly configuring google cloud storage credentials
in rclone's own configuration.
This enables MD5 checksum calculation and publication when uploading
files above the "Cutoff" limit. It was explicitly ignored for
multi-block (a.k.a. multipart) uploads to Azure Blob Storage.
Make the pacer package more flexible by extracting the pace calculation
functions into a separate interface. This also allows moving features
that require the fs package, such as logging and custom errors, into
the fs package.
Also add a RetryAfterError sentinel error that can be used to signal a
desired retry time to the Calculator.
Bitrix Site Manager emits `<D:resourcetype><collection/></D:resourcetype>`
missing the namespace on the `collection` tag. This causes the item
to be identified as a file instead of a directory.
To work around this look at the Microsoft extension prop
`iscollection` which seems to be emitted as well.
Before this change any attempt to access a google doc in an rclone
mount would give the error "partial downloads are not supported while
exporting Google Documents" as the mount uses ranged requests to read
data.
This implements ranged requests for a limited number of scenarios,
just enough so that Google docs can be cat-ed from an rclone mount.
When they are cat-ed they also receive their correct size.
Before this change the union remote was using whether the writable
union could poll for changes to decide whether the union mount could
poll for changes.
The fix causes the union backend to signal it can poll for changes if
**any** of the remotes can poll for changes.
Before this change it was setting the modification times of the things
that the symlinks pointed to.
Note that this is only implemented for unix style OSes. Other OSes
will not attempt to set the modification time of a symlink.
If the upload concurrency is set > 1 then the hash becomes corrupted.
The upload is fine, and can be downloaded fine, however the hash is no
longer the md5sum of the object. It is not known whether this is
rclone's fault or a bug at QingStor.
Before this change if ContentLength was set in the options but 0 then
we would upload using chunked encoding. Fix this to always upload
with a "Content-Length" header even if the size is 0.
Remove workarounds for this from b2 and onedrive backends.
This fixes the issue for the webdav backend described here:
https://forum.rclone.org/t/code-500-errors-with-webdav-nextcloud/8440/
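A sketch of the general technique in Go's net/http (not necessarily how
rclone's rest library implements it): giving the request an explicitly
zero-length body makes the transport send "Content-Length: 0" on a PUT
instead of falling back to chunked encoding:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
	"net/http/httputil"
)

// newUpload builds a PUT request that always carries a Content-Length
// header, even for a zero byte body, instead of falling back to
// chunked transfer encoding.
func newUpload(url string, body []byte) (*http.Request, error) {
	var r io.Reader = bytes.NewReader(body)
	if len(body) == 0 {
		// http.NoBody gives the request a known length of zero, so
		// the transport sends "Content-Length: 0" on the PUT rather
		// than "Transfer-Encoding: chunked".
		r = http.NoBody
	}
	req, err := http.NewRequest("PUT", url, r)
	if err != nil {
		return nil, err
	}
	req.ContentLength = int64(len(body))
	return req, nil
}

func main() {
	req, _ := newUpload("https://example.com/empty", nil)
	dump, _ := httputil.DumpRequestOut(req, false)
	fmt.Print(string(dump))
}
```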
Before this change azureblob would attempt to create already existing
containers. This causes problems with limited permissions keys.
This change checks the container exists before trying to create it in
the same way the s3 backend does. This uses no more requests in the
usual case of the container existing.
See: https://forum.rclone.org/t/copying-individual-files-to-azure-blob-storage/8397
Before this change buckets were created with the same ACL as objects.
After this change, the user can set just --s3-acl to set the ACL of
buckets and objects, or use --s3-bucket-acl as well to have a
different ACL used for bucket creation.
This also logs at INFO level the creation and deletion of buckets.
* drive: don't run teamdrive config if auto confirm set
* onedrive: don't run extra config if auto confirm set
* make Confirm results customisable by config
Fixes#1010
The existing s3 backend passed all integration tests with OSS provided
`force_path_style = false`.
This makes sure that is so and adds documentation and configuration
for OSS.
Thanks to @luolibin for their work on the OSS backend which we ended
up not needing.
Fixes#1641
Fixes#1237
The time format provided by webdav servers seems to vary wildly from
that specified in the RFC - rclone already parses times in 5 different
formats!
If an unparseable time is found, then fail softly logging an ERROR
(just once) but returning the epoch.
This will mean that webdav servers with bad time formats will still be
usable by rclone.
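A sketch of the soft-fail parsing; the list of layouts is illustrative
rather than the exact set rclone tries:

```go
package main

import (
	"fmt"
	"time"
)

// timeFormats holds the layouts tried in turn; real WebDAV servers
// deviate from the RFC in many ways, so several are listed.
var timeFormats = []string{
	time.RFC1123,
	time.RFC1123Z,
	time.RFC3339,
	time.RFC850,
	time.ANSIC,
}

// parseTime tries each layout and falls back to the Unix epoch when
// none match, so a bad date doesn't make the whole listing fail.
func parseTime(s string) time.Time {
	for _, layout := range timeFormats {
		if t, err := time.Parse(layout, s); err == nil {
			return t
		}
	}
	// Real code would log an ERROR once here; we just return the epoch.
	return time.Unix(0, 0)
}

func main() {
	fmt.Println(parseTime("Mon, 02 Jan 2006 15:04:05 MST"))
	fmt.Println(parseTime("not a date")) // 1970-01-01
}
```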
Before this fix rclone would just use the authorised bucket regardless
of what bucket you put on the command line.
This uses the new `bucketName` response in the API and checks that the
user is using the correct bucket name to avoid accidents.
Fixes#2839
Before this fix the http backend was returning the wrong error code
when files were not found. This was causing --files-from to error on
missing files instead of skipping them like it should.
The `cleanup` command will delete unfinished large file uploads that
were started more than a day ago (to avoid deleting uploads that are
potentially still in progress).
Fixes#2617
After testing the upload performance, increasing
--s3-upload-concurrency to 4 (from 2) gives an additional 45%
throughput at the cost of 10MB extra memory per transfer.
Before this change rclone would use multipart uploads for any size of
file. However multipart uploads are less efficient for smaller files
and don't have MD5 checksums so it is advantageous to use single part
uploads if possible.
This implements single part uploads for all files smaller than the
upload_cutoff size. Streamed files must be uploaded as multipart
files though.
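A sketch of the decision, with a hypothetical cutoff value; a negative
size stands for a streamed upload of unknown length:

```go
package main

import "fmt"

const uploadCutoff = 200 * 1024 * 1024 // hypothetical 200 MB cutoff

// useMultipart decides the upload strategy: small files with a known
// size get a single part upload (cheaper, and it carries an MD5);
// big files and streamed files of unknown size go multipart.
func useMultipart(size int64) bool {
	if size < 0 {
		return true // streamed: size unknown up front
	}
	return size >= uploadCutoff
}

func main() {
	fmt.Println(useMultipart(5 * 1024 * 1024)) // false: single part
	fmt.Println(useMultipart(-1))              // true: streamed
	fmt.Println(useMultipart(1 << 30))         // true: large file
}
```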
Before this change we used Remove to remove directories. This works
fine on Unix based systems but not so well on Windows based ones.
Swap to using RemoveDirectory instead.
When a container is deleted, a container with the same name cannot be
created for at least 30 seconds; the container may not be available
for more than 30 seconds if the service is still processing the
request.
We sleep so that we wait at most 60 seconds. This is mostly useful in
the integration tests where containers get deleted and remade
immediately.
Get rid of the api client and use rest/pacer for all API calls
Add Copy, Move, DirMove, PublicLink, About optional interfaces
Improve general error handling
Remove ListR for now due to inconsistent behaviour
Fixes#2586, progress on #2740 and #2178
Before this change backend integration tests depended on each other,
so tests could not be retried.
After this change we nest tests to ensure that tests are provided with
the starting state they expect.
Tell the integration test runner that it can retry backend tests also.
This also includes bin/test_independence.go which runs each test
individually for a backend to prove that they are independent.
Wasabi has two locations, US East and US West, with different endpoint URLs.
When configuring S3 to use Wasabi, provide the endpoint information for both
locations.
Before this change Rmdir would check the root rather than the
directory specified for being empty and return "directory not empty"
when it shouldn't have done.
When the env_auth option is enabled, the AWS SDK's session constructor
now loads configuration from ~/.aws/config and environment variables,
and credentials per the selected (or default) AWS_PROFILE's settings.
This is accomplished by **NOT** including any Credential provider in the
aws.Config passed to the session constructor: If the Config.Credentials
is non-nil, that will always be used and the user's configuration
regarding role_arn, credential_source, source_profile, etc. from the
shared config will be completely ignored.
(The conditional creation and configuration of the stscreds Credential
provider is complicated enough that it is not worth re-creating that
logic.)
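A sketch of the SDK side of this using aws-sdk-go v1 (illustrative, not
rclone's exact code): with no Credentials set in the Config and shared
config enabled, the session constructor resolves everything itself:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws/session"
)

func main() {
	// Leaving Config.Credentials nil lets the SDK resolve credentials
	// itself: environment variables, plus ~/.aws/config and
	// ~/.aws/credentials for the selected AWS_PROFILE, including
	// role_arn/source_profile handling.
	sess, err := session.NewSessionWithOptions(session.Options{
		SharedConfigState: session.SharedConfigEnable,
	})
	if err != nil {
		log.Fatal(err)
	}
	creds, err := sess.Config.Credentials.Get()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("credentials resolved from provider:", creds.ProviderName)
}
```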
Before this change the ACL for objects which were server side copied
was left at the default "private" settings. S3 doesn't copy the ACL
from the source when you copy an object, you have to set it afresh
which is what this does.
Until https://github.com/Azure/azure-storage-blob-go/pull/75 is merged
the SDK can't upload a single blob of exactly the chunk size, so
upload files of this size with a multipart upload as a workaround.
The previous fix for this 6a773289e7 turned out to cause problems
uploading files with maximum chunk size so needed to be redone.
Fixes#2653
Before this change the Features() method would return a different Fs
from the one it was called on if the remote was instantiated on a
file.
The practical effect of this is that optional features, eg `rclone
about` wouldn't work properly when called on a file, and likely this
has been causing low level problems for users of these backends for
ages.
Ideally there would be a test for this, but it turns out that this is
really hard, so instead of that all the backends have been converted
to not copy the Fs and a big warning comment inserted for future
readers.
Fixes#2182
Use the same function to join the root paths for the wrapping remotes
alias, cache and crypt.
The new function fspath.JoinRootPath is equivalent to path.Join, but if
the first non-empty element starts with "//", this is preserved to
allow Windows network paths to be used in these remotes.
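A sketch of the idea behind fspath.JoinRootPath (the joinRootPath
function below is a simplified stand-in):

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// joinRootPath behaves like path.Join but keeps the leading "//" of
// the first non-empty element, so Windows network paths survive.
func joinRootPath(elem ...string) string {
	for _, e := range elem {
		if e == "" {
			continue
		}
		if strings.HasPrefix(e, "//") {
			// path.Join collapses "//" to "/", so put one back.
			return "/" + path.Join(elem...)
		}
		break
	}
	return path.Join(elem...)
}

func main() {
	fmt.Println(path.Join("//server/share", "dir"))    // /server/share/dir
	fmt.Println(joinRootPath("//server/share", "dir")) // //server/share/dir
}
```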
Implement optional interfaces
- Purge
- PutStream
- Copy
- Move
- DirMove
- DirCacheFlush
- ChangeNotify
- About
Make Hashes() return the intersection of all the hashes supported by the remotes
When moving a directory in drive, most of the time only a notification
for the directory itself is created, not the old or new parents.
This tries to find the old path in the dirCache and the new path with
the dirCache of the new parent, which can result in two notifications
for a moved directory.
Add a new flag to the drive backend to allow document conversions on upload.
The existing --drive-formats flag has been renamed to --drive-export-formats.
The old flag still works for backwards compatibility.
Make use of the mime package to find matching extensions and mime types.
For simplicity, all extensions are now prefixed with "." to match the
mime package requirements.
Parsed extensions get converted if needed.
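For example, the mime package maps between dotted extensions and mime
types in both directions:

```go
package main

import (
	"fmt"
	"mime"
)

func main() {
	// Extensions are used with a leading "." as the mime package expects.
	fmt.Println(mime.TypeByExtension(".pdf")) // application/pdf
	// And from a mime type back to its known extensions.
	exts, err := mime.ExtensionsByType("text/html")
	fmt.Println(exts, err)
}
```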
Before this change on Windows, files copied locally could become
heavily fragmented (300+ fragments for maybe 100 MB), no matter how
much contiguous free space there was (even if it's over 1TiB). This
can needlessly yet severely adversely affect performance on hard
disks.
This change uses NtSetInformationFile to pre-allocate the space to
avoid this.
It does nothing on OSes other than Windows.
Add --drive-v2-download-min-size flag to allow downloading files via the
drive v2 API. If files are greater than this flag, a download link is
generated when needed. The flag is disabled by default.
When combining the remote value and the root path, preserve the absence
or presence of the / at the beginning of the wrapped remote path.
e.g. a remote "cloud:" and root path "dir" becomes "cloud:dir" instead
of "cloud:/dir".
Fixes#2553
When combining the remote value and the root path, preserve the absence
or presence of the / at the beginning of the wrapped remote path.
e.g. a remote "cloud:" and root path "dir" becomes "cloud:dir" instead
of "cloud:/dir".
* Fix error handling in List and NewObject
* Fix Precision in case we have precision > time.Second
* Fix Features - all binary features are possible
* Fix integration tests using new test facilities
Uploads were broken because chunk size was set to zero. This was a
consequence of the backend config re-organization which meant that
chunk size had lost its default.
Sharing some backend config between swift and hubic fixes the problem
and means hubic gains its own --hubic-chunk-size flag.
The initial work on this was done by Oliver Heyme with updates from
Cnly.
Oliver Heyme:
* Changed to Microsoft graph
* Enable writing
* Added more options for adding a OneDrive Remote
* Better error handling
* Send modDate at create upload session and fix list children
Cnly:
* Simple upload API only supports max 4MB files
* Fix supported hash types for different drive types
* Fix unchecked err
Co-authored-by: Oliver Heyme <olihey@googlemail.com>
Co-authored-by: Cnly <minecnly@gmail.com>
Sometimes pcloud will leave a half uploaded file when the transfer
actually failed. This patch deletes the file if it exists.
This problem was spotted by the integration tests.
Before this change we were using the ChildCount in the Folder facet to
determine if a directory was empty or not. However this seems to be
unreliable, or updated asynchronously, which meant that `rclone rmdir`
sometimes deleted directories that had files in them.
This problem was spotted by the integration tests.
Listing the directory instead of relying on the ChildCount fixes the
problem and the integration tests, without changing the cost (one http
transaction).
This was causing errors which looked like this when copying a file to
the root of a drive:
mkdir \\?: The filename, directory name, or volume label syntax is incorrect.
This was caused by an incorrect path splitting routine which was
removing \ from the end of UNC paths when it shouldn't have been. Fixed
by using the standard library `filepath.Dir` instead.
Before this change if only one of storage_url or auth_token were
supplied then rclone would overwrite both of them when authenticating.
This effectively meant you could only supply both of them or neither
of them.
Now rclone still does the authentication to read the missing
storage_url or auth_token then afterwards re-writes the auth_token or
storage_url back to what the user desired.
Fixes#2464
* Add docs for --jottacloud-md5-memory-limit
* Factor out readMD5 function and add tests
* Fix accounting
* Make sure temp file is deleted at the start (not Windows)
In e52ecba295 we forgot to unwrap and
re-wrap the accounting, which meant that the accounting was no longer
first in the chain of readers. This led to accounting inaccuracies
in remotes which wrap and unwrap the reader again.
Some webdav backends (eg rclone serve webdav) leave behind half
written files on error. This causes the integration tests to
fail. Here we remove the file if it exists.
Sometimes it takes many more commit retries than expected to commit a
multipart file, so split this number into its own config variable and
default it to 100 which should always be enough.
This change adds the depth parameter to listAll and readMetaDataForPath.
This allows recursive calls of these methods with a different depth
header.
Sharepoint won't list files if the depth header is != 0. If that is the
case, it will just return an error 404 although the file exists.
Since it is not possible to determine if a path should be a file or a
directory, rclone has to make a request with depth = 1 first. On success
we are sure that the path is a directory and the listing will work.
If this request returns error 404, the path either doesn't exist or it
is a file.
To be sure, we can try again with depth set to 0. If it still fails, the
path really doesn't exist, else we found our file.
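A sketch of the probing logic (propfind and stat are hypothetical
helpers, and real code would treat other status codes more carefully):

```go
package main

import (
	"fmt"
	"net/http"
)

// propfind issues a PROPFIND with the given Depth header and returns
// the HTTP status code. Error handling is trimmed for brevity.
func propfind(client *http.Client, url, depth string) (int, error) {
	req, err := http.NewRequest("PROPFIND", url, nil)
	if err != nil {
		return 0, err
	}
	req.Header.Set("Depth", depth)
	resp, err := client.Do(req)
	if err != nil {
		return 0, err
	}
	resp.Body.Close()
	return resp.StatusCode, nil
}

// stat works out whether url is a directory, a file, or missing on a
// server (like Sharepoint) that 404s a depth 1 PROPFIND on files.
func stat(client *http.Client, url string) (string, error) {
	status, err := propfind(client, url, "1")
	if err != nil {
		return "", err
	}
	if status != http.StatusNotFound {
		return "directory", nil // depth 1 succeeded: it lists, so it's a directory
	}
	// Retry with depth 0: success means it is a file.
	status, err = propfind(client, url, "0")
	if err != nil {
		return "", err
	}
	if status == http.StatusNotFound {
		return "missing", nil
	}
	return "file", nil
}

func main() {
	kind, err := stat(http.DefaultClient, "https://example.com/dav/somepath")
	fmt.Println(kind, err)
}
```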
Before this change the Part structure had an int for the Offset and
uploading large files would produce this error
json: cannot unmarshal number 2147483648 into Go struct field Part.offset of type int
Changing the field to an int64 fixes the problem.
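A simplified version of the fix; with Offset declared as int64 the
value unmarshals correctly even on 32-bit builds, where a plain int is
only 32 bits wide:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Part is a simplified version of the upload part metadata; Offset
// must be int64 so offsets past 2 GiB unmarshal on 32-bit platforms.
type Part struct {
	Offset int64 `json:"offset"`
}

func main() {
	var p Part
	err := json.Unmarshal([]byte(`{"offset": 2147483648}`), &p)
	fmt.Println(p.Offset, err)
}
```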