* Fix error handling in List and NewObject
* Fix Precision in case we have precision > time.Second
* Fix Features - all binary features are possible
* Fix integration tests using new test facilities
Uploads were broken because chunk size was set to zero. This was a
consequence of the backend config re-organization which meant that
chunk size had lost its default.
Sharing some backend config between swift and hubic fixes the problem
and means hubic gains its own --hubic-chunk-size flag.
The initial work on this was done by Oliver Heyme with updates from
Cnly.
Oliver Heyme:
* Changed to Microsoft graph
* Enable writing
* Added more options for adding a OneDrive Remote
* Better error handling
* Send modDate at create upload session and fix list children
Cnly:
* Simple upload API only supports max 4MB files
* Fix supported hash types for different drive types
* Fix unchecked err
Co-authored-by: Oliver Heyme <olihey@googlemail.com>
Co-authored-by: Cnly <minecnly@gmail.com>
Sometimes pcloud will leave a half uploaded file when the transfer
actually failed. This patch deletes the file if it exists.
This problem was spotted by the integration tests.
Before this change we were using the ChildCount in the Folder facet to
determine if a directory was empty or not. However this seems to be
unreliable, or updated asynchronously, which meant that `rclone rmdir`
sometimes deleted directories that still had files in them.
This problem was spotted by the integration tests.
Listing the directory instead of relying on the ChildCount fixes the
problem and the integration tests, without changing the cost (one http
transaction).
This was causing errors which looked like this when copying a file to
the root of a drive:
mkdir \\?: The filename, directory name, or volume label syntax is incorrect.
This was caused by an incorrect path splitting routine which was
removing \ from the end of UNC paths when it shouldn't have been. Fixed
by using the standard library `filepath.Dir` instead.
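A minimal sketch of the idea, using only the standard library; parentDir is a hypothetical helper name, not the routine that was replaced:
```go
package localpath

import "path/filepath"

// parentDir returns the directory containing path by delegating to the
// standard library instead of splitting on the last separator by hand.
// On Windows filepath.Dir is volume-aware, so for an extended path such
// as `\\?\D:\file.txt` it returns `\\?\D:\` with the trailing \ intact.
func parentDir(path string) string {
	return filepath.Dir(path)
}
```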
Before this change if only one of storage_url or auth_token were
supplied then rclone would overwrite both of them when authenticating.
This effectively meant you could only supply both of them or neither of
them.
Now rclone still does the authentication to read the missing
storage_url or auth_token, then afterwards writes the auth_token or
storage_url back to what the user supplied.
Fixes #2464
* Add docs for --jottacloud-md5-memory-limit
* Factor out readMD5 function and add tests
* Fix accounting
* Make sure temp file is deleted at the start (not Windows)
In e52ecba295 we forgot to unwrap and
re-wrap the accounting which meant that the accounting was no longer
first in the chain of readers. This led to accounting inaccuracies
in remotes which wrap and unwrap the reader again.
Some webdav backends (eg rclone serve webdav) leave behind half
written files on error. This causes the integration tests to
fail. Here we remove the file if it exists.
Sometimes it takes many more commit retries than expected to commit a
multipart file, so split this number into its own config variable and
default it to 100 which should always be enough.
This change adds the depth parameter to listAll and readMetaDataForPath.
This allows recursive calls of these methods with a different depth
header.
Sharepoint won't list files if the depth header is != 0. If that is the
case, it will just return an error 404 although the file exists.
Since it is not possible to determine if a path should be a file or a
directory, rclone has to make a request with depth = 1 first. On success
we are sure that the path is a directory and the listing will work.
If this request returns error 404, the path either doesn't exist or it
is a file.
To be sure, we can try again with depth set to 0. If it still fails, the
path really doesn't exist, otherwise we have found our file.
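A rough sketch of that probing order using plain net/http; the URL and status handling are illustrative, not the backend's actual listAll/readMetaDataForPath code:
```go
package main

import (
	"fmt"
	"net/http"
)

// propfind issues a PROPFIND with the given Depth header and returns the
// HTTP status code.
func propfind(c *http.Client, url, depth string) (int, error) {
	req, err := http.NewRequest("PROPFIND", url, nil)
	if err != nil {
		return 0, err
	}
	req.Header.Set("Depth", depth)
	resp, err := c.Do(req)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	return resp.StatusCode, nil
}

func main() {
	c := &http.Client{}
	url := "https://example.sharepoint.com/webdav/some/path" // hypothetical

	// Depth: 1 succeeding means the path is a directory and listing works.
	if code, err := propfind(c, url, "1"); err == nil && code < 400 {
		fmt.Println("directory")
		return
	}
	// A 404 with Depth: 1 is ambiguous, so retry with Depth: 0.
	if code, err := propfind(c, url, "0"); err == nil && code < 400 {
		fmt.Println("file")
		return
	}
	fmt.Println("does not exist")
}
```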
Before this change the Part structure had an int for the Offset and
uploading large files would produce this error
json: cannot unmarshal number 2147483648 into Go struct field Part.offset of type int
Changing the field to an int64 fixes the problem.
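A minimal sketch of the failure and the fix; this Part struct and its json tag are stand-ins for the real one, and the overflow only bites where Go's int is 32 bits:
```go
package main

import (
	"encoding/json"
	"fmt"
)

// Part is a stand-in for the upload part structure. With Offset declared as
// int, json.Unmarshal fails on 32-bit builds once offsets reach 2^31
// (2147483648); int64 holds any offset the API can return.
type Part struct {
	Offset int64 `json:"offset"`
}

func main() {
	var p Part
	if err := json.Unmarshal([]byte(`{"offset": 2147483648}`), &p); err != nil {
		fmt.Println("unmarshal failed:", err)
		return
	}
	fmt.Println("offset:", p.Offset) // offset: 2147483648
}
```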
This unifies the 3 methods of reading config
* command line
* environment variable
* config file
And allows them all to be configured in all places. This is done by
making the []fs.Option in the backend registration be the master
source of what the backend options are (a rough sketch follows the list below).
The backend changes are:
* Use the new configmap.Mapper parameter
* Use configstruct to parse it into an Options struct
* Add all config to []fs.Option including defaults and help
* Remove all uses of pflag
* Remove all uses of config.FileGet
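A rough sketch of the shape of a backend registration after this change, assuming rclone's fs, configmap and configstruct packages as they were at the time; the import paths and the "mybackend" name are illustrative:
```go
package mybackend

import (
	"github.com/ncw/rclone/fs"
	"github.com/ncw/rclone/fs/config/configmap"
	"github.com/ncw/rclone/fs/config/configstruct"
)

// Options is filled in from the configmap by configstruct.
type Options struct {
	ChunkSize fs.SizeSuffix `config:"chunk_size"`
}

func init() {
	fs.Register(&fs.RegInfo{
		Name:        "mybackend", // hypothetical backend name
		Description: "Example backend",
		NewFs:       NewFs,
		// The []fs.Option is the master source of the backend's options,
		// including defaults and help, so the same definition serves the
		// command line, environment variables and the config file.
		Options: []fs.Option{{
			Name:     "chunk_size",
			Help:     "Chunk size to use for uploads.",
			Default:  fs.SizeSuffix(8 * 1024 * 1024),
			Advanced: true,
		}},
	})
}

// NewFs receives a configmap.Mapper instead of reading pflag or
// config.FileGet directly.
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
	opt := new(Options)
	if err := configstruct.Set(m, opt); err != nil {
		return nil, err
	}
	// ... use opt.ChunkSize to build the Fs ...
	return nil, nil
}
```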
This change includes removing older azureblob storage SDK, and getting
parity to existing code with latest blob storage SDK.
This change is also a prerequisite for addressing #2091
Go can't redirect PROPFIND requests properly, it changes the method to
GET, so we disable redirects when reading the metadata and assume the
object does not exist if we receive a redirect.
This is to work around the qnap redirecting requests for directories
without a trailing /.
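A minimal sketch of disabling redirects for the metadata read with net/http; the URL is illustrative:
```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Returning http.ErrUseLastResponse stops the client from following the
	// redirect (Go would otherwise turn the PROPFIND into a GET), so the 3xx
	// response is handed back to us and can be treated as "not found".
	client := &http.Client{
		CheckRedirect: func(req *http.Request, via []*http.Request) error {
			return http.ErrUseLastResponse
		},
	}

	req, err := http.NewRequest("PROPFIND", "https://nas.example.com/share/dir", nil) // hypothetical URL
	if err != nil {
		panic(err)
	}
	req.Header.Set("Depth", "0")

	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	if resp.StatusCode >= 300 && resp.StatusCode < 400 {
		fmt.Println("redirected - assuming the object does not exist")
		return
	}
	fmt.Println("status:", resp.Status)
}
```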
Previously this was reading a stale hash from the object leading to
broken integration tests.
This fixes these integration tests TestSyncDoesntUpdateModtime,
TestSyncAfterChangingFilesSizeOnly, TestSyncAfterChangingContentsOnly,
TestSyncWithUpdateOlder, TestSyncUTFNorm.
Now that https://issuetracker.google.com/issues/64468406 has been
fixed, we can remove part of the workaround which fixed #1675 -
019adc35609c2136
This will make queries marginally more efficient. We still need the
other part of the workaround since the `=` operator is case
insensitive.
Before this commit rclone's handling of symlinks and junction points
under Windows was broken. rclone treated them as files and attempted
to transfer them which gave the error "The handle is invalid".
Ultimately the cause of this was 3e43ff7414 which was a
workaround so files with reparse points (which are a kind of symlink)
would transfer correctly.
The solution implemented is to revert the above commit which will mean
that #614 will break again. However there is now a work-around (which
will be signaled by rclone) to use the -L flag which wasn't available
when the original commit was made.
Fixes #2336
In the drive v3 conversion we forgot the IncludeTeamDriveItems
parameter when calling the changes API. Adding it fixes the changes
polling with team drives.
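A rough sketch of the changes call with the parameter restored, assuming the google.golang.org/api/drive/v3 client; svc, pageToken and teamDriveID stand for values the backend already has:
```go
package driveexample

import drive "google.golang.org/api/drive/v3"

// listChanges polls the changes API including team drive items.
func listChanges(svc *drive.Service, pageToken, teamDriveID string) (*drive.ChangeList, error) {
	call := svc.Changes.List(pageToken).
		SupportsTeamDrives(true).
		IncludeTeamDriveItems(true) // the parameter forgotten in the v3 conversion
	if teamDriveID != "" {
		call = call.TeamDriveId(teamDriveID)
	}
	return call.Do()
}
```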
This implementation hopefully can handle all error responses from the
OneDrive for Business authentication.
I have only tested it with the "domain in unmanaged state" error.
This was caused by using the sftp.File.Read method which resets the
streaming window after each call. Replacing it with sftp.File.WriteTo
and an io.Pipe fixes the problem bringing the speed to the same as the
sftp binary.
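A minimal sketch of the WriteTo/io.Pipe arrangement, assuming the github.com/pkg/sftp client; openStream is a hypothetical helper:
```go
package sftpexample

import (
	"io"

	"github.com/pkg/sftp"
)

// openStream returns a reader for the remote file, streaming it through an
// io.Pipe with WriteTo so the sftp streaming window is not reset on every
// Read call.
func openStream(client *sftp.Client, path string) (io.ReadCloser, error) {
	f, err := client.Open(path)
	if err != nil {
		return nil, err
	}
	pr, pw := io.Pipe()
	go func() {
		// WriteTo keeps requests in flight, so throughput matches the
		// sftp binary.
		_, err := f.WriteTo(pw)
		_ = f.Close()
		pw.CloseWithError(err)
	}()
	return pr, nil
}
```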
This checks the checksum of the streamed encrypted data against the
checksum of the encrypted object returned from the remote and returns
an error if it is different.
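A minimal sketch of the idea using an io.TeeReader; the hash type (MD5) and the names are illustrative rather than crypt's actual implementation:
```go
package cryptexample

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"io/ioutil"
)

// checkStream hashes the encrypted data as it is read and compares the
// result with the hash the remote reports for the encrypted object.
func checkStream(encrypted io.Reader, remoteMD5 string) error {
	h := md5.New()
	r := io.TeeReader(encrypted, h) // every byte read also goes into the hash
	if _, err := io.Copy(ioutil.Discard, r); err != nil {
		return err
	}
	if sum := hex.EncodeToString(h.Sum(nil)); sum != remoteMD5 {
		return fmt.Errorf("corrupted on transfer: md5 differs %q vs %q", sum, remoteMD5)
	}
	return nil
}
```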
* Fix errcheck and golint warnings
* Remove unused constants and fix comments
* Parse error responses properly
* Fix Open with RangeOption
* Fix Move, Copy and DirMove
* Implement DirCacheFlush
* Check interfaces are correct
* Remove debugs and update overview
* Correct feature flags
* Pare replacement characters down to the minimum set
* Add to the integration tests
* Add Mkdir, Rmdir, Purge, Delete, SetModTime, Copy, Move, DirMove
* Update file size after upload
* Add Open seek
* Set private permission for new folder and uploaded file
* Add docs
* Update List function
* Fix UserSessionInfo struct
* Fix socket leaks
* Don’t close resp.Body in Open method
* Get hash when listing files
The official drive APIs seem to have trouble downloading large
documents sometimes.
This commit adds a --drive-alternate-export flag to use a different,
unofficial set of export URLs which seem to download large files OK.
Google cloud storage doesn't normally need retries, however certain
things (eg bucket creation and removal) are rate limited and do
generate 429 errors.
Before this change the integration tests would regularly blow up with
errors from GCS rate limiting bucket creation and removal.
After this change we low level retry all operations using the same
exponential backoff strategy as used in the google drive backend.
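A minimal sketch of the retry strategy, independent of rclone's internals: retry 429s (and 5xx) with exponential backoff:
```go
package gcsexample

import (
	"net/http"
	"time"
)

// shouldRetry reports whether the response looks like a transient error
// such as GCS rate limiting bucket creation/removal (429).
func shouldRetry(resp *http.Response, err error) bool {
	if err != nil {
		return true
	}
	return resp.StatusCode == http.StatusTooManyRequests || resp.StatusCode >= 500
}

// withRetries runs do with exponential backoff until it succeeds or the
// retry budget is used up.
func withRetries(do func() (*http.Response, error)) (*http.Response, error) {
	const maxTries = 10
	sleep := 100 * time.Millisecond
	var (
		resp *http.Response
		err  error
	)
	for try := 0; try < maxTries; try++ {
		resp, err = do()
		if !shouldRetry(resp, err) {
			break
		}
		time.Sleep(sleep)
		sleep *= 2 // exponential backoff
	}
	return resp, err
}
```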
Before this fix team drives would return the drive quota which is
incorrect and misleading.
Team drives don't appear to have an API for reading the bytes used or
the quota so we now return that the quota and usage are unknown.
If md5sum/sha1sum fails we debug what it output on stderr and return
an empty hash indicating we didn't have a hash, rather than
hash.ErrUnsupported indicating that we don't support this hash type.
This fixes lots of ERROR messages for sftp and Synology NAS which,
while it supports md5sum, has different SFTP and SSH paths so md5sum
doesn't work.
We also stop disabling md5sum/sha1sum on errors since typically Hashes
is only checked at the start of a sync run and isn't expected to
change dynamically.
This enables the use of the SharePoint webdav endpoint provided by
OneDrive for Business or Office365 Education Accounts. It enables
unverified accounts to be accessed with rclone via webdav as it isn't
possible through the normal onedrive backend.
This integrates the https://github.com/hensur/onedrive-cookie-test
package to fetch the required cookies to authorize against the
SharePoint webdav endpoint.
Very large directories can have their sizes returned as floating point
numbers, eg `1.0034576985781e+14` from the box API.
Before this change this would fail to parse as an int64.
This change parses the size as a float64 instead which will be
perfectly accurate for sizes up to 2**53 which is about 9 PB.
It is unknown whether box themselves use a float64 as an intermediate
representation in the API or not - it seems likely.
Fixes #2261
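A minimal sketch of the parsing change; Item is a stand-in for the box API type:
```go
package main

import (
	"encoding/json"
	"fmt"
)

// Item is a stand-in for the box API item. Declaring Size as float64 lets
// values like 1.0034576985781e+14 unmarshal, and float64 is exact for
// integers up to 2**53 so the conversion to int64 is lossless for any
// realistic size.
type Item struct {
	Size float64 `json:"size"`
}

func main() {
	var it Item
	if err := json.Unmarshal([]byte(`{"size": 1.0034576985781e+14}`), &it); err != nil {
		panic(err)
	}
	fmt.Println(int64(it.Size)) // 100345769857810
}
```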
* Implement about for:
* local, crypt, cache, drive, swift, hubic, onedrive, pcloud, dropbox
* Implement `--json` and `--full` flags for `rclone about`
* change About interface to return a Usage structure
* Remove operations.About as it is too thin an interface
* Implement Integration test
Relates to #1138 and #1564
This bug was introduced by the v3 API conversion in 07f20dd1fd.
The problem was that dircache.FindPath doesn't work for the root directory.
This adds an internal error for dircache.FindPath being called with
the root directory. This makes a failing test, which the fix to the
drive backend fixes.
This also improves the DirCache integration test.
These are AWS, Ceph, Dreamhost, IBM COS S3, Minio, Wasabi and Other.
This configures endpoints where known and makes sure config doesn't
appear where it isn't valid where possible.
This introduces a method of making provider specific configuration
within a remote. This is useful particularly in s3.
This commit does the basic configuration in S3 for IBM COS.
Before this change we lowercased the dropbox root directory. This was
likely a leftover from when we used to build a dictionary to translate
the cases of dropbox files. Now with the v2 API we can rely on
dropbox to do that for us, so we no longer need to lowercase the root.
This fixes issues using crypt with name obfuscation on dropbox.
Before this change asynchronous closes in cmount could cause sharing
violations under Windows on Remove which manifest themselves
frequently as test failures.
This change lets the Remove be retried on a sharing violation under
Windows.
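A rough sketch of the retry; the sharing-violation check and the retry count are assumptions, not the exact implementation:
```go
package mountexample

import (
	"os"
	"time"
)

// removeWithRetries retries os.Remove while the error looks like a Windows
// sharing violation, giving an asynchronous Close time to release the file.
func removeWithRetries(path string, isSharingViolation func(error) bool) error {
	var err error
	for try := 0; try < 10; try++ {
		err = os.Remove(path)
		if err == nil || !isSharingViolation(err) {
			return err
		}
		// An asynchronous close may still hold the file open - wait and retry.
		time.Sleep(100 * time.Millisecond)
	}
	return err
}
```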
Unfortunately multipart upload can't upload zero length files so
bring back the single part upload for zero length files only.
This was broken when we made all uploads multipart uploads.
The sftp library delivers the attributes of the symlink rather than
the object pointed to in directory listings, however when we use Stat
from the library it returns the attributes of the object pointed to.
Previous to this fix this caused items pointed to by symlinks to be
unusable.
After the fix both symlinked files and directories work as expected.
From testing it appears that CEPH no longer works properly with v2
auth and neither does Dreamhost, so update the docs and configuration
to recommend v4 auth.
This is a problem when syncing a file which just needs its modtime
set with dropbox, which can't set the mod time of a file without
re-uploading it.
Before this change we would delete the file, then the server side move
would fail moving the file to the backup-dir because it no longer
existed.
After this change the destination file is moved to the backup-dir
instead of being deleted and the new file is uploaded.
Fixes #2134
* All remotes now support RangeOption so remove SeekOption
* Correct off-by-one error as RangeOption arguments are inclusive.
* Use RangeSeek in preference to Seek if available
In a typical rclone copy to a bucket/container based remote, before
this change we were doing a list, followed by a HEAD of the bucket to
check it existed before doing the copy. The fact the list succeeded
means the bucket exists so mark it OK at that point.
Issue #1421
Before this change `rclone move localdir /mnt/different-fs` would
error. Now it falls back to moving individual files, which in turn
falls back to copying individual files across the filesystem boundary.
Because of a bug in the Onedrive API it will sometimes report the wrong
size. If the size is wrong, other remotes that depend on the size might
fail. To fix this we overwrite the object's size with the real size
from the Content-Length header.
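A minimal sketch of the idea: prefer the Content-Length of the download over the size the API reported; the names are illustrative:
```go
package onedriveexample

import "net/http"

// realSize returns the size to use for an object, preferring the
// Content-Length of the open response when the API-reported size differs.
func realSize(listedSize int64, resp *http.Response) int64 {
	if resp.ContentLength >= 0 && resp.ContentLength != listedSize {
		return resp.ContentLength
	}
	return listedSize
}
```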
This was caused by inconsistent escaping of the URL in the prefix
check, so check that the URL links back to the correct host and scheme
instead of relying on the prefix check.
The decoded path check will catch any URLs which are outside of the
root.
This removes the old system of part accounting and replaces it with a
system of popping off the accounting reader and wrapping up new ones
as necessary.
This makes it much easier to carry the context down the chain of
wrapped readers and get the limiting as near as possible to the
output. This makes the accounting more accurate and the bandwidth
limiting smoother.
Fixes #2029 and Fixes #1443
This fixes uploads to existing files for Google Drive introduced by #2007.
Instead of updating the old file a new "Untitled" file would be created
in the root folder.
A Range request can never request 0 bytes, however this change was made
to give a clearer signal that the limit means read to the end.
Add tests and more documentation and fix up uses.
It is unnecessary to notify the node.Parents, because a change event is
generated for all involved files and folders in a move from d1/f1 to
d2/f1. There will be an event for d1, d2 and f1.
Additionally a duplicate notification is resolved when the empty string
is in pathsToClear.
Related to #2006
The purpose of this is to make it easier to maintain and eventually to
allow the rclone backends to be re-used in other projects without
having to use the rclone configuration system.
The new code layout is documented in CONTRIBUTING.