The Go team made the decision to drop support for 32-bit macOS as
32-bit apps are no longer supported by macOS and Apple hasn't produced
32-bit hardware for over 10 years.
1. adds SharedOptions data structure to oauthutil
2. adds config.ConfigToken option to oauthutil.SharedOptions
3. updates the backends that have oauth functionality
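A rough sketch of what this looks like - illustrative only, the real
definition in lib/oauthutil may carry more fields and help text:

```go
package oauthutil

import (
	"github.com/rclone/rclone/fs"
	"github.com/rclone/rclone/fs/config"
)

// SharedOptions is spliced into each OAuth backend's option list so they
// all advertise the same token option. Sketch only - not the exact code.
var SharedOptions = []fs.Option{{
	Name: config.ConfigToken,
	Help: "OAuth Access Token as a JSON blob.",
}}
```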
Fixes #2849
Before this fix, download threads would fill up the buffer and then
time out even though data was still being read from them. If the
client was streaming more slowly than the network could deliver, this
caused the downloader to stop and be restarted continuously. This
caused more potential for skips in the download and unnecessary
network transactions.
This patch fixes that behaviour - as long as a downloader is being
read from more often than once every 5 seconds, it won't time out.
This was done by:
- kicking the downloader whenever ensureDownloader is called
- making the downloader loop if it has already downloaded past the maxOffset
- making setRange() always kick the downloader
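A minimal sketch of the kick/idle-timeout mechanism, with illustrative
names (Kick, watch, maxDownloaderIdleTime) rather than the actual vfs
cache downloader code:

```go
package vfscache

import "time"

const maxDownloaderIdleTime = 5 * time.Second

type downloader struct {
	kick chan struct{} // buffered, capacity 1
}

// Kick notes that a reader is still interested in this downloader,
// without blocking if a kick is already pending.
func (dl *downloader) Kick() {
	select {
	case dl.kick <- struct{}{}:
	default:
	}
}

// watch stops the download only after nobody has kicked the downloader
// for maxDownloaderIdleTime, instead of when the buffer fills up.
func (dl *downloader) watch(stop func()) {
	timer := time.NewTimer(maxDownloaderIdleTime)
	defer timer.Stop()
	for {
		select {
		case <-dl.kick:
			// A reader is still active - reset the idle timeout.
			if !timer.Stop() {
				<-timer.C
			}
			timer.Reset(maxDownloaderIdleTime)
		case <-timer.C:
			stop() // nothing has read from us for 5 seconds
			return
		}
	}
}
```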
The deadlock was caused in transfermap.go by calling mu.RLock() in one
function and then calling it again in a function it calls. Normally
this is fine, but it leaves a window in which mu.Lock() can be called.
Once mu.Lock() is waiting, the second mu.RLock() is not allowed and the
result is a deadlock.
Thread 1                                Thread 2
String():mu.RLock()
                                        del():mu.Lock()
sortedSlice():mu.RLock() - DEADLOCK
Lesson learnt: don't try using locks recursively ever!
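A standalone illustration of the pattern (not the rclone code) - run it
and the Go runtime reports "all goroutines are asleep - deadlock!":

```go
package main

import (
	"sync"
	"time"
)

func main() {
	var mu sync.RWMutex

	go func() {
		time.Sleep(10 * time.Millisecond) // let the first RLock happen
		mu.Lock()                         // Thread 2: del() queues for the write lock
		mu.Unlock()
	}()

	mu.RLock()                        // Thread 1: String()
	time.Sleep(50 * time.Millisecond) // give the writer time to start waiting
	mu.RLock()                        // Thread 1: sortedSlice() - blocks behind the writer
	mu.RUnlock()
	mu.RUnlock()
}
```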
This patch fixes the problem by removing the second mu.RLock(). This
was done by factoring the code that was calling it into the
transfermap.go file so all the locking can be seen at once. That was
ultimately the cause of the problem - the code which used the locks
was too far away from the rest of the code using the lock.
This problem was introduced in:
bfa5715017 fs/accounting: sort transfers by start time
which hasn't been released in a stable version yet.
After uploading a multipart object, rclone deletes any unused parts.
Probably as part of the listing unification, the detection of the
parts belonging to the current upload was failing, so calling Update
was deleting the parts for the current object.
This change fixes the detection so that all the old parts are deleted
but none of the new ones.
Fixes #4075
Before this change rclone cached the looked-up root_folder_id in the
root_folder_id config variable.
This caused a lot of confusion and a few attempted workarounds, and
ultimately it was a mistake.
This change stops rclone from caching anything in root_folder_id and
returns that variable to being entirely user controlled.
It gives a little hint in the debug log that rclone could be sped up
slightly by setting it, but it is up to the user to think about
whether that would be OK or not.
Google drive root '': root_folder_id = "XXX" - save this in the config to speed up startup
It does not change root_folder_id itself, leaving this to the user.
See: https://forum.rclone.org/t/rclone-gdrive-no-longer-returning-anything/17215
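For example, with a hypothetical remote named `gdrive`, the hint can
be copied into rclone.conf by hand (the ID shown is a placeholder):

```
[gdrive]
type = drive
root_folder_id = XXX
```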
`rclone obscure` currently only accepts a command line argument of `password` to generate
an obfuscated password. This is an issue since generating obfuscated passwords programmatically
requires sending the plain-text password as a shell argument, which can cause problems if the
password contains shell characters or comes from an untrusted source.
This patch allows `rclone obscure` to read the password from STDIN, so developers can pipe
a password to it directly, which increases safety and convenience.
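For example, something like `cat password.txt | rclone obscure -` keeps the plain-text
password out of the shell's argument list and history (the `-` argument for selecting
STDIN is an assumption here - see the command's help for the exact form).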
- add a directory to the optional Purge interface
- fix up all the backends
- add an additional integration test to test for the feature
- use the new feature in operations.Purge
Many of the backends had been prepared in advance for this so the
change was trivial for them.
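Roughly, the optional interface ends up with a directory parameter - a
sketch, the real definition and doc comments in rclone may differ:

```go
package fs

import "context"

// Purger is the optional interface with the new directory parameter.
type Purger interface {
	// Purge deletes all the files and directories in dir.
	Purge(ctx context.Context, dir string) error
}
```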
If this option is enabled, rclone will not set the modtime of uploaded files and
the backend will return ModTimeNotSupported as its Precision.
Normally rclone updates the modification time of files after they are done
uploading. This can cause permissions issues on Linux platforms when
rclone is copying to a CIFS mount where the user rclone is running as
does not own the uploaded file. If this option is enabled, rclone will
no longer update the modtime after copying a file.
See: https://forum.rclone.org/t/chtimes-error-on-local-mounted-copy/17784
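A minimal sketch of the backend side of this, assuming a NoSetModTime
field on the backend options - the names are illustrative, not the
exact rclone code:

```go
package local

import (
	"time"

	"github.com/rclone/rclone/fs"
)

// Options holds the backend configuration (only the relevant, assumed
// field is shown).
type Options struct {
	NoSetModTime bool
}

// Fs represents the backend (only the relevant field is shown).
type Fs struct {
	opt Options
}

// Precision reports that modtimes are unsupported when the option is
// set, so rclone won't try (and fail) to update them after upload.
func (f *Fs) Precision() time.Duration {
	if f.opt.NoSetModTime {
		return fs.ModTimeNotSupported
	}
	return time.Nanosecond
}
```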
Due to Chrome's rather complicated use of file handles when saving
files from the download window, rclone was attempting to truncate a
closed file.
The file appeared closed due to the handling of 0 length files.
This patch removes the check for the file being closed in the
WriteFileHandle.Truncate call. This is safe because the only action
this method takes is to emit an error message if the file is the wrong
size.
See: https://forum.rclone.org/t/google-drive-cannot-save-files-directly-from-browser-to-gdrive-mounted-path/17992/
This is preparation for getting the Accounting to check the context,
but first we need to get it in place. Since this is one of those
changes that makes lots of noise, it is in a separate commit.
Before this change if the user supplied `-o uid=XXX` then rclone would
write `-o uid=-1 -o uid=XXX`, duplicating the uid value.
After this change rclone doesn't write the default `-1` version.
This fix affects `uid` and `gid`.
See: https://forum.rclone.org/t/issue-with-rclone-mount-and-resilio-sync/14730/27
This implements `rclone cleanup` to remove multipart uploads over 24
hours old. It also implements the backend commands
`list-multipart-uploads` to see which ones are available and `cleanup`
to delete them with a configurable expiry interval.
See #4302
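For example (the remote and bucket names here are placeholders, and the
`-o max-age=24h` option name is an assumption - check the backend help
for the exact form): `rclone backend list-multipart-uploads remote:bucket`
lists what is pending, and `rclone backend cleanup -o max-age=24h remote:bucket`
removes the old ones.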
Before this fix, if an object had ID set and download_url was in use,
downloading the object would give this error:
failed to open for download: bucket example_bucket does not have file: /b2api/v1/b2_download_file_by_id (404 not_found)
After this fix we only download by ID if download_url is not set.
See: https://forum.rclone.org/t/correct-format-for-rclone-b2-download-url-variable/15498
This patch enables rclone to be used as a library from within restic
- exposes NewServer
- exposes Server
- implements http.RoundTripper
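Implementing http.RoundTripper is what makes in-process use possible:
anything with a RoundTrip method can serve as an http.Client's
Transport, so restic can talk to the server without opening a socket.
A generic illustration of that pattern (not rclone's actual API):

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

// inProcess adapts an http.Handler into an http.RoundTripper, so
// requests never leave the process.
type inProcess struct{ h http.Handler }

func (p inProcess) RoundTrip(req *http.Request) (*http.Response, error) {
	rec := httptest.NewRecorder()
	p.h.ServeHTTP(rec, req)
	return rec.Result(), nil
}

func main() {
	h := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "served in-process")
	})
	client := &http.Client{Transport: inProcess{h}}
	resp, err := client.Get("http://rclone/anything")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```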
Co-authored-by: Jack Deng <jackdeng@gmail.com>
- Use cache to store package versions
- Update actions/setup-go to v2
- Add go1.15-rc1 build
- Make separate build step
- stop downloading code into special path
- leave adding ~/go/bin to PATH to actions/setup-go
- remove docker build from xgo as we are building rclone anyway
- remove modules setting since it is now always on
- use ./... instead of listing files in tests
go1.15 introduced a stricter policy for what you can convert with
`string()`, and `go vet` now warns if you try to do `string(int)`.
See: https://github.com/golang/go/issues/32479
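For illustration, the two usual replacements, depending on whether a
one-rune string or the decimal text was intended:

```go
package main

import (
	"fmt"
	"strconv"
)

func main() {
	n := 65
	// string(n) now triggers a go vet warning because it yields the
	// one-rune string "A", not the text "65".
	asRune := string(rune(n)) // explicitly a one-rune string: "A"
	asText := strconv.Itoa(n) // the decimal text: "65"
	fmt.Println(asRune, asText)
}
```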
When we run MKCOL on 4shared on a directory that already exists, it
returns a 409/Conflict error. However, this error code usually means
that the intermediate collections need creating.
The RFC doesn't specify which error code to return when trying to
create a directory that already exists, only that an error MUST be
returned, and there are already 3 statuses checked in the code.
However, using 409 makes rclone's usual strategy for making
directories fail and return the 409 error.
This patch tries the MKCOL and, if it returns an unrecognised error
code, calls PROPFIND on the directory to discover whether the
directory really exists or not.
This should also cover other WebDAV servers returning other error
messages we haven't accounted for in the code yet.
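A sketch of that fallback, with the HTTP calls abstracted as function
parameters and a placeholder status code - the real webdav backend code
and the exact set of already-recognised statuses differ:

```go
package webdav

import (
	"context"
	"fmt"
	"net/http"
)

// mkdirWithFallback sketches the approach: try MKCOL, treat the statuses
// we already recognise as "directory exists", and on anything else ask
// the server with PROPFIND before returning the error.
func mkdirWithFallback(
	ctx context.Context,
	mkcol func(context.Context) (status int, err error),
	dirExists func(context.Context) (bool, error), // PROPFIND Depth: 0
) error {
	status, err := mkcol(ctx)
	if err == nil {
		return nil
	}
	switch status {
	case http.StatusMethodNotAllowed: // placeholder for the statuses already handled
		return nil
	}
	// Unrecognised status (e.g. 4shared's 409 Conflict) - check whether
	// the directory is actually there before failing.
	if exists, perr := dirExists(ctx); perr == nil && exists {
		return nil
	}
	return fmt.Errorf("MKCOL failed with status %d: %w", status, err)
}
```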
Before this change when reading directories we would use the directory
handle and the Readdir(-1) call on the directory handle. This worked
fine for the first read, but if the directory was read again on the
same handle, Readdir(-1) returned nothing (as per its design).
It turns out that macOS leaves the directory handle open and just
re-reads the data from it, so this problem caused directories to start
out full and then appear empty on subsequent reads.
macOS/OSXFUSE is passing an offset of 0 to the Readdir call telling
rclone to seek in the directory, but we've told FUSE that we can't
seek by always returning ofst=0 in the fill function.
This fix works around the problem by reading the directory from the
path each time, ignoring the actual handle. This should be no less
efficient.
We will return an ESPIPE if the offset is ever non-zero.
There are possible corner cases reading deleted directories which this
ignores.