During the sync we collect a list of directories which should be empty
and attempt to rmdir them at the end of the sync. If the directories
are not empty then the rmdir will fail, logging a message but not
causing the sync to fail.
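A minimal sketch of the pattern, assuming a hypothetical removeEmptyDirs
helper rather than rclone's actual code:

```go
package main

import (
	"log"
	"os"
)

// removeEmptyDirs tries to rmdir each candidate directory collected during
// the sync. A non-empty directory makes os.Remove fail; the error is only
// logged and does not fail the sync. (Illustrative sketch, not rclone code.)
func removeEmptyDirs(candidates []string) {
	for _, dir := range candidates {
		if err := os.Remove(dir); err != nil {
			log.Printf("Failed to rmdir %q: %v", dir, err)
			continue
		}
		log.Printf("Removed empty directory %q", dir)
	}
}

func main() {
	// Hypothetical paths believed to be empty after the sync.
	removeEmptyDirs([]string{"/tmp/sync/old-a", "/tmp/sync/old-b"})
}
```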
* Fixup bitrot (rclone and Azure library)
* Implement Copy
* Add modtime to metadata under mtime key as RFC3339Nano
* Make multipart upload work
* Make it pass the integration tests
* Fix uploading of zero length blobs
* Rename to azureblob as it seems likely we will do azurefile
* Add docs
Add new package qingstor to support QingStor API.
Add new unit tests for it and run them through; some test cases are
commented out because of QingStor-specific features.
Add new docs for it.
This is useful if there are duplicates. Assuming the remote delivers
the entries in a consistent order, this will give the best user
experience in syncing as it will consistently use the first entry for
the sync comparison.
This simplifies the implementation of remotes. The only required
interface is now `List` which is a simple one level directory list.
Optionally remotes may implement `ListR` if they have an efficient way
of doing a recursive list.
The ListR interface will be implemented by remotes that can do a
recursive directory listing more efficiently than just recursing
through the directories. These include the bucket based remotes.
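A rough sketch of what such a split could look like; the type and function
names below are illustrative, not rclone's real fs package API:

```go
package main

import "fmt"

// Entry stands in for a single directory entry (object or directory).
type Entry struct {
	Path  string
	IsDir bool
}

// Lister is the only required behaviour: a one-level directory listing.
type Lister interface {
	List(dir string) ([]Entry, error)
}

// RecursiveLister is the optional fast path: remotes (typically bucket
// based) that can list a whole subtree efficiently implement ListR too.
type RecursiveLister interface {
	Lister
	ListR(dir string) ([]Entry, error)
}

// listAll prefers ListR when available and otherwise recurses using List.
func listAll(f Lister, dir string) ([]Entry, error) {
	if r, ok := f.(RecursiveLister); ok {
		return r.ListR(dir)
	}
	entries, err := f.List(dir)
	if err != nil {
		return nil, err
	}
	all := append([]Entry(nil), entries...)
	for _, e := range entries {
		if e.IsDir {
			sub, err := listAll(f, e.Path)
			if err != nil {
				return nil, err
			}
			all = append(all, sub...)
		}
	}
	return all, nil
}

// memFs is a toy Lister used only to show the fallback path.
type memFs map[string][]Entry

func (m memFs) List(dir string) ([]Entry, error) { return m[dir], nil }

func main() {
	fs := memFs{"": {{Path: "a", IsDir: true}, {Path: "b.txt"}}, "a": {{Path: "a/c.txt"}}}
	entries, _ := listAll(fs, "")
	fmt.Println(entries)
}
```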
This is a fix left over from the v2 conversion. Dropbox ignores the
client modification on an incoming file if it was identical to the
existing file. This change deletes the existing file first before
re-uploading the new one.
If set in the config file, these override the ones configured into the
remote. This enables alternative oauth servers to be used for all
oauth remotes. This can only be altered by editing the config file
for the moment.
* fully write new config file before moving to target location (fixes #1287)
* do not fail if there is no previous config; print temporary config path on failure
* Add options to Put, PutUnchecked and Update for all Fses
* Use these to create HashOption
* Implement this in local
* Pass the option in fs.Copy
This has the effect that we only calculate the hashes we need in the
local Fs, which speeds up transfers significantly.
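A hedged sketch of the idea, using a hypothetical Option/HashOption pair so
the backend only computes the hash the caller asked for; these are not
rclone's real signatures:

```go
package main

import (
	"crypto/md5"
	"crypto/sha1"
	"fmt"
	"hash"
	"io"
	"strings"
)

// Option is a marker for per-call options passed to Put/Update; the real
// rclone option types are richer, this is only a sketch.
type Option interface{ optionName() string }

// HashOption asks the backend to produce only the named hash.
type HashOption struct{ Type string }

func (h HashOption) optionName() string { return "hash" }

// put copies data and computes only the hash requested via HashOption,
// instead of unconditionally computing every supported hash.
func put(dst io.Writer, src io.Reader, options ...Option) (string, error) {
	var hasher hash.Hash
	for _, opt := range options {
		if h, ok := opt.(HashOption); ok {
			switch h.Type {
			case "md5":
				hasher = md5.New()
			case "sha1":
				hasher = sha1.New()
			}
		}
	}
	if hasher == nil {
		_, err := io.Copy(dst, src)
		return "", err
	}
	if _, err := io.Copy(io.MultiWriter(dst, hasher), src); err != nil {
		return "", err
	}
	return fmt.Sprintf("%x", hasher.Sum(nil)), nil
}

func main() {
	var dst strings.Builder
	sum, err := put(&dst, strings.NewReader("hello"), HashOption{Type: "md5"})
	if err != nil {
		panic(err)
	}
	fmt.Println(sum) // md5 of "hello"
}
```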
* add support to hashing module
* add dbhashsum to list the hashes
* add support to dropbox module
This means objects uploaded to and downloaded from Dropbox will have their
hashes checked.
Note that after this change local objects calculate MD5, SHA1 and
DBHASH, which is excessive and needs to be fixed.
This makes rclone with encrypted config better suited for use in
pipelines. E.g.:
$ rclone lsl mydrive:Some/Dir | sort -k 4
If the password prompt ("Enter configuration password") is printed to
stdout, it will be swallowed by sort. By printing it to stderr, you
still see the prompt, without sacrificing compatibility with the unix
pipeline.
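A minimal illustration of the approach, assuming a hypothetical askPassword
helper: the prompt goes to stderr so only real output reaches the pipe:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// askPassword writes the prompt to stderr rather than stdout, so piping the
// program's output (e.g. into sort) does not swallow the prompt.
// Illustrative only; the real prompt also disables terminal echo.
func askPassword(prompt string) (string, error) {
	fmt.Fprint(os.Stderr, prompt)
	line, err := bufio.NewReader(os.Stdin).ReadString('\n')
	if err != nil {
		return "", err
	}
	return strings.TrimRight(line, "\r\n"), nil
}

func main() {
	pw, err := askPassword("Enter configuration password: ")
	if err != nil {
		fmt.Fprintln(os.Stderr, "failed to read password:", err)
		os.Exit(1)
	}
	// Normal output still goes to stdout and survives the pipeline.
	fmt.Println("password length:", len(pw))
}
```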
What was happening is that when Move was implemented as Copy + Delete,
MoveFile saw that the files didn't need transferring (because they
were identical) and then deleted the source.
The fix uses Move instead and patches onedrive to disallow a
case-folded identical copy (which fails with a 500 error).
* -vv or --log-level DEBUG
* -v or --log-level INFO
* --log-level NOTICE (default)
* -q or --log-level ERROR
Replace Config.Verbose and Config.Quiet with Config.LogLevel
Fixes #739
Fixes #1108
Fixes #1000
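A sketch of what a single LogLevel setting could look like in place of
separate Verbose/Quiet booleans; names are illustrative, not rclone's
actual Config:

```go
package main

import "fmt"

// LogLevel replaces the old pair of Verbose/Quiet booleans with one value.
type LogLevel int

const (
	LogLevelError  LogLevel = iota // -q
	LogLevelNotice                 // default
	LogLevelInfo                   // -v
	LogLevelDebug                  // -vv
)

// Config is a stand-in for the real config struct.
type Config struct {
	LogLevel LogLevel
}

// logf prints only messages at or below the configured level.
func (c Config) logf(level LogLevel, format string, args ...interface{}) {
	if level <= c.LogLevel {
		fmt.Printf(format+"\n", args...)
	}
}

func main() {
	c := Config{LogLevel: LogLevelInfo}             // as if -v had been given
	c.logf(LogLevelInfo, "transferred %d files", 3) // printed
	c.logf(LogLevelDebug, "low level detail")       // suppressed
}
```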
Multiple directories (up to --checkers worth) are scanned at once.
This uses much less memory than the previous scheme - only the amount
of memory needed to hold an entire directory listing of objects.
For directory based remotes the speed is unchanged.
For bucket based remotes, instead of doing one API call to list the
whole bucket, it does multiple calls, one for each pseudo directory.
However these are done in parallel so in practice this seems to speed
up directory listings.
This replaces the existing sync method as it performs faster and uses
less memory.
The old sync method is available with the temporary --old-sync-method
flag.
Fixes #517
Fixes #439
Fixes #236
Fixes #1067
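A hedged sketch of bounded-concurrency directory scanning (the limit
standing in for --checkers); only a handful of directory listings are in
memory at once. Names are illustrative:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"sync"
)

// scan lists directories with at most `checkers` listings in flight, so the
// memory in use is bounded by a few directory listings rather than a list of
// the whole tree. Note visit may be called from several goroutines at once.
func scan(root string, checkers int, visit func(path string)) {
	var wg sync.WaitGroup
	sem := make(chan struct{}, checkers) // limits concurrent listings

	var walk func(dir string)
	walk = func(dir string) {
		defer wg.Done()
		sem <- struct{}{} // acquire a checker slot
		entries, err := os.ReadDir(dir)
		<-sem // release the slot before recursing
		if err != nil {
			fmt.Fprintln(os.Stderr, "list failed:", err)
			return
		}
		for _, e := range entries {
			p := filepath.Join(dir, e.Name())
			visit(p)
			if e.IsDir() {
				wg.Add(1)
				go walk(p)
			}
		}
	}

	wg.Add(1)
	go walk(root)
	wg.Wait()
}

func main() {
	scan(".", 8, func(path string) { fmt.Println(path) })
}
```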
Optional interfaces are becoming more important in rclone,
--track-renames and --backup-dir both rely on them.
Up to this point rclone has used interface upgrades to define optional
behaviour on Fs objects. However when one Fs object wraps another it
is very difficult for this scheme to work accurately. rclone has
relied on specific error messages being returned when the interface
isn't supported - this is unsatisfactory because it means you have to
call the interface to see whether it is supported.
This change enables accurate detection of optional interfaces by use
of a Features struct as returned by an obligatory Fs.Features()
method. The Features struct contains flags and function pointers
which can be tested against nil to see whether they can be used.
As a result crypt and hubic can accurately reflect the capabilities of
the underlying Fs they are wrapping.
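A simplified sketch of the nil-checkable function pointer idea; the real
fs.Features struct has many more fields, so treat the names below as
illustrative:

```go
package main

import (
	"errors"
	"fmt"
)

// Object and Fs are stand-ins for rclone's real types.
type Object struct{ Name string }

type Fs interface {
	Features() *Features
}

// Features exposes optional behaviour as flags and function pointers.
// A nil function pointer means "not supported" - no interface upgrade or
// error-message sniffing is needed to find out.
type Features struct {
	CaseInsensitive bool
	Copy            func(src Object, remote string) (Object, error)
	Move            func(src Object, remote string) (Object, error)
}

// moveOrCopy prefers server-side Move, falls back to Copy, and reports
// clearly when neither is available.
func moveOrCopy(f Fs, src Object, remote string) (Object, error) {
	feat := f.Features()
	switch {
	case feat.Move != nil:
		return feat.Move(src, remote)
	case feat.Copy != nil:
		return feat.Copy(src, remote)
	default:
		return Object{}, errors.New("server-side move/copy not supported")
	}
}

// wrappedFs shows how a wrapping remote (like crypt) can pass through only
// the capabilities its underlying Fs actually has.
type wrappedFs struct{ base Fs }

func (w wrappedFs) Features() *Features {
	baseFeat := w.base.Features()
	feat := &Features{CaseInsensitive: baseFeat.CaseInsensitive}
	if baseFeat.Copy != nil {
		feat.Copy = baseFeat.Copy // only claim Copy if the base supports it
	}
	return feat
}

type plainFs struct{}

func (plainFs) Features() *Features {
	return &Features{
		Copy: func(src Object, remote string) (Object, error) {
			return Object{Name: remote}, nil
		},
	}
}

func main() {
	dst, err := moveOrCopy(wrappedFs{base: plainFs{}}, Object{Name: "a.txt"}, "b.txt")
	fmt.Println(dst, err) // Copy is used since Move is nil
}
```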
This changes `ListFn`'s implementation so that if it encounters a not
found error, instead of sending a fatal error to log, it coordinates the
return of the error between checker goroutines and sends it back to the
caller.
The main impetus here is that it allows an external program compiling
against rclone as a package to handle a not found error, which it
currently cannot.
This does change the error output on a not found slightly. We go from
this:
2017/01/09 21:14:03 directory not found
To this:
2017/01/09 21:13:44 Failed to ls: directory not found
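A hedged sketch of the coordination: workers record the first error instead
of logging fatally, and the caller gets it back. Structure and names are
illustrative only:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

var errDirNotFound = errors.New("directory not found")

// listAll fans work out to several checker goroutines and, instead of
// logging fatally inside a worker, records the first error and returns it
// so a caller (e.g. a program embedding this as a package) can handle it.
func listAll(dirs []string, list func(string) error) error {
	var (
		wg       sync.WaitGroup
		mu       sync.Mutex
		firstErr error
	)
	for _, dir := range dirs {
		wg.Add(1)
		go func(dir string) {
			defer wg.Done()
			if err := list(dir); err != nil {
				mu.Lock()
				if firstErr == nil {
					firstErr = err
				}
				mu.Unlock()
			}
		}(dir)
	}
	wg.Wait()
	return firstErr
}

func main() {
	err := listAll([]string{"present", "missing"}, func(dir string) error {
		if dir == "missing" {
			return errDirNotFound
		}
		return nil
	})
	if err != nil {
		fmt.Println("Failed to ls:", err) // caller decides how to report it
	}
}
```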
- Only start the token ticker when the timetable has more than one
entry.
- This fixes the "Scheduled bandwidth change" log message when no
bwlimit is specified.
- Fixes #987
These are set in the form RCLONE_CONFIG_remote_option where remote is
the uppercased remote name and option is the uppercased config file
option name. Note that RCLONE_CONFIG_remote_TYPE must be set if
defining a new remote.
Fixes #616
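A rough sketch of the lookup, assuming the variable name is built as
RCLONE_CONFIG_<REMOTE>_<OPTION> with both parts uppercased; illustrative
only:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// configValue returns the environment override for a remote's option if
// one is set, falling back to the value from the config file otherwise.
func configValue(remote, option, fromFile string) string {
	key := "RCLONE_CONFIG_" + strings.ToUpper(remote) + "_" + strings.ToUpper(option)
	if v, ok := os.LookupEnv(key); ok {
		return v
	}
	return fromFile
}

func main() {
	// e.g. RCLONE_CONFIG_MYS3_TYPE=s3 defines the type of a remote "mys3"
	// entirely from the environment (hypothetical remote name).
	os.Setenv("RCLONE_CONFIG_MYS3_TYPE", "s3")
	fmt.Println(configValue("mys3", "type", "")) // prints "s3"
}
```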
- Change the --bwlimit command line parameter to accept either a limit (as
before) or a full timetable (formatted as "hh:mm,limit
hh:mm,limit...")
- The timetable is checked once a minute by a ticker function. A new
tokenBucket is created every time a bandwidth change is necessary.
- This change is compatible with the SIGUSR2 change to toggle bandwidth
limits.
This resolves #221.
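A hedged sketch of picking the active limit from an "hh:mm,limit" timetable
at a given time of day; rclone's real parsing and token bucket swapping are
more involved, and the names (and the KB/s unit) here are illustrative:

```go
package main

import (
	"fmt"
	"sort"
	"strconv"
	"strings"
	"time"
)

// entry is one "hh:mm,limit" element of the timetable; the limit is taken
// to be a plain number (KB/s) purely for illustration.
type entry struct {
	minuteOfDay int
	limit       int
}

// parseTimetable turns "hh:mm,limit hh:mm,limit ..." into sorted entries.
func parseTimetable(s string) ([]entry, error) {
	var out []entry
	for _, tok := range strings.Fields(s) {
		parts := strings.Split(tok, ",")
		if len(parts) != 2 {
			return nil, fmt.Errorf("bad timetable entry %q", tok)
		}
		var hh, mm int
		if _, err := fmt.Sscanf(parts[0], "%d:%d", &hh, &mm); err != nil {
			return nil, err
		}
		limit, err := strconv.Atoi(parts[1])
		if err != nil {
			return nil, err
		}
		out = append(out, entry{minuteOfDay: hh*60 + mm, limit: limit})
	}
	sort.Slice(out, func(i, j int) bool { return out[i].minuteOfDay < out[j].minuteOfDay })
	return out, nil
}

// limitAt returns the limit in force at time t: the most recent entry not
// after t, wrapping to the last entry of the previous day if needed.
func limitAt(tt []entry, t time.Time) int {
	now := t.Hour()*60 + t.Minute()
	active := tt[len(tt)-1] // wrap around from yesterday by default
	for _, e := range tt {
		if e.minuteOfDay <= now {
			active = e
		}
	}
	return active.limit
}

func main() {
	tt, err := parseTimetable("08:00,512 19:00,10240")
	if err != nil {
		panic(err)
	}
	noon := time.Date(2017, 5, 1, 12, 0, 0, 0, time.UTC)
	fmt.Println(limitAt(tt, noon)) // 512 - the daytime limit in this example
}
```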
This commit adds support for tracking file renames if the `track-renames`
flag is set; it then performs server-side renames for remotes that support
it, i.e. remotes that implement either the `Mover` or the `Copier` interface.
Fixes #888
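A simplified sketch of the matching idea behind rename tracking: pair up
files that share size and hash but have different names on the two sides,
so they can be moved server side instead of deleted and re-uploaded.
Purely illustrative:

```go
package main

import "fmt"

// file is a stand-in for an object with enough identity to match renames.
type file struct {
	remote string // path within the remote
	size   int64
	hash   string
}

// findRenames pairs files that would otherwise be deleted with files that
// would otherwise be uploaded, matching on (size, hash), so they can be
// handled as server-side moves instead.
func findRenames(deleted, added []file) map[string]string {
	bySig := make(map[string]string) // "size-hash" -> old remote path
	for _, f := range deleted {
		bySig[fmt.Sprintf("%d-%s", f.size, f.hash)] = f.remote
	}
	renames := make(map[string]string) // old path -> new path
	for _, f := range added {
		if old, ok := bySig[fmt.Sprintf("%d-%s", f.size, f.hash)]; ok {
			renames[old] = f.remote
		}
	}
	return renames
}

func main() {
	deleted := []file{{remote: "photos/img1.jpg", size: 1234, hash: "abc"}}
	added := []file{{remote: "holiday/img1.jpg", size: 1234, hash: "abc"}}
	// Each pair is a candidate for a server-side Move rather than re-upload.
	fmt.Println(findRenames(deleted, added))
}
```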