* Make move command check for overlapping remotes and refuse to run
* Do the copy and delete per file, rather than all the copies followed by all the deletes (see the sketch below)
* Don't purge the source - this was unexpected behaviour, see #512 and #416
* Add -list-retries flag to test suite to control retries
This changes the semantics of `move` slightly. However, it now errs on
the side of not deleting stuff.
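A minimal sketch of that per-file copy-then-delete loop; `object`, `copyTo` and `remove` are hypothetical stand-ins, not rclone's actual API:

```go
package main

import "fmt"

// object is a hypothetical stand-in for a file on a remote.
type object struct{ name string }

func copyTo(dst string, o object) error { fmt.Println("copy", o.name, "->", dst); return nil }
func remove(o object) error             { fmt.Println("delete", o.name); return nil }

// moveFiles copies each file and deletes its source immediately, so a
// failure part way through leaves every remaining file intact at the
// source rather than already copied and awaiting deletion.
func moveFiles(dst string, srcs []object) error {
	for _, o := range srcs {
		if err := copyTo(dst, o); err != nil {
			return err
		}
		if err := remove(o); err != nil {
			return err
		}
	}
	return nil // note: the source directory itself is not purged
}

func main() {
	_ = moveFiles("remote:dst", []object{{"a.txt"}, {"b.txt"}})
}
```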
* Factor sync/copy/move into its own file
* Make fatal errors abort the sync
* Make Copy return errors
* Make Sync/Copy/Move return the last Copy error if there was one
* Prioritise returning Fatal errors
* NoRetry errors are returned only if no other types of error occurred
These are for remotes to signal that they have a fatal error and don't
want to continue (eg cap exceeded) or that a particular file shouldn't
be retried for some reason.
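As a rough illustration of the error priorities just described, here is a sketch using hypothetical `FatalError` and `NoRetryError` wrapper types (not rclone's exact definitions):

```go
package main

import (
	"errors"
	"fmt"
)

type FatalError struct{ Err error }   // the remote wants the whole sync aborted
type NoRetryError struct{ Err error } // this file should not be retried

func (e FatalError) Error() string   { return "fatal: " + e.Err.Error() }
func (e NoRetryError) Error() string { return "no retry: " + e.Err.Error() }

// worst picks the error to return: Fatal beats ordinary errors, which
// beat NoRetry, which beats nil.
func worst(errs []error) error {
	var ordinary, noRetry error
	for _, err := range errs {
		var f FatalError
		var n NoRetryError
		switch {
		case err == nil:
		case errors.As(err, &f):
			return err // fatal wins immediately
		case errors.As(err, &n):
			noRetry = err
		default:
			ordinary = err
		}
	}
	if ordinary != nil {
		return ordinary
	}
	return noRetry
}

func main() {
	fmt.Println(worst([]error{
		NoRetryError{errors.New("file too large for remote")},
		errors.New("transient network error"),
	}))
}
```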
Refactor sync/copy/move
* Don't load the src listing unless doing a sync and --delete-before
* Don't load the dst listing if doing copy/move and --no-traverse is set
`rclone --no-traverse copy src dst` now won't load either of the
listings into memory so will use the minimum amount of memory.
This change also dramatically reduces the amount of memory rclone uses
in normal operations (copy without `--no-traverse`, or sync), as it no
longer loads the source file listing into memory at all.
Fixes #8 Fixes #544 Fixes #546
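A toy model of which listings each operation needs to hold in memory under the rules above; the function and flag plumbing here are illustrative, not rclone's actual code:

```go
package main

import "fmt"

func needListings(op string, deleteBefore, noTraverse bool) (src, dst bool) {
	// The full source listing is only needed to work out deletions
	// before any transfers start.
	src = op == "sync" && deleteBefore
	// Copy/move with --no-traverse look up each destination file
	// individually, so no destination listing is needed either.
	dst = op == "sync" || !noTraverse
	return src, dst
}

func main() {
	src, dst := needListings("copy", false, true)
	fmt.Println(src, dst) // false false - minimal memory use
}
```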
If remote:path points to a file make NewFs return a sentinel error
fs.ErrorIsFile and an Fs which points to the parent.
Use this to remove the LimitedFs and just add this file to the
--files-from list.
This means that server-side operations can be used as well.
Fixes #518 Fixes #545
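A sketch of the caller-side pattern this enables, with a hypothetical `newFs` and a local `errIsFile` standing in for `fs.ErrorIsFile`:

```go
package main

import (
	"errors"
	"fmt"
	"path"
)

// errIsFile plays the role of the fs.ErrorIsFile sentinel; newFs is a
// stand-in that treats anything with an extension as a file.
var errIsFile = errors.New("is a file not a directory")

func newFs(remote string) (root string, err error) {
	if path.Ext(remote) != "" {
		return path.Dir(remote), errIsFile // Fs points at the parent
	}
	return remote, nil
}

func main() {
	root, err := newFs("remote:path/file.txt")
	var filesFrom []string
	if errors.Is(err, errIsFile) {
		// Restrict the operation to just this file; because the Fs
		// is a real directory, server side operations still work.
		filesFrom = append(filesFrom, "file.txt")
	} else if err != nil {
		panic(err)
	}
	fmt.Println(root, filesFrom) // remote:path [file.txt]
}
```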
This should fix duplicate files on drive and 409 errors on
amazonclouddrive. However, it will slow down uploads slightly, as
another roundtrip will be needed.
None of the other Fses needed adjusting.
Fixes #483
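The extra roundtrip amounts to a check-then-write, sketched here against a toy store; the types are illustrative, not any backend's real API:

```go
package main

import "fmt"

// store is a toy remote keyed by name; put models the extra roundtrip:
// look the name up first, then update in place or create, so the same
// name is never created twice.
type store map[string]string

func (s store) put(name string) {
	if id, ok := s[name]; ok {
		fmt.Println("update existing object", id) // no duplicate, no 409
		return
	}
	s[name] = fmt.Sprintf("id-%d", len(s)+1) // create new object
	fmt.Println("created", s[name])
}

func main() {
	s := store{"a.txt": "id-1"}
	s.put("a.txt") // update existing object id-1
	s.put("b.txt") // created id-2
}
```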
Gives more accurate error propagation, control over the depth of
recursion, and short-circuiting of recursion where possible.
Most of the heavy lifting is done in the "fs" package, making file
system implementations a bit simpler.
This commit contains some code originally by Klaus Post.
Fixes #316
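An illustrative sketch of a lister in this spirit, with explicit depth control and error short-circuiting (not the actual "fs" package code):

```go
package main

import "fmt"

type dir struct {
	name string
	subs []dir
}

// walk is a toy lister with explicit depth control: it stops recursing
// at maxDepth and short circuits as soon as the callback fails, so
// errors propagate to the caller unchanged.
func walk(d dir, maxDepth int, fn func(dir) error) error {
	if err := fn(d); err != nil {
		return err
	}
	if maxDepth == 0 {
		return nil // depth limit reached - don't recurse further
	}
	for _, sub := range d.subs {
		if err := walk(sub, maxDepth-1, fn); err != nil {
			return err // short circuit remaining siblings
		}
	}
	return nil
}

func main() {
	root := dir{"/", []dir{{"a", []dir{{"a/b", nil}}}, {"c", nil}}}
	_ = walk(root, 1, func(d dir) error { fmt.Println(d.name); return nil })
	// prints: / a c  (a/b is below the depth limit)
}
```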
* Now removes identical copies without asking
* Now obeys `--dry-run`
* Implement `--dedupe-mode` for non-interactive running (see the sketch below)
* `--dedupe-mode interactive` - interactive, the default.
* `--dedupe-mode skip` - removes identical files then skips anything left.
* `--dedupe-mode first` - removes identical files then keeps the first one.
* `--dedupe-mode newest` - removes identical files then keeps the newest one.
* `--dedupe-mode oldest` - removes identical files then keeps the oldest one.
* `--dedupe-mode rename` - removes identical files then renames the rest to be different.
* Add tests which will only run on Google Drive.
See #317 for details.
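A sketch of how the non-interactive modes might choose the survivor once identical copies are gone; the types are illustrative and the `rename` mode is omitted:

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

type file struct {
	name string
	mod  time.Time
}

// keep picks which of the remaining duplicates survives for the
// non-interactive modes.
func keep(mode string, dups []file) (file, bool) {
	switch mode {
	case "first":
		return dups[0], true
	case "newest":
		sort.Slice(dups, func(i, j int) bool { return dups[i].mod.After(dups[j].mod) })
		return dups[0], true
	case "oldest":
		sort.Slice(dups, func(i, j int) bool { return dups[i].mod.Before(dups[j].mod) })
		return dups[0], true
	}
	return file{}, false // "skip": leave the duplicates alone
}

func main() {
	now := time.Now()
	dups := []file{{"a (1)", now.Add(-time.Hour)}, {"a (2)", now}}
	f, ok := keep("newest", dups)
	fmt.Println(f.name, ok) // a (2) true
}
```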
Use `rclone config` to add/change/remove password.
Tests that load the default configuration will now fail with a better error message. Also add a switch that makes it possible to disable password prompts and fail instead.
Make it possible to use the `RCLONE_CONFIG_PASS` environment variable as the password for the configuration.
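A sketch of the implied lookup order, assuming a hypothetical `allowPrompt` switch standing in for the new test flag:

```go
package main

import (
	"errors"
	"fmt"
	"os"
)

// configPassword sketches the lookup order: the environment variable
// wins, otherwise prompt unless prompting is disabled (as the new test
// switch does), in which case fail with a clear error.
func configPassword(allowPrompt bool) (string, error) {
	if pass := os.Getenv("RCLONE_CONFIG_PASS"); pass != "" {
		return pass, nil
	}
	if !allowPrompt {
		return "", errors.New("config file is encrypted and password prompting is disabled")
	}
	// A real implementation would read the password from the terminal.
	return "", errors.New("prompting not implemented in this sketch")
}

func main() {
	_, err := configPassword(false)
	fmt.Println(err)
}
```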
* Make all integration tests start with an empty remote
* Add an -individual flag so this can be a different bucket/container/directory
* Fix up tests after changing the hashers
* Add sha1sum test
* Make directory checking in tests sleep more to fix acd inconsistencies
* Factor integration tests to make them more maintainable
* Ensure remote writes have an fstest.CheckItems() before use
* this fixes eventual consistency problems in later directory listings
* Call fs.Stats.ResetCounters() before every fs.Sync() (sketched below)
Note that the tests shouldn't be run concurrently as fs.Config is global state.
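The per-test pattern these bullets describe might look roughly like this, with stub helpers standing in for `fs.Stats.ResetCounters()`, `fs.Sync()` and `fstest.CheckItems()`:

```go
package main

import (
	"fmt"
	"reflect"
	"time"
)

// checkItems polls a listing until it matches, sleeping between tries
// to ride out eventually consistent remotes such as acd.
func checkItems(list func() []string, want []string) error {
	for try := 0; try < 5; try++ {
		if reflect.DeepEqual(list(), want) {
			return nil
		}
		time.Sleep(100 * time.Millisecond)
	}
	return fmt.Errorf("listing never matched %v", want)
}

func main() {
	remote := []string{}
	resetCounters := func() { /* stands in for fs.Stats.ResetCounters() */ }
	sync := func() { remote = []string{"a.txt"} } // stands in for fs.Sync

	resetCounters() // reset before every sync so stats are per-test
	sync()
	if err := checkItems(func() []string { return remote }, []string{"a.txt"}); err != nil {
		fmt.Println(err)
	}
}
```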
This means you can mix `--include` and `--include-from` with the
other filters (eg `--exclude`) but you must include all the files you
want in the include statement.
Fixes #280
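A toy model of these semantics, with first-match-wins rules and the default flipped to exclude once any include rule exists:

```go
package main

import (
	"fmt"
	"path"
)

// rule is a toy filter rule: the first rule whose glob matches decides,
// and the mere presence of an include rule makes "exclude" the default
// for everything unmatched.
type rule struct {
	include bool
	glob    string
}

func allowed(name string, rules []rule) bool {
	haveInclude := false
	for _, r := range rules {
		if r.include {
			haveInclude = true
			break
		}
	}
	for _, r := range rules {
		if ok, _ := path.Match(r.glob, name); ok {
			return r.include // first matching rule wins
		}
	}
	return !haveInclude // includes present => unmatched files excluded
}

func main() {
	rules := []rule{{false, "*.bak"}, {true, "*.jpg"}} // --exclude + --include
	fmt.Println(allowed("a.jpg", rules))     // true
	fmt.Println(allowed("notes.txt", rules)) // false - not in an include
}
```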
This will allow setting up a remote on a headless machine by copying and
pasting values, including pasting a token into the configuration.
Generating the token requires rclone to be run on a machine with a proper
browser. Custom client ids and secrets are supported.
To test token generation, use `rclone auth "fs type"`.