Commit graph

327 commits

albertony
2925e1384c Use binary prefixes for size and rate units
Includes adding support for the additional size input suffixes Mi and MiB, treated as equivalent to M.
Extends binary suffix output with the letter i, e.g. Ki and Mi.
Centralizes creation of bit/byte unit strings.
2021-04-27 02:25:52 +03:00
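
As an illustration of the suffix equivalence described above, here is a minimal, self-contained sketch of such parsing (the multiplier table and parseSize function are invented for this example and are not rclone's actual SizeSuffix code):

    package main

    import (
        "fmt"
        "strconv"
    )

    // parseSize treats "10M", "10Mi" and "10MiB" identically, per the
    // commit text; the suffix table is an assumption for illustration only.
    func parseSize(s string) (int64, error) {
        mult := map[string]int64{
            "": 1,
            "K": 1 << 10, "Ki": 1 << 10, "KiB": 1 << 10,
            "M": 1 << 20, "Mi": 1 << 20, "MiB": 1 << 20,
            "G": 1 << 30, "Gi": 1 << 30, "GiB": 1 << 30,
        }
        i := 0
        for i < len(s) && s[i] >= '0' && s[i] <= '9' {
            i++
        }
        n, err := strconv.ParseInt(s[:i], 10, 64)
        if err != nil {
            return 0, err
        }
        m, ok := mult[s[i:]]
        if !ok {
            return 0, fmt.Errorf("unknown size suffix %q", s[i:])
        }
        return n * m, nil
    }

    func main() {
        for _, s := range []string{"10M", "10Mi", "10MiB"} {
            n, _ := parseSize(s)
            fmt.Println(s, "=", n) // all three print 10485760
        }
    }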
albertony
f8d56bebaf
config: delay load config file (#5258)
Restructuring of config code in v1.55 resulted in the config
file being loaded early at process startup. If the configuration
file is encrypted this means the user will need to supply the password,
even when running commands that do not use the config.
This also led to an issue where mount with --daemon failed to
decrypt the config file when it had to prompt the user for a password.

Fixes #5236
Fixes #5228
2021-04-26 23:37:49 +02:00
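
A hypothetical sketch of the delayed-load pattern this commit describes, using sync.Once (the function names are invented, not rclone's actual config code):

    package main

    import (
        "fmt"
        "sync"
    )

    var loadOnce sync.Once

    // loadConfigIfNeeded parses (and, if needed, decrypts) the config file
    // only when something actually asks for it, so commands that never touch
    // the config never prompt for a password.
    func loadConfigIfNeeded() {
        loadOnce.Do(func() {
            fmt.Println("loading (and possibly decrypting) config file...")
        })
    }

    func main() {
        loadConfigIfNeeded() // first use triggers the load
        loadConfigIfNeeded() // subsequent calls are no-ops
    }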
buengese
a737ff21af uptobox: integration tests 2021-04-13 17:46:07 +02:00
albertony
23a0d4a1e6 config: fix issues with memory-only config file paths
Fixes #5222
2021-04-12 18:17:19 +02:00
Nick Craig-Wood
c1492cfa28 test: add sftp to rsync.net to integration tests 2021-04-12 15:52:31 +01:00
Nick Craig-Wood
20c5ca08fb test_all: fix crash when using -clean 2021-03-29 23:12:53 +01:00
Nick Craig-Wood
a12b2746b4 fs: make sure backends with additional config have a different name #4996
Backends for which additional config is detected (in the config string
or on the command line or as environment variables) will gain a suffix
`{XXXX}` where `XXXX` is a base64 encoded md5 hash of the config
string.

This fixes backend caching with config string remotes.

This much-requested feature now works properly:

    rclone copy -vv drive,shared_with_me:file.txt drive:
2021-03-15 19:22:07 +00:00
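
A rough sketch of deriving such a suffix from a hash of the extra config; the exact encoding and truncation rclone uses may differ, this is illustration only:

    package main

    import (
        "crypto/md5"
        "encoding/base64"
        "fmt"
    )

    // configSuffix derives a short suffix from a hash of the extra config so
    // that differently-configured instances of the same backend get different
    // cache keys. Truncating to 10 characters is an assumption made here.
    func configSuffix(configString string) string {
        sum := md5.Sum([]byte(configString))
        return "{" + base64.RawURLEncoding.EncodeToString(sum[:])[:10] + "}"
    }

    func main() {
        fmt.Println("drive" + configSuffix("shared_with_me=true"))
    }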
Nick Craig-Wood
8a46dd1b57 fspath: Implement a connection string parser #4996
This is implemented as a state machine parser so it can emit sensible
error messages.

It does not use the connection strings elsewhere in rclone yet - see
subsequent commits.

An optional fuzzer is implemented for the Parse function.
2021-03-15 19:22:07 +00:00
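
To illustrate the state-machine approach, here is a toy parser for strings of the form `remote,key=value:path` which reports the offset of the offending character; it is not rclone's actual fspath parser:

    package main

    import "fmt"

    // parseConnString is a toy state-machine parser for
    // "remote,key=value,key=value:path" strings, illustrating how such a
    // parser can point at the exact character that caused the failure.
    func parseConnString(s string) (name string, opts map[string]string, path string, err error) {
        opts = map[string]string{}
        const (
            stateName = iota
            stateKey
            stateValue
        )
        state := stateName
        var key, cur string
        for i := 0; i < len(s); i++ {
            c := s[i]
            switch state {
            case stateName:
                switch c {
                case ',':
                    name, cur, state = cur, "", stateKey
                case ':':
                    return cur, opts, s[i+1:], nil
                default:
                    cur += string(c)
                }
            case stateKey:
                switch c {
                case '=':
                    key, cur, state = cur, "", stateValue
                case ',', ':':
                    return "", nil, "", fmt.Errorf("char %d: expected '=' after option name %q", i, cur)
                default:
                    cur += string(c)
                }
            case stateValue:
                switch c {
                case ',':
                    opts[key], cur, state = cur, "", stateKey
                case ':':
                    opts[key] = cur
                    return name, opts, s[i+1:], nil
                default:
                    cur += string(c)
                }
            }
        }
        return "", nil, "", fmt.Errorf("char %d: unexpected end of string", len(s))
    }

    func main() {
        fmt.Println(parseConnString("drive,shared_with_me=true:file.txt"))
        fmt.Println(parseConnString("drive,shared_with_me=true")) // sensible error
    }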
Nick Craig-Wood
a7fd65bf2d config: move account initialisation out into accounting
Before this change the initialisation for the accounting package was
done in the config package for some strange historical reason.
2021-03-11 17:29:26 +00:00
Nick Craig-Wood
1fed2d910c config: make config file system pluggable
If you are using rclone as a library you can decide to use the rclone
config file system or not by calling

    configfile.LoadConfig(ctx)

If you don't you will need to set `config.Data` to an implementation
of `config.Storage`.

Other changes
- change interface of config.FileGet to remove unused default
- remove MustValue from config.Storage interface
- change GetValue to return string or bool like elsewhere in rclone
- implement a default config file system which panics with a helpful error
- implement getWithDefault to replace the removed MustValue
- don't embed goconfig.ConfigFile so we can change the methods
2021-03-11 17:29:26 +00:00
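
For library users who go the config.Data route, a sketch of what a purely in-memory storage might look like; the interface shown is a cut-down stand-in inferred from the commit text, not the full config.Storage interface:

    package main

    import "fmt"

    // storage is a stand-in for the kind of interface the commit describes;
    // rclone's real config.Storage has more methods than this sketch.
    type storage interface {
        GetValue(section, key string) (value string, found bool)
        SetValue(section, key, value string)
    }

    // memoryStorage keeps config purely in memory, e.g. for library users
    // who do not want rclone to read or write a config file at all.
    type memoryStorage struct {
        data map[string]map[string]string
    }

    func newMemoryStorage() *memoryStorage {
        return &memoryStorage{data: map[string]map[string]string{}}
    }

    func (m *memoryStorage) GetValue(section, key string) (string, bool) {
        v, ok := m.data[section][key]
        return v, ok
    }

    func (m *memoryStorage) SetValue(section, key, value string) {
        if m.data[section] == nil {
            m.data[section] = map[string]string{}
        }
        m.data[section][key] = value
    }

    func main() {
        var s storage = newMemoryStorage()
        s.SetValue("mydrive", "type", "drive")
        fmt.Println(s.GetValue("mydrive", "type")) // drive true
    }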
Maxwell Calman
9cc8ff4dd4 chunker: partially implement no-rename transactions (#4675)
Some storage providers e.g. S3 don't have an efficient rename operation.
Before this change, when chunker finished an upload, the server-side copy
and delete operations that renamed temporary chunks to their final names
could take a significant amount of time.
This PR records a transaction identifier (versioning) in the metadata of
chunker composite objects, striving to remove the need for rename
operations on such backends.
This approach is triggered by the new "transactions" configuration
option, which can be "rename" (the default) or "norename".
We implement the new approach for uploads (Put operations).
The chunker Move operation still uses the rename operation of
underlying backend. Filling this gap is left for a later PR.

Co-authored-by: Ivan Andreev <ivandeex@gmail.com>
2021-02-28 10:49:17 +00:00
Ivan Andreev
5834020316
docker.bash: work correctly with multi-ip containers (#5028)
Currently, if the container under test has multiple IP addresses,
the `docker_ip` function from `docker.sh` will return gibberish.
This patch makes it return the first address found.
Additionally, I apply shellcheck on `docker.sh`.
2021-02-17 03:38:02 +03:00
Nick Craig-Wood
41127965b0 fstest: add Onedrive Business and Onedrive China to the integration tests 2021-01-30 16:32:32 +00:00
Ivan Andreev
2bddba118e
fstest: apply shellcheck on rclone-serve.bash (#4975) 2021-01-29 19:07:17 +03:00
Yury Stankevich
b569dc11a0 hdfs: support kerberos authentication #42 2021-01-27 18:16:58 +00:00
Nick Craig-Wood
92b9dabf3c fstests: only test with ASCII uppercase for case insensitive tests
Upper/lower casing of non-ASCII characters is provider dependent, so we don't go
there in the tests.
2021-01-27 14:28:17 +00:00
Nick Craig-Wood
c8ab4f1d02 fstests: add a test for finding a directory in a case insensitive way #4830 2021-01-26 14:48:06 +00:00
Nick Craig-Wood
e776a1b122 fstests: don't run encoding tests on -short 2021-01-26 14:46:23 +00:00
buengese
45b57822d5 compress: improve testing 2021-01-18 21:42:58 +01:00
Nick Craig-Wood
feaacfd226 hdfs: correct username parameter in integration tests 2021-01-08 09:05:25 +00:00
Yury Stankevich
71edc75ca6 HDFS (Hadoop Distributed File System) implementation - #42
This includes an HDFS docker image to use with the integration tests.

Co-authored-by: Ivan Andreev <ivandeex@gmail.com>
Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
2021-01-07 09:48:51 +00:00
buengese
66c3f2f31f new backend: zoho workdrive - fixes #4533 2020-12-30 17:56:08 +00:00
Nick Craig-Wood
b81b6da3fc fstest: add test for reading object in a case insensitive way #4830 2020-12-07 17:38:22 +00:00
Nick Craig-Wood
56ad6aac4d test_all: remove duplicated config for filefabric backend 2020-12-07 17:38:22 +00:00
Nick Craig-Wood
bcbe393af3 sftp: implement Shutdown method 2020-11-27 17:35:01 +00:00
Nick Craig-Wood
dfadd98969 azureblob,memory,pcloud: fix setting of mime types
Before this change the backend was reading the mime type of the
destination object instead of the source object when uploading.

This change fixes the problem and introduces an integration test for
it.

See: https://forum.rclone.org/t/is-there-a-way-to-get-rclone-copy-to-preserve-metadata/20682/2
2020-11-27 14:40:05 +00:00
Nick Craig-Wood
2e21c58e6a fs: deglobalise the config #4685
This is done by making fs.Config private and attaching it to the
context instead.

The Config should be obtained with fs.GetConfig and fs.AddConfig
should be used to get a new mutable config that can be changed.
2020-11-26 16:40:12 +00:00
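
A hedged usage sketch for library callers, assuming fs.GetConfig/fs.AddConfig behave as described above (returning the context-scoped config; the Transfers field is used purely as an example):

    package main

    import (
        "context"
        "fmt"

        "github.com/rclone/rclone/fs"
    )

    func main() {
        ctx := context.Background()

        // Read-only view of the config attached to (or defaulted into) the context.
        ci := fs.GetConfig(ctx)
        fmt.Println("default transfers:", ci.Transfers)

        // AddConfig returns a new context carrying a mutable copy that can be
        // changed without affecting users of the original context.
        ctx, ci = fs.AddConfig(ctx)
        ci.Transfers = 8
        fmt.Println("transfers in new context:", fs.GetConfig(ctx).Transfers)
    }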
Nick Craig-Wood
979bb07c86 filefabric: Implement the Enterprise File Fabric backend
Missing features
- M-Stream support
- Oauth-like flow (soon being changed to oauth)
2020-11-25 21:11:29 +00:00
Nick Craig-Wood
7078311a84 compress: add integration tests 2020-11-23 18:02:22 +00:00
Nick Craig-Wood
45e8bea8d0 testserver: Make Test{FTP,SFTP,Webdav}Rclone run the current rclone
Before this change the tests were run against the previous stable
rclone/rclone docker image.

This unfortunately masked errors in the integration test server.

This change uses the currently installed rclone to run "rclone serve
ftp" etc. This is installed from the current code by the integration
test server, so it makes for a better test.
2020-11-10 18:01:15 +00:00
Nick Craig-Wood
1fb6ad700f accounting: add context.Context #3257 #4685 2020-11-09 18:05:54 +00:00
Nick Craig-Wood
e3fe31f7cb fs: add context.Context to fs.GetModifyWindow #3257 #4685 2020-11-09 18:05:54 +00:00
Nick Craig-Wood
8b96933e58 fs: Add context to fs.Features.Fill & fs.Features.Mask #3257 #4685 2020-11-09 18:05:54 +00:00
Nick Craig-Wood
d69b96a94c test: Add context to mockfs.NewFs #3257 #4685 2020-11-09 18:05:54 +00:00
Nick Craig-Wood
d846210978 fs: Add context to NewFs #3257 #4685
This adds a context.Context parameter to NewFs and related calls.

This is necessary as part of reading config from the context -
backends need to be able to read the global config.
2020-11-09 18:05:54 +00:00
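
A minimal usage sketch under the new signature, assuming fs.NewFs(ctx, path) and a registered local backend:

    package main

    import (
        "context"
        "fmt"
        "log"

        _ "github.com/rclone/rclone/backend/local" // register the local backend
        "github.com/rclone/rclone/fs"
    )

    func main() {
        ctx := context.Background()
        // NewFs now takes a context, through which backends can read the
        // per-context global config.
        f, err := fs.NewFs(ctx, "/tmp")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("opened:", f.Name(), f.Root())
    }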
Adam Plánský
cd2c06f2a7 testserver: speed up seafile integration test
Before every seafile integration test we run the Seafile server in a docker
environment. We don't have to sleep for 60 seconds to have everything ready
before running the integration test; we can assume everything is ready once the
Seafile webserver returns status code 200. The Seafile Dockerfile
(https://github.com/haiwen/seafile-docker/blob/master/image/seafile/Dockerfile)
runs scripts/start.py, which always waits for mysql (wait_for_mysql()) before
calling init_seafile_server()
(https://github.com/haiwen/seafile-docker/blob/master/scripts/start.py).
2020-11-03 16:31:39 +00:00
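
The readiness check described above could be sketched roughly as follows (in Go rather than the shell the test scripts actually use; the URL and timeout are placeholders):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // waitForHTTP polls url until it answers 200 OK or the deadline passes,
    // replacing a fixed 60 second sleep with an explicit readiness check.
    func waitForHTTP(url string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := http.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(time.Second)
        }
        return fmt.Errorf("%s not ready after %s", url, timeout)
    }

    func main() {
        if err := waitForHTTP("http://127.0.0.1:8080", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }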
Josh Soref
e4a87f772f docs: spelling: e.g.
Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>
2020-10-28 18:16:23 +00:00
Josh Soref
bbe7eb35f1 docs: spelling: server-side
Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>
2020-10-28 18:16:23 +00:00
Nick Craig-Wood
bed83b0b64 test: add ListRetries config parameter to integration tests
Occasionally the b2 tests fail because the integration tests don't
retry hard enough with their new setting of -list-retries 3. Override
this setting to 5 for the b2 tests only.
2020-10-25 18:10:50 +00:00
Nick Craig-Wood
85d35ef03c test: remove TestS3Ceph: and TestSwiftCeph: from integration tests
Unfortunately we don't have access to this server any more
2020-10-25 18:10:49 +00:00
Josh Soref
d0888edc0a Spelling fixes
Fix spelling of: above, already, anonymous, associated,
authentication, bandwidth, because, between, blocks, calculate,
candidates, cautious, changelog, cleaner, clipboard, command,
completely, concurrently, considered, constructs, corrupt, current,
daemon, dependencies, deprecated, directory, dispatcher, download,
eligible, ellipsis, encrypter, endpoint, entrieslist, essentially,
existing writers, existing, expires, filesystem, flushing, frequently,
hierarchy, however, implementation, implements, inaccurate,
individually, insensitive, longer, maximum, metadata, modified,
multipart, namedirfirst, nextcloud, obscured, opened, optional,
owncloud, pacific, passphrase, password, permanently, persimmon,
positive, potato, protocol, quota, receiving, recommends, referring,
requires, revisited, satisfied, satisfies, satisfy, semver,
serialized, session, storage, strategies, stringlist, successful,
supported, surprise, temporarily, temporary, transactions, unneeded,
update, uploads, wrapped

Signed-off-by: Josh Soref <jsoref@users.noreply.github.com>
2020-10-14 15:21:31 +01:00
Ivan Andreev
c797494d88
Merge pull request #4608 from ivandeex/pr-chunker-crypt
chunker: fix upload over crypt (fixes #4570)
2020-09-18 17:58:44 +03:00
Ivan Andreev
e2a57182be mailru: re-enable fixed chunker tests
This reverts commit 9d3d397f50.
2020-09-18 17:56:34 +03:00
Nick Craig-Wood
5bf53fe3ac test_all: only run the backend tests for 1fichier
Running all the tests for 1fichier takes too long due to the directory
reading rate limiter.

The backend tests do complete in a reasonable time (21 mins).
2020-09-01 16:13:47 +01:00
Nick Craig-Wood
e2816629d0 fstest: fix unwrapping tests for bucket-based remotes
TestIntegration/FsRmdirNotFound was failing on crypt wrapping a bucket-based remote.

This was spotted by the integration tests.
2020-09-01 15:43:41 +01:00
Nick Craig-Wood
9d3d397f50 test_all: disable chunker + mailru tests while mailru is broken #4376 2020-08-20 12:50:20 +01:00
Nick Craig-Wood
38e8415e77 test_all: remove Digital Ocean s3 integration tests due to excessive rate limiting
This is what I wrote to Digital Ocean support on July 10, 2020 - alas
it didn't result in the rate limits dropping, so reluctantly I'm going
to remove DO from the integration tests since they never pass and have
no hope of ever passing while this rate limit is in effect.

----

Somewhere towards the end of June 2020 or the start of July 2020 my
integration tests between rclone ( https://rclone.org ) and Digital
Ocean started failing.

I tried moving the tests to different regions (currently they are
using AMS1 because I'm in Europe) with no improvement.

Rclone seems to be hitting this rate limit as documented here:
https://www.digitalocean.com/docs/spaces/#limits

- 2 COPYs per 5 minutes on any individual object in a Space

Rclone creates small objects about 100 bytes in size and renames them
a few times - this involves using the COPY call as S3 does not have a
rename API. The tests do this more than twice per object, so I think they
hit the 5 minute limit. Rclone does exponential backoff and fails after 10
retries, without having reached a 5 minute delay by that point.

Having a 5 minute lockout on an S3 compatible API is surprising!

Rclone runs integration tests with about 30 other providers, none of which
have a rate limit like this.

I understand the need for a COPY rate limit as server-side copying of
large files can be resource intensive. However, a 5 minute lockout for
copying 100 byte files seems excessive!

Might I humbly suggest that you reduce or eliminate this rate limit
for small files?

----

This was the reply

Unfortunately it is not possible to raise this limit or remove it
currently on our platform. I do see how this would interfere with type
of applications that need to copy many small files and will be happy
to take the feedback to our engineering team to see how we can improve
the spaces system in the future
2020-08-20 12:50:04 +01:00
Nick Craig-Wood
fb9edbe34e test_all: export more internal variables to index.json for analysis 2020-08-20 12:23:33 +01:00
Nick Craig-Wood
63e4d2952b fstests: Suggest that Purge on a nonexistent dir should return fs.ErrorDirNotFound 2020-08-19 18:03:42 +01:00
Nick Craig-Wood
a2afa9aadd fs: Add directory to optional Purge interface - fixes #1891
- add a directory to the optional Purge interface
- fix up all the backends
- add an additional integration test to test for the feature
- use the new feature in operations.Purge

Many of the backends had been prepared in advance for this so the
change was trivial for them.
2020-07-31 17:43:17 +01:00
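
A toy illustration of the new interface shape, with the directory argument passed to Purge (the purger and fakeFs types here are invented for this example, not rclone's actual definitions):

    package main

    import (
        "context"
        "fmt"
    )

    // purger mirrors the shape of the optional interface after this change:
    // Purge now receives the directory to remove.
    type purger interface {
        Purge(ctx context.Context, dir string) error
    }

    type fakeFs struct{}

    func (fakeFs) Purge(ctx context.Context, dir string) error {
        fmt.Println("purging directory:", dir)
        return nil
    }

    func main() {
        var f purger = fakeFs{}
        _ = f.Purge(context.Background(), "path/to/dir")
    }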