This is done by making fs.Config private and attaching it to the
context instead.
The config should be obtained with fs.GetConfig, and fs.AddConfig
should be used to get a new context carrying a mutable config that
can be changed.
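A minimal sketch of the intended usage, assuming fs.AddConfig returns
the new context alongside the mutable config (the DryRun tweak is
just an illustration):

    import (
        "context"

        "github.com/rclone/rclone/fs"
    )

    func example(ctx context.Context) context.Context {
        // Read the config attached to the context (or the global
        // defaults if none has been attached).
        ci := fs.GetConfig(ctx)
        fs.Debugf(nil, "transfers = %d", ci.Transfers)

        // Get a new context carrying a mutable copy of the config
        // which can be changed without affecting the original.
        newCtx, newCI := fs.AddConfig(ctx)
        newCI.DryRun = true // change affects newCtx only
        return newCtx
    }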
Before this change, the tests were run against the previous stable
rclone/rclone Docker image. This unfortunately masked errors in the
integration test server.
This change uses the currently installed rclone to run "rclone serve
ftp" etc. That rclone is built from the current code by the
integration test server, so this makes for a better test.
This adds a context.Context parameter to NewFs and related calls.
This is necessary as part of reading config from the context -
backends need to be able to read the global config.
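A sketch of what this looks like for a backend constructor, assuming
it follows the usual pattern of taking a name, a root and a
configmap.Mapper; the body here is illustrative only:

    import (
        "context"
        "errors"

        "github.com/rclone/rclone/fs"
        "github.com/rclone/rclone/fs/config/configmap"
    )

    // NewFs now takes a context as its first parameter so the
    // backend can read the global config from it.
    func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, error) {
        ci := fs.GetConfig(ctx)
        fs.Debugf(name, "creating %q with %d transfers", root, ci.Transfers)
        // backend specific construction would go here
        return nil, errors.New("sketch only")
    }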
Occasionally the b2 tests fail because the integration tests don't
retry hard enough with their new setting of -list-retries 3. This
change overrides the setting to 5 for the b2 tests only.
Running all the tests for 1fichier takes too long due to the directory
reading rate limiter.
The backend tests do complete in a reasonable time (21 mins).
This is what I wrote to Digital Ocean support on July 10, 2020. Alas
it didn't result in the rate limits dropping, so reluctantly I'm
removing DO from the integration tests, since they never pass and
have no hope of ever passing while this rate limit is in effect.
----
Somewhere towards the end of June 2020 or the start of July 2020 my
integration tests between rclone ( https://rclone.org ) and Digital
Ocean started failing.
I tried moving the tests to different regions (currently they are
using AMS1 because I'm in Europe) with no improvement.
Rclone seems to be hitting this rate limit as documented here:
https://www.digitalocean.com/docs/spaces/#limits
- 2 COPYs per 5 minutes on any individual object in a Space
Rclone creates small objects about 100 bytes in size and renames them
a few times - this involves using the COPY call, as S3 does not have
a rename API. The tests do this more than twice per object, so I
think they hit the 5 minute limit. Rclone does exponential backoff,
but after 10 retries it fails, without having accumulated 5 minutes
of delay by then.
Having a 5 minute lockout on an S3 compatible API is surprising!
Rclone runs integration tests against about 30 other providers, none
of which have a rate limit like this.
I understand the need for a COPY rate limit, as server-side copying
large files can be resource intensive. However, a 5 minute lockout for
copying 100 byte files seems excessive!
Might I humbly suggest that you reduce or eliminate this rate limit
for small files?
----
This was the reply:
Unfortunately it is not possible to raise this limit or remove it
currently on our platform. I do see how this would interfere with type
of applications that need to copy many small files and will be happy
to take the feedback to our engineering team to see how we can improve
the spaces system in the future
- add a directory to the optional Purge interface (see the sketch below)
- fix up all the backends
- add an additional integration test to test for the feature
- use the new feature in operations.Purge
Many of the backends had been prepared in advance for this so the
change was trivial for them.
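A sketch of what the extended optional interface might look like
after this change; the comment wording here is mine:

    import "context"

    // Purger is an optional interface for an Fs which can delete a
    // directory and all of its contents in one operation.
    type Purger interface {
        // Purge deletes all the files and directories in dir,
        // including dir itself.
        Purge(ctx context.Context, dir string) error
    }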
This adds expire and unlink parameters to the PublicLink interface.
This fixes up the affected backends and removes unlink parameters
where they are present.
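A sketch of the resulting interface, assuming expire is an
fs.Duration and unlink is a bool that removes an existing link
instead of creating one:

    import (
        "context"

        "github.com/rclone/rclone/fs"
    )

    // PublicLinker is an optional interface for an Fs.
    type PublicLinker interface {
        // PublicLink generates a public link to the remote path,
        // usually readable by anyone who has it. expire sets how
        // long the link should remain valid and unlink requests
        // that any existing link be removed instead.
        PublicLink(ctx context.Context, remote string, expire fs.Duration, unlink bool) (string, error)
    }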
These commands are for implementing backend-specific
functionality. Their documentation is placed automatically into the
backend docs.
There is a simple test for the feature in the backend tests.
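A sketch of how a backend might implement such a command, assuming
the optional interface is a single Command method taking the command
name, its arguments and a map of options, and returning a
JSON-encodable result; the "noop" command here is illustrative:

    import (
        "context"
        "errors"
    )

    // Fs stands in for a backend's filesystem type.
    type Fs struct{}

    // Command runs the named backend specific command.
    func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[string]string) (interface{}, error) {
        switch name {
        case "noop":
            // Echo the arguments back - handy as a trivial test command.
            return map[string]interface{}{"args": arg, "opts": opt}, nil
        }
        return nil, errors.New("unknown command " + name)
    }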