Mirror of https://github.com/ceph/s3-tests.git
Compatibility tests for S3 clones
Commit 9e7b2ae53d:
Earlier values assumed an LC debug interval of 2s, which is a pretty
short time window and can often lead to failures if the processing
does not complete within the next day. This commit assumes the
currently configured LC debug interval of 10s and derives the time
intervals using the following logic:
Worst case:
LC start time : 00:00
obj upload    : 00:01
LC run1       : 00:10 -> object not expired, as it is only 9s old
LC run2       : 00:20 -> the object expires in this run, but we cannot
predict exactly when the run completes; for a moderate number of
objects it can take anywhere from 1s up to 10s. Allowing roughly 8s of
wiggle room for the run to finish, and given the number of objects in a
teuthology run, the object will most probably already have been deleted
within this time, so at 28s we should have seen the day-1 objects
expire.
Best case:
LC start time : 00:00
obj upload    : 00:09
LC run1       : 00:10
LC run2       : 00:20 -> the object expires, so the elapsed time is
around 11-19s (though it can also come in close to just 10s). We should
probably configure the LC lock time to 10s as well, to ensure the lock
is not held for the default 60s, in which case the object might take
longer than the lock interval to expire.
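
The same arithmetic as a small Python sketch; the constants and the
helper are illustrative assumptions for this message, not part of the
test suite:

    # Illustrative only: constants mirror the reasoning above.
    LC_DEBUG_INTERVAL = 10  # seconds per simulated "day" (assumed)
    PROCESSING_SLACK = 8    # wiggle room for an LC run to finish deleting (assumed)

    def worst_case_wait(upload_offset):
        """Seconds to wait after upload before asserting a 1-day expiration.

        upload_offset is the time in seconds between the previous LC run
        and the object upload; the worst case above uses 1s.
        """
        # The run at t=10 skips the object (younger than one "day"), so
        # expiration happens in the run at t=20, plus processing slack.
        time_to_run1 = LC_DEBUG_INTERVAL - upload_offset
        return time_to_run1 + LC_DEBUG_INTERVAL + PROCESSING_SLACK

    # Worst case above: upload at 00:01 -> 9 + 10 + 8 = 27s after upload,
    # i.e. day-1 objects should be gone by ~00:28 on the LC clock.
    print(worst_case_wait(1))  # 27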
Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
(cherry picked from commit

Repository contents:

- s3tests
- .gitignore
- bootstrap
- config.yaml.SAMPLE
- LICENSE
- README.rst
- request_decision_graph.yml
- requirements.txt
- setup.py
- siege.conf
========================
 S3 compatibility tests
========================

This is a set of completely unofficial Amazon AWS S3 compatibility
tests that will hopefully be useful to people implementing software
that exposes an S3-like API.

The tests only cover the REST interface. They use the Boto library, so
any HTTP-level differences that Boto papers over will not be detected
by the tests. Raw HTTP tests may be added later.

The tests use the Nose test framework. To get started, ensure you have
the ``virtualenv`` software installed; e.g. on Debian/Ubuntu::

    sudo apt-get install python-virtualenv

and then run::

    ./bootstrap

You will need to create a configuration file with the location of the
service and two different credentials, something like this::

    [DEFAULT]
    ## this section is just used as default for all the "s3 *"
    ## sections, you can place these variables also directly there

    ## replace with e.g. "localhost" to run against local software
    host = s3.amazonaws.com

    ## uncomment the port to use something other than 80
    # port = 8080

    ## say "no" to disable TLS
    is_secure = yes

    [fixtures]
    ## all the buckets created will start with this prefix;
    ## {random} will be filled with random characters to pad
    ## the prefix to 30 characters long, and avoid collisions
    bucket prefix = YOURNAMEHERE-{random}-

    [s3 main]
    ## the tests assume two accounts are defined, "main" and "alt".

    ## user_id is a 64-character hexstring
    user_id = 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef

    ## display name typically looks more like a unix login, "jdoe" etc
    display_name = youruseridhere

    ## replace these with your access keys
    access_key = ABCDEFGHIJKLMNOPQRST
    secret_key = abcdefghijklmnopqrstuvwxyzabcdefghijklmn

    ## replace with key id obtained when secret is created, or delete if KMS not tested
    kms_keyid = 01234567-89ab-cdef-0123-456789abcdef

    [s3 alt]
    ## another user account, used for ACL-related tests
    user_id = 56789abcdef0123456789abcdef0123456789abcdef0123456789abcdef01234
    display_name = john.doe
    ## the "alt" user needs to have email set, too
    email = john.doe@example.com
    access_key = NOPQRSTUVWXYZABCDEFG
    secret_key = nopqrstuvwxyzabcdefghijklmnabcdefghijklm

Once you have that, you can run the tests with::

    S3TEST_CONF=your.conf ./virtualenv/bin/nosetests

To gather a list of the tests being run, use the flags::

    -v --collect-only

You can specify which test(s) to run::

    S3TEST_CONF=your.conf ./virtualenv/bin/nosetests s3tests.functional.test_s3:test_bucket_list_empty

Some tests have attributes set based on their current reliability and
on things like AWS not enforcing its spec strictly. You can filter
tests based on their attributes::

    S3TEST_CONF=aws.conf ./virtualenv/bin/nosetests -a '!fails_on_aws'

TODO
====

- We should assume read-after-write consistency, and make the tests
  actually request such a location.

  http://aws.amazon.com/s3/faqs/#What_data_consistency_model_does_Amazon_S3_employ

- Test direct HTTP downloads, like a web browser would do.
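
For reference, the following minimal sketch (it is not part of the test
suite) shows roughly how the tests talk to the service through Boto,
using the placeholder host and credentials from the sample
configuration above; it can also serve as a quick connectivity check
before running the full suite::

    # Illustrative only: host, credentials and bucket name are the
    # placeholders from the sample configuration, not real values.
    import boto
    import boto.s3.connection

    conn = boto.connect_s3(
        aws_access_key_id='ABCDEFGHIJKLMNOPQRST',
        aws_secret_access_key='abcdefghijklmnopqrstuvwxyzabcdefghijklmn',
        host='s3.amazonaws.com',
        is_secure=True,
        calling_format=boto.s3.connection.OrdinaryCallingFormat(),
    )

    # Round-trip a small object, then clean up.
    bucket = conn.create_bucket('yournamehere-connectivity-check')
    key = bucket.new_key('hello.txt')
    key.set_contents_from_string('hello world')
    print(key.get_contents_as_string())
    key.delete()
    bucket.delete()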