Client verification script should be able to populate buckets automatically
to get a good baseline for comparing similar client-created buckets against.
A yaml blueprint provides a reasonable structure for populating objects,
and should also be easy enough for a human to follow when creating test
buckets from a client.
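One possible shape for such a blueprint (the key names here are
illustrative assumptions, not necessarily the tool's actual schema):

    - bucket: test-bucket-1
      objects:
        - name: file-1
          size: 1024
        - name: file-2
          size: 4096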
Updates client verification tool to accept multiple bucket names on the
command line and dump information for each in case we need to test different
buckets.
After testing that a client works with an S3-like object store, use this
tool to print out information about the bucket and the uploaded objects,
to verify that the client worked correctly.
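A hypothetical invocation (the script name and arguments are assumptions,
not necessarily the tool's actual interface):

    ./verify_client.py first-bucket second-bucket third-bucket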
Accessing a non-constant global from another module is tricky,
as the import binds to the original object. Rebinding the name
in the original module does not change it in importers. Use a
getter as a quick workaround.
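A minimal sketch of the pitfall and the workaround (module and variable
names are illustrative):

    # config.py
    verbose = False

    def get_verbose():
        # Always reads the current binding in this module.
        return verbose

    def set_verbose(value):
        global verbose
        verbose = value

    # consumer.py
    from config import verbose, get_verbose, set_verbose

    set_verbose(True)
    print(verbose)        # False: bound to the original object at import time
    print(get_verbose())  # True: sees the rebound name in config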
Now works correctly again after the changes to the random file
generator. Also now records the true size of each file when generating
with stddev != 0 (rather than just assuming all files were the mean
size).
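A minimal sketch of that size bookkeeping (values are illustrative):

    import random

    count, mean, stddev, seed = 100, 1024, 300, 1234
    rng = random.Random(seed)
    # Draw each size from the normal distribution and clamp at zero;
    # record the actual sizes rather than assuming count * mean.
    sizes = [max(0, int(rng.normalvariate(mean, stddev))) for _ in range(count)]
    total_bytes = sum(sizes)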
Static load test script now provides separate functions for generating a
list of random-file pointers and uploading those files to an S3 store. When
run as a script it still does both, but you can call each function
individually from a different script after loading the module.
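For example, another script could do something like this (the function
names and signatures here are assumptions about the interface):

    from generate_objects import get_random_files, upload_objects

    def load_static_content(bucket):
        # Step 1: build the list of random-file pointers.
        files = get_random_files(quantity=100, mean=1024, stddev=300, seed=1234)
        # Step 2: upload those files to the S3 store.
        upload_objects(bucket, files)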
Adds siege.conf file for siege configuration options
Adds docstring to main function in generate_objects.py describing how to run
the static content load test.
Script to generate garbage objects and push them to a bucket.
Script takes a config file on the command line (and some other command line
options using optparse) and generates a bunch of objects in an S3 bucket.
Also prints public URLs to stdout.
Number and sizes of the objects are determined by a yaml config file with each line
looking like this:
- [A, B, C]
A: Number of files in this group
B: Mean size of files in this group (in bytes)
C: Standard deviation (normal distribution) of file sizes in this group
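For example, a config describing two groups of objects:

    - [1000, 1024, 300]   # 1000 files averaging 1 KB, stddev 300 bytes
    - [10, 5242880, 0]    # 10 files of exactly 5 MB each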
Command line options are:
- S3 access key
- S3 secret key
- seed for PRNG
- output file to write URLs to
- flag to add MD5 checksums to the URL list
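A sketch of the core generation loop (boto-style API; the code below is
an illustration, not the script's actual implementation):

    import random
    import yaml

    def generate_objects(bucket, conf_path, seed):
        rng = random.Random(seed)
        with open(conf_path) as f:
            groups = yaml.safe_load(f)  # list of [count, mean, stddev] lines
        for gi, (count, mean, stddev) in enumerate(groups):
            for i in range(count):
                size = max(0, int(rng.normalvariate(mean, stddev)))
                key = bucket.new_key('obj-%d-%d' % (gi, i))
                key.set_contents_from_string('A' * size)
                key.set_acl('public-read')
                # Print a public URL for the object.
                print(key.generate_url(0, query_auth=False))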
Refactors so the FakeFile and Verifier classes can be used in multiple
tests, and adds a helper function to verify data.
Adds new tests similar to the previous atomic write tests, but this time
a second write is performed in the middle of writing (rather than a read
in the middle).
The atomic write test writes a large file of all A's followed by
overwriting the file with B's. The file is verified (to be either
all A's or all B's) after each write and just before the overwrite
is complete.
The test is performed 3 times, with sizes of 1 MB, 4 MB, and 8 MB.
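A minimal sketch of the pattern, assuming a boto-style Key object (an
illustration, not the suite's actual code):

    def verify_uniform(key, size):
        # The object must be entirely A's or entirely B's at any point.
        data = key.get_contents_as_string()
        assert data in (b'A' * size, b'B' * size)

    def atomic_write_test(key):
        # key: a boto-style Key object (assumption)
        for size in (1024 * 1024, 4 * 1024 * 1024, 8 * 1024 * 1024):
            key.set_contents_from_string('A' * size)
            verify_uniform(key, size)
            # A FakeFile-style wrapper can call verify_uniform() from its
            # read hook just before the final chunk is sent, i.e. while
            # the overwrite is still in flight.
            key.set_contents_from_string('B' * size)
            verify_uniform(key, size)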
It never tested 3-character bucket names; it was prefixed by
the 30-character uniqueness mechanism. I highly doubt
3-letter bucket names will stay available for very long anyway,
so if this is wanted back, it'll need to avoid the prefix, do
its own cleanup, and be flagged so it is not executed on
AWS, DreamHost Objects, or any production system; it'll only
work on local dev instances with a clean slate.