Now works correctly again after the changes to the random file
generator. It also now uses the true size of each file when generating
with a stddev != 0 (rather than assuming every file is exactly the
mean size).
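
A minimal sketch of the idea (realistic_file_size is a hypothetical
name, not the script's actual interface):

    import random

    def realistic_file_size(mean, stddev):
        # Draw each file's size from a normal distribution instead of
        # assuming every file is exactly `mean` bytes; clamp at zero.
        return max(0, int(random.normalvariate(mean, stddev)))

    # Track the true total so reporting matches what was actually written.
    sizes = [realistic_file_size(2048, 512) for _ in range(100)]
    total_bytes = sum(sizes)
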
Static load test script now provides separate functions for generating a
list of random-file pointers and uploading those files to an S3 store. When
run as a script it still does both, but you can also call each function
individually from another script after importing the module.
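
Hypothetical usage from another script (the function names
get_random_files and upload_objects, and their signatures, are
assumptions for illustration):

    import boto

    import generate_objects  # the load-test module described above

    conn = boto.connect_s3('ACCESS_KEY', 'SECRET_KEY')
    bucket = conn.create_bucket('load-test-bucket')  # illustrative name

    # Step 1: build the list of random-file pointers without uploading.
    files = generate_objects.get_random_files(10, 2048, 512, seed=42)
    # Step 2: upload them to the S3 store in a separate call.
    urls = generate_objects.upload_objects(bucket, files)
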
Adds a siege.conf file for siege configuration options.
Adds a docstring to the main function in generate_objects.py describing
how to run the static content load test.
Script to generate garbage objects and push them to a bucket.
The script takes a config file on the command line (plus some other
command-line options, parsed with optparse), generates a batch of
objects in an S3 bucket, and prints their public URLs to stdout.
The number and sizes of the objects are determined by a YAML config
file, with each line looking like this (a sample config follows the
field descriptions):
- [A, B, C]
A: Number of files in this group
B: Mean size of files in this group (in bytes)
C: Standard deviation (normal distribution) of file sizes in this group
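
For example, a config with two groups (values are illustrative):

    - [100, 2048, 512]   # 100 files, mean 2048 bytes, stddev 512 bytes
    - [10, 1048576, 0]   # 10 files of exactly 1 MB each
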
The command-line options are (an optparse sketch follows this list):
- S3 access key
- S3 secret key
- seed for PRNG
- output file to write URLs to
- flag to add md5 checksum to url list
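
A sketch of the option parsing (the flag names here are illustrative
assumptions, not necessarily the script's actual flags):

    from optparse import OptionParser

    def parse_opts():
        parser = OptionParser(usage='%prog [options] CONFIG_FILE')
        parser.add_option('-a', '--access-key', help='S3 access key')
        parser.add_option('-s', '--secret-key', help='S3 secret key')
        parser.add_option('--seed', type='int', help='seed for the PRNG')
        parser.add_option('-o', '--outfile',
                          help='file to write the public URLs to')
        parser.add_option('--checksum', action='store_true',
                          help='add an md5 checksum to each URL in the list')
        return parser.parse_args()
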
Refactors so the FakeFile and Verifier classes can be used in multiple
tests, and adds a helper function to verify data.
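
A sketch of the shape such a file-like helper can take (modeled on the
behavior described in these tests, not copied from the real class):

    class FakeFile(object):
        """File-like object yielding `size` copies of `char`; fires an
        optional callback halfway through the stream so a test can read
        or rewrite the same key mid-upload."""
        def __init__(self, char, size, interrupt=None):
            self.char = char
            self.size = size
            self.offset = 0
            self.interrupt = interrupt

        def seek(self, offset, whence=0):
            # Simplified: assumes whence is SEEK_SET.
            self.offset = offset

        def tell(self):
            return self.offset

        def read(self, size=-1):
            if size < 0 or size > self.size - self.offset:
                size = self.size - self.offset
            # Fire the interrupt on the read that crosses the midpoint.
            if self.interrupt and self.offset < self.size // 2 <= self.offset + size:
                self.interrupt()
            self.offset += size
            return self.char * size
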
Adds new tests similar to the previous atomic write tests, but this
time performing a second write in the middle of the first write
(rather than doing a read in the middle).
The atomic write test writes a large file of all A's, then overwrites
the file with B's. The file is verified (to be either all A's or all
B's) after each write and just before the overwrite completes.
The test is performed 3 times, with sizes of 1 MB, 4 MB, and 8 MB.
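
A sketch of that flow with boto, using the FakeFile sketch above
(assumes an existing boto bucket; _verify_atomic_key_data is a
hypothetical helper name):

    def _verify_atomic_key_data(key, size):
        # The object must be entirely one character; a mix of A's and
        # B's would mean a non-atomic write was observed.
        data = key.get_contents_as_string()
        assert data == 'A' * size or data == 'B' * size

    def _test_atomic_write(bucket, size):
        key = bucket.new_key('testobj')

        key.set_contents_from_file(FakeFile('A', size), size=size)
        _verify_atomic_key_data(key, size)   # all A's after first write

        # Mid-overwrite the object must be all A's or all B's, never a mix.
        fp_b = FakeFile('B', size,
                        interrupt=lambda: _verify_atomic_key_data(key, size))
        key.set_contents_from_file(fp_b, size=size)
        _verify_atomic_key_data(key, size)   # all B's once complete

    for size in (1 * 1024 * 1024, 4 * 1024 * 1024, 8 * 1024 * 1024):
        _test_atomic_write(bucket, size)

The second-write variant would replace the interrupt's read with
another set_contents_from_file call on the same key.
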
It never tested 3-character bucket names; it was prefixed by
the 30-character uniqueness mechanism. I highly doubt
3-letter bucket names will stay available for very long anyway,
so if this is wanted back, it'll need to avoid the prefix, do
its own cleanup, and be flagged so it is not executed on
AWS, DreamHost Objects, or any production system; it'll only
work on local dev instances with a clean slate.
They can use a fixed name, since nuke_prefixed_buckets guarantees the
bucket doesn't exist. Use assert_raises and check the details of the
error. Move near the existing non-existent-bucket test.
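
A sketch of the pattern with boto (the assert_raises variant that
returns the caught exception is a common local test helper, written out
here for illustration; the bucket name is arbitrary):

    import boto
    import boto.exception

    def assert_raises(exc_class, callable_obj, *args, **kwargs):
        # Like unittest's assertRaises, but returns the exception so
        # its details can be checked.
        try:
            callable_obj(*args, **kwargs)
        except exc_class as e:
            return e
        raise AssertionError('%s not raised' % exc_class.__name__)

    conn = boto.connect_s3()
    # nuke_prefixed_buckets only cleans prefixed names, so a fixed,
    # unprefixed name like this one is guaranteed not to exist.
    e = assert_raises(boto.exception.S3ResponseError,
                      conn.get_bucket, 'bucket-that-does-not-exist')
    # Check the details of the error, not just its type.
    assert e.status == 404
    assert e.error_code == 'NoSuchBucket'
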