Forked from TrueCloudLab/s3-tests
Comparing master...atomic-rea (2 commits: 00cfe0dd82, d4a77acb71)

37 changed files with 3743 additions and 26029 deletions
.gitignore (vendored, 1 change)

@@ -10,6 +10,5 @@
 /*.egg-info
 /virtualenv
-/venv
 
 config.yaml
 
README.rst (143 changes)

@@ -2,101 +2,90 @@
 S3 compatibility tests
 ========================
 
-This is a set of unofficial Amazon AWS S3 compatibility
-tests, that can be useful to people implementing software
-that exposes an S3-like API. The tests use the Boto2 and Boto3 libraries.
+This is a set of completely unofficial Amazon AWS S3 compatibility
+tests, that will hopefully be useful to people implementing software
+that exposes an S3-like API.
 
-The tests use the Tox tool. To get started, ensure you have the ``tox``
-software installed; e.g. on Debian/Ubuntu::
+The tests only cover the REST interface.
 
-  sudo apt-get install tox
+TODO: test direct HTTP downloads, like a web browser would do.
+
+The tests use the Boto library, so any e.g. HTTP-level differences
+that Boto papers over, the tests will not be able to discover. Raw
+HTTP tests may be added later.
+
+The tests use the Nose test framework. To get started, ensure you have
+the ``virtualenv`` software installed; e.g. on Debian/Ubuntu::
+
+  sudo apt-get install python-virtualenv
+
+and then run::
+
+  ./bootstrap
 
 You will need to create a configuration file with the location of the
-service and two different credentials. A sample configuration file named
-``s3tests.conf.SAMPLE`` has been provided in this repo. This file can be
-used to run the s3 tests on a Ceph cluster started with vstart.
-
-Once you have that file copied and edited, you can run the tests with::
-
-  S3TEST_CONF=your.conf tox
-
-You can specify which directory of tests to run::
-
-  S3TEST_CONF=your.conf tox -- s3tests_boto3/functional
-
-You can specify which file of tests to run::
-
-  S3TEST_CONF=your.conf tox s3tests_boto3/functional/test_s3.py
-
-You can specify which test to run::
-
-  S3TEST_CONF=your.conf tox s3tests_boto3/functional/test_s3.py::test_bucket_list_empty
+service and two different credentials, something like this::
+
+  [DEFAULT]
+  ## this section is just used as default for all the "s3 *"
+  ## sections, you can place these variables also directly there
+
+  ## replace with e.g. "localhost" to run against local software
+  host = s3.amazonaws.com
+
+  ## uncomment the port to use something other than 80
+  # port = 8080
+
+  ## say "no" to disable TLS
+  is_secure = yes
+
+  [fixtures]
+  ## all the buckets created will start with this prefix;
+  ## {random} will be filled with random characters to pad
+  ## the prefix to 30 characters long, and avoid collisions
+  bucket prefix = YOURNAMEHERE-{random}-
+
+  [s3 main]
+  ## the tests assume two accounts are defined, "main" and "alt".
+
+  ## user_id is a 64-character hexstring
+  user_id = 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
+
+  ## display name typically looks more like a unix login, "jdoe" etc
+  display_name = youruseridhere
+
+  ## replace these with your access keys
+  access_key = ABCDEFGHIJKLMNOPQRST
+  secret_key = abcdefghijklmnopqrstuvwxyzabcdefghijklmn
+
+  [s3 alt]
+  ## another user account, used for ACL-related tests
+  user_id = 56789abcdef0123456789abcdef0123456789abcdef0123456789abcdef01234
+  display_name = john.doe
+  ## the "alt" user needs to have email set, too
+  email = john.doe@example.com
+  access_key = NOPQRSTUVWXYZABCDEFG
+  secret_key = nopqrstuvwxyzabcdefghijklmnabcdefghijklm
+
+Once you have that, you can run the tests with::
+
+  S3TEST_CONF=your.conf ./virtualenv/bin/nosetests
+
+You can specify what test(s) to run::
+
+  S3TEST_CONF=your.conf ./virtualenv/bin/nosetests s3tests.functional.test_s3:test_object_acl_grant_public_read
 
 Some tests have attributes set based on their current reliability and
 things like AWS not enforcing their spec stricly. You can filter tests
 based on their attributes::
 
-  S3TEST_CONF=aws.conf tox -- -m 'not fails_on_aws'
+  S3TEST_CONF=aws.conf ./virtualenv/bin/nosetests -a '!fails_on_aws'
 
-Most of the tests have both Boto3 and Boto2 versions. Tests written in
-Boto2 are in the ``s3tests`` directory. Tests written in Boto3 are
-located in the ``s3test_boto3`` directory.
-
-You can run only the boto3 tests with::
-
-  S3TEST_CONF=your.conf tox -- s3tests_boto3/functional
-
-========================
-STS compatibility tests
-========================
-
-This section contains some basic tests for the AssumeRole, GetSessionToken and AssumeRoleWithWebIdentity API's. The test file is located under ``s3tests_boto3/functional``.
-
-To run the STS tests, the vstart cluster should be started with the following parameter (in addition to any parameters already used with it)::
-
-  vstart.sh -o rgw_sts_key=abcdefghijklmnop -o rgw_s3_auth_use_sts=true
-
-Note that the ``rgw_sts_key`` can be set to anything that is 128 bits in length.
-After the cluster is up the following command should be executed::
-
-  radosgw-admin caps add --tenant=testx --uid="9876543210abcdef0123456789abcdef0123456789abcdef0123456789abcdef" --caps="roles=*"
-
-You can run only the sts tests (all the three API's) with::
-
-  S3TEST_CONF=your.conf tox s3tests_boto3/functional/test_sts.py
-
-You can filter tests based on the attributes. There is a attribute named ``test_of_sts`` to run AssumeRole and GetSessionToken tests and ``webidentity_test`` to run the AssumeRoleWithWebIdentity tests. If you want to execute only ``test_of_sts`` tests you can apply that filter as below::
-
-  S3TEST_CONF=your.conf tox -- -m test_of_sts s3tests_boto3/functional/test_sts.py
-
-For running ``webidentity_test`` you'll need have Keycloak running.
-
-In order to run any STS test you'll need to add "iam" section to the config file. For further reference on how your config file should look check ``s3tests.conf.SAMPLE``.
-
-========================
-IAM policy tests
-========================
-
-This is a set of IAM policy tests.
-This section covers tests for user policies such as Put, Get, List, Delete, user policies with s3 actions, conflicting user policies etc
-These tests uses Boto3 libraries. Tests are written in the ``s3test_boto3`` directory.
-
-These iam policy tests uses two users with profile name "iam" and "s3 alt" as mentioned in s3tests.conf.SAMPLE.
-If Ceph cluster is started with vstart, then above two users will get created as part of vstart with same access key, secrete key etc as mentioned in s3tests.conf.SAMPLE.
-Out of those two users, "iam" user is with capabilities --caps=user-policy=* and "s3 alt" user is without capabilities.
-Adding above capabilities to "iam" user is also taken care by vstart (If Ceph cluster is started with vstart).
-
-To run these tests, create configuration file with section "iam" and "s3 alt" refer s3tests.conf.SAMPLE.
-Once you have that configuration file copied and edited, you can run all the tests with::
-
-  S3TEST_CONF=your.conf tox s3tests_boto3/functional/test_iam.py
-
-You can also specify specific test to run::
-
-  S3TEST_CONF=your.conf tox s3tests_boto3/functional/test_iam.py::test_put_user_policy
-
-Some tests have attributes set such as "fails_on_rgw".
-You can filter tests based on their attributes::
-
-  S3TEST_CONF=your.conf tox -- s3tests_boto3/functional/test_iam.py -m 'not fails_on_rgw'
-
+TODO
+====
+
+- We should assume read-after-write consistency, and make the tests
+  actually request such a location.
+  http://aws.amazon.com/s3/faqs/#What_data_consistency_model_does_Amazon_S3_employ
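The ``bucket prefix = YOURNAMEHERE-{random}-`` fixture described in the README fills ``{random}`` with random characters and pads the prefix to 30 characters. A minimal Python sketch of that behavior follows; the function name mirrors ``choose_bucket_prefix`` from the suite, but the exact padding and character set here are illustrative assumptions, not the real implementation:

```python
import random
import string

def choose_bucket_prefix(template, max_len=30):
    # Fill {random} with plenty of random characters, then trim the
    # result so the final prefix is at most max_len characters long.
    rand = ''.join(
        random.choice(string.ascii_lowercase + string.digits)
        for _ in range(255)
    )
    return template.format(random=rand)[:max_len]

prefix = choose_bucket_prefix('YOURNAMEHERE-{random}-')
```

Because the random filler is longer than the limit, two test runs are overwhelmingly unlikely to collide on bucket names.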
bootstrap (new executable file, 28 additions)

@@ -0,0 +1,28 @@
+#!/bin/sh
+set -e
+
+for package in python-pip python-virtualenv python-dev libevent-dev; do
+    if [ "$(dpkg --status -- $package|sed -n 's/^Status: //p')" != "install ok installed" ]; then
+        # add a space after old values
+        missing="${missing:+$missing }$package"
+    fi
+done
+if [ -n "$missing" ]; then
+    echo "$0: missing required packages, please install them:" 1>&2
+    echo "sudo apt-get install $missing"
+    exit 1
+fi
+
+virtualenv --no-site-packages --distribute virtualenv
+
+# avoid pip bugs
+./virtualenv/bin/pip install --upgrade pip
+
+./virtualenv/bin/pip install -r requirements.txt
+
+# forbid setuptools from using the network because it'll try to use
+# easy_install, and we really wanted pip; next line will fail if pip
+# requirements.txt does not match setup.py requirements -- sucky but
+# good enough for now
+./virtualenv/bin/python setup.py develop \
+    --allow-hosts None
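The package check in ``bootstrap`` accumulates the names of missing packages into a single space-separated string via ``${missing:+$missing }$package``. The same accumulation logic, sketched in Python (the set of installed packages here is invented purely for illustration):

```python
# Hypothetical set of already-installed packages, for illustration only.
installed = {'python-pip', 'python-dev'}

missing = ''
for package in ['python-pip', 'python-virtualenv', 'python-dev', 'libevent-dev']:
    if package not in installed:
        # "add a space after old values", as the shell comment puts it:
        # prepend a separator only when the string is non-empty.
        missing = (missing + ' ' if missing else '') + package
```

The ``${var:+...}`` shell expansion does exactly this conditional-separator trick, which is why the resulting string never has a leading or trailing space.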
config.yaml.SAMPLE (new file, 82 additions)

@@ -0,0 +1,82 @@
+fixtures:
+  ## All the buckets created will start with this prefix;
+  ## {random} will be filled with random characters to pad
+  ## the prefix to 30 characters long, and avoid collisions
+  bucket prefix: YOURNAMEHERE-{random}-
+
+file_generation:
+  groups:
+    ## File generation works by creating N groups of files. Each group of
+    ## files is defined by three elements: number of files, avg(filesize),
+    ## and stddev(filesize) -- in that order.
+    - [1, 2, 3]
+    - [4, 5, 6]
+
+## Config for the readwrite tool.
+## The readwrite tool concurrently reads and writes to files in a
+## single bucket for a set duration.
+## Note: the readwrite tool does not need the s3.alt connection info.
+## only s3.main is used.
+readwrite:
+  ## The number of reader and writer worker threads. This sets how many
+  ## files will be read and written concurrently.
+  readers: 2
+  writers: 2
+  ## The duration to run in seconds. Doesn't count setup/warmup time
+  duration: 15
+
+  files:
+    ## The number of files to use. This number of files is created during the
+    ## "warmup" phase. After the warmup, readers will randomly pick a file to
+    ## read, and writers will randomly pick a file to overwrite
+    num: 3
+    ## The file size to use, in KB
+    size: 1024
+    ## The stddev for the file size, in KB
+    stddev: 0
+
+s3:
+  ## This section contains all the connection information
+
+  defaults:
+    ## This section contains the defaults for all of the other connections
+    ## below. You can also place these variables directly there.
+
+    ## Replace with e.g. "localhost" to run against local software
+    host: s3.amazonaws.com
+
+    ## Uncomment the port to use soemthing other than 80
+    # port: 8080
+
+    ## Say "no" to disable TLS.
+    is_secure: yes
+
+  ## The tests assume two accounts are defined, "main" and "alt". You
+  ## may add other connections to be instantianted as well, however
+  ## any additional ones will not be used unless your tests use them.
+
+  main:
+    ## The User ID that the S3 provider gives you. For AWS, this is
+    ## typically a 64-char hexstring.
+    user_id: AWS_USER_ID
+
+    ## Display name typically looks more like a unix login, "jdoe" etc
+    display_name: AWS_DISPLAY_NAME
+
+    ## The email for this account.
+    email: AWS_EMAIL
+
+    ## Replace these with your access keys.
+    access_key: AWS_ACCESS_KEY
+    secret_key: AWS_SECRET_KEY
+
+  alt:
+    ## Another user accout, used for ACL-related tests.
+    user_id: AWS_USER_ID
+    display_name: AWS_DISPLAY_NAME
+    email: AWS_EMAIL
+    access_key: AWS_ACCESS_KEY
+    secret_key: AWS_SECRET_KEY
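Each ``file_generation`` group above is a ``[count, mean, stddev]`` triple: the group contributes ``count`` files whose sizes are drawn around ``mean`` with the given standard deviation. A sketch of how such groups could expand into per-file sizes; the use of ``normalvariate`` and the clamping at zero are assumptions for illustration, not necessarily what the suite's generator does:

```python
import random

def expand_groups(groups, seed=0):
    # Each group is [count, mean_kb, stddev_kb]; emit one size per file,
    # drawn from a normal distribution and clamped to be non-negative.
    rng = random.Random(seed)
    sizes = []
    for count, mean, stddev in groups:
        for _ in range(count):
            sizes.append(max(0.0, rng.normalvariate(mean, stddev)))
    return sizes

# The two sample groups from config.yaml.SAMPLE: 1 file + 4 files.
sizes = expand_groups([[1, 2, 3], [4, 5, 6]])
```

Seeding the generator makes a test run reproducible while still exercising a spread of file sizes.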
pytest.ini (51 deletions)

@@ -1,51 +0,0 @@
-[pytest]
-markers =
-    abac_test
-    appendobject
-    auth_aws2
-    auth_aws4
-    auth_common
-    bucket_policy
-    bucket_encryption
-    checksum
-    cloud_transition
-    encryption
-    fails_on_aws
-    fails_on_dbstore
-    fails_on_dho
-    fails_on_mod_proxy_fcgi
-    fails_on_rgw
-    fails_on_s3
-    fails_with_subdomain
-    group
-    group_policy
-    iam_account
-    iam_cross_account
-    iam_role
-    iam_tenant
-    iam_user
-    lifecycle
-    lifecycle_expiration
-    lifecycle_transition
-    list_objects_v2
-    object_lock
-    role_policy
-    session_policy
-    s3select
-    s3website
-    s3website_routing_rules
-    s3website_redirect_location
-    sns
-    sse_s3
-    storage_class
-    tagging
-    test_of_sts
-    token_claims_trust_policy_test
-    token_principal_tag_role_policy_test
-    token_request_tag_trust_policy_test
-    token_resource_tags_test
-    token_role_tags_test
-    token_tag_keys_test
-    user_policy
-    versioning
-    webidentity_test
requirements.txt (15 deletions, 7 additions)

@@ -1,15 +1,7 @@
 PyYAML
-boto >=2.6.0
-boto3 >=1.0.0
-# botocore-1.28 broke v2 signatures, see https://tracker.ceph.com/issues/58059
-botocore <1.28.0
-munch >=2.0.0
+nose >=1.0.0
+boto >=2.0b4
+bunch >=1.0.0
 # 0.14 switches to libev, that means bootstrap needs to change too
-gevent >=1.0
+gevent ==0.13.6
 isodate >=0.4.4
-requests >=2.23.0
-pytz
-httplib2
-lxml
-pytest
-tox
s3tests.conf.SAMPLE (171 deletions)

@@ -1,171 +0,0 @@
-[DEFAULT]
-## this section is just used for host, port and bucket_prefix
-
-# host set for rgw in vstart.sh
-host = localhost
-
-# port set for rgw in vstart.sh
-port = 8000
-
-## say "False" to disable TLS
-is_secure = False
-
-## say "False" to disable SSL Verify
-ssl_verify = False
-
-[fixtures]
-## all the buckets created will start with this prefix;
-## {random} will be filled with random characters to pad
-## the prefix to 30 characters long, and avoid collisions
-bucket prefix = yournamehere-{random}-
-
-# all the iam account resources (users, roles, etc) created
-# will start with this name prefix
-iam name prefix = s3-tests-
-
-# all the iam account resources (users, roles, etc) created
-# will start with this path prefix
-iam path prefix = /s3-tests/
-
-[s3 main]
-# main display_name set in vstart.sh
-display_name = M. Tester
-
-# main user_idname set in vstart.sh
-user_id = testid
-
-# main email set in vstart.sh
-email = tester@ceph.com
-
-# zonegroup api_name for bucket location
-api_name = default
-
-## main AWS access key
-access_key = 0555b35654ad1656d804
-
-## main AWS secret key
-secret_key = h7GhxuBLTrlhVUyxSPUKUV8r/2EI4ngqJxD7iBdBYLhwluN30JaT3Q==
-
-## replace with key id obtained when secret is created, or delete if KMS not tested
-#kms_keyid = 01234567-89ab-cdef-0123-456789abcdef
-
-## Storage classes
-#storage_classes = "LUKEWARM, FROZEN"
-
-## Lifecycle debug interval (default: 10)
-#lc_debug_interval = 20
-
-[s3 alt]
-# alt display_name set in vstart.sh
-display_name = john.doe
-## alt email set in vstart.sh
-email = john.doe@example.com
-
-# alt user_id set in vstart.sh
-user_id = 56789abcdef0123456789abcdef0123456789abcdef0123456789abcdef01234
-
-# alt AWS access key set in vstart.sh
-access_key = NOPQRSTUVWXYZABCDEFG
-
-# alt AWS secret key set in vstart.sh
-secret_key = nopqrstuvwxyzabcdefghijklmnabcdefghijklm
-
-#[s3 cloud]
-## to run the testcases with "cloud_transition" attribute.
-## Note: the waiting time may have to tweaked depending on
-## the I/O latency to the cloud endpoint.
-
-## host set for cloud endpoint
-# host = localhost
-
-## port set for cloud endpoint
-# port = 8001
-
-## say "False" to disable TLS
-# is_secure = False
-
-## cloud endpoint credentials
-# access_key = 0555b35654ad1656d804
-# secret_key = h7GhxuBLTrlhVUyxSPUKUV8r/2EI4ngqJxD7iBdBYLhwluN30JaT3Q==
-
-## storage class configured as cloud tier on local rgw server
-# cloud_storage_class = CLOUDTIER
-
-## Below are optional -
-
-## Above configured cloud storage class config options
-# retain_head_object = false
-# target_storage_class = Target_SC
-# target_path = cloud-bucket
-
-## another regular storage class to test multiple transition rules,
-# storage_class = S1
-
-[s3 tenant]
-# tenant display_name set in vstart.sh
-display_name = testx$tenanteduser
-
-# tenant user_id set in vstart.sh
-user_id = 9876543210abcdef0123456789abcdef0123456789abcdef0123456789abcdef
-
-# tenant AWS secret key set in vstart.sh
-access_key = HIJKLMNOPQRSTUVWXYZA
-
-# tenant AWS secret key set in vstart.sh
-secret_key = opqrstuvwxyzabcdefghijklmnopqrstuvwxyzab
-
-# tenant email set in vstart.sh
-email = tenanteduser@example.com
-
-# tenant name
-tenant = testx
-
-#following section needs to be added for all sts-tests
-[iam]
-#used for iam operations in sts-tests
-#email from vstart.sh
-email = s3@example.com
-
-#user_id from vstart.sh
-user_id = 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
-
-#access_key from vstart.sh
-access_key = ABCDEFGHIJKLMNOPQRST
-
-#secret_key vstart.sh
-secret_key = abcdefghijklmnopqrstuvwxyzabcdefghijklmn
-
-#display_name from vstart.sh
-display_name = youruseridhere
-
-# iam account root user for iam_account tests
-[iam root]
-access_key = AAAAAAAAAAAAAAAAAAaa
-secret_key = aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
-user_id = RGW11111111111111111
-email = account1@ceph.com
-
-# iam account root user in a different account than [iam root]
-[iam alt root]
-access_key = BBBBBBBBBBBBBBBBBBbb
-secret_key = bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
-user_id = RGW22222222222222222
-email = account2@ceph.com
-
-#following section needs to be added when you want to run Assume Role With Webidentity test
-[webidentity]
-#used for assume role with web identity test in sts-tests
-#all parameters will be obtained from ceph/qa/tasks/keycloak.py
-token=<access_token>
-
-aud=<obtained after introspecting token>
-
-sub=<obtained after introspecting token>
-
-azp=<obtained after introspecting token>
-
-user_token=<access token for a user, with attribute Department=[Engineering, Marketing>]
-
-thumbprint=<obtained from x509 certificate>
-
-KC_REALM=<name of the realm>
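The suite points ``S3TEST_CONF`` at an INI file shaped like the sample above. A minimal sketch of reading such a file with Python's standard ``configparser``; the inline sample is a trimmed-down excerpt for illustration, and the real harness layers more handling on top of this:

```python
import configparser

SAMPLE = """\
[DEFAULT]
host = localhost
port = 8000

[s3 main]
access_key = 0555b35654ad1656d804
"""

conf = configparser.ConfigParser()
conf.read_string(SAMPLE)

# Section names such as "s3 main" may contain spaces, which
# configparser accepts; values in [DEFAULT] are inherited by
# every other section.
main = conf['s3 main']
```

This inheritance is why the sample file can set ``host`` and ``port`` once in ``[DEFAULT]`` and have every ``s3 *`` section pick them up.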

@@ -1,18 +1,13 @@
-import sys
-import configparser
 import boto.s3.connection
-import munch
+import bunch
 import itertools
 import os
 import random
 import string
 import yaml
-import re
-from lxml import etree
-
-from doctest import Example
-from lxml.doctestcompare import LXMLOutputChecker
 
-s3 = munch.Munch()
-config = munch.Munch()
+s3 = bunch.Bunch()
+config = bunch.Bunch()
 prefix = ''
 
 bucket_counter = itertools.count(1)

@@ -51,10 +46,10 @@ def nuke_bucket(bucket):
         while deleted_cnt:
             deleted_cnt = 0
             for key in bucket.list():
-                print('Cleaning bucket {bucket} key {key}'.format(
+                print 'Cleaning bucket {bucket} key {key}'.format(
                     bucket=bucket,
                     key=key,
-                    ))
+                    )
                 key.set_canned_acl('private')
                 key.delete()
                 deleted_cnt += 1

@@ -67,26 +62,26 @@ def nuke_bucket(bucket):
                 and e.body == ''):
             e.error_code = 'AccessDenied'
         if e.error_code != 'AccessDenied':
-            print('GOT UNWANTED ERROR', e.error_code)
+            print 'GOT UNWANTED ERROR', e.error_code
             raise
         # seems like we're not the owner of the bucket; ignore
         pass
 
 def nuke_prefixed_buckets():
-    for name, conn in list(s3.items()):
-        print('Cleaning buckets from connection {name}'.format(name=name))
+    for name, conn in s3.items():
+        print 'Cleaning buckets from connection {name}'.format(name=name)
         for bucket in conn.get_all_buckets():
             if bucket.name.startswith(prefix):
-                print('Cleaning bucket {bucket}'.format(bucket=bucket))
+                print 'Cleaning bucket {bucket}'.format(bucket=bucket)
                 nuke_bucket(bucket)
 
-    print('Done with cleanup of test buckets.')
+    print 'Done with cleanup of test buckets.'
 
 def read_config(fp):
-    config = munch.Munch()
+    config = bunch.Bunch()
     g = yaml.safe_load_all(fp)
     for new in g:
-        config.update(munch.Munchify(new))
+        config.update(bunch.bunchify(new))
     return config
 
 def connect(conf):

@@ -97,24 +92,12 @@ def connect(conf):
         access_key='aws_access_key_id',
         secret_key='aws_secret_access_key',
         )
-    kwargs = dict((mapping[k],v) for (k,v) in conf.items() if k in mapping)
-    #process calling_format argument
-    calling_formats = dict(
-        ordinary=boto.s3.connection.OrdinaryCallingFormat(),
-        subdomain=boto.s3.connection.SubdomainCallingFormat(),
-        vhost=boto.s3.connection.VHostCallingFormat(),
-        )
-    kwargs['calling_format'] = calling_formats['ordinary']
-    if 'calling_format' in conf:
-        raw_calling_format = conf['calling_format']
-        try:
-            kwargs['calling_format'] = calling_formats[raw_calling_format]
-        except KeyError:
-            raise RuntimeError(
-                'calling_format unknown: %r' % raw_calling_format
-                )
-    # TODO test vhost calling format
-    conn = boto.s3.connection.S3Connection(**kwargs)
+    kwargs = dict((mapping[k],v) for (k,v) in conf.iteritems() if k in mapping)
+    conn = boto.s3.connection.S3Connection(
+        # TODO support & test all variations
+        calling_format=boto.s3.connection.OrdinaryCallingFormat(),
+        **kwargs
+        )
     return conn
 
 def setup():

@@ -146,7 +129,7 @@ def setup():
             raise RuntimeError("Empty Prefix! Aborting!")
 
     defaults = config.s3.defaults
-    for section in list(config.s3.keys()):
+    for section in config.s3.keys():
         if section == 'defaults':
             continue
 

@@ -186,117 +169,3 @@ def get_new_bucket(connection=None):
 
 def teardown():
     nuke_prefixed_buckets()
 
-def with_setup_kwargs(setup, teardown=None):
-    """Decorator to add setup and/or teardown methods to a test function::
-
-      @with_setup_args(setup, teardown)
-      def test_something():
-          " ... "
-
-    The setup function should return (kwargs) which will be passed to
-    test function, and teardown function.
-
-    Note that `with_setup_kwargs` is useful *only* for test functions, not for test
-    methods or inside of TestCase subclasses.
-    """
-    def decorate(func):
-        kwargs = {}
-
-        def test_wrapped(*args, **kwargs2):
-            k2 = kwargs.copy()
-            k2.update(kwargs2)
-            k2['testname'] = func.__name__
-            func(*args, **k2)
-
-        test_wrapped.__name__ = func.__name__
-
-        def setup_wrapped():
-            k = setup()
-            kwargs.update(k)
-            if hasattr(func, 'setup'):
-                func.setup()
-        test_wrapped.setup = setup_wrapped
-
-        if teardown:
-            def teardown_wrapped():
-                if hasattr(func, 'teardown'):
-                    func.teardown()
-                teardown(**kwargs)
-
-            test_wrapped.teardown = teardown_wrapped
-        else:
-            if hasattr(func, 'teardown'):
-                test_wrapped.teardown = func.teardown()
-        return test_wrapped
-    return decorate
-
-# Demo case for the above, when you run test_gen():
-# _test_gen will run twice,
-# with the following stderr printing
-# setup_func {'b': 2}
-# testcase ('1',) {'b': 2, 'testname': '_test_gen'}
-# teardown_func {'b': 2}
-# setup_func {'b': 2}
-# testcase () {'b': 2, 'testname': '_test_gen'}
-# teardown_func {'b': 2}
-#
-#def setup_func():
-#    kwargs = {'b': 2}
-#    print("setup_func", kwargs, file=sys.stderr)
-#    return kwargs
-#
-#def teardown_func(**kwargs):
-#    print("teardown_func", kwargs, file=sys.stderr)
-#
-#@with_setup_kwargs(setup=setup_func, teardown=teardown_func)
-#def _test_gen(*args, **kwargs):
-#    print("testcase", args, kwargs, file=sys.stderr)
-#
-#def test_gen():
-#    yield _test_gen, '1'
-#    yield _test_gen
-
-def trim_xml(xml_str):
-    p = etree.XMLParser(encoding="utf-8", remove_blank_text=True)
-    xml_str = bytes(xml_str, "utf-8")
-    elem = etree.XML(xml_str, parser=p)
-    return etree.tostring(elem, encoding="unicode")
-
-def normalize_xml(xml, pretty_print=True):
-    if xml is None:
-        return xml
-
-    root = etree.fromstring(xml.encode(encoding='ascii'))
-
-    for element in root.iter('*'):
-        if element.text is not None and not element.text.strip():
-            element.text = None
-        if element.text is not None:
-            element.text = element.text.strip().replace("\n", "").replace("\r", "")
-        if element.tail is not None and not element.tail.strip():
-            element.tail = None
-        if element.tail is not None:
-            element.tail = element.tail.strip().replace("\n", "").replace("\r", "")
-
-    # Sort the elements
-    for parent in root.xpath('//*[./*]'):  # Search for parent elements
-        parent[:] = sorted(parent, key=lambda x: x.tag)
-
-    xmlstr = etree.tostring(root, encoding="unicode", pretty_print=pretty_print)
-    # there are two different DTD URIs
-    xmlstr = re.sub(r'xmlns="[^"]+"', 'xmlns="s3"', xmlstr)
-    xmlstr = re.sub(r'xmlns=\'[^\']+\'', 'xmlns="s3"', xmlstr)
-    for uri in ['http://doc.s3.amazonaws.com/doc/2006-03-01/', 'http://s3.amazonaws.com/doc/2006-03-01/']:
-        xmlstr = xmlstr.replace(uri, 'URI-DTD')
-    #xmlstr = re.sub(r'>\s+', '>', xmlstr, count=0, flags=re.MULTILINE)
-    return xmlstr
-
-def assert_xml_equal(got, want):
-    assert want is not None, 'Wanted XML cannot be None'
-    if got is None:
-        raise AssertionError('Got input to validate was None')
-    checker = LXMLOutputChecker()
-    if not checker.check_output(want, got, 0):
-        message = checker.output_difference(Example("", want), got, 0)
-        raise AssertionError(message)
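The ``read_config`` change above swaps ``munch`` for ``bunch``; both libraries provide dictionaries whose keys are also attributes, which is what lets the harness write ``config.s3.defaults``. A minimal stand-in showing just that attribute-access behavior (``AttrDict`` is a made-up name for illustration, not part of either library):

```python
class AttrDict(dict):
    # Expose dict keys as attributes, in the style of bunch.Bunch
    # and munch.Munch.
    def __getattr__(self, name):
        try:
            return self[name]
        except KeyError:
            raise AttributeError(name)

    def __setattr__(self, name, value):
        self[name] = value

config = AttrDict()
config.update({'host': 'localhost', 'port': 8000})
# config.host and config['host'] now refer to the same value.
```

Raising ``AttributeError`` (not ``KeyError``) from ``__getattr__`` keeps ``hasattr`` and default attribute lookup working as expected.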
s3tests/functional/AnonymousAuth.py (new file, 5 additions)

@@ -0,0 +1,5 @@
+from boto.auth_handler import AuthHandler
+
+class AnonymousAuthHandler(AuthHandler):
+    def add_auth(self, http_request, **kwargs):
+        return # Nothing to do for anonymous access!
|
@@ -1,38 +1,22 @@
-import sys
-import configparser
+import ConfigParser
 import boto.exception
 import boto.s3.connection
-import munch
+import bunch
 import itertools
 import os
 import random
 import string
-import pytest
-
-from http.client import HTTPConnection, HTTPSConnection
-from urllib.parse import urlparse
-
-from .utils import region_sync_meta
-
-s3 = munch.Munch()
-config = munch.Munch()
-targets = munch.Munch()
+
+s3 = bunch.Bunch()
+config = bunch.Bunch()
 
 # this will be assigned by setup()
 prefix = None
 
-calling_formats = dict(
-    ordinary=boto.s3.connection.OrdinaryCallingFormat(),
-    subdomain=boto.s3.connection.SubdomainCallingFormat(),
-    vhost=boto.s3.connection.VHostCallingFormat(),
-)
-
 def get_prefix():
     assert prefix is not None
     return prefix
 
-def is_slow_backend():
-    return slow_backend
-
 def choose_bucket_prefix(template, max_len=30):
     """
     Choose a prefix for our test buckets, so they're easy to identify.
@@ -58,209 +42,38 @@ def choose_bucket_prefix(template, max_len=30):
         )
 
 
-def nuke_prefixed_buckets_on_conn(prefix, name, conn):
-    print('Cleaning buckets from connection {name} prefix {prefix!r}.'.format(
-        name=name,
-        prefix=prefix,
-        ))
-    for bucket in conn.get_all_buckets():
-        print('prefix=',prefix)
-        if bucket.name.startswith(prefix):
-            print('Cleaning bucket {bucket}'.format(bucket=bucket))
-            success = False
-            for i in range(2):
-                try:
-                    try:
-                        iterator = iter(bucket.list_versions())
-                        # peek into iterator to issue list operation
-                        try:
-                            keys = itertools.chain([next(iterator)], iterator)
-                        except StopIteration:
-                            keys = []  # empty iterator
-                    except boto.exception.S3ResponseError as e:
-                        # some S3 implementations do not support object
-                        # versioning - fall back to listing without versions
-                        if e.error_code != 'NotImplemented':
-                            raise e
-                        keys = bucket.list();
-                    for key in keys:
-                        print('Cleaning bucket {bucket} key {key}'.format(
-                            bucket=bucket,
-                            key=key,
-                            ))
-                        # key.set_canned_acl('private')
-                        bucket.delete_key(key.name, version_id = key.version_id)
-                    try:
-                        bucket.delete()
-                    except boto.exception.S3ResponseError as e:
-                        # if DELETE times out, the retry may see NoSuchBucket
-                        if e.error_code != 'NoSuchBucket':
-                            raise e
-                        pass
-                    success = True
-                except boto.exception.S3ResponseError as e:
-                    if e.error_code != 'AccessDenied':
-                        print('GOT UNWANTED ERROR', e.error_code)
-                        raise
-                    # seems like we don't have permissions set appropriately, we'll
-                    # modify permissions and retry
-                    pass
-                if success:
-                    break
-                bucket.set_canned_acl('private')
-
-def nuke_prefixed_buckets(prefix):
-    # If no regions are specified, use the simple method
-    if targets.main.master == None:
-        for name, conn in list(s3.items()):
-            print('Deleting buckets on {name}'.format(name=name))
-            nuke_prefixed_buckets_on_conn(prefix, name, conn)
-    else:
-        # First, delete all buckets on the master connection
-        for name, conn in list(s3.items()):
-            if conn == targets.main.master.connection:
-                print('Deleting buckets on {name} (master)'.format(name=name))
-                nuke_prefixed_buckets_on_conn(prefix, name, conn)
-
-        # Then sync to propagate deletes to secondaries
-        region_sync_meta(targets.main, targets.main.master.connection)
-        print('region-sync in nuke_prefixed_buckets')
-
-        # Now delete remaining buckets on any other connection
-        for name, conn in list(s3.items()):
-            if conn != targets.main.master.connection:
-                print('Deleting buckets on {name} (non-master)'.format(name=name))
-                nuke_prefixed_buckets_on_conn(prefix, name, conn)
-
-    print('Done with cleanup of test buckets.')
-
-class TargetConfig:
-    def __init__(self, cfg, section):
-        self.port = None
-        self.api_name = ''
-        self.is_master = False
-        self.is_secure = False
-        self.sync_agent_addr = None
-        self.sync_agent_port = 0
-        self.sync_meta_wait = 0
-        try:
-            self.api_name = cfg.get(section, 'api_name')
-        except (configparser.NoSectionError, configparser.NoOptionError):
-            pass
-        try:
-            self.port = cfg.getint(section, 'port')
-        except configparser.NoOptionError:
-            pass
-        try:
-            self.host=cfg.get(section, 'host')
-        except configparser.NoOptionError:
-            raise RuntimeError(
-                'host not specified for section {s}'.format(s=section)
-            )
-        try:
-            self.is_master=cfg.getboolean(section, 'is_master')
-        except configparser.NoOptionError:
-            pass
-
-        try:
-            self.is_secure=cfg.getboolean(section, 'is_secure')
-        except configparser.NoOptionError:
-            pass
-
-        try:
-            raw_calling_format = cfg.get(section, 'calling_format')
-        except configparser.NoOptionError:
-            raw_calling_format = 'ordinary'
-
-        try:
-            self.sync_agent_addr = cfg.get(section, 'sync_agent_addr')
-        except (configparser.NoSectionError, configparser.NoOptionError):
-            pass
-
-        try:
-            self.sync_agent_port = cfg.getint(section, 'sync_agent_port')
-        except (configparser.NoSectionError, configparser.NoOptionError):
-            pass
-
-        try:
-            self.sync_meta_wait = cfg.getint(section, 'sync_meta_wait')
-        except (configparser.NoSectionError, configparser.NoOptionError):
-            pass
-
-        try:
-            self.calling_format = calling_formats[raw_calling_format]
-        except KeyError:
-            raise RuntimeError(
-                'calling_format unknown: %r' % raw_calling_format
-            )
-
-class TargetConnection:
-    def __init__(self, conf, conn):
-        self.conf = conf
-        self.connection = conn
-
-class RegionsInfo:
-    def __init__(self):
-        self.m = munch.Munch()
-        self.master = None
-        self.secondaries = []
-    def add(self, name, region_config):
-        self.m[name] = region_config
-        if (region_config.is_master):
-            if not self.master is None:
-                raise RuntimeError(
-                    'multiple regions defined as master'
-                )
-            self.master = region_config
-        else:
-            self.secondaries.append(region_config)
-    def get(self, name):
-        return self.m[name]
-    def get(self):
-        return self.m
-    def items(self):
-        return self.m.items()
-
-regions = RegionsInfo()
-
-class RegionsConn:
-    def __init__(self):
-        self.m = munch.Munch()
-        self.default = None
-        self.master = None
-        self.secondaries = []
-
-    def items(self):
-        return self.m.items()
-
-    def set_default(self, conn):
-        self.default = conn
-
-    def add(self, name, conn):
-        self.m[name] = conn
-        if not self.default:
-            self.default = conn
-        if (conn.conf.is_master):
-            self.master = conn
-        else:
-            self.secondaries.append(conn)
-
-# nosetests --processes=N with N>1 is safe
-_multiprocess_can_split_ = True
-
+def nuke_prefixed_buckets(prefix):
+    for name, conn in s3.items():
+        print 'Cleaning buckets from connection {name} prefix {prefix!r}.'.format(
+            name=name,
+            prefix=prefix,
+            )
+        for bucket in conn.get_all_buckets():
+            if bucket.name.startswith(prefix):
+                print 'Cleaning bucket {bucket}'.format(bucket=bucket)
+                try:
+                    bucket.set_canned_acl('private')
+                    for key in bucket.list():
+                        print 'Cleaning bucket {bucket} key {key}'.format(
+                            bucket=bucket,
+                            key=key,
+                            )
+                        key.set_canned_acl('private')
+                        key.delete()
+                    bucket.delete()
+                except boto.exception.S3ResponseError as e:
+                    if e.error_code != 'AccessDenied':
+                        print 'GOT UNWANTED ERROR', e.error_code
+                        raise
+                    # seems like we're not the owner of the bucket; ignore
+                    pass
+
+    print 'Done with cleanup of test buckets.'
+
 def setup():
-    cfg = configparser.RawConfigParser()
+    cfg = ConfigParser.RawConfigParser()
     try:
        path = os.environ['S3TEST_CONF']
    except KeyError:
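The `setup()` code above keys its behaviour off section names of the form `<type> <name>` (for example `[s3 main]` or `[region foo]`) in the file pointed to by `S3TEST_CONF`. A small self-contained sketch of that parsing idiom — the config text, section names, and values here are made up for illustration:

```python
import configparser

# Hypothetical config text in the same "[s3 main]" section style setup() reads.
SAMPLE = """
[fixtures]
bucket prefix = test-{random}-

[s3 main]
host = s3.example.com
port = 8000
is_secure = no
"""

cfg = configparser.RawConfigParser()
cfg.read_string(SAMPLE)

connections = {}
for section in cfg.sections():
    try:
        (type_, name) = section.split(None, 1)
    except ValueError:
        continue  # sections like [fixtures] have no "<type> <name>" shape
    if type_ != 's3':
        continue
    connections[name] = dict(
        host=cfg.get(section, 'host'),
        port=cfg.getint(section, 'port'),
        is_secure=cfg.getboolean(section, 'is_secure'),
    )
```

`RawConfigParser` is used deliberately, as in the tests: it performs no value interpolation, so templates like `test-{random}-` pass through untouched.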
@@ -268,41 +81,18 @@ def setup():
             'To run tests, point environment '
             + 'variable S3TEST_CONF to a config file.',
             )
-    cfg.read(path)
+    with file(path) as f:
+        cfg.readfp(f)
 
     global prefix
-    global targets
-    global slow_backend
-
     try:
         template = cfg.get('fixtures', 'bucket prefix')
-    except (configparser.NoSectionError, configparser.NoOptionError):
+    except (ConfigParser.NoSectionError, ConfigParser.NoOptionError):
         template = 'test-{random}-'
     prefix = choose_bucket_prefix(template=template)
 
-    try:
-        slow_backend = cfg.getboolean('fixtures', 'slow backend')
-    except (configparser.NoSectionError, configparser.NoOptionError):
-        slow_backend = False
-
-    # pull the default_region out, if it exists
-    try:
-        default_region = cfg.get('fixtures', 'default_region')
-    except (configparser.NoSectionError, configparser.NoOptionError):
-        default_region = None
-
     s3.clear()
     config.clear()
 
-    for section in cfg.sections():
-        try:
-            (type_, name) = section.split(None, 1)
-        except ValueError:
-            continue
-        if type_ != 'region':
-            continue
-        regions.add(name, TargetConfig(cfg, section))
-
     for section in cfg.sections():
         try:
             (type_, name) = section.split(None, 1)
@@ -310,52 +100,31 @@ def setup():
             continue
         if type_ != 's3':
             continue
 
-        if len(regions.get()) == 0:
-            regions.add("default", TargetConfig(cfg, section))
-
-        config[name] = munch.Munch()
+        try:
+            port = cfg.getint(section, 'port')
+        except ConfigParser.NoOptionError:
+            port = None
+
+        config[name] = bunch.Bunch()
         for var in [
             'user_id',
             'display_name',
             'email',
-            's3website_domain',
-            'host',
-            'port',
-            'is_secure',
-            'kms_keyid',
-            'storage_classes',
             ]:
             try:
                 config[name][var] = cfg.get(section, var)
-            except configparser.NoOptionError:
+            except ConfigParser.NoOptionError:
                 pass
-
-        targets[name] = RegionsConn()
-
-        for (k, conf) in regions.items():
-            conn = boto.s3.connection.S3Connection(
-                aws_access_key_id=cfg.get(section, 'access_key'),
-                aws_secret_access_key=cfg.get(section, 'secret_key'),
-                is_secure=conf.is_secure,
-                port=conf.port,
-                host=conf.host,
-                # TODO test vhost calling format
-                calling_format=conf.calling_format,
-                )
-
-            temp_targetConn = TargetConnection(conf, conn)
-            targets[name].add(k, temp_targetConn)
-
-            # Explicitly test for and set the default region, if specified.
-            # If it was not specified, use the 'is_master' flag to set it.
-            if default_region:
-                if default_region == name:
-                    targets[name].set_default(temp_targetConn)
-            elif conf.is_master:
-                targets[name].set_default(temp_targetConn)
-
-        s3[name] = targets[name].default.connection
+        conn = boto.s3.connection.S3Connection(
+            aws_access_key_id=cfg.get(section, 'access_key'),
+            aws_secret_access_key=cfg.get(section, 'secret_key'),
+            is_secure=cfg.getboolean(section, 'is_secure'),
+            port=port,
+            host=cfg.get(section, 'host'),
+            # TODO support & test all variations
+            calling_format=boto.s3.connection.OrdinaryCallingFormat(),
+            )
+        s3[name] = conn
 
     # WARNING! we actively delete all buckets we see with the prefix
     # we've chosen! Choose your prefix with care, and don't reuse
@@ -371,15 +140,6 @@ def teardown():
     # remove our buckets here also, to avoid littering
     nuke_prefixed_buckets(prefix=prefix)
 
-@pytest.fixture(scope="package")
-def configfile():
-    setup()
-    yield config
-
-@pytest.fixture(autouse=True)
-def setup_teardown(configfile):
-    yield
-    teardown()
-
 bucket_counter = itertools.count(1)
@@ -399,100 +159,18 @@ def get_new_bucket_name():
     return name
 
 
-def get_new_bucket(target=None, name=None, headers=None):
+def get_new_bucket(connection=None):
     """
     Get a bucket that exists and is empty.
 
     Always recreates a bucket from scratch. This is useful to also
     reset ACLs and such.
     """
-    if target is None:
-        target = targets.main.default
-    connection = target.connection
-    if name is None:
-        name = get_new_bucket_name()
+    if connection is None:
+        connection = s3.main
+    name = get_new_bucket_name()
     # the only way for this to fail with a pre-existing bucket is if
     # someone raced us between setup nuke_prefixed_buckets and here;
     # ignore that as astronomically unlikely
-    bucket = connection.create_bucket(name, location=target.conf.api_name, headers=headers)
+    bucket = connection.create_bucket(name)
     return bucket
-
-def _make_request(method, bucket, key, body=None, authenticated=False, response_headers=None, request_headers=None, expires_in=100000, path_style=True, timeout=None):
-    """
-    issue a request for a specified method, on a specified <bucket,key>,
-    with a specified (optional) body (encrypted per the connection), and
-    return the response (status, reason).
-
-    If key is None, then this will be treated as a bucket-level request.
-
-    If the request or response headers are None, then default values will be
-    provided by later methods.
-    """
-    if not path_style:
-        conn = bucket.connection
-        request_headers['Host'] = conn.calling_format.build_host(conn.server_name(), bucket.name)
-
-    if authenticated:
-        urlobj = None
-        if key is not None:
-            urlobj = key
-        elif bucket is not None:
-            urlobj = bucket
-        else:
-            raise RuntimeError('Unable to find bucket name')
-        url = urlobj.generate_url(expires_in, method=method, response_headers=response_headers, headers=request_headers)
-        o = urlparse(url)
-        path = o.path + '?' + o.query
-    else:
-        bucketobj = None
-        if key is not None:
-            path = '/{obj}'.format(obj=key.name)
-            bucketobj = key.bucket
-        elif bucket is not None:
-            path = '/'
-            bucketobj = bucket
-        else:
-            raise RuntimeError('Unable to find bucket name')
-        if path_style:
-            path = '/{bucket}'.format(bucket=bucketobj.name) + path
-
-    return _make_raw_request(host=s3.main.host, port=s3.main.port, method=method, path=path, body=body, request_headers=request_headers, secure=s3.main.is_secure, timeout=timeout)
-
-def _make_bucket_request(method, bucket, body=None, authenticated=False, response_headers=None, request_headers=None, expires_in=100000, path_style=True, timeout=None):
-    """
-    issue a request for a specified method, on a specified <bucket>,
-    with a specified (optional) body (encrypted per the connection), and
-    return the response (status, reason)
-    """
-    return _make_request(method=method, bucket=bucket, key=None, body=body, authenticated=authenticated, response_headers=response_headers, request_headers=request_headers, expires_in=expires_in, path_style=path_style, timeout=timeout)
-
-def _make_raw_request(host, port, method, path, body=None, request_headers=None, secure=False, timeout=None):
-    """
-    issue a request to a specific host & port, for a specified method, on a
-    specified path with a specified (optional) body (encrypted per the
-    connection), and return the response (status, reason).
-
-    This allows construction of special cases not covered by the bucket/key to
-    URL mapping of _make_request/_make_bucket_request.
-    """
-    if secure:
-        class_ = HTTPSConnection
-    else:
-        class_ = HTTPConnection
-
-    if request_headers is None:
-        request_headers = {}
-
-    c = class_(host, port=port, timeout=timeout)
-
-    # TODO: We might have to modify this in future if we need to interact with
-    # how httplib.request handles Accept-Encoding and Host.
-    c.request(method, path, body=body, headers=request_headers)
-
-    res = c.getresponse()
-    #c.close()
-
-    print(res.status, res.reason)
-    return res
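The `_make_raw_request` helper removed above is a thin wrapper around `http.client`, useful for sending requests that a high-level S3 client would refuse to construct. A minimal standalone sketch of the same shape, exercised against a throwaway local HTTP server rather than an S3 endpoint (all names here are illustrative):

```python
from http.client import HTTPConnection
import http.server
import threading

def make_raw_request(host, port, method, path, body=None, headers=None, timeout=None):
    # same shape as the removed helper: open a connection, issue the
    # request verbatim, and hand back the http.client response object
    conn = HTTPConnection(host, port=port, timeout=timeout)
    conn.request(method, path, body=body, headers=headers or {})
    return conn.getresponse()

# exercise it against a local stand-in server (not an S3 endpoint)
class _Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b'ok')
    def log_message(self, *args):
        pass  # keep the demo quiet

server = http.server.HTTPServer(('127.0.0.1', 0), _Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
res = make_raw_request('127.0.0.1', server.server_address[1], 'GET', '/')
payload = res.read()
server.shutdown()
```

Because the raw path and headers go out exactly as given, this style of helper is what lets the suite probe malformed or unsigned requests that boto would normally normalize away.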
@@ -1,46 +0,0 @@
-import json
-
-class Statement(object):
-    def __init__(self, action, resource, principal = {"AWS" : "*"}, effect= "Allow", condition = None):
-        self.principal = principal
-        self.action = action
-        self.resource = resource
-        self.condition = condition
-        self.effect = effect
-
-    def to_dict(self):
-        d = { "Action" : self.action,
-            "Principal" : self.principal,
-            "Effect" : self.effect,
-            "Resource" : self.resource
-        }
-
-        if self.condition is not None:
-            d["Condition"] = self.condition
-
-        return d
-
-class Policy(object):
-    def __init__(self):
-        self.statements = []
-
-    def add_statement(self, s):
-        self.statements.append(s)
-        return self
-
-    def to_json(self):
-        policy_dict = {
-            "Version" : "2012-10-17",
-            "Statement":
-            [s.to_dict() for s in self.statements]
-        }
-
-        return json.dumps(policy_dict)
-
-def make_json_policy(action, resource, principal={"AWS": "*"}, conditions=None):
-    """
-    Helper function to make single statement policies
-    """
-    s = Statement(action, resource, principal, condition=conditions)
-    p = Policy()
-    return p.add_statement(s).to_json()
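The deleted policy helpers above boil down to serializing a one-statement IAM-style policy document. A compact sketch of the same behaviour (the bucket ARN is a placeholder, not a resource from the tests):

```python
import json

def make_json_policy(action, resource, principal=None, conditions=None):
    # mirrors the removed helper: a single Allow statement under the
    # standard "2012-10-17" policy version
    statement = {
        "Action": action,
        "Principal": principal or {"AWS": "*"},
        "Effect": "Allow",
        "Resource": resource,
    }
    if conditions is not None:
        statement["Condition"] = conditions
    return json.dumps({"Version": "2012-10-17", "Statement": [statement]})

policy = make_json_policy("s3:GetObject", "arn:aws:s3:::example-bucket/*")
```

Splitting `Statement` from `Policy`, as the removed code did, only matters when tests need multi-statement documents; the single-statement case collapses to the function above.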
File diff suppressed because it is too large
@@ -1,9 +0,0 @@
-from . import utils
-
-def test_generate():
-    FIVE_MB = 5 * 1024 * 1024
-    assert len(''.join(utils.generate_random(0))) == 0
-    assert len(''.join(utils.generate_random(1))) == 1
-    assert len(''.join(utils.generate_random(FIVE_MB - 1))) == FIVE_MB - 1
-    assert len(''.join(utils.generate_random(FIVE_MB))) == FIVE_MB
-    assert len(''.join(utils.generate_random(FIVE_MB + 1))) == FIVE_MB + 1
@@ -1,8 +1,3 @@
-import random
-import requests
-import string
-import time
-
 def assert_raises(excClass, callableObj, *args, **kwargs):
     """
     Like unittest.TestCase.assertRaises, but returns the exception.
@@ -17,45 +12,3 @@ def assert_raises(excClass, callableObj, *args, **kwargs):
     else:
         excName = str(excClass)
     raise AssertionError("%s not raised" % excName)
-
-def generate_random(size, part_size=5*1024*1024):
-    """
-    Generate the specified number random data.
-    (actually each MB is a repetition of the first KB)
-    """
-    chunk = 1024
-    allowed = string.ascii_letters
-    for x in range(0, size, part_size):
-        strpart = ''.join([allowed[random.randint(0, len(allowed) - 1)] for _ in range(chunk)])
-        s = ''
-        left = size - x
-        this_part_size = min(left, part_size)
-        for y in range(this_part_size // chunk):
-            s = s + strpart
-        s = s + strpart[:(this_part_size % chunk)]
-        yield s
-        if (x == size):
-            return
-
-# syncs all the regions except for the one passed in
-def region_sync_meta(targets, region):
-    for (k, r) in targets.items():
-        if r == region:
-            continue
-        conf = r.conf
-        if conf.sync_agent_addr:
-            ret = requests.post('http://{addr}:{port}/metadata/incremental'.format(addr = conf.sync_agent_addr, port = conf.sync_agent_port))
-            assert ret.status_code == 200
-        if conf.sync_meta_wait:
-            time.sleep(conf.sync_meta_wait)
-
-def get_grantee(policy, permission):
-    '''
-    Given an object/bucket policy, extract the grantee with the required permission
-    '''
-    for g in policy.acl.grants:
-        if g.permission == permission:
-            return g.id
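The removed `generate_random` streams arbitrarily large pseudorandom payloads cheaply by repeating a single 1 KiB chunk within each yielded part, so only one chunk ever has to be generated per part. A trimmed, runnable Python 3 equivalent of the same idea:

```python
import random
import string

def generate_random(size, part_size=5 * 1024 * 1024):
    # each yielded part repeats one 1 KiB pseudorandom chunk, so a
    # multi-gigabyte payload costs only 1 KiB of RNG work per part
    chunk = 1024
    allowed = string.ascii_letters
    for x in range(0, size, part_size):
        strpart = ''.join(random.choice(allowed) for _ in range(chunk))
        this_part_size = min(size - x, part_size)
        s = strpart * (this_part_size // chunk)
        s += strpart[:this_part_size % chunk]
        yield s
```

The deleted `test_generate` above checked exactly this contract: the concatenated parts have precisely the requested length, including the off-by-one boundaries around the 5 MiB part size.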
s3tests/generate_objects.py (new file, 115 lines)
@@ -0,0 +1,115 @@
+from boto.s3.key import Key
+from optparse import OptionParser
+from . import realistic
+import traceback
+import random
+from . import common
+import sys
+
+def parse_opts():
+    parser = OptionParser()
+    parser.add_option('-O', '--outfile', help='write output to FILE. Defaults to STDOUT', metavar='FILE')
+    parser.add_option('-b', '--bucket', dest='bucket', help='push objects to BUCKET', metavar='BUCKET')
+    parser.add_option('--seed', dest='seed', help='optional seed for the random number generator')
+
+    return parser.parse_args()
+
+def get_random_files(quantity, mean, stddev, seed):
+    """Create file-like objects with pseudorandom contents.
+    IN:
+        number of files to create
+        mean file size in bytes
+        standard deviation from mean file size
+        seed for PRNG
+    OUT:
+        list of file handles
+    """
+    file_generator = realistic.files(mean, stddev, seed)
+    return [file_generator.next() for _ in xrange(quantity)]
+
+def upload_objects(bucket, files, seed):
+    """Upload a bunch of files to an S3 bucket
+    IN:
+        boto S3 bucket object
+        list of file handles to upload
+        seed for PRNG
+    OUT:
+        list of boto S3 key objects
+    """
+    keys = []
+    name_generator = realistic.names(15, 4, seed=seed)
+
+    for fp in files:
+        print >> sys.stderr, 'sending file with size %dB' % fp.size
+        key = Key(bucket)
+        key.key = name_generator.next()
+        key.set_contents_from_file(fp)
+        key.set_acl('public-read')
+        keys.append(key)
+
+    return keys
+
+def _main():
+    '''To run the static content load test, make sure you've bootstrapped your
+    test environment and set up your config.yaml file, then run the following:
+        S3TEST_CONF=config.yaml virtualenv/bin/python generate_objects.py -O urls.txt --seed 1234
+
+    This creates a bucket with your S3 credentials (from config.yaml) and
+    fills it with garbage objects as described in generate_objects.conf.
+    It writes a list of URLS to those objects to ./urls.txt.
+
+    Once you have objects in your bucket, run the siege benchmarking program:
+        siege --rc ./siege.conf -r 5
+
+    This tells siege to read the ./siege.conf config file which tells it to
+    use the urls in ./urls.txt and log to ./siege.log. It hits each url in
+    urls.txt 5 times (-r flag).
+
+    Results are printed to the terminal and written in CSV format to
+    ./siege.log
+    '''
+    (options, args) = parse_opts()
+
+    #SETUP
+    random.seed(options.seed if options.seed else None)
+    conn = common.s3.main
+
+    if options.outfile:
+        OUTFILE = open(options.outfile, 'w')
+    elif common.config.file_generation.url_file:
+        OUTFILE = open(common.config.file_generation.url_file, 'w')
+    else:
+        OUTFILE = sys.stdout
+
+    if options.bucket:
+        bucket = conn.create_bucket(options.bucket)
+    else:
+        bucket = common.get_new_bucket()
+
+    bucket.set_acl('public-read')
+    keys = []
+    print >> OUTFILE, 'bucket: %s' % bucket.name
+    print >> sys.stderr, 'setup complete, generating files'
+    for profile in common.config.file_generation.groups:
+        seed = random.random()
+        files = get_random_files(profile[0], profile[1], profile[2], seed)
+        keys += upload_objects(bucket, files, seed)
+
+    print >> sys.stderr, 'finished sending files. generating urls'
+    for key in keys:
+        print >> OUTFILE, key.generate_url(0, query_auth=False)
+
+    print >> sys.stderr, 'done'
+
+def main():
+    common.setup()
+    try:
+        _main()
+    except Exception as e:
+        traceback.print_exc()
+    common.teardown()
s3tests/readwrite.py (new file, 230 lines)
@@ -0,0 +1,230 @@
+import gevent
+import gevent.pool
+import gevent.queue
+import gevent.monkey; gevent.monkey.patch_all()
+import itertools
+import optparse
+import os
+import sys
+import time
+import traceback
+import random
+import yaml
+
+import realistic
+import common
+
+NANOSECOND = int(1e9)
+
+def reader(bucket, worker_id, file_names, queue, rand):
+    while True:
+        objname = rand.choice(file_names)
+        key = bucket.new_key(objname)
+
+        fp = realistic.FileVerifier()
+        result = dict(
+            type='r',
+            bucket=bucket.name,
+            key=key.name,
+            worker=worker_id,
+            )
+
+        start = time.time()
+        try:
+            key.get_contents_to_file(fp)
+        except gevent.GreenletExit:
+            raise
+        except Exception as e:
+            # stop timer ASAP, even on errors
+            end = time.time()
+            result.update(
+                error=dict(
+                    msg=str(e),
+                    traceback=traceback.format_exc(),
+                    ),
+                )
+            # certain kinds of programmer errors make this a busy
+            # loop; let parent greenlet get some time too
+            time.sleep(0)
+        else:
+            end = time.time()
+
+            if not fp.valid():
+                result.update(
+                    error=dict(
+                        msg='md5sum check failed',
+                        ),
+                    )
+
+        elapsed = end - start
+        result.update(
+            start=start,
+            duration=int(round(elapsed * NANOSECOND)),
+            chunks=fp.chunks,
+            )
+        queue.put(result)
+
+def writer(bucket, worker_id, file_names, files, queue, rand):
+    while True:
+        fp = next(files)
+        objname = rand.choice(file_names)
+        key = bucket.new_key(objname)
+
+        result = dict(
+            type='w',
+            bucket=bucket.name,
+            key=key.name,
+            worker=worker_id,
+            )
+
+        start = time.time()
+        try:
+            key.set_contents_from_file(fp)
+        except gevent.GreenletExit:
+            raise
+        except Exception as e:
+            # stop timer ASAP, even on errors
+            end = time.time()
+            result.update(
+                error=dict(
+                    msg=str(e),
+                    traceback=traceback.format_exc(),
+                    ),
+                )
+            # certain kinds of programmer errors make this a busy
+            # loop; let parent greenlet get some time too
+            time.sleep(0)
+        else:
+            end = time.time()
+
+        elapsed = end - start
+        result.update(
+            start=start,
+            duration=int(round(elapsed * NANOSECOND)),
+            chunks=fp.last_chunks,
+            )
+        queue.put(result)
+
+def parse_options():
+    parser = optparse.OptionParser(
+        usage='%prog [OPTS] <CONFIG_YAML',
+        )
+    parser.add_option("--no-cleanup", dest="cleanup", action="store_false",
+        help="skip cleaning up all created buckets", default=True)
+
+    return parser.parse_args()
+
+def write_file(bucket, file_name, fp):
+    """
+    Write a single file to the bucket using the file_name.
+    This is used during the warmup to initialize the files.
+    """
+    key = bucket.new_key(file_name)
+    key.set_contents_from_file(fp)
+
+def main():
+    # parse options
+    (options, args) = parse_options()
+
+    if os.isatty(sys.stdin.fileno()):
+        raise RuntimeError('Need configuration in stdin.')
+    config = common.read_config(sys.stdin)
+    conn = common.connect(config.s3)
+    bucket = None
+
+    try:
+        # setup
+        real_stdout = sys.stdout
+        sys.stdout = sys.stderr
+
+        # verify all required config items are present
+        if 'readwrite' not in config:
+            raise RuntimeError('readwrite section not found in config')
+        for item in ['readers', 'writers', 'duration', 'files', 'bucket']:
+            if item not in config.readwrite:
+                raise RuntimeError("Missing readwrite config item: {item}".format(item=item))
+        for item in ['num', 'size', 'stddev']:
+            if item not in config.readwrite.files:
+                raise RuntimeError("Missing readwrite config item: files.{item}".format(item=item))
|
||||||
|
|
||||||
|
seeds = dict(config.readwrite.get('random_seed', {}))
|
||||||
|
seeds.setdefault('main', random.randrange(2**32))
|
||||||
|
|
||||||
|
rand = random.Random(seeds['main'])
|
||||||
|
|
||||||
|
for name in ['names', 'contents', 'writer', 'reader']:
|
||||||
|
seeds.setdefault(name, rand.randrange(2**32))
|
||||||
|
|
||||||
|
print 'Using random seeds: {seeds}'.format(seeds=seeds)
|
||||||
|
|
||||||
|
# setup bucket and other objects
|
||||||
|
bucket_name = common.choose_bucket_prefix(config.readwrite.bucket, max_len=30)
|
||||||
|
bucket = conn.create_bucket(bucket_name)
|
||||||
|
print "Created bucket: {name}".format(name=bucket.name)
|
||||||
|
file_names = realistic.names(
|
||||||
|
mean=15,
|
||||||
|
stddev=4,
|
||||||
|
seed=seeds['names'],
|
||||||
|
)
|
||||||
|
file_names = itertools.islice(file_names, config.readwrite.files.num)
|
||||||
|
file_names = list(file_names)
|
||||||
|
files = realistic.files(
|
||||||
|
mean=1024 * config.readwrite.files.size,
|
||||||
|
stddev=1024 * config.readwrite.files.stddev,
|
||||||
|
seed=seeds['contents'],
|
||||||
|
)
|
||||||
|
q = gevent.queue.Queue()
|
||||||
|
|
||||||
|
# warmup - get initial set of files uploaded
|
||||||
|
print "Uploading initial set of {num} files".format(num=config.readwrite.files.num)
|
||||||
|
warmup_pool = gevent.pool.Pool(size=100)
|
||||||
|
for file_name in file_names:
|
||||||
|
fp = next(files)
|
||||||
|
warmup_pool.spawn_link_exception(
|
||||||
|
write_file,
|
||||||
|
bucket=bucket,
|
||||||
|
file_name=file_name,
|
||||||
|
fp=fp,
|
||||||
|
)
|
||||||
|
warmup_pool.join()
|
||||||
|
|
||||||
|
# main work
|
||||||
|
print "Starting main worker loop."
|
||||||
|
print "Using file size: {size} +- {stddev}".format(size=config.readwrite.files.size, stddev=config.readwrite.files.stddev)
|
||||||
|
print "Spawning {w} writers and {r} readers...".format(w=config.readwrite.writers, r=config.readwrite.readers)
|
||||||
|
group = gevent.pool.Group()
|
||||||
|
rand_writer = random.Random(seeds['writer'])
|
||||||
|
for x in xrange(config.readwrite.writers):
|
||||||
|
this_rand = random.Random(rand_writer.randrange(2**32))
|
||||||
|
group.spawn_link_exception(
|
||||||
|
writer,
|
||||||
|
bucket=bucket,
|
||||||
|
worker_id=x,
|
||||||
|
file_names=file_names,
|
||||||
|
files=files,
|
||||||
|
queue=q,
|
||||||
|
rand=this_rand,
|
||||||
|
)
|
||||||
|
rand_reader = random.Random(seeds['reader'])
|
||||||
|
for x in xrange(config.readwrite.readers):
|
||||||
|
this_rand = random.Random(rand_reader.randrange(2**32))
|
||||||
|
group.spawn_link_exception(
|
||||||
|
reader,
|
||||||
|
bucket=bucket,
|
||||||
|
worker_id=x,
|
||||||
|
file_names=file_names,
|
||||||
|
queue=q,
|
||||||
|
rand=this_rand,
|
||||||
|
)
|
||||||
|
def stop():
|
||||||
|
group.kill(block=True)
|
||||||
|
q.put(StopIteration)
|
||||||
|
gevent.spawn_later(config.readwrite.duration, stop)
|
||||||
|
|
||||||
|
yaml.safe_dump_all(q, stream=real_stdout)
|
||||||
|
|
||||||
|
finally:
|
||||||
|
# cleanup
|
||||||
|
if options.cleanup:
|
||||||
|
if bucket is not None:
|
||||||
|
common.nuke_bucket(bucket)
|
s3tests/realistic.py (new file, 157 additions)
@@ -0,0 +1,157 @@
import hashlib
import random
import string
import struct
import time


NANOSECOND = int(1e9)


class RandomContentFile(object):
    def __init__(self, size, seed):
        self.size = size
        self.seed = seed
        self.random = random.Random(self.seed)

        # Boto likes to seek once more after it's done reading, so we need to save the last chunks/seek value.
        self.last_chunks = self.chunks = None
        self.last_seek = None

        # Let seek initialize the rest of it, rather than dup code
        self.seek(0)

    def _mark_chunk(self):
        self.chunks.append([self.offset, int(round((time.time() - self.last_seek) * NANOSECOND))])

    def seek(self, offset):
        assert offset == 0
        self.random.seed(self.seed)
        self.offset = offset
        self.buffer = ''

        self.hash = hashlib.md5()
        self.digest_size = self.hash.digest_size
        self.digest = None

        # Save the last seek time as our start time, and the last chunks
        self.last_chunks = self.chunks
        # Before emptying.
        self.last_seek = time.time()
        self.chunks = []

    def tell(self):
        return self.offset

    def _generate(self):
        # generate and return a chunk of pseudorandom data
        # 1 MiB at a time, packed from 64-bit words
        size = 1*1024*1024
        l = [self.random.getrandbits(64) for _ in xrange(size/8)]
        s = struct.pack((size/8)*'Q', *l)
        return s

    def read(self, size=-1):
        if size < 0:
            size = self.size - self.offset

        r = []

        random_count = min(size, self.size - self.offset - self.digest_size)
        if random_count > 0:
            while len(self.buffer) < random_count:
                self.buffer += self._generate()
            self.offset += random_count
            size -= random_count
            data, self.buffer = self.buffer[:random_count], self.buffer[random_count:]
            if self.hash is not None:
                self.hash.update(data)
            r.append(data)

        digest_count = min(size, self.size - self.offset)
        if digest_count > 0:
            if self.digest is None:
                self.digest = self.hash.digest()
                self.hash = None
            self.offset += digest_count
            size -= digest_count
            data = self.digest[:digest_count]
            r.append(data)

        self._mark_chunk()

        return ''.join(r)


class FileVerifier(object):
    def __init__(self):
        self.size = 0
        self.hash = hashlib.md5()
        self.buf = ''
        self.created_at = time.time()
        self.chunks = []

    def _mark_chunk(self):
        self.chunks.append([self.size, int(round((time.time() - self.created_at) * NANOSECOND))])

    def write(self, data):
        self.size += len(data)
        self.buf += data
        digsz = -1*self.hash.digest_size
        new_data, self.buf = self.buf[0:digsz], self.buf[digsz:]
        self.hash.update(new_data)
        self._mark_chunk()

    def valid(self):
        """
        Returns True if this file looks valid. The file is valid if the end
        of the file has the md5 digest for the first part of the file.
        """
        if self.size < self.hash.digest_size:
            return self.hash.digest().startswith(self.buf)

        return self.buf == self.hash.digest()


def files(mean, stddev, seed=None):
    """
    Yields file-like objects with effectively random contents, where
    the size of each file follows the normal distribution with `mean`
    and `stddev`.

    Beware, the file-likeness is very shallow. You can use boto's
    `key.set_contents_from_file` to send these to S3, but they are not
    full file objects.

    The last 128 bits are the MD5 digest of the previous bytes, for
    verifying round-trip data integrity. For example, if you
    re-download the object and place the contents into a file called
    ``foo``, the following should print two identical lines:

      python -c 'import sys, hashlib; data=sys.stdin.read(); print hashlib.md5(data[:-16]).hexdigest(); print "".join("%02x" % ord(c) for c in data[-16:])' <foo

    Except for objects shorter than 16 bytes, where the second line
    will be proportionally shorter.
    """
    rand = random.Random(seed)
    while True:
        while True:
            size = int(rand.normalvariate(mean, stddev))
            if size >= 0:
                break
        yield RandomContentFile(size=size, seed=rand.getrandbits(32))
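The trailer-digest scheme described in the docstring can be sketched independently of boto: append the MD5 of the payload to the payload, and verify by recomputing the digest over everything but the last 16 bytes. This is a minimal Python 3 sketch of the same idea; `make_object` and `is_valid` are illustrative helper names, not part of this module.

```python
import hashlib
import os

def make_object(payload: bytes) -> bytes:
    # The last 16 bytes are the MD5 digest of everything before them,
    # mirroring what RandomContentFile emits at the end of each file.
    return payload + hashlib.md5(payload).digest()

def is_valid(data: bytes) -> bool:
    # Recompute the digest over everything but the 16-byte trailer,
    # the same check FileVerifier.valid() performs incrementally.
    body, trailer = data[:-16], data[-16:]
    return hashlib.md5(body).digest() == trailer

data = make_object(os.urandom(1024))
assert is_valid(data)
assert not is_valid(data[:-1] + bytes([data[-1] ^ 1]))  # corrupted trailer fails
```

Unlike this sketch, `FileVerifier` never buffers the whole object: it keeps only the last `digest_size` bytes unhashed, so any amount of data can be verified in a streaming write.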

def names(mean, stddev, charset=None, seed=None):
    """
    Yields strings that are somewhat plausible as file names, where
    the length of each filename follows the normal distribution with
    `mean` and `stddev`.
    """
    if charset is None:
        charset = string.ascii_lowercase
    rand = random.Random(seed)
    while True:
        while True:
            length = int(rand.normalvariate(mean, stddev))
            if length > 0:
                break
        name = ''.join(rand.choice(charset) for _ in xrange(length))
        yield name
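Both generators are deterministic for a given seed, which is what lets a run be reproduced from the "Using random seeds" line that readwrite.py prints. A quick Python 3 sketch of that property (a simplified copy of the generator, not an import of this module):

```python
import itertools
import random

def names(mean, stddev, charset='abcdefghijklmnopqrstuvwxyz', seed=None):
    # simplified Python 3 copy of the names() generator above
    rand = random.Random(seed)
    while True:
        while True:
            length = int(rand.normalvariate(mean, stddev))
            if length > 0:
                break
        yield ''.join(rand.choice(charset) for _ in range(length))

a = list(itertools.islice(names(15, 4, seed=1234), 5))
b = list(itertools.islice(names(15, 4, seed=1234), 5))
assert a == b  # same seed reproduces the same name sequence
```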
s3tests/roundtrip.py (new file, 219 additions)
@@ -0,0 +1,219 @@
import gevent
import gevent.pool
import gevent.queue
import gevent.monkey; gevent.monkey.patch_all()
import itertools
import optparse
import os
import sys
import time
import traceback
import random
import yaml

import realistic
import common

NANOSECOND = int(1e9)


def writer(bucket, objname, fp, queue):
    key = bucket.new_key(objname)

    result = dict(
        type='w',
        bucket=bucket.name,
        key=key.name,
        )

    start = time.time()
    try:
        key.set_contents_from_file(fp)
    except gevent.GreenletExit:
        raise
    except Exception as e:
        # stop timer ASAP, even on errors
        end = time.time()
        result.update(
            error=dict(
                msg=str(e),
                traceback=traceback.format_exc(),
                ),
            )
        # certain kinds of programmer errors make this a busy
        # loop; let parent greenlet get some time too
        time.sleep(0)
    else:
        end = time.time()

    elapsed = end - start
    result.update(
        start=start,
        duration=int(round(elapsed * NANOSECOND)),
        chunks=fp.last_chunks,
        )
    queue.put(result)


def reader(bucket, objname, queue):
    key = bucket.new_key(objname)

    fp = realistic.FileVerifier()
    result = dict(
        type='r',
        bucket=bucket.name,
        key=key.name,
        )

    start = time.time()
    try:
        key.get_contents_to_file(fp)
    except gevent.GreenletExit:
        raise
    except Exception as e:
        # stop timer ASAP, even on errors
        end = time.time()
        result.update(
            error=dict(
                msg=str(e),
                traceback=traceback.format_exc(),
                ),
            )
        # certain kinds of programmer errors make this a busy
        # loop; let parent greenlet get some time too
        time.sleep(0)
    else:
        end = time.time()

        if not fp.valid():
            result.update(
                error=dict(
                    msg='md5sum check failed',
                    ),
                )

    elapsed = end - start
    result.update(
        start=start,
        duration=int(round(elapsed * NANOSECOND)),
        chunks=fp.chunks,
        )
    queue.put(result)


def parse_options():
    parser = optparse.OptionParser(
        usage='%prog [OPTS] <CONFIG_YAML',
        )
    parser.add_option("--no-cleanup", dest="cleanup", action="store_false",
        help="skip cleaning up all created buckets", default=True)

    return parser.parse_args()


def main():
    # parse options
    (options, args) = parse_options()

    if os.isatty(sys.stdin.fileno()):
        raise RuntimeError('Need configuration in stdin.')
    config = common.read_config(sys.stdin)
    conn = common.connect(config.s3)
    bucket = None

    try:
        # setup
        real_stdout = sys.stdout
        sys.stdout = sys.stderr

        # verify all required config items are present
        if 'roundtrip' not in config:
            raise RuntimeError('roundtrip section not found in config')
        for item in ['readers', 'writers', 'duration', 'files', 'bucket']:
            if item not in config.roundtrip:
                raise RuntimeError("Missing roundtrip config item: {item}".format(item=item))
        for item in ['num', 'size', 'stddev']:
            if item not in config.roundtrip.files:
                raise RuntimeError("Missing roundtrip config item: files.{item}".format(item=item))

        seeds = dict(config.roundtrip.get('random_seed', {}))
        seeds.setdefault('main', random.randrange(2**32))

        rand = random.Random(seeds['main'])

        for name in ['names', 'contents', 'writer', 'reader']:
            seeds.setdefault(name, rand.randrange(2**32))

        print 'Using random seeds: {seeds}'.format(seeds=seeds)

        # setup bucket and other objects
        bucket_name = common.choose_bucket_prefix(config.roundtrip.bucket, max_len=30)
        bucket = conn.create_bucket(bucket_name)
        print "Created bucket: {name}".format(name=bucket.name)
        objnames = realistic.names(
            mean=15,
            stddev=4,
            seed=seeds['names'],
            )
        objnames = itertools.islice(objnames, config.roundtrip.files.num)
        objnames = list(objnames)
        files = realistic.files(
            mean=1024 * config.roundtrip.files.size,
            stddev=1024 * config.roundtrip.files.stddev,
            seed=seeds['contents'],
            )
        q = gevent.queue.Queue()

        logger_g = gevent.spawn_link_exception(yaml.safe_dump_all, q, stream=real_stdout)

        print "Writing {num} objects with {w} workers...".format(
            num=config.roundtrip.files.num,
            w=config.roundtrip.writers,
            )
        pool = gevent.pool.Pool(size=config.roundtrip.writers)
        start = time.time()
        for objname in objnames:
            fp = next(files)
            pool.spawn_link_exception(
                writer,
                bucket=bucket,
                objname=objname,
                fp=fp,
                queue=q,
                )
        pool.join()
        stop = time.time()
        elapsed = stop - start
        q.put(dict(
                type='write_done',
                duration=int(round(elapsed * NANOSECOND)),
                ))

        print "Reading {num} objects with {w} workers...".format(
            num=config.roundtrip.files.num,
            w=config.roundtrip.readers,
            )
        # avoid accessing them in the same order as the writing
        rand.shuffle(objnames)
        pool = gevent.pool.Pool(size=config.roundtrip.readers)
        start = time.time()
        for objname in objnames:
            pool.spawn_link_exception(
                reader,
                bucket=bucket,
                objname=objname,
                queue=q,
                )
        pool.join()
        stop = time.time()
        elapsed = stop - start
        q.put(dict(
                type='read_done',
                duration=int(round(elapsed * NANOSECOND)),
                ))

        q.put(StopIteration)
        logger_g.get()

    finally:
        # cleanup
        if options.cleanup:
            if bucket is not None:
                common.nuke_bucket(bucket)
(deleted file)
@@ -1,301 +0,0 @@
import boto.s3.connection
import munch
import itertools
import os
import random
import string
import yaml
import re
from lxml import etree

from doctest import Example
from lxml.doctestcompare import LXMLOutputChecker

s3 = munch.Munch()
config = munch.Munch()
prefix = ''

bucket_counter = itertools.count(1)
key_counter = itertools.count(1)


def choose_bucket_prefix(template, max_len=30):
    """
    Choose a prefix for our test buckets, so they're easy to identify.

    Use template and feed it more and more random filler, until it's
    as long as possible but still below max_len.
    """
    rand = ''.join(
        random.choice(string.ascii_lowercase + string.digits)
        for c in range(255)
        )

    while rand:
        s = template.format(random=rand)
        if len(s) <= max_len:
            return s
        rand = rand[:-1]

    raise RuntimeError(
        'Bucket prefix template is impossible to fulfill: {template!r}'.format(
            template=template,
            ),
        )
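The prefix-fitting loop can be exercised on its own: given a template with a `{random}` placeholder, the filler is trimmed until the formatted result fits `max_len`. A standalone Python 3 sketch of the same algorithm (not an import of this module):

```python
import random
import string

def choose_bucket_prefix(template, max_len=30):
    # Start with 255 characters of random lowercase/digit filler, then
    # trim it until the formatted template fits within max_len.
    rand = ''.join(
        random.choice(string.ascii_lowercase + string.digits)
        for _ in range(255)
    )
    while rand:
        s = template.format(random=rand)
        if len(s) <= max_len:
            return s
        rand = rand[:-1]
    raise RuntimeError(
        'Bucket prefix template is impossible to fulfill: %r' % template)

p = choose_bucket_prefix('test-{random}-', max_len=30)
assert p.startswith('test-') and p.endswith('-') and len(p) <= 30
```

Trimming from the end (rather than regenerating) keeps the result as long, and therefore as collision-resistant, as the length budget allows.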

def nuke_bucket(bucket):
    try:
        bucket.set_canned_acl('private')
        # TODO: deleted_cnt and the while loop is a work around for rgw
        # not sending the
        deleted_cnt = 1
        while deleted_cnt:
            deleted_cnt = 0
            for key in bucket.list():
                print('Cleaning bucket {bucket} key {key}'.format(
                    bucket=bucket,
                    key=key,
                    ))
                key.set_canned_acl('private')
                key.delete()
                deleted_cnt += 1
        bucket.delete()
    except boto.exception.S3ResponseError as e:
        # TODO workaround for buggy rgw that fails to send
        # error_code, remove
        if (e.status == 403
                and e.error_code is None
                and e.body == ''):
            e.error_code = 'AccessDenied'
        if e.error_code != 'AccessDenied':
            print('GOT UNWANTED ERROR', e.error_code)
            raise
        # seems like we're not the owner of the bucket; ignore
        pass


def nuke_prefixed_buckets():
    for name, conn in list(s3.items()):
        print('Cleaning buckets from connection {name}'.format(name=name))
        for bucket in conn.get_all_buckets():
            if bucket.name.startswith(prefix):
                print('Cleaning bucket {bucket}'.format(bucket=bucket))
                nuke_bucket(bucket)

    print('Done with cleanup of test buckets.')


def read_config(fp):
    config = munch.Munch()
    g = yaml.safe_load_all(fp)
    for new in g:
        config.update(munch.Munchify(new))
    return config


def connect(conf):
    mapping = dict(
        port='port',
        host='host',
        is_secure='is_secure',
        access_key='aws_access_key_id',
        secret_key='aws_secret_access_key',
        )
    kwargs = dict((mapping[k], v) for (k, v) in conf.items() if k in mapping)
    # process calling_format argument
    calling_formats = dict(
        ordinary=boto.s3.connection.OrdinaryCallingFormat(),
        subdomain=boto.s3.connection.SubdomainCallingFormat(),
        vhost=boto.s3.connection.VHostCallingFormat(),
        )
    kwargs['calling_format'] = calling_formats['ordinary']
    if 'calling_format' in conf:
        raw_calling_format = conf['calling_format']
        try:
            kwargs['calling_format'] = calling_formats[raw_calling_format]
        except KeyError:
            raise RuntimeError(
                'calling_format unknown: %r' % raw_calling_format
                )
    # TODO test vhost calling format
    conn = boto.s3.connection.S3Connection(**kwargs)
    return conn


def setup():
    global s3, config, prefix
    s3.clear()
    config.clear()

    try:
        path = os.environ['S3TEST_CONF']
    except KeyError:
        raise RuntimeError(
            'To run tests, point environment '
            + 'variable S3TEST_CONF to a config file.',
            )
    with file(path) as f:
        config.update(read_config(f))

    # These 3 should always be present.
    if 's3' not in config:
        raise RuntimeError('Your config file is missing the s3 section!')
    if 'defaults' not in config.s3:
        raise RuntimeError('Your config file is missing the s3.defaults section!')
    if 'fixtures' not in config:
        raise RuntimeError('Your config file is missing the fixtures section!')

    template = config.fixtures.get('bucket prefix', 'test-{random}-')
    prefix = choose_bucket_prefix(template=template)
    if prefix == '':
        raise RuntimeError("Empty Prefix! Aborting!")

    defaults = config.s3.defaults
    for section in list(config.s3.keys()):
        if section == 'defaults':
            continue

        conf = {}
        conf.update(defaults)
        conf.update(config.s3[section])
        conn = connect(conf)
        s3[section] = conn

    # WARNING! we actively delete all buckets we see with the prefix
    # we've chosen! Choose your prefix with care, and don't reuse
    # credentials!

    # We also assume nobody else is going to use buckets with that
    # prefix. This is racy but given enough randomness, should not
    # really fail.
    nuke_prefixed_buckets()


def get_new_bucket(connection=None):
    """
    Get a bucket that exists and is empty.

    Always recreates a bucket from scratch. This is useful to also
    reset ACLs and such.
    """
    if connection is None:
        connection = s3.main
    name = '{prefix}{num}'.format(
        prefix=prefix,
        num=next(bucket_counter),
        )
    # the only way for this to fail with a pre-existing bucket is if
    # someone raced us between setup nuke_prefixed_buckets and here;
    # ignore that as astronomically unlikely
    bucket = connection.create_bucket(name)
    return bucket


def teardown():
    nuke_prefixed_buckets()


def with_setup_kwargs(setup, teardown=None):
    """Decorator to add setup and/or teardown methods to a test function::

      @with_setup_args(setup, teardown)
      def test_something():
          " ... "

    The setup function should return (kwargs) which will be passed to
    test function, and teardown function.

    Note that `with_setup_kwargs` is useful *only* for test functions, not for test
    methods or inside of TestCase subclasses.
    """
    def decorate(func):
        kwargs = {}

        def test_wrapped(*args, **kwargs2):
            k2 = kwargs.copy()
            k2.update(kwargs2)
            k2['testname'] = func.__name__
            func(*args, **k2)

        test_wrapped.__name__ = func.__name__

        def setup_wrapped():
            k = setup()
            kwargs.update(k)
            if hasattr(func, 'setup'):
                func.setup()
        test_wrapped.setup = setup_wrapped

        if teardown:
            def teardown_wrapped():
                if hasattr(func, 'teardown'):
                    func.teardown()
                teardown(**kwargs)

            test_wrapped.teardown = teardown_wrapped
        else:
            if hasattr(func, 'teardown'):
                test_wrapped.teardown = func.teardown()
        return test_wrapped
    return decorate


# Demo case for the above, when you run test_gen():
# _test_gen will run twice,
# with the following stderr printing
# setup_func {'b': 2}
# testcase ('1',) {'b': 2, 'testname': '_test_gen'}
# teardown_func {'b': 2}
# setup_func {'b': 2}
# testcase () {'b': 2, 'testname': '_test_gen'}
# teardown_func {'b': 2}
#
#def setup_func():
#    kwargs = {'b': 2}
#    print("setup_func", kwargs, file=sys.stderr)
#    return kwargs
#
#def teardown_func(**kwargs):
#    print("teardown_func", kwargs, file=sys.stderr)
#
#@with_setup_kwargs(setup=setup_func, teardown=teardown_func)
#def _test_gen(*args, **kwargs):
#    print("testcase", args, kwargs, file=sys.stderr)
#
#def test_gen():
#    yield _test_gen, '1'
#    yield _test_gen


def trim_xml(xml_str):
    p = etree.XMLParser(remove_blank_text=True)
    elem = etree.XML(xml_str, parser=p)
    return etree.tostring(elem)


def normalize_xml(xml, pretty_print=True):
    if xml is None:
        return xml

    root = etree.fromstring(xml.encode(encoding='ascii'))

    for element in root.iter('*'):
        if element.text is not None and not element.text.strip():
            element.text = None
        if element.text is not None:
            element.text = element.text.strip().replace("\n", "").replace("\r", "")
        if element.tail is not None and not element.tail.strip():
            element.tail = None
        if element.tail is not None:
            element.tail = element.tail.strip().replace("\n", "").replace("\r", "")

    # Sort the elements
    for parent in root.xpath('//*[./*]'):  # Search for parent elements
        parent[:] = sorted(parent, key=lambda x: x.tag)

    xmlstr = etree.tostring(root, encoding="utf-8", xml_declaration=True, pretty_print=pretty_print)
    # there are two different DTD URIs
    xmlstr = re.sub(r'xmlns="[^"]+"', 'xmlns="s3"', xmlstr)
    xmlstr = re.sub(r'xmlns=\'[^\']+\'', 'xmlns="s3"', xmlstr)
    for uri in ['http://doc.s3.amazonaws.com/doc/2006-03-01/', 'http://s3.amazonaws.com/doc/2006-03-01/']:
        xmlstr = xmlstr.replace(uri, 'URI-DTD')
    #xmlstr = re.sub(r'>\s+', '>', xmlstr, count=0, flags=re.MULTILINE)
    return xmlstr
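The same canonicalization idea (drop insignificant whitespace, sort sibling elements by tag so document order stops mattering) can be sketched with the standard library's `xml.etree.ElementTree` instead of lxml. This is an illustrative reimplementation under that assumption, not the function above:

```python
import xml.etree.ElementTree as ET

def normalize(xml_str):
    root = ET.fromstring(xml_str)
    for elem in root.iter():
        # drop whitespace-only text/tail so formatting differences vanish
        if elem.text is not None:
            elem.text = elem.text.strip() or None
        if elem.tail is not None:
            elem.tail = elem.tail.strip() or None
    # sort each element's children by tag so sibling order is irrelevant
    for parent in root.iter():
        parent[:] = sorted(parent, key=lambda e: e.tag)
    return ET.tostring(root)

a = normalize('<r> <b>2</b><a>1</a> </r>')
b = normalize('<r><a>1</a>\n  <b>2</b></r>')
assert a == b  # same content, different formatting and order
```

Sorting by tag alone is enough here because equal tags keep their relative order (the sort is stable), which matches what the lxml version does.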

def assert_xml_equal(got, want):
    assert want is not None, 'Wanted XML cannot be None'
    if got is None:
        raise AssertionError('Got input to validate was None')
    checker = LXMLOutputChecker()
    if not checker.check_output(want, got, 0):
        message = checker.output_difference(Example("", want), got, 0)
        raise AssertionError(message)
(deleted file)
@@ -1,782 +0,0 @@
|
||||||
import pytest
|
|
||||||
import boto3
|
|
||||||
from botocore import UNSIGNED
|
|
||||||
from botocore.client import Config
|
|
||||||
from botocore.exceptions import ClientError
|
|
||||||
from botocore.handlers import disable_signing
|
|
||||||
import configparser
|
|
||||||
import datetime
|
|
||||||
import time
|
|
||||||
import os
|
|
||||||
import munch
|
|
||||||
import random
|
|
||||||
import string
|
|
||||||
import itertools
|
|
||||||
import urllib3
|
|
||||||
import re
|
|
||||||
|
|
||||||
config = munch.Munch
|
|
||||||
|
|
||||||
# this will be assigned by setup()
|
|
||||||
prefix = None
|
|
||||||
|
|
||||||
def get_prefix():
|
|
||||||
assert prefix is not None
|
|
||||||
return prefix
|
|
||||||
|
|
||||||
def choose_bucket_prefix(template, max_len=30):
|
|
||||||
"""
|
|
||||||
Choose a prefix for our test buckets, so they're easy to identify.
|
|
||||||
|
|
||||||
Use template and feed it more and more random filler, until it's
|
|
||||||
as long as possible but still below max_len.
|
|
||||||
"""
|
|
||||||
rand = ''.join(
|
|
||||||
random.choice(string.ascii_lowercase + string.digits)
|
|
||||||
for c in range(255)
|
|
||||||
)
|
|
||||||
|
|
||||||
while rand:
|
|
||||||
s = template.format(random=rand)
|
|
||||||
if len(s) <= max_len:
|
|
||||||
return s
|
|
||||||
rand = rand[:-1]
|
|
||||||
|
|
||||||
raise RuntimeError(
|
|
||||||
'Bucket prefix template is impossible to fulfill: {template!r}'.format(
|
|
||||||
template=template,
|
|
||||||
),
|
|
||||||
)
|
|
||||||
|
|
||||||
def get_buckets_list(client=None, prefix=None):
    if client is None:
        client = get_client()
    if prefix is None:
        prefix = get_prefix()
    response = client.list_buckets()
    bucket_dicts = response['Buckets']
    buckets_list = []
    for bucket in bucket_dicts:
        if prefix in bucket['Name']:
            buckets_list.append(bucket['Name'])

    return buckets_list


def get_objects_list(bucket, client=None, prefix=None):
    if client is None:
        client = get_client()

    if prefix is None:
        response = client.list_objects(Bucket=bucket)
    else:
        response = client.list_objects(Bucket=bucket, Prefix=prefix)
    objects_list = []

    if 'Contents' in response:
        contents = response['Contents']
        for obj in contents:
            objects_list.append(obj['Key'])

    return objects_list


# generator function that returns object listings in batches, where each
# batch is a list of dicts compatible with delete_objects()
def list_versions(client, bucket, batch_size):
    kwargs = {'Bucket': bucket, 'MaxKeys': batch_size}
    truncated = True
    while truncated:
        listing = client.list_object_versions(**kwargs)

        kwargs['KeyMarker'] = listing.get('NextKeyMarker')
        kwargs['VersionIdMarker'] = listing.get('NextVersionIdMarker')
        truncated = listing['IsTruncated']

        objs = listing.get('Versions', []) + listing.get('DeleteMarkers', [])
        if len(objs):
            yield [{'Key': o['Key'], 'VersionId': o['VersionId']} for o in objs]


def nuke_bucket(client, bucket):
    batch_size = 128
    max_retain_date = None

    # list and delete objects in batches
    for objects in list_versions(client, bucket, batch_size):
        delete = client.delete_objects(Bucket=bucket,
                                       Delete={'Objects': objects, 'Quiet': True},
                                       BypassGovernanceRetention=True)

        # check for object locks on 403 AccessDenied errors
        for err in delete.get('Errors', []):
            if err.get('Code') != 'AccessDenied':
                continue
            try:
                res = client.get_object_retention(Bucket=bucket,
                                                  Key=err['Key'],
                                                  VersionId=err['VersionId'])
                retain_date = res['Retention']['RetainUntilDate']
                if not max_retain_date or max_retain_date < retain_date:
                    max_retain_date = retain_date
            except ClientError:
                pass

    if max_retain_date:
        # wait out the retention period (up to 60 seconds)
        now = datetime.datetime.now(max_retain_date.tzinfo)
        if max_retain_date > now:
            delta = max_retain_date - now
            if delta.total_seconds() > 60:
                raise RuntimeError(
                    'bucket {} still has objects locked for {} more seconds, '
                    'not waiting for bucket cleanup'.format(
                        bucket, delta.total_seconds()))
            print('nuke_bucket', bucket, 'waiting', delta.total_seconds(),
                  'seconds for object locks to expire')
            time.sleep(delta.total_seconds())

        # retry deletion of anything that was locked on the first pass
        for objects in list_versions(client, bucket, batch_size):
            client.delete_objects(Bucket=bucket,
                                  Delete={'Objects': objects, 'Quiet': True},
                                  BypassGovernanceRetention=True)

    client.delete_bucket(Bucket=bucket)


def nuke_prefixed_buckets(prefix, client=None):
    if client is None:
        client = get_client()

    buckets = get_buckets_list(client, prefix)

    err = None
    for bucket_name in buckets:
        try:
            nuke_bucket(client, bucket_name)
        except Exception as e:
            # Don't let one failure abort cleanup; continue with the
            # remaining buckets so we don't leak resources. Record the
            # exception so the caller still learns that something failed.
            err = e
    if err:
        raise err

    print('Done with cleanup of buckets in tests.')


def configured_storage_classes():
    sc = ['STANDARD']

    extra_sc = re.split(r"[\b\W\b]+", config.storage_classes)

    for item in extra_sc:
        if item != 'STANDARD':
            sc.append(item)

    sc = [i for i in sc if i]
    print("storage classes configured: " + str(sc))

    return sc


def configure():
    cfg = configparser.RawConfigParser()
    try:
        path = os.environ['S3TEST_CONF']
    except KeyError:
        raise RuntimeError(
            'To run tests, point environment '
            'variable S3TEST_CONF to a config file.',
        )
    cfg.read(path)

    if not cfg.defaults():
        raise RuntimeError('Your config file is missing the DEFAULT section!')
    if not cfg.has_section("s3 main"):
        raise RuntimeError('Your config file is missing the "s3 main" section!')
    if not cfg.has_section("s3 alt"):
        raise RuntimeError('Your config file is missing the "s3 alt" section!')
    if not cfg.has_section("s3 tenant"):
        raise RuntimeError('Your config file is missing the "s3 tenant" section!')

    global prefix

    defaults = cfg.defaults()

    # vars from the DEFAULT section
    config.default_host = defaults.get("host")
    config.default_port = int(defaults.get("port"))
    config.default_is_secure = cfg.getboolean('DEFAULT', "is_secure")

    proto = 'https' if config.default_is_secure else 'http'
    config.default_endpoint = "%s://%s:%d" % (proto, config.default_host, config.default_port)

    try:
        config.default_ssl_verify = cfg.getboolean('DEFAULT', "ssl_verify")
    except configparser.NoOptionError:
        config.default_ssl_verify = False

    # Disable InsecureRequestWarning reported by urllib3 when ssl_verify is False
    if not config.default_ssl_verify:
        urllib3.disable_warnings()

    # vars from the main section
    config.main_access_key = cfg.get('s3 main', "access_key")
    config.main_secret_key = cfg.get('s3 main', "secret_key")
    config.main_display_name = cfg.get('s3 main', "display_name")
    config.main_user_id = cfg.get('s3 main', "user_id")
    config.main_email = cfg.get('s3 main', "email")
    try:
        config.main_kms_keyid = cfg.get('s3 main', "kms_keyid")
    except (configparser.NoSectionError, configparser.NoOptionError):
        config.main_kms_keyid = 'testkey-1'

    try:
        config.main_kms_keyid2 = cfg.get('s3 main', "kms_keyid2")
    except (configparser.NoSectionError, configparser.NoOptionError):
        config.main_kms_keyid2 = 'testkey-2'

    try:
        config.main_api_name = cfg.get('s3 main', "api_name")
    except (configparser.NoSectionError, configparser.NoOptionError):
        config.main_api_name = ""

    try:
        config.storage_classes = cfg.get('s3 main', "storage_classes")
    except (configparser.NoSectionError, configparser.NoOptionError):
        config.storage_classes = ""

    try:
        config.lc_debug_interval = int(cfg.get('s3 main', "lc_debug_interval"))
    except (configparser.NoSectionError, configparser.NoOptionError):
        config.lc_debug_interval = 10

    config.alt_access_key = cfg.get('s3 alt', "access_key")
    config.alt_secret_key = cfg.get('s3 alt', "secret_key")
    config.alt_display_name = cfg.get('s3 alt', "display_name")
    config.alt_user_id = cfg.get('s3 alt', "user_id")
    config.alt_email = cfg.get('s3 alt', "email")

    config.tenant_access_key = cfg.get('s3 tenant', "access_key")
    config.tenant_secret_key = cfg.get('s3 tenant', "secret_key")
    config.tenant_display_name = cfg.get('s3 tenant', "display_name")
    config.tenant_user_id = cfg.get('s3 tenant', "user_id")
    config.tenant_email = cfg.get('s3 tenant', "email")
    config.tenant_name = cfg.get('s3 tenant', "tenant")

    config.iam_access_key = cfg.get('iam', "access_key")
    config.iam_secret_key = cfg.get('iam', "secret_key")
    config.iam_display_name = cfg.get('iam', "display_name")
    config.iam_user_id = cfg.get('iam', "user_id")
    config.iam_email = cfg.get('iam', "email")

    config.iam_root_access_key = cfg.get('iam root', "access_key")
    config.iam_root_secret_key = cfg.get('iam root', "secret_key")
    config.iam_root_user_id = cfg.get('iam root', "user_id")
    config.iam_root_email = cfg.get('iam root', "email")

    config.iam_alt_root_access_key = cfg.get('iam alt root', "access_key")
    config.iam_alt_root_secret_key = cfg.get('iam alt root', "secret_key")
    config.iam_alt_root_user_id = cfg.get('iam alt root', "user_id")
    config.iam_alt_root_email = cfg.get('iam alt root', "email")

    # vars from the fixtures section
    template = cfg.get('fixtures', "bucket prefix", fallback='test-{random}-')
    prefix = choose_bucket_prefix(template=template)
    template = cfg.get('fixtures', "iam name prefix", fallback="s3-tests-")
    config.iam_name_prefix = choose_bucket_prefix(template=template)
    template = cfg.get('fixtures', "iam path prefix", fallback="/s3-tests/")
    config.iam_path_prefix = choose_bucket_prefix(template=template)

    if cfg.has_section("s3 cloud"):
        get_cloud_config(cfg)
    else:
        config.cloud_storage_class = None


def setup():
    alt_client = get_alt_client()
    tenant_client = get_tenant_client()
    nuke_prefixed_buckets(prefix=prefix)
    nuke_prefixed_buckets(prefix=prefix, client=alt_client)
    nuke_prefixed_buckets(prefix=prefix, client=tenant_client)


def teardown():
    alt_client = get_alt_client()
    tenant_client = get_tenant_client()
    nuke_prefixed_buckets(prefix=prefix)
    nuke_prefixed_buckets(prefix=prefix, client=alt_client)
    nuke_prefixed_buckets(prefix=prefix, client=tenant_client)
    try:
        iam_client = get_iam_client()
        list_roles_resp = iam_client.list_roles()
        for role in list_roles_resp['Roles']:
            list_policies_resp = iam_client.list_role_policies(RoleName=role['RoleName'])
            for policy in list_policies_resp['PolicyNames']:
                iam_client.delete_role_policy(
                    RoleName=role['RoleName'],
                    PolicyName=policy
                )
            iam_client.delete_role(RoleName=role['RoleName'])
        list_oidc_resp = iam_client.list_open_id_connect_providers()
        for oidcprovider in list_oidc_resp['OpenIDConnectProviderList']:
            iam_client.delete_open_id_connect_provider(
                OpenIDConnectProviderArn=oidcprovider['Arn']
            )
    except Exception:
        pass


@pytest.fixture(scope="package")
def configfile():
    configure()
    return config


@pytest.fixture(autouse=True)
def setup_teardown(configfile):
    setup()
    yield
    teardown()


def check_webidentity():
    cfg = configparser.RawConfigParser()
    try:
        path = os.environ['S3TEST_CONF']
    except KeyError:
        raise RuntimeError(
            'To run tests, point environment '
            'variable S3TEST_CONF to a config file.',
        )
    cfg.read(path)
    if not cfg.has_section("webidentity"):
        raise RuntimeError('Your config file is missing the "webidentity" section!')

    config.webidentity_thumbprint = cfg.get('webidentity', "thumbprint")
    config.webidentity_aud = cfg.get('webidentity', "aud")
    config.webidentity_token = cfg.get('webidentity', "token")
    config.webidentity_realm = cfg.get('webidentity', "KC_REALM")
    config.webidentity_sub = cfg.get('webidentity', "sub")
    config.webidentity_azp = cfg.get('webidentity', "azp")
    config.webidentity_user_token = cfg.get('webidentity', "user_token")


def get_cloud_config(cfg):
    config.cloud_host = cfg.get('s3 cloud', "host")
    config.cloud_port = int(cfg.get('s3 cloud', "port"))
    config.cloud_is_secure = cfg.getboolean('s3 cloud', "is_secure")

    proto = 'https' if config.cloud_is_secure else 'http'
    config.cloud_endpoint = "%s://%s:%d" % (proto, config.cloud_host, config.cloud_port)

    config.cloud_access_key = cfg.get('s3 cloud', "access_key")
    config.cloud_secret_key = cfg.get('s3 cloud', "secret_key")

    try:
        config.cloud_storage_class = cfg.get('s3 cloud', "cloud_storage_class")
    except (configparser.NoSectionError, configparser.NoOptionError):
        config.cloud_storage_class = None

    try:
        config.cloud_retain_head_object = cfg.get('s3 cloud', "retain_head_object")
    except (configparser.NoSectionError, configparser.NoOptionError):
        config.cloud_retain_head_object = None

    try:
        config.cloud_target_path = cfg.get('s3 cloud', "target_path")
    except (configparser.NoSectionError, configparser.NoOptionError):
        config.cloud_target_path = None

    try:
        config.cloud_target_storage_class = cfg.get('s3 cloud', "target_storage_class")
    except (configparser.NoSectionError, configparser.NoOptionError):
        config.cloud_target_storage_class = 'STANDARD'

    try:
        config.cloud_regular_storage_class = cfg.get('s3 cloud', "storage_class")
    except (configparser.NoSectionError, configparser.NoOptionError):
        config.cloud_regular_storage_class = None


def get_client(client_config=None):
    if client_config is None:
        client_config = Config(signature_version='s3v4')

    client = boto3.client(service_name='s3',
                          aws_access_key_id=config.main_access_key,
                          aws_secret_access_key=config.main_secret_key,
                          endpoint_url=config.default_endpoint,
                          use_ssl=config.default_is_secure,
                          verify=config.default_ssl_verify,
                          config=client_config)
    return client


def get_v2_client():
    client = boto3.client(service_name='s3',
                          aws_access_key_id=config.main_access_key,
                          aws_secret_access_key=config.main_secret_key,
                          endpoint_url=config.default_endpoint,
                          use_ssl=config.default_is_secure,
                          verify=config.default_ssl_verify,
                          config=Config(signature_version='s3'))
    return client


def get_sts_client(**kwargs):
    kwargs.setdefault('aws_access_key_id', config.alt_access_key)
    kwargs.setdefault('aws_secret_access_key', config.alt_secret_key)
    kwargs.setdefault('config', Config(signature_version='s3v4'))

    client = boto3.client(service_name='sts',
                          endpoint_url=config.default_endpoint,
                          region_name='',
                          use_ssl=config.default_is_secure,
                          verify=config.default_ssl_verify,
                          **kwargs)
    return client


def get_iam_client(**kwargs):
    kwargs.setdefault('aws_access_key_id', config.iam_access_key)
    kwargs.setdefault('aws_secret_access_key', config.iam_secret_key)

    client = boto3.client(service_name='iam',
                          endpoint_url=config.default_endpoint,
                          region_name='',
                          use_ssl=config.default_is_secure,
                          verify=config.default_ssl_verify,
                          **kwargs)
    return client


def get_iam_s3client(**kwargs):
    kwargs.setdefault('aws_access_key_id', config.iam_access_key)
    kwargs.setdefault('aws_secret_access_key', config.iam_secret_key)
    kwargs.setdefault('config', Config(signature_version='s3v4'))

    client = boto3.client(service_name='s3',
                          endpoint_url=config.default_endpoint,
                          use_ssl=config.default_is_secure,
                          verify=config.default_ssl_verify,
                          **kwargs)
    return client


def get_iam_root_client(**kwargs):
    kwargs.setdefault('service_name', 'iam')
    kwargs.setdefault('aws_access_key_id', config.iam_root_access_key)
    kwargs.setdefault('aws_secret_access_key', config.iam_root_secret_key)

    return boto3.client(endpoint_url=config.default_endpoint,
                        region_name='',
                        use_ssl=config.default_is_secure,
                        verify=config.default_ssl_verify,
                        **kwargs)


def get_iam_alt_root_client(**kwargs):
    kwargs.setdefault('service_name', 'iam')
    kwargs.setdefault('aws_access_key_id', config.iam_alt_root_access_key)
    kwargs.setdefault('aws_secret_access_key', config.iam_alt_root_secret_key)

    return boto3.client(endpoint_url=config.default_endpoint,
                        region_name='',
                        use_ssl=config.default_is_secure,
                        verify=config.default_ssl_verify,
                        **kwargs)


def get_alt_client(client_config=None):
    if client_config is None:
        client_config = Config(signature_version='s3v4')

    client = boto3.client(service_name='s3',
                          aws_access_key_id=config.alt_access_key,
                          aws_secret_access_key=config.alt_secret_key,
                          endpoint_url=config.default_endpoint,
                          use_ssl=config.default_is_secure,
                          verify=config.default_ssl_verify,
                          config=client_config)
    return client


def get_cloud_client(client_config=None):
    if client_config is None:
        client_config = Config(signature_version='s3v4')

    client = boto3.client(service_name='s3',
                          aws_access_key_id=config.cloud_access_key,
                          aws_secret_access_key=config.cloud_secret_key,
                          endpoint_url=config.cloud_endpoint,
                          use_ssl=config.cloud_is_secure,
                          config=client_config)
    return client


def get_tenant_client(client_config=None):
    if client_config is None:
        client_config = Config(signature_version='s3v4')

    client = boto3.client(service_name='s3',
                          aws_access_key_id=config.tenant_access_key,
                          aws_secret_access_key=config.tenant_secret_key,
                          endpoint_url=config.default_endpoint,
                          use_ssl=config.default_is_secure,
                          verify=config.default_ssl_verify,
                          config=client_config)
    return client


def get_v2_tenant_client():
    client_config = Config(signature_version='s3')
    client = boto3.client(service_name='s3',
                          aws_access_key_id=config.tenant_access_key,
                          aws_secret_access_key=config.tenant_secret_key,
                          endpoint_url=config.default_endpoint,
                          use_ssl=config.default_is_secure,
                          verify=config.default_ssl_verify,
                          config=client_config)
    return client


def get_tenant_iam_client():
    client = boto3.client(service_name='iam',
                          region_name='us-east-1',
                          aws_access_key_id=config.tenant_access_key,
                          aws_secret_access_key=config.tenant_secret_key,
                          endpoint_url=config.default_endpoint,
                          verify=config.default_ssl_verify,
                          use_ssl=config.default_is_secure)
    return client


def get_alt_iam_client():
    client = boto3.client(service_name='iam',
                          region_name='',
                          aws_access_key_id=config.alt_access_key,
                          aws_secret_access_key=config.alt_secret_key,
                          endpoint_url=config.default_endpoint,
                          verify=config.default_ssl_verify,
                          use_ssl=config.default_is_secure)
    return client


def get_unauthenticated_client():
    client = boto3.client(service_name='s3',
                          aws_access_key_id='',
                          aws_secret_access_key='',
                          endpoint_url=config.default_endpoint,
                          use_ssl=config.default_is_secure,
                          verify=config.default_ssl_verify,
                          config=Config(signature_version=UNSIGNED))
    return client


def get_bad_auth_client(aws_access_key_id='badauth'):
    client = boto3.client(service_name='s3',
                          aws_access_key_id=aws_access_key_id,
                          aws_secret_access_key='roflmao',
                          endpoint_url=config.default_endpoint,
                          use_ssl=config.default_is_secure,
                          verify=config.default_ssl_verify,
                          config=Config(signature_version='s3v4'))
    return client


def get_svc_client(client_config=None, svc='s3'):
    if client_config is None:
        client_config = Config(signature_version='s3v4')

    client = boto3.client(service_name=svc,
                          aws_access_key_id=config.main_access_key,
                          aws_secret_access_key=config.main_secret_key,
                          endpoint_url=config.default_endpoint,
                          use_ssl=config.default_is_secure,
                          verify=config.default_ssl_verify,
                          config=client_config)
    return client


bucket_counter = itertools.count(1)


def get_new_bucket_name():
    """
    Get a bucket name that probably does not exist.

    We make every attempt to use a unique random prefix, so if a
    bucket by this name happens to exist, it's ok if tests give
    false negatives.
    """
    name = '{prefix}{num}'.format(
        prefix=prefix,
        num=next(bucket_counter),
    )
    return name


def get_new_bucket_resource(name=None):
    """
    Get a bucket that exists and is empty.

    Always recreates a bucket from scratch. This is useful to also
    reset ACLs and such.
    """
    s3 = boto3.resource('s3',
                        aws_access_key_id=config.main_access_key,
                        aws_secret_access_key=config.main_secret_key,
                        endpoint_url=config.default_endpoint,
                        use_ssl=config.default_is_secure,
                        verify=config.default_ssl_verify)
    if name is None:
        name = get_new_bucket_name()
    bucket = s3.Bucket(name)
    bucket.create()
    return bucket


def get_new_bucket(client=None, name=None):
    """
    Get a bucket that exists and is empty.

    Always recreates a bucket from scratch. This is useful to also
    reset ACLs and such.
    """
    if client is None:
        client = get_client()
    if name is None:
        name = get_new_bucket_name()

    client.create_bucket(Bucket=name)
    return name


def get_parameter_name():
    parameter_name = ""
    rand = ''.join(
        random.choice(string.ascii_lowercase + string.digits)
        for c in range(255)
    )
    while rand:
        parameter_name = '{random}'.format(random=rand)
        if len(parameter_name) <= 10:
            return parameter_name
        rand = rand[:-1]
    return parameter_name


def get_sts_user_id():
    return config.alt_user_id


def get_config_is_secure():
    return config.default_is_secure


def get_config_host():
    return config.default_host


def get_config_port():
    return config.default_port


def get_config_endpoint():
    return config.default_endpoint


def get_config_ssl_verify():
    return config.default_ssl_verify


def get_main_aws_access_key():
    return config.main_access_key


def get_main_aws_secret_key():
    return config.main_secret_key


def get_main_display_name():
    return config.main_display_name


def get_main_user_id():
    return config.main_user_id


def get_main_email():
    return config.main_email


def get_main_api_name():
    return config.main_api_name


def get_main_kms_keyid():
    return config.main_kms_keyid


def get_secondary_kms_keyid():
    return config.main_kms_keyid2


def get_alt_aws_access_key():
    return config.alt_access_key


def get_alt_aws_secret_key():
    return config.alt_secret_key


def get_alt_display_name():
    return config.alt_display_name


def get_alt_user_id():
    return config.alt_user_id


def get_alt_email():
    return config.alt_email


def get_tenant_aws_access_key():
    return config.tenant_access_key


def get_tenant_aws_secret_key():
    return config.tenant_secret_key


def get_tenant_display_name():
    return config.tenant_display_name


def get_tenant_name():
    return config.tenant_name


def get_tenant_user_id():
    return config.tenant_user_id


def get_tenant_email():
    return config.tenant_email


def get_thumbprint():
    return config.webidentity_thumbprint


def get_aud():
    return config.webidentity_aud


def get_sub():
    return config.webidentity_sub


def get_azp():
    return config.webidentity_azp


def get_token():
    return config.webidentity_token


def get_realm_name():
    return config.webidentity_realm


def get_iam_name_prefix():
    return config.iam_name_prefix


def make_iam_name(name):
    return config.iam_name_prefix + name


def get_iam_path_prefix():
    return config.iam_path_prefix


def get_iam_access_key():
    return config.iam_access_key


def get_iam_secret_key():
    return config.iam_secret_key


def get_iam_root_user_id():
    return config.iam_root_user_id


def get_iam_root_email():
    return config.iam_root_email


def get_iam_alt_root_user_id():
    return config.iam_alt_root_user_id


def get_iam_alt_root_email():
    return config.iam_alt_root_email


def get_user_token():
    return config.webidentity_user_token


def get_cloud_storage_class():
    return config.cloud_storage_class


def get_cloud_retain_head_object():
    return config.cloud_retain_head_object


def get_cloud_regular_storage_class():
    return config.cloud_regular_storage_class


def get_cloud_target_path():
    return config.cloud_target_path


def get_cloud_target_storage_class():
    return config.cloud_target_storage_class


def get_lc_debug_interval():
    return config.lc_debug_interval


from botocore.exceptions import ClientError
import pytest

from . import (
    configfile,
    get_iam_root_client,
    get_iam_root_user_id,
    get_iam_root_email,
    get_iam_alt_root_client,
    get_iam_alt_root_user_id,
    get_iam_alt_root_email,
    get_iam_path_prefix,
)


def nuke_user_keys(client, name):
    p = client.get_paginator('list_access_keys')
    for response in p.paginate(UserName=name):
        for key in response['AccessKeyMetadata']:
            try:
                client.delete_access_key(UserName=name, AccessKeyId=key['AccessKeyId'])
            except Exception:
                pass


def nuke_user_policies(client, name):
    p = client.get_paginator('list_user_policies')
    for response in p.paginate(UserName=name):
        for policy in response['PolicyNames']:
            try:
                client.delete_user_policy(UserName=name, PolicyName=policy)
            except Exception:
                pass


def nuke_attached_user_policies(client, name):
    p = client.get_paginator('list_attached_user_policies')
    for response in p.paginate(UserName=name):
        for policy in response['AttachedPolicies']:
            try:
                client.detach_user_policy(UserName=name, PolicyArn=policy['PolicyArn'])
            except Exception:
                pass


def nuke_user(client, name):
    # delete access keys, user policies, etc
    try:
        nuke_user_keys(client, name)
    except Exception:
        pass
    try:
        nuke_user_policies(client, name)
    except Exception:
        pass
    try:
        nuke_attached_user_policies(client, name)
    except Exception:
        pass
    client.delete_user(UserName=name)


def nuke_users(client, **kwargs):
    p = client.get_paginator('list_users')
    for response in p.paginate(**kwargs):
        for user in response['Users']:
            try:
                nuke_user(client, user['UserName'])
            except Exception:
                pass


def nuke_group_policies(client, name):
    p = client.get_paginator('list_group_policies')
    for response in p.paginate(GroupName=name):
        for policy in response['PolicyNames']:
            try:
                client.delete_group_policy(GroupName=name, PolicyName=policy)
            except Exception:
                pass


def nuke_attached_group_policies(client, name):
    p = client.get_paginator('list_attached_group_policies')
    for response in p.paginate(GroupName=name):
        for policy in response['AttachedPolicies']:
            try:
                client.detach_group_policy(GroupName=name, PolicyArn=policy['PolicyArn'])
            except Exception:
                pass


def nuke_group_users(client, name):
    p = client.get_paginator('get_group')
    for response in p.paginate(GroupName=name):
        for user in response['Users']:
            try:
                client.remove_user_from_group(GroupName=name, UserName=user['UserName'])
            except Exception:
                pass


def nuke_group(client, name):
    # delete group policies and remove all users
    try:
        nuke_group_policies(client, name)
    except Exception:
        pass
    try:
        nuke_attached_group_policies(client, name)
    except Exception:
        pass
    try:
        nuke_group_users(client, name)
    except Exception:
        pass
    client.delete_group(GroupName=name)


def nuke_groups(client, **kwargs):
    p = client.get_paginator('list_groups')
    for response in p.paginate(**kwargs):
        for group in response['Groups']:
            try:
                nuke_group(client, group['GroupName'])
            except Exception:
                pass


def nuke_role_policies(client, name):
    p = client.get_paginator('list_role_policies')
    for response in p.paginate(RoleName=name):
        for policy in response['PolicyNames']:
            try:
                client.delete_role_policy(RoleName=name, PolicyName=policy)
            except Exception:
                pass


def nuke_attached_role_policies(client, name):
    p = client.get_paginator('list_attached_role_policies')
    for response in p.paginate(RoleName=name):
        for policy in response['AttachedPolicies']:
            try:
                client.detach_role_policy(RoleName=name, PolicyArn=policy['PolicyArn'])
            except Exception:
                pass


def nuke_role(client, name):
    # delete role policies, etc
    try:
        nuke_role_policies(client, name)
    except Exception:
        pass
    try:
        nuke_attached_role_policies(client, name)
    except Exception:
        pass
    client.delete_role(RoleName=name)


def nuke_roles(client, **kwargs):
    p = client.get_paginator('list_roles')
    for response in p.paginate(**kwargs):
        for role in response['Roles']:
            try:
                nuke_role(client, role['RoleName'])
            except Exception:
                pass


def nuke_oidc_providers(client, prefix):
    result = client.list_open_id_connect_providers()
    for provider in result['OpenIDConnectProviderList']:
        arn = provider['Arn']
        if f':oidc-provider{prefix}' in arn:
            try:
                client.delete_open_id_connect_provider(OpenIDConnectProviderArn=arn)
|
|
||||||
except:
|
|
||||||
pass
|
|
||||||
|
|
||||||
|
|
||||||
# fixture for iam account root user
|
|
||||||
@pytest.fixture
|
|
||||||
def iam_root(configfile):
|
|
||||||
client = get_iam_root_client()
|
|
||||||
try:
|
|
||||||
arn = client.get_user()['User']['Arn']
|
|
||||||
if not arn.endswith(':root'):
|
|
||||||
pytest.skip('[iam root] user does not have :root arn')
|
|
||||||
except ClientError as e:
|
|
||||||
pytest.skip('[iam root] user does not belong to an account')
|
|
||||||
|
|
||||||
yield client
|
|
||||||
nuke_users(client, PathPrefix=get_iam_path_prefix())
|
|
||||||
nuke_groups(client, PathPrefix=get_iam_path_prefix())
|
|
||||||
nuke_roles(client, PathPrefix=get_iam_path_prefix())
|
|
||||||
nuke_oidc_providers(client, get_iam_path_prefix())
|
|
||||||
|
|
||||||
# fixture for iam alt account root user
|
|
||||||
@pytest.fixture
|
|
||||||
def iam_alt_root(configfile):
|
|
||||||
client = get_iam_alt_root_client()
|
|
||||||
try:
|
|
||||||
arn = client.get_user()['User']['Arn']
|
|
||||||
if not arn.endswith(':root'):
|
|
||||||
pytest.skip('[iam alt root] user does not have :root arn')
|
|
||||||
except ClientError as e:
|
|
||||||
pytest.skip('[iam alt root] user does not belong to an account')
|
|
||||||
|
|
||||||
yield client
|
|
||||||
nuke_users(client, PathPrefix=get_iam_path_prefix())
|
|
||||||
nuke_roles(client, PathPrefix=get_iam_path_prefix())
|
|
|
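The `nuke_*` helpers all follow the same best-effort teardown pattern: list every resource (via a paginator), attempt to delete each one, and swallow any failure so cleanup keeps going. A minimal standalone sketch of that pattern, using a hypothetical in-memory `FakeIAM` stand-in for the boto3 IAM client:

```python
# FakeIAM is a made-up stub standing in for a boto3 IAM client,
# so the teardown pattern can run without any AWS endpoint.
class FakeIAM:
    def __init__(self, users):
        self.users = set(users)

    def list_users(self):
        return {'Users': [{'UserName': u} for u in sorted(self.users)]}

    def delete_user(self, UserName):
        if UserName == 'locked':
            raise RuntimeError('cannot delete')  # simulate a failing delete
        self.users.remove(UserName)

def nuke_users(client):
    # best-effort: attempt every delete, ignore individual failures
    for user in client.list_users()['Users']:
        try:
            client.delete_user(UserName=user['UserName'])
        except Exception:
            pass

iam = FakeIAM({'alice', 'bob', 'locked'})
nuke_users(iam)
print(sorted(iam.users))  # ['locked'] -- the failing delete did not stop cleanup
```

Swallowing exceptions is deliberate here: fixture teardown should remove as much test state as possible even when one delete fails.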
@ -1,46 +0,0 @@
import json

class Statement(object):
    def __init__(self, action, resource, principal={"AWS": "*"}, effect="Allow", condition=None):
        self.principal = principal
        self.action = action
        self.resource = resource
        self.condition = condition
        self.effect = effect

    def to_dict(self):
        d = {
            "Action": self.action,
            "Principal": self.principal,
            "Effect": self.effect,
            "Resource": self.resource,
        }

        if self.condition is not None:
            d["Condition"] = self.condition

        return d

class Policy(object):
    def __init__(self):
        self.statements = []

    def add_statement(self, s):
        self.statements.append(s)
        return self

    def to_json(self):
        policy_dict = {
            "Version": "2012-10-17",
            "Statement": [s.to_dict() for s in self.statements],
        }

        return json.dumps(policy_dict)

def make_json_policy(action, resource, principal={"AWS": "*"}, effect="Allow", conditions=None):
    """
    Helper function to make single statement policies
    """
    s = Statement(action, resource, principal, effect=effect, condition=conditions)
    p = Policy()
    return p.add_statement(s).to_json()
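For illustration, the JSON document that a single-statement policy of this shape serializes to can be reconstructed directly; this is a standalone sketch (the bucket ARN is made up), not a call into the helper above:

```python
import json

# Standalone re-creation of the document shape make_json_policy() emits:
# a Version plus a one-element Statement list.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Action": "s3:GetObject",
        "Principal": {"AWS": "*"},
        "Effect": "Allow",
        "Resource": "arn:aws:s3:::example-bucket/*",  # hypothetical ARN
    }],
}

doc = json.dumps(policy)        # what to_json() would return
parsed = json.loads(doc)        # round-trip to verify it is valid JSON
print(parsed["Statement"][0]["Action"])  # s3:GetObject
```

A `Condition` key would appear in the statement only when a condition is supplied, matching the `if self.condition is not None` branch in `to_dict`.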
@ -1,92 +0,0 @@
#!/usr/bin/python

import boto3
import os
import random
import string
import itertools

host = "localhost"
port = 8000

## AWS access key
access_key = "0555b35654ad1656d804"

## AWS secret key
secret_key = "h7GhxuBLTrlhVUyxSPUKUV8r/2EI4ngqJxD7iBdBYLhwluN30JaT3Q=="

prefix = "YOURNAMEHERE-1234-"

endpoint_url = "http://%s:%d" % (host, port)

client = boto3.client(service_name='s3',
                      aws_access_key_id=access_key,
                      aws_secret_access_key=secret_key,
                      endpoint_url=endpoint_url,
                      use_ssl=False,
                      verify=False)

s3 = boto3.resource('s3',
                    use_ssl=False,
                    verify=False,
                    endpoint_url=endpoint_url,
                    aws_access_key_id=access_key,
                    aws_secret_access_key=secret_key)

def choose_bucket_prefix(template, max_len=30):
    """
    Choose a prefix for our test buckets, so they're easy to identify.

    Use template and feed it more and more random filler, until it's
    as long as possible but still below max_len.
    """
    rand = ''.join(
        random.choice(string.ascii_lowercase + string.digits)
        for c in range(255)
    )

    while rand:
        s = template.format(random=rand)
        if len(s) <= max_len:
            return s
        rand = rand[:-1]

    raise RuntimeError(
        'Bucket prefix template is impossible to fulfill: {template!r}'.format(
            template=template,
        ),
    )

bucket_counter = itertools.count(1)

def get_new_bucket_name():
    """
    Get a bucket name that probably does not exist.

    We make every attempt to use a unique random prefix, so if a
    bucket by this name happens to exist, it's ok if tests give
    false negatives.
    """
    name = '{prefix}{num}'.format(
        prefix=prefix,
        num=next(bucket_counter),
    )
    return name

def get_new_bucket(session=boto3, name=None, headers=None):
    """
    Get a bucket that exists and is empty.

    Always recreates a bucket from scratch. This is useful to also
    reset ACLs and such.
    """
    s3 = session.resource('s3',
                          use_ssl=False,
                          verify=False,
                          endpoint_url=endpoint_url,
                          aws_access_key_id=access_key,
                          aws_secret_access_key=secret_key)
    if name is None:
        name = get_new_bucket_name()
    bucket = s3.Bucket(name)
    bucket_location = bucket.create()
    return bucket
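The shrink-until-it-fits logic in `choose_bucket_prefix` can be exercised on its own; the helper is copied here verbatim (minus the module globals) so the sketch runs standalone:

```python
import random
import string

# Copy of choose_bucket_prefix() above, repeated so this runs standalone.
def choose_bucket_prefix(template, max_len=30):
    # start with far too much random filler...
    rand = ''.join(
        random.choice(string.ascii_lowercase + string.digits)
        for c in range(255)
    )
    # ...and trim one character at a time until the formatted name fits
    while rand:
        s = template.format(random=rand)
        if len(s) <= max_len:
            return s
        rand = rand[:-1]
    raise RuntimeError(
        'Bucket prefix template is impossible to fulfill: {!r}'.format(template))

# 'test-' + filler + '-' is 6 fixed chars, so the filler shrinks to 24
# and the result lands exactly on the 30-character limit.
bucket_prefix = choose_bucket_prefix('test-{random}-')
print(len(bucket_prefix))  # 30
```

Because the filler always shrinks from the maximal side, the result is the longest name the template allows under `max_len`, which keeps test buckets both identifiable and unlikely to collide.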
@ -1,572 +0,0 @@
import boto3
import pytest
from botocore.exceptions import ClientError
from email.utils import formatdate

from .utils import assert_raises
from .utils import _get_status_and_error_code
from .utils import _get_status

from . import (
    configfile,
    setup_teardown,
    get_client,
    get_v2_client,
    get_new_bucket,
    get_new_bucket_name,
    )

def _add_header_create_object(headers, client=None):
    """ Create a new bucket, add an object w/header customizations
    """
    bucket_name = get_new_bucket()
    if client is None:
        client = get_client()
    key_name = 'foo'

    # pass in custom headers before the PutObject call
    add_headers = (lambda **kwargs: kwargs['params']['headers'].update(headers))
    client.meta.events.register('before-call.s3.PutObject', add_headers)
    client.put_object(Bucket=bucket_name, Key=key_name)

    return bucket_name, key_name

def _add_header_create_bad_object(headers, client=None):
    """ Create a new bucket, add an object with a header. This should cause a failure
    """
    bucket_name = get_new_bucket()
    if client is None:
        client = get_client()
    key_name = 'foo'

    # pass in custom headers before the PutObject call
    add_headers = (lambda **kwargs: kwargs['params']['headers'].update(headers))
    client.meta.events.register('before-call.s3.PutObject', add_headers)
    e = assert_raises(ClientError, client.put_object, Bucket=bucket_name, Key=key_name, Body='bar')

    return e

def _remove_header_create_object(remove, client=None):
    """ Create a new bucket, add an object without a header
    """
    bucket_name = get_new_bucket()
    if client is None:
        client = get_client()
    key_name = 'foo'

    # remove custom headers before the PutObject call
    def remove_header(**kwargs):
        if (remove in kwargs['params']['headers']):
            del kwargs['params']['headers'][remove]

    client.meta.events.register('before-call.s3.PutObject', remove_header)
    client.put_object(Bucket=bucket_name, Key=key_name)

    return bucket_name, key_name

def _remove_header_create_bad_object(remove, client=None):
    """ Create a new bucket, add an object without a header. This should cause a failure
    """
    bucket_name = get_new_bucket()
    if client is None:
        client = get_client()
    key_name = 'foo'

    # remove custom headers before the PutObject call
    def remove_header(**kwargs):
        if (remove in kwargs['params']['headers']):
            del kwargs['params']['headers'][remove]

    client.meta.events.register('before-call.s3.PutObject', remove_header)
    e = assert_raises(ClientError, client.put_object, Bucket=bucket_name, Key=key_name, Body='bar')

    return e

def _add_header_create_bucket(headers, client=None):
    """ Create a new bucket, w/header customizations
    """
    bucket_name = get_new_bucket_name()
    if client is None:
        client = get_client()

    # pass in custom headers before the CreateBucket call
    add_headers = (lambda **kwargs: kwargs['params']['headers'].update(headers))
    client.meta.events.register('before-call.s3.CreateBucket', add_headers)
    client.create_bucket(Bucket=bucket_name)

    return bucket_name

def _add_header_create_bad_bucket(headers=None, client=None):
    """ Create a new bucket, w/header customizations that should cause a failure
    """
    bucket_name = get_new_bucket_name()
    if client is None:
        client = get_client()

    # pass in custom headers before the CreateBucket call
    add_headers = (lambda **kwargs: kwargs['params']['headers'].update(headers))
    client.meta.events.register('before-call.s3.CreateBucket', add_headers)
    e = assert_raises(ClientError, client.create_bucket, Bucket=bucket_name)

    return e

def _remove_header_create_bucket(remove, client=None):
    """ Create a new bucket, without a header
    """
    bucket_name = get_new_bucket_name()
    if client is None:
        client = get_client()

    # remove custom headers before the CreateBucket call
    def remove_header(**kwargs):
        if (remove in kwargs['params']['headers']):
            del kwargs['params']['headers'][remove]

    client.meta.events.register('before-call.s3.CreateBucket', remove_header)
    client.create_bucket(Bucket=bucket_name)

    return bucket_name

def _remove_header_create_bad_bucket(remove, client=None):
    """ Create a new bucket, without a header. This should cause a failure
    """
    bucket_name = get_new_bucket_name()
    if client is None:
        client = get_client()

    # remove custom headers before the CreateBucket call
    def remove_header(**kwargs):
        if (remove in kwargs['params']['headers']):
            del kwargs['params']['headers'][remove]

    client.meta.events.register('before-call.s3.CreateBucket', remove_header)
    e = assert_raises(ClientError, client.create_bucket, Bucket=bucket_name)

    return e
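All of the header helpers rely on botocore's event system: a handler registered for `before-call.s3.PutObject` (or `before-call.s3.CreateBucket`) mutates the outgoing request's headers in place before the request is sent. The mechanics can be shown without any S3 endpoint using a minimal hand-rolled emitter (a stand-in for botocore's, not its real implementation):

```python
# MiniEmitter is a toy stand-in for botocore's event emitter; it only
# exists to show how a registered handler mutates request params in place.
class MiniEmitter:
    def __init__(self):
        self.handlers = {}

    def register(self, event, fn):
        self.handlers.setdefault(event, []).append(fn)

    def emit(self, event, **kwargs):
        for fn in self.handlers.get(event, []):
            fn(**kwargs)

emitter = MiniEmitter()

# same shape as the add_headers lambda used by the helpers above
headers = {'Content-MD5': 'YWJyYWNhZGFicmE='}
add_headers = (lambda **kwargs: kwargs['params']['headers'].update(headers))
emitter.register('before-call.s3.PutObject', add_headers)

# the "request" botocore would be about to send
request = {'headers': {'Host': 'localhost:8000'}}
emitter.emit('before-call.s3.PutObject', params=request)
print(request['headers'])
```

Because the handler receives the live `params` dict, updating (or deleting from) `params['headers']` changes exactly what goes on the wire, which is what lets these tests inject malformed headers that boto3 would never produce on its own.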
#
# common tests
#

@pytest.mark.auth_common
def test_object_create_bad_md5_invalid_short():
    e = _add_header_create_bad_object({'Content-MD5':'YWJyYWNhZGFicmE='})
    status, error_code = _get_status_and_error_code(e.response)
    assert status == 400
    assert error_code == 'InvalidDigest'

@pytest.mark.auth_common
def test_object_create_bad_md5_bad():
    e = _add_header_create_bad_object({'Content-MD5':'rL0Y20xC+Fzt72VPzMSk2A=='})
    status, error_code = _get_status_and_error_code(e.response)
    assert status == 400
    assert error_code == 'BadDigest'

@pytest.mark.auth_common
def test_object_create_bad_md5_empty():
    e = _add_header_create_bad_object({'Content-MD5':''})
    status, error_code = _get_status_and_error_code(e.response)
    assert status == 400
    assert error_code == 'InvalidDigest'

@pytest.mark.auth_common
def test_object_create_bad_md5_none():
    bucket_name, key_name = _remove_header_create_object('Content-MD5')
    client = get_client()
    client.put_object(Bucket=bucket_name, Key=key_name, Body='bar')

@pytest.mark.auth_common
def test_object_create_bad_expect_mismatch():
    bucket_name, key_name = _add_header_create_object({'Expect': 200})
    client = get_client()
    client.put_object(Bucket=bucket_name, Key=key_name, Body='bar')

@pytest.mark.auth_common
def test_object_create_bad_expect_empty():
    bucket_name, key_name = _add_header_create_object({'Expect': ''})
    client = get_client()
    client.put_object(Bucket=bucket_name, Key=key_name, Body='bar')

@pytest.mark.auth_common
def test_object_create_bad_expect_none():
    bucket_name, key_name = _remove_header_create_object('Expect')
    client = get_client()
    client.put_object(Bucket=bucket_name, Key=key_name, Body='bar')

@pytest.mark.auth_common
# TODO: remove 'fails_on_rgw' once we have learned how to remove the content-length header
@pytest.mark.fails_on_rgw
def test_object_create_bad_contentlength_empty():
    e = _add_header_create_bad_object({'Content-Length':''})
    status, error_code = _get_status_and_error_code(e.response)
    assert status == 400

@pytest.mark.auth_common
@pytest.mark.fails_on_mod_proxy_fcgi
def test_object_create_bad_contentlength_negative():
    client = get_client()
    bucket_name = get_new_bucket()
    key_name = 'foo'
    e = assert_raises(ClientError, client.put_object, Bucket=bucket_name, Key=key_name, ContentLength=-1)
    status = _get_status(e.response)
    assert status == 400

@pytest.mark.auth_common
# TODO: remove 'fails_on_rgw' once we have learned how to remove the content-length header
@pytest.mark.fails_on_rgw
def test_object_create_bad_contentlength_none():
    remove = 'Content-Length'
    e = _remove_header_create_bad_object(remove)
    status, error_code = _get_status_and_error_code(e.response)
    assert status == 411
    assert error_code == 'MissingContentLength'

@pytest.mark.auth_common
def test_object_create_bad_contenttype_invalid():
    bucket_name, key_name = _add_header_create_object({'Content-Type': 'text/plain'})
    client = get_client()
    client.put_object(Bucket=bucket_name, Key=key_name, Body='bar')

@pytest.mark.auth_common
def test_object_create_bad_contenttype_empty():
    client = get_client()
    key_name = 'foo'
    bucket_name = get_new_bucket()
    client.put_object(Bucket=bucket_name, Key=key_name, Body='bar', ContentType='')

@pytest.mark.auth_common
def test_object_create_bad_contenttype_none():
    bucket_name = get_new_bucket()
    key_name = 'foo'
    client = get_client()
    # as long as ContentType isn't specified in put_object it isn't going into the request
    client.put_object(Bucket=bucket_name, Key=key_name, Body='bar')

@pytest.mark.auth_common
# TODO: remove 'fails_on_rgw' once we have learned how to remove the authorization header
@pytest.mark.fails_on_rgw
def test_object_create_bad_authorization_empty():
    e = _add_header_create_bad_object({'Authorization': ''})
    status, error_code = _get_status_and_error_code(e.response)
    assert status == 403

@pytest.mark.auth_common
# TODO: remove 'fails_on_rgw' once we have learned how to pass both the 'Date' and 'X-Amz-Date' header during signing and not 'X-Amz-Date' before
@pytest.mark.fails_on_rgw
def test_object_create_date_and_amz_date():
    date = formatdate(usegmt=True)
    bucket_name, key_name = _add_header_create_object({'Date': date, 'X-Amz-Date': date})
    client = get_client()
    client.put_object(Bucket=bucket_name, Key=key_name, Body='bar')

@pytest.mark.auth_common
# TODO: remove 'fails_on_rgw' once we have learned how to pass both the 'Date' and 'X-Amz-Date' header during signing and not 'X-Amz-Date' before
@pytest.mark.fails_on_rgw
def test_object_create_amz_date_and_no_date():
    date = formatdate(usegmt=True)
    bucket_name, key_name = _add_header_create_object({'Date': '', 'X-Amz-Date': date})
    client = get_client()
    client.put_object(Bucket=bucket_name, Key=key_name, Body='bar')

# the teardown is really messed up here. check it out
@pytest.mark.auth_common
# TODO: remove 'fails_on_rgw' once we have learned how to remove the authorization header
@pytest.mark.fails_on_rgw
def test_object_create_bad_authorization_none():
    e = _remove_header_create_bad_object('Authorization')
    status, error_code = _get_status_and_error_code(e.response)
    assert status == 403

@pytest.mark.auth_common
# TODO: remove 'fails_on_rgw' once we have learned how to remove the content-length header
@pytest.mark.fails_on_rgw
def test_bucket_create_contentlength_none():
    remove = 'Content-Length'
    _remove_header_create_bucket(remove)

@pytest.mark.auth_common
# TODO: remove 'fails_on_rgw' once we have learned how to remove the content-length header
@pytest.mark.fails_on_rgw
def test_object_acl_create_contentlength_none():
    bucket_name = get_new_bucket()
    client = get_client()
    client.put_object(Bucket=bucket_name, Key='foo', Body='bar')

    remove = 'Content-Length'
    def remove_header(**kwargs):
        if (remove in kwargs['params']['headers']):
            del kwargs['params']['headers'][remove]

    client.meta.events.register('before-call.s3.PutObjectAcl', remove_header)
    client.put_object_acl(Bucket=bucket_name, Key='foo', ACL='public-read')

@pytest.mark.auth_common
def test_bucket_put_bad_canned_acl():
    bucket_name = get_new_bucket()
    client = get_client()

    headers = {'x-amz-acl': 'public-ready'}
    add_headers = (lambda **kwargs: kwargs['params']['headers'].update(headers))
    client.meta.events.register('before-call.s3.PutBucketAcl', add_headers)

    e = assert_raises(ClientError, client.put_bucket_acl, Bucket=bucket_name, ACL='public-read')
    status = _get_status(e.response)
    assert status == 400

@pytest.mark.auth_common
def test_bucket_create_bad_expect_mismatch():
    bucket_name = get_new_bucket_name()
    client = get_client()

    headers = {'Expect': 200}
    add_headers = (lambda **kwargs: kwargs['params']['headers'].update(headers))
    client.meta.events.register('before-call.s3.CreateBucket', add_headers)
    client.create_bucket(Bucket=bucket_name)

@pytest.mark.auth_common
def test_bucket_create_bad_expect_empty():
    headers = {'Expect': ''}
    _add_header_create_bucket(headers)

@pytest.mark.auth_common
# TODO: The request isn't even making it to the RGW past the frontend
# This test had 'fails_on_rgw' before the move to boto3
@pytest.mark.fails_on_rgw
def test_bucket_create_bad_contentlength_empty():
    headers = {'Content-Length': ''}
    e = _add_header_create_bad_bucket(headers)
    status, error_code = _get_status_and_error_code(e.response)
    assert status == 400

@pytest.mark.auth_common
@pytest.mark.fails_on_mod_proxy_fcgi
def test_bucket_create_bad_contentlength_negative():
    headers = {'Content-Length': '-1'}
    e = _add_header_create_bad_bucket(headers)
    status = _get_status(e.response)
    assert status == 400

@pytest.mark.auth_common
# TODO: remove 'fails_on_rgw' once we have learned how to remove the content-length header
@pytest.mark.fails_on_rgw
def test_bucket_create_bad_contentlength_none():
    remove = 'Content-Length'
    _remove_header_create_bucket(remove)

@pytest.mark.auth_common
# TODO: remove 'fails_on_rgw' once we have learned how to manipulate the authorization header
@pytest.mark.fails_on_rgw
def test_bucket_create_bad_authorization_empty():
    headers = {'Authorization': ''}
    e = _add_header_create_bad_bucket(headers)
    status, error_code = _get_status_and_error_code(e.response)
    assert status == 403
    assert error_code == 'AccessDenied'

@pytest.mark.auth_common
# TODO: remove 'fails_on_rgw' once we have learned how to manipulate the authorization header
@pytest.mark.fails_on_rgw
def test_bucket_create_bad_authorization_none():
    e = _remove_header_create_bad_bucket('Authorization')
    status, error_code = _get_status_and_error_code(e.response)
    assert status == 403
    assert error_code == 'AccessDenied'

@pytest.mark.auth_aws2
def test_object_create_bad_md5_invalid_garbage_aws2():
    v2_client = get_v2_client()
    headers = {'Content-MD5': 'AWS HAHAHA'}
    e = _add_header_create_bad_object(headers, v2_client)
    status, error_code = _get_status_and_error_code(e.response)
    assert status == 400
    assert error_code == 'InvalidDigest'

@pytest.mark.auth_aws2
# TODO: remove 'fails_on_rgw' once we have learned how to manipulate the Content-Length header
@pytest.mark.fails_on_rgw
def test_object_create_bad_contentlength_mismatch_below_aws2():
    v2_client = get_v2_client()
    content = 'bar'
    length = len(content) - 1
    headers = {'Content-Length': str(length)}
    e = _add_header_create_bad_object(headers, v2_client)
    status, error_code = _get_status_and_error_code(e.response)
    assert status == 400
    assert error_code == 'BadDigest'

@pytest.mark.auth_aws2
# TODO: remove 'fails_on_rgw' once we have learned how to manipulate the authorization header
@pytest.mark.fails_on_rgw
def test_object_create_bad_authorization_incorrect_aws2():
    v2_client = get_v2_client()
    headers = {'Authorization': 'AWS AKIAIGR7ZNNBHC5BKSUB:FWeDfwojDSdS2Ztmpfeubhd9isU='}
    e = _add_header_create_bad_object(headers, v2_client)
    status, error_code = _get_status_and_error_code(e.response)
    assert status == 403
    assert error_code == 'InvalidDigest'

@pytest.mark.auth_aws2
# TODO: remove 'fails_on_rgw' once we have learned how to manipulate the authorization header
@pytest.mark.fails_on_rgw
def test_object_create_bad_authorization_invalid_aws2():
    v2_client = get_v2_client()
    headers = {'Authorization': 'AWS HAHAHA'}
    e = _add_header_create_bad_object(headers, v2_client)
    status, error_code = _get_status_and_error_code(e.response)
    assert status == 400
    assert error_code == 'InvalidArgument'

@pytest.mark.auth_aws2
def test_object_create_bad_ua_empty_aws2():
    v2_client = get_v2_client()
    headers = {'User-Agent': ''}
    bucket_name, key_name = _add_header_create_object(headers, v2_client)
    v2_client.put_object(Bucket=bucket_name, Key=key_name, Body='bar')

@pytest.mark.auth_aws2
def test_object_create_bad_ua_none_aws2():
    v2_client = get_v2_client()
    remove = 'User-Agent'
    bucket_name, key_name = _remove_header_create_object(remove, v2_client)
    v2_client.put_object(Bucket=bucket_name, Key=key_name, Body='bar')

@pytest.mark.auth_aws2
def test_object_create_bad_date_invalid_aws2():
    v2_client = get_v2_client()
    headers = {'x-amz-date': 'Bad Date'}
    e = _add_header_create_bad_object(headers, v2_client)
    status, error_code = _get_status_and_error_code(e.response)
    assert status == 403
    assert error_code == 'AccessDenied'

@pytest.mark.auth_aws2
def test_object_create_bad_date_empty_aws2():
    v2_client = get_v2_client()
    headers = {'x-amz-date': ''}
    e = _add_header_create_bad_object(headers, v2_client)
    status, error_code = _get_status_and_error_code(e.response)
    assert status == 403
    assert error_code == 'AccessDenied'

@pytest.mark.auth_aws2
# TODO: remove 'fails_on_rgw' once we have learned how to remove the date header
@pytest.mark.fails_on_rgw
def test_object_create_bad_date_none_aws2():
    v2_client = get_v2_client()
    remove = 'x-amz-date'
    e = _remove_header_create_bad_object(remove, v2_client)
    status, error_code = _get_status_and_error_code(e.response)
    assert status == 403
    assert error_code == 'AccessDenied'

@pytest.mark.auth_aws2
def test_object_create_bad_date_before_today_aws2():
    v2_client = get_v2_client()
    headers = {'x-amz-date': 'Tue, 07 Jul 2010 21:53:04 GMT'}
    e = _add_header_create_bad_object(headers, v2_client)
    status, error_code = _get_status_and_error_code(e.response)
    assert status == 403
    assert error_code == 'RequestTimeTooSkewed'

@pytest.mark.auth_aws2
def test_object_create_bad_date_before_epoch_aws2():
    v2_client = get_v2_client()
    headers = {'x-amz-date': 'Tue, 07 Jul 1950 21:53:04 GMT'}
    e = _add_header_create_bad_object(headers, v2_client)
    status, error_code = _get_status_and_error_code(e.response)
    assert status == 403
    assert error_code == 'AccessDenied'

@pytest.mark.auth_aws2
def test_object_create_bad_date_after_end_aws2():
    v2_client = get_v2_client()
    headers = {'x-amz-date': 'Tue, 07 Jul 9999 21:53:04 GMT'}
    e = _add_header_create_bad_object(headers, v2_client)
    status, error_code = _get_status_and_error_code(e.response)
    assert status == 403
    assert error_code == 'RequestTimeTooSkewed'

@pytest.mark.auth_aws2
# TODO: remove 'fails_on_rgw' once we have learned how to manipulate the authorization header
@pytest.mark.fails_on_rgw
def test_bucket_create_bad_authorization_invalid_aws2():
    v2_client = get_v2_client()
    headers = {'Authorization': 'AWS HAHAHA'}
    e = _add_header_create_bad_bucket(headers, v2_client)
    status, error_code = _get_status_and_error_code(e.response)
    assert status == 400
    assert error_code == 'InvalidArgument'

@pytest.mark.auth_aws2
def test_bucket_create_bad_ua_empty_aws2():
    v2_client = get_v2_client()
    headers = {'User-Agent': ''}
    _add_header_create_bucket(headers, v2_client)

@pytest.mark.auth_aws2
def test_bucket_create_bad_ua_none_aws2():
    v2_client = get_v2_client()
    remove = 'User-Agent'
    _remove_header_create_bucket(remove, v2_client)

@pytest.mark.auth_aws2
def test_bucket_create_bad_date_invalid_aws2():
    v2_client = get_v2_client()
    headers = {'x-amz-date': 'Bad Date'}
    e = _add_header_create_bad_bucket(headers, v2_client)
    status, error_code = _get_status_and_error_code(e.response)
    assert status == 403
    assert error_code == 'AccessDenied'

@pytest.mark.auth_aws2
def test_bucket_create_bad_date_empty_aws2():
    v2_client = get_v2_client()
    headers = {'x-amz-date': ''}
    e = _add_header_create_bad_bucket(headers, v2_client)
    status, error_code = _get_status_and_error_code(e.response)
    assert status == 403
    assert error_code == 'AccessDenied'

@pytest.mark.auth_aws2
# TODO: remove 'fails_on_rgw' once we have learned how to remove the date header
@pytest.mark.fails_on_rgw
def test_bucket_create_bad_date_none_aws2():
    v2_client = get_v2_client()
    remove = 'x-amz-date'
    e = _remove_header_create_bad_bucket(remove, v2_client)
    status, error_code = _get_status_and_error_code(e.response)
    assert status == 403
    assert error_code == 'AccessDenied'

@pytest.mark.auth_aws2
def test_bucket_create_bad_date_before_today_aws2():
    v2_client = get_v2_client()
    headers = {'x-amz-date': 'Tue, 07 Jul 2010 21:53:04 GMT'}
    e = _add_header_create_bad_bucket(headers, v2_client)
    status, error_code = _get_status_and_error_code(e.response)
    assert status == 403
    assert error_code == 'RequestTimeTooSkewed'

@pytest.mark.auth_aws2
def test_bucket_create_bad_date_after_today_aws2():
    v2_client = get_v2_client()
    headers = {'x-amz-date': 'Tue, 07 Jul 2030 21:53:04 GMT'}
    e = _add_header_create_bad_bucket(headers, v2_client)
    status, error_code = _get_status_and_error_code(e.response)
    assert status == 403
    assert error_code == 'RequestTimeTooSkewed'

@pytest.mark.auth_aws2
def test_bucket_create_bad_date_before_epoch_aws2():
    v2_client = get_v2_client()
    headers = {'x-amz-date': 'Tue, 07 Jul 1950 21:53:04 GMT'}
|
|
||||||
e = _add_header_create_bad_bucket(headers, v2_client)
|
|
||||||
status, error_code = _get_status_and_error_code(e.response)
|
|
||||||
assert status == 403
|
|
||||||
assert error_code == 'AccessDenied'
|
|
File diff suppressed because it is too large
File diff suppressed because it is too large
File diff suppressed because it is too large
@@ -1,159 +0,0 @@
import json

import pytest
from botocore.exceptions import ClientError

from . import (
    configfile,
    get_iam_root_client,
    get_iam_alt_root_client,
    get_new_bucket_name,
    get_prefix,
    nuke_prefixed_buckets,
)
from .iam import iam_root, iam_alt_root
from .utils import assert_raises, _get_status_and_error_code


def get_new_topic_name():
    return get_new_bucket_name()


def nuke_topics(client, prefix):
    p = client.get_paginator('list_topics')
    for response in p.paginate():
        for topic in response['Topics']:
            arn = topic['TopicArn']
            if prefix not in arn:
                continue  # skip topics that do not belong to this test run
            try:
                client.delete_topic(TopicArn=arn)
            except Exception:
                pass


@pytest.fixture
def sns(iam_root):
    client = get_iam_root_client(service_name='sns')
    yield client
    nuke_topics(client, get_prefix())


@pytest.fixture
def sns_alt(iam_alt_root):
    client = get_iam_alt_root_client(service_name='sns')
    yield client
    nuke_topics(client, get_prefix())


@pytest.fixture
def s3(iam_root):
    client = get_iam_root_client(service_name='s3')
    yield client
    nuke_prefixed_buckets(get_prefix(), client)


@pytest.fixture
def s3_alt(iam_alt_root):
    client = get_iam_alt_root_client(service_name='s3')
    yield client
    nuke_prefixed_buckets(get_prefix(), client)


@pytest.mark.iam_account
@pytest.mark.sns
def test_account_topic(sns):
    name = get_new_topic_name()

    response = sns.create_topic(Name=name)
    arn = response['TopicArn']
    assert arn.startswith('arn:aws:sns:')
    assert arn.endswith(f':{name}')

    response = sns.list_topics()
    assert arn in [p['TopicArn'] for p in response['Topics']]

    sns.set_topic_attributes(TopicArn=arn, AttributeName='Policy', AttributeValue='')

    response = sns.get_topic_attributes(TopicArn=arn)
    assert 'Attributes' in response

    sns.delete_topic(TopicArn=arn)

    response = sns.list_topics()
    assert arn not in [p['TopicArn'] for p in response['Topics']]

    with pytest.raises(sns.exceptions.NotFoundException):
        sns.get_topic_attributes(TopicArn=arn)

    sns.delete_topic(TopicArn=arn)


@pytest.mark.iam_account
@pytest.mark.sns
def test_cross_account_topic(sns, sns_alt):
    name = get_new_topic_name()
    arn = sns.create_topic(Name=name)['TopicArn']

    # not visible to any alt user apis
    with pytest.raises(sns.exceptions.NotFoundException):
        sns_alt.get_topic_attributes(TopicArn=arn)
    with pytest.raises(sns.exceptions.NotFoundException):
        sns_alt.set_topic_attributes(TopicArn=arn, AttributeName='Policy', AttributeValue='')

    # delete returns success
    sns_alt.delete_topic(TopicArn=arn)

    response = sns_alt.list_topics()
    assert arn not in [p['TopicArn'] for p in response['Topics']]


@pytest.mark.iam_account
@pytest.mark.sns
def test_account_topic_publish(sns, s3):
    name = get_new_topic_name()

    response = sns.create_topic(Name=name)
    topic_arn = response['TopicArn']

    bucket = get_new_bucket_name()
    s3.create_bucket(Bucket=bucket)

    config = {'TopicConfigurations': [{
        'Id': 'id',
        'TopicArn': topic_arn,
        'Events': [ 's3:ObjectCreated:*' ],
    }]}
    s3.put_bucket_notification_configuration(
        Bucket=bucket, NotificationConfiguration=config)


@pytest.mark.iam_account
@pytest.mark.iam_cross_account
@pytest.mark.sns
def test_cross_account_topic_publish(sns, s3_alt, iam_alt_root):
    name = get_new_topic_name()

    response = sns.create_topic(Name=name)
    topic_arn = response['TopicArn']

    bucket = get_new_bucket_name()
    s3_alt.create_bucket(Bucket=bucket)

    config = {'TopicConfigurations': [{
        'Id': 'id',
        'TopicArn': topic_arn,
        'Events': [ 's3:ObjectCreated:*' ],
    }]}

    # expect AccessDenied because no resource policy allows cross-account access
    e = assert_raises(ClientError, s3_alt.put_bucket_notification_configuration,
                      Bucket=bucket, NotificationConfiguration=config)
    status, error_code = _get_status_and_error_code(e.response)
    assert status == 403
    assert error_code == 'AccessDenied'

    # add a topic policy to allow the alt user
    alt_principal = iam_alt_root.get_user()['User']['Arn']
    policy = json.dumps({
        'Version': '2012-10-17',
        'Statement': [{
            'Effect': 'Allow',
            'Principal': {'AWS': alt_principal},
            'Action': 'sns:Publish',
            'Resource': topic_arn
        }]
    })
    sns.set_topic_attributes(TopicArn=topic_arn, AttributeName='Policy',
                             AttributeValue=policy)

    s3_alt.put_bucket_notification_configuration(
        Bucket=bucket, NotificationConfiguration=config)
File diff suppressed because it is too large
@@ -1,9 +0,0 @@
from . import utils


def test_generate():
    FIVE_MB = 5 * 1024 * 1024
    assert len(''.join(utils.generate_random(0))) == 0
    assert len(''.join(utils.generate_random(1))) == 1
    assert len(''.join(utils.generate_random(FIVE_MB - 1))) == FIVE_MB - 1
    assert len(''.join(utils.generate_random(FIVE_MB))) == FIVE_MB
    assert len(''.join(utils.generate_random(FIVE_MB + 1))) == FIVE_MB + 1
@@ -1,47 +0,0 @@
import random
import requests
import string
import time


def assert_raises(excClass, callableObj, *args, **kwargs):
    """
    Like unittest.TestCase.assertRaises, but returns the exception.
    """
    try:
        callableObj(*args, **kwargs)
    except excClass as e:
        return e
    else:
        if hasattr(excClass, '__name__'):
            excName = excClass.__name__
        else:
            excName = str(excClass)
        raise AssertionError("%s not raised" % excName)


def generate_random(size, part_size=5*1024*1024):
    """
    Generate the specified amount of random data.
    (actually each MB is a repetition of the first KB)
    """
    chunk = 1024
    allowed = string.ascii_letters
    for x in range(0, size, part_size):
        strpart = ''.join([allowed[random.randint(0, len(allowed) - 1)] for _ in range(chunk)])
        s = ''
        left = size - x
        this_part_size = min(left, part_size)
        for y in range(this_part_size // chunk):
            s = s + strpart
        s = s + strpart[:(this_part_size % chunk)]
        yield s
        if (x == size):
            return


def _get_status(response):
    status = response['ResponseMetadata']['HTTPStatusCode']
    return status


def _get_status_and_error_code(response):
    status = response['ResponseMetadata']['HTTPStatusCode']
    error_code = response['Error']['Code']
    return status, error_code
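For context, the chunking behavior of the `generate_random` helper above can be illustrated with a behaviorally equivalent standalone sketch (the restructured body and the sample sizes below are illustrative, not part of the diff): each yielded part is at most `part_size` characters, and the parts concatenate to exactly `size` characters.

```python
import random
import string

# Standalone, behaviorally equivalent sketch of the generate_random
# helper shown above: yields parts of at most part_size characters,
# where each part repeats a single random 1 KB chunk.
def generate_random(size, part_size=5*1024*1024):
    chunk = 1024
    allowed = string.ascii_letters
    for x in range(0, size, part_size):
        strpart = ''.join(random.choice(allowed) for _ in range(chunk))
        this_part_size = min(size - x, part_size)
        s = strpart * (this_part_size // chunk)
        s += strpart[:(this_part_size % chunk)]
        yield s

# 3 KB + 100 bytes with a 1 KB part size splits into three full parts
# and one 100-byte remainder:
parts = list(generate_random(3 * 1024 + 100, part_size=1024))
print([len(p) for p in parts])  # -> [1024, 1024, 1024, 100]
```

This is why `test_generate` above only needs to check total lengths: the per-part sizes are fully determined by `size` and `part_size`.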
14 setup.py
@@ -14,10 +14,18 @@ setup(
     install_requires=[
         'boto >=2.0b4',
-        'boto3 >=1.0.0',
         'PyYAML',
-        'munch >=2.0.0',
-        'gevent >=1.0',
+        'bunch >=1.0.0',
+        'gevent ==0.13.6',
         'isodate >=0.4.4',
         ],
+
+    entry_points={
+        'console_scripts': [
+            's3tests-generate-objects = s3tests.generate_objects:main',
+            's3tests-test-readwrite = s3tests.readwrite:main',
+            's3tests-test-roundtrip = s3tests.roundtrip:main',
+            ],
+        },
     )
382 siege.conf Normal file
@@ -0,0 +1,382 @@
# Updated by Siege 2.69, May-24-2010
# Copyright 2000-2007 by Jeffrey Fulmer, et al.
#
# Siege configuration file -- edit as necessary
# For more information about configuring and running
# this program, visit: http://www.joedog.org/

#
# Variable declarations. You can set variables here
# for use in the directives below. Example:
# PROXY = proxy.joedog.org
# Reference variables inside ${} or $(), example:
# proxy-host = ${PROXY}
# You can also reference ENVIRONMENT variables without
# actually declaring them, example:
# logfile = $(HOME)/var/siege.log

#
# Signify verbose mode, true turns on verbose output
# ex: verbose = true|false
#
verbose = true

#
# CSV Verbose format: with this option, you can choose
# to format verbose output in traditional siege format
# or comma separated format. The latter will allow you
# to redirect output to a file for import into a spread
# sheet, i.e., siege > file.csv
# ex: csv = true|false (default false)
#
csv = true

#
# Full URL verbose format: By default siege displays
# the URL path and not the full URL. With this option,
# you # can instruct siege to show the complete URL.
# ex: fullurl = true|false (default false)
#
# fullurl = true

#
# Display id: in verbose mode, display the siege user
# id associated with the HTTP transaction information
# ex: display-id = true|false
#
# display-id =

#
# Show logfile location. By default, siege displays the
# logfile location at the end of every run when logging
# You can turn this message off with this directive.
# ex: show-logfile = false
#
show-logfile = true

#
# Default logging status, true turns logging on.
# ex: logging = true|false
#
logging = true

#
# Logfile, the default siege logfile is $PREFIX/var/siege.log
# This directive allows you to choose an alternative log file.
# Environment variables may be used as shown in the examples:
# ex: logfile = /home/jeff/var/log/siege.log
# logfile = ${HOME}/var/log/siege.log
# logfile = ${LOGFILE}
#
logfile = ./siege.log

#
# HTTP protocol. Options HTTP/1.1 and HTTP/1.0.
# Some webservers have broken implementation of the
# 1.1 protocol which skews throughput evaluations.
# If you notice some siege clients hanging for
# extended periods of time, change this to HTTP/1.0
# ex: protocol = HTTP/1.1
# protocol = HTTP/1.0
#
protocol = HTTP/1.1

#
# Chunked encoding is required by HTTP/1.1 protocol
# but siege allows you to turn it off as desired.
#
# ex: chunked = true
#
chunked = true

#
# Cache revalidation.
# Siege supports cache revalidation for both ETag and
# Last-modified headers. If a copy is still fresh, the
# server responds with 304.
# HTTP/1.1 200  0.00 secs: 2326 bytes ==> /apache_pb.gif
# HTTP/1.1 304  0.00 secs:    0 bytes ==> /apache_pb.gif
# HTTP/1.1 304  0.00 secs:    0 bytes ==> /apache_pb.gif
#
# ex: cache = true
#
cache = false

#
# Connection directive. Options "close" and "keep-alive"
# Starting with release 2.57b3, siege implements persistent
# connections in accordance to RFC 2068 using both chunked
# encoding and content-length directives to determine the
# page size. To run siege with persistent connections set
# the connection directive to keep-alive. (Default close)
# CAUTION: use the keep-alive directive with care.
# DOUBLE CAUTION: this directive does not work well on HPUX
# TRIPLE CAUTION: don't use keep-alives until further notice
# ex: connection = close
# connection = keep-alive
#
connection = close

#
# Default number of simulated concurrent users
# ex: concurrent = 25
#
concurrent = 15

#
# Default duration of the siege. The right hand argument has
# a modifier which specifies the time units, H=hours, M=minutes,
# and S=seconds. If a modifier is not specified, then minutes
# are assumed.
# ex: time = 50M
#
# time =

#
# Repetitions. The length of siege may be specified in client
# reps rather then a time duration. Instead of specifying a time
# span, you can tell each siege instance to hit the server X number
# of times. So if you chose 'reps = 20' and you've selected 10
# concurrent users, then siege will hit the server 200 times.
# ex: reps = 20
#
# reps =

#
# Default URLs file, set at configuration time, the default
# file is PREFIX/etc/urls.txt. So if you configured siege
# with --prefix=/usr/local then the urls.txt file is installed
# int /usr/local/etc/urls.txt. Use the "file = " directive to
# configure an alternative URLs file. You may use environment
# variables as shown in the examples below:
# ex: file = /export/home/jdfulmer/MYURLS.txt
# file = $HOME/etc/urls.txt
# file = $URLSFILE
#
file = ./urls.txt

#
# Default URL, this is a single URL that you want to test. This
# is usually set at the command line with the -u option. When
# used, this option overrides the urls.txt (-f FILE/--file=FILE)
# option. You will HAVE to comment this out for in order to use
# the urls.txt file option.
# ex: url = https://shemp.whoohoo.com/docs/index.jsp
#
# url =

#
# Default delay value, see the siege(1) man page.
# This value is used for load testing, it is not used
# for benchmarking.
# ex: delay = 3
#
delay = 1

#
# Connection timeout value. Set the value in seconds for
# socket connection timeouts. The default value is 30 seconds.
# ex: timeout = 30
#
# timeout =

#
# Session expiration: This directive allows you to delete all
# cookies after you pass through the URLs. This means siege will
# grab a new session with each run through its URLs. The default
# value is false.
# ex: expire-session = true
#
# expire-session =

#
# Failures: This is the number of total connection failures allowed
# before siege aborts. Connection failures (timeouts, socket failures,
# etc.) are combined with 400 and 500 level errors in the final stats,
# but those errors do not count against the abort total. If you set
# this total to 10, then siege will abort after ten socket timeouts,
# but it will NOT abort after ten 404s. This is designed to prevent
# a run-away mess on an unattended siege. The default value is 1024
# ex: failures = 50
#
# failures =

#
# Internet simulation. If true, siege clients will hit
# the URLs in the urls.txt file randomly, thereby simulating
# internet usage. If false, siege will run through the
# urls.txt file in order from first to last and back again.
# ex: internet = true
#
internet = false

#
# Default benchmarking value, If true, there is NO delay
# between server requests, siege runs as fast as the web
# server and the network will let it. Set this to false
# for load testing.
# ex: benchmark = true
#
benchmark = false

#
# Set the siege User-Agent to identify yourself at the
# host, the default is: JoeDog/1.00 [en] (X11; I; Siege #.##)
# But that wreaks of corporate techno speak. Feel free
# to make it more interesting :-) Since Limey is recovering
# from minor surgery as I write this, I'll dedicate the
# example to him...
# ex: user-agent = Limey The Bulldog
#
# user-agent =

#
# Accept-encoding. This option allows you to specify
# acceptable encodings returned by the server. Use this
# directive to turn on compression. By default we accept
# gzip compression.
#
# ex: accept-encoding = *
# accept-encoding = gzip
# accept-encoding = compress;q=0.5;gzip;q=1
accept-encoding = gzip

#
# TURN OFF THAT ANNOYING SPINNER!
# Siege spawns a thread and runs a spinner to entertain you
# as it collects and computes its stats. If you don't like
# this feature, you may turn it off here.
# ex: spinner = false
#
spinner = true

#
# WWW-Authenticate login. When siege hits a webpage
# that requires basic authentication, it will search its
# logins for authentication which matches the specific realm
# requested by the server. If it finds a match, it will send
# that login information. If it fails to match the realm, it
# will send the default login information. (Default is "all").
# You may configure siege with several logins as long as no
# two realms match. The format for logins is:
# username:password[:realm] where "realm" is optional.
# If you do not supply a realm, then it will default to "all"
# ex: login = jdfulmer:topsecret:Admin
# login = jeff:supersecret
#
# login =

#
# WWW-Authenticate username and password. When siege
# hits a webpage that requires authentication, it will
# send this user name and password to the server. Note
# this is NOT form based authentication. You will have
# to construct URLs for that.
# ex: username = jdfulmer
# password = whoohoo
#
# username =
# password =

#
# ssl-cert
# This optional feature allows you to specify a path to a client
# certificate. It is not neccessary to specify a certificate in
# order to use https. If you don't know why you would want one,
# then you probably don't need this feature. Use openssl to
# generate a certificate and key with the following command:
# $ openssl req -nodes -new -days 365 -newkey rsa:1024 \
#   -keyout key.pem -out cert.pem
# Specify a path to cert.pem as follows:
# ex: ssl-cert = /home/jeff/.certs/cert.pem
#
# ssl-cert =

#
# ssl-key
# Use this option to specify the key you generated with the command
# above. ex: ssl-key = /home/jeff/.certs/key.pem
# You may actually skip this option and combine both your cert and
# your key in a single file:
# $ cat key.pem > client.pem
# $ cat cert.pem >> client.pem
# Now set the path for ssl-cert:
# ex: ssl-cert = /home/jeff/.certs/client.pem
# (in this scenario, you comment out ssl-key)
#
# ssl-key =

#
# ssl-timeout
# This option sets a connection timeout for the ssl library
# ex: ssl-timeout = 30
#
# ssl-timeout =

#
# ssl-ciphers
# You can use this feature to select a specific ssl cipher
# for HTTPs. To view the ones available with your library run
# the following command: openssl ciphers
# ex: ssl-ciphers = EXP-RC4-MD5
#
# ssl-ciphers =

#
# Login URL. This is the first URL to be hit by every siege
# client. This feature was designed to allow you to login to
# a server and establish a session. It will only be hit once
# so if you need to hit this URL more then once, make sure it
# also appears in your urls.txt file.
#
# ex: login-url = http://eos.haha.com/login.jsp POST name=jeff&pass=foo
#
# login-url =

#
# Proxy protocol. This option allows you to select a proxy
# server stress testing. The proxy will request the URL(s)
# specified by -u"my.url.org" OR from the urls.txt file.
#
# ex: proxy-host = proxy.whoohoo.org
# proxy-port = 8080
#
# proxy-host =
# proxy-port =

#
# Proxy-Authenticate. When scout hits a proxy server which
# requires username and password authentication, it will this
# username and password to the server. The format is username,
# password and optional realm each separated by a colon. You
# may enter more than one proxy-login as long as each one has
# a different realm. If you do not enter a realm, then scout
# will send that login information to all proxy challenges. If
# you have more than one proxy-login, then scout will attempt
# to match the login to the realm.
# ex: proxy-login: jeff:secret:corporate
# proxy-login: jeff:whoohoo
#
# proxy-login =

#
# Redirection support. This option allows to to control
# whether a Location: hint will be followed. Most users
# will want to follow redirection information, but sometimes
# it's desired to just get the Location information.
#
# ex: follow-location = false
#
# follow-location =

# Zero-length data. siege can be configured to disregard
# results in which zero bytes are read after the headers.
# Alternatively, such results can be counted in the final
# tally of outcomes.
#
# ex: zero-data-ok = false
#
# zero-data-ok =

#
# end of siegerc
9 tox.ini
@ -1,9 +0,0 @@
|
||||||
[tox]
|
|
||||||
envlist = py
|
|
||||||
|
|
||||||
[testenv]
|
|
||||||
deps = -rrequirements.txt
|
|
||||||
passenv =
|
|
||||||
S3TEST_CONF
|
|
||||||
S3_USE_SIGV4
|
|
||||||
commands = pytest {posargs}
|
|