Compare commits


32 commits

Author SHA1 Message Date
Sage Weil
2c134616d2 print foo -> print(foo)
Signed-off-by: Sage Weil <sage@redhat.com>
2019-12-14 09:19:20 -06:00
Sage Weil
e3761a30b3 python 3
Signed-off-by: Sage Weil <sage@redhat.com>
2019-12-14 08:34:44 -06:00
Yehuda Sadeh
5f4f0a9c38 bootstrap: switch to bash
Use of declare that is bash specific was introduced in:
d44c923405e22ea8b6cc1ab4316be3cc07f29b5

Signed-off-by: Yehuda Sadeh <yehuda@redhat.com>
(cherry picked from commit e7fbcee0d5)
2019-12-09 05:45:34 +02:00
Ali Maredia
65f3441636
Merge pull request #306 from hairesis/wip-sse-kms-tests-refactor
Passing sse-kms keys from configuration instead of hard coding in tests
2019-10-03 12:08:44 -04:00
Andrea Baglioni
ad4bb57246 sse-kms test keys parametrization
Signed-off-by: Andrea Baglioni <andrea.baglioni@workday.com>
2019-09-24 19:40:09 +01:00
Andrea Baglioni
05dd20b14a sse-kms: adding missing kms test to parametrization 2019-09-24 19:39:08 +01:00
Casey Bodley
0311cc1ba3 boto2: tag more tests with fails_with_subdomain
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2019-09-11 10:31:25 -04:00
zhang Shaowen
8ebc36f05f add test cases for object lock.
Signed-off-by: zhang Shaowen <zhangshaowen@cmss.chinamobile.com>
2019-08-27 10:00:55 -04:00
albIN7
98515a779e startafter or continuation token shouldn't be returned if they are not specified in the request
Signed-off-by: Albin Antony <aantony@redhat.com>
2019-08-22 14:14:33 -04:00
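The behavior this commit describes can be illustrated with a short boto3 sketch (not part of the changeset; the endpoint, credentials, and bucket name below are placeholders): a ListObjectsV2 response should only echo StartAfter or ContinuationToken when the request actually supplied them.

import boto3

# Sketch only: check that ListObjectsV2 does not echo StartAfter /
# ContinuationToken when the request never supplied them.
# Endpoint, credentials, and bucket name are placeholders.
client = boto3.client(
    's3',
    endpoint_url='http://localhost:8000',
    aws_access_key_id='ACCESSKEY',
    aws_secret_access_key='SECRETKEY',
)

resp = client.list_objects_v2(Bucket='example-bucket')  # no StartAfter, no ContinuationToken
assert 'StartAfter' not in resp
assert 'ContinuationToken' not in resp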
Tianshan Qu
07057d2185 add test case for list with delimiter not skip special keys
Signed-off-by: Tianshan Qu <tianshan@xsky.com>
2019-08-16 11:01:17 -04:00
Ali Maredia
6c766eb60b
Merge pull request #285 from soumyakoduri/bucket-name-tests
Fix create_bucket tests as per new bucket naming conventions
2019-08-16 10:57:20 -04:00
Matt Benjamin
f236076632
Merge pull request #294 from linuxbox2/wip-lc-exphdr
lifecycle: expiration header checks do not require @lifecycle_expirat…
2019-07-24 16:49:59 -04:00
Matt Benjamin
a7d4ec68aa lc: remove date delta check from lc header checks
This was prone to false positives.  We still check:
1. the expiration header has valid form
2. the rule-id matches

Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
2019-07-24 14:05:43 -04:00
Matt Benjamin
e0aafba82d lc: add rule to test disjoint object tag filters
Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
2019-07-19 16:24:33 -04:00
Matt Benjamin
0a73ea08de lc: fix lifecycle expiration header tests
Fixes various issues with existing HEAD check and helper methods.
Add an explicit test for GET.

Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
2019-07-19 13:59:49 -04:00
Matt Benjamin
ec4cf0de2b lifecycle: expiration header checks do not require @lifecycle_expiration tag
Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
2019-07-19 09:19:14 -04:00
Casey Bodley
817e67d92e fix syntax error in test_bucket_get_location
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit eaa68f4f77)
2019-07-10 11:45:04 -04:00
Casey Bodley
585809a7ec skip bucket location test if no api_name is configured
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit 250122e17d)
2019-07-10 11:45:04 -04:00
Casey Bodley
9c915ebfdf set 'api_name = default' in sample config
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit 75182b7e03)
2019-07-10 11:45:04 -04:00
Soumya Koduri
86bfe40bd3 Fix create_bucket tests as per new bucket naming conventions
As per amazon s3 spec -
https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-s3-bucket-naming-requirements.html

S3 bucket names should not contain upper-case letters or underscores.
A name cannot end with a dash, or have consecutive periods or dashes adjacent to periods.
Name length shouldn't exceed 63 characters.

These rules are being enforced via
- https://github.com/ceph/ceph/pull/26787

This patch is to update the respective testcases as well.

Note: check_invalid_bucket_name() seems to have been broken. It should
try to create a bucket using an invalid name. That is addressed in this
patch. Because of this, a few testcases (which were incorrectly passing
earlier) now fail in validate_bucket_name. They have been marked
'fails_on_rgw', as was done for test_bucket_create_naming_bad_punctuation.

Signed-off-by: Soumya Koduri <skoduri@redhat.com>
2019-07-02 12:52:42 +05:30
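A minimal validator sketch for the naming rules quoted above (illustrative only, not code from this changeset; the helper name, and the added minimum-length and leading-character checks, are assumptions based on the general S3 rules):

import re

def valid_bucket_name(name):
    # Sketch of the rules quoted in the commit message above (illustrative):
    # lowercase letters, digits, dots and dashes; 3-63 characters; must start
    # and end with a letter or digit; no consecutive periods; no dash adjacent
    # to a period.
    if not 3 <= len(name) <= 63:
        return False
    if not re.fullmatch(r'[a-z0-9][a-z0-9.-]*[a-z0-9]', name):
        return False
    if '..' in name or '.-' in name or '-.' in name:
        return False
    return True

assert valid_bucket_name('my-bucket.example')
assert not valid_bucket_name('Has_Upper_And_Underscore')
assert not valid_bucket_name('ends-with-dash-')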
albIN7
4c335b7aea Added testcases for listobjectsv2
Signed-off-by: Albin Antony <aantony@redhat.com>
2019-07-01 16:18:51 -04:00
Casey Bodley
c3425c7f00 test versioned copy with urlencoded keys
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2019-06-05 16:13:09 -04:00
Matt Benjamin
d7a20d0607 boto{2,3}: rm functional.test_s3.test_lifecycle_rules_conflict
The assumption that there may be only one rule per prefix has
been removed.  The rules specifically tested here--without tag
filters--do overlap but are not in conflict.

I am proposing to remove altogether rather than writing new
deconfliction logic.

Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
2019-05-24 12:33:55 -04:00
Ali Maredia
9d07887011 remove boto2 tests already passing in boto3
Signed-off-by: Ali Maredia <amaredia@redhat.com>
2019-05-02 11:27:20 -04:00
Ali Maredia
20c742db49 boto3: tag boto3 tests that are failing on ssl as 'fails_on_rgw'
Signed-off-by: Ali Maredia <amaredia@redhat.com>
(cherry picked from commit 5db9eb059d)
2019-04-23 11:14:33 -04:00
Casey Bodley
0454190382 boto3: verify certificates for ssl connections
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit 0e04dcd6aa)
2019-04-23 11:14:33 -04:00
Casey Bodley
a864020eac boto3: _get_post_url() uses config endpoint
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit 7f49adda30)
2019-04-23 11:14:33 -04:00
Casey Bodley
b1d6dc927c boto3: use https:// for secure endpoints
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit ac18365f75)
2019-04-23 11:14:33 -04:00
Casey Bodley
bce59979ca boto3: use getboolean() for is_secure
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit 774d40d114)
2019-04-23 11:14:33 -04:00
Matt Benjamin
7c067c346b boto3_s3tests: add tests for lifecycle expiration header
The response to a successful PUT or HEAD (or GET) of/on
an object matching any enabled lifecycle expiration rule should
include an x-amz-expiration header for the object.  The
x-amz-expiration header consists of an expiry-date, rule-id tuple
indicating the earliest matching rule and the corresponding
expiration date for the object.

Also, while at it, add test for lifecycle expiration rule with
'Days' : 0, which should both apply and...work.

Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
(cherry picked from commit 2d12e98ce0)
2019-04-23 11:14:33 -04:00
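A small illustrative sketch (not taken from this changeset) of splitting the x-amz-expiration header into its expiry-date/rule-id tuple as described above; the header format follows the AWS documentation, and the helper name is made up:

import re

def parse_expiration_header(value):
    # Split an x-amz-expiration header of the form
    #   expiry-date="Fri, 23 Dec 2012 00:00:00 GMT", rule-id="rule-1"
    # into its (expiry-date, rule-id) tuple.
    m = re.match(r'expiry-date="(.+)", rule-id="(.+)"', value)
    assert m, 'unexpected x-amz-expiration format: %r' % value
    return m.group(1), m.group(2)

expiry, rule_id = parse_expiration_header(
    'expiry-date="Fri, 23 Dec 2012 00:00:00 GMT", rule-id="rule-1"')
assert rule_id == 'rule-1'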
Ali Maredia
baec994fc6 boto3: Foundation laid for boto3 tests
Added sample config file for boto3 and vstart.sh

Modified setup.py, requirements.txt, and README

Added an rgw_interactive.py to use interactively
with vstart.sh and python -i

Ported 400+ tests over to boto3 from functional/test_s3.py

Signed-off-by: Ali Maredia <amaredia@redhat.com>
(cherry picked from commit 67f4f5d356)
2019-04-23 11:08:04 -04:00
Kefu Chai
80c3f690ce bootstrap: add libffi-dev for gevent
because gevent 1.3.2 requires cffi>=1.11.5, and cffi requires
libffi-dev to build.

Signed-off-by: Kefu Chai <kchai@redhat.com>
(cherry picked from commit a2b3e4ef77)
2018-05-31 00:30:53 +08:00
53 changed files with 9916 additions and 13649 deletions

1
.gitignore vendored

@@ -10,6 +10,5 @@
 /*.egg-info
 /virtualenv
-/venv
 config.yaml


@ -6,10 +6,14 @@ This is a set of unofficial Amazon AWS S3 compatibility
tests, that can be useful to people implementing software tests, that can be useful to people implementing software
that exposes an S3-like API. The tests use the Boto2 and Boto3 libraries. that exposes an S3-like API. The tests use the Boto2 and Boto3 libraries.
The tests use the Tox tool. To get started, ensure you have the ``tox`` The tests use the Nose test framework. To get started, ensure you have
software installed; e.g. on Debian/Ubuntu:: the ``virtualenv`` software installed; e.g. on Debian/Ubuntu::
sudo apt-get install tox sudo apt-get install python3-virtualenv
and then run::
./bootstrap
You will need to create a configuration file with the location of the You will need to create a configuration file with the location of the
service and two different credentials. A sample configuration file named service and two different credentials. A sample configuration file named
@ -18,25 +22,29 @@ used to run the s3 tests on a Ceph cluster started with vstart.
Once you have that file copied and edited, you can run the tests with:: Once you have that file copied and edited, you can run the tests with::
S3TEST_CONF=your.conf tox S3TEST_CONF=your.conf ./virtualenv/bin/nosetests
You can specify which directory of tests to run:: You can specify which directory of tests to run::
S3TEST_CONF=your.conf tox -- s3tests_boto3/functional S3TEST_CONF=your.conf ./virtualenv/bin/nosetests s3tests.functional
You can specify which file of tests to run:: You can specify which file of tests to run::
S3TEST_CONF=your.conf tox s3tests_boto3/functional/test_s3.py S3TEST_CONF=your.conf ./virtualenv/bin/nosetests s3tests.functional.test_s3
You can specify which test to run:: You can specify which test to run::
S3TEST_CONF=your.conf tox s3tests_boto3/functional/test_s3.py::test_bucket_list_empty S3TEST_CONF=your.conf ./virtualenv/bin/nosetests s3tests.functional.test_s3:test_bucket_list_empty
To gather a list of tests being run, use the flags::
-v --collect-only
Some tests have attributes set based on their current reliability and Some tests have attributes set based on their current reliability and
things like AWS not enforcing their spec strictly. You can filter tests things like AWS not enforcing their spec strictly. You can filter tests
based on their attributes:: based on their attributes::
S3TEST_CONF=aws.conf tox -- -m 'not fails_on_aws' S3TEST_CONF=aws.conf ./virtualenv/bin/nosetests -a '!fails_on_aws'
Most of the tests have both Boto3 and Boto2 versions. Tests written in Most of the tests have both Boto3 and Boto2 versions. Tests written in
Boto2 are in the ``s3tests`` directory. Tests written in Boto3 are Boto2 are in the ``s3tests`` directory. Tests written in Boto3 are
@ -44,59 +52,5 @@ located in the ``s3test_boto3`` directory.
You can run only the boto3 tests with:: You can run only the boto3 tests with::
S3TEST_CONF=your.conf tox -- s3tests_boto3/functional S3TEST_CONF=your.conf ./virtualenv/bin/nosetests -v -s -A 'not fails_on_rgw' s3tests_boto3.functional
========================
STS compatibility tests
========================
This section contains some basic tests for the AssumeRole, GetSessionToken and AssumeRoleWithWebIdentity APIs. The test file is located under ``s3tests_boto3/functional``.
To run the STS tests, the vstart cluster should be started with the following parameter (in addition to any parameters already used with it)::
vstart.sh -o rgw_sts_key=abcdefghijklmnop -o rgw_s3_auth_use_sts=true
Note that the ``rgw_sts_key`` can be set to anything that is 128 bits in length.
After the cluster is up the following command should be executed::
radosgw-admin caps add --tenant=testx --uid="9876543210abcdef0123456789abcdef0123456789abcdef0123456789abcdef" --caps="roles=*"
You can run only the sts tests (all three APIs) with::
S3TEST_CONF=your.conf tox s3tests_boto3/functional/test_sts.py
You can filter tests based on their attributes. There is an attribute named ``test_of_sts`` to run the AssumeRole and GetSessionToken tests and ``webidentity_test`` to run the AssumeRoleWithWebIdentity tests. If you want to execute only the ``test_of_sts`` tests, you can apply that filter as below::
S3TEST_CONF=your.conf tox -- -m test_of_sts s3tests_boto3/functional/test_sts.py
For running ``webidentity_test`` you'll need to have Keycloak running.
In order to run any STS test you'll need to add an "iam" section to the config file. For further reference on how your config file should look, check ``s3tests.conf.SAMPLE``.
========================
IAM policy tests
========================
This is a set of IAM policy tests.
This section covers tests for user policies such as Put, Get, List, Delete, user policies with s3 actions, conflicting user policies, etc.
These tests use the Boto3 library. Tests are written in the ``s3test_boto3`` directory.
These IAM policy tests use two users with the profile names "iam" and "s3 alt", as mentioned in s3tests.conf.SAMPLE.
If the Ceph cluster is started with vstart, the above two users are created as part of vstart, with the same access key, secret key, etc. as mentioned in s3tests.conf.SAMPLE.
Of those two users, the "iam" user has the capability --caps=user-policy=* and the "s3 alt" user has no capabilities.
Adding the above capability to the "iam" user is also taken care of by vstart (if the Ceph cluster is started with vstart).
To run these tests, create a configuration file with the "iam" and "s3 alt" sections; refer to s3tests.conf.SAMPLE.
Once you have that configuration file copied and edited, you can run all the tests with::
S3TEST_CONF=your.conf tox s3tests_boto3/functional/test_iam.py
You can also specify specific test to run::
S3TEST_CONF=your.conf tox s3tests_boto3/functional/test_iam.py::test_put_user_policy
Some tests have attributes set such as "fails_on_rgw".
You can filter tests based on their attributes::
S3TEST_CONF=your.conf tox -- s3tests_boto3/functional/test_iam.py -m 'not fails_on_rgw'

51
bootstrap Executable file

@ -0,0 +1,51 @@
#!/bin/bash
set -e
if [ -f /etc/debian_version ]; then
for package in python3-pip python3-virtualenv python3-dev libevent-dev libffi-dev libxml2-dev libxslt-dev zlib1g-dev; do
if [ "$(dpkg --status -- $package 2>/dev/null|sed -n 's/^Status: //p')" != "install ok installed" ]; then
# add a space after old values
missing="${missing:+$missing }$package"
fi
done
if [ -n "$missing" ]; then
echo "$0: missing required DEB packages. Installing via sudo." 1>&2
sudo apt-get -y install $missing
fi
elif [ -f /etc/fedora-release ]; then
for package in python3-pip python3-virtualenv python3-devel libevent-devel libffi-devel libxml2-devel libxslt-devel zlib-devel; do
if [ "$(rpm -qa $package 2>/dev/null)" == "" ]; then
missing="${missing:+$missing }$package"
fi
done
if [ -n "$missing" ]; then
echo "$0: missing required RPM packages. Installing via sudo." 1>&2
sudo yum -y install $missing
fi
elif [ -f /etc/redhat-release ]; then
for package in python3-virtualenv python3-devel libevent-devel libffi-devel libxml2-devel libxslt-devel zlib-devel; do
if [ "$(rpm -qa $package 2>/dev/null)" == "" ]; then
missing="${missing:+$missing }$package"
fi
done
if [ -n "$missing" ]; then
echo "$0: missing required RPM packages. Installing via sudo." 1>&2
sudo yum -y install $missing
fi
fi
virtualenv --no-site-packages --distribute virtualenv -p /usr/bin/python3
# avoid pip bugs
./virtualenv/bin/pip install --upgrade pip
# slightly old version of setuptools; newer fails w/ requests 0.14.0
#./virtualenv/bin/pip install setuptools==32.3.1
./virtualenv/bin/pip install -r requirements.txt
# forbid setuptools from using the network because it'll try to use
# easy_install, and we really wanted pip; next line will fail if pip
# requirements.txt does not match setup.py requirements -- sucky but
# good enough for now
./virtualenv/bin/python3 setup.py develop


@ -1,51 +0,0 @@
[pytest]
markers =
abac_test
appendobject
auth_aws2
auth_aws4
auth_common
bucket_policy
bucket_encryption
checksum
cloud_transition
encryption
fails_on_aws
fails_on_dbstore
fails_on_dho
fails_on_mod_proxy_fcgi
fails_on_rgw
fails_on_s3
fails_with_subdomain
group
group_policy
iam_account
iam_cross_account
iam_role
iam_tenant
iam_user
lifecycle
lifecycle_expiration
lifecycle_transition
list_objects_v2
object_lock
role_policy
session_policy
s3select
s3website
s3website_routing_rules
s3website_redirect_location
sns
sse_s3
storage_class
tagging
test_of_sts
token_claims_trust_policy_test
token_principal_tag_role_policy_test
token_request_tag_trust_policy_test
token_resource_tags_test
token_role_tags_test
token_tag_keys_test
user_policy
versioning
webidentity_test

569
request_decision_graph.yml Normal file

@ -0,0 +1,569 @@
#
# FUZZ testing uses a probabilistic grammar to generate
# pseudo-random requests which will be sent to a server
# over long periods of time, with the goal of turning up
# garbage-input and buffer-overflow sensitivities.
#
# Each state ...
# generates/chooses contents for variables
# chooses a next state (from a weighted set of options)
#
# A terminal state is one from which there are no successors,
# at which point a message is generated (from the variables)
# and sent to the server.
#
# The test program doesn't actually know (or care) what
# response should be returned ... since the goal is to
# crash the server.
#
start:
set:
garbage:
- '{random 10-3000 printable}'
- '{random 10-1000 binary}'
garbage_no_whitespace:
- '{random 10-3000 printable_no_whitespace}'
- '{random 10-1000 binary_no_whitespace}'
acl_header:
- 'private'
- 'public-read'
- 'public-read-write'
- 'authenticated-read'
- 'bucket-owner-read'
- 'bucket-owner-full-control'
- '{random 3000 letters}'
- '{random 100-1000 binary_no_whitespace}'
choices:
- bucket
- object
bucket:
set:
urlpath: '/{bucket}'
choices:
- 13 bucket_get
- 8 bucket_put
- 5 bucket_delete
- bucket_garbage_method
bucket_garbage_method:
set:
method:
- '{random 1-100 printable}'
- '{random 10-100 binary}'
bucket:
- '{bucket_readable}'
- '{bucket_not_readable}'
- '{bucket_writable}'
- '{bucket_not_writable}'
- '2 {garbage_no_whitespace}'
choices:
- bucket_get_simple
- bucket_get_filtered
- bucket_get_uploads
- bucket_put_create
- bucket_put_versioning
- bucket_put_simple
bucket_delete:
set:
method: DELETE
bucket:
- '{bucket_writable}'
- '{bucket_not_writable}'
- '2 {garbage_no_whitespace}'
query:
- null
- policy
- website
- '2 {garbage_no_whitespace}'
choices: []
bucket_get:
set:
method: GET
bucket:
- '{bucket_readable}'
- '{bucket_not_readable}'
- '2 {garbage_no_whitespace}'
choices:
- 11 bucket_get_simple
- bucket_get_filtered
- bucket_get_uploads
bucket_get_simple:
set:
query:
- acl
- policy
- location
- logging
- notification
- versions
- requestPayment
- versioning
- website
- '2 {garbage_no_whitespace}'
choices: []
bucket_get_uploads:
set:
delimiter:
- null
- '3 delimiter={garbage_no_whitespace}'
prefix:
- null
- '3 prefix={garbage_no_whitespace}'
key_marker:
- null
- 'key-marker={object_readable}'
- 'key-marker={object_not_readable}'
- 'key-marker={invalid_key}'
- 'key-marker={random 100-1000 printable_no_whitespace}'
max_uploads:
- null
- 'max-uploads={random 1-5 binary_no_whitespace}'
- 'max-uploads={random 1-1000 digits}'
upload_id_marker:
- null
- '3 upload-id-marker={random 0-1000 printable_no_whitespace}'
query:
- 'uploads'
- 'uploads&{delimiter}&{prefix}'
- 'uploads&{max_uploads}&{key_marker}&{upload_id_marker}'
- '2 {garbage_no_whitespace}'
choices: []
bucket_get_filtered:
set:
delimiter:
- 'delimiter={garbage_no_whitespace}'
prefix:
- 'prefix={garbage_no_whitespace}'
marker:
- 'marker={object_readable}'
- 'marker={object_not_readable}'
- 'marker={invalid_key}'
- 'marker={random 100-1000 printable_no_whitespace}'
max_keys:
- 'max-keys={random 1-5 binary_no_whitespace}'
- 'max-keys={random 1-1000 digits}'
query:
- null
- '{delimiter}&{prefix}'
- '{max-keys}&{marker}'
- '2 {garbage_no_whitespace}'
choices: []
bucket_put:
set:
bucket:
- '{bucket_writable}'
- '{bucket_not_writable}'
- '2 {garbage_no_whitespace}'
method: PUT
choices:
- bucket_put_simple
- bucket_put_create
- bucket_put_versioning
bucket_put_create:
set:
body:
- '2 {garbage}'
- '<CreateBucketConfiguration><LocationConstraint>{random 2-10 binary}</LocationConstraint></CreateBucketConfiguration>'
headers:
- ['0-5', 'x-amz-acl', '{acl_header}']
choices: []
bucket_put_versioning:
set:
body:
- '{garbage}'
- '4 <VersioningConfiguration>{versioning_status}{mfa_delete_body}</VersioningConfiguration>'
mfa_delete_body:
- null
- '<Status>{random 2-10 binary}</Status>'
- '<Status>{random 2000-3000 printable}</Status>'
versioning_status:
- null
- '<MfaDelete>{random 2-10 binary}</MfaDelete>'
- '<MfaDelete>{random 2000-3000 printable}</MfaDelete>'
mfa_header:
- '{random 10-1000 printable_no_whitespace} {random 10-1000 printable_no_whitespace}'
headers:
- ['0-1', 'x-amz-mfa', '{mfa_header}']
choices: []
bucket_put_simple:
set:
body:
- '{acl_body}'
- '{policy_body}'
- '{logging_body}'
- '{notification_body}'
- '{request_payment_body}'
- '{website_body}'
acl_body:
- null
- '<AccessControlPolicy>{owner}{acl}</AccessControlPolicy>'
owner:
- null
- '7 <Owner>{id}{display_name}</Owner>'
id:
- null
- '<ID>{random 10-200 binary}</ID>'
- '<ID>{random 1000-3000 printable}</ID>'
display_name:
- null
- '2 <DisplayName>{random 10-200 binary}</DisplayName>'
- '2 <DisplayName>{random 1000-3000 printable}</DisplayName>'
- '2 <DisplayName>{random 10-300 letters}@{random 10-300 letters}.{random 2-4 letters}</DisplayName>'
acl:
- null
- '10 <AccessControlList><Grant>{grantee}{permission}</Grant></AccessControlList>'
grantee:
- null
- '7 <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser">{id}{display_name}</Grantee>'
permission:
- null
- '7 <Permission>{permission_value}</Permission>'
permission_value:
- '2 {garbage}'
- FULL_CONTROL
- WRITE
- WRITE_ACP
- READ
- READ_ACP
policy_body:
- null
- '2 {garbage}'
logging_body:
- null
- '<BucketLoggingStatus xmlns="http://doc.s3.amazonaws.com/2006-03-01" />'
- '<BucketLoggingStatus xmlns="http://doc.s3.amazonaws.com/2006-03-01"><LoggingEnabled>{bucket}{target_prefix}{target_grants}</LoggingEnabled></BucketLoggingStatus>'
target_prefix:
- null
- '<TargetPrefix>{random 10-1000 printable}</TargetPrefix>'
- '<TargetPrefix>{random 10-1000 binary}</TargetPrefix>'
target_grants:
- null
- '10 <TargetGrants><Grant>{grantee}{permission}</Grant></TargetGrants>'
notification_body:
- null
- '<NotificationConfiguration />'
- '2 <NotificationConfiguration><TopicConfiguration>{topic}{event}</TopicConfiguration></NotificationConfiguration>'
topic:
- null
- '2 <Topic>{garbage}</Topic>'
event:
- null
- '<Event>s3:ReducedRedundancyLostObject</Event>'
- '2 <Event>{garbage}</Event>'
request_payment_body:
- null
- '<RequestPaymentConfiguration xlmns="http://s3.amazonaws.com/doc/2006-03-01/"><Payer>{payer}</Payer></RequestPaymentConfiguration>'
payer:
- Requester
- BucketOwner
- '2 {garbage}'
website_body:
- null
- '<WebsiteConfiguration>{index_doc}{error_doc}{routing_rules}</WebsiteConfiguration>'
- '<WebsiteConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">{index_doc}{error_doc}{routing_rules}</WebsiteConfiguration>'
index_doc:
- null
- '<IndexDocument>{filename}</IndexDocument>'
- '<IndexDocument><Suffix>{filename}</Suffix></IndexDocument>'
filename:
- null
- '2 {garbage}'
- '{random 2-10 printable}.html'
- '{random 100-1000 printable}.html'
- '{random 100-1000 printable_no_whitespace}.html'
error_doc:
- null
- '<ErrorDocument>{filename}</ErrorDocument>'
- '<ErrorDocument><Key>{filename}</Key></ErrorDocument>'
routing_rules:
- null
- ['0-10', '<RoutingRules>{routing_rules_content}</RoutingRules>']
routing_rules_content:
- null
- ['0-1000', '<RoutingRule>{routing_rule}</RoutingRule>']
routing_rule:
- null
- ['0-2', '{routing_rule_condition}{routing_rule_redirect}']
routing_rule_condition:
- null
- ['0-10', '<Condition>{KeyPrefixEquals}{HttpErrorCodeReturnedEquals}</Condition>']
KeyPrefixEquals:
- null
- ['0-2', '<KeyPrefixEquals>{filename}</KeyPrefixEquals>']
HttpErrorCodeReturnedEquals:
- null
- ['0-2', '<HttpErrorCodeReturnedEquals>{HttpErrorCode}</HttpErrorCodeReturnedEquals>']
HttpErrorCode:
- null
- '2 {garbage}'
- '{random 1-10 digits}'
- '{random 1-100 printable}'
routing_rule_redirect:
- null
- '{protocol}{hostname}{ReplaceKeyPrefixWith}{ReplaceKeyWith}{HttpRedirectCode}'
protocol:
- null
- '<Protocol>http</Protocol>'
- '<Protocol>https</Protocol>'
- ['1-5', '<Protocol>{garbage}</Protocol>']
- ['1-5', '<Protocol>{filename}</Protocol>']
hostname:
- null
- ['1-5', '<HostHame>{hostname_val}</HostHame>']
- ['1-5', '<HostHame>{garbage}</HostHame>']
hostname_val:
- null
- '{random 1-255 printable_no_whitespace}'
- '{random 1-255 printable}'
- '{random 1-255 punctuation}'
- '{random 1-255 whitespace}'
- '{garbage}'
ReplaceKeyPrefixWith:
- null
- ['1-5', '<ReplaceKeyPrefixWith>{filename}</ReplaceKeyPrefixWith>']
HttpRedirectCode:
- null
- ['1-5', '<HttpRedirectCode>{random 1-10 digits}</HttpRedirectCode>']
- ['1-5', '<HttpRedirectCode>{random 1-100 printable}</HttpRedirectCode>']
- ['1-5', '<HttpRedirectCode>{filename}</HttpRedirectCode>']
choices: []
object:
set:
urlpath: '/{bucket}/{object}'
range_header:
- null
- 'bytes={random 1-2 digits}-{random 1-4 digits}'
- 'bytes={random 1-1000 binary_no_whitespace}'
if_modified_since_header:
- null
- '2 {garbage_no_whitespace}'
if_match_header:
- null
- '2 {garbage_no_whitespace}'
if_none_match_header:
- null
- '2 {garbage_no_whitespace}'
choices:
- object_delete
- object_get
- object_put
- object_head
- object_garbage_method
object_garbage_method:
set:
method:
- '{random 1-100 printable}'
- '{random 10-100 binary}'
bucket:
- '{bucket_readable}'
- '{bucket_not_readable}'
- '{bucket_writable}'
- '{bucket_not_writable}'
- '2 {garbage_no_whitespace}'
object:
- '{object_readable}'
- '{object_not_readable}'
- '{object_writable}'
- '{object_not_writable}'
- '2 {garbage_no_whitespace}'
choices:
- object_get_query
- object_get_head_simple
object_delete:
set:
method: DELETE
bucket:
- '5 {bucket_writable}'
- '{bucket_not_writable}'
- '{garbage_no_whitespace}'
object:
- '{object_writable}'
- '{object_not_writable}'
- '2 {garbage_no_whitespace}'
choices: []
object_get:
set:
method: GET
bucket:
- '5 {bucket_readable}'
- '{bucket_not_readable}'
- '{garbage_no_whitespace}'
object:
- '{object_readable}'
- '{object_not_readable}'
- '{garbage_no_whitespace}'
choices:
- 5 object_get_head_simple
- 2 object_get_query
object_get_query:
set:
query:
- 'torrent'
- 'acl'
choices: []
object_get_head_simple:
set: {}
headers:
- ['0-1', 'range', '{range_header}']
- ['0-1', 'if-modified-since', '{if_modified_since_header}']
- ['0-1', 'if-unmodified-since', '{if_modified_since_header}']
- ['0-1', 'if-match', '{if_match_header}']
- ['0-1', 'if-none-match', '{if_none_match_header}']
choices: []
object_head:
set:
method: HEAD
bucket:
- '5 {bucket_readable}'
- '{bucket_not_readable}'
- '{garbage_no_whitespace}'
object:
- '{object_readable}'
- '{object_not_readable}'
- '{garbage_no_whitespace}'
choices:
- object_get_head_simple
object_put:
set:
method: PUT
bucket:
- '5 {bucket_writable}'
- '{bucket_not_writable}'
- '{garbage_no_whitespace}'
object:
- '{object_writable}'
- '{object_not_writable}'
- '{garbage_no_whitespace}'
cache_control:
- null
- '{garbage_no_whitespace}'
- 'no-cache'
content_disposition:
- null
- '{garbage_no_whitespace}'
content_encoding:
- null
- '{garbage_no_whitespace}'
content_length:
- '{random 1-20 digits}'
- '{garbage_no_whitespace}'
content_md5:
- null
- '{garbage_no_whitespace}'
content_type:
- null
- 'binary/octet-stream'
- '{garbage_no_whitespace}'
expect:
- null
- '100-continue'
- '{garbage_no_whitespace}'
expires:
- null
- '{random 1-10000000 digits}'
- '{garbage_no_whitespace}'
meta_key:
- null
- 'foo'
- '{garbage_no_whitespace}'
meta_value:
- null
- '{garbage_no_whitespace}'
choices:
- object_put_simple
- object_put_acl
- object_put_copy
object_put_simple:
set: {}
headers:
- ['0-1', 'cache-control', '{cache_control}']
- ['0-1', 'content-disposition', '{content_disposition}']
- ['0-1', 'content-encoding', '{content_encoding}']
- ['0-1', 'content-length', '{content_length}']
- ['0-1', 'content-md5', '{content_md5}']
- ['0-1', 'content-type', '{content_type}']
- ['0-1', 'expect', '{expect}']
- ['0-1', 'expires', '{expires}']
- ['0-1', 'x-amz-acl', '{acl_header}']
- ['0-6', 'x-amz-meta-{meta_key}', '{meta_value}']
choices: []
object_put_acl:
set:
query: 'acl'
body:
- null
- '2 {garbage}'
- '<AccessControlPolicy>{owner}{acl}</AccessControlPolicy>'
owner:
- null
- '7 <Owner>{id}{display_name}</Owner>'
id:
- null
- '<ID>{random 10-200 binary}</ID>'
- '<ID>{random 1000-3000 printable}</ID>'
display_name:
- null
- '2 <DisplayName>{random 10-200 binary}</DisplayName>'
- '2 <DisplayName>{random 1000-3000 printable}</DisplayName>'
- '2 <DisplayName>{random 10-300 letters}@{random 10-300 letters}.{random 2-4 letters}</DisplayName>'
acl:
- null
- '10 <AccessControlList><Grant>{grantee}{permission}</Grant></AccessControlList>'
grantee:
- null
- '7 <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser">{id}{display_name}</Grantee>'
permission:
- null
- '7 <Permission>{permission_value}</Permission>'
permission_value:
- '2 {garbage}'
- FULL_CONTROL
- WRITE
- WRITE_ACP
- READ
- READ_ACP
headers:
- ['0-1', 'cache-control', '{cache_control}']
- ['0-1', 'content-disposition', '{content_disposition}']
- ['0-1', 'content-encoding', '{content_encoding}']
- ['0-1', 'content-length', '{content_length}']
- ['0-1', 'content-md5', '{content_md5}']
- ['0-1', 'content-type', '{content_type}']
- ['0-1', 'expect', '{expect}']
- ['0-1', 'expires', '{expires}']
- ['0-1', 'x-amz-acl', '{acl_header}']
choices: []
object_put_copy:
set: {}
headers:
- ['1-1', 'x-amz-copy-source', '{source_object}']
- ['0-1', 'x-amz-acl', '{acl_header}']
- ['0-1', 'x-amz-metadata-directive', '{metadata_directive}']
- ['0-1', 'x-amz-copy-source-if-match', '{if_match_header}']
- ['0-1', 'x-amz-copy-source-if-none-match', '{if_none_match_header}']
- ['0-1', 'x-amz-copy-source-if-modified-since', '{if_modified_since_header}']
- ['0-1', 'x-amz-copy-source-if-unmodified-since', '{if_modified_since_header}']
choices: []


@@ -1,15 +1,14 @@
 PyYAML
+setuptools
+nose >=1.0.0
 boto >=2.6.0
 boto3 >=1.0.0
-# botocore-1.28 broke v2 signatures, see https://tracker.ceph.com/issues/58059
-botocore <1.28.0
-munch >=2.0.0
+bunch >=1.0.0
 # 0.14 switches to libev, that means bootstrap needs to change too
 gevent >=1.0
 isodate >=0.4.4
-requests >=2.23.0
-pytz
+requests >=0.14.0
+pytz >=2011k
+ordereddict
 httplib2
 lxml
-pytest
-tox


@ -10,23 +10,12 @@ port = 8000
## say "False" to disable TLS ## say "False" to disable TLS
is_secure = False is_secure = False
## say "False" to disable SSL Verify
ssl_verify = False
[fixtures] [fixtures]
## all the buckets created will start with this prefix; ## all the buckets created will start with this prefix;
## {random} will be filled with random characters to pad ## {random} will be filled with random characters to pad
## the prefix to 30 characters long, and avoid collisions ## the prefix to 30 characters long, and avoid collisions
bucket prefix = yournamehere-{random}- bucket prefix = yournamehere-{random}-
# all the iam account resources (users, roles, etc) created
# will start with this name prefix
iam name prefix = s3-tests-
# all the iam account resources (users, roles, etc) created
# will start with this path prefix
iam path prefix = /s3-tests/
[s3 main] [s3 main]
# main display_name set in vstart.sh # main display_name set in vstart.sh
display_name = M. Tester display_name = M. Tester
@ -49,12 +38,6 @@ secret_key = h7GhxuBLTrlhVUyxSPUKUV8r/2EI4ngqJxD7iBdBYLhwluN30JaT3Q==
## replace with key id obtained when secret is created, or delete if KMS not tested ## replace with key id obtained when secret is created, or delete if KMS not tested
#kms_keyid = 01234567-89ab-cdef-0123-456789abcdef #kms_keyid = 01234567-89ab-cdef-0123-456789abcdef
## Storage classes
#storage_classes = "LUKEWARM, FROZEN"
## Lifecycle debug interval (default: 10)
#lc_debug_interval = 20
[s3 alt] [s3 alt]
# alt display_name set in vstart.sh # alt display_name set in vstart.sh
display_name = john.doe display_name = john.doe
@ -70,37 +53,6 @@ access_key = NOPQRSTUVWXYZABCDEFG
# alt AWS secret key set in vstart.sh # alt AWS secret key set in vstart.sh
secret_key = nopqrstuvwxyzabcdefghijklmnabcdefghijklm secret_key = nopqrstuvwxyzabcdefghijklmnabcdefghijklm
#[s3 cloud]
## to run the testcases with "cloud_transition" attribute.
## Note: the waiting time may have to be tweaked depending on
## the I/O latency to the cloud endpoint.
## host set for cloud endpoint
# host = localhost
## port set for cloud endpoint
# port = 8001
## say "False" to disable TLS
# is_secure = False
## cloud endpoint credentials
# access_key = 0555b35654ad1656d804
# secret_key = h7GhxuBLTrlhVUyxSPUKUV8r/2EI4ngqJxD7iBdBYLhwluN30JaT3Q==
## storage class configured as cloud tier on local rgw server
# cloud_storage_class = CLOUDTIER
## Below are optional -
## Above configured cloud storage class config options
# retain_head_object = false
# target_storage_class = Target_SC
# target_path = cloud-bucket
## another regular storage class to test multiple transition rules,
# storage_class = S1
[s3 tenant] [s3 tenant]
# tenant display_name set in vstart.sh # tenant display_name set in vstart.sh
display_name = testx$tenanteduser display_name = testx$tenanteduser
@ -116,56 +68,3 @@ secret_key = opqrstuvwxyzabcdefghijklmnopqrstuvwxyzab
# tenant email set in vstart.sh # tenant email set in vstart.sh
email = tenanteduser@example.com email = tenanteduser@example.com
# tenant name
tenant = testx
#following section needs to be added for all sts-tests
[iam]
#used for iam operations in sts-tests
#email from vstart.sh
email = s3@example.com
#user_id from vstart.sh
user_id = 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
#access_key from vstart.sh
access_key = ABCDEFGHIJKLMNOPQRST
#secret_key vstart.sh
secret_key = abcdefghijklmnopqrstuvwxyzabcdefghijklmn
#display_name from vstart.sh
display_name = youruseridhere
# iam account root user for iam_account tests
[iam root]
access_key = AAAAAAAAAAAAAAAAAAaa
secret_key = aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
user_id = RGW11111111111111111
email = account1@ceph.com
# iam account root user in a different account than [iam root]
[iam alt root]
access_key = BBBBBBBBBBBBBBBBBBbb
secret_key = bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
user_id = RGW22222222222222222
email = account2@ceph.com
#following section needs to be added when you want to run Assume Role With Webidentity test
[webidentity]
#used for assume role with web identity test in sts-tests
#all parameters will be obtained from ceph/qa/tasks/keycloak.py
token=<access_token>
aud=<obtained after introspecting token>
sub=<obtained after introspecting token>
azp=<obtained after introspecting token>
user_token=<access token for a user, with attribute Department=[Engineering, Marketing>]
thumbprint=<obtained from x509 certificate>
KC_REALM=<name of the realm>


142
s3tests/analysis/rwstats.py Normal file

@ -0,0 +1,142 @@
#!/usr/bin/python3
import sys
import os
import yaml
import optparse
NANOSECONDS = int(1e9)
# Output stats in a format similar to siege
# see http://www.joedog.org/index/siege-home
OUTPUT_FORMAT = """Stats for type: [{type}]
Transactions: {trans:>11} hits
Availability: {avail:>11.2f} %
Elapsed time: {elapsed:>11.2f} secs
Data transferred: {data:>11.2f} MB
Response time: {resp_time:>11.2f} secs
Transaction rate: {trans_rate:>11.2f} trans/sec
Throughput: {data_rate:>11.2f} MB/sec
Concurrency: {conc:>11.2f}
Successful transactions: {trans_success:>11}
Failed transactions: {trans_fail:>11}
Longest transaction: {trans_long:>11.2f}
Shortest transaction: {trans_short:>11.2f}
"""
def parse_options():
usage = "usage: %prog [options]"
parser = optparse.OptionParser(usage=usage)
parser.add_option(
"-f", "--file", dest="input", metavar="FILE",
help="Name of input YAML file. Default uses sys.stdin")
parser.add_option(
"-v", "--verbose", dest="verbose", action="store_true",
help="Enable verbose output")
(options, args) = parser.parse_args()
if not options.input and os.isatty(sys.stdin.fileno()):
parser.error("option -f required if no data is provided "
"in stdin")
return (options, args)
def main():
(options, args) = parse_options()
total = {}
durations = {}
min_time = {}
max_time = {}
errors = {}
success = {}
calculate_stats(options, total, durations, min_time, max_time, errors,
success)
print_results(total, durations, min_time, max_time, errors, success)
def calculate_stats(options, total, durations, min_time, max_time, errors,
success):
print('Calculating statistics...')
f = sys.stdin
if options.input:
f = file(options.input, 'r')
for item in yaml.safe_load_all(f):
type_ = item.get('type')
if type_ not in ('r', 'w'):
continue # ignore any invalid items
if 'error' in item:
errors[type_] = errors.get(type_, 0) + 1
continue # skip rest of analysis for this item
else:
success[type_] = success.get(type_, 0) + 1
# parse the item
data_size = item['chunks'][-1][0]
duration = item['duration']
start = item['start']
end = start + duration / float(NANOSECONDS)
if options.verbose:
print("[{type}] POSIX time: {start:>18.2f} - {end:<18.2f} " \
"{data:>11.2f} KB".format(
type=type_,
start=start,
end=end,
data=data_size / 1024.0, # convert to KB
))
# update time boundaries
prev = min_time.setdefault(type_, start)
if start < prev:
min_time[type_] = start
prev = max_time.setdefault(type_, end)
if end > prev:
max_time[type_] = end
# save the duration
if type_ not in durations:
durations[type_] = []
durations[type_].append(duration)
# add to running totals
total[type_] = total.get(type_, 0) + data_size
def print_results(total, durations, min_time, max_time, errors, success):
for type_ in total.keys():
trans_success = success.get(type_, 0)
trans_fail = errors.get(type_, 0)
trans = trans_success + trans_fail
avail = trans_success * 100.0 / trans
elapsed = max_time[type_] - min_time[type_]
data = total[type_] / 1024.0 / 1024.0 # convert to MB
resp_time = sum(durations[type_]) / float(NANOSECONDS) / \
len(durations[type_])
trans_rate = trans / elapsed
data_rate = data / elapsed
conc = trans_rate * resp_time
trans_long = max(durations[type_]) / float(NANOSECONDS)
trans_short = min(durations[type_]) / float(NANOSECONDS)
print(OUTPUT_FORMAT.format(
type=type_,
trans_success=trans_success,
trans_fail=trans_fail,
trans=trans,
avail=avail,
elapsed=elapsed,
data=data,
resp_time=resp_time,
trans_rate=trans_rate,
data_rate=data_rate,
conc=conc,
trans_long=trans_long,
trans_short=trans_short,
))
if __name__ == '__main__':
main()


@ -1,5 +1,5 @@
import boto.s3.connection import boto.s3.connection
import munch import bunch
import itertools import itertools
import os import os
import random import random
@ -11,8 +11,8 @@ from lxml import etree
from doctest import Example from doctest import Example
from lxml.doctestcompare import LXMLOutputChecker from lxml.doctestcompare import LXMLOutputChecker
s3 = munch.Munch() s3 = bunch.Bunch()
config = munch.Munch() config = bunch.Bunch()
prefix = '' prefix = ''
bucket_counter = itertools.count(1) bucket_counter = itertools.count(1)
@ -73,7 +73,7 @@ def nuke_bucket(bucket):
pass pass
def nuke_prefixed_buckets(): def nuke_prefixed_buckets():
for name, conn in list(s3.items()): for name, conn in s3.items():
print('Cleaning buckets from connection {name}'.format(name=name)) print('Cleaning buckets from connection {name}'.format(name=name))
for bucket in conn.get_all_buckets(): for bucket in conn.get_all_buckets():
if bucket.name.startswith(prefix): if bucket.name.startswith(prefix):
@ -83,10 +83,10 @@ def nuke_prefixed_buckets():
print('Done with cleanup of test buckets.') print('Done with cleanup of test buckets.')
def read_config(fp): def read_config(fp):
config = munch.Munch() config = bunch.Bunch()
g = yaml.safe_load_all(fp) g = yaml.safe_load_all(fp)
for new in g: for new in g:
config.update(munch.Munchify(new)) config.update(bunch.bunchify(new))
return config return config
def connect(conf): def connect(conf):
@ -97,7 +97,7 @@ def connect(conf):
access_key='aws_access_key_id', access_key='aws_access_key_id',
secret_key='aws_secret_access_key', secret_key='aws_secret_access_key',
) )
kwargs = dict((mapping[k],v) for (k,v) in conf.items() if k in mapping) kwargs = dict((mapping[k],v) for (k,v) in conf.iteritems() if k in mapping)
#process calling_format argument #process calling_format argument
calling_formats = dict( calling_formats = dict(
ordinary=boto.s3.connection.OrdinaryCallingFormat(), ordinary=boto.s3.connection.OrdinaryCallingFormat(),
@ -105,7 +105,7 @@ def connect(conf):
vhost=boto.s3.connection.VHostCallingFormat(), vhost=boto.s3.connection.VHostCallingFormat(),
) )
kwargs['calling_format'] = calling_formats['ordinary'] kwargs['calling_format'] = calling_formats['ordinary']
if 'calling_format' in conf: if conf.has_key('calling_format'):
raw_calling_format = conf['calling_format'] raw_calling_format = conf['calling_format']
try: try:
kwargs['calling_format'] = calling_formats[raw_calling_format] kwargs['calling_format'] = calling_formats[raw_calling_format]
@ -146,7 +146,7 @@ def setup():
raise RuntimeError("Empty Prefix! Aborting!") raise RuntimeError("Empty Prefix! Aborting!")
defaults = config.s3.defaults defaults = config.s3.defaults
for section in list(config.s3.keys()): for section in config.s3.keys():
if section == 'defaults': if section == 'defaults':
continue continue
@ -258,10 +258,9 @@ def with_setup_kwargs(setup, teardown=None):
# yield _test_gen # yield _test_gen
def trim_xml(xml_str): def trim_xml(xml_str):
p = etree.XMLParser(encoding="utf-8", remove_blank_text=True) p = etree.XMLParser(remove_blank_text=True)
xml_str = bytes(xml_str, "utf-8")
elem = etree.XML(xml_str, parser=p) elem = etree.XML(xml_str, parser=p)
return etree.tostring(elem, encoding="unicode") return etree.tostring(elem)
def normalize_xml(xml, pretty_print=True): def normalize_xml(xml, pretty_print=True):
if xml is None: if xml is None:
@ -283,7 +282,7 @@ def normalize_xml(xml, pretty_print=True):
for parent in root.xpath('//*[./*]'): # Search for parent elements for parent in root.xpath('//*[./*]'): # Search for parent elements
parent[:] = sorted(parent,key=lambda x: x.tag) parent[:] = sorted(parent,key=lambda x: x.tag)
xmlstr = etree.tostring(root, encoding="unicode", pretty_print=pretty_print) xmlstr = etree.tostring(root, encoding="utf-8", xml_declaration=True, pretty_print=pretty_print)
# there are two different DTD URIs # there are two different DTD URIs
xmlstr = re.sub(r'xmlns="[^"]+"', 'xmlns="s3"', xmlstr) xmlstr = re.sub(r'xmlns="[^"]+"', 'xmlns="s3"', xmlstr)
xmlstr = re.sub(r'xmlns=\'[^\']+\'', 'xmlns="s3"', xmlstr) xmlstr = re.sub(r'xmlns=\'[^\']+\'', 'xmlns="s3"', xmlstr)


@ -0,0 +1,5 @@
from boto.auth_handler import AuthHandler
class AnonymousAuthHandler(AuthHandler):
def add_auth(self, http_request, **kwargs):
return # Nothing to do for anonymous access!


@ -1,21 +1,21 @@
from __future__ import print_function
import sys import sys
import configparser import ConfigParser
import boto.exception import boto.exception
import boto.s3.connection import boto.s3.connection
import munch import bunch
import itertools import itertools
import os import os
import random import random
import string import string
import pytest from httplib import HTTPConnection, HTTPSConnection
from http.client import HTTPConnection, HTTPSConnection from urlparse import urlparse
from urllib.parse import urlparse
from .utils import region_sync_meta from .utils import region_sync_meta
s3 = munch.Munch() s3 = bunch.Bunch()
config = munch.Munch() config = bunch.Bunch()
targets = munch.Munch() targets = bunch.Bunch()
# this will be assigned by setup() # this will be assigned by setup()
prefix = None prefix = None
@ -69,7 +69,7 @@ def nuke_prefixed_buckets_on_conn(prefix, name, conn):
if bucket.name.startswith(prefix): if bucket.name.startswith(prefix):
print('Cleaning bucket {bucket}'.format(bucket=bucket)) print('Cleaning bucket {bucket}'.format(bucket=bucket))
success = False success = False
for i in range(2): for i in xrange(2):
try: try:
try: try:
iterator = iter(bucket.list_versions()) iterator = iter(bucket.list_versions())
@ -91,13 +91,7 @@ def nuke_prefixed_buckets_on_conn(prefix, name, conn):
)) ))
# key.set_canned_acl('private') # key.set_canned_acl('private')
bucket.delete_key(key.name, version_id = key.version_id) bucket.delete_key(key.name, version_id = key.version_id)
try: bucket.delete()
bucket.delete()
except boto.exception.S3ResponseError as e:
# if DELETE times out, the retry may see NoSuchBucket
if e.error_code != 'NoSuchBucket':
raise e
pass
success = True success = True
except boto.exception.S3ResponseError as e: except boto.exception.S3ResponseError as e:
if e.error_code != 'AccessDenied': if e.error_code != 'AccessDenied':
@ -116,12 +110,12 @@ def nuke_prefixed_buckets_on_conn(prefix, name, conn):
def nuke_prefixed_buckets(prefix): def nuke_prefixed_buckets(prefix):
# If no regions are specified, use the simple method # If no regions are specified, use the simple method
if targets.main.master == None: if targets.main.master == None:
for name, conn in list(s3.items()): for name, conn in s3.items():
print('Deleting buckets on {name}'.format(name=name)) print('Deleting buckets on {name}'.format(name=name))
nuke_prefixed_buckets_on_conn(prefix, name, conn) nuke_prefixed_buckets_on_conn(prefix, name, conn)
else: else:
# First, delete all buckets on the master connection # First, delete all buckets on the master connection
for name, conn in list(s3.items()): for name, conn in s3.items():
if conn == targets.main.master.connection: if conn == targets.main.master.connection:
print('Deleting buckets on {name} (master)'.format(name=name)) print('Deleting buckets on {name} (master)'.format(name=name))
nuke_prefixed_buckets_on_conn(prefix, name, conn) nuke_prefixed_buckets_on_conn(prefix, name, conn)
@ -131,7 +125,7 @@ def nuke_prefixed_buckets(prefix):
print('region-sync in nuke_prefixed_buckets') print('region-sync in nuke_prefixed_buckets')
# Now delete remaining buckets on any other connection # Now delete remaining buckets on any other connection
for name, conn in list(s3.items()): for name, conn in s3.items():
if conn != targets.main.master.connection: if conn != targets.main.master.connection:
print('Deleting buckets on {name} (non-master)'.format(name=name)) print('Deleting buckets on {name} (non-master)'.format(name=name))
nuke_prefixed_buckets_on_conn(prefix, name, conn) nuke_prefixed_buckets_on_conn(prefix, name, conn)
@ -149,46 +143,46 @@ class TargetConfig:
self.sync_meta_wait = 0 self.sync_meta_wait = 0
try: try:
self.api_name = cfg.get(section, 'api_name') self.api_name = cfg.get(section, 'api_name')
except (configparser.NoSectionError, configparser.NoOptionError): except (ConfigParser.NoSectionError, ConfigParser.NoOptionError):
pass pass
try: try:
self.port = cfg.getint(section, 'port') self.port = cfg.getint(section, 'port')
except configparser.NoOptionError: except ConfigParser.NoOptionError:
pass pass
try: try:
self.host=cfg.get(section, 'host') self.host=cfg.get(section, 'host')
except configparser.NoOptionError: except ConfigParser.NoOptionError:
raise RuntimeError( raise RuntimeError(
'host not specified for section {s}'.format(s=section) 'host not specified for section {s}'.format(s=section)
) )
try: try:
self.is_master=cfg.getboolean(section, 'is_master') self.is_master=cfg.getboolean(section, 'is_master')
except configparser.NoOptionError: except ConfigParser.NoOptionError:
pass pass
try: try:
self.is_secure=cfg.getboolean(section, 'is_secure') self.is_secure=cfg.getboolean(section, 'is_secure')
except configparser.NoOptionError: except ConfigParser.NoOptionError:
pass pass
try: try:
raw_calling_format = cfg.get(section, 'calling_format') raw_calling_format = cfg.get(section, 'calling_format')
except configparser.NoOptionError: except ConfigParser.NoOptionError:
raw_calling_format = 'ordinary' raw_calling_format = 'ordinary'
try: try:
self.sync_agent_addr = cfg.get(section, 'sync_agent_addr') self.sync_agent_addr = cfg.get(section, 'sync_agent_addr')
except (configparser.NoSectionError, configparser.NoOptionError): except (ConfigParser.NoSectionError, ConfigParser.NoOptionError):
pass pass
try: try:
self.sync_agent_port = cfg.getint(section, 'sync_agent_port') self.sync_agent_port = cfg.getint(section, 'sync_agent_port')
except (configparser.NoSectionError, configparser.NoOptionError): except (ConfigParser.NoSectionError, ConfigParser.NoOptionError):
pass pass
try: try:
self.sync_meta_wait = cfg.getint(section, 'sync_meta_wait') self.sync_meta_wait = cfg.getint(section, 'sync_meta_wait')
except (configparser.NoSectionError, configparser.NoOptionError): except (ConfigParser.NoSectionError, ConfigParser.NoOptionError):
pass pass
@ -208,7 +202,7 @@ class TargetConnection:
class RegionsInfo: class RegionsInfo:
def __init__(self): def __init__(self):
self.m = munch.Munch() self.m = bunch.Bunch()
self.master = None self.master = None
self.secondaries = [] self.secondaries = []
@ -226,21 +220,21 @@ class RegionsInfo:
return self.m[name] return self.m[name]
def get(self): def get(self):
return self.m return self.m
def items(self): def iteritems(self):
return self.m.items() return self.m.iteritems()
regions = RegionsInfo() regions = RegionsInfo()
class RegionsConn: class RegionsConn:
def __init__(self): def __init__(self):
self.m = munch.Munch() self.m = bunch.Bunch()
self.default = None self.default = None
self.master = None self.master = None
self.secondaries = [] self.secondaries = []
def items(self): def iteritems(self):
return self.m.items() return self.m.iteritems()
def set_default(self, conn): def set_default(self, conn):
self.default = conn self.default = conn
@ -260,7 +254,7 @@ _multiprocess_can_split_ = True
def setup(): def setup():
cfg = configparser.RawConfigParser() cfg = ConfigParser.RawConfigParser()
try: try:
path = os.environ['S3TEST_CONF'] path = os.environ['S3TEST_CONF']
except KeyError: except KeyError:
@ -268,7 +262,8 @@ def setup():
'To run tests, point environment ' 'To run tests, point environment '
+ 'variable S3TEST_CONF to a config file.', + 'variable S3TEST_CONF to a config file.',
) )
cfg.read(path) with file(path) as f:
cfg.readfp(f)
global prefix global prefix
global targets global targets
@ -276,19 +271,19 @@ def setup():
try: try:
template = cfg.get('fixtures', 'bucket prefix') template = cfg.get('fixtures', 'bucket prefix')
except (configparser.NoSectionError, configparser.NoOptionError): except (ConfigParser.NoSectionError, ConfigParser.NoOptionError):
template = 'test-{random}-' template = 'test-{random}-'
prefix = choose_bucket_prefix(template=template) prefix = choose_bucket_prefix(template=template)
try: try:
slow_backend = cfg.getboolean('fixtures', 'slow backend') slow_backend = cfg.getboolean('fixtures', 'slow backend')
except (configparser.NoSectionError, configparser.NoOptionError): except (ConfigParser.NoSectionError, ConfigParser.NoOptionError):
slow_backend = False slow_backend = False
# pull the default_region out, if it exists # pull the default_region out, if it exists
try: try:
default_region = cfg.get('fixtures', 'default_region') default_region = cfg.get('fixtures', 'default_region')
except (configparser.NoSectionError, configparser.NoOptionError): except (ConfigParser.NoSectionError, ConfigParser.NoOptionError):
default_region = None default_region = None
s3.clear() s3.clear()
@ -314,7 +309,7 @@ def setup():
if len(regions.get()) == 0: if len(regions.get()) == 0:
regions.add("default", TargetConfig(cfg, section)) regions.add("default", TargetConfig(cfg, section))
config[name] = munch.Munch() config[name] = bunch.Bunch()
for var in [ for var in [
'user_id', 'user_id',
'display_name', 'display_name',
@ -324,16 +319,15 @@ def setup():
'port', 'port',
'is_secure', 'is_secure',
'kms_keyid', 'kms_keyid',
'storage_classes',
]: ]:
try: try:
config[name][var] = cfg.get(section, var) config[name][var] = cfg.get(section, var)
except configparser.NoOptionError: except ConfigParser.NoOptionError:
pass pass
targets[name] = RegionsConn() targets[name] = RegionsConn()
for (k, conf) in regions.items(): for (k, conf) in regions.iteritems():
conn = boto.s3.connection.S3Connection( conn = boto.s3.connection.S3Connection(
aws_access_key_id=cfg.get(section, 'access_key'), aws_access_key_id=cfg.get(section, 'access_key'),
aws_secret_access_key=cfg.get(section, 'secret_key'), aws_secret_access_key=cfg.get(section, 'secret_key'),
@ -371,15 +365,6 @@ def teardown():
# remove our buckets here also, to avoid littering # remove our buckets here also, to avoid littering
nuke_prefixed_buckets(prefix=prefix) nuke_prefixed_buckets(prefix=prefix)
@pytest.fixture(scope="package")
def configfile():
setup()
yield config
@pytest.fixture(autouse=True)
def setup_teardown(configfile):
yield
teardown()
bucket_counter = itertools.count(1) bucket_counter = itertools.count(1)
@ -483,7 +468,7 @@ def _make_raw_request(host, port, method, path, body=None, request_headers=None,
if request_headers is None: if request_headers is None:
request_headers = {} request_headers = {}
c = class_(host, port=port, timeout=timeout) c = class_(host, port, strict=True, timeout=timeout)
# TODO: We might have to modify this in future if we need to interact with # TODO: We might have to modify this in future if we need to interact with
# how httplib.request handles Accept-Encoding and Host. # how httplib.request handles Accept-Encoding and Host.

File diff suppressed because it is too large.


@ -1,13 +1,14 @@
from io import StringIO from cStringIO import StringIO
import boto.exception import boto.exception
import boto.s3.connection import boto.s3.connection
import boto.s3.acl import boto.s3.acl
import boto.s3.lifecycle import boto.s3.lifecycle
import bunch
import datetime import datetime
import time import time
import email.utils import email.utils
import isodate import isodate
import pytest import nose
import operator import operator
import socket import socket
import ssl import ssl
@ -15,6 +16,7 @@ import os
import requests import requests
import base64 import base64
import hmac import hmac
import sha
import pytz import pytz
import json import json
import httplib2 import httplib2
@ -25,16 +27,18 @@ import random
import re import re
from collections import defaultdict from collections import defaultdict
from urllib.parse import urlparse from urlparse import urlparse
from . import utils from nose.tools import eq_ as eq
from nose.plugins.attrib import attr
from nose.plugins.skip import SkipTest
import utils
from .utils import assert_raises from .utils import assert_raises
from .policy import Policy, Statement, make_json_policy from .policy import Policy, Statement, make_json_policy
from . import ( from . import (
configfile,
setup_teardown,
nuke_prefixed_buckets, nuke_prefixed_buckets,
get_new_bucket, get_new_bucket,
get_new_bucket_name, get_new_bucket_name,
@ -51,9 +55,9 @@ from . import (
def check_access_denied(fn, *args, **kwargs): def check_access_denied(fn, *args, **kwargs):
e = assert_raises(boto.exception.S3ResponseError, fn, *args, **kwargs) e = assert_raises(boto.exception.S3ResponseError, fn, *args, **kwargs)
assert e.status == 403 eq(e.status, 403)
assert e.reason == 'Forbidden' eq(e.reason, 'Forbidden')
assert e.error_code == 'AccessDenied' eq(e.error_code, 'AccessDenied')
def check_bad_bucket_name(name): def check_bad_bucket_name(name):
""" """
@ -61,9 +65,9 @@ def check_bad_bucket_name(name):
that the request fails because of an invalid bucket name. that the request fails because of an invalid bucket name.
""" """
e = assert_raises(boto.exception.S3ResponseError, get_new_bucket, targets.main.default, name) e = assert_raises(boto.exception.S3ResponseError, get_new_bucket, targets.main.default, name)
assert e.status == 400 eq(e.status, 400)
assert e.reason.lower() == 'bad request' # some proxies vary the case eq(e.reason.lower(), 'bad request') # some proxies vary the case
assert e.error_code == 'InvalidBucketName' eq(e.error_code, 'InvalidBucketName')
def _create_keys(bucket=None, keys=[]): def _create_keys(bucket=None, keys=[]):
""" """
@ -92,16 +96,20 @@ def _get_alt_connection():
# Breaks DNS with SubdomainCallingFormat # Breaks DNS with SubdomainCallingFormat
@pytest.mark.fails_with_subdomain @attr('fails_with_subdomain')
@attr(resource='bucket')
@attr(method='put')
@attr(operation='create w/! in name')
@attr(assertion='fails with subdomain')
def test_bucket_create_naming_bad_punctuation(): def test_bucket_create_naming_bad_punctuation():
# characters other than [a-zA-Z0-9._-] # characters other than [a-zA-Z0-9._-]
check_bad_bucket_name('alpha!soup') check_bad_bucket_name('alpha!soup')
def check_versioning(bucket, status): def check_versioning(bucket, status):
try: try:
assert bucket.get_versioning_status()['Versioning'] == status eq(bucket.get_versioning_status()['Versioning'], status)
except KeyError: except KeyError:
assert status == None eq(status, None)
# amazon is eventual consistent, retry a bit if failed # amazon is eventual consistent, retry a bit if failed
def check_configure_versioning_retry(bucket, status, expected_string): def check_configure_versioning_retry(bucket, status, expected_string):
@ -109,7 +117,7 @@ def check_configure_versioning_retry(bucket, status, expected_string):
read_status = None read_status = None
for i in range(5): for i in xrange(5):
try: try:
read_status = bucket.get_versioning_status()['Versioning'] read_status = bucket.get_versioning_status()['Versioning']
except KeyError: except KeyError:
@ -120,10 +128,13 @@ def check_configure_versioning_retry(bucket, status, expected_string):
time.sleep(1) time.sleep(1)
assert expected_string == read_status eq(expected_string, read_status)
@pytest.mark.versioning @attr(resource='object')
@pytest.mark.fails_on_dbstore @attr(method='create')
@attr(operation='create versioned object, read not exist null version')
@attr(assertion='read null version behaves correctly')
@attr('versioning')
def test_versioning_obj_read_not_exist_null(): def test_versioning_obj_read_not_exist_null():
bucket = get_new_bucket() bucket = get_new_bucket()
check_versioning(bucket, None) check_versioning(bucket, None)
@ -137,12 +148,15 @@ def test_versioning_obj_read_not_exist_null():
key.set_contents_from_string(content) key.set_contents_from_string(content)
key = bucket.get_key(objname, version_id='null') key = bucket.get_key(objname, version_id='null')
assert key == None eq(key, None)
@pytest.mark.fails_on_aws @attr(resource='object')
@pytest.mark.fails_with_subdomain @attr(method='put')
@pytest.mark.appendobject @attr(operation='append object')
@pytest.mark.fails_on_dbstore @attr(assertion='success')
@attr('fails_on_aws')
@attr('fails_with_subdomain')
@attr('appendobject')
def test_append_object(): def test_append_object():
bucket = get_new_bucket() bucket = get_new_bucket()
key = bucket.new_key('foo') key = bucket.new_key('foo')
@ -154,16 +168,19 @@ def test_append_object():
res = _make_raw_request(host=s3.main.host, port=s3.main.port, method='PUT', path=path1, body='abc', secure=s3.main.is_secure) res = _make_raw_request(host=s3.main.host, port=s3.main.port, method='PUT', path=path1, body='abc', secure=s3.main.is_secure)
path2 = path + '&append&position=3' path2 = path + '&append&position=3'
res = _make_raw_request(host=s3.main.host, port=s3.main.port, method='PUT', path=path2, body='abc', secure=s3.main.is_secure) res = _make_raw_request(host=s3.main.host, port=s3.main.port, method='PUT', path=path2, body='abc', secure=s3.main.is_secure)
assert res.status == 200 eq(res.status, 200)
assert res.reason == 'OK' eq(res.reason, 'OK')
key = bucket.get_key('foo') key = bucket.get_key('foo')
assert key.size == 6 eq(key.size, 6)
@pytest.mark.fails_on_aws @attr(resource='object')
@pytest.mark.fails_with_subdomain @attr(method='put')
@pytest.mark.appendobject @attr(operation='append to normal object')
@pytest.mark.fails_on_dbstore @attr(assertion='fails 409')
@attr('fails_on_aws')
@attr('fails_with_subdomain')
@attr('appendobject')
def test_append_normal_object(): def test_append_normal_object():
bucket = get_new_bucket() bucket = get_new_bucket()
key = bucket.new_key('foo') key = bucket.new_key('foo')
@ -174,13 +191,16 @@ def test_append_normal_object():
path = o.path + '?' + o.query path = o.path + '?' + o.query
path = path + '&append&position=3' path = path + '&append&position=3'
res = _make_raw_request(host=s3.main.host, port=s3.main.port, method='PUT', path=path, body='abc', secure=s3.main.is_secure) res = _make_raw_request(host=s3.main.host, port=s3.main.port, method='PUT', path=path, body='abc', secure=s3.main.is_secure)
assert res.status == 409 eq(res.status, 409)
@pytest.mark.fails_on_aws @attr(resource='object')
@pytest.mark.fails_with_subdomain @attr(method='put')
@pytest.mark.appendobject @attr(operation='append position not right')
@pytest.mark.fails_on_dbstore @attr(assertion='fails 409')
@attr('fails_on_aws')
@attr('fails_with_subdomain')
@attr('appendobject')
def test_append_object_position_wrong(): def test_append_object_position_wrong():
bucket = get_new_bucket() bucket = get_new_bucket()
key = bucket.new_key('foo') key = bucket.new_key('foo')
@ -192,13 +212,17 @@ def test_append_object_position_wrong():
res = _make_raw_request(host=s3.main.host, port=s3.main.port, method='PUT', path=path1, body='abc', secure=s3.main.is_secure) res = _make_raw_request(host=s3.main.host, port=s3.main.port, method='PUT', path=path1, body='abc', secure=s3.main.is_secure)
path2 = path + '&append&position=9' path2 = path + '&append&position=9'
res = _make_raw_request(host=s3.main.host, port=s3.main.port, method='PUT', path=path2, body='abc', secure=s3.main.is_secure) res = _make_raw_request(host=s3.main.host, port=s3.main.port, method='PUT', path=path2, body='abc', secure=s3.main.is_secure)
assert res.status == 409 eq(res.status, 409)
assert int(res.getheader('x-rgw-next-append-position')) == 3 eq(int(res.getheader('x-rgw-next-append-position')), 3)
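The three append tests drive RGW's append-object extension with raw PUTs: the first write goes to ?append&position=0, every later write must pass the object's current length as position, appending to a normally written object is refused, and a wrong offset returns 409 together with an x-rgw-next-append-position header naming the offset to retry with. A bare-bones sketch of that request shape, assuming a hypothetical endpoint and a bucket that accepts anonymous writes (the tests themselves go through _make_raw_request with real credentials):

import http.client

# Hypothetical host/port/bucket; adjust to a real RGW endpoint.
conn = http.client.HTTPConnection('localhost', 8000)

conn.request('PUT', '/mybucket/foo?append&position=0', body=b'abc')
resp = conn.getresponse()
resp.read()
print(resp.status)  # 200 when append is supported and the offset matches

conn.request('PUT', '/mybucket/foo?append&position=9', body=b'abc')
resp = conn.getresponse()
resp.read()
print(resp.status)  # 409 on an offset mismatch
print(resp.getheader('x-rgw-next-append-position'))  # the offset the server expects (3 here)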
# TODO rgw log_bucket.set_as_logging_target() gives 403 Forbidden # TODO rgw log_bucket.set_as_logging_target() gives 403 Forbidden
# http://tracker.newdream.net/issues/984 # http://tracker.newdream.net/issues/984
@pytest.mark.fails_on_rgw @attr(resource='bucket.log')
@attr(method='put')
@attr(operation='set/enable/disable logging target')
@attr(assertion='operations succeed')
@attr('fails_on_rgw')
def test_logging_toggle(): def test_logging_toggle():
bucket = get_new_bucket() bucket = get_new_bucket()
log_bucket = get_new_bucket(targets.main.default, bucket.name + '-log') log_bucket = get_new_bucket(targets.main.default, bucket.name + '-log')
@ -214,6 +238,242 @@ def list_bucket_storage_class(bucket):
return result return result
# The test harness for lifecycle is configured to treat days as 10 second intervals.
@attr(resource='bucket')
@attr(method='put')
@attr(operation='test lifecycle expiration')
@attr('lifecycle')
@attr('lifecycle_transition')
@attr('fails_on_aws')
def test_lifecycle_transition():
sc = configured_storage_classes()
if len(sc) < 3:
raise SkipTest
bucket = set_lifecycle(rules=[{'id': 'rule1', 'transition': lc_transition(days=1, storage_class=sc[1]), 'prefix': 'expire1/', 'status': 'Enabled'},
{'id':'rule2', 'transition': lc_transition(days=4, storage_class=sc[2]), 'prefix': 'expire3/', 'status': 'Enabled'}])
_create_keys(bucket=bucket, keys=['expire1/foo', 'expire1/bar', 'keep2/foo',
'keep2/bar', 'expire3/foo', 'expire3/bar'])
# Get list of all keys
init_keys = bucket.get_all_keys()
eq(len(init_keys), 6)
# Wait for first expiration (plus fudge to handle the timer window)
time.sleep(25)
expire1_keys = list_bucket_storage_class(bucket)
eq(len(expire1_keys['STANDARD']), 4)
eq(len(expire1_keys[sc[1]]), 2)
eq(len(expire1_keys[sc[2]]), 0)
# Wait for next expiration cycle
time.sleep(10)
keep2_keys = list_bucket_storage_class(bucket)
eq(len(keep2_keys['STANDARD']), 4)
eq(len(keep2_keys[sc[1]]), 2)
eq(len(keep2_keys[sc[2]]), 0)
# Wait for final expiration cycle
time.sleep(20)
expire3_keys = list_bucket_storage_class(bucket)
eq(len(expire3_keys['STANDARD']), 2)
eq(len(expire3_keys[sc[1]]), 2)
eq(len(expire3_keys[sc[2]]), 2)
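Given the 10-second "day" noted in the comment above, the sleeps in this test line up with the two rules as follows (a back-of-the-envelope check, not part of the suite):

DAY = 10  # seconds per lifecycle "day" in the test harness configuration
rule1_due = 1 * DAY   # expire1/ objects transition to sc[1] after ~10s
rule2_due = 4 * DAY   # expire3/ objects transition to sc[2] after ~40s

elapsed = 0
for sleep_s, label in [(25, 'first check'), (10, 'second check'), (20, 'third check')]:
    elapsed += sleep_s
    print('%s at %ds: rule1 applied=%s, rule2 applied=%s'
          % (label, elapsed, elapsed >= rule1_due, elapsed >= rule2_due))
# first check at 25s: only rule1 has fired (2 keys moved to sc[1])
# second check at 35s: still only rule1
# third check at 55s: both rules have fired (2 keys in sc[1], 2 in sc[2])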
# The test harness for lifecycle is configured to treat days as 10 second intervals.
@attr(resource='bucket')
@attr(method='put')
@attr(operation='test lifecycle expiration')
@attr('lifecycle')
@attr('lifecycle_transition')
@attr('fails_on_aws')
def test_lifecycle_transition_single_rule_multi_trans():
sc = configured_storage_classes()
if len(sc) < 3:
raise SkipTest
bucket = set_lifecycle(rules=[
{'id': 'rule1',
'transition': lc_transitions([
lc_transition(days=1, storage_class=sc[1]),
lc_transition(days=4, storage_class=sc[2])]),
'prefix': 'expire1/',
'status': 'Enabled'}])
_create_keys(bucket=bucket, keys=['expire1/foo', 'expire1/bar', 'keep2/foo',
'keep2/bar', 'expire3/foo', 'expire3/bar'])
# Get list of all keys
init_keys = bucket.get_all_keys()
eq(len(init_keys), 6)
# Wait for first expiration (plus fudge to handle the timer window)
time.sleep(25)
expire1_keys = list_bucket_storage_class(bucket)
eq(len(expire1_keys['STANDARD']), 4)
eq(len(expire1_keys[sc[1]]), 2)
eq(len(expire1_keys[sc[2]]), 0)
# Wait for next expiration cycle
time.sleep(10)
keep2_keys = list_bucket_storage_class(bucket)
eq(len(keep2_keys['STANDARD']), 4)
eq(len(keep2_keys[sc[1]]), 2)
eq(len(keep2_keys[sc[2]]), 0)
# Wait for final expiration cycle
time.sleep(20)
expire3_keys = list_bucket_storage_class(bucket)
eq(len(expire3_keys['STANDARD']), 4)
eq(len(expire3_keys[sc[1]]), 0)
eq(len(expire3_keys[sc[2]]), 2)
def generate_lifecycle_body(rules):
body = '<?xml version="1.0" encoding="UTF-8"?><LifecycleConfiguration>'
for rule in rules:
body += '<Rule><ID>%s</ID><Status>%s</Status>' % (rule['ID'], rule['Status'])
if 'Prefix' in rule.keys():
body += '<Prefix>%s</Prefix>' % rule['Prefix']
if 'Filter' in rule.keys():
prefix_str= '' # AWS supports empty filters
if 'Prefix' in rule['Filter'].keys():
prefix_str = '<Prefix>%s</Prefix>' % rule['Filter']['Prefix']
body += '<Filter>%s</Filter>' % prefix_str
if 'Expiration' in rule.keys():
if 'ExpiredObjectDeleteMarker' in rule['Expiration'].keys():
body += '<Expiration><ExpiredObjectDeleteMarker>%s</ExpiredObjectDeleteMarker></Expiration>' \
% rule['Expiration']['ExpiredObjectDeleteMarker']
elif 'Date' in rule['Expiration'].keys():
body += '<Expiration><Date>%s</Date></Expiration>' % rule['Expiration']['Date']
else:
body += '<Expiration><Days>%d</Days></Expiration>' % rule['Expiration']['Days']
if 'NoncurrentVersionExpiration' in rule.keys():
body += '<NoncurrentVersionExpiration><NoncurrentDays>%d</NoncurrentDays></NoncurrentVersionExpiration>' % \
rule['NoncurrentVersionExpiration']['NoncurrentDays']
if 'NoncurrentVersionTransition' in rule.keys():
for t in rule['NoncurrentVersionTransition']:
body += '<NoncurrentVersionTransition>'
body += '<NoncurrentDays>%d</NoncurrentDays>' % \
t['NoncurrentDays']
body += '<StorageClass>%s</StorageClass>' % \
t['StorageClass']
body += '</NoncurrentVersionTransition>'
if 'AbortIncompleteMultipartUpload' in rule.keys():
body += '<AbortIncompleteMultipartUpload><DaysAfterInitiation>%d</DaysAfterInitiation>' \
'</AbortIncompleteMultipartUpload>' % rule['AbortIncompleteMultipartUpload']['DaysAfterInitiation']
body += '</Rule>'
body += '</LifecycleConfiguration>'
return body
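generate_lifecycle_body builds the lifecycle XML by string concatenation; for reference, a minimal rule and (wrapped here for readability) the single-line document it produces:

rules = [{'ID': 'rule1', 'Prefix': 'test1/', 'Status': 'Enabled',
          'Expiration': {'Days': 2}}]
print(generate_lifecycle_body(rules))
# <?xml version="1.0" encoding="UTF-8"?><LifecycleConfiguration>
#   <Rule><ID>rule1</ID><Status>Enabled</Status><Prefix>test1/</Prefix>
#     <Expiration><Days>2</Days></Expiration>
#   </Rule>
# </LifecycleConfiguration>

The tests then PUT this body on the ?lifecycle subresource with a Content-MD5 header computed by boto.utils.compute_md5, as the S3 lifecycle PUT expects.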
@attr(resource='bucket')
@attr(method='put')
@attr(operation='set lifecycle config with noncurrent version expiration')
@attr('lifecycle')
@attr('lifecycle_transition')
def test_lifecycle_set_noncurrent_transition():
sc = configured_storage_classes()
if len(sc) < 3:
raise SkipTest
bucket = get_new_bucket()
rules = [
{
'ID': 'rule1',
'Prefix': 'test1/',
'Status': 'Enabled',
'NoncurrentVersionTransition': [
{
'NoncurrentDays': 2,
'StorageClass': sc[1]
},
{
'NoncurrentDays': 4,
'StorageClass': sc[2]
}
],
'NoncurrentVersionExpiration': {
'NoncurrentDays': 6
}
},
{'ID': 'rule2', 'Prefix': 'test2/', 'Status': 'Disabled', 'NoncurrentVersionExpiration': {'NoncurrentDays': 3}}
]
body = generate_lifecycle_body(rules)
fp = StringIO(body)
md5 = boto.utils.compute_md5(fp)
headers = {'Content-MD5': md5[1], 'Content-Type': 'text/xml'}
res = bucket.connection.make_request('PUT', bucket.name, data=fp.getvalue(), query_args='lifecycle',
headers=headers)
eq(res.status, 200)
eq(res.reason, 'OK')
@attr(resource='bucket')
@attr(method='put')
@attr(operation='test lifecycle non-current version expiration')
@attr('lifecycle')
@attr('lifecycle_expiration')
@attr('lifecycle_transition')
@attr('fails_on_aws')
def test_lifecycle_noncur_transition():
sc = configured_storage_classes()
if len(sc) < 3:
raise SkipTest
bucket = get_new_bucket()
check_configure_versioning_retry(bucket, True, "Enabled")
rules = [
{
'ID': 'rule1',
'Prefix': 'test1/',
'Status': 'Enabled',
'NoncurrentVersionTransition': [
{
'NoncurrentDays': 1,
'StorageClass': sc[1]
},
{
'NoncurrentDays': 3,
'StorageClass': sc[2]
}
],
'NoncurrentVersionExpiration': {
'NoncurrentDays': 5
}
}
]
body = generate_lifecycle_body(rules)
fp = StringIO(body)
md5 = boto.utils.compute_md5(fp)
headers = {'Content-MD5': md5[1], 'Content-Type': 'text/xml'}
bucket.connection.make_request('PUT', bucket.name, data=fp.getvalue(), query_args='lifecycle',
headers=headers)
create_multiple_versions(bucket, "test1/a", 3)
create_multiple_versions(bucket, "test1/b", 3)
init_keys = bucket.get_all_versions()
eq(len(init_keys), 6)
time.sleep(25)
expire1_keys = list_bucket_storage_class(bucket)
eq(len(expire1_keys['STANDARD']), 2)
eq(len(expire1_keys[sc[1]]), 4)
eq(len(expire1_keys[sc[2]]), 0)
time.sleep(20)
expire1_keys = list_bucket_storage_class(bucket)
eq(len(expire1_keys['STANDARD']), 2)
eq(len(expire1_keys[sc[1]]), 0)
eq(len(expire1_keys[sc[2]]), 4)
time.sleep(20)
expire_keys = bucket.get_all_versions()
expire1_keys = list_bucket_storage_class(bucket)
eq(len(expire1_keys['STANDARD']), 2)
eq(len(expire1_keys[sc[1]]), 0)
eq(len(expire1_keys[sc[2]]), 0)
def transfer_part(bucket, mp_id, mp_keyname, i, part, headers=None): def transfer_part(bucket, mp_id, mp_keyname, i, part, headers=None):
"""Transfer a part of a multipart upload. Designed to be run in parallel. """Transfer a part of a multipart upload. Designed to be run in parallel.
""" """
@ -231,11 +491,11 @@ def generate_random(size, part_size=5*1024*1024):
chunk = 1024 chunk = 1024
allowed = string.ascii_letters allowed = string.ascii_letters
for x in range(0, size, part_size): for x in range(0, size, part_size):
strpart = ''.join([allowed[random.randint(0, len(allowed) - 1)] for _ in range(chunk)]) strpart = ''.join([allowed[random.randint(0, len(allowed) - 1)] for _ in xrange(chunk)])
s = '' s = ''
left = size - x left = size - x
this_part_size = min(left, part_size) this_part_size = min(left, part_size)
for y in range(this_part_size // chunk): for y in range(this_part_size / chunk):
s = s + strpart s = s + strpart
if this_part_size > len(s): if this_part_size > len(s):
s = s + strpart[0:this_part_size - len(s)] s = s + strpart[0:this_part_size - len(s)]
@ -275,7 +535,7 @@ def _populate_key(bucket, keyname, size=7*1024*1024, storage_class=None):
key = bucket.new_key(keyname) key = bucket.new_key(keyname)
if storage_class: if storage_class:
key.storage_class = storage_class key.storage_class = storage_class
data_str = str(next(generate_random(size, size))) data_str = str(generate_random(size, size).next())
data = StringIO(data_str) data = StringIO(data_str)
key.set_contents_from_file(fp=data) key.set_contents_from_file(fp=data)
return (key, data_str) return (key, data_str)
@ -285,13 +545,13 @@ def gen_rand_string(size, chars=string.ascii_uppercase + string.digits):
def verify_object(bucket, k, data=None, storage_class=None): def verify_object(bucket, k, data=None, storage_class=None):
if storage_class: if storage_class:
assert k.storage_class == storage_class eq(k.storage_class, storage_class)
if data: if data:
read_data = k.get_contents_as_string() read_data = k.get_contents_as_string()
equal = data == read_data.decode() # avoid spamming log if data not equal equal = data == read_data # avoid spamming log if data not equal
assert equal == True eq(equal, True)
def copy_object_storage_class(src_bucket, src_key, dest_bucket, dest_key, storage_class): def copy_object_storage_class(src_bucket, src_key, dest_bucket, dest_key, storage_class):
query_args=None query_args=None
@ -307,7 +567,7 @@ def copy_object_storage_class(src_bucket, src_key, dest_bucket, dest_key, storag
res = dest_bucket.connection.make_request('PUT', dest_bucket.name, dest_key.name, res = dest_bucket.connection.make_request('PUT', dest_bucket.name, dest_key.name,
query_args=query_args, headers=headers) query_args=query_args, headers=headers)
assert res.status == 200 eq(res.status, 200)
def _populate_multipart_key(bucket, kname, size, storage_class=None): def _populate_multipart_key(bucket, kname, size, storage_class=None):
(upload, data) = _multipart_upload(bucket, kname, size, storage_class=storage_class) (upload, data) = _multipart_upload(bucket, kname, size, storage_class=storage_class)
@ -362,9 +622,6 @@ def configured_storage_classes():
if item != 'STANDARD': if item != 'STANDARD':
sc.append(item) sc.append(item)
sc = [i for i in sc if i]
print("storage classes configured: " + str(sc))
return sc return sc
def lc_transition(days=None, date=None, storage_class=None): def lc_transition(days=None, date=None, storage_class=None):
@ -378,13 +635,15 @@ def lc_transitions(transitions=None):
return result return result
@pytest.mark.storage_class @attr(resource='object')
@pytest.mark.fails_on_aws @attr(method='put')
@pytest.mark.fails_on_dbstore @attr(operation='test create object with storage class')
@attr('storage_class')
@attr('fails_on_aws')
def test_object_storage_class(): def test_object_storage_class():
sc = configured_storage_classes() sc = configured_storage_classes()
if len(sc) < 2: if len(sc) < 2:
pytest.skip('requires multiple storage classes') raise SkipTest
bucket = get_new_bucket() bucket = get_new_bucket()
@ -394,13 +653,15 @@ def test_object_storage_class():
verify_object(bucket, k, data, storage_class) verify_object(bucket, k, data, storage_class)
@pytest.mark.storage_class @attr(resource='object')
@pytest.mark.fails_on_aws @attr(method='put')
@pytest.mark.fails_on_dbstore @attr(operation='test create multipart object with storage class')
@attr('storage_class')
@attr('fails_on_aws')
def test_object_storage_class_multipart(): def test_object_storage_class_multipart():
sc = configured_storage_classes() sc = configured_storage_classes()
if len(sc) < 2: if len(sc) < 2:
pytest.skip('requires multiple storage classes') raise SkipTest
bucket = get_new_bucket() bucket = get_new_bucket()
size = 11 * 1024 * 1024 size = 11 * 1024 * 1024
@ -410,13 +671,13 @@ def test_object_storage_class_multipart():
(upload, data) = _multipart_upload(bucket, key, size, storage_class=storage_class) (upload, data) = _multipart_upload(bucket, key, size, storage_class=storage_class)
upload.complete_upload() upload.complete_upload()
key2 = bucket.get_key(key) key2 = bucket.get_key(key)
assert key2.size == size eq(key2.size, size)
assert key2.storage_class == storage_class eq(key2.storage_class, storage_class)
def _do_test_object_modify_storage_class(obj_write_func, size): def _do_test_object_modify_storage_class(obj_write_func, size):
sc = configured_storage_classes() sc = configured_storage_classes()
if len(sc) < 2: if len(sc) < 2:
pytest.skip('requires multiple storage classes') raise SkipTest
bucket = get_new_bucket() bucket = get_new_bucket()
@ -433,23 +694,27 @@ def _do_test_object_modify_storage_class(obj_write_func, size):
copy_object_storage_class(bucket, k, bucket, k, new_storage_class) copy_object_storage_class(bucket, k, bucket, k, new_storage_class)
verify_object(bucket, k, data, storage_class) verify_object(bucket, k, data, storage_class)
@pytest.mark.storage_class @attr(resource='object')
@pytest.mark.fails_on_aws @attr(method='put')
@pytest.mark.fails_on_dbstore @attr(operation='test changing objects storage class')
@attr('storage_class')
@attr('fails_on_aws')
def test_object_modify_storage_class(): def test_object_modify_storage_class():
_do_test_object_modify_storage_class(_populate_key, size=9*1024*1024) _do_test_object_modify_storage_class(_populate_key, size=9*1024*1024)
@pytest.mark.storage_class @attr(resource='object')
@pytest.mark.fails_on_aws @attr(method='put')
@pytest.mark.fails_on_dbstore @attr(operation='test changing objects storage class')
@attr('storage_class')
@attr('fails_on_aws')
def test_object_modify_storage_class_multipart(): def test_object_modify_storage_class_multipart():
_do_test_object_modify_storage_class(_populate_multipart_key, size=11*1024*1024) _do_test_object_modify_storage_class(_populate_multipart_key, size=11*1024*1024)
def _do_test_object_storage_class_copy(obj_write_func, size): def _do_test_object_storage_class_copy(obj_write_func, size):
sc = configured_storage_classes() sc = configured_storage_classes()
if len(sc) < 2: if len(sc) < 2:
pytest.skip('requires multiple storage classes') raise SkipTest
src_bucket = get_new_bucket() src_bucket = get_new_bucket()
dest_bucket = get_new_bucket() dest_bucket = get_new_bucket()
@ -467,15 +732,19 @@ def _do_test_object_storage_class_copy(obj_write_func, size):
copy_object_storage_class(src_bucket, src_key, dest_bucket, dest_key, new_storage_class) copy_object_storage_class(src_bucket, src_key, dest_bucket, dest_key, new_storage_class)
verify_object(dest_bucket, dest_key, data, new_storage_class) verify_object(dest_bucket, dest_key, data, new_storage_class)
@pytest.mark.storage_class @attr(resource='object')
@pytest.mark.fails_on_aws @attr(method='copy')
@pytest.mark.fails_on_dbstore @attr(operation='test copy object to object with different storage class')
@attr('storage_class')
@attr('fails_on_aws')
def test_object_storage_class_copy(): def test_object_storage_class_copy():
_do_test_object_storage_class_copy(_populate_key, size=9*1024*1024) _do_test_object_storage_class_copy(_populate_key, size=9*1024*1024)
@pytest.mark.storage_class @attr(resource='object')
@pytest.mark.fails_on_aws @attr(method='copy')
@pytest.mark.fails_on_dbstore @attr(operation='test changing objects storage class')
@attr('storage_class')
@attr('fails_on_aws')
def test_object_storage_class_copy_multipart(): def test_object_storage_class_copy_multipart():
_do_test_object_storage_class_copy(_populate_multipart_key, size=9*1024*1024) _do_test_object_storage_class_copy(_populate_multipart_key, size=9*1024*1024)
@ -485,7 +754,7 @@ class FakeFile(object):
""" """
def __init__(self, char='A', interrupt=None): def __init__(self, char='A', interrupt=None):
self.offset = 0 self.offset = 0
self.char = bytes(char, 'utf-8') self.char = char
self.interrupt = interrupt self.interrupt = interrupt
def seek(self, offset, whence=os.SEEK_SET): def seek(self, offset, whence=os.SEEK_SET):
@ -532,7 +801,7 @@ class FakeFileVerifier(object):
if self.char == None: if self.char == None:
self.char = data[0] self.char = data[0]
self.size += size self.size += size
assert data.decode() == self.char*size eq(data, self.char*size)
def _verify_atomic_key_data(key, size=-1, char=None): def _verify_atomic_key_data(key, size=-1, char=None):
""" """
@ -541,7 +810,7 @@ def _verify_atomic_key_data(key, size=-1, char=None):
fp_verify = FakeFileVerifier(char) fp_verify = FakeFileVerifier(char)
key.get_contents_to_file(fp_verify) key.get_contents_to_file(fp_verify)
if size >= 0: if size >= 0:
assert fp_verify.size == size eq(fp_verify.size, size)
def _test_atomic_dual_conditional_write(file_size): def _test_atomic_dual_conditional_write(file_size):
""" """
@ -570,20 +839,26 @@ def _test_atomic_dual_conditional_write(file_size):
# key.set_contents_from_file(fp_c, headers={'If-Match': etag_fp_a}) # key.set_contents_from_file(fp_c, headers={'If-Match': etag_fp_a})
e = assert_raises(boto.exception.S3ResponseError, key.set_contents_from_file, fp_c, e = assert_raises(boto.exception.S3ResponseError, key.set_contents_from_file, fp_c,
headers={'If-Match': etag_fp_a}) headers={'If-Match': etag_fp_a})
assert e.status == 412 eq(e.status, 412)
assert e.reason == 'Precondition Failed' eq(e.reason, 'Precondition Failed')
assert e.error_code == 'PreconditionFailed' eq(e.error_code, 'PreconditionFailed')
# verify the file # verify the file
_verify_atomic_key_data(key, file_size, 'B') _verify_atomic_key_data(key, file_size, 'B')
@pytest.mark.fails_on_aws @attr(resource='object')
@pytest.mark.fails_on_dbstore @attr(method='put')
@attr(operation='write one or the other')
@attr(assertion='1MB successful')
@attr('fails_on_aws')
def test_atomic_dual_conditional_write_1mb(): def test_atomic_dual_conditional_write_1mb():
_test_atomic_dual_conditional_write(1024*1024) _test_atomic_dual_conditional_write(1024*1024)
@pytest.mark.fails_on_aws @attr(resource='object')
@pytest.mark.fails_on_dbstore @attr(method='put')
@attr(operation='write file in deleted bucket')
@attr(assertion='fail 404')
@attr('fails_on_aws')
def test_atomic_write_bucket_gone(): def test_atomic_write_bucket_gone():
bucket = get_new_bucket() bucket = get_new_bucket()
@ -595,9 +870,9 @@ def test_atomic_write_bucket_gone():
key = bucket.new_key('foo') key = bucket.new_key('foo')
fp_a = FakeWriteFile(1024*1024, 'A', remove_bucket) fp_a = FakeWriteFile(1024*1024, 'A', remove_bucket)
e = assert_raises(boto.exception.S3ResponseError, key.set_contents_from_file, fp_a) e = assert_raises(boto.exception.S3ResponseError, key.set_contents_from_file, fp_a)
assert e.status == 404 eq(e.status, 404)
assert e.reason == 'Not Found' eq(e.reason, 'Not Found')
assert e.error_code == 'NoSuchBucket' eq(e.error_code, 'NoSuchBucket')
def _multipart_upload_enc(bucket, s3_key_name, size, part_size=5*1024*1024, def _multipart_upload_enc(bucket, s3_key_name, size, part_size=5*1024*1024,
do_list=None, init_headers=None, part_headers=None, do_list=None, init_headers=None, part_headers=None,
@ -623,8 +898,11 @@ def _multipart_upload_enc(bucket, s3_key_name, size, part_size=5*1024*1024,
@pytest.mark.encryption @attr(resource='object')
@pytest.mark.fails_on_dbstore @attr(method='put')
@attr(operation='multipart upload with bad key for uploading chunks')
@attr(assertion='successful')
@attr('encryption')
def test_encryption_sse_c_multipart_invalid_chunks_1(): def test_encryption_sse_c_multipart_invalid_chunks_1():
bucket = get_new_bucket() bucket = get_new_bucket()
key = "multipart_enc" key = "multipart_enc"
@ -645,10 +923,13 @@ def test_encryption_sse_c_multipart_invalid_chunks_1():
_multipart_upload_enc, bucket, key, objlen, _multipart_upload_enc, bucket, key, objlen,
init_headers=init_headers, part_headers=part_headers, init_headers=init_headers, part_headers=part_headers,
metadata={'foo': 'bar'}) metadata={'foo': 'bar'})
assert e.status == 400 eq(e.status, 400)
@pytest.mark.encryption @attr(resource='object')
@pytest.mark.fails_on_dbstore @attr(method='put')
@attr(operation='multipart upload with bad md5 for chunks')
@attr(assertion='successful')
@attr('encryption')
def test_encryption_sse_c_multipart_invalid_chunks_2(): def test_encryption_sse_c_multipart_invalid_chunks_2():
bucket = get_new_bucket() bucket = get_new_bucket()
key = "multipart_enc" key = "multipart_enc"
@ -669,11 +950,14 @@ def test_encryption_sse_c_multipart_invalid_chunks_2():
_multipart_upload_enc, bucket, key, objlen, _multipart_upload_enc, bucket, key, objlen,
init_headers=init_headers, part_headers=part_headers, init_headers=init_headers, part_headers=part_headers,
metadata={'foo': 'bar'}) metadata={'foo': 'bar'})
assert e.status == 400 eq(e.status, 400)
@pytest.mark.fails_with_subdomain @attr(resource='bucket')
@pytest.mark.bucket_policy @attr(method='get')
@pytest.mark.fails_on_dbstore @attr(operation='Test Bucket Policy for a user belonging to a different tenant')
@attr(assertion='succeeds')
@attr('fails_with_subdomain')
@attr('bucket-policy')
def test_bucket_policy_different_tenant(): def test_bucket_policy_different_tenant():
bucket = get_new_bucket() bucket = get_new_bucket()
key = bucket.new_key('asdf') key = bucket.new_key('asdf')
@ -708,8 +992,10 @@ def test_bucket_policy_different_tenant():
b = new_conn.get_bucket(bucket_name) b = new_conn.get_bucket(bucket_name)
b.get_all_keys() b.get_all_keys()
@pytest.mark.bucket_policy @attr(resource='bucket')
@pytest.mark.fails_on_dbstore @attr(method='put')
@attr(operation='Test put condition operator end with ifExists')
@attr('bucket-policy')
def test_bucket_policy_set_condition_operator_end_with_IfExists(): def test_bucket_policy_set_condition_operator_end_with_IfExists():
bucket = _create_keys(keys=['foo']) bucket = _create_keys(keys=['foo'])
policy = '''{ policy = '''{
@ -728,25 +1014,73 @@ def test_bucket_policy_set_condition_operator_end_with_IfExists():
} }
] ]
}''' % bucket.name }''' % bucket.name
assert bucket.set_policy(policy) == True eq(bucket.set_policy(policy), True)
res = _make_request('GET', bucket.name, bucket.get_key("foo"), res = _make_request('GET', bucket.name, bucket.get_key("foo"),
request_headers={'referer': 'http://www.example.com/'}) request_headers={'referer': 'http://www.example.com/'})
assert res.status == 200 eq(res.status, 200)
res = _make_request('GET', bucket.name, bucket.get_key("foo"), res = _make_request('GET', bucket.name, bucket.get_key("foo"),
request_headers={'referer': 'http://www.example.com/index.html'}) request_headers={'referer': 'http://www.example.com/index.html'})
assert res.status == 200 eq(res.status, 200)
res = _make_request('GET', bucket.name, bucket.get_key("foo")) res = _make_request('GET', bucket.name, bucket.get_key("foo"))
assert res.status == 200 eq(res.status, 200)
res = _make_request('GET', bucket.name, bucket.get_key("foo"), res = _make_request('GET', bucket.name, bucket.get_key("foo"),
request_headers={'referer': 'http://example.com'}) request_headers={'referer': 'http://example.com'})
assert res.status == 403 eq(res.status, 403)
def _make_arn_resource(path="*"): def _make_arn_resource(path="*"):
return "arn:aws:s3:::{}".format(path) return "arn:aws:s3:::{}".format(path)
@pytest.mark.tagging @attr(resource='object')
@pytest.mark.bucket_policy @attr(method='put')
@pytest.mark.fails_on_dbstore @attr(operation='Deny put obj requests without encryption')
@attr(assertion='success')
@attr('encryption')
@attr('bucket-policy')
def test_bucket_policy_put_obj_enc():
bucket = get_new_bucket()
deny_incorrect_algo = {
"StringNotEquals": {
"s3:x-amz-server-side-encryption": "AES256"
}
}
deny_unencrypted_obj = {
"Null" : {
"s3:x-amz-server-side-encryption": "true"
}
}
p = Policy()
resource = _make_arn_resource("{}/{}".format(bucket.name, "*"))
s1 = Statement("s3:PutObject", resource, effect="Deny", condition=deny_incorrect_algo)
s2 = Statement("s3:PutObject", resource, effect="Deny", condition=deny_unencrypted_obj)
policy_document = p.add_statement(s1).add_statement(s2).to_json()
bucket.set_policy(policy_document)
key1_str ='testobj'
key1 = bucket.new_key(key1_str)
check_access_denied(key1.set_contents_from_string, key1_str)
sse_client_headers = {
'x-amz-server-side-encryption' : 'AES256',
'x-amz-server-side-encryption-customer-algorithm': 'AES256',
'x-amz-server-side-encryption-customer-key': 'pO3upElrwuEXSoFwCfnZPdSsmt/xWeFa0N9KgDijwVs=',
'x-amz-server-side-encryption-customer-key-md5': 'DWygnHRtgiJ77HCm+1rvHw=='
}
key1.set_contents_from_string(key1_str, headers=sse_client_headers)
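The two Deny statements above are assembled with the Policy/Statement helpers from .policy; the resulting document is roughly the JSON below (the bucket name is a placeholder, and the exact Sid/Principal fields depend on those helpers). One statement rejects any PutObject whose s3:x-amz-server-side-encryption value is not AES256, the other rejects requests where that header is absent (the Null condition), which is why the plain put is denied while the put carrying encryption headers succeeds.

policy_sketch = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::BUCKET_NAME/*",  # placeholder for bucket.name
            "Condition": {"StringNotEquals": {"s3:x-amz-server-side-encryption": "AES256"}},
        },
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::BUCKET_NAME/*",
            "Condition": {"Null": {"s3:x-amz-server-side-encryption": "true"}},
        },
    ],
}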
@attr(resource='object')
@attr(method='put')
@attr(operation='put obj with RequestObjectTag')
@attr(assertion='success')
@attr('tagging')
@attr('bucket-policy')
def test_bucket_policy_put_obj_request_obj_tag(): def test_bucket_policy_put_obj_request_obj_tag():
bucket = get_new_bucket() bucket = get_new_bucket()


@ -1,20 +1,23 @@
from __future__ import print_function
import sys import sys
from collections.abc import Container import collections
import pytest import nose
import string import string
import random import random
from pprint import pprint from pprint import pprint
import time import time
import boto.exception import boto.exception
import socket
from urllib.parse import urlparse from urlparse import urlparse
from nose.tools import eq_ as eq, ok_ as ok
from nose.plugins.attrib import attr
from nose.tools import timed
from nose.plugins.skip import SkipTest
from .. import common from .. import common
from . import ( from . import (
configfile,
setup_teardown,
get_new_bucket, get_new_bucket,
get_new_bucket_name, get_new_bucket_name,
s3, s3,
@ -39,26 +42,35 @@ ERRORDOC_TEMPLATE = '<html><h1>ErrorDoc</h1><body>{random}</body></html>'
CAN_WEBSITE = None CAN_WEBSITE = None
@pytest.fixture(autouse=True, scope="module")
def check_can_test_website(): def check_can_test_website():
bucket = get_new_bucket() global CAN_WEBSITE
try: # This is a bit expensive, so we cache this
wsconf = bucket.get_website_configuration() if CAN_WEBSITE is None:
bucket = get_new_bucket()
try:
wsconf = bucket.get_website_configuration()
CAN_WEBSITE = True
except boto.exception.S3ResponseError as e:
if e.status == 404 and e.reason == 'Not Found' and e.error_code in ['NoSuchWebsiteConfiguration', 'NoSuchKey']:
CAN_WEBSITE = True
elif e.status == 405 and e.reason == 'Method Not Allowed' and e.error_code == 'MethodNotAllowed':
# rgw_enable_static_website is false
CAN_WEBSITE = False
elif e.status == 403 and e.reason == 'SignatureDoesNotMatch' and e.error_code == 'Forbidden':
# This is older versions that do not support the website code
CAN_WEBSITE = False
else:
raise RuntimeError("Unknown response in checking if WebsiteConf is supported", e)
finally:
bucket.delete()
if CAN_WEBSITE is True:
return True return True
except boto.exception.S3ResponseError as e: elif CAN_WEBSITE is False:
if e.status == 404 and e.reason == 'Not Found' and e.error_code in ['NoSuchWebsiteConfiguration', 'NoSuchKey']: raise SkipTest
return True else:
elif e.status == 405 and e.reason == 'Method Not Allowed' and e.error_code == 'MethodNotAllowed': raise RuntimeError("Unknown cached response in checking if WebsiteConf is supported")
pytest.skip('rgw_enable_static_website is false')
elif e.status == 403 and e.reason == 'SignatureDoesNotMatch' and e.error_code == 'Forbidden':
# This is older versions that do not support the website code
pytest.skip('static website is not implemented')
elif e.status == 501 and e.error_code == 'NotImplemented':
pytest.skip('static website is not implemented')
else:
raise RuntimeError("Unknown response in checking if WebsiteConf is supported", e)
finally:
bucket.delete()
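Both variants shown above solve the same problem: probe once whether the endpoint supports static-website configuration, then skip the module's tests if it does not. The nose version caches the answer in the CAN_WEBSITE global and raises SkipTest; the pytest version gets the same one-probe-per-module behaviour from an autouse, module-scoped fixture. The general shape of that fixture pattern, with a hypothetical probe:

import pytest

def probe_feature():
    # Stand-in for the real capability check (e.g. get_website_configuration()).
    return False

@pytest.fixture(autouse=True, scope="module")
def require_feature():
    # Applied to every test in the module; skips them all when the probe fails.
    if not probe_feature():
        pytest.skip('feature not enabled on this endpoint')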
def make_website_config(xml_fragment): def make_website_config(xml_fragment):
""" """
@ -96,7 +108,7 @@ def get_website_url(**kwargs):
def _test_website_populate_fragment(xml_fragment, fields): def _test_website_populate_fragment(xml_fragment, fields):
for k in ['RoutingRules']: for k in ['RoutingRules']:
if k in list(fields.keys()) and len(fields[k]) > 0: if k in fields.keys() and len(fields[k]) > 0:
fields[k] = '<%s>%s</%s>' % (k, fields[k], k) fields[k] = '<%s>%s</%s>' % (k, fields[k], k)
f = { f = {
'IndexDocument_Suffix': choose_bucket_prefix(template='index-{random}.html', max_len=32), 'IndexDocument_Suffix': choose_bucket_prefix(template='index-{random}.html', max_len=32),
@ -154,24 +166,24 @@ def _test_website_prep(bucket, xml_template, hardcoded_fields = {}, expect_fail=
# Cleanup for our validation # Cleanup for our validation
common.assert_xml_equal(config_xmlcmp, config_xmlnew) common.assert_xml_equal(config_xmlcmp, config_xmlnew)
#print("config_xmlcmp\n", config_xmlcmp) #print("config_xmlcmp\n", config_xmlcmp)
#assert config_xmlnew == config_xmlcmp #eq (config_xmlnew, config_xmlcmp)
f['WebsiteConfiguration'] = config_xmlcmp f['WebsiteConfiguration'] = config_xmlcmp
return f return f
def __website_expected_reponse_status(res, status, reason): def __website_expected_reponse_status(res, status, reason):
if not isinstance(status, Container): if not isinstance(status, collections.Container):
status = set([status]) status = set([status])
if not isinstance(reason, Container): if not isinstance(reason, collections.Container):
reason = set([reason]) reason = set([reason])
if status is not IGNORE_FIELD: if status is not IGNORE_FIELD:
assert res.status in status, 'HTTP code was %s should be %s' % (res.status, status) ok(res.status in status, 'HTTP code was %s should be %s' % (res.status, status))
if reason is not IGNORE_FIELD: if reason is not IGNORE_FIELD:
assert res.reason in reason, 'HTTP reason was was %s should be %s' % (res.reason, reason) ok(res.reason in reason, 'HTTP reason was was %s should be %s' % (res.reason, reason))
def _website_expected_default_html(**kwargs): def _website_expected_default_html(**kwargs):
fields = [] fields = []
for k in list(kwargs.keys()): for k in kwargs.keys():
# AmazonS3 seems to be inconsistent, some HTML errors include BucketName, but others do not. # AmazonS3 seems to be inconsistent, some HTML errors include BucketName, but others do not.
if k is 'BucketName': if k is 'BucketName':
continue continue
@ -179,7 +191,7 @@ def _website_expected_default_html(**kwargs):
v = kwargs[k] v = kwargs[k]
if isinstance(v, str): if isinstance(v, str):
v = [v] v = [v]
elif not isinstance(v, Container): elif not isinstance(v, collections.Container):
v = [v] v = [v]
for v2 in v: for v2 in v:
s = '<li>%s: %s</li>' % (k,v2) s = '<li>%s: %s</li>' % (k,v2)
@ -197,22 +209,21 @@ def _website_expected_error_response(res, bucket_name, status, reason, code, con
errorcode = res.getheader('x-amz-error-code', None) errorcode = res.getheader('x-amz-error-code', None)
if errorcode is not None: if errorcode is not None:
if code is not IGNORE_FIELD: if code is not IGNORE_FIELD:
assert errorcode == code eq(errorcode, code)
if not isinstance(content, Container): if not isinstance(content, collections.Container):
content = set([content]) content = set([content])
for f in content: for f in content:
if f is not IGNORE_FIELD and f is not None: if f is not IGNORE_FIELD and f is not None:
f = bytes(f, 'utf-8') ok(f in body, 'HTML should contain "%s"' % (f, ))
assert f in body, 'HTML should contain "%s"' % (f, )
def _website_expected_redirect_response(res, status, reason, new_url): def _website_expected_redirect_response(res, status, reason, new_url):
body = res.read() body = res.read()
print(body) print(body)
__website_expected_reponse_status(res, status, reason) __website_expected_reponse_status(res, status, reason)
loc = res.getheader('Location', None) loc = res.getheader('Location', None)
assert loc == new_url, 'Location header should be set "%s" != "%s"' % (loc,new_url,) eq(loc, new_url, 'Location header should be set "%s" != "%s"' % (loc,new_url,))
assert len(body) == 0, 'Body of a redirect should be empty' ok(len(body) == 0, 'Body of a redirect should be empty')
def _website_request(bucket_name, path, connect_hostname=None, method='GET', timeout=None): def _website_request(bucket_name, path, connect_hostname=None, method='GET', timeout=None):
url = get_website_url(proto='http', bucket=bucket_name, path=path) url = get_website_url(proto='http', bucket=bucket_name, path=path)
@ -224,23 +235,33 @@ def _website_request(bucket_name, path, connect_hostname=None, method='GET', tim
request_headers={} request_headers={}
request_headers['Host'] = o.hostname request_headers['Host'] = o.hostname
request_headers['Accept'] = '*/*' request_headers['Accept'] = '*/*'
print('Request: {method} {path}\n{headers}'.format(method=method, path=path, headers=''.join([t[0]+':'+t[1]+"\n" for t in list(request_headers.items())]))) print('Request: {method} {path}\n{headers}'.format(method=method, path=path, headers=''.join(map(lambda t: t[0]+':'+t[1]+"\n", request_headers.items()))))
res = _make_raw_request(connect_hostname, config.main.port, method, path, request_headers=request_headers, secure=False, timeout=timeout) res = _make_raw_request(connect_hostname, config.main.port, method, path, request_headers=request_headers, secure=False, timeout=timeout)
for (k,v) in res.getheaders(): for (k,v) in res.getheaders():
print(k,v) print(k,v)
return res return res
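_website_request talks to the website endpoint as plain HTTP against the configured RGW port, carrying the bucket's website hostname in the Host header (plus Accept: */*), because S3 static-website routing is virtual-host based. A stripped-down equivalent with hypothetical host and bucket values (the tests derive these from get_website_url() and the config):

import http.client

conn = http.client.HTTPConnection('localhost', 8000)   # hypothetical RGW endpoint
headers = {'Host': 'mybucket.s3-website.example.com', 'Accept': '*/*'}
conn.request('GET', '/', headers=headers)
resp = conn.getresponse()
print(resp.status, resp.reason)   # e.g. 200 OK, or 404/403 for missing or private content
print(resp.read()[:200])          # index document, or the S3 HTML error page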
# ---------- Non-existant buckets via the website endpoint # ---------- Non-existant buckets via the website endpoint
@pytest.mark.s3website @attr(resource='bucket')
@pytest.mark.fails_on_rgw @attr(method='get')
@attr(operation='list')
@attr(assertion='non-existant bucket via website endpoint should give NoSuchBucket, exposing security risk')
@attr('s3website')
@attr('fails_on_rgw')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
def test_website_nonexistant_bucket_s3(): def test_website_nonexistant_bucket_s3():
bucket_name = get_new_bucket_name() bucket_name = get_new_bucket_name()
res = _website_request(bucket_name, '') res = _website_request(bucket_name, '')
_website_expected_error_response(res, bucket_name, 404, 'Not Found', 'NoSuchBucket', content=_website_expected_default_html(Code='NoSuchBucket')) _website_expected_error_response(res, bucket_name, 404, 'Not Found', 'NoSuchBucket', content=_website_expected_default_html(Code='NoSuchBucket'))
@pytest.mark.s3website @attr(resource='bucket')
@pytest.mark.fails_on_s3 @attr(method='get')
@pytest.mark.fails_on_dbstore @attr(operation='list')
#@attr(assertion='non-existant bucket via website endpoint should give Forbidden, keeping bucket identity secure')
@attr(assertion='non-existant bucket via website endpoint should give NoSuchBucket')
@attr('s3website')
@attr('fails_on_s3')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
def test_website_nonexistant_bucket_rgw(): def test_website_nonexistant_bucket_rgw():
bucket_name = get_new_bucket_name() bucket_name = get_new_bucket_name()
res = _website_request(bucket_name, '') res = _website_request(bucket_name, '')
@ -248,9 +269,13 @@ def test_website_nonexistant_bucket_rgw():
_website_expected_error_response(res, bucket_name, 404, 'Not Found', 'NoSuchBucket', content=_website_expected_default_html(Code='NoSuchBucket')) _website_expected_error_response(res, bucket_name, 404, 'Not Found', 'NoSuchBucket', content=_website_expected_default_html(Code='NoSuchBucket'))
#------------- IndexDocument only, successes #------------- IndexDocument only, successes
@pytest.mark.s3website @attr(resource='bucket')
@pytest.mark.fails_on_dbstore @attr(method='get')
@pytest.mark.timeout(10) @attr(operation='list')
@attr(assertion='non-empty public buckets via s3website return page for /, where page is public')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@timed(10)
def test_website_public_bucket_list_public_index(): def test_website_public_bucket_list_public_index():
bucket = get_new_bucket() bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDoc']) f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDoc'])
@ -266,14 +291,17 @@ def test_website_public_bucket_list_public_index():
res = _website_request(bucket.name, '') res = _website_request(bucket.name, '')
body = res.read() body = res.read()
print(body) print(body)
indexstring = bytes(indexstring, 'utf-8') eq(body, indexstring) # default content should match index.html set content
assert body == indexstring # default content should match index.html set content
__website_expected_reponse_status(res, 200, 'OK') __website_expected_reponse_status(res, 200, 'OK')
indexhtml.delete() indexhtml.delete()
bucket.delete() bucket.delete()
@pytest.mark.s3website @attr(resource='bucket')
@pytest.mark.fails_on_dbstore @attr(method='get')
@attr(operation='list')
@attr(assertion='non-empty private buckets via s3website return page for /, where page is private')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
def test_website_private_bucket_list_public_index(): def test_website_private_bucket_list_public_index():
bucket = get_new_bucket() bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDoc']) f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDoc'])
@ -291,15 +319,18 @@ def test_website_private_bucket_list_public_index():
__website_expected_reponse_status(res, 200, 'OK') __website_expected_reponse_status(res, 200, 'OK')
body = res.read() body = res.read()
print(body) print(body)
indexstring = bytes(indexstring, 'utf-8') eq(body, indexstring, 'default content should match index.html set content')
assert body == indexstring, 'default content should match index.html set content'
indexhtml.delete() indexhtml.delete()
bucket.delete() bucket.delete()
# ---------- IndexDocument only, failures # ---------- IndexDocument only, failures
@pytest.mark.s3website @attr(resource='bucket')
@pytest.mark.fails_on_dbstore @attr(method='get')
@attr(operation='list')
@attr(assertion='empty private buckets via s3website return a 403 for /')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
def test_website_private_bucket_list_empty(): def test_website_private_bucket_list_empty():
bucket = get_new_bucket() bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDoc']) f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDoc'])
@ -310,8 +341,12 @@ def test_website_private_bucket_list_empty():
_website_expected_error_response(res, bucket.name, 403, 'Forbidden', 'AccessDenied', content=_website_expected_default_html(Code='AccessDenied')) _website_expected_error_response(res, bucket.name, 403, 'Forbidden', 'AccessDenied', content=_website_expected_default_html(Code='AccessDenied'))
bucket.delete() bucket.delete()
@pytest.mark.s3website @attr(resource='bucket')
@pytest.mark.fails_on_dbstore @attr(method='get')
@attr(operation='list')
@attr(assertion='empty public buckets via s3website return a 404 for /')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
def test_website_public_bucket_list_empty(): def test_website_public_bucket_list_empty():
bucket = get_new_bucket() bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDoc']) f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDoc'])
@ -321,8 +356,12 @@ def test_website_public_bucket_list_empty():
_website_expected_error_response(res, bucket.name, 404, 'Not Found', 'NoSuchKey', content=_website_expected_default_html(Code='NoSuchKey')) _website_expected_error_response(res, bucket.name, 404, 'Not Found', 'NoSuchKey', content=_website_expected_default_html(Code='NoSuchKey'))
bucket.delete() bucket.delete()
@pytest.mark.s3website @attr(resource='bucket')
@pytest.mark.fails_on_dbstore @attr(method='get')
@attr(operation='list')
@attr(assertion='non-empty public buckets via s3website return page for /, where page is private')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
def test_website_public_bucket_list_private_index(): def test_website_public_bucket_list_private_index():
bucket = get_new_bucket() bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDoc']) f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDoc'])
@ -342,8 +381,12 @@ def test_website_public_bucket_list_private_index():
indexhtml.delete() indexhtml.delete()
bucket.delete() bucket.delete()
@pytest.mark.s3website @attr(resource='bucket')
@pytest.mark.fails_on_dbstore @attr(method='get')
@attr(operation='list')
@attr(assertion='non-empty private buckets via s3website return page for /, where page is private')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
def test_website_private_bucket_list_private_index(): def test_website_private_bucket_list_private_index():
bucket = get_new_bucket() bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDoc']) f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDoc'])
@ -364,8 +407,12 @@ def test_website_private_bucket_list_private_index():
bucket.delete() bucket.delete()
# ---------- IndexDocument & ErrorDocument, failures due to errordoc assigned but missing # ---------- IndexDocument & ErrorDocument, failures due to errordoc assigned but missing
@pytest.mark.s3website @attr(resource='bucket')
@pytest.mark.fails_on_dbstore @attr(method='get')
@attr(operation='list')
@attr(assertion='empty private buckets via s3website return a 403 for /, missing errordoc')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
def test_website_private_bucket_list_empty_missingerrordoc(): def test_website_private_bucket_list_empty_missingerrordoc():
bucket = get_new_bucket() bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDocErrorDoc']) f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDocErrorDoc'])
@ -376,8 +423,12 @@ def test_website_private_bucket_list_empty_missingerrordoc():
bucket.delete() bucket.delete()
@pytest.mark.s3website @attr(resource='bucket')
@pytest.mark.fails_on_dbstore @attr(method='get')
@attr(operation='list')
@attr(assertion='empty public buckets via s3website return a 404 for /, missing errordoc')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
def test_website_public_bucket_list_empty_missingerrordoc(): def test_website_public_bucket_list_empty_missingerrordoc():
bucket = get_new_bucket() bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDocErrorDoc']) f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDocErrorDoc'])
@ -387,8 +438,12 @@ def test_website_public_bucket_list_empty_missingerrordoc():
_website_expected_error_response(res, bucket.name, 404, 'Not Found', 'NoSuchKey') _website_expected_error_response(res, bucket.name, 404, 'Not Found', 'NoSuchKey')
bucket.delete() bucket.delete()
@pytest.mark.s3website @attr(resource='bucket')
@pytest.mark.fails_on_dbstore @attr(method='get')
@attr(operation='list')
@attr(assertion='non-empty public buckets via s3website return page for /, where page is private, missing errordoc')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
def test_website_public_bucket_list_private_index_missingerrordoc(): def test_website_public_bucket_list_private_index_missingerrordoc():
bucket = get_new_bucket() bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDocErrorDoc']) f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDocErrorDoc'])
@ -407,8 +462,12 @@ def test_website_public_bucket_list_private_index_missingerrordoc():
indexhtml.delete() indexhtml.delete()
bucket.delete() bucket.delete()
@pytest.mark.s3website @attr(resource='bucket')
@pytest.mark.fails_on_dbstore @attr(method='get')
@attr(operation='list')
@attr(assertion='non-empty private buckets via s3website return page for /, where page is private, missing errordoc')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
def test_website_private_bucket_list_private_index_missingerrordoc(): def test_website_private_bucket_list_private_index_missingerrordoc():
bucket = get_new_bucket() bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDocErrorDoc']) f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDocErrorDoc'])
@ -428,8 +487,12 @@ def test_website_private_bucket_list_private_index_missingerrordoc():
bucket.delete() bucket.delete()
# ---------- IndexDocument & ErrorDocument, failures due to errordoc assigned but not accessible # ---------- IndexDocument & ErrorDocument, failures due to errordoc assigned but not accessible
@pytest.mark.s3website @attr(resource='bucket')
@pytest.mark.fails_on_dbstore @attr(method='get')
@attr(operation='list')
@attr(assertion='empty private buckets via s3website return a 403 for /, blocked errordoc')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
def test_website_private_bucket_list_empty_blockederrordoc(): def test_website_private_bucket_list_empty_blockederrordoc():
bucket = get_new_bucket() bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDocErrorDoc']) f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDocErrorDoc'])
@ -446,61 +509,17 @@ def test_website_private_bucket_list_empty_blockederrordoc():
body = res.read() body = res.read()
print(body) print(body)
_website_expected_error_response(res, bucket.name, 403, 'Forbidden', 'AccessDenied', content=_website_expected_default_html(Code='AccessDenied'), body=body) _website_expected_error_response(res, bucket.name, 403, 'Forbidden', 'AccessDenied', content=_website_expected_default_html(Code='AccessDenied'), body=body)
errorstring = bytes(errorstring, 'utf-8') ok(errorstring not in body, 'error content should NOT match error.html set content')
assert errorstring not in body, 'error content should NOT match error.html set content'
errorhtml.delete() errorhtml.delete()
bucket.delete() bucket.delete()
@pytest.mark.s3website @attr(resource='bucket')
@pytest.mark.fails_on_dbstore @attr(method='get')
def test_website_public_bucket_list_pubilc_errordoc(): @attr(operation='list')
bucket = get_new_bucket() @attr(assertion='empty public buckets via s3website return a 404 for /, blocked errordoc')
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDocErrorDoc']) @attr('s3website')
bucket.make_public() @nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
errorhtml = bucket.new_key(f['ErrorDocument_Key'])
errorstring = choose_bucket_prefix(template=ERRORDOC_TEMPLATE, max_len=256)
errorhtml.set_contents_from_string(errorstring)
errorhtml.set_canned_acl('public-read')
url = get_website_url(proto='http', bucket=bucket.name, path='')
o = urlparse(url)
host = o.hostname
port = s3.main.port
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect((host, port))
request = "GET / HTTP/1.1\r\nHost:%s.%s:%s\r\n\r\n" % (bucket.name, host, port)
sock.send(request.encode())
#receive header
resp = sock.recv(4096)
print(resp)
#receive body
resp = sock.recv(4096)
print('payload length=%d' % len(resp))
print(resp)
#check if any additional payload is left
resp_len = 0
sock.settimeout(2)
try:
resp = sock.recv(4096)
resp_len = len(resp)
print('invalid payload length=%d' % resp_len)
print(resp)
except socket.timeout:
print('no invalid payload')
assert resp_len == 0, 'invalid payload'
errorhtml.delete()
bucket.delete()
@pytest.mark.s3website
@pytest.mark.fails_on_dbstore
def test_website_public_bucket_list_empty_blockederrordoc(): def test_website_public_bucket_list_empty_blockederrordoc():
bucket = get_new_bucket() bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDocErrorDoc']) f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDocErrorDoc'])
@ -516,14 +535,17 @@ def test_website_public_bucket_list_empty_blockederrordoc():
body = res.read() body = res.read()
print(body) print(body)
_website_expected_error_response(res, bucket.name, 404, 'Not Found', 'NoSuchKey', content=_website_expected_default_html(Code='NoSuchKey'), body=body) _website_expected_error_response(res, bucket.name, 404, 'Not Found', 'NoSuchKey', content=_website_expected_default_html(Code='NoSuchKey'), body=body)
errorstring = bytes(errorstring, 'utf-8') ok(errorstring not in body, 'error content should match error.html set content')
assert errorstring not in body, 'error content should match error.html set content'
errorhtml.delete() errorhtml.delete()
bucket.delete() bucket.delete()
@pytest.mark.s3website @attr(resource='bucket')
@pytest.mark.fails_on_dbstore @attr(method='get')
@attr(operation='list')
@attr(assertion='non-empty public buckets via s3website return page for /, where page is private, blocked errordoc')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
def test_website_public_bucket_list_private_index_blockederrordoc(): def test_website_public_bucket_list_private_index_blockederrordoc():
bucket = get_new_bucket() bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDocErrorDoc']) f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDocErrorDoc'])
@@ -544,15 +566,18 @@ def test_website_public_bucket_list_private_index_blockederrordoc():
     body = res.read()
     print(body)
     _website_expected_error_response(res, bucket.name, 403, 'Forbidden', 'AccessDenied', content=_website_expected_default_html(Code='AccessDenied'), body=body)
-    errorstring = bytes(errorstring, 'utf-8')
-    assert errorstring not in body, 'error content should match error.html set content'
+    ok(errorstring not in body, 'error content should match error.html set content')
     indexhtml.delete()
     errorhtml.delete()
     bucket.delete()
-@pytest.mark.s3website
-@pytest.mark.fails_on_dbstore
+@attr(resource='bucket')
+@attr(method='get')
+@attr(operation='list')
+@attr(assertion='non-empty private buckets via s3website return page for /, where page is private, blocked errordoc')
+@attr('s3website')
+@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
 def test_website_private_bucket_list_private_index_blockederrordoc():
     bucket = get_new_bucket()
     f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDocErrorDoc'])
@@ -573,16 +598,19 @@ def test_website_private_bucket_list_private_index_blockederrordoc():
     body = res.read()
     print(body)
     _website_expected_error_response(res, bucket.name, 403, 'Forbidden', 'AccessDenied', content=_website_expected_default_html(Code='AccessDenied'), body=body)
-    errorstring = bytes(errorstring, 'utf-8')
-    assert errorstring not in body, 'error content should match error.html set content'
+    ok(errorstring not in body, 'error content should match error.html set content')
     indexhtml.delete()
     errorhtml.delete()
     bucket.delete()
 # ---------- IndexDocument & ErrorDocument, failures with errordoc available
-@pytest.mark.s3website
-@pytest.mark.fails_on_dbstore
+@attr(resource='bucket')
+@attr(method='get')
+@attr(operation='list')
+@attr(assertion='empty private buckets via s3website return a 403 for /, good errordoc')
+@attr('s3website')
+@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
 def test_website_private_bucket_list_empty_gooderrordoc():
     bucket = get_new_bucket()
     f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDocErrorDoc'])
@@ -600,8 +628,12 @@ def test_website_private_bucket_list_empty_gooderrordoc():
     errorhtml.delete()
     bucket.delete()
-@pytest.mark.s3website
-@pytest.mark.fails_on_dbstore
+@attr(resource='bucket')
+@attr(method='get')
+@attr(operation='list')
+@attr(assertion='empty public buckets via s3website return a 404 for /, good errordoc')
+@attr('s3website')
+@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
 def test_website_public_bucket_list_empty_gooderrordoc():
     bucket = get_new_bucket()
     f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDocErrorDoc'])
@@ -620,8 +652,12 @@ def test_website_public_bucket_list_empty_gooderrordoc():
     errorhtml.delete()
     bucket.delete()
-@pytest.mark.s3website
-@pytest.mark.fails_on_dbstore
+@attr(resource='bucket')
+@attr(method='get')
+@attr(operation='list')
+@attr(assertion='non-empty public buckets via s3website return page for /, where page is private')
+@attr('s3website')
+@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
 def test_website_public_bucket_list_private_index_gooderrordoc():
     bucket = get_new_bucket()
     f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDocErrorDoc'])
@@ -645,8 +681,12 @@ def test_website_public_bucket_list_private_index_gooderrordoc():
     errorhtml.delete()
     bucket.delete()
-@pytest.mark.s3website
-@pytest.mark.fails_on_dbstore
+@attr(resource='bucket')
+@attr(method='get')
+@attr(operation='list')
+@attr(assertion='non-empty private buckets via s3website return page for /, where page is private')
+@attr('s3website')
+@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
 def test_website_private_bucket_list_private_index_gooderrordoc():
     bucket = get_new_bucket()
     f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDocErrorDoc'])
@@ -671,8 +711,12 @@ def test_website_private_bucket_list_private_index_gooderrordoc():
     bucket.delete()
 # ------ RedirectAll tests
-@pytest.mark.s3website
-@pytest.mark.fails_on_dbstore
+@attr(resource='bucket')
+@attr(method='get')
+@attr(operation='list')
+@attr(assertion='RedirectAllRequestsTo without protocol should TODO')
+@attr('s3website')
+@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
 def test_website_bucket_private_redirectall_base():
     bucket = get_new_bucket()
     f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['RedirectAll'])
@@ -684,8 +728,12 @@ def test_website_bucket_private_redirectall_base():
     bucket.delete()
-@pytest.mark.s3website
-@pytest.mark.fails_on_dbstore
+@attr(resource='bucket')
+@attr(method='get')
+@attr(operation='list')
+@attr(assertion='RedirectAllRequestsTo without protocol should TODO')
+@attr('s3website')
+@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
 def test_website_bucket_private_redirectall_path():
     bucket = get_new_bucket()
     f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['RedirectAll'])
@@ -699,8 +747,12 @@ def test_website_bucket_private_redirectall_path():
     bucket.delete()
-@pytest.mark.s3website
-@pytest.mark.fails_on_dbstore
+@attr(resource='bucket')
+@attr(method='get')
+@attr(operation='list')
+@attr(assertion='RedirectAllRequestsTo without protocol should TODO')
+@attr('s3website')
+@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
 def test_website_bucket_private_redirectall_path_upgrade():
     bucket = get_new_bucket()
     x = string.Template(WEBSITE_CONFIGS_XMLFRAG['RedirectAll+Protocol']).safe_substitute(RedirectAllRequestsTo_Protocol='https')
@@ -716,9 +768,13 @@ def test_website_bucket_private_redirectall_path_upgrade():
     bucket.delete()
 # ------ x-amz redirect tests
-@pytest.mark.s3website
-@pytest.mark.s3website_redirect_location
-@pytest.mark.fails_on_dbstore
+@attr(resource='bucket')
+@attr(method='get')
+@attr(operation='list')
+@attr(assertion='x-amz-website-redirect-location should not fire without websiteconf')
+@attr('s3website')
+@attr('x-amz-website-redirect-location')
+@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
 def test_website_xredirect_nonwebsite():
     bucket = get_new_bucket()
     #f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['RedirectAll'])
@@ -730,7 +786,7 @@ def test_website_xredirect_nonwebsite():
     headers = {'x-amz-website-redirect-location': redirect_dest}
     k.set_contents_from_string(content, headers=headers, policy='public-read')
     redirect = k.get_redirect()
-    assert k.get_redirect() == redirect_dest
+    eq(k.get_redirect(), redirect_dest)
     res = _website_request(bucket.name, '/page')
     body = res.read()
@@ -744,9 +800,13 @@ def test_website_xredirect_nonwebsite():
     k.delete()
     bucket.delete()
-@pytest.mark.s3website
-@pytest.mark.s3website_redirect_location
-@pytest.mark.fails_on_dbstore
+@attr(resource='bucket')
+@attr(method='get')
+@attr(operation='list')
+@attr(assertion='x-amz-website-redirect-location should fire websiteconf, relative path, public key')
+@attr('s3website')
+@attr('x-amz-website-redirect-location')
+@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
 def test_website_xredirect_public_relative():
     bucket = get_new_bucket()
     f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDoc'])
@@ -758,7 +818,7 @@ def test_website_xredirect_public_relative():
     headers = {'x-amz-website-redirect-location': redirect_dest}
     k.set_contents_from_string(content, headers=headers, policy='public-read')
     redirect = k.get_redirect()
-    assert k.get_redirect() == redirect_dest
+    eq(k.get_redirect(), redirect_dest)
     res = _website_request(bucket.name, '/page')
     #new_url = get_website_url(bucket_name=bucket.name, path=redirect_dest)
@@ -767,9 +827,13 @@ def test_website_xredirect_public_relative():
     k.delete()
     bucket.delete()
-@pytest.mark.s3website
-@pytest.mark.s3website_redirect_location
-@pytest.mark.fails_on_dbstore
+@attr(resource='bucket')
+@attr(method='get')
+@attr(operation='list')
+@attr(assertion='x-amz-website-redirect-location should fire websiteconf, absolute, public key')
+@attr('s3website')
+@attr('x-amz-website-redirect-location')
+@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
 def test_website_xredirect_public_abs():
     bucket = get_new_bucket()
     f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDoc'])
@@ -781,7 +845,7 @@ def test_website_xredirect_public_abs():
     headers = {'x-amz-website-redirect-location': redirect_dest}
     k.set_contents_from_string(content, headers=headers, policy='public-read')
     redirect = k.get_redirect()
-    assert k.get_redirect() == redirect_dest
+    eq(k.get_redirect(), redirect_dest)
     res = _website_request(bucket.name, '/page')
     new_url = get_website_url(proto='http', hostname='example.com', path='/foo')
@@ -790,9 +854,13 @@ def test_website_xredirect_public_abs():
     k.delete()
     bucket.delete()
-@pytest.mark.s3website
-@pytest.mark.s3website_redirect_location
-@pytest.mark.fails_on_dbstore
+@attr(resource='bucket')
+@attr(method='get')
+@attr(operation='list')
+@attr(assertion='x-amz-website-redirect-location should fire websiteconf, relative path, private key')
+@attr('s3website')
+@attr('x-amz-website-redirect-location')
+@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
 def test_website_xredirect_private_relative():
     bucket = get_new_bucket()
     f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDoc'])
@@ -804,7 +872,7 @@ def test_website_xredirect_private_relative():
     headers = {'x-amz-website-redirect-location': redirect_dest}
     k.set_contents_from_string(content, headers=headers, policy='private')
     redirect = k.get_redirect()
-    assert k.get_redirect() == redirect_dest
+    eq(k.get_redirect(), redirect_dest)
     res = _website_request(bucket.name, '/page')
     # We get a 403 because the page is private
@@ -813,9 +881,13 @@ def test_website_xredirect_private_relative():
     k.delete()
     bucket.delete()
-@pytest.mark.s3website
-@pytest.mark.s3website_redirect_location
-@pytest.mark.fails_on_dbstore
+@attr(resource='bucket')
+@attr(method='get')
+@attr(operation='list')
+@attr(assertion='x-amz-website-redirect-location should fire websiteconf, absolute, private key')
+@attr('s3website')
+@attr('x-amz-website-redirect-location')
+@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
 def test_website_xredirect_private_abs():
     bucket = get_new_bucket()
     f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDoc'])
@@ -827,7 +899,7 @@ def test_website_xredirect_private_abs():
     headers = {'x-amz-website-redirect-location': redirect_dest}
     k.set_contents_from_string(content, headers=headers, policy='private')
     redirect = k.get_redirect()
-    assert k.get_redirect() == redirect_dest
+    eq(k.get_redirect(), redirect_dest)
     res = _website_request(bucket.name, '/page')
     new_url = get_website_url(proto='http', hostname='example.com', path='/foo')
@@ -939,7 +1011,7 @@ ROUTING_RULES = {
     """,
 }
-for k in list(ROUTING_RULES.keys()):
+for k in ROUTING_RULES.keys():
     if len(ROUTING_RULES[k]) > 0:
         ROUTING_RULES[k] = "<!-- %s -->\n%s" % (k, ROUTING_RULES[k])
@@ -1040,6 +1112,8 @@ def routing_teardown(**kwargs):
         print('Deleting', str(o))
         o.delete()
+@common.with_setup_kwargs(setup=routing_setup, teardown=routing_teardown)
+#@timed(10)
 def routing_check(*args, **kwargs):
     bucket = kwargs['bucket']
     args=args[0]
@@ -1065,8 +1139,8 @@ def routing_check(*args, **kwargs):
     if args['code'] >= 200 and args['code'] < 300:
         #body = res.read()
         #print(body)
-        #assert body == args['content'], 'default content should match index.html set content'
-        assert int(res.getheader('Content-Length', -1)) > 0
+        #eq(body, args['content'], 'default content should match index.html set content')
+        ok(res.getheader('Content-Length', -1) > 0)
     elif args['code'] >= 300 and args['code'] < 400:
         _website_expected_redirect_response(res, args['code'], IGNORE_FIELD, new_url)
     elif args['code'] >= 400:
@@ -1074,9 +1148,9 @@ def routing_check(*args, **kwargs):
     else:
         assert(False)
-@pytest.mark.s3website_routing_rules
-@pytest.mark.s3website
-@pytest.mark.fails_on_dbstore
+@attr('s3website_RoutingRules')
+@attr('s3website')
+@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
 def test_routing_generator():
     for t in ROUTING_RULES_TESTS:
         if 'xml' in t and 'RoutingRules' in t['xml'] and len(t['xml']['RoutingRules']) > 0:
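
The hunks above repeat one mechanical conversion: on one side of the compare each test carries nose-style metadata (@attr(...) decorators plus @nose.with_setup(...)) and uses the eq()/ok() helpers, while the other side expresses the same test with pytest markers and plain assert statements, setup and teardown being left to fixtures instead of nose.with_setup. A minimal sketch of that mapping on a hypothetical test, with illustrative names that do not appear in the diff:

# nose style
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
def test_example():
    ok(res.status == 200, 'expected 200')

# pytest style
@pytest.mark.s3website
def test_example():
    assert res.status == 200, 'expected 200'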


@@ -1,9 +1,11 @@
-from . import utils
+from nose.tools import eq_ as eq
+import utils
 def test_generate():
     FIVE_MB = 5 * 1024 * 1024
-    assert len(''.join(utils.generate_random(0))) == 0
-    assert len(''.join(utils.generate_random(1))) == 1
-    assert len(''.join(utils.generate_random(FIVE_MB - 1))) == FIVE_MB - 1
-    assert len(''.join(utils.generate_random(FIVE_MB))) == FIVE_MB
-    assert len(''.join(utils.generate_random(FIVE_MB + 1))) == FIVE_MB + 1
+    eq(len(''.join(utils.generate_random(0))), 0)
+    eq(len(''.join(utils.generate_random(1))), 1)
+    eq(len(''.join(utils.generate_random(FIVE_MB - 1))), FIVE_MB - 1)
+    eq(len(''.join(utils.generate_random(FIVE_MB))), FIVE_MB)
+    eq(len(''.join(utils.generate_random(FIVE_MB + 1))), FIVE_MB + 1)


@@ -3,6 +3,8 @@ import requests
 import string
 import time
+from nose.tools import eq_ as eq
 def assert_raises(excClass, callableObj, *args, **kwargs):
     """
     Like unittest.TestCase.assertRaises, but returns the exception.
@@ -26,11 +28,11 @@ def generate_random(size, part_size=5*1024*1024):
     chunk = 1024
     allowed = string.ascii_letters
     for x in range(0, size, part_size):
-        strpart = ''.join([allowed[random.randint(0, len(allowed) - 1)] for _ in range(chunk)])
+        strpart = ''.join([allowed[random.randint(0, len(allowed) - 1)] for _ in xrange(chunk)])
         s = ''
         left = size - x
         this_part_size = min(left, part_size)
-        for y in range(this_part_size // chunk):
+        for y in range(this_part_size / chunk):
             s = s + strpart
         s = s + strpart[:(this_part_size % chunk)]
         yield s
@@ -40,13 +42,13 @@ def generate_random(size, part_size=5*1024*1024):
 # syncs all the regions except for the one passed in
 def region_sync_meta(targets, region):
-    for (k, r) in targets.items():
+    for (k, r) in targets.iteritems():
         if r == region:
             continue
         conf = r.conf
         if conf.sync_agent_addr:
             ret = requests.post('http://{addr}:{port}/metadata/incremental'.format(addr = conf.sync_agent_addr, port = conf.sync_agent_port))
-            assert ret.status_code == 200
+            eq(ret.status_code, 200)
         if conf.sync_meta_wait:
             time.sleep(conf.sync_meta_wait)
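
Besides the assertion helper, the changes in this file are the usual Python 2 versus Python 3 differences: items() versus iteritems(), range() versus xrange(), and floor division. The // in generate_random matters because range() only accepts integers; a rough illustration of the Python 3 semantics (not part of the diff):

# Python 3 division semantics
10 / 3     # 3.333... (float); range(10 / 3) raises TypeError
10 // 3    # 3 (int);  range(10 // 3) == range(3)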

s3tests/fuzz/__init__.py Normal file
s3tests/fuzz/headers.py Normal file

@@ -0,0 +1,376 @@
from boto.s3.connection import S3Connection
from boto.exception import BotoServerError
from boto.s3.key import Key
from httplib import BadStatusLine
from optparse import OptionParser
from .. import common
import traceback
import itertools
import random
import string
import struct
import yaml
import sys
import re
class DecisionGraphError(Exception):
""" Raised when a node in a graph tries to set a header or
key that was previously set by another node
"""
def __init__(self, value):
self.value = value
def __str__(self):
return repr(self.value)
class RecursionError(Exception):
"""Runaway recursion in string formatting"""
def __init__(self, msg):
self.msg = msg
def __str__(self):
return '{0.__doc__}: {0.msg!r}'.format(self)
def assemble_decision(decision_graph, prng):
""" Take in a graph describing the possible decision space and a random
number generator and traverse the graph to build a decision
"""
return descend_graph(decision_graph, 'start', prng)
def descend_graph(decision_graph, node_name, prng):
""" Given a graph and a particular node in that graph, set the values in
the node's "set" list, pick a choice from the "choice" list, and
recurse. Finally, return dictionary of values
"""
node = decision_graph[node_name]
try:
choice = make_choice(node['choices'], prng)
if choice == '':
decision = {}
else:
decision = descend_graph(decision_graph, choice, prng)
except IndexError:
decision = {}
for key, choices in node['set'].iteritems():
if key in decision:
raise DecisionGraphError("Node %s tried to set '%s', but that key was already set by a lower node!" %(node_name, key))
decision[key] = make_choice(choices, prng)
if 'headers' in node:
decision.setdefault('headers', [])
for desc in node['headers']:
try:
(repetition_range, header, value) = desc
except ValueError:
(header, value) = desc
repetition_range = '1'
try:
size_min, size_max = repetition_range.split('-', 1)
except ValueError:
size_min = size_max = repetition_range
size_min = int(size_min)
size_max = int(size_max)
num_reps = prng.randint(size_min, size_max)
if header in [h for h, v in decision['headers']]:
raise DecisionGraphError("Node %s tried to add header '%s', but that header already exists!" %(node_name, header))
for _ in xrange(num_reps):
decision['headers'].append([header, value])
return decision
def make_choice(choices, prng):
""" Given a list of (possibly weighted) options or just a single option!,
choose one of the options taking weights into account and return the
choice
"""
if isinstance(choices, str):
return choices
weighted_choices = []
for option in choices:
if option is None:
weighted_choices.append('')
continue
try:
(weight, value) = option.split(None, 1)
weight = int(weight)
except ValueError:
weight = 1
value = option
if value == 'null' or value == 'None':
value = ''
for _ in xrange(weight):
weighted_choices.append(value)
return prng.choice(weighted_choices)
def expand_headers(decision, prng):
expanded_headers = {}
for header in decision['headers']:
h = expand(decision, header[0], prng)
v = expand(decision, header[1], prng)
expanded_headers[h] = v
return expanded_headers
def expand(decision, value, prng):
c = itertools.count()
fmt = RepeatExpandingFormatter(prng)
new = fmt.vformat(value, [], decision)
return new
class RepeatExpandingFormatter(string.Formatter):
charsets = {
'printable_no_whitespace': string.printable.translate(None, string.whitespace),
'printable': string.printable,
'punctuation': string.punctuation,
'whitespace': string.whitespace,
'digits': string.digits
}
def __init__(self, prng, _recursion=0):
super(RepeatExpandingFormatter, self).__init__()
# this class assumes it is always instantiated once per
# formatting; use that to detect runaway recursion
self.prng = prng
self._recursion = _recursion
def get_value(self, key, args, kwargs):
fields = key.split(None, 1)
fn = getattr(self, 'special_{name}'.format(name=fields[0]), None)
if fn is not None:
if len(fields) == 1:
fields.append('')
return fn(fields[1])
val = super(RepeatExpandingFormatter, self).get_value(key, args, kwargs)
if self._recursion > 5:
raise RecursionError(key)
fmt = self.__class__(self.prng, _recursion=self._recursion+1)
n = fmt.vformat(val, args, kwargs)
return n
def special_random(self, args):
arg_list = args.split()
try:
size_min, size_max = arg_list[0].split('-', 1)
except ValueError:
size_min = size_max = arg_list[0]
except IndexError:
size_min = '0'
size_max = '1000'
size_min = int(size_min)
size_max = int(size_max)
length = self.prng.randint(size_min, size_max)
try:
charset_arg = arg_list[1]
except IndexError:
charset_arg = 'printable'
if charset_arg == 'binary' or charset_arg == 'binary_no_whitespace':
num_bytes = length + 8
tmplist = [self.prng.getrandbits(64) for _ in xrange(num_bytes / 8)]
tmpstring = struct.pack((num_bytes / 8) * 'Q', *tmplist)
if charset_arg == 'binary_no_whitespace':
tmpstring = ''.join(c for c in tmpstring if c not in string.whitespace)
return tmpstring[0:length]
else:
charset = self.charsets[charset_arg]
return ''.join([self.prng.choice(charset) for _ in xrange(length)]) # Won't scale nicely
def parse_options():
parser = OptionParser()
parser.add_option('-O', '--outfile', help='write output to FILE. Defaults to STDOUT', metavar='FILE')
parser.add_option('--seed', dest='seed', type='int', help='initial seed for the random number generator')
parser.add_option('--seed-file', dest='seedfile', help='read seeds for specific requests from FILE', metavar='FILE')
parser.add_option('-n', dest='num_requests', type='int', help='issue NUM requests before stopping', metavar='NUM')
parser.add_option('-v', '--verbose', dest='verbose', action="store_true", help='turn on verbose output')
parser.add_option('-d', '--debug', dest='debug', action="store_true", help='turn on debugging (very verbose) output')
parser.add_option('--decision-graph', dest='graph_filename', help='file in which to find the request decision graph')
parser.add_option('--no-cleanup', dest='cleanup', action="store_false", help='turn off teardown so you can peruse the state of buckets after testing')
parser.set_defaults(num_requests=5)
parser.set_defaults(cleanup=True)
parser.set_defaults(graph_filename='request_decision_graph.yml')
return parser.parse_args()
def randomlist(seed=None):
""" Returns an infinite generator of random numbers
"""
rng = random.Random(seed)
while True:
yield rng.randint(0,100000) #100,000 seeds is enough, right?
def populate_buckets(conn, alt):
""" Creates buckets and keys for fuzz testing and sets appropriate
permissions. Returns a dictionary of the bucket and key names.
"""
breadable = common.get_new_bucket(alt)
bwritable = common.get_new_bucket(alt)
bnonreadable = common.get_new_bucket(alt)
oreadable = Key(breadable)
owritable = Key(bwritable)
ononreadable = Key(breadable)
oreadable.set_contents_from_string('oreadable body')
owritable.set_contents_from_string('owritable body')
ononreadable.set_contents_from_string('ononreadable body')
breadable.set_acl('public-read')
bwritable.set_acl('public-read-write')
bnonreadable.set_acl('private')
oreadable.set_acl('public-read')
owritable.set_acl('public-read-write')
ononreadable.set_acl('private')
return dict(
bucket_readable=breadable.name,
bucket_writable=bwritable.name,
bucket_not_readable=bnonreadable.name,
bucket_not_writable=breadable.name,
object_readable=oreadable.key,
object_writable=owritable.key,
object_not_readable=ononreadable.key,
object_not_writable=oreadable.key,
)
def _main():
""" The main script
"""
(options, args) = parse_options()
random.seed(options.seed if options.seed else None)
s3_connection = common.s3.main
alt_connection = common.s3.alt
if options.outfile:
OUT = open(options.outfile, 'w')
else:
OUT = sys.stderr
VERBOSE = DEBUG = open('/dev/null', 'w')
if options.verbose:
VERBOSE = OUT
if options.debug:
DEBUG = OUT
VERBOSE = OUT
request_seeds = None
if options.seedfile:
FH = open(options.seedfile, 'r')
request_seeds = [int(line) for line in FH if line != '\n']
print>>OUT, 'Seedfile: %s' %options.seedfile
print>>OUT, 'Number of requests: %d' %len(request_seeds)
else:
if options.seed:
print>>OUT, 'Initial Seed: %d' %options.seed
print>>OUT, 'Number of requests: %d' %options.num_requests
random_list = randomlist(options.seed)
request_seeds = itertools.islice(random_list, options.num_requests)
print>>OUT, 'Decision Graph: %s' %options.graph_filename
graph_file = open(options.graph_filename, 'r')
decision_graph = yaml.safe_load(graph_file)
constants = populate_buckets(s3_connection, alt_connection)
print>>VERBOSE, "Test Buckets/Objects:"
for key, value in constants.iteritems():
print>>VERBOSE, "\t%s: %s" %(key, value)
print>>OUT, "Begin Fuzzing..."
print>>VERBOSE, '='*80
for request_seed in request_seeds:
print>>VERBOSE, 'Seed is: %r' %request_seed
prng = random.Random(request_seed)
decision = assemble_decision(decision_graph, prng)
decision.update(constants)
method = expand(decision, decision['method'], prng)
path = expand(decision, decision['urlpath'], prng)
try:
body = expand(decision, decision['body'], prng)
except KeyError:
body = ''
try:
headers = expand_headers(decision, prng)
except KeyError:
headers = {}
print>>VERBOSE, "%r %r" %(method[:100], path[:100])
for h, v in headers.iteritems():
print>>VERBOSE, "%r: %r" %(h[:50], v[:50])
print>>VERBOSE, "%r\n" % body[:100]
print>>DEBUG, 'FULL REQUEST'
print>>DEBUG, 'Method: %r' %method
print>>DEBUG, 'Path: %r' %path
print>>DEBUG, 'Headers:'
for h, v in headers.iteritems():
print>>DEBUG, "\t%r: %r" %(h, v)
print>>DEBUG, 'Body: %r\n' %body
failed = False # Let's be optimistic, shall we?
try:
response = s3_connection.make_request(method, path, data=body, headers=headers, override_num_retries=1)
body = response.read()
except BotoServerError, e:
response = e
body = e.body
failed = True
except BadStatusLine, e:
print>>OUT, 'FAILED: failed to parse response (BadStatusLine); probably a NUL byte in your request?'
print>>VERBOSE, '='*80
continue
if failed:
print>>OUT, 'FAILED:'
OLD_VERBOSE = VERBOSE
OLD_DEBUG = DEBUG
VERBOSE = DEBUG = OUT
print>>VERBOSE, 'Seed was: %r' %request_seed
print>>VERBOSE, 'Response status code: %d %s' %(response.status, response.reason)
print>>DEBUG, 'Body:\n%s' %body
print>>VERBOSE, '='*80
if failed:
VERBOSE = OLD_VERBOSE
DEBUG = OLD_DEBUG
print>>OUT, '...done fuzzing'
if options.cleanup:
common.teardown()
def main():
common.setup()
try:
_main()
except Exception as e:
traceback.print_exc()
common.teardown()
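
For readers unfamiliar with the decision-graph format used by this module, here is a small hypothetical usage sketch (written in Python 2 to match the file): a toy two-node graph is traversed with assemble_decision(), and the collected template strings are filled in with expand(). The toy graph, bucket name, and import path are illustrative assumptions, not something taken from the repository.

import random
from s3tests.fuzz.headers import assemble_decision, expand

toy_graph = {
    'start': {'set': {}, 'choices': ['put_object']},
    'put_object': {
        'set': {'urlpath': '/{bucket_readable}/key-{random 5 digits}'},
        'headers': [
            ['1-2', 'x-amz-meta-{random 3-8 printable}', '{random 10 printable}'],
        ],
        'choices': [],
    },
}

prng = random.Random(1)
decision = assemble_decision(toy_graph, prng)      # walk the graph, collect 'set' values and headers
decision['bucket_readable'] = 'some-test-bucket'   # normally provided by populate_buckets()
print expand(decision, decision['urlpath'], prng)  # e.g. /some-test-bucket/key-40831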


@@ -0,0 +1,403 @@
"""
Unit-test suite for the S3 fuzzer
The fuzzer is a grammar-based random S3 operation generator
that produces random operation sequences in an effort to
crash the server. This unit-test suite does not test
S3 servers, but rather the fuzzer infrastructure.
It works by running the fuzzer off of a simple grammar,
and checking the produced requests to ensure that they
include the expected sorts of operations in the expected
proportions.
"""
import sys
import itertools
import nose
import random
import string
import yaml
from ..headers import *
from nose.tools import eq_ as eq
from nose.tools import assert_true
from nose.plugins.attrib import attr
from ...functional.utils import assert_raises
_decision_graph = {}
def check_access_denied(fn, *args, **kwargs):
e = assert_raises(boto.exception.S3ResponseError, fn, *args, **kwargs)
eq(e.status, 403)
eq(e.reason, 'Forbidden')
eq(e.error_code, 'AccessDenied')
def build_graph():
graph = {}
graph['start'] = {
'set': {},
'choices': ['node2']
}
graph['leaf'] = {
'set': {
'key1': 'value1',
'key2': 'value2'
},
'headers': [
['1-2', 'random-header-{random 5-10 printable}', '{random 20-30 punctuation}']
],
'choices': []
}
graph['node1'] = {
'set': {
'key3': 'value3',
'header_val': [
'3 h1',
'2 h2',
'h3'
]
},
'headers': [
['1-1', 'my-header', '{header_val}'],
],
'choices': ['leaf']
}
graph['node2'] = {
'set': {
'randkey': 'value-{random 10-15 printable}',
'path': '/{bucket_readable}',
'indirect_key1': '{key1}'
},
'choices': ['leaf']
}
graph['bad_node'] = {
'set': {
'key1': 'value1'
},
'choices': ['leaf']
}
graph['nonexistant_child_node'] = {
'set': {},
'choices': ['leafy_greens']
}
graph['weighted_node'] = {
'set': {
'k1': [
'foo',
'2 bar',
'1 baz'
]
},
'choices': [
'foo',
'2 bar',
'1 baz'
]
}
graph['null_choice_node'] = {
'set': {},
'choices': [None]
}
graph['repeated_headers_node'] = {
'set': {},
'headers': [
['1-2', 'random-header-{random 5-10 printable}', '{random 20-30 punctuation}']
],
'choices': ['leaf']
}
graph['weighted_null_choice_node'] = {
'set': {},
'choices': ['3 null']
}
return graph
#def test_foo():
#graph_file = open('request_decision_graph.yml', 'r')
#graph = yaml.safe_load(graph_file)
#eq(graph['bucket_put_simple']['set']['grantee'], 0)
def test_load_graph():
graph_file = open('request_decision_graph.yml', 'r')
graph = yaml.safe_load(graph_file)
graph['start']
def test_descend_leaf_node():
graph = build_graph()
prng = random.Random(1)
decision = descend_graph(graph, 'leaf', prng)
eq(decision['key1'], 'value1')
eq(decision['key2'], 'value2')
e = assert_raises(KeyError, lambda x: decision[x], 'key3')
def test_descend_node():
graph = build_graph()
prng = random.Random(1)
decision = descend_graph(graph, 'node1', prng)
eq(decision['key1'], 'value1')
eq(decision['key2'], 'value2')
eq(decision['key3'], 'value3')
def test_descend_bad_node():
graph = build_graph()
prng = random.Random(1)
assert_raises(DecisionGraphError, descend_graph, graph, 'bad_node', prng)
def test_descend_nonexistant_child():
graph = build_graph()
prng = random.Random(1)
assert_raises(KeyError, descend_graph, graph, 'nonexistant_child_node', prng)
def test_expand_random_printable():
prng = random.Random(1)
got = expand({}, '{random 10-15 printable}', prng)
eq(got, '[/pNI$;92@')
def test_expand_random_binary():
prng = random.Random(1)
got = expand({}, '{random 10-15 binary}', prng)
eq(got, '\xdfj\xf1\xd80>a\xcd\xc4\xbb')
def test_expand_random_printable_no_whitespace():
prng = random.Random(1)
for _ in xrange(1000):
got = expand({}, '{random 500 printable_no_whitespace}', prng)
assert_true(reduce(lambda x, y: x and y, [x not in string.whitespace and x in string.printable for x in got]))
def test_expand_random_binary_no_whitespace():
prng = random.Random(1)
for _ in xrange(1000):
got = expand({}, '{random 500 binary_no_whitespace}', prng)
assert_true(reduce(lambda x, y: x and y, [x not in string.whitespace for x in got]))
def test_expand_random_no_args():
prng = random.Random(1)
for _ in xrange(1000):
got = expand({}, '{random}', prng)
assert_true(0 <= len(got) <= 1000)
assert_true(reduce(lambda x, y: x and y, [x in string.printable for x in got]))
def test_expand_random_no_charset():
prng = random.Random(1)
for _ in xrange(1000):
got = expand({}, '{random 10-30}', prng)
assert_true(10 <= len(got) <= 30)
assert_true(reduce(lambda x, y: x and y, [x in string.printable for x in got]))
def test_expand_random_exact_length():
prng = random.Random(1)
for _ in xrange(1000):
got = expand({}, '{random 10 digits}', prng)
assert_true(len(got) == 10)
assert_true(reduce(lambda x, y: x and y, [x in string.digits for x in got]))
def test_expand_random_bad_charset():
prng = random.Random(1)
assert_raises(KeyError, expand, {}, '{random 10-30 foo}', prng)
def test_expand_random_missing_length():
prng = random.Random(1)
assert_raises(ValueError, expand, {}, '{random printable}', prng)
def test_assemble_decision():
graph = build_graph()
prng = random.Random(1)
decision = assemble_decision(graph, prng)
eq(decision['key1'], 'value1')
eq(decision['key2'], 'value2')
eq(decision['randkey'], 'value-{random 10-15 printable}')
eq(decision['indirect_key1'], '{key1}')
eq(decision['path'], '/{bucket_readable}')
assert_raises(KeyError, lambda x: decision[x], 'key3')
def test_expand_escape():
prng = random.Random(1)
decision = dict(
foo='{{bar}}',
)
got = expand(decision, '{foo}', prng)
eq(got, '{bar}')
def test_expand_indirect():
prng = random.Random(1)
decision = dict(
foo='{bar}',
bar='quux',
)
got = expand(decision, '{foo}', prng)
eq(got, 'quux')
def test_expand_indirect_double():
prng = random.Random(1)
decision = dict(
foo='{bar}',
bar='{quux}',
quux='thud',
)
got = expand(decision, '{foo}', prng)
eq(got, 'thud')
def test_expand_recursive():
prng = random.Random(1)
decision = dict(
foo='{foo}',
)
e = assert_raises(RecursionError, expand, decision, '{foo}', prng)
eq(str(e), "Runaway recursion in string formatting: 'foo'")
def test_expand_recursive_mutual():
prng = random.Random(1)
decision = dict(
foo='{bar}',
bar='{foo}',
)
e = assert_raises(RecursionError, expand, decision, '{foo}', prng)
eq(str(e), "Runaway recursion in string formatting: 'foo'")
def test_expand_recursive_not_too_eager():
prng = random.Random(1)
decision = dict(
foo='bar',
)
got = expand(decision, 100*'{foo}', prng)
eq(got, 100*'bar')
def test_make_choice_unweighted_with_space():
prng = random.Random(1)
choice = make_choice(['foo bar'], prng)
eq(choice, 'foo bar')
def test_weighted_choices():
graph = build_graph()
prng = random.Random(1)
choices_made = {}
for _ in xrange(1000):
choice = make_choice(graph['weighted_node']['choices'], prng)
if choices_made.has_key(choice):
choices_made[choice] += 1
else:
choices_made[choice] = 1
foo_percentage = choices_made['foo'] / 1000.0
bar_percentage = choices_made['bar'] / 1000.0
baz_percentage = choices_made['baz'] / 1000.0
nose.tools.assert_almost_equal(foo_percentage, 0.25, 1)
nose.tools.assert_almost_equal(bar_percentage, 0.50, 1)
nose.tools.assert_almost_equal(baz_percentage, 0.25, 1)
def test_null_choices():
graph = build_graph()
prng = random.Random(1)
choice = make_choice(graph['null_choice_node']['choices'], prng)
eq(choice, '')
def test_weighted_null_choices():
graph = build_graph()
prng = random.Random(1)
choice = make_choice(graph['weighted_null_choice_node']['choices'], prng)
eq(choice, '')
def test_null_child():
graph = build_graph()
prng = random.Random(1)
decision = descend_graph(graph, 'null_choice_node', prng)
eq(decision, {})
def test_weighted_set():
graph = build_graph()
prng = random.Random(1)
choices_made = {}
for _ in xrange(1000):
choice = make_choice(graph['weighted_node']['set']['k1'], prng)
if choices_made.has_key(choice):
choices_made[choice] += 1
else:
choices_made[choice] = 1
foo_percentage = choices_made['foo'] / 1000.0
bar_percentage = choices_made['bar'] / 1000.0
baz_percentage = choices_made['baz'] / 1000.0
nose.tools.assert_almost_equal(foo_percentage, 0.25, 1)
nose.tools.assert_almost_equal(bar_percentage, 0.50, 1)
nose.tools.assert_almost_equal(baz_percentage, 0.25, 1)
def test_header_presence():
graph = build_graph()
prng = random.Random(1)
decision = descend_graph(graph, 'node1', prng)
c1 = itertools.count()
c2 = itertools.count()
for header, value in decision['headers']:
if header == 'my-header':
eq(value, '{header_val}')
assert_true(next(c1) < 1)
elif header == 'random-header-{random 5-10 printable}':
eq(value, '{random 20-30 punctuation}')
assert_true(next(c2) < 2)
else:
raise KeyError('unexpected header found: %s' % header)
assert_true(next(c1))
assert_true(next(c2))
def test_duplicate_header():
graph = build_graph()
prng = random.Random(1)
assert_raises(DecisionGraphError, descend_graph, graph, 'repeated_headers_node', prng)
def test_expand_headers():
graph = build_graph()
prng = random.Random(1)
decision = descend_graph(graph, 'node1', prng)
expanded_headers = expand_headers(decision, prng)
for header, value in expanded_headers.iteritems():
if header == 'my-header':
assert_true(value in ['h1', 'h2', 'h3'])
elif header.startswith('random-header-'):
assert_true(20 <= len(value) <= 30)
assert_true(string.strip(value, RepeatExpandingFormatter.charsets['punctuation']) is '')
else:
raise DecisionGraphError('unexpected header found: "%s"' % header)

s3tests/generate_objects.py Normal file

@@ -0,0 +1,117 @@
from boto.s3.key import Key
from optparse import OptionParser
from . import realistic
import traceback
import random
from . import common
import sys
def parse_opts():
parser = OptionParser()
parser.add_option('-O', '--outfile', help='write output to FILE. Defaults to STDOUT', metavar='FILE')
parser.add_option('-b', '--bucket', dest='bucket', help='push objects to BUCKET', metavar='BUCKET')
parser.add_option('--seed', dest='seed', help='optional seed for the random number generator')
return parser.parse_args()
def get_random_files(quantity, mean, stddev, seed):
"""Create file-like objects with pseudorandom contents.
IN:
number of files to create
mean file size in bytes
standard deviation from mean file size
seed for PRNG
OUT:
list of file handles
"""
file_generator = realistic.files(mean, stddev, seed)
return [file_generator.next() for _ in xrange(quantity)]
def upload_objects(bucket, files, seed):
"""Upload a bunch of files to an S3 bucket
IN:
boto S3 bucket object
list of file handles to upload
seed for PRNG
OUT:
list of boto S3 key objects
"""
keys = []
name_generator = realistic.names(15, 4, seed=seed)
for fp in files:
sys.stderr.write('sending file with size %dB\n' % fp.size)
key = Key(bucket)
key.key = name_generator.next()
key.set_contents_from_file(fp, rewind=True)
key.set_acl('public-read')
keys.append(key)
return keys
def _main():
'''To run the static content load test, make sure you've bootstrapped your
test environment and set up your config.yaml file, then run the following:
S3TEST_CONF=config.yaml virtualenv/bin/s3tests-generate-objects.py --seed 1234
This creates a bucket with your S3 credentials (from config.yaml) and
fills it with garbage objects as described in the
file_generation.groups section of config.yaml. It writes a list of
URLs to those objects to the file listed in file_generation.url_file
in config.yaml.
Once you have objects in your bucket, run the siege benchmarking program:
siege --rc ./siege.conf -r 5
This tells siege to read the ./siege.conf config file which tells it to
use the urls in ./urls.txt and log to ./siege.log. It hits each url in
urls.txt 5 times (-r flag).
Results are printed to the terminal and written in CSV format to
./siege.log
'''
(options, args) = parse_opts()
#SETUP
random.seed(options.seed if options.seed else None)
conn = common.s3.main
if options.outfile:
OUTFILE = open(options.outfile, 'w')
elif common.config.file_generation.url_file:
OUTFILE = open(common.config.file_generation.url_file, 'w')
else:
OUTFILE = sys.stdout
if options.bucket:
bucket = conn.create_bucket(options.bucket)
else:
bucket = common.get_new_bucket()
bucket.set_acl('public-read')
keys = []
OUTFILE.write('bucket: %s\n' % bucket.name)
sys.stderr.write('setup complete, generating files\n')
for profile in common.config.file_generation.groups:
seed = random.random()
files = get_random_files(profile[0], profile[1], profile[2], seed)
keys += upload_objects(bucket, files, seed)
sys.stderr.write('finished sending files. generating urls\n')
for key in keys:
OUTFILE.write(key.generate_url(0, query_auth=False) + '\n')
sys.stderr.write('done\n')
def main():
common.setup()
try:
_main()
except Exception as e:
traceback.print_exc()
common.teardown()
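
The file_generation section of config.yaml that _main() relies on is not shown in this compare; a plausible shape, inferred from the code above (each groups entry is [count, mean size in bytes, stddev in bytes], and url_file is where the generated URLs are written), would be:

file_generation:
  url_file: ./urls.txt
  groups:
    - [10, 2048, 512]         # 10 objects of roughly 2 KB
    - [2, 5242880, 1048576]   # 2 objects of roughly 5 MB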

s3tests/readwrite.py Normal file

@@ -0,0 +1,265 @@
import gevent
import gevent.pool
import gevent.queue
import gevent.monkey; gevent.monkey.patch_all()
import itertools
import optparse
import os
import sys
import time
import traceback
import random
import yaml
import realistic
import common
NANOSECOND = int(1e9)
def reader(bucket, worker_id, file_names, queue, rand):
while True:
objname = rand.choice(file_names)
key = bucket.new_key(objname)
fp = realistic.FileValidator()
result = dict(
type='r',
bucket=bucket.name,
key=key.name,
worker=worker_id,
)
start = time.time()
try:
key.get_contents_to_file(fp._file)
except gevent.GreenletExit:
raise
except Exception as e:
# stop timer ASAP, even on errors
end = time.time()
result.update(
error=dict(
msg=str(e),
traceback=traceback.format_exc(),
),
)
# certain kinds of programmer errors make this a busy
# loop; let parent greenlet get some time too
time.sleep(0)
else:
end = time.time()
if not fp.valid():
m='md5sum check failed start={s} ({se}) end={e} size={sz} obj={o}'.format(s=time.ctime(start), se=start, e=end, sz=fp._file.tell(), o=objname)
result.update(
error=dict(
msg=m,
traceback=traceback.format_exc(),
),
)
print("ERROR:", m)
else:
elapsed = end - start
result.update(
start=start,
duration=int(round(elapsed * NANOSECOND)),
)
queue.put(result)
def writer(bucket, worker_id, file_names, files, queue, rand):
while True:
fp = next(files)
fp.seek(0)
objname = rand.choice(file_names)
key = bucket.new_key(objname)
result = dict(
type='w',
bucket=bucket.name,
key=key.name,
worker=worker_id,
)
start = time.time()
try:
key.set_contents_from_file(fp)
except gevent.GreenletExit:
raise
except Exception as e:
# stop timer ASAP, even on errors
end = time.time()
result.update(
error=dict(
msg=str(e),
traceback=traceback.format_exc(),
),
)
# certain kinds of programmer errors make this a busy
# loop; let parent greenlet get some time too
time.sleep(0)
else:
end = time.time()
elapsed = end - start
result.update(
start=start,
duration=int(round(elapsed * NANOSECOND)),
)
queue.put(result)
def parse_options():
parser = optparse.OptionParser(
usage='%prog [OPTS] <CONFIG_YAML',
)
parser.add_option("--no-cleanup", dest="cleanup", action="store_false",
help="skip cleaning up all created buckets", default=True)
return parser.parse_args()
def write_file(bucket, file_name, fp):
"""
Write a single file to the bucket using the file_name.
This is used during the warmup to initialize the files.
"""
key = bucket.new_key(file_name)
key.set_contents_from_file(fp)
def main():
# parse options
(options, args) = parse_options()
if os.isatty(sys.stdin.fileno()):
raise RuntimeError('Need configuration in stdin.')
config = common.read_config(sys.stdin)
conn = common.connect(config.s3)
bucket = None
try:
# setup
real_stdout = sys.stdout
sys.stdout = sys.stderr
# verify all required config items are present
if 'readwrite' not in config:
raise RuntimeError('readwrite section not found in config')
for item in ['readers', 'writers', 'duration', 'files', 'bucket']:
if item not in config.readwrite:
raise RuntimeError("Missing readwrite config item: {item}".format(item=item))
for item in ['num', 'size', 'stddev']:
if item not in config.readwrite.files:
raise RuntimeError("Missing readwrite config item: files.{item}".format(item=item))
seeds = dict(config.readwrite.get('random_seed', {}))
seeds.setdefault('main', random.randrange(2**32))
rand = random.Random(seeds['main'])
for name in ['names', 'contents', 'writer', 'reader']:
seeds.setdefault(name, rand.randrange(2**32))
print('Using random seeds: {seeds}'.format(seeds=seeds))
# setup bucket and other objects
bucket_name = common.choose_bucket_prefix(config.readwrite.bucket, max_len=30)
bucket = conn.create_bucket(bucket_name)
print("Created bucket: {name}".format(name=bucket.name))
# check flag for deterministic file name creation
if not config.readwrite.get('deterministic_file_names'):
print('Creating random file names')
file_names = realistic.names(
mean=15,
stddev=4,
seed=seeds['names'],
)
file_names = itertools.islice(file_names, config.readwrite.files.num)
file_names = list(file_names)
else:
print('Creating file names that are deterministic')
file_names = []
for x in xrange(config.readwrite.files.num):
file_names.append('test_file_{num}'.format(num=x))
files = realistic.files2(
mean=1024 * config.readwrite.files.size,
stddev=1024 * config.readwrite.files.stddev,
seed=seeds['contents'],
)
q = gevent.queue.Queue()
# warmup - get initial set of files uploaded if there are any writers specified
if config.readwrite.writers > 0:
print("Uploading initial set of {num} files".format(num=config.readwrite.files.num))
warmup_pool = gevent.pool.Pool(size=100)
for file_name in file_names:
fp = next(files)
warmup_pool.spawn(
write_file,
bucket=bucket,
file_name=file_name,
fp=fp,
)
warmup_pool.join()
# main work
print("Starting main worker loop.")
print("Using file size: {size} +- {stddev}".format(size=config.readwrite.files.size, stddev=config.readwrite.files.stddev))
print("Spawning {w} writers and {r} readers...".format(w=config.readwrite.writers, r=config.readwrite.readers))
group = gevent.pool.Group()
rand_writer = random.Random(seeds['writer'])
# Don't create random files if deterministic_files_names is set and true
if not config.readwrite.get('deterministic_file_names'):
for x in xrange(config.readwrite.writers):
this_rand = random.Random(rand_writer.randrange(2**32))
group.spawn(
writer,
bucket=bucket,
worker_id=x,
file_names=file_names,
files=files,
queue=q,
rand=this_rand,
)
# Since the loop generating readers already uses config.readwrite.readers
# and the file names are already generated (randomly or deterministically),
# this loop needs no additional qualifiers. If zero readers are specified,
# it will behave as expected (no data is read)
rand_reader = random.Random(seeds['reader'])
for x in xrange(config.readwrite.readers):
this_rand = random.Random(rand_reader.randrange(2**32))
group.spawn(
reader,
bucket=bucket,
worker_id=x,
file_names=file_names,
queue=q,
rand=this_rand,
)
def stop():
group.kill(block=True)
q.put(StopIteration)
gevent.spawn_later(config.readwrite.duration, stop)
# wait for all the tests to finish
group.join()
print('post-join, queue size {size}'.format(size=q.qsize()))
if q.qsize() > 0:
for temp_dict in q:
if 'error' in temp_dict:
raise Exception('exception:\n\t{msg}\n\t{trace}'.format(
msg=temp_dict['error']['msg'],
trace=temp_dict['error']['traceback'])
)
else:
yaml.safe_dump(temp_dict, stream=real_stdout)
finally:
# cleanup
if options.cleanup:
if bucket is not None:
common.nuke_bucket(bucket)
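
main() reads its YAML configuration from stdin; judging from the validation code above, a minimal example would look like the following (the key names come from the checks, the values are illustrative; files.size and files.stddev are multiplied by 1024, i.e. they are given in KiB):

readwrite:
  bucket: readwrite-test
  readers: 2
  writers: 2
  duration: 60          # seconds before the workers are stopped
  files:
    num: 10
    size: 2000          # KiB
    stddev: 500         # KiB
  # optional:
  # random_seed: {main: 1234}
  # deterministic_file_names: true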

s3tests/realistic.py Normal file

@@ -0,0 +1,281 @@
import hashlib
import random
import string
import struct
import time
import math
import tempfile
import shutil
import os
NANOSECOND = int(1e9)
def generate_file_contents(size):
"""
A helper function to generate binary contents for a given size, and
append the sha1 hexdigest of the contents at the end of the blob.
It uses sha1's hexdigest which is 40 chars long. So any binary generated
should remove the last 40 chars from the blob to retrieve the original hash
and binary so that validity can be proved.
"""
size = int(size)
contents = os.urandom(size)
content_hash = hashlib.sha1(contents).hexdigest()
return contents + content_hash
class FileValidator(object):
def __init__(self, f=None):
self._file = tempfile.SpooledTemporaryFile()
self.original_hash = None
self.new_hash = None
if f:
f.seek(0)
shutil.copyfileobj(f, self._file)
def valid(self):
"""
Returns True if this file looks valid. The file is valid if the end
of the file has the sha1 hexdigest of the first part of the file.
"""
self._file.seek(0)
contents = self._file.read()
self.original_hash, binary = contents[-40:], contents[:-40]
self.new_hash = hashlib.sha1(binary).hexdigest()
if not self.new_hash == self.original_hash:
print('original hash: ', self.original_hash)
print('new hash: ', self.new_hash)
print('size: ', self._file.tell())
return False
return True
# XXX not sure if we need all of these
def seek(self, offset, whence=os.SEEK_SET):
self._file.seek(offset, whence)
def tell(self):
return self._file.tell()
def read(self, size=-1):
return self._file.read(size)
def write(self, data):
self._file.write(data)
self._file.seek(0)
class RandomContentFile(object):
def __init__(self, size, seed):
self.size = size
self.seed = seed
self.random = random.Random(self.seed)
# Boto likes to seek once more after it's done reading, so we need to save the last chunks/seek value.
self.last_chunks = self.chunks = None
self.last_seek = None
# Let seek initialize the rest of it, rather than dup code
self.seek(0)
def _mark_chunk(self):
self.chunks.append([self.offset, int(round((time.time() - self.last_seek) * NANOSECOND))])
def seek(self, offset, whence=os.SEEK_SET):
if whence == os.SEEK_SET:
self.offset = offset
elif whence == os.SEEK_END:
self.offset = self.size + offset;
elif whence == os.SEEK_CUR:
self.offset += offset
assert self.offset == 0
self.random.seed(self.seed)
self.buffer = ''
self.hash = hashlib.md5()
self.digest_size = self.hash.digest_size
self.digest = None
# Save the last seek time as our start time, and the last chunks
self.last_chunks = self.chunks
# Before emptying.
self.last_seek = time.time()
self.chunks = []
def tell(self):
return self.offset
def _generate(self):
# generate and return a chunk of pseudorandom data
size = min(self.size, 1*1024*1024) # generate at most 1 MB at a time
chunks = int(math.ceil(size/8.0)) # number of 8-byte chunks to create
l = [self.random.getrandbits(64) for _ in xrange(chunks)]
s = struct.pack(chunks*'Q', *l)
return s
def read(self, size=-1):
if size < 0:
size = self.size - self.offset
r = []
random_count = min(size, self.size - self.offset - self.digest_size)
if random_count > 0:
while len(self.buffer) < random_count:
self.buffer += self._generate()
self.offset += random_count
size -= random_count
data, self.buffer = self.buffer[:random_count], self.buffer[random_count:]
if self.hash is not None:
self.hash.update(data)
r.append(data)
digest_count = min(size, self.size - self.offset)
if digest_count > 0:
if self.digest is None:
self.digest = self.hash.digest()
self.hash = None
self.offset += digest_count
size -= digest_count
data = self.digest[:digest_count]
r.append(data)
self._mark_chunk()
return ''.join(r)
class PrecomputedContentFile(object):
def __init__(self, f):
self._file = tempfile.SpooledTemporaryFile()
f.seek(0)
shutil.copyfileobj(f, self._file)
self.last_chunks = self.chunks = None
self.seek(0)
def seek(self, offset, whence=os.SEEK_SET):
self._file.seek(offset, whence)
if self.tell() == 0:
# only reset the chunks when seeking to the beginning
self.last_chunks = self.chunks
self.last_seek = time.time()
self.chunks = []
def tell(self):
return self._file.tell()
def read(self, size=-1):
data = self._file.read(size)
self._mark_chunk()
return data
def _mark_chunk(self):
elapsed = time.time() - self.last_seek
elapsed_nsec = int(round(elapsed * NANOSECOND))
self.chunks.append([self.tell(), elapsed_nsec])
class FileVerifier(object):
def __init__(self):
self.size = 0
self.hash = hashlib.md5()
self.buf = ''
self.created_at = time.time()
self.chunks = []
def _mark_chunk(self):
self.chunks.append([self.size, int(round((time.time() - self.created_at) * NANOSECOND))])
def write(self, data):
self.size += len(data)
self.buf += data
digsz = -1*self.hash.digest_size
new_data, self.buf = self.buf[0:digsz], self.buf[digsz:]
self.hash.update(new_data)
self._mark_chunk()
def valid(self):
"""
Returns True if this file looks valid. The file is valid if the end
of the file has the md5 digest for the first part of the file.
"""
if self.size < self.hash.digest_size:
return self.hash.digest().startswith(self.buf)
return self.buf == self.hash.digest()
def files(mean, stddev, seed=None):
"""
Yields file-like objects with effectively random contents, where
the size of each file follows the normal distribution with `mean`
and `stddev`.
Beware, the file-likeness is very shallow. You can use boto's
`key.set_contents_from_file` to send these to S3, but they are not
full file objects.
The last 128 bits are the MD5 digest of the previous bytes, for
verifying round-trip data integrity. For example, if you
re-download the object and place the contents into a file called
``foo``, the following should print two identical lines:
python3 -c 'import sys, hashlib; data=sys.stdin.buffer.read(); print(hashlib.md5(data[:-16]).hexdigest()); print("".join("%02x" % b for b in data[-16:]))' <foo
Except for objects shorter than 16 bytes, where the second line
will be proportionally shorter.
"""
rand = random.Random(seed)
while True:
while True:
size = int(rand.normalvariate(mean, stddev))
if size >= 0:
break
yield RandomContentFile(size=size, seed=rand.getrandbits(32))
def files2(mean, stddev, seed=None, numfiles=10):
"""
Yields file objects with effectively random contents, where the
size of each file follows the normal distribution with `mean` and
`stddev`.
Rather than continuously generating new files, this pre-computes and
stores `numfiles` files and yields them in a loop.
"""
# pre-compute all the files (and save with TemporaryFiles)
fs = []
for _ in xrange(numfiles):
t = tempfile.SpooledTemporaryFile()
t.write(generate_file_contents(random.normalvariate(mean, stddev)))
t.seek(0)
fs.append(t)
while True:
for f in fs:
yield f
def names(mean, stddev, charset=None, seed=None):
"""
Yields strings that are somewhat plausible as file names, where
the length of each filename follows the normal distribution with
`mean` and `stddev`.
"""
if charset is None:
charset = string.ascii_lowercase
rand = random.Random(seed)
while True:
while True:
length = int(rand.normalvariate(mean, stddev))
if length > 0:
break
name = ''.join(rand.choice(charset) for _ in xrange(length))
yield name
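
A quick hypothetical round-trip check of the machinery above (Python 2, matching the file): data produced by RandomContentFile carries its own MD5 digest in its final 16 bytes, and FileVerifier recomputes that digest as the data is written back into it.

from s3tests.realistic import RandomContentFile, FileVerifier

data = RandomContentFile(size=1024, seed=42).read()

verifier = FileVerifier()
verifier.write(data)
assert verifier.valid()   # last 16 bytes == md5 of the first 1008 bytes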

s3tests/roundtrip.py Normal file

@@ -0,0 +1,219 @@
import gevent
import gevent.pool
import gevent.queue
import gevent.monkey; gevent.monkey.patch_all()
import itertools
import optparse
import os
import sys
import time
import traceback
import random
import yaml
import realistic
import common
NANOSECOND = int(1e9)
def writer(bucket, objname, fp, queue):
key = bucket.new_key(objname)
result = dict(
type='w',
bucket=bucket.name,
key=key.name,
)
start = time.time()
try:
key.set_contents_from_file(fp, rewind=True)
except gevent.GreenletExit:
raise
except Exception as e:
# stop timer ASAP, even on errors
end = time.time()
result.update(
error=dict(
msg=str(e),
traceback=traceback.format_exc(),
),
)
# certain kinds of programmer errors make this a busy
# loop; let parent greenlet get some time too
time.sleep(0)
else:
end = time.time()
elapsed = end - start
result.update(
start=start,
duration=int(round(elapsed * NANOSECOND)),
chunks=fp.last_chunks,
)
queue.put(result)
def reader(bucket, objname, queue):
    key = bucket.new_key(objname)

    fp = realistic.FileVerifier()
    result = dict(
        type='r',
        bucket=bucket.name,
        key=key.name,
        )

    start = time.time()
    try:
        key.get_contents_to_file(fp)
    except gevent.GreenletExit:
        raise
    except Exception as e:
        # stop timer ASAP, even on errors
        end = time.time()
        result.update(
            error=dict(
                msg=str(e),
                traceback=traceback.format_exc(),
                ),
            )
        # certain kinds of programmer errors make this a busy
        # loop; let parent greenlet get some time too
        time.sleep(0)
    else:
        end = time.time()
        if not fp.valid():
            result.update(
                error=dict(
                    msg='md5sum check failed',
                    ),
                )

    elapsed = end - start
    result.update(
        start=start,
        duration=int(round(elapsed * NANOSECOND)),
        chunks=fp.chunks,
        )
    queue.put(result)
def parse_options():
    parser = optparse.OptionParser(
        usage='%prog [OPTS] <CONFIG_YAML',
        )
    parser.add_option("--no-cleanup", dest="cleanup", action="store_false",
        help="skip cleaning up all created buckets", default=True)

    return parser.parse_args()
def main():
    # parse options
    (options, args) = parse_options()

    if os.isatty(sys.stdin.fileno()):
        raise RuntimeError('Need configuration in stdin.')
    config = common.read_config(sys.stdin)
    conn = common.connect(config.s3)
    bucket = None

    try:
        # setup
        real_stdout = sys.stdout
        sys.stdout = sys.stderr

        # verify all required config items are present
        if 'roundtrip' not in config:
            raise RuntimeError('roundtrip section not found in config')
        for item in ['readers', 'writers', 'duration', 'files', 'bucket']:
            if item not in config.roundtrip:
                raise RuntimeError("Missing roundtrip config item: {item}".format(item=item))
        for item in ['num', 'size', 'stddev']:
            if item not in config.roundtrip.files:
                raise RuntimeError("Missing roundtrip config item: files.{item}".format(item=item))

        seeds = dict(config.roundtrip.get('random_seed', {}))
        seeds.setdefault('main', random.randrange(2**32))

        rand = random.Random(seeds['main'])

        for name in ['names', 'contents', 'writer', 'reader']:
            seeds.setdefault(name, rand.randrange(2**32))

        print('Using random seeds: {seeds}'.format(seeds=seeds))

        # setup bucket and other objects
        bucket_name = common.choose_bucket_prefix(config.roundtrip.bucket, max_len=30)
        bucket = conn.create_bucket(bucket_name)
        print("Created bucket: {name}".format(name=bucket.name))
        objnames = realistic.names(
            mean=15,
            stddev=4,
            seed=seeds['names'],
            )
        objnames = itertools.islice(objnames, config.roundtrip.files.num)
        objnames = list(objnames)
        files = realistic.files(
            mean=1024 * config.roundtrip.files.size,
            stddev=1024 * config.roundtrip.files.stddev,
            seed=seeds['contents'],
            )
        q = gevent.queue.Queue()

        logger_g = gevent.spawn(yaml.safe_dump_all, q, stream=real_stdout)

        print("Writing {num} objects with {w} workers...".format(
            num=config.roundtrip.files.num,
            w=config.roundtrip.writers,
            ))
        pool = gevent.pool.Pool(size=config.roundtrip.writers)
        start = time.time()
        for objname in objnames:
            fp = next(files)
            pool.spawn(
                writer,
                bucket=bucket,
                objname=objname,
                fp=fp,
                queue=q,
                )
        pool.join()
        stop = time.time()
        elapsed = stop - start
        q.put(dict(
                type='write_done',
                duration=int(round(elapsed * NANOSECOND)),
                ))

        print("Reading {num} objects with {w} workers...".format(
            num=config.roundtrip.files.num,
            w=config.roundtrip.readers,
            ))
        # avoid accessing them in the same order as the writing
        rand.shuffle(objnames)
        pool = gevent.pool.Pool(size=config.roundtrip.readers)
        start = time.time()
        for objname in objnames:
            pool.spawn(
                reader,
                bucket=bucket,
                objname=objname,
                queue=q,
                )
        pool.join()
        stop = time.time()
        elapsed = stop - start
        q.put(dict(
                type='read_done',
                duration=int(round(elapsed * NANOSECOND)),
                ))

        q.put(StopIteration)
        logger_g.get()

    finally:
        # cleanup
        if options.cleanup:
            if bucket is not None:
                common.nuke_bucket(bucket)
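The script reads a YAML config on stdin and streams YAML result records to stdout. As a sketch of the shape that the validation in main() expects, the following prints a hypothetical config; only the key names are taken from the checks above (and from common.connect() for the s3 section), the values are placeholders:

import yaml

config = {
    's3': {                        # passed straight to common.connect()
        'host': 'localhost',
        'port': 8000,
        'is_secure': False,
        'access_key': 'ACCESSKEY',
        'secret_key': 'SECRETKEY',
    },
    'roundtrip': {
        'bucket': 'roundtrip',     # bucket name/prefix template
        'readers': 4,
        'writers': 4,
        'duration': 30,            # required by the check above
        'files': {
            'num': 32,             # number of objects
            'size': 1024,          # mean size in KiB (multiplied by 1024 above)
            'stddev': 256,
        },
    },
}
print(yaml.safe_dump(config, default_flow_style=False))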


@ -0,0 +1,79 @@
from s3tests import realistic
import shutil
import tempfile


# XXX not used for now
def create_files(mean=2000):
    return realistic.files2(
        mean=1024 * mean,
        stddev=1024 * 500,
        seed=1256193726,
        numfiles=4,
        )


class TestFiles(object):
    # the size and seed is what we can get when generating a bunch of files
    # with pseudo random numbers based on stddev, seed, and mean.

    # this fails, demonstrating the (current) problem
    #def test_random_file_invalid(self):
    #    size = 2506764
    #    seed = 3391518755
    #    source = realistic.RandomContentFile(size=size, seed=seed)
    #    t = tempfile.SpooledTemporaryFile()
    #    shutil.copyfileobj(source, t)
    #    precomputed = realistic.PrecomputedContentFile(t)
    #    assert precomputed.valid()
    #    verifier = realistic.FileVerifier()
    #    shutil.copyfileobj(precomputed, verifier)
    #    assert verifier.valid()

    # this passes
    def test_random_file_valid(self):
        size = 2506001
        seed = 3391518755
        source = realistic.RandomContentFile(size=size, seed=seed)
        t = tempfile.SpooledTemporaryFile()
        shutil.copyfileobj(source, t)
        precomputed = realistic.PrecomputedContentFile(t)
        verifier = realistic.FileVerifier()
        shutil.copyfileobj(precomputed, verifier)
        assert verifier.valid()


# new implementation
class TestFileValidator(object):

    def test_new_file_is_valid(self):
        size = 2506001
        contents = realistic.generate_file_contents(size)
        t = tempfile.SpooledTemporaryFile()
        t.write(contents)
        t.seek(0)
        fp = realistic.FileValidator(t)
        assert fp.valid()

    def test_new_file_is_valid_when_size_is_1(self):
        size = 1
        contents = realistic.generate_file_contents(size)
        t = tempfile.SpooledTemporaryFile()
        t.write(contents)
        t.seek(0)
        fp = realistic.FileValidator(t)
        assert fp.valid()

    def test_new_file_is_valid_on_several_calls(self):
        size = 2506001
        contents = realistic.generate_file_contents(size)
        t = tempfile.SpooledTemporaryFile()
        t.write(contents)
        t.seek(0)
        fp = realistic.FileValidator(t)
        assert fp.valid()
        assert fp.valid()


@ -0,0 +1,142 @@
#!/usr/bin/python3
import sys
import os
import yaml
import optparse
NANOSECONDS = int(1e9)
# Output stats in a format similar to siege
# see http://www.joedog.org/index/siege-home
OUTPUT_FORMAT = """Stats for type: [{type}]
Transactions: {trans:>11} hits
Availability: {avail:>11.2f} %
Elapsed time: {elapsed:>11.2f} secs
Data transferred: {data:>11.2f} MB
Response time: {resp_time:>11.2f} secs
Transaction rate: {trans_rate:>11.2f} trans/sec
Throughput: {data_rate:>11.2f} MB/sec
Concurrency: {conc:>11.2f}
Successful transactions: {trans_success:>11}
Failed transactions: {trans_fail:>11}
Longest transaction: {trans_long:>11.2f}
Shortest transaction: {trans_short:>11.2f}
"""
def parse_options():
    usage = "usage: %prog [options]"
    parser = optparse.OptionParser(usage=usage)
    parser.add_option(
        "-f", "--file", dest="input", metavar="FILE",
        help="Name of input YAML file. Default uses sys.stdin")
    parser.add_option(
        "-v", "--verbose", dest="verbose", action="store_true",
        help="Enable verbose output")

    (options, args) = parser.parse_args()
    if not options.input and os.isatty(sys.stdin.fileno()):
        parser.error("option -f required if no data is provided "
                     "in stdin")

    return (options, args)
def main():
    (options, args) = parse_options()

    total = {}
    durations = {}
    min_time = {}
    max_time = {}
    errors = {}
    success = {}

    calculate_stats(options, total, durations, min_time, max_time, errors,
                    success)
    print_results(total, durations, min_time, max_time, errors, success)
def calculate_stats(options, total, durations, min_time, max_time, errors,
                    success):
    print('Calculating statistics...')

    f = sys.stdin
    if options.input:
        f = open(options.input, 'r')

    for item in yaml.safe_load_all(f):
        type_ = item.get('type')
        if type_ not in ('r', 'w'):
            continue  # ignore any invalid items

        if 'error' in item:
            errors[type_] = errors.get(type_, 0) + 1
            continue  # skip rest of analysis for this item
        else:
            success[type_] = success.get(type_, 0) + 1

        # parse the item
        data_size = item['chunks'][-1][0]
        duration = item['duration']
        start = item['start']
        end = start + duration / float(NANOSECONDS)

        if options.verbose:
            print("[{type}] POSIX time: {start:>18.2f} - {end:<18.2f} "
                  "{data:>11.2f} KB".format(
                      type=type_,
                      start=start,
                      end=end,
                      data=data_size / 1024.0,  # convert to KB
                      ))

        # update time boundaries
        prev = min_time.setdefault(type_, start)
        if start < prev:
            min_time[type_] = start
        prev = max_time.setdefault(type_, end)
        if end > prev:
            max_time[type_] = end

        # save the duration
        if type_ not in durations:
            durations[type_] = []
        durations[type_].append(duration)

        # add to running totals
        total[type_] = total.get(type_, 0) + data_size
def print_results(total, durations, min_time, max_time, errors, success):
    for type_ in total.keys():
        trans_success = success.get(type_, 0)
        trans_fail = errors.get(type_, 0)
        trans = trans_success + trans_fail
        avail = trans_success * 100.0 / trans
        elapsed = max_time[type_] - min_time[type_]
        data = total[type_] / 1024.0 / 1024.0  # convert to MB
        resp_time = sum(durations[type_]) / float(NANOSECONDS) / \
            len(durations[type_])
        trans_rate = trans / elapsed
        data_rate = data / elapsed
        conc = trans_rate * resp_time
        trans_long = max(durations[type_]) / float(NANOSECONDS)
        trans_short = min(durations[type_]) / float(NANOSECONDS)

        print(OUTPUT_FORMAT.format(
            type=type_,
            trans_success=trans_success,
            trans_fail=trans_fail,
            trans=trans,
            avail=avail,
            elapsed=elapsed,
            data=data,
            resp_time=resp_time,
            trans_rate=trans_rate,
            data_rate=data_rate,
            conc=conc,
            trans_long=trans_long,
            trans_short=trans_short,
            ))


if __name__ == '__main__':
    main()
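As a quick worked example of the derived metrics above (hypothetical numbers): two successful 1 MiB writes, each taking 0.5 s, whose time windows span 2 s overall, give a transaction rate of 1.0 trans/sec, a response time of 0.5 secs, and therefore a concurrency of 0.5:

NANOSECONDS = int(1e9)
records = [
    {'type': 'w', 'start': 100.0, 'duration': int(0.5 * NANOSECONDS), 'chunks': [(1048576, 0.5)]},
    {'type': 'w', 'start': 101.5, 'duration': int(0.5 * NANOSECONDS), 'chunks': [(1048576, 0.5)]},
]
elapsed = (max(r['start'] + r['duration'] / NANOSECONDS for r in records)
           - min(r['start'] for r in records))                                 # 2.0 secs
trans_rate = len(records) / elapsed                                            # 1.0 trans/sec
resp_time = sum(r['duration'] for r in records) / NANOSECONDS / len(records)   # 0.5 secs
conc = trans_rate * resp_time                                                  # 0.5, as in print_results()
print(trans_rate, resp_time, conc)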


@ -1,5 +1,5 @@
 import boto.s3.connection
-import munch
+import bunch
 import itertools
 import os
 import random
@ -11,8 +11,8 @@ from lxml import etree
 from doctest import Example
 from lxml.doctestcompare import LXMLOutputChecker
-s3 = munch.Munch()
-config = munch.Munch()
+s3 = bunch.Bunch()
+config = bunch.Bunch()
 prefix = ''
 bucket_counter = itertools.count(1)
@ -51,10 +51,10 @@ def nuke_bucket(bucket):
         while deleted_cnt:
             deleted_cnt = 0
             for key in bucket.list():
-                print('Cleaning bucket {bucket} key {key}'.format(
+                print 'Cleaning bucket {bucket} key {key}'.format(
                     bucket=bucket,
                     key=key,
-                    ))
+                    )
                 key.set_canned_acl('private')
                 key.delete()
                 deleted_cnt += 1
@ -67,26 +67,26 @@ def nuke_bucket(bucket):
                 and e.body == ''):
             e.error_code = 'AccessDenied'
         if e.error_code != 'AccessDenied':
-            print('GOT UNWANTED ERROR', e.error_code)
+            print 'GOT UNWANTED ERROR', e.error_code
             raise
         # seems like we're not the owner of the bucket; ignore
         pass
 def nuke_prefixed_buckets():
-    for name, conn in list(s3.items()):
-        print('Cleaning buckets from connection {name}'.format(name=name))
+    for name, conn in s3.items():
+        print 'Cleaning buckets from connection {name}'.format(name=name)
         for bucket in conn.get_all_buckets():
             if bucket.name.startswith(prefix):
-                print('Cleaning bucket {bucket}'.format(bucket=bucket))
+                print 'Cleaning bucket {bucket}'.format(bucket=bucket)
                 nuke_bucket(bucket)
-    print('Done with cleanup of test buckets.')
+    print 'Done with cleanup of test buckets.'
 def read_config(fp):
-    config = munch.Munch()
+    config = bunch.Bunch()
     g = yaml.safe_load_all(fp)
     for new in g:
-        config.update(munch.Munchify(new))
+        config.update(bunch.bunchify(new))
     return config
 def connect(conf):
@ -97,7 +97,7 @@ def connect(conf):
         access_key='aws_access_key_id',
         secret_key='aws_secret_access_key',
         )
-    kwargs = dict((mapping[k],v) for (k,v) in conf.items() if k in mapping)
+    kwargs = dict((mapping[k],v) for (k,v) in conf.iteritems() if k in mapping)
     #process calling_format argument
     calling_formats = dict(
         ordinary=boto.s3.connection.OrdinaryCallingFormat(),
@ -105,7 +105,7 @@ def connect(conf):
         vhost=boto.s3.connection.VHostCallingFormat(),
         )
     kwargs['calling_format'] = calling_formats['ordinary']
-    if 'calling_format' in conf:
+    if conf.has_key('calling_format'):
         raw_calling_format = conf['calling_format']
         try:
             kwargs['calling_format'] = calling_formats[raw_calling_format]
@ -146,7 +146,7 @@ def setup():
         raise RuntimeError("Empty Prefix! Aborting!")
     defaults = config.s3.defaults
-    for section in list(config.s3.keys()):
+    for section in config.s3.keys():
         if section == 'defaults':
             continue

@ -1,21 +1,15 @@
import pytest
import boto3 import boto3
from botocore import UNSIGNED from botocore import UNSIGNED
from botocore.client import Config from botocore.client import Config
from botocore.exceptions import ClientError
from botocore.handlers import disable_signing from botocore.handlers import disable_signing
import configparser import ConfigParser
import datetime
import time
import os import os
import munch import bunch
import random import random
import string import string
import itertools import itertools
import urllib3
import re
config = munch.Munch config = bunch.Bunch
# this will be assigned by setup() # this will be assigned by setup()
prefix = None prefix = None
@ -79,64 +73,38 @@ def get_objects_list(bucket, client=None, prefix=None):
return objects_list return objects_list
# generator function that returns object listings in batches, where each def get_versioned_objects_list(bucket, client=None):
# batch is a list of dicts compatible with delete_objects() if client == None:
def list_versions(client, bucket, batch_size): client = get_client()
kwargs = {'Bucket': bucket, 'MaxKeys': batch_size} response = client.list_object_versions(Bucket=bucket)
truncated = True versioned_objects_list = []
while truncated:
listing = client.list_object_versions(**kwargs)
kwargs['KeyMarker'] = listing.get('NextKeyMarker') if 'Versions' in response:
kwargs['VersionIdMarker'] = listing.get('NextVersionIdMarker') contents = response['Versions']
truncated = listing['IsTruncated'] for obj in contents:
key = obj['Key']
version_id = obj['VersionId']
versioned_obj = (key,version_id)
versioned_objects_list.append(versioned_obj)
objs = listing.get('Versions', []) + listing.get('DeleteMarkers', []) return versioned_objects_list
if len(objs):
yield [{'Key': o['Key'], 'VersionId': o['VersionId']} for o in objs]
def nuke_bucket(client, bucket): def get_delete_markers_list(bucket, client=None):
batch_size = 128 if client == None:
max_retain_date = None client = get_client()
response = client.list_object_versions(Bucket=bucket)
delete_markers = []
# list and delete objects in batches if 'DeleteMarkers' in response:
for objects in list_versions(client, bucket, batch_size): contents = response['DeleteMarkers']
delete = client.delete_objects(Bucket=bucket, for obj in contents:
Delete={'Objects': objects, 'Quiet': True}, key = obj['Key']
BypassGovernanceRetention=True) version_id = obj['VersionId']
versioned_obj = (key,version_id)
delete_markers.append(versioned_obj)
# check for object locks on 403 AccessDenied errors return delete_markers
for err in delete.get('Errors', []):
if err.get('Code') != 'AccessDenied':
continue
try:
res = client.get_object_retention(Bucket=bucket,
Key=err['Key'], VersionId=err['VersionId'])
retain_date = res['Retention']['RetainUntilDate']
if not max_retain_date or max_retain_date < retain_date:
max_retain_date = retain_date
except ClientError:
pass
if max_retain_date:
# wait out the retention period (up to 60 seconds)
now = datetime.datetime.now(max_retain_date.tzinfo)
if max_retain_date > now:
delta = max_retain_date - now
if delta.total_seconds() > 60:
raise RuntimeError('bucket {} still has objects \
locked for {} more seconds, not waiting for \
bucket cleanup'.format(bucket, delta.total_seconds()))
print('nuke_bucket', bucket, 'waiting', delta.total_seconds(),
'seconds for object locks to expire')
time.sleep(delta.total_seconds())
for objects in list_versions(client, bucket, batch_size):
client.delete_objects(Bucket=bucket,
Delete={'Objects': objects, 'Quiet': True},
BypassGovernanceRetention=True)
client.delete_bucket(Bucket=bucket)
def nuke_prefixed_buckets(prefix, client=None): def nuke_prefixed_buckets(prefix, client=None):
if client == None: if client == None:
@ -144,38 +112,23 @@ def nuke_prefixed_buckets(prefix, client=None):
buckets = get_buckets_list(client, prefix) buckets = get_buckets_list(client, prefix)
err = None if buckets != []:
for bucket_name in buckets: for bucket_name in buckets:
try: objects_list = get_objects_list(bucket_name, client)
nuke_bucket(client, bucket_name) for obj in objects_list:
except Exception as e: response = client.delete_object(Bucket=bucket_name,Key=obj)
# The exception shouldn't be raised when doing cleanup. Pass and continue versioned_objects_list = get_versioned_objects_list(bucket_name, client)
# the bucket cleanup process. Otherwise left buckets wouldn't be cleared for obj in versioned_objects_list:
# resulting in some kind of resource leak. err is used to hint user some response = client.delete_object(Bucket=bucket_name,Key=obj[0],VersionId=obj[1])
# exception once occurred. delete_markers = get_delete_markers_list(bucket_name, client)
err = e for obj in delete_markers:
pass response = client.delete_object(Bucket=bucket_name,Key=obj[0],VersionId=obj[1])
if err: client.delete_bucket(Bucket=bucket_name)
raise err
print('Done with cleanup of buckets in tests.') print('Done with cleanup of buckets in tests.')
def configured_storage_classes(): def setup():
sc = ['STANDARD'] cfg = ConfigParser.RawConfigParser()
extra_sc = re.split(r"[\b\W\b]+", config.storage_classes)
for item in extra_sc:
if item != 'STANDARD':
sc.append(item)
sc = [i for i in sc if i]
print("storage classes configured: " + str(sc))
return sc
def configure():
cfg = configparser.RawConfigParser()
try: try:
path = os.environ['S3TEST_CONF'] path = os.environ['S3TEST_CONF']
except KeyError: except KeyError:
@ -183,7 +136,8 @@ def configure():
'To run tests, point environment ' 'To run tests, point environment '
+ 'variable S3TEST_CONF to a config file.', + 'variable S3TEST_CONF to a config file.',
) )
cfg.read(path) with file(path) as f:
cfg.readfp(f)
if not cfg.defaults(): if not cfg.defaults():
raise RuntimeError('Your config file is missing the DEFAULT section!') raise RuntimeError('Your config file is missing the DEFAULT section!')
@ -206,15 +160,6 @@ def configure():
proto = 'https' if config.default_is_secure else 'http' proto = 'https' if config.default_is_secure else 'http'
config.default_endpoint = "%s://%s:%d" % (proto, config.default_host, config.default_port) config.default_endpoint = "%s://%s:%d" % (proto, config.default_host, config.default_port)
try:
config.default_ssl_verify = cfg.getboolean('DEFAULT', "ssl_verify")
except configparser.NoOptionError:
config.default_ssl_verify = False
# Disable InsecureRequestWarning reported by urllib3 when ssl_verify is False
if not config.default_ssl_verify:
urllib3.disable_warnings()
# vars from the main section # vars from the main section
config.main_access_key = cfg.get('s3 main',"access_key") config.main_access_key = cfg.get('s3 main',"access_key")
config.main_secret_key = cfg.get('s3 main',"secret_key") config.main_secret_key = cfg.get('s3 main',"secret_key")
@ -223,31 +168,19 @@ def configure():
config.main_email = cfg.get('s3 main',"email") config.main_email = cfg.get('s3 main',"email")
try: try:
config.main_kms_keyid = cfg.get('s3 main',"kms_keyid") config.main_kms_keyid = cfg.get('s3 main',"kms_keyid")
except (configparser.NoSectionError, configparser.NoOptionError): except (ConfigParser.NoSectionError, ConfigParser.NoOptionError):
config.main_kms_keyid = 'testkey-1' config.main_kms_keyid = 'testkey-1'
try: try:
config.main_kms_keyid2 = cfg.get('s3 main',"kms_keyid2") config.main_kms_keyid2 = cfg.get('s3 main',"kms_keyid2")
except (configparser.NoSectionError, configparser.NoOptionError): except (ConfigParser.NoSectionError, ConfigParser.NoOptionError):
config.main_kms_keyid2 = 'testkey-2' config.main_kms_keyid2 = 'testkey-2'
try: try:
config.main_api_name = cfg.get('s3 main',"api_name") config.main_api_name = cfg.get('s3 main',"api_name")
except (configparser.NoSectionError, configparser.NoOptionError): except (ConfigParser.NoSectionError, ConfigParser.NoOptionError):
config.main_api_name = "" config.main_api_name = ""
pass pass
try:
config.storage_classes = cfg.get('s3 main',"storage_classes")
except (configparser.NoSectionError, configparser.NoOptionError):
config.storage_classes = ""
pass
try:
config.lc_debug_interval = int(cfg.get('s3 main',"lc_debug_interval"))
except (configparser.NoSectionError, configparser.NoOptionError):
config.lc_debug_interval = 10
config.alt_access_key = cfg.get('s3 alt',"access_key") config.alt_access_key = cfg.get('s3 alt',"access_key")
config.alt_secret_key = cfg.get('s3 alt',"secret_key") config.alt_secret_key = cfg.get('s3 alt',"secret_key")
config.alt_display_name = cfg.get('s3 alt',"display_name") config.alt_display_name = cfg.get('s3 alt',"display_name")
@ -259,38 +192,14 @@ def configure():
config.tenant_display_name = cfg.get('s3 tenant',"display_name") config.tenant_display_name = cfg.get('s3 tenant',"display_name")
config.tenant_user_id = cfg.get('s3 tenant',"user_id") config.tenant_user_id = cfg.get('s3 tenant',"user_id")
config.tenant_email = cfg.get('s3 tenant',"email") config.tenant_email = cfg.get('s3 tenant',"email")
config.tenant_name = cfg.get('s3 tenant',"tenant")
config.iam_access_key = cfg.get('iam',"access_key")
config.iam_secret_key = cfg.get('iam',"secret_key")
config.iam_display_name = cfg.get('iam',"display_name")
config.iam_user_id = cfg.get('iam',"user_id")
config.iam_email = cfg.get('iam',"email")
config.iam_root_access_key = cfg.get('iam root',"access_key")
config.iam_root_secret_key = cfg.get('iam root',"secret_key")
config.iam_root_user_id = cfg.get('iam root',"user_id")
config.iam_root_email = cfg.get('iam root',"email")
config.iam_alt_root_access_key = cfg.get('iam alt root',"access_key")
config.iam_alt_root_secret_key = cfg.get('iam alt root',"secret_key")
config.iam_alt_root_user_id = cfg.get('iam alt root',"user_id")
config.iam_alt_root_email = cfg.get('iam alt root',"email")
# vars from the fixtures section # vars from the fixtures section
template = cfg.get('fixtures', "bucket prefix", fallback='test-{random}-') try:
template = cfg.get('fixtures', "bucket prefix")
except (ConfigParser.NoOptionError):
template = 'test-{random}-'
prefix = choose_bucket_prefix(template=template) prefix = choose_bucket_prefix(template=template)
template = cfg.get('fixtures', "iam name prefix", fallback="s3-tests-")
config.iam_name_prefix = choose_bucket_prefix(template=template)
template = cfg.get('fixtures', "iam path prefix", fallback="/s3-tests/")
config.iam_path_prefix = choose_bucket_prefix(template=template)
if cfg.has_section("s3 cloud"):
get_cloud_config(cfg)
else:
config.cloud_storage_class = None
def setup():
alt_client = get_alt_client() alt_client = get_alt_client()
tenant_client = get_tenant_client() tenant_client = get_tenant_client()
nuke_prefixed_buckets(prefix=prefix) nuke_prefixed_buckets(prefix=prefix)
@ -303,93 +212,6 @@ def teardown():
nuke_prefixed_buckets(prefix=prefix) nuke_prefixed_buckets(prefix=prefix)
nuke_prefixed_buckets(prefix=prefix, client=alt_client) nuke_prefixed_buckets(prefix=prefix, client=alt_client)
nuke_prefixed_buckets(prefix=prefix, client=tenant_client) nuke_prefixed_buckets(prefix=prefix, client=tenant_client)
try:
iam_client = get_iam_client()
list_roles_resp = iam_client.list_roles()
for role in list_roles_resp['Roles']:
list_policies_resp = iam_client.list_role_policies(RoleName=role['RoleName'])
for policy in list_policies_resp['PolicyNames']:
del_policy_resp = iam_client.delete_role_policy(
RoleName=role['RoleName'],
PolicyName=policy
)
del_role_resp = iam_client.delete_role(RoleName=role['RoleName'])
list_oidc_resp = iam_client.list_open_id_connect_providers()
for oidcprovider in list_oidc_resp['OpenIDConnectProviderList']:
del_oidc_resp = iam_client.delete_open_id_connect_provider(
OpenIDConnectProviderArn=oidcprovider['Arn']
)
except:
pass
@pytest.fixture(scope="package")
def configfile():
configure()
return config
@pytest.fixture(autouse=True)
def setup_teardown(configfile):
setup()
yield
teardown()
def check_webidentity():
cfg = configparser.RawConfigParser()
try:
path = os.environ['S3TEST_CONF']
except KeyError:
raise RuntimeError(
'To run tests, point environment '
+ 'variable S3TEST_CONF to a config file.',
)
cfg.read(path)
if not cfg.has_section("webidentity"):
raise RuntimeError('Your config file is missing the "webidentity" section!')
config.webidentity_thumbprint = cfg.get('webidentity', "thumbprint")
config.webidentity_aud = cfg.get('webidentity', "aud")
config.webidentity_token = cfg.get('webidentity', "token")
config.webidentity_realm = cfg.get('webidentity', "KC_REALM")
config.webidentity_sub = cfg.get('webidentity', "sub")
config.webidentity_azp = cfg.get('webidentity', "azp")
config.webidentity_user_token = cfg.get('webidentity', "user_token")
def get_cloud_config(cfg):
config.cloud_host = cfg.get('s3 cloud',"host")
config.cloud_port = int(cfg.get('s3 cloud',"port"))
config.cloud_is_secure = cfg.getboolean('s3 cloud', "is_secure")
proto = 'https' if config.cloud_is_secure else 'http'
config.cloud_endpoint = "%s://%s:%d" % (proto, config.cloud_host, config.cloud_port)
config.cloud_access_key = cfg.get('s3 cloud',"access_key")
config.cloud_secret_key = cfg.get('s3 cloud',"secret_key")
try:
config.cloud_storage_class = cfg.get('s3 cloud', "cloud_storage_class")
except (configparser.NoSectionError, configparser.NoOptionError):
config.cloud_storage_class = None
try:
config.cloud_retain_head_object = cfg.get('s3 cloud',"retain_head_object")
except (configparser.NoSectionError, configparser.NoOptionError):
config.cloud_retain_head_object = None
try:
config.cloud_target_path = cfg.get('s3 cloud',"target_path")
except (configparser.NoSectionError, configparser.NoOptionError):
config.cloud_target_path = None
try:
config.cloud_target_storage_class = cfg.get('s3 cloud',"target_storage_class")
except (configparser.NoSectionError, configparser.NoOptionError):
config.cloud_target_storage_class = 'STANDARD'
try:
config.cloud_regular_storage_class = cfg.get('s3 cloud', "storage_class")
except (configparser.NoSectionError, configparser.NoOptionError):
config.cloud_regular_storage_class = None
def get_client(client_config=None): def get_client(client_config=None):
if client_config == None: if client_config == None:
@ -400,7 +222,6 @@ def get_client(client_config=None):
aws_secret_access_key=config.main_secret_key, aws_secret_access_key=config.main_secret_key,
endpoint_url=config.default_endpoint, endpoint_url=config.default_endpoint,
use_ssl=config.default_is_secure, use_ssl=config.default_is_secure,
verify=config.default_ssl_verify,
config=client_config) config=client_config)
return client return client
@ -410,69 +231,9 @@ def get_v2_client():
aws_secret_access_key=config.main_secret_key, aws_secret_access_key=config.main_secret_key,
endpoint_url=config.default_endpoint, endpoint_url=config.default_endpoint,
use_ssl=config.default_is_secure, use_ssl=config.default_is_secure,
verify=config.default_ssl_verify,
config=Config(signature_version='s3')) config=Config(signature_version='s3'))
return client return client
def get_sts_client(**kwargs):
kwargs.setdefault('aws_access_key_id', config.alt_access_key)
kwargs.setdefault('aws_secret_access_key', config.alt_secret_key)
kwargs.setdefault('config', Config(signature_version='s3v4'))
client = boto3.client(service_name='sts',
endpoint_url=config.default_endpoint,
region_name='',
use_ssl=config.default_is_secure,
verify=config.default_ssl_verify,
**kwargs)
return client
def get_iam_client(**kwargs):
kwargs.setdefault('aws_access_key_id', config.iam_access_key)
kwargs.setdefault('aws_secret_access_key', config.iam_secret_key)
client = boto3.client(service_name='iam',
endpoint_url=config.default_endpoint,
region_name='',
use_ssl=config.default_is_secure,
verify=config.default_ssl_verify,
**kwargs)
return client
def get_iam_s3client(**kwargs):
kwargs.setdefault('aws_access_key_id', config.iam_access_key)
kwargs.setdefault('aws_secret_access_key', config.iam_secret_key)
kwargs.setdefault('config', Config(signature_version='s3v4'))
client = boto3.client(service_name='s3',
endpoint_url=config.default_endpoint,
use_ssl=config.default_is_secure,
verify=config.default_ssl_verify,
**kwargs)
return client
def get_iam_root_client(**kwargs):
kwargs.setdefault('service_name', 'iam')
kwargs.setdefault('aws_access_key_id', config.iam_root_access_key)
kwargs.setdefault('aws_secret_access_key', config.iam_root_secret_key)
return boto3.client(endpoint_url=config.default_endpoint,
region_name='',
use_ssl=config.default_is_secure,
verify=config.default_ssl_verify,
**kwargs)
def get_iam_alt_root_client(**kwargs):
kwargs.setdefault('service_name', 'iam')
kwargs.setdefault('aws_access_key_id', config.iam_alt_root_access_key)
kwargs.setdefault('aws_secret_access_key', config.iam_alt_root_secret_key)
return boto3.client(endpoint_url=config.default_endpoint,
region_name='',
use_ssl=config.default_is_secure,
verify=config.default_ssl_verify,
**kwargs)
def get_alt_client(client_config=None): def get_alt_client(client_config=None):
if client_config == None: if client_config == None:
client_config = Config(signature_version='s3v4') client_config = Config(signature_version='s3v4')
@ -482,19 +243,6 @@ def get_alt_client(client_config=None):
aws_secret_access_key=config.alt_secret_key, aws_secret_access_key=config.alt_secret_key,
endpoint_url=config.default_endpoint, endpoint_url=config.default_endpoint,
use_ssl=config.default_is_secure, use_ssl=config.default_is_secure,
verify=config.default_ssl_verify,
config=client_config)
return client
def get_cloud_client(client_config=None):
if client_config == None:
client_config = Config(signature_version='s3v4')
client = boto3.client(service_name='s3',
aws_access_key_id=config.cloud_access_key,
aws_secret_access_key=config.cloud_secret_key,
endpoint_url=config.cloud_endpoint,
use_ssl=config.cloud_is_secure,
config=client_config) config=client_config)
return client return client
@ -507,50 +255,15 @@ def get_tenant_client(client_config=None):
aws_secret_access_key=config.tenant_secret_key, aws_secret_access_key=config.tenant_secret_key,
endpoint_url=config.default_endpoint, endpoint_url=config.default_endpoint,
use_ssl=config.default_is_secure, use_ssl=config.default_is_secure,
verify=config.default_ssl_verify,
config=client_config) config=client_config)
return client return client
def get_v2_tenant_client():
client_config = Config(signature_version='s3')
client = boto3.client(service_name='s3',
aws_access_key_id=config.tenant_access_key,
aws_secret_access_key=config.tenant_secret_key,
endpoint_url=config.default_endpoint,
use_ssl=config.default_is_secure,
verify=config.default_ssl_verify,
config=client_config)
return client
def get_tenant_iam_client():
client = boto3.client(service_name='iam',
region_name='us-east-1',
aws_access_key_id=config.tenant_access_key,
aws_secret_access_key=config.tenant_secret_key,
endpoint_url=config.default_endpoint,
verify=config.default_ssl_verify,
use_ssl=config.default_is_secure)
return client
def get_alt_iam_client():
client = boto3.client(service_name='iam',
region_name='',
aws_access_key_id=config.alt_access_key,
aws_secret_access_key=config.alt_secret_key,
endpoint_url=config.default_endpoint,
verify=config.default_ssl_verify,
use_ssl=config.default_is_secure)
return client
def get_unauthenticated_client(): def get_unauthenticated_client():
client = boto3.client(service_name='s3', client = boto3.client(service_name='s3',
aws_access_key_id='', aws_access_key_id='',
aws_secret_access_key='', aws_secret_access_key='',
endpoint_url=config.default_endpoint, endpoint_url=config.default_endpoint,
use_ssl=config.default_is_secure, use_ssl=config.default_is_secure,
verify=config.default_ssl_verify,
config=Config(signature_version=UNSIGNED)) config=Config(signature_version=UNSIGNED))
return client return client
@ -560,23 +273,9 @@ def get_bad_auth_client(aws_access_key_id='badauth'):
aws_secret_access_key='roflmao', aws_secret_access_key='roflmao',
endpoint_url=config.default_endpoint, endpoint_url=config.default_endpoint,
use_ssl=config.default_is_secure, use_ssl=config.default_is_secure,
verify=config.default_ssl_verify,
config=Config(signature_version='s3v4')) config=Config(signature_version='s3v4'))
return client return client
def get_svc_client(client_config=None, svc='s3'):
if client_config == None:
client_config = Config(signature_version='s3v4')
client = boto3.client(service_name=svc,
aws_access_key_id=config.main_access_key,
aws_secret_access_key=config.main_secret_key,
endpoint_url=config.default_endpoint,
use_ssl=config.default_is_secure,
verify=config.default_ssl_verify,
config=client_config)
return client
bucket_counter = itertools.count(1) bucket_counter = itertools.count(1)
def get_new_bucket_name(): def get_new_bucket_name():
@ -604,8 +303,7 @@ def get_new_bucket_resource(name=None):
aws_access_key_id=config.main_access_key, aws_access_key_id=config.main_access_key,
aws_secret_access_key=config.main_secret_key, aws_secret_access_key=config.main_secret_key,
endpoint_url=config.default_endpoint, endpoint_url=config.default_endpoint,
use_ssl=config.default_is_secure, use_ssl=config.default_is_secure)
verify=config.default_ssl_verify)
if name is None: if name is None:
name = get_new_bucket_name() name = get_new_bucket_name()
bucket = s3.Bucket(name) bucket = s3.Bucket(name)
@ -627,21 +325,6 @@ def get_new_bucket(client=None, name=None):
client.create_bucket(Bucket=name) client.create_bucket(Bucket=name)
return name return name
def get_parameter_name():
parameter_name=""
rand = ''.join(
random.choice(string.ascii_lowercase + string.digits)
for c in range(255)
)
while rand:
parameter_name = '{random}'.format(random=rand)
if len(parameter_name) <= 10:
return parameter_name
rand = rand[:-1]
return parameter_name
def get_sts_user_id():
return config.alt_user_id
def get_config_is_secure(): def get_config_is_secure():
return config.default_is_secure return config.default_is_secure
@ -655,9 +338,6 @@ def get_config_port():
def get_config_endpoint(): def get_config_endpoint():
return config.default_endpoint return config.default_endpoint
def get_config_ssl_verify():
return config.default_ssl_verify
def get_main_aws_access_key(): def get_main_aws_access_key():
return config.main_access_key return config.main_access_key
@ -706,77 +386,8 @@ def get_tenant_aws_secret_key():
def get_tenant_display_name(): def get_tenant_display_name():
return config.tenant_display_name return config.tenant_display_name
def get_tenant_name():
return config.tenant_name
def get_tenant_user_id(): def get_tenant_user_id():
return config.tenant_user_id return config.tenant_user_id
def get_tenant_email(): def get_tenant_email():
return config.tenant_email return config.tenant_email
def get_thumbprint():
return config.webidentity_thumbprint
def get_aud():
return config.webidentity_aud
def get_sub():
return config.webidentity_sub
def get_azp():
return config.webidentity_azp
def get_token():
return config.webidentity_token
def get_realm_name():
return config.webidentity_realm
def get_iam_name_prefix():
return config.iam_name_prefix
def make_iam_name(name):
return config.iam_name_prefix + name
def get_iam_path_prefix():
return config.iam_path_prefix
def get_iam_access_key():
return config.iam_access_key
def get_iam_secret_key():
return config.iam_secret_key
def get_iam_root_user_id():
return config.iam_root_user_id
def get_iam_root_email():
return config.iam_root_email
def get_iam_alt_root_user_id():
return config.iam_alt_root_user_id
def get_iam_alt_root_email():
return config.iam_alt_root_email
def get_user_token():
return config.webidentity_user_token
def get_cloud_storage_class():
return config.cloud_storage_class
def get_cloud_retain_head_object():
return config.cloud_retain_head_object
def get_cloud_regular_storage_class():
return config.cloud_regular_storage_class
def get_cloud_target_path():
return config.cloud_target_path
def get_cloud_target_storage_class():
return config.cloud_target_storage_class
def get_lc_debug_interval():
return config.lc_debug_interval
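For orientation, a minimal sketch of the kind of S3TEST_CONF file configure() above reads. Section and option names mirror the cfg.get() calls; every value is a placeholder, and a real run generally needs more options (kms keys, storage classes, the iam sections, and so on):

import configparser

SAMPLE = """
[DEFAULT]
host = localhost
port = 8000
is_secure = no

[fixtures]
bucket prefix = test-{random}-

[s3 main]
access_key = MAINACCESSKEY
secret_key = MAINSECRETKEY
user_id = testid
display_name = tester
email = main@example.com
api_name = default

[s3 alt]
access_key = ALTACCESSKEY
secret_key = ALTSECRETKEY
user_id = altid
display_name = alt-tester
email = alt@example.com

[s3 tenant]
access_key = TENANTACCESSKEY
secret_key = TENANTSECRETKEY
user_id = tenantid
display_name = tenant-tester
email = tenant@example.com
tenant = testx
"""

cfg = configparser.RawConfigParser()
cfg.read_string(SAMPLE)                   # same parser class the suite uses
print(cfg.get('s3 main', 'access_key'))   # -> MAINACCESSKEY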


@ -1,199 +0,0 @@
from botocore.exceptions import ClientError
import pytest
from . import (
configfile,
get_iam_root_client,
get_iam_root_user_id,
get_iam_root_email,
get_iam_alt_root_client,
get_iam_alt_root_user_id,
get_iam_alt_root_email,
get_iam_path_prefix,
)
def nuke_user_keys(client, name):
p = client.get_paginator('list_access_keys')
for response in p.paginate(UserName=name):
for key in response['AccessKeyMetadata']:
try:
client.delete_access_key(UserName=name, AccessKeyId=key['AccessKeyId'])
except:
pass
def nuke_user_policies(client, name):
p = client.get_paginator('list_user_policies')
for response in p.paginate(UserName=name):
for policy in response['PolicyNames']:
try:
client.delete_user_policy(UserName=name, PolicyName=policy)
except:
pass
def nuke_attached_user_policies(client, name):
p = client.get_paginator('list_attached_user_policies')
for response in p.paginate(UserName=name):
for policy in response['AttachedPolicies']:
try:
client.detach_user_policy(UserName=name, PolicyArn=policy['PolicyArn'])
except:
pass
def nuke_user(client, name):
# delete access keys, user policies, etc
try:
nuke_user_keys(client, name)
except:
pass
try:
nuke_user_policies(client, name)
except:
pass
try:
nuke_attached_user_policies(client, name)
except:
pass
client.delete_user(UserName=name)
def nuke_users(client, **kwargs):
p = client.get_paginator('list_users')
for response in p.paginate(**kwargs):
for user in response['Users']:
try:
nuke_user(client, user['UserName'])
except:
pass
def nuke_group_policies(client, name):
p = client.get_paginator('list_group_policies')
for response in p.paginate(GroupName=name):
for policy in response['PolicyNames']:
try:
client.delete_group_policy(GroupName=name, PolicyName=policy)
except:
pass
def nuke_attached_group_policies(client, name):
p = client.get_paginator('list_attached_group_policies')
for response in p.paginate(GroupName=name):
for policy in response['AttachedPolicies']:
try:
client.detach_group_policy(GroupName=name, PolicyArn=policy['PolicyArn'])
except:
pass
def nuke_group_users(client, name):
p = client.get_paginator('get_group')
for response in p.paginate(GroupName=name):
for user in response['Users']:
try:
client.remove_user_from_group(GroupName=name, UserName=user['UserName'])
except:
pass
def nuke_group(client, name):
# delete group policies and remove all users
try:
nuke_group_policies(client, name)
except:
pass
try:
nuke_attached_group_policies(client, name)
except:
pass
try:
nuke_group_users(client, name)
except:
pass
client.delete_group(GroupName=name)
def nuke_groups(client, **kwargs):
p = client.get_paginator('list_groups')
for response in p.paginate(**kwargs):
for user in response['Groups']:
try:
nuke_group(client, user['GroupName'])
except:
pass
def nuke_role_policies(client, name):
p = client.get_paginator('list_role_policies')
for response in p.paginate(RoleName=name):
for policy in response['PolicyNames']:
try:
client.delete_role_policy(RoleName=name, PolicyName=policy)
except:
pass
def nuke_attached_role_policies(client, name):
p = client.get_paginator('list_attached_role_policies')
for response in p.paginate(RoleName=name):
for policy in response['AttachedPolicies']:
try:
client.detach_role_policy(RoleName=name, PolicyArn=policy['PolicyArn'])
except:
pass
def nuke_role(client, name):
# delete role policies, etc
try:
nuke_role_policies(client, name)
except:
pass
try:
nuke_attached_role_policies(client, name)
except:
pass
client.delete_role(RoleName=name)
def nuke_roles(client, **kwargs):
p = client.get_paginator('list_roles')
for response in p.paginate(**kwargs):
for role in response['Roles']:
try:
nuke_role(client, role['RoleName'])
except:
pass
def nuke_oidc_providers(client, prefix):
result = client.list_open_id_connect_providers()
for provider in result['OpenIDConnectProviderList']:
arn = provider['Arn']
if f':oidc-provider{prefix}' in arn:
try:
client.delete_open_id_connect_provider(OpenIDConnectProviderArn=arn)
except:
pass
# fixture for iam account root user
@pytest.fixture
def iam_root(configfile):
client = get_iam_root_client()
try:
arn = client.get_user()['User']['Arn']
if not arn.endswith(':root'):
pytest.skip('[iam root] user does not have :root arn')
except ClientError as e:
pytest.skip('[iam root] user does not belong to an account')
yield client
nuke_users(client, PathPrefix=get_iam_path_prefix())
nuke_groups(client, PathPrefix=get_iam_path_prefix())
nuke_roles(client, PathPrefix=get_iam_path_prefix())
nuke_oidc_providers(client, get_iam_path_prefix())
# fixture for iam alt account root user
@pytest.fixture
def iam_alt_root(configfile):
client = get_iam_alt_root_client()
try:
arn = client.get_user()['User']['Arn']
if not arn.endswith(':root'):
pytest.skip('[iam alt root] user does not have :root arn')
except ClientError as e:
pytest.skip('[iam alt root] user does not belong to an account')
yield client
nuke_users(client, PathPrefix=get_iam_path_prefix())
nuke_roles(client, PathPrefix=get_iam_path_prefix())
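For orientation, a hedged sketch (not part of the suite) of how a test might consume the iam_root fixture above; the user name is made up, and make_iam_name() / get_iam_path_prefix() are the helpers from the package __init__ shown earlier:

def test_account_user_create(iam_root):
    # create a user under the test path and confirm it is listed; the fixture's
    # teardown (nuke_users above) removes anything left behind
    name = make_iam_name('user1')
    iam_root.create_user(UserName=name, Path=get_iam_path_prefix())
    users = iam_root.list_users(PathPrefix=get_iam_path_prefix())['Users']
    assert any(u['UserName'] == name for u in users)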


@ -37,10 +37,10 @@ class Policy(object):
         return json.dumps(policy_dict)
-def make_json_policy(action, resource, principal={"AWS": "*"}, effect="Allow", conditions=None):
+def make_json_policy(action, resource, principal={"AWS": "*"}, conditions=None):
     """
     Helper function to make single statement policies
     """
-    s = Statement(action, resource, principal, effect=effect, condition=conditions)
+    s = Statement(action, resource, principal, condition=conditions)
     p = Policy()
     return p.add_statement(s).to_json()
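A hedged usage sketch for the helper above: build an anonymous-read policy document and attach it to a fresh bucket with the suite's boto3 client (get_new_bucket() and get_client() come from the package __init__):

bucket_name = get_new_bucket()
resource = "arn:aws:s3:::{}/*".format(bucket_name)
policy_document = make_json_policy("s3:GetObject", resource)

client = get_client()
client.put_bucket_policy(Bucket=bucket_name, Policy=policy_document)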


@ -1,4 +1,4 @@
-#!/usr/bin/python
+#!/usr/bin/python3
 import boto3
 import os
 import random


@ -1,5 +1,7 @@
import boto3 import boto3
import pytest from nose.tools import eq_ as eq
from nose.plugins.attrib import attr
import nose
from botocore.exceptions import ClientError from botocore.exceptions import ClientError
from email.utils import formatdate from email.utils import formatdate
@ -8,8 +10,6 @@ from .utils import _get_status_and_error_code
from .utils import _get_status from .utils import _get_status
from . import ( from . import (
configfile,
setup_teardown,
get_client, get_client,
get_v2_client, get_v2_client,
get_new_bucket, get_new_bucket,
@ -149,97 +149,178 @@ def _remove_header_create_bad_bucket(remove, client=None):
return e return e
def tag(*tags):
def wrap(func):
for tag in tags:
setattr(func, tag, True)
return func
return wrap
# #
# common tests # common tests
# #
@pytest.mark.auth_common @tag('auth_common')
@attr(resource='object')
@attr(method='put')
@attr(operation='create w/invalid MD5')
@attr(assertion='fails 400')
def test_object_create_bad_md5_invalid_short(): def test_object_create_bad_md5_invalid_short():
e = _add_header_create_bad_object({'Content-MD5':'YWJyYWNhZGFicmE='}) e = _add_header_create_bad_object({'Content-MD5':'YWJyYWNhZGFicmE='})
status, error_code = _get_status_and_error_code(e.response) status, error_code = _get_status_and_error_code(e.response)
assert status == 400 eq(status, 400)
assert error_code == 'InvalidDigest' eq(error_code, 'InvalidDigest')
@pytest.mark.auth_common @tag('auth_common')
@attr(resource='object')
@attr(method='put')
@attr(operation='create w/mismatched MD5')
@attr(assertion='fails 400')
def test_object_create_bad_md5_bad(): def test_object_create_bad_md5_bad():
e = _add_header_create_bad_object({'Content-MD5':'rL0Y20xC+Fzt72VPzMSk2A=='}) e = _add_header_create_bad_object({'Content-MD5':'rL0Y20xC+Fzt72VPzMSk2A=='})
status, error_code = _get_status_and_error_code(e.response) status, error_code = _get_status_and_error_code(e.response)
assert status == 400 eq(status, 400)
assert error_code == 'BadDigest' eq(error_code, 'BadDigest')
@pytest.mark.auth_common @tag('auth_common')
@attr(resource='object')
@attr(method='put')
@attr(operation='create w/empty MD5')
@attr(assertion='fails 400')
def test_object_create_bad_md5_empty(): def test_object_create_bad_md5_empty():
e = _add_header_create_bad_object({'Content-MD5':''}) e = _add_header_create_bad_object({'Content-MD5':''})
status, error_code = _get_status_and_error_code(e.response) status, error_code = _get_status_and_error_code(e.response)
assert status == 400 eq(status, 400)
assert error_code == 'InvalidDigest' eq(error_code, 'InvalidDigest')
@pytest.mark.auth_common @tag('auth_common')
@attr(resource='object')
@attr(method='put')
@attr(operation='create w/no MD5 header')
@attr(assertion='succeeds')
def test_object_create_bad_md5_none(): def test_object_create_bad_md5_none():
bucket_name, key_name = _remove_header_create_object('Content-MD5') bucket_name, key_name = _remove_header_create_object('Content-MD5')
client = get_client() client = get_client()
client.put_object(Bucket=bucket_name, Key=key_name, Body='bar') client.put_object(Bucket=bucket_name, Key=key_name, Body='bar')
@pytest.mark.auth_common @tag('auth_common')
@attr(resource='object')
@attr(method='put')
@attr(operation='create w/Expect 200')
@attr(assertion='garbage, but S3 succeeds!')
def test_object_create_bad_expect_mismatch(): def test_object_create_bad_expect_mismatch():
bucket_name, key_name = _add_header_create_object({'Expect': 200}) bucket_name, key_name = _add_header_create_object({'Expect': 200})
client = get_client() client = get_client()
client.put_object(Bucket=bucket_name, Key=key_name, Body='bar') client.put_object(Bucket=bucket_name, Key=key_name, Body='bar')
@pytest.mark.auth_common @tag('auth_common')
@attr(resource='object')
@attr(method='put')
@attr(operation='create w/empty expect')
@attr(assertion='succeeds ... should it?')
def test_object_create_bad_expect_empty(): def test_object_create_bad_expect_empty():
bucket_name, key_name = _add_header_create_object({'Expect': ''}) bucket_name, key_name = _add_header_create_object({'Expect': ''})
client = get_client() client = get_client()
client.put_object(Bucket=bucket_name, Key=key_name, Body='bar') client.put_object(Bucket=bucket_name, Key=key_name, Body='bar')
@pytest.mark.auth_common @tag('auth_common')
@attr(resource='object')
@attr(method='put')
@attr(operation='create w/no expect')
@attr(assertion='succeeds')
def test_object_create_bad_expect_none(): def test_object_create_bad_expect_none():
bucket_name, key_name = _remove_header_create_object('Expect') bucket_name, key_name = _remove_header_create_object('Expect')
client = get_client() client = get_client()
client.put_object(Bucket=bucket_name, Key=key_name, Body='bar') client.put_object(Bucket=bucket_name, Key=key_name, Body='bar')
@pytest.mark.auth_common @tag('auth_common')
@attr(resource='object')
@attr(method='put')
@attr(operation='create w/empty content length')
@attr(assertion='fails 400')
# TODO: remove 'fails_on_rgw' and once we have learned how to remove the content-length header # TODO: remove 'fails_on_rgw' and once we have learned how to remove the content-length header
@pytest.mark.fails_on_rgw @attr('fails_on_rgw')
def test_object_create_bad_contentlength_empty(): def test_object_create_bad_contentlength_empty():
e = _add_header_create_bad_object({'Content-Length':''}) e = _add_header_create_bad_object({'Content-Length':''})
status, error_code = _get_status_and_error_code(e.response) status, error_code = _get_status_and_error_code(e.response)
assert status == 400 eq(status, 400)
@pytest.mark.auth_common @tag('auth_common')
@pytest.mark.fails_on_mod_proxy_fcgi @attr(resource='object')
@attr(method='put')
@attr(operation='create w/negative content length')
@attr(assertion='fails 400')
@attr('fails_on_mod_proxy_fcgi')
def test_object_create_bad_contentlength_negative(): def test_object_create_bad_contentlength_negative():
client = get_client() client = get_client()
bucket_name = get_new_bucket() bucket_name = get_new_bucket()
key_name = 'foo' key_name = 'foo'
e = assert_raises(ClientError, client.put_object, Bucket=bucket_name, Key=key_name, ContentLength=-1) e = assert_raises(ClientError, client.put_object, Bucket=bucket_name, Key=key_name, ContentLength=-1)
status = _get_status(e.response) status = _get_status(e.response)
assert status == 400 eq(status, 400)
@pytest.mark.auth_common @tag('auth_common')
@attr(resource='object')
@attr(method='put')
@attr(operation='create w/no content length')
@attr(assertion='fails 411')
# TODO: remove 'fails_on_rgw' and once we have learned how to remove the content-length header # TODO: remove 'fails_on_rgw' and once we have learned how to remove the content-length header
@pytest.mark.fails_on_rgw @attr('fails_on_rgw')
def test_object_create_bad_contentlength_none(): def test_object_create_bad_contentlength_none():
remove = 'Content-Length' remove = 'Content-Length'
e = _remove_header_create_bad_object('Content-Length') e = _remove_header_create_bad_object('Content-Length')
status, error_code = _get_status_and_error_code(e.response) status, error_code = _get_status_and_error_code(e.response)
assert status == 411 eq(status, 411)
assert error_code == 'MissingContentLength' eq(error_code, 'MissingContentLength')
@pytest.mark.auth_common @tag('auth_common')
@attr(resource='object')
@attr(method='put')
@attr(operation='create w/content length too long')
@attr(assertion='fails 400')
# TODO: remove 'fails_on_rgw' and once we have learned how to remove the content-length header
@attr('fails_on_rgw')
def test_object_create_bad_contentlength_mismatch_above():
content = 'bar'
length = len(content) + 1
client = get_client()
bucket_name = get_new_bucket()
key_name = 'foo'
headers = {'Content-Length': str(length)}
add_headers = (lambda **kwargs: kwargs['params']['headers'].update(headers))
client.meta.events.register('before-sign.s3.PutObject', add_headers)
e = assert_raises(ClientError, client.put_object, Bucket=bucket_name, Key=key_name, Body=content)
status, error_code = _get_status_and_error_code(e.response)
eq(status, 400)
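# Illustrative sketch, not part of the diff: the _add_header_* / _remove_header_*
# helpers used throughout this file tamper with requests by hooking botocore
# events on the client, roughly like this (bucket/key names and the bogus digest
# are made up):
#
#     client = get_client()
#     def add_headers(**kwargs):
#         kwargs['params']['headers']['Content-MD5'] = 'YWJyYWNhZGFicmE='
#     client.meta.events.register('before-call.s3.PutObject', add_headers)
#     client.put_object(Bucket='some-bucket', Key='foo', Body='bar')  # -> 400 InvalidDigest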
@tag('auth_common')
@attr(resource='object')
@attr(method='put')
@attr(operation='create w/content type text/plain')
@attr(assertion='succeeds')
def test_object_create_bad_contenttype_invalid(): def test_object_create_bad_contenttype_invalid():
bucket_name, key_name = _add_header_create_object({'Content-Type': 'text/plain'}) bucket_name, key_name = _add_header_create_object({'Content-Type': 'text/plain'})
client = get_client() client = get_client()
client.put_object(Bucket=bucket_name, Key=key_name, Body='bar') client.put_object(Bucket=bucket_name, Key=key_name, Body='bar')
@pytest.mark.auth_common @tag('auth_common')
@attr(resource='object')
@attr(method='put')
@attr(operation='create w/empty content type')
@attr(assertion='succeeds')
def test_object_create_bad_contenttype_empty(): def test_object_create_bad_contenttype_empty():
client = get_client() client = get_client()
key_name = 'foo' key_name = 'foo'
bucket_name = get_new_bucket() bucket_name = get_new_bucket()
client.put_object(Bucket=bucket_name, Key=key_name, Body='bar', ContentType='') client.put_object(Bucket=bucket_name, Key=key_name, Body='bar', ContentType='')
@pytest.mark.auth_common @tag('auth_common')
@attr(resource='object')
@attr(method='put')
@attr(operation='create w/no content type')
@attr(assertion='succeeds')
def test_object_create_bad_contenttype_none(): def test_object_create_bad_contenttype_none():
bucket_name = get_new_bucket() bucket_name = get_new_bucket()
key_name = 'foo' key_name = 'foo'
@ -248,26 +329,38 @@ def test_object_create_bad_contenttype_none():
client.put_object(Bucket=bucket_name, Key=key_name, Body='bar') client.put_object(Bucket=bucket_name, Key=key_name, Body='bar')
@pytest.mark.auth_common @tag('auth_common')
@attr(resource='object')
@attr(method='put')
@attr(operation='create w/empty authorization')
@attr(assertion='fails 403')
# TODO: remove 'fails_on_rgw' and once we have learned how to remove the authorization header # TODO: remove 'fails_on_rgw' and once we have learned how to remove the authorization header
@pytest.mark.fails_on_rgw @attr('fails_on_rgw')
def test_object_create_bad_authorization_empty(): def test_object_create_bad_authorization_empty():
e = _add_header_create_bad_object({'Authorization': ''}) e = _add_header_create_bad_object({'Authorization': ''})
status, error_code = _get_status_and_error_code(e.response) status, error_code = _get_status_and_error_code(e.response)
assert status == 403 eq(status, 403)
@pytest.mark.auth_common @tag('auth_common')
@attr(resource='object')
@attr(method='put')
@attr(operation='create w/date and x-amz-date')
@attr(assertion='succeeds')
# TODO: remove 'fails_on_rgw' and once we have learned how to pass both the 'Date' and 'X-Amz-Date' header during signing and not 'X-Amz-Date' before # TODO: remove 'fails_on_rgw' and once we have learned how to pass both the 'Date' and 'X-Amz-Date' header during signing and not 'X-Amz-Date' before
@pytest.mark.fails_on_rgw @attr('fails_on_rgw')
def test_object_create_date_and_amz_date(): def test_object_create_date_and_amz_date():
date = formatdate(usegmt=True) date = formatdate(usegmt=True)
    bucket_name, key_name = _add_header_create_object({'Date': date, 'X-Amz-Date': date})
    client = get_client()
    client.put_object(Bucket=bucket_name, Key=key_name, Body='bar')

@pytest.mark.auth_common
# TODO: remove 'fails_on_rgw' and once we have learned how to pass both the 'Date' and 'X-Amz-Date' header during signing and not 'X-Amz-Date' before
@pytest.mark.fails_on_rgw
def test_object_create_amz_date_and_no_date():
    date = formatdate(usegmt=True)
    bucket_name, key_name = _add_header_create_object({'Date': '', 'X-Amz-Date': date})
@@ -275,24 +368,36 @@ def test_object_create_amz_date_and_no_date():
    client.put_object(Bucket=bucket_name, Key=key_name, Body='bar')

# the teardown is really messed up here. check it out
@pytest.mark.auth_common
# TODO: remove 'fails_on_rgw' and once we have learned how to remove the authorization header
@pytest.mark.fails_on_rgw
def test_object_create_bad_authorization_none():
    e = _remove_header_create_bad_object('Authorization')
    status, error_code = _get_status_and_error_code(e.response)
    assert status == 403

@pytest.mark.auth_common
# TODO: remove 'fails_on_rgw' and once we have learned how to remove the content-length header
@pytest.mark.fails_on_rgw
def test_bucket_create_contentlength_none():
    remove = 'Content-Length'
    _remove_header_create_bucket(remove)

@pytest.mark.auth_common
# TODO: remove 'fails_on_rgw' and once we have learned how to remove the content-length header
@pytest.mark.fails_on_rgw
def test_object_acl_create_contentlength_none():
    bucket_name = get_new_bucket()
    client = get_client()
@@ -306,7 +411,11 @@ def test_object_acl_create_contentlength_none():
    client.meta.events.register('before-call.s3.PutObjectAcl', remove_header)
    client.put_object_acl(Bucket=bucket_name, Key='foo', ACL='public-read')
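# The _add_header_* / _remove_header_* helpers used throughout this file rely on
# botocore's event hooks, as the 'before-call' registration above shows. A minimal
# sketch of that pattern (the handler and header names here are illustrative, not the
# suite's exact helpers), assuming a client from get_client():
#
#     def remove_content_length(**kwargs):
#         # 'params' carries the fully built request; drop the header before it is sent
#         kwargs['params']['headers'].pop('Content-Length', None)
#
#     bucket_name = get_new_bucket()
#     client = get_client()
#     client.meta.events.register('before-call.s3.PutObject', remove_content_length)
#     client.put_object(Bucket=bucket_name, Key='foo', Body='bar')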
@pytest.mark.auth_common
def test_bucket_put_bad_canned_acl():
    bucket_name = get_new_bucket()
    client = get_client()
@@ -317,9 +426,13 @@ def test_bucket_put_bad_canned_acl():
    e = assert_raises(ClientError, client.put_bucket_acl, Bucket=bucket_name, ACL='public-read')
    status = _get_status(e.response)
    assert status == 400

@pytest.mark.auth_common
def test_bucket_create_bad_expect_mismatch():
    bucket_name = get_new_bucket_name()
    client = get_client()
@@ -329,67 +442,99 @@ def test_bucket_create_bad_expect_mismatch():
    client.meta.events.register('before-call.s3.CreateBucket', add_headers)
    client.create_bucket(Bucket=bucket_name)

@pytest.mark.auth_common
def test_bucket_create_bad_expect_empty():
    headers = {'Expect': ''}
    _add_header_create_bucket(headers)

@pytest.mark.auth_common
# TODO: The request isn't even making it to the RGW past the frontend
# This test had 'fails_on_rgw' before the move to boto3
@pytest.mark.fails_on_rgw
def test_bucket_create_bad_contentlength_empty():
    headers = {'Content-Length': ''}
    e = _add_header_create_bad_bucket(headers)
    status, error_code = _get_status_and_error_code(e.response)
    assert status == 400

@pytest.mark.auth_common
@pytest.mark.fails_on_mod_proxy_fcgi
def test_bucket_create_bad_contentlength_negative():
    headers = {'Content-Length': '-1'}
    e = _add_header_create_bad_bucket(headers)
    status = _get_status(e.response)
    assert status == 400

@pytest.mark.auth_common
# TODO: remove 'fails_on_rgw' and once we have learned how to remove the content-length header
@pytest.mark.fails_on_rgw
def test_bucket_create_bad_contentlength_none():
    remove = 'Content-Length'
    _remove_header_create_bucket(remove)

@pytest.mark.auth_common
# TODO: remove 'fails_on_rgw' and once we have learned how to manipulate the authorization header
@pytest.mark.fails_on_rgw
def test_bucket_create_bad_authorization_empty():
    headers = {'Authorization': ''}
    e = _add_header_create_bad_bucket(headers)
    status, error_code = _get_status_and_error_code(e.response)
    assert status == 403
    assert error_code == 'AccessDenied'

@pytest.mark.auth_common
# TODO: remove 'fails_on_rgw' and once we have learned how to manipulate the authorization header
@pytest.mark.fails_on_rgw
def test_bucket_create_bad_authorization_none():
    e = _remove_header_create_bad_bucket('Authorization')
    status, error_code = _get_status_and_error_code(e.response)
    assert status == 403
    assert error_code == 'AccessDenied'

@pytest.mark.auth_aws2
def test_object_create_bad_md5_invalid_garbage_aws2():
    v2_client = get_v2_client()
    headers = {'Content-MD5': 'AWS HAHAHA'}
    e = _add_header_create_bad_object(headers, v2_client)
    status, error_code = _get_status_and_error_code(e.response)
    assert status == 400
    assert error_code == 'InvalidDigest'
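# For contrast with the garbage value above: a well-formed Content-MD5 is the
# base64-encoded (not hex) MD5 digest of the request body, e.g.:
#
#     import base64, hashlib
#     content_md5 = base64.b64encode(hashlib.md5(b'bar').digest()).decode('ascii')
#     # 'N7UdGUp1E+RbVvZSTy1R8g==' for the body b'bar'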
@pytest.mark.auth_aws2
# TODO: remove 'fails_on_rgw' and once we have learned how to manipulate the Content-Length header
@pytest.mark.fails_on_rgw
def test_object_create_bad_contentlength_mismatch_below_aws2():
    v2_client = get_v2_client()
    content = 'bar'
@@ -397,176 +542,252 @@ def test_object_create_bad_contentlength_mismatch_below_aws2():
    headers = {'Content-Length': str(length)}
    e = _add_header_create_bad_object(headers, v2_client)
    status, error_code = _get_status_and_error_code(e.response)
    assert status == 400
    assert error_code == 'BadDigest'

@pytest.mark.auth_aws2
# TODO: remove 'fails_on_rgw' and once we have learned how to manipulate the authorization header
@pytest.mark.fails_on_rgw
def test_object_create_bad_authorization_incorrect_aws2():
    v2_client = get_v2_client()
    headers = {'Authorization': 'AWS AKIAIGR7ZNNBHC5BKSUB:FWeDfwojDSdS2Ztmpfeubhd9isU='}
    e = _add_header_create_bad_object(headers, v2_client)
    status, error_code = _get_status_and_error_code(e.response)
    assert status == 403
    assert error_code == 'InvalidDigest'

@pytest.mark.auth_aws2
# TODO: remove 'fails_on_rgw' and once we have learned how to manipulate the authorization header
@pytest.mark.fails_on_rgw
def test_object_create_bad_authorization_invalid_aws2():
    v2_client = get_v2_client()
    headers = {'Authorization': 'AWS HAHAHA'}
    e = _add_header_create_bad_object(headers, v2_client)
    status, error_code = _get_status_and_error_code(e.response)
    assert status == 400
    assert error_code == 'InvalidArgument'

@pytest.mark.auth_aws2
def test_object_create_bad_ua_empty_aws2():
    v2_client = get_v2_client()
    headers = {'User-Agent': ''}
    bucket_name, key_name = _add_header_create_object(headers, v2_client)
    v2_client.put_object(Bucket=bucket_name, Key=key_name, Body='bar')

@pytest.mark.auth_aws2
def test_object_create_bad_ua_none_aws2():
    v2_client = get_v2_client()
    remove = 'User-Agent'
    bucket_name, key_name = _remove_header_create_object(remove, v2_client)
    v2_client.put_object(Bucket=bucket_name, Key=key_name, Body='bar')

@pytest.mark.auth_aws2
def test_object_create_bad_date_invalid_aws2():
    v2_client = get_v2_client()
    headers = {'x-amz-date': 'Bad Date'}
    e = _add_header_create_bad_object(headers, v2_client)
    status, error_code = _get_status_and_error_code(e.response)
    assert status == 403
    assert error_code == 'AccessDenied'

@pytest.mark.auth_aws2
def test_object_create_bad_date_empty_aws2():
    v2_client = get_v2_client()
    headers = {'x-amz-date': ''}
    e = _add_header_create_bad_object(headers, v2_client)
    status, error_code = _get_status_and_error_code(e.response)
    assert status == 403
    assert error_code == 'AccessDenied'
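# The valid dates used elsewhere in this file come from email.utils.formatdate; for
# these aws2 (SigV2) tests a well-formed x-amz-date is an RFC 1123 timestamp, e.g.:
#
#     from email.utils import formatdate
#     headers = {'x-amz-date': formatdate(usegmt=True)}
#     # e.g. 'Tue, 07 Jul 2020 21:53:04 GMT'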
@pytest.mark.auth_aws2
# TODO: remove 'fails_on_rgw' and once we have learned how to remove the date header
@pytest.mark.fails_on_rgw
def test_object_create_bad_date_none_aws2():
    v2_client = get_v2_client()
    remove = 'x-amz-date'
    e = _remove_header_create_bad_object(remove, v2_client)
    status, error_code = _get_status_and_error_code(e.response)
    assert status == 403
    assert error_code == 'AccessDenied'

@pytest.mark.auth_aws2
def test_object_create_bad_date_before_today_aws2():
    v2_client = get_v2_client()
    headers = {'x-amz-date': 'Tue, 07 Jul 2010 21:53:04 GMT'}
    e = _add_header_create_bad_object(headers, v2_client)
    status, error_code = _get_status_and_error_code(e.response)
    assert status == 403
    assert error_code == 'RequestTimeTooSkewed'

@pytest.mark.auth_aws2
def test_object_create_bad_date_before_epoch_aws2():
    v2_client = get_v2_client()
    headers = {'x-amz-date': 'Tue, 07 Jul 1950 21:53:04 GMT'}
    e = _add_header_create_bad_object(headers, v2_client)
    status, error_code = _get_status_and_error_code(e.response)
    assert status == 403
    assert error_code == 'AccessDenied'

@pytest.mark.auth_aws2
def test_object_create_bad_date_after_end_aws2():
    v2_client = get_v2_client()
    headers = {'x-amz-date': 'Tue, 07 Jul 9999 21:53:04 GMT'}
    e = _add_header_create_bad_object(headers, v2_client)
    status, error_code = _get_status_and_error_code(e.response)
    assert status == 403
    assert error_code == 'RequestTimeTooSkewed'

@pytest.mark.auth_aws2
# TODO: remove 'fails_on_rgw' and once we have learned how to remove the date header
@pytest.mark.fails_on_rgw
def test_bucket_create_bad_authorization_invalid_aws2():
    v2_client = get_v2_client()
    headers = {'Authorization': 'AWS HAHAHA'}
    e = _add_header_create_bad_bucket(headers, v2_client)
    status, error_code = _get_status_and_error_code(e.response)
    assert status == 400
    assert error_code == 'InvalidArgument'
@pytest.mark.auth_aws2
def test_bucket_create_bad_ua_empty_aws2():
    v2_client = get_v2_client()
    headers = {'User-Agent': ''}
    _add_header_create_bucket(headers, v2_client)

@pytest.mark.auth_aws2
def test_bucket_create_bad_ua_none_aws2():
    v2_client = get_v2_client()
    remove = 'User-Agent'
    _remove_header_create_bucket(remove, v2_client)

@pytest.mark.auth_aws2
def test_bucket_create_bad_date_invalid_aws2():
    v2_client = get_v2_client()
    headers = {'x-amz-date': 'Bad Date'}
    e = _add_header_create_bad_bucket(headers, v2_client)
    status, error_code = _get_status_and_error_code(e.response)
    assert status == 403
    assert error_code == 'AccessDenied'

@pytest.mark.auth_aws2
def test_bucket_create_bad_date_empty_aws2():
    v2_client = get_v2_client()
    headers = {'x-amz-date': ''}
    e = _add_header_create_bad_bucket(headers, v2_client)
    status, error_code = _get_status_and_error_code(e.response)
    assert status == 403
    assert error_code == 'AccessDenied'

@pytest.mark.auth_aws2
# TODO: remove 'fails_on_rgw' and once we have learned how to remove the date header
@pytest.mark.fails_on_rgw
def test_bucket_create_bad_date_none_aws2():
    v2_client = get_v2_client()
    remove = 'x-amz-date'
    e = _remove_header_create_bad_bucket(remove, v2_client)
    status, error_code = _get_status_and_error_code(e.response)
    assert status == 403
    assert error_code == 'AccessDenied'

@pytest.mark.auth_aws2
def test_bucket_create_bad_date_before_today_aws2():
    v2_client = get_v2_client()
    headers = {'x-amz-date': 'Tue, 07 Jul 2010 21:53:04 GMT'}
    e = _add_header_create_bad_bucket(headers, v2_client)
    status, error_code = _get_status_and_error_code(e.response)
    assert status == 403
    assert error_code == 'RequestTimeTooSkewed'

@pytest.mark.auth_aws2
def test_bucket_create_bad_date_after_today_aws2():
    v2_client = get_v2_client()
    headers = {'x-amz-date': 'Tue, 07 Jul 2030 21:53:04 GMT'}
    e = _add_header_create_bad_bucket(headers, v2_client)
    status, error_code = _get_status_and_error_code(e.response)
    assert status == 403
    assert error_code == 'RequestTimeTooSkewed'

@pytest.mark.auth_aws2
def test_bucket_create_bad_date_before_epoch_aws2():
    v2_client = get_v2_client()
    headers = {'x-amz-date': 'Tue, 07 Jul 1950 21:53:04 GMT'}
    e = _add_header_create_bad_bucket(headers, v2_client)
    status, error_code = _get_status_and_error_code(e.response)
    assert status == 403
    assert error_code == 'AccessDenied'

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

@@ -1,159 +0,0 @@
import json
import pytest
from botocore.exceptions import ClientError
from . import (
configfile,
get_iam_root_client,
get_iam_alt_root_client,
get_new_bucket_name,
get_prefix,
nuke_prefixed_buckets,
)
from .iam import iam_root, iam_alt_root
from .utils import assert_raises, _get_status_and_error_code
def get_new_topic_name():
return get_new_bucket_name()
def nuke_topics(client, prefix):
p = client.get_paginator('list_topics')
for response in p.paginate():
for topic in response['Topics']:
arn = topic['TopicArn']
if prefix not in arn:
pass
try:
client.delete_topic(TopicArn=arn)
except:
pass
@pytest.fixture
def sns(iam_root):
client = get_iam_root_client(service_name='sns')
yield client
nuke_topics(client, get_prefix())
@pytest.fixture
def sns_alt(iam_alt_root):
client = get_iam_alt_root_client(service_name='sns')
yield client
nuke_topics(client, get_prefix())
@pytest.fixture
def s3(iam_root):
client = get_iam_root_client(service_name='s3')
yield client
nuke_prefixed_buckets(get_prefix(), client)
@pytest.fixture
def s3_alt(iam_alt_root):
client = get_iam_alt_root_client(service_name='s3')
yield client
nuke_prefixed_buckets(get_prefix(), client)
@pytest.mark.iam_account
@pytest.mark.sns
def test_account_topic(sns):
name = get_new_topic_name()
response = sns.create_topic(Name=name)
arn = response['TopicArn']
assert arn.startswith('arn:aws:sns:')
assert arn.endswith(f':{name}')
response = sns.list_topics()
assert arn in [p['TopicArn'] for p in response['Topics']]
sns.set_topic_attributes(TopicArn=arn, AttributeName='Policy', AttributeValue='')
response = sns.get_topic_attributes(TopicArn=arn)
assert 'Attributes' in response
sns.delete_topic(TopicArn=arn)
response = sns.list_topics()
assert arn not in [p['TopicArn'] for p in response['Topics']]
with pytest.raises(sns.exceptions.NotFoundException):
sns.get_topic_attributes(TopicArn=arn)
sns.delete_topic(TopicArn=arn)
@pytest.mark.iam_account
@pytest.mark.sns
def test_cross_account_topic(sns, sns_alt):
name = get_new_topic_name()
arn = sns.create_topic(Name=name)['TopicArn']
# not visible to any alt user apis
with pytest.raises(sns.exceptions.NotFoundException):
sns_alt.get_topic_attributes(TopicArn=arn)
with pytest.raises(sns.exceptions.NotFoundException):
sns_alt.set_topic_attributes(TopicArn=arn, AttributeName='Policy', AttributeValue='')
# delete returns success
sns_alt.delete_topic(TopicArn=arn)
response = sns_alt.list_topics()
assert arn not in [p['TopicArn'] for p in response['Topics']]
@pytest.mark.iam_account
@pytest.mark.sns
def test_account_topic_publish(sns, s3):
name = get_new_topic_name()
response = sns.create_topic(Name=name)
topic_arn = response['TopicArn']
bucket = get_new_bucket_name()
s3.create_bucket(Bucket=bucket)
config = {'TopicConfigurations': [{
'Id': 'id',
'TopicArn': topic_arn,
'Events': [ 's3:ObjectCreated:*' ],
}]}
s3.put_bucket_notification_configuration(
Bucket=bucket, NotificationConfiguration=config)
@pytest.mark.iam_account
@pytest.mark.iam_cross_account
@pytest.mark.sns
def test_cross_account_topic_publish(sns, s3_alt, iam_alt_root):
name = get_new_topic_name()
response = sns.create_topic(Name=name)
topic_arn = response['TopicArn']
bucket = get_new_bucket_name()
s3_alt.create_bucket(Bucket=bucket)
config = {'TopicConfigurations': [{
'Id': 'id',
'TopicArn': topic_arn,
'Events': [ 's3:ObjectCreated:*' ],
}]}
# expect AccessDenied because no resource policy allows cross-account access
e = assert_raises(ClientError, s3_alt.put_bucket_notification_configuration,
Bucket=bucket, NotificationConfiguration=config)
status, error_code = _get_status_and_error_code(e.response)
assert status == 403
assert error_code == 'AccessDenied'
# add topic policy to allow the alt user
alt_principal = iam_alt_root.get_user()['User']['Arn']
policy = json.dumps({
'Version': '2012-10-17',
'Statement': [{
'Effect': 'Allow',
'Principal': {'AWS': alt_principal},
'Action': 'sns:Publish',
'Resource': topic_arn
}]
})
sns.set_topic_attributes(TopicArn=topic_arn, AttributeName='Policy',
AttributeValue=policy)
s3_alt.put_bucket_notification_configuration(
Bucket=bucket, NotificationConfiguration=config)

File diff suppressed because it is too large

@@ -1,9 +1,11 @@
from . import utils

def test_generate():
    FIVE_MB = 5 * 1024 * 1024
    assert len(''.join(utils.generate_random(0))) == 0
    assert len(''.join(utils.generate_random(1))) == 1
    assert len(''.join(utils.generate_random(FIVE_MB - 1))) == FIVE_MB - 1
    assert len(''.join(utils.generate_random(FIVE_MB))) == FIVE_MB
    assert len(''.join(utils.generate_random(FIVE_MB + 1))) == FIVE_MB + 1


@@ -3,6 +3,8 @@ import requests
import string
import time

def assert_raises(excClass, callableObj, *args, **kwargs):
    """
    Like unittest.TestCase.assertRaises, but returns the exception.
@@ -26,11 +28,11 @@ def generate_random(size, part_size=5*1024*1024):
    chunk = 1024
    allowed = string.ascii_letters
    for x in range(0, size, part_size):
        strpart = ''.join([allowed[random.randint(0, len(allowed) - 1)] for _ in range(chunk)])
        s = ''
        left = size - x
        this_part_size = min(left, part_size)
        for y in range(this_part_size // chunk):
            s = s + strpart
        s = s + strpart[:(this_part_size % chunk)]
        yield s


@@ -0,0 +1,376 @@
from boto.s3.connection import S3Connection
from boto.exception import BotoServerError
from boto.s3.key import Key
from httplib import BadStatusLine
from optparse import OptionParser
from .. import common
import traceback
import itertools
import random
import string
import struct
import yaml
import sys
import re
class DecisionGraphError(Exception):
""" Raised when a node in a graph tries to set a header or
key that was previously set by another node
"""
def __init__(self, value):
self.value = value
def __str__(self):
return repr(self.value)
class RecursionError(Exception):
"""Runaway recursion in string formatting"""
def __init__(self, msg):
self.msg = msg
def __str__(self):
return '{0.__doc__}: {0.msg!r}'.format(self)
def assemble_decision(decision_graph, prng):
""" Take in a graph describing the possible decision space and a random
number generator and traverse the graph to build a decision
"""
return descend_graph(decision_graph, 'start', prng)
def descend_graph(decision_graph, node_name, prng):
""" Given a graph and a particular node in that graph, set the values in
the node's "set" list, pick a choice from the "choice" list, and
recurse. Finally, return dictionary of values
"""
node = decision_graph[node_name]
try:
choice = make_choice(node['choices'], prng)
if choice == '':
decision = {}
else:
decision = descend_graph(decision_graph, choice, prng)
except IndexError:
decision = {}
for key, choices in node['set'].iteritems():
if key in decision:
raise DecisionGraphError("Node %s tried to set '%s', but that key was already set by a lower node!" %(node_name, key))
decision[key] = make_choice(choices, prng)
if 'headers' in node:
decision.setdefault('headers', [])
for desc in node['headers']:
try:
(repetition_range, header, value) = desc
except ValueError:
(header, value) = desc
repetition_range = '1'
try:
size_min, size_max = repetition_range.split('-', 1)
except ValueError:
size_min = size_max = repetition_range
size_min = int(size_min)
size_max = int(size_max)
num_reps = prng.randint(size_min, size_max)
if header in [h for h, v in decision['headers']]:
raise DecisionGraphError("Node %s tried to add header '%s', but that header already exists!" %(node_name, header))
for _ in xrange(num_reps):
decision['headers'].append([header, value])
return decision
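# Illustrative node layout consumed by descend_graph (the shape follows the parsing
# above; the names are made up): each node may carry 'set', 'choices', and an optional
# 'headers' list whose entries are either [repetition-range, name, value] or [name, value]:
#
#     graph['example_node'] = {
#         'set': {'key1': 'value1'},
#         'headers': [
#             ['1-2', 'x-example-{random 5 digits}', '{random 10 printable}'],
#         ],
#         'choices': ['leaf'],
#     }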
def make_choice(choices, prng):
""" Given a list of (possibly weighted) options or just a single option!,
choose one of the options taking weights into account and return the
choice
"""
if isinstance(choices, str):
return choices
weighted_choices = []
for option in choices:
if option is None:
weighted_choices.append('')
continue
try:
(weight, value) = option.split(None, 1)
weight = int(weight)
except ValueError:
weight = 1
value = option
if value == 'null' or value == 'None':
value = ''
for _ in xrange(weight):
weighted_choices.append(value)
return prng.choice(weighted_choices)
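# Example of the weighted-choice format make_choice accepts (the weight is an optional
# leading integer; unweighted entries default to weight 1):
#
#     make_choice(['3 foo', '2 bar', 'baz'], prng)
#     # picks 'foo' 3/6, 'bar' 2/6 and 'baz' 1/6 of the time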
def expand_headers(decision, prng):
expanded_headers = {}
for header in decision['headers']:
h = expand(decision, header[0], prng)
v = expand(decision, header[1], prng)
expanded_headers[h] = v
return expanded_headers
def expand(decision, value, prng):
c = itertools.count()
fmt = RepeatExpandingFormatter(prng)
new = fmt.vformat(value, [], decision)
return new
class RepeatExpandingFormatter(string.Formatter):
charsets = {
'printable_no_whitespace': string.printable.translate(None, string.whitespace),
'printable': string.printable,
'punctuation': string.punctuation,
'whitespace': string.whitespace,
'digits': string.digits
}
def __init__(self, prng, _recursion=0):
super(RepeatExpandingFormatter, self).__init__()
# this class assumes it is always instantiated once per
# formatting; use that to detect runaway recursion
self.prng = prng
self._recursion = _recursion
def get_value(self, key, args, kwargs):
fields = key.split(None, 1)
fn = getattr(self, 'special_{name}'.format(name=fields[0]), None)
if fn is not None:
if len(fields) == 1:
fields.append('')
return fn(fields[1])
val = super(RepeatExpandingFormatter, self).get_value(key, args, kwargs)
if self._recursion > 5:
raise RecursionError(key)
fmt = self.__class__(self.prng, _recursion=self._recursion+1)
n = fmt.vformat(val, args, kwargs)
return n
def special_random(self, args):
arg_list = args.split()
try:
size_min, size_max = arg_list[0].split('-', 1)
except ValueError:
size_min = size_max = arg_list[0]
except IndexError:
size_min = '0'
size_max = '1000'
size_min = int(size_min)
size_max = int(size_max)
length = self.prng.randint(size_min, size_max)
try:
charset_arg = arg_list[1]
except IndexError:
charset_arg = 'printable'
if charset_arg == 'binary' or charset_arg == 'binary_no_whitespace':
num_bytes = length + 8
tmplist = [self.prng.getrandbits(64) for _ in xrange(num_bytes / 8)]
tmpstring = struct.pack((num_bytes / 8) * 'Q', *tmplist)
if charset_arg == 'binary_no_whitespace':
tmpstring = ''.join(c for c in tmpstring if c not in string.whitespace)
return tmpstring[0:length]
else:
charset = self.charsets[charset_arg]
return ''.join([self.prng.choice(charset) for _ in xrange(length)]) # Won't scale nicely
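# The '{random ...}' template handled by special_random takes an optional length or
# length range and an optional charset, falling back to 0-1000 printable characters:
#
#     expand({}, '{random 10-15 printable}', prng)   # 10-15 printable characters
#     expand({}, '{random 20 digits}', prng)         # exactly 20 digits
#     expand({}, '{random}', prng)                   # 0-1000 printable characters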
def parse_options():
parser = OptionParser()
parser.add_option('-O', '--outfile', help='write output to FILE. Defaults to STDOUT', metavar='FILE')
parser.add_option('--seed', dest='seed', type='int', help='initial seed for the random number generator')
parser.add_option('--seed-file', dest='seedfile', help='read seeds for specific requests from FILE', metavar='FILE')
parser.add_option('-n', dest='num_requests', type='int', help='issue NUM requests before stopping', metavar='NUM')
parser.add_option('-v', '--verbose', dest='verbose', action="store_true", help='turn on verbose output')
parser.add_option('-d', '--debug', dest='debug', action="store_true", help='turn on debugging (very verbose) output')
parser.add_option('--decision-graph', dest='graph_filename', help='file in which to find the request decision graph')
parser.add_option('--no-cleanup', dest='cleanup', action="store_false", help='turn off teardown so you can peruse the state of buckets after testing')
parser.set_defaults(num_requests=5)
parser.set_defaults(cleanup=True)
parser.set_defaults(graph_filename='request_decision_graph.yml')
return parser.parse_args()
def randomlist(seed=None):
""" Returns an infinite generator of random numbers
"""
rng = random.Random(seed)
while True:
yield rng.randint(0,100000) #100,000 seeds is enough, right?
def populate_buckets(conn, alt):
""" Creates buckets and keys for fuzz testing and sets appropriate
permissions. Returns a dictionary of the bucket and key names.
"""
breadable = common.get_new_bucket(alt)
bwritable = common.get_new_bucket(alt)
bnonreadable = common.get_new_bucket(alt)
oreadable = Key(breadable)
owritable = Key(bwritable)
ononreadable = Key(breadable)
oreadable.set_contents_from_string('oreadable body')
owritable.set_contents_from_string('owritable body')
ononreadable.set_contents_from_string('ononreadable body')
breadable.set_acl('public-read')
bwritable.set_acl('public-read-write')
bnonreadable.set_acl('private')
oreadable.set_acl('public-read')
owritable.set_acl('public-read-write')
ononreadable.set_acl('private')
return dict(
bucket_readable=breadable.name,
bucket_writable=bwritable.name,
bucket_not_readable=bnonreadable.name,
bucket_not_writable=breadable.name,
object_readable=oreadable.key,
object_writable=owritable.key,
object_not_readable=ononreadable.key,
object_not_writable=oreadable.key,
)
def _main():
""" The main script
"""
(options, args) = parse_options()
random.seed(options.seed if options.seed else None)
s3_connection = common.s3.main
alt_connection = common.s3.alt
if options.outfile:
OUT = open(options.outfile, 'w')
else:
OUT = sys.stderr
VERBOSE = DEBUG = open('/dev/null', 'w')
if options.verbose:
VERBOSE = OUT
if options.debug:
DEBUG = OUT
VERBOSE = OUT
request_seeds = None
if options.seedfile:
FH = open(options.seedfile, 'r')
request_seeds = [int(line) for line in FH if line != '\n']
print>>OUT, 'Seedfile: %s' %options.seedfile
print>>OUT, 'Number of requests: %d' %len(request_seeds)
else:
if options.seed:
print>>OUT, 'Initial Seed: %d' %options.seed
print>>OUT, 'Number of requests: %d' %options.num_requests
random_list = randomlist(options.seed)
request_seeds = itertools.islice(random_list, options.num_requests)
print>>OUT, 'Decision Graph: %s' %options.graph_filename
graph_file = open(options.graph_filename, 'r')
decision_graph = yaml.safe_load(graph_file)
constants = populate_buckets(s3_connection, alt_connection)
print>>VERBOSE, "Test Buckets/Objects:"
for key, value in constants.iteritems():
print>>VERBOSE, "\t%s: %s" %(key, value)
print>>OUT, "Begin Fuzzing..."
print>>VERBOSE, '='*80
for request_seed in request_seeds:
print>>VERBOSE, 'Seed is: %r' %request_seed
prng = random.Random(request_seed)
decision = assemble_decision(decision_graph, prng)
decision.update(constants)
method = expand(decision, decision['method'], prng)
path = expand(decision, decision['urlpath'], prng)
try:
body = expand(decision, decision['body'], prng)
except KeyError:
body = ''
try:
headers = expand_headers(decision, prng)
except KeyError:
headers = {}
print>>VERBOSE, "%r %r" %(method[:100], path[:100])
for h, v in headers.iteritems():
print>>VERBOSE, "%r: %r" %(h[:50], v[:50])
print>>VERBOSE, "%r\n" % body[:100]
print>>DEBUG, 'FULL REQUEST'
print>>DEBUG, 'Method: %r' %method
print>>DEBUG, 'Path: %r' %path
print>>DEBUG, 'Headers:'
for h, v in headers.iteritems():
print>>DEBUG, "\t%r: %r" %(h, v)
print>>DEBUG, 'Body: %r\n' %body
failed = False # Let's be optimistic, shall we?
try:
response = s3_connection.make_request(method, path, data=body, headers=headers, override_num_retries=1)
body = response.read()
except BotoServerError, e:
response = e
body = e.body
failed = True
except BadStatusLine, e:
print>>OUT, 'FAILED: failed to parse response (BadStatusLine); probably a NUL byte in your request?'
print>>VERBOSE, '='*80
continue
if failed:
print>>OUT, 'FAILED:'
OLD_VERBOSE = VERBOSE
OLD_DEBUG = DEBUG
VERBOSE = DEBUG = OUT
print>>VERBOSE, 'Seed was: %r' %request_seed
print>>VERBOSE, 'Response status code: %d %s' %(response.status, response.reason)
print>>DEBUG, 'Body:\n%s' %body
print>>VERBOSE, '='*80
if failed:
VERBOSE = OLD_VERBOSE
DEBUG = OLD_DEBUG
print>>OUT, '...done fuzzing'
if options.cleanup:
common.teardown()
def main():
common.setup()
try:
_main()
except Exception as e:
traceback.print_exc()
common.teardown()


@@ -0,0 +1,403 @@
"""
Unit-test suite for the S3 fuzzer
The fuzzer is a grammar-based random S3 operation generator
that produces random operation sequences in an effort to
crash the server. This unit-test suite does not test
S3 servers, but rather the fuzzer infrastructure.
It works by running the fuzzer off of a simple grammar,
and checking the produced requests to ensure that they
include the expected sorts of operations in the expected
proportions.
"""
import sys
import itertools
import nose
import random
import string
import yaml
from ..headers import *
from nose.tools import eq_ as eq
from nose.tools import assert_true
from nose.plugins.attrib import attr
from ...functional.utils import assert_raises
_decision_graph = {}
def check_access_denied(fn, *args, **kwargs):
e = assert_raises(boto.exception.S3ResponseError, fn, *args, **kwargs)
eq(e.status, 403)
eq(e.reason, 'Forbidden')
eq(e.error_code, 'AccessDenied')
def build_graph():
graph = {}
graph['start'] = {
'set': {},
'choices': ['node2']
}
graph['leaf'] = {
'set': {
'key1': 'value1',
'key2': 'value2'
},
'headers': [
['1-2', 'random-header-{random 5-10 printable}', '{random 20-30 punctuation}']
],
'choices': []
}
graph['node1'] = {
'set': {
'key3': 'value3',
'header_val': [
'3 h1',
'2 h2',
'h3'
]
},
'headers': [
['1-1', 'my-header', '{header_val}'],
],
'choices': ['leaf']
}
graph['node2'] = {
'set': {
'randkey': 'value-{random 10-15 printable}',
'path': '/{bucket_readable}',
'indirect_key1': '{key1}'
},
'choices': ['leaf']
}
graph['bad_node'] = {
'set': {
'key1': 'value1'
},
'choices': ['leaf']
}
graph['nonexistant_child_node'] = {
'set': {},
'choices': ['leafy_greens']
}
graph['weighted_node'] = {
'set': {
'k1': [
'foo',
'2 bar',
'1 baz'
]
},
'choices': [
'foo',
'2 bar',
'1 baz'
]
}
graph['null_choice_node'] = {
'set': {},
'choices': [None]
}
graph['repeated_headers_node'] = {
'set': {},
'headers': [
['1-2', 'random-header-{random 5-10 printable}', '{random 20-30 punctuation}']
],
'choices': ['leaf']
}
graph['weighted_null_choice_node'] = {
'set': {},
'choices': ['3 null']
}
return graph
#def test_foo():
#graph_file = open('request_decision_graph.yml', 'r')
#graph = yaml.safe_load(graph_file)
#eq(graph['bucket_put_simple']['set']['grantee'], 0)
def test_load_graph():
graph_file = open('request_decision_graph.yml', 'r')
graph = yaml.safe_load(graph_file)
graph['start']
def test_descend_leaf_node():
graph = build_graph()
prng = random.Random(1)
decision = descend_graph(graph, 'leaf', prng)
eq(decision['key1'], 'value1')
eq(decision['key2'], 'value2')
e = assert_raises(KeyError, lambda x: decision[x], 'key3')
def test_descend_node():
graph = build_graph()
prng = random.Random(1)
decision = descend_graph(graph, 'node1', prng)
eq(decision['key1'], 'value1')
eq(decision['key2'], 'value2')
eq(decision['key3'], 'value3')
def test_descend_bad_node():
graph = build_graph()
prng = random.Random(1)
assert_raises(DecisionGraphError, descend_graph, graph, 'bad_node', prng)
def test_descend_nonexistant_child():
graph = build_graph()
prng = random.Random(1)
assert_raises(KeyError, descend_graph, graph, 'nonexistant_child_node', prng)
def test_expand_random_printable():
prng = random.Random(1)
got = expand({}, '{random 10-15 printable}', prng)
eq(got, '[/pNI$;92@')
def test_expand_random_binary():
prng = random.Random(1)
got = expand({}, '{random 10-15 binary}', prng)
eq(got, '\xdfj\xf1\xd80>a\xcd\xc4\xbb')
def test_expand_random_printable_no_whitespace():
prng = random.Random(1)
for _ in xrange(1000):
got = expand({}, '{random 500 printable_no_whitespace}', prng)
assert_true(reduce(lambda x, y: x and y, [x not in string.whitespace and x in string.printable for x in got]))
def test_expand_random_binary_no_whitespace():
prng = random.Random(1)
for _ in xrange(1000):
got = expand({}, '{random 500 binary_no_whitespace}', prng)
assert_true(reduce(lambda x, y: x and y, [x not in string.whitespace for x in got]))
def test_expand_random_no_args():
prng = random.Random(1)
for _ in xrange(1000):
got = expand({}, '{random}', prng)
assert_true(0 <= len(got) <= 1000)
assert_true(reduce(lambda x, y: x and y, [x in string.printable for x in got]))
def test_expand_random_no_charset():
prng = random.Random(1)
for _ in xrange(1000):
got = expand({}, '{random 10-30}', prng)
assert_true(10 <= len(got) <= 30)
assert_true(reduce(lambda x, y: x and y, [x in string.printable for x in got]))
def test_expand_random_exact_length():
prng = random.Random(1)
for _ in xrange(1000):
got = expand({}, '{random 10 digits}', prng)
assert_true(len(got) == 10)
assert_true(reduce(lambda x, y: x and y, [x in string.digits for x in got]))
def test_expand_random_bad_charset():
prng = random.Random(1)
assert_raises(KeyError, expand, {}, '{random 10-30 foo}', prng)
def test_expand_random_missing_length():
prng = random.Random(1)
assert_raises(ValueError, expand, {}, '{random printable}', prng)
def test_assemble_decision():
graph = build_graph()
prng = random.Random(1)
decision = assemble_decision(graph, prng)
eq(decision['key1'], 'value1')
eq(decision['key2'], 'value2')
eq(decision['randkey'], 'value-{random 10-15 printable}')
eq(decision['indirect_key1'], '{key1}')
eq(decision['path'], '/{bucket_readable}')
assert_raises(KeyError, lambda x: decision[x], 'key3')
def test_expand_escape():
prng = random.Random(1)
decision = dict(
foo='{{bar}}',
)
got = expand(decision, '{foo}', prng)
eq(got, '{bar}')
def test_expand_indirect():
prng = random.Random(1)
decision = dict(
foo='{bar}',
bar='quux',
)
got = expand(decision, '{foo}', prng)
eq(got, 'quux')
def test_expand_indirect_double():
prng = random.Random(1)
decision = dict(
foo='{bar}',
bar='{quux}',
quux='thud',
)
got = expand(decision, '{foo}', prng)
eq(got, 'thud')
def test_expand_recursive():
prng = random.Random(1)
decision = dict(
foo='{foo}',
)
e = assert_raises(RecursionError, expand, decision, '{foo}', prng)
eq(str(e), "Runaway recursion in string formatting: 'foo'")
def test_expand_recursive_mutual():
prng = random.Random(1)
decision = dict(
foo='{bar}',
bar='{foo}',
)
e = assert_raises(RecursionError, expand, decision, '{foo}', prng)
eq(str(e), "Runaway recursion in string formatting: 'foo'")
def test_expand_recursive_not_too_eager():
prng = random.Random(1)
decision = dict(
foo='bar',
)
got = expand(decision, 100*'{foo}', prng)
eq(got, 100*'bar')
def test_make_choice_unweighted_with_space():
prng = random.Random(1)
choice = make_choice(['foo bar'], prng)
eq(choice, 'foo bar')
def test_weighted_choices():
graph = build_graph()
prng = random.Random(1)
choices_made = {}
for _ in xrange(1000):
choice = make_choice(graph['weighted_node']['choices'], prng)
if choices_made.has_key(choice):
choices_made[choice] += 1
else:
choices_made[choice] = 1
foo_percentage = choices_made['foo'] / 1000.0
bar_percentage = choices_made['bar'] / 1000.0
baz_percentage = choices_made['baz'] / 1000.0
nose.tools.assert_almost_equal(foo_percentage, 0.25, 1)
nose.tools.assert_almost_equal(bar_percentage, 0.50, 1)
nose.tools.assert_almost_equal(baz_percentage, 0.25, 1)
def test_null_choices():
graph = build_graph()
prng = random.Random(1)
choice = make_choice(graph['null_choice_node']['choices'], prng)
eq(choice, '')
def test_weighted_null_choices():
graph = build_graph()
prng = random.Random(1)
choice = make_choice(graph['weighted_null_choice_node']['choices'], prng)
eq(choice, '')
def test_null_child():
graph = build_graph()
prng = random.Random(1)
decision = descend_graph(graph, 'null_choice_node', prng)
eq(decision, {})
def test_weighted_set():
graph = build_graph()
prng = random.Random(1)
choices_made = {}
for _ in xrange(1000):
choice = make_choice(graph['weighted_node']['set']['k1'], prng)
if choices_made.has_key(choice):
choices_made[choice] += 1
else:
choices_made[choice] = 1
foo_percentage = choices_made['foo'] / 1000.0
bar_percentage = choices_made['bar'] / 1000.0
baz_percentage = choices_made['baz'] / 1000.0
nose.tools.assert_almost_equal(foo_percentage, 0.25, 1)
nose.tools.assert_almost_equal(bar_percentage, 0.50, 1)
nose.tools.assert_almost_equal(baz_percentage, 0.25, 1)
def test_header_presence():
graph = build_graph()
prng = random.Random(1)
decision = descend_graph(graph, 'node1', prng)
c1 = itertools.count()
c2 = itertools.count()
for header, value in decision['headers']:
if header == 'my-header':
eq(value, '{header_val}')
assert_true(next(c1) < 1)
elif header == 'random-header-{random 5-10 printable}':
eq(value, '{random 20-30 punctuation}')
assert_true(next(c2) < 2)
else:
raise KeyError('unexpected header found: %s' % header)
assert_true(next(c1))
assert_true(next(c2))
def test_duplicate_header():
graph = build_graph()
prng = random.Random(1)
assert_raises(DecisionGraphError, descend_graph, graph, 'repeated_headers_node', prng)
def test_expand_headers():
graph = build_graph()
prng = random.Random(1)
decision = descend_graph(graph, 'node1', prng)
expanded_headers = expand_headers(decision, prng)
for header, value in expanded_headers.iteritems():
if header == 'my-header':
assert_true(value in ['h1', 'h2', 'h3'])
elif header.startswith('random-header-'):
assert_true(20 <= len(value) <= 30)
assert_true(string.strip(value, RepeatExpandingFormatter.charsets['punctuation']) is '')
else:
raise DecisionGraphError('unexpected header found: "%s"' % header)


@@ -0,0 +1,117 @@
from boto.s3.key import Key
from optparse import OptionParser
from . import realistic
import traceback
import random
from . import common
import sys
def parse_opts():
parser = OptionParser()
parser.add_option('-O', '--outfile', help='write output to FILE. Defaults to STDOUT', metavar='FILE')
parser.add_option('-b', '--bucket', dest='bucket', help='push objects to BUCKET', metavar='BUCKET')
parser.add_option('--seed', dest='seed', help='optional seed for the random number generator')
return parser.parse_args()
def get_random_files(quantity, mean, stddev, seed):
"""Create file-like objects with pseudorandom contents.
IN:
number of files to create
mean file size in bytes
standard deviation from mean file size
seed for PRNG
OUT:
list of file handles
"""
file_generator = realistic.files(mean, stddev, seed)
return [file_generator.next() for _ in xrange(quantity)]
def upload_objects(bucket, files, seed):
"""Upload a bunch of files to an S3 bucket
IN:
boto S3 bucket object
list of file handles to upload
seed for PRNG
OUT:
list of boto S3 key objects
"""
keys = []
name_generator = realistic.names(15, 4, seed=seed)
for fp in files:
sys.stderr.write('sending file with size %dB\n' % fp.size)
key = Key(bucket)
key.key = name_generator.next()
key.set_contents_from_file(fp, rewind=True)
key.set_acl('public-read')
keys.append(key)
return keys
def _main():
'''To run the static content load test, make sure you've bootstrapped your
test environment and set up your config.yaml file, then run the following:
S3TEST_CONF=config.yaml virtualenv/bin/s3tests-generate-objects.py --seed 1234
This creates a bucket with your S3 credentials (from config.yaml) and
fills it with garbage objects as described in the
file_generation.groups section of config.yaml. It writes a list of
URLS to those objects to the file listed in file_generation.url_file
in config.yaml.
Once you have objects in your bucket, run the siege benchmarking program:
siege --rc ./siege.conf -r 5
This tells siege to read the ./siege.conf config file which tells it to
use the urls in ./urls.txt and log to ./siege.log. It hits each url in
urls.txt 5 times (-r flag).
Results are printed to the terminal and written in CSV format to
./siege.log
'''
(options, args) = parse_opts()
#SETUP
random.seed(options.seed if options.seed else None)
conn = common.s3.main
if options.outfile:
OUTFILE = open(options.outfile, 'w')
elif common.config.file_generation.url_file:
OUTFILE = open(common.config.file_generation.url_file, 'w')
else:
OUTFILE = sys.stdout
if options.bucket:
bucket = conn.create_bucket(options.bucket)
else:
bucket = common.get_new_bucket()
bucket.set_acl('public-read')
keys = []
OUTFILE.write('bucket: %s\n' % bucket.name)
sys.stderr.write('setup complete, generating files\n')
for profile in common.config.file_generation.groups:
seed = random.random()
files = get_random_files(profile[0], profile[1], profile[2], seed)
keys += upload_objects(bucket, files, seed)
sys.stderr.write('finished sending files. generating urls\n')
for key in keys:
OUTFILE.write(key.generate_url(0, query_auth=False) + '\n')
sys.stderr.write('done\n')
def main():
common.setup()
try:
_main()
except Exception as e:
traceback.print_exc()
common.teardown()

s3tests_boto3/readwrite.py (new file, 265 lines)

@@ -0,0 +1,265 @@
import gevent
import gevent.pool
import gevent.queue
import gevent.monkey; gevent.monkey.patch_all()
import itertools
import optparse
import os
import sys
import time
import traceback
import random
import yaml
import realistic
import common
NANOSECOND = int(1e9)
def reader(bucket, worker_id, file_names, queue, rand):
while True:
objname = rand.choice(file_names)
key = bucket.new_key(objname)
fp = realistic.FileValidator()
result = dict(
type='r',
bucket=bucket.name,
key=key.name,
worker=worker_id,
)
start = time.time()
try:
key.get_contents_to_file(fp._file)
except gevent.GreenletExit:
raise
except Exception as e:
# stop timer ASAP, even on errors
end = time.time()
result.update(
error=dict(
msg=str(e),
traceback=traceback.format_exc(),
),
)
# certain kinds of programmer errors make this a busy
# loop; let parent greenlet get some time too
time.sleep(0)
else:
end = time.time()
if not fp.valid():
m='md5sum check failed start={s} ({se}) end={e} size={sz} obj={o}'.format(s=time.ctime(start), se=start, e=end, sz=fp._file.tell(), o=objname)
result.update(
error=dict(
msg=m,
traceback=traceback.format_exc(),
),
)
print("ERROR:", m)
else:
elapsed = end - start
result.update(
start=start,
duration=int(round(elapsed * NANOSECOND)),
)
queue.put(result)
def writer(bucket, worker_id, file_names, files, queue, rand):
while True:
fp = next(files)
fp.seek(0)
objname = rand.choice(file_names)
key = bucket.new_key(objname)
result = dict(
type='w',
bucket=bucket.name,
key=key.name,
worker=worker_id,
)
start = time.time()
try:
key.set_contents_from_file(fp)
except gevent.GreenletExit:
raise
except Exception as e:
# stop timer ASAP, even on errors
end = time.time()
result.update(
error=dict(
msg=str(e),
traceback=traceback.format_exc(),
),
)
# certain kinds of programmer errors make this a busy
# loop; let parent greenlet get some time too
time.sleep(0)
else:
end = time.time()
elapsed = end - start
result.update(
start=start,
duration=int(round(elapsed * NANOSECOND)),
)
queue.put(result)
def parse_options():
parser = optparse.OptionParser(
usage='%prog [OPTS] <CONFIG_YAML',
)
parser.add_option("--no-cleanup", dest="cleanup", action="store_false",
help="skip cleaning up all created buckets", default=True)
return parser.parse_args()
def write_file(bucket, file_name, fp):
"""
Write a single file to the bucket using the file_name.
This is used during the warmup to initialize the files.
"""
key = bucket.new_key(file_name)
key.set_contents_from_file(fp)
def main():
# parse options
(options, args) = parse_options()
if os.isatty(sys.stdin.fileno()):
raise RuntimeError('Need configuration in stdin.')
config = common.read_config(sys.stdin)
conn = common.connect(config.s3)
bucket = None
try:
# setup
real_stdout = sys.stdout
sys.stdout = sys.stderr
# verify all required config items are present
if 'readwrite' not in config:
raise RuntimeError('readwrite section not found in config')
for item in ['readers', 'writers', 'duration', 'files', 'bucket']:
if item not in config.readwrite:
raise RuntimeError("Missing readwrite config item: {item}".format(item=item))
for item in ['num', 'size', 'stddev']:
if item not in config.readwrite.files:
raise RuntimeError("Missing readwrite config item: files.{item}".format(item=item))
seeds = dict(config.readwrite.get('random_seed', {}))
seeds.setdefault('main', random.randrange(2**32))
rand = random.Random(seeds['main'])
for name in ['names', 'contents', 'writer', 'reader']:
seeds.setdefault(name, rand.randrange(2**32))
print('Using random seeds: {seeds}'.format(seeds=seeds))
# setup bucket and other objects
bucket_name = common.choose_bucket_prefix(config.readwrite.bucket, max_len=30)
bucket = conn.create_bucket(bucket_name)
print("Created bucket: {name}".format(name=bucket.name))
# check flag for deterministic file name creation
if not config.readwrite.get('deterministic_file_names'):
print('Creating random file names')
file_names = realistic.names(
mean=15,
stddev=4,
seed=seeds['names'],
)
file_names = itertools.islice(file_names, config.readwrite.files.num)
file_names = list(file_names)
else:
print('Creating file names that are deterministic')
file_names = []
for x in xrange(config.readwrite.files.num):
file_names.append('test_file_{num}'.format(num=x))
files = realistic.files2(
mean=1024 * config.readwrite.files.size,
stddev=1024 * config.readwrite.files.stddev,
seed=seeds['contents'],
)
q = gevent.queue.Queue()
# warmup - get initial set of files uploaded if there are any writers specified
if config.readwrite.writers > 0:
print("Uploading initial set of {num} files".format(num=config.readwrite.files.num))
warmup_pool = gevent.pool.Pool(size=100)
for file_name in file_names:
fp = next(files)
warmup_pool.spawn(
write_file,
bucket=bucket,
file_name=file_name,
fp=fp,
)
warmup_pool.join()
# main work
print("Starting main worker loop.")
print("Using file size: {size} +- {stddev}".format(size=config.readwrite.files.size, stddev=config.readwrite.files.stddev))
print("Spawning {w} writers and {r} readers...".format(w=config.readwrite.writers, r=config.readwrite.readers))
group = gevent.pool.Group()
rand_writer = random.Random(seeds['writer'])
# Don't create random files if deterministic_files_names is set and true
if not config.readwrite.get('deterministic_file_names'):
for x in xrange(config.readwrite.writers):
this_rand = random.Random(rand_writer.randrange(2**32))
group.spawn(
writer,
bucket=bucket,
worker_id=x,
file_names=file_names,
files=files,
queue=q,
rand=this_rand,
)
# Since the loop generating readers already uses config.readwrite.readers
# and the file names are already generated (randomly or deterministically),
# this loop needs no additional qualifiers. If zero readers are specified,
# it will behave as expected (no data is read)
rand_reader = random.Random(seeds['reader'])
for x in range(config.readwrite.readers):
this_rand = random.Random(rand_reader.randrange(2**32))
group.spawn(
reader,
bucket=bucket,
worker_id=x,
file_names=file_names,
queue=q,
rand=this_rand,
)
def stop():
group.kill(block=True)
q.put(StopIteration)
gevent.spawn_later(config.readwrite.duration, stop)
# wait for all the tests to finish
group.join()
print('post-join, queue size {size}'.format(size=q.qsize()))
if q.qsize() > 0:
for temp_dict in q:
if 'error' in temp_dict:
raise Exception('exception:\n\t{msg}\n\t{trace}'.format(
msg=temp_dict['error']['msg'],
trace=temp_dict['error']['traceback'])
)
else:
yaml.safe_dump(temp_dict, stream=real_stdout)
finally:
# cleanup
if options.cleanup:
if bucket is not None:
common.nuke_bucket(bucket)
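For orientation, a hedged sketch of a YAML config document that would satisfy the checks in main() above. The readwrite keys mirror the validation code in this file; the s3 section and its key names are assumptions, since that part is consumed by common.connect() and is not shown here.

import sys
import yaml

# 'readwrite' keys follow the required-item checks in main(); the 's3' keys
# below are placeholders/assumptions for whatever common.connect() expects.
config = {
    's3': {
        'host': 'localhost',
        'port': 8000,
        'is_secure': False,
        'access_key': 'ACCESS_KEY_PLACEHOLDER',
        'secret_key': 'SECRET_KEY_PLACEHOLDER',
    },
    'readwrite': {
        'bucket': 'rwtest',    # prefix passed to common.choose_bucket_prefix()
        'readers': 2,
        'writers': 2,
        'duration': 30,        # seconds before stop() kills the worker group
        'files': {
            'num': 10,
            'size': 1024,      # KiB; main() multiplies by 1024
            'stddev': 128,
        },
    },
}
yaml.safe_dump(config, stream=sys.stdout)  # pipe this into the tool's stdin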

s3tests_boto3/realistic.py
@@ -0,0 +1,281 @@
import hashlib
import random
import string
import struct
import time
import math
import tempfile
import shutil
import os
NANOSECOND = int(1e9)
def generate_file_contents(size):
"""
A helper function that generates binary contents of the given size and
appends the SHA-1 hex digest of those contents to the end of the blob.
The hex digest is 40 characters long, so strip the last 40 bytes from the
blob to recover the original hash and data and prove validity.
"""
size = int(size)
contents = os.urandom(size)
content_hash = hashlib.sha1(contents).hexdigest()
return contents + content_hash.encode()  # append digest as bytes for Python 3
class FileValidator(object):
def __init__(self, f=None):
self._file = tempfile.SpooledTemporaryFile()
self.original_hash = None
self.new_hash = None
if f:
f.seek(0)
shutil.copyfileobj(f, self._file)
def valid(self):
"""
Returns True if this file looks valid. The file is valid if the end
of the file holds the SHA-1 hex digest of the first part of the file.
"""
self._file.seek(0)
contents = self._file.read()
self.original_hash, binary = contents[-40:], contents[:-40]
self.new_hash = hashlib.sha1(binary).hexdigest().encode()
if not self.new_hash == self.original_hash:
print('original hash: ', self.original_hash)
print('new hash: ', self.new_hash)
print('size: ', self._file.tell())
return False
return True
# XXX not sure if we need all of these
def seek(self, offset, whence=os.SEEK_SET):
self._file.seek(offset, whence)
def tell(self):
return self._file.tell()
def read(self, size=-1):
return self._file.read(size)
def write(self, data):
self._file.write(data)
self._file.seek(0)
class RandomContentFile(object):
def __init__(self, size, seed):
self.size = size
self.seed = seed
self.random = random.Random(self.seed)
# Boto likes to seek once more after it's done reading, so we need to save the last chunks/seek value.
self.last_chunks = self.chunks = None
self.last_seek = None
# Let seek initialize the rest of it, rather than dup code
self.seek(0)
def _mark_chunk(self):
self.chunks.append([self.offset, int(round((time.time() - self.last_seek) * NANOSECOND))])
def seek(self, offset, whence=os.SEEK_SET):
if whence == os.SEEK_SET:
self.offset = offset
elif whence == os.SEEK_END:
self.offset = self.size + offset
elif whence == os.SEEK_CUR:
self.offset += offset
assert self.offset == 0
self.random.seed(self.seed)
self.buffer = b''
self.hash = hashlib.md5()
self.digest_size = self.hash.digest_size
self.digest = None
# Save the last seek time as our start time, and the last chunks
self.last_chunks = self.chunks
# Before emptying.
self.last_seek = time.time()
self.chunks = []
def tell(self):
return self.offset
def _generate(self):
# generate and return a chunk of pseudorandom data
size = min(self.size, 1*1024*1024) # generate at most 1 MB at a time
chunks = int(math.ceil(size/8.0)) # number of 8-byte chunks to create
l = [self.random.getrandbits(64) for _ in range(chunks)]
s = struct.pack(chunks*'Q', *l)
return s
def read(self, size=-1):
if size < 0:
size = self.size - self.offset
r = []
random_count = min(size, self.size - self.offset - self.digest_size)
if random_count > 0:
while len(self.buffer) < random_count:
self.buffer += self._generate()
self.offset += random_count
size -= random_count
data, self.buffer = self.buffer[:random_count], self.buffer[random_count:]
if self.hash is not None:
self.hash.update(data)
r.append(data)
digest_count = min(size, self.size - self.offset)
if digest_count > 0:
if self.digest is None:
self.digest = self.hash.digest()
self.hash = None
self.offset += digest_count
size -= digest_count
data = self.digest[:digest_count]
r.append(data)
self._mark_chunk()
return b''.join(r)
class PrecomputedContentFile(object):
def __init__(self, f):
self._file = tempfile.SpooledTemporaryFile()
f.seek(0)
shutil.copyfileobj(f, self._file)
self.last_chunks = self.chunks = None
self.seek(0)
def seek(self, offset, whence=os.SEEK_SET):
self._file.seek(offset, whence)
if self.tell() == 0:
# only reset the chunks when seeking to the beginning
self.last_chunks = self.chunks
self.last_seek = time.time()
self.chunks = []
def tell(self):
return self._file.tell()
def read(self, size=-1):
data = self._file.read(size)
self._mark_chunk()
return data
def _mark_chunk(self):
elapsed = time.time() - self.last_seek
elapsed_nsec = int(round(elapsed * NANOSECOND))
self.chunks.append([self.tell(), elapsed_nsec])
class FileVerifier(object):
def __init__(self):
self.size = 0
self.hash = hashlib.md5()
self.buf = b''
self.created_at = time.time()
self.chunks = []
def _mark_chunk(self):
self.chunks.append([self.size, int(round((time.time() - self.created_at) * NANOSECOND))])
def write(self, data):
self.size += len(data)
self.buf += data
digsz = -1*self.hash.digest_size
new_data, self.buf = self.buf[0:digsz], self.buf[digsz:]
self.hash.update(new_data)
self._mark_chunk()
def valid(self):
"""
Returns True if this file looks valid. The file is valid if the end
of the file has the md5 digest for the first part of the file.
"""
if self.size < self.hash.digest_size:
return self.hash.digest().startswith(self.buf)
return self.buf == self.hash.digest()
def files(mean, stddev, seed=None):
"""
Yields file-like objects with effectively random contents, where
the size of each file follows the normal distribution with `mean`
and `stddev`.
Beware, the file-likeness is very shallow. You can use boto's
`key.set_contents_from_file` to send these to S3, but they are not
full file objects.
The last 128 bits are the MD5 digest of the previous bytes, for
verifying round-trip data integrity. For example, if you
re-download the object and place the contents into a file called
``foo``, the following should print two identical lines:
python3 -c 'import sys, hashlib; data=sys.stdin.buffer.read(); print(hashlib.md5(data[:-16]).hexdigest()); print(data[-16:].hex())' <foo
Except for objects shorter than 16 bytes, where the second line
will be proportionally shorter.
"""
rand = random.Random(seed)
while True:
while True:
size = int(rand.normalvariate(mean, stddev))
if size >= 0:
break
yield RandomContentFile(size=size, seed=rand.getrandbits(32))
def files2(mean, stddev, seed=None, numfiles=10):
"""
Yields file objects with effectively random contents, where the
size of each file follows the normal distribution with `mean` and
`stddev`.
Rather than continuously generating new files, this pre-computes and
stores `numfiles` files and yields them in a loop.
"""
# pre-compute all the files (and save with TemporaryFiles)
fs = []
for _ in range(numfiles):
t = tempfile.SpooledTemporaryFile()
t.write(generate_file_contents(random.normalvariate(mean, stddev)))
t.seek(0)
fs.append(t)
while True:
for f in fs:
yield f
def names(mean, stddev, charset=None, seed=None):
"""
Yields strings that are somewhat plausible as file names, where
the length of each filename follows the normal distribution with
`mean` and `stddev`.
"""
if charset is None:
charset = string.ascii_lowercase
rand = random.Random(seed)
while True:
while True:
length = int(rand.normalvariate(mean, stddev))
if length > 0:
break
name = ''.join(rand.choice(charset) for _ in range(length))
yield name
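A short usage sketch, not part of the module, tying the writer-side and reader-side helpers above together; it assumes the Python 3 byte-handling fixes above and that this file is importable as s3tests_boto3.realistic.

import shutil
import tempfile

from s3tests_boto3 import realistic

# Writer side: deterministic pseudo-random payload with a trailing MD5 digest.
src = realistic.RandomContentFile(size=4096, seed=42)

# Reader side: FileVerifier hashes everything except the trailing digest and
# compares the two in valid().
dst = realistic.FileVerifier()
shutil.copyfileobj(src, dst)
assert dst.valid()

# The SHA-1 based scheme used by generate_file_contents()/FileValidator works
# the same way, with a 40-byte hex digest instead of a 16-byte binary one.
blob = realistic.generate_file_contents(1024)
t = tempfile.SpooledTemporaryFile()
t.write(blob)
fp = realistic.FileValidator(t)
assert fp.valid()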

s3tests_boto3/roundtrip.py
@@ -0,0 +1,219 @@
import gevent
import gevent.pool
import gevent.queue
import gevent.monkey; gevent.monkey.patch_all()
import itertools
import optparse
import os
import sys
import time
import traceback
import random
import yaml
import realistic
import common
NANOSECOND = int(1e9)
def writer(bucket, objname, fp, queue):
key = bucket.new_key(objname)
result = dict(
type='w',
bucket=bucket.name,
key=key.name,
)
start = time.time()
try:
key.set_contents_from_file(fp, rewind=True)
except gevent.GreenletExit:
raise
except Exception as e:
# stop timer ASAP, even on errors
end = time.time()
result.update(
error=dict(
msg=str(e),
traceback=traceback.format_exc(),
),
)
# certain kinds of programmer errors make this a busy
# loop; let parent greenlet get some time too
time.sleep(0)
else:
end = time.time()
elapsed = end - start
result.update(
start=start,
duration=int(round(elapsed * NANOSECOND)),
chunks=fp.last_chunks,
)
queue.put(result)
def reader(bucket, objname, queue):
key = bucket.new_key(objname)
fp = realistic.FileVerifier()
result = dict(
type='r',
bucket=bucket.name,
key=key.name,
)
start = time.time()
try:
key.get_contents_to_file(fp)
except gevent.GreenletExit:
raise
except Exception as e:
# stop timer ASAP, even on errors
end = time.time()
result.update(
error=dict(
msg=str(e),
traceback=traceback.format_exc(),
),
)
# certain kinds of programmer errors make this a busy
# loop; let parent greenlet get some time too
time.sleep(0)
else:
end = time.time()
if not fp.valid():
result.update(
error=dict(
msg='md5sum check failed',
),
)
elapsed = end - start
result.update(
start=start,
duration=int(round(elapsed * NANOSECOND)),
chunks=fp.chunks,
)
queue.put(result)
def parse_options():
parser = optparse.OptionParser(
usage='%prog [OPTS] <CONFIG_YAML',
)
parser.add_option("--no-cleanup", dest="cleanup", action="store_false",
help="skip cleaning up all created buckets", default=True)
return parser.parse_args()
def main():
# parse options
(options, args) = parse_options()
if os.isatty(sys.stdin.fileno()):
raise RuntimeError('Need configuration in stdin.')
config = common.read_config(sys.stdin)
conn = common.connect(config.s3)
bucket = None
try:
# setup
real_stdout = sys.stdout
sys.stdout = sys.stderr
# verify all required config items are present
if 'roundtrip' not in config:
raise RuntimeError('roundtrip section not found in config')
for item in ['readers', 'writers', 'duration', 'files', 'bucket']:
if item not in config.roundtrip:
raise RuntimeError("Missing roundtrip config item: {item}".format(item=item))
for item in ['num', 'size', 'stddev']:
if item not in config.roundtrip.files:
raise RuntimeError("Missing roundtrip config item: files.{item}".format(item=item))
seeds = dict(config.roundtrip.get('random_seed', {}))
seeds.setdefault('main', random.randrange(2**32))
rand = random.Random(seeds['main'])
for name in ['names', 'contents', 'writer', 'reader']:
seeds.setdefault(name, rand.randrange(2**32))
print('Using random seeds: {seeds}'.format(seeds=seeds))
# setup bucket and other objects
bucket_name = common.choose_bucket_prefix(config.roundtrip.bucket, max_len=30)
bucket = conn.create_bucket(bucket_name)
print("Created bucket: {name}".format(name=bucket.name))
objnames = realistic.names(
mean=15,
stddev=4,
seed=seeds['names'],
)
objnames = itertools.islice(objnames, config.roundtrip.files.num)
objnames = list(objnames)
files = realistic.files(
mean=1024 * config.roundtrip.files.size,
stddev=1024 * config.roundtrip.files.stddev,
seed=seeds['contents'],
)
q = gevent.queue.Queue()
logger_g = gevent.spawn(yaml.safe_dump_all, q, stream=real_stdout)
print("Writing {num} objects with {w} workers...".format(
num=config.roundtrip.files.num,
w=config.roundtrip.writers,
))
pool = gevent.pool.Pool(size=config.roundtrip.writers)
start = time.time()
for objname in objnames:
fp = next(files)
pool.spawn(
writer,
bucket=bucket,
objname=objname,
fp=fp,
queue=q,
)
pool.join()
stop = time.time()
elapsed = stop - start
q.put(dict(
type='write_done',
duration=int(round(elapsed * NANOSECOND)),
))
print("Reading {num} objects with {w} workers...".format(
num=config.roundtrip.files.num,
w=config.roundtrip.readers,
))
# avoid accessing them in the same order as the writing
rand.shuffle(objnames)
pool = gevent.pool.Pool(size=config.roundtrip.readers)
start = time.time()
for objname in objnames:
pool.spawn(
reader,
bucket=bucket,
objname=objname,
queue=q,
)
pool.join()
stop = time.time()
elapsed = stop - start
q.put(dict(
type='read_done',
duration=int(round(elapsed * NANOSECOND)),
))
q.put(StopIteration)
logger_g.get()
finally:
# cleanup
if options.cleanup:
if bucket is not None:
common.nuke_bucket(bucket)
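Outside the tool itself, a hedged sketch of consuming the YAML documents that main() streams to stdout: one mapping per read or write operation (durations in nanoseconds, chunks as [bytes, elapsed_ns] pairs), plus the write_done/read_done markers queued above.

import sys
import yaml

NANOSECOND = int(1e9)

for doc in yaml.safe_load_all(sys.stdin):
    if doc is None:
        continue
    if doc['type'] in ('write_done', 'read_done'):
        print('{0}: {1:.2f}s total'.format(doc['type'], doc['duration'] / NANOSECOND))
    elif 'error' in doc:
        print('{0} {1}: ERROR {2}'.format(doc['type'], doc['key'], doc['error']['msg']))
    else:
        last = doc['chunks'][-1] if doc['chunks'] else [0, 0]
        print('{0} {1}: {2} bytes in {3:.3f}s'.format(
            doc['type'], doc['key'], last[0], doc['duration'] / NANOSECOND))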

@@ -0,0 +1,79 @@
from s3tests import realistic
import shutil
import tempfile
# XXX not used for now
def create_files(mean=2000):
return realistic.files2(
mean=1024 * mean,
stddev=1024 * 500,
seed=1256193726,
numfiles=4,
)
class TestFiles(object):
# the size and seed are what we get when generating a bunch of files
# with pseudo-random numbers based on stddev, seed, and mean.
# this fails, demonstrating the (current) problem
#def test_random_file_invalid(self):
# size = 2506764
# seed = 3391518755
# source = realistic.RandomContentFile(size=size, seed=seed)
# t = tempfile.SpooledTemporaryFile()
# shutil.copyfileobj(source, t)
# precomputed = realistic.PrecomputedContentFile(t)
# assert precomputed.valid()
# verifier = realistic.FileVerifier()
# shutil.copyfileobj(precomputed, verifier)
# assert verifier.valid()
# this passes
def test_random_file_valid(self):
size = 2506001
seed = 3391518755
source = realistic.RandomContentFile(size=size, seed=seed)
t = tempfile.SpooledTemporaryFile()
shutil.copyfileobj(source, t)
precomputed = realistic.PrecomputedContentFile(t)
verifier = realistic.FileVerifier()
shutil.copyfileobj(precomputed, verifier)
assert verifier.valid()
# new implementation
class TestFileValidator(object):
def test_new_file_is_valid(self):
size = 2506001
contents = realistic.generate_file_contents(size)
t = tempfile.SpooledTemporaryFile()
t.write(contents)
t.seek(0)
fp = realistic.FileValidator(t)
assert fp.valid()
def test_new_file_is_valid_when_size_is_1(self):
size = 1
contents = realistic.generate_file_contents(size)
t = tempfile.SpooledTemporaryFile()
t.write(contents)
t.seek(0)
fp = realistic.FileValidator(t)
assert fp.valid()
def test_new_file_is_valid_on_several_calls(self):
size = 2506001
contents = realistic.generate_file_contents(size)
t = tempfile.SpooledTemporaryFile()
t.write(contents)
t.seek(0)
fp = realistic.FileValidator(t)
assert fp.valid()
assert fp.valid()

@@ -1,4 +1,4 @@
-#!/usr/bin/python
+#!/usr/bin/python3
 from setuptools import setup, find_packages
 setup(
@@ -16,8 +16,19 @@ setup(
 'boto >=2.0b4',
 'boto3 >=1.0.0',
 'PyYAML',
-'munch >=2.0.0',
+'bunch >=1.0.0',
 'gevent >=1.0',
 'isodate >=0.4.4',
 ],
+entry_points={
+'console_scripts': [
+'s3tests-generate-objects = s3tests.generate_objects:main',
+'s3tests-test-readwrite = s3tests.readwrite:main',
+'s3tests-test-roundtrip = s3tests.roundtrip:main',
+'s3tests-fuzz-headers = s3tests.fuzz.headers:main',
+'s3tests-analysis-rwstats = s3tests.analysis.rwstats:main',
+],
+},
 )
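The console scripts registered above are thin wrappers around the package modules; for example, 's3tests-test-readwrite = s3tests.readwrite:main' resolves to the hedged, equivalent invocation below (assuming the package is installed or on sys.path), reading its YAML config from stdin and writing YAML results to stdout.

import sys

from s3tests import readwrite

if __name__ == '__main__':
    sys.exit(readwrite.main())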

siege.conf
@@ -0,0 +1,382 @@
# Updated by Siege 2.69, May-24-2010
# Copyright 2000-2007 by Jeffrey Fulmer, et al.
#
# Siege configuration file -- edit as necessary
# For more information about configuring and running
# this program, visit: http://www.joedog.org/
#
# Variable declarations. You can set variables here
# for use in the directives below. Example:
# PROXY = proxy.joedog.org
# Reference variables inside ${} or $(), example:
# proxy-host = ${PROXY}
# You can also reference ENVIRONMENT variables without
# actually declaring them, example:
# logfile = $(HOME)/var/siege.log
#
# Signify verbose mode, true turns on verbose output
# ex: verbose = true|false
#
verbose = true
#
# CSV Verbose format: with this option, you can choose
# to format verbose output in traditional siege format
# or comma separated format. The latter will allow you
# to redirect output to a file for import into a spread
# sheet, i.e., siege > file.csv
# ex: csv = true|false (default false)
#
csv = true
#
# Full URL verbose format: By default siege displays
# the URL path and not the full URL. With this option,
# you can instruct siege to show the complete URL.
# ex: fullurl = true|false (default false)
#
# fullurl = true
#
# Display id: in verbose mode, display the siege user
# id associated with the HTTP transaction information
# ex: display-id = true|false
#
# display-id =
#
# Show logfile location. By default, siege displays the
# logfile location at the end of every run when logging
# You can turn this message off with this directive.
# ex: show-logfile = false
#
show-logfile = true
#
# Default logging status, true turns logging on.
# ex: logging = true|false
#
logging = true
#
# Logfile, the default siege logfile is $PREFIX/var/siege.log
# This directive allows you to choose an alternative log file.
# Environment variables may be used as shown in the examples:
# ex: logfile = /home/jeff/var/log/siege.log
# logfile = ${HOME}/var/log/siege.log
# logfile = ${LOGFILE}
#
logfile = ./siege.log
#
# HTTP protocol. Options HTTP/1.1 and HTTP/1.0.
# Some webservers have broken implementation of the
# 1.1 protocol which skews throughput evaluations.
# If you notice some siege clients hanging for
# extended periods of time, change this to HTTP/1.0
# ex: protocol = HTTP/1.1
# protocol = HTTP/1.0
#
protocol = HTTP/1.1
#
# Chunked encoding is required by HTTP/1.1 protocol
# but siege allows you to turn it off as desired.
#
# ex: chunked = true
#
chunked = true
#
# Cache revalidation.
# Siege supports cache revalidation for both ETag and
# Last-modified headers. If a copy is still fresh, the
# server responds with 304.
# HTTP/1.1 200 0.00 secs: 2326 bytes ==> /apache_pb.gif
# HTTP/1.1 304 0.00 secs: 0 bytes ==> /apache_pb.gif
# HTTP/1.1 304 0.00 secs: 0 bytes ==> /apache_pb.gif
#
# ex: cache = true
#
cache = false
#
# Connection directive. Options "close" and "keep-alive"
# Starting with release 2.57b3, siege implements persistent
# connections in accordance with RFC 2068 using both chunked
# encoding and content-length directives to determine the
# page size. To run siege with persistent connections set
# the connection directive to keep-alive. (Default close)
# CAUTION: use the keep-alive directive with care.
# DOUBLE CAUTION: this directive does not work well on HPUX
# TRIPLE CAUTION: don't use keep-alives until further notice
# ex: connection = close
# connection = keep-alive
#
connection = close
#
# Default number of simulated concurrent users
# ex: concurrent = 25
#
concurrent = 15
#
# Default duration of the siege. The right hand argument has
# a modifier which specifies the time units, H=hours, M=minutes,
# and S=seconds. If a modifier is not specified, then minutes
# are assumed.
# ex: time = 50M
#
# time =
#
# Repetitions. The length of siege may be specified in client
# reps rather than a time duration. Instead of specifying a time
# span, you can tell each siege instance to hit the server X number
# of times. So if you choose 'reps = 20' and you've selected 10
# concurrent users, then siege will hit the server 200 times.
# ex: reps = 20
#
# reps =
#
# Default URLs file, set at configuration time, the default
# file is PREFIX/etc/urls.txt. So if you configured siege
# with --prefix=/usr/local then the urls.txt file is installed
# in /usr/local/etc/urls.txt. Use the "file = " directive to
# configure an alternative URLs file. You may use environment
# variables as shown in the examples below:
# ex: file = /export/home/jdfulmer/MYURLS.txt
# file = $HOME/etc/urls.txt
# file = $URLSFILE
#
file = ./urls.txt
#
# Default URL, this is a single URL that you want to test. This
# is usually set at the command line with the -u option. When
# used, this option overrides the urls.txt (-f FILE/--file=FILE)
# option. You will HAVE to comment this out in order to use
# the urls.txt file option.
# ex: url = https://shemp.whoohoo.com/docs/index.jsp
#
# url =
#
# Default delay value, see the siege(1) man page.
# This value is used for load testing, it is not used
# for benchmarking.
# ex: delay = 3
#
delay = 1
#
# Connection timeout value. Set the value in seconds for
# socket connection timeouts. The default value is 30 seconds.
# ex: timeout = 30
#
# timeout =
#
# Session expiration: This directive allows you to delete all
# cookies after you pass through the URLs. This means siege will
# grab a new session with each run through its URLs. The default
# value is false.
# ex: expire-session = true
#
# expire-session =
#
# Failures: This is the number of total connection failures allowed
# before siege aborts. Connection failures (timeouts, socket failures,
# etc.) are combined with 400 and 500 level errors in the final stats,
# but those errors do not count against the abort total. If you set
# this total to 10, then siege will abort after ten socket timeouts,
# but it will NOT abort after ten 404s. This is designed to prevent
# a run-away mess on an unattended siege. The default value is 1024
# ex: failures = 50
#
# failures =
#
# Internet simulation. If true, siege clients will hit
# the URLs in the urls.txt file randomly, thereby simulating
# internet usage. If false, siege will run through the
# urls.txt file in order from first to last and back again.
# ex: internet = true
#
internet = false
#
# Default benchmarking value, If true, there is NO delay
# between server requests, siege runs as fast as the web
# server and the network will let it. Set this to false
# for load testing.
# ex: benchmark = true
#
benchmark = false
#
# Set the siege User-Agent to identify yourself at the
# host, the default is: JoeDog/1.00 [en] (X11; I; Siege #.##)
# But that reeks of corporate techno speak. Feel free
# to make it more interesting :-) Since Limey is recovering
# from minor surgery as I write this, I'll dedicate the
# example to him...
# ex: user-agent = Limey The Bulldog
#
# user-agent =
#
# Accept-encoding. This option allows you to specify
# acceptable encodings returned by the server. Use this
# directive to turn on compression. By default we accept
# gzip compression.
#
# ex: accept-encoding = *
# accept-encoding = gzip
# accept-encoding = compress;q=0.5;gzip;q=1
accept-encoding = gzip
#
# TURN OFF THAT ANNOYING SPINNER!
# Siege spawns a thread and runs a spinner to entertain you
# as it collects and computes its stats. If you don't like
# this feature, you may turn it off here.
# ex: spinner = false
#
spinner = true
#
# WWW-Authenticate login. When siege hits a webpage
# that requires basic authentication, it will search its
# logins for authentication which matches the specific realm
# requested by the server. If it finds a match, it will send
# that login information. If it fails to match the realm, it
# will send the default login information. (Default is "all").
# You may configure siege with several logins as long as no
# two realms match. The format for logins is:
# username:password[:realm] where "realm" is optional.
# If you do not supply a realm, then it will default to "all"
# ex: login = jdfulmer:topsecret:Admin
# login = jeff:supersecret
#
# login =
#
# WWW-Authenticate username and password. When siege
# hits a webpage that requires authentication, it will
# send this user name and password to the server. Note
# this is NOT form based authentication. You will have
# to construct URLs for that.
# ex: username = jdfulmer
# password = whoohoo
#
# username =
# password =
#
# ssl-cert
# This optional feature allows you to specify a path to a client
# certificate. It is not necessary to specify a certificate in
# order to use https. If you don't know why you would want one,
# then you probably don't need this feature. Use openssl to
# generate a certificate and key with the following command:
# $ openssl req -nodes -new -days 365 -newkey rsa:1024 \
# -keyout key.pem -out cert.pem
# Specify a path to cert.pem as follows:
# ex: ssl-cert = /home/jeff/.certs/cert.pem
#
# ssl-cert =
#
# ssl-key
# Use this option to specify the key you generated with the command
# above. ex: ssl-key = /home/jeff/.certs/key.pem
# You may actually skip this option and combine both your cert and
# your key in a single file:
# $ cat key.pem > client.pem
# $ cat cert.pem >> client.pem
# Now set the path for ssl-cert:
# ex: ssl-cert = /home/jeff/.certs/client.pem
# (in this scenario, you comment out ssl-key)
#
# ssl-key =
#
# ssl-timeout
# This option sets a connection timeout for the ssl library
# ex: ssl-timeout = 30
#
# ssl-timeout =
#
# ssl-ciphers
# You can use this feature to select a specific ssl cipher
# for HTTPs. To view the ones available with your library run
# the following command: openssl ciphers
# ex: ssl-ciphers = EXP-RC4-MD5
#
# ssl-ciphers =
#
# Login URL. This is the first URL to be hit by every siege
# client. This feature was designed to allow you to login to
# a server and establish a session. It will only be hit once
# so if you need to hit this URL more than once, make sure it
# also appears in your urls.txt file.
#
# ex: login-url = http://eos.haha.com/login.jsp POST name=jeff&pass=foo
#
# login-url =
#
# Proxy protocol. This option allows you to select a proxy
# server for stress testing. The proxy will request the URL(s)
# specified by -u"my.url.org" OR from the urls.txt file.
#
# ex: proxy-host = proxy.whoohoo.org
# proxy-port = 8080
#
# proxy-host =
# proxy-port =
#
# Proxy-Authenticate. When scout hits a proxy server which
# requires username and password authentication, it will send this
# username and password to the server. The format is username,
# password and optional realm each separated by a colon. You
# may enter more than one proxy-login as long as each one has
# a different realm. If you do not enter a realm, then scout
# will send that login information to all proxy challenges. If
# you have more than one proxy-login, then scout will attempt
# to match the login to the realm.
# ex: proxy-login: jeff:secret:corporate
# proxy-login: jeff:whoohoo
#
# proxy-login =
#
# Redirection support. This option allows you to control
# whether a Location: hint will be followed. Most users
# will want to follow redirection information, but sometimes
# it's desired to just get the Location information.
#
# ex: follow-location = false
#
# follow-location =
# Zero-length data. siege can be configured to disregard
# results in which zero bytes are read after the headers.
# Alternatively, such results can be counted in the final
# tally of outcomes.
#
# ex: zero-data-ok = false
#
# zero-data-ok =
#
# end of siegerc

@@ -1,9 +0,0 @@
[tox]
envlist = py
[testenv]
deps = -rrequirements.txt
passenv =
S3TEST_CONF
S3_USE_SIGV4
commands = pytest {posargs}