forked from TrueCloudLab/s3-tests

Compare commits: 5 commits, master ... client_res

Author | SHA1 | Date
---|---|---
| c234981a7c |
| 1865ec50d1 |
| 83dc04c0f2 |
| d9f3324b5e |
| ebef1e5a0c |

38 changed files with 2472 additions and 26299 deletions
3 .gitignore vendored

@@ -10,6 +10,5 @@
/*.egg-info
/virtualenv
/venv

config.yaml
config.yml
143 README.rst

@@ -2,101 +2,90 @@
S3 compatibility tests
========================

This is a set of unofficial Amazon AWS S3 compatibility
tests, that can be useful to people implementing software
that exposes an S3-like API. The tests use the Boto2 and Boto3 libraries.
This is a set of completely unofficial Amazon AWS S3 compatibility
tests, that will hopefully be useful to people implementing software
that exposes an S3-like API.

The tests use the Tox tool. To get started, ensure you have the ``tox``
software installed; e.g. on Debian/Ubuntu::
The tests only cover the REST interface.

    sudo apt-get install tox
TODO: test direct HTTP downloads, like a web browser would do.

The tests use the Boto library, so any e.g. HTTP-level differences
that Boto papers over, the tests will not be able to discover. Raw
HTTP tests may be added later.

The tests use the Nose test framework. To get started, ensure you have
the ``virtualenv`` software installed; e.g. on Debian/Ubuntu::

    sudo apt-get install python-virtualenv

and then run::

    ./bootstrap

You will need to create a configuration file with the location of the
service and two different credentials. A sample configuration file named
``s3tests.conf.SAMPLE`` has been provided in this repo. This file can be
used to run the s3 tests on a Ceph cluster started with vstart.
service and two different credentials, something like this::

Once you have that file copied and edited, you can run the tests with::

    [DEFAULT]
    ## this section is just used as default for all the "s3 *"
    ## sections, you can place these variables also directly there

    S3TEST_CONF=your.conf tox
    ## replace with e.g. "localhost" to run against local software
    host = s3.amazonaws.com

You can specify which directory of tests to run::
    ## uncomment the port to use something other than 80
    # port = 8080

    S3TEST_CONF=your.conf tox -- s3tests_boto3/functional
    ## say "no" to disable TLS
    is_secure = yes

You can specify which file of tests to run::
    [fixtures]
    ## all the buckets created will start with this prefix;
    ## {random} will be filled with random characters to pad
    ## the prefix to 30 characters long, and avoid collisions
    bucket prefix = YOURNAMEHERE-{random}-

    S3TEST_CONF=your.conf tox s3tests_boto3/functional/test_s3.py
    [s3 main]
    ## the tests assume two accounts are defined, "main" and "alt".

You can specify which test to run::
    ## user_id is a 64-character hexstring
    user_id = 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef

    S3TEST_CONF=your.conf tox s3tests_boto3/functional/test_s3.py::test_bucket_list_empty
    ## display name typically looks more like a unix login, "jdoe" etc
    display_name = youruseridhere

    ## replace these with your access keys
    access_key = ABCDEFGHIJKLMNOPQRST
    secret_key = abcdefghijklmnopqrstuvwxyzabcdefghijklmn

    [s3 alt]
    ## another user account, used for ACL-related tests
    user_id = 56789abcdef0123456789abcdef0123456789abcdef0123456789abcdef01234
    display_name = john.doe
    ## the "alt" user needs to have email set, too
    email = john.doe@example.com
    access_key = NOPQRSTUVWXYZABCDEFG
    secret_key = nopqrstuvwxyzabcdefghijklmnabcdefghijklm

Once you have that, you can run the tests with::

    S3TEST_CONF=your.conf ./virtualenv/bin/nosetests

You can specify what test(s) to run::

    S3TEST_CONF=your.conf ./virtualenv/bin/nosetests s3tests.functional.test_s3:test_object_acl_grant_public_read

Some tests have attributes set based on their current reliability and
things like AWS not enforcing their spec strictly. You can filter tests
based on their attributes::

    S3TEST_CONF=aws.conf tox -- -m 'not fails_on_aws'
    S3TEST_CONF=aws.conf ./virtualenv/bin/nosetests -a '!fails_on_aws'

Most of the tests have both Boto3 and Boto2 versions. Tests written in
Boto2 are in the ``s3tests`` directory. Tests written in Boto3 are
located in the ``s3tests_boto3`` directory.

You can run only the boto3 tests with::
TODO
====

    S3TEST_CONF=your.conf tox -- s3tests_boto3/functional

========================
STS compatibility tests
========================

This section contains some basic tests for the AssumeRole, GetSessionToken and AssumeRoleWithWebIdentity APIs. The test file is located under ``s3tests_boto3/functional``.

To run the STS tests, the vstart cluster should be started with the following parameter (in addition to any parameters already used with it)::

    vstart.sh -o rgw_sts_key=abcdefghijklmnop -o rgw_s3_auth_use_sts=true

Note that the ``rgw_sts_key`` can be set to anything that is 128 bits in length.
After the cluster is up, the following command should be executed::

    radosgw-admin caps add --tenant=testx --uid="9876543210abcdef0123456789abcdef0123456789abcdef0123456789abcdef" --caps="roles=*"

You can run only the sts tests (all three APIs) with::

    S3TEST_CONF=your.conf tox s3tests_boto3/functional/test_sts.py

You can filter tests based on their attributes. There is an attribute named ``test_of_sts`` to run the AssumeRole and GetSessionToken tests and ``webidentity_test`` to run the AssumeRoleWithWebIdentity tests. If you want to execute only the ``test_of_sts`` tests, you can apply that filter as below::

    S3TEST_CONF=your.conf tox -- -m test_of_sts s3tests_boto3/functional/test_sts.py

For running ``webidentity_test`` you'll need to have Keycloak running.

In order to run any STS test you'll need to add an "iam" section to the config file. For further reference on how your config file should look, check ``s3tests.conf.SAMPLE``.
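For illustration, the ``[iam]`` section expected by the STS tests mirrors the one in ``s3tests.conf.SAMPLE`` shown further down in this diff (the values are the vstart defaults, not real credentials)::

    [iam]
    #used for iam operations in sts-tests
    email = s3@example.com
    user_id = 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
    access_key = ABCDEFGHIJKLMNOPQRST
    secret_key = abcdefghijklmnopqrstuvwxyzabcdefghijklmn
    display_name = youruseridhere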
========================
IAM policy tests
========================

This is a set of IAM policy tests.
This section covers tests for user policies such as Put, Get, List, and Delete, user policies with S3 actions, conflicting user policies, etc.
These tests use the Boto3 library. Tests are written in the ``s3tests_boto3`` directory.

These IAM policy tests use two users, with profile names "iam" and "s3 alt", as mentioned in s3tests.conf.SAMPLE.
If the Ceph cluster is started with vstart, the above two users are created as part of vstart with the same access key, secret key, etc. as mentioned in s3tests.conf.SAMPLE.
Of those two users, the "iam" user has the capability --caps=user-policy=* and the "s3 alt" user has no capabilities.
Adding the above capability to the "iam" user is also taken care of by vstart (if the Ceph cluster is started with vstart).

To run these tests, create a configuration file with sections "iam" and "s3 alt" (refer to s3tests.conf.SAMPLE).
Once you have that configuration file copied and edited, you can run all the tests with::

    S3TEST_CONF=your.conf tox s3tests_boto3/functional/test_iam.py

You can also specify a specific test to run::

    S3TEST_CONF=your.conf tox s3tests_boto3/functional/test_iam.py::test_put_user_policy

Some tests have attributes set, such as "fails_on_rgw".
You can filter tests based on their attributes::

    S3TEST_CONF=your.conf tox -- s3tests_boto3/functional/test_iam.py -m 'not fails_on_rgw'

- We should assume read-after-write consistency, and make the tests
  actually request such a location.

  http://aws.amazon.com/s3/faqs/#What_data_consistency_model_does_Amazon_S3_employ
28 bootstrap Executable file

@@ -0,0 +1,28 @@
#!/bin/sh
set -e

for package in python-pip python-virtualenv python-dev libevent-dev; do
    if [ "$(dpkg --status -- $package|sed -n 's/^Status: //p')" != "install ok installed" ]; then
        # add a space after old values
        missing="${missing:+$missing }$package"
    fi
done
if [ -n "$missing" ]; then
    echo "$0: missing required packages, please install them:" 1>&2
    echo "sudo apt-get install $missing"
    exit 1
fi

virtualenv --no-site-packages --distribute virtualenv

# avoid pip bugs
./virtualenv/bin/pip install --upgrade pip

./virtualenv/bin/pip install -r requirements.txt

# forbid setuptools from using the network because it'll try to use
# easy_install, and we really wanted pip; next line will fail if pip
# requirements.txt does not match setup.py requirements -- sucky but
# good enough for now
./virtualenv/bin/python setup.py develop \
    --allow-hosts None
59 config.yml.SAMPLE Normal file

@@ -0,0 +1,59 @@
fixtures:
  ## All the buckets created will start with this prefix;
  ## {random} will be filled with random characters to pad
  ## the prefix to 30 characters long, and avoid collisions
  bucket prefix: YOURNAMEHERE-{random}-

file_generation:
  groups:
    ## File generation works by creating N groups of files. Each group of
    ## files is defined by three elements: number of files, avg(filesize),
    ## and stddev(filesize) -- in that order.
    - [1, 2, 3]
    - [4, 5, 6]

s3:
  ## This section contains all the connection information

  defaults:
    ## This section contains the defaults for all of the other connections
    ## below. You can also place these variables directly there.

    ## Replace with e.g. "localhost" to run against local software
    host: s3.amazonaws.com

    ## Uncomment the port to use something other than 80
    # port: 8080

    ## Say "no" to disable TLS.
    is_secure: yes

  ## The tests assume two accounts are defined, "main" and "alt". You
  ## may add other connections to be instantiated as well, however
  ## any additional ones will not be used unless your tests use them.

  main:

    ## The User ID that the S3 provider gives you. For AWS, this is
    ## typically a 64-char hexstring.
    user_id: AWS_USER_ID

    ## Display name typically looks more like a unix login, "jdoe" etc
    display_name: AWS_DISPLAY_NAME

    ## The email for this account.
    email: AWS_EMAIL

    ## Replace these with your access keys.
    access_key: AWS_ACCESS_KEY
    secret_key: AWS_SECRET_KEY

  alt:
    ## Another user account, used for ACL-related tests.

    user_id: AWS_USER_ID
    display_name: AWS_DISPLAY_NAME
    email: AWS_EMAIL
    access_key: AWS_ACCESS_KEY
    secret_key: AWS_SECRET_KEY
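The test harness loads this YAML into attribute-accessible objects. A minimal sketch of that loading step, based on the ``read_config``/``bunchify`` pattern in the harness code further down in this diff (it assumes PyYAML and bunch are installed, as listed in requirements.txt)::

    import bunch
    import yaml

    def read_config(fp):
        config = bunch.Bunch()
        # the sample is a single YAML document, but safe_load_all also copes
        # with multiple documents in one stream
        for doc in yaml.safe_load_all(fp):
            config.update(bunch.bunchify(doc))
        return config

    with open('config.yml') as f:
        config = read_config(f)

    print(config.s3.defaults.host)           # nested keys via attribute access
    print(config.fixtures['bucket prefix'])  # keys with spaces via item access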
51 pytest.ini

@@ -1,51 +0,0 @@
[pytest]
markers =
    abac_test
    appendobject
    auth_aws2
    auth_aws4
    auth_common
    bucket_policy
    bucket_encryption
    checksum
    cloud_transition
    encryption
    fails_on_aws
    fails_on_dbstore
    fails_on_dho
    fails_on_mod_proxy_fcgi
    fails_on_rgw
    fails_on_s3
    fails_with_subdomain
    group
    group_policy
    iam_account
    iam_cross_account
    iam_role
    iam_tenant
    iam_user
    lifecycle
    lifecycle_expiration
    lifecycle_transition
    list_objects_v2
    object_lock
    role_policy
    session_policy
    s3select
    s3website
    s3website_routing_rules
    s3website_redirect_location
    sns
    sse_s3
    storage_class
    tagging
    test_of_sts
    token_claims_trust_policy_test
    token_principal_tag_role_policy_test
    token_request_tag_trust_policy_test
    token_resource_tags_test
    token_role_tags_test
    token_tag_keys_test
    user_policy
    versioning
    webidentity_test
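Any of the markers above can be combined in a pytest ``-m`` expression; for example, to run only lifecycle tests that are not expected to fail against AWS (same invocation style as in the README)::

    S3TEST_CONF=your.conf tox -- -m 'lifecycle and not fails_on_aws'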
requirements.txt

@@ -1,15 +1,6 @@
PyYAML
boto >=2.6.0
boto3 >=1.0.0
# botocore-1.28 broke v2 signatures, see https://tracker.ceph.com/issues/58059
botocore <1.28.0
munch >=2.0.0
nose >=1.0.0
boto >=2.0b4
bunch >=1.0.0
# 0.14 switches to libev, that means bootstrap needs to change too
gevent >=1.0
isodate >=0.4.4
requests >=2.23.0
pytz
httplib2
lxml
pytest
tox
gevent ==0.13.6
s3tests.conf.SAMPLE

@@ -1,171 +0,0 @@
[DEFAULT]
## this section is just used for host, port and bucket_prefix

# host set for rgw in vstart.sh
host = localhost

# port set for rgw in vstart.sh
port = 8000

## say "False" to disable TLS
is_secure = False

## say "False" to disable SSL Verify
ssl_verify = False

[fixtures]
## all the buckets created will start with this prefix;
## {random} will be filled with random characters to pad
## the prefix to 30 characters long, and avoid collisions
bucket prefix = yournamehere-{random}-

# all the iam account resources (users, roles, etc) created
# will start with this name prefix
iam name prefix = s3-tests-

# all the iam account resources (users, roles, etc) created
# will start with this path prefix
iam path prefix = /s3-tests/

[s3 main]
# main display_name set in vstart.sh
display_name = M. Tester

# main user_id set in vstart.sh
user_id = testid

# main email set in vstart.sh
email = tester@ceph.com

# zonegroup api_name for bucket location
api_name = default

## main AWS access key
access_key = 0555b35654ad1656d804

## main AWS secret key
secret_key = h7GhxuBLTrlhVUyxSPUKUV8r/2EI4ngqJxD7iBdBYLhwluN30JaT3Q==

## replace with key id obtained when secret is created, or delete if KMS not tested
#kms_keyid = 01234567-89ab-cdef-0123-456789abcdef

## Storage classes
#storage_classes = "LUKEWARM, FROZEN"

## Lifecycle debug interval (default: 10)
#lc_debug_interval = 20

[s3 alt]
# alt display_name set in vstart.sh
display_name = john.doe
## alt email set in vstart.sh
email = john.doe@example.com

# alt user_id set in vstart.sh
user_id = 56789abcdef0123456789abcdef0123456789abcdef0123456789abcdef01234

# alt AWS access key set in vstart.sh
access_key = NOPQRSTUVWXYZABCDEFG

# alt AWS secret key set in vstart.sh
secret_key = nopqrstuvwxyzabcdefghijklmnabcdefghijklm

#[s3 cloud]
## to run the testcases with "cloud_transition" attribute.
## Note: the waiting time may have to be tweaked depending on
## the I/O latency to the cloud endpoint.

## host set for cloud endpoint
# host = localhost

## port set for cloud endpoint
# port = 8001

## say "False" to disable TLS
# is_secure = False

## cloud endpoint credentials
# access_key = 0555b35654ad1656d804
# secret_key = h7GhxuBLTrlhVUyxSPUKUV8r/2EI4ngqJxD7iBdBYLhwluN30JaT3Q==

## storage class configured as cloud tier on local rgw server
# cloud_storage_class = CLOUDTIER

## Below are optional -

## Above configured cloud storage class config options
# retain_head_object = false
# target_storage_class = Target_SC
# target_path = cloud-bucket

## another regular storage class to test multiple transition rules,
# storage_class = S1

[s3 tenant]
# tenant display_name set in vstart.sh
display_name = testx$tenanteduser

# tenant user_id set in vstart.sh
user_id = 9876543210abcdef0123456789abcdef0123456789abcdef0123456789abcdef

# tenant AWS access key set in vstart.sh
access_key = HIJKLMNOPQRSTUVWXYZA

# tenant AWS secret key set in vstart.sh
secret_key = opqrstuvwxyzabcdefghijklmnopqrstuvwxyzab

# tenant email set in vstart.sh
email = tenanteduser@example.com

# tenant name
tenant = testx

#following section needs to be added for all sts-tests
[iam]
#used for iam operations in sts-tests
#email from vstart.sh
email = s3@example.com

#user_id from vstart.sh
user_id = 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef

#access_key from vstart.sh
access_key = ABCDEFGHIJKLMNOPQRST

#secret_key from vstart.sh
secret_key = abcdefghijklmnopqrstuvwxyzabcdefghijklmn

#display_name from vstart.sh
display_name = youruseridhere

# iam account root user for iam_account tests
[iam root]
access_key = AAAAAAAAAAAAAAAAAAaa
secret_key = aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
user_id = RGW11111111111111111
email = account1@ceph.com

# iam account root user in a different account than [iam root]
[iam alt root]
access_key = BBBBBBBBBBBBBBBBBBbb
secret_key = bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
user_id = RGW22222222222222222
email = account2@ceph.com

#following section needs to be added when you want to run Assume Role With Webidentity test
[webidentity]
#used for assume role with web identity test in sts-tests
#all parameters will be obtained from ceph/qa/tasks/keycloak.py
token=<access_token>

aud=<obtained after introspecting token>

sub=<obtained after introspecting token>

azp=<obtained after introspecting token>

user_token=<access token for a user, with attribute Department=[Engineering, Marketing]>

thumbprint=<obtained from x509 certificate>

KC_REALM=<name of the realm>
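The ``aud``, ``sub`` and ``azp`` values above come from standard OAuth2 token introspection against Keycloak. A hypothetical sketch using ``requests`` (realm, client and secret are placeholders, and older Keycloak versions prefix the path with ``/auth``)::

    import requests

    resp = requests.post(
        'http://localhost:8080/realms/<realm>/protocol/openid-connect/token/introspect',
        data={
            'token': '<access_token>',
            'client_id': '<client_id>',
            'client_secret': '<client_secret>',
        },
    )
    claims = resp.json()
    print(claims.get('aud'), claims.get('sub'), claims.get('azp'))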
26 s3tests/blueprint.yml.SAMPLE Normal file

@@ -0,0 +1,26 @@
- name: foo
  perms:
    null: READ
    kylmar4: FULL_CONTROL
  objects:
    - name: bar
      content: asdf
      metadata:
        a: b
        c: d
      perms:
        kylmar4: FULL_CONTROL
- name: baz
  perms:
    null: WRITE
    kylmar4: FULL_CONTROL
  objects:
    - content: aoeu
      metadata:
        x: y
        z: w
      name: oof
      perms:
        null: FULL_CONTROL
        kylmar4: FULL_CONTROL
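A small sketch (not part of the repo) of reading this blueprint with PyYAML, just to show the bucket/object structure defined above::

    import yaml

    with open('s3tests/blueprint.yml.SAMPLE') as f:
        blueprint = yaml.safe_load(f)

    for bucket in blueprint:
        print(bucket['name'], bucket['perms'])
        for obj in bucket.get('objects', []):
            print(' ', obj['name'], obj.get('metadata', {}))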
@ -1,22 +1,16 @@
|
|||
import boto.s3.connection
|
||||
import munch
|
||||
import bunch
|
||||
import itertools
|
||||
import os
|
||||
import random
|
||||
import string
|
||||
import yaml
|
||||
import re
|
||||
from lxml import etree
|
||||
|
||||
from doctest import Example
|
||||
from lxml.doctestcompare import LXMLOutputChecker
|
||||
|
||||
s3 = munch.Munch()
|
||||
config = munch.Munch()
|
||||
s3 = bunch.Bunch()
|
||||
config = bunch.Bunch()
|
||||
prefix = ''
|
||||
|
||||
bucket_counter = itertools.count(1)
|
||||
key_counter = itertools.count(1)
|
||||
|
||||
def choose_bucket_prefix(template, max_len=30):
|
||||
"""
|
||||
|
@ -42,80 +36,36 @@ def choose_bucket_prefix(template, max_len=30):
|
|||
),
|
||||
)
|
||||
|
||||
def nuke_bucket(bucket):
|
||||
try:
|
||||
bucket.set_canned_acl('private')
|
||||
# TODO: deleted_cnt and the while loop is a work around for rgw
|
||||
# not sending the
|
||||
deleted_cnt = 1
|
||||
while deleted_cnt:
|
||||
deleted_cnt = 0
|
||||
for key in bucket.list():
|
||||
print('Cleaning bucket {bucket} key {key}'.format(
|
||||
bucket=bucket,
|
||||
key=key,
|
||||
))
|
||||
key.set_canned_acl('private')
|
||||
key.delete()
|
||||
deleted_cnt += 1
|
||||
bucket.delete()
|
||||
except boto.exception.S3ResponseError as e:
|
||||
# TODO workaround for buggy rgw that fails to send
|
||||
# error_code, remove
|
||||
if (e.status == 403
|
||||
and e.error_code is None
|
||||
and e.body == ''):
|
||||
e.error_code = 'AccessDenied'
|
||||
if e.error_code != 'AccessDenied':
|
||||
print('GOT UNWANTED ERROR', e.error_code)
|
||||
raise
|
||||
# seems like we're not the owner of the bucket; ignore
|
||||
pass
|
||||
|
||||
def nuke_prefixed_buckets():
|
||||
for name, conn in list(s3.items()):
|
||||
print('Cleaning buckets from connection {name}'.format(name=name))
|
||||
for name, conn in s3.items():
|
||||
print 'Cleaning buckets from connection {name}'.format(name=name)
|
||||
for bucket in conn.get_all_buckets():
|
||||
if bucket.name.startswith(prefix):
|
||||
print('Cleaning bucket {bucket}'.format(bucket=bucket))
|
||||
nuke_bucket(bucket)
|
||||
print 'Cleaning bucket {bucket}'.format(bucket=bucket)
|
||||
try:
|
||||
bucket.set_canned_acl('private')
|
||||
for key in bucket.list():
|
||||
print 'Cleaning bucket {bucket} key {key}'.format(
|
||||
bucket=bucket,
|
||||
key=key,
|
||||
)
|
||||
key.set_canned_acl('private')
|
||||
key.delete()
|
||||
bucket.delete()
|
||||
except boto.exception.S3ResponseError as e:
|
||||
# TODO workaround for buggy rgw that fails to send
|
||||
# error_code, remove
|
||||
if (e.status == 403
|
||||
and e.error_code is None
|
||||
and e.body == ''):
|
||||
e.error_code = 'AccessDenied'
|
||||
if e.error_code != 'AccessDenied':
|
||||
print 'GOT UNWANTED ERROR', e.error_code
|
||||
raise
|
||||
# seems like we're not the owner of the bucket; ignore
|
||||
pass
|
||||
|
||||
print('Done with cleanup of test buckets.')
|
||||
|
||||
def read_config(fp):
|
||||
config = munch.Munch()
|
||||
g = yaml.safe_load_all(fp)
|
||||
for new in g:
|
||||
config.update(munch.Munchify(new))
|
||||
return config
|
||||
|
||||
def connect(conf):
|
||||
mapping = dict(
|
||||
port='port',
|
||||
host='host',
|
||||
is_secure='is_secure',
|
||||
access_key='aws_access_key_id',
|
||||
secret_key='aws_secret_access_key',
|
||||
)
|
||||
kwargs = dict((mapping[k],v) for (k,v) in conf.items() if k in mapping)
|
||||
#process calling_format argument
|
||||
calling_formats = dict(
|
||||
ordinary=boto.s3.connection.OrdinaryCallingFormat(),
|
||||
subdomain=boto.s3.connection.SubdomainCallingFormat(),
|
||||
vhost=boto.s3.connection.VHostCallingFormat(),
|
||||
)
|
||||
kwargs['calling_format'] = calling_formats['ordinary']
|
||||
if 'calling_format' in conf:
|
||||
raw_calling_format = conf['calling_format']
|
||||
try:
|
||||
kwargs['calling_format'] = calling_formats[raw_calling_format]
|
||||
except KeyError:
|
||||
raise RuntimeError(
|
||||
'calling_format unknown: %r' % raw_calling_format
|
||||
)
|
||||
# TODO test vhost calling format
|
||||
conn = boto.s3.connection.S3Connection(**kwargs)
|
||||
return conn
|
||||
print 'Done with cleanup of test buckets.'
|
||||
|
||||
def setup():
|
||||
global s3, config, prefix
|
||||
|
@ -130,7 +80,9 @@ def setup():
|
|||
+ 'variable S3TEST_CONF to a config file.',
|
||||
)
|
||||
with file(path) as f:
|
||||
config.update(read_config(f))
|
||||
g = yaml.safe_load_all(f)
|
||||
for new in g:
|
||||
config.update(bunch.bunchify(new))
|
||||
|
||||
# These 3 should always be present.
|
||||
if 's3' not in config:
|
||||
|
@ -146,14 +98,32 @@ def setup():
|
|||
raise RuntimeError("Empty Prefix! Aborting!")
|
||||
|
||||
defaults = config.s3.defaults
|
||||
for section in list(config.s3.keys()):
|
||||
for section in config.s3.keys():
|
||||
if section == 'defaults':
|
||||
continue
|
||||
section_config = config.s3[section]
|
||||
|
||||
conf = {}
|
||||
conf.update(defaults)
|
||||
conf.update(config.s3[section])
|
||||
conn = connect(conf)
|
||||
kwargs = bunch.Bunch()
|
||||
conn_args = bunch.Bunch(
|
||||
port='port',
|
||||
host='host',
|
||||
is_secure='is_secure',
|
||||
access_key='aws_access_key_id',
|
||||
secret_key='aws_secret_access_key',
|
||||
)
|
||||
for cfg_key in conn_args.keys():
|
||||
conn_key = conn_args[cfg_key]
|
||||
|
||||
if section_config.has_key(cfg_key):
|
||||
kwargs[conn_key] = section_config[cfg_key]
|
||||
elif defaults.has_key(cfg_key):
|
||||
kwargs[conn_key] = defaults[cfg_key]
|
||||
|
||||
conn = boto.s3.connection.S3Connection(
|
||||
# TODO support & test all variations
|
||||
calling_format=boto.s3.connection.OrdinaryCallingFormat(),
|
||||
**kwargs
|
||||
)
|
||||
s3[section] = conn
|
||||
|
||||
# WARNING! we actively delete all buckets we see with the prefix
|
||||
|
@ -186,117 +156,3 @@ def get_new_bucket(connection=None):
|
|||
|
||||
def teardown():
|
||||
nuke_prefixed_buckets()
|
||||
|
||||
def with_setup_kwargs(setup, teardown=None):
|
||||
"""Decorator to add setup and/or teardown methods to a test function::
|
||||
|
||||
@with_setup_args(setup, teardown)
|
||||
def test_something():
|
||||
" ... "
|
||||
|
||||
The setup function should return (kwargs) which will be passed to
|
||||
test function, and teardown function.
|
||||
|
||||
Note that `with_setup_kwargs` is useful *only* for test functions, not for test
|
||||
methods or inside of TestCase subclasses.
|
||||
"""
|
||||
def decorate(func):
|
||||
kwargs = {}
|
||||
|
||||
def test_wrapped(*args, **kwargs2):
|
||||
k2 = kwargs.copy()
|
||||
k2.update(kwargs2)
|
||||
k2['testname'] = func.__name__
|
||||
func(*args, **k2)
|
||||
|
||||
test_wrapped.__name__ = func.__name__
|
||||
|
||||
def setup_wrapped():
|
||||
k = setup()
|
||||
kwargs.update(k)
|
||||
if hasattr(func, 'setup'):
|
||||
func.setup()
|
||||
test_wrapped.setup = setup_wrapped
|
||||
|
||||
if teardown:
|
||||
def teardown_wrapped():
|
||||
if hasattr(func, 'teardown'):
|
||||
func.teardown()
|
||||
teardown(**kwargs)
|
||||
|
||||
test_wrapped.teardown = teardown_wrapped
|
||||
else:
|
||||
if hasattr(func, 'teardown'):
|
||||
test_wrapped.teardown = func.teardown()
|
||||
return test_wrapped
|
||||
return decorate
|
||||
|
||||
# Demo case for the above, when you run test_gen():
|
||||
# _test_gen will run twice,
|
||||
# with the following stderr printing
|
||||
# setup_func {'b': 2}
|
||||
# testcase ('1',) {'b': 2, 'testname': '_test_gen'}
|
||||
# teardown_func {'b': 2}
|
||||
# setup_func {'b': 2}
|
||||
# testcase () {'b': 2, 'testname': '_test_gen'}
|
||||
# teardown_func {'b': 2}
|
||||
#
|
||||
#def setup_func():
|
||||
# kwargs = {'b': 2}
|
||||
# print("setup_func", kwargs, file=sys.stderr)
|
||||
# return kwargs
|
||||
#
|
||||
#def teardown_func(**kwargs):
|
||||
# print("teardown_func", kwargs, file=sys.stderr)
|
||||
#
|
||||
#@with_setup_kwargs(setup=setup_func, teardown=teardown_func)
|
||||
#def _test_gen(*args, **kwargs):
|
||||
# print("testcase", args, kwargs, file=sys.stderr)
|
||||
#
|
||||
#def test_gen():
|
||||
# yield _test_gen, '1'
|
||||
# yield _test_gen
|
||||
|
||||
def trim_xml(xml_str):
|
||||
p = etree.XMLParser(encoding="utf-8", remove_blank_text=True)
|
||||
xml_str = bytes(xml_str, "utf-8")
|
||||
elem = etree.XML(xml_str, parser=p)
|
||||
return etree.tostring(elem, encoding="unicode")
|
||||
|
||||
def normalize_xml(xml, pretty_print=True):
|
||||
if xml is None:
|
||||
return xml
|
||||
|
||||
root = etree.fromstring(xml.encode(encoding='ascii'))
|
||||
|
||||
for element in root.iter('*'):
|
||||
if element.text is not None and not element.text.strip():
|
||||
element.text = None
|
||||
if element.text is not None:
|
||||
element.text = element.text.strip().replace("\n", "").replace("\r", "")
|
||||
if element.tail is not None and not element.tail.strip():
|
||||
element.tail = None
|
||||
if element.tail is not None:
|
||||
element.tail = element.tail.strip().replace("\n", "").replace("\r", "")
|
||||
|
||||
# Sort the elements
|
||||
for parent in root.xpath('//*[./*]'): # Search for parent elements
|
||||
parent[:] = sorted(parent,key=lambda x: x.tag)
|
||||
|
||||
xmlstr = etree.tostring(root, encoding="unicode", pretty_print=pretty_print)
|
||||
# there are two different DTD URIs
|
||||
xmlstr = re.sub(r'xmlns="[^"]+"', 'xmlns="s3"', xmlstr)
|
||||
xmlstr = re.sub(r'xmlns=\'[^\']+\'', 'xmlns="s3"', xmlstr)
|
||||
for uri in ['http://doc.s3.amazonaws.com/doc/2006-03-01/', 'http://s3.amazonaws.com/doc/2006-03-01/']:
|
||||
xmlstr = xmlstr.replace(uri, 'URI-DTD')
|
||||
#xmlstr = re.sub(r'>\s+', '>', xmlstr, count=0, flags=re.MULTILINE)
|
||||
return xmlstr
|
||||
|
||||
def assert_xml_equal(got, want):
|
||||
assert want is not None, 'Wanted XML cannot be None'
|
||||
if got is None:
|
||||
raise AssertionError('Got input to validate was None')
|
||||
checker = LXMLOutputChecker()
|
||||
if not checker.check_output(want, got, 0):
|
||||
message = checker.output_difference(Example("", want), got, 0)
|
||||
raise AssertionError(message)
|
||||
|
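The XML helpers above normalize S3 XML payloads (strip whitespace, sort sibling elements, canonicalize the namespace) before doing a structural comparison. A minimal standalone sketch of using them; the import path assumes they live in the harness ``__init__`` this hunk belongs to, so adjust it to your checkout::

    # hypothetical usage of the helpers shown above; the import path is assumed
    from s3tests import normalize_xml, assert_xml_equal

    got = '<TagSet><Tag><Key>a</Key><Value>1</Value></Tag></TagSet>'
    want = '<TagSet>\n  <Tag>\n    <Value>1</Value>\n    <Key>a</Key>\n  </Tag>\n</TagSet>'

    # after normalization both documents pretty-print identically, so the
    # element order and whitespace differences above do not matter
    assert_xml_equal(normalize_xml(got), normalize_xml(want))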
|
5
s3tests/functional/AnonymousAuth.py
Normal file
5
s3tests/functional/AnonymousAuth.py
Normal file
|
@ -0,0 +1,5 @@
|
|||
from boto.auth_handler import AuthHandler
|
||||
|
||||
class AnonymousAuthHandler(AuthHandler):
|
||||
def add_auth(self, http_request, **kwargs):
|
||||
return # Nothing to do for anonymous access!
|
|
@ -1,38 +1,22 @@
|
|||
import sys
|
||||
import configparser
|
||||
import ConfigParser
|
||||
import boto.exception
|
||||
import boto.s3.connection
|
||||
import munch
|
||||
import bunch
|
||||
import itertools
|
||||
import os
|
||||
import random
|
||||
import string
|
||||
import pytest
|
||||
from http.client import HTTPConnection, HTTPSConnection
|
||||
from urllib.parse import urlparse
|
||||
|
||||
from .utils import region_sync_meta
|
||||
|
||||
s3 = munch.Munch()
|
||||
config = munch.Munch()
|
||||
targets = munch.Munch()
|
||||
s3 = bunch.Bunch()
|
||||
config = bunch.Bunch()
|
||||
|
||||
# this will be assigned by setup()
|
||||
prefix = None
|
||||
|
||||
calling_formats = dict(
|
||||
ordinary=boto.s3.connection.OrdinaryCallingFormat(),
|
||||
subdomain=boto.s3.connection.SubdomainCallingFormat(),
|
||||
vhost=boto.s3.connection.VHostCallingFormat(),
|
||||
)
|
||||
|
||||
def get_prefix():
|
||||
assert prefix is not None
|
||||
return prefix
|
||||
|
||||
def is_slow_backend():
|
||||
return slow_backend
|
||||
|
||||
def choose_bucket_prefix(template, max_len=30):
|
||||
"""
|
||||
Choose a prefix for our test buckets, so they're easy to identify.
|
||||
|
@ -58,209 +42,38 @@ def choose_bucket_prefix(template, max_len=30):
|
|||
)
|
||||
|
||||
|
||||
def nuke_prefixed_buckets_on_conn(prefix, name, conn):
|
||||
print('Cleaning buckets from connection {name} prefix {prefix!r}.'.format(
|
||||
name=name,
|
||||
prefix=prefix,
|
||||
))
|
||||
|
||||
for bucket in conn.get_all_buckets():
|
||||
print('prefix=',prefix)
|
||||
if bucket.name.startswith(prefix):
|
||||
print('Cleaning bucket {bucket}'.format(bucket=bucket))
|
||||
success = False
|
||||
for i in range(2):
|
||||
def nuke_prefixed_buckets(prefix):
|
||||
for name, conn in s3.items():
|
||||
print 'Cleaning buckets from connection {name} prefix {prefix!r}.'.format(
|
||||
name=name,
|
||||
prefix=prefix,
|
||||
)
|
||||
for bucket in conn.get_all_buckets():
|
||||
if bucket.name.startswith(prefix):
|
||||
print 'Cleaning bucket {bucket}'.format(bucket=bucket)
|
||||
try:
|
||||
try:
|
||||
iterator = iter(bucket.list_versions())
|
||||
# peek into iterator to issue list operation
|
||||
try:
|
||||
keys = itertools.chain([next(iterator)], iterator)
|
||||
except StopIteration:
|
||||
keys = [] # empty iterator
|
||||
except boto.exception.S3ResponseError as e:
|
||||
# some S3 implementations do not support object
|
||||
# versioning - fall back to listing without versions
|
||||
if e.error_code != 'NotImplemented':
|
||||
raise e
|
||||
keys = bucket.list();
|
||||
for key in keys:
|
||||
print('Cleaning bucket {bucket} key {key}'.format(
|
||||
bucket.set_canned_acl('private')
|
||||
for key in bucket.list():
|
||||
print 'Cleaning bucket {bucket} key {key}'.format(
|
||||
bucket=bucket,
|
||||
key=key,
|
||||
))
|
||||
# key.set_canned_acl('private')
|
||||
bucket.delete_key(key.name, version_id = key.version_id)
|
||||
try:
|
||||
bucket.delete()
|
||||
except boto.exception.S3ResponseError as e:
|
||||
# if DELETE times out, the retry may see NoSuchBucket
|
||||
if e.error_code != 'NoSuchBucket':
|
||||
raise e
|
||||
pass
|
||||
success = True
|
||||
)
|
||||
key.set_canned_acl('private')
|
||||
key.delete()
|
||||
bucket.delete()
|
||||
except boto.exception.S3ResponseError as e:
|
||||
if e.error_code != 'AccessDenied':
|
||||
print('GOT UNWANTED ERROR', e.error_code)
|
||||
print 'GOT UNWANTED ERROR', e.error_code
|
||||
raise
|
||||
# seems like we don't have permissions set appropriately, we'll
|
||||
# modify permissions and retry
|
||||
# seems like we're not the owner of the bucket; ignore
|
||||
pass
|
||||
|
||||
if success:
|
||||
break
|
||||
print 'Done with cleanup of test buckets.'
|
||||
|
||||
bucket.set_canned_acl('private')
|
||||
|
||||
|
||||
def nuke_prefixed_buckets(prefix):
|
||||
# If no regions are specified, use the simple method
|
||||
if targets.main.master == None:
|
||||
for name, conn in list(s3.items()):
|
||||
print('Deleting buckets on {name}'.format(name=name))
|
||||
nuke_prefixed_buckets_on_conn(prefix, name, conn)
|
||||
else:
|
||||
# First, delete all buckets on the master connection
|
||||
for name, conn in list(s3.items()):
|
||||
if conn == targets.main.master.connection:
|
||||
print('Deleting buckets on {name} (master)'.format(name=name))
|
||||
nuke_prefixed_buckets_on_conn(prefix, name, conn)
|
||||
|
||||
# Then sync to propagate deletes to secondaries
|
||||
region_sync_meta(targets.main, targets.main.master.connection)
|
||||
print('region-sync in nuke_prefixed_buckets')
|
||||
|
||||
# Now delete remaining buckets on any other connection
|
||||
for name, conn in list(s3.items()):
|
||||
if conn != targets.main.master.connection:
|
||||
print('Deleting buckets on {name} (non-master)'.format(name=name))
|
||||
nuke_prefixed_buckets_on_conn(prefix, name, conn)
|
||||
|
||||
print('Done with cleanup of test buckets.')
|
||||
|
||||
class TargetConfig:
|
||||
def __init__(self, cfg, section):
|
||||
self.port = None
|
||||
self.api_name = ''
|
||||
self.is_master = False
|
||||
self.is_secure = False
|
||||
self.sync_agent_addr = None
|
||||
self.sync_agent_port = 0
|
||||
self.sync_meta_wait = 0
|
||||
try:
|
||||
self.api_name = cfg.get(section, 'api_name')
|
||||
except (configparser.NoSectionError, configparser.NoOptionError):
|
||||
pass
|
||||
try:
|
||||
self.port = cfg.getint(section, 'port')
|
||||
except configparser.NoOptionError:
|
||||
pass
|
||||
try:
|
||||
self.host=cfg.get(section, 'host')
|
||||
except configparser.NoOptionError:
|
||||
raise RuntimeError(
|
||||
'host not specified for section {s}'.format(s=section)
|
||||
)
|
||||
try:
|
||||
self.is_master=cfg.getboolean(section, 'is_master')
|
||||
except configparser.NoOptionError:
|
||||
pass
|
||||
|
||||
try:
|
||||
self.is_secure=cfg.getboolean(section, 'is_secure')
|
||||
except configparser.NoOptionError:
|
||||
pass
|
||||
|
||||
try:
|
||||
raw_calling_format = cfg.get(section, 'calling_format')
|
||||
except configparser.NoOptionError:
|
||||
raw_calling_format = 'ordinary'
|
||||
|
||||
try:
|
||||
self.sync_agent_addr = cfg.get(section, 'sync_agent_addr')
|
||||
except (configparser.NoSectionError, configparser.NoOptionError):
|
||||
pass
|
||||
|
||||
try:
|
||||
self.sync_agent_port = cfg.getint(section, 'sync_agent_port')
|
||||
except (configparser.NoSectionError, configparser.NoOptionError):
|
||||
pass
|
||||
|
||||
try:
|
||||
self.sync_meta_wait = cfg.getint(section, 'sync_meta_wait')
|
||||
except (configparser.NoSectionError, configparser.NoOptionError):
|
||||
pass
|
||||
|
||||
|
||||
try:
|
||||
self.calling_format = calling_formats[raw_calling_format]
|
||||
except KeyError:
|
||||
raise RuntimeError(
|
||||
'calling_format unknown: %r' % raw_calling_format
|
||||
)
|
||||
|
||||
class TargetConnection:
|
||||
def __init__(self, conf, conn):
|
||||
self.conf = conf
|
||||
self.connection = conn
|
||||
|
||||
|
||||
|
||||
class RegionsInfo:
|
||||
def __init__(self):
|
||||
self.m = munch.Munch()
|
||||
self.master = None
|
||||
self.secondaries = []
|
||||
|
||||
def add(self, name, region_config):
|
||||
self.m[name] = region_config
|
||||
if (region_config.is_master):
|
||||
if not self.master is None:
|
||||
raise RuntimeError(
|
||||
'multiple regions defined as master'
|
||||
)
|
||||
self.master = region_config
|
||||
else:
|
||||
self.secondaries.append(region_config)
|
||||
def get(self, name):
|
||||
return self.m[name]
|
||||
def get(self):
|
||||
return self.m
|
||||
def items(self):
|
||||
return self.m.items()
|
||||
|
||||
regions = RegionsInfo()
|
||||
|
||||
|
||||
class RegionsConn:
|
||||
def __init__(self):
|
||||
self.m = munch.Munch()
|
||||
self.default = None
|
||||
self.master = None
|
||||
self.secondaries = []
|
||||
|
||||
def items(self):
|
||||
return self.m.items()
|
||||
|
||||
def set_default(self, conn):
|
||||
self.default = conn
|
||||
|
||||
def add(self, name, conn):
|
||||
self.m[name] = conn
|
||||
if not self.default:
|
||||
self.default = conn
|
||||
if (conn.conf.is_master):
|
||||
self.master = conn
|
||||
else:
|
||||
self.secondaries.append(conn)
|
||||
|
||||
|
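``TargetConfig`` above is populated from ``[region <name>]`` sections of the INI config (see the ``setup()`` loop below). A hypothetical section covering the options it reads; option names match the ``cfg.get`` calls above and all values are placeholders::

    [region foo]
    api_name = foo
    host = localhost
    port = 8000
    is_master = yes
    is_secure = no
    calling_format = ordinary
    sync_agent_addr = localhost
    sync_agent_port = 8001
    sync_meta_wait = 5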
||||
# nosetests --processes=N with N>1 is safe
|
||||
_multiprocess_can_split_ = True
|
||||
|
||||
def setup():
|
||||
|
||||
cfg = configparser.RawConfigParser()
|
||||
cfg = ConfigParser.RawConfigParser()
|
||||
try:
|
||||
path = os.environ['S3TEST_CONF']
|
||||
except KeyError:
|
||||
|
@ -268,41 +81,18 @@ def setup():
|
|||
'To run tests, point environment '
|
||||
+ 'variable S3TEST_CONF to a config file.',
|
||||
)
|
||||
cfg.read(path)
|
||||
with file(path) as f:
|
||||
cfg.readfp(f)
|
||||
|
||||
global prefix
|
||||
global targets
|
||||
global slow_backend
|
||||
|
||||
try:
|
||||
template = cfg.get('fixtures', 'bucket prefix')
|
||||
except (configparser.NoSectionError, configparser.NoOptionError):
|
||||
except (ConfigParser.NoSectionError, ConfigParser.NoOptionError):
|
||||
template = 'test-{random}-'
|
||||
prefix = choose_bucket_prefix(template=template)
|
||||
|
||||
try:
|
||||
slow_backend = cfg.getboolean('fixtures', 'slow backend')
|
||||
except (configparser.NoSectionError, configparser.NoOptionError):
|
||||
slow_backend = False
|
||||
|
||||
# pull the default_region out, if it exists
|
||||
try:
|
||||
default_region = cfg.get('fixtures', 'default_region')
|
||||
except (configparser.NoSectionError, configparser.NoOptionError):
|
||||
default_region = None
|
||||
|
||||
s3.clear()
|
||||
config.clear()
|
||||
|
||||
for section in cfg.sections():
|
||||
try:
|
||||
(type_, name) = section.split(None, 1)
|
||||
except ValueError:
|
||||
continue
|
||||
if type_ != 'region':
|
||||
continue
|
||||
regions.add(name, TargetConfig(cfg, section))
|
||||
|
||||
for section in cfg.sections():
|
||||
try:
|
||||
(type_, name) = section.split(None, 1)
|
||||
|
@ -310,52 +100,31 @@ def setup():
|
|||
continue
|
||||
if type_ != 's3':
|
||||
continue
|
||||
try:
|
||||
port = cfg.getint(section, 'port')
|
||||
except ConfigParser.NoOptionError:
|
||||
port = None
|
||||
|
||||
if len(regions.get()) == 0:
|
||||
regions.add("default", TargetConfig(cfg, section))
|
||||
|
||||
config[name] = munch.Munch()
|
||||
config[name] = bunch.Bunch()
|
||||
for var in [
|
||||
'user_id',
|
||||
'display_name',
|
||||
'email',
|
||||
's3website_domain',
|
||||
'host',
|
||||
'port',
|
||||
'is_secure',
|
||||
'kms_keyid',
|
||||
'storage_classes',
|
||||
]:
|
||||
try:
|
||||
config[name][var] = cfg.get(section, var)
|
||||
except configparser.NoOptionError:
|
||||
except ConfigParser.NoOptionError:
|
||||
pass
|
||||
|
||||
targets[name] = RegionsConn()
|
||||
|
||||
for (k, conf) in regions.items():
|
||||
conn = boto.s3.connection.S3Connection(
|
||||
aws_access_key_id=cfg.get(section, 'access_key'),
|
||||
aws_secret_access_key=cfg.get(section, 'secret_key'),
|
||||
is_secure=conf.is_secure,
|
||||
port=conf.port,
|
||||
host=conf.host,
|
||||
# TODO test vhost calling format
|
||||
calling_format=conf.calling_format,
|
||||
)
|
||||
|
||||
temp_targetConn = TargetConnection(conf, conn)
|
||||
targets[name].add(k, temp_targetConn)
|
||||
|
||||
# Explicitly test for and set the default region, if specified.
|
||||
# If it was not specified, use the 'is_master' flag to set it.
|
||||
if default_region:
|
||||
if default_region == name:
|
||||
targets[name].set_default(temp_targetConn)
|
||||
elif conf.is_master:
|
||||
targets[name].set_default(temp_targetConn)
|
||||
|
||||
s3[name] = targets[name].default.connection
|
||||
conn = boto.s3.connection.S3Connection(
|
||||
aws_access_key_id=cfg.get(section, 'access_key'),
|
||||
aws_secret_access_key=cfg.get(section, 'secret_key'),
|
||||
is_secure=cfg.getboolean(section, 'is_secure'),
|
||||
port=port,
|
||||
host=cfg.get(section, 'host'),
|
||||
# TODO support & test all variations
|
||||
calling_format=boto.s3.connection.OrdinaryCallingFormat(),
|
||||
)
|
||||
s3[name] = conn
|
||||
|
||||
# WARNING! we actively delete all buckets we see with the prefix
|
||||
# we've chosen! Choose your prefix with care, and don't reuse
|
||||
|
@ -371,15 +140,6 @@ def teardown():
|
|||
# remove our buckets here also, to avoid littering
|
||||
nuke_prefixed_buckets(prefix=prefix)
|
||||
|
||||
@pytest.fixture(scope="package")
|
||||
def configfile():
|
||||
setup()
|
||||
yield config
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def setup_teardown(configfile):
|
||||
yield
|
||||
teardown()
|
||||
|
||||
bucket_counter = itertools.count(1)
|
||||
|
||||
|
@ -399,100 +159,18 @@ def get_new_bucket_name():
|
|||
return name
|
||||
|
||||
|
||||
def get_new_bucket(target=None, name=None, headers=None):
|
||||
def get_new_bucket(connection=None):
|
||||
"""
|
||||
Get a bucket that exists and is empty.
|
||||
|
||||
Always recreates a bucket from scratch. This is useful to also
|
||||
reset ACLs and such.
|
||||
"""
|
||||
if target is None:
|
||||
target = targets.main.default
|
||||
connection = target.connection
|
||||
if name is None:
|
||||
name = get_new_bucket_name()
|
||||
if connection is None:
|
||||
connection = s3.main
|
||||
name = get_new_bucket_name()
|
||||
# the only way for this to fail with a pre-existing bucket is if
|
||||
# someone raced us between setup nuke_prefixed_buckets and here;
|
||||
# ignore that as astronomically unlikely
|
||||
bucket = connection.create_bucket(name, location=target.conf.api_name, headers=headers)
|
||||
bucket = connection.create_bucket(name)
|
||||
return bucket
|
||||
|
||||
def _make_request(method, bucket, key, body=None, authenticated=False, response_headers=None, request_headers=None, expires_in=100000, path_style=True, timeout=None):
|
||||
"""
|
||||
issue a request for a specified method, on a specified <bucket,key>,
|
||||
with a specified (optional) body (encrypted per the connection), and
|
||||
return the response (status, reason).
|
||||
|
||||
If key is None, then this will be treated as a bucket-level request.
|
||||
|
||||
If the request or response headers are None, then default values will be
|
||||
provided by later methods.
|
||||
"""
|
||||
if not path_style:
|
||||
conn = bucket.connection
|
||||
request_headers['Host'] = conn.calling_format.build_host(conn.server_name(), bucket.name)
|
||||
|
||||
if authenticated:
|
||||
urlobj = None
|
||||
if key is not None:
|
||||
urlobj = key
|
||||
elif bucket is not None:
|
||||
urlobj = bucket
|
||||
else:
|
||||
raise RuntimeError('Unable to find bucket name')
|
||||
url = urlobj.generate_url(expires_in, method=method, response_headers=response_headers, headers=request_headers)
|
||||
o = urlparse(url)
|
||||
path = o.path + '?' + o.query
|
||||
else:
|
||||
bucketobj = None
|
||||
if key is not None:
|
||||
path = '/{obj}'.format(obj=key.name)
|
||||
bucketobj = key.bucket
|
||||
elif bucket is not None:
|
||||
path = '/'
|
||||
bucketobj = bucket
|
||||
else:
|
||||
raise RuntimeError('Unable to find bucket name')
|
||||
if path_style:
|
||||
path = '/{bucket}'.format(bucket=bucketobj.name) + path
|
||||
|
||||
return _make_raw_request(host=s3.main.host, port=s3.main.port, method=method, path=path, body=body, request_headers=request_headers, secure=s3.main.is_secure, timeout=timeout)
|
||||
|
||||
def _make_bucket_request(method, bucket, body=None, authenticated=False, response_headers=None, request_headers=None, expires_in=100000, path_style=True, timeout=None):
|
||||
"""
|
||||
issue a request for a specified method, on a specified <bucket>,
|
||||
with a specified (optional) body (encrypted per the connection), and
|
||||
return the response (status, reason)
|
||||
"""
|
||||
return _make_request(method=method, bucket=bucket, key=None, body=body, authenticated=authenticated, response_headers=response_headers, request_headers=request_headers, expires_in=expires_in, path_style=path_style, timeout=timeout)
|
||||
|
||||
def _make_raw_request(host, port, method, path, body=None, request_headers=None, secure=False, timeout=None):
|
||||
"""
|
||||
issue a request to a specific host & port, for a specified method, on a
|
||||
specified path with a specified (optional) body (encrypted per the
|
||||
connection), and return the response (status, reason).
|
||||
|
||||
This allows construction of special cases not covered by the bucket/key to
|
||||
URL mapping of _make_request/_make_bucket_request.
|
||||
"""
|
||||
if secure:
|
||||
class_ = HTTPSConnection
|
||||
else:
|
||||
class_ = HTTPConnection
|
||||
|
||||
if request_headers is None:
|
||||
request_headers = {}
|
||||
|
||||
c = class_(host, port=port, timeout=timeout)
|
||||
|
||||
# TODO: We might have to modify this in future if we need to interact with
|
||||
# how httplib.request handles Accept-Encoding and Host.
|
||||
c.request(method, path, body=body, headers=request_headers)
|
||||
|
||||
res = c.getresponse()
|
||||
#c.close()
|
||||
|
||||
print(res.status, res.reason)
|
||||
return res
|
||||
|
||||
|
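``_make_raw_request`` above issues an HTTP request outside of boto, which is how the tests exercise malformed or unauthenticated cases. A minimal sketch of calling it directly; the host and port are placeholders for a local RGW, and the import path is assumed::

    # hypothetical direct use of the helper defined above
    from s3tests.functional import _make_raw_request

    res = _make_raw_request(
        host='localhost',
        port=8000,
        method='GET',
        path='/',                      # anonymous request to the service root
        request_headers={'Accept': '*/*'},
        secure=False,
        timeout=30,
    )
    print(res.status, res.reason)      # e.g. 200 OK or 403 Forbidden
    print(res.read()[:200])            # first bytes of the XML response body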
||||
|
|
|
@ -1,46 +0,0 @@
|
|||
import json
|
||||
|
||||
class Statement(object):
|
||||
def __init__(self, action, resource, principal = {"AWS" : "*"}, effect= "Allow", condition = None):
|
||||
self.principal = principal
|
||||
self.action = action
|
||||
self.resource = resource
|
||||
self.condition = condition
|
||||
self.effect = effect
|
||||
|
||||
def to_dict(self):
|
||||
d = { "Action" : self.action,
|
||||
"Principal" : self.principal,
|
||||
"Effect" : self.effect,
|
||||
"Resource" : self.resource
|
||||
}
|
||||
|
||||
if self.condition is not None:
|
||||
d["Condition"] = self.condition
|
||||
|
||||
return d
|
||||
|
||||
class Policy(object):
|
||||
def __init__(self):
|
||||
self.statements = []
|
||||
|
||||
def add_statement(self, s):
|
||||
self.statements.append(s)
|
||||
return self
|
||||
|
||||
def to_json(self):
|
||||
policy_dict = {
|
||||
"Version" : "2012-10-17",
|
||||
"Statement":
|
||||
[s.to_dict() for s in self.statements]
|
||||
}
|
||||
|
||||
return json.dumps(policy_dict)
|
||||
|
||||
def make_json_policy(action, resource, principal={"AWS": "*"}, conditions=None):
|
||||
"""
|
||||
Helper function to make single statement policies
|
||||
"""
|
||||
s = Statement(action, resource, principal, condition=conditions)
|
||||
p = Policy()
|
||||
return p.add_statement(s).to_json()
|
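``make_json_policy`` above is a small helper for building single-statement bucket policies in the boto2 tests. A short sketch of the JSON it produces; the bucket name and condition are made up, and the import path is assumed::

    import json

    from s3tests.functional.policy import make_json_policy  # path assumed

    policy_json = make_json_policy(
        action='s3:GetObject',
        resource='arn:aws:s3:::example-bucket/*',
        conditions={'IpAddress': {'aws:SourceIp': '10.0.0.0/24'}},
    )
    # prints a Version 2012-10-17 policy with one Allow statement
    # for principal {"AWS": "*"} and the condition supplied above
    print(json.dumps(json.loads(policy_json), indent=2))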
|
@ -1,767 +0,0 @@
|
|||
from io import StringIO
|
||||
import boto.connection
|
||||
import boto.exception
|
||||
import boto.s3.connection
|
||||
import boto.s3.acl
|
||||
import boto.utils
|
||||
import pytest
|
||||
import operator
|
||||
import random
|
||||
import string
|
||||
import socket
|
||||
import ssl
|
||||
import os
|
||||
import re
|
||||
from email.utils import formatdate
|
||||
|
||||
from urllib.parse import urlparse
|
||||
|
||||
from boto.s3.connection import S3Connection
|
||||
|
||||
from .utils import assert_raises
|
||||
|
||||
from email.header import decode_header
|
||||
|
||||
from . import (
|
||||
configfile,
|
||||
setup_teardown,
|
||||
_make_raw_request,
|
||||
nuke_prefixed_buckets,
|
||||
get_new_bucket,
|
||||
s3,
|
||||
config,
|
||||
get_prefix,
|
||||
TargetConnection,
|
||||
targets,
|
||||
)
|
||||
|
||||
|
||||
_orig_authorize = None
|
||||
_custom_headers = {}
|
||||
_remove_headers = []
|
||||
|
||||
|
||||
# HeaderS3Connection and _our_authorize are necessary to be able to arbitrarily
|
||||
# overwrite headers. Depending on the version of boto, one or the other is
|
||||
# necessary. We later determine in setup what needs to be used.
|
||||
|
||||
def _update_headers(headers):
|
||||
""" update a set of headers with additions/removals
|
||||
"""
|
||||
global _custom_headers, _remove_headers
|
||||
|
||||
headers.update(_custom_headers)
|
||||
|
||||
for header in _remove_headers:
|
||||
try:
|
||||
del headers[header]
|
||||
except KeyError:
|
||||
pass
|
||||
|
||||
|
||||
# Note: We need to update the headers twice. The first time so the
|
||||
# authentication signing is done correctly. The second time to overwrite any
|
||||
# headers modified or created in the authentication step.
|
||||
|
||||
class HeaderS3Connection(S3Connection):
|
||||
""" establish an authenticated connection w/customized headers
|
||||
"""
|
||||
def fill_in_auth(self, http_request, **kwargs):
|
||||
_update_headers(http_request.headers)
|
||||
S3Connection.fill_in_auth(self, http_request, **kwargs)
|
||||
_update_headers(http_request.headers)
|
||||
|
||||
return http_request
|
||||
|
||||
|
||||
def _our_authorize(self, connection, **kwargs):
|
||||
""" perform an authentication w/customized headers
|
||||
"""
|
||||
_update_headers(self.headers)
|
||||
_orig_authorize(self, connection, **kwargs)
|
||||
_update_headers(self.headers)
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def hook_headers(setup_teardown):
|
||||
boto_type = None
|
||||
_orig_conn = {}
|
||||
|
||||
# we determine what we need to replace by the existence of particular
|
||||
# attributes. boto 2.0rc1 as fill_in_auth for S3Connection, while boto 2.0
|
||||
# has authorize for HTTPRequest.
|
||||
if hasattr(S3Connection, 'fill_in_auth'):
|
||||
boto_type = 'S3Connection'
|
||||
for conn in s3:
|
||||
_orig_conn[conn] = s3[conn]
|
||||
header_conn = HeaderS3Connection(
|
||||
aws_access_key_id=s3[conn].aws_access_key_id,
|
||||
aws_secret_access_key=s3[conn].aws_secret_access_key,
|
||||
is_secure=s3[conn].is_secure,
|
||||
port=s3[conn].port,
|
||||
host=s3[conn].host,
|
||||
calling_format=s3[conn].calling_format
|
||||
)
|
||||
|
||||
s3[conn] = header_conn
|
||||
elif hasattr(boto.connection.HTTPRequest, 'authorize'):
|
||||
global _orig_authorize
|
||||
|
||||
boto_type = 'HTTPRequest'
|
||||
|
||||
_orig_authorize = boto.connection.HTTPRequest.authorize
|
||||
boto.connection.HTTPRequest.authorize = _our_authorize
|
||||
else:
|
||||
raise RuntimeError
|
||||
|
||||
yield
|
||||
|
||||
# replace original functionality depending on the boto version
|
||||
if boto_type is 'S3Connection':
|
||||
for conn in s3:
|
||||
s3[conn] = _orig_conn[conn]
|
||||
_orig_conn = {}
|
||||
elif boto_type is 'HTTPRequest':
|
||||
boto.connection.HTTPRequest.authorize = _orig_authorize
|
||||
_orig_authorize = None
|
||||
else:
|
||||
raise RuntimeError
|
||||
|
||||
|
||||
def _clear_custom_headers():
|
||||
""" Eliminate any header customizations
|
||||
"""
|
||||
global _custom_headers, _remove_headers
|
||||
_custom_headers = {}
|
||||
_remove_headers = []
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def clear_custom_headers(setup_teardown, hook_headers):
|
||||
yield
|
||||
_clear_custom_headers() # clear headers before teardown()
|
||||
|
||||
def _add_custom_headers(headers=None, remove=None):
|
||||
""" Define header customizations (additions, replacements, removals)
|
||||
"""
|
||||
global _custom_headers, _remove_headers
|
||||
if not _custom_headers:
|
||||
_custom_headers = {}
|
||||
|
||||
if headers is not None:
|
||||
_custom_headers.update(headers)
|
||||
if remove is not None:
|
||||
_remove_headers.extend(remove)
|
||||
|
||||
|
||||
def _setup_bad_object(headers=None, remove=None):
|
||||
""" Create a new bucket, add an object w/header customizations
|
||||
"""
|
||||
bucket = get_new_bucket()
|
||||
|
||||
_add_custom_headers(headers=headers, remove=remove)
|
||||
return bucket.new_key('foo')
|
||||
|
||||
#
|
||||
# common tests
|
||||
#
|
||||
|
||||
@pytest.mark.auth_common
|
||||
@pytest.mark.fails_on_dbstore
|
||||
def test_object_create_bad_contentlength_none():
|
||||
key = _setup_bad_object(remove=('Content-Length',))
|
||||
|
||||
e = assert_raises(boto.exception.S3ResponseError, key.set_contents_from_string, 'bar')
|
||||
assert e.status == 411
|
||||
assert e.reason == 'Length Required'
|
||||
assert e.error_code == 'MissingContentLength'
|
||||
|
||||
|
||||
@pytest.mark.auth_common
|
||||
@pytest.mark.fails_on_rgw
|
||||
def test_object_create_bad_contentlength_mismatch_above():
|
||||
content = 'bar'
|
||||
length = len(content) + 1
|
||||
|
||||
key = _setup_bad_object({'Content-Length': length})
|
||||
|
||||
# Disable retries since key.should_retry will discard the response with
|
||||
# PleaseRetryException.
|
||||
def no_retry(response, chunked_transfer): return False
|
||||
key.should_retry = no_retry
|
||||
|
||||
e = assert_raises(boto.exception.S3ResponseError, key.set_contents_from_string, content)
|
||||
assert e.status == 400
|
||||
assert e.reason.lower() == 'bad request' # some proxies vary the case
|
||||
assert e.error_code == 'RequestTimeout'
|
||||
|
||||
|
||||
@pytest.mark.auth_common
|
||||
@pytest.mark.fails_on_dbstore
|
||||
def test_object_create_bad_authorization_empty():
|
||||
key = _setup_bad_object({'Authorization': ''})
|
||||
|
||||
e = assert_raises(boto.exception.S3ResponseError, key.set_contents_from_string, 'bar')
|
||||
assert e.status == 403
|
||||
assert e.reason == 'Forbidden'
|
||||
assert e.error_code == 'AccessDenied'
|
||||
|
||||
@pytest.mark.auth_common
|
||||
@pytest.mark.fails_on_dbstore
|
||||
def test_object_create_date_and_amz_date():
|
||||
date = formatdate(usegmt=True)
|
||||
key = _setup_bad_object({'Date': date, 'X-Amz-Date': date})
|
||||
key.set_contents_from_string('bar')
|
||||
|
||||
@pytest.mark.auth_common
|
||||
@pytest.mark.fails_on_dbstore
|
||||
def test_object_create_amz_date_and_no_date():
|
||||
date = formatdate(usegmt=True)
|
||||
key = _setup_bad_object({'X-Amz-Date': date}, ('Date',))
|
||||
key.set_contents_from_string('bar')
|
||||
|
||||
|
||||

# the teardown is really messed up here. check it out
@pytest.mark.auth_common
@pytest.mark.fails_on_dbstore
def test_object_create_bad_authorization_none():
    key = _setup_bad_object(remove=('Authorization',))

    e = assert_raises(boto.exception.S3ResponseError, key.set_contents_from_string, 'bar')
    assert e.status == 403
    assert e.reason == 'Forbidden'
    assert e.error_code == 'AccessDenied'


@pytest.mark.auth_common
@pytest.mark.fails_on_dbstore
def test_bucket_create_contentlength_none():
    _add_custom_headers(remove=('Content-Length',))
    get_new_bucket()


@pytest.mark.auth_common
@pytest.mark.fails_on_dbstore
def test_object_acl_create_contentlength_none():
    bucket = get_new_bucket()
    key = bucket.new_key('foo')
    key.set_contents_from_string('blah')

    _add_custom_headers(remove=('Content-Length',))
    key.set_acl('public-read')

def _create_new_connection():
    # We're going to need to manually build a connection using bad authorization info.
    # But to save the day, lets just hijack the settings from s3.main. :)
    main = s3.main
    conn = HeaderS3Connection(
        aws_access_key_id=main.aws_access_key_id,
        aws_secret_access_key=main.aws_secret_access_key,
        is_secure=main.is_secure,
        port=main.port,
        host=main.host,
        calling_format=main.calling_format,
        )
    return TargetConnection(targets.main.default.conf, conn)

@pytest.mark.auth_common
@pytest.mark.fails_on_rgw
def test_bucket_create_bad_contentlength_empty():
    conn = _create_new_connection()
    _add_custom_headers({'Content-Length': ''})
    e = assert_raises(boto.exception.S3ResponseError, get_new_bucket, conn)
    assert e.status == 400
    assert e.reason.lower() == 'bad request' # some proxies vary the case


@pytest.mark.auth_common
@pytest.mark.fails_on_dbstore
def test_bucket_create_bad_contentlength_none():
    _add_custom_headers(remove=('Content-Length',))
    bucket = get_new_bucket()


@pytest.mark.auth_common
@pytest.mark.fails_on_dbstore
def test_bucket_create_bad_authorization_empty():
    _add_custom_headers({'Authorization': ''})
    e = assert_raises(boto.exception.S3ResponseError, get_new_bucket)
    assert e.status == 403
    assert e.reason == 'Forbidden'
    assert e.error_code == 'AccessDenied'


# the teardown is really messed up here. check it out
@pytest.mark.auth_common
@pytest.mark.fails_on_dbstore
def test_bucket_create_bad_authorization_none():
    _add_custom_headers(remove=('Authorization',))
    e = assert_raises(boto.exception.S3ResponseError, get_new_bucket)
    assert e.status == 403
    assert e.reason == 'Forbidden'
    assert e.error_code == 'AccessDenied'

#
# AWS2 specific tests
#

@pytest.mark.auth_aws2
@pytest.mark.fails_on_dbstore
def test_object_create_bad_contentlength_mismatch_below_aws2():
    check_aws2_support()
    content = 'bar'
    length = len(content) - 1
    key = _setup_bad_object({'Content-Length': length})
    e = assert_raises(boto.exception.S3ResponseError, key.set_contents_from_string, content)
    assert e.status == 400
    assert e.reason.lower() == 'bad request' # some proxies vary the case
    assert e.error_code == 'BadDigest'


@pytest.mark.auth_aws2
@pytest.mark.fails_on_dbstore
def test_object_create_bad_authorization_incorrect_aws2():
    check_aws2_support()
    key = _setup_bad_object({'Authorization': 'AWS AKIAIGR7ZNNBHC5BKSUB:FWeDfwojDSdS2Ztmpfeubhd9isU='})
    e = assert_raises(boto.exception.S3ResponseError, key.set_contents_from_string, 'bar')
    assert e.status == 403
    assert e.reason == 'Forbidden'
    assert e.error_code in ('AccessDenied', 'SignatureDoesNotMatch', 'InvalidAccessKeyId')


@pytest.mark.auth_aws2
@pytest.mark.fails_on_dbstore
def test_object_create_bad_authorization_invalid_aws2():
    check_aws2_support()
    key = _setup_bad_object({'Authorization': 'AWS HAHAHA'})
    e = assert_raises(boto.exception.S3ResponseError, key.set_contents_from_string, 'bar')
    assert e.status == 400
    assert e.reason.lower() == 'bad request' # some proxies vary the case
    assert e.error_code == 'InvalidArgument'

@pytest.mark.auth_aws2
@pytest.mark.fails_on_dbstore
def test_object_create_bad_date_none_aws2():
    check_aws2_support()
    key = _setup_bad_object(remove=('Date',))
    e = assert_raises(boto.exception.S3ResponseError, key.set_contents_from_string, 'bar')
    assert e.status == 403
    assert e.reason == 'Forbidden'
    assert e.error_code == 'AccessDenied'


@pytest.mark.auth_aws2
def test_bucket_create_bad_authorization_invalid_aws2():
    check_aws2_support()
    _add_custom_headers({'Authorization': 'AWS HAHAHA'})
    e = assert_raises(boto.exception.S3ResponseError, get_new_bucket)
    assert e.status == 400
    assert e.reason.lower() == 'bad request' # some proxies vary the case
    assert e.error_code == 'InvalidArgument'

@pytest.mark.auth_aws2
@pytest.mark.fails_on_dbstore
def test_bucket_create_bad_date_none_aws2():
    check_aws2_support()
    _add_custom_headers(remove=('Date',))
    e = assert_raises(boto.exception.S3ResponseError, get_new_bucket)
    assert e.status == 403
    assert e.reason == 'Forbidden'
    assert e.error_code == 'AccessDenied'

#
# AWS4 specific tests
#

def check_aws4_support():
    if 'S3_USE_SIGV4' not in os.environ:
        pytest.skip('sigv4 tests not enabled by S3_USE_SIGV4')

def check_aws2_support():
    if 'S3_USE_SIGV4' in os.environ:
        pytest.skip('sigv2 tests disabled by S3_USE_SIGV4')

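# Both helpers key off the same environment variable, so a given run exercises
# either the SigV2 or the SigV4 variants, never both.  A sketch of the switch
# (illustrative; the helpers above only look at whether the variable is set,
# and boto is commonly pointed at SigV4 through the same variable):
#
#     S3_USE_SIGV4=1   ->  the *_aws4 tests run and the *_aws2 tests are skipped
#     variable unset   ->  the *_aws2 tests run and the *_aws4 tests are skipped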

@pytest.mark.auth_aws4
def test_object_create_bad_md5_invalid_garbage_aws4():
    check_aws4_support()
    key = _setup_bad_object({'Content-MD5':'AWS4 HAHAHA'})

    e = assert_raises(boto.exception.S3ResponseError, key.set_contents_from_string, 'bar')
    assert e.status == 400
    assert e.reason.lower() == 'bad request' # some proxies vary the case
    assert e.error_code == 'InvalidDigest'


@pytest.mark.auth_aws4
def test_object_create_bad_contentlength_mismatch_below_aws4():
    check_aws4_support()
    content = 'bar'
    length = len(content) - 1
    key = _setup_bad_object({'Content-Length': length})

    e = assert_raises(boto.exception.S3ResponseError, key.set_contents_from_string, content)
    assert e.status == 400
    assert e.reason.lower() == 'bad request' # some proxies vary the case
    assert e.error_code == 'XAmzContentSHA256Mismatch'


@pytest.mark.auth_aws4
def test_object_create_bad_authorization_incorrect_aws4():
    check_aws4_support()
    key = _setup_bad_object({'Authorization': 'AWS4-HMAC-SHA256 Credential=AKIAIGR7ZNNBHC5BKSUB/20150930/us-east-1/s3/aws4_request,SignedHeaders=host;user-agent,Signature=FWeDfwojDSdS2Ztmpfeubhd9isU='})

    e = assert_raises(boto.exception.S3ResponseError, key.set_contents_from_string, 'bar')
    assert e.status == 403
    assert e.reason == 'Forbidden'
    assert e.error_code in ('AccessDenied', 'SignatureDoesNotMatch', 'InvalidAccessKeyId')


@pytest.mark.auth_aws4
def test_object_create_bad_authorization_invalid_aws4():
    check_aws4_support()
    key = _setup_bad_object({'Authorization': 'AWS4-HMAC-SHA256 Credential=HAHAHA'})

    e = assert_raises(boto.exception.S3ResponseError, key.set_contents_from_string, 'bar')
    assert e.status == 400
    assert e.reason.lower() == 'bad request' # some proxies vary the case
    assert e.error_code in ('AuthorizationHeaderMalformed', 'InvalidArgument')


@pytest.mark.auth_aws4
def test_object_create_bad_ua_empty_aws4():
    check_aws4_support()
    key = _setup_bad_object({'User-Agent': ''})

    e = assert_raises(boto.exception.S3ResponseError, key.set_contents_from_string, 'bar')
    assert e.status == 403
    assert e.reason == 'Forbidden'
    assert e.error_code == 'SignatureDoesNotMatch'


@pytest.mark.auth_aws4
def test_object_create_bad_ua_none_aws4():
    check_aws4_support()
    key = _setup_bad_object(remove=('User-Agent',))

    e = assert_raises(boto.exception.S3ResponseError, key.set_contents_from_string, 'bar')
    assert e.status == 403
    assert e.reason == 'Forbidden'
    assert e.error_code == 'SignatureDoesNotMatch'


@pytest.mark.auth_aws4
def test_object_create_bad_date_invalid_aws4():
    check_aws4_support()
    key = _setup_bad_object({'Date': 'Bad Date'})
    key.set_contents_from_string('bar')


@pytest.mark.auth_aws4
def test_object_create_bad_amz_date_invalid_aws4():
    check_aws4_support()
    key = _setup_bad_object({'X-Amz-Date': 'Bad Date'})

    e = assert_raises(boto.exception.S3ResponseError, key.set_contents_from_string, 'bar')
    assert e.status == 403
    assert e.reason == 'Forbidden'
    assert e.error_code in ('AccessDenied', 'SignatureDoesNotMatch')


@pytest.mark.auth_aws4
def test_object_create_bad_date_empty_aws4():
    check_aws4_support()
    key = _setup_bad_object({'Date': ''})
    key.set_contents_from_string('bar')


@pytest.mark.auth_aws4
def test_object_create_bad_amz_date_empty_aws4():
    check_aws4_support()
    key = _setup_bad_object({'X-Amz-Date': ''})

    e = assert_raises(boto.exception.S3ResponseError, key.set_contents_from_string, 'bar')
    assert e.status == 403
    assert e.reason == 'Forbidden'
    assert e.error_code in ('AccessDenied', 'SignatureDoesNotMatch')


@pytest.mark.auth_aws4
def test_object_create_bad_date_none_aws4():
    check_aws4_support()
    key = _setup_bad_object(remove=('Date',))
    key.set_contents_from_string('bar')


@pytest.mark.auth_aws4
def test_object_create_bad_amz_date_none_aws4():
    check_aws4_support()
    key = _setup_bad_object(remove=('X-Amz-Date',))

    e = assert_raises(boto.exception.S3ResponseError, key.set_contents_from_string, 'bar')
    assert e.status == 403
    assert e.reason == 'Forbidden'
    assert e.error_code in ('AccessDenied', 'SignatureDoesNotMatch')


@pytest.mark.auth_aws4
def test_object_create_bad_date_before_today_aws4():
    check_aws4_support()
    key = _setup_bad_object({'Date': 'Tue, 07 Jul 2010 21:53:04 GMT'})
    key.set_contents_from_string('bar')


@pytest.mark.auth_aws4
def test_object_create_bad_amz_date_before_today_aws4():
    check_aws4_support()
    key = _setup_bad_object({'X-Amz-Date': '20100707T215304Z'})

    e = assert_raises(boto.exception.S3ResponseError, key.set_contents_from_string, 'bar')
    assert e.status == 403
    assert e.reason == 'Forbidden'
    assert e.error_code in ('RequestTimeTooSkewed', 'SignatureDoesNotMatch')


@pytest.mark.auth_aws4
def test_object_create_bad_date_after_today_aws4():
    check_aws4_support()
    key = _setup_bad_object({'Date': 'Tue, 07 Jul 2030 21:53:04 GMT'})
    key.set_contents_from_string('bar')


@pytest.mark.auth_aws4
def test_object_create_bad_amz_date_after_today_aws4():
    check_aws4_support()
    key = _setup_bad_object({'X-Amz-Date': '20300707T215304Z'})

    e = assert_raises(boto.exception.S3ResponseError, key.set_contents_from_string, 'bar')
    assert e.status == 403
    assert e.reason == 'Forbidden'
    assert e.error_code in ('RequestTimeTooSkewed', 'SignatureDoesNotMatch')


@pytest.mark.auth_aws4
def test_object_create_bad_date_before_epoch_aws4():
    check_aws4_support()
    key = _setup_bad_object({'Date': 'Tue, 07 Jul 1950 21:53:04 GMT'})
    key.set_contents_from_string('bar')


@pytest.mark.auth_aws4
def test_object_create_bad_amz_date_before_epoch_aws4():
    check_aws4_support()
    key = _setup_bad_object({'X-Amz-Date': '19500707T215304Z'})

    e = assert_raises(boto.exception.S3ResponseError, key.set_contents_from_string, 'bar')
    assert e.status == 403
    assert e.reason == 'Forbidden'
    assert e.error_code in ('AccessDenied', 'SignatureDoesNotMatch')


@pytest.mark.auth_aws4
def test_object_create_bad_date_after_end_aws4():
    check_aws4_support()
    key = _setup_bad_object({'Date': 'Tue, 07 Jul 9999 21:53:04 GMT'})
    key.set_contents_from_string('bar')


@pytest.mark.auth_aws4
def test_object_create_bad_amz_date_after_end_aws4():
    check_aws4_support()
    key = _setup_bad_object({'X-Amz-Date': '99990707T215304Z'})

    e = assert_raises(boto.exception.S3ResponseError, key.set_contents_from_string, 'bar')
    assert e.status == 403
    assert e.reason == 'Forbidden'
    assert e.error_code in ('RequestTimeTooSkewed', 'SignatureDoesNotMatch')


@pytest.mark.auth_aws4
def test_object_create_missing_signed_custom_header_aws4():
    check_aws4_support()
    method='PUT'
    expires_in='100000'
    bucket = get_new_bucket()
    key = bucket.new_key('foo')
    body='zoo'

    # compute the signature with 'x-amz-foo=bar' in the headers...
    request_headers = {'x-amz-foo':'bar'}
    url = key.generate_url(expires_in, method=method, headers=request_headers)

    o = urlparse(url)
    path = o.path + '?' + o.query

    # avoid sending 'x-amz-foo=bar' in the headers
    request_headers.pop('x-amz-foo')

    res = _make_raw_request(host=s3.main.host, port=s3.main.port, method=method, path=path,
                            body=body, request_headers=request_headers, secure=s3.main.is_secure)

    assert res.status == 403
    assert res.reason == 'Forbidden'


@pytest.mark.auth_aws4
def test_object_create_missing_signed_header_aws4():
    check_aws4_support()
    method='PUT'
    expires_in='100000'
    bucket = get_new_bucket()
    key = bucket.new_key('foo')
    body='zoo'

    # compute the signature...
    request_headers = {}
    url = key.generate_url(expires_in, method=method, headers=request_headers)

    o = urlparse(url)
    path = o.path + '?' + o.query

    # 'X-Amz-Expires' is missing
    target = r'&X-Amz-Expires=' + expires_in
    path = re.sub(target, '', path)

    res = _make_raw_request(host=s3.main.host, port=s3.main.port, method=method, path=path,
                            body=body, request_headers=request_headers, secure=s3.main.is_secure)

    assert res.status == 403
    assert res.reason == 'Forbidden'

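# Both raw-request tests above lean on the fact that the signed headers and the
# X-Amz-* query parameters are part of the SigV4 string-to-sign: dropping any of
# them after the presigned URL has been generated is expected to invalidate the
# signature, which is why a plain 403 Forbidden is asserted.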

@pytest.mark.auth_aws4
def test_bucket_create_bad_authorization_invalid_aws4():
    check_aws4_support()
    _add_custom_headers({'Authorization': 'AWS4 HAHAHA'})
    e = assert_raises(boto.exception.S3ResponseError, get_new_bucket)

    assert e.status == 400
    assert e.reason.lower() == 'bad request' # some proxies vary the case
    assert e.error_code == 'InvalidArgument'


@pytest.mark.auth_aws4
def test_bucket_create_bad_ua_empty_aws4():
    check_aws4_support()
    _add_custom_headers({'User-Agent': ''})
    e = assert_raises(boto.exception.S3ResponseError, get_new_bucket)

    assert e.status == 403
    assert e.reason == 'Forbidden'
    assert e.error_code == 'SignatureDoesNotMatch'

@pytest.mark.auth_aws4
def test_bucket_create_bad_ua_none_aws4():
    check_aws4_support()
    _add_custom_headers(remove=('User-Agent',))

    e = assert_raises(boto.exception.S3ResponseError, get_new_bucket)
    assert e.status == 403
    assert e.reason == 'Forbidden'
    assert e.error_code == 'SignatureDoesNotMatch'


@pytest.mark.auth_aws4
def test_bucket_create_bad_date_invalid_aws4():
    check_aws4_support()
    _add_custom_headers({'Date': 'Bad Date'})
    get_new_bucket()


@pytest.mark.auth_aws4
def test_bucket_create_bad_amz_date_invalid_aws4():
    check_aws4_support()
    _add_custom_headers({'X-Amz-Date': 'Bad Date'})
    e = assert_raises(boto.exception.S3ResponseError, get_new_bucket)

    assert e.status == 403
    assert e.reason == 'Forbidden'
    assert e.error_code in ('AccessDenied', 'SignatureDoesNotMatch')


@pytest.mark.auth_aws4
def test_bucket_create_bad_date_empty_aws4():
    check_aws4_support()
    _add_custom_headers({'Date': ''})
    get_new_bucket()


@pytest.mark.auth_aws4
def test_bucket_create_bad_amz_date_empty_aws4():
    check_aws4_support()
    _add_custom_headers({'X-Amz-Date': ''})
    e = assert_raises(boto.exception.S3ResponseError, get_new_bucket)

    assert e.status == 403
    assert e.reason == 'Forbidden'
    assert e.error_code in ('AccessDenied', 'SignatureDoesNotMatch')

@pytest.mark.auth_aws4
def test_bucket_create_bad_date_none_aws4():
    check_aws4_support()
    _add_custom_headers(remove=('Date',))
    get_new_bucket()


@pytest.mark.auth_aws4
def test_bucket_create_bad_amz_date_none_aws4():
    check_aws4_support()
    _add_custom_headers(remove=('X-Amz-Date',))
    e = assert_raises(boto.exception.S3ResponseError, get_new_bucket)

    assert e.status == 403
    assert e.reason == 'Forbidden'
    assert e.error_code in ('AccessDenied', 'SignatureDoesNotMatch')


@pytest.mark.auth_aws4
def test_bucket_create_bad_date_before_today_aws4():
    check_aws4_support()
    _add_custom_headers({'Date': 'Tue, 07 Jul 2010 21:53:04 GMT'})
    get_new_bucket()


@pytest.mark.auth_aws4
def test_bucket_create_bad_amz_date_before_today_aws4():
    check_aws4_support()
    _add_custom_headers({'X-Amz-Date': '20100707T215304Z'})
    e = assert_raises(boto.exception.S3ResponseError, get_new_bucket)

    assert e.status == 403
    assert e.reason == 'Forbidden'
    assert e.error_code in ('RequestTimeTooSkewed', 'SignatureDoesNotMatch')


@pytest.mark.auth_aws4
def test_bucket_create_bad_date_after_today_aws4():
    check_aws4_support()
    _add_custom_headers({'Date': 'Tue, 07 Jul 2030 21:53:04 GMT'})
    get_new_bucket()


@pytest.mark.auth_aws4
def test_bucket_create_bad_amz_date_after_today_aws4():
    check_aws4_support()
    _add_custom_headers({'X-Amz-Date': '20300707T215304Z'})
    e = assert_raises(boto.exception.S3ResponseError, get_new_bucket)

    assert e.status == 403
    assert e.reason == 'Forbidden'
    assert e.error_code in ('RequestTimeTooSkewed', 'SignatureDoesNotMatch')


@pytest.mark.auth_aws4
def test_bucket_create_bad_date_before_epoch_aws4():
    check_aws4_support()
    _add_custom_headers({'Date': 'Tue, 07 Jul 1950 21:53:04 GMT'})
    get_new_bucket()


@pytest.mark.auth_aws4
def test_bucket_create_bad_amz_date_before_epoch_aws4():
    check_aws4_support()
    _add_custom_headers({'X-Amz-Date': '19500707T215304Z'})
    e = assert_raises(boto.exception.S3ResponseError, get_new_bucket)

    assert e.status == 403
    assert e.reason == 'Forbidden'
    assert e.error_code in ('AccessDenied', 'SignatureDoesNotMatch')
File diff suppressed because it is too large
File diff suppressed because it is too large
@@ -1,9 +0,0 @@
from . import utils

def test_generate():
    FIVE_MB = 5 * 1024 * 1024
    assert len(''.join(utils.generate_random(0))) == 0
    assert len(''.join(utils.generate_random(1))) == 1
    assert len(''.join(utils.generate_random(FIVE_MB - 1))) == FIVE_MB - 1
    assert len(''.join(utils.generate_random(FIVE_MB))) == FIVE_MB
    assert len(''.join(utils.generate_random(FIVE_MB + 1))) == FIVE_MB + 1

@@ -1,8 +1,3 @@
import random
import requests
import string
import time

def assert_raises(excClass, callableObj, *args, **kwargs):
    """
    Like unittest.TestCase.assertRaises, but returns the exception.

@@ -17,45 +12,3 @@ def assert_raises(excClass, callableObj, *args, **kwargs):
    else:
        excName = str(excClass)
    raise AssertionError("%s not raised" % excName)

def generate_random(size, part_size=5*1024*1024):
    """
    Generate the specified number random data.
    (actually each MB is a repetition of the first KB)
    """
    chunk = 1024
    allowed = string.ascii_letters
    for x in range(0, size, part_size):
        strpart = ''.join([allowed[random.randint(0, len(allowed) - 1)] for _ in range(chunk)])
        s = ''
        left = size - x
        this_part_size = min(left, part_size)
        for y in range(this_part_size // chunk):
            s = s + strpart
        s = s + strpart[:(this_part_size % chunk)]
        yield s
        if (x == size):
            return

# syncs all the regions except for the one passed in
def region_sync_meta(targets, region):

    for (k, r) in targets.items():
        if r == region:
            continue
        conf = r.conf
        if conf.sync_agent_addr:
            ret = requests.post('http://{addr}:{port}/metadata/incremental'.format(addr = conf.sync_agent_addr, port = conf.sync_agent_port))
            assert ret.status_code == 200
        if conf.sync_meta_wait:
            time.sleep(conf.sync_meta_wait)


def get_grantee(policy, permission):
    '''
    Given an object/bucket policy, extract the grantee with the required permission
    '''

    for g in policy.acl.grants:
        if g.permission == permission:
            return g.id
s3tests/generate_objects.py (new file, 115 lines)
@@ -0,0 +1,115 @@
#! /usr/bin/python

from boto.s3.key import Key
from optparse import OptionParser
from . import realistic
import traceback
import random
from . import common
import sys


def parse_opts():
    parser = OptionParser()
    parser.add_option('-O', '--outfile', help='write output to FILE. Defaults to STDOUT', metavar='FILE')
    parser.add_option('-b', '--bucket', dest='bucket', help='push objects to BUCKET', metavar='BUCKET')
    parser.add_option('--seed', dest='seed', help='optional seed for the random number generator')

    return parser.parse_args()


def get_random_files(quantity, mean, stddev, seed):
    """Create file-like objects with pseudorandom contents.
    IN:
        number of files to create
        mean file size in bytes
        standard deviation from mean file size
        seed for PRNG
    OUT:
        list of file handles
    """
    file_generator = realistic.files(mean, stddev, seed)
    return [file_generator.next() for _ in xrange(quantity)]


def upload_objects(bucket, files, seed):
    """Upload a bunch of files to an S3 bucket
    IN:
        boto S3 bucket object
        list of file handles to upload
        seed for PRNG
    OUT:
        list of boto S3 key objects
    """
    keys = []
    name_generator = realistic.names(15, 4, seed=seed)

    for fp in files:
        print >> sys.stderr, 'sending file with size %dB' % fp.size
        key = Key(bucket)
        key.key = name_generator.next()
        key.set_contents_from_file(fp)
        keys.append(key)

    return keys


def _main():
    '''To run the static content load test, make sure you've bootstrapped your
    test environment and set up your config.yml file, then run the following:
        S3TEST_CONF=config.yml virtualenv/bin/python generate_objects.py -O urls.txt --seed 1234

    This creates a bucket with your S3 credentials (from config.yml) and
    fills it with garbage objects as described in generate_objects.conf.
    It writes a list of URLS to those objects to ./urls.txt.

    Once you have objects in your bucket, run the siege benchmarking program:
        siege --rc ./siege.conf -r 5

    This tells siege to read the ./siege.conf config file which tells it to
    use the urls in ./urls.txt and log to ./siege.log. It hits each url in
    urls.txt 5 times (-r flag).

    Results are printed to the terminal and written in CSV format to
    ./siege.log
    '''
    (options, args) = parse_opts()

    #SETUP
    random.seed(options.seed if options.seed else None)
    conn = common.s3.main

    if options.outfile:
        OUTFILE = open(options.outfile, 'w')
    elif common.config.file_generation.url_file:
        OUTFILE = open(common.config.file_generation.url_file, 'w')
    else:
        OUTFILE = sys.stdout

    if options.bucket:
        bucket = conn.create_bucket(options.bucket)
    else:
        bucket = common.get_new_bucket()

    keys = []
    print >> OUTFILE, 'bucket: %s' % bucket.name
    print >> sys.stderr, 'setup complete, generating files'
    for profile in common.config.file_generation.groups:
        seed = random.random()
        files = get_random_files(profile[0], profile[1], profile[2], seed)
        keys += upload_objects(bucket, files, seed)

    print >> sys.stderr, 'finished sending files. generating urls'
    for key in keys:
        print >> OUTFILE, key.generate_url(30758400) #valid for 1 year

    print >> sys.stderr, 'done'


def main():
    common.setup()
    try:
        _main()
    except Exception as e:
        traceback.print_exc()
    common.teardown()
s3tests/rand_readwrite.py (new file, 196 lines)
@@ -0,0 +1,196 @@
#!/usr/bin/python

import gevent
import gevent.queue
import gevent.monkey; gevent.monkey.patch_all()
import optparse
import time
import random

import generate_objects
import realistic
import common

class Result:
    TYPE_NONE = 0
    TYPE_READER = 1
    TYPE_WRITER = 2

    def __init__(self, name, type=TYPE_NONE, time=0, success=True, size=0, details=''):
        self.name = name
        self.type = type
        self.time = time
        self.success = success
        self.size = size
        self.details = details

    def __repr__(self):
        type_dict = {Result.TYPE_NONE: 'None', Result.TYPE_READER: 'Reader', Result.TYPE_WRITER: 'Writer'}
        type_s = type_dict[self.type]
        if self.success:
            status = 'Success'
        else:
            status = 'FAILURE'

        return "<Result: [{success}] {type}{name} -- {size} KB in {time}s = {mbps} MB/s {details}>".format(
            success=status,
            type=type_s,
            name=self.name,
            size=self.size,
            time=self.time,
            mbps=self.size / self.time / 1024.0,
            details=self.details
            )

def reader(seconds, bucket, name=None, queue=None):
    with gevent.Timeout(seconds, False):
        while (1):
            count = 0
            for key in bucket.list():
                fp = realistic.FileVerifier()
                start = time.clock()
                key.get_contents_to_file(fp)
                end = time.clock()
                elapsed = end - start
                if queue:
                    queue.put(
                        Result(
                            name,
                            type=Result.TYPE_READER,
                            time=elapsed,
                            success=fp.valid(),
                            size=fp.size / 1024,
                            ),
                        )
                count += 1
            if count == 0:
                gevent.sleep(1)

def writer(seconds, bucket, name=None, queue=None, quantity=1, file_size=1, file_stddev=0, file_name_seed=None):
    with gevent.Timeout(seconds, False):
        while (1):
            r = random.randint(0, 65535)
            r2 = r
            if file_name_seed != None:
                r2 = file_name_seed

            files = generate_objects.get_random_files(
                quantity=quantity,
                mean=1024 * file_size,
                stddev=1024 * file_stddev,
                seed=r,
                )

            start = time.clock()
            generate_objects.upload_objects(bucket, files, r2)
            end = time.clock()
            elapsed = end - start

            if queue:
                queue.put(Result(name,
                    type=Result.TYPE_WRITER,
                    time=elapsed,
                    size=sum(f.size/1024 for f in files),
                    )
                )

def parse_options():
    parser = optparse.OptionParser()
    parser.add_option("-t", "--time", dest="duration", type="float",
        help="duration to run tests (seconds)", default=5, metavar="SECS")
    parser.add_option("-r", "--read", dest="num_readers", type="int",
        help="number of reader threads", default=0, metavar="NUM")
    parser.add_option("-w", "--write", dest="num_writers", type="int",
        help="number of writer threads", default=2, metavar="NUM")
    parser.add_option("-s", "--size", dest="file_size", type="float",
        help="file size to use, in kb", default=1024, metavar="KB")
    parser.add_option("-q", "--quantity", dest="quantity", type="int",
        help="number of files per batch", default=1, metavar="NUM")
    parser.add_option("-d", "--stddev", dest="stddev", type="float",
        help="stddev of file size", default=0, metavar="KB")
    parser.add_option("-W", "--rewrite", dest="rewrite", action="store_true",
        help="rewrite the same files (total=quantity)")
    parser.add_option("--no-cleanup", dest="cleanup", action="store_false",
        help="skip cleaning up all created buckets", default=True)

    return parser.parse_args()

def main():
    # parse options
    (options, args) = parse_options()

    try:
        # setup
        common.setup()
        bucket = common.get_new_bucket()
        print "Created bucket: {name}".format(name=bucket.name)
        r = None
        if (options.rewrite):
            r = random.randint(0, 65535)
        q = gevent.queue.Queue()

        # main work
        print "Using file size: {size} +- {stddev}".format(size=options.file_size, stddev=options.stddev)
        print "Spawning {r} readers and {w} writers...".format(r=options.num_readers, w=options.num_writers)
        greenlets = []
        greenlets += [gevent.spawn(writer, options.duration, bucket,
            name=x,
            queue=q,
            file_size=options.file_size,
            file_stddev=options.stddev,
            quantity=options.quantity,
            file_name_seed=r
            ) for x in xrange(options.num_writers)]
        greenlets += [gevent.spawn(reader, options.duration, bucket,
            name=x,
            queue=q
            ) for x in xrange(options.num_readers)]
        gevent.spawn_later(options.duration, lambda: q.put(StopIteration))

        total_read = 0
        total_write = 0
        read_success = 0
        read_failure = 0
        write_success = 0
        write_failure = 0
        for item in q:
            print item
            if item.type == Result.TYPE_READER:
                if item.success:
                    read_success += 1
                    total_read += item.size
                else:
                    read_failure += 1
            elif item.type == Result.TYPE_WRITER:
                if item.success:
                    write_success += 1
                    total_write += item.size
                else:
                    write_failure += 1

        # overall stats
        print "--- Stats ---"
        print "Total Read: {read} MB ({mbps} MB/s)".format(
            read=(total_read/1024.0),
            mbps=(total_read/1024.0/options.duration)
            )
        print "Total Write: {write} MB ({mbps} MB/s)".format(
            write=(total_write/1024.0),
            mbps=(total_write/1024.0/options.duration)
            )
        print "Read failures: {num} ({percent}%)".format(
            num=read_failure,
            percent=(100.0*read_failure/max(read_failure+read_success, 1))
            )
        print "Write failures: {num} ({percent}%)".format(
            num=write_failure,
            percent=(100.0*write_failure/max(write_failure+write_success, 1))
            )

        gevent.joinall(greenlets, timeout=1)
    except Exception as e:
        print e
    finally:
        # cleanup
        if options.cleanup:
            common.teardown()
s3tests/realistic.py (new file, 126 lines)
@@ -0,0 +1,126 @@
import hashlib
import random
import string
import struct

class RandomContentFile(object):
    def __init__(self, size, seed):
        self.seed = seed
        self.random = random.Random(self.seed)
        self.offset = 0
        self.buffer = ''
        self.size = size
        self.hash = hashlib.md5()
        self.digest_size = self.hash.digest_size
        self.digest = None

    def seek(self, offset):
        assert offset == 0
        self.random.seed(self.seed)
        self.offset = offset
        self.buffer = ''

    def tell(self):
        return self.offset

    def _generate(self):
        # generate and return a 1 MiB chunk of pseudorandom data,
        # drawn 64 bits (8 bytes) at a time
        size = 1*1024*1024
        l = [self.random.getrandbits(64) for _ in xrange(size/8)]
        s = struct.pack((size/8)*'Q', *l)
        return s

    def read(self, size=-1):
        if size < 0:
            size = self.size - self.offset

        r = []

        random_count = min(size, self.size - self.offset - self.digest_size)
        if random_count > 0:
            while len(self.buffer) < random_count:
                self.buffer += self._generate()
            self.offset += random_count
            size -= random_count
            data, self.buffer = self.buffer[:random_count], self.buffer[random_count:]
            if self.hash is not None:
                self.hash.update(data)
            r.append(data)

        digest_count = min(size, self.size - self.offset)
        if digest_count > 0:
            if self.digest is None:
                self.digest = self.hash.digest()
                self.hash = None
            self.offset += digest_count
            size -= digest_count
            data = self.digest[:digest_count]
            r.append(data)

        return ''.join(r)

class FileVerifier(object):
    def __init__(self):
        self.size = 0
        self.hash = hashlib.md5()
        self.buf = ''

    def write(self, data):
        self.size += len(data)
        self.buf += data
        digsz = -1*self.hash.digest_size
        new_data, self.buf = self.buf[0:digsz], self.buf[digsz:]
        self.hash.update(new_data)

    def valid(self):
        """
        Returns True if this file looks valid. The file is valid if the end
        of the file has the md5 digest for the first part of the file.
        """
        return self.buf == self.hash.digest()

def files(mean, stddev, seed=None):
    """
    Yields file-like objects with effectively random contents, where
    the size of each file follows the normal distribution with `mean`
    and `stddev`.

    Beware, the file-likeness is very shallow. You can use boto's
    `key.set_contents_from_file` to send these to S3, but they are not
    full file objects.

    The last 128 bits are the MD5 digest of the previous bytes, for
    verifying round-trip data integrity. For example, if you
    re-download the object and place the contents into a file called
    ``foo``, the following should print two identical lines:

      python -c 'import sys, hashlib; data=sys.stdin.read(); print hashlib.md5(data[:-16]).hexdigest(); print "".join("%02x" % ord(c) for c in data[-16:])' <foo

    Except for objects shorter than 16 bytes, where the second line
    will be proportionally shorter.
    """
    rand = random.Random(seed)
    while True:
        while True:
            size = int(rand.normalvariate(mean, stddev))
            if size >= 0:
                break
        yield RandomContentFile(size=size, seed=rand.getrandbits(32))

def names(mean, stddev, charset=None, seed=None):
    """
    Yields strings that are somewhat plausible as file names, where
    the length of each filename follows the normal distribution with
    `mean` and `stddev`.
    """
    if charset is None:
        charset = string.ascii_lowercase
    rand = random.Random(seed)
    while True:
        while True:
            length = int(rand.normalvariate(mean, stddev))
            if length >= 0:
                break
        name = ''.join(rand.choice(charset) for _ in xrange(length))
        yield name
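# The generator and the verifier are meant to round-trip: data produced by a
# RandomContentFile ends in its own MD5 trailer, and FileVerifier recomputes
# that digest on the way back.  A small local self-check along these lines
# (illustrative driver code, not part of the module):
#
#     gen = files(mean=4096, stddev=512, seed=1)
#     src = gen.next()            # one RandomContentFile
#     check = FileVerifier()
#     check.write(src.read())     # feed the full body, MD5 trailer included
#     assert check.valid()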
s3tests/verify_client.py (new executable file, 133 lines)
@@ -0,0 +1,133 @@
#! /usr/bin/python

from boto.s3.key import Key
from boto.exception import S3ResponseError
from optparse import OptionParser
import traceback
import common
import bunch
import yaml
import sys


def parse_opts():
    parser = OptionParser()
    parser.add_option('-O', '--outfile', help='write output to FILE. Defaults to STDOUT', metavar='FILE')
    parser.add_option('-b', '--blueprint', help='populate buckets according to blueprint file BLUEPRINT. Used to get baseline results to compare client results against.', metavar='BLUEPRINT')
    return parser.parse_args()


def get_bucket_properties(bucket):
    """Get and return the following properties from bucket:
        Name
        All Grants
    """
    grants = [(grant.display_name, grant.permission) for grant in bucket.list_grants()]
    return (bucket.name, grants)


def get_key_properties(key):
    """Get and return the following properties from key:
        Name
        Size
        All Grants
        All Metadata
    """
    grants = [(grant.display_name, grant.permission) for grant in key.get_acl().acl.grants]
    return (key.name, key.size, grants, key.metadata)


def read_blueprint(infile):
    """Takes a filename as input and returns a "bunch" describing buckets
    and objects to upload to an S3-like object store. This can be used
    to confirm that buckets created by another client match those created
    by boto.
    """
    try:
        INFILE = open(infile, 'r')
        blueprint = bunch.bunchify(yaml.safe_load(INFILE))
    except Exception as e:
        print >> sys.stderr, "There was an error reading the blueprint file, %s:" %infile
        print >> sys.stderr, traceback.print_exc()

    return blueprint


def populate_from_blueprint(conn, blueprint, prefix=''):
    """Take a connection and a blueprint. Create buckets and upload objects
    according to the blueprint. Prefix will be added to each bucket name.
    """
    buckets = []
    for bucket in blueprint:
        b = conn.create_bucket(prefix + bucket.name)
        for user in bucket.perms:
            b.add_user_grant(bucket.perms[user], user)
        for key in bucket.objects:
            k = Key(b)
            k.key = key.name
            k.metadata = bunch.unbunchify(key.metadata)
            k.set_contents_from_string(key.content)
            for user in key.perms:
                k.add_user_grant(key.perms[user], user)
        buckets.append(b)
    return buckets


def main():
    """Client results validation tool. Make sure you've bootstrapped your
    test environment and set up your config.yml file, then run the
    following:
        S3TEST_CONF=config.yml virtualenv/bin/python verify_client.py -O output.txt test-bucket-name

    S3 authentication information for the bucket's owner must be in
    config.yml to create the connection.
    """
    (options, args) = parse_opts()

    #SETUP
    conn = common.s3.main

    if options.outfile:
        OUTFILE = open(options.outfile, 'w')
    else:
        OUTFILE = sys.stdout

    blueprint = None
    if options.blueprint:
        blueprint = read_blueprint(options.blueprint)
    if blueprint:
        populate_from_blueprint(conn, blueprint, common.prefix)

    for bucket_name in args:
        try:
            bucket = conn.get_bucket(bucket_name)
        except S3ResponseError as e:
            print >> sys.stderr, "S3 claims %s isn't a valid bucket...maybe the user you specified in config.yml doesn't have access to it?" %bucket_name
            common.teardown()
            return

        (name, grants) = get_bucket_properties(bucket)
        print >> OUTFILE, "Bucket Name: %s" % name
        for grant in grants:
            print >> OUTFILE, "\tgrant %s %s" %(grant)

        for key in bucket.list():
            full_key = bucket.get_key(key.name)
            (name, size, grants, metadata) = get_key_properties(full_key)
            print >> OUTFILE, name
            print >> OUTFILE, "\tsize: %s" %size
            for grant in grants:
                print >> OUTFILE, "\tgrant %s %s" %(grant)
            for metadata_key in metadata:
                print >> OUTFILE, "\tmetadata %s: %s" %(metadata_key, metadata[metadata_key])


if __name__ == '__main__':
    common.setup()
    try:
        main()
    except Exception as e:
        traceback.print_exc()
    common.teardown()
@@ -1,301 +0,0 @@
import boto.s3.connection
import munch
import itertools
import os
import random
import string
import yaml
import re
from lxml import etree

from doctest import Example
from lxml.doctestcompare import LXMLOutputChecker

s3 = munch.Munch()
config = munch.Munch()
prefix = ''

bucket_counter = itertools.count(1)
key_counter = itertools.count(1)

def choose_bucket_prefix(template, max_len=30):
    """
    Choose a prefix for our test buckets, so they're easy to identify.

    Use template and feed it more and more random filler, until it's
    as long as possible but still below max_len.
    """
    rand = ''.join(
        random.choice(string.ascii_lowercase + string.digits)
        for c in range(255)
        )

    while rand:
        s = template.format(random=rand)
        if len(s) <= max_len:
            return s
        rand = rand[:-1]

    raise RuntimeError(
        'Bucket prefix template is impossible to fulfill: {template!r}'.format(
            template=template,
            ),
        )

def nuke_bucket(bucket):
    try:
        bucket.set_canned_acl('private')
        # TODO: deleted_cnt and the while loop is a work around for rgw
        # not sending the
        deleted_cnt = 1
        while deleted_cnt:
            deleted_cnt = 0
            for key in bucket.list():
                print('Cleaning bucket {bucket} key {key}'.format(
                    bucket=bucket,
                    key=key,
                    ))
                key.set_canned_acl('private')
                key.delete()
                deleted_cnt += 1
        bucket.delete()
    except boto.exception.S3ResponseError as e:
        # TODO workaround for buggy rgw that fails to send
        # error_code, remove
        if (e.status == 403
            and e.error_code is None
            and e.body == ''):
            e.error_code = 'AccessDenied'
        if e.error_code != 'AccessDenied':
            print('GOT UNWANTED ERROR', e.error_code)
            raise
        # seems like we're not the owner of the bucket; ignore
        pass

def nuke_prefixed_buckets():
    for name, conn in list(s3.items()):
        print('Cleaning buckets from connection {name}'.format(name=name))
        for bucket in conn.get_all_buckets():
            if bucket.name.startswith(prefix):
                print('Cleaning bucket {bucket}'.format(bucket=bucket))
                nuke_bucket(bucket)

    print('Done with cleanup of test buckets.')

def read_config(fp):
    config = munch.Munch()
    g = yaml.safe_load_all(fp)
    for new in g:
        config.update(munch.Munchify(new))
    return config

def connect(conf):
    mapping = dict(
        port='port',
        host='host',
        is_secure='is_secure',
        access_key='aws_access_key_id',
        secret_key='aws_secret_access_key',
        )
    kwargs = dict((mapping[k],v) for (k,v) in conf.items() if k in mapping)
    #process calling_format argument
    calling_formats = dict(
        ordinary=boto.s3.connection.OrdinaryCallingFormat(),
        subdomain=boto.s3.connection.SubdomainCallingFormat(),
        vhost=boto.s3.connection.VHostCallingFormat(),
        )
    kwargs['calling_format'] = calling_formats['ordinary']
    if 'calling_format' in conf:
        raw_calling_format = conf['calling_format']
        try:
            kwargs['calling_format'] = calling_formats[raw_calling_format]
        except KeyError:
            raise RuntimeError(
                'calling_format unknown: %r' % raw_calling_format
                )
    # TODO test vhost calling format
    conn = boto.s3.connection.S3Connection(**kwargs)
    return conn

def setup():
    global s3, config, prefix
    s3.clear()
    config.clear()

    try:
        path = os.environ['S3TEST_CONF']
    except KeyError:
        raise RuntimeError(
            'To run tests, point environment '
            + 'variable S3TEST_CONF to a config file.',
            )
    with file(path) as f:
        config.update(read_config(f))

    # These 3 should always be present.
    if 's3' not in config:
        raise RuntimeError('Your config file is missing the s3 section!')
    if 'defaults' not in config.s3:
        raise RuntimeError('Your config file is missing the s3.defaults section!')
    if 'fixtures' not in config:
        raise RuntimeError('Your config file is missing the fixtures section!')

    template = config.fixtures.get('bucket prefix', 'test-{random}-')
    prefix = choose_bucket_prefix(template=template)
    if prefix == '':
        raise RuntimeError("Empty Prefix! Aborting!")

    defaults = config.s3.defaults
    for section in list(config.s3.keys()):
        if section == 'defaults':
            continue

        conf = {}
        conf.update(defaults)
        conf.update(config.s3[section])
        conn = connect(conf)
        s3[section] = conn

    # WARNING! we actively delete all buckets we see with the prefix
    # we've chosen! Choose your prefix with care, and don't reuse
    # credentials!

    # We also assume nobody else is going to use buckets with that
    # prefix. This is racy but given enough randomness, should not
    # really fail.
    nuke_prefixed_buckets()

def get_new_bucket(connection=None):
    """
    Get a bucket that exists and is empty.

    Always recreates a bucket from scratch. This is useful to also
    reset ACLs and such.
    """
    if connection is None:
        connection = s3.main
    name = '{prefix}{num}'.format(
        prefix=prefix,
        num=next(bucket_counter),
        )
    # the only way for this to fail with a pre-existing bucket is if
    # someone raced us between setup nuke_prefixed_buckets and here;
    # ignore that as astronomically unlikely
    bucket = connection.create_bucket(name)
    return bucket

def teardown():
    nuke_prefixed_buckets()

def with_setup_kwargs(setup, teardown=None):
    """Decorator to add setup and/or teardown methods to a test function::

        @with_setup_args(setup, teardown)
        def test_something():
            " ... "

    The setup function should return (kwargs) which will be passed to
    test function, and teardown function.

    Note that `with_setup_kwargs` is useful *only* for test functions, not for test
    methods or inside of TestCase subclasses.
    """
    def decorate(func):
        kwargs = {}

        def test_wrapped(*args, **kwargs2):
            k2 = kwargs.copy()
            k2.update(kwargs2)
            k2['testname'] = func.__name__
            func(*args, **k2)

        test_wrapped.__name__ = func.__name__

        def setup_wrapped():
            k = setup()
            kwargs.update(k)
            if hasattr(func, 'setup'):
                func.setup()
        test_wrapped.setup = setup_wrapped

        if teardown:
            def teardown_wrapped():
                if hasattr(func, 'teardown'):
                    func.teardown()
                teardown(**kwargs)

            test_wrapped.teardown = teardown_wrapped
        else:
            if hasattr(func, 'teardown'):
                test_wrapped.teardown = func.teardown()
        return test_wrapped
    return decorate

# Demo case for the above, when you run test_gen():
# _test_gen will run twice,
# with the following stderr printing
# setup_func {'b': 2}
# testcase ('1',) {'b': 2, 'testname': '_test_gen'}
# teardown_func {'b': 2}
# setup_func {'b': 2}
# testcase () {'b': 2, 'testname': '_test_gen'}
# teardown_func {'b': 2}
#
#def setup_func():
#    kwargs = {'b': 2}
#    print("setup_func", kwargs, file=sys.stderr)
#    return kwargs
#
#def teardown_func(**kwargs):
#    print("teardown_func", kwargs, file=sys.stderr)
#
#@with_setup_kwargs(setup=setup_func, teardown=teardown_func)
#def _test_gen(*args, **kwargs):
#    print("testcase", args, kwargs, file=sys.stderr)
#
#def test_gen():
#    yield _test_gen, '1'
#    yield _test_gen

def trim_xml(xml_str):
    p = etree.XMLParser(remove_blank_text=True)
    elem = etree.XML(xml_str, parser=p)
    return etree.tostring(elem)

def normalize_xml(xml, pretty_print=True):
    if xml is None:
        return xml

    root = etree.fromstring(xml.encode(encoding='ascii'))

    for element in root.iter('*'):
        if element.text is not None and not element.text.strip():
            element.text = None
        if element.text is not None:
            element.text = element.text.strip().replace("\n", "").replace("\r", "")
        if element.tail is not None and not element.tail.strip():
            element.tail = None
        if element.tail is not None:
            element.tail = element.tail.strip().replace("\n", "").replace("\r", "")

    # Sort the elements
    for parent in root.xpath('//*[./*]'): # Search for parent elements
        parent[:] = sorted(parent, key=lambda x: x.tag)

    xmlstr = etree.tostring(root, encoding="utf-8", xml_declaration=True, pretty_print=pretty_print)
    # there are two different DTD URIs
    xmlstr = re.sub(r'xmlns="[^"]+"', 'xmlns="s3"', xmlstr)
    xmlstr = re.sub(r'xmlns=\'[^\']+\'', 'xmlns="s3"', xmlstr)
    for uri in ['http://doc.s3.amazonaws.com/doc/2006-03-01/', 'http://s3.amazonaws.com/doc/2006-03-01/']:
        xmlstr = xmlstr.replace(uri, 'URI-DTD')
    #xmlstr = re.sub(r'>\s+', '>', xmlstr, count=0, flags=re.MULTILINE)
    return xmlstr

def assert_xml_equal(got, want):
    assert want is not None, 'Wanted XML cannot be None'
    if got is None:
        raise AssertionError('Got input to validate was None')
    checker = LXMLOutputChecker()
    if not checker.check_output(want, got, 0):
        message = checker.output_difference(Example("", want), got, 0)
        raise AssertionError(message)
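# Taken together, normalize_xml and assert_xml_equal give an order- and
# whitespace-insensitive comparison for S3 XML payloads.  A small illustration
# of the call pattern (made-up documents, shown only as an example):
#
#     got  = normalize_xml('<Owner><ID>abc</ID><DisplayName>jdoe</DisplayName></Owner>')
#     want = normalize_xml('<Owner>\n  <DisplayName>jdoe</DisplayName>\n  <ID>abc</ID>\n</Owner>')
#     assert_xml_equal(got, want)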
|
@ -1,782 +0,0 @@
|
|||
import pytest
|
||||
import boto3
|
||||
from botocore import UNSIGNED
|
||||
from botocore.client import Config
|
||||
from botocore.exceptions import ClientError
|
||||
from botocore.handlers import disable_signing
|
||||
import configparser
|
||||
import datetime
|
||||
import time
|
||||
import os
|
||||
import munch
|
||||
import random
|
||||
import string
|
||||
import itertools
|
||||
import urllib3
|
||||
import re
|
||||
|
||||
config = munch.Munch
|
||||
|
||||
# this will be assigned by setup()
|
||||
prefix = None
|
||||
|
||||
def get_prefix():
|
||||
assert prefix is not None
|
||||
return prefix
|
||||
|
||||
def choose_bucket_prefix(template, max_len=30):
|
||||
"""
|
||||
Choose a prefix for our test buckets, so they're easy to identify.
|
||||
|
||||
Use template and feed it more and more random filler, until it's
|
||||
as long as possible but still below max_len.
|
||||
"""
|
||||
rand = ''.join(
|
||||
random.choice(string.ascii_lowercase + string.digits)
|
||||
for c in range(255)
|
||||
)
|
||||
|
||||
while rand:
|
||||
s = template.format(random=rand)
|
||||
if len(s) <= max_len:
|
||||
return s
|
||||
rand = rand[:-1]
|
||||
|
||||
raise RuntimeError(
|
||||
'Bucket prefix template is impossible to fulfill: {template!r}'.format(
|
||||
template=template,
|
||||
),
|
||||
)
|
||||
|
||||
def get_buckets_list(client=None, prefix=None):
|
||||
if client == None:
|
||||
client = get_client()
|
||||
if prefix == None:
|
||||
prefix = get_prefix()
|
||||
response = client.list_buckets()
|
||||
bucket_dicts = response['Buckets']
|
||||
buckets_list = []
|
||||
for bucket in bucket_dicts:
|
||||
if prefix in bucket['Name']:
|
||||
buckets_list.append(bucket['Name'])
|
||||
|
||||
return buckets_list
|
||||
|
||||
def get_objects_list(bucket, client=None, prefix=None):
|
||||
if client == None:
|
||||
client = get_client()
|
||||
|
||||
if prefix == None:
|
||||
response = client.list_objects(Bucket=bucket)
|
||||
else:
|
||||
response = client.list_objects(Bucket=bucket, Prefix=prefix)
|
||||
objects_list = []
|
||||
|
||||
if 'Contents' in response:
|
||||
contents = response['Contents']
|
||||
for obj in contents:
|
||||
objects_list.append(obj['Key'])
|
||||
|
||||
return objects_list
|
||||
|
||||
# generator function that returns object listings in batches, where each
|
||||
# batch is a list of dicts compatible with delete_objects()
|
||||
def list_versions(client, bucket, batch_size):
|
||||
kwargs = {'Bucket': bucket, 'MaxKeys': batch_size}
|
||||
truncated = True
|
||||
while truncated:
|
||||
listing = client.list_object_versions(**kwargs)
|
||||
|
||||
kwargs['KeyMarker'] = listing.get('NextKeyMarker')
|
||||
kwargs['VersionIdMarker'] = listing.get('NextVersionIdMarker')
|
||||
truncated = listing['IsTruncated']
|
||||
|
||||
objs = listing.get('Versions', []) + listing.get('DeleteMarkers', [])
|
||||
if len(objs):
|
||||
yield [{'Key': o['Key'], 'VersionId': o['VersionId']} for o in objs]
|
||||
|
||||
def nuke_bucket(client, bucket):
|
||||
batch_size = 128
|
||||
max_retain_date = None
|
||||
|
||||
# list and delete objects in batches
|
||||
for objects in list_versions(client, bucket, batch_size):
|
||||
delete = client.delete_objects(Bucket=bucket,
|
||||
Delete={'Objects': objects, 'Quiet': True},
|
||||
BypassGovernanceRetention=True)
|
||||
|
||||
# check for object locks on 403 AccessDenied errors
|
||||
for err in delete.get('Errors', []):
|
||||
if err.get('Code') != 'AccessDenied':
|
||||
continue
|
||||
try:
|
||||
res = client.get_object_retention(Bucket=bucket,
|
||||
Key=err['Key'], VersionId=err['VersionId'])
|
||||
retain_date = res['Retention']['RetainUntilDate']
|
||||
if not max_retain_date or max_retain_date < retain_date:
|
||||
max_retain_date = retain_date
|
||||
except ClientError:
|
||||
pass
|
||||
|
||||
if max_retain_date:
|
||||
# wait out the retention period (up to 60 seconds)
|
||||
now = datetime.datetime.now(max_retain_date.tzinfo)
|
||||
if max_retain_date > now:
|
||||
delta = max_retain_date - now
|
||||
if delta.total_seconds() > 60:
|
||||
raise RuntimeError('bucket {} still has objects \
|
||||
locked for {} more seconds, not waiting for \
|
||||
bucket cleanup'.format(bucket, delta.total_seconds()))
|
||||
print('nuke_bucket', bucket, 'waiting', delta.total_seconds(),
|
||||
'seconds for object locks to expire')
|
||||
time.sleep(delta.total_seconds())
|
||||
|
||||
for objects in list_versions(client, bucket, batch_size):
|
||||
client.delete_objects(Bucket=bucket,
|
||||
Delete={'Objects': objects, 'Quiet': True},
|
||||
BypassGovernanceRetention=True)
|
||||
|
||||
client.delete_bucket(Bucket=bucket)
|
||||
|
||||
def nuke_prefixed_buckets(prefix, client=None):
|
||||
if client == None:
|
||||
client = get_client()
|
||||
|
||||
buckets = get_buckets_list(client, prefix)
|
||||
|
||||
err = None
|
||||
for bucket_name in buckets:
|
||||
try:
|
||||
nuke_bucket(client, bucket_name)
|
||||
except Exception as e:
|
||||
# The exception shouldn't be raised when doing cleanup. Pass and continue
|
||||
# the bucket cleanup process. Otherwise left buckets wouldn't be cleared
|
||||
# resulting in some kind of resource leak. err is used to hint user some
|
||||
# exception once occurred.
|
||||
err = e
|
||||
pass
|
||||
if err:
|
||||
raise err
|
||||
|
||||
print('Done with cleanup of buckets in tests.')
|
||||
|
def configured_storage_classes():
    sc = ['STANDARD']

    extra_sc = re.split(r"[\b\W\b]+", config.storage_classes)

    for item in extra_sc:
        if item != 'STANDARD':
            sc.append(item)

    sc = [i for i in sc if i]
    print("storage classes configured: " + str(sc))

    return sc

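# As an illustration only (the example value below is an assumption, not a
# required setting), the split above turns a comma/space separated
# "storage_classes" config line into the list returned here:
#
#   >>> re.split(r"[\b\W\b]+", "STANDARD, COLD")
#   ['STANDARD', 'COLD']
#
# so configured_storage_classes() would report ['STANDARD', 'COLD'].
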
||||
def configure():
|
||||
cfg = configparser.RawConfigParser()
|
||||
try:
|
||||
path = os.environ['S3TEST_CONF']
|
||||
except KeyError:
|
||||
raise RuntimeError(
|
||||
'To run tests, point environment '
|
||||
+ 'variable S3TEST_CONF to a config file.',
|
||||
)
|
||||
cfg.read(path)
|
||||
|
||||
if not cfg.defaults():
|
||||
raise RuntimeError('Your config file is missing the DEFAULT section!')
|
||||
if not cfg.has_section("s3 main"):
|
||||
raise RuntimeError('Your config file is missing the "s3 main" section!')
|
||||
if not cfg.has_section("s3 alt"):
|
||||
raise RuntimeError('Your config file is missing the "s3 alt" section!')
|
||||
if not cfg.has_section("s3 tenant"):
|
||||
raise RuntimeError('Your config file is missing the "s3 tenant" section!')
|
||||
|
||||
global prefix
|
||||
|
||||
defaults = cfg.defaults()
|
||||
|
||||
# vars from the DEFAULT section
|
||||
config.default_host = defaults.get("host")
|
||||
config.default_port = int(defaults.get("port"))
|
||||
config.default_is_secure = cfg.getboolean('DEFAULT', "is_secure")
|
||||
|
||||
proto = 'https' if config.default_is_secure else 'http'
|
||||
config.default_endpoint = "%s://%s:%d" % (proto, config.default_host, config.default_port)
|
||||
|
||||
try:
|
||||
config.default_ssl_verify = cfg.getboolean('DEFAULT', "ssl_verify")
|
||||
except configparser.NoOptionError:
|
||||
config.default_ssl_verify = False
|
||||
|
||||
# Disable InsecureRequestWarning reported by urllib3 when ssl_verify is False
|
||||
if not config.default_ssl_verify:
|
||||
urllib3.disable_warnings()
|
||||
|
||||
# vars from the main section
|
||||
config.main_access_key = cfg.get('s3 main',"access_key")
|
||||
config.main_secret_key = cfg.get('s3 main',"secret_key")
|
||||
config.main_display_name = cfg.get('s3 main',"display_name")
|
||||
config.main_user_id = cfg.get('s3 main',"user_id")
|
||||
config.main_email = cfg.get('s3 main',"email")
|
||||
try:
|
||||
config.main_kms_keyid = cfg.get('s3 main',"kms_keyid")
|
||||
except (configparser.NoSectionError, configparser.NoOptionError):
|
||||
config.main_kms_keyid = 'testkey-1'
|
||||
|
||||
try:
|
||||
config.main_kms_keyid2 = cfg.get('s3 main',"kms_keyid2")
|
||||
except (configparser.NoSectionError, configparser.NoOptionError):
|
||||
config.main_kms_keyid2 = 'testkey-2'
|
||||
|
||||
try:
|
||||
config.main_api_name = cfg.get('s3 main',"api_name")
|
||||
except (configparser.NoSectionError, configparser.NoOptionError):
|
||||
config.main_api_name = ""
|
||||
pass
|
||||
|
||||
try:
|
||||
config.storage_classes = cfg.get('s3 main',"storage_classes")
|
||||
except (configparser.NoSectionError, configparser.NoOptionError):
|
||||
config.storage_classes = ""
|
||||
pass
|
||||
|
||||
try:
|
||||
config.lc_debug_interval = int(cfg.get('s3 main',"lc_debug_interval"))
|
||||
except (configparser.NoSectionError, configparser.NoOptionError):
|
||||
config.lc_debug_interval = 10
|
||||
|
||||
config.alt_access_key = cfg.get('s3 alt',"access_key")
|
||||
config.alt_secret_key = cfg.get('s3 alt',"secret_key")
|
||||
config.alt_display_name = cfg.get('s3 alt',"display_name")
|
||||
config.alt_user_id = cfg.get('s3 alt',"user_id")
|
||||
config.alt_email = cfg.get('s3 alt',"email")
|
||||
|
||||
config.tenant_access_key = cfg.get('s3 tenant',"access_key")
|
||||
config.tenant_secret_key = cfg.get('s3 tenant',"secret_key")
|
||||
config.tenant_display_name = cfg.get('s3 tenant',"display_name")
|
||||
config.tenant_user_id = cfg.get('s3 tenant',"user_id")
|
||||
config.tenant_email = cfg.get('s3 tenant',"email")
|
||||
config.tenant_name = cfg.get('s3 tenant',"tenant")
|
||||
|
||||
config.iam_access_key = cfg.get('iam',"access_key")
|
||||
config.iam_secret_key = cfg.get('iam',"secret_key")
|
||||
config.iam_display_name = cfg.get('iam',"display_name")
|
||||
config.iam_user_id = cfg.get('iam',"user_id")
|
||||
config.iam_email = cfg.get('iam',"email")
|
||||
|
||||
config.iam_root_access_key = cfg.get('iam root',"access_key")
|
||||
config.iam_root_secret_key = cfg.get('iam root',"secret_key")
|
||||
config.iam_root_user_id = cfg.get('iam root',"user_id")
|
||||
config.iam_root_email = cfg.get('iam root',"email")
|
||||
|
||||
config.iam_alt_root_access_key = cfg.get('iam alt root',"access_key")
|
||||
config.iam_alt_root_secret_key = cfg.get('iam alt root',"secret_key")
|
||||
config.iam_alt_root_user_id = cfg.get('iam alt root',"user_id")
|
||||
config.iam_alt_root_email = cfg.get('iam alt root',"email")
|
||||
|
||||
# vars from the fixtures section
|
||||
template = cfg.get('fixtures', "bucket prefix", fallback='test-{random}-')
|
||||
prefix = choose_bucket_prefix(template=template)
|
||||
template = cfg.get('fixtures', "iam name prefix", fallback="s3-tests-")
|
||||
config.iam_name_prefix = choose_bucket_prefix(template=template)
|
||||
template = cfg.get('fixtures', "iam path prefix", fallback="/s3-tests/")
|
||||
config.iam_path_prefix = choose_bucket_prefix(template=template)
|
||||
|
||||
if cfg.has_section("s3 cloud"):
|
||||
get_cloud_config(cfg)
|
||||
else:
|
||||
config.cloud_storage_class = None
|
||||
|
def setup():
    alt_client = get_alt_client()
    tenant_client = get_tenant_client()
    nuke_prefixed_buckets(prefix=prefix)
    nuke_prefixed_buckets(prefix=prefix, client=alt_client)
    nuke_prefixed_buckets(prefix=prefix, client=tenant_client)

def teardown():
    alt_client = get_alt_client()
    tenant_client = get_tenant_client()
    nuke_prefixed_buckets(prefix=prefix)
    nuke_prefixed_buckets(prefix=prefix, client=alt_client)
    nuke_prefixed_buckets(prefix=prefix, client=tenant_client)
    try:
        iam_client = get_iam_client()
        list_roles_resp = iam_client.list_roles()
        for role in list_roles_resp['Roles']:
            list_policies_resp = iam_client.list_role_policies(RoleName=role['RoleName'])
            for policy in list_policies_resp['PolicyNames']:
                del_policy_resp = iam_client.delete_role_policy(
                    RoleName=role['RoleName'],
                    PolicyName=policy
                )
            del_role_resp = iam_client.delete_role(RoleName=role['RoleName'])
        list_oidc_resp = iam_client.list_open_id_connect_providers()
        for oidcprovider in list_oidc_resp['OpenIDConnectProviderList']:
            del_oidc_resp = iam_client.delete_open_id_connect_provider(
                OpenIDConnectProviderArn=oidcprovider['Arn']
            )
    except:
        pass

@pytest.fixture(scope="package")
def configfile():
    configure()
    return config

@pytest.fixture(autouse=True)
def setup_teardown(configfile):
    setup()
    yield
    teardown()

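# A minimal sketch (hypothetical test, not part of this module) of how the
# fixtures above are consumed: "configfile" loads the configuration, and since
# "setup_teardown" is autouse, every test runs between a setup() and a
# teardown() that nuke the prefixed buckets.
#
#   def test_roundtrip(configfile):
#       client = get_client()
#       bucket = get_new_bucket(client)
#       client.put_object(Bucket=bucket, Key='hello', Body='world')
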
||||
def check_webidentity():
|
||||
cfg = configparser.RawConfigParser()
|
||||
try:
|
||||
path = os.environ['S3TEST_CONF']
|
||||
except KeyError:
|
||||
raise RuntimeError(
|
||||
'To run tests, point environment '
|
||||
+ 'variable S3TEST_CONF to a config file.',
|
||||
)
|
||||
cfg.read(path)
|
||||
if not cfg.has_section("webidentity"):
|
||||
raise RuntimeError('Your config file is missing the "webidentity" section!')
|
||||
|
||||
config.webidentity_thumbprint = cfg.get('webidentity', "thumbprint")
|
||||
config.webidentity_aud = cfg.get('webidentity', "aud")
|
||||
config.webidentity_token = cfg.get('webidentity', "token")
|
||||
config.webidentity_realm = cfg.get('webidentity', "KC_REALM")
|
||||
config.webidentity_sub = cfg.get('webidentity', "sub")
|
||||
config.webidentity_azp = cfg.get('webidentity', "azp")
|
||||
config.webidentity_user_token = cfg.get('webidentity', "user_token")
|
||||
|
||||
def get_cloud_config(cfg):
|
||||
config.cloud_host = cfg.get('s3 cloud',"host")
|
||||
config.cloud_port = int(cfg.get('s3 cloud',"port"))
|
||||
config.cloud_is_secure = cfg.getboolean('s3 cloud', "is_secure")
|
||||
|
||||
proto = 'https' if config.cloud_is_secure else 'http'
|
||||
config.cloud_endpoint = "%s://%s:%d" % (proto, config.cloud_host, config.cloud_port)
|
||||
|
||||
config.cloud_access_key = cfg.get('s3 cloud',"access_key")
|
||||
config.cloud_secret_key = cfg.get('s3 cloud',"secret_key")
|
||||
|
||||
try:
|
||||
config.cloud_storage_class = cfg.get('s3 cloud', "cloud_storage_class")
|
||||
except (configparser.NoSectionError, configparser.NoOptionError):
|
||||
config.cloud_storage_class = None
|
||||
|
||||
try:
|
||||
config.cloud_retain_head_object = cfg.get('s3 cloud',"retain_head_object")
|
||||
except (configparser.NoSectionError, configparser.NoOptionError):
|
||||
config.cloud_retain_head_object = None
|
||||
|
||||
try:
|
||||
config.cloud_target_path = cfg.get('s3 cloud',"target_path")
|
||||
except (configparser.NoSectionError, configparser.NoOptionError):
|
||||
config.cloud_target_path = None
|
||||
|
||||
try:
|
||||
config.cloud_target_storage_class = cfg.get('s3 cloud',"target_storage_class")
|
||||
except (configparser.NoSectionError, configparser.NoOptionError):
|
||||
config.cloud_target_storage_class = 'STANDARD'
|
||||
|
||||
try:
|
||||
config.cloud_regular_storage_class = cfg.get('s3 cloud', "storage_class")
|
||||
except (configparser.NoSectionError, configparser.NoOptionError):
|
||||
config.cloud_regular_storage_class = None
|
||||
|
||||
|
||||
def get_client(client_config=None):
    if client_config is None:
        client_config = Config(signature_version='s3v4')

    client = boto3.client(service_name='s3',
                        aws_access_key_id=config.main_access_key,
                        aws_secret_access_key=config.main_secret_key,
                        endpoint_url=config.default_endpoint,
                        use_ssl=config.default_is_secure,
                        verify=config.default_ssl_verify,
                        config=client_config)
    return client

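# Usage sketch (the override below is an assumption about the target setup,
# not something the suite requires): callers may pass their own botocore
# Config, e.g. to force path-style addressing against a local endpoint.
#
#   client = get_client(Config(signature_version='s3v4',
#                              s3={'addressing_style': 'path'}))
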
||||
|
||||
def get_v2_client():
|
||||
client = boto3.client(service_name='s3',
|
||||
aws_access_key_id=config.main_access_key,
|
||||
aws_secret_access_key=config.main_secret_key,
|
||||
endpoint_url=config.default_endpoint,
|
||||
use_ssl=config.default_is_secure,
|
||||
verify=config.default_ssl_verify,
|
||||
config=Config(signature_version='s3'))
|
||||
return client
|
||||
|
||||
def get_sts_client(**kwargs):
|
||||
kwargs.setdefault('aws_access_key_id', config.alt_access_key)
|
||||
kwargs.setdefault('aws_secret_access_key', config.alt_secret_key)
|
||||
kwargs.setdefault('config', Config(signature_version='s3v4'))
|
||||
|
||||
client = boto3.client(service_name='sts',
|
||||
endpoint_url=config.default_endpoint,
|
||||
region_name='',
|
||||
use_ssl=config.default_is_secure,
|
||||
verify=config.default_ssl_verify,
|
||||
**kwargs)
|
||||
return client
|
||||
|
||||
def get_iam_client(**kwargs):
|
||||
kwargs.setdefault('aws_access_key_id', config.iam_access_key)
|
||||
kwargs.setdefault('aws_secret_access_key', config.iam_secret_key)
|
||||
|
||||
client = boto3.client(service_name='iam',
|
||||
endpoint_url=config.default_endpoint,
|
||||
region_name='',
|
||||
use_ssl=config.default_is_secure,
|
||||
verify=config.default_ssl_verify,
|
||||
**kwargs)
|
||||
return client
|
||||
|
||||
def get_iam_s3client(**kwargs):
|
||||
kwargs.setdefault('aws_access_key_id', config.iam_access_key)
|
||||
kwargs.setdefault('aws_secret_access_key', config.iam_secret_key)
|
||||
kwargs.setdefault('config', Config(signature_version='s3v4'))
|
||||
|
||||
client = boto3.client(service_name='s3',
|
||||
endpoint_url=config.default_endpoint,
|
||||
use_ssl=config.default_is_secure,
|
||||
verify=config.default_ssl_verify,
|
||||
**kwargs)
|
||||
return client
|
||||
|
||||
def get_iam_root_client(**kwargs):
|
||||
kwargs.setdefault('service_name', 'iam')
|
||||
kwargs.setdefault('aws_access_key_id', config.iam_root_access_key)
|
||||
kwargs.setdefault('aws_secret_access_key', config.iam_root_secret_key)
|
||||
|
||||
return boto3.client(endpoint_url=config.default_endpoint,
|
||||
region_name='',
|
||||
use_ssl=config.default_is_secure,
|
||||
verify=config.default_ssl_verify,
|
||||
**kwargs)
|
||||
|
||||
def get_iam_alt_root_client(**kwargs):
|
||||
kwargs.setdefault('service_name', 'iam')
|
||||
kwargs.setdefault('aws_access_key_id', config.iam_alt_root_access_key)
|
||||
kwargs.setdefault('aws_secret_access_key', config.iam_alt_root_secret_key)
|
||||
|
||||
return boto3.client(endpoint_url=config.default_endpoint,
|
||||
region_name='',
|
||||
use_ssl=config.default_is_secure,
|
||||
verify=config.default_ssl_verify,
|
||||
**kwargs)
|
||||
|
||||
def get_alt_client(client_config=None):
|
||||
if client_config == None:
|
||||
client_config = Config(signature_version='s3v4')
|
||||
|
||||
client = boto3.client(service_name='s3',
|
||||
aws_access_key_id=config.alt_access_key,
|
||||
aws_secret_access_key=config.alt_secret_key,
|
||||
endpoint_url=config.default_endpoint,
|
||||
use_ssl=config.default_is_secure,
|
||||
verify=config.default_ssl_verify,
|
||||
config=client_config)
|
||||
return client
|
||||
|
||||
def get_cloud_client(client_config=None):
|
||||
if client_config == None:
|
||||
client_config = Config(signature_version='s3v4')
|
||||
|
||||
client = boto3.client(service_name='s3',
|
||||
aws_access_key_id=config.cloud_access_key,
|
||||
aws_secret_access_key=config.cloud_secret_key,
|
||||
endpoint_url=config.cloud_endpoint,
|
||||
use_ssl=config.cloud_is_secure,
|
||||
config=client_config)
|
||||
return client
|
||||
|
||||
def get_tenant_client(client_config=None):
|
||||
if client_config == None:
|
||||
client_config = Config(signature_version='s3v4')
|
||||
|
||||
client = boto3.client(service_name='s3',
|
||||
aws_access_key_id=config.tenant_access_key,
|
||||
aws_secret_access_key=config.tenant_secret_key,
|
||||
endpoint_url=config.default_endpoint,
|
||||
use_ssl=config.default_is_secure,
|
||||
verify=config.default_ssl_verify,
|
||||
config=client_config)
|
||||
return client
|
||||
|
||||
def get_v2_tenant_client():
|
||||
client_config = Config(signature_version='s3')
|
||||
client = boto3.client(service_name='s3',
|
||||
aws_access_key_id=config.tenant_access_key,
|
||||
aws_secret_access_key=config.tenant_secret_key,
|
||||
endpoint_url=config.default_endpoint,
|
||||
use_ssl=config.default_is_secure,
|
||||
verify=config.default_ssl_verify,
|
||||
config=client_config)
|
||||
return client
|
||||
|
||||
def get_tenant_iam_client():
|
||||
|
||||
client = boto3.client(service_name='iam',
|
||||
region_name='us-east-1',
|
||||
aws_access_key_id=config.tenant_access_key,
|
||||
aws_secret_access_key=config.tenant_secret_key,
|
||||
endpoint_url=config.default_endpoint,
|
||||
verify=config.default_ssl_verify,
|
||||
use_ssl=config.default_is_secure)
|
||||
return client
|
||||
|
||||
def get_alt_iam_client():
|
||||
|
||||
client = boto3.client(service_name='iam',
|
||||
region_name='',
|
||||
aws_access_key_id=config.alt_access_key,
|
||||
aws_secret_access_key=config.alt_secret_key,
|
||||
endpoint_url=config.default_endpoint,
|
||||
verify=config.default_ssl_verify,
|
||||
use_ssl=config.default_is_secure)
|
||||
return client
|
||||
|
||||
def get_unauthenticated_client():
|
||||
client = boto3.client(service_name='s3',
|
||||
aws_access_key_id='',
|
||||
aws_secret_access_key='',
|
||||
endpoint_url=config.default_endpoint,
|
||||
use_ssl=config.default_is_secure,
|
||||
verify=config.default_ssl_verify,
|
||||
config=Config(signature_version=UNSIGNED))
|
||||
return client
|
||||
|
||||
def get_bad_auth_client(aws_access_key_id='badauth'):
|
||||
client = boto3.client(service_name='s3',
|
||||
aws_access_key_id=aws_access_key_id,
|
||||
aws_secret_access_key='roflmao',
|
||||
endpoint_url=config.default_endpoint,
|
||||
use_ssl=config.default_is_secure,
|
||||
verify=config.default_ssl_verify,
|
||||
config=Config(signature_version='s3v4'))
|
||||
return client
|
||||
|
||||
def get_svc_client(client_config=None, svc='s3'):
|
||||
if client_config == None:
|
||||
client_config = Config(signature_version='s3v4')
|
||||
|
||||
client = boto3.client(service_name=svc,
|
||||
aws_access_key_id=config.main_access_key,
|
||||
aws_secret_access_key=config.main_secret_key,
|
||||
endpoint_url=config.default_endpoint,
|
||||
use_ssl=config.default_is_secure,
|
||||
verify=config.default_ssl_verify,
|
||||
config=client_config)
|
||||
return client
|
||||
|
||||
bucket_counter = itertools.count(1)
|
||||
|
||||
def get_new_bucket_name():
|
||||
"""
|
||||
Get a bucket name that probably does not exist.
|
||||
|
||||
We make every attempt to use a unique random prefix, so if a
|
||||
bucket by this name happens to exist, it's ok if tests give
|
||||
false negatives.
|
||||
"""
|
||||
name = '{prefix}{num}'.format(
|
||||
prefix=prefix,
|
||||
num=next(bucket_counter),
|
||||
)
|
||||
return name
|
||||
|
||||
def get_new_bucket_resource(name=None):
|
||||
"""
|
||||
Get a bucket that exists and is empty.
|
||||
|
||||
Always recreates a bucket from scratch. This is useful to also
|
||||
reset ACLs and such.
|
||||
"""
|
||||
s3 = boto3.resource('s3',
|
||||
aws_access_key_id=config.main_access_key,
|
||||
aws_secret_access_key=config.main_secret_key,
|
||||
endpoint_url=config.default_endpoint,
|
||||
use_ssl=config.default_is_secure,
|
||||
verify=config.default_ssl_verify)
|
||||
if name is None:
|
||||
name = get_new_bucket_name()
|
||||
bucket = s3.Bucket(name)
|
||||
bucket_location = bucket.create()
|
||||
return bucket
|
||||
|
||||
def get_new_bucket(client=None, name=None):
|
||||
"""
|
||||
Get a bucket that exists and is empty.
|
||||
|
||||
Always recreates a bucket from scratch. This is useful to also
|
||||
reset ACLs and such.
|
||||
"""
|
||||
if client is None:
|
||||
client = get_client()
|
||||
if name is None:
|
||||
name = get_new_bucket_name()
|
||||
|
||||
client.create_bucket(Bucket=name)
|
||||
return name
|
||||
|
||||
def get_parameter_name():
|
||||
parameter_name=""
|
||||
rand = ''.join(
|
||||
random.choice(string.ascii_lowercase + string.digits)
|
||||
for c in range(255)
|
||||
)
|
||||
while rand:
|
||||
parameter_name = '{random}'.format(random=rand)
|
||||
if len(parameter_name) <= 10:
|
||||
return parameter_name
|
||||
rand = rand[:-1]
|
||||
return parameter_name
|
||||
|
||||
def get_sts_user_id():
|
||||
return config.alt_user_id
|
||||
|
||||
def get_config_is_secure():
|
||||
return config.default_is_secure
|
||||
|
||||
def get_config_host():
|
||||
return config.default_host
|
||||
|
||||
def get_config_port():
|
||||
return config.default_port
|
||||
|
||||
def get_config_endpoint():
|
||||
return config.default_endpoint
|
||||
|
||||
def get_config_ssl_verify():
|
||||
return config.default_ssl_verify
|
||||
|
||||
def get_main_aws_access_key():
|
||||
return config.main_access_key
|
||||
|
||||
def get_main_aws_secret_key():
|
||||
return config.main_secret_key
|
||||
|
||||
def get_main_display_name():
|
||||
return config.main_display_name
|
||||
|
||||
def get_main_user_id():
|
||||
return config.main_user_id
|
||||
|
||||
def get_main_email():
|
||||
return config.main_email
|
||||
|
||||
def get_main_api_name():
|
||||
return config.main_api_name
|
||||
|
||||
def get_main_kms_keyid():
|
||||
return config.main_kms_keyid
|
||||
|
||||
def get_secondary_kms_keyid():
|
||||
return config.main_kms_keyid2
|
||||
|
||||
def get_alt_aws_access_key():
|
||||
return config.alt_access_key
|
||||
|
||||
def get_alt_aws_secret_key():
|
||||
return config.alt_secret_key
|
||||
|
||||
def get_alt_display_name():
|
||||
return config.alt_display_name
|
||||
|
||||
def get_alt_user_id():
|
||||
return config.alt_user_id
|
||||
|
||||
def get_alt_email():
|
||||
return config.alt_email
|
||||
|
||||
def get_tenant_aws_access_key():
|
||||
return config.tenant_access_key
|
||||
|
||||
def get_tenant_aws_secret_key():
|
||||
return config.tenant_secret_key
|
||||
|
||||
def get_tenant_display_name():
|
||||
return config.tenant_display_name
|
||||
|
||||
def get_tenant_name():
|
||||
return config.tenant_name
|
||||
|
||||
def get_tenant_user_id():
|
||||
return config.tenant_user_id
|
||||
|
||||
def get_tenant_email():
|
||||
return config.tenant_email
|
||||
|
||||
def get_thumbprint():
|
||||
return config.webidentity_thumbprint
|
||||
|
||||
def get_aud():
|
||||
return config.webidentity_aud
|
||||
|
||||
def get_sub():
|
||||
return config.webidentity_sub
|
||||
|
||||
def get_azp():
|
||||
return config.webidentity_azp
|
||||
|
||||
def get_token():
|
||||
return config.webidentity_token
|
||||
|
||||
def get_realm_name():
|
||||
return config.webidentity_realm
|
||||
|
||||
def get_iam_name_prefix():
|
||||
return config.iam_name_prefix
|
||||
|
||||
def make_iam_name(name):
|
||||
return config.iam_name_prefix + name
|
||||
|
||||
def get_iam_path_prefix():
|
||||
return config.iam_path_prefix
|
||||
|
||||
def get_iam_access_key():
|
||||
return config.iam_access_key
|
||||
|
||||
def get_iam_secret_key():
|
||||
return config.iam_secret_key
|
||||
|
||||
def get_iam_root_user_id():
|
||||
return config.iam_root_user_id
|
||||
|
||||
def get_iam_root_email():
|
||||
return config.iam_root_email
|
||||
|
||||
def get_iam_alt_root_user_id():
|
||||
return config.iam_alt_root_user_id
|
||||
|
||||
def get_iam_alt_root_email():
|
||||
return config.iam_alt_root_email
|
||||
|
||||
def get_user_token():
|
||||
return config.webidentity_user_token
|
||||
|
||||
def get_cloud_storage_class():
|
||||
return config.cloud_storage_class
|
||||
|
||||
def get_cloud_retain_head_object():
|
||||
return config.cloud_retain_head_object
|
||||
|
||||
def get_cloud_regular_storage_class():
|
||||
return config.cloud_regular_storage_class
|
||||
|
||||
def get_cloud_target_path():
|
||||
return config.cloud_target_path
|
||||
|
||||
def get_cloud_target_storage_class():
|
||||
return config.cloud_target_storage_class
|
||||
|
||||
def get_lc_debug_interval():
|
||||
return config.lc_debug_interval
|
|
@ -1,199 +0,0 @@
|
|||
from botocore.exceptions import ClientError
|
||||
import pytest
|
||||
|
||||
from . import (
|
||||
configfile,
|
||||
get_iam_root_client,
|
||||
get_iam_root_user_id,
|
||||
get_iam_root_email,
|
||||
get_iam_alt_root_client,
|
||||
get_iam_alt_root_user_id,
|
||||
get_iam_alt_root_email,
|
||||
get_iam_path_prefix,
|
||||
)
|
||||
|
||||
def nuke_user_keys(client, name):
|
||||
p = client.get_paginator('list_access_keys')
|
||||
for response in p.paginate(UserName=name):
|
||||
for key in response['AccessKeyMetadata']:
|
||||
try:
|
||||
client.delete_access_key(UserName=name, AccessKeyId=key['AccessKeyId'])
|
||||
except:
|
||||
pass
|
||||
|
||||
def nuke_user_policies(client, name):
|
||||
p = client.get_paginator('list_user_policies')
|
||||
for response in p.paginate(UserName=name):
|
||||
for policy in response['PolicyNames']:
|
||||
try:
|
||||
client.delete_user_policy(UserName=name, PolicyName=policy)
|
||||
except:
|
||||
pass
|
||||
|
||||
def nuke_attached_user_policies(client, name):
|
||||
p = client.get_paginator('list_attached_user_policies')
|
||||
for response in p.paginate(UserName=name):
|
||||
for policy in response['AttachedPolicies']:
|
||||
try:
|
||||
client.detach_user_policy(UserName=name, PolicyArn=policy['PolicyArn'])
|
||||
except:
|
||||
pass
|
||||
|
||||
def nuke_user(client, name):
|
||||
# delete access keys, user policies, etc
|
||||
try:
|
||||
nuke_user_keys(client, name)
|
||||
except:
|
||||
pass
|
||||
try:
|
||||
nuke_user_policies(client, name)
|
||||
except:
|
||||
pass
|
||||
try:
|
||||
nuke_attached_user_policies(client, name)
|
||||
except:
|
||||
pass
|
||||
client.delete_user(UserName=name)
|
||||
|
||||
def nuke_users(client, **kwargs):
|
||||
p = client.get_paginator('list_users')
|
||||
for response in p.paginate(**kwargs):
|
||||
for user in response['Users']:
|
||||
try:
|
||||
nuke_user(client, user['UserName'])
|
||||
except:
|
||||
pass
|
||||
|
||||
def nuke_group_policies(client, name):
|
||||
p = client.get_paginator('list_group_policies')
|
||||
for response in p.paginate(GroupName=name):
|
||||
for policy in response['PolicyNames']:
|
||||
try:
|
||||
client.delete_group_policy(GroupName=name, PolicyName=policy)
|
||||
except:
|
||||
pass
|
||||
|
||||
def nuke_attached_group_policies(client, name):
|
||||
p = client.get_paginator('list_attached_group_policies')
|
||||
for response in p.paginate(GroupName=name):
|
||||
for policy in response['AttachedPolicies']:
|
||||
try:
|
||||
client.detach_group_policy(GroupName=name, PolicyArn=policy['PolicyArn'])
|
||||
except:
|
||||
pass
|
||||
|
||||
def nuke_group_users(client, name):
|
||||
p = client.get_paginator('get_group')
|
||||
for response in p.paginate(GroupName=name):
|
||||
for user in response['Users']:
|
||||
try:
|
||||
client.remove_user_from_group(GroupName=name, UserName=user['UserName'])
|
||||
except:
|
||||
pass
|
||||
|
||||
def nuke_group(client, name):
|
||||
# delete group policies and remove all users
|
||||
try:
|
||||
nuke_group_policies(client, name)
|
||||
except:
|
||||
pass
|
||||
try:
|
||||
nuke_attached_group_policies(client, name)
|
||||
except:
|
||||
pass
|
||||
try:
|
||||
nuke_group_users(client, name)
|
||||
except:
|
||||
pass
|
||||
client.delete_group(GroupName=name)
|
||||
|
||||
def nuke_groups(client, **kwargs):
|
||||
p = client.get_paginator('list_groups')
|
||||
for response in p.paginate(**kwargs):
|
||||
for user in response['Groups']:
|
||||
try:
|
||||
nuke_group(client, user['GroupName'])
|
||||
except:
|
||||
pass
|
||||
|
||||
def nuke_role_policies(client, name):
|
||||
p = client.get_paginator('list_role_policies')
|
||||
for response in p.paginate(RoleName=name):
|
||||
for policy in response['PolicyNames']:
|
||||
try:
|
||||
client.delete_role_policy(RoleName=name, PolicyName=policy)
|
||||
except:
|
||||
pass
|
||||
|
||||
def nuke_attached_role_policies(client, name):
|
||||
p = client.get_paginator('list_attached_role_policies')
|
||||
for response in p.paginate(RoleName=name):
|
||||
for policy in response['AttachedPolicies']:
|
||||
try:
|
||||
client.detach_role_policy(RoleName=name, PolicyArn=policy['PolicyArn'])
|
||||
except:
|
||||
pass
|
||||
|
||||
def nuke_role(client, name):
|
||||
# delete role policies, etc
|
||||
try:
|
||||
nuke_role_policies(client, name)
|
||||
except:
|
||||
pass
|
||||
try:
|
||||
nuke_attached_role_policies(client, name)
|
||||
except:
|
||||
pass
|
||||
client.delete_role(RoleName=name)
|
||||
|
||||
def nuke_roles(client, **kwargs):
|
||||
p = client.get_paginator('list_roles')
|
||||
for response in p.paginate(**kwargs):
|
||||
for role in response['Roles']:
|
||||
try:
|
||||
nuke_role(client, role['RoleName'])
|
||||
except:
|
||||
pass
|
||||
|
||||
def nuke_oidc_providers(client, prefix):
|
||||
result = client.list_open_id_connect_providers()
|
||||
for provider in result['OpenIDConnectProviderList']:
|
||||
arn = provider['Arn']
|
||||
if f':oidc-provider{prefix}' in arn:
|
||||
try:
|
||||
client.delete_open_id_connect_provider(OpenIDConnectProviderArn=arn)
|
||||
except:
|
||||
pass
|
||||
|
||||
|
||||
# fixture for iam account root user
|
||||
@pytest.fixture
|
||||
def iam_root(configfile):
|
||||
client = get_iam_root_client()
|
||||
try:
|
||||
arn = client.get_user()['User']['Arn']
|
||||
if not arn.endswith(':root'):
|
||||
pytest.skip('[iam root] user does not have :root arn')
|
||||
except ClientError as e:
|
||||
pytest.skip('[iam root] user does not belong to an account')
|
||||
|
||||
yield client
|
||||
nuke_users(client, PathPrefix=get_iam_path_prefix())
|
||||
nuke_groups(client, PathPrefix=get_iam_path_prefix())
|
||||
nuke_roles(client, PathPrefix=get_iam_path_prefix())
|
||||
nuke_oidc_providers(client, get_iam_path_prefix())
|
||||
|
||||
# fixture for iam alt account root user
|
||||
@pytest.fixture
|
||||
def iam_alt_root(configfile):
|
||||
client = get_iam_alt_root_client()
|
||||
try:
|
||||
arn = client.get_user()['User']['Arn']
|
||||
if not arn.endswith(':root'):
|
||||
pytest.skip('[iam alt root] user does not have :root arn')
|
||||
except ClientError as e:
|
||||
pytest.skip('[iam alt root] user does not belong to an account')
|
||||
|
||||
yield client
|
||||
nuke_users(client, PathPrefix=get_iam_path_prefix())
|
||||
nuke_roles(client, PathPrefix=get_iam_path_prefix())
|
|
@ -1,46 +0,0 @@
|
|||
import json

class Statement(object):
    def __init__(self, action, resource, principal={"AWS": "*"}, effect="Allow", condition=None):
        self.principal = principal
        self.action = action
        self.resource = resource
        self.condition = condition
        self.effect = effect

    def to_dict(self):
        d = { "Action" : self.action,
            "Principal" : self.principal,
            "Effect" : self.effect,
            "Resource" : self.resource
        }

        if self.condition is not None:
            d["Condition"] = self.condition

        return d

class Policy(object):
    def __init__(self):
        self.statements = []

    def add_statement(self, s):
        self.statements.append(s)
        return self

    def to_json(self):
        policy_dict = {
            "Version" : "2012-10-17",
            "Statement":
            [s.to_dict() for s in self.statements]
        }

        return json.dumps(policy_dict)

def make_json_policy(action, resource, principal={"AWS": "*"}, effect="Allow", conditions=None):
    """
    Helper function to make single statement policies
    """
    s = Statement(action, resource, principal, effect=effect, condition=conditions)
    p = Policy()
    return p.add_statement(s).to_json()

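# Usage sketch (the bucket name is hypothetical): a single-statement
# public-read policy for every object in a bucket is built like this.
#
#   policy = make_json_policy("s3:GetObject",
#                             "arn:aws:s3:::my-test-bucket/*")
#
# which serializes to:
#   {"Version": "2012-10-17",
#    "Statement": [{"Action": "s3:GetObject", "Principal": {"AWS": "*"},
#                   "Effect": "Allow", "Resource": "arn:aws:s3:::my-test-bucket/*"}]}
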
|
@ -1,92 +0,0 @@
|
|||
#!/usr/bin/python
|
||||
import boto3
|
||||
import os
|
||||
import random
|
||||
import string
|
||||
import itertools
|
||||
|
||||
host = "localhost"
|
||||
port = 8000
|
||||
|
||||
## AWS access key
|
||||
access_key = "0555b35654ad1656d804"
|
||||
|
||||
## AWS secret key
|
||||
secret_key = "h7GhxuBLTrlhVUyxSPUKUV8r/2EI4ngqJxD7iBdBYLhwluN30JaT3Q=="
|
||||
|
||||
prefix = "YOURNAMEHERE-1234-"
|
||||
|
||||
endpoint_url = "http://%s:%d" % (host, port)
|
||||
|
||||
client = boto3.client(service_name='s3',
|
||||
aws_access_key_id=access_key,
|
||||
aws_secret_access_key=secret_key,
|
||||
endpoint_url=endpoint_url,
|
||||
use_ssl=False,
|
||||
verify=False)
|
||||
|
||||
s3 = boto3.resource('s3',
|
||||
use_ssl=False,
|
||||
verify=False,
|
||||
endpoint_url=endpoint_url,
|
||||
aws_access_key_id=access_key,
|
||||
aws_secret_access_key=secret_key)
|
||||
|
||||
def choose_bucket_prefix(template, max_len=30):
|
||||
"""
|
||||
Choose a prefix for our test buckets, so they're easy to identify.
|
||||
|
||||
Use template and feed it more and more random filler, until it's
|
||||
as long as possible but still below max_len.
|
||||
"""
|
||||
rand = ''.join(
|
||||
random.choice(string.ascii_lowercase + string.digits)
|
||||
for c in range(255)
|
||||
)
|
||||
|
||||
while rand:
|
||||
s = template.format(random=rand)
|
||||
if len(s) <= max_len:
|
||||
return s
|
||||
rand = rand[:-1]
|
||||
|
||||
raise RuntimeError(
|
||||
'Bucket prefix template is impossible to fulfill: {template!r}'.format(
|
||||
template=template,
|
||||
),
|
||||
)
|
||||
|
||||
bucket_counter = itertools.count(1)
|
||||
|
||||
def get_new_bucket_name():
|
||||
"""
|
||||
Get a bucket name that probably does not exist.
|
||||
|
||||
We make every attempt to use a unique random prefix, so if a
|
||||
bucket by this name happens to exist, it's ok if tests give
|
||||
false negatives.
|
||||
"""
|
||||
name = '{prefix}{num}'.format(
|
||||
prefix=prefix,
|
||||
num=next(bucket_counter),
|
||||
)
|
||||
return name
|
||||
|
||||
def get_new_bucket(session=boto3, name=None, headers=None):
|
||||
"""
|
||||
Get a bucket that exists and is empty.
|
||||
|
||||
Always recreates a bucket from scratch. This is useful to also
|
||||
reset ACLs and such.
|
||||
"""
|
||||
s3 = session.resource('s3',
|
||||
use_ssl=False,
|
||||
verify=False,
|
||||
endpoint_url=endpoint_url,
|
||||
aws_access_key_id=access_key,
|
||||
aws_secret_access_key=secret_key)
|
||||
if name is None:
|
||||
name = get_new_bucket_name()
|
||||
bucket = s3.Bucket(name)
|
||||
bucket_location = bucket.create()
|
||||
return bucket
|
|
@ -1,572 +0,0 @@
|
|||
import boto3
|
||||
import pytest
|
||||
from botocore.exceptions import ClientError
|
||||
from email.utils import formatdate
|
||||
|
||||
from .utils import assert_raises
|
||||
from .utils import _get_status_and_error_code
|
||||
from .utils import _get_status
|
||||
|
||||
from . import (
|
||||
configfile,
|
||||
setup_teardown,
|
||||
get_client,
|
||||
get_v2_client,
|
||||
get_new_bucket,
|
||||
get_new_bucket_name,
|
||||
)
|
||||
|
||||
def _add_header_create_object(headers, client=None):
    """ Create a new bucket, add an object w/header customizations
    """
    bucket_name = get_new_bucket()
    if client is None:
        client = get_client()
    key_name = 'foo'

    # pass in custom headers before PutObject call
    add_headers = (lambda **kwargs: kwargs['params']['headers'].update(headers))
    client.meta.events.register('before-call.s3.PutObject', add_headers)
    client.put_object(Bucket=bucket_name, Key=key_name)

    return bucket_name, key_name

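# Usage sketch (the header below is just an example value): the helper hooks
# botocore's 'before-call' event so the extra header is sent with the PutObject
# request, then returns the bucket/key it created.
#
#   bucket_name, key_name = _add_header_create_object({'x-amz-meta-example': 'value'})
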
||||
|
||||
|
||||
def _add_header_create_bad_object(headers, client=None):
|
||||
""" Create a new bucket, add an object with a header. This should cause a failure
|
||||
"""
|
||||
bucket_name = get_new_bucket()
|
||||
if client == None:
|
||||
client = get_client()
|
||||
key_name = 'foo'
|
||||
|
||||
# pass in custom headers before PutObject call
|
||||
add_headers = (lambda **kwargs: kwargs['params']['headers'].update(headers))
|
||||
client.meta.events.register('before-call.s3.PutObject', add_headers)
|
||||
e = assert_raises(ClientError, client.put_object, Bucket=bucket_name, Key=key_name, Body='bar')
|
||||
|
||||
return e
|
||||
|
||||
|
||||
def _remove_header_create_object(remove, client=None):
|
||||
""" Create a new bucket, add an object without a header
|
||||
"""
|
||||
bucket_name = get_new_bucket()
|
||||
if client == None:
|
||||
client = get_client()
|
||||
key_name = 'foo'
|
||||
|
||||
# remove custom headers before PutObject call
|
||||
def remove_header(**kwargs):
|
||||
if (remove in kwargs['params']['headers']):
|
||||
del kwargs['params']['headers'][remove]
|
||||
|
||||
client.meta.events.register('before-call.s3.PutObject', remove_header)
|
||||
client.put_object(Bucket=bucket_name, Key=key_name)
|
||||
|
||||
return bucket_name, key_name
|
||||
|
||||
def _remove_header_create_bad_object(remove, client=None):
|
||||
""" Create a new bucket, add an object without a header. This should cause a failure
|
||||
"""
|
||||
bucket_name = get_new_bucket()
|
||||
if client == None:
|
||||
client = get_client()
|
||||
key_name = 'foo'
|
||||
|
||||
# remove custom headers before PutObject call
|
||||
def remove_header(**kwargs):
|
||||
if (remove in kwargs['params']['headers']):
|
||||
del kwargs['params']['headers'][remove]
|
||||
|
||||
client.meta.events.register('before-call.s3.PutObject', remove_header)
|
||||
e = assert_raises(ClientError, client.put_object, Bucket=bucket_name, Key=key_name, Body='bar')
|
||||
|
||||
return e
|
||||
|
||||
|
||||
def _add_header_create_bucket(headers, client=None):
|
||||
""" Create a new bucket, w/header customizations
|
||||
"""
|
||||
bucket_name = get_new_bucket_name()
|
||||
if client == None:
|
||||
client = get_client()
|
||||
|
||||
# pass in custom headers before PutObject call
|
||||
add_headers = (lambda **kwargs: kwargs['params']['headers'].update(headers))
|
||||
client.meta.events.register('before-call.s3.CreateBucket', add_headers)
|
||||
client.create_bucket(Bucket=bucket_name)
|
||||
|
||||
return bucket_name
|
||||
|
||||
|
||||
def _add_header_create_bad_bucket(headers=None, client=None):
|
||||
""" Create a new bucket, w/header customizations that should cause a failure
|
||||
"""
|
||||
bucket_name = get_new_bucket_name()
|
||||
if client == None:
|
||||
client = get_client()
|
||||
|
||||
# pass in custom headers before PutObject call
|
||||
add_headers = (lambda **kwargs: kwargs['params']['headers'].update(headers))
|
||||
client.meta.events.register('before-call.s3.CreateBucket', add_headers)
|
||||
e = assert_raises(ClientError, client.create_bucket, Bucket=bucket_name)
|
||||
|
||||
return e
|
||||
|
||||
|
||||
def _remove_header_create_bucket(remove, client=None):
|
||||
""" Create a new bucket, without a header
|
||||
"""
|
||||
bucket_name = get_new_bucket_name()
|
||||
if client == None:
|
||||
client = get_client()
|
||||
|
||||
# remove custom headers before PutObject call
|
||||
def remove_header(**kwargs):
|
||||
if (remove in kwargs['params']['headers']):
|
||||
del kwargs['params']['headers'][remove]
|
||||
|
||||
client.meta.events.register('before-call.s3.CreateBucket', remove_header)
|
||||
client.create_bucket(Bucket=bucket_name)
|
||||
|
||||
return bucket_name
|
||||
|
||||
def _remove_header_create_bad_bucket(remove, client=None):
|
||||
""" Create a new bucket, without a header. This should cause a failure
|
||||
"""
|
||||
bucket_name = get_new_bucket_name()
|
||||
if client == None:
|
||||
client = get_client()
|
||||
|
||||
# remove custom headers before PutObject call
|
||||
def remove_header(**kwargs):
|
||||
if (remove in kwargs['params']['headers']):
|
||||
del kwargs['params']['headers'][remove]
|
||||
|
||||
client.meta.events.register('before-call.s3.CreateBucket', remove_header)
|
||||
e = assert_raises(ClientError, client.create_bucket, Bucket=bucket_name)
|
||||
|
||||
return e
|
||||
|
||||
#
|
||||
# common tests
|
||||
#
|
||||
|
||||
@pytest.mark.auth_common
|
||||
def test_object_create_bad_md5_invalid_short():
|
||||
e = _add_header_create_bad_object({'Content-MD5':'YWJyYWNhZGFicmE='})
|
||||
status, error_code = _get_status_and_error_code(e.response)
|
||||
assert status == 400
|
||||
assert error_code == 'InvalidDigest'
|
||||
|
||||
@pytest.mark.auth_common
|
||||
def test_object_create_bad_md5_bad():
|
||||
e = _add_header_create_bad_object({'Content-MD5':'rL0Y20xC+Fzt72VPzMSk2A=='})
|
||||
status, error_code = _get_status_and_error_code(e.response)
|
||||
assert status == 400
|
||||
assert error_code == 'BadDigest'
|
||||
|
||||
@pytest.mark.auth_common
|
||||
def test_object_create_bad_md5_empty():
|
||||
e = _add_header_create_bad_object({'Content-MD5':''})
|
||||
status, error_code = _get_status_and_error_code(e.response)
|
||||
assert status == 400
|
||||
assert error_code == 'InvalidDigest'
|
||||
|
||||
@pytest.mark.auth_common
|
||||
def test_object_create_bad_md5_none():
|
||||
bucket_name, key_name = _remove_header_create_object('Content-MD5')
|
||||
client = get_client()
|
||||
client.put_object(Bucket=bucket_name, Key=key_name, Body='bar')
|
||||
|
||||
@pytest.mark.auth_common
|
||||
def test_object_create_bad_expect_mismatch():
|
||||
bucket_name, key_name = _add_header_create_object({'Expect': 200})
|
||||
client = get_client()
|
||||
client.put_object(Bucket=bucket_name, Key=key_name, Body='bar')
|
||||
|
||||
@pytest.mark.auth_common
|
||||
def test_object_create_bad_expect_empty():
|
||||
bucket_name, key_name = _add_header_create_object({'Expect': ''})
|
||||
client = get_client()
|
||||
client.put_object(Bucket=bucket_name, Key=key_name, Body='bar')
|
||||
|
||||
@pytest.mark.auth_common
|
||||
def test_object_create_bad_expect_none():
|
||||
bucket_name, key_name = _remove_header_create_object('Expect')
|
||||
client = get_client()
|
||||
client.put_object(Bucket=bucket_name, Key=key_name, Body='bar')
|
||||
|
||||
@pytest.mark.auth_common
|
||||
# TODO: remove 'fails_on_rgw' once we have learned how to remove the content-length header
|
||||
@pytest.mark.fails_on_rgw
|
||||
def test_object_create_bad_contentlength_empty():
|
||||
e = _add_header_create_bad_object({'Content-Length':''})
|
||||
status, error_code = _get_status_and_error_code(e.response)
|
||||
assert status == 400
|
||||
|
||||
@pytest.mark.auth_common
|
||||
@pytest.mark.fails_on_mod_proxy_fcgi
|
||||
def test_object_create_bad_contentlength_negative():
|
||||
client = get_client()
|
||||
bucket_name = get_new_bucket()
|
||||
key_name = 'foo'
|
||||
e = assert_raises(ClientError, client.put_object, Bucket=bucket_name, Key=key_name, ContentLength=-1)
|
||||
status = _get_status(e.response)
|
||||
assert status == 400
|
||||
|
||||
@pytest.mark.auth_common
|
||||
# TODO: remove 'fails_on_rgw' once we have learned how to remove the content-length header
|
||||
@pytest.mark.fails_on_rgw
|
||||
def test_object_create_bad_contentlength_none():
|
||||
remove = 'Content-Length'
|
||||
e = _remove_header_create_bad_object('Content-Length')
|
||||
status, error_code = _get_status_and_error_code(e.response)
|
||||
assert status == 411
|
||||
assert error_code == 'MissingContentLength'
|
||||
|
||||
@pytest.mark.auth_common
|
||||
def test_object_create_bad_contenttype_invalid():
|
||||
bucket_name, key_name = _add_header_create_object({'Content-Type': 'text/plain'})
|
||||
client = get_client()
|
||||
client.put_object(Bucket=bucket_name, Key=key_name, Body='bar')
|
||||
|
||||
@pytest.mark.auth_common
|
||||
def test_object_create_bad_contenttype_empty():
|
||||
client = get_client()
|
||||
key_name = 'foo'
|
||||
bucket_name = get_new_bucket()
|
||||
client.put_object(Bucket=bucket_name, Key=key_name, Body='bar', ContentType='')
|
||||
|
||||
@pytest.mark.auth_common
|
||||
def test_object_create_bad_contenttype_none():
|
||||
bucket_name = get_new_bucket()
|
||||
key_name = 'foo'
|
||||
client = get_client()
|
||||
# as long as ContentType isn't specified in put_object it isn't going into the request
|
||||
client.put_object(Bucket=bucket_name, Key=key_name, Body='bar')
|
||||
|
||||
|
||||
@pytest.mark.auth_common
|
||||
# TODO: remove 'fails_on_rgw' once we have learned how to remove the authorization header
|
||||
@pytest.mark.fails_on_rgw
|
||||
def test_object_create_bad_authorization_empty():
|
||||
e = _add_header_create_bad_object({'Authorization': ''})
|
||||
status, error_code = _get_status_and_error_code(e.response)
|
||||
assert status == 403
|
||||
|
||||
@pytest.mark.auth_common
|
||||
# TODO: remove 'fails_on_rgw' once we have learned how to pass both the 'Date' and 'X-Amz-Date' header during signing and not 'X-Amz-Date' before
|
||||
@pytest.mark.fails_on_rgw
|
||||
def test_object_create_date_and_amz_date():
|
||||
date = formatdate(usegmt=True)
|
||||
bucket_name, key_name = _add_header_create_object({'Date': date, 'X-Amz-Date': date})
|
||||
client = get_client()
|
||||
client.put_object(Bucket=bucket_name, Key=key_name, Body='bar')
|
||||
|
||||
@pytest.mark.auth_common
|
||||
# TODO: remove 'fails_on_rgw' once we have learned how to pass both the 'Date' and 'X-Amz-Date' header during signing and not 'X-Amz-Date' before
|
||||
@pytest.mark.fails_on_rgw
|
||||
def test_object_create_amz_date_and_no_date():
|
||||
date = formatdate(usegmt=True)
|
||||
bucket_name, key_name = _add_header_create_object({'Date': '', 'X-Amz-Date': date})
|
||||
client = get_client()
|
||||
client.put_object(Bucket=bucket_name, Key=key_name, Body='bar')
|
||||
|
||||
# the teardown is really messed up here. check it out
|
||||
@pytest.mark.auth_common
|
||||
# TODO: remove 'fails_on_rgw' once we have learned how to remove the authorization header
|
||||
@pytest.mark.fails_on_rgw
|
||||
def test_object_create_bad_authorization_none():
|
||||
e = _remove_header_create_bad_object('Authorization')
|
||||
status, error_code = _get_status_and_error_code(e.response)
|
||||
assert status == 403
|
||||
|
||||
@pytest.mark.auth_common
|
||||
# TODO: remove 'fails_on_rgw' once we have learned how to remove the content-length header
|
||||
@pytest.mark.fails_on_rgw
|
||||
def test_bucket_create_contentlength_none():
|
||||
remove = 'Content-Length'
|
||||
_remove_header_create_bucket(remove)
|
||||
|
||||
@pytest.mark.auth_common
|
||||
# TODO: remove 'fails_on_rgw' once we have learned how to remove the content-length header
|
||||
@pytest.mark.fails_on_rgw
|
||||
def test_object_acl_create_contentlength_none():
|
||||
bucket_name = get_new_bucket()
|
||||
client = get_client()
|
||||
client.put_object(Bucket=bucket_name, Key='foo', Body='bar')
|
||||
|
||||
remove = 'Content-Length'
|
||||
def remove_header(**kwargs):
|
||||
if (remove in kwargs['params']['headers']):
|
||||
del kwargs['params']['headers'][remove]
|
||||
|
||||
client.meta.events.register('before-call.s3.PutObjectAcl', remove_header)
|
||||
client.put_object_acl(Bucket=bucket_name, Key='foo', ACL='public-read')
|
||||
|
||||
@pytest.mark.auth_common
|
||||
def test_bucket_put_bad_canned_acl():
|
||||
bucket_name = get_new_bucket()
|
||||
client = get_client()
|
||||
|
||||
headers = {'x-amz-acl': 'public-ready'}
|
||||
add_headers = (lambda **kwargs: kwargs['params']['headers'].update(headers))
|
||||
client.meta.events.register('before-call.s3.PutBucketAcl', add_headers)
|
||||
|
||||
e = assert_raises(ClientError, client.put_bucket_acl, Bucket=bucket_name, ACL='public-read')
|
||||
status = _get_status(e.response)
|
||||
assert status == 400
|
||||
|
||||
@pytest.mark.auth_common
|
||||
def test_bucket_create_bad_expect_mismatch():
|
||||
bucket_name = get_new_bucket_name()
|
||||
client = get_client()
|
||||
|
||||
headers = {'Expect': 200}
|
||||
add_headers = (lambda **kwargs: kwargs['params']['headers'].update(headers))
|
||||
client.meta.events.register('before-call.s3.CreateBucket', add_headers)
|
||||
client.create_bucket(Bucket=bucket_name)
|
||||
|
||||
@pytest.mark.auth_common
|
||||
def test_bucket_create_bad_expect_empty():
|
||||
headers = {'Expect': ''}
|
||||
_add_header_create_bucket(headers)
|
||||
|
||||
@pytest.mark.auth_common
|
||||
# TODO: The request isn't even making it to the RGW past the frontend
|
||||
# This test had 'fails_on_rgw' before the move to boto3
|
||||
@pytest.mark.fails_on_rgw
|
||||
def test_bucket_create_bad_contentlength_empty():
|
||||
headers = {'Content-Length': ''}
|
||||
e = _add_header_create_bad_bucket(headers)
|
||||
status, error_code = _get_status_and_error_code(e.response)
|
||||
assert status == 400
|
||||
|
||||
@pytest.mark.auth_common
|
||||
@pytest.mark.fails_on_mod_proxy_fcgi
|
||||
def test_bucket_create_bad_contentlength_negative():
|
||||
headers = {'Content-Length': '-1'}
|
||||
e = _add_header_create_bad_bucket(headers)
|
||||
status = _get_status(e.response)
|
||||
assert status == 400
|
||||
|
||||
@pytest.mark.auth_common
|
||||
# TODO: remove 'fails_on_rgw' once we have learned how to remove the content-length header
|
||||
@pytest.mark.fails_on_rgw
|
||||
def test_bucket_create_bad_contentlength_none():
|
||||
remove = 'Content-Length'
|
||||
_remove_header_create_bucket(remove)
|
||||
|
||||
@pytest.mark.auth_common
|
||||
# TODO: remove 'fails_on_rgw' once we have learned how to manipulate the authorization header
|
||||
@pytest.mark.fails_on_rgw
|
||||
def test_bucket_create_bad_authorization_empty():
|
||||
headers = {'Authorization': ''}
|
||||
e = _add_header_create_bad_bucket(headers)
|
||||
status, error_code = _get_status_and_error_code(e.response)
|
||||
assert status == 403
|
||||
assert error_code == 'AccessDenied'
|
||||
|
||||
@pytest.mark.auth_common
|
||||
# TODO: remove 'fails_on_rgw' once we have learned how to manipulate the authorization header
|
||||
@pytest.mark.fails_on_rgw
|
||||
def test_bucket_create_bad_authorization_none():
|
||||
e = _remove_header_create_bad_bucket('Authorization')
|
||||
status, error_code = _get_status_and_error_code(e.response)
|
||||
assert status == 403
|
||||
assert error_code == 'AccessDenied'
|
||||
|
||||
@pytest.mark.auth_aws2
|
||||
def test_object_create_bad_md5_invalid_garbage_aws2():
|
||||
v2_client = get_v2_client()
|
||||
headers = {'Content-MD5': 'AWS HAHAHA'}
|
||||
e = _add_header_create_bad_object(headers, v2_client)
|
||||
status, error_code = _get_status_and_error_code(e.response)
|
||||
assert status == 400
|
||||
assert error_code == 'InvalidDigest'
|
||||
|
||||
@pytest.mark.auth_aws2
|
||||
# TODO: remove 'fails_on_rgw' once we have learned how to manipulate the Content-Length header
|
||||
@pytest.mark.fails_on_rgw
|
||||
def test_object_create_bad_contentlength_mismatch_below_aws2():
|
||||
v2_client = get_v2_client()
|
||||
content = 'bar'
|
||||
length = len(content) - 1
|
||||
headers = {'Content-Length': str(length)}
|
||||
e = _add_header_create_bad_object(headers, v2_client)
|
||||
status, error_code = _get_status_and_error_code(e.response)
|
||||
assert status == 400
|
||||
assert error_code == 'BadDigest'
|
||||
|
||||
@pytest.mark.auth_aws2
|
||||
# TODO: remove 'fails_on_rgw' once we have learned how to manipulate the authorization header
|
||||
@pytest.mark.fails_on_rgw
|
||||
def test_object_create_bad_authorization_incorrect_aws2():
|
||||
v2_client = get_v2_client()
|
||||
headers = {'Authorization': 'AWS AKIAIGR7ZNNBHC5BKSUB:FWeDfwojDSdS2Ztmpfeubhd9isU='}
|
||||
e = _add_header_create_bad_object(headers, v2_client)
|
||||
status, error_code = _get_status_and_error_code(e.response)
|
||||
assert status == 403
|
||||
assert error_code == 'InvalidDigest'
|
||||
|
||||
@pytest.mark.auth_aws2
|
||||
# TODO: remove 'fails_on_rgw' once we have learned how to manipulate the authorization header
|
||||
@pytest.mark.fails_on_rgw
|
||||
def test_object_create_bad_authorization_invalid_aws2():
|
||||
v2_client = get_v2_client()
|
||||
headers = {'Authorization': 'AWS HAHAHA'}
|
||||
e = _add_header_create_bad_object(headers, v2_client)
|
||||
status, error_code = _get_status_and_error_code(e.response)
|
||||
assert status == 400
|
||||
assert error_code == 'InvalidArgument'
|
||||
|
||||
@pytest.mark.auth_aws2
|
||||
def test_object_create_bad_ua_empty_aws2():
|
||||
v2_client = get_v2_client()
|
||||
headers = {'User-Agent': ''}
|
||||
bucket_name, key_name = _add_header_create_object(headers, v2_client)
|
||||
v2_client.put_object(Bucket=bucket_name, Key=key_name, Body='bar')
|
||||
|
||||
@pytest.mark.auth_aws2
|
||||
def test_object_create_bad_ua_none_aws2():
|
||||
v2_client = get_v2_client()
|
||||
remove = 'User-Agent'
|
||||
bucket_name, key_name = _remove_header_create_object(remove, v2_client)
|
||||
v2_client.put_object(Bucket=bucket_name, Key=key_name, Body='bar')
|
||||
|
||||
@pytest.mark.auth_aws2
|
||||
def test_object_create_bad_date_invalid_aws2():
|
||||
v2_client = get_v2_client()
|
||||
headers = {'x-amz-date': 'Bad Date'}
|
||||
e = _add_header_create_bad_object(headers, v2_client)
|
||||
status, error_code = _get_status_and_error_code(e.response)
|
||||
assert status == 403
|
||||
assert error_code == 'AccessDenied'
|
||||
|
||||
@pytest.mark.auth_aws2
|
||||
def test_object_create_bad_date_empty_aws2():
|
||||
v2_client = get_v2_client()
|
||||
headers = {'x-amz-date': ''}
|
||||
e = _add_header_create_bad_object(headers, v2_client)
|
||||
status, error_code = _get_status_and_error_code(e.response)
|
||||
assert status == 403
|
||||
assert error_code == 'AccessDenied'
|
||||
|
||||
@pytest.mark.auth_aws2
|
||||
# TODO: remove 'fails_on_rgw' once we have learned how to remove the date header
|
||||
@pytest.mark.fails_on_rgw
|
||||
def test_object_create_bad_date_none_aws2():
|
||||
v2_client = get_v2_client()
|
||||
remove = 'x-amz-date'
|
||||
e = _remove_header_create_bad_object(remove, v2_client)
|
||||
status, error_code = _get_status_and_error_code(e.response)
|
||||
assert status == 403
|
||||
assert error_code == 'AccessDenied'
|
||||
|
||||
@pytest.mark.auth_aws2
|
||||
def test_object_create_bad_date_before_today_aws2():
|
||||
v2_client = get_v2_client()
|
||||
headers = {'x-amz-date': 'Tue, 07 Jul 2010 21:53:04 GMT'}
|
||||
e = _add_header_create_bad_object(headers, v2_client)
|
||||
status, error_code = _get_status_and_error_code(e.response)
|
||||
assert status == 403
|
||||
assert error_code == 'RequestTimeTooSkewed'
|
||||
|
||||
@pytest.mark.auth_aws2
|
||||
def test_object_create_bad_date_before_epoch_aws2():
|
||||
v2_client = get_v2_client()
|
||||
headers = {'x-amz-date': 'Tue, 07 Jul 1950 21:53:04 GMT'}
|
||||
e = _add_header_create_bad_object(headers, v2_client)
|
||||
status, error_code = _get_status_and_error_code(e.response)
|
||||
assert status == 403
|
||||
assert error_code == 'AccessDenied'
|
||||
|
||||
@pytest.mark.auth_aws2
|
||||
def test_object_create_bad_date_after_end_aws2():
|
||||
v2_client = get_v2_client()
|
||||
headers = {'x-amz-date': 'Tue, 07 Jul 9999 21:53:04 GMT'}
|
||||
e = _add_header_create_bad_object(headers, v2_client)
|
||||
status, error_code = _get_status_and_error_code(e.response)
|
||||
assert status == 403
|
||||
assert error_code == 'RequestTimeTooSkewed'
|
||||
|
||||
@pytest.mark.auth_aws2
|
||||
# TODO: remove 'fails_on_rgw' once we have learned how to remove the date header
|
||||
@pytest.mark.fails_on_rgw
|
||||
def test_bucket_create_bad_authorization_invalid_aws2():
|
||||
v2_client = get_v2_client()
|
||||
headers = {'Authorization': 'AWS HAHAHA'}
|
||||
e = _add_header_create_bad_bucket(headers, v2_client)
|
||||
status, error_code = _get_status_and_error_code(e.response)
|
||||
assert status == 400
|
||||
assert error_code == 'InvalidArgument'
|
||||
|
||||
@pytest.mark.auth_aws2
|
||||
def test_bucket_create_bad_ua_empty_aws2():
|
||||
v2_client = get_v2_client()
|
||||
headers = {'User-Agent': ''}
|
||||
_add_header_create_bucket(headers, v2_client)
|
||||
|
||||
@pytest.mark.auth_aws2
|
||||
def test_bucket_create_bad_ua_none_aws2():
|
||||
v2_client = get_v2_client()
|
||||
remove = 'User-Agent'
|
||||
_remove_header_create_bucket(remove, v2_client)
|
||||
|
||||
@pytest.mark.auth_aws2
|
||||
def test_bucket_create_bad_date_invalid_aws2():
|
||||
v2_client = get_v2_client()
|
||||
headers = {'x-amz-date': 'Bad Date'}
|
||||
e = _add_header_create_bad_bucket(headers, v2_client)
|
||||
status, error_code = _get_status_and_error_code(e.response)
|
||||
assert status == 403
|
||||
assert error_code == 'AccessDenied'
|
||||
|
||||
@pytest.mark.auth_aws2
|
||||
def test_bucket_create_bad_date_empty_aws2():
|
||||
v2_client = get_v2_client()
|
||||
headers = {'x-amz-date': ''}
|
||||
e = _add_header_create_bad_bucket(headers, v2_client)
|
||||
status, error_code = _get_status_and_error_code(e.response)
|
||||
assert status == 403
|
||||
assert error_code == 'AccessDenied'
|
||||
|
||||
@pytest.mark.auth_aws2
|
||||
# TODO: remove 'fails_on_rgw' once we have learned how to remove the date header
|
||||
@pytest.mark.fails_on_rgw
|
||||
def test_bucket_create_bad_date_none_aws2():
|
||||
v2_client = get_v2_client()
|
||||
remove = 'x-amz-date'
|
||||
e = _remove_header_create_bad_bucket(remove, v2_client)
|
||||
status, error_code = _get_status_and_error_code(e.response)
|
||||
assert status == 403
|
||||
assert error_code == 'AccessDenied'
|
||||
|
||||
@pytest.mark.auth_aws2
|
||||
def test_bucket_create_bad_date_before_today_aws2():
|
||||
v2_client = get_v2_client()
|
||||
headers = {'x-amz-date': 'Tue, 07 Jul 2010 21:53:04 GMT'}
|
||||
e = _add_header_create_bad_bucket(headers, v2_client)
|
||||
status, error_code = _get_status_and_error_code(e.response)
|
||||
assert status == 403
|
||||
assert error_code == 'RequestTimeTooSkewed'
|
||||
|
||||
@pytest.mark.auth_aws2
|
||||
def test_bucket_create_bad_date_after_today_aws2():
|
||||
v2_client = get_v2_client()
|
||||
headers = {'x-amz-date': 'Tue, 07 Jul 2030 21:53:04 GMT'}
|
||||
e = _add_header_create_bad_bucket(headers, v2_client)
|
||||
status, error_code = _get_status_and_error_code(e.response)
|
||||
assert status == 403
|
||||
assert error_code == 'RequestTimeTooSkewed'
|
||||
|
||||
@pytest.mark.auth_aws2
|
||||
def test_bucket_create_bad_date_before_epoch_aws2():
|
||||
v2_client = get_v2_client()
|
||||
headers = {'x-amz-date': 'Tue, 07 Jul 1950 21:53:04 GMT'}
|
||||
e = _add_header_create_bad_bucket(headers, v2_client)
|
||||
status, error_code = _get_status_and_error_code(e.response)
|
||||
assert status == 403
|
||||
assert error_code == 'AccessDenied'
|
@@ -1,159 +0,0 @@
import json
import pytest
from botocore.exceptions import ClientError
from . import (
    configfile,
    get_iam_root_client,
    get_iam_alt_root_client,
    get_new_bucket_name,
    get_prefix,
    nuke_prefixed_buckets,
)
from .iam import iam_root, iam_alt_root
from .utils import assert_raises, _get_status_and_error_code

def get_new_topic_name():
    return get_new_bucket_name()

def nuke_topics(client, prefix):
    p = client.get_paginator('list_topics')
    for response in p.paginate():
        for topic in response['Topics']:
            arn = topic['TopicArn']
            if prefix not in arn:
                pass
            try:
                client.delete_topic(TopicArn=arn)
            except:
                pass

@pytest.fixture
def sns(iam_root):
    client = get_iam_root_client(service_name='sns')
    yield client
    nuke_topics(client, get_prefix())

@pytest.fixture
def sns_alt(iam_alt_root):
    client = get_iam_alt_root_client(service_name='sns')
    yield client
    nuke_topics(client, get_prefix())

@pytest.fixture
def s3(iam_root):
    client = get_iam_root_client(service_name='s3')
    yield client
    nuke_prefixed_buckets(get_prefix(), client)

@pytest.fixture
def s3_alt(iam_alt_root):
    client = get_iam_alt_root_client(service_name='s3')
    yield client
    nuke_prefixed_buckets(get_prefix(), client)


@pytest.mark.iam_account
@pytest.mark.sns
def test_account_topic(sns):
    name = get_new_topic_name()

    response = sns.create_topic(Name=name)
    arn = response['TopicArn']
    assert arn.startswith('arn:aws:sns:')
    assert arn.endswith(f':{name}')

    response = sns.list_topics()
    assert arn in [p['TopicArn'] for p in response['Topics']]

    sns.set_topic_attributes(TopicArn=arn, AttributeName='Policy', AttributeValue='')

    response = sns.get_topic_attributes(TopicArn=arn)
    assert 'Attributes' in response

    sns.delete_topic(TopicArn=arn)

    response = sns.list_topics()
    assert arn not in [p['TopicArn'] for p in response['Topics']]

    with pytest.raises(sns.exceptions.NotFoundException):
        sns.get_topic_attributes(TopicArn=arn)
        sns.delete_topic(TopicArn=arn)

@pytest.mark.iam_account
@pytest.mark.sns
def test_cross_account_topic(sns, sns_alt):
    name = get_new_topic_name()
    arn = sns.create_topic(Name=name)['TopicArn']

    # not visible to any alt user apis
    with pytest.raises(sns.exceptions.NotFoundException):
        sns_alt.get_topic_attributes(TopicArn=arn)
    with pytest.raises(sns.exceptions.NotFoundException):
        sns_alt.set_topic_attributes(TopicArn=arn, AttributeName='Policy', AttributeValue='')

    # delete returns success
    sns_alt.delete_topic(TopicArn=arn)

    response = sns_alt.list_topics()
    assert arn not in [p['TopicArn'] for p in response['Topics']]

@pytest.mark.iam_account
@pytest.mark.sns
def test_account_topic_publish(sns, s3):
    name = get_new_topic_name()

    response = sns.create_topic(Name=name)
    topic_arn = response['TopicArn']

    bucket = get_new_bucket_name()
    s3.create_bucket(Bucket=bucket)

    config = {'TopicConfigurations': [{
        'Id': 'id',
        'TopicArn': topic_arn,
        'Events': [ 's3:ObjectCreated:*' ],
        }]}
    s3.put_bucket_notification_configuration(
        Bucket=bucket, NotificationConfiguration=config)

@pytest.mark.iam_account
@pytest.mark.iam_cross_account
@pytest.mark.sns
def test_cross_account_topic_publish(sns, s3_alt, iam_alt_root):
    name = get_new_topic_name()

    response = sns.create_topic(Name=name)
    topic_arn = response['TopicArn']

    bucket = get_new_bucket_name()
    s3_alt.create_bucket(Bucket=bucket)

    config = {'TopicConfigurations': [{
        'Id': 'id',
        'TopicArn': topic_arn,
        'Events': [ 's3:ObjectCreated:*' ],
        }]}

    # expect AccessDenied because no resource policy allows cross-account access
    e = assert_raises(ClientError, s3_alt.put_bucket_notification_configuration,
                      Bucket=bucket, NotificationConfiguration=config)
    status, error_code = _get_status_and_error_code(e.response)
    assert status == 403
    assert error_code == 'AccessDenied'

    # add topic policy to allow the alt user
    alt_principal = iam_alt_root.get_user()['User']['Arn']
    policy = json.dumps({
        'Version': '2012-10-17',
        'Statement': [{
            'Effect': 'Allow',
            'Principal': {'AWS': alt_principal},
            'Action': 'sns:Publish',
            'Resource': topic_arn
            }]
        })
    sns.set_topic_attributes(TopicArn=topic_arn, AttributeName='Policy',
                             AttributeValue=policy)

    s3_alt.put_bucket_notification_configuration(
        Bucket=bucket, NotificationConfiguration=config)
File diff suppressed because it is too large
@@ -1,9 +0,0 @@
from . import utils

def test_generate():
    FIVE_MB = 5 * 1024 * 1024
    assert len(''.join(utils.generate_random(0))) == 0
    assert len(''.join(utils.generate_random(1))) == 1
    assert len(''.join(utils.generate_random(FIVE_MB - 1))) == FIVE_MB - 1
    assert len(''.join(utils.generate_random(FIVE_MB))) == FIVE_MB
    assert len(''.join(utils.generate_random(FIVE_MB + 1))) == FIVE_MB + 1
@@ -1,47 +0,0 @@
import random
import requests
import string
import time

def assert_raises(excClass, callableObj, *args, **kwargs):
    """
    Like unittest.TestCase.assertRaises, but returns the exception.
    """
    try:
        callableObj(*args, **kwargs)
    except excClass as e:
        return e
    else:
        if hasattr(excClass, '__name__'):
            excName = excClass.__name__
        else:
            excName = str(excClass)
        raise AssertionError("%s not raised" % excName)

def generate_random(size, part_size=5*1024*1024):
    """
    Generate random data of the specified size, yielded in parts.
    (actually each MB is a repetition of the first KB)
    """
    chunk = 1024
    allowed = string.ascii_letters
    for x in range(0, size, part_size):
        strpart = ''.join([allowed[random.randint(0, len(allowed) - 1)] for _ in range(chunk)])
        s = ''
        left = size - x
        this_part_size = min(left, part_size)
        for y in range(this_part_size // chunk):
            s = s + strpart
        s = s + strpart[:(this_part_size % chunk)]
        yield s
        if (x == size):
            return

def _get_status(response):
    status = response['ResponseMetadata']['HTTPStatusCode']
    return status

def _get_status_and_error_code(response):
    status = response['ResponseMetadata']['HTTPStatusCode']
    error_code = response['Error']['Code']
    return status, error_code
14 setup.py
@@ -14,10 +14,16 @@ setup(
    install_requires=[
        'boto >=2.0b4',
        'boto3 >=1.0.0',
        'PyYAML',
        'munch >=2.0.0',
        'gevent >=1.0',
        'isodate >=0.4.4',
        'bunch >=1.0.0',
        'gevent ==0.13.6',
        ],

    entry_points={
        'console_scripts': [
            's3tests-generate-objects = s3tests.generate_objects:main',
            's3tests-test-readwrite = s3tests.rand_readwrite:main',
            ],
        },

    )
382 siege.conf Normal file
@@ -0,0 +1,382 @@
# Updated by Siege 2.69, May-24-2010
# Copyright 2000-2007 by Jeffrey Fulmer, et al.
#
# Siege configuration file -- edit as necessary
# For more information about configuring and running
# this program, visit: http://www.joedog.org/

#
# Variable declarations. You can set variables here
# for use in the directives below. Example:
# PROXY = proxy.joedog.org
# Reference variables inside ${} or $(), example:
# proxy-host = ${PROXY}
# You can also reference ENVIRONMENT variables without
# actually declaring them, example:
# logfile = $(HOME)/var/siege.log

#
# Signify verbose mode, true turns on verbose output
# ex: verbose = true|false
#
verbose = true

#
# CSV Verbose format: with this option, you can choose
# to format verbose output in traditional siege format
# or comma separated format. The latter will allow you
# to redirect output to a file for import into a spread
# sheet, i.e., siege > file.csv
# ex: csv = true|false (default false)
#
csv = true

#
# Full URL verbose format: By default siege displays
# the URL path and not the full URL. With this option,
# you can instruct siege to show the complete URL.
# ex: fullurl = true|false (default false)
#
# fullurl = true

#
# Display id: in verbose mode, display the siege user
# id associated with the HTTP transaction information
# ex: display-id = true|false
#
# display-id =

#
# Show logfile location. By default, siege displays the
# logfile location at the end of every run when logging
# You can turn this message off with this directive.
# ex: show-logfile = false
#
show-logfile = true

#
# Default logging status, true turns logging on.
# ex: logging = true|false
#
logging = true

#
# Logfile, the default siege logfile is $PREFIX/var/siege.log
# This directive allows you to choose an alternative log file.
# Environment variables may be used as shown in the examples:
# ex: logfile = /home/jeff/var/log/siege.log
#     logfile = ${HOME}/var/log/siege.log
#     logfile = ${LOGFILE}
#
logfile = ./siege.log

#
# HTTP protocol. Options HTTP/1.1 and HTTP/1.0.
# Some webservers have broken implementation of the
# 1.1 protocol which skews throughput evaluations.
# If you notice some siege clients hanging for
# extended periods of time, change this to HTTP/1.0
# ex: protocol = HTTP/1.1
#     protocol = HTTP/1.0
#
protocol = HTTP/1.1

#
# Chunked encoding is required by HTTP/1.1 protocol
# but siege allows you to turn it off as desired.
#
# ex: chunked = true
#
chunked = true

#
# Cache revalidation.
# Siege supports cache revalidation for both ETag and
# Last-modified headers. If a copy is still fresh, the
# server responds with 304.
# HTTP/1.1 200 0.00 secs: 2326 bytes ==> /apache_pb.gif
# HTTP/1.1 304 0.00 secs: 0 bytes ==> /apache_pb.gif
# HTTP/1.1 304 0.00 secs: 0 bytes ==> /apache_pb.gif
#
# ex: cache = true
#
cache = false

#
# Connection directive. Options "close" and "keep-alive"
# Starting with release 2.57b3, siege implements persistent
# connections in accordance to RFC 2068 using both chunked
# encoding and content-length directives to determine the
# page size. To run siege with persistent connections set
# the connection directive to keep-alive. (Default close)
# CAUTION: use the keep-alive directive with care.
# DOUBLE CAUTION: this directive does not work well on HPUX
# TRIPLE CAUTION: don't use keep-alives until further notice
# ex: connection = close
#     connection = keep-alive
#
connection = close

#
# Default number of simulated concurrent users
# ex: concurrent = 25
#
concurrent = 15

#
# Default duration of the siege. The right hand argument has
# a modifier which specifies the time units, H=hours, M=minutes,
# and S=seconds. If a modifier is not specified, then minutes
# are assumed.
# ex: time = 50M
#
# time =

#
# Repetitions. The length of siege may be specified in client
# reps rather than a time duration. Instead of specifying a time
# span, you can tell each siege instance to hit the server X number
# of times. So if you chose 'reps = 20' and you've selected 10
# concurrent users, then siege will hit the server 200 times.
# ex: reps = 20
#
# reps =

#
# Default URLs file, set at configuration time, the default
# file is PREFIX/etc/urls.txt. So if you configured siege
# with --prefix=/usr/local then the urls.txt file is installed
# in /usr/local/etc/urls.txt. Use the "file = " directive to
# configure an alternative URLs file. You may use environment
# variables as shown in the examples below:
# ex: file = /export/home/jdfulmer/MYURLS.txt
#     file = $HOME/etc/urls.txt
#     file = $URLSFILE
#
file = ./urls.txt

#
# Default URL, this is a single URL that you want to test. This
# is usually set at the command line with the -u option. When
# used, this option overrides the urls.txt (-f FILE/--file=FILE)
# option. You will HAVE to comment this out in order to use
# the urls.txt file option.
# ex: url = https://shemp.whoohoo.com/docs/index.jsp
#
# url =

#
# Default delay value, see the siege(1) man page.
# This value is used for load testing, it is not used
# for benchmarking.
# ex: delay = 3
#
delay = 1

#
# Connection timeout value. Set the value in seconds for
# socket connection timeouts. The default value is 30 seconds.
# ex: timeout = 30
#
# timeout =

#
# Session expiration: This directive allows you to delete all
# cookies after you pass through the URLs. This means siege will
# grab a new session with each run through its URLs. The default
# value is false.
# ex: expire-session = true
#
# expire-session =

#
# Failures: This is the number of total connection failures allowed
# before siege aborts. Connection failures (timeouts, socket failures,
# etc.) are combined with 400 and 500 level errors in the final stats,
# but those errors do not count against the abort total. If you set
# this total to 10, then siege will abort after ten socket timeouts,
# but it will NOT abort after ten 404s. This is designed to prevent
# a run-away mess on an unattended siege. The default value is 1024
# ex: failures = 50
#
# failures =

#
# Internet simulation. If true, siege clients will hit
# the URLs in the urls.txt file randomly, thereby simulating
# internet usage. If false, siege will run through the
# urls.txt file in order from first to last and back again.
# ex: internet = true
#
internet = false

#
# Default benchmarking value, If true, there is NO delay
# between server requests, siege runs as fast as the web
# server and the network will let it. Set this to false
# for load testing.
# ex: benchmark = true
#
benchmark = false

#
# Set the siege User-Agent to identify yourself at the
# host, the default is: JoeDog/1.00 [en] (X11; I; Siege #.##)
# But that reeks of corporate techno speak. Feel free
# to make it more interesting :-) Since Limey is recovering
# from minor surgery as I write this, I'll dedicate the
# example to him...
# ex: user-agent = Limey The Bulldog
#
# user-agent =

#
# Accept-encoding. This option allows you to specify
# acceptable encodings returned by the server. Use this
# directive to turn on compression. By default we accept
# gzip compression.
#
# ex: accept-encoding = *
#     accept-encoding = gzip
#     accept-encoding = compress;q=0.5;gzip;q=1
accept-encoding = gzip

#
# TURN OFF THAT ANNOYING SPINNER!
# Siege spawns a thread and runs a spinner to entertain you
# as it collects and computes its stats. If you don't like
# this feature, you may turn it off here.
# ex: spinner = false
#
spinner = true

#
# WWW-Authenticate login. When siege hits a webpage
# that requires basic authentication, it will search its
# logins for authentication which matches the specific realm
# requested by the server. If it finds a match, it will send
# that login information. If it fails to match the realm, it
# will send the default login information. (Default is "all").
# You may configure siege with several logins as long as no
# two realms match. The format for logins is:
# username:password[:realm] where "realm" is optional.
# If you do not supply a realm, then it will default to "all"
# ex: login = jdfulmer:topsecret:Admin
#     login = jeff:supersecret
#
# login =

#
# WWW-Authenticate username and password. When siege
# hits a webpage that requires authentication, it will
# send this user name and password to the server. Note
# this is NOT form based authentication. You will have
# to construct URLs for that.
# ex: username = jdfulmer
#     password = whoohoo
#
# username =
# password =

#
# ssl-cert
# This optional feature allows you to specify a path to a client
# certificate. It is not necessary to specify a certificate in
# order to use https. If you don't know why you would want one,
# then you probably don't need this feature. Use openssl to
# generate a certificate and key with the following command:
# $ openssl req -nodes -new -days 365 -newkey rsa:1024 \
#   -keyout key.pem -out cert.pem
# Specify a path to cert.pem as follows:
# ex: ssl-cert = /home/jeff/.certs/cert.pem
#
# ssl-cert =

#
# ssl-key
# Use this option to specify the key you generated with the command
# above. ex: ssl-key = /home/jeff/.certs/key.pem
# You may actually skip this option and combine both your cert and
# your key in a single file:
# $ cat key.pem > client.pem
# $ cat cert.pem >> client.pem
# Now set the path for ssl-cert:
# ex: ssl-cert = /home/jeff/.certs/client.pem
# (in this scenario, you comment out ssl-key)
#
# ssl-key =

#
# ssl-timeout
# This option sets a connection timeout for the ssl library
# ex: ssl-timeout = 30
#
# ssl-timeout =

#
# ssl-ciphers
# You can use this feature to select a specific ssl cipher
# for HTTPs. To view the ones available with your library run
# the following command: openssl ciphers
# ex: ssl-ciphers = EXP-RC4-MD5
#
# ssl-ciphers =

#
# Login URL. This is the first URL to be hit by every siege
# client. This feature was designed to allow you to login to
# a server and establish a session. It will only be hit once
# so if you need to hit this URL more than once, make sure it
# also appears in your urls.txt file.
#
# ex: login-url = http://eos.haha.com/login.jsp POST name=jeff&pass=foo
#
# login-url =

#
# Proxy protocol. This option allows you to select a proxy
# server for stress testing. The proxy will request the URL(s)
# specified by -u"my.url.org" OR from the urls.txt file.
#
# ex: proxy-host = proxy.whoohoo.org
#     proxy-port = 8080
#
# proxy-host =
# proxy-port =

#
# Proxy-Authenticate. When siege hits a proxy server which
# requires username and password authentication, it will send this
# username and password to the server. The format is username,
# password and optional realm each separated by a colon. You
# may enter more than one proxy-login as long as each one has
# a different realm. If you do not enter a realm, then siege
# will send that login information to all proxy challenges. If
# you have more than one proxy-login, then siege will attempt
# to match the login to the realm.
# ex: proxy-login: jeff:secret:corporate
#     proxy-login: jeff:whoohoo
#
# proxy-login =

#
# Redirection support. This option allows you to control
# whether a Location: hint will be followed. Most users
# will want to follow redirection information, but sometimes
# it's desired to just get the Location information.
#
# ex: follow-location = false
#
# follow-location =

# Zero-length data. siege can be configured to disregard
# results in which zero bytes are read after the headers.
# Alternatively, such results can be counted in the final
# tally of outcomes.
#
# ex: zero-data-ok = false
#
# zero-data-ok =

#
# end of siegerc
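The file = ./urls.txt directive above expects one URL per line. When the target is an S3-compatible endpoint, one way to produce such a file is with presigned GET URLs; the helper below is only a hedged sketch (it is not part of this repository), and the endpoint, bucket and key names are placeholders:

import boto3

def write_siege_urls(path, bucket, keys, endpoint_url, expires=3600):
    # Write one presigned GET URL per line so siege can fetch the objects
    # without having to sign requests itself.
    s3 = boto3.client('s3', endpoint_url=endpoint_url)
    with open(path, 'w') as f:
        for key in keys:
            url = s3.generate_presigned_url(
                'get_object',
                Params={'Bucket': bucket, 'Key': key},
                ExpiresIn=expires)
            f.write(url + '\n')

# usage sketch:
# write_siege_urls('./urls.txt', 'siege-bucket',
#                  ['obj-%d' % i for i in range(10)],
#                  endpoint_url='http://localhost:8000')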
9 tox.ini
@@ -1,9 +0,0 @@
[tox]
envlist = py

[testenv]
deps = -rrequirements.txt
passenv =
    S3TEST_CONF
    S3_USE_SIGV4
commands = pytest {posargs}