Compare commits


504 commits

Author SHA1 Message Date
Casey Bodley
08df9352f9
Merge pull request #591 from cbodley/wip-68292
s3: test GetObject with PartNumber and SSE-C encryption
2024-10-04 17:21:04 -04:00
Casey Bodley
cba8047c7e
Merge pull request #589 from tobias-urdin/v2-presigned-get
Add v2 signature presigned get_object tests
2024-10-04 16:27:16 -04:00
Casey Bodley
acc8ef43c9 s3: test GetObject with PartNumber and SSE-C encryption
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2024-09-27 15:06:21 -04:00
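A minimal sketch (not the verbatim test) of the combination this commit covers, assuming a completed SSE-C multipart upload; the bucket/key names and the 32-byte key are hypothetical, and boto3 base64-encodes SSECustomerKey and computes its MD5 automatically:

    import boto3

    s3 = boto3.client('s3')
    key_bytes = b'0' * 32  # hypothetical 256-bit customer key, same one used at upload

    # fetch a single part of the encrypted multipart object; the SSE-C key
    # must accompany the part read just like a full-object read
    resp = s3.get_object(Bucket='testbucket', Key='mpu-obj', PartNumber=1,
                         SSECustomerAlgorithm='AES256', SSECustomerKey=key_bytes)
    assert resp['ResponseMetadata']['HTTPStatusCode'] == 206  # partial content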
Casey Bodley
999d39d4db s3: add v2 signature presigned put_object tests
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2024-09-25 11:29:01 -04:00
Tobias Urdin
ac71900ffb Add v2 signature presigned get_object tests
This adds tests for get_object presigned URLs
using signature v2.

Also code formatting.

Signed-off-by: Tobias Urdin <tobias.urdin@binero.com>
2024-09-25 11:22:37 -04:00
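A rough sketch of such a test, assuming an endpoint that still accepts signature v2 (botocore's 's3' signature version); bucket and key are placeholders:

    import boto3
    import requests
    from botocore.client import Config

    # signature_version='s3' selects the legacy v2 signing in botocore
    s3 = boto3.client('s3', config=Config(signature_version='s3'))
    url = s3.generate_presigned_url('get_object',
                                    Params={'Bucket': 'testbucket', 'Key': 'obj'},
                                    ExpiresIn=300)
    assert requests.get(url).status_code == 200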
Casey Bodley
aa82bd16ae
Merge pull request #579 from galsalomon66/tracker_65651 2024-09-03 10:53:00 -04:00
Casey Bodley
e8db6c2c16
Merge pull request #528 from pritha-srivastava/wip-rgw-oidc-tests
rgw: adding tests for add_client_id_to_oidc_provider
2024-08-29 10:50:47 -04:00
galsalomon66
6a775cb445 fix comments
Signed-off-by: galsalomon66 <gal.salomon@gmail.com>
2024-08-28 09:48:09 +00:00
Casey Bodley
0d85ed2dda
Merge pull request #578 from clwluvw/upload-part-copy
UploadPartCopy: add test for source bucket with policy
2024-08-22 14:39:30 -04:00
Gal Salomon
9444c29674 fix the assert per empty results
Signed-off-by: Gal Salomon <gal.salomon@gmail.com>
2024-08-19 16:00:16 +03:00
Pritha Srivastava
bc8c14ac12 rgw: adding tests for add_client_id_to_oidc_provider
and update_thumbprint_for oidc_provider.

Signed-off-by: Pritha Srivastava <prsrivas@redhat.com>
2024-08-19 12:33:15 +05:30
Seena Fallah
ecf7a8a7a9 UploadPartCopy: add test for source bucket with policy
Ref: https://github.com/ceph/ceph/pull/59253

Signed-off-by: Seena Fallah <seenafallah@gmail.com>
2024-08-16 14:30:17 +02:00
Casey Bodley
3458971054
Merge pull request #577 from cbodley/wip-cross-tenant
s3: reenable tenanted bucket policy test
2024-08-14 09:44:16 -04:00
Casey Bodley
2e41494293 s3: reenable tenanted bucket policy test
the before-call hook url-encodes the ':' part of tenanted bucket names
to resolve SignatureDoesNotMatch errors

removed the list-v2 version of the test since it isn't relevant to
bucket policy test coverage

add a new test case that creates the bucket under the tenanted user,
then uses the main client to access it

Signed-off-by: Casey Bodley <cbodley@redhat.com>
2024-08-14 08:19:41 -04:00
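A hedged sketch of what such a before-call hook can look like with botocore's event system (the fixture in the test suite may differ); it percent-encodes the ':' in 'tenant:bucket' names so the signed path matches what is sent on the wire:

    import boto3

    def encode_tenant_colon(params, **kwargs):
        # params['url_path'] holds the serialized request path, e.g. '/tenant:bucket/key'
        params['url_path'] = params['url_path'].replace(':', '%3A')

    s3 = boto3.client('s3')
    # register for all s3 operations; a narrower event name such as
    # 'before-call.s3.ListObjects' also works
    s3.meta.events.register('before-call.s3', encode_tenant_colon)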
Casey Bodley
f61129e432
Merge pull request #572 from clwluvw/s3select-error
s3select: align error codes with the new AWS format
2024-07-30 13:26:23 +01:00
Casey Bodley
218f90063f
Merge pull request #574 from clwluvw/sse-c-policy
BucketPolicy: add test for sse-c in conditions
2024-07-26 14:09:31 +01:00
Casey Bodley
82fedef5a5
Merge pull request #573 from clwluvw/public-policy
BucketPolicy: do not allow NotPrincipal with Allow Effect
2024-07-25 15:33:14 +01:00
Seena Fallah
c9aded48e5 BucketPolicy: decouple encryption tests from invalid algo and unencrypted
Signed-off-by: Seena Fallah <seenafallah@gmail.com>
2024-07-23 20:47:22 +02:00
Seena Fallah
87b496f25f BucketPolicy: add test for sse-c in conditions
Ref. https://github.com/ceph/ceph/pull/58689

Signed-off-by: Seena Fallah <seenafallah@gmail.com>
2024-07-19 23:07:03 +02:00
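A sketch of the kind of policy such a test can install, using AWS's documented s3:x-amz-server-side-encryption-customer-algorithm condition key; the bucket name is a placeholder:

    import json
    import boto3

    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::testbucket/*",
            # deny any PUT that does not supply an SSE-C key
            "Condition": {"StringNotEquals": {
                "s3:x-amz-server-side-encryption-customer-algorithm": "AES256"}}
        }]
    }
    boto3.client('s3').put_bucket_policy(Bucket='testbucket', Policy=json.dumps(policy))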
Seena Fallah
a83396cda7 BlockPublicPolicy: add test when policy has principal
Ref. https://tracker.ceph.com/issues/67048

Signed-off-by: Seena Fallah <seenafallah@gmail.com>
2024-07-19 20:51:10 +02:00
Seena Fallah
93a3b6c704 PolicyStatus: add test for policy with Principal
Ref. https://github.com/ceph/ceph/pull/58686

Signed-off-by: Seena Fallah <seenafallah@gmail.com>
2024-07-19 20:50:26 +02:00
Seena Fallah
474c1404e2 BucketPolicy: do not allow NotPrincipal with Allow Effect
Ref. https://github.com/ceph/ceph/pull/58686

Signed-off-by: Seena Fallah <seenafallah@gmail.com>
2024-07-19 20:48:06 +02:00
Seena Fallah
2e395d78ea s3select: align error codes with the new AWS format
ref. https://github.com/ceph/ceph/pull/56864

Signed-off-by: Seena Fallah <seenafallah@gmail.com>
2024-07-17 17:04:21 +02:00
Casey Bodley
4eda9c0626
Merge pull request #570 from cbodley/wip-66705
test Get/HeadObject with partNumber for single-multipart upload
2024-07-05 15:51:15 +01:00
Casey Bodley
38ab4c5638
Merge pull request #561 from galsalomon66/fix_non_handled_error_resonse
Fix unhandled error response
2024-07-05 15:04:39 +01:00
Casey Bodley
36fb297e48
Merge pull request #564 from linuxbox2/wip-cbodley-multipart-nostreaming
Wip cbodley multipart nostreaming
2024-07-04 22:50:21 +01:00
Matt Benjamin
8277a9fb9a mark two tests that fail on dbstore
also add @pytest.mark.checksum for new checksum
tests

Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
2024-07-03 09:53:08 -04:00
Matt Benjamin
c0f0b679db remove duplicate size assignment [rkhudov review]
Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
2024-07-03 09:53:02 -04:00
Matt Benjamin
95df503ced add test_post_object_upload_checksum
this tests a two-megabyte binary upload with validated
(awscli-computed) SHA256 checksum, and also verifies failure when
a bad checksum is provided

Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
2024-07-03 09:52:55 -04:00
Matt Benjamin
9577cde013 add test_multipart_checksum_3parts
tests a full multipart upload cycle with 3 unique parts, which
verifies composite checksum computation and the logic to propagate
parts_count to CompleteMultipart

Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
2024-07-03 09:52:49 -04:00
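A condensed sketch of such an upload cycle with boto3's checksum parameters; bucket, key, and part contents are placeholders:

    import boto3

    s3 = boto3.client('s3')
    bucket, key = 'testbucket', 'mpu-checksum'
    mpu = s3.create_multipart_upload(Bucket=bucket, Key=key, ChecksumAlgorithm='SHA256')
    parts = []
    for n in (1, 2, 3):
        body = bytes([n]) * (5 * 1024 * 1024)  # three unique 5 MiB parts
        r = s3.upload_part(Bucket=bucket, Key=key, UploadId=mpu['UploadId'],
                           PartNumber=n, Body=body, ChecksumAlgorithm='SHA256')
        parts.append({'PartNumber': n, 'ETag': r['ETag'],
                      'ChecksumSHA256': r['ChecksumSHA256']})
    resp = s3.complete_multipart_upload(Bucket=bucket, Key=key, UploadId=mpu['UploadId'],
                                        MultipartUpload={'Parts': parts})
    # the composite checksum carries the parts count as a '-3' suffix
    assert resp['ChecksumSHA256'].endswith('-3')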
Matt Benjamin
a3dbac7115 test_multipart_upload_sha256: work around failures re-trying complete-multipart
As described in https://tracker.ceph.com/issues/65746, retrying complete-multipart
after having attempted to complete the same upload with a bad checksum argument
fails with an internal error.

The status code is 500, but I'm unsure if it can be retried again, or whether
the upload can be aborted later.

Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
2024-07-03 09:52:39 -04:00
Casey Bodley
73ed9121f4 add "checksum" marker, since new checksum tests reference it
this removes a Pytest warning during execution

Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
2024-07-03 09:52:32 -04:00
Casey Bodley
bebdfd1ba7 test Get/HeadObject with partNumber for single-multipart upload
test_multipart_get_part() tests 'normal' multipart uploads. add a new
test case for a multipart upload with a single part to test the fix
for https://tracker.ceph.com/issues/66705

Signed-off-by: Casey Bodley <cbodley@redhat.com>
2024-06-26 10:50:42 -04:00
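A minimal sketch of the added case with placeholder names: complete a multipart upload that has exactly one part, then address it by part number:

    import boto3

    s3 = boto3.client('s3')
    bucket, key = 'testbucket', 'single-part-mpu'
    mpu = s3.create_multipart_upload(Bucket=bucket, Key=key)
    part = s3.upload_part(Bucket=bucket, Key=key, UploadId=mpu['UploadId'],
                          PartNumber=1, Body=b'x' * (5 * 1024 * 1024))
    s3.complete_multipart_upload(
        Bucket=bucket, Key=key, UploadId=mpu['UploadId'],
        MultipartUpload={'Parts': [{'PartNumber': 1, 'ETag': part['ETag']}]})

    # Get/HeadObject with PartNumber=1 should resolve the upload's only part
    head = s3.head_object(Bucket=bucket, Key=key, PartNumber=1)
    assert head['PartsCount'] == 1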
Casey Bodley
658fc699a8
Merge pull request #569 from cbodley/wip-66655
requirements: unpin pytz version
2024-06-25 14:12:21 +01:00
Casey Bodley
27f24ee4d7 requirements: unpin pytz version
Fixes: https://tracker.ceph.com/issues/66655

Signed-off-by: Casey Bodley <cbodley@redhat.com>
2024-06-24 13:27:26 -04:00
Casey Bodley
00b9a2a291
Merge pull request #565 from bobham-bloomberg/FixListBucketsCtimeTest
Fix wrong assertion of the test: `test_buckets_list_ctime`
2024-05-14 16:48:37 +01:00
Sumedh A. Kulkarni
e9c5cc29e9 Fix wrong assertion of the test: test_buckets_list_ctime
TestName:
s3tests_boto3.functional.test_s3:test_buckets_list_ctime

Problem:
The test creates 5 buckets for a user, but its assertion check
fails if any bucket owned by the user has a CreationTime more
than a day before the current time.
For this reason the test fails if the user has pre-existing
buckets older than a day.

Solution:
Assert only on the CreationTime of buckets that were created with
test execution.

Signed-off-by: Sumedh A. Kulkarni <sumedh.a.kulkarni@seagate.com>
Co-developed-by: Bob Ham <bham12@bloomberg.net>
Signed-off-by: Bob Ham <bham12@bloomberg.net>
2024-05-14 07:12:17 -04:00
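A sketch of the corrected assertion with hypothetical bucket names: only buckets created by the test run are held to the one-day window:

    from datetime import datetime, timedelta, timezone
    import boto3

    s3 = boto3.client('s3')
    created_by_test = {'s3test-bucket-1', 's3test-bucket-2'}  # placeholder names
    day_ago = datetime.now(timezone.utc) - timedelta(days=1)
    for bucket in s3.list_buckets()['Buckets']:
        if bucket['Name'] in created_by_test:
            assert bucket['CreationDate'] > day_ago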
Gal Salomon
77f1334571 add handling for EventStreamError exception
Signed-off-by: Gal Salomon <gal.salomon@gmail.com>
2024-04-17 18:20:37 +03:00
Gal Salomon
c4c5a247eb a change in the RGW error-response requires changes in s3-tests
Signed-off-by: Gal Salomon <gal.salomon@gmail.com>
2024-04-16 10:10:32 +03:00
Casey Bodley
54c1488a43
Merge pull request #537 from cbodley/wip-iam-user-apis
iam: add tests for account-based IAM apis
2024-04-12 18:12:41 +01:00
Casey Bodley
88fd867007
Merge pull request #557 from yuvalif/wip-yuval-mpu-etag
test etag on mpu complete replies

Reviewed-by: Matt Benjamin <mbenjamin@redhat.com>
2024-03-27 13:58:41 +00:00
Yuval Lifshitz
a28d46fa2a test etag on mpu complete replies
this is to cover the fix of: https://tracker.ceph.com/issues/58879

Signed-off-by: Yuval Lifshitz <ylifshit@ibm.com>
2024-03-25 16:45:06 +00:00
Matt Benjamin
46f60d3029 add tests for ObjectSizeGreater(Less)Than
Add tests for the new ObjectSizeGreaterThan and
ObjectSizeLessThan lifecycle operators.

Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
2024-03-15 10:30:33 -04:00
Matt Benjamin
d48530a294 add test test_lifecycle_expiration_newer_noncurrent()
This verifies the new NewerNoncurrentVersions lifecycle filter
operator.

Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
2024-03-15 10:30:33 -04:00
Casey Bodley
dfabbf5a8d sns: add test_sns.py for simple topic testing
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2024-03-12 20:34:24 -04:00
Casey Bodley
7bd4b0ee14 iam: move iam_root, iam_alt_root fixtures to iam.py
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2024-03-12 15:47:51 -04:00
Casey Bodley
96d658444a s3: remove test_bucket_acl_no_grants()
aws doesn't consult acls for same-account access. rgw doesn't for
account users either

Fixes: https://github.com/ceph/s3-tests/issues/184

Signed-off-by: Casey Bodley <cbodley@redhat.com>
2024-03-10 10:45:22 -04:00
Casey Bodley
a3a16eb66a iam: test cross-account policy with assumed role
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2024-03-10 10:45:22 -04:00
Casey Bodley
7ebc530e04 iam: add account tests for GroupPolicy apis
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2024-03-10 10:45:22 -04:00
Casey Bodley
4ca7967ae7 iam: add account tests for Group apis
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2024-03-10 10:45:22 -04:00
Casey Bodley
d5791d8da6 iam: add account test for OpenIDConnectProvider apis
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2024-03-10 10:45:22 -04:00
Casey Bodley
ba292fbf59 iam: test cross-account permissions
test the [iam alt root] user's access to buckets owned by [iam root]
using various policy principals and acl grantees

Signed-off-by: Casey Bodley <cbodley@redhat.com>
2024-03-10 10:45:22 -04:00
Casey Bodley
ed4a8e2244 config: add [iam alt root] for an alt account's root user
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2024-03-10 10:45:22 -04:00
Casey Bodley
46217fcf81 iam: test managed role policy
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2024-03-10 10:45:22 -04:00
Casey Bodley
cefea0fd26 iam: add account test for RolePolicy apis
adds test cases for the following iam actions:
* PutRolePolicy
* GetRolePolicy
* DeleteRolePolicy
* ListRolePolicies

verified to pass against aws when an account root user's credentials are
provided in the [iam] section of s3tests.conf

Signed-off-by: Casey Bodley <cbodley@redhat.com>
2024-03-10 10:45:22 -04:00
Casey Bodley
d4ada317e1 iam: add account tests for Role apis
adds test cases for the following iam actions:
* CreateRole
* GetRole
* ListRoles
* DeleteRole
* UpdateRole

verified to pass against aws when an account root user's credentials are
provided in the [iam] section of s3tests.conf

Signed-off-by: Casey Bodley <cbodley@redhat.com>
2024-03-10 10:45:22 -04:00
Casey Bodley
c6e40b4ffa iam: test managed user policy
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2024-03-10 10:45:22 -04:00
Casey Bodley
364f29d087 iam: add account tests for UserPolicy apis
adds test cases for the following iam actions:
* PutUserPolicy
* GetUserPolicy
* DeleteUserPolicy
* ListUserPolicies

verified to pass against aws when an account root user's credentials are
provided in the [iam] section of s3tests.conf

Signed-off-by: Casey Bodley <cbodley@redhat.com>
2024-03-10 10:45:22 -04:00
Casey Bodley
0377466704 iam: test bucket policy principal for iam user with path
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2024-03-10 10:45:22 -04:00
Casey Bodley
db76dfe791 iam: add tests for AccessKey apis
adds test cases for the following iam actions:
* CreateAccessKey
* UpdateAccessKey
* DeleteAccessKey
* ListAccessKeys

verified to pass against aws when an account root user's credentials are
provided in the [iam] section of s3tests.conf

Signed-off-by: Casey Bodley <cbodley@redhat.com>
2024-03-10 10:45:22 -04:00
Casey Bodley
d8becad96a iam: add tests for User apis
adds test cases for the following iam actions:
* CreateUser
* GetUser
* UpdateUser
* DeleteUser
* ListUsers

verified to pass against aws when an account root user's credentials are
provided in the [iam] section of s3tests.conf

Signed-off-by: Casey Bodley <cbodley@redhat.com>
2024-03-10 10:45:22 -04:00
Casey Bodley
7cd4613883 config: add [iam root] for an account root user
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2024-03-10 10:45:22 -04:00
Casey Bodley
5f3353e6b5 config: parse iam config during setup()
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2024-03-10 10:45:22 -04:00
Casey Bodley
a35b3c609a iam: rename test_of_iam mark to iam_tenant
differentiate the test cases that expect a tenant-wide IAM api from new
ones that expect an account-wide api

Signed-off-by: Casey Bodley <cbodley@redhat.com>
2024-03-10 10:45:22 -04:00
Casey Bodley
83af25722c config: add fixtures for iam name/path prefixes
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2024-03-10 10:45:22 -04:00
Casey Bodley
8e01f2315c fixtures: split setup() and configure()
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2024-03-10 10:45:22 -04:00
Casey Bodley
ade849b90f
Merge pull request #555 from cbodley/wip-64822
test_headers: use fixture to hook request headers
2024-03-10 14:43:36 +00:00
Casey Bodley
aecd282a11 test_headers: use fixture to hook request headers
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2024-03-10 10:39:26 -04:00
Casey Bodley
3ef85406f9
Merge pull request #547 from cbodley/wip-63724
s3: object lock tests for deletion of multipart objects
2024-03-08 19:14:19 +00:00
Casey Bodley
12abc78b9b
Merge pull request #551 from clwluvw/bucket-public-block
BucketPublicAccessBlock: compatibility issues
2024-03-07 15:55:37 +00:00
Ali Maredia
b46d16467c
Merge pull request #552 from alimaredia/lc-utcnow-fix
replace datetime.now with datetime.utcnow()
2024-02-21 12:29:17 -05:00
Ali Maredia
4744808eda replace datetime.now with datetime.utcnow()
when the local timezone is not UTC and is a day behind, the
lifecycle_header tests fail with 2 days not equal to 1,
so replace datetime.now() with datetime.utcnow()

Signed-off-by: Ali Maredia <amaredia@redhat.com>
2024-02-20 12:17:32 -05:00
Casey Bodley
a87f0b63e7 s3: object lock tests for deletion of multipart objects
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2024-02-20 10:59:30 -05:00
Seena Fallah
3af42312bf PublicAccessBlock: test access deny via bucket policy
Make sure 403 is returned when access to GetBucketPublicAccessBlock is denied via the s3:GetBucketPublicAccessBlock action

Refs: https://github.com/ceph/ceph/pull/55652
Signed-off-by: Seena Fallah <seenafallah@gmail.com>
2024-02-20 15:52:36 +01:00
Seena Fallah
3056e6d039 PublicAccessBlock: test 404 on no block configuration
Make sure NoSuchPublicAccessBlockConfiguration is returned when no public block is configured on the bucket.

Refs: https://github.com/ceph/ceph/pull/55652
Signed-off-by: Seena Fallah <seenafallah@gmail.com>
2024-02-20 15:52:31 +01:00
Casey Bodley
997f78d58a
Merge pull request #546 from jzhu116-bloomberg/wip-64340
add test case for object copy in versioning suspended bucket
2024-02-12 20:31:18 +00:00
Casey Bodley
1d5764d569
Merge pull request #544 from tobias-urdin/improve-presigned
Add more testing for presigned URLs
2024-02-12 19:51:43 +00:00
Tobias Urdin
055451f666 Add more testing for presigned URLs
This improves the testing for presigned URLs for
both get_object and put_object when using
generate_presigned_url().

It covers the case where you pass for example
a x-amz-acl (ACL in params for generated_presigned_url)
header that should be signed.

Tests the regression in [1].

[1] https://tracker.ceph.com/issues/64308

Signed-off-by: Tobias Urdin <tobias.urdin@binero.se>
2024-02-12 19:47:56 +00:00
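A sketch of the signed-header case described above, with placeholder names: passing ACL in Params folds x-amz-acl into the signature, so the eventual PUT must send the matching header:

    import boto3
    import requests

    s3 = boto3.client('s3')
    url = s3.generate_presigned_url(
        'put_object',
        Params={'Bucket': 'testbucket', 'Key': 'obj', 'ACL': 'private'},
        ExpiresIn=300)

    # the x-amz-acl header must match the signed value, otherwise the
    # request fails with SignatureDoesNotMatch
    resp = requests.put(url, data=b'payload', headers={'x-amz-acl': 'private'})
    assert resp.status_code == 200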
Jane Zhu
1866f04d81 tag test_versioning_obj_suspended_copy with fails_on_dbstore
Signed-off-by: Juan Zhu <jzhu4@dev-10-34-20-139.pw1.bcc.bloomberg.com>
2024-02-12 12:44:49 -05:00
Jane Zhu
a2acdbfdda add test case for object copy in versioning suspended bucket
Signed-off-by: Juan Zhu <jzhu4@dev-10-34-20-139.pw1.bcc.bloomberg.com>
2024-02-08 23:01:12 -05:00
Ali Maredia
da91ad8bbf
Merge pull request #533 from cbodley/wip-45736
304 Not Modified tests still expect etag header
2024-01-11 13:59:03 -05:00
Ali Maredia
6861c3d810
Merge pull request #532 from cbodley/wip-63537
object lock: test very large RetainUntilDate
2023-11-28 15:53:16 -05:00
Ali Maredia
519f8a4b0c
Merge pull request #491 from cbodley/wip-get-part
test Head/GetObject on completed multipart parts
2023-11-28 10:48:47 -05:00
Casey Bodley
d552124680 test Head/GetObject on completed multipart parts
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2023-11-21 11:37:38 -05:00
Casey Bodley
19c17fa49a 304 Not Modified tests still expect etag header
test case for https://tracker.ceph.com/issues/45736

Signed-off-by: Casey Bodley <cbodley@redhat.com>
2023-11-20 22:47:30 -05:00
Casey Bodley
40182ce26f object lock: test very large RetainUntilDate
https://tracker.ceph.com/issues/63537 reported that large dates (with
year after 2107) got truncated when written. test with a later date, and
check that get_object_retention() gives back the date we put

Signed-off-by: Casey Bodley <cbodley@redhat.com>
2023-11-15 16:31:54 -05:00
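A sketch of that round trip, assuming a bucket created with object lock enabled (the names and the year-2140 date are placeholders):

    from datetime import datetime, timezone
    import boto3

    s3 = boto3.client('s3')
    retain_until = datetime(2140, 1, 1, tzinfo=timezone.utc)  # well past year 2107
    s3.put_object_retention(
        Bucket='locked-bucket', Key='obj',
        Retention={'Mode': 'GOVERNANCE', 'RetainUntilDate': retain_until})

    # the date read back should not have been truncated on write
    got = s3.get_object_retention(Bucket='locked-bucket', Key='obj')
    assert got['Retention']['RetainUntilDate'] == retain_until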
Casey Bodley
e29d6246fc
Merge pull request #527 from cbodley/wip-63004
add test_post_object_wrong_bucket() for CVE-2023-43040
2023-10-09 19:56:53 +01:00
Casey Bodley
95677d85bc add test_post_object_wrong_bucket()
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2023-09-28 10:52:00 -04:00
Ali Maredia
9c50cd1539
Merge pull request #524 from cbodley/wip-62013
lc: test transition of plain null version
2023-09-22 10:53:42 -04:00
Ali Maredia
e9e3374827
Merge pull request #525 from cbodley/wip-62919
boto2: fix byte vs. string comparison in verify_object
2023-09-22 10:52:28 -04:00
Casey Bodley
e54f0a4508 boto2: copy configured_storage_classes() fix from boto3
some boto2 storage class tests are failing because the list returned by
configured_storage_classes() included an empty string

the boto3 version had an extra line that removes empty values; copy that
for boto2

Signed-off-by: Casey Bodley <cbodley@redhat.com>
2023-09-21 17:44:59 -04:00
Casey Bodley
b1efd0477a boto2: fix byte vs. string comparison in verify_object
the storage class tests were failing on comparisons between the input
data and output data:

AssertionError: assert 'oFbdZvtRj' == b'oFbdZvtRj'

convert the byte representation back to string for comparison

Signed-off-by: Casey Bodley <cbodley@redhat.com>
2023-09-21 10:00:39 -04:00
Casey Bodley
9961af4bd2
Merge pull request #523 from tobias-urdin/http-options-presigned
Add test to verify HTTP OPTIONS on presigned URL

Reviewed-by: Casey Bodley <cbodley@redhat.com>
2023-08-21 15:52:49 -04:00
Tobias Urdin
c0a1880d4c Add test to verify HTTP OPTIONS on presigned URL
Related: https://tracker.ceph.com/issues/62033

Signed-off-by: Tobias Urdin <tobias.urdin@binero.com>
2023-08-07 20:27:48 +00:00
Casey Bodley
0e1bf6f652 lc: test transition of plain null version
for validating the fix of https://tracker.ceph.com/issues/62013

Signed-off-by: Casey Bodley <cbodley@redhat.com>
2023-08-04 10:02:25 -04:00
Casey Bodley
73b340a0e2
Merge pull request #522 from nadavMiz/boto3-tests-readme-clerify
Remove filtering from boto3 test example for simplicity
2023-07-06 11:26:28 -04:00
nadav mizrahi
b75b89c94b remove filtering from boto3 test example for simplicity 2023-07-06 11:29:40 +03:00
Soumya Koduri
c252440614
Merge pull request #517 from cbodley/wip-rm-rgw-stat-bucket-headers
remove reliance on x-rgw-object-count and x-rgw-bytes-used headers
2023-07-03 10:23:23 +05:30
Soumya Koduri
f624165ec9
Merge pull request #521 from soumyakoduri/wip-skoduri-setattr
Add testcase to verify obj mtime post setattrs
2023-06-30 17:39:27 +05:30
Soumya Koduri
10f3f7620d Add testcase to verify obj mtime post setattrs
Object mtime should not change for any attr changes unless
it's a copy operation. Verify the same using the PutObjectACL op.

Signed-off-by: Soumya Koduri <skoduri@redhat.com>
2023-06-28 13:12:54 +05:30
Casey Bodley
188b392131 remove reliance on x-rgw-object-count and x-rgw-bytes-used headers
stop using head_bucket() to fetch these response headers, and use
list_objects_v2() instead to count objects and sizes

Fixes: #315

Signed-off-by: Casey Bodley <cbodley@redhat.com>
2023-06-22 08:42:29 -04:00
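A sketch of the replacement approach: derive the object count and total size from a list_objects_v2 pagination instead of the RGW-specific head_bucket headers (bucket name is a placeholder):

    import boto3

    s3 = boto3.client('s3')
    count = total_bytes = 0
    for page in s3.get_paginator('list_objects_v2').paginate(Bucket='testbucket'):
        for obj in page.get('Contents', []):
            count += 1
            total_bytes += obj['Size']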
Ali Maredia
28009bf7d3
Merge pull request #513 from galsalomon66/using_get_bucket_name
Using get bucket name
2023-06-21 15:32:59 -04:00
Casey Bodley
4476773180
Merge pull request #266 from theanalyst/wip-38700-range
add multipart ranges to encryption tests
2023-06-14 16:11:52 -04:00
Ali Maredia
928eb7a90f
Merge pull request #514 from cbodley/wip-tox-readme
readme: change fails_on_aws back to fails_on_rgw
2023-06-12 14:08:01 -04:00
Casey Bodley
b904ef08bc readme: change fails_on_aws back to fails_on_rgw
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2023-06-12 13:33:44 -04:00
Gal Salomon
fa0ea9afe0
Merge branch 'master' into using_get_bucket_name 2023-06-08 10:19:37 +03:00
galsalomon66
2998ea91eb instead of a hard-coded bucket name, use get_bucket_name; merging PR #488 from alphaprinz
Signed-off-by: galsalomon66 <gal.salomon@gmail.com>
2023-06-08 10:07:54 +03:00
Abhishek Lekshmanan
00cdcaf056 boto3/test_s3: add ranges around part boundary for encryption tests
Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
2023-05-26 16:54:29 -04:00
Abhishek Lekshmanan
741f2cbc9e boto3/test_s3: range encoding helper function takes a size argument
This is to avoid a get_object call for every range check as the object size will
not change during this duration and we'd most likely already know the object
sizes beforehand

Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
2023-05-26 16:48:51 -04:00
Casey Bodley
b05a394738 sse: add sse-c test with unaligned multipart parts
Fixes: http://tracker.ceph.com/issues/38700

Signed-off-by: Casey Bodley <cbodley@redhat.com>
2023-05-26 16:40:06 -04:00
Casey Bodley
13e0d736a8
Merge pull request #463 from cfsnyder/wip-56646
Add test_versioning_concurrent_multi_object_delete
2023-05-17 14:33:11 -04:00
Cory Snyder
e18ea7fac4 Add test_versioning_concurrent_multi_object_delete
Tests that concurrent multi-object delete requests which specify
the same versioned object instances return successful object
responses within response body.

relates to: https://tracker.ceph.com/issues/56646

Signed-off-by: Cory Snyder <csnyder@iland.com>
2023-05-17 14:28:06 -04:00
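A rough sketch of the concurrency scenario with placeholder keys and version ids: every thread issues the same versioned multi-delete, and each response body should report the objects as Deleted:

    from concurrent.futures import ThreadPoolExecutor
    import boto3

    s3 = boto3.client('s3')
    objects = [{'Key': 'key1', 'VersionId': 'ver1'}]  # placeholder version ids

    def multi_delete():
        return s3.delete_objects(Bucket='versioned-bucket',
                                 Delete={'Objects': objects})

    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(multi_delete) for _ in range(4)]
    for f in futures:
        # every concurrent request expects per-object success entries
        assert len(f.result().get('Deleted', [])) == len(objects)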
Casey Bodley
7e35765dd4
Merge pull request #506 from galsalomon66/json_s3tests
Json s3tests
2023-05-15 10:36:15 -04:00
Ali Maredia
2535dd695d
Merge pull request #505 from cbodley/wip-59218
test_sse_s3_default_multipart_upload verifies encryption header
2023-04-10 22:41:11 -04:00
galsalomon66
008f5025f7 a progress message is sent back upon processing the object; the change makes sure it stays within the max result
Signed-off-by: galsalomon66 <gal.salomon@gmail.com>
2023-04-10 12:26:59 +03:00
galsalomon66
bc2a3b0b70 modify the run_s3select routine to handle the different statuses (progress, stats, end)
Signed-off-by: galsalomon66 <gal.salomon@gmail.com>
2023-04-09 15:02:24 +03:00
Casey Bodley
c445361c2e
Merge pull request #501 from mkogan1/wip-chunked-trans-enc
Test object PUT with chunked transfer encoding
2023-04-07 09:19:38 -04:00
galsalomon66
89bbe654ca upon removing the payload tag, the response index should also be changed
Signed-off-by: galsalomon66 <gal.salomon@gmail.com>
2023-04-05 01:30:00 +03:00
Ali Maredia
b045323900
Merge pull request #503 from cbodley/wip-multi-delete
boto3: multi-object-delete tests use client.delete_objects()
2023-03-31 14:26:25 -04:00
Casey Bodley
febbcc12c2 test_sse_s3_default_multipart_upload verifies encryption header
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2023-03-30 14:27:27 -04:00
Casey Bodley
818443e916
Merge pull request #502 from cbodley/wip-58960
s3website: collections.Container removed from python3.10
2023-03-21 17:09:04 -04:00
Ali Maredia
992e193d81
Merge pull request #495 from cbodley/wip-get-object-torrent
add test_get_object_torrent
2023-03-14 12:40:35 -04:00
Casey Bodley
787dc6bd43 boto3: multi-object-delete tests use client.delete_objects()
when the tests were converted from boto2, they were rewritten as loops
over client.delete_object(). switch back to multi-delete

Signed-off-by: Casey Bodley <cbodley@redhat.com>
2023-03-13 13:33:00 -04:00
Casey Bodley
97c0338adf s3website: collections.Container removed from python3.10
Fixes: https://tracker.ceph.com/issues/58960

Signed-off-by: Casey Bodley <cbodley@redhat.com>
2023-03-12 15:33:42 -04:00
Casey Bodley
bb27e04c45 add test_get_object_torrent
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2023-03-12 14:58:06 -04:00
Casey Bodley
6d2ed19c18
Merge pull request #500 from soumyakoduri/wip-skoduri-lifecycle
lifecycle: Fix test_lifecycle_expiration_header_* testcases
2023-03-08 08:48:51 -05:00
Mark Kogan
13a9bfc00a test object PUT with chunked transfer encoding
Before the RGW fix, the server was responding with 411 instead of 200

RGW fix PR: https://github.com/ceph/ceph/pull/50235

Signed-off-by: Mark Kogan <mkogan@redhat.com>
2023-03-07 14:08:51 +00:00
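A sketch of producing such a chunked PUT from Python, assuming a presigned URL for a placeholder bucket/key; requests switches to Transfer-Encoding: chunked when the body is a generator:

    import boto3
    import requests

    url = boto3.client('s3').generate_presigned_url(
        'put_object', Params={'Bucket': 'testbucket', 'Key': 'chunked-obj'},
        ExpiresIn=300)

    def chunks():
        for _ in range(4):
            yield b'x' * 8192

    # no Content-Length header is sent; the server must accept chunked encoding
    resp = requests.put(url, data=chunks())
    assert resp.status_code == 200  # was 411 before the RGW fix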
Soumya Koduri
29b0e27e49 lifecycle: Fix test_lifecycle_expiration_header_* testcases
A few checks were incorrectly mapped when switched to 'assert'. This
commit fixes them.

Signed-off-by: Soumya Koduri <skoduri@redhat.com>
2023-03-07 15:17:47 +05:30
Casey Bodley
d158edb201
Merge pull request #499 from soumyakoduri/wip-bz58916
dbstore: add back missing 'fails_on_dbstore' tag
2023-03-06 12:59:35 -05:00
Casey Bodley
5b9652caa4
Merge pull request #490 from cbodley/wip-489
boto3: list_versions() omits empty KeyMarker/VersionIdMarker
2023-03-06 09:19:56 -05:00
Soumya Koduri
d976f47d74 dbstore: add back missing 'fails_on_dbstore' tag
Mark testcase "test_lifecycle_expiration_header_and_tags_head" as
fails_on_dbstore

Signed-off-by: Soumya Koduri <skoduri@redhat.com>
2023-03-06 19:48:01 +05:30
Casey Bodley
359bde7e87
Merge pull request #498 from m-ildefons/wip/tox-ini-fixes
QoL: Fix tox.ini syntax and other minor things
2023-03-05 10:06:56 -05:00
Moritz Röhrich
3a0f1f0ead
QoL: Fix tox.ini syntax and other minor things
- Fix tox.ini syntax

Modern tox versions require the expected environment variables to be
listed one by one on separate lines in tox.ini

- Add `venv` to list of ignored names for git

This is a common name for a local Python virtual environment. Less
typing than `virtualenv`

- Add `tox` to requirements.txt

Installing `tox` via `pip` has the advantage of including it in the
virtual environment, thus avoiding trouble on operating systems shipping
by default with python3.6 or older. It's also nice that `pip install -r
requirements.txt` is now sufficient to set up the testing environment,
after initializing the virtual environment with a moder-enough python
version.

Signed-off-by: Moritz Röhrich <moritz.rohrich@suse.com>
2023-02-28 12:19:54 +01:00
Soumya Koduri
42aff3e8fd
Merge pull request #496 from cbodley/wip-58762
iam: add back missing fails_on_dbstore tags
2023-02-27 11:10:40 +05:30
Casey Bodley
5219b86db9 iam: add back missing fails_on_dbstore tags
Fixes: https://tracker.ceph.com/issues/58762

Signed-off-by: Casey Bodley <cbodley@redhat.com>
2023-02-24 11:43:54 -05:00
Casey Bodley
43b957792b
Merge pull request #484 from yuvalif/fix-sts-error-handling
better error handling in the STS tests
2023-02-20 11:48:41 -05:00
Casey Bodley
2087c1ba26
Merge pull request #487 from robbat2/rgw_post_content_length
test_post_object_upload_size_below_minimum_rgw_max_chunk_size: new testcase for size validation bug

Reviewed-by: Casey Bodley <cbodley@redhat.com>
2023-02-20 09:28:53 -05:00
Robin H. Johnson
5914eb2005 test_post_object_upload_size_rgw_chunk_size_bug: new testcase
`ERR_TOO_SMALL` is wrongly returned if all of the following are true,
- the get_data returns multiple items (chunks)
- the length of the last item is smaller than the POST Policy's min
  value for content-length-range.

The check should be `(ofs < min_len)` instead of `(len < min_len)`

This is further confirmed by the next line of `s->obj_size = ofs`

Move the `int len` scope inside the loop to try and prevent the bug in
the future.

The code was refactored in 2016, but the bug was introduced in Oct 2012, when
this functionality was first added to RGW in commit 7bb3504d3f0974e9863f536e9af0ce8889d6888f.

Reference: 933a42f9af/src/rgw/rgw_op.cc (L4474-L4513)
Reference: 7bb3504d3f
Signed-off-by: Robin H. Johnson <rjohnson@digitalocean.com>
2023-02-17 10:09:59 -05:00
Casey Bodley
a536dd0e88 boto3: list_versions() omits empty KeyMarker/VersionIdMarker
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2023-02-16 21:36:51 -05:00
Yuval Lifshitz
3437cda73d better error handling in the STS tests
also, give more accurate instructions on how to run the tests

Signed-off-by: Yuval Lifshitz <ylifshit@redhat.com>
2023-02-13 17:28:23 +02:00
Albin Antony
2c710811fa s3select: json output serialization
Signed-off-by: Albin Antony <aantony@redhat.com>
2023-02-13 14:45:36 +05:30
Amit Prinz Setter
819dd5aa32 Use get_new_bucket_name() in select tests.
Signed-off-by: Amit Prinz Setter <aprinzse@redhat.com>
2023-02-05 17:54:46 +02:00
Casey Bodley
b1472019d7
Merge pull request #486 from cbodley/wip-58365
test_s3: remove test_user_policy()

Reviewed-by: Pritha Srivastava <prsrivas@redhat.com>
2023-01-27 09:20:40 -05:00
Casey Bodley
18a41ab63f test_s3: remove test_user_policy()
this has been failing consistently in local testing. test_sts.py has
lots of user policy test coverage, so this test case in test_s3.py is
superfluous

Fixes: https://tracker.ceph.com/issues/58365

Signed-off-by: Casey Bodley <cbodley@redhat.com>
2023-01-26 11:23:05 -05:00
Ali Maredia
b8422a2055
Merge pull request #482 from cbodley/wip-tox-pytest
replace deprecated nose with pytest
2023-01-25 17:46:57 -05:00
Casey Bodley
7993dd02a5 test_headers: add custom marks for auth_common/aws2/aws4
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2023-01-25 17:07:40 -05:00
Casey Bodley
5e9f6e5ffb remove boostrap
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2023-01-22 11:50:40 -05:00
Casey Bodley
d13ed28a5c cleanup duplicate lines
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2023-01-21 22:40:33 -05:00
Casey Bodley
494379c2ff remove nose dependency from requirements.txt
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2023-01-21 22:40:33 -05:00
Casey Bodley
4c75fba0de nose: remove nose attrs and imports
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2023-01-21 22:40:33 -05:00
Casey Bodley
f5d0bc9be3 pytest: replace nose eq() with assert ==
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2023-01-21 22:40:33 -05:00
Casey Bodley
7e7e8d5a42 pytest: replace nose SkipTest with pytest.skip()
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2023-01-21 22:40:33 -05:00
Casey Bodley
c80e9d2118 pytest: replace @nose.with_setup with fixtures
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2023-01-21 22:40:33 -05:00
Casey Bodley
4864dbc340 pytest: add custom marks for each nose @attr
and register them in pytest.ini

Signed-off-by: Casey Bodley <cbodley@redhat.com>
2023-01-21 22:40:33 -05:00
Casey Bodley
3652cfe2ec remove tests tagged fails_strict_rfc2616
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2023-01-21 22:40:33 -05:00
Casey Bodley
672a123348 pytest: add global configfile and autouse teardown fixtures
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2023-01-21 22:40:33 -05:00
Casey Bodley
9319a41b24 pytest: add tox.ini to run pytest, update README
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2023-01-20 17:40:12 -05:00
Casey Bodley
114397c358
Merge pull request #476 from cbodley/wip-botocore-sigv2
pin botocore to resolve v2 signature failures
2022-11-23 13:04:13 -05:00
Casey Bodley
6ff8cf27a2 pin botocore to resolve v2 signature failures
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2022-11-22 14:32:57 -05:00
Ali Maredia
e4953a3b76
Merge pull request #472 from galsalomon66/fixes_related_to_new_csv_parser_2
new CSV parser has some impact on engine results
2022-10-18 14:52:09 -04:00
galsalomon66
60b26f210e remove comment
Signed-off-by: galsalomon66 <gal.salomon@gmail.com>
2022-10-18 00:37:29 +03:00
Ali Maredia
64e919a13b
Merge pull request #473 from cbodley/wip-copyobj-tails
add test_object_copy_16m to test refcounting of tail objects
2022-10-06 16:57:39 -04:00
Casey Bodley
defb8eb977 add test_object_copy_16m to test refcounting of tail objects
i don't think any of our CopyObj test cases were large enough to have
tail objects, so weren't exercising our tail object ref counting
strategy

Signed-off-by: Casey Bodley <cbodley@redhat.com>
2022-10-05 14:37:09 -04:00
galsalomon66
b200013565 the use of std::to_string changes the double presentation (not accuracy)
Signed-off-by: galsalomon66 <gal.salomon@gmail.com>
2022-09-18 10:46:04 +03:00
galsalomon66
89f97ed35c the upgraded CSV parser has some impact on engine results (redundant comma, quote behavior)
Signed-off-by: galsalomon66 <gal.salomon@gmail.com>
2022-09-12 15:33:35 +03:00
Soumya Koduri
d89ab9d862
Merge pull request #471 from soumyakoduri/wip-skoduri-cloud
cloud-transition: Adjust lc wait time
2022-09-06 22:37:37 +05:30
Soumya Koduri
774172ad43 cloud-transition: Adjust lc wait time
Fixes: https://tracker.ceph.com/issues/57401

Signed-off-by: Soumya Koduri <skoduri@redhat.com>
2022-09-06 10:42:14 +05:30
Soumya Koduri
ad999de7c4
Merge pull request #439 from zalsader/patch-1
Add support for rocky linux in bootstrap
2022-08-25 10:40:53 +05:30
Soumya Koduri
44069ff062
Merge pull request #469 from soumyakoduri/dbstore_test_master
dbstore: Update the tests marked 'fails_on_dbstore'
2022-08-19 10:28:14 +05:30
Soumya Koduri
a8ee732732 dbstore: Update the tests marked 'fails_on_dbstore'
Tag/Untag testcases failing on dbstore as per latest run against main

Signed-off-by: Soumya Koduri <skoduri@redhat.com>
2022-08-18 22:44:59 +05:30
Casey Bodley
79156f3d3d
Merge pull request #461 from ifed01/origin/wip-ifed-fix-s3-loc-tests
Test case for locked and "marked-deleted" object version removal
2022-08-03 11:30:14 -04:00
Ali Maredia
8063cd68c9
Merge pull request #464 from shriya-deshmukh/delete-bucket-tagging-test-case
rgw/s3_boto3: Added http status code verification for DeleteBucketTagging case.
2022-07-29 13:35:56 -04:00
Shriya Deshmukh
c8fc8cd7c8
rgw/s3_boto3: Added http status code verification for DeleteBucketTagging
case.

Updated test_set_bucket_tagging test for verifying the http status code
for DeleteBucketTagging case.

Related CEPH PR: https://github.com/ceph/ceph/pull/47262

Signed-off-by: Shriya Deshmukh <shriya.deshmukh@seagate.com>
2022-07-27 08:53:48 -06:00
Ali Maredia
a3100af70a
Merge pull request #460 from soumyakoduri/user_policy_tests
Fix User Policy test failures on dbstore
2022-07-21 14:01:08 -04:00
Soumya Koduri
5d63ebf83d Fix User Policy test failures on dbstore
Tag User policy tests failing on dbstore as 'fails_on_dbstore'

Signed-off-by: Soumya Koduri <skoduri@redhat.com>
2022-07-20 21:56:28 +05:30
Igor Fedotov
be7ab936cd Test case for locked and "marked-deleted" object version removal.
Reproduces: https://tracker.ceph.com/issues/55766
Signed-off-by: Igor Fedotov <igor.fedotov@croit.io>
2022-07-15 19:05:34 +03:00
Ali Maredia
952beb9ebd
Merge pull request #453 from Seagate/master
Contributing User Policy tests
2022-06-28 12:03:24 -04:00
Ravindra Choudhari
bf889041c9 Added put/get/list/delete User Policy Tests
c4d30d7	Ravindra Choudhari	Mon, 27 Jun 2022 removing region name
4a13f58	Ravindra Choudhari	Thu, 16 Jun 2022 Updating readme file (#15)
18bc152	Ravindra Choudhari	Tue, 14 Jun 2022 Adding attr test_of_iam to all user policy tests (#13)
03f520a	Ravindra Choudhari	Tue, 14 Jun 2022 resolving review comments (#12)
7cf2823	Ravindra Choudhari	Mon, 13 Jun 2022 added IAM policy test section in README.rst (#11)
563f3ea	Ravindra Choudhari	Fri, 10 Jun 2022 adding failing three tests back with attr @fails_on_rgw (#10)
696dd2e Ravindra Choudhari 	Mon, 6 Jun 2022 changes as per review comments
3d63dfd Ravindra Choudhari 	Mon, 6 Jun 2022 Fixed review comments (#8)
9492f69 Ravindra Choudhari	Fri, 3 Jun 2022 Fixed review comments (#7)
74095dc Ketan Arlulkar     	Wed, 1 Jun 2022 Fixed review comments (#6)
942fb4f Ketan Arlulkar     	Wed, 1 Jun 2022 Added Tests for conflicting policies and IAM actions (#4)
ad5b5ae Ravindra Choudhari 	Tue, 31 May 2022 IAM policies s3 actions (#5)
6515ec6 Ketan Arlulkar     	Fri, 27 May 2022 Corrected eq import
40a2841 Ravindra Choudhari 	Tue, 17 May 2022 resolving conflicts
f53a5c1 Ravindra Choudhari 	Tue, 17 May 2022 added cleanup
747d563 Ketan Arlulkar     	Tue, 17 May 2022 Added cleanup/Delete Policy
d1cc1d8 Ketan Arlulkar     	Mon, 16 May 2022 Fixed review comments
1ec43a2 Ravindra Choudhari 	Mon, 16 May 2022 delete user policy tests
a01722e Ravindra Choudhari 	Mon, 16 May 2022 get user policy tests
ff9d676 Ketan Arlulkar     	Fri, 13 May 2022 Removed TEST IDs
d261400 Ketan Arlulkar     	Tue, 10 May 2022 Put User Policy & List User Policy Tests

Signed-off-by: Ravindra Choudhari <ravindra.choudhari@seagate.com>
2022-06-27 11:54:27 +05:30
Ali Maredia
88a8d1c66f
Merge pull request #457 from cbodley/wip-bad-tests
remove tests that fail on boto3's parameter validation
2022-06-06 14:24:41 -04:00
Casey Bodley
4cf38b4138 remove tests that fail on boto3's parameter validation
Fixes: https://tracker.ceph.com/issues/55193

Signed-off-by: Casey Bodley <cbodley@redhat.com>
2022-06-02 12:25:50 -04:00
Daniel Gryniewicz
97be0d44c6
Merge pull request #452 from cbodley/wip-no-such-tag-set
fix GetBucketTagging error code

Reviewed-by: Daniel Gryniewicz <dang@redhat.com>
2022-05-18 14:34:14 -04:00
Casey Bodley
5b08b26453 fix GetBucketTagging error code
related to https://tracker.ceph.com/issues/55460

Signed-off-by: Casey Bodley <cbodley@redhat.com>
2022-05-18 09:40:16 -04:00
Zuhair AlSader
ef570220f9
Add support for rocky linux
Signed-off-by: Zuhair AlSader <zuhair.alsader@seagate.com>
2022-05-12 15:50:58 -04:00
Soumya Koduri
33afb4eb88
Merge pull request #450 from soumyakoduri/dbstore-tests
Tag sse* tests with 'fails_on_dbstore' attr
2022-05-11 23:33:13 +05:30
Soumya Koduri
25d05a194b
Merge pull request #442 from soumyakoduri/cloudtier_tests
Adjust wait time for cloud-transition test failures
2022-05-11 14:15:38 +05:30
Soumya Koduri
75e4e4f631 Tag sse* tests with 'fails_on_dbstore' attr
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
2022-05-11 00:52:33 +05:30
Soumya Koduri
c03108418f
Merge pull request #449 from cbodley/wip-cloud-config-sample
s3tests.conf.SAMPLE: comment out [s3 cloud] section
2022-05-10 11:04:26 +05:30
Casey Bodley
9f1f9c9273 s3tests.conf.SAMPLE: comment out [s3 cloud] section
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2022-05-09 17:19:44 -04:00
Soumya Koduri
16834645a6 Adjust wait time for cloud-transition test failures
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
2022-05-06 19:10:10 +05:30
Ali Maredia
8af8f96740
Merge pull request #440 from mdw-at-linuxbox/sse-s3-5
changes for sse s3.

Reviewed-by: Ali Maredia <amaredia@redhat.com>
2022-05-02 16:58:55 -04:00
Priya Sehgal
dd7cac25f5 sse s3 jumbo patch - all in one fixes for sse-s3 logic
original tests by Priya Sehgal <priya.sehgal@flipkart.com>:
rgw/s3_boto3: Tests added for SSE-S3 (GET, PUT, HEAD, MPU).

Additions by Casey Bodley <cbodley@redhat.com>:
add 'sse-s3' tag to test cases
sse: add _put_bucket_encryption() helper function
sse: document test cases with default bucket encryption
sse: expects encryption response header on put/get
sse: add 8MB default-encrypted upload
sse: test uploads that request x-amz-server-side-encryption=AES256

Lastly all my changes (Marcus Watts <mwatts@redhat.com>):

remove obsolete test - do it only in boto3 now.

Combine or rename duplicated function names.
Giving more than one test the same name is a Bad Thing(tm).

sse: expand test_bucket_policy_put_obj_enc, and _put_bucket_encryption

test_bucket_policy_put_obj_enc was testing too many things at once.
new tests:

* customer encryption and sse-s3: should fail
* customer encryption and sse-kms: should fail
* deny if not sse-s3: no-enc fails, sse-s3 succeeds.
* deny if not sse-s3: kms fails
* deny if not sse-kms: no-enc fails, sse-kms succeeds.
* deny if not sse-kms: s3 fails

_put_bucket_encryption was only testing sse-s3.
* test both these variations: sse-s3 and sse-kms

Note:
* these tests will fail on pre-sse-s3 ceph.

python3: comment out all boto3.set_stream_logger() calls
They made too much output.

Signed-off-by: Marcus Watts <mwatts@redhat.com>
2022-04-29 17:44:56 -04:00
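The _put_bucket_encryption() helper mentioned above presumably wraps a call along these lines, shown here as a sketch for the sse-s3 variant ('aws:kms' would exercise the sse-kms path; the bucket name is a placeholder):

    import boto3

    s3 = boto3.client('s3')
    s3.put_bucket_encryption(
        Bucket='testbucket',
        ServerSideEncryptionConfiguration={'Rules': [{
            'ApplyServerSideEncryptionByDefault': {'SSEAlgorithm': 'AES256'}
        }]})
    # subsequent uploads should then return the matching
    # x-amz-server-side-encryption response header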
Soumya Koduri
101dfc104a
Merge pull request #441 from soumyakoduri/dbstore-tests
Tag testcases failing on dbstore with 'fails_on_dbstore' attr
2022-04-28 23:23:02 +05:30
Soumya Koduri
bacab3cadf Tag testcases failing on dbstore with 'fails_on_dbstore' attr
To be able to successfully run s3tests on dbstore backend in teuthology,
mark all the s3-tests currently failing on it with 'fails_on_dbstore' attr

Signed-off-by: Soumya Koduri <skoduri@redhat.com>
2022-04-27 15:44:40 +05:30
Casey Bodley
6eb0e15711
Merge pull request #435 from soumyakoduri/lc_tests
lifecycle: Addressing lc spurious failures
2022-04-05 15:30:08 -04:00
Soumya Koduri
d20d8d2207 lifecycle: Adjust lc wait time
Adjust wait time to fix spurious failures reported -
https://github.com/ceph/ceph/pull/40703#issuecomment-1060811704
http://qa-proxy.ceph.com/teuthology/mbenjamin-2022-02-23_15:42:19-rgw-wip-rgwlc-noreset-distro-basic-smithi/6703349/teuthology.log

Signed-off-by: Soumya Koduri <skoduri@redhat.com>
2022-03-22 00:51:30 +05:30
Casey Bodley
76beb672d1
Merge pull request #429 from soumyakoduri/lc_tests
lifecycle/deletemarker_expiration: Increase timer window
2022-02-08 12:29:04 -05:00
Soumya Koduri
cb830ebae1 lifecycle/deletemarker_expiration: Increase timer window
Increase wait time in test_lifecycle_deletemarker_expiration(..)
to avoid any spurious failure.

Signed-off-by: Soumya Koduri <skoduri@redhat.com>
2022-02-07 23:24:34 +05:30
Casey Bodley
cf77d5c560
Merge pull request #421 from soumyakoduri/lc_tests
Enable Lifecycle tests
2022-02-04 11:01:42 -05:00
Soumya Koduri
0f3f35ef01 Enable lifecycle tests
Add an option to configure lc debug interval and adjust lifecycle
tests sleep as per the value set.

Signed-off-by: Soumya Koduri <skoduri@redhat.com>
2022-02-02 00:05:30 +05:30
Soumya Koduri
47292aee17 Add testcases for rgw cloudtransition feature
Feature PR: https://github.com/ceph/ceph/pull/35100

Also ported lc testcases from boto2 to boto3

Signed-off-by: Soumya Koduri <skoduri@redhat.com>
2022-02-02 00:05:30 +05:30
Ali Maredia
a38cdb1dd5
Merge pull request #424 from psathyan/fix_check_grants
Improving check_grants reliability
2022-01-18 17:02:35 -05:00
Casey Bodley
13a477d096
Merge pull request #428 from TRYTOBE8TME/wip-sts-error
s3tests_boto3/test_sts: Changing code for proper cleanup
2022-01-18 09:18:50 -05:00
Kalpesh Pandya
c03fd082cc test_sts: Changing code for proper cleanup
This solves: https://tracker.ceph.com/issues/53090

The solution is: we need to delete the user with the role_policy and
user_policy attached, which was causing the failure.

Signed-off-by: Kalpesh Pandya <kapandya@redhat.com>
2022-01-18 11:55:06 +05:30
Ali Maredia
540b28fa20
Merge pull request #426 from galsalomon66/revert_install_arrow
revert the arrow installation (causing failure on some distros, such …
2022-01-13 16:31:30 -05:00
gal salomon
a4d282c1db revert the arrow installation (causing failure on some distros, such as fedora)
Signed-off-by: gal salomon <gal.salomon@gmail.com>
2022-01-12 19:01:06 +02:00
Gal Salomon
f7f0799ceb
Merge pull request #419 from galsalomon66/parquet_s3tests
parquet tests
2022-01-11 23:15:08 +02:00
gal salomon
60593c99dd fix output-serialization tests (upon comparing query results, redundant columns need to be removed)
skip the output-serial test. the results from both queries are not equal, thus it raises an assert. the problem seems to be the formatting before the comparison

remove test_output_serial_expressions until the test is fixed

experiment with pyarrow for parquet testing; add arrow/parquet to bootstrap; install pyarrow and pandas for reading/writing parquet

Signed-off-by: gal salomon <gal.salomon@gmail.com>
2022-01-11 20:42:38 +02:00
Pragadeeswaran Sathyanarayanan
5f96a32045
Improving check_grants reliability
Signed-off-by: Pragadeeswaran Sathyanarayanan <psathyan@redhat.com>
2022-01-10 22:00:21 +05:30
gal salomon
6019ec1ef3 merging master tests into parquet branch
Signed-off-by: gal salomon <gal.salomon@gmail.com>
2021-12-28 03:03:54 +02:00
gal salomon
a3b849e4db fix for assert of error messages
Signed-off-by: gal salomon <gal.salomon@gmail.com>
2021-12-28 02:58:58 +02:00
gal salomon
93099c1fb0 remove redundant comma. the s3select engine produced a redundant result column before its last fix; s3tests should align with that
Signed-off-by: gal salomon <gal.salomon@gmail.com>
2021-12-28 02:58:58 +02:00
Ali Maredia
9a6a1e9f19
Merge pull request #402 from galsalomon66/progress-stats
s3select-tests:  new-s3select-responses, output-serialization, presto alignments
2021-12-20 10:37:14 -05:00
galsalomon66
23be1160f5 adding datetime queries from #395
Signed-off-by: galsalomon66 <gal.salomon@gmail.com>
2021-11-25 11:57:52 +02:00
Ali Maredia
47ece6e861
Merge pull request #415 from cbodley/wip-list-multipart-owner
check Owner/Initiator fields of ListMultipartUploads response
2021-11-18 14:31:12 -05:00
galsalomon66
eef8d0fa67 rollback to use like without escape syntax (explore valgrind issue)
Signed-off-by: galsalomon66 <gal.salomon@gmail.com>
2021-11-12 11:39:45 +02:00
galsalomon66
f51101d752 modify the query syntax to like-escape; the query semantics are the same; results are the same; part of exploring the valgrind issue around the like-operator
Signed-off-by: galsalomon66 <gal.salomon@gmail.com>
2021-11-11 16:15:49 +02:00
Casey Bodley
490d0a4c4f check Owner/Initiator fields of ListMultipartUploads response
new test case test_list_multipart_upload_owner() uses two different
users to initiate multipart uploads, then tests that
list_multipart_uploads() shows the correct user ids and display names
for each upload

Signed-off-by: Casey Bodley <cbodley@redhat.com>
2021-10-29 13:13:19 -04:00
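A trimmed sketch of that check, assuming the main and alt users each initiated an upload on a placeholder bucket beforehand:

    import boto3

    main = boto3.client('s3')  # bucket owner's credentials

    listing = main.list_multipart_uploads(Bucket='testbucket')
    for upload in listing.get('Uploads', []):
        # Owner identifies the object owner; Initiator identifies whichever
        # user actually started this upload
        assert upload['Owner']['ID'] and upload['Owner']['DisplayName']
        assert upload['Initiator']['ID'] and upload['Initiator']['DisplayName']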
galsalomon66
749e29185b add output-serialization tests; add syntax-error tests; run_s3select_output should merge into run_s3select
Signed-off-by: galsalomon66 <gal.salomon@gmail.com>
2021-10-21 03:32:57 +03:00
gal salomon
7c07bad930 remove comments from the like-expressions
Signed-off-by: gal salomon <gal.salomon@gmail.com>
2021-10-17 14:03:39 +03:00
Ali Maredia
687ab24e7d
Merge pull request #400 from iraj465/wip-new-multi-delete-objects-tests
rgw/s3_boto3:Adds new delete_objects tests
2021-09-08 15:58:40 -04:00
Ali Maredia
d073b991aa
Merge pull request #411 from mkogan1/multipart-extra-complete2
test extra complete_multipart_upload()
2021-09-08 10:44:20 -04:00
Mark Kogan
99d4b329e2 test extra complete_multipart_upload()
After the 1st successful one, it should also return 200 OK

related tracker issue: https://tracker.ceph.com/issues/50141
related pr: https://github.com/ceph/ceph/pull/40594

Signed-off-by: Mark Kogan <mkogan@redhat.com>
2021-09-08 15:32:48 +03:00
Ali Maredia
55d8ef2a7e
Merge pull request #410 from Rjerk/wip-fix-doc-example
docs: fix wrong example in README.rst
2021-09-07 16:06:14 -04:00
Liu Lan
9ac8aef12b docs: fix wrong example in README.rst
Run: S3TEST_CONF=your.conf ./virtualenv/bin/nosetests s3tests.functional.test_s3:test_bucket_list_empty

But get an error: "ERROR: Failure: ValueError (No such test test_bucket_list_empty)".

Because test_bucket_list_empty is a test case in s3tests_boto3 directory.

Signed-off-by: Liu Lan <liulan_yewu@cmss.chinamobile.com>
2021-09-06 19:33:01 +08:00
Ali Maredia
4a89a9a5b2
Merge pull request #403 from pritha-srivastava/wip-rgw-sts-abac
rgw sts abac tests
2021-09-03 10:57:08 -04:00
Pritha Srivastava
71266fede9 rgw/sts: test to use role tag as iam:ResourceTag in
role's trust policy.

Signed-off-by: Pritha Srivastava <prsrivas@redhat.com>
2021-08-30 14:25:08 +05:30
Pritha Srivastava
5dcc3dd689 rgw/sts: test for s3:ResourceTag in role's permission policy
Signed-off-by: Pritha Srivastava <prsrivas@redhat.com>
2021-08-30 14:25:02 +05:30
Pritha Srivastava
bf43a4a10a rgw/sts: adding test for aws:TagKeys that can be used in the
condition element of the role's trust policy and the role's
permission policy.

Signed-off-by: Pritha Srivastava <prsrivas@redhat.com>
2021-08-30 14:21:38 +05:30
Pritha Srivastava
86fecf83b9 rgw/sts: adding test for aws:PrincipalTag in role permission
policy.

Signed-off-by: Pritha Srivastava <prsrivas@redhat.com>
2021-08-30 14:21:38 +05:30
Pritha Srivastava
64068d7bf9 rgw/sts: adding test to check for aws:RequestTag
in the condition element of a role's trust policy.

Signed-off-by: Pritha Srivastava <prsrivas@redhat.com>
2021-08-30 14:21:27 +05:30
Pritha Srivastava
d466b7bd09 rgw/sts: adding tests for testing assumerolewithwebidentity
using 'sub' and 'azp' fields in the web token.

Signed-off-by: Pritha Srivastava <prsrivas@redhat.com>
2021-08-30 14:19:37 +05:30
Ali Maredia
96438f44e4
Merge pull request #408 from cbodley/wip-52037
object-lock: test changes between retention modes
2021-08-11 13:49:57 -04:00
Ali Maredia
a6004fe43b
Merge pull request #409 from ceph/wip-ssl-verify-fix
Disable ssl verify by default
2021-08-11 13:13:59 -04:00
Ali Maredia
b252638369 disable ssl verify by default
Signed-off-by: Ali Maredia <amaredia@redhat.com>
2021-08-10 14:41:05 -04:00
Ali Maredia
5476c709c8
Merge pull request #405 from psathyan/wip-ssl-verify
Add support for disabling SSL certificate verification
2021-08-09 12:01:25 -04:00
Pragadeeswaran Sathyanarayanan
ea3caaa76b Add support for disabling SSL certificate verification
Signed-off-by: Pragadeeswaran Sathyanarayanan <psathyan@redhat.com>
2021-08-08 16:27:55 +05:30
Casey Bodley
95fd91df2b
Merge pull request #365 from markhoughton-microfocus/issue_47586_tests
Tests for multi-object delete with object lock
2021-08-05 11:28:44 -04:00
Mark Houghton
7fe0304e9c Add tests for issue 47586. 2021-08-05 11:26:12 -04:00
Casey Bodley
8662815ebe object-lock: test changes between retention modes
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2021-08-05 10:50:37 -04:00
Casey Bodley
9c4f15a47e nuke_prefixed_buckets waits up to 60 seconds for object locks to expire
objects locked in GOVERNANCE mode can be removed with
BypassGovernanceRetention, but some tests may leave an object locked in
COMPLIANCE mode, which blocks deletion until the retention period
expires

nuke_prefixed_buckets now checks the retention policy of objects that it
fails to delete with AccessDenied, and will wait up to 60 seconds for
locks to expire before retrying the deletes. if the wait exceeds 60
seconds, it instead throws an error without deleting the bucket

instead of doing this in nuke_prefixed_buckets, we could potentially
have each object-lock test case handle this manually, but that would
add a separate delay to each test case

Signed-off-by: Casey Bodley <cbodley@redhat.com>
2021-08-05 10:50:34 -04:00
Casey Bodley
bb995c2aeb nuke_prefixed_buckets deletes objects in batches
speed up the cleanup by using delete_objects() with batches of 128

Signed-off-by: Casey Bodley <cbodley@redhat.com>
2021-08-05 10:50:32 -04:00
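A sketch of the batched cleanup pattern (delete_objects() accepts up to 1000 keys per call; the commit uses batches of 128). The helper name and bucket are hypothetical:

    import boto3

    def delete_keys_in_batches(client, bucket, keys, batch_size=128):
        for i in range(0, len(keys), batch_size):
            batch = keys[i:i + batch_size]
            client.delete_objects(
                Bucket=bucket,
                Delete={'Objects': [{'Key': k} for k in batch]})

    delete_keys_in_batches(boto3.client('s3'), 'testbucket',
                           ['key-%d' % i for i in range(300)])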
Matt Benjamin
41ebef2540
Merge pull request #397 from TRYTOBE8TME/wip-new-sts-tests
sts: Addition of new STS tests
2021-07-29 12:43:34 -04:00
Casey Bodley
513ecdfdd0
Merge pull request #404 from flipkart-incubator/aws_sse_s3
rgw/s3_boto3: Add tests for bucket encryption APIs
2021-07-27 11:08:22 -04:00
galsalomon66
723853fd18 search for the cause of the valgrind issue; remove the like expression that may cause it
Signed-off-by: galsalomon66 <gal.salomon@gmail.com>
2021-07-27 08:26:28 +03:00
Rahul Dev Parashar
44643af0b0 rgw/s3_boto3: Add tests for bucket encryption APIs
Tests are added for GetBucketEncryption, PutBucketEncryption,
and DeleteBucketEncryption APIs.

Related PR: https://github.com/ceph/ceph/pull/42222

Signed-off-by: Rahul Dev Parashar <rahul.dev@flipkart.com>
2021-07-26 20:23:07 +05:30
Kalpesh Pandya
245a93326e rgw/sts: Addition of new STS tests for testing
session policies along with the role's permission policy
and bucket policy.

Signed-off-by: Kalpesh Pandya <kapandya@redhat.com>
Signed-off-by: Pritha Srivastava <prsrivas@redhat.com>
2021-07-16 13:34:23 +05:30
Ali Maredia
700a04737a
Merge pull request #401 from SoftIron/object-retention-iso8601
Add test to check retain date is in iso 8601 format
2021-07-15 20:47:07 -04:00
iraj465
d2a7ed88f1 chore:Removes unused scaffolds 2021-07-16 03:29:36 +05:30
iraj465
459e3c870a rgw/boto3_s3:Adds delete_objects key limit for list-objects-v2 2021-07-16 03:15:53 +05:30
iraj465
20aa9aa071 chore:Bump the list-objects to paginator 2021-07-16 03:07:21 +05:30
Ali Maredia
d868058d0c
Merge pull request #399 from iraj465/wip-new-lifecycle-transition-tests
rgw/s3_boto3:Adds lifecycle transition tests
2021-07-15 16:14:55 -04:00
Danny Abukalam
e229d1aaf6
Add test to check retain date is in iso 8601 format
Signed-off-by: Danny Abukalam <danny@softiron.com>
2021-07-15 12:52:37 -04:00
iraj465
64bdc3beec rgw/s3_boto3:Adds new delete_objects tests for checking key delete limit 2021-07-13 23:20:35 +05:30
iraj465
ba9525f425 rgw/s3_boto3:Adds lifecycle transition test for invalid iso8601 date 2021-07-13 23:14:51 +05:30
Albin Antony
a3447c50df s3select: test progress stats
Signed-off-by: Albin Antony <aantony@redhat.com>
2021-07-07 15:18:29 +05:30
Albin Antony
aaa355f20b s3select: align s3select tests with ceph
Update s3-tests to handle the error-response (return 400, and error-description)

Signed-off-by: Albin Antony <aantony@redhat.com>
2021-06-30 15:13:45 +05:30
Casey Bodley
a0ef4be7fc
Merge pull request #389 from cbodley/wip-test-49780
sts: test role policy around nonexistent objects
2021-06-28 11:11:23 -04:00
Casey Bodley
7bd3c432fc
Merge pull request #386 from cbodley/wip-bucket-ctime
test that listed buckets have creation time
2021-06-28 11:09:11 -04:00
ofriedma
2851712901
Merge pull request #393 from ofriedma/wip-ofriedma-mp-upload-bucket-policy
test multipart upload with bucket policy
2021-05-27 17:34:41 +03:00
Ali Maredia
2ce7e15cca
Merge pull request #391 from ceph/tethology_s3select_apr19
modifying tests to be align with change of s3select compare sign( == …
2021-05-21 14:51:26 -04:00
Or Friedmann
cfdf914c4b test multipart upload with bucket policy
Signed-off-by: Or Friedmann <ofriedma@redhat.com>
2021-05-18 12:04:46 +03:00
gal salomon
1572fbc87b modifying tests to align with the change of the s3select compare sign ( == -> = )
Signed-off-by: gal salomon <gal.salomon@gmail.com>
2021-05-05 11:35:32 +03:00
Gal Salomon
b1815c25dc
Merge pull request #382 from albin-antony/master
s3select: align s3-tests with new changes in s3select
2021-04-09 10:26:24 +03:00
Albin Antony
c6a4ab9d12 tests
Signed-off-by: Albin Antony <aantony@redhat.com>
2021-04-08 16:21:47 +05:30
Casey Bodley
7276bee050 sts: test role policy around nonexistent objects
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2021-04-07 11:41:30 -04:00
Albin Antony
e7102e8cb0 tests
Signed-off-by: Albin Antony <aantony@redhat.com>
2021-03-30 20:59:31 +05:30
Albin Antony
60dd3444b3 tests
Signed-off-by: Albin Antony <aantony@redhat.com>
2021-03-22 16:24:26 +05:30
Albin Antony
4a86ebbe8b s3select: align s3-tests with new changes in s3select
Fix when-then, date functions and NULL; add escape and trim tests

Signed-off-by: Albin Antony <aantony@redhat.com>
2021-03-19 13:10:43 +05:30
Casey Bodley
66ced9af1d test that listed buckets have creation time
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2021-03-11 16:48:04 -05:00
Ali Maredia
8893cc49c5
Merge pull request #381 from ceph/fix_test_file_permission_add_attrib
change test_s3select.py permission; add s3select attribute
2021-01-14 00:02:58 -05:00
galsalomon66
ea7d5fb563 change test_s3select.py permission; add s3select attribute
Signed-off-by: galsalomon66 <gal.salomon@gmail.com>
2021-01-13 09:24:31 +02:00
Gal Salomon
59a3aff3de
Merge pull request #373 from ceph/fix_function_names
rename charlength and character_length function names
2020-12-16 13:43:21 +02:00
galsalomon66
6a63d0cf91 reduce object size for test_like_expression; it may cause timeouts in teuthology runs
Signed-off-by: galsalomon66 <gal.salomon@gmail.com>
2020-12-15 11:51:19 +02:00
galsalomon66
5d6166bf53 rename charlength and character_length function names
Signed-off-by: galsalomon66 <gal.salomon@gmail.com>
2020-12-05 18:58:45 +02:00
Ali Maredia
6c885bb39a
Merge pull request #371 from ofriedma/wip-ofriedma-test-usage-api
Add test for GetUsage api

Reviewed-by: Ali Maredia <amaredia@redhat.com>
2020-12-03 15:59:40 -05:00
Or Friedmann
ef8f65d917 Add test for head bucket usage headers
Signed-off-by: Or Friedmann <ofriedma@redhat.com>
2020-12-03 18:28:40 +02:00
Or Friedmann
f4f7812efd Add test for GetUsage api
Signed-off-by: Or Friedmann <ofriedma@redhat.com>
2020-12-03 17:53:50 +02:00
Ali Maredia
cd1794f3c7
Merge pull request #363 from linuxbox2/master-headbucket-dne
Add test for HeadBucket on a non-existent bucket
2020-11-24 23:57:51 -05:00
Casey Bodley
26b43ccb02
Merge pull request #367 from IlsooByun/check_invalid_payload
Check if invalid payload is added after serving errordoc
2020-11-19 10:24:01 -05:00
Gal Salomon
26f06011ee
Merge pull request #364 from albin-antony/when-than
s3select tests for coalesce and case
2020-11-12 11:53:12 +02:00
Albin Antony
d7c243ba83 s3select tests for coalesce and case
Signed-off-by: Albin Antony <aantony@redhat.com>
2020-11-12 00:13:03 +05:30
Ilsoo Byun
c08de72d55 Check if invalid payload is added after serving errordoc
Signed-off-by: Ilsoo Byun <ilsoobyun@linecorp.com>
2020-11-08 22:22:32 +09:00
Ali Maredia
b72bff16d1
Merge pull request #360 from TRYTOBE8TME/wip-sts-webidentity
Webidentity Test addition to test_sts.py
2020-11-02 13:36:26 -05:00
Gal Salomon
bf23251357
Merge pull request #358 from albin-antony/predicates
s3select predicate tests
2020-11-01 15:18:33 +02:00
root
f4a052dfcf Webidentity Test addition to test_sts.py
Signed-off-by: Kalpesh Pandya <kapandya@redhat.com>

A few main changes/additions:
1. Webidentity test addition to test_sts.py.
2. A function named check_webidentity() added to __init__.py in order to check for section presence.
3. A few lines shifted from setup() to get_iam_client() so that they execute only when sts-tests run.
4. Documentation update (for sts section)
5. Changes in s3tests.conf.SAMPLE regarding sts sections
2020-10-31 00:15:42 +05:30
Matt Benjamin
62395eb872
Merge pull request #361 from ofriedma/wip-lc-expr-hdr-tags
Test expiration header for lc rules with tags
2020-10-22 08:21:02 -04:00
Or Friedmann
8638017020 Test expiration header for lc rules with tags
Signed-off-by: Or Friedmann <ofriedma@redhat.com>
2020-10-20 19:30:55 +03:00
Albin Antony
ae3052fa8a s3select predicate tests
Signed-off-by: Albin Antony <aantony@redhat.com>
2020-10-09 13:15:43 +05:30
Casey Bodley
a48a9bf6d1
Merge pull request #356 from cbodley/wip-test-bucket-recreate-acl
test bucket recreation with different acls
2020-10-08 10:37:09 -04:00
Matt Benjamin
16266d1590 Add test for HeadBucket on a non-existent bucket
n.b., RGW does not send a response document for this operation,
which seems consistent with
https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadBucket.html

Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
2020-10-08 09:30:50 -04:00
Casey Bodley
0b2d7f729d
Merge pull request #362 from TRYTOBE8TME/wip-sts-issue-fix
STS issue fix (https://tracker.ceph.com/issues/47588)
2020-10-05 17:19:09 -04:00
root
daf9062a22 STS issue fix (https://tracker.ceph.com/issues/47588)
This fixes the reported issue (https://tracker.ceph.com/issues/47588). The problem was an argument passed to the function; removing that argument (it is already optional) resolves the issue.

Signed-off-by: Kalpesh Pandya <kapandya@redhat.com>
2020-10-06 02:02:44 +05:30
Matt Benjamin
4e3fd5ff41
Merge pull request #359 from huww98/master
Test list_objects_v2 KeyCount with Delimiter
2020-10-02 12:41:19 -04:00
Matt Benjamin
0eed4a551d
Merge pull request #357 from linuxbox2/wip-fixes-911
Wip fixes 911
2020-10-01 11:26:31 -04:00
胡玮文
30db28e775 Test list_objects_v2 KeyCount with Delimiter
Test for: https://github.com/ceph/ceph/pull/37396

Signed-off-by: 胡玮文 <huww98@outlook.com>
2020-09-25 00:28:15 +08:00
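As a rough boto3 sketch of what such a test checks (``client`` and ``bucket`` are placeholders, not names from this repo), KeyCount is expected to count rolled-up prefixes as well as keys::

    resp = client.list_objects_v2(Bucket=bucket, Delimiter='/')
    # with a delimiter, KeyCount should cover both Contents and the
    # rolled-up CommonPrefixes entries
    assert resp['KeyCount'] == len(resp.get('Contents', [])) + \
                               len(resp.get('CommonPrefixes', []))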
Matt Benjamin
f0868651fd add noncurrent version expiration rule w/tag filter
Create 10 object versions (9 noncurrent).  Install a noncurrent
version expiration at 4 days.  Verify that 10 versions exist at
T+20, and only 1 (current) at T+60.

Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
2020-09-16 13:19:41 -04:00
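As an illustration of the rule shape (a hedged boto3 sketch; ``client``, ``bucket`` and the tag are placeholders, not taken from the test itself)::

    client.put_bucket_versioning(
        Bucket=bucket, VersioningConfiguration={'Status': 'Enabled'})
    client.put_bucket_lifecycle_configuration(
        Bucket=bucket,
        LifecycleConfiguration={'Rules': [{
            'ID': 'noncurrent-expire',
            'Status': 'Enabled',
            'Filter': {'Tag': {'Key': 'key', 'Value': 'value'}},
            # noncurrent versions expire 4 days after being superseded
            'NoncurrentVersionExpiration': {'NoncurrentDays': 4},
        }]})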
Matt Benjamin
6ff497d908 add lifecycle expiration test mixing 2-tag filter w/versioning
By design this test duplicates test_lifecycle_expiration_tags2,
but enables object versioning on the bucket.

The tests install a rule which requires -2- tags to be matched,
and creates 2 objects, one matching only 1 of the required tags,
the other matching both.  Only the 2nd object should expire.

Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
2020-09-15 09:57:00 -04:00
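A sketch of a rule requiring both tags via an ``And`` filter (tag names illustrative); an object must carry every listed tag to be selected::

    rule = {
        'ID': 'two-tag-expire',
        'Status': 'Enabled',
        'Filter': {'And': {'Tags': [
            {'Key': 'tag1', 'Value': 'a'},
            {'Key': 'tag2', 'Value': 'b'},
        ]}},
        'Expiration': {'Days': 1},
    }
    client.put_bucket_lifecycle_configuration(
        Bucket=bucket, LifecycleConfiguration={'Rules': [rule]})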
Matt Benjamin
e79dffa731 add tests for lifecycle expiration w/1 and 2 object tags
Note that the 1-tag case contains a filter prefix--which exposes
an apparent bug parsing Filter when it contains a Prefix element
and a single Tag element (without And).

Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
2020-09-14 19:06:32 -04:00
Matt Benjamin
c2b59fb714 fix lifecycle expiration days: 0
In fact test_lifecycle_expiration_days0 should fail, as 0-day
expiration is permitted for transition rules but not for expiration
rules.

Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
2020-09-14 14:23:48 -04:00
Matt Benjamin
9d526d1a76 s/test_set_tagging/test_set_bucket_tagging/;
The test exercises bucket tagging, has nothing to do with object
tagging (clarity).

Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
2020-09-11 13:52:29 -04:00
Matt Benjamin
979e739eff fix test_lifecycle_expiration_header_{put,head}
Primarily fixes the expiration header() verifier function
check_lifecycle_expiration_header, but also cleans up
prefix handling in setup_lifecycle_expiration().

Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
2020-09-11 13:23:41 -04:00
Matt Benjamin
4948f8b009 fix and remark on test_lifecycle_expiration_days0
1. fix a python3-related KeyError exception

2. note here:  AWS documentation includes examples of "Days 0"
   in use, but boto3 will not accept them--this is why the days0
   test currently sets Days 1

3. delay increased to 30s, to avoid occasional failures due to
   jitter

Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
2020-09-11 10:43:14 -04:00
Casey Bodley
f6218fa1de test bucket recreation with different acls
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2020-09-10 14:40:18 -04:00
TRYTOBE8TME
54103207e4
Merge pull request #355 from TRYTOBE8TME/wip-sts-tests
STS Tests File and required modification in other files for the same
2020-08-17 15:11:32 +05:30
Gal Salomon
5a8d0b8b0d
Merge pull request #352 from ceph/validate_test_setup
each newly uploaded test file gets a unique (random) name, and…
2020-08-12 13:32:29 +03:00
gal salomon
350fcbb4ec each newly uploaded test file gets a unique (random) name, and the uploaded file's content is verified
Signed-off-by: gal salomon <gal.salomon@gmail.com>
2020-08-12 13:29:30 +03:00
root
8dbe896f89 STS Tests Files and modification in __init__.py
Signed-off-by: Kalpesh Pandya <kapandya@redhat.com>
2020-08-12 13:30:22 +05:30
Casey Bodley
982d15c30e
Merge pull request #353 from ceph/add_test_filter
add filter for s3select tests
2020-07-07 07:37:49 -04:00
gal salomon
fce9a52ef4 add filter for s3select tests
Signed-off-by: gal salomon <gal.salomon@gmail.com>
2020-07-07 00:10:50 +03:00
Matt Benjamin
0985cc11d7
Merge pull request #342 from ceph/s3select_first_tests_framework
commit first tests for s3select and initial framework
2020-06-25 12:27:32 -04:00
gal salomon
72e251ed69 fix comments; remove unused imports; enable test for projection; use get_client() 2020-05-31 16:09:03 +03:00
gal salomon
fb39ac4829 adding random-generated tests, one for the where clause, the second for projection. A random arithmetical expression is generated and used to build the s3-select query; the result is compared to a python engine 2020-05-27 01:20:31 +03:00
Casey Bodley
6d8c0059db
Merge pull request #347 from linuxbox2/wip-fix-lc-expire-timers
fix/restore test_lifecycle_expiration checks
2020-05-26 11:06:40 -04:00
Casey Bodley
b63229b110
Merge pull request #350 from tchaikov/wip-45691
bootstrap,requirements.txt: bump up setuptools and requests
2020-05-26 09:01:29 -04:00
Kefu Chai
b7f47c2a31 bootstrap,requirements.txt: bump up setuptools and requests
Fixes: https://tracker.ceph.com/issues/45691
Signed-off-by: Kefu Chai <kchai@redhat.com>
2020-05-26 20:28:13 +08:00
gal salomon
e006dd4753 python linter; replace assert with assert_equal; add complex query test (sum, count, where); add test-schema 2020-05-26 11:36:38 +03:00
gal salomon
1a9d3677f7 using config parameters 2020-05-23 15:10:14 +03:00
Casey Bodley
4c8bbbef0a
Merge pull request #348 from cbodley/wip-bootstrap-virtualenv-deprecated
bootstrap: remove deprecated virtualenv options

Reviewed-by: Adam C. Emerson <aemerson@redhat.com>
Reviewed-by: Ali Maredia <amaredia@redhat.com>
2020-05-21 11:08:46 -04:00
Casey Bodley
a0c15c80ad bootstrap: remove deprecated virtualenv options
this fails on Ubuntu 20.04:

> virtualenv: error: unrecognized arguments: --no-site-packages --distribute

according to `virtualenv -h`:

>   --no-site-packages    DEPRECATED. Retained only for backward compatibility.
>                         Not having access to global site-packages is now the
>                         default behavior.
>   --distribute          DEPRECATED. Retained only for backward compatibility.
>                         This option has no effect.

Signed-off-by: Casey Bodley <cbodley@redhat.com>
2020-05-21 10:48:47 -04:00
Matt Benjamin
b6db7bdd8a fix/restore test_lifecycle_expiration checks
Commit bf956df71e, adding
listobjectsv2 tests, inadvertently changed the v1
test_lifecycle_expiration test, which it had copied to
create a v2 version.  Revert this.

Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
2020-05-19 08:41:39 -04:00
Ali Maredia
7f8a12423f
Merge pull request #345 from huangp0600/hp-dev
Fix: Bucket resource leak when cleanup
2020-05-15 15:19:42 -04:00
Pei
713012c178 Fix: Bucket resource leak when cleanup
In nuke_prefixed_buckets, if an error is thrown while deleting buckets, the remaining buckets are left uncleaned.
That is a kind of resource leak on a charged platform, and they have to be cleared manually.

The original code is meant to give the user a hint by raising the error, but the resource leak of leftover buckets is a little annoying.

This PR skips the problem point and continues the teardown process. The last client error is saved and re-raised after the loop completes.

Signed-off-by: Pei <huangp0600@126.com>
Signed-off-by: Pei <phuang1@dev-new-3-3854897.slc07.dev.ebayc3.com>
2020-05-15 01:21:06 -07:00
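A minimal sketch of that teardown pattern (not the repo's actual function; the object-deletion steps are elided)::

    from botocore.exceptions import ClientError

    def nuke_prefixed_buckets(client, prefix):
        last_err = None
        for b in client.list_buckets()['Buckets']:
            if not b['Name'].startswith(prefix):
                continue
            try:
                # the real code empties the bucket before deleting it
                client.delete_bucket(Bucket=b['Name'])
            except ClientError as e:
                last_err = e      # remember the failure, keep cleaning
        if last_err is not None:
            raise last_err        # surface the last error after the loop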
gal salomon
5dc8bc75ab add tests to validate csv-header-info functionality is correct 2020-05-08 16:51:24 +03:00
gal salomon
94b1986228 adding test cases for processing CSV objects with different CSV definitions; validate null, escape rules and quotes are processed correctly 2020-04-28 19:10:24 +03:00
gal salomon
4c7c279f70 adding utcnow test 2020-04-15 17:20:52 +03:00
gal salomon
5925f0fb3f adding tests for date-time functionalities 2020-04-09 16:43:11 +03:00
gal salomon
c1bce6ac70 add complex expression tests: nested function calls, and different where-clauses that produce the same group of values 2020-04-04 16:31:54 +03:00
gal salomon
d543619e71 adding test for detection of cyclic reference to alias 2020-03-31 14:35:51 +03:00
gal salomon
f42872fd53 adding aggregation tests 2020-03-29 17:23:05 +03:00
gal salomon
74daf86fe5 adding alias test case 2020-03-26 17:49:55 +02:00
gal salomon
dac38694ef commit first tests for s3select and initial framework 2020-03-23 13:36:32 +02:00
Casey Bodley
47a3755378
Merge pull request #302 from theanalyst/public-buckets
Public buckets
2020-03-20 17:01:34 -04:00
Abhishek Lekshmanan
4d675235dd fix ignore public acls with py3 compatible code
Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
2020-01-22 17:06:09 +01:00
Abhishek Lekshmanan
3b1571ace6 add tests for ignore public acls
Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
2020-01-22 17:03:21 +01:00
Abhishek Lekshmanan
b4516725f2 add test for block public policy
Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
2020-01-22 17:03:21 +01:00
Abhishek Lekshmanan
d02c1819f6 use empty bodies for canned acl tests with BlockPublicAccess
This should be a temporary workaround until #42208 is fixed

Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
2020-01-22 17:03:21 +01:00
Abhishek Lekshmanan
4996430709 remove redundant get_client calls
Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
2020-01-22 17:03:21 +01:00
Abhishek Lekshmanan
6d3f574a8e add ability to get svc client for s3config set of apis
Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
2020-01-22 17:03:10 +01:00
Abhishek Lekshmanan
1ad38530e0 add tests for public access configuration
Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
2020-01-22 17:01:41 +01:00
Abhishek Lekshmanan
3f9d31c6c7 add a few test cases for public bucket policies
Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
2020-01-22 17:01:40 +01:00
Abhishek Lekshmanan
02b1d50ca7 boto3: add bucket policy status checks for public ACLs
Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
2020-01-22 17:01:30 +01:00
Ali Maredia
11f75ea7c5
Merge pull request #340 from theanalyst/bootstrap-suse
bootstrap: add support for *suse distributions
2020-01-17 10:02:25 -05:00
Abhishek Lekshmanan
fb6520ac11 bootstrap: add support for *suse distributions
Also reformatted the bootstrap script to parse /etc/os-release instead, so that
support for more distros/package managers can be added later, and
fixed the error message

Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
2020-01-15 20:34:46 +01:00
Casey Bodley
13452bd25f
Merge pull request #337 from ceph/wip-python-3-port
Wip python 3 port
2020-01-15 08:48:54 -05:00
Ali Maredia
024e74c469 remove all non-functional tests and infra
Signed-off-by: Ali Maredia <amaredia@redhat.com>
2020-01-14 12:20:07 -05:00
Adam C. Emerson
be9935ba1a Port functional tests from python 2 to python 3
Add fails_on_rgw to tests not passing. Some
tests from the master branch do not pass on the
rgw yet; others are waiting on rgw tracker issues to
be resolved.

Signed-off-by: Ali Maredia <amaredia@redhat.com>
2020-01-14 12:20:05 -05:00
Abhishek L
c9c84faf48
Merge pull request #314 from theanalyst/iam/user-policy-basic
iam: add a very basic user policy smoke test

Reviewed-By: Casey Bodley <cbodley@redhat.com>
Reviewed-By: Pritha Srivastava <prsivas@redhat.com>
Reviewed-By: Yuval Lifshitz <yuvalif@yahoo.com>
2019-12-20 18:30:21 +01:00
Abhishek L
aa453fa5c3
Merge pull request #311 from theanalyst/encoding-type
list-objects: add basic tests for encoding

Reviewed-By: Casey Bodley <cbodley@redhat.com>
2019-12-20 18:28:23 +01:00
Abhishek Lekshmanan
045ad2f46e iam: explicitly set a region for the iam client
Avoids region not defined errors

Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
2019-12-20 18:17:58 +01:00
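For example (a sketch; the credential and endpoint variables are placeholders)::

    import boto3

    iam = boto3.client('iam',
                       region_name='us-east-1',  # any value avoids the 'region not defined' error
                       aws_access_key_id=access_key,
                       aws_secret_access_key=secret_key,
                       endpoint_url=endpoint)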
Abhishek Lekshmanan
48be90a64e iam: add a very basic user policy smoke test
Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
2019-12-20 18:17:55 +01:00
Ali Maredia
92f056532b
Merge pull request #331 from yehudasa/wip-which
bootstrap: different packages for 'which'
2019-12-11 10:29:21 -05:00
Ali Maredia
fc195db725
Merge pull request #321 from romayalon/tags-order-fix
test_put_obj_with_tags
2019-12-10 13:04:08 -05:00
Yehuda Sadeh
f7f5eeccb0 bootstrap: different packages for 'which'
Either debianutils or which.

Signed-off-by: Yehuda Sadeh <yehuda@redhat.com>
2019-12-09 14:20:55 +02:00
Ali Maredia
506e8e5d4a
Merge pull request #330 from yehudasa/wip-bash
bootstrap: switch to bash
2019-12-08 14:04:39 -05:00
Yehuda Sadeh
e7fbcee0d5 bootstrap: switch to bash
Use of declare that is bash specific was introduced in:
d44c923405e22ea8b6cc1ab4316be3cc07f29b5

Signed-off-by: Yehuda Sadeh <yehuda@redhat.com>
2019-12-07 11:23:03 +02:00
Casey Bodley
361f4f9a66
Merge pull request #326 from ceph/wip-bootstrap-fix
Minor fix to Centos 8 bootstrap changes
2019-11-25 11:49:25 -05:00
Ali Maredia
dd44c92340
Merge pull request #156 from gaul/read-unreadable-object
Test reading an unreadable object
2019-11-25 00:22:44 -05:00
Ali Maredia
42cdf6e026
Merge pull request #278 from gaul/x-rgw
Ignore missing x-rgw-* headers
2019-11-25 00:09:54 -05:00
Ali Maredia
6b814189e0 Minor fix to Centos 8 bootstrap changes
Signed-off-by: Ali Maredia <amaredia@redhat.com>
2019-11-24 18:04:42 -05:00
Andrew Gaul
b787b79d81 Ignore missing x-rgw-* headers
Continue to require them in test_bucket_head_extended.  Only Ceph
supports these headers.

Signed-off-by: Andrew Gaul <andrew@gaul.org>
2019-11-24 22:02:19 +09:00
Andrew Gaul
2160a6a403 Test reading an unreadable object
Signed-off-by: Andrew Gaul <andrew@gaul.org>
2019-11-24 21:48:36 +09:00
Ali Maredia
2bd8af524c
Merge pull request #319 from liranmauda/liran-fix-bootstrap-for-version-8
Fixing bootstrap for RHEL/CentOS 8
2019-11-21 13:10:56 -05:00
Casey Bodley
4a053e1640
Merge pull request #296 from gaul/can-test-website-501
Do not run website tests when server emits 501

Reviewed-by: Casey Bodley <cbodley@redhat.com>
2019-11-13 11:54:33 -05:00
Romy
a2598dcdea fix lists compare 2019-11-06 13:39:29 +02:00
liranmauda
6b553efbe1 Fixing bootstrap for RHEL/CentOS 8

- When VERSION_ID is less than 8, install the python packages; otherwise install the python2 packages.
- When using python2-virtualenv, run virtualenv-2.
- Make sure which is installed
2019-11-03 16:27:08 +02:00
Ali Maredia
5cb2bcdf89
Merge pull request #307 from peter-ginchev/fix_listobjects_v2_tests
Fix validate_bucket_listv2
2019-10-28 23:56:38 -04:00
Abhishek Lekshmanan
04b5e63238 list-objects: add basic tests for encoding
Fixes: https://tracker.ceph.com/issues/41870
Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
2019-10-18 16:22:50 +02:00
Ali Maredia
4460b08222
Merge pull request #309 from hairesis/master
Push #306 to master branch.
2019-10-07 09:42:14 -04:00
Andrea Baglioni
b051efdada sse-kms test keys parametrization
Signed-off-by: Andrea Baglioni <andrea.baglioni@workday.com>
2019-10-07 11:53:45 +01:00
Andrea Baglioni
4f9710a23b sse-kms: adding missing kms test to parametrization 2019-10-07 11:53:30 +01:00
Peter Ginchev
2a475a408a Fix validate_bucket_listv2
NextContinuationToken is not cleartext, and we shouldn't assume that.
Also, the test should use it as the ContinuationToken.
2019-09-30 14:38:45 +03:00
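A sketch of the corrected pagination loop (``client`` and ``bucket`` are placeholders); the token is treated as opaque and passed back verbatim::

    resp = client.list_objects_v2(Bucket=bucket, MaxKeys=1)
    keys = [o['Key'] for o in resp.get('Contents', [])]
    while resp['IsTruncated']:
        # NextContinuationToken is opaque; never assume it is cleartext
        resp = client.list_objects_v2(
            Bucket=bucket, MaxKeys=1,
            ContinuationToken=resp['NextContinuationToken'])
        keys += [o['Key'] for o in resp.get('Contents', [])]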
Ali Maredia
fab38b2579
Merge pull request #305 from cbodley/wip-validate-bucket-name
Fix create_bucket tests as per new bucket naming conventions
2019-09-24 14:42:46 -04:00
Soumya Koduri
15d4dd96c5 Fix create_bucket tests as per new bucket naming conventions
As per amazon s3 spec -
https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-s3-bucket-naming-requirements.html

The s3 bucket names should not contain upper case letters or underscores.
Names cannot end with a dash, or contain consecutive periods or dashes adjacent to periods.
Name length shouldn't exceed 63 characters.

These rules are being enforced via
- https://github.com/ceph/ceph/pull/26787

This patch is to update the respective testcases as well.

Note: check_invalid_bucket_name() seems to have been broken. It should
try to create a bucket using the invalid name; that is addressed in this
patch. Because of this, a few testcases (which were incorrectly passing
earlier) now fail in validate_bucket_name. They have been marked
'fails_on_rgw', as was done for test_bucket_create_naming_bad_punctuation

Signed-off-by: Soumya Koduri <skoduri@redhat.com>
2019-09-24 13:48:58 -04:00
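A rough Python sketch of those rules (an illustration only, not the repo's validate_bucket_name)::

    import re

    def bucket_name_valid(name):
        # 3-63 chars: lowercase letters, digits, dots, dashes,
        # starting and ending with a letter or digit
        if not re.fullmatch(r'[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]', name):
            return False
        # no consecutive periods, no dashes adjacent to periods
        return not any(s in name for s in ('..', '.-', '-.'))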
Casey Bodley
ae3ab351c8
Merge pull request #304 from cbodley/wip-fails-with-subdomain
boto2: tag more tests with fails_with_subdomain

Reviewed-by: Abhishek Lekshmanan <abhishek@suse.com>
2019-09-11 10:30:44 -04:00
Casey Bodley
5a67bab487 boto2: tag more tests with fails_with_subdomain
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2019-09-10 15:14:49 -04:00
Casey Bodley
8f732422ca
Merge pull request #286 from zhangsw/lifecycle-expiration-versioning
Add a case for lifecycle expiration on versining enabled bucket.

Reviewed-by: Casey Bodley <cbodley@redhat.com>
2019-08-30 08:58:30 -04:00
Ali Maredia
dab15076fe
Merge pull request #297 from albin-antony/mutuallyexclusive
startafter and continuation token shouldn't be returned if they are not specified in the request
2019-08-22 14:13:09 -04:00
J. Eric Ivancich
e9ac946d0f
Merge pull request #295 from tianshan/test_pr_29215
add test case for list with delimiter not skip special keys

Reviewed-by: J. Eric Ivancich <ivancich@redhat.com>
2019-08-13 17:09:52 -04:00
albIN7
1ba6187525 startafter or continuation token shouldn't be returned if they are not specified in the request
Signed-off-by: Albin Antony <aantony@redhat.com>
2019-08-06 21:18:43 +05:30
Andrew Gaul
d6090edb82 Do not run website tests when server emits 501
Signed-off-by: Andrew Gaul <andrew@gaul.org>
2019-07-29 14:16:38 -07:00
Tianshan Qu
773edab4b9 add test case for list with delimiter not skip special keys
Signed-off-by: Tianshan Qu <tianshan@xsky.com>
2019-07-26 11:15:28 +08:00
Ali Maredia
968778e81b
Merge pull request #96 from gaul/if-match
Test copy if-match and if-none-match
2019-07-23 14:27:26 -04:00
Ali Maredia
75f748f211
Merge pull request #291 from zhangsw/listobjectv2
Add a test case for continuation token in list objects v2
2019-07-23 13:51:59 -04:00
zhang Shaowen
f57910a014 Update the continuation token test case so that it won't fail on aws
Signed-off-by: zhang Shaowen <zhangshaowen@cmss.chinamobile.com>
2019-07-18 15:06:06 +08:00
Ali Maredia
e4e225bacb
Merge pull request #284 from timuralp/improvement/improve-range-tests
Add a test for an improperly formatted range.
2019-07-16 16:24:20 -04:00
zhang Shaowen
9ca600eeaf Add a test case for list objects v2 with both continuation token and start after parameter
Signed-off-by: zhang Shaowen <zhangshaowen@cmss.chinamobile.com>
2019-07-16 19:03:25 +08:00
Ali Maredia
023ffb9fbc
Merge pull request #288 from gaul/missing-tags
Add missing tags
2019-07-15 14:26:56 -04:00
zhang Shaowen
8d623bff0c Add a test case for continuation token in list objects v2
Signed-off-by: zhang Shaowen <zhangshaowen@cmss.chinamobile.com>
2019-07-12 09:41:18 +08:00
zhang Shaowen
e54de402aa update the case for lifecycle expiration on versioning enabled bucket
Signed-off-by: zhang Shaowen <zhangshaowen@cmss.chinamobile.com>
2019-07-11 16:36:16 +08:00
Ali Maredia
66de5b19db
Merge pull request #290 from cbodley/wip-location2
fix syntax error in test_bucket_get_location
2019-07-10 11:42:32 -04:00
Casey Bodley
eaa68f4f77 fix syntax error in test_bucket_get_location
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2019-07-10 11:22:32 -04:00
Ali Maredia
b586f0170d
Merge pull request #271 from cbodley/wip-bucket-location
skip bucket location tests if no api_name is configured
2019-07-09 14:25:52 -04:00
Andrew Gaul
c9228c2140 Test copy if-match and if-none-match
References #72.

Signed-off-by: Andrew Gaul <andrew@gaul.org>
2019-07-03 11:12:55 -07:00
Andrew Gaul
dcac3bc189 Add missing tags 2019-07-02 14:59:18 -07:00
Ali Maredia
9bb46123f1
Merge pull request #276 from gaul/deps/fedora
Explicitly reference Fedora Python 2 dependency
2019-07-02 17:36:27 -04:00
Ali Maredia
68033a97a9
Merge pull request #287 from yuvalif/force_python2_in_virtualenv
make sure that virtualenv is python2 regardless of host
2019-07-02 17:32:40 -04:00
Yuval Lifshitz
2dfc89b077 make sure that virtualenv is python2 regardless of host
Signed-off-by: Yuval Lifshitz <yuvalif@yahoo.com>
2019-07-02 15:30:31 +03:00
Ali Maredia
7f21aaa8c2
Merge pull request #267 from albIN7/v2tests
Listobjectsv2 testcases
2019-06-28 16:56:49 -04:00
zhang Shaowen
b528485f62 Add a case for lifecycle expiration on versining enabled bucket.
Signed-off-by: zhang Shaowen <zhangshaowen@cmss.chinamobile.com>
2019-06-28 16:15:56 +08:00
Timur Alperovich
f6f3d3bdf1 Add a test for an improperly formatted range.
Adds tests for range headers that are not properly formatted (i.e. are
not of the form bytes=0-1024).
2019-06-24 15:36:55 -07:00
Casey Bodley
02722e9de5
Merge pull request #279 from zhangsw/object-lock
add test cases for object lock.

Reviewed-by: Casey Bodley <cbodley@redhat.com>
2019-06-20 13:21:53 -04:00
albIN7
bf956df71e Added testcases for listobjectsv2
Signed-off-by: Albin Antony <aantony@redhat.com>
2019-06-07 17:26:42 +05:30
Ali Maredia
aac3afd312
Merge pull request #249 from cbodley/wip-copy-version-urlencode
test versioned copy with urlencoded keys
2019-06-04 23:17:25 -04:00
Casey Bodley
2a3ece90db
Merge pull request #277 from liuchang0812/bucket-tagging
rgw: bucket tagging test
2019-06-03 13:24:05 -04:00
Casey Bodley
b6647c9860 test versioned copy with urlencoded keys
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2019-05-30 10:21:51 -04:00
zhang Shaowen
d9d91dd2de add test cases for object lock.
Signed-off-by: zhang Shaowen <zhangshaowen@cmss.chinamobile.com>
2019-05-28 20:16:55 +08:00
Ali Maredia
aa0f0d6ea9
Merge pull request #245 from gaul/attrs
Add missing attributes for versioning
2019-05-21 19:35:50 -04:00
chang liu
758de89dbf rgw: bucket tagging test
Signed-off-by: chang liu <liuchang0812@gmail.com>
2019-05-21 14:41:12 +08:00
Andrew Gaul
d6e0afd3f0 Add missing attributes for versioning
Also add cors attributes.

Signed-off-by: Andrew Gaul <andrew@gaul.org>
2019-05-20 00:43:51 +09:00
Andrew Gaul
dfa6088f6f Explicitly reference Fedora Python 2 dependency
Previously the aliases would install but subsequent runs would not
find the exact package name.  This caused yum to unnecessarily re-run.

Signed-off-by: Andrew Gaul <andrew@gaul.org>
2019-05-20 00:36:50 +09:00
Ali Maredia
37e6825d04
Merge pull request #272 from cbodley/wip-delete-retry
nuke_prefixed_buckets() ignores 404 from bucket delete
2019-04-23 14:56:00 -04:00
Casey Bodley
28b793271f nuke_prefixed_buckets() ignores 404 from bucket delete
If a bucket delete request times out, the retry will likely get a 404
NoSuchBucket response. Ignore that error and treat this as a success.

Signed-off-by: Casey Bodley <cbodley@redhat.com>
2019-04-22 15:35:12 -04:00
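A minimal sketch of the described behaviour (placeholder names)::

    from botocore.exceptions import ClientError

    def delete_bucket_retry(client, name):
        try:
            client.delete_bucket(Bucket=name)
        except ClientError as e:
            # a timed-out delete may already have succeeded server-side,
            # so NoSuchBucket on the retry counts as success
            if e.response['Error']['Code'] != 'NoSuchBucket':
                raise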
Casey Bodley
16393fb667
Merge pull request #261 from linuxbox2/wip-lifecycle-prefix
boto{2,3}: rm functional.test_s3.test_lifecycle_rules_conflict

Reviewed-by: Casey Bodley <cbodley@redhat.com>
2019-04-22 10:23:49 -04:00
Matt Benjamin
7757e8fe84 boto{2,3}: rm functional.test_s3.test_lifecycle_rules_conflict
The assumption that there may be only one rule per prefix has
been removed.  The rules specifically tested here--without tag
filters--do overlap but are not in conflict.

I am proposing to remove altogether rather than writing new
deconfliction logic.

Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
2019-04-18 15:02:42 -04:00
Casey Bodley
250122e17d skip bucket location test if no api_name is configured
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2019-04-05 17:08:59 -04:00
Casey Bodley
75182b7e03 set 'api_name = default' in sample config
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2019-04-05 17:08:01 -04:00
Casey Bodley
cb309aa786
Merge pull request #269 from ceph/wip-boto2-remove
Wip boto2 remove

Reviewed-by: Casey Bodley <cbodley@redhat.com>
2019-04-05 16:56:05 -04:00
Ali Maredia
d54b8d1d9b remove boto2 tests already passing in boto3
Signed-off-by: Ali Maredia <amaredia@redhat.com>
2019-04-02 13:29:24 -04:00
Ali Maredia
5db9eb059d boto3: tag boto3 tests that are failing on ssl as 'fails_on_rgw'
Signed-off-by: Ali Maredia <amaredia@redhat.com>
2019-04-01 00:45:20 -04:00
Casey Bodley
617ee92edd
Merge pull request #262 from tianshan/test_for_38811
add case test for read not exist null version

Reviewed-by: Casey Bodley <cbodley@redhat.com>
2019-03-25 10:09:54 -04:00
Tianshan Qu
0634d6ee50 add case test for read not exist null version
http://tracker.ceph.com/issues/38811

Signed-off-by: Tianshan Qu <tianshan@xsky.com>
2019-03-19 19:32:43 +08:00
Ali Maredia
5be61bc30a
Merge pull request #260 from cbodley/wip-boto3-ssl
Wip boto3 ssl
2019-03-07 12:12:00 -05:00
Casey Bodley
0e04dcd6aa boto3: verify certificates for ssl connections
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2019-02-27 11:44:27 -05:00
Casey Bodley
7f49adda30 boto3: _get_post_url() uses config endpoint
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2019-02-27 11:44:27 -05:00
Casey Bodley
ac18365f75 boto3: use https:// for secure endpoints
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2019-02-27 11:44:27 -05:00
Casey Bodley
774d40d114 boto3: use getboolean() for is_secure
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2019-02-27 11:44:27 -05:00
Casey Bodley
e89b53f0c7
Merge pull request #247 from zhangsw/append-object
Add test case for appending object.

Reviewed-by: Casey Bodley <cbodley@redhat.com>
2019-02-26 13:28:13 -05:00
zhang Shaowen
e92835a00b add https support in append object test cases
Signed-off-by: zhang Shaowen <zhangshaowen@cmss.chinamobile.com>
2019-02-26 10:10:29 +08:00
Yehuda Sadeh
435922ee00
Merge pull request #256 from linuxbox2/wip-expiration-header
boto3_s3tests: add tests for lifecycle expiration header

Reviewed-by: Yehuda Sadeh <yehuda@redhat.com>
2019-02-18 08:18:10 -08:00
Matt Benjamin
2d12e98ce0 boto3_s3tests: add tests for lifecycle expiration header
The response to a successful PUT or HEAD (or GET) of/on
an object matching any enabled lifecycle expiration rule should
include an x-amz-expiration header for the object.  The
x-amz-expiration header consists of an expiry-date, rule-id tuple
indicating the earliest matching rule and the corresponding
expiration date for the object.

Also, while at it, add test for lifecycle expiration rule with
'Days' : 0, which should both apply and...work.

Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
2019-02-14 18:32:33 -05:00
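In boto3 the header surfaces as the ``Expiration`` response field; a hedged sketch (key and rule id are illustrative, and the field is only present when a rule matches)::

    import re

    resp = client.put_object(Bucket=bucket, Key='days1/key', Body=b'data')
    # e.g. 'expiry-date="Sat, 16 Feb 2019 00:00:00 GMT", rule-id="rule1"'
    m = re.match(r'expiry-date="(.+)", rule-id="(.+)"', resp['Expiration'])
    expiry_date, rule_id = m.groups()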
Ali Maredia
a2c6edff27
Merge pull request #250 from yehudasa/wip-rgw-lifecycle-trans
lifecycle transitions, and storage class related tests
2019-01-22 12:01:50 -05:00
Casey Bodley
daade6614f
Merge pull request #210 from ceph/wip-boto-3
boto3: Foundation laid for boto3 tests

Reviewed-by: Casey Bodley <cbodley@redhat.com>
2019-01-17 14:00:37 -05:00
Ali Maredia
67f4f5d356 boto3: Foundation laid for boto3 tests
Added sample config file for boto3 and vstart.sh

Modified setup.py, requirements.txt, and README

Added an rgw_interactive.py to use interactively
with vstart.sh and python -i

Ported 400+ tests over to boto3 from functional/test_s3.py

Signed-off-by: Ali Maredia <amaredia@redhat.com>
2019-01-16 16:31:24 -05:00
Yehuda Sadeh
b3d9487f14 s3tests: test copy objects (storage class)
Signed-off-by: Yehuda Sadeh <yehuda@redhat.com>
2019-01-10 11:50:02 -08:00
Yehuda Sadeh
06d0e3d398 s3tests: more object class related tests
modify object's storage_class, multipart

Signed-off-by: Yehuda Sadeh <yehuda@redhat.com>
2019-01-10 11:50:02 -08:00
Yehuda Sadeh
4cfbece929 s3tests: test create object with storage class
Signed-off-by: Yehuda Sadeh <yehuda@redhat.com>
2019-01-05 10:23:55 -08:00
Yehuda Sadeh
5522ac93fc s3tests: lifecycle transitions: configurable storage classes
Signed-off-by: Yehuda Sadeh <yehuda@redhat.com>
2019-01-04 10:26:50 -08:00
Yehuda Sadeh
d024de76f8 s3tests: tests for lifecycle transitions
Signed-off-by: Yehuda Sadeh <yehuda@redhat.com>
2019-01-04 10:26:50 -08:00
zhang Shaowen
e064dc6a85 Add test case for appending object.
Signed-off-by: zhang Shaowen <zhangshaowen@cmss.chinamobile.com>
2018-11-20 21:05:57 +08:00
Kefu Chai
fa979f416d
Merge pull request #228 from tchaikov/wip-cffi
bootstrap: add libffi-dev for gevent

Reviewed-by: Casey Bodley <cbodley@redhat.com>
2018-05-31 00:29:23 +08:00
Kefu Chai
a2b3e4ef77 bootstrap: add libffi-dev for gevent
because gevent 1.3.2 requires cffi>=1.11.5, and cffi requires
libffi-dev to build.

Signed-off-by: Kefu Chai <kchai@redhat.com>
2018-05-30 16:36:54 +08:00
Casey Bodley
24ab8ecf07
Merge pull request #192 from joke-lee/cors_option_header_check
rgw: test cors option request with Access-Control-Request-Headers

Reviewed-by: Casey Bodley <cbodley@redhat.com>
2018-04-23 09:01:53 -04:00
Casey Bodley
ab0acde0f2
Merge pull request #220 from joke-lee/max-keys-test
enable test for https://github.com/ceph/ceph/pull/21285

Reviewed-by: Casey Bodley <cbodley@redhat.com>
2018-04-12 11:18:14 -04:00
Ali Maredia
7ccdadb37a
Merge pull request #182 from gaul/versioning
Add missing versioning tag
2018-04-10 02:20:30 +08:00
yuliyang
abbd9f17e6 enable test for https://github.com/ceph/ceph/pull/21285
Signed-off-by: yuliyang <yuliyang@cmss.chinamobile.com>
2018-04-09 12:19:31 -04:00
Ali Maredia
f783f4294b
Merge pull request #217 from cbodley/wip-unpin-requests
unpin the requests library version
2018-04-06 00:47:10 +08:00
Casey Bodley
8c67aafcf2 requirements: unpin 'requests' from old v0.14
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2018-03-27 13:45:11 -04:00
Casey Bodley
23c99318e6 cors: allow None to match missing headers
this is apparently needed to run against newer versions of the requests
library

Signed-off-by: Casey Bodley <cbodley@redhat.com>
2018-03-27 13:45:11 -04:00
Casey Bodley
bb65f38ad9
Merge pull request #209 from fangyuxiangGL/compression
compression: add case to download range from big object

Reviewed-by: Casey Bodley <cbodley@redhat.com>
2018-02-16 11:34:00 -05:00
Ali Maredia
5fb7f7a709
Merge pull request #205 from malc0lm/multipartcopy-no-range
Add test for multipart copy without 'x-amz-copy-source-range' header
2018-02-13 14:03:15 -05:00
fang yuxiang
e93443ac07 compression: add case to download range from big object
Signed-off-by: fang yuxiang <fang.yuxiang@eisoo.com>
2018-02-07 16:16:39 +08:00
Casey Bodley
8ef3465f0e
Merge pull request #191 from theanalyst/policy-conditionals
test bucket policy with conditionals 

Reviewed-by: Adam C. Emerson <aemerson@redhat.com>
Reviewed-by: Casey Bodley <cbodley@redhat.com>
2018-01-25 11:06:06 -05:00
Malcolm Lee
800a9a758f Add test for multipart copy without 'x-amz-copy-source-range' header 2018-01-24 13:56:02 +08:00
Abhishek Lekshmanan
5da742036e policy: refactor make_json_policy to use the new Policy classes
Since make_json_policy redundantly does most of the same work,
refactor it to use the new policy classes instead.

Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
2018-01-17 10:56:33 +01:00
Abhishek Lekshmanan
47e3772e0b policy: test existingobject tag on get acl
Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
2018-01-17 10:56:33 +01:00
Abhishek Lekshmanan
a7e619b7ca policy: test put object with request object tag 2018-01-17 10:56:32 +01:00
Abhishek Lekshmanan
ebb31e02f9 policy: test policy with sse-c encryption
Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
2018-01-17 10:56:32 +01:00
Abhishek Lekshmanan
afa108d742 typo fix for pub_obj_acl
Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
2018-01-17 10:56:32 +01:00
Abhishek Lekshmanan
fc0e55f7e1 policy: add S3 tests for grants conditionals
Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
2018-01-17 10:56:32 +01:00
Abhishek Lekshmanan
26adb84b30 move ListBucket to new policy class 2018-01-17 10:56:32 +01:00
Abhishek Lekshmanan
ecea466666 policy: add a new policy class to make creation of complex policies
Since policies can have allow/deny rules etc

Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
2018-01-17 10:56:32 +01:00
Abhishek Lekshmanan
ef827b745e policy: test put object with canned acl
Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
2018-01-17 10:56:32 +01:00
Abhishek Lekshmanan
35c9fcd6ae policy: test metadata copy key 2018-01-17 10:56:32 +01:00
Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
2018-01-17 10:56:32 +01:00
Abhishek Lekshmanan
35cd3f77af policy: test put obj using copy conditionals
Test s3:x-amz-copy-source conditional on put obj with x-amz-copy-source

Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
2018-01-17 10:56:32 +01:00
Abhishek Lekshmanan
1012710ce7 policy: test for acl grants conditionals on put bucket acls
Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
2018-01-17 10:56:32 +01:00
Abhishek Lekshmanan
006f9d5f46 policy: add tests for put bucket acl with canned acl conditionals
Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
2018-01-17 10:56:31 +01:00
Abhishek Lekshmanan
b1e1ba0edf s3: policy tests for ListBucket with prefix, delimiter & max-keys
Allow conditionals on ListBucket, similar to the s3 docs, which allow for
these clauses.

Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
2018-01-17 10:56:31 +01:00
Abhishek Lekshmanan
3e650c5e6e policy: test put object tagging with conditionals
Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
2018-01-17 10:56:31 +01:00
Abhishek Lekshmanan
3511eabc5c policy: test get object tagging with conditionals
Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
2018-01-17 10:56:26 +01:00
Abhishek Lekshmanan
5167af2e73 add policy tests for get object with conditionals
Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
2018-01-17 10:55:02 +01:00
Abhishek Lekshmanan
5ae2337747 add bucket-policy attr to all the policy tests
Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
2018-01-17 10:52:49 +01:00
Abhishek Lekshmanan
fe35325ab8 test_s3: modify make_json_policy to support conditionals
Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
2018-01-17 10:52:49 +01:00
Abhishek Lekshmanan
eee2d9a82c bucket policy: improve the helper functions used for tagging upwards
- Improve `make_json_policy` to support conditionals in policy
- Move the helper functions for creating policies up so that bucket
  policy tests can use these
- add bucket-policy attribute to the tagging tests using policy

Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
2018-01-17 10:52:49 +01:00
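A hedged example of the kind of conditional policy these helpers build (bucket name and condition values are illustrative)::

    import json

    policy = {
        'Version': '2012-10-17',
        'Statement': [{
            'Effect': 'Allow',
            'Principal': '*',
            'Action': 's3:PutObject',
            'Resource': 'arn:aws:s3:::{}/*'.format(bucket),
            # only allow PUTs that request the public-read canned ACL
            'Condition': {'StringEquals': {'s3:x-amz-acl': 'public-read'}},
        }],
    }
    client.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))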
Matt Benjamin
6bef6ad125
Merge pull request #204 from linuxbox2/wip-post-ctype
Add test case for POST with no Content-Type header
2018-01-04 13:41:44 -05:00
Matt Benjamin
f5ad2bfaee Add test case for POST with no Content-Type header
Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
2017-12-20 16:55:39 -05:00
Casey Bodley
f318716c63
Merge pull request #199 from cfanz/cfanz-version-id-wip
Add test case for getting version-id when uploading objects to a bucket

Reviewed-by: Casey Bodley <cbodley@redhat.com>
2017-11-27 14:57:26 -05:00
Xinying Song
9506ac220f Add test case for getting version-id when uploading objects to a bucket
For a default bucket and a versioning-suspended bucket, no version-id should be returned on upload.
For a versioning-enabled bucket, a non-empty version-id should be returned, equal to what
the bucket listing shows.

Signed-off-by: Xinying Song <songxinying@cloudin.cn>
2017-11-22 09:26:13 +08:00
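A sketch of the versioning-enabled case (placeholder names)::

    client.put_bucket_versioning(
        Bucket=bucket, VersioningConfiguration={'Status': 'Enabled'})
    resp = client.put_object(Bucket=bucket, Key='key', Body=b'v1')
    vid = resp.get('VersionId')   # non-empty on a versioning-enabled bucket
    versions = client.list_object_versions(Bucket=bucket)['Versions']
    assert vid in [v['VersionId'] for v in versions]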
Casey Bodley
92c4ac5a59
Merge pull request #196 from ZVampirEM77/wip-lc-typos
Fix a typo in the lifecycle test case

Reviewed-by: Casey Bodley <cbodley@redhat.com>
2017-11-09 09:02:57 -05:00
vasukulkarni
33b1589969
Merge pull request #195 from ceph/wip-fix-416
Check for either 400 or 416 response code for invalid range test
2017-11-07 14:22:45 -08:00
Enming Zhang
52d9c955df cleanup unneeded comments
Signed-off-by: Enming Zhang <enming.zhang@umcloud.com>
2017-11-07 21:23:04 +08:00
Enming Zhang
19176daaf9 Fix a typo in the lifecycle test case
According to the commit 9204283def,
lc debug interval should be set to 10s.

Signed-off-by: Enming Zhang <enming.zhang@umcloud.com>
2017-11-07 21:23:04 +08:00
Vasu Kulkarni
e208a74a05 check for either 400 or 416 response code for invalid range test
Signed-off-by: Vasu Kulkarni <vasu@redhat.com>
2017-11-06 11:35:32 -08:00
Yehuda Sadeh
5138dd451d Merge pull request #189 from joke-lee/list_bucket_versiong_with_marker
test for https://github.com/ceph/ceph/pull/17934

Reviewed-by: Yehuda Sadeh <yehuda@redhat.com>
2017-10-26 09:40:40 -07:00
yuliyang
2521183646 rgw: test cors option request with Access-Control-Request-Headers
test for https://github.com/ceph/ceph/pull/18556

Signed-off-by: yuliyang <yuliyang@cmss.chinamobile.com>
2017-10-26 21:46:28 +08:00
Yehuda Sadeh
6cb65b6fa8 Merge pull request #190 from cbodley/wip-pr-17882
expect 400 for missing sse headers

Reviewed-by: Yehuda Sadeh <yehuda@redhat.com>
2017-10-10 11:49:42 -07:00
Casey Bodley
42debb8be7 expect 400 for missing sse headers
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2017-10-10 13:49:37 -04:00
Yehuda Sadeh
a2a383e77f Merge pull request #171 from cbodley/wip-amz-date-precedence
check precedence of Date and X-Amz-Date in signatures

Reviewed-by: Yehuda Sadeh <yehuda@redhat.com>
2017-09-25 19:15:07 +03:00
yuliyang
f464a5a3b1 test for https://github.com/ceph/ceph/pull/17934
Signed-off-by: yuliyang <yuliyang@cmss.chinamobile.com>
2017-09-24 11:56:36 +08:00
Yehuda Sadeh
0826df92d2 Merge pull request #186 from tipabu/check-grant-ordering
Care less about ordering when checking grants

Reviewed-by: Yehuda Sadeh <yehuda@redhat.com>
2017-09-20 11:55:52 +03:00
Yehuda Sadeh
94d5be6ace Merge pull request #185 from theanalyst/lc-time-fixes
lc: Give more flexibility for LC expiration times

Reviewed-by: Yehuda Sadeh <yehuda@redhat.com>
2017-09-19 12:57:33 +03:00
Tim Burke
301272c0ef Care less about ordering when checking grants
Some tests, like test_object_header_acl_grants and
test_bucket_header_acl_grants, use the same id for all permissions.
Sort() is stable, so those tests end up testing the order of acl grants
returned by Boto. Not sure, but I think this may in turn depend on the
order of HTTP headers, where the order is not significant.

Signed-off-by: Tim Burke <tim.burke@gmail.com>
2017-09-12 13:24:00 -06:00
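One way to make the comparison order-insensitive is to sort on every distinguishing field before comparing, e.g. (a sketch, not the repo's helper)::

    def normalize(grants):
        # grants sharing the same grantee id still differ by permission,
        # so sort on both rather than relying on response order
        return sorted(grants,
                      key=lambda g: (g['Grantee'].get('ID', ''), g['Permission']))

    assert normalize(resp['Grants']) == normalize(expected_grants)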
Abhishek Lekshmanan
9204283def lc: Give more flexibility for LC expiration times
Earlier values expected a lc debug interval of 2s, which is a pretty
short time window and can often lead to failures if the processing
didn't complete within the next day. This commit assumes the currently
configured LC debug interval of 10s, and gives time intervals using the
following logic:

Worst case:
LC start-time : 00:00
obj upload    : 00:01
LC run1       : 00:10 -> object not expired as it is only 9s old
LC run2       : 00:20 -> object will expire in this run. However, we
can't guess exactly when the run will complete; for a moderate number of
objects it can take anywhere from 1s up to 10s. Allowing it a wiggle
room of around 8s to complete, and given the number of objects in a
teuthology run, it is very probable that the object is already deleted
within this time, so at 28s we should have seen day-1 objects being
expired.

Best case:
LC start-time: 00:00
obj upload : 00:09
LC run1 : 00:10
LC run2 : 00:20 -> obj expires, so the elapsed time is around 11-19s
(it could also be close to 10s). We should probably configure the LC
lock time to 10s as well, to ensure that the lock isn't held for the
default 60s, in which case the object might expire in a time greater
than the lock interval.

Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
2017-09-07 11:50:00 +02:00
Yehuda Sadeh
b5e72953fa Merge pull request #173 from chuang-he/fix_decrypt
Data encryption does not follow the AWS agreement

Reviewed-by: Casey Bodley <cbodley@redhat.com>
2017-08-30 15:53:36 +03:00
hechuang
1ca6a94e6a modify test_sse_kms_method_head() for aws standard
Signed-off-by: hechuang <hechuang@xsky.com>
2017-08-17 12:07:41 +08:00
hechuang
b554973239 add test_sse_kms_read_declare() for aws standard
Signed-off-by: hechuang <hechuang@xsky.com>
2017-08-17 12:07:06 +08:00
hechuang
58944d0ba6 rgw: Data encryption does not follow the AWS agreement
Encryption request headers should not be sent for GET and HEAD
requests if your object uses SSE-KMS/SSE-S3; otherwise you'll get an
HTTP 400 BadRequest error.

Signed-off-by: hechuang <hechuang@xsky.com>
2017-08-17 12:04:53 +08:00
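A sketch of the negative case (placeholder names and key material); sending SSE-C headers on a GET of an SSE-KMS object should be rejected::

    from botocore.exceptions import ClientError

    try:
        client.get_object(Bucket=bucket, Key=key,
                          SSECustomerAlgorithm='AES256',
                          SSECustomerKey=ssec_key)  # boto3 derives the key MD5
        assert False, 'expected a 400 error'
    except ClientError as e:
        assert e.response['ResponseMetadata']['HTTPStatusCode'] == 400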
Andrew Gaul
a1b139c608 Add missing versioning tag
Signed-off-by: Andrew Gaul <andrew@gaul.org>
2017-08-16 14:08:58 -07:00
Yehuda Sadeh
425d209989 Merge pull request #179 from andrewgaul/copy-part-invalid-range
Add test for invalid copy part range

Reviewed-by: Yehuda Sadeh <yehuda@redhat.com>
2017-08-16 09:36:07 -07:00
Yehuda Sadeh
4ff6b044c2 Merge pull request #181 from theanalyst/lc-expir-disable
lc: set attribute as lifecycle_time for lc tests using timing ops

Reviewed-by: Casey Bodley <cbodley@redhat.com>
2017-08-15 08:43:53 -07:00
Abhishek Lekshmanan
edeb956a82 lc: add attr lifecycle_expiration for lc tests using expiration
Allows us to additionally filter out these tests which currently fail in
the teuthology runs

Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
2017-08-15 10:42:00 +02:00
Yehuda Sadeh
ca754b02f2 Merge pull request #180 from theanalyst/lc-filter
lifecycle: add test cases using filter & no id

Reviewed-by: Yehuda Sadeh <yehuda@redhat.com>
2017-08-14 23:06:36 -07:00
Abhishek Lekshmanan
cc2d6f076f lifecycle: add test cases using filter & no id
Added test cases for LC policies with no ID in xml, no prefix and an
empty and non empty Filter.

Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
2017-08-11 13:52:21 +02:00
Andrew Gaul
b068a07301 Add test for invalid copy part range
References kahing/goofys#212.

Signed-off-by: Andrew Gaul <andrew@gaul.org>
2017-08-01 19:03:31 -07:00
Casey Bodley
a2f73f6a84 check precedence of Date and X-Amz-Date in signatures
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2017-06-21 10:19:12 -04:00
44 changed files with 23966 additions and 13422 deletions

.gitignore

@@ -10,5 +10,6 @@
/*.egg-info
/virtualenv
/venv
config.yaml

README.rst

@@ -2,96 +2,101 @@
S3 compatibility tests
========================
This is a set of completely unofficial Amazon AWS S3 compatibility
tests, that will hopefully be useful to people implementing software
that exposes an S3-like API.
This is a set of unofficial Amazon AWS S3 compatibility
tests, that can be useful to people implementing software
that exposes an S3-like API. The tests use the Boto2 and Boto3 libraries.
The tests only cover the REST interface.
The tests use the Tox tool. To get started, ensure you have the ``tox``
software installed; e.g. on Debian/Ubuntu::
The tests use the Boto library, so any e.g. HTTP-level differences
that Boto papers over, the tests will not be able to discover. Raw
HTTP tests may be added later.
The tests use the Nose test framework. To get started, ensure you have
the ``virtualenv`` software installed; e.g. on Debian/Ubuntu::
sudo apt-get install python-virtualenv
and then run::
./bootstrap
sudo apt-get install tox
You will need to create a configuration file with the location of the
service and two different credentials, something like this::
service and two different credentials. A sample configuration file named
``s3tests.conf.SAMPLE`` has been provided in this repo. This file can be
used to run the s3 tests on a Ceph cluster started with vstart.
[DEFAULT]
## this section is just used as default for all the "s3 *"
## sections, you can place these variables also directly there
Once you have that file copied and edited, you can run the tests with::
## replace with e.g. "localhost" to run against local software
host = s3.amazonaws.com
S3TEST_CONF=your.conf tox
## uncomment the port to use something other than 80
# port = 8080
You can specify which directory of tests to run::
## say "no" to disable TLS
is_secure = yes
S3TEST_CONF=your.conf tox -- s3tests_boto3/functional
[fixtures]
## all the buckets created will start with this prefix;
## {random} will be filled with random characters to pad
## the prefix to 30 characters long, and avoid collisions
bucket prefix = YOURNAMEHERE-{random}-
You can specify which file of tests to run::
[s3 main]
## the tests assume two accounts are defined, "main" and "alt".
S3TEST_CONF=your.conf tox s3tests_boto3/functional/test_s3.py
## user_id is a 64-character hexstring
user_id = 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
You can specify which test to run::
## display name typically looks more like a unix login, "jdoe" etc
display_name = youruseridhere
## replace these with your access keys
access_key = ABCDEFGHIJKLMNOPQRST
secret_key = abcdefghijklmnopqrstuvwxyzabcdefghijklmn
## replace with key id obtained when secret is created, or delete if KMS not tested
kms_keyid = 01234567-89ab-cdef-0123-456789abcdef
[s3 alt]
## another user account, used for ACL-related tests
user_id = 56789abcdef0123456789abcdef0123456789abcdef0123456789abcdef01234
display_name = john.doe
## the "alt" user needs to have email set, too
email = john.doe@example.com
access_key = NOPQRSTUVWXYZABCDEFG
secret_key = nopqrstuvwxyzabcdefghijklmnabcdefghijklm
Once you have that, you can run the tests with::
S3TEST_CONF=your.conf ./virtualenv/bin/nosetests
To gather a list of tests being run, use the flags::
-v --collect-only
You can specify what test(s) to run::
S3TEST_CONF=your.conf ./virtualenv/bin/nosetests s3tests.functional.test_s3:test_bucket_list_empty
S3TEST_CONF=your.conf tox s3tests_boto3/functional/test_s3.py::test_bucket_list_empty
Some tests have attributes set based on their current reliability and
things like AWS not enforcing their spec strictly. You can filter tests
based on their attributes::
S3TEST_CONF=aws.conf ./virtualenv/bin/nosetests -a '!fails_on_aws'
S3TEST_CONF=aws.conf tox -- -m 'not fails_on_aws'
Most of the tests have both Boto3 and Boto2 versions. Tests written in
Boto2 are in the ``s3tests`` directory. Tests written in Boto3 are
located in the ``s3tests_boto3`` directory.
TODO
====
You can run only the boto3 tests with::
- We should assume read-after-write consistency, and make the tests
actually request such a location.
http://aws.amazon.com/s3/faqs/#What_data_consistency_model_does_Amazon_S3_employ
S3TEST_CONF=your.conf tox -- s3tests_boto3/functional
========================
STS compatibility tests
========================
This section contains some basic tests for the AssumeRole, GetSessionToken and AssumeRoleWithWebIdentity APIs. The test file is located under ``s3tests_boto3/functional``.
To run the STS tests, the vstart cluster should be started with the following parameter (in addition to any parameters already used with it)::
vstart.sh -o rgw_sts_key=abcdefghijklmnop -o rgw_s3_auth_use_sts=true
Note that the ``rgw_sts_key`` can be set to anything that is 128 bits in length.
After the cluster is up the following command should be executed::
radosgw-admin caps add --tenant=testx --uid="9876543210abcdef0123456789abcdef0123456789abcdef0123456789abcdef" --caps="roles=*"
You can run only the sts tests (all three APIs) with::
S3TEST_CONF=your.conf tox s3tests_boto3/functional/test_sts.py
You can filter tests based on the attributes. There is an attribute named ``test_of_sts`` to run AssumeRole and GetSessionToken tests and ``webidentity_test`` to run the AssumeRoleWithWebIdentity tests. If you want to execute only the ``test_of_sts`` tests, you can apply that filter as below::
S3TEST_CONF=your.conf tox -- -m test_of_sts s3tests_boto3/functional/test_sts.py
For running ``webidentity_test`` you'll need to have Keycloak running.
In order to run any STS test you'll need to add an "iam" section to the config file. For further reference on how your config file should look, check ``s3tests.conf.SAMPLE``.
========================
IAM policy tests
========================
This is a set of IAM policy tests.
This section covers tests for user policies such as Put, Get, List, Delete, user policies with s3 actions, conflicting user policies etc.
These tests use the Boto3 library. Tests are written in the ``s3tests_boto3`` directory.
These iam policy tests use two users with profile names "iam" and "s3 alt", as mentioned in s3tests.conf.SAMPLE.
If the Ceph cluster is started with vstart, the above two users are created as part of vstart with the same access key, secret key etc. as mentioned in s3tests.conf.SAMPLE.
Of those two users, the "iam" user has the capability --caps=user-policy=* and the "s3 alt" user has no capabilities.
Adding the above capability to the "iam" user is also taken care of by vstart (if the Ceph cluster is started with vstart).
To run these tests, create a configuration file with sections "iam" and "s3 alt"; refer to s3tests.conf.SAMPLE.
Once you have that configuration file copied and edited, you can run all the tests with::
S3TEST_CONF=your.conf tox s3tests_boto3/functional/test_iam.py
You can also specify specific test to run::
S3TEST_CONF=your.conf tox s3tests_boto3/functional/test_iam.py::test_put_user_policy
Some tests have attributes set such as "fails_on_rgw".
You can filter tests based on their attributes::
S3TEST_CONF=your.conf tox -- s3tests_boto3/functional/test_iam.py -m 'not fails_on_rgw'
- Test direct HTTP downloads, like a web browser would do.

bootstrap (deleted)

@@ -1,51 +0,0 @@
#!/bin/sh
set -e
if [ -f /etc/debian_version ]; then
for package in python-pip python-virtualenv python-dev libevent-dev libffi-dev libxml2-dev libxslt-dev zlib1g-dev; do
if [ "$(dpkg --status -- $package 2>/dev/null|sed -n 's/^Status: //p')" != "install ok installed" ]; then
# add a space after old values
missing="${missing:+$missing }$package"
fi
done
if [ -n "$missing" ]; then
echo "$0: missing required DEB packages. Installing via sudo." 1>&2
sudo apt-get -y install $missing
fi
elif [ -f /etc/fedora-release ]; then
for package in python-pip python2-virtualenv python-devel libevent-devel libffi-devel libxml2-devel libxslt-devel zlib-devel; do
if [ "$(rpm -qa $package 2>/dev/null)" == "" ]; then
missing="${missing:+$missing }$package"
fi
done
if [ -n "$missing" ]; then
echo "$0: missing required RPM packages. Installing via sudo." 1>&2
sudo yum -y install $missing
fi
elif [ -f /etc/redhat-release ]; then
for package in python-virtualenv python-devel libevent-devel libffi-devel libxml2-devel libxslt-devel zlib-devel; do
if [ "$(rpm -qa $package 2>/dev/null)" == "" ]; then
missing="${missing:+$missing }$package"
fi
done
if [ -n "$missing" ]; then
echo "$0: missing required RPM packages. Installing via sudo." 1>&2
sudo yum -y install $missing
fi
fi
virtualenv --no-site-packages --distribute virtualenv
# avoid pip bugs
./virtualenv/bin/pip install --upgrade pip
# slightly old version of setuptools; newer fails w/ requests 0.14.0
./virtualenv/bin/pip install setuptools==32.3.1
./virtualenv/bin/pip install -r requirements.txt
# forbid setuptools from using the network because it'll try to use
# easy_install, and we really wanted pip; next line will fail if pip
# requirements.txt does not match setup.py requirements -- sucky but
# good enough for now
./virtualenv/bin/python setup.py develop

config.yaml.SAMPLE (deleted)

@@ -1,85 +0,0 @@
fixtures:
## All the buckets created will start with this prefix;
## {random} will be filled with random characters to pad
## the prefix to 30 characters long, and avoid collisions
bucket prefix: YOURNAMEHERE-{random}-
file_generation:
groups:
## File generation works by creating N groups of files. Each group of
## files is defined by three elements: number of files, avg(filesize),
## and stddev(filesize) -- in that order.
- [1, 2, 3]
- [4, 5, 6]
## Config for the readwrite tool.
## The readwrite tool concurrently reads and writes to files in a
## single bucket for a set duration.
## Note: the readwrite tool does not need the s3.alt connection info.
## only s3.main is used.
readwrite:
## The number of reader and writer worker threads. This sets how many
## files will be read and written concurrently.
readers: 2
writers: 2
## The duration to run in seconds. Doesn't count setup/warmup time
duration: 15
files:
## The number of files to use. This number of files is created during the
## "warmup" phase. After the warmup, readers will randomly pick a file to
## read, and writers will randomly pick a file to overwrite
num: 3
## The file size to use, in KB
size: 1024
## The stddev for the file size, in KB
stddev: 0
s3:
## This section contains all the connection information
defaults:
## This section contains the defaults for all of the other connections
## below. You can also place these variables directly there.
## Replace with e.g. "localhost" to run against local software
host: s3.amazonaws.com
## Uncomment the port to use something other than 80
# port: 8080
## Say "no" to disable TLS.
is_secure: yes
## The tests assume two accounts are defined, "main" and "alt". You
## may add other connections to be instantiated as well, however
## any additional ones will not be used unless your tests use them.
main:
## The User ID that the S3 provider gives you. For AWS, this is
## typically a 64-char hexstring.
user_id: AWS_USER_ID
## Display name typically looks more like a unix login, "jdoe" etc
display_name: AWS_DISPLAY_NAME
## The email for this account.
email: AWS_EMAIL
## Replace these with your access keys.
access_key: AWS_ACCESS_KEY
secret_key: AWS_SECRET_KEY
## If KMS is tested, this is the barbican key id. Optional.
kms_keyid: barbican_key_id
alt:
## Another user account, used for ACL-related tests.
user_id: AWS_USER_ID
display_name: AWS_DISPLAY_NAME
email: AWS_EMAIL
access_key: AWS_ACCESS_KEY
secret_key: AWS_SECRET_KEY
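For reference, the [N, avg, stddev] triples under file_generation above describe groups of generated files. A minimal Python sketch of that sampling, assuming sizes are drawn from a normal distribution clamped at zero (the helper name is illustrative, not the tool's actual code):

import random

def generate_file_sizes(groups):
    # each group is [count, mean_size, stddev_size], per the comment above
    for count, mean, stddev in groups:
        for _ in range(count):
            # assumption: normally distributed sizes, clamped to non-negative
            yield max(0.0, random.gauss(mean, stddev))

sizes = list(generate_file_sizes([[1, 2, 3], [4, 5, 6]]))  # 1 + 4 = 5 sizes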

pytest.ini Normal file

@ -0,0 +1,51 @@
[pytest]
markers =
abac_test
appendobject
auth_aws2
auth_aws4
auth_common
bucket_policy
bucket_encryption
checksum
cloud_transition
encryption
fails_on_aws
fails_on_dbstore
fails_on_dho
fails_on_mod_proxy_fcgi
fails_on_rgw
fails_on_s3
fails_with_subdomain
group
group_policy
iam_account
iam_cross_account
iam_role
iam_tenant
iam_user
lifecycle
lifecycle_expiration
lifecycle_transition
list_objects_v2
object_lock
role_policy
session_policy
s3select
s3website
s3website_routing_rules
s3website_redirect_location
sns
sse_s3
storage_class
tagging
test_of_sts
token_claims_trust_policy_test
token_principal_tag_role_policy_test
token_request_tag_trust_policy_test
token_resource_tags_test
token_role_tags_test
token_tag_keys_test
user_policy
versioning
webidentity_test
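Registering these markers lets runs be filtered with pytest's -m option, for example pytest -m 'lifecycle and not fails_on_rgw' to select or exclude whole families of tests.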


@ -1,569 +0,0 @@
#
# FUZZ testing uses a probabilistic grammar to generate
# pseudo-random requests which will be sent to a server
# over long periods of time, with the goal of turning up
# garbage-input and buffer-overflow sensitivities.
#
# Each state ...
# generates/chooses contents for variables
# chooses a next state (from a weighted set of options)
#
# A terminal state is one from which there are no successors,
# at which point a message is generated (from the variables)
# and sent to the server.
#
# The test program doesn't actually know (or care) what
# response should be returned ... since the goal is to
# crash the server.
#
start:
set:
garbage:
- '{random 10-3000 printable}'
- '{random 10-1000 binary}'
garbage_no_whitespace:
- '{random 10-3000 printable_no_whitespace}'
- '{random 10-1000 binary_no_whitespace}'
acl_header:
- 'private'
- 'public-read'
- 'public-read-write'
- 'authenticated-read'
- 'bucket-owner-read'
- 'bucket-owner-full-control'
- '{random 3000 letters}'
- '{random 100-1000 binary_no_whitespace}'
choices:
- bucket
- object
bucket:
set:
urlpath: '/{bucket}'
choices:
- 13 bucket_get
- 8 bucket_put
- 5 bucket_delete
- bucket_garbage_method
bucket_garbage_method:
set:
method:
- '{random 1-100 printable}'
- '{random 10-100 binary}'
bucket:
- '{bucket_readable}'
- '{bucket_not_readable}'
- '{bucket_writable}'
- '{bucket_not_writable}'
- '2 {garbage_no_whitespace}'
choices:
- bucket_get_simple
- bucket_get_filtered
- bucket_get_uploads
- bucket_put_create
- bucket_put_versioning
- bucket_put_simple
bucket_delete:
set:
method: DELETE
bucket:
- '{bucket_writable}'
- '{bucket_not_writable}'
- '2 {garbage_no_whitespace}'
query:
- null
- policy
- website
- '2 {garbage_no_whitespace}'
choices: []
bucket_get:
set:
method: GET
bucket:
- '{bucket_readable}'
- '{bucket_not_readable}'
- '2 {garbage_no_whitespace}'
choices:
- 11 bucket_get_simple
- bucket_get_filtered
- bucket_get_uploads
bucket_get_simple:
set:
query:
- acl
- policy
- location
- logging
- notification
- versions
- requestPayment
- versioning
- website
- '2 {garbage_no_whitespace}'
choices: []
bucket_get_uploads:
set:
delimiter:
- null
- '3 delimiter={garbage_no_whitespace}'
prefix:
- null
- '3 prefix={garbage_no_whitespace}'
key_marker:
- null
- 'key-marker={object_readable}'
- 'key-marker={object_not_readable}'
- 'key-marker={invalid_key}'
- 'key-marker={random 100-1000 printable_no_whitespace}'
max_uploads:
- null
- 'max-uploads={random 1-5 binary_no_whitespace}'
- 'max-uploads={random 1-1000 digits}'
upload_id_marker:
- null
- '3 upload-id-marker={random 0-1000 printable_no_whitespace}'
query:
- 'uploads'
- 'uploads&{delimiter}&{prefix}'
- 'uploads&{max_uploads}&{key_marker}&{upload_id_marker}'
- '2 {garbage_no_whitespace}'
choices: []
bucket_get_filtered:
set:
delimiter:
- 'delimiter={garbage_no_whitespace}'
prefix:
- 'prefix={garbage_no_whitespace}'
marker:
- 'marker={object_readable}'
- 'marker={object_not_readable}'
- 'marker={invalid_key}'
- 'marker={random 100-1000 printable_no_whitespace}'
max_keys:
- 'max-keys={random 1-5 binary_no_whitespace}'
- 'max-keys={random 1-1000 digits}'
query:
- null
- '{delimiter}&{prefix}'
- '{max_keys}&{marker}'
- '2 {garbage_no_whitespace}'
choices: []
bucket_put:
set:
bucket:
- '{bucket_writable}'
- '{bucket_not_writable}'
- '2 {garbage_no_whitespace}'
method: PUT
choices:
- bucket_put_simple
- bucket_put_create
- bucket_put_versioning
bucket_put_create:
set:
body:
- '2 {garbage}'
- '<CreateBucketConfiguration><LocationConstraint>{random 2-10 binary}</LocationConstraint></CreateBucketConfiguration>'
headers:
- ['0-5', 'x-amz-acl', '{acl_header}']
choices: []
bucket_put_versioning:
set:
body:
- '{garbage}'
- '4 <VersioningConfiguration>{versioning_status}{mfa_delete_body}</VersioningConfiguration>'
mfa_delete_body:
- null
- '<Status>{random 2-10 binary}</Status>'
- '<Status>{random 2000-3000 printable}</Status>'
versioning_status:
- null
- '<MfaDelete>{random 2-10 binary}</MfaDelete>'
- '<MfaDelete>{random 2000-3000 printable}</MfaDelete>'
mfa_header:
- '{random 10-1000 printable_no_whitespace} {random 10-1000 printable_no_whitespace}'
headers:
- ['0-1', 'x-amz-mfa', '{mfa_header}']
choices: []
bucket_put_simple:
set:
body:
- '{acl_body}'
- '{policy_body}'
- '{logging_body}'
- '{notification_body}'
- '{request_payment_body}'
- '{website_body}'
acl_body:
- null
- '<AccessControlPolicy>{owner}{acl}</AccessControlPolicy>'
owner:
- null
- '7 <Owner>{id}{display_name}</Owner>'
id:
- null
- '<ID>{random 10-200 binary}</ID>'
- '<ID>{random 1000-3000 printable}</ID>'
display_name:
- null
- '2 <DisplayName>{random 10-200 binary}</DisplayName>'
- '2 <DisplayName>{random 1000-3000 printable}</DisplayName>'
- '2 <DisplayName>{random 10-300 letters}@{random 10-300 letters}.{random 2-4 letters}</DisplayName>'
acl:
- null
- '10 <AccessControlList><Grant>{grantee}{permission}</Grant></AccessControlList>'
grantee:
- null
- '7 <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser">{id}{display_name}</Grantee>'
permission:
- null
- '7 <Permission>{permission_value}</Permission>'
permission_value:
- '2 {garbage}'
- FULL_CONTROL
- WRITE
- WRITE_ACP
- READ
- READ_ACP
policy_body:
- null
- '2 {garbage}'
logging_body:
- null
- '<BucketLoggingStatus xmlns="http://doc.s3.amazonaws.com/2006-03-01" />'
- '<BucketLoggingStatus xmlns="http://doc.s3.amazonaws.com/2006-03-01"><LoggingEnabled>{bucket}{target_prefix}{target_grants}</LoggingEnabled></BucketLoggingStatus>'
target_prefix:
- null
- '<TargetPrefix>{random 10-1000 printable}</TargetPrefix>'
- '<TargetPrefix>{random 10-1000 binary}</TargetPrefix>'
target_grants:
- null
- '10 <TargetGrants><Grant>{grantee}{permission}</Grant></TargetGrants>'
notification_body:
- null
- '<NotificationConfiguration />'
- '2 <NotificationConfiguration><TopicConfiguration>{topic}{event}</TopicConfiguration></NotificationConfiguration>'
topic:
- null
- '2 <Topic>{garbage}</Topic>'
event:
- null
- '<Event>s3:ReducedRedundancyLostObject</Event>'
- '2 <Event>{garbage}</Event>'
request_payment_body:
- null
- '<RequestPaymentConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Payer>{payer}</Payer></RequestPaymentConfiguration>'
payer:
- Requester
- BucketOwner
- '2 {garbage}'
website_body:
- null
- '<WebsiteConfiguration>{index_doc}{error_doc}{routing_rules}</WebsiteConfiguration>'
- '<WebsiteConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">{index_doc}{error_doc}{routing_rules}</WebsiteConfiguration>'
index_doc:
- null
- '<IndexDocument>{filename}</IndexDocument>'
- '<IndexDocument><Suffix>{filename}</Suffix></IndexDocument>'
filename:
- null
- '2 {garbage}'
- '{random 2-10 printable}.html'
- '{random 100-1000 printable}.html'
- '{random 100-1000 printable_no_whitespace}.html'
error_doc:
- null
- '<ErrorDocument>{filename}</ErrorDocument>'
- '<ErrorDocument><Key>{filename}</Key></ErrorDocument>'
routing_rules:
- null
- ['0-10', '<RoutingRules>{routing_rules_content}</RoutingRules>']
routing_rules_content:
- null
- ['0-1000', '<RoutingRule>{routing_rule}</RoutingRule>']
routing_rule:
- null
- ['0-2', '{routing_rule_condition}{routing_rule_redirect}']
routing_rule_condition:
- null
- ['0-10', '<Condition>{KeyPrefixEquals}{HttpErrorCodeReturnedEquals}</Condition>']
KeyPrefixEquals:
- null
- ['0-2', '<KeyPrefixEquals>{filename}</KeyPrefixEquals>']
HttpErrorCodeReturnedEquals:
- null
- ['0-2', '<HttpErrorCodeReturnedEquals>{HttpErrorCode}</HttpErrorCodeReturnedEquals>']
HttpErrorCode:
- null
- '2 {garbage}'
- '{random 1-10 digits}'
- '{random 1-100 printable}'
routing_rule_redirect:
- null
- '{protocol}{hostname}{ReplaceKeyPrefixWith}{ReplaceKeyWith}{HttpRedirectCode}'
protocol:
- null
- '<Protocol>http</Protocol>'
- '<Protocol>https</Protocol>'
- ['1-5', '<Protocol>{garbage}</Protocol>']
- ['1-5', '<Protocol>{filename}</Protocol>']
hostname:
- null
- ['1-5', '<HostName>{hostname_val}</HostName>']
- ['1-5', '<HostName>{garbage}</HostName>']
hostname_val:
- null
- '{random 1-255 printable_no_whitespace}'
- '{random 1-255 printable}'
- '{random 1-255 punctuation}'
- '{random 1-255 whitespace}'
- '{garbage}'
ReplaceKeyPrefixWith:
- null
- ['1-5', '<ReplaceKeyPrefixWith>{filename}</ReplaceKeyPrefixWith>']
HttpRedirectCode:
- null
- ['1-5', '<HttpRedirectCode>{random 1-10 digits}</HttpRedirectCode>']
- ['1-5', '<HttpRedirectCode>{random 1-100 printable}</HttpRedirectCode>']
- ['1-5', '<HttpRedirectCode>{filename}</HttpRedirectCode>']
choices: []
object:
set:
urlpath: '/{bucket}/{object}'
range_header:
- null
- 'bytes={random 1-2 digits}-{random 1-4 digits}'
- 'bytes={random 1-1000 binary_no_whitespace}'
if_modified_since_header:
- null
- '2 {garbage_no_whitespace}'
if_match_header:
- null
- '2 {garbage_no_whitespace}'
if_none_match_header:
- null
- '2 {garbage_no_whitespace}'
choices:
- object_delete
- object_get
- object_put
- object_head
- object_garbage_method
object_garbage_method:
set:
method:
- '{random 1-100 printable}'
- '{random 10-100 binary}'
bucket:
- '{bucket_readable}'
- '{bucket_not_readable}'
- '{bucket_writable}'
- '{bucket_not_writable}'
- '2 {garbage_no_whitespace}'
object:
- '{object_readable}'
- '{object_not_readable}'
- '{object_writable}'
- '{object_not_writable}'
- '2 {garbage_no_whitespace}'
choices:
- object_get_query
- object_get_head_simple
object_delete:
set:
method: DELETE
bucket:
- '5 {bucket_writable}'
- '{bucket_not_writable}'
- '{garbage_no_whitespace}'
object:
- '{object_writable}'
- '{object_not_writable}'
- '2 {garbage_no_whitespace}'
choices: []
object_get:
set:
method: GET
bucket:
- '5 {bucket_readable}'
- '{bucket_not_readable}'
- '{garbage_no_whitespace}'
object:
- '{object_readable}'
- '{object_not_readable}'
- '{garbage_no_whitespace}'
choices:
- 5 object_get_head_simple
- 2 object_get_query
object_get_query:
set:
query:
- 'torrent'
- 'acl'
choices: []
object_get_head_simple:
set: {}
headers:
- ['0-1', 'range', '{range_header}']
- ['0-1', 'if-modified-since', '{if_modified_since_header}']
- ['0-1', 'if-unmodified-since', '{if_modified_since_header}']
- ['0-1', 'if-match', '{if_match_header}']
- ['0-1', 'if-none-match', '{if_none_match_header}']
choices: []
object_head:
set:
method: HEAD
bucket:
- '5 {bucket_readable}'
- '{bucket_not_readable}'
- '{garbage_no_whitespace}'
object:
- '{object_readable}'
- '{object_not_readable}'
- '{garbage_no_whitespace}'
choices:
- object_get_head_simple
object_put:
set:
method: PUT
bucket:
- '5 {bucket_writable}'
- '{bucket_not_writable}'
- '{garbage_no_whitespace}'
object:
- '{object_writable}'
- '{object_not_writable}'
- '{garbage_no_whitespace}'
cache_control:
- null
- '{garbage_no_whitespace}'
- 'no-cache'
content_disposition:
- null
- '{garbage_no_whitespace}'
content_encoding:
- null
- '{garbage_no_whitespace}'
content_length:
- '{random 1-20 digits}'
- '{garbage_no_whitespace}'
content_md5:
- null
- '{garbage_no_whitespace}'
content_type:
- null
- 'binary/octet-stream'
- '{garbage_no_whitespace}'
expect:
- null
- '100-continue'
- '{garbage_no_whitespace}'
expires:
- null
- '{random 1-10000000 digits}'
- '{garbage_no_whitespace}'
meta_key:
- null
- 'foo'
- '{garbage_no_whitespace}'
meta_value:
- null
- '{garbage_no_whitespace}'
choices:
- object_put_simple
- object_put_acl
- object_put_copy
object_put_simple:
set: {}
headers:
- ['0-1', 'cache-control', '{cache_control}']
- ['0-1', 'content-disposition', '{content_disposition}']
- ['0-1', 'content-encoding', '{content_encoding}']
- ['0-1', 'content-length', '{content_length}']
- ['0-1', 'content-md5', '{content_md5}']
- ['0-1', 'content-type', '{content_type}']
- ['0-1', 'expect', '{expect}']
- ['0-1', 'expires', '{expires}']
- ['0-1', 'x-amz-acl', '{acl_header}']
- ['0-6', 'x-amz-meta-{meta_key}', '{meta_value}']
choices: []
object_put_acl:
set:
query: 'acl'
body:
- null
- '2 {garbage}'
- '<AccessControlPolicy>{owner}{acl}</AccessControlPolicy>'
owner:
- null
- '7 <Owner>{id}{display_name}</Owner>'
id:
- null
- '<ID>{random 10-200 binary}</ID>'
- '<ID>{random 1000-3000 printable}</ID>'
display_name:
- null
- '2 <DisplayName>{random 10-200 binary}</DisplayName>'
- '2 <DisplayName>{random 1000-3000 printable}</DisplayName>'
- '2 <DisplayName>{random 10-300 letters}@{random 10-300 letters}.{random 2-4 letters}</DisplayName>'
acl:
- null
- '10 <AccessControlList><Grant>{grantee}{permission}</Grant></AccessControlList>'
grantee:
- null
- '7 <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser">{id}{display_name}</Grantee>'
permission:
- null
- '7 <Permission>{permission_value}</Permission>'
permission_value:
- '2 {garbage}'
- FULL_CONTROL
- WRITE
- WRITE_ACP
- READ
- READ_ACP
headers:
- ['0-1', 'cache-control', '{cache_control}']
- ['0-1', 'content-disposition', '{content_disposition}']
- ['0-1', 'content-encoding', '{content_encoding}']
- ['0-1', 'content-length', '{content_length}']
- ['0-1', 'content-md5', '{content_md5}']
- ['0-1', 'content-type', '{content_type}']
- ['0-1', 'expect', '{expect}']
- ['0-1', 'expires', '{expires}']
- ['0-1', 'x-amz-acl', '{acl_header}']
choices: []
object_put_copy:
set: {}
headers:
- ['1-1', 'x-amz-copy-source', '{source_object}']
- ['0-1', 'x-amz-acl', '{acl_header}']
- ['0-1', 'x-amz-metadata-directive', '{metadata_directive}']
- ['0-1', 'x-amz-copy-source-if-match', '{if_match_header}']
- ['0-1', 'x-amz-copy-source-if-none-match', '{if_none_match_header}']
- ['0-1', 'x-amz-copy-source-if-modified-since', '{if_modified_since_header}']
- ['0-1', 'x-amz-copy-source-if-unmodified-since', '{if_modified_since_header}']
choices: []
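A minimal sketch of how a decision graph like the one above can be walked (a hypothetical helper, not the project's actual fuzzer; weight prefixes such as '13 bucket_get' are stripped rather than honored, so every successor is equally likely here):

import random

def expand(graph, state='start', env=None):
    # each state sets variables, then hands off to one of its successor
    # states; a state with empty 'choices' is terminal, and the
    # accumulated variables describe one pseudo-random request
    env = dict(env or {})
    node = graph[state]
    env.update(node.get('set', {}))
    choices = node.get('choices', [])
    if not choices:
        return env
    nxt = random.choice(choices).split()[-1]  # drop any weight prefix
    return expand(graph, nxt, env)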


@ -1,12 +1,15 @@
PyYAML
nose >=1.0.0
boto >=2.6.0
bunch >=1.0.0
boto3 >=1.0.0
# botocore-1.28 broke v2 signatures, see https://tracker.ceph.com/issues/58059
botocore <1.28.0
munch >=2.0.0
# 0.14 switches to libev, that means bootstrap needs to change too
gevent >=1.0
isodate >=0.4.4
requests ==0.14.0
pytz >=2011k
ordereddict
requests >=2.23.0
pytz
httplib2
lxml
pytest
tox

s3tests.conf.SAMPLE Normal file

@ -0,0 +1,171 @@
[DEFAULT]
## this section is just used for host, port and bucket_prefix
# host set for rgw in vstart.sh
host = localhost
# port set for rgw in vstart.sh
port = 8000
## say "False" to disable TLS
is_secure = False
## say "False" to disable SSL Verify
ssl_verify = False
[fixtures]
## all the buckets created will start with this prefix;
## {random} will be filled with random characters to pad
## the prefix to 30 characters long, and avoid collisions
bucket prefix = yournamehere-{random}-
# all the iam account resources (users, roles, etc) created
# will start with this name prefix
iam name prefix = s3-tests-
# all the iam account resources (users, roles, etc) created
# will start with this path prefix
iam path prefix = /s3-tests/
[s3 main]
# main display_name set in vstart.sh
display_name = M. Tester
# main user_id set in vstart.sh
user_id = testid
# main email set in vstart.sh
email = tester@ceph.com
# zonegroup api_name for bucket location
api_name = default
## main AWS access key
access_key = 0555b35654ad1656d804
## main AWS secret key
secret_key = h7GhxuBLTrlhVUyxSPUKUV8r/2EI4ngqJxD7iBdBYLhwluN30JaT3Q==
## replace with key id obtained when secret is created, or delete if KMS not tested
#kms_keyid = 01234567-89ab-cdef-0123-456789abcdef
## Storage classes
#storage_classes = "LUKEWARM, FROZEN"
## Lifecycle debug interval (default: 10)
#lc_debug_interval = 20
[s3 alt]
# alt display_name set in vstart.sh
display_name = john.doe
## alt email set in vstart.sh
email = john.doe@example.com
# alt user_id set in vstart.sh
user_id = 56789abcdef0123456789abcdef0123456789abcdef0123456789abcdef01234
# alt AWS access key set in vstart.sh
access_key = NOPQRSTUVWXYZABCDEFG
# alt AWS secret key set in vstart.sh
secret_key = nopqrstuvwxyzabcdefghijklmnabcdefghijklm
#[s3 cloud]
## to run the test cases with the "cloud_transition" attribute.
## Note: the waiting time may have to be tweaked depending on
## the I/O latency to the cloud endpoint.
## host set for cloud endpoint
# host = localhost
## port set for cloud endpoint
# port = 8001
## say "False" to disable TLS
# is_secure = False
## cloud endpoint credentials
# access_key = 0555b35654ad1656d804
# secret_key = h7GhxuBLTrlhVUyxSPUKUV8r/2EI4ngqJxD7iBdBYLhwluN30JaT3Q==
## storage class configured as cloud tier on local rgw server
# cloud_storage_class = CLOUDTIER
## The options below are optional and refine the cloud tier
## storage class configured above
# retain_head_object = false
# target_storage_class = Target_SC
# target_path = cloud-bucket
## another regular storage class to test multiple transition rules,
# storage_class = S1
[s3 tenant]
# tenant display_name set in vstart.sh
display_name = testx$tenanteduser
# tenant user_id set in vstart.sh
user_id = 9876543210abcdef0123456789abcdef0123456789abcdef0123456789abcdef
# tenant AWS access key set in vstart.sh
access_key = HIJKLMNOPQRSTUVWXYZA
# tenant AWS secret key set in vstart.sh
secret_key = opqrstuvwxyzabcdefghijklmnopqrstuvwxyzab
# tenant email set in vstart.sh
email = tenanteduser@example.com
# tenant name
tenant = testx
#following section needs to be added for all sts-tests
[iam]
#used for iam operations in sts-tests
#email from vstart.sh
email = s3@example.com
#user_id from vstart.sh
user_id = 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
#access_key from vstart.sh
access_key = ABCDEFGHIJKLMNOPQRST
#secret_key from vstart.sh
secret_key = abcdefghijklmnopqrstuvwxyzabcdefghijklmn
#display_name from vstart.sh
display_name = youruseridhere
# iam account root user for iam_account tests
[iam root]
access_key = AAAAAAAAAAAAAAAAAAaa
secret_key = aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
user_id = RGW11111111111111111
email = account1@ceph.com
# iam account root user in a different account than [iam root]
[iam alt root]
access_key = BBBBBBBBBBBBBBBBBBbb
secret_key = bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
user_id = RGW22222222222222222
email = account2@ceph.com
#following section needs to be added when you want to run Assume Role With Webidentity test
[webidentity]
#used for assume role with web identity test in sts-tests
#all parameters will be obtained from ceph/qa/tasks/keycloak.py
token=<access_token>
aud=<obtained after introspecting token>
sub=<obtained after introspecting token>
azp=<obtained after introspecting token>
user_token=<access token for a user, with attribute Department=[Engineering, Marketing]>
thumbprint=<obtained from x509 certificate>
KC_REALM=<name of the realm>
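Once the credentials are filled in, the suite locates this file through the S3TEST_CONF environment variable, which the setup() code further below reads before creating any connections.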


@ -1,142 +0,0 @@
#!/usr/bin/python
import sys
import os
import yaml
import optparse
NANOSECONDS = int(1e9)
# Output stats in a format similar to siege
# see http://www.joedog.org/index/siege-home
OUTPUT_FORMAT = """Stats for type: [{type}]
Transactions: {trans:>11} hits
Availability: {avail:>11.2f} %
Elapsed time: {elapsed:>11.2f} secs
Data transferred: {data:>11.2f} MB
Response time: {resp_time:>11.2f} secs
Transaction rate: {trans_rate:>11.2f} trans/sec
Throughput: {data_rate:>11.2f} MB/sec
Concurrency: {conc:>11.2f}
Successful transactions: {trans_success:>11}
Failed transactions: {trans_fail:>11}
Longest transaction: {trans_long:>11.2f}
Shortest transaction: {trans_short:>11.2f}
"""
def parse_options():
usage = "usage: %prog [options]"
parser = optparse.OptionParser(usage=usage)
parser.add_option(
"-f", "--file", dest="input", metavar="FILE",
help="Name of input YAML file. Default uses sys.stdin")
parser.add_option(
"-v", "--verbose", dest="verbose", action="store_true",
help="Enable verbose output")
(options, args) = parser.parse_args()
if not options.input and os.isatty(sys.stdin.fileno()):
parser.error("option -f required if no data is provided "
"in stdin")
return (options, args)
def main():
(options, args) = parse_options()
total = {}
durations = {}
min_time = {}
max_time = {}
errors = {}
success = {}
calculate_stats(options, total, durations, min_time, max_time, errors,
success)
print_results(total, durations, min_time, max_time, errors, success)
def calculate_stats(options, total, durations, min_time, max_time, errors,
success):
print 'Calculating statistics...'
f = sys.stdin
if options.input:
f = file(options.input, 'r')
for item in yaml.safe_load_all(f):
type_ = item.get('type')
if type_ not in ('r', 'w'):
continue # ignore any invalid items
if 'error' in item:
errors[type_] = errors.get(type_, 0) + 1
continue # skip rest of analysis for this item
else:
success[type_] = success.get(type_, 0) + 1
# parse the item
data_size = item['chunks'][-1][0]
duration = item['duration']
start = item['start']
end = start + duration / float(NANOSECONDS)
if options.verbose:
print "[{type}] POSIX time: {start:>18.2f} - {end:<18.2f} " \
"{data:>11.2f} KB".format(
type=type_,
start=start,
end=end,
data=data_size / 1024.0, # convert to KB
)
# update time boundaries
prev = min_time.setdefault(type_, start)
if start < prev:
min_time[type_] = start
prev = max_time.setdefault(type_, end)
if end > prev:
max_time[type_] = end
# save the duration
if type_ not in durations:
durations[type_] = []
durations[type_].append(duration)
# add to running totals
total[type_] = total.get(type_, 0) + data_size
def print_results(total, durations, min_time, max_time, errors, success):
for type_ in total.keys():
trans_success = success.get(type_, 0)
trans_fail = errors.get(type_, 0)
trans = trans_success + trans_fail
avail = trans_success * 100.0 / trans
elapsed = max_time[type_] - min_time[type_]
data = total[type_] / 1024.0 / 1024.0 # convert to MB
resp_time = sum(durations[type_]) / float(NANOSECONDS) / \
len(durations[type_])
trans_rate = trans / elapsed
data_rate = data / elapsed
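# Little's law: mean concurrency = transaction rate * mean response time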
conc = trans_rate * resp_time
trans_long = max(durations[type_]) / float(NANOSECONDS)
trans_short = min(durations[type_]) / float(NANOSECONDS)
print OUTPUT_FORMAT.format(
type=type_,
trans_success=trans_success,
trans_fail=trans_fail,
trans=trans,
avail=avail,
elapsed=elapsed,
data=data,
resp_time=resp_time,
trans_rate=trans_rate,
data_rate=data_rate,
conc=conc,
trans_long=trans_long,
trans_short=trans_short,
)
if __name__ == '__main__':
main()


@ -1,5 +1,5 @@
import boto.s3.connection
import bunch
import munch
import itertools
import os
import random
@ -11,8 +11,8 @@ from lxml import etree
from doctest import Example
from lxml.doctestcompare import LXMLOutputChecker
s3 = bunch.Bunch()
config = bunch.Bunch()
s3 = munch.Munch()
config = munch.Munch()
prefix = ''
bucket_counter = itertools.count(1)
@ -51,10 +51,10 @@ def nuke_bucket(bucket):
while deleted_cnt:
deleted_cnt = 0
for key in bucket.list():
print 'Cleaning bucket {bucket} key {key}'.format(
print('Cleaning bucket {bucket} key {key}'.format(
bucket=bucket,
key=key,
)
))
key.set_canned_acl('private')
key.delete()
deleted_cnt += 1
@ -67,26 +67,26 @@ def nuke_bucket(bucket):
and e.body == ''):
e.error_code = 'AccessDenied'
if e.error_code != 'AccessDenied':
print 'GOT UNWANTED ERROR', e.error_code
print('GOT UNWANTED ERROR', e.error_code)
raise
# seems like we're not the owner of the bucket; ignore
pass
def nuke_prefixed_buckets():
for name, conn in s3.items():
print 'Cleaning buckets from connection {name}'.format(name=name)
for name, conn in list(s3.items()):
print('Cleaning buckets from connection {name}'.format(name=name))
for bucket in conn.get_all_buckets():
if bucket.name.startswith(prefix):
print 'Cleaning bucket {bucket}'.format(bucket=bucket)
print('Cleaning bucket {bucket}'.format(bucket=bucket))
nuke_bucket(bucket)
print 'Done with cleanup of test buckets.'
print('Done with cleanup of test buckets.')
def read_config(fp):
config = bunch.Bunch()
config = munch.Munch()
g = yaml.safe_load_all(fp)
for new in g:
config.update(bunch.bunchify(new))
config.update(munch.munchify(new))
return config
def connect(conf):
@ -97,7 +97,7 @@ def connect(conf):
access_key='aws_access_key_id',
secret_key='aws_secret_access_key',
)
kwargs = dict((mapping[k],v) for (k,v) in conf.iteritems() if k in mapping)
kwargs = dict((mapping[k],v) for (k,v) in conf.items() if k in mapping)
#process calling_format argument
calling_formats = dict(
ordinary=boto.s3.connection.OrdinaryCallingFormat(),
@ -105,7 +105,7 @@ def connect(conf):
vhost=boto.s3.connection.VHostCallingFormat(),
)
kwargs['calling_format'] = calling_formats['ordinary']
if conf.has_key('calling_format'):
if 'calling_format' in conf:
raw_calling_format = conf['calling_format']
try:
kwargs['calling_format'] = calling_formats[raw_calling_format]
@ -146,7 +146,7 @@ def setup():
raise RuntimeError("Empty Prefix! Aborting!")
defaults = config.s3.defaults
for section in config.s3.keys():
for section in list(config.s3.keys()):
if section == 'defaults':
continue
@ -258,9 +258,10 @@ def with_setup_kwargs(setup, teardown=None):
# yield _test_gen
def trim_xml(xml_str):
p = etree.XMLParser(remove_blank_text=True)
p = etree.XMLParser(encoding="utf-8", remove_blank_text=True)
xml_str = bytes(xml_str, "utf-8")
elem = etree.XML(xml_str, parser=p)
return etree.tostring(elem)
return etree.tostring(elem, encoding="unicode")
def normalize_xml(xml, pretty_print=True):
if xml is None:
@ -282,7 +283,7 @@ def normalize_xml(xml, pretty_print=True):
for parent in root.xpath('//*[./*]'): # Search for parent elements
parent[:] = sorted(parent,key=lambda x: x.tag)
xmlstr = etree.tostring(root, encoding="utf-8", xml_declaration=True, pretty_print=pretty_print)
xmlstr = etree.tostring(root, encoding="unicode", pretty_print=pretty_print)
# there are two different DTD URIs
xmlstr = re.sub(r'xmlns="[^"]+"', 'xmlns="s3"', xmlstr)
xmlstr = re.sub(r'xmlns=\'[^\']+\'', 'xmlns="s3"', xmlstr)


@ -1,5 +0,0 @@
from boto.auth_handler import AuthHandler
class AnonymousAuthHandler(AuthHandler):
def add_auth(self, http_request, **kwargs):
return # Nothing to do for anonymous access!


@ -1,21 +1,21 @@
from __future__ import print_function
import sys
import ConfigParser
import configparser
import boto.exception
import boto.s3.connection
import bunch
import munch
import itertools
import os
import random
import string
from httplib import HTTPConnection, HTTPSConnection
from urlparse import urlparse
import pytest
from http.client import HTTPConnection, HTTPSConnection
from urllib.parse import urlparse
from .utils import region_sync_meta
s3 = bunch.Bunch()
config = bunch.Bunch()
targets = bunch.Bunch()
s3 = munch.Munch()
config = munch.Munch()
targets = munch.Munch()
# this will be assigned by setup()
prefix = None
@ -69,7 +69,7 @@ def nuke_prefixed_buckets_on_conn(prefix, name, conn):
if bucket.name.startswith(prefix):
print('Cleaning bucket {bucket}'.format(bucket=bucket))
success = False
for i in xrange(2):
for i in range(2):
try:
try:
iterator = iter(bucket.list_versions())
@ -91,7 +91,13 @@ def nuke_prefixed_buckets_on_conn(prefix, name, conn):
))
# key.set_canned_acl('private')
bucket.delete_key(key.name, version_id = key.version_id)
try:
bucket.delete()
except boto.exception.S3ResponseError as e:
# if DELETE times out, the retry may see NoSuchBucket
if e.error_code != 'NoSuchBucket':
raise e
pass
success = True
except boto.exception.S3ResponseError as e:
if e.error_code != 'AccessDenied':
@ -110,12 +116,12 @@ def nuke_prefixed_buckets_on_conn(prefix, name, conn):
def nuke_prefixed_buckets(prefix):
# If no regions are specified, use the simple method
if targets.main.master == None:
for name, conn in s3.items():
for name, conn in list(s3.items()):
print('Deleting buckets on {name}'.format(name=name))
nuke_prefixed_buckets_on_conn(prefix, name, conn)
else:
# First, delete all buckets on the master connection
for name, conn in s3.items():
for name, conn in list(s3.items()):
if conn == targets.main.master.connection:
print('Deleting buckets on {name} (master)'.format(name=name))
nuke_prefixed_buckets_on_conn(prefix, name, conn)
@ -125,7 +131,7 @@ def nuke_prefixed_buckets(prefix):
print('region-sync in nuke_prefixed_buckets')
# Now delete remaining buckets on any other connection
for name, conn in s3.items():
for name, conn in list(s3.items()):
if conn != targets.main.master.connection:
print('Deleting buckets on {name} (non-master)'.format(name=name))
nuke_prefixed_buckets_on_conn(prefix, name, conn)
@ -143,46 +149,46 @@ class TargetConfig:
self.sync_meta_wait = 0
try:
self.api_name = cfg.get(section, 'api_name')
except (ConfigParser.NoSectionError, ConfigParser.NoOptionError):
except (configparser.NoSectionError, configparser.NoOptionError):
pass
try:
self.port = cfg.getint(section, 'port')
except ConfigParser.NoOptionError:
except configparser.NoOptionError:
pass
try:
self.host=cfg.get(section, 'host')
except ConfigParser.NoOptionError:
except configparser.NoOptionError:
raise RuntimeError(
'host not specified for section {s}'.format(s=section)
)
try:
self.is_master=cfg.getboolean(section, 'is_master')
except ConfigParser.NoOptionError:
except configparser.NoOptionError:
pass
try:
self.is_secure=cfg.getboolean(section, 'is_secure')
except ConfigParser.NoOptionError:
except configparser.NoOptionError:
pass
try:
raw_calling_format = cfg.get(section, 'calling_format')
except ConfigParser.NoOptionError:
except configparser.NoOptionError:
raw_calling_format = 'ordinary'
try:
self.sync_agent_addr = cfg.get(section, 'sync_agent_addr')
except (ConfigParser.NoSectionError, ConfigParser.NoOptionError):
except (configparser.NoSectionError, configparser.NoOptionError):
pass
try:
self.sync_agent_port = cfg.getint(section, 'sync_agent_port')
except (ConfigParser.NoSectionError, ConfigParser.NoOptionError):
except (configparser.NoSectionError, configparser.NoOptionError):
pass
try:
self.sync_meta_wait = cfg.getint(section, 'sync_meta_wait')
except (ConfigParser.NoSectionError, ConfigParser.NoOptionError):
except (configparser.NoSectionError, configparser.NoOptionError):
pass
@ -202,7 +208,7 @@ class TargetConnection:
class RegionsInfo:
def __init__(self):
self.m = bunch.Bunch()
self.m = munch.Munch()
self.master = None
self.secondaries = []
@ -220,21 +226,21 @@ class RegionsInfo:
return self.m[name]
def get(self):
return self.m
def iteritems(self):
return self.m.iteritems()
def items(self):
return self.m.items()
regions = RegionsInfo()
class RegionsConn:
def __init__(self):
self.m = bunch.Bunch()
self.m = munch.Munch()
self.default = None
self.master = None
self.secondaries = []
def iteritems(self):
return self.m.iteritems()
def items(self):
return self.m.items()
def set_default(self, conn):
self.default = conn
@ -254,7 +260,7 @@ _multiprocess_can_split_ = True
def setup():
cfg = ConfigParser.RawConfigParser()
cfg = configparser.RawConfigParser()
try:
path = os.environ['S3TEST_CONF']
except KeyError:
@ -262,8 +268,7 @@ def setup():
'To run tests, point environment '
+ 'variable S3TEST_CONF to a config file.',
)
with file(path) as f:
cfg.readfp(f)
cfg.read(path)
global prefix
global targets
@ -271,19 +276,19 @@ def setup():
try:
template = cfg.get('fixtures', 'bucket prefix')
except (ConfigParser.NoSectionError, ConfigParser.NoOptionError):
except (configparser.NoSectionError, configparser.NoOptionError):
template = 'test-{random}-'
prefix = choose_bucket_prefix(template=template)
try:
slow_backend = cfg.getboolean('fixtures', 'slow backend')
except (ConfigParser.NoSectionError, ConfigParser.NoOptionError):
except (configparser.NoSectionError, configparser.NoOptionError):
slow_backend = False
# pull the default_region out, if it exists
try:
default_region = cfg.get('fixtures', 'default_region')
except (ConfigParser.NoSectionError, ConfigParser.NoOptionError):
except (configparser.NoSectionError, configparser.NoOptionError):
default_region = None
s3.clear()
@ -309,7 +314,7 @@ def setup():
if len(regions.get()) == 0:
regions.add("default", TargetConfig(cfg, section))
config[name] = bunch.Bunch()
config[name] = munch.Munch()
for var in [
'user_id',
'display_name',
@ -319,15 +324,16 @@ def setup():
'port',
'is_secure',
'kms_keyid',
'storage_classes',
]:
try:
config[name][var] = cfg.get(section, var)
except ConfigParser.NoOptionError:
except configparser.NoOptionError:
pass
targets[name] = RegionsConn()
for (k, conf) in regions.iteritems():
for (k, conf) in regions.items():
conn = boto.s3.connection.S3Connection(
aws_access_key_id=cfg.get(section, 'access_key'),
aws_secret_access_key=cfg.get(section, 'secret_key'),
@ -365,6 +371,15 @@ def teardown():
# remove our buckets here also, to avoid littering
nuke_prefixed_buckets(prefix=prefix)
@pytest.fixture(scope="package")
def configfile():
setup()
yield config
@pytest.fixture(autouse=True)
def setup_teardown(configfile):
yield
teardown()
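These two fixtures are the pytest replacement for nose's implicit module setup: tests request the package-scoped configfile fixture (imported into each test module) to trigger the one-time setup(), and the autouse setup_teardown fixture runs teardown() — which nukes the prefixed buckets — after every test. The website tests below import both.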
bucket_counter = itertools.count(1)
@ -468,7 +483,7 @@ def _make_raw_request(host, port, method, path, body=None, request_headers=None,
if request_headers is None:
request_headers = {}
c = class_(host, port, strict=True, timeout=timeout)
c = class_(host, port=port, timeout=timeout)
# TODO: We might have to modify this in future if we need to interact with
# how httplib.request handles Accept-Encoding and Host.


@ -0,0 +1,46 @@
import json
class Statement(object):
def __init__(self, action, resource, principal = {"AWS" : "*"}, effect= "Allow", condition = None):
self.principal = principal
self.action = action
self.resource = resource
self.condition = condition
self.effect = effect
def to_dict(self):
d = { "Action" : self.action,
"Principal" : self.principal,
"Effect" : self.effect,
"Resource" : self.resource
}
if self.condition is not None:
d["Condition"] = self.condition
return d
class Policy(object):
def __init__(self):
self.statements = []
def add_statement(self, s):
self.statements.append(s)
return self
def to_json(self):
policy_dict = {
"Version" : "2012-10-17",
"Statement":
[s.to_dict() for s in self.statements]
}
return json.dumps(policy_dict)
def make_json_policy(action, resource, principal={"AWS": "*"}, conditions=None):
"""
Helper function to make single statement policies
"""
s = Statement(action, resource, principal, condition=conditions)
p = Policy()
return p.add_statement(s).to_json()
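A short usage sketch for the helper above (the bucket ARN is illustrative):

policy = make_json_policy('s3:GetObject', 'arn:aws:s3:::examplebucket/*')
# produces a one-statement document such as:
# {"Version": "2012-10-17", "Statement": [{"Action": "s3:GetObject",
#   "Principal": {"AWS": "*"}, "Effect": "Allow",
#   "Resource": "arn:aws:s3:::examplebucket/*"}]}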

File diff suppressed because it is too large.

File diff suppressed because it is too large.


@ -1,23 +1,20 @@
from __future__ import print_function
import sys
import collections
import nose
from collections.abc import Container
import pytest
import string
import random
from pprint import pprint
import time
import boto.exception
import socket
from urlparse import urlparse
from nose.tools import eq_ as eq, ok_ as ok
from nose.plugins.attrib import attr
from nose.tools import timed
from nose.plugins.skip import SkipTest
from urllib.parse import urlparse
from .. import common
from . import (
configfile,
setup_teardown,
get_new_bucket,
get_new_bucket_name,
s3,
@ -42,36 +39,27 @@ ERRORDOC_TEMPLATE = '<html><h1>ErrorDoc</h1><body>{random}</body></html>'
CAN_WEBSITE = None
@pytest.fixture(autouse=True, scope="module")
def check_can_test_website():
global CAN_WEBSITE
# This is a bit expensive, so we cache this
if CAN_WEBSITE is None:
bucket = get_new_bucket()
try:
wsconf = bucket.get_website_configuration()
CAN_WEBSITE = True
return True
except boto.exception.S3ResponseError as e:
if e.status == 404 and e.reason == 'Not Found' and e.error_code in ['NoSuchWebsiteConfiguration', 'NoSuchKey']:
CAN_WEBSITE = True
return True
elif e.status == 405 and e.reason == 'Method Not Allowed' and e.error_code == 'MethodNotAllowed':
# rgw_enable_static_website is false
CAN_WEBSITE = False
pytest.skip('rgw_enable_static_website is false')
elif e.status == 403 and e.reason == 'SignatureDoesNotMatch' and e.error_code == 'Forbidden':
# This is older versions that do not support the website code
CAN_WEBSITE = False
pytest.skip('static website is not implemented')
elif e.status == 501 and e.error_code == 'NotImplemented':
pytest.skip('static website is not implemented')
else:
raise RuntimeError("Unknown response in checking if WebsiteConf is supported", e)
finally:
bucket.delete()
if CAN_WEBSITE is True:
return True
elif CAN_WEBSITE is False:
raise SkipTest
else:
raise RuntimeError("Unknown cached response in checking if WebsiteConf is supported")
def make_website_config(xml_fragment):
"""
Take the tedious stuff out of the config
@ -108,7 +96,7 @@ def get_website_url(**kwargs):
def _test_website_populate_fragment(xml_fragment, fields):
for k in ['RoutingRules']:
if k in fields.keys() and len(fields[k]) > 0:
if k in list(fields.keys()) and len(fields[k]) > 0:
fields[k] = '<%s>%s</%s>' % (k, fields[k], k)
f = {
'IndexDocument_Suffix': choose_bucket_prefix(template='index-{random}.html', max_len=32),
@ -166,24 +154,24 @@ def _test_website_prep(bucket, xml_template, hardcoded_fields = {}, expect_fail=
# Cleanup for our validation
common.assert_xml_equal(config_xmlcmp, config_xmlnew)
#print("config_xmlcmp\n", config_xmlcmp)
#eq (config_xmlnew, config_xmlcmp)
#assert config_xmlnew == config_xmlcmp
f['WebsiteConfiguration'] = config_xmlcmp
return f
def __website_expected_reponse_status(res, status, reason):
if not isinstance(status, collections.Container):
if not isinstance(status, Container):
status = set([status])
if not isinstance(reason, collections.Container):
if not isinstance(reason, Container):
reason = set([reason])
if status is not IGNORE_FIELD:
ok(res.status in status, 'HTTP code was %s should be %s' % (res.status, status))
assert res.status in status, 'HTTP code was %s should be %s' % (res.status, status)
if reason is not IGNORE_FIELD:
ok(res.reason in reason, 'HTTP reason was was %s should be %s' % (res.reason, reason))
assert res.reason in reason, 'HTTP reason was %s should be %s' % (res.reason, reason)
def _website_expected_default_html(**kwargs):
fields = []
for k in kwargs.keys():
for k in list(kwargs.keys()):
# AmazonS3 seems to be inconsistent, some HTML errors include BucketName, but others do not.
if k == 'BucketName':
continue
@ -191,7 +179,7 @@ def _website_expected_default_html(**kwargs):
v = kwargs[k]
if isinstance(v, str):
v = [v]
elif not isinstance(v, collections.Container):
elif not isinstance(v, Container):
v = [v]
for v2 in v:
s = '<li>%s: %s</li>' % (k,v2)
@ -209,21 +197,22 @@ def _website_expected_error_response(res, bucket_name, status, reason, code, con
errorcode = res.getheader('x-amz-error-code', None)
if errorcode is not None:
if code is not IGNORE_FIELD:
eq(errorcode, code)
assert errorcode == code
if not isinstance(content, collections.Container):
if not isinstance(content, Container):
content = set([content])
for f in content:
if f is not IGNORE_FIELD and f is not None:
ok(f in body, 'HTML should contain "%s"' % (f, ))
f = bytes(f, 'utf-8')
assert f in body, 'HTML should contain "%s"' % (f, )
def _website_expected_redirect_response(res, status, reason, new_url):
body = res.read()
print(body)
__website_expected_reponse_status(res, status, reason)
loc = res.getheader('Location', None)
eq(loc, new_url, 'Location header should be set "%s" != "%s"' % (loc,new_url,))
ok(len(body) == 0, 'Body of a redirect should be empty')
assert loc == new_url, 'Location header should be set "%s" != "%s"' % (loc,new_url,)
assert len(body) == 0, 'Body of a redirect should be empty'
def _website_request(bucket_name, path, connect_hostname=None, method='GET', timeout=None):
url = get_website_url(proto='http', bucket=bucket_name, path=path)
@ -235,33 +224,23 @@ def _website_request(bucket_name, path, connect_hostname=None, method='GET', tim
request_headers={}
request_headers['Host'] = o.hostname
request_headers['Accept'] = '*/*'
print('Request: {method} {path}\n{headers}'.format(method=method, path=path, headers=''.join(map(lambda t: t[0]+':'+t[1]+"\n", request_headers.items()))))
print('Request: {method} {path}\n{headers}'.format(method=method, path=path, headers=''.join([t[0]+':'+t[1]+"\n" for t in list(request_headers.items())])))
res = _make_raw_request(connect_hostname, config.main.port, method, path, request_headers=request_headers, secure=False, timeout=timeout)
for (k,v) in res.getheaders():
print(k,v)
return res
# ---------- Non-existent buckets via the website endpoint
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
@attr(assertion='non-existant bucket via website endpoint should give NoSuchBucket, exposing security risk')
@attr('s3website')
@attr('fails_on_rgw')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@pytest.mark.s3website
@pytest.mark.fails_on_rgw
def test_website_nonexistant_bucket_s3():
bucket_name = get_new_bucket_name()
res = _website_request(bucket_name, '')
_website_expected_error_response(res, bucket_name, 404, 'Not Found', 'NoSuchBucket', content=_website_expected_default_html(Code='NoSuchBucket'))
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
#@attr(assertion='non-existant bucket via website endpoint should give Forbidden, keeping bucket identity secure')
@attr(assertion='non-existant bucket via website endpoint should give NoSuchBucket')
@attr('s3website')
@attr('fails_on_s3')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@pytest.mark.s3website
@pytest.mark.fails_on_s3
@pytest.mark.fails_on_dbstore
def test_website_nonexistant_bucket_rgw():
bucket_name = get_new_bucket_name()
res = _website_request(bucket_name, '')
@ -269,13 +248,9 @@ def test_website_nonexistant_bucket_rgw():
_website_expected_error_response(res, bucket_name, 404, 'Not Found', 'NoSuchBucket', content=_website_expected_default_html(Code='NoSuchBucket'))
#------------- IndexDocument only, successes
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
@attr(assertion='non-empty public buckets via s3website return page for /, where page is public')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@timed(10)
@pytest.mark.s3website
@pytest.mark.fails_on_dbstore
@pytest.mark.timeout(10)
def test_website_public_bucket_list_public_index():
bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDoc'])
@ -291,17 +266,14 @@ def test_website_public_bucket_list_public_index():
res = _website_request(bucket.name, '')
body = res.read()
print(body)
eq(body, indexstring) # default content should match index.html set content
indexstring = bytes(indexstring, 'utf-8')
assert body == indexstring # default content should match index.html set content
__website_expected_reponse_status(res, 200, 'OK')
indexhtml.delete()
bucket.delete()
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
@attr(assertion='non-empty private buckets via s3website return page for /, where page is private')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@pytest.mark.s3website
@pytest.mark.fails_on_dbstore
def test_website_private_bucket_list_public_index():
bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDoc'])
@ -319,18 +291,15 @@ def test_website_private_bucket_list_public_index():
__website_expected_reponse_status(res, 200, 'OK')
body = res.read()
print(body)
eq(body, indexstring, 'default content should match index.html set content')
indexstring = bytes(indexstring, 'utf-8')
assert body == indexstring, 'default content should match index.html set content'
indexhtml.delete()
bucket.delete()
# ---------- IndexDocument only, failures
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
@attr(assertion='empty private buckets via s3website return a 403 for /')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@pytest.mark.s3website
@pytest.mark.fails_on_dbstore
def test_website_private_bucket_list_empty():
bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDoc'])
@ -341,12 +310,8 @@ def test_website_private_bucket_list_empty():
_website_expected_error_response(res, bucket.name, 403, 'Forbidden', 'AccessDenied', content=_website_expected_default_html(Code='AccessDenied'))
bucket.delete()
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
@attr(assertion='empty public buckets via s3website return a 404 for /')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@pytest.mark.s3website
@pytest.mark.fails_on_dbstore
def test_website_public_bucket_list_empty():
bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDoc'])
@ -356,12 +321,8 @@ def test_website_public_bucket_list_empty():
_website_expected_error_response(res, bucket.name, 404, 'Not Found', 'NoSuchKey', content=_website_expected_default_html(Code='NoSuchKey'))
bucket.delete()
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
@attr(assertion='non-empty public buckets via s3website return page for /, where page is private')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@pytest.mark.s3website
@pytest.mark.fails_on_dbstore
def test_website_public_bucket_list_private_index():
bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDoc'])
@ -381,12 +342,8 @@ def test_website_public_bucket_list_private_index():
indexhtml.delete()
bucket.delete()
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
@attr(assertion='non-empty private buckets via s3website return page for /, where page is private')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@pytest.mark.s3website
@pytest.mark.fails_on_dbstore
def test_website_private_bucket_list_private_index():
bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDoc'])
@ -407,12 +364,8 @@ def test_website_private_bucket_list_private_index():
bucket.delete()
# ---------- IndexDocument & ErrorDocument, failures due to errordoc assigned but missing
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
@attr(assertion='empty private buckets via s3website return a 403 for /, missing errordoc')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@pytest.mark.s3website
@pytest.mark.fails_on_dbstore
def test_website_private_bucket_list_empty_missingerrordoc():
bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDocErrorDoc'])
@ -423,12 +376,8 @@ def test_website_private_bucket_list_empty_missingerrordoc():
bucket.delete()
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
@attr(assertion='empty public buckets via s3website return a 404 for /, missing errordoc')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@pytest.mark.s3website
@pytest.mark.fails_on_dbstore
def test_website_public_bucket_list_empty_missingerrordoc():
bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDocErrorDoc'])
@ -438,12 +387,8 @@ def test_website_public_bucket_list_empty_missingerrordoc():
_website_expected_error_response(res, bucket.name, 404, 'Not Found', 'NoSuchKey')
bucket.delete()
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
@attr(assertion='non-empty public buckets via s3website return page for /, where page is private, missing errordoc')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@pytest.mark.s3website
@pytest.mark.fails_on_dbstore
def test_website_public_bucket_list_private_index_missingerrordoc():
bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDocErrorDoc'])
@ -462,12 +407,8 @@ def test_website_public_bucket_list_private_index_missingerrordoc():
indexhtml.delete()
bucket.delete()
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
@attr(assertion='non-empty private buckets via s3website return page for /, where page is private, missing errordoc')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@pytest.mark.s3website
@pytest.mark.fails_on_dbstore
def test_website_private_bucket_list_private_index_missingerrordoc():
bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDocErrorDoc'])
@ -487,12 +428,8 @@ def test_website_private_bucket_list_private_index_missingerrordoc():
bucket.delete()
# ---------- IndexDocument & ErrorDocument, failures due to errordoc assigned but not accessible
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
@attr(assertion='empty private buckets via s3website return a 403 for /, blocked errordoc')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@pytest.mark.s3website
@pytest.mark.fails_on_dbstore
def test_website_private_bucket_list_empty_blockederrordoc():
bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDocErrorDoc'])
@ -509,17 +446,61 @@ def test_website_private_bucket_list_empty_blockederrordoc():
body = res.read()
print(body)
_website_expected_error_response(res, bucket.name, 403, 'Forbidden', 'AccessDenied', content=_website_expected_default_html(Code='AccessDenied'), body=body)
ok(errorstring not in body, 'error content should NOT match error.html set content')
errorstring = bytes(errorstring, 'utf-8')
assert errorstring not in body, 'error content should NOT match error.html set content'
errorhtml.delete()
bucket.delete()
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
@attr(assertion='empty public buckets via s3website return a 404 for /, blocked errordoc')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@pytest.mark.s3website
@pytest.mark.fails_on_dbstore
def test_website_public_bucket_list_pubilc_errordoc():
bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDocErrorDoc'])
bucket.make_public()
errorhtml = bucket.new_key(f['ErrorDocument_Key'])
errorstring = choose_bucket_prefix(template=ERRORDOC_TEMPLATE, max_len=256)
errorhtml.set_contents_from_string(errorstring)
errorhtml.set_canned_acl('public-read')
url = get_website_url(proto='http', bucket=bucket.name, path='')
o = urlparse(url)
host = o.hostname
port = s3.main.port
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect((host, port))
request = "GET / HTTP/1.1\r\nHost:%s.%s:%s\r\n\r\n" % (bucket.name, host, port)
sock.send(request.encode())
#receive header
resp = sock.recv(4096)
print(resp)
#receive body
resp = sock.recv(4096)
print('payload length=%d' % len(resp))
print(resp)
#check if any additional payload is left
resp_len = 0
sock.settimeout(2)
try:
resp = sock.recv(4096)
resp_len = len(resp)
print('invalid payload length=%d' % resp_len)
print(resp)
except socket.timeout:
print('no invalid payload')
assert resp_len == 0, 'invalid payload'
errorhtml.delete()
bucket.delete()
@pytest.mark.s3website
@pytest.mark.fails_on_dbstore
def test_website_public_bucket_list_empty_blockederrordoc():
bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDocErrorDoc'])
@ -535,17 +516,14 @@ def test_website_public_bucket_list_empty_blockederrordoc():
body = res.read()
print(body)
_website_expected_error_response(res, bucket.name, 404, 'Not Found', 'NoSuchKey', content=_website_expected_default_html(Code='NoSuchKey'), body=body)
ok(errorstring not in body, 'error content should match error.html set content')
errorstring = bytes(errorstring, 'utf-8')
assert errorstring not in body, 'error content should NOT match error.html set content'
errorhtml.delete()
bucket.delete()
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
@attr(assertion='non-empty public buckets via s3website return page for /, where page is private, blocked errordoc')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@pytest.mark.s3website
@pytest.mark.fails_on_dbstore
def test_website_public_bucket_list_private_index_blockederrordoc():
bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDocErrorDoc'])
@ -566,18 +544,15 @@ def test_website_public_bucket_list_private_index_blockederrordoc():
body = res.read()
print(body)
_website_expected_error_response(res, bucket.name, 403, 'Forbidden', 'AccessDenied', content=_website_expected_default_html(Code='AccessDenied'), body=body)
ok(errorstring not in body, 'error content should match error.html set content')
errorstring = bytes(errorstring, 'utf-8')
assert errorstring not in body, 'error content should NOT match error.html set content'
indexhtml.delete()
errorhtml.delete()
bucket.delete()
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
@attr(assertion='non-empty private buckets via s3website return page for /, where page is private, blocked errordoc')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@pytest.mark.s3website
@pytest.mark.fails_on_dbstore
def test_website_private_bucket_list_private_index_blockederrordoc():
bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDocErrorDoc'])
@ -598,19 +573,16 @@ def test_website_private_bucket_list_private_index_blockederrordoc():
body = res.read()
print(body)
_website_expected_error_response(res, bucket.name, 403, 'Forbidden', 'AccessDenied', content=_website_expected_default_html(Code='AccessDenied'), body=body)
ok(errorstring not in body, 'error content should match error.html set content')
errorstring = bytes(errorstring, 'utf-8')
assert errorstring not in body, 'error content should NOT match error.html set content'
indexhtml.delete()
errorhtml.delete()
bucket.delete()
# ---------- IndexDocument & ErrorDocument, failures with errordoc available
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
@attr(assertion='empty private buckets via s3website return a 403 for /, good errordoc')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@pytest.mark.s3website
@pytest.mark.fails_on_dbstore
def test_website_private_bucket_list_empty_gooderrordoc():
bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDocErrorDoc'])
@ -628,12 +600,8 @@ def test_website_private_bucket_list_empty_gooderrordoc():
errorhtml.delete()
bucket.delete()
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
@attr(assertion='empty public buckets via s3website return a 404 for /, good errordoc')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@pytest.mark.s3website
@pytest.mark.fails_on_dbstore
def test_website_public_bucket_list_empty_gooderrordoc():
bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDocErrorDoc'])
@ -652,12 +620,8 @@ def test_website_public_bucket_list_empty_gooderrordoc():
errorhtml.delete()
bucket.delete()
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
@attr(assertion='non-empty public buckets via s3website return page for /, where page is private')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@pytest.mark.s3website
@pytest.mark.fails_on_dbstore
def test_website_public_bucket_list_private_index_gooderrordoc():
bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDocErrorDoc'])
@ -681,12 +645,8 @@ def test_website_public_bucket_list_private_index_gooderrordoc():
errorhtml.delete()
bucket.delete()
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
@attr(assertion='non-empty private buckets via s3website return page for /, where page is private')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@pytest.mark.s3website
@pytest.mark.fails_on_dbstore
def test_website_private_bucket_list_private_index_gooderrordoc():
bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDocErrorDoc'])
@ -711,12 +671,8 @@ def test_website_private_bucket_list_private_index_gooderrordoc():
bucket.delete()
# ------ RedirectAll tests
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
@attr(assertion='RedirectAllRequestsTo without protocol should TODO')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@pytest.mark.s3website
@pytest.mark.fails_on_dbstore
def test_website_bucket_private_redirectall_base():
bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['RedirectAll'])
@ -728,12 +684,8 @@ def test_website_bucket_private_redirectall_base():
bucket.delete()
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
@attr(assertion='RedirectAllRequestsTo without protocol should TODO')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@pytest.mark.s3website
@pytest.mark.fails_on_dbstore
def test_website_bucket_private_redirectall_path():
bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['RedirectAll'])
@ -747,12 +699,8 @@ def test_website_bucket_private_redirectall_path():
bucket.delete()
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
@attr(assertion='RedirectAllRequestsTo without protocol should TODO')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@pytest.mark.s3website
@pytest.mark.fails_on_dbstore
def test_website_bucket_private_redirectall_path_upgrade():
bucket = get_new_bucket()
x = string.Template(WEBSITE_CONFIGS_XMLFRAG['RedirectAll+Protocol']).safe_substitute(RedirectAllRequestsTo_Protocol='https')
@ -768,13 +716,9 @@ def test_website_bucket_private_redirectall_path_upgrade():
bucket.delete()
# ------ x-amz redirect tests
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
@attr(assertion='x-amz-website-redirect-location should not fire without websiteconf')
@attr('s3website')
@attr('x-amz-website-redirect-location')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@pytest.mark.s3website
@pytest.mark.s3website_redirect_location
@pytest.mark.fails_on_dbstore
def test_website_xredirect_nonwebsite():
bucket = get_new_bucket()
#f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['RedirectAll'])
@ -786,7 +730,7 @@ def test_website_xredirect_nonwebsite():
headers = {'x-amz-website-redirect-location': redirect_dest}
k.set_contents_from_string(content, headers=headers, policy='public-read')
redirect = k.get_redirect()
eq(k.get_redirect(), redirect_dest)
assert k.get_redirect() == redirect_dest
res = _website_request(bucket.name, '/page')
body = res.read()
@ -800,13 +744,9 @@ def test_website_xredirect_nonwebsite():
k.delete()
bucket.delete()
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
@attr(assertion='x-amz-website-redirect-location should fire websiteconf, relative path, public key')
@attr('s3website')
@attr('x-amz-website-redirect-location')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@pytest.mark.s3website
@pytest.mark.s3website_redirect_location
@pytest.mark.fails_on_dbstore
def test_website_xredirect_public_relative():
bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDoc'])
@ -818,7 +758,7 @@ def test_website_xredirect_public_relative():
headers = {'x-amz-website-redirect-location': redirect_dest}
k.set_contents_from_string(content, headers=headers, policy='public-read')
redirect = k.get_redirect()
eq(k.get_redirect(), redirect_dest)
assert k.get_redirect() == redirect_dest
res = _website_request(bucket.name, '/page')
#new_url = get_website_url(bucket_name=bucket.name, path=redirect_dest)
@ -827,13 +767,9 @@ def test_website_xredirect_public_relative():
k.delete()
bucket.delete()
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
@attr(assertion='x-amz-website-redirect-location should fire websiteconf, absolute, public key')
@attr('s3website')
@attr('x-amz-website-redirect-location')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@pytest.mark.s3website
@pytest.mark.s3website_redirect_location
@pytest.mark.fails_on_dbstore
def test_website_xredirect_public_abs():
bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDoc'])
@ -845,7 +781,7 @@ def test_website_xredirect_public_abs():
headers = {'x-amz-website-redirect-location': redirect_dest}
k.set_contents_from_string(content, headers=headers, policy='public-read')
redirect = k.get_redirect()
eq(k.get_redirect(), redirect_dest)
assert k.get_redirect() == redirect_dest
res = _website_request(bucket.name, '/page')
new_url = get_website_url(proto='http', hostname='example.com', path='/foo')
@ -854,13 +790,9 @@ def test_website_xredirect_public_abs():
k.delete()
bucket.delete()
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
@attr(assertion='x-amz-website-redirect-location should fire websiteconf, relative path, private key')
@attr('s3website')
@attr('x-amz-website-redirect-location')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@pytest.mark.s3website
@pytest.mark.s3website_redirect_location
@pytest.mark.fails_on_dbstore
def test_website_xredirect_private_relative():
bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDoc'])
@ -872,7 +804,7 @@ def test_website_xredirect_private_relative():
headers = {'x-amz-website-redirect-location': redirect_dest}
k.set_contents_from_string(content, headers=headers, policy='private')
redirect = k.get_redirect()
eq(k.get_redirect(), redirect_dest)
assert k.get_redirect() == redirect_dest
res = _website_request(bucket.name, '/page')
# We get a 403 because the page is private
@ -881,13 +813,9 @@ def test_website_xredirect_private_relative():
k.delete()
bucket.delete()
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
@attr(assertion='x-amz-website-redirect-location should fire websiteconf, absolute, private key')
@attr('s3website')
@attr('x-amz-website-redirect-location')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@pytest.mark.s3website
@pytest.mark.s3website_redirect_location
@pytest.mark.fails_on_dbstore
def test_website_xredirect_private_abs():
bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDoc'])
@ -899,7 +827,7 @@ def test_website_xredirect_private_abs():
headers = {'x-amz-website-redirect-location': redirect_dest}
k.set_contents_from_string(content, headers=headers, policy='private')
redirect = k.get_redirect()
eq(k.get_redirect(), redirect_dest)
assert k.get_redirect() == redirect_dest
res = _website_request(bucket.name, '/page')
new_url = get_website_url(proto='http', hostname='example.com', path='/foo')
@ -1011,7 +939,7 @@ ROUTING_RULES = {
""",
}
for k in ROUTING_RULES.keys():
for k in list(ROUTING_RULES.keys()):
if len(ROUTING_RULES[k]) > 0:
ROUTING_RULES[k] = "<!-- %s -->\n%s" % (k, ROUTING_RULES[k])
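The list() wrapper above is the usual Python 3 port: dict views raise RuntimeError if the dictionary gains or loses keys while being iterated, so the loop walks a snapshot instead. Here only values are reassigned, so list() is defensive rather than strictly required. A minimal, self-contained sketch (illustrative only, not part of the diff):

rules = {'AppendSlash': '<xml/>', 'Empty': ''}
try:
    for k in rules.keys():               # a live view in Python 3
        rules[k + '-copy'] = rules[k]    # grows the dict mid-iteration
except RuntimeError as e:
    print('view iteration failed:', e)

rules = {'AppendSlash': '<xml/>', 'Empty': ''}
for k in list(rules.keys()):             # snapshot: safe to mutate the dict
    rules[k + '-copy'] = rules[k]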
@ -1112,8 +1040,6 @@ def routing_teardown(**kwargs):
print('Deleting', str(o))
o.delete()
@common.with_setup_kwargs(setup=routing_setup, teardown=routing_teardown)
#@timed(10)
def routing_check(*args, **kwargs):
bucket = kwargs['bucket']
args=args[0]
@ -1139,8 +1065,8 @@ def routing_check(*args, **kwargs):
if args['code'] >= 200 and args['code'] < 300:
#body = res.read()
#print(body)
#eq(body, args['content'], 'default content should match index.html set content')
ok(res.getheader('Content-Length', -1) > 0)
#assert body == args['content'], 'default content should match index.html set content'
assert int(res.getheader('Content-Length', -1)) > 0
elif args['code'] >= 300 and args['code'] < 400:
_website_expected_redirect_response(res, args['code'], IGNORE_FIELD, new_url)
elif args['code'] >= 400:
@ -1148,9 +1074,9 @@ def routing_check(*args, **kwargs):
else:
assert(False)
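The int() cast added in routing_check above is needed because http.client in Python 3 returns header values as strings, and a str cannot be ordered against an int. A quick illustrative sketch:

try:
    '512' > 0                 # header value as returned in Python 3
except TypeError as e:
    print(e)                  # '>' not supported between 'str' and 'int'
assert int('512') > 0         # cast first, then compare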
@attr('s3website_RoutingRules')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@pytest.mark.s3website_routing_rules
@pytest.mark.s3website
@pytest.mark.fails_on_dbstore
def test_routing_generator():
for t in ROUTING_RULES_TESTS:
if 'xml' in t and 'RoutingRules' in t['xml'] and len(t['xml']['RoutingRules']) > 0:


@ -1,11 +1,9 @@
from nose.tools import eq_ as eq
import utils
from . import utils
def test_generate():
FIVE_MB = 5 * 1024 * 1024
eq(len(''.join(utils.generate_random(0))), 0)
eq(len(''.join(utils.generate_random(1))), 1)
eq(len(''.join(utils.generate_random(FIVE_MB - 1))), FIVE_MB - 1)
eq(len(''.join(utils.generate_random(FIVE_MB))), FIVE_MB)
eq(len(''.join(utils.generate_random(FIVE_MB + 1))), FIVE_MB + 1)
assert len(''.join(utils.generate_random(0))) == 0
assert len(''.join(utils.generate_random(1))) == 1
assert len(''.join(utils.generate_random(FIVE_MB - 1))) == FIVE_MB - 1
assert len(''.join(utils.generate_random(FIVE_MB))) == FIVE_MB
assert len(''.join(utils.generate_random(FIVE_MB + 1))) == FIVE_MB + 1


@ -3,8 +3,6 @@ import requests
import string
import time
from nose.tools import eq_ as eq
def assert_raises(excClass, callableObj, *args, **kwargs):
"""
Like unittest.TestCase.assertRaises, but returns the exception.
@ -28,11 +26,11 @@ def generate_random(size, part_size=5*1024*1024):
chunk = 1024
allowed = string.ascii_letters
for x in range(0, size, part_size):
strpart = ''.join([allowed[random.randint(0, len(allowed) - 1)] for _ in xrange(chunk)])
strpart = ''.join([allowed[random.randint(0, len(allowed) - 1)] for _ in range(chunk)])
s = ''
left = size - x
this_part_size = min(left, part_size)
for y in range(this_part_size / chunk):
for y in range(this_part_size // chunk):
s = s + strpart
s = s + strpart[:(this_part_size % chunk)]
yield s
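The xrange→range and /→// changes above are the standard Python 3 fixes: true division always returns a float, and range() accepts only integers. An illustrative sketch (values assumed):

this_part_size, chunk = 4096, 1024
try:
    range(this_part_size / chunk)        # 4.0 -> TypeError in Python 3
except TypeError as e:
    print(e)
print(list(range(this_part_size // chunk)))  # [0, 1, 2, 3]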
@ -42,13 +40,22 @@ def generate_random(size, part_size=5*1024*1024):
# syncs all the regions except for the one passed in
def region_sync_meta(targets, region):
for (k, r) in targets.iteritems():
for (k, r) in targets.items():
if r == region:
continue
conf = r.conf
if conf.sync_agent_addr:
ret = requests.post('http://{addr}:{port}/metadata/incremental'.format(addr = conf.sync_agent_addr, port = conf.sync_agent_port))
eq(ret.status_code, 200)
assert ret.status_code == 200
if conf.sync_meta_wait:
time.sleep(conf.sync_meta_wait)
def get_grantee(policy, permission):
'''
Given an object/bucket policy, extract the grantee with the required permission
'''
for g in policy.acl.grants:
if g.permission == permission:
return g.id
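A self-contained sketch of how the new get_grantee() helper walks policy.acl.grants; the Grant, ACL, and Policy stubs below are stand-ins for the corresponding boto objects, not imports from the suite:

class Grant:
    def __init__(self, id, permission):
        self.id, self.permission = id, permission

class ACL:
    def __init__(self, grants):
        self.grants = grants

class Policy:
    def __init__(self, grants):
        self.acl = ACL(grants)

policy = Policy([Grant('owner-id', 'FULL_CONTROL'), Grant('alt-id', 'READ')])
assert get_grantee(policy, 'READ') == 'alt-id'
assert get_grantee(policy, 'WRITE') is None   # no match falls through to None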


@ -1,376 +0,0 @@
from boto.s3.connection import S3Connection
from boto.exception import BotoServerError
from boto.s3.key import Key
from httplib import BadStatusLine
from optparse import OptionParser
from .. import common
import traceback
import itertools
import random
import string
import struct
import yaml
import sys
import re
class DecisionGraphError(Exception):
""" Raised when a node in a graph tries to set a header or
key that was previously set by another node
"""
def __init__(self, value):
self.value = value
def __str__(self):
return repr(self.value)
class RecursionError(Exception):
"""Runaway recursion in string formatting"""
def __init__(self, msg):
self.msg = msg
def __str__(self):
return '{0.__doc__}: {0.msg!r}'.format(self)
def assemble_decision(decision_graph, prng):
""" Take in a graph describing the possible decision space and a random
number generator and traverse the graph to build a decision
"""
return descend_graph(decision_graph, 'start', prng)
def descend_graph(decision_graph, node_name, prng):
""" Given a graph and a particular node in that graph, set the values in
the node's "set" list, pick a choice from the "choice" list, and
recurse. Finally, return dictionary of values
"""
node = decision_graph[node_name]
try:
choice = make_choice(node['choices'], prng)
if choice == '':
decision = {}
else:
decision = descend_graph(decision_graph, choice, prng)
except IndexError:
decision = {}
for key, choices in node['set'].iteritems():
if key in decision:
raise DecisionGraphError("Node %s tried to set '%s', but that key was already set by a lower node!" %(node_name, key))
decision[key] = make_choice(choices, prng)
if 'headers' in node:
decision.setdefault('headers', [])
for desc in node['headers']:
try:
(repetition_range, header, value) = desc
except ValueError:
(header, value) = desc
repetition_range = '1'
try:
size_min, size_max = repetition_range.split('-', 1)
except ValueError:
size_min = size_max = repetition_range
size_min = int(size_min)
size_max = int(size_max)
num_reps = prng.randint(size_min, size_max)
if header in [h for h, v in decision['headers']]:
raise DecisionGraphError("Node %s tried to add header '%s', but that header already exists!" %(node_name, header))
for _ in xrange(num_reps):
decision['headers'].append([header, value])
return decision
def make_choice(choices, prng):
""" Given a list of (possibly weighted) options or just a single option!,
choose one of the options taking weights into account and return the
choice
"""
if isinstance(choices, str):
return choices
weighted_choices = []
for option in choices:
if option is None:
weighted_choices.append('')
continue
try:
(weight, value) = option.split(None, 1)
weight = int(weight)
except ValueError:
weight = 1
value = option
if value == 'null' or value == 'None':
value = ''
for _ in xrange(weight):
weighted_choices.append(value)
return prng.choice(weighted_choices)
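The weight-prefix syntax make_choice() parses ('3 foo' means foo with weight 3) can be sanity-checked with a sketch like the following, assuming make_choice as defined above (the counts are illustrative):

import random

prng = random.Random(1)
counts = {'foo': 0, 'bar': 0}
for _ in range(10000):
    counts[make_choice(['3 foo', 'bar'], prng)] += 1
print(counts['foo'] / float(counts['bar']))   # roughly 3.0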
def expand_headers(decision, prng):
expanded_headers = {}
for header in decision['headers']:
h = expand(decision, header[0], prng)
v = expand(decision, header[1], prng)
expanded_headers[h] = v
return expanded_headers
def expand(decision, value, prng):
c = itertools.count()
fmt = RepeatExpandingFormatter(prng)
new = fmt.vformat(value, [], decision)
return new
class RepeatExpandingFormatter(string.Formatter):
charsets = {
'printable_no_whitespace': string.printable.translate(None, string.whitespace),
'printable': string.printable,
'punctuation': string.punctuation,
'whitespace': string.whitespace,
'digits': string.digits
}
def __init__(self, prng, _recursion=0):
super(RepeatExpandingFormatter, self).__init__()
# this class assumes it is always instantiated once per
# formatting; use that to detect runaway recursion
self.prng = prng
self._recursion = _recursion
def get_value(self, key, args, kwargs):
fields = key.split(None, 1)
fn = getattr(self, 'special_{name}'.format(name=fields[0]), None)
if fn is not None:
if len(fields) == 1:
fields.append('')
return fn(fields[1])
val = super(RepeatExpandingFormatter, self).get_value(key, args, kwargs)
if self._recursion > 5:
raise RecursionError(key)
fmt = self.__class__(self.prng, _recursion=self._recursion+1)
n = fmt.vformat(val, args, kwargs)
return n
def special_random(self, args):
arg_list = args.split()
try:
size_min, size_max = arg_list[0].split('-', 1)
except ValueError:
size_min = size_max = arg_list[0]
except IndexError:
size_min = '0'
size_max = '1000'
size_min = int(size_min)
size_max = int(size_max)
length = self.prng.randint(size_min, size_max)
try:
charset_arg = arg_list[1]
except IndexError:
charset_arg = 'printable'
if charset_arg == 'binary' or charset_arg == 'binary_no_whitespace':
num_bytes = length + 8
tmplist = [self.prng.getrandbits(64) for _ in xrange(num_bytes / 8)]
tmpstring = struct.pack((num_bytes / 8) * 'Q', *tmplist)
if charset_arg == 'binary_no_whitespace':
tmpstring = ''.join(c for c in tmpstring if c not in string.whitespace)
return tmpstring[0:length]
else:
charset = self.charsets[charset_arg]
return ''.join([self.prng.choice(charset) for _ in xrange(length)]) # Won't scale nicely
def parse_options():
parser = OptionParser()
parser.add_option('-O', '--outfile', help='write output to FILE. Defaults to STDOUT', metavar='FILE')
parser.add_option('--seed', dest='seed', type='int', help='initial seed for the random number generator')
parser.add_option('--seed-file', dest='seedfile', help='read seeds for specific requests from FILE', metavar='FILE')
parser.add_option('-n', dest='num_requests', type='int', help='issue NUM requests before stopping', metavar='NUM')
parser.add_option('-v', '--verbose', dest='verbose', action="store_true", help='turn on verbose output')
parser.add_option('-d', '--debug', dest='debug', action="store_true", help='turn on debugging (very verbose) output')
parser.add_option('--decision-graph', dest='graph_filename', help='file in which to find the request decision graph')
parser.add_option('--no-cleanup', dest='cleanup', action="store_false", help='turn off teardown so you can peruse the state of buckets after testing')
parser.set_defaults(num_requests=5)
parser.set_defaults(cleanup=True)
parser.set_defaults(graph_filename='request_decision_graph.yml')
return parser.parse_args()
def randomlist(seed=None):
""" Returns an infinite generator of random numbers
"""
rng = random.Random(seed)
while True:
yield rng.randint(0,100000) #100,000 seeds is enough, right?
def populate_buckets(conn, alt):
""" Creates buckets and keys for fuzz testing and sets appropriate
permissions. Returns a dictionary of the bucket and key names.
"""
breadable = common.get_new_bucket(alt)
bwritable = common.get_new_bucket(alt)
bnonreadable = common.get_new_bucket(alt)
oreadable = Key(breadable)
owritable = Key(bwritable)
ononreadable = Key(breadable)
oreadable.set_contents_from_string('oreadable body')
owritable.set_contents_from_string('owritable body')
ononreadable.set_contents_from_string('ononreadable body')
breadable.set_acl('public-read')
bwritable.set_acl('public-read-write')
bnonreadable.set_acl('private')
oreadable.set_acl('public-read')
owritable.set_acl('public-read-write')
ononreadable.set_acl('private')
return dict(
bucket_readable=breadable.name,
bucket_writable=bwritable.name,
bucket_not_readable=bnonreadable.name,
bucket_not_writable=breadable.name,
object_readable=oreadable.key,
object_writable=owritable.key,
object_not_readable=ononreadable.key,
object_not_writable=oreadable.key,
)
def _main():
""" The main script
"""
(options, args) = parse_options()
random.seed(options.seed if options.seed else None)
s3_connection = common.s3.main
alt_connection = common.s3.alt
if options.outfile:
OUT = open(options.outfile, 'w')
else:
OUT = sys.stderr
VERBOSE = DEBUG = open('/dev/null', 'w')
if options.verbose:
VERBOSE = OUT
if options.debug:
DEBUG = OUT
VERBOSE = OUT
request_seeds = None
if options.seedfile:
FH = open(options.seedfile, 'r')
request_seeds = [int(line) for line in FH if line != '\n']
print>>OUT, 'Seedfile: %s' %options.seedfile
print>>OUT, 'Number of requests: %d' %len(request_seeds)
else:
if options.seed:
print>>OUT, 'Initial Seed: %d' %options.seed
print>>OUT, 'Number of requests: %d' %options.num_requests
random_list = randomlist(options.seed)
request_seeds = itertools.islice(random_list, options.num_requests)
print>>OUT, 'Decision Graph: %s' %options.graph_filename
graph_file = open(options.graph_filename, 'r')
decision_graph = yaml.safe_load(graph_file)
constants = populate_buckets(s3_connection, alt_connection)
print>>VERBOSE, "Test Buckets/Objects:"
for key, value in constants.iteritems():
print>>VERBOSE, "\t%s: %s" %(key, value)
print>>OUT, "Begin Fuzzing..."
print>>VERBOSE, '='*80
for request_seed in request_seeds:
print>>VERBOSE, 'Seed is: %r' %request_seed
prng = random.Random(request_seed)
decision = assemble_decision(decision_graph, prng)
decision.update(constants)
method = expand(decision, decision['method'], prng)
path = expand(decision, decision['urlpath'], prng)
try:
body = expand(decision, decision['body'], prng)
except KeyError:
body = ''
try:
headers = expand_headers(decision, prng)
except KeyError:
headers = {}
print>>VERBOSE, "%r %r" %(method[:100], path[:100])
for h, v in headers.iteritems():
print>>VERBOSE, "%r: %r" %(h[:50], v[:50])
print>>VERBOSE, "%r\n" % body[:100]
print>>DEBUG, 'FULL REQUEST'
print>>DEBUG, 'Method: %r' %method
print>>DEBUG, 'Path: %r' %path
print>>DEBUG, 'Headers:'
for h, v in headers.iteritems():
print>>DEBUG, "\t%r: %r" %(h, v)
print>>DEBUG, 'Body: %r\n' %body
failed = False # Let's be optimistic, shall we?
try:
response = s3_connection.make_request(method, path, data=body, headers=headers, override_num_retries=1)
body = response.read()
except BotoServerError, e:
response = e
body = e.body
failed = True
except BadStatusLine, e:
print>>OUT, 'FAILED: failed to parse response (BadStatusLine); probably a NUL byte in your request?'
print>>VERBOSE, '='*80
continue
if failed:
print>>OUT, 'FAILED:'
OLD_VERBOSE = VERBOSE
OLD_DEBUG = DEBUG
VERBOSE = DEBUG = OUT
print>>VERBOSE, 'Seed was: %r' %request_seed
print>>VERBOSE, 'Response status code: %d %s' %(response.status, response.reason)
print>>DEBUG, 'Body:\n%s' %body
print>>VERBOSE, '='*80
if failed:
VERBOSE = OLD_VERBOSE
DEBUG = OLD_DEBUG
print>>OUT, '...done fuzzing'
if options.cleanup:
common.teardown()
def main():
common.setup()
try:
_main()
except Exception as e:
traceback.print_exc()
common.teardown()


@ -1,403 +0,0 @@
"""
Unit-test suite for the S3 fuzzer
The fuzzer is a grammar-based random S3 operation generator
that produces random operation sequences in an effort to
crash the server. This unit-test suite does not test
S3 servers, but rather the fuzzer infrastructure.
It works by running the fuzzer off a simple grammar,
and checking the produced requests to ensure that they
include the expected sorts of operations in the expected
proportions.
"""
import sys
import itertools
import nose
import random
import string
import yaml
from ..headers import *
from nose.tools import eq_ as eq
from nose.tools import assert_true
from nose.plugins.attrib import attr
from ...functional.utils import assert_raises
_decision_graph = {}
def check_access_denied(fn, *args, **kwargs):
e = assert_raises(boto.exception.S3ResponseError, fn, *args, **kwargs)
eq(e.status, 403)
eq(e.reason, 'Forbidden')
eq(e.error_code, 'AccessDenied')
def build_graph():
graph = {}
graph['start'] = {
'set': {},
'choices': ['node2']
}
graph['leaf'] = {
'set': {
'key1': 'value1',
'key2': 'value2'
},
'headers': [
['1-2', 'random-header-{random 5-10 printable}', '{random 20-30 punctuation}']
],
'choices': []
}
graph['node1'] = {
'set': {
'key3': 'value3',
'header_val': [
'3 h1',
'2 h2',
'h3'
]
},
'headers': [
['1-1', 'my-header', '{header_val}'],
],
'choices': ['leaf']
}
graph['node2'] = {
'set': {
'randkey': 'value-{random 10-15 printable}',
'path': '/{bucket_readable}',
'indirect_key1': '{key1}'
},
'choices': ['leaf']
}
graph['bad_node'] = {
'set': {
'key1': 'value1'
},
'choices': ['leaf']
}
graph['nonexistant_child_node'] = {
'set': {},
'choices': ['leafy_greens']
}
graph['weighted_node'] = {
'set': {
'k1': [
'foo',
'2 bar',
'1 baz'
]
},
'choices': [
'foo',
'2 bar',
'1 baz'
]
}
graph['null_choice_node'] = {
'set': {},
'choices': [None]
}
graph['repeated_headers_node'] = {
'set': {},
'headers': [
['1-2', 'random-header-{random 5-10 printable}', '{random 20-30 punctuation}']
],
'choices': ['leaf']
}
graph['weighted_null_choice_node'] = {
'set': {},
'choices': ['3 null']
}
return graph
#def test_foo():
#graph_file = open('request_decision_graph.yml', 'r')
#graph = yaml.safe_load(graph_file)
#eq(graph['bucket_put_simple']['set']['grantee'], 0)
def test_load_graph():
graph_file = open('request_decision_graph.yml', 'r')
graph = yaml.safe_load(graph_file)
graph['start']
def test_descend_leaf_node():
graph = build_graph()
prng = random.Random(1)
decision = descend_graph(graph, 'leaf', prng)
eq(decision['key1'], 'value1')
eq(decision['key2'], 'value2')
e = assert_raises(KeyError, lambda x: decision[x], 'key3')
def test_descend_node():
graph = build_graph()
prng = random.Random(1)
decision = descend_graph(graph, 'node1', prng)
eq(decision['key1'], 'value1')
eq(decision['key2'], 'value2')
eq(decision['key3'], 'value3')
def test_descend_bad_node():
graph = build_graph()
prng = random.Random(1)
assert_raises(DecisionGraphError, descend_graph, graph, 'bad_node', prng)
def test_descend_nonexistant_child():
graph = build_graph()
prng = random.Random(1)
assert_raises(KeyError, descend_graph, graph, 'nonexistant_child_node', prng)
def test_expand_random_printable():
prng = random.Random(1)
got = expand({}, '{random 10-15 printable}', prng)
eq(got, '[/pNI$;92@')
def test_expand_random_binary():
prng = random.Random(1)
got = expand({}, '{random 10-15 binary}', prng)
eq(got, '\xdfj\xf1\xd80>a\xcd\xc4\xbb')
def test_expand_random_printable_no_whitespace():
prng = random.Random(1)
for _ in xrange(1000):
got = expand({}, '{random 500 printable_no_whitespace}', prng)
assert_true(reduce(lambda x, y: x and y, [x not in string.whitespace and x in string.printable for x in got]))
def test_expand_random_binary_no_whitespace():
prng = random.Random(1)
for _ in xrange(1000):
got = expand({}, '{random 500 binary_no_whitespace}', prng)
assert_true(reduce(lambda x, y: x and y, [x not in string.whitespace for x in got]))
def test_expand_random_no_args():
prng = random.Random(1)
for _ in xrange(1000):
got = expand({}, '{random}', prng)
assert_true(0 <= len(got) <= 1000)
assert_true(reduce(lambda x, y: x and y, [x in string.printable for x in got]))
def test_expand_random_no_charset():
prng = random.Random(1)
for _ in xrange(1000):
got = expand({}, '{random 10-30}', prng)
assert_true(10 <= len(got) <= 30)
assert_true(reduce(lambda x, y: x and y, [x in string.printable for x in got]))
def test_expand_random_exact_length():
prng = random.Random(1)
for _ in xrange(1000):
got = expand({}, '{random 10 digits}', prng)
assert_true(len(got) == 10)
assert_true(reduce(lambda x, y: x and y, [x in string.digits for x in got]))
def test_expand_random_bad_charset():
prng = random.Random(1)
assert_raises(KeyError, expand, {}, '{random 10-30 foo}', prng)
def test_expand_random_missing_length():
prng = random.Random(1)
assert_raises(ValueError, expand, {}, '{random printable}', prng)
def test_assemble_decision():
graph = build_graph()
prng = random.Random(1)
decision = assemble_decision(graph, prng)
eq(decision['key1'], 'value1')
eq(decision['key2'], 'value2')
eq(decision['randkey'], 'value-{random 10-15 printable}')
eq(decision['indirect_key1'], '{key1}')
eq(decision['path'], '/{bucket_readable}')
assert_raises(KeyError, lambda x: decision[x], 'key3')
def test_expand_escape():
prng = random.Random(1)
decision = dict(
foo='{{bar}}',
)
got = expand(decision, '{foo}', prng)
eq(got, '{bar}')
def test_expand_indirect():
prng = random.Random(1)
decision = dict(
foo='{bar}',
bar='quux',
)
got = expand(decision, '{foo}', prng)
eq(got, 'quux')
def test_expand_indirect_double():
prng = random.Random(1)
decision = dict(
foo='{bar}',
bar='{quux}',
quux='thud',
)
got = expand(decision, '{foo}', prng)
eq(got, 'thud')
def test_expand_recursive():
prng = random.Random(1)
decision = dict(
foo='{foo}',
)
e = assert_raises(RecursionError, expand, decision, '{foo}', prng)
eq(str(e), "Runaway recursion in string formatting: 'foo'")
def test_expand_recursive_mutual():
prng = random.Random(1)
decision = dict(
foo='{bar}',
bar='{foo}',
)
e = assert_raises(RecursionError, expand, decision, '{foo}', prng)
eq(str(e), "Runaway recursion in string formatting: 'foo'")
def test_expand_recursive_not_too_eager():
prng = random.Random(1)
decision = dict(
foo='bar',
)
got = expand(decision, 100*'{foo}', prng)
eq(got, 100*'bar')
def test_make_choice_unweighted_with_space():
prng = random.Random(1)
choice = make_choice(['foo bar'], prng)
eq(choice, 'foo bar')
def test_weighted_choices():
graph = build_graph()
prng = random.Random(1)
choices_made = {}
for _ in xrange(1000):
choice = make_choice(graph['weighted_node']['choices'], prng)
if choices_made.has_key(choice):
choices_made[choice] += 1
else:
choices_made[choice] = 1
foo_percentage = choices_made['foo'] / 1000.0
bar_percentage = choices_made['bar'] / 1000.0
baz_percentage = choices_made['baz'] / 1000.0
nose.tools.assert_almost_equal(foo_percentage, 0.25, 1)
nose.tools.assert_almost_equal(bar_percentage, 0.50, 1)
nose.tools.assert_almost_equal(baz_percentage, 0.25, 1)
def test_null_choices():
graph = build_graph()
prng = random.Random(1)
choice = make_choice(graph['null_choice_node']['choices'], prng)
eq(choice, '')
def test_weighted_null_choices():
graph = build_graph()
prng = random.Random(1)
choice = make_choice(graph['weighted_null_choice_node']['choices'], prng)
eq(choice, '')
def test_null_child():
graph = build_graph()
prng = random.Random(1)
decision = descend_graph(graph, 'null_choice_node', prng)
eq(decision, {})
def test_weighted_set():
graph = build_graph()
prng = random.Random(1)
choices_made = {}
for _ in xrange(1000):
choice = make_choice(graph['weighted_node']['set']['k1'], prng)
if choices_made.has_key(choice):
choices_made[choice] += 1
else:
choices_made[choice] = 1
foo_percentage = choices_made['foo'] / 1000.0
bar_percentage = choices_made['bar'] / 1000.0
baz_percentage = choices_made['baz'] / 1000.0
nose.tools.assert_almost_equal(foo_percentage, 0.25, 1)
nose.tools.assert_almost_equal(bar_percentage, 0.50, 1)
nose.tools.assert_almost_equal(baz_percentage, 0.25, 1)
def test_header_presence():
graph = build_graph()
prng = random.Random(1)
decision = descend_graph(graph, 'node1', prng)
c1 = itertools.count()
c2 = itertools.count()
for header, value in decision['headers']:
if header == 'my-header':
eq(value, '{header_val}')
assert_true(next(c1) < 1)
elif header == 'random-header-{random 5-10 printable}':
eq(value, '{random 20-30 punctuation}')
assert_true(next(c2) < 2)
else:
raise KeyError('unexpected header found: %s' % header)
assert_true(next(c1))
assert_true(next(c2))
def test_duplicate_header():
graph = build_graph()
prng = random.Random(1)
assert_raises(DecisionGraphError, descend_graph, graph, 'repeated_headers_node', prng)
def test_expand_headers():
graph = build_graph()
prng = random.Random(1)
decision = descend_graph(graph, 'node1', prng)
expanded_headers = expand_headers(decision, prng)
for header, value in expanded_headers.iteritems():
if header == 'my-header':
assert_true(value in ['h1', 'h2', 'h3'])
elif header.startswith('random-header-'):
assert_true(20 <= len(value) <= 30)
assert_true(string.strip(value, RepeatExpandingFormatter.charsets['punctuation']) is '')
else:
raise DecisionGraphError('unexpected header found: "%s"' % header)


@ -1,117 +0,0 @@
from boto.s3.key import Key
from optparse import OptionParser
from . import realistic
import traceback
import random
from . import common
import sys
def parse_opts():
parser = OptionParser()
parser.add_option('-O', '--outfile', help='write output to FILE. Defaults to STDOUT', metavar='FILE')
parser.add_option('-b', '--bucket', dest='bucket', help='push objects to BUCKET', metavar='BUCKET')
parser.add_option('--seed', dest='seed', help='optional seed for the random number generator')
return parser.parse_args()
def get_random_files(quantity, mean, stddev, seed):
"""Create file-like objects with pseudorandom contents.
IN:
number of files to create
mean file size in bytes
standard deviation from mean file size
seed for PRNG
OUT:
list of file handles
"""
file_generator = realistic.files(mean, stddev, seed)
return [file_generator.next() for _ in xrange(quantity)]
def upload_objects(bucket, files, seed):
"""Upload a bunch of files to an S3 bucket
IN:
boto S3 bucket object
list of file handles to upload
seed for PRNG
OUT:
list of boto S3 key objects
"""
keys = []
name_generator = realistic.names(15, 4, seed=seed)
for fp in files:
print >> sys.stderr, 'sending file with size %dB' % fp.size
key = Key(bucket)
key.key = name_generator.next()
key.set_contents_from_file(fp, rewind=True)
key.set_acl('public-read')
keys.append(key)
return keys
def _main():
'''To run the static content load test, make sure you've bootstrapped your
test environment and set up your config.yaml file, then run the following:
S3TEST_CONF=config.yaml virtualenv/bin/s3tests-generate-objects.py --seed 1234
This creates a bucket with your S3 credentials (from config.yaml) and
fills it with garbage objects as described in the
file_generation.groups section of config.yaml. It writes a list of
URLS to those objects to the file listed in file_generation.url_file
in config.yaml.
Once you have objects in your bucket, run the siege benchmarking program:
siege --rc ./siege.conf -r 5
This tells siege to read the ./siege.conf config file which tells it to
use the urls in ./urls.txt and log to ./siege.log. It hits each url in
urls.txt 5 times (-r flag).
Results are printed to the terminal and written in CSV format to
./siege.log
'''
(options, args) = parse_opts()
#SETUP
random.seed(options.seed if options.seed else None)
conn = common.s3.main
if options.outfile:
OUTFILE = open(options.outfile, 'w')
elif common.config.file_generation.url_file:
OUTFILE = open(common.config.file_generation.url_file, 'w')
else:
OUTFILE = sys.stdout
if options.bucket:
bucket = conn.create_bucket(options.bucket)
else:
bucket = common.get_new_bucket()
bucket.set_acl('public-read')
keys = []
print >> OUTFILE, 'bucket: %s' % bucket.name
print >> sys.stderr, 'setup complete, generating files'
for profile in common.config.file_generation.groups:
seed = random.random()
files = get_random_files(profile[0], profile[1], profile[2], seed)
keys += upload_objects(bucket, files, seed)
print >> sys.stderr, 'finished sending files. generating urls'
for key in keys:
print >> OUTFILE, key.generate_url(0, query_auth=False)
print >> sys.stderr, 'done'
def main():
common.setup()
try:
_main()
except Exception as e:
traceback.print_exc()
common.teardown()


@ -1,265 +0,0 @@
import gevent
import gevent.pool
import gevent.queue
import gevent.monkey; gevent.monkey.patch_all()
import itertools
import optparse
import os
import sys
import time
import traceback
import random
import yaml
import realistic
import common
NANOSECOND = int(1e9)
def reader(bucket, worker_id, file_names, queue, rand):
while True:
objname = rand.choice(file_names)
key = bucket.new_key(objname)
fp = realistic.FileValidator()
result = dict(
type='r',
bucket=bucket.name,
key=key.name,
worker=worker_id,
)
start = time.time()
try:
key.get_contents_to_file(fp._file)
except gevent.GreenletExit:
raise
except Exception as e:
# stop timer ASAP, even on errors
end = time.time()
result.update(
error=dict(
msg=str(e),
traceback=traceback.format_exc(),
),
)
# certain kinds of programmer errors make this a busy
# loop; let parent greenlet get some time too
time.sleep(0)
else:
end = time.time()
if not fp.valid():
m='md5sum check failed start={s} ({se}) end={e} size={sz} obj={o}'.format(s=time.ctime(start), se=start, e=end, sz=fp._file.tell(), o=objname)
result.update(
error=dict(
msg=m,
traceback=traceback.format_exc(),
),
)
print "ERROR:", m
else:
elapsed = end - start
result.update(
start=start,
duration=int(round(elapsed * NANOSECOND)),
)
queue.put(result)
def writer(bucket, worker_id, file_names, files, queue, rand):
while True:
fp = next(files)
fp.seek(0)
objname = rand.choice(file_names)
key = bucket.new_key(objname)
result = dict(
type='w',
bucket=bucket.name,
key=key.name,
worker=worker_id,
)
start = time.time()
try:
key.set_contents_from_file(fp)
except gevent.GreenletExit:
raise
except Exception as e:
# stop timer ASAP, even on errors
end = time.time()
result.update(
error=dict(
msg=str(e),
traceback=traceback.format_exc(),
),
)
# certain kinds of programmer errors make this a busy
# loop; let parent greenlet get some time too
time.sleep(0)
else:
end = time.time()
elapsed = end - start
result.update(
start=start,
duration=int(round(elapsed * NANOSECOND)),
)
queue.put(result)
def parse_options():
parser = optparse.OptionParser(
usage='%prog [OPTS] <CONFIG_YAML',
)
parser.add_option("--no-cleanup", dest="cleanup", action="store_false",
help="skip cleaning up all created buckets", default=True)
return parser.parse_args()
def write_file(bucket, file_name, fp):
"""
Write a single file to the bucket using the file_name.
This is used during the warmup to initialize the files.
"""
key = bucket.new_key(file_name)
key.set_contents_from_file(fp)
def main():
# parse options
(options, args) = parse_options()
if os.isatty(sys.stdin.fileno()):
raise RuntimeError('Need configuration in stdin.')
config = common.read_config(sys.stdin)
conn = common.connect(config.s3)
bucket = None
try:
# setup
real_stdout = sys.stdout
sys.stdout = sys.stderr
# verify all required config items are present
if 'readwrite' not in config:
raise RuntimeError('readwrite section not found in config')
for item in ['readers', 'writers', 'duration', 'files', 'bucket']:
if item not in config.readwrite:
raise RuntimeError("Missing readwrite config item: {item}".format(item=item))
for item in ['num', 'size', 'stddev']:
if item not in config.readwrite.files:
raise RuntimeError("Missing readwrite config item: files.{item}".format(item=item))
seeds = dict(config.readwrite.get('random_seed', {}))
seeds.setdefault('main', random.randrange(2**32))
rand = random.Random(seeds['main'])
for name in ['names', 'contents', 'writer', 'reader']:
seeds.setdefault(name, rand.randrange(2**32))
print 'Using random seeds: {seeds}'.format(seeds=seeds)
# setup bucket and other objects
bucket_name = common.choose_bucket_prefix(config.readwrite.bucket, max_len=30)
bucket = conn.create_bucket(bucket_name)
print "Created bucket: {name}".format(name=bucket.name)
# check flag for deterministic file name creation
if not config.readwrite.get('deterministic_file_names'):
print 'Creating random file names'
file_names = realistic.names(
mean=15,
stddev=4,
seed=seeds['names'],
)
file_names = itertools.islice(file_names, config.readwrite.files.num)
file_names = list(file_names)
else:
print 'Creating file names that are deterministic'
file_names = []
for x in xrange(config.readwrite.files.num):
file_names.append('test_file_{num}'.format(num=x))
files = realistic.files2(
mean=1024 * config.readwrite.files.size,
stddev=1024 * config.readwrite.files.stddev,
seed=seeds['contents'],
)
q = gevent.queue.Queue()
# warmup - get initial set of files uploaded if there are any writers specified
if config.readwrite.writers > 0:
print "Uploading initial set of {num} files".format(num=config.readwrite.files.num)
warmup_pool = gevent.pool.Pool(size=100)
for file_name in file_names:
fp = next(files)
warmup_pool.spawn(
write_file,
bucket=bucket,
file_name=file_name,
fp=fp,
)
warmup_pool.join()
# main work
print "Starting main worker loop."
print "Using file size: {size} +- {stddev}".format(size=config.readwrite.files.size, stddev=config.readwrite.files.stddev)
print "Spawning {w} writers and {r} readers...".format(w=config.readwrite.writers, r=config.readwrite.readers)
group = gevent.pool.Group()
rand_writer = random.Random(seeds['writer'])
# Don't create random files if deterministic_file_names is set and true
if not config.readwrite.get('deterministic_file_names'):
for x in xrange(config.readwrite.writers):
this_rand = random.Random(rand_writer.randrange(2**32))
group.spawn(
writer,
bucket=bucket,
worker_id=x,
file_names=file_names,
files=files,
queue=q,
rand=this_rand,
)
# Since the loop generating readers already uses config.readwrite.readers
# and the file names are already generated (randomly or deterministically),
# this loop needs no additional qualifiers. If zero readers are specified,
# it will behave as expected (no data is read)
rand_reader = random.Random(seeds['reader'])
for x in xrange(config.readwrite.readers):
this_rand = random.Random(rand_reader.randrange(2**32))
group.spawn(
reader,
bucket=bucket,
worker_id=x,
file_names=file_names,
queue=q,
rand=this_rand,
)
def stop():
group.kill(block=True)
q.put(StopIteration)
gevent.spawn_later(config.readwrite.duration, stop)
# wait for all the tests to finish
group.join()
print 'post-join, queue size {size}'.format(size=q.qsize())
if q.qsize() > 0:
for temp_dict in q:
if 'error' in temp_dict:
raise Exception('exception:\n\t{msg}\n\t{trace}'.format(
msg=temp_dict['error']['msg'],
trace=temp_dict['error']['traceback'])
)
else:
yaml.safe_dump(temp_dict, stream=real_stdout)
finally:
# cleanup
if options.cleanup:
if bucket is not None:
common.nuke_bucket(bucket)


@ -1,281 +0,0 @@
import hashlib
import random
import string
import struct
import time
import math
import tempfile
import shutil
import os
NANOSECOND = int(1e9)
def generate_file_contents(size):
"""
A helper function to generate binary contents for a given size, and
append the sha1 hexdigest of the contents to the end of the blob.
It uses sha1's hexdigest, which is 40 chars long, so any generated
blob should have its last 40 chars removed to retrieve the original
hash and binary so that validity can be proved.
"""
size = int(size)
contents = os.urandom(size)
content_hash = hashlib.sha1(contents).hexdigest()
return contents + content_hash
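A sketch of the blob layout generate_file_contents() describes, in Python 3 spelling (the deleted module is Python 2, where str concatenation makes the encode() calls unnecessary): the payload followed by the 40-char sha1 hexdigest of that payload.

import hashlib, os

payload = os.urandom(1024)
blob = payload + hashlib.sha1(payload).hexdigest().encode()
body, tag = blob[:-40], blob[-40:]
assert hashlib.sha1(body).hexdigest().encode() == tag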
class FileValidator(object):
def __init__(self, f=None):
self._file = tempfile.SpooledTemporaryFile()
self.original_hash = None
self.new_hash = None
if f:
f.seek(0)
shutil.copyfileobj(f, self._file)
def valid(self):
"""
Returns True if this file looks valid. The file is valid if the end
of the file has the sha1 hexdigest for the first part of the file.
"""
self._file.seek(0)
contents = self._file.read()
self.original_hash, binary = contents[-40:], contents[:-40]
self.new_hash = hashlib.sha1(binary).hexdigest()
if not self.new_hash == self.original_hash:
print 'original hash: ', self.original_hash
print 'new hash: ', self.new_hash
print 'size: ', self._file.tell()
return False
return True
# XXX not sure if we need all of these
def seek(self, offset, whence=os.SEEK_SET):
self._file.seek(offset, whence)
def tell(self):
return self._file.tell()
def read(self, size=-1):
return self._file.read(size)
def write(self, data):
self._file.write(data)
self._file.seek(0)
class RandomContentFile(object):
def __init__(self, size, seed):
self.size = size
self.seed = seed
self.random = random.Random(self.seed)
# Boto likes to seek once more after it's done reading, so we need to save the last chunks/seek value.
self.last_chunks = self.chunks = None
self.last_seek = None
# Let seek initialize the rest of it, rather than dup code
self.seek(0)
def _mark_chunk(self):
self.chunks.append([self.offset, int(round((time.time() - self.last_seek) * NANOSECOND))])
def seek(self, offset, whence=os.SEEK_SET):
if whence == os.SEEK_SET:
self.offset = offset
elif whence == os.SEEK_END:
self.offset = self.size + offset;
elif whence == os.SEEK_CUR:
self.offset += offset
assert self.offset == 0
self.random.seed(self.seed)
self.buffer = ''
self.hash = hashlib.md5()
self.digest_size = self.hash.digest_size
self.digest = None
# Save the last seek time as our start time, and the last chunks
self.last_chunks = self.chunks
# Before emptying.
self.last_seek = time.time()
self.chunks = []
def tell(self):
return self.offset
def _generate(self):
# generate and return a chunk of pseudorandom data
size = min(self.size, 1*1024*1024) # generate at most 1 MB at a time
chunks = int(math.ceil(size/8.0)) # number of 8-byte chunks to create
l = [self.random.getrandbits(64) for _ in xrange(chunks)]
s = struct.pack(chunks*'Q', *l)
return s
def read(self, size=-1):
if size < 0:
size = self.size - self.offset
r = []
random_count = min(size, self.size - self.offset - self.digest_size)
if random_count > 0:
while len(self.buffer) < random_count:
self.buffer += self._generate()
self.offset += random_count
size -= random_count
data, self.buffer = self.buffer[:random_count], self.buffer[random_count:]
if self.hash is not None:
self.hash.update(data)
r.append(data)
digest_count = min(size, self.size - self.offset)
if digest_count > 0:
if self.digest is None:
self.digest = self.hash.digest()
self.hash = None
self.offset += digest_count
size -= digest_count
data = self.digest[:digest_count]
r.append(data)
self._mark_chunk()
return ''.join(r)
class PrecomputedContentFile(object):
def __init__(self, f):
self._file = tempfile.SpooledTemporaryFile()
f.seek(0)
shutil.copyfileobj(f, self._file)
self.last_chunks = self.chunks = None
self.seek(0)
def seek(self, offset, whence=os.SEEK_SET):
self._file.seek(offset, whence)
if self.tell() == 0:
# only reset the chunks when seeking to the beginning
self.last_chunks = self.chunks
self.last_seek = time.time()
self.chunks = []
def tell(self):
return self._file.tell()
def read(self, size=-1):
data = self._file.read(size)
self._mark_chunk()
return data
def _mark_chunk(self):
elapsed = time.time() - self.last_seek
elapsed_nsec = int(round(elapsed * NANOSECOND))
self.chunks.append([self.tell(), elapsed_nsec])
class FileVerifier(object):
def __init__(self):
self.size = 0
self.hash = hashlib.md5()
self.buf = ''
self.created_at = time.time()
self.chunks = []
def _mark_chunk(self):
self.chunks.append([self.size, int(round((time.time() - self.created_at) * NANOSECOND))])
def write(self, data):
self.size += len(data)
self.buf += data
digsz = -1*self.hash.digest_size
new_data, self.buf = self.buf[0:digsz], self.buf[digsz:]
self.hash.update(new_data)
self._mark_chunk()
def valid(self):
"""
Returns True if this file looks valid. The file is valid if the end
of the file has the md5 digest for the first part of the file.
"""
if self.size < self.hash.digest_size:
return self.hash.digest().startswith(self.buf)
return self.buf == self.hash.digest()
def files(mean, stddev, seed=None):
"""
Yields file-like objects with effectively random contents, where
the size of each file follows the normal distribution with `mean`
and `stddev`.
Beware, the file-likeness is very shallow. You can use boto's
`key.set_contents_from_file` to send these to S3, but they are not
full file objects.
The last 128 bits are the MD5 digest of the previous bytes, for
verifying round-trip data integrity. For example, if you
re-download the object and place the contents into a file called
``foo``, the following should print two identical lines:
python -c 'import sys, hashlib; data=sys.stdin.read(); print hashlib.md5(data[:-16]).hexdigest(); print "".join("%02x" % ord(c) for c in data[-16:])' <foo
Except for objects shorter than 16 bytes, where the second line
will be proportionally shorter.
"""
rand = random.Random(seed)
while True:
while True:
size = int(rand.normalvariate(mean, stddev))
if size >= 0:
break
yield RandomContentFile(size=size, seed=rand.getrandbits(32))
def files2(mean, stddev, seed=None, numfiles=10):
"""
Yields file objects with effectively random contents, where the
size of each file follows the normal distribution with `mean` and
`stddev`.
Rather than continuously generating new files, this pre-computes and
stores `numfiles` files and yields them in a loop.
"""
# pre-compute all the files (and save with TemporaryFiles)
fs = []
for _ in xrange(numfiles):
t = tempfile.SpooledTemporaryFile()
t.write(generate_file_contents(random.normalvariate(mean, stddev)))
t.seek(0)
fs.append(t)
while True:
for f in fs:
yield f
def names(mean, stddev, charset=None, seed=None):
"""
Yields strings that are somewhat plausible as file names, where
the length of each filename follows the normal distribution with
`mean` and `stddev`.
"""
if charset is None:
charset = string.ascii_lowercase
rand = random.Random(seed)
while True:
while True:
length = int(rand.normalvariate(mean, stddev))
if length > 0:
break
name = ''.join(rand.choice(charset) for _ in xrange(length))
yield name


@ -1,219 +0,0 @@
import gevent
import gevent.pool
import gevent.queue
import gevent.monkey; gevent.monkey.patch_all()
import itertools
import optparse
import os
import sys
import time
import traceback
import random
import yaml
import realistic
import common
NANOSECOND = int(1e9)
def writer(bucket, objname, fp, queue):
key = bucket.new_key(objname)
result = dict(
type='w',
bucket=bucket.name,
key=key.name,
)
start = time.time()
try:
key.set_contents_from_file(fp, rewind=True)
except gevent.GreenletExit:
raise
except Exception as e:
# stop timer ASAP, even on errors
end = time.time()
result.update(
error=dict(
msg=str(e),
traceback=traceback.format_exc(),
),
)
# certain kinds of programmer errors make this a busy
# loop; let parent greenlet get some time too
time.sleep(0)
else:
end = time.time()
elapsed = end - start
result.update(
start=start,
duration=int(round(elapsed * NANOSECOND)),
chunks=fp.last_chunks,
)
queue.put(result)
def reader(bucket, objname, queue):
key = bucket.new_key(objname)
fp = realistic.FileVerifier()
result = dict(
type='r',
bucket=bucket.name,
key=key.name,
)
start = time.time()
try:
key.get_contents_to_file(fp)
except gevent.GreenletExit:
raise
except Exception as e:
# stop timer ASAP, even on errors
end = time.time()
result.update(
error=dict(
msg=str(e),
traceback=traceback.format_exc(),
),
)
# certain kinds of programmer errors make this a busy
# loop; let parent greenlet get some time too
time.sleep(0)
else:
end = time.time()
if not fp.valid():
result.update(
error=dict(
msg='md5sum check failed',
),
)
elapsed = end - start
result.update(
start=start,
duration=int(round(elapsed * NANOSECOND)),
chunks=fp.chunks,
)
queue.put(result)
def parse_options():
parser = optparse.OptionParser(
usage='%prog [OPTS] <CONFIG_YAML',
)
parser.add_option("--no-cleanup", dest="cleanup", action="store_false",
help="skip cleaning up all created buckets", default=True)
return parser.parse_args()
def main():
# parse options
(options, args) = parse_options()
if os.isatty(sys.stdin.fileno()):
raise RuntimeError('Need configuration in stdin.')
config = common.read_config(sys.stdin)
conn = common.connect(config.s3)
bucket = None
try:
# setup
real_stdout = sys.stdout
sys.stdout = sys.stderr
# verify all required config items are present
if 'roundtrip' not in config:
raise RuntimeError('roundtrip section not found in config')
for item in ['readers', 'writers', 'duration', 'files', 'bucket']:
if item not in config.roundtrip:
raise RuntimeError("Missing roundtrip config item: {item}".format(item=item))
for item in ['num', 'size', 'stddev']:
if item not in config.roundtrip.files:
raise RuntimeError("Missing roundtrip config item: files.{item}".format(item=item))
seeds = dict(config.roundtrip.get('random_seed', {}))
seeds.setdefault('main', random.randrange(2**32))
rand = random.Random(seeds['main'])
for name in ['names', 'contents', 'writer', 'reader']:
seeds.setdefault(name, rand.randrange(2**32))
print 'Using random seeds: {seeds}'.format(seeds=seeds)
# setup bucket and other objects
bucket_name = common.choose_bucket_prefix(config.roundtrip.bucket, max_len=30)
bucket = conn.create_bucket(bucket_name)
print "Created bucket: {name}".format(name=bucket.name)
objnames = realistic.names(
mean=15,
stddev=4,
seed=seeds['names'],
)
objnames = itertools.islice(objnames, config.roundtrip.files.num)
objnames = list(objnames)
files = realistic.files(
mean=1024 * config.roundtrip.files.size,
stddev=1024 * config.roundtrip.files.stddev,
seed=seeds['contents'],
)
q = gevent.queue.Queue()
logger_g = gevent.spawn(yaml.safe_dump_all, q, stream=real_stdout)
print "Writing {num} objects with {w} workers...".format(
num=config.roundtrip.files.num,
w=config.roundtrip.writers,
)
pool = gevent.pool.Pool(size=config.roundtrip.writers)
start = time.time()
for objname in objnames:
fp = next(files)
pool.spawn(
writer,
bucket=bucket,
objname=objname,
fp=fp,
queue=q,
)
pool.join()
stop = time.time()
elapsed = stop - start
q.put(dict(
type='write_done',
duration=int(round(elapsed * NANOSECOND)),
))
print "Reading {num} objects with {w} workers...".format(
num=config.roundtrip.files.num,
w=config.roundtrip.readers,
)
# avoid accessing them in the same order as the writing
rand.shuffle(objnames)
pool = gevent.pool.Pool(size=config.roundtrip.readers)
start = time.time()
for objname in objnames:
pool.spawn(
reader,
bucket=bucket,
objname=objname,
queue=q,
)
pool.join()
stop = time.time()
elapsed = stop - start
q.put(dict(
type='read_done',
duration=int(round(elapsed * NANOSECOND)),
))
q.put(StopIteration)
logger_g.get()
finally:
# cleanup
if options.cleanup:
if bucket is not None:
common.nuke_bucket(bucket)


@ -1,79 +0,0 @@
from s3tests import realistic
import shutil
import tempfile
# XXX not used for now
def create_files(mean=2000):
return realistic.files2(
mean=1024 * mean,
stddev=1024 * 500,
seed=1256193726,
numfiles=4,
)
class TestFiles(object):
# the size and seed are what we get when generating a bunch of files
# with pseudorandom numbers based on stddev, seed, and mean.
# this fails, demonstrating the (current) problem
#def test_random_file_invalid(self):
# size = 2506764
# seed = 3391518755
# source = realistic.RandomContentFile(size=size, seed=seed)
# t = tempfile.SpooledTemporaryFile()
# shutil.copyfileobj(source, t)
# precomputed = realistic.PrecomputedContentFile(t)
# assert precomputed.valid()
# verifier = realistic.FileVerifier()
# shutil.copyfileobj(precomputed, verifier)
# assert verifier.valid()
# this passes
def test_random_file_valid(self):
size = 2506001
seed = 3391518755
source = realistic.RandomContentFile(size=size, seed=seed)
t = tempfile.SpooledTemporaryFile()
shutil.copyfileobj(source, t)
precomputed = realistic.PrecomputedContentFile(t)
verifier = realistic.FileVerifier()
shutil.copyfileobj(precomputed, verifier)
assert verifier.valid()
# new implementation
class TestFileValidator(object):
def test_new_file_is_valid(self):
size = 2506001
contents = realistic.generate_file_contents(size)
t = tempfile.SpooledTemporaryFile()
t.write(contents)
t.seek(0)
fp = realistic.FileValidator(t)
assert fp.valid()
def test_new_file_is_valid_when_size_is_1(self):
size = 1
contents = realistic.generate_file_contents(size)
t = tempfile.SpooledTemporaryFile()
t.write(contents)
t.seek(0)
fp = realistic.FileValidator(t)
assert fp.valid()
def test_new_file_is_valid_on_several_calls(self):
size = 2506001
contents = realistic.generate_file_contents(size)
t = tempfile.SpooledTemporaryFile()
t.write(contents)
t.seek(0)
fp = realistic.FileValidator(t)
assert fp.valid()
assert fp.valid()

s3tests_boto3/common.py (new file, 301 lines)

@ -0,0 +1,301 @@
import boto.s3.connection
import munch
import itertools
import os
import random
import string
import yaml
import re
from lxml import etree
from doctest import Example
from lxml.doctestcompare import LXMLOutputChecker
s3 = munch.Munch()
config = munch.Munch()
prefix = ''
bucket_counter = itertools.count(1)
key_counter = itertools.count(1)
def choose_bucket_prefix(template, max_len=30):
"""
Choose a prefix for our test buckets, so they're easy to identify.
Use template and feed it more and more random filler, until it's
as long as possible but still below max_len.
"""
rand = ''.join(
random.choice(string.ascii_lowercase + string.digits)
for c in range(255)
)
while rand:
s = template.format(random=rand)
if len(s) <= max_len:
return s
rand = rand[:-1]
raise RuntimeError(
'Bucket prefix template is impossible to fulfill: {template!r}'.format(
template=template,
),
)
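Illustrative usage of choose_bucket_prefix() (the template value is assumed, not from the diff): the {random} placeholder is padded with as much random filler as fits under max_len.

p = choose_bucket_prefix('test-{random}-', max_len=30)
assert p.startswith('test-') and p.endswith('-')
assert len(p) <= 30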
def nuke_bucket(bucket):
try:
bucket.set_canned_acl('private')
# TODO: deleted_cnt and the while loop is a work around for rgw
# not sending the
deleted_cnt = 1
while deleted_cnt:
deleted_cnt = 0
for key in bucket.list():
print('Cleaning bucket {bucket} key {key}'.format(
bucket=bucket,
key=key,
))
key.set_canned_acl('private')
key.delete()
deleted_cnt += 1
bucket.delete()
except boto.exception.S3ResponseError as e:
# TODO workaround for buggy rgw that fails to send
# error_code, remove
if (e.status == 403
and e.error_code is None
and e.body == ''):
e.error_code = 'AccessDenied'
if e.error_code != 'AccessDenied':
print('GOT UNWANTED ERROR', e.error_code)
raise
# seems like we're not the owner of the bucket; ignore
pass
def nuke_prefixed_buckets():
for name, conn in list(s3.items()):
print('Cleaning buckets from connection {name}'.format(name=name))
for bucket in conn.get_all_buckets():
if bucket.name.startswith(prefix):
print('Cleaning bucket {bucket}'.format(bucket=bucket))
nuke_bucket(bucket)
print('Done with cleanup of test buckets.')
def read_config(fp):
config = munch.Munch()
g = yaml.safe_load_all(fp)
for new in g:
config.update(munch.munchify(new))
return config
def connect(conf):
mapping = dict(
port='port',
host='host',
is_secure='is_secure',
access_key='aws_access_key_id',
secret_key='aws_secret_access_key',
)
kwargs = dict((mapping[k],v) for (k,v) in conf.items() if k in mapping)
#process calling_format argument
calling_formats = dict(
ordinary=boto.s3.connection.OrdinaryCallingFormat(),
subdomain=boto.s3.connection.SubdomainCallingFormat(),
vhost=boto.s3.connection.VHostCallingFormat(),
)
kwargs['calling_format'] = calling_formats['ordinary']
if 'calling_format' in conf:
raw_calling_format = conf['calling_format']
try:
kwargs['calling_format'] = calling_formats[raw_calling_format]
except KeyError:
raise RuntimeError(
'calling_format unknown: %r' % raw_calling_format
)
# TODO test vhost calling format
conn = boto.s3.connection.S3Connection(**kwargs)
return conn
def setup():
global s3, config, prefix
s3.clear()
config.clear()
try:
path = os.environ['S3TEST_CONF']
except KeyError:
raise RuntimeError(
'To run tests, point environment '
+ 'variable S3TEST_CONF to a config file.',
)
with open(path) as f:
config.update(read_config(f))
# These 3 should always be present.
if 's3' not in config:
raise RuntimeError('Your config file is missing the s3 section!')
if 'defaults' not in config.s3:
raise RuntimeError('Your config file is missing the s3.defaults section!')
if 'fixtures' not in config:
raise RuntimeError('Your config file is missing the fixtures section!')
template = config.fixtures.get('bucket prefix', 'test-{random}-')
prefix = choose_bucket_prefix(template=template)
if prefix == '':
raise RuntimeError("Empty Prefix! Aborting!")
defaults = config.s3.defaults
for section in list(config.s3.keys()):
if section == 'defaults':
continue
conf = {}
conf.update(defaults)
conf.update(config.s3[section])
conn = connect(conf)
s3[section] = conn
# WARNING! we actively delete all buckets we see with the prefix
# we've chosen! Choose your prefix with care, and don't reuse
# credentials!
# We also assume nobody else is going to use buckets with that
# prefix. This is racy but given enough randomness, should not
# really fail.
nuke_prefixed_buckets()
def get_new_bucket(connection=None):
"""
Get a bucket that exists and is empty.
Always recreates a bucket from scratch. This is useful to also
reset ACLs and such.
"""
if connection is None:
connection = s3.main
name = '{prefix}{num}'.format(
prefix=prefix,
num=next(bucket_counter),
)
# the only way for this to fail with a pre-existing bucket is if
# someone raced us between setup nuke_prefixed_buckets and here;
# ignore that as astronomically unlikely
bucket = connection.create_bucket(name)
return bucket
def teardown():
nuke_prefixed_buckets()
def with_setup_kwargs(setup, teardown=None):
"""Decorator to add setup and/or teardown methods to a test function::
@with_setup_args(setup, teardown)
def test_something():
" ... "
The setup function should return (kwargs) which will be passed to
test function, and teardown function.
Note that `with_setup_kwargs` is useful *only* for test functions, not for test
methods or inside of TestCase subclasses.
"""
def decorate(func):
kwargs = {}
def test_wrapped(*args, **kwargs2):
k2 = kwargs.copy()
k2.update(kwargs2)
k2['testname'] = func.__name__
func(*args, **k2)
test_wrapped.__name__ = func.__name__
def setup_wrapped():
k = setup()
kwargs.update(k)
if hasattr(func, 'setup'):
func.setup()
test_wrapped.setup = setup_wrapped
if teardown:
def teardown_wrapped():
if hasattr(func, 'teardown'):
func.teardown()
teardown(**kwargs)
test_wrapped.teardown = teardown_wrapped
else:
if hasattr(func, 'teardown'):
test_wrapped.teardown = func.teardown
return test_wrapped
return decorate
# Demo case for the above, when you run test_gen():
# _test_gen will run twice,
# with the following stderr printing
# setup_func {'b': 2}
# testcase ('1',) {'b': 2, 'testname': '_test_gen'}
# teardown_func {'b': 2}
# setup_func {'b': 2}
# testcase () {'b': 2, 'testname': '_test_gen'}
# teardown_func {'b': 2}
#
#def setup_func():
# kwargs = {'b': 2}
# print("setup_func", kwargs, file=sys.stderr)
# return kwargs
#
#def teardown_func(**kwargs):
# print("teardown_func", kwargs, file=sys.stderr)
#
#@with_setup_kwargs(setup=setup_func, teardown=teardown_func)
#def _test_gen(*args, **kwargs):
# print("testcase", args, kwargs, file=sys.stderr)
#
#def test_gen():
# yield _test_gen, '1'
# yield _test_gen
def trim_xml(xml_str):
p = etree.XMLParser(remove_blank_text=True)
elem = etree.XML(xml_str, parser=p)
return etree.tostring(elem)
def normalize_xml(xml, pretty_print=True):
if xml is None:
return xml
root = etree.fromstring(xml.encode(encoding='ascii'))
for element in root.iter('*'):
if element.text is not None and not element.text.strip():
element.text = None
if element.text is not None:
element.text = element.text.strip().replace("\n", "").replace("\r", "")
if element.tail is not None and not element.tail.strip():
element.tail = None
if element.tail is not None:
element.tail = element.tail.strip().replace("\n", "").replace("\r", "")
# Sort the elements
for parent in root.xpath('//*[./*]'): # Search for parent elements
parent[:] = sorted(parent,key=lambda x: x.tag)
# decode to str so the regex substitutions below operate on text, not bytes
xmlstr = etree.tostring(root, encoding="utf-8", xml_declaration=True, pretty_print=pretty_print).decode('utf-8')
# there are two different DTD URIs
xmlstr = re.sub(r'xmlns="[^"]+"', 'xmlns="s3"', xmlstr)
xmlstr = re.sub(r'xmlns=\'[^\']+\'', 'xmlns="s3"', xmlstr)
for uri in ['http://doc.s3.amazonaws.com/doc/2006-03-01/', 'http://s3.amazonaws.com/doc/2006-03-01/']:
xmlstr = xmlstr.replace(uri, 'URI-DTD')
#xmlstr = re.sub(r'>\s+', '>', xmlstr, count=0, flags=re.MULTILINE)
return xmlstr
def assert_xml_equal(got, want):
assert want is not None, 'Wanted XML cannot be None'
if got is None:
raise AssertionError('Got input to validate was None')
checker = LXMLOutputChecker()
if not checker.check_output(want, got, 0):
message = checker.output_difference(Example("", want), got, 0)
raise AssertionError(message)
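# A hedged usage sketch with made-up XML literals: the checker
# normalizes whitespace in text nodes, so documents that differ only in
# layout compare equal. normalize_xml above additionally sorts sibling
# elements and canonicalizes the namespace URI before comparison.
#
#   want = '<Owner><ID>abc</ID><DisplayName>abc</DisplayName></Owner>'
#   got = '<Owner> <ID>abc</ID> <DisplayName>abc</DisplayName> </Owner>'
#   assert_xml_equal(got, want)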


@@ -0,0 +1,782 @@
import pytest
import boto3
from botocore import UNSIGNED
from botocore.client import Config
from botocore.exceptions import ClientError
from botocore.handlers import disable_signing
import configparser
import datetime
import time
import os
import munch
import random
import string
import itertools
import urllib3
import re
config = munch.Munch()
# this will be assigned by setup()
prefix = None
def get_prefix():
assert prefix is not None
return prefix
def choose_bucket_prefix(template, max_len=30):
"""
Choose a prefix for our test buckets, so they're easy to identify.
Use template and feed it more and more random filler, until it's
as long as possible but still below max_len.
"""
rand = ''.join(
random.choice(string.ascii_lowercase + string.digits)
for c in range(255)
)
while rand:
s = template.format(random=rand)
if len(s) <= max_len:
return s
rand = rand[:-1]
raise RuntimeError(
'Bucket prefix template is impossible to fulfill: {template!r}'.format(
template=template,
),
)
def get_buckets_list(client=None, prefix=None):
if client is None:
client = get_client()
if prefix is None:
prefix = get_prefix()
response = client.list_buckets()
bucket_dicts = response['Buckets']
buckets_list = []
for bucket in bucket_dicts:
if prefix in bucket['Name']:
buckets_list.append(bucket['Name'])
return buckets_list
def get_objects_list(bucket, client=None, prefix=None):
if client is None:
client = get_client()
if prefix is None:
response = client.list_objects(Bucket=bucket)
else:
response = client.list_objects(Bucket=bucket, Prefix=prefix)
objects_list = []
if 'Contents' in response:
contents = response['Contents']
for obj in contents:
objects_list.append(obj['Key'])
return objects_list
# generator function that returns object listings in batches, where each
# batch is a list of dicts compatible with delete_objects()
def list_versions(client, bucket, batch_size):
kwargs = {'Bucket': bucket, 'MaxKeys': batch_size}
truncated = True
while truncated:
listing = client.list_object_versions(**kwargs)
kwargs['KeyMarker'] = listing.get('NextKeyMarker')
kwargs['VersionIdMarker'] = listing.get('NextVersionIdMarker')
truncated = listing['IsTruncated']
objs = listing.get('Versions', []) + listing.get('DeleteMarkers', [])
if len(objs):
yield [{'Key': o['Key'], 'VersionId': o['VersionId']} for o in objs]
def nuke_bucket(client, bucket):
batch_size = 128
max_retain_date = None
# list and delete objects in batches
for objects in list_versions(client, bucket, batch_size):
delete = client.delete_objects(Bucket=bucket,
Delete={'Objects': objects, 'Quiet': True},
BypassGovernanceRetention=True)
# check for object locks on 403 AccessDenied errors
for err in delete.get('Errors', []):
if err.get('Code') != 'AccessDenied':
continue
try:
res = client.get_object_retention(Bucket=bucket,
Key=err['Key'], VersionId=err['VersionId'])
retain_date = res['Retention']['RetainUntilDate']
if not max_retain_date or max_retain_date < retain_date:
max_retain_date = retain_date
except ClientError:
pass
if max_retain_date:
# wait out the retention period (up to 60 seconds)
now = datetime.datetime.now(max_retain_date.tzinfo)
if max_retain_date > now:
delta = max_retain_date - now
if delta.total_seconds() > 60:
raise RuntimeError('bucket {} still has objects \
locked for {} more seconds, not waiting for \
bucket cleanup'.format(bucket, delta.total_seconds()))
print('nuke_bucket', bucket, 'waiting', delta.total_seconds(),
'seconds for object locks to expire')
time.sleep(delta.total_seconds())
for objects in list_versions(client, bucket, batch_size):
client.delete_objects(Bucket=bucket,
Delete={'Objects': objects, 'Quiet': True},
BypassGovernanceRetention=True)
client.delete_bucket(Bucket=bucket)
def nuke_prefixed_buckets(prefix, client=None):
if client is None:
client = get_client()
buckets = get_buckets_list(client, prefix)
err = None
for bucket_name in buckets:
try:
nuke_bucket(client, bucket_name)
except Exception as e:
# Exceptions raised during cleanup shouldn't abort the loop; keep going
# so the remaining buckets are still cleared and nothing is leaked.
# err is saved so the caller is still told that a failure occurred.
err = e
pass
if err:
raise err
print('Done with cleanup of test buckets.')
def configured_storage_classes():
sc = ['STANDARD']
extra_sc = re.split(r"\W+", config.storage_classes)
for item in extra_sc:
if item != 'STANDARD':
sc.append(item)
sc = [i for i in sc if i]
print("storage classes configured: " + str(sc))
return sc
def configure():
cfg = configparser.RawConfigParser()
try:
path = os.environ['S3TEST_CONF']
except KeyError:
raise RuntimeError(
'To run tests, point environment '
+ 'variable S3TEST_CONF to a config file.',
)
cfg.read(path)
if not cfg.defaults():
raise RuntimeError('Your config file is missing the DEFAULT section!')
if not cfg.has_section("s3 main"):
raise RuntimeError('Your config file is missing the "s3 main" section!')
if not cfg.has_section("s3 alt"):
raise RuntimeError('Your config file is missing the "s3 alt" section!')
if not cfg.has_section("s3 tenant"):
raise RuntimeError('Your config file is missing the "s3 tenant" section!')
global prefix
defaults = cfg.defaults()
# vars from the DEFAULT section
config.default_host = defaults.get("host")
config.default_port = int(defaults.get("port"))
config.default_is_secure = cfg.getboolean('DEFAULT', "is_secure")
proto = 'https' if config.default_is_secure else 'http'
config.default_endpoint = "%s://%s:%d" % (proto, config.default_host, config.default_port)
try:
config.default_ssl_verify = cfg.getboolean('DEFAULT', "ssl_verify")
except configparser.NoOptionError:
config.default_ssl_verify = False
# Disable InsecureRequestWarning reported by urllib3 when ssl_verify is False
if not config.default_ssl_verify:
urllib3.disable_warnings()
# vars from the main section
config.main_access_key = cfg.get('s3 main',"access_key")
config.main_secret_key = cfg.get('s3 main',"secret_key")
config.main_display_name = cfg.get('s3 main',"display_name")
config.main_user_id = cfg.get('s3 main',"user_id")
config.main_email = cfg.get('s3 main',"email")
try:
config.main_kms_keyid = cfg.get('s3 main',"kms_keyid")
except (configparser.NoSectionError, configparser.NoOptionError):
config.main_kms_keyid = 'testkey-1'
try:
config.main_kms_keyid2 = cfg.get('s3 main',"kms_keyid2")
except (configparser.NoSectionError, configparser.NoOptionError):
config.main_kms_keyid2 = 'testkey-2'
try:
config.main_api_name = cfg.get('s3 main',"api_name")
except (configparser.NoSectionError, configparser.NoOptionError):
config.main_api_name = ""
try:
config.storage_classes = cfg.get('s3 main',"storage_classes")
except (configparser.NoSectionError, configparser.NoOptionError):
config.storage_classes = ""
try:
config.lc_debug_interval = int(cfg.get('s3 main',"lc_debug_interval"))
except (configparser.NoSectionError, configparser.NoOptionError):
config.lc_debug_interval = 10
config.alt_access_key = cfg.get('s3 alt',"access_key")
config.alt_secret_key = cfg.get('s3 alt',"secret_key")
config.alt_display_name = cfg.get('s3 alt',"display_name")
config.alt_user_id = cfg.get('s3 alt',"user_id")
config.alt_email = cfg.get('s3 alt',"email")
config.tenant_access_key = cfg.get('s3 tenant',"access_key")
config.tenant_secret_key = cfg.get('s3 tenant',"secret_key")
config.tenant_display_name = cfg.get('s3 tenant',"display_name")
config.tenant_user_id = cfg.get('s3 tenant',"user_id")
config.tenant_email = cfg.get('s3 tenant',"email")
config.tenant_name = cfg.get('s3 tenant',"tenant")
config.iam_access_key = cfg.get('iam',"access_key")
config.iam_secret_key = cfg.get('iam',"secret_key")
config.iam_display_name = cfg.get('iam',"display_name")
config.iam_user_id = cfg.get('iam',"user_id")
config.iam_email = cfg.get('iam',"email")
config.iam_root_access_key = cfg.get('iam root',"access_key")
config.iam_root_secret_key = cfg.get('iam root',"secret_key")
config.iam_root_user_id = cfg.get('iam root',"user_id")
config.iam_root_email = cfg.get('iam root',"email")
config.iam_alt_root_access_key = cfg.get('iam alt root',"access_key")
config.iam_alt_root_secret_key = cfg.get('iam alt root',"secret_key")
config.iam_alt_root_user_id = cfg.get('iam alt root',"user_id")
config.iam_alt_root_email = cfg.get('iam alt root',"email")
# vars from the fixtures section
template = cfg.get('fixtures', "bucket prefix", fallback='test-{random}-')
prefix = choose_bucket_prefix(template=template)
template = cfg.get('fixtures', "iam name prefix", fallback="s3-tests-")
config.iam_name_prefix = choose_bucket_prefix(template=template)
template = cfg.get('fixtures', "iam path prefix", fallback="/s3-tests/")
config.iam_path_prefix = choose_bucket_prefix(template=template)
if cfg.has_section("s3 cloud"):
get_cloud_config(cfg)
else:
config.cloud_storage_class = None
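# For orientation, a minimal sketch of the ini file S3TEST_CONF should
# point at (all values are placeholders, not working credentials; the
# remaining required sections -- [s3 alt], [s3 tenant], [iam],
# [iam root], [iam alt root] -- repeat the same credential keys read
# above, and [s3 cloud] / [webidentity] are optional extras):
#
#   [DEFAULT]
#   host = localhost
#   port = 8000
#   is_secure = no
#   ssl_verify = no
#
#   [fixtures]
#   bucket prefix = test-{random}-
#
#   [s3 main]
#   access_key = 0555b35654ad1656d804
#   secret_key = h7GhxuBLTrlhVUyxSPUKUV8r/2EI4ngqJxD7iBdBYLhwluN30JaT3Q==
#   display_name = tester
#   user_id = testid
#   email = tester@example.com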
def setup():
alt_client = get_alt_client()
tenant_client = get_tenant_client()
nuke_prefixed_buckets(prefix=prefix)
nuke_prefixed_buckets(prefix=prefix, client=alt_client)
nuke_prefixed_buckets(prefix=prefix, client=tenant_client)
def teardown():
alt_client = get_alt_client()
tenant_client = get_tenant_client()
nuke_prefixed_buckets(prefix=prefix)
nuke_prefixed_buckets(prefix=prefix, client=alt_client)
nuke_prefixed_buckets(prefix=prefix, client=tenant_client)
try:
iam_client = get_iam_client()
list_roles_resp = iam_client.list_roles()
for role in list_roles_resp['Roles']:
list_policies_resp = iam_client.list_role_policies(RoleName=role['RoleName'])
for policy in list_policies_resp['PolicyNames']:
del_policy_resp = iam_client.delete_role_policy(
RoleName=role['RoleName'],
PolicyName=policy
)
del_role_resp = iam_client.delete_role(RoleName=role['RoleName'])
list_oidc_resp = iam_client.list_open_id_connect_providers()
for oidcprovider in list_oidc_resp['OpenIDConnectProviderList']:
del_oidc_resp = iam_client.delete_open_id_connect_provider(
OpenIDConnectProviderArn=oidcprovider['Arn']
)
except:
pass
@pytest.fixture(scope="package")
def configfile():
configure()
return config
@pytest.fixture(autouse=True)
def setup_teardown(configfile):
setup()
yield
teardown()
def check_webidentity():
cfg = configparser.RawConfigParser()
try:
path = os.environ['S3TEST_CONF']
except KeyError:
raise RuntimeError(
'To run tests, point environment '
+ 'variable S3TEST_CONF to a config file.',
)
cfg.read(path)
if not cfg.has_section("webidentity"):
raise RuntimeError('Your config file is missing the "webidentity" section!')
config.webidentity_thumbprint = cfg.get('webidentity', "thumbprint")
config.webidentity_aud = cfg.get('webidentity', "aud")
config.webidentity_token = cfg.get('webidentity', "token")
config.webidentity_realm = cfg.get('webidentity', "KC_REALM")
config.webidentity_sub = cfg.get('webidentity', "sub")
config.webidentity_azp = cfg.get('webidentity', "azp")
config.webidentity_user_token = cfg.get('webidentity', "user_token")
def get_cloud_config(cfg):
config.cloud_host = cfg.get('s3 cloud',"host")
config.cloud_port = int(cfg.get('s3 cloud',"port"))
config.cloud_is_secure = cfg.getboolean('s3 cloud', "is_secure")
proto = 'https' if config.cloud_is_secure else 'http'
config.cloud_endpoint = "%s://%s:%d" % (proto, config.cloud_host, config.cloud_port)
config.cloud_access_key = cfg.get('s3 cloud',"access_key")
config.cloud_secret_key = cfg.get('s3 cloud',"secret_key")
try:
config.cloud_storage_class = cfg.get('s3 cloud', "cloud_storage_class")
except (configparser.NoSectionError, configparser.NoOptionError):
config.cloud_storage_class = None
try:
config.cloud_retain_head_object = cfg.get('s3 cloud',"retain_head_object")
except (configparser.NoSectionError, configparser.NoOptionError):
config.cloud_retain_head_object = None
try:
config.cloud_target_path = cfg.get('s3 cloud',"target_path")
except (configparser.NoSectionError, configparser.NoOptionError):
config.cloud_target_path = None
try:
config.cloud_target_storage_class = cfg.get('s3 cloud',"target_storage_class")
except (configparser.NoSectionError, configparser.NoOptionError):
config.cloud_target_storage_class = 'STANDARD'
try:
config.cloud_regular_storage_class = cfg.get('s3 cloud', "storage_class")
except (configparser.NoSectionError, configparser.NoOptionError):
config.cloud_regular_storage_class = None
def get_client(client_config=None):
if client_config is None:
client_config = Config(signature_version='s3v4')
client = boto3.client(service_name='s3',
aws_access_key_id=config.main_access_key,
aws_secret_access_key=config.main_secret_key,
endpoint_url=config.default_endpoint,
use_ssl=config.default_is_secure,
verify=config.default_ssl_verify,
config=client_config)
return client
def get_v2_client():
client = boto3.client(service_name='s3',
aws_access_key_id=config.main_access_key,
aws_secret_access_key=config.main_secret_key,
endpoint_url=config.default_endpoint,
use_ssl=config.default_is_secure,
verify=config.default_ssl_verify,
config=Config(signature_version='s3'))
return client
def get_sts_client(**kwargs):
kwargs.setdefault('aws_access_key_id', config.alt_access_key)
kwargs.setdefault('aws_secret_access_key', config.alt_secret_key)
kwargs.setdefault('config', Config(signature_version='s3v4'))
client = boto3.client(service_name='sts',
endpoint_url=config.default_endpoint,
region_name='',
use_ssl=config.default_is_secure,
verify=config.default_ssl_verify,
**kwargs)
return client
def get_iam_client(**kwargs):
kwargs.setdefault('aws_access_key_id', config.iam_access_key)
kwargs.setdefault('aws_secret_access_key', config.iam_secret_key)
client = boto3.client(service_name='iam',
endpoint_url=config.default_endpoint,
region_name='',
use_ssl=config.default_is_secure,
verify=config.default_ssl_verify,
**kwargs)
return client
def get_iam_s3client(**kwargs):
kwargs.setdefault('aws_access_key_id', config.iam_access_key)
kwargs.setdefault('aws_secret_access_key', config.iam_secret_key)
kwargs.setdefault('config', Config(signature_version='s3v4'))
client = boto3.client(service_name='s3',
endpoint_url=config.default_endpoint,
use_ssl=config.default_is_secure,
verify=config.default_ssl_verify,
**kwargs)
return client
def get_iam_root_client(**kwargs):
kwargs.setdefault('service_name', 'iam')
kwargs.setdefault('aws_access_key_id', config.iam_root_access_key)
kwargs.setdefault('aws_secret_access_key', config.iam_root_secret_key)
return boto3.client(endpoint_url=config.default_endpoint,
region_name='',
use_ssl=config.default_is_secure,
verify=config.default_ssl_verify,
**kwargs)
def get_iam_alt_root_client(**kwargs):
kwargs.setdefault('service_name', 'iam')
kwargs.setdefault('aws_access_key_id', config.iam_alt_root_access_key)
kwargs.setdefault('aws_secret_access_key', config.iam_alt_root_secret_key)
return boto3.client(endpoint_url=config.default_endpoint,
region_name='',
use_ssl=config.default_is_secure,
verify=config.default_ssl_verify,
**kwargs)
def get_alt_client(client_config=None):
if client_config is None:
client_config = Config(signature_version='s3v4')
client = boto3.client(service_name='s3',
aws_access_key_id=config.alt_access_key,
aws_secret_access_key=config.alt_secret_key,
endpoint_url=config.default_endpoint,
use_ssl=config.default_is_secure,
verify=config.default_ssl_verify,
config=client_config)
return client
def get_cloud_client(client_config=None):
if client_config is None:
client_config = Config(signature_version='s3v4')
client = boto3.client(service_name='s3',
aws_access_key_id=config.cloud_access_key,
aws_secret_access_key=config.cloud_secret_key,
endpoint_url=config.cloud_endpoint,
use_ssl=config.cloud_is_secure,
config=client_config)
return client
def get_tenant_client(client_config=None):
if client_config is None:
client_config = Config(signature_version='s3v4')
client = boto3.client(service_name='s3',
aws_access_key_id=config.tenant_access_key,
aws_secret_access_key=config.tenant_secret_key,
endpoint_url=config.default_endpoint,
use_ssl=config.default_is_secure,
verify=config.default_ssl_verify,
config=client_config)
return client
def get_v2_tenant_client():
client_config = Config(signature_version='s3')
client = boto3.client(service_name='s3',
aws_access_key_id=config.tenant_access_key,
aws_secret_access_key=config.tenant_secret_key,
endpoint_url=config.default_endpoint,
use_ssl=config.default_is_secure,
verify=config.default_ssl_verify,
config=client_config)
return client
def get_tenant_iam_client():
client = boto3.client(service_name='iam',
region_name='us-east-1',
aws_access_key_id=config.tenant_access_key,
aws_secret_access_key=config.tenant_secret_key,
endpoint_url=config.default_endpoint,
verify=config.default_ssl_verify,
use_ssl=config.default_is_secure)
return client
def get_alt_iam_client():
client = boto3.client(service_name='iam',
region_name='',
aws_access_key_id=config.alt_access_key,
aws_secret_access_key=config.alt_secret_key,
endpoint_url=config.default_endpoint,
verify=config.default_ssl_verify,
use_ssl=config.default_is_secure)
return client
def get_unauthenticated_client():
client = boto3.client(service_name='s3',
aws_access_key_id='',
aws_secret_access_key='',
endpoint_url=config.default_endpoint,
use_ssl=config.default_is_secure,
verify=config.default_ssl_verify,
config=Config(signature_version=UNSIGNED))
return client
def get_bad_auth_client(aws_access_key_id='badauth'):
client = boto3.client(service_name='s3',
aws_access_key_id=aws_access_key_id,
aws_secret_access_key='roflmao',
endpoint_url=config.default_endpoint,
use_ssl=config.default_is_secure,
verify=config.default_ssl_verify,
config=Config(signature_version='s3v4'))
return client
def get_svc_client(client_config=None, svc='s3'):
if client_config is None:
client_config = Config(signature_version='s3v4')
client = boto3.client(service_name=svc,
aws_access_key_id=config.main_access_key,
aws_secret_access_key=config.main_secret_key,
endpoint_url=config.default_endpoint,
use_ssl=config.default_is_secure,
verify=config.default_ssl_verify,
config=client_config)
return client
bucket_counter = itertools.count(1)
def get_new_bucket_name():
"""
Get a bucket name that probably does not exist.
We make every attempt to use a unique random prefix, so if a
bucket by this name happens to exist, it's ok if tests give
false negatives.
"""
name = '{prefix}{num}'.format(
prefix=prefix,
num=next(bucket_counter),
)
return name
def get_new_bucket_resource(name=None):
"""
Get a bucket that exists and is empty.
Always recreates a bucket from scratch. This is useful to also
reset ACLs and such.
"""
s3 = boto3.resource('s3',
aws_access_key_id=config.main_access_key,
aws_secret_access_key=config.main_secret_key,
endpoint_url=config.default_endpoint,
use_ssl=config.default_is_secure,
verify=config.default_ssl_verify)
if name is None:
name = get_new_bucket_name()
bucket = s3.Bucket(name)
bucket_location = bucket.create()
return bucket
def get_new_bucket(client=None, name=None):
"""
Get a bucket that exists and is empty.
Always recreates a bucket from scratch. This is useful to also
reset ACLs and such.
"""
if client is None:
client = get_client()
if name is None:
name = get_new_bucket_name()
client.create_bucket(Bucket=name)
return name
def get_parameter_name():
parameter_name=""
rand = ''.join(
random.choice(string.ascii_lowercase + string.digits)
for c in range(255)
)
while rand:
parameter_name = '{random}'.format(random=rand)
if len(parameter_name) <= 10:
return parameter_name
rand = rand[:-1]
return parameter_name
def get_sts_user_id():
return config.alt_user_id
def get_config_is_secure():
return config.default_is_secure
def get_config_host():
return config.default_host
def get_config_port():
return config.default_port
def get_config_endpoint():
return config.default_endpoint
def get_config_ssl_verify():
return config.default_ssl_verify
def get_main_aws_access_key():
return config.main_access_key
def get_main_aws_secret_key():
return config.main_secret_key
def get_main_display_name():
return config.main_display_name
def get_main_user_id():
return config.main_user_id
def get_main_email():
return config.main_email
def get_main_api_name():
return config.main_api_name
def get_main_kms_keyid():
return config.main_kms_keyid
def get_secondary_kms_keyid():
return config.main_kms_keyid2
def get_alt_aws_access_key():
return config.alt_access_key
def get_alt_aws_secret_key():
return config.alt_secret_key
def get_alt_display_name():
return config.alt_display_name
def get_alt_user_id():
return config.alt_user_id
def get_alt_email():
return config.alt_email
def get_tenant_aws_access_key():
return config.tenant_access_key
def get_tenant_aws_secret_key():
return config.tenant_secret_key
def get_tenant_display_name():
return config.tenant_display_name
def get_tenant_name():
return config.tenant_name
def get_tenant_user_id():
return config.tenant_user_id
def get_tenant_email():
return config.tenant_email
def get_thumbprint():
return config.webidentity_thumbprint
def get_aud():
return config.webidentity_aud
def get_sub():
return config.webidentity_sub
def get_azp():
return config.webidentity_azp
def get_token():
return config.webidentity_token
def get_realm_name():
return config.webidentity_realm
def get_iam_name_prefix():
return config.iam_name_prefix
def make_iam_name(name):
return config.iam_name_prefix + name
def get_iam_path_prefix():
return config.iam_path_prefix
def get_iam_access_key():
return config.iam_access_key
def get_iam_secret_key():
return config.iam_secret_key
def get_iam_root_user_id():
return config.iam_root_user_id
def get_iam_root_email():
return config.iam_root_email
def get_iam_alt_root_user_id():
return config.iam_alt_root_user_id
def get_iam_alt_root_email():
return config.iam_alt_root_email
def get_user_token():
return config.webidentity_user_token
def get_cloud_storage_class():
return config.cloud_storage_class
def get_cloud_retain_head_object():
return config.cloud_retain_head_object
def get_cloud_regular_storage_class():
return config.cloud_regular_storage_class
def get_cloud_target_path():
return config.cloud_target_path
def get_cloud_target_storage_class():
return config.cloud_target_storage_class
def get_lc_debug_interval():
return config.lc_debug_interval


@@ -0,0 +1,199 @@
from botocore.exceptions import ClientError
import pytest
from . import (
configfile,
get_iam_root_client,
get_iam_root_user_id,
get_iam_root_email,
get_iam_alt_root_client,
get_iam_alt_root_user_id,
get_iam_alt_root_email,
get_iam_path_prefix,
)
def nuke_user_keys(client, name):
p = client.get_paginator('list_access_keys')
for response in p.paginate(UserName=name):
for key in response['AccessKeyMetadata']:
try:
client.delete_access_key(UserName=name, AccessKeyId=key['AccessKeyId'])
except:
pass
def nuke_user_policies(client, name):
p = client.get_paginator('list_user_policies')
for response in p.paginate(UserName=name):
for policy in response['PolicyNames']:
try:
client.delete_user_policy(UserName=name, PolicyName=policy)
except:
pass
def nuke_attached_user_policies(client, name):
p = client.get_paginator('list_attached_user_policies')
for response in p.paginate(UserName=name):
for policy in response['AttachedPolicies']:
try:
client.detach_user_policy(UserName=name, PolicyArn=policy['PolicyArn'])
except:
pass
def nuke_user(client, name):
# delete access keys, user policies, etc
try:
nuke_user_keys(client, name)
except:
pass
try:
nuke_user_policies(client, name)
except:
pass
try:
nuke_attached_user_policies(client, name)
except:
pass
client.delete_user(UserName=name)
def nuke_users(client, **kwargs):
p = client.get_paginator('list_users')
for response in p.paginate(**kwargs):
for user in response['Users']:
try:
nuke_user(client, user['UserName'])
except:
pass
def nuke_group_policies(client, name):
p = client.get_paginator('list_group_policies')
for response in p.paginate(GroupName=name):
for policy in response['PolicyNames']:
try:
client.delete_group_policy(GroupName=name, PolicyName=policy)
except:
pass
def nuke_attached_group_policies(client, name):
p = client.get_paginator('list_attached_group_policies')
for response in p.paginate(GroupName=name):
for policy in response['AttachedPolicies']:
try:
client.detach_group_policy(GroupName=name, PolicyArn=policy['PolicyArn'])
except:
pass
def nuke_group_users(client, name):
p = client.get_paginator('get_group')
for response in p.paginate(GroupName=name):
for user in response['Users']:
try:
client.remove_user_from_group(GroupName=name, UserName=user['UserName'])
except:
pass
def nuke_group(client, name):
# delete group policies and remove all users
try:
nuke_group_policies(client, name)
except:
pass
try:
nuke_attached_group_policies(client, name)
except:
pass
try:
nuke_group_users(client, name)
except:
pass
client.delete_group(GroupName=name)
def nuke_groups(client, **kwargs):
p = client.get_paginator('list_groups')
for response in p.paginate(**kwargs):
for user in response['Groups']:
try:
nuke_group(client, user['GroupName'])
except:
pass
def nuke_role_policies(client, name):
p = client.get_paginator('list_role_policies')
for response in p.paginate(RoleName=name):
for policy in response['PolicyNames']:
try:
client.delete_role_policy(RoleName=name, PolicyName=policy)
except:
pass
def nuke_attached_role_policies(client, name):
p = client.get_paginator('list_attached_role_policies')
for response in p.paginate(RoleName=name):
for policy in response['AttachedPolicies']:
try:
client.detach_role_policy(RoleName=name, PolicyArn=policy['PolicyArn'])
except:
pass
def nuke_role(client, name):
# delete role policies, etc
try:
nuke_role_policies(client, name)
except:
pass
try:
nuke_attached_role_policies(client, name)
except:
pass
client.delete_role(RoleName=name)
def nuke_roles(client, **kwargs):
p = client.get_paginator('list_roles')
for response in p.paginate(**kwargs):
for role in response['Roles']:
try:
nuke_role(client, role['RoleName'])
except:
pass
def nuke_oidc_providers(client, prefix):
result = client.list_open_id_connect_providers()
for provider in result['OpenIDConnectProviderList']:
arn = provider['Arn']
if f':oidc-provider{prefix}' in arn:
try:
client.delete_open_id_connect_provider(OpenIDConnectProviderArn=arn)
except:
pass
# fixture for iam account root user
@pytest.fixture
def iam_root(configfile):
client = get_iam_root_client()
try:
arn = client.get_user()['User']['Arn']
if not arn.endswith(':root'):
pytest.skip('[iam root] user does not have :root arn')
except ClientError as e:
pytest.skip('[iam root] user does not belong to an account')
yield client
nuke_users(client, PathPrefix=get_iam_path_prefix())
nuke_groups(client, PathPrefix=get_iam_path_prefix())
nuke_roles(client, PathPrefix=get_iam_path_prefix())
nuke_oidc_providers(client, get_iam_path_prefix())
# fixture for iam alt account root user
@pytest.fixture
def iam_alt_root(configfile):
client = get_iam_alt_root_client()
try:
arn = client.get_user()['User']['Arn']
if not arn.endswith(':root'):
pytest.skip('[iam alt root] user does not have :root arn')
except ClientError as e:
pytest.skip('[iam alt root] user does not belong to an account')
yield client
nuke_users(client, PathPrefix=get_iam_path_prefix())
nuke_roles(client, PathPrefix=get_iam_path_prefix())


@@ -0,0 +1,46 @@
import json
class Statement(object):
def __init__(self, action, resource, principal={"AWS": "*"}, effect="Allow", condition=None):
self.principal = principal
self.action = action
self.resource = resource
self.condition = condition
self.effect = effect
def to_dict(self):
d = { "Action" : self.action,
"Principal" : self.principal,
"Effect" : self.effect,
"Resource" : self.resource
}
if self.condition is not None:
d["Condition"] = self.condition
return d
class Policy(object):
def __init__(self):
self.statements = []
def add_statement(self, s):
self.statements.append(s)
return self
def to_json(self):
policy_dict = {
"Version" : "2012-10-17",
"Statement":
[s.to_dict() for s in self.statements]
}
return json.dumps(policy_dict)
def make_json_policy(action, resource, principal={"AWS": "*"}, effect="Allow", conditions=None):
"""
Helper function to make single statement policies
"""
s = Statement(action, resource, principal, effect=effect, condition=conditions)
p = Policy()
return p.add_statement(s).to_json()
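# A hedged usage sketch (the bucket name is hypothetical and get_client
# comes from the functional test package):
#
#   resource = ['arn:aws:s3:::somebucket', 'arn:aws:s3:::somebucket/*']
#   policy = make_json_policy('s3:GetObject', resource)
#   get_client().put_bucket_policy(Bucket='somebucket', Policy=policy)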


@@ -0,0 +1,92 @@
#!/usr/bin/python
import boto3
import os
import random
import string
import itertools
host = "localhost"
port = 8000
## AWS access key
access_key = "0555b35654ad1656d804"
## AWS secret key
secret_key = "h7GhxuBLTrlhVUyxSPUKUV8r/2EI4ngqJxD7iBdBYLhwluN30JaT3Q=="
prefix = "YOURNAMEHERE-1234-"
endpoint_url = "http://%s:%d" % (host, port)
client = boto3.client(service_name='s3',
aws_access_key_id=access_key,
aws_secret_access_key=secret_key,
endpoint_url=endpoint_url,
use_ssl=False,
verify=False)
s3 = boto3.resource('s3',
use_ssl=False,
verify=False,
endpoint_url=endpoint_url,
aws_access_key_id=access_key,
aws_secret_access_key=secret_key)
def choose_bucket_prefix(template, max_len=30):
"""
Choose a prefix for our test buckets, so they're easy to identify.
Use template and feed it more and more random filler, until it's
as long as possible but still below max_len.
"""
rand = ''.join(
random.choice(string.ascii_lowercase + string.digits)
for c in range(255)
)
while rand:
s = template.format(random=rand)
if len(s) <= max_len:
return s
rand = rand[:-1]
raise RuntimeError(
'Bucket prefix template is impossible to fulfill: {template!r}'.format(
template=template,
),
)
bucket_counter = itertools.count(1)
def get_new_bucket_name():
"""
Get a bucket name that probably does not exist.
We make every attempt to use a unique random prefix, so if a
bucket by this name happens to exist, it's ok if tests give
false negatives.
"""
name = '{prefix}{num}'.format(
prefix=prefix,
num=next(bucket_counter),
)
return name
def get_new_bucket(session=boto3, name=None, headers=None):
"""
Get a bucket that exists and is empty.
Always recreates a bucket from scratch. This is useful to also
reset ACLs and such.
"""
s3 = session.resource('s3',
use_ssl=False,
verify=False,
endpoint_url=endpoint_url,
aws_access_key_id=access_key,
aws_secret_access_key=secret_key)
if name is None:
name = get_new_bucket_name()
bucket = s3.Bucket(name)
bucket_location = bucket.create()
return bucket


@@ -0,0 +1,572 @@
import boto3
import pytest
from botocore.exceptions import ClientError
from email.utils import formatdate
from .utils import assert_raises
from .utils import _get_status_and_error_code
from .utils import _get_status
from . import (
configfile,
setup_teardown,
get_client,
get_v2_client,
get_new_bucket,
get_new_bucket_name,
)
def _add_header_create_object(headers, client=None):
""" Create a new bucket, add an object w/header customizations
"""
bucket_name = get_new_bucket()
if client is None:
client = get_client()
key_name = 'foo'
# pass in custom headers before PutObject call
add_headers = (lambda **kwargs: kwargs['params']['headers'].update(headers))
client.meta.events.register('before-call.s3.PutObject', add_headers)
client.put_object(Bucket=bucket_name, Key=key_name)
return bucket_name, key_name
def _add_header_create_bad_object(headers, client=None):
""" Create a new bucket, add an object with a header. This should cause a failure
"""
bucket_name = get_new_bucket()
if client is None:
client = get_client()
key_name = 'foo'
# pass in custom headers before PutObject call
add_headers = (lambda **kwargs: kwargs['params']['headers'].update(headers))
client.meta.events.register('before-call.s3.PutObject', add_headers)
e = assert_raises(ClientError, client.put_object, Bucket=bucket_name, Key=key_name, Body='bar')
return e
def _remove_header_create_object(remove, client=None):
""" Create a new bucket, add an object without a header
"""
bucket_name = get_new_bucket()
if client is None:
client = get_client()
key_name = 'foo'
# remove custom headers before PutObject call
def remove_header(**kwargs):
if (remove in kwargs['params']['headers']):
del kwargs['params']['headers'][remove]
client.meta.events.register('before-call.s3.PutObject', remove_header)
client.put_object(Bucket=bucket_name, Key=key_name)
return bucket_name, key_name
def _remove_header_create_bad_object(remove, client=None):
""" Create a new bucket, add an object without a header. This should cause a failure
"""
bucket_name = get_new_bucket()
if client is None:
client = get_client()
key_name = 'foo'
# remove custom headers before PutObject call
def remove_header(**kwargs):
if (remove in kwargs['params']['headers']):
del kwargs['params']['headers'][remove]
client.meta.events.register('before-call.s3.PutObject', remove_header)
e = assert_raises(ClientError, client.put_object, Bucket=bucket_name, Key=key_name, Body='bar')
return e
def _add_header_create_bucket(headers, client=None):
""" Create a new bucket, w/header customizations
"""
bucket_name = get_new_bucket_name()
if client is None:
client = get_client()
# pass in custom headers before CreateBucket call
add_headers = (lambda **kwargs: kwargs['params']['headers'].update(headers))
client.meta.events.register('before-call.s3.CreateBucket', add_headers)
client.create_bucket(Bucket=bucket_name)
return bucket_name
def _add_header_create_bad_bucket(headers=None, client=None):
""" Create a new bucket, w/header customizations that should cause a failure
"""
bucket_name = get_new_bucket_name()
if client is None:
client = get_client()
# pass in custom headers before CreateBucket call
add_headers = (lambda **kwargs: kwargs['params']['headers'].update(headers))
client.meta.events.register('before-call.s3.CreateBucket', add_headers)
e = assert_raises(ClientError, client.create_bucket, Bucket=bucket_name)
return e
def _remove_header_create_bucket(remove, client=None):
""" Create a new bucket, without a header
"""
bucket_name = get_new_bucket_name()
if client is None:
client = get_client()
# remove custom headers before CreateBucket call
def remove_header(**kwargs):
if (remove in kwargs['params']['headers']):
del kwargs['params']['headers'][remove]
client.meta.events.register('before-call.s3.CreateBucket', remove_header)
client.create_bucket(Bucket=bucket_name)
return bucket_name
def _remove_header_create_bad_bucket(remove, client=None):
""" Create a new bucket, without a header. This should cause a failure
"""
bucket_name = get_new_bucket_name()
if client is None:
client = get_client()
# remove custom headers before CreateBucket call
def remove_header(**kwargs):
if (remove in kwargs['params']['headers']):
del kwargs['params']['headers'][remove]
client.meta.events.register('before-call.s3.CreateBucket', remove_header)
e = assert_raises(ClientError, client.create_bucket, Bucket=bucket_name)
return e
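# All of the helpers above lean on botocore's event system: a handler
# registered for 'before-call.s3.<Operation>' receives the serialized
# request and may mutate its headers before the request goes out. A
# minimal hedged sketch of the same pattern (key and header are made up):
#
#   client = get_client()
#   def force_header(**kwargs):
#       kwargs['params']['headers']['x-amz-meta-example'] = 'injected'
#   client.meta.events.register('before-call.s3.PutObject', force_header)
#   client.put_object(Bucket=get_new_bucket(), Key='foo', Body='bar')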
#
# common tests
#
@pytest.mark.auth_common
def test_object_create_bad_md5_invalid_short():
e = _add_header_create_bad_object({'Content-MD5':'YWJyYWNhZGFicmE='})
status, error_code = _get_status_and_error_code(e.response)
assert status == 400
assert error_code == 'InvalidDigest'
@pytest.mark.auth_common
def test_object_create_bad_md5_bad():
e = _add_header_create_bad_object({'Content-MD5':'rL0Y20xC+Fzt72VPzMSk2A=='})
status, error_code = _get_status_and_error_code(e.response)
assert status == 400
assert error_code == 'BadDigest'
@pytest.mark.auth_common
def test_object_create_bad_md5_empty():
e = _add_header_create_bad_object({'Content-MD5':''})
status, error_code = _get_status_and_error_code(e.response)
assert status == 400
assert error_code == 'InvalidDigest'
@pytest.mark.auth_common
def test_object_create_bad_md5_none():
bucket_name, key_name = _remove_header_create_object('Content-MD5')
client = get_client()
client.put_object(Bucket=bucket_name, Key=key_name, Body='bar')
@pytest.mark.auth_common
def test_object_create_bad_expect_mismatch():
bucket_name, key_name = _add_header_create_object({'Expect': 200})
client = get_client()
client.put_object(Bucket=bucket_name, Key=key_name, Body='bar')
@pytest.mark.auth_common
def test_object_create_bad_expect_empty():
bucket_name, key_name = _add_header_create_object({'Expect': ''})
client = get_client()
client.put_object(Bucket=bucket_name, Key=key_name, Body='bar')
@pytest.mark.auth_common
def test_object_create_bad_expect_none():
bucket_name, key_name = _remove_header_create_object('Expect')
client = get_client()
client.put_object(Bucket=bucket_name, Key=key_name, Body='bar')
@pytest.mark.auth_common
# TODO: remove 'fails_on_rgw' once we have learned how to remove the content-length header
@pytest.mark.fails_on_rgw
def test_object_create_bad_contentlength_empty():
e = _add_header_create_bad_object({'Content-Length':''})
status, error_code = _get_status_and_error_code(e.response)
assert status == 400
@pytest.mark.auth_common
@pytest.mark.fails_on_mod_proxy_fcgi
def test_object_create_bad_contentlength_negative():
client = get_client()
bucket_name = get_new_bucket()
key_name = 'foo'
e = assert_raises(ClientError, client.put_object, Bucket=bucket_name, Key=key_name, ContentLength=-1)
status = _get_status(e.response)
assert status == 400
@pytest.mark.auth_common
# TODO: remove 'fails_on_rgw' once we have learned how to remove the content-length header
@pytest.mark.fails_on_rgw
def test_object_create_bad_contentlength_none():
remove = 'Content-Length'
e = _remove_header_create_bad_object('Content-Length')
status, error_code = _get_status_and_error_code(e.response)
assert status == 411
assert error_code == 'MissingContentLength'
@pytest.mark.auth_common
def test_object_create_bad_contenttype_invalid():
bucket_name, key_name = _add_header_create_object({'Content-Type': 'text/plain'})
client = get_client()
client.put_object(Bucket=bucket_name, Key=key_name, Body='bar')
@pytest.mark.auth_common
def test_object_create_bad_contenttype_empty():
client = get_client()
key_name = 'foo'
bucket_name = get_new_bucket()
client.put_object(Bucket=bucket_name, Key=key_name, Body='bar', ContentType='')
@pytest.mark.auth_common
def test_object_create_bad_contenttype_none():
bucket_name = get_new_bucket()
key_name = 'foo'
client = get_client()
# as long as ContentType isn't specified in put_object it isn't going into the request
client.put_object(Bucket=bucket_name, Key=key_name, Body='bar')
@pytest.mark.auth_common
# TODO: remove 'fails_on_rgw' once we have learned how to remove the authorization header
@pytest.mark.fails_on_rgw
def test_object_create_bad_authorization_empty():
e = _add_header_create_bad_object({'Authorization': ''})
status, error_code = _get_status_and_error_code(e.response)
assert status == 403
@pytest.mark.auth_common
# TODO: remove 'fails_on_rgw' once we have learned how to sign with both the 'Date' and 'X-Amz-Date' headers, not just 'X-Amz-Date'
@pytest.mark.fails_on_rgw
def test_object_create_date_and_amz_date():
date = formatdate(usegmt=True)
bucket_name, key_name = _add_header_create_object({'Date': date, 'X-Amz-Date': date})
client = get_client()
client.put_object(Bucket=bucket_name, Key=key_name, Body='bar')
@pytest.mark.auth_common
# TODO: remove 'fails_on_rgw' once we have learned how to sign with both the 'Date' and 'X-Amz-Date' headers, not just 'X-Amz-Date'
@pytest.mark.fails_on_rgw
def test_object_create_amz_date_and_no_date():
date = formatdate(usegmt=True)
bucket_name, key_name = _add_header_create_object({'Date': '', 'X-Amz-Date': date})
client = get_client()
client.put_object(Bucket=bucket_name, Key=key_name, Body='bar')
# the teardown is really messed up here. check it out
@pytest.mark.auth_common
# TODO: remove 'fails_on_rgw' once we have learned how to remove the authorization header
@pytest.mark.fails_on_rgw
def test_object_create_bad_authorization_none():
e = _remove_header_create_bad_object('Authorization')
status, error_code = _get_status_and_error_code(e.response)
assert status == 403
@pytest.mark.auth_common
# TODO: remove 'fails_on_rgw' once we have learned how to remove the content-length header
@pytest.mark.fails_on_rgw
def test_bucket_create_contentlength_none():
remove = 'Content-Length'
_remove_header_create_bucket(remove)
@pytest.mark.auth_common
# TODO: remove 'fails_on_rgw' once we have learned how to remove the content-length header
@pytest.mark.fails_on_rgw
def test_object_acl_create_contentlength_none():
bucket_name = get_new_bucket()
client = get_client()
client.put_object(Bucket=bucket_name, Key='foo', Body='bar')
remove = 'Content-Length'
def remove_header(**kwargs):
if (remove in kwargs['params']['headers']):
del kwargs['params']['headers'][remove]
client.meta.events.register('before-call.s3.PutObjectAcl', remove_header)
client.put_object_acl(Bucket=bucket_name, Key='foo', ACL='public-read')
@pytest.mark.auth_common
def test_bucket_put_bad_canned_acl():
bucket_name = get_new_bucket()
client = get_client()
headers = {'x-amz-acl': 'public-ready'}
add_headers = (lambda **kwargs: kwargs['params']['headers'].update(headers))
client.meta.events.register('before-call.s3.PutBucketAcl', add_headers)
e = assert_raises(ClientError, client.put_bucket_acl, Bucket=bucket_name, ACL='public-read')
status = _get_status(e.response)
assert status == 400
@pytest.mark.auth_common
def test_bucket_create_bad_expect_mismatch():
bucket_name = get_new_bucket_name()
client = get_client()
headers = {'Expect': 200}
add_headers = (lambda **kwargs: kwargs['params']['headers'].update(headers))
client.meta.events.register('before-call.s3.CreateBucket', add_headers)
client.create_bucket(Bucket=bucket_name)
@pytest.mark.auth_common
def test_bucket_create_bad_expect_empty():
headers = {'Expect': ''}
_add_header_create_bucket(headers)
@pytest.mark.auth_common
# TODO: The request isn't even making it to the RGW past the frontend
# This test had 'fails_on_rgw' before the move to boto3
@pytest.mark.fails_on_rgw
def test_bucket_create_bad_contentlength_empty():
headers = {'Content-Length': ''}
e = _add_header_create_bad_bucket(headers)
status, error_code = _get_status_and_error_code(e.response)
assert status == 400
@pytest.mark.auth_common
@pytest.mark.fails_on_mod_proxy_fcgi
def test_bucket_create_bad_contentlength_negative():
headers = {'Content-Length': '-1'}
e = _add_header_create_bad_bucket(headers)
status = _get_status(e.response)
assert status == 400
@pytest.mark.auth_common
# TODO: remove 'fails_on_rgw' once we have learned how to remove the content-length header
@pytest.mark.fails_on_rgw
def test_bucket_create_bad_contentlength_none():
remove = 'Content-Length'
_remove_header_create_bucket(remove)
@pytest.mark.auth_common
# TODO: remove 'fails_on_rgw' once we have learned how to manipulate the authorization header
@pytest.mark.fails_on_rgw
def test_bucket_create_bad_authorization_empty():
headers = {'Authorization': ''}
e = _add_header_create_bad_bucket(headers)
status, error_code = _get_status_and_error_code(e.response)
assert status == 403
assert error_code == 'AccessDenied'
@pytest.mark.auth_common
# TODO: remove 'fails_on_rgw' once we have learned how to manipulate the authorization header
@pytest.mark.fails_on_rgw
def test_bucket_create_bad_authorization_none():
e = _remove_header_create_bad_bucket('Authorization')
status, error_code = _get_status_and_error_code(e.response)
assert status == 403
assert error_code == 'AccessDenied'
@pytest.mark.auth_aws2
def test_object_create_bad_md5_invalid_garbage_aws2():
v2_client = get_v2_client()
headers = {'Content-MD5': 'AWS HAHAHA'}
e = _add_header_create_bad_object(headers, v2_client)
status, error_code = _get_status_and_error_code(e.response)
assert status == 400
assert error_code == 'InvalidDigest'
@pytest.mark.auth_aws2
# TODO: remove 'fails_on_rgw' once we have learned how to manipulate the Content-Length header
@pytest.mark.fails_on_rgw
def test_object_create_bad_contentlength_mismatch_below_aws2():
v2_client = get_v2_client()
content = 'bar'
length = len(content) - 1
headers = {'Content-Length': str(length)}
e = _add_header_create_bad_object(headers, v2_client)
status, error_code = _get_status_and_error_code(e.response)
assert status == 400
assert error_code == 'BadDigest'
@pytest.mark.auth_aws2
# TODO: remove 'fails_on_rgw' once we have learned how to manipulate the authorization header
@pytest.mark.fails_on_rgw
def test_object_create_bad_authorization_incorrect_aws2():
v2_client = get_v2_client()
headers = {'Authorization': 'AWS AKIAIGR7ZNNBHC5BKSUB:FWeDfwojDSdS2Ztmpfeubhd9isU='}
e = _add_header_create_bad_object(headers, v2_client)
status, error_code = _get_status_and_error_code(e.response)
assert status == 403
assert error_code == 'InvalidDigest'
@pytest.mark.auth_aws2
# TODO: remove 'fails_on_rgw' once we have learned how to manipulate the authorization header
@pytest.mark.fails_on_rgw
def test_object_create_bad_authorization_invalid_aws2():
v2_client = get_v2_client()
headers = {'Authorization': 'AWS HAHAHA'}
e = _add_header_create_bad_object(headers, v2_client)
status, error_code = _get_status_and_error_code(e.response)
assert status == 400
assert error_code == 'InvalidArgument'
@pytest.mark.auth_aws2
def test_object_create_bad_ua_empty_aws2():
v2_client = get_v2_client()
headers = {'User-Agent': ''}
bucket_name, key_name = _add_header_create_object(headers, v2_client)
v2_client.put_object(Bucket=bucket_name, Key=key_name, Body='bar')
@pytest.mark.auth_aws2
def test_object_create_bad_ua_none_aws2():
v2_client = get_v2_client()
remove = 'User-Agent'
bucket_name, key_name = _remove_header_create_object(remove, v2_client)
v2_client.put_object(Bucket=bucket_name, Key=key_name, Body='bar')
@pytest.mark.auth_aws2
def test_object_create_bad_date_invalid_aws2():
v2_client = get_v2_client()
headers = {'x-amz-date': 'Bad Date'}
e = _add_header_create_bad_object(headers, v2_client)
status, error_code = _get_status_and_error_code(e.response)
assert status == 403
assert error_code == 'AccessDenied'
@pytest.mark.auth_aws2
def test_object_create_bad_date_empty_aws2():
v2_client = get_v2_client()
headers = {'x-amz-date': ''}
e = _add_header_create_bad_object(headers, v2_client)
status, error_code = _get_status_and_error_code(e.response)
assert status == 403
assert error_code == 'AccessDenied'
@pytest.mark.auth_aws2
# TODO: remove 'fails_on_rgw' once we have learned how to remove the date header
@pytest.mark.fails_on_rgw
def test_object_create_bad_date_none_aws2():
v2_client = get_v2_client()
remove = 'x-amz-date'
e = _remove_header_create_bad_object(remove, v2_client)
status, error_code = _get_status_and_error_code(e.response)
assert status == 403
assert error_code == 'AccessDenied'
@pytest.mark.auth_aws2
def test_object_create_bad_date_before_today_aws2():
v2_client = get_v2_client()
headers = {'x-amz-date': 'Tue, 07 Jul 2010 21:53:04 GMT'}
e = _add_header_create_bad_object(headers, v2_client)
status, error_code = _get_status_and_error_code(e.response)
assert status == 403
assert error_code == 'RequestTimeTooSkewed'
@pytest.mark.auth_aws2
def test_object_create_bad_date_before_epoch_aws2():
v2_client = get_v2_client()
headers = {'x-amz-date': 'Tue, 07 Jul 1950 21:53:04 GMT'}
e = _add_header_create_bad_object(headers, v2_client)
status, error_code = _get_status_and_error_code(e.response)
assert status == 403
assert error_code == 'AccessDenied'
@pytest.mark.auth_aws2
def test_object_create_bad_date_after_end_aws2():
v2_client = get_v2_client()
headers = {'x-amz-date': 'Tue, 07 Jul 9999 21:53:04 GMT'}
e = _add_header_create_bad_object(headers, v2_client)
status, error_code = _get_status_and_error_code(e.response)
assert status == 403
assert error_code == 'RequestTimeTooSkewed'
@pytest.mark.auth_aws2
# TODO: remove 'fails_on_rgw' once we have learned how to manipulate the authorization header
@pytest.mark.fails_on_rgw
def test_bucket_create_bad_authorization_invalid_aws2():
v2_client = get_v2_client()
headers = {'Authorization': 'AWS HAHAHA'}
e = _add_header_create_bad_bucket(headers, v2_client)
status, error_code = _get_status_and_error_code(e.response)
assert status == 400
assert error_code == 'InvalidArgument'
@pytest.mark.auth_aws2
def test_bucket_create_bad_ua_empty_aws2():
v2_client = get_v2_client()
headers = {'User-Agent': ''}
_add_header_create_bucket(headers, v2_client)
@pytest.mark.auth_aws2
def test_bucket_create_bad_ua_none_aws2():
v2_client = get_v2_client()
remove = 'User-Agent'
_remove_header_create_bucket(remove, v2_client)
@pytest.mark.auth_aws2
def test_bucket_create_bad_date_invalid_aws2():
v2_client = get_v2_client()
headers = {'x-amz-date': 'Bad Date'}
e = _add_header_create_bad_bucket(headers, v2_client)
status, error_code = _get_status_and_error_code(e.response)
assert status == 403
assert error_code == 'AccessDenied'
@pytest.mark.auth_aws2
def test_bucket_create_bad_date_empty_aws2():
v2_client = get_v2_client()
headers = {'x-amz-date': ''}
e = _add_header_create_bad_bucket(headers, v2_client)
status, error_code = _get_status_and_error_code(e.response)
assert status == 403
assert error_code == 'AccessDenied'
@pytest.mark.auth_aws2
# TODO: remove 'fails_on_rgw' once we have learned how to remove the date header
@pytest.mark.fails_on_rgw
def test_bucket_create_bad_date_none_aws2():
v2_client = get_v2_client()
remove = 'x-amz-date'
e = _remove_header_create_bad_bucket(remove, v2_client)
status, error_code = _get_status_and_error_code(e.response)
assert status == 403
assert error_code == 'AccessDenied'
@pytest.mark.auth_aws2
def test_bucket_create_bad_date_before_today_aws2():
v2_client = get_v2_client()
headers = {'x-amz-date': 'Tue, 07 Jul 2010 21:53:04 GMT'}
e = _add_header_create_bad_bucket(headers, v2_client)
status, error_code = _get_status_and_error_code(e.response)
assert status == 403
assert error_code == 'RequestTimeTooSkewed'
@pytest.mark.auth_aws2
def test_bucket_create_bad_date_after_today_aws2():
v2_client = get_v2_client()
headers = {'x-amz-date': 'Tue, 07 Jul 2030 21:53:04 GMT'}
e = _add_header_create_bad_bucket(headers, v2_client)
status, error_code = _get_status_and_error_code(e.response)
assert status == 403
assert error_code == 'RequestTimeTooSkewed'
@pytest.mark.auth_aws2
def test_bucket_create_bad_date_before_epoch_aws2():
v2_client = get_v2_client()
headers = {'x-amz-date': 'Tue, 07 Jul 1950 21:53:04 GMT'}
e = _add_header_create_bad_bucket(headers, v2_client)
status, error_code = _get_status_and_error_code(e.response)
assert status == 403
assert error_code == 'AccessDenied'

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -0,0 +1,159 @@
import json
import pytest
from botocore.exceptions import ClientError
from . import (
configfile,
get_iam_root_client,
get_iam_alt_root_client,
get_new_bucket_name,
get_prefix,
nuke_prefixed_buckets,
)
from .iam import iam_root, iam_alt_root
from .utils import assert_raises, _get_status_and_error_code
def get_new_topic_name():
return get_new_bucket_name()
def nuke_topics(client, prefix):
p = client.get_paginator('list_topics')
for response in p.paginate():
for topic in response['Topics']:
arn = topic['TopicArn']
if prefix not in arn:
continue
try:
client.delete_topic(TopicArn=arn)
except:
pass
@pytest.fixture
def sns(iam_root):
client = get_iam_root_client(service_name='sns')
yield client
nuke_topics(client, get_prefix())
@pytest.fixture
def sns_alt(iam_alt_root):
client = get_iam_alt_root_client(service_name='sns')
yield client
nuke_topics(client, get_prefix())
@pytest.fixture
def s3(iam_root):
client = get_iam_root_client(service_name='s3')
yield client
nuke_prefixed_buckets(get_prefix(), client)
@pytest.fixture
def s3_alt(iam_alt_root):
client = get_iam_alt_root_client(service_name='s3')
yield client
nuke_prefixed_buckets(get_prefix(), client)
@pytest.mark.iam_account
@pytest.mark.sns
def test_account_topic(sns):
name = get_new_topic_name()
response = sns.create_topic(Name=name)
arn = response['TopicArn']
assert arn.startswith('arn:aws:sns:')
assert arn.endswith(f':{name}')
response = sns.list_topics()
assert arn in [p['TopicArn'] for p in response['Topics']]
sns.set_topic_attributes(TopicArn=arn, AttributeName='Policy', AttributeValue='')
response = sns.get_topic_attributes(TopicArn=arn)
assert 'Attributes' in response
sns.delete_topic(TopicArn=arn)
response = sns.list_topics()
assert arn not in [p['TopicArn'] for p in response['Topics']]
with pytest.raises(sns.exceptions.NotFoundException):
sns.get_topic_attributes(TopicArn=arn)
sns.delete_topic(TopicArn=arn)
@pytest.mark.iam_account
@pytest.mark.sns
def test_cross_account_topic(sns, sns_alt):
name = get_new_topic_name()
arn = sns.create_topic(Name=name)['TopicArn']
# not visible to any alt user apis
with pytest.raises(sns.exceptions.NotFoundException):
sns_alt.get_topic_attributes(TopicArn=arn)
with pytest.raises(sns.exceptions.NotFoundException):
sns_alt.set_topic_attributes(TopicArn=arn, AttributeName='Policy', AttributeValue='')
# delete returns success
sns_alt.delete_topic(TopicArn=arn)
response = sns_alt.list_topics()
assert arn not in [p['TopicArn'] for p in response['Topics']]
@pytest.mark.iam_account
@pytest.mark.sns
def test_account_topic_publish(sns, s3):
name = get_new_topic_name()
response = sns.create_topic(Name=name)
topic_arn = response['TopicArn']
bucket = get_new_bucket_name()
s3.create_bucket(Bucket=bucket)
config = {'TopicConfigurations': [{
'Id': 'id',
'TopicArn': topic_arn,
'Events': [ 's3:ObjectCreated:*' ],
}]}
s3.put_bucket_notification_configuration(
Bucket=bucket, NotificationConfiguration=config)
@pytest.mark.iam_account
@pytest.mark.iam_cross_account
@pytest.mark.sns
def test_cross_account_topic_publish(sns, s3_alt, iam_alt_root):
name = get_new_topic_name()
response = sns.create_topic(Name=name)
topic_arn = response['TopicArn']
bucket = get_new_bucket_name()
s3_alt.create_bucket(Bucket=bucket)
config = {'TopicConfigurations': [{
'Id': 'id',
'TopicArn': topic_arn,
'Events': [ 's3:ObjectCreated:*' ],
}]}
# expect AccessDenies because no resource policy allows cross-account access
e = assert_raises(ClientError, s3_alt.put_bucket_notification_configuration,
Bucket=bucket, NotificationConfiguration=config)
status, error_code = _get_status_and_error_code(e.response)
assert status == 403
assert error_code == 'AccessDenied'
# add topic policy to allow the alt user
alt_principal = iam_alt_root.get_user()['User']['Arn']
policy = json.dumps({
'Version': '2012-10-17',
'Statement': [{
'Effect': 'Allow',
'Principal': {'AWS': alt_principal},
'Action': 'sns:Publish',
'Resource': topic_arn
}]
})
sns.set_topic_attributes(TopicArn=topic_arn, AttributeName='Policy',
AttributeValue=policy)
s3_alt.put_bucket_notification_configuration(
Bucket=bucket, NotificationConfiguration=config)
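For debugging, the policy attachment can be verified before the retry; the following is an illustrative check against standard SNS semantics, not part of the test:

    # get_topic_attributes returns the policy as a JSON string under 'Attributes'
    attrs = sns.get_topic_attributes(TopicArn=topic_arn)['Attributes']
    assert json.loads(attrs['Policy'])['Statement'][0]['Principal'] == {'AWS': alt_principal}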

(file diff suppressed because it is too large)

@@ -0,0 +1,9 @@
from . import utils

def test_generate():
    FIVE_MB = 5 * 1024 * 1024
    assert len(''.join(utils.generate_random(0))) == 0
    assert len(''.join(utils.generate_random(1))) == 1
    assert len(''.join(utils.generate_random(FIVE_MB - 1))) == FIVE_MB - 1
    assert len(''.join(utils.generate_random(FIVE_MB))) == FIVE_MB
    assert len(''.join(utils.generate_random(FIVE_MB + 1))) == FIVE_MB + 1
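These sizes deliberately bracket generate_random's default 5 MiB part size (see the utils module in the next file diff), so the last-chunk remainder logic is exercised on both sides of a part boundary. An illustrative consequence, given that default:

    # one byte past the boundary yields a full 5 MiB part plus a 1-byte tail
    parts = list(utils.generate_random(FIVE_MB + 1))
    assert [len(p) for p in parts] == [FIVE_MB, 1]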


@@ -0,0 +1,47 @@
import random
import requests
import string
import time

def assert_raises(excClass, callableObj, *args, **kwargs):
    """
    Like unittest.TestCase.assertRaises, but returns the exception.
    """
    try:
        callableObj(*args, **kwargs)
    except excClass as e:
        return e
    else:
        if hasattr(excClass, '__name__'):
            excName = excClass.__name__
        else:
            excName = str(excClass)
        raise AssertionError("%s not raised" % excName)

def generate_random(size, part_size=5*1024*1024):
    """
    Generate random data of the specified size, yielded in parts of at
    most part_size. (actually each MB is a repetition of the first KB)
    """
    chunk = 1024
    allowed = string.ascii_letters
    for x in range(0, size, part_size):
        strpart = ''.join([allowed[random.randint(0, len(allowed) - 1)] for _ in range(chunk)])
        s = ''
        left = size - x
        this_part_size = min(left, part_size)
        for y in range(this_part_size // chunk):
            s = s + strpart
        s = s + strpart[:(this_part_size % chunk)]
        yield s
        if (x == size):
            return

def _get_status(response):
    status = response['ResponseMetadata']['HTTPStatusCode']
    return status

def _get_status_and_error_code(response):
    status = response['ResponseMetadata']['HTTPStatusCode']
    error_code = response['Error']['Code']
    return status, error_code
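A brief usage sketch of these helpers, assumed from how the functional tests above consume them:

    # generate_random yields string parts of at most 5 MiB; join for a whole payload
    body = ''.join(generate_random(6 * 1024 * 1024))
    assert len(body) == 6 * 1024 * 1024

    # assert_raises returns the caught exception so callers can inspect it
    e = assert_raises(ZeroDivisionError, lambda: 1 / 0)
    assert isinstance(e, ZeroDivisionError)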


@@ -14,20 +14,10 @@ setup(
    install_requires=[
        'boto >=2.0b4',
        'boto3 >=1.0.0',
        'PyYAML',
        'bunch >=1.0.0',
        'munch >=2.0.0',
        'gevent >=1.0',
        'isodate >=0.4.4',
    ],
    entry_points={
        'console_scripts': [
            's3tests-generate-objects = s3tests.generate_objects:main',
            's3tests-test-readwrite = s3tests.readwrite:main',
            's3tests-test-roundtrip = s3tests.roundtrip:main',
            's3tests-fuzz-headers = s3tests.fuzz.headers:main',
            's3tests-analysis-rwstats = s3tests.analysis.rwstats:main',
        ],
    },
)


@@ -1,382 +0,0 @@
# Updated by Siege 2.69, May-24-2010
# Copyright 2000-2007 by Jeffrey Fulmer, et al.
#
# Siege configuration file -- edit as necessary
# For more information about configuring and running
# this program, visit: http://www.joedog.org/
#
# Variable declarations. You can set variables here
# for use in the directives below. Example:
# PROXY = proxy.joedog.org
# Reference variables inside ${} or $(), example:
# proxy-host = ${PROXY}
# You can also reference ENVIRONMENT variables without
# actually declaring them, example:
# logfile = $(HOME)/var/siege.log
#
# Signify verbose mode, true turns on verbose output
# ex: verbose = true|false
#
verbose = true
#
# CSV Verbose format: with this option, you can choose
# to format verbose output in traditional siege format
# or comma separated format. The latter will allow you
# to redirect output to a file for import into a spread
# sheet, i.e., siege > file.csv
# ex: csv = true|false (default false)
#
csv = true
#
# Full URL verbose format: By default siege displays
# the URL path and not the full URL. With this option,
# you can instruct siege to show the complete URL.
# ex: fullurl = true|false (default false)
#
# fullurl = true
#
# Display id: in verbose mode, display the siege user
# id associated with the HTTP transaction information
# ex: display-id = true|false
#
# display-id =
#
# Show logfile location. By default, siege displays the
# logfile location at the end of every run when logging
# You can turn this message off with this directive.
# ex: show-logfile = false
#
show-logfile = true
#
# Default logging status, true turns logging on.
# ex: logging = true|false
#
logging = true
#
# Logfile, the default siege logfile is $PREFIX/var/siege.log
# This directive allows you to choose an alternative log file.
# Environment variables may be used as shown in the examples:
# ex: logfile = /home/jeff/var/log/siege.log
# logfile = ${HOME}/var/log/siege.log
# logfile = ${LOGFILE}
#
logfile = ./siege.log
#
# HTTP protocol. Options HTTP/1.1 and HTTP/1.0.
# Some webservers have broken implementation of the
# 1.1 protocol which skews throughput evaluations.
# If you notice some siege clients hanging for
# extended periods of time, change this to HTTP/1.0
# ex: protocol = HTTP/1.1
# protocol = HTTP/1.0
#
protocol = HTTP/1.1
#
# Chunked encoding is required by HTTP/1.1 protocol
# but siege allows you to turn it off as desired.
#
# ex: chunked = true
#
chunked = true
#
# Cache revalidation.
# Siege supports cache revalidation for both ETag and
# Last-modified headers. If a copy is still fresh, the
# server responds with 304.
# HTTP/1.1 200 0.00 secs: 2326 bytes ==> /apache_pb.gif
# HTTP/1.1 304 0.00 secs: 0 bytes ==> /apache_pb.gif
# HTTP/1.1 304 0.00 secs: 0 bytes ==> /apache_pb.gif
#
# ex: cache = true
#
cache = false
#
# Connection directive. Options "close" and "keep-alive"
# Starting with release 2.57b3, siege implements persistent
# connections in accordance to RFC 2068 using both chunked
# encoding and content-length directives to determine the
# page size. To run siege with persistent connections set
# the connection directive to keep-alive. (Default close)
# CAUTION: use the keep-alive directive with care.
# DOUBLE CAUTION: this directive does not work well on HPUX
# TRIPLE CAUTION: don't use keep-alives until further notice
# ex: connection = close
# connection = keep-alive
#
connection = close
#
# Default number of simulated concurrent users
# ex: concurrent = 25
#
concurrent = 15
#
# Default duration of the siege. The right hand argument has
# a modifier which specifies the time units, H=hours, M=minutes,
# and S=seconds. If a modifier is not specified, then minutes
# are assumed.
# ex: time = 50M
#
# time =
#
# Repetitions. The length of siege may be specified in client
# reps rather than a time duration. Instead of specifying a time
# span, you can tell each siege instance to hit the server X number
# of times. So if you chose 'reps = 20' and you've selected 10
# concurrent users, then siege will hit the server 200 times.
# ex: reps = 20
#
# reps =
#
# Default URLs file, set at configuration time, the default
# file is PREFIX/etc/urls.txt. So if you configured siege
# with --prefix=/usr/local then the urls.txt file is installed
# in /usr/local/etc/urls.txt. Use the "file = " directive to
# configure an alternative URLs file. You may use environment
# variables as shown in the examples below:
# ex: file = /export/home/jdfulmer/MYURLS.txt
# file = $HOME/etc/urls.txt
# file = $URLSFILE
#
file = ./urls.txt
#
# Default URL, this is a single URL that you want to test. This
# is usually set at the command line with the -u option. When
# used, this option overrides the urls.txt (-f FILE/--file=FILE)
# option. You will HAVE to comment this out in order to use
# the urls.txt file option.
# ex: url = https://shemp.whoohoo.com/docs/index.jsp
#
# url =
#
# Default delay value, see the siege(1) man page.
# This value is used for load testing, it is not used
# for benchmarking.
# ex: delay = 3
#
delay = 1
#
# Connection timeout value. Set the value in seconds for
# socket connection timeouts. The default value is 30 seconds.
# ex: timeout = 30
#
# timeout =
#
# Session expiration: This directive allows you to delete all
# cookies after you pass through the URLs. This means siege will
# grab a new session with each run through its URLs. The default
# value is false.
# ex: expire-session = true
#
# expire-session =
#
# Failures: This is the number of total connection failures allowed
# before siege aborts. Connection failures (timeouts, socket failures,
# etc.) are combined with 400 and 500 level errors in the final stats,
# but those errors do not count against the abort total. If you set
# this total to 10, then siege will abort after ten socket timeouts,
# but it will NOT abort after ten 404s. This is designed to prevent
# a run-away mess on an unattended siege. The default value is 1024
# ex: failures = 50
#
# failures =
#
# Internet simulation. If true, siege clients will hit
# the URLs in the urls.txt file randomly, thereby simulating
# internet usage. If false, siege will run through the
# urls.txt file in order from first to last and back again.
# ex: internet = true
#
internet = false
#
# Default benchmarking value, If true, there is NO delay
# between server requests, siege runs as fast as the web
# server and the network will let it. Set this to false
# for load testing.
# ex: benchmark = true
#
benchmark = false
#
# Set the siege User-Agent to identify yourself at the
# host, the default is: JoeDog/1.00 [en] (X11; I; Siege #.##)
# But that reeks of corporate techno speak. Feel free
# to make it more interesting :-) Since Limey is recovering
# from minor surgery as I write this, I'll dedicate the
# example to him...
# ex: user-agent = Limey The Bulldog
#
# user-agent =
#
# Accept-encoding. This option allows you to specify
# acceptable encodings returned by the server. Use this
# directive to turn on compression. By default we accept
# gzip compression.
#
# ex: accept-encoding = *
# accept-encoding = gzip
# accept-encoding = compress;q=0.5;gzip;q=1
accept-encoding = gzip
#
# TURN OFF THAT ANNOYING SPINNER!
# Siege spawns a thread and runs a spinner to entertain you
# as it collects and computes its stats. If you don't like
# this feature, you may turn it off here.
# ex: spinner = false
#
spinner = true
#
# WWW-Authenticate login. When siege hits a webpage
# that requires basic authentication, it will search its
# logins for authentication which matches the specific realm
# requested by the server. If it finds a match, it will send
# that login information. If it fails to match the realm, it
# will send the default login information. (Default is "all").
# You may configure siege with several logins as long as no
# two realms match. The format for logins is:
# username:password[:realm] where "realm" is optional.
# If you do not supply a realm, then it will default to "all"
# ex: login = jdfulmer:topsecret:Admin
# login = jeff:supersecret
#
# login =
#
# WWW-Authenticate username and password. When siege
# hits a webpage that requires authentication, it will
# send this user name and password to the server. Note
# this is NOT form based authentication. You will have
# to construct URLs for that.
# ex: username = jdfulmer
# password = whoohoo
#
# username =
# password =
#
# ssl-cert
# This optional feature allows you to specify a path to a client
# certificate. It is not necessary to specify a certificate in
# order to use https. If you don't know why you would want one,
# then you probably don't need this feature. Use openssl to
# generate a certificate and key with the following command:
# $ openssl req -nodes -new -days 365 -newkey rsa:1024 \
# -keyout key.pem -out cert.pem
# Specify a path to cert.pem as follows:
# ex: ssl-cert = /home/jeff/.certs/cert.pem
#
# ssl-cert =
#
# ssl-key
# Use this option to specify the key you generated with the command
# above. ex: ssl-key = /home/jeff/.certs/key.pem
# You may actually skip this option and combine both your cert and
# your key in a single file:
# $ cat key.pem > client.pem
# $ cat cert.pem >> client.pem
# Now set the path for ssl-cert:
# ex: ssl-cert = /home/jeff/.certs/client.pem
# (in this scenario, you comment out ssl-key)
#
# ssl-key =
#
# ssl-timeout
# This option sets a connection timeout for the ssl library
# ex: ssl-timeout = 30
#
# ssl-timeout =
#
# ssl-ciphers
# You can use this feature to select a specific ssl cipher
# for HTTPs. To view the ones available with your library run
# the following command: openssl ciphers
# ex: ssl-ciphers = EXP-RC4-MD5
#
# ssl-ciphers =
#
# Login URL. This is the first URL to be hit by every siege
# client. This feature was designed to allow you to login to
# a server and establish a session. It will only be hit once
# so if you need to hit this URL more than once, make sure it
# also appears in your urls.txt file.
#
# ex: login-url = http://eos.haha.com/login.jsp POST name=jeff&pass=foo
#
# login-url =
#
# Proxy protocol. This option allows you to select a proxy
# server for stress testing. The proxy will request the URL(s)
# specified by -u"my.url.org" OR from the urls.txt file.
#
# ex: proxy-host = proxy.whoohoo.org
# proxy-port = 8080
#
# proxy-host =
# proxy-port =
#
# Proxy-Authenticate. When scout hits a proxy server which
# requires username and password authentication, it will send this
# username and password to the server. The format is username,
# password and optional realm each separated by a colon. You
# may enter more than one proxy-login as long as each one has
# a different realm. If you do not enter a realm, then scout
# will send that login information to all proxy challenges. If
# you have more than one proxy-login, then scout will attempt
# to match the login to the realm.
# ex: proxy-login: jeff:secret:corporate
# proxy-login: jeff:whoohoo
#
# proxy-login =
#
# Redirection support. This option allows you to control
# whether a Location: hint will be followed. Most users
# will want to follow redirection information, but sometimes
# it's desired to just get the Location information.
#
# ex: follow-location = false
#
# follow-location =
# Zero-length data. siege can be configured to disregard
# results in which zero bytes are read after the headers.
# Alternatively, such results can be counted in the final
# tally of outcomes.
#
# ex: zero-data-ok = false
#
# zero-data-ok =
#
# end of siegerc

tox.ini

@@ -0,0 +1,9 @@
[tox]
envlist = py

[testenv]
deps = -rrequirements.txt
passenv =
    S3TEST_CONF
    S3_USE_SIGV4
commands = pytest {posargs}
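With S3TEST_CONF exported, `tox` runs the pytest suite and forwards any extra arguments through `{posargs}`; for example, `S3TEST_CONF=your.conf tox -- -m 'not fails_on_rgw'` (illustrative invocation; `your.conf` is a placeholder for an actual config file).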