Compare commits

215 commits

Author SHA1 Message Date
galsalomon66
d5facc6e63 fix comments
Signed-off-by: galsalomon66 <gal.salomon@gmail.com>
(cherry picked from commit 6a775cb445)
2024-09-09 12:50:48 +00:00
Gal Salomon
0802a6f1f8 fix the assert for empty results
Signed-off-by: Gal Salomon <gal.salomon@gmail.com>
(cherry picked from commit 9444c29674)
2024-09-09 12:50:15 +00:00
Ali Maredia
d776898f61 Merge pull request #513 from galsalomon66/using_get_bucket_name
Using get bucket name

(cherry picked from commit 28009bf7d3)
2024-09-09 12:42:56 +00:00
Pritha Srivastava
6953aadf06 rgw: adding tests for add_client_id_to_oidc_provider
and update_thumbprint_for_oidc_provider.

Signed-off-by: Pritha Srivastava <prsrivas@redhat.com>
(cherry picked from commit bc8c14ac12)
2024-08-29 10:51:16 -04:00
Casey Bodley
cf0103e3f3 s3: reenable tenanted bucket policy test
the before-call hook url-encodes the ':' part of tenanted bucket names
to resolve SignatureDoesNotMatch errors

removed the list-v2 version of the test since it isn't relevant to
bucket policy test coverage

add a new test case that creates the bucket under the tenanted user,
then uses the main client to access it

Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit 2e41494293)
2024-08-23 12:35:22 -04:00
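The fix in question hooks botocore's event system; a minimal sketch of the idea, assuming a 'before-call' handler that rewrites the request path (names here are illustrative, not the suite's actual code):

    import boto3

    def encode_tenant_separator(params, **kwargs):
        # percent-encode the ':' in "tenant:bucket" so the path that gets
        # signed matches the one the server reconstructs, avoiding
        # SignatureDoesNotMatch on tenanted bucket names
        params['url_path'] = params['url_path'].replace(':', '%3A')

    client = boto3.client('s3')
    client.meta.events.register('before-call.s3', encode_tenant_separator)
    client.list_objects(Bucket='tenant:bucket')   # tenanted bucket name
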
Seena Fallah
06a5851a92 s3select: align error codes with the new AWS format
ref. https://github.com/ceph/ceph/pull/56864

Signed-off-by: Seena Fallah <seenafallah@gmail.com>
(cherry picked from commit 2e395d78ea)
2024-07-30 13:33:12 -04:00
Seena Fallah
be181ba455 BucketPolicy: decouple encryption tests from invalid algo and unencrypted
Signed-off-by: Seena Fallah <seenafallah@gmail.com>
(cherry picked from commit c9aded48e5)
2024-07-26 09:19:03 -04:00
Seena Fallah
0887ec43fd BucketPolicy: add test for sse-c in conditions
Ref. https://github.com/ceph/ceph/pull/58689

Signed-off-by: Seena Fallah <seenafallah@gmail.com>
(cherry picked from commit 87b496f25f)
2024-07-26 09:19:03 -04:00
Seena Fallah
abd28c3aef BlockPublicPolicy: add test when policy has principal
Ref. https://tracker.ceph.com/issues/67048

Signed-off-by: Seena Fallah <seenafallah@gmail.com>
(cherry picked from commit a83396cda7)
2024-07-25 10:42:43 -04:00
Seena Fallah
3601557e45 PolicyStatus: add test for policy with Principal
Ref. https://github.com/ceph/ceph/pull/58686

Signed-off-by: Seena Fallah <seenafallah@gmail.com>
(cherry picked from commit 93a3b6c704)
2024-07-25 10:42:43 -04:00
Seena Fallah
94b02d5cbf BucketPolicy: do not allow NotPrincipal with Allow Effect
Ref. https://github.com/ceph/ceph/pull/58686

Signed-off-by: Seena Fallah <seenafallah@gmail.com>
(cherry picked from commit 474c1404e2)
2024-07-25 10:42:43 -04:00
Casey Bodley
88efafe863 test Get/HeadObject with partNumber for single-multipart upload
test_multipart_get_part() tests 'normal' multipart uploads. add a new
test case for a multipart upload with a single part to test the fix
for https://tracker.ceph.com/issues/66705

Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit bebdfd1ba7)
2024-07-05 10:51:45 -04:00
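A sketch of the added case, assuming a configured boto3 client and an existing bucket (names illustrative):

    import boto3

    client = boto3.client('s3')        # endpoint/credentials assumed
    bucket, key = 'testbucket', 'single-part-mpu'

    mpu = client.create_multipart_upload(Bucket=bucket, Key=key)
    part = client.upload_part(Bucket=bucket, Key=key, PartNumber=1,
                              UploadId=mpu['UploadId'],
                              Body=b'x' * (5 * 1024 * 1024))
    client.complete_multipart_upload(
        Bucket=bucket, Key=key, UploadId=mpu['UploadId'],
        MultipartUpload={'Parts': [{'ETag': part['ETag'], 'PartNumber': 1}]})

    # Get/Head with PartNumber=1 should succeed and report a single part
    resp = client.head_object(Bucket=bucket, Key=key, PartNumber=1)
    assert resp['PartsCount'] == 1
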
Gal Salomon
565e5c8b9b add handling for EventStreamError exception
Signed-off-by: Gal Salomon <gal.salomon@gmail.com>
(cherry picked from commit 77f1334571)
2024-07-05 10:11:44 -04:00
Gal Salomon
f4e362dc3c a change in the RGW error-response requires changes in s3-tests
Signed-off-by: Gal Salomon <gal.salomon@gmail.com>
(cherry picked from commit c4c5a247eb)
2024-07-05 10:11:44 -04:00
Matt Benjamin
8f988c10b9 mark two tests that fail on dbstore
also add @pytest.mark.checksum for new checksum
tests

Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
(cherry picked from commit 8277a9fb9a)
2024-07-05 10:11:04 -04:00
Matt Benjamin
93f0753174 remove duplicate size assignment [rkhudov review]
Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
(cherry picked from commit c0f0b679db)
2024-07-05 10:11:04 -04:00
Matt Benjamin
98c0931419 add test_post_object_upload_checksum
this tests a two-megabyte binary upload with validated
(awscli-computed) SHA256 checksum, and also verifies failure when
a bad checksum is provided

Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
(cherry picked from commit 95df503ced)
2024-07-05 10:11:04 -04:00
Matt Benjamin
7d91b2507f add test_multipart_checksum_3parts
tests a full multipart upload cycle with 3 unique parts, which
verifies composite checksum computation and the logic to propagate
parts_count to CompleteMultipart

Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
(cherry picked from commit 9577cde013)
2024-07-05 10:11:04 -04:00
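For context, the composite checksum here is AWS's "checksum of checksums"; a sketch of that computation, under that assumption:

    import base64
    import hashlib

    def composite_sha256(part_bodies):
        # concatenate the raw per-part digests, hash the concatenation,
        # then append "-<part count>" (AWS's composite checksum format)
        combined = b''.join(hashlib.sha256(p).digest() for p in part_bodies)
        digest = hashlib.sha256(combined).digest()
        return base64.b64encode(digest).decode() + '-%d' % len(part_bodies)
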
Matt Benjamin
9de20e91f2 test_multipart_upload_sha256: work around failures re-trying complete-multipart
As described in https://tracker.ceph.com/issues/65746, retrying complete-multipart
after having attempted to complete the same upload with a bad checksum argument
fails with an internal error.

The status code is 500, but I'm unsure if it can be retried again, or whether
the upload can be aborted later.

Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
(cherry picked from commit a3dbac7115)
2024-07-05 10:11:04 -04:00
Casey Bodley
2495db2447 add "checksum" marker, since new checksum tests reference it
this removes a Pytest warning during execution

Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
(cherry picked from commit 73ed9121f4)
2024-07-05 10:10:59 -04:00
Casey Bodley
12a9fe7d32 requirements: unpin pytz version
Fixes: https://tracker.ceph.com/issues/66655

Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit 27f24ee4d7)
2024-06-25 09:12:49 -04:00
Sumedh A. Kulkarni
5162da85ec Fix wrong assertion of the test: test_buckets_list_ctime
TestName:
s3tests_boto3.functional.test_s3:test_buckets_list_ctime

Problem:
The test creates 5 buckets for a user, but its assertion requires
every bucket owned by the user to have a CreationTime within the
last day. As a result the test fails whenever the user has
pre-existing buckets older than a day.

Solution:
Assert only on the CreationTime of buckets that were created with
test execution.

Signed-off-by: Sumedh A. Kulkarni <sumedh.a.kulkarni@seagate.com>
Co-developed-by: Bob Ham <bham12@bloomberg.net>
Signed-off-by: Bob Ham <bham12@bloomberg.net>
(cherry picked from commit e9c5cc29e9)
2024-05-14 11:49:25 -04:00
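A sketch of the corrected assertion, with illustrative bucket names:

    from datetime import datetime, timedelta, timezone
    import boto3

    client = boto3.client('s3')
    created = {'test-bucket-%d' % i for i in range(5)}   # made by the test
    yesterday = datetime.now(timezone.utc) - timedelta(days=1)
    for b in client.list_buckets()['Buckets']:
        if b['Name'] in created:      # skip the user's pre-existing buckets
            assert b['CreationDate'] > yesterday
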
Casey Bodley
787242b007 sns: add test_sns.py for simple topic testing
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit dfabbf5a8d)
2024-04-16 11:42:13 -04:00
Casey Bodley
5de66792e5 iam: move iam_root, iam_alt_root fixtures to iam.py
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit 7bd4b0ee14)
2024-04-16 11:42:13 -04:00
Casey Bodley
ef5333112d s3: remove test_bucket_acl_no_grants()
aws doesn't consult acls for same-account access. rgw doesn't for
account users either

Fixes: https://github.com/ceph/s3-tests/issues/184

Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit 96d658444a)
2024-04-16 11:42:13 -04:00
Casey Bodley
281ab8796f iam: test cross-account policy with assumed role
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit a3a16eb66a)
2024-04-16 11:42:13 -04:00
Casey Bodley
98fd3f55a0 iam: add account tests for GroupPolicy apis
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit 7ebc530e04)
2024-04-16 11:42:13 -04:00
Casey Bodley
79a0dab8d5 iam: add account tests for Group apis
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit 4ca7967ae7)
2024-04-16 11:42:13 -04:00
Casey Bodley
0f13cb9326 iam: add account test for OpenIDConnectProvider apis
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit d5791d8da6)
2024-04-16 11:42:13 -04:00
Casey Bodley
b514c9938b iam: test cross-account permissions
test the [iam alt root] user's access to buckets owned by [iam root]
using various policy principals and acl grantees

Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit ba292fbf59)
2024-04-16 11:42:13 -04:00
Casey Bodley
4d83a45b0f config: add [iam alt root] for an alt account's root user
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit ed4a8e2244)
2024-04-16 11:42:13 -04:00
Casey Bodley
7fbe5e94be iam: test managed role policy
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit 46217fcf81)
2024-04-16 11:42:13 -04:00
Casey Bodley
a96a6a8363 iam: add account test for RolePolicy apis
adds test cases for the following iam actions:
* PutRolePolicy
* GetRolePolicy
* DeleteRolePolicy
* ListRolePolicies

verified to pass against aws when an account root user's credentials are
provided in the [iam] section of s3tests.conf

Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit cefea0fd26)
2024-04-16 11:42:13 -04:00
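A sketch of the round trip these cases cover, assuming an existing role named 'test-role':

    import json
    import boto3

    iam = boto3.client('iam')         # root credentials from [iam] section
    doc = json.dumps({'Version': '2012-10-17', 'Statement': [
        {'Effect': 'Allow', 'Action': 's3:*', 'Resource': '*'}]})

    iam.put_role_policy(RoleName='test-role', PolicyName='inline',
                        PolicyDocument=doc)
    assert 'inline' in iam.list_role_policies(
        RoleName='test-role')['PolicyNames']
    iam.get_role_policy(RoleName='test-role', PolicyName='inline')
    iam.delete_role_policy(RoleName='test-role', PolicyName='inline')
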
Casey Bodley
f14abc01a1 iam: add account tests for Role apis
adds test cases for the following iam actions:
* CreateRole
* GetRole
* ListRoles
* DeleteRole
* UpdateRole

verified to pass against aws when an account root user's credentials are
provided in the [iam] section of s3tests.conf

Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit d4ada317e1)
2024-04-16 11:42:13 -04:00
Casey Bodley
cd98e8ecb9 iam: test managed user policy
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit c6e40b4ffa)
2024-04-16 11:42:13 -04:00
Casey Bodley
58c32965d8 iam: add account tests for UserPolicy apis
adds test cases for the following iam actions:
* PutUserPolicy
* GetUserPolicy
* DeleteUserPolicy
* ListUserPolicies

verified to pass against aws when an account root user's credentials are
provided in the [iam] section of s3tests.conf

Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit 364f29d087)
2024-04-16 11:42:13 -04:00
Casey Bodley
5c48348d7b iam: test bucket policy principal for iam user with path
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit 0377466704)
2024-04-16 11:42:13 -04:00
Casey Bodley
1df924e131 iam: add tests for AccessKey apis
adds test cases for the following iam actions:
* CreateAccessKey
* UpdateAccessKey
* DeleteAccessKey
* ListAccessKeys

verified to pass against aws when an account root user's credentials are
provided in the [iam] section of s3tests.conf

Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit db76dfe791)
2024-04-16 11:42:13 -04:00
Casey Bodley
358e68250f iam: add tests for User apis
adds test cases for the following iam actions:
* CreateUser
* GetUser
* UpdateUser
* DeleteUser
* ListUsers

verified to pass against aws when an account root user's credentials are
provided in the [iam] section of s3tests.conf

Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit d8becad96a)
2024-04-16 11:42:13 -04:00
Casey Bodley
f7c57e1240 config: add [iam root] for an account root user
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit 7cd4613883)
2024-04-16 11:42:13 -04:00
Casey Bodley
d6283e893b config: parse iam config during setup()
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit 5f3353e6b5)
2024-04-16 11:42:13 -04:00
Casey Bodley
2812456bc6 iam: rename test_of_iam mark to iam_tenant
differentiate the test cases that expect a tenant-wide IAM api from new
ones that expect an account-wide api

Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit a35b3c609a)
2024-04-16 11:42:13 -04:00
Casey Bodley
e16230a24e config: add fixtures for iam name/path prefixes
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit 83af25722c)
2024-04-16 11:42:13 -04:00
Matt Benjamin
6eb42a1c88 add tests for ObjectSizeGreater(Less)Than
Add tests for the new ObjectSizeGreaterThan and
ObjectSizeLessThan lifecycle operators.

Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
(cherry picked from commit f752d6b6d8)
2024-04-03 13:51:05 -04:00
Matt Benjamin
fa75ccfe18 add test test_lifecycle_expiration_newer_noncurrent()
This verifies the new NewerNoncurrentVersions lifecycle filter
operator.

Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
(cherry picked from commit a5aa59df04)
2024-04-03 13:51:05 -04:00
Yuval Lifshitz
ee3139ba04 test etag on mpu complete replies
this is to cover the fix of: https://tracker.ceph.com/issues/58879

Signed-off-by: Yuval Lifshitz <ylifshit@ibm.com>
(cherry picked from commit a28d46fa2a)
2024-03-27 10:16:44 -04:00
Casey Bodley
7a52579573 test_headers: use fixture to hook request headers
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit aecd282a11)
2024-03-10 10:44:03 -04:00
Casey Bodley
a559d86236 s3: object lock tests for deletion of multipart objects
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit a87f0b63e7)
2024-03-08 14:14:42 -05:00
Seena Fallah
509acecc67 PublicAccessBlock: test access deny via bucket policy
Make sure 403 is returned when access is denied via s3:GetBucketPublicAccessBlock action on GetBucketPublicAccessBlock

Refs: https://github.com/ceph/ceph/pull/55652
Signed-off-by: Seena Fallah <seenafallah@gmail.com>
(cherry picked from commit 3af42312bf)
2024-03-08 14:12:27 -05:00
Seena Fallah
a0d38e7d35 PublicAccessBlock: test 404 on no block configuration
Make sure NoSuchPublicAccessBlockConfiguration is returned when no public block is configured on the bucket.

Refs: https://github.com/ceph/ceph/pull/55652
Signed-off-by: Seena Fallah <seenafallah@gmail.com>
(cherry picked from commit 3056e6d039)
2024-03-08 14:12:27 -05:00
Tobias Urdin
f3c2b9a744 Add more testing for presigned URLs
This improves the testing for presigned URLs for
both get_object and put_object when using
generate_presigned_url().

It covers the case where you pass, for example, an x-amz-acl
header (ACL in Params for generate_presigned_url) that should
be signed.

Tests the regression in [1].

[1] https://tracker.ceph.com/issues/64308

Signed-off-by: Tobias Urdin <tobias.urdin@binero.se>
(cherry picked from commit 055451f666)
2024-03-07 16:02:34 -05:00
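A sketch of the signed-header case (bucket/key names assumed, endpoint from client configuration):

    import boto3
    import requests

    client = boto3.client('s3')
    url = client.generate_presigned_url(
        'put_object',
        Params={'Bucket': 'testbucket', 'Key': 'obj', 'ACL': 'private'},
        ExpiresIn=600)
    # ACL becomes a signed x-amz-acl header, so the PUT must send it;
    # omitting it or sending another value should break the signature
    resp = requests.put(url, data=b'payload',
                        headers={'x-amz-acl': 'private'})
    assert resp.status_code == 200
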
Jane Zhu
021ada2165 tag test_versioning_obj_suspended_copy with fails_on_dbstore
Signed-off-by: Juan Zhu <jzhu4@dev-10-34-20-139.pw1.bcc.bloomberg.com>
(cherry picked from commit 1866f04d81)
2024-03-07 16:02:34 -05:00
Jane Zhu
603679a414 add test case for object copy in versioning suspended bucket
Signed-off-by: Juan Zhu <jzhu4@dev-10-34-20-139.pw1.bcc.bloomberg.com>
(cherry picked from commit a2acdbfdda)
2024-03-07 16:02:34 -05:00
Ali Maredia
10fcfbea4c replace datetime.now with datetime.utcnow()
when the local timezone is not UTC and is a day behind, the
lifecycle_header tests fail with 2 days not equal to 1,
so replace datetime.now() with datetime.utcnow()

Signed-off-by: Ali Maredia <amaredia@redhat.com>
(cherry picked from commit 4744808eda)
2024-02-21 12:29:58 -05:00
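A brief illustration of the substitution (datetime.utcnow() is itself deprecated on newer Python; the timezone-aware form is shown alongside):

    from datetime import datetime, timezone

    local = datetime.now()                  # local wall clock; may lag UTC
    naive_utc = datetime.utcnow()           # what this commit switches to
    aware_utc = datetime.now(timezone.utc)  # tz-aware modern equivalent
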
Casey Bodley
e0503c9fcd 304 Not Modified tests still expect etag header
test case for https://tracker.ceph.com/issues/45736

Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit 19c17fa49a)
2024-01-11 14:00:50 -05:00
Casey Bodley
d50ea9f485 object lock: test very large RetainUntilDate
https://tracker.ceph.com/issues/63537 reported that large dates (with
year after 2107) got truncated when written. test with a later date, and
check that get_object_retention() gives back the date we put

Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit 40182ce26f)
2023-11-28 15:54:35 -05:00
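A sketch of the round trip, assuming an object-lock-enabled bucket with one object; the year 2129 is illustrative, the point is only that it falls after 2107:

    from datetime import datetime, timezone
    import boto3

    client = boto3.client('s3')
    retain = datetime(2129, 1, 1, tzinfo=timezone.utc)   # after year 2107
    client.put_object_retention(
        Bucket='locked-bucket', Key='obj',
        Retention={'Mode': 'GOVERNANCE', 'RetainUntilDate': retain})
    got = client.get_object_retention(Bucket='locked-bucket', Key='obj')
    assert got['Retention']['RetainUntilDate'] == retain   # not truncated
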
Casey Bodley
85c7fb3dd6 test Head/GetObject on completed multipart parts
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit d552124680)
2023-11-28 13:56:59 -05:00
Casey Bodley
c4a711ff28 add test_post_object_wrong_bucket()
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit 95677d85bc)
2023-10-09 14:57:12 -04:00
Casey Bodley
651ecd20dd boto2: copy configured_storage_classes() fix from boto3
some boto2 storage class tests are failing because the list returned by
configured_storage_classes() included an empty string

the boto3 version had an extra line that removes empty values; copy that
for boto2

Signed-off-by: Casey Bodley <cbodley@redhat.com>
2023-09-22 11:15:09 -04:00
Casey Bodley
3023080509 boto2: fix byte vs. string comparison in verify_object
the storage class tests were failing on comparisons between the input
data and output data:

AssertionError: assert 'oFbdZvtRj' == b'oFbdZvtRj'

convert the byte representation back to string for comparison

Signed-off-by: Casey Bodley <cbodley@redhat.com>
2023-09-22 11:15:09 -04:00
Casey Bodley
69f867ff23 lc: test transition of plain null version
for validating the fix of https://tracker.ceph.com/issues/62013

Signed-off-by: Casey Bodley <cbodley@redhat.com>
2023-09-22 11:15:09 -04:00
Tobias Urdin
ff44ce9364 Add test to verify HTTP OPTIONS on presigned URL
Related: https://tracker.ceph.com/issues/62033

Signed-off-by: Tobias Urdin <tobias.urdin@binero.com>
(cherry picked from commit c0a1880d4c)
2023-08-21 15:53:11 -04:00
Soumya Koduri
cc13ed4984 Add testcase to verify obj mtime post setattrs
Object mtime should not change for any attr changes unless
it's a copy operation. Verify the same using the PutObjectACL op.

Signed-off-by: Soumya Koduri <skoduri@redhat.com>
(cherry picked from commit 10f3f7620d)
2023-07-03 10:26:48 +05:30
Casey Bodley
44bc52b426 remove reliance on x-rgw-object-count and x-rgw-bytes-used headers
stop using head_bucket() to fetch these response headers, and use
list_objects_v2() instead to count objects and sizes

Fixes: #315

Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit 188b392131)
2023-07-03 10:26:04 +05:30
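A sketch of the replacement accounting (helper name assumed):

    import boto3

    def count_objects_and_bytes(client, bucket):
        # count via ListObjectsV2 rather than the non-standard
        # x-rgw-object-count / x-rgw-bytes-used HeadBucket headers
        num = total = 0
        pages = client.get_paginator('list_objects_v2').paginate(Bucket=bucket)
        for page in pages:
            for obj in page.get('Contents', []):
                num += 1
                total += obj['Size']
        return num, total
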
Abhishek Lekshmanan
610192c033 boto3/test_s3: add ranges around part boundary for encryption tests
Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
(cherry picked from commit 00cdcaf056)
2023-06-14 16:12:59 -04:00
Abhishek Lekshmanan
7682983704 boto3/test_s3: range encoding helper function takes a size argument
This is to avoid a get_object call for every range check as the object size will
not change during this duration and we'd most likely already know the object
sizes beforehand

Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
(cherry picked from commit 741f2cbc9e)
2023-06-14 16:12:59 -04:00
Casey Bodley
dc7068b75b sse: add sse-c test with unaligned multipart parts
Fixes: http://tracker.ceph.com/issues/38700

Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit b05a394738)
2023-06-14 16:12:59 -04:00
Cory Snyder
b81c9ba8d9 Add test_versioning_concurrent_multi_object_delete
Tests that concurrent multi-object delete requests which specify
the same versioned object instances return successful object
responses within the response body.

relates to: https://tracker.ceph.com/issues/56646

Signed-off-by: Cory Snyder <csnyder@iland.com>
(cherry picked from commit e18ea7fac4)
2023-05-17 14:33:34 -04:00
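A sketch of the scenario, assuming a versioning-enabled bucket:

    import threading
    import boto3

    client = boto3.client('s3')
    bucket = 'versioned-bucket'            # versioning assumed enabled
    vid = client.put_object(Bucket=bucket, Key='obj', Body=b'x')['VersionId']
    delete = {'Objects': [{'Key': 'obj', 'VersionId': vid}]}

    results = [None, None]
    def run(i):
        results[i] = client.delete_objects(Bucket=bucket, Delete=delete)
    threads = [threading.Thread(target=run, args=(i,)) for i in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # both concurrent requests should report the version under 'Deleted'
    for r in results:
        assert not r.get('Errors')
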
galsalomon66
6919f63324 adding tests for JSON input types, including various SQL operators.
alignment of some of the test routines upon removing the XML tags (<Payload><Records><Payload>) from the s3select results. (Json s3tests #506)

Signed-off-by: galsalomon66 <gal.salomon@gmail.com>
2023-05-16 00:26:19 +03:00
Casey Bodley
264e371101 test_sse_s3_default_multipart_upload verifies encryption header
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit febbcc12c2)
2023-04-10 22:44:40 -04:00
Mark Kogan
6342b3d258 test object PUT with chunked transfer encoding
Before the RGW fix, RGW responded with 411 instead of 200

RGW fix PR: https://github.com/ceph/ceph/pull/50235

Signed-off-by: Mark Kogan <mkogan@redhat.com>
(cherry picked from commit 13a9bfc00a)
2023-04-07 09:20:15 -04:00
Casey Bodley
67e6c5aecc boto3: multi-object-delete tests use client.delete_objects()
when the tests were converted from boto2, they were rewritten as loops
over client.delete_object(). switch back to multi-delete

Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit 787dc6bd43)
2023-03-31 14:43:34 -04:00
Casey Bodley
ac25acfddb s3website: collections.Container removed from python3.10
Fixes: https://tracker.ceph.com/issues/58960

Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit 97c0338adf)
2023-03-21 17:09:29 -04:00
Casey Bodley
a1139f5301 add test_get_object_torrent
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit bb27e04c45)
2023-03-14 13:43:08 -04:00
Soumya Koduri
d6a9168220 lifecycle: Fix test_lifecycle_expiration_header_* testcases
A few checks were incorrectly mapped when switched to 'assert'. This
commit fixes them.

Signed-off-by: Soumya Koduri <skoduri@redhat.com>
(cherry picked from commit 29b0e27e49)
2023-03-08 08:58:56 -05:00
Soumya Koduri
02fdf70f0a dbstore: add back missing 'fails_on_dbstore' tag
Mark testcase "test_lifecycle_expiration_header_and_tags_head" as
fails_on_dbstore

Signed-off-by: Soumya Koduri <skoduri@redhat.com>
(cherry picked from commit d976f47d74)
2023-03-06 13:00:00 -05:00
Moritz Röhrich
7fadea7416 QoL: Fix tox.ini syntax and other minor things
- Fix tox.ini syntax

Modern tox versions require the expected environment variables to be
listed one by one on separate lines in tox.ini

- Add `venv` to list of ignored names for git

This is a common name for a local Python virtual environment. Less
typing than `virtualenv`

- Add `tox` to requirements.txt

Installing `tox` via `pip` has the advantage of including it in the
virtual environment, thus avoiding trouble on operating systems shipping
by default with python3.6 or older. It's also nice that `pip install -r
requirements.txt` is now sufficient to set up the testing environment,
after initializing the virtual environment with a modern-enough python
version.

Signed-off-by: Moritz Röhrich <moritz.rohrich@suse.com>
(cherry picked from commit 3a0f1f0ead)
2023-03-06 09:20:49 -05:00
Casey Bodley
e5dabbe45d boto3: list_versions() omits empty KeyMarker/VersionIdMarker
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit a536dd0e88)
2023-03-06 09:20:49 -05:00
Casey Bodley
f060572730 iam: add back missing fails_on_dbstore tags
Fixes: https://tracker.ceph.com/issues/58762

Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit 5219b86db9)
2023-02-27 08:43:51 -05:00
Yuval Lifshitz
99690b71c5 better error handling in the STS tests
also, give more accurate instructions on how to run the tests

Signed-off-by: Yuval Lifshitz <ylifshit@redhat.com>
(cherry picked from commit 3437cda73d)
2023-02-20 11:49:34 -05:00
Robin H. Johnson
a8fd8d430e test_post_object_upload_size_rgw_chunk_size_bug: new testcase
`ERR_TOO_SMALL` is wrongly returned if all of the following are true,
- the get_data returns multiple items (chunks)
- the length of the last item is smaller than the POST Policy's min
  value for content-length-range.

The check should be `(ofs < min_len)` instead of `(len < min_len)`

This is further confirmed by the next line of `s->obj_size = ofs`

Move the `int len` scope inside the loop to try and prevent the bug in
the future.

The bug was refactored in 2016, but was introduced in Oct 2012, when
this functionality was first added to RGW in commit 7bb3504d3f0974e9863f536e9af0ce8889d6888f.

Reference: 933a42f9af/src/rgw/rgw_op.cc (L4474-L4513)
Reference: 7bb3504d3f
Signed-off-by: Robin H. Johnson <rjohnson@digitalocean.com>
(cherry picked from commit 5914eb2005)
2023-02-20 09:30:30 -05:00
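A sketch of a POST upload constrained by content-length-range, the policy condition involved in this bug (sizes and names illustrative; the actual chunk boundary depends on RGW configuration):

    import boto3
    import requests

    client = boto3.client('s3')
    post = client.generate_presigned_post(
        'testbucket', 'obj',
        Conditions=[['content-length-range', 4, 64 * 1024 * 1024]],
        ExpiresIn=600)
    # a body large enough to span several internal chunks, whose last
    # chunk is smaller than the minimum, used to trip EntityTooSmall
    body = b'x' * (8 * 1024 * 1024 + 1)
    resp = requests.post(post['url'], data=post['fields'],
                         files={'file': ('obj', body)})
    assert resp.status_code == 204
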
Casey Bodley
968d0841da test_s3: remove test_user_policy()
this has been failing consistently in local testing. test_sts.py has
lots of user policy test coverage, so this test case in test_s3.py is
superfluous

Fixes: https://tracker.ceph.com/issues/58365

Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit 18a41ab63f)
2023-02-09 10:34:45 -05:00
Casey Bodley
acec6abe78 test_headers: add custom marks for auth_common/aws2/aws4
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit 7993dd02a5)
2023-01-25 18:04:31 -05:00
Casey Bodley
8a51654721 remove bootstrap
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit 5e9f6e5ffb)
2023-01-25 18:04:23 -05:00
Casey Bodley
bb4158dec6 cleanup duplicate lines
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit d13ed28a5c)
2023-01-25 18:04:15 -05:00
Casey Bodley
3d8fa8179d remove nose dependency from requirements.txt
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit 494379c2ff)
2023-01-25 18:04:06 -05:00
Casey Bodley
b4bb9afb52 nose: remove nose attrs and imports
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit 4c75fba0de)
2023-01-25 18:03:58 -05:00
Casey Bodley
8186dd7561 pytest: replace nose eq() with assert ==
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit f5d0bc9be3)
2023-01-25 18:03:50 -05:00
Casey Bodley
5396b04f1b pytest: replace nose SkipTest with pytest.skip()
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit 7e7e8d5a42)
2023-01-25 18:03:42 -05:00
Casey Bodley
e8d56603a3 pytest: replace @nose.with_setup with fixtures
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit c80e9d2118)
2023-01-25 18:01:16 -05:00
Casey Bodley
b6dcf111fa pytest: add custom marks for each nose @attr
and register them in pytest.ini

Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit 4864dbc340)
2023-01-25 18:00:37 -05:00
Casey Bodley
60cc478d70 remove tests tagged fails_strict_rfc2616
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit 3652cfe2ec)
2023-01-25 18:00:28 -05:00
Casey Bodley
c749452fd0 pytest: add global configfile and autouse teardown fixtures
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit 672a123348)
2023-01-25 18:00:17 -05:00
Casey Bodley
d528efdda9 pytest: add tox.ini to run pytest, update README
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit 9319a41b24)
2023-01-25 18:00:03 -05:00
Casey Bodley
c12216d0cc pin botocore to resolve v2 signature failures
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2022-11-23 13:05:10 -05:00
galsalomon66
def95c2968 remove comment
Signed-off-by: galsalomon66 <gal.salomon@gmail.com>
(cherry picked from commit 60b26f210e)
2022-10-19 15:46:02 -04:00
galsalomon66
5eadd6d08c the use of std::to_string changes the double presentation (not accuracy)
Signed-off-by: galsalomon66 <gal.salomon@gmail.com>
(cherry picked from commit b200013565)
2022-10-19 15:46:02 -04:00
galsalomon66
89b10274dd the upgraded CSV parser has some impact on engine results (redundant comma, quote behavior)
Signed-off-by: galsalomon66 <gal.salomon@gmail.com>
(cherry picked from commit 89f97ed35c)
2022-10-19 15:46:02 -04:00
Casey Bodley
02ee15041a add test_object_copy_16m to test refcounting of tail objects
i don't think any of our CopyObj test cases were large enough to have
tail objects, so they weren't exercising our tail object ref counting
strategy

Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit defb8eb977)
2022-10-06 16:59:30 -04:00
Soumya Koduri
698f19d621 cloud-transition: Adjust lc wait time
Fixes: https://tracker.ceph.com/issues/57401

Signed-off-by: Soumya Koduri <skoduri@redhat.com>
(cherry picked from commit 774172ad43)
2022-09-06 22:40:34 +05:30
Zuhair AlSader
9ea2dfb393 Add support for rocky linux
Signed-off-by: Zuhair AlSader <zuhair.alsader@seagate.com>
(cherry picked from commit ad999de7c4)
2022-08-25 10:44:05 +05:30
Soumya Koduri
2227b61eee dbstore: Update the tests marked 'fails_on_dbstore'
Tag/Untag testcases failing on dbstore as per latest run against main

Signed-off-by: Soumya Koduri <skoduri@redhat.com>
(cherry picked from commit a8ee732732)
2022-08-19 10:30:49 +05:30
Igor Fedotov
c70849d527 Test case for locked and "marked-deleted" object version removal.
Reproduces: https://tracker.ceph.com/issues/55766
Signed-off-by: Igor Fedotov <igor.fedotov@croit.io>
2022-08-03 11:54:06 -04:00
Shriya Deshmukh
d3e1ccda9c rgw/s3_boto3: Added http status code verification for DeleteBucketTagging
case.

Updated test_set_bucket_tagging test for verifying the http status code
for DeleteBucketTagging case.

Related CEPH PR: https://github.com/ceph/ceph/pull/47262

Signed-off-by: Shriya Deshmukh <shriya.deshmukh@seagate.com>
(cherry picked from commit c8fc8cd7c8)
2022-08-01 14:08:34 -04:00
Soumya Koduri
963359db88 Fix User Policy test failures on dbstore
Tag User policy tests failing on dbstore as 'fails_on_dbstore'

Signed-off-by: Soumya Koduri <skoduri@redhat.com>
(cherry picked from commit 5d63ebf83d)
2022-07-21 14:40:37 -04:00
Ravindra Choudhari
6dbdf39037 Added put/get/list/delete User Policy Tests
c4d30d7	Ravindra Choudhari	Mon, 27 Jun 2022 removing region name
4a13f58	Ravindra Choudhari	Thu, 16 Jun 2022 Updating readme file (#15)
18bc152	Ravindra Choudhari	Tue, 14 Jun 2022 Adding attr test_of_iam to all user policy tests (#13)
03f520a	Ravindra Choudhari	Tue, 14 Jun 2022 resolving review comments (#12)
7cf2823	Ravindra Choudhari	Mon, 13 Jun 2022 added IAM policy test section in README.rst (#11)
563f3ea	Ravindra Choudhari	Fri, 10 Jun 2022 adding failing three tests back with attr @fails_on_rgw (#10)
696dd2e Ravindra Choudhari 	Mon, 6 Jun 2022 changes as per review comments
3d63dfd Ravindra Choudhari 	Mon, 6 Jun 2022 Fixed review comments (#8)
9492f69 Ravindra Choudhari	Fri, 3 Jun 2022 Fixed review comments (#7)
74095dc Ketan Arlulkar     	Wed, 1 Jun 2022 Fixed review comments (#6)
942fb4f Ketan Arlulkar     	Wed, 1 Jun 2022 Added Tests for conflicting policies and IAM actions (#4)
ad5b5ae Ravindra Choudhari 	Tue, 31 May 2022 IAM policies s3 actions (#5)
6515ec6 Ketan Arlulkar     	Fri, 27 May 2022 Corrected eq import
40a2841 Ravindra Choudhari 	Tue, 17 May 2022 resolving conflicts
f53a5c1 Ravindra Choudhari 	Tue, 17 May 2022 added cleanup
747d563 Ketan Arlulkar     	Tue, 17 May 2022 Added cleanup/Delete Policy
d1cc1d8 Ketan Arlulkar     	Mon, 16 May 2022 Fixed review comments
1ec43a2 Ravindra Choudhari 	Mon, 16 May 2022 delete user policy tests
a01722e Ravindra Choudhari 	Mon, 16 May 2022 get user policy tests
ff9d676 Ketan Arlulkar     	Fri, 13 May 2022 Removed TEST IDs
d261400 Ketan Arlulkar     	Tue, 10 May 2022 Put User Policy & List User Policy Tests

Signed-off-by: Ravindra Choudhari <ravindra.choudhari@seagate.com>
(cherry picked from commit bf889041c9)
2022-07-21 14:40:17 -04:00
Casey Bodley
001b7f3f55 remove tests that fail on boto3's parameter validation
Fixes: https://tracker.ceph.com/issues/55193

Signed-off-by: Casey Bodley <cbodley@redhat.com>
2022-06-06 14:28:00 -04:00
Casey Bodley
9f1b9d02a5 fix GetBucketTagging error code
related to https://tracker.ceph.com/issues/55460

Signed-off-by: Casey Bodley <cbodley@redhat.com>
2022-05-19 12:14:13 -04:00
Soumya Koduri
54699553d5 Tag sse* tests with 'fails_on_dbstore' attr
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
(cherry picked from commit 33afb4eb88)
2022-05-11 23:37:37 +05:30
Soumya Koduri
358547142a Adjust wait time for cloud-transition test failures
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
(cherry picked from commit 25d05a194b)
2022-05-11 14:20:14 +05:30
Casey Bodley
e0b27409b9 s3tests.conf.SAMPLE: comment out [s3 cloud] section
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit 9f1f9c9273)
2022-05-10 11:11:03 +05:30
Priya Sehgal
78ce27ae27 sse s3 jumbo patch - all in one fixes for sse-s3 logic
original tests by Priya Sehgal <priya.sehgal@flipkart.com>:
rgw/s3_boto3: Tests added for SSE-S3 (GET, PUT, HEAD, MPU).

Additions by Casey Bodley <cbodley@redhat.com>:
add 'sse-s3' tag to test cases
sse: add _put_bucket_encryption() helper function
sse: document test cases with default bucket encryption
sse: expects encryption response header on put/get
sse: add 8MB default-encrypted upload
sse: test uploads that request x-amz-server-side-encryption=AES256

Lastly all my changes (Marcus Watts <mwatts@redhat.com>):

remove obsolete test - do it only in boto3 now.

Combine or rename duplicated function names.
Giving more than one test the same name is a Bad Thing(tm).

sse: expand test_bucket_policy_put_obj_enc, and _put_bucket_encryption

test_bucket_policy_put_obj_enc was testing too many things at once.
new tests:

* customer encryption and sse-s3: should fail
* customer encryption and sse-kms: should fail
* deny if not sse-s3: no-enc fails, sse-s3 succeeds.
* deny if not sse-s3: kms fails
  deny if not sse-kms: no-enc fails, sse-kms succeeds.
  deny if not sse-kms: s3 fails

_put_bucket_encryption was only testing sse-s3.
* test both these variations: sse-s3 and sse-kms

Note:
* these tests will fail on pre-sse-s3 ceph.

python3: comment out all boto3.set_stream_logger() calls
They made too much output.

Signed-off-by: Marcus Watts <mwatts@redhat.com>
(cherry picked from commit dd7cac25f5)
2022-05-02 17:02:11 -04:00
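A plausible shape for the _put_bucket_encryption() helper named above (the real helper lives in the test suite and may differ):

    def _put_bucket_encryption(client, bucket, sse_algorithm='AES256'):
        # install a default-encryption rule; 'aws:kms' covers sse-kms
        client.put_bucket_encryption(
            Bucket=bucket,
            ServerSideEncryptionConfiguration={'Rules': [{
                'ApplyServerSideEncryptionByDefault': {
                    'SSEAlgorithm': sse_algorithm}}]})
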
Soumya Koduri
73f458da05 Tag testcases failing on dbstore with 'fails_on_dbstore' attr
To be able to successfully run s3tests on dbstore backend in teuthology,
mark all the s3-tests currently failing on it with 'fails_on_dbstore' attr

Signed-off-by: Soumya Koduri <skoduri@redhat.com>
2022-04-28 23:32:22 +05:30
Soumya Koduri
2aa4a22c5a lifecycle: Adjust lc wait time
Adjust wait time to fix spurious failures reported -
https://github.com/ceph/ceph/pull/40703#issuecomment-1060811704
http://qa-proxy.ceph.com/teuthology/mbenjamin-2022-02-23_15:42:19-rgw-wip-rgwlc-noreset-distro-basic-smithi/6703349/teuthology.log

Signed-off-by: Soumya Koduri <skoduri@redhat.com>
2022-04-06 01:07:06 +05:30
Pragadeeswaran Sathyanarayanan
e60dda8e1d Improving check_grants reliability
Signed-off-by: Pragadeeswaran Sathyanarayanan <psathyan@redhat.com>
(cherry picked from commit 5f96a32045)
2022-03-03 12:53:42 -05:00
Soumya Koduri
3c52fa8bb9 lifecycle/deletemarker_expiration: Increase timer window
Increase wait time in test_lifecycle_deletemarker_expiration(..)
to avoid any spurious failure.

Signed-off-by: Soumya Koduri <skoduri@redhat.com>
2022-02-08 12:29:33 -05:00
Soumya Koduri
23cd521b5f Enable lifecycle tests
Add an option to configure lc debug interval and adjust lifecycle
tests sleep as per the value set.

Signed-off-by: Soumya Koduri <skoduri@redhat.com>
2022-02-04 11:02:28 -05:00
Soumya Koduri
e5617b783f Add testcases for rgw cloudtransition feature
Feature PR: https://github.com/ceph/ceph/pull/35100

Also ported lc testcases from boto2 to boto3

Signed-off-by: Soumya Koduri <skoduri@redhat.com>
2022-02-04 11:02:28 -05:00
Kalpesh Pandya
1af1880b7a test_sts: Changing code for proper cleanup
This solves: https://tracker.ceph.com/issues/53090

The solution is: we need to delete the user with the role_policy and
user_policy attached, which was causing the failure.

Signed-off-by: Kalpesh Pandya <kapandya@redhat.com>
2022-01-18 09:19:16 -05:00
gal salomon
0b8e554dd1 revert the arrow installation (it causes failures on some distros, such as fedora)
Signed-off-by: gal salomon <gal.salomon@gmail.com>
2022-01-17 12:12:18 -05:00
gal salomon
70b928269f fix output-serialization tests (upon comparing query results, redundant columns need to be removed)
skip the output-serial test. the results from both queries are not equal, thus it raises an assert. the problem seems to be the formatting before the comparison

remove test_output_serial_expressions until fixing the test

experiment with pyarrow for parquet testing; add arrow/parquet to bootstrap; install pyarrow and pandas for reading/writing parquet

Signed-off-by: gal salomon <gal.salomon@gmail.com>
2022-01-17 12:12:18 -05:00
gal salomon
cfa805efe9 merging master tests into parquet branch
Signed-off-by: gal salomon <gal.salomon@gmail.com>
2022-01-17 12:12:18 -05:00
gal salomon
090fbfbead fix for assert of error messages
Signed-off-by: gal salomon <gal.salomon@gmail.com>
2022-01-17 12:12:18 -05:00
gal salomon
4143ec30f3 remove redundant comma. the s3select engine produced a redundant result column before its last fix; s3tests should align with that
Signed-off-by: gal salomon <gal.salomon@gmail.com>
2022-01-17 12:12:18 -05:00
Or Friedmann
41006d68c2 Test expiration header for lc rules with tags
Signed-off-by: Or Friedmann <ofriedma@redhat.com>
2022-01-17 12:12:18 -05:00
胡玮文
ae981dd3a8 Test list_objects_v2 KeyCount with Delimiter
Test for: https://github.com/ceph/ceph/pull/37396

Signed-off-by: 胡玮文 <huww98@outlook.com>
2022-01-17 12:12:18 -05:00
galsalomon66
5e0cea1c07 adding datetime queries from #395
Signed-off-by: galsalomon66 <gal.salomon@gmail.com>
(cherry picked from commit 23be1160f5)
2021-12-20 10:43:49 -05:00
galsalomon66
d44879544d rollback to use like without escape syntax (explore valgrind issue)
Signed-off-by: galsalomon66 <gal.salomon@gmail.com>
(cherry picked from commit eef8d0fa67)
2021-12-20 10:43:31 -05:00
galsalomon66
7f21baaea6 modify the queries' syntax to like-escape; the query semantics are the same; results are the same; this is part of exploring a valgrind issue around the like operator
Signed-off-by: galsalomon66 <gal.salomon@gmail.com>
(cherry picked from commit f51101d752)
2021-12-20 10:43:19 -05:00
galsalomon66
3c1de4acc3 add output-serialization tests; add syntax-error tests; run_s3select_output should merge into run_s3select
Signed-off-by: galsalomon66 <gal.salomon@gmail.com>
(cherry picked from commit 749e29185b)
2021-12-20 10:43:07 -05:00
gal salomon
de683cd665 remove comments from the like-expressions
Signed-off-by: gal salomon <gal.salomon@gmail.com>
(cherry picked from commit 7c07bad930)
2021-12-20 10:42:48 -05:00
galsalomon66
6f9becf5b9 search for the cause of the valgrind issue; remove the like expression that may cause it
Signed-off-by: galsalomon66 <gal.salomon@gmail.com>
(cherry picked from commit 723853fd18)
2021-12-20 10:42:19 -05:00
Albin Antony
f4c19c9389 s3select: test progress stats
Signed-off-by: Albin Antony <aantony@redhat.com>
(cherry picked from commit a3447c50df)
2021-12-20 10:41:12 -05:00
Albin Antony
89363e8fef s3select: align s3select tests with ceph
Update s3-tests to handle the error-response (return 400, and error-description)

Signed-off-by: Albin Antony <aantony@redhat.com>
2021-12-20 10:40:50 -05:00
Casey Bodley
d1a38df407 check Owner/Initiator fields of ListMultipartUploads response
new test case test_list_multipart_upload_owner() uses two different
users to initiate multipart uploads, then tests that
list_multipart_uploads() shows the correct user ids and display names
for each upload

Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit 490d0a4c4f)
2021-11-18 14:32:20 -05:00
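A sketch of the owner check, assuming two clients authenticated as different users that can both reach the bucket (credentials illustrative):

    import boto3

    main = boto3.client('s3')              # first user's credentials
    alt = boto3.client('s3',               # second user's credentials
                       aws_access_key_id='ALTKEY',
                       aws_secret_access_key='ALTSECRET')

    main.create_multipart_upload(Bucket='shared', Key='obj1')
    alt.create_multipart_upload(Bucket='shared', Key='obj2')
    for up in main.list_multipart_uploads(Bucket='shared')['Uploads']:
        # Initiator/Owner should identify whoever started each upload
        print(up['Key'], up['Initiator']['ID'], up['Owner']['DisplayName'])
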
iraj465
e95809024c chore:Removes unused scaffolds
(cherry picked from commit d2a7ed88f1)
2021-09-15 17:00:26 -04:00
iraj465
93a6d89681 rgw/boto3_s3:Adds delete_objects key limit for list-objects-v2
(cherry picked from commit 459e3c870a)
2021-09-15 16:59:52 -04:00
iraj465
907331d90c chore:Bump the list-objects to paginator
(cherry picked from commit 20aa9aa071)
2021-09-15 16:59:42 -04:00
iraj465
6e426d3291 rgw/s3_boto3:Adds new delete_objects tests for checking key delete limit
(cherry picked from commit 64bdc3beec)
2021-09-08 16:00:35 -04:00
Pritha Srivastava
80763520dc rgw/sts: test to use role tag as iam:ResourceTag in
role's trust policy.

Signed-off-by: Pritha Srivastava <prsrivas@redhat.com>
(cherry picked from commit 71266fede9)
2021-09-07 16:46:26 -04:00
Pritha Srivastava
27364bf5f6 rgw/sts: test for s3:ResourceTag in role's permission policy
Signed-off-by: Pritha Srivastava <prsrivas@redhat.com>
(cherry picked from commit 5dcc3dd689)
2021-09-07 16:46:19 -04:00
Pritha Srivastava
0fa29f6c8e rgw/sts: adding test for aws:TagKeys that can be used in the
condition element of the role's trust policy and the role's
permission policy.

Signed-off-by: Pritha Srivastava <prsrivas@redhat.com>
(cherry picked from commit bf43a4a10a)
2021-09-07 16:46:10 -04:00
Pritha Srivastava
47fff36c36 rgw/sts: adding test for aws:PrincipalTag in role permission
policy.

Signed-off-by: Pritha Srivastava <prsrivas@redhat.com>
(cherry picked from commit 86fecf83b9)
2021-09-07 16:46:01 -04:00
Pritha Srivastava
95154bf0ce rgw/sts: adding test to check for aws:RequestTag
in the condition element of a role's trust policy.

Signed-off-by: Pritha Srivastava <prsrivas@redhat.com>
(cherry picked from commit 64068d7bf9)
2021-09-07 16:45:51 -04:00
Pritha Srivastava
e00e3a77b0 rgw/sts: adding tests for testing assumerolewithwebidentity
using 'sub' and 'azp' fields in the web token.

Signed-off-by: Pritha Srivastava <prsrivas@redhat.com>
(cherry picked from commit d466b7bd09)
2021-09-07 16:45:43 -04:00
Liu Lan
864902a3ad docs: fix wrong example in README.rst
Run: S3TEST_CONF=your.conf ./virtualenv/bin/nosetests s3tests.functional.test_s3:test_bucket_list_empty

But get an error: "ERROR: Failure: ValueError (No such test test_bucket_list_empty)".

Because test_bucket_list_empty is a test case in the s3tests_boto3 directory.

Signed-off-by: Liu Lan <liulan_yewu@cmss.chinamobile.com>
(cherry picked from commit 9ac8aef12b)
2021-09-07 16:45:32 -04:00
Mark Kogan
ec530d049c
Merge pull request #390 from mkogan1/multipart-extra-complete
test extra complete_multipart_upload()
2021-08-23 18:06:58 +03:00
Casey Bodley
f004493dcc object-lock: test changes between retention modes
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit 8662815ebe)
2021-08-12 13:46:22 -04:00
Casey Bodley
9eacf29594 nuke_prefixed_buckets waits up to 60 seconds for object locks to expire
objects locked in GOVERNANCE mode can be removed with
BypassGovernanceRetention, but some tests may leave an object locked in
COMPLIANCE mode, which blocks deletion until the retention period
expires

nuke_prefixed_buckets now checks the retention policy of objects that it
fails to delete with AccessDenied, and will wait up to 60 seconds for
locks to expire before retrying the deletes. if the wait exceeds 60
seconds, it instead throws an error without deleting the bucket

instead of doing this in nuke_prefixed_buckets, we could potentially
have each object-lock test case handle this manually, but that would
add a separate delay to each test case

Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit 9c4f15a47e)
2021-08-12 13:46:03 -04:00
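A sketch of the retry logic described above (helper name and 5-second poll interval assumed):

    import time
    from botocore.exceptions import ClientError

    def delete_when_unlocked(client, bucket, key, version, timeout=60):
        deadline = time.time() + timeout
        while True:
            try:
                return client.delete_object(Bucket=bucket, Key=key,
                                            VersionId=version)
            except ClientError as e:
                if e.response['Error']['Code'] != 'AccessDenied':
                    raise
                if time.time() > deadline:
                    raise RuntimeError('lock did not expire in %ss' % timeout)
                time.sleep(5)      # wait for the COMPLIANCE lock to lapse
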
Casey Bodley
8090ea4629 nuke_prefixed_buckets deletes objects in batches
speed up the cleanup by using delete_objects() with batches of 128

Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit bb995c2aeb)
2021-08-12 13:45:27 -04:00
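A sketch of the batched deletion; the batch size of 128 comes from the commit, the helper name is assumed:

    def delete_objects_in_batches(client, bucket, keys, batch=128):
        # one delete_objects() call per 128 keys, not one call per key
        for i in range(0, len(keys), batch):
            client.delete_objects(
                Bucket=bucket,
                Delete={'Objects': [{'Key': k} for k in keys[i:i + batch]]})
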
Rahul Dev Parashar
df426ea041 rgw/s3_boto3: Add tests for bucket encryption APIs
Tests are added for GetBucketEncryption, PutBucketEncryption,
and DeleteBucketEncryption APIs.

Related PR: https://github.com/ceph/ceph/pull/42222

Signed-off-by: Rahul Dev Parashar <rahul.dev@flipkart.com>
(cherry picked from commit 44643af0b0)
2021-08-12 13:45:16 -04:00
Kalpesh Pandya
057432b9f5 rgw/sts: Addition of new STS tests for testing
session policies along with role's permission policy
and bucket policy.

Signed-off-by: Kalpesh Pandya <kapandya@redhat.com>
Signed-off-by: Pritha Srivastava <prsrivas@redhat.com>
(cherry picked from commit 245a93326e)
2021-08-12 13:45:07 -04:00
iraj465
e22a689a44 rgw/s3_boto3:Adds lifecycle transition test for invalid iso8601 date
(cherry picked from commit ba9525f425)
2021-08-12 13:44:56 -04:00
Pei
f344fd6ca7 Fix: Bucket resource leak on cleanup
In the function nuke_prefixed_buckets, if an error is thrown while deleting buckets, the remaining buckets are left uncleaned.
That is a kind of resource leak on charged platforms, so we have to clear them manually.

I know the original code is meant to give the user a hint by raising an error. But the resource leak of leftover buckets is a little annoying.

This PR would skip the problem point and continue the teardown process. The last client error would be saved and re-raised after the loop completes.

Signed-off-by: Pei <huangp0600@126.com>
Signed-off-by: Pei <phuang1@dev-new-3-3854897.slc07.dev.ebayc3.com>
(cherry picked from commit 713012c178)
2021-08-12 13:38:40 -04:00
Ali Maredia
97fb5a7ee3 disable ssl verify by default
Signed-off-by: Ali Maredia <amaredia@redhat.com>
(cherry picked from commit b252638369)
2021-08-11 13:14:33 -04:00
Pragadeeswaran Sathyanarayanan
0fef1637ae Add support for disabling SSL certificate verification
Signed-off-by: Pragadeeswaran Sathyanarayanan <psathyan@redhat.com>
(cherry picked from commit ea3caaa76b)
2021-08-09 14:04:47 -04:00
Mark Houghton
a81ad3515e Add tests for issue 47586.
(cherry picked from commit 7fe0304e9c)
2021-08-05 11:29:25 -04:00
Danny Abukalam
0d7111ffc2 Add test to check retain date is in iso 8601 format
Signed-off-by: Danny Abukalam <danny@softiron.com>
(cherry picked from commit e229d1aaf6)
2021-07-28 09:38:30 -04:00
Casey Bodley
2ad7f81917 sts: test role policy around nonexistent objects
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit 7276bee050)
2021-06-28 11:12:08 -04:00
Casey Bodley
72957ece35 test that listed buckets have creation time
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit 66ced9af1d)
2021-06-28 11:09:59 -04:00
Or Friedmann
8423389033 test multipart upload with bucket policy
Signed-off-by: Or Friedmann <ofriedma@redhat.com>
(cherry picked from commit cfdf914c4b)
2021-06-07 14:05:58 -04:00
gal salomon
aea3f6b4c3 modifying tests to align with the change of the s3select compare sign (== -> =)
Signed-off-by: gal salomon <gal.salomon@gmail.com>
(cherry picked from commit 1572fbc87b)
2021-05-21 20:10:14 -04:00
Albin Antony
6f74f2af07 tests
Signed-off-by: Albin Antony <aantony@redhat.com>
(cherry picked from commit c6a4ab9d12)
2021-05-21 20:10:14 -04:00
Albin Antony
6b412b509b tests
Signed-off-by: Albin Antony <aantony@redhat.com>
(cherry picked from commit e7102e8cb0)
2021-05-21 20:10:14 -04:00
Albin Antony
87993f147d tests
Signed-off-by: Albin Antony <aantony@redhat.com>
(cherry picked from commit 60dd3444b3)
2021-05-21 20:10:14 -04:00
Albin Antony
ecc4cbc5c4 s3select: align s3-tests with new changes in s3select
Fix when-then, date functions and NULL; add escape and trim tests

Signed-off-by: Albin Antony <aantony@redhat.com>
(cherry picked from commit 4a86ebbe8b)
2021-05-21 20:09:48 -04:00
Mark Kogan
c4635f9463 test extra complete_multipart_upload()
after the 1st successful one, it should also return 200 OK

related tracker issue: https://tracker.ceph.com/issues/50141
related pr: https://github.com/ceph/ceph/pull/40594

Signed-off-by: Mark Kogan <mkogan@redhat.com>
2021-04-25 15:30:03 +03:00
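A sketch of the double-complete case (bucket name assumed; the bucket must exist):

    import boto3

    client = boto3.client('s3')
    bucket, key = 'testbucket', 'obj'
    mpu = client.create_multipart_upload(Bucket=bucket, Key=key)
    part = client.upload_part(Bucket=bucket, Key=key, PartNumber=1,
                              UploadId=mpu['UploadId'],
                              Body=b'x' * (5 * 1024 * 1024))
    complete = dict(Bucket=bucket, Key=key, UploadId=mpu['UploadId'],
                    MultipartUpload={'Parts': [{'ETag': part['ETag'],
                                                'PartNumber': 1}]})
    client.complete_multipart_upload(**complete)
    second = client.complete_multipart_upload(**complete)   # repeated call
    assert second['ResponseMetadata']['HTTPStatusCode'] == 200
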
galsalomon66
287acbc6e7 change test_s3select.py permission; add s3select attribute
Signed-off-by: galsalomon66 <gal.salomon@gmail.com>
(cherry picked from commit ea7d5fb563)
2021-01-14 00:08:50 -05:00
galsalomon66
68f1939942 reduce object size for test_like_expression, it may cause timeouts in teuthology runs
Signed-off-by: galsalomon66 <gal.salomon@gmail.com>
2020-12-22 00:32:18 -05:00
galsalomon66
a0aa55d4ae rename charlength and character_length function names
Signed-off-by: galsalomon66 <gal.salomon@gmail.com>
2020-12-22 00:32:07 -05:00
Or Friedmann
e3d31ef6eb Add test for head bucket usage headers
Signed-off-by: Or Friedmann <ofriedma@redhat.com>
(cherry picked from commit ef8f65d917)
2020-12-03 16:04:29 -05:00
Or Friedmann
3fe80dc877 Add test for GetUsage api
Signed-off-by: Or Friedmann <ofriedma@redhat.com>
(cherry picked from commit f4f7812efd)
2020-12-03 16:04:14 -05:00
Albin Antony
a5108a7d69 s3select tests for coalesce and case
Signed-off-by: Albin Antony <aantony@redhat.com>
2020-12-03 01:04:02 -05:00
gal salomon
3be10d722f each newly uploaded test file gets a unique (random) name, and the uploaded file's content is verified
Signed-off-by: gal salomon <gal.salomon@gmail.com>
2020-12-03 01:03:55 -05:00
Albin Antony
adad16121f s3select predicate tests
Signed-off-by: Albin Antony <aantony@redhat.com>
2020-12-03 00:59:32 -05:00
root
dd163877d4 Webidentity Test addition to test_sts.py
Signed-off-by: Kalpesh Pandya <kapandya@redhat.com>

Few main changes/additions:
1. Webidentity test addition to test_sts.py.
2. A function named check_webidentity() added to __init__.py in order to check for section presence.
3. Few lines shifted from setup() to get_iam_client() to make them execute only when sts-tests run.
4. Documentation update (for sts section)
5. Changes in s3tests.conf.SAMPLE regarding sts sections
2020-11-25 20:34:32 -05:00
Matt Benjamin
86bc2a191f Add test for HeadBucket on a non-existent bucket
n.b., RGW does not send a response document for this operation,
which seems consistent with
https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadBucket.html

Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
2020-11-25 00:06:12 -05:00
Ilsoo Byun
3698d093bf Check if invalid payload is added after serving errordoc
Signed-off-by: Ilsoo Byun <ilsoobyun@linecorp.com>
(cherry picked from commit c08de72d55)
2020-11-19 10:24:30 -05:00
Casey Bodley
65b067486e test bucket recreation with different acls
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit f6218fa1de)
2020-10-08 13:32:53 -04:00
root
0e36699571 STS issue fix (https://tracker.ceph.com/issues/47588)
This is the fix for the issue reported (https://tracker.ceph.com/issues/47588). The issue was with the argument which was passed to the function. After removing that argument (as it's already an optional argument) the issue is fixed.

Signed-off-by: Kalpesh Pandya <kapandya@redhat.com>
(cherry picked from commit daf9062a22)
2020-10-08 13:32:53 -04:00
Matt Benjamin
3dc4ff5da8 add noncurrent version expiration rule w/tag filter
Create 10 object versions (9 noncurrent).  Install a noncurrent
version expiration at 4 days.  Verify that 10 versions exist at
T+20, and only 1 (current) at T+60.

Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
2020-10-01 11:30:33 -04:00
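A sketch of the rule being installed (tag key/value and bucket name assumed):

    import boto3

    client = boto3.client('s3')
    client.put_bucket_lifecycle_configuration(
        Bucket='versioned-bucket',
        LifecycleConfiguration={'Rules': [{
            'ID': 'noncurrent-expiry',
            'Status': 'Enabled',
            'Filter': {'Tag': {'Key': 'key1', 'Value': 'value1'}},
            'NoncurrentVersionExpiration': {'NoncurrentDays': 4}}]})
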
Matt Benjamin
c9792cb975 add lifecycle expiration test mixing 2-tag filter w/versioning
By design this test duplicates test_lifecycle_expiration_tags2,
but enables object versioning on the bucket.

The tests install a rule which requires -2- tags to be matched,
and creates 2 objects, one matching only 1 of the required tags,
the other matching both.  Only the 2nd object should expire.

Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
2020-10-01 11:30:24 -04:00
Matt Benjamin
b930f194e4 add tests for lifecycle expiration w/1 and 2 object tags
Note that the 1-tag case contains a filter prefix, which exposes
an apparent bug parsing Filter when it contains a Prefix element
and a single Tag element (without And).

Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
2020-10-01 11:30:15 -04:00
Matt Benjamin
253b63aa11 fix lifecycle expiration days: 0
In fact test_lifecycle_expiration_days0 should fail, as 0-day
expiration is permitted for transition rules but not expiration
rules.

Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
2020-10-01 11:30:08 -04:00
Matt Benjamin
6bd75be1d6 s/test_set_tagging/test_set_bucket_tagging/;
The test exercises bucket tagging, has nothing to do with object
tagging (clarity).

Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
2020-10-01 11:30:01 -04:00
Matt Benjamin
61804bcf91 fix test_lifecycle_expiration_header_{put,head}
Primarily fixes the expiration header() verifier function
check_lifecycle_expiration_header, but also cleans up
prefix handling in setup_lifecycle_expiration().

Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
2020-10-01 11:29:52 -04:00
Matt Benjamin
ea9f07a2bf fix and remark on test_lifecycle_expiration_days0
1. fix a python3-related KeyError exception

2. note here:  AWS documentation includes examples of "Days 0"
   in use, but boto3 will not accept them; this is why the days0
   test currently sets Days 1

3. delay increased to 30s, to avoid occasional failures due to
   jitter

Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
2020-10-01 11:29:45 -04:00
root
7d14452035 STS Tests Files and modification in __init__.py
Signed-off-by: Kalpesh Pandya <kapandya@redhat.com>
2020-09-16 01:40:33 -04:00
gal salomon
6ea6cb6467 add filter for s3select tests
Signed-off-by: gal salomon <gal.salomon@gmail.com>
(cherry picked from commit fce9a52ef4)
2020-07-07 07:38:41 -04:00
gal salomon
10c801a2e0 fix comments; remove unused imports; enable test for projection; use get_client()
(cherry picked from commit 72e251ed69)
2020-06-25 13:42:23 -04:00
gal salomon
9d670846a3 adding random-generated tests, one for the where clause, the second for projection. a random arithmetical expression is generated and used to build the s3-select query; the result is compared to a python engine
(cherry picked from commit fb39ac4829)
2020-06-25 13:42:23 -04:00
gal salomon
bb801b8625 python linter; replace assert with assert_equal; add complex query test (sum, count, where); add test-schema
(cherry picked from commit e006dd4753)
2020-06-25 13:42:23 -04:00
gal salomon
652619f46f using config parameters
(cherry picked from commit 1a9d3677f7)
2020-06-25 13:42:23 -04:00
gal salomon
b1ddeee6eb add tests to validate csv-header-info functionality is correct
(cherry picked from commit 5dc8bc75ab)
2020-06-25 13:42:23 -04:00
gal salomon
ca9cb5cc2c adding test cases for processing CSV objects with different CSV definitions; validate null, escape-rules and quotes are processed correctly
(cherry picked from commit 94b1986228)
2020-06-25 13:42:23 -04:00
gal salomon
e010c4cfec adding utcnow test
(cherry picked from commit 4c7c279f70)
2020-06-25 13:42:23 -04:00
gal salomon
edea887e9c adding tests for date-time functionalities
(cherry picked from commit 5925f0fb3f)
2020-06-25 13:42:23 -04:00
gal salomon
cd4f7e1a7a add complex expression tests for nested function calls, and for different where-clauses that create the same group of values
(cherry picked from commit c1bce6ac70)
2020-06-25 13:42:23 -04:00
gal salomon
048f9297a1 adding test for detection of cyclic reference to alias
(cherry picked from commit d543619e71)
2020-06-25 13:42:23 -04:00
gal salomon
8bd6158054 adding aggregation tests
(cherry picked from commit f42872fd53)
2020-06-25 13:42:23 -04:00
gal salomon
aca68a9d39 adding alias test case
(cherry picked from commit 74daf86fe5)
2020-06-25 13:42:23 -04:00
gal salomon
537431c686 commit first tests for s3select and initial framework
(cherry picked from commit dac38694ef)
2020-06-25 13:42:23 -04:00
Matt Benjamin
8ca96c4519 fix/restore test_lifecycle_expiration checks
Commit bf956df71e, adding
listobjectsv2 tests, inadvertently changed the v1
test_lifecycle_expiration test, which it had copied to
create a v2 version.  Revert this.

Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
2020-05-26 11:07:53 -04:00
Kefu Chai
34040769ff bootstrap,requirements.txt: bump up setuptools and requests
Fixes: https://tracker.ceph.com/issues/45691
Signed-off-by: Kefu Chai <kchai@redhat.com>
2020-05-26 09:08:26 -04:00
Casey Bodley
8ebb504159 bootstrap: remove deprecated virtualenv options
this fails on Ubuntu 20.04:

> virtualenv: error: unrecognized arguments: --no-site-packages --distribute

according to `virtualenv -h`:

>   --no-site-packages    DEPRECATED. Retained only for backward compatibility.
>                         Not having access to global site-packages is now the
>                         default behavior.
>   --distribute          DEPRECATED. Retained only for backward compatibility.
>                         This option has no effect.

Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit a0c15c80ad)
2020-05-21 11:09:26 -04:00
Abhishek L
9092d1ac61
Merge pull request #343 from theanalyst/ceph-master-public-buckets-qa
Ceph master public buckets backport

Reviewed-By: Casey Bodley <cbodley@redhat.com>
2020-03-30 15:25:57 +02:00
Abhishek Lekshmanan
7b3df700cc fix ignore public acls with py3 compatible code
Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
(cherry picked from commit 4d675235dd)
2020-03-26 16:28:12 +01:00
Abhishek Lekshmanan
4fc133b1b5 add tests for ignore public acls
Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
(cherry picked from commit 3b1571ace6)
2020-03-26 16:28:12 +01:00
Abhishek Lekshmanan
0a495efc8c add test for block public policy
Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
(cherry picked from commit b4516725f2)
2020-03-26 16:28:12 +01:00
Abhishek Lekshmanan
a48cf75391 use empty bodies for canned acl tests with BlockPublicAccess
This should be a temporary workaround until #42208 is fixed

Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
(cherry picked from commit d02c1819f6)
2020-03-26 16:28:12 +01:00
Abhishek Lekshmanan
a20e0d47f2 remove redundant get_client calls
Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
(cherry picked from commit 4996430709)
2020-03-26 16:28:12 +01:00
Abhishek Lekshmanan
19947bd541 add ability to get svc client for s3config set of apis
Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
(cherry picked from commit 6d3f574a8e)
2020-03-26 16:26:39 +01:00
Abhishek Lekshmanan
94168194fd add tests for public access configuration
Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
(cherry picked from commit 1ad38530e0)
2020-03-26 16:26:19 +01:00
Abhishek Lekshmanan
0e3084c995 add a few test cases for public bucket policies
Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
(cherry picked from commit 3f9d31c6c7)
2020-03-26 16:24:14 +01:00
Abhishek Lekshmanan
1d39198872 boto3: add bucket policy status checks for public ACLs
Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
(cherry picked from commit 02b1d50ca7)
2020-03-26 16:23:46 +01:00
24 changed files with 12990 additions and 4977 deletions

.gitignore

@ -10,5 +10,6 @@
/*.egg-info
/virtualenv
/venv
config.yaml

README.rst

@ -6,14 +6,10 @@ This is a set of unofficial Amazon AWS S3 compatibility
tests, that can be useful to people implementing software
that exposes an S3-like API. The tests use the Boto2 and Boto3 libraries.
The tests use the Nose test framework. To get started, ensure you have
the ``virtualenv`` software installed; e.g. on Debian/Ubuntu::
The tests use the Tox tool. To get started, ensure you have the ``tox``
software installed; e.g. on Debian/Ubuntu::
sudo apt-get install python-virtualenv
and then run::
./bootstrap
sudo apt-get install tox
You will need to create a configuration file with the location of the
service and two different credentials. A sample configuration file named
@ -22,29 +18,25 @@ used to run the s3 tests on a Ceph cluster started with vstart.
Once you have that file copied and edited, you can run the tests with::
S3TEST_CONF=your.conf ./virtualenv/bin/nosetests
S3TEST_CONF=your.conf tox
You can specify which directory of tests to run::
S3TEST_CONF=your.conf ./virtualenv/bin/nosetests s3tests.functional
S3TEST_CONF=your.conf tox s3tests_boto3/functional
You can specify which file of tests to run::
S3TEST_CONF=your.conf ./virtualenv/bin/nosetests s3tests.functional.test_s3
S3TEST_CONF=your.conf tox s3tests_boto3/functional/test_s3.py
You can specify which test to run::
S3TEST_CONF=your.conf ./virtualenv/bin/nosetests s3tests.functional.test_s3:test_bucket_list_empty
To gather a list of tests being run, use the flags::
-v --collect-only
S3TEST_CONF=your.conf tox s3tests_boto3/functional/test_s3.py::test_bucket_list_empty
Some tests have attributes set based on their current reliability and
things like AWS not enforcing their spec strictly. You can filter tests
based on their attributes::
S3TEST_CONF=aws.conf ./virtualenv/bin/nosetests -a '!fails_on_aws'
S3TEST_CONF=aws.conf tox -- -m 'not fails_on_aws'
Most of the tests have both Boto3 and Boto2 versions. Tests written in
Boto2 are in the ``s3tests`` directory. Tests written in Boto3 are
@ -52,5 +44,59 @@ located in the ``s3test_boto3`` directory.
You can run only the boto3 tests with::
S3TEST_CONF=your.conf ./virtualenv/bin/nosetests -v -s -A 'not fails_on_rgw' s3tests_boto3.functional
S3TEST_CONF=your.conf tox -- -m 'not fails_on_aws' s3tests_boto3/functional
========================
STS compatibility tests
========================
This section contains some basic tests for the AssumeRole, GetSessionToken and AssumeRoleWithWebIdentity APIs. The test file is located under ``s3tests_boto3/functional``.
To run the STS tests, the vstart cluster should be started with the following parameter (in addition to any parameters already used with it)::
vstart.sh -o rgw_sts_key=abcdefghijklmnop -o rgw_s3_auth_use_sts=true
Note that the ``rgw_sts_key`` can be set to anything that is 128 bits in length.
After the cluster is up the following command should be executed::
radosgw-admin caps add --tenant=testx --uid="9876543210abcdef0123456789abcdef0123456789abcdef0123456789abcdef" --caps="roles=*"
You can run only the STS tests (all three APIs) with::
S3TEST_CONF=your.conf tox s3tests_boto3/functional/test_sts.py
You can filter tests based on their attributes. There is an attribute named ``test_of_sts`` to run the AssumeRole and GetSessionToken tests, and ``webidentity_test`` to run the AssumeRoleWithWebIdentity tests. If you want to execute only the ``test_of_sts`` tests, you can apply that filter as below::
S3TEST_CONF=your.conf tox -- -m test_of_sts s3tests_boto3/functional/test_sts.py
To run ``webidentity_test`` you'll need to have Keycloak running.
In order to run any STS test you'll need to add an "iam" section to the config file. For further reference on how your config file should look, check ``s3tests.conf.SAMPLE``.
========================
IAM policy tests
========================
This is a set of IAM policy tests.
This section covers tests for user policies such as Put, Get, List, and Delete, user policies with S3 actions, conflicting user policies, etc.
These tests use the Boto3 library. Tests are written in the ``s3tests_boto3`` directory.
These IAM policy tests use two users, with the profile names "iam" and "s3 alt", as mentioned in s3tests.conf.SAMPLE.
If the Ceph cluster is started with vstart, these two users are created as part of vstart with the same access key, secret key, etc. as mentioned in s3tests.conf.SAMPLE.
Of those two users, the "iam" user has the capability --caps=user-policy=* and the "s3 alt" user has no capabilities.
Adding the above capability to the "iam" user is also taken care of by vstart (if the Ceph cluster is started with vstart).
To run these tests, create a configuration file with the sections "iam" and "s3 alt"; refer to s3tests.conf.SAMPLE.
Once you have that configuration file copied and edited, you can run all the tests with::
S3TEST_CONF=your.conf tox s3tests_boto3/functional/test_iam.py
You can also specify a specific test to run::
S3TEST_CONF=your.conf tox s3tests_boto3/functional/test_iam.py::test_put_user_policy
Some tests have attributes set such as "fails_on_rgw".
You can filter tests based on their attributes::
S3TEST_CONF=your.conf tox -- s3tests_boto3/functional/test_iam.py -m 'not fails_on_rgw'
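As a rough illustration of the flow these user-policy tests exercise (the endpoint, credentials, and user name below are hypothetical placeholders, not the suite's fixtures), a minimal put/get/delete round-trip with Boto3 looks like::

    import json
    import boto3

    # Hypothetical endpoint and credentials; the real tests read these from
    # the [iam] section of the file pointed to by S3TEST_CONF.
    iam = boto3.client('iam',
                       endpoint_url='http://localhost:8000',
                       aws_access_key_id='ABCDEFGHIJKLMNOPQRST',
                       aws_secret_access_key='abcdefghijklmnopqrstuvwxyzabcdefghijklmn')

    policy = json.dumps({
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Allow",
                       "Action": "s3:ListBucket",
                       "Resource": "arn:aws:s3:::*"}],
    })

    # Put, get, then delete a user policy -- the basic lifecycle these tests cover.
    iam.put_user_policy(UserName='s3-alt-user', PolicyName='AllowList',
                        PolicyDocument=policy)
    print(iam.get_user_policy(UserName='s3-alt-user', PolicyName='AllowList'))
    iam.delete_user_policy(UserName='s3-alt-user', PolicyName='AllowList')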

bootstrap

@ -1,76 +0,0 @@
#!/bin/bash
set -e
virtualenv="virtualenv"
declare -a packages
source /etc/os-release
case "$ID" in
debian|ubuntu|devuan)
packages=(debianutils python3-pip python3-virtualenv python3-dev libevent-dev libffi-dev libxml2-dev libxslt-dev zlib1g-dev)
for package in ${packages[@]}; do
if [ "$(dpkg --status -- $package 2>/dev/null|sed -n 's/^Status: //p')" != "install ok installed" ]; then
# add a space after old values
missing="${missing:+$missing }$package"
fi
done
if [ -n "$missing" ]; then
echo "$0: missing required DEB packages. Installing via sudo." 1>&2
sudo apt-get -y install $missing
fi
;;
centos|fedora|rhel|ol|virtuozzo)
packages=(which python3-virtualenv python36-devel libevent-devel libffi-devel libxml2-devel libxslt-devel zlib-devel)
for package in ${packages[@]}; do
# When the package is python36-devel we change it to python3-devel on Fedora
if [[ ${package} == "python36-devel" && -f /etc/fedora-release ]]; then
package=python36
fi
if [ "$(rpm -qa $package 2>/dev/null)" == "" ]; then
missing="${missing:+$missing }$package"
fi
done
if [ -n "$missing" ]; then
echo "$0: Missing required RPM packages: ${missing}." 1>&2
sudo yum -y install $missing
fi
;;
opensuse*|suse|sles)
packages=(which python3-virtualenv python3-devel libev-devel libffi-devel libxml2-devel libxslt-devel zlib-devel)
for package in ${packages[@]}; do
if [ "$(rpm -qa $package 2>/dev/null)" == "" ]; then
missing="${missing:+$missing }$package"
fi
if [ -n "$missing" ]; then
echo "$0: Missing required RPM packages: ${missing}." 1>&2
sudo zypper --non-interactive install --no-recommends $missing
fi
done
;;
*)
echo "Bootstrap script does not support this distro yet, consider adding the packages"
exit 1
esac
# s3-tests only works on python 3.6 not newer versions of python3
${virtualenv} --python=$(which python3.6) --no-site-packages --distribute virtualenv
# avoid pip bugs
./virtualenv/bin/pip3 install --upgrade pip
# slightly old version of setuptools; newer fails w/ requests 0.14.0
./virtualenv/bin/pip3 install setuptools==32.3.1
./virtualenv/bin/pip3 install -r requirements.txt
# forbid setuptools from using the network because it'll try to use
# easy_install, and we really wanted pip; next line will fail if pip
# requirements.txt does not match setup.py requirements -- sucky but
# good enough for now
./virtualenv/bin/python3 setup.py develop

pytest.ini (new file)

@ -0,0 +1,51 @@
[pytest]
markers =
abac_test
appendobject
auth_aws2
auth_aws4
auth_common
bucket_policy
bucket_encryption
checksum
cloud_transition
encryption
fails_on_aws
fails_on_dbstore
fails_on_dho
fails_on_mod_proxy_fcgi
fails_on_rgw
fails_on_s3
fails_with_subdomain
group
group_policy
iam_account
iam_cross_account
iam_role
iam_tenant
iam_user
lifecycle
lifecycle_expiration
lifecycle_transition
list_objects_v2
object_lock
role_policy
session_policy
s3select
s3website
s3website_routing_rules
s3website_redirect_location
sns
sse_s3
storage_class
tagging
test_of_sts
token_claims_trust_policy_test
token_principal_tag_role_policy_test
token_request_tag_trust_policy_test
token_resource_tags_test
token_role_tags_test
token_tag_keys_test
user_policy
versioning
webidentity_test
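Registering these markers is what lets the suite replace the old nose ``@attr`` tags and filter with ``-m`` without pytest warning about unknown marks. A minimal sketch (a hypothetical test file, not part of the suite):

import pytest

# Markers applied per test; both names are registered in pytest.ini above.
@pytest.mark.versioning
@pytest.mark.fails_on_aws
def test_versioned_read():
    assert True

# Select only versioning tests not expected to fail on AWS:
#   S3TEST_CONF=your.conf tox -- -m 'versioning and not fails_on_aws'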

requirements.txt

@ -1,12 +1,15 @@
PyYAML
nose >=1.0.0
boto >=2.6.0
boto3 >=1.0.0
# botocore-1.28 broke v2 signatures, see https://tracker.ceph.com/issues/58059
botocore <1.28.0
munch >=2.0.0
# 0.14 switches to libev, that means bootstrap needs to change too
gevent >=1.0
isodate >=0.4.4
requests >=0.14.0
pytz >=2011k
requests >=2.23.0
pytz
httplib2
lxml
pytest
tox

s3tests.conf.SAMPLE

@ -10,12 +10,23 @@ port = 8000
## say "False" to disable TLS
is_secure = False
## say "False" to disable SSL Verify
ssl_verify = False
[fixtures]
## all the buckets created will start with this prefix;
## {random} will be filled with random characters to pad
## the prefix to 30 characters long, and avoid collisions
bucket prefix = yournamehere-{random}-
# all the iam account resources (users, roles, etc) created
# will start with this name prefix
iam name prefix = s3-tests-
# all the iam account resources (users, roles, etc) created
# will start with this path prefix
iam path prefix = /s3-tests/
[s3 main]
# main display_name set in vstart.sh
display_name = M. Tester
@ -38,6 +49,12 @@ secret_key = h7GhxuBLTrlhVUyxSPUKUV8r/2EI4ngqJxD7iBdBYLhwluN30JaT3Q==
## replace with key id obtained when secret is created, or delete if KMS not tested
#kms_keyid = 01234567-89ab-cdef-0123-456789abcdef
## Storage classes
#storage_classes = "LUKEWARM, FROZEN"
## Lifecycle debug interval (default: 10)
#lc_debug_interval = 20
[s3 alt]
# alt display_name set in vstart.sh
display_name = john.doe
@ -53,6 +70,37 @@ access_key = NOPQRSTUVWXYZABCDEFG
# alt AWS secret key set in vstart.sh
secret_key = nopqrstuvwxyzabcdefghijklmnabcdefghijklm
#[s3 cloud]
## to run the testcases with "cloud_transition" attribute.
## Note: the waiting time may have to be tweaked depending on
## the I/O latency to the cloud endpoint.
## host set for cloud endpoint
# host = localhost
## port set for cloud endpoint
# port = 8001
## say "False" to disable TLS
# is_secure = False
## cloud endpoint credentials
# access_key = 0555b35654ad1656d804
# secret_key = h7GhxuBLTrlhVUyxSPUKUV8r/2EI4ngqJxD7iBdBYLhwluN30JaT3Q==
## storage class configured as cloud tier on local rgw server
# cloud_storage_class = CLOUDTIER
## Below are optional -
## Above configured cloud storage class config options
# retain_head_object = false
# target_storage_class = Target_SC
# target_path = cloud-bucket
## another regular storage class to test multiple transition rules,
# storage_class = S1
[s3 tenant]
# tenant display_name set in vstart.sh
display_name = testx$tenanteduser
@ -68,3 +116,56 @@ secret_key = opqrstuvwxyzabcdefghijklmnopqrstuvwxyzab
# tenant email set in vstart.sh
email = tenanteduser@example.com
# tenant name
tenant = testx
#following section needs to be added for all sts-tests
[iam]
#used for iam operations in sts-tests
#email from vstart.sh
email = s3@example.com
#user_id from vstart.sh
user_id = 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
#access_key from vstart.sh
access_key = ABCDEFGHIJKLMNOPQRST
#secret_key vstart.sh
secret_key = abcdefghijklmnopqrstuvwxyzabcdefghijklmn
#display_name from vstart.sh
display_name = youruseridhere
# iam account root user for iam_account tests
[iam root]
access_key = AAAAAAAAAAAAAAAAAAaa
secret_key = aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
user_id = RGW11111111111111111
email = account1@ceph.com
# iam account root user in a different account than [iam root]
[iam alt root]
access_key = BBBBBBBBBBBBBBBBBBbb
secret_key = bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
user_id = RGW22222222222222222
email = account2@ceph.com
#following section needs to be added when you want to run Assume Role With Webidentity test
[webidentity]
#used for assume role with web identity test in sts-tests
#all parameters will be obtained from ceph/qa/tasks/keycloak.py
token=<access_token>
aud=<obtained after introspecting token>
sub=<obtained after introspecting token>
azp=<obtained after introspecting token>
user_token=<access token for a user, with attribute Department=[Engineering, Marketing]>
thumbprint=<obtained from x509 certificate>
KC_REALM=<name of the realm>
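The sample is plain INI, including section names with spaces such as ``[s3 main]``; a minimal sketch of reading it standalone (hypothetical — the suite has its own loader keyed off the S3TEST_CONF environment variable):

import configparser
import os

# Hypothetical standalone reader; section and option names match the sample above.
cfg = configparser.ConfigParser()
cfg.read(os.environ['S3TEST_CONF'])
print(cfg.get('s3 main', 'access_key'))
print(cfg.get('fixtures', 'bucket prefix'))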

s3tests/functional/__init__.py

@ -7,6 +7,7 @@ import itertools
import os
import random
import string
import pytest
from http.client import HTTPConnection, HTTPSConnection
from urllib.parse import urlparse
@ -370,6 +371,15 @@ def teardown():
# remove our buckets here also, to avoid littering
nuke_prefixed_buckets(prefix=prefix)
@pytest.fixture(scope="package")
def configfile():
setup()
yield config
@pytest.fixture(autouse=True)
def setup_teardown(configfile):
yield
teardown()
bucket_counter = itertools.count(1)
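The package-scoped ``configfile`` fixture runs ``setup()`` once per test package, while the autouse ``setup_teardown`` fixture nukes the prefixed buckets after every test; a test module opts in simply by importing the two names (a sketch, assuming the ``s3tests.functional`` module path):

# Hypothetical test module: importing the fixtures is enough for pytest to
# discover them -- configfile runs setup() once, and the autouse
# setup_teardown fixture calls teardown() (bucket cleanup) after each test.
from s3tests.functional import configfile, setup_teardown, get_new_bucket

def test_bucket_roundtrip():
    bucket = get_new_bucket()   # prefixed name, removed by teardown()
    assert bucket.name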

File diff suppressed because it is too large.

s3tests/functional/test_s3.py

@ -7,7 +7,7 @@ import datetime
import time
import email.utils
import isodate
import nose
import pytest
import operator
import socket
import ssl
@ -27,16 +27,14 @@ import re
from collections import defaultdict
from urllib.parse import urlparse
from nose.tools import eq_ as eq
from nose.plugins.attrib import attr
from nose.plugins.skip import SkipTest
from . import utils
from .utils import assert_raises
from .policy import Policy, Statement, make_json_policy
from . import (
configfile,
setup_teardown,
nuke_prefixed_buckets,
get_new_bucket,
get_new_bucket_name,
@ -53,9 +51,9 @@ from . import (
def check_access_denied(fn, *args, **kwargs):
e = assert_raises(boto.exception.S3ResponseError, fn, *args, **kwargs)
eq(e.status, 403)
eq(e.reason, 'Forbidden')
eq(e.error_code, 'AccessDenied')
assert e.status == 403
assert e.reason == 'Forbidden'
assert e.error_code == 'AccessDenied'
def check_bad_bucket_name(name):
"""
@ -63,9 +61,9 @@ def check_bad_bucket_name(name):
that the request fails because of an invalid bucket name.
"""
e = assert_raises(boto.exception.S3ResponseError, get_new_bucket, targets.main.default, name)
eq(e.status, 400)
eq(e.reason.lower(), 'bad request') # some proxies vary the case
eq(e.error_code, 'InvalidBucketName')
assert e.status == 400
assert e.reason.lower() == 'bad request' # some proxies vary the case
assert e.error_code == 'InvalidBucketName'
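The suite keeps its own ``assert_raises`` helper through the nose-to-pytest migration; the same check written against pytest's native context manager would look like this (a sketch of the equivalent idiom, not how the suite actually does it):

import pytest
import boto.exception

def check_access_denied_native(fn, *args, **kwargs):
    # pytest.raises replaces the hand-rolled assert_raises helper.
    with pytest.raises(boto.exception.S3ResponseError) as excinfo:
        fn(*args, **kwargs)
    assert excinfo.value.status == 403
    assert excinfo.value.error_code == 'AccessDenied'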
def _create_keys(bucket=None, keys=[]):
"""
@ -94,20 +92,16 @@ def _get_alt_connection():
# Breaks DNS with SubdomainCallingFormat
@attr('fails_with_subdomain')
@attr(resource='bucket')
@attr(method='put')
@attr(operation='create w/! in name')
@attr(assertion='fails with subdomain')
@pytest.mark.fails_with_subdomain
def test_bucket_create_naming_bad_punctuation():
# characters other than [a-zA-Z0-9._-]
check_bad_bucket_name('alpha!soup')
def check_versioning(bucket, status):
try:
eq(bucket.get_versioning_status()['Versioning'], status)
assert bucket.get_versioning_status()['Versioning'] == status
except KeyError:
eq(status, None)
assert status == None
# amazon is eventual consistent, retry a bit if failed
def check_configure_versioning_retry(bucket, status, expected_string):
@ -126,13 +120,10 @@ def check_configure_versioning_retry(bucket, status, expected_string):
time.sleep(1)
eq(expected_string, read_status)
assert expected_string == read_status
@attr(resource='object')
@attr(method='create')
@attr(operation='create versioned object, read not exist null version')
@attr(assertion='read null version behaves correctly')
@attr('versioning')
@pytest.mark.versioning
@pytest.mark.fails_on_dbstore
def test_versioning_obj_read_not_exist_null():
bucket = get_new_bucket()
check_versioning(bucket, None)
@ -146,15 +137,12 @@ def test_versioning_obj_read_not_exist_null():
key.set_contents_from_string(content)
key = bucket.get_key(objname, version_id='null')
eq(key, None)
assert key == None
@attr(resource='object')
@attr(method='put')
@attr(operation='append object')
@attr(assertion='success')
@attr('fails_on_aws')
@attr('fails_with_subdomain')
@attr('appendobject')
@pytest.mark.fails_on_aws
@pytest.mark.fails_with_subdomain
@pytest.mark.appendobject
@pytest.mark.fails_on_dbstore
def test_append_object():
bucket = get_new_bucket()
key = bucket.new_key('foo')
@ -166,19 +154,16 @@ def test_append_object():
res = _make_raw_request(host=s3.main.host, port=s3.main.port, method='PUT', path=path1, body='abc', secure=s3.main.is_secure)
path2 = path + '&append&position=3'
res = _make_raw_request(host=s3.main.host, port=s3.main.port, method='PUT', path=path2, body='abc', secure=s3.main.is_secure)
eq(res.status, 200)
eq(res.reason, 'OK')
assert res.status == 200
assert res.reason == 'OK'
key = bucket.get_key('foo')
eq(key.size, 6)
assert key.size == 6
@attr(resource='object')
@attr(method='put')
@attr(operation='append to normal object')
@attr(assertion='fails 409')
@attr('fails_on_aws')
@attr('fails_with_subdomain')
@attr('appendobject')
@pytest.mark.fails_on_aws
@pytest.mark.fails_with_subdomain
@pytest.mark.appendobject
@pytest.mark.fails_on_dbstore
def test_append_normal_object():
bucket = get_new_bucket()
key = bucket.new_key('foo')
@ -189,16 +174,13 @@ def test_append_normal_object():
path = o.path + '?' + o.query
path = path + '&append&position=3'
res = _make_raw_request(host=s3.main.host, port=s3.main.port, method='PUT', path=path, body='abc', secure=s3.main.is_secure)
eq(res.status, 409)
assert res.status == 409
@attr(resource='object')
@attr(method='put')
@attr(operation='append position not right')
@attr(assertion='fails 409')
@attr('fails_on_aws')
@attr('fails_with_subdomain')
@attr('appendobject')
@pytest.mark.fails_on_aws
@pytest.mark.fails_with_subdomain
@pytest.mark.appendobject
@pytest.mark.fails_on_dbstore
def test_append_object_position_wrong():
bucket = get_new_bucket()
key = bucket.new_key('foo')
@ -210,17 +192,13 @@ def test_append_object_position_wrong():
res = _make_raw_request(host=s3.main.host, port=s3.main.port, method='PUT', path=path1, body='abc', secure=s3.main.is_secure)
path2 = path + '&append&position=9'
res = _make_raw_request(host=s3.main.host, port=s3.main.port, method='PUT', path=path2, body='abc', secure=s3.main.is_secure)
eq(res.status, 409)
eq(int(res.getheader('x-rgw-next-append-position')), 3)
assert res.status == 409
assert int(res.getheader('x-rgw-next-append-position')) == 3
# TODO rgw log_bucket.set_as_logging_target() gives 403 Forbidden
# http://tracker.newdream.net/issues/984
@attr(resource='bucket.log')
@attr(method='put')
@attr(operation='set/enable/disable logging target')
@attr(assertion='operations succeed')
@attr('fails_on_rgw')
@pytest.mark.fails_on_rgw
def test_logging_toggle():
bucket = get_new_bucket()
log_bucket = get_new_bucket(targets.main.default, bucket.name + '-log')
@ -236,242 +214,6 @@ def list_bucket_storage_class(bucket):
return result
# The test harness for lifecycle is configured to treat days as 10 second intervals.
@attr(resource='bucket')
@attr(method='put')
@attr(operation='test lifecycle expiration')
@attr('lifecycle')
@attr('lifecycle_transition')
@attr('fails_on_aws')
def test_lifecycle_transition():
sc = configured_storage_classes()
if len(sc) < 3:
raise SkipTest
bucket = set_lifecycle(rules=[{'id': 'rule1', 'transition': lc_transition(days=1, storage_class=sc[1]), 'prefix': 'expire1/', 'status': 'Enabled'},
{'id':'rule2', 'transition': lc_transition(days=4, storage_class=sc[2]), 'prefix': 'expire3/', 'status': 'Enabled'}])
_create_keys(bucket=bucket, keys=['expire1/foo', 'expire1/bar', 'keep2/foo',
'keep2/bar', 'expire3/foo', 'expire3/bar'])
# Get list of all keys
init_keys = bucket.get_all_keys()
eq(len(init_keys), 6)
# Wait for first expiration (plus fudge to handle the timer window)
time.sleep(25)
expire1_keys = list_bucket_storage_class(bucket)
eq(len(expire1_keys['STANDARD']), 4)
eq(len(expire1_keys[sc[1]]), 2)
eq(len(expire1_keys[sc[2]]), 0)
# Wait for next expiration cycle
time.sleep(10)
keep2_keys = list_bucket_storage_class(bucket)
eq(len(keep2_keys['STANDARD']), 4)
eq(len(keep2_keys[sc[1]]), 2)
eq(len(keep2_keys[sc[2]]), 0)
# Wait for final expiration cycle
time.sleep(20)
expire3_keys = list_bucket_storage_class(bucket)
eq(len(expire3_keys['STANDARD']), 2)
eq(len(expire3_keys[sc[1]]), 2)
eq(len(expire3_keys[sc[2]]), 2)
# The test harness for lifecycle is configured to treat days as 10 second intervals.
@attr(resource='bucket')
@attr(method='put')
@attr(operation='test lifecycle expiration')
@attr('lifecycle')
@attr('lifecycle_transition')
@attr('fails_on_aws')
def test_lifecycle_transition_single_rule_multi_trans():
sc = configured_storage_classes()
if len(sc) < 3:
raise SkipTest
bucket = set_lifecycle(rules=[
{'id': 'rule1',
'transition': lc_transitions([
lc_transition(days=1, storage_class=sc[1]),
lc_transition(days=4, storage_class=sc[2])]),
'prefix': 'expire1/',
'status': 'Enabled'}])
_create_keys(bucket=bucket, keys=['expire1/foo', 'expire1/bar', 'keep2/foo',
'keep2/bar', 'expire3/foo', 'expire3/bar'])
# Get list of all keys
init_keys = bucket.get_all_keys()
eq(len(init_keys), 6)
# Wait for first expiration (plus fudge to handle the timer window)
time.sleep(25)
expire1_keys = list_bucket_storage_class(bucket)
eq(len(expire1_keys['STANDARD']), 4)
eq(len(expire1_keys[sc[1]]), 2)
eq(len(expire1_keys[sc[2]]), 0)
# Wait for next expiration cycle
time.sleep(10)
keep2_keys = list_bucket_storage_class(bucket)
eq(len(keep2_keys['STANDARD']), 4)
eq(len(keep2_keys[sc[1]]), 2)
eq(len(keep2_keys[sc[2]]), 0)
# Wait for final expiration cycle
time.sleep(20)
expire3_keys = list_bucket_storage_class(bucket)
eq(len(expire3_keys['STANDARD']), 4)
eq(len(expire3_keys[sc[1]]), 0)
eq(len(expire3_keys[sc[2]]), 2)
def generate_lifecycle_body(rules):
body = '<?xml version="1.0" encoding="UTF-8"?><LifecycleConfiguration>'
for rule in rules:
body += '<Rule><ID>%s</ID><Status>%s</Status>' % (rule['ID'], rule['Status'])
if 'Prefix' in list(rule.keys()):
body += '<Prefix>%s</Prefix>' % rule['Prefix']
if 'Filter' in list(rule.keys()):
prefix_str= '' # AWS supports empty filters
if 'Prefix' in list(rule['Filter'].keys()):
prefix_str = '<Prefix>%s</Prefix>' % rule['Filter']['Prefix']
body += '<Filter>%s</Filter>' % prefix_str
if 'Expiration' in list(rule.keys()):
if 'ExpiredObjectDeleteMarker' in list(rule['Expiration'].keys()):
body += '<Expiration><ExpiredObjectDeleteMarker>%s</ExpiredObjectDeleteMarker></Expiration>' \
% rule['Expiration']['ExpiredObjectDeleteMarker']
elif 'Date' in list(rule['Expiration'].keys()):
body += '<Expiration><Date>%s</Date></Expiration>' % rule['Expiration']['Date']
else:
body += '<Expiration><Days>%d</Days></Expiration>' % rule['Expiration']['Days']
if 'NoncurrentVersionExpiration' in list(rule.keys()):
body += '<NoncurrentVersionExpiration><NoncurrentDays>%d</NoncurrentDays></NoncurrentVersionExpiration>' % \
rule['NoncurrentVersionExpiration']['NoncurrentDays']
if 'NoncurrentVersionTransition' in list(rule.keys()):
for t in rule['NoncurrentVersionTransition']:
body += '<NoncurrentVersionTransition>'
body += '<NoncurrentDays>%d</NoncurrentDays>' % \
t['NoncurrentDays']
body += '<StorageClass>%s</StorageClass>' % \
t['StorageClass']
body += '</NoncurrentVersionTransition>'
if 'AbortIncompleteMultipartUpload' in list(rule.keys()):
body += '<AbortIncompleteMultipartUpload><DaysAfterInitiation>%d</DaysAfterInitiation>' \
'</AbortIncompleteMultipartUpload>' % rule['AbortIncompleteMultipartUpload']['DaysAfterInitiation']
body += '</Rule>'
body += '</LifecycleConfiguration>'
return body
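For reference, ``generate_lifecycle_body`` flattens rule dicts straight into the XML document that the raw ``PUT ?lifecycle`` requests below send; for example (values chosen for illustration):

rules = [{'ID': 'rule1', 'Status': 'Enabled', 'Prefix': 'test1/',
          'Expiration': {'Days': 2}}]
body = generate_lifecycle_body(rules)
# body == '<?xml version="1.0" encoding="UTF-8"?><LifecycleConfiguration>'
#         '<Rule><ID>rule1</ID><Status>Enabled</Status><Prefix>test1/</Prefix>'
#         '<Expiration><Days>2</Days></Expiration></Rule></LifecycleConfiguration>'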
@attr(resource='bucket')
@attr(method='put')
@attr(operation='set lifecycle config with noncurrent version expiration')
@attr('lifecycle')
@attr('lifecycle_transition')
def test_lifecycle_set_noncurrent_transition():
sc = configured_storage_classes()
if len(sc) < 3:
raise SkipTest
bucket = get_new_bucket()
rules = [
{
'ID': 'rule1',
'Prefix': 'test1/',
'Status': 'Enabled',
'NoncurrentVersionTransition': [
{
'NoncurrentDays': 2,
'StorageClass': sc[1]
},
{
'NoncurrentDays': 4,
'StorageClass': sc[2]
}
],
'NoncurrentVersionExpiration': {
'NoncurrentDays': 6
}
},
{'ID': 'rule2', 'Prefix': 'test2/', 'Status': 'Disabled', 'NoncurrentVersionExpiration': {'NoncurrentDays': 3}}
]
body = generate_lifecycle_body(rules)
fp = StringIO(body)
md5 = boto.utils.compute_md5(fp)
headers = {'Content-MD5': md5[1], 'Content-Type': 'text/xml'}
res = bucket.connection.make_request('PUT', bucket.name, data=fp.getvalue(), query_args='lifecycle',
headers=headers)
eq(res.status, 200)
eq(res.reason, 'OK')
@attr(resource='bucket')
@attr(method='put')
@attr(operation='test lifecycle non-current version expiration')
@attr('lifecycle')
@attr('lifecycle_expiration')
@attr('lifecycle_transition')
@attr('fails_on_aws')
def test_lifecycle_noncur_transition():
sc = configured_storage_classes()
if len(sc) < 3:
raise SkipTest
bucket = get_new_bucket()
check_configure_versioning_retry(bucket, True, "Enabled")
rules = [
{
'ID': 'rule1',
'Prefix': 'test1/',
'Status': 'Enabled',
'NoncurrentVersionTransition': [
{
'NoncurrentDays': 1,
'StorageClass': sc[1]
},
{
'NoncurrentDays': 3,
'StorageClass': sc[2]
}
],
'NoncurrentVersionExpiration': {
'NoncurrentDays': 5
}
}
]
body = generate_lifecycle_body(rules)
fp = StringIO(body)
md5 = boto.utils.compute_md5(fp)
headers = {'Content-MD5': md5[1], 'Content-Type': 'text/xml'}
bucket.connection.make_request('PUT', bucket.name, data=fp.getvalue(), query_args='lifecycle',
headers=headers)
create_multiple_versions(bucket, "test1/a", 3)
create_multiple_versions(bucket, "test1/b", 3)
init_keys = bucket.get_all_versions()
eq(len(init_keys), 6)
time.sleep(25)
expire1_keys = list_bucket_storage_class(bucket)
eq(len(expire1_keys['STANDARD']), 2)
eq(len(expire1_keys[sc[1]]), 4)
eq(len(expire1_keys[sc[2]]), 0)
time.sleep(20)
expire1_keys = list_bucket_storage_class(bucket)
eq(len(expire1_keys['STANDARD']), 2)
eq(len(expire1_keys[sc[1]]), 0)
eq(len(expire1_keys[sc[2]]), 4)
time.sleep(20)
expire_keys = bucket.get_all_versions()
expire1_keys = list_bucket_storage_class(bucket)
eq(len(expire1_keys['STANDARD']), 2)
eq(len(expire1_keys[sc[1]]), 0)
eq(len(expire1_keys[sc[2]]), 0)
def transfer_part(bucket, mp_id, mp_keyname, i, part, headers=None):
"""Transfer a part of a multipart upload. Designed to be run in parallel.
"""
@ -543,13 +285,13 @@ def gen_rand_string(size, chars=string.ascii_uppercase + string.digits):
def verify_object(bucket, k, data=None, storage_class=None):
if storage_class:
eq(k.storage_class, storage_class)
assert k.storage_class == storage_class
if data:
read_data = k.get_contents_as_string()
equal = data == read_data # avoid spamming log if data not equal
eq(equal, True)
equal = data == read_data.decode() # avoid spamming log if data not equal
assert equal == True
def copy_object_storage_class(src_bucket, src_key, dest_bucket, dest_key, storage_class):
query_args=None
@ -565,7 +307,7 @@ def copy_object_storage_class(src_bucket, src_key, dest_bucket, dest_key, storag
res = dest_bucket.connection.make_request('PUT', dest_bucket.name, dest_key.name,
query_args=query_args, headers=headers)
eq(res.status, 200)
assert res.status == 200
def _populate_multipart_key(bucket, kname, size, storage_class=None):
(upload, data) = _multipart_upload(bucket, kname, size, storage_class=storage_class)
@ -620,6 +362,9 @@ def configured_storage_classes():
if item != 'STANDARD':
sc.append(item)
sc = [i for i in sc if i]
print("storage classes configured: " + str(sc))
return sc
def lc_transition(days=None, date=None, storage_class=None):
@ -633,15 +378,13 @@ def lc_transitions(transitions=None):
return result
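The bodies of ``lc_transition`` and ``lc_transitions`` are elided in this diff; assuming they wrap boto2's ``boto.s3.lifecycle`` transition types, the transition tests above compose them roughly like:

# Assumption: lc_transition builds a single lifecycle transition and
# lc_transitions collects several into one rule, mirroring the usage in
# test_lifecycle_transition_single_rule_multi_trans above.
rule = {'id': 'rule1',
        'transition': lc_transitions([
            lc_transition(days=1, storage_class='LUKEWARM'),
            lc_transition(days=4, storage_class='FROZEN')]),
        'prefix': 'expire1/',
        'status': 'Enabled'}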
@attr(resource='object')
@attr(method='put')
@attr(operation='test create object with storage class')
@attr('storage_class')
@attr('fails_on_aws')
@pytest.mark.storage_class
@pytest.mark.fails_on_aws
@pytest.mark.fails_on_dbstore
def test_object_storage_class():
sc = configured_storage_classes()
if len(sc) < 2:
raise SkipTest
pytest.skip('requires multiple storage classes')
bucket = get_new_bucket()
@ -651,15 +394,13 @@ def test_object_storage_class():
verify_object(bucket, k, data, storage_class)
@attr(resource='object')
@attr(method='put')
@attr(operation='test create multipart object with storage class')
@attr('storage_class')
@attr('fails_on_aws')
@pytest.mark.storage_class
@pytest.mark.fails_on_aws
@pytest.mark.fails_on_dbstore
def test_object_storage_class_multipart():
sc = configured_storage_classes()
if len(sc) < 2:
raise SkipTest
pytest.skip('requires multiple storage classes')
bucket = get_new_bucket()
size = 11 * 1024 * 1024
@ -669,13 +410,13 @@ def test_object_storage_class_multipart():
(upload, data) = _multipart_upload(bucket, key, size, storage_class=storage_class)
upload.complete_upload()
key2 = bucket.get_key(key)
eq(key2.size, size)
eq(key2.storage_class, storage_class)
assert key2.size == size
assert key2.storage_class == storage_class
def _do_test_object_modify_storage_class(obj_write_func, size):
sc = configured_storage_classes()
if len(sc) < 2:
raise SkipTest
pytest.skip('requires multiple storage classes')
bucket = get_new_bucket()
@ -692,27 +433,23 @@ def _do_test_object_modify_storage_class(obj_write_func, size):
copy_object_storage_class(bucket, k, bucket, k, new_storage_class)
verify_object(bucket, k, data, storage_class)
@attr(resource='object')
@attr(method='put')
@attr(operation='test changing objects storage class')
@attr('storage_class')
@attr('fails_on_aws')
@pytest.mark.storage_class
@pytest.mark.fails_on_aws
@pytest.mark.fails_on_dbstore
def test_object_modify_storage_class():
_do_test_object_modify_storage_class(_populate_key, size=9*1024*1024)
@attr(resource='object')
@attr(method='put')
@attr(operation='test changing objects storage class')
@attr('storage_class')
@attr('fails_on_aws')
@pytest.mark.storage_class
@pytest.mark.fails_on_aws
@pytest.mark.fails_on_dbstore
def test_object_modify_storage_class_multipart():
_do_test_object_modify_storage_class(_populate_multipart_key, size=11*1024*1024)
def _do_test_object_storage_class_copy(obj_write_func, size):
sc = configured_storage_classes()
if len(sc) < 2:
raise SkipTest
pytest.skip('requires multiple storage classes')
src_bucket = get_new_bucket()
dest_bucket = get_new_bucket()
@ -730,19 +467,15 @@ def _do_test_object_storage_class_copy(obj_write_func, size):
copy_object_storage_class(src_bucket, src_key, dest_bucket, dest_key, new_storage_class)
verify_object(dest_bucket, dest_key, data, new_storage_class)
@attr(resource='object')
@attr(method='copy')
@attr(operation='test copy object to object with different storage class')
@attr('storage_class')
@attr('fails_on_aws')
@pytest.mark.storage_class
@pytest.mark.fails_on_aws
@pytest.mark.fails_on_dbstore
def test_object_storage_class_copy():
_do_test_object_storage_class_copy(_populate_key, size=9*1024*1024)
@attr(resource='object')
@attr(method='copy')
@attr(operation='test changing objects storage class')
@attr('storage_class')
@attr('fails_on_aws')
@pytest.mark.storage_class
@pytest.mark.fails_on_aws
@pytest.mark.fails_on_dbstore
def test_object_storage_class_copy_multipart():
_do_test_object_storage_class_copy(_populate_multipart_key, size=9*1024*1024)
@ -799,7 +532,7 @@ class FakeFileVerifier(object):
if self.char == None:
self.char = data[0]
self.size += size
eq(data.decode(), self.char*size)
assert data.decode() == self.char*size
def _verify_atomic_key_data(key, size=-1, char=None):
"""
@ -808,7 +541,7 @@ def _verify_atomic_key_data(key, size=-1, char=None):
fp_verify = FakeFileVerifier(char)
key.get_contents_to_file(fp_verify)
if size >= 0:
eq(fp_verify.size, size)
assert fp_verify.size == size
def _test_atomic_dual_conditional_write(file_size):
"""
@ -837,26 +570,20 @@ def _test_atomic_dual_conditional_write(file_size):
# key.set_contents_from_file(fp_c, headers={'If-Match': etag_fp_a})
e = assert_raises(boto.exception.S3ResponseError, key.set_contents_from_file, fp_c,
headers={'If-Match': etag_fp_a})
eq(e.status, 412)
eq(e.reason, 'Precondition Failed')
eq(e.error_code, 'PreconditionFailed')
assert e.status == 412
assert e.reason == 'Precondition Failed'
assert e.error_code == 'PreconditionFailed'
# verify the file
_verify_atomic_key_data(key, file_size, 'B')
@attr(resource='object')
@attr(method='put')
@attr(operation='write one or the other')
@attr(assertion='1MB successful')
@attr('fails_on_aws')
@pytest.mark.fails_on_aws
@pytest.mark.fails_on_dbstore
def test_atomic_dual_conditional_write_1mb():
_test_atomic_dual_conditional_write(1024*1024)
@attr(resource='object')
@attr(method='put')
@attr(operation='write file in deleted bucket')
@attr(assertion='fail 404')
@attr('fails_on_aws')
@pytest.mark.fails_on_aws
@pytest.mark.fails_on_dbstore
def test_atomic_write_bucket_gone():
bucket = get_new_bucket()
@ -868,9 +595,9 @@ def test_atomic_write_bucket_gone():
key = bucket.new_key('foo')
fp_a = FakeWriteFile(1024*1024, 'A', remove_bucket)
e = assert_raises(boto.exception.S3ResponseError, key.set_contents_from_file, fp_a)
eq(e.status, 404)
eq(e.reason, 'Not Found')
eq(e.error_code, 'NoSuchBucket')
assert e.status == 404
assert e.reason == 'Not Found'
assert e.error_code == 'NoSuchBucket'
def _multipart_upload_enc(bucket, s3_key_name, size, part_size=5*1024*1024,
do_list=None, init_headers=None, part_headers=None,
@ -896,11 +623,8 @@ def _multipart_upload_enc(bucket, s3_key_name, size, part_size=5*1024*1024,
@attr(resource='object')
@attr(method='put')
@attr(operation='multipart upload with bad key for uploading chunks')
@attr(assertion='successful')
@attr('encryption')
@pytest.mark.encryption
@pytest.mark.fails_on_dbstore
def test_encryption_sse_c_multipart_invalid_chunks_1():
bucket = get_new_bucket()
key = "multipart_enc"
@ -921,13 +645,10 @@ def test_encryption_sse_c_multipart_invalid_chunks_1():
_multipart_upload_enc, bucket, key, objlen,
init_headers=init_headers, part_headers=part_headers,
metadata={'foo': 'bar'})
eq(e.status, 400)
assert e.status == 400
@attr(resource='object')
@attr(method='put')
@attr(operation='multipart upload with bad md5 for chunks')
@attr(assertion='successful')
@attr('encryption')
@pytest.mark.encryption
@pytest.mark.fails_on_dbstore
def test_encryption_sse_c_multipart_invalid_chunks_2():
bucket = get_new_bucket()
key = "multipart_enc"
@ -948,14 +669,11 @@ def test_encryption_sse_c_multipart_invalid_chunks_2():
_multipart_upload_enc, bucket, key, objlen,
init_headers=init_headers, part_headers=part_headers,
metadata={'foo': 'bar'})
eq(e.status, 400)
assert e.status == 400
@attr(resource='bucket')
@attr(method='get')
@attr(operation='Test Bucket Policy for a user belonging to a different tenant')
@attr(assertion='succeeds')
@attr('fails_with_subdomain')
@attr('bucket-policy')
@pytest.mark.fails_with_subdomain
@pytest.mark.bucket_policy
@pytest.mark.fails_on_dbstore
def test_bucket_policy_different_tenant():
bucket = get_new_bucket()
key = bucket.new_key('asdf')
@ -990,10 +708,8 @@ def test_bucket_policy_different_tenant():
b = new_conn.get_bucket(bucket_name)
b.get_all_keys()
@attr(resource='bucket')
@attr(method='put')
@attr(operation='Test put condition operator end with ifExists')
@attr('bucket-policy')
@pytest.mark.bucket_policy
@pytest.mark.fails_on_dbstore
def test_bucket_policy_set_condition_operator_end_with_IfExists():
bucket = _create_keys(keys=['foo'])
policy = '''{
@ -1012,73 +728,25 @@ def test_bucket_policy_set_condition_operator_end_with_IfExists():
}
]
}''' % bucket.name
eq(bucket.set_policy(policy), True)
assert bucket.set_policy(policy) == True
res = _make_request('GET', bucket.name, bucket.get_key("foo"),
request_headers={'referer': 'http://www.example.com/'})
eq(res.status, 200)
assert res.status == 200
res = _make_request('GET', bucket.name, bucket.get_key("foo"),
request_headers={'referer': 'http://www.example.com/index.html'})
eq(res.status, 200)
assert res.status == 200
res = _make_request('GET', bucket.name, bucket.get_key("foo"))
eq(res.status, 200)
assert res.status == 200
res = _make_request('GET', bucket.name, bucket.get_key("foo"),
request_headers={'referer': 'http://example.com'})
eq(res.status, 403)
assert res.status == 403
def _make_arn_resource(path="*"):
return "arn:aws:s3:::{}".format(path)
@attr(resource='object')
@attr(method='put')
@attr(operation='Deny put obj requests without encryption')
@attr(assertion='success')
@attr('encryption')
@attr('bucket-policy')
def test_bucket_policy_put_obj_enc():
bucket = get_new_bucket()
deny_incorrect_algo = {
"StringNotEquals": {
"s3:x-amz-server-side-encryption": "AES256"
}
}
deny_unencrypted_obj = {
"Null" : {
"s3:x-amz-server-side-encryption": "true"
}
}
p = Policy()
resource = _make_arn_resource("{}/{}".format(bucket.name, "*"))
s1 = Statement("s3:PutObject", resource, effect="Deny", condition=deny_incorrect_algo)
s2 = Statement("s3:PutObject", resource, effect="Deny", condition=deny_unencrypted_obj)
policy_document = p.add_statement(s1).add_statement(s2).to_json()
bucket.set_policy(policy_document)
key1_str ='testobj'
key1 = bucket.new_key(key1_str)
check_access_denied(key1.set_contents_from_string, key1_str)
sse_client_headers = {
'x-amz-server-side-encryption' : 'AES256',
'x-amz-server-side-encryption-customer-algorithm': 'AES256',
'x-amz-server-side-encryption-customer-key': 'pO3upElrwuEXSoFwCfnZPdSsmt/xWeFa0N9KgDijwVs=',
'x-amz-server-side-encryption-customer-key-md5': 'DWygnHRtgiJ77HCm+1rvHw=='
}
key1.set_contents_from_string(key1_str, headers=sse_client_headers)
@attr(resource='object')
@attr(method='put')
@attr(operation='put obj with RequestObjectTag')
@attr(assertion='success')
@attr('tagging')
@attr('bucket-policy')
@pytest.mark.tagging
@pytest.mark.bucket_policy
@pytest.mark.fails_on_dbstore
def test_bucket_policy_put_obj_request_obj_tag():
bucket = get_new_bucket()

s3tests/functional/test_s3_website.py

@ -1,23 +1,20 @@
import sys
import collections
import nose
from collections.abc import Container
import pytest
import string
import random
from pprint import pprint
import time
import boto.exception
import socket
from urllib.parse import urlparse
from nose.tools import eq_ as eq, ok_ as ok
from nose.plugins.attrib import attr
from nose.tools import timed
from nose.plugins.skip import SkipTest
from .. import common
from . import (
configfile,
setup_teardown,
get_new_bucket,
get_new_bucket_name,
s3,
@ -42,38 +39,27 @@ ERRORDOC_TEMPLATE = '<html><h1>ErrorDoc</h1><body>{random}</body></html>'
CAN_WEBSITE = None
@pytest.fixture(autouse=True, scope="module")
def check_can_test_website():
global CAN_WEBSITE
# This is a bit expensive, so we cache this
if CAN_WEBSITE is None:
bucket = get_new_bucket()
try:
wsconf = bucket.get_website_configuration()
CAN_WEBSITE = True
return True
except boto.exception.S3ResponseError as e:
if e.status == 404 and e.reason == 'Not Found' and e.error_code in ['NoSuchWebsiteConfiguration', 'NoSuchKey']:
CAN_WEBSITE = True
return True
elif e.status == 405 and e.reason == 'Method Not Allowed' and e.error_code == 'MethodNotAllowed':
# rgw_enable_static_website is false
CAN_WEBSITE = False
pytest.skip('rgw_enable_static_website is false')
elif e.status == 403 and e.reason == 'SignatureDoesNotMatch' and e.error_code == 'Forbidden':
# This is older versions that do not support the website code
CAN_WEBSITE = False
pytest.skip('static website is not implemented')
elif e.status == 501 and e.error_code == 'NotImplemented':
CAN_WEBSITE = False
pytest.skip('static website is not implemented')
else:
raise RuntimeError("Unknown response in checking if WebsiteConf is supported", e)
finally:
bucket.delete()
if CAN_WEBSITE is True:
return True
elif CAN_WEBSITE is False:
raise SkipTest
else:
raise RuntimeError("Unknown cached response in checking if WebsiteConf is supported")
def make_website_config(xml_fragment):
"""
Take the tedious stuff out of the config
@ -168,20 +154,20 @@ def _test_website_prep(bucket, xml_template, hardcoded_fields = {}, expect_fail=
# Cleanup for our validation
common.assert_xml_equal(config_xmlcmp, config_xmlnew)
#print("config_xmlcmp\n", config_xmlcmp)
#eq (config_xmlnew, config_xmlcmp)
#assert config_xmlnew == config_xmlcmp
f['WebsiteConfiguration'] = config_xmlcmp
return f
def __website_expected_reponse_status(res, status, reason):
if not isinstance(status, collections.Container):
if not isinstance(status, Container):
status = set([status])
if not isinstance(reason, collections.Container):
if not isinstance(reason, Container):
reason = set([reason])
if status is not IGNORE_FIELD:
ok(res.status in status, 'HTTP code was %s should be %s' % (res.status, status))
assert res.status in status, 'HTTP code was %s should be %s' % (res.status, status)
if reason is not IGNORE_FIELD:
ok(res.reason in reason, 'HTTP reason was was %s should be %s' % (res.reason, reason))
assert res.reason in reason, 'HTTP reason was %s should be %s' % (res.reason, reason)
def _website_expected_default_html(**kwargs):
fields = []
@ -193,7 +179,7 @@ def _website_expected_default_html(**kwargs):
v = kwargs[k]
if isinstance(v, str):
v = [v]
elif not isinstance(v, collections.Container):
elif not isinstance(v, Container):
v = [v]
for v2 in v:
s = '<li>%s: %s</li>' % (k,v2)
@ -211,22 +197,22 @@ def _website_expected_error_response(res, bucket_name, status, reason, code, con
errorcode = res.getheader('x-amz-error-code', None)
if errorcode is not None:
if code is not IGNORE_FIELD:
eq(errorcode, code)
assert errorcode == code
if not isinstance(content, collections.Container):
if not isinstance(content, Container):
content = set([content])
for f in content:
if f is not IGNORE_FIELD and f is not None:
f = bytes(f, 'utf-8')
ok(f in body, 'HTML should contain "%s"' % (f, ))
assert f in body, 'HTML should contain "%s"' % (f, )
def _website_expected_redirect_response(res, status, reason, new_url):
body = res.read()
print(body)
__website_expected_reponse_status(res, status, reason)
loc = res.getheader('Location', None)
eq(loc, new_url, 'Location header should be set "%s" != "%s"' % (loc,new_url,))
ok(len(body) == 0, 'Body of a redirect should be empty')
assert loc == new_url, 'Location header should be set "%s" != "%s"' % (loc,new_url,)
assert len(body) == 0, 'Body of a redirect should be empty'
def _website_request(bucket_name, path, connect_hostname=None, method='GET', timeout=None):
url = get_website_url(proto='http', bucket=bucket_name, path=path)
@ -245,26 +231,16 @@ def _website_request(bucket_name, path, connect_hostname=None, method='GET', tim
return res
# ---------- Non-existent buckets via the website endpoint
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
@attr(assertion='non-existant bucket via website endpoint should give NoSuchBucket, exposing security risk')
@attr('s3website')
@attr('fails_on_rgw')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@pytest.mark.s3website
@pytest.mark.fails_on_rgw
def test_website_nonexistant_bucket_s3():
bucket_name = get_new_bucket_name()
res = _website_request(bucket_name, '')
_website_expected_error_response(res, bucket_name, 404, 'Not Found', 'NoSuchBucket', content=_website_expected_default_html(Code='NoSuchBucket'))
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
#@attr(assertion='non-existant bucket via website endpoint should give Forbidden, keeping bucket identity secure')
@attr(assertion='non-existant bucket via website endpoint should give NoSuchBucket')
@attr('s3website')
@attr('fails_on_s3')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@pytest.mark.s3website
@pytest.mark.fails_on_s3
@pytest.mark.fails_on_dbstore
def test_website_nonexistant_bucket_rgw():
bucket_name = get_new_bucket_name()
res = _website_request(bucket_name, '')
@ -272,13 +248,9 @@ def test_website_nonexistant_bucket_rgw():
_website_expected_error_response(res, bucket_name, 404, 'Not Found', 'NoSuchBucket', content=_website_expected_default_html(Code='NoSuchBucket'))
#------------- IndexDocument only, successes
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
@attr(assertion='non-empty public buckets via s3website return page for /, where page is public')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@timed(10)
@pytest.mark.s3website
@pytest.mark.fails_on_dbstore
@pytest.mark.timeout(10)
def test_website_public_bucket_list_public_index():
bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDoc'])
@ -295,17 +267,13 @@ def test_website_public_bucket_list_public_index():
body = res.read()
print(body)
indexstring = bytes(indexstring, 'utf-8')
eq(body, indexstring) # default content should match index.html set content
assert body == indexstring # default content should match index.html set content
__website_expected_reponse_status(res, 200, 'OK')
indexhtml.delete()
bucket.delete()
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
@attr(assertion='non-empty private buckets via s3website return page for /, where page is private')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@pytest.mark.s3website
@pytest.mark.fails_on_dbstore
def test_website_private_bucket_list_public_index():
bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDoc'])
@ -324,18 +292,14 @@ def test_website_private_bucket_list_public_index():
body = res.read()
print(body)
indexstring = bytes(indexstring, 'utf-8')
eq(body, indexstring, 'default content should match index.html set content')
assert body == indexstring, 'default content should match index.html set content'
indexhtml.delete()
bucket.delete()
# ---------- IndexDocument only, failures
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
@attr(assertion='empty private buckets via s3website return a 403 for /')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@pytest.mark.s3website
@pytest.mark.fails_on_dbstore
def test_website_private_bucket_list_empty():
bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDoc'])
@ -346,12 +310,8 @@ def test_website_private_bucket_list_empty():
_website_expected_error_response(res, bucket.name, 403, 'Forbidden', 'AccessDenied', content=_website_expected_default_html(Code='AccessDenied'))
bucket.delete()
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
@attr(assertion='empty public buckets via s3website return a 404 for /')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@pytest.mark.s3website
@pytest.mark.fails_on_dbstore
def test_website_public_bucket_list_empty():
bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDoc'])
@ -361,12 +321,8 @@ def test_website_public_bucket_list_empty():
_website_expected_error_response(res, bucket.name, 404, 'Not Found', 'NoSuchKey', content=_website_expected_default_html(Code='NoSuchKey'))
bucket.delete()
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
@attr(assertion='non-empty public buckets via s3website return page for /, where page is private')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@pytest.mark.s3website
@pytest.mark.fails_on_dbstore
def test_website_public_bucket_list_private_index():
bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDoc'])
@ -386,12 +342,8 @@ def test_website_public_bucket_list_private_index():
indexhtml.delete()
bucket.delete()
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
@attr(assertion='non-empty private buckets via s3website return page for /, where page is private')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@pytest.mark.s3website
@pytest.mark.fails_on_dbstore
def test_website_private_bucket_list_private_index():
bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDoc'])
@ -412,12 +364,8 @@ def test_website_private_bucket_list_private_index():
bucket.delete()
# ---------- IndexDocument & ErrorDocument, failures due to errordoc assigned but missing
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
@attr(assertion='empty private buckets via s3website return a 403 for /, missing errordoc')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@pytest.mark.s3website
@pytest.mark.fails_on_dbstore
def test_website_private_bucket_list_empty_missingerrordoc():
bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDocErrorDoc'])
@ -428,12 +376,8 @@ def test_website_private_bucket_list_empty_missingerrordoc():
bucket.delete()
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
@attr(assertion='empty public buckets via s3website return a 404 for /, missing errordoc')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@pytest.mark.s3website
@pytest.mark.fails_on_dbstore
def test_website_public_bucket_list_empty_missingerrordoc():
bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDocErrorDoc'])
@ -443,12 +387,8 @@ def test_website_public_bucket_list_empty_missingerrordoc():
_website_expected_error_response(res, bucket.name, 404, 'Not Found', 'NoSuchKey')
bucket.delete()
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
@attr(assertion='non-empty public buckets via s3website return page for /, where page is private, missing errordoc')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@pytest.mark.s3website
@pytest.mark.fails_on_dbstore
def test_website_public_bucket_list_private_index_missingerrordoc():
bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDocErrorDoc'])
@ -467,12 +407,8 @@ def test_website_public_bucket_list_private_index_missingerrordoc():
indexhtml.delete()
bucket.delete()
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
@attr(assertion='non-empty private buckets via s3website return page for /, where page is private, missing errordoc')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@pytest.mark.s3website
@pytest.mark.fails_on_dbstore
def test_website_private_bucket_list_private_index_missingerrordoc():
bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDocErrorDoc'])
@ -492,12 +428,8 @@ def test_website_private_bucket_list_private_index_missingerrordoc():
bucket.delete()
# ---------- IndexDocument & ErrorDocument, failures due to errordoc assigned but not accessible
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
@attr(assertion='empty private buckets via s3website return a 403 for /, blocked errordoc')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@pytest.mark.s3website
@pytest.mark.fails_on_dbstore
def test_website_private_bucket_list_empty_blockederrordoc():
bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDocErrorDoc'])
@ -515,17 +447,60 @@ def test_website_private_bucket_list_empty_blockederrordoc():
print(body)
_website_expected_error_response(res, bucket.name, 403, 'Forbidden', 'AccessDenied', content=_website_expected_default_html(Code='AccessDenied'), body=body)
errorstring = bytes(errorstring, 'utf-8')
ok(errorstring not in body, 'error content should NOT match error.html set content')
assert errorstring not in body, 'error content should NOT match error.html set content'
errorhtml.delete()
bucket.delete()
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
@attr(assertion='empty public buckets via s3website return a 404 for /, blocked errordoc')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@pytest.mark.s3website
@pytest.mark.fails_on_dbstore
def test_website_public_bucket_list_pubilc_errordoc():
bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDocErrorDoc'])
bucket.make_public()
errorhtml = bucket.new_key(f['ErrorDocument_Key'])
errorstring = choose_bucket_prefix(template=ERRORDOC_TEMPLATE, max_len=256)
errorhtml.set_contents_from_string(errorstring)
errorhtml.set_canned_acl('public-read')
url = get_website_url(proto='http', bucket=bucket.name, path='')
o = urlparse(url)
host = o.hostname
port = s3.main.port
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect((host, port))
request = "GET / HTTP/1.1\r\nHost:%s.%s:%s\r\n\r\n" % (bucket.name, host, port)
sock.send(request.encode())
#receive header
resp = sock.recv(4096)
print(resp)
#receive body
resp = sock.recv(4096)
print('payload length=%d' % len(resp))
print(resp)
#check if any additional payload is left
resp_len = 0
sock.settimeout(2)
try:
resp = sock.recv(4096)
resp_len = len(resp)
print('invalid payload length=%d' % resp_len)
print(resp)
except socket.timeout:
print('no invalid payload')
assert resp_len == 0, 'invalid payload'
errorhtml.delete()
bucket.delete()
@pytest.mark.s3website
@pytest.mark.fails_on_dbstore
def test_website_public_bucket_list_empty_blockederrordoc():
bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDocErrorDoc'])
@ -542,17 +517,13 @@ def test_website_public_bucket_list_empty_blockederrordoc():
print(body)
_website_expected_error_response(res, bucket.name, 404, 'Not Found', 'NoSuchKey', content=_website_expected_default_html(Code='NoSuchKey'), body=body)
errorstring = bytes(errorstring, 'utf-8')
ok(errorstring not in body, 'error content should match error.html set content')
assert errorstring not in body, 'error content should NOT match error.html set content'
errorhtml.delete()
bucket.delete()
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
@attr(assertion='non-empty public buckets via s3website return page for /, where page is private, blocked errordoc')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@pytest.mark.s3website
@pytest.mark.fails_on_dbstore
def test_website_public_bucket_list_private_index_blockederrordoc():
bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDocErrorDoc'])
@ -574,18 +545,14 @@ def test_website_public_bucket_list_private_index_blockederrordoc():
print(body)
_website_expected_error_response(res, bucket.name, 403, 'Forbidden', 'AccessDenied', content=_website_expected_default_html(Code='AccessDenied'), body=body)
errorstring = bytes(errorstring, 'utf-8')
ok(errorstring not in body, 'error content should match error.html set content')
assert errorstring not in body, 'error content should NOT match error.html set content'
indexhtml.delete()
errorhtml.delete()
bucket.delete()
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
@attr(assertion='non-empty private buckets via s3website return page for /, where page is private, blocked errordoc')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@pytest.mark.s3website
@pytest.mark.fails_on_dbstore
def test_website_private_bucket_list_private_index_blockederrordoc():
bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDocErrorDoc'])
@ -607,19 +574,15 @@ def test_website_private_bucket_list_private_index_blockederrordoc():
print(body)
_website_expected_error_response(res, bucket.name, 403, 'Forbidden', 'AccessDenied', content=_website_expected_default_html(Code='AccessDenied'), body=body)
errorstring = bytes(errorstring, 'utf-8')
ok(errorstring not in body, 'error content should match error.html set content')
assert errorstring not in body, 'error content should NOT match error.html set content'
indexhtml.delete()
errorhtml.delete()
bucket.delete()
# ---------- IndexDocument & ErrorDocument, failures with errordoc available
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
@attr(assertion='empty private buckets via s3website return a 403 for /, good errordoc')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@pytest.mark.s3website
@pytest.mark.fails_on_dbstore
def test_website_private_bucket_list_empty_gooderrordoc():
bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDocErrorDoc'])
@@ -637,12 +600,8 @@ def test_website_private_bucket_list_empty_gooderrordoc():
errorhtml.delete()
bucket.delete()
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
@attr(assertion='empty public buckets via s3website return a 404 for /, good errordoc')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@pytest.mark.s3website
@pytest.mark.fails_on_dbstore
def test_website_public_bucket_list_empty_gooderrordoc():
bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDocErrorDoc'])
@@ -661,12 +620,8 @@ def test_website_public_bucket_list_empty_gooderrordoc():
errorhtml.delete()
bucket.delete()
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
@attr(assertion='non-empty public buckets via s3website return page for /, where page is private')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@pytest.mark.s3website
@pytest.mark.fails_on_dbstore
def test_website_public_bucket_list_private_index_gooderrordoc():
bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDocErrorDoc'])
@@ -690,12 +645,8 @@ def test_website_public_bucket_list_private_index_gooderrordoc():
errorhtml.delete()
bucket.delete()
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
@attr(assertion='non-empty private buckets via s3website return page for /, where page is private')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@pytest.mark.s3website
@pytest.mark.fails_on_dbstore
def test_website_private_bucket_list_private_index_gooderrordoc():
bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDocErrorDoc'])
@@ -720,12 +671,8 @@ def test_website_private_bucket_list_private_index_gooderrordoc():
bucket.delete()
# ------ RedirectAll tests
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
@attr(assertion='RedirectAllRequestsTo without protocol should TODO')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@pytest.mark.s3website
@pytest.mark.fails_on_dbstore
def test_website_bucket_private_redirectall_base():
bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['RedirectAll'])
@@ -737,12 +684,8 @@ def test_website_bucket_private_redirectall_base():
bucket.delete()
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
@attr(assertion='RedirectAllRequestsTo without protocol should TODO')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@pytest.mark.s3website
@pytest.mark.fails_on_dbstore
def test_website_bucket_private_redirectall_path():
bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['RedirectAll'])
@@ -756,12 +699,8 @@ def test_website_bucket_private_redirectall_path():
bucket.delete()
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
@attr(assertion='RedirectAllRequestsTo without protocol should TODO')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@pytest.mark.s3website
@pytest.mark.fails_on_dbstore
def test_website_bucket_private_redirectall_path_upgrade():
bucket = get_new_bucket()
x = string.Template(WEBSITE_CONFIGS_XMLFRAG['RedirectAll+Protocol']).safe_substitute(RedirectAllRequestsTo_Protocol='https')
@@ -777,13 +716,9 @@ def test_website_bucket_private_redirectall_path_upgrade():
bucket.delete()
# ------ x-amz redirect tests
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
@attr(assertion='x-amz-website-redirect-location should not fire without websiteconf')
@attr('s3website')
@attr('x-amz-website-redirect-location')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@pytest.mark.s3website
@pytest.mark.s3website_redirect_location
@pytest.mark.fails_on_dbstore
def test_website_xredirect_nonwebsite():
bucket = get_new_bucket()
#f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['RedirectAll'])
@@ -795,7 +730,7 @@ def test_website_xredirect_nonwebsite():
headers = {'x-amz-website-redirect-location': redirect_dest}
k.set_contents_from_string(content, headers=headers, policy='public-read')
redirect = k.get_redirect()
eq(k.get_redirect(), redirect_dest)
assert k.get_redirect() == redirect_dest
res = _website_request(bucket.name, '/page')
body = res.read()
@@ -809,13 +744,9 @@ def test_website_xredirect_nonwebsite():
k.delete()
bucket.delete()
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
@attr(assertion='x-amz-website-redirect-location should fire websiteconf, relative path, public key')
@attr('s3website')
@attr('x-amz-website-redirect-location')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@pytest.mark.s3website
@pytest.mark.s3website_redirect_location
@pytest.mark.fails_on_dbstore
def test_website_xredirect_public_relative():
bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDoc'])
@@ -827,7 +758,7 @@ def test_website_xredirect_public_relative():
headers = {'x-amz-website-redirect-location': redirect_dest}
k.set_contents_from_string(content, headers=headers, policy='public-read')
redirect = k.get_redirect()
eq(k.get_redirect(), redirect_dest)
assert k.get_redirect() == redirect_dest
res = _website_request(bucket.name, '/page')
#new_url = get_website_url(bucket_name=bucket.name, path=redirect_dest)
@@ -836,13 +767,9 @@ def test_website_xredirect_public_relative():
k.delete()
bucket.delete()
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
@attr(assertion='x-amz-website-redirect-location should fire websiteconf, absolute, public key')
@attr('s3website')
@attr('x-amz-website-redirect-location')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@pytest.mark.s3website
@pytest.mark.s3website_redirect_location
@pytest.mark.fails_on_dbstore
def test_website_xredirect_public_abs():
bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDoc'])
@@ -854,7 +781,7 @@ def test_website_xredirect_public_abs():
headers = {'x-amz-website-redirect-location': redirect_dest}
k.set_contents_from_string(content, headers=headers, policy='public-read')
redirect = k.get_redirect()
eq(k.get_redirect(), redirect_dest)
assert k.get_redirect() == redirect_dest
res = _website_request(bucket.name, '/page')
new_url = get_website_url(proto='http', hostname='example.com', path='/foo')
@@ -863,13 +790,9 @@ def test_website_xredirect_public_abs():
k.delete()
bucket.delete()
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
@attr(assertion='x-amz-website-redirect-location should fire websiteconf, relative path, private key')
@attr('s3website')
@attr('x-amz-website-redirect-location')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@pytest.mark.s3website
@pytest.mark.s3website_redirect_location
@pytest.mark.fails_on_dbstore
def test_website_xredirect_private_relative():
bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDoc'])
@@ -881,7 +804,7 @@ def test_website_xredirect_private_relative():
headers = {'x-amz-website-redirect-location': redirect_dest}
k.set_contents_from_string(content, headers=headers, policy='private')
redirect = k.get_redirect()
eq(k.get_redirect(), redirect_dest)
assert k.get_redirect() == redirect_dest
res = _website_request(bucket.name, '/page')
# We get a 403 because the page is private
@@ -890,13 +813,9 @@ def test_website_xredirect_private_relative():
k.delete()
bucket.delete()
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
@attr(assertion='x-amz-website-redirect-location should fire websiteconf, absolute, private key')
@attr('s3website')
@attr('x-amz-website-redirect-location')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@pytest.mark.s3website
@pytest.mark.s3website_redirect_location
@pytest.mark.fails_on_dbstore
def test_website_xredirect_private_abs():
bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDoc'])
@@ -908,7 +827,7 @@ def test_website_xredirect_private_abs():
headers = {'x-amz-website-redirect-location': redirect_dest}
k.set_contents_from_string(content, headers=headers, policy='private')
redirect = k.get_redirect()
eq(k.get_redirect(), redirect_dest)
assert k.get_redirect() == redirect_dest
res = _website_request(bucket.name, '/page')
new_url = get_website_url(proto='http', hostname='example.com', path='/foo')
@@ -1121,8 +1040,6 @@ def routing_teardown(**kwargs):
print('Deleting', str(o))
o.delete()
@common.with_setup_kwargs(setup=routing_setup, teardown=routing_teardown)
#@timed(10)
def routing_check(*args, **kwargs):
bucket = kwargs['bucket']
args=args[0]
@@ -1148,8 +1065,8 @@ def routing_check(*args, **kwargs):
if args['code'] >= 200 and args['code'] < 300:
#body = res.read()
#print(body)
#eq(body, args['content'], 'default content should match index.html set content')
ok(int(res.getheader('Content-Length', -1)) > 0)
#assert body == args['content'], 'default content should match index.html set content'
assert int(res.getheader('Content-Length', -1)) > 0
elif args['code'] >= 300 and args['code'] < 400:
_website_expected_redirect_response(res, args['code'], IGNORE_FIELD, new_url)
elif args['code'] >= 400:
@@ -1157,9 +1074,9 @@ def routing_check(*args, **kwargs):
else:
assert(False)
@attr('s3website_RoutingRules')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
@pytest.mark.s3website_routing_rules
@pytest.mark.s3website
@pytest.mark.fails_on_dbstore
def test_routing_generator():
for t in ROUTING_RULES_TESTS:
if 'xml' in t and 'RoutingRules' in t['xml'] and len(t['xml']['RoutingRules']) > 0:


@@ -1,11 +1,9 @@
from nose.tools import eq_ as eq
from . import utils
def test_generate():
FIVE_MB = 5 * 1024 * 1024
eq(len(''.join(utils.generate_random(0))), 0)
eq(len(''.join(utils.generate_random(1))), 1)
eq(len(''.join(utils.generate_random(FIVE_MB - 1))), FIVE_MB - 1)
eq(len(''.join(utils.generate_random(FIVE_MB))), FIVE_MB)
eq(len(''.join(utils.generate_random(FIVE_MB + 1))), FIVE_MB + 1)
assert len(''.join(utils.generate_random(0))) == 0
assert len(''.join(utils.generate_random(1))) == 1
assert len(''.join(utils.generate_random(FIVE_MB - 1))) == FIVE_MB - 1
assert len(''.join(utils.generate_random(FIVE_MB))) == FIVE_MB
assert len(''.join(utils.generate_random(FIVE_MB + 1))) == FIVE_MB + 1


@@ -3,8 +3,6 @@ import requests
import string
import time
from nose.tools import eq_ as eq
def assert_raises(excClass, callableObj, *args, **kwargs):
"""
Like unittest.TestCase.assertRaises, but returns the exception.
@@ -48,7 +46,7 @@ def region_sync_meta(targets, region):
conf = r.conf
if conf.sync_agent_addr:
ret = requests.post('http://{addr}:{port}/metadata/incremental'.format(addr = conf.sync_agent_addr, port = conf.sync_agent_port))
eq(ret.status_code, 200)
assert ret.status_code == 200
if conf.sync_meta_wait:
time.sleep(conf.sync_meta_wait)


@@ -1,14 +1,19 @@
import pytest
import boto3
from botocore import UNSIGNED
from botocore.client import Config
from botocore.exceptions import ClientError
from botocore.handlers import disable_signing
import configparser
import datetime
import time
import os
import munch
import random
import string
import itertools
import urllib3
import re
config = munch.Munch
@@ -74,38 +79,64 @@ def get_objects_list(bucket, client=None, prefix=None):
return objects_list
def get_versioned_objects_list(bucket, client=None):
if client == None:
client = get_client()
response = client.list_object_versions(Bucket=bucket)
versioned_objects_list = []
# generator function that returns object listings in batches, where each
# batch is a list of dicts compatible with delete_objects()
def list_versions(client, bucket, batch_size):
kwargs = {'Bucket': bucket, 'MaxKeys': batch_size}
truncated = True
while truncated:
listing = client.list_object_versions(**kwargs)
if 'Versions' in response:
contents = response['Versions']
for obj in contents:
key = obj['Key']
version_id = obj['VersionId']
versioned_obj = (key,version_id)
versioned_objects_list.append(versioned_obj)
kwargs['KeyMarker'] = listing.get('NextKeyMarker')
kwargs['VersionIdMarker'] = listing.get('NextVersionIdMarker')
truncated = listing['IsTruncated']
return versioned_objects_list
objs = listing.get('Versions', []) + listing.get('DeleteMarkers', [])
if len(objs):
yield [{'Key': o['Key'], 'VersionId': o['VersionId']} for o in objs]
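Each yielded batch plugs directly into delete_objects(); a minimal usage sketch (the bucket name here is assumed):
    # delete every version and delete marker, 128 keys per request
    for batch in list_versions(client, 'examplebucket', 128):
        client.delete_objects(Bucket='examplebucket',
                              Delete={'Objects': batch, 'Quiet': True})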
def get_delete_markers_list(bucket, client=None):
if client == None:
client = get_client()
response = client.list_object_versions(Bucket=bucket)
delete_markers = []
def nuke_bucket(client, bucket):
batch_size = 128
max_retain_date = None
if 'DeleteMarkers' in response:
contents = response['DeleteMarkers']
for obj in contents:
key = obj['Key']
version_id = obj['VersionId']
versioned_obj = (key,version_id)
delete_markers.append(versioned_obj)
# list and delete objects in batches
for objects in list_versions(client, bucket, batch_size):
delete = client.delete_objects(Bucket=bucket,
Delete={'Objects': objects, 'Quiet': True},
BypassGovernanceRetention=True)
return delete_markers
# check for object locks on 403 AccessDenied errors
for err in delete.get('Errors', []):
if err.get('Code') != 'AccessDenied':
continue
try:
res = client.get_object_retention(Bucket=bucket,
Key=err['Key'], VersionId=err['VersionId'])
retain_date = res['Retention']['RetainUntilDate']
if not max_retain_date or max_retain_date < retain_date:
max_retain_date = retain_date
except ClientError:
pass
if max_retain_date:
# wait out the retention period (up to 60 seconds)
now = datetime.datetime.now(max_retain_date.tzinfo)
if max_retain_date > now:
delta = max_retain_date - now
if delta.total_seconds() > 60:
raise RuntimeError('bucket {} still has objects \
locked for {} more seconds, not waiting for \
bucket cleanup'.format(bucket, delta.total_seconds()))
print('nuke_bucket', bucket, 'waiting', delta.total_seconds(),
'seconds for object locks to expire')
time.sleep(delta.total_seconds())
for objects in list_versions(client, bucket, batch_size):
client.delete_objects(Bucket=bucket,
Delete={'Objects': objects, 'Quiet': True},
BypassGovernanceRetention=True)
client.delete_bucket(Bucket=bucket)
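nuke_prefixed_buckets below delegates per-bucket cleanup to this helper; a standalone call would look like (bucket name assumed):
    nuke_bucket(get_client(), 'examplebucket')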
def nuke_prefixed_buckets(prefix, client=None):
if client == None:
@@ -113,27 +144,36 @@ def nuke_prefixed_buckets(prefix, client=None):
buckets = get_buckets_list(client, prefix)
if buckets != []:
err = None
for bucket_name in buckets:
objects_list = get_objects_list(bucket_name, client)
for obj in objects_list:
response = client.delete_object(Bucket=bucket_name,Key=obj)
versioned_objects_list = get_versioned_objects_list(bucket_name, client)
for obj in versioned_objects_list:
response = client.delete_object(Bucket=bucket_name,Key=obj[0],VersionId=obj[1])
delete_markers = get_delete_markers_list(bucket_name, client)
for obj in delete_markers:
response = client.delete_object(Bucket=bucket_name,Key=obj[0],VersionId=obj[1])
try:
response = client.delete_bucket(Bucket=bucket_name)
except ClientError:
# if DELETE times out, the retry may see NoSuchBucket
if response['Error']['Code'] != 'NoSuchBucket':
raise ClientError
nuke_bucket(client, bucket_name)
except Exception as e:
# Don't raise during cleanup; keep clearing the remaining buckets so
# they aren't left behind as a resource leak. err records the exception
# so it can be re-raised after the loop to tell the user that cleanup
# hit an error.
err = e
pass
if err:
raise err
print('Done with cleanup of buckets in tests.')
def configured_storage_classes():
sc = ['STANDARD']
extra_sc = re.split(r"[\b\W\b]+", config.storage_classes)
for item in extra_sc:
if item != 'STANDARD':
sc.append(item)
sc = [i for i in sc if i]
print("storage classes configured: " + str(sc))
return sc
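For illustration (the config value below is assumed), the split and filter behave like this:
    # assumed entry in the [s3 main] section:
    #   storage_classes = LUKEWARM, FROZEN
    # configured_storage_classes() -> ['STANDARD', 'LUKEWARM', 'FROZEN']
    # (the final comprehension drops empty tokens left by stray separators)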
def setup():
cfg = configparser.RawConfigParser()
try:
@@ -166,6 +206,15 @@ def setup():
proto = 'https' if config.default_is_secure else 'http'
config.default_endpoint = "%s://%s:%d" % (proto, config.default_host, config.default_port)
try:
config.default_ssl_verify = cfg.getboolean('DEFAULT', "ssl_verify")
except configparser.NoOptionError:
config.default_ssl_verify = False
# Disable InsecureRequestWarning reported by urllib3 when ssl_verify is False
if not config.default_ssl_verify:
urllib3.disable_warnings()
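For reference, a minimal sketch of what the new ssl_verify knob feeds into on the boto3 side; the endpoint here is assumed, and botocore also accepts a CA-bundle path for verify:
    import boto3
    client = boto3.client('s3',
                          endpoint_url='https://localhost:8443',  # assumed endpoint
                          use_ssl=True,
                          verify=False)  # skip cert validation for self-signed test rigs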
# vars from the main section
config.main_access_key = cfg.get('s3 main',"access_key")
config.main_secret_key = cfg.get('s3 main',"secret_key")
@@ -188,6 +237,17 @@ def setup():
config.main_api_name = ""
pass
try:
config.storage_classes = cfg.get('s3 main',"storage_classes")
except (configparser.NoSectionError, configparser.NoOptionError):
config.storage_classes = ""
pass
try:
config.lc_debug_interval = int(cfg.get('s3 main',"lc_debug_interval"))
except (configparser.NoSectionError, configparser.NoOptionError):
config.lc_debug_interval = 10
config.alt_access_key = cfg.get('s3 alt',"access_key")
config.alt_secret_key = cfg.get('s3 alt',"secret_key")
config.alt_display_name = cfg.get('s3 alt',"display_name")
@@ -199,13 +259,31 @@ def setup():
config.tenant_display_name = cfg.get('s3 tenant',"display_name")
config.tenant_user_id = cfg.get('s3 tenant',"user_id")
config.tenant_email = cfg.get('s3 tenant',"email")
config.tenant_name = cfg.get('s3 tenant',"tenant")
config.iam_access_key = cfg.get('iam',"access_key")
config.iam_secret_key = cfg.get('iam',"secret_key")
config.iam_display_name = cfg.get('iam',"display_name")
config.iam_user_id = cfg.get('iam',"user_id")
config.iam_email = cfg.get('iam',"email")
config.iam_root_access_key = cfg.get('iam root',"access_key")
config.iam_root_secret_key = cfg.get('iam root',"secret_key")
config.iam_root_user_id = cfg.get('iam root',"user_id")
config.iam_root_email = cfg.get('iam root',"email")
config.iam_alt_root_access_key = cfg.get('iam alt root',"access_key")
config.iam_alt_root_secret_key = cfg.get('iam alt root',"secret_key")
config.iam_alt_root_user_id = cfg.get('iam alt root',"user_id")
config.iam_alt_root_email = cfg.get('iam alt root',"email")
# vars from the fixtures section
try:
template = cfg.get('fixtures', "bucket prefix")
except (configparser.NoOptionError):
template = 'test-{random}-'
template = cfg.get('fixtures', "bucket prefix", fallback='test-{random}-')
prefix = choose_bucket_prefix(template=template)
template = cfg.get('fixtures', "iam name prefix", fallback="s3-tests-")
config.iam_name_prefix = choose_bucket_prefix(template=template)
template = cfg.get('fixtures', "iam path prefix", fallback="/s3-tests/")
config.iam_path_prefix = choose_bucket_prefix(template=template)
alt_client = get_alt_client()
tenant_client = get_tenant_client()
@@ -213,12 +291,104 @@ def setup():
nuke_prefixed_buckets(prefix=prefix, client=alt_client)
nuke_prefixed_buckets(prefix=prefix, client=tenant_client)
if cfg.has_section("s3 cloud"):
get_cloud_config(cfg)
else:
config.cloud_storage_class = None
def teardown():
alt_client = get_alt_client()
tenant_client = get_tenant_client()
nuke_prefixed_buckets(prefix=prefix)
nuke_prefixed_buckets(prefix=prefix, client=alt_client)
nuke_prefixed_buckets(prefix=prefix, client=tenant_client)
try:
iam_client = get_iam_client()
list_roles_resp = iam_client.list_roles()
for role in list_roles_resp['Roles']:
list_policies_resp = iam_client.list_role_policies(RoleName=role['RoleName'])
for policy in list_policies_resp['PolicyNames']:
del_policy_resp = iam_client.delete_role_policy(
RoleName=role['RoleName'],
PolicyName=policy
)
del_role_resp = iam_client.delete_role(RoleName=role['RoleName'])
list_oidc_resp = iam_client.list_open_id_connect_providers()
for oidcprovider in list_oidc_resp['OpenIDConnectProviderList']:
del_oidc_resp = iam_client.delete_open_id_connect_provider(
OpenIDConnectProviderArn=oidcprovider['Arn']
)
except:
pass
@pytest.fixture(scope="package")
def configfile():
setup()
return config
@pytest.fixture(autouse=True)
def setup_teardown(configfile):
yield
teardown()
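A minimal sketch of how a test module consumes these fixtures, mirroring the import style in the test files below (the test body itself is assumed):
    from . import configfile, setup_teardown, get_client, get_new_bucket

    # configfile runs setup() once per package; the autouse setup_teardown
    # fixture then nukes prefixed buckets after every test
    def test_roundtrip(configfile):
        client = get_client()
        bucket = get_new_bucket(client)
        client.put_object(Bucket=bucket, Key='foo', Body='bar')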
def check_webidentity():
cfg = configparser.RawConfigParser()
try:
path = os.environ['S3TEST_CONF']
except KeyError:
raise RuntimeError(
'To run tests, point environment '
+ 'variable S3TEST_CONF to a config file.',
)
cfg.read(path)
if not cfg.has_section("webidentity"):
raise RuntimeError('Your config file is missing the "webidentity" section!')
config.webidentity_thumbprint = cfg.get('webidentity', "thumbprint")
config.webidentity_aud = cfg.get('webidentity', "aud")
config.webidentity_token = cfg.get('webidentity', "token")
config.webidentity_realm = cfg.get('webidentity', "KC_REALM")
config.webidentity_sub = cfg.get('webidentity', "sub")
config.webidentity_azp = cfg.get('webidentity', "azp")
config.webidentity_user_token = cfg.get('webidentity', "user_token")
def get_cloud_config(cfg):
config.cloud_host = cfg.get('s3 cloud',"host")
config.cloud_port = int(cfg.get('s3 cloud',"port"))
config.cloud_is_secure = cfg.getboolean('s3 cloud', "is_secure")
proto = 'https' if config.cloud_is_secure else 'http'
config.cloud_endpoint = "%s://%s:%d" % (proto, config.cloud_host, config.cloud_port)
config.cloud_access_key = cfg.get('s3 cloud',"access_key")
config.cloud_secret_key = cfg.get('s3 cloud',"secret_key")
try:
config.cloud_storage_class = cfg.get('s3 cloud', "cloud_storage_class")
except (configparser.NoSectionError, configparser.NoOptionError):
config.cloud_storage_class = None
try:
config.cloud_retain_head_object = cfg.get('s3 cloud',"retain_head_object")
except (configparser.NoSectionError, configparser.NoOptionError):
config.cloud_retain_head_object = None
try:
config.cloud_target_path = cfg.get('s3 cloud',"target_path")
except (configparser.NoSectionError, configparser.NoOptionError):
config.cloud_target_path = None
try:
config.cloud_target_storage_class = cfg.get('s3 cloud',"target_storage_class")
except (configparser.NoSectionError, configparser.NoOptionError):
config.cloud_target_storage_class = 'STANDARD'
try:
config.cloud_regular_storage_class = cfg.get('s3 cloud', "storage_class")
except (configparser.NoSectionError, configparser.NoOptionError):
config.cloud_regular_storage_class = None
def get_client(client_config=None):
if client_config == None:
@@ -229,6 +399,7 @@ def get_client(client_config=None):
aws_secret_access_key=config.main_secret_key,
endpoint_url=config.default_endpoint,
use_ssl=config.default_is_secure,
verify=config.default_ssl_verify,
config=client_config)
return client
@@ -238,9 +409,69 @@ def get_v2_client():
aws_secret_access_key=config.main_secret_key,
endpoint_url=config.default_endpoint,
use_ssl=config.default_is_secure,
verify=config.default_ssl_verify,
config=Config(signature_version='s3'))
return client
def get_sts_client(**kwargs):
kwargs.setdefault('aws_access_key_id', config.alt_access_key)
kwargs.setdefault('aws_secret_access_key', config.alt_secret_key)
kwargs.setdefault('config', Config(signature_version='s3v4'))
client = boto3.client(service_name='sts',
endpoint_url=config.default_endpoint,
region_name='',
use_ssl=config.default_is_secure,
verify=config.default_ssl_verify,
**kwargs)
return client
def get_iam_client(**kwargs):
kwargs.setdefault('aws_access_key_id', config.iam_access_key)
kwargs.setdefault('aws_secret_access_key', config.iam_secret_key)
client = boto3.client(service_name='iam',
endpoint_url=config.default_endpoint,
region_name='',
use_ssl=config.default_is_secure,
verify=config.default_ssl_verify,
**kwargs)
return client
def get_iam_s3client(**kwargs):
kwargs.setdefault('aws_access_key_id', config.iam_access_key)
kwargs.setdefault('aws_secret_access_key', config.iam_secret_key)
kwargs.setdefault('config', Config(signature_version='s3v4'))
client = boto3.client(service_name='s3',
endpoint_url=config.default_endpoint,
use_ssl=config.default_is_secure,
verify=config.default_ssl_verify,
**kwargs)
return client
def get_iam_root_client(**kwargs):
kwargs.setdefault('service_name', 'iam')
kwargs.setdefault('aws_access_key_id', config.iam_root_access_key)
kwargs.setdefault('aws_secret_access_key', config.iam_root_secret_key)
return boto3.client(endpoint_url=config.default_endpoint,
region_name='',
use_ssl=config.default_is_secure,
verify=config.default_ssl_verify,
**kwargs)
def get_iam_alt_root_client(**kwargs):
kwargs.setdefault('service_name', 'iam')
kwargs.setdefault('aws_access_key_id', config.iam_alt_root_access_key)
kwargs.setdefault('aws_secret_access_key', config.iam_alt_root_secret_key)
return boto3.client(endpoint_url=config.default_endpoint,
region_name='',
use_ssl=config.default_is_secure,
verify=config.default_ssl_verify,
**kwargs)
def get_alt_client(client_config=None):
if client_config == None:
client_config = Config(signature_version='s3v4')
@@ -250,6 +481,19 @@ def get_alt_client(client_config=None):
aws_secret_access_key=config.alt_secret_key,
endpoint_url=config.default_endpoint,
use_ssl=config.default_is_secure,
verify=config.default_ssl_verify,
config=client_config)
return client
def get_cloud_client(client_config=None):
if client_config == None:
client_config = Config(signature_version='s3v4')
client = boto3.client(service_name='s3',
aws_access_key_id=config.cloud_access_key,
aws_secret_access_key=config.cloud_secret_key,
endpoint_url=config.cloud_endpoint,
use_ssl=config.cloud_is_secure,
config=client_config)
return client
@@ -262,6 +506,7 @@ def get_tenant_client(client_config=None):
aws_secret_access_key=config.tenant_secret_key,
endpoint_url=config.default_endpoint,
use_ssl=config.default_is_secure,
verify=config.default_ssl_verify,
config=client_config)
return client
@@ -272,6 +517,18 @@ def get_tenant_iam_client():
aws_access_key_id=config.tenant_access_key,
aws_secret_access_key=config.tenant_secret_key,
endpoint_url=config.default_endpoint,
verify=config.default_ssl_verify,
use_ssl=config.default_is_secure)
return client
def get_alt_iam_client():
client = boto3.client(service_name='iam',
region_name='',
aws_access_key_id=config.alt_access_key,
aws_secret_access_key=config.alt_secret_key,
endpoint_url=config.default_endpoint,
verify=config.default_ssl_verify,
use_ssl=config.default_is_secure)
return client
@@ -281,6 +538,7 @@ def get_unauthenticated_client():
aws_secret_access_key='',
endpoint_url=config.default_endpoint,
use_ssl=config.default_is_secure,
verify=config.default_ssl_verify,
config=Config(signature_version=UNSIGNED))
return client
@@ -290,9 +548,23 @@ def get_bad_auth_client(aws_access_key_id='badauth'):
aws_secret_access_key='roflmao',
endpoint_url=config.default_endpoint,
use_ssl=config.default_is_secure,
verify=config.default_ssl_verify,
config=Config(signature_version='s3v4'))
return client
def get_svc_client(client_config=None, svc='s3'):
if client_config == None:
client_config = Config(signature_version='s3v4')
client = boto3.client(service_name=svc,
aws_access_key_id=config.main_access_key,
aws_secret_access_key=config.main_secret_key,
endpoint_url=config.default_endpoint,
use_ssl=config.default_is_secure,
verify=config.default_ssl_verify,
config=client_config)
return client
bucket_counter = itertools.count(1)
def get_new_bucket_name():
@@ -320,7 +592,8 @@ def get_new_bucket_resource(name=None):
aws_access_key_id=config.main_access_key,
aws_secret_access_key=config.main_secret_key,
endpoint_url=config.default_endpoint,
use_ssl=config.default_is_secure)
use_ssl=config.default_is_secure,
verify=config.default_ssl_verify)
if name is None:
name = get_new_bucket_name()
bucket = s3.Bucket(name)
@@ -342,6 +615,21 @@ def get_new_bucket(client=None, name=None):
client.create_bucket(Bucket=name)
return name
def get_parameter_name():
parameter_name=""
rand = ''.join(
random.choice(string.ascii_lowercase + string.digits)
for c in range(255)
)
while rand:
parameter_name = '{random}'.format(random=rand)
if len(parameter_name) <= 10:
return parameter_name
rand = rand[:-1]
return parameter_name
def get_sts_user_id():
return config.alt_user_id
def get_config_is_secure():
return config.default_is_secure
@@ -355,6 +643,9 @@ def get_config_port():
def get_config_endpoint():
return config.default_endpoint
def get_config_ssl_verify():
return config.default_ssl_verify
def get_main_aws_access_key():
return config.main_access_key
@@ -403,8 +694,77 @@ def get_tenant_aws_secret_key():
def get_tenant_display_name():
return config.tenant_display_name
def get_tenant_name():
return config.tenant_name
def get_tenant_user_id():
return config.tenant_user_id
def get_tenant_email():
return config.tenant_email
def get_thumbprint():
return config.webidentity_thumbprint
def get_aud():
return config.webidentity_aud
def get_sub():
return config.webidentity_sub
def get_azp():
return config.webidentity_azp
def get_token():
return config.webidentity_token
def get_realm_name():
return config.webidentity_realm
def get_iam_name_prefix():
return config.iam_name_prefix
def make_iam_name(name):
return config.iam_name_prefix + name
def get_iam_path_prefix():
return config.iam_path_prefix
def get_iam_access_key():
return config.iam_access_key
def get_iam_secret_key():
return config.iam_secret_key
def get_iam_root_user_id():
return config.iam_root_user_id
def get_iam_root_email():
return config.iam_root_email
def get_iam_alt_root_user_id():
return config.iam_alt_root_user_id
def get_iam_alt_root_email():
return config.iam_alt_root_email
def get_user_token():
return config.webidentity_user_token
def get_cloud_storage_class():
return config.cloud_storage_class
def get_cloud_retain_head_object():
return config.cloud_retain_head_object
def get_cloud_regular_storage_class():
return config.cloud_regular_storage_class
def get_cloud_target_path():
return config.cloud_target_path
def get_cloud_target_storage_class():
return config.cloud_target_storage_class
def get_lc_debug_interval():
return config.lc_debug_interval


@@ -0,0 +1,199 @@
from botocore.exceptions import ClientError
import pytest
from . import (
configfile,
get_iam_root_client,
get_iam_root_user_id,
get_iam_root_email,
get_iam_alt_root_client,
get_iam_alt_root_user_id,
get_iam_alt_root_email,
get_iam_path_prefix,
)
def nuke_user_keys(client, name):
p = client.get_paginator('list_access_keys')
for response in p.paginate(UserName=name):
for key in response['AccessKeyMetadata']:
try:
client.delete_access_key(UserName=name, AccessKeyId=key['AccessKeyId'])
except:
pass
def nuke_user_policies(client, name):
p = client.get_paginator('list_user_policies')
for response in p.paginate(UserName=name):
for policy in response['PolicyNames']:
try:
client.delete_user_policy(UserName=name, PolicyName=policy)
except:
pass
def nuke_attached_user_policies(client, name):
p = client.get_paginator('list_attached_user_policies')
for response in p.paginate(UserName=name):
for policy in response['AttachedPolicies']:
try:
client.detach_user_policy(UserName=name, PolicyArn=policy['PolicyArn'])
except:
pass
def nuke_user(client, name):
# delete access keys, user policies, etc
try:
nuke_user_keys(client, name)
except:
pass
try:
nuke_user_policies(client, name)
except:
pass
try:
nuke_attached_user_policies(client, name)
except:
pass
client.delete_user(UserName=name)
def nuke_users(client, **kwargs):
p = client.get_paginator('list_users')
for response in p.paginate(**kwargs):
for user in response['Users']:
try:
nuke_user(client, user['UserName'])
except:
pass
def nuke_group_policies(client, name):
p = client.get_paginator('list_group_policies')
for response in p.paginate(GroupName=name):
for policy in response['PolicyNames']:
try:
client.delete_group_policy(GroupName=name, PolicyName=policy)
except:
pass
def nuke_attached_group_policies(client, name):
p = client.get_paginator('list_attached_group_policies')
for response in p.paginate(GroupName=name):
for policy in response['AttachedPolicies']:
try:
client.detach_group_policy(GroupName=name, PolicyArn=policy['PolicyArn'])
except:
pass
def nuke_group_users(client, name):
p = client.get_paginator('get_group')
for response in p.paginate(GroupName=name):
for user in response['Users']:
try:
client.remove_user_from_group(GroupName=name, UserName=user['UserName'])
except:
pass
def nuke_group(client, name):
# delete group policies and remove all users
try:
nuke_group_policies(client, name)
except:
pass
try:
nuke_attached_group_policies(client, name)
except:
pass
try:
nuke_group_users(client, name)
except:
pass
client.delete_group(GroupName=name)
def nuke_groups(client, **kwargs):
p = client.get_paginator('list_groups')
for response in p.paginate(**kwargs):
for user in response['Groups']:
try:
nuke_group(client, user['GroupName'])
except:
pass
def nuke_role_policies(client, name):
p = client.get_paginator('list_role_policies')
for response in p.paginate(RoleName=name):
for policy in response['PolicyNames']:
try:
client.delete_role_policy(RoleName=name, PolicyName=policy)
except:
pass
def nuke_attached_role_policies(client, name):
p = client.get_paginator('list_attached_role_policies')
for response in p.paginate(RoleName=name):
for policy in response['AttachedPolicies']:
try:
client.detach_role_policy(RoleName=name, PolicyArn=policy['PolicyArn'])
except:
pass
def nuke_role(client, name):
# delete role policies, etc
try:
nuke_role_policies(client, name)
except:
pass
try:
nuke_attached_role_policies(client, name)
except:
pass
client.delete_role(RoleName=name)
def nuke_roles(client, **kwargs):
p = client.get_paginator('list_roles')
for response in p.paginate(**kwargs):
for role in response['Roles']:
try:
nuke_role(client, role['RoleName'])
except:
pass
def nuke_oidc_providers(client, prefix):
result = client.list_open_id_connect_providers()
for provider in result['OpenIDConnectProviderList']:
arn = provider['Arn']
if f':oidc-provider{prefix}' in arn:
try:
client.delete_open_id_connect_provider(OpenIDConnectProviderArn=arn)
except:
pass
# fixture for iam account root user
@pytest.fixture
def iam_root(configfile):
client = get_iam_root_client()
try:
arn = client.get_user()['User']['Arn']
if not arn.endswith(':root'):
pytest.skip('[iam root] user does not have :root arn')
except ClientError as e:
pytest.skip('[iam root] user does not belong to an account')
yield client
nuke_users(client, PathPrefix=get_iam_path_prefix())
nuke_groups(client, PathPrefix=get_iam_path_prefix())
nuke_roles(client, PathPrefix=get_iam_path_prefix())
nuke_oidc_providers(client, get_iam_path_prefix())
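A hedged sketch of a test built on the iam_root fixture; the user name is hypothetical, and make_iam_name/get_iam_path_prefix are the helpers added earlier in this diff:
    from . import make_iam_name, get_iam_path_prefix

    def test_account_create_user(iam_root):
        name = make_iam_name('Sketch')  # hypothetical user name
        iam_root.create_user(UserName=name, Path=get_iam_path_prefix())
        # the fixture's teardown nukes the user afterward
        assert iam_root.get_user(UserName=name)['User']['Path'] == get_iam_path_prefix()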
# fixture for iam alt account root user
@pytest.fixture
def iam_alt_root(configfile):
client = get_iam_alt_root_client()
try:
arn = client.get_user()['User']['Arn']
if not arn.endswith(':root'):
pytest.skip('[iam alt root] user does not have :root arn')
except ClientError as e:
pytest.skip('[iam alt root] user does not belong to an account')
yield client
nuke_users(client, PathPrefix=get_iam_path_prefix())
nuke_roles(client, PathPrefix=get_iam_path_prefix())


@@ -37,10 +37,10 @@ class Policy(object):
return json.dumps(policy_dict)
def make_json_policy(action, resource, principal={"AWS": "*"}, conditions=None):
def make_json_policy(action, resource, principal={"AWS": "*"}, effect="Allow", conditions=None):
"""
Helper function to make single statement policies
"""
s = Statement(action, resource, principal, condition=conditions)
s = Statement(action, resource, principal, effect=effect, condition=conditions)
p = Policy()
return p.add_statement(s).to_json()
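For illustration, the new effect parameter lets callers build single-statement Deny policies without constructing a Statement by hand (action and resource below are assumed):
    policy = make_json_policy('s3:GetObject',
                              'arn:aws:s3:::examplebucket/*',
                              effect='Deny')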


@@ -1,7 +1,5 @@
import boto3
from nose.tools import eq_ as eq
from nose.plugins.attrib import attr
import nose
import pytest
from botocore.exceptions import ClientError
from email.utils import formatdate
@@ -10,6 +8,8 @@ from .utils import _get_status
from .utils import _get_status
from . import (
configfile,
setup_teardown,
get_client,
get_v2_client,
get_new_bucket,
@@ -149,178 +149,97 @@ def _remove_header_create_bad_bucket(remove, client=None):
return e
def tag(*tags):
def wrap(func):
for tag in tags:
setattr(func, tag, True)
return func
return wrap
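The shim re-creates nose-style attribute marking so tag-based filtering keeps working during the pytest migration; in effect:
    @tag('auth_common')  # equivalent to: test_example.auth_common = True
    def test_example():
        pass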
#
# common tests
#
@tag('auth_common')
@attr(resource='object')
@attr(method='put')
@attr(operation='create w/invalid MD5')
@attr(assertion='fails 400')
@pytest.mark.auth_common
def test_object_create_bad_md5_invalid_short():
e = _add_header_create_bad_object({'Content-MD5':'YWJyYWNhZGFicmE='})
status, error_code = _get_status_and_error_code(e.response)
eq(status, 400)
eq(error_code, 'InvalidDigest')
assert status == 400
assert error_code == 'InvalidDigest'
@tag('auth_common')
@attr(resource='object')
@attr(method='put')
@attr(operation='create w/mismatched MD5')
@attr(assertion='fails 400')
@pytest.mark.auth_common
def test_object_create_bad_md5_bad():
e = _add_header_create_bad_object({'Content-MD5':'rL0Y20xC+Fzt72VPzMSk2A=='})
status, error_code = _get_status_and_error_code(e.response)
eq(status, 400)
eq(error_code, 'BadDigest')
assert status == 400
assert error_code == 'BadDigest'
@tag('auth_common')
@attr(resource='object')
@attr(method='put')
@attr(operation='create w/empty MD5')
@attr(assertion='fails 400')
@pytest.mark.auth_common
def test_object_create_bad_md5_empty():
e = _add_header_create_bad_object({'Content-MD5':''})
status, error_code = _get_status_and_error_code(e.response)
eq(status, 400)
eq(error_code, 'InvalidDigest')
assert status == 400
assert error_code == 'InvalidDigest'
@tag('auth_common')
@attr(resource='object')
@attr(method='put')
@attr(operation='create w/no MD5 header')
@attr(assertion='succeeds')
@pytest.mark.auth_common
def test_object_create_bad_md5_none():
bucket_name, key_name = _remove_header_create_object('Content-MD5')
client = get_client()
client.put_object(Bucket=bucket_name, Key=key_name, Body='bar')
@tag('auth_common')
@attr(resource='object')
@attr(method='put')
@attr(operation='create w/Expect 200')
@attr(assertion='garbage, but S3 succeeds!')
@pytest.mark.auth_common
def test_object_create_bad_expect_mismatch():
bucket_name, key_name = _add_header_create_object({'Expect': 200})
client = get_client()
client.put_object(Bucket=bucket_name, Key=key_name, Body='bar')
@tag('auth_common')
@attr(resource='object')
@attr(method='put')
@attr(operation='create w/empty expect')
@attr(assertion='succeeds ... should it?')
@pytest.mark.auth_common
def test_object_create_bad_expect_empty():
bucket_name, key_name = _add_header_create_object({'Expect': ''})
client = get_client()
client.put_object(Bucket=bucket_name, Key=key_name, Body='bar')
@tag('auth_common')
@attr(resource='object')
@attr(method='put')
@attr(operation='create w/no expect')
@attr(assertion='succeeds')
@pytest.mark.auth_common
def test_object_create_bad_expect_none():
bucket_name, key_name = _remove_header_create_object('Expect')
client = get_client()
client.put_object(Bucket=bucket_name, Key=key_name, Body='bar')
@tag('auth_common')
@attr(resource='object')
@attr(method='put')
@attr(operation='create w/empty content length')
@attr(assertion='fails 400')
@pytest.mark.auth_common
# TODO: remove 'fails_on_rgw' once we have learned how to remove the content-length header
@attr('fails_on_rgw')
@pytest.mark.fails_on_rgw
def test_object_create_bad_contentlength_empty():
e = _add_header_create_bad_object({'Content-Length':''})
status, error_code = _get_status_and_error_code(e.response)
eq(status, 400)
assert status == 400
@tag('auth_common')
@attr(resource='object')
@attr(method='put')
@attr(operation='create w/negative content length')
@attr(assertion='fails 400')
@attr('fails_on_mod_proxy_fcgi')
@pytest.mark.auth_common
@pytest.mark.fails_on_mod_proxy_fcgi
def test_object_create_bad_contentlength_negative():
client = get_client()
bucket_name = get_new_bucket()
key_name = 'foo'
e = assert_raises(ClientError, client.put_object, Bucket=bucket_name, Key=key_name, ContentLength=-1)
status = _get_status(e.response)
eq(status, 400)
assert status == 400
@tag('auth_common')
@attr(resource='object')
@attr(method='put')
@attr(operation='create w/no content length')
@attr(assertion='fails 411')
@pytest.mark.auth_common
# TODO: remove 'fails_on_rgw' once we have learned how to remove the content-length header
@attr('fails_on_rgw')
@pytest.mark.fails_on_rgw
def test_object_create_bad_contentlength_none():
remove = 'Content-Length'
e = _remove_header_create_bad_object('Content-Length')
status, error_code = _get_status_and_error_code(e.response)
eq(status, 411)
eq(error_code, 'MissingContentLength')
assert status == 411
assert error_code == 'MissingContentLength'
@tag('auth_common')
@attr(resource='object')
@attr(method='put')
@attr(operation='create w/content length too long')
@attr(assertion='fails 400')
# TODO: remove 'fails_on_rgw' once we have learned how to remove the content-length header
@attr('fails_on_rgw')
def test_object_create_bad_contentlength_mismatch_above():
content = 'bar'
length = len(content) + 1
client = get_client()
bucket_name = get_new_bucket()
key_name = 'foo'
headers = {'Content-Length': str(length)}
add_headers = (lambda **kwargs: kwargs['params']['headers'].update(headers))
client.meta.events.register('before-sign.s3.PutObject', add_headers)
e = assert_raises(ClientError, client.put_object, Bucket=bucket_name, Key=key_name, Body=content)
status, error_code = _get_status_and_error_code(e.response)
eq(status, 400)
@tag('auth_common')
@attr(resource='object')
@attr(method='put')
@attr(operation='create w/content type text/plain')
@attr(assertion='succeeds')
@pytest.mark.auth_common
def test_object_create_bad_contenttype_invalid():
bucket_name, key_name = _add_header_create_object({'Content-Type': 'text/plain'})
client = get_client()
client.put_object(Bucket=bucket_name, Key=key_name, Body='bar')
@tag('auth_common')
@attr(resource='object')
@attr(method='put')
@attr(operation='create w/empty content type')
@attr(assertion='succeeds')
@pytest.mark.auth_common
def test_object_create_bad_contenttype_empty():
client = get_client()
key_name = 'foo'
bucket_name = get_new_bucket()
client.put_object(Bucket=bucket_name, Key=key_name, Body='bar', ContentType='')
@tag('auth_common')
@attr(resource='object')
@attr(method='put')
@attr(operation='create w/no content type')
@attr(assertion='succeeds')
@pytest.mark.auth_common
def test_object_create_bad_contenttype_none():
bucket_name = get_new_bucket()
key_name = 'foo'
@@ -329,38 +248,26 @@ def test_object_create_bad_contenttype_none():
client.put_object(Bucket=bucket_name, Key=key_name, Body='bar')
@tag('auth_common')
@attr(resource='object')
@attr(method='put')
@attr(operation='create w/empty authorization')
@attr(assertion='fails 403')
@pytest.mark.auth_common
# TODO: remove 'fails_on_rgw' once we have learned how to remove the authorization header
@attr('fails_on_rgw')
@pytest.mark.fails_on_rgw
def test_object_create_bad_authorization_empty():
e = _add_header_create_bad_object({'Authorization': ''})
status, error_code = _get_status_and_error_code(e.response)
eq(status, 403)
assert status == 403
@tag('auth_common')
@attr(resource='object')
@attr(method='put')
@attr(operation='create w/date and x-amz-date')
@attr(assertion='succeeds')
@pytest.mark.auth_common
# TODO: remove 'fails_on_rgw' once we have learned how to pass both the 'Date' and 'X-Amz-Date' headers during signing and not 'X-Amz-Date' before
@attr('fails_on_rgw')
@pytest.mark.fails_on_rgw
def test_object_create_date_and_amz_date():
date = formatdate(usegmt=True)
bucket_name, key_name = _add_header_create_object({'Date': date, 'X-Amz-Date': date})
client = get_client()
client.put_object(Bucket=bucket_name, Key=key_name, Body='bar')
@tag('auth_common')
@attr(resource='object')
@attr(method='put')
@attr(operation='create w/x-amz-date and no date')
@attr(assertion='succeeds')
@pytest.mark.auth_common
# TODO: remove 'fails_on_rgw' once we have learned how to pass both the 'Date' and 'X-Amz-Date' headers during signing and not 'X-Amz-Date' before
@attr('fails_on_rgw')
@pytest.mark.fails_on_rgw
def test_object_create_amz_date_and_no_date():
date = formatdate(usegmt=True)
bucket_name, key_name = _add_header_create_object({'Date': '', 'X-Amz-Date': date})
@@ -368,36 +275,24 @@ def test_object_create_amz_date_and_no_date():
client.put_object(Bucket=bucket_name, Key=key_name, Body='bar')
# the teardown is really messed up here. check it out
@tag('auth_common')
@attr(resource='object')
@attr(method='put')
@attr(operation='create w/no authorization')
@attr(assertion='fails 403')
@pytest.mark.auth_common
# TODO: remove 'fails_on_rgw' once we have learned how to remove the authorization header
@attr('fails_on_rgw')
@pytest.mark.fails_on_rgw
def test_object_create_bad_authorization_none():
e = _remove_header_create_bad_object('Authorization')
status, error_code = _get_status_and_error_code(e.response)
eq(status, 403)
assert status == 403
@tag('auth_common')
@attr(resource='bucket')
@attr(method='put')
@attr(operation='create w/no content length')
@attr(assertion='succeeds')
@pytest.mark.auth_common
# TODO: remove 'fails_on_rgw' once we have learned how to remove the content-length header
@attr('fails_on_rgw')
@pytest.mark.fails_on_rgw
def test_bucket_create_contentlength_none():
remove = 'Content-Length'
_remove_header_create_bucket(remove)
@tag('auth_common')
@attr(resource='bucket')
@attr(method='acls')
@attr(operation='set w/no content length')
@attr(assertion='succeeds')
@pytest.mark.auth_common
# TODO: remove 'fails_on_rgw' once we have learned how to remove the content-length header
@attr('fails_on_rgw')
@pytest.mark.fails_on_rgw
def test_object_acl_create_contentlength_none():
bucket_name = get_new_bucket()
client = get_client()
@@ -411,11 +306,7 @@ def test_object_acl_create_contentlength_none():
client.meta.events.register('before-call.s3.PutObjectAcl', remove_header)
client.put_object_acl(Bucket=bucket_name, Key='foo', ACL='public-read')
@tag('auth_common')
@attr(resource='bucket')
@attr(method='acls')
@attr(operation='set w/invalid permission')
@attr(assertion='fails 400')
@pytest.mark.auth_common
def test_bucket_put_bad_canned_acl():
bucket_name = get_new_bucket()
client = get_client()
@@ -426,13 +317,9 @@ def test_bucket_put_bad_canned_acl():
e = assert_raises(ClientError, client.put_bucket_acl, Bucket=bucket_name, ACL='public-read')
status = _get_status(e.response)
eq(status, 400)
assert status == 400
@tag('auth_common')
@attr(resource='bucket')
@attr(method='put')
@attr(operation='create w/expect 200')
@attr(assertion='garbage, but S3 succeeds!')
@pytest.mark.auth_common
def test_bucket_create_bad_expect_mismatch():
bucket_name = get_new_bucket_name()
client = get_client()
@@ -442,99 +329,67 @@ def test_bucket_create_bad_expect_mismatch():
client.meta.events.register('before-call.s3.CreateBucket', add_headers)
client.create_bucket(Bucket=bucket_name)
@tag('auth_common')
@attr(resource='bucket')
@attr(method='put')
@attr(operation='create w/expect empty')
@attr(assertion='garbage, but S3 succeeds!')
@pytest.mark.auth_common
def test_bucket_create_bad_expect_empty():
headers = {'Expect': ''}
_add_header_create_bucket(headers)
@tag('auth_common')
@attr(resource='bucket')
@attr(method='put')
@attr(operation='create w/empty content length')
@attr(assertion='fails 400')
@pytest.mark.auth_common
# TODO: The request isn't even making it to the RGW past the frontend
# This test had 'fails_on_rgw' before the move to boto3
@attr('fails_on_rgw')
@pytest.mark.fails_on_rgw
def test_bucket_create_bad_contentlength_empty():
headers = {'Content-Length': ''}
e = _add_header_create_bad_bucket(headers)
status, error_code = _get_status_and_error_code(e.response)
eq(status, 400)
assert status == 400
@tag('auth_common')
@attr(resource='bucket')
@attr(method='put')
@attr(operation='create w/negative content length')
@attr(assertion='fails 400')
@attr('fails_on_mod_proxy_fcgi')
@pytest.mark.auth_common
@pytest.mark.fails_on_mod_proxy_fcgi
def test_bucket_create_bad_contentlength_negative():
headers = {'Content-Length': '-1'}
e = _add_header_create_bad_bucket(headers)
status = _get_status(e.response)
eq(status, 400)
assert status == 400
@tag('auth_common')
@attr(resource='bucket')
@attr(method='put')
@attr(operation='create w/no content length')
@attr(assertion='succeeds')
@pytest.mark.auth_common
# TODO: remove 'fails_on_rgw' once we have learned how to remove the content-length header
@attr('fails_on_rgw')
@pytest.mark.fails_on_rgw
def test_bucket_create_bad_contentlength_none():
remove = 'Content-Length'
_remove_header_create_bucket(remove)
@tag('auth_common')
@attr(resource='bucket')
@attr(method='put')
@attr(operation='create w/empty authorization')
@attr(assertion='fails 403')
@pytest.mark.auth_common
# TODO: remove 'fails_on_rgw' once we have learned how to manipulate the authorization header
@attr('fails_on_rgw')
@pytest.mark.fails_on_rgw
def test_bucket_create_bad_authorization_empty():
headers = {'Authorization': ''}
e = _add_header_create_bad_bucket(headers)
status, error_code = _get_status_and_error_code(e.response)
eq(status, 403)
eq(error_code, 'AccessDenied')
assert status == 403
assert error_code == 'AccessDenied'
@tag('auth_common')
@attr(resource='bucket')
@attr(method='put')
@attr(operation='create w/no authorization')
@attr(assertion='fails 403')
@pytest.mark.auth_common
# TODO: remove 'fails_on_rgw' once we have learned how to manipulate the authorization header
@attr('fails_on_rgw')
@pytest.mark.fails_on_rgw
def test_bucket_create_bad_authorization_none():
e = _remove_header_create_bad_bucket('Authorization')
status, error_code = _get_status_and_error_code(e.response)
eq(status, 403)
eq(error_code, 'AccessDenied')
assert status == 403
assert error_code == 'AccessDenied'
@tag('auth_aws2')
@attr(resource='object')
@attr(method='put')
@attr(operation='create w/invalid MD5')
@attr(assertion='fails 400')
@pytest.mark.auth_aws2
def test_object_create_bad_md5_invalid_garbage_aws2():
v2_client = get_v2_client()
headers = {'Content-MD5': 'AWS HAHAHA'}
e = _add_header_create_bad_object(headers, v2_client)
status, error_code = _get_status_and_error_code(e.response)
eq(status, 400)
eq(error_code, 'InvalidDigest')
assert status == 400
assert error_code == 'InvalidDigest'
@tag('auth_aws2')
@attr(resource='object')
@attr(method='put')
@attr(operation='create w/content length too short')
@attr(assertion='fails 400')
@pytest.mark.auth_aws2
# TODO: remove 'fails_on_rgw' once we have learned how to manipulate the Content-Length header
@attr('fails_on_rgw')
@pytest.mark.fails_on_rgw
def test_object_create_bad_contentlength_mismatch_below_aws2():
v2_client = get_v2_client()
content = 'bar'
@@ -542,252 +397,176 @@ def test_object_create_bad_contentlength_mismatch_below_aws2():
headers = {'Content-Length': str(length)}
e = _add_header_create_bad_object(headers, v2_client)
status, error_code = _get_status_and_error_code(e.response)
eq(status, 400)
eq(error_code, 'BadDigest')
assert status == 400
assert error_code == 'BadDigest'
@tag('auth_aws2')
@attr(resource='object')
@attr(method='put')
@attr(operation='create w/incorrect authorization')
@attr(assertion='fails 403')
@pytest.mark.auth_aws2
# TODO: remove 'fails_on_rgw' once we have learned how to manipulate the authorization header
@attr('fails_on_rgw')
@pytest.mark.fails_on_rgw
def test_object_create_bad_authorization_incorrect_aws2():
v2_client = get_v2_client()
headers = {'Authorization': 'AWS AKIAIGR7ZNNBHC5BKSUB:FWeDfwojDSdS2Ztmpfeubhd9isU='}
e = _add_header_create_bad_object(headers, v2_client)
status, error_code = _get_status_and_error_code(e.response)
eq(status, 403)
eq(error_code, 'InvalidDigest')
assert status == 403
assert error_code == 'InvalidDigest'
@tag('auth_aws2')
@attr(resource='object')
@attr(method='put')
@attr(operation='create w/invalid authorization')
@attr(assertion='fails 400')
@pytest.mark.auth_aws2
# TODO: remove 'fails_on_rgw' once we have learned how to manipulate the authorization header
@attr('fails_on_rgw')
@pytest.mark.fails_on_rgw
def test_object_create_bad_authorization_invalid_aws2():
v2_client = get_v2_client()
headers = {'Authorization': 'AWS HAHAHA'}
e = _add_header_create_bad_object(headers, v2_client)
status, error_code = _get_status_and_error_code(e.response)
eq(status, 400)
eq(error_code, 'InvalidArgument')
assert status == 400
assert error_code == 'InvalidArgument'
@tag('auth_aws2')
@attr(resource='object')
@attr(method='put')
@attr(operation='create w/empty user agent')
@attr(assertion='succeeds')
@pytest.mark.auth_aws2
def test_object_create_bad_ua_empty_aws2():
v2_client = get_v2_client()
headers = {'User-Agent': ''}
bucket_name, key_name = _add_header_create_object(headers, v2_client)
v2_client.put_object(Bucket=bucket_name, Key=key_name, Body='bar')
@tag('auth_aws2')
@attr(resource='object')
@attr(method='put')
@attr(operation='create w/no user agent')
@attr(assertion='succeeds')
@pytest.mark.auth_aws2
def test_object_create_bad_ua_none_aws2():
v2_client = get_v2_client()
remove = 'User-Agent'
bucket_name, key_name = _remove_header_create_object(remove, v2_client)
v2_client.put_object(Bucket=bucket_name, Key=key_name, Body='bar')
@tag('auth_aws2')
@attr(resource='object')
@attr(method='put')
@attr(operation='create w/invalid date')
@attr(assertion='fails 403')
@pytest.mark.auth_aws2
def test_object_create_bad_date_invalid_aws2():
v2_client = get_v2_client()
headers = {'x-amz-date': 'Bad Date'}
e = _add_header_create_bad_object(headers, v2_client)
status, error_code = _get_status_and_error_code(e.response)
eq(status, 403)
eq(error_code, 'AccessDenied')
assert status == 403
assert error_code == 'AccessDenied'
@tag('auth_aws2')
@attr(resource='object')
@attr(method='put')
@attr(operation='create w/empty date')
@attr(assertion='fails 403')
@pytest.mark.auth_aws2
def test_object_create_bad_date_empty_aws2():
v2_client = get_v2_client()
headers = {'x-amz-date': ''}
e = _add_header_create_bad_object(headers, v2_client)
status, error_code = _get_status_and_error_code(e.response)
eq(status, 403)
eq(error_code, 'AccessDenied')
assert status == 403
assert error_code == 'AccessDenied'
@tag('auth_aws2')
@attr(resource='object')
@attr(method='put')
@attr(operation='create w/no date')
@attr(assertion='fails 403')
@pytest.mark.auth_aws2
# TODO: remove 'fails_on_rgw' once we have learned how to remove the date header
@attr('fails_on_rgw')
@pytest.mark.fails_on_rgw
def test_object_create_bad_date_none_aws2():
v2_client = get_v2_client()
remove = 'x-amz-date'
e = _remove_header_create_bad_object(remove, v2_client)
status, error_code = _get_status_and_error_code(e.response)
eq(status, 403)
eq(error_code, 'AccessDenied')
assert status == 403
assert error_code == 'AccessDenied'
@tag('auth_aws2')
@attr(resource='object')
@attr(method='put')
@attr(operation='create w/date in past')
@attr(assertion='fails 403')
@pytest.mark.auth_aws2
def test_object_create_bad_date_before_today_aws2():
v2_client = get_v2_client()
headers = {'x-amz-date': 'Tue, 07 Jul 2010 21:53:04 GMT'}
e = _add_header_create_bad_object(headers, v2_client)
status, error_code = _get_status_and_error_code(e.response)
eq(status, 403)
eq(error_code, 'RequestTimeTooSkewed')
assert status == 403
assert error_code == 'RequestTimeTooSkewed'
@tag('auth_aws2')
@attr(resource='object')
@attr(method='put')
@attr(operation='create w/date before epoch')
@attr(assertion='fails 403')
@pytest.mark.auth_aws2
def test_object_create_bad_date_before_epoch_aws2():
v2_client = get_v2_client()
headers = {'x-amz-date': 'Tue, 07 Jul 1950 21:53:04 GMT'}
e = _add_header_create_bad_object(headers, v2_client)
status, error_code = _get_status_and_error_code(e.response)
eq(status, 403)
eq(error_code, 'AccessDenied')
assert status == 403
assert error_code == 'AccessDenied'
@tag('auth_aws2')
@attr(resource='object')
@attr(method='put')
@attr(operation='create w/date after 9999')
@attr(assertion='fails 403')
@pytest.mark.auth_aws2
def test_object_create_bad_date_after_end_aws2():
v2_client = get_v2_client()
headers = {'x-amz-date': 'Tue, 07 Jul 9999 21:53:04 GMT'}
e = _add_header_create_bad_object(headers, v2_client)
status, error_code = _get_status_and_error_code(e.response)
eq(status, 403)
eq(error_code, 'RequestTimeTooSkewed')
assert status == 403
assert error_code == 'RequestTimeTooSkewed'
@tag('auth_aws2')
@attr(resource='bucket')
@attr(method='put')
@attr(operation='create w/invalid authorization')
@attr(assertion='fails 400')
@pytest.mark.auth_aws2
# TODO: remove 'fails_on_rgw' once we have learned how to remove the date header
@attr('fails_on_rgw')
@pytest.mark.fails_on_rgw
def test_bucket_create_bad_authorization_invalid_aws2():
v2_client = get_v2_client()
headers = {'Authorization': 'AWS HAHAHA'}
e = _add_header_create_bad_bucket(headers, v2_client)
status, error_code = _get_status_and_error_code(e.response)
eq(status, 400)
eq(error_code, 'InvalidArgument')
assert status == 400
assert error_code == 'InvalidArgument'
@tag('auth_aws2')
@attr(resource='bucket')
@attr(method='put')
@attr(operation='create w/empty user agent')
@attr(assertion='succeeds')
@pytest.mark.auth_aws2
def test_bucket_create_bad_ua_empty_aws2():
v2_client = get_v2_client()
headers = {'User-Agent': ''}
_add_header_create_bucket(headers, v2_client)
@tag('auth_aws2')
@attr(resource='bucket')
@attr(method='put')
@attr(operation='create w/no user agent')
@attr(assertion='succeeds')
@pytest.mark.auth_aws2
def test_bucket_create_bad_ua_none_aws2():
v2_client = get_v2_client()
remove = 'User-Agent'
_remove_header_create_bucket(remove, v2_client)
@tag('auth_aws2')
@attr(resource='bucket')
@attr(method='put')
@attr(operation='create w/invalid date')
@attr(assertion='fails 403')
@pytest.mark.auth_aws2
def test_bucket_create_bad_date_invalid_aws2():
v2_client = get_v2_client()
headers = {'x-amz-date': 'Bad Date'}
e = _add_header_create_bad_bucket(headers, v2_client)
status, error_code = _get_status_and_error_code(e.response)
eq(status, 403)
eq(error_code, 'AccessDenied')
assert status == 403
assert error_code == 'AccessDenied'
@tag('auth_aws2')
@attr(resource='bucket')
@attr(method='put')
@attr(operation='create w/empty date')
@attr(assertion='fails 403')
@pytest.mark.auth_aws2
def test_bucket_create_bad_date_empty_aws2():
v2_client = get_v2_client()
headers = {'x-amz-date': ''}
e = _add_header_create_bad_bucket(headers, v2_client)
status, error_code = _get_status_and_error_code(e.response)
eq(status, 403)
eq(error_code, 'AccessDenied')
assert status == 403
assert error_code == 'AccessDenied'
@tag('auth_aws2')
@attr(resource='bucket')
@attr(method='put')
@attr(operation='create w/no date')
@attr(assertion='fails 403')
@pytest.mark.auth_aws2
# TODO: remove 'fails_on_rgw' and once we have learned how to remove the date header
@attr('fails_on_rgw')
@pytest.mark.fails_on_rgw
def test_bucket_create_bad_date_none_aws2():
v2_client = get_v2_client()
remove = 'x-amz-date'
e = _remove_header_create_bad_bucket(remove, v2_client)
status, error_code = _get_status_and_error_code(e.response)
eq(status, 403)
eq(error_code, 'AccessDenied')
assert status == 403
assert error_code == 'AccessDenied'
@tag('auth_aws2')
@attr(resource='bucket')
@attr(method='put')
@attr(operation='create w/date in past')
@attr(assertion='fails 403')
@pytest.mark.auth_aws2
def test_bucket_create_bad_date_before_today_aws2():
v2_client = get_v2_client()
headers = {'x-amz-date': 'Tue, 07 Jul 2010 21:53:04 GMT'}
e = _add_header_create_bad_bucket(headers, v2_client)
status, error_code = _get_status_and_error_code(e.response)
eq(status, 403)
eq(error_code, 'RequestTimeTooSkewed')
assert status == 403
assert error_code == 'RequestTimeTooSkewed'
@tag('auth_aws2')
@attr(resource='bucket')
@attr(method='put')
@attr(operation='create w/date in future')
@attr(assertion='fails 403')
@pytest.mark.auth_aws2
def test_bucket_create_bad_date_after_today_aws2():
v2_client = get_v2_client()
headers = {'x-amz-date': 'Tue, 07 Jul 2030 21:53:04 GMT'}
e = _add_header_create_bad_bucket(headers, v2_client)
status, error_code = _get_status_and_error_code(e.response)
eq(status, 403)
eq(error_code, 'RequestTimeTooSkewed')
assert status == 403
assert error_code == 'RequestTimeTooSkewed'
@tag('auth_aws2')
@attr(resource='bucket')
@attr(method='put')
@attr(operation='create w/date before epoch')
@attr(assertion='fails 403')
@pytest.mark.auth_aws2
def test_bucket_create_bad_date_before_epoch_aws2():
v2_client = get_v2_client()
headers = {'x-amz-date': 'Tue, 07 Jul 1950 21:53:04 GMT'}
e = _add_header_create_bad_bucket(headers, v2_client)
status, error_code = _get_status_and_error_code(e.response)
eq(status, 403)
eq(error_code, 'AccessDenied')
assert status == 403
assert error_code == 'AccessDenied'
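The _add_header_create_bad_object and _remove_header_create_bad_object helpers these tests call are defined elsewhere in the file and not shown in this hunk. A minimal sketch of the pattern, assuming the suite's get_client()/get_new_bucket() helpers, uses botocore's event system to rewrite the request headers just before the call is built:

import pytest
from botocore.exceptions import ClientError
from .utils import assert_raises

def _add_header_create_bad_object(headers, client=None):
    # create a bucket, then attempt a PutObject whose headers are
    # overridden with the given values; return the resulting ClientError
    bucket_name = get_new_bucket()
    if client is None:
        client = get_client()
    add_headers = (lambda **kwargs: kwargs['params']['headers'].update(headers))
    client.meta.events.register('before-call.s3.PutObject', add_headers)
    return assert_raises(ClientError, client.put_object,
                         Bucket=bucket_name, Key='foo', Body='bar')

def _remove_header_create_bad_object(remove, client=None):
    # same idea, but drop a header instead of overriding it
    bucket_name = get_new_bucket()
    if client is None:
        client = get_client()
    remove_header = (lambda **kwargs: kwargs['params']['headers'].pop(remove, None))
    client.meta.events.register('before-call.s3.PutObject', remove_header)
    return assert_raises(ClientError, client.put_object,
                         Bucket=bucket_name, Key='foo', Body='bar')

The fails_on_rgw TODOs above likely persist because a header dropped at the 'before-call' stage can be reintroduced later in botocore's request pipeline (notably when the request is signed), so the date-removal cases may not yet exercise what they intend to.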

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large


@ -0,0 +1,159 @@
import json
import pytest
from botocore.exceptions import ClientError
from . import (
    configfile,
    get_iam_root_client,
    get_iam_alt_root_client,
    get_new_bucket_name,
    get_prefix,
    nuke_prefixed_buckets,
)
from .iam import iam_root, iam_alt_root
from .utils import assert_raises, _get_status_and_error_code

def get_new_topic_name():
    return get_new_bucket_name()

def nuke_topics(client, prefix):
    p = client.get_paginator('list_topics')
    for response in p.paginate():
        for topic in response['Topics']:
            arn = topic['TopicArn']
            if prefix not in arn:
                continue  # only delete topics created by this test run
            try:
                client.delete_topic(TopicArn=arn)
            except:
                pass

@pytest.fixture
def sns(iam_root):
    client = get_iam_root_client(service_name='sns')
    yield client
    nuke_topics(client, get_prefix())

@pytest.fixture
def sns_alt(iam_alt_root):
    client = get_iam_alt_root_client(service_name='sns')
    yield client
    nuke_topics(client, get_prefix())

@pytest.fixture
def s3(iam_root):
    client = get_iam_root_client(service_name='s3')
    yield client
    nuke_prefixed_buckets(get_prefix(), client)

@pytest.fixture
def s3_alt(iam_alt_root):
    client = get_iam_alt_root_client(service_name='s3')
    yield client
    nuke_prefixed_buckets(get_prefix(), client)

@pytest.mark.iam_account
@pytest.mark.sns
def test_account_topic(sns):
    name = get_new_topic_name()

    response = sns.create_topic(Name=name)
    arn = response['TopicArn']
    assert arn.startswith('arn:aws:sns:')
    assert arn.endswith(f':{name}')

    response = sns.list_topics()
    assert arn in [p['TopicArn'] for p in response['Topics']]

    sns.set_topic_attributes(TopicArn=arn, AttributeName='Policy', AttributeValue='')

    response = sns.get_topic_attributes(TopicArn=arn)
    assert 'Attributes' in response

    sns.delete_topic(TopicArn=arn)

    response = sns.list_topics()
    assert arn not in [p['TopicArn'] for p in response['Topics']]

    with pytest.raises(sns.exceptions.NotFoundException):
        sns.get_topic_attributes(TopicArn=arn)
    # deleting an already-deleted topic still succeeds
    sns.delete_topic(TopicArn=arn)

@pytest.mark.iam_account
@pytest.mark.sns
def test_cross_account_topic(sns, sns_alt):
    name = get_new_topic_name()
    arn = sns.create_topic(Name=name)['TopicArn']

    # not visible to any alt user apis
    with pytest.raises(sns.exceptions.NotFoundException):
        sns_alt.get_topic_attributes(TopicArn=arn)
    with pytest.raises(sns.exceptions.NotFoundException):
        sns_alt.set_topic_attributes(TopicArn=arn, AttributeName='Policy', AttributeValue='')

    # delete returns success
    sns_alt.delete_topic(TopicArn=arn)

    response = sns_alt.list_topics()
    assert arn not in [p['TopicArn'] for p in response['Topics']]

@pytest.mark.iam_account
@pytest.mark.sns
def test_account_topic_publish(sns, s3):
    name = get_new_topic_name()

    response = sns.create_topic(Name=name)
    topic_arn = response['TopicArn']

    bucket = get_new_bucket_name()
    s3.create_bucket(Bucket=bucket)

    config = {'TopicConfigurations': [{
        'Id': 'id',
        'TopicArn': topic_arn,
        'Events': ['s3:ObjectCreated:*'],
    }]}
    s3.put_bucket_notification_configuration(
        Bucket=bucket, NotificationConfiguration=config)

@pytest.mark.iam_account
@pytest.mark.iam_cross_account
@pytest.mark.sns
def test_cross_account_topic_publish(sns, s3_alt, iam_alt_root):
    name = get_new_topic_name()

    response = sns.create_topic(Name=name)
    topic_arn = response['TopicArn']

    bucket = get_new_bucket_name()
    s3_alt.create_bucket(Bucket=bucket)

    config = {'TopicConfigurations': [{
        'Id': 'id',
        'TopicArn': topic_arn,
        'Events': ['s3:ObjectCreated:*'],
    }]}

    # expect AccessDenied because no resource policy allows cross-account access
    e = assert_raises(ClientError, s3_alt.put_bucket_notification_configuration,
                      Bucket=bucket, NotificationConfiguration=config)
    status, error_code = _get_status_and_error_code(e.response)
    assert status == 403
    assert error_code == 'AccessDenied'

    # add topic policy to allow the alt user
    alt_principal = iam_alt_root.get_user()['User']['Arn']
    policy = json.dumps({
        'Version': '2012-10-17',
        'Statement': [{
            'Effect': 'Allow',
            'Principal': {'AWS': alt_principal},
            'Action': 'sns:Publish',
            'Resource': topic_arn
        }]
    })
    sns.set_topic_attributes(TopicArn=topic_arn, AttributeName='Policy',
                             AttributeValue=policy)

    s3_alt.put_bucket_notification_configuration(
        Bucket=bucket, NotificationConfiguration=config)
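The final put_bucket_notification_configuration call succeeds only because the topic's resource policy now grants sns:Publish to the alt account's principal. A possible sanity check before the retry (a sketch; it assumes the endpoint reports the Policy attribute back through GetTopicAttributes, as AWS SNS does) would verify the stored policy round-trips:

    attrs = sns.get_topic_attributes(TopicArn=topic_arn)['Attributes']
    assert json.loads(attrs.get('Policy', '{}')) == json.loads(policy)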

File diff suppressed because it is too large

@ -1,11 +1,9 @@
from . import utils

def test_generate():
    FIVE_MB = 5 * 1024 * 1024
    assert len(''.join(utils.generate_random(0))) == 0
    assert len(''.join(utils.generate_random(1))) == 1
    assert len(''.join(utils.generate_random(FIVE_MB - 1))) == FIVE_MB - 1
    assert len(''.join(utils.generate_random(FIVE_MB))) == FIVE_MB
    assert len(''.join(utils.generate_random(FIVE_MB + 1))) == FIVE_MB + 1
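test_generate only pins down the contract of utils.generate_random, which is not shown here: it returns an iterable of string chunks whose concatenated length equals the requested size, including the 0-byte and 5 MB plus/minus one edge cases. A minimal generator meeting that contract (a sketch, not the suite's actual implementation) might look like:

import random
import string

def generate_random(size, part_size=5 * 1024 * 1024):
    # yield `size` characters in total, in chunks of at most `part_size`
    chars = string.ascii_letters
    remaining = size
    while remaining > 0:
        this_part = min(remaining, part_size)
        yield ''.join(random.choice(chars) for _ in range(this_part))
        remaining -= this_part

A production version would avoid per-character random draws for multi-megabyte sizes, for example by repeating a small random block.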


@ -3,8 +3,6 @@ import requests
import string
import time
def assert_raises(excClass, callableObj, *args, **kwargs):
    """
    Like unittest.TestCase.assertRaises, but returns the exception.

tox.ini

@ -0,0 +1,9 @@
[tox]
envlist = py

[testenv]
deps = -rrequirements.txt
passenv =
    S3TEST_CONF
    S3_USE_SIGV4
commands = pytest {posargs}
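With this tox.ini, the test configuration and signature-version toggles come from the environment (passenv forwards S3TEST_CONF and S3_USE_SIGV4), and anything after -- lands in {posargs} and is handed to pytest. A typical invocation (the path and marker expression here are illustrative) would be:

S3TEST_CONF=your.conf tox -- -m 'not fails_on_rgw' s3tests_boto3/functional/test_headers.py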