original tests by Priya Sehgal <priya.sehgal@flipkart.com>:
rgw/s3_boto3: Tests added for SSE-S3 (GET, PUT, HEAD, MPU).
Additions by Casey Bodley <cbodley@redhat.com>:
add 'sse-s3' tag to test cases
sse: add _put_bucket_encryption() helper function
sse: document test cases with default bucket encryption
sse: expects encryption response header on put/get
sse: add 8MB default-encrypted upload
sse: test uploads that request x-amz-server-side-encryption=AES256
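A minimal sketch of the helper and the header expectations listed above, assuming a boto3 client and bucket provided by the suite's fixtures; everything except the _put_bucket_encryption() name and the AES256 algorithm is illustrative:

    def _put_bucket_encryption(client, bucket_name):
        # install SSE-S3 (AES256) as the bucket's default encryption
        client.put_bucket_encryption(
            Bucket=bucket_name,
            ServerSideEncryptionConfiguration={
                'Rules': [{
                    'ApplyServerSideEncryptionByDefault': {'SSEAlgorithm': 'AES256'},
                }],
            })

    def check_default_encrypted_upload(client, bucket_name):
        _put_bucket_encryption(client, bucket_name)
        data = b'x' * 8 * 1024 * 1024          # 8MB default-encrypted upload
        put = client.put_object(Bucket=bucket_name, Key='obj', Body=data)
        assert put['ServerSideEncryption'] == 'AES256'
        get = client.get_object(Bucket=bucket_name, Key='obj')
        assert get['ServerSideEncryption'] == 'AES256'
        assert get['Body'].read() == data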
Lastly, all my changes (Marcus Watts <mwatts@redhat.com>):
remove obsolete test - do it only in boto3 now.
Combine or rename duplicated function names.
Giving more than one test the same name is a Bad Thing(tm).
sse: expand test_bucket_policy_put_obj_enc, and _put_bucket_encryption
test_bucket_policy_put_obj_enc was testing too many things at once.
new tests:
* customer encryption and sse-s3: should fail
* customer encryption and sse-kms: should fail
* deny if not sse-s3: no-enc fails, sse-s3 succeeds (policy condition sketched below).
* deny if not sse-s3: kms fails
* deny if not sse-kms: no-enc fails, sse-kms succeeds.
* deny if not sse-kms: s3 fails
_put_bucket_encryption was only testing sse-s3.
* test both these variations: sse-s3 and sse-kms
Note:
* these tests will fail on pre-sse-s3 ceph.
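A rough sketch of the 'deny if not sse-s3' variation, assuming the tests rely on the standard s3:x-amz-server-side-encryption condition key; the sse-kms variation would compare against 'aws:kms' instead, and all names below are illustrative:

    import json
    from botocore.exceptions import ClientError

    def deny_unless_sse_s3_policy(bucket_name):
        # deny any PutObject that does not request AES256 (SSE-S3)
        return json.dumps({
            'Version': '2012-10-17',
            'Statement': [{
                'Effect': 'Deny',
                'Principal': '*',
                'Action': 's3:PutObject',
                'Resource': 'arn:aws:s3:::{}/*'.format(bucket_name),
                'Condition': {
                    'StringNotEquals': {'s3:x-amz-server-side-encryption': 'AES256'},
                },
            }],
        })

    def check_deny_unless_sse_s3(client, bucket_name):
        client.put_bucket_policy(Bucket=bucket_name,
                                 Policy=deny_unless_sse_s3_policy(bucket_name))
        # an unencrypted upload is rejected...
        try:
            client.put_object(Bucket=bucket_name, Key='denied', Body=b'x')
            assert False, 'expected AccessDenied for unencrypted upload'
        except ClientError as e:
            assert e.response['Error']['Code'] == 'AccessDenied'
        # ...while an sse-s3 upload succeeds
        client.put_object(Bucket=bucket_name, Key='allowed', Body=b'x',
                          ServerSideEncryption='AES256')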
python3: comment out all boto3.set_stream_logger() calls
They made too much output.
Signed-off-by: Marcus Watts <mwatts@redhat.com>
To be able to run s3tests successfully on the dbstore backend in teuthology,
mark all the s3-tests currently failing on it with the 'fails_on_dbstore' attr.
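A minimal sketch of such a marker, assuming the nose-style @attr tags the suite already uses for tags like fails_on_aws; the test name and body are placeholders:

    from nose.plugins.attrib import attr

    @attr('fails_on_dbstore')
    def test_something_dbstore_cannot_do_yet():
        # placeholder body; the real tests are unchanged apart from the new tag
        pass

Runs against dbstore can then exclude the tagged tests with a selector such as nosetests -a '!fails_on_dbstore'.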
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
This solves: https://tracker.ceph.com/issues/53090
The solution is to delete the role_policy and user_policy attached to the
user, which was causing the failure.
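A minimal sketch of that cleanup with hypothetical user/role/policy names; the boto3 IAM calls for removing inline policies are delete_user_policy() and delete_role_policy():

    def cleanup_inline_policies(iam_client, user_name, role_name, policy_name):
        # drop the inline policies left behind by the test so later
        # user/role cleanup does not fail
        iam_client.delete_user_policy(UserName=user_name, PolicyName=policy_name)
        iam_client.delete_role_policy(RoleName=role_name, PolicyName=policy_name)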
Signed-off-by: Kalpesh Pandya <kapandya@redhat.com>
skip output-serial test: the results from the two queries are not equal, so it raises an assert; the problem seems to be the formatting done before the comparison
remove test_output_serial_expressions until the test is fixed
experiment with pyarrow for parquet testing: add arrow/parquet to bootstrap, and install pyarrow and pandas for reading/writing parquet
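A rough sketch of producing a parquet object in memory with pandas/pyarrow for the s3select tests; the column names and sizes are made up:

    import pandas as pd
    import pyarrow as pa
    import pyarrow.parquet as pq

    def make_parquet_bytes(num_rows=1000):
        # build a small dataframe and serialize it as parquet into a memory buffer
        df = pd.DataFrame({
            'c1': range(num_rows),
            'c2': [i * 0.5 for i in range(num_rows)],
        })
        sink = pa.BufferOutputStream()
        pq.write_table(pa.Table.from_pandas(df), sink)
        return sink.getvalue().to_pybytes()

    # the bytes can be uploaded with put_object() and read back locally with
    # pq.read_table(pa.BufferReader(...)) to verify what the queries should return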
Signed-off-by: gal salomon <gal.salomon@gmail.com>
new test case test_list_multipart_upload_owner() uses two different
users to initiate multipart uploads, then tests that
list_multipart_uploads() shows the correct user ids and display names
for each upload
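A condensed sketch of the check, assuming two boto3 clients for the main and alt users and that the alt user has already been granted write access to the bucket:

    def check_multipart_upload_owners(main_client, alt_client, bucket_name):
        # each user starts its own multipart upload
        main_client.create_multipart_upload(Bucket=bucket_name, Key='main-key')
        alt_client.create_multipart_upload(Bucket=bucket_name, Key='alt-key')

        uploads = main_client.list_multipart_uploads(Bucket=bucket_name)['Uploads']
        initiators = {u['Key']: (u['Initiator']['ID'], u['Initiator']['DisplayName'])
                      for u in uploads}
        # each upload should report the user id/display name that initiated it
        assert initiators['main-key'] != initiators['alt-key']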
Signed-off-by: Casey Bodley <cbodley@redhat.com>
objects locked in GOVERNANCE mode can be removed with
BypassGovernanceRetention, but some tests may leave an object locked in
COMPLIANCE mode, which blocks deletion until the retention period
expires
nuke_prefixed_buckets now checks the retention policy of objects that it
fails to delete with AccessDenied, and will wait up to 60 seconds for
locks to expire before retrying the deletes. if the wait exceeds 60
seconds, it instead throws an error without deleting the bucket
instead of doing this in nuke_prefixed_buckets, we could potentially
have each object-lock test case handle this manually, but that would
add a separate delay to each test case
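A simplified sketch of the waiting logic; the 60-second cap comes from the description above, while the helper name and error handling are illustrative:

    import time
    from datetime import datetime, timezone

    def wait_for_object_lock(client, bucket, key, version_id, limit=60):
        # check how long the COMPLIANCE lock has left and sleep it off,
        # unless that would exceed the limit
        retention = client.get_object_retention(
            Bucket=bucket, Key=key, VersionId=version_id)['Retention']
        remaining = (retention['RetainUntilDate'] -
                     datetime.now(timezone.utc)).total_seconds()
        if remaining > limit:
            raise RuntimeError('object lock on %s/%s will not expire within %ds'
                               % (bucket, key, limit))
        if remaining > 0:
            time.sleep(remaining)
        # the caller then retries the delete (GOVERNANCE-mode locks are instead
        # removed with BypassGovernanceRetention=True)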
Signed-off-by: Casey Bodley <cbodley@redhat.com>
Tests are added for GetBucketEncryption, PutBucketEncryption,
and DeleteBucketEncryption APIs.
Related PR: https://github.com/ceph/ceph/pull/42222
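Roughly what such a round trip looks like with boto3; the bucket setup and the exact error code for a missing configuration are assumptions here:

    from botocore.exceptions import ClientError

    def roundtrip_bucket_encryption(client, bucket_name):
        client.put_bucket_encryption(
            Bucket=bucket_name,
            ServerSideEncryptionConfiguration={
                'Rules': [{'ApplyServerSideEncryptionByDefault':
                           {'SSEAlgorithm': 'aws:kms'}}],
            })

        resp = client.get_bucket_encryption(Bucket=bucket_name)
        rule = resp['ServerSideEncryptionConfiguration']['Rules'][0]
        assert rule['ApplyServerSideEncryptionByDefault']['SSEAlgorithm'] == 'aws:kms'

        client.delete_bucket_encryption(Bucket=bucket_name)
        try:
            client.get_bucket_encryption(Bucket=bucket_name)
            assert False, 'expected no encryption configuration after delete'
        except ClientError as e:
            assert e.response['Error']['Code'] == \
                'ServerSideEncryptionConfigurationNotFoundError'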
Signed-off-by: Rahul Dev Parashar <rahul.dev@flipkart.com>
Signed-off-by: Kalpesh Pandya <kapandya@redhat.com>
A few main changes/additions:
1. Webidentity test addition to test_sts.py.
2. A function named check_webidentity() added to __init__.py in order to check for section presence (see the sketch after this list).
3. A few lines shifted from setup() to get_iam_client() to make them execute only when the sts tests run.
4. Documentation update (for sts section)
5. Changes in s3tests.conf.SAMPLE regarding sts sections
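A minimal sketch of check_webidentity(), assuming the ConfigParser-based setup already used in __init__.py and a [webidentity] config section; the section name and error text are assumptions:

    import configparser
    import os

    def check_webidentity():
        # bail out early if the config file has no webidentity section
        path = os.environ.get('S3TEST_CONF')
        if path is None:
            raise RuntimeError('S3TEST_CONF environment variable is not set')
        cfg = configparser.RawConfigParser()
        cfg.read(path)
        if not cfg.has_section('webidentity'):
            raise RuntimeError('config file is missing the "webidentity" section')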
This is the fix for the reported issue (https://tracker.ceph.com/issues/47588). The issue was with an argument passed to the function; since that argument is already optional, removing it fixes the issue.
Signed-off-by: Kalpesh Pandya <kapandya@redhat.com>
Create 10 object versions (9 noncurrent). Install a noncurrent
version expiration at 4 days. Verify that 10 versions exist at
T+20, and only 1 (current) at T+60.
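A compressed sketch of that setup; the key name is made up, and the T+20/T+60 checks assume RGW is running with a shortened lifecycle debug interval so that "days" elapse in seconds:

    def setup_noncurrent_expiration(client, bucket_name):
        client.put_bucket_versioning(Bucket=bucket_name,
                                     VersioningConfiguration={'Status': 'Enabled'})
        # 10 versions of the same key: 1 current + 9 noncurrent
        for i in range(10):
            client.put_object(Bucket=bucket_name, Key='myobject',
                              Body=('version %d' % i).encode())

        client.put_bucket_lifecycle_configuration(
            Bucket=bucket_name,
            LifecycleConfiguration={'Rules': [{
                'ID': 'noncurrent-expiration',
                'Status': 'Enabled',
                'Filter': {'Prefix': ''},
                'NoncurrentVersionExpiration': {'NoncurrentDays': 4},
            }]})
        # list_object_versions() should then report 10 versions at T+20
        # and only the current one at T+60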
Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
By design this test duplicates test_lifecycle_expiration_tags2,
but enables object versioning on the bucket.
The tests install a rule which requires 2 tags to be matched, and create
2 objects, one matching only 1 of the required tags, the other matching
both. Only the 2nd object should expire.
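The rule and the two objects, roughly, with made-up tag names; the versioned variant only adds put_bucket_versioning on the bucket:

    def setup_two_tag_expiration(client, bucket_name):
        client.put_bucket_lifecycle_configuration(
            Bucket=bucket_name,
            LifecycleConfiguration={'Rules': [{
                'ID': 'expire-when-both-tags-match',
                'Status': 'Enabled',
                'Filter': {'And': {'Prefix': '', 'Tags': [
                    {'Key': 'key1', 'Value': 'val1'},
                    {'Key': 'key2', 'Value': 'val2'},
                ]}},
                'Expiration': {'Days': 1},
            }]})
        # only one of the required tags: should survive
        client.put_object(Bucket=bucket_name, Key='one-tag', Body=b'x',
                          Tagging='key1=val1')
        # both required tags: should expire
        client.put_object(Bucket=bucket_name, Key='both-tags', Body=b'x',
                          Tagging='key1=val1&key2=val2')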
Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
Note that the 1-tag case contains a filter prefix, which exposes
an apparent bug parsing Filter when it contains a Prefix element
and a single Tag element (without And).
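For reference, the Filter shape that appears to trigger the parsing issue, sketched with placeholder values:

    # a Prefix alongside a single Tag, without wrapping them in an And block
    problem_filter = {
        'Prefix': 'tagged/',
        'Tag': {'Key': 'key1', 'Value': 'val1'},
    }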
Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
In fact, test_lifecycle_expiration_days0 should fail, as 0-day
expiration is permitted for transition rules but not expiration
rules.
Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
Primarily fixes the expiration header verifier function
check_lifecycle_expiration_header, but also cleans up
prefix handling in setup_lifecycle_expiration().
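A simplified sketch of what that verifier has to do; the real check_lifecycle_expiration_header() signature and rounding rules may differ:

    import re
    from dateutil import parser as dateparser

    def parse_expiration_header(value):
        # e.g. expiry-date="Sat, 01 Jan 2022 00:00:00 GMT", rule-id="rule1"
        m = re.match(r'expiry-date="(?P<date>[^"]+)", rule-id="(?P<rule>[^"]+)"', value)
        return dateparser.parse(m.group('date')), m.group('rule')

    def check_lifecycle_expiration_header(response, start_time, rule_id, delta_days):
        # start_time is assumed to be a timezone-aware UTC datetime; expiration
        # dates are rounded up to the next midnight, so allow a day of slack
        header = response['ResponseMetadata']['HTTPHeaders']['x-amz-expiration']
        expiry, found_rule_id = parse_expiration_header(header)
        days = (expiry - start_time).days
        return found_rule_id == rule_id and days in (delta_days, delta_days + 1)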
Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
1. fix a python3-related KeyError exception
2. note here: AWS documentation includes examples of "Days 0"
in use, but boto3 will not accept them--this is why the days0
test currently sets Days 1
3. delay increased to 30s, to avoid occasional failures due to
jitter
Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
Commit bf956df71e adding
listobjectsv2 tests inadvertently changed the v1
test_lifecycle_expiration test, which it had copied to
create a v2 version. Revert this.
Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
In nuke_prefixed_buckets, if an error is thrown while deleting buckets, the remaining buckets are left uncleaned.
That is a kind of resource leak on platforms that charge for resources, and we have to clean them up manually.
The original code is meant to give the user a hint by raising the error, but the leak of leftover buckets is a little annoying.
This PR skips past the failure and continues the teardown process. The last client error is saved and re-raised after the loop completes.
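A stripped-down sketch of that pattern (the real nuke_prefixed_buckets also deletes keys, versions, and multipart uploads first):

    from botocore.exceptions import ClientError

    def nuke_buckets(client, bucket_names):
        last_err = None
        for name in bucket_names:
            try:
                client.delete_bucket(Bucket=name)
            except ClientError as e:
                last_err = e      # remember the failure, but keep cleaning up
        if last_err is not None:
            raise last_err        # surface the problem once teardown is done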
Signed-off-by: Pei <huangp0600@126.com>
Signed-off-by: Pei <phuang1@dev-new-3-3854897.slc07.dev.ebayc3.com>