Increase the wait time in test_lifecycle_deletemarker_expiration(..)
to avoid spurious failures.
(cherry picked from commit cb830ebae1)
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
Add an option to configure the lc debug interval, and adjust the
lifecycle tests' sleep times according to the configured value.
(cherry picked from commit 0f3f35ef01)
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
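A minimal sketch of how such a knob might be consumed by the tests,
assuming the option is exposed through the s3tests configuration file
(the section and option names here are assumptions, not taken from the
commit):

    import configparser
    import time

    cfg = configparser.ConfigParser()
    cfg.read('your.conf')
    # fall back to a conservative default when the option is not set
    lc_debug_interval = cfg.getint('s3 main', 'lc_debug_interval', fallback=10)

    def wait_for_lc(cycles=3):
        # sleep long enough for the lifecycle thread to run 'cycles' times
        # instead of hard-coding the number of seconds in each test
        time.sleep(cycles * lc_debug_interval)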
This solves: https://tracker.ceph.com/issues/53090
The solution: delete the role_policy and user_policy attached to the
user, which were causing the failure.
Signed-off-by: Kalpesh Pandya <kapandya@redhat.com>
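A hedged sketch of the cleanup implied by the fix, using the boto3 IAM
client; the user, role and policy names are placeholders supplied by
the caller, not the actual identifiers used in the test:

    import boto3

    iam = boto3.client('iam')

    def cleanup_policies(user_name, role_name, user_policy, role_policy):
        # inline policies have to be deleted before the user and role
        # themselves can be removed
        iam.delete_user_policy(UserName=user_name, PolicyName=user_policy)
        iam.delete_role_policy(RoleName=role_name, PolicyName=role_policy)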
skip the output-serial test. the results from the two queries are not equal, thus it raises an assertion. the problem seems to be the formatting applied before the comparison
remove test_output_serial_expressions until the test is fixed
experiment with pyarrow for parquet testing: add arrow/parquet to bootstrap, install pyarrow and pandas for reading/writing parquet
Signed-off-by: gal salomon <gal.salomon@gmail.com>
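For reference, a round trip with pyarrow and pandas of the kind the
parquet tests rely on (the file path and columns are illustrative only):

    import pandas as pd
    import pyarrow as pa
    import pyarrow.parquet as pq

    df = pd.DataFrame({'id': [1, 2, 3], 'name': ['a', 'b', 'c']})
    pq.write_table(pa.Table.from_pandas(df), '/tmp/sample.parquet')

    # read the file back and confirm the contents survived the round trip
    roundtrip = pq.read_table('/tmp/sample.parquet').to_pandas()
    assert roundtrip.equals(df)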
new test case test_list_multipart_upload_owner() uses two different
users to initiate multipart uploads, then tests that
list_multipart_uploads() shows the correct user ids and display names
for each upload
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit 490d0a4c4f)
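Roughly, the check looks like the following sketch, assuming two boto3
clients bound to different users and an existing bucket (the names and
keys are placeholders, not the actual test code):

    def check_multipart_upload_owners(client1, client2, bucket):
        client1.create_multipart_upload(Bucket=bucket, Key='obj1')
        client2.create_multipart_upload(Bucket=bucket, Key='obj2')

        resp = client1.list_multipart_uploads(Bucket=bucket)
        for upload in resp['Uploads']:
            # each entry reports the owner and the initiator of the upload,
            # with both the user id and the display name
            initiator = upload['Initiator']
            print(upload['Key'], initiator['ID'], initiator['DisplayName'])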
condition element of the role's trust policy and the role's
permission policy.
Signed-off-by: Pritha Srivastava <prsrivas@redhat.com>
(cherry picked from commit bf43a4a10a)
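Purely for illustration, a role trust policy with a Condition element
might look like the sketch below; the principal and tag values are
invented placeholders, not the policies used in the actual test:

    import json

    trust_policy = json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam:::user/testuser"},
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {"aws:PrincipalTag/Department": "Engineering"}
            }
        }]
    })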
Run: S3TEST_CONF=your.conf ./virtualenv/bin/nosetests s3tests.functional.test_s3:test_bucket_list_empty
This fails with "ERROR: Failure: ValueError (No such test test_bucket_list_empty)",
because test_bucket_list_empty is a test case in the s3tests_boto3 directory and has to be run via the s3tests_boto3 module path instead.
Signed-off-by: Liu Lan <liulan_yewu@cmss.chinamobile.com>
(cherry picked from commit 9ac8aef12b)
objects locked in GOVERNANCE mode can be removed with
BypassGovernanceRetention, but some tests may leave an object locked in
COMPLIANCE mode, which blocks deletion until the retention period
expires
nuke_prefixed_buckets now checks the retention policy of objects that it
fails to delete with AccessDenied, and will wait up to 60 seconds for
locks to expire before retrying the deletes. if the wait exceeds 60
seconds, it instead throws an error without deleting the bucket
instead of doing this in nuke_prefixed_buckets, we could potentially
have each object-lock test case handle this manually, but that would
add a separate delay to each test case
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit 9c4f15a47e)
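A hedged sketch of the retry logic described above (not the exact code
in nuke_prefixed_buckets; only the 60-second cap and the bypass flag
come from the description):

    import datetime
    import time

    def delete_locked_object(client, bucket, key, version_id):
        retention = client.get_object_retention(
            Bucket=bucket, Key=key, VersionId=version_id)['Retention']
        if retention['Mode'] == 'GOVERNANCE':
            # governance locks can be bypassed directly
            client.delete_object(Bucket=bucket, Key=key, VersionId=version_id,
                                 BypassGovernanceRetention=True)
            return
        # COMPLIANCE mode: wait for the retention period to expire, but only
        # up to 60 seconds, otherwise give up and report the error
        wait = (retention['RetainUntilDate'] -
                datetime.datetime.now(datetime.timezone.utc)).total_seconds()
        if wait > 60:
            raise RuntimeError('%s/%s stays locked for %ds' % (bucket, key, wait))
        time.sleep(max(wait, 0))
        client.delete_object(Bucket=bucket, Key=key, VersionId=version_id)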
speed up the cleanup by using delete_objects() with batches of 128
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit bb995c2aeb)
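A small illustration of batched deletion with delete_objects(); the
batch size of 128 comes from the commit message, the helper itself is
hypothetical:

    def delete_keys_in_batches(client, bucket, keys, batch_size=128):
        # delete_objects() accepts up to 1000 keys per request, so batches
        # of 128 stay well within the limit while cutting down round trips
        for i in range(0, len(keys), batch_size):
            batch = [{'Key': k} for k in keys[i:i + batch_size]]
            client.delete_objects(Bucket=bucket, Delete={'Objects': batch})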
Tests are added for GetBucketEncryption, PutBucketEncryption,
and DeleteBucketEncryption APIs.
Related PR: https://github.com/ceph/ceph/pull/42222
Signed-off-by: Rahul Dev Parashar <rahul.dev@flipkart.com>
(cherry picked from commit 44643af0b0)
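A rough sketch of exercising the three APIs with boto3; the SSE
algorithm and the helper name are illustrative, not the actual tests:

    def roundtrip_bucket_encryption(client, bucket):
        client.put_bucket_encryption(
            Bucket=bucket,
            ServerSideEncryptionConfiguration={
                'Rules': [{'ApplyServerSideEncryptionByDefault':
                           {'SSEAlgorithm': 'AES256'}}],
            })
        resp = client.get_bucket_encryption(Bucket=bucket)
        rule = resp['ServerSideEncryptionConfiguration']['Rules'][0]
        assert rule['ApplyServerSideEncryptionByDefault']['SSEAlgorithm'] == 'AES256'
        client.delete_bucket_encryption(Bucket=bucket)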
In nuke_prefixed_buckets, if an error is thrown while deleting a bucket, the remaining buckets are left uncleaned.
That is a resource leak on platforms that charge for resources, and the buckets then have to be cleared manually.
The original code is meant to give the user a hint by raising the error, but the resource leak of leftover buckets is annoying.
This PR skips the failing bucket and continues the teardown process. The last client error is saved and re-raised after the loop completes.
Signed-off-by: Pei <huangp0600@126.com>
Signed-off-by: Pei <phuang1@dev-new-3-3854897.slc07.dev.ebayc3.com>
(cherry picked from commit 713012c178)
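A minimal sketch of the "save the last error and keep going" approach
described above; nuke_bucket is a stand-in for the per-bucket cleanup,
passed in by the caller:

    from botocore.exceptions import ClientError

    def nuke_buckets_keep_going(buckets, nuke_bucket):
        last_err = None
        for bucket in buckets:
            try:
                nuke_bucket(bucket)
            except ClientError as e:
                last_err = e  # remember the failure, but continue the teardown
        if last_err is not None:
            raise last_err  # surface the problem once everything else is cleaned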