Make sure 403 is returned when access to the s3:GetBucketPublicAccessBlock action is denied on a GetBucketPublicAccessBlock request
Refs: https://github.com/ceph/ceph/pull/55652
Signed-off-by: Seena Fallah <seenafallah@gmail.com>
(cherry picked from commit 3af42312bf)
Make sure NoSuchPublicAccessBlockConfiguration is returned when no public access block is configured on the bucket
Refs: https://github.com/ceph/ceph/pull/55652
Signed-off-by: Seena Fallah <seenafallah@gmail.com>
(cherry picked from commit 3056e6d039)
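A rough sketch of both checks described above, assuming hypothetical pytest fixtures: client is the bucket owner, alt_client a user denied the s3:GetBucketPublicAccessBlock action, and bucket_name a bucket with no public access block configured:

    import pytest
    from botocore.exceptions import ClientError

    def check_get_public_access_block_errors(client, alt_client, bucket_name):
        # No PublicAccessBlock configured yet: expect the dedicated error code.
        with pytest.raises(ClientError) as e:
            client.get_public_access_block(Bucket=bucket_name)
        assert e.value.response['Error']['Code'] == 'NoSuchPublicAccessBlockConfiguration'

        # A user denied s3:GetBucketPublicAccessBlock: expect a 403 status.
        with pytest.raises(ClientError) as e:
            alt_client.get_public_access_block(Bucket=bucket_name)
        assert e.value.response['ResponseMetadata']['HTTPStatusCode'] == 403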
This improves the testing of presigned URLs for
both get_object and put_object when using
generate_presigned_url().
It covers the case where you pass, for example,
an x-amz-acl header (ACL in the Params for generate_presigned_url)
that should be signed.
Tests the regression in [1].
[1] https://tracker.ceph.com/issues/64308
Signed-off-by: Tobias Urdin <tobias.urdin@binero.se>
(cherry picked from commit 055451f666)
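A rough sketch of the coverage described above, using boto3 and the requests library; client, bucket_name, and the key name are placeholders:

    import requests

    def check_presigned_urls_with_acl(client, bucket_name, key='foo'):
        # Passing ACL in Params makes boto3 sign the x-amz-acl header.
        put_url = client.generate_presigned_url(
            ClientMethod='put_object',
            Params={'Bucket': bucket_name, 'Key': key, 'ACL': 'public-read'},
            ExpiresIn=3600)
        # The signed header must also be sent with the request.
        resp = requests.put(put_url, data=b'data',
                            headers={'x-amz-acl': 'public-read'})
        assert resp.status_code == 200

        get_url = client.generate_presigned_url(
            ClientMethod='get_object',
            Params={'Bucket': bucket_name, 'Key': key},
            ExpiresIn=3600)
        assert requests.get(get_url).status_code == 200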
when the local timezone is not UTC and is a day behind,
the lifecycle_header test fails with 2 days not equal to 1,
so replace datetime.now() with datetime.utcnow()
Signed-off-by: Ali Maredia <amaredia@redhat.com>
(cherry picked from commit 4744808eda)
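A minimal illustration of the change (not the actual test code; expiration_date stands in for the date parsed from the lifecycle expiration header):

    from datetime import datetime

    def expiration_days(expiration_date: datetime) -> int:
        # datetime.now() is local time and can be a day behind UTC, making
        # the computed day count off by one; compare against UTC instead.
        return (expiration_date - datetime.utcnow()).days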
https://tracker.ceph.com/issues/63537 reported that large dates (with
year after 2107) got truncated when written. test with a later date, and
check that get_object_retention() gives back the date we put
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit 40182ce26f)
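A minimal sketch of such a check, assuming a bucket created with object lock enabled and placeholder client/bucket_name names:

    from datetime import datetime, timezone

    def check_large_retention_date(client, bucket_name, key='obj'):
        # A year beyond 2107 exercises the truncation reported in the tracker issue.
        retain_until = datetime(2140, 1, 1, tzinfo=timezone.utc)
        client.put_object(Bucket=bucket_name, Key=key, Body=b'data')
        client.put_object_retention(
            Bucket=bucket_name, Key=key,
            Retention={'Mode': 'GOVERNANCE', 'RetainUntilDate': retain_until})
        resp = client.get_object_retention(Bucket=bucket_name, Key=key)
        # The date we put must come back unchanged, not truncated.
        assert resp['Retention']['RetainUntilDate'] == retain_until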
Object mtime should not change for any attr changes unless
it's a copy operation. Verify the same using the PutObjectACL op.
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
(cherry picked from commit 10f3f7620d)
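For illustration, the check might look like this minimal sketch (placeholder client/bucket_name names):

    def check_mtime_unchanged_by_acl(client, bucket_name, key='obj'):
        client.put_object(Bucket=bucket_name, Key=key, Body=b'data')
        before = client.head_object(Bucket=bucket_name, Key=key)['LastModified']
        # Changing only an attribute (the ACL) must not bump the object mtime.
        client.put_object_acl(Bucket=bucket_name, Key=key, ACL='private')
        after = client.head_object(Bucket=bucket_name, Key=key)['LastModified']
        assert before == after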
stop using head_bucket() to fetch these response headers, and use
list_objects_v2() instead to count objects and sizes
Fixes: #315
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit 188b392131)
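A sketch of counting objects and bytes from listings instead (helper name and fixtures are illustrative):

    def count_objects_and_bytes(client, bucket_name):
        # Derive the counts from list_objects_v2 pages rather than from
        # rgw-specific response headers returned by head_bucket().
        count = size = 0
        paginator = client.get_paginator('list_objects_v2')
        for page in paginator.paginate(Bucket=bucket_name):
            for obj in page.get('Contents', []):
                count += 1
                size += obj['Size']
        return count, size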
This is to avoid a get_object call for every range check, as the object size
will not change during this duration and we'd most likely already know the
object sizes beforehand
Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
(cherry picked from commit 741f2cbc9e)
Tests that concurrent multi-object delete requests which specify
the same versioned object instances return successful object
responses within the response body.
relates to: https://tracker.ceph.com/issues/56646
Signed-off-by: Cory Snyder <csnyder@iland.com>
(cherry picked from commit e18ea7fac4)
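A condensed sketch of the scenario, assuming objects is a list of {'Key': ..., 'VersionId': ...} entries already created on a versioned bucket (all names are placeholders):

    from concurrent.futures import ThreadPoolExecutor

    def concurrent_versioned_deletes(client, bucket_name, objects, num_threads=3):
        # Every thread sends the same multi-object delete request.
        def do_delete():
            return client.delete_objects(Bucket=bucket_name,
                                         Delete={'Objects': objects})
        with ThreadPoolExecutor(max_workers=num_threads) as pool:
            futures = [pool.submit(do_delete) for _ in range(num_threads)]
            results = [f.result() for f in futures]
        for resp in results:
            # Each response body should report every object as Deleted.
            assert len(resp.get('Deleted', [])) == len(objects)
            assert not resp.get('Errors')
        return results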
Align some of the test routines after removing the XML tags (<Payload><Records><Payload>) from the s3select results. (JSON s3tests #506)
Signed-off-by: galsalomon66 <gal.salomon@gmail.com>
Before the RGW fix PR, RGW was responding with 411 instead of 200
RGW fix PR: https://github.com/ceph/ceph/pull/50235
Signed-off-by: Mark Kogan <mkogan@redhat.com>
(cherry picked from commit 13a9bfc00a)
when the tests were converted from boto2, they were rewritten as loops
over client.delete_object(). switch back to multi-delete
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit 787dc6bd43)
A few checks were incorrectly mapped when switching to 'assert'. This
commit fixes them.
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
(cherry picked from commit 29b0e27e49)
Mark testcase "test_lifecycle_expiration_header_and_tags_head" as
fails_on_dbstore
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
(cherry picked from commit d976f47d74)
also, give more accurate instructions on how to run the tests
Signed-off-by: Yuval Lifshitz <ylifshit@redhat.com>
(cherry picked from commit 3437cda73d)
`ERR_TOO_SMALL` is wrongly returned if all of the following are true:
- get_data returns multiple items (chunks)
- the length of the last item is smaller than the POST Policy's min
value for content-length-range.
The check should be `(ofs < min_len)` instead of `(len < min_len)`
This is further confirmed by the next line of `s->obj_size = ofs`
Move the `int len` scope inside the loop to try and prevent the bug in
the future.
The bug was refactored in 2016, but was introduced in Oct 2012, when
this functionality was first added to RGW in commit 7bb3504d3f0974e9863f536e9af0ce8889d6888f.
Reference: 933a42f9af/src/rgw/rgw_op.cc (L4474-L4513)
Reference: 7bb3504d3f
Signed-off-by: Robin H. Johnson <rjohnson@digitalocean.com>
(cherry picked from commit 5914eb2005)
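The core of the fix, expressed as an illustrative Python sketch rather than the actual RGW C++ (chunks and min_len stand in for the streamed data items and the policy's content-length-range minimum):

    def check_content_length_range(chunks, min_len):
        ofs = 0
        for chunk in chunks:
            length = len(chunk)   # scoped to the loop, as in the fix
            ofs += length
        # The bug compared the last chunk's length against the minimum;
        # the policy minimum applies to the total bytes written, i.e. ofs.
        if ofs < min_len:
            raise ValueError('ERR_TOO_SMALL')
        return ofs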
this has been failing consistently in local testing. test_sts.py has
lots of user policy test coverage, so this test case in test_s3.py is
superfluous
Fixes: https://tracker.ceph.com/issues/58365
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit 18a41ab63f)
I don't think any of our CopyObj test cases were large enough to have
tail objects, so they weren't exercising our tail object ref counting
strategy
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit defb8eb977)
Mark Tag/Untag testcases that are failing on dbstore, as per the latest run against main
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
(cherry picked from commit a8ee732732)
Updated the test_set_bucket_tagging test to verify the HTTP status code
for the DeleteBucketTagging case.
Related CEPH PR: https://github.com/ceph/ceph/pull/47262
Signed-off-by: Shriya Deshmukh <shriya.deshmukh@seagate.com>
(cherry picked from commit c8fc8cd7c8)
Mark User policy tests failing on dbstore as 'fails_on_dbstore'
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
(cherry picked from commit 5d63ebf83d)
original tests by Priya Sehgal <priya.sehgal@flipkart.com>:
rgw/s3_boto3: Tests added for SSE-S3 (GET, PUT, HEAD, MPU).
Additions by Casey Bodley <cbodley@redhat.com>:
add 'sse-s3' tag to test cases
sse: add _put_bucket_encryption() helper function
sse: document test cases with default bucket encryption
sse: expects encryption response header on put/get
sse: add 8MB default-encrypted upload
sse: test uploads that request x-amz-server-side-encryption=AES256
Lastly, all my changes (Marcus Watts <mwatts@redhat.com>):
remove obsolete test - do it only in boto3 now.
Combine or rename duplicated function names.
Giving more than one test the same name is a Bad Thing(tm).
sse: expand test_bucket_policy_put_obj_enc, and _put_bucket_encryption
test_bucket_policy_put_obj_enc was testing too many things at once.
new tests:
* customer encryption and sse-s3: should fail
* customer encryption and sse-kms: should fail
* deny if not sse-s3: no-enc fails, sse-s3 succeeds.
* deny if not sse-s3: kms fails
* deny if not sse-kms: no-enc fails, sse-kms succeeds.
* deny if not sse-kms: s3 fails
_put_bucket_encryption was only testing sse-s3.
* test both these variations: sse-s3 and sse-kms
Note:
* these tests will fail on pre-sse-s3 ceph.
python3: comment out all boto3.set_stream_logger() calls
They made too much output.
Signed-off-by: Marcus Watts <mwatts@redhat.com>
(cherry picked from commit dd7cac25f5)
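A sketch of what the _put_bucket_encryption() helper mentioned above might look like with boto3 (parameter names are illustrative, not necessarily those used in the suite):

    def _put_bucket_encryption(client, bucket_name, sse_algorithm='AES256'):
        # Install a default-encryption rule on the bucket; sse_algorithm is
        # 'AES256' for SSE-S3 or 'aws:kms' for SSE-KMS.
        client.put_bucket_encryption(
            Bucket=bucket_name,
            ServerSideEncryptionConfiguration={
                'Rules': [{
                    'ApplyServerSideEncryptionByDefault': {
                        'SSEAlgorithm': sse_algorithm
                    }
                }]
            })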
To be able to successfully run s3tests on dbstore backend in teuthology,
mark all the s3-tests currently failing on it with 'fails_on_dbstore' attr
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
This solves: https://tracker.ceph.com/issues/53090
The solution is: we need to delete the role_policy and
user_policy attached to the user, which were causing the failure.
Signed-off-by: Kalpesh Pandya <kapandya@redhat.com>
skip the output-serial test. the results from both queries are not equal, thus it raises an assert. the problem seems to be the formatting before the comparison
remove test_output_serial_expressions until the test is fixed
experiment with pyarrow for parquet testing: add arrow/parquet to bootstrap and install pyarrow and pandas for reading/writing parquet
Signed-off-by: gal salomon <gal.salomon@gmail.com>
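A minimal pyarrow round trip of the kind used to produce parquet test input (file path and column names are made up):

    import pyarrow as pa
    import pyarrow.parquet as pq

    def write_and_read_parquet(path='/tmp/sample.parquet'):
        table = pa.table({'c1': [1, 2, 3], 'c2': ['a', 'b', 'c']})
        pq.write_table(table, path)
        # pandas is only needed for the to_pandas() conversion.
        return pq.read_table(path).to_pandas()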
new test case test_list_multipart_upload_owner() uses two different
users to initiate multipart uploads, then tests that
list_multipart_uploads() shows the correct user ids and display names
for each upload
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit 490d0a4c4f)
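A condensed sketch of the scenario; the two clients and the user IDs are placeholders for the main and alt credentials used elsewhere in the suite:

    def check_multipart_upload_owners(client1, client2, bucket_name, user1, user2):
        upload1 = client1.create_multipart_upload(Bucket=bucket_name, Key='obj1')
        upload2 = client2.create_multipart_upload(Bucket=bucket_name, Key='obj2')
        uploads = client1.list_multipart_uploads(Bucket=bucket_name)['Uploads']
        by_key = {u['Key']: u for u in uploads}
        # The Initiator field should reflect whoever started each upload.
        assert by_key['obj1']['Initiator']['ID'] == user1
        assert by_key['obj2']['Initiator']['ID'] == user2
        client1.abort_multipart_upload(Bucket=bucket_name, Key='obj1',
                                       UploadId=upload1['UploadId'])
        client2.abort_multipart_upload(Bucket=bucket_name, Key='obj2',
                                       UploadId=upload2['UploadId'])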
condition element of the role's trust policy and the role's
permission policy.
Signed-off-by: Pritha Srivastava <prsrivas@redhat.com>
(cherry picked from commit bf43a4a10a)
objects locked in GOVERNANCE mode can be removed with
BypassGovernanceRetention, but some tests may leave an object locked in
COMPLIANCE mode, which blocks deletion until the retention period
expires
nuke_prefixed_buckets now checks the retention policy of objects that it
fails to delete with AccessDenied, and will wait up to 60 seconds for
locks to expire before retrying the deletes. if the wait exceeds 60
seconds, it instead throws an error without deleting the bucket
instead of doing this in nuke_prefixed_buckets, we could potentially
have each object-lock test case handle this manually, but that would
add a separate delay to each test case
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit 9c4f15a47e)
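Roughly how the retention-aware wait might look (a sketch, not the exact nuke_prefixed_buckets code; wait_for_object_lock is a hypothetical helper called after a delete fails with AccessDenied):

    import time
    from datetime import datetime, timezone

    def wait_for_object_lock(client, bucket_name, key, version_id, timeout=60):
        resp = client.get_object_retention(Bucket=bucket_name, Key=key,
                                           VersionId=version_id)
        retain_until = resp['Retention']['RetainUntilDate']
        remaining = (retain_until - datetime.now(timezone.utc)).total_seconds()
        if remaining > timeout:
            # Locked well past the grace period: give up instead of hanging.
            raise RuntimeError(f'{key} is locked for more than {timeout}s')
        if remaining > 0:
            time.sleep(remaining)   # then the caller retries the delete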
speed up the cleanup by using delete_objects() with batches of 128
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit bb995c2aeb)
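For illustration, the batching might look like this (hypothetical helper; 128 matches the batch size mentioned above):

    def delete_keys_in_batches(client, bucket_name, keys, batch_size=128):
        # One multi-object delete per batch instead of one round trip per key.
        for i in range(0, len(keys), batch_size):
            batch = [{'Key': k} for k in keys[i:i + batch_size]]
            client.delete_objects(Bucket=bucket_name, Delete={'Objects': batch})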
Tests are added for GetBucketEncryption, PutBucketEncryption,
and DeleteBucketEncryption APIs.
Related PR: https://github.com/ceph/ceph/pull/42222
Signed-off-by: Rahul Dev Parashar <rahul.dev@flipkart.com>
(cherry picked from commit 44643af0b0)
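A sketch of the round trip these tests cover, with placeholder client/bucket_name names; the error code for a missing configuration is the one AWS documents for GetBucketEncryption:

    import pytest
    from botocore.exceptions import ClientError

    def check_bucket_encryption_roundtrip(client, bucket_name):
        client.put_bucket_encryption(
            Bucket=bucket_name,
            ServerSideEncryptionConfiguration={
                'Rules': [{'ApplyServerSideEncryptionByDefault':
                           {'SSEAlgorithm': 'AES256'}}]})
        resp = client.get_bucket_encryption(Bucket=bucket_name)
        rule = resp['ServerSideEncryptionConfiguration']['Rules'][0]
        assert rule['ApplyServerSideEncryptionByDefault']['SSEAlgorithm'] == 'AES256'

        client.delete_bucket_encryption(Bucket=bucket_name)
        # After deletion, GetBucketEncryption reports no configuration.
        with pytest.raises(ClientError) as e:
            client.get_bucket_encryption(Bucket=bucket_name)
        assert e.value.response['Error']['Code'] == \
            'ServerSideEncryptionConfigurationNotFoundError'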
In the nuke_prefixed_buckets function, if an error is thrown while deleting buckets, the remaining buckets are left uncleaned.
That is a kind of resource leak on platforms that charge for storage, and we have to clear them manually.
I know the original code is meant to give the user a hint by raising the error, but the resource leak of leftover buckets is a little annoying.
This PR skips the problem point and continues the teardown process. The last client error is saved and re-raised after the loop completes.
Signed-off-by: Pei <huangp0600@126.com>
Signed-off-by: Pei <phuang1@dev-new-3-3854897.slc07.dev.ebayc3.com>
(cherry picked from commit 713012c178)
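A sketch of the described behavior (nuke_bucket stands in for the existing per-bucket cleanup, which is assumed rather than shown):

    from botocore.exceptions import ClientError

    def nuke_buckets_continue_on_error(client, buckets):
        # Keep tearing down the remaining buckets even if one of them fails;
        # remember the last client error and re-raise it after the loop.
        last_error = None
        for bucket in buckets:
            try:
                nuke_bucket(client, bucket)
            except ClientError as e:
                last_error = e
        if last_error is not None:
            raise last_error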