when the local timezone is not UTC and is a day behind, the
lifecycle_header tests fail with "2 days not equal to 1",
so replace datetime.now() with datetime.utcnow()
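The shape of the change, roughly (a sketch; variable names are
illustrative, not the actual test code):

    import datetime

    # before (local time; a zone behind UTC skews the day count):
    #   now = datetime.datetime.now()
    # after (UTC, matching the date math for the Expiration header):
    now = datetime.datetime.utcnow()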
Signed-off-by: Ali Maredia <amaredia@redhat.com>
Make sure 403 is returned when access is denied via s3:GetBucketPublicAccessBlock action on GetBucketPublicAccessBlock
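A minimal sketch of the expected behavior, assuming a boto3 client
whose credentials are denied that action (setup omitted):

    from botocore.exceptions import ClientError

    try:
        client.get_public_access_block(Bucket=bucket_name)
    except ClientError as e:
        assert e.response['ResponseMetadata']['HTTPStatusCode'] == 403
        assert e.response['Error']['Code'] == 'AccessDenied'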
Refs: https://github.com/ceph/ceph/pull/55652
Signed-off-by: Seena Fallah <seenafallah@gmail.com>
Make sure NoSuchPublicAccessBlockConfiguration is returned when no public block is configured on bucket:
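Roughly, the check (a sketch, assuming a fresh bucket with no public
access block configured):

    from botocore.exceptions import ClientError

    try:
        client.get_public_access_block(Bucket=bucket_name)
    except ClientError as e:
        assert e.response['Error']['Code'] == \
            'NoSuchPublicAccessBlockConfiguration'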
Refs: https://github.com/ceph/ceph/pull/55652
Signed-off-by: Seena Fallah <seenafallah@gmail.com>
This improves the testing of presigned URLs for
both get_object and put_object when using
generate_presigned_url().
It covers the case where you pass, for example,
an x-amz-acl header (ACL in Params for
generate_presigned_url()) that should be signed.
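A rough sketch of the put_object side, assuming a boto3 client and the
requests library (setup omitted):

    import requests

    url = client.generate_presigned_url(
        ClientMethod='put_object',
        Params={'Bucket': bucket_name, 'Key': 'foo',
                'ACL': 'public-read'},  # becomes a signed x-amz-acl header
        ExpiresIn=600)
    # the request must send the same header that was signed
    r = requests.put(url, data=b'data',
                     headers={'x-amz-acl': 'public-read'})
    assert r.status_code == 200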
Tests the regression in [1].
[1] https://tracker.ceph.com/issues/64308
Signed-off-by: Tobias Urdin <tobias.urdin@binero.se>
https://tracker.ceph.com/issues/63537 reported that large dates (with
a year after 2107) got truncated when written. test with a later date,
and check that get_object_retention() gives back the date we put
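The round-trip, sketched (assuming an object-lock-enabled bucket):

    from datetime import datetime, timezone

    retain_until = datetime(2140, 1, 1, tzinfo=timezone.utc)
    client.put_object_retention(
        Bucket=bucket_name, Key=key,
        Retention={'Mode': 'GOVERNANCE', 'RetainUntilDate': retain_until})
    resp = client.get_object_retention(Bucket=bucket_name, Key=key)
    assert resp['Retention']['RetainUntilDate'] == retain_until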
Signed-off-by: Casey Bodley <cbodley@redhat.com>
Object mtime should not change for any attr changes unless
it's a copy operation. Verify this using the PutObjectACL op.
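A sketch of the verification (assuming a bucket with one object
already uploaded):

    before = client.head_object(Bucket=bucket_name, Key=key)['LastModified']
    client.put_object_acl(Bucket=bucket_name, Key=key, ACL='public-read')
    after = client.head_object(Bucket=bucket_name, Key=key)['LastModified']
    assert before == after  # an attr change must not bump mtime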
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
stop using head_bucket() to fetch these response headers, and use
list_objects_v2() instead to count objects and sizes
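Counting with list_objects_v2() looks roughly like this (paginated
sketch):

    count, size = 0, 0
    paginator = client.get_paginator('list_objects_v2')
    for page in paginator.paginate(Bucket=bucket_name):
        for obj in page.get('Contents', []):
            count += 1
            size += obj['Size']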
Fixes: #315
Signed-off-by: Casey Bodley <cbodley@redhat.com>
This avoids a get_object call for every range check, since the object
size will not change for the duration of the test and we most likely
already know the object sizes beforehand.
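The idea, sketched (ranges_to_check is a hypothetical list of
(offset, length) pairs):

    # look up the size once instead of once per range check
    size = client.head_object(Bucket=bucket_name, Key=key)['ContentLength']
    for offset, length in ranges_to_check:
        rng = 'bytes={}-{}'.format(offset, offset + length - 1)
        body = client.get_object(Bucket=bucket_name, Key=key,
                                 Range=rng)['Body'].read()
        assert len(body) == min(length, size - offset)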
Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
Tests that concurrent multi-object delete requests which specify
the same versioned object instances return successful object
responses within the response body.
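The scenario, sketched with a thread pool (assuming a versioned bucket
and a list of (key, version_id) pairs):

    from concurrent.futures import ThreadPoolExecutor

    objects = [{'Key': k, 'VersionId': v} for k, v in versions]

    def multi_delete():
        return client.delete_objects(Bucket=bucket_name,
                                     Delete={'Objects': objects})

    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(multi_delete) for _ in range(4)]
        results = [f.result() for f in futures]
    for r in results:
        # every request should report per-object success, no Errors
        assert len(r.get('Errors', [])) == 0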
relates to: https://tracker.ceph.com/issues/56646
Signed-off-by: Cory Snyder <csnyder@iland.com>
when the tests were converted from boto2, they were rewritten as loops
over client.delete_object(). switch back to multi-delete
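i.e. one batched request instead of a loop (sketch):

    # before: one request per key
    # for key in keys:
    #     client.delete_object(Bucket=bucket_name, Key=key)
    # after: a single multi-object delete
    client.delete_objects(
        Bucket=bucket_name,
        Delete={'Objects': [{'Key': k} for k in keys]})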
Signed-off-by: Casey Bodley <cbodley@redhat.com>
Before the RGW fix PR, RGW was responding with 411 instead of 200.
RGW fix PR: https://github.com/ceph/ceph/pull/50235
Signed-off-by: Mark Kogan <mkogan@redhat.com>
`ERR_TOO_SMALL` is wrongly returned if all of the following are true:
- get_data returns multiple items (chunks)
- the length of the last item is smaller than the POST Policy's min
value for content-length-range.
The check should be `(ofs < min_len)` instead of `(len < min_len)`
This is further confirmed by the next line of `s->obj_size = ofs`
Move the `int len` scope inside the loop to try to prevent the bug in
the future.
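In Python terms, the loop logic looks like this (a sketch of the
control flow, not the RGW C++ code):

    MIN_LEN = 1024          # POST Policy content-length-range minimum

    ofs = 0
    for chunk in chunks:    # get_data may return multiple chunks
        length = len(chunk) # scoped per iteration, as in the fix
        ofs += length
    # wrong:   length < MIN_LEN  (checks only the last chunk)
    # correct: total bytes written, which becomes s->obj_size
    if ofs < MIN_LEN:
        raise ValueError('ERR_TOO_SMALL')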
The buggy code was refactored in 2016, but the bug was introduced in
Oct 2012, when this functionality was first added to RGW in commit
7bb3504d3f0974e9863f536e9af0ce8889d6888f.
Reference: 933a42f9af/src/rgw/rgw_op.cc (L4474-L4513)
Reference: 7bb3504d3f
Signed-off-by: Robin H. Johnson <rjohnson@digitalocean.com>
this has been failing consistently in local testing. test_sts.py has
lots of user policy test coverage, so this test case in test_s3.py is
superfluous
Fixes: https://tracker.ceph.com/issues/58365
Signed-off-by: Casey Bodley <cbodley@redhat.com>
i don't think any of our CopyObj test cases were large enough to have
tail objects, so they weren't exercising our tail object ref counting
strategy
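A sketch of the kind of test needed (the head/tail cutoff depends on
rgw_max_chunk_size, commonly 4MB, so 10MB is assumed to be plenty):

    key, size = 'src', 10 * 1024 * 1024
    client.put_object(Bucket=bucket_name, Key=key, Body=b'a' * size)
    client.copy_object(Bucket=bucket_name, Key='dst',
                       CopySource={'Bucket': bucket_name, 'Key': key})
    resp = client.get_object(Bucket=bucket_name, Key='dst')
    assert resp['ContentLength'] == size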
Signed-off-by: Casey Bodley <cbodley@redhat.com>
Updated the test_set_bucket_tagging test to verify the http status
code for the DeleteBucketTagging case.
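The added check, roughly (AWS returns 204 No Content for
DeleteBucketTagging; sketch):

    client.put_bucket_tagging(
        Bucket=bucket_name,
        Tagging={'TagSet': [{'Key': 'k', 'Value': 'v'}]})
    resp = client.delete_bucket_tagging(Bucket=bucket_name)
    assert resp['ResponseMetadata']['HTTPStatusCode'] == 204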
Related CEPH PR: https://github.com/ceph/ceph/pull/47262
Signed-off-by: Shriya Deshmukh <shriya.deshmukh@seagate.com>
original tests by Priya Sehgal <priya.sehgal@flipkart.com>:
rgw/s3_boto3: Tests added for SSE-S3 (GET, PUT, HEAD, MPU).
Additions by Casey Bodley <cbodley@redhat.com>:
add 'sse-s3' tag to test cases
sse: add _put_bucket_encryption() helper function
sse: document test cases with default bucket encryption
sse: expects encryption response header on put/get
sse: add 8MB default-encrypted upload
sse: test uploads that request x-amz-server-side-encryption=AES256
Lastly, all my changes (Marcus Watts <mwatts@redhat.com>):
remove obsolete test - do it only in boto3 now.
Combine or rename duplicated function names.
Giving more than one test the same name is a Bad Thing(tm).
sse: expand test_bucket_policy_put_obj_enc and _put_bucket_encryption
test_bucket_policy_put_obj_enc was testing too many things at once.
new tests:
* customer encryption and sse-s3: should fail
* customer encryption and sse-kms: should fail
* deny if not sse-s3: no-enc fails, sse-s3 succeeds.
* deny if not sse-s3: kms fails
* deny if not sse-kms: no-enc fails, sse-kms succeeds.
* deny if not sse-kms: s3 fails
_put_bucket_encryption was only testing sse-s3.
* test both these variations: sse-s3 and sse-kms (see the sketch
  after this list)
Note:
* these tests will fail on pre-sse-s3 ceph.
python3: comment out all boto3.set_stream_logger() calls
They produced too much output.
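A sketch of the _put_bucket_encryption helper covering both variations
(shape assumed from the description above, not necessarily the exact
code):

    def _put_bucket_encryption(client, bucket_name, alg):
        # alg is 'AES256' (sse-s3) or 'aws:kms' (sse-kms)
        client.put_bucket_encryption(
            Bucket=bucket_name,
            ServerSideEncryptionConfiguration={
                'Rules': [{
                    'ApplyServerSideEncryptionByDefault': {
                        'SSEAlgorithm': alg}}]})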
Signed-off-by: Marcus Watts <mwatts@redhat.com>
To be able to run s3tests successfully on the dbstore backend in
teuthology, mark all the s3-tests currently failing on it with the
'fails_on_dbstore' attr
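Marking looks like this (assuming the nose-style attr decorator the
suite used at the time):

    from nose.plugins.attrib import attr

    @attr('fails_on_dbstore')
    def test_something():
        ...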
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
This solves: https://tracker.ceph.com/issues/53090
The solution is: we need to delete the role_policy and the
user_policy attached to the user, which was causing the failure.
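Cleanup along these lines (a sketch, assuming an IAM client and the
policy/user/role names used by the test):

    iam_client.delete_user_policy(UserName=user_name,
                                  PolicyName=user_policy_name)
    iam_client.delete_role_policy(RoleName=role_name,
                                  PolicyName=role_policy_name)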
Signed-off-by: Kalpesh Pandya <kapandya@redhat.com>
skip the output-serial test. the results from both queries are not
equal, thus it raises an assert. the problem seems to be the
formatting before the comparison
remove test_output_serial_expressions until the test is fixed
experiment with pyarrow for parquet testing: add arrow/parquet to
bootstrap, install pyarrow and pandas for reading/writing parquet
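The parquet round-trip for generating test input, roughly (sketch):

    import pandas as pd
    import pyarrow as pa
    import pyarrow.parquet as pq

    df = pd.DataFrame({'a': [1, 2, 3], 'b': ['x', 'y', 'z']})
    pq.write_table(pa.Table.from_pandas(df), '/tmp/test.parquet')
    table = pq.read_table('/tmp/test.parquet')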
Signed-off-by: gal salomon <gal.salomon@gmail.com>