Object mtime should not change for any attr changes unless
it's a copy operation. Verify this using the PutObjectACL op.
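A rough sketch of the intended check, assuming a configured boto3 client (the bucket and key names here are hypothetical, not the suite's actual test code):

    import boto3

    client = boto3.client('s3')
    client.put_object(Bucket='my-bucket', Key='obj', Body=b'data')

    before = client.head_object(Bucket='my-bucket', Key='obj')['LastModified']
    client.put_object_acl(Bucket='my-bucket', Key='obj', ACL='private')
    after = client.head_object(Bucket='my-bucket', Key='obj')['LastModified']

    # an ACL (attr) change must not bump the object's mtime
    assert before == after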
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
stop using head_bucket() to fetch these response headers, and use
list_objects_v2() instead to count objects and sizes
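For reference, a minimal sketch of counting objects and total size by listing, assuming a configured boto3 client and a hypothetical bucket name:

    import boto3

    client = boto3.client('s3')
    count = 0
    size = 0
    # paginate so buckets with more than 1000 objects are counted correctly
    for page in client.get_paginator('list_objects_v2').paginate(Bucket='my-bucket'):
        for obj in page.get('Contents', []):
            count += 1
            size += obj['Size']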
Fixes: #315
Signed-off-by: Casey Bodley <cbodley@redhat.com>
This avoids a get_object call for every range check, since the object size
will not change for the duration of the test and we most likely already know
the object sizes beforehand.
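A minimal sketch of the idea, assuming a configured boto3 client; the bucket, keys, and sizes are hypothetical:

    import boto3

    client = boto3.client('s3')
    # sizes recorded at upload time, so no extra request is needed per check
    sizes = {'obj1': 8 * 1024 * 1024, 'obj2': 1024}

    def check_range(key, start, end):
        total = sizes[key]  # cached size instead of an extra get_object/head_object
        resp = client.get_object(Bucket='my-bucket', Key=key,
                                 Range='bytes=%d-%d' % (start, end))
        assert resp['ContentLength'] == min(end, total - 1) - start + 1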
Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
Tests that concurrent multi-object delete requests which specify
the same versioned object instances return successful object
responses within the response body.
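Roughly the shape of the test, assuming a configured boto3 client; the bucket, keys, and version ids are placeholders:

    import boto3
    from concurrent.futures import ThreadPoolExecutor

    client = boto3.client('s3')
    objects = [{'Key': 'key1', 'VersionId': 'v1'},
               {'Key': 'key2', 'VersionId': 'v2'}]

    def do_delete(_):
        return client.delete_objects(Bucket='my-bucket',
                                     Delete={'Objects': objects})

    # all concurrent requests should report the versions as Deleted, not Errors
    with ThreadPoolExecutor(max_workers=4) as pool:
        for resp in pool.map(do_delete, range(4)):
            assert len(resp.get('Deleted', [])) == len(objects)
            assert not resp.get('Errors')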
relates to: https://tracker.ceph.com/issues/56646
Signed-off-by: Cory Snyder <csnyder@iland.com>
when the tests were converted from boto2, they were rewritten as loops
over client.delete_object(). switch back to multi-delete
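For illustration, the loop-per-key form versus a single multi-delete request, assuming a configured boto3 client and hypothetical keys:

    import boto3

    client = boto3.client('s3')
    keys = ['key-%d' % i for i in range(10)]

    # boto2-conversion era: one request per key
    #   for key in keys:
    #       client.delete_object(Bucket='my-bucket', Key=key)

    # multi-delete: one request for all keys
    resp = client.delete_objects(
        Bucket='my-bucket',
        Delete={'Objects': [{'Key': k} for k in keys]})
    assert len(resp['Deleted']) == len(keys)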
Signed-off-by: Casey Bodley <cbodley@redhat.com>
Before the RGW fix PR, RGW was responding with 411 instead of 200.
RGW fix PR: https://github.com/ceph/ceph/pull/50235
Signed-off-by: Mark Kogan <mkogan@redhat.com>
`ERR_TOO_SMALL` is wrongly returned if all of the following are true:
- the get_data returns multiple items (chunks)
- the length of the last item is smaller than the POST Policy's min
value for content-length-range.
The check should be `(ofs < min_len)` instead of `(len < min_len)`.
This is further confirmed by the next line, `s->obj_size = ofs`.
Move the `int len` scope inside the loop to try and prevent the bug in
the future.
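A Python sketch of the corrected logic (the actual code is C++ in rgw_op.cc; `get_chunk` here is a hypothetical stand-in for get_data):

    def read_post_body(get_chunk, min_len, max_len):
        ofs = 0
        while True:
            chunk = get_chunk()
            if not chunk:
                break
            length = len(chunk)   # per-chunk length, scoped inside the loop
            ofs += length
            if ofs > max_len:
                raise ValueError('ERR_TOO_LARGE')
        # compare the accumulated size against the policy minimum,
        # not the size of the last chunk
        if ofs < min_len:
            raise ValueError('ERR_TOO_SMALL')
        return ofs                # what becomes s->obj_size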
The code containing the bug was refactored in 2016, but the bug itself was
introduced in Oct 2012, when this functionality was first added to RGW in
commit 7bb3504d3f0974e9863f536e9af0ce8889d6888f.
Reference: 933a42f9af/src/rgw/rgw_op.cc (L4474-L4513)
Reference: 7bb3504d3f
Signed-off-by: Robin H. Johnson <rjohnson@digitalocean.com>
this has been failing consistently in local testing. test_sts.py has
lots of user policy test coverage, so this test case in test_s3.py is
superfluous
Fixes: https://tracker.ceph.com/issues/58365
Signed-off-by: Casey Bodley <cbodley@redhat.com>
i don't think any of our CopyObj test cases were large enough to have
tail objects, so they weren't exercising our tail object ref counting
strategy
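For example, a copy large enough to have tail objects, assuming the default RGW head/chunk size of about 4MB (bucket and key names are hypothetical):

    import boto3

    client = boto3.client('s3')
    body = b'a' * (8 * 1024 * 1024)   # well past the head object size
    client.put_object(Bucket='src-bucket', Key='big', Body=body)
    client.copy_object(Bucket='dst-bucket', Key='big-copy',
                       CopySource={'Bucket': 'src-bucket', 'Key': 'big'})
    copied = client.get_object(Bucket='dst-bucket', Key='big-copy')['Body'].read()
    assert copied == body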
Signed-off-by: Casey Bodley <cbodley@redhat.com>
Updated the test_set_bucket_tagging test to verify the HTTP status code
for the DeleteBucketTagging case.
Related CEPH PR: https://github.com/ceph/ceph/pull/47262
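A minimal sketch of the status-code check, assuming a configured boto3 client and a hypothetical bucket; AWS returns 204 (No Content) for DeleteBucketTagging:

    import boto3

    client = boto3.client('s3')
    client.put_bucket_tagging(
        Bucket='my-bucket',
        Tagging={'TagSet': [{'Key': 'env', 'Value': 'test'}]})
    resp = client.delete_bucket_tagging(Bucket='my-bucket')
    # DeleteBucketTagging should return 204 No Content
    assert resp['ResponseMetadata']['HTTPStatusCode'] == 204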
Signed-off-by: Shriya Deshmukh <shriya.deshmukh@seagate.com>
original tests by Priya Sehgal <priya.sehgal@flipkart.com>:
rgw/s3_boto3: Tests added for SSE-S3 (GET, PUT, HEAD, MPU).
Additions by Casey Bodley <cbodley@redhat.com>:
add 'sse-s3' tag to test cases
sse: add _put_bucket_encryption() helper function
sse: document test cases with default bucket encryption
sse: expects encryption response header on put/get
sse: add 8MB default-encrypted upload
sse: test uploads that request x-amz-server-side-encryption=AES256
Lastly, all my changes (Marcus Watts <mwatts@redhat.com>):
remove obsolete test - do it only in boto3 now.
Combine or rename duplicated function names.
Giving more than one test the same name is a Bad Thing(tm).
sse: expand test_bucket_policy_put_obj_enc, and _put_bucket_encryption
test_bucket_policy_put_obj_enc was testing too many things at once.
new tests:
* customer encryption and sse-s3: should fail
* customer encryption and sse-kms: should fail
* deny if not sse-s3: no-enc fails, sse-s3 succeeds.
* deny if not sse-s3: kms fails
* deny if not sse-kms: no-enc fails, sse-kms succeeds.
* deny if not sse-kms: s3 fails
_put_bucket_encryption was only testing sse-s3.
* test both these variations: sse-s3 and sse-kms (see the sketch below)
Note:
* these tests will fail on pre-sse-s3 ceph.
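A rough sketch of the helper and the "deny if not sse-s3" policy shape described above, assuming a configured boto3 client; the bucket name and the helper's exact signature are guesses, not the suite's actual code:

    import boto3
    import json

    client = boto3.client('s3')

    def _put_bucket_encryption(client, bucket, sse_algorithm):
        # default bucket encryption: 'AES256' for sse-s3, 'aws:kms' for sse-kms
        client.put_bucket_encryption(
            Bucket=bucket,
            ServerSideEncryptionConfiguration={
                'Rules': [{'ApplyServerSideEncryptionByDefault':
                           {'SSEAlgorithm': sse_algorithm}}]})

    # deny PutObject unless the request asks for sse-s3 (AES256)
    policy = json.dumps({
        'Version': '2012-10-17',
        'Statement': [{
            'Effect': 'Deny',
            'Principal': '*',
            'Action': 's3:PutObject',
            'Resource': 'arn:aws:s3:::my-bucket/*',
            'Condition': {'StringNotEquals':
                          {'s3:x-amz-server-side-encryption': 'AES256'}}}]})
    client.put_bucket_policy(Bucket='my-bucket', Policy=policy)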
python3: comment out all boto3.set_stream_logger() calls
They made too much output.
Signed-off-by: Marcus Watts <mwatts@redhat.com>
To be able to successfully run s3tests on the dbstore backend in teuthology,
mark all the s3-tests currently failing on it with the 'fails_on_dbstore' attr.
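For example (a sketch assuming the nose-style @attr tags already used throughout the suite; the test name is hypothetical):

    from nose.plugins.attrib import attr

    @attr(resource='bucket')
    @attr('fails_on_dbstore')
    def test_feature_unsupported_on_dbstore():
        pass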
Signed-off-by: Soumya Koduri <skoduri@redhat.com>