For example:
The request sends 'Origin: example.origin' while * is allowed as
an origin by the CORS rule.
The server should then respond with * if no Authorization header was sent.
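A minimal sketch of the expected exchange, using the requests library
against a hypothetical endpoint and bucket:
    import requests
    # Hypothetical endpoint/bucket whose CORS rule allows AllowedOrigin '*'.
    url = 'http://localhost:8000/test-bucket/obj'
    r = requests.get(url, headers={'Origin': 'example.origin'})
    # No Authorization header was sent, so the wildcard should be echoed
    # back rather than the specific origin.
    assert r.headers.get('Access-Control-Allow-Origin') == '*'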
Signed-off-by: Wido den Hollander <wido@widodh.nl>
If you run the S3 test suite against a reverse proxy (e.g. haproxy)
rather than RGW directly, some of the tests will return errors from the
proxy, rather than RGW itself.
Some proxies differ from RGW in the exact case of 'Bad Request', so do
the match in a case-insensitive manner. Haproxy, for example, returns
'Bad request'.
This removes the need for some of the prior fails_on_dho test tags, as
the failure was due to haproxy.
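A sketch of the case-insensitive comparison this implies (the helper
is illustrative, not the suite's exact code):
    def check_bad_request(reason):
        # RGW sends 'Bad Request'; haproxy sends 'Bad request'.
        # Compare case-insensitively so both spellings pass.
        assert reason.lower() == 'bad request'
    check_bad_request('Bad request')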
Signed-off-by: Robin H. Johnson <robin.johnson@dreamhost.com>
Bucket cleanup code was exiting after the first successful bucket
removal, instead of cleaning up all buckets. This left many buckets
behind over time, causing large slowdowns of the test suite.
This was introduced in commit c41ebab9cf,
when changes for versioned files were added.
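Illustratively, the fix keeps iterating instead of returning after the
first removal (a simplified sketch, not the suite's exact code):
    def nuke_buckets(buckets, delete_bucket):
        for bucket in buckets:
            delete_bucket(bucket)
            # The bug: an early return here after the first successful
            # removal left all remaining buckets behind.
    nuke_buckets(['b1', 'b2'], lambda b: None)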
Signed-off-by: Kobi Laredo <kobi.laredo@dreamhost.com>
Signed-off-by: Robin H. Johnson <robin.johnson@dreamhost.com>
Backport: hammer
Issue #11563 deals with the COPY command breaking the Content-Type
header. Add a test case for this to ensure it remains fixed.
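A sketch of such a test with boto (bucket and key names are
illustrative; a configured connection is assumed):
    import boto
    conn = boto.connect_s3()
    bucket = conn.create_bucket('content-type-copy-test')
    src = bucket.new_key('src')
    src.set_contents_from_string('body', headers={'Content-Type': 'text/plain'})
    bucket.copy_key('dst', bucket.name, 'src')
    # The copy must preserve the original Content-Type.
    assert bucket.get_key('dst').content_type == 'text/plain'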
Reference: http://tracker.ceph.com/issues/11563
Signed-off-by: Robin H. Johnson <robin.johnson@dreamhost.com>
Error responses must contain a RequestId tag. This is the ID of the
request associated with the error.
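A sketch of the corresponding check with boto, which parses the
RequestId tag of the error body into request_id (the bucket name is
illustrative):
    import boto
    from boto.exception import S3ResponseError
    conn = boto.connect_s3()
    try:
        conn.get_bucket('no-such-bucket-for-this-test')
    except S3ResponseError as e:
        # The error body must carry a RequestId identifying the request.
        assert e.request_id is not None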
Signed-off-by: Javier M. Mellid <jmunhoz@igalia.com>
A user should be able to copy non-owned objects inside non-owned buckets
when they have full permissions on both the bucket and the object.
Test for issue #11639
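A sketch of the scenario (the credentials and the alt user's canonical
id are illustrative assumptions):
    import boto
    main_conn = boto.connect_s3()  # bucket/object owner
    alt_conn = boto.connect_s3('ALT_KEY', 'ALT_SECRET')  # second user
    alt_user_id = 'alt-canonical-user-id'
    bucket = main_conn.create_bucket('xacl-copy-test')
    key = bucket.new_key('obj')
    key.set_contents_from_string('data')
    # Grant the alt user full control on both the bucket and the object.
    bucket.add_user_grant('FULL_CONTROL', alt_user_id)
    key.add_user_grant('FULL_CONTROL', alt_user_id)
    # The alt user copies the non-owned object inside the non-owned bucket.
    alt_bucket = alt_conn.get_bucket('xacl-copy-test', validate=False)
    alt_bucket.copy_key('obj-copy', 'xacl-copy-test', 'obj')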
Signed-off-by: Javier M. Mellid <jmunhoz@igalia.com>
Retry bucket removal after setting the bucket permissions if we're
getting permission denied. This will make sure the bucket gets removed.
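A sketch of the retry (simplified; the function name is illustrative):
    from boto.exception import S3ResponseError
    def remove_bucket_with_retry(bucket):
        try:
            bucket.delete()
        except S3ResponseError as e:
            if e.status != 403:
                raise
            # Reset the ACL so the owner can act on the bucket, then retry.
            bucket.set_canned_acl('private')
            bucket.delete()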
Signed-off-by: Yehuda Sadeh <yehuda@redhat.com>
This commit allows test_bucket_head,
test_bucket_head_extended, test_multipart_upload, and
test_abort_multipart_upload to succeed against AWS-S3. Also use
public buckets for head tests since they do not require private
buckets.
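A sketch of a head test against a public bucket (names are
illustrative; a configured connection is assumed):
    import boto
    conn = boto.connect_s3()
    bucket = conn.create_bucket('head-test', policy='public-read')
    res = conn.make_request('HEAD', bucket.name)
    assert res.status == 200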
Signed-off-by: Andrew Gaul <andrew@gaul.org>
Since we fixed the rgw redirect code, it turns out that boto actually
follows them. We just want to check that the redirect is sent.
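To observe the redirect itself rather than its target, the test can
speak HTTP directly instead of going through boto; a sketch with
Python 2's httplib (host, port, and path are illustrative):
    import httplib
    conn = httplib.HTTPConnection('localhost', 8000)
    conn.request('GET', '/test-bucket/obj')
    res = conn.getresponse()
    # We only want to see that a redirect is sent, not follow it.
    assert res.status in (301, 302, 307)
    assert res.getheader('location') is not None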
Signed-off-by: Yehuda Sadeh <yehuda@inktank.com>
s3tests: add regression test for bug #10311 - rgw: Object is deleted automatically if conditional PUT returns 412
Reviewed-by: Yehuda Sadeh <yehuda@redhat.com>
AWS S3 has two behaviors for recreating a bucket, depending on whether
you use the us-standard region or another region:
>>> bucket = conn.create_bucket('gaul-default', location=Location.DEFAULT)
>>> bucket = conn.create_bucket('gaul-default', location=Location.DEFAULT)
>>> bucket = conn.create_bucket('gaul-uswest', location=Location.USWest)
>>> bucket = conn.create_bucket('gaul-uswest', location=Location.USWest)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.7/dist-packages/boto/s3/connection.py", line 499, in create_bucket
    response.status, response.reason, body)
boto.exception.S3CreateError: S3CreateError: 409 Conflict
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>BucketAlreadyOwnedByYou</Code><Message>Your previous request to create the named bucket succeeded and you already own it.</Message><BucketName>gaul-uswest</BucketName><RequestId>24B6DC3170365CD7</RequestId><HostId>hUynMTyqc9WZFxAJ2RFK6P7BqmmeHHlMl9xL2NOy56xBUnOZCAlHqGvtMeGeAfVs</HostId></Error>
Additional discussion:
https://issues.apache.org/jira/browse/JCLOUDS-334
This reverts commit c82649b635.
The suite does not support dual behaviors (e.g.
US Standard vs. regional behavior), so we adhere to US
Standard only.
Signed-off-by: Alfredo Deza <alfredo@deza.pe>
Test was calling _multipart_upload() with an extra useless param, which
broke when we added a new param to the function.
Signed-off-by: Yehuda Sadeh <yehuda@inktank.com>
Previously test_object_create_bad_contentlength_mismatch_above failed
with:
Traceback (most recent call last):
  File "/home/gaul/work/s3-tests/virtualenv/local/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/home/gaul/work/s3-tests/s3tests/functional/test_headers.py", line 343, in test_object_create_bad_contentlength_mismatch_above
    e = assert_raises(boto.exception.S3ResponseError, key.set_contents_from_string, content)
  File "/home/gaul/work/s3-tests/s3tests/functional/utils.py", line 11, in assert_raises
    callableObj(*args, **kwargs)
  File "/home/gaul/work/s3-tests/virtualenv/local/lib/python2.7/site-packages/boto/s3/key.py", line 1379, in set_contents_from_string
    encrypt_key=encrypt_key)
  File "/home/gaul/work/s3-tests/virtualenv/local/lib/python2.7/site-packages/boto/s3/key.py", line 1246, in set_contents_from_file
    chunked_transfer=chunked_transfer, size=size)
  File "/home/gaul/work/s3-tests/virtualenv/local/lib/python2.7/site-packages/boto/s3/key.py", line 725, in send_file
    chunked_transfer=chunked_transfer, size=size)
  File "/home/gaul/work/s3-tests/virtualenv/local/lib/python2.7/site-packages/boto/s3/key.py", line 914, in _send_file_internal
    query_args=query_args
  File "/home/gaul/work/s3-tests/virtualenv/local/lib/python2.7/site-packages/boto/s3/connection.py", line 658, in make_request
    retry_handler=retry_handler
  File "/home/gaul/work/s3-tests/virtualenv/local/lib/python2.7/site-packages/boto/connection.py", line 1048, in make_request
    retry_handler=retry_handler)
  File "/home/gaul/work/s3-tests/virtualenv/local/lib/python2.7/site-packages/boto/connection.py", line 1004, in _mexe
    raise BotoServerError(response.status, response.reason, body)
BotoServerError: BotoServerError: 400 Bad Request
Since other classes need to do syncs, let's move the sync code to the
utils file.
Signed-off-by: Joe Buck <jbbuck@gmail.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Nuke all buckets on the master, sync, and then
nuke any remaining buckets on non-master regions.
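In outline (helper names and signatures are illustrative, not the
suite's exact API):
    def nuke_all_regions(master, secondaries, wait_for_sync):
        nuke_prefixed_buckets(master)   # the suite's existing cleanup helper
        wait_for_sync()                 # let the deletes propagate first
        for region in secondaries:
            nuke_prefixed_buckets(region)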
Signed-off-by: Joe Buck <jbbuck@gmail.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
The multi-region tests were not being diligent about cleaning up their
buckets/keys and making sure that those deletes were synced. This was
causing the nuke_prefixed_buckets() method to run into errors.
This patch adds more cleanup code and validation of that cleanup
within the pertinent tests.
Signed-off-by: Joe Buck <jbbuck@gmail.com>
Changes to actually use the 'default_region' setting in the
S3TEST_CONF file.
Previously, the first connection set was assumed to be the default.
This meant that the parse order of the S3TEST_CONF file was dictating
how the tests behaved, resulting in failures in some cases.
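A sketch of reading the setting explicitly instead of trusting parse
order (the section and option placement are assumptions):
    import ConfigParser
    cfg = ConfigParser.RawConfigParser()
    cfg.read('s3tests.conf')  # path is illustrative
    # Select the default region by name rather than by section order.
    default_region_name = cfg.get('fixtures', 'default_region')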
Signed-off-by: Joe Buck <jbbuck@gmail.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>