Compare commits

...

61 commits

Author SHA1 Message Date
Ali Maredia
02257a7409
Merge pull request #550 from hmaheswa/ceph-pacific-lc-header-fix
fixing intermittent lifecycle_expiration_header failures (2!=1) because of datetime.now() on different timezones
2024-02-20 12:16:16 -05:00
Hemanth Sai Maheswarla
8f67b0e4f8 fixing intermittent lifecycle_expiration_header failures (2!=1) because of datetime.now() on different timezones
Signed-off-by: Hemanth Sai Maheswarla <hemanthsaimaheswarla@Hemanths-MacBook-Pro.local>
2024-02-17 12:16:16 +05:30
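The gist of the fix, as a minimal illustrative sketch (not the test code itself): expiration dates are computed in UTC on the server side, so the test clock must be pinned to UTC rather than the machine's local timezone.

from datetime import datetime, timezone

# naive local time shifts with the test machine's timezone and can put
# the expected expiration date a day early or late
local_now = datetime.now()
# an explicit UTC clock yields the same answer on every machine
utc_now = datetime.now(timezone.utc)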
Tobias Urdin
d141ee87b4 Add more testing for presigned URLs
This improves the testing for presigned URLs for
both get_object and put_object when using
generate_presigned_url().

It covers the case where you pass, for example,
an x-amz-acl header (ACL in Params for generate_presigned_url())
that should be signed.

Tests the regression in [1].

[1] https://tracker.ceph.com/issues/64308

Signed-off-by: Tobias Urdin <tobias.urdin@binero.se>
(cherry picked from commit 3eab156575cef9de40470abf38ffe25e5634796a)

Conflicts:
	s3tests_boto3/functional/test_s3.py pytest assert -> nose eq()
2024-02-12 14:06:58 -05:00
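A hedged sketch of the case the new tests cover (bucket and key names are placeholders): when ACL is passed in Params, the resulting x-amz-acl header becomes part of the signature, so the client must send that same header with the PUT.

import boto3
import requests

s3 = boto3.client('s3')  # endpoint/credentials as configured for the suite
url = s3.generate_presigned_url(
    'put_object',
    Params={'Bucket': 'examplebucket', 'Key': 'obj', 'ACL': 'public-read'},
    ExpiresIn=3600)
# the signed x-amz-acl header must accompany the request, otherwise the
# signature check fails -- the regression tracked in [1] above
resp = requests.put(url, data=b'payload', headers={'x-amz-acl': 'public-read'})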
Casey Bodley
093e510c39 fix GetBucketTagging error code
related to https://tracker.ceph.com/issues/55460

Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit 5b08b26453)
2023-12-05 11:54:03 -05:00
Kalpesh Pandya
67251732b7 test_sts: Changing code for proper cleanup
This solves: https://tracker.ceph.com/issues/53090

The solution: we need to delete the role_policy and
user_policy attached to the user, which was causing the failure.

Signed-off-by: Kalpesh Pandya <kapandya@redhat.com>
(cherry picked from commit 1af1880b7a)
2023-11-27 11:24:59 -05:00
Tobias Urdin
bbf65028e5 Add test to verify HTTP OPTIONS on presigned URL
Related: https://tracker.ceph.com/issues/62033

Signed-off-by: Tobias Urdin <tobias.urdin@binero.com>
(cherry picked from commit c0a1880d4c)

Conflicts:
	s3tests_boto3/functional/test_s3.py: nose->pytest changes
(cherry picked from commit 8fb5d9a59c52adda9cc8172c96611398944be478)
2023-09-15 11:03:25 -04:00
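A minimal sketch of what such a test exercises (names are illustrative): issue an HTTP OPTIONS request against a presigned URL and verify the response status.

import boto3
import requests

s3 = boto3.client('s3')
url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'examplebucket', 'Key': 'obj'},
    ExpiresIn=3600)
# a CORS-style preflight carries no signature payload; the server should
# handle OPTIONS on the presigned URL rather than reject it outright
# (see tracker issue 62033)
resp = requests.options(url)
print(resp.status_code)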
Casey Bodley
e2aa00d394 pin botocore to resolve v2 signature failures
Signed-off-by: Casey Bodley <cbodley@redhat.com>
2022-11-23 13:05:24 -05:00
Soumya Koduri
9fccf6b81f
Merge pull request #466 from ceph/ceph-pacific-lifecycle-expiration-fixes
Ceph pacific lifecycle expiration fixes
2022-08-09 23:55:45 +05:30
Soumya Koduri
e445fe983f lifecycle/deletemarker_expiration: Increase timer window
Increase wait time in test_lifecycle_deletemarker_expiration(..)
to avoid any spurious failure.

Signed-off-by: Soumya Koduri <skoduri@redhat.com>
(cherry picked from commit cb830ebae1)
2022-08-09 11:08:40 -04:00
Soumya Koduri
eb1717ff17 Enable lifecycle tests
Add an option to configure the lc debug interval and adjust the
lifecycle tests' sleeps as per the value set.

Signed-off-by: Soumya Koduri <skoduri@redhat.com>
(cherry picked from commit 0f3f35ef01)
2022-08-09 11:08:38 -04:00
Pragadeeswaran Sathyanarayanan
0f4f722f25 Improving check_grants reliability
Signed-off-by: Pragadeeswaran Sathyanarayanan <psathyan@redhat.com>
(cherry picked from commit 5f96a32045)
2022-03-03 12:51:48 -05:00
Ali Maredia
b31649f351 disable ssl verify by default
Signed-off-by: Ali Maredia <amaredia@redhat.com>
(cherry picked from commit b252638369)
2021-08-11 13:14:46 -04:00
Pragadeeswaran Sathyanarayanan
521346dcc4 Add support for disabling SSL certificate verification
Signed-off-by: Pragadeeswaran Sathyanarayanan <psathyan@redhat.com>
(cherry picked from commit ea3caaa76b)
2021-08-09 14:05:50 -04:00
galsalomon66
287acbc6e7 change test_s3select.py permission; add s3select attribute
Signed-off-by: galsalomon66 <gal.salomon@gmail.com>
(cherry picked from commit ea7d5fb563)
2021-01-14 00:08:50 -05:00
galsalomon66
68f1939942 reduce object size for test_like_expression; it may cause timeouts in teuthology runs
Signed-off-by: galsalomon66 <gal.salomon@gmail.com>
2020-12-22 00:32:18 -05:00
galsalomon66
a0aa55d4ae rename charlength and character_length function names
Signed-off-by: galsalomon66 <gal.salomon@gmail.com>
2020-12-22 00:32:07 -05:00
Or Friedmann
e3d31ef6eb Add test for head bucket usage headers
Signed-off-by: Or Friedmann <ofriedma@redhat.com>
(cherry picked from commit ef8f65d917)
2020-12-03 16:04:29 -05:00
Or Friedmann
3fe80dc877 Add test for GetUsage api
Signed-off-by: Or Friedmann <ofriedma@redhat.com>
(cherry picked from commit f4f7812efd)
2020-12-03 16:04:14 -05:00
Albin Antony
a5108a7d69 s3select tests for coalesce and case
Signed-off-by: Albin Antony <aantony@redhat.com>
2020-12-03 01:04:02 -05:00
gal salomon
3be10d722f each newly uploaded test file gets a unique (random) name, and the uploaded file's content is verified
Signed-off-by: gal salomon <gal.salomon@gmail.com>

2020-12-03 01:03:55 -05:00
Albin Antony
adad16121f s3select predicate tests
Signed-off-by: Albin Antony <aantony@redhat.com>
2020-12-03 00:59:32 -05:00
root
dd163877d4 Webidentity Test addition to test_sts.py
Signed-off-by: Kalpesh Pandya <kapandya@redhat.com>

A few main changes/additions:
1. Webidentity test addition to test_sts.py.
2. A function named check_webidentity() added to __init__.py in order to check for section presence.
3. A few lines shifted from setup() to get_iam_client() so they execute only when the sts tests run.
4. Documentation update (for the sts section).
5. Changes in s3tests.conf.SAMPLE regarding the sts sections.
2020-11-25 20:34:32 -05:00
Matt Benjamin
86bc2a191f Add test for HeadBucket on a non-existent bucket
n.b., RGW does not send a response document for this operation,
which seems consistent with
https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadBucket.html

Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
2020-11-25 00:06:12 -05:00
Ilsoo Byun
3698d093bf Check if invalid payload is added after serving errordoc
Signed-off-by: Ilsoo Byun <ilsoobyun@linecorp.com>
(cherry picked from commit c08de72d55)
2020-11-19 10:24:30 -05:00
Casey Bodley
65b067486e test bucket recreation with different acls
Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit f6218fa1de)
2020-10-08 13:32:53 -04:00
root
0e36699571 STS issue fix (https://tracker.ceph.com/issues/47588)
This is the fix for the issue reported in https://tracker.ceph.com/issues/47588. The problem was an argument passed to the function; after removing that argument (it is already an optional argument) the issue is fixed.

Signed-off-by: Kalpesh Pandya <kapandya@redhat.com>
(cherry picked from commit daf9062a22)
2020-10-08 13:32:53 -04:00
Matt Benjamin
3dc4ff5da8 add noncurrent version expiration rule w/tag filter
Create 10 object versions (9 noncurrent).  Install a noncurrent
version expiration at 4 days.  Verify that 10 versions exist at
T+20, and only 1 (current) at T+60.

Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
2020-10-01 11:30:33 -04:00
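A hedged sketch of the rule shape this test installs (bucket name, tag and rule ID are illustrative): a tag-filtered rule whose NoncurrentVersionExpiration removes superseded versions after 4 days.

import boto3

s3 = boto3.client('s3')
s3.put_bucket_lifecycle_configuration(
    Bucket='examplebucket',
    LifecycleConfiguration={'Rules': [{
        'ID': 'noncurrent-expire',
        'Status': 'Enabled',
        'Filter': {'Tag': {'Key': 'key', 'Value': 'value'}},
        # noncurrent versions expire 4 days after being superseded
        'NoncurrentVersionExpiration': {'NoncurrentDays': 4},
    }]})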
Matt Benjamin
c9792cb975 add lifecycle expiration test mixing 2-tag filter w/versioning
By design this test duplicates test_lifecycle_expiration_tags2,
but enables object versioning on the bucket.

The tests install a rule which requires -2- tags to be matched,
and creates 2 objects, one matching only 1 of the required tags,
the other matching both.  Only the 2nd object should expire.

Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
2020-10-01 11:30:24 -04:00
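The "requires -2- tags" condition maps to an And filter; a sketch with illustrative names:

import boto3

s3 = boto3.client('s3')
s3.put_bucket_lifecycle_configuration(
    Bucket='examplebucket',
    LifecycleConfiguration={'Rules': [{
        'ID': 'two-tag-expire',
        'Status': 'Enabled',
        # both tags must match for the rule to apply; an object carrying
        # only one of them is left alone
        'Filter': {'And': {'Tags': [
            {'Key': 'key1', 'Value': 'value1'},
            {'Key': 'key2', 'Value': 'value2'},
        ]}},
        'Expiration': {'Days': 1},
    }]})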
Matt Benjamin
b930f194e4 add tests for lifecycle expiration w/1 and 2 object tags
Note that the 1-tag case contains a filter prefix--which exposes
an apparent bug parsing Filter when it contains a Prefix element
and a single Tag element (without And).

Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
2020-10-01 11:30:15 -04:00
Matt Benjamin
253b63aa11 fix lifecycle expiration days: 0
In fact test_lifecycle_expiration_days0 should fail, as 0-day
expiration is permitted for transition rules but not for expiration
rules.

Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
2020-10-01 11:30:08 -04:00
Matt Benjamin
6bd75be1d6 s/test_set_tagging/test_set_bucket_tagging/;
The test exercises bucket tagging and has nothing to do with object
tagging (renamed for clarity).

Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
2020-10-01 11:30:01 -04:00
Matt Benjamin
61804bcf91 fix test_lifecycle_expiration_header_{put,head}
Primarily fixes the expiration-header verifier function
check_lifecycle_expiration_header, but also cleans up
prefix handling in setup_lifecycle_expiration().

Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
2020-10-01 11:29:52 -04:00
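For context, the x-amz-expiration response header has a fixed shape; a hedged parsing sketch (the helper name is illustrative, not the verifier from this commit):

import datetime
import re

# the header arrives as, e.g.:
#   expiry-date="Tue, 06 Oct 2020 00:00:00 GMT", rule-id="rule1"
def parse_expiration_header(value):
    m = re.match(r'expiry-date="(.+)", rule-id="(.+)"', value)
    expiry = datetime.datetime.strptime(m.group(1), '%a, %d %b %Y %H:%M:%S %Z')
    return expiry, m.group(2)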
Matt Benjamin
ea9f07a2bf fix and remark on test_lifecycle_expiration_days0
1. fix a python3-related KeyError exception

2. note here:  AWS documentation includes examples of "Days 0"
   in use, but boto3 will not accept them--this is why the days0
   test currently sets Days 1

3. delay increased to 30s, to avoid occasional failures due to
   jitter

Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
2020-10-01 11:29:45 -04:00
root
7d14452035 STS Tests Files and modification in __init__.py
Signed-off-by: Kalpesh Pandya <kapandya@redhat.com>
2020-09-16 01:40:33 -04:00
gal salomon
6ea6cb6467 add filter for s3select tests
Signed-off-by: gal salomon <gal.salomon@gmail.com>
(cherry picked from commit fce9a52ef4)
2020-07-07 07:38:41 -04:00
gal salomon
10c801a2e0 fix comments; remove unused imports; enable test for projection; use get_client()
(cherry picked from commit 72e251ed69)
2020-06-25 13:42:23 -04:00
gal salomon
9d670846a3 adding random-generated tests, one for the where clause, the second for projection; a random arithmetical expression is generated and used to build the s3-select query; the result is compared to the python engine
(cherry picked from commit fb39ac4829)
2020-06-25 13:42:23 -04:00
gal salomon
bb801b8625 python linter; replace assert with assert_equal; add complex query test (sum, count, where); add test-schema;
(cherry picked from commit e006dd4753)
2020-06-25 13:42:23 -04:00
gal salomon
652619f46f using config parameters
(cherry picked from commit 1a9d3677f7)
2020-06-25 13:42:23 -04:00
gal salomon
b1ddeee6eb add tests to validate csv-header-info functionality is correct
(cherry picked from commit 5dc8bc75ab)
2020-06-25 13:42:23 -04:00
gal salomon
ca9cb5cc2c adding test cases for processing CSV objects with different CSV definitions; validate null, escape-rules and quotes are processed correctly
(cherry picked from commit 94b1986228)
2020-06-25 13:42:23 -04:00
gal salomon
e010c4cfec adding utcnow test
(cherry picked from commit 4c7c279f70)
2020-06-25 13:42:23 -04:00
gal salomon
edea887e9c adding tests for date-time functionalities
(cherry picked from commit 5925f0fb3f)
2020-06-25 13:42:23 -04:00
gal salomon
cd4f7e1a7a add complex expression tests: nested function calls, and different where clauses that create the same group of values
(cherry picked from commit c1bce6ac70)
2020-06-25 13:42:23 -04:00
gal salomon
048f9297a1 adding test for detection of cyclic reference to alias
(cherry picked from commit d543619e71)
2020-06-25 13:42:23 -04:00
gal salomon
8bd6158054 adding aggregation tests
(cherry picked from commit f42872fd53)
2020-06-25 13:42:23 -04:00
gal salomon
aca68a9d39 adding alias test case
(cherry picked from commit 74daf86fe5)
2020-06-25 13:42:23 -04:00
gal salomon
537431c686 commit first tests for s3select and initial framework
(cherry picked from commit dac38694ef)
2020-06-25 13:42:23 -04:00
Matt Benjamin
8ca96c4519 fix/restore test_lifecycle_expiration checks
Commit bf956df71e, adding
listobjectsv2 tests, inadvertently changed the v1
test_lifecycle_expiration test, which it had copied to
create a v2 version. Revert this.

Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
2020-05-26 11:07:53 -04:00
Kefu Chai
34040769ff bootstrap,requirements.txt: bump up setuptools and requests
Fixes: https://tracker.ceph.com/issues/45691
Signed-off-by: Kefu Chai <kchai@redhat.com>
2020-05-26 09:08:26 -04:00
Casey Bodley
8ebb504159 bootstrap: remove deprecated virtualenv options
this fails on Ubuntu 20.04:

> virtualenv: error: unrecognized arguments: --no-site-packages --distribute

according to `virtualenv -h`:

>   --no-site-packages    DEPRECATED. Retained only for backward compatibility.
>                         Not having access to global site-packages is now the
>                         default behavior.
>   --distribute          DEPRECATED. Retained only for backward compatibility.
>                         This option has no effect.

Signed-off-by: Casey Bodley <cbodley@redhat.com>
(cherry picked from commit a0c15c80ad)
2020-05-21 11:09:26 -04:00
Abhishek L
9092d1ac61
Merge pull request #343 from theanalyst/ceph-master-public-buckets-qa
Ceph master public buckets backport

Reviewed-By: Casey Bodley <cbodley@redhat.com>
2020-03-30 15:25:57 +02:00
Abhishek Lekshmanan
7b3df700cc fix ignore public acls with py3 compatible code
Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
(cherry picked from commit 4d675235dd)
2020-03-26 16:28:12 +01:00
Abhishek Lekshmanan
4fc133b1b5 add tests for ignore public acls
Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
(cherry picked from commit 3b1571ace6)
2020-03-26 16:28:12 +01:00
Abhishek Lekshmanan
0a495efc8c add test for block public policy
Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
(cherry picked from commit b4516725f2)
2020-03-26 16:28:12 +01:00
Abhishek Lekshmanan
a48cf75391 use empty bodies for canned acl tests with BlockPublicAccess
This should be a temporary workaround until #42208 is fixed

Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
(cherry picked from commit d02c1819f6)
2020-03-26 16:28:12 +01:00
Abhishek Lekshmanan
a20e0d47f2 remove redundant get_client calls
Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
(cherry picked from commit 4996430709)
2020-03-26 16:28:12 +01:00
Abhishek Lekshmanan
19947bd541 add ability to get svc client for s3config set of apis
Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
(cherry picked from commit 6d3f574a8e)
2020-03-26 16:26:39 +01:00
Abhishek Lekshmanan
94168194fd add tests for public access configuration
Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
(cherry picked from commit 1ad38530e0)
2020-03-26 16:26:19 +01:00
Abhishek Lekshmanan
0e3084c995 add a few test cases for public bucket policies
Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
(cherry picked from commit 3f9d31c6c7)
2020-03-26 16:24:14 +01:00
Abhishek Lekshmanan
1d39198872 boto3: add bucket policy status checks for public ACLs
Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
(cherry picked from commit 02b1d50ca7)
2020-03-26 16:23:46 +01:00
9 changed files with 2190 additions and 101 deletions


@@ -54,3 +54,20 @@ You can run only the boto3 tests with::
S3TEST_CONF=your.conf ./virtualenv/bin/nosetests -v -s -A 'not fails_on_rgw' s3tests_boto3.functional
========================
STS compatibility tests
========================
This section contains some basic tests for the AssumeRole, GetSessionToken and AssumeRoleWithWebIdentity APIs. The test file is located under ``s3tests_boto3/functional``.
You can run only the sts tests (all three APIs) with::
S3TEST_CONF=your.conf ./virtualenv/bin/nosetests s3tests_boto3.functional.test_sts
You can filter tests based on their attributes. There is an attribute named ``test_of_sts`` to run AssumeRole and GetSessionToken tests and ``webidentity_test`` to run the AssumeRoleWithWebIdentity tests. If you want to execute only ``test_of_sts`` tests you can apply that filter as below::
S3TEST_CONF=your.conf ./virtualenv/bin/nosetests -v -s -A 'test_of_sts' s3tests_boto3.functional.test_sts
For running ``webidentity_test`` you'll need to have Keycloak running.
In order to run any STS test you'll need to add an "iam" section to the config file. For further reference on how your config file should look, check ``s3tests.conf.SAMPLE``.
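The ``webidentity_test`` attribute works with the same filter mechanism; assuming the attribute names above, the invocation would be::
S3TEST_CONF=your.conf ./virtualenv/bin/nosetests -v -s -A 'webidentity_test' s3tests_boto3.functional.test_sts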


@@ -59,13 +59,13 @@ esac
# s3-tests only works on python 3.6, not newer versions of python3
${virtualenv} --python=$(which python3.6) --no-site-packages --distribute virtualenv
${virtualenv} --python=$(which python3.6) virtualenv
# avoid pip bugs
./virtualenv/bin/pip3 install --upgrade pip
# slightly old version of setuptools; newer fails w/ requests 0.14.0
./virtualenv/bin/pip3 install setuptools==32.3.1
# latest setuptools supporting python 2.7
./virtualenv/bin/pip install setuptools==44.1.0
./virtualenv/bin/pip3 install -r requirements.txt


@@ -2,11 +2,13 @@ PyYAML
nose >=1.0.0
boto >=2.6.0
boto3 >=1.0.0
# botocore-1.28 broke v2 signatures, see https://tracker.ceph.com/issues/58059
botocore <1.28.0
munch >=2.0.0
# 0.14 switches to libev, that means bootstrap needs to change too
gevent >=1.0
isodate >=0.4.4
requests >=0.14.0
requests >=2.23.0
pytz >=2011k
httplib2
lxml


@@ -10,6 +10,9 @@ port = 8000
## say "False" to disable TLS
is_secure = False
## say "False" to disable SSL Verify
ssl_verify = False
[fixtures]
## all the buckets created will start with this prefix;
## {random} will be filled with random characters to pad
@@ -38,6 +41,12 @@ secret_key = h7GhxuBLTrlhVUyxSPUKUV8r/2EI4ngqJxD7iBdBYLhwluN30JaT3Q==
## replace with key id obtained when secret is created, or delete if KMS not tested
#kms_keyid = 01234567-89ab-cdef-0123-456789abcdef
## Storage classes
#storage_classes = "LUKEWARM, FROZEN"
## Lifecycle debug interval (default: 10)
#lc_debug_interval = 20
[s3 alt]
# alt display_name set in vstart.sh
display_name = john.doe
@@ -68,3 +77,30 @@ secret_key = opqrstuvwxyzabcdefghijklmnopqrstuvwxyzab
# tenant email set in vstart.sh
email = tenanteduser@example.com
#following section needs to be added for all sts-tests
[iam]
#used for iam operations in sts-tests
#user_id from vstart.sh
user_id = 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
#access_key from vstart.sh
access_key = ABCDEFGHIJKLMNOPQRST
#secret_key from vstart.sh
secret_key = abcdefghijklmnopqrstuvwxyzabcdefghijklmn
#display_name from vstart.sh
display_name = youruseridhere
#following section needs to be added when you want to run Assume Role With Webidentity test
[webidentity]
#used for assume role with web identity test in sts-tests
#all parameters will be obtained from ceph/qa/tasks/keycloak.py
token=<access_token>
aud=<obtained after introspecting token>
thumbprint=<obtained from x509 certificate>
KC_REALM=<name of the realm>


@@ -7,6 +7,7 @@ import random
from pprint import pprint
import time
import boto.exception
import socket
from urllib.parse import urlparse
@@ -520,6 +521,57 @@ def test_website_private_bucket_list_empty_blockederrordoc():
errorhtml.delete()
bucket.delete()
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')
@attr(assertion='check if there is an invalid payload after serving error doc')
@attr('s3website')
@nose.with_setup(setup=check_can_test_website, teardown=common.teardown)
def test_website_public_bucket_list_pubilc_errordoc():
bucket = get_new_bucket()
f = _test_website_prep(bucket, WEBSITE_CONFIGS_XMLFRAG['IndexDocErrorDoc'])
bucket.make_public()
errorhtml = bucket.new_key(f['ErrorDocument_Key'])
errorstring = choose_bucket_prefix(template=ERRORDOC_TEMPLATE, max_len=256)
errorhtml.set_contents_from_string(errorstring)
errorhtml.set_canned_acl('public-read')
url = get_website_url(proto='http', bucket=bucket.name, path='')
o = urlparse(url)
host = o.hostname
port = s3.main.port
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect((host, port))
request = "GET / HTTP/1.1\r\nHost:%s.%s:%s\r\n\r\n" % (bucket.name, host, port)
sock.send(request.encode())
#receive header
resp = sock.recv(4096)
print(resp)
#receive body
resp = sock.recv(4096)
print('payload length=%d' % len(resp))
print(resp)
#check if any additional payload is left
resp_len = 0
sock.settimeout(2)
try:
resp = sock.recv(4096)
resp_len = len(resp)
print('invalid payload length=%d' % resp_len)
print(resp)
except socket.timeout:
print('no invalid payload')
ok(resp_len == 0, 'invalid payload')
errorhtml.delete()
bucket.delete()
@attr(resource='bucket')
@attr(method='get')
@attr(operation='list')


@@ -9,6 +9,7 @@ import munch
import random
import string
import itertools
import urllib3
config = munch.Munch
@@ -166,6 +167,15 @@ def setup():
proto = 'https' if config.default_is_secure else 'http'
config.default_endpoint = "%s://%s:%d" % (proto, config.default_host, config.default_port)
try:
config.default_ssl_verify = cfg.getboolean('DEFAULT', "ssl_verify")
except configparser.NoOptionError:
config.default_ssl_verify = False
# Disable InsecureRequestWarning reported by urllib3 when ssl_verify is False
if not config.default_ssl_verify:
urllib3.disable_warnings()
# vars from the main section
config.main_access_key = cfg.get('s3 main',"access_key")
config.main_secret_key = cfg.get('s3 main',"secret_key")
@@ -188,6 +198,11 @@ def setup():
config.main_api_name = ""
pass
try:
config.lc_debug_interval = int(cfg.get('s3 main',"lc_debug_interval"))
except (configparser.NoSectionError, configparser.NoOptionError):
config.lc_debug_interval = 10
config.alt_access_key = cfg.get('s3 alt',"access_key")
config.alt_secret_key = cfg.get('s3 alt',"secret_key")
config.alt_display_name = cfg.get('s3 alt',"display_name")
@@ -213,6 +228,7 @@ def setup():
nuke_prefixed_buckets(prefix=prefix, client=alt_client)
nuke_prefixed_buckets(prefix=prefix, client=tenant_client)
def teardown():
alt_client = get_alt_client()
tenant_client = get_tenant_client()
@@ -220,6 +236,24 @@ def teardown():
nuke_prefixed_buckets(prefix=prefix, client=alt_client)
nuke_prefixed_buckets(prefix=prefix, client=tenant_client)
def check_webidentity():
cfg = configparser.RawConfigParser()
try:
path = os.environ['S3TEST_CONF']
except KeyError:
raise RuntimeError(
'To run tests, point environment '
+ 'variable S3TEST_CONF to a config file.',
)
cfg.read(path)
if not cfg.has_section("webidentity"):
raise RuntimeError('Your config file is missing the "webidentity" section!')
config.webidentity_thumbprint = cfg.get('webidentity', "thumbprint")
config.webidentity_aud = cfg.get('webidentity', "aud")
config.webidentity_token = cfg.get('webidentity', "token")
config.webidentity_realm = cfg.get('webidentity', "KC_REALM")
def get_client(client_config=None):
if client_config == None:
client_config = Config(signature_version='s3v4')
@@ -229,6 +263,7 @@ def get_client(client_config=None):
aws_secret_access_key=config.main_secret_key,
endpoint_url=config.default_endpoint,
use_ssl=config.default_is_secure,
verify=config.default_ssl_verify,
config=client_config)
return client
@@ -238,9 +273,56 @@ def get_v2_client():
aws_secret_access_key=config.main_secret_key,
endpoint_url=config.default_endpoint,
use_ssl=config.default_is_secure,
verify=config.default_ssl_verify,
config=Config(signature_version='s3'))
return client
def get_sts_client(client_config=None):
if client_config == None:
client_config = Config(signature_version='s3v4')
client = boto3.client(service_name='sts',
aws_access_key_id=config.alt_access_key,
aws_secret_access_key=config.alt_secret_key,
endpoint_url=config.default_endpoint,
region_name='',
use_ssl=config.default_is_secure,
verify=config.default_ssl_verify,
config=client_config)
return client
def get_iam_client(client_config=None):
cfg = configparser.RawConfigParser()
try:
path = os.environ['S3TEST_CONF']
except KeyError:
raise RuntimeError(
'To run tests, point environment '
+ 'variable S3TEST_CONF to a config file.',
)
cfg.read(path)
if not cfg.has_section("iam"):
raise RuntimeError('Your config file is missing the "iam" section!')
config.iam_access_key = cfg.get('iam',"access_key")
config.iam_secret_key = cfg.get('iam',"secret_key")
config.iam_display_name = cfg.get('iam',"display_name")
config.iam_user_id = cfg.get('iam',"user_id")
config.iam_email = cfg.get('iam',"email")
if client_config == None:
client_config = Config(signature_version='s3v4')
client = boto3.client(service_name='iam',
aws_access_key_id=config.iam_access_key,
aws_secret_access_key=config.iam_secret_key,
endpoint_url=config.default_endpoint,
region_name='',
use_ssl=config.default_is_secure,
verify=config.default_ssl_verify,
config=client_config)
return client
def get_alt_client(client_config=None):
if client_config == None:
client_config = Config(signature_version='s3v4')
@@ -250,6 +332,7 @@ def get_alt_client(client_config=None):
aws_secret_access_key=config.alt_secret_key,
endpoint_url=config.default_endpoint,
use_ssl=config.default_is_secure,
verify=config.default_ssl_verify,
config=client_config)
return client
@@ -262,6 +345,7 @@ def get_tenant_client(client_config=None):
aws_secret_access_key=config.tenant_secret_key,
endpoint_url=config.default_endpoint,
use_ssl=config.default_is_secure,
verify=config.default_ssl_verify,
config=client_config)
return client
@@ -272,6 +356,7 @@ def get_tenant_iam_client():
aws_access_key_id=config.tenant_access_key,
aws_secret_access_key=config.tenant_secret_key,
endpoint_url=config.default_endpoint,
verify=config.default_ssl_verify,
use_ssl=config.default_is_secure)
return client
@@ -281,6 +366,7 @@ def get_unauthenticated_client():
aws_secret_access_key='',
endpoint_url=config.default_endpoint,
use_ssl=config.default_is_secure,
verify=config.default_ssl_verify,
config=Config(signature_version=UNSIGNED))
return client
@@ -290,9 +376,23 @@ def get_bad_auth_client(aws_access_key_id='badauth'):
aws_secret_access_key='roflmao',
endpoint_url=config.default_endpoint,
use_ssl=config.default_is_secure,
verify=config.default_ssl_verify,
config=Config(signature_version='s3v4'))
return client
def get_svc_client(client_config=None, svc='s3'):
if client_config == None:
client_config = Config(signature_version='s3v4')
client = boto3.client(service_name=svc,
aws_access_key_id=config.main_access_key,
aws_secret_access_key=config.main_secret_key,
endpoint_url=config.default_endpoint,
use_ssl=config.default_is_secure,
verify=config.default_ssl_verify,
config=client_config)
return client
bucket_counter = itertools.count(1)
def get_new_bucket_name():
@@ -320,7 +420,8 @@ def get_new_bucket_resource(name=None):
aws_access_key_id=config.main_access_key,
aws_secret_access_key=config.main_secret_key,
endpoint_url=config.default_endpoint,
use_ssl=config.default_is_secure)
use_ssl=config.default_is_secure,
verify=config.default_ssl_verify)
if name is None:
name = get_new_bucket_name()
bucket = s3.Bucket(name)
@@ -342,6 +443,21 @@ def get_new_bucket(client=None, name=None):
client.create_bucket(Bucket=name)
return name
def get_parameter_name():
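# illustrative note: build a 255-char random string, then trim from the right
# until the name fits in 10 chars; test_sts.py uses this to name roles and policies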
parameter_name=""
rand = ''.join(
random.choice(string.ascii_lowercase + string.digits)
for c in range(255)
)
while rand:
parameter_name = '{random}'.format(random=rand)
if len(parameter_name) <= 10:
return parameter_name
rand = rand[:-1]
return parameter_name
def get_sts_user_id():
return config.alt_user_id
def get_config_is_secure():
return config.default_is_secure
@@ -355,6 +471,9 @@ def get_config_port():
def get_config_endpoint():
return config.default_endpoint
def get_config_ssl_verify():
return config.default_ssl_verify
def get_main_aws_access_key():
return config.main_access_key
@@ -408,3 +527,18 @@ def get_tenant_user_id():
def get_tenant_email():
return config.tenant_email
def get_thumbprint():
return config.webidentity_thumbprint
def get_aud():
return config.webidentity_aud
def get_token():
return config.webidentity_token
def get_realm_name():
return config.webidentity_realm
def get_lc_debug_interval():
return config.lc_debug_interval
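A hedged sketch of how the tests consume this knob (illustrative; presumably the lifecycle tests in test_s3.py scale their sleeps the same way): waits become multiples of the configured interval instead of fixed wall-clock days.

import time

# with the server's lc debug interval set to match lc_debug_interval, one
# lifecycle "day" passes in seconds, so a test waits a few cycles, not days
lc_interval = get_lc_debug_interval()
time.sleep(3 * lc_interval)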

File diff suppressed because it is too large


@@ -0,0 +1,696 @@
import nose
import random
import string
from nose.plugins.attrib import attr
import uuid
from nose.tools import eq_ as eq
from . import (
get_client
)
region_name = ''
# recursive function for generating an arithmetical expression
def random_expr(depth):
# depth is the complexity of expression
if depth==1 :
return str(int(random.random() * 100) + 1)+".0"
return '(' + random_expr(depth-1) + random.choice(['+','-','*','/']) + random_expr(depth-1) + ')'
def generate_s3select_where_clause(bucket_name,obj_name):
a=random_expr(4)
b=random_expr(4)
s=random.choice([ '<','>','==','<=','>=','!=' ])
try:
eval( a )
eval( b )
except ZeroDivisionError:
return
# generate s3select statement using the generated random expression
# count(0)>0 means the where-clause expression evaluated to true
# the python engine {eval( conditional expression )} should return the same boolean result.
s3select_stmt = "select count(0) from stdin where " + a + s + b + ";"
res = remove_xml_tags_from_result( run_s3select(bucket_name,obj_name,s3select_stmt) ).replace(",","")
nose.tools.assert_equal(int(res)>0 , eval( a + s + b ))
def generate_s3select_expression_projection(bucket_name,obj_name):
# generate s3select statement using the generated random expression
# the statement returns an arithmetical result for the generated expression.
# the same expression is evaluated by the python engine; results should be close enough (epsilon)
e = random_expr( 4 )
try:
eval( e )
except ZeroDivisionError:
return
if eval( e ) == 0:
return
res = remove_xml_tags_from_result( run_s3select(bucket_name,obj_name,"select " + e + " from stdin;",) ).replace(",","")
# accuracy level
epsilon = float(0.000001)
# both results should be close (epsilon)
assert (1 - (float(res.split("\n")[1]) / eval( e )) ) < epsilon
@attr('s3select')
def get_random_string():
return uuid.uuid4().hex[:6].upper()
@attr('s3select')
def test_generate_where_clause():
# create small csv file for testing the random expressions
single_line_csv = create_random_csv_object(1,1)
bucket_name = "test"
obj_name = get_random_string() #"single_line_csv.csv"
upload_csv_object(bucket_name,obj_name,single_line_csv)
for _ in range(100):
generate_s3select_where_clause(bucket_name,obj_name)
@attr('s3select')
def test_generate_projection():
# create small csv file for testing the random expressions
single_line_csv = create_random_csv_object(1,1)
bucket_name = "test"
obj_name = get_random_string() #"single_line_csv.csv"
upload_csv_object(bucket_name,obj_name,single_line_csv)
for _ in range(100):
generate_s3select_expression_projection(bucket_name,obj_name)
def create_csv_object_for_datetime(rows,columns):
result = ""
for _ in range(rows):
row = ""
for _ in range(columns):
row = row + "{}{:02d}{:02d}-{:02d}{:02d}{:02d},".format(random.randint(0,100)+1900,random.randint(1,12),random.randint(1,28),random.randint(0,23),random.randint(0,59),random.randint(0,59),)
result += row + "\n"
return result
def create_random_csv_object(rows,columns,col_delim=",",record_delim="\n",csv_schema=""):
result = ""
if len(csv_schema)>0 :
result = csv_schema + record_delim
for _ in range(rows):
row = ""
for _ in range(columns):
row = row + "{}{}".format(random.randint(0,1000),col_delim)
result += row + record_delim
return result
def create_random_csv_object_string(rows,columns,col_delim=",",record_delim="\n",csv_schema=""):
result = ""
if len(csv_schema)>0 :
result = csv_schema + record_delim
for _ in range(rows):
row = ""
for _ in range(columns):
if random.randint(0,9) == 5:
row = row + "{}{}".format(''.join(random.choice(string.ascii_letters) for m in range(10)) + "aeiou",col_delim)
else:
row = row + "{}{}".format(''.join("cbcd" + random.choice(string.ascii_letters) for m in range(10)) + "vwxyzzvwxyz" ,col_delim)
result += row + record_delim
return result
def upload_csv_object(bucket_name,new_key,obj):
client = get_client()
client.create_bucket(Bucket=bucket_name)
client.put_object(Bucket=bucket_name, Key=new_key, Body=obj)
# validate uploaded object
c2 = get_client()
response = c2.get_object(Bucket=bucket_name, Key=new_key)
eq(response['Body'].read().decode('utf-8'), obj, 's3select error: downloaded object not equal to uploaded object')
def run_s3select(bucket,key,query,column_delim=",",row_delim="\n",quot_char='"',esc_char='\\',csv_header_info="NONE"):
s3 = get_client()
r = s3.select_object_content(
Bucket=bucket,
Key=key,
ExpressionType='SQL',
InputSerialization = {"CSV": {"RecordDelimiter" : row_delim, "FieldDelimiter" : column_delim,"QuoteEscapeCharacter": esc_char, "QuoteCharacter": quot_char, "FileHeaderInfo": csv_header_info}, "CompressionType": "NONE"},
OutputSerialization = {"CSV": {}},
Expression=query,)
result = ""
for event in r['Payload']:
if 'Records' in event:
records = event['Records']['Payload'].decode('utf-8')
result += records
return result
def remove_xml_tags_from_result(obj):
result = ""
for rec in obj.split("\n"):
if(rec.find("Payload")>0 or rec.find("Records")>0):
continue
result += rec + "\n" # remove by split
return result
def create_list_of_int(column_pos,obj,field_split=",",row_split="\n"):
list_of_int = []
for rec in obj.split(row_split):
col_num = 1
if ( len(rec) == 0):
continue
for col in rec.split(field_split):
if (col_num == column_pos):
list_of_int.append(int(col))
col_num+=1
return list_of_int
@attr('s3select')
def test_count_operation():
csv_obj_name = get_random_string()
bucket_name = "test"
num_of_rows = 1234
obj_to_load = create_random_csv_object(num_of_rows,10)
upload_csv_object(bucket_name,csv_obj_name,obj_to_load)
res = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,"select count(0) from stdin;") ).replace(",","")
nose.tools.assert_equal( num_of_rows, int( res ))
@attr('s3select')
def test_column_sum_min_max():
csv_obj = create_random_csv_object(10000,10)
csv_obj_name = get_random_string()
bucket_name = "test"
upload_csv_object(bucket_name,csv_obj_name,csv_obj)
csv_obj_name_2 = get_random_string()
bucket_name_2 = "testbuck2"
upload_csv_object(bucket_name_2,csv_obj_name_2,csv_obj)
res_s3select = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,"select min(int(_1)) from stdin;") ).replace(",","")
list_int = create_list_of_int( 1 , csv_obj )
res_target = min( list_int )
nose.tools.assert_equal( int(res_s3select), int(res_target))
res_s3select = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,"select min(int(_4)) from stdin;") ).replace(",","")
list_int = create_list_of_int( 4 , csv_obj )
res_target = min( list_int )
nose.tools.assert_equal( int(res_s3select), int(res_target))
res_s3select = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,"select avg(int(_6)) from stdin;") ).replace(",","")
list_int = create_list_of_int( 6 , csv_obj )
res_target = float(sum(list_int ))/10000
nose.tools.assert_equal( float(res_s3select), float(res_target))
res_s3select = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,"select max(int(_4)) from stdin;") ).replace(",","")
list_int = create_list_of_int( 4 , csv_obj )
res_target = max( list_int )
nose.tools.assert_equal( int(res_s3select), int(res_target))
res_s3select = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,"select max(int(_7)) from stdin;") ).replace(",","")
list_int = create_list_of_int( 7 , csv_obj )
res_target = max( list_int )
nose.tools.assert_equal( int(res_s3select), int(res_target))
res_s3select = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,"select sum(int(_4)) from stdin;") ).replace(",","")
list_int = create_list_of_int( 4 , csv_obj )
res_target = sum( list_int )
nose.tools.assert_equal( int(res_s3select), int(res_target))
res_s3select = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,"select sum(int(_7)) from stdin;") ).replace(",","")
list_int = create_list_of_int( 7 , csv_obj )
res_target = sum( list_int )
nose.tools.assert_equal( int(res_s3select) , int(res_target) )
# the following queries validate, on *random* input, an *accurate* relation between condition results, sum operation and count operation.
res_s3select = remove_xml_tags_from_result( run_s3select(bucket_name_2,csv_obj_name_2,"select count(0),sum(int(_1)),sum(int(_2)) from stdin where (int(_1)-int(_2)) == 2;" ) )
count,sum1,sum2,d = res_s3select.split(",")
nose.tools.assert_equal( int(count)*2 , int(sum1)-int(sum2 ) )
res_s3select = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,"select count(0),sum(int(_1)),sum(int(_2)) from stdin where (int(_1)-int(_2)) == 4;" ) )
count,sum1,sum2,d = res_s3select.split(",")
nose.tools.assert_equal( int(count)*4 , int(sum1)-int(sum2) )
@attr('s3select')
def test_nullif_expressions():
csv_obj = create_random_csv_object(10000,10)
csv_obj_name = get_random_string()
bucket_name = "test"
upload_csv_object(bucket_name,csv_obj_name,csv_obj)
res_s3select_nullif = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,"select count(0) from stdin where nullif(_1,_2) is null ;") ).replace("\n","")
res_s3select = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,"select count(0) from stdin where _1 == _2 ;") ).replace("\n","")
nose.tools.assert_equal( res_s3select_nullif, res_s3select)
res_s3select_nullif = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,"select count(0) from stdin where not nullif(_1,_2) is null ;") ).replace("\n","")
res_s3select = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,"select count(0) from stdin where _1 != _2 ;") ).replace("\n","")
nose.tools.assert_equal( res_s3select_nullif, res_s3select)
res_s3select_nullif = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,"select count(0) from stdin where nullif(_1,_2) == _1 ;") ).replace("\n","")
res_s3select = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,"select count(0) from stdin where _1 != _2 ;") ).replace("\n","")
nose.tools.assert_equal( res_s3select_nullif, res_s3select)
@attr('s3select')
def test_lowerupper_expressions():
csv_obj = create_random_csv_object(1,10)
csv_obj_name = get_random_string()
bucket_name = "test"
upload_csv_object(bucket_name,csv_obj_name,csv_obj)
res_s3select = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,'select lower("AB12cd$$") from stdin ;') ).replace("\n","")
nose.tools.assert_equal( res_s3select, "ab12cd$$,")
res_s3select = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,'select upper("ab12CD$$") from stdin ;') ).replace("\n","")
nose.tools.assert_equal( res_s3select, "AB12CD$$,")
@attr('s3select')
def test_in_expressions():
# purpose of test: engine processes correctly several projections containing aggregation functions
csv_obj = create_random_csv_object(10000,10)
csv_obj_name = get_random_string()
bucket_name = "test"
upload_csv_object(bucket_name,csv_obj_name,csv_obj)
res_s3select_in = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,'select int(_1) from stdin where int(_1) in(1);')).replace("\n","")
res_s3select = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,'select int(_1) from stdin where int(_1) == 1;')).replace("\n","")
nose.tools.assert_equal( res_s3select_in, res_s3select )
res_s3select_in = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,'select int(_1) from stdin where int(_1) in(1,0);')).replace("\n","")
res_s3select = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,'select int(_1) from stdin where int(_1) == 1 or int(_1) == 0;')).replace("\n","")
nose.tools.assert_equal( res_s3select_in, res_s3select )
res_s3select_in = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,'select int(_2) from stdin where int(_2) in(1,0,2);')).replace("\n","")
res_s3select = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,'select int(_2) from stdin where int(_2) == 1 or int(_2) == 0 or int(_2) == 2;')).replace("\n","")
nose.tools.assert_equal( res_s3select_in, res_s3select )
res_s3select_in = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,'select int(_2) from stdin where int(_2)*2 in(int(_3)*2,int(_4)*3,int(_5)*5);')).replace("\n","")
res_s3select = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,'select int(_2) from stdin where int(_2)*2 == int(_3)*2 or int(_2)*2 == int(_4)*3 or int(_2)*2 == int(_5)*5;')).replace("\n","")
nose.tools.assert_equal( res_s3select_in, res_s3select )
res_s3select_in = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,'select int(_1) from stdin where character_length(_1) == 2 and substring(_1,2,1) in ("3");')).replace("\n","")
res_s3select = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,'select int(_1) from stdin where _1 like "_3";')).replace("\n","")
nose.tools.assert_equal( res_s3select_in, res_s3select )
@attr('s3select')
def test_like_expressions():
csv_obj = create_random_csv_object_string(1000,10)
csv_obj_name = get_random_string()
bucket_name = "test"
upload_csv_object(bucket_name,csv_obj_name,csv_obj)
res_s3select_in = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,'select count(*) from stdin where _1 like "%aeio%";')).replace("\n","")
res_s3select = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name, 'select count(*) from stdin where substring(_1,11,4) == "aeio" ;')).replace("\n","")
nose.tools.assert_equal( res_s3select_in, res_s3select )
res_s3select_in = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,'select count(*) from stdin where _1 like "cbcd%";')).replace("\n","")
res_s3select = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name, 'select count(*) from stdin where substring(_1,1,4) == "cbcd";')).replace("\n","")
nose.tools.assert_equal( res_s3select_in, res_s3select )
res_s3select_in = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,'select count(*) from stdin where _3 like "%y[y-z]";')).replace("\n","")
res_s3select = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name, 'select count(*) from stdin where substring(_3,char_length(_3),1) between "y" and "z" and substring(_3,char_length(_3)-1,1) == "y";')).replace("\n","")
nose.tools.assert_equal( res_s3select_in, res_s3select )
res_s3select_in = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,'select count(*) from stdin where _2 like "%yz";')).replace("\n","")
res_s3select = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name, 'select count(*) from stdin where substring(_2,char_length(_2),1) == "z" and substring(_2,char_length(_2)-1,1) == "y";')).replace("\n","")
nose.tools.assert_equal( res_s3select_in, res_s3select )
res_s3select_in = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,'select count(*) from stdin where _3 like "c%z";')).replace("\n","")
res_s3select = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name, 'select count(*) from stdin where substring(_3,char_length(_3),1) == "z" and substring(_3,1,1) == "c";')).replace("\n","")
nose.tools.assert_equal( res_s3select_in, res_s3select )
res_s3select_in = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,'select count(*) from stdin where _2 like "%xy_";')).replace("\n","")
res_s3select = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name, 'select count(*) from stdin where substring(_2,char_length(_2)-1,1) == "y" and substring(_2,char_length(_2)-2,1) == "x";')).replace("\n","")
nose.tools.assert_equal( res_s3select_in, res_s3select )
@attr('s3select')
def test_complex_expressions():
# purpose of test: engine processes correctly several projections containing aggregation functions
csv_obj = create_random_csv_object(10000,10)
csv_obj_name = get_random_string()
bucket_name = "test"
upload_csv_object(bucket_name,csv_obj_name,csv_obj)
res_s3select = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,"select min(int(_1)),max(int(_2)),min(int(_3))+1 from stdin;")).replace("\n","")
min_1 = min ( create_list_of_int( 1 , csv_obj ) )
max_2 = max ( create_list_of_int( 2 , csv_obj ) )
min_3 = min ( create_list_of_int( 3 , csv_obj ) ) + 1
__res = "{},{},{},".format(min_1,max_2,min_3)
# assert is according to random-csv function
nose.tools.assert_equal( res_s3select, __res )
# purpose of test: all where conditions create the same group of values, thus the same result
res_s3select_substring = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,'select min(int(_2)),max(int(_2)) from stdin where substring(_2,1,1) == "1"')).replace("\n","")
res_s3select_between_numbers = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,'select min(int(_2)),max(int(_2)) from stdin where int(_2)>=100 and int(_2)<200')).replace("\n","")
res_s3select_eq_modolu = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,'select min(int(_2)),max(int(_2)) from stdin where int(_2)/100 == 1 or int(_2)/10 == 1 or int(_2) == 1')).replace("\n","")
nose.tools.assert_equal( res_s3select_substring, res_s3select_between_numbers)
nose.tools.assert_equal( res_s3select_between_numbers, res_s3select_eq_modolu)
@attr('s3select')
def test_alias():
# purpose: the test compares results of exactly the same queries, one with aliases, the other without.
# this test sets aliases on 3 projections; the third projection uses another projection's alias; the where clause also uses aliases
# the test validates that where-clause and projections execute aliases correctly; bear in mind that each alias has its own cache,
# and that cache needs to be invalidated per new row.
csv_obj = create_random_csv_object(10000,10)
csv_obj_name = get_random_string()
bucket_name = "test"
upload_csv_object(bucket_name,csv_obj_name,csv_obj)
res_s3select_alias = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,"select int(_1) as a1, int(_2) as a2 , (a1+a2) as a3 from stdin where a3>100 and a3<300;") ).replace(",","")
res_s3select_no_alias = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,"select int(_1),int(_2),int(_1)+int(_2) from stdin where (int(_1)+int(_2))>100 and (int(_1)+int(_2))<300;") ).replace(",","")
nose.tools.assert_equal( res_s3select_alias, res_s3select_no_alias)
@attr('s3select')
def test_alias_cyclic_refernce():
number_of_rows = 10000
# purpose of test is to validate the s3select-engine is able to detect a cyclic reference to alias.
csv_obj = create_random_csv_object(number_of_rows,10)
csv_obj_name = get_random_string()
bucket_name = "test"
upload_csv_object(bucket_name,csv_obj_name,csv_obj)
res_s3select_alias = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,"select int(_1) as a1,int(_2) as a2, a1+a4 as a3, a5+a1 as a4, int(_3)+a3 as a5 from stdin;") )
find_res = res_s3select_alias.find("number of calls exceed maximum size, probably a cyclic reference to alias")
assert int(find_res) >= 0
@attr('s3select')
def test_datetime():
# purpose of test is to validate date-time functionality is correct,
# by creating the same groups with different (nested) function calls, which later produce the same result
csv_obj = create_csv_object_for_datetime(10000,1)
csv_obj_name = get_random_string()
bucket_name = "test"
upload_csv_object(bucket_name,csv_obj_name,csv_obj)
res_s3select_date_time = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,'select count(0) from stdin where extract("year",timestamp(_1)) > 1950 and extract("year",timestamp(_1)) < 1960;') )
res_s3select_substring = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,'select count(0) from stdin where int(substring(_1,1,4))>1950 and int(substring(_1,1,4))<1960;') )
nose.tools.assert_equal( res_s3select_date_time, res_s3select_substring)
res_s3select_date_time = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,'select count(0) from stdin where datediff("month",timestamp(_1),dateadd("month",2,timestamp(_1)) ) == 2;') )
res_s3select_count = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,'select count(0) from stdin;') )
nose.tools.assert_equal( res_s3select_date_time, res_s3select_count)
res_s3select_date_time = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,'select count(0) from stdin where datediff("year",timestamp(_1),dateadd("day", 366 ,timestamp(_1))) == 1 ;') )
nose.tools.assert_equal( res_s3select_date_time, res_s3select_count)
# validate that utcnow integrates correctly with other date-time functions
res_s3select_date_time_utcnow = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,'select count(0) from stdin where datediff("hours",utcnow(),dateadd("day",1,utcnow())) == 24 ;') )
nose.tools.assert_equal( res_s3select_date_time_utcnow, res_s3select_count)
@attr('s3select')
def test_csv_parser():
# purpose: test default csv values(, \n " \ ); the return value may contain meta-chars
# NOTE: the default meta-chars for s3select are also python meta-chars, thus in one example a double \ is mandatory
csv_obj = ',first,,,second,third="c31,c32,c33",forth="1,2,3,4",fifth="my_string=\\"any_value\\" , my_other_string=\\"aaaa,bbb\\" ",' + "\n"
csv_obj_name = get_random_string()
bucket_name = "test"
upload_csv_object(bucket_name,csv_obj_name,csv_obj)
# return value contains comma{,}
res_s3select_alias = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,"select _6 from stdin;") ).replace("\n","")
nose.tools.assert_equal( res_s3select_alias, 'third="c31,c32,c33",')
# return value contains comma{,}
res_s3select_alias = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,"select _7 from stdin;") ).replace("\n","")
nose.tools.assert_equal( res_s3select_alias, 'forth="1,2,3,4",')
# return value contains comma{,} and quote{"}; escape-rule{\} bypasses quote{"}; the escape{\} is removed.
res_s3select_alias = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,"select _8 from stdin;") ).replace("\n","")
nose.tools.assert_equal( res_s3select_alias, 'fifth="my_string="any_value" , my_other_string="aaaa,bbb" ",')
# return NULL as first token
res_s3select_alias = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,"select _1 from stdin;") ).replace("\n","")
nose.tools.assert_equal( res_s3select_alias, ',')
# return NULL in the middle of line
res_s3select_alias = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,"select _3 from stdin;") ).replace("\n","")
nose.tools.assert_equal( res_s3select_alias, ',')
# return NULL in the middle of line (successive)
res_s3select_alias = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,"select _4 from stdin;") ).replace("\n","")
nose.tools.assert_equal( res_s3select_alias, ',')
# return NULL at the end of line
res_s3select_alias = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,"select _9 from stdin;") ).replace("\n","")
nose.tools.assert_equal( res_s3select_alias, ',')
@attr('s3select')
def test_csv_definition():
number_of_rows = 10000
#create object with pipe-sign as field separator and tab as row delimiter.
csv_obj = create_random_csv_object(number_of_rows,10,"|","\t")
csv_obj_name = get_random_string()
bucket_name = "test"
upload_csv_object(bucket_name,csv_obj_name,csv_obj)
# purpose of test is to correctly parse input with different csv definitions
res = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,"select count(0) from stdin;","|","\t") ).replace(",","")
nose.tools.assert_equal( number_of_rows, int(res))
# assert is according to random-csv function
# purpose of test is to validate that tokens are processed correctly
res_s3select = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,"select min(int(_1)),max(int(_2)),min(int(_3))+1 from stdin;","|","\t") ).replace("\n","")
min_1 = min ( create_list_of_int( 1 , csv_obj , "|","\t") )
max_2 = max ( create_list_of_int( 2 , csv_obj , "|","\t") )
min_3 = min ( create_list_of_int( 3 , csv_obj , "|","\t") ) + 1
__res = "{},{},{},".format(min_1,max_2,min_3)
nose.tools.assert_equal( res_s3select, __res )
@attr('s3select')
def test_schema_definition():
number_of_rows = 10000
# purpose of test is to validate functionality using csv header info
csv_obj = create_random_csv_object(number_of_rows,10,csv_schema="c1,c2,c3,c4,c5,c6,c7,c8,c9,c10")
csv_obj_name = get_random_string()
bucket_name = "test"
upload_csv_object(bucket_name,csv_obj_name,csv_obj)
# ignore the schema on the first line and retrieve using generic column numbers
res_ignore = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,"select _1,_3 from stdin;",csv_header_info="IGNORE") ).replace("\n","")
# use the schema on the first line; the query uses the attached schema
res_use = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,"select c1,c3 from stdin;",csv_header_info="USE") ).replace("\n","")
# result of both queries should be the same
nose.tools.assert_equal( res_ignore, res_use)
# using a column-name that does not exist in the schema
res_multiple_defintion = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,"select c1,c10,int(c11) from stdin;",csv_header_info="USE") ).replace("\n","")
assert res_multiple_defintion.find("alias {c11} or column not exist in schema") > 0
# alias-name is identical to column-name
res_multiple_defintion = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,"select int(c1)+int(c2) as c4,c4 from stdin;",csv_header_info="USE") ).replace("\n","")
assert res_multiple_defintion.find("multiple definition of column {c4} as schema-column and alias") > 0
@attr('s3select')
def test_when_than_else_expressions():
csv_obj = create_random_csv_object(10000,10)
csv_obj_name = get_random_string()
bucket_name = "test"
upload_csv_object(bucket_name,csv_obj_name,csv_obj)
res_s3select = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,'select case when cast(_1 as int)>100 and cast(_1 as int)<200 than "(100-200)" when cast(_1 as int)>200 and cast(_1 as int)<300 than "(200-300)" else "NONE" end from s3object;') ).replace("\n","")
count1 = res_s3select.count("(100-200)")
count2 = res_s3select.count("(200-300)")
count3 = res_s3select.count("NONE")
res = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,'select count(*) from s3object where cast(_1 as int)>100 and cast(_1 as int)<200 ;') ).replace("\n","")
res1 = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,'select count(*) from s3object where cast(_1 as int)>200 and cast(_1 as int)<300 ;') ).replace("\n","")
res2 = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,'select count(*) from s3object where cast(_1 as int)<=100 or cast(_1 as int)>=300 or cast(_1 as int)==200 ;') ).replace("\n","")
nose.tools.assert_equal( str(count1) + ',', res)
nose.tools.assert_equal( str(count2) + ',', res1)
nose.tools.assert_equal( str(count3) + ',', res2)
@attr('s3select')
def test_coalesce_expressions():
csv_obj = create_random_csv_object(10000,10)
csv_obj_name = get_random_string()
bucket_name = "test"
upload_csv_object(bucket_name,csv_obj_name,csv_obj)
res_s3select = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,'select count(*) from s3object where char_length(_3)>2 and char_length(_4)>2 and cast(substring(_3,1,2) as int) == cast(substring(_4,1,2) as int);') ).replace("\n","")
res_null = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,'select count(*) from s3object where cast(_3 as int)>99 and cast(_4 as int)>99 and coalesce(nullif(cast(substring(_3,1,2) as int),cast(substring(_4,1,2) as int)),7) == 7;' ) ).replace("\n","")
nose.tools.assert_equal( res_s3select, res_null)
res_s3select = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,'select coalesce(nullif(_5,_5),nullif(_1,_1),_2) from stdin;') ).replace("\n","")
res_coalesce = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,'select coalesce(_2) from stdin;') ).replace("\n","")
nose.tools.assert_equal( res_s3select, res_coalesce)
@attr('s3select')
def test_cast_expressions():
csv_obj = create_random_csv_object(10000,10)
csv_obj_name = get_random_string()
bucket_name = "test"
upload_csv_object(bucket_name,csv_obj_name,csv_obj)
res_s3select = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,'select count(*) from s3object where cast(_3 as int)>999;') ).replace("\n","")
res = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,'select count(*) from s3object where char_length(_3)>3;') ).replace("\n","")
nose.tools.assert_equal( res_s3select, res)
res_s3select = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,'select count(*) from s3object where cast(_3 as int)>99 and cast(_3 as int)<1000;') ).replace("\n","")
res = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,'select count(*) from s3object where char_length(_3)==3;') ).replace("\n","")
nose.tools.assert_equal( res_s3select, res)
@attr('s3select')
def test_version():
return
number_of_rows = 1
# purpose of test is to validate the engine version string
csv_obj = create_random_csv_object(number_of_rows,10)
csv_obj_name = get_random_string()
bucket_name = "test"
upload_csv_object(bucket_name,csv_obj_name,csv_obj)
res_version = remove_xml_tags_from_result( run_s3select(bucket_name,csv_obj_name,"select version() from stdin;") ).replace("\n","")
nose.tools.assert_equal( res_version, "41.a," )


@@ -0,0 +1,360 @@
import boto3
import botocore.session
from botocore.exceptions import ClientError
from botocore.exceptions import ParamValidationError
from nose.tools import eq_ as eq
from nose.plugins.attrib import attr
from nose.plugins.skip import SkipTest
import isodate
import email.utils
import datetime
import threading
import re
import pytz
from collections import OrderedDict
import requests
import json
import base64
import hmac
import hashlib
import xml.etree.ElementTree as ET
import time
import operator
import nose
import os
import string
import random
import socket
import ssl
import logging
from collections import namedtuple
from email.header import decode_header
from . import (
get_iam_client,
get_sts_client,
get_client,
get_alt_user_id,
get_config_endpoint,
get_new_bucket_name,
get_parameter_name,
get_main_aws_access_key,
get_main_aws_secret_key,
get_thumbprint,
get_aud,
get_token,
get_realm_name,
check_webidentity
)
log = logging.getLogger(__name__)
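# Each IAM helper below returns a tuple whose first element is the
# ClientError code string (or None on success), so callers can assert on
# either the failure mode or the HTTP status of the raw response.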
def create_role(iam_client,path,rolename,policy_document,description,sessionduration,permissionboundary):
role_err=None
role_response=None
if rolename is None:
rolename=get_parameter_name()
try:
role_response = iam_client.create_role(Path=path,RoleName=rolename,AssumeRolePolicyDocument=policy_document,)
except ClientError as e:
role_err = e.response.get('Error', {}).get('Code')
return (role_err,role_response,rolename)
def put_role_policy(iam_client,rolename,policyname,role_policy):
role_err=None
role_response=None
if policyname is None:
policyname=get_parameter_name()
try:
role_response = iam_client.put_role_policy(RoleName=rolename,PolicyName=policyname,PolicyDocument=role_policy)
except ClientError as e:
role_err = e.response.get('Error', {}).get('Code')
return (role_err,role_response)
def put_user_policy(iam_client,username,policyname,policy_document):
role_err=None
role_response=None
if policyname is None:
policyname=get_parameter_name()
try:
role_response = iam_client.put_user_policy(UserName=username,PolicyName=policyname,PolicyDocument=policy_document)
except ClientError as e:
role_err = e.response.get('Error', {}).get('Code')
return (role_err,role_response,policyname)
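# Typical call pattern, a minimal sketch mirroring the tests below:
#   (err, resp, policy_name) = put_user_policy(iam_client, user_id, None, policy_json)
#   eq(resp['ResponseMetadata']['HTTPStatusCode'], 200)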
@attr(resource='get session token')
@attr(method='get')
@attr(operation='check')
@attr(assertion='s3 ops only accessible by temporary credentials')
@attr('test_of_sts')
def test_get_session_token():
iam_client=get_iam_client()
sts_client=get_sts_client()
sts_user_id=get_alt_user_id()
default_endpoint=get_config_endpoint()
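# user policy: deny all s3 actions unless the request is authenticated with
# temporary (STS) credentials, while still allowing sts:GetSessionToken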
user_policy = "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Deny\",\"Action\":\"s3:*\",\"Resource\":[\"*\"],\"Condition\":{\"BoolIfExists\":{\"sts:authentication\":\"false\"}}},{\"Effect\":\"Allow\",\"Action\":\"sts:GetSessionToken\",\"Resource\":\"*\",\"Condition\":{\"BoolIfExists\":{\"sts:authentication\":\"false\"}}}]}"
(resp_err,resp,policy_name)=put_user_policy(iam_client,sts_user_id,None,user_policy)
eq(resp['ResponseMetadata']['HTTPStatusCode'],200)
response=sts_client.get_session_token()
eq(response['ResponseMetadata']['HTTPStatusCode'],200)
s3_client=boto3.client('s3',
aws_access_key_id = response['Credentials']['AccessKeyId'],
aws_secret_access_key = response['Credentials']['SecretAccessKey'],
aws_session_token = response['Credentials']['SessionToken'],
endpoint_url=default_endpoint,
region_name='',
)
bucket_name = get_new_bucket_name()
try:
s3bucket = s3_client.create_bucket(Bucket=bucket_name)
eq(s3bucket['ResponseMetadata']['HTTPStatusCode'],200)
finish=s3_client.delete_bucket(Bucket=bucket_name)
finally: # clean up user policy even if create_bucket/delete_bucket fails
iam_client.delete_user_policy(UserName=sts_user_id,PolicyName=policy_name)
@attr(resource='get session token')
@attr(method='get')
@attr(operation='check')
@attr(assertion='s3 ops denied by permanent credentials')
@attr('test_of_sts')
def test_get_session_token_permanent_creds_denied():
s3bucket_error=None
iam_client=get_iam_client()
sts_client=get_sts_client()
sts_user_id=get_alt_user_id()
default_endpoint=get_config_endpoint()
s3_main_access_key=get_main_aws_access_key()
s3_main_secret_key=get_main_aws_secret_key()
user_policy = "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Deny\",\"Action\":\"s3:*\",\"Resource\":[\"*\"],\"Condition\":{\"BoolIfExists\":{\"sts:authentication\":\"false\"}}},{\"Effect\":\"Allow\",\"Action\":\"sts:GetSessionToken\",\"Resource\":\"*\",\"Condition\":{\"BoolIfExists\":{\"sts:authentication\":\"false\"}}}]}"
(resp_err,resp,policy_name)=put_user_policy(iam_client,sts_user_id,None,user_policy)
eq(resp['ResponseMetadata']['HTTPStatusCode'],200)
response=sts_client.get_session_token()
eq(response['ResponseMetadata']['HTTPStatusCode'],200)
s3_client=boto3.client('s3',
aws_access_key_id = s3_main_access_key,
aws_secret_access_key = s3_main_secret_key,
aws_session_token = response['Credentials']['SessionToken'],
endpoint_url=default_endpoint,
region_name='',
)
bucket_name = get_new_bucket_name()
try:
s3bucket = s3_client.create_bucket(Bucket=bucket_name)
except ClientError as e:
s3bucket_error = e.response.get("Error", {}).get("Code")
eq(s3bucket_error,'AccessDenied')
iam_client.delete_user_policy(UserName=sts_user_id,PolicyName=policy_name)
@attr(resource='assume role')
@attr(method='get')
@attr(operation='check')
@attr(assertion='role policy allows all s3 ops')
@attr('test_of_sts')
def test_assume_role_allow():
iam_client=get_iam_client()
sts_client=get_sts_client()
sts_user_id=get_alt_user_id()
default_endpoint=get_config_endpoint()
role_session_name=get_parameter_name()
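# trust policy: allow the alt user to call sts:AssumeRole on this role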
policy_document = "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"AWS\":[\"arn:aws:iam:::user/"+sts_user_id+"\"]},\"Action\":[\"sts:AssumeRole\"]}]}"
(role_error,role_response,general_role_name)=create_role(iam_client,'/',None,policy_document,None,None,None)
eq(role_response['Role']['Arn'],'arn:aws:iam:::role/'+general_role_name+'')
role_policy = "{\"Version\":\"2012-10-17\",\"Statement\":{\"Effect\":\"Allow\",\"Action\":\"s3:*\",\"Resource\":\"arn:aws:s3:::*\"}}"
(role_err,response)=put_role_policy(iam_client,general_role_name,None,role_policy)
eq(response['ResponseMetadata']['HTTPStatusCode'],200)
resp=sts_client.assume_role(RoleArn=role_response['Role']['Arn'],RoleSessionName=role_session_name)
eq(resp['ResponseMetadata']['HTTPStatusCode'],200)
s3_client = boto3.client('s3',
aws_access_key_id = resp['Credentials']['AccessKeyId'],
aws_secret_access_key = resp['Credentials']['SecretAccessKey'],
aws_session_token = resp['Credentials']['SessionToken'],
endpoint_url=default_endpoint,
region_name='',
)
bucket_name = get_new_bucket_name()
s3bucket = s3_client.create_bucket(Bucket=bucket_name)
eq(s3bucket['ResponseMetadata']['HTTPStatusCode'],200)
bkt = s3_client.delete_bucket(Bucket=bucket_name)
eq(bkt['ResponseMetadata']['HTTPStatusCode'],204)
@attr(resource='assume role')
@attr(method='get')
@attr(operation='check')
@attr(assertion='role policy denies all s3 ops')
@attr('test_of_sts')
def test_assume_role_deny():
s3bucket_error=None
iam_client=get_iam_client()
sts_client=get_sts_client()
sts_user_id=get_alt_user_id()
default_endpoint=get_config_endpoint()
role_session_name=get_parameter_name()
policy_document = "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"AWS\":[\"arn:aws:iam:::user/"+sts_user_id+"\"]},\"Action\":[\"sts:AssumeRole\"]}]}"
(role_error,role_response,general_role_name)=create_role(iam_client,'/',None,policy_document,None,None,None)
eq(role_response['Role']['Arn'],'arn:aws:iam:::role/'+general_role_name+'')
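# attach a role policy that explicitly denies every s3 action; the assumed
# role should then be unable to create a bucket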
role_policy = "{\"Version\":\"2012-10-17\",\"Statement\":{\"Effect\":\"Deny\",\"Action\":\"s3:*\",\"Resource\":\"arn:aws:s3:::*\"}}"
(role_err,response)=put_role_policy(iam_client,general_role_name,None,role_policy)
eq(response['ResponseMetadata']['HTTPStatusCode'],200)
resp=sts_client.assume_role(RoleArn=role_response['Role']['Arn'],RoleSessionName=role_session_name)
eq(resp['ResponseMetadata']['HTTPStatusCode'],200)
s3_client = boto3.client('s3',
aws_access_key_id = resp['Credentials']['AccessKeyId'],
aws_secret_access_key = resp['Credentials']['SecretAccessKey'],
aws_session_token = resp['Credentials']['SessionToken'],
endpoint_url=default_endpoint,
region_name='',
)
bucket_name = get_new_bucket_name()
try:
s3bucket = s3_client.create_bucket(Bucket=bucket_name)
except ClientError as e:
s3bucket_error = e.response.get("Error", {}).get("Code")
eq(s3bucket_error,'AccessDenied')
@attr(resource='assume role')
@attr(method='get')
@attr(operation='check')
@attr(assertion='creds expire so all s3 ops fails')
@attr('test_of_sts')
def test_assume_role_creds_expiry():
iam_client=get_iam_client()
sts_client=get_sts_client()
sts_user_id=get_alt_user_id()
default_endpoint=get_config_endpoint()
role_session_name=get_parameter_name()
policy_document = "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"AWS\":[\"arn:aws:iam:::user/"+sts_user_id+"\"]},\"Action\":[\"sts:AssumeRole\"]}]}"
(role_error,role_response,general_role_name)=create_role(iam_client,'/',None,policy_document,None,None,None)
eq(role_response['Role']['Arn'],'arn:aws:iam:::role/'+general_role_name+'')
role_policy = "{\"Version\":\"2012-10-17\",\"Statement\":{\"Effect\":\"Allow\",\"Action\":\"s3:*\",\"Resource\":\"arn:aws:s3:::*\"}}"
(role_err,response)=put_role_policy(iam_client,general_role_name,None,role_policy)
eq(response['ResponseMetadata']['HTTPStatusCode'],200)
resp=sts_client.assume_role(RoleArn=role_response['Role']['Arn'],RoleSessionName=role_session_name,DurationSeconds=900)
eq(resp['ResponseMetadata']['HTTPStatusCode'],200)
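# credentials were issued with DurationSeconds=900 (the STS minimum);
# sleep past their expiry so the subsequent create_bucket is rejected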
time.sleep(900)
s3_client = boto3.client('s3',
aws_access_key_id = resp['Credentials']['AccessKeyId'],
aws_secret_access_key = resp['Credentials']['SecretAccessKey'],
aws_session_token = resp['Credentials']['SessionToken'],
endpoint_url=default_endpoint,
region_name='',
)
bucket_name = get_new_bucket_name()
try:
s3bucket = s3_client.create_bucket(Bucket=bucket_name)
except ClientError as e:
s3bucket_error = e.response.get("Error", {}).get("Code")
eq(s3bucket_error,'AccessDenied')
@attr(resource='assume role with web identity')
@attr(method='get')
@attr(operation='check')
@attr(assertion='assuming role through web token')
@attr('webidentity_test')
def test_assume_role_with_web_identity():
check_webidentity()
iam_client=get_iam_client()
sts_client=get_sts_client()
default_endpoint=get_config_endpoint()
role_session_name=get_parameter_name()
thumbprint=get_thumbprint()
aud=get_aud()
token=get_token()
realm=get_realm_name()
oidc_response = iam_client.create_open_id_connect_provider(
Url='http://localhost:8080/auth/realms/{}'.format(realm),
ThumbprintList=[
thumbprint,
],
)
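# trust policy: only tokens federated through the OIDC provider created
# above, and carrying the expected app_id (audience), may assume the role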
policy_document = "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"Federated\":[\""+oidc_response["OpenIDConnectProviderArn"]+"\"]},\"Action\":[\"sts:AssumeRoleWithWebIdentity\"],\"Condition\":{\"StringEquals\":{\"localhost:8080/auth/realms/"+realm+":app_id\":\""+aud+"\"}}}]}"
(role_error,role_response,general_role_name)=create_role(iam_client,'/',None,policy_document,None,None,None)
eq(role_response['Role']['Arn'],'arn:aws:iam:::role/'+general_role_name+'')
role_policy = "{\"Version\":\"2012-10-17\",\"Statement\":{\"Effect\":\"Allow\",\"Action\":\"s3:*\",\"Resource\":\"arn:aws:s3:::*\"}}"
(role_err,response)=put_role_policy(iam_client,general_role_name,None,role_policy)
eq(response['ResponseMetadata']['HTTPStatusCode'],200)
resp=sts_client.assume_role_with_web_identity(RoleArn=role_response['Role']['Arn'],RoleSessionName=role_session_name,WebIdentityToken=token)
eq(resp['ResponseMetadata']['HTTPStatusCode'],200)
s3_client = boto3.client('s3',
aws_access_key_id = resp['Credentials']['AccessKeyId'],
aws_secret_access_key = resp['Credentials']['SecretAccessKey'],
aws_session_token = resp['Credentials']['SessionToken'],
endpoint_url=default_endpoint,
region_name='',
)
bucket_name = get_new_bucket_name()
s3bucket = s3_client.create_bucket(Bucket=bucket_name)
eq(s3bucket['ResponseMetadata']['HTTPStatusCode'],200)
bkt = s3_client.delete_bucket(Bucket=bucket_name)
eq(bkt['ResponseMetadata']['HTTPStatusCode'],204)
oidc_remove=iam_client.delete_open_id_connect_provider(
OpenIDConnectProviderArn=oidc_response["OpenIDConnectProviderArn"]
)
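# NOTE: the test below is left inside a string literal and is therefore
# effectively disabled; it is not collected or run in this commit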
'''
@attr(resource='assume role with web identity')
@attr(method='get')
@attr(operation='check')
@attr(assertion='assume_role_with_web_identity fails with an invalid web token')
@attr('webidentity_test')
def test_assume_role_with_web_identity_invalid_webtoken():
resp_error=None
iam_client=get_iam_client()
sts_client=get_sts_client()
default_endpoint=get_config_endpoint()
role_session_name=get_parameter_name()
thumbprint=get_thumbprint()
aud=get_aud()
token=get_token()
realm=get_realm_name()
oidc_response = iam_client.create_open_id_connect_provider(
Url='http://localhost:8080/auth/realms/{}'.format(realm),
ThumbprintList=[
thumbprint,
],
)
policy_document = "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"Federated\":[\""+oidc_response["OpenIDConnectProviderArn"]+"\"]},\"Action\":[\"sts:AssumeRoleWithWebIdentity\"],\"Condition\":{\"StringEquals\":{\"localhost:8080/auth/realms/"+realm+":app_id\":\""+aud+"\"}}}]}"
(role_error,role_response,general_role_name)=create_role(iam_client,'/',None,policy_document,None,None,None)
eq(role_response['Role']['Arn'],'arn:aws:iam:::role/'+general_role_name+'')
role_policy = "{\"Version\":\"2012-10-17\",\"Statement\":{\"Effect\":\"Allow\",\"Action\":\"s3:*\",\"Resource\":\"arn:aws:s3:::*\"}}"
(role_err,response)=put_role_policy(iam_client,general_role_name,None,role_policy)
eq(response['ResponseMetadata']['HTTPStatusCode'],200)
resp=""
try:
resp=sts_client.assume_role_with_web_identity(RoleArn=role_response['Role']['Arn'],RoleSessionName=role_session_name,WebIdentityToken='abcdef')
except sts_client.exceptions.InvalidIdentityTokenException as e:
log.debug('{}'.format(resp))
log.debug('{}'.format(e.response.get("Error", {}).get("Code")))
log.debug('{}'.format(e))
resp_error = e.response.get("Error", {}).get("Code")
eq(resp_error,'AccessDenied')
oidc_remove=iam_client.delete_open_id_connect_provider(
OpenIDConnectProviderArn=oidc_response["OpenIDConnectProviderArn"]
)
'''