Compare commits

...

485 commits

Author SHA1 Message Date
0e72717dcd [#310] Extend bucket testsuite
- Added tests to check the correctness of bucket names according to the AWS specification
- Added a test to check the availability of a non-empty bucket after attempting to delete it

Signed-off-by: Kirill Sosnovskikh <k.sosnovskikh@yadro.com>
2024-10-31 12:14:44 +00:00
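The AWS bucket-naming rules these tests exercise are public: 3-63 characters, lowercase letters, digits, dots and hyphens, alphanumeric at both ends, and no IP-shaped names. A minimal sketch of such a check (the helper is illustrative, not the testsuite's actual code):

```python
import ipaddress
import re

def is_valid_bucket_name(name: str) -> bool:
    """Check a bucket name against the core AWS S3 naming rules."""
    if not 3 <= len(name) <= 63:
        return False
    # Lowercase letters, digits, dots and hyphens only; alphanumeric at both ends.
    if not re.fullmatch(r"[a-z0-9][a-z0-9.-]*[a-z0-9]", name):
        return False
    try:
        ipaddress.ip_address(name)  # names formatted like an IP address are forbidden
        return False
    except ValueError:
        return True

assert is_valid_bucket_name("my-bucket-01")
assert not is_valid_bucket_name("MyBucket")     # uppercase
assert not is_valid_bucket_name("192.168.5.4")  # IP-formatted
```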
b33514df3c [#303] add local deny ape tests 2024-10-31 13:19:49 +03:00
09acd6f283 [#316] Move variable under package for single repo runs
Signed-off-by: a.berezin <a.berezin@yadro.com>
2024-10-30 14:19:26 +03:00
b7669fc96f [#315] Use relpath for files
Signed-off-by: a.berezin <a.berezin@yadro.com>
2024-10-29 13:32:07 +03:00
75508cc70c Added nearest node count calculation for choosing ec policy
Signed-off-by: Dmitry Anurin <danurin@yadro.com>
2024-10-25 08:08:43 +03:00
6b83a89b94 [#312] Extend container metrics tests 2024-10-24 16:00:10 +03:00
77126f2706 [#311] add new pattern 2024-10-17 15:07:53 +03:00
64bc778116 [#308] Fix unique user names
Signed-off-by: Kirill Sosnovskikh <k.sosnovskikh@yadro.com>
2024-10-15 07:56:04 +00:00
6442a52abd [#307] fix and extend container metrics tests 2024-10-14 10:07:17 +00:00
8dcb3ccf3c [#309] Add marks
Signed-off-by: a.berezin <a.berezin@yadro.com>
2024-10-11 09:31:24 +00:00
44ed00f9bc [#306] Refactor tests: use unique_name instead of hex + timestamp
Signed-off-by: Kirill Sosnovskikh <k.sosnovskikh@yadro.com>
2024-10-07 18:19:06 +03:00
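A sketch of what a unique_name helper can look like; the implementation below is an assumption for illustration, not the testsuite's actual code:

```python
import uuid

def unique_name(prefix: str = "") -> str:
    # A random UUID per call replaces the old hex(timestamp) scheme, which
    # could collide when two tests started within the same clock tick.
    return f"{prefix}{uuid.uuid4().hex}"

bucket_name = unique_name("bucket-")  # e.g. "bucket-9f1c0c4e..."
```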
d10e5975e7 TrueCloudLab/frostfs-node#1297 update error pattern
Signed-off-by: a.berezin <a.berezin@yadro.com>
2024-10-03 08:22:55 +00:00
f6576d4f6f [#302] Fixed logs metrics test 2024-09-27 16:53:52 +03:00
1afadfa363 [#301] Update EC policy in all tests
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2024-09-17 10:19:20 +00:00
7d0fa79fb2 [#300] Move temp dir fixture to testlib
Signed-off-by: a.berezin <a.berezin@yadro.com>
2024-09-10 13:29:02 +00:00
64c70948f9 [#299] Small logic change in EC policy tests
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2024-09-10 13:28:45 +03:00
8234a0ece2 [#298] Set shard mode in EC test teardown
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2024-09-03 16:53:08 +00:00
9528ff0333 [#295] Update revision allure-validator
Signed-off-by: Kirill Sosnovskikh <k.sosnovskikh@yadro.com>
2024-09-03 16:21:23 +00:00
ffdfff6ba0 [#297] Refactor APE tests
Signed-off-by: a.berezin <a.berezin@yadro.com>
2024-09-02 19:22:59 +03:00
ccdd6ab784 [#296] Add resolve bucket fixture into old resolve func.
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2024-09-02 13:25:13 +03:00
65955a6b06 [#293] Integrate allure-validator into pre-commit hook
Signed-off-by: Kirill Sosnovskikh <k.sosnovskikh@yadro.com>
2024-08-27 16:25:23 +00:00
19a690361d [#294] Fixed garbage collector metrics test
0a5ce7f21a [#292] Skip failing APE tests
Signed-off-by: a.berezin <a.berezin@yadro.com>
2024-08-16 17:18:27 +03:00
c8b95d98f4 [#291] Change error message for network test
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2024-08-16 12:07:50 +03:00
ed19a83068 [#290] Fixed logs metrics tests
f1fb95b40c [#285] Add missing titles to tests
Added titles to the following tests:
- `test_static_session_token_container_create`
- `test_static_session_token_container_create_with_other_verb`
- `test_static_session_token_container_create_with_other_wallet`
- `test_static_session_token_container_delete`
- `test_put_with_bearer_when_eacl_restrict`
- `test_shard_errors`

Signed-off-by: Kirill Sosnovskikh <k.sosnovskikh@yadro.com>
2024-08-14 11:08:01 +03:00
108aae59dd [#284] Add required parameters to test titles
Added `object_size` to `test_object_put_get_bucketname_key`

Signed-off-by: Kirill Sosnovskikh <k.sosnovskikh@yadro.com>
2024-08-12 12:14:04 +00:00
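The title commits above (#285, #284) add Allure titles and thread parameters such as object_size through them. A minimal sketch of the pattern, with placeholder test logic:

```python
import allure
import pytest

@allure.title("Put and get object via bucket name key (object_size={object_size})")
@pytest.mark.parametrize("object_size", ["simple", "complex"])
def test_object_put_get_bucketname_key(object_size):
    # Parameters named in the title template are substituted per run, so each
    # parametrized case gets a distinct, readable title in the Allure report.
    assert object_size in ("simple", "complex")
```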
9e1e4610a8 [#288] Add static title for test test_container_creation
Signed-off-by: Kirill Sosnovskikh <k.sosnovskikh@yadro.com>
2024-08-12 12:12:57 +00:00
b6aeb97193 [#289] Remove duplicate test test_more_one_ec_policy
Signed-off-by: Kirill Sosnovskikh <k.sosnovskikh@yadro.com>
2024-08-12 14:28:44 +03:00
6a372cc1c0 [#286] Add APE tests with objectID filter
Signed-off-by: a.berezin <a.berezin@yadro.com>
2024-08-09 08:19:56 +00:00
fe23edbf12 [#286] Return int typing in verify func
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2024-08-09 09:54:11 +03:00
3806185c74 [#282] Minor APE tests update
Signed-off-by: a.berezin <a.berezin@yadro.com>
2024-08-08 11:12:24 +00:00
802fc4a6a9 [#283] Fix import for module EC tests
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2024-08-08 09:52:14 +03:00
0c881c6fc8 [#281] Added EC policy tests
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2024-08-06 11:54:04 +03:00
5c35e9bb81 [#280] Update verify object func
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2024-08-05 12:47:12 +00:00
626409af78 [#278] Add object operation tests with -g flag
The tests check the result of an 'anonymous' user interacting with a gRPC API object.

Signed-off-by: Kirill Sosnovskikh <k.sosnovskikh@yadro.com>
2024-07-31 10:17:27 +03:00
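The -g flag makes the CLI generate a throw-away key instead of signing with a wallet, which is what makes the caller 'anonymous'. A hypothetical sketch of such a call (the exact CLI shape and endpoint are assumptions):

```python
import subprocess

def anonymous_get(cid: str, oid: str, endpoint: str = "s01.frostfs.devenv:8080"):
    # -g generates an ephemeral key, so the request carries no wallet identity.
    return subprocess.run(
        ["frostfs-cli", "object", "get", "--rpc-endpoint", endpoint, "-g",
         "--cid", cid, "--oid", oid, "--file", "/dev/null"],
        capture_output=True, text=True,
    )

# A public container should allow this; a restricted one should deny access.
```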
79882345e9 [#278] Fix teardown network tests
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2024-07-29 13:59:03 +00:00
89891b306b [#277] Updates related to testing platform
Signed-off-by: a.berezin <a.berezin@yadro.com>
2024-07-26 13:28:19 +00:00
8702c9dc88 [#273] Add new mark session_logs 2024-07-25 14:45:39 +03:00
1f43aa4dc0 [#272] Update CODEOWNERS
Signed-off-by: a.berezin <a.berezin@yadro.com>
2024-07-18 15:01:09 +03:00
fe17f2236b [#271] Migrate eACL tests to APE
Signed-off-by: a.berezin <a.berezin@yadro.com>
2024-07-17 23:58:46 +03:00
6e4c3c33a5 [#270] Move alluredir check to start of fixture
Signed-off-by: Kirill Sosnovskikh <k.sosnovskikh@yadro.com>
2024-07-16 18:09:15 +03:00
c969d9e482 [#269] Fix failover cases with outdated LC; use newer LC
b949ca2ba3 [#267] Fixed expected shard mode if errors on shard are accumulated 2024-07-08 14:03:06 +00:00
35b872dc66 [#268] Mark maintenance mode tests as failover 2024-07-08 14:56:36 +03:00
741102ec17 [#265] fix object and logs metrics tests
5d3d22f685 [#262] Added new test for EC policy
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2024-07-01 15:41:14 +00:00
7d4792f49b [#260] fix logs tests
cc440f9c12 [#261] Update nested test to check keys
Signed-off-by: a.berezin <a.berezin@yadro.com>
2024-06-28 15:15:25 +03:00
5ec844417a [#259] Optimize failover tests by paralleling long steps
Signed-off-by: a.berezin <a.berezin@yadro.com>
2024-06-27 12:31:02 +03:00
d08dbfa07d [#255] refactor metrics tests
e65d19f056 [#258] Speed up tests by removing cleanup and per test healthcheck
Signed-off-by: a.berezin <a.berezin@yadro.com>
2024-06-25 02:27:54 +03:00
c1759bfa08 [#257] Fix maintenance test and introduce custom ordering mark
Signed-off-by: a.berezin <a.berezin@yadro.com>
2024-06-24 17:51:36 +03:00
9756953b10 [#254] Use TestFiles which automatically deletes itself
Signed-off-by: a.berezin <a.berezin@yadro.com>
2024-06-18 13:02:51 +03:00
439a5595f8 [#252] Refactor version checks
Signed-off-by: a.berezin <a.berezin@yadro.com>
2024-06-07 17:20:50 +03:00
b8d5706515 [#250] Add range tests for container and non-container node endpoints
Signed-off-by: a.berezin <a.berezin@yadro.com>
2024-06-06 21:32:36 +03:00
564fb53f7c [#247] Fix indirect param in master
Signed-off-by: a.berezin <a.berezin@yadro.com>
2024-06-06 12:04:59 +03:00
6d68d14461 [#246] Fix ACL and Policy tests
Signed-off-by: a.berezin <a.berezin@yadro.com>
2024-06-05 15:00:07 +03:00
bc88e8eb9e [#244] Fix test policies 2024-06-05 06:52:01 +00:00
236be159db [#244] Fix test policies 2024-06-05 06:52:01 +00:00
ae3ace6ee1 [#238] Added test shard metrics 2024-06-04 09:24:34 +00:00
08aa61ed79 [#239] Added test log counter metrics 2024-06-04 09:22:00 +00:00
a772172fb5 [#237] add test garbage collector metrics 2024-06-04 09:17:16 +00:00
e4584637c6 [#235] add test gRPC metrics 2024-05-31 10:31:43 +00:00
bcb1234766 [#232] add test object metrics 2024-05-29 09:43:42 +00:00
a5ee580345 [#234] Add Object head after deletion test
Signed-off-by: a.berezin <a.berezin@yadro.com>
2024-05-27 20:08:07 +03:00
dcad44869f Updated test_s3_bucket_policy according to 1.5 2024-05-22 13:59:46 +00:00
3941619431 [#228] add test container metrics 2024-05-15 11:18:03 +03:00
9cf083fac9 [#225] Fixed step label in test_extended_acl_deny_all_operations test case 2024-04-25 12:58:22 +00:00
29a23b1e7e [#221] Add multipart test cases with bucket without versioning 2024-04-24 11:10:06 +03:00
abf46a7e16 [#217] Add bucket/container listing check in multipart test case 2024-04-22 17:59:57 +03:00
e098f63251 [#216] Enable http tests, since we removed them in the plugin
Signed-off-by: a.berezin <a.berezin@yadro.com>
2024-04-16 15:17:49 +03:00
b8c58c3b70 [#214] Change wait mode shard
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2024-04-08 10:21:54 +00:00
4da86afa39 [#213] Remove hostname kludges
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2024-04-05 16:41:56 +00:00
9d664290f7 [#205] Add EC policy to sanity testsuite
Signed-off-by: Evgenii Stratonikov <stratonikov@runbox.com>
2024-03-26 15:22:07 +03:00
2d042e2387 [#212] Fix error shard test
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2024-03-21 14:51:18 +03:00
3e878444ce [#211] Fix write cache lost test
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2024-03-21 11:35:26 +03:00
f06e44642a [#210] Update failover case to not use stopped node
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2024-03-19 18:40:42 +03:00
b55103d212 [#209] Update usage of CLI for node management
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2024-03-19 15:44:50 +03:00
b1cb86e360 [#208] Tune sanity mark and log checker
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2024-03-18 12:16:10 +03:00
3387f88ea2 [#207] Fix containers fixtures
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2024-03-14 17:55:30 +00:00
833878d1d9 [#203] Check binaries versions at setup
Signed-off-by: Liza <e.chichindaeva@yadro.com>
2024-03-14 17:54:52 +00:00
d25024e0d7 [#204] Update policy with SELECT and FILTER results with UNIQUE nodes 2024-03-14 17:54:12 +00:00
b61dd7b39c [#206] Overhaul credentials work
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2024-03-11 19:34:54 +03:00
6af5ad9de5 [#202] Use creds provider for s3 client
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2024-02-29 23:28:12 +00:00
9068b96d69 [#199] Update shards tests
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2024-02-20 08:38:55 +00:00
8164d35fc8 [#197] Add switch bucket node endpoint v1.5 2024-02-19 18:43:23 +03:00
c433fe2264 [#196] Update curl related function usages
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2024-02-15 00:23:21 +03:00
251a7881c9 [#194] Add except bucket node 2024-02-14 13:36:09 +03:00
e453614381 [#192] Fix parse name CID
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2024-02-09 11:08:16 +03:00
fe4341893b Fixed S3 policy test case 2024-02-02 08:24:04 +00:00
f7475f9841 [OBJECT-6537] Components versions check
Signed-off-by: Mikhail Kadilov <m.kadilov@yadro.com>
2024-02-01 09:13:02 +00:00
566f1a425f [#189] parallel get remote binaries versions 2024-01-31 14:19:17 +00:00
46e57870d0 Change logs pattern
Signed-off-by: anikeev-yadro <a.anikeev@yadro.com>
2024-01-31 10:31:38 +03:00
0d7befe9a6 [#186] Change call object nodes func
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2024-01-22 14:29:49 +03:00
f49d68a6e7 rename local_config_path 2024-01-16 11:16:53 +00:00
241d3d0585 [#184] Small fix in the container search function
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2024-01-16 08:14:49 +03:00
3a380755b4 [#182] Updates for dev-env
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2024-01-12 16:42:01 +00:00
5bc170e6f9 update policy with field price 2024-01-12 16:38:19 +00:00
522b8e576d [#180] Add test shard mode
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2024-01-12 08:40:40 +00:00
5ddc31cca6 [#179] Change argument func
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2024-01-10 13:43:52 +03:00
9e89dba03d Update pilorama loss and shards test cases 2023-12-22 17:10:30 +03:00
d327e8149b [OBJECT-5670] test_container_creation_deletion_parallel extended
Signed-off-by: Mikhail Kadilov <m.kadilov@yadro.com>
2023-12-20 15:00:53 +03:00
6ee9b70d50 [#175] Update maintenance steps
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2023-12-15 13:10:01 +03:00
e14579f026 [#173] Update network test
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2023-12-13 13:14:50 +00:00
7b688af84d add reporter 2023-12-12 07:54:58 +00:00
9712644c38 [#171] Executive command changed
Added an exception for the 'Too many requests' error in the log analyzer

Signed-off-by: Mikhail Kadilov <m.kadilov@yadro.com>
2023-12-11 15:07:16 +03:00
1c3460eecf Add issues invalid bind regex
Signed-off-by: anikeev-yadro <a.anikeev@yadro.com>
2023-12-08 12:31:53 +03:00
05d592e583 [#166] Fixed arguments order in get_filtered_logs
Signed-off-by: Mikhail Kadilov <m.kadilov@yadro.com>
2023-12-06 12:13:27 +03:00
71bb73a410 [#165] Fix test flow in case of skipped tests
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2023-12-05 11:33:59 +03:00
f6438c5b93 [#163] Added log exception
Signed-off-by: Mikhail Kadilov <m.kadilov@yadro.com>
2023-12-04 13:07:50 +00:00
426f999e92 update allure title 2023-12-01 14:48:48 +03:00
873d6e3d14 [#161] Improve logging
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2023-11-29 16:34:59 +03:00
3b071f02f7 [#157] Updates for skipped tests
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2023-11-27 13:47:02 +03:00
553fb1ec50 [#157] Move fixture
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2023-11-27 10:07:25 +00:00
858ca71e40 [#155] Allow to skip binaries version check
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2023-11-27 10:05:11 +00:00
da06c09ed0 [#150] Patterns for keys in logs added
Now the regex can find keys in logs

Signed-off-by: Mikhail Kadilov <m.kadilov@yadro.com>
2023-11-24 14:12:18 +00:00
269caf8e03 [#154] Add new test
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2023-11-24 13:22:00 +03:00
fbefa422e8 [#153] Register services automatically
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2023-11-22 20:13:19 +00:00
ee71c17700 Delete kernel panic tests
Signed-off-by: anikeev-yadro <a.anikeev@yadro.com>
2023-11-22 15:12:35 +00:00
0e57ad79cd [#151] Updates after testlib
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2023-11-22 17:26:59 +03:00
850b8533a8 Move Time test
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2023-11-20 14:29:19 +03:00
b0c9502bd3 Add Maintenance tests
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2023-11-20 13:49:54 +03:00
7da11807da Reduction of sanity tests
Signed-off-by: anikeev-yadro <a.anikeev@yadro.com>
2023-11-16 16:22:14 +03:00
86b0d1e0fe Add await
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2023-11-16 08:47:00 +03:00
aa0b556bd3 [#142] Renamed GitHub to Gitea in links
Some links changed from GitHub to git.frostfs

Signed-off-by: Mikhail Kadilov <m.kadilov@yadro.com>
2023-11-15 12:21:26 +03:00
c37fedc04c discard_dynamic_title 2023-11-13 20:24:46 +03:00
44873ad0ca [#137] Some markers added
PytestUnknownMarkWarning excluded

Signed-off-by: Mikhail Kadilov <m.kadilov@yadro.com>
2023-11-13 11:49:08 +00:00
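Registering custom marks is what silences PytestUnknownMarkWarning. One way to do it from conftest.py (the mark names here are examples):

```python
# conftest.py
def pytest_configure(config):
    # Unregistered marks trigger PytestUnknownMarkWarning on every
    # @pytest.mark.sanity / @pytest.mark.failover usage.
    for mark in ("sanity: quick smoke subset", "failover: failover scenarios"):
        config.addinivalue_line("markers", mark)
```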
d82ce3626f Add autotest for resolving with bucket name and object name
Signed-off-by: Liza <e.chichindaeva@yadro.com>
2023-11-13 10:26:27 +00:00
a2838455cc Update check Policy: REP 1 IN SPB REP 1 IN MSK REP 3 2023-11-13 10:02:29 +00:00
0f5f5da9d3 [#135] Fixed test title
The test title was wrong before

Signed-off-by: Mikhail Kadilov <m.kadilov@yadro.com>
2023-11-08 19:13:32 +03:00
d0ec778346 Change sanity marks, first step
Signed-off-by: anikeev-yadro <a.anikeev@yadro.com>
2023-11-02 14:06:31 +03:00
52ea27e01e Remove sanity from awsclient
Signed-off-by: anikeev-yadro <a.anikeev@yadro.com>
2023-11-02 14:06:31 +03:00
d7144c65bf Skip teardown if sanity in markexpr 2023-11-02 14:06:31 +03:00
e5093bf6ac Remove pytest hooks 2023-11-02 14:06:25 +03:00
975d06de40 Add sanity marks
Signed-off-by: anikeev-yadro <a.anikeev@yadro.com>
2023-11-02 13:58:16 +03:00
aab8b07726 Add time test
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2023-11-02 08:43:22 +00:00
3944d9ff3b update test_container_creation_deletion_parallel 2023-11-01 10:43:43 +03:00
ae57672c7d [#129] Updates for failover
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2023-10-31 14:28:44 +03:00
e8289a3f83 Update check Policy: REP 1 IN SPB REP 1 IN MSK REP 3 2023-10-27 16:21:46 +03:00
00f312dab9 Update check of filter results with the object appearing on 25% of nodes
eee02d1346 Update check of filter results with the object appearing on 50% of nodes
e7e963b1a1 Delete obsolete policy test
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2023-10-26 10:37:54 +03:00
6749245206 Add fixture to skip interface tests
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2023-10-25 14:10:59 +00:00
d986fe2fb6 [#124] Use CSC in case of node shutdown
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2023-10-25 13:15:54 +00:00
6128468310 Update check of filter results with the object appearing on 75% of nodes
69202cc703 Add test down interfaces
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2023-10-24 12:42:01 +00:00
009f2876a2 Update check of filter results with the object appearing on 100% of nodes
c5f6d6cf2b [#120] increase 1sec wait to 15sec
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2023-10-23 19:42:23 +03:00
f159cd89f3 [#118] Add after-deploy healthcheck
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2023-10-23 15:04:41 +00:00
283149f837 Update check of filter results with the object appearing on a unique node
af50be78e6 Update check of filter results with the object appearing on one node
cbbbc686f4 Update simple check of the object appearance
b72f6daeb7 [#115] Make logs gathering parallel and in single command
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2023-10-20 18:50:02 +03:00
ad1254b4f3 [#113] Add new pattern
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2023-10-20 12:35:32 +03:00
2564f7421e Add test down all data interfaces
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2023-10-19 10:04:22 +00:00
bcc61303df Update placement rule 2023-10-18 14:57:52 +03:00
6b8800760d Add policy REP 1 IN SPB REP 1 IN MSK REP 3 2023-10-18 13:54:58 +03:00
5eae65b471 Add negative policy tests 2023-10-18 13:08:15 +03:00
b58af1b01b [#109] up pyyaml
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2023-10-17 20:08:43 +03:00
78b0fd5b2a Update policy tests 2023-10-17 11:28:09 +00:00
20d6510c3a Add small fix and delete test
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2023-10-13 11:52:42 +03:00
adfd6854f8 Add network internal interface test
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2023-10-12 10:30:13 +03:00
b3c24828d3 [#103] Make allure treat same test differently 2023-10-10 16:17:56 +00:00
1d22162d2f Add network data interface test
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2023-10-10 16:39:24 +03:00
c4f4e637fa [#101] add request to proper work
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2023-10-06 12:13:10 +03:00
38d1bdbf83 [#100] Make special test distinct 2023-10-05 19:05:34 +03:00
b01fff85f6 Delete split-brain test
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2023-10-04 08:27:52 +00:00
4f78085b44 Add await mode for delete container
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2023-10-03 09:45:19 +03:00
f70dfd310e [#97] Use shards config path
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2023-10-02 07:49:44 +00:00
07debbb1ca [#96] Move healthcheck to function level
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2023-09-29 16:16:06 +03:00
7c788057db Fix empty map tests
Signed-off-by: anikeev-yadro <a.anikeev@yadro.com>
2023-09-27 18:28:56 +03:00
a0ea180aa9 add policy 2023-09-25 13:59:17 +00:00
73a9c95704 Attach ACL wallets to allure
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2023-09-21 19:49:31 +03:00
d38e05c100 Fix teardown for network test
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2023-09-19 14:35:21 +00:00
2a1d40680a Add new fixture
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2023-09-19 08:35:27 +00:00
ed15485b72 Add new fixture
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2023-09-19 08:35:27 +00:00
3021805f7e Fix test titles
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2023-09-11 16:59:06 +03:00
1cd077fdf3 Update test titles to conform to the standard
4d2e27a317 Fix policy
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2023-09-07 13:40:42 +03:00
967f4f37d9 Change file_path fixture and change ACL test title
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2023-09-07 08:59:03 +00:00
28a7748398 Update regex to skip panic in OID and CID
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2023-09-06 17:13:26 +03:00
3455c5360d add 2 cases for test object lock 2023-09-01 09:06:46 +00:00
9456a0bb28 Update regex to catch more errors 2023-08-30 11:32:59 +00:00
841d6674f6 Update fixtures to be properly reflected in the allure report 2023-08-29 16:43:16 +03:00
ee110e5baf change hostname
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2023-08-24 08:30:31 +03:00
da31a82126 Add replace in assert policy and fix policy
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2023-08-17 11:54:13 +03:00
e70532d6e0 New test Split Brain
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2023-08-16 11:53:14 +03:00
Yaroslava Lukoyanova
4a17bc7b9c Marked HTTP GW cases with PUT call with separate pytest mark 2023-08-09 08:08:23 +00:00
247bc5ba8d Update name for missed S3 test 2023-08-08 14:56:46 +03:00
Yaroslava Lukoyanova
0ae0cee522 Small fixes and skips for http gw test cases 2023-08-07 14:46:38 +00:00
e0a9d687f2 Update test titles for most cases 2023-08-07 12:43:16 +03:00
d40e875091 regex for versions ready 2023-08-04 09:49:59 +00:00
6449264dcf Changes for object size usage
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2023-08-02 14:54:03 +03:00
05b5f7d133 Mark replication test as failover and add fixture to return nodes 2023-07-31 12:27:12 +03:00
63c4ac5ac6 Add simple replication test
Signed-off-by: Vladimir Avdeev <v.avdeev@yadro.com>
2023-07-28 08:29:32 +00:00
935aa6c264 Disable sudo_shell for dev env
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2023-07-27 19:29:22 +03:00
49e1019a2c Fix method name
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2023-07-27 12:05:49 +03:00
Yaroslava Lukoyanova
0e1d34b2f7 Added http hostname as a header to all http calls 2023-07-25 14:08:26 +03:00
Yaroslava Lukoyanova
89a31a695b Skipped HTTP GW cases with PUT because of feature deprecation
0ad2532e04 Added code validation targets
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2023-07-18 20:50:15 +03:00
f4a267fe81 Changed fixture restore network
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2023-07-12 12:05:36 +03:00
dc13252ff2 Add logs
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2023-06-27 15:39:00 +00:00
Yaroslava Lukoyanova
d97d852940 Deprecated bearer-rules parameter 2023-06-26 17:23:29 +03:00
79a60c5e5b Temp disable curl upload for complex objects
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2023-06-23 17:50:18 +03:00
5acf19592a Fix s3_client fixture
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2023-06-22 10:30:09 +00:00
ead55f657f Delete object after put to container
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2023-06-19 07:43:48 +03:00
01f4c5217d Delete wait replication steps for tests
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2023-06-15 09:11:19 +03:00
Yaroslava Lukoyanova
c2cd9ba887 Added case for not ignoring unhealthy tree endpoints 2023-06-14 08:28:39 +00:00
6f77a6ab08 Fix test shutdown node
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2023-06-09 09:21:56 +03:00
Yaroslava Lukoyanova
dd7f91e66b Add case for loss of pilorama on one node 2023-06-05 10:39:36 +03:00
Yaroslava Lukoyanova
c071f54b56 Add test case for loss of one node 2023-06-01 14:24:48 +03:00
a0587438c4 Add write-cache loss test
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2023-05-31 18:31:35 +03:00
27cf9bb1bd Add metabase loss test 2023-05-30 16:45:23 +03:00
Yaroslava Lukoyanova
d0660d626b Fixture for restore stopped storage nodes in test_failover_storage 2023-05-29 17:50:02 +00:00
Yaroslava Lukoyanova
e36e18dc57 Fixed typo in test_s3_delete_versioning case assertion 2023-05-25 15:58:04 +00:00
Yaroslava Lukoyanova
8d50407439 Add pilorama loss test cases, marked as skipped 2023-05-23 17:42:52 +03:00
Yaroslava Lukoyanova
520f9fe5b5 Add test cases for S3 blobovnicza and fstree loss 2023-05-19 12:41:57 +00:00
2d174831ab Add wait block for tick epoch and add version testlib
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2023-05-19 15:30:29 +03:00
5be478e577 Test for invalid location constraint
Signed-off-by: Liza <e.chichindaeva@yadro.com>
2023-05-18 13:49:48 +00:00
856e5afa60 [#41] lifetime: Fix lifetime test
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-05-17 12:33:04 +03:00
8a3d617c19 Fix tests
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2023-05-15 20:20:27 +03:00
c77123f301 Move shared code to testlib
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2023-05-15 12:59:33 +03:00
Yaroslava Lukoyanova
b13f0ec33d Added new test cases for s3 gate, delete marker feature 2023-05-12 08:48:28 +00:00
cbe8847231 Node argument made optional for epoch ticks
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2023-05-10 11:53:47 +03:00
a6e1190f23 Add tests for node shutdown and object replication integrity
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2023-05-05 08:18:46 +00:00
bbf9ea7143 Use proper name for binary
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2023-05-03 16:23:05 +03:00
532d58abc7 Add check for Errors while deleting objects 2023-05-02 11:11:35 +03:00
e86ed765b1 Add test for putting object while one node is stopped
Signed-off-by: Liza <e.chichindaeva@yadro.com>
2023-04-20 17:40:30 +03:00
1e6d3e77f9 Divide test_expiration_epoch_in_http into several (parametrized)
Two tests now check whether an expired object can be retrieved.
X-Attribute-System-Expiration-Epoch -> X-Attribute-System-Expiration-Epoch
python3.9 -> 3.10 in .pre-commit-config.yaml

Signed-off-by: Liza <e.chichindaeva@yadro.com>
2023-04-13 14:19:15 +03:00
520ac116df Update multipart upload abort test
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2023-04-06 21:47:08 +03:00
d355eccfd8 Add deletion 1001 objects test
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2023-04-05 19:02:42 +03:00
b995bfca41 Fix s3 tests
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2023-03-29 19:59:14 +03:00
4779d2be88 update header names
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2023-03-27 19:49:41 +03:00
5684d11408 Fix dataloss test
Signed-off-by: anikeev-yadro <a.anikeev@yadro.com>
2023-03-24 13:54:08 +00:00
bb831697f7 [#25] testcases: Fix test_static_session_search
Lists should be compared sorted
Enable test after bugfix

Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-03-21 16:19:01 +03:00
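The rule behind this fix is general: search results come back in no guaranteed order, so equality checks must sort first. Sketch:

```python
expected_oids = ["oid-3", "oid-1", "oid-2"]
found_oids = ["oid-1", "oid-2", "oid-3"]  # search may return any order

# Order-insensitive comparison; a plain == on the raw lists would flake.
assert sorted(found_oids) == sorted(expected_oids)
```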
2b950f41cd Add test cycles feature
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2023-03-20 19:03:29 +03:00
eb464f422c Add tests with empty map
Signed-off-by: anikeev-yadro <a.anikeev@yadro.com>
2023-03-17 17:15:01 +03:00
c3947b0716 Remove payments and storagegroup tests
Signed-off-by: Liza <e.chichindaeva@yadro.com>
2023-03-14 14:46:25 +00:00
c97855dcee Fix __FROSTFS__EXPIRATION*
Signed-off-by: Liza <e.chichindaeva@yadro.com>
2023-03-14 16:24:29 +03:00
c997e23194 Updates for testcases
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2023-03-14 12:21:40 +03:00
cff0e0f23e Update session token tests related to expiration rules
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2023-03-13 17:05:24 +03:00
ef5e142015 Add timeout for cli commands
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2023-03-09 14:43:14 +03:00
06dc226ef8 Add timeout for cli commands
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2023-03-09 14:41:38 +03:00
ac7dae0d2d Add timeout for cli commands
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2023-03-09 14:13:51 +03:00
3802df25fe Merge pull request 'import fix for some helpers and steps' (#12) from EliChin/fix/import into master
Reviewed-on: TrueCloudLab/frostfs-testcases#12
2023-03-07 11:22:15 +00:00
4755a2e167 import fix for some helpers and steps
Signed-off-by: Liza <e.chichindaeva@yadro.com>
2023-03-01 18:47:33 +03:00
565d740239 Update git links to clone from
Signed-off-by: Liza <e.chichindaeva@yadro.com>
2023-03-01 15:19:55 +03:00
Aleksei Chetaev
b549836b60 Add pytest lazy fixtures to requirements 2023-02-28 11:48:06 +01:00
Aleksei Chetaev
cb1b0c9bdd Fix issue with frostfs-authmate binary name
Signed-off-by: Aleksei Chetaev <alex.chetaev@gmail.com>
2023-02-28 11:48:06 +01:00
Aleksei Chetaev
aa145357f3 Fixing path to DevEnv
Signed-off-by: Aleksei Chetaev <alex.chetaev@gmail.com>
2023-02-28 11:48:06 +01:00
Aleksei Chetaev
1fb08e36c3 Fixing comments changed by automatic import replacement
Signed-off-by: Aleksei Chetaev <alex.chetaev@gmail.com>
2023-02-28 11:48:06 +01:00
Aleksei Chetaev
b55731830e Remove trash from requirements.txt
Signed-off-by: Aleksei Chetaev <alex.chetaev@gmail.com>
2023-02-28 11:48:06 +01:00
Aleksei Chetaev
ee0c2527f7 Change python to 3.10
Signed-off-by: Aleksei Chetaev <alex.chetaev@gmail.com>
2023-02-28 11:48:06 +01:00
Aleksei Chetaev
52001dc23a Change all imports to imports from root and remove robot
Signed-off-by: Aleksei Chetaev <alex.chetaev@gmail.com>
2023-02-28 11:48:06 +01:00
Aleksei Chetaev
13bc98eecc Fixing imports after moving utils to frostfs-testlib
Signed-off-by: Aleksei Chetaev <alex.chetaev@gmail.com>
2023-02-27 11:44:19 +01:00
Aleksei Chetaev
d253e8f5fd Remove non used files from the repo 2023-02-27 11:44:19 +01:00
Aleksei Chetaev
25761428f7 Change frostfs-testlib version 2023-02-27 11:44:19 +01:00
Aleksei Chetaev
7a742d57fc Move errors templates to testlib
Signed-off-by: Aleksei Chetaev <alex.chetaev@gmail.com>
2023-02-27 11:44:19 +01:00
19809c5641 Rename to frostfs
Signed-off-by: Yulia Kovshova <y.kovshova@yadro.com>
2023-02-17 16:31:07 +03:00
f6056a4f79 Add @alexchetaev @abereziny to CODEOWNERS
Signed-off-by: Yulia Kovshova <y.kovshova@yadro.com>
2023-02-17 17:30:09 +04:00
Aleskei Chetaev
9395a8003f Add assert_s3_acl
Signed-off-by: Elizaveta Chichindaeva <elizaveta@nspcc.ru>
2023-02-16 11:53:39 +03:00
Aleskei Chetaev
c7a69b89e3 Apply isort to commit
Signed-off-by: Aleskei Chetaev <alex.chetaev@gmail.com>
2023-02-14 10:29:29 +01:00
Aleskei Chetaev
b94c106656 Revert removing venv environment files
Signed-off-by: Aleskei Chetaev <alex.chetaev@gmail.com>
2023-02-14 10:29:29 +01:00
Aleskei Chetaev
d76951ed4f Change mamba version, fix imports and support python 3.10
Signed-off-by: Aleskei Chetaev <alex.chetaev@gmail.com>
2023-02-14 10:29:29 +01:00
850c0e77ec Remove wait_for_success for start/stop service methods
Signed-off-by: Vladislav Karakozov <v.karakozov@yadro.com>
2023-02-07 12:29:39 +03:00
fc6f9ac162 Update expected errors
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2023-01-24 13:26:50 +03:00
f23bfe754e Update lib to 0.9.0
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2023-01-17 12:21:01 +03:00
baf0b4dd0f Add waiting for epoch alignment for storage group lifetime test
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2023-01-16 15:45:22 +03:00
c6ebe1d67d align all nodes to be in the same epoch
Signed-off-by: Vladislav Karakozov <v.karakozov@yadro.com>
2023-01-13 19:08:08 +03:00
anastasia prasolova
1aa94028a8 Remove aprasolova from CODEOWNERS file
Signed-off-by: anastasia prasolova <anastasia@nspcc.ru>
2022-12-30 15:55:23 +03:00
a942464de6 Fix epoch duration in http test
Signed-off-by: Vladislav Karakozov <v.karakozov@yadro.com>
2022-12-30 13:40:10 +03:00
690323e85d http test with bearer token
Signed-off-by: Vladislav Karakozov <v.karakozov@yadro.com>
2022-12-29 12:42:03 +03:00
ced72602ef Add 'too many open files' to logs analyzer
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2022-12-29 11:28:24 +03:00
4099413577 #478 Update lock tests
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2022-12-28 11:27:19 +03:00
1abf544433 Add support for data and internal IPs
Signed-off-by: Yulia Kovshova <y.kovshova@yadro.com>
2022-12-28 11:26:22 +03:00
4f9294918d add http system header test
Signed-off-by: Vladislav Karakozov <v.karakozov@yadro.com>
2022-12-28 10:10:13 +03:00
6209a61258 Unskip static session token tests
Signed-off-by: Vladimir Avdeev <v.avdeev@yadro.com>
2022-12-27 11:28:29 +03:00
Мария Малыгина
ad2eafd230 [fix] s3 load
Signed-off-by: Мария Малыгина <m.malygina@MacBook-Pro-Maria.local>
2022-12-26 18:08:22 +03:00
2151f0e446 Add await to delete container
Signed-off-by: anikeev-yadro <a.anikeev@yadro.com>
2022-12-26 17:30:36 +03:00
Vlad K
d9474b9bc9 Revert "Fix: IndexError: list index out of range"
This reverts commit 1dc4516258.
2022-12-26 13:32:16 +03:00
1dc4516258 Fix: IndexError: list index out of range
Signed-off-by: Vladislav Karakozov <v.karakozov@yadro.com>
2022-12-26 13:28:03 +03:00
422636f68b Updates for local dev env runs
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2022-12-23 12:18:37 +03:00
5c4f6b6a7d new http test
Signed-off-by: Vladislav Karakozov <v.karakozov@yadro.com>
2022-12-21 17:07:29 +03:00
a2a234f1b2 Update range tests
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2022-12-20 11:26:51 +03:00
4f5aedebfe Fix wildcard flag value
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2022-12-16 15:17:02 +03:00
c7f832e77a Fix allure attaches for failover test
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2022-12-16 11:51:48 +03:00
ee204528b8 fix lock_mode
Signed-off-by: Yulia Kovshova <y.kovshova@yadro.com>
2022-12-15 16:44:18 +03:00
aa957639ec Fix policy test s3
Signed-off-by: Yulia Kovshova <y.kovshova@yadro.com>
2022-12-14 14:07:24 +03:00
4003d0115c Fix s3 range tests
Signed-off-by: anikeev-yadro <a.anikeev@yadro.com>
2022-12-14 11:22:11 +03:00
f89d66817b Add drop locked objects tests
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2022-12-13 14:59:04 +03:00
2f04775fce Fix delete all objects in bucket
Signed-off-by: anikeev-yadro <a.anikeev@yadro.com>
2022-12-13 11:40:56 +03:00
3497f3b23a Add load_param file, delete old tests, add a new universal parametrized test, add a stop-unused-nodes function.
Signed-off-by: a.lipay <a.lipay@yadro.com>
2022-12-12 19:14:17 +03:00
15677e89eb Add load_param file, delete old tests, add a new universal parametrized test, add a stop-unused-nodes function.
Signed-off-by: a.lipay <a.lipay@yadro.com>
2022-12-12 19:14:17 +03:00
1bb640a0db Add load_param file, delete old tests, add a new universal parametrized test, add a stop-unused-nodes function.
Signed-off-by: a.lipay <a.lipay@yadro.com>
2022-12-12 19:14:17 +03:00
614031a53a Fix s3 object tests after removing obj size hardcode
Signed-off-by: Vladimir Avdeev <v.avdeev@yadro.com>
2022-12-09 14:45:30 +03:00
ddf6406e10 fix generate file in http test
Signed-off-by: Vladislav Karakozov <v.karakozov@yadro.com>
2022-12-09 14:35:43 +03:00
bceea1926a Fix after removing obj size hardcode
Signed-off-by: Vladimir Avdeev <v.avdeev@yadro.com>
2022-12-09 14:34:10 +03:00
7ae0e8a21d Bump neofs-testlib to 0.8.1 (Fix logging problem)
Signed-off-by: Vladimir Avdeev <v.avdeev@yadro.com>
2022-12-09 14:02:02 +03:00
00bf387f34 Update shards test
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2022-12-09 11:59:37 +03:00
6230d2244e create http folder and add a new test for http attributes
Signed-off-by: Vladislav Karakozov <v.karakozov@yadro.com>
2022-12-08 17:24:20 +03:00
3afdaa0e2a Small fixes for tests
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2022-12-08 13:27:50 +03:00
05924784ab Remove SIMPLE_OBJ_SIZE and COMPLEX_OBJ_SIZE from env
Signed-off-by: Vladimir Avdeev <v.avdeev@yadro.com>
2022-12-08 13:21:19 +03:00
76c5d40e63 Return to session log analyzer
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2022-12-07 15:12:46 +03:00
12b592713b Add control shards test
Signed-off-by: Vladimir Avdeev <v.avdeev@yadro.com>
2022-12-07 12:46:21 +03:00
6567aa72a9 Add bearer token tests for s3 wallet api calls
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2022-12-07 11:01:38 +03:00
522fc9dccd Delete node extra test
Signed-off-by: anikeev-yadro <a.anikeev@yadro.com>
2022-12-07 10:46:57 +03:00
bd05aae585 Refactor for cluster usage
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2022-12-06 12:34:28 +03:00
d9e881001e Add background load fixture
Signed-off-by: Vladimir Avdeev <v.avdeev@yadro.com>
2022-12-05 09:39:15 +03:00
9b0ac8579b Add static session token container tests
Signed-off-by: Vladimir Avdeev <v.avdeev@yadro.com>
2022-12-02 10:55:01 +03:00
455cafa08a add new test for 10 paths
Signed-off-by: Yulia Kovshova <y.kovshova@yadro.com>
2022-12-02 10:02:14 +03:00
b2a17c26e7 Add bucket as fixture to s3_test
Signed-off-by: Yulia Kovshova <y.kovshova@yadro.com>
2022-11-30 17:15:21 +03:00
d765d52fc4 Update neofs-testlib version
Signed-off-by: a.lipay <a.lipay@yadro.com>
2022-11-29 13:29:49 +03:00
30ea4ab54e Add grpc lock tests
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2022-11-25 16:45:49 +03:00
08274d4620 Enable tests for fixed functionality
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2022-11-23 12:34:36 +03:00
69efc2fcce Changed placement rules REP 2 IN X CBF 2 SELECT 2 FROM * AS X for http tests
Signed-off-by: acheyda <a.cheyda@yadro.com>
2022-11-22 16:06:23 +03:00
30600b8856 add password for s3
Signed-off-by: Yulia Kovshova <y.kovshova@yadro.com>
2022-11-21 14:34:41 +03:00
4e6bbaca64 Fix too-long logs dir for log analysis
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2022-11-18 15:38:18 +03:00
6047ad2fb5 Add s3 tests
Signed-off-by: a.lipay <a.lipay@yadro.com>
2022-11-18 13:03:58 +03:00
bdbcee4e81 Add log analysis for each test
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2022-11-17 18:05:52 +03:00
3e5a204d19 Refactor balance tests
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2022-11-16 19:14:58 +03:00
2159982dbd Find critical pattern in system logs
Signed-off-by: Vladimir Avdeev <v.avdeev@yadro.com>
2022-11-15 11:44:43 +03:00
2b08a932ac [#312] Add new policy test
Signed-off-by: Yulia Kovshova <y.kovshova@yadro.com>
2022-11-15 10:31:12 +03:00
a0da15e60b add new check for tags
Signed-off-by: Yulia Kovshova <y.kovshova@yadro.com>
2022-11-14 15:41:42 +03:00
21f1c3a922 Add static session tests for object
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2022-11-10 18:48:56 +03:00
anastasia prasolova
013cf8fcc8 Remove neofs-keywords dependency and fix make venv.local-pytest
Signed-off-by: anastasia prasolova <anastasia@nspcc.ru>
2022-11-10 18:06:57 +03:00
9650dfb4aa Add comments to timeout between commands
Signed-off-by: anikeev-yadro <a.anikeev@yadro.com>
2022-11-10 17:38:26 +03:00
c4b1bcad1c Add timeout between commands
Signed-off-by: anikeev-yadro <a.anikeev@yadro.com>
2022-11-10 16:10:49 +03:00
f9fa249cf2 Add new testmarks
Signed-off-by: Vladimir Avdeev <v.avdeev@yadro.com>
2022-11-10 10:11:02 +03:00
a7817304d5 Disable pre-commit CI interventions
Disable automatic setting for auto-fixing PRs, we want pre-commit to
be triggered on commits locally. Interventions in PRs are annoying.

Auto-update schedule is set to quarterly, as it takes manual action
to update tools versions in requirements.txt and it would be too
annoying to do that every week.

Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-11-08 10:53:59 +04:00
bf2f638618 Bump neofs-testlib version 0.3.0 -> 0.4.0
Signed-off-by: Vladimir Avdeev <v.avdeev@yadro.com>
2022-11-07 10:45:53 +03:00
14a4d014d1 Move requirements.txt to root repository folder
Signed-off-by: Vladimir Avdeev <v.avdeev@yadro.com>
2022-11-03 23:12:25 +03:00
48b9cfbed5 Make node management tests run last
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2022-11-03 16:24:15 +03:00
5faef4df8b Add missed shell parameter in verify_list_storage_group()
Signed-off-by: Vladimir Avdeev <v.avdeev@yadro.com>
2022-11-03 12:08:35 +03:00
Stanislav Bogatyrev
39b34f60a6 [#397] Prepare public release
Stable public release!

Signed-off-by: Stanislav Bogatyrev <stanislav@nspcc.ru>
2022-11-02 13:13:28 +03:00
55c61ca73f Fix logger output variable
Signed-off-by: anikeev-yadro <a.anikeev@yadro.com>
2022-11-02 11:18:37 +03:00
b8ab64e2c6 Add http benchmark tests
Signed-off-by: a.lipay <a.lipay@yadro.com>
2022-11-01 20:15:38 +03:00
f80a9b7cbe Refactor Api tests and extend get_ranges_tests
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2022-11-01 19:12:56 +03:00
bf71f3250d Switch storagegroup and session_token tests to testlib library
Signed-off-by: Vladimir Avdeev <v.avdeev@yadro.com>
2022-11-01 16:07:14 +03:00
c9e42a7a0a [#312] Add new Locking test
Signed-off-by: Yulia Kovshova <y.kovshova@yadro.com>
2022-11-01 11:43:46 +03:00
d21a89485b Fix check headers in object tests
Signed-off-by: anikeev-yadro <a.anikeev@yadro.com>
2022-11-01 09:14:55 +03:00
2e8e105756 exclude multiupload from dev-env
Signed-off-by: Yulia Kovshova <y.kovshova@yadro.com>
2022-10-28 17:32:15 +03:00
9e2f8dfb00 Add missed shell parameter in wait_for_expected_object_copies()
Signed-off-by: Vladimir Avdeev <v.avdeev@yadro.com>
2022-10-28 12:28:07 +04:00
ecd5cd1252 Add missed shell parameter in tick_epoch()
Signed-off-by: Vladimir Avdeev <v.avdeev@yadro.com>
2022-10-28 11:08:57 +03:00
70a0f9f216 Remove neofs-keywords submodule
Signed-off-by: Vladimir Avdeev <v.avdeev@yadro.com>
2022-10-27 11:56:01 +03:00
bc1f873975 Fix k6 search
Signed-off-by: a.lipay <a.lipay@yadro.com>
2022-10-26 22:22:37 +03:00
b3cf2ee0e3 Fix session token tests
Delete some commands which do not support dynamic sessions

Signed-off-by: anikeev-yadro <a.anikeev@yadro.com>
2022-10-26 17:11:01 +03:00
f47a9d09ec Fix for object range content tests
Signed-off-by: anikeev-yadro <a.anikeev@yadro.com>
2022-10-25 18:00:02 +03:00
f70dc9d648 Add tag
Signed-off-by: Yulia Kovshova <y.kovshova@yadro.com>
2022-10-25 15:25:13 +03:00
a85f04a73b Add grpc benchmark tests
Signed-off-by: a.lipay <a.lipay@yadro.com>
2022-10-25 14:41:17 +03:00
anikeev-yadro
ec1dd45e0b Update pytest_tests/helpers/file_helper.py
Co-authored-by: Vladimir <108462321+vdomnich-yadro@users.noreply.github.com>
Signed-off-by: anikeev-yadro <110966367+anikeev-yadro@users.noreply.github.com>
2022-10-25 09:56:36 +03:00
anikeev-yadro
abe73fcc96 Update pytest_tests/helpers/file_helper.py
Co-authored-by: Vladimir <108462321+vdomnich-yadro@users.noreply.github.com>
Signed-off-by: anikeev-yadro <110966367+anikeev-yadro@users.noreply.github.com>
2022-10-25 09:56:36 +03:00
anikeev-yadro
77ebf95434 Update pytest_tests/helpers/file_helper.py
Co-authored-by: Vladimir <108462321+vdomnich-yadro@users.noreply.github.com>
Signed-off-by: anikeev-yadro <110966367+anikeev-yadro@users.noreply.github.com>
2022-10-25 09:56:36 +03:00
0e86d55806 Add get arbitrary range from file
Signed-off-by: anikeev-yadro <a.anikeev@yadro.com>
2022-10-25 09:56:36 +03:00
8a48402f53 Fix failover tests
Signed-off-by: anikeev-yadro <a.anikeev@yadro.com>
2022-10-24 14:58:05 +03:00
5cab1ecf19 Fix put object with --grant-full-control id=mycanonicaluserid
Signed-off-by: Yulia Kovshova <y.kovshova@yadro.com>
2022-10-21 16:22:18 +03:00
3f41fbc14b Correct paths, add load mark to pytest.ini
Signed-off-by: a.lipay <a.lipay@yadro.com>
2022-10-21 08:58:33 +04:00
b662418e42 Add shell parameter in eacl_full_placement_container_with_object() fixture
Signed-off-by: Vladimir Avdeev <v.avdeev@yadro.com>
2022-10-20 22:19:24 +03:00
c716c94b9a Add load mark
Signed-off-by: a.lipay <a.lipay@yadro.com>
2022-10-20 20:07:18 +03:00
93e5cb5f46 Add Load library, new params for common.py, new load tests, Adapt K6, remote_process for Hosting
Signed-off-by: a.lipay <a.lipay@yadro.com>
2022-10-19 23:59:42 +03:00
805e014c2f Fix: Allow RANGEHASH by default for system wallets
Signed-off-by: Vladimir Avdeev <v.avdeev@yadro.com>
2022-10-19 18:12:12 +03:00
b38403699c Add shell parameter to head_object calls
Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-10-19 11:39:46 +04:00
3de4d574d3 Fix code that constructs paths
Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-10-18 16:32:53 +04:00
7fcbdb6c34 Rename bearer_token to bearer
Signed-off-by: Vladimir Avdeev <v.avdeev@yadro.com>
2022-10-18 13:52:28 +03:00
7d54641e54 Add shell parameter to acl function calls
Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-10-18 11:29:14 +04:00
59f7679b5d Fix config for neofs-cli in balance test
Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-10-17 16:15:17 +04:00
114f0a1623 Switch to fixed version of testlib
Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-10-17 16:15:17 +04:00
b64656f0b3 Don't check ACL in sync test
Signed-off-by: Yulia Kovshova <y.kovshova@yadro.com>
2022-10-17 15:04:54 +03:00
8e8a5b6efd Pass shell where it was missed
Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-10-17 14:04:53 +04:00
anastasia prasolova
4b3a5f60c4 Add check to version test S3
Signed-off-by: Yulia Kovshova <y.kovshova@yadro.com>
2022-10-17 12:58:15 +03:00
anastasia prasolova
6ccbdadc88 Add CODEOWNERS file
Signed-off-by: anastasia prasolova <anastasia@nspcc.ru>
2022-10-16 19:29:09 +03:00
48e53b3d86 Switch failover test to hosting from testlib
Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-10-14 20:35:26 +04:00
92c034c10b Update environment check for failover tests
With testlib we have a new concept of host. Depending on the number of hosts, we
can decide whether to run tests or not. This allows us to run failover tests
on devenv if we deploy multiple devenv hosts, and it also allows us to add hardware
hosting without modifying the code of the tests.

Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-10-14 20:35:26 +04:00
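A sketch of the environment check described above, assuming a hosting object that exposes its hosts as a list (names are illustrative):

```python
import pytest

@pytest.fixture(scope="session", autouse=True)
def require_multiple_hosts(hosting):
    # Failover scenarios need at least two hosts: one to kill, one to survive.
    # On a single-host devenv the whole failover suite is skipped up front.
    if len(hosting.hosts) < 2:
        pytest.skip("failover tests require a multi-host environment")
```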
bfd02531ef Integrate with hosting from testlib
Replace service_helper with the hosting class from the testlib.
Instead of invoking commands on the remote via ssh_helper, we now use the
shell from the hosting.

Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-10-14 20:35:26 +04:00
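A sketch of the new call shape this commit describes; the lookup and shell method names are assumptions based on the commit text:

```python
def restart_storage_service(hosting, host_address: str):
    host = hosting.get_host_by_address(host_address)  # assumed lookup helper
    shell = host.get_shell()                          # shell comes from the host
    # Same command as before, but without hand-rolled ssh plumbing.
    shell.exec("sudo systemctl restart neofs-storage")
```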
88da942b03 Add tenacity to requirements for remote_process
Signed-off-by: a.lipay <a.lipay@yadro.com>
2022-10-14 13:03:08 +03:00
cf748bf785 Fix 'datetime.datetime' is not iterable
Signed-off-by: Yulia Kovshova <y.kovshova@yadro.com>
2022-10-14 13:01:42 +03:00
7ab737b595 Parsing k6 results + dataclass for K6 results
Signed-off-by: a.lipay <a.lipay@yadro.com>
2022-10-14 12:15:53 +03:00
e63db788c5 Use neofs-testlib
Signed-off-by: Vladimir Avdeev <v.avdeev@yadro.com>
2022-10-13 21:59:26 +03:00
31d43fbba9 Fix timeout for node returned wait
Signed-off-by: anikeev-yadro <a.anikeev@yadro.com>
2022-10-13 13:17:54 +03:00
6734cd70e6 [#312] Add new multipart upload test
Signed-off-by: Yulia Kovshova <y.kovshova@yadro.com>
2022-10-13 10:20:44 +03:00
7e30006623 Fix code formatting in json transformers
Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-10-12 13:07:16 +04:00
3eadf934e0 Fix decode_session_token after API was changed
Signed-off-by: anikeev-yadro <a.anikeev@yadro.com>
2022-10-12 11:06:41 +03:00
5eeb8b4058 [#350] Move file-related functions to file_helper
Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-10-12 10:18:44 +04:00
ce41104d3a Fix regexp for error put object with lock mode
Signed-off-by: Yulia Kovshova <y.kovshova@yadro.com>
2022-10-11 18:26:25 +03:00
0aeb998be9 [#350] Cleanup utility keywords
Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-10-11 18:14:58 +04:00
f9d1a4dfae [#312] Add new test for s3 Bucket function
Signed-off-by: Yulia Kovshova <y.kovshova@yadro.com>
2022-10-10 14:11:01 +03:00
bb62299945 [#312] Add new tagging tests for s3
Signed-off-by: Yulia Kovshova <y.kovshova@yadro.com>
2022-10-10 08:58:53 +03:00
1d09fc73b6 Fix https://github.com/nspcc-dev/neofs-s3-gw/issues/628
Signed-off-by: Yulia Kovshova <y.kovshova@yadro.com>
2022-10-07 20:33:30 +04:00
c29beb69a9 [#266] Upgrade test to S3 bucket removal
Signed-off-by: Yulia Kovshova <y.kovshova@yadro.com>
2022-10-06 15:49:16 +03:00
6b04663dee [#341] Remove duplication of wallet passwords in configs
Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-10-06 12:53:47 +04:00
e8cbd286cd [#344] Fix assert for http object not found error
Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-10-06 12:33:19 +04:00
455f2f4734 Fix test extended actions system
Signed-off-by: Vladimir Avdeev <v.avdeev@yadro.com>
2022-10-04 11:37:28 +03:00
2b635059c2 [#339] Fix code that checks complex object copies
Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-10-04 11:51:29 +04:00
2ebe3192e2 Change SberCloud api call (get id by ip)
Signed-off-by: Vladimir Avdeev <v.avdeev@yadro.com>
2022-10-04 10:07:48 +03:00
f1d3aa6098 [#334] Disable automatic retries in S3 clients
Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-10-03 16:38:50 +04:00
987df42542 [#312] add new ACL test to s3
Signed-off-by: Yulia Kovshova <y.kovshova@yadro.com>
2022-10-03 09:42:57 +03:00
c71d24ea76 Fix sbercloud nightly run
Signed-off-by: Yulia Kovshova <y.kovshova@yadro.com>
2022-10-01 09:15:06 +03:00
92f7470757 [#312] add version test
Signed-off-by: Yulia Kovshova <y.kovshova@yadro.com>
2022-09-30 16:03:27 +03:00
147cac0ebc [#314] Format all files with black and isort
Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-09-30 13:45:25 +04:00
26032a67ec [#330] Switch to new command netmap snapshot
1. Add netmap command to NeofsCli wrapper.
2. Update node_management steps to use netmap.snapshot method instead of
   deprecated "neofs-cli control netmap-snapshot" command.
3. Switch node's public key in netmap from base58-encoding to hex-encoding.

Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-09-30 13:22:52 +04:00
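A sketch of the encoding change from point 3: keys stored base58-encoded must now be converted to hex before comparing against netmap output (uses the third-party base58 package):

```python
import base58

def to_hex_key(base58_key: str) -> str:
    # Netmap now reports node public keys hex-encoded, so previously stored
    # base58 keys are decoded and re-encoded as hex before comparison.
    return base58.b58decode(base58_key).hex()

# Old: parse deprecated "neofs-cli control netmap-snapshot" output.
# New (assumed wrapper shape): cli.netmap.snapshot(rpc_endpoint=..., wallet=...)
```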
2a175b5824 Add eACL test for system account
Signed-off-by: Vladimir Avdeev <v.avdeev@yadro.com>
2022-09-27 13:20:32 +03:00
c53e48d1f8 Fix s3 test being skipped due to a Python mistake
Signed-off-by: Yulia Kovshova <y.kovshova@yadro.com>
2022-09-27 13:20:11 +03:00
d28d7c6e6d Skip balance tests when storage is free
This is an alternative implementation of PR https://github.com/nspcc-dev/neofs-testcases/pull/304

Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-09-27 13:27:31 +04:00
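A sketch of the guard described above, assuming FREE_STORAGE arrives as an environment-derived boolean (see the FREE_STORAGE hotfix further down this log):

```python
import os

import pytest

FREE_STORAGE = os.getenv("FREE_STORAGE", "false").lower() == "true"

# When storage is free, no GAS is charged, so balance assertions are meaningless.
pytestmark = pytest.mark.skipif(FREE_STORAGE, reason="balance tests require non-free storage")
```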
fed50cb96d Set limit to 1000 in GET VM details query to sbercloud
By default the sbercloud API returns only the first 25 VMs per query

Signed-off-by: anikeev-yadro <a.anikeev@yadro.com>
2022-09-27 12:19:36 +03:00
38bb0c35a6 fix AttributeError in set_bucket_versioning
Signed-off-by: Yulia Kovshova <y.kovshova@yadro.com>
2022-09-27 10:45:29 +03:00
30703bf701 Fix error response in test_expiration_epoch_in_http
Signed-off-by: Yulia Kovshova <y.kovshova@yadro.com>
2022-09-27 10:43:40 +03:00
588292dfb5 [#314] Fix tools config
Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-09-26 17:33:42 +04:00
2452cccba0 adding k6 + remote_process helper
Why a script file:
We have the script file for debugging after the test is finished
We don't need overly long strings for passing environment variables
We can easily get the PID
https://serverfault.com/questions/420905/nohup-multiple-sequential-commands

Signed-off-by: a.lipay <a.lipay@yadro.com>
2022-09-23 18:23:23 +03:00
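A sketch of the script-file approach the commit justifies: write a launcher script, start it with nohup, capture the PID. Paths, the k6 command line, and the shell wrapper are illustrative:

```python
def start_k6(shell, k6_cmd: str, workdir: str = "/tmp/k6") -> str:
    """Start k6 via a generated script file and return its PID."""
    script = f"{workdir}/run.sh"
    shell.exec(f"mkdir -p {workdir}")
    # The full command line (env vars included) lives in a file, so it can be
    # inspected after the run and never hits shell length limits.
    shell.exec(f"echo '{k6_cmd} > {workdir}/k6.log 2>&1' > {script} && chmod +x {script}")
    # nohup + background start; $! is the PID of the detached process.
    shell.exec(f"nohup {script} > /dev/null 2>&1 & echo $! > {workdir}/k6.pid")
    return shell.exec(f"cat {workdir}/k6.pid").stdout.strip()
```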
02c859796f Fix parameter in allure step message
Signed-off-by: anikeev-yadro <a.anikeev@yadro.com>
2022-09-23 17:56:36 +03:00
ee2ed667c6 [#312] add new test for s3
Signed-off-by: Yulia Kovshova <y.kovshova@yadro.com>
2022-09-23 15:54:57 +03:00
anastasia prasolova
b385c2466c [nspcc-dev/nspcc-infra#840]: Add DCO check
Signed-off-by: anastasia prasolova <anastasia@nspcc.ru>
2022-09-22 19:21:21 +03:00
karmadim
68591a902d Add check that node is ready
Signed-off-by: Dmitry Karmanov <d.karmanov@yadro.com>
2022-09-22 16:30:45 +03:00
a8a00c1c53 [#297] remove robot.logger
Signed-off-by: Yulia Kovshova <y.kovshova@yadro.com>
2022-09-22 15:33:42 +03:00
035175894d [#297] Replace @keyword decorator with allure.step 2022-09-21 14:02:09 +03:00
589197ba72 Add black formatter and isort into a precommit hook
Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-09-21 12:18:17 +04:00
9eb33465f9 Change sleeps from MAINNET_BLOCK_TIME to MORPH_BLOCK_TIME
Our test sleeps should be based on MORPH_BLOCK_TIME

Signed-off-by: anikeev-yadro <a.anikeev@yadro.com>
2022-09-19 17:26:29 +03:00
Elizaveta Chichindaeva
467349fc68 Test: get obj size from env
Signed-off-by: Elizaveta Chichindaeva <elizaveta@nspcc.ru>
2022-09-09 17:13:20 +03:00
Elizaveta Chichindaeva
37f73af11e Test: balance accounting test -> pytest
Signed-off-by: Elizaveta Chichindaeva <elizaveta@nspcc.ru>
2022-09-07 14:44:37 +03:00
Elizaveta Chichindaeva
38a177107e HOTFIX: FREE_STORAGE condition
FREE_STORAGE may be false or true and it affects GAS transfer.

Signed-off-by: Elizaveta Chichindaeva <elizaveta@nspcc.ru>
2022-09-07 13:47:51 +03:00
926a7a5779 Add eACL tests using bearer token
Signed-off-by: Vladimir Avdeev <v.avdeev@yadro.com>
2022-09-07 12:11:49 +04:00
Elizaveta Chichindaeva
92cbc2e11b [#226] Tests: test for session token for object
A test for object session tokens, rewritten in pytest.

Signed-off-by: Elizaveta Chichindaeva <elizaveta@nspcc.ru>
2022-09-05 14:36:01 +03:00
Elizaveta Chichindaeva
3f6ba19a8b Tests: Storagegroup tests into pytest
Tests for Storagegroups rewritten in pytest

Signed-off-by: Elizaveta Chichindaeva <elizaveta@nspcc.ru>
2022-09-05 12:12:26 +03:00
f7bbce1912 Fix misprints
Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-09-02 10:10:26 +04:00
f40111dc4a Implemented neofs-adm lib
Signed-off-by: Vladimir Avdeev <v.avdeev@yadro.com>
2022-08-31 23:52:02 +03:00
b6a451dc8d Fix path to inner ring wallet for devenv
It was updated in 587a6b3eec

Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-08-31 19:52:59 +04:00
e2ab4d3774 Update README on running allure from docker
Also clean up the README, removing descriptions specific to the Robot Framework.

Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-08-30 17:23:15 +03:00
7e31610462 Change log collection fixture to put all logs into archive
When collecting logs we dump them all into a directory, because keeping the entire
set of logs in memory is RAM-intensive, especially in a large cluster.
We attach logs to Allure not as individual files, but as a single zip archive, because
it is more convenient to download, attach to bugs, etc.

Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-08-30 13:36:28 +04:00
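A sketch of the fixture shape this commit describes: dump to a directory, zip once, attach the single archive (the per-node collection step is elided):

```python
import shutil
import tempfile

import allure
import pytest

@pytest.fixture(autouse=True)
def collect_logs():
    yield  # run the test first; gather logs only afterwards
    logs_dir = tempfile.mkdtemp(prefix="node-logs-")
    # ... dump each node's logs into logs_dir (files, not in-memory strings) ...
    archive = shutil.make_archive(logs_dir, "zip", logs_dir)
    # One zip is easier to download and attach to bugs than many loose files.
    allure.attach.file(archive, name="logs", extension="zip")
```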
94d6ec6b12 Add fixture to collect logs after test execution
Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-08-30 13:36:28 +04:00
6d040c6834 Add ACL and eACL PyTest tests
Signed-off-by: Vladimir Avdeev <v.avdeev@yadro.com>
2022-08-25 14:06:21 +03:00
590a5cfb0e Exclude content-length header from request signature
Contrary to the SberCloud sample for Python, the content-length header is not accounted
for when calculating the signature.

Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-08-22 18:52:04 +04:00
9454c5eb95 Implement access key authentication in SberCloud API
Replaced insecure login/password authentication in the SberCloud API with authentication
via access key. This is more secure and is the recommended approach for authentication
from an application.

Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-08-22 18:52:04 +04:00
3294299612 Implement neofs-cli lib for container and object
Signed-off-by: Vladimir Avdeev <v.avdeev@yadro.com>
2022-08-19 09:47:19 +03:00
d935c2cafa Remove step for iptables installation
Installation of iptables is handled in the environment preparation pipeline, so tests
do not need to worry about it.
Removed conditions that were checking pytest mode vs robot mode, because we got rid of
robot tests in this branch of the codebase.

Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-08-18 14:01:33 +04:00
0ca45d1ba8 Rename variable for GC waiting
We need just an aggregate variable that allows us to wait until a GC pass occurs on
a storage node, rather than a variable for a specific shard. Also, we need to account
for the time that the GC session itself takes.

Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-08-18 14:01:33 +04:00
b270f39387 Fix node transition to online state
The node hangs if we attempt to transition it to online state immediately after start.

Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-08-18 14:01:33 +04:00
a76614b40d Add asserts for error status codes in grpc responses
Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-08-18 14:01:33 +04:00
b6b1644fd6 Refactor privileges for ssh commands
Remove logic that checks for root login and prepends the command with sudo: we
should not use root login at all, and all commands that require higher permissions
should be prefixed with sudo anyway.
Add a sudo prefix to privileged commands that require it.

Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-08-18 14:01:33 +04:00
f9ba463d2e Refactor container tests
Use well-known ACL constants.
Remove the 0x prefix from ACL values, because the neofs CLI changed its formatting.
Remove redundant comments.

Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-08-18 14:01:33 +04:00
Elizaveta Chichindaeva
186091640f Tests: fix in spelling
Signed-off-by: Elizaveta Chichindaeva <elizaveta@nspcc.ru>
2022-08-17 16:20:18 +03:00
anastasia prasolova
b6cbd7c07c removed robot tests
Signed-off-by: anastasia prasolova <anastasia@nspcc.ru>
2022-08-17 14:20:41 +03:00
6110de9268 Refactor devenv service helper
Use the docker API to operate with a remote devenv; this makes the code cleaner and
more uniform between local and remote devenv.

Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-08-15 18:49:05 +04:00
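In practice "use the docker API" means something like this sketch with the docker SDK for Python (the helper name and remote-daemon URL are assumptions):

```python
import docker


def get_container(name: str, docker_host: str | None = None):
    # The same code path serves local and remote devenv: point the client
    # at a remote daemon via base_url instead of shelling out over ssh.
    client = docker.DockerClient(base_url=docker_host) if docker_host else docker.from_env()
    matches = client.containers.list(all=True, filters={"name": name})
    return matches[0] if matches else None
```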
e88d64a263 Fix decorator for skipping binary version test
Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-08-15 18:49:05 +04:00
2b9b0d837d Skip test for binaries versions
It is currently blocked because internal components do not expose their versions.

Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-08-15 18:49:05 +04:00
b597937286 Fix container filtering by name in devenv
Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-08-15 18:49:05 +04:00
453dcb99fa Fix container wait logic in devenv
Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-08-15 18:49:05 +04:00
c131bb04ba Fix node cleanup step
The intention of the test was not to delete the node entirely, but just to erase its data.

Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-08-15 18:49:05 +04:00
d6861f4f62 Refactor env properties
Encapsulate reading/writing of environment.properties in a helper.

Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-08-15 18:49:05 +04:00
a.y.volkov
9fea2efe83 Check binaries versions
Signed-off-by: a.y.volkov <a.y.volkov@yadro.com>
2022-08-15 18:49:05 +04:00
ce099c61a4 Move node deletion logic to service helper
Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-08-15 18:49:05 +04:00
91197335ba Add tests that start or stop services on remote vm
Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-08-15 18:49:05 +04:00
a.y.volkov
f97bfed183 Add test for adding node to cluster
Signed-off-by: a.y.volkov <a.y.volkov@yadro.com>
2022-08-15 18:49:05 +04:00
b468a06f4e Fix hard reboot via sysrq-trigger
Signed-off-by: Vladimir Avdeev <v.avdeev@yadro.com>
2022-08-15 18:49:05 +04:00
448570afa0 Fix get_range usage in acl tests
Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-08-05 13:31:22 +03:00
eff4b032a5 Refactor fixture that checks cloud environment
Now it relies on the presence of the sbercloud configuration rather than on the free storage setting.

Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-08-05 13:31:22 +03:00
e1d7999313 Cleanup sbercloud config
Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-08-05 13:31:22 +03:00
47c55f0060 Remove redundant variables
Small refactoring that includes:
 - Removed variables that are no longer used.
 - Cleaned up helper function names.

Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-08-05 13:31:17 +03:00
a.y.volkov
642af0a888 Add test for network failover
Signed-off-by: a.y.volkov <a.y.volkov@yadro.com>
2022-08-05 13:29:31 +03:00
5f53e80f93 Fixes in tests to enable them to run in a cloud environment
A few small fixes were made:
 - Fix path to binaries on storage node in cloud env.
 - Add logic to prepend ssh command with sudo.
 - Make re-encoding of homomorphic hash conditional.
 - Increase ssh timeout.

Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-08-05 13:29:31 +03:00
892b8f227a Add helper function to wait for GC pass on storage nodes
Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-08-05 13:29:31 +03:00
05da181998 Add retries when checking presence of buckets in list
Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-08-05 13:29:31 +03:00
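Bucket listing may lag behind bucket creation, hence the retries. A minimal sketch with boto3 (the suite drives an aws cli wrapper, so the client and retry counts here are assumptions):

```python
import time

import boto3


def wait_for_bucket_in_list(bucket: str, attempts: int = 5, delay: float = 2.0) -> None:
    s3 = boto3.client("s3")
    for _ in range(attempts):
        names = [b["Name"] for b in s3.list_buckets()["Buckets"]]
        if bucket in names:
            return
        time.sleep(delay)
    raise AssertionError(f"Bucket {bucket} did not appear in the listing")
```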
8a7d8f7c39 Disable automatic pagination in aws cli client
This should prevent output truncation if the response contains too many items.

Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-08-05 13:29:31 +03:00
d8911f2490 Fix parsing of CLI output
Formatting changed in the CLI tools in version v0.30, which required us to
change the logic in tests:
 - Fix authmate output parsing.
 - Fix format of container name in assert.

Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-08-05 13:29:31 +03:00
cccfc41409 [#268]: Rename neofs-cli parameter to expire-at
Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-08-05 13:29:31 +03:00
6357554ed9 Fix logic that collects versions of binaries
Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-08-05 13:29:31 +03:00
279bb3dedd Add delays after s3 operations
Delays were added after:
 - S3 container create/delete.
 - S3 object create/delete.

Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-08-05 13:29:31 +03:00
46b593b02e Fix endpoint parameter
Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-08-05 13:29:31 +03:00
a.y.volkov
3f12bd75f6 Add storage nodes health check before tests run
Signed-off-by: a.y.volkov <a.y.volkov@yadro.com>
2022-08-05 13:29:31 +03:00
d701e2cb62 Remove redundant environment variables
Along with that, a few tweaks were made:
 - Increase wait time, as it seems to take more time for complex objects.
 - Increase timeout for create_bucket, as it fails periodically.

Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-08-05 13:29:31 +03:00
ae9c9947b6 Fix logic that checks presence of node in netmap
We now check by the node's public key as it is represented in the netmap.

Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-08-05 13:29:31 +03:00
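The check boils down to looking for the node's public key in the netmap snapshot; a hedged sketch (exact CLI flags may differ between versions):

```python
import subprocess


def node_in_netmap(node_public_key: str, endpoint: str, wallet: str) -> bool:
    # Assumption: `neofs-cli netmap snapshot` prints one entry per node,
    # including its public key.
    result = subprocess.run(
        ["neofs-cli", "netmap", "snapshot", "--rpc-endpoint", endpoint, "--wallet", wallet],
        capture_output=True, text=True, check=True,
    )
    return node_public_key in result.stdout
```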
18e87e3a13 Add delays to http gateway tests
Two delays were added:
 - Waiting for a GC pass is driven by the system design.
 - Waiting after HTTP upload is just a temporary workaround.

Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-08-05 13:29:31 +03:00
08081a8629 Fix cleanup of versioned s3 bucket
Add logic that deletes all object versions from the bucket before attempting to
delete the bucket itself. This is required per the AWS S3 specification.

Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-08-05 13:29:31 +03:00
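The required order of operations, sketched with boto3 for illustration (the tests themselves go through an aws cli wrapper):

```python
import boto3


def delete_versioned_bucket(bucket: str) -> None:
    s3 = boto3.client("s3")
    versions = s3.list_object_versions(Bucket=bucket)
    # Both object versions and delete markers must be removed before
    # the bucket itself can be deleted.
    for group in ("Versions", "DeleteMarkers"):
        for item in versions.get(group, []):
            s3.delete_object(Bucket=bucket, Key=item["Key"], VersionId=item["VersionId"])
    s3.delete_bucket(Bucket=bucket)
```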
8afba7fca6 Fix assert that checks presence of node in netmap
We should look for the node host rather than the node name that we assigned in the test code.

Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-08-05 13:29:31 +03:00
568b4421ce Fix object lifetime tests
We should wait for a GC pass on the storage nodes, because an object with an expiration
is garbage collected only after the epoch ticks.

Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-08-05 13:29:31 +03:00
2c232c222c Fix node management tests
When we call a storage node's control endpoint, we need to override the storage wallet path.

Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-08-05 13:29:31 +03:00
eb5532c08e Extend allure logging for failover tests
Signed-off-by: Vladimir Avdeev <v.avdeev@yadro.com>
2022-08-05 13:29:31 +03:00
b6b95b86e8 Add markers for failover tests
This allows us to skip failover tests during regular run of integration tests.

Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-08-05 13:29:31 +03:00
7026b93c37 Fix SberCloud failovers
Signed-off-by: Vladimir Avdeev <v.avdeev@yadro.com>
2022-08-05 13:29:31 +03:00
ab85389d59 Use neofs-adm to tick epoch
This is a more convenient way to tick an epoch when we have multiple morph blockchain nodes.
The approach that we use in devenv would require a multi-signed transaction, which is cumbersome.

Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-08-05 13:29:31 +03:00
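A hedged sketch of the epoch tick (the subcommand name is an assumption based on neofs-adm's morph tooling):

```python
import subprocess


def tick_epoch(adm_config_path: str) -> None:
    # Assumption: `neofs-adm morph force-new-epoch` bumps the epoch via the
    # sidechain in one call, avoiding a manual multi-signed transaction.
    subprocess.run(
        ["neofs-adm", "morph", "force-new-epoch", "-c", adm_config_path],
        check=True,
    )
```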
cbaecc60dc Fix usage of generate_file, prepare_wallet_and_deposit fixtures
Signed-off-by: Vladimir Avdeev <v.avdeev@yadro.com>
2022-08-05 13:29:31 +03:00
f60020f5aa Fix usage of temp_dir fixture
Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-08-05 13:29:31 +03:00
93a52b4a66 Add failover tests for storage nodes
The tests are aimed at cloud infrastructure (SberCloud VMs).

Signed-off-by: Vladimir Avdeev <v.avdeev@yadro.com>
2022-08-05 13:29:31 +03:00
014a1fee95 Enable configuration of wallets directories
Add new variables to common.py that:
 - Make paths to wallets configurable.
 - Make the devenv services path configurable.

Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-08-05 13:29:31 +03:00
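Such configuration typically looks like the sketch below in common.py (variable names other than DEVENV_PATH are illustrative):

```python
import os

# Paths resolved from the environment, with devenv defaults as fallbacks.
DEVENV_PATH = os.getenv("DEVENV_PATH", os.path.join("..", "neofs-dev-env"))
WALLETS_DIR = os.getenv("WALLETS_DIR", os.path.join(DEVENV_PATH, "wallets"))
DEVENV_SERVICES_PATH = os.getenv("DEVENV_SERVICES_PATH", os.path.join(DEVENV_PATH, "services"))
```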
be39480ade Fix node management tests
The remote connection was created to the 1st storage node only, while in reality
we want to create a connection to a specific node.

Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-08-05 13:29:30 +03:00
68cbb7deea Update variables for node management tests
New variables allow us to:
1. Configure the path of CLI binaries and the config file on a storage node.
2. Use updated variable names for storage node endpoints.

Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>
2022-08-05 13:29:30 +03:00
d9d74baa72 Add test suites for acl, container and node management
Signed-off-by: Vladimir Domnich <v.domnich@yadro.com>

commit f7c68cfb423e3213179521954dccb6053fc6382d
Merge: e234b61 99bfe6b
    Merge branch 'avolkov/add_ssh' into internal_tmp_b

commit 99bfe6b56cd75590f868313910068cf1a80bd43f
    Tick one more epoch.

commit bd70bc49391d578cdda727edb4dcd181b832bf1e
    Start nodes in case of test fail.

commit b3888ec62cfc3c18b1dff58962a94a3094342186
    Catch json decode error.

commit c18e415b783ec3e4ce804f43c19246240c186a97
    Add ssh-key access.

commit 7dbdeb653b7d5b7ab3874b546e05a48b502c2460
    Add some tests.

commit 844367c68638c7f97ba4860dd0069c07f499d66d
    Add some tests for nodes management.

commit 1b84b37048dcd3cc0888aa54639975fc11fb2d75
    Add some tests for nodes management.

commit b30c1336a6919e0c8e500bdf2a9be3d5a14470ea
    Add ssh execution option.

commit 2df40eca74ee20bd668778715185ffddda63cb05
    Change AWS cli v1 to cli v2.

commit 7403da3d7c2a5963cfbb12b7c0f3d1d641f52a7e
    Change AWS cli v1 to cli v2.

commit b110dcdb655a585e6c53e6ebc3eae7bf1f1e792f
    Change AWS cli v1 to cli v2.

commit 6183756a4c064c932ee193c2e08a79343017fa49
    Change AWS cli v1 to cli v2.

commit 398006544d60896faa3fc6e6a9dbb51ada06759c
    Fix container run.

commit e7202136dabbe7e2d3da508e0a2ec55a0d5cb67a
    Added tests with AWS CLI.

commit 042e1478ee1fd700c8572cbc6d0d9e6b312b8e8d
    Fix PR comments.

commit e234b61dbb9b8b10812e069322ab03615af0d44e
    Add debug for env.

commit 14febd06713dc03a8207bb80384acb4a7d32df0e
    Move env variables for pytest docker into env file.

commit bafdc6131b5ac855a43b672be194cde2ccf6f75b
    Move env variables for pytest docker into env file.

commit 27c2c6b11f51d2e3c085d44b814cb4c00f81b376
    Move env variables for pytest docker into env file.

commit e4db4948978e092adb83aeacdf06619f5ca2f242
    Merge branch 'master' into avolkov/try_pytest

commit c83a7e625e8daba3a40b65a1d69b2b1323e9ae28
    WIP.

commit 42489bbf8058acd2926cdb04074dc9a8ff86a0a0
    Merge branch 'avolkov/try_pytest' into internal_tmp_b

commit 62526d94dc2bf72372125bea119fa66f670cf7e1
    Improve allure attachments.

commit 4564dae697cb069ac45bc4ba7eb0b5bbdcf1d153
    Merge branch 'avolkov/try_pytest' into internal_tmp_b

commit ab65810b23410ca7382ed4bdd257addfa6619659
    Added tests for S3 API.

commit 846c495a846c977f3e5f0bada01e5a9691a81e3d
    Let's get NEOFS_IR_CONTRACTS_NEOFS from env.

commit c39bd88568b70ffcb76b76d68531b17d3747829d
    Added S3 test for versioning.

commit d7c9f351abc7e02d4ebf162475604a2d6b46e712
    Merge remote-tracking branch 'origin/avolkov/try_pytest' into internal_tmp_b

commit bfbed22a50ce4cb6a49de383cfef66452ba9f4c1
    Added some tests for S3 API and curl tests for HTTP.

commit 1c49def3ddd0b3f7cf97f131e269ad465c70a680
    Add yadro submodule

commit 2a91685f9108101ab523e05cc9287d0f5a20196b
    Fix.

commit 33fc2813e205766e69ef74a42a10850db6c63ce6
    Add debug.

commit aaaceca59e4c67253ecd4a741667b7327d1fb679
    Add env variables for data nodes.

commit 001cb26bcc22c8543fb2672564e898928d20622b
Merge: b48a87d c70da26
    Merge branch 'avolkov/try_pytest' into tmp_b

commit b48a87d9a09309fea671573ba6cf303c31b11b6a
    Added submodule

commit c70da265d319950977774e34740276f324eb57a7
    Added tests for S3 bucket API.

commit 3d335abe6de45d1859454f1ddf85a97514667b8f
    Added tests for S3 object API.

commit 2ac829c700f5bc20c28953f1d40cd953fed8b390
    flake8 changes for python_keywords module.

commit 2de5963e96b13a5e944906b695e5d9c0829de9ad
    Add pytest tests.

commit 4472c079b9dfd979b7c101bea32893c80cb1fe57
    Add pytest tests.

Signed-off-by: a.y.volkov <a.y.volkov@yadro.com>
2022-08-05 13:29:30 +03:00
201 changed files with 16031 additions and 9959 deletions


@@ -8,5 +8,5 @@ exclude =
per-file-ignores =
# imported but unused
__init__.py: F401
max-line-length = 120
max-line-length = 100
disable-noqa

.github/workflows/dco.yml

@@ -0,0 +1,22 @@
name: DCO check
on:
  pull_request:
    branches:
      - master
      - develop

jobs:
  commits_check_job:
    runs-on: ubuntu-latest
    name: Commits Check
    steps:
      - name: Get PR Commits
        id: 'get-pr-commits'
        uses: tim-actions/get-pr-commits@master
        with:
          token: ${{ secrets.GITHUB_TOKEN }}

      - name: DCO Check
        uses: tim-actions/dco@master
        with:
          commits: ${{ steps.get-pr-commits.outputs.commits }}

.gitignore

@@ -1,12 +1,30 @@
# ignore test result files under any path
# ignore IDE files
.vscode
.idea
.DS_Store
venv.*
venv_macos
# ignore test results
**/log.html
**/output.xml
**/report.html
**/dockerlogs*.tar.gz
allure_results/*
xunit_results.xml
# ignore pycache under any path
# ignore caches under any path
**/__pycache__
**/.pytest_cache
*.egg-info
# ignore work directories and setup files
.setup
.env
TemporaryDir/*
artifacts/*
docs/*
venv.*/*
wallet_config.yml

.gitmodules

@@ -1,4 +0,0 @@
[submodule "neofs-keywords"]
path = neofs-keywords
url = ssh://git@github.com/nspcc-dev/neofs-keywords.git
ignore = all

.pre-commit-config.yaml

@@ -0,0 +1,25 @@
repos:
  - repo: https://github.com/psf/black
    rev: 22.8.0
    hooks:
      - id: black
        language_version: python3.10
  - repo: https://github.com/pycqa/isort
    rev: 5.12.0
    hooks:
      - id: isort
        name: isort (python)
  - repo: https://git.frostfs.info/TrueCloudLab/allure-validator
    rev: 1.1.0
    hooks:
      - id: allure-validator
        args: [
          "pytest_tests/",
          "--plugins",
          "frostfs[-_]testlib*",
        ]
        pass_filenames: false

ci:
  autofix_prs: false
  autoupdate_schedule: quarterly

CODEOWNERS

@@ -0,0 +1 @@
* @JuliaKovshova @abereziny @d.zayakin @anikeev-yadro @anurindm @ylukoyan @i.niyazov

CONTRIBUTING.md

@@ -0,0 +1,177 @@
# Contribution guide
First, thank you for contributing! We love and encourage pull requests from
everyone. Please follow the guidelines:
- Check the open [issues](https://git.frostfs.info/TrueCloudLab/frostfs-testcases/issues) and
[pull requests](https://git.frostfs.info/TrueCloudLab/frostfs-testcases/pulls) for existing
discussions.
- Open an issue first, to discuss a new feature or enhancement.
- Write tests, and make sure the test suite passes locally.
- Open a pull request, and reference the relevant issue(s).
- Make sure your commits are logically separated and have good comments
explaining the details of your change.
- After receiving feedback, amend your commits or add new ones as appropriate.
- **Have fun!**
## Development Workflow
Start by forking the `frostfs-testcases` repository, make changes in a branch and then
send a pull request. We encourage pull requests to discuss code changes. Here
are the steps in detail:
### Set up your Git Repository
Fork the [FrostFS testcases upstream](https://git.frostfs.info/TrueCloudLab/frostfs-testcases/forks) source
repository to your own personal repository. Copy the URL of your fork and clone it:
```shell
$ git clone <url of your fork>
```
### Set up git remote as ``upstream``
```sh
$ cd frostfs-testcases
$ git remote add upstream https://git.frostfs.info/TrueCloudLab/frostfs-testcases
$ git fetch upstream
```
### Set up development environment
To set up the development environment for `frostfs-testcases`, please take the following steps:
1. Prepare virtualenv
```shell
$ make venv
$ source frostfs-testcases-3.10/bin/activate
```
Optionally you might want to integrate code formatters with your code editor to apply formatters to code files as you go:
* isort is supported by [PyCharm](https://plugins.jetbrains.com/plugin/15434-isortconnect), [VS Code](https://cereblanco.medium.com/setup-black-and-isort-in-vscode-514804590bf9). Plugins exist for other IDEs/editors as well.
* black can be integrated with multiple editors; instructions are available [here](https://black.readthedocs.io/en/stable/integrations/editors.html).
### Create your feature branch
Before making code changes, make sure you create a separate branch for these
changes. You may find it convenient to name the branch in
`<type>/<issue>-<changes_topic>` format.
```shell
$ git checkout -b feature/123-something_awesome
```
### Commit changes
After verification, commit your changes. There is a [great
post](https://chris.beams.io/posts/git-commit/) on how to write useful commit
messages. Try following this template:
```
[#Issue] Summary
Description
<Macros>
<Sign-Off>
```
```shell
$ git commit -am '[#123] Add some feature'
```
### Push to the branch
Push your locally committed changes to the remote origin (your fork):
```shell
$ git push origin feature/123-something_awesome
```
### Create a Pull Request
Pull requests can be created via Git. Refer to [this
document](https://docs.codeberg.org/collaborating/pull-requests-and-git-flow/) for
detailed steps on how to create a pull request. After a Pull Request gets peer
reviewed and approved, it will be merged.
## Code Style
The names of Python variables, functions and classes must comply with [PEP8](https://peps.python.org/pep-0008) rules, in particular:
* Name of a variable/function must be in snake_case (lowercase, with words separated by underscores as necessary to improve readability).
* Name of a global variable must be in UPPER_SNAKE_CASE, the underscore (`_`) symbol must be used as a separator between words.
* Name of a class must be in PascalCase (the first letter of each compound word in a variable name is capitalized).
* Names of other variables should not be ended with the underscore symbol.
The line length limit is set to 100 characters.
Imports should be ordered in accordance with [isort default rules](https://pycqa.github.io/isort/).
We use `black` and `isort` for code formatting. Please, refer to [Black code style](https://black.readthedocs.io/en/stable/the_black_code_style/current_style.html) for details.
Type hints are mandatory for library's code:
- class attributes;
- function or method's parameters;
- function or method's return type.
The only exception is the return type of test functions or methods - there's not much use in specifying `None` as the return type for each test function.
Do not use relative imports. Even if the module is in the same package, use the full package name.
To format docstrings, please use [Google Style Docstrings](https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html). Type annotations should be specified in the code and not in docstrings (please refer to [this sample](https://sphinxcontrib-napoleon.readthedocs.io/en/latest/index.html#type-annotations)).
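For example, a snippet that satisfies these rules might look like:
```python
from dataclasses import dataclass

MAX_OBJECT_SIZE = 64 * 1024 * 1024  # global constant: UPPER_SNAKE_CASE


@dataclass
class StorageNode:  # class name: PascalCase
    endpoint: str
    wallet_path: str


def get_node_endpoint(node: StorageNode) -> str:  # snake_case, full type hints
    """Return the gRPC endpoint of a storage node.

    Args:
        node: Node to inspect.

    Returns:
        Endpoint in host:port format.
    """
    return node.endpoint
```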
## DCO Sign off
All authors to the project retain copyright to their work. However, to ensure
that they are only submitting work that they have rights to, we are requiring
everyone to acknowledge this by signing their work.
Any copyright notices in this repository should specify the authors as "the
contributors".
To sign your work, just add a line like this at the end of your commit message:
```
Signed-off-by: Samii Sakisaka <samii@nspcc.ru>
```
This can easily be done with the `--signoff` option to `git commit`.
By doing this you state that you can certify the following (from [The Developer
Certificate of Origin](https://developercertificate.org/)):
```
Developer Certificate of Origin
Version 1.1
Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
1 Letterman Drive
Suite D4700
San Francisco, CA, 94129
Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.
Developer's Certificate of Origin 1.1
By making a contribution to this project, I certify that:
(a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
(b) The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
(c) The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
(d) I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.
```

LICENSE

@@ -0,0 +1,674 @@
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<https://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<https://www.gnu.org/licenses/why-not-lgpl.html>.

Makefile

@@ -1,38 +1,42 @@
#!/usr/bin/make -f
SHELL := /bin/bash
PYTHON_VERSION := 3.10
VENV_NAME = frostfs-testcases-${PYTHON_VERSION}
VENV_DIR := venv.${VENV_NAME}
.DEFAULT_GOAL := help
current_dir := $(shell pwd)
FROM_VENV := . ${VENV_DIR}/bin/activate &&
SHELL = bash
venv: create requirements paths precommit
@echo Ready
OUTPUT_DIR = artifacts/
KEYWORDS_REPO = git@github.com:nspcc-dev/neofs-keywords.git
VENVS = $(shell ls -1d venv/*/ | sort -u | xargs basename -a)
precommit:
@echo Installing pre-commit hooks
${FROM_VENV} pre-commit install
.PHONY: all
all: venvs
paths:
@echo Append paths for project
@echo Virtual environment: ${VENV_DIR}
@rm -rf ${VENV_DIR}/lib/python${PYTHON_VERSION}/site-packages/_paths.pth
@touch ${VENV_DIR}/lib/python${PYTHON_VERSION}/site-packages/_paths.pth
@echo ${current_dir} | tee ${VENV_DIR}/lib/python${PYTHON_VERSION}/site-packages/_paths.pth
include venv_template.mk
create: ${VENV_DIR}
run: venvs
@echo "⇒ Test Run"
@robot --timestampoutputs --outputdir $(OUTPUT_DIR) robot/testsuites/integration/
${VENV_DIR}:
@echo Create virtual environment ${VENV_DIR}
virtualenv --python=python${PYTHON_VERSION} --prompt=${VENV_NAME} ${VENV_DIR}
.PHONY: venvs
venvs:
$(foreach venv,$(VENVS),venv.$(venv))
requirements:
@echo Installing pip requirements
${FROM_VENV} pip install -e ../frostfs-testlib
${FROM_VENV} pip install -Ur requirements.txt
${FROM_VENV} pip install -Ur requirements_dev.txt
$(foreach venv,$(VENVS),$(eval $(call VENV_template,$(venv))))
submodules:
@git submodule init
@git submodule update --recursive --remote
#### VALIDATION SECTION ####
lint: create requirements
${FROM_VENV} pip install -e ../frostfs-testlib;
${FROM_VENV} pylint --disable R,C,W pytest_tests
clean:
rm -rf venv.*
pytest-local:
@echo "⇒ Run Pytest"
python -m pytest pytest_tests/testsuites/
help:
@echo "⇒ run Run testcases ${R}"
validation: lint
${FROM_VENV} pytest --collect-only

195
README.md
View file

@ -1,163 +1,132 @@
## Testcases structure
Tests written with PyTest Framework are located under `pytest_tests/testsuites` directory.
These tests rely on resources and utility modules that were originally developed for the PyTest framework.
## Testcases execution
### Initial preparation
1. Install neofs-cli
- `git clone git@github.com:nspcc-dev/neofs-node.git`
- `cd neofs-node`
1. Install frostfs-cli
- `git clone git@git.frostfs.info:TrueCloudLab/frostfs-node.git`
- `cd frostfs-node`
- `make`
- `sudo cp bin/neofs-cli /usr/local/bin/neofs-cli`
- `sudo cp bin/frostfs-cli /usr/local/bin/frostfs-cli`
2. Install neofs-authmate
- `git clone git@github.com:nspcc-dev/neofs-s3-gw.git`
- `cd neofs-s3-gw`
2. Install frostfs-s3-authmate
- `git clone git@git.frostfs.info:TrueCloudLab/frostfs-s3-gw.git`
- `cd frostfs-s3-gw`
- `make`
- `sudo cp bin/neofs-authmate /usr/local/bin/neofs-authmate`
- `sudo cp bin/frostfs-s3-authmate /usr/local/bin/frostfs-s3-authmate`
3. Install neo-go
- `git clone git@github.com:nspcc-dev/neo-go.git`
- `git clone git@git.frostfs.info:TrueCloudLab/neo-go.git`
- `cd neo-go`
- `git checkout v0.92.0` (or the current version in the neofs-dev-env)
- `git checkout v0.101.0` (or the current version in the frostfs-dev-env)
- `make`
- `sudo cp bin/neo-go /usr/local/bin/neo-go`
or download binary from releases: https://github.com/nspcc-dev/neo-go/releases
or download binary from releases: https://git.frostfs.info/TrueCloudLab/neo-go/releases
4. Clone neofs-dev-env
`git clone git@github.com:nspcc-dev/neofs-dev-env.git`
4. Clone frostfs-dev-env
`git clone git@git.frostfs.info:TrueCloudLab/frostfs-dev-env.git`
Note that we expect neofs-dev-env to be located under
the `<testcases_root_dir>/../neofs-dev-env` directory. If you put this repo in any other place,
manually set the full path to neofs-dev-env in the environment variable `DEVENV_PATH` at this step.
Note that we expect frostfs-dev-env to be located under
the `<testcases_root_dir>/../frostfs-dev-env` directory. If you put this repo in any other place,
manually set the full path to frostfs-dev-env in the environment variable `DEVENV_PATH` at this step.
5. Make sure you have installed all of the following prerequisites on your machine
5. Make sure you have installed all the following prerequisites on your machine
```
make
python3.9
python3.9-dev
python3.10
python3.10-dev
libssl-dev
```
As we use neofs-dev-env, you'll also need to install
[prerequisites](https://github.com/nspcc-dev/neofs-dev-env#prerequisites) of this repository.
As we use frostfs-dev-env, you'll also need to install
[prerequisites](https://git.frostfs.info/TrueCloudLab/frostfs-dev-env#prerequisites) of this repository.
## Robot Framework
6. Prepare virtualenv
### Run
1. Prepare virtualenv
```
$ make venv.localtest
$ . venv.localtest/bin/activate
```shell
$ make venv
$ source venv.frostfs-testcases-3.10/bin/activate
```
2. Run tests
7. Optionally, you can integrate code formatters with your code editor so that they are applied to code files as you go:
* isort is supported by [PyCharm](https://plugins.jetbrains.com/plugin/15434-isortconnect), [VS Code](https://cereblanco.medium.com/setup-black-and-isort-in-vscode-514804590bf9). Plugins exist for other IDEs/editors as well.
* black can be integrated with multiple editors; instructions are available [here](https://black.readthedocs.io/en/stable/integrations/editors.html).
In the activated virtualenv, execute the following command(s) to run a single test suite or all the suites in the directory
```
$ robot --outputdir artifacts/ robot/testsuites/integration/<UserScenario>
$ robot --outputdir artifacts/ robot/testsuites/integration/<UserScenario>/<testcase>.robot
```
8. Install Allure CLI
Allure CLI installation is not an easy task, so a better option might be to run allure from a Docker container (refer to step 2 of the next section for instructions).
### Generation of documentation
To generate Keywords documentation:
```
python3 -m robot.libdoc robot/resources/lib/neofs.py docs/NeoFS_Library.html
python3 -m robot.libdoc robot/resources/lib/payment_neogo.py docs/Payment_Library.html
```
To generate testcases documentation:
```
python3 -m robot.testdoc robot/testsuites/integration/ docs/testcases.html
```
### Source code overview
`robot/` - Files related/depended on Robot Framework.
`robot/resources/` - All resources (Robot Framework Keywords, Python Libraries, etc) which could be used for creating test suites.
`robot/resources/lib/` - Common Python Libraries depended on Robot Framework (with Keywords). For example neofs.py, payment.py.
`robot/variables/` - All variables for tests. It is possible to add the auto-loading logic of parameters from the smart-contract in the future. Contain python files.
`robot/testsuites/` - Robot TestSuites and TestCases.
`robot/testsuites/integration/` - Integration test suites and testcases
### Code style
Robot Framework keywords should use a space as a separator between words.
The name of a library function used as a Robot Framework keyword must be identical to the name of the same function in the Python library.
The name of a GLOBAL VARIABLE must be in UPPER CASE; the underscore ('_') must be used as a separator between words.
The name of a local variable must be in lower case; the underscore must be used as a separator between words.
The names of Python variables, functions and classes must comply with accepted rules, in particular:
Names of variables and functions must be in lower case with underscores between words.
A class name must start with a capital letter; underscores are not allowed in class names, capitalize each word instead.
For example: NeoFSConf
Names of other variables should not end with an underscore.
When defining keywords, specify the variable type, e.g. path: str
### Robot style
You should always fill in the [Tags] and [Documentation] sections for test cases, and the Documentation section for test suites.
### Robot-framework User Guide
http://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html
## PyTest
Tests written with PyTest framework are located under `pytest_tests/testsuites` directory.
### Run and get report
1. Prepare virtualenv
```
$ make venv.local-pytest
$ . venv.local-pytest/bin/activate
```
2. Install Allure CLI
Allure CLI installation is not an easy task. You may select one of the following ways. If none of the options helps you, please extend this instruction with your approach:
To install Allure CLI, you may take one of the following ways:
- Follow the [instruction](https://docs.qameta.io/allure/#_linux) from the official website
- Consult [the thread](https://github.com/allure-framework/allure2/issues/989)
- Download a release from GitHub
```
```shell
$ wget https://github.com/allure-framework/allure2/releases/download/2.18.1/allure_2.18.1-1_all.deb
$ sudo apt install ./allure_2.18.1-1_all.deb
```
You also need the `default-jre` package installed.
3. Run tests
If none of the options works for you, please extend this instruction with your approach.
In the activated virtualenv, execute the following command(s) to run a single test suite or all the suites in the directory
```
### Run and get report
1. Run tests
Make sure that the virtualenv is activated, then execute the following command to run a single test suite or all the suites in the directory
```shell
$ pytest --alluredir my-allure-123 pytest_tests/testsuites/object/test_object_api.py
$ pytest --alluredir my-allure-123 pytest_tests/testsuites/
```
4. Generate report
2. Generate report
To generate a report, execute the command `allure generate`. The report will be under the `allure-report` directory.
```
If you opted to install Allure CLI, you can generate a report using the command `allure generate`. The web representation of the report will be under the `allure-report` directory:
```shell
$ allure generate my-allure-123
$ ls allure-report/
app.js data export favicon.ico history index.html plugins styles.css widgets
```
To inspect the report in a browser, run
```
```shell
$ allure serve my-allure-123
```
If you prefer to run allure from Docker, you can use the following command:
```shell
$ mkdir -p $PWD/allure-reports
$ docker run -p 5050:5050 -e CHECK_RESULTS_EVERY_SECONDS=30 -e KEEP_HISTORY=1 \
-v $PWD/my-allure-123:/app/allure-results \
-v $PWD/allure-reports:/app/default-reports \
frankescobar/allure-docker-service
```
Then you can open the allure report in your browser [at this link](http://localhost:5050/allure-docker-service/projects/default/reports/latest/index.html?redirect=false)
NOTE: feel free to select a different location for the `allure-reports` directory; there is no requirement to have it inside `frostfs-testcases`. For example, you can place it under `/tmp`.
# Contributing
Feel free to contribute to this project after reading the [contributing
guidelines](CONTRIBUTING.md).
Before starting to work on a certain topic, create a new issue first, describing
the feature/topic you are going to implement.
# License
- [GNU General Public License v3.0](LICENSE)
## Pytest marks
Custom pytest marks used in tests:
* `sanity` - tests that must run in sanity testruns.
* `smoke` - tests that must run in smoke testruns.
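A minimal sketch of applying these marks (standard pytest mechanics; the test name is hypothetical):
```
import pytest


@pytest.mark.sanity
@pytest.mark.smoke
def test_example():
    ...
```
Marked subsets can then be selected with pytest's standard `-m` option, e.g. `pytest -m sanity pytest_tests/testsuites/`.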

View file

@ -1,31 +0,0 @@
diff -urN bin.orig/activate bin/activate
--- bin.orig/activate 2018-12-27 14:55:13.916461020 +0900
+++ bin/activate 2018-12-27 20:38:35.223248728 +0900
@@ -30,6 +30,15 @@
unset _OLD_VIRTUAL_PS1
fi
+ # Unset exported dev-env variables
+ pushd ${DEVENV_PATH} > /dev/null
+ unset `make env | awk -F= '{print $1}'`
+ popd > /dev/null
+
+ # Unset external env variables
+ declare -f env_deactivate > /dev/null && env_deactivate
+ declare -f venv_deactivate > /dev/null && venv_deactivate
+
unset VIRTUAL_ENV
if [ ! "${1-}" = "nondestructive" ] ; then
# Self destruct!
@@ -47,6 +56,11 @@
PATH="$VIRTUAL_ENV/bin:$PATH"
export PATH
+# Set external variables
+if [ -f ${VIRTUAL_ENV}/bin/environment.sh ] ; then
+ . ${VIRTUAL_ENV}/bin/environment.sh
+fi
+
# unset PYTHONHOME if set
if ! [ -z "${PYTHONHOME+_}" ] ; then
_OLD_VIRTUAL_PYTHONHOME="$PYTHONHOME"

View file

@ -1,81 +0,0 @@
---
# Object sizes
simple_obj_size: 1000
complex_obj_size: 2000
# Timeouts
container_wait_interval: '1m'
mainnet_block_time: '1s'
mainnet_timeout: '1min'
morph_block_time: '1s'
neofs_contract_cache_timeout: '30s'
shard_remove_interval: '1m'
# Services endpoints
neofs_endpoint: 's01.neofs.devenv:8080'
neo_mainnet_endpoint: 'http://main-chain.neofs.devenv:30333'
morph_endpoint: 'http://morph-chain.neofs.devenv:30333'
http_gate: 'http://http.neofs.devenv'
s3_gate: 'https://s3.neofs.devenv:8080'
storage_node_1: 's01.neofs.devenv:8080'
storage_node_2: 's02.neofs.devenv:8080'
storage_node_3: 's03.neofs.devenv:8080'
storage_node_4: 's04.neofs.devenv:8080'
neofs_netmap:
s01:
rpc: 's01.neofs.devenv:8080'
control: 's01.neofs.devenv:8081'
wallet_path: '../neofs-dev-env/services/storage/wallet01.json'
un_locode: 'RU MOW'
s02:
rpc: 's02.neofs.devenv:8080'
control: 's02.neofs.devenv:8081'
wallet_path: '../neofs-dev-env/services/storage/wallet02.json'
un_locode: 'RU LED'
s03:
rpc: 's03.neofs.devenv:8080'
control: 's03.neofs.devenv:8081'
wallet_path: '../neofs-dev-env/services/storage/wallet03.json'
un_locode: 'SE STO'
s04:
rpc: 's04.neofs.devenv:8080'
control: 's04.neofs.devenv:8081'
wallet_path: '../neofs-dev-env/services/storage/wallet04.json'
un_locode: 'FI HEL'
# Paths to binaries
neogo_cli_exec: 'neo-go'
neogo_executable: 'neo-go'
neofs_cli_exec: 'neofs-cli'
# Neo Blockchain configuration
gas_hash: '0xd2a4cff31913016155e38e474a2c06d08be276cf'
neofs_contract: 'd07ec2a43d2f8638934d340bfb60b6c23afce106'
morph_magic: '15405'
# NeoFS common parameters
common_placement_rule: "REP 2 IN X CBF 1 SELECT 4 FROM * AS X"
# TODO: remove within the scope of
# https://github.com/nspcc-dev/neofs-testcases/issues/246
gate_pub_key: '0313b1ac3a8076e155a7e797b24f0b650cccad5941ea59d7cfd51a024a8b2a06bf'
# Wallets
devenv_services_path: '../neofs-dev-env/services'
wallet_config: 'neofs_cli_configs/empty_passwd.yml'
mainnet_wallet_path: '../neofs-dev-env/services/chain/node-wallet.json'
mainnet_wallet_config: 'neofs_cli_configs/one_wallet_password.yml'
mainnet_single_addr: 'NfgHwwTi3wHAS8aFAN243C5vGbkYDpqLHP'
mainnet_wallet_pass: 'one'
ir_wallet_path: '../neofs-dev-env/services/ir/wallet01.json'
ir_wallet_config: 'neofs_cli_configs/one_wallet_password.yml'
ir_wallet_pass: 'one'
storage_wallet_path: '../neofs-dev-env/services/storage/wallet01.json'

@ -1 +0,0 @@
Subproject commit f66be076acb102a80e9f8abd5d1cde104673464e

View file

@ -1 +0,0 @@
password: ""

View file

@ -1 +0,0 @@
password: "one"

8
pyproject.toml Normal file
View file

@ -0,0 +1,8 @@
[tool.isort]
profile = "black"
src_paths = ["pytest_tests"]
line_length = 140
[tool.black]
line-length = 140
target-version = ["py310"]

73
pytest.ini Normal file
View file

@ -0,0 +1,73 @@
[pytest]
log_cli = 1
log_cli_level = DEBUG
log_cli_format = %(asctime)s [%(levelname)4s] %(message)s
log_format = %(asctime)s [%(levelname)4s] %(message)s
log_cli_date_format = %Y-%m-%d %H:%M:%S
log_date_format = %H:%M:%S
markers =
# special markers
staging: tests excluded from verifier/pr-validation/sanity jobs and run only in the staging job
sanity: test runs in sanity testrun
smoke: test runs in smoke testrun
# controlling markers
order: manual control of test order
logs_after_session: make this the last test in the session
# parametrizing markers
container: specify container details for container creation
# functional markers
maintenance: tests for changing node mode
container: tests for container creation
grpc_api: standard gRPC API tests
grpc_control: tests related to using frostfs-cli control commands
grpc_object_lock: gRPC lock tests
grpc_without_user: gRPC without user tests
http_gate: HTTP gate contract
http_put: HTTP gate test cases with PUT call
s3_gate: All S3 gate tests
s3_gate_base: Base S3 gate tests
s3_gate_bucket: Bucket S3 gate tests
s3_gate_locking: Locking S3 gate tests
s3_gate_multipart: S3 gate tests with multipart object
s3_gate_object: Object S3 gate tests
s3_gate_tagging: Tagging S3 gate tests
s3_gate_versioning: Versioning S3 gate tests
long: long tests (with long execution time)
node_mgmt: frostfs control commands
session_token: tests for operations with session token
static_session: tests for operations with static session token
bearer: tests for bearer tokens
acl: All tests for ACL
acl_basic: tests for basic ACL
acl_bearer: tests for ACL with bearer
acl_extended: tests for extended ACL
acl_filters: tests for extended ACL with filters and headers
storage_group: tests for storage groups
failover: tests for system recovery after a failure
failover_panic: tests for system recovery after panic reboot of a node
failover_network: tests for network failure
failover_reboot: tests for system recovery after reboot of a node
interfaces: tests that bring network interfaces down
check_binaries: check versions of installed frostfs binaries
payments: tests for payment associated operations
load: performance tests
simple: tests with simple characteristics
complex: tests with complex characteristics
aws: AWS related tests
boto3: tests using the boto3 library
policy: policy tests
failover_baremetal: failover tests on hardware (baremetal)
failover_server: server failover tests
failover_storage: storage failover tests
failover_empty_map: failover tests for an empty map
failover_empty_map_offlne: offline failover tests for an empty map
failover_empty_map_stop_service: failover tests for stopped empty map service
failover_data_loss: failover tests in case of data loss
metabase_loss: tests for metadata loss
write_cache_loss: tests for write cache loss
time: time tests
replication: replication tests
ec_replication: EC replication tests
static_session_container: tests for a static session in a container
shard: shard management tests
session_logs: check logs messages

3
pytest_tests/__init__.py Normal file
View file

@ -0,0 +1,3 @@
import os
TESTS_BASE_PATH = os.path.dirname(os.path.relpath(__file__))
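# Relative base path of the test package; used by resources/common.py below to
# build resource paths such as S3_POLICY_FILE_LOCATION.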

View file

@ -0,0 +1,17 @@
import os
from frostfs_testlib.cli.frostfs_cli.cli import FrostfsCli
from frostfs_testlib.storage.dataclasses import ape
from frostfs_testlib.utils import string_utils
def create_bearer_token(frostfs_cli: FrostfsCli, directory: str, cid: str, rule: ape.Rule, endpoint: str) -> str:
chain_file = os.path.join(directory, string_utils.unique_name("chain-", ".json"))
bearer_token_file = os.path.join(directory, string_utils.unique_name("bt-", ".json"))
signed_bearer_token_file = os.path.join(directory, string_utils.unique_name("bt-sign-", ".json"))
frostfs_cli.bearer.generate_ape_override(rule.chain_id, rule=rule.as_string(), cid=cid, output=chain_file)
frostfs_cli.bearer.create(endpoint, bearer_token_file, issued_at=1, expire_at=9999, ape=chain_file)
frostfs_cli.util.sign_bearer_token(bearer_token_file, signed_bearer_token_file)
return signed_bearer_token_file
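# A hedged usage sketch (the names are assumed pytest fixtures, as in the APE
# tests below): allow a single operation for the OTHERS role and obtain the
# signed token file to pass to object operations.
#
#   rule = ape.Rule(ape.Verb.ALLOW, ape.ObjectOperations.GET, ape.Condition.by_role(ape.Role.OTHERS))
#   bearer = create_bearer_token(frostfs_cli, temp_directory, cid, rule, rpc_endpoint)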

View file

@ -0,0 +1,57 @@
import functools
from typing import Optional
from frostfs_testlib.shell import Shell
from frostfs_testlib.storage.cluster import Cluster
from frostfs_testlib.storage.dataclasses import ape
from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
from ..helpers.object_access import (
can_delete_object,
can_get_head_object,
can_get_object,
can_get_range_hash_of_object,
can_get_range_of_object,
can_put_object,
can_search_object,
)
ALL_OBJECT_OPERATIONS = ape.ObjectOperations.get_all()
FULL_ACCESS = {op: True for op in ALL_OBJECT_OPERATIONS}
NO_ACCESS = {op: False for op in ALL_OBJECT_OPERATIONS}
RO_ACCESS = {op: op not in [ape.ObjectOperations.PUT, ape.ObjectOperations.DELETE] for op in ALL_OBJECT_OPERATIONS}
def assert_access_to_container(
access_matrix: dict[ape.ObjectOperations, bool],
wallet: WalletInfo,
cid: str,
oid: str,
file_name: str,
shell: Shell,
cluster: Cluster,
bearer: Optional[str] = None,
xhdr: Optional[dict] = None,
):
endpoint = cluster.default_rpc_endpoint
results: dict = {}
results[ape.ObjectOperations.PUT] = can_put_object(wallet, cid, file_name, shell, cluster, bearer, xhdr)
results[ape.ObjectOperations.HEAD] = can_get_head_object(wallet, cid, oid, shell, endpoint, bearer, xhdr)
results[ape.ObjectOperations.GET_RANGE] = can_get_range_of_object(wallet, cid, oid, shell, endpoint, bearer, xhdr)
results[ape.ObjectOperations.GET_RANGE_HASH] = can_get_range_hash_of_object(wallet, cid, oid, shell, endpoint, bearer, xhdr)
results[ape.ObjectOperations.SEARCH] = can_search_object(wallet, cid, shell, endpoint, oid, bearer, xhdr)
results[ape.ObjectOperations.GET] = can_get_object(wallet, cid, oid, file_name, shell, cluster, bearer, xhdr)
results[ape.ObjectOperations.DELETE] = can_delete_object(wallet, cid, oid, shell, endpoint, bearer, xhdr)
failed_checks = [
f"allowed {action} failed" for action, success in results.items() if not success and access_matrix[action] != results[action]
] + [f"denied {action} succeeded" for action, success in results.items() if success and access_matrix[action] != results[action]]
assert not failed_checks, ", ".join(failed_checks)
assert_full_access_to_container = functools.partial(assert_access_to_container, FULL_ACCESS)
assert_no_access_to_container = functools.partial(assert_access_to_container, NO_ACCESS)
assert_read_only_container = functools.partial(assert_access_to_container, RO_ACCESS)
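# The functools.partial calls above pre-bind the access matrix, so call sites
# pass only the subject, e.g. (sketch, arguments as in the ACL tests below):
#   assert_full_access_to_container(wallet, cid, oid, file_name, shell, cluster)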

View file

@ -0,0 +1,23 @@
from dataclasses import dataclass
from frostfs_testlib.steps.cli.container import DEFAULT_PLACEMENT_RULE
from frostfs_testlib.storage.cluster import Cluster
@dataclass
class ContainerSpec:
rule: str = DEFAULT_PLACEMENT_RULE
basic_acl: str = None
allow_owner_via_ape: bool = False
def parsed_rule(self, cluster: Cluster):
if self.rule is None:
return None
substitutions = {"%NODE_COUNT%": str(len(cluster.cluster_nodes))}
parsed_rule = self.rule
for sub, replacement in substitutions.items():
parsed_rule = parsed_rule.replace(sub, replacement)
return parsed_rule
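# A hedged sketch of the %NODE_COUNT% substitution (assuming a cluster object
# with four nodes):
#
#   spec = ContainerSpec(rule="REP %NODE_COUNT% IN X CBF 1 SELECT %NODE_COUNT% FROM * AS X")
#   spec.parsed_rule(cluster)  # -> "REP 4 IN X CBF 1 SELECT 4 FROM * AS X"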

View file

@ -0,0 +1,218 @@
from typing import Optional
from frostfs_testlib import reporter
from frostfs_testlib.resources.cli import CLI_DEFAULT_TIMEOUT
from frostfs_testlib.resources.error_patterns import OBJECT_ACCESS_DENIED
from frostfs_testlib.shell import Shell
from frostfs_testlib.steps.cli.object import (
delete_object,
get_object_from_random_node,
get_range,
get_range_hash,
head_object,
put_object_to_random_node,
search_object,
)
from frostfs_testlib.storage.cluster import Cluster
from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
from frostfs_testlib.utils import string_utils
from frostfs_testlib.utils.file_utils import get_file_hash
OPERATION_ERROR_TYPE = RuntimeError
def can_get_object(
wallet: WalletInfo,
cid: str,
oid: str,
file_name: str,
shell: Shell,
cluster: Cluster,
bearer: Optional[str] = None,
xhdr: Optional[dict] = None,
) -> bool:
with reporter.step("Try get object from container"):
try:
got_file_path = get_object_from_random_node(
wallet,
cid,
oid,
bearer=bearer,
xhdr=xhdr,
shell=shell,
cluster=cluster,
)
except OPERATION_ERROR_TYPE as err:
assert string_utils.is_str_match_pattern(err, OBJECT_ACCESS_DENIED), f"Expected {err} to match {OBJECT_ACCESS_DENIED}"
return False
assert get_file_hash(file_name) == get_file_hash(got_file_path)
return True
def can_put_object(
wallet: WalletInfo,
cid: str,
file_name: str,
shell: Shell,
cluster: Cluster,
bearer: Optional[str] = None,
xhdr: Optional[dict] = None,
attributes: Optional[dict] = None,
) -> bool:
with reporter.step("Try put object to container"):
try:
put_object_to_random_node(
wallet,
file_name,
cid,
bearer=bearer,
xhdr=xhdr,
attributes=attributes,
shell=shell,
cluster=cluster,
)
except OPERATION_ERROR_TYPE as err:
assert string_utils.is_str_match_pattern(err, OBJECT_ACCESS_DENIED), f"Expected {err} to match {OBJECT_ACCESS_DENIED}"
return False
return True
def can_delete_object(
wallet: WalletInfo,
cid: str,
oid: str,
shell: Shell,
endpoint: str,
bearer: Optional[str] = None,
xhdr: Optional[dict] = None,
) -> bool:
with reporter.step("Try delete object from container"):
try:
delete_object(
wallet,
cid,
oid,
bearer=bearer,
xhdr=xhdr,
shell=shell,
endpoint=endpoint,
)
except OPERATION_ERROR_TYPE as err:
assert string_utils.is_str_match_pattern(err, OBJECT_ACCESS_DENIED), f"Expected {err} to match {OBJECT_ACCESS_DENIED}"
return False
return True
def can_get_head_object(
wallet: WalletInfo,
cid: str,
oid: str,
shell: Shell,
endpoint: str,
bearer: Optional[str] = None,
xhdr: Optional[dict] = None,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
) -> bool:
with reporter.step("Try get head of object"):
try:
head_object(
wallet,
cid,
oid,
bearer=bearer,
xhdr=xhdr,
shell=shell,
endpoint=endpoint,
timeout=timeout,
)
except OPERATION_ERROR_TYPE as err:
assert string_utils.is_str_match_pattern(err, OBJECT_ACCESS_DENIED), f"Expected {err} to match {OBJECT_ACCESS_DENIED}"
return False
return True
def can_get_range_of_object(
wallet: WalletInfo,
cid: str,
oid: str,
shell: Shell,
endpoint: str,
bearer: Optional[str] = None,
xhdr: Optional[dict] = None,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
) -> bool:
with reporter.step("Try get range of object"):
try:
get_range(
wallet,
cid,
oid,
bearer=bearer,
range_cut="0:10",
xhdr=xhdr,
shell=shell,
endpoint=endpoint,
timeout=timeout,
)
except OPERATION_ERROR_TYPE as err:
assert string_utils.is_str_match_pattern(err, OBJECT_ACCESS_DENIED), f"Expected {err} to match {OBJECT_ACCESS_DENIED}"
return False
return True
def can_get_range_hash_of_object(
wallet: WalletInfo,
cid: str,
oid: str,
shell: Shell,
endpoint: str,
bearer: Optional[str] = None,
xhdr: Optional[dict] = None,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
) -> bool:
with reporter.step("Try get range hash of object"):
try:
get_range_hash(
wallet,
cid,
oid,
bearer=bearer,
range_cut="0:10",
xhdr=xhdr,
shell=shell,
endpoint=endpoint,
timeout=timeout,
)
except OPERATION_ERROR_TYPE as err:
assert string_utils.is_str_match_pattern(err, OBJECT_ACCESS_DENIED), f"Expected {err} to match {OBJECT_ACCESS_DENIED}"
return False
return True
def can_search_object(
wallet: WalletInfo,
cid: str,
shell: Shell,
endpoint: str,
oid: Optional[str] = None,
bearer: Optional[str] = None,
xhdr: Optional[dict] = None,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
) -> bool:
with reporter.step("Try search object in container"):
try:
oids = search_object(
wallet,
cid,
bearer=bearer,
xhdr=xhdr,
shell=shell,
endpoint=endpoint,
timeout=timeout,
)
except OPERATION_ERROR_TYPE as err:
assert string_utils.is_str_match_pattern(err, OBJECT_ACCESS_DENIED), f"Expected {err} to match {OBJECT_ACCESS_DENIED}"
return False
if oid:
return oid in oids
return True

View file

@ -1,220 +0,0 @@
import logging
import socket
import tempfile
import textwrap
from contextlib import contextmanager
from dataclasses import dataclass
from datetime import datetime
from functools import wraps
from time import sleep
from typing import ClassVar, Optional
import allure
from paramiko import AutoAddPolicy, SFTPClient, SSHClient, SSHException, ssh_exception, RSAKey
from paramiko.ssh_exception import AuthenticationException
class HostIsNotAvailable(Exception):
"""Raises when host is not reachable."""
def __init__(self, ip: str = None, exc: Exception = None):
msg = f'Host is not available{f" by ip: {ip}" if ip else ""}'
if exc:
msg = f'{msg}. {exc}'
super().__init__(msg)
def log_command(func):
@wraps(func)
def wrapper(host: 'HostClient', command: str, *args, **kwargs):
display_length = 60
short = command.removeprefix("$ProgressPreference='SilentlyContinue'\n")
short = short[:display_length]
short += '...' if short != command else ''
with allure.step(f'SSH: {short}'):
logging.info(f'Execute command "{command}" on "{host.ip}"')
start_time = datetime.utcnow()
cmd_result = func(host, command, *args, **kwargs)
end_time = datetime.utcnow()
log_message = f'HOST: {host.ip}\n' \
f'COMMAND:\n{textwrap.indent(command, " ")}\n' \
f'RC:\n {cmd_result.rc}\n' \
f'STDOUT:\n{textwrap.indent(cmd_result.stdout, " ")}\n' \
f'STDERR:\n{textwrap.indent(cmd_result.stderr, " ")}\n' \
f'Start / End / Elapsed\t {start_time.time()} / {end_time.time()} / {end_time - start_time}'
logging.info(log_message)
allure.attach(log_message, 'SSH command', allure.attachment_type.TEXT)
return cmd_result
return wrapper
@dataclass
class SSHCommand:
stdout: str
stderr: str
rc: int
class HostClient:
ssh_client: SSHClient
SSH_CONNECTION_ATTEMPTS: ClassVar[int] = 3
CONNECTION_TIMEOUT = 30
TIMEOUT_RESTORE_CONNECTION = 10, 24
def __init__(self, ip: str, login: str, password: Optional[str],
private_key_path: Optional[str] = None, init_ssh_client=True) -> None:
self.ip = ip
self.login = login
self.password = password
self.private_key_path = private_key_path
if init_ssh_client:
self.create_connection(self.SSH_CONNECTION_ATTEMPTS)
def exec(self, cmd: str, verify=True, timeout=30) -> SSHCommand:
cmd_result = self._inner_exec(cmd, timeout)
if verify:
assert cmd_result.rc == 0, f'Non zero rc from command: "{cmd}"'
return cmd_result
@log_command
def exec_with_confirmation(self, cmd: str, confirmation: list, verify=True, timeout=10) -> SSHCommand:
ssh_stdin, ssh_stdout, ssh_stderr = self.ssh_client.exec_command(cmd, timeout=timeout)
for line in confirmation:
if not line.endswith('\n'):
line = f'{line}\n'
try:
ssh_stdin.write(line)
except OSError as err:
logging.error(f'Got error {err} executing command {cmd}')
ssh_stdin.close()
output = SSHCommand(stdout=ssh_stdout.read().decode(errors='ignore'),
stderr=ssh_stderr.read().decode(errors='ignore'),
rc=ssh_stdout.channel.recv_exit_status())
if verify:
debug_info = f'\nSTDOUT: {output.stdout}\nSTDERR: {output.stderr}\nRC: {output.rc}'
assert output.rc == 0, f'Non zero rc from command: "{cmd}"{debug_info}'
return output
@contextmanager
def as_user(self, user: str, password: str):
keep_user, keep_password = self.login, self.password
self.login, self.password = user, password
self.create_connection()
yield
self.login, self.password = keep_user, keep_password
self.create_connection()
@allure.step('Restore connection')
def restore_ssh_connection(self):
retry_time, retry_count = self.TIMEOUT_RESTORE_CONNECTION
for _ in range(retry_count):
try:
self.create_connection()
except AssertionError:
logging.warning(f'Host: Cant reach host: {self.ip}.')
sleep(retry_time)
else:
logging.info(f'Host: Cant reach host: {self.ip}.')
return
raise AssertionError(f'Host: Cant reach host: {self.ip} after 240 seconds..')
@allure.step('Copy file {host_path_to_file} to local file {path_to_file}')
def copy_file_from_host(self, host_path_to_file: str, path_to_file: str):
with self._sftp_client() as sftp_client:
sftp_client.get(host_path_to_file, path_to_file)
def copy_file_to_host(self, path_to_file: str, host_path_to_file: str):
with allure.step(f'Copy local file {path_to_file} to remote file {host_path_to_file} on host {self.ip}'):
with self._sftp_client() as sftp_client:
sftp_client.put(path_to_file, host_path_to_file)
@allure.step('Save string to remote file {host_path_to_file}')
def copy_str_to_host_file(self, string: str, host_path_to_file: str):
with tempfile.NamedTemporaryFile(mode='r+') as temp:
temp.writelines(string)
temp.flush()
with self._sftp_client() as client:
client.put(temp.name, host_path_to_file)
self.exec(f'cat {host_path_to_file}', verify=False)
def create_connection(self, attempts=SSH_CONNECTION_ATTEMPTS):
exc_err = None
for attempt in range(attempts):
self.ssh_client = SSHClient()
self.ssh_client.set_missing_host_key_policy(AutoAddPolicy())
try:
if self.private_key_path:
logging.info(
f"Trying to connect to host {self.ip} using SSH key "
f"{self.private_key_path} (attempt {attempt})"
)
self.ssh_client.connect(
hostname=self.ip,
pkey=RSAKey.from_private_key_file(self.private_key_path, self.password),
timeout=self.CONNECTION_TIMEOUT
)
else:
logging.info(
f"Trying to connect to host {self.ip} as {self.login} using password "
f"{self.password[:2] + '***' if self.password else ''} (attempt {attempt})"
)
self.ssh_client.connect(
hostname=self.ip,
username=self.login,
password=self.password,
timeout=self.CONNECTION_TIMEOUT
)
return True
except AuthenticationException as auth_err:
logging.error(f'Host: {self.ip}. {auth_err}')
raise auth_err
except (
SSHException,
ssh_exception.NoValidConnectionsError,
AttributeError,
socket.timeout,
OSError
) as ssh_err:
exc_err = ssh_err
logging.error(f'Host: {self.ip}, connection error. {exc_err}')
raise HostIsNotAvailable(self.ip, exc_err)
def drop(self):
self.ssh_client.close()
@log_command
def _inner_exec(self, cmd: str, timeout: int) -> SSHCommand:
if not self.ssh_client:
self.create_connection()
for _ in range(self.SSH_CONNECTION_ATTEMPTS):
try:
_, stdout, stderr = self.ssh_client.exec_command(cmd, timeout=timeout)
return SSHCommand(
stdout=stdout.read().decode(errors='ignore'),
stderr=stderr.read().decode(errors='ignore'),
rc=stdout.channel.recv_exit_status()
)
except (
SSHException,
TimeoutError,
ssh_exception.NoValidConnectionsError,
ConnectionResetError,
AttributeError,
socket.timeout,
) as ssh_err:
logging.error(f'Host: {self.ip}, exec command error {ssh_err}')
self.create_connection()
raise HostIsNotAvailable(f'Host: {self.ip} is not reachable.')
@contextmanager
def _sftp_client(self) -> SFTPClient:
with self.ssh_client.open_sftp() as sftp:
yield sftp

View file

@ -1,48 +1,50 @@
import os
import uuid
import time
from common import ASSETS_DIR, SIMPLE_OBJ_SIZE
from frostfs_testlib import reporter
from frostfs_testlib.resources.common import STORAGE_GC_TIME
from frostfs_testlib.utils import datetime_utils
def create_file_with_content(file_path: str = None, content: str = None) -> str:
mode = 'w+'
if not content:
content = os.urandom(SIMPLE_OBJ_SIZE)
mode = 'wb'
def placement_policy_from_container(container_info: str) -> str:
"""
Get placement policy from container info:
if not file_path:
file_path = f"{os.getcwd()}/{ASSETS_DIR}/{str(uuid.uuid4())}"
else:
if not os.path.exists(os.path.dirname(file_path)):
os.makedirs(os.path.dirname(file_path))
container ID: j7k4auNHRmiPMSmnH2qENLECD2au2y675fvTX6csDwd
version: 2.12
owner ID: NQ8HUxE5qEj7UUvADj7z9Z7pcvJdjtPwuw
basic ACL: 0fbfbfff (eacl-public-read-write)
attribute: Timestamp=1656340345 (2022-06-27 17:32:25 +0300 MSK)
nonce: 1c511e88-efd7-4004-8dbf-14391a5d375a
placement policy:
REP 1 IN LOC_PLACE
CBF 1
SELECT 1 FROM LOC_SW AS LOC_PLACE
FILTER Country EQ Sweden AS LOC_SW
with open(file_path, mode) as out_file:
out_file.write(content)
Args:
container_info: output from frostfs-cli container get command
return file_path
Returns:
placement policy as a string
"""
assert ":" in container_info, f"Could not find placement rule in the output {container_info}"
return container_info.split(":")[-1].replace("\n", " ").strip()
def get_file_content(file_path: str) -> str:
with open(file_path, 'r') as out_file:
content = out_file.read()
return content
def wait_for_gc_pass_on_storage_nodes() -> None:
wait_time = datetime_utils.parse_time(STORAGE_GC_TIME)
with reporter.step(f"Wait {wait_time}s until GC completes on storage nodes"):
time.sleep(wait_time)
def split_file(file_path: str, parts: int) -> list[str]:
files = []
with open(file_path, 'rb') as in_file:
data = in_file.read()
def are_numbers_similar(num1, num2, tolerance_percentage: float = 1.0):
"""
If the difference between the numbers is less than the permissible deviation, then the numbers are similar.
"""
# Calculate the permissible deviation
average = (num1 + num2) / 2
tolerance = average * (tolerance_percentage / 100)
content_size = len(data)
chunk_size = int((content_size + parts) / parts)
part_id = 1
for start_position in range(0, content_size + 1, chunk_size):
part_file_name = f'{file_path}_part_{part_id}'
files.append(part_file_name)
with open(part_file_name, 'wb') as out_file:
out_file.write(data[start_position:start_position + chunk_size])
part_id += 1
return files
# Calculate the real difference
difference = abs(num1 - num2)
return difference <= tolerance
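# Worked example: are_numbers_similar(100, 101) -> True, since the average is
# 100.5, the 1% tolerance is ~1.005, and the difference is 1; for
# are_numbers_similar(100, 105) the difference 5 exceeds the tolerance ~1.025,
# so the result is False.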

View file

@ -1,17 +0,0 @@
[pytest]
log_cli = 1
log_cli_level = DEBUG
log_cli_format = %(asctime)s [%(levelname)4s] %(message)s
log_format = %(asctime)s [%(levelname)4s] %(message)s
log_cli_date_format = %Y-%m-%d %H:%M:%S
log_date_format = %H:%M:%S
markers =
# special markers
sanity: small tests subset
staging: test to be excluded from run in verifier/pr-validation/sanity jobs and run test in staging job
# functional markers
grpc_api: standard gRPC API tests
http_gate: HTTP gate contract
s3_gate: S3 gate tests
curl: tests for HTTP gate with curl utility
long: long tests (with long execution time)

View file

@ -1,63 +1 @@
aiodns==3.0.0
aiohttp==3.7.4.post0
aioresponses==0.7.2
allure-pytest==2.9.45
allure-python-commons==2.9.45
async-timeout==3.0.1
asynctest==0.13.0
attrs==21.4.0
base58==2.1.0
bitarray==2.3.4
boto3==1.16.33
botocore==1.19.33
certifi==2022.5.18
cffi==1.15.0
chardet==4.0.0
charset-normalizer==2.0.12
coverage==6.3.3
docker==4.4.0
docutils==0.17.1
Events==0.4
flake8==4.0.1
idna==3.3
iniconfig==1.1.1
isort==5.10.1
jmespath==0.10.0
jsonschema==4.5.1
lz4==3.1.3
mccabe==0.6.1
mmh3==3.0.0
multidict==6.0.2
mypy==0.950
mypy-extensions==0.4.3
neo-mamba==0.10.0
neo3crypto==0.2.1
neo3vm==0.9.0
neo3vm-stubs==0.9.0
netaddr==0.8.0
orjson==3.6.8
packaging==21.3
pexpect==4.8.0
pluggy==1.0.0
ptyprocess==0.7.0
py==1.11.0
pybiginteger==1.2.6
pybiginteger-stubs==1.2.6
pycares==4.1.2
pycodestyle==2.8.0
pycparser==2.21
pycryptodome==3.11.0
pyflakes==2.4.0
pyparsing==3.0.9
pyrsistent==0.18.1
pytest==7.1.2
python-dateutil==2.8.2
requests==2.27.1
robotframework==4.1.2
s3transfer==0.3.7
six==1.16.0
tomli==2.0.1
typing-extensions==4.2.0
urllib3==1.26.9
websocket-client==1.3.2
yarl==1.7.2
-r ../requirements.txt

View file

@ -0,0 +1,8 @@
import os
from .. import TESTS_BASE_PATH
TEST_CYCLES_COUNT = int(os.getenv("TEST_CYCLES_COUNT", "1"))
DEVENV_PATH = os.getenv("DEVENV_PATH", os.path.join("..", "frostfs-dev-env"))
S3_POLICY_FILE_LOCATION = os.path.join(TESTS_BASE_PATH, "resources/files/policy.json")
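# Hedged usage sketch: TEST_CYCLES_COUNT and DEVENV_PATH can be overridden via
# the environment, e.g.
#   TEST_CYCLES_COUNT=3 DEVENV_PATH=../frostfs-dev-env pytest pytest_tests/testsuites/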

View file

@ -0,0 +1,6 @@
{
"rep-3": "REP 3",
"rep-1": "REP 1",
"complex": "REP 1 IN X CBF 1 SELECT 1 FROM * AS X",
"ec3.1": "EC 3.1 CBF 1 SELECT 4 FROM *"
}

View file

@ -0,0 +1,89 @@
{
"records":
[
{
"operation":"PUT",
"action":"ALLOW",
"filters":[],
"targets":
[
{
"role":"OTHERS",
"keys":[]
}
]
},
{
"operation":"HEAD",
"action":"ALLOW",
"filters":[],
"targets":
[
{
"role":"OTHERS",
"keys":[]
}
]
},
{
"operation":"DELETE",
"action":"ALLOW",
"filters":[],
"targets":
[
{
"role":"OTHERS",
"keys":[]
}
]
},
{
"operation":"SEARCH",
"action":"ALLOW",
"filters":[],
"targets":
[
{
"role":"OTHERS",
"keys":[]
}
]
},
{
"operation":"GET",
"action":"ALLOW",
"filters":[],
"targets":
[
{
"role":"OTHERS",
"keys":[]
}
]
},
{
"operation":"GETRANGE",
"action":"ALLOW",
"filters":[],
"targets":
[
{
"role":"OTHERS",
"keys":[]
}
]
},
{
"operation":"GETRANGEHASH",
"action":"ALLOW",
"filters":[],
"targets":
[
{
"role":"OTHERS",
"keys":[]
}
]
}
]
}

View file

@ -0,0 +1,4 @@
NOT_PARSE_POLICY = "can't parse placement policy"
NOT_ENOUGH_TO_SELECT = "selector is not enough"
NOT_FOUND_FILTER = "filter not found"
NOT_FOUND_SELECTOR = "selector not found"

View file

@ -0,0 +1,105 @@
import allure
import pytest
from frostfs_testlib import reporter
from frostfs_testlib.resources.wellknown_acl import PRIVATE_ACL_F, PUBLIC_ACL_F, READONLY_ACL_F
from frostfs_testlib.shell import Shell
from frostfs_testlib.steps.cli.object import put_object_to_random_node
from frostfs_testlib.storage.cluster import Cluster
from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
from frostfs_testlib.testing.cluster_test_base import ClusterTestBase
from ....helpers.container_access import assert_full_access_to_container, assert_no_access_to_container, assert_read_only_container
from ....helpers.container_spec import ContainerSpec
@pytest.mark.nightly
@pytest.mark.sanity
@pytest.mark.acl
class TestACLBasic(ClusterTestBase):
@allure.title("Operations in public container available to everyone (obj_size={object_size})")
@pytest.mark.container(ContainerSpec(basic_acl=PUBLIC_ACL_F))
def test_basic_acl_public(
self,
default_wallet: WalletInfo,
other_wallet: WalletInfo,
client_shell: Shell,
container: str,
file_path: str,
cluster: Cluster,
):
"""
Test access to object operations in public container.
"""
for wallet, role in ((default_wallet, "owner"), (other_wallet, "others")):
with reporter.step("Put objects to container"):
# We create new objects for each wallet because assert_full_access_to_container
# deletes the object
owner_object_oid = put_object_to_random_node(
default_wallet,
file_path,
container,
shell=self.shell,
cluster=self.cluster,
attributes={"created": "owner"},
)
other_object_oid = put_object_to_random_node(
other_wallet,
file_path,
container,
shell=self.shell,
cluster=self.cluster,
attributes={"created": "other"},
)
with reporter.step(f"Check {role} has full access to public container"):
assert_full_access_to_container(wallet, container, owner_object_oid, file_path, client_shell, cluster)
assert_full_access_to_container(wallet, container, other_object_oid, file_path, client_shell, cluster)
@allure.title("Operations in private container only available to owner (obj_size={object_size})")
@pytest.mark.container(ContainerSpec(basic_acl=PRIVATE_ACL_F))
def test_basic_acl_private(
self,
default_wallet: WalletInfo,
other_wallet: WalletInfo,
client_shell: Shell,
container: str,
file_path: str,
cluster: Cluster,
):
"""
Test access to object operations in private container.
"""
with reporter.step("Put object to container"):
owner_object_oid = put_object_to_random_node(default_wallet, file_path, container, client_shell, cluster)
with reporter.step("Check no one except owner has access to operations with container"):
assert_no_access_to_container(other_wallet, container, owner_object_oid, file_path, client_shell, cluster)
with reporter.step("Check owner has full access to private container"):
assert_full_access_to_container(default_wallet, container, owner_object_oid, file_path, self.shell, cluster)
@allure.title("Read operations in readonly container available to others (obj_size={object_size})")
@pytest.mark.container(ContainerSpec(basic_acl=READONLY_ACL_F))
def test_basic_acl_readonly(
self,
default_wallet: WalletInfo,
other_wallet: WalletInfo,
client_shell: Shell,
container: str,
file_path: str,
cluster: Cluster,
):
"""
Test access to object operations in readonly container.
"""
with reporter.step("Put object to container"):
object_oid = put_object_to_random_node(default_wallet, file_path, container, client_shell, cluster)
with reporter.step("Check others has read-only access to operations with container"):
assert_read_only_container(other_wallet, container, object_oid, file_path, client_shell, cluster)
with reporter.step("Check owner has full access to public container"):
assert_full_access_to_container(default_wallet, container, object_oid, file_path, client_shell, cluster)

View file

@ -0,0 +1,228 @@
import allure
import pytest
from frostfs_testlib import reporter
from frostfs_testlib.cli.frostfs_cli.cli import FrostfsCli
from frostfs_testlib.resources.wellknown_acl import PUBLIC_ACL
from frostfs_testlib.steps.cli.object import put_object_to_random_node
from frostfs_testlib.steps.node_management import drop_object
from frostfs_testlib.storage.dataclasses import ape
from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
from frostfs_testlib.testing.cluster_test_base import ClusterTestBase
from frostfs_testlib.utils import wallet_utils
from frostfs_testlib.utils.failover_utils import wait_object_replication
from frostfs_testlib.utils.file_utils import TestFile
from ....helpers.container_access import (
ALL_OBJECT_OPERATIONS,
assert_access_to_container,
assert_full_access_to_container,
assert_no_access_to_container,
)
from ....helpers.container_spec import ContainerSpec
@pytest.fixture
def denied_wallet(default_wallet: WalletInfo, other_wallet: WalletInfo, role: ape.Role) -> WalletInfo:
return other_wallet if role == ape.Role.OTHERS else default_wallet
@pytest.fixture
def allowed_wallet(default_wallet: WalletInfo, other_wallet: WalletInfo, role: ape.Role) -> WalletInfo:
return default_wallet if role == ape.Role.OTHERS else other_wallet
@pytest.mark.nightly
@pytest.mark.ape
class TestApeContainer(ClusterTestBase):
@pytest.mark.sanity
@allure.title("Deny operations via APE by role (role={role}, obj_size={object_size})")
@pytest.mark.parametrize("role", [ape.Role.OWNER, ape.Role.OTHERS], indirect=True)
def test_deny_operations_via_ape_by_role(
self,
denied_wallet: WalletInfo,
allowed_wallet: WalletInfo,
frostfs_cli: FrostfsCli,
container: str,
objects: list[str],
role: ape.Role,
file_path: TestFile,
rpc_endpoint: str,
):
with reporter.step(f"Deny all operations for {role} via APE"):
deny_rule = ape.Rule(ape.Verb.DENY, ALL_OBJECT_OPERATIONS, ape.Condition.by_role(role.value))
frostfs_cli.ape_manager.add(
rpc_endpoint, deny_rule.chain_id, target_name=container, target_type="container", rule=deny_rule.as_string()
)
with reporter.step("Wait for one block"):
self.wait_for_blocks()
with reporter.step(f"Assert denied role have no access to public container"):
# access checks will try to remove object, so we use .pop() to ensure we have object before deletion
assert_no_access_to_container(denied_wallet, container, objects.pop(), file_path, self.shell, self.cluster)
with reporter.step(f"Assert allowed role have full access to public container"):
assert_full_access_to_container(allowed_wallet, container, objects.pop(), file_path, self.shell, self.cluster)
with reporter.step(f"Remove deny rule from APE"):
frostfs_cli.ape_manager.remove(rpc_endpoint, deny_rule.chain_id, target_name=container, target_type="container")
with reporter.step("Wait for one block"):
self.wait_for_blocks()
with reporter.step("Assert allowed role have full access to public container"):
assert_full_access_to_container(allowed_wallet, container, objects.pop(), file_path, self.shell, self.cluster)
with reporter.step("Assert denied role have full access to public container"):
assert_full_access_to_container(denied_wallet, container, objects.pop(), file_path, self.shell, self.cluster)
@allure.title("Deny operations for others via APE excluding single pubkey (obj_size={object_size})")
def test_deny_operations_excluding_pubkey(
self,
frostfs_cli: FrostfsCli,
default_wallet: WalletInfo,
other_wallet: WalletInfo,
other_wallet_2: WalletInfo,
container: str,
objects: list[str],
rpc_endpoint: str,
file_path: TestFile,
):
with reporter.step("Add deny APE rules for others except single wallet"):
rule_conditions = [
ape.Condition.by_role(ape.Role.OTHERS),
ape.Condition.by_key(
wallet_utils.get_wallet_public_key(other_wallet_2.path, other_wallet_2.password),
match_type=ape.MatchType.NOT_EQUAL,
),
]
rule = ape.Rule(ape.Verb.DENY, ALL_OBJECT_OPERATIONS, rule_conditions)
frostfs_cli.ape_manager.add(rpc_endpoint, rule.chain_id, target_name=container, target_type="container", rule=rule.as_string())
with reporter.step("Wait for one block"):
self.wait_for_blocks()
with reporter.step("Assert others have no access to public container"):
# access checks will try to remove object, so we use .pop() to ensure we have object before deletion
assert_no_access_to_container(other_wallet, container, objects[0], file_path, self.shell, self.cluster)
with reporter.step("Assert owner have full access to public container"):
assert_full_access_to_container(default_wallet, container, objects.pop(), file_path, self.shell, self.cluster)
with reporter.step("Assert allowed wallet have full access to public container"):
assert_full_access_to_container(other_wallet_2, container, objects.pop(), file_path, self.shell, self.cluster)
@allure.title("Replication works with APE deny rules on OWNER and OTHERS (obj_size={object_size})")
@pytest.mark.container(ContainerSpec(f"REP %NODE_COUNT% IN X CBF 1 SELECT %NODE_COUNT% FROM * AS X", PUBLIC_ACL))
def test_replication_works_with_deny_rules(
self,
default_wallet: WalletInfo,
frostfs_cli: FrostfsCli,
container: str,
rpc_endpoint: str,
file_path: TestFile,
):
with reporter.step("Put object to container"):
oid = put_object_to_random_node(default_wallet, file_path, container, self.shell, self.cluster)
with reporter.step("Wait for object replication after upload"):
wait_object_replication(container, oid, len(self.cluster.cluster_nodes), self.shell, self.cluster.storage_nodes)
with reporter.step("Add deny APE rules for owner and others"):
rule_conditions = [
ape.Condition.by_role(ape.Role.OWNER),
ape.Condition.by_role(ape.Role.OTHERS),
]
for rule_condition in rule_conditions:
rule = ape.Rule(ape.Verb.DENY, ALL_OBJECT_OPERATIONS, rule_condition)
frostfs_cli.ape_manager.add(
rpc_endpoint, rule.chain_id, target_name=container, target_type="container", rule=rule.as_string()
)
with reporter.step("Wait for one block"):
self.wait_for_blocks()
with reporter.step("Drop object"):
drop_object(self.cluster.storage_nodes[0], container, oid)
with reporter.step("Wait for dropped object to be replicated"):
wait_object_replication(container, oid, len(self.cluster.storage_nodes), self.shell, self.cluster.storage_nodes)
@allure.title("Deny operations via APE by role (role=ir, obj_size={object_size})")
def test_deny_operations_via_ape_by_role_ir(
self, frostfs_cli: FrostfsCli, ir_wallet: WalletInfo, container: str, objects: list[str], rpc_endpoint: str, file_path: TestFile
):
default_ir_access = {
ape.ObjectOperations.PUT: False,
ape.ObjectOperations.GET: True,
ape.ObjectOperations.HEAD: True,
ape.ObjectOperations.GET_RANGE: False,
ape.ObjectOperations.GET_RANGE_HASH: True,
ape.ObjectOperations.SEARCH: True,
ape.ObjectOperations.DELETE: False,
}
with reporter.step("Assert IR wallet access in default state"):
assert_access_to_container(default_ir_access, ir_wallet, container, objects[0], file_path, self.shell, self.cluster)
with reporter.step("Add deny APE rule with deny all operations for IR role"):
rule = ape.Rule(ape.Verb.DENY, ALL_OBJECT_OPERATIONS, [ape.Condition.by_role(ape.Role.IR.value)])
frostfs_cli.ape_manager.add(rpc_endpoint, rule.chain_id, target_name=container, target_type="container", rule=rule.as_string())
with reporter.step("Wait for one block"):
self.wait_for_blocks()
with reporter.step("Assert IR wallet ignores APE rules"):
assert_access_to_container(default_ir_access, ir_wallet, container, objects[0], file_path, self.shell, self.cluster)
with reporter.step("Remove APE rule"):
frostfs_cli.ape_manager.remove(rpc_endpoint, rule.chain_id, target_name=container, target_type="container")
with reporter.step("Wait for one block"):
self.wait_for_blocks()
with reporter.step("Assert IR wallet access is restored"):
assert_access_to_container(default_ir_access, ir_wallet, container, objects[0], file_path, self.shell, self.cluster)
@allure.title("Deny operations via APE by role (role=container, obj_size={object_size})")
def test_deny_operations_via_ape_by_role_container(
self,
frostfs_cli: FrostfsCli,
container_node_wallet: WalletInfo,
container: str,
objects: list[str],
rpc_endpoint: str,
file_path: TestFile,
):
access_matrix = {
ape.ObjectOperations.PUT: True,
ape.ObjectOperations.GET: True,
ape.ObjectOperations.HEAD: True,
ape.ObjectOperations.GET_RANGE: False,
ape.ObjectOperations.GET_RANGE_HASH: True,
ape.ObjectOperations.SEARCH: True,
ape.ObjectOperations.DELETE: False,
}
with reporter.step("Assert CONTAINER wallet access in default state"):
assert_access_to_container(access_matrix, container_node_wallet, container, objects[0], file_path, self.shell, self.cluster)
rule = ape.Rule(ape.Verb.DENY, ALL_OBJECT_OPERATIONS, ape.Condition.by_role(ape.Role.CONTAINER.value))
with reporter.step(f"Add APE rule with deny all operations for CONTAINER and IR roles"):
frostfs_cli.ape_manager.add(rpc_endpoint, rule.chain_id, target_name=container, target_type="container", rule=rule.as_string())
with reporter.step("Wait for one block"):
self.wait_for_blocks()
with reporter.step("Assert CONTAINER wallet ignores APE rule"):
assert_access_to_container(access_matrix, container_node_wallet, container, objects[0], file_path, self.shell, self.cluster)
with reporter.step("Remove APE rule"):
frostfs_cli.ape_manager.remove(rpc_endpoint, rule.chain_id, target_name=container, target_type="container")
with reporter.step("Wait for one block"):
self.wait_for_blocks()
with reporter.step("Assert CONTAINER wallet access after rule was removed"):
assert_access_to_container(access_matrix, container_node_wallet, container, objects[0], file_path, self.shell, self.cluster)

View file

@ -0,0 +1,398 @@
import allure
import pytest
from frostfs_testlib import reporter
from frostfs_testlib.cli.frostfs_cli.cli import FrostfsCli
from frostfs_testlib.resources.error_patterns import OBJECT_ACCESS_DENIED
from frostfs_testlib.steps.cli.object import get_object_from_random_node, head_object, put_object_to_random_node
from frostfs_testlib.storage.dataclasses import ape
from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
from frostfs_testlib.testing.cluster_test_base import ClusterTestBase
from frostfs_testlib.testing.test_control import expect_not_raises
from frostfs_testlib.utils.file_utils import TestFile
from ....helpers.bearer_token import create_bearer_token
from ....helpers.container_access import (
ALL_OBJECT_OPERATIONS,
FULL_ACCESS,
assert_access_to_container,
assert_full_access_to_container,
assert_no_access_to_container,
)
from ....helpers.container_spec import ContainerSpec
from ....helpers.object_access import OBJECT_ACCESS_DENIED
@pytest.mark.nightly
@pytest.mark.ape
class TestApeFilters(ClusterTestBase):
# SPEC: https://github.com/nspcc-dev/neofs-spec/blob/master/01-arch/07-acl.md
HEADER = {"check_key": "check_value"}
OTHER_HEADER = {"check_key": "other_value"}
ATTRIBUTES = {
"key_one": "check_value",
"x_key": "xvalue",
"check_key": "check_value",
}
OTHER_ATTRIBUTES = {
"key_one": "check_value",
"x_key": "other_value",
"check_key": "other_value",
}
OBJECT_COUNT = 5
RESOURCE_OPERATIONS = [
ape.ObjectOperations.GET,
ape.ObjectOperations.HEAD,
ape.ObjectOperations.PUT,
]
@pytest.fixture
def objects_with_attributes(self, default_wallet: WalletInfo, file_path: TestFile, container: str):
return [
put_object_to_random_node(
default_wallet, file_path, container, self.shell, self.cluster, attributes={**self.ATTRIBUTES, "key": val}
)
for val in range(self.OBJECT_COUNT)
]
@pytest.fixture
def objects_with_other_attributes(self, default_wallet: WalletInfo, file_path: TestFile, container: str):
return [
put_object_to_random_node(
default_wallet, file_path, container, self.shell, self.cluster, attributes={**self.OTHER_ATTRIBUTES, "key": val}
)
for val in range(self.OBJECT_COUNT)
]
@pytest.fixture
def objects_without_attributes(self, default_wallet: WalletInfo, file_path: TestFile, container: str):
return [put_object_to_random_node(default_wallet, file_path, container, self.shell, self.cluster) for _ in range(self.OBJECT_COUNT)]
@pytest.mark.sanity
@allure.title("Operations with request filter (match_type={match_type}, obj_size={object_size})")
@pytest.mark.parametrize("match_type", [ape.MatchType.EQUAL, ape.MatchType.NOT_EQUAL])
@pytest.mark.skip("https://git.frostfs.info/TrueCloudLab/frostfs-node/issues/1243")
def test_ape_filters_request(
self,
frostfs_cli: FrostfsCli,
temp_directory: str,
other_wallet: WalletInfo,
container: str,
objects_with_attributes: list[str],
objects_with_other_attributes: list[str],
objects_without_attributes: list[str],
match_type: ape.MatchType,
file_path: TestFile,
rpc_endpoint: str,
):
with reporter.step("Deny all operations for others via APE with request condition"):
request_condition = ape.Condition('"frostfs:xheader/check_key"', '"check_value"', ape.ConditionType.REQUEST, match_type)
role_condition = ape.Condition.by_role(ape.Role.OTHERS)
deny_rule = ape.Rule(ape.Verb.DENY, ALL_OBJECT_OPERATIONS, [request_condition, role_condition])
frostfs_cli.ape_manager.add(
rpc_endpoint, deny_rule.chain_id, target_name=container, target_type="container", rule=deny_rule.as_string()
)
with reporter.step("Wait for one block"):
self.wait_for_blocks()
with reporter.step("Create bearer token with everything allowed for others role"):
role_condition = ape.Condition.by_role(ape.Role.OTHERS)
rule = ape.Rule(ape.Verb.ALLOW, ALL_OBJECT_OPERATIONS, role_condition)
bearer = create_bearer_token(frostfs_cli, temp_directory, container, rule, rpc_endpoint)
# Filter denies requests where "check_key {match_type} ATTRIBUTE", so when match_type
# is STRING_EQUAL, then requests with "check_key=OTHER_ATTRIBUTE" will be allowed while
# requests with "check_key=ATTRIBUTE" will be denied, and vice versa
allow_headers = self.OTHER_HEADER if match_type == ape.MatchType.EQUAL else self.HEADER
deny_headers = self.HEADER if match_type == ape.MatchType.EQUAL else self.OTHER_HEADER
# We test on 3 groups of objects with various headers,
# but APE rule should ignore object headers and only work based on request headers
for oids in [objects_with_attributes, objects_with_other_attributes, objects_without_attributes]:
with reporter.step("Check others has full access when sending request without headers"):
assert_full_access_to_container(other_wallet, container, oids.pop(), file_path, self.shell, self.cluster)
with reporter.step("Check others has full access when sending request with allowed headers"):
assert_full_access_to_container(
other_wallet, container, oids.pop(), file_path, self.shell, self.cluster, xhdr=allow_headers
)
with reporter.step("Check others has no access when sending request with denied headers"):
assert_no_access_to_container(other_wallet, container, oids.pop(), file_path, self.shell, self.cluster, xhdr=deny_headers)
with reporter.step("Check others has full access when sending request with denied headers and using bearer token"):
assert_full_access_to_container(
other_wallet, container, oids.pop(), file_path, self.shell, self.cluster, bearer, deny_headers
)
@allure.title("Operations with deny user headers filter (match_type={match_type}, obj_size={object_size})")
@pytest.mark.parametrize("match_type", [ape.MatchType.EQUAL, ape.MatchType.NOT_EQUAL])
@pytest.mark.skip("https://git.frostfs.info/TrueCloudLab/frostfs-node/issues/1300")
def test_ape_deny_filters_object(
self,
frostfs_cli: FrostfsCli,
temp_directory: str,
other_wallet: WalletInfo,
container: str,
objects_with_attributes: list[str],
objects_with_other_attributes: list[str],
objects_without_attributes: list[str],
match_type: ape.MatchType,
rpc_endpoint: str,
file_path: TestFile,
):
allow_objects = objects_with_other_attributes if match_type == ape.MatchType.EQUAL else objects_with_attributes
deny_objects = objects_with_attributes if match_type == ape.MatchType.EQUAL else objects_with_other_attributes
        # When the object has no attribute, it is treated as "", and "" is not equal to "<some_value>",
        # so such objects behave the same as deny_objects
no_attributes_access = {
ape.MatchType.EQUAL: FULL_ACCESS,
ape.MatchType.NOT_EQUAL: {
ape.ObjectOperations.PUT: False,
ape.ObjectOperations.GET: False,
ape.ObjectOperations.HEAD: False,
ape.ObjectOperations.GET_RANGE: True,
ape.ObjectOperations.GET_RANGE_HASH: True,
ape.ObjectOperations.SEARCH: True,
ape.ObjectOperations.DELETE: False, # Denied by restricted PUT
},
}
allowed_access = {
ape.MatchType.EQUAL: FULL_ACCESS,
ape.MatchType.NOT_EQUAL: {
                ape.ObjectOperations.PUT: False,  # because this PUT is sent without attributes
ape.ObjectOperations.GET: True,
ape.ObjectOperations.HEAD: True,
ape.ObjectOperations.GET_RANGE: True,
ape.ObjectOperations.GET_RANGE_HASH: True,
ape.ObjectOperations.SEARCH: True,
ape.ObjectOperations.DELETE: False, # Because delete needs to put a tombstone without attributes
},
}
with reporter.step("Deny operations for others via APE with resource condition"):
resource_condition = ape.Condition('"check_key"', '"check_value"', ape.ConditionType.RESOURCE, match_type)
role_condition = ape.Condition.by_role(ape.Role.OTHERS)
deny_rule = ape.Rule(ape.Verb.DENY, self.RESOURCE_OPERATIONS, [resource_condition, role_condition])
frostfs_cli.ape_manager.add(
rpc_endpoint, deny_rule.chain_id, target_name=container, target_type="container", rule=deny_rule.as_string()
)
with reporter.step("Wait for one block"):
self.wait_for_blocks()
with reporter.step("Create bearer token with everything allowed for others role"):
role_condition = ape.Condition.by_role(ape.Role.OTHERS)
rule = ape.Rule(ape.Verb.ALLOW, ALL_OBJECT_OPERATIONS, role_condition)
bearer = create_bearer_token(frostfs_cli, temp_directory, container, rule, rpc_endpoint)
with reporter.step("Create bearer token with allowed put for others role"):
role_condition = ape.Condition.by_role(ape.Role.OTHERS)
rule = ape.Rule(ape.Verb.ALLOW, ape.ObjectOperations.PUT, role_condition)
bearer_put = create_bearer_token(frostfs_cli, temp_directory, container, rule, rpc_endpoint)
        # We will attempt requests with various headers,
        # but the APE rule should ignore request headers and validate only object headers
for xhdr in (self.HEADER, self.OTHER_HEADER, None):
with reporter.step("Check others access to objects without attributes"):
assert_access_to_container(
no_attributes_access[match_type],
other_wallet,
container,
objects_without_attributes.pop(),
file_path,
self.shell,
self.cluster,
xhdr=xhdr,
)
with reporter.step("Check others have full access to objects without deny attribute"):
assert_access_to_container(
allowed_access[match_type], other_wallet, container, allow_objects.pop(), file_path, self.shell, self.cluster, xhdr=xhdr
)
with reporter.step("Check others have no access to objects with deny attribute"):
with pytest.raises(Exception, match=OBJECT_ACCESS_DENIED):
head_object(other_wallet, container, deny_objects[0], self.shell, rpc_endpoint, xhdr=xhdr)
with pytest.raises(Exception, match=OBJECT_ACCESS_DENIED):
get_object_from_random_node(other_wallet, container, deny_objects[0], self.shell, self.cluster, xhdr=xhdr)
with reporter.step("Check others have access to objects with deny attribute and using bearer token"):
assert_full_access_to_container(
other_wallet, container, deny_objects.pop(), file_path, self.shell, self.cluster, bearer, xhdr
)
allow_attribute = self.OTHER_HEADER if match_type == ape.MatchType.EQUAL else self.HEADER
with reporter.step("Check others can PUT objects without denied attribute"):
put_object_to_random_node(other_wallet, file_path, container, self.shell, self.cluster, attributes=allow_attribute)
deny_attribute = self.HEADER if match_type == ape.MatchType.EQUAL else self.OTHER_HEADER
with reporter.step("Check others can not PUT objects with denied attribute"):
with pytest.raises(Exception, match=OBJECT_ACCESS_DENIED):
put_object_to_random_node(other_wallet, file_path, container, self.shell, self.cluster, attributes=deny_attribute)
with reporter.step("Check others can PUT objects with denied attribute and using bearer token"):
put_object_to_random_node(other_wallet, file_path, container, self.shell, self.cluster, bearer_put, attributes=deny_attribute)
@allure.title("Operations with allow APE rule with resource filters (match_type={match_type}, obj_size={object_size})")
@pytest.mark.parametrize("match_type", [ape.MatchType.EQUAL, ape.MatchType.NOT_EQUAL])
@pytest.mark.parametrize("object_size", ["simple"], indirect=True)
@pytest.mark.container(ContainerSpec(basic_acl="0", allow_owner_via_ape=True))
def test_ape_allow_filters_object(
self,
frostfs_cli: FrostfsCli,
other_wallet: WalletInfo,
container: str,
objects_with_attributes: list[str],
objects_with_other_attributes: list[str],
objects_without_attributes: list[str],
match_type: ape.MatchType,
rpc_endpoint: str,
file_path: TestFile,
temp_directory: str,
):
if match_type == ape.MatchType.EQUAL:
allow_objects = objects_with_attributes
deny_objects = objects_with_other_attributes
allow_attribute = self.HEADER
deny_attribute = self.OTHER_HEADER
no_attributes_match_context = pytest.raises(Exception, match=OBJECT_ACCESS_DENIED)
else:
allow_objects = objects_with_other_attributes
deny_objects = objects_with_attributes
allow_attribute = self.OTHER_HEADER
deny_attribute = self.HEADER
no_attributes_match_context = expect_not_raises()
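        # Rationale: under EQUAL only objects carrying check_key=check_value match the allow
        # rule, so attribute-less objects are denied; under NOT_EQUAL a missing attribute is
        # treated as "", which differs from "check_value", so such objects are allowed.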
with reporter.step("Allow operations for others except few operations by resource condition via APE"):
resource_condition = ape.Condition('"check_key"', '"check_value"', ape.ConditionType.RESOURCE, match_type)
role_condition = ape.Condition.by_role(ape.Role.OTHERS)
deny_rule = ape.Rule(ape.Verb.ALLOW, self.RESOURCE_OPERATIONS, [resource_condition, role_condition])
frostfs_cli.ape_manager.add(
rpc_endpoint, deny_rule.chain_id, target_name=container, target_type="container", rule=deny_rule.as_string()
)
with reporter.step("Wait for one block"):
self.wait_for_blocks()
with reporter.step("Check GET, PUT and HEAD operations with objects without attributes for OTHERS role"):
oid = objects_without_attributes.pop()
with no_attributes_match_context:
assert head_object(other_wallet, container, oid, self.shell, rpc_endpoint)
with no_attributes_match_context:
assert get_object_from_random_node(other_wallet, container, oid, self.shell, self.cluster)
with no_attributes_match_context:
assert put_object_to_random_node(other_wallet, file_path, container, self.shell, self.cluster)
with reporter.step("Create bearer token with everything allowed for others role"):
role_condition = ape.Condition.by_role(ape.Role.OTHERS)
rule = ape.Rule(ape.Verb.ALLOW, ALL_OBJECT_OPERATIONS, role_condition)
bearer = create_bearer_token(frostfs_cli, temp_directory, container, rule, rpc_endpoint)
with reporter.step("Check others can get and put objects without attributes and using bearer token"):
oid = objects_without_attributes[0]
with expect_not_raises():
head_object(other_wallet, container, oid, self.shell, rpc_endpoint, bearer)
with expect_not_raises():
get_object_from_random_node(other_wallet, container, oid, self.shell, self.cluster, bearer)
with expect_not_raises():
put_object_to_random_node(other_wallet, file_path, container, self.shell, self.cluster, bearer)
with reporter.step("Check others can get and put objects with attributes matching the filter"):
oid = allow_objects.pop()
with expect_not_raises():
head_object(other_wallet, container, oid, self.shell, rpc_endpoint)
with expect_not_raises():
get_object_from_random_node(other_wallet, container, oid, self.shell, self.cluster)
with expect_not_raises():
put_object_to_random_node(other_wallet, file_path, container, self.shell, self.cluster, attributes=allow_attribute)
with reporter.step("Check others cannot get and put objects without attributes matching the filter"):
oid = deny_objects[0]
with pytest.raises(Exception, match=OBJECT_ACCESS_DENIED):
head_object(other_wallet, container, oid, self.shell, rpc_endpoint)
with pytest.raises(Exception, match=OBJECT_ACCESS_DENIED):
assert get_object_from_random_node(other_wallet, container, oid, self.shell, self.cluster)
with pytest.raises(Exception, match=OBJECT_ACCESS_DENIED):
assert put_object_to_random_node(other_wallet, file_path, container, self.shell, self.cluster, attributes=deny_attribute)
with reporter.step("Check others can get and put objects without attributes matching the filter with bearer token"):
oid = deny_objects.pop()
with expect_not_raises():
head_object(other_wallet, container, oid, self.shell, rpc_endpoint, bearer)
with expect_not_raises():
get_object_from_random_node(other_wallet, container, oid, self.shell, self.cluster, bearer)
with expect_not_raises():
put_object_to_random_node(other_wallet, file_path, container, self.shell, self.cluster, bearer, attributes=allow_attribute)
@allure.title("PUT and GET object using bearer with objectID in filter (obj_size={object_size}, match_type=NOT_EQUAL)")
@pytest.mark.container(ContainerSpec(basic_acl="0", allow_owner_via_ape=True))
def test_ape_filter_object_id_not_equals(
self,
frostfs_cli: FrostfsCli,
default_wallet: WalletInfo,
other_wallet: WalletInfo,
container: str,
temp_directory: str,
file_path: TestFile,
):
with reporter.step("Put object to container"):
oid = put_object_to_random_node(default_wallet, file_path, container, self.shell, self.cluster)
with reporter.step("Create bearer token with objectID filter"):
role_condition = ape.Condition.by_role(ape.Role.OTHERS)
object_condition = ape.Condition.by_object_id(oid, ape.ConditionType.RESOURCE, ape.MatchType.NOT_EQUAL)
rule = ape.Rule(ape.Verb.ALLOW, ALL_OBJECT_OPERATIONS, [role_condition, object_condition])
bearer = create_bearer_token(frostfs_cli, temp_directory, container, rule, self.cluster.default_rpc_endpoint)
with reporter.step("Others should be able to put object using bearer token"):
with expect_not_raises():
put_object_to_random_node(other_wallet, file_path, container, self.shell, self.cluster, bearer)
with reporter.step("Others should not be able to get object matching the filter"):
with pytest.raises(Exception, match=OBJECT_ACCESS_DENIED):
get_object_from_random_node(other_wallet, container, oid, self.shell, self.cluster, bearer)
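    # With NOT_EQUAL the bearer token matches every object except `oid`, so putting a new
    # object succeeds while getting `oid` is denied; the EQUAL test below inverts both
    # expectations.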
@allure.title("PUT and GET object using bearer with objectID in filter (obj_size={object_size}, match_type=EQUAL)")
@pytest.mark.container(ContainerSpec(basic_acl="0", allow_owner_via_ape=True))
def test_ape_filter_object_id_equals(
self,
frostfs_cli: FrostfsCli,
default_wallet: WalletInfo,
other_wallet: WalletInfo,
container: str,
temp_directory: str,
file_path: TestFile,
):
with reporter.step("Put object to container"):
oid = put_object_to_random_node(default_wallet, file_path, container, self.shell, self.cluster)
with reporter.step("Create bearer token with objectID filter"):
role_condition = ape.Condition.by_role(ape.Role.OTHERS)
object_condition = ape.Condition.by_object_id(oid, ape.ConditionType.RESOURCE, ape.MatchType.EQUAL)
rule = ape.Rule(ape.Verb.ALLOW, ALL_OBJECT_OPERATIONS, [role_condition, object_condition])
bearer = create_bearer_token(frostfs_cli, temp_directory, container, rule, self.cluster.default_rpc_endpoint)
with reporter.step("Others should not be able to put object using bearer token"):
with pytest.raises(Exception, match=OBJECT_ACCESS_DENIED):
put_object_to_random_node(other_wallet, file_path, container, self.shell, self.cluster, bearer)
with reporter.step("Others should be able to get object matching the filter"):
with expect_not_raises():
get_object_from_random_node(other_wallet, container, oid, self.shell, self.cluster, bearer)


@@ -0,0 +1,194 @@
import allure
import pytest
from frostfs_testlib import reporter
from frostfs_testlib.cli.frostfs_cli.cli import FrostfsCli
from frostfs_testlib.storage.dataclasses import ape
from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
from frostfs_testlib.testing.cluster_test_base import ClusterTestBase
from frostfs_testlib.utils.file_utils import TestFile
from ....helpers.bearer_token import create_bearer_token
from ....helpers.container_access import (
ALL_OBJECT_OPERATIONS,
assert_access_to_container,
assert_full_access_to_container,
assert_no_access_to_container,
)
@pytest.mark.nightly
@pytest.mark.sanity
@pytest.mark.bearer
@pytest.mark.ape
class TestApeBearer(ClusterTestBase):
@allure.title("Operations with BearerToken (role={role}, obj_size={object_size})")
@pytest.mark.parametrize("role", [ape.Role.OWNER, ape.Role.OTHERS], indirect=True)
def test_bearer_token_operations(
self,
container: str,
objects: list[str],
frostfs_cli: FrostfsCli,
temp_directory: str,
test_wallet: WalletInfo,
role: ape.Role,
file_path: TestFile,
rpc_endpoint: str,
):
with reporter.step(f"Check {role} has full access to container without bearer token"):
assert_full_access_to_container(test_wallet, container, objects.pop(), file_path, self.shell, self.cluster)
with reporter.step(f"Deny all operations for everyone via APE"):
rule = ape.Rule(ape.Verb.DENY, ALL_OBJECT_OPERATIONS)
frostfs_cli.ape_manager.add(rpc_endpoint, rule.chain_id, target_name=container, target_type="container", rule=rule.as_string())
with reporter.step("Wait for one block"):
self.wait_for_blocks()
with reporter.step(f"Create bearer token with all operations allowed"):
bearer = create_bearer_token(
frostfs_cli,
temp_directory,
container,
rule=ape.Rule(ape.Verb.ALLOW, ALL_OBJECT_OPERATIONS),
endpoint=rpc_endpoint,
)
with reporter.step(f"Check {role} without token has no access to all operations with container"):
assert_no_access_to_container(test_wallet, container, objects.pop(), file_path, self.shell, self.cluster)
with reporter.step(f"Check {role} with token has access to all operations with container"):
assert_full_access_to_container(test_wallet, container, objects.pop(), file_path, self.shell, self.cluster, bearer)
with reporter.step(f"Remove deny rule from APE"):
frostfs_cli.ape_manager.remove(rpc_endpoint, rule.chain_id, target_name=container, target_type="container")
with reporter.step("Wait for one block"):
self.wait_for_blocks()
with reporter.step(f"Check {role} without token has access to all operations with container"):
assert_full_access_to_container(test_wallet, container, objects.pop(), file_path, self.shell, self.cluster)
@allure.title("BearerToken for compound operations (obj_size={object_size})")
def test_bearer_token_compound_operations(
self,
frostfs_cli: FrostfsCli,
temp_directory: str,
default_wallet: WalletInfo,
other_wallet: WalletInfo,
        container: str,
objects: list[str],
rpc_endpoint: str,
file_path: TestFile,
):
"""
Bearer Token COMPLETLY overrides chains set for the specific target.
Thus, any restictions or permissions should be explicitly defined in BT.
"""
wallets_map = {
ape.Role.OWNER: default_wallet,
ape.Role.OTHERS: other_wallet,
}
access_map = {
ape.Role.OWNER: {
ape.ObjectOperations.PUT: True,
ape.ObjectOperations.GET: True,
ape.ObjectOperations.HEAD: True,
ape.ObjectOperations.GET_RANGE: True,
ape.ObjectOperations.GET_RANGE_HASH: True,
ape.ObjectOperations.SEARCH: True,
ape.ObjectOperations.DELETE: False,
},
ape.Role.OTHERS: {
ape.ObjectOperations.PUT: True,
ape.ObjectOperations.GET: True,
ape.ObjectOperations.HEAD: True,
ape.ObjectOperations.GET_RANGE: False,
ape.ObjectOperations.GET_RANGE_HASH: False,
ape.ObjectOperations.SEARCH: False,
ape.ObjectOperations.DELETE: True,
},
}
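        # Without a bearer token, access follows the APE chains added below (see deny_map):
        # OWNER loses DELETE, OTHERS lose GET_RANGE, GET_RANGE_HASH and SEARCH.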
bt_access_map = {
ape.Role.OWNER: {
ape.ObjectOperations.PUT: True,
ape.ObjectOperations.GET: True,
ape.ObjectOperations.HEAD: True,
ape.ObjectOperations.GET_RANGE: True,
ape.ObjectOperations.GET_RANGE_HASH: True,
ape.ObjectOperations.SEARCH: True,
ape.ObjectOperations.DELETE: True,
},
ape.Role.OTHERS: {
ape.ObjectOperations.PUT: True,
ape.ObjectOperations.GET: False,
ape.ObjectOperations.HEAD: True,
ape.ObjectOperations.GET_RANGE: False,
ape.ObjectOperations.GET_RANGE_HASH: False,
                # Although SEARCH is denied by the APE chain defined in the Policy contract,
                # the Bearer Token COMPLETELY overrides chains set for the specific target.
                # Thus, any restrictions or permissions should be explicitly defined in BT.
ape.ObjectOperations.SEARCH: True,
ape.ObjectOperations.DELETE: True,
},
}
# Operations that we will deny for each role via APE
deny_map = {
ape.Role.OWNER: [ape.ObjectOperations.DELETE],
ape.Role.OTHERS: [
ape.ObjectOperations.SEARCH,
ape.ObjectOperations.GET_RANGE_HASH,
ape.ObjectOperations.GET_RANGE,
],
}
# Operations that we will allow for each role with bearer token
bearer_map = {
ape.Role.OWNER: [
ape.ObjectOperations.DELETE,
ape.ObjectOperations.PUT,
ape.ObjectOperations.GET_RANGE,
],
ape.Role.OTHERS: [
ape.ObjectOperations.GET,
ape.ObjectOperations.GET_RANGE,
ape.ObjectOperations.GET_RANGE_HASH,
],
}
conditions_map = {
ape.Role.OWNER: ape.Condition.by_role(ape.Role.OWNER),
ape.Role.OTHERS: ape.Condition.by_role(ape.Role.OTHERS),
}
verb_map = {ape.Role.OWNER: ape.Verb.ALLOW, ape.Role.OTHERS: ape.Verb.DENY}
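        # Design note: the OWNER token is built from ALLOW rules (re-granting what the APE
        # chain denied), while the OTHERS token is built from DENY rules; since a bearer
        # token completely replaces the chains for its target, every operation not denied
        # in the OTHERS token becomes allowed, which is what bt_access_map encodes.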
for role, operations in deny_map.items():
with reporter.step(f"Add APE deny rule for {role}"):
rule = ape.Rule(ape.Verb.DENY, operations, conditions_map[role])
frostfs_cli.ape_manager.add(
rpc_endpoint, rule.chain_id, target_name=container, target_type="container", rule=rule.as_string()
)
with reporter.step("Wait for one block"):
self.wait_for_blocks()
for role, wallet in wallets_map.items():
with reporter.step(f"Assert access to container without bearer token for {role}"):
assert_access_to_container(access_map[role], wallet, container, objects.pop(), file_path, self.shell, self.cluster)
bearer_tokens = {}
for role in wallets_map.keys():
with reporter.step(f"Create bearer token for {role}"):
rule = ape.Rule(verb_map[role], bearer_map[role], conditions_map[role])
bt = create_bearer_token(frostfs_cli, temp_directory, container, rule, rpc_endpoint)
bearer_tokens[role] = bt
for role, wallet in wallets_map.items():
with reporter.step(f"Assert access to container with bearer token for {role}"):
assert_access_to_container(
bt_access_map[role], wallet, container, objects.pop(), file_path, self.shell, self.cluster, bearer_tokens[role]
)


@@ -0,0 +1,155 @@
import json
import time
import allure
import pytest
from frostfs_testlib import reporter
from frostfs_testlib.cli.frostfs_cli.cli import FrostfsCli
from frostfs_testlib.resources.common import MORPH_BLOCK_TIME
from frostfs_testlib.resources.wellknown_acl import PUBLIC_ACL
from frostfs_testlib.shell import Shell
from frostfs_testlib.steps.cli.container import create_container, search_nodes_with_container
from frostfs_testlib.steps.cli.object import put_object_to_random_node
from frostfs_testlib.storage.cluster import Cluster, ClusterNode
from frostfs_testlib.storage.dataclasses import ape
from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
from frostfs_testlib.testing.parallel import parallel
from frostfs_testlib.utils import datetime_utils
from ...helpers.container_spec import ContainerSpec
OBJECT_COUNT = 5
@pytest.fixture(scope="session")
def ir_wallet(cluster: Cluster) -> WalletInfo:
return WalletInfo.from_node(cluster.ir_nodes[0])
@pytest.fixture(scope="session")
def storage_wallet(cluster: Cluster) -> WalletInfo:
return WalletInfo.from_node(cluster.storage_nodes[0])
@pytest.fixture(scope="session")
def role(request: pytest.FixtureRequest):
return request.param
@pytest.fixture(scope="session")
def test_wallet(default_wallet: WalletInfo, other_wallet: WalletInfo, role: ape.Role):
role_to_wallet_map = {
ape.Role.OWNER: default_wallet,
ape.Role.OTHERS: other_wallet,
}
    assert role in role_to_wallet_map, f"Missing wallet with role {role}"
return role_to_wallet_map[role]
@pytest.fixture
def container(
default_wallet: WalletInfo,
frostfs_cli: FrostfsCli,
client_shell: Shell,
cluster: Cluster,
request: pytest.FixtureRequest,
rpc_endpoint: str,
) -> str:
container_spec = _get_container_spec(request)
cid = _create_container_by_spec(default_wallet, client_shell, cluster, rpc_endpoint, container_spec)
if container_spec.allow_owner_via_ape:
_allow_owner_via_ape(frostfs_cli, cluster, cid)
return cid
def _create_container_by_spec(
default_wallet: WalletInfo, client_shell: Shell, cluster: Cluster, rpc_endpoint: str, container_spec: ContainerSpec
) -> str:
# TODO: add container spec to step message
with reporter.step("Create container"):
cid = create_container(
default_wallet, client_shell, rpc_endpoint, basic_acl=container_spec.basic_acl, rule=container_spec.parsed_rule(cluster)
)
with reporter.step("Search nodes holding the container"):
container_holder_nodes = search_nodes_with_container(default_wallet, cid, client_shell, cluster.default_rpc_endpoint, cluster)
report_data = {node.id: node.host_ip for node in container_holder_nodes}
reporter.attach(json.dumps(report_data, indent=2), "container_nodes.json")
return cid
def _get_container_spec(request: pytest.FixtureRequest) -> ContainerSpec:
container_marker = request.node.get_closest_marker("container")
    # Let the default container be public for now
container_spec = ContainerSpec(basic_acl=PUBLIC_ACL)
if container_marker:
if len(container_marker.args) != 1:
raise RuntimeError(f"Something wrong with container marker: {container_marker}")
container_spec = container_marker.args[0]
if "param" in request.__dict__:
container_spec = request.param
if not container_spec:
raise RuntimeError(
f"""Container specification is empty.
Either add @pytest.mark.container(ContainerSpec(...)) or
@pytest.mark.parametrize(\"container\", [ContainerSpec(...)], indirect=True) decorator"""
)
return container_spec
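# Illustrative usage (the test name is hypothetical; the marker form matches the APE tests above):
# @pytest.mark.container(ContainerSpec(basic_acl="0", allow_owner_via_ape=True))
# def test_example(self, container: str): ...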
def _allow_owner_via_ape(frostfs_cli: FrostfsCli, cluster: Cluster, container: str):
with reporter.step("Create allow APE rule for container owner"):
role_condition = ape.Condition.by_role(ape.Role.OWNER)
        allow_rule = ape.Rule(ape.Verb.ALLOW, ape.ObjectOperations.WILDCARD_ALL, role_condition)
        frostfs_cli.ape_manager.add(
            cluster.default_rpc_endpoint,
            allow_rule.chain_id,
            target_name=container,
            target_type="container",
            rule=allow_rule.as_string(),
)
with reporter.step("Wait for one block"):
time.sleep(datetime_utils.parse_time(MORPH_BLOCK_TIME))
@pytest.fixture
def objects(container: str, default_wallet: WalletInfo, client_shell: Shell, cluster: Cluster, file_path: str):
with reporter.step("Add test objects to container"):
put_results = parallel(
[put_object_to_random_node] * OBJECT_COUNT,
wallet=default_wallet,
path=file_path,
cid=container,
shell=client_shell,
cluster=cluster,
)
objects_oids = [put_result.result() for put_result in put_results]
return objects_oids
@pytest.fixture
def container_nodes(default_wallet: WalletInfo, container: str, client_shell: Shell, cluster: Cluster) -> list[ClusterNode]:
cid = container
container_holder_nodes = search_nodes_with_container(default_wallet, cid, client_shell, cluster.default_rpc_endpoint, cluster)
report_data = {node.id: node.host_ip for node in container_holder_nodes}
reporter.attach(json.dumps(report_data, indent=2), "container_nodes.json")
return container_holder_nodes
@pytest.fixture
def container_node_wallet(container_nodes: list[ClusterNode]) -> WalletInfo:
return WalletInfo.from_node(container_nodes[0].storage_node)

File diff suppressed because it is too large


@@ -1,62 +1,471 @@
import logging
import os
import random
from datetime import datetime, timedelta, timezone
from typing import Optional
import allure
import pytest
from dateutil import parser
from frostfs_testlib import plugins, reporter
from frostfs_testlib.cli import FrostfsCli
from frostfs_testlib.credentials.interfaces import CredentialsProvider, User
from frostfs_testlib.healthcheck.interfaces import Healthcheck
from frostfs_testlib.hosting import Hosting
from frostfs_testlib.resources import optionals
from frostfs_testlib.resources.common import COMPLEX_OBJECT_CHUNKS_COUNT, COMPLEX_OBJECT_TAIL_SIZE, SIMPLE_OBJECT_SIZE
from frostfs_testlib.s3 import AwsCliClient, Boto3ClientWrapper, S3ClientWrapper, VersioningStatus
from frostfs_testlib.s3.interfaces import BucketContainerResolver
from frostfs_testlib.shell import LocalShell, Shell
from frostfs_testlib.steps.cli.container import DEFAULT_EC_PLACEMENT_RULE, DEFAULT_PLACEMENT_RULE, FROSTFS_CLI_EXEC
from frostfs_testlib.steps.cli.object import get_netmap_netinfo
from frostfs_testlib.steps.s3 import s3_helper
from frostfs_testlib.storage.cluster import Cluster, ClusterNode
from frostfs_testlib.storage.controllers.cluster_state_controller import ClusterStateController
from frostfs_testlib.storage.dataclasses.frostfs_services import StorageNode
from frostfs_testlib.storage.dataclasses.object_size import ObjectSize
from frostfs_testlib.storage.dataclasses.policy import PlacementPolicy
from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
from frostfs_testlib.storage.grpc_operations.client_wrappers import CliClientWrapper
from frostfs_testlib.storage.grpc_operations.interfaces import GrpcClientWrapper
from frostfs_testlib.testing.cluster_test_base import ClusterTestBase
from frostfs_testlib.testing.parallel import parallel
from frostfs_testlib.testing.test_control import run_optionally, wait_for_success
from frostfs_testlib.utils import env_utils, string_utils, version_utils
from frostfs_testlib.utils.file_utils import TestFile, generate_file
from ..resources.common import TEST_CYCLES_COUNT
logger = logging.getLogger("NeoLogger")
SERVICE_ACTIVE_TIME = 20
WALLETS_IN_POOL = 2
# Add the logs check test even if it does not fit the mark selectors
def pytest_configure(config: pytest.Config):
    markers = config.option.markexpr
    if markers != "" and "sanity" not in markers:
        config.option.markexpr = f"logs_after_session or ({markers})"
number_key = pytest.StashKey[str]()
start_time = pytest.StashKey[int]()
test_outcome = pytest.StashKey[str]()
# pytest hook. Do not rename
def pytest_collection_modifyitems(items: list[pytest.Item]):
    # Change order of tests based on @pytest.mark.order(<int>) marker
    def order(item: pytest.Item) -> int:
        order_marker = item.get_closest_marker("order")
        if order_marker and (len(order_marker.args) != 1 or not isinstance(order_marker.args[0], int)):
            raise RuntimeError("Incorrect usage of pytest.mark.order")
        order_value = order_marker.args[0] if order_marker else 0
        return order_value
    items.sort(key=lambda item: order(item))
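# For example, @pytest.mark.order(20) (as used by TestTime in this suite) runs a test
# after all unmarked tests, which default to order 0.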
# pytest hook. Do not rename
def pytest_collection_finish(session: pytest.Session):
items_total = len(session.items)
for number, item in enumerate(session.items, 1):
item.stash[number_key] = f"[{number}/{items_total}]"
item.stash[test_outcome] = ""
item.stash[start_time] = 0
# pytest hook. Do not rename
def pytest_runtest_setup(item: pytest.Item):
item.stash[start_time] = int(datetime.now().timestamp())
logger.info(f"STARTED {item.stash[number_key]}: {item.name}")
# pytest hook. Do not rename
def pytest_runtest_makereport(item: pytest.Item, call: pytest.CallInfo):
if call.excinfo is not None:
if call.excinfo.typename == "Skipped":
item.stash[start_time] = int(datetime.now().timestamp())
item.stash[test_outcome] += f"SKIPPED on {call.when}; "
else:
item.stash[test_outcome] += f"FAILED on {call.when}; "
if call.when == "teardown":
duration = int(datetime.now().timestamp()) - item.stash[start_time]
if not item.stash[test_outcome]:
outcome = "PASSED "
else:
outcome = item.stash[test_outcome]
logger.info(f"ENDED {item.stash[number_key]}: {item.name}: {outcome}(duration={duration}s)")
# pytest hook. Do not rename
def pytest_generate_tests(metafunc: pytest.Metafunc):
if (
TEST_CYCLES_COUNT <= 1
or metafunc.definition.get_closest_marker("logs_after_session")
or metafunc.definition.get_closest_marker("no_cycles")
):
return
metafunc.fixturenames.append("cycle")
metafunc.parametrize("cycle", range(1, TEST_CYCLES_COUNT + 1), ids=[f"cycle {cycle}" for cycle in range(1, TEST_CYCLES_COUNT + 1)])
@pytest.fixture(scope="session")
def client_shell(configure_testlib) -> Shell:
yield LocalShell()
@pytest.fixture(scope="session")
def require_multiple_hosts(hosting: Hosting):
"""Designates tests that require environment with multiple hosts.
These tests will be skipped on an environment that has only 1 host.
"""
if len(hosting.hosts) <= 1:
pytest.skip("Test only works with multiple hosts")
yield
@pytest.fixture(scope="session")
def require_multiple_interfaces(cluster: Cluster):
"""
    Ensures that the environment has the required network interfaces for these tests.
    If the required interfaces are missing, the tests will be skipped.
"""
interfaces = cluster.cluster_nodes[0].host.config.interfaces
if "internal1" not in interfaces or "data1" not in interfaces:
pytest.skip("This test requires multiple internal and data interfaces")
yield
@pytest.fixture(scope="session")
def max_object_size(cluster: Cluster, client_shell: Shell) -> int:
storage_node = cluster.storage_nodes[0]
wallet = WalletInfo.from_node(storage_node)
net_info = get_netmap_netinfo(wallet=wallet, endpoint=storage_node.get_rpc_endpoint(), shell=client_shell)
yield net_info["maximum_object_size"]
@pytest.fixture(scope="session")
def simple_object_size(max_object_size: int) -> ObjectSize:
size = min(int(SIMPLE_OBJECT_SIZE), max_object_size)
return ObjectSize("simple", size)
@pytest.fixture()
def file_path(object_size: ObjectSize) -> TestFile:
return generate_file(object_size.value)
@pytest.fixture(scope="session")
def complex_object_size(max_object_size: int) -> ObjectSize:
size = max_object_size * int(COMPLEX_OBJECT_CHUNKS_COUNT) + int(COMPLEX_OBJECT_TAIL_SIZE)
return ObjectSize("complex", size)
# By default we want all tests to be executed with both object sizes
# This can be overridden in chosen tests if needed
@pytest.fixture(
scope="session", params=[pytest.param("simple", marks=pytest.mark.simple), pytest.param("complex", marks=pytest.mark.complex)]
)
def object_size(simple_object_size: ObjectSize, complex_object_size: ObjectSize, request: pytest.FixtureRequest) -> ObjectSize:
if request.param == "simple":
return simple_object_size
return complex_object_size
@pytest.fixture(scope="session")
def rep_placement_policy() -> PlacementPolicy:
return PlacementPolicy("rep", DEFAULT_PLACEMENT_RULE)
@pytest.fixture(scope="session")
def ec_placement_policy() -> PlacementPolicy:
return PlacementPolicy("ec", DEFAULT_EC_PLACEMENT_RULE)
@pytest.fixture(scope="session")
@allure.title("Init Frostfs CLI")
def frostfs_cli(client_shell: Shell, default_wallet: WalletInfo) -> FrostfsCli:
return FrostfsCli(client_shell, FROSTFS_CLI_EXEC, default_wallet.config_path)
@pytest.fixture(scope="session")
@allure.title("Init GrpcClientWrapper with local Frostfs CLI")
def grpc_client(frostfs_cli: FrostfsCli) -> GrpcClientWrapper:
return CliClientWrapper(frostfs_cli)
# By default we want all tests to be executed with both storage policies.
# This can be overridden in chosen tests if needed.
@pytest.fixture(scope="session", params=[pytest.param("rep", marks=pytest.mark.rep), pytest.param("ec", marks=pytest.mark.ec)])
def placement_policy(
rep_placement_policy: PlacementPolicy, ec_placement_policy: PlacementPolicy, request: pytest.FixtureRequest
) -> PlacementPolicy:
if request.param == "rep":
return rep_placement_policy
return ec_placement_policy
@pytest.fixture(scope="session")
def cluster(temp_directory: str, hosting: Hosting, client_shell: Shell) -> Cluster:
cluster = Cluster(hosting)
if cluster.is_local_devenv():
cluster.create_wallet_configs(hosting)
ClusterTestBase.shell = client_shell
ClusterTestBase.cluster = cluster
yield cluster
@allure.title("[Session]: Provide S3 policy")
@pytest.fixture(scope="session")
def s3_policy(request: pytest.FixtureRequest):
policy = None
if "param" in request.__dict__:
policy = request.param
return policy
@pytest.fixture(scope="session")
@allure.title("[Session] Create healthcheck object")
def healthcheck(cluster: Cluster) -> Healthcheck:
healthcheck_cls = plugins.load_plugin("frostfs.testlib.healthcheck", cluster.cluster_nodes[0].host.config.healthcheck_plugin_name)
return healthcheck_cls()
@pytest.fixture(scope="session")
def cluster_state_controller_session(client_shell: Shell, cluster: Cluster, healthcheck: Healthcheck) -> ClusterStateController:
controller = ClusterStateController(client_shell, cluster, healthcheck)
return controller
@pytest.fixture
def cluster_state_controller(cluster_state_controller_session: ClusterStateController) -> ClusterStateController:
yield cluster_state_controller_session
cluster_state_controller_session.start_stopped_hosts()
cluster_state_controller_session.start_all_stopped_services()
@pytest.fixture(scope="session")
def credentials_provider(cluster: Cluster) -> CredentialsProvider:
return CredentialsProvider(cluster)
@allure.title("[Session]: Create S3 client")
@pytest.fixture(
scope="session",
params=[
pytest.param(AwsCliClient, marks=[pytest.mark.aws, pytest.mark.weekly]),
pytest.param(Boto3ClientWrapper, marks=[pytest.mark.boto3, pytest.mark.nightly]),
],
)
def s3_client(
default_user: User,
s3_policy: Optional[str],
cluster: Cluster,
request: pytest.FixtureRequest,
credentials_provider: CredentialsProvider,
) -> S3ClientWrapper:
node = cluster.cluster_nodes[0]
credentials_provider.S3.provide(default_user, node, s3_policy)
s3_client_cls = request.param
client = s3_client_cls(default_user.s3_credentials.access_key, default_user.s3_credentials.secret_key, cluster.default_s3_gate_endpoint)
return client
@pytest.fixture
def versioning_status(request: pytest.FixtureRequest) -> VersioningStatus:
if "param" in request.__dict__:
return request.param
return VersioningStatus.UNDEFINED
@allure.title("[Session] Bulk create buckets for tests")
@pytest.fixture(scope="session")
def buckets_pool(s3_client: S3ClientWrapper, request: pytest.FixtureRequest):
test_buckets: list = []
s3_client_type = type(s3_client).__name__
for test in request.session.items:
if s3_client_type not in test.name:
continue
if "bucket" in test.fixturenames:
test_buckets.append(string_utils.unique_name("bucket-"))
if "two_buckets" in test.fixturenames:
test_buckets.append(string_utils.unique_name("bucket-"))
test_buckets.append(string_utils.unique_name("bucket-"))
if test_buckets:
parallel(s3_client.create_bucket, test_buckets)
return test_buckets
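# Design note: buckets are pre-created in bulk (and in parallel) once per session, so
# individual tests can pop a ready-made name from the pool instead of paying the
# bucket-creation cost inside each test body.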
@allure.title("[Test] Create bucket")
@pytest.fixture
def bucket(buckets_pool: list[str], s3_client: S3ClientWrapper, versioning_status: VersioningStatus):
if buckets_pool:
bucket_name = buckets_pool.pop()
else:
bucket_name = s3_client.create_bucket()
if versioning_status:
s3_helper.set_bucket_versioning(s3_client, bucket_name, versioning_status)
return bucket_name
@allure.title("[Test] Create two buckets")
@pytest.fixture
def two_buckets(buckets_pool: list[str], s3_client: S3ClientWrapper) -> list[str]:
buckets: list[str] = []
for _ in range(2):
if buckets_pool:
buckets.append(buckets_pool.pop())
else:
buckets.append(s3_client.create_bucket())
return buckets
@allure.title("[Autouse/Session] Collect binary versions")
@pytest.fixture(scope="session", autouse=True)
@run_optionally(optionals.OPTIONAL_AUTOUSE_FIXTURES_ENABLED)
def collect_binary_versions(hosting: Hosting, client_shell: Shell, request: pytest.FixtureRequest):
environment_dir = request.config.getoption("--alluredir")
if not environment_dir:
return None
local_versions = version_utils.get_local_binaries_versions(client_shell)
remote_versions = version_utils.get_remote_binaries_versions(hosting)
remote_versions_keys = list(remote_versions.keys())
all_versions = {
**local_versions,
**{
f"{name}_{remote_versions_keys.index(host) + 1:02d}": version
for host, versions in remote_versions.items()
for name, version in versions.items()
},
}
file_path = f"{environment_dir}/environment.properties"
env_utils.save_env_properties(file_path, all_versions)
@reporter.step("[Autouse/Session] Test session start time")
@pytest.fixture(scope="session", autouse=True)
def session_start_time(configure_testlib):
start_time = datetime.utcnow()
return start_time
@allure.title("[Autouse/Session] After deploy healthcheck")
@pytest.fixture(scope="session", autouse=True)
@run_optionally(optionals.OPTIONAL_AUTOUSE_FIXTURES_ENABLED)
def after_deploy_healthcheck(cluster: Cluster):
with reporter.step("Wait for cluster readiness after deploy"):
parallel(readiness_on_node, cluster.cluster_nodes)
@pytest.fixture(scope="session")
def rpc_endpoint(cluster: Cluster):
return cluster.default_rpc_endpoint
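# The decorator below retries readiness_on_node every 60 seconds for up to
# 60 * SERVICE_ACTIVE_TIME * 3 seconds, i.e. one hour with SERVICE_ACTIVE_TIME = 20.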
@wait_for_success(60 * SERVICE_ACTIVE_TIME * 3, 60, title="Wait for {cluster_node} readiness")
def readiness_on_node(cluster_node: ClusterNode):
if "skip_readiness_check" in cluster_node.host.config.attributes and cluster_node.host.config.attributes["skip_readiness_check"]:
return
    # TODO: Move to healthcheck classes
svc_name = cluster_node.service(StorageNode).get_service_systemctl_name()
with reporter.step(f"Check service {svc_name} is active"):
result = cluster_node.host.get_shell().exec(f"systemctl is-active {svc_name}")
assert "active" == result.stdout.strip(), f"Service {svc_name} should be in active state"
with reporter.step(f"Check service {svc_name} is active more than {SERVICE_ACTIVE_TIME} minutes"):
result = cluster_node.host.get_shell().exec(f"systemctl show {svc_name} --property ActiveEnterTimestamp | cut -d '=' -f 2")
start_time = parser.parse(result.stdout.strip())
current_time = datetime.now(tz=timezone.utc)
active_time = current_time - start_time
active_minutes = active_time.seconds // 60
active_seconds = active_time.seconds - active_minutes * 60
assert active_time > timedelta(
minutes=SERVICE_ACTIVE_TIME
), f"Service should be in active state more than {SERVICE_ACTIVE_TIME} minutes, current {active_minutes}m:{active_seconds}s"
@reporter.step("Prepare default user with wallet")
@pytest.fixture(scope="session")
def default_user(credentials_provider: CredentialsProvider, cluster: Cluster) -> User:
user = User(string_utils.unique_name("user-"))
node = cluster.cluster_nodes[0]
credentials_provider.GRPC.provide(user, node)
return user
@reporter.step("Get wallet for default user")
@pytest.fixture(scope="session")
def default_wallet(default_user: User) -> WalletInfo:
return default_user.wallet
@pytest.fixture(scope="session")
def wallets_pool(credentials_provider: CredentialsProvider, cluster: Cluster) -> list[WalletInfo]:
    users = [User(string_utils.unique_name("user-")) for _ in range(WALLETS_IN_POOL)]
parallel(credentials_provider.GRPC.provide, users, cluster_node=cluster.cluster_nodes[0])
return [user.wallet for user in users]
@pytest.fixture(scope="session")
def other_wallet(wallets_pool: list[WalletInfo]) -> WalletInfo:
if not wallets_pool:
raise RuntimeError("[other_wallet] No wallets in pool. Consider increasing WALLTETS_IN_POOL or review.")
return wallets_pool.pop()
@pytest.fixture(scope="session")
def other_wallet_2(wallets_pool: list[WalletInfo]) -> WalletInfo:
if not wallets_pool:
raise RuntimeError("[other_wallet2] No wallets in pool. Consider increasing WALLTETS_IN_POOL or review.")
return wallets_pool.pop()
@pytest.fixture()
@allure.title("Select random node for testing")
def node_under_test(cluster: Cluster) -> ClusterNode:
selected_node = random.choice(cluster.cluster_nodes)
reporter.attach(f"{selected_node}", "Selected node")
return selected_node
@allure.title("Init bucket container resolver")
@pytest.fixture()
def bucket_container_resolver(node_under_test: ClusterNode) -> BucketContainerResolver:
resolver_cls = plugins.load_plugin("frostfs.testlib.bucket_cid_resolver", node_under_test.host.config.product)
resolver: BucketContainerResolver = resolver_cls()
return resolver


@@ -0,0 +1,114 @@
import allure
import pytest
from frostfs_testlib import reporter
from frostfs_testlib.resources.wellknown_acl import PRIVATE_ACL_F
from frostfs_testlib.steps.cli.container import (
create_container,
delete_container,
get_container,
list_containers,
wait_for_container_creation,
wait_for_container_deletion,
)
from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
from frostfs_testlib.testing.cluster_test_base import ClusterTestBase
from ...helpers.utility import placement_policy_from_container
@pytest.mark.nightly
@pytest.mark.sanity
@pytest.mark.container
class TestContainer(ClusterTestBase):
@allure.title("Create container (name={name})")
@pytest.mark.parametrize("name", ["", "test-container"], ids=["No name", "Set particular name"])
@pytest.mark.smoke
def test_container_creation(self, default_wallet: WalletInfo, name: str):
wallet = default_wallet
placement_rule = "REP 2 IN X CBF 1 SELECT 2 FROM * AS X"
cid = create_container(
wallet,
rule=placement_rule,
name=name,
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
)
containers = list_containers(wallet, shell=self.shell, endpoint=self.cluster.default_rpc_endpoint)
assert cid in containers, f"Expected container {cid} in containers: {containers}"
container_info: str = get_container(
wallet,
cid,
json_mode=False,
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
)
container_info = container_info.casefold() # To ignore case when comparing with expected values
info_to_check = {
f"basic ACL: {PRIVATE_ACL_F} (private)",
f"owner ID: {wallet.get_address_from_json(0)}",
f"CID: {cid}",
}
if name:
info_to_check.add(f"Name={name}")
with reporter.step("Check container has correct information"):
expected_policy = placement_rule.casefold()
actual_policy = placement_policy_from_container(container_info)
assert actual_policy == expected_policy, f"Expected policy\n{expected_policy} but got policy\n{actual_policy}"
for info in info_to_check:
expected_info = info.casefold()
assert expected_info in container_info, f"Expected {expected_info} in container info:\n{container_info}"
with reporter.step("Delete container and check it was deleted"):
delete_container(
wallet,
cid,
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
await_mode=True,
)
self.tick_epoch()
wait_for_container_deletion(wallet, cid, shell=self.shell, endpoint=self.cluster.default_rpc_endpoint)
@allure.title("Parallel container creation and deletion")
def test_container_creation_deletion_parallel(self, default_wallet: WalletInfo):
containers_count = 3
wallet = default_wallet
placement_rule = "REP 2 IN X CBF 1 SELECT 2 FROM * AS X"
iteration_count = 10
        for _ in range(iteration_count):
cids: list[str] = []
with reporter.step(f"Create {containers_count} containers"):
for _ in range(containers_count):
cids.append(
create_container(
wallet,
rule=placement_rule,
await_mode=False,
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
wait_for_creation=False,
)
)
with reporter.step("Wait for containers occur in container list"):
for cid in cids:
wait_for_container_creation(
wallet,
cid,
sleep_interval=containers_count,
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
)
with reporter.step("Delete containers and check they were deleted"):
for cid in cids:
delete_container(wallet, cid, shell=self.shell, endpoint=self.cluster.default_rpc_endpoint, await_mode=True)
containers_list = list_containers(wallet, shell=self.shell, endpoint=self.cluster.default_rpc_endpoint)
assert cid not in containers_list, "Container not deleted"

File diff suppressed because it is too large


@@ -0,0 +1,30 @@
import os
from datetime import datetime
import allure
import pytest
from frostfs_testlib.storage.cluster import ClusterNode
from frostfs_testlib.storage.controllers import ShardsWatcher
from frostfs_testlib.storage.dataclasses.object_size import ObjectSize
from frostfs_testlib.utils.file_utils import TestFile, generate_file
@pytest.fixture()
@allure.title("Provide Shards watcher")
def shards_watcher(node_under_test: ClusterNode) -> ShardsWatcher:
watcher = ShardsWatcher(node_under_test)
return watcher
@pytest.fixture()
@allure.title("Test start time")
def test_start_time() -> datetime:
start_time = datetime.utcnow()
return start_time
@pytest.fixture()
@allure.title("Generate simple size file")
def simple_file(simple_object_size: ObjectSize) -> TestFile:
path_file = generate_file(size=simple_object_size.value)
return path_file


@@ -0,0 +1,79 @@
import datetime
from time import sleep
import allure
import pytest
from frostfs_testlib import reporter
from frostfs_testlib.resources.common import MORPH_BLOCK_TIME
from frostfs_testlib.steps.cli.object import neo_go_query_height
from frostfs_testlib.storage.controllers import ClusterStateController
from frostfs_testlib.testing.cluster_test_base import ClusterTestBase
from frostfs_testlib.utils import datetime_utils
@pytest.mark.order(20)
@pytest.mark.failover
class TestTime(ClusterTestBase):
@reporter.step("Neo-go should continue to release blocks")
def check_nodes_block(self, cluster_state_controller: ClusterStateController):
count_blocks = {}
with reporter.step("Get current block id"):
for cluster_node in self.cluster.cluster_nodes:
cluster_state_controller.get_node_date(cluster_node)
count_blocks[cluster_node] = neo_go_query_height(
shell=cluster_node.host.get_shell(), endpoint=cluster_node.morph_chain.get_http_endpoint()
)["Latest block"]
with reporter.step("Wait for 3 blocks"):
sleep(datetime_utils.parse_time(MORPH_BLOCK_TIME) * 3)
with reporter.step("Current block id should be higher than before"):
for cluster_node in self.cluster.cluster_nodes:
shell = cluster_node.host.get_shell()
now_block = neo_go_query_height(shell=shell, endpoint=cluster_node.morph_chain.get_http_endpoint())[
"Latest block"
]
assert count_blocks[cluster_node] < now_block
@pytest.fixture()
def node_time_synchronizer(self, cluster_state_controller: ClusterStateController) -> None:
cluster_state_controller.set_sync_date_all_nodes(status="inactive")
yield
cluster_state_controller.set_sync_date_all_nodes(status="active")
@allure.title("Changing hardware and system time")
def test_system_time(self, cluster_state_controller: ClusterStateController, node_time_synchronizer: None):
cluster_nodes = self.cluster.cluster_nodes
timezone_utc = datetime.timezone.utc
node_1, node_2, node_3 = cluster_nodes[0:3]
with reporter.step("On node 1, move the system time forward by 5 days"):
cluster_state_controller.change_node_date(
node_1, (datetime.datetime.now(timezone_utc) + datetime.timedelta(days=5))
)
self.check_nodes_block(cluster_state_controller)
with reporter.step("On node 2, move the system time back 5 days."):
cluster_state_controller.change_node_date(
node_2, (datetime.datetime.now(timezone_utc) - datetime.timedelta(days=5))
)
self.check_nodes_block(cluster_state_controller)
with reporter.step("On node 3, move the system time forward by 10 days"):
cluster_state_controller.change_node_date(
node_3, (datetime.datetime.now(timezone_utc) + datetime.timedelta(days=10))
)
self.check_nodes_block(cluster_state_controller)
with reporter.step("Return the time on all nodes to the current one"):
for cluster_node in self.cluster.cluster_nodes:
cluster_state_controller.restore_node_date(cluster_node)
self.check_nodes_block(cluster_state_controller)
with reporter.step("Reboot all nodes"):
cluster_state_controller.shutdown_cluster(mode="soft")
cluster_state_controller.start_stopped_hosts()
self.check_nodes_block(cluster_state_controller)


@@ -0,0 +1,264 @@
import itertools
import logging
import os
import random
import allure
import pytest
from frostfs_testlib import reporter
from frostfs_testlib.resources.wellknown_acl import PUBLIC_ACL
from frostfs_testlib.steps.cli.container import StorageContainer, StorageContainerInfo, create_container
from frostfs_testlib.steps.cli.object import get_object, get_object_nodes, put_object
from frostfs_testlib.steps.node_management import check_node_in_map, check_node_not_in_map
from frostfs_testlib.storage.cluster import ClusterNode, StorageNode
from frostfs_testlib.storage.controllers import ClusterStateController
from frostfs_testlib.storage.dataclasses.object_size import ObjectSize
from frostfs_testlib.storage.dataclasses.storage_object_info import StorageObjectInfo
from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
from frostfs_testlib.testing.cluster_test_base import ClusterTestBase
from frostfs_testlib.testing.parallel import parallel, parallel_workers_limit
from frostfs_testlib.testing.test_control import wait_for_success
from frostfs_testlib.utils.file_utils import get_file_hash
from pytest import FixtureRequest
logger = logging.getLogger("NeoLogger")
@pytest.mark.failover
@pytest.mark.failover_server
class TestFailoverServer(ClusterTestBase):
@wait_for_success(max_wait_time=120, interval=1)
def wait_node_not_in_map(self, *args, **kwargs):
check_node_not_in_map(*args, **kwargs)
@wait_for_success(max_wait_time=120, interval=1)
def wait_node_in_map(self, *args, **kwargs):
check_node_in_map(*args, **kwargs)
@allure.title("[Test] Create containers")
@pytest.fixture
def containers(
self,
request: FixtureRequest,
default_wallet: WalletInfo,
) -> list[StorageContainer]:
placement_rule = "REP 2 CBF 2 SELECT 2 FROM *"
containers_count = request.param
results = parallel(
[create_container for _ in range(containers_count)],
wallet=default_wallet,
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
rule=placement_rule,
basic_acl=PUBLIC_ACL,
)
containers = [
StorageContainer(StorageContainerInfo(result.result(), default_wallet), self.shell, self.cluster) for result in results
]
return containers
@allure.title("[Test] Create container")
@pytest.fixture()
def container(self, default_wallet: WalletInfo) -> StorageContainer:
select = len(self.cluster.cluster_nodes)
placement_rule = f"REP {select - 1} CBF 1 SELECT {select} FROM *"
cont_id = create_container(
default_wallet,
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
rule=placement_rule,
basic_acl=PUBLIC_ACL,
)
storage_cont_info = StorageContainerInfo(cont_id, default_wallet)
return StorageContainer(storage_cont_info, self.shell, self.cluster)
@allure.title("[Class] Create objects")
@pytest.fixture(scope="class")
def storage_objects(
self,
request: FixtureRequest,
containers: list[StorageContainer],
simple_object_size: ObjectSize,
complex_object_size: ObjectSize,
) -> list[StorageObjectInfo]:
object_count = request.param
sizes_samples = [simple_object_size, complex_object_size]
samples_count = len(sizes_samples)
assert object_count >= samples_count, f"Object count is too low, must be >= {samples_count}"
sizes_weights = [2, 1]
sizes = sizes_samples + random.choices(sizes_samples, weights=sizes_weights, k=object_count - samples_count)
results = parallel(
[container.generate_object for _ in sizes for container in containers],
size=itertools.cycle([size.value for size in sizes]),
)
return [result.result() for result in results]
@allure.title("[Test] Create objects and get nodes with object")
@pytest.fixture()
def object_and_nodes(self, simple_object_size: ObjectSize, container: StorageContainer) -> tuple[StorageObjectInfo, list[ClusterNode]]:
object_info = container.generate_object(simple_object_size.value)
object_nodes = get_object_nodes(self.cluster, object_info.cid, object_info.oid, self.cluster.cluster_nodes[0])
return object_info, object_nodes
def _verify_object(self, storage_object: StorageObjectInfo, node: StorageNode):
with reporter.step(f"Verify object {storage_object.oid} from node {node}"):
file_path = get_object(
storage_object.wallet,
storage_object.cid,
storage_object.oid,
endpoint=node.get_rpc_endpoint(),
shell=self.shell,
timeout="60s",
)
assert storage_object.file_hash == get_file_hash(file_path)
@reporter.step("Verify objects")
def verify_objects(self, nodes: list[StorageNode], storage_objects: list[StorageObjectInfo]) -> None:
workers_count = os.environ.get("PARALLEL_CUSTOM_LIMIT", 50)
with parallel_workers_limit(int(workers_count)):
parallel(self._verify_object, storage_objects * len(nodes), node=itertools.cycle(nodes))
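    # The verification fan-out is capped at 50 workers by default; set the
    # PARALLEL_CUSTOM_LIMIT environment variable to change the cap.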
@allure.title("Full shutdown node")
@pytest.mark.parametrize("containers, storage_objects", [(5, 10)], indirect=True)
def test_complete_node_shutdown(
self,
storage_objects: list[StorageObjectInfo],
node_under_test: ClusterNode,
cluster_state_controller: ClusterStateController,
):
with reporter.step(f"Remove one node from the list of nodes"):
alive_nodes = list(set(self.cluster.cluster_nodes) - {node_under_test})
storage_nodes = [cluster.storage_node for cluster in alive_nodes]
with reporter.step("Tick 2 epochs and wait for 2 blocks"):
self.tick_epochs(2, storage_nodes[0], wait_block=2)
with reporter.step(f"Stop node"):
cluster_state_controller.stop_node_host(node_under_test, "hard")
with reporter.step("Verify that there are no corrupted objects"):
self.verify_objects(storage_nodes, storage_objects)
with reporter.step(f"Check node still in map"):
self.wait_node_in_map(node_under_test.storage_node, self.shell, alive_node=storage_nodes[0])
count_tick_epoch = int(alive_nodes[0].ir_node.get_netmap_cleaner_threshold()) + 4
with reporter.step(f"Tick {count_tick_epoch} epochs and wait for 2 blocks"):
self.tick_epochs(count_tick_epoch, storage_nodes[0], wait_block=2)
with reporter.step(f"Check node in not map after {count_tick_epoch} epochs"):
self.wait_node_not_in_map(node_under_test.storage_node, self.shell, alive_node=storage_nodes[0])
with reporter.step(f"Verify that there are no corrupted objects after {count_tick_epoch} epochs"):
self.verify_objects(storage_nodes, storage_objects)
@allure.title("Temporarily disable a node")
@pytest.mark.parametrize("containers, storage_objects", [(5, 10)], indirect=True)
def test_temporarily_disable_a_node(
self,
storage_objects: list[StorageObjectInfo],
node_under_test: ClusterNode,
cluster_state_controller: ClusterStateController,
):
with reporter.step(f"Remove one node from the list"):
storage_nodes = list(set(self.cluster.storage_nodes) - {node_under_test.storage_node})
with reporter.step("Tick 2 epochs and wait for 2 blocks"):
self.tick_epochs(2, storage_nodes[0], wait_block=2)
with reporter.step(f"Stop node"):
cluster_state_controller.stop_node_host(node_under_test, "hard")
with reporter.step("Verify that there are no corrupted objects"):
self.verify_objects(storage_nodes, storage_objects)
with reporter.step(f"Check node still in map"):
self.wait_node_in_map(node_under_test.storage_node, self.shell, alive_node=storage_nodes[0])
with reporter.step(f"Start node"):
cluster_state_controller.start_node_host(node_under_test)
with reporter.step("Verify that there are no corrupted objects"):
self.verify_objects(storage_nodes, storage_objects)
@allure.title("Not enough nodes in the container with policy - 'REP 3 CBF 1 SELECT 4 FROM *'")
def test_not_enough_nodes_in_container_rep_3(
self,
object_and_nodes: tuple[StorageObjectInfo, list[ClusterNode]],
default_wallet: WalletInfo,
cluster_state_controller: ClusterStateController,
simple_file: str,
):
object_info, object_nodes = object_and_nodes
endpoint_without_object = list(set(self.cluster.cluster_nodes) - set(object_nodes))[0].storage_node.get_rpc_endpoint()
endpoint_with_object = object_nodes[0].storage_node.get_rpc_endpoint()
with reporter.step("Stop all nodes with object except first one"):
parallel(cluster_state_controller.stop_node_host, object_nodes[1:], mode="hard")
with reporter.step(f"Get object from node without object"):
get_object(default_wallet, object_info.cid, object_info.oid, self.shell, endpoint_without_object)
with reporter.step(f"Get object from node with object"):
get_object(default_wallet, object_info.cid, object_info.oid, self.shell, endpoint_with_object)
with reporter.step(f"[Negative] Put operation to node with object"):
with pytest.raises(RuntimeError):
put_object(default_wallet, simple_file, object_info.cid, self.shell, endpoint_with_object)
@allure.title("Not enough nodes in the container with policy - 'REP 2 CBF 2 SELECT 4 FROM *'")
def test_not_enough_nodes_in_container_rep_2(
self,
default_wallet: WalletInfo,
cluster_state_controller: ClusterStateController,
simple_file: str,
):
with reporter.step("Create container with full network map"):
node_count = len(self.cluster.cluster_nodes)
placement_rule = f"REP {node_count - 2} IN X CBF 2 SELECT {node_count} FROM * AS X"
cid = create_container(
default_wallet,
self.shell,
self.cluster.default_rpc_endpoint,
rule=placement_rule,
basic_acl=PUBLIC_ACL,
)
with reporter.step("Put object"):
oid = put_object(default_wallet, simple_file, cid, self.shell, self.cluster.default_rpc_endpoint)
with reporter.step("Search nodes with object"):
object_nodes = get_object_nodes(self.cluster, cid, oid, self.cluster.cluster_nodes[0])
with reporter.step("Choose node to stop"):
node_under_test = random.choice(object_nodes)
alive_node_with_object = random.choice(list(set(object_nodes) - {node_under_test}))
alive_endpoint_with_object = alive_node_with_object.storage_node.get_rpc_endpoint()
with reporter.step("Stop random node with object"):
cluster_state_controller.stop_node_host(node_under_test, "hard")
with reporter.step("Put object to alive node with object"):
oid_2 = put_object(default_wallet, simple_file, cid, self.shell, alive_endpoint_with_object)
with reporter.step("Get object from alive node with object"):
get_object(default_wallet, cid, oid_2, self.shell, alive_endpoint_with_object)
with reporter.step("Create container on alive node"):
create_container(
default_wallet,
self.shell,
alive_endpoint_with_object,
rule=placement_rule,
basic_acl=PUBLIC_ACL,
)

View file

@@ -0,0 +1,689 @@
import logging
import random
from datetime import datetime
from time import sleep
import allure
import pytest
from frostfs_testlib import reporter
from frostfs_testlib.resources.common import MORPH_BLOCK_TIME
from frostfs_testlib.resources.wellknown_acl import PUBLIC_ACL
from frostfs_testlib.s3 import S3ClientWrapper, VersioningStatus
from frostfs_testlib.s3.interfaces import BucketContainerResolver
from frostfs_testlib.steps.cli.container import StorageContainer, StorageContainerInfo, create_container
from frostfs_testlib.steps.cli.object import get_object, put_object_to_random_node
from frostfs_testlib.steps.node_management import (
check_node_in_map,
check_node_not_in_map,
exclude_node_from_network_map,
include_node_to_network_map,
remove_nodes_from_map_morph,
wait_for_node_to_be_ready,
)
from frostfs_testlib.steps.s3 import s3_helper
from frostfs_testlib.steps.s3.s3_helper import search_nodes_with_bucket
from frostfs_testlib.storage.cluster import Cluster, ClusterNode, S3Gate, StorageNode
from frostfs_testlib.storage.controllers import ClusterStateController, ShardsWatcher
from frostfs_testlib.storage.dataclasses.object_size import ObjectSize
from frostfs_testlib.storage.dataclasses.storage_object_info import StorageObjectInfo
from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
from frostfs_testlib.testing.cluster_test_base import ClusterTestBase
from frostfs_testlib.testing.test_control import expect_not_raises
from frostfs_testlib.utils import datetime_utils
from frostfs_testlib.utils.failover_utils import wait_object_replication
from frostfs_testlib.utils.file_keeper import FileKeeper
from frostfs_testlib.utils.file_utils import generate_file, get_file_hash
from ...resources.common import S3_POLICY_FILE_LOCATION
logger = logging.getLogger("NeoLogger")
stopped_nodes: list[StorageNode] = []
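# Module-level registry of nodes taken out of the network map; the teardown fixtures below drain it
# so nodes are returned even when a test fails midway.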
@pytest.fixture(scope="function")
@allure.title("Provide File Keeper")
def file_keeper():
keeper = FileKeeper()
yield keeper
keeper.restore_files()
@pytest.mark.failover
@pytest.mark.failover_storage
class TestFailoverStorage(ClusterTestBase):
@allure.title("Shutdown and start node (stop_mode={stop_mode})")
@pytest.mark.parametrize("stop_mode", ["hard", "soft"])
@pytest.mark.failover_reboot
def test_lose_storage_node_host(
self,
default_wallet,
stop_mode: str,
require_multiple_hosts,
simple_object_size: ObjectSize,
cluster: Cluster,
cluster_state_controller: ClusterStateController,
):
wallet = default_wallet
placement_rule = "REP 2 IN X CBF 2 SELECT 2 FROM * AS X"
source_file_path = generate_file(simple_object_size.value)
stopped_hosts_nodes = []
with reporter.step(f"Create container and put object"):
cid = create_container(
wallet,
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
rule=placement_rule,
basic_acl=PUBLIC_ACL,
)
oid = put_object_to_random_node(wallet, source_file_path, cid, shell=self.shell, cluster=self.cluster)
with reporter.step(f"Wait for replication and get nodes with object"):
nodes_with_object = wait_object_replication(cid, oid, 2, shell=self.shell, nodes=self.cluster.storage_nodes)
with reporter.step(f"Stop 2 nodes with object and wait replication one by one"):
for storage_node in random.sample(nodes_with_object, 2):
stopped_hosts_nodes.append(storage_node)
cluster_node = cluster.node(storage_node)
cluster_state_controller.stop_node_host(cluster_node, stop_mode)
replicated_nodes = wait_object_replication(
cid,
oid,
2,
shell=self.shell,
nodes=list(set(self.cluster.storage_nodes) - {*stopped_hosts_nodes}),
)
with reporter.step("Check object data is not corrupted"):
got_file_path = get_object(wallet, cid, oid, endpoint=replicated_nodes[0].get_rpc_endpoint(), shell=self.shell)
assert get_file_hash(source_file_path) == get_file_hash(got_file_path)
with reporter.step("Return all hosts"):
cluster_state_controller.start_stopped_hosts()
with reporter.step("Check object data is not corrupted"):
replicated_nodes = wait_object_replication(cid, oid, 2, shell=self.shell, nodes=self.cluster.storage_nodes)
got_file_path = get_object(wallet, cid, oid, shell=self.shell, endpoint=replicated_nodes[0].get_rpc_endpoint())
assert get_file_hash(source_file_path) == get_file_hash(got_file_path)
@pytest.mark.parametrize("s3_policy", [S3_POLICY_FILE_LOCATION], indirect=True)
@allure.title("Do not ignore unhealthy tree endpoints (s3_client={s3_client})")
def test_unhealthy_tree(
self,
s3_client: S3ClientWrapper,
default_wallet: WalletInfo,
simple_object_size: ObjectSize,
cluster_state_controller: ClusterStateController,
bucket_container_resolver: BucketContainerResolver,
):
default_node = self.cluster.cluster_nodes[0]
with reporter.step("Turn S3 GW off on default node"):
cluster_state_controller.stop_service_of_type(default_node, S3Gate)
with reporter.step("Turn off storage on default node"):
cluster_state_controller.stop_service_of_type(default_node, StorageNode)
with reporter.step("Turn on S3 GW on default node"):
cluster_state_controller.start_service_of_type(default_node, S3Gate)
with reporter.step("Turn on storage on default node"):
cluster_state_controller.start_service_of_type(default_node, StorageNode)
with reporter.step("Create bucket with REP 1 SELECT 1 policy"):
bucket = s3_client.create_bucket(
location_constraint="rep-1",
)
file_path = generate_file(simple_object_size.value)
file_name = s3_helper.object_key_from_file_path(file_path)
with reporter.step("Put object into bucket"):
put_object = s3_client.put_object(bucket, file_path)
s3_helper.check_objects_in_bucket(s3_client, bucket, expected_objects=[file_name])
node_bucket = search_nodes_with_bucket(
cluster=self.cluster,
bucket_name=bucket,
wallet=default_wallet,
shell=self.shell,
endpoint=self.cluster.storage_nodes[0].get_rpc_endpoint(),
bucket_container_resolver=bucket_container_resolver,
)[0]
with reporter.step("Turn off all storage nodes except bucket node"):
for node in [node_to_stop for node_to_stop in self.cluster.cluster_nodes if node_to_stop != node_bucket]:
with reporter.step(f"Stop storage service on node: {node}"):
cluster_state_controller.stop_service_of_type(node, StorageNode)
with reporter.step(f"Change s3 endpoint to bucket node"):
s3_client.set_endpoint(node_bucket.s3_gate.get_endpoint())
with reporter.step("Check that object is available"):
s3_helper.check_objects_in_bucket(s3_client, bucket, expected_objects=[file_name])
with reporter.step("Start storage nodes"):
cluster_state_controller.start_all_stopped_services()
@pytest.mark.failover
@pytest.mark.failover_empty_map
class TestEmptyMap(ClusterTestBase):
"""
A set of tests that make the network map empty and verify that objects can still be read afterwards
"""
@reporter.step("Teardown after EmptyMap offline test")
@pytest.fixture()
def empty_map_offline_teardown(self):
yield
with reporter.step("Return all storage nodes to network map"):
for node in stopped_nodes:
include_node_to_network_map(node, node, shell=self.shell, cluster=self.cluster)
stopped_nodes.remove(node)
@pytest.mark.failover_empty_map_offline
@allure.title("Empty network map via offline all storage nodes (s3_client={s3_client})")
def test_offline_all_storage_nodes(
self,
s3_client: S3ClientWrapper,
bucket: str,
simple_object_size: ObjectSize,
empty_map_offline_teardown,
):
"""
The test empties the network map (sets OFFLINE status on all storage nodes), then returns all nodes to the map and checks that the object can still be read through S3.
Steps:
1. Check that bucket is empty
2. PUT object into bucket
3. Check that object exists in bucket
4. Exclude all storage nodes from network map (set status OFFLINE)
5. Return all storage nodes to network map
6. Check that we can read object from step 2
Args:
bucket: bucket which contains tested object
simple_object_size: size of object
"""
file_path = generate_file(simple_object_size.value)
file_name = s3_helper.object_key_from_file_path(file_path)
bucket_objects = [file_name]
with reporter.step("Put object into bucket"):
s3_client.put_object(bucket, file_path)
with reporter.step("Check that object exists in bucket"):
s3_helper.check_objects_in_bucket(s3_client, bucket, bucket_objects)
storage_nodes = self.cluster.storage_nodes
with reporter.step("Exclude all storage nodes from network map"):
for node in storage_nodes:
stopped_nodes.append(node)
exclude_node_from_network_map(node, node, shell=self.shell, cluster=self.cluster)
with reporter.step("Return all storage nodes to network map"):
for node in storage_nodes:
include_node_to_network_map(node, node, shell=self.shell, cluster=self.cluster)
stopped_nodes.remove(node)
with reporter.step("Check that we can read object"):
s3_helper.check_objects_in_bucket(s3_client, bucket, bucket_objects)
@reporter.step("Teardown after EmptyMap stop service test")
@pytest.fixture()
def empty_map_stop_service_teardown(self, cluster_state_controller: ClusterStateController):
yield
with reporter.step("Return all storage nodes to network map"):
cluster_state_controller.start_all_stopped_services()
for node in stopped_nodes:
check_node_in_map(node, shell=self.shell, alive_node=node)
@pytest.mark.failover_empty_map_stop_service
@allure.title("Empty network map via stop all storage services (s3_client={s3_client})")
def test_stop_all_storage_nodes(
self,
s3_client: S3ClientWrapper,
bucket: str,
simple_object_size: ObjectSize,
empty_map_stop_service_teardown,
cluster_state_controller: ClusterStateController,
):
"""
The test empties the network map (stops the storage service on all nodes,
then uses 'frostfs-adm morph delete-nodes' to delete the nodes from the map),
then starts all services and checks that the object can still be read through S3.
Steps:
1. Check that bucket is empty
2. PUT object into bucket
3. Check that object exists in bucket
4. Exclude all storage nodes from network map (stop storage service
and manually exclude from map)
5. Return all storage nodes to network map
6. Check that we can read object from step 2
Args:
bucket: bucket which contains tested object
simple_object_size: size of object
"""
file_path = generate_file(simple_object_size.value)
file_name = s3_helper.object_key_from_file_path(file_path)
bucket_objects = [file_name]
with reporter.step("Put object into bucket"):
s3_client.put_object(bucket, file_path)
with reporter.step("Check that object exists in bucket"):
s3_helper.check_objects_in_bucket(s3_client, bucket, bucket_objects)
with reporter.step("Stop all storage nodes"):
cluster_state_controller.stop_services_of_type(StorageNode)
with reporter.step("Remove all nodes from network map"):
remove_nodes_from_map_morph(shell=self.shell, cluster=self.cluster, remove_nodes=self.cluster.services(StorageNode))
with reporter.step("Return all storage nodes to network map"):
self.return_nodes_after_stop_with_check_empty_map(cluster_state_controller)
with reporter.step("Check that object exists in bucket"):
s3_helper.check_objects_in_bucket(s3_client, bucket, bucket_objects)
@reporter.step("Return all nodes to cluster with check empty map first")
def return_nodes_after_stop_with_check_empty_map(self, cluster_state_controller: ClusterStateController) -> None:
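# Start the first node alone and confirm the map is empty, then bring the remaining nodes up one by one,
# waiting a morph block and ticking an epoch after each so every node re-registers in the map.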
first_node = self.cluster.cluster_nodes[0].service(StorageNode)
with reporter.step("Start first node and check network map"):
cluster_state_controller.start_service_of_type(self.cluster.cluster_nodes[0], StorageNode)
wait_for_node_to_be_ready(first_node)
for check_node in self.cluster.storage_nodes:
check_node_not_in_map(check_node, shell=self.shell, alive_node=first_node)
for node in self.cluster.cluster_nodes[1:]:
storage_node = node.service(StorageNode)
cluster_state_controller.start_service_of_type(node, StorageNode)
wait_for_node_to_be_ready(storage_node)
sleep(datetime_utils.parse_time(MORPH_BLOCK_TIME))
self.tick_epochs(1)
check_node_in_map(storage_node, shell=self.shell, alive_node=first_node)
@allure.title("Object loss from fstree/blobovnicza (versioning=enabled, s3_client={s3_client})")
def test_s3_fstree_blobovnicza_loss_versioning_on(
self,
s3_client: S3ClientWrapper,
simple_object_size: ObjectSize,
cluster_state_controller: ClusterStateController,
bucket: str,
):
s3_helper.set_bucket_versioning(s3_client, bucket, VersioningStatus.ENABLED)
file_path = generate_file(simple_object_size.value)
file_name = s3_helper.object_key_from_file_path(file_path)
object_versions = []
with reporter.step("Put object into one bucket"):
put_object = s3_client.put_object(bucket, file_path)
s3_helper.check_objects_in_bucket(s3_client, bucket, expected_objects=[file_name])
object_versions.append(put_object)
with reporter.step("Stop all storage nodes"):
cluster_state_controller.stop_services_of_type(StorageNode)
with reporter.step("Delete blobovnicza and fstree from all nodes"):
for node in self.cluster.storage_nodes:
node.delete_blobovnicza()
node.delete_fstree()
with reporter.step("Start all storage nodes"):
cluster_state_controller.start_all_stopped_services()
# need to get Delete Marker first
with reporter.step("Delete the object from the bucket"):
delete_object = s3_client.delete_object(bucket, file_name)
object_versions.append(delete_object["VersionId"])
# and now delete all versions of object (including Delete Markers)
with reporter.step("Delete all versions of the object from the bucket"):
for version in object_versions:
delete_object = s3_client.delete_object(bucket, file_name, version_id=version)
with reporter.step("Delete bucket"):
s3_client.delete_bucket(bucket)
@allure.title("Object loss from fstree/blobovnicza (versioning=disabled, s3_client={s3_client})")
def test_s3_fstree_blobovnicza_loss_versioning_off(
self,
s3_client: S3ClientWrapper,
simple_object_size: ObjectSize,
cluster_state_controller: ClusterStateController,
bucket: str,
):
file_path = generate_file(simple_object_size.value)
file_name = s3_helper.object_key_from_file_path(file_path)
with reporter.step("Put object into one bucket"):
s3_client.put_object(bucket, file_path)
s3_helper.check_objects_in_bucket(s3_client, bucket, expected_objects=[file_name])
with reporter.step("Stop all storage nodes"):
cluster_state_controller.stop_services_of_type(StorageNode)
with reporter.step("Delete blobovnicza and fstree from all nodes"):
for node in self.cluster.storage_nodes:
node.delete_blobovnicza()
node.delete_fstree()
with reporter.step("Start all storage nodes"):
cluster_state_controller.start_all_stopped_services()
with reporter.step("Delete the object from the bucket"):
s3_client.delete_object(bucket, file_name)
with reporter.step("Delete bucket"):
s3_client.delete_bucket(bucket)
@pytest.mark.skip(reason="Need to increase cache lifetime")
@pytest.mark.parametrize(
# versioning should NOT be VersioningStatus.SUSPENDED, it needs to be undefined
"versioning_status",
[VersioningStatus.ENABLED, VersioningStatus.UNDEFINED],
)
@allure.title(
"After Pilorama.db loss on all nodes list objects should return nothing in second listing (versioning_status={versioning_status}, s3_client={s3_client})"
)
def test_s3_pilorama_loss(
self,
s3_client: S3ClientWrapper,
simple_object_size: ObjectSize,
versioning_status: VersioningStatus,
cluster_state_controller: ClusterStateController,
bucket: str,
):
s3_helper.set_bucket_versioning(s3_client, bucket, versioning_status)
file_path = generate_file(simple_object_size.value)
file_name = s3_helper.object_key_from_file_path(file_path)
with reporter.step("Put object into one bucket"):
s3_client.put_object(bucket, file_path)
s3_helper.check_objects_in_bucket(s3_client, bucket, expected_objects=[file_name])
with reporter.step("Stop all storage nodes"):
cluster_state_controller.stop_services_of_type(StorageNode)
with reporter.step("Delete pilorama.db from all nodes"):
for node in self.cluster.storage_nodes:
for shard in node.get_shards():
node.delete_file(shard.pilorama)
with reporter.step("Start all storage nodes"):
cluster_state_controller.start_all_stopped_services()
with reporter.step("Check list objects first time"):
objects_list = s3_client.list_objects(bucket)
assert objects_list, f"Expected not empty bucket"
with reporter.step("Check list objects second time"):
objects_list = s3_client.list_objects(bucket)
assert not objects_list, f"Expected empty bucket, got {objects_list}"
with reporter.step("Delete bucket"):
s3_client.delete_bucket(bucket)
@pytest.mark.failover
@pytest.mark.failover_data_loss
class TestStorageDataLoss(ClusterTestBase):
@allure.title(
"After metabase loss on all nodes operations on objects and buckets should be still available via S3 (s3_client={s3_client})"
)
@pytest.mark.metabase_loss
def test_metabase_loss(
self,
s3_client: S3ClientWrapper,
simple_object_size: ObjectSize,
complex_object_size: ObjectSize,
cluster_state_controller: ClusterStateController,
file_keeper: FileKeeper,
bucket: str,
):
with reporter.step("Put objects into bucket"):
simple_object_path = generate_file(simple_object_size.value)
simple_object_key = s3_helper.object_key_from_file_path(simple_object_path)
complex_object_path = generate_file(complex_object_size.value)
complex_object_key = s3_helper.object_key_from_file_path(complex_object_path)
s3_client.put_object(bucket, simple_object_path)
s3_client.put_object(bucket, complex_object_path)
with reporter.step("Check objects are in bucket"):
s3_helper.check_objects_in_bucket(s3_client, bucket, expected_objects=[simple_object_key, complex_object_key])
with reporter.step("Stop storage services on all nodes"):
cluster_state_controller.stop_services_of_type(StorageNode)
with reporter.step("Delete metabase from all nodes"):
for node in cluster_state_controller.cluster.storage_nodes:
node.delete_metabase()
with reporter.step("Enable resync_metabase option for storage services"):
for storage_node in cluster_state_controller.cluster.storage_nodes:
with reporter.step(f"Enable resync_metabase option for {storage_node}"):
config_file_path, config = storage_node.get_shards_config()
if not config["storage"]["shard"]["default"]["resync_metabase"]:
file_keeper.add(storage_node, config_file_path)
config["storage"]["shard"]["default"]["resync_metabase"] = True
storage_node.save_config(config, config_file_path)
with reporter.step("Start storage services on all nodes"):
cluster_state_controller.start_all_stopped_services()
with reporter.step("Wait for tree rebalance"):
# TODO: Use product metric when we have proper ones for this check
sleep(30)
with reporter.step("Delete objects from bucket"):
with reporter.step("Delete simple object from bucket"):
with expect_not_raises():
s3_client.delete_object(bucket, simple_object_key)
with reporter.step("Delete complex object from bucket"):
with expect_not_raises():
s3_client.delete_object(bucket, complex_object_key)
with reporter.step("Delete bucket"):
with expect_not_raises():
s3_client.delete_bucket(bucket)
@allure.title("Write cache loss on one node should not affect shards and should not produce errors in log")
@pytest.mark.write_cache_loss
def test_write_cache_loss_on_one_node(
self,
node_under_test: ClusterNode,
simple_object_size: ObjectSize,
cluster_state_controller: ClusterStateController,
shards_watcher: ShardsWatcher,
default_wallet: WalletInfo,
test_start_time: datetime,
):
exception_messages = []
with reporter.step(f"Create container on node {node_under_test}"):
locode = node_under_test.storage_node.get_un_locode()
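# The UN-LOCODE filter below pins the container to the node under test, so every generated object
# lands on that node's shards.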
placement_rule = f"""REP 1 IN X
CBF 1
SELECT 1 FROM C AS X
FILTER 'UN-LOCODE' EQ '{locode}' AS C"""
cid = create_container(
default_wallet,
self.shell,
node_under_test.storage_node.get_rpc_endpoint(),
rule=placement_rule,
)
container = StorageContainer(
StorageContainerInfo(cid, default_wallet),
self.shell,
cluster_state_controller.cluster,
)
with reporter.step(f"Put couple objects to container on node {node_under_test}"):
storage_objects: list[StorageObjectInfo] = []
for _ in range(5):
storage_object = container.generate_object(
simple_object_size.value,
endpoint=node_under_test.storage_node.get_rpc_endpoint(),
)
storage_objects.append(storage_object)
with reporter.step("Take shards snapshot"):
shards_watcher.take_shards_snapshot()
with reporter.step(f"Stop storage service on node {node_under_test}"):
cluster_state_controller.stop_service_of_type(node_under_test, StorageNode)
with reporter.step(f"Delete write cache from node {node_under_test}"):
node_under_test.storage_node.delete_write_cache()
with reporter.step(f"Start storage service on node {node_under_test}"):
cluster_state_controller.start_all_stopped_services()
with reporter.step("Objects should be available"):
for storage_object in storage_objects:
get_object(
storage_object.wallet,
container.get_id(),
storage_object.oid,
self.shell,
node_under_test.storage_node.get_rpc_endpoint(),
)
with reporter.step("No shards should have new errors"):
shards_watcher.take_shards_snapshot()
shards_with_errors = shards_watcher.get_shards_with_new_errors()
if shards_with_errors:
exception_messages.append(f"Shards have new errors: {shards_with_errors}")
with reporter.step("No shards should have degraded status"):
snapshot = shards_watcher.get_shards_snapshot()
for shard in snapshot:
status = snapshot[shard]["mode"]
if status != "read-write":
exception_messages.append(f"Shard {shard} changed status to {status}")
with reporter.step("No related errors should be in log"):
if node_under_test.host.is_message_in_logs(message_regex=r"\Wno such file or directory\W", since=test_start_time):
exception_messages.append(f"Node {node_under_test} have shard errors in logs")
with reporter.step("Pass test if no errors found"):
assert not exception_messages, "\n".join(exception_messages)
@allure.title("Loss of one node should trigger use of tree and storage service in another node (s3_client={s3_client})")
def test_s3_one_endpoint_loss(
self,
bucket,
s3_client: S3ClientWrapper,
simple_object_size: ObjectSize,
cluster_state_controller: ClusterStateController,
):
# TODO: need to check that s3 gate is connected to localhost (such metric will be supported in 1.3)
with reporter.step("Stop one node and wait for rebalance connection of s3 gate to storage service"):
current_node = self.cluster.cluster_nodes[0]
cluster_state_controller.stop_service_of_type(current_node, StorageNode)
# waiting for rebalance connection of s3 gate to storage service
sleep(60)
file_path = generate_file(simple_object_size.value)
file_name = s3_helper.object_key_from_file_path(file_path)
with reporter.step("Put object into one bucket"):
put_object = s3_client.put_object(bucket, file_path)
s3_helper.check_objects_in_bucket(s3_client, bucket, expected_objects=[file_name])
@pytest.mark.parametrize("s3_policy", [S3_POLICY_FILE_LOCATION], indirect=True)
@allure.title("After Pilorama.db loss on one node object is retrievable (s3_client={s3_client})")
def test_s3_one_pilorama_loss(
self,
s3_client: S3ClientWrapper,
simple_object_size: ObjectSize,
cluster_state_controller: ClusterStateController,
):
bucket = s3_client.create_bucket(
location_constraint="rep3",
grant_read="uri=http://acs.amazonaws.com/groups/global/AllUsers",
)
s3_helper.set_bucket_versioning(s3_client, bucket, VersioningStatus.ENABLED)
with reporter.step("Check bucket versioning"):
bucket_versioning = s3_client.get_bucket_versioning_status(bucket)
assert bucket_versioning == "Enabled", "Bucket should have enabled versioning"
file_path = generate_file(simple_object_size.value)
file_name = s3_helper.object_key_from_file_path(file_path)
object_versions = []
with reporter.step("Put object into one bucket"):
put_object = s3_client.put_object(bucket, file_path)
s3_helper.check_objects_in_bucket(s3_client, bucket, expected_objects=[file_name])
object_versions.append(put_object)
node_to_check = self.cluster.storage_nodes[0]
piloramas_list_before_removing = []
with reporter.step("Get list of all pilorama.db on shards"):
for shard in node_to_check.get_shards():
piloramas_list_before_removing.append(shard.pilorama)
with reporter.step("Check that all pilorama.db files exist on node"):
for pilorama in piloramas_list_before_removing:
assert node_to_check.is_file_exist(pilorama), f"File {pilorama} does not exist"
with reporter.step("Stop all storage nodes"):
cluster_state_controller.stop_services_of_type(StorageNode)
with reporter.step("Delete pilorama.db from one node"):
for pilorama in piloramas_list_before_removing:
node_to_check.delete_file(pilorama)
with reporter.step("Start all storage nodes"):
cluster_state_controller.start_all_stopped_services()
with reporter.step("Tick epoch to trigger sync and then wait for 1 minute"):
self.tick_epochs(1)
sleep(120)
with reporter.step("Get list of all pilorama.db after sync"):
for pilorama in piloramas_list_before_removing:
assert node_to_check.is_file_exist(pilorama), f"File {pilorama} does not exist"
with reporter.step("Check bucket versioning"):
bucket_versioning = s3_client.get_bucket_versioning_status(bucket)
assert bucket_versioning == "Enabled", "Bucket should have enabled versioning"
with reporter.step("Check list objects"):
objects_list = s3_client.list_objects(bucket)
assert objects_list, f"Expected not empty bucket"
with reporter.step("Delete the object from the bucket"):
delete_object = s3_client.delete_object(bucket, file_name)
assert "DeleteMarker" in delete_object.keys(), "Delete markers not found"
with reporter.step("Check list objects"):
objects_list = s3_client.list_objects_versions(bucket)
assert objects_list, f"Expected not empty bucket"
object_versions.append(delete_object["VersionId"])
# and now delete all versions of object (including Delete Markers)
with reporter.step("Delete all versions of the object from the bucket"):
for version in object_versions:
delete_object = s3_client.delete_object(bucket, file_name, version_id=version)
with reporter.step("Check list objects"):
objects_list = s3_client.list_objects_versions(bucket)
assert not objects_list, f"Expected empty bucket"
with reporter.step("Delete bucket"):
s3_client.delete_bucket(bucket)

View file

@@ -0,0 +1,438 @@
import logging
import random
from time import sleep
import allure
import pytest
from frostfs_testlib import reporter
from frostfs_testlib.healthcheck.interfaces import Healthcheck
from frostfs_testlib.resources.wellknown_acl import EACL_PUBLIC_READ_WRITE, PUBLIC_ACL
from frostfs_testlib.steps.cli.container import create_container
from frostfs_testlib.steps.cli.object import get_object, get_object_nodes, neo_go_query_height, put_object, put_object_to_random_node
from frostfs_testlib.steps.storage_object import delete_objects
from frostfs_testlib.storage.cluster import ClusterNode
from frostfs_testlib.storage.controllers import ClusterStateController
from frostfs_testlib.storage.dataclasses.object_size import ObjectSize
from frostfs_testlib.storage.dataclasses.storage_object_info import Interfaces, StorageObjectInfo
from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
from frostfs_testlib.testing.cluster_test_base import ClusterTestBase
from frostfs_testlib.testing.parallel import parallel
from frostfs_testlib.utils.failover_utils import wait_object_replication
from frostfs_testlib.utils.file_utils import generate_file, get_file_hash
logger = logging.getLogger("NeoLogger")
STORAGE_NODE_COMMUNICATION_PORT = "8080"
STORAGE_NODE_COMMUNICATION_PORT_TLS = "8082"
PORTS_TO_BLOCK = [STORAGE_NODE_COMMUNICATION_PORT, STORAGE_NODE_COMMUNICATION_PORT_TLS]
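# Registry of nodes whose traffic is currently blocked; the autouse restore_network fixture drains it
# after every test.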
blocked_nodes: list[ClusterNode] = []
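# Attribute sets for the storage_objects fixture below: one object is put per entry, including one
# object without attributes.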
OBJECT_ATTRIBUTES = [
None,
{"key1": 1, "key2": "abc", "common_key": "common_value"},
{"key1": 2, "common_key": "common_value"},
]
@pytest.mark.failover
@pytest.mark.failover_network
class TestFailoverNetwork(ClusterTestBase):
@pytest.fixture(autouse=True)
@allure.title("Restore network")
def restore_network(self, healthcheck: Healthcheck, cluster_state_controller: ClusterStateController):
yield
with reporter.step(f"Count blocked nodes {len(blocked_nodes)}"):
not_empty = len(blocked_nodes) != 0
for node in list(blocked_nodes):
with reporter.step(f"Restore network for {node}"):
cluster_state_controller.restore_traffic(node=node)
blocked_nodes.remove(node)
if not_empty:
parallel(healthcheck.storage_healthcheck, self.cluster.cluster_nodes)
@pytest.fixture()
@allure.title("Restore drop traffic to system")
def restore_down_interfaces(self, cluster_state_controller: ClusterStateController):
yield
cluster_state_controller.restore_interfaces()
@pytest.fixture()
def storage_objects(
self,
simple_object_size: ObjectSize,
default_wallet: WalletInfo,
) -> list[StorageObjectInfo]:
file_path = generate_file(simple_object_size.value)
file_hash = get_file_hash(file_path)
with reporter.step("Create container"):
placement_rule = "REP 1 CBF 1"
cid = create_container(
wallet=default_wallet,
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
rule=placement_rule,
await_mode=True,
basic_acl=EACL_PUBLIC_READ_WRITE,
)
storage_objects = []
with reporter.step("Put object"):
for attribute in OBJECT_ATTRIBUTES:
oid = put_object_to_random_node(
wallet=default_wallet,
path=file_path,
cid=cid,
shell=self.shell,
cluster=self.cluster,
)
storage_object = StorageObjectInfo(cid=cid, oid=oid)
storage_object.size = simple_object_size.value
storage_object.wallet = default_wallet
storage_object.file_path = file_path
storage_object.file_hash = file_hash
storage_object.attributes = attribute
storage_objects.append(storage_object)
return storage_objects
@allure.title("Block Storage node traffic")
def test_block_storage_node_traffic(
self,
default_wallet: WalletInfo,
require_multiple_hosts,
simple_object_size: ObjectSize,
cluster_state_controller: ClusterStateController,
):
"""
Block storage node traffic using iptables and wait for objects to be replicated.
"""
wallet = default_wallet
placement_rule = "REP 2 IN X CBF 2 SELECT 2 FROM * AS X"
wakeup_node_timeout = 10 # timeout to let nodes detect that traffic has been blocked
nodes_to_block_count = 2
source_file_path = generate_file(simple_object_size.value)
cid = create_container(
wallet,
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
rule=placement_rule,
basic_acl=PUBLIC_ACL,
)
oid = put_object_to_random_node(wallet, source_file_path, cid, shell=self.shell, cluster=self.cluster)
nodes = wait_object_replication(cid, oid, 2, shell=self.shell, nodes=self.cluster.storage_nodes)
logger.info(f"Nodes are {nodes}")
nodes_to_block = nodes
if nodes_to_block_count > len(nodes):
# TODO: the intent of this logic is not clear, need to revisit
nodes_to_block = random.choices(nodes, k=2)
nodes_non_block = list(set(self.cluster.storage_nodes) - set(nodes_to_block))
nodes_non_block_cluster = [
cluster_node for cluster_node in self.cluster.cluster_nodes if cluster_node.storage_node in nodes_non_block
]
with reporter.step("Block traffic and check corrupted object"):
for node in nodes_non_block_cluster:
with reporter.step(f"Block incoming traffic at node {node}"):
blocked_nodes.append(node)
cluster_state_controller.drop_traffic(
node=node, wakeup_timeout=wakeup_node_timeout, name_interface="data", block_nodes=nodes_to_block
)
with reporter.step(f"Check object is not stored on node {node}"):
new_nodes = wait_object_replication(
cid,
oid,
2,
shell=self.shell,
nodes=list(set(self.cluster.storage_nodes) - set(nodes_non_block)),
)
assert node.storage_node not in new_nodes
with reporter.step("Check object data is not corrupted"):
got_file_path = get_object(wallet, cid, oid, endpoint=new_nodes[0].get_rpc_endpoint(), shell=self.shell)
assert get_file_hash(source_file_path) == get_file_hash(got_file_path)
with reporter.step(f"Unblock incoming traffic"):
for node in nodes_non_block_cluster:
with reporter.step(f"Unblock at host {node}"):
cluster_state_controller.restore_traffic(node=node)
block_node = [
cluster_node for cluster_node in self.cluster.cluster_nodes if cluster_node.storage_node == node.storage_node
]
blocked_nodes.remove(*block_node)
sleep(wakeup_node_timeout)
with reporter.step("Check object data is not corrupted"):
new_nodes = wait_object_replication(cid, oid, 2, shell=self.shell, nodes=self.cluster.storage_nodes)
got_file_path = get_object(wallet, cid, oid, shell=self.shell, endpoint=new_nodes[0].get_rpc_endpoint())
assert get_file_hash(source_file_path) == get_file_hash(got_file_path)
@pytest.mark.interfaces
@allure.title("Block DATA interface node")
def test_block_data_interface(
self,
cluster_state_controller: ClusterStateController,
default_wallet: WalletInfo,
restore_down_interfaces: None,
storage_objects: list[StorageObjectInfo],
):
storage_object = storage_objects[0]
with reporter.step("Search nodes with object"):
nodes_with_object = get_object_nodes(
cluster=self.cluster,
cid=storage_object.cid,
oid=storage_object.oid,
alive_node=self.cluster.cluster_nodes[0],
)
with reporter.step("Get data interface to node"):
config_interfaces = list(nodes_with_object[0].host.config.interfaces.keys())
with reporter.step(f"Get data in {config_interfaces}"):
data_interfaces = [interface for interface in config_interfaces if "data" in interface]
with reporter.step("Block data interfaces for node"):
for interface in data_interfaces:
cluster_state_controller.down_interface(nodes=nodes_with_object, interface=interface)
with reporter.step("Tick epoch and wait 2 block"):
nodes_without_an_object = list(set(self.cluster.cluster_nodes) - set(nodes_with_object))
self.tick_epochs(1, alive_node=nodes_without_an_object[0].storage_node, wait_block=2)
with reporter.step("Get object for target nodes to data interfaces, expect false"):
with pytest.raises(RuntimeError, match="can't create API client: can't init SDK client: gRPC dial: context deadline exceeded"):
get_object(
wallet=default_wallet,
cid=storage_object.cid,
oid=storage_object.oid,
shell=self.shell,
endpoint=nodes_with_object[0].storage_node.get_rpc_endpoint(),
)
with reporter.step(f"Get object others nodes, expect true"):
input_file = get_object(
wallet=default_wallet,
cid=storage_object.cid,
oid=storage_object.oid,
shell=self.shell,
endpoint=nodes_without_an_object[0].storage_node.get_rpc_endpoint(),
)
with reporter.step("Restore interface and tick 1 epoch, wait 2 block"):
cluster_state_controller.restore_interfaces()
self.tick_epochs(1, alive_node=nodes_without_an_object[0].storage_node, wait_block=2)
@pytest.mark.interfaces
@allure.title("Block INTERNAL interface node")
def test_block_internal_interface(
self,
cluster_state_controller: ClusterStateController,
default_wallet: WalletInfo,
restore_down_interfaces: None,
storage_objects: list[StorageObjectInfo],
simple_object_size: ObjectSize,
):
storage_object = storage_objects[0]
with reporter.step("Search nodes with object"):
nodes_with_object = get_object_nodes(
cluster=self.cluster,
cid=storage_object.cid,
oid=storage_object.oid,
alive_node=self.cluster.cluster_nodes[0],
)
with reporter.step("Get internal interface to node"):
config_interfaces = list(nodes_with_object[0].host.config.interfaces.keys())
with reporter.step(f"Get internal in {config_interfaces}"):
internal_interfaces = [interface for interface in config_interfaces if "internal" in interface]
with reporter.step("Block internal interfaces for node"):
for interface in internal_interfaces:
cluster_state_controller.down_interface(nodes=nodes_with_object, interface=interface)
with reporter.step("Tick epoch and wait 2 block"):
nodes_without_an_object = list(set(self.cluster.cluster_nodes) - set(nodes_with_object))
self.tick_epochs(1, alive_node=nodes_without_an_object[0].storage_node, wait_block=2)
with reporter.step("Get object others node, expect false"):
with pytest.raises(RuntimeError, match="rpc error"):
get_object(
wallet=default_wallet,
cid=storage_object.cid,
oid=storage_object.oid,
shell=self.shell,
endpoint=nodes_without_an_object[0].storage_node.get_rpc_endpoint(),
)
with reporter.step("Put object, others node, expect false"):
with pytest.raises(RuntimeError, match="rpc error"):
put_object(
wallet=default_wallet,
path=storage_object.file_path,
cid=storage_object.cid,
shell=self.shell,
endpoint=nodes_without_an_object[0].storage_node.get_rpc_endpoint(),
)
with reporter.step(f"Get object nodes with object, expect true"):
input_file = get_object(
wallet=default_wallet,
cid=storage_object.cid,
oid=storage_object.oid,
shell=self.shell,
endpoint=nodes_with_object[0].storage_node.get_rpc_endpoint(),
)
with reporter.step(f"Put object nodes with object, expect true"):
temp_file_path = generate_file(simple_object_size.value)
_ = put_object(
wallet=default_wallet,
path=temp_file_path,
cid=storage_object.cid,
shell=self.shell,
endpoint=nodes_with_object[0].storage_node.get_rpc_endpoint(),
)
with reporter.step("Restore interface and tick 1 epoch, wait 2 block"):
cluster_state_controller.restore_interfaces()
self.tick_epochs(1, alive_node=nodes_without_an_object[0].storage_node, wait_block=2)
@pytest.mark.interfaces
@pytest.mark.failover_baremetal
@pytest.mark.parametrize(
"block_interface, other_interface",
[(Interfaces.DATA_O, Interfaces.DATA_1), (Interfaces.DATA_1, Interfaces.DATA_O)],
)
@allure.title("Down data interfaces to all nodes(interface={block_interface})")
def test_down_data_interface(
self,
require_multiple_interfaces,
cluster_state_controller: ClusterStateController,
default_wallet: WalletInfo,
simple_object_size: ObjectSize,
restore_down_interfaces: None,
block_interface: Interfaces,
other_interface: Interfaces,
):
cluster_nodes = self.cluster.cluster_nodes
with reporter.step(f"Block {block_interface.value} interfaces"):
cluster_state_controller.down_interface(cluster_nodes, block_interface.value)
with reporter.step("Tick 1 epoch and wait 2 block for sync all nodes"):
self.tick_epochs(1, alive_node=cluster_nodes[0].storage_node, wait_block=2)
with reporter.step("Create container"):
cid = create_container(
wallet=default_wallet,
shell=self.shell,
endpoint=f"{cluster_nodes[0].get_data_interface(other_interface.value)[0]}:8080",
rule="REP 4 CBF 1",
)
with reporter.step("Put object"):
file_path = generate_file(simple_object_size.value)
oid = put_object(
wallet=default_wallet,
path=file_path,
cid=cid,
shell=self.shell,
endpoint=f"{cluster_nodes[0].get_data_interface(other_interface.value)[0]}:8080",
)
with reporter.step("Get object"):
file_get_path = get_object(
wallet=default_wallet,
cid=cid,
oid=oid,
shell=self.shell,
endpoint=f"{cluster_nodes[0].get_data_interface(other_interface.value)[0]}:8080",
)
with reporter.step("Restore interfaces all nodes"):
cluster_state_controller.restore_interfaces()
self.tick_epochs(1, alive_node=cluster_nodes[0].storage_node, wait_block=2)
@pytest.mark.interfaces
@pytest.mark.failover_baremetal
@pytest.mark.parametrize("interface", [Interfaces.INTERNAL_0, Interfaces.INTERNAL_1])
@allure.title("Down internal interfaces to all nodes(interface={interface})")
def test_down_internal_interface(
self,
require_multiple_interfaces,
cluster_state_controller: ClusterStateController,
default_wallet: WalletInfo,
simple_object_size: ObjectSize,
restore_down_interfaces: None,
interface: Interfaces,
):
cluster_nodes = self.cluster.cluster_nodes
latest_block = {}
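# Record the morph chain height per node before the outage; the comparison at the end asserts the
# height advanced while the internal interface was down, i.e. consensus kept running.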
with reporter.step("Get block all nodes"):
for cluster_node in cluster_nodes:
latest_block[cluster_node] = neo_go_query_height(
shell=cluster_node.host.get_shell(), endpoint=cluster_node.morph_chain.get_http_endpoint()
)
with reporter.step(f"Block {interface} interfaces"):
cluster_state_controller.down_interface(cluster_nodes, interface.value)
with reporter.step("Tick 1 epoch and wait 2 block for sync all nodes"):
self.tick_epochs(1, alive_node=cluster_nodes[0].storage_node, wait_block=2)
with reporter.step("Create container"):
cid = create_container(
wallet=default_wallet,
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
rule="REP 4 CBF 1",
)
with reporter.step(f"Put object, after down {interface}"):
file_path = generate_file(simple_object_size.value)
oid = put_object(
wallet=default_wallet,
path=file_path,
cid=cid,
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
)
with reporter.step("Get object"):
file_get_path = get_object(
wallet=default_wallet,
cid=cid,
oid=oid,
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
)
now_block = {}
with reporter.step("Get actual block"):
for cluster_node in cluster_nodes:
now_block[cluster_node] = neo_go_query_height(
shell=cluster_node.host.get_shell(), endpoint=cluster_node.morph_chain.get_http_endpoint()
)
with reporter.step(f"Compare block"):
for cluster_node, items in now_block.items():
with reporter.step(
f"Node - {cluster_node.host_ip}, old block - {latest_block[cluster_node]['Latest block']}, "
f"now block - {now_block[cluster_node]['Latest block']}"
):
assert latest_block[cluster_node]["Latest block"] < now_block[cluster_node]["Latest block"]
with reporter.step("Restore interfaces all nodes"):
cluster_state_controller.restore_interfaces()
self.tick_epochs(1, alive_node=self.cluster.cluster_nodes[0].storage_node, wait_block=2)

View file

@@ -0,0 +1,534 @@
import logging
import random
from time import sleep
from typing import Callable, Optional, Tuple
import allure
import pytest
from frostfs_testlib import reporter
from frostfs_testlib.cli import FrostfsCli
from frostfs_testlib.cli.netmap_parser import NetmapParser
from frostfs_testlib.resources.cli import FROSTFS_CLI_EXEC
from frostfs_testlib.resources.error_patterns import OBJECT_NOT_FOUND
from frostfs_testlib.resources.wellknown_acl import PUBLIC_ACL
from frostfs_testlib.steps.cli.container import create_container, search_nodes_with_container
from frostfs_testlib.steps.cli.object import (
delete_object,
get_object,
get_object_from_random_node,
head_object,
put_object,
put_object_to_random_node,
search_object,
)
from frostfs_testlib.steps.node_management import (
check_node_in_map,
delete_node_data,
drop_object,
exclude_node_from_network_map,
get_locode_from_random_node,
include_node_to_network_map,
node_shard_list,
node_shard_set_mode,
storage_node_set_status,
wait_for_node_to_be_ready,
)
from frostfs_testlib.steps.storage_policy import get_nodes_with_object
from frostfs_testlib.storage.cluster import ClusterNode, StorageNode
from frostfs_testlib.storage.controllers import ClusterStateController
from frostfs_testlib.storage.dataclasses.object_size import ObjectSize
from frostfs_testlib.storage.dataclasses.storage_object_info import NodeStatus
from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
from frostfs_testlib.testing.cluster_test_base import ClusterTestBase
from frostfs_testlib.utils import string_utils
from frostfs_testlib.utils.failover_utils import wait_object_replication
from frostfs_testlib.utils.file_utils import generate_file
from ...helpers.utility import wait_for_gc_pass_on_storage_nodes
logger = logging.getLogger("NeoLogger")
check_nodes: list[StorageNode] = []
@pytest.mark.node_mgmt
@pytest.mark.failover
@pytest.mark.order(10)
class TestNodeManagement(ClusterTestBase):
@pytest.fixture
@allure.title("Create container and pick the node with data")
def create_container_and_pick_node(self, default_wallet: WalletInfo, simple_object_size: ObjectSize) -> Tuple[str, StorageNode]:
file_path = generate_file(simple_object_size.value)
placement_rule = "REP 1 IN X CBF 1 SELECT 1 FROM * AS X"
endpoint = self.cluster.default_rpc_endpoint
cid = create_container(
default_wallet,
shell=self.shell,
endpoint=endpoint,
rule=placement_rule,
basic_acl=PUBLIC_ACL,
)
oid = put_object_to_random_node(default_wallet, file_path, cid, self.shell, self.cluster)
nodes = get_nodes_with_object(cid, oid, shell=self.shell, nodes=self.cluster.storage_nodes)
assert len(nodes) == 1
node = nodes[0]
yield cid, node
shards = node_shard_list(node)
assert shards
for shard in shards:
node_shard_set_mode(node, shard, "read-write")
node_shard_list(node)
@reporter.step("Tick epoch with retries")
def tick_epoch_with_retries(self, attempts: int = 3, timeout: int = 3, wait_block: int = None):
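# Epoch ticks can fail transiently while nodes are joining or leaving the map, so retry up to
# `attempts` times, sleeping between tries and re-raising only on the last failure.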
for attempt in range(attempts):
try:
self.tick_epoch(wait_block=wait_block)
except RuntimeError:
sleep(timeout)
if attempt >= attempts - 1:
raise
continue
return
@pytest.fixture
def return_nodes_after_test_run(self):
yield
self.return_nodes()
@reporter.step("Return node to cluster")
def return_nodes(self, alive_node: Optional[StorageNode] = None) -> None:
for node in list(check_nodes):
with reporter.step(f"Start node {node}"):
node.start_service()
with reporter.step(f"Waiting status ready for node {node}"):
wait_for_node_to_be_ready(node)
# We need to wait for node to establish notifications from morph-chain
# Otherwise it will hang up when we will try to set status
self.wait_for_blocks()
with reporter.step(f"Move node {node} to online state"):
storage_node_set_status(node, status="online", retries=2)
check_nodes.remove(node)
self.wait_for_blocks()
self.tick_epoch_with_retries(3, wait_block=2)
check_node_in_map(node, shell=self.shell, alive_node=alive_node)
@allure.title("Add one node to cluster")
def test_add_nodes(
self,
default_wallet: WalletInfo,
simple_object_size: ObjectSize,
return_nodes_after_test_run,
):
"""
This test removes one node from the cluster, then adds it back. The test uses basic control operations on storage nodes (healthcheck, netmap-snapshot, set-status).
"""
wallet = default_wallet
placement_rule_3 = "REP 3 IN X CBF 1 SELECT 3 FROM * AS X"
placement_rule_4 = "REP 4 IN X CBF 1 SELECT 4 FROM * AS X"
source_file_path = generate_file(simple_object_size.value)
storage_nodes = self.cluster.storage_nodes
random_node = random.choice(storage_nodes[1:])
alive_node = random.choice([storage_node for storage_node in storage_nodes if storage_node.id != random_node.id])
check_node_in_map(random_node, shell=self.shell, alive_node=alive_node)
# Add node to recovery list before messing with it
check_nodes.append(random_node)
exclude_node_from_network_map(random_node, alive_node, shell=self.shell, cluster=self.cluster)
delete_node_data(random_node)
cid = create_container(
wallet,
rule=placement_rule_3,
basic_acl=PUBLIC_ACL,
shell=self.shell,
endpoint=alive_node.get_rpc_endpoint(),
)
oid = put_object(
wallet,
source_file_path,
cid,
shell=self.shell,
endpoint=alive_node.get_rpc_endpoint(),
)
wait_object_replication(cid, oid, 3, shell=self.shell, nodes=storage_nodes)
self.return_nodes(alive_node)
with reporter.step("Check data could be replicated to new node"):
random_node = random.choice(list(set(storage_nodes) - {random_node, alive_node}))
# Add node to recovery list before messing with it
check_nodes.append(random_node)
exclude_node_from_network_map(random_node, alive_node, shell=self.shell, cluster=self.cluster)
wait_object_replication(
cid,
oid,
3,
shell=self.shell,
nodes=list(set(storage_nodes) - {random_node}),
)
include_node_to_network_map(random_node, alive_node, shell=self.shell, cluster=self.cluster)
wait_object_replication(cid, oid, 3, shell=self.shell, nodes=storage_nodes)
with reporter.step("Check container could be created with new node"):
cid = create_container(
wallet,
rule=placement_rule_4,
basic_acl=PUBLIC_ACL,
shell=self.shell,
endpoint=alive_node.get_rpc_endpoint(),
)
oid = put_object(
wallet,
source_file_path,
cid,
shell=self.shell,
endpoint=alive_node.get_rpc_endpoint(),
)
wait_object_replication(cid, oid, 4, shell=self.shell, nodes=storage_nodes)
@allure.title("Drop object using control command")
def test_drop_object(self, default_wallet, complex_object_size: ObjectSize, simple_object_size: ObjectSize):
"""
Test checks object could be dropped using `frostfs-cli control drop-objects` command.
"""
wallet = default_wallet
endpoint = self.cluster.default_rpc_endpoint
file_path_simple = generate_file(simple_object_size.value)
file_path_complex = generate_file(complex_object_size.value)
locode = get_locode_from_random_node(self.cluster)
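# Pin the container to a single location via the UN-LOCODE filter so the node holding the objects
# is known and drop-objects can be run against it.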
rule = f"REP 1 IN SE CBF 1 SELECT 1 FROM LOC AS SE FILTER 'UN-LOCODE' EQ '{locode}' AS LOC"
cid = create_container(wallet, rule=rule, shell=self.shell, endpoint=endpoint)
oid_simple = put_object_to_random_node(wallet, file_path_simple, cid, shell=self.shell, cluster=self.cluster)
oid_complex = put_object_to_random_node(wallet, file_path_complex, cid, shell=self.shell, cluster=self.cluster)
for oid in (oid_simple, oid_complex):
get_object_from_random_node(wallet, cid, oid, shell=self.shell, cluster=self.cluster)
head_object(wallet, cid, oid, shell=self.shell, endpoint=endpoint)
nodes_with_object = get_nodes_with_object(cid, oid_simple, shell=self.shell, nodes=self.cluster.storage_nodes)
random_node = random.choice(nodes_with_object)
for oid in (oid_simple, oid_complex):
with reporter.step(f"Drop object {oid}"):
get_object_from_random_node(wallet, cid, oid, shell=self.shell, cluster=self.cluster)
head_object(wallet, cid, oid, shell=self.shell, endpoint=endpoint)
drop_object(random_node, cid, oid)
self.wait_for_obj_dropped(wallet, cid, oid, endpoint, get_object)
self.wait_for_obj_dropped(wallet, cid, oid, endpoint, head_object)
@pytest.mark.skip(reason="Need to clarify scenario")
@allure.title("Control Operations with storage nodes")
def test_shards(
self,
default_wallet,
create_container_and_pick_node,
simple_object_size: ObjectSize,
):
wallet = default_wallet
file_path = generate_file(simple_object_size.value)
cid, node = create_container_and_pick_node
original_oid = put_object_to_random_node(wallet, file_path, cid, self.shell, self.cluster)
# for mode in ('read-only', 'degraded'):
for mode in ("degraded",):
shards = node_shard_list(node)
assert shards
for shard in shards:
node_shard_set_mode(node, shard, mode)
shards = node_shard_list(node)
assert shards
with pytest.raises(RuntimeError):
put_object_to_random_node(wallet, file_path, cid, self.shell, self.cluster)
with pytest.raises(RuntimeError):
delete_object(wallet, cid, original_oid, self.shell, self.cluster.default_rpc_endpoint)
get_object_from_random_node(wallet, cid, original_oid, self.shell, self.cluster)
for shard in shards:
node_shard_set_mode(node, shard, "read-write")
shards = node_shard_list(node)
assert shards
oid = put_object_to_random_node(wallet, file_path, cid, self.shell, self.cluster)
delete_object(wallet, cid, oid, self.shell, self.cluster.default_rpc_endpoint)
@allure.title("Put object with stopped node")
def test_stop_node(self, default_wallet, return_nodes_after_test_run, simple_object_size: ObjectSize):
wallet = default_wallet
placement_rule = "REP 3 IN X SELECT 4 FROM * AS X"
source_file_path = generate_file(simple_object_size.value)
storage_nodes = self.cluster.storage_nodes
random_node = random.choice(storage_nodes[1:])
alive_node = random.choice([storage_node for storage_node in storage_nodes if storage_node.id != random_node.id])
cid = create_container(
wallet,
rule=placement_rule,
basic_acl=PUBLIC_ACL,
shell=self.shell,
endpoint=random_node.get_rpc_endpoint(),
)
with reporter.step("Stop the random node"):
check_nodes.append(random_node)
random_node.stop_service()
with reporter.step("Try to put an object and expect success"):
put_object(
wallet,
source_file_path,
cid,
shell=self.shell,
endpoint=alive_node.get_rpc_endpoint(),
)
self.return_nodes(alive_node)
@reporter.step("Wait for object to be dropped")
def wait_for_obj_dropped(self, wallet: WalletInfo, cid: str, oid: str, endpoint: str, checker: Callable) -> None:
for _ in range(3):
try:
checker(wallet, cid, oid, shell=self.shell, endpoint=endpoint)
wait_for_gc_pass_on_storage_nodes()
except Exception as err:
if string_utils.is_str_match_pattern(err, OBJECT_NOT_FOUND):
return
raise AssertionError(f'Expected "{OBJECT_NOT_FOUND}" error, got\n{err}')
raise AssertionError(f"Object {oid} was not dropped from node")
@pytest.mark.maintenance
@pytest.mark.failover
@pytest.mark.order(9)
class TestMaintenanceMode(ClusterTestBase):
@pytest.fixture()
@allure.title("Init Frostfs CLI remote")
def frostfs_cli_remote(self, node_under_test: ClusterNode) -> FrostfsCli:
host = node_under_test.host
service_config = host.get_service_config(node_under_test.storage_node.name)
wallet_path = service_config.attributes["wallet_path"]
wallet_password = service_config.attributes["wallet_password"]
shell = host.get_shell()
wallet_config_path = f"/tmp/{node_under_test.storage_node.name}-config.yaml"
wallet_config = f'wallet: {wallet_path}\npassword: "{wallet_password}"'
shell.exec(f"echo '{wallet_config}' > {wallet_config_path}")
cli = FrostfsCli(shell=shell, frostfs_cli_exec_path=FROSTFS_CLI_EXEC, config_file=wallet_config_path)
return cli
@pytest.fixture()
def restore_node_status(self, cluster_state_controller: ClusterStateController, default_wallet: WalletInfo):
nodes_to_restore = []
yield nodes_to_restore
for node_to_restore in nodes_to_restore:
cluster_state_controller.set_node_status(node_to_restore, default_wallet, NodeStatus.ONLINE)
def check_node_status(self, expected_status: NodeStatus, node_under_test: ClusterNode, frostfs_cli: FrostfsCli, rpc_endpoint: str):
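# Look the node up in a netmap snapshot by host IP. A node expected to be OFFLINE may be absent from
# the snapshot entirely, so that case is handled before the status comparison.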
netmap = frostfs_cli.netmap.snapshot(rpc_endpoint).stdout
all_snapshots = NetmapParser.snapshot_all_nodes(netmap)
node_snapshot = [snapshot for snapshot in all_snapshots if node_under_test.host_ip == snapshot.node]
if expected_status == NodeStatus.OFFLINE and not node_snapshot:
assert node_under_test.host_ip not in netmap, f"{node_under_test} status should be {expected_status}. See netmap:\n{netmap}"
return
assert node_snapshot, f"{node_under_test} status should be {expected_status}, but was not in netmap. See netmap:\n{netmap}"
node_snapshot = node_snapshot[0]
assert (
expected_status == node_snapshot.node_status
), f"{node_under_test} status should be {expected_status}, but was {node_snapshot.node_status}. See netmap:\n{netmap}"
@allure.title("Test of basic node operations in maintenance mode")
def test_maintenance_mode(
self,
default_wallet: WalletInfo,
simple_object_size: ObjectSize,
cluster_state_controller: ClusterStateController,
restore_node_status: list[ClusterNode],
):
with reporter.step("Create container and create\put object"):
cid = create_container(
wallet=default_wallet,
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
rule="REP 1 CBF 1",
)
nodes_with_container = search_nodes_with_container(
wallet=default_wallet,
cid=cid,
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
cluster=self.cluster,
)
node_under_test = nodes_with_container[0]
endpoint = node_under_test.storage_node.get_rpc_endpoint()
file_path = generate_file(simple_object_size.value)
oid = put_object(
wallet=default_wallet,
path=file_path,
cid=cid,
shell=self.shell,
endpoint=endpoint,
)
with reporter.step("Set node status to 'maintenance'"):
restore_node_status.append(node_under_test)
cluster_state_controller.set_node_status(node_under_test, default_wallet, NodeStatus.MAINTENANCE)
node_under_maintenance_error = "node is under maintenance"
with reporter.step("Run basic operations with node in maintenance"):
with pytest.raises(RuntimeError, match=node_under_maintenance_error):
get_object(default_wallet, cid, oid, self.shell, endpoint)
with pytest.raises(RuntimeError, match=node_under_maintenance_error):
search_object(default_wallet, cid, self.shell, endpoint)
with pytest.raises(RuntimeError, match=node_under_maintenance_error):
delete_object(default_wallet, cid, oid, self.shell, endpoint)
with pytest.raises(RuntimeError, match=node_under_maintenance_error):
put_object(default_wallet, file_path, cid, self.shell, endpoint)
with reporter.step("Run basic operations with node not in maintenance"):
other_nodes = list(set(self.cluster.cluster_nodes) - set(nodes_with_container))
endpoint = other_nodes[0].storage_node.get_rpc_endpoint()
with pytest.raises(RuntimeError, match=OBJECT_NOT_FOUND):
get_object(default_wallet, cid, oid, self.shell, endpoint)
search_object(default_wallet, cid, self.shell, endpoint)
with pytest.raises(RuntimeError, match=OBJECT_NOT_FOUND):
delete_object(default_wallet, cid, oid, self.shell, endpoint)
with pytest.raises(RuntimeError, match=node_under_maintenance_error):
put_object(default_wallet, file_path, cid, self.shell, endpoint)
@pytest.mark.sanity
@allure.title("MAINTENANCE and OFFLINE mode transitions")
def test_mode_transitions(
self,
cluster_state_controller: ClusterStateController,
node_under_test: ClusterNode,
default_wallet: WalletInfo,
frostfs_cli: FrostfsCli,
restore_node_status: list[ClusterNode],
):
restore_node_status.append(node_under_test)
alive_nodes = list(set(self.cluster.cluster_nodes) - {node_under_test})
alive_storage_node = alive_nodes[0].storage_node
alive_rpc_endpoint = alive_storage_node.get_rpc_endpoint()
with reporter.step("Set node status to 'offline'"):
cluster_state_controller.set_node_status(node_under_test, default_wallet, NodeStatus.OFFLINE)
with reporter.step("Check node status is 'offline' after update the network map"):
self.check_node_status(NodeStatus.OFFLINE, node_under_test, frostfs_cli, alive_rpc_endpoint)
with reporter.step("Restart storage service"):
cluster_state_controller.stop_storage_service(node_under_test)
cluster_state_controller.start_storage_service(node_under_test)
with reporter.step("Tick 2 epochs"):
self.tick_epochs(2, alive_storage_node, 2)
with reporter.step("Check node status is 'online' after storage service restart"):
self.check_node_status(NodeStatus.ONLINE, node_under_test, frostfs_cli, alive_rpc_endpoint)
with reporter.step("Set node status to 'maintenance'"):
cluster_state_controller.set_node_status(node_under_test, default_wallet, NodeStatus.MAINTENANCE)
with reporter.step("Restart storage service"):
cluster_state_controller.stop_storage_service(node_under_test)
cluster_state_controller.start_storage_service(node_under_test)
with reporter.step("Tick 2 epochs"):
self.tick_epochs(2, alive_storage_node, 2)
with reporter.step("Check node staus is 'maintenance' after storage service restart"):
self.check_node_status(NodeStatus.MAINTENANCE, node_under_test, frostfs_cli, alive_rpc_endpoint)
with reporter.step("Set node status to 'offline'"):
cluster_state_controller.set_node_status(node_under_test, default_wallet, NodeStatus.OFFLINE)
with reporter.step("Stop storage service"):
cluster_state_controller.stop_storage_service(node_under_test)
with reporter.step("Tick 2 epochs"):
self.tick_epochs(2, alive_storage_node, 2)
with reporter.step("Start storage service"):
cluster_state_controller.start_storage_service(node_under_test)
with reporter.step("Tick 2 epochs"):
self.tick_epochs(2, alive_storage_node, 2)
with reporter.step("Check node status is 'online' after storage service start"):
self.check_node_status(NodeStatus.ONLINE, node_under_test, frostfs_cli, alive_rpc_endpoint)
with reporter.step("Set node status to 'maintenance'"):
cluster_state_controller.set_node_status(node_under_test, default_wallet, NodeStatus.MAINTENANCE)
with reporter.step("Stop storage service"):
cluster_state_controller.stop_storage_service(node_under_test)
with reporter.step("Tick 2 epochs"):
self.tick_epochs(2, alive_storage_node, 2)
with reporter.step("Start storage service"):
cluster_state_controller.start_storage_service(node_under_test)
with reporter.step("Check node status is 'maintenance'"):
self.check_node_status(NodeStatus.MAINTENANCE, node_under_test, frostfs_cli, alive_rpc_endpoint)
with reporter.step("Tick 2 epochs"):
self.tick_epochs(2, alive_storage_node, 2)
with reporter.step("Check node status is 'maintenance'"):
self.check_node_status(NodeStatus.MAINTENANCE, node_under_test, frostfs_cli, alive_rpc_endpoint)
@allure.title("A node cannot go into maintenance if maintenance is prohibited globally in the network")
def test_maintenance_globally_forbidden(
self,
cluster_state_controller: ClusterStateController,
node_under_test: ClusterNode,
frostfs_cli_remote: FrostfsCli,
default_wallet: WalletInfo,
restore_node_status: list[ClusterNode],
):
restore_node_status.append(node_under_test)
control_endpoint = node_under_test.service(StorageNode).get_control_endpoint()
with reporter.step("Set MaintenanceModeAllowed = false"):
cluster_state_controller.set_maintenance_mode_allowed("false", node_under_test)
with reporter.step("Set node status to 'maintenance'"):
with pytest.raises(RuntimeError, match="maintenance mode is not allowed by the network"):
frostfs_cli_remote.control.set_status(endpoint=control_endpoint, status="maintenance")
with reporter.step("Set MaintenanceModeAllowed = true"):
cluster_state_controller.set_maintenance_mode_allowed("true", node_under_test)
with reporter.step("Set node status to 'maintenance'"):
cluster_state_controller.set_node_status(node_under_test, default_wallet, NodeStatus.MAINTENANCE)


@ -0,0 +1,214 @@
import math
import allure
from frostfs_testlib.testing.parallel import parallel
import pytest
from frostfs_testlib import reporter
from frostfs_testlib.steps.cli.container import create_container, delete_container, search_nodes_with_container, wait_for_container_deletion
from frostfs_testlib.steps.cli.object import delete_object, head_object, put_object_to_random_node
from frostfs_testlib.steps.metrics import calc_metrics_count_from_stdout, check_metrics_counter, get_metrics_value
from frostfs_testlib.steps.storage_policy import get_nodes_with_object
from frostfs_testlib.storage.cluster import Cluster, ClusterNode
from frostfs_testlib.storage.dataclasses.object_size import ObjectSize
from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
from frostfs_testlib.testing.cluster_test_base import ClusterTestBase
from frostfs_testlib.utils.file_utils import generate_file
from ...helpers.utility import are_numbers_similar
@pytest.mark.nightly
@pytest.mark.container
class TestContainerMetrics(ClusterTestBase):
@reporter.step("Put object to container: {cid}")
def put_object_parallel(self, file_path: str, wallet: WalletInfo, cid: str):
oid = put_object_to_random_node(wallet, file_path, cid, self.shell, self.cluster)
return oid
@reporter.step("Get metrics value from node")
def get_metrics_search_by_greps_parallel(self, node: ClusterNode, **greps):
try:
content_stdout = node.metrics.storage.get_metrics_search_by_greps(**greps)
return calc_metrics_count_from_stdout(content_stdout)
except Exception:
return None
@allure.title("Container metrics (obj_size={object_size},policy={policy})")
@pytest.mark.parametrize("placement_policy, policy", [("REP 2 IN X CBF 2 SELECT 2 FROM * AS X", "REP"), ("EC 1.1 CBF 1", "EC")])
def test_container_metrics(
self,
object_size: ObjectSize,
max_object_size: int,
default_wallet: WalletInfo,
cluster: Cluster,
placement_policy: str,
policy: str,
):
file_path = generate_file(object_size.value)
copies = 2 if policy == "REP" else 1
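# REP 2 keeps two full user copies; with EC 1.1 a single logical user copy is stored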
object_chunks = 1
link_object = 0
with reporter.step(f"Create container with policy {placement_policy}"):
cid = create_container(default_wallet, self.shell, cluster.default_rpc_endpoint, placement_policy)
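# An object larger than max_object_size is split into chunks; the expected counter below also accounts for one link object per container node (inferred from the counter math)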
if object_size.value > max_object_size:
object_chunks = math.ceil(object_size.value / max_object_size)
link_object = len(search_nodes_with_container(default_wallet, cid, self.shell, cluster.default_rpc_endpoint, cluster))
with reporter.step("Put object to random node"):
oid = put_object_to_random_node(
wallet=default_wallet,
path=file_path,
cid=cid,
shell=self.shell,
cluster=cluster,
)
with reporter.step("Get object nodes"):
object_storage_nodes = get_nodes_with_object(cid, oid, self.shell, cluster.storage_nodes)
object_nodes = [cluster_node for cluster_node in cluster.cluster_nodes if cluster_node.storage_node in object_storage_nodes]
with reporter.step("Check metric appears in node where the object is located"):
count_metrics = (object_chunks * copies) + link_object
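# Assuming 'EC data.parity' notation, EC 1.1 stores one data chunk plus one parity chunk per object, hence the factor of 2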
if policy == "EC":
count_metrics = (object_chunks * 2) + link_object
check_metrics_counter(object_nodes, counter_exp=count_metrics, command="container_objects_total", cid=cid, type="phy")
check_metrics_counter(object_nodes, counter_exp=count_metrics, command="container_objects_total", cid=cid, type="logic")
check_metrics_counter(object_nodes, counter_exp=copies, command="container_objects_total", cid=cid, type="user")
with reporter.step("Delete file, wait until gc remove object"):
delete_object(default_wallet, cid, oid, self.shell, cluster.default_rpc_endpoint)
with reporter.step(f"Check container metrics 'the counter should equal {len(object_nodes)}' in object nodes"):
check_metrics_counter(object_nodes, counter_exp=len(object_nodes), command="container_objects_total", cid=cid, type="phy")
check_metrics_counter(object_nodes, counter_exp=len(object_nodes), command="container_objects_total", cid=cid, type="logic")
check_metrics_counter(object_nodes, counter_exp=0, command="container_objects_total", cid=cid, type="user")
with reporter.step("Check metrics(Phy, Logic, User) in each nodes"):
# Phy and Logic metrics are 4, because in rule 'CBF 2 SELECT 2 FROM', cbf2*sel2=4
expect_metrics = 4 if policy == "REP" else 2
check_metrics_counter(cluster.cluster_nodes, counter_exp=expect_metrics, command="container_objects_total", cid=cid, type="phy")
check_metrics_counter(
cluster.cluster_nodes, counter_exp=expect_metrics, command="container_objects_total", cid=cid, type="logic"
)
check_metrics_counter(cluster.cluster_nodes, counter_exp=0, command="container_objects_total", cid=cid, type="user")
@allure.title("Container size metrics (obj_size={object_size},policy={policy})")
@pytest.mark.parametrize("placement_policy, policy", [("REP 2 IN X CBF 2 SELECT 2 FROM * AS X", "REP"), ("EC 1.1 CBF 1", "EC")])
def test_container_size_metrics(
self,
object_size: ObjectSize,
default_wallet: WalletInfo,
placement_policy: str,
policy: str,
):
file_path = generate_file(object_size.value)
with reporter.step(f"Create container with policy {policy}"):
cid = create_container(default_wallet, self.shell, self.cluster.default_rpc_endpoint, placement_policy)
with reporter.step("Put object to random node"):
oid = put_object_to_random_node(
wallet=default_wallet,
path=file_path,
cid=cid,
shell=self.shell,
cluster=self.cluster,
)
with reporter.step("Get object nodes"):
object_storage_nodes = get_nodes_with_object(cid, oid, self.shell, self.cluster.storage_nodes)
object_nodes = [
cluster_node for cluster_node in self.cluster.cluster_nodes if cluster_node.storage_node in object_storage_nodes
]
with reporter.step("Check metric appears in all node where the object is located"):
act_metric = sum(
[get_metrics_value(node, command="frostfs_node_engine_container_size_bytes", cid=cid) for node in object_nodes]
)
assert (act_metric // 2) == object_size.value
with reporter.step("Delete file, wait until gc remove object"):
id_tombstone = delete_object(default_wallet, cid, oid, self.shell, self.cluster.default_rpc_endpoint)
tombstone = head_object(default_wallet, cid, id_tombstone, self.shell, self.cluster.default_rpc_endpoint)
with reporter.step(f"Check container size metrics"):
act_metric = get_metrics_value(object_nodes[0], command="frostfs_node_engine_container_size_bytes", cid=cid)
assert act_metric == int(tombstone["header"]["payloadLength"])
@allure.title("Container size metrics put {objects_count} objects (obj_size={object_size})")
@pytest.mark.parametrize("objects_count", [5, 10, 20])
def test_container_size_metrics_more_objects(
self,
object_size: ObjectSize,
default_wallet: WalletInfo,
objects_count: int
):
with reporter.step(f"Create container"):
cid = create_container(default_wallet, self.shell, self.cluster.default_rpc_endpoint)
with reporter.step(f"Put {objects_count} objects"):
files_path = [generate_file(object_size.value) for _ in range(objects_count)]
futures = parallel(self.put_object_parallel, files_path, wallet=default_wallet, cid=cid)
oids = [future.result() for future in futures]
with reporter.step("Check metric appears in all nodes"):
metric_values = [get_metrics_value(node, command="frostfs_node_engine_container_size_bytes", cid=cid) for node in self.cluster.cluster_nodes]
actual_value = sum(metric_values) // 2  # with policy REP 2 every byte is stored twice, so halve the sum
expected_value = object_size.value * objects_count
assert are_numbers_similar(actual_value, expected_value, tolerance_percentage=2), "container size metric value is incorrect"
with reporter.step("Delete file, wait until gc remove object"):
tombstones_size = 0
for oid in oids:
tombstone_id = delete_object(default_wallet, cid, oid, self.shell, self.cluster.default_rpc_endpoint)
tombstone = head_object(default_wallet, cid, tombstone_id, self.shell, self.cluster.default_rpc_endpoint)
tombstones_size += int(tombstone["header"]["payloadLength"])
with reporter.step(f"Check container size metrics, 'should be positive in all nodes'"):
futures = parallel(get_metrics_value, self.cluster.cluster_nodes, command="frostfs_node_engine_container_size_bytes", cid=cid)
metrics_value_nodes = [future.result() for future in futures]
for act_metric in metrics_value_nodes:
assert act_metric >= 0, "Metrics value is negative"
assert sum(metrics_value_nodes) // len(self.cluster.cluster_nodes) == tombstones_size, "tombstone size of objects is not correct"
@allure.title("Container metrics (policy={policy})")
@pytest.mark.parametrize("placement_policy, policy", [("REP 2 IN X CBF 2 SELECT 2 FROM * AS X", "REP"), ("EC 1.1 CBF 1", "EC")])
def test_container_metrics_delete_complex_objects(
self,
complex_object_size: ObjectSize,
default_wallet: WalletInfo,
cluster: Cluster,
placement_policy: str,
policy: str
):
copies = 2 if policy == "REP" else 1
objects_count = 2
metric_name = "frostfs_node_engine_container_objects_total"
with reporter.step(f"Create container"):
cid = create_container(default_wallet, self.shell, cluster.default_rpc_endpoint, rule=placement_policy)
with reporter.step(f"Put {objects_count} objects"):
files_path = [generate_file(complex_object_size.value) for _ in range(objects_count)]
futures = parallel(self.put_object_parallel, files_path, wallet=default_wallet, cid=cid)
oids = [future.result() for future in futures]
with reporter.step(f"Check metrics value in each nodes, should be {objects_count} for 'user'"):
check_metrics_counter(cluster.cluster_nodes, counter_exp=objects_count * copies, command=metric_name, cid=cid, type="user")
with reporter.step("Delete objects and container"):
for oid in oids:
delete_object(default_wallet, cid, oid, self.shell, cluster.default_rpc_endpoint)
delete_container(default_wallet, cid, self.shell, cluster.default_rpc_endpoint)
with reporter.step("Tick epoch and check container was deleted"):
self.tick_epoch()
wait_for_container_deletion(default_wallet, cid, shell=self.shell, endpoint=cluster.default_rpc_endpoint)
with reporter.step(f"Check metrics value in each nodes, should not be show any result"):
futures = parallel(self.get_metrics_search_by_greps_parallel, cluster.cluster_nodes, command=metric_name, cid=cid)
metrics_results = [future.result() for future in futures if future.result() is not None]
assert len(metrics_results) == 0, f"Metrics value is not empty in Prometheus, actual value in nodes: {metrics_results}"


@ -0,0 +1,114 @@
import random
import re
import allure
import pytest
from frostfs_testlib import reporter
from frostfs_testlib.steps.cli.container import create_container
from frostfs_testlib.steps.cli.object import delete_object, put_object, put_object_to_random_node
from frostfs_testlib.steps.metrics import check_metrics_counter, get_metrics_value
from frostfs_testlib.steps.storage_policy import get_nodes_with_object
from frostfs_testlib.storage.cluster import Cluster, ClusterNode
from frostfs_testlib.storage.dataclasses.object_size import ObjectSize
from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
from frostfs_testlib.testing.cluster_test_base import ClusterTestBase
from frostfs_testlib.testing.test_control import wait_for_success
from frostfs_testlib.utils.file_utils import generate_file
@pytest.mark.nightly
class TestGarbageCollectorMetrics(ClusterTestBase):
@wait_for_success(interval=10)
def check_metrics_in_node(self, cluster_node: ClusterNode, counter_exp: int, **metrics_greps: str):
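# Retried via wait_for_success: counters are exposed asynchronously, so poll until the node reports the expected value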
counter_act = 0
try:
metric_result = cluster_node.metrics.storage.get_metrics_search_by_greps(**metrics_greps)
counter_act += self.calc_metrics_count_from_stdout(metric_result.stdout)
except RuntimeError:
pass
assert counter_act == counter_exp, f"Expected: {counter_exp}, Actual: {counter_act} in node: {cluster_node}"
@staticmethod
def calc_metrics_count_from_stdout(metric_result_stdout: str):
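# Prometheus exposition lines look like 'metric_name{labels} value'; pick the numeric value after the closing brace and sum across matches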
result = re.findall(r"}\s(\d+)", metric_result_stdout)
return sum(map(int, result))
@allure.title("Garbage collector expire_at object")
def test_garbage_collector_metrics_expire_at_object(self, simple_object_size: ObjectSize, default_wallet: WalletInfo, cluster: Cluster):
file_path = generate_file(simple_object_size.value)
placement_policy = "REP 2 IN X CBF 2 SELECT 2 FROM * AS X"
metrics_step = 1
with reporter.step("Get current garbage collector metrics for each nodes"):
metrics_counter = {}
for node in cluster.cluster_nodes:
metrics_counter[node] = get_metrics_value(node, command="frostfs_node_garbage_collector_marked_for_removal_objects_total")
with reporter.step(f"Create container with policy {placement_policy}"):
cid = create_container(default_wallet, self.shell, cluster.default_rpc_endpoint, placement_policy)
with reporter.step("Put object to random node with expire_at"):
current_epoch = self.get_epoch()
oid = put_object_to_random_node(
default_wallet,
file_path,
cid,
self.shell,
cluster,
expire_at=current_epoch + 1,
)
with reporter.step("Get object nodes"):
object_storage_nodes = get_nodes_with_object(cid, oid, self.shell, cluster.storage_nodes)
object_nodes = [cluster_node for cluster_node in cluster.cluster_nodes if cluster_node.storage_node in object_storage_nodes]
with reporter.step("Tick Epoch"):
self.tick_epochs(epochs_to_tick=2, wait_block=2)
with reporter.step(f"Check garbage collector metrics 'the counter should increase by {metrics_step}' in object nodes"):
for node in object_nodes:
metrics_counter[node] += metrics_step
for node, counter in metrics_counter.items():
check_metrics_counter(
[node],
counter_exp=counter,
command="frostfs_node_garbage_collector_marked_for_removal_objects_total",
)
@allure.title("Garbage collector delete object")
def test_garbage_collector_metrics_deleted_objects(self, simple_object_size: ObjectSize, default_wallet: WalletInfo, cluster: Cluster):
file_path = generate_file(simple_object_size.value)
placement_policy = "REP 2 IN X CBF 2 SELECT 2 FROM * AS X"
metrics_step = 1
with reporter.step("Get current garbage collector metrics for each nodes"):
metrics_counter = {}
for node in cluster.cluster_nodes:
metrics_counter[node] = get_metrics_value(node, command="frostfs_node_garbage_collector_deleted_objects_total")
with reporter.step(f"Create container with policy {placement_policy}"):
cid = create_container(default_wallet, self.shell, cluster.default_rpc_endpoint, placement_policy)
with reporter.step("Put object to random node"):
oid = put_object_to_random_node(
default_wallet,
file_path,
cid,
self.shell,
cluster,
)
with reporter.step("Get object nodes"):
object_storage_nodes = get_nodes_with_object(cid, oid, self.shell, cluster.storage_nodes)
object_nodes = [cluster_node for cluster_node in cluster.cluster_nodes if cluster_node.storage_node in object_storage_nodes]
with reporter.step("Delete file, wait until gc remove object"):
delete_object(default_wallet, cid, oid, self.shell, node.storage_node.get_rpc_endpoint())
with reporter.step(f"Check garbage collector metrics 'the counter should increase by {metrics_step}'"):
for node in object_nodes:
exp_metrics_counter = metrics_counter[node] + metrics_step
check_metrics_counter(
[node], counter_exp=exp_metrics_counter, command="frostfs_node_garbage_collector_deleted_objects_total"
)


@ -0,0 +1,207 @@
import random
import allure
import pytest
from frostfs_testlib import reporter
from frostfs_testlib.healthcheck.interfaces import Healthcheck
from frostfs_testlib.steps.cli.container import create_container, get_container, list_containers
from frostfs_testlib.steps.cli.object import get_object, head_object, put_object, search_object
from frostfs_testlib.steps.cli.tree import get_tree_list
from frostfs_testlib.steps.metrics import check_metrics_counter, get_metrics_value
from frostfs_testlib.storage.cluster import Cluster
from frostfs_testlib.storage.controllers.cluster_state_controller import ClusterStateController
from frostfs_testlib.storage.controllers.state_managers.config_state_manager import ConfigStateManager
from frostfs_testlib.storage.dataclasses.frostfs_services import StorageNode
from frostfs_testlib.storage.dataclasses.object_size import ObjectSize
from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
from frostfs_testlib.testing.cluster_test_base import ClusterTestBase
from frostfs_testlib.utils.file_utils import generate_file
@pytest.mark.nightly
class TestGRPCMetrics(ClusterTestBase):
@pytest.fixture
def disable_policer(self, cluster_state_controller: ClusterStateController):
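# The policer replicates objects in the background and would skew gRPC call counters, so disable it for the test (reverted on teardown)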
config_manager = cluster_state_controller.manager(ConfigStateManager)
config_manager.set_on_all_nodes(StorageNode, {"policer:unsafe_disable": "true"})
yield
cluster_state_controller.manager(ConfigStateManager).revert_all()
@allure.title("GRPC metrics container operations")
def test_grpc_metrics_container_operations(self, default_wallet: WalletInfo, cluster: Cluster):
placement_policy = "REP 2 IN X CBF 1 SELECT 4 FROM * AS X"
with reporter.step("Select random node"):
node = random.choice(cluster.cluster_nodes)
with reporter.step("Get current gRPC metrics for method 'Put'"):
metrics_counter_put = get_metrics_value(node, command="grpc_server_handled_total", service="ContainerService", method="Put")
with reporter.step(f"Create container with policy {placement_policy}"):
cid = create_container(default_wallet, self.shell, node.storage_node.get_rpc_endpoint(), placement_policy)
with reporter.step(f"Check gRPC metrics method 'Put', 'the counter should increase by 1'"):
metrics_counter_put += 1
check_metrics_counter(
[node],
counter_exp=metrics_counter_put,
command="grpc_server_handled_total",
service="ContainerService",
method="Put",
)
with reporter.step("Get current gRPC metrics for method 'Get'"):
metrics_counter_get = get_metrics_value(node, command="grpc_server_handled_total", service="ContainerService", method="Get")
with reporter.step(f"Get container"):
get_container(default_wallet, cid, self.shell, node.storage_node.get_rpc_endpoint())
with reporter.step(f"Check gRPC metrics method=Get, 'the counter should increase by 1'"):
metrics_counter_get += 1
check_metrics_counter(
[node],
counter_exp=metrics_counter_get,
command="grpc_server_handled_total",
service="ContainerService",
method="Get",
)
with reporter.step("Get current gRPC metrics for method 'List'"):
metrics_counter_list = get_metrics_value(node, command="grpc_server_handled_total", service="ContainerService", method="List")
with reporter.step(f"Get container list"):
list_containers(default_wallet, self.shell, node.storage_node.get_rpc_endpoint())
with reporter.step(f"Check gRPC metrics method=List, 'the counter should increase by 1'"):
metrics_counter_list += 1
check_metrics_counter(
[node],
counter_exp=metrics_counter_list,
command="grpc_server_handled_total",
service="ContainerService",
method="List",
)
@allure.title("GRPC metrics object operations")
def test_grpc_metrics_object_operations(
self, simple_object_size: ObjectSize, default_wallet: WalletInfo, cluster: Cluster, disable_policer
):
file_path = generate_file(simple_object_size.value)
placement_policy = "REP 2 IN X CBF 1 SELECT 4 FROM * AS X"
with reporter.step("Select random node"):
node = random.choice(cluster.cluster_nodes)
with reporter.step(f"Create container with policy {placement_policy}"):
cid = create_container(default_wallet, self.shell, node.storage_node.get_rpc_endpoint(), placement_policy)
with reporter.step("Get current gRPC metrics for method 'Put'"):
metrics_counter_put = get_metrics_value(node, command="grpc_server_handled_total", service="ObjectService", method="Put")
with reporter.step("Put object to selected node"):
oid = put_object(default_wallet, file_path, cid, self.shell, node.storage_node.get_rpc_endpoint())
with reporter.step(f"Check gRPC metrics method 'Put', 'the counter should increase by 1'"):
metrics_counter_put += 1
check_metrics_counter(
[node],
counter_exp=metrics_counter_put,
command="grpc_server_handled_total",
service="ObjectService",
method="Put",
)
with reporter.step("Get current gRPC metrics for method 'Get'"):
metrics_counter_get = get_metrics_value(node, command="grpc_server_handled_total", service="ObjectService", method="Get")
with reporter.step(f"Get object"):
get_object(default_wallet, cid, oid, self.shell, node.storage_node.get_rpc_endpoint())
with reporter.step(f"Check gRPC metrics method=Get, 'the counter should increase by 1'"):
metrics_counter_get += 1
check_metrics_counter(
[node],
counter_exp=metrics_counter_get,
command="grpc_server_handled_total",
service="ObjectService",
method="Get",
)
with reporter.step("Get current gRPC metrics for method 'Search'"):
metrics_counter_search = get_metrics_value(node, command="grpc_server_handled_total", service="ObjectService", method="Search")
with reporter.step(f"Search object"):
search_object(default_wallet, cid, self.shell, node.storage_node.get_rpc_endpoint())
with reporter.step(f"Check gRPC metrics method=Search, 'the counter should increase by 1'"):
metrics_counter_search += 1
check_metrics_counter(
[node],
counter_exp=metrics_counter_search,
command="grpc_server_handled_total",
service="ObjectService",
method="Search",
)
with reporter.step("Get current gRPC metrics for method 'Head'"):
metrics_counter_head = get_metrics_value(node, command="grpc_server_handled_total", service="ObjectService", method="Head")
with reporter.step(f"Head object"):
head_object(default_wallet, cid, oid, self.shell, node.storage_node.get_rpc_endpoint())
with reporter.step(f"Check gRPC metrics method=Head, 'the counter should increase by 1'"):
metrics_counter_head += 1
check_metrics_counter(
[node],
counter_exp=metrics_counter_head,
command="grpc_server_handled_total",
service="ObjectService",
method="Head",
)
@allure.title("GRPC metrics Tree healthcheck")
def test_grpc_metrics_tree_service(self, cluster: Cluster, healthcheck: Healthcheck):
with reporter.step("Select random node"):
node = random.choice(cluster.cluster_nodes)
with reporter.step("Get current gRPC metrics for Healthcheck"):
metrics_counter = get_metrics_value(node, command="grpc_server_handled_total", service="TreeService", method="Healthcheck")
with reporter.step("Query Tree healthcheck status"):
healthcheck.tree_healthcheck(node)
with reporter.step(f"Check gRPC metrics for Healthcheck, 'the counter should increase'"):
check_metrics_counter(
[node],
">",
metrics_counter,
command="grpc_server_handled_total",
service="TreeService",
method="Healthcheck",
)
@allure.title("GRPC metrics Tree list")
def test_grpc_metrics_tree_list(self, default_wallet: WalletInfo, cluster: Cluster):
placement_policy = "REP 2 IN X CBF 1 SELECT 4 FROM * AS X"
with reporter.step("Select random node"):
node = random.choice(cluster.cluster_nodes)
with reporter.step(f"Create container with policy {placement_policy}"):
cid = create_container(default_wallet, self.shell, node.storage_node.get_rpc_endpoint(), placement_policy)
with reporter.step("Get current gRPC metrics for Tree List"):
metrics_counter = get_metrics_value(node, command="grpc_server_handled_total", service="TreeService", method="TreeList")
with reporter.step("Query Tree List"):
get_tree_list(default_wallet, cid, self.shell, node.storage_node.get_rpc_endpoint())
with reporter.step(f"Check gRPC metrics for Tree List, 'the counter should increase by 1'"):
metrics_counter += 1
check_metrics_counter(
[node],
counter_exp=metrics_counter,
command="grpc_server_handled_total",
service="TreeService",
method="TreeList",
)


@ -0,0 +1,68 @@
import random
import re
from datetime import datetime, timezone
import allure
import pytest
from frostfs_testlib import reporter
from frostfs_testlib.steps.metrics import get_metrics_value
from frostfs_testlib.storage.cluster import Cluster, ClusterNode
from frostfs_testlib.storage.controllers.cluster_state_controller import ClusterStateController
from frostfs_testlib.storage.controllers.state_managers.config_state_manager import ConfigStateManager
from frostfs_testlib.storage.dataclasses.frostfs_services import StorageNode
from frostfs_testlib.testing.cluster_test_base import ClusterTestBase
from frostfs_testlib.testing.test_control import wait_for_success
@pytest.mark.nightly
class TestLogsMetrics(ClusterTestBase):
@pytest.fixture
def revert_all(self, cluster_state_controller: ClusterStateController):
yield
cluster_state_controller.manager(ConfigStateManager).revert_all()
def restart_storage_service(self, cluster_state_controller: ClusterStateController) -> datetime:
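# Restart every storage service and remember the restart moment so only log entries written after it are counted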
config_manager = cluster_state_controller.manager(ConfigStateManager)
config_manager.csc.stop_services_of_type(StorageNode)
restart_time = datetime.now(timezone.utc)
config_manager.csc.start_services_of_type(StorageNode)
return restart_time
@wait_for_success(interval=10)
def check_metrics_in_node(self, cluster_node: ClusterNode, restart_time: datetime, log_priority: str = None, **metrics_greps):
current_time = datetime.now(timezone.utc)
counter_metrics = get_metrics_value(cluster_node, **metrics_greps)
counter_logs = self.get_count_logs_by_level(cluster_node, metrics_greps.get("level"), restart_time, current_time, log_priority)
assert counter_logs == counter_metrics, f"counter_logs: {counter_logs}, counter_metrics: {counter_metrics} in node: {cluster_node}"
@staticmethod
def get_count_logs_by_level(cluster_node: ClusterNode, log_level: str, after_time: datetime, until_time: datetime, log_priority: str):
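# Count journal entries of the given level written between the restart and now; the regex matches the level field of a log line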
count_logs = 0
try:
logs = cluster_node.host.get_filtered_logs(
log_level, unit="frostfs-storage", since=after_time, until=until_time, priority=log_priority
)
result = re.findall(rf":\s+{log_level}\s+", logs)
count_logs += len(result)
except RuntimeError:
pass
return count_logs
@allure.title("Metrics for the log counter")
def test_log_counter_metrics(self, cluster_state_controller: ClusterStateController, revert_all):
restart_time = self.restart_storage_service(cluster_state_controller)
with reporter.step("Select random node"):
node = random.choice(self.cluster.cluster_nodes)
with reporter.step(f"Check metrics count logs with level 'info'"):
self.check_metrics_in_node(
node,
restart_time,
log_priority="6..6",
command="frostfs_node_logger_entry_count",
level="info",
dropped="false",
)
with reporter.step(f"Check metrics count logs with level 'error'"):
self.check_metrics_in_node(node, restart_time, command="frostfs_node_logger_entry_count", level="error", dropped="false")


@ -0,0 +1,294 @@
import random
import re
import allure
import pytest
from frostfs_testlib import reporter
from frostfs_testlib.steps.cli.container import create_container, delete_container, search_nodes_with_container
from frostfs_testlib.steps.cli.object import delete_object, lock_object, put_object, put_object_to_random_node
from frostfs_testlib.steps.metrics import check_metrics_counter, get_metrics_value
from frostfs_testlib.steps.storage_policy import get_nodes_with_object
from frostfs_testlib.storage.cluster import Cluster, ClusterNode
from frostfs_testlib.storage.controllers.cluster_state_controller import ClusterStateController
from frostfs_testlib.storage.dataclasses.object_size import ObjectSize
from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
from frostfs_testlib.testing.cluster_test_base import ClusterTestBase
from frostfs_testlib.utils.file_utils import generate_file
@pytest.mark.nightly
class TestObjectMetrics(ClusterTestBase):
@allure.title("Object metrics of removed container (obj_size={object_size})")
def test_object_metrics_removed_container(self, object_size: ObjectSize, default_wallet: WalletInfo, cluster: Cluster):
file_path = generate_file(object_size.value)
placement_policy = "REP 2 IN X CBF 2 SELECT 2 FROM * AS X"
copies = 2
with reporter.step(f"Create container with policy {placement_policy}"):
cid = create_container(default_wallet, self.shell, cluster.default_rpc_endpoint, placement_policy)
with reporter.step("Put object to random node"):
oid = put_object_to_random_node(default_wallet, file_path, cid, self.shell, cluster)
with reporter.step("Check metric appears in node where the object is located"):
object_storage_nodes = get_nodes_with_object(cid, oid, self.shell, cluster.storage_nodes)
object_nodes = [cluster_node for cluster_node in cluster.cluster_nodes if cluster_node.storage_node in object_storage_nodes]
check_metrics_counter(
object_nodes,
counter_exp=copies,
command="frostfs_node_engine_container_objects_total",
cid=cid,
type="user",
)
with reporter.step("Delete container"):
delete_container(default_wallet, cid, shell=self.shell, endpoint=self.cluster.default_rpc_endpoint)
with reporter.step("Tick Epoch"):
self.tick_epochs(epochs_to_tick=2, wait_block=2)
with reporter.step("Check metrics of removed containers doesn't appear in the storage node"):
check_metrics_counter(object_nodes, counter_exp=0, command="frostfs_node_engine_container_objects_total", cid=cid, type="user")
check_metrics_counter(object_nodes, counter_exp=0, command="frostfs_node_engine_container_size_byte", cid=cid)
for node in object_nodes:
all_metrics = node.metrics.storage.get_metrics_search_by_greps(command="frostfs_node_engine_container_size_byte")
assert cid not in all_metrics.stdout, "metrics of removed containers shouldn't appear in the storage node"
@allure.title("Object metrics, locked object (obj_size={object_size}, policy={placement_policy})")
@pytest.mark.parametrize("placement_policy", ["REP 1 IN X CBF 1 SELECT 1 FROM * AS X", "REP 2 IN X CBF 2 SELECT 2 FROM * AS X"])
def test_object_metrics_blocked_object(
self, object_size: ObjectSize, default_wallet: WalletInfo, cluster: Cluster, placement_policy: str
):
file_path = generate_file(object_size.value)
metric_step = int(re.search(r"REP\s(\d+)", placement_policy).group(1))
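# The replica count from the REP policy equals the expected per-operation change of the 'user' counters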
with reporter.step(f"Create container with policy {placement_policy}"):
cid = create_container(default_wallet, self.shell, cluster.default_rpc_endpoint, placement_policy)
with reporter.step("Search container nodes"):
container_nodes = search_nodes_with_container(
wallet=default_wallet,
cid=cid,
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
cluster=cluster,
)
with reporter.step("Get current metrics for metric_type=user"):
objects_metric_counter = 0
for node in container_nodes:
objects_metric_counter += get_metrics_value(node, command="frostfs_node_engine_objects_total", type="user")
with reporter.step("Put object to container node"):
oid = put_object(default_wallet, file_path, cid, self.shell, container_nodes[0].storage_node.get_rpc_endpoint())
with reporter.step(f"Check metric user 'the counter should increase by {metric_step}'"):
objects_metric_counter += metric_step
check_metrics_counter(
container_nodes,
counter_exp=objects_metric_counter,
command="frostfs_node_engine_objects_total",
type="user",
)
check_metrics_counter(
container_nodes,
counter_exp=metric_step,
command="frostfs_node_engine_container_objects_total",
cid=cid,
type="user",
)
with reporter.step("Delete object"):
delete_object(default_wallet, cid, oid, self.shell, self.cluster.default_rpc_endpoint)
with reporter.step(f"Check metric user 'the counter should decrease by {metric_step}'"):
objects_metric_counter -= metric_step
check_metrics_counter(
container_nodes,
counter_exp=objects_metric_counter,
command="frostfs_node_engine_objects_total",
type="user",
)
check_metrics_counter(
container_nodes,
counter_exp=0,
command="frostfs_node_engine_container_objects_total",
cid=cid,
type="user",
)
with reporter.step("Put object and lock it to next epoch"):
oid = put_object(default_wallet, file_path, cid, self.shell, container_nodes[0].storage_node.get_rpc_endpoint())
current_epoch = self.get_epoch()
lock_object(
default_wallet,
cid,
oid,
self.shell,
container_nodes[0].storage_node.get_rpc_endpoint(),
expire_at=current_epoch + 1,
)
with reporter.step(f"Check metric user 'the counter should increase by {metric_step}'"):
objects_metric_counter += metric_step
check_metrics_counter(
container_nodes,
counter_exp=objects_metric_counter,
command="frostfs_node_engine_objects_total",
type="user",
)
check_metrics_counter(
container_nodes,
counter_exp=metric_step,
command="frostfs_node_engine_container_objects_total",
cid=cid,
type="user",
)
with reporter.step(f"Wait until remove locking 'the counter doesn't change'"):
self.tick_epochs(epochs_to_tick=2)
check_metrics_counter(
container_nodes,
counter_exp=objects_metric_counter,
command="frostfs_node_engine_objects_total",
type="user",
)
with reporter.step("Delete object"):
delete_object(default_wallet, cid, oid, self.shell, self.cluster.default_rpc_endpoint)
with reporter.step(f"Check metric user 'the counter should decrease by {metric_step}'"):
objects_metric_counter -= metric_step
check_metrics_counter(
container_nodes,
counter_exp=objects_metric_counter,
command="frostfs_node_engine_objects_total",
type="user",
)
check_metrics_counter(
container_nodes,
counter_exp=0,
command="frostfs_node_engine_container_objects_total",
cid=cid,
type="user",
)
with reporter.step("Put object with expire_at"):
current_epoch = self.get_epoch()
oid = put_object(
default_wallet,
file_path,
cid,
self.shell,
container_nodes[0].storage_node.get_rpc_endpoint(),
expire_at=current_epoch + 1,
)
with reporter.step(f"Check metric user 'the counter should increase by {metric_step}'"):
objects_metric_counter += metric_step
check_metrics_counter(
container_nodes,
counter_exp=objects_metric_counter,
command="frostfs_node_engine_objects_total",
type="user",
)
check_metrics_counter(
container_nodes,
counter_exp=metric_step,
command="frostfs_node_engine_container_objects_total",
cid=cid,
type="user",
)
with reporter.step("Tick Epoch"):
self.tick_epochs(epochs_to_tick=2)
with reporter.step(f"Check metric user 'the counter should decrease by {metric_step}'"):
objects_metric_counter -= metric_step
check_metrics_counter(
container_nodes,
counter_exp=objects_metric_counter,
command="frostfs_node_engine_objects_total",
type="user",
)
check_metrics_counter(
container_nodes,
counter_exp=0,
command="frostfs_node_engine_container_objects_total",
cid=cid,
type="user",
)
@allure.title("Object metrics, stop the node (obj_size={object_size})")
def test_object_metrics_stop_node(
self,
object_size: ObjectSize,
default_wallet: WalletInfo,
cluster_state_controller: ClusterStateController,
):
placement_policy = "REP 2 IN X CBF 2 SELECT 2 FROM * AS X"
file_path = generate_file(object_size.value)
copies = 2
with reporter.step(f"Create container with policy {placement_policy}"):
cid = create_container(default_wallet, self.shell, self.cluster.default_rpc_endpoint, placement_policy)
with reporter.step(f"Check object metrics in container 'should be zero'"):
check_metrics_counter(
self.cluster.cluster_nodes,
counter_exp=0,
command="frostfs_node_engine_container_objects_total",
type="user",
cid=cid,
)
with reporter.step("Get current metrics for each nodes"):
objects_metric_counter: dict[ClusterNode:int] = {}
for node in self.cluster.cluster_nodes:
objects_metric_counter[node] = get_metrics_value(node, command="frostfs_node_engine_objects_total", type="user")
with reporter.step("Put object"):
oid = put_object(default_wallet, file_path, cid, self.shell, self.cluster.default_rpc_endpoint)
with reporter.step("Get object nodes"):
object_storage_nodes = get_nodes_with_object(cid, oid, self.shell, self.cluster.storage_nodes)
object_nodes = [
cluster_node for cluster_node in self.cluster.cluster_nodes if cluster_node.storage_node in object_storage_nodes
]
with reporter.step(f"Check metrics in object nodes 'the counter should increase by {copies}'"):
counter_exp = sum(objects_metric_counter[node] for node in object_nodes) + copies
check_metrics_counter(object_nodes, counter_exp=counter_exp, command="frostfs_node_engine_objects_total", type="user")
check_metrics_counter(
object_nodes,
counter_exp=copies,
command="frostfs_node_engine_container_objects_total",
type="user",
cid=cid,
)
with reporter.step(f"Select node to stop"):
node_to_stop = random.choice(object_nodes)
alive_nodes = set(object_nodes).difference({node_to_stop})
with reporter.step(f"Stop the node, wait until the object is replicated to another node"):
cluster_state_controller.stop_node_host(node_to_stop, "hard")
objects_metric_counter[node_to_stop] += 1
with reporter.step(f"Check metric in alive nodes 'the counter should increase'"):
counter_exp = sum(objects_metric_counter[node] for node in alive_nodes)
check_metrics_counter(alive_nodes, ">=", counter_exp, command="frostfs_node_engine_objects_total", type="user")
with reporter.step("Start node"):
cluster_state_controller.start_node_host(node_to_stop)
with reporter.step(f"Check metric in restarted node, 'the counter doesn't change'"):
check_metrics_counter(
object_nodes,
counter_exp=copies,
command="frostfs_node_engine_container_objects_total",
type="user",
cid=cid,
)


@ -0,0 +1,170 @@
import random
import re
import allure
import pytest
from frostfs_testlib import reporter
from frostfs_testlib.resources.error_patterns import OBJECT_NOT_FOUND
from frostfs_testlib.resources.wellknown_acl import EACL_PUBLIC_READ_WRITE
from frostfs_testlib.steps.cli.container import create_container
from frostfs_testlib.steps.cli.object import get_object, put_object
from frostfs_testlib.steps.metrics import check_metrics_counter
from frostfs_testlib.steps.node_management import node_shard_list, node_shard_set_mode
from frostfs_testlib.steps.storage_policy import get_nodes_with_object
from frostfs_testlib.storage.cluster import Cluster, ClusterNode
from frostfs_testlib.storage.controllers import ShardsWatcher
from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
from frostfs_testlib.testing import parallel, wait_for_success
from frostfs_testlib.testing.cluster_test_base import ClusterTestBase
from frostfs_testlib.utils.file_utils import generate_file
@pytest.mark.nightly
class TestShardMetrics(ClusterTestBase):
@pytest.fixture()
@allure.title("Get two shards for set mode")
def two_shards_and_node(self, cluster: Cluster) -> tuple[str, str, ClusterNode]:
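# Pick two random shards on a random node; teardown switches both shards back to read-write mode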
node = random.choice(cluster.cluster_nodes)
shards = node_shard_list(node.storage_node)
two_shards = random.sample(shards, k=2)
yield two_shards[0], two_shards[1], node
for shard in two_shards:
node_shard_set_mode(node.storage_node, shard, "read-write")
node_shard_list(node.storage_node)
@pytest.fixture()
@allure.title("Revert all shards mode")
def revert_all_shards_mode(self):
yield
parallel(self.set_shard_rw_mode, self.cluster.cluster_nodes)
def set_shard_rw_mode(self, node: ClusterNode):
watcher = ShardsWatcher(node)
shards = watcher.get_shards()
for shard in shards:
watcher.set_shard_mode(shard["shard_id"], mode="read-write")
watcher.await_for_all_shards_status(status="read-write")
@staticmethod
def get_error_count_from_logs(cluster_node: ClusterNode, object_path: str, object_name: str):
error_count = 0
try:
logs = cluster_node.host.get_filtered_logs("error count", unit="frostfs-storage")
# search error logs for current object
for error_line in logs.split("\n"):
if object_path in error_line and object_name in error_line:
result = re.findall(r'"error\scount":\s(\d+)', error_line)
error_count += sum(map(int, result))
except RuntimeError:
pass
return error_count
@staticmethod
@wait_for_success(180, 30)
def get_object_path_and_name_file(oid: str, cid: str, node: ClusterNode) -> tuple[str, str]:
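# Assumed on-disk layout: the first four OID characters form nested directories, and the file itself is named '<rest of oid>.<cid>'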
oid_path = f"{oid[0]}/{oid[1]}/{oid[2]}/{oid[3]}"
object_path = None
with reporter.step("Search object file"):
node_shell = node.storage_node.host.get_shell()
data_path = node.storage_node.get_data_directory()
all_datas = node_shell.exec(f"ls -la {data_path}/data | awk '{{ print $9 }}'").stdout.strip()
for data_dir in all_datas.replace(".", "").strip().split("\n"):
check_dir = node_shell.exec(f" [ -d {data_path}/data/{data_dir}/data/{oid_path} ] && echo 1 || echo 0").stdout
if "1" in check_dir:
object_path = f"{data_path}/data/{data_dir}/data/{oid_path}"
object_name = f"{oid[4:]}.{cid}"
break
assert object_path is not None, f"{oid} object not found in directory - {data_path}/data"
return object_path, object_name
@allure.title("Metric for shard mode")
def test_shard_metrics_set_mode(self, two_shards_and_node: tuple[str, str, ClusterNode]):
metrics_counter = 1
shard1, shard2, node = two_shards_and_node
with reporter.step("Shard1 set to mode 'read-only'"):
node_shard_set_mode(node.storage_node, shard1, "read-only")
with reporter.step(f"Check shard metrics, 'the mode will change to 'READ_ONLY'"):
check_metrics_counter(
[node],
counter_exp=metrics_counter,
command="frostfs_node_engine_mode_info",
mode="READ_ONLY",
shard_id=shard1,
)
with reporter.step("Shard2 set to mode 'degraded-read-only'"):
node_shard_set_mode(node.storage_node, shard2, "degraded-read-only")
with reporter.step(f"Check shard metrics, 'the mode will change to 'DEGRADED_READ_ONLY'"):
check_metrics_counter(
[node],
counter_exp=metrics_counter,
command="frostfs_node_engine_mode_info",
mode="DEGRADED_READ_ONLY",
shard_id=shard2,
)
with reporter.step("Both shards set to mode 'read-write'"):
for shard in [shard1, shard2]:
node_shard_set_mode(node.storage_node, shard, "read-write")
with reporter.step(f"Check shard metrics, 'the mode will change to 'READ_WRITE'"):
for shard in [shard1, shard2]:
check_metrics_counter(
[node],
counter_exp=metrics_counter,
command="frostfs_node_engine_mode_info",
mode="READ_WRITE",
shard_id=shard,
)
@allure.title("Metric for error count on shard")
def test_shard_metrics_error_count(self, max_object_size: int, default_wallet: WalletInfo, cluster: Cluster, revert_all_shards_mode):
file_path = generate_file(round(max_object_size * 0.8))
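# 80% of max_object_size keeps the object simple (unsplit), so exactly one object file lands on one shard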
with reporter.step(f"Create container"):
cid = create_container(
wallet=default_wallet,
shell=self.shell,
endpoint=cluster.default_rpc_endpoint,
rule="REP 1 CBF 1",
basic_acl=EACL_PUBLIC_READ_WRITE,
)
with reporter.step("Put object"):
oid = put_object(default_wallet, file_path, cid, self.shell, cluster.default_rpc_endpoint)
with reporter.step("Get object nodes"):
object_storage_nodes = get_nodes_with_object(cid, oid, self.shell, cluster.storage_nodes)
object_nodes = [cluster_node for cluster_node in cluster.cluster_nodes if cluster_node.storage_node in object_storage_nodes]
node = random.choice(object_nodes)
with reporter.step("Search object in system."):
object_path, object_name = self.get_object_path_and_name_file(oid, cid, node)
with reporter.step("Block read file"):
node.host.get_shell().exec(f"chmod a-r {object_path}/{object_name}")
with reporter.step("Get object, expect error"):
with pytest.raises(RuntimeError, match=OBJECT_NOT_FOUND):
get_object(
wallet=default_wallet,
cid=cid,
oid=oid,
shell=self.shell,
endpoint=node.storage_node.get_rpc_endpoint(),
)
with reporter.step(f"Get shard error count from logs"):
counter = self.get_error_count_from_logs(node, object_path, object_name)
with reporter.step(f"Check shard error metrics"):
check_metrics_counter([node], counter_exp=counter, command="frostfs_node_engine_errors_total")

pytest_tests/testsuites/object/test_object_api.py Normal file → Executable file

@ -1,95 +1,578 @@
import logging
from time import sleep
import random
import sys
import allure
import pytest
from frostfs_testlib import reporter
from frostfs_testlib.resources.error_patterns import (
INVALID_LENGTH_SPECIFIER,
INVALID_OFFSET_SPECIFIER,
INVALID_RANGE_OVERFLOW,
INVALID_RANGE_ZERO_LENGTH,
OBJECT_ALREADY_REMOVED,
OUT_OF_RANGE,
)
from frostfs_testlib.shell import Shell
from frostfs_testlib.steps.cli.container import create_container, search_nodes_with_container
from frostfs_testlib.steps.cli.object import (
get_object_from_random_node,
get_range,
get_range_hash,
head_object,
put_object,
put_object_to_random_node,
search_object,
)
from frostfs_testlib.steps.complex_object_actions import get_complex_object_split_ranges
from frostfs_testlib.steps.storage_object import delete_object, delete_objects
from frostfs_testlib.steps.storage_policy import get_complex_object_copies, get_simple_object_copies
from frostfs_testlib.storage.cluster import Cluster, ClusterNode
from frostfs_testlib.storage.dataclasses.object_size import ObjectSize
from frostfs_testlib.storage.dataclasses.policy import PlacementPolicy
from frostfs_testlib.storage.dataclasses.storage_object_info import StorageObjectInfo
from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
from frostfs_testlib.testing.cluster_test_base import ClusterTestBase
from frostfs_testlib.utils.file_utils import generate_file, get_file_content, get_file_hash
logger = logging.getLogger("NeoLogger")
CLEANUP_TIMEOUT = 10
COMMON_ATTRIBUTE = {"common_key": "common_value"}
# Will upload object for each attribute set
OBJECT_ATTRIBUTES = [
None,
{"key1": 1, "key2": "abc", "common_key": "common_value"},
{"key1": 2, "common_key": "common_value"},
]
# Config for Range tests
RANGES_COUNT = 4 # by quarters
RANGE_MIN_LEN = 10
RANGE_MAX_LEN = 500
# Used for static ranges found with issues
STATIC_RANGES = {}
def generate_ranges(storage_object: StorageObjectInfo, max_object_size: int, shell: Shell, cluster: Cluster) -> list[tuple[int, int]]:
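# Build (offset, length) test ranges: quarter-sized spans, random sub-spans, and edge cases at the start and end of the object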
file_range_step = storage_object.size / RANGES_COUNT
file_ranges = []
file_ranges_to_test = []
for i in range(0, RANGES_COUNT):
file_ranges.append((int(file_range_step * i), int(file_range_step)))
# For simple object we can read all file ranges without too much time for testing
if storage_object.size < max_object_size:
file_ranges_to_test.extend(file_ranges)
# For complex object we need to fetch multiple child objects from different nodes.
else:
assert (
storage_object.size >= RANGE_MAX_LEN + max_object_size
), f"Complex object size should be at least {max_object_size + RANGE_MAX_LEN}. Current: {storage_object.size}"
file_ranges_to_test.append((RANGE_MAX_LEN, max_object_size - RANGE_MAX_LEN))
file_ranges_to_test.extend(get_complex_object_split_ranges(storage_object, shell, cluster))
# Special cases to read some bytes from start and some bytes from end of object
file_ranges_to_test.append((0, RANGE_MIN_LEN))
file_ranges_to_test.append((storage_object.size - RANGE_MIN_LEN, RANGE_MIN_LEN))
for offset, length in file_ranges:
range_length = random.randint(RANGE_MIN_LEN, RANGE_MAX_LEN)
range_start = random.randint(offset, offset + length)
file_ranges_to_test.append((range_start, min(range_length, storage_object.size - range_start)))
file_ranges_to_test.extend(STATIC_RANGES.get(storage_object.size, []))
return file_ranges_to_test
@pytest.fixture(scope="module")
def common_container(default_wallet: WalletInfo, client_shell: Shell, cluster: Cluster) -> str:
rule = "REP 1 IN X CBF 1 SELECT 1 FROM * AS X"
with reporter.step(f"Create container with {rule} and put object"):
cid = create_container(default_wallet, client_shell, cluster.default_rpc_endpoint, rule)
return cid
@pytest.fixture(scope="module")
def container_nodes(default_wallet: WalletInfo, client_shell: Shell, cluster: Cluster, common_container: str) -> list[ClusterNode]:
return search_nodes_with_container(default_wallet, common_container, client_shell, cluster.default_rpc_endpoint, cluster)
@pytest.fixture(scope="module")
def non_container_nodes(cluster: Cluster, container_nodes: list[ClusterNode]) -> list[ClusterNode]:
return list(set(cluster.cluster_nodes) - set(container_nodes))
@pytest.fixture(
# Module scope: upload/delete each file set only once
scope="module"
)
def storage_objects(
default_wallet: WalletInfo,
client_shell: Shell,
cluster: Cluster,
object_size: ObjectSize,
placement_policy: PlacementPolicy,
) -> list[StorageObjectInfo]:
wallet = default_wallet
# Separate containers for complex/simple objects to avoid side-effects
cid = create_container(wallet, shell=client_shell, rule=placement_policy.value, endpoint=cluster.default_rpc_endpoint)
file_path = generate_file(object_size.value)
file_hash = get_file_hash(file_path)
storage_objects = []
with reporter.step("Put objects"):
# We need to upload objects multiple times with different attributes
for attributes in OBJECT_ATTRIBUTES:
storage_object_id = put_object_to_random_node(
wallet=wallet,
path=file_path,
cid=cid,
shell=client_shell,
cluster=cluster,
attributes=attributes,
)
storage_object = StorageObjectInfo(cid, storage_object_id)
storage_object.size = object_size.value
storage_object.wallet = wallet
storage_object.file_path = file_path
storage_object.file_hash = file_hash
storage_object.attributes = attributes
storage_objects.append(storage_object)
yield storage_objects
# Teardown once all tests are done with the current parameter set
delete_objects(storage_objects, client_shell, cluster)
@pytest.fixture()
def expected_object_copies(placement_policy: PlacementPolicy) -> int:
if placement_policy.name == "rep":
return 2
return 4
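# Assumption behind the numbers above: the "rep" parametrization is taken to use a REP 2 policy
# (2 full copies), while the EC parametrization is assumed to be EC 3.1, i.e. 3 data chunks plus
# 1 parity chunk = 4 stored parts per object.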
@pytest.mark.nightly
@pytest.mark.sanity
@pytest.mark.grpc_api
class TestObjectApi(ClusterTestBase):
@allure.title("Storage policy by native API (obj_size={object_size}, policy={placement_policy})")
def test_object_storage_policies(
self,
storage_objects: list[StorageObjectInfo],
simple_object_size: ObjectSize,
expected_object_copies: int,
):
"""
Validate object storage policy
"""
with reporter.step("Validate storage policy for objects"):
for storage_object in storage_objects:
if storage_object.size == simple_object_size.value:
copies = get_simple_object_copies(
storage_object.wallet,
storage_object.cid,
storage_object.oid,
shell=self.shell,
nodes=self.cluster.storage_nodes,
)
else:
copies = get_complex_object_copies(
storage_object.wallet,
storage_object.cid,
storage_object.oid,
shell=self.shell,
nodes=self.cluster.storage_nodes,
)
assert copies == expected_object_copies, f"Expected {expected_object_copies} copies, got {copies}"
@allure.title("Get object by native API (obj_size={object_size}, policy={placement_policy})")
def test_get_object_api(self, storage_objects: list[StorageObjectInfo]):
"""
Validate get object native API
"""
with reporter.step("Get objects and compare hashes"):
for storage_object in storage_objects:
file_path = get_object_from_random_node(
storage_object.wallet,
storage_object.cid,
storage_object.oid,
self.shell,
cluster=self.cluster,
)
file_hash = get_file_hash(file_path)
assert storage_object.file_hash == file_hash
@allure.title("Head object by native API (obj_size={object_size}, policy={placement_policy})")
def test_head_object_api(self, storage_objects: list[StorageObjectInfo]):
"""
Validate head object native API
"""
storage_object_1 = storage_objects[0]
storage_object_2 = storage_objects[1]
with reporter.step("Head object and validate"):
head_object(
storage_object_1.wallet,
storage_object_1.cid,
storage_object_1.oid,
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
)
head_info = head_object(
storage_object_2.wallet,
storage_object_2.cid,
storage_object_2.oid,
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
)
self.check_header_is_presented(head_info, storage_object_2.attributes)
@allure.title("Head deleted object with --raw arg (obj_size={object_size}, policy={placement_policy})")
def test_object_head_raw(self, default_wallet: str, object_size: ObjectSize, placement_policy: PlacementPolicy):
with reporter.step("Create container"):
cid = create_container(default_wallet, self.shell, self.cluster.default_rpc_endpoint, placement_policy.value)
with reporter.step("Upload object"):
file_path = generate_file(object_size.value)
oid = put_object_to_random_node(default_wallet, file_path, cid, self.shell, self.cluster)
with reporter.step("Delete object"):
delete_object(default_wallet, cid, oid, self.shell, self.cluster.default_rpc_endpoint)
with reporter.step("Call object head --raw and expect error"):
with pytest.raises(Exception, match=OBJECT_ALREADY_REMOVED):
head_object(default_wallet, cid, oid, self.shell, self.cluster.default_rpc_endpoint, is_raw=True)
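# `is_raw=True` is assumed to map to `frostfs-cli object head --raw`: the node reports the raw
# object status without resolving/assembling it, so for a deleted object the request fails with
# the OBJECT_ALREADY_REMOVED pattern instead of returning tombstone-resolved metadata.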
@allure.title("Search objects by native API (obj_size={object_size}, policy={placement_policy})")
def test_search_object_api(self, storage_objects: list[StorageObjectInfo]):
"""
Validate object search by native API
"""
oids = [storage_object.oid for storage_object in storage_objects]
wallet = storage_objects[0].wallet
cid = storage_objects[0].cid
test_table = [
(OBJECT_ATTRIBUTES[1], oids[1:2]),
(OBJECT_ATTRIBUTES[2], oids[2:3]),
(COMMON_ATTRIBUTE, oids[1:3]),
]
with reporter.step("Search objects"):
# Search with no attributes
result = search_object(
wallet,
cid,
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
expected_objects_list=oids,
root=True,
)
assert sorted(oids) == sorted(result)
# Search by test table
for search_filter, expected_oids in test_table:
result = search_object(
wallet,
cid,
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
filters=search_filter,
expected_objects_list=expected_oids,
root=True,
)
assert sorted(expected_oids) == sorted(result)
@allure.title("Search objects with removed items (obj_size={object_size})")
def test_object_search_should_return_tombstone_items(self, default_wallet: WalletInfo, object_size: ObjectSize):
"""
Validate object search with removed items
"""
wallet = default_wallet
cid = create_container(wallet, self.shell, self.cluster.default_rpc_endpoint)
with reporter.step("Upload file"):
file_path = generate_file(object_size.value)
file_hash = get_file_hash(file_path)
storage_object = StorageObjectInfo(
cid=cid,
oid=put_object_to_random_node(wallet, file_path, cid, self.shell, self.cluster),
size=object_size.value,
wallet=wallet,
file_path=file_path,
file_hash=file_hash,
)
with reporter.step("Search object"):
# Root Search object should return root object oid
result = search_object(wallet, cid, shell=self.shell, endpoint=self.cluster.default_rpc_endpoint, root=True)
assert result == [storage_object.oid]
with reporter.step("Delete file"):
delete_objects([storage_object], self.shell, self.cluster)
with reporter.step("Search deleted object with --root"):
# Root Search object should return nothing
result = search_object(wallet, cid, shell=self.shell, endpoint=self.cluster.default_rpc_endpoint, root=True)
assert len(result) == 0
with reporter.step("Search deleted object with --phy should return only tombstones"):
# Physical Search object should return only tombstones
result = search_object(wallet, cid, shell=self.shell, endpoint=self.cluster.default_rpc_endpoint, phy=True)
assert storage_object.tombstone in result, "Search result should contain tombstone of removed object"
assert storage_object.oid not in result, "Search result should not contain ObjectId of removed object"
for tombstone_oid in result:
header = head_object(
wallet,
cid,
tombstone_oid,
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
)["header"]
object_type = header["objectType"]
assert object_type == "TOMBSTONE", f"Object wasn't deleted properly. Found object {tombstone_oid} with type {object_type}"
@allure.title("Get range hash by native API (obj_size={object_size}, policy={placement_policy})")
@pytest.mark.grpc_api
def test_object_get_range_hash(self, storage_objects: list[StorageObjectInfo], max_object_size):
"""
Validate get_range_hash for object by native gRPC API
"""
wallet = storage_objects[0].wallet
cid = storage_objects[0].cid
oids = [storage_object.oid for storage_object in storage_objects[:2]]
file_path = storage_objects[0].file_path
file_ranges_to_test = generate_ranges(storage_objects[0], max_object_size, self.shell, self.cluster)
logger.info(f"Ranges used in test {file_ranges_to_test}")
for range_start, range_len in file_ranges_to_test:
range_cut = f"{range_start}:{range_len}"
with reporter.step(f"Get range hash ({range_cut})"):
for oid in oids:
range_hash = get_range_hash(
wallet,
cid,
oid,
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
range_cut=range_cut,
)
assert (
get_file_hash(file_path, range_len, range_start) == range_hash
), f"Expected range hash to match {range_cut} slice of file payload"
@allure.title("Get range by native API (obj_size={object_size}, policy={placement_policy})")
@pytest.mark.grpc_api
def test_object_get_range(self, storage_objects: list[StorageObjectInfo], max_object_size):
"""
Validate get_range for object by native gRPC API
"""
wallet = storage_objects[0].wallet
cid = storage_objects[0].cid
oids = [storage_object.oid for storage_object in storage_objects[:2]]
file_path = storage_objects[0].file_path
file_ranges_to_test = generate_ranges(storage_objects[0], max_object_size, self.shell, self.cluster)
logger.info(f"Ranges used in test {file_ranges_to_test}")
for range_start, range_len in file_ranges_to_test:
range_cut = f"{range_start}:{range_len}"
with reporter.step(f"Get range ({range_cut})"):
for oid in oids:
_, range_content = get_range(
wallet,
cid,
oid,
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
range_cut=range_cut,
)
assert (
get_file_content(file_path, content_len=range_len, mode="rb", offset=range_start) == range_content
), f"Expected range content to match {range_cut} slice of file payload"
@allure.title("[NEGATIVE] Get invalid range by native API (obj_size={object_size}, policy={placement_policy})")
@pytest.mark.grpc_api
def test_object_get_range_negatives(
self,
storage_objects: list[StorageObjectInfo],
):
"""
Validate get_range negative for object by native gRPC API
"""
wallet = storage_objects[0].wallet
cid = storage_objects[0].cid
oids = [storage_object.oid for storage_object in storage_objects[:2]]
file_size = storage_objects[0].size
assert RANGE_MIN_LEN < file_size, f"Incorrect test setup. File size ({file_size}) is less than RANGE_MIN_LEN ({RANGE_MIN_LEN})"
file_ranges_to_test: list[tuple[int, int, str]] = [
# Offset is bigger than the file size, the length is small.
(file_size + 1, RANGE_MIN_LEN, OUT_OF_RANGE),
# Offset is ok, but offset+length is too big.
(file_size - RANGE_MIN_LEN, RANGE_MIN_LEN * 2, OUT_OF_RANGE),
# Offset is ok, but the length is huge (e.g. MaxUint64), so offset+length wraps around and still looks "valid".
(RANGE_MIN_LEN, sys.maxsize * 2 + 1, INVALID_RANGE_OVERFLOW),
# Length is zero
(10, 0, INVALID_RANGE_ZERO_LENGTH),
# Negative values
(-1, 1, INVALID_OFFSET_SPECIFIER),
(10, -5, INVALID_LENGTH_SPECIFIER),
]
for range_start, range_len, expected_error in file_ranges_to_test:
range_cut = f"{range_start}:{range_len}"
expected_error = expected_error.format(range=range_cut) if "{range}" in expected_error else expected_error
with reporter.step(f"Get range ({range_cut})"):
for oid in oids:
with pytest.raises(Exception, match=expected_error):
get_range(
wallet,
cid,
oid,
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
range_cut=range_cut,
)
@allure.title("[NEGATIVE] Get invalid range hash by native API (obj_size={object_size}, policy={placement_policy})")
def test_object_get_range_hash_negatives(
self,
storage_objects: list[StorageObjectInfo],
):
"""
Validate get_range_hash negative for object by native gRPC API
"""
wallet = storage_objects[0].wallet
cid = storage_objects[0].cid
oids = [storage_object.oid for storage_object in storage_objects[:2]]
file_size = storage_objects[0].size
assert RANGE_MIN_LEN < file_size, f"Incorrect test setup. File size ({file_size}) is less than RANGE_MIN_LEN ({RANGE_MIN_LEN})"
file_ranges_to_test: list[tuple[int, int, str]] = [
# Offset is bigger than the file size, the length is small.
(file_size + 1, RANGE_MIN_LEN, OUT_OF_RANGE),
# Offset is ok, but offset+length is too big.
(file_size - RANGE_MIN_LEN, RANGE_MIN_LEN * 2, OUT_OF_RANGE),
# Offset is ok, but the length is huge (e.g. MaxUint64), so offset+length wraps around and still looks "valid".
(RANGE_MIN_LEN, sys.maxsize * 2 + 1, INVALID_RANGE_OVERFLOW),
# Length is zero
(10, 0, INVALID_RANGE_ZERO_LENGTH),
# Negative values
(-1, 1, INVALID_OFFSET_SPECIFIER),
(10, -5, INVALID_LENGTH_SPECIFIER),
]
for range_start, range_len, expected_error in file_ranges_to_test:
range_cut = f"{range_start}:{range_len}"
expected_error = expected_error.format(range=range_cut) if "{range}" in expected_error else expected_error
with reporter.step(f"Get range hash ({range_cut})"):
for oid in oids:
with pytest.raises(Exception, match=expected_error):
get_range_hash(
wallet,
cid,
oid,
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
range_cut=range_cut,
)
@allure.title("Get range from container and non-container nodes (object_size={object_size})")
def test_get_range_from_different_node(
self,
default_wallet: str,
common_container: str,
container_nodes: list[ClusterNode],
non_container_nodes: list[ClusterNode],
file_path: str,
):
with reporter.step("Put object to container"):
container_node = random.choice(container_nodes)
oid = put_object(default_wallet, file_path, common_container, self.shell, container_node.storage_node.get_rpc_endpoint())
with reporter.step("Get range from container node endpoint"):
get_range(
default_wallet,
common_container,
oid,
"0:10",
self.shell,
container_node.storage_node.get_rpc_endpoint(),
)
with reporter.step("Get range from non-container node endpoint"):
non_container_node = random.choice(non_container_nodes)
get_range(
default_wallet,
common_container,
oid,
"0:10",
self.shell,
non_container_node.storage_node.get_rpc_endpoint(),
)
@allure.title("Get range hash from container and non-container nodes (object_size={object_size})")
def test_get_range_hash_from_different_node(
self,
default_wallet: str,
common_container: str,
container_nodes: list[ClusterNode],
non_container_nodes: list[ClusterNode],
file_path: str,
):
with reporter.step("Put object to container"):
container_node = random.choice(container_nodes)
oid = put_object(default_wallet, file_path, common_container, self.shell, container_node.storage_node.get_rpc_endpoint())
with reporter.step("Get range hash from container node endpoint"):
get_range_hash(
default_wallet,
common_container,
oid,
"0:10",
self.shell,
container_node.storage_node.get_rpc_endpoint(),
)
with reporter.step("Get range hash from non-container node endpoint"):
non_container_node = random.choice(non_container_nodes)
get_range_hash(
default_wallet,
common_container,
oid,
"0:10",
self.shell,
non_container_node.storage_node.get_rpc_endpoint(),
)
def check_header_is_presented(self, head_info: dict, object_header: dict) -> None:
for key_to_check, val_to_check in object_header.items():
assert key_to_check in head_info["header"]["attributes"], f"Key {key_to_check} is not found in head_info attributes"
assert head_info["header"]["attributes"].get(key_to_check) == str(val_to_check), f"Value of attribute {key_to_check} is not equal to {val_to_check}"
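# A sketch of the head_object JSON shape assumed by check_header_is_presented
# (field names taken from the assertions in this file, not from the API spec):
# {
#     "header": {
#         "attributes": {"key1": "1", "key2": "abc", ...},
#         "objectType": "REGULAR" | "TOMBSTONE" | "LOCK",
#     }
# }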

@@ -0,0 +1,134 @@
import allure
import pytest
from frostfs_testlib import reporter
from frostfs_testlib.cli import FrostfsCli
from frostfs_testlib.resources.wellknown_acl import PUBLIC_ACL
from frostfs_testlib.shell import Shell
from frostfs_testlib.steps.cli.container import (
REP_2_FOR_3_NODES_PLACEMENT_RULE,
SINGLE_PLACEMENT_RULE,
StorageContainer,
StorageContainerInfo,
create_container,
)
from frostfs_testlib.steps.cli.object import delete_object, get_object
from frostfs_testlib.steps.storage_object import StorageObjectInfo
from frostfs_testlib.storage.cluster import Cluster
from frostfs_testlib.storage.dataclasses import ape
from frostfs_testlib.storage.dataclasses.object_size import ObjectSize
from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
from frostfs_testlib.testing.cluster_test_base import ClusterTestBase
from frostfs_testlib.testing.test_control import expect_not_raises
from pytest import FixtureRequest
from ...helpers.bearer_token import create_bearer_token
from ...helpers.container_access import assert_full_access_to_container
@pytest.fixture(scope="session")
@allure.title("Create user container for bearer token usage")
def user_container(default_wallet: WalletInfo, client_shell: Shell, cluster: Cluster, request: FixtureRequest) -> StorageContainer:
rule = request.param if "param" in request.__dict__ else SINGLE_PLACEMENT_RULE
container_id = create_container(default_wallet, client_shell, cluster.default_rpc_endpoint, rule, PUBLIC_ACL)
# Deliberately using s3gate wallet here to test bearer token
s3_gate_wallet = WalletInfo.from_node(cluster.s3_gates[0])
return StorageContainer(StorageContainerInfo(container_id, s3_gate_wallet), client_shell, cluster)
@pytest.fixture(scope="session")
@allure.title("Create bearer token with allowed put for container")
def bearer_token(frostfs_cli: FrostfsCli, temp_directory: str, user_container: StorageContainer, cluster: Cluster) -> str:
rule = ape.Rule(ape.Verb.ALLOW, ape.ObjectOperations.WILDCARD_ALL)
return create_bearer_token(frostfs_cli, temp_directory, user_container.get_id(), rule, cluster.default_rpc_endpoint)
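# The bearer token above is assumed to carry the APE wildcard rule (allow all object operations)
# for the user container, so any wallet presenting it gets full access to the container's objects;
# test_ape_wildcard_contains_all_rules below relies on exactly this property.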
@pytest.fixture()
def storage_objects(
user_container: StorageContainer,
bearer_token: str,
object_size: ObjectSize,
cluster: Cluster,
) -> list[StorageObjectInfo]:
storage_objects: list[StorageObjectInfo] = []
for node in cluster.storage_nodes:
storage_objects.append(
user_container.generate_object(
object_size.value,
bearer_token=bearer_token,
endpoint=node.get_rpc_endpoint(),
)
)
return storage_objects
@pytest.mark.nightly
@pytest.mark.bearer
@pytest.mark.ape
class TestObjectApiWithBearerToken(ClusterTestBase):
@allure.title("Object can be deleted from any node using s3gate wallet with bearer token (obj_size={object_size})")
@pytest.mark.parametrize(
"user_container",
[SINGLE_PLACEMENT_RULE],
indirect=True,
)
def test_delete_object_with_s3_wallet_bearer(
self,
storage_objects: list[StorageObjectInfo],
bearer_token: str,
):
s3_gate_wallet = WalletInfo.from_node(self.cluster.s3_gates[0])
with reporter.step("Delete each object from first storage node"):
for storage_object in storage_objects:
with expect_not_raises():
delete_object(
s3_gate_wallet,
storage_object.cid,
storage_object.oid,
self.shell,
endpoint=self.cluster.default_rpc_endpoint,
bearer=bearer_token,
)
@allure.title("Object can be fetched from any node using s3gate wallet with bearer token (obj_size={object_size})")
@pytest.mark.parametrize(
"user_container",
[REP_2_FOR_3_NODES_PLACEMENT_RULE],
indirect=True,
)
def test_get_object_with_s3_wallet_bearer_from_all_nodes(
self,
user_container: StorageContainer,
object_size: ObjectSize,
bearer_token: str,
):
s3_gate_wallet = WalletInfo.from_node(self.cluster.s3_gates[0])
with reporter.step("Put object to container"):
storage_object = user_container.generate_object(
object_size.value,
bearer_token=bearer_token,
endpoint=self.cluster.default_rpc_endpoint,
)
with reporter.step("Get object from each storage node"):
for node in self.cluster.storage_nodes:
with expect_not_raises():
get_object(
s3_gate_wallet,
storage_object.cid,
storage_object.oid,
self.shell,
node.get_rpc_endpoint(),
bearer_token,
)
@allure.title("Wildcard APE rule contains all permissions (obj_size={object_size})")
def test_ape_wildcard_contains_all_rules(
self,
other_wallet: WalletInfo,
storage_objects: list[StorageObjectInfo],
bearer_token: str,
):
obj = storage_objects.pop()
with reporter.step(f"Assert all operations available with object"):
assert_full_access_to_container(other_wallet, obj.cid, obj.oid, obj.file_path, self.shell, self.cluster, bearer_token)

@@ -0,0 +1,64 @@
import logging
import allure
import pytest
from frostfs_testlib import reporter
from frostfs_testlib.resources.error_patterns import OBJECT_NOT_FOUND
from frostfs_testlib.steps.cli.container import create_container
from frostfs_testlib.steps.cli.object import get_object_from_random_node, head_object, put_object_to_random_node
from frostfs_testlib.steps.epoch import get_epoch
from frostfs_testlib.storage.dataclasses.object_size import ObjectSize
from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
from frostfs_testlib.testing.cluster_test_base import ClusterTestBase
from frostfs_testlib.utils.file_utils import generate_file, get_file_hash
from ...helpers.utility import wait_for_gc_pass_on_storage_nodes
logger = logging.getLogger("NeoLogger")
@pytest.mark.nightly
@pytest.mark.sanity
@pytest.mark.grpc_api
class TestObjectApiLifetime(ClusterTestBase):
@allure.title("Object is removed when lifetime expired (obj_size={object_size})")
def test_object_api_lifetime(self, default_wallet: WalletInfo, object_size: ObjectSize):
"""
Test object deleted after expiration epoch.
"""
wallet = default_wallet
endpoint = self.cluster.default_rpc_endpoint
cid = create_container(wallet, self.shell, endpoint)
file_path = generate_file(object_size.value)
file_hash = get_file_hash(file_path)
epoch = get_epoch(self.shell, self.cluster)
oid = put_object_to_random_node(wallet, file_path, cid, self.shell, self.cluster, expire_at=epoch + 1)
got_file = get_object_from_random_node(wallet, cid, oid, self.shell, self.cluster)
assert get_file_hash(got_file) == file_hash
with reporter.step("Tick two epochs"):
for _ in range(2):
self.tick_epoch()
# Wait for GC, because object with expiration is counted as alive until GC removes it
wait_for_gc_pass_on_storage_nodes()
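# wait_for_gc_pass_on_storage_nodes is assumed to wait roughly one storage GC cycle
# (STORAGE_GC_TIME from the testlib config): expired objects disappear only after the
# garbage collector actually runs, not at the epoch tick itself.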
with reporter.step("Check object deleted because it expires on epoch"):
with pytest.raises(Exception, match=OBJECT_NOT_FOUND):
head_object(wallet, cid, oid, self.shell, self.cluster.default_rpc_endpoint)
with pytest.raises(Exception, match=OBJECT_NOT_FOUND):
get_object_from_random_node(wallet, cid, oid, self.shell, self.cluster)
with reporter.step("Tick additional epoch"):
self.tick_epoch()
wait_for_gc_pass_on_storage_nodes()
with reporter.step("Check object deleted because it expires on previous epoch"):
with pytest.raises(Exception, match=OBJECT_NOT_FOUND):
head_object(wallet, cid, oid, self.shell, self.cluster.default_rpc_endpoint)
with pytest.raises(Exception, match=OBJECT_NOT_FOUND):
get_object_from_random_node(wallet, cid, oid, self.shell, self.cluster)

@@ -0,0 +1,706 @@
import logging
import re
import allure
import pytest
from frostfs_testlib import reporter
from frostfs_testlib.credentials.interfaces import CredentialsProvider, User
from frostfs_testlib.resources.common import STORAGE_GC_TIME
from frostfs_testlib.resources.error_patterns import (
LIFETIME_REQUIRED,
LOCK_NON_REGULAR_OBJECT,
LOCK_OBJECT_EXPIRATION,
LOCK_OBJECT_REMOVAL,
OBJECT_ALREADY_REMOVED,
OBJECT_IS_LOCKED,
OBJECT_NOT_FOUND,
)
from frostfs_testlib.shell import Shell
from frostfs_testlib.steps.cli.container import StorageContainer, StorageContainerInfo, create_container
from frostfs_testlib.steps.cli.object import delete_object, head_object, lock_object
from frostfs_testlib.steps.complex_object_actions import get_link_object, get_storage_object_chunks
from frostfs_testlib.steps.epoch import ensure_fresh_epoch, get_epoch, tick_epoch
from frostfs_testlib.steps.node_management import drop_object
from frostfs_testlib.steps.storage_object import delete_objects
from frostfs_testlib.steps.storage_policy import get_nodes_with_object
from frostfs_testlib.storage.cluster import Cluster
from frostfs_testlib.storage.dataclasses.object_size import ObjectSize
from frostfs_testlib.storage.dataclasses.storage_object_info import LockObjectInfo, StorageObjectInfo
from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
from frostfs_testlib.testing.cluster_test_base import ClusterTestBase
from frostfs_testlib.testing.test_control import expect_not_raises, wait_for_success
from frostfs_testlib.utils import datetime_utils, string_utils
from ...helpers.utility import wait_for_gc_pass_on_storage_nodes
logger = logging.getLogger("NeoLogger")
FIXTURE_LOCK_LIFETIME = 5
FIXTURE_OBJECT_LIFETIME = 10
@pytest.fixture(scope="module")
def user_wallet(credentials_provider: CredentialsProvider, cluster: Cluster) -> WalletInfo:
with reporter.step("Create user wallet with container"):
user = User(string_utils.unique_name("user-"))
return credentials_provider.GRPC.provide(user, cluster.cluster_nodes[0])
@pytest.fixture(scope="module")
def user_container(user_wallet: WalletInfo, client_shell: Shell, cluster: Cluster):
container_id = create_container(user_wallet, shell=client_shell, endpoint=cluster.default_rpc_endpoint)
return StorageContainer(StorageContainerInfo(container_id, user_wallet), client_shell, cluster)
@pytest.fixture(scope="module")
def locked_storage_object(
user_container: StorageContainer,
client_shell: Shell,
cluster: Cluster,
object_size: ObjectSize,
):
"""
The intention of this fixture is to provide a storage object which is NOT expected to be deleted during the test act phase.
"""
with reporter.step("Creating locked object"):
current_epoch = ensure_fresh_epoch(client_shell, cluster)
expiration_epoch = current_epoch + FIXTURE_LOCK_LIFETIME
storage_object = user_container.generate_object(object_size.value, expire_at=current_epoch + FIXTURE_OBJECT_LIFETIME)
lock_object_id = lock_object(
storage_object.wallet,
storage_object.cid,
storage_object.oid,
client_shell,
cluster.default_rpc_endpoint,
lifetime=FIXTURE_LOCK_LIFETIME,
)
storage_object.locks = [LockObjectInfo(storage_object.cid, lock_object_id, FIXTURE_LOCK_LIFETIME, expiration_epoch)]
yield storage_object
with reporter.step("Delete created locked object"):
current_epoch = get_epoch(client_shell, cluster)
epoch_diff = expiration_epoch - current_epoch + 1
if epoch_diff > 0:
with reporter.step(f"Tick {epoch_diff} epochs"):
for _ in range(epoch_diff):
tick_epoch(client_shell, cluster)
try:
delete_object(
storage_object.wallet,
storage_object.cid,
storage_object.oid,
client_shell,
cluster.default_rpc_endpoint,
)
except Exception as ex:
ex_message = str(ex)
# It's okay if object already removed
if not re.search(OBJECT_NOT_FOUND, ex_message) and not re.search(OBJECT_ALREADY_REMOVED, ex_message):
raise ex
logger.debug(ex_message)
@wait_for_success(datetime_utils.parse_time(STORAGE_GC_TIME))
def check_object_not_found(wallet: WalletInfo, cid: str, oid: str, shell: Shell, rpc_endpoint: str):
with pytest.raises(Exception, match=OBJECT_NOT_FOUND):
head_object(
wallet,
cid,
oid,
shell,
rpc_endpoint,
)
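# The @wait_for_success decorator above retries check_object_not_found for up to one GC cycle
# (timeout parsed from STORAGE_GC_TIME): removal of expired or unlocked objects is asynchronous,
# so a single immediate head_object check would be flaky.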
def verify_object_available(wallet: WalletInfo, cid: str, oid: str, shell: Shell, rpc_endpoint: str):
with expect_not_raises():
head_object(
wallet,
cid,
oid,
shell,
rpc_endpoint,
)
@pytest.mark.nightly
@pytest.mark.grpc_object_lock
class TestObjectLockWithGrpc(ClusterTestBase):
@pytest.fixture()
def new_locked_storage_object(self, user_container: StorageContainer, object_size: ObjectSize) -> StorageObjectInfo:
"""
The intention of this fixture is to provide a new storage object for tests which may delete or corrupt the object or its complementary objects,
so we need a new one each time we ask for it.
"""
with reporter.step("Creating locked object"):
current_epoch = self.get_epoch()
storage_object = user_container.generate_object(object_size.value, expire_at=current_epoch + FIXTURE_OBJECT_LIFETIME)
lock_object(
storage_object.wallet,
storage_object.cid,
storage_object.oid,
self.shell,
self.cluster.default_rpc_endpoint,
lifetime=FIXTURE_LOCK_LIFETIME,
)
return storage_object
@allure.title("Locked object is protected from deletion (obj_size={object_size})")
def test_locked_object_cannot_be_deleted(
self,
locked_storage_object: StorageObjectInfo,
):
"""
Locked object should be protected from deletion
"""
with pytest.raises(Exception, match=OBJECT_IS_LOCKED):
delete_object(
locked_storage_object.wallet,
locked_storage_object.cid,
locked_storage_object.oid,
self.shell,
self.cluster.default_rpc_endpoint,
)
@allure.title("Lock object itself is protected from deletion")
# We operate with only lock object here so no complex object needed in this test
@pytest.mark.parametrize("object_size", ["simple"], indirect=True)
def test_lock_object_itself_cannot_be_deleted(
self,
locked_storage_object: StorageObjectInfo,
):
"""
Lock object itself should be protected from deletion
"""
lock_object = locked_storage_object.locks[0]
wallet_path = locked_storage_object.wallet
with pytest.raises(Exception, match=LOCK_OBJECT_REMOVAL):
delete_object(
wallet_path,
lock_object.cid,
lock_object.oid,
self.shell,
self.cluster.default_rpc_endpoint,
)
@allure.title("Lock object itself cannot be locked")
# We operate with only lock object here so no complex object needed in this test
@pytest.mark.parametrize("object_size", ["simple"], indirect=True)
def test_lock_object_cannot_be_locked(
self,
locked_storage_object: StorageObjectInfo,
):
"""
Lock object itself cannot be locked
"""
lock_object_info = locked_storage_object.locks[0]
wallet_path = locked_storage_object.wallet
with pytest.raises(Exception, match=LOCK_NON_REGULAR_OBJECT):
lock_object(
wallet_path,
lock_object_info.cid,
lock_object_info.oid,
self.shell,
self.cluster.default_rpc_endpoint,
1,
)
@allure.title("Lock must contain valid lifetime or expire_at field: (lifetime={wrong_lifetime}, expire-at={wrong_expire_at})")
# We operate with only lock object here so no complex object needed in this test
@pytest.mark.parametrize("object_size", ["simple"], indirect=True)
@pytest.mark.parametrize(
"wrong_lifetime,wrong_expire_at,expected_error",
[
(None, None, LIFETIME_REQUIRED),
(0, 0, LIFETIME_REQUIRED),
(0, None, LIFETIME_REQUIRED),
(None, 0, LIFETIME_REQUIRED),
(-1, None, 'invalid argument "-1" for "--lifetime" flag'),
(None, -1, 'invalid argument "-1" for "-e, --expire-at" flag'),
],
)
def test_cannot_lock_object_without_lifetime(
self,
locked_storage_object: StorageObjectInfo,
wrong_lifetime: int,
wrong_expire_at: int,
expected_error: str,
):
"""
Cannot lock object without lifetime and expire_at fields
"""
lock_object_info = locked_storage_object.locks[0]
wallet_path = locked_storage_object.wallet
with pytest.raises(Exception, match=expected_error):
lock_object(
wallet_path,
lock_object_info.cid,
lock_object_info.oid,
self.shell,
self.cluster.default_rpc_endpoint,
lifetime=wrong_lifetime,
expire_at=wrong_expire_at,
)
@pytest.mark.sanity
@allure.title("Expired object is deleted when locks are expired (obj_size={object_size})")
def test_expired_object_should_be_deleted_after_locks_are_expired(
self,
user_container: StorageContainer,
object_size: ObjectSize,
):
"""
Expired object should be deleted after locks are expired
"""
current_epoch = self.ensure_fresh_epoch()
storage_object = user_container.generate_object(object_size.value, expire_at=current_epoch + 1)
with reporter.step("Lock object for couple epochs"):
lock_object(
storage_object.wallet,
storage_object.cid,
storage_object.oid,
self.shell,
self.cluster.default_rpc_endpoint,
lifetime=2,
)
lock_object(
storage_object.wallet,
storage_object.cid,
storage_object.oid,
self.shell,
self.cluster.default_rpc_endpoint,
expire_at=current_epoch + 2,
)
with reporter.step("Check object is not deleted at expiration time"):
self.tick_epochs(2)
# Must wait to ensure object is not deleted
wait_for_gc_pass_on_storage_nodes()
with expect_not_raises():
head_object(
storage_object.wallet,
storage_object.cid,
storage_object.oid,
self.shell,
self.cluster.default_rpc_endpoint,
)
with reporter.step("Wait for object to be deleted after third epoch"):
self.tick_epoch()
check_object_not_found(
storage_object.wallet,
storage_object.cid,
storage_object.oid,
self.shell,
self.cluster.default_rpc_endpoint,
)
@allure.title("Lock multiple objects at once (obj_size={object_size})")
def test_should_be_possible_to_lock_multiple_objects_at_once(
self,
user_container: StorageContainer,
object_size: ObjectSize,
):
"""
Should be possible to lock multiple objects at once
"""
current_epoch = ensure_fresh_epoch(self.shell, self.cluster)
storage_objects: list[StorageObjectInfo] = []
with reporter.step("Generate three objects"):
for _ in range(3):
storage_objects.append(user_container.generate_object(object_size.value, expire_at=current_epoch + 5))
lock_object(
storage_objects[0].wallet,
storage_objects[0].cid,
",".join([storage_object.oid for storage_object in storage_objects]),
self.shell,
self.cluster.default_rpc_endpoint,
expire_at=current_epoch + 1,
)
for storage_object in storage_objects:
with reporter.step(f"Try to delete object {storage_object.oid}"):
with pytest.raises(Exception, match=OBJECT_IS_LOCKED):
delete_object(
storage_object.wallet,
storage_object.cid,
storage_object.oid,
self.shell,
self.cluster.default_rpc_endpoint,
)
with reporter.step("Tick two epochs"):
self.tick_epoch()
self.tick_epoch()
with expect_not_raises():
delete_objects(storage_objects, self.shell, self.cluster)
@allure.title("Outdated lock cannot be applied (obj_size={object_size})")
def test_already_outdated_lock_should_not_be_applied(
self,
user_container: StorageContainer,
object_size: ObjectSize,
):
"""
Already outdated lock should not be applied
"""
current_epoch = self.ensure_fresh_epoch()
storage_object = user_container.generate_object(object_size.value, expire_at=current_epoch + 1)
expiration_epoch = current_epoch - 1
with pytest.raises(
Exception,
match=LOCK_OBJECT_EXPIRATION.format(expiration_epoch=expiration_epoch, current_epoch=current_epoch),
):
lock_object(
storage_object.wallet,
storage_object.cid,
storage_object.oid,
self.shell,
self.cluster.default_rpc_endpoint,
expire_at=expiration_epoch,
)
@pytest.mark.sanity
@allure.title("Delete object when lock is expired by lifetime (obj_size={object_size})")
@expect_not_raises()
def test_after_lock_expiration_with_lifetime_user_should_be_able_to_delete_object(
self,
user_container: StorageContainer,
object_size: ObjectSize,
):
"""
After lock expiration with lifetime user should be able to delete object
"""
current_epoch = self.ensure_fresh_epoch()
storage_object = user_container.generate_object(object_size.value, expire_at=current_epoch + 5)
lock_object(
storage_object.wallet,
storage_object.cid,
storage_object.oid,
self.shell,
self.cluster.default_rpc_endpoint,
lifetime=1,
)
self.tick_epochs(2)
with expect_not_raises():
delete_object(
storage_object.wallet,
storage_object.cid,
storage_object.oid,
self.shell,
self.cluster.default_rpc_endpoint,
)
@allure.title("Delete object when lock is expired by expire_at (obj_size={object_size})")
@expect_not_raises()
def test_after_lock_expiration_with_expire_at_user_should_be_able_to_delete_object(
self,
user_container: StorageContainer,
object_size: ObjectSize,
):
"""
After lock expiration with expire_at user should be able to delete object
"""
current_epoch = self.ensure_fresh_epoch()
storage_object = user_container.generate_object(object_size.value, expire_at=current_epoch + 5)
lock_object(
storage_object.wallet,
storage_object.cid,
storage_object.oid,
self.shell,
endpoint=self.cluster.default_rpc_endpoint,
expire_at=current_epoch + 1,
)
self.tick_epochs(2)
with expect_not_raises():
delete_object(
storage_object.wallet,
storage_object.cid,
storage_object.oid,
self.shell,
self.cluster.default_rpc_endpoint,
)
@allure.title("Complex object chunks are protected from deletion")
@pytest.mark.parametrize(
# Only complex objects are required for this test
"object_size",
["complex"],
indirect=True,
)
def test_complex_object_chunks_should_also_be_protected_from_deletion(
self,
locked_storage_object: StorageObjectInfo,
):
"""
Complex object chunks should also be protected from deletion
"""
chunk_object_ids = get_storage_object_chunks(locked_storage_object, self.shell, self.cluster)
for chunk_object_id in chunk_object_ids:
with reporter.step(f"Try to delete chunk object {chunk_object_id}"):
with pytest.raises(Exception, match=OBJECT_IS_LOCKED):
delete_object(
locked_storage_object.wallet,
locked_storage_object.cid,
chunk_object_id,
self.shell,
self.cluster.default_rpc_endpoint,
)
@allure.title("Drop link object of locked complex object")
@pytest.mark.grpc_control
@pytest.mark.parametrize(
"object_size",
# Only complex object is required
["complex"],
indirect=True,
)
def test_link_object_of_locked_complex_object_can_be_dropped(self, new_locked_storage_object: StorageObjectInfo):
link_object_id = get_link_object(
new_locked_storage_object.wallet,
new_locked_storage_object.cid,
new_locked_storage_object.oid,
self.shell,
self.cluster.storage_nodes,
)
with reporter.step(f"Drop link object with id {link_object_id} from nodes"):
nodes_with_object = get_nodes_with_object(
new_locked_storage_object.cid,
link_object_id,
shell=self.shell,
nodes=self.cluster.storage_nodes,
)
for node in nodes_with_object:
with expect_not_raises():
drop_object(node, new_locked_storage_object.cid, link_object_id)
@allure.title("Drop chunks of locked complex object")
@pytest.mark.grpc_control
@pytest.mark.parametrize(
"object_size",
# Only complex object is required
["complex"],
indirect=True,
)
def test_chunks_of_locked_complex_object_can_be_dropped(self, new_locked_storage_object: StorageObjectInfo):
chunk_objects = get_storage_object_chunks(new_locked_storage_object, self.shell, self.cluster)
for chunk_object_id in chunk_objects:
with reporter.step(f"Drop chunk object with id {chunk_object_id} from nodes"):
nodes_with_object = get_nodes_with_object(
new_locked_storage_object.cid,
chunk_object_id,
shell=self.shell,
nodes=self.cluster.storage_nodes,
)
for node in nodes_with_object:
with expect_not_raises():
drop_object(node, new_locked_storage_object.cid, chunk_object_id)
@allure.title("Drop locked object (obj_size={object_size})")
@pytest.mark.grpc_control
def test_locked_object_can_be_dropped(self, new_locked_storage_object: StorageObjectInfo):
nodes_with_object = get_nodes_with_object(
new_locked_storage_object.cid,
new_locked_storage_object.oid,
shell=self.shell,
nodes=self.cluster.storage_nodes,
)
for node in nodes_with_object:
with expect_not_raises():
drop_object(node, new_locked_storage_object.cid, new_locked_storage_object.oid)
@allure.title("Link object of complex object is protected from deletion")
@pytest.mark.parametrize(
# Only complex objects are required for this test
"object_size",
["complex"],
indirect=True,
)
def test_link_object_of_complex_object_should_also_be_protected_from_deletion(
self,
locked_storage_object: StorageObjectInfo,
):
"""
Link object of complex object should also be protected from deletion
"""
link_object_id = get_link_object(
locked_storage_object.wallet,
locked_storage_object.cid,
locked_storage_object.oid,
self.shell,
self.cluster.storage_nodes,
is_direct=False,
)
with reporter.step(f"Try to delete link object {link_object_id}"):
with pytest.raises(Exception, match=OBJECT_IS_LOCKED):
delete_object(
locked_storage_object.wallet,
locked_storage_object.cid,
link_object_id,
self.shell,
self.cluster.default_rpc_endpoint,
)
@allure.title("Expired object is removed after all locks are expired (obj_size={object_size})")
def test_expired_object_should_be_removed_after_relocks_expire_at(
self,
user_container: StorageContainer,
object_size: ObjectSize,
):
current_epoch = self.ensure_fresh_epoch()
storage_object = user_container.generate_object(object_size.value, expire_at=current_epoch + 1)
with reporter.step("Apply first lock to object for 3 epochs"):
lock_object_id_0 = lock_object(
storage_object.wallet,
storage_object.cid,
storage_object.oid,
self.shell,
self.cluster.default_rpc_endpoint,
expire_at=current_epoch + 3,
)
self.tick_epochs(2)
with reporter.step("Check first lock is still available"):
verify_object_available(
storage_object.wallet,
storage_object.cid,
lock_object_id_0,
self.shell,
self.cluster.default_rpc_endpoint,
)
with reporter.step("Apply second lock to object for 3 more epochs"):
lock_object_id_1 = lock_object(
storage_object.wallet,
storage_object.cid,
storage_object.oid,
self.shell,
self.cluster.default_rpc_endpoint,
expire_at=current_epoch + 5,
)
self.tick_epochs(2)
with reporter.step("Verify first lock is expired and removed"):
check_object_not_found(
storage_object.wallet,
storage_object.cid,
lock_object_id_0,
self.shell,
self.cluster.default_rpc_endpoint,
)
with reporter.step("Verify second lock is still available"):
verify_object_available(
storage_object.wallet,
storage_object.cid,
lock_object_id_1,
self.shell,
self.cluster.default_rpc_endpoint,
)
with reporter.step("Apply third lock to object for 3 more epochs"):
lock_object(
storage_object.wallet,
storage_object.cid,
storage_object.oid,
self.shell,
self.cluster.default_rpc_endpoint,
expire_at=current_epoch + 7,
)
with reporter.step("Verify object is deleted after all locks are expired"):
self.tick_epochs(4)
check_object_not_found(
storage_object.wallet,
storage_object.cid,
storage_object.oid,
self.shell,
self.cluster.default_rpc_endpoint,
)
@pytest.mark.sanity
@allure.title("Two expired objects with one lock are deleted after lock expiration (obj_size={object_size})")
def test_two_objects_expiration_with_one_lock(
self,
user_container: StorageContainer,
object_size: ObjectSize,
):
current_epoch = self.ensure_fresh_epoch()
storage_objects: list[StorageObjectInfo] = []
with reporter.step("Generate two objects"):
for epoch_i in range(2):
storage_objects.append(user_container.generate_object(object_size.value, expire_at=current_epoch + epoch_i + 3))
self.tick_epoch()
with reporter.step("Lock objects for 4 epochs"):
lock_object(
storage_objects[0].wallet,
storage_objects[0].cid,
",".join([storage_object.oid for storage_object in storage_objects]),
self.shell,
self.cluster.default_rpc_endpoint,
expire_at=current_epoch + 4,
)
with reporter.step("Verify objects are available during next three epochs"):
for epoch_i in range(3):
self.tick_epoch()
with reporter.step(f"Check objects at epoch {current_epoch + epoch_i + 2}"):
for storage_object in storage_objects:
verify_object_available(
storage_object.wallet,
storage_object.cid,
storage_object.oid,
self.shell,
self.cluster.default_rpc_endpoint,
)
with reporter.step("Verify objects are deleted after lock was expired"):
self.tick_epoch()
for storage_object in storage_objects:
check_object_not_found(
storage_object.wallet,
storage_object.cid,
storage_object.oid,
self.shell,
self.cluster.default_rpc_endpoint,
)

@@ -0,0 +1,415 @@
import logging
import re
import allure
import pytest
from frostfs_testlib import reporter
from frostfs_testlib.cli import FrostfsCli
from frostfs_testlib.resources.cli import CLI_DEFAULT_TIMEOUT, FROSTFS_CLI_EXEC
from frostfs_testlib.resources.error_patterns import OBJECT_IS_LOCKED
from frostfs_testlib.resources.wellknown_acl import PUBLIC_ACL_F
from frostfs_testlib.shell import Shell
from frostfs_testlib.steps.cli.container import create_container
from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
from frostfs_testlib.testing.cluster_test_base import ClusterTestBase
from frostfs_testlib.testing.test_control import expect_not_raises
from frostfs_testlib.utils.file_utils import TestFile, get_file_hash
logger = logging.getLogger("NeoLogger")
@pytest.mark.nightly
@pytest.mark.grpc_without_user
class TestObjectApiWithoutUser(ClusterTestBase):
def _parse_oid(self, stdout: str) -> str:
id_str = stdout.strip().split("\n")[-2]
oid = id_str.split(":")[1]
return oid.strip()
def _parse_tombstone_oid(self, stdout: str) -> str:
id_str = stdout.split("\n")[1]
tombstone = id_str.split(":")[1]
return tombstone.strip()
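# Both parsers above depend on the assumed layout of frostfs-cli stdout:
# `object put` is expected to print the object ID on the second-to-last line as "ID: <oid>",
# and `object delete` to print the tombstone ID on its second line in the same form.
# If the CLI output format changes, these helpers break first.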
@pytest.fixture(scope="function")
def public_container(self, default_wallet: WalletInfo) -> str:
with reporter.step("Create public container"):
cid_public = create_container(
default_wallet,
self.shell,
self.cluster.default_rpc_endpoint,
basic_acl=PUBLIC_ACL_F,
)
return cid_public
@pytest.fixture(scope="class")
def frostfs_cli(self, client_shell: Shell) -> FrostfsCli:
return FrostfsCli(client_shell, FROSTFS_CLI_EXEC)
@allure.title("Get public container by native API with generate private key")
def test_get_container_with_generated_key(self, frostfs_cli: FrostfsCli, public_container: str):
"""
Validate `container get` native API with flag `--generate-key`.
"""
cid = public_container
rpc_endpoint = self.cluster.default_rpc_endpoint
with reporter.step("Get container with generate key"):
with expect_not_raises():
frostfs_cli.container.get(rpc_endpoint, cid, generate_key=True, timeout=CLI_DEFAULT_TIMEOUT)
@allure.title("Get list containers by native API with generate private key")
def test_list_containers_with_generated_key(self, frostfs_cli: FrostfsCli, default_wallet: WalletInfo, public_container: str):
"""
Validate `container list` native API with flag `--generate-key`.
"""
rpc_endpoint = self.cluster.default_rpc_endpoint
owner = default_wallet.get_address_from_json(0)
with reporter.step("List containers with generate key"):
with expect_not_raises():
result = frostfs_cli.container.list(rpc_endpoint, owner=owner, generate_key=True, timeout=CLI_DEFAULT_TIMEOUT)
with reporter.step("Expect container in received containers list"):
containers = result.stdout.split()
assert public_container in containers
@allure.title("Get list of public container objects by native API with generate private key")
def test_list_objects_with_generate_key(self, frostfs_cli: FrostfsCli, public_container: str):
"""
Validate `container list_objects` native API with flag `--generate-key`.
"""
cid = public_container
rpc_endpoint = self.cluster.default_rpc_endpoint
with reporter.step("List objects with generate key"):
with expect_not_raises():
result = frostfs_cli.container.list_objects(rpc_endpoint, cid, generate_key=True, timeout=CLI_DEFAULT_TIMEOUT)
with reporter.step("Expect empty objects list"):
objects = result.stdout.split()
assert len(objects) == 0, objects
@allure.title("Search public container nodes by native API with generate private key")
def test_search_nodes_with_generate_key(self, frostfs_cli: FrostfsCli, public_container: str):
"""
Validate `container search_node` native API with flag `--generate-key`.
"""
cid = public_container
rpc_endpoint = self.cluster.default_rpc_endpoint
with reporter.step("Search nodes with generate key"):
with expect_not_raises():
frostfs_cli.container.search_node(rpc_endpoint, cid, generate_key=True, timeout=CLI_DEFAULT_TIMEOUT)
@allure.title("Put object into public container by native API with generate private key (obj_size={object_size})")
def test_put_object_with_generate_key(self, frostfs_cli: FrostfsCli, public_container: str, file_path: TestFile):
"""
Validate `object put` into container with public ACL and flag `--generate-key`.
"""
cid = public_container
rpc_endpoint = self.cluster.default_rpc_endpoint
with reporter.step("Put object with generate key"):
with expect_not_raises():
result = frostfs_cli.object.put(
rpc_endpoint,
cid,
file_path,
generate_key=True,
no_progress=True,
timeout=CLI_DEFAULT_TIMEOUT,
)
oid = self._parse_oid(result.stdout)
with reporter.step("List objects with generate key"):
result = frostfs_cli.container.list_objects(rpc_endpoint, cid, generate_key=True, timeout=CLI_DEFAULT_TIMEOUT)
with reporter.step("Expect object in received objects list"):
objects = result.stdout.split()
assert oid in objects, objects
@allure.title("Get public container object by native API with generate private key (obj_size={object_size})")
def test_get_object_with_generate_key(self, frostfs_cli: FrostfsCli, public_container: str, file_path: TestFile):
"""
Validate `object get` for container with public ACL and flag `--generate-key`.
"""
cid = public_container
rpc_endpoint = self.cluster.default_rpc_endpoint
expected_hash = get_file_hash(file_path)
with reporter.step("Put object with generate key"):
result = frostfs_cli.object.put(
rpc_endpoint,
cid,
file_path,
generate_key=True,
no_progress=True,
timeout=CLI_DEFAULT_TIMEOUT,
)
oid = self._parse_oid(result.stdout)
with reporter.step("Get object with generate key"):
with expect_not_raises():
frostfs_cli.object.get(
rpc_endpoint,
cid,
oid,
file=file_path,
generate_key=True,
no_progress=True,
timeout=CLI_DEFAULT_TIMEOUT,
)
downloaded_hash = get_file_hash(file_path)
with reporter.step("Validate downloaded file"):
assert expected_hash == downloaded_hash
@allure.title("Head public container object by native API with generate private key (obj_size={object_size})")
def test_head_object_with_generate_key(self, frostfs_cli: FrostfsCli, public_container: str, file_path: TestFile):
"""
Validate `object head` for container with public ACL and flag `--generate-key`.
"""
cid = public_container
rpc_endpoint = self.cluster.default_rpc_endpoint
with reporter.step("Put object with generate key"):
result = frostfs_cli.object.put(
rpc_endpoint,
cid,
file_path,
generate_key=True,
no_progress=True,
timeout=CLI_DEFAULT_TIMEOUT,
)
oid = self._parse_oid(result.stdout)
with reporter.step("Head object with generate key"):
with expect_not_raises():
frostfs_cli.object.head(rpc_endpoint, cid, oid, generate_key=True, timeout=CLI_DEFAULT_TIMEOUT)
@allure.title("Delete public container object by native API with generate private key (obj_size={object_size})")
def test_delete_object_with_generate_key(self, frostfs_cli: FrostfsCli, public_container: str, file_path: TestFile):
"""
Validate `object delete` for container with public ACL and flag `--generate key`.
"""
cid = public_container
rpc_endpoint = self.cluster.default_rpc_endpoint
with reporter.step("Put object with generate key"):
result = frostfs_cli.object.put(
rpc_endpoint,
cid,
file_path,
generate_key=True,
no_progress=True,
timeout=CLI_DEFAULT_TIMEOUT,
)
oid = self._parse_oid(result.stdout)
with reporter.step("Delete object with generate key"):
with expect_not_raises():
result = frostfs_cli.object.delete(rpc_endpoint, cid, oid, generate_key=True, timeout=CLI_DEFAULT_TIMEOUT)
oid = self._parse_tombstone_oid(result.stdout)
with reporter.step("Head object with generate key"):
result = frostfs_cli.object.head(
rpc_endpoint,
cid,
oid,
generate_key=True,
timeout=CLI_DEFAULT_TIMEOUT,
)
with reporter.step("Expect object type TOMBSTONE"):
object_type = re.search(r"(?<=type: )tombstone", result.stdout, re.IGNORECASE).group()
assert object_type == "TOMBSTONE", object_type
@allure.title("Lock public container object by native API with generate private key (obj_size={object_size})")
def test_lock_object_with_generate_key(self, frostfs_cli: FrostfsCli, public_container: str, file_path: TestFile):
"""
Validate `object lock` for container with public ACL and flag `--generate-key`.
Attempt to delete the locked object.
"""
cid = public_container
rpc_endpoint = self.cluster.default_rpc_endpoint
with reporter.step("Put object with generate key"):
result = frostfs_cli.object.put(
rpc_endpoint,
cid,
file_path,
generate_key=True,
no_progress=True,
timeout=CLI_DEFAULT_TIMEOUT,
)
oid = self._parse_oid(result.stdout)
with reporter.step("Lock object with generate key"):
with expect_not_raises():
frostfs_cli.object.lock(
rpc_endpoint,
cid,
oid,
generate_key=True,
timeout=CLI_DEFAULT_TIMEOUT,
lifetime=5,
)
with reporter.step("Delete locked object with generate key and expect error"):
with pytest.raises(Exception, match=OBJECT_IS_LOCKED):
frostfs_cli.object.delete(
rpc_endpoint,
cid,
oid,
generate_key=True,
timeout=CLI_DEFAULT_TIMEOUT,
)
@allure.title("Search public container objects by native API with generate private key (obj_size={object_size})")
def test_search_object_with_generate_key(self, frostfs_cli: FrostfsCli, public_container: str, file_path: TestFile):
"""
Validate `object search` for container with public ACL and flag `--generate-key`.
"""
cid = public_container
rpc_endpoint = self.cluster.default_rpc_endpoint
with reporter.step("Put object with generate key"):
result = frostfs_cli.object.put(
rpc_endpoint,
cid,
file_path,
generate_key=True,
no_progress=True,
timeout=CLI_DEFAULT_TIMEOUT,
)
oid = self._parse_oid(result.stdout)
with reporter.step("Object search with generate key"):
with expect_not_raises():
result = frostfs_cli.object.search(rpc_endpoint, cid, generate_key=True, timeout=CLI_DEFAULT_TIMEOUT)
with reporter.step("Expect object in received objects list of container"):
object_ids = re.findall(r"(\w{43,44})", result.stdout)
assert oid in object_ids
@allure.title("Get range of public container object by native API with generate private key (obj_size={object_size})")
def test_range_with_generate_key(self, frostfs_cli: FrostfsCli, public_container: str, file_path: TestFile):
"""
Validate `object range` for container with public ACL and `--generate-key`.
"""
cid = public_container
rpc_endpoint = self.cluster.default_rpc_endpoint
with reporter.step("Put object with generate key"):
result = frostfs_cli.object.put(
rpc_endpoint,
cid,
file_path,
generate_key=True,
no_progress=True,
timeout=CLI_DEFAULT_TIMEOUT,
)
oid = self._parse_oid(result.stdout)
with reporter.step("Get range of object with generate key"):
with expect_not_raises():
frostfs_cli.object.range(
rpc_endpoint,
cid,
oid,
"0:10",
file=file_path,
generate_key=True,
timeout=CLI_DEFAULT_TIMEOUT,
)
@allure.title("Get hash of public container object by native API with generate private key (obj_size={object_size})")
def test_hash_with_generate_key(self, frostfs_cli: FrostfsCli, public_container: str, file_path: TestFile):
"""
Validate `object hash` for container with public ACL and `--generate-key`.
"""
cid = public_container
rpc_endpoint = self.cluster.default_rpc_endpoint
with reporter.step("Put object with generate key"):
result = frostfs_cli.object.put(
rpc_endpoint,
cid,
file_path,
generate_key=True,
no_progress=True,
timeout=CLI_DEFAULT_TIMEOUT,
)
oid = self._parse_oid(result.stdout)
with reporter.step("Get range hash of object with generate key"):
with expect_not_raises():
frostfs_cli.object.hash(
rpc_endpoint,
cid,
oid,
range="0:10",
generate_key=True,
timeout=CLI_DEFAULT_TIMEOUT,
)
@allure.title("Get public container object nodes by native API with generate private key (obj_size={object_size})")
def test_nodes_with_generate_key(self, frostfs_cli: FrostfsCli, public_container: str, file_path: TestFile):
"""
Validate `object nodes` for container with public ACL and `--generate-key`.
"""
cid = public_container
rpc_endpoint = self.cluster.default_rpc_endpoint
with reporter.step("Put object with generate key"):
result = frostfs_cli.object.put(
rpc_endpoint,
cid,
file_path,
no_progress=True,
generate_key=True,
timeout=CLI_DEFAULT_TIMEOUT,
)
oid = self._parse_oid(result.stdout)
with reporter.step("Configure frostfs-cli for alive remote node"):
alive_node = self.cluster.cluster_nodes[0]
node_shell = alive_node.host.get_shell()
rpc_endpoint = alive_node.storage_node.get_rpc_endpoint()
node_frostfs_cli = FrostfsCli(node_shell, FROSTFS_CLI_EXEC)
with reporter.step("Get object nodes with generate key"):
with expect_not_raises():
node_frostfs_cli.object.nodes(
rpc_endpoint,
cid,
oid=oid,
generate_key=True,
timeout=CLI_DEFAULT_TIMEOUT,
)

@@ -0,0 +1,752 @@
import json
import time
import allure
import pytest
import yaml
from frostfs_testlib import reporter
from frostfs_testlib.cli import FrostfsAdm, FrostfsCli
from frostfs_testlib.cli.netmap_parser import NetmapParser
from frostfs_testlib.resources.cli import CLI_DEFAULT_TIMEOUT, FROSTFS_ADM_CONFIG_PATH, FROSTFS_ADM_EXEC, FROSTFS_CLI_EXEC
from frostfs_testlib.resources.common import COMPLEX_OBJECT_CHUNKS_COUNT, COMPLEX_OBJECT_TAIL_SIZE, HOSTING_CONFIG_FILE, MORPH_BLOCK_TIME
from frostfs_testlib.s3 import AwsCliClient, S3ClientWrapper
from frostfs_testlib.s3.interfaces import BucketContainerResolver, VersioningStatus
from frostfs_testlib.storage.cluster import Cluster, ClusterNode, StorageNode
from frostfs_testlib.storage.controllers import ClusterStateController
from frostfs_testlib.storage.controllers.state_managers.config_state_manager import ConfigStateManager
from frostfs_testlib.storage.dataclasses.object_size import ObjectSize
from frostfs_testlib.storage.dataclasses.storage_object_info import Chunk
from frostfs_testlib.storage.grpc_operations.interfaces import GrpcClientWrapper
from frostfs_testlib.testing.cluster_test_base import ClusterTestBase
from frostfs_testlib.testing.test_control import wait_for_success
from frostfs_testlib.utils import datetime_utils
from frostfs_testlib.utils.file_utils import generate_file, get_file_hash
from ...resources.common import S3_POLICY_FILE_LOCATION
def pytest_generate_tests(metafunc: pytest.Metafunc) -> None:
if "ec_policy" not in metafunc.fixturenames:
return
with open(HOSTING_CONFIG_FILE, "r") as file:
hosting_config = yaml.full_load(file)
node_count = len(hosting_config["hosts"])
ec_map = {
4: ["EC 1.1", "EC 2.1", "EC 3.1", "EC 2.2"],
8: ["EC 5.3", "EC 3.2", "EC 7.1", "EC 4.4", "EC 3.1"],
16: ["EC 12.4", "EC 8.4", "EC 5.3", "EC 4.4"],
100: ["EC 12.4", "EC 8.4", "EC 5.3", "EC 4.4"],
}
nearest_node_count = ([4] + (list(filter(lambda x: x <= node_count, ec_map.keys()))))[-1]
metafunc.parametrize("ec_policy, node_count", ((ec_policy, node_count) for ec_policy in ec_map[nearest_node_count]))
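# A worked example of the nearest-node-count selection above: with 10 hosts the keys not
# exceeding the node count are [4, 8], so the 8-node EC policy set is used; with 3 hosts no key
# fits and the prepended fallback 4 is taken; the 100 entry covers large clusters (17+ hosts
# still resolve to 16 until the count reaches 100).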
@allure.title("Initialized remote FrostfsAdm")
@pytest.fixture
def frostfs_remote_adm(cluster: Cluster) -> FrostfsAdm:
node = cluster.cluster_nodes[0]
shell = node.host.get_shell()
return FrostfsAdm(shell, frostfs_adm_exec_path=FROSTFS_ADM_EXEC, config_file=FROSTFS_ADM_CONFIG_PATH)
@pytest.mark.nightly
@pytest.mark.replication
@pytest.mark.ec_replication
class TestECReplication(ClusterTestBase):
def get_node_cli(self, cluster_node: ClusterNode, config: str) -> FrostfsCli:
shell = cluster_node.host.get_shell()
cli = FrostfsCli(shell, frostfs_cli_exec_path=FROSTFS_CLI_EXEC, config_file=config)
# Accumulate control endpoints instead of overwriting the dict, so the restore fixture can reset shard mode on every node touched by the test
if not hasattr(self, "cli_change_shards_mode"):
self.cli_change_shards_mode: dict[FrostfsCli, str] = {}
self.cli_change_shards_mode[cli] = cluster_node.storage_node.get_control_endpoint()
return cli
@pytest.fixture()
def restore_nodes_shards_mode(self):
yield
for cli, endpoint in self.cli_change_shards_mode.items():
cli.shards.set_mode(endpoint, mode="read-write", all=True)
time.sleep(datetime_utils.parse_time(MORPH_BLOCK_TIME))
@pytest.fixture()
def rep_count(self, object_size: ObjectSize) -> int:
rep_count = 3
if object_size.name == "complex":
rep_count *= int(COMPLEX_OBJECT_CHUNKS_COUNT) + 1 if COMPLEX_OBJECT_TAIL_SIZE else int(COMPLEX_OBJECT_CHUNKS_COUNT)
return rep_count
@wait_for_success(120, 5)
def wait_replication(self, total_chunks: int, client: GrpcClientWrapper, cid: str, oid: str, success: bool = True) -> None:
if not success:
assert not self.check_replication(total_chunks, client, cid, oid)
else:
assert self.check_replication(total_chunks, client, cid, oid)
@allure.title("Restore chunk maximum params in network params ")
@pytest.fixture
def restore_network_config(self, frostfs_remote_adm: FrostfsAdm) -> None:
yield
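# Restore the assumed environment defaults (MaxECDataCount=12, MaxECParityCount=5) after the test.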
frostfs_remote_adm.morph.set_config(set_key_value='"MaxECDataCount=12" "MaxECParityCount=5"')
@reporter.step("Get object nodes output ")
def get_object_nodes(self, cli: FrostfsCli, cid: str, oid: str, endpoint: str = None) -> dict:
if not endpoint:
endpoint = self.cluster.default_rpc_endpoint
object_nodes = json.loads(cli.object.nodes(endpoint, cid, oid=oid, json=True, timeout=CLI_DEFAULT_TIMEOUT).stdout)
if object_nodes.get("errors"):
raise object_nodes["errors"]
return object_nodes
@reporter.step("Get parity chunk ")
def get_parity_chunk_object(self, cli: FrostfsCli, cid: str, oid: str, endpoint: str = None) -> Chunk:
chunks = self.get_object_nodes(cli, cid, oid, endpoint)["data_objects"]
return Chunk(**chunks[-1])
@reporter.step("Get data chunk ")
def get_data_chunk_object(self, cli: FrostfsCli, cid: str, oid: str, endpoint: str = None) -> Chunk:
chunks = self.get_object_nodes(cli, cid, oid, endpoint)["data_objects"]
return Chunk(**chunks[0])
@reporter.step("Check replication chunks={total_chunks} chunks ")
def check_replication(self, total_chunks: int, client: GrpcClientWrapper, cid: str, oid: str) -> bool:
object_nodes_info = client.object.chunks.get_all(self.cluster.default_rpc_endpoint, cid, oid)
return len(object_nodes_info) == total_chunks
@pytest.fixture()
def include_excluded_nodes(self, cluster_state_controller: ClusterStateController):
yield
cluster_state_controller.include_all_excluded_nodes()
@allure.title("Disable Policer on all nodes")
@pytest.fixture()
def disable_policer(self, cluster_state_controller: ClusterStateController) -> None:
with reporter.step("Disable policer for nodes"):
cluster_state_controller.manager(ConfigStateManager).set_on_all_nodes(
service_type=StorageNode, values={"policer": {"unsafe_disable": True}}
)
yield
with reporter.step("Enable policer for nodes"):
cluster_state_controller.start_stopped_hosts()
cluster_state_controller.manager(ConfigStateManager).revert_all()
@wait_for_success(300, 15)
@reporter.step("Check count nodes chunks")
def wait_sync_count_chunks_nodes(self, grpc_client: GrpcClientWrapper, cid: str, oid: str, count: int):
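# Each chunk reports the nodes that confirmed it; compare the flattened node list size with the expected count.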
all_chunks_after_include_node = grpc_client.object.chunks.get_all(self.cluster.default_rpc_endpoint, cid, oid)
chunks_nodes = [node for chunk in all_chunks_after_include_node for node in chunk.confirmed_nodes]
assert len(chunks_nodes) == count
@allure.title("Create container with EC policy (size={object_size})")
def test_create_container_with_ec_policy(self, object_size: ObjectSize, rep_count: int, grpc_client: GrpcClientWrapper) -> None:
test_file = generate_file(object_size.value)
with reporter.step("Create container."):
cid = grpc_client.container.create(self.cluster.default_rpc_endpoint, policy="EC 2.1", await_mode=True)
with reporter.step("Put object in container."):
oid = grpc_client.object.put(test_file, cid, self.cluster.default_rpc_endpoint)
with reporter.step("Check replication chunks."):
assert self.check_replication(rep_count, grpc_client, cid, oid)
@allure.title("Lose node with chunk data")
@pytest.mark.failover
def test_lose_node_with_data_chunk(
self,
grpc_client: GrpcClientWrapper,
simple_object_size: ObjectSize,
cluster_state_controller: ClusterStateController,
disable_policer: None,
) -> None:
with reporter.step("Create container."):
cid = grpc_client.container.create(self.cluster.default_rpc_endpoint, policy="EC 3.1", await_mode=True)
with reporter.step("Put object in container."):
test_file = generate_file(simple_object_size.value)
oid = grpc_client.object.put(test_file, cid, self.cluster.default_rpc_endpoint)
with reporter.step("Check chunk replication on 4 nodes."):
assert self.check_replication(4, grpc_client, cid, oid)
with reporter.step("Search node data chunk"):
chunk = grpc_client.object.chunks.get_first_data(self.cluster.default_rpc_endpoint, cid, oid=oid)
chunk_node = grpc_client.object.chunks.get_chunk_node(self.cluster, chunk)
with reporter.step("Stop node with data chunk."):
cluster_state_controller.stop_node_host(chunk_node[0], "hard")
with reporter.step("Get object"):
node = list(set(self.cluster.cluster_nodes) - {chunk_node[0]})[0]
grpc_client.object.get(cid, oid, node.storage_node.get_rpc_endpoint())
with reporter.step("Start stopped node, and check replication chunks."):
cluster_state_controller.start_node_host(chunk_node[0])
self.wait_replication(4, grpc_client, cid, oid)
@allure.title("Lose node with chunk parity")
@pytest.mark.failover
def test_lose_node_with_parity_chunk(
self,
grpc_client: GrpcClientWrapper,
simple_object_size: ObjectSize,
cluster_state_controller: ClusterStateController,
disable_policer: None,
) -> None:
with reporter.step("Create container."):
cid = grpc_client.container.create(self.cluster.default_rpc_endpoint, policy="EC 3.1", await_mode=True)
with reporter.step("Put object in container."):
test_file = generate_file(simple_object_size.value)
oid = grpc_client.object.put(test_file, cid, self.cluster.default_rpc_endpoint)
with reporter.step("Check chunk replication on 4 nodes."):
assert self.check_replication(4, grpc_client, cid, oid)
with reporter.step("Search node with parity chunk"):
chunk = grpc_client.object.chunks.get_parity(self.cluster.default_rpc_endpoint, cid, oid=oid)
chunk_node = grpc_client.object.chunks.get_chunk_node(self.cluster, chunk)[0]
with reporter.step("Stop node parity chunk."):
cluster_state_controller.stop_node_host(chunk_node, "hard")
with reporter.step("Get object, expect success."):
node = list(set(self.cluster.cluster_nodes) - {chunk_node})[0]
grpc_client.object.get(cid, oid, node.storage_node.get_rpc_endpoint())
with reporter.step("Start stoped node, and check replication chunks."):
cluster_state_controller.start_node_host(chunk_node)
self.wait_replication(4, grpc_client, cid, oid)
@allure.title("Lose nodes with chunk data and parity")
@pytest.mark.failover
def test_lose_nodes_data_chunk_and_parity(
self,
grpc_client: GrpcClientWrapper,
simple_object_size: ObjectSize,
cluster_state_controller: ClusterStateController,
disable_policer: None,
) -> None:
with reporter.step("Create container."):
cid = grpc_client.container.create(self.cluster.default_rpc_endpoint, policy="EC 3.1", await_mode=True)
with reporter.step("Put object in container."):
test_file = generate_file(simple_object_size.value)
oid = grpc_client.object.put(test_file, cid, self.cluster.default_rpc_endpoint)
with reporter.step("Check count chunks, expect 4."):
assert self.check_replication(4, grpc_client, cid, oid)
with reporter.step("Search node data chunk and node parity chunk"):
data_chunk = grpc_client.object.chunks.get_first_data(self.cluster.default_rpc_endpoint, cid, oid=oid)
data_chunk_node = grpc_client.object.chunks.get_chunk_node(self.cluster, data_chunk)[0]
parity_chunk = grpc_client.object.chunks.get_parity(self.cluster.default_rpc_endpoint, cid, oid=oid)
parity_chunk_node = grpc_client.object.chunks.get_chunk_node(self.cluster, parity_chunk)[0]
with reporter.step("Stop node with data chunk."):
cluster_state_controller.stop_node_host(data_chunk_node, "hard")
with reporter.step("Get object"):
node = list(set(self.cluster.cluster_nodes) - {data_chunk_node, parity_chunk_node})[0]
grpc_client.object.get(cid, oid, node.storage_node.get_rpc_endpoint())
with reporter.step("Start stopped host and check chunks."):
cluster_state_controller.start_node_host(data_chunk_node)
self.wait_replication(4, grpc_client, cid, oid)
with reporter.step("Stop node with parity chunk and one all node."):
cluster_state_controller.stop_node_host(data_chunk_node, "hard")
cluster_state_controller.stop_node_host(parity_chunk_node, "hard")
with reporter.step("Get object, expect error."):
with pytest.raises(RuntimeError):
grpc_client.object.get(cid, oid, node.storage_node.get_rpc_endpoint())
with reporter.step("Start stopped nodes and check replication chunk."):
cluster_state_controller.start_stopped_hosts()
self.wait_replication(4, grpc_client, cid, oid)
@allure.title("Policer work with chunk")
@pytest.mark.failover
def test_work_policer_with_nodes(
self,
simple_object_size: ObjectSize,
grpc_client: GrpcClientWrapper,
cluster_state_controller: ClusterStateController,
include_excluded_nodes: None,
) -> None:
with reporter.step("Create container."):
cid = grpc_client.container.create(self.cluster.default_rpc_endpoint, policy="EC 2.1", await_mode=True)
with reporter.step("Put object on container."):
test_file = generate_file(simple_object_size.value)
oid = grpc_client.object.put(test_file, cid, self.cluster.default_rpc_endpoint)
with reporter.step("Check count chunks nodes on 3."):
assert self.check_replication(3, grpc_client, cid, oid)
with reporter.step("Search node with chunk."):
data_chunk = grpc_client.object.chunks.get_first_data(self.cluster.default_rpc_endpoint, cid, oid=oid)
node_data_chunk = grpc_client.object.chunks.get_chunk_node(self.cluster, data_chunk)[0]
first_all_chunks = grpc_client.object.chunks.get_all(self.cluster.default_rpc_endpoint, cid, oid)
with reporter.step("Remove chunk node from network map"):
cluster_state_controller.remove_node_from_netmap([node_data_chunk.storage_node])
with reporter.step("Tick epoch."):
alive_node = list(set(self.cluster.cluster_nodes) - {node_data_chunk})[0]
self.tick_epoch(alive_node.storage_node, 2)
with reporter.step("Wait replication chunk with different node."):
node = grpc_client.object.chunks.search_node_without_chunks(
first_all_chunks, self.cluster, alive_node.storage_node.get_rpc_endpoint()
)[0]
self.wait_replication(3, grpc_client, cid, oid)
with reporter.step("Get new chunks"):
second_all_chunks = grpc_client.object.chunks.get_all(node.storage_node.get_rpc_endpoint(), cid, oid)
with reporter.step("Check that oid no change."):
assert [chunk for chunk in second_all_chunks if data_chunk.object_id == chunk.object_id]
with reporter.step("Include node in netmap"):
cluster_state_controller.include_node_to_netmap(node_data_chunk.storage_node, alive_node.storage_node)
self.wait_sync_count_chunks_nodes(grpc_client, cid, oid, 3)
@allure.title("EC X.Y combinations (nodes={node_count},policy={ec_policy},size={object_size})")
def test_create_container_with_difference_count_nodes(
self, node_count: int, ec_policy: str, object_size: ObjectSize, grpc_client: GrpcClientWrapper
) -> None:
with reporter.step("Create container."):
expected_chunks = int(ec_policy.split(" ")[1].split(".")[0]) + int(ec_policy.split(" ")[1].split(".")[1])
if "complex" in object_size.name:
expected_chunks *= 4
cid = grpc_client.container.create(self.cluster.default_rpc_endpoint, policy=ec_policy, await_mode=True)
with reporter.step("Put object in container."):
test_file = generate_file(object_size.value)
oid = grpc_client.object.put(test_file, cid, self.cluster.default_rpc_endpoint)
with reporter.step("Check count object chunks."):
chunks = grpc_client.object.chunks.get_all(self.cluster.default_rpc_endpoint, cid, oid)
assert len(chunks) == expected_chunks
with reporter.step("get object and check hash."):
file_with_node = grpc_client.object.get(cid, oid, self.cluster.default_rpc_endpoint)
assert get_file_hash(test_file) == get_file_hash(file_with_node)
@allure.title("Request PUT with copies_number flag")
def test_put_object_with_copies_number(self, grpc_client: GrpcClientWrapper, simple_object_size: ObjectSize) -> None:
with reporter.step("Create container."):
cid = grpc_client.container.create(self.cluster.default_rpc_endpoint, policy="EC 2.1", await_mode=True)
with reporter.step("Put object in container with copies number = 1"):
test_file = generate_file(simple_object_size.value)
oid = grpc_client.object.put(test_file, cid, self.cluster.default_rpc_endpoint, copies_number=1)
with reporter.step("Check that count chunks > 1."):
chunks = grpc_client.object.chunks.get_all(self.cluster.default_rpc_endpoint, cid, oid)
assert len(chunks) > 1
@allure.title("Request PUT and 1 node off")
@pytest.mark.failover
def test_put_object_with_off_cnr_node(
self, grpc_client: GrpcClientWrapper, cluster_state_controller: ClusterStateController, simple_object_size: ObjectSize
) -> None:
with reporter.step("Create container."):
cid = grpc_client.container.create(self.cluster.default_rpc_endpoint, policy="EC 3.1", await_mode=True)
with reporter.step("Stop one node in container nodes"):
cluster_state_controller.stop_node_host(self.cluster.cluster_nodes[1], "hard")
with reporter.step("Put object in container, expect success for EC container."):
test_file = generate_file(simple_object_size.value)
grpc_client.object.put(test_file, cid, self.cluster.default_rpc_endpoint, copies_number=1)
@allure.title("Request PUT (size={object_size})")
def test_put_object_with_ec_cnr(self, grpc_client: GrpcClientWrapper, object_size: ObjectSize) -> None:
with reporter.step("Create container."):
cid = grpc_client.container.create(self.cluster.default_rpc_endpoint, policy="EC 2.1", await_mode=True)
with reporter.step("Put object in container"):
test_file = generate_file(object_size.value)
oid = grpc_client.object.put(test_file, cid, self.cluster.default_rpc_endpoint)
with reporter.step("Get chunks object."):
chunks = grpc_client.object.chunks.get_all(self.cluster.default_rpc_endpoint, cid, oid)
with reporter.step("Check header chunks object"):
for chunk in chunks:
chunk_head = grpc_client.object.head(
cid, chunk.object_id, self.cluster.default_rpc_endpoint, is_raw=True, json_output=False
).stdout
assert "EC header:" in chunk_head
@allure.title("Request GET (size={object_size})")
def test_get_object_in_ec_cnr(self, grpc_client: GrpcClientWrapper, object_size: ObjectSize) -> None:
with reporter.step("Create container."):
cid = grpc_client.container.create(self.cluster.default_rpc_endpoint, policy="EC 2.1 CBF 1", await_mode=True)
with reporter.step("Put object in container"):
test_file = generate_file(object_size.value)
hash_origin_file = get_file_hash(test_file)
oid = grpc_client.object.put(test_file, cid, self.cluster.default_rpc_endpoint)
with reporter.step("Get id all chunks."):
chunks = grpc_client.object.chunks.get_all(self.cluster.default_rpc_endpoint, cid, oid)
with reporter.step("Search chunk node and not chunks node."):
chunk_node = grpc_client.object.chunks.get_chunk_node(self.cluster, chunks[0])[0]
not_chunk_node = grpc_client.object.chunks.search_node_without_chunks(chunks, self.cluster, self.cluster.default_rpc_endpoint)[
0
]
with reporter.step("GET request with chunk node, expect success"):
file_one = grpc_client.object.get(cid, oid, chunk_node.storage_node.get_rpc_endpoint())
hash_file_one = get_file_hash(file_one)
assert hash_file_one == hash_origin_file
with reporter.step("Get request with not chunk node"):
file_two = grpc_client.object.get(cid, oid, not_chunk_node.storage_node.get_rpc_endpoint())
hash_file_two = get_file_hash(file_two)
assert hash_file_two == hash_file_one == hash_origin_file
@allure.title("Request SEARCH with flags 'root' (size={object_size})")
def test_search_object_in_ec_cnr_root_flags(self, grpc_client: GrpcClientWrapper, object_size: ObjectSize) -> None:
with reporter.step("Create container."):
cid = grpc_client.container.create(self.cluster.default_rpc_endpoint, policy="EC 2.1", await_mode=True)
with reporter.step("Put object in container"):
test_file = generate_file(object_size.value)
oid = grpc_client.object.put(test_file, cid, self.cluster.default_rpc_endpoint)
with reporter.step("Search operation with --root flags"):
search_output = grpc_client.object.search(cid, self.cluster.default_rpc_endpoint, root=True)
assert search_output[0] == oid
@allure.title("Request SEARCH check valid chunk id (size={object_size})")
def test_search_object_in_ec_cnr_chunk_id(self, grpc_client: GrpcClientWrapper, object_size: ObjectSize) -> None:
with reporter.step("Create container."):
cid = grpc_client.container.create(self.cluster.default_rpc_endpoint, policy="EC 2.1", await_mode=True)
with reporter.step("Put object in container"):
test_file = generate_file(object_size.value)
oid = grpc_client.object.put(test_file, cid, self.cluster.default_rpc_endpoint)
with reporter.step("Search operation object"):
search_output = grpc_client.object.search(cid, self.cluster.default_rpc_endpoint)
chunks = grpc_client.object.chunks.get_all(self.cluster.default_rpc_endpoint, cid, oid)
for chunk in chunks:
assert chunk.object_id in search_output
@allure.title("Request SEARCH check no chunk index info (size={object_size})")
def test_search_object_in_ec_cnr(self, grpc_client: GrpcClientWrapper, object_size: ObjectSize) -> None:
with reporter.step("Create container."):
cid = grpc_client.container.create(self.cluster.default_rpc_endpoint, policy="EC 2.1", await_mode=True)
with reporter.step("Put object in container"):
test_file = generate_file(object_size.value)
oid = grpc_client.object.put(test_file, cid, self.cluster.default_rpc_endpoint)
with reporter.step("Search operation all chunk"):
chunks = grpc_client.object.chunks.get_all(self.cluster.default_rpc_endpoint, cid, oid)
for chunk in chunks:
chunk_search = grpc_client.object.search(cid, self.cluster.default_rpc_endpoint, oid=chunk.object_id)
assert "index" not in chunk_search
@allure.title("Request DELETE (size={object_size})")
@pytest.mark.failover
def test_delete_object_in_ec_cnr(
self, grpc_client: GrpcClientWrapper, object_size: ObjectSize, cluster_state_controller: ClusterStateController
) -> None:
with reporter.step("Create container."):
cid = grpc_client.container.create(self.cluster.default_rpc_endpoint, policy="EC 2.1", await_mode=True)
with reporter.step("Put object in container."):
test_file = generate_file(object_size.value)
oid = grpc_client.object.put(test_file, cid, self.cluster.default_rpc_endpoint)
with reporter.step("Check object chunks nodes."):
chunks = grpc_client.object.chunks.get_all(self.cluster.default_rpc_endpoint, cid, oid)
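# EC 2.1 yields 3 chunks per stored part; a complex object is split into 4 parts in this setup, hence 12.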
replication_count = 3 if object_size.name == "simple" else 3 * 4
assert len(chunks) == replication_count
with reporter.step("Delete object"):
grpc_client.object.delete(cid, oid, self.cluster.default_rpc_endpoint)
with reporter.step("Check that delete all chunks."):
for chunk in chunks:
with pytest.raises(RuntimeError, match="object already removed"):
grpc_client.object.head(cid, chunk.object_id, self.cluster.default_rpc_endpoint)
with reporter.step("Put second object."):
oid_second = grpc_client.object.put(test_file, cid, self.cluster.default_rpc_endpoint)
with reporter.step("Check second object chunks nodes."):
chunks_second_object = grpc_client.object.chunks.get_all(self.cluster.default_rpc_endpoint, cid, oid_second)
assert len(chunks_second_object) == replication_count
with reporter.step("Stop nodes with chunk."):
chunk_node = grpc_client.object.chunks.get_chunk_node(self.cluster, chunks_second_object[0])
cluster_state_controller.stop_node_host(chunk_node[0], "hard")
with reporter.step("Delete second object"):
cluster_nodes = list(set(self.cluster.cluster_nodes) - {chunk_node[0]})
grpc_client.object.delete(cid, oid_second, cluster_nodes[0].storage_node.get_rpc_endpoint())
with reporter.step("Check that delete all chunk second object."):
for chunk in chunks_second_object:
with pytest.raises(RuntimeError, match="object already removed|object not found"):
grpc_client.object.head(cid, chunk.object_id, cluster_nodes[0].storage_node.get_rpc_endpoint())
@allure.title("Request LOCK (size={object_size})")
@pytest.mark.failover
def test_lock_object_in_ec_cnr(
self,
grpc_client: GrpcClientWrapper,
frostfs_cli: FrostfsCli,
object_size: ObjectSize,
cluster_state_controller: ClusterStateController,
include_excluded_nodes: None,
) -> None:
with reporter.step("Create container."):
cid = grpc_client.container.create(self.cluster.default_rpc_endpoint, policy="EC 2.1", await_mode=True)
with reporter.step("Put object in container."):
test_file = generate_file(object_size.value)
oid = grpc_client.object.put(test_file, cid, self.cluster.default_rpc_endpoint)
with reporter.step("Check object chunks nodes."):
chunks_object = grpc_client.object.chunks.get_all(self.cluster.default_rpc_endpoint, cid, oid)
replication_count = 3 if object_size.name == "simple" else 3 * 4
assert len(chunks_object) == replication_count
with reporter.step("Put LOCK in object."):
# TODO Rework for the grpc_client when the netmap methods are implemented
epoch = frostfs_cli.netmap.epoch(self.cluster.default_rpc_endpoint, timeout=CLI_DEFAULT_TIMEOUT).stdout.strip()
grpc_client.object.lock(cid, oid, self.cluster.default_rpc_endpoint, expire_at=(int(epoch) + 5))
with reporter.step("Check don`t delete chunk"):
for chunk in chunks_object:
with pytest.raises(RuntimeError, match="Lock EC chunk failed"):
grpc_client.object.delete(cid, chunk.object_id, self.cluster.default_rpc_endpoint)
with reporter.step("Check enable LOCK object"):
with pytest.raises(RuntimeError, match="object is locked"):
grpc_client.object.delete(cid, oid, self.cluster.default_rpc_endpoint)
with reporter.step("Remove node in netmap."):
chunk_node = grpc_client.object.chunks.get_chunk_node(self.cluster, chunks_object[0])[0]
alive_node = list(set(self.cluster.cluster_nodes) - {chunk_node})[0]
cluster_state_controller.remove_node_from_netmap([chunk_node.storage_node])
with reporter.step("Check don`t delete chunk."):
for chunk in chunks_object:
with pytest.raises(RuntimeError, match="Lock EC chunk failed|object not found"):
grpc_client.object.delete(cid, chunk.object_id, alive_node.storage_node.get_rpc_endpoint())
with reporter.step("Check enable LOCK object"):
with pytest.raises(RuntimeError, match="object is locked"):
grpc_client.object.delete(cid, oid, alive_node.storage_node.get_rpc_endpoint())
with reporter.step("Include node in netmap"):
cluster_state_controller.include_node_to_netmap(chunk_node.storage_node, alive_node.storage_node)
@allure.title("Output MaxEC* params in frostf-scli (type={type_shards})")
@pytest.mark.parametrize("type_shards", ["Maximum count of data shards", "Maximum count of parity shards"])
def test_maxec_info_with_output_cli(self, frostfs_cli: FrostfsCli, type_shards: str) -> None:
with reporter.step("Get and check params"):
# TODO Rework for the grpc_client when the netmap methods are implemented
net_info = frostfs_cli.netmap.netinfo(self.cluster.default_rpc_endpoint).stdout
assert type_shards in net_info
@allure.title("Change MaxEC*Count params")
def test_change_max_data_shards_params(
self, frostfs_remote_adm: FrostfsAdm, frostfs_cli: FrostfsCli, restore_network_config: None
) -> None:
# TODO Rework for the grpc_client when the netmap methods are implemented
with reporter.step("Get now params MaxECDataCount and MaxECParityCount"):
node_netinfo = NetmapParser.netinfo(
frostfs_cli.netmap.netinfo(self.cluster.default_rpc_endpoint, timeout=CLI_DEFAULT_TIMEOUT).stdout
)
with reporter.step("Change params"):
frostfs_remote_adm.morph.set_config(set_key_value='"MaxECDataCount=5" "MaxECParityCount=3"')
with reporter.step("Get update params"):
update_net_info = NetmapParser.netinfo(
frostfs_cli.netmap.netinfo(self.cluster.default_rpc_endpoint, timeout=CLI_DEFAULT_TIMEOUT).stdout
)
with reporter.step("Check old and new params difference"):
assert (
update_net_info.maximum_count_of_data_shards not in node_netinfo.maximum_count_of_data_shards
and update_net_info.maximum_count_of_parity_shards not in node_netinfo.maximum_count_of_parity_shards
)
@allure.title("Check maximum count data and parity shards")
def test_change_over_max_parity_shards_params(self, frostfs_remote_adm: FrostfsAdm) -> None:
with reporter.step("Change over maximum params shards count."):
with pytest.raises(RuntimeError, match="MaxECDataCount and MaxECParityCount must be <= 256"):
frostfs_remote_adm.morph.set_config(set_key_value='"MaxECDataCount=130" "MaxECParityCount=130"')
@allure.title("Create container with EC policy and SELECT (SELECT={select})")
@pytest.mark.parametrize("select", [2, 4])
def test_create_container_with_select(self, select: int, grpc_client: GrpcClientWrapper) -> None:
with reporter.step("Create container"):
policy = f"EC 1.1 CBF 1 SELECT {select} FROM *"
cid = grpc_client.container.create(self.cluster.default_rpc_endpoint, policy=policy, await_mode=True)
with reporter.step("Check container nodes decomposed"):
container_nodes = grpc_client.container.nodes(self.cluster.default_rpc_endpoint, cid, self.cluster)
assert len(container_nodes) == select
@allure.title("Create container with EC policy and CBF (CBF={cbf})")
@pytest.mark.parametrize("cbf, expected_nodes", [(1, 2), (2, 4)])
def test_create_container_with_cbf(self, cbf: int, expected_nodes: int, grpc_client: GrpcClientWrapper) -> None:
with reporter.step("Create container."):
policy = f"EC 1.1 CBF {cbf}"
cid = grpc_client.container.create(self.cluster.default_rpc_endpoint, policy=policy, await_mode=True)
with reporter.step("Check expected container nodes."):
container_nodes = grpc_client.container.nodes(self.cluster.default_rpc_endpoint, cid, self.cluster)
assert len(container_nodes) == expected_nodes
@allure.title("Create container with EC policy and FILTER")
def test_create_container_with_filter(self, grpc_client: GrpcClientWrapper, simple_object_size: ObjectSize) -> None:
with reporter.step("Create Container."):
policy = "EC 1.1 IN RUS SELECT 2 FROM RU AS RUS FILTER Country EQ Russia AS RU"
cid = grpc_client.container.create(self.cluster.default_rpc_endpoint, policy=policy, await_mode=True)
with reporter.step("Put object in container."):
test_file = generate_file(simple_object_size.value)
oid = grpc_client.object.put(test_file, cid, self.cluster.default_rpc_endpoint)
with reporter.step("Check object is decomposed exclusively on Russian nodes"):
data_chunk = grpc_client.object.chunks.get_first_data(self.cluster.default_rpc_endpoint, cid, oid=oid)
parity_chunk = grpc_client.object.chunks.get_parity(self.cluster.default_rpc_endpoint, cid, oid=oid)
node_data_chunk = grpc_client.object.chunks.get_chunk_node(self.cluster, data_chunk)
node_parity_chunk = grpc_client.object.chunks.get_chunk_node(self.cluster, parity_chunk)
for node in [node_data_chunk[1], node_parity_chunk[1]]:
assert "Russia" in node.country
@allure.title("Evacuation shard with chunk (type={type})")
@pytest.mark.parametrize("type, get_chunk", [("data", get_data_chunk_object), ("parity", get_parity_chunk_object)])
def test_evacuation_data_shard(
self,
restore_nodes_shards_mode: None,
frostfs_cli: FrostfsCli,
grpc_client: GrpcClientWrapper,
max_object_size: int,
type: str,
get_chunk,
) -> None:
with reporter.step("Create container."):
cid = grpc_client.container.create(self.cluster.default_rpc_endpoint, policy="EC 1.1 CBF 1", await_mode=True)
with reporter.step("Put object in container."):
test_file = generate_file(max_object_size - 1000)
oid = grpc_client.object.put(test_file, cid, self.cluster.default_rpc_endpoint)
with reporter.step("Get object chunks."):
chunk = get_chunk(self, frostfs_cli, cid, oid, self.cluster.default_rpc_endpoint)
chunk_node = grpc_client.object.chunks.get_chunk_node(self.cluster, chunk)
frostfs_node_cli = self.get_node_cli(chunk_node[0], config=chunk_node[0].storage_node.get_remote_wallet_config_path())
with reporter.step("Search shards chunk"):
time.sleep(datetime_utils.parse_time(MORPH_BLOCK_TIME) * 2)
shard_id = grpc_client.object.chunks.get_shard_chunk(chunk_node[0], chunk)
with reporter.step("Enable evacuation for shard"):
frostfs_node_cli.shards.set_mode(chunk_node[0].storage_node.get_control_endpoint(), mode="read-only", id=shard_id)
frostfs_node_cli.shards.evacuation_start(chunk_node[0].storage_node.get_control_endpoint(), shard_id, await_mode=True)
with reporter.step("Get object after evacuation shard"):
grpc_client.object.get(cid, oid, self.cluster.default_rpc_endpoint)
@allure.title("[NEGATIVE] Don`t create more 1 EC policy")
def test_more_one_ec_policy(self, grpc_client: GrpcClientWrapper) -> None:
with reporter.step("Create container with policy - 'EC 2.1 EC 1.1'"):
with pytest.raises(RuntimeError, match="can't parse placement policy"):
grpc_client.container.create(
self.cluster.default_rpc_endpoint, policy="EC 2.1 EC 1.1 CBF 1 SELECT 4 FROM *", await_mode=True
)
@allure.title("Create bucket with EC policy (s3_client={s3_client})")
@pytest.mark.parametrize("s3_policy, s3_client", [(S3_POLICY_FILE_LOCATION, AwsCliClient)], indirect=True)
def test_create_bucket_with_ec_location(
self, s3_client: S3ClientWrapper, bucket_container_resolver: BucketContainerResolver, grpc_client: GrpcClientWrapper
) -> None:
with reporter.step("Create bucket with EC location constrain"):
bucket = s3_client.create_bucket(location_constraint="ec3.1")
with reporter.step("Resolve container bucket"):
cid = bucket_container_resolver.resolve(self.cluster.cluster_nodes[0], bucket)
with reporter.step("Validate container policy"):
container = grpc_client.container.get(self.cluster.default_rpc_endpoint, cid, json_mode=True, timeout=CLI_DEFAULT_TIMEOUT)
assert container
@allure.title("Bucket object count chunks (s3_client={s3_client}, size={object_size})")
@pytest.mark.parametrize("s3_policy, s3_client", [(S3_POLICY_FILE_LOCATION, AwsCliClient)], indirect=True)
def test_count_chunks_bucket_with_ec_location(
self,
s3_client: S3ClientWrapper,
bucket_container_resolver: BucketContainerResolver,
grpc_client: GrpcClientWrapper,
object_size: ObjectSize,
) -> None:
with reporter.step("Create bucket with EC location constrain"):
bucket = s3_client.create_bucket(location_constraint="ec3.1")
with reporter.step("Enable versioning object"):
s3_client.put_bucket_versioning(bucket, VersioningStatus.ENABLED)
bucket_status = s3_client.get_bucket_versioning_status(bucket)
assert bucket_status == VersioningStatus.ENABLED.value
with reporter.step("Put object in bucket"):
test_file = generate_file(object_size.value)
bucket_object = s3_client.put_object(bucket, test_file)
with reporter.step("Watch replication count chunks"):
cid = bucket_container_resolver.resolve(self.cluster.cluster_nodes[0], bucket)
chunks = grpc_client.object.chunks.get_all(self.cluster.default_rpc_endpoint, cid, bucket_object)
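# ec3.1 stores 3 data + 1 parity chunks per part; a complex object is split into 4 parts, giving 16.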
expect_chunks = 4 if object_size.name == "simple" else 16
assert len(chunks) == expect_chunks
@allure.title("Replication chunk after drop (size={object_size})")
def test_drop_chunk_and_replication(self, grpc_client: GrpcClientWrapper, object_size: ObjectSize, rep_count: int) -> None:
with reporter.step("Create container"):
cid = grpc_client.container.create(self.cluster.default_rpc_endpoint, policy="EC 2.1 CBF 1", await_mode=True)
with reporter.step("Put object"):
test_file = generate_file(object_size.value)
oid = grpc_client.object.put(test_file, cid, self.cluster.default_rpc_endpoint)
with reporter.step("Get all chunks"):
data_chunk = grpc_client.object.chunks.get_first_data(self.cluster.default_rpc_endpoint, cid, oid=oid)
with reporter.step("Search chunk node"):
chunk_node = grpc_client.object.chunks.get_chunk_node(self.cluster, data_chunk)
shell_chunk_node = chunk_node[0].host.get_shell()
with reporter.step("Get replication count"):
assert self.check_replication(rep_count, grpc_client, cid, oid)
with reporter.step("Delete chunk"):
frostfs_node_cli = FrostfsCli(
shell_chunk_node,
frostfs_cli_exec_path=FROSTFS_CLI_EXEC,
config_file=chunk_node[0].storage_node.get_remote_wallet_config_path(),
)
frostfs_node_cli.control.drop_objects(chunk_node[0].storage_node.get_control_endpoint(), f"{cid}/{data_chunk.object_id}")
with reporter.step("Wait replication count after drop one chunk"):
self.wait_replication(rep_count, grpc_client, cid, oid)


@ -0,0 +1,111 @@
import logging
import random
import allure
import pytest
from frostfs_testlib import reporter
from frostfs_testlib.shell import Shell
from frostfs_testlib.steps.cli.container import create_container
from frostfs_testlib.steps.cli.object import head_object, put_object
from frostfs_testlib.storage.cluster import Cluster
from frostfs_testlib.storage.controllers.cluster_state_controller import ClusterStateController
from frostfs_testlib.storage.dataclasses.object_size import ObjectSize
from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
from frostfs_testlib.testing.cluster_test_base import ClusterTestBase
from frostfs_testlib.utils.failover_utils import wait_object_replication
from frostfs_testlib.utils.file_utils import generate_file
logger = logging.getLogger("NeoLogger")
OBJECT_ATTRIBUTES = {"common_key": "common_value"}
WAIT_FOR_REPLICATION = 60
# Adding failover mark because it may make cluster unhealthy
@pytest.mark.sanity
@pytest.mark.failover
@pytest.mark.replication
class TestReplication(ClusterTestBase):
@allure.title("Replication (obj_size={object_size})")
def test_replication(
self,
default_wallet: WalletInfo,
client_shell: Shell,
cluster: Cluster,
object_size: ObjectSize,
cluster_state_controller: ClusterStateController,
):
nodes_count = len(cluster.cluster_nodes)
node_for_rep = random.choice(cluster.cluster_nodes)
alive_nodes = [node for node in cluster.cluster_nodes if node != node_for_rep]
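# The placement rule below pins one replica to the chosen node via its UN-LOCODE
# and spreads the remaining replicas across all other nodes.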
cid = create_container(
wallet=default_wallet,
shell=client_shell,
endpoint=cluster.default_rpc_endpoint,
rule=f"REP 1 IN SELF_PLACE REP {nodes_count - 1} IN OTHER_PLACE CBF 1 "
"SELECT 1 FROM SELF AS SELF_PLACE "
f"SELECT {nodes_count - 1} FROM OTHER AS OTHER_PLACE "
f"FILTER 'UN-LOCODE' EQ '{node_for_rep.storage_node.get_un_locode()}' AS SELF "
f"FILTER 'UN-LOCODE' NE '{node_for_rep.storage_node.get_un_locode()}' AS OTHER",
)
cluster_state_controller.stop_node_host(node_for_rep, mode="hard")
file_path = generate_file(object_size.value)
with reporter.step("Put object"):
oid = put_object(
wallet=default_wallet,
path=file_path,
cid=cid,
shell=client_shell,
attributes=OBJECT_ATTRIBUTES,
copies_number=3,
endpoint=random.choice(alive_nodes).storage_node.get_rpc_endpoint(),
timeout="45s",
)
cluster_state_controller.start_node_host(node_for_rep)
with reporter.step(f"Wait for replication."):
object_nodes = wait_object_replication(
cid=cid,
oid=oid,
expected_copies=len(self.cluster.cluster_nodes),
shell=client_shell,
nodes=self.cluster.storage_nodes,
)
with reporter.step("Check attributes"):
for node in object_nodes:
header_info = head_object(
wallet=default_wallet,
oid=oid,
cid=cid,
shell=self.shell,
endpoint=node.get_rpc_endpoint(),
is_direct=True,
)["header"]
attributes = header_info["attributes"]
for attribute_key, attribute_value in OBJECT_ATTRIBUTES.items():
assert attribute_key in attributes, f"{attribute_key} not found in {header_info}"
assert header_info["attributes"].get(attribute_key) == str(attribute_value), (
f"{attribute_key} value not equal: "
f"got attribute value: {attributes.get(attribute_key)}"
f"expected attribute value: {attribute_value}"
)
# TODO: Research why this fails
# with reporter.step("Cleanup"):
# delete_object(
# wallet=default_wallet,
# cid=cid,
# oid=oid,
# shell=client_shell,
# endpoint=cluster.default_rpc_endpoint,
# )
# delete_container(
# wallet=default_wallet,
# cid=cid,
# shell=client_shell,
# endpoint=cluster.default_rpc_endpoint,
# )


@ -0,0 +1,88 @@
import logging
import allure
import pytest
from frostfs_testlib import reporter
from frostfs_testlib.cli.frostfs_cli.cli import FrostfsCli
from frostfs_testlib.resources.wellknown_acl import PUBLIC_ACL
from frostfs_testlib.steps.acl import bearer_token_base64_from_file
from frostfs_testlib.steps.cli.container import create_container
from frostfs_testlib.steps.http.http_gate import upload_via_http_gate_curl, verify_object_hash
from frostfs_testlib.storage.cluster import Cluster
from frostfs_testlib.storage.dataclasses import ape
from frostfs_testlib.storage.dataclasses.object_size import ObjectSize
from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
from frostfs_testlib.testing.cluster_test_base import ClusterTestBase
from frostfs_testlib.utils.file_utils import generate_file
from ....helpers.bearer_token import create_bearer_token
logger = logging.getLogger("NeoLogger")
@pytest.mark.http_gate
@pytest.mark.http_put
class Test_http_bearer(ClusterTestBase):
PLACEMENT_RULE = "REP 2 IN X CBF 1 SELECT 2 FROM * AS X"
@pytest.fixture(scope="class")
def user_container(self, frostfs_cli: FrostfsCli, default_wallet: WalletInfo, cluster: Cluster) -> str:
with reporter.step("Create container"):
cid = create_container(default_wallet, self.shell, self.cluster.default_rpc_endpoint, self.PLACEMENT_RULE, PUBLIC_ACL)
with reporter.step("Deny PUT via APE rule to container"):
role_condition = ape.Condition.by_role(ape.Role.OWNER)
rule = ape.Rule(ape.Verb.DENY, ape.ObjectOperations.PUT, role_condition)
frostfs_cli.ape_manager.add(
cluster.default_rpc_endpoint, rule.chain_id, target_name=cid, target_type="container", rule=rule.as_string()
)
with reporter.step("Wait for one block"):
self.wait_for_blocks()
return cid
@pytest.fixture(scope="class")
def bearer_token(self, frostfs_cli: FrostfsCli, user_container: str, temp_directory: str, cluster: Cluster) -> str:
with reporter.step(f"Create bearer token for {ape.Role.OTHERS} with all operations allowed"):
role_condition = ape.Condition.by_role(ape.Role.OTHERS)
rule = ape.Rule(ape.Verb.ALLOW, ape.ObjectOperations.WILDCARD_ALL, role_condition)
bearer = create_bearer_token(frostfs_cli, temp_directory, user_container, rule, cluster.default_rpc_endpoint)
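# The HTTP gate expects the token base64-encoded in the Authorization header.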
return bearer_token_base64_from_file(bearer)
@allure.title(f"[NEGATIVE] Put object without bearer token for {ape.Role.OTHERS}")
def test_unable_put_without_bearer_token(self, simple_object_size: ObjectSize, user_container: str):
upload_via_http_gate_curl(
cid=user_container,
filepath=generate_file(simple_object_size.value),
endpoint=self.cluster.default_http_gate_endpoint,
error_pattern="access to object operation denied",
)
@allure.title("Put object via HTTP using bearer token (object_size={object_size})")
def test_put_with_bearer_when_eacl_restrict(
self,
object_size: ObjectSize,
default_wallet: WalletInfo,
user_container: str,
bearer_token: str,
):
file_path = generate_file(object_size.value)
with reporter.step(f"Put object with bearer token for {ape.Role.OTHERS}, then get and verify hashes"):
headers = [f" -H 'Authorization: Bearer {bearer_token}'"]
oid = upload_via_http_gate_curl(
cid=user_container,
filepath=file_path,
endpoint=self.cluster.default_http_gate_endpoint,
headers=headers,
)
verify_object_hash(
oid=oid,
file_name=file_path,
wallet=default_wallet,
cid=user_container,
shell=self.shell,
nodes=self.cluster.storage_nodes,
request_node=self.cluster.cluster_nodes[0],
)


@ -0,0 +1,370 @@
import allure
import pytest
from frostfs_testlib import reporter
from frostfs_testlib.resources.wellknown_acl import PUBLIC_ACL
from frostfs_testlib.steps.cli.container import create_container
from frostfs_testlib.steps.cli.object import put_object_to_random_node
from frostfs_testlib.steps.epoch import get_epoch
from frostfs_testlib.steps.http.http_gate import (
attr_into_header,
get_object_by_attr_and_verify_hashes,
get_via_http_curl,
get_via_http_gate,
get_via_zip_http_gate,
try_to_get_object_and_expect_error,
upload_via_http_gate,
upload_via_http_gate_curl,
verify_object_hash,
)
from frostfs_testlib.storage.dataclasses.object_size import ObjectSize
from frostfs_testlib.testing.cluster_test_base import ClusterTestBase
from frostfs_testlib.utils.file_utils import generate_file, get_file_hash
from ....helpers.utility import wait_for_gc_pass_on_storage_nodes
OBJECT_NOT_FOUND_ERROR = "not found"
@allure.link(
"https://git.frostfs.info/TrueCloudLab/frostfs-http-gw#frostfs-http-gateway",
name="frostfs-http-gateway",
)
@allure.link("https://git.frostfs.info/TrueCloudLab/frostfs-http-gw#uploading", name="uploading")
@allure.link("https://git.frostfs.info/TrueCloudLab/frostfs-http-gw#downloading", name="downloading")
@pytest.mark.nightly
@pytest.mark.sanity
@pytest.mark.http_gate
class TestHttpGate(ClusterTestBase):
PLACEMENT_RULE_1 = "REP 1 IN X CBF 1 SELECT 1 FROM * AS X"
PLACEMENT_RULE_2 = "REP 2 IN X CBF 2 SELECT 2 FROM * AS X"
@pytest.fixture(scope="class", autouse=True)
@allure.title("[Class/Autouse]: Prepare wallet and deposit")
def prepare_wallet(self, default_wallet):
TestHttpGate.wallet = default_wallet
@allure.title("Put over gRPC, Get over HTTP")
def test_put_grpc_get_http(self, complex_object_size: ObjectSize, simple_object_size: ObjectSize):
"""
Test that object can be put using gRPC interface and get using HTTP.
Steps:
1. Create simple and large objects.
2. Put objects using gRPC (frostfs-cli).
3. Download objects using HTTP gate (https://git.frostfs.info/TrueCloudLab/frostfs-http-gw#downloading).
4. Get objects using gRPC (frostfs-cli).
5. Compare hashes of the objects downloaded via HTTP and gRPC.
6. Compare hashes of the downloaded and original objects.
Expected result:
Hashes must be the same.
"""
cid = create_container(
self.wallet,
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
rule=self.PLACEMENT_RULE_1,
basic_acl=PUBLIC_ACL,
)
file_path_simple = generate_file(simple_object_size.value)
file_path_large = generate_file(complex_object_size.value)
with reporter.step("Put objects using gRPC"):
oid_simple = put_object_to_random_node(
wallet=self.wallet,
path=file_path_simple,
cid=cid,
shell=self.shell,
cluster=self.cluster,
)
oid_large = put_object_to_random_node(
wallet=self.wallet,
path=file_path_large,
cid=cid,
shell=self.shell,
cluster=self.cluster,
)
for oid, file_path in ((oid_simple, file_path_simple), (oid_large, file_path_large)):
verify_object_hash(
oid=oid,
file_name=file_path,
wallet=self.wallet,
cid=cid,
shell=self.shell,
nodes=self.cluster.storage_nodes,
request_node=self.cluster.cluster_nodes[0],
)
@allure.link(
"https://git.frostfs.info/TrueCloudLab/frostfs-http-gw#frostfs-http-gateway",
name="frostfs-http-gateway",
)
@allure.link("https://git.frostfs.info/TrueCloudLab/frostfs-http-gw#uploading", name="uploading")
@allure.link("https://git.frostfs.info/TrueCloudLab/frostfs-http-gw#downloading", name="downloading")
@pytest.mark.http_gate
@pytest.mark.http_put
class TestHttpPut(ClusterTestBase):
# Placement rule and wallet mirror TestHttpGate so the tests below can run standalone.
PLACEMENT_RULE_2 = "REP 2 IN X CBF 2 SELECT 2 FROM * AS X"
@pytest.fixture(scope="class", autouse=True)
@allure.title("[Class/Autouse]: Prepare wallet and deposit")
def prepare_wallet(self, default_wallet):
TestHttpPut.wallet = default_wallet
@allure.link("https://git.frostfs.info/TrueCloudLab/frostfs-http-gw#uploading", name="uploading")
@allure.link("https://git.frostfs.info/TrueCloudLab/frostfs-http-gw#downloading", name="downloading")
@allure.title("Put over HTTP, Get over HTTP")
@pytest.mark.smoke
def test_put_http_get_http(self, complex_object_size: ObjectSize, simple_object_size: ObjectSize):
"""
Test that object can be put and get using HTTP interface.
Steps:
1. Create simple and large objects.
2. Upload objects using HTTP (https://git.frostfs.info/TrueCloudLab/frostfs-http-gw#uploading).
3. Download objects using HTTP gate (https://git.frostfs.info/TrueCloudLab/frostfs-http-gw#downloading).
4. Compare hashes of the downloaded and original objects.
Expected result:
Hashes must be the same.
"""
cid = create_container(
self.wallet,
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
rule=self.PLACEMENT_RULE_2,
basic_acl=PUBLIC_ACL,
)
file_path_simple = generate_file(simple_object_size.value)
file_path_large = generate_file(complex_object_size.value)
with reporter.step("Put objects using HTTP"):
oid_simple = upload_via_http_gate(cid=cid, path=file_path_simple, endpoint=self.cluster.default_http_gate_endpoint)
oid_large = upload_via_http_gate(cid=cid, path=file_path_large, endpoint=self.cluster.default_http_gate_endpoint)
for oid, file_path in ((oid_simple, file_path_simple), (oid_large, file_path_large)):
verify_object_hash(
oid=oid,
file_name=file_path,
wallet=self.wallet,
cid=cid,
shell=self.shell,
nodes=self.cluster.storage_nodes,
request_node=self.cluster.cluster_nodes[0],
)
@allure.link(
"https://git.frostfs.info/TrueCloudLab/frostfs-http-gw#by-attributes",
name="download by attributes",
)
@allure.title("Put over HTTP, Get over HTTP with {id} header")
@pytest.mark.parametrize(
"attributes,id",
[
({"fileName": "simple_obj_filename"}, "simple"),
({"file-Name": "simple obj filename"}, "hyphen"),
({"cat%jpeg": "cat%jpeg"}, "percent"),
],
ids=["simple", "hyphen", "percent"],
)
def test_put_http_get_http_with_headers(self, attributes: dict, simple_object_size: ObjectSize, id: str):
"""
Test that object can be downloaded using different attributes in HTTP header.
Steps:
1. Create simple and large objects.
2. Upload objects using HTTP with particular attributes in the header.
3. Download objects by attributes using HTTP gate (https://git.frostfs.info/TrueCloudLab/frostfs-http-gw#by-attributes).
4. Compare hashes of the downloaded and original objects.
Expected result:
Hashes must be the same.
"""
cid = create_container(
self.wallet,
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
rule=self.PLACEMENT_RULE_2,
basic_acl=PUBLIC_ACL,
)
file_path = generate_file(simple_object_size.value)
with reporter.step("Put objects using HTTP with attribute"):
headers = attr_into_header(attributes)
oid = upload_via_http_gate(
cid=cid,
path=file_path,
headers=headers,
endpoint=self.cluster.default_http_gate_endpoint,
)
get_object_by_attr_and_verify_hashes(
oid=oid,
file_name=file_path,
cid=cid,
attrs=attributes,
node=self.cluster.cluster_nodes[0],
)
@allure.title("Expiration-Epoch in HTTP header (epoch_gap={epoch_gap})")
@pytest.mark.parametrize("epoch_gap", [0, 1])
def test_expiration_epoch_in_http(self, simple_object_size: ObjectSize, epoch_gap: int):
endpoint = self.cluster.default_rpc_endpoint
http_endpoint = self.cluster.default_http_gate_endpoint
min_valid_epoch = get_epoch(self.shell, self.cluster) + epoch_gap
cid = create_container(
self.wallet,
shell=self.shell,
endpoint=endpoint,
rule=self.PLACEMENT_RULE_2,
basic_acl=PUBLIC_ACL,
)
file_path = generate_file(simple_object_size.value)
oids_to_be_expired = []
oids_to_be_valid = []
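# Upload with several expiration gaps: objects whose expiration epoch is below the next epoch
# are expected to expire after one tick; the rest must stay available.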
for gap_until in (0, 1, 2, 100):
valid_until = min_valid_epoch + gap_until
headers = {"X-Attribute-System-Expiration-Epoch": str(valid_until)}
with reporter.step("Put objects using HTTP with attribute Expiration-Epoch"):
oid = upload_via_http_gate(
cid=cid,
path=file_path,
headers=headers,
endpoint=http_endpoint,
)
if get_epoch(self.shell, self.cluster) + 1 <= valid_until:
oids_to_be_valid.append(oid)
else:
oids_to_be_expired.append(oid)
with reporter.step("This object can be got"):
get_via_http_gate(cid=cid, oid=oid, node=self.cluster.cluster_nodes[0])
self.tick_epoch()
# Wait for GC, because object with expiration is counted as alive until GC removes it
wait_for_gc_pass_on_storage_nodes()
for oid in oids_to_be_expired:
with reporter.step(f"{oid} shall be expired and cannot be got"):
try_to_get_object_and_expect_error(
cid=cid,
oid=oid,
node=self.cluster.cluster_nodes[0],
error_pattern=OBJECT_NOT_FOUND_ERROR,
)
for oid in oids_to_be_valid:
with reporter.step(f"{oid} shall be valid and can be got"):
get_via_http_gate(cid=cid, oid=oid, node=self.cluster.cluster_nodes[0])
@allure.title("Zip in HTTP header")
def test_zip_in_http(self, complex_object_size: ObjectSize, simple_object_size: ObjectSize):
cid = create_container(
self.wallet,
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
rule=self.PLACEMENT_RULE_2,
basic_acl=PUBLIC_ACL,
)
file_path_simple = generate_file(simple_object_size.value)
file_path_large = generate_file(complex_object_size.value)
common_prefix = "my_files"
headers1 = {"X-Attribute-FilePath": f"{common_prefix}/file1"}
headers2 = {"X-Attribute-FilePath": f"{common_prefix}/file2"}
upload_via_http_gate(
cid=cid,
path=file_path_simple,
headers=headers1,
endpoint=self.cluster.default_http_gate_endpoint,
)
upload_via_http_gate(
cid=cid,
path=file_path_large,
headers=headers2,
endpoint=self.cluster.default_http_gate_endpoint,
)
dir_path = get_via_zip_http_gate(cid=cid, prefix=common_prefix, node=self.cluster.cluster_nodes[0])
with reporter.step("Verify hashes"):
assert get_file_hash(f"{dir_path}/file1") == get_file_hash(file_path_simple)
assert get_file_hash(f"{dir_path}/file2") == get_file_hash(file_path_large)
@pytest.mark.long
@allure.title("Put over HTTP/Curl, Get over HTTP/Curl for large object")
def test_put_http_get_http_large_file(self, complex_object_size: ObjectSize):
"""
This test checks upload and download using curl with a 'large' object.
A large object here is up to 20 MB in size.
"""
cid = create_container(
self.wallet,
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
rule=self.PLACEMENT_RULE_2,
basic_acl=PUBLIC_ACL,
)
file_path = generate_file(complex_object_size.value)
with reporter.step("Put objects using HTTP"):
oid_gate = upload_via_http_gate(cid=cid, path=file_path, endpoint=self.cluster.default_http_gate_endpoint)
oid_curl = upload_via_http_gate_curl(
cid=cid,
filepath=file_path,
endpoint=self.cluster.default_http_gate_endpoint,
)
verify_object_hash(
oid=oid_gate,
file_name=file_path,
wallet=self.wallet,
cid=cid,
shell=self.shell,
nodes=self.cluster.storage_nodes,
request_node=self.cluster.cluster_nodes[0],
)
verify_object_hash(
oid=oid_curl,
file_name=file_path,
wallet=self.wallet,
cid=cid,
shell=self.shell,
nodes=self.cluster.storage_nodes,
request_node=self.cluster.cluster_nodes[0],
object_getter=get_via_http_curl,
)
@allure.title("Put/Get over HTTP using Curl utility")
def test_put_http_get_http_curl(self, complex_object_size: ObjectSize, simple_object_size: ObjectSize):
"""
Test checks upload and download over HTTP using curl utility.
"""
cid = create_container(
self.wallet,
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
rule=self.PLACEMENT_RULE_2,
basic_acl=PUBLIC_ACL,
)
file_path_simple = generate_file(simple_object_size.value)
file_path_large = generate_file(complex_object_size.value)
with reporter.step("Put objects using curl utility"):
oid_simple = upload_via_http_gate_curl(cid=cid, filepath=file_path_simple, endpoint=self.cluster.default_http_gate_endpoint)
oid_large = upload_via_http_gate_curl(
cid=cid,
filepath=file_path_large,
endpoint=self.cluster.default_http_gate_endpoint,
)
for oid, file_path in ((oid_simple, file_path_simple), (oid_large, file_path_large)):
verify_object_hash(
oid=oid,
file_name=file_path,
wallet=self.wallet,
cid=cid,
shell=self.shell,
nodes=self.cluster.storage_nodes,
request_node=self.cluster.cluster_nodes[0],
object_getter=get_via_http_curl,
)


@ -0,0 +1,217 @@
import logging
import os
import allure
import pytest
from frostfs_testlib import reporter
from frostfs_testlib.resources.wellknown_acl import PUBLIC_ACL
from frostfs_testlib.steps.cli.container import (
create_container,
delete_container,
list_containers,
wait_for_container_deletion,
)
from frostfs_testlib.steps.cli.object import delete_object
from frostfs_testlib.steps.http.http_gate import (
attr_into_str_header_curl,
get_object_by_attr_and_verify_hashes,
try_to_get_object_and_expect_error,
try_to_get_object_via_passed_request_and_expect_error,
upload_via_http_gate_curl,
)
from frostfs_testlib.storage.dataclasses.object_size import ObjectSize
from frostfs_testlib.storage.dataclasses.storage_object_info import StorageObjectInfo
from frostfs_testlib.testing.cluster_test_base import ClusterTestBase
from frostfs_testlib.utils.file_utils import generate_file
OBJECT_ALREADY_REMOVED_ERROR = "object already removed"
logger = logging.getLogger("NeoLogger")
@pytest.mark.http_gate
@pytest.mark.http_put
class Test_http_headers(ClusterTestBase):
PLACEMENT_RULE = "REP 2 IN X CBF 1 SELECT 4 FROM * AS X"
obj1_keys = ["Writer", "Chapter1", "Chapter2"]
obj2_keys = ["Writer", "Ch@pter1", "chapter2"]
values = ["Leo Tolstoy", "peace", "w@r"]
OBJECT_ATTRIBUTES = [
{obj1_keys[0]: values[0], obj1_keys[1]: values[1], obj1_keys[2]: values[2]},
{obj2_keys[0]: values[0], obj2_keys[1]: values[1], obj2_keys[2]: values[2]},
]
@pytest.fixture(scope="class", autouse=True)
@allure.title("[Class/Autouse]: Prepare wallet and deposit")
def prepare_wallet(self, default_wallet):
Test_http_headers.wallet = default_wallet
def storage_objects_with_attributes(self, object_size: ObjectSize) -> list[StorageObjectInfo]:
# TODO: Deal with http tests
if object_size.value > 1000:
pytest.skip("Complex objects for HTTP temporarly disabled for v0.37")
storage_objects = []
wallet = self.wallet
cid = create_container(
wallet=self.wallet,
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
rule=self.PLACEMENT_RULE,
basic_acl=PUBLIC_ACL,
)
file_path = generate_file(object_size.value)
for attributes in self.OBJECT_ATTRIBUTES:
storage_object_id = upload_via_http_gate_curl(
cid=cid,
filepath=file_path,
endpoint=self.cluster.default_http_gate_endpoint,
headers=attr_into_str_header_curl(attributes),
)
storage_object = StorageObjectInfo(cid, storage_object_id)
storage_object.size = os.path.getsize(file_path)
storage_object.wallet = wallet
storage_object.file_path = file_path
storage_object.attributes = attributes
storage_objects.append(storage_object)
yield storage_objects
@allure.title("Get object1 by attribute")
def test_object1_can_be_get_by_attr(self, storage_objects_with_attributes: list[StorageObjectInfo]):
"""
Test to get object#1 by attribute and compare hashes
Steps:
1. Download object#1 with attributes [Chapter2=w@r] and compare hashes
"""
storage_object_1 = storage_objects_with_attributes[0]
with reporter.step(
f'Download object#1 via wget with attributes Chapter2: {storage_object_1.attributes["Chapter2"]} and compare hashes'
):
get_object_by_attr_and_verify_hashes(
oid=storage_object_1.oid,
file_name=storage_object_1.file_path,
cid=storage_object_1.cid,
attrs={"Chapter2": storage_object_1.attributes["Chapter2"]},
node=self.cluster.cluster_nodes[0],
)
@allure.title("Get object2 with different attributes, then delete object2 and get object1")
def test_object2_can_be_get_by_attr(self, storage_objects_with_attributes: list[StorageObjectInfo]):
"""
Test to get object2 with different attributes, then delete object2 and get object1 using the 1st attribute.
Note: obj1 and obj2 share attribute#1, so obj1 can still be fetched by it after obj2 is deleted.
Steps:
1. Download object#2 with attributes [chapter2=w@r] and compare hashes
2. Download object#2 with attributes [Ch@pter1=peace] and compare hashes
3. Delete object#2
4. Download object#1 with attributes [Writer=Leo Tolstoy] and compare hashes
"""
storage_object_1 = storage_objects_with_attributes[0]
storage_object_2 = storage_objects_with_attributes[1]
with reporter.step(
f'Download object#2 via wget with attributes [chapter2={storage_object_2.attributes["chapter2"]}] / [Ch@pter1={storage_object_2.attributes["Ch@pter1"]}] and compare hashes'
):
selected_attributes_object2 = [
{"chapter2": storage_object_2.attributes["chapter2"]},
{"Ch@pter1": storage_object_2.attributes["Ch@pter1"]},
]
for attributes in selected_attributes_object2:
get_object_by_attr_and_verify_hashes(
oid=storage_object_2.oid,
file_name=storage_object_2.file_path,
cid=storage_object_2.cid,
attrs=attributes,
node=self.cluster.cluster_nodes[0],
)
with reporter.step("Delete object#2 and verify is the container deleted"):
delete_object(
wallet=self.wallet,
cid=storage_object_2.cid,
oid=storage_object_2.oid,
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
)
try_to_get_object_and_expect_error(
cid=storage_object_2.cid,
oid=storage_object_2.oid,
node=self.cluster.cluster_nodes[0],
error_pattern=OBJECT_ALREADY_REMOVED_ERROR,
)
storage_objects_with_attributes.remove(storage_object_2)
with reporter.step(
f'Download object#1 with attributes [Writer={storage_object_1.attributes["Writer"]}] and compare hashes'
):
key_value_pair = {"Writer": storage_object_1.attributes["Writer"]}
get_object_by_attr_and_verify_hashes(
oid=storage_object_1.oid,
file_name=storage_object_1.file_path,
cid=storage_object_1.cid,
attrs=key_value_pair,
node=self.cluster.cluster_nodes[0],
)
@allure.title("[NEGATIVE] Put object and get right after container is deleted")
def test_negative_put_and_get_object3(self, storage_objects_with_attributes: list[StorageObjectInfo]):
"""
Test to attempt to put object and try to download it right after the container has been deleted
Steps:
1. [Negative] Allocate and attempt to put object#3 via http with attributes: [Writer=Leo Tolstoy, Writer=peace, peace=peace]
Expected: "Error duplication of attributes detected"
2. Delete container
3. [Negative] Try to download object with attributes [peace=peace]
Expected: "HTTP request sent, awaiting response... 404 Not Found"
"""
storage_object_1 = storage_objects_with_attributes[0]
with reporter.step(
"[Negative] Allocate and attemt to put object#3 via http with attributes: [Writer=Leo Tolstoy, Writer=peace, peace=peace]"
):
file_path_3 = generate_file(storage_object_1.size)
attrs_obj3 = {"Writer": "Leo Tolstoy", "peace": "peace"}
headers = attr_into_str_header_curl(attrs_obj3)
headers.append(" ".join(attr_into_str_header_curl({"Writer": "peace"})))
error_pattern = f"key duplication error: X-Attribute-Writer"
upload_via_http_gate_curl(
cid=storage_object_1.cid,
filepath=file_path_3,
endpoint=self.cluster.default_http_gate_endpoint,
headers=headers,
error_pattern=error_pattern,
)
with reporter.step("Delete container and verify container deletion"):
delete_container(
wallet=self.wallet,
cid=storage_object_1.cid,
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
await_mode=True,
)
self.tick_epoch()
wait_for_container_deletion(
self.wallet,
storage_object_1.cid,
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
)
assert storage_object_1.cid not in list_containers(
self.wallet, shell=self.shell, endpoint=self.cluster.default_rpc_endpoint
)
with reporter.step("[Negative] Try to download (wget) object via wget with attributes [peace=peace]"):
request = f"/get/{storage_object_1.cid}/peace/peace"
error_pattern = "404 Not Found"
try_to_get_object_via_passed_request_and_expect_error(
cid=storage_object_1.cid,
oid="",
node=self.cluster.cluster_nodes[0],
error_pattern=error_pattern,
attrs=attrs_obj3,
http_request_path=request,
)
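The attribute-based download these steps rely on is a plain HTTP call; a minimal sketch of what it reduces to, assuming the gateway's documented get_by_attribute route (the endpoint, CID and attribute names below are illustrative):

import hashlib
import requests

def fetch_by_attribute(gate: str, cid: str, attr_key: str, attr_value: str) -> str:
    # frostfs-http-gw resolves an object by attribute:
    # GET /get_by_attribute/$CID/$ATTR_KEY/$ATTR_VALUE
    response = requests.get(f"{gate}/get_by_attribute/{cid}/{attr_key}/{attr_value}", timeout=30)
    response.raise_for_status()
    return hashlib.sha256(response.content).hexdigest()

Comparing this digest against the hash of the original file is the whole of the "download and compare hashes" step above.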


@@ -0,0 +1,160 @@
import logging
import allure
import pytest
from frostfs_testlib import reporter
from frostfs_testlib.resources.wellknown_acl import PUBLIC_ACL
from frostfs_testlib.s3 import AwsCliClient, S3ClientWrapper
from frostfs_testlib.steps.cli.container import create_container
from frostfs_testlib.steps.cli.object import put_object_to_random_node
from frostfs_testlib.steps.http.http_gate import (
assert_hashes_are_equal,
get_object_by_attr_and_verify_hashes,
get_via_http_gate,
try_to_get_object_via_passed_request_and_expect_error,
verify_object_hash,
)
from frostfs_testlib.steps.s3 import s3_helper
from frostfs_testlib.storage.dataclasses.object_size import ObjectSize
from frostfs_testlib.testing.cluster_test_base import ClusterTestBase
from frostfs_testlib.utils.file_utils import generate_file
logger = logging.getLogger("NeoLogger")
@pytest.mark.nightly
@pytest.mark.sanity
@pytest.mark.http_gate
class Test_http_object(ClusterTestBase):
PLACEMENT_RULE = "REP 2 IN X CBF 1 SELECT 4 FROM * AS X"
@pytest.fixture(scope="class", autouse=True)
@allure.title("[Class/Autouse]: Prepare wallet and deposit")
def prepare_wallet(self, default_wallet):
Test_http_object.wallet = default_wallet
@allure.title("Put over gRPC, Get over HTTP with attributes (obj_size={object_size})")
def test_object_put_get_attributes(self, object_size: ObjectSize):
"""
Test that an object can be put using the gRPC interface and retrieved over HTTP.
Steps:
1. Create an object;
2. Put object(s) using gRPC (frostfs-cli) with attributes [--attributes chapter1=peace,chapter2=war];
3. Download the object using HTTP gate (https://git.frostfs.info/TrueCloudLab/frostfs-http-gw#downloading);
4. Compare hashes of the original and the downloaded object;
5. [Negative] Try to get the object with the specified attributes and `get` request: [get/$CID/chapter1/peace];
6. Download the object with the specified attributes and `get_by_attribute` request: [get_by_attribute/$CID/chapter1/peace];
7. Compare hashes of the original and the downloaded object;
8. [Negative] Try to get the object via `get_by_attribute` request: [get_by_attribute/$CID/$OID];
Expected result:
Hashes must be the same.
"""
with reporter.step("Create public container"):
cid = create_container(
self.wallet,
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
rule=self.PLACEMENT_RULE,
basic_acl=PUBLIC_ACL,
)
# Generate file
file_path = generate_file(object_size.value)
# List of Key=Value attributes
obj_key1 = "chapter1"
obj_value1 = "peace"
obj_key2 = "chapter2"
obj_value2 = "war"
# Prepare for grpc PUT request
key_value1 = obj_key1 + "=" + obj_value1
key_value2 = obj_key2 + "=" + obj_value2
with reporter.step("Put objects using gRPC [--attributes chapter1=peace,chapter2=war]"):
oid = put_object_to_random_node(
wallet=self.wallet,
path=file_path,
cid=cid,
shell=self.shell,
cluster=self.cluster,
attributes=f"{key_value1},{key_value2}",
)
with reporter.step("Get object and verify hashes [ get/$CID/$OID ]"):
verify_object_hash(
oid=oid,
file_name=file_path,
wallet=self.wallet,
cid=cid,
shell=self.shell,
nodes=self.cluster.storage_nodes,
request_node=self.cluster.cluster_nodes[0],
)
with reporter.step("[Negative] try to get object: [get/$CID/chapter1/peace]"):
attrs = {obj_key1: obj_value1, obj_key2: obj_value2}
request = f"/get/{cid}/{obj_key1}/{obj_value1}"
expected_err_msg = "Failed to get object via HTTP gate:"
try_to_get_object_via_passed_request_and_expect_error(
cid=cid,
oid=oid,
node=self.cluster.cluster_nodes[0],
error_pattern=expected_err_msg,
http_request_path=request,
attrs=attrs,
)
with reporter.step("Download the object with attribute [get_by_attribute/$CID/chapter1/peace]"):
get_object_by_attr_and_verify_hashes(
oid=oid,
file_name=file_path,
cid=cid,
attrs=attrs,
node=self.cluster.cluster_nodes[0],
)
with reporter.step("[Negative] try to get object: get_by_attribute/$CID/$OID"):
request = f"/get_by_attribute/{cid}/{oid}"
try_to_get_object_via_passed_request_and_expect_error(
cid=cid,
oid=oid,
node=self.cluster.cluster_nodes[0],
error_pattern=expected_err_msg,
http_request_path=request,
)
@allure.title("Put over s3, Get over HTTP with bucket name and key (object_size={object_size})")
@pytest.mark.parametrize("s3_client", [AwsCliClient], indirect=True)
def test_object_put_get_bucketname_key(self, object_size: ObjectSize, s3_client: S3ClientWrapper):
"""
Test that object can be put using s3-gateway interface and got via HTTP with bucket name and object key.
Steps:
1. Create an object;
2. Create a bucket via s3;
3. Put the object via s3;
4. Download the object using HTTP gate with the bucket name and the object key;
5. Compare hashes of the original and the downloaded objects;
Expected result:
Hashes must be the same.
"""
file_path = generate_file(object_size.value)
object_key = s3_helper.object_key_from_file_path(file_path)
bucket = s3_client.create_bucket(acl="public-read-write")
s3_client.put_object(bucket=bucket, filepath=file_path, key=object_key)
obj_s3 = s3_client.get_object(bucket=bucket, key=object_key)
request = f"/get/{bucket}/{object_key}"
obj_http = get_via_http_gate(
cid=None,
oid=None,
node=self.cluster.cluster_nodes[0],
request_path=request,
)
with reporter.step("Verify hashes"):
assert_hashes_are_equal(file_path, obj_http, obj_s3)
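Note that the bucket-name download above needs no S3 credentials at all; a hedged sketch of the same call (the gateway address is a placeholder):

import requests

def get_via_bucket_name(gate: str, bucket: str, key: str) -> bytes:
    # The HTTP gate accepts a bucket name in place of a container ID:
    # GET /get/{bucket}/{object_key}
    response = requests.get(f"{gate}/get/{bucket}/{key}", timeout=30)
    response.raise_for_status()
    return response.content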


@@ -0,0 +1,64 @@
import logging
import allure
import pytest
from frostfs_testlib import reporter
from frostfs_testlib.resources.wellknown_acl import PUBLIC_ACL
from frostfs_testlib.steps.cli.container import create_container
from frostfs_testlib.steps.http.http_gate import upload_via_http_gate_curl, verify_object_hash
from frostfs_testlib.storage.dataclasses.object_size import ObjectSize
from frostfs_testlib.testing.cluster_test_base import ClusterTestBase
from frostfs_testlib.utils.file_utils import generate_file
logger = logging.getLogger("NeoLogger")
@pytest.mark.http_gate
@pytest.mark.http_put
class Test_http_streaming(ClusterTestBase):
PLACEMENT_RULE = "REP 2 IN X CBF 1 SELECT 4 FROM * AS X"
@pytest.fixture(scope="class", autouse=True)
@allure.title("[Class/Autouse]: Prepare wallet and deposit")
def prepare_wallet(self, default_wallet):
Test_http_streaming.wallet = default_wallet
@allure.title("Put via pipe (streaming), Get over HTTP and verify hashes")
def test_object_can_be_put_get_by_streaming(self, complex_object_size: ObjectSize):
"""
Test that a big object can be put via curl with pipe (streaming) and retrieved over HTTP.
Steps:
1. Create big object;
2. Put object using curl with pipe (streaming);
3. Download object using HTTP gate (https://git.frostfs.info/TrueCloudLab/frostfs-http-gw#downloading);
4. Compare hashes between original and downloaded object;
Expected result:
Hashes must be the same.
"""
with reporter.step("Create public container and verify container creation"):
cid = create_container(
self.wallet,
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
rule=self.PLACEMENT_RULE,
basic_acl=PUBLIC_ACL,
)
with reporter.step("Allocate big object"):
# Generate file
file_path = generate_file(complex_object_size.value)
with reporter.step("Put objects using curl utility and Get object and verify hashes [ get/$CID/$OID ]"):
oid = upload_via_http_gate_curl(
cid=cid, filepath=file_path, endpoint=self.cluster.default_http_gate_endpoint
)
verify_object_hash(
oid=oid,
file_name=file_path,
wallet=self.wallet,
cid=cid,
shell=self.shell,
nodes=self.cluster.storage_nodes,
request_node=self.cluster.cluster_nodes[0],
)
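For orientation, the pipe-fed upload exercised above can be reproduced with curl reading the payload from stdin; a minimal sketch, assuming the gate's standard multipart upload route (names are placeholders):

import subprocess

def upload_streaming(gate: str, cid: str, file_path: str) -> None:
    # "file=@-" makes curl stream the form part from stdin instead of
    # buffering a named file, which is the essence of the streaming case.
    with open(file_path, "rb") as source:
        subprocess.run(
            ["curl", "-sS", "-F", "file=@-;filename=streamed.bin", f"{gate}/upload/{cid}"],
            stdin=source,
            check=True,
        )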


@@ -0,0 +1,359 @@
import calendar
import datetime
import logging
from typing import Optional
import allure
import pytest
from frostfs_testlib import reporter
from frostfs_testlib.resources.error_patterns import OBJECT_NOT_FOUND
from frostfs_testlib.resources.wellknown_acl import PUBLIC_ACL
from frostfs_testlib.steps.cli.container import create_container
from frostfs_testlib.steps.cli.object import get_netmap_netinfo, get_object_from_random_node, head_object
from frostfs_testlib.steps.epoch import get_epoch, wait_for_epochs_align
from frostfs_testlib.steps.http.http_gate import (
attr_into_str_header_curl,
try_to_get_object_and_expect_error,
upload_via_http_gate_curl,
verify_object_hash,
)
from frostfs_testlib.storage.dataclasses.object_size import ObjectSize
from frostfs_testlib.testing.cluster_test_base import ClusterTestBase
from frostfs_testlib.utils.file_utils import generate_file
logger = logging.getLogger("NeoLogger")
EXPIRATION_TIMESTAMP_HEADER = "__SYSTEM__EXPIRATION_TIMESTAMP"
EXPIRATION_EPOCH_HEADER = "__SYSTEM__EXPIRATION_EPOCH"
EXPIRATION_DURATION_HEADER = "__SYSTEM__EXPIRATION_DURATION"
EXPIRATION_EXPIRATION_RFC = "__SYSTEM__EXPIRATION_RFC3339"
SYSTEM_EXPIRATION_EPOCH = "System-Expiration-Epoch"
SYSTEM_EXPIRATION_DURATION = "System-Expiration-Duration"
SYSTEM_EXPIRATION_TIMESTAMP = "System-Expiration-Timestamp"
SYSTEM_EXPIRATION_RFC3339 = "System-Expiration-RFC3339"
@pytest.mark.http_gate
@pytest.mark.http_put
class Test_http_system_header(ClusterTestBase):
PLACEMENT_RULE = "REP 2 IN X CBF 1 SELECT 2 FROM * AS X"
@pytest.fixture(scope="class", autouse=True)
@allure.title("[Class/Autouse]: Prepare wallet and deposit")
def prepare_wallet(self, default_wallet):
Test_http_system_header.wallet = default_wallet
@pytest.fixture(scope="class")
@allure.title("Create container")
def user_container(self):
return create_container(
wallet=self.wallet,
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
rule=self.PLACEMENT_RULE,
basic_acl=PUBLIC_ACL,
)
@pytest.fixture(scope="class")
@allure.title("epoch_duration in seconds")
def epoch_duration(self) -> int:
net_info = get_netmap_netinfo(
wallet=self.wallet,
endpoint=self.cluster.default_rpc_endpoint,
shell=self.shell,
)
epoch_duration_in_blocks = net_info["epoch_duration"]
time_per_block = net_info["time_per_block"]
return int(epoch_duration_in_blocks * time_per_block)
@allure.title("Return N-epoch count in minutes")
def epoch_count_into_mins(self, epoch_duration: int, epoch: int) -> str:
mins = epoch_duration * epoch / 60
return f"{mins}m"
@allure.title("Return future timestamp after N epochs are passed")
def epoch_count_into_timestamp(self, epoch_duration: int, epoch: int, rfc3339: Optional[bool] = False) -> str:
current_datetime = datetime.datetime.utcnow()
epoch_count_in_seconds = epoch_duration * epoch
future_datetime = current_datetime + datetime.timedelta(seconds=epoch_count_in_seconds)
if rfc3339:
return future_datetime.isoformat("T") + "Z"
else:
return str(calendar.timegm(future_datetime.timetuple()))
@allure.title("Check is (header_output) Key=Value exists and equal in passed (header_to_find)")
def check_key_value_presented_header(self, header_output: dict, header_to_find: dict) -> bool:
header_att = header_output["header"]["attributes"]
for key_to_check, val_to_check in header_to_find.items():
if key_to_check not in header_att or val_to_check != header_att[key_to_check]:
logger.info(f"Unable to find {key_to_check}: '{val_to_check}' in {header_att}")
return False
return True
@allure.title(f"Validate that only {EXPIRATION_EPOCH_HEADER} exists in header and other headers are abesent")
def validation_for_http_header_attr(self, head_info: dict, expected_epoch: int) -> None:
# check that the __SYSTEM__EXPIRATION_EPOCH attribute carries the expected epoch
assert self.check_key_value_presented_header(
head_info, {EXPIRATION_EPOCH_HEADER: str(expected_epoch)}
), f'Expected to find {EXPIRATION_EPOCH_HEADER}: {expected_epoch} in: {head_info["header"]["attributes"]}'
# check that {EXPIRATION_DURATION_HEADER} is absent from the header output
assert not (
self.check_key_value_presented_header(head_info, {EXPIRATION_DURATION_HEADER: ""})
), f"Only {EXPIRATION_EPOCH_HEADER} can be displayed in header attributes"
# check that {EXPIRATION_TIMESTAMP_HEADER} is absent from the header output
assert not (
self.check_key_value_presented_header(head_info, {EXPIRATION_TIMESTAMP_HEADER: ""})
), f"Only {EXPIRATION_EPOCH_HEADER} can be displayed in header attributes"
# check that {EXPIRATION_EXPIRATION_RFC} is absent from the header output
assert not (
self.check_key_value_presented_header(head_info, {EXPIRATION_EXPIRATION_RFC: ""})
), f"Only {EXPIRATION_EPOCH_HEADER} can be displayed in header attributes"
@allure.title("Put / get / verify object and return head command result to invoker")
def oid_header_info_for_object(self, file_path: str, attributes: dict, user_container: str):
oid = upload_via_http_gate_curl(
cid=user_container,
filepath=file_path,
endpoint=self.cluster.default_http_gate_endpoint,
headers=attr_into_str_header_curl(attributes),
)
verify_object_hash(
oid=oid,
file_name=file_path,
wallet=self.wallet,
cid=user_container,
shell=self.shell,
nodes=self.cluster.storage_nodes,
request_node=self.cluster.cluster_nodes[0],
)
head = head_object(
wallet=self.wallet,
cid=user_container,
oid=oid,
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
)
return oid, head
@allure.title("[NEGATIVE] Put object with expired epoch")
def test_unable_put_expired_epoch(self, user_container: str, simple_object_size: ObjectSize):
headers = attr_into_str_header_curl({"System-Expiration-Epoch": str(get_epoch(self.shell, self.cluster) - 1)})
file_path = generate_file(simple_object_size.value)
with reporter.step("Put object using HTTP with attribute Expiration-Epoch where epoch is expired"):
upload_via_http_gate_curl(
cid=user_container,
filepath=file_path,
endpoint=self.cluster.default_http_gate_endpoint,
headers=headers,
error_pattern="must be greater than current epoch",
)
@allure.title("[NEGATIVE] Put object with negative System-Expiration-Duration")
def test_unable_put_negative_duration(self, user_container: str, simple_object_size: ObjectSize):
headers = attr_into_str_header_curl({"System-Expiration-Duration": "-1h"})
file_path = generate_file(simple_object_size.value)
with reporter.step(
"Put object using HTTP with attribute System-Expiration-Duration where duration is negative"
):
upload_via_http_gate_curl(
cid=user_container,
filepath=file_path,
endpoint=self.cluster.default_http_gate_endpoint,
headers=headers,
error_pattern=f"{EXPIRATION_DURATION_HEADER} must be positive",
)
@allure.title("[NEGATIVE] Put object with System-Expiration-Timestamp value in the past")
def test_unable_put_expired_timestamp(self, user_container: str, simple_object_size: ObjectSize):
headers = attr_into_str_header_curl({"System-Expiration-Timestamp": "1635075727"})
file_path = generate_file(simple_object_size.value)
with reporter.step(
"Put object using HTTP with attribute System-Expiration-Timestamp where duration is in the past"
):
upload_via_http_gate_curl(
cid=user_container,
filepath=file_path,
endpoint=self.cluster.default_http_gate_endpoint,
headers=headers,
error_pattern=f"{EXPIRATION_TIMESTAMP_HEADER} must be in the future",
)
@allure.title(
"[NEGATIVE] Put object using HTTP with attribute System-Expiration-RFC3339 where duration is in the past"
)
def test_unable_put_expired_rfc(self, user_container: str, simple_object_size: ObjectSize):
headers = attr_into_str_header_curl({"System-Expiration-RFC3339": "2021-11-22T09:55:49Z"})
file_path = generate_file(simple_object_size.value)
upload_via_http_gate_curl(
cid=user_container,
filepath=file_path,
endpoint=self.cluster.default_http_gate_endpoint,
headers=headers,
error_pattern=f"{EXPIRATION_EXPIRATION_RFC} must be in the future",
)
@allure.title("Priority of attributes epoch>duration (obj_size={object_size})")
def test_http_attr_priority_epoch_duration(self, user_container: str, object_size: ObjectSize, epoch_duration: int):
self.tick_epoch()
epoch_count = 1
expected_epoch = get_epoch(self.shell, self.cluster) + epoch_count
logger.info(
f"epoch_duration={epoch_duration}, current_epoch={get_epoch(self.shell, self.cluster)}, expected_epoch={expected_epoch}"
)
attributes = {SYSTEM_EXPIRATION_EPOCH: expected_epoch, SYSTEM_EXPIRATION_DURATION: "1m"}
file_path = generate_file(object_size.value)
with reporter.step(
f"Put objects using HTTP with attributes and head command should display {EXPIRATION_EPOCH_HEADER}: {expected_epoch} attr"
):
oid, head_info = self.oid_header_info_for_object(
file_path=file_path, attributes=attributes, user_container=user_container
)
self.validation_for_http_header_attr(head_info=head_info, expected_epoch=expected_epoch)
with reporter.step("Check that object becomes unavailable when epoch is expired"):
for _ in range(0, epoch_count + 1):
self.tick_epoch()
assert (
get_epoch(self.shell, self.cluster) == expected_epoch + 1
), f"Epochs should be equal: {get_epoch(self.shell, self.cluster)} != {expected_epoch + 1}"
with reporter.step("Check object deleted because it expires-on epoch"):
wait_for_epochs_align(self.shell, self.cluster)
try_to_get_object_and_expect_error(
cid=user_container,
oid=oid,
node=self.cluster.cluster_nodes[0],
error_pattern="404 Not Found",
)
# check that object is not available via grpc
with pytest.raises(Exception, match=OBJECT_NOT_FOUND):
get_object_from_random_node(self.wallet, user_container, oid, self.shell, self.cluster)
@allure.title("Priority of attributes duration>timestamp (obj_size={object_size})")
def test_http_attr_priority_dur_timestamp(self, user_container: str, object_size: ObjectSize, epoch_duration: int):
self.tick_epoch()
epoch_count = 2
expected_epoch = get_epoch(self.shell, self.cluster) + epoch_count
logger.info(
f"epoch_duration={epoch_duration}, current_epoch={get_epoch(self.shell, self.cluster)}, expected_epoch={expected_epoch}"
)
attributes = {
SYSTEM_EXPIRATION_DURATION: self.epoch_count_into_mins(epoch_duration=epoch_duration, epoch=2),
SYSTEM_EXPIRATION_TIMESTAMP: self.epoch_count_into_timestamp(epoch_duration=epoch_duration, epoch=1),
}
file_path = generate_file(object_size.value)
with reporter.step(
f"Put objects using HTTP with attributes and head command should display {EXPIRATION_EPOCH_HEADER}: {expected_epoch} attr"
):
oid, head_info = self.oid_header_info_for_object(
file_path=file_path, attributes=attributes, user_container=user_container
)
self.validation_for_http_header_attr(head_info=head_info, expected_epoch=expected_epoch)
with reporter.step("Check that object becomes unavailable when epoch is expired"):
for _ in range(0, epoch_count + 1):
self.tick_epoch()
assert (
get_epoch(self.shell, self.cluster) == expected_epoch + 1
), f"Epochs should be equal: {get_epoch(self.shell, self.cluster)} != {expected_epoch + 1}"
with reporter.step("Check object deleted because it expires-on epoch"):
wait_for_epochs_align(self.shell, self.cluster)
try_to_get_object_and_expect_error(
cid=user_container,
oid=oid,
node=self.cluster.cluster_nodes[0],
error_pattern="404 Not Found",
)
# check that object is not available via grpc
with pytest.raises(Exception, match=OBJECT_NOT_FOUND):
get_object_from_random_node(self.wallet, user_container, oid, self.shell, self.cluster)
@allure.title("Priority of attributes timestamp>Expiration-RFC (obj_size={object_size})")
def test_http_attr_priority_timestamp_rfc(self, user_container: str, object_size: ObjectSize, epoch_duration: int):
self.tick_epoch()
epoch_count = 2
expected_epoch = get_epoch(self.shell, self.cluster) + epoch_count
logger.info(
f"epoch_duration={epoch_duration}, current_epoch={get_epoch(self.shell, self.cluster)}, expected_epoch={expected_epoch}"
)
attributes = {
SYSTEM_EXPIRATION_TIMESTAMP: self.epoch_count_into_timestamp(epoch_duration=epoch_duration, epoch=2),
SYSTEM_EXPIRATION_RFC3339: self.epoch_count_into_timestamp(
epoch_duration=epoch_duration, epoch=1, rfc3339=True
),
}
file_path = generate_file(object_size.value)
with reporter.step(
f"Put objects using HTTP with attributes and head command should display {EXPIRATION_EPOCH_HEADER}: {expected_epoch} attr"
):
oid, head_info = self.oid_header_info_for_object(
file_path=file_path, attributes=attributes, user_container=user_container
)
self.validation_for_http_header_attr(head_info=head_info, expected_epoch=expected_epoch)
with reporter.step("Check that object becomes unavailable when epoch is expired"):
for _ in range(0, epoch_count + 1):
self.tick_epoch()
assert (
get_epoch(self.shell, self.cluster) == expected_epoch + 1
), f"Epochs should be equal: {get_epoch(self.shell, self.cluster)} != {expected_epoch + 1}"
with reporter.step("Check object deleted because it expires-on epoch"):
wait_for_epochs_align(self.shell, self.cluster)
try_to_get_object_and_expect_error(
cid=user_container,
oid=oid,
node=self.cluster.cluster_nodes[0],
error_pattern="404 Not Found",
)
# check that object is not available via grpc
with pytest.raises(Exception, match=OBJECT_NOT_FOUND):
get_object_from_random_node(self.wallet, user_container, oid, self.shell, self.cluster)
@allure.title("Object should be deleted when expiration passed (obj_size={object_size})")
@pytest.mark.parametrize(
"object_size",
# TODO: "complex" temporarly disabled for v0.37
["simple"],
indirect=True,
)
def test_http_rfc_object_unavailable_after_expir(
self, user_container: str, object_size: ObjectSize, epoch_duration: int
):
self.tick_epoch()
epoch_count = 2
expected_epoch = get_epoch(self.shell, self.cluster) + epoch_count
logger.info(
f"epoch_duration={epoch_duration}, current_epoch={get_epoch(self.shell, self.cluster)}, expected_epoch={expected_epoch}"
)
attributes = {
SYSTEM_EXPIRATION_RFC3339: self.epoch_count_into_timestamp(
epoch_duration=epoch_duration, epoch=2, rfc3339=True
)
}
file_path = generate_file(object_size.value)
with reporter.step(
f"Put objects using HTTP with attributes and head command should display {EXPIRATION_EPOCH_HEADER}: {expected_epoch} attr"
):
oid, head_info = self.oid_header_info_for_object(
file_path=file_path,
attributes=attributes,
user_container=user_container,
)
self.validation_for_http_header_attr(head_info=head_info, expected_epoch=expected_epoch)
with reporter.step("Check that object becomes unavailable when epoch is expired"):
for _ in range(0, epoch_count + 1):
self.tick_epoch()
assert (
get_epoch(self.shell, self.cluster) == expected_epoch + 1
), f"Epochs should be equal: {get_epoch(self.shell, self.cluster)} != {expected_epoch + 1}"
with reporter.step("Check object deleted because it expires-on epoch"):
wait_for_epochs_align(self.shell, self.cluster)
try_to_get_object_and_expect_error(
cid=user_container,
oid=oid,
node=self.cluster.cluster_nodes[0],
error_pattern="404 Not Found",
)
# check that object is not available via grpc
with pytest.raises(Exception, match=OBJECT_NOT_FOUND):
get_object_from_random_node(self.wallet, user_container, oid, self.shell, self.cluster)
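All of the priority tests funnel through the same arithmetic: turn "N epochs from now" into the header forms the gate accepts. A compact sketch, assuming the X-Attribute- header prefix that attr_into_str_header_curl applies (values are illustrative):

import calendar
import datetime

def expiration_headers(epoch_duration_s: int, epochs: int) -> dict[str, str]:
    seconds = epoch_duration_s * epochs
    future = datetime.datetime.utcnow() + datetime.timedelta(seconds=seconds)
    return {
        "X-Attribute-System-Expiration-Duration": f"{seconds // 60}m",
        "X-Attribute-System-Expiration-Timestamp": str(calendar.timegm(future.timetuple())),
        "X-Attribute-System-Expiration-RFC3339": future.isoformat("T") + "Z",
    }

When several of these land on one object, the gate applies the precedence the tests assert: epoch over duration, duration over timestamp, timestamp over RFC3339.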


@@ -0,0 +1,64 @@
import allure
import pytest
from frostfs_testlib import reporter
from frostfs_testlib.resources.error_patterns import S3_BUCKET_DOES_NOT_ALLOW_ACL
from frostfs_testlib.resources.s3_acl_grants import PRIVATE_GRANTS, PUBLIC_READ_GRANTS, PUBLIC_READ_WRITE_GRANTS
from frostfs_testlib.s3 import S3ClientWrapper
from frostfs_testlib.steps.s3 import s3_helper
from frostfs_testlib.storage.dataclasses.object_size import ObjectSize
from frostfs_testlib.utils.file_utils import generate_file
@pytest.mark.nightly
@pytest.mark.acl
@pytest.mark.s3_gate
class TestS3GateACL:
@allure.title("Object ACL (s3_client={s3_client})")
def test_s3_object_ACL(self, s3_client: S3ClientWrapper, bucket: str, simple_object_size: ObjectSize):
file_path = generate_file(simple_object_size.value)
file_name = s3_helper.object_key_from_file_path(file_path)
with reporter.step("Put object into bucket"):
s3_client.put_object(bucket, file_path)
with reporter.step("Verify private ACL is default"):
object_grants = s3_client.get_object_acl(bucket, file_name)
s3_helper.verify_acl_permissions(object_grants, PRIVATE_GRANTS)
with reporter.step("Verify put object ACL is restricted"):
with pytest.raises(Exception, match=S3_BUCKET_DOES_NOT_ALLOW_ACL):
object_grants = s3_client.put_object_acl(bucket, file_name, acl="public-read")
@allure.title("Create Bucket with different ACL (s3_client={s3_client})")
def test_s3_create_bucket_with_ACL(self, s3_client: S3ClientWrapper):
with reporter.step("Create bucket with ACL private"):
bucket = s3_client.create_bucket(object_lock_enabled_for_bucket=True, acl="private")
bucket_grants = s3_client.get_bucket_acl(bucket)
s3_helper.verify_acl_permissions(bucket_grants, PRIVATE_GRANTS)
with reporter.step("Create bucket with ACL public-read"):
read_bucket = s3_client.create_bucket(object_lock_enabled_for_bucket=True, acl="public-read")
bucket_grants = s3_client.get_bucket_acl(read_bucket)
s3_helper.verify_acl_permissions(bucket_grants, PUBLIC_READ_GRANTS)
with reporter.step("Create bucket with ACL public-read-write"):
public_rw_bucket = s3_client.create_bucket(object_lock_enabled_for_bucket=True, acl="public-read-write")
bucket_grants = s3_client.get_bucket_acl(public_rw_bucket)
s3_helper.verify_acl_permissions(bucket_grants, PUBLIC_READ_WRITE_GRANTS)
@allure.title("Bucket ACL (s3_client={s3_client})")
def test_s3_bucket_ACL(self, s3_client: S3ClientWrapper):
with reporter.step("Create bucket with public-read-write ACL"):
bucket = s3_client.create_bucket(object_lock_enabled_for_bucket=True, acl="public-read-write")
bucket_grants = s3_client.get_bucket_acl(bucket)
s3_helper.verify_acl_permissions(bucket_grants, PUBLIC_READ_WRITE_GRANTS)
with reporter.step("Change bucket ACL to private"):
s3_client.put_bucket_acl(bucket, acl="private")
bucket_grants = s3_client.get_bucket_acl(bucket)
s3_helper.verify_acl_permissions(bucket_grants, PRIVATE_GRANTS)
with reporter.step("Change bucket ACL to public-read"):
s3_client.put_bucket_acl(bucket, acl="public-read")
bucket_grants = s3_client.get_bucket_acl(bucket)
s3_helper.verify_acl_permissions(bucket_grants, PUBLIC_READ_GRANTS)
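Underneath verify_acl_permissions an ACL check is essentially a set comparison over the grants in a GetBucketAcl/GetObjectAcl response; a hedged sketch of the idea (the grant shape follows the standard S3 response format):

def grants_match(actual: list[dict], expected: list[dict]) -> bool:
    # Each grant is {"Grantee": {...}, "Permission": "..."}; order does not matter.
    def normalize(grants: list[dict]) -> list[tuple]:
        return sorted(
            (g["Permission"], g["Grantee"].get("Type", ""), g["Grantee"].get("URI", ""))
            for g in grants
        )
    return normalize(actual) == normalize(expected)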


@@ -0,0 +1,234 @@
import string
from datetime import datetime, timedelta
import allure
import pytest
from frostfs_testlib import reporter
from frostfs_testlib.s3 import S3ClientWrapper, VersioningStatus
from frostfs_testlib.steps.s3 import s3_helper
from frostfs_testlib.storage.dataclasses.object_size import ObjectSize
from frostfs_testlib.utils import string_utils
from frostfs_testlib.utils.file_utils import generate_file
VALID_SYMBOLS_WITHOUT_DOT = string.ascii_lowercase + string.digits + "-"
VALID_AND_INVALID_SYMBOLS = string.ascii_letters + string.punctuation
# TODO: The dot symbol is temporarily not supported.
VALID_SYMBOLS_WITH_DOT = VALID_SYMBOLS_WITHOUT_DOT + "."
@pytest.mark.nightly
@pytest.mark.s3_gate
@pytest.mark.s3_gate_bucket
class TestS3GateBucket:
@allure.title("Bucket API (s3_client={s3_client})")
def test_s3_buckets(
self,
s3_client: S3ClientWrapper,
simple_object_size: ObjectSize,
):
"""
Test base S3 Bucket API (Create/List/Head/Delete).
"""
file_path = generate_file(simple_object_size.value)
file_name = s3_helper.object_key_from_file_path(file_path)
with reporter.step("Create buckets"):
bucket_1 = s3_client.create_bucket(object_lock_enabled_for_bucket=True)
s3_helper.set_bucket_versioning(s3_client, bucket_1, VersioningStatus.ENABLED)
bucket_2 = s3_client.create_bucket()
with reporter.step("Check buckets are presented in the system"):
buckets = s3_client.list_buckets()
assert bucket_1 in buckets, f"Expected bucket {bucket_1} to be in the list"
assert bucket_2 in buckets, f"Expected bucket {bucket_2} to be in the list"
with reporter.step("Bucket must be empty"):
for bucket in (bucket_1, bucket_2):
with reporter.step("Verify default list command"):
objects_list = s3_client.list_objects(bucket)
assert not objects_list, f"Expected empty bucket, got {objects_list}"
with reporter.step("Verify V2 list command"):
objects_list = s3_client.list_objects_v2(bucket)
assert not objects_list, f"Expected empty bucket, got {objects_list}"
with reporter.step("Check buckets are visible with S3 head command"):
s3_client.head_bucket(bucket_1)
s3_client.head_bucket(bucket_2)
with reporter.step("Check we can put/list object with S3 commands"):
version_id = s3_client.put_object(bucket_1, file_path)
s3_client.head_object(bucket_1, file_name)
bucket_objects = s3_client.list_objects(bucket_1)
assert file_name in bucket_objects, f"Expected file {file_name} in objects list {bucket_objects}"
with reporter.step("Try to delete not empty bucket and get error"):
with pytest.raises(Exception, match=r".*The bucket you tried to delete is not empty.*"):
s3_client.delete_bucket(bucket_1)
s3_client.head_bucket(bucket_1)
with reporter.step("Delete empty bucket_2"):
s3_client.delete_bucket(bucket_2)
with reporter.step("Check bucket_2 is deleted"):
with pytest.raises(Exception, match=r".*Not Found.*"):
s3_client.head_bucket(bucket_2)
buckets = s3_client.list_buckets()
assert bucket_1 in buckets, f"Expected bucket {bucket_1} to be in the list"
assert bucket_2 not in buckets, f"Expected bucket {bucket_2} not to be in the list"
with reporter.step("Delete object from bucket_1"):
s3_client.delete_object(bucket_1, file_name, version_id)
s3_helper.check_objects_in_bucket(s3_client, bucket_1, expected_objects=[])
with reporter.step("Delete bucket_1"):
s3_client.delete_bucket(bucket_1)
with reporter.step("Check bucket_1 deleted"):
with pytest.raises(Exception, match=r".*Not Found.*"):
s3_client.head_bucket(bucket_1)
@allure.title("Create bucket with object lock (s3_client={s3_client})")
def test_s3_bucket_object_lock(self, s3_client: S3ClientWrapper, simple_object_size: ObjectSize):
file_path = generate_file(simple_object_size.value)
file_name = s3_helper.object_key_from_file_path(file_path)
with reporter.step("Create bucket with --no-object-lock-enabled-for-bucket"):
bucket = s3_client.create_bucket(object_lock_enabled_for_bucket=False)
date_obj = datetime.utcnow() + timedelta(days=1)
with pytest.raises(Exception, match=r".*Object Lock configuration does not exist for this bucket.*"):
# An error occurred (ObjectLockConfigurationNotFoundError) when calling the PutObject operation (reached max retries: 0):
# Object Lock configuration does not exist for this bucket
s3_client.put_object(
bucket,
file_path,
object_lock_mode="COMPLIANCE",
object_lock_retain_until_date=date_obj.strftime("%Y-%m-%dT%H:%M:%S"),
)
with reporter.step("Create bucket with --object-lock-enabled-for-bucket"):
bucket_1 = s3_client.create_bucket(object_lock_enabled_for_bucket=True)
date_obj_1 = datetime.utcnow() + timedelta(days=1)
s3_client.put_object(
bucket_1,
file_path,
object_lock_mode="COMPLIANCE",
object_lock_retain_until_date=date_obj_1.strftime("%Y-%m-%dT%H:%M:%S"),
object_lock_legal_hold_status="ON",
)
s3_helper.assert_object_lock_mode(s3_client, bucket_1, file_name, "COMPLIANCE", date_obj_1, "ON")
@allure.title("Delete bucket (s3_client={s3_client})")
def test_s3_delete_bucket(self, s3_client: S3ClientWrapper, simple_object_size: ObjectSize):
file_path_1 = generate_file(simple_object_size.value)
file_name_1 = s3_helper.object_key_from_file_path(file_path_1)
file_path_2 = generate_file(simple_object_size.value)
file_name_2 = s3_helper.object_key_from_file_path(file_path_2)
bucket = s3_client.create_bucket()
with reporter.step("Put two objects into bucket"):
s3_client.put_object(bucket, file_path_1)
s3_client.put_object(bucket, file_path_2)
s3_helper.check_objects_in_bucket(s3_client, bucket, [file_name_1, file_name_2])
with reporter.step("Try to delete not empty bucket and get error"):
with pytest.raises(Exception, match=r".*The bucket you tried to delete is not empty.*"):
s3_client.delete_bucket(bucket)
with reporter.step("Delete object in bucket"):
s3_client.delete_object(bucket, file_name_1)
s3_client.delete_object(bucket, file_name_2)
s3_helper.check_objects_in_bucket(s3_client, bucket, [])
with reporter.step("Delete empty bucket"):
s3_client.delete_bucket(bucket)
with pytest.raises(Exception, match=r".*Not Found.*"):
s3_client.head_bucket(bucket)
@allure.title("Create bucket with valid name length (s3_client={s3_client}, length={length})")
@pytest.mark.parametrize("length", [3, 4, 32, 62, 63])
def test_s3_create_bucket_with_valid_length(self, s3_client: S3ClientWrapper, length: int):
bucket_name = string_utils.random_string(length, VALID_SYMBOLS_WITHOUT_DOT)
while not (bucket_name[0].isalnum() and bucket_name[-1].isalnum()):
bucket_name = string_utils.random_string(length, VALID_SYMBOLS_WITHOUT_DOT)
with reporter.step("Create bucket with valid name length"):
s3_client.create_bucket(bucket_name)
with reporter.step("Check bucket name in buckets"):
assert bucket_name in s3_client.list_buckets()
@allure.title("[NEGATIVE] Bucket with invalid name length should not be created (s3_client={s3_client}, length={length})")
@pytest.mark.parametrize("length", [2, 64, 254, 255, 256])
def test_s3_create_bucket_with_invalid_length(self, s3_client: S3ClientWrapper, length: int):
bucket_name = string_utils.random_string(length, VALID_SYMBOLS_WITHOUT_DOT)
while not (bucket_name[0].isalnum() and bucket_name[-1].isalnum()):
bucket_name = string_utils.random_string(length, VALID_SYMBOLS_WITHOUT_DOT)
with reporter.step("Create bucket with invalid name length and catch exception"):
with pytest.raises(Exception, match=".*(?:InvalidBucketName|Invalid bucket name).*"):
s3_client.create_bucket(bucket_name)
@allure.title("[NEGATIVE] Bucket with invalid name should not be created (s3_client={s3_client}, bucket_name={bucket_name})")
@pytest.mark.parametrize(
"bucket_name",
[
"BUCKET-1",
"buckeT-2",
# The following case for AWS CLI is not handled correctly
# "-bucket-3",
"bucket-4-",
".bucket-5",
"bucket-6.",
"bucket..7",
"bucket+8",
"bucket_9",
"bucket 10",
"127.10.5.11",
"xn--bucket-12",
"bucket-13-s3alias",
# The following names can be used in FrostFS but are prohibited by the AWS specification.
# "sthree-bucket-14"
# "sthree-configurator-bucket-15"
# "amzn-s3-demo-bucket-16"
# "sthree-bucket-17"
# "bucket-18--ol-s3"
# "bucket-19--x-s3"
# "bucket-20.mrap"
],
)
def test_s3_create_bucket_with_invalid_name(self, s3_client: S3ClientWrapper, bucket_name: str):
with reporter.step("Create bucket with invalid name and catch exception"):
with pytest.raises(Exception, match=".*(?:InvalidBucketName|Invalid bucket name).*"):
s3_client.create_bucket(bucket_name)
@allure.title("[NEGATIVE] Delete non-empty bucket (s3_client={s3_client})")
def test_s3_check_availability_non_empty_bucket_after_deleting(
self,
bucket: str,
simple_object_size: ObjectSize,
s3_client: S3ClientWrapper,
):
object_path = generate_file(simple_object_size.value)
object_name = s3_helper.object_key_from_file_path(object_path)
with reporter.step("Put object into bucket"):
s3_client.put_object(bucket, object_path)
with reporter.step("Check that object appears in bucket"):
objects = s3_client.list_objects(bucket)
assert objects, f"Expected bucket with object, got empty {objects}"
assert object_name in objects, f"Object {object_name} not found in bucket object list {objects}"
with reporter.step("Try to delete not empty bucket and get error"):
with pytest.raises(Exception, match=r".*The bucket you tried to delete is not empty.*"):
s3_client.delete_bucket(bucket)
with reporter.step("Check bucket availability"):
objects = s3_client.list_objects(bucket)
assert objects, f"Expected bucket with object, got empty {objects}"
assert object_name in objects, f"Object {object_name} not found in bucket object list {objects}"
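The valid and invalid names above track the AWS bucket-naming specification; the same rules can be condensed into one predicate (a sketch; dots are deliberately excluded to match the TODO at the top of the file):

import re

def is_valid_bucket_name(name: str) -> bool:
    # 3-63 characters, lowercase letters/digits/hyphens, alphanumeric at both ends
    if not re.fullmatch(r"[a-z0-9][a-z0-9-]{1,61}[a-z0-9]", name):
        return False
    # must not be formatted like an IP address
    if re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", name):
        return False
    # reserved prefix/suffix from the AWS specification
    return not name.startswith("xn--") and not name.endswith("-s3alias")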


@@ -0,0 +1,191 @@
import time
from datetime import datetime, timedelta
import allure
import pytest
from frostfs_testlib import reporter
from frostfs_testlib.s3 import S3ClientWrapper
from frostfs_testlib.steps.s3 import s3_helper
from frostfs_testlib.storage.dataclasses.object_size import ObjectSize
from frostfs_testlib.utils.file_utils import generate_file, generate_file_with_content
@allure.title("[Module] Create bucket with object_lock_enabled_for_bucket")
@pytest.fixture(scope="module")
def bucket_w_lock(s3_client: S3ClientWrapper):
return s3_client.create_bucket(object_lock_enabled_for_bucket=True)
@allure.title("[Module] Create bucket without object_lock_enabled_for_bucket")
@pytest.fixture(scope="module")
def bucket_no_lock(s3_client: S3ClientWrapper):
return s3_client.create_bucket(object_lock_enabled_for_bucket=False)
@pytest.mark.nightly
@pytest.mark.s3_gate
@pytest.mark.s3_gate_locking
@pytest.mark.parametrize("version_id", [None, "second"])
class TestS3GateLocking:
@allure.title("Retention period and legal lock on object (version_id={version_id}, s3_client={s3_client})")
def test_s3_object_locking(self, s3_client: S3ClientWrapper, bucket_w_lock: str, version_id: str, simple_object_size: ObjectSize):
file_path = generate_file(simple_object_size.value)
file_name = s3_helper.object_key_from_file_path(file_path)
retention_period = 2
with reporter.step("Put several versions of object into bucket"):
s3_client.put_object(bucket_w_lock, file_path)
file_name_1 = generate_file_with_content(simple_object_size.value, file_path=file_path)
version_id_2 = s3_client.put_object(bucket_w_lock, file_name_1)
if version_id:
version_id = version_id_2
with reporter.step(f"Put retention period {retention_period}min to object {file_name}"):
date_obj = datetime.utcnow() + timedelta(minutes=retention_period)
retention = {
"Mode": "COMPLIANCE",
"RetainUntilDate": date_obj,
}
s3_client.put_object_retention(bucket_w_lock, file_name, retention, version_id)
s3_helper.assert_object_lock_mode(s3_client, bucket_w_lock, file_name, "COMPLIANCE", date_obj, "OFF")
with reporter.step(f"Put legal hold to object {file_name}"):
s3_client.put_object_legal_hold(bucket_w_lock, file_name, "ON", version_id)
s3_helper.assert_object_lock_mode(s3_client, bucket_w_lock, file_name, "COMPLIANCE", date_obj, "ON")
with reporter.step("Fail with deleting object with legal hold and retention period"):
if version_id:
with pytest.raises(Exception):
# An error occurred (AccessDenied) when calling the DeleteObject operation (reached max retries: 0): Access Denied.
s3_client.delete_object(bucket_w_lock, file_name, version_id)
with reporter.step("Check retention period is no longer set on the uploaded object"):
time.sleep((retention_period + 1) * 60)
s3_helper.assert_object_lock_mode(s3_client, bucket_w_lock, file_name, "COMPLIANCE", date_obj, "ON")
with reporter.step("Fail with deleting object with legal hold and retention period"):
if version_id:
with pytest.raises(Exception):
# An error occurred (AccessDenied) when calling the DeleteObject operation (reached max retries: 0): Access Denied.
s3_client.delete_object(bucket_w_lock, file_name, version_id)
else:
s3_client.delete_object(bucket_w_lock, file_name, version_id)
@allure.title("Impossible to change retention mode COMPLIANCE (version_id={version_id}, s3_client={s3_client})")
def test_s3_mode_compliance(self, s3_client: S3ClientWrapper, bucket_w_lock: str, version_id: str, simple_object_size: ObjectSize):
file_path = generate_file(simple_object_size.value)
file_name = s3_helper.object_key_from_file_path(file_path)
retention_period = 2
retention_period_1 = 1
with reporter.step("Put object into bucket"):
obj_version = s3_client.put_object(bucket_w_lock, file_path)
if version_id:
version_id = obj_version
with reporter.step(f"Put retention period {retention_period}min to object {file_name}"):
date_obj = datetime.utcnow() + timedelta(minutes=retention_period)
retention = {
"Mode": "COMPLIANCE",
"RetainUntilDate": date_obj,
}
s3_client.put_object_retention(bucket_w_lock, file_name, retention, version_id)
s3_helper.assert_object_lock_mode(s3_client, bucket_w_lock, file_name, "COMPLIANCE", date_obj, "OFF")
with reporter.step(f"Try to change retention period {retention_period_1}min to object {file_name}"):
date_obj = datetime.utcnow() + timedelta(minutes=retention_period_1)
retention = {
"Mode": "COMPLIANCE",
"RetainUntilDate": date_obj,
}
with pytest.raises(Exception):
s3_client.put_object_retention(bucket_w_lock, file_name, retention, version_id)
@allure.title("Change retention mode GOVERNANCE (version_id={version_id}, s3_client={s3_client})")
def test_s3_mode_governance(self, s3_client: S3ClientWrapper, bucket_w_lock: str, version_id: str, simple_object_size: ObjectSize):
file_path = generate_file(simple_object_size.value)
file_name = s3_helper.object_key_from_file_path(file_path)
retention_period = 3
retention_period_1 = 2
retention_period_2 = 5
with reporter.step("Put object into bucket"):
obj_version = s3_client.put_object(bucket_w_lock, file_path)
if version_id:
version_id = obj_version
with reporter.step(f"Put retention period {retention_period}min to object {file_name}"):
date_obj = datetime.utcnow() + timedelta(minutes=retention_period)
retention = {
"Mode": "GOVERNANCE",
"RetainUntilDate": date_obj,
}
s3_client.put_object_retention(bucket_w_lock, file_name, retention, version_id)
s3_helper.assert_object_lock_mode(s3_client, bucket_w_lock, file_name, "GOVERNANCE", date_obj, "OFF")
with reporter.step(f"Try to change retention period {retention_period_1}min to object {file_name}"):
date_obj = datetime.utcnow() + timedelta(minutes=retention_period_1)
retention = {
"Mode": "GOVERNANCE",
"RetainUntilDate": date_obj,
}
with pytest.raises(Exception):
s3_client.put_object_retention(bucket_w_lock, file_name, retention, version_id)
with reporter.step(f"Try to change retention period {retention_period_1}min to object {file_name}"):
date_obj = datetime.utcnow() + timedelta(minutes=retention_period_1)
retention = {
"Mode": "GOVERNANCE",
"RetainUntilDate": date_obj,
}
with pytest.raises(Exception):
s3_client.put_object_retention(bucket_w_lock, file_name, retention, version_id)
with reporter.step(f"Put new retention period {retention_period_2}min to object {file_name}"):
date_obj = datetime.utcnow() + timedelta(minutes=retention_period_2)
retention = {
"Mode": "GOVERNANCE",
"RetainUntilDate": date_obj,
}
s3_client.put_object_retention(bucket_w_lock, file_name, retention, version_id, True)
s3_helper.assert_object_lock_mode(s3_client, bucket_w_lock, file_name, "GOVERNANCE", date_obj, "OFF")
@allure.title("[NEGATIVE] Lock object in bucket with disabled locking (version_id={version_id}, s3_client={s3_client})")
def test_s3_legal_hold(self, s3_client: S3ClientWrapper, bucket_no_lock: str, version_id: str, simple_object_size: ObjectSize):
file_path = generate_file(simple_object_size.value)
file_name = s3_helper.object_key_from_file_path(file_path)
with reporter.step("Put object into bucket"):
obj_version = s3_client.put_object(bucket_no_lock, file_path)
if version_id:
version_id = obj_version
with reporter.step(f"Put legal hold to object {file_name}"):
with pytest.raises(Exception):
s3_client.put_object_legal_hold(bucket_no_lock, file_name, "ON", version_id)
@pytest.mark.nightly
@pytest.mark.s3_gate
class TestS3GateLockingBucket:
@allure.title("Bucket Lock (s3_client={s3_client})")
def test_s3_bucket_lock(self, s3_client: S3ClientWrapper, bucket_w_lock: str, simple_object_size: ObjectSize):
file_path = generate_file(simple_object_size.value)
file_name = s3_helper.object_key_from_file_path(file_path)
configuration = {"Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 1}}}
with reporter.step("PutObjectLockConfiguration with ObjectLockEnabled=False"):
s3_client.put_object_lock_configuration(bucket_w_lock, configuration)
with reporter.step("PutObjectLockConfiguration with ObjectLockEnabled=True"):
configuration["ObjectLockEnabled"] = "Enabled"
s3_client.put_object_lock_configuration(bucket_w_lock, configuration)
with reporter.step("GetObjectLockConfiguration"):
config = s3_client.get_object_lock_configuration(bucket_w_lock)
configuration["Rule"]["DefaultRetention"]["Years"] = 0
assert config == configuration, f"Configurations must be equal {configuration}"
with reporter.step("Put object into bucket"):
s3_client.put_object(bucket_w_lock, file_path)
s3_helper.assert_object_lock_mode(s3_client, bucket_w_lock, file_name, "COMPLIANCE", None, "OFF", 1)
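In raw boto3 terms the retention and legal-hold calls made through the wrapper look roughly as follows (a sketch; endpoint, bucket and key are placeholders). A GOVERNANCE retention can only be relaxed with the bypass flag, which is what the final governance step verifies:

from datetime import datetime, timedelta, timezone
import boto3

s3 = boto3.client("s3", endpoint_url="http://s3-gate.example:8080")
until = datetime.now(timezone.utc) + timedelta(minutes=3)
s3.put_object_retention(
    Bucket="bucket-w-lock",
    Key="object-key",
    Retention={"Mode": "GOVERNANCE", "RetainUntilDate": until},
)
s3.put_object_legal_hold(
    Bucket="bucket-w-lock",
    Key="object-key",
    LegalHold={"Status": "ON"},
)
# Shortening a GOVERNANCE retention requires an explicit bypass:
s3.put_object_retention(
    Bucket="bucket-w-lock",
    Key="object-key",
    Retention={"Mode": "GOVERNANCE", "RetainUntilDate": until - timedelta(minutes=1)},
    BypassGovernanceRetention=True,
)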


@@ -0,0 +1,178 @@
import allure
import pytest
from frostfs_testlib import reporter
from frostfs_testlib.s3 import S3ClientWrapper, VersioningStatus
from frostfs_testlib.s3.interfaces import BucketContainerResolver
from frostfs_testlib.steps.cli.container import list_objects
from frostfs_testlib.steps.s3 import s3_helper
from frostfs_testlib.storage.dataclasses.object_size import ObjectSize
from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
from frostfs_testlib.testing.cluster_test_base import ClusterTestBase
from frostfs_testlib.testing.test_control import wait_for_success
from frostfs_testlib.utils.file_utils import generate_file, get_file_hash, split_file
PART_SIZE = 5 * 1024 * 1024
@pytest.mark.nightly
@pytest.mark.s3_gate
@pytest.mark.s3_gate_multipart
class TestS3GateMultipart(ClusterTestBase):
NO_SUCH_UPLOAD = "The upload ID may be invalid, or the upload may have been aborted or completed."
@allure.title("Object Multipart API (s3_client={s3_client}, bucket versioning = {versioning_status})")
@pytest.mark.parametrize("versioning_status", [VersioningStatus.ENABLED, VersioningStatus.UNDEFINED], indirect=True)
def test_s3_object_multipart(
self,
s3_client: S3ClientWrapper,
bucket: str,
default_wallet: WalletInfo,
versioning_status: str,
bucket_container_resolver: BucketContainerResolver,
):
parts_count = 5
file_name_large = generate_file(PART_SIZE * parts_count)  # PART_SIZE is 5 MB, the minimum allowed part size
object_key = s3_helper.object_key_from_file_path(file_name_large)
part_files = split_file(file_name_large, parts_count)
parts = []
with reporter.step(f"Get related container_id for bucket"):
for cluster_node in self.cluster.cluster_nodes:
container_id = bucket_container_resolver.resolve(cluster_node, bucket)
if container_id:
break
with reporter.step("Upload first part"):
upload_id = s3_client.create_multipart_upload(bucket, object_key)
uploads = s3_client.list_multipart_uploads(bucket)
etag = s3_client.upload_part(bucket, object_key, upload_id, 1, part_files[0])
parts.append((1, etag))
got_parts = s3_client.list_parts(bucket, object_key, upload_id)
assert len(got_parts) == 1, f"Expected 1 part, got\n{got_parts}"
with reporter.step("Upload last parts"):
for part_id, file_path in enumerate(part_files[1:], start=2):
etag = s3_client.upload_part(bucket, object_key, upload_id, part_id, file_path)
parts.append((part_id, etag))
with reporter.step("Check all parts are visible in bucket"):
got_parts = s3_client.list_parts(bucket, object_key, upload_id)
assert len(got_parts) == len(part_files), f"Expected {parts_count} parts, got\n{got_parts}"
with reporter.step("Complete multipart upload"):
response = s3_client.complete_multipart_upload(bucket, object_key, upload_id, parts)
version_id = None
if versioning_status == VersioningStatus.ENABLED:
version_id = response["VersionId"]
with reporter.step("There should be no multipart uploads"):
uploads = s3_client.list_multipart_uploads(bucket)
assert not uploads, f"Expected no uploads in bucket {bucket}"
with reporter.step("Check we can get whole object from bucket"):
got_object = s3_client.get_object(bucket, object_key)
assert get_file_hash(got_object) == get_file_hash(file_name_large)
with reporter.step("Delete the object"):
s3_client.delete_object(bucket, object_key, version_id)
with reporter.step("There should be no objects in bucket"):
objects_list = s3_client.list_objects(bucket)
assert not objects_list, f"Expected empty bucket, got {objects_list}"
with reporter.step("There should be no objects in container"):
objects = list_objects(default_wallet, self.shell, container_id, self.cluster.default_rpc_endpoint)
assert len(objects) == 0, f"Expected no objects in container, got\n{objects}"
@allure.title("Abort Multipart Upload (s3_client={s3_client})")
@pytest.mark.parametrize("versioning_status", [VersioningStatus.ENABLED], indirect=True)
def test_s3_abort_multipart(
self,
s3_client: S3ClientWrapper,
default_wallet: WalletInfo,
bucket: str,
simple_object_size: ObjectSize,
complex_object_size: ObjectSize,
bucket_container_resolver: BucketContainerResolver,
):
complex_file = generate_file(complex_object_size.value)
simple_file = generate_file(simple_object_size.value)
to_upload = [complex_file, complex_file, simple_file]
files_count = len(to_upload)
upload_key = "multipart_abort"
with reporter.step("Get related container_id for bucket"):
for cluster_node in self.cluster.cluster_nodes:
container_id = bucket_container_resolver.resolve(cluster_node, bucket)
if container_id:
break
with reporter.step("Create multipart upload"):
upload_id = s3_client.create_multipart_upload(bucket, upload_key)
with reporter.step(f"Upload {files_count} parts to multipart upload"):
for i, file in enumerate(to_upload, 1):
s3_client.upload_part(bucket, upload_key, upload_id, i, file)
with reporter.step(f"There should be {files_count} objects in bucket"):
parts = s3_client.list_parts(bucket, upload_key, upload_id)
assert len(parts) == files_count, f"Expected {files_count} parts, got\n{parts}"
with reporter.step(f"There should be {files_count} objects in container"):
objects = list_objects(default_wallet, self.shell, container_id, self.cluster.default_rpc_endpoint)
assert len(objects) == files_count, f"Expected {files_count} objects in container, got\n{objects}"
with reporter.step("Abort multipart upload"):
s3_client.abort_multipart_upload(bucket, upload_key, upload_id)
uploads = s3_client.list_multipart_uploads(bucket)
assert not uploads, f"Expected no uploads in bucket {bucket}"
with reporter.step("There should be no objects in bucket"):
with pytest.raises(Exception, match=self.NO_SUCH_UPLOAD):
s3_client.list_parts(bucket, upload_key, upload_id)
with reporter.step("There should be no objects in container"):
@wait_for_success(120, 10)
def check_no_objects():
objects = list_objects(default_wallet, self.shell, container_id, self.cluster.default_rpc_endpoint)
assert len(objects) == 0, f"Expected no objects in container, got\n{objects}"
check_no_objects()
@allure.title("Upload Part Copy (s3_client={s3_client})")
@pytest.mark.parametrize("versioning_status", [VersioningStatus.ENABLED], indirect=True)
def test_s3_multipart_copy(self, s3_client: S3ClientWrapper, bucket: str):
parts_count = 3
file_name_large = generate_file(PART_SIZE * parts_count)  # PART_SIZE is 5 MB, the minimum allowed part size
object_key = s3_helper.object_key_from_file_path(file_name_large)
part_files = split_file(file_name_large, parts_count)
parts = []
objs = []
with reporter.step(f"Put {parts_count} objects in bucket"):
for part in part_files:
s3_client.put_object(bucket, part)
objs.append(s3_helper.object_key_from_file_path(part))
s3_helper.check_objects_in_bucket(s3_client, bucket, objs)
with reporter.step("Create multipart upload object"):
upload_id = s3_client.create_multipart_upload(bucket, object_key)
uploads = s3_client.list_multipart_uploads(bucket)
assert len(uploads) == 1, f"Expected one upload in bucket {bucket}"
assert uploads[0].get("Key") == object_key, f"Expected correct key {object_key} in upload {uploads}"
assert uploads[0].get("UploadId") == upload_id, f"Expected correct UploadId {upload_id} in upload {uploads}"
with reporter.step("Upload parts to multipart upload"):
for part_id, obj_key in enumerate(objs, start=1):
etag = s3_client.upload_part_copy(bucket, object_key, upload_id, part_id, f"{bucket}/{obj_key}")
parts.append((part_id, etag))
got_parts = s3_client.list_parts(bucket, object_key, upload_id)
with reporter.step("Complete multipart upload"):
s3_client.complete_multipart_upload(bucket, object_key, upload_id, parts)
assert len(got_parts) == len(part_files), f"Expected {parts_count} parts, got\n{got_parts}"
with reporter.step("Get whole object from bucket"):
got_object = s3_client.get_object(bucket, object_key)
assert get_file_hash(got_object) == get_file_hash(file_name_large)
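The same create/upload/complete choreography in plain boto3, for orientation (a sketch; bucket, key and endpoint are placeholders). Every part except the last must be at least 5 MB, which is why PART_SIZE above is 5 * 1024 * 1024:

import boto3

s3 = boto3.client("s3", endpoint_url="http://s3-gate.example:8080")
upload = s3.create_multipart_upload(Bucket="demo-bucket", Key="big-object")
parts = []
for number, chunk in enumerate(["part1.bin", "part2.bin", "part3.bin"], start=1):
    with open(chunk, "rb") as body:
        response = s3.upload_part(
            Bucket="demo-bucket",
            Key="big-object",
            UploadId=upload["UploadId"],
            PartNumber=number,
            Body=body,
        )
    parts.append({"PartNumber": number, "ETag": response["ETag"]})
s3.complete_multipart_upload(
    Bucket="demo-bucket",
    Key="big-object",
    UploadId=upload["UploadId"],
    MultipartUpload={"Parts": parts},
)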


@@ -0,0 +1,916 @@
import os
import random
import string
import uuid
from datetime import datetime, timedelta
from typing import Literal
import allure
import pytest
from frostfs_testlib import reporter
from frostfs_testlib.resources.common import ASSETS_DIR, DEFAULT_WALLET_PASS
from frostfs_testlib.resources.error_patterns import S3_BUCKET_DOES_NOT_ALLOW_ACL, S3_MALFORMED_XML_REQUEST
from frostfs_testlib.resources.s3_acl_grants import PRIVATE_GRANTS
from frostfs_testlib.s3 import AwsCliClient, S3ClientWrapper, VersioningStatus
from frostfs_testlib.steps.s3 import s3_helper
from frostfs_testlib.storage.dataclasses.object_size import ObjectSize
from frostfs_testlib.testing.test_control import expect_not_raises
from frostfs_testlib.utils import wallet_utils
from frostfs_testlib.utils.file_utils import TestFile, concat_files, generate_file, generate_file_with_content, get_file_hash
@pytest.mark.nightly
@pytest.mark.s3_gate
@pytest.mark.s3_gate_object
class TestS3GateObject:
@pytest.fixture
def second_wallet_public_key(self):
second_wallet = os.path.join(os.getcwd(), ASSETS_DIR, f"{str(uuid.uuid4())}.json")
wallet_utils.init_wallet(second_wallet, DEFAULT_WALLET_PASS)
public_key = wallet_utils.get_wallet_public_key(second_wallet, DEFAULT_WALLET_PASS)
yield public_key
@allure.title("Object API (obj_size={object_size}, s3_client={s3_client})")
@pytest.mark.parametrize(
"object_size",
["simple", "complex"],
indirect=True,
)
def test_s3_api_object(
self,
s3_client: S3ClientWrapper,
object_size: ObjectSize,
bucket: str,
):
"""
Test base S3 Object API (Put/Head/List) for simple and complex objects.
"""
with reporter.step("Prepare object to upload"):
test_file = generate_file(object_size.value)
file_name = s3_helper.object_key_from_file_path(test_file)
with reporter.step("Put object to bucket"):
s3_client.put_object(bucket, test_file)
with reporter.step("Head object from bucket"):
s3_client.head_object(bucket, file_name)
with reporter.step("Verify object in list"):
bucket_objects = s3_client.list_objects(bucket)
assert file_name in bucket_objects, f"Expected file {file_name} in objects list {bucket_objects}"
with reporter.step("Check object's attributes"):
for attrs in (["ETag"], ["ObjectSize", "StorageClass"]):
s3_client.get_object_attributes(bucket, file_name, attrs)
@allure.title("Copy object (s3_client={s3_client})")
def test_s3_copy_object(
self,
s3_client: S3ClientWrapper,
two_buckets: list[str],
simple_object_size: ObjectSize,
):
file_path = generate_file(simple_object_size.value)
file_name = s3_helper.object_key_from_file_path(file_path)
bucket_1_objects = [file_name]
bucket_1, bucket_2 = two_buckets
with reporter.step("Put object into one bucket"):
s3_client.put_object(bucket_1, file_path)
with reporter.step("Copy one object into the same bucket"):
copy_obj_path = s3_client.copy_object(bucket_1, file_name)
bucket_1_objects.append(copy_obj_path)
s3_helper.check_objects_in_bucket(s3_client, bucket_1, bucket_1_objects)
objects_list = s3_client.list_objects(bucket_2)
assert not objects_list, f"Expected empty bucket, got {objects_list}"
with reporter.step("Copy object from first bucket into second"):
copy_obj_path_b2 = s3_client.copy_object(bucket_1, file_name, bucket=bucket_2)
s3_helper.check_objects_in_bucket(s3_client, bucket_1, expected_objects=bucket_1_objects)
s3_helper.check_objects_in_bucket(s3_client, bucket_2, expected_objects=[copy_obj_path_b2])
with reporter.step("Check copied object has the same content"):
got_copied_file_b2 = s3_client.get_object(bucket_2, copy_obj_path_b2)
assert get_file_hash(file_path) == get_file_hash(got_copied_file_b2), "Hashes must be the same"
with reporter.step("Delete one object from first bucket"):
s3_client.delete_object(bucket_1, file_name)
bucket_1_objects.remove(file_name)
s3_helper.check_objects_in_bucket(s3_client, bucket_1, expected_objects=bucket_1_objects)
s3_helper.check_objects_in_bucket(s3_client, bucket_2, expected_objects=[copy_obj_path_b2])
with reporter.step("Copy one object into the same bucket"):
with pytest.raises(Exception):
s3_client.copy_object(bucket_1, file_name)
@allure.title("Copy version of object (s3_client={s3_client})")
def test_s3_copy_version_object(
self,
s3_client: S3ClientWrapper,
two_buckets: list[str],
simple_object_size: ObjectSize,
):
version_1_content = "Version 1"
file_name_simple = generate_file_with_content(simple_object_size.value, content=version_1_content)
obj_key = os.path.basename(file_name_simple)
bucket_1, bucket_2 = two_buckets
s3_helper.set_bucket_versioning(s3_client, bucket_1, VersioningStatus.ENABLED)
with reporter.step("Put object into bucket"):
s3_client.put_object(bucket_1, file_name_simple)
bucket_1_objects = [obj_key]
s3_helper.check_objects_in_bucket(s3_client, bucket_1, [obj_key])
with reporter.step("Copy one object into the same bucket"):
copy_obj_path = s3_client.copy_object(bucket_1, obj_key)
bucket_1_objects.append(copy_obj_path)
s3_helper.check_objects_in_bucket(s3_client, bucket_1, bucket_1_objects)
s3_helper.set_bucket_versioning(s3_client, bucket_2, VersioningStatus.ENABLED)
with reporter.step("Copy object from first bucket into second"):
copy_obj_path_b2 = s3_client.copy_object(bucket_1, obj_key, bucket=bucket_2)
s3_helper.check_objects_in_bucket(s3_client, bucket_1, expected_objects=bucket_1_objects)
s3_helper.check_objects_in_bucket(s3_client, bucket_2, expected_objects=[copy_obj_path_b2])
with reporter.step("Delete one object from first bucket and check object in bucket"):
s3_client.delete_object(bucket_1, obj_key)
bucket_1_objects.remove(obj_key)
s3_helper.check_objects_in_bucket(s3_client, bucket_1, expected_objects=bucket_1_objects)
with reporter.step("Copy one object into the same bucket"):
with pytest.raises(Exception):
s3_client.copy_object(bucket_1, obj_key)
@allure.title("Copy with acl (s3_client={s3_client})")
def test_s3_copy_acl(self, s3_client: S3ClientWrapper, bucket: str, simple_object_size: ObjectSize):
file_path = generate_file_with_content(simple_object_size.value)
file_name = os.path.basename(file_path)
s3_helper.set_bucket_versioning(s3_client, bucket, VersioningStatus.ENABLED)
with reporter.step("Put object into bucket"):
s3_client.put_object(bucket, file_path)
s3_helper.check_objects_in_bucket(s3_client, bucket, [file_name])
with reporter.step("[NEGATIVE] Copy object with public-read-write ACL"):
with pytest.raises(Exception, match=S3_BUCKET_DOES_NOT_ALLOW_ACL):
copy_path = s3_client.copy_object(bucket, file_name, acl="public-read-write")
with reporter.step("Copy object with private ACL"):
copy_path = s3_client.copy_object(bucket, file_name, acl="private")
object_grants = s3_client.get_object_acl(bucket, copy_path)
s3_helper.verify_acl_permissions(object_grants, PRIVATE_GRANTS)
@allure.title("Copy object with metadata (s3_client={s3_client})")
def test_s3_copy_metadata(self, s3_client: S3ClientWrapper, bucket: str, simple_object_size: ObjectSize):
object_metadata = {f"{uuid.uuid4()}": f"{uuid.uuid4()}"}
file_path = generate_file(simple_object_size.value)
file_name = s3_helper.object_key_from_file_path(file_path)
s3_helper.set_bucket_versioning(s3_client, bucket, VersioningStatus.ENABLED)
with reporter.step("Put object into bucket"):
s3_client.put_object(bucket, file_path, metadata=object_metadata)
bucket_1_objects = [file_name]
s3_helper.check_objects_in_bucket(s3_client, bucket, bucket_1_objects)
with reporter.step("Copy one object"):
copy_obj_path = s3_client.copy_object(bucket, file_name)
bucket_1_objects.append(copy_obj_path)
s3_helper.check_objects_in_bucket(s3_client, bucket, bucket_1_objects)
obj_head = s3_client.head_object(bucket, copy_obj_path)
assert obj_head.get("Metadata") == object_metadata, f"Metadata must be {object_metadata}"
with reporter.step("Copy one object with metadata"):
copy_obj_path = s3_client.copy_object(bucket, file_name, metadata_directive="COPY")
bucket_1_objects.append(copy_obj_path)
obj_head = s3_client.head_object(bucket, copy_obj_path)
assert obj_head.get("Metadata") == object_metadata, f"Metadata must be {object_metadata}"
with reporter.step("Copy one object with new metadata"):
object_metadata_1 = {f"{uuid.uuid4()}": f"{uuid.uuid4()}"}
copy_obj_path = s3_client.copy_object(
bucket,
file_name,
metadata_directive="REPLACE",
metadata=object_metadata_1,
)
bucket_1_objects.append(copy_obj_path)
obj_head = s3_client.head_object(bucket, copy_obj_path)
assert obj_head.get("Metadata") == object_metadata_1, f"Metadata must be {object_metadata_1}"
@allure.title("Copy object with tagging (s3_client={s3_client})")
def test_s3_copy_tagging(self, s3_client: S3ClientWrapper, bucket: str, simple_object_size: ObjectSize):
object_tagging = [(f"{uuid.uuid4()}", f"{uuid.uuid4()}")]
file_path = generate_file(simple_object_size.value)
file_name_simple = s3_helper.object_key_from_file_path(file_path)
s3_helper.set_bucket_versioning(s3_client, bucket, VersioningStatus.ENABLED)
with reporter.step("Put several versions of object into bucket"):
s3_client.put_object(bucket, file_path)
s3_client.put_object_tagging(bucket, file_name_simple, tags=object_tagging)
bucket_1_objects = [file_name_simple]
s3_helper.check_objects_in_bucket(s3_client, bucket, bucket_1_objects)
with reporter.step("Copy one object without tag"):
copy_obj_path = s3_client.copy_object(bucket, file_name_simple)
got_tags = s3_client.get_object_tagging(bucket, copy_obj_path)
assert got_tags, f"Expected tags, got {got_tags}"
expected_tags = [{"Key": key, "Value": value} for key, value in object_tagging]
for tag in expected_tags:
assert tag in got_tags, f"Expected tag {tag} in {got_tags}"
with reporter.step("Copy one object with tag"):
copy_obj_path_1 = s3_client.copy_object(bucket, file_name_simple, tagging_directive="COPY")
got_tags = s3_client.get_object_tagging(bucket, copy_obj_path_1)
assert got_tags, f"Expected tags, got {got_tags}"
expected_tags = [{"Key": key, "Value": value} for key, value in object_tagging]
for tag in expected_tags:
assert tag in got_tags, f"Expected tag {tag} in {got_tags}"
with reporter.step("Copy one object with new tag"):
tag_key = "tag1"
tag_value = uuid.uuid4()
new_tag = f"{tag_key}={tag_value}"
copy_obj_path = s3_client.copy_object(
bucket,
file_name_simple,
tagging_directive="REPLACE",
tagging=new_tag,
)
got_tags = s3_client.get_object_tagging(bucket, copy_obj_path)
assert got_tags, f"Expected tags, got {got_tags}"
expected_tags = [{"Key": tag_key, "Value": str(tag_value)}]
for tag in expected_tags:
assert tag in got_tags, f"Expected tag {tag} in {got_tags}"
@allure.title("Delete version of object (s3_client={s3_client})")
def test_s3_delete_versioning(
self,
s3_client: S3ClientWrapper,
bucket: str,
simple_object_size: ObjectSize,
complex_object_size: ObjectSize,
):
version_1_content = "Version 1"
version_2_content = "Version 2"
file_name_simple = generate_file_with_content(simple_object_size.value, content=version_1_content)
obj_key = os.path.basename(file_name_simple)
s3_helper.set_bucket_versioning(s3_client, bucket, VersioningStatus.ENABLED)
with reporter.step("Put several versions of object into bucket"):
version_id_1 = s3_client.put_object(bucket, file_name_simple)
file_name_1 = generate_file_with_content(simple_object_size.value, file_name_simple, version_2_content)
version_id_2 = s3_client.put_object(bucket, file_name_1)
with reporter.step("Check bucket shows all versions"):
versions = s3_client.list_objects_versions(bucket)
obj_versions = {version.get("VersionId") for version in versions if version.get("Key") == obj_key}
assert obj_versions == {
version_id_1,
version_id_2,
}, f"Object should have versions: {version_id_1, version_id_2}"
with reporter.step("Delete 1 version of object"):
delete_obj = s3_client.delete_object(bucket, obj_key, version_id=version_id_1)
versions = s3_client.list_objects_versions(bucket)
obj_versions = {version.get("VersionId") for version in versions if version.get("Key") == obj_key}
assert obj_versions == {version_id_2}, f"Object should have versions: {version_id_2}"
assert "DeleteMarker" not in delete_obj.keys(), "Delete markers should not be created"
with reporter.step("Delete second version of object"):
delete_obj = s3_client.delete_object(bucket, obj_key, version_id_2)
versions = s3_client.list_objects_versions(bucket)
obj_versions = {version.get("VersionId") for version in versions if version.get("Key") == obj_key}
assert not obj_versions, "Expected no remaining versions of the object"
assert "DeleteMarker" not in delete_obj.keys(), "Delete markers should not be created"
with reporter.step("Put new object into bucket"):
file_name_complex = generate_file(complex_object_size.value)
obj_key = os.path.basename(file_name_complex)
s3_client.put_object(bucket, file_name_complex)
with reporter.step("Delete last object"):
delete_obj = s3_client.delete_object(bucket, obj_key)
versions = s3_client.list_objects_versions(bucket, True)
assert versions.get("DeleteMarkers", None), "Expected DeleteMarkers in the listing"
assert "DeleteMarker" in delete_obj.keys(), "Expected DeleteMarker in the delete response"
@allure.title("Bulk delete version of object (s3_client={s3_client})")
def test_s3_bulk_delete_versioning(self, s3_client: S3ClientWrapper, bucket: str, simple_object_size: ObjectSize):
version_1_content = "Version 1"
version_2_content = "Version 2"
version_3_content = "Version 3"
version_4_content = "Version 4"
file_name_1 = generate_file_with_content(simple_object_size.value, content=version_1_content)
obj_key = os.path.basename(file_name_1)
s3_helper.set_bucket_versioning(s3_client, bucket, VersioningStatus.ENABLED)
with reporter.step("Put several versions of object into bucket"):
version_id_1 = s3_client.put_object(bucket, file_name_1)
file_name_2 = generate_file_with_content(simple_object_size.value, file_name_1, version_2_content)
version_id_2 = s3_client.put_object(bucket, file_name_2)
file_name_3 = generate_file_with_content(simple_object_size.value, file_name_1, version_3_content)
version_id_3 = s3_client.put_object(bucket, file_name_3)
file_name_4 = generate_file_with_content(simple_object_size.value, file_name_1, version_4_content)
version_id_4 = s3_client.put_object(bucket, file_name_4)
version_ids = {version_id_1, version_id_2, version_id_3, version_id_4}
with reporter.step("Check bucket shows all versions"):
versions = s3_client.list_objects_versions(bucket)
obj_versions = {version.get("VersionId") for version in versions if version.get("Key") == obj_key}
assert obj_versions == version_ids, f"Object should have versions: {version_ids}"
with reporter.step("Delete two objects from bucket one by one"):
version_to_delete_b1 = random.sample([version_id_1, version_id_2, version_id_3, version_id_4], k=2)
version_to_save = list(set(version_ids) - set(version_to_delete_b1))
for ver in version_to_delete_b1:
s3_client.delete_object(bucket, obj_key, ver)
with reporter.step("Check bucket shows all versions"):
versions = s3_client.list_objects_versions(bucket)
obj_versions = [version.get("VersionId") for version in versions if version.get("Key") == obj_key]
assert obj_versions.sort() == version_to_save.sort(), f"Object should have versions: {version_to_save}"
@allure.title("Get versions of object (s3_client={s3_client})")
def test_s3_get_versioning(self, s3_client: S3ClientWrapper, bucket: str, simple_object_size: ObjectSize):
version_1_content = "Version 1"
version_2_content = "Version 2"
file_name_simple = generate_file_with_content(simple_object_size.value, content=version_1_content)
obj_key = os.path.basename(file_name_simple)
s3_helper.set_bucket_versioning(s3_client, bucket, VersioningStatus.ENABLED)
with reporter.step("Put several versions of object into bucket"):
version_id_1 = s3_client.put_object(bucket, file_name_simple)
file_name_1 = generate_file_with_content(simple_object_size.value, file_path=file_name_simple, content=version_2_content)
version_id_2 = s3_client.put_object(bucket, file_name_1)
with reporter.step("Get first version of object"):
object_1 = s3_client.get_object(bucket, obj_key, version_id_1, full_output=True)
assert object_1.get("VersionId") == version_id_1, f"Get object with version {version_id_1}"
with reporter.step("Get second version of object"):
object_2 = s3_client.get_object(bucket, obj_key, version_id_2, full_output=True)
assert object_2.get("VersionId") == version_id_2, f"Get object with version {version_id_2}"
with reporter.step("Get object"):
object_3 = s3_client.get_object(bucket, obj_key, full_output=True)
assert object_3.get("VersionId") == version_id_2, f"Get object with version {version_id_2}"
@allure.title("Get range (s3_client={s3_client})")
def test_s3_get_range(
self,
s3_client: S3ClientWrapper,
bucket: str,
complex_object_size: ObjectSize,
simple_object_size: ObjectSize,
):
file_path = generate_file(complex_object_size.value)
file_name = s3_helper.object_key_from_file_path(file_path)
file_hash = get_file_hash(file_path)
s3_helper.set_bucket_versioning(s3_client, bucket, VersioningStatus.ENABLED)
with reporter.step("Put several versions of object into bucket"):
version_id_1 = s3_client.put_object(bucket, file_path)
file_name_1 = generate_file_with_content(simple_object_size.value, file_path=file_path)
version_id_2 = s3_client.put_object(bucket, file_name_1)
with reporter.step("Get first version of object"):
object_1_part_1 = s3_client.get_object(
bucket,
file_name,
version_id_1,
object_range=[0, int(complex_object_size.value / 3)],
)
object_1_part_2 = s3_client.get_object(
bucket,
file_name,
version_id_1,
object_range=[
int(complex_object_size.value / 3) + 1,
2 * int(complex_object_size.value / 3),
],
)
object_1_part_3 = s3_client.get_object(
bucket,
file_name,
version_id_1,
object_range=[
2 * int(complex_object_size.value / 3) + 1,
complex_object_size.value,
],
)
con_file = concat_files([object_1_part_1, object_1_part_2, object_1_part_3])
assert get_file_hash(con_file) == file_hash, "Hashes must be the same"
with reporter.step("Get second version of object"):
object_2_part_1 = s3_client.get_object(
bucket,
file_name,
version_id_2,
object_range=[0, int(simple_object_size.value / 3)],
)
object_2_part_2 = s3_client.get_object(
bucket,
file_name,
version_id_2,
object_range=[
int(simple_object_size.value / 3) + 1,
2 * int(simple_object_size.value / 3),
],
)
object_2_part_3 = s3_client.get_object(
bucket,
file_name,
version_id_2,
object_range=[2 * int(simple_object_size.value / 3) + 1, simple_object_size.value],
)
con_file_1 = concat_files([object_2_part_1, object_2_part_2, object_2_part_3])
assert get_file_hash(con_file_1) == get_file_hash(file_name_1), "Hashes must be the same"
with reporter.step("Get object"):
object_3_part_1 = s3_client.get_object(bucket, file_name, object_range=[0, int(simple_object_size.value / 3)])
object_3_part_2 = s3_client.get_object(
bucket,
file_name,
object_range=[
int(simple_object_size.value / 3) + 1,
2 * int(simple_object_size.value / 3),
],
)
object_3_part_3 = s3_client.get_object(
bucket,
file_name,
object_range=[2 * int(simple_object_size.value / 3) + 1, simple_object_size.value],
)
con_file = concat_files([object_3_part_1, object_3_part_2, object_3_part_3])
assert get_file_hash(con_file) == get_file_hash(file_name_1), "Hashes must be the same"
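# Worked example of the three-range split used above (illustrative only):
# for a 300-byte object the inclusive ranges are [0, 100], [101, 200] and
# [201, 300]; S3 clamps the final range to the object size, so concatenating
# the three downloaded parts reproduces the whole object byte for byte.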
def copy_extend_list(self, original_list: list[str], n: int) -> list[str]:
"""Extend the list with own elements up to n elements"""
multiplier = n // len(original_list)
result_list = original_list.copy()
result_list = result_list * multiplier
for i in range(n - len(result_list)):
result_list.append(result_list[i])
return result_list
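# Illustrative behaviour of the helper above, assuming a 2-element input:
# copy_extend_list(["a", "b"], 5) first repeats the list 5 // 2 = 2 times
# to get ["a", "b", "a", "b"] and then appends items from its own start,
# producing ["a", "b", "a", "b", "a"].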
@allure.title("Bulk deletion is limited to 1000 objects (s3_client={s3_client})")
def test_s3_bulk_deletion_limit(
self,
s3_client: S3ClientWrapper,
bucket: str,
simple_object_size: ObjectSize,
):
objects_in_bucket = []
objects_count = 3
with reporter.step(f"Put {objects_count} into bucket"):
for _ in range(objects_count):
file_path = generate_file(simple_object_size.value)
file_name = s3_helper.object_key_from_file_path(file_path)
objects_in_bucket.append(file_name)
s3_client.put_object(bucket, file_path)
# Extend deletion list to 1001 elements with same keys for test speed
objects_to_delete = self.copy_extend_list(objects_in_bucket, 1001)
with reporter.step("Send delete request with 1001 objects and expect error"):
with pytest.raises(Exception, match=S3_MALFORMED_XML_REQUEST):
s3_client.delete_objects(bucket, objects_to_delete)
with reporter.step("Send delete request with 1000 objects without error"):
with expect_not_raises():
s3_client.delete_objects(bucket, objects_to_delete[:1000])
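# Background for the limit checked above: the S3 DeleteObjects operation
# accepts at most 1000 keys per request, so a 1001-key batch is rejected
# (the error is expected to match S3_MALFORMED_XML_REQUEST) while a
# 1000-key batch succeeds.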
@allure.title("Object head is unloaded with the correct version (s3_client={s3_client})")
@pytest.mark.smoke
def test_s3_head_object(
self,
s3_client: S3ClientWrapper,
bucket: str,
complex_object_size: ObjectSize,
simple_object_size: ObjectSize,
):
object_metadata = {f"{uuid.uuid4()}": f"{uuid.uuid4()}"}
file_path = generate_file(complex_object_size.value)
file_name = s3_helper.object_key_from_file_path(file_path)
s3_helper.set_bucket_versioning(s3_client, bucket, VersioningStatus.ENABLED)
with reporter.step("Put several versions of object into bucket"):
version_id_1 = s3_client.put_object(bucket, file_path, metadata=object_metadata)
file_name_1 = generate_file_with_content(simple_object_size.value, file_path=file_path)
version_id_2 = s3_client.put_object(bucket, file_name_1)
with reporter.step("Get head of first version of object"):
response = s3_client.head_object(bucket, file_name)
assert "LastModified" in response, "Expected LastModified field"
assert "ETag" in response, "Expected ETag field"
assert response.get("Metadata") == {}, "Expected Metadata empty"
assert response.get("VersionId") == version_id_2, f"Expected VersionId is {version_id_2}"
assert response.get("ContentLength") != 0, "Expected ContentLength is not zero"
with reporter.step("Get head ob first version of object"):
response = s3_client.head_object(bucket, file_name, version_id=version_id_1)
assert "LastModified" in response, "Expected LastModified field"
assert "ETag" in response, "Expected ETag field"
assert response.get("Metadata") == object_metadata, f"Expected Metadata is {object_metadata}"
assert response.get("VersionId") == version_id_1, f"Expected VersionId is {version_id_1}"
assert response.get("ContentLength") != 0, "Expected ContentLength is not zero"
@allure.title("List of objects with version (method_version={list_type}, s3_client={s3_client})")
@pytest.mark.parametrize("list_type", ["v1", "v2"])
def test_s3_list_object(
self,
s3_client: S3ClientWrapper,
list_type: str,
bucket: str,
complex_object_size: ObjectSize,
):
file_path_1 = generate_file(complex_object_size.value)
file_name = s3_helper.object_key_from_file_path(file_path_1)
file_path_2 = generate_file(complex_object_size.value)
file_name_2 = s3_helper.object_key_from_file_path(file_path_2)
s3_helper.set_bucket_versioning(s3_client, bucket, VersioningStatus.ENABLED)
with reporter.step("Put several versions of object into bucket"):
s3_client.put_object(bucket, file_path_1)
s3_client.put_object(bucket, file_path_2)
with reporter.step("Get list of object"):
if list_type == "v1":
list_obj = s3_client.list_objects(bucket)
elif list_type == "v2":
list_obj = s3_client.list_objects_v2(bucket)
assert len(list_obj) == 2, "bucket should have 2 objects"
assert sorted(list_obj) == sorted([file_name, file_name_2]), f"bucket should have object keys {file_name, file_name_2}"
with reporter.step("Delete object"):
delete_obj = s3_client.delete_object(bucket, file_name)
if list_type == "v1":
list_obj_1 = s3_client.list_objects(bucket, full_output=True)
elif list_type == "v2":
list_obj_1 = s3_client.list_objects_v2(bucket, full_output=True)
contents = list_obj_1.get("Contents", [])
assert len(contents) == 1, "bucket should have only 1 object"
assert contents[0].get("Key") == file_name_2, f"bucket should have object key {file_name_2}"
assert "DeleteMarker" in delete_obj.keys(), "Expected delete Marker"
@allure.title("Put object (s3_client={s3_client})")
def test_s3_put_object(
self,
s3_client: S3ClientWrapper,
bucket: str,
complex_object_size: ObjectSize,
simple_object_size: ObjectSize,
):
file_path_1 = generate_file(complex_object_size.value)
file_name = s3_helper.object_key_from_file_path(file_path_1)
object_1_metadata = {f"{uuid.uuid4()}": f"{uuid.uuid4()}"}
tag_key_1 = "tag1"
tag_value_1 = uuid.uuid4()
tag_1 = f"{tag_key_1}={tag_value_1}"
object_2_metadata = {f"{uuid.uuid4()}": f"{uuid.uuid4()}"}
tag_key_2 = "tag2"
tag_value_2 = uuid.uuid4()
tag_2 = f"{tag_key_2}={tag_value_2}"
s3_helper.set_bucket_versioning(s3_client, bucket, VersioningStatus.SUSPENDED)
with reporter.step("Put first object into bucket"):
s3_client.put_object(bucket, file_path_1, metadata=object_1_metadata, tagging=tag_1)
obj_head = s3_client.head_object(bucket, file_name)
assert obj_head.get("Metadata") == object_1_metadata, "Metadata must be the same"
got_tags = s3_client.get_object_tagging(bucket, file_name)
assert got_tags, f"Expected tags, got {got_tags}"
assert got_tags == [{"Key": tag_key_1, "Value": str(tag_value_1)}], "Tags must be the same"
with reporter.step("Rewrite file into bucket"):
file_path_2 = generate_file_with_content(simple_object_size.value, file_path=file_path_1)
s3_client.put_object(bucket, file_path_2, metadata=object_2_metadata, tagging=tag_2)
obj_head = s3_client.head_object(bucket, file_name)
assert obj_head.get("Metadata") == object_2_metadata, "Metadata must be the same"
got_tags_1 = s3_client.get_object_tagging(bucket, file_name)
assert got_tags_1, f"Expected tags, got {got_tags_1}"
assert got_tags_1 == [{"Key": tag_key_2, "Value": str(tag_value_2)}], "Tags must be the same"
s3_helper.set_bucket_versioning(s3_client, bucket, VersioningStatus.ENABLED)
file_path_3 = generate_file(complex_object_size.value)
file_hash = get_file_hash(file_path_3)
file_name_3 = s3_helper.object_key_from_file_path(file_path_3)
object_3_metadata = {f"{uuid.uuid4()}": f"{uuid.uuid4()}"}
tag_key_3 = "tag3"
tag_value_3 = uuid.uuid4()
tag_3 = f"{tag_key_3}={tag_value_3}"
with reporter.step("Put third object into bucket"):
version_id_1 = s3_client.put_object(bucket, file_path_3, metadata=object_3_metadata, tagging=tag_3)
obj_head_3 = s3_client.head_object(bucket, file_name_3)
assert obj_head_3.get("Metadata") == object_3_metadata, "Matadata must be the same"
got_tags_3 = s3_client.get_object_tagging(bucket, file_name_3)
assert got_tags_3, f"Expected tags, got {got_tags_3}"
assert got_tags_3 == [{"Key": tag_key_3, "Value": str(tag_value_3)}], "Tags must be the same"
with reporter.step("Put new version of file into bucket"):
file_path_4 = generate_file_with_content(simple_object_size.value, file_path=file_path_3)
version_id_2 = s3_client.put_object(bucket, file_path_4)
versions = s3_client.list_objects_versions(bucket)
obj_versions = {version.get("VersionId") for version in versions if version.get("Key") == file_name_3}
assert obj_versions == {
version_id_1,
version_id_2,
}, f"Object should have versions: {version_id_1, version_id_2}"
got_tags_4 = s3_client.get_object_tagging(bucket, file_name_3)
assert not got_tags_4, "No tags expected"
with reporter.step("Get object"):
object_3 = s3_client.get_object(bucket, file_name_3, full_output=True)
assert object_3.get("VersionId") == version_id_2, f"get object with version {version_id_2}"
object_3 = s3_client.get_object(bucket, file_name_3)
assert get_file_hash(file_path_4) == get_file_hash(object_3), "Hashes must be the same"
with reporter.step("Get first version of object"):
object_4 = s3_client.get_object(bucket, file_name_3, version_id_1, full_output=True)
assert object_4.get("VersionId") == version_id_1, f"get object with version {version_id_1}"
object_4 = s3_client.get_object(bucket, file_name_3, version_id_1)
assert file_hash == get_file_hash(object_4), "Hashes must be the same"
obj_head_3 = s3_client.head_object(bucket, file_name_3, version_id_1)
assert obj_head_3.get("Metadata") == object_3_metadata, "Metadata must be the same"
got_tags_3 = s3_client.get_object_tagging(bucket, file_name_3, version_id_1)
assert got_tags_3, f"Expected tags, got {got_tags_3}"
assert got_tags_3 == [{"Key": tag_key_3, "Value": str(tag_value_3)}], "Tags must be the same"
@allure.title("Put object with ACL (versioning={bucket_versioning}, s3_client={s3_client})")
@pytest.mark.parametrize("bucket_versioning", ["ENABLED", "SUSPENDED"])
def test_s3_put_object_acl(
self,
s3_client: S3ClientWrapper,
bucket_versioning: Literal["ENABLED", "SUSPENDED"],
bucket: str,
complex_object_size: ObjectSize,
simple_object_size: ObjectSize,
second_wallet_public_key: str,
):
file_path = generate_file(complex_object_size.value)
file_name = s3_helper.object_key_from_file_path(file_path)
s3_helper.set_bucket_versioning(s3_client, bucket, VersioningStatus[bucket_versioning])
with reporter.step("Put object with acl private"):
s3_client.put_object(bucket, file_path, acl="private")
object_grants = s3_client.get_object_acl(bucket, file_name)
s3_helper.verify_acl_permissions(object_grants, PRIVATE_GRANTS)
obj = s3_client.get_object(bucket, file_name)
assert get_file_hash(file_path) == get_file_hash(obj), "Hashes must be the same"
with reporter.step("[NEGATIVE] Put object with acl public-read"):
generate_file_with_content(simple_object_size.value, file_path)
with pytest.raises(Exception, match=S3_BUCKET_DOES_NOT_ALLOW_ACL):
s3_client.put_object(bucket, file_path, acl="public-read")
with reporter.step("[NEGATIVE] Put object with acl public-read-write"):
generate_file_with_content(simple_object_size.value, file_path)
with pytest.raises(Exception, match=S3_BUCKET_DOES_NOT_ALLOW_ACL):
s3_client.put_object(bucket, file_path, acl="public-read-write")
with reporter.step("[NEGATIVE] Put object with --grant-full-control id=mycanonicaluserid"):
with pytest.raises(Exception, match=S3_BUCKET_DOES_NOT_ALLOW_ACL):
s3_client.put_object(bucket, file_path, grant_full_control=f"id={second_wallet_public_key}")
with reporter.step("[NEGATIVE] Put object with --grant-read uri=http://acs.amazonaws.com/groups/global/AllUsers"):
with pytest.raises(Exception, match=S3_BUCKET_DOES_NOT_ALLOW_ACL):
s3_client.put_object(bucket, file_path, grant_read="uri=http://acs.amazonaws.com/groups/global/AllUsers")
@allure.title("Put object with lock-mode (s3_client={s3_client})")
def test_s3_put_object_lock_mode(
self,
s3_client: S3ClientWrapper,
complex_object_size: ObjectSize,
simple_object_size: ObjectSize,
):
file_path_1 = generate_file(complex_object_size.value)
file_name = s3_helper.object_key_from_file_path(file_path_1)
bucket = s3_client.create_bucket(object_lock_enabled_for_bucket=True)
s3_helper.set_bucket_versioning(s3_client, bucket, VersioningStatus.ENABLED)
with reporter.step("Put object with lock-mode GOVERNANCE lock-retain-until-date +1day, lock-legal-hold-status"):
date_obj = datetime.utcnow() + timedelta(days=1)
s3_client.put_object(
bucket,
file_path_1,
object_lock_mode="GOVERNANCE",
object_lock_retain_until_date=date_obj.strftime("%Y-%m-%dT%H:%M:%S"),
object_lock_legal_hold_status="OFF",
)
s3_helper.assert_object_lock_mode(s3_client, bucket, file_name, "GOVERNANCE", date_obj, "OFF")
with reporter.step("Put new version of object with [--object-lock-mode COMPLIANCE] и [--object-lock-retain-until-date +3days]"):
date_obj = datetime.utcnow() + timedelta(days=2)
generate_file_with_content(simple_object_size.value, file_path=file_path_1)
s3_client.put_object(
bucket,
file_path_1,
object_lock_mode="COMPLIANCE",
object_lock_retain_until_date=date_obj,
)
s3_helper.assert_object_lock_mode(s3_client, bucket, file_name, "COMPLIANCE", date_obj, "OFF")
with reporter.step("Put new version of object with [--object-lock-mode COMPLIANCE] и [--object-lock-retain-until-date +2days]"):
date_obj = datetime.utcnow() + timedelta(days=3)
generate_file_with_content(simple_object_size.value, file_path=file_path_1)
s3_client.put_object(
bucket,
file_path_1,
object_lock_mode="COMPLIANCE",
object_lock_retain_until_date=date_obj,
object_lock_legal_hold_status="ON",
)
s3_helper.assert_object_lock_mode(s3_client, bucket, file_name, "COMPLIANCE", date_obj, "ON")
with reporter.step("Put object with lock-mode"):
with pytest.raises(
Exception,
match=r".*must both be supplied*",
):
# x-amz-object-lock-retain-until-date and x-amz-object-lock-mode must both be supplied
s3_client.put_object(bucket, file_path_1, object_lock_mode="COMPLIANCE")
with reporter.step("Put object with lock-mode and past date"):
date_obj = datetime.utcnow() - timedelta(days=3)
with pytest.raises(
Exception,
match=r".*until date must be in the future*",
):
# The retain until date must be in the future
s3_client.put_object(
bucket,
file_path_1,
object_lock_mode="COMPLIANCE",
object_lock_retain_until_date=date_obj,
)
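# Background for the lock modes used above: with S3 Object Lock,
# GOVERNANCE retention can be lifted by users holding special permissions,
# COMPLIANCE retention cannot be shortened or removed until the
# retain-until date passes, and legal hold is an independent ON/OFF flag.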
@allure.title("Delete object & delete objects (s3_client={s3_client})")
def test_s3_api_delete(
self,
s3_client: S3ClientWrapper,
two_buckets: list[str],
simple_object_size: ObjectSize,
complex_object_size: ObjectSize,
):
"""
Check delete_object and delete_objects S3 API operation. From first bucket some objects deleted one by one.
From second bucket some objects deleted all at once.
"""
max_obj_count = 20
max_delete_objects = 17
put_objects = []
file_paths = []
obj_sizes = [simple_object_size, complex_object_size]
bucket_1, bucket_2 = two_buckets
with reporter.step(f"Generate {max_obj_count} files"):
for _ in range(max_obj_count):
test_file = generate_file(random.choice(obj_sizes).value)
file_paths.append(test_file)
put_objects.append(s3_helper.object_key_from_file_path(test_file.path))
for i, bucket in enumerate([bucket_1, bucket_2], 1):
with reporter.step(f"Put {max_obj_count} objects into bucket_{i}"):
for file_path in file_paths:
s3_client.put_object(bucket, file_path)
with reporter.step(f"Check all objects put in bucket_{i} successfully"):
bucket_objects = s3_client.list_objects_v2(bucket)
assert set(put_objects) == set(bucket_objects), f"Expected all objects {put_objects} in objects list {bucket_objects}"
with reporter.step("Delete some objects from bucket_1 one by one"):
objects_to_delete_b1 = random.sample(put_objects, k=max_delete_objects)
for obj in objects_to_delete_b1:
s3_client.delete_object(bucket_1, obj)
with reporter.step("Check deleted objects are not visible in bucket bucket_1"):
bucket_objects = s3_client.list_objects_v2(bucket_1)
assert set(put_objects).difference(set(objects_to_delete_b1)) == set(
bucket_objects
), f"Expected only the not deleted objects in the list, got {bucket_objects}"
for object_key in objects_to_delete_b1:
with pytest.raises(Exception, match="The specified key does not exist"):
s3_client.get_object(bucket_1, object_key)
with reporter.step("Delete some objects from bucket_2 at once"):
objects_to_delete_b2 = random.sample(put_objects, k=max_delete_objects)
s3_client.delete_objects(bucket_2, objects_to_delete_b2)
with reporter.step("Check deleted objects are not visible in bucket bucket_2"):
objects_list = s3_client.list_objects_v2(bucket_2)
assert set(put_objects).difference(set(objects_to_delete_b2)) == set(
objects_list
), f"Expected only the not deleted objects in the list, got {objects_list}"
for object_key in objects_to_delete_b2:
with pytest.raises(Exception, match="The specified key does not exist"):
s3_client.get_object(bucket_2, object_key)
@allure.title("Sync directory (sync_type={sync_type}, s3_client={s3_client})")
@pytest.mark.parametrize("s3_client", [AwsCliClient], indirect=True)
@pytest.mark.parametrize("sync_type", ["sync", "cp"])
def test_s3_sync_dir(
self,
s3_client: S3ClientWrapper,
sync_type: Literal["sync", "cp"],
bucket: str,
simple_object_size: ObjectSize,
):
test_file_1 = TestFile(os.path.join(os.getcwd(), ASSETS_DIR, "test_sync", "test_file_1"))
test_file_2 = TestFile(os.path.join(os.getcwd(), ASSETS_DIR, "test_sync", "test_file_2"))
object_metadata = {f"{uuid.uuid4()}": f"{uuid.uuid4()}"}
key_to_path = {"test_file_1": test_file_1.path, "test_file_2": test_file_2.path}
generate_file_with_content(simple_object_size.value, test_file_1)
generate_file_with_content(simple_object_size.value, test_file_2)
s3_helper.set_bucket_versioning(s3_client, bucket, VersioningStatus.ENABLED)
if sync_type == "sync":
s3_client.sync(bucket, os.path.dirname(test_file_1), metadata=object_metadata)
elif sync_type == "cp":
s3_client.cp(bucket, os.path.dirname(test_file_1), metadata=object_metadata)
with reporter.step("Check objects are synced"):
objects = s3_client.list_objects(bucket)
assert set(key_to_path.keys()) == set(objects), f"Expected all abjects saved. Got {objects}"
with reporter.step("Check these are the same objects"):
for obj_key in objects:
got_object = s3_client.get_object(bucket, obj_key)
assert get_file_hash(got_object) == get_file_hash(key_to_path.get(obj_key)), "Expected hashes are the same"
obj_head = s3_client.head_object(bucket, obj_key)
assert obj_head.get("Metadata") == object_metadata, f"Metadata of object is {object_metadata}"
object_grants = s3_client.get_object_acl(bucket, obj_key)
s3_helper.verify_acl_permissions(object_grants, PRIVATE_GRANTS)
@allure.title("Put 10 nested level object (s3_client={s3_client})")
def test_s3_put_10_folder(
self,
s3_client: S3ClientWrapper,
bucket: str,
simple_object_size: ObjectSize,
):
key_characters_sample = string.ascii_letters + string.digits + "._-"
with reporter.step("Put object"):
test_file = generate_file(simple_object_size.value)
obj_key = "/" + "/".join(["".join(random.choices(key_characters_sample, k=5)) for _ in range(10)]) + "/test_file_1"
s3_client.put_object(bucket, test_file, obj_key)
with reporter.step("Check object can be downloaded"):
s3_client.get_object(bucket, obj_key)
with reporter.step("Check object listing"):
s3_helper.check_objects_in_bucket(s3_client, bucket, [obj_key])
@allure.title("Delete non-existing object from empty bucket (s3_client={s3_client})")
def test_s3_delete_non_existing_object(self, s3_client: S3ClientWrapper, bucket: str):
s3_helper.set_bucket_versioning(s3_client, bucket, VersioningStatus.ENABLED)
obj_key = "fake_object_key"
with reporter.step("Delete non-existing object"):
delete_obj = s3_client.delete_object(bucket, obj_key)
assert "DeleteMarker" not in delete_obj.keys(), "Delete markers should not be created"
objects_list = s3_client.list_objects_versions(bucket)
assert not objects_list, f"Expected empty bucket, got {objects_list}"
@allure.title("Delete the same object twice (s3_client={s3_client})")
def test_s3_delete_twice(self, s3_client: S3ClientWrapper, bucket: str, simple_object_size: ObjectSize):
s3_helper.set_bucket_versioning(s3_client, bucket, VersioningStatus.ENABLED)
file_path = generate_file(simple_object_size.value)
file_name = s3_helper.object_key_from_file_path(file_path)
with reporter.step("Put object into one bucket"):
s3_client.put_object(bucket, file_path)
with reporter.step("Delete the object from the bucket"):
delete_object = s3_client.delete_object(bucket, file_name)
versions = s3_client.list_objects_versions(bucket)
obj_versions = {version.get("VersionId") for version in versions if version.get("Key") == file_name}
assert obj_versions, f"Object versions were not found {versions}"
assert "DeleteMarker" in delete_object.keys(), "Delete markers not found"
with reporter.step("Delete the object from the bucket again"):
delete_object_2nd_attempt = s3_client.delete_object(bucket, file_name)
versions_2nd_attempt = s3_client.list_objects_versions(bucket)
assert delete_object.keys() == delete_object_2nd_attempt.keys(), "Delete markers are not the same"
# check that nothing was changed
# checking here not VersionId only, but all data (for example LastModified)
assert versions == versions_2nd_attempt, "Versions are not the same"

@ -0,0 +1,164 @@
import json
import allure
import pytest
from botocore.exceptions import ClientError
from frostfs_testlib import reporter
from frostfs_testlib.s3 import S3ClientWrapper, VersioningStatus
from frostfs_testlib.s3.interfaces import BucketContainerResolver
from frostfs_testlib.steps.s3 import s3_helper
from frostfs_testlib.steps.storage_policy import get_simple_object_copies
from frostfs_testlib.storage.dataclasses.object_size import ObjectSize
from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
from frostfs_testlib.testing.cluster_test_base import ClusterTestBase
from frostfs_testlib.testing.test_control import expect_not_raises
from frostfs_testlib.utils.file_utils import generate_file
from ....resources.common import S3_POLICY_FILE_LOCATION
@pytest.mark.nightly
@pytest.mark.s3_gate
@pytest.mark.parametrize("s3_policy", [S3_POLICY_FILE_LOCATION], indirect=True)
class TestS3GatePolicy(ClusterTestBase):
@allure.title("Bucket creation with retention policy applied (s3_client={s3_client})")
def test_s3_bucket_location(
self,
default_wallet: WalletInfo,
s3_client: S3ClientWrapper,
simple_object_size: ObjectSize,
bucket_container_resolver: BucketContainerResolver,
):
file_path_1 = generate_file(simple_object_size.value)
file_name_1 = s3_helper.object_key_from_file_path(file_path_1)
file_path_2 = generate_file(simple_object_size.value)
file_name_2 = s3_helper.object_key_from_file_path(file_path_2)
with reporter.step("Create two buckets with different bucket configuration"):
bucket_1 = s3_client.create_bucket(location_constraint="complex")
s3_helper.set_bucket_versioning(s3_client, bucket_1, VersioningStatus.ENABLED)
bucket_2 = s3_client.create_bucket(location_constraint="rep-3")
s3_helper.set_bucket_versioning(s3_client, bucket_2, VersioningStatus.ENABLED)
list_buckets = s3_client.list_buckets()
assert bucket_1 in list_buckets and bucket_2 in list_buckets, f"Expected two buckets {bucket_1, bucket_2}, got {list_buckets}"
with reporter.step("Check head buckets"):
with expect_not_raises():
s3_client.head_bucket(bucket_1)
s3_client.head_bucket(bucket_2)
with reporter.step("Put objects into buckets"):
version_id_1 = s3_client.put_object(bucket_1, file_path_1)
version_id_2 = s3_client.put_object(bucket_2, file_path_2)
s3_helper.check_objects_in_bucket(s3_client, bucket_1, [file_name_1])
s3_helper.check_objects_in_bucket(s3_client, bucket_2, [file_name_2])
with reporter.step("Check bucket location"):
bucket_loc_1 = s3_client.get_bucket_location(bucket_1)
bucket_loc_2 = s3_client.get_bucket_location(bucket_2)
assert bucket_loc_1 == "complex"
assert bucket_loc_2 == "rep-3"
with reporter.step("Check object policy"):
for cluster_node in self.cluster.cluster_nodes:
cid_1 = bucket_container_resolver.resolve(cluster_node, bucket_1)
if cid_1:
break
copies_1 = get_simple_object_copies(
wallet=default_wallet,
cid=cid_1,
oid=version_id_1,
shell=self.shell,
nodes=self.cluster.storage_nodes,
)
assert copies_1 == 1
for cluster_node in self.cluster.cluster_nodes:
cid_2 = bucket_container_resolver.resolve(cluster_node, bucket_2)
if cid_2:
break
copies_2 = get_simple_object_copies(
wallet=default_wallet,
cid=cid_2,
oid=version_id_2,
shell=self.shell,
nodes=self.cluster.storage_nodes,
)
assert copies_2 == 3
@allure.title("Bucket with unexisting location constraint (s3_client={s3_client})")
def test_s3_bucket_wrong_location(self, s3_client: S3ClientWrapper):
with reporter.step("Create bucket with unenxisting location constraint policy"):
with pytest.raises(Exception):
s3_client.create_bucket(location_constraint="UNEXISTING LOCATION CONSTRAINT")
@allure.title("Bucket policy (s3_client={s3_client})")
def test_s3_bucket_policy(self, s3_client: S3ClientWrapper, bucket: str):
with reporter.step("Create bucket"):
s3_helper.set_bucket_versioning(s3_client, bucket, VersioningStatus.ENABLED)
with reporter.step("GetBucketPolicy"):
with pytest.raises((RuntimeError, ClientError)):
s3_client.get_bucket_policy(bucket)
with reporter.step("Put new policy"):
custom_policy = {
"Version": "2012-10-17",
"Id": "aaaa-bbbb-cccc-dddd",
"Statement": [
{
"Sid": "AddPerm",
"Effect": "Allow",
"Principal": "*",
"Action": ["s3:GetObject"],
"Resource": [f"arn:aws:s3:::{bucket}/*"],
}
],
}
s3_client.put_bucket_policy(bucket, custom_policy)
with reporter.step("GetBucketPolicy"):
returned_policy = json.loads(s3_client.get_bucket_policy(bucket))
assert returned_policy == custom_policy, "Wrong policy was received"
with reporter.step("Delete the policy"):
s3_client.delete_bucket_policy(bucket)
with reporter.step("GetBucketPolicy"):
with pytest.raises((RuntimeError, ClientError)):
s3_client.get_bucket_policy(bucket)
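# Note: get_bucket_policy returns the policy document as a JSON string
# (hence the json.loads above), and requesting a policy from a bucket
# that has none surfaces as the client error expected in this step.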
@allure.title("Bucket CORS (s3_client={s3_client})")
def test_s3_cors(self, s3_client: S3ClientWrapper, bucket: str):
with reporter.step("Create bucket without cors"):
s3_helper.set_bucket_versioning(s3_client, bucket, VersioningStatus.ENABLED)
with pytest.raises(Exception):
bucket_cors = s3_client.get_bucket_cors(bucket)
with reporter.step("Put bucket cors"):
cors = {
"CORSRules": [
{
"AllowedOrigins": ["http://www.example.com"],
"AllowedHeaders": ["*"],
"AllowedMethods": ["PUT", "POST", "DELETE"],
"MaxAgeSeconds": 3000,
"ExposeHeaders": ["x-amz-server-side-encryption"],
},
{
"AllowedOrigins": ["*"],
"AllowedHeaders": ["Authorization"],
"AllowedMethods": ["GET"],
"MaxAgeSeconds": 3000,
},
]
}
s3_client.put_bucket_cors(bucket, cors)
bucket_cors = s3_client.get_bucket_cors(bucket)
assert bucket_cors == cors.get("CORSRules"), f"Expected CORSRules must be {cors.get('CORSRules')}"
with reporter.step("delete bucket cors"):
s3_client.delete_bucket_cors(bucket)
with pytest.raises(Exception):
bucket_cors = s3_client.get_bucket_cors(bucket)

@ -0,0 +1,104 @@
from random import choice
from string import ascii_letters
import allure
import pytest
from frostfs_testlib import reporter
from frostfs_testlib.s3 import S3ClientWrapper
from frostfs_testlib.steps.s3 import s3_helper
from frostfs_testlib.storage.dataclasses.object_size import ObjectSize
from frostfs_testlib.utils.file_utils import generate_file
@pytest.mark.nightly
@pytest.mark.s3_gate
@pytest.mark.s3_gate_tagging
class TestS3GateTagging:
@staticmethod
def create_tags(count: int) -> list[tuple[str, str]]:
tags = []
for _ in range(count):
tag_key = "".join(choice(ascii_letters) for _ in range(8))
tag_value = "".join(choice(ascii_letters) for _ in range(12))
tags.append((tag_key, tag_value))
return tags
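# Illustrative call (keys and values are random ASCII letters, so the
# exact strings below are made up):
# create_tags(2) -> [("kQzPbWcA", "tRnLsVqXyEjD"), ("MhGfUoIe", "aZxCvBnMqWer")]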
@allure.title("Object tagging (s3_client={s3_client})")
def test_s3_object_tagging(self, s3_client: S3ClientWrapper, bucket: str, simple_object_size: ObjectSize):
file_path = generate_file(simple_object_size.value)
file_name = s3_helper.object_key_from_file_path(file_path)
with reporter.step("Put with 3 tags object into bucket"):
tag_1 = "Tag1=Value1"
s3_client.put_object(bucket, file_path, tagging=tag_1)
got_tags = s3_client.get_object_tagging(bucket, file_name)
assert got_tags, f"Expected tags, got {got_tags}"
assert got_tags == [{"Key": "Tag1", "Value": "Value1"}], "Tags must be the same"
with reporter.step("Put 10 new tags for object"):
tags_2 = self.create_tags(10)
s3_client.put_object_tagging(bucket, file_name, tags=tags_2)
s3_helper.check_tags_by_object(s3_client, bucket, file_name, tags_2, [("Tag1", "Value1")])
with reporter.step("Put 10 extra new tags for object"):
tags_3 = self.create_tags(10)
s3_client.put_object_tagging(bucket, file_name, tags=tags_3)
s3_helper.check_tags_by_object(s3_client, bucket, file_name, tags_3, tags_2)
with reporter.step("Copy one object with tag"):
copy_obj_path_1 = s3_client.copy_object(bucket, file_name, tagging_directive="COPY")
s3_helper.check_tags_by_object(s3_client, bucket, copy_obj_path_1, tags_3, tags_2)
with reporter.step("Put 11 new tags to object and expect an error"):
tags_4 = self.create_tags(11)
with pytest.raises(Exception, match=r".*Object tags cannot be greater than 10*"):
# An error occurred (BadRequest) when calling the PutObjectTagging operation: Object tags cannot be greater than 10
s3_client.put_object_tagging(bucket, file_name, tags=tags_4)
with reporter.step("Put empty tag"):
tags_5 = []
s3_client.put_object_tagging(bucket, file_name, tags=tags_5)
s3_helper.check_tags_by_object(s3_client, bucket, file_name, [])
with reporter.step("Put 10 object tags"):
tags_6 = self.create_tags(10)
s3_client.put_object_tagging(bucket, file_name, tags=tags_6)
s3_helper.check_tags_by_object(s3_client, bucket, file_name, tags_6)
with reporter.step("Delete tags by delete-object-tagging"):
s3_client.delete_object_tagging(bucket, file_name)
s3_helper.check_tags_by_object(s3_client, bucket, file_name, [])
@allure.title("Bucket tagging (s3_client={s3_client})")
def test_s3_bucket_tagging(self, s3_client: S3ClientWrapper, bucket: str):
with reporter.step("Put 10 bucket tags"):
tags_1 = self.create_tags(10)
s3_client.put_bucket_tagging(bucket, tags_1)
s3_helper.check_tags_by_bucket(s3_client, bucket, tags_1)
with reporter.step("Put new 10 bucket tags"):
tags_2 = self.create_tags(10)
s3_client.put_bucket_tagging(bucket, tags_2)
s3_helper.check_tags_by_bucket(s3_client, bucket, tags_2, tags_1)
with reporter.step("Put 11 new tags to bucket and expect an error"):
tags_3 = self.create_tags(11)
with pytest.raises(Exception, match=r".*Object tags cannot be greater than 10.*"):
# An error occurred (BadRequest) when calling the PutBucketTagging operation (reached max retries: 0): Object tags cannot be greater than 10
s3_client.put_bucket_tagging(bucket, tags_3)
with reporter.step("Put empty tag"):
tags_4 = []
s3_client.put_bucket_tagging(bucket, tags_4)
s3_helper.check_tags_by_bucket(s3_client, bucket, tags_4)
with reporter.step("Put new 10 bucket tags"):
tags_5 = self.create_tags(10)
s3_client.put_bucket_tagging(bucket, tags_5)
s3_helper.check_tags_by_bucket(s3_client, bucket, tags_5, tags_2)
with reporter.step("Delete tags by delete-bucket-tagging"):
s3_client.delete_bucket_tagging(bucket)
s3_helper.check_tags_by_bucket(s3_client, bucket, [])

@ -0,0 +1,125 @@
import os
import allure
import pytest
from frostfs_testlib import reporter
from frostfs_testlib.s3 import S3ClientWrapper, VersioningStatus
from frostfs_testlib.steps.s3 import s3_helper
from frostfs_testlib.storage.dataclasses.object_size import ObjectSize
from frostfs_testlib.utils.file_utils import generate_file, generate_file_with_content, get_file_content
@pytest.mark.nightly
@pytest.mark.s3_gate
@pytest.mark.s3_gate_versioning
class TestS3GateVersioning:
@allure.title("Impossible to disable versioning with object_lock (s3_client={s3_client})")
def test_s3_version_off(self, s3_client: S3ClientWrapper):
bucket = s3_client.create_bucket(object_lock_enabled_for_bucket=True)
with pytest.raises(Exception):
s3_helper.set_bucket_versioning(s3_client, bucket, VersioningStatus.SUSPENDED)
@allure.title("Object versioning (s3_client={s3_client})")
def test_s3_api_versioning(self, s3_client: S3ClientWrapper, bucket: str, simple_object_size: ObjectSize):
"""
Test checks basic versioning functionality for S3 bucket.
"""
version_1_content = "Version 1"
version_2_content = "Version 2"
file_name_simple = generate_file_with_content(simple_object_size.value, content=version_1_content)
obj_key = os.path.basename(file_name_simple)
s3_helper.set_bucket_versioning(s3_client, bucket, VersioningStatus.ENABLED)
with reporter.step("Put several versions of object into bucket"):
version_id_1 = s3_client.put_object(bucket, file_name_simple)
generate_file_with_content(simple_object_size.value, file_path=file_name_simple, content=version_2_content)
version_id_2 = s3_client.put_object(bucket, file_name_simple)
with reporter.step("Check bucket shows all versions"):
versions = s3_client.list_objects_versions(bucket)
obj_versions = {version.get("VersionId") for version in versions if version.get("Key") == obj_key}
assert obj_versions == {
version_id_1,
version_id_2,
}, f"Expected object has versions: {version_id_1, version_id_2}"
with reporter.step("Show information about particular version"):
for version_id in (version_id_1, version_id_2):
response = s3_client.head_object(bucket, obj_key, version_id=version_id)
assert "LastModified" in response, "Expected LastModified field"
assert "ETag" in response, "Expected ETag field"
assert response.get("VersionId") == version_id, f"Expected VersionId is {version_id}"
assert response.get("ContentLength") != 0, "Expected ContentLength is not zero"
with reporter.step("Check object's attributes"):
for version_id in (version_id_1, version_id_2):
got_attrs = s3_client.get_object_attributes(bucket, obj_key, ["ETag"], version_id=version_id)
if got_attrs:
assert got_attrs.get("VersionId") == version_id, f"Expected VersionId is {version_id}"
with reporter.step("Delete object and check it was deleted"):
response = s3_client.delete_object(bucket, obj_key)
version_id_delete = response.get("VersionId")
with pytest.raises(Exception, match=r".*Not Found.*"):
s3_client.head_object(bucket, obj_key)
with reporter.step("Get content for all versions and check it is correct"):
for version, content in (
(version_id_2, version_2_content),
(version_id_1, version_1_content),
):
file_name = s3_client.get_object(bucket, obj_key, version_id=version)
got_content = get_file_content(file_name)
assert got_content == content, f"Expected object content is\n{content}\nGot\n{got_content}"
with reporter.step("Restore previous object version"):
s3_client.delete_object(bucket, obj_key, version_id=version_id_delete)
file_name = s3_client.get_object(bucket, obj_key)
got_content = get_file_content(file_name)
assert got_content == version_2_content, f"Expected object content is\n{version_2_content}\nGot\n{got_content}"
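# The restore step above relies on standard S3 versioning semantics:
# a delete without a version id only adds a delete marker, and deleting
# that marker by its version id makes the previous version current again.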
@allure.title("Enable and disable versioning without object_lock (s3_client={s3_client})")
def test_s3_version(self, s3_client: S3ClientWrapper, simple_object_size: ObjectSize):
file_path = generate_file(simple_object_size.value)
file_name = s3_helper.object_key_from_file_path(file_path)
bucket_objects = [file_name]
bucket = s3_client.create_bucket(object_lock_enabled_for_bucket=False)
s3_helper.set_bucket_versioning(s3_client, bucket, VersioningStatus.SUSPENDED)
with reporter.step("Put object into bucket"):
s3_client.put_object(bucket, file_path)
objects_list = s3_client.list_objects(bucket)
assert objects_list == bucket_objects, f"Expected list with single objects in bucket, got {objects_list}"
object_version = s3_client.list_objects_versions(bucket)
actual_version = [version.get("VersionId") for version in object_version if version.get("Key") == file_name]
assert actual_version == ["null"], f"Expected version is null in list-object-versions, got {object_version}"
object_0 = s3_client.head_object(bucket, file_name)
assert object_0.get("VersionId") == "null", f"Expected version is null in head-object, got {object_0.get('VersionId')}"
s3_helper.set_bucket_versioning(s3_client, bucket, VersioningStatus.ENABLED)
with reporter.step("Put several versions of object into bucket"):
version_id_1 = s3_client.put_object(bucket, file_path)
file_name_1 = generate_file_with_content(simple_object_size.value, file_path=file_path)
version_id_2 = s3_client.put_object(bucket, file_name_1)
with reporter.step("Check bucket shows all versions"):
versions = s3_client.list_objects_versions(bucket)
obj_versions = [version.get("VersionId") for version in versions if version.get("Key") == file_name]
assert sorted(obj_versions) == sorted(
[version_id_1, version_id_2, "null"]
), f"Expected object to have versions: {version_id_1, version_id_2, 'null'}"
with reporter.step("Get object"):
object_1 = s3_client.get_object(bucket, file_name, full_output=True)
assert object_1.get("VersionId") == version_id_2, f"Get object with version {version_id_2}"
with reporter.step("Get first version of object"):
object_2 = s3_client.get_object(bucket, file_name, version_id_1, full_output=True)
assert object_2.get("VersionId") == version_id_1, f"Get object with version {version_id_1}"
with reporter.step("Get second version of object"):
object_3 = s3_client.get_object(bucket, file_name, version_id_2, full_output=True)
assert object_3.get("VersionId") == version_id_2, f"Get object with version {version_id_2}"

@ -0,0 +1,42 @@
import logging
from re import fullmatch
import allure
import pytest
from frostfs_testlib import reporter
from frostfs_testlib.hosting import Hosting
from frostfs_testlib.utils.version_utils import get_remote_binaries_versions
logger = logging.getLogger("NeoLogger")
VERSION_REGEX = r"^([a-zA-Z0-9]*/)?\d+\.\d+\.\d+(-.*)?(?<!dirty)"
VERSION_ERROR_MSG = "{name} [{host}]: Actual version doesn't conform to format '0.0.0-000-aaaaaaa': {version}"
def _check_version_format(version):
return fullmatch(VERSION_REGEX, version)
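# Illustrative matches for VERSION_REGEX: "0.38.5" and
# "s3/0.30.0-1-g1a2b3c4" conform, while any version string ending in
# "dirty" (an uncommitted build) is rejected by the (?<!dirty) lookbehind.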
@allure.title("Check binaries versions")
@pytest.mark.check_binaries
def test_binaries_versions(hosting: Hosting):
"""
Compare binaries versions from external source (url) and deployed on servers.
"""
with reporter.step("Get binaries versions from servers"):
versions_by_host = get_remote_binaries_versions(hosting)
exceptions = []
last_host, versions_on_last_host = versions_by_host.popitem()
for name, version in versions_on_last_host.items():
for host, versions_on_host in versions_by_host.items():
if versions_on_host[name] != version:
exceptions.append(f"Binary of {name} has inconsistent version {versions_on_host[name]} on host {host}")
if not _check_version_format(versions_on_host[name]):
exceptions.append(VERSION_ERROR_MSG.format(name=name, host=host, version=versions_on_host[name]))
if not _check_version_format(version):
exceptions.append(VERSION_ERROR_MSG.format(name=name, host=last_host, version=version))
assert not exceptions, "\n".join(exceptions)

@ -1,251 +0,0 @@
import logging
import os
from random import choice
from time import sleep
import allure
import pytest
from common import COMPLEX_OBJ_SIZE
from container import create_container
from epoch import get_epoch, tick_epoch
from python_keywords.http_gate import (get_via_http_curl, get_via_http_gate,
get_via_http_gate_by_attribute, get_via_zip_http_gate,
upload_via_http_gate, upload_via_http_gate_curl)
from python_keywords.neofs_verbs import get_object, put_object
from python_keywords.storage_policy import get_nodes_without_object
from python_keywords.utility_keywords import generate_file, get_file_hash
from wellknown_acl import PUBLIC_ACL
logger = logging.getLogger('NeoLogger')
CLEANUP_TIMEOUT = 10
@allure.link('https://github.com/nspcc-dev/neofs-http-gw#neofs-http-gateway', name='neofs-http-gateway')
@allure.link('https://github.com/nspcc-dev/neofs-http-gw#uploading', name='uploading')
@allure.link('https://github.com/nspcc-dev/neofs-http-gw#downloading', name='downloading')
@pytest.mark.http_gate
class TestHttpGate:
PLACEMENT_RULE = "REP 1 IN X CBF 1 SELECT 1 FROM * AS X"
@pytest.fixture(scope="class", autouse=True)
@allure.title('[Class/Autouse]: Prepare wallet and deposit')
def prepare_wallet(self, prepare_wallet_and_deposit):
TestHttpGate.wallet = prepare_wallet_and_deposit
@allure.title('Test Put over gRPC, Get over HTTP')
def test_put_grpc_get_http(self):
"""
Test that object can be put using gRPC interface and get using HTTP.
Steps:
1. Create simple and large objects.
2. Put objects using gRPC (neofs-cli).
3. Download objects using HTTP gate (https://github.com/nspcc-dev/neofs-http-gw#downloading).
4. Get objects using gRPC (neofs-cli).
5. Compare hashes for got objects.
6. Compare hashes for got and original objects.
Expected result:
Hashes must be the same.
"""
cid = create_container(self.wallet, rule=self.PLACEMENT_RULE, basic_acl=PUBLIC_ACL)
file_path_simple, file_path_large = generate_file(), generate_file(COMPLEX_OBJ_SIZE)
with allure.step('Put objects using gRPC'):
oid_simple = put_object(wallet=self.wallet, path=file_path_simple, cid=cid)
oid_large = put_object(wallet=self.wallet, path=file_path_large, cid=cid)
for oid, file_path in ((oid_simple, file_path_simple), (oid_large, file_path_large)):
self.get_object_and_verify_hashes(oid, file_path, self.wallet, cid)
@allure.link('https://github.com/nspcc-dev/neofs-http-gw#uploading', name='uploading')
@allure.link('https://github.com/nspcc-dev/neofs-http-gw#downloading', name='downloading')
@pytest.mark.sanity
@allure.title('Test Put over HTTP, Get over HTTP')
def test_put_http_get_http(self):
"""
Test that object can be put and get using HTTP interface.
Steps:
1. Create simple and large objects.
2. Upload objects using HTTP (https://github.com/nspcc-dev/neofs-http-gw#uploading).
3. Download objects using HTTP gate (https://github.com/nspcc-dev/neofs-http-gw#downloading).
4. Compare hashes for got and original objects.
Expected result:
Hashes must be the same.
"""
cid = create_container(self.wallet, rule=self.PLACEMENT_RULE, basic_acl=PUBLIC_ACL)
file_path_simple, file_path_large = generate_file(), generate_file(COMPLEX_OBJ_SIZE)
with allure.step('Put objects using HTTP'):
oid_simple = upload_via_http_gate(cid=cid, path=file_path_simple)
oid_large = upload_via_http_gate(cid=cid, path=file_path_large)
for oid, file_path in ((oid_simple, file_path_simple), (oid_large, file_path_large)):
self.get_object_and_verify_hashes(oid, file_path, self.wallet, cid)
@allure.link('https://github.com/nspcc-dev/neofs-http-gw#by-attributes', name='download by attributes')
@allure.title('Test Put over HTTP, Get over HTTP with headers')
@pytest.mark.parametrize(
'attributes',
[
{'fileName': 'simple_obj_filename'},
{'file-Name': 'simple obj filename'},
{'cat%jpeg': 'cat%jpeg'}
],
ids=['simple', 'hyphen', 'percent']
)
def test_put_http_get_http_with_headers(self, attributes: dict):
"""
Test that object can be downloaded using different attributes in HTTP header.
Steps:
1. Create simple and large objects.
2. Upload objects using HTTP with particular attributes in the header.
3. Download objects by attributes using HTTP gate (https://github.com/nspcc-dev/neofs-http-gw#by-attributes).
4. Compare hashes of the downloaded and original objects.
Expected result:
Hashes must be the same.
"""
cid = create_container(self.wallet, rule=self.PLACEMENT_RULE, basic_acl=PUBLIC_ACL)
file_path = generate_file()
with allure.step('Put objects using HTTP with attribute'):
headers = self._attr_into_header(attributes)
oid = upload_via_http_gate(cid=cid, path=file_path, headers=headers)
self.get_object_by_attr_and_verify_hashes(oid, file_path, cid, attributes)
@allure.title('Test Expiration-Epoch in HTTP header')
def test_expiration_epoch_in_http(self):
cid = create_container(self.wallet, rule=self.PLACEMENT_RULE, basic_acl=PUBLIC_ACL)
file_path = generate_file()
object_not_found_err = 'object not found'
oids = []
curr_epoch = get_epoch()
epochs = (curr_epoch, curr_epoch + 1, curr_epoch + 2, curr_epoch + 100)
for epoch in epochs:
headers = {'X-Attribute-Neofs-Expiration-Epoch': str(epoch)}
with allure.step('Put objects using HTTP with attribute Expiration-Epoch'):
oids.append(upload_via_http_gate(cid=cid, path=file_path, headers=headers))
assert len(oids) == len(epochs), 'Expected all objects have been put successfully'
with allure.step('All objects can be retrieved'):
for oid in oids:
get_via_http_gate(cid=cid, oid=oid)
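# Tick the epoch twice; after each tick one more object's expiration epoch has passed, so it should disappear while the rest remain readable.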
for expired_objects, not_expired_objects in [(oids[:1], oids[1:]), (oids[:2], oids[2:])]:
tick_epoch()
sleep(CLEANUP_TIMEOUT)
for oid in expired_objects:
self.try_to_get_object_and_expect_error(
cid=cid,
oid=oid,
expected_err=object_not_found_err
)
with allure.step('Other objects can be retrieved'):
for oid in not_expired_objects:
get_via_http_gate(cid=cid, oid=oid)
@allure.title('Test Zip in HTTP header')
def test_zip_in_http(self):
cid = create_container(self.wallet, rule=self.PLACEMENT_RULE, basic_acl=PUBLIC_ACL)
file_path_simple, file_path_large = generate_file(), generate_file(COMPLEX_OBJ_SIZE)
common_prefix = 'my_files'
headers1 = {'X-Attribute-FilePath': f'{common_prefix}/file1'}
headers2 = {'X-Attribute-FilePath': f'{common_prefix}/file2'}
upload_via_http_gate(cid=cid, path=file_path_simple, headers=headers1)
upload_via_http_gate(cid=cid, path=file_path_large, headers=headers2)
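# Download all objects whose FilePath starts with the common prefix as one ZIP archive; the helper returns the unpacked directory.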
dir_path = get_via_zip_http_gate(cid=cid, prefix=common_prefix)
with allure.step('Verify hashes'):
assert get_file_hash(f'{dir_path}/file1') == get_file_hash(file_path_simple)
assert get_file_hash(f'{dir_path}/file2') == get_file_hash(file_path_large)
@pytest.mark.curl
@pytest.mark.long
@allure.title('Test Put over HTTP/Curl, Get over HTTP/Curl for large object')
def test_put_http_get_http_large_file(self):
"""
This test checks upload and download of a 'large' object using curl. Here a large object is one with size up to 20 MB.
"""
cid = create_container(self.wallet, rule=self.PLACEMENT_RULE, basic_acl=PUBLIC_ACL)
obj_size = int(os.getenv('BIG_OBJ_SIZE', COMPLEX_OBJ_SIZE))
file_path = generate_file(obj_size)
with allure.step('Put objects using HTTP'):
oid_gate = upload_via_http_gate(cid=cid, path=file_path)
oid_curl = upload_via_http_gate_curl(cid=cid, filepath=file_path, large_object=True)
self.get_object_and_verify_hashes(oid_gate, file_path, self.wallet, cid)
self.get_object_and_verify_hashes(oid_curl, file_path, self.wallet, cid, get_via_http_curl)
@pytest.mark.curl
@allure.title('Test Put/Get over HTTP using Curl utility')
def test_put_http_get_http_curl(self):
"""
Test checks upload and download over HTTP using curl utility.
"""
cid = create_container(self.wallet, rule=self.PLACEMENT_RULE, basic_acl=PUBLIC_ACL)
file_path_simple, file_path_large = generate_file(), generate_file(COMPLEX_OBJ_SIZE)
with allure.step('Put objects using curl utility'):
oid_simple = upload_via_http_gate_curl(cid=cid, filepath=file_path_simple)
oid_large = upload_via_http_gate_curl(cid=cid, filepath=file_path_large)
for oid, file_path in ((oid_simple, file_path_simple), (oid_large, file_path_large)):
self.get_object_and_verify_hashes(oid, file_path, self.wallet, cid, get_via_http_curl)
@staticmethod
@allure.step('Try to get object and expect error')
def try_to_get_object_and_expect_error(cid: str, oid: str, expected_err: str):
try:
get_via_http_gate(cid=cid, oid=oid)
raise AssertionError(f'Expected error on getting object {oid} from container {cid}')
except Exception as err:
assert expected_err in str(err), f'Expected error {expected_err} in {err}'
@staticmethod
@allure.step('Verify object can be retrieved using HTTP header attribute')
def get_object_by_attr_and_verify_hashes(oid: str, file_name: str, cid: str, attrs: dict):
got_file_path_http = get_via_http_gate(cid=cid, oid=oid)
got_file_path_http_attr = get_via_http_gate_by_attribute(cid=cid, attribute=attrs)
TestHttpGate._assert_hashes_are_equal(file_name, got_file_path_http, got_file_path_http_attr)
@staticmethod
@allure.step('Verify object can be retrieved using HTTP')
def get_object_and_verify_hashes(oid: str, file_name: str, wallet: str, cid: str, object_getter=None):
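# Fetch via gRPC from a node that does not store the object (forcing a network read) and via the HTTP gate, then compare both with the original file.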
nodes = get_nodes_without_object(wallet=wallet, cid=cid, oid=oid)
random_node = choice(nodes)
object_getter = object_getter or get_via_http_gate
got_file_path = get_object(wallet=wallet, cid=cid, oid=oid, endpoint=random_node)
got_file_path_http = object_getter(cid=cid, oid=oid)
TestHttpGate._assert_hashes_are_equal(file_name, got_file_path, got_file_path_http)
@staticmethod
def _assert_hashes_are_equal(orig_file_name: str, got_file_1: str, got_file_2: str):
msg = 'Expected hashes are equal for files {f1} and {f2}'
got_file_hash_http = get_file_hash(got_file_1)
assert get_file_hash(got_file_2) == got_file_hash_http, msg.format(f1=got_file_2, f2=got_file_1)
assert get_file_hash(orig_file_name) == got_file_hash_http, msg.format(f1=orig_file_name, f2=got_file_1)
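# The HTTP gate exposes user attributes as headers with the X-Attribute- prefix, e.g. {'fileName': 'cat'} becomes {'X-Attribute-fileName': 'cat'}.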
@staticmethod
def _attr_into_header(attrs: dict) -> dict:
return {f'X-Attribute-{_key}': _value for _key, _value in attrs.items()}


@@ -1,523 +0,0 @@
import logging
import os
from random import choice, choices
import allure
import pytest
from common import ASSETS_DIR, COMPLEX_OBJ_SIZE, SIMPLE_OBJ_SIZE
from epoch import tick_epoch
from python_keywords import s3_gate_bucket, s3_gate_object
from python_keywords.aws_cli_client import AwsCliClient
from python_keywords.container import list_containers
from python_keywords.utility_keywords import (generate_file, generate_file_and_file_hash,
get_file_hash)
from utility import create_file_with_content, get_file_content, split_file
logger = logging.getLogger('NeoLogger')
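# Run every test in this module with both S3 client implementations: the AWS CLI wrapper and boto3.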
def pytest_generate_tests(metafunc):
if "s3_client" in metafunc.fixturenames:
metafunc.parametrize("s3_client", ['aws cli', 'boto3'], indirect=True)
@allure.link('https://github.com/nspcc-dev/neofs-s3-gw#neofs-s3-gateway', name='neofs-s3-gateway')
@pytest.mark.s3_gate
class TestS3Gate:
s3_client = None
@pytest.fixture(scope='class', autouse=True)
@allure.title('[Class/Autouse]: Create S3 client')
def s3_client(self, prepare_wallet_and_deposit, request):
wallet = prepare_wallet_and_deposit
s3_bearer_rules_file = f"{os.getcwd()}/robot/resources/files/s3_bearer_rules.json"
cid, bucket, access_key_id, secret_access_key, owner_private_key = \
s3_gate_bucket.init_s3_credentials(wallet, s3_bearer_rules_file=s3_bearer_rules_file)
containers_list = list_containers(wallet)
assert cid in containers_list, f'Expected cid {cid} in {containers_list}'
if request.param == 'aws cli':
try:
client = AwsCliClient(access_key_id, secret_access_key)
except Exception as err:
if 'command was not found or was not executable' in str(err):
pytest.skip('AWS CLI was not found')
else:
raise RuntimeError('Error on creating instance for AwsCliClient') from err
else:
client = s3_gate_bucket.config_s3_client(access_key_id, secret_access_key)
TestS3Gate.s3_client = client
@pytest.fixture
@allure.title('Create two buckets')
def create_buckets(self):
bucket_1 = s3_gate_bucket.create_bucket_s3(self.s3_client)
bucket_2 = s3_gate_bucket.create_bucket_s3(self.s3_client)
return bucket_1, bucket_2
@pytest.fixture
@allure.title('Create/delete bucket')
def bucket(self):
bucket = s3_gate_bucket.create_bucket_s3(self.s3_client)
yield bucket
objects = s3_gate_object.list_objects_s3(self.s3_client, bucket)
if objects:
s3_gate_object.delete_objects_s3(self.s3_client, bucket, objects)
s3_gate_bucket.delete_bucket_s3(self.s3_client, bucket)
@allure.title('Test S3 Bucket API')
def test_s3_buckets(self):
"""
Test base S3 Bucket API (Create/List/Head/Delete).
"""
file_path = generate_file()
file_name = self.object_key_from_file_path(file_path)
with allure.step('Create buckets'):
bucket_1 = s3_gate_bucket.create_bucket_s3(self.s3_client)
bucket_2 = s3_gate_bucket.create_bucket_s3(self.s3_client)
with allure.step('Check buckets are present in the system'):
buckets = s3_gate_bucket.list_buckets_s3(self.s3_client)
assert bucket_1 in buckets, f'Expected bucket {bucket_1} is in the list'
assert bucket_2 in buckets, f'Expected bucket {bucket_2} is in the list'
with allure.step('Bucket must be empty'):
for bucket in (bucket_1, bucket_2):
objects_list = s3_gate_object.list_objects_s3(self.s3_client, bucket)
assert not objects_list, f'Expected empty bucket, got {objects_list}'
with allure.step('Check buckets are visible with S3 head command'):
s3_gate_bucket.head_bucket(self.s3_client, bucket_1)
s3_gate_bucket.head_bucket(self.s3_client, bucket_2)
with allure.step('Check we can put/list object with S3 commands'):
s3_gate_object.put_object_s3(self.s3_client, bucket_1, file_path)
s3_gate_object.head_object_s3(self.s3_client, bucket_1, file_name)
bucket_objects = s3_gate_object.list_objects_s3(self.s3_client, bucket_1)
assert file_name in bucket_objects, \
f'Expected file {file_name} in objects list {bucket_objects}'
with allure.step('Try to delete non-empty bucket and get error'):
with pytest.raises(Exception, match=r'.*The bucket you tried to delete is not empty.*'):
s3_gate_bucket.delete_bucket_s3(self.s3_client, bucket_1)
s3_gate_bucket.head_bucket(self.s3_client, bucket_1)
with allure.step(f'Delete empty bucket {bucket_2}'):
s3_gate_bucket.delete_bucket_s3(self.s3_client, bucket_2)
tick_epoch()
with allure.step(f'Check bucket {bucket_2} deleted'):
with pytest.raises(Exception, match=r'.*Not Found.*'):
s3_gate_bucket.head_bucket(self.s3_client, bucket_2)
buckets = s3_gate_bucket.list_buckets_s3(self.s3_client)
assert bucket_1 in buckets, f'Expected bucket {bucket_1} is in the list'
assert bucket_2 not in buckets, f'Expected bucket {bucket_2} is not in the list'
@allure.title('Test S3 Object API')
@pytest.mark.sanity
@pytest.mark.parametrize('file_type', ['simple', 'large'], ids=['Simple object', 'Large object'])
def test_s3_api_object(self, file_type):
"""
Test base S3 Object API (Put/Head/List) for simple and large objects.
"""
file_path = generate_file(SIMPLE_OBJ_SIZE if file_type == 'simple' else COMPLEX_OBJ_SIZE)
file_name = self.object_key_from_file_path(file_path)
bucket_1 = s3_gate_bucket.create_bucket_s3(self.s3_client)
bucket_2 = s3_gate_bucket.create_bucket_s3(self.s3_client)
for bucket in (bucket_1, bucket_2):
with allure.step('Bucket must be empty'):
objects_list = s3_gate_object.list_objects_s3(self.s3_client, bucket)
assert not objects_list, f'Expected empty bucket, got {objects_list}'
s3_gate_object.put_object_s3(self.s3_client, bucket, file_path)
s3_gate_object.head_object_s3(self.s3_client, bucket, file_name)
bucket_objects = s3_gate_object.list_objects_s3(self.s3_client, bucket)
assert file_name in bucket_objects, \
f'Expected file {file_name} in objects list {bucket_objects}'
with allure.step("Check object's attributes"):
for attrs in (['ETag'], ['ObjectSize', 'StorageClass']):
s3_gate_object.get_object_attributes(self.s3_client, bucket, file_name, *attrs)
@allure.title('Test S3 Sync directory')
def test_s3_sync_dir(self, bucket):
"""
Test checks sync directory with AWS CLI utility.
"""
file_path_1 = f"{os.getcwd()}/{ASSETS_DIR}/test_sync/test_file_1"
file_path_2 = f"{os.getcwd()}/{ASSETS_DIR}/test_sync/test_file_2"
key_to_path = {'test_file_1': file_path_1, 'test_file_2': file_path_2}
if not isinstance(self.s3_client, AwsCliClient):
pytest.skip('This test is not supported with boto3 client')
create_file_with_content(file_path=file_path_1)
create_file_with_content(file_path=file_path_2)
self.s3_client.sync(bucket_name=bucket, dir_path=os.path.dirname(file_path_1))
with allure.step('Check objects are synced'):
objects = s3_gate_object.list_objects_s3(self.s3_client, bucket)
with allure.step('Check these are the same objects'):
assert set(key_to_path.keys()) == set(objects), f'Expected all objects saved. Got {objects}'
for obj_key in objects:
got_object = s3_gate_object.get_object_s3(self.s3_client, bucket, obj_key)
assert get_file_hash(got_object) == get_file_hash(key_to_path.get(obj_key)), \
'Expected hashes are the same'
@allure.title('Test S3 Object versioning')
def test_s3_api_versioning(self, bucket):
"""
Test checks basic versioning functionality for S3 bucket.
"""
version_1_content = 'Version 1'
version_2_content = 'Version 2'
file_name_simple = create_file_with_content(content=version_1_content)
obj_key = os.path.basename(file_name_simple)
with allure.step('Enable versioning for bucket'):
s3_gate_bucket.get_bucket_versioning_status(self.s3_client, bucket)
s3_gate_bucket.set_bucket_versioning(self.s3_client, bucket, status=s3_gate_bucket.VersioningStatus.ENABLED)
status = s3_gate_bucket.get_bucket_versioning_status(self.s3_client, bucket)
assert status == s3_gate_bucket.VersioningStatus.ENABLED.value, f'Expected enabled status. Got {status}'
with allure.step('Put several versions of object into bucket'):
version_id_1 = s3_gate_object.put_object_s3(self.s3_client, bucket, file_name_simple)
create_file_with_content(file_path=file_name_simple, content=version_2_content)
version_id_2 = s3_gate_object.put_object_s3(self.s3_client, bucket, file_name_simple)
with allure.step('Check bucket shows all versions'):
versions = s3_gate_object.list_objects_versions_s3(self.s3_client, bucket)
obj_versions = {version.get('VersionId') for version in versions if version.get('Key') == obj_key}
assert obj_versions == {version_id_1, version_id_2}, \
f'Expected object has versions: {version_id_1, version_id_2}'
with allure.step('Show information about particular version'):
for version_id in (version_id_1, version_id_2):
response = s3_gate_object.head_object_s3(self.s3_client, bucket, obj_key, version_id=version_id)
assert 'LastModified' in response, 'Expected LastModified field'
assert 'ETag' in response, 'Expected ETag field'
assert response.get('VersionId') == version_id, f'Expected VersionId is {version_id}'
assert response.get('ContentLength') != 0, 'Expected ContentLength is not zero'
with allure.step("Check object's attributes"):
for version_id in (version_id_1, version_id_2):
got_attrs = s3_gate_object.get_object_attributes(self.s3_client, bucket, obj_key, 'ETag',
version_id=version_id)
if got_attrs:
assert got_attrs.get('VersionId') == version_id, f'Expected VersionId is {version_id}'
with allure.step('Delete object and check it was deleted'):
response = s3_gate_object.delete_object_s3(self.s3_client, bucket, obj_key)
version_id_delete = response.get('VersionId')
with pytest.raises(Exception, match=r'.*Not Found.*'):
s3_gate_object.head_object_s3(self.s3_client, bucket, obj_key)
with allure.step('Get content for all versions and check it is correct'):
for version, content in ((version_id_2, version_2_content), (version_id_1, version_1_content)):
file_name = s3_gate_object.get_object_s3(self.s3_client, bucket, obj_key, version_id=version)
got_content = get_file_content(file_name)
assert got_content == content, f'Expected object content is\n{content}\nGot\n{got_content}'
with allure.step('Restore previous object version'):
s3_gate_object.delete_object_s3(self.s3_client, bucket, obj_key, version_id=version_id_delete)
file_name = s3_gate_object.get_object_s3(self.s3_client, bucket, obj_key)
got_content = get_file_content(file_name)
assert got_content == version_2_content, \
f'Expected object content is\n{version_2_content}\nGot\n{got_content}'
@allure.title('Test S3 Object Multipart API')
def test_s3_api_multipart(self, bucket):
"""
Test checks S3 Multipart API (Create multipart upload/Abort multipart upload/List multipart upload/
Upload part/List parts/Complete multipart upload).
"""
parts_count = 3
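# S3 requires every part except the last to be at least 5 MB, so the file must be large enough for three such parts.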
file_name_large, _ = generate_file_and_file_hash(SIMPLE_OBJ_SIZE * 1024 * 6 * parts_count) # 5Mb - min part
# file_name_large, _ = generate_file_and_file_hash(SIMPLE_OBJ_SIZE * 1024 * 30 * parts_count) # 5Mb - min part
object_key = self.object_key_from_file_path(file_name_large)
part_files = split_file(file_name_large, parts_count)
parts = []
uploads = s3_gate_object.list_multipart_uploads_s3(self.s3_client, bucket)
assert not uploads, f'Expected there are no uploads in bucket {bucket}'
with allure.step('Create and abort multipart upload'):
upload_id = s3_gate_object.create_multipart_upload_s3(self.s3_client, bucket, object_key)
uploads = s3_gate_object.list_multipart_uploads_s3(self.s3_client, bucket)
assert uploads, f'Expected there is one upload in bucket {bucket}'
assert uploads[0].get('Key') == object_key, f'Expected correct key {object_key} in upload {uploads}'
assert uploads[0].get('UploadId') == upload_id, f'Expected correct UploadId {upload_id} in upload {uploads}'
s3_gate_object.abort_multipart_uploads_s3(self.s3_client, bucket, object_key, upload_id)
uploads = s3_gate_object.list_multipart_uploads_s3(self.s3_client, bucket)
assert not uploads, f'Expected there are no uploads in bucket {bucket}'
with allure.step('Create new multipart upload and upload several parts'):
upload_id = s3_gate_object.create_multipart_upload_s3(self.s3_client, bucket, object_key)
for part_id, file_path in enumerate(part_files, start=1):
etag = s3_gate_object.upload_part_s3(self.s3_client, bucket, object_key, upload_id, part_id, file_path)
parts.append((part_id, etag))
with allure.step('Check all parts are visible in bucket'):
got_parts = s3_gate_object.list_parts_s3(self.s3_client, bucket, object_key, upload_id)
assert len(got_parts) == len(part_files), f'Expected {parts_count} parts, got\n{got_parts}'
s3_gate_object.complete_multipart_upload_s3(self.s3_client, bucket, object_key, upload_id, parts)
uploads = s3_gate_object.list_multipart_uploads_s3(self.s3_client, bucket)
assert not uploads, f'Expected there are no uploads in bucket {bucket}'
with allure.step('Check we can get whole object from bucket'):
got_object = s3_gate_object.get_object_s3(self.s3_client, bucket, object_key)
assert get_file_hash(got_object) == get_file_hash(file_name_large)
self.check_object_attributes(bucket, object_key, parts_count)
@allure.title('Test S3 Bucket tagging API')
def test_s3_api_bucket_tagging(self, bucket):
"""
Test checks S3 Bucket tagging API (Put tag/Get tag).
"""
key_value_pair = [('some-key', 'some-value'), ('some-key-2', 'some-value-2')]
s3_gate_bucket.put_bucket_tagging(self.s3_client, bucket, key_value_pair)
got_tags = s3_gate_bucket.get_bucket_tagging(self.s3_client, bucket)
with allure.step('Check all tags are present'):
assert got_tags, f'Expected tags, got {got_tags}'
expected_tags = [{'Key': key, 'Value': value} for key, value in key_value_pair]
for tag in expected_tags:
assert tag in got_tags
s3_gate_bucket.delete_bucket_tagging(self.s3_client, bucket)
tags = s3_gate_bucket.get_bucket_tagging(self.s3_client, bucket)
assert not tags, f'Expected there are no tags for bucket {bucket}, got {tags}'
@allure.title('Test S3 Object tagging API')
def test_s3_api_object_tagging(self, bucket):
"""
Test checks S3 Object tagging API (Put tag/Get tag/Update tag).
"""
key_value_pair_bucket = [('some-key', 'some-value'), ('some-key-2', 'some-value-2')]
key_value_pair_obj = [('some-key-obj', 'some-value-obj'), ('some-key--obj2', 'some-value--obj2')]
key_value_pair_obj_new = [('some-key-obj-new', 'some-value-obj-new')]
file_name_simple, _ = generate_file_and_file_hash(SIMPLE_OBJ_SIZE)
obj_key = self.object_key_from_file_path(file_name_simple)
s3_gate_bucket.put_bucket_tagging(self.s3_client, bucket, key_value_pair_bucket)
s3_gate_object.put_object_s3(self.s3_client, bucket, file_name_simple)
for tags in (key_value_pair_obj, key_value_pair_obj_new):
s3_gate_object.put_object_tagging(self.s3_client, bucket, obj_key, tags)
got_tags = s3_gate_object.get_object_tagging(self.s3_client, bucket, obj_key)
assert got_tags, f'Expected tags, got {got_tags}'
expected_tags = [{'Key': key, 'Value': value} for key, value in tags]
for tag in expected_tags:
assert tag in got_tags
s3_gate_object.delete_object_tagging(self.s3_client, bucket, obj_key)
got_tags = s3_gate_object.get_object_tagging(self.s3_client, bucket, obj_key)
assert not got_tags, f'Expected there are no tags for object {obj_key}, got {got_tags}'
@allure.title('Test S3: Delete object & delete objects S3 API')
def test_s3_api_delete(self, create_buckets):
"""
Check delete_object and delete_objects S3 API operations. Some objects are deleted from the first bucket
one by one, and some objects are deleted from the second bucket all at once.
"""
max_obj_count = 20
max_delete_objects = 17
put_objects = []
file_paths = []
obj_sizes = [SIMPLE_OBJ_SIZE, COMPLEX_OBJ_SIZE]
bucket_1, bucket_2 = create_buckets
with allure.step(f'Generate {max_obj_count} files'):
for _ in range(max_obj_count):
file_paths.append(generate_file_and_file_hash(choice(obj_sizes))[0])
for bucket in (bucket_1, bucket_2):
with allure.step(f'Bucket {bucket} must be empty as it was just created'):
objects_list = s3_gate_object.list_objects_s3_v2(self.s3_client, bucket)
assert not objects_list, f'Expected empty bucket, got {objects_list}'
for file_path in file_paths:
s3_gate_object.put_object_s3(self.s3_client, bucket, file_path)
put_objects.append(self.object_key_from_file_path(file_path))
with allure.step(f'Check all objects put in bucket {bucket} successfully'):
bucket_objects = s3_gate_object.list_objects_s3_v2(self.s3_client, bucket)
assert set(put_objects) == set(bucket_objects), \
f'Expected all objects {put_objects} in objects list {bucket_objects}'
with allure.step('Delete some objects from bucket_1 one by one'):
objects_to_delete_b1 = choices(put_objects, k=max_delete_objects)
for obj in objects_to_delete_b1:
s3_gate_object.delete_object_s3(self.s3_client, bucket_1, obj)
with allure.step('Check deleted objects are not visible in bucket_1'):
bucket_objects = s3_gate_object.list_objects_s3_v2(self.s3_client, bucket_1)
assert set(put_objects).difference(set(objects_to_delete_b1)) == set(bucket_objects), \
f'Expected all objects {put_objects} in objects list {bucket_objects}'
self.try_to_get_object_and_got_error(bucket_1, objects_to_delete_b1)
with allure.step('Delete some objects from bucket_2 at once'):
objects_to_delete_b2 = choices(put_objects, k=max_delete_objects)
s3_gate_object.delete_objects_s3(self.s3_client, bucket_2, objects_to_delete_b2)
with allure.step('Check deleted objects are not visible in bucket_2'):
objects_list = s3_gate_object.list_objects_s3_v2(self.s3_client, bucket_2)
assert set(put_objects).difference(set(objects_to_delete_b2)) == set(objects_list), \
f'Expected all objects {put_objects} in objects list {objects_list}'
self.try_to_get_object_and_got_error(bucket_2, objects_to_delete_b2)
@allure.title('Test S3: Copy object to the same bucket')
def test_s3_copy_same_bucket(self):
"""
Test object can be copied to the same bucket.
"""
file_path_simple, file_path_large = generate_file(), generate_file(COMPLEX_OBJ_SIZE)
file_name_simple = self.object_key_from_file_path(file_path_simple)
file_name_large = self.object_key_from_file_path(file_path_large)
bucket_objects = [file_name_simple, file_name_large]
bucket = s3_gate_bucket.create_bucket_s3(self.s3_client)
with allure.step('Bucket must be empty'):
objects_list = s3_gate_object.list_objects_s3(self.s3_client, bucket)
assert not objects_list, f'Expected empty bucket, got {objects_list}'
with allure.step('Put objects into bucket'):
for file_path in (file_path_simple, file_path_large):
s3_gate_object.put_object_s3(self.s3_client, bucket, file_path)
with allure.step('Copy one object into the same bucket'):
copy_obj_path = s3_gate_object.copy_object_s3(self.s3_client, bucket, file_name_simple)
bucket_objects.append(copy_obj_path)
self.check_objects_in_bucket(bucket, bucket_objects)
with allure.step('Check copied object has the same content'):
got_copied_file = s3_gate_object.get_object_s3(self.s3_client, bucket, copy_obj_path)
assert get_file_hash(file_path_simple) == get_file_hash(got_copied_file), 'Hashes must be the same'
with allure.step('Delete one object from bucket'):
s3_gate_object.delete_object_s3(self.s3_client, bucket, file_name_simple)
bucket_objects.remove(file_name_simple)
self.check_objects_in_bucket(bucket, expected_objects=bucket_objects, unexpected_objects=[file_name_simple])
@allure.title('Test S3: Copy object to another bucket')
def test_s3_copy_to_another_bucket(self):
"""
Test object can be copied to another bucket.
"""
file_path_simple, file_path_large = generate_file(), generate_file(COMPLEX_OBJ_SIZE)
file_name_simple = self.object_key_from_file_path(file_path_simple)
file_name_large = self.object_key_from_file_path(file_path_large)
bucket_1_objects = [file_name_simple, file_name_large]
bucket_1 = s3_gate_bucket.create_bucket_s3(self.s3_client)
bucket_2 = s3_gate_bucket.create_bucket_s3(self.s3_client)
with allure.step('Buckets must be empty'):
for bucket in (bucket_1, bucket_2):
objects_list = s3_gate_object.list_objects_s3(self.s3_client, bucket)
assert not objects_list, f'Expected empty bucket, got {objects_list}'
with allure.step('Put objects into one bucket'):
for file_path in (file_path_simple, file_path_large):
s3_gate_object.put_object_s3(self.s3_client, bucket_1, file_path)
with allure.step('Copy object from first bucket into second'):
copy_obj_path_b2 = s3_gate_object.copy_object_s3(self.s3_client, bucket_1, file_name_large,
bucket_dst=bucket_2)
self.check_objects_in_bucket(bucket_1, expected_objects=bucket_1_objects)
self.check_objects_in_bucket(bucket_2, expected_objects=[copy_obj_path_b2])
with allure.step('Check copied object has the same content'):
got_copied_file_b2 = s3_gate_object.get_object_s3(self.s3_client, bucket_2, copy_obj_path_b2)
assert get_file_hash(file_path_large) == get_file_hash(got_copied_file_b2), 'Hashes must be the same'
with allure.step('Delete one object from first bucket'):
s3_gate_object.delete_object_s3(self.s3_client, bucket_1, file_name_simple)
bucket_1_objects.remove(file_name_simple)
self.check_objects_in_bucket(bucket_1, expected_objects=bucket_1_objects)
self.check_objects_in_bucket(bucket_2, expected_objects=[copy_obj_path_b2])
with allure.step('Delete one object from second bucket and check it is empty'):
s3_gate_object.delete_object_s3(self.s3_client, bucket_2, copy_obj_path_b2)
self.check_objects_in_bucket(bucket_2, expected_objects=[])
def check_object_attributes(self, bucket: str, object_key: str, parts_count: int):
if not isinstance(self.s3_client, AwsCliClient):
logger.warning('Attributes check is not supported for boto3 implementation')
return
with allure.step("Check object's attributes"):
obj_parts = s3_gate_object.get_object_attributes(self.s3_client, bucket, object_key, 'ObjectParts',
get_full_resp=False)
assert obj_parts.get('TotalPartsCount') == parts_count, f'Expected TotalPartsCount is {parts_count}'
assert len(obj_parts.get('Parts')) == parts_count, f'Expected Parts count is {parts_count}'
with allure.step("Check object's attribute max-parts"):
max_parts = 2
obj_parts = s3_gate_object.get_object_attributes(self.s3_client, bucket, object_key, 'ObjectParts',
max_parts=max_parts, get_full_resp=False)
assert obj_parts.get('TotalPartsCount') == parts_count, f'Expected TotalPartsCount is {parts_count}'
assert obj_parts.get('MaxParts') == max_parts, f'Expected MaxParts is {max_parts}'
assert len(obj_parts.get('Parts')) == max_parts, f'Expected Parts count is {max_parts}'
with allure.step("Check object's attribute part-number-marker"):
part_number_marker = 3
obj_parts = s3_gate_object.get_object_attributes(self.s3_client, bucket, object_key, 'ObjectParts',
part_number=part_number_marker, get_full_resp=False)
assert obj_parts.get('TotalPartsCount') == parts_count, f'Expected TotalPartsCount is {parts_count}'
assert obj_parts.get(
'PartNumberMarker') == part_number_marker, f'Expected PartNumberMarker is {part_number_marker}'
assert len(obj_parts.get('Parts')) == 1, 'Expected Parts count is 1'
@allure.step('Expected all objects are present in the bucket')
def check_objects_in_bucket(self, bucket, expected_objects: list, unexpected_objects: list = None):
unexpected_objects = unexpected_objects or []
bucket_objects = s3_gate_object.list_objects_s3(self.s3_client, bucket)
assert len(bucket_objects) == len(expected_objects), f'Expected {len(expected_objects)} objects in the bucket'
for bucket_object in expected_objects:
assert bucket_object in bucket_objects, \
f'Expected object {bucket_object} in objects list {bucket_objects}'
for bucket_object in unexpected_objects:
assert bucket_object not in bucket_objects, \
f'Expected object {bucket_object} not in objects list {bucket_objects}'
@allure.step('Try to get object and expect error')
def try_to_get_object_and_got_error(self, bucket: str, unexpected_objects: list):
for obj in unexpected_objects:
try:
s3_gate_object.get_object_s3(self.s3_client, bucket, obj)
raise AssertionError(f'Object {obj} found in bucket {bucket}')
except Exception as err:
assert 'The specified key does not exist' in str(err), f'Expected error in exception {err}'
@staticmethod
def object_key_from_file_path(full_path: str) -> str:
return os.path.basename(full_path)


@@ -0,0 +1,25 @@
import pytest
from frostfs_testlib import reporter
from frostfs_testlib.credentials.interfaces import CredentialsProvider, User
from frostfs_testlib.storage.cluster import Cluster
from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
from frostfs_testlib.utils import string_utils
@pytest.fixture(scope="module")
def owner_wallet(default_wallet: WalletInfo) -> WalletInfo:
return default_wallet
@pytest.fixture(scope="module")
def user_wallet(credentials_provider: CredentialsProvider, cluster: Cluster) -> WalletInfo:
with reporter.step("Create user wallet which will use objects from owner via static session"):
user = User(string_utils.unique_name("user-"))
return credentials_provider.GRPC.provide(user, cluster.cluster_nodes[0])
@pytest.fixture(scope="module")
def stranger_wallet(credentials_provider: CredentialsProvider, cluster: Cluster) -> WalletInfo:
with reporter.step("Create stranger user wallet which should fail to obtain data"):
user = User(string_utils.unique_name("user-"))
return credentials_provider.GRPC.provide(user, cluster.cluster_nodes[0])


@@ -0,0 +1,137 @@
import random
import allure
import pytest
from frostfs_testlib import reporter
from frostfs_testlib.resources.error_patterns import SESSION_NOT_FOUND
from frostfs_testlib.steps.cli.container import create_container
from frostfs_testlib.steps.cli.object import delete_object, put_object, put_object_to_random_node
from frostfs_testlib.steps.session_token import create_session_token
from frostfs_testlib.storage.dataclasses.object_size import ObjectSize
from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
from frostfs_testlib.testing.cluster_test_base import ClusterTestBase
from frostfs_testlib.utils.file_utils import generate_file
@pytest.mark.nightly
@pytest.mark.sanity
@pytest.mark.session_token
class TestDynamicObjectSession(ClusterTestBase):
@allure.title("Object Operations with Session Token (obj_size={object_size})")
def test_object_session_token(self, default_wallet: WalletInfo, object_size: ObjectSize):
"""
Test how operations over objects are executed with a session token
Steps:
1. Create a private container
2. Obj operation requests to the node which IS NOT in the container but granted
with a session token
3. Obj operation requests to the node which IS in the container and NOT granted
with a session token
4. Obj operation requests to the node which IS NOT in the container and NOT granted
with a session token
"""
with reporter.step("Init wallet"):
wallet = default_wallet
with reporter.step("Nodes Settlements"):
session_token_node, container_node, non_container_node = random.sample(self.cluster.storage_nodes, 3)
with reporter.step("Create Session Token"):
session_token = create_session_token(
shell=self.shell,
owner=default_wallet.get_address(),
wallet=default_wallet,
rpc_endpoint=session_token_node.get_rpc_endpoint(),
)
with reporter.step("Create Private Container"):
un_locode = container_node.get_un_locode()
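# A UN/LOCODE is '<country> <location>' (e.g. 'RU LED'); the location part names the policy filter, with 'SPB' substituted for 'RU LED'.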
locode = "SPB" if un_locode == "RU LED" else un_locode.split()[1]
placement_policy = (
f"REP 1 IN LOC_{locode}_PLACE CBF 1 SELECT 1 FROM LOC_{locode} "
f'AS LOC_{locode}_PLACE FILTER "UN-LOCODE" '
f'EQ "{un_locode}" AS LOC_{locode}'
)
cid = create_container(
wallet,
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
rule=placement_policy,
)
with reporter.step("Put Objects"):
file_path = generate_file(object_size.value)
oid = put_object_to_random_node(
wallet=wallet,
path=file_path,
cid=cid,
shell=self.shell,
cluster=self.cluster,
)
oid_delete = put_object_to_random_node(
wallet=wallet,
path=file_path,
cid=cid,
shell=self.shell,
cluster=self.cluster,
)
with reporter.step("Node not in container but granted a session token"):
put_object(
wallet=wallet,
path=file_path,
cid=cid,
shell=self.shell,
endpoint=session_token_node.get_rpc_endpoint(),
session=session_token,
)
delete_object(
wallet=wallet,
cid=cid,
oid=oid_delete,
shell=self.shell,
endpoint=session_token_node.get_rpc_endpoint(),
session=session_token,
)
with reporter.step("Node in container and not granted a session token"):
with pytest.raises(Exception, match=SESSION_NOT_FOUND):
put_object(
wallet=wallet,
path=file_path,
cid=cid,
shell=self.shell,
endpoint=container_node.get_rpc_endpoint(),
session=session_token,
)
with pytest.raises(Exception, match=SESSION_NOT_FOUND):
delete_object(
wallet=wallet,
cid=cid,
oid=oid,
shell=self.shell,
endpoint=container_node.get_rpc_endpoint(),
session=session_token,
)
with reporter.step("Node not in container and not granted a session token"):
with pytest.raises(Exception, match=SESSION_NOT_FOUND):
put_object(
wallet=wallet,
path=file_path,
cid=cid,
shell=self.shell,
endpoint=non_container_node.get_rpc_endpoint(),
session=session_token,
)
with pytest.raises(Exception, match=SESSION_NOT_FOUND):
delete_object(
wallet=wallet,
cid=cid,
oid=oid,
shell=self.shell,
endpoint=non_container_node.get_rpc_endpoint(),
session=session_token,
)


@@ -0,0 +1,676 @@
import logging
import allure
import pytest
from frostfs_testlib import reporter
from frostfs_testlib.resources.error_patterns import EXPIRED_SESSION_TOKEN, MALFORMED_REQUEST, OBJECT_ACCESS_DENIED, OBJECT_NOT_FOUND
from frostfs_testlib.shell import Shell
from frostfs_testlib.steps.cli.container import create_container
from frostfs_testlib.steps.cli.object import (
delete_object,
get_object,
get_object_from_random_node,
get_range,
get_range_hash,
head_object,
put_object_to_random_node,
search_object,
)
from frostfs_testlib.steps.epoch import ensure_fresh_epoch
from frostfs_testlib.steps.session_token import (
INVALID_SIGNATURE,
UNRELATED_CONTAINER,
UNRELATED_KEY,
UNRELATED_OBJECT,
WRONG_VERB,
Lifetime,
ObjectVerb,
generate_object_session_token,
get_object_signed_token,
sign_session_token,
)
from frostfs_testlib.steps.storage_object import delete_objects
from frostfs_testlib.storage.cluster import Cluster
from frostfs_testlib.storage.dataclasses.object_size import ObjectSize
from frostfs_testlib.storage.dataclasses.storage_object_info import StorageObjectInfo
from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
from frostfs_testlib.testing.cluster_test_base import ClusterTestBase
from frostfs_testlib.testing.test_control import expect_not_raises
from frostfs_testlib.utils.file_utils import generate_file
logger = logging.getLogger("NeoLogger")
RANGE_OFFSET_FOR_COMPLEX_OBJECT = 200
@pytest.fixture(scope="module")
def storage_containers(owner_wallet: WalletInfo, client_shell: Shell, cluster: Cluster) -> list[str]:
cid = create_container(owner_wallet, shell=client_shell, endpoint=cluster.default_rpc_endpoint)
other_cid = create_container(owner_wallet, shell=client_shell, endpoint=cluster.default_rpc_endpoint)
yield [cid, other_cid]
@pytest.fixture(
# Module scope so each file set is uploaded/deleted only once
scope="module",
)
def storage_objects(
owner_wallet: WalletInfo,
client_shell: Shell,
storage_containers: list[str],
cluster: Cluster,
object_size: ObjectSize,
) -> list[StorageObjectInfo]:
file_path = generate_file(object_size.value)
storage_objects = []
with reporter.step("Put objects"):
# upload a couple of objects
for _ in range(3):
storage_object_id = put_object_to_random_node(
wallet=owner_wallet,
path=file_path,
cid=storage_containers[0],
shell=client_shell,
cluster=cluster,
)
storage_object = StorageObjectInfo(storage_containers[0], storage_object_id)
storage_object.size = object_size.value
storage_object.wallet = owner_wallet
storage_object.file_path = file_path
storage_objects.append(storage_object)
yield storage_objects
# Teardown after all tests with the current param are done
delete_objects(storage_objects, client_shell, cluster)
@reporter.step("Get ranges for test")
def get_ranges(storage_object: StorageObjectInfo, max_object_size: int, shell: Shell, endpoint: str) -> list[str]:
"""
Returns ranges to test range/hash methods via static session
"""
object_size = storage_object.size
if object_size > max_object_size:
assert object_size >= max_object_size + RANGE_OFFSET_FOR_COMPLEX_OBJECT
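# Cover the head of the object, its tail, and a range that crosses the boundary of the first split part.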
return [
"0:10",
f"{object_size-10}:10",
f"{max_object_size - RANGE_OFFSET_FOR_COMPLEX_OBJECT}:" f"{RANGE_OFFSET_FOR_COMPLEX_OBJECT * 2}",
]
else:
return ["0:10", f"{object_size-10}:10"]
@pytest.fixture(scope="module")
def static_sessions(
owner_wallet: WalletInfo,
user_wallet: WalletInfo,
storage_containers: list[str],
storage_objects: list[StorageObjectInfo],
client_shell: Shell,
temp_directory: str,
) -> dict[ObjectVerb, str]:
"""
Returns dict with static session token file paths for all verbs with default lifetime,
bound to the valid container and the first two objects
"""
return {
verb: get_object_signed_token(
owner_wallet,
user_wallet,
storage_containers[0],
storage_objects[0:2],
verb,
client_shell,
temp_directory,
)
for verb in ObjectVerb
}
@pytest.mark.nightly
@pytest.mark.static_session
class TestObjectStaticSession(ClusterTestBase):
@allure.title("Read operations with static session (method={method_under_test.__name__}, obj_size={object_size})")
@pytest.mark.parametrize(
"method_under_test,verb",
[
(head_object, ObjectVerb.HEAD),
(get_object, ObjectVerb.GET),
],
)
def test_static_session_read(
self,
user_wallet: WalletInfo,
storage_objects: list[StorageObjectInfo],
static_sessions: dict[ObjectVerb, str],
method_under_test,
verb: ObjectVerb,
):
"""
Validate static session with read operations
"""
for node in self.cluster.storage_nodes:
for storage_object in storage_objects[0:2]:
method_under_test(
wallet=user_wallet,
cid=storage_object.cid,
oid=storage_object.oid,
shell=self.shell,
endpoint=node.get_rpc_endpoint(),
session=static_sessions[verb],
)
@allure.title("Range operations with static session (method={method_under_test.__name__}, obj_size={object_size})")
@pytest.mark.parametrize(
"method_under_test,verb",
[(get_range, ObjectVerb.RANGE), (get_range_hash, ObjectVerb.RANGEHASH)],
)
@pytest.mark.sanity
def test_static_session_range(
self,
user_wallet: WalletInfo,
storage_objects: list[StorageObjectInfo],
static_sessions: dict[ObjectVerb, str],
method_under_test,
verb: ObjectVerb,
max_object_size,
):
"""
Validate static session with range operations
"""
storage_object = storage_objects[0]
ranges_to_test = get_ranges(storage_object, max_object_size, self.shell, self.cluster.default_rpc_endpoint)
for range_to_test in ranges_to_test:
with reporter.step(f"Check range {range_to_test}"):
with expect_not_raises():
method_under_test(
user_wallet,
storage_object.cid,
storage_object.oid,
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
session=static_sessions[verb],
range_cut=range_to_test,
)
@allure.title("Search operation with static session (obj_size={object_size})")
def test_static_session_search(
self,
user_wallet: WalletInfo,
storage_objects: list[StorageObjectInfo],
static_sessions: dict[ObjectVerb, str],
):
"""
Validate static session with search operations
"""
cid = storage_objects[0].cid
expected_object_ids = [storage_object.oid for storage_object in storage_objects[0:2]]
actual_object_ids = search_object(
user_wallet,
cid,
self.shell,
endpoint=self.cluster.default_rpc_endpoint,
session=static_sessions[ObjectVerb.SEARCH],
root=True,
)
assert sorted(expected_object_ids) == sorted(actual_object_ids)
@allure.title("[NEGATIVE] Static session with object id not in session (obj_size={object_size})")
def test_static_session_unrelated_object(
self,
user_wallet: WalletInfo,
storage_objects: list[StorageObjectInfo],
static_sessions: dict[ObjectVerb, str],
):
"""
Validate static session with object id not in session
"""
with pytest.raises(Exception, match=UNRELATED_OBJECT):
head_object(
user_wallet,
storage_objects[2].cid,
storage_objects[2].oid,
self.shell,
self.cluster.default_rpc_endpoint,
session=static_sessions[ObjectVerb.HEAD],
)
@allure.title("[NEGATIVE] Static session with user id not in session (obj_size={object_size})")
def test_static_session_head_unrelated_user(
self,
stranger_wallet: WalletInfo,
storage_objects: list[StorageObjectInfo],
static_sessions: dict[ObjectVerb, str],
):
"""
Validate static session with user id not in session
"""
storage_object = storage_objects[0]
with pytest.raises(Exception, match=UNRELATED_KEY):
head_object(
stranger_wallet,
storage_object.cid,
storage_object.oid,
self.shell,
self.cluster.default_rpc_endpoint,
session=static_sessions[ObjectVerb.HEAD],
)
@allure.title("[NEGATIVE] Static session with wrong verb in session (obj_size={object_size})")
def test_static_session_head_wrong_verb(
self,
user_wallet: WalletInfo,
storage_objects: list[StorageObjectInfo],
static_sessions: dict[ObjectVerb, str],
):
"""
Validate static session with wrong verb in session
"""
storage_object = storage_objects[0]
with pytest.raises(Exception, match=WRONG_VERB):
get_object(
user_wallet,
storage_object.cid,
storage_object.oid,
self.shell,
self.cluster.default_rpc_endpoint,
session=static_sessions[ObjectVerb.HEAD],
)
@allure.title("[NEGATIVE] Static session with container id not in session (obj_size={object_size})")
def test_static_session_unrelated_container(
self,
user_wallet: WalletInfo,
storage_objects: list[StorageObjectInfo],
storage_containers: list[str],
static_sessions: dict[ObjectVerb, str],
):
"""
Validate static session with container id not in session
"""
storage_object = storage_objects[0]
with pytest.raises(Exception, match=UNRELATED_CONTAINER):
get_object_from_random_node(
user_wallet,
storage_containers[1],
storage_object.oid,
self.shell,
self.cluster,
session=static_sessions[ObjectVerb.GET],
)
@allure.title("[NEGATIVE] Static session signed by another wallet (obj_size={object_size})")
def test_static_session_signed_by_other(
self,
owner_wallet: WalletInfo,
user_wallet: WalletInfo,
stranger_wallet: WalletInfo,
storage_containers: list[str],
storage_objects: list[StorageObjectInfo],
temp_directory: str,
):
"""
Validate static session which is signed by another wallet
"""
storage_object = storage_objects[0]
session_token_file = generate_object_session_token(
owner_wallet,
user_wallet,
[storage_object.oid],
storage_containers[0],
ObjectVerb.HEAD,
temp_directory,
)
signed_token_file = sign_session_token(self.shell, session_token_file, stranger_wallet)
with pytest.raises(Exception, match=OBJECT_ACCESS_DENIED):
head_object(
user_wallet,
storage_object.cid,
storage_object.oid,
self.shell,
self.cluster.default_rpc_endpoint,
session=signed_token_file,
)
@allure.title("[NEGATIVE] Static session for another container (obj_size={object_size})")
def test_static_session_signed_for_other_container(
self,
owner_wallet: WalletInfo,
user_wallet: WalletInfo,
storage_containers: list[str],
storage_objects: list[StorageObjectInfo],
temp_directory: str,
):
"""
Validate static session which is signed for another container
"""
storage_object = storage_objects[0]
container = storage_containers[1]
session_token_file = generate_object_session_token(
owner_wallet,
user_wallet,
[storage_object.oid],
container,
ObjectVerb.HEAD,
temp_directory,
)
signed_token_file = sign_session_token(self.shell, session_token_file, owner_wallet)
with pytest.raises(Exception, match=OBJECT_NOT_FOUND):
head_object(
user_wallet,
container,
storage_object.oid,
self.shell,
self.cluster.default_rpc_endpoint,
session=signed_token_file,
)
@allure.title("[NEGATIVE] Static session without sign (obj_size={object_size})")
def test_static_session_without_sign(
self,
owner_wallet: WalletInfo,
user_wallet: WalletInfo,
storage_containers: list[str],
storage_objects: list[StorageObjectInfo],
temp_directory: str,
):
"""
Validate static session which wasn't signed
"""
storage_object = storage_objects[0]
session_token_file = generate_object_session_token(
owner_wallet,
user_wallet,
[storage_object.oid],
storage_containers[0],
ObjectVerb.HEAD,
temp_directory,
)
with pytest.raises(Exception, match=INVALID_SIGNATURE):
head_object(
user_wallet,
storage_object.cid,
storage_object.oid,
self.shell,
self.cluster.default_rpc_endpoint,
session=session_token_file,
)
@allure.title("Static session which expires at next epoch (obj_size={object_size})")
def test_static_session_expiration_at_next(
self,
owner_wallet: WalletInfo,
user_wallet: WalletInfo,
storage_containers: list[str],
storage_objects: list[StorageObjectInfo],
temp_directory: str,
):
"""
Validate static session which expires at next epoch
"""
epoch = ensure_fresh_epoch(self.shell, self.cluster)
container = storage_containers[0]
object_id = storage_objects[0].oid
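# Lifetime arguments are (exp, nbf, iat): the token is valid now and expires at the next epoch.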
expiration = Lifetime(epoch + 1, epoch, epoch)
with reporter.step("Create session token"):
token_expire_at_next_epoch = get_object_signed_token(
owner_wallet,
user_wallet,
container,
storage_objects,
ObjectVerb.HEAD,
self.shell,
temp_directory,
expiration,
)
with reporter.step("Object should be available with session token after token creation"):
with expect_not_raises():
head_object(
user_wallet,
container,
object_id,
self.shell,
self.cluster.default_rpc_endpoint,
session=token_expire_at_next_epoch,
)
with reporter.step("Object should be available at last epoch before session token expiration"):
self.tick_epoch()
with expect_not_raises():
head_object(
user_wallet,
container,
object_id,
self.shell,
self.cluster.default_rpc_endpoint,
session=token_expire_at_next_epoch,
)
with reporter.step("Object should NOT be available after session token expiration epoch"):
self.tick_epoch()
with pytest.raises(Exception, match=EXPIRED_SESSION_TOKEN):
head_object(
user_wallet,
container,
object_id,
self.shell,
self.cluster.default_rpc_endpoint,
session=token_expire_at_next_epoch,
)
@pytest.mark.sanity
@allure.title("Static session which is valid since next epoch (obj_size={object_size})")
def test_static_session_start_at_next(
self,
owner_wallet: WalletInfo,
user_wallet: WalletInfo,
storage_containers: list[str],
storage_objects: list[StorageObjectInfo],
temp_directory: str,
):
"""
Validate static session which is valid starting from next epoch
"""
epoch = ensure_fresh_epoch(self.shell, self.cluster)
container = storage_containers[0]
object_id = storage_objects[0].oid
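# nbf = epoch + 1, so the token only becomes valid starting from the next epoch.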
expiration = Lifetime(epoch + 2, epoch + 1, epoch)
with reporter.step("Create session token"):
token_start_at_next_epoch = get_object_signed_token(
owner_wallet,
user_wallet,
container,
storage_objects,
ObjectVerb.HEAD,
self.shell,
temp_directory,
expiration,
)
with reporter.step("Object should NOT be available with session token after token creation"):
with pytest.raises(Exception, match=MALFORMED_REQUEST):
head_object(
user_wallet,
container,
object_id,
self.shell,
self.cluster.default_rpc_endpoint,
session=token_start_at_next_epoch,
)
with reporter.step("Object should be available with session token starting from token nbf epoch"):
self.tick_epoch()
with expect_not_raises():
head_object(
user_wallet,
container,
object_id,
self.shell,
self.cluster.default_rpc_endpoint,
session=token_start_at_next_epoch,
)
with reporter.step("Object should be available at last epoch before session token expiration"):
self.tick_epoch()
with expect_not_raises():
head_object(
user_wallet,
container,
object_id,
self.shell,
self.cluster.default_rpc_endpoint,
session=token_start_at_next_epoch,
)
with reporter.step("Object should NOT be available after session token expiration epoch"):
self.tick_epoch()
with pytest.raises(Exception, match=EXPIRED_SESSION_TOKEN):
head_object(
user_wallet,
container,
object_id,
self.shell,
self.cluster.default_rpc_endpoint,
session=token_start_at_next_epoch,
)
@allure.title("[NEGATIVE] Expired static session (obj_size={object_size})")
def test_static_session_already_expired(
self,
owner_wallet: WalletInfo,
user_wallet: WalletInfo,
storage_containers: list[str],
storage_objects: list[StorageObjectInfo],
temp_directory: str,
):
"""
Validate static session which is already expired
"""
epoch = ensure_fresh_epoch(self.shell, self.cluster)
container = storage_containers[0]
object_id = storage_objects[0].oid
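# Every lifetime epoch is already in the past, so the token is expired from the start.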
expiration = Lifetime(epoch - 1, epoch - 2, epoch - 2)
token_already_expired = get_object_signed_token(
owner_wallet,
user_wallet,
container,
storage_objects,
ObjectVerb.HEAD,
self.shell,
temp_directory,
expiration,
)
with pytest.raises(Exception, match=EXPIRED_SESSION_TOKEN):
head_object(
user_wallet,
container,
object_id,
self.shell,
self.cluster.default_rpc_endpoint,
session=token_already_expired,
)
@allure.title("Delete verb is restricted for static session (obj_size={object_size})")
def test_static_session_delete_verb(
self,
user_wallet: WalletInfo,
storage_objects: list[StorageObjectInfo],
static_sessions: dict[ObjectVerb, str],
):
"""
Delete verb should be restricted for static session
"""
storage_object = storage_objects[0]
with pytest.raises(Exception, match=OBJECT_ACCESS_DENIED):
delete_object(
user_wallet,
storage_object.cid,
storage_object.oid,
self.shell,
endpoint=self.cluster.default_rpc_endpoint,
session=static_sessions[ObjectVerb.DELETE],
)
@pytest.mark.sanity
@allure.title("Put verb is restricted for static session (obj_size={object_size})")
def test_static_session_put_verb(
self,
user_wallet: WalletInfo,
storage_objects: list[StorageObjectInfo],
static_sessions: dict[ObjectVerb, str],
):
"""
Put verb should be restricted for static session
"""
storage_object = storage_objects[0]
with pytest.raises(Exception, match=OBJECT_ACCESS_DENIED):
put_object_to_random_node(
user_wallet,
storage_object.file_path,
storage_object.cid,
self.shell,
self.cluster,
session=static_sessions[ObjectVerb.PUT],
)
@allure.title("[NEGATIVE] Static session is issued in future epoch (obj_size={object_size})")
def test_static_session_invalid_issued_epoch(
self,
owner_wallet: WalletInfo,
user_wallet: WalletInfo,
storage_containers: list[str],
storage_objects: list[StorageObjectInfo],
temp_directory: str,
):
"""
Validate static session which is issued in future epoch
"""
epoch = ensure_fresh_epoch(self.shell, self.cluster)
container = storage_containers[0]
object_id = storage_objects[0].oid
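# iat = epoch + 1 lies in the future, so the token must be rejected as malformed.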
expiration = Lifetime(epoch + 10, 0, epoch + 1)
token_invalid_issue_time = get_object_signed_token(
owner_wallet,
user_wallet,
container,
storage_objects,
ObjectVerb.HEAD,
self.shell,
temp_directory,
expiration,
)
with pytest.raises(Exception, match=MALFORMED_REQUEST):
head_object(
user_wallet,
container,
object_id,
self.shell,
self.cluster.default_rpc_endpoint,
session=token_invalid_issue_time,
)


@@ -0,0 +1,118 @@
import allure
import pytest
from frostfs_testlib import reporter
from frostfs_testlib.shell import Shell
from frostfs_testlib.steps.cli.container import create_container, delete_container, get_container, list_containers
from frostfs_testlib.steps.session_token import ContainerVerb, get_container_signed_token
from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
from frostfs_testlib.testing.cluster_test_base import ClusterTestBase
@pytest.mark.nightly
@pytest.mark.static_session_container
class TestSessionTokenContainer(ClusterTestBase):
@pytest.fixture(scope="module")
def static_sessions(
self,
owner_wallet: WalletInfo,
user_wallet: WalletInfo,
client_shell: Shell,
temp_directory: str,
) -> dict[ContainerVerb, str]:
"""
Returns dict with static session token file paths for all verbs with default lifetime
"""
return {verb: get_container_signed_token(owner_wallet, user_wallet, verb, client_shell, temp_directory) for verb in ContainerVerb}
@allure.title("Static session with create operation")
def test_static_session_token_container_create(
self,
owner_wallet: WalletInfo,
user_wallet: WalletInfo,
static_sessions: dict[ContainerVerb, str],
):
"""
Validate static session with create operation
"""
with reporter.step("Create container with static session token"):
cid = create_container(
user_wallet,
session_token=static_sessions[ContainerVerb.CREATE],
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
wait_for_creation=False,
)
container_info: dict[str, str] = get_container(owner_wallet, cid, shell=self.shell, endpoint=self.cluster.default_rpc_endpoint)
assert container_info["ownerID"] == owner_wallet.get_address()
assert cid not in list_containers(user_wallet, shell=self.shell, endpoint=self.cluster.default_rpc_endpoint)
assert cid in list_containers(owner_wallet, shell=self.shell, endpoint=self.cluster.default_rpc_endpoint)
@allure.title("[NEGATIVE] Static session without create operation")
def test_static_session_token_container_create_with_other_verb(
self,
user_wallet: WalletInfo,
static_sessions: dict[ContainerVerb, str],
):
"""
Validate static session without create operation
"""
with reporter.step("Try create container with static session token without PUT rule"):
for verb in [verb for verb in ContainerVerb if verb != ContainerVerb.CREATE]:
with pytest.raises(Exception):
create_container(
user_wallet,
session_token=static_sessions[verb],
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
wait_for_creation=False,
)
@allure.title("[NEGATIVE] Static session with create operation for other wallet")
def test_static_session_token_container_create_with_other_wallet(
self,
stranger_wallet: WalletInfo,
static_sessions: dict[ContainerVerb, str],
):
"""
Validate static session with create operation for other wallet
"""
with reporter.step("Try create container with static session token without PUT rule"):
with pytest.raises(Exception):
create_container(
stranger_wallet,
session_token=static_sessions[ContainerVerb.CREATE],
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
wait_for_creation=False,
)
@allure.title("Static session with delete operation")
def test_static_session_token_container_delete(
self,
owner_wallet: WalletInfo,
user_wallet: WalletInfo,
static_sessions: dict[ContainerVerb, str],
):
"""
Validate static session with delete operation
"""
with reporter.step("Create container"):
cid = create_container(
owner_wallet,
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
wait_for_creation=False,
)
with reporter.step("Delete container with static session token"):
delete_container(
wallet=user_wallet,
cid=cid,
session_token=static_sessions[ContainerVerb.DELETE],
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
await_mode=True,
)
assert cid not in list_containers(owner_wallet, shell=self.shell, endpoint=self.cluster.default_rpc_endpoint)


@@ -0,0 +1,144 @@
import json
import allure
import pytest
from frostfs_testlib import reporter
from frostfs_testlib.cli import FrostfsCli
from frostfs_testlib.resources.cli import CLI_DEFAULT_TIMEOUT
from frostfs_testlib.resources.wellknown_acl import EACL_PUBLIC_READ_WRITE
from frostfs_testlib.steps.cli.container import create_container, delete_container
from frostfs_testlib.steps.cli.object import delete_object, get_object, get_object_nodes, put_object
from frostfs_testlib.storage.cluster import Cluster, ClusterNode, StorageNode
from frostfs_testlib.storage.controllers import ClusterStateController, ShardsWatcher
from frostfs_testlib.storage.controllers.state_managers.config_state_manager import ConfigStateManager
from frostfs_testlib.storage.dataclasses.shard import Shard
from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
from frostfs_testlib.testing import parallel, wait_for_success
from frostfs_testlib.testing.cluster_test_base import ClusterTestBase
from frostfs_testlib.utils.file_utils import generate_file
@pytest.mark.nightly
@pytest.mark.shard
class TestControlShard(ClusterTestBase):
@staticmethod
@wait_for_success(180, 30)
def get_object_path_and_name_file(oid: str, cid: str, node: ClusterNode) -> tuple[str, str]:
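# Objects are located by a directory path built from the first four characters
# of the OID (one level per character) and a file named "<rest-of-oid>.<cid>",
# e.g. a hypothetical oid "abcd1234..." in container "XYZ" resolves to
# ".../data/a/b/c/d" with file "1234....XYZ" (derived from the code below).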
oid_path = f"{oid[0]}/{oid[1]}/{oid[2]}/{oid[3]}"
object_path = None
with reporter.step("Search object file"):
node_shell = node.storage_node.host.get_shell()
data_path = node.storage_node.get_data_directory()
all_datas = node_shell.exec(f"ls -la {data_path}/data | awk '{{ print $9 }}'").stdout.strip()
for data_dir in all_datas.replace(".", "").strip().split("\n"):
check_dir = node_shell.exec(f" [ -d {data_path}/data/{data_dir}/data/{oid_path} ] && echo 1 || echo 0").stdout
if "1" in check_dir:
object_path = f"{data_path}/data/{data_dir}/data/{oid_path}"
object_name = f"{oid[4:]}.{cid}"
break
assert object_path is not None, f"{oid} object not found in directory - {data_path}/data"
return object_path, object_name

def set_shard_rw_mode(self, node: ClusterNode):
watcher = ShardsWatcher(node)
shards = watcher.get_shards()
for shard in shards:
watcher.set_shard_mode(shard["shard_id"], mode="read-write")
watcher.await_for_all_shards_status(status="read-write")

@pytest.fixture()
@allure.title("Revert all shards mode")
def revert_all_shards_mode(self) -> None:
yield
parallel(self.set_shard_rw_mode, self.cluster.cluster_nodes)

@pytest.fixture()
def oid_cid_node(self, default_wallet: WalletInfo, max_object_size: int) -> tuple[str, str, ClusterNode]:
with reporter.step("Create container, and put object"):
cid = create_container(
wallet=default_wallet,
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
rule="REP 1 CBF 1",
basic_acl=EACL_PUBLIC_READ_WRITE,
)
file = generate_file(round(max_object_size * 0.8))
oid = put_object(wallet=default_wallet, path=file, cid=cid, shell=self.shell, endpoint=self.cluster.default_rpc_endpoint)
with reporter.step("Search node with object"):
nodes = get_object_nodes(cluster=self.cluster, cid=cid, oid=oid, alive_node=self.cluster.cluster_nodes[0])
yield oid, cid, nodes[0]
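# Teardown: restore the read permission removed by the test, then delete the object and its container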
object_path, object_name = self.get_object_path_and_name_file(oid, cid, nodes[0])
nodes[0].host.get_shell().exec(f"chmod +r {object_path}/{object_name}")
delete_object(wallet=default_wallet, cid=cid, oid=oid, shell=self.shell, endpoint=self.cluster.default_rpc_endpoint)
delete_container(wallet=default_wallet, cid=cid, shell=self.shell, endpoint=self.cluster.default_rpc_endpoint)

@staticmethod
def get_shards_from_cli(node: StorageNode) -> list[Shard]:
wallet_path = node.get_remote_wallet_path()
wallet_password = node.get_wallet_password()
control_endpoint = node.get_control_endpoint()
cli_config = node.host.get_cli_config("frostfs-cli")
cli = FrostfsCli(node.host.get_shell(), cli_config.exec_path)
result = cli.shards.list(
endpoint=control_endpoint,
wallet=wallet_path,
wallet_password=wallet_password,
json_mode=True,
timeout=CLI_DEFAULT_TIMEOUT,
)
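# stdout is expected to carry the echoed command line before the JSON payload,
# hence the split on the first ">" below (assumption inferred from the parsing)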
return [Shard.from_object(shard) for shard in json.loads(result.stdout.split(">", 1)[1])]

@pytest.fixture()
def change_config_storage(self, cluster_state_controller: ClusterStateController):
with reporter.step("Change threshold error shards"):
cluster_state_controller.manager(ConfigStateManager).set_on_all_nodes(
service_type=StorageNode, values={"storage:shard_ro_error_threshold": "5"}
)
yield
with reporter.step("Restore threshold error shards"):
cluster_state_controller.manager(ConfigStateManager).revert_all()
@allure.title("All shards are available")
def test_control_shard(self, cluster: Cluster):
for storage_node in cluster.storage_nodes:
shards_from_config = storage_node.get_shards()
shards_from_cli = self.get_shards_from_cli(storage_node)
assert set(shards_from_config) == set(shards_from_cli)
@allure.title("Shard become read-only when errors exceeds threshold")
@pytest.mark.failover
def test_shard_errors(
self,
default_wallet: WalletInfo,
oid_cid_node: tuple[str, str, ClusterNode],
change_config_storage: None,
revert_all_shards_mode: None,
):
oid, cid, node = oid_cid_node
with reporter.step("Search object in system."):
object_path, object_name = self.get_object_path_and_name_file(*oid_cid_node)
with reporter.step("Block read file"):
node.host.get_shell().exec(f"chmod a-r {object_path}/{object_name}")
with reporter.step("Get object, expect 6 errors"):
for _ in range(6):
with pytest.raises(RuntimeError):
get_object(
wallet=default_wallet,
cid=cid,
oid=oid,
shell=self.shell,
endpoint=node.storage_node.get_rpc_endpoint(),
)
with reporter.step("Check shard status"):
for shard in ShardsWatcher(node).get_shards():
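# shard["blobstor"][1] is taken to be the FSTree substorage whose path
# prefixes the object's on-disk location (assumption mirrored from the lookup above)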
if shard["blobstor"][1]["path"] in object_path:
with reporter.step(f"Shard - {shard['shard_id']} to {node.host_ip}, mode - {shard['mode']}"):
assert shard["mode"] == "read-only"
break
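
A self-contained sketch (plain Python, not frostfs-node code) of the threshold behavior exercised above: with storage:shard_ro_error_threshold set to 5, the sixth read error flips the shard to read-only.

class ShardModel:
    """Toy model of a shard guarded by an error threshold."""

    def __init__(self, error_threshold: int = 5):
        self.error_threshold = error_threshold
        self.errors = 0
        self.mode = "read-write"

    def record_error(self) -> None:
        self.errors += 1
        if self.errors > self.error_threshold:
            self.mode = "read-only"

shard = ShardModel()
for _ in range(6):  # mirrors the six failed GETs in test_shard_errors
    shard.record_error()
assert shard.mode == "read-only"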


@@ -0,0 +1,133 @@
import os
import shutil
import time
from datetime import datetime, timezone
import allure
import pytest
from frostfs_testlib import reporter
from frostfs_testlib.hosting import Host
from frostfs_testlib.testing.cluster_test_base import Cluster
from frostfs_testlib.testing.parallel import parallel


def pytest_generate_tests(metafunc: pytest.Metafunc):
metafunc.fixturenames.append("repo")
metafunc.fixturenames.append("markers")
metafunc.parametrize(
"repo, markers",
[("frostfs-testcases", metafunc.config.option.markexpr)],
)
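# The hook above parametrizes every test in this module with the repository
# name and the pytest marker expression the session was collected with, so
# each log check is reported per repository and marker set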


@pytest.mark.session_logs
class TestLogs:
@pytest.mark.logs_after_session
@pytest.mark.order(1000)
@allure.title("Check logs from frostfs-testcases with marks '{request.config.option.markexpr}' - search errors")
def test_logs_search_errors(self, temp_directory: str, cluster: Cluster, session_start_time: datetime, request: pytest.FixtureRequest):
end_time = datetime.now(timezone.utc)
logs_dir = os.path.join(temp_directory, "logs")
if not os.path.exists(logs_dir):
os.makedirs(logs_dir)
regexes = [
r"\bpanic\b",
r"\boom\b",
r"too many",
r"insufficient funds",
r"insufficient amount of gas",
r"cannot assign requested address",
r"\bunable to process\b",
r"\bmaximum number of subscriptions is reached\b",
]
issues_regex = "|".join(regexes)
exclude_filter = r"too many requests"
log_level_priority = "3"  # include logs with priority 0-3 (0: emergency, 1: alert, 2: critical, 3: error)
time.sleep(2)  # short settle time so the last log entries reach the journal before collection (assumption)
futures = parallel(
self._collect_logs_on_host,
cluster.hosts,
logs_dir,
issues_regex,
session_start_time,
end_time,
exclude_filter,
priority=log_level_priority,
)
hosts_with_problems = [future.result() for future in futures if not future.exception() and future.result() is not None]
if hosts_with_problems:
self._attach_logs(logs_dir)
assert not hosts_with_problems, f"The following hosts contain critical errors in system logs: {', '.join(hosts_with_problems)}"

@pytest.mark.order(1001)
@allure.title("Check logs from frostfs-testcases with marks '{request.config.option.markexpr}' - identify sensitive data")
def test_logs_identify_sensitive_data(
self, temp_directory: str, cluster: Cluster, session_start_time: datetime, request: pytest.FixtureRequest
):
end_time = datetime.now(timezone.utc)
logs_dir = os.path.join(temp_directory, "logs")
if not os.path.exists(logs_dir):
os.makedirs(logs_dir)
_regex = {
"authorization_basic": r"basic [a-zA-Z0-9=:_\+\/-]{16,100}",
"authorization_bearer": r"bearer [a-zA-Z0-9_\-\.=:_\+\/]{16,100}",
"access_token": r"\"access_token\":\"[0-9a-z]{16}\$[0-9a-f]{32}\"",
"api_token": r"\"api_token\":\"(xox[a-zA-Z]-[a-zA-Z0-9-]+)\"",
"yadro_access_token": r"[a-zA-Z0-9_-]*:[a-zA-Z0-9_\-]+@yadro\.com*",
"SSH_privKey": r"([-]+BEGIN [^\s]+ PRIVATE KEY[-]+[\s]*[^-]*[-]+END [^\s]+ PRIVATE KEY[-]+)",
"possible_Creds": r"(?i)(" r"password\s*[`=:]+\s*[^\s]+|" r"password is\s*[`=:]+\s*[^\s]+|" r"passwd\s*[`=:]+\s*[^\s]+)",
}
issues_regex = "|".join(_regex.values())
exclude_filter = r"COMMAND=\|--\sBoot\s"
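# The exclusion drops sudo audit lines ("COMMAND=") and journald boot markers
# ("-- Boot "), which would otherwise trip the credential patterns above
# (interpretation of the regex; not stated in the source)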
time.sleep(2)
futures = parallel(
self._collect_logs_on_host,
cluster.hosts,
logs_dir,
issues_regex,
session_start_time,
end_time,
exclude_filter,
)
hosts_with_problems = [future.result() for future in futures if not future.exception() and future.result() is not None]
if hosts_with_problems:
self._attach_logs(logs_dir)
assert not hosts_with_problems, f"The following hosts contain sensitive data in system logs: {', '.join(hosts_with_problems)}"

def _collect_logs_on_host(
self,
host: Host,
logs_dir: str,
regex: str,
since: datetime,
until: datetime,
exclude_filter: str,
priority: str | None = None,
):
with reporter.step(f"Get logs from {host.config.address}"):
logs = host.get_filtered_logs(filter_regex=regex, since=since, until=until, exclude_filter=exclude_filter, priority=priority)
if not logs:
return None
with open(os.path.join(logs_dir, f"{host.config.address}.log"), "w") as file:
file.write(logs)
return host.config.address

def _attach_logs(self, logs_dir: str) -> None:
# Zip all files and attach to Allure because it is more convenient to download a single
# zip with all logs rather than mess with individual log files per service or node
logs_zip_file_path = shutil.make_archive(logs_dir, "zip", logs_dir)
reporter.attach(logs_zip_file_path, "logs.zip")
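
The collection rule both tests rely on reduces to a small sketch (plain re, hypothetical log lines; get_filtered_logs semantics assumed): a line counts as a finding when it matches the combined issues regex and does not match the exclusion filter.

import re

issues_regex = r"\bpanic\b|too many"  # abbreviated from the first test
exclude_filter = r"too many requests"

def findings(lines: list[str]) -> list[str]:
    return [
        line
        for line in lines
        if re.search(issues_regex, line) and not re.search(exclude_filter, line)
    ]

# "too many requests" is excluded even though it matches "too many"
assert findings(["node panic: oom", "too many requests"]) == ["node panic: oom"]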


@@ -1 +0,0 @@
password: ""


@@ -1,8 +1,15 @@
-robotframework==4.1.2
-requests==2.25.1
-pexpect==4.8.0
-boto3==1.16.33
-docker==4.4.0
+allure-pytest==2.13.2
+allure-python-commons==2.13.2
+base58==2.1.0
+boto3==1.35.30
-botocore==1.19.33
-urllib3==1.26.3
-base58==1.0.3
+configobj==5.0.6
+neo-mamba==1.0.0
+pexpect==4.8.0
+pyyaml==6.0.1
+pytest==7.1.2
+pytest-lazy-fixture==0.6.3
+python-dateutil==2.8.2
+requests==2.28.1
+tenacity==8.0.1
+urllib3==1.26.9

requirements_dev.txt (new file)

@@ -0,0 +1,3 @@
pre-commit==2.20.0
isort==5.12.0
pylint==2.17.4
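These pins are typically installed together (pip install -r requirements.txt -r requirements_dev.txt), after which the hook can be enabled with pre-commit install; the exact invocation is assumed, not stated in the diff.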


@@ -1,74 +0,0 @@
{
"records": [
{
"operation": "GET",
"action": "ALLOW",
"filters": [],
"targets": [
{
"role": "OTHERS"
}
]
},
{
"operation": "HEAD",
"action": "ALLOW",
"filters": [],
"targets": [
{
"role": "OTHERS"
}
]
},
{
"operation": "PUT",
"action": "ALLOW",
"filters": [],
"targets": [
{
"role": "OTHERS"
}
]
},
{
"operation": "DELETE",
"action": "ALLOW",
"filters": [],
"targets": [
{
"role": "OTHERS"
}
]
},
{
"operation": "SEARCH",
"action": "ALLOW",
"filters": [],
"targets": [
{
"role": "OTHERS"
}
]
},
{
"operation": "GETRANGE",
"action": "ALLOW",
"filters": [],
"targets": [
{
"role": "OTHERS"
}
]
},
{
"operation": "GETRANGEHASH",
"action": "ALLOW",
"filters": [],
"targets": [
{
"role": "OTHERS"
}
]
}
]
}


@@ -1,74 +0,0 @@
{
"records": [
{
"operation": "GET",
"action": "ALLOW",
"filters": [],
"targets": [
{
"role": "SYSTEM"
}
]
},
{
"operation": "HEAD",
"action": "ALLOW",
"filters": [],
"targets": [
{
"role": "SYSTEM"
}
]
},
{
"operation": "PUT",
"action": "ALLOW",
"filters": [],
"targets": [
{
"role": "SYSTEM"
}
]
},
{
"operation": "DELETE",
"action": "ALLOW",
"filters": [],
"targets": [
{
"role": "SYSTEM"
}
]
},
{
"operation": "SEARCH",
"action": "ALLOW",
"filters": [],
"targets": [
{
"role": "SYSTEM"
}
]
},
{
"operation": "GETRANGE",
"action": "ALLOW",
"filters": [],
"targets": [
{
"role": "SYSTEM"
}
]
},
{
"operation": "GETRANGEHASH",
"action": "ALLOW",
"filters": [],
"targets": [
{
"role": "SYSTEM"
}
]
}
]
}


@@ -1,74 +0,0 @@
{
"records": [
{
"operation": "GET",
"action": "ALLOW",
"filters": [],
"targets": [
{
"role": "USER"
}
]
},
{
"operation": "HEAD",
"action": "ALLOW",
"filters": [],
"targets": [
{
"role": "USER"
}
]
},
{
"operation": "PUT",
"action": "ALLOW",
"filters": [],
"targets": [
{
"role": "USER"
}
]
},
{
"operation": "DELETE",
"action": "ALLOW",
"filters": [],
"targets": [
{
"role": "USER"
}
]
},
{
"operation": "SEARCH",
"action": "ALLOW",
"filters": [],
"targets": [
{
"role": "USER"
}
]
},
{
"operation": "GETRANGE",
"action": "ALLOW",
"filters": [],
"targets": [
{
"role": "USER"
}
]
},
{
"operation": "GETRANGEHASH",
"action": "ALLOW",
"filters": [],
"targets": [
{
"role": "USER"
}
]
}
]
}

Some files were not shown because too many files have changed in this diff.