Compare commits


117 commits

Author SHA1 Message Date
0c9660fffc [#323] Update APE related entities
Signed-off-by: a.berezin <a.berezin@yadro.com>
2024-11-20 17:14:33 +03:00
8eaa511e5c [#322] Added classmethod decorator in Http client
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2024-11-18 15:07:24 +00:00
a1953684b8 [#307] added methods for testing MFA 2024-11-18 07:08:42 +00:00
451de5e07e [#320] Added shards detach function
Signed-off-by: Dmitry Anurin <danurin@yadro.com>
2024-11-14 16:22:06 +03:00
f24bfc06fd [#319] Add cached fixture feature
Signed-off-by: a.berezin <a.berezin@yadro.com>
2024-11-13 17:46:03 +03:00
47bc11835b [#318] Add tombstone expiration test
Signed-off-by: a.berezin <a.berezin@yadro.com>
2024-11-13 10:11:03 +03:00
2a90ec74ff [#317] update morph rule chain 2024-11-12 16:01:12 +03:00
95b32a036a [#316] Extend parallel exception message output
Signed-off-by: a.berezin <a.berezin@yadro.com>
2024-11-12 12:28:10 +03:00
55d8ee5da0 [#315] Add http client
Signed-off-by: Kirill Sosnovskikh <k.sosnovskikh@yadro.com>
2024-11-08 15:51:32 +03:00
ea40940514 [#313] update force_new_epoch 2024-11-05 12:37:56 +03:00
6f1baf3cf6 [#312] update morph remove_nodes 2024-11-01 15:50:17 +03:00
26139767f4 [#311] Add AWS CLI command to report from Boto3 request
Signed-off-by: Kirill Sosnovskikh <k.sosnovskikh@yadro.com>
2024-10-31 12:14:51 +00:00
3d6a356e20 [#306] Fix handling of bucket names in AWS CLI
- Add quotes around container names if they contain spaces or `-`.

Signed-off-by: Kirill Sosnovskikh <k.sosnovskikh@yadro.com>
2024-10-31 12:14:36 +00:00
e6faddedeb [#297] add morph rule chain 2024-10-31 13:00:40 +03:00
b2bf6677f1 [#310] Update test marking
Signed-off-by: a.berezin <a.berezin@yadro.com>
2024-10-25 18:52:43 +03:00
3f3be83d90 [#305] Added IAM abstract method 2024-10-25 08:07:47 +00:00
5fa58a55c0 [#304] Improve logging Boto3 IAM methods
Signed-off-by: Kirill Sosnovskikh <k.sosnovskikh@yadro.com>
2024-10-18 19:24:26 +03:00
738cfacbb7 [#300] Refactor tests: use unique_name instead of hex + timestamp
Signed-off-by: Kirill Sosnovskikh <k.sosnovskikh@yadro.com>
2024-10-14 10:09:13 +00:00
cf48f474eb [#303] add check if registry is on hdd
Signed-off-by: m.malygina <m.malygina@yadro.com>
2024-10-14 11:16:09 +03:00
2a41f2b0f6 [#301] Added interfaces for put/get lifecycle configuration to s3 clients 2024-10-11 13:35:33 +00:00
a04eba8aec [#302] Autoadd marks for frostfs
Signed-off-by: a.berezin <a.berezin@yadro.com>
2024-10-11 12:23:32 +03:00
2976e30b75 [#299] Add fuse to prevent similar names generation
Signed-off-by: a.berezin <a.berezin@yadro.com>
2024-10-07 15:59:00 +03:00
24b8ca73d7 [#291] get namespace endpoint 2024-09-18 12:30:02 +00:00
cef64e315e [#267] add no rule found object and morph chain 2024-09-18 12:29:54 +00:00
0d750ed114 [#293] Add CSC methods to change blockchain netmap and update CliWrapper
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2024-09-17 10:19:28 +00:00
1bee69042b [#294] add wipe data using wipefs method
Signed-off-by: m.malygina <m.malygina@yadro.com>
2024-09-17 09:38:03 +00:00
4a2ac8a9b6 [#290] Update restore traffic method
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2024-09-11 10:42:51 +03:00
36bfe385d5 Added method get s3 endpoint for namespace 2024-09-10 14:05:44 +00:00
565fd4c72b [#289] Move temp dir fixture to testlib
Signed-off-by: a.berezin <a.berezin@yadro.com>
2024-09-10 13:28:57 +00:00
84e83487f9 [#288] Update object and chunks Clients
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2024-09-10 13:54:51 +03:00
d2f8323fb9 [#286] Change args id in shards.set-mode command
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2024-09-03 15:11:43 +03:00
eba782e7d2 [#285] Change func search bucket nodes and remove old resolver bucket cnr
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2024-09-02 11:15:56 +00:00
85c2707ec8 [#284] Add container operations in CliWrapper
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2024-08-28 12:12:05 +03:00
0caca54e36 [#283] Fix mistakes
Signed-off-by: a.berezin <a.berezin@yadro.com>
2024-08-16 18:12:25 +03:00
8ae1b99db9 [#282] New grpc realization for object operations
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2024-08-16 10:22:21 +03:00
6926c09dbe [#281] add hostname to HostConfig
Signed-off-by: m.malygina <m.malygina@yadro.com>
2024-08-13 14:34:29 +00:00
1c2ed25929 [#280] Fix neo-go query height in steps
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2024-08-13 13:50:19 +00:00
0ba4a73db3 [#279] Add objectID filter for APE
Signed-off-by: a.berezin <a.berezin@yadro.com>
2024-08-08 18:34:46 +03:00
8a8b35846e [#278] Small QoL updates
Signed-off-by: a.berezin <a.berezin@yadro.com>
2024-08-07 18:01:03 +03:00
5bdacdf5ba [#269] Fix get contracts method 2024-08-05 12:54:31 +00:00
ae9e8d8c30 [#274] Fix iam_get_policy function 2024-08-05 12:48:58 +00:00
54b42e2d8d [#274] Fix iam_attach_group_policy function 2024-08-05 12:48:58 +00:00
ea60c2104a [#277] Minor change for shard
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2024-08-05 12:48:20 +00:00
8306a9f3ff [#276] Context manager for parallel func
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2024-08-05 12:47:29 +00:00
6b036a09b7 [#275] Add 'retry' and 'PRESET_CONTAINER_CREATION_RETRY_COUNT' variables to define max num of container creation retries 2024-08-02 11:32:02 +03:00
a983e0566e [#272] Add --generate-key flag to object operations
Signed-off-by: Kirill Sosnovskikh <k.sosnovskikh@yadro.com>
2024-07-29 13:26:47 +03:00
7a500330de [#270] Updates related to testing platform
Signed-off-by: a.berezin <a.berezin@yadro.com>
2024-07-26 16:34:47 +03:00
166e44da9c [#266] Remove duplicate messages in logs
Signed-off-by: Kirill Sosnovskikh <k.sosnovskikh@yadro.com>
2024-07-19 19:07:12 +03:00
4c0d76408c [#265] Update codeowners
Signed-off-by: a.berezin <a.berezin@yadro.com>
2024-07-18 18:21:46 +03:00
40dfd015a8 [#264] Add APE related commands
Signed-off-by: a.berezin <a.berezin@yadro.com>
2024-07-18 11:17:29 +00:00
f472d7e1ce [#261] Add error pattern no rule 2024-07-17 13:04:58 +03:00
b6a657e76c [#258] add tests for preupgrade 2024-07-17 08:49:11 +00:00
6f99aef406 [#263] Unify version parsing
Function `_parse_version` renamed to `parse_version`
and the version-parsing regex updated

Signed-off-by: Kirill Sosnovskikh <k.sosnovskikh@yadro.com>
2024-07-16 17:58:32 +03:00
996f92ffa7 [#259] Improve logging of boto3 client requests
Signed-off-by: Kirill Sosnovskikh <k.sosnovskikh@yadro.com>
2024-07-15 11:51:54 +03:00
429698944e [#256] Allow to set mix of policies for containers and buckets
Signed-off-by: a.berezin <a.berezin@yadro.com>
2024-07-03 12:02:40 +03:00
376499a7e8 [#254] Added change for EC policy
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2024-07-01 15:08:55 +00:00
f4460194bc [#252] add filter priority to get_filtered_logs method 2024-07-01 10:25:46 +00:00
3a4204f2e4 [#253] Update S3 clients and permission matrixes
Signed-off-by: a.berezin <a.berezin@yadro.com>
2024-06-28 15:18:20 +03:00
c9e4c2c7bb [#251] Update get object nodes command call
Signed-off-by: a.berezin <a.berezin@yadro.com>
2024-06-26 08:05:05 +00:00
da16f3c3a5 [#248] add metrics methods 2024-06-26 08:03:08 +00:00
f1b2fbd47b [#250] Adjustments for tests optimization
Signed-off-by: a.berezin <a.berezin@yadro.com>
2024-06-25 02:31:14 +03:00
cb31d41f15 [#247] Use TestFiles which automatically deletes itself
Signed-off-by: a.berezin <a.berezin@yadro.com>
2024-06-18 13:37:07 +03:00
7a482152a8 [#245] Update versions check
Signed-off-by: a.berezin <a.berezin@yadro.com>
2024-06-07 17:12:08 +03:00
bfd7f70b6c [#241] Methods for tag IAM user 2024-06-06 17:36:12 +00:00
10821f4c49 [#239] write cache metrics 2024-06-06 14:23:53 +00:00
5d192524a0 [#243] New error patterns 2024-06-06 15:10:36 +03:00
a3b78559a9 [#238] Update S3 acl verify method
Signed-off-by: a.berezin <a.berezin@yadro.com>
2024-06-05 14:57:47 +03:00
ec42b156ac [#236] Add EC logic to the HEAD CLI command 2024-06-05 06:52:08 +00:00
ea1b348120 [#232] grpc metrics 2024-05-31 10:33:38 +00:00
e7423938e9 [#232] Change provide methods 2024-05-30 09:12:21 +03:00
a563f089f6 [#228] metrics for object 2024-05-28 08:10:29 +00:00
37a1177a3c Added delete bucket policy method to s3 client 2024-05-22 13:57:19 +00:00
b8ce75b299 [#224] Restore invalid_obj check
Signed-off-by: a.berezin <a.berezin@yadro.com>
2024-05-16 12:47:46 +03:00
3fee7aa197 [#221] Added new control command CLI 2024-05-15 12:30:23 +00:00
3e64b52306 [#220] add container metrics 2024-05-13 13:34:37 +03:00
0306c09bed [#216] Add parameter max_total_size_gb 2024-05-06 08:17:05 +00:00
a32bd120f2 [#218] Add ns attribute for container create
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2024-05-03 17:12:54 +03:00
5b715877b3 [#214] Removed x10 wait in delete bucket function 2024-04-24 15:07:04 +03:00
c0e37c8138 [#210] Return response in complete_multipart_upload function 2024-04-23 23:51:42 +03:00
80c65b454e [#203] Remove hostnames cludges
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2024-04-22 12:31:35 +00:00
541a3e0636 [#208] Add await for search func
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2024-04-17 11:03:47 +03:00
70f0357960 [#207] Fix shards for disabled write_cache
Signed-off-by: a.berezin <a.berezin@yadro.com>
2024-04-15 16:50:54 +03:00
a85070e957 [#206] Change epoch in func set status node, to 2
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2024-04-15 12:35:33 +03:00
82a8f9bab3 [#205] Propagate SETUP_TIMEOUT option
Signed-off-by: a.berezin <a.berezin@yadro.com>
2024-04-11 11:46:04 +03:00
65ec50391e Interfaces for IAM in S3 client 2024-04-11 07:51:40 +00:00
863e74f161 [#204] Fix custom_registry for verify scenario
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2024-04-09 12:10:02 +03:00
6629b9bbaa [#202] .forgejo: Replace old DCO action
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-04-04 12:23:00 +00:00
e2a170d66e [#190] Introduce default EC placement policy
The default policy which is similar to REP 2, but uses EC instead.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-04-04 11:21:15 +00:00
338584069d [#190] Add PlacementPolicy dataclass
Allow to parametrize tests with placement policy.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-04-04 11:21:15 +00:00
9cfaf1a618 [#201] Add more time for node return
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2024-04-03 01:02:21 +03:00
076e444f84 [#198] Check only data disks for local safe-stopper
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2024-03-22 12:19:53 +03:00
653621fb7e [#197] Allow config_dir for local scenario 2024-03-20 18:59:22 +03:00
2dc5aa8a1e [#195] Update netmap parser and status check message
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2024-03-19 12:48:04 +00:00
11487e983d [#196] Removed profile name from Boto3 client 2024-03-18 20:12:40 +03:00
9c508c4f66 [#194] Fix shards watcher CLI usage
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2024-03-15 17:44:18 +03:00
f2bded64e4 [#189] Add setup step to check binaries versions
Signed-off-by: Liza <e.chichindaeva@yadro.com>
2024-03-15 16:09:02 +03:00
0e247c2ff2 [#193] Fix auth provider
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2024-03-14 16:39:20 +03:00
b323bcfd0a [#192] Fix param 2024-03-14 14:27:31 +03:00
25925c637b [#191] Credentials work overhaul
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2024-03-11 19:23:10 +03:00
09a7f66d1e [#188] Add CredentialsProvider
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2024-03-01 02:18:05 +03:00
22b41b227f [#186] Add total bytes to report
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2024-02-27 11:58:53 +03:00
f5a7ff5c90 [#185] Add prometheus load parameters 2024-02-21 18:37:48 +03:00
3fc3eaadf3 [#182] Refactoring old functions for FrostfsCli

Signed-off-by: Mikhail Kadilov <m.kadilov@yadro.com>
2024-02-20 14:51:50 +00:00
273f0d13a5 [#184] Add streaming param
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2024-02-20 13:27:45 +03:00
55cebc042c [#183] Read all configuration files for service config
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2024-02-19 17:48:09 +03:00
751381cd60 Add GenericCli utility
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2024-02-14 16:16:59 +03:00
4f3814690e [TrueCloudLab/xk6-frostfs#125] Add acl option
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2024-02-05 18:53:33 +03:00
d79fd87ede [#174] Add flag to remove registry file
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2024-02-05 12:43:09 +03:00
8ba2cb8030 [#171] Components versions check

Signed-off-by: Mikhail Kadilov <m.kadilov@yadro.com>
2024-02-01 09:12:58 +00:00
6caa77dedf [#172] parallel get remote binaries versions 2024-01-31 16:42:30 +03:00
0d7a15877c [#169] Update metrics
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2024-01-26 15:29:02 +03:00
82f9df088a [#167] Strip components for new xk6 archive and update unit tests
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2024-01-26 13:35:42 +03:00
e04fac0770 [#164] Add local flag to preset in load
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2024-01-22 19:06:38 +03:00
328e43fe67 [#162] Refactor frostfs-cli functionality
Signed-off-by: Dmitriy Zayakin <d.zayakin@yadro.com>
2024-01-22 13:11:59 +00:00
c0a25ab699 Support of custom version parameter instead of --version for all bins 2024-01-18 10:41:36 +03:00
40fa2c24cc rename local_config_path 2024-01-12 20:25:39 +03:00
be36a10f1e [#157] fix for dev-env and unit-tests
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2024-01-12 16:42:19 +00:00
106 changed files with 6395 additions and 1599 deletions

.devenv.hosting.yaml (new file, +109)

@@ -0,0 +1,109 @@
hosts:
- address: localhost
hostname: localhost
attributes:
sudo_shell: false
plugin_name: docker
healthcheck_plugin_name: basic
attributes:
skip_readiness_check: True
force_transactions: True
services:
- name: frostfs-storage_01
attributes:
container_name: s01
config_path: /etc/frostfs/storage/config.yml
wallet_path: ../frostfs-dev-env/services/storage/wallet01.json
local_wallet_config_path: ./TemporaryDir/empty-password.yml
local_wallet_path: ../frostfs-dev-env/services/storage/wallet01.json
wallet_password: ""
volume_name: storage_storage_s01
endpoint_data0: s01.frostfs.devenv:8080
control_endpoint: s01.frostfs.devenv:8081
un_locode: "RU MOW"
- name: frostfs-storage_02
attributes:
container_name: s02
config_path: /etc/frostfs/storage/config.yml
wallet_path: ../frostfs-dev-env/services/storage/wallet02.json
local_wallet_config_path: ./TemporaryDir/empty-password.yml
local_wallet_path: ../frostfs-dev-env/services/storage/wallet02.json
wallet_password: ""
volume_name: storage_storage_s02
endpoint_data0: s02.frostfs.devenv:8080
control_endpoint: s02.frostfs.devenv:8081
un_locode: "RU LED"
- name: frostfs-storage_03
attributes:
container_name: s03
config_path: /etc/frostfs/storage/config.yml
wallet_path: ../frostfs-dev-env/services/storage/wallet03.json
local_wallet_config_path: ./TemporaryDir/empty-password.yml
local_wallet_path: ../frostfs-dev-env/services/storage/wallet03.json
wallet_password: ""
volume_name: storage_storage_s03
endpoint_data0: s03.frostfs.devenv:8080
control_endpoint: s03.frostfs.devenv:8081
un_locode: "SE STO"
- name: frostfs-storage_04
attributes:
container_name: s04
config_path: /etc/frostfs/storage/config.yml
wallet_path: ../frostfs-dev-env/services/storage/wallet04.json
local_wallet_config_path: ./TemporaryDir/empty-password.yml
local_wallet_path: ../frostfs-dev-env/services/storage/wallet04.json
wallet_password: ""
volume_name: storage_storage_s04
endpoint_data0: s04.frostfs.devenv:8080
control_endpoint: s04.frostfs.devenv:8081
un_locode: "FI HEL"
- name: frostfs-s3_01
attributes:
container_name: s3_gate
config_path: ../frostfs-dev-env/services/s3_gate/.s3.env
wallet_path: ../frostfs-dev-env/services/s3_gate/wallet.json
local_wallet_config_path: ./TemporaryDir/password-s3.yml
local_wallet_path: ../frostfs-dev-env/services/s3_gate/wallet.json
wallet_password: "s3"
endpoint_data0: https://s3.frostfs.devenv:8080
- name: frostfs-http_01
attributes:
container_name: http_gate
config_path: ../frostfs-dev-env/services/http_gate/.http.env
wallet_path: ../frostfs-dev-env/services/http_gate/wallet.json
local_wallet_config_path: ./TemporaryDir/password-other.yml
local_wallet_path: ../frostfs-dev-env/services/http_gate/wallet.json
wallet_password: "one"
endpoint_data0: http://http.frostfs.devenv
- name: frostfs-ir_01
attributes:
container_name: ir01
config_path: ../frostfs-dev-env/services/ir/.ir.env
wallet_path: ../frostfs-dev-env/services/ir/az.json
local_wallet_config_path: ./TemporaryDir/password-other.yml
local_wallet_path: ../frostfs-dev-env/services/ir/az.json
wallet_password: "one"
- name: neo-go_01
attributes:
container_name: morph_chain
config_path: ../frostfs-dev-env/services/morph_chain/protocol.privnet.yml
wallet_path: ../frostfs-dev-env/services/morph_chain/node-wallet.json
local_wallet_config_path: ./TemporaryDir/password-other.yml
local_wallet_path: ../frostfs-dev-env/services/morph_chain/node-wallet.json
wallet_password: "one"
endpoint_internal0: http://morph-chain.frostfs.devenv:30333
- name: main-chain_01
attributes:
container_name: main_chain
config_path: ../frostfs-dev-env/services/chain/protocol.privnet.yml
wallet_path: ../frostfs-dev-env/services/chain/node-wallet.json
local_wallet_config_path: ./TemporaryDir/password-other.yml
local_wallet_path: ../frostfs-dev-env/services/chain/node-wallet.json
wallet_password: "one"
endpoint_internal0: http://main-chain.frostfs.devenv:30333
- name: coredns_01
attributes:
container_name: coredns
clis:
- name: frostfs-cli
exec_path: frostfs-cli
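
A minimal sketch of how a test session might consume this hosting file. The Hosting class location, its configure method and the lookup helper are assumptions about the testlib API, not taken from this diff:

    import yaml

    from frostfs_testlib.hosting import Hosting  # assumed import path

    with open(".devenv.hosting.yaml") as file:
        config = yaml.safe_load(file)

    hosting = Hosting()
    hosting.configure(config)  # assumed to wire up the docker plugin declared per host
    storage_host = hosting.get_host_by_service("frostfs-storage_01")  # hypothetical lookup helper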


@@ -0,0 +1,21 @@
name: DCO action
on: [pull_request]
jobs:
dco:
name: DCO
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Setup Go
uses: actions/setup-go@v3
with:
go-version: '1.21'
- name: Run commit format checker
uses: https://git.frostfs.info/TrueCloudLab/dco-go@v3
with:
from: 'origin/${{ github.event.pull_request.base.ref }}'

.github/CODEOWNERS (vendored, -1)

@@ -1 +0,0 @@
* @aprasolova @vdomnich-yadro @dansingjulia @yadro-vavdeev @abereziny


@@ -1,21 +0,0 @@
name: DCO check
on:
pull_request:
branches:
- master
jobs:
commits_check_job:
runs-on: ubuntu-latest
name: Commits Check
steps:
- name: Get PR Commits
id: 'get-pr-commits'
uses: tim-actions/get-pr-commits@master
with:
token: ${{ secrets.GITHUB_TOKEN }}
- name: DCO Check
uses: tim-actions/dco@master
with:
commits: ${{ steps.get-pr-commits.outputs.commits }}

CODEOWNERS (new file, +1)

@@ -0,0 +1 @@
* @JuliaKovshova @abereziny @d.zayakin @anikeev-yadro @anurindm @ylukoyan @i.niyazov


@@ -27,8 +27,8 @@ dependencies = [
"testrail-api>=1.12.0",
"pytest==7.1.2",
"tenacity==8.0.1",
-"boto3==1.16.33",
-"boto3-stubs[essential]==1.16.33",
+"boto3==1.35.30",
+"boto3-stubs[essential]==1.35.30",
]
requires-python = ">=3.10"
@@ -51,19 +51,26 @@ basic = "frostfs_testlib.healthcheck.basic_healthcheck:BasicHealthcheck"
config = "frostfs_testlib.storage.controllers.state_managers.config_state_manager:ConfigStateManager"
[project.entry-points."frostfs.testlib.services"]
-s = "frostfs_testlib.storage.dataclasses.frostfs_services:StorageNode"
-s3-gate = "frostfs_testlib.storage.dataclasses.frostfs_services:S3Gate"
-http-gate = "frostfs_testlib.storage.dataclasses.frostfs_services:HTTPGate"
-morph-chain = "frostfs_testlib.storage.dataclasses.frostfs_services:MorphChain"
-ir = "frostfs_testlib.storage.dataclasses.frostfs_services:InnerRing"
+frostfs-storage = "frostfs_testlib.storage.dataclasses.frostfs_services:StorageNode"
+frostfs-s3 = "frostfs_testlib.storage.dataclasses.frostfs_services:S3Gate"
+frostfs-http = "frostfs_testlib.storage.dataclasses.frostfs_services:HTTPGate"
+neo-go = "frostfs_testlib.storage.dataclasses.frostfs_services:MorphChain"
+frostfs-ir = "frostfs_testlib.storage.dataclasses.frostfs_services:InnerRing"
+[project.entry-points."frostfs.testlib.credentials_providers"]
+authmate = "frostfs_testlib.credentials.authmate_s3_provider:AuthmateS3CredentialsProvider"
+wallet_factory = "frostfs_testlib.credentials.wallet_factory_provider:WalletFactoryProvider"
+[project.entry-points."frostfs.testlib.bucket_cid_resolver"]
+frostfs = "frostfs_testlib.s3.curl_bucket_resolver:CurlBucketContainerResolver"
[tool.isort]
profile = "black"
src_paths = ["src", "tests"]
-line_length = 120
+line_length = 140
[tool.black]
-line-length = 120
+line-length = 140
target-version = ["py310"]
[tool.bumpver]
@@ -83,3 +90,6 @@ filterwarnings = [
"ignore:Blowfish has been deprecated:cryptography.utils.CryptographyDeprecationWarning",
]
testpaths = ["tests"]
+[project.entry-points.pytest11]
+testlib = "frostfs_testlib"
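
The new entry points can be enumerated at runtime with the standard library; a short sketch (group and name strings come straight from the diff above, the printout is only illustrative):

    from importlib.metadata import entry_points

    # Renamed service plugins: frostfs-storage, frostfs-s3, frostfs-http, neo-go, frostfs-ir.
    for ep in entry_points(group="frostfs.testlib.services"):
        print(ep.name, "->", ep.value)

    # The pytest11 entry point registers frostfs_testlib as a pytest plugin
    # automatically once the package is installed.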


@@ -8,8 +8,8 @@ docstring_parser==0.15
testrail-api==1.12.0
tenacity==8.0.1
pytest==7.1.2
-boto3==1.16.33
-boto3-stubs[essential]==1.16.33
+boto3==1.35.30
+boto3-stubs[essential]==1.35.30
# Dev dependencies
black==22.8.0


@@ -1 +1,4 @@
__version__ = "2.0.1"
+from .fixtures import configure_testlib, hosting, temp_directory
+from .hooks import pytest_collection_modifyitems
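
Combined with the pytest11 entry point in pyproject.toml, this re-export auto-registers the fixtures with pytest; a hypothetical test module can request them directly:

    # Sketch: no conftest import of the plugin is needed once the package is installed.
    def test_uses_testlib_fixtures(temp_directory, hosting):
        assert temp_directory  # temporary directory provided by the testlib fixture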


@@ -1,5 +1,5 @@
from frostfs_testlib.analytics import test_case
from frostfs_testlib.analytics.test_case import TestCasePriority
from frostfs_testlib.analytics.test_collector import TestCase, TestCaseCollector
-from frostfs_testlib.analytics.test_exporter import TestExporter
+from frostfs_testlib.analytics.test_exporter import TСExporter
from frostfs_testlib.analytics.testrail_exporter import TestrailExporter


@@ -3,7 +3,8 @@ from abc import ABC, abstractmethod
from frostfs_testlib.analytics.test_collector import TestCase
-class TestExporter(ABC):
+# TODO: REMOVE ME
+class TСExporter(ABC):
test_cases_cache = []
test_suites_cache = []
@@ -46,9 +47,7 @@ class TestExporter(ABC):
"""
@abstractmethod
-def update_test_case(
-    self, test_case: TestCase, test_case_in_tms, test_suite, test_suite_section
-) -> None:
+def update_test_case(self, test_case: TestCase, test_case_in_tms, test_suite, test_suite_section) -> None:
"""
Update test case in TMS
"""
@@ -60,9 +59,7 @@ class TestExporter(ABC):
for test_case in test_cases:
test_suite = self.get_or_create_test_suite(test_case.suite_name)
-test_section = self.get_or_create_suite_section(
-    test_suite, test_case.suite_section_name
-)
+test_section = self.get_or_create_suite_section(test_suite, test_case.suite_section_name)
test_case_in_tms = self.search_test_case_id(test_case.id)
steps = [{"content": value, "expected": " "} for key, value in test_case.steps.items()]


@@ -1,10 +1,10 @@
from testrail_api import TestRailAPI
from frostfs_testlib.analytics.test_collector import TestCase
-from frostfs_testlib.analytics.test_exporter import TestExporter
+from frostfs_testlib.analytics.test_exporter import TСExporter
-class TestrailExporter(TestExporter):
+class TestrailExporter(TСExporter):
def __init__(
self,
tr_url: str,
@@ -62,19 +62,13 @@ class TestrailExporter(TestExporter):
It's help do not call TMS each time then we search test case
"""
for test_suite in self.test_suites_cache:
-self.test_cases_cache.extend(
-    self.api.cases.get_cases(self.tr_project_id, suite_id=test_suite["id"])
-)
+self.test_cases_cache.extend(self.api.cases.get_cases(self.tr_project_id, suite_id=test_suite["id"]))
def search_test_case_id(self, test_case_id: str) -> object:
"""
Find test cases in TestRail (cache) by ID
"""
-test_cases = [
-    test_case
-    for test_case in self.test_cases_cache
-    if test_case["custom_autotest_name"] == test_case_id
-]
+test_cases = [test_case for test_case in self.test_cases_cache if test_case["custom_autotest_name"] == test_case_id]
if len(test_cases) > 1:
raise RuntimeError(f"Too many results found in test rail for id {test_case_id}")
@@ -87,9 +81,7 @@ class TestrailExporter(TestExporter):
"""
Get suite name with exact name from Testrail or create if not exist
"""
-test_rail_suites = [
-    suite for suite in self.test_suites_cache if suite["name"] == test_suite_name
-]
+test_rail_suites = [suite for suite in self.test_suites_cache if suite["name"] == test_suite_name]
if not test_rail_suites:
test_rail_suite = self.api.suites.add_suite(
@@ -102,17 +94,13 @@ class TestrailExporter(TestExporter):
elif len(test_rail_suites) == 1:
return test_rail_suites.pop()
else:
-raise RuntimeError(
-    f"Too many results found in test rail for suite name {test_suite_name}"
-)
+raise RuntimeError(f"Too many results found in test rail for suite name {test_suite_name}")
def get_or_create_suite_section(self, test_rail_suite, section_name) -> object:
"""
Get suite section with exact name from Testrail or create new one if not exist
"""
-test_rail_sections = [
-    section for section in test_rail_suite["sections"] if section["name"] == section_name
-]
+test_rail_sections = [section for section in test_rail_suite["sections"] if section["name"] == section_name]
if not test_rail_sections:
test_rail_section = self.api.sections.add_section(
@@ -128,9 +116,7 @@ class TestrailExporter(TestExporter):
elif len(test_rail_sections) == 1:
return test_rail_sections.pop()
else:
-raise RuntimeError(
-    f"Too many results found in test rail for section name {section_name}"
-)
+raise RuntimeError(f"Too many results found in test rail for section name {section_name}")
def prepare_request_body(self, test_case: TestCase, test_suite, test_suite_section) -> dict:
"""
@@ -164,9 +150,7 @@ class TestrailExporter(TestExporter):
self.api.cases.add_case(**request_body)
-def update_test_case(
-    self, test_case: TestCase, test_case_in_tms, test_suite, test_suite_section
-) -> None:
+def update_test_case(self, test_case: TestCase, test_case_in_tms, test_suite, test_suite_section) -> None:
"""
Update test case in Testrail
"""


@@ -1,4 +1,5 @@
from frostfs_testlib.cli.frostfs_adm import FrostfsAdm
from frostfs_testlib.cli.frostfs_authmate import FrostfsAuthmate
from frostfs_testlib.cli.frostfs_cli import FrostfsCli
+from frostfs_testlib.cli.generic_cli import GenericCli
from frostfs_testlib.cli.neogo import NeoGo, NetworkType


@@ -69,9 +69,7 @@ class FrostfsAdmMorph(CliCommand):
**{param: param_value for param, param_value in locals().items() if param not in ["self"]},
)
-def set_config(
-    self, set_key_value: str, rpc_endpoint: Optional[str] = None, alphabet_wallets: Optional[str] = None
-) -> CommandResult:
+def set_config(self, set_key_value: str, rpc_endpoint: Optional[str] = None, alphabet_wallets: Optional[str] = None) -> CommandResult:
"""Add/update global config value in the FrostFS network.
Args:
@@ -110,7 +108,7 @@ class FrostfsAdmMorph(CliCommand):
**{param: param_value for param, param_value in locals().items() if param not in ["self"]},
)
-def dump_hashes(self, rpc_endpoint: str) -> CommandResult:
+def dump_hashes(self, rpc_endpoint: str, domain: Optional[str] = None) -> CommandResult:
"""Dump deployed contract hashes.
Args:
@@ -125,7 +123,7 @@ class FrostfsAdmMorph(CliCommand):
)
def force_new_epoch(
-    self, rpc_endpoint: Optional[str] = None, alphabet_wallets: Optional[str] = None
+    self, rpc_endpoint: Optional[str] = None, alphabet_wallets: Optional[str] = None, delta: Optional[int] = None
) -> CommandResult:
"""Create new FrostFS epoch event in the side chain.
@@ -344,9 +342,124 @@ class FrostfsAdmMorph(CliCommand):
return self._execute(
f"morph remove-nodes {' '.join(node_netmap_keys)}",
-**{
-    param: param_value
-    for param, param_value in locals().items()
-    if param not in ["self", "node_netmap_keys"]
-},
+**{param: param_value for param, param_value in locals().items() if param not in ["self", "node_netmap_keys"]},
)
def add_rule(
self,
chain_id: str,
target_name: str,
target_type: str,
rule: Optional[list[str]] = None,
path: Optional[str] = None,
chain_id_hex: Optional[bool] = None,
chain_name: Optional[str] = None,
wallet: Optional[str] = None,
address: Optional[str] = None,
timeout: Optional[str] = None,
) -> CommandResult:
"""Drop objects from the node's local storage
Args:
chain-id: Assign ID to the parsed chain
chain-id-hex: Flag to parse chain ID as hex
path: Path to encoded chain in JSON or binary format
rule: Rule statement
target-name: Resource name in APE resource name format
target-type: Resource type(container/namespace)
timeout: Timeout for an operation (default 15s)
wallet: Path to the wallet or binary key
Returns:
Command`s result.
"""
return self._execute(
"morph ape add-rule-chain",
**{param: value for param, value in locals().items() if param not in ["self"]},
)
def get_rule(
self,
chain_id: str,
target_name: str,
target_type: str,
chain_id_hex: Optional[bool] = None,
chain_name: Optional[str] = None,
wallet: Optional[str] = None,
address: Optional[str] = None,
timeout: Optional[str] = None,
) -> CommandResult:
"""Drop objects from the node's local storage
Args:
chain-id string Chain id
chain-id-hex Flag to parse chain ID as hex
target-name string Resource name in APE resource name format
target-type string Resource type(container/namespace)
timeout duration Timeout for an operation (default 15s)
wallet string Path to the wallet or binary key
Returns:
Command`s result.
"""
return self._execute(
"morph ape get-rule-chain",
**{param: value for param, value in locals().items() if param not in ["self"]},
)
def list_rules(
self,
target_type: str,
target_name: Optional[str] = None,
rpc_endpoint: Optional[str] = None,
chain_name: Optional[str] = None,
wallet: Optional[str] = None,
address: Optional[str] = None,
timeout: Optional[str] = None,
) -> CommandResult:
"""Drop objects from the node's local storage
Args:
target-name: Resource name in APE resource name format
target-type: Resource type(container/namespace)
timeout: Timeout for an operation (default 15s)
wallet: Path to the wallet or binary key
Returns:
Command`s result.
"""
return self._execute(
"morph ape list-rule-chains",
**{param: value for param, value in locals().items() if param not in ["self"]},
)
def remove_rule(
self,
chain_id: str,
target_name: str,
target_type: str,
all: Optional[bool] = None,
chain_name: Optional[str] = None,
chain_id_hex: Optional[bool] = None,
wallet: Optional[str] = None,
address: Optional[str] = None,
timeout: Optional[str] = None,
) -> CommandResult:
"""Drop objects from the node's local storage
Args:
all: Remove all chains
chain-id: Assign ID to the parsed chain
chain-id-hex: Flag to parse chain ID as hex
target-name: Resource name in APE resource name format
target-type: Resource type(container/namespace)
timeout: Timeout for an operation (default 15s)
wallet: Path to the wallet or binary key
Returns:
Command`s result.
"""
return self._execute(
"morph ape rm-rule-chain",
**{param: value for param, value in locals().items() if param not in ["self"]},
) )
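
A hedged usage sketch for the new morph APE helpers; the FrostfsAdm constructor shape and LocalShell are assumptions, and every value below is a placeholder:

    from frostfs_testlib.cli import FrostfsAdm
    from frostfs_testlib.shell import LocalShell  # assumed import path

    adm = FrostfsAdm(LocalShell(), "frostfs-adm")  # assumed constructor shape

    # Attach, inspect and drop a rule chain on a container target.
    adm.morph.add_rule(chain_id="allow-get", target_type="container", target_name="<container>", rule=["allow Object.Get *"])
    adm.morph.list_rules(target_type="container", target_name="<container>")
    adm.morph.remove_rule(chain_id="allow-get", target_type="container", target_name="<container>")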


@@ -0,0 +1,70 @@
from typing import Optional
from frostfs_testlib.cli.cli_command import CliCommand
from frostfs_testlib.shell import CommandResult
class FrostfsCliApeManager(CliCommand):
"""Operations with APE manager."""
def add(
self,
rpc_endpoint: str,
chain_id: Optional[str] = None,
chain_id_hex: Optional[str] = None,
path: Optional[str] = None,
rule: Optional[str] | Optional[list[str]] = None,
target_name: Optional[str] = None,
target_type: Optional[str] = None,
wallet: Optional[str] = None,
address: Optional[str] = None,
timeout: Optional[str] = None,
) -> CommandResult:
"""Add rule chain for a target."""
return self._execute(
"ape-manager add",
**{param: value for param, value in locals().items() if param not in ["self"]},
)
def list(
self,
rpc_endpoint: str,
target_name: Optional[str] = None,
target_type: Optional[str] = None,
wallet: Optional[str] = None,
address: Optional[str] = None,
timeout: Optional[str] = None,
) -> CommandResult:
"""Generate APE override by target and APE chains. Util command.
Generated APE override can be dumped to a file in JSON format that is passed to
"create" command.
"""
return self._execute(
"ape-manager list",
**{param: value for param, value in locals().items() if param not in ["self"]},
)
def remove(
self,
rpc_endpoint: str,
chain_id: Optional[str] = None,
chain_id_hex: Optional[str] = None,
target_name: Optional[str] = None,
target_type: Optional[str] = None,
wallet: Optional[str] = None,
address: Optional[str] = None,
timeout: Optional[str] = None,
) -> CommandResult:
"""Generate APE override by target and APE chains. Util command.
Generated APE override can be dumped to a file in JSON format that is passed to
"create" command.
"""
return self._execute(
"ape-manager remove",
**{param: value for param, value in locals().items() if param not in ["self"]},
)
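
A usage sketch for this wrapper; cli construction follows the FrostfsCli facade diff further below, and the shell, binary path and all values are placeholders or assumptions:

    from frostfs_testlib.cli import FrostfsCli
    from frostfs_testlib.shell import LocalShell  # assumed import path

    cli = FrostfsCli(LocalShell(), "frostfs-cli", config_file="wallet-config.yml")  # assumed constructor shape
    cli.ape_manager.add(
        rpc_endpoint="s01.frostfs.devenv:8080",
        target_type="container",
        target_name="<cid>",
        rule="allow Object.Get *",
    )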


@@ -0,0 +1,54 @@
from typing import Optional
from frostfs_testlib.cli.cli_command import CliCommand
from frostfs_testlib.shell import CommandResult
class FrostfsCliBearer(CliCommand):
def create(
self,
rpc_endpoint: str,
out: str,
issued_at: Optional[str] = None,
expire_at: Optional[str] = None,
not_valid_before: Optional[str] = None,
ape: Optional[str] = None,
eacl: Optional[str] = None,
owner: Optional[str] = None,
json: Optional[bool] = False,
impersonate: Optional[bool] = False,
wallet: Optional[str] = None,
address: Optional[str] = None,
) -> CommandResult:
"""Create bearer token.
All epoch flags can be specified relative to the current epoch with the +n syntax.
In this case --rpc-endpoint flag should be specified and the epoch in bearer token
is set to current epoch + n.
"""
return self._execute(
"bearer create",
**{param: value for param, value in locals().items() if param not in ["self"]},
)
def generate_ape_override(
self,
chain_id: Optional[str] = None,
chain_id_hex: Optional[str] = None,
cid: Optional[str] = None,
output: Optional[str] = None,
path: Optional[str] = None,
rule: Optional[str] = None,
wallet: Optional[str] = None,
address: Optional[str] = None,
) -> CommandResult:
"""Generate APE override by target and APE chains. Util command.
Generated APE override can be dumped to a file in JSON format that is passed to
"create" command.
"""
return self._execute(
"bearer generate-ape-override",
**{param: value for param, value in locals().items() if param not in ["self"]},
)
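
A sketch of the new bearer commands, reusing cli from the previous sketch; per the create docstring, the relative +100 epoch form requires rpc_endpoint:

    cli.bearer.create(
        rpc_endpoint="s01.frostfs.devenv:8080",
        out="bearer.json",
        expire_at="+100",  # current epoch + 100, resolved via the RPC endpoint
        impersonate=True,
    )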


@@ -2,6 +2,8 @@ from typing import Optional
from frostfs_testlib.cli.frostfs_cli.accounting import FrostfsCliAccounting
from frostfs_testlib.cli.frostfs_cli.acl import FrostfsCliACL
+from frostfs_testlib.cli.frostfs_cli.ape_manager import FrostfsCliApeManager
+from frostfs_testlib.cli.frostfs_cli.bearer import FrostfsCliBearer
from frostfs_testlib.cli.frostfs_cli.container import FrostfsCliContainer
from frostfs_testlib.cli.frostfs_cli.control import FrostfsCliControl
from frostfs_testlib.cli.frostfs_cli.netmap import FrostfsCliNetmap
@@ -41,3 +43,5 @@ class FrostfsCli:
self.version = FrostfsCliVersion(shell, frostfs_cli_exec_path, config=config_file)
self.tree = FrostfsCliTree(shell, frostfs_cli_exec_path, config=config_file)
self.control = FrostfsCliControl(shell, frostfs_cli_exec_path, config=config_file)
+self.bearer = FrostfsCliBearer(shell, frostfs_cli_exec_path, config=config_file)
+self.ape_manager = FrostfsCliApeManager(shell, frostfs_cli_exec_path, config=config_file)
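
Continuing the sketches above, both new sub-clients now hang off one facade and share its shell and config file; values remain placeholders:

    cli.ape_manager.list(rpc_endpoint="s01.frostfs.devenv:8080", target_type="container")
    cli.bearer.generate_ape_override(chain_id="chain-id", cid="<cid>", rule="allow Object.Get *", output="ape-override.json")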


@@ -8,12 +8,16 @@ class FrostfsCliContainer(CliCommand):
def create(
self,
rpc_endpoint: str,
-wallet: str,
+wallet: Optional[str] = None,
+nns_zone: Optional[str] = None,
+nns_name: Optional[str] = None,
address: Optional[str] = None,
attributes: Optional[dict] = None,
basic_acl: Optional[str] = None,
await_mode: bool = False,
disable_timestamp: bool = False,
+force: bool = False,
+trace: bool = False,
name: Optional[str] = None,
nonce: Optional[str] = None,
policy: Optional[str] = None,
@@ -35,6 +39,8 @@ class FrostfsCliContainer(CliCommand):
basic_acl: Hex encoded basic ACL value or keywords like 'public-read-write',
'private', 'eacl-public-read' (default "private").
disable_timestamp: Disable timestamp container attribute.
+force: Skip placement validity check.
+trace: Generate trace ID and print it.
name: Container name attribute.
nonce: UUIDv4 nonce value for container.
policy: QL-encoded or JSON-encoded placement policy or path to file with it.
@@ -45,6 +51,8 @@ class FrostfsCliContainer(CliCommand):
wallet: WIF (NEP-2) string or path to the wallet or binary key.
xhdr: Dict with request X-Headers.
timeout: Timeout for the operation (default 15s).
+nns_zone: Container nns zone attribute.
+nns_name: Container nns name attribute.
Returns:
Command's result.
@@ -57,15 +65,15 @@ class FrostfsCliContainer(CliCommand):
def delete(
self,
rpc_endpoint: str,
-wallet: str,
cid: str,
+wallet: Optional[str] = None,
address: Optional[str] = None,
await_mode: bool = False,
session: Optional[str] = None,
ttl: Optional[int] = None,
xhdr: Optional[dict] = None,
force: bool = False,
-timeout: Optional[str] = None,
+trace: bool = False,
) -> CommandResult:
"""
Delete an existing container.
@@ -75,13 +83,13 @@ class FrostfsCliContainer(CliCommand):
address: Address of wallet account.
await_mode: Block execution until container is removed.
cid: Container ID.
+trace: Generate trace ID and print it.
force: Do not check whether container contains locks and remove immediately.
rpc_endpoint: Remote node address (as 'multiaddr' or '<host>:<port>').
session: Path to a JSON-encoded container session token.
ttl: TTL value in request meta header (default 2).
wallet: WIF (NEP-2) string or path to the wallet or binary key.
xhdr: Dict with request X-Headers.
-timeout: Timeout for the operation (default 15s).
Returns:
Command's result.
@@ -95,12 +103,14 @@ class FrostfsCliContainer(CliCommand):
def get(
self,
rpc_endpoint: str,
-wallet: str,
cid: str,
+wallet: Optional[str] = None,
address: Optional[str] = None,
+generate_key: Optional[bool] = None,
await_mode: bool = False,
to: Optional[str] = None,
json_mode: bool = False,
+trace: bool = False,
ttl: Optional[int] = None,
xhdr: Optional[dict] = None,
timeout: Optional[str] = None,
@@ -113,12 +123,14 @@ class FrostfsCliContainer(CliCommand):
await_mode: Block execution until container is removed.
cid: Container ID.
json_mode: Print or dump container in JSON format.
+trace: Generate trace ID and print it.
rpc_endpoint: Remote node address (as 'multiaddr' or '<host>:<port>').
to: Path to dump encoded container.
ttl: TTL value in request meta header (default 2).
wallet: WIF (NEP-2) string or path to the wallet or binary key.
xhdr: Dict with request X-Headers.
timeout: Timeout for the operation (default 15s).
+generate_key: Generate a new private key.
Returns:
Command's result.
@@ -131,9 +143,10 @@ class FrostfsCliContainer(CliCommand):
def get_eacl(
self,
rpc_endpoint: str,
-wallet: str,
cid: str,
+wallet: Optional[str] = None,
address: Optional[str] = None,
+generate_key: Optional[bool] = None,
await_mode: bool = False,
to: Optional[str] = None,
session: Optional[str] = None,
@@ -150,11 +163,14 @@ class FrostfsCliContainer(CliCommand):
cid: Container ID.
rpc_endpoint: Remote node address (as 'multiaddr' or '<host>:<port>').
to: Path to dump encoded container.
+json_mode: Print or dump container in JSON format.
+trace: Generate trace ID and print it.
session: Path to a JSON-encoded container session token.
ttl: TTL value in request meta header (default 2).
wallet: WIF (NEP-2) string or path to the wallet or binary key.
xhdr: Dict with request X-Headers.
timeout: Timeout for the operation (default 15s).
+generate_key: Generate a new private key.
Returns:
Command's result.
@@ -168,8 +184,10 @@ class FrostfsCliContainer(CliCommand):
def list(
self,
rpc_endpoint: str,
-wallet: str,
+name: Optional[str] = None,
+wallet: Optional[str] = None,
address: Optional[str] = None,
+generate_key: Optional[bool] = None,
owner: Optional[str] = None,
ttl: Optional[int] = None,
xhdr: Optional[dict] = None,
@@ -181,12 +199,15 @@ class FrostfsCliContainer(CliCommand):
Args:
address: Address of wallet account.
+name: List containers by the attribute name.
owner: Owner of containers (omit to use owner from private key).
rpc_endpoint: Remote node address (as 'multiaddr' or '<host>:<port>').
ttl: TTL value in request meta header (default 2).
wallet: WIF (NEP-2) string or path to the wallet or binary key.
xhdr: Dict with request X-Headers.
+trace: Generate trace ID and print it.
timeout: Timeout for the operation (default 15s).
+generate_key: Generate a new private key.
Returns:
Command's result.
@@ -199,9 +220,12 @@ class FrostfsCliContainer(CliCommand):
def list_objects(
self,
rpc_endpoint: str,
-wallet: str,
cid: str,
+bearer: Optional[str] = None,
+wallet: Optional[str] = None,
address: Optional[str] = None,
+generate_key: Optional[bool] = None,
+trace: bool = False,
ttl: Optional[int] = None,
xhdr: Optional[dict] = None,
timeout: Optional[str] = None,
@@ -212,11 +236,14 @@ class FrostfsCliContainer(CliCommand):
Args:
address: Address of wallet account.
cid: Container ID.
+bearer: File with signed JSON or binary encoded bearer token.
rpc_endpoint: Remote node address (as 'multiaddr' or '<host>:<port>').
ttl: TTL value in request meta header (default 2).
wallet: WIF (NEP-2) string or path to the wallet or binary key.
xhdr: Dict with request X-Headers.
+trace: Generate trace ID and print it.
timeout: Timeout for the operation (default 15s).
+generate_key: Generate a new private key.
Returns:
Command's result.
@@ -226,11 +253,12 @@ class FrostfsCliContainer(CliCommand):
**{param: value for param, value in locals().items() if param not in ["self"]},
)
+# TODO Deprecated method with 0.42
def set_eacl(
self,
rpc_endpoint: str,
-wallet: str,
cid: str,
+wallet: Optional[str] = None,
address: Optional[str] = None,
await_mode: bool = False,
table: Optional[str] = None,
@@ -266,11 +294,12 @@ class FrostfsCliContainer(CliCommand):
def search_node(
self,
rpc_endpoint: str,
-wallet: str,
cid: str,
+wallet: Optional[str] = None,
address: Optional[str] = None,
ttl: Optional[int] = None,
from_file: Optional[str] = None,
+trace: bool = False,
short: Optional[bool] = True,
xhdr: Optional[dict] = None,
generate_key: Optional[bool] = None,
@@ -288,8 +317,9 @@ class FrostfsCliContainer(CliCommand):
from_file: string File path with encoded container
timeout: duration Timeout for the operation (default 15 s)
short: shorten the output of node information.
+trace: Generate trace ID and print it.
xhdr: Dict with request X-Headers.
-generate_key: Generate a new private key
+generate_key: Generate a new private key.

Returns:
@@ -298,9 +328,5 @@ class FrostfsCliContainer(CliCommand):
return self._execute(
f"container nodes {from_str}",
-**{
-    param: value
-    for param, value in locals().items()
-    if param not in ["self", "from_file", "from_str"]
-},
+**{param: value for param, value in locals().items() if param not in ["self", "from_file", "from_str"]},
)
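
A container create sketch exercising the now-optional wallet and the new NNS attributes, reusing cli from the facade sketch above; the endpoint and policy are placeholders:

    cli.container.create(
        rpc_endpoint="s01.frostfs.devenv:8080",
        policy="REP 2",
        nns_name="my-container",
        nns_zone="container",
        await_mode=True,
    )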


@@ -39,14 +39,12 @@ class FrostfsCliControl(CliCommand):
address: Optional[str] = None,
timeout: Optional[str] = None,
) -> CommandResult:
-"""Set status of the storage node in FrostFS network map
+"""Health check for FrostFS storage nodes
Args:
wallet: Path to the wallet or binary key
address: Address of wallet account
endpoint: Remote node control address (as 'multiaddr' or '<host>:<port>')
-force: Force turning to local maintenance
-status: New netmap status keyword ('online', 'offline', 'maintenance')
timeout: Timeout for an operation (default 15s)
Returns:
@@ -56,3 +54,179 @@ class FrostfsCliControl(CliCommand):
"control healthcheck",
**{param: value for param, value in locals().items() if param not in ["self"]},
)
def drop_objects(
self,
endpoint: str,
objects: str,
wallet: Optional[str] = None,
address: Optional[str] = None,
timeout: Optional[str] = None,
) -> CommandResult:
"""Drop objects from the node's local storage
Args:
wallet: Path to the wallet or binary key
address: Address of wallet account
endpoint: Remote node control address (as 'multiaddr' or '<host>:<port>')
objects: List of object addresses to be removed in string format
timeout: Timeout for an operation (default 15s)
Returns:
Command`s result.
"""
return self._execute(
"control drop-objects",
**{param: value for param, value in locals().items() if param not in ["self"]},
)
def add_rule(
self,
endpoint: str,
chain_id: str,
target_name: str,
target_type: str,
rule: Optional[list[str]] = None,
path: Optional[str] = None,
chain_id_hex: Optional[bool] = None,
wallet: Optional[str] = None,
address: Optional[str] = None,
timeout: Optional[str] = None,
) -> CommandResult:
"""Drop objects from the node's local storage
Args:
address: Address of wallet account
chain-id: Assign ID to the parsed chain
chain-id-hex: Flag to parse chain ID as hex
endpoint: Remote node control address (as 'multiaddr' or '<host>:<port>')
path: Path to encoded chain in JSON or binary format
rule: Rule statement
target-name: Resource name in APE resource name format
target-type: Resource type(container/namespace)
timeout: Timeout for an operation (default 15s)
wallet: Path to the wallet or binary key
Returns:
Command`s result.
"""
return self._execute(
"control add-rule",
**{param: value for param, value in locals().items() if param not in ["self"]},
)
def get_rule(
self,
endpoint: str,
chain_id: str,
target_name: str,
target_type: str,
chain_id_hex: Optional[bool] = None,
wallet: Optional[str] = None,
address: Optional[str] = None,
timeout: Optional[str] = None,
) -> CommandResult:
"""Drop objects from the node's local storage
Args:
address string Address of wallet account
chain-id string Chain id
chain-id-hex Flag to parse chain ID as hex
endpoint string Remote node control address (as 'multiaddr' or '<host>:<port>')
target-name string Resource name in APE resource name format
target-type string Resource type(container/namespace)
timeout duration Timeout for an operation (default 15s)
wallet string Path to the wallet or binary key
Returns:
Command`s result.
"""
return self._execute(
"control get-rule",
**{param: value for param, value in locals().items() if param not in ["self"]},
)
def list_rules(
self,
endpoint: str,
target_name: str,
target_type: str,
wallet: Optional[str] = None,
address: Optional[str] = None,
timeout: Optional[str] = None,
) -> CommandResult:
"""Drop objects from the node's local storage
Args:
address: Address of wallet account
endpoint: Remote node control address (as 'multiaddr' or '<host>:<port>')
target-name: Resource name in APE resource name format
target-type: Resource type(container/namespace)
timeout: Timeout for an operation (default 15s)
wallet: Path to the wallet or binary key
Returns:
Command`s result.
"""
return self._execute(
"control list-rules",
**{param: value for param, value in locals().items() if param not in ["self"]},
)
def list_targets(
self,
endpoint: str,
chain_name: str,
wallet: Optional[str] = None,
address: Optional[str] = None,
timeout: Optional[str] = None,
) -> CommandResult:
"""Drop objects from the node's local storage
Args:
address: Address of wallet account
chain-name: Chain name(ingress|s3)
endpoint: Remote node control address (as 'multiaddr' or '<host>:<port>')
timeout: Timeout for an operation (default 15s)
wallet: Path to the wallet or binary key
Returns:
Command`s result.
"""
return self._execute(
"control list-targets",
**{param: value for param, value in locals().items() if param not in ["self"]},
)
def remove_rule(
self,
endpoint: str,
chain_id: str,
target_name: str,
target_type: str,
all: Optional[bool] = None,
chain_id_hex: Optional[bool] = None,
wallet: Optional[str] = None,
address: Optional[str] = None,
timeout: Optional[str] = None,
) -> CommandResult:
"""Drop objects from the node's local storage
Args:
address: Address of wallet account
all: Remove all chains
chain-id: Assign ID to the parsed chain
chain-id-hex: Flag to parse chain ID as hex
endpoint: Remote node control address (as 'multiaddr' or '<host>:<port>')
target-name: Resource name in APE resource name format
target-type: Resource type(container/namespace)
timeout: Timeout for an operation (default 15s)
wallet: Path to the wallet or binary key
Returns:
Command`s result.
"""
return self._execute(
"control remove-rule",
**{param: value for param, value in locals().items() if param not in ["self"]},
)
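
The control-plane variants mirror the morph APE helpers but talk to a node's control endpoint; a hedged sketch with placeholder values, reusing cli from above:

    cli.control.add_rule(
        endpoint="s01.frostfs.devenv:8081",
        chain_id="allow-get",
        target_type="container",
        target_name="<cid>",
        rule=["allow Object.Get *"],
    )
    cli.control.list_rules(endpoint="s01.frostfs.devenv:8081", target_type="container", target_name="<cid>")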


@@ -8,7 +8,7 @@ class FrostfsCliNetmap(CliCommand):
def epoch(
self,
rpc_endpoint: str,
-wallet: str,
+wallet: Optional[str] = None,
address: Optional[str] = None,
generate_key: bool = False,
ttl: Optional[int] = None,
@@ -38,7 +38,7 @@
def netinfo(
self,
rpc_endpoint: str,
-wallet: str,
+wallet: Optional[str] = None,
address: Optional[str] = None,
generate_key: bool = False,
ttl: Optional[int] = None,
@@ -68,7 +68,7 @@
def nodeinfo(
self,
rpc_endpoint: str,
-wallet: str,
+wallet: Optional[str] = None,
address: Optional[str] = None,
generate_key: bool = False,
json: bool = False,
@@ -100,7 +100,7 @@
def snapshot(
self,
rpc_endpoint: str,
-wallet: str,
+wallet: Optional[str] = None,
address: Optional[str] = None,
generate_key: bool = False,
ttl: Optional[int] = None,

View file

@@ -8,11 +8,12 @@ class FrostfsCliObject(CliCommand):
def delete(
self,
rpc_endpoint: str,
- wallet: str,
cid: str,
oid: str,
+ wallet: Optional[str] = None,
address: Optional[str] = None,
bearer: Optional[str] = None,
+ generate_key: Optional[bool] = None,
session: Optional[str] = None,
ttl: Optional[int] = None,
xhdr: Optional[dict] = None,
@@ -25,6 +26,7 @@ class FrostfsCliObject(CliCommand):
address: Address of wallet account.
bearer: File with signed JSON or binary encoded bearer token.
cid: Container ID.
+ generate_key: Generate new private key.
oid: Object ID.
rpc_endpoint: Remote node address (as 'multiaddr' or '<host>:<port>').
session: Filepath to a JSON- or binary-encoded token of the object DELETE session.
@@ -44,11 +46,12 @@ class FrostfsCliObject(CliCommand):
def get(
self,
rpc_endpoint: str,
- wallet: str,
cid: str,
oid: str,
+ wallet: Optional[str] = None,
address: Optional[str] = None,
bearer: Optional[str] = None,
+ generate_key: Optional[bool] = None,
file: Optional[str] = None,
header: Optional[str] = None,
no_progress: bool = False,
@@ -66,6 +69,7 @@ class FrostfsCliObject(CliCommand):
bearer: File with signed JSON or binary encoded bearer token.
cid: Container ID.
file: File to write object payload to. Default: stdout.
+ generate_key: Generate new private key.
header: File to write header to. Default: stdout.
no_progress: Do not show progress bar.
oid: Object ID.
@@ -88,11 +92,12 @@ class FrostfsCliObject(CliCommand):
def hash(
self,
rpc_endpoint: str,
- wallet: str,
cid: str,
oid: str,
+ wallet: Optional[str] = None,
address: Optional[str] = None,
bearer: Optional[str] = None,
+ generate_key: Optional[bool] = None,
range: Optional[str] = None,
salt: Optional[str] = None,
ttl: Optional[int] = None,
@@ -108,6 +113,7 @@ class FrostfsCliObject(CliCommand):
address: Address of wallet account.
bearer: File with signed JSON or binary encoded bearer token.
cid: Container ID.
+ generate_key: Generate new private key.
oid: Object ID.
range: Range to take hash from in the form offset1:length1,...
rpc_endpoint: Remote node address (as 'multiaddr' or '<host>:<port>').
@@ -124,19 +130,18 @@ class FrostfsCliObject(CliCommand):
"""
return self._execute(
"object hash",
- **{
-     param: value for param, value in locals().items() if param not in ["self", "params"]
- },
+ **{param: value for param, value in locals().items() if param not in ["self", "params"]},
)
def head(
self,
rpc_endpoint: str,
- wallet: str,
cid: str,
oid: str,
+ wallet: Optional[str] = None,
address: Optional[str] = None,
bearer: Optional[str] = None,
+ generate_key: Optional[bool] = None,
file: Optional[str] = None,
json_mode: bool = False,
main_only: bool = False,
@@ -155,6 +160,7 @@ class FrostfsCliObject(CliCommand):
bearer: File with signed JSON or binary encoded bearer token.
cid: Container ID.
file: File to write object payload to. Default: stdout.
+ generate_key: Generate new private key.
json_mode: Marshal output in JSON.
main_only: Return only main fields.
oid: Object ID.
@@ -178,13 +184,14 @@ class FrostfsCliObject(CliCommand):
def lock(
self,
rpc_endpoint: str,
- wallet: str,
cid: str,
oid: str,
+ wallet: Optional[str] = None,
lifetime: Optional[int] = None,
expire_at: Optional[int] = None,
address: Optional[str] = None,
bearer: Optional[str] = None,
+ generate_key: Optional[bool] = None,
session: Optional[str] = None,
ttl: Optional[int] = None,
xhdr: Optional[dict] = None,
@@ -197,6 +204,7 @@ class FrostfsCliObject(CliCommand):
address: Address of wallet account.
bearer: File with signed JSON or binary encoded bearer token.
cid: Container ID.
+ generate_key: Generate new private key.
oid: Object ID.
lifetime: Lock lifetime.
expire_at: Lock expiration epoch.
@@ -218,12 +226,13 @@ class FrostfsCliObject(CliCommand):
def put(
self,
rpc_endpoint: str,
- wallet: str,
cid: str,
file: str,
+ wallet: Optional[str] = None,
address: Optional[str] = None,
attributes: Optional[dict] = None,
bearer: Optional[str] = None,
+ generate_key: Optional[bool] = None,
copies_number: Optional[int] = None,
disable_filename: bool = False,
disable_timestamp: bool = False,
@@ -248,6 +257,7 @@ class FrostfsCliObject(CliCommand):
disable_timestamp: Do not set well-known timestamp attribute.
expire_at: Last epoch in the life of the object.
file: File with object payload.
+ generate_key: Generate new private key.
no_progress: Do not show progress bar.
notify: Object notification in the form of *epoch*:*topic*; '-'
topic means using default.
@@ -269,12 +279,13 @@ class FrostfsCliObject(CliCommand):
def range(
self,
rpc_endpoint: str,
- wallet: str,
cid: str,
oid: str,
range: str,
+ wallet: Optional[str] = None,
address: Optional[str] = None,
bearer: Optional[str] = None,
+ generate_key: Optional[bool] = None,
file: Optional[str] = None,
json_mode: bool = False,
raw: bool = False,
@@ -291,6 +302,7 @@ class FrostfsCliObject(CliCommand):
bearer: File with signed JSON or binary encoded bearer token.
cid: Container ID.
file: File to write object payload to. Default: stdout.
+ generate_key: Generate new private key.
json_mode: Marshal output in JSON.
oid: Object ID.
range: Range to take data from in the form offset:length.
@@ -313,10 +325,11 @@ class FrostfsCliObject(CliCommand):
def search(
self,
rpc_endpoint: str,
- wallet: str,
cid: str,
+ wallet: Optional[str] = None,
address: Optional[str] = None,
bearer: Optional[str] = None,
+ generate_key: Optional[bool] = None,
filters: Optional[list] = None,
oid: Optional[str] = None,
phy: bool = False,
@@ -334,6 +347,7 @@ class FrostfsCliObject(CliCommand):
bearer: File with signed JSON or binary encoded bearer token.
cid: Container ID.
filters: Repeated filter expressions or files with protobuf JSON.
+ generate_key: Generate new private key.
oid: Object ID.
phy: Search physically stored objects.
root: Search for user objects.
@@ -355,15 +369,16 @@ class FrostfsCliObject(CliCommand):
def nodes(
self,
rpc_endpoint: str,
- wallet: str,
cid: str,
- oid: Optional[str] = None,
+ wallet: Optional[str] = None,
address: Optional[str] = None,
bearer: Optional[str] = None,
- generate_key: Optional = None,
+ generate_key: Optional[bool] = None,
+ oid: Optional[str] = None,
trace: bool = False,
root: bool = False,
verify_presence_all: bool = False,
+ json: bool = False,
ttl: Optional[int] = None,
xhdr: Optional[dict] = None,
timeout: Optional[str] = None,

View file

@@ -9,7 +9,6 @@ class FrostfsCliSession(CliCommand):
self,
rpc_endpoint: str,
wallet: str,
- wallet_password: str,
out: str,
lifetime: Optional[int] = None,
address: Optional[str] = None,
@@ -30,12 +29,7 @@ class FrostfsCliSession(CliCommand):
Returns:
Command's result.
"""
- return self._execute_with_password(
+ return self._execute(
"session create",
- wallet_password,
- **{
-     param: value
-     for param, value in locals().items()
-     if param not in ["self", "wallet_password"]
- },
+ **{param: value for param, value in locals().items() if param not in ["self"]},
)

View file

@@ -39,10 +39,10 @@ class FrostfsCliShards(CliCommand):
def set_mode(
self,
endpoint: str,
- wallet: str,
- wallet_password: str,
mode: str,
- id: Optional[list[str]],
+ id: Optional[list[str]] = None,
+ wallet: Optional[str] = None,
+ wallet_password: Optional[str] = None,
address: Optional[str] = None,
all: bool = False,
clear_errors: bool = False,
@@ -65,6 +65,11 @@ class FrostfsCliShards(CliCommand):
Returns:
Command's result.
"""
+ if not wallet_password:
+     return self._execute(
+         "control shards set-mode",
+         **{param: value for param, value in locals().items() if param not in ["self"]},
+     )
return self._execute_with_password(
"control shards set-mode",
wallet_password,
@@ -137,3 +142,120 @@ class FrostfsCliShards(CliCommand):
wallet_password,
**{param: value for param, value in locals().items() if param not in ["self", "wallet_password"]},
)
def evacuation_start(
self,
endpoint: str,
id: Optional[str] = None,
scope: Optional[str] = None,
all: bool = False,
no_errors: bool = True,
await_mode: bool = False,
address: Optional[str] = None,
timeout: Optional[str] = None,
no_progress: bool = False,
) -> CommandResult:
"""
Objects evacuation from shard to other shards.
Args:
address: Address of wallet account
all: Process all shards
await_mode: Block execution until evacuation is completed
endpoint: Remote node control address (as 'multiaddr' or '<host>:<port>')
id: List of shard IDs in base58 encoding
no_errors: Skip invalid/unreadable objects (default true)
no_progress: Print progress if await provided
scope: Evacuation scope; possible values: trees, objects, all (default "all")
timeout: Timeout for an operation (default 15s)
Returns:
Command's result.
"""
return self._execute(
"control shards evacuation start",
**{param: value for param, value in locals().items() if param not in ["self"]},
)
def evacuation_reset(
self,
endpoint: str,
address: Optional[str] = None,
timeout: Optional[str] = None,
) -> CommandResult:
"""
Reset evacuate objects from shard to other shards status.
Args:
address: Address of wallet account
endpoint: Remote node control address (as 'multiaddr' or '<host>:<port>')
timeout: Timeout for an operation (default 15s)
Returns:
Command's result.
"""
return self._execute(
"control shards evacuation reset",
**{param: value for param, value in locals().items() if param not in ["self"]},
)
def evacuation_stop(
self,
endpoint: str,
address: Optional[str] = None,
timeout: Optional[str] = None,
) -> CommandResult:
"""
Stop running evacuate process from shard to other shards.
Args:
address: Address of wallet account
endpoint: Remote node control address (as 'multiaddr' or '<host>:<port>')
timeout: Timeout for an operation (default 15s)
Returns:
Command's result.
"""
return self._execute(
"control shards evacuation stop",
**{param: value for param, value in locals().items() if param not in ["self"]},
)
def evacuation_status(
self,
endpoint: str,
address: Optional[str] = None,
timeout: Optional[str] = None,
) -> CommandResult:
"""
Get evacuate objects from shard to other shards status.
Args:
address: Address of wallet account
endpoint: Remote node control address (as 'multiaddr' or '<host>:<port>')
timeout: Timeout for an operation (default 15s)
Returns:
Command's result.
"""
return self._execute(
"control shards evacuation status",
**{param: value for param, value in locals().items() if param not in ["self"]},
)
def detach(self, endpoint: str, address: Optional[str] = None, id: Optional[str] = None, timeout: Optional[str] = None):
"""
Detach and close the shards
Args:
address: Address of wallet account
endpoint: Remote node control address (as 'multiaddr' or '<host>:<port>')
id: List of shard IDs in base58 encoding
timeout: Timeout for an operation (default 15s)
Returns:
Command's result.
"""
return self._execute(
"control shards detach",
**{param: value for param, value in locals().items() if param not in ["self"]},
)
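
A possible end-to-end flow for the evacuation helpers above, assuming a `FrostfsCliShards` instance named `shards`; the endpoint and shard ID values are placeholders:

# Put the shard into read-only mode, evacuate it, poll status, then detach it.
shards.set_mode(endpoint="node1:8090", mode="read-only", id=["GNQpsB8rcKk"])
shards.evacuation_start(endpoint="node1:8090", id="GNQpsB8rcKk", await_mode=True)
print(shards.evacuation_status(endpoint="node1:8090").stdout)
shards.detach(endpoint="node1:8090", id="GNQpsB8rcKk")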

View file

@@ -27,3 +27,27 @@ class FrostfsCliTree(CliCommand):
"tree healthcheck",
**{param: value for param, value in locals().items() if param not in ["self"]},
)
def list(
self,
cid: str,
rpc_endpoint: Optional[str] = None,
wallet: Optional[str] = None,
timeout: Optional[str] = None,
) -> CommandResult:
"""Get Tree List
Args:
cid: Container ID.
rpc_endpoint: Remote node address (as 'multiaddr' or '<host>:<port>').
wallet: WIF (NEP-2) string or path to the wallet or binary key.
timeout: Timeout for the operation (default 15s)
Returns:
Command's result.
"""
return self._execute(
"tree list",
**{param: value for param, value in locals().items() if param not in ["self"]},
)

View file

@@ -7,9 +7,9 @@ from frostfs_testlib.shell import CommandResult
class FrostfsCliUtil(CliCommand):
def sign_bearer_token(
self,
- wallet: str,
from_file: str,
to_file: str,
+ wallet: Optional[str] = None,
address: Optional[str] = None,
json: Optional[bool] = False,
) -> CommandResult:
@@ -33,9 +33,9 @@ class FrostfsCliUtil(CliCommand):
def sign_session_token(
self,
- wallet: str,
from_file: str,
to_file: str,
+ wallet: Optional[str] = None,
address: Optional[str] = None,
) -> CommandResult:
"""
@@ -54,3 +54,11 @@ class FrostfsCliUtil(CliCommand):
"util sign session-token",
**{param: value for param, value in locals().items() if param not in ["self"]},
)
def convert_eacl(self, from_file: str, to_file: str, json: Optional[bool] = False, ape: Optional[bool] = False):
"""Convert representation of extended ACL table."""
return self._execute(
"util convert eacl",
**{param: value for param, value in locals().items() if param not in ["self"]},
)

View file

@@ -0,0 +1,30 @@
from typing import Optional
from frostfs_testlib.hosting.interfaces import Host
from frostfs_testlib.shell.interfaces import CommandOptions, Shell
class GenericCli(object):
def __init__(self, cli_name: str, host: Host) -> None:
self.host = host
self.cli_name = cli_name
def __call__(
self,
args: Optional[str] = "",
pipes: Optional[str] = "",
shell: Optional[Shell] = None,
options: Optional[CommandOptions] = None,
):
if not shell:
shell = self.host.get_shell()
cli_config = self.host.get_cli_config(self.cli_name, True)
extra_args = ""
exec_path = self.cli_name
if cli_config:
extra_args = " ".join(cli_config.extra_args)
exec_path = cli_config.exec_path
cmd = f"{exec_path} {args} {extra_args} {pipes}"
return shell.exec(cmd, options)
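
GenericCli makes any binary configured on a host callable without a dedicated wrapper class. A short sketch; the binary name is illustrative and `host` is assumed to come from the Hosting configuration:

vmstat = GenericCli("vmstat", host)
result = vmstat("-w")  # extra args and exec_path are resolved from the host CLI config when present
print(result.stdout)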

View file

@@ -1,7 +1,7 @@
import re
from frostfs_testlib.storage.cluster import ClusterNode
- from frostfs_testlib.storage.dataclasses.storage_object_info import NodeNetInfo, NodeNetmapInfo
+ from frostfs_testlib.storage.dataclasses.storage_object_info import NodeNetInfo, NodeNetmapInfo, NodeStatus
class NetmapParser:
@@ -15,6 +15,8 @@ class NetmapParser:
"epoch_duration": r"Epoch duration: (?P<epoch_duration>\d+)",
"inner_ring_candidate_fee": r"Inner Ring candidate fee: (?P<inner_ring_candidate_fee>\d+)",
"maximum_object_size": r"Maximum object size: (?P<maximum_object_size>\d+)",
+ "maximum_count_of_data_shards": r"Maximum count of data shards: (?P<maximum_count_of_data_shards>\d+)",
+ "maximum_count_of_parity_shards": r"Maximum count of parity shards: (?P<maximum_count_of_parity_shards>\d+)",
"withdrawal_fee": r"Withdrawal fee: (?P<withdrawal_fee>\d+)",
"homomorphic_hashing_disabled": r"Homomorphic hashing disabled: (?P<homomorphic_hashing_disabled>true|false)",
"maintenance_mode_allowed": r"Maintenance mode allowed: (?P<maintenance_mode_allowed>true|false)",
@@ -44,7 +46,7 @@ class NetmapParser:
regexes = {
"node_id": r"\d+: (?P<node_id>\w+)",
"node_data_ips": r"(?P<node_data_ips>/ip4/.+?)$",
- "node_status": r"(?P<node_status>ONLINE|OFFLINE)",
+ "node_status": r"(?P<node_status>ONLINE|MAINTENANCE|OFFLINE)",
"cluster_name": r"ClusterName: (?P<cluster_name>\w+)",
"continent": r"Continent: (?P<continent>\w+)",
"country": r"Country: (?P<country>\w+)",
@@ -62,14 +64,17 @@ class NetmapParser:
for node in netmap_nodes:
for key, regex in regexes.items():
search_result = re.search(regex, node, flags=re.MULTILINE)
+ if search_result == None:
+     result_netmap[key] = None
+     continue
if key == "node_data_ips":
result_netmap[key] = search_result[key].strip().split(" ")
continue
if key == "external_address":
result_netmap[key] = search_result[key].strip().split(",")
continue
- if search_result == None:
-     result_netmap[key] = None
+ if key == "node_status":
+     result_netmap[key] = NodeStatus(search_result[key].strip().lower())
continue
result_netmap[key] = search_result[key].strip()
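
Hoisting the `None` check to the top of the loop is the substantive fix here: every later branch indexes `search_result`, so under the old ordering the `node_data_ips` and `external_address` branches could raise on a missing match. A minimal illustration of the guarded lookup:

import re

m = re.search(r"(?P<node_status>ONLINE|MAINTENANCE|OFFLINE)", "line without a status")
# With the check first, a missing match degrades to None instead of raising
# TypeError on m["node_status"].
value = None if m is None else m["node_status"].strip()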

View file

@@ -0,0 +1,47 @@
import re
from typing import Optional
from frostfs_testlib import reporter
from frostfs_testlib.cli import FrostfsAuthmate
from frostfs_testlib.credentials.interfaces import S3Credentials, S3CredentialsProvider, User
from frostfs_testlib.resources.cli import FROSTFS_AUTHMATE_EXEC
from frostfs_testlib.shell import LocalShell
from frostfs_testlib.steps.cli.container import list_containers
from frostfs_testlib.storage.cluster import ClusterNode
from frostfs_testlib.storage.dataclasses.frostfs_services import S3Gate
from frostfs_testlib.utils import string_utils
class AuthmateS3CredentialsProvider(S3CredentialsProvider):
@reporter.step("Init S3 Credentials using Authmate CLI")
def provide(self, user: User, cluster_node: ClusterNode, location_constraints: Optional[str] = None) -> S3Credentials:
cluster_nodes: list[ClusterNode] = self.cluster.cluster_nodes
shell = LocalShell()
wallet = user.wallet
endpoint = cluster_node.storage_node.get_rpc_endpoint()
gate_public_keys = [node.service(S3Gate).get_wallet_public_key() for node in cluster_nodes]
# unique short bucket name
bucket = string_utils.unique_name("bucket-")
frostfs_authmate: FrostfsAuthmate = FrostfsAuthmate(shell, FROSTFS_AUTHMATE_EXEC)
issue_secret_output = frostfs_authmate.secret.issue(
wallet=wallet.path,
peer=endpoint,
gate_public_key=gate_public_keys,
wallet_password=wallet.password,
container_policy=location_constraints,
container_friendly_name=bucket,
).stdout
aws_access_key_id = str(re.search(r"access_key_id.*:\s.(?P<aws_access_key_id>\w*)", issue_secret_output).group("aws_access_key_id"))
aws_secret_access_key = str(
re.search(r"secret_access_key.*:\s.(?P<aws_secret_access_key>\w*)", issue_secret_output).group("aws_secret_access_key")
)
cid = str(re.search(r"container_id.*:\s.(?P<container_id>\w*)", issue_secret_output).group("container_id"))
containers_list = list_containers(wallet, shell, endpoint)
assert cid in containers_list, f"Expected cid {cid} in {containers_list}"
user.s3_credentials = S3Credentials(aws_access_key_id, aws_secret_access_key)
return user.s3_credentials
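
A sketch of driving this provider from a test, assuming `cluster` and `user` objects prepared by the session fixtures:

provider = AuthmateS3CredentialsProvider(cluster)
credentials = provider.provide(user, cluster.cluster_nodes[0])
# The provider also caches the result on the user object.
assert user.s3_credentials is credentials
print(credentials.access_key, credentials.secret_key)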

View file

@@ -0,0 +1,51 @@
from abc import ABC, abstractmethod
from dataclasses import dataclass, field
from typing import Any, Optional
from frostfs_testlib.plugins import load_plugin
from frostfs_testlib.storage.cluster import Cluster, ClusterNode
from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
@dataclass
class S3Credentials:
access_key: str
secret_key: str
@dataclass
class User:
name: str
attributes: dict[str, Any] = field(default_factory=dict)
wallet: WalletInfo | None = None
s3_credentials: S3Credentials | None = None
class S3CredentialsProvider(ABC):
def __init__(self, cluster: Cluster) -> None:
self.cluster = cluster
@abstractmethod
def provide(self, user: User, cluster_node: ClusterNode, location_constraints: Optional[str] = None, **kwargs) -> S3Credentials:
raise NotImplementedError("Directly called abstract class?")
class GrpcCredentialsProvider(ABC):
def __init__(self, cluster: Cluster) -> None:
self.cluster = cluster
@abstractmethod
def provide(self, user: User, cluster_node: ClusterNode, **kwargs) -> WalletInfo:
raise NotImplementedError("Directly called abstract class?")
class CredentialsProvider(object):
S3: S3CredentialsProvider
GRPC: GrpcCredentialsProvider
def __init__(self, cluster: Cluster) -> None:
config = cluster.cluster_nodes[0].host.config
s3_cls = load_plugin("frostfs.testlib.credentials_providers", config.s3_creds_plugin_name)
self.S3 = s3_cls(cluster)
grpc_cls = load_plugin("frostfs.testlib.credentials_providers", config.grpc_creds_plugin_name)
self.GRPC = grpc_cls(cluster)
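
The concrete classes are resolved at runtime through the `frostfs.testlib.credentials_providers` entry-point group, keyed by the `s3_creds_plugin_name` and `grpc_creds_plugin_name` values from the host config. A sketch of the lookup these defaults trigger:

from frostfs_testlib.plugins import load_plugin

# Resolves whatever class is registered under the "authmate" entry-point name.
s3_provider_cls = load_plugin("frostfs.testlib.credentials_providers", "authmate")
provider = s3_provider_cls(cluster)  # `cluster` assumed from the test session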

View file

@@ -0,0 +1,14 @@
from frostfs_testlib import reporter
from frostfs_testlib.credentials.interfaces import GrpcCredentialsProvider, User
from frostfs_testlib.resources.common import ASSETS_DIR, DEFAULT_WALLET_PASS
from frostfs_testlib.shell.local_shell import LocalShell
from frostfs_testlib.storage.cluster import ClusterNode
from frostfs_testlib.storage.dataclasses.wallet import WalletFactory, WalletInfo
class WalletFactoryProvider(GrpcCredentialsProvider):
@reporter.step("Init gRPC Credentials using wallet generation")
def provide(self, user: User, cluster_node: ClusterNode) -> WalletInfo:
wallet_factory = WalletFactory(ASSETS_DIR, LocalShell())
user.wallet = wallet_factory.create_wallet(file_name=user.name, password=DEFAULT_WALLET_PASS)
return user.wallet

View file

@@ -1,5 +1,5 @@
class Options:
- DEFAULT_SHELL_TIMEOUT = 90
+ DEFAULT_SHELL_TIMEOUT = 120
@staticmethod
def get_default_shell_timeout():

View file

@@ -0,0 +1,45 @@
import logging
import os
from importlib.metadata import entry_points
import pytest
import yaml
from frostfs_testlib import reporter
from frostfs_testlib.hosting.hosting import Hosting
from frostfs_testlib.resources.common import ASSETS_DIR, HOSTING_CONFIG_FILE
from frostfs_testlib.storage import get_service_registry
@pytest.fixture(scope="session")
def configure_testlib():
reporter.get_reporter().register_handler(reporter.AllureHandler())
reporter.get_reporter().register_handler(reporter.StepsLogger())
logging.getLogger("paramiko").setLevel(logging.INFO)
# Register Services for cluster
registry = get_service_registry()
services = entry_points(group="frostfs.testlib.services")
for svc in services:
registry.register_service(svc.name, svc.load())
@pytest.fixture(scope="session")
def temp_directory(configure_testlib):
with reporter.step("Prepare tmp directory"):
full_path = ASSETS_DIR
if not os.path.exists(full_path):
os.mkdir(full_path)
return full_path
@pytest.fixture(scope="session")
def hosting(configure_testlib) -> Hosting:
with open(HOSTING_CONFIG_FILE, "r") as file:
hosting_config = yaml.full_load(file)
hosting_instance = Hosting()
hosting_instance.configure(hosting_config)
return hosting_instance
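
An illustrative test consuming these session fixtures:

import os

def test_assets_dir_is_prepared(temp_directory):
    # temp_directory resolves to ASSETS_DIR, created on first use.
    assert os.path.exists(temp_directory)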

View file

@@ -47,6 +47,14 @@ class BasicHealthcheck(Healthcheck):
self._perform(cluster_node, checks)
+ @wait_for_success(900, 30, title="Wait for tree healthcheck on {cluster_node}")
+ def tree_healthcheck(self, cluster_node: ClusterNode) -> str | None:
+     checks = {
+         self._tree_healthcheck: {},
+     }
+     self._perform(cluster_node, checks)
@wait_for_success(120, 5, title="Wait for service healthcheck on {cluster_node}")
def services_healthcheck(self, cluster_node: ClusterNode):
svcs_to_check = cluster_node.services

View file

@@ -19,3 +19,7 @@ class Healthcheck(ABC):
@abstractmethod
def services_healthcheck(self, cluster_node: ClusterNode):
"""Perform service status check on target cluster node"""
+ @abstractmethod
+ def tree_healthcheck(self, cluster_node: ClusterNode):
+     """Perform tree healthcheck on target cluster node"""

View file

@@ -0,0 +1,13 @@
import pytest
@pytest.hookimpl
def pytest_collection_modifyitems(items: list[pytest.Item]):
# All tests whose nodeid contains "frostfs" are granted the frostfs marker
# (nodeid is the full path of the test), excluding:
# 1. plugins
# 2. testlib itself
for item in items:
location = item.location[0]
if "frostfs" in location and "plugin" not in location and "testlib" not in location:
item.add_marker("frostfs")

View file

@@ -10,9 +10,7 @@ class ParsedAttributes:
def parse(cls, attributes: dict[str, Any]):
# Pick attributes supported by the class
field_names = set(field.name for field in fields(cls))
- supported_attributes = {
-     key: value for key, value in attributes.items() if key in field_names
- }
+ supported_attributes = {key: value for key, value in attributes.items() if key in field_names}
return cls(**supported_attributes)
@@ -29,6 +27,7 @@ class CLIConfig:
name: str
exec_path: str
attributes: dict[str, str] = field(default_factory=dict)
+ extra_args: list[str] = field(default_factory=list)
@dataclass
@@ -61,8 +60,12 @@ class HostConfig:
"""
plugin_name: str
+ hostname: str
healthcheck_plugin_name: str
address: str
+ s3_creds_plugin_name: str = field(default="authmate")
+ grpc_creds_plugin_name: str = field(default="wallet_factory")
+ product: str = field(default="frostfs")
services: list[ServiceConfig] = field(default_factory=list)
clis: list[CLIConfig] = field(default_factory=list)
attributes: dict[str, str] = field(default_factory=dict)

View file

@@ -152,9 +152,7 @@ class DockerHost(Host):
timeout=service_attributes.start_timeout,
)
- def wait_for_service_to_be_in_state(
-     self, systemd_service_name: str, expected_state: str, timeout: int
- ) -> None:
+ def wait_for_service_to_be_in_state(self, systemd_service_name: str, expected_state: str, timeout: int) -> None:
raise NotImplementedError("Not implemented for docker")
def get_data_directory(self, service_name: str) -> str:
@@ -166,6 +164,9 @@ class DockerHost(Host):
return volume_path
+ def send_signal_to_service(self, service_name: str, signal: str) -> None:
+     raise NotImplementedError("Not implemented for docker")
def delete_metabase(self, service_name: str) -> None:
raise NotImplementedError("Not implemented for docker")
@@ -181,6 +182,18 @@ class DockerHost(Host):
def delete_pilorama(self, service_name: str) -> None:
raise NotImplementedError("Not implemented for docker")
+ def delete_file(self, file_path: str) -> None:
+     raise NotImplementedError("Not implemented for docker")
+ def is_file_exist(self, file_path: str) -> None:
+     raise NotImplementedError("Not implemented for docker")
+ def wipefs_storage_node_data(self, service_name: str) -> None:
+     raise NotImplementedError("Not implemented for docker")
+ def finish_wipefs(self, service_name: str) -> None:
+     raise NotImplementedError("Not implemented for docker")
def delete_storage_node_data(self, service_name: str, cache_only: bool = False) -> None:
volume_path = self.get_data_directory(service_name)
@@ -236,6 +249,7 @@ class DockerHost(Host):
until: Optional[datetime] = None,
unit: Optional[str] = None,
exclude_filter: Optional[str] = None,
+ priority: Optional[str] = None,
) -> str:
client = self._get_docker_client()
filtered_logs = ""
@@ -305,9 +319,7 @@ class DockerHost(Host):
return container
return None
- def _wait_for_container_to_be_in_state(
-     self, container_name: str, expected_state: str, timeout: int
- ) -> None:
+ def _wait_for_container_to_be_in_state(self, container_name: str, expected_state: str, timeout: int) -> None:
iterations = 10
iteration_wait_time = timeout / iterations

View file

@@ -54,7 +54,7 @@ class Host(ABC):
raise ValueError(f"Unknown service name: '{service_name}'")
return service_config
- def get_cli_config(self, cli_name: str) -> CLIConfig:
+ def get_cli_config(self, cli_name: str, allow_empty: bool = False) -> CLIConfig:
"""Returns config of CLI tool with specified name.
The CLI must be located on this host.
@@ -66,7 +66,7 @@ class Host(ABC):
Config of the CLI tool.
"""
cli_config = self._cli_config_by_name.get(cli_name)
- if cli_config is None:
+ if cli_config is None and not allow_empty:
raise ValueError(f"Unknown CLI name: '{cli_name}'")
return cli_config
@@ -117,6 +117,17 @@ class Host(ABC):
service_name: Name of the service to stop.
"""
+ @abstractmethod
+ def send_signal_to_service(self, service_name: str, signal: str) -> None:
+     """Send signal to service with specified name using kill -<signal>
+     The service must be hosted on this host.
+     Args:
+         service_name: Name of the service to send the signal to.
+         signal: Signal name. See kill -l for all names.
+     """
@abstractmethod
def mask_service(self, service_name: str) -> None:
"""Prevent the service from start by any activity by masking it.
@@ -178,6 +189,21 @@ class Host(ABC):
cache_only: To delete cache only.
"""
+ @abstractmethod
+ def wipefs_storage_node_data(self, service_name: str) -> None:
+     """Erases all data of the storage node with specified name.
+     Args:
+         service_name: Name of storage node service.
+     """
+ def finish_wipefs(self, service_name: str) -> None:
+     """Erases all data of the storage node with specified name.
+     Args:
+         service_name: Name of storage node service.
+     """
@abstractmethod
def delete_fstree(self, service_name: str) -> None:
"""
@@ -297,6 +323,7 @@ class Host(ABC):
until: Optional[datetime] = None,
unit: Optional[str] = None,
exclude_filter: Optional[str] = None,
+ priority: Optional[str] = None,
) -> str:
"""Get logs from host filtered by regex.
@@ -305,6 +332,8 @@ class Host(ABC):
since: If set, limits the time from which logs should be collected. Must be in UTC.
until: If set, limits the time until which logs should be collected. Must be in UTC.
unit: required unit.
+ priority: Logs level, 0 - emergency, 7 - debug. All messages with that code and higher.
+     For example, if we specify the -p 2 option, journalctl will show all messages with levels 2, 1 and 0.
Returns:
Found entries as str if any found.

View file

@@ -0,0 +1,97 @@
import json
import logging
import logging.config
import httpx
from frostfs_testlib import reporter
timeout = httpx.Timeout(60, read=150)
LOGGING_CONFIG = {
"disable_existing_loggers": False,
"version": 1,
"handlers": {"default": {"class": "logging.StreamHandler", "formatter": "http", "stream": "ext://sys.stderr"}},
"formatters": {
"http": {
"format": "%(levelname)s [%(asctime)s] %(name)s - %(message)s",
"datefmt": "%Y-%m-%d %H:%M:%S",
}
},
"loggers": {
"httpx": {
"handlers": ["default"],
"level": "DEBUG",
},
"httpcore": {
"handlers": ["default"],
"level": "ERROR",
},
},
}
logging.config.dictConfig(LOGGING_CONFIG)
logger = logging.getLogger("NeoLogger")
class HttpClient:
@reporter.step("Send {method} request to {url}")
def send(self, method: str, url: str, expected_status_code: int = None, **kwargs: dict) -> httpx.Response:
transport = httpx.HTTPTransport(verify=False, retries=5)
client = httpx.Client(timeout=timeout, transport=transport)
response = client.request(method, url, **kwargs)
self._attach_response(response)
logger.info(f"Response: {response.status_code} => {response.text}")
if expected_status_code:
assert response.status_code == expected_status_code, (
f"Got {response.status_code} response code" f" while {expected_status_code} expected"
)
return response
@classmethod
def _attach_response(cls, response: httpx.Response):
request = response.request
try:
request_headers = json.dumps(dict(request.headers), indent=4)
except json.JSONDecodeError:
request_headers = str(request.headers)
try:
request_body = request.read()
try:
request_body = request_body.decode("utf-8")
except UnicodeDecodeError as e:
request_body = f"Unable to decode binary data to text using UTF-8 encoding: {str(e)}"
except Exception as e:
request_body = f"Error reading request body: {str(e)}"
request_body = "" if request_body is None else request_body
try:
response_headers = json.dumps(dict(response.headers), indent=4)
except json.JSONDecodeError:
response_headers = str(response.headers)
report = (
f"Method: {request.method}\n\n"
f"URL: {request.url}\n\n"
f"Request Headers: {request_headers}\n\n"
f"Request Body: {request_body}\n\n"
f"Response Status Code: {response.status_code}\n\n"
f"Response Headers: {response_headers}\n\n"
f"Response Body: {response.text}\n\n"
)
curl_request = cls._create_curl_request(request.url, request.method, request.headers, request_body)
reporter.attach(report, "Requests Info")
reporter.attach(curl_request, "CURL")
@classmethod
def _create_curl_request(cls, url: str, method: str, headers: httpx.Headers, data: str) -> str:
headers = " ".join(f'-H "{name.title()}: {value}"' for name, value in headers.items())
data = f" -d '{data}'" if data else ""
# Option -k means no verify SSL
return f"curl {url} -X {method} {headers}{data} -k"

View file

@@ -1,5 +1,6 @@
from abc import ABC, abstractmethod
+ from frostfs_testlib.load.interfaces.loader import Loader
from frostfs_testlib.load.k6 import K6
from frostfs_testlib.load.load_config import LoadParams
from frostfs_testlib.storage.cluster import ClusterNode
@@ -48,3 +49,7 @@ class ScenarioRunner(ABC):
@abstractmethod
def get_results(self) -> dict:
"""Get results from K6 run"""
+ @abstractmethod
+ def get_loaders(self) -> list[Loader]:
+     """Return loaders"""

View file

@@ -50,6 +50,7 @@ class SummarizedStats:
throughput: float = field(default_factory=float)
latencies: SummarizedLatencies = field(default_factory=SummarizedLatencies)
errors: SummarizedErorrs = field(default_factory=SummarizedErorrs)
+ total_bytes: int = field(default_factory=int)
passed: bool = True
def calc_stats(self):
@@ -85,6 +86,7 @@ class SummarizedStats:
target.latencies.by_node[node_key] = operation.latency
target.throughput += operation.throughput
target.errors.threshold = load_params.error_threshold
+ target.total_bytes += operation.total_bytes
if operation.failed_iterations:
target.errors.by_node[node_key] = operation.failed_iterations

View file

@@ -4,18 +4,19 @@ import math
import os
from dataclasses import dataclass
from datetime import datetime
+ from threading import Event
from time import sleep
from typing import Any
from urllib.parse import urlparse
from frostfs_testlib import reporter
+ from frostfs_testlib.credentials.interfaces import User
from frostfs_testlib.load.interfaces.loader import Loader
from frostfs_testlib.load.load_config import K6ProcessAllocationStrategy, LoadParams, LoadScenario, LoadType
from frostfs_testlib.processes.remote_process import RemoteProcess
from frostfs_testlib.resources.common import STORAGE_USER_NAME
from frostfs_testlib.resources.load_params import K6_STOP_SIGNAL_TIMEOUT, K6_TEARDOWN_PERIOD
from frostfs_testlib.shell import Shell
- from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
from frostfs_testlib.testing.test_control import wait_for_success
EXIT_RESULT_CODE = 0
@@ -42,16 +43,16 @@ class K6:
k6_dir: str,
shell: Shell,
loader: Loader,
- wallet: WalletInfo,
+ user: User,
):
if load_params.scenario is None:
raise RuntimeError("Scenario should not be none")
- self.load_params: LoadParams = load_params
+ self.load_params = load_params
self.endpoints = endpoints
- self.loader: Loader = loader
- self.shell: Shell = shell
- self.wallet = wallet
+ self.loader = loader
+ self.shell = shell
+ self.user = user
self.preset_output: str = ""
self.summary_json: str = os.path.join(
self.load_params.working_dir,
@@ -61,26 +62,22 @@ class K6:
self._k6_dir: str = k6_dir
command = (
-     f"{self._k6_dir}/k6 run {self._generate_env_variables()} "
+     f"{self._generate_env_variables()}{self._k6_dir}/k6 run {self._generate_k6_variables()} "
    f"{self._k6_dir}/scenarios/{self.load_params.scenario.value}.js"
)
- user = STORAGE_USER_NAME if self.load_params.scenario == LoadScenario.LOCAL else None
- process_id = (
-     self.load_params.load_id
-     if self.load_params.scenario != LoadScenario.VERIFY
-     else f"{self.load_params.load_id}_verify"
- )
- self._k6_process = RemoteProcess.create(command, self.shell, self.load_params.working_dir, user, process_id)
+ remote_user = STORAGE_USER_NAME if self.load_params.scenario == LoadScenario.LOCAL else None
+ process_id = self.load_params.load_id if self.load_params.scenario != LoadScenario.VERIFY else f"{self.load_params.load_id}_verify"
+ self._k6_process = RemoteProcess.create(command, self.shell, self.load_params.working_dir, remote_user, process_id)
def _get_fill_percents(self):
- fill_percents = self.shell.exec("df -H --output=source,pcent,target | grep frostfs").stdout.split("\n")
+ fill_percents = self.shell.exec("df -H --output=source,pcent,target | grep frostfs | grep data").stdout.split("\n")
return [line.split() for line in fill_percents][:-1]
def check_fill_percent(self):
fill_percents = self._get_fill_percents()
percent_mean = 0
for line in fill_percents:
- percent_mean += float(line[1].split('%')[0])
+ percent_mean += float(line[1].split("%")[0])
percent_mean = percent_mean / len(fill_percents)
logger.info(f"{self.loader.ip} mean fill percent is {percent_mean}")
return percent_mean >= self.load_params.fill_percent
@@ -103,8 +100,8 @@ class K6:
preset_grpc: [
preset_grpc,
f"--endpoint {','.join(self.endpoints)}",
- f"--wallet {self.wallet.path} ",
- f"--config {self.wallet.config_path} ",
+ f"--wallet {self.user.wallet.path} ",
+ f"--config {self.user.wallet.config_path} ",
],
preset_s3: [
preset_s3,
@@ -125,9 +122,9 @@ class K6:
self.preset_output = result.stdout.strip("\n")
return self.preset_output
- @reporter.step("Generate K6 command")
- def _generate_env_variables(self) -> str:
-     env_vars = self.load_params.get_env_vars()
+ @reporter.step("Generate K6 variables")
+ def _generate_k6_variables(self) -> str:
+     env_vars = self.load_params.get_k6_vars()
env_vars[f"{self.load_params.load_type.value.upper()}_ENDPOINTS"] = ",".join(self.endpoints)
env_vars["SUMMARY_JSON"] = self.summary_json
@@ -135,6 +132,14 @@ class K6:
reporter.attach("\n".join(f"{param}: {value}" for param, value in env_vars.items()), "K6 ENV variables")
return " ".join([f"-e {param}='{value}'" for param, value in env_vars.items() if value is not None])
+ @reporter.step("Generate env variables")
+ def _generate_env_variables(self) -> str:
+     env_vars = self.load_params.get_env_vars()
+     if not env_vars:
+         return ""
+     reporter.attach("\n".join(f"{param}: {value}" for param, value in env_vars.items()), "ENV variables")
+     return " ".join([f"{param}='{value}'" for param, value in env_vars.items() if value is not None]) + " "
def get_start_time(self) -> datetime:
return datetime.fromtimestamp(self._k6_process.start_time())
@@ -145,7 +150,7 @@ class K6:
with reporter.step(f"Start load from loader {self.loader.ip} on endpoints {self.endpoints}"):
self._k6_process.start()
- def wait_until_finished(self, event, soft_timeout: int = 0) -> None:
+ def wait_until_finished(self, event: Event, soft_timeout: int = 0) -> None:
with reporter.step(f"Wait until load is finished from loader {self.loader.ip} on endpoints {self.endpoints}"):
if self.load_params.scenario == LoadScenario.VERIFY:
timeout = self.load_params.verify_time or 0
@@ -159,9 +164,7 @@ class K6:
remaining_time = timeout - working_time
setup_teardown_time = (
- int(K6_TEARDOWN_PERIOD)
- + self.load_params.get_init_time()
- + int(self.load_params.setup_timeout.replace("s", "").strip())
+ int(K6_TEARDOWN_PERIOD) + self.load_params.get_init_time() + int(self.load_params.setup_timeout.replace("s", "").strip())
)
remaining_time_including_setup_and_teardown = remaining_time + setup_teardown_time
timeout = remaining_time_including_setup_and_teardown
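
After the split, plain process-level variables are prepended to the shell command while k6 scenario variables stay after `run` as -e flags. An illustrative command line (values invented):

# K6_PROMETHEUS_RW_SERVER_URL='http://prometheus:9090' /opt/k6/k6 run \
#     -e DURATION='300' -e WRITE_OBJ_SIZE='1024' -e GRPC_ENDPOINTS='node1:8080' \
#     /opt/k6/scenarios/grpc.js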

View file

@ -25,6 +25,16 @@ def convert_time_to_seconds(time: int | str | None) -> int:
return seconds return seconds
def force_list(input: str | list[str]):
if input is None:
return None
if isinstance(input, list):
return list(map(str.strip, input))
return [input.strip()]
class LoadType(Enum): class LoadType(Enum):
gRPC = "grpc" gRPC = "grpc"
S3 = "s3" S3 = "s3"
@ -94,16 +104,18 @@ def metadata_field(
string_repr: Optional[bool] = True, string_repr: Optional[bool] = True,
distributed: Optional[bool] = False, distributed: Optional[bool] = False,
formatter: Optional[Callable] = None, formatter: Optional[Callable] = None,
env_variable: Optional[str] = None,
): ):
return field( return field(
default=None, default=None,
metadata={ metadata={
"applicable_scenarios": applicable_scenarios, "applicable_scenarios": applicable_scenarios,
"preset_argument": preset_param, "preset_argument": preset_param,
"env_variable": scenario_variable, "scenario_variable": scenario_variable,
"string_repr": string_repr, "string_repr": string_repr,
"distributed": distributed, "distributed": distributed,
"formatter": formatter, "formatter": formatter,
"env_variable": env_variable,
}, },
) )
@ -117,6 +129,8 @@ class NodesSelectionStrategy(Enum):
ALL_EXCEPT_UNDER_TEST = "ALL_EXCEPT_UNDER_TEST" ALL_EXCEPT_UNDER_TEST = "ALL_EXCEPT_UNDER_TEST"
# Select ONE random node except under test (useful for failover). # Select ONE random node except under test (useful for failover).
RANDOM_SINGLE_EXCEPT_UNDER_TEST = "RANDOM_SINGLE_EXCEPT_UNDER_TEST" RANDOM_SINGLE_EXCEPT_UNDER_TEST = "RANDOM_SINGLE_EXCEPT_UNDER_TEST"
# Select node under test
NODE_UNDER_TEST = "NODE_UNDER_TEST"
class EndpointSelectionStrategy(Enum): class EndpointSelectionStrategy(Enum):
@ -138,8 +152,29 @@ class K6ProcessAllocationStrategy(Enum):
PER_ENDPOINT = "PER_ENDPOINT" PER_ENDPOINT = "PER_ENDPOINT"
class MetaConfig:
def _get_field_formatter(self, field_name: str) -> Callable | None:
data_fields = fields(self)
formatters = [
field.metadata["formatter"]
for field in data_fields
if field.name == field_name and "formatter" in field.metadata and field.metadata["formatter"] != None
]
if formatters:
return formatters[0]
return None
def __setattr__(self, field_name, value):
formatter = self._get_field_formatter(field_name)
if formatter:
value = formatter(value)
super().__setattr__(field_name, value)
@dataclass @dataclass
class Preset: class Preset(MetaConfig):
# ------ COMMON ------ # ------ COMMON ------
# Amount of objects which should be created # Amount of objects which should be created
objects_count: Optional[int] = metadata_field(all_load_scenarios, "preload_obj", None, False) objects_count: Optional[int] = metadata_field(all_load_scenarios, "preload_obj", None, False)
@ -147,18 +182,22 @@ class Preset:
pregen_json: Optional[str] = metadata_field(all_load_scenarios, "out", "PREGEN_JSON", False) pregen_json: Optional[str] = metadata_field(all_load_scenarios, "out", "PREGEN_JSON", False)
# Workers count for preset # Workers count for preset
workers: Optional[int] = metadata_field(all_load_scenarios, "workers", None, False) workers: Optional[int] = metadata_field(all_load_scenarios, "workers", None, False)
# Acl for container/buckets
acl: Optional[str] = metadata_field(all_load_scenarios, "acl", None, False)
# ------ GRPC ------ # ------ GRPC ------
# Amount of containers which should be created # Amount of containers which should be created
containers_count: Optional[int] = metadata_field(grpc_preset_scenarios, "containers", None, False) containers_count: Optional[int] = metadata_field(grpc_preset_scenarios, "containers", None, False)
# Container placement policy for containers for gRPC # Container placement policy for containers for gRPC
container_placement_policy: Optional[str] = metadata_field(grpc_preset_scenarios, "policy", None, False) container_placement_policy: Optional[list[str]] = metadata_field(grpc_preset_scenarios, "policy", None, False, formatter=force_list)
# Number of retries for creation of container
container_creation_retry: Optional[int] = metadata_field(grpc_preset_scenarios, "retry", None, False)
# ------ S3 ------ # ------ S3 ------
# Amount of buckets which should be created # Amount of buckets which should be created
buckets_count: Optional[int] = metadata_field(s3_preset_scenarios, "buckets", None, False) buckets_count: Optional[int] = metadata_field(s3_preset_scenarios, "buckets", None, False)
# S3 region (AKA placement policy for S3 buckets) # S3 region (AKA placement policy for S3 buckets)
s3_location: Optional[str] = metadata_field(s3_preset_scenarios, "location", None, False) s3_location: Optional[list[str]] = metadata_field(s3_preset_scenarios, "location", None, False, formatter=force_list)
# Delay between containers creation and object upload for preset # Delay between containers creation and object upload for preset
object_upload_delay: Optional[int] = metadata_field(all_load_scenarios, "sleep", None, False) object_upload_delay: Optional[int] = metadata_field(all_load_scenarios, "sleep", None, False)
@ -166,9 +205,22 @@ class Preset:
# Flag to control preset erorrs # Flag to control preset erorrs
ignore_errors: Optional[bool] = metadata_field(all_load_scenarios, "ignore-errors", None, False) ignore_errors: Optional[bool] = metadata_field(all_load_scenarios, "ignore-errors", None, False)
# Flag to ensure created containers store data on local endpoints
local: Optional[bool] = metadata_field(grpc_preset_scenarios, "local", None, False)
@dataclass @dataclass
class LoadParams: class PrometheusParams(MetaConfig):
# Prometheus server URL
server_url: Optional[str] = metadata_field(all_load_scenarios, env_variable="K6_PROMETHEUS_RW_SERVER_URL", string_repr=False)
# Prometheus trend stats
trend_stats: Optional[str] = metadata_field(all_load_scenarios, env_variable="K6_PROMETHEUS_RW_TREND_STATS", string_repr=False)
# Additional tags
metrics_tags: Optional[str] = metadata_field(all_load_scenarios, None, "METRIC_TAGS", False)
@dataclass
class LoadParams(MetaConfig):
# ------- CONTROL PARAMS ------- # ------- CONTROL PARAMS -------
# Load type can be gRPC, HTTP, S3. # Load type can be gRPC, HTTP, S3.
load_type: LoadType load_type: LoadType
@@ -216,12 +268,18 @@ class LoadParams:
    )
    # Percentage of filling of all data disks on all nodes
    fill_percent: Optional[float] = None
+    # If specified, the maximum payload size in GB of the storage engine. If the storage engine is already full, no new objects will be saved.
+    max_total_size_gb: Optional[float] = metadata_field([LoadScenario.LOCAL, LoadScenario.S3_LOCAL], None, "MAX_TOTAL_SIZE_GB")
+    # If set, the payload is generated on the fly and is not read into memory fully.
+    streaming: Optional[int] = metadata_field(all_load_scenarios, None, "STREAMING", False)
+    # Output format
+    output: Optional[str] = metadata_field(all_load_scenarios, None, "K6_OUT", False)
+    # Prometheus params
+    prometheus: Optional[PrometheusParams] = None
    # ------- COMMON SCENARIO PARAMS -------
    # Load time is the maximum duration for k6 to give load. Default is the BACKGROUND_LOAD_DEFAULT_TIME value.
-    load_time: Optional[int] = metadata_field(
-        all_load_scenarios, None, "DURATION", False, formatter=convert_time_to_seconds
-    )
+    load_time: Optional[int] = metadata_field(all_load_scenarios, None, "DURATION", False, formatter=convert_time_to_seconds)
    # Object size in KB for load and preset.
    object_size: Optional[int] = metadata_field(all_load_scenarios, "size", "WRITE_OBJ_SIZE", False)
    # For read operations, controls from which set to get objects to read
@@ -232,14 +290,14 @@ class LoadParams:
    registry_file: Optional[str] = metadata_field(all_scenarios, None, "REGISTRY_FILE", False)
    # In case we want to use a custom registry file left from another load run
    custom_registry: Optional[str] = None
+    # In case we want to remove a leftover registry file and start from a fresh one
+    force_fresh_registry: Optional[bool] = None
    # Specifies the minimum duration of every single execution (i.e. iteration).
    # Any iterations that are shorter than this value will cause that VU to
    # sleep for the remainder of the time until the specified minimum duration is reached.
    min_iteration_duration: Optional[str] = metadata_field(all_load_scenarios, None, "K6_MIN_ITERATION_DURATION", False)
    # Prepare/cut objects locally on client before sending
-    prepare_locally: Optional[bool] = metadata_field(
-        [LoadScenario.gRPC, LoadScenario.gRPC_CAR], None, "PREPARE_LOCALLY", False
-    )
+    prepare_locally: Optional[bool] = metadata_field([LoadScenario.gRPC, LoadScenario.gRPC_CAR], None, "PREPARE_LOCALLY", False)
    # Specifies K6 setupTimeout time. Currently hardcoded in xk6 as 5 seconds for all scenarios
    # https://k6.io/docs/using-k6/k6-options/reference/#setup-timeout
    setup_timeout: Optional[str] = metadata_field(all_scenarios, None, "K6_SETUP_TIMEOUT", False)
@@ -269,35 +327,25 @@ class LoadParams:
    delete_rate: Optional[int] = metadata_field(constant_arrival_rate_scenarios, None, "DELETE_RATE", True, True)
    # Amount of preAllocatedVUs for write operations.
-    preallocated_writers: Optional[int] = metadata_field(
-        constant_arrival_rate_scenarios, None, "PRE_ALLOC_WRITERS", True, True
-    )
+    preallocated_writers: Optional[int] = metadata_field(constant_arrival_rate_scenarios, None, "PRE_ALLOC_WRITERS", True, True)
    # Amount of maxVUs for write operations.
    max_writers: Optional[int] = metadata_field(constant_arrival_rate_scenarios, None, "MAX_WRITERS", False, True)
    # Amount of preAllocatedVUs for read operations.
-    preallocated_readers: Optional[int] = metadata_field(
-        constant_arrival_rate_scenarios, None, "PRE_ALLOC_READERS", True, True
-    )
+    preallocated_readers: Optional[int] = metadata_field(constant_arrival_rate_scenarios, None, "PRE_ALLOC_READERS", True, True)
    # Amount of maxVUs for read operations.
    max_readers: Optional[int] = metadata_field(constant_arrival_rate_scenarios, None, "MAX_READERS", False, True)
    # Amount of preAllocatedVUs for delete operations.
-    preallocated_deleters: Optional[int] = metadata_field(
-        constant_arrival_rate_scenarios, None, "PRE_ALLOC_DELETERS", True, True
-    )
+    preallocated_deleters: Optional[int] = metadata_field(constant_arrival_rate_scenarios, None, "PRE_ALLOC_DELETERS", True, True)
    # Amount of maxVUs for delete operations.
    max_deleters: Optional[int] = metadata_field(constant_arrival_rate_scenarios, None, "MAX_DELETERS", False, True)

    # Multipart
    # Number of parts to upload in parallel
-    writers_multipart: Optional[int] = metadata_field(
-        [LoadScenario.S3_MULTIPART], None, "WRITERS_MULTIPART", False, True
-    )
+    writers_multipart: Optional[int] = metadata_field([LoadScenario.S3_MULTIPART], None, "WRITERS_MULTIPART", False, True)
    # Part size must be greater than 5 MB
-    write_object_part_size: Optional[int] = metadata_field(
-        [LoadScenario.S3_MULTIPART], None, "WRITE_OBJ_PART_SIZE", False
-    )
+    write_object_part_size: Optional[int] = metadata_field([LoadScenario.S3_MULTIPART], None, "WRITE_OBJ_PART_SIZE", False)
    # Period of time to apply the rate value.
    time_unit: Optional[str] = metadata_field(constant_arrival_rate_scenarios, None, "TIME_UNIT", False)
@@ -312,7 +360,7 @@ class LoadParams:
    # Config file location (filled automatically)
    config_file: Optional[str] = metadata_field([LoadScenario.LOCAL, LoadScenario.S3_LOCAL], None, "CONFIG_FILE", False)
    # Config directory location (filled automatically)
-    config_dir: Optional[str] = metadata_field([LoadScenario.S3_LOCAL], None, "CONFIG_DIR", False)
+    config_dir: Optional[str] = metadata_field([LoadScenario.LOCAL, LoadScenario.S3_LOCAL], None, "CONFIG_DIR", False)

    def set_id(self, load_id):
        self.load_id = load_id
@@ -330,6 +378,17 @@ class LoadParams:
        if self.preset:
            self.preset.pregen_json = os.path.join(self.working_dir, f"{load_id}_prepare.json")

+    def get_k6_vars(self):
+        env_vars = {
+            meta_field.metadata["scenario_variable"]: meta_field.value
+            for meta_field in self._get_meta_fields(self)
+            if self.scenario in meta_field.metadata["applicable_scenarios"]
+            and meta_field.metadata["scenario_variable"]
+            and meta_field.value is not None
+        }
+        return env_vars
+
    def get_env_vars(self):
        env_vars = {
            meta_field.metadata["env_variable"]: meta_field.value
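This metadata-driven pattern keeps scenario wiring declarative: each dataclass field carries its k6 variable name and applicable scenarios in `dataclasses.field` metadata, and `get_k6_vars` simply filters on them. A minimal self-contained sketch of the idea (simplified; the names here are illustrative, not the library's exact API):

from dataclasses import dataclass, field, fields
from typing import Optional

def metadata_field(scenarios: list[str], scenario_variable: Optional[str] = None):
    # Store routing info in the field metadata instead of hardcoding it in methods
    return field(default=None, metadata={"applicable_scenarios": scenarios, "scenario_variable": scenario_variable})

@dataclass
class Params:
    scenario: str = "grpc"
    duration: Optional[int] = metadata_field(["grpc", "s3"], "DURATION")
    server_url: Optional[str] = metadata_field(["s3"], "K6_PROMETHEUS_RW_SERVER_URL")

    def get_k6_vars(self) -> dict:
        return {
            f.metadata["scenario_variable"]: getattr(self, f.name)
            for f in fields(self)
            if f.metadata and self.scenario in f.metadata["applicable_scenarios"] and getattr(self, f.name) is not None
        }

params = Params(duration=300)
assert params.get_k6_vars() == {"DURATION": 300}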
@@ -386,6 +445,11 @@ class LoadParams:
        # For preset calls, bool values are passed with just --<argument_name> if the value is True
        return f"--{meta_field.metadata['preset_argument']}" if meta_field.value else ""

+        if isinstance(meta_field.value, list):
+            return (
+                " ".join(f"--{meta_field.metadata['preset_argument']} '{value}'" for value in meta_field.value) if meta_field.value else ""
+            )
+
        return f"--{meta_field.metadata['preset_argument']} '{meta_field.value}'"

    @staticmethod
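With the list branch above in place, a multi-valued field is rendered as one repeated preset flag per element. For illustration, with hypothetical policy values:

policies = ["REP 2 IN X", "REP 1 IN Y"]
flags = " ".join(f"--policy '{value}'" for value in policies)
assert flags == "--policy 'REP 2 IN X' --policy 'REP 1 IN Y'"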
@@ -405,25 +469,6 @@ class LoadParams:
        return fields_with_data or []

-    def _get_field_formatter(self, field_name: str) -> Callable | None:
-        data_fields = fields(self)
-        formatters = [
-            field.metadata["formatter"]
-            for field in data_fields
-            if field.name == field_name and "formatter" in field.metadata and field.metadata["formatter"] != None
-        ]
-        if formatters:
-            return formatters[0]
-
-        return None
-
-    def __setattr__(self, field_name, value):
-        formatter = self._get_field_formatter(field_name)
-        if formatter:
-            value = formatter(value)
-
-        super().__setattr__(field_name, value)
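These removed helpers likely have a new home rather than being dropped: since `LoadParams` now inherits from `MetaConfig` (see above), the formatter lookup and the `__setattr__` hook were presumably lifted into that shared base so `PrometheusParams` gets the same normalization. A speculative sketch of such a base class, not the library's verbatim code:

from dataclasses import dataclass, fields
from typing import Callable, Optional

@dataclass
class MetaConfig:
    def _get_field_formatter(self, field_name: str) -> Optional[Callable]:
        # Look up an optional "formatter" callable stored in the dataclass field metadata
        for f in fields(self):
            if f.name == field_name and f.metadata.get("formatter") is not None:
                return f.metadata["formatter"]
        return None

    def __setattr__(self, field_name, value):
        # Normalize values (e.g. via force_list) on every assignment, including __init__
        formatter = self._get_field_formatter(field_name)
        if formatter:
            value = formatter(value)
        super().__setattr__(field_name, value)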
    def __str__(self) -> str:
        load_type_str = self.scenario.value if self.scenario else self.load_type.value
        # TODO: migrate load_params defaults to testlib
@@ -434,9 +479,7 @@ class LoadParams:
        static_params = [f"{load_type_str}"]
        dynamic_params = [
-            f"{meta_field.name}={meta_field.value}"
-            for meta_field in self._get_applicable_fields()
-            if meta_field.metadata["string_repr"]
+            f"{meta_field.name}={meta_field.value}" for meta_field in self._get_applicable_fields() if meta_field.metadata["string_repr"]
        ]
        params = ", ".join(static_params + dynamic_params)


@@ -39,6 +39,10 @@ class OperationMetric(ABC):
    def throughput(self) -> float:
        return self._get_metric_rate(self._THROUGHPUT)

+    @property
+    def total_bytes(self) -> float:
+        return self._get_metric(self._THROUGHPUT)
+
    def _get_metric(self, metric: str) -> int:
        metrics_method_map = {
            "counter": self._get_counter_metric,
@@ -107,66 +111,66 @@ class DeleteOperationMetric(OperationMetric):

class GrpcWriteOperationMetric(WriteOperationMetric):
-    _SUCCESS = "frostfs_obj_put_total"
+    _SUCCESS = "frostfs_obj_put_success"
    _ERRORS = "frostfs_obj_put_fails"
    _LATENCY = "frostfs_obj_put_duration"


class GrpcReadOperationMetric(ReadOperationMetric):
-    _SUCCESS = "frostfs_obj_get_total"
+    _SUCCESS = "frostfs_obj_get_success"
    _ERRORS = "frostfs_obj_get_fails"
    _LATENCY = "frostfs_obj_get_duration"


class GrpcDeleteOperationMetric(DeleteOperationMetric):
-    _SUCCESS = "frostfs_obj_delete_total"
+    _SUCCESS = "frostfs_obj_delete_success"
    _ERRORS = "frostfs_obj_delete_fails"
    _LATENCY = "frostfs_obj_delete_duration"


class S3WriteOperationMetric(WriteOperationMetric):
-    _SUCCESS = "aws_obj_put_total"
+    _SUCCESS = "aws_obj_put_success"
    _ERRORS = "aws_obj_put_fails"
    _LATENCY = "aws_obj_put_duration"


class S3ReadOperationMetric(ReadOperationMetric):
-    _SUCCESS = "aws_obj_get_total"
+    _SUCCESS = "aws_obj_get_success"
    _ERRORS = "aws_obj_get_fails"
    _LATENCY = "aws_obj_get_duration"


class S3DeleteOperationMetric(DeleteOperationMetric):
-    _SUCCESS = "aws_obj_delete_total"
+    _SUCCESS = "aws_obj_delete_success"
    _ERRORS = "aws_obj_delete_fails"
    _LATENCY = "aws_obj_delete_duration"


class S3LocalWriteOperationMetric(WriteOperationMetric):
-    _SUCCESS = "s3local_obj_put_total"
+    _SUCCESS = "s3local_obj_put_success"
    _ERRORS = "s3local_obj_put_fails"
    _LATENCY = "s3local_obj_put_duration"


class S3LocalReadOperationMetric(ReadOperationMetric):
-    _SUCCESS = "s3local_obj_get_total"
+    _SUCCESS = "s3local_obj_get_success"
    _ERRORS = "s3local_obj_get_fails"
    _LATENCY = "s3local_obj_get_duration"


class LocalWriteOperationMetric(WriteOperationMetric):
-    _SUCCESS = "local_obj_put_total"
+    _SUCCESS = "local_obj_put_success"
    _ERRORS = "local_obj_put_fails"
    _LATENCY = "local_obj_put_duration"


class LocalReadOperationMetric(ReadOperationMetric):
-    _SUCCESS = "local_obj_get_total"
+    _SUCCESS = "local_obj_get_success"
    _ERRORS = "local_obj_get_fails"


class LocalDeleteOperationMetric(DeleteOperationMetric):
-    _SUCCESS = "local_obj_delete_total"
+    _SUCCESS = "local_obj_delete_success"
    _ERRORS = "local_obj_delete_fails"


@@ -120,6 +120,11 @@ class LoadReport:
        throughput, unit = calc_unit(stats.throughput)
        throughput_html = self._row("Throughput", f"{throughput:.2f} {unit}/sec")

+        bytes_html = ""
+        if stats.total_bytes > 0:
+            total_bytes, total_bytes_unit = calc_unit(stats.total_bytes)
+            bytes_html = self._row("Total transferred", f"{total_bytes:.2f} {total_bytes_unit}")
+
        per_node_errors_html = ""
        for node_key, errors in stats.errors.by_node.items():
            if self.load_params.k6_process_allocation_strategy == K6ProcessAllocationStrategy.PER_ENDPOINT:

@@ -148,6 +153,7 @@ class LoadReport:
            <tr><th colspan="2" bgcolor="gainsboro">Metrics</th></tr>
            {self._row("Total operations", stats.operations)}
            {self._row("OP/sec", f"{stats.rate:.2f}")}
+            {bytes_html}
            {throughput_html}
            {latency_html}
            <tr><th colspan="2" bgcolor="gainsboro">Errors</th></tr>
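`calc_unit` turns a raw byte count into a value/unit pair for display. A minimal sketch of that helper's likely shape (the unit names and thresholds are assumptions):

def calc_unit(value: float, skip_units: int = 0) -> tuple[float, str]:
    units = ["B", "KiB", "MiB", "GiB", "TiB"][skip_units:]
    for unit in units:
        if value < 1024:
            return value, unit
        value = value / 1024
    return value, units[-1]

assert calc_unit(3 * 1024 * 1024) == (3.0, "MiB")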


@@ -57,6 +57,8 @@ class LoadVerifier:
        invalid_objects = verify_metrics.read.failed_iterations
        total_left_objects = load_metrics.write.success_iterations - delete_success

+        if invalid_objects > 0:
+            issues.append(f"There were {invalid_objects} verification fails (hash mismatch).")
        # Due to interruptions we may see total verified objects to be less than written on writers count
        if abs(total_left_objects - verified_objects) > writers:
            issues.append(


@@ -1,23 +1,20 @@
import copy
import itertools
import math
-import re
import time
from dataclasses import fields
+from threading import Event
from typing import Optional
from urllib.parse import urlparse

-import yaml
-
from frostfs_testlib import reporter
-from frostfs_testlib.cli.frostfs_authmate.authmate import FrostfsAuthmate
+from frostfs_testlib.credentials.interfaces import S3Credentials, User
from frostfs_testlib.load.interfaces.loader import Loader
from frostfs_testlib.load.interfaces.scenario_runner import ScenarioRunner
from frostfs_testlib.load.k6 import K6
from frostfs_testlib.load.load_config import K6ProcessAllocationStrategy, LoadParams, LoadType
from frostfs_testlib.load.loaders import NodeLoader, RemoteLoader
from frostfs_testlib.resources import optionals
-from frostfs_testlib.resources.cli import FROSTFS_AUTHMATE_EXEC
from frostfs_testlib.resources.common import STORAGE_USER_NAME
from frostfs_testlib.resources.load_params import BACKGROUND_LOAD_VUS_COUNT_DIVISOR, LOAD_NODE_SSH_USER, LOAD_NODES
from frostfs_testlib.shell.command_inspectors import SuInspector
@@ -25,16 +22,15 @@ from frostfs_testlib.shell.interfaces import CommandOptions, InteractiveInput
from frostfs_testlib.storage.cluster import ClusterNode
from frostfs_testlib.storage.controllers.cluster_state_controller import ClusterStateController
from frostfs_testlib.storage.dataclasses.frostfs_services import S3Gate, StorageNode
-from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
from frostfs_testlib.testing import parallel, run_optionally
from frostfs_testlib.testing.test_control import retry
from frostfs_testlib.utils import datetime_utils
from frostfs_testlib.utils.file_keeper import FileKeeper
-from threading import Event


class RunnerBase(ScenarioRunner):
    k6_instances: list[K6]
+    loaders: list[Loader]

    @reporter.step("Run preset on loaders")
    def preset(self):
@@ -54,20 +50,22 @@ class RunnerBase(ScenarioRunner):
    def get_k6_instances(self):
        return self.k6_instances

+    def get_loaders(self) -> list[Loader]:
+        return self.loaders
+

class DefaultRunner(RunnerBase):
-    loaders: list[Loader]
-    loaders_wallet: WalletInfo
+    user: User

    def __init__(
        self,
-        loaders_wallet: WalletInfo,
+        user: User,
        load_ip_list: Optional[list[str]] = None,
    ) -> None:
        if load_ip_list is None:
            load_ip_list = LOAD_NODES
        self.loaders = RemoteLoader.from_ip_list(load_ip_list)
-        self.loaders_wallet = loaders_wallet
+        self.user = user

    @run_optionally(optionals.OPTIONAL_BACKGROUND_LOAD_ENABLED)
    @reporter.step("Preparation steps")
@@ -78,54 +76,35 @@ class DefaultRunner(RunnerBase):
        nodes_under_load: list[ClusterNode],
        k6_dir: str,
    ):
+        if load_params.force_fresh_registry and load_params.custom_registry:
+            with reporter.step("Forcing fresh registry files"):
+                parallel(self._force_fresh_registry, self.loaders, load_params)
+
        if load_params.load_type != LoadType.S3:
            return

        with reporter.step("Init s3 client on loaders"):
-            storage_node = nodes_under_load[0].service(StorageNode)
-            s3_public_keys = [node.service(S3Gate).get_wallet_public_key() for node in cluster_nodes]
-            grpc_peer = storage_node.get_rpc_endpoint()
-            parallel(self._prepare_loader, self.loaders, load_params, grpc_peer, s3_public_keys, k6_dir)
+            s3_credentials = self.user.s3_credentials
+            parallel(self._aws_configure_on_loader, self.loaders, s3_credentials)

+    def _force_fresh_registry(self, loader: Loader, load_params: LoadParams):
+        with reporter.step(f"Forcing fresh registry on {loader.ip}"):
+            shell = loader.get_shell()
+            shell.exec(f"rm -f {load_params.registry_file}")
+
-    def _prepare_loader(
+    def _aws_configure_on_loader(
        self,
        loader: Loader,
-        load_params: LoadParams,
-        grpc_peer: str,
-        s3_public_keys: list[str],
-        k6_dir: str,
+        s3_credentials: S3Credentials,
    ):
-        with reporter.step(f"Init s3 client on {loader.ip}"):
-            shell = loader.get_shell()
-            frostfs_authmate_exec: FrostfsAuthmate = FrostfsAuthmate(shell, FROSTFS_AUTHMATE_EXEC)
-            issue_secret_output = frostfs_authmate_exec.secret.issue(
-                wallet=self.loaders_wallet.path,
-                peer=grpc_peer,
-                gate_public_key=s3_public_keys,
-                container_placement_policy=load_params.preset.container_placement_policy,
-                container_policy=f"{k6_dir}/scenarios/files/policy.json",
-                wallet_password=self.loaders_wallet.password,
-            ).stdout
-            aws_access_key_id = str(
-                re.search(r"access_key_id.*:\s.(?P<aws_access_key_id>\w*)", issue_secret_output).group("aws_access_key_id")
-            )
-            aws_secret_access_key = str(
-                re.search(r"secret_access_key.*:\s.(?P<aws_secret_access_key>\w*)", issue_secret_output).group("aws_secret_access_key")
-            )
-
+        with reporter.step(f"Aws configure on {loader.ip}"):
            configure_input = [
-                InteractiveInput(prompt_pattern=r"AWS Access Key ID.*", input=aws_access_key_id),
-                InteractiveInput(prompt_pattern=r"AWS Secret Access Key.*", input=aws_secret_access_key),
+                InteractiveInput(prompt_pattern=r"AWS Access Key ID.*", input=s3_credentials.access_key),
+                InteractiveInput(prompt_pattern=r"AWS Secret Access Key.*", input=s3_credentials.secret_key),
                InteractiveInput(prompt_pattern=r".*", input=""),
                InteractiveInput(prompt_pattern=r".*", input=""),
            ]
-            shell.exec("aws configure", CommandOptions(interactive_inputs=configure_input))
+            loader.get_shell().exec("aws configure", CommandOptions(interactive_inputs=configure_input))

    @reporter.step("Init k6 instances")
    def init_k6_instances(self, load_params: LoadParams, endpoints: list[str], k6_dir: str):
@@ -167,12 +146,10 @@ class DefaultRunner(RunnerBase):
                k6_dir,
                shell,
                loader,
-                self.loaders_wallet,
+                self.user,
            )

-    def _get_distributed_load_params_list(
-        self, original_load_params: LoadParams, workers_count: int
-    ) -> list[LoadParams]:
+    def _get_distributed_load_params_list(self, original_load_params: LoadParams, workers_count: int) -> list[LoadParams]:
        divisor = int(BACKGROUND_LOAD_VUS_COUNT_DIVISOR)
        distributed_load_params: list[LoadParams] = []
@@ -254,21 +231,22 @@ class DefaultRunner(RunnerBase):

class LocalRunner(RunnerBase):
-    loaders: list[Loader]
    cluster_state_controller: ClusterStateController
    file_keeper: FileKeeper
-    wallet: WalletInfo
+    user: User

    def __init__(
        self,
        cluster_state_controller: ClusterStateController,
        file_keeper: FileKeeper,
        nodes_under_load: list[ClusterNode],
+        user: User,
    ) -> None:
        self.cluster_state_controller = cluster_state_controller
        self.file_keeper = file_keeper
        self.loaders = [NodeLoader(node) for node in nodes_under_load]
        self.nodes_under_load = nodes_under_load
+        self.user = user

    @run_optionally(optionals.OPTIONAL_BACKGROUND_LOAD_ENABLED)
    @reporter.step("Preparation steps")
@@ -314,14 +292,12 @@ class LocalRunner(RunnerBase):
        with reporter.step("Download K6"):
            shell.exec(f"sudo rm -rf {k6_dir};sudo mkdir {k6_dir}")
            shell.exec(f"sudo curl -so {k6_dir}/k6.tar.gz {load_params.k6_url}")
-            shell.exec(f"sudo tar xf {k6_dir}/k6.tar.gz -C {k6_dir}")
+            shell.exec(f"sudo tar xf {k6_dir}/k6.tar.gz --strip-components 2 -C {k6_dir}")
            shell.exec(f"sudo chmod -R 777 {k6_dir}")

-        with reporter.step("Create empty_passwd"):
-            self.wallet = WalletInfo(f"{k6_dir}/scenarios/files/wallet.json", "", "/tmp/empty_passwd.yml")
-            content = yaml.dump({"password": ""})
-            shell.exec(f'echo "{content}" | sudo tee {self.wallet.config_path}')
-            shell.exec(f"sudo chmod -R 777 {self.wallet.config_path}")
+        with reporter.step("chmod 777 wallet related files on loader"):
+            shell.exec(f"sudo chmod -R 777 {self.user.wallet.config_path}")
+            shell.exec(f"sudo chmod -R 777 {self.user.wallet.path}")

    @reporter.step("Init k6 instances")
    def init_k6_instances(self, load_params: LoadParams, endpoints: list[str], k6_dir: str):
@@ -354,7 +330,7 @@ class LocalRunner(RunnerBase):
                k6_dir,
                shell,
                loader,
-                self.wallet,
+                self.user,
            )

    def start(self):
@@ -444,7 +420,7 @@ class S3LocalRunner(LocalRunner):
                k6_dir,
                shell,
                loader,
-                self.wallet,
+                self.user,
            )

    @run_optionally(optionals.OPTIONAL_BACKGROUND_LOAD_ENABLED)
@@ -457,17 +433,10 @@ class S3LocalRunner(LocalRunner):
        k6_dir: str,
    ):
        self.k6_dir = k6_dir
-        with reporter.step("Init s3 client on loaders"):
-            storage_node = nodes_under_load[0].service(StorageNode)
-            s3_public_keys = [node.service(S3Gate).get_wallet_public_key() for node in cluster_nodes]
-            grpc_peer = storage_node.get_rpc_endpoint()
-
-        parallel(self.prepare_node, nodes_under_load, k6_dir, load_params, s3_public_keys, grpc_peer)
+        parallel(self.prepare_node, nodes_under_load, k6_dir, load_params, cluster_nodes)

    @reporter.step("Prepare node {cluster_node}")
-    def prepare_node(
-        self, cluster_node: ClusterNode, k6_dir: str, load_params: LoadParams, s3_public_keys: list[str], grpc_peer: str
-    ):
+    def prepare_node(self, cluster_node: ClusterNode, k6_dir: str, load_params: LoadParams, cluster_nodes: list[ClusterNode]):
        LocalRunner.prepare_node(self, cluster_node, k6_dir, load_params)
        self.endpoints = cluster_node.s3_gate.get_all_endpoints()
        shell = cluster_node.host.get_shell()

@@ -488,29 +457,9 @@ class S3LocalRunner(LocalRunner):
            shell.exec(f"sudo python3 -m pip install -I {k6_dir}/requests.tar.gz")

        with reporter.step(f"Init s3 client on {cluster_node.host_ip}"):
-            frostfs_authmate_exec: FrostfsAuthmate = FrostfsAuthmate(shell, FROSTFS_AUTHMATE_EXEC)
-            issue_secret_output = frostfs_authmate_exec.secret.issue(
-                wallet=self.wallet.path,
-                peer=grpc_peer,
-                gate_public_key=s3_public_keys,
-                container_placement_policy=load_params.preset.container_placement_policy,
-                container_policy=f"{k6_dir}/scenarios/files/policy.json",
-                wallet_password=self.wallet.password,
-            ).stdout
-            aws_access_key_id = str(
-                re.search(r"access_key_id.*:\s.(?P<aws_access_key_id>\w*)", issue_secret_output).group("aws_access_key_id")
-            )
-            aws_secret_access_key = str(
-                re.search(r"secret_access_key.*:\s.(?P<aws_secret_access_key>\w*)", issue_secret_output).group("aws_secret_access_key")
-            )
-
            configure_input = [
-                InteractiveInput(prompt_pattern=r"AWS Access Key ID.*", input=aws_access_key_id),
-                InteractiveInput(prompt_pattern=r"AWS Secret Access Key.*", input=aws_secret_access_key),
+                InteractiveInput(prompt_pattern=r"AWS Access Key ID.*", input=self.user.s3_credentials.access_key),
+                InteractiveInput(prompt_pattern=r"AWS Secret Access Key.*", input=self.user.s3_credentials.secret_key),
                InteractiveInput(prompt_pattern=r".*", input=""),
                InteractiveInput(prompt_pattern=r".*", input=""),
            ]


@@ -9,4 +9,4 @@ FROSTFS_ADM_EXEC = os.getenv("FROSTFS_ADM_EXEC", "frostfs-adm")

# Config for frostfs-adm utility. Optional if tests are running against devenv
FROSTFS_ADM_CONFIG_PATH = os.getenv("FROSTFS_ADM_CONFIG_PATH")
-CLI_DEFAULT_TIMEOUT = os.getenv("CLI_DEFAULT_TIMEOUT", None)
+CLI_DEFAULT_TIMEOUT = os.getenv("CLI_DEFAULT_TIMEOUT", "100s")


@@ -46,3 +46,11 @@ with open(DEFAULT_WALLET_CONFIG, "w") as file:
MAX_REQUEST_ATTEMPTS = 5
RETRY_MODE = "standard"
CREDENTIALS_CREATE_TIMEOUT = "1m"
+
+HOSTING_CONFIG_FILE = os.getenv(
+    "HOSTING_CONFIG_FILE", os.path.abspath(os.path.join(os.path.dirname(__file__), "..", "..", "..", ".devenv.hosting.yaml"))
+)
+
+MORE_LOG = os.getenv("MORE_LOG", "1")
+
+EXPIRATION_EPOCH_ATTRIBUTE = "__SYSTEM__EXPIRATION_EPOCH"


@@ -23,6 +23,14 @@ INVALID_RANGE_OVERFLOW = "invalid '{range}' range: uint64 overflow"
INVALID_OFFSET_SPECIFIER = "invalid '{range}' range offset specifier"
INVALID_LENGTH_SPECIFIER = "invalid '{range}' range length specifier"

-S3_MALFORMED_XML_REQUEST = (
-    "The XML you provided was not well-formed or did not validate against our published schema."
-)
+S3_BUCKET_DOES_NOT_ALLOW_ACL = "The bucket does not allow ACLs"
+S3_MALFORMED_XML_REQUEST = "The XML you provided was not well-formed or did not validate against our published schema."
+
+RULE_ACCESS_DENIED_CONTAINER = "access to container operation {operation} is denied by access policy engine: Access denied"
+# Errors from the node are missing the reason if the request was forwarded. Commenting out for now
+# RULE_ACCESS_DENIED_OBJECT = "access to object operation denied: ape denied request: method {operation}: Access denied"
+RULE_ACCESS_DENIED_OBJECT = "access to object operation denied: ape denied request"
+NO_RULE_FOUND_CONTAINER = "access to container operation {operation} is denied by access policy engine: NoRuleFound"
+# Errors from the node are missing the reason if the request was forwarded. Commenting out for now
+# NO_RULE_FOUND_OBJECT = "access to object operation denied: ape denied request: method {operation}: NoRuleFound"
+NO_RULE_FOUND_OBJECT = "access to object operation denied: ape denied request"
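The `{operation}` placeholder in these templates is filled by tests via `str.format`, for example:

expected_error = NO_RULE_FOUND_CONTAINER.format(operation="PUT")
# -> "access to container operation PUT is denied by access policy engine: NoRuleFound"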


@@ -26,6 +26,7 @@ BACKGROUND_LOAD_CONTAINER_PLACEMENT_POLICY = os.getenv(
)
BACKGROUND_LOAD_S3_LOCATION = os.getenv("BACKGROUND_LOAD_S3_LOCATION", "node-off")
PRESET_CONTAINERS_COUNT = os.getenv("CONTAINERS_COUNT", "40")
+PRESET_CONTAINER_CREATION_RETRY_COUNT = os.getenv("CONTAINER_CREATION_RETRY_COUNT", "20")
# TODO: At least one object is required due to a bug in xk6 (buckets with no objects produce millions of exceptions in read)
PRESET_OBJECTS_COUNT = os.getenv("OBJ_COUNT", "1")
K6_DIRECTORY = os.getenv("K6_DIRECTORY", "/etc/k6")


@@ -16,11 +16,10 @@ OPTIONAL_NODE_UNDER_LOAD = os.getenv("OPTIONAL_NODE_UNDER_LOAD")
OPTIONAL_FAILOVER_ENABLED = str_to_bool(os.getenv("OPTIONAL_FAILOVER_ENABLED", "true"))

# Set this to True to disable background load, i.e. a node which is supposed to be stopped will not actually be stopped.
-OPTIONAL_BACKGROUND_LOAD_ENABLED = str_to_bool(
-    os.getenv("OPTIONAL_BACKGROUND_LOAD_ENABLED", "true")
-)
+OPTIONAL_BACKGROUND_LOAD_ENABLED = str_to_bool(os.getenv("OPTIONAL_BACKGROUND_LOAD_ENABLED", "true"))

# Set this to False to disable autouse fixtures like node healthcheck during development.
-OPTIONAL_AUTOUSE_FIXTURES_ENABLED = str_to_bool(
-    os.getenv("OPTIONAL_AUTOUSE_FIXTURES_ENABLED", "true")
-)
+OPTIONAL_AUTOUSE_FIXTURES_ENABLED = str_to_bool(os.getenv("OPTIONAL_AUTOUSE_FIXTURES_ENABLED", "true"))

+# Use cache for fixtures with the @cached_fixture decorator
+OPTIONAL_CACHE_FIXTURES = str_to_bool(os.getenv("OPTIONAL_CACHE_FIXTURES", "false"))


@@ -0,0 +1,9 @@
ALL_USERS_GROUP_URI = "http://acs.amazonaws.com/groups/global/AllUsers"
ALL_USERS_GROUP_WRITE_GRANT = {"Grantee": {"Type": "Group", "URI": ALL_USERS_GROUP_URI}, "Permission": "WRITE"}
ALL_USERS_GROUP_READ_GRANT = {"Grantee": {"Type": "Group", "URI": ALL_USERS_GROUP_URI}, "Permission": "READ"}
CANONICAL_USER_FULL_CONTROL_GRANT = {"Grantee": {"Type": "CanonicalUser"}, "Permission": "FULL_CONTROL"}
# https://docs.aws.amazon.com/AmazonS3/latest/userguide/acl-overview.html#canned-acl
PRIVATE_GRANTS = []
PUBLIC_READ_GRANTS = [ALL_USERS_GROUP_READ_GRANT]
PUBLIC_READ_WRITE_GRANTS = [ALL_USERS_GROUP_WRITE_GRANT, ALL_USERS_GROUP_READ_GRANT]
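These grant lists mirror the canned ACLs from the linked AWS page, so a test can compare a bucket's effective grants against the expected constant. An illustrative check; the `s3_client` and `bucket` fixtures and the exact response shape are assumptions:

# Hypothetical assertion for a bucket created with the public-read canned ACL
actual_grants = s3_client.get_bucket_acl(bucket)  # assumed to return the list of grants
assert actual_grants == PUBLIC_READ_GRANTS, "expected exactly the AllUsers READ grant"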

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -0,0 +1,16 @@
import re
from frostfs_testlib.cli.generic_cli import GenericCli
from frostfs_testlib.s3.interfaces import BucketContainerResolver
from frostfs_testlib.storage.cluster import ClusterNode
class CurlBucketContainerResolver(BucketContainerResolver):
def resolve(self, node: ClusterNode, bucket_name: str, **kwargs: dict) -> str:
curl = GenericCli("curl", node.host)
output = curl(f"-I http://127.0.0.1:8084/{bucket_name}")
pattern = r"X-Container-Id: (\S+)"
cid = re.findall(pattern, output.stdout)
if cid:
return cid[0]
return None
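A quick usage sketch for this resolver; the `node` object would come from the test cluster fixture and the bucket name is illustrative:

resolver = CurlBucketContainerResolver()
container_id = resolver.resolve(node, "my-bucket")  # node: ClusterNode
if container_id is None:
    raise RuntimeError("bucket is not mapped to a container yet")

The resolver simply issues a request against the local HTTP gate (127.0.0.1:8084) and reads the container ID from the X-Container-Id response header.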


@@ -1,8 +1,10 @@
-from abc import abstractmethod
+from abc import ABC, abstractmethod
from datetime import datetime
from typing import Literal, Optional, Union

+from frostfs_testlib.storage.cluster import ClusterNode
from frostfs_testlib.testing.readable import HumanReadableABC, HumanReadableEnum
+from frostfs_testlib.utils.file_utils import TestFile


def _make_objs_dict(key_names):
@@ -31,15 +33,35 @@ ACL_COPY = [
]


+class BucketContainerResolver(ABC):
+    @abstractmethod
+    def resolve(self, node: ClusterNode, bucket_name: str, **kwargs: dict) -> str:
+        """
+        Resolve Container ID from bucket name
+
+        Args:
+            node: node from which we want to resolve
+            bucket_name: name of the bucket
+            **kwargs: any other required params
+
+        Returns: Container ID
+        """
+        raise NotImplementedError("Call from abstract class")
+
+
class S3ClientWrapper(HumanReadableABC):
    @abstractmethod
-    def __init__(self, access_key_id: str, secret_access_key: str, s3gate_endpoint: str, profile: str) -> None:
+    def __init__(self, access_key_id: str, secret_access_key: str, s3gate_endpoint: str, profile: str, region: str) -> None:
        pass

    @abstractmethod
    def set_endpoint(self, s3gate_endpoint: str):
        """Set endpoint"""

+    @abstractmethod
+    def set_iam_endpoint(self, iam_endpoint: str):
+        """Set IAM endpoint"""
+
    @abstractmethod
    def create_bucket(
        self,
@@ -135,6 +157,10 @@ class S3ClientWrapper(HumanReadableABC):
    def get_bucket_policy(self, bucket: str) -> str:
        """Returns the policy of a specified bucket."""

+    @abstractmethod
+    def delete_bucket_policy(self, bucket: str) -> str:
+        """Deletes the policy of a specified bucket."""
+
    @abstractmethod
    def put_bucket_policy(self, bucket: str, policy: dict) -> None:
        """Applies S3 bucket policy to an S3 bucket."""
@@ -268,7 +294,7 @@ class S3ClientWrapper(HumanReadableABC):
        version_id: Optional[str] = None,
        object_range: Optional[tuple[int, int]] = None,
        full_output: bool = False,
-    ) -> Union[dict, str]:
+    ) -> dict | TestFile:
        """Retrieves objects from S3."""

    @abstractmethod
@@ -296,15 +322,11 @@ class S3ClientWrapper(HumanReadableABC):
        abort a given multipart upload multiple times in order to completely free all storage consumed by all parts."""

    @abstractmethod
-    def upload_part(
-        self, bucket: str, key: str, upload_id: str, part_num: int, filepath: str
-    ) -> str:
+    def upload_part(self, bucket: str, key: str, upload_id: str, part_num: int, filepath: str) -> str:
        """Uploads a part in a multipart upload."""

    @abstractmethod
-    def upload_part_copy(
-        self, bucket: str, key: str, upload_id: str, part_num: int, copy_source: str
-    ) -> str:
+    def upload_part_copy(self, bucket: str, key: str, upload_id: str, part_num: int, copy_source: str) -> str:
        """Uploads a part by copying data from an existing object as data source."""

    @abstractmethod
@@ -348,6 +370,18 @@ class S3ClientWrapper(HumanReadableABC):
    def delete_object_tagging(self, bucket: str, key: str) -> None:
        """Removes the entire tag set from the specified object."""

+    @abstractmethod
+    def put_bucket_lifecycle_configuration(self, bucket: str, lifecycle_configuration: dict, dumped_configuration: str) -> dict:
+        """Adds or updates bucket lifecycle configuration"""
+
+    @abstractmethod
+    def get_bucket_lifecycle_configuration(self, bucket: str) -> dict:
+        """Gets bucket lifecycle configuration"""
+
+    @abstractmethod
+    def delete_bucket_lifecycle(self, bucket: str) -> dict:
+        """Deletes bucket lifecycle"""
+
    @abstractmethod
    def get_object_attributes(
        self,
@@ -382,3 +416,194 @@ class S3ClientWrapper(HumanReadableABC):
        """cp directory TODO: Add proper description"""

    # END OF OBJECT METHODS #
# IAM METHODS #
@abstractmethod
def iam_add_user_to_group(self, user_name: str, group_name: str) -> dict:
"""Adds the specified user to the specified group"""
@abstractmethod
def iam_attach_group_policy(self, group_name: str, policy_arn: str) -> dict:
"""Attaches the specified managed policy to the specified IAM group"""
@abstractmethod
def iam_attach_user_policy(self, user_name: str, policy_arn: str) -> dict:
"""Attaches the specified managed policy to the specified user"""
@abstractmethod
def iam_create_access_key(self, user_name: str) -> dict:
"""Creates a new AWS secret access key and access key ID for the specified user"""
@abstractmethod
def iam_create_group(self, group_name: str) -> dict:
"""Creates a new group"""
@abstractmethod
def iam_create_policy(self, policy_name: str, policy_document: dict) -> dict:
"""Creates a new managed policy for your AWS account"""
@abstractmethod
def iam_create_user(self, user_name: str) -> dict:
"""Creates a new IAM user for your AWS account"""
@abstractmethod
def iam_delete_access_key(self, access_key_id: str, user_name: str) -> dict:
"""Deletes the access key pair associated with the specified IAM user"""
@abstractmethod
def iam_delete_group(self, group_name: str) -> dict:
"""Deletes the specified IAM group"""
@abstractmethod
def iam_delete_group_policy(self, group_name: str, policy_name: str) -> dict:
"""Deletes the specified inline policy that is embedded in the specified IAM group"""
@abstractmethod
def iam_delete_policy(self, policy_arn: str) -> dict:
"""Deletes the specified managed policy"""
@abstractmethod
def iam_delete_user(self, user_name: str) -> dict:
"""Deletes the specified IAM user"""
@abstractmethod
def iam_delete_user_policy(self, user_name: str, policy_name: str) -> dict:
"""Deletes the specified inline policy that is embedded in the specified IAM user"""
@abstractmethod
def iam_detach_group_policy(self, group_name: str, policy_arn: str) -> dict:
"""Removes the specified managed policy from the specified IAM group"""
@abstractmethod
def iam_detach_user_policy(self, user_name: str, policy_arn: str) -> dict:
"""Removes the specified managed policy from the specified user"""
@abstractmethod
def iam_get_group(self, group_name: str) -> dict:
"""Returns a list of IAM users that are in the specified IAM group"""
@abstractmethod
def iam_get_group_policy(self, group_name: str, policy_name: str) -> dict:
"""Retrieves the specified inline policy document that is embedded in the specified IAM group"""
@abstractmethod
def iam_get_policy(self, policy_arn: str) -> dict:
"""Retrieves information about the specified managed policy"""
@abstractmethod
def iam_get_policy_version(self, policy_arn: str, version_id: str) -> dict:
"""Retrieves information about the specified version of the specified managed policy"""
@abstractmethod
def iam_get_user(self, user_name: str) -> dict:
"""Retrieves information about the specified IAM user"""
@abstractmethod
def iam_get_user_policy(self, user_name: str, policy_name: str) -> dict:
"""Retrieves the specified inline policy document that is embedded in the specified IAM user"""
@abstractmethod
def iam_list_access_keys(self, user_name: str) -> dict:
"""Returns information about the access key IDs associated with the specified IAM user"""
@abstractmethod
def iam_list_attached_group_policies(self, group_name: str) -> dict:
"""Lists all managed policies that are attached to the specified IAM group"""
@abstractmethod
def iam_list_attached_user_policies(self, user_name: str) -> dict:
"""Lists all managed policies that are attached to the specified IAM user"""
@abstractmethod
def iam_list_entities_for_policy(self, policy_arn: str) -> dict:
"""Lists all IAM users, groups, and roles that the specified managed policy is attached to"""
@abstractmethod
def iam_list_group_policies(self, group_name: str) -> dict:
"""Lists the names of the inline policies that are embedded in the specified IAM group"""
@abstractmethod
def iam_list_groups(self) -> dict:
"""Lists the IAM groups"""
@abstractmethod
def iam_list_groups_for_user(self, user_name: str) -> dict:
"""Lists the IAM groups that the specified IAM user belongs to"""
@abstractmethod
def iam_list_policies(self) -> dict:
"""Lists all the managed policies that are available in your AWS account"""
@abstractmethod
def iam_list_policy_versions(self, policy_arn: str) -> dict:
"""Lists information about the versions of the specified managed policy"""
@abstractmethod
def iam_list_user_policies(self, user_name: str) -> dict:
"""Lists the names of the inline policies embedded in the specified IAM user"""
@abstractmethod
def iam_list_users(self) -> dict:
"""Lists the IAM users"""
@abstractmethod
def iam_put_group_policy(self, group_name: str, policy_name: str, policy_document: dict) -> dict:
"""Adds or updates an inline policy document that is embedded in the specified IAM group"""
@abstractmethod
def iam_put_user_policy(self, user_name: str, policy_name: str, policy_document: dict) -> dict:
"""Adds or updates an inline policy document that is embedded in the specified IAM user"""
@abstractmethod
def iam_remove_user_from_group(self, group_name: str, user_name: str) -> dict:
"""Removes the specified user from the specified group"""
@abstractmethod
def iam_update_group(self, group_name: str, new_name: Optional[str] = None, new_path: Optional[str] = None) -> dict:
"""Updates the name and/or the path of the specified IAM group"""
@abstractmethod
def iam_update_user(self, user_name: str, new_name: Optional[str] = None, new_path: Optional[str] = None) -> dict:
"""Updates the name and/or the path of the specified IAM user"""
@abstractmethod
def iam_tag_user(self, user_name: str, tags: list) -> dict:
"""Adds one or more tags to an IAM user"""
@abstractmethod
def iam_list_user_tags(self, user_name: str) -> dict:
"""List tags of IAM user"""
@abstractmethod
def iam_untag_user(self, user_name: str, tag_keys: list) -> dict:
"""Removes the specified tags from the user"""
# MFA methods
@abstractmethod
def iam_create_virtual_mfa_device(
self, virtual_mfa_device_name: str, outfile: Optional[str] = None, bootstrap_method: Optional[str] = None
) -> tuple:
"""Creates a new virtual MFA device"""
@abstractmethod
def iam_deactivate_mfa_device(self, user_name: str, serial_number: str) -> dict:
"""Deactivates the specified MFA device and removes it from association with the user name"""
@abstractmethod
def iam_delete_virtual_mfa_device(self, serial_number: str) -> dict:
"""Deletes a virtual MFA device"""
@abstractmethod
def iam_enable_mfa_device(self, user_name: str, serial_number: str, authentication_code1: str, authentication_code2: str) -> dict:
"""Enables the specified MFA device and associates it with the specified IAM user"""
@abstractmethod
def iam_list_virtual_mfa_devices(self) -> dict:
"""Lists the MFA devices for an IAM user"""
@abstractmethod
def sts_get_session_token(
self, duration_seconds: Optional[str] = None, serial_number: Optional[str] = None, token_code: Optional[str] = None
) -> tuple:
"""Get session token for user"""


@@ -1,15 +1,18 @@
import logging
import subprocess
import tempfile
+from contextlib import nullcontext
from datetime import datetime
from typing import IO, Optional

import pexpect

from frostfs_testlib import reporter
+from frostfs_testlib.resources.common import MORE_LOG
from frostfs_testlib.shell.interfaces import CommandInspector, CommandOptions, CommandResult, Shell

logger = logging.getLogger("frostfs.testlib.shell")

+step_context = reporter.step if MORE_LOG == "1" else nullcontext
+

class LocalShell(Shell):
@@ -28,7 +31,7 @@ class LocalShell(Shell):
        for inspector in [*self.command_inspectors, *extra_inspectors]:
            command = inspector.inspect(original_command, command)

-        logger.info(f"Executing command: {command}")
+        with step_context(f"Executing command: {command}"):
            if options.interactive_inputs:
                return self._exec_interactive(command, options)
            return self._exec_non_interactive(command, options)
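The `step_context` alias works because `contextlib.nullcontext` accepts (and ignores) a positional argument, making it call-compatible with `reporter.step(title)` while adding no overhead when MORE_LOG is off:

from contextlib import nullcontext

# nullcontext("anything") is a no-op context manager, so the same
# `with step_context(title):` call site works in both configurations
with nullcontext("Executing command: ls -la"):
    pass  # the body runs normally; nothing is logged or reported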
@@ -60,9 +63,7 @@ class LocalShell(Shell):
        if options.check and result.return_code != 0:
            raise RuntimeError(
-                f"Command: {command}\nreturn code: {result.return_code}\n"
-                f"Output: {result.stdout}\n"
-                f"Stderr: {result.stderr}\n"
+                f"Command: {command}\nreturn code: {result.return_code}\n" f"Output: {result.stdout}\n" f"Stderr: {result.stderr}\n"
            )
        return result
@@ -93,9 +94,7 @@ class LocalShell(Shell):
                stderr="",
                return_code=exc.returncode,
            )
-            raise RuntimeError(
-                f"Command: {command}\nError:\n" f"return code: {exc.returncode}\n" f"output: {exc.output}"
-            ) from exc
+            raise RuntimeError(f"Command: {command}\nError with retcode: {exc.returncode}\n Output: {exc.output}") from exc
        except OSError as exc:
            raise RuntimeError(f"Command: {command}\nOutput: {exc.strerror}") from exc
        finally:
@@ -129,15 +128,13 @@ class LocalShell(Shell):
        end_time: datetime,
        result: Optional[CommandResult],
    ) -> None:
-        # TODO: increase logging level if return code is non 0, should be warning at least
-        logger.info(
-            f"Command: {command}\n"
-            f"{'Success:' if result and result.return_code == 0 else 'Error:'}\n"
-            f"return code: {result.return_code if result else ''} "
-            f"\nOutput: {result.stdout if result else ''}"
-        )
+        if not result:
+            logger.warning(f"Command: {command}\n" f"Error: result is None")
+            return
+
+        status, log_method = ("Success", logger.info) if result.return_code == 0 else ("Error", logger.warning)
+        log_method(f"Command: {command}\n" f"{status} with retcode {result.return_code}\n" f"Output: \n{result.stdout}")

-        if result:
        elapsed_time = end_time - start_time
        command_attachment = (
            f"COMMAND: {command}\n"
@@ -146,5 +143,4 @@ class LocalShell(Shell):
            f"STDERR:\n{result.stderr}\n"
            f"Start / End / Elapsed\t {start_time.time()} / {end_time.time()} / {elapsed_time}"
        )
-        with reporter.step(f"COMMAND: {command}"):
-            reporter.attach(command_attachment, "Command execution.txt")
+        reporter.attach(command_attachment, "Command execution.txt")


@@ -11,25 +11,20 @@ import base58
from frostfs_testlib import reporter
from frostfs_testlib.cli import FrostfsCli
from frostfs_testlib.resources.cli import FROSTFS_CLI_EXEC
-from frostfs_testlib.resources.common import ASSETS_DIR, DEFAULT_WALLET_CONFIG
+from frostfs_testlib.resources.common import ASSETS_DIR
from frostfs_testlib.shell import Shell
-from frostfs_testlib.storage.dataclasses.acl import (
-    EACL_LIFETIME,
-    FROSTFS_CONTRACT_CACHE_TIMEOUT,
-    EACLPubKey,
-    EACLRole,
-    EACLRule,
-)
+from frostfs_testlib.storage.dataclasses.acl import EACL_LIFETIME, FROSTFS_CONTRACT_CACHE_TIMEOUT, EACLPubKey, EACLRole, EACLRule
+from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
from frostfs_testlib.utils import wallet_utils

logger = logging.getLogger("NeoLogger")


@reporter.step("Get extended ACL")
-def get_eacl(wallet_path: str, cid: str, shell: Shell, endpoint: str) -> Optional[str]:
-    cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, DEFAULT_WALLET_CONFIG)
+def get_eacl(wallet: WalletInfo, cid: str, shell: Shell, endpoint: str) -> Optional[str]:
+    cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet.config_path)
    try:
-        result = cli.container.get_eacl(wallet=wallet_path, rpc_endpoint=endpoint, cid=cid)
+        result = cli.container.get_eacl(rpc_endpoint=endpoint, cid=cid)
    except RuntimeError as exc:
        logger.info("Extended ACL table is not set for this container")
        logger.info(f"Got exception while getting eacl: {exc}")
@@ -41,16 +36,15 @@ def get_eacl(wallet: WalletInfo, cid: str, shell: Shell, endpoint: str) -> Optional[str]:

@reporter.step("Set extended ACL")
def set_eacl(
-    wallet_path: str,
+    wallet: WalletInfo,
    cid: str,
    eacl_table_path: str,
    shell: Shell,
    endpoint: str,
    session_token: Optional[str] = None,
) -> None:
-    cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, DEFAULT_WALLET_CONFIG)
+    cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet.config_path)
    cli.container.set_eacl(
-        wallet=wallet_path,
        rpc_endpoint=endpoint,
        cid=cid,
        table=eacl_table_path,
@@ -66,7 +60,7 @@ def _encode_cid_for_eacl(cid: str) -> str:
def create_eacl(cid: str, rules_list: List[EACLRule], shell: Shell) -> str:
    table_file_path = os.path.join(os.getcwd(), ASSETS_DIR, f"eacl_table_{str(uuid.uuid4())}.json")
-    cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, DEFAULT_WALLET_CONFIG)
+    cli = FrostfsCli(shell, FROSTFS_CLI_EXEC)
    cli.acl.extended_create(cid=cid, out=table_file_path, rule=rules_list)

    with open(table_file_path, "r") as file:
@@ -77,7 +71,7 @@ def create_eacl(cid: str, rules_list: List[EACLRule], shell: Shell) -> str:

def form_bearertoken_file(
-    wif: str,
+    wallet: WalletInfo,
    cid: str,
    eacl_rule_list: List[Union[EACLRule, EACLPubKey]],
    shell: Shell,
@@ -92,7 +86,7 @@ def form_bearertoken_file(
    enc_cid = _encode_cid_for_eacl(cid) if cid else None
    file_path = os.path.join(os.getcwd(), ASSETS_DIR, str(uuid.uuid4()))

-    eacl = get_eacl(wif, cid, shell, endpoint)
+    eacl = get_eacl(wallet, cid, shell, endpoint)
    json_eacl = dict()
    if eacl:
        eacl = eacl.replace("eACL: ", "").split("Signature")[0]
@@ -133,7 +127,7 @@ def form_bearertoken_file(
    if sign:
        sign_bearer(
            shell=shell,
-            wallet_path=wif,
+            wallet=wallet,
            eacl_rules_file_from=file_path,
            eacl_rules_file_to=file_path,
            json=True,
@ -164,11 +158,9 @@ def eacl_rules(access: str, verbs: list, user: str) -> list[str]:
return rules return rules
def sign_bearer(shell: Shell, wallet_path: str, eacl_rules_file_from: str, eacl_rules_file_to: str, json: bool) -> None: def sign_bearer(shell: Shell, wallet: WalletInfo, eacl_rules_file_from: str, eacl_rules_file_to: str, json: bool) -> None:
frostfscli = FrostfsCli(shell=shell, frostfs_cli_exec_path=FROSTFS_CLI_EXEC, config_file=DEFAULT_WALLET_CONFIG) frostfscli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet.config_path)
frostfscli.util.sign_bearer_token( frostfscli.util.sign_bearer_token(eacl_rules_file_from, eacl_rules_file_to, json=json)
wallet=wallet_path, from_file=eacl_rules_file_from, to_file=eacl_rules_file_to, json=json
)
@reporter.step("Wait for eACL cache expired") @reporter.step("Wait for eACL cache expired")
@ -178,9 +170,7 @@ def wait_for_cache_expired():
@reporter.step("Return bearer token in base64 to caller") @reporter.step("Return bearer token in base64 to caller")
def bearer_token_base64_from_file( def bearer_token_base64_from_file(bearer_path: str) -> str:
bearer_path: str,
) -> str:
with open(bearer_path, "rb") as file: with open(bearer_path, "rb") as file:
signed = file.read() signed = file.read()
return base64.b64encode(signed).decode("utf-8") return base64.b64encode(signed).decode("utf-8")
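Taken together, the hunks above switch the eACL helpers from raw wallet paths (and the global DEFAULT_WALLET_CONFIG) to a WalletInfo object whose config_path is handed straight to FrostfsCli. A minimal sketch of the migrated flow, assuming cid, shell, endpoint and rules_list (a List[EACLRule]) are prepared by the caller and that parameters hidden by the hunks keep their old names:

# Sketch only; values are placeholders, not part of this change.
table_path = create_eacl(cid, rules_list, shell)      # writes eacl_table_<uuid>.json
set_eacl(wallet, cid, table_path, shell, endpoint)    # WalletInfo replaces wallet_path
wait_for_cache_expired()                              # let the previous eACL cache expire
token_path = form_bearertoken_file(wallet, cid, rules_list, shell, endpoint)
token_b64 = bearer_token_base64_from_file(token_path)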


@ -1,15 +1,15 @@
import json import json
import logging import logging
import re import re
import requests
from dataclasses import dataclass from dataclasses import dataclass
from time import sleep from time import sleep
from typing import Optional, Union from typing import Optional, Union
from frostfs_testlib import reporter from frostfs_testlib import reporter
from frostfs_testlib.cli import FrostfsCli from frostfs_testlib.cli import FrostfsCli
from frostfs_testlib.plugins import load_plugin
from frostfs_testlib.resources.cli import CLI_DEFAULT_TIMEOUT, FROSTFS_CLI_EXEC from frostfs_testlib.resources.cli import CLI_DEFAULT_TIMEOUT, FROSTFS_CLI_EXEC
from frostfs_testlib.resources.common import DEFAULT_WALLET_CONFIG from frostfs_testlib.s3.interfaces import BucketContainerResolver
from frostfs_testlib.shell import Shell from frostfs_testlib.shell import Shell
from frostfs_testlib.steps.cli.object import put_object, put_object_to_random_node from frostfs_testlib.steps.cli.object import put_object, put_object_to_random_node
from frostfs_testlib.storage.cluster import Cluster, ClusterNode from frostfs_testlib.storage.cluster import Cluster, ClusterNode
@ -24,7 +24,7 @@ logger = logging.getLogger("NeoLogger")
@dataclass @dataclass
class StorageContainerInfo: class StorageContainerInfo:
id: str id: str
wallet_file: WalletInfo wallet: WalletInfo
class StorageContainer: class StorageContainer:
@ -41,11 +41,8 @@ class StorageContainer:
def get_id(self) -> str: def get_id(self) -> str:
return self.storage_container_info.id return self.storage_container_info.id
def get_wallet_path(self) -> str: def get_wallet(self) -> str:
return self.storage_container_info.wallet_file.path return self.storage_container_info.wallet
def get_wallet_config_path(self) -> str:
return self.storage_container_info.wallet_file.config_path
@reporter.step("Generate new object and put in container") @reporter.step("Generate new object and put in container")
def generate_object( def generate_object(
@ -60,37 +57,34 @@ class StorageContainer:
file_hash = get_file_hash(file_path) file_hash = get_file_hash(file_path)
container_id = self.get_id() container_id = self.get_id()
wallet_path = self.get_wallet_path() wallet = self.get_wallet()
wallet_config = self.get_wallet_config_path()
with reporter.step(f"Put object with size {size} to container {container_id}"): with reporter.step(f"Put object with size {size} to container {container_id}"):
if endpoint: if endpoint:
object_id = put_object( object_id = put_object(
wallet=wallet_path, wallet=wallet,
path=file_path, path=file_path,
cid=container_id, cid=container_id,
expire_at=expire_at, expire_at=expire_at,
shell=self.shell, shell=self.shell,
endpoint=endpoint, endpoint=endpoint,
bearer=bearer_token, bearer=bearer_token,
wallet_config=wallet_config,
) )
else: else:
object_id = put_object_to_random_node( object_id = put_object_to_random_node(
wallet=wallet_path, wallet=wallet,
path=file_path, path=file_path,
cid=container_id, cid=container_id,
expire_at=expire_at, expire_at=expire_at,
shell=self.shell, shell=self.shell,
cluster=self.cluster, cluster=self.cluster,
bearer=bearer_token, bearer=bearer_token,
wallet_config=wallet_config,
) )
storage_object = StorageObjectInfo( storage_object = StorageObjectInfo(
container_id, container_id,
object_id, object_id,
size=size, size=size,
wallet_file_path=wallet_path, wallet=wallet,
file_path=file_path, file_path=file_path,
file_hash=file_hash, file_hash=file_hash,
) )
@ -101,18 +95,18 @@ class StorageContainer:
DEFAULT_PLACEMENT_RULE = "REP 2 IN X CBF 1 SELECT 4 FROM * AS X" DEFAULT_PLACEMENT_RULE = "REP 2 IN X CBF 1 SELECT 4 FROM * AS X"
SINGLE_PLACEMENT_RULE = "REP 1 IN X CBF 1 SELECT 4 FROM * AS X" SINGLE_PLACEMENT_RULE = "REP 1 IN X CBF 1 SELECT 4 FROM * AS X"
REP_2_FOR_3_NODES_PLACEMENT_RULE = "REP 2 IN X CBF 1 SELECT 3 FROM * AS X" REP_2_FOR_3_NODES_PLACEMENT_RULE = "REP 2 IN X CBF 1 SELECT 3 FROM * AS X"
DEFAULT_EC_PLACEMENT_RULE = "EC 3.1"
@reporter.step("Create Container") @reporter.step("Create Container")
def create_container( def create_container(
wallet: str, wallet: WalletInfo,
shell: Shell, shell: Shell,
endpoint: str, endpoint: str,
rule: str = DEFAULT_PLACEMENT_RULE, rule: str = DEFAULT_PLACEMENT_RULE,
basic_acl: str = "", basic_acl: str = "",
attributes: Optional[dict] = None, attributes: Optional[dict] = None,
session_token: str = "", session_token: str = "",
session_wallet: str = "",
name: Optional[str] = None, name: Optional[str] = None,
options: Optional[dict] = None, options: Optional[dict] = None,
await_mode: bool = True, await_mode: bool = True,
@ -123,7 +117,7 @@ def create_container(
A wrapper for `frostfs-cli container create` call. A wrapper for `frostfs-cli container create` call.
Args: Args:
wallet (str): a wallet on whose behalf a container is created wallet (WalletInfo): a wallet on whose behalf a container is created
rule (optional, str): placement rule for container rule (optional, str): placement rule for container
basic_acl (optional, str): an ACL for container, will be basic_acl (optional, str): an ACL for container, will be
appended to `--basic-acl` key appended to `--basic-acl` key
@ -145,10 +139,9 @@ def create_container(
(str): CID of the created container (str): CID of the created container
""" """
cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, DEFAULT_WALLET_CONFIG) cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet.config_path)
result = cli.container.create( result = cli.container.create(
rpc_endpoint=endpoint, rpc_endpoint=endpoint,
wallet=session_wallet if session_wallet else wallet,
policy=rule, policy=rule,
basic_acl=basic_acl, basic_acl=basic_acl,
attributes=attributes, attributes=attributes,
@ -169,9 +162,7 @@ def create_container(
return cid return cid
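With wallet= and session_wallet removed from the CLI call, creating a container now only needs the WalletInfo itself; the signer comes from wallet.config_path. A usage sketch with placeholder fixtures:

# Sketch; wallet, shell and endpoint are assumed test fixtures.
cid = create_container(
    wallet,
    shell=shell,
    endpoint=endpoint,
    rule=DEFAULT_EC_PLACEMENT_RULE,  # "EC 3.1", newly introduced above
)
wait_for_container_creation(wallet, cid, shell, endpoint)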
def wait_for_container_creation( def wait_for_container_creation(wallet: WalletInfo, cid: str, shell: Shell, endpoint: str, attempts: int = 15, sleep_interval: int = 1):
wallet: str, cid: str, shell: Shell, endpoint: str, attempts: int = 15, sleep_interval: int = 1
):
for _ in range(attempts): for _ in range(attempts):
containers = list_containers(wallet, shell, endpoint) containers = list_containers(wallet, shell, endpoint)
if cid in containers: if cid in containers:
@ -181,9 +172,7 @@ def wait_for_container_creation(
raise RuntimeError(f"After {attempts * sleep_interval} seconds container {cid} hasn't been persisted; exiting") raise RuntimeError(f"After {attempts * sleep_interval} seconds container {cid} hasn't been persisted; exiting")
def wait_for_container_deletion( def wait_for_container_deletion(wallet: WalletInfo, cid: str, shell: Shell, endpoint: str, attempts: int = 30, sleep_interval: int = 1):
wallet: str, cid: str, shell: Shell, endpoint: str, attempts: int = 30, sleep_interval: int = 1
):
for _ in range(attempts): for _ in range(attempts):
try: try:
get_container(wallet, cid, shell=shell, endpoint=endpoint) get_container(wallet, cid, shell=shell, endpoint=endpoint)
@ -197,29 +186,26 @@ def wait_for_container_deletion(
@reporter.step("List Containers") @reporter.step("List Containers")
def list_containers( def list_containers(wallet: WalletInfo, shell: Shell, endpoint: str, timeout: Optional[str] = CLI_DEFAULT_TIMEOUT) -> list[str]:
wallet: str, shell: Shell, endpoint: str, timeout: Optional[str] = CLI_DEFAULT_TIMEOUT
) -> list[str]:
""" """
A wrapper for `frostfs-cli container list` call. It returns all the A wrapper for `frostfs-cli container list` call. It returns all the
available containers for the given wallet. available containers for the given wallet.
Args: Args:
wallet (str): a wallet on whose behalf we list the containers wallet (WalletInfo): a wallet on whose behalf we list the containers
shell: executor for cli command shell: executor for cli command
endpoint: FrostFS endpoint to send request to, appends to `--rpc-endpoint` key endpoint: FrostFS endpoint to send request to, appends to `--rpc-endpoint` key
timeout: Timeout for the operation. timeout: Timeout for the operation.
Returns: Returns:
(list): list of containers (list): list of containers
""" """
cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, DEFAULT_WALLET_CONFIG) cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet.config_path)
result = cli.container.list(rpc_endpoint=endpoint, wallet=wallet, timeout=timeout) result = cli.container.list(rpc_endpoint=endpoint, timeout=timeout)
logger.info(f"Containers: \n{result}")
return result.stdout.split() return result.stdout.split()
@reporter.step("List Objects in container") @reporter.step("List Objects in container")
def list_objects( def list_objects(
wallet: str, wallet: WalletInfo,
shell: Shell, shell: Shell,
container_id: str, container_id: str,
endpoint: str, endpoint: str,
@ -229,7 +215,7 @@ def list_objects(
A wrapper for `frostfs-cli container list-objects` call. It returns all the A wrapper for `frostfs-cli container list-objects` call. It returns all the
available objects in container. available objects in container.
Args: Args:
wallet (str): a wallet on whose behalf we list the containers objects wallet (WalletInfo): a wallet on whose behalf we list the containers objects
shell: executor for cli command shell: executor for cli command
container_id: cid of container container_id: cid of container
endpoint: FrostFS endpoint to send request to, appends to `--rpc-endpoint` key endpoint: FrostFS endpoint to send request to, appends to `--rpc-endpoint` key
@ -237,15 +223,15 @@ def list_objects(
Returns: Returns:
(list): list of objects (list): list of objects
""" """
cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, DEFAULT_WALLET_CONFIG) cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet.config_path)
result = cli.container.list_objects(rpc_endpoint=endpoint, wallet=wallet, cid=container_id, timeout=timeout) result = cli.container.list_objects(rpc_endpoint=endpoint, cid=container_id, timeout=timeout)
logger.info(f"Container objects: \n{result}") logger.info(f"Container objects: \n{result}")
return result.stdout.split() return result.stdout.split()
@reporter.step("Get Container") @reporter.step("Get Container")
def get_container( def get_container(
wallet: str, wallet: WalletInfo,
cid: str, cid: str,
shell: Shell, shell: Shell,
endpoint: str, endpoint: str,
@ -256,7 +242,7 @@ def get_container(
A wrapper for `frostfs-cli container get` call. It extracts container's A wrapper for `frostfs-cli container get` call. It extracts container's
attributes and rearranges them into a more compact view. attributes and rearranges them into a more compact view.
Args: Args:
wallet (str): path to a wallet on whose behalf we get the container wallet (WalletInfo): wallet on whose behalf we get the container
cid (str): ID of the container to get cid (str): ID of the container to get
shell: executor for cli command shell: executor for cli command
endpoint: FrostFS endpoint to send request to, appends to `--rpc-endpoint` key endpoint: FrostFS endpoint to send request to, appends to `--rpc-endpoint` key
@ -266,8 +252,8 @@ def get_container(
(dict, str): dict of container attributes (dict, str): dict of container attributes
""" """
cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, DEFAULT_WALLET_CONFIG) cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet.config_path)
result = cli.container.get(rpc_endpoint=endpoint, wallet=wallet, cid=cid, json_mode=json_mode, timeout=timeout) result = cli.container.get(rpc_endpoint=endpoint, cid=cid, json_mode=json_mode, timeout=timeout)
if not json_mode: if not json_mode:
return result.stdout return result.stdout
@ -284,37 +270,34 @@ def get_container(
@reporter.step("Delete Container") @reporter.step("Delete Container")
# TODO: make the error message about a non-found container more user-friendly # TODO: make the error message about a non-found container more user-friendly
def delete_container( def delete_container(
wallet: str, wallet: WalletInfo,
cid: str, cid: str,
shell: Shell, shell: Shell,
endpoint: str, endpoint: str,
force: bool = False, force: bool = False,
session_token: Optional[str] = None, session_token: Optional[str] = None,
await_mode: bool = False, await_mode: bool = False,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
) -> None: ) -> None:
""" """
A wrapper for `frostfs-cli container delete` call. A wrapper for `frostfs-cli container delete` call.
Args: Args:
wallet (str): path to a wallet on whose behalf we delete the container await_mode: Block execution until container is removed.
wallet (WalletInfo): wallet on whose behalf we delete the container
cid (str): ID of the container to delete cid (str): ID of the container to delete
shell: executor for cli command shell: executor for cli command
endpoint: FrostFS endpoint to send request to, appends to `--rpc-endpoint` key endpoint: FrostFS endpoint to send request to, appends to `--rpc-endpoint` key
force (bool): do not check whether container contains locks and remove immediately force (bool): do not check whether container contains locks and remove immediately
session_token: a path to session token file session_token: a path to session token file
timeout: Timeout for the operation.
This function doesn't return anything. This function doesn't return anything.
""" """
cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, DEFAULT_WALLET_CONFIG) cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet.config_path)
cli.container.delete( cli.container.delete(
wallet=wallet,
cid=cid, cid=cid,
rpc_endpoint=endpoint, rpc_endpoint=endpoint,
force=force, force=force,
session=session_token, session=session_token,
await_mode=await_mode, await_mode=await_mode,
timeout=timeout,
) )
@ -344,28 +327,17 @@ def _parse_cid(output: str) -> str:
return splitted[1] return splitted[1]
@reporter.step("Search container by name")
def search_container_by_name(name: str, node: ClusterNode):
node_shell = node.host.get_shell()
output = node_shell.exec(f"curl -I HEAD http://127.0.0.1:8084/{name}")
pattern = r"X-Container-Id: (\S+)"
cid = re.findall(pattern, output.stdout)
if cid:
return cid[0]
return None
@reporter.step("Search for nodes with a container") @reporter.step("Search for nodes with a container")
def search_nodes_with_container( def search_nodes_with_container(
wallet: str, wallet: WalletInfo,
cid: str, cid: str,
shell: Shell, shell: Shell,
endpoint: str, endpoint: str,
cluster: Cluster, cluster: Cluster,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT, timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
) -> list[ClusterNode]: ) -> list[ClusterNode]:
cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, DEFAULT_WALLET_CONFIG) cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet.config_path)
result = cli.container.search_node(rpc_endpoint=endpoint, wallet=wallet, cid=cid, timeout=timeout) result = cli.container.search_node(rpc_endpoint=endpoint, cid=cid, timeout=timeout)
pattern = r"[0-9]+(?:\.[0-9]+){3}" pattern = r"[0-9]+(?:\.[0-9]+){3}"
nodes_ip = list(set(re.findall(pattern, result.stdout))) nodes_ip = list(set(re.findall(pattern, result.stdout)))
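The node search keeps its shape but now authenticates through the WalletInfo config. A sketch of the updated call:

# Sketch; cluster is the usual Cluster fixture.
container_nodes = search_nodes_with_container(wallet, cid, shell, endpoint, cluster)
for cluster_node in container_nodes:
    logger.info(f"Container {cid} is stored on {cluster_node.host_ip}")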


@ -9,18 +9,21 @@ from frostfs_testlib import reporter
from frostfs_testlib.cli import FrostfsCli from frostfs_testlib.cli import FrostfsCli
from frostfs_testlib.cli.neogo import NeoGo from frostfs_testlib.cli.neogo import NeoGo
from frostfs_testlib.resources.cli import CLI_DEFAULT_TIMEOUT, FROSTFS_CLI_EXEC, NEOGO_EXECUTABLE from frostfs_testlib.resources.cli import CLI_DEFAULT_TIMEOUT, FROSTFS_CLI_EXEC, NEOGO_EXECUTABLE
from frostfs_testlib.resources.common import ASSETS_DIR, DEFAULT_WALLET_CONFIG from frostfs_testlib.resources.common import ASSETS_DIR
from frostfs_testlib.shell import Shell from frostfs_testlib.shell import Shell
from frostfs_testlib.storage.cluster import Cluster, ClusterNode from frostfs_testlib.storage.cluster import Cluster, ClusterNode
from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
from frostfs_testlib.testing import wait_for_success
from frostfs_testlib.utils import json_utils from frostfs_testlib.utils import json_utils
from frostfs_testlib.utils.cli_utils import parse_cmd_table, parse_netmap_output from frostfs_testlib.utils.cli_utils import parse_netmap_output
from frostfs_testlib.utils.file_utils import TestFile
logger = logging.getLogger("NeoLogger") logger = logging.getLogger("NeoLogger")
@reporter.step("Get object from random node") @reporter.step("Get object from random node")
def get_object_from_random_node( def get_object_from_random_node(
wallet: str, wallet: WalletInfo,
cid: str, cid: str,
oid: str, oid: str,
shell: Shell, shell: Shell,
@ -28,7 +31,6 @@ def get_object_from_random_node(
bearer: Optional[str] = None, bearer: Optional[str] = None,
write_object: Optional[str] = None, write_object: Optional[str] = None,
xhdr: Optional[dict] = None, xhdr: Optional[dict] = None,
wallet_config: Optional[str] = None,
no_progress: bool = True, no_progress: bool = True,
session: Optional[str] = None, session: Optional[str] = None,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT, timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
@ -44,7 +46,6 @@ def get_object_from_random_node(
cluster: cluster object cluster: cluster object
bearer (optional, str): path to Bearer Token file, appends to `--bearer` key bearer (optional, str): path to Bearer Token file, appends to `--bearer` key
write_object (optional, str): path to downloaded file, appends to `--file` key write_object (optional, str): path to downloaded file, appends to `--file` key
wallet_config(optional, str): path to the wallet config
no_progress(optional, bool): do not show progress bar no_progress(optional, bool): do not show progress bar
xhdr (optional, dict): Request X-Headers in form of Key=Value xhdr (optional, dict): Request X-Headers in form of Key=Value
session (optional, dict): path to a JSON-encoded container session token session (optional, dict): path to a JSON-encoded container session token
@ -62,7 +63,6 @@ def get_object_from_random_node(
bearer, bearer,
write_object, write_object,
xhdr, xhdr,
wallet_config,
no_progress, no_progress,
session, session,
timeout, timeout,
@ -71,7 +71,7 @@ def get_object_from_random_node(
@reporter.step("Get object from {endpoint}") @reporter.step("Get object from {endpoint}")
def get_object( def get_object(
wallet: str, wallet: WalletInfo,
cid: str, cid: str,
oid: str, oid: str,
shell: Shell, shell: Shell,
@ -79,23 +79,21 @@ def get_object(
bearer: Optional[str] = None, bearer: Optional[str] = None,
write_object: Optional[str] = None, write_object: Optional[str] = None,
xhdr: Optional[dict] = None, xhdr: Optional[dict] = None,
wallet_config: Optional[str] = None,
no_progress: bool = True, no_progress: bool = True,
session: Optional[str] = None, session: Optional[str] = None,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT, timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
) -> str: ) -> TestFile:
""" """
GET from FrostFS. GET from FrostFS.
Args: Args:
wallet (str): wallet on whose behalf GET is done wallet (WalletInfo): wallet on whose behalf GET is done
cid (str): ID of Container where we get the Object from cid (str): ID of Container where we get the Object from
oid (str): Object ID oid (str): Object ID
shell: executor for cli command shell: executor for cli command
bearer: path to Bearer Token file, appends to `--bearer` key bearer: path to Bearer Token file, appends to `--bearer` key
write_object: path to downloaded file, appends to `--file` key write_object: path to downloaded file, appends to `--file` key
endpoint: FrostFS endpoint to send request to, appends to `--rpc-endpoint` key endpoint: FrostFS endpoint to send request to, appends to `--rpc-endpoint` key
wallet_config(optional, str): path to the wallet config
no_progress(optional, bool): do not show progress bar no_progress(optional, bool): do not show progress bar
xhdr (optional, dict): Request X-Headers in form of Key=Value xhdr (optional, dict): Request X-Headers in form of Key=Value
session (optional, dict): path to a JSON-encoded container session token session (optional, dict): path to a JSON-encoded container session token
@ -106,15 +104,14 @@ def get_object(
if not write_object: if not write_object:
write_object = str(uuid.uuid4()) write_object = str(uuid.uuid4())
file_path = os.path.join(ASSETS_DIR, write_object) test_file = TestFile(os.path.join(ASSETS_DIR, write_object))
cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet_config or DEFAULT_WALLET_CONFIG) cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet.config_path)
cli.object.get( cli.object.get(
rpc_endpoint=endpoint, rpc_endpoint=endpoint,
wallet=wallet,
cid=cid, cid=cid,
oid=oid, oid=oid,
file=file_path, file=test_file,
bearer=bearer, bearer=bearer,
no_progress=no_progress, no_progress=no_progress,
xhdr=xhdr, xhdr=xhdr,
@ -122,19 +119,18 @@ def get_object(
timeout=timeout, timeout=timeout,
) )
return file_path return test_file
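get_object now returns a TestFile rather than a plain string; the same object is passed to open() later in this diff, so it can be treated as a filesystem path. A sketch:

# Sketch; the download lands under ASSETS_DIR as before.
test_file = get_object(wallet, cid, oid, shell, endpoint)
with open(test_file, "rb") as downloaded:
    payload = downloaded.read()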
@reporter.step("Get Range Hash from {endpoint}") @reporter.step("Get Range Hash from {endpoint}")
def get_range_hash( def get_range_hash(
wallet: str, wallet: WalletInfo,
cid: str, cid: str,
oid: str, oid: str,
range_cut: str, range_cut: str,
shell: Shell, shell: Shell,
endpoint: str, endpoint: str,
bearer: Optional[str] = None, bearer: Optional[str] = None,
wallet_config: Optional[str] = None,
xhdr: Optional[dict] = None, xhdr: Optional[dict] = None,
session: Optional[str] = None, session: Optional[str] = None,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT, timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
@ -151,17 +147,15 @@ def get_range_hash(
range_cut: Range to take hash from in the form offset1:length1,..., range_cut: Range to take hash from in the form offset1:length1,...,
value to pass to the `--range` parameter value to pass to the `--range` parameter
endpoint: FrostFS endpoint to send request to, appends to `--rpc-endpoint` key endpoint: FrostFS endpoint to send request to, appends to `--rpc-endpoint` key
wallet_config: path to the wallet config
xhdr: Request X-Headers in form of Key=Values xhdr: Request X-Headers in form of Key=Values
session: Filepath to a JSON- or binary-encoded token of the object RANGEHASH session. session: Filepath to a JSON- or binary-encoded token of the object RANGEHASH session.
timeout: Timeout for the operation. timeout: Timeout for the operation.
Returns: Returns:
None None
""" """
cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet_config or DEFAULT_WALLET_CONFIG) cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet.config_path)
result = cli.object.hash( result = cli.object.hash(
rpc_endpoint=endpoint, rpc_endpoint=endpoint,
wallet=wallet,
cid=cid, cid=cid,
oid=oid, oid=oid,
range=range_cut, range=range_cut,
@ -177,7 +171,7 @@ def get_range_hash(
@reporter.step("Put object to random node") @reporter.step("Put object to random node")
def put_object_to_random_node( def put_object_to_random_node(
wallet: str, wallet: WalletInfo,
path: str, path: str,
cid: str, cid: str,
shell: Shell, shell: Shell,
@ -186,7 +180,6 @@ def put_object_to_random_node(
copies_number: Optional[int] = None, copies_number: Optional[int] = None,
attributes: Optional[dict] = None, attributes: Optional[dict] = None,
xhdr: Optional[dict] = None, xhdr: Optional[dict] = None,
wallet_config: Optional[str] = None,
expire_at: Optional[int] = None, expire_at: Optional[int] = None,
no_progress: bool = True, no_progress: bool = True,
session: Optional[str] = None, session: Optional[str] = None,
@ -205,7 +198,6 @@ def put_object_to_random_node(
copies_number: Number of copies of the object to store within the RPC call copies_number: Number of copies of the object to store within the RPC call
attributes: User attributes in form of Key1=Value1,Key2=Value2 attributes: User attributes in form of Key1=Value1,Key2=Value2
cluster: cluster under test cluster: cluster under test
wallet_config: path to the wallet config
no_progress: do not show progress bar no_progress: do not show progress bar
expire_at: Last epoch in the life of the object expire_at: Last epoch in the life of the object
xhdr: Request X-Headers in form of Key=Value xhdr: Request X-Headers in form of Key=Value
@ -226,7 +218,6 @@ def put_object_to_random_node(
copies_number, copies_number,
attributes, attributes,
xhdr, xhdr,
wallet_config,
expire_at, expire_at,
no_progress, no_progress,
session, session,
@ -236,7 +227,7 @@ def put_object_to_random_node(
@reporter.step("Put object at {endpoint} in container {cid}") @reporter.step("Put object at {endpoint} in container {cid}")
def put_object( def put_object(
wallet: str, wallet: WalletInfo,
path: str, path: str,
cid: str, cid: str,
shell: Shell, shell: Shell,
@ -245,7 +236,6 @@ def put_object(
copies_number: Optional[int] = None, copies_number: Optional[int] = None,
attributes: Optional[dict] = None, attributes: Optional[dict] = None,
xhdr: Optional[dict] = None, xhdr: Optional[dict] = None,
wallet_config: Optional[str] = None,
expire_at: Optional[int] = None, expire_at: Optional[int] = None,
no_progress: bool = True, no_progress: bool = True,
session: Optional[str] = None, session: Optional[str] = None,
@ -263,7 +253,6 @@ def put_object(
copies_number: Number of copies of the object to store within the RPC call copies_number: Number of copies of the object to store within the RPC call
attributes: User attributes in form of Key1=Value1,Key2=Value2 attributes: User attributes in form of Key1=Value1,Key2=Value2
endpoint: FrostFS endpoint to send request to, appends to `--rpc-endpoint` key endpoint: FrostFS endpoint to send request to, appends to `--rpc-endpoint` key
wallet_config: path to the wallet config
no_progress: do not show progress bar no_progress: do not show progress bar
expire_at: Last epoch in the life of the object expire_at: Last epoch in the life of the object
xhdr: Request X-Headers in form of Key=Value xhdr: Request X-Headers in form of Key=Value
@ -273,10 +262,9 @@ def put_object(
(str): ID of uploaded Object (str): ID of uploaded Object
""" """
cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet_config or DEFAULT_WALLET_CONFIG) cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet.config_path)
result = cli.object.put( result = cli.object.put(
rpc_endpoint=endpoint, rpc_endpoint=endpoint,
wallet=wallet,
file=path, file=path,
cid=cid, cid=cid,
attributes=attributes, attributes=attributes,
@ -297,13 +285,12 @@ def put_object(
@reporter.step("Delete object {cid}/{oid} from {endpoint}") @reporter.step("Delete object {cid}/{oid} from {endpoint}")
def delete_object( def delete_object(
wallet: str, wallet: WalletInfo,
cid: str, cid: str,
oid: str, oid: str,
shell: Shell, shell: Shell,
endpoint: str, endpoint: str,
bearer: str = "", bearer: str = "",
wallet_config: Optional[str] = None,
xhdr: Optional[dict] = None, xhdr: Optional[dict] = None,
session: Optional[str] = None, session: Optional[str] = None,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT, timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
@ -318,7 +305,6 @@ def delete_object(
shell: executor for cli command shell: executor for cli command
bearer: path to Bearer Token file, appends to `--bearer` key bearer: path to Bearer Token file, appends to `--bearer` key
endpoint: FrostFS endpoint to send request to, appends to `--rpc-endpoint` key endpoint: FrostFS endpoint to send request to, appends to `--rpc-endpoint` key
wallet_config: path to the wallet config
xhdr: Request X-Headers in form of Key=Value xhdr: Request X-Headers in form of Key=Value
session: path to a JSON-encoded container session token session: path to a JSON-encoded container session token
timeout: Timeout for the operation. timeout: Timeout for the operation.
@ -326,10 +312,9 @@ def delete_object(
(str): Tombstone ID (str): Tombstone ID
""" """
cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet_config or DEFAULT_WALLET_CONFIG) cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet.config_path)
result = cli.object.delete( result = cli.object.delete(
rpc_endpoint=endpoint, rpc_endpoint=endpoint,
wallet=wallet,
cid=cid, cid=cid,
oid=oid, oid=oid,
bearer=bearer, bearer=bearer,
@ -345,13 +330,12 @@ def delete_object(
@reporter.step("Get Range") @reporter.step("Get Range")
def get_range( def get_range(
wallet: str, wallet: WalletInfo,
cid: str, cid: str,
oid: str, oid: str,
range_cut: str, range_cut: str,
shell: Shell, shell: Shell,
endpoint: str, endpoint: str,
wallet_config: Optional[str] = None,
bearer: str = "", bearer: str = "",
xhdr: Optional[dict] = None, xhdr: Optional[dict] = None,
session: Optional[str] = None, session: Optional[str] = None,
@ -368,37 +352,35 @@ def get_range(
shell: executor for cli command shell: executor for cli command
endpoint: FrostFS endpoint to send request to, appends to `--rpc-endpoint` key endpoint: FrostFS endpoint to send request to, appends to `--rpc-endpoint` key
bearer: path to Bearer Token file, appends to `--bearer` key bearer: path to Bearer Token file, appends to `--bearer` key
wallet_config: path to the wallet config
xhdr: Request X-Headers in form of Key=Value xhdr: Request X-Headers in form of Key=Value
session: path to a JSON-encoded container session token session: path to a JSON-encoded container session token
timeout: Timeout for the operation. timeout: Timeout for the operation.
Returns: Returns:
(str, bytes) - path to the file with range content and content of this file as bytes (str, bytes) - path to the file with range content and content of this file as bytes
""" """
range_file_path = os.path.join(ASSETS_DIR, str(uuid.uuid4())) test_file = TestFile(os.path.join(ASSETS_DIR, str(uuid.uuid4())))
cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet_config or DEFAULT_WALLET_CONFIG) cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet.config_path)
cli.object.range( cli.object.range(
rpc_endpoint=endpoint, rpc_endpoint=endpoint,
wallet=wallet,
cid=cid, cid=cid,
oid=oid, oid=oid,
range=range_cut, range=range_cut,
file=range_file_path, file=test_file,
bearer=bearer, bearer=bearer,
xhdr=xhdr, xhdr=xhdr,
session=session, session=session,
timeout=timeout, timeout=timeout,
) )
with open(range_file_path, "rb") as file: with open(test_file, "rb") as file:
content = file.read() content = file.read()
return range_file_path, content return test_file, content
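get_range follows the same TestFile pattern and still returns the content alongside the path. A sketch, assuming the object holds at least 64 bytes and that range_cut uses the offset:length form documented for get_range_hash:

# Sketch; "0:64" means 64 bytes starting at offset 0.
test_file, content = get_range(wallet, cid, oid, "0:64", shell, endpoint)
assert len(content) == 64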
@reporter.step("Lock Object") @reporter.step("Lock Object")
def lock_object( def lock_object(
wallet: str, wallet: WalletInfo,
cid: str, cid: str,
oid: str, oid: str,
shell: Shell, shell: Shell,
@ -408,7 +390,6 @@ def lock_object(
address: Optional[str] = None, address: Optional[str] = None,
bearer: Optional[str] = None, bearer: Optional[str] = None,
session: Optional[str] = None, session: Optional[str] = None,
wallet_config: Optional[str] = None,
ttl: Optional[int] = None, ttl: Optional[int] = None,
xhdr: Optional[dict] = None, xhdr: Optional[dict] = None,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT, timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
@ -435,13 +416,12 @@ def lock_object(
Lock object ID Lock object ID
""" """
cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet_config or DEFAULT_WALLET_CONFIG) cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet.config_path)
result = cli.object.lock( result = cli.object.lock(
rpc_endpoint=endpoint, rpc_endpoint=endpoint,
lifetime=lifetime, lifetime=lifetime,
expire_at=expire_at, expire_at=expire_at,
address=address, address=address,
wallet=wallet,
cid=cid, cid=cid,
oid=oid, oid=oid,
bearer=bearer, bearer=bearer,
@ -459,14 +439,13 @@ def lock_object(
@reporter.step("Search object") @reporter.step("Search object")
def search_object( def search_object(
wallet: str, wallet: WalletInfo,
cid: str, cid: str,
shell: Shell, shell: Shell,
endpoint: str, endpoint: str,
bearer: str = "", bearer: str = "",
filters: Optional[dict] = None, filters: Optional[dict] = None,
expected_objects_list: Optional[list] = None, expected_objects_list: Optional[list] = None,
wallet_config: Optional[str] = None,
xhdr: Optional[dict] = None, xhdr: Optional[dict] = None,
session: Optional[str] = None, session: Optional[str] = None,
phy: bool = False, phy: bool = False,
@ -484,7 +463,6 @@ def search_object(
endpoint: FrostFS endpoint to send request to, appends to `--rpc-endpoint` key endpoint: FrostFS endpoint to send request to, appends to `--rpc-endpoint` key
filters: key=value pairs to filter Objects filters: key=value pairs to filter Objects
expected_objects_list: a list of ObjectIDs to compare found Objects with expected_objects_list: a list of ObjectIDs to compare found Objects with
wallet_config: path to the wallet config
xhdr: Request X-Headers in form of Key=Value xhdr: Request X-Headers in form of Key=Value
session: path to a JSON-encoded container session token session: path to a JSON-encoded container session token
phy: Search physically stored objects. phy: Search physically stored objects.
@ -495,10 +473,9 @@ def search_object(
list of found ObjectIDs list of found ObjectIDs
""" """
cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet_config or DEFAULT_WALLET_CONFIG) cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet.config_path)
result = cli.object.search( result = cli.object.search(
rpc_endpoint=endpoint, rpc_endpoint=endpoint,
wallet=wallet,
cid=cid, cid=cid,
bearer=bearer, bearer=bearer,
xhdr=xhdr, xhdr=xhdr,
@ -513,23 +490,18 @@ def search_object(
if expected_objects_list: if expected_objects_list:
if sorted(found_objects) == sorted(expected_objects_list): if sorted(found_objects) == sorted(expected_objects_list):
logger.info( logger.info(f"Found objects list '{found_objects}' " f"is equal to expected list '{expected_objects_list}'")
f"Found objects list '{found_objects}' " f"is equal to expected list '{expected_objects_list}'"
)
else: else:
logger.warning( logger.warning(f"Found object list {found_objects} " f"is not equal to expected list '{expected_objects_list}'")
f"Found object list {found_objects} " f"is not equal to expected list '{expected_objects_list}'"
)
return found_objects return found_objects
@reporter.step("Get netmap netinfo") @reporter.step("Get netmap netinfo")
def get_netmap_netinfo( def get_netmap_netinfo(
wallet: str, wallet: WalletInfo,
shell: Shell, shell: Shell,
endpoint: str, endpoint: str,
wallet_config: Optional[str] = None,
address: Optional[str] = None, address: Optional[str] = None,
ttl: Optional[int] = None, ttl: Optional[int] = None,
xhdr: Optional[dict] = None, xhdr: Optional[dict] = None,
@ -539,7 +511,7 @@ def get_netmap_netinfo(
Get netmap netinfo output from node Get netmap netinfo output from node
Args: Args:
wallet (str): wallet on whose behalf request is done wallet (WalletInfo): wallet on whose behalf request is done
shell: executor for cli command shell: executor for cli command
endpoint (optional, str): FrostFS endpoint to send request to, appends to `--rpc-endpoint` key endpoint (optional, str): FrostFS endpoint to send request to, appends to `--rpc-endpoint` key
address: Address of wallet account address: Address of wallet account
@ -552,9 +524,8 @@ def get_netmap_netinfo(
(dict): dict of parsed command output (dict): dict of parsed command output
""" """
cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet_config or DEFAULT_WALLET_CONFIG) cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet.config_path)
output = cli.netmap.netinfo( output = cli.netmap.netinfo(
wallet=wallet,
rpc_endpoint=endpoint, rpc_endpoint=endpoint,
address=address, address=address,
ttl=ttl, ttl=ttl,
@ -578,7 +549,7 @@ def get_netmap_netinfo(
@reporter.step("Head object") @reporter.step("Head object")
def head_object( def head_object(
wallet: str, wallet: WalletInfo,
cid: str, cid: str,
oid: str, oid: str,
shell: Shell, shell: Shell,
@ -588,7 +559,6 @@ def head_object(
json_output: bool = True, json_output: bool = True,
is_raw: bool = False, is_raw: bool = False,
is_direct: bool = False, is_direct: bool = False,
wallet_config: Optional[str] = None,
session: Optional[str] = None, session: Optional[str] = None,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT, timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
): ):
@ -596,7 +566,7 @@ def head_object(
HEAD an Object. HEAD an Object.
Args: Args:
wallet (str): wallet on whose behalf HEAD is done wallet (WalletInfo): wallet on whose behalf HEAD is done
cid (str): ID of Container where we get the Object from cid (str): ID of Container where we get the Object from
oid (str): ObjectID to HEAD oid (str): ObjectID to HEAD
shell: executor for cli command shell: executor for cli command
@ -608,7 +578,6 @@ def head_object(
turns into `--raw` key turns into `--raw` key
is_direct(optional, bool): send request directly to the node or not; this flag is_direct(optional, bool): send request directly to the node or not; this flag
turns into `--ttl 1` key turns into `--ttl 1` key
wallet_config(optional, str): path to the wallet config
xhdr (optional, dict): Request X-Headers in form of Key=Value xhdr (optional, dict): Request X-Headers in form of Key=Value
session (optional, dict): path to a JSON-encoded container session token session (optional, dict): path to a JSON-encoded container session token
timeout: Timeout for the operation. timeout: Timeout for the operation.
@ -619,10 +588,9 @@ def head_object(
(str): HEAD response as a plain text (str): HEAD response as a plain text
""" """
cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet_config or DEFAULT_WALLET_CONFIG) cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet.config_path)
result = cli.object.head( result = cli.object.head(
rpc_endpoint=endpoint, rpc_endpoint=endpoint,
wallet=wallet,
cid=cid, cid=cid,
oid=oid, oid=oid,
bearer=bearer, bearer=bearer,
@ -648,32 +616,32 @@ def head_object(
fst_line_idx = result.stdout.find("\n") fst_line_idx = result.stdout.find("\n")
decoded = json.loads(result.stdout[fst_line_idx:]) decoded = json.loads(result.stdout[fst_line_idx:])
# If response is an EC object header, it has `chunks` key
if "chunks" in decoded.keys():
logger.info("decoding ec chunks")
return decoded["chunks"]
# If response is Complex Object header, it has `splitId` key # If response is Complex Object header, it has `splitId` key
if "splitId" in decoded.keys(): if "splitId" in decoded.keys():
logger.info("decoding split header")
return json_utils.decode_split_header(decoded) return json_utils.decode_split_header(decoded)
# If response is Last or Linking Object header, # If response is Last or Linking Object header,
# it has `header` dictionary and non-null `split` dictionary # it has `header` dictionary and non-null `split` dictionary
if "split" in decoded["header"].keys(): if "split" in decoded["header"].keys():
if decoded["header"]["split"]: if decoded["header"]["split"]:
logger.info("decoding linking object")
return json_utils.decode_linking_object(decoded) return json_utils.decode_linking_object(decoded)
if decoded["header"]["objectType"] == "STORAGE_GROUP": if decoded["header"]["objectType"] == "STORAGE_GROUP":
logger.info("decoding storage group")
return json_utils.decode_storage_group(decoded) return json_utils.decode_storage_group(decoded)
if decoded["header"]["objectType"] == "TOMBSTONE": if decoded["header"]["objectType"] == "TOMBSTONE":
logger.info("decoding tombstone")
return json_utils.decode_tombstone(decoded) return json_utils.decode_tombstone(decoded)
logger.info("decoding simple header")
return json_utils.decode_simple_header(decoded) return json_utils.decode_simple_header(decoded)
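The new chunks branch means head_object can return the decoded chunk list for an erasure-coded object (matching the EC placement rule added in the container module) instead of a header dict. A hedged sketch; the exact shape of each chunk entry is not shown in this diff:

# Sketch; an EC header decodes to the raw "chunks" list.
response = head_object(wallet, cid, oid, shell, endpoint)
if isinstance(response, list):
    logger.info(f"EC object is split into {len(response)} chunks")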
@reporter.step("Run neo-go dump-keys") @reporter.step("Run neo-go dump-keys")
def neo_go_dump_keys(shell: Shell, wallet: str) -> dict: def neo_go_dump_keys(shell: Shell, wallet: WalletInfo) -> dict:
""" """
Run neo-go dump keys command Run neo-go dump keys command
@ -722,48 +690,56 @@ def neo_go_query_height(shell: Shell, endpoint: str) -> dict:
latest_block = first_line.split(":") latest_block = first_line.split(":")
# taking the second line of the command's output, which contains the wallet key # taking the second line of the command's output, which contains the wallet key
second_line = output.split("\n")[1] second_line = output.split("\n")[1]
if second_line != "":
validated_state = second_line.split(":") validated_state = second_line.split(":")
return { return {
latest_block[0].replace(":", ""): int(latest_block[1]), latest_block[0].replace(":", ""): int(latest_block[1]),
validated_state[0].replace(":", ""): int(validated_state[1]), validated_state[0].replace(":", ""): int(validated_state[1]),
} }
return {latest_block[0].replace(":", ""): int(latest_block[1])}
@wait_for_success()
@reporter.step("Search object nodes") @reporter.step("Search object nodes")
def get_object_nodes( def get_object_nodes(
cluster: Cluster, cluster: Cluster,
wallet: str,
cid: str, cid: str,
oid: str, oid: str,
shell: Shell, alive_node: ClusterNode,
endpoint: str,
bearer: str = "", bearer: str = "",
xhdr: Optional[dict] = None, xhdr: Optional[dict] = None,
is_direct: bool = False, is_direct: bool = False,
verify_presence_all: bool = False, verify_presence_all: bool = False,
wallet_config: Optional[str] = None,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT, timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
) -> list[ClusterNode]: ) -> list[ClusterNode]:
cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet_config or DEFAULT_WALLET_CONFIG) shell = alive_node.host.get_shell()
endpoint = alive_node.storage_node.get_rpc_endpoint()
wallet = alive_node.storage_node.get_remote_wallet_path()
wallet_config = alive_node.storage_node.get_remote_wallet_config_path()
result_object_nodes = cli.object.nodes( cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet_config)
response = cli.object.nodes(
rpc_endpoint=endpoint, rpc_endpoint=endpoint,
wallet=wallet,
cid=cid, cid=cid,
oid=oid, oid=oid,
bearer=bearer, bearer=bearer,
ttl=1 if is_direct else None, ttl=1 if is_direct else None,
json=True,
xhdr=xhdr, xhdr=xhdr,
timeout=timeout, timeout=timeout,
verify_presence_all=verify_presence_all, verify_presence_all=verify_presence_all,
) )
parsing_output = parse_cmd_table(result_object_nodes.stdout, "|") response_json = json.loads(response.stdout)
list_object_nodes = [ # Currently, the command will show expected and confirmed nodes.
node # And we (currently) count only nodes which are both expected and confirmed
for node in parsing_output object_nodes_id = {
if node["should_contain_object"] == "true" and node["actually_contains_object"] == "true" required_node
] for data_object in response_json["data_objects"]
for required_node in data_object["required_nodes"]
if required_node in data_object["confirmed_nodes"]
}
netmap_nodes_list = parse_netmap_output( netmap_nodes_list = parse_netmap_output(
cli.netmap.snapshot( cli.netmap.snapshot(
@ -772,17 +748,11 @@ def get_object_nodes(
).stdout ).stdout
) )
netmap_nodes = [ netmap_nodes = [
netmap_node netmap_node for object_node in object_nodes_id for netmap_node in netmap_nodes_list if object_node == netmap_node.node_id
for object_node in list_object_nodes
for netmap_node in netmap_nodes_list
if object_node["node_id"] == netmap_node.node_id
] ]
result = [ object_nodes = [
cluster_node cluster_node for netmap_node in netmap_nodes for cluster_node in cluster.cluster_nodes if netmap_node.node == cluster_node.host_ip
for netmap_node in netmap_nodes
for cluster_node in cluster.cluster_nodes
if netmap_node.node == cluster_node.host_ip
] ]
return result return object_nodes
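get_object_nodes no longer takes wallet, shell or endpoint: all three are derived from alive_node, and the JSON output replaces the old pipe-separated table. A sketch of the new call:

# Sketch; any healthy cluster node can serve as alive_node.
alive_node = cluster.cluster_nodes[0]
nodes_with_object = get_object_nodes(cluster, cid, oid, alive_node)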


@ -0,0 +1,35 @@
import logging
from typing import Optional
from frostfs_testlib import reporter
from frostfs_testlib.cli import FrostfsCli
from frostfs_testlib.plugins import load_plugin
from frostfs_testlib.resources.cli import CLI_DEFAULT_TIMEOUT, FROSTFS_CLI_EXEC
from frostfs_testlib.shell import Shell
from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
logger = logging.getLogger("NeoLogger")
@reporter.step("Get Tree List")
def get_tree_list(
wallet: WalletInfo,
cid: str,
shell: Shell,
endpoint: str,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
) -> None:
"""
A wrapper for `frostfs-cli tree list` call.
Args:
wallet (WalletInfo): wallet on whose behalf we request the tree list
cid (str): ID of the container whose tree is listed
shell: executor for cli command
endpoint: FrostFS endpoint to send request to, appends to `--rpc-endpoint` key
timeout: Timeout for the operation.
This function doesn't return anything.
"""
cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet.config_path)
cli.tree.list(cid=cid, rpc_endpoint=endpoint, timeout=timeout)
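A usage sketch for the new wrapper; it returns nothing and simply raises if the underlying CLI call fails:

# Sketch; wallet, shell and endpoint are assumed fixtures.
get_tree_list(wallet, cid, shell, endpoint)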


@ -14,11 +14,11 @@ from typing import Optional, Tuple
from frostfs_testlib import reporter from frostfs_testlib import reporter
from frostfs_testlib.resources.cli import CLI_DEFAULT_TIMEOUT from frostfs_testlib.resources.cli import CLI_DEFAULT_TIMEOUT
from frostfs_testlib.resources.common import DEFAULT_WALLET_CONFIG
from frostfs_testlib.shell import Shell from frostfs_testlib.shell import Shell
from frostfs_testlib.steps.cli.object import head_object from frostfs_testlib.steps.cli.object import head_object
from frostfs_testlib.storage.cluster import Cluster, StorageNode from frostfs_testlib.storage.cluster import Cluster, StorageNode
from frostfs_testlib.storage.dataclasses.storage_object_info import StorageObjectInfo from frostfs_testlib.storage.dataclasses.storage_object_info import StorageObjectInfo
from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
logger = logging.getLogger("NeoLogger") logger = logging.getLogger("NeoLogger")
@ -44,7 +44,7 @@ def get_storage_object_chunks(
with reporter.step(f"Get complex object chunks (f{storage_object.oid})"): with reporter.step(f"Get complex object chunks (f{storage_object.oid})"):
split_object_id = get_link_object( split_object_id = get_link_object(
storage_object.wallet_file_path, storage_object.wallet,
storage_object.cid, storage_object.cid,
storage_object.oid, storage_object.oid,
shell, shell,
@ -53,7 +53,7 @@ def get_storage_object_chunks(
timeout=timeout, timeout=timeout,
) )
head = head_object( head = head_object(
storage_object.wallet_file_path, storage_object.wallet,
storage_object.cid, storage_object.cid,
split_object_id, split_object_id,
shell, shell,
@ -96,7 +96,7 @@ def get_complex_object_split_ranges(
chunks_ids = get_storage_object_chunks(storage_object, shell, cluster) chunks_ids = get_storage_object_chunks(storage_object, shell, cluster)
for chunk_id in chunks_ids: for chunk_id in chunks_ids:
head = head_object( head = head_object(
storage_object.wallet_file_path, storage_object.wallet,
storage_object.cid, storage_object.cid,
chunk_id, chunk_id,
shell, shell,
@ -114,13 +114,12 @@ def get_complex_object_split_ranges(
@reporter.step("Get Link Object") @reporter.step("Get Link Object")
def get_link_object( def get_link_object(
wallet: str, wallet: WalletInfo,
cid: str, cid: str,
oid: str, oid: str,
shell: Shell, shell: Shell,
nodes: list[StorageNode], nodes: list[StorageNode],
bearer: str = "", bearer: str = "",
wallet_config: str = DEFAULT_WALLET_CONFIG,
is_direct: bool = True, is_direct: bool = True,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT, timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
): ):
@ -154,7 +153,6 @@ def get_link_object(
is_raw=True, is_raw=True,
is_direct=is_direct, is_direct=is_direct,
bearer=bearer, bearer=bearer,
wallet_config=wallet_config,
timeout=timeout, timeout=timeout,
) )
if resp["link"]: if resp["link"]:
@ -167,7 +165,7 @@ def get_link_object(
@reporter.step("Get Last Object") @reporter.step("Get Last Object")
def get_last_object( def get_last_object(
wallet: str, wallet: WalletInfo,
cid: str, cid: str,
oid: str, oid: str,
shell: Shell, shell: Shell,


@ -4,13 +4,7 @@ from typing import Optional
from frostfs_testlib import reporter from frostfs_testlib import reporter
from frostfs_testlib.cli import FrostfsAdm, FrostfsCli, NeoGo from frostfs_testlib.cli import FrostfsAdm, FrostfsCli, NeoGo
from frostfs_testlib.resources.cli import ( from frostfs_testlib.resources.cli import CLI_DEFAULT_TIMEOUT, FROSTFS_ADM_CONFIG_PATH, FROSTFS_ADM_EXEC, FROSTFS_CLI_EXEC, NEOGO_EXECUTABLE
CLI_DEFAULT_TIMEOUT,
FROSTFS_ADM_CONFIG_PATH,
FROSTFS_ADM_EXEC,
FROSTFS_CLI_EXEC,
NEOGO_EXECUTABLE,
)
from frostfs_testlib.resources.common import MORPH_BLOCK_TIME from frostfs_testlib.resources.common import MORPH_BLOCK_TIME
from frostfs_testlib.shell import Shell from frostfs_testlib.shell import Shell
from frostfs_testlib.steps.payment_neogo import get_contract_hash from frostfs_testlib.steps.payment_neogo import get_contract_hash
@ -75,7 +69,7 @@ def get_epoch(shell: Shell, cluster: Cluster, alive_node: Optional[StorageNode]
@reporter.step("Tick Epoch") @reporter.step("Tick Epoch")
def tick_epoch(shell: Shell, cluster: Cluster, alive_node: Optional[StorageNode] = None): def tick_epoch(shell: Shell, cluster: Cluster, alive_node: Optional[StorageNode] = None, delta: Optional[int] = None):
""" """
Tick epoch using frostfs-adm or NeoGo if frostfs-adm is not available (DevEnv) Tick epoch using frostfs-adm or NeoGo if frostfs-adm is not available (DevEnv)
Args: Args:
@ -87,19 +81,24 @@ def tick_epoch(shell: Shell, cluster: Cluster, alive_node: Optional[StorageNode]
alive_node = alive_node if alive_node else cluster.services(StorageNode)[0] alive_node = alive_node if alive_node else cluster.services(StorageNode)[0]
remote_shell = alive_node.host.get_shell() remote_shell = alive_node.host.get_shell()
if FROSTFS_ADM_EXEC and FROSTFS_ADM_CONFIG_PATH: if "force_transactions" not in alive_node.host.config.attributes:
# If frostfs-adm is available, then we tick epoch with it (to be consistent with UAT tests) # If frostfs-adm is available, then we tick epoch with it (to be consistent with UAT tests)
frostfs_adm = FrostfsAdm( frostfs_adm = FrostfsAdm(
shell=remote_shell, shell=remote_shell,
frostfs_adm_exec_path=FROSTFS_ADM_EXEC, frostfs_adm_exec_path=FROSTFS_ADM_EXEC,
config_file=FROSTFS_ADM_CONFIG_PATH, config_file=FROSTFS_ADM_CONFIG_PATH,
) )
frostfs_adm.morph.force_new_epoch() frostfs_adm.morph.force_new_epoch(delta=delta)
return return
# Otherwise we tick epoch using transaction # Otherwise we tick epoch using transaction
cur_epoch = get_epoch(shell, cluster) cur_epoch = get_epoch(shell, cluster)
if delta:
next_epoch = cur_epoch + delta
else:
next_epoch = cur_epoch + 1
# Use first node by default # Use first node by default
ir_node = cluster.services(InnerRing)[0] ir_node = cluster.services(InnerRing)[0]
# In case if no local_wallet_path is provided, we use wallet_path # In case if no local_wallet_path is provided, we use wallet_path
@ -116,7 +115,7 @@ def tick_epoch(shell: Shell, cluster: Cluster, alive_node: Optional[StorageNode]
wallet_password=ir_wallet_pass, wallet_password=ir_wallet_pass,
scripthash=get_contract_hash(morph_chain, "netmap.frostfs", shell=shell), scripthash=get_contract_hash(morph_chain, "netmap.frostfs", shell=shell),
method="newEpoch", method="newEpoch",
arguments=f"int:{cur_epoch + 1}", arguments=f"int:{next_epoch}",
multisig_hash=f"{ir_address}:Global", multisig_hash=f"{ir_address}:Global",
address=ir_address, address=ir_address,
rpc_endpoint=morph_endpoint, rpc_endpoint=morph_endpoint,
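With the new delta parameter a single call can advance several epochs: frostfs-adm receives it via force_new_epoch(delta=...), and the NeoGo fallback signs a newEpoch transaction for cur_epoch + delta. A sketch:

# Sketch; omitting delta keeps the old one-epoch behaviour.
tick_epoch(shell, cluster, delta=3)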


@ -11,19 +11,19 @@ from urllib.parse import quote_plus
import requests import requests
from frostfs_testlib import reporter from frostfs_testlib import reporter
from frostfs_testlib.resources.common import SIMPLE_OBJECT_SIZE from frostfs_testlib.cli import GenericCli
from frostfs_testlib.resources.common import ASSETS_DIR, SIMPLE_OBJECT_SIZE
from frostfs_testlib.s3.aws_cli_client import command_options from frostfs_testlib.s3.aws_cli_client import command_options
from frostfs_testlib.shell import Shell from frostfs_testlib.shell import Shell
from frostfs_testlib.shell.local_shell import LocalShell from frostfs_testlib.shell.local_shell import LocalShell
from frostfs_testlib.steps.cli.object import get_object from frostfs_testlib.steps.cli.object import get_object
from frostfs_testlib.steps.storage_policy import get_nodes_without_object from frostfs_testlib.steps.storage_policy import get_nodes_without_object
from frostfs_testlib.storage.cluster import StorageNode from frostfs_testlib.storage.cluster import ClusterNode, StorageNode
from frostfs_testlib.testing.test_control import retry from frostfs_testlib.testing.test_control import retry
from frostfs_testlib.utils.file_utils import get_file_hash from frostfs_testlib.utils.file_utils import TestFile, get_file_hash
logger = logging.getLogger("NeoLogger") logger = logging.getLogger("NeoLogger")
ASSETS_DIR = os.getenv("ASSETS_DIR", "TemporaryDir/")
local_shell = LocalShell() local_shell = LocalShell()
@ -31,8 +31,7 @@ local_shell = LocalShell()
def get_via_http_gate( def get_via_http_gate(
cid: str, cid: str,
oid: str, oid: str,
endpoint: str, node: ClusterNode,
http_hostname: str,
request_path: Optional[str] = None, request_path: Optional[str] = None,
timeout: Optional[int] = 300, timeout: Optional[int] = 300,
): ):
@ -40,47 +39,16 @@ def get_via_http_gate(
This function gets given object from HTTP gate This function gets given object from HTTP gate
cid: container id to get object from cid: container id to get object from
oid: object ID oid: object ID
endpoint: http gate endpoint node: node to make request
http_hostname: http host name on the node
request_path: (optional) http request, if omitted - use default [{endpoint}/get/{cid}/{oid}] request_path: (optional) http request, if omitted - use default [{endpoint}/get/{cid}/{oid}]
""" """
# if `request_path` parameter omitted, use default # if `request_path` parameter omitted, use default
if request_path is None: if request_path is None:
request = f"{endpoint}/get/{cid}/{oid}" request = f"{node.http_gate.get_endpoint()}/get/{cid}/{oid}"
else: else:
request = f"{endpoint}{request_path}" request = f"{node.http_gate.get_endpoint()}{request_path}"
resp = requests.get(request, headers={"Host": http_hostname}, stream=True, timeout=timeout, verify=False)
if not resp.ok:
raise Exception(
f"""Failed to get object via HTTP gate:
request: {resp.request.path_url},
response: {resp.text},
headers: {resp.headers},
status code: {resp.status_code} {resp.reason}"""
)
logger.info(f"Request: {request}")
_attach_allure_step(request, resp.status_code)
file_path = os.path.join(os.getcwd(), ASSETS_DIR, f"{cid}_{oid}")
with open(file_path, "wb") as file:
shutil.copyfileobj(resp.raw, file)
return file_path
@reporter.step("Get via Zip HTTP Gate")
def get_via_zip_http_gate(cid: str, prefix: str, endpoint: str, http_hostname: str, timeout: Optional[int] = 300):
"""
This function gets given object from HTTP gate
cid: container id to get object from
prefix: common prefix
endpoint: http gate endpoint
http_hostname: http host name on the node
"""
request = f"{endpoint}/zip/{cid}/{prefix}"
resp = requests.get(request, stream=True, timeout=timeout, verify=False) resp = requests.get(request, stream=True, timeout=timeout, verify=False)
if not resp.ok: if not resp.ok:
@ -95,42 +63,22 @@ def get_via_zip_http_gate(cid: str, prefix: str, endpoint: str, http_hostname: s
logger.info(f"Request: {request}") logger.info(f"Request: {request}")
_attach_allure_step(request, resp.status_code) _attach_allure_step(request, resp.status_code)
file_path = os.path.join(os.getcwd(), ASSETS_DIR, f"{cid}_archive.zip") test_file = TestFile(os.path.join(os.getcwd(), ASSETS_DIR, f"{cid}_{oid}"))
with open(file_path, "wb") as file: with open(test_file, "wb") as file:
shutil.copyfileobj(resp.raw, file) shutil.copyfileobj(resp.raw, file)
return test_file
with zipfile.ZipFile(file_path, "r") as zip_ref:
zip_ref.extractall(ASSETS_DIR)
return os.path.join(os.getcwd(), ASSETS_DIR, prefix)
@reporter.step("Get via HTTP Gate by attribute") @reporter.step("Get via Zip HTTP Gate")
def get_via_http_gate_by_attribute( def get_via_zip_http_gate(cid: str, prefix: str, node: ClusterNode, timeout: Optional[int] = 300):
cid: str,
attribute: dict,
endpoint: str,
http_hostname: str,
request_path: Optional[str] = None,
timeout: Optional[int] = 300,
):
""" """
This function gets given object from HTTP gate This function gets given object from HTTP gate
cid: CID to get object from cid: container id to get object from
attribute: attribute {name: attribute} value pair prefix: common prefix
endpoint: http gate endpoint node: node to make request
http_hostname: http host name on the node
request_path: (optional) http request path, if omitted - use default [{endpoint}/get_by_attribute/{Key}/{Value}]
""" """
attr_name = list(attribute.keys())[0] request = f"{node.http_gate.get_endpoint()}/zip/{cid}/{prefix}"
attr_value = quote_plus(str(attribute.get(attr_name))) resp = requests.get(request, stream=True, timeout=timeout, verify=False)
# if `request_path` parameter ommited, use default
if request_path is None:
request = f"{endpoint}/get_by_attribute/{cid}/{quote_plus(str(attr_name))}/{attr_value}"
else:
request = f"{endpoint}{request_path}"
resp = requests.get(request, stream=True, timeout=timeout, verify=False, headers={"Host": http_hostname})
if not resp.ok: if not resp.ok:
raise Exception( raise Exception(
@ -144,17 +92,61 @@ def get_via_http_gate_by_attribute(
logger.info(f"Request: {request}") logger.info(f"Request: {request}")
_attach_allure_step(request, resp.status_code) _attach_allure_step(request, resp.status_code)
file_path = os.path.join(os.getcwd(), ASSETS_DIR, f"{cid}_{str(uuid.uuid4())}") test_file = TestFile(os.path.join(os.getcwd(), ASSETS_DIR, f"{cid}_archive.zip"))
with open(file_path, "wb") as file: with open(test_file, "wb") as file:
shutil.copyfileobj(resp.raw, file) shutil.copyfileobj(resp.raw, file)
return file_path
with zipfile.ZipFile(test_file, "r") as zip_ref:
zip_ref.extractall(ASSETS_DIR)
return os.path.join(os.getcwd(), ASSETS_DIR, prefix)
@reporter.step("Get via HTTP Gate by attribute")
def get_via_http_gate_by_attribute(
cid: str,
attribute: dict,
node: ClusterNode,
request_path: Optional[str] = None,
timeout: Optional[int] = 300,
):
"""
This function gets given object from HTTP gate
cid: CID to get object from
attribute: attribute {name: attribute} value pair
endpoint: http gate endpoint
request_path: (optional) http request path, if ommited - use default [{endpoint}/get_by_attribute/{Key}/{Value}]
"""
attr_name = list(attribute.keys())[0]
attr_value = quote_plus(str(attribute.get(attr_name)))
# if `request_path` parameter ommited, use default
if request_path is None:
request = f"{node.http_gate.get_endpoint()}/get_by_attribute/{cid}/{quote_plus(str(attr_name))}/{attr_value}"
else:
request = f"{node.http_gate.get_endpoint()}{request_path}"
resp = requests.get(request, stream=True, timeout=timeout, verify=False)
if not resp.ok:
raise Exception(
f"""Failed to get object via HTTP gate:
request: {resp.request.path_url},
response: {resp.text},
headers: {resp.headers},
status code: {resp.status_code} {resp.reason}"""
)
logger.info(f"Request: {request}")
_attach_allure_step(request, resp.status_code)
test_file = TestFile(os.path.join(os.getcwd(), ASSETS_DIR, f"{cid}_{str(uuid.uuid4())}"))
with open(test_file, "wb") as file:
shutil.copyfileobj(resp.raw, file)
return test_file
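A short sketch of the by-attribute variant; note the helper URL-quotes both the attribute name and value, so raw pairs can be passed as-is (the attribute shown is an assumed example, not from this diff):

```python
# Hypothetical usage: spaces and slashes in the attribute are quoted by the helper.
test_file = get_via_http_gate_by_attribute(cid=cid, attribute={"FileName": "dir/my file.txt"}, node=node)
```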
-# TODO: pass http_hostname as a header
 @reporter.step("Upload via HTTP Gate")
-def upload_via_http_gate(
-    cid: str, path: str, endpoint: str, headers: Optional[dict] = None, timeout: Optional[int] = 300
-) -> str:
+def upload_via_http_gate(cid: str, path: str, endpoint: str, headers: Optional[dict] = None, timeout: Optional[int] = 300) -> str:
     """
     This function uploads given object through HTTP gate
     cid: CID of the container to upload the object to
@@ -197,7 +189,6 @@ def is_object_large(filepath: str) -> bool:
     return False

-# TODO: pass http_hostname as a header
 @reporter.step("Upload via HTTP Gate using Curl")
 def upload_via_http_gate_curl(
     cid: str,
@@ -247,21 +238,20 @@ def upload_via_http_gate_curl(
 @retry(max_attempts=3, sleep_interval=1)
 @reporter.step("Get via HTTP Gate using Curl")
-def get_via_http_curl(cid: str, oid: str, endpoint: str, http_hostname: str) -> str:
+def get_via_http_curl(cid: str, oid: str, node: ClusterNode) -> TestFile:
     """
     This function gets given object from HTTP gate using curl utility.
     cid: CID to get object from
     oid: object OID
-    endpoint: http gate endpoint
-    http_hostname: http host name of the node
+    node: node for request
     """
-    request = f"{endpoint}/get/{cid}/{oid}"
-    file_path = os.path.join(os.getcwd(), ASSETS_DIR, f"{cid}_{oid}_{str(uuid.uuid4())}")
-    cmd = f'curl -k -H "Host: {http_hostname}" {request} > {file_path}'
-    local_shell.exec(cmd)
-    return file_path
+    request = f"{node.http_gate.get_endpoint()}/get/{cid}/{oid}"
+    test_file = TestFile(os.path.join(os.getcwd(), ASSETS_DIR, f"{cid}_{oid}_{str(uuid.uuid4())}"))
+    curl = GenericCli("curl", node.host)
+    curl(f"-k ", f"{request} > {test_file}", shell=local_shell)
+    return test_file
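Assuming `GenericCli` joins its arguments into one command line on the given shell, the call above is roughly equivalent to the previous raw invocation minus the `Host` header:

```python
# Approximate expansion of the GenericCli call (a sketch, not part of the diff):
local_shell.exec(f"curl -k {request} > {test_file}")
```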
 def _attach_allure_step(request: str, status_code: int, req_type="GET"):
@@ -274,12 +264,11 @@ def _attach_allure_step(request: str, status_code: int, req_type="GET"):
 def try_to_get_object_and_expect_error(
     cid: str,
     oid: str,
+    node: ClusterNode,
     error_pattern: str,
-    endpoint: str,
-    http_hostname: str,
 ) -> None:
     try:
-        get_via_http_gate(cid=cid, oid=oid, endpoint=endpoint, http_hostname=http_hostname)
+        get_via_http_gate(cid=cid, oid=oid, node=node)
         raise AssertionError(f"Expected error on getting object with cid: {cid}")
     except Exception as err:
         match = error_pattern.casefold() in str(err).casefold()
@@ -292,13 +281,10 @@ def get_object_by_attr_and_verify_hashes(
     file_name: str,
     cid: str,
     attrs: dict,
-    endpoint: str,
-    http_hostname: str,
+    node: ClusterNode,
 ) -> None:
-    got_file_path_http = get_via_http_gate(cid=cid, oid=oid, endpoint=endpoint, http_hostname=http_hostname)
-    got_file_path_http_attr = get_via_http_gate_by_attribute(
-        cid=cid, attribute=attrs, endpoint=endpoint, http_hostname=http_hostname
-    )
+    got_file_path_http = get_via_http_gate(cid=cid, oid=oid, node=node)
+    got_file_path_http_attr = get_via_http_gate_by_attribute(cid=cid, attribute=attrs, node=node)
     assert_hashes_are_equal(file_name, got_file_path_http, got_file_path_http_attr)
@@ -309,8 +295,7 @@ def verify_object_hash(
     cid: str,
     shell: Shell,
     nodes: list[StorageNode],
-    endpoint: str,
-    http_hostname: str,
+    request_node: ClusterNode,
     object_getter=None,
 ) -> None:
@@ -336,7 +321,7 @@ def verify_object_hash(
         shell=shell,
         endpoint=random_node.get_rpc_endpoint(),
     )
-    got_file_path_http = object_getter(cid=cid, oid=oid, endpoint=endpoint, http_hostname=http_hostname)
+    got_file_path_http = object_getter(cid=cid, oid=oid, node=request_node)
     assert_hashes_are_equal(file_name, got_file_path, got_file_path_http)
@@ -365,10 +350,9 @@ def attr_into_str_header_curl(attrs: dict) -> list:
 def try_to_get_object_via_passed_request_and_expect_error(
     cid: str,
     oid: str,
+    node: ClusterNode,
     error_pattern: str,
-    endpoint: str,
     http_request_path: str,
-    http_hostname: str,
     attrs: Optional[dict] = None,
 ) -> None:
     try:
@@ -376,17 +360,15 @@ def try_to_get_object_via_passed_request_and_expect_error(
             get_via_http_gate(
                 cid=cid,
                 oid=oid,
-                endpoint=endpoint,
+                node=node,
                 request_path=http_request_path,
-                http_hostname=http_hostname,
             )
         else:
             get_via_http_gate_by_attribute(
                 cid=cid,
                 attribute=attrs,
-                endpoint=endpoint,
+                node=node,
                 request_path=http_request_path,
-                http_hostname=http_hostname,
             )
         raise AssertionError(f"Expected error on getting object with cid: {cid}")
     except Exception as err:


@@ -0,0 +1,45 @@
import re

from frostfs_testlib import reporter
from frostfs_testlib.storage.cluster import ClusterNode
from frostfs_testlib.testing.test_control import wait_for_success


@reporter.step("Check metrics result")
@wait_for_success(interval=10)
def check_metrics_counter(
    cluster_nodes: list[ClusterNode],
    operator: str = "==",
    counter_exp: int = 0,
    parse_from_command: bool = False,
    **metrics_greps: str,
):
    counter_act = 0
    for cluster_node in cluster_nodes:
        counter_act += get_metrics_value(cluster_node, parse_from_command, **metrics_greps)
    assert eval(
        f"{counter_act} {operator} {counter_exp}"
    ), f"Expected: {counter_exp} {operator} Actual: {counter_act} in nodes: {cluster_nodes}"


@reporter.step("Get metrics value from node: {node}")
def get_metrics_value(node: ClusterNode, parse_from_command: bool = False, **metrics_greps: str):
    try:
        command_result = node.metrics.storage.get_metrics_search_by_greps(**metrics_greps)
        if parse_from_command:
            metrics_counter = calc_metrics_count_from_stdout(command_result.stdout, **metrics_greps)
        else:
            metrics_counter = calc_metrics_count_from_stdout(command_result.stdout)
    except RuntimeError:
        metrics_counter = 0

    return metrics_counter


@reporter.step("Parse metrics count and calc sum of result")
def calc_metrics_count_from_stdout(metric_result_stdout: str, command: str = None):
    if command:
        result = re.findall(rf"{command}\s*([\d.e+-]+)", metric_result_stdout)
    else:
        result = re.findall(r"}\s*([\d.e+-]+)", metric_result_stdout)

    return sum(map(lambda x: int(float(x)), result))
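A quick sanity check of the parser on representative Prometheus text output (the metric name here is made up; values in scientific notation are coerced through `float` before summing):

```python
stdout = (
    'frostfs_node_engine_objects_total{shard_id="abc",type="phy"} 1.5e+03\n'
    'frostfs_node_engine_objects_total{shard_id="def",type="phy"} 250'
)
assert calc_metrics_count_from_stdout(stdout) == 1750  # 1500 + 250
```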


@@ -13,7 +13,7 @@ from frostfs_testlib.resources.common import MORPH_BLOCK_TIME
 from frostfs_testlib.shell import Shell
 from frostfs_testlib.steps.epoch import tick_epoch, wait_for_epochs_align
 from frostfs_testlib.storage.cluster import Cluster, StorageNode
-from frostfs_testlib.storage.dataclasses.frostfs_services import S3Gate
+from frostfs_testlib.testing.test_control import wait_for_success
 from frostfs_testlib.utils import datetime_utils

 logger = logging.getLogger("NeoLogger")
@@ -52,9 +52,24 @@ def storage_node_healthcheck(node: StorageNode) -> HealthStatus:
     Returns:
         health status as HealthStatus object.
     """
-    command = "control healthcheck"
-    output = _run_control_command_with_retries(node, command)
-    return HealthStatus.from_stdout(output)
+    host = node.host
+    service_config = host.get_service_config(node.name)
+    wallet_path = service_config.attributes["wallet_path"]
+    wallet_password = service_config.attributes["wallet_password"]
+    control_endpoint = service_config.attributes["control_endpoint"]
+
+    shell = host.get_shell()
+    wallet_config_path = f"/tmp/{node.name}-config.yaml"
+    wallet_config = f'wallet: {wallet_path}\npassword: "{wallet_password}"'
+    shell.exec(f"echo '{wallet_config}' > {wallet_config_path}")
+
+    cli_config = host.get_cli_config("frostfs-cli")
+
+    cli = FrostfsCli(shell, cli_config.exec_path, wallet_config_path)
+    result = cli.control.healthcheck(control_endpoint)
+
+    return HealthStatus.from_stdout(result.stdout)
@reporter.step("Set status for {node}") @reporter.step("Set status for {node}")
@ -66,8 +81,21 @@ def storage_node_set_status(node: StorageNode, status: str, retries: int = 0) ->
status: online or offline. status: online or offline.
retries (optional, int): number of retry attempts if it didn't work from the first time retries (optional, int): number of retry attempts if it didn't work from the first time
""" """
command = f"control set-status --status {status}" host = node.host
_run_control_command_with_retries(node, command, retries) service_config = host.get_service_config(node.name)
wallet_path = service_config.attributes["wallet_path"]
wallet_password = service_config.attributes["wallet_password"]
control_endpoint = service_config.attributes["control_endpoint"]
shell = host.get_shell()
wallet_config_path = f"/tmp/{node.name}-config.yaml"
wallet_config = f'wallet: {wallet_path}\npassword: "{wallet_password}"'
shell.exec(f"echo '{wallet_config}' > {wallet_config_path}")
cli_config = host.get_cli_config("frostfs-cli")
cli = FrostfsCli(shell, cli_config.exec_path, wallet_config_path)
cli.control.set_status(control_endpoint, status)
@reporter.step("Get netmap snapshot") @reporter.step("Get netmap snapshot")
@ -84,14 +112,11 @@ def get_netmap_snapshot(node: StorageNode, shell: Shell) -> str:
storage_wallet_path = node.get_wallet_path() storage_wallet_path = node.get_wallet_path()
cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, config_file=storage_wallet_config) cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, config_file=storage_wallet_config)
return cli.netmap.snapshot( return cli.netmap.snapshot(rpc_endpoint=node.get_rpc_endpoint(), wallet=storage_wallet_path).stdout
rpc_endpoint=node.get_rpc_endpoint(),
wallet=storage_wallet_path,
).stdout
@reporter.step("Get shard list for {node}") @reporter.step("Get shard list for {node}")
def node_shard_list(node: StorageNode) -> list[str]: def node_shard_list(node: StorageNode, json: Optional[bool] = None) -> list[str]:
""" """
The function returns list of shards for specified storage node. The function returns list of shards for specified storage node.
Args: Args:
@ -99,31 +124,72 @@ def node_shard_list(node: StorageNode) -> list[str]:
Returns: Returns:
list of shards. list of shards.
""" """
command = "control shards list" host = node.host
output = _run_control_command_with_retries(node, command) service_config = host.get_service_config(node.name)
return re.findall(r"Shard (.*):", output) wallet_path = service_config.attributes["wallet_path"]
wallet_password = service_config.attributes["wallet_password"]
control_endpoint = service_config.attributes["control_endpoint"]
shell = host.get_shell()
wallet_config_path = f"/tmp/{node.name}-config.yaml"
wallet_config = f'wallet: {wallet_path}\npassword: "{wallet_password}"'
shell.exec(f"echo '{wallet_config}' > {wallet_config_path}")
cli_config = host.get_cli_config("frostfs-cli")
cli = FrostfsCli(shell, cli_config.exec_path, wallet_config_path)
result = cli.shards.list(endpoint=control_endpoint, json_mode=json)
return re.findall(r"Shard (.*):", result.stdout)
@reporter.step("Shard set for {node}") @reporter.step("Shard set for {node}")
def node_shard_set_mode(node: StorageNode, shard: str, mode: str) -> str: def node_shard_set_mode(node: StorageNode, shard: list[str], mode: str) -> None:
""" """
The function sets mode for specified shard. The function sets mode for specified shard.
Args: Args:
node: node on which shard mode should be set. node: node on which shard mode should be set.
""" """
command = f"control shards set-mode --id {shard} --mode {mode}" host = node.host
return _run_control_command_with_retries(node, command) service_config = host.get_service_config(node.name)
wallet_path = service_config.attributes["wallet_path"]
wallet_password = service_config.attributes["wallet_password"]
control_endpoint = service_config.attributes["control_endpoint"]
shell = host.get_shell()
wallet_config_path = f"/tmp/{node.name}-config.yaml"
wallet_config = f'wallet: {wallet_path}\npassword: "{wallet_password}"'
shell.exec(f"echo '{wallet_config}' > {wallet_config_path}")
cli_config = host.get_cli_config("frostfs-cli")
cli = FrostfsCli(shell, cli_config.exec_path, wallet_config_path)
cli.shards.set_mode(endpoint=control_endpoint, mode=mode, id=shard)
@reporter.step("Drop object from {node}") @reporter.step("Drop object from {node}")
def drop_object(node: StorageNode, cid: str, oid: str) -> str: def drop_object(node: StorageNode, cid: str, oid: str) -> None:
""" """
The function drops object from specified node. The function drops object from specified node.
Args: Args:
node_id str: node from which object should be dropped. node: node from which object should be dropped.
""" """
command = f"control drop-objects -o {cid}/{oid}" host = node.host
return _run_control_command_with_retries(node, command) service_config = host.get_service_config(node.name)
wallet_path = service_config.attributes["wallet_path"]
wallet_password = service_config.attributes["wallet_password"]
control_endpoint = service_config.attributes["control_endpoint"]
shell = host.get_shell()
wallet_config_path = f"/tmp/{node.name}-config.yaml"
wallet_config = f'wallet: {wallet_path}\npassword: "{wallet_password}"'
shell.exec(f"echo '{wallet_config}' > {wallet_config_path}")
cli_config = host.get_cli_config("frostfs-cli")
cli = FrostfsCli(shell, cli_config.exec_path, wallet_config_path)
objects = f"{cid}/{oid}"
cli.control.drop_objects(control_endpoint, objects)
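Reviewer note: the same wallet-config preamble is now repeated in healthcheck, set-status, shard list, set-mode, and drop-objects. A possible follow-up consolidation, sketched with a hypothetical `_get_node_cli` helper that is not part of this change and built only from calls already used in this diff:

```python
# Hypothetical helper; returns a ready FrostfsCli plus the node's control endpoint.
def _get_node_cli(node: StorageNode) -> tuple[FrostfsCli, str]:
    host = node.host
    service_config = host.get_service_config(node.name)
    control_endpoint = service_config.attributes["control_endpoint"]

    # Write a throwaway wallet config next to the service, as each helper does today.
    shell = host.get_shell()
    wallet_config_path = f"/tmp/{node.name}-config.yaml"
    wallet_config = f'wallet: {service_config.attributes["wallet_path"]}\npassword: "{service_config.attributes["wallet_password"]}"'
    shell.exec(f"echo '{wallet_config}' > {wallet_config_path}")

    cli_config = host.get_cli_config("frostfs-cli")
    return FrostfsCli(shell, cli_config.exec_path, wallet_config_path), control_endpoint
```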
@reporter.step("Delete data from host for node {node}") @reporter.step("Delete data from host for node {node}")
@ -134,12 +200,7 @@ def delete_node_data(node: StorageNode) -> None:
@reporter.step("Exclude node {node_to_exclude} from network map") @reporter.step("Exclude node {node_to_exclude} from network map")
def exclude_node_from_network_map( def exclude_node_from_network_map(node_to_exclude: StorageNode, alive_node: StorageNode, shell: Shell, cluster: Cluster) -> None:
node_to_exclude: StorageNode,
alive_node: StorageNode,
shell: Shell,
cluster: Cluster,
) -> None:
node_netmap_key = node_to_exclude.get_wallet_public_key() node_netmap_key = node_to_exclude.get_wallet_public_key()
storage_node_set_status(node_to_exclude, status="offline") storage_node_set_status(node_to_exclude, status="offline")
@@ -153,12 +214,7 @@ def exclude_node_from_network_map(node_to_exclude: StorageNode, alive_node: StorageNode, shell: Shell, cluster: Cluster) -> None:

 @reporter.step("Include node {node_to_include} into network map")
-def include_node_to_network_map(
-    node_to_include: StorageNode,
-    alive_node: StorageNode,
-    shell: Shell,
-    cluster: Cluster,
-) -> None:
+def include_node_to_network_map(node_to_include: StorageNode, alive_node: StorageNode, shell: Shell, cluster: Cluster) -> None:
     storage_node_set_status(node_to_include, status="online")

     # Per suggestion of @fyrchik we need to wait for 2 blocks after we set status and after tick epoch.
@@ -168,7 +224,7 @@ def include_node_to_network_map(node_to_include: StorageNode, alive_node: StorageNode, shell: Shell, cluster: Cluster) -> None:
     tick_epoch(shell, cluster)
     time.sleep(datetime_utils.parse_time(MORPH_BLOCK_TIME) * 2)

-    check_node_in_map(node_to_include, shell, alive_node)
+    await_node_in_map(node_to_include, shell, alive_node)
@reporter.step("Check node {node} in network map") @reporter.step("Check node {node} in network map")
@ -182,6 +238,11 @@ def check_node_in_map(node: StorageNode, shell: Shell, alive_node: Optional[Stor
assert node_netmap_key in snapshot, f"Expected node with key {node_netmap_key} to be in network map" assert node_netmap_key in snapshot, f"Expected node with key {node_netmap_key} to be in network map"
@wait_for_success(300, 15, title="Await node {node} in network map")
def await_node_in_map(node: StorageNode, shell: Shell, alive_node: Optional[StorageNode] = None) -> None:
check_node_in_map(node, shell, alive_node)
@reporter.step("Check node {node} NOT in network map") @reporter.step("Check node {node} NOT in network map")
def check_node_not_in_map(node: StorageNode, shell: Shell, alive_node: Optional[StorageNode] = None) -> None: def check_node_not_in_map(node: StorageNode, shell: Shell, alive_node: Optional[StorageNode] = None) -> None:
alive_node = alive_node or node alive_node = alive_node or node
@ -195,7 +256,7 @@ def check_node_not_in_map(node: StorageNode, shell: Shell, alive_node: Optional[
@reporter.step("Wait for node {node} is ready") @reporter.step("Wait for node {node} is ready")
def wait_for_node_to_be_ready(node: StorageNode) -> None: def wait_for_node_to_be_ready(node: StorageNode) -> None:
timeout, attempts = 30, 6 timeout, attempts = 60, 15
for _ in range(attempts): for _ in range(attempts):
try: try:
health_check = storage_node_healthcheck(node) health_check = storage_node_healthcheck(node)
@ -208,12 +269,7 @@ def wait_for_node_to_be_ready(node: StorageNode) -> None:
@reporter.step("Remove nodes from network map trough cli-adm morph command") @reporter.step("Remove nodes from network map trough cli-adm morph command")
def remove_nodes_from_map_morph( def remove_nodes_from_map_morph(shell: Shell, cluster: Cluster, remove_nodes: list[StorageNode], alive_node: Optional[StorageNode] = None):
shell: Shell,
cluster: Cluster,
remove_nodes: list[StorageNode],
alive_node: Optional[StorageNode] = None,
):
""" """
Move node to the Offline state in the candidates list and tick an epoch to update the netmap Move node to the Offline state in the candidates list and tick an epoch to update the netmap
using frostfs-adm using frostfs-adm
@ -232,44 +288,5 @@ def remove_nodes_from_map_morph(
if FROSTFS_ADM_EXEC and FROSTFS_ADM_CONFIG_PATH: if FROSTFS_ADM_EXEC and FROSTFS_ADM_CONFIG_PATH:
# If frostfs-adm is available, then we tick epoch with it (to be consistent with UAT tests) # If frostfs-adm is available, then we tick epoch with it (to be consistent with UAT tests)
frostfsadm = FrostfsAdm( frostfsadm = FrostfsAdm(shell=remote_shell, frostfs_adm_exec_path=FROSTFS_ADM_EXEC, config_file=FROSTFS_ADM_CONFIG_PATH)
shell=remote_shell,
frostfs_adm_exec_path=FROSTFS_ADM_EXEC,
config_file=FROSTFS_ADM_CONFIG_PATH,
)
frostfsadm.morph.remove_nodes(node_netmap_keys) frostfsadm.morph.remove_nodes(node_netmap_keys)
-
-def _run_control_command_with_retries(node: StorageNode, command: str, retries: int = 0) -> str:
-    for attempt in range(1 + retries):  # original attempt + specified retries
-        try:
-            return _run_control_command(node, command)
-        except AssertionError as err:
-            if attempt < retries:
-                logger.warning(f"Command {command} failed with error {err} and will be retried")
-                continue
-            raise AssertionError(f"Command {command} failed with error {err}") from err
-
-
-def _run_control_command(node: StorageNode, command: str) -> None:
-    host = node.host
-    service_config = host.get_service_config(node.name)
-    wallet_path = service_config.attributes["wallet_path"]
-    wallet_password = service_config.attributes["wallet_password"]
-    control_endpoint = service_config.attributes["control_endpoint"]
-
-    shell = host.get_shell()
-    wallet_config_path = f"/tmp/{node.name}-config.yaml"
-    wallet_config = f'password: "{wallet_password}"'
-    shell.exec(f"echo '{wallet_config}' > {wallet_config_path}")
-
-    cli_config = host.get_cli_config("frostfs-cli")
-
-    # TODO: implement cli.control
-    # cli = FrostfsCli(shell, cli_config.exec_path, wallet_config_path)
-    result = shell.exec(
-        f"{cli_config.exec_path} {command} --endpoint {control_endpoint} "
-        f"--wallet {wallet_path} --config {wallet_config_path}"
-    )
-    return result.stdout


@@ -1,25 +1,17 @@
-import json
 import logging
 import os
-import re
-import uuid
 from datetime import datetime, timedelta
 from typing import Optional

 from dateutil.parser import parse

 from frostfs_testlib import reporter
-from frostfs_testlib.cli import FrostfsAuthmate
-from frostfs_testlib.resources.cli import FROSTFS_AUTHMATE_EXEC
-from frostfs_testlib.resources.common import CREDENTIALS_CREATE_TIMEOUT
 from frostfs_testlib.s3 import S3ClientWrapper, VersioningStatus
-from frostfs_testlib.shell import CommandOptions, InteractiveInput, Shell
-from frostfs_testlib.shell.interfaces import SshCredentials
-from frostfs_testlib.steps.cli.container import search_container_by_name, search_nodes_with_container
+from frostfs_testlib.s3.interfaces import BucketContainerResolver
+from frostfs_testlib.shell import Shell
+from frostfs_testlib.steps.cli.container import search_nodes_with_container
 from frostfs_testlib.storage.cluster import Cluster, ClusterNode
-from frostfs_testlib.storage.dataclasses.frostfs_services import S3Gate
 from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
-from frostfs_testlib.utils.cli_utils import _run_with_passwd

 logger = logging.getLogger("NeoLogger")
@@ -38,9 +30,7 @@ def check_objects_in_bucket(
         assert bucket_object in bucket_objects, f"Expected object {bucket_object} in objects list {bucket_objects}"

     for bucket_object in unexpected_objects:
-        assert (
-            bucket_object not in bucket_objects
-        ), f"Expected object {bucket_object} not in objects list {bucket_objects}"
+        assert bucket_object not in bucket_objects, f"Expected object {bucket_object} not in objects list {bucket_objects}"


 @reporter.step("Try to get object and expect error")
@@ -58,7 +48,6 @@ def set_bucket_versioning(s3_client: S3ClientWrapper, bucket: str, status: VersioningStatus):
     if status == VersioningStatus.UNDEFINED:
         return

-    s3_client.get_bucket_versioning_status(bucket)
     s3_client.put_bucket_versioning(bucket, status=status)
     bucket_status = s3_client.get_bucket_versioning_status(bucket)
     assert bucket_status == status.value, f"Expected {bucket_status} status. Got {status.value}"
@@ -68,9 +57,7 @@ def object_key_from_file_path(full_path: str) -> str:
     return os.path.basename(full_path)


-def assert_tags(
-    actual_tags: list, expected_tags: Optional[list] = None, unexpected_tags: Optional[list] = None
-) -> None:
+def assert_tags(actual_tags: list, expected_tags: Optional[list] = None, unexpected_tags: Optional[list] = None) -> None:
     expected_tags = [{"Key": key, "Value": value} for key, value in expected_tags] if expected_tags else []
     unexpected_tags = [{"Key": key, "Value": value} for key, value in unexpected_tags] if unexpected_tags else []
     if expected_tags == []:
@@ -133,69 +120,28 @@ def assert_object_lock_mode(
     ).days == retain_period, f"Expected retention period is {retain_period} days"


-def assert_s3_acl(acl_grants: list, permitted_users: str):
-    if permitted_users == "AllUsers":
-        grantees = {"AllUsers": 0, "CanonicalUser": 0}
-        for acl_grant in acl_grants:
-            if acl_grant.get("Grantee", {}).get("Type") == "Group":
-                uri = acl_grant.get("Grantee", {}).get("URI")
-                permission = acl_grant.get("Permission")
-                assert (uri, permission) == (
-                    "http://acs.amazonaws.com/groups/global/AllUsers",
-                    "FULL_CONTROL",
-                ), "All Groups should have FULL_CONTROL"
-                grantees["AllUsers"] += 1
-            if acl_grant.get("Grantee", {}).get("Type") == "CanonicalUser":
-                permission = acl_grant.get("Permission")
-                assert permission == "FULL_CONTROL", "Canonical User should have FULL_CONTROL"
-                grantees["CanonicalUser"] += 1
-        assert grantees["AllUsers"] >= 1, "All Users should have FULL_CONTROL"
-        assert grantees["CanonicalUser"] >= 1, "Canonical User should have FULL_CONTROL"
-
-    if permitted_users == "CanonicalUser":
-        for acl_grant in acl_grants:
-            if acl_grant.get("Grantee", {}).get("Type") == "CanonicalUser":
-                permission = acl_grant.get("Permission")
-                assert permission == "FULL_CONTROL", "Only CanonicalUser should have FULL_CONTROL"
-            else:
-                logger.error("FULL_CONTROL is given to All Users")
+def _format_grants_as_strings(grants: list[dict]) -> list:
+    grantee_format = "{g_type}::{uri}:{permission}"
+    return set(
+        [
+            grantee_format.format(
+                g_type=grant.get("Grantee", {}).get("Type", ""),
+                uri=grant.get("Grantee", {}).get("URI", ""),
+                permission=grant.get("Permission", ""),
+            )
+            for grant in grants
+        ]
+    )


-@reporter.step("Init S3 Credentials")
-def init_s3_credentials(
-    wallet: WalletInfo,
-    shell: Shell,
-    cluster: Cluster,
-    policy: Optional[dict] = None,
-    s3gates: Optional[list[S3Gate]] = None,
-    container_placement_policy: Optional[str] = None,
-):
-    gate_public_keys = []
-    bucket = str(uuid.uuid4())
-    if not s3gates:
-        s3gates = [cluster.s3_gates[0]]
-    for s3gate in s3gates:
-        gate_public_keys.append(s3gate.get_wallet_public_key())
-    frostfs_authmate_exec: FrostfsAuthmate = FrostfsAuthmate(shell, FROSTFS_AUTHMATE_EXEC)
-    issue_secret_output = frostfs_authmate_exec.secret.issue(
-        wallet=wallet.path,
-        peer=cluster.default_rpc_endpoint,
-        gate_public_key=gate_public_keys,
-        wallet_password=wallet.password,
-        container_policy=policy,
-        container_friendly_name=bucket,
-        container_placement_policy=container_placement_policy,
-    ).stdout
-    aws_access_key_id = str(
-        re.search(r"access_key_id.*:\s.(?P<aws_access_key_id>\w*)", issue_secret_output).group("aws_access_key_id")
-    )
-    aws_secret_access_key = str(
-        re.search(r"secret_access_key.*:\s.(?P<aws_secret_access_key>\w*)", issue_secret_output).group("aws_secret_access_key")
-    )
-    cid = str(re.search(r"container_id.*:\s.(?P<container_id>\w*)", issue_secret_output).group("container_id"))
-    return cid, aws_access_key_id, aws_secret_access_key
+@reporter.step("Verify ACL permissions")
+def verify_acl_permissions(actual_acl_grants: list[dict], expected_acl_grants: list[dict], strict: bool = True):
+    actual_grants = _format_grants_as_strings(actual_acl_grants)
+    expected_grants = _format_grants_as_strings(expected_acl_grants)
+
+    assert expected_grants <= actual_grants, "Permissions mismatch"
+    if strict:
+        assert expected_grants == actual_grants, "Extra permissions found, must not be there"
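Sketch of the normalized grant form the new helper compares on: each grant becomes a `<Type>::<URI>:<Permission>` string, and the assertions then work over sets.

```python
grant = {"Grantee": {"Type": "Group", "URI": "http://acs.amazonaws.com/groups/global/AllUsers"}, "Permission": "READ"}
assert _format_grants_as_strings([grant]) == {"Group::http://acs.amazonaws.com/groups/global/AllUsers:READ"}
```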
@reporter.step("Delete bucket with all objects") @reporter.step("Delete bucket with all objects")
@ -227,13 +173,14 @@ def delete_bucket_with_objects(s3_client: S3ClientWrapper, bucket: str):
def search_nodes_with_bucket( def search_nodes_with_bucket(
cluster: Cluster, cluster: Cluster,
bucket_name: str, bucket_name: str,
wallet: str, wallet: WalletInfo,
shell: Shell, shell: Shell,
endpoint: str, endpoint: str,
bucket_container_resolver: BucketContainerResolver,
) -> list[ClusterNode]: ) -> list[ClusterNode]:
cid = None cid = None
for cluster_node in cluster.cluster_nodes: for cluster_node in cluster.cluster_nodes:
cid = search_container_by_name(name=bucket_name, node=cluster_node) cid = bucket_container_resolver.resolve(cluster_node, bucket_name)
if cid: if cid:
break break
nodes_list = search_nodes_with_container(wallet=wallet, cid=cid, shell=shell, endpoint=endpoint, cluster=cluster) nodes_list = search_nodes_with_container(wallet=wallet, cid=cid, shell=shell, endpoint=endpoint, cluster=cluster)


@@ -4,13 +4,12 @@ import logging
 import os
 import uuid
 from dataclasses import dataclass
-from enum import Enum
 from typing import Any, Optional

 from frostfs_testlib import reporter
 from frostfs_testlib.cli import FrostfsCli
 from frostfs_testlib.resources.cli import FROSTFS_CLI_EXEC
-from frostfs_testlib.resources.common import ASSETS_DIR, DEFAULT_WALLET_CONFIG
+from frostfs_testlib.resources.common import ASSETS_DIR
 from frostfs_testlib.shell import Shell
 from frostfs_testlib.storage.dataclasses.storage_object_info import StorageObjectInfo
 from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
@@ -231,8 +230,7 @@ def get_object_signed_token(
 def create_session_token(
     shell: Shell,
     owner: str,
-    wallet_path: str,
-    wallet_password: str,
+    wallet: WalletInfo,
     rpc_endpoint: str,
 ) -> str:
     """
@@ -247,19 +245,18 @@ def create_session_token(
         The path to the generated session token file.
     """
     session_token = os.path.join(os.getcwd(), ASSETS_DIR, str(uuid.uuid4()))
-    frostfscli = FrostfsCli(shell=shell, frostfs_cli_exec_path=FROSTFS_CLI_EXEC)
+    frostfscli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet.config_path)
     frostfscli.session.create(
         rpc_endpoint=rpc_endpoint,
         address=owner,
-        wallet=wallet_path,
-        wallet_password=wallet_password,
         out=session_token,
+        wallet=wallet.path,
     )
     return session_token
@reporter.step("Sign Session Token") @reporter.step("Sign Session Token")
def sign_session_token(shell: Shell, session_token_file: str, wlt: WalletInfo) -> str: def sign_session_token(shell: Shell, session_token_file: str, wallet: WalletInfo) -> str:
""" """
This function signs the session token by the given wallet. This function signs the session token by the given wallet.
@ -272,6 +269,6 @@ def sign_session_token(shell: Shell, session_token_file: str, wlt: WalletInfo) -
The path to the signed token. The path to the signed token.
""" """
signed_token_file = os.path.join(os.getcwd(), ASSETS_DIR, str(uuid.uuid4())) signed_token_file = os.path.join(os.getcwd(), ASSETS_DIR, str(uuid.uuid4()))
frostfscli = FrostfsCli(shell=shell, frostfs_cli_exec_path=FROSTFS_CLI_EXEC, config_file=DEFAULT_WALLET_CONFIG) frostfscli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet.config_path)
frostfscli.util.sign_session_token(wallet=wlt.path, from_file=session_token_file, to_file=signed_token_file) frostfscli.util.sign_session_token(session_token_file, signed_token_file)
return signed_token_file return signed_token_file
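End-to-end sketch of the new WalletInfo-based flow (`wallet`, `address`, and `cluster` are assumed test fixtures, not part of this diff); the wallet's config path now carries the password, so it is no longer passed explicitly:

```python
token = create_session_token(shell, owner=address, wallet=wallet, rpc_endpoint=cluster.default_rpc_endpoint)
signed = sign_session_token(shell, token, wallet)
```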


@@ -30,14 +30,14 @@ def delete_objects(storage_objects: list[StorageObjectInfo], shell: Shell, cluster: Cluster) -> None:
     with reporter.step("Delete objects"):
         for storage_object in storage_objects:
             storage_object.tombstone = delete_object(
-                storage_object.wallet_file_path,
+                storage_object.wallet,
                 storage_object.cid,
                 storage_object.oid,
                 shell=shell,
                 endpoint=cluster.default_rpc_endpoint,
             )
             verify_head_tombstone(
-                wallet_path=storage_object.wallet_file_path,
+                wallet=storage_object.wallet,
                 cid=storage_object.cid,
                 oid_ts=storage_object.tombstone,
                 oid=storage_object.oid,
@@ -52,7 +52,7 @@ def delete_objects(storage_objects: list[StorageObjectInfo], shell: Shell, cluster: Cluster) -> None:
         for storage_object in storage_objects:
             with pytest.raises(Exception, match=OBJECT_ALREADY_REMOVED):
                 get_object(
-                    storage_object.wallet_file_path,
+                    storage_object.wallet,
                     storage_object.cid,
                     storage_object.oid,
                     shell=shell,


@@ -12,13 +12,15 @@ from frostfs_testlib.shell import Shell
 from frostfs_testlib.steps.cli.object import head_object
 from frostfs_testlib.steps.complex_object_actions import get_last_object
 from frostfs_testlib.storage.cluster import StorageNode
+from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
 from frostfs_testlib.utils import string_utils

 logger = logging.getLogger("NeoLogger")


+# TODO: Unused, remove or make use of
 @reporter.step("Get Object Copies")
-def get_object_copies(complexity: str, wallet: str, cid: str, oid: str, shell: Shell, nodes: list[StorageNode]) -> int:
+def get_object_copies(complexity: str, wallet: WalletInfo, cid: str, oid: str, shell: Shell, nodes: list[StorageNode]) -> int:
     """
     The function performs requests to all nodes of the container and
     finds out if they store a copy of the object. The procedure is
@@ -43,7 +45,7 @@ def get_object_copies(complexity: str, wallet: WalletInfo, cid: str, oid: str, shell: Shell, nodes: list[StorageNode]) -> int:
 @reporter.step("Get Simple Object Copies")
-def get_simple_object_copies(wallet: str, cid: str, oid: str, shell: Shell, nodes: list[StorageNode]) -> int:
+def get_simple_object_copies(wallet: WalletInfo, cid: str, oid: str, shell: Shell, nodes: list[StorageNode]) -> int:
     """
     To figure out the number of a simple object copies, only direct
     HEAD requests should be made to every node of the container.
@@ -72,7 +74,7 @@ def get_simple_object_copies(wallet: WalletInfo, cid: str, oid: str, shell: Shell, nodes: list[StorageNode]) -> int:
 @reporter.step("Get Complex Object Copies")
-def get_complex_object_copies(wallet: str, cid: str, oid: str, shell: Shell, nodes: list[StorageNode]) -> int:
+def get_complex_object_copies(wallet: WalletInfo, cid: str, oid: str, shell: Shell, nodes: list[StorageNode]) -> int:
     """
     To figure out the number of a complex object copies, we first
     need to retrieve its Last object. We consider that the number of
@@ -109,8 +111,7 @@ def get_nodes_with_object(cid: str, oid: str, shell: Shell, nodes: list[StorageNode]) -> list[StorageNode]:
     nodes_list = []

     for node in nodes:
-        wallet = node.get_wallet_path()
-        wallet_config = node.get_wallet_config_path()
+        wallet = WalletInfo.from_node(node)
         try:
             res = head_object(
                 wallet,
@@ -119,7 +120,6 @@ def get_nodes_with_object(cid: str, oid: str, shell: Shell, nodes: list[StorageNode]) -> list[StorageNode]:
                 shell=shell,
                 endpoint=node.get_rpc_endpoint(),
                 is_direct=True,
-                wallet_config=wallet_config,
             )
             if res is not None:
                 logger.info(f"Found object {oid} on node {node}")
@@ -131,9 +131,7 @@ def get_nodes_with_object(cid: str, oid: str, shell: Shell, nodes: list[StorageNode]) -> list[StorageNode]:
 @reporter.step("Get Nodes Without Object")
-def get_nodes_without_object(
-    wallet: str, cid: str, oid: str, shell: Shell, nodes: list[StorageNode]
-) -> list[StorageNode]:
+def get_nodes_without_object(wallet: WalletInfo, cid: str, oid: str, shell: Shell, nodes: list[StorageNode]) -> list[StorageNode]:
     """
     The function returns list of nodes which do not store
     the given object.


@@ -1,31 +1,23 @@
-import json
 import logging

-from neo3.wallet import wallet
-
 from frostfs_testlib import reporter
 from frostfs_testlib.shell import Shell
 from frostfs_testlib.steps.cli.object import head_object
+from frostfs_testlib.storage.dataclasses.wallet import WalletInfo

 logger = logging.getLogger("NeoLogger")


 @reporter.step("Verify Head Tombstone")
-def verify_head_tombstone(wallet_path: str, cid: str, oid_ts: str, oid: str, shell: Shell, endpoint: str):
-    header = head_object(wallet_path, cid, oid_ts, shell=shell, endpoint=endpoint)["header"]
+def verify_head_tombstone(wallet: WalletInfo, cid: str, oid_ts: str, oid: str, shell: Shell, endpoint: str):
+    header = head_object(wallet, cid, oid_ts, shell=shell, endpoint=endpoint)["header"]

     s_oid = header["sessionToken"]["body"]["object"]["target"]["objects"]
     logger.info(f"Header Session OIDs is {s_oid}")
     logger.info(f"OID is {oid}")

     assert header["containerID"] == cid, "Tombstone Header CID is wrong"
+    assert header["ownerID"] == wallet.get_address_from_json(0), "Tombstone Owner ID is wrong"

-    with open(wallet_path, "r") as file:
-        wlt_data = json.loads(file.read())
-    wlt = wallet.Wallet.from_json(wlt_data, password="")
-    addr = wlt.accounts[0].address
-
-    assert header["ownerID"] == addr, "Tombstone Owner ID is wrong"
     assert header["objectType"] == "TOMBSTONE", "Header Type isn't Tombstone"
     assert header["sessionToken"]["body"]["object"]["verb"] == "DELETE", "Header Session Type isn't DELETE"
     assert header["sessionToken"]["body"]["object"]["target"]["container"] == cid, "Header Session ID is wrong"


@@ -9,9 +9,9 @@ from frostfs_testlib.hosting import Host, Hosting
 from frostfs_testlib.hosting.config import ServiceConfig
 from frostfs_testlib.storage import get_service_registry
 from frostfs_testlib.storage.configuration.interfaces import ServiceConfigurationYml
-from frostfs_testlib.storage.configuration.service_configuration import ServiceConfiguration
 from frostfs_testlib.storage.constants import ConfigAttributes
 from frostfs_testlib.storage.dataclasses.frostfs_services import HTTPGate, InnerRing, MorphChain, S3Gate, StorageNode
+from frostfs_testlib.storage.dataclasses.metrics import Metrics
 from frostfs_testlib.storage.dataclasses.node_base import NodeBase, ServiceClass
 from frostfs_testlib.storage.dataclasses.storage_object_info import Interfaces
 from frostfs_testlib.storage.service_registry import ServiceRegistry
@@ -25,11 +25,13 @@ class ClusterNode:
     class_registry: ServiceRegistry
     id: int
     host: Host
+    metrics: Metrics

     def __init__(self, host: Host, id: int) -> None:
         self.host = host
         self.id = id
         self.class_registry = get_service_registry()
+        self.metrics = Metrics(host=self.host, metrics_endpoint=self.storage_node.get_metrics_endpoint())

     @property
     def host_ip(self):
@@ -72,6 +74,7 @@ class ClusterNode:
     def s3_gate(self) -> S3Gate:
         return self.service(S3Gate)

+    # TODO: Deprecated. Use config with ServiceConfigurationYml interface
     def get_config(self, config_file_path: str) -> dict:
         shell = self.host.get_shell()
@@ -81,16 +84,17 @@ class ClusterNode:
         config = yaml.safe_load(config_text)
         return config

+    # TODO: Deprecated. Use config with ServiceConfigurationYml interface
     def save_config(self, new_config: dict, config_file_path: str) -> None:
         shell = self.host.get_shell()
         config_str = yaml.dump(new_config)
         shell.exec(f"echo '{config_str}' | sudo tee {config_file_path}")

-    def config(self, service_type: type[ServiceClass]) -> ServiceConfigurationYml:
-        return ServiceConfiguration(self.service(service_type))
+    def config(self, service_type: ServiceClass) -> ServiceConfigurationYml:
+        return self.service(service_type).config

-    def service(self, service_type: type[ServiceClass]) -> ServiceClass:
+    def service(self, service_type: ServiceClass) -> ServiceClass:
         """
         Get a service cluster node of specified type.
@@ -105,7 +109,7 @@ class ClusterNode:
         service_entry = self.class_registry.get_entry(service_type)
         service_name = service_entry["hosting_service_name"]
-        pattern = f"{service_name}{self.id:02}"
+        pattern = f"{service_name}_{self.id:02}"
         config = self.host.get_service_config(pattern)

         return service_type(
@@ -120,7 +124,7 @@ class ClusterNode:
         svcs_names_on_node = [svc.name for svc in self.host.config.services]
         for entry in self.class_registry._class_mapping.values():
             hosting_svc_name = entry["hosting_service_name"]
-            pattern = f"{hosting_svc_name}{self.id:02}"
+            pattern = f"{hosting_svc_name}_{self.id:02}"
             if pattern in svcs_names_on_node:
                 config = self.host.get_service_config(pattern)
                 svcs.append(
@@ -140,30 +144,16 @@ class ClusterNode:
         return self.host.config.interfaces[interface.value]

     def get_data_interfaces(self) -> list[str]:
-        return [
-            ip_address for name_interface, ip_address in self.host.config.interfaces.items() if "data" in name_interface
-        ]
+        return [ip_address for name_interface, ip_address in self.host.config.interfaces.items() if "data" in name_interface]

     def get_data_interface(self, search_interface: str) -> list[str]:
-        return [
-            self.host.config.interfaces[interface]
-            for interface in self.host.config.interfaces.keys()
-            if search_interface == interface
-        ]
+        return [self.host.config.interfaces[interface] for interface in self.host.config.interfaces.keys() if search_interface == interface]

     def get_internal_interfaces(self) -> list[str]:
-        return [
-            ip_address
-            for name_interface, ip_address in self.host.config.interfaces.items()
-            if "internal" in name_interface
-        ]
+        return [ip_address for name_interface, ip_address in self.host.config.interfaces.items() if "internal" in name_interface]

     def get_internal_interface(self, search_internal: str) -> list[str]:
-        return [
-            self.host.config.interfaces[interface]
-            for interface in self.host.config.interfaces.keys()
-            if search_internal == interface
-        ]
+        return [self.host.config.interfaces[interface] for interface in self.host.config.interfaces.keys() if search_internal == interface]
 class Cluster:
@@ -174,8 +164,6 @@ class Cluster:
     default_rpc_endpoint: str
     default_s3_gate_endpoint: str
     default_http_gate_endpoint: str
-    default_http_hostname: str
-    default_s3_hostname: str

     def __init__(self, hosting: Hosting) -> None:
         self._hosting = hosting
@@ -184,8 +172,6 @@ class Cluster:
         self.default_rpc_endpoint = self.services(StorageNode)[0].get_rpc_endpoint()
         self.default_s3_gate_endpoint = self.services(S3Gate)[0].get_endpoint()
         self.default_http_gate_endpoint = self.services(HTTPGate)[0].get_endpoint()
-        self.default_http_hostname = self.services(StorageNode)[0].get_http_hostname()
-        self.default_s3_hostname = self.services(StorageNode)[0].get_s3_hostname()

     @property
     def hosts(self) -> list[Host]:
@@ -267,13 +253,13 @@ class Cluster:
             service_name = service["hosting_service_name"]
             cls: type[NodeBase] = service["cls"]

-            pattern = f"{service_name}\d*$"
+            pattern = f"{service_name}_\d*$"
             configs = self.hosting.find_service_configs(pattern)
             found_nodes = []
             for config in configs:
                 # config.name is something like s3-gate01. Cut last digits to know service type
-                service_type = re.findall(".*\D", config.name)[0]
+                service_type = re.findall("(.*)_\d+", config.name)[0]
                 # exclude unsupported services
                 if service_type != service_name:
                     continue
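The underscore-suffixed naming is what makes the new parsing regex unambiguous; a quick check of the changed line (service name assumed for illustration):

```python
import re
re.findall(r"(.*)_\d+", "s3-gate_01")  # -> ["s3-gate"], so service_type == "s3-gate"
```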


@@ -5,51 +5,74 @@ from typing import Any

 import yaml

 from frostfs_testlib import reporter
-from frostfs_testlib.shell.interfaces import CommandOptions
+from frostfs_testlib.shell.interfaces import CommandOptions, Shell
 from frostfs_testlib.storage.configuration.interfaces import ServiceConfigurationYml
-from frostfs_testlib.storage.dataclasses.node_base import ServiceClass
+
+
+def extend_dict(extend_me: dict, extend_by: dict):
+    if isinstance(extend_by, dict):
+        for k, v in extend_by.items():
+            if k in extend_me:
+                extend_dict(extend_me.get(k), v)
+            else:
+                extend_me[k] = v
+    else:
+        extend_me += extend_by
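Note the merge semantics of `extend_dict` as written: nested dicts are merged recursively, but an existing scalar key keeps its current value (the `extend_me += extend_by` branch only rebinds a local), so later files can only fill in missing keys. A small illustration:

```python
base = {"logger": {"level": "info"}}
extend_dict(base, {"logger": {"level": "debug", "format": "json"}})
assert base == {"logger": {"level": "info", "format": "json"}}  # existing "level" wins
```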
 class ServiceConfiguration(ServiceConfigurationYml):
-    def __init__(self, service: "ServiceClass") -> None:
-        self.service = service
-        self.shell = self.service.host.get_shell()
-        self.confd_path = os.path.join(self.service.config_dir, "conf.d")
+    def __init__(self, service_name: str, shell: Shell, config_dir: str, main_config_path: str) -> None:
+        self.service_name = service_name
+        self.shell = shell
+        self.main_config_path = main_config_path
+        self.confd_path = os.path.join(config_dir, "conf.d")
         self.custom_file = os.path.join(self.confd_path, "99_changes.yml")

     def _path_exists(self, path: str) -> bool:
         return not self.shell.exec(f"test -e {path}", options=CommandOptions(check=False)).return_code

-    def _get_data_from_file(self, path: str) -> dict:
-        content = self.shell.exec(f"cat {path}").stdout
-        data = yaml.safe_load(content)
-        return data
-
-    def get(self, key: str) -> str:
-        with reporter.step(f"Get {key} configuration value for {self.service}"):
-            config_files = [self.service.main_config_path]
-
-            if self._path_exists(self.confd_path):
-                files = self.shell.exec(f"find {self.confd_path} -type f").stdout.strip().split()
-                # Sorting files in backwards order from latest to first one
-                config_files.extend(sorted(files, key=lambda x: -int(re.findall("^\d+", os.path.basename(x))[0])))
-
-            result = None
-            for file in files:
-                data = self._get_data_from_file(file)
-                result = self._find_option(key, data)
-                if result is not None:
-                    break
-
+    def _get_config_files(self):
+        config_files = [self.main_config_path]
+
+        if self._path_exists(self.confd_path):
+            files = self.shell.exec(f"find {self.confd_path} -type f").stdout.strip().split()
+            # Sorting files in backwards order from latest to first one
+            config_files.extend(sorted(files, key=lambda x: -int(re.findall("^\d+", os.path.basename(x))[0])))
+
+        return config_files
+
+    def _get_configuration(self, config_files: list[str]) -> dict:
+        if not config_files:
+            return [{}]
+
+        splitter = "+++++"
+        files_str = " ".join(config_files)
+        all_content = self.shell.exec(
+            f"echo Getting config files; for file in {files_str}; do (echo {splitter}; sudo cat ${{file}}); done"
+        ).stdout
+        files_content = all_content.split("+++++")[1:]
+        files_data = [yaml.safe_load(file_content) for file_content in files_content]
+
+        mergedData = {}
+        for data in files_data:
+            extend_dict(mergedData, data)
+
+        return mergedData
+
+    def get(self, key: str) -> str | Any:
+        with reporter.step(f"Get {key} configuration value for {self.service_name}"):
+            config_files = self._get_config_files()
+            configuration = self._get_configuration(config_files)
+            result = self._find_option(key, configuration)
             return result
     def set(self, values: dict[str, Any]):
-        with reporter.step(f"Change configuration for {self.service}"):
+        with reporter.step(f"Change configuration for {self.service_name}"):
             if not self._path_exists(self.confd_path):
                 self.shell.exec(f"mkdir {self.confd_path}")

             if self._path_exists(self.custom_file):
-                data = self._get_data_from_file(self.custom_file)
+                data = self._get_configuration([self.custom_file])
             else:
                 data = {}
@@ -61,5 +84,5 @@ class ServiceConfiguration(ServiceConfigurationYml):
             self.shell.exec(f"chmod 777 {self.custom_file}")

     def revert(self):
-        with reporter.step(f"Revert changed options for {self.service}"):
+        with reporter.step(f"Revert changed options for {self.service_name}"):
             self.shell.exec(f"rm -rf {self.custom_file}")


@@ -8,20 +8,19 @@ class ConfigAttributes:
     SHARD_CONFIG_PATH = "shard_config_path"
     LOGGER_CONFIG_PATH = "logger_config_path"
     LOCAL_WALLET_PATH = "local_wallet_path"
-    LOCAL_WALLET_CONFIG = "local_config_path"
+    LOCAL_WALLET_CONFIG = "local_wallet_config_path"
+    REMOTE_WALLET_CONFIG = "remote_wallet_config_path"
     ENDPOINT_DATA_0 = "endpoint_data0"
     ENDPOINT_DATA_1 = "endpoint_data1"
+    ENDPOINT_DATA_0_NS = "endpoint_data0_namespace"
     ENDPOINT_INTERNAL = "endpoint_internal0"
     ENDPOINT_PROMETHEUS = "endpoint_prometheus"
     CONTROL_ENDPOINT = "control_endpoint"
     UN_LOCODE = "un_locode"
-    HTTP_HOSTNAME = "http_hostname"
-    S3_HOSTNAME = "s3_hostname"


-class _FrostfsServicesNames:
-    STORAGE = "s"
-    S3_GATE = "s3-gate"
-    HTTP_GATE = "http-gate"
-    MORPH_CHAIN = "morph-chain"
-    INNER_RING = "ir"
+class PlacementRule:
+    DEFAULT_PLACEMENT_RULE = "REP 2 IN X CBF 1 SELECT 4 FROM * AS X"
+    SINGLE_PLACEMENT_RULE = "REP 1 IN X CBF 1 SELECT 4 FROM * AS X"
+    REP_2_FOR_3_NODES_PLACEMENT_RULE = "REP 2 IN X CBF 1 SELECT 3 FROM * AS X"
+    DEFAULT_EC_PLACEMENT_RULE = "EC 3.1"
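The placement-rule strings are plain FrostFS policy expressions; a hedged usage sketch (`create_container` and its `rule` parameter are assumed from the container steps, which are not shown in this diff):

```python
# Hypothetical call site for the new PlacementRule constants.
cid = create_container(wallet, shell=shell, endpoint=cluster.default_rpc_endpoint, rule=PlacementRule.SINGLE_PLACEMENT_RULE)
```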


@@ -1,6 +1,5 @@
 import copy
 from datetime import datetime
-from typing import Optional

 import frostfs_testlib.resources.optionals as optionals
 from frostfs_testlib import reporter
@@ -10,7 +9,6 @@ from frostfs_testlib.load.load_report import LoadReport
 from frostfs_testlib.load.load_verifiers import LoadVerifier
 from frostfs_testlib.storage.cluster import ClusterNode
 from frostfs_testlib.storage.dataclasses.frostfs_services import S3Gate, StorageNode
-from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
 from frostfs_testlib.testing.parallel import parallel
 from frostfs_testlib.testing.test_control import run_optionally
@@ -23,7 +21,6 @@ class BackgroundLoadController:
     cluster_nodes: list[ClusterNode]
     nodes_under_load: list[ClusterNode]
     load_counter: int
-    loaders_wallet: WalletInfo
     load_summaries: dict
     endpoints: list[str]
     runner: ScenarioRunner
@@ -34,7 +31,6 @@ class BackgroundLoadController:
         self,
         k6_dir: str,
         load_params: LoadParams,
-        loaders_wallet: WalletInfo,
         cluster_nodes: list[ClusterNode],
         nodes_under_load: list[ClusterNode],
         runner: ScenarioRunner,
@@ -45,7 +41,6 @@ class BackgroundLoadController:
         self.cluster_nodes = cluster_nodes
         self.nodes_under_load = nodes_under_load
         self.load_counter = 1
-        self.loaders_wallet = loaders_wallet
         self.runner = runner
         self.started = False
         self.load_reporters = []
@@ -64,10 +59,7 @@ class BackgroundLoadController:
                 )
             ),
             EndpointSelectionStrategy.FIRST: list(
-                set(
-                    node_under_load.service(StorageNode).get_rpc_endpoint()
-                    for node_under_load in self.nodes_under_load
-                )
+                set(node_under_load.service(StorageNode).get_rpc_endpoint() for node_under_load in self.nodes_under_load)
             ),
         },
         # for some reason xk6 appends http protocol on its own
@@ -195,15 +187,19 @@ class BackgroundLoadController:
             read_from=self.load_params.read_from,
             registry_file=self.load_params.registry_file,
             verify_time=self.load_params.verify_time,
+            custom_registry=self.load_params.custom_registry,
             load_type=self.load_params.load_type,
             load_id=self.load_params.load_id,
             vu_init_time=0,
             working_dir=self.load_params.working_dir,
             endpoint_selection_strategy=self.load_params.endpoint_selection_strategy,
             k6_process_allocation_strategy=self.load_params.k6_process_allocation_strategy,
-            setup_timeout="1s",
+            setup_timeout=self.load_params.setup_timeout,
         )

+        if self.verification_params.custom_registry:
+            self.verification_params.registry_file = self.load_params.custom_registry
+
         if self.verification_params.verify_time is None:
             raise RuntimeError("verify_time should not be none")


@@ -11,12 +11,15 @@ from frostfs_testlib.healthcheck.interfaces import Healthcheck
 from frostfs_testlib.hosting.interfaces import HostStatus
 from frostfs_testlib.plugins import load_all
 from frostfs_testlib.resources.cli import FROSTFS_ADM_CONFIG_PATH, FROSTFS_ADM_EXEC, FROSTFS_CLI_EXEC
-from frostfs_testlib.resources.common import DEFAULT_WALLET_CONFIG, MORPH_BLOCK_TIME
+from frostfs_testlib.resources.common import MORPH_BLOCK_TIME
 from frostfs_testlib.shell import CommandOptions, Shell, SshConnectionProvider
 from frostfs_testlib.steps.network import IpHelper
+from frostfs_testlib.steps.node_management import include_node_to_network_map, remove_nodes_from_map_morph
 from frostfs_testlib.storage.cluster import Cluster, ClusterNode, S3Gate, StorageNode
 from frostfs_testlib.storage.controllers.disk_controller import DiskController
 from frostfs_testlib.storage.dataclasses.node_base import NodeBase, ServiceClass
+from frostfs_testlib.storage.dataclasses.storage_object_info import NodeStatus
+from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
 from frostfs_testlib.testing import parallel
 from frostfs_testlib.testing.test_control import retry, run_optionally, wait_for_success
 from frostfs_testlib.utils.datetime_utils import parse_time
@@ -37,6 +40,7 @@ class ClusterStateController:
         self.stopped_nodes: list[ClusterNode] = []
         self.detached_disks: dict[str, DiskController] = {}
         self.dropped_traffic: list[ClusterNode] = []
+        self.excluded_from_netmap: list[StorageNode] = []
         self.stopped_services: set[NodeBase] = set()
         self.cluster = cluster
         self.healthcheck = healthcheck
@@ -168,6 +172,15 @@ class ClusterStateController:
         if service_type == StorageNode:
             self.wait_after_storage_startup()

+    @run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
+    @reporter.step("Send sighup to all {service_type} services")
+    def sighup_services_of_type(self, service_type: type[ServiceClass]):
+        services = self.cluster.services(service_type)
+        parallel([service.send_signal_to_service for service in services], signal="SIGHUP")
+
+        if service_type == StorageNode:
+            self.wait_after_storage_startup()
+
     @wait_for_success(600, 60)
     def wait_s3gate(self, s3gate: S3Gate):
         with reporter.step(f"Wait for {s3gate} reconnection"):
@@ -202,21 +215,27 @@ class ClusterStateController:
     @run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
     @reporter.step("Stop {service_type} service on {node}")
-    def stop_service_of_type(self, node: ClusterNode, service_type: type[ServiceClass], mask: bool = True):
+    def stop_service_of_type(self, node: ClusterNode, service_type: ServiceClass, mask: bool = True):
         service = node.service(service_type)
         service.stop_service(mask)
         self.stopped_services.add(service)

+    @run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
+    @reporter.step("Send sighup to {service_type} service on {node}")
+    def sighup_service_of_type(self, node: ClusterNode, service_type: ServiceClass):
+        service = node.service(service_type)
+        service.send_signal_to_service("SIGHUP")
+
     @run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
     @reporter.step("Start {service_type} service on {node}")
-    def start_service_of_type(self, node: ClusterNode, service_type: type[ServiceClass]):
+    def start_service_of_type(self, node: ClusterNode, service_type: ServiceClass):
         service = node.service(service_type)
         service.start_service()
         self.stopped_services.discard(service)

     @run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
     @reporter.step("Start all stopped {service_type} services")
-    def start_stopped_services_of_type(self, service_type: type[ServiceClass]):
+    def start_stopped_services_of_type(self, service_type: ServiceClass):
         stopped_svc = self._get_stopped_by_type(service_type)
         if not stopped_svc:
             return
@@ -305,27 +324,22 @@ class ClusterStateController:
         self.suspended_services = {}

     @reporter.step("Drop traffic to {node}, nodes - {block_nodes}")
-    def drop_traffic(
-        self,
-        node: ClusterNode,
-        wakeup_timeout: int,
-        name_interface: str,
-        block_nodes: list[ClusterNode] = None,
-    ) -> None:
+    def drop_traffic(self, node: ClusterNode, wakeup_timeout: int, name_interface: str, block_nodes: list[ClusterNode] = None) -> None:
         list_ip = self._parse_interfaces(block_nodes, name_interface)
         IpHelper.drop_input_traffic_to_node(node, list_ip)
         time.sleep(wakeup_timeout)
         self.dropped_traffic.append(node)

     @reporter.step("Start traffic to {node}")
-    def restore_traffic(
-        self,
-        node: ClusterNode,
-    ) -> None:
+    def restore_traffic(self, node: ClusterNode) -> None:
         IpHelper.restore_input_traffic_to_node(node=node)
-        index = self.dropped_traffic.index(node)
-        self.dropped_traffic.pop(index)

     @reporter.step("Restore blocked nodes")
     def restore_all_traffic(self):
+        if not self.dropped_traffic:
+            return
         parallel(self._restore_traffic_to_node, self.dropped_traffic)

     @run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
@@ -404,51 +418,69 @@ class ClusterStateController:
     @reporter.step("Set MaintenanceModeAllowed - {status}")
     def set_maintenance_mode_allowed(self, status: str, cluster_node: ClusterNode) -> None:
         frostfs_adm = FrostfsAdm(
-            shell=cluster_node.host.get_shell(),
-            frostfs_adm_exec_path=FROSTFS_ADM_EXEC,
-            config_file=FROSTFS_ADM_CONFIG_PATH,
+            shell=cluster_node.host.get_shell(), frostfs_adm_exec_path=FROSTFS_ADM_EXEC, config_file=FROSTFS_ADM_CONFIG_PATH
         )
         frostfs_adm.morph.set_config(set_key_value=f"MaintenanceModeAllowed={status}")

-    @reporter.step("Set mode node to {status}")
-    def set_mode_node(self, cluster_node: ClusterNode, wallet: str, status: str, await_tick: bool = True) -> None:
+    @reporter.step("Set node status to {status} in CSC")
+    def set_node_status(self, cluster_node: ClusterNode, wallet: WalletInfo, status: NodeStatus, await_tick: bool = True) -> None:
         rpc_endpoint = cluster_node.storage_node.get_rpc_endpoint()
         control_endpoint = cluster_node.service(StorageNode).get_control_endpoint()

-        frostfs_adm, frostfs_cli, frostfs_cli_remote = self._get_cli(local_shell=self.shell, cluster_node=cluster_node)
-        node_netinfo = NetmapParser.netinfo(frostfs_cli.netmap.netinfo(rpc_endpoint=rpc_endpoint, wallet=wallet).stdout)
+        frostfs_adm, frostfs_cli, frostfs_cli_remote = self._get_cli(self.shell, wallet, cluster_node)
+        node_netinfo = NetmapParser.netinfo(frostfs_cli.netmap.netinfo(rpc_endpoint).stdout)

-        with reporter.step("If status maintenance, then check that the option is enabled"):
-            if node_netinfo.maintenance_mode_allowed == "false":
-                frostfs_adm.morph.set_config(set_key_value="MaintenanceModeAllowed=true")
+        if node_netinfo.maintenance_mode_allowed == "false":
+            with reporter.step("Enable maintenance mode"):
+                frostfs_adm.morph.set_config("MaintenanceModeAllowed=true")

-        with reporter.step(f"Change the status to {status}"):
-            frostfs_cli_remote.control.set_status(endpoint=control_endpoint, status=status)
+        with reporter.step(f"Set node status to {status} using FrostfsCli"):
+            frostfs_cli_remote.control.set_status(control_endpoint, status.value)

         if not await_tick:
             return

-        with reporter.step("Tick 1 epoch, and await 2 block"):
-            frostfs_adm.morph.force_new_epoch()
-            time.sleep(parse_time(MORPH_BLOCK_TIME) * 2)
+        with reporter.step("Tick 2 epoch with 2 block await."):
+            for _ in range(2):
+                frostfs_adm.morph.force_new_epoch()
+                time.sleep(parse_time(MORPH_BLOCK_TIME) * 2)

-        self.check_node_status(status=status, wallet=wallet, cluster_node=cluster_node)
+        self.await_node_status(status, wallet, cluster_node)

-    @wait_for_success(80, 8, title="Wait for storage status become {status}")
-    def check_node_status(self, status: str, wallet: str, cluster_node: ClusterNode):
-        frostfs_cli = FrostfsCli(
-            shell=self.shell, frostfs_cli_exec_path=FROSTFS_CLI_EXEC, config_file=DEFAULT_WALLET_CONFIG
-        )
-        netmap = NetmapParser.snapshot_all_nodes(
-            frostfs_cli.netmap.snapshot(rpc_endpoint=cluster_node.storage_node.get_rpc_endpoint(), wallet=wallet).stdout
-        )
+    @wait_for_success(80, 8, title="Wait for node status become {status}")
+    def await_node_status(self, status: NodeStatus, wallet: WalletInfo, cluster_node: ClusterNode, checker_node: ClusterNode = None):
+        frostfs_cli = FrostfsCli(self.shell, FROSTFS_CLI_EXEC, wallet.config_path)
+        if not checker_node:
+            checker_node = cluster_node
+        netmap = NetmapParser.snapshot_all_nodes(frostfs_cli.netmap.snapshot(checker_node.storage_node.get_rpc_endpoint()).stdout)
         netmap = [node for node in netmap if cluster_node.host_ip == node.node]

-        if status == "offline":
+        if status == NodeStatus.OFFLINE:
             assert cluster_node.host_ip not in netmap, f"{cluster_node.host_ip} not in Offline"
         else:
-            assert netmap[0].node_status == status.upper(), f"Node state - {netmap[0].node_status} != {status} expect"
+            assert netmap[0].node_status == status, f"Node status should be '{status}', but was '{netmap[0].node_status}'"

-    def _get_cli(self, local_shell: Shell, cluster_node: ClusterNode) -> tuple[FrostfsAdm, FrostfsCli, FrostfsCli]:
+    def remove_node_from_netmap(self, removes_nodes: list[StorageNode]) -> None:
+        alive_storage = list(set(self.cluster.storage_nodes) - set(removes_nodes))[0]
+        remove_nodes_from_map_morph(self.shell, self.cluster, removes_nodes, alive_storage)
+        self.excluded_from_netmap.extend(removes_nodes)
+
+    def include_node_to_netmap(self, include_node: StorageNode, alive_node: StorageNode):
+        include_node_to_network_map(include_node, alive_node, self.shell, self.cluster)
+        self.excluded_from_netmap.pop(self.excluded_from_netmap.index(include_node))
+
+    def include_all_excluded_nodes(self):
+        if not self.excluded_from_netmap:
+            return
+        alive_node = list(set(self.cluster.storage_nodes) - set(self.excluded_from_netmap))[0]
+        if not alive_node:
+            return
+        for exclude_node in self.excluded_from_netmap.copy():
+            self.include_node_to_netmap(exclude_node, alive_node)
+
+    def _get_cli(
+        self, local_shell: Shell, local_wallet: WalletInfo, cluster_node: ClusterNode
+    ) -> tuple[FrostfsAdm, FrostfsCli, FrostfsCli]:
         # TODO Move to service config
         host = cluster_node.host
         service_config = host.get_service_config(cluster_node.storage_node.name)
@@ -460,17 +492,9 @@ class ClusterStateController:
         wallet_config = f'wallet: {wallet_path}\npassword: "{wallet_password}"'
         shell.exec(f"echo '{wallet_config}' > {wallet_config_path}")

-        frostfs_adm = FrostfsAdm(
-            shell=shell, frostfs_adm_exec_path=FROSTFS_ADM_EXEC, config_file=FROSTFS_ADM_CONFIG_PATH
-        )
-        frostfs_cli = FrostfsCli(
-            shell=local_shell, frostfs_cli_exec_path=FROSTFS_CLI_EXEC, config_file=DEFAULT_WALLET_CONFIG
-        )
-        frostfs_cli_remote = FrostfsCli(
-            shell=shell,
-            frostfs_cli_exec_path=FROSTFS_CLI_EXEC,
-            config_file=wallet_config_path,
-        )
+        frostfs_adm = FrostfsAdm(shell=shell, frostfs_adm_exec_path=FROSTFS_ADM_EXEC, config_file=FROSTFS_ADM_CONFIG_PATH)
+        frostfs_cli = FrostfsCli(local_shell, FROSTFS_CLI_EXEC, local_wallet.config_path)
+        frostfs_cli_remote = FrostfsCli(shell=shell, frostfs_cli_exec_path=FROSTFS_CLI_EXEC, config_file=wallet_config_path)
         return frostfs_adm, frostfs_cli, frostfs_cli_remote

     def _enable_date_synchronizer(self, cluster_node: ClusterNode):
@@ -509,9 +533,7 @@ class ClusterStateController:
         options = CommandOptions(check=False)
         return self.shell.exec(f"ping {node.host.config.address} -c 1", options).return_code

-    @retry(
-        max_attempts=60, sleep_interval=10, expected_result=HostStatus.ONLINE, title="Waiting for {node} to go online"
-    )
+    @retry(max_attempts=60, sleep_interval=10, expected_result=HostStatus.ONLINE, title="Waiting for {node} to go online")
     def _wait_for_host_online(self, node: ClusterNode):
         try:
             ping_result = self._ping_host(node)
@@ -522,9 +544,7 @@ class ClusterStateController:
             logger.warning(f"Host ping fails with error {err}")
             return HostStatus.OFFLINE

-    @retry(
-        max_attempts=60, sleep_interval=10, expected_result=HostStatus.OFFLINE, title="Waiting for {node} to go offline"
-    )
+    @retry(max_attempts=60, sleep_interval=10, expected_result=HostStatus.OFFLINE, title="Waiting for {node} to go offline")
     def _wait_for_host_offline(self, node: ClusterNode):
         try:
             ping_result = self._ping_host(node)
@@ -534,3 +554,8 @@ class ClusterStateController:
         except Exception as err:
             logger.warning(f"Host ping fails with error {err}")
             return HostStatus.ONLINE
+
+    @reporter.step("Get contract by domain - {domain_name}")
+    def get_domain_contracts(self, cluster_node: ClusterNode, domain_name: str):
+        frostfs_adm = FrostfsAdm(shell=cluster_node.host.get_shell(), frostfs_adm_exec_path=FROSTFS_ADM_EXEC)
+        return frostfs_adm.morph.dump_hashes(cluster_node.morph_chain.get_http_endpoint(), domain_name).stdout
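A sketch of the new controller surface, combining the SIGHUP-based reload with the typed node-status API; `cluster_state_controller`, `node`, `other_node`, and `wallet` are assumed test fixtures:

    from frostfs_testlib.storage.cluster import StorageNode
    from frostfs_testlib.storage.dataclasses.storage_object_info import NodeStatus

    # Reload configuration on every storage service without restarting it.
    cluster_state_controller.sighup_services_of_type(StorageNode)

    # Take a node to maintenance; set_node_status ticks epochs and awaits the status.
    cluster_state_controller.set_node_status(node, wallet, NodeStatus.MAINTENANCE)

    # Bring it back online and verify from another node's netmap snapshot.
    cluster_state_controller.set_node_status(node, wallet, NodeStatus.ONLINE)
    cluster_state_controller.await_node_status(NodeStatus.ONLINE, wallet, node, checker_node=other_node)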


@@ -2,22 +2,22 @@ import json
 from typing import Any

 from frostfs_testlib.cli.frostfs_cli.shards import FrostfsCliShards
+from frostfs_testlib.shell.interfaces import CommandResult
 from frostfs_testlib.storage.cluster import ClusterNode
 from frostfs_testlib.testing.test_control import wait_for_success


 class ShardsWatcher:
-    shards_snapshots: list[dict[str, Any]] = []
-
     def __init__(self, node_under_test: ClusterNode) -> None:
+        self.shards_snapshots: list[dict[str, Any]] = []
         self.storage_node = node_under_test.storage_node
         self.take_shards_snapshot()

-    def take_shards_snapshot(self):
+    def take_shards_snapshot(self) -> None:
         snapshot = self.get_shards_snapshot()
         self.shards_snapshots.append(snapshot)

-    def get_shards_snapshot(self):
+    def get_shards_snapshot(self) -> dict[str, Any]:
         shards_snapshot: dict[str, Any] = {}

         shards = self.get_shards()
@@ -26,17 +26,17 @@ class ShardsWatcher:

         return shards_snapshot

-    def _get_current_snapshot(self):
+    def _get_current_snapshot(self) -> dict[str, Any]:
         return self.shards_snapshots[-1]

-    def _get_previous_snapshot(self):
+    def _get_previous_snapshot(self) -> dict[str, Any]:
         return self.shards_snapshots[-2]

-    def _is_shard_present(self, shard_id):
+    def _is_shard_present(self, shard_id) -> bool:
         snapshot = self._get_current_snapshot()
         return shard_id in snapshot

-    def get_shards_with_new_errors(self):
+    def get_shards_with_new_errors(self) -> dict[str, Any]:
         current_snapshot = self._get_current_snapshot()
         previous_snapshot = self._get_previous_snapshot()
         shards_with_new_errors: dict[str, Any] = {}
@@ -46,7 +46,7 @@ class ShardsWatcher:

         return shards_with_new_errors

-    def get_shards_with_errors(self):
+    def get_shards_with_errors(self) -> dict[str, Any]:
         snapshot = self.get_shards_snapshot()
         shards_with_errors: dict[str, Any] = {}
         for shard_id, shard in snapshot.items():
@@ -55,7 +55,7 @@ class ShardsWatcher:

         return shards_with_errors

-    def get_shard_status(self, shard_id: str):
+    def get_shard_status(self, shard_id: str):  # -> Any:
         snapshot = self.get_shards_snapshot()

         assert shard_id in snapshot, f"Shard {shard_id} is missing: {snapshot}"
@@ -63,28 +63,26 @@ class ShardsWatcher:
         return snapshot[shard_id]["mode"]

     @wait_for_success(60, 2)
-    def await_for_all_shards_status(self, status: str):
+    def await_for_all_shards_status(self, status: str) -> None:
         snapshot = self.get_shards_snapshot()

         for shard_id in snapshot:
             assert snapshot[shard_id]["mode"] == status, f"Shard {shard_id} have wrong shard status"

     @wait_for_success(60, 2)
-    def await_for_shard_status(self, shard_id: str, status: str):
+    def await_for_shard_status(self, shard_id: str, status: str) -> None:
         assert self.get_shard_status(shard_id) == status

     @wait_for_success(60, 2)
-    def await_for_shard_have_new_errors(self, shard_id: str):
+    def await_for_shard_have_new_errors(self, shard_id: str) -> None:
         self.take_shards_snapshot()
         assert self._is_shard_present(shard_id)
         shards_with_new_errors = self.get_shards_with_new_errors()

-        assert (
-            shard_id in shards_with_new_errors
-        ), f"Expected shard {shard_id} to have new errors, but haven't {self.shards_snapshots[-1]}"
+        assert shard_id in shards_with_new_errors, f"Expected shard {shard_id} to have new errors, but haven't {self.shards_snapshots[-1]}"

     @wait_for_success(300, 5)
-    def await_for_shards_have_no_new_errors(self):
+    def await_for_shards_have_no_new_errors(self) -> None:
         self.take_shards_snapshot()
         shards_with_new_errors = self.get_shards_with_new_errors()
         assert len(shards_with_new_errors) == 0
@@ -104,15 +102,15 @@ class ShardsWatcher:
         return json.loads(response.stdout.split(">", 1)[1])

-    def set_shard_mode(self, shard_id: str, mode: str, clear_errors: bool = True):
+    def set_shard_mode(self, shard_id: str, mode: str, clear_errors: bool = True) -> CommandResult:
         shards_cli = FrostfsCliShards(
             self.storage_node.host.get_shell(),
             self.storage_node.host.get_cli_config("frostfs-cli").exec_path,
         )
         return shards_cli.set_mode(
-            self.storage_node.get_control_endpoint(),
-            self.storage_node.get_remote_wallet_path(),
-            self.storage_node.get_wallet_password(),
+            endpoint=self.storage_node.get_control_endpoint(),
+            wallet=self.storage_node.get_remote_wallet_path(),
+            wallet_password=self.storage_node.get_wallet_password(),
             mode=mode,
             id=[shard_id],
             clear_errors=clear_errors,
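Moving shards_snapshots into __init__ makes each watcher's history independent of other instances. A usage sketch; the shard id and mode string are illustrative:

    watcher = ShardsWatcher(node_under_test)
    result = watcher.set_shard_mode(shard_id, mode="read-only")  # now typed as CommandResult
    watcher.await_for_shard_status(shard_id, "read-only")
    watcher.await_for_shards_have_no_new_errors()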


@@ -14,14 +14,19 @@ class ConfigStateManager(StateManager):
         self.cluster = self.csc.cluster

     @reporter.step("Change configuration for {service_type} on all nodes")
-    def set_on_all_nodes(self, service_type: type[ServiceClass], values: dict[str, Any]):
+    def set_on_all_nodes(self, service_type: type[ServiceClass], values: dict[str, Any], sighup: bool = False):
         services = self.cluster.services(service_type)
         nodes = self.cluster.nodes(services)
         self.services_with_changed_config.update([(node, service_type) for node in nodes])

-        self.csc.stop_services_of_type(service_type)
+        if not sighup:
+            self.csc.stop_services_of_type(service_type)
+
         parallel([node.config(service_type).set for node in nodes], values=values)
-        self.csc.start_services_of_type(service_type)
+        if not sighup:
+            self.csc.start_services_of_type(service_type)
+        else:
+            self.csc.sighup_services_of_type(service_type)

     @reporter.step("Change configuration for {service_type} on {node}")
     def set_on_node(self, node: ClusterNode, service_type: type[ServiceClass], values: dict[str, Any]):
@@ -32,18 +37,26 @@ class ConfigStateManager(StateManager):
         self.csc.start_service_of_type(node, service_type)

     @reporter.step("Revert all configuration changes")
-    def revert_all(self):
+    def revert_all(self, sighup: bool = False):
         if not self.services_with_changed_config:
             return

-        parallel(self._revert_svc, self.services_with_changed_config)
+        parallel(self._revert_svc, self.services_with_changed_config, sighup)
         self.services_with_changed_config.clear()

-        self.csc.start_all_stopped_services()
+        if not sighup:
+            self.csc.start_all_stopped_services()

     # TODO: parallel can't have multiple parallel_items :(
     @reporter.step("Revert all configuration {node_and_service}")
-    def _revert_svc(self, node_and_service: tuple[ClusterNode, ServiceClass]):
+    def _revert_svc(self, node_and_service: tuple[ClusterNode, ServiceClass], sighup: bool = False):
         node, service_type = node_and_service
+        service = node.service(service_type)

-        self.csc.stop_service_of_type(node, service_type)
+        if not sighup:
+            self.csc.stop_service_of_type(node, service_type)
+
         node.config(service_type).revert()

+        if sighup:
+            service.send_signal_to_service("SIGHUP")
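With sighup=True the manager applies and reverts configuration without stopping services. A sketch, assuming a `config_state_manager` fixture and an illustrative option key:

    from frostfs_testlib.storage.cluster import StorageNode

    config_state_manager.set_on_all_nodes(StorageNode, {"logger:level": "debug"}, sighup=True)
    # ... run the test against the reloaded services ...
    config_state_manager.revert_all(sighup=True)  # services are reloaded, not restarted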


@@ -1,8 +1,8 @@
 import logging
 from dataclasses import dataclass
-from enum import Enum
 from typing import Any, Dict, List, Optional, Union

+from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
 from frostfs_testlib.testing.readable import HumanReadableEnum
 from frostfs_testlib.utils import wallet_utils
@@ -65,11 +65,7 @@ class EACLFilters:
     def __str__(self):
         return ",".join(
-            [
-                f"{filter.header_type.value}:"
-                f"{filter.key}{filter.match_type.value}{filter.value}"
-                for filter in self.filters
-            ]
+            [f"{filter.header_type.value}:" f"{filter.key}{filter.match_type.value}{filter.value}" for filter in self.filters]
             if self.filters
             else []
         )
@@ -84,7 +80,7 @@ class EACLPubKey:
 class EACLRule:
     operation: Optional[EACLOperation] = None
     access: Optional[EACLAccess] = None
-    role: Optional[Union[EACLRole, str]] = None
+    role: Optional[Union[EACLRole, WalletInfo]] = None
     filters: Optional[EACLFilters] = None

     def to_dict(self) -> Dict[str, Any]:
@@ -96,9 +92,9 @@ class EACLRule:
         }

     def __str__(self):
-        role = (
-            self.role.value
-            if isinstance(self.role, EACLRole)
-            else f'pubkey:{wallet_utils.get_wallet_public_key(self.role, "")}'
-        )
+        role = ""
+        if isinstance(self.role, EACLRole):
+            role = self.role.value
+        if isinstance(self.role, WalletInfo):
+            role = f"pubkey:{wallet_utils.get_wallet_public_key(self.role.path, self.role.password)}"
         return f'{self.access.value} {self.operation.value} {self.filters or ""} {role}'


@@ -0,0 +1,152 @@
import logging
from dataclasses import dataclass
from enum import Enum
from typing import Optional

from frostfs_testlib.testing.readable import HumanReadableEnum
from frostfs_testlib.utils import string_utils

logger = logging.getLogger("NeoLogger")
EACL_LIFETIME = 100500
FROSTFS_CONTRACT_CACHE_TIMEOUT = 30


class ObjectOperations(HumanReadableEnum):
    PUT = "object.put"
    GET = "object.get"
    HEAD = "object.head"
    GET_RANGE = "object.range"
    GET_RANGE_HASH = "object.hash"
    SEARCH = "object.search"
    DELETE = "object.delete"
    WILDCARD_ALL = "object.*"

    @staticmethod
    def get_all():
        return [op for op in ObjectOperations if op != ObjectOperations.WILDCARD_ALL]


class ContainerOperations(HumanReadableEnum):
    PUT = "container.put"
    GET = "container.get"
    LIST = "container.list"
    DELETE = "container.delete"
    WILDCARD_ALL = "container.*"

    @staticmethod
    def get_all():
        # Iterate over this enum's members, not ObjectOperations (copy-paste fix).
        return [op for op in ContainerOperations if op != ContainerOperations.WILDCARD_ALL]


@dataclass
class Operations:
    GET_CONTAINER = "GetContainer"
    PUT_CONTAINER = "PutContainer"
    DELETE_CONTAINER = "DeleteContainer"
    LIST_CONTAINER = "ListContainers"
    GET_OBJECT = "GetObject"
    DELETE_OBJECT = "DeleteObject"
    HASH_OBJECT = "HashObject"
    RANGE_OBJECT = "RangeObject"
    SEARCH_OBJECT = "SearchObject"
    HEAD_OBJECT = "HeadObject"
    PUT_OBJECT = "PutObject"


class Verb(HumanReadableEnum):
    ALLOW = "allow"
    DENY = "deny"


class Role(HumanReadableEnum):
    OWNER = "owner"
    IR = "ir"
    CONTAINER = "container"
    OTHERS = "others"


class ConditionType(HumanReadableEnum):
    RESOURCE = "ResourceCondition"
    REQUEST = "RequestCondition"


# See https://git.frostfs.info/TrueCloudLab/policy-engine/src/branch/master/schema/native/consts.go#L40-L53
class ConditionKey(HumanReadableEnum):
    ROLE = '"\\$Actor:role"'
    PUBLIC_KEY = '"\\$Actor:publicKey"'
    OBJECT_TYPE = '"\\$Object:objectType"'
    OBJECT_ID = '"\\$Object:objectID"'


class MatchType(HumanReadableEnum):
    EQUAL = "="
    NOT_EQUAL = "!="


@dataclass
class Condition:
    condition_key: ConditionKey | str
    condition_value: str
    condition_type: ConditionType = ConditionType.REQUEST
    match_type: MatchType = MatchType.EQUAL

    def as_string(self):
        key = self.condition_key.value if isinstance(self.condition_key, ConditionKey) else self.condition_key
        value = self.condition_value.value if isinstance(self.condition_value, Enum) else self.condition_value

        return f"{self.condition_type.value}:{key}{self.match_type.value}{value}"

    @staticmethod
    def by_role(*args, **kwargs) -> "Condition":
        return Condition(ConditionKey.ROLE, *args, **kwargs)

    @staticmethod
    def by_key(*args, **kwargs) -> "Condition":
        return Condition(ConditionKey.PUBLIC_KEY, *args, **kwargs)

    @staticmethod
    def by_object_type(*args, **kwargs) -> "Condition":
        return Condition(ConditionKey.OBJECT_TYPE, *args, **kwargs)

    @staticmethod
    def by_object_id(*args, **kwargs) -> "Condition":
        return Condition(ConditionKey.OBJECT_ID, *args, **kwargs)


class Rule:
    def __init__(
        self,
        access: Verb,
        operations: list[ObjectOperations] | ObjectOperations,
        conditions: list[Condition] | Condition = None,
        chain_id: Optional[str] = None,
    ) -> None:
        self.access = access
        self.operations = operations

        if not conditions:
            self.conditions = []
        elif isinstance(conditions, Condition):
            self.conditions = [conditions]
        else:
            self.conditions = conditions

        if not isinstance(self.conditions, list):
            raise RuntimeError("Conditions must be a list")

        if not operations:
            self.operations = []
        elif isinstance(operations, (ObjectOperations, ContainerOperations)):
            self.operations = [operations]
        else:
            self.operations = operations

        if not isinstance(self.operations, list):
            raise RuntimeError("Operations must be a list")

        self.chain_id = chain_id if chain_id else string_utils.unique_name("chain-id-")

    def as_string(self):
        conditions = " ".join([cond.as_string() for cond in self.conditions])
        operations = " ".join([op.value for op in self.operations])
        return f"{self.access.value} {operations} {conditions} *"


@@ -5,6 +5,7 @@ from frostfs_testlib.storage.constants import ConfigAttributes
 from frostfs_testlib.storage.dataclasses.node_base import NodeBase
 from frostfs_testlib.storage.dataclasses.shard import Shard


 class InnerRing(NodeBase):
     """
     Class represents inner ring node in a cluster
@@ -17,11 +18,7 @@ class InnerRing(NodeBase):
     def service_healthcheck(self) -> bool:
         health_metric = "frostfs_ir_ir_health"
-        output = (
-            self.host.get_shell()
-            .exec(f"curl -s localhost:6662 | grep {health_metric} | sed 1,2d")
-            .stdout
-        )
+        output = self.host.get_shell().exec(f"curl -s localhost:6662 | grep {health_metric} | sed 1,2d").stdout
         return health_metric in output

     def get_netmap_cleaner_threshold(self) -> str:
@@ -42,19 +39,21 @@ class S3Gate(NodeBase):
     def get_endpoint(self) -> str:
         return self._get_attribute(ConfigAttributes.ENDPOINT_DATA_0)

-    def get_ns_endpoint(self, ns_name: str) -> str:
-        return self._get_attribute(f"{ConfigAttributes.ENDPOINT_DATA_0}_namespace").format(namespace=ns_name)
-
     def get_all_endpoints(self) -> list[str]:
         return [
             self._get_attribute(ConfigAttributes.ENDPOINT_DATA_0),
             self._get_attribute(ConfigAttributes.ENDPOINT_DATA_1),
         ]

+    def get_ns_endpoint(self, ns_name: str) -> str:
+        return self._get_attribute(ConfigAttributes.ENDPOINT_DATA_0_NS).format(namespace=ns_name)
+
     def service_healthcheck(self) -> bool:
         health_metric = "frostfs_s3_gw_state_health"
-        output = (
-            self.host.get_shell()
-            .exec(f"curl -s localhost:8086 | grep {health_metric} | sed 1,2d")
-            .stdout
-        )
+        output = self.host.get_shell().exec(f"curl -s localhost:8086 | grep {health_metric} | sed 1,2d").stdout
         return health_metric in output

     @property
@@ -72,11 +71,7 @@ class HTTPGate(NodeBase):
     def service_healthcheck(self) -> bool:
         health_metric = "frostfs_http_gw_state_health"
-        output = (
-            self.host.get_shell()
-            .exec(f"curl -s localhost:5662 | grep {health_metric} | sed 1,2d")
-            .stdout
-        )
+        output = self.host.get_shell().exec(f"curl -s localhost:5662 | grep {health_metric} | sed 1,2d").stdout
         return health_metric in output

     @property
@@ -135,32 +130,26 @@ class StorageNode(NodeBase):
     def service_healthcheck(self) -> bool:
         health_metric = "frostfs_node_state_health"
-        output = (
-            self.host.get_shell()
-            .exec(f"curl -s localhost:6672 | grep {health_metric} | sed 1,2d")
-            .stdout
-        )
+        output = self.host.get_shell().exec(f"curl -s localhost:6672 | grep {health_metric} | sed 1,2d").stdout
         return health_metric in output

+    # TODO: Deprecated. Use new approach with config
     def get_shard_config_path(self) -> str:
         return self._get_attribute(ConfigAttributes.SHARD_CONFIG_PATH)

+    # TODO: Deprecated. Use new approach with config
     def get_shards_config(self) -> tuple[str, dict]:
         return self.get_config(self.get_shard_config_path())

     def get_shards(self) -> list[Shard]:
-        config = self.get_shards_config()[1]
-        config["storage"]["shard"].pop("default")
-        return [Shard.from_object(shard) for shard in config["storage"]["shard"].values()]
-
-    def get_shards_from_env(self) -> list[Shard]:
-        config = self.get_shards_config()[1]
-        configObj = ConfigObj(StringIO(config))
+        shards = self.config.get("storage:shard")

-        pattern = f"{SHARD_PREFIX}\d*"
-        num_shards = len(set(re.findall(pattern, self.get_shards_config())))
+        if not shards:
+            raise RuntimeError(f"Cannot get shards information for {self.name} on {self.host.config.address}")

-        return [Shard.from_config_object(configObj, shard_id) for shard_id in range(num_shards)]
+        if "default" in shards:
+            shards.pop("default")
+        return [Shard.from_object(shard) for shard in shards.values()]

     def get_control_endpoint(self) -> str:
         return self._get_attribute(ConfigAttributes.CONTROL_ENDPOINT)
@@ -171,15 +160,6 @@ class StorageNode(NodeBase):
     def get_data_directory(self) -> str:
         return self.host.get_data_directory(self.name)

-    def get_storage_config(self) -> str:
-        return self.host.get_storage_config(self.name)
-
-    def get_http_hostname(self) -> str:
-        return self._get_attribute(ConfigAttributes.HTTP_HOSTNAME)
-
-    def get_s3_hostname(self) -> str:
-        return self._get_attribute(ConfigAttributes.S3_HOSTNAME)
-
     def delete_blobovnicza(self):
         self.host.delete_blobovnicza(self.name)
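Shard discovery now goes through the merged YAML view (self.config.get("storage:shard")) rather than env-style parsing, which is why get_shards_from_env disappears. A sketch, assuming `storage_node` is a StorageNode fixture:

    shards = storage_node.get_shards()  # raises RuntimeError if no shards in config
    for shard in shards:
        print(shard.metabase, shard.writecache)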


@@ -0,0 +1,36 @@
from frostfs_testlib.hosting import Host
from frostfs_testlib.shell.interfaces import CommandResult


class Metrics:
    def __init__(self, host: Host, metrics_endpoint: str) -> None:
        self.storage = StorageMetrics(host, metrics_endpoint)


class StorageMetrics:
    """
    Class represents storage metrics in a cluster
    """

    def __init__(self, host: Host, metrics_endpoint: str) -> None:
        self.host = host
        self.metrics_endpoint = metrics_endpoint

    def get_metrics_search_by_greps(self, **greps) -> CommandResult:
        """
        Get metrics, searching by: cid, metric_type, shard_id, etc.
        Args:
            greps: dict of grep-command-name and value,
            for example get_metrics_search_by_greps(command='container_objects_total', cid='123456')
        Return:
            result of metrics
        """
        shell = self.host.get_shell()
        additional_greps = " |grep ".join([grep_command for grep_command in greps.values()])
        result = shell.exec(f"curl -s {self.metrics_endpoint} | grep {additional_greps}")
        return result

    def get_all_metrics(self) -> CommandResult:
        shell = self.host.get_shell()
        result = shell.exec(f"curl -s {self.metrics_endpoint}")
        return result
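Usage sketch; the endpoint and grep values are illustrative. Note that only the values of `greps` end up in the pipeline, so the keyword names serve readability at the call site:

    metrics = StorageMetrics(host, "localhost:6672")
    result = metrics.get_metrics_search_by_greps(command="container_objects_total", cid="SomeCID")
    print(result.stdout)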


@@ -10,6 +10,7 @@ from frostfs_testlib import reporter
 from frostfs_testlib.hosting.config import ServiceConfig
 from frostfs_testlib.hosting.interfaces import Host
 from frostfs_testlib.shell.interfaces import CommandResult
+from frostfs_testlib.storage.configuration.service_configuration import ServiceConfiguration, ServiceConfigurationYml
 from frostfs_testlib.storage.constants import ConfigAttributes
 from frostfs_testlib.testing.readable import HumanReadableABC
 from frostfs_testlib.utils import wallet_utils
@@ -64,6 +65,10 @@ class NodeBase(HumanReadableABC):
         with reporter.step(f"Start {self.name} service on {self.host.config.address}"):
             self.host.start_service(self.name)

+    def send_signal_to_service(self, signal: str):
+        with reporter.step(f"Send -{signal} signal to {self.name} service on {self.host.config.address}"):
+            self.host.send_signal_to_service(self.name, signal)
+
     @abstractmethod
     def service_healthcheck(self) -> bool:
         """Service healthcheck."""
@@ -114,6 +119,14 @@ class NodeBase(HumanReadableABC):
             ConfigAttributes.CONFIG_PATH,
         )

+    def get_remote_wallet_config_path(self) -> str:
+        """
+        Returns node config file path located on remote host
+        """
+        return self._get_attribute(
+            ConfigAttributes.REMOTE_WALLET_CONFIG,
+        )
+
     def get_wallet_config_path(self) -> str:
         return self._get_attribute(
             ConfigAttributes.LOCAL_WALLET_CONFIG,
@@ -125,8 +138,11 @@ class NodeBase(HumanReadableABC):
         Returns config path for logger located on remote host
         """
         config_attributes = self.host.get_service_config(self.name)
-        return self._get_attribute(
-            ConfigAttributes.LOGGER_CONFIG_PATH) if ConfigAttributes.LOGGER_CONFIG_PATH in config_attributes.attributes else None
+        return (
+            self._get_attribute(ConfigAttributes.LOGGER_CONFIG_PATH)
+            if ConfigAttributes.LOGGER_CONFIG_PATH in config_attributes.attributes
+            else None
+        )

     @property
     def config_dir(self) -> str:
@@ -136,7 +152,11 @@ class NodeBase(HumanReadableABC):
     def main_config_path(self) -> str:
         return self._get_attribute(ConfigAttributes.CONFIG_PATH)

-    # TODO: Deprecated
+    @property
+    def config(self) -> ServiceConfigurationYml:
+        return ServiceConfiguration(self.name, self.host.get_shell(), self.config_dir, self.main_config_path)
+
+    # TODO: Deprecated. Use config with ServiceConfigurationYml interface
     def get_config(self, config_file_path: Optional[str] = None) -> tuple[str, dict]:
         if config_file_path is None:
             config_file_path = self._get_attribute(ConfigAttributes.CONFIG_PATH)
@@ -149,7 +169,7 @@ class NodeBase(HumanReadableABC):
         config = yaml.safe_load(config_text)
         return config_file_path, config

-    # TODO: Deprecated
+    # TODO: Deprecated. Use config with ServiceConfigurationYml interface
     def save_config(self, new_config: dict, config_file_path: Optional[str] = None) -> None:
         if config_file_path is None:
             config_file_path = self._get_attribute(ConfigAttributes.CONFIG_PATH)
@@ -169,9 +189,7 @@ class NodeBase(HumanReadableABC):
         if attribute_name not in config.attributes:
             if default_attribute_name is None:
-                raise RuntimeError(
-                    f"Service {self.name} has no {attribute_name} in config and fallback attribute isn't set either"
-                )
+                raise RuntimeError(f"Service {self.name} has no {attribute_name} in config and fallback attribute isn't set either")

             return config.attributes[default_attribute_name]
@@ -181,9 +199,7 @@ class NodeBase(HumanReadableABC):
         return self.host.get_service_config(self.name)

     def get_service_uptime(self, service: str) -> datetime:
-        result = self.host.get_shell().exec(
-            f"systemctl show {service} --property ActiveEnterTimestamp | cut -d '=' -f 2"
-        )
+        result = self.host.get_shell().exec(f"systemctl show {service} --property ActiveEnterTimestamp | cut -d '=' -f 2")
         start_time = parser.parse(result.stdout.strip())
         current_time = datetime.now(tz=timezone.utc)
         active_time = current_time - start_time
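The new `config` property hands every service the same ServiceConfigurationYml facade shown earlier, superseding the deprecated get_config/save_config pair for most uses. A sketch with an illustrative option key:

    # Read one option through the merged YAML view of this service's configs;
    # the key is an assumption for illustration.
    shard_pool_size = storage_node.config.get("storage:shard_pool_size")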


@@ -0,0 +1,13 @@
from dataclasses import dataclass


@dataclass
class PlacementPolicy:
    name: str
    value: str

    def __str__(self) -> str:
        return self.name

    def __repr__(self) -> str:
        return self.__str__()
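A named wrapper like this lets parametrized tests render a readable policy name instead of the full rule string; the values here reuse PlacementRule from the constants change above:

    from frostfs_testlib.storage.constants import PlacementRule

    rep2 = PlacementPolicy("rep2", PlacementRule.DEFAULT_PLACEMENT_RULE)
    str(rep2)   # "rep2", which is what shows up in test IDs
    rep2.value  # the actual placement rule string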


@@ -1,16 +1,6 @@
-import json
-import pathlib
-import re
 from dataclasses import dataclass
-from io import StringIO

-import allure
-import pytest
-import yaml
 from configobj import ConfigObj
-from frostfs_testlib.cli import FrostfsCli
-from frostfs_testlib.resources.cli import CLI_DEFAULT_TIMEOUT
-from frostfs_testlib.resources.common import DEFAULT_WALLET_CONFIG

 SHARD_PREFIX = "FROSTFS_STORAGE_SHARD_"
 BLOBSTOR_PREFIX = "_BLOBSTOR_"
@@ -66,9 +56,7 @@ class Shard:
         var_prefix = f"{SHARD_PREFIX}{shard_id}"

         blobstor_count = Shard._get_blobstor_count_from_section(config_object, shard_id)
-        blobstors = [
-            Blobstor.from_config_object(config_object, shard_id, blobstor_id) for blobstor_id in range(blobstor_count)
-        ]
+        blobstors = [Blobstor.from_config_object(config_object, shard_id, blobstor_id) for blobstor_id in range(blobstor_count)]

         write_cache_enabled = config_object.as_bool(f"{var_prefix}_WRITECACHE_ENABLED")
@@ -81,7 +69,13 @@ class Shard:
     @staticmethod
     def from_object(shard):
         metabase = shard["metabase"]["path"] if "path" in shard["metabase"] else shard["metabase"]
+        writecache_enabled = True
+        if "enabled" in shard["writecache"]:
+            writecache_enabled = shard["writecache"]["enabled"]
+
         writecache = shard["writecache"]["path"] if "path" in shard["writecache"] else shard["writecache"]
+        if not writecache_enabled:
+            writecache = ""

         # Currently due to issue we need to check if pilorama exists in keys
         # TODO: make pilorama mandatory after fix
@@ -94,6 +88,5 @@ class Shard:
             blobstor=[Blobstor(path=blobstor["path"], path_type=blobstor["type"]) for blobstor in shard["blobstor"]],
             metabase=metabase,
             writecache=writecache,
-            pilorama=pilorama
+            pilorama=pilorama,
         )
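A sketch of the new writecache handling in from_object; the dict shape mirrors the keys the method reads, and the paths are illustrative:

    shard = Shard.from_object({
        "metabase": {"path": "/meta"},
        "writecache": {"enabled": False, "path": "/wc"},
        "blobstor": [{"path": "/blob", "type": "blobovnicza"}],
        "pilorama": {"path": "/pilorama"},
    })
    assert shard.writecache == ""  # a disabled cache now reads as an empty path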


@@ -1,6 +1,7 @@
 from dataclasses import dataclass
 from typing import Optional

+from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
 from frostfs_testlib.testing.readable import HumanReadableEnum
@@ -19,7 +20,7 @@ class LockObjectInfo(ObjectRef):
 @dataclass
 class StorageObjectInfo(ObjectRef):
     size: Optional[int] = None
-    wallet_file_path: Optional[str] = None
+    wallet: Optional[WalletInfo] = None
     file_path: Optional[str] = None
     file_hash: Optional[str] = None
     attributes: Optional[list[dict[str, str]]] = None
@@ -27,7 +28,7 @@ class StorageObjectInfo(ObjectRef):
     locks: Optional[list[LockObjectInfo]] = None

-class ModeNode(HumanReadableEnum):
+class NodeStatus(HumanReadableEnum):
     MAINTENANCE: str = "maintenance"
     ONLINE: str = "online"
     OFFLINE: str = "offline"
@@ -36,7 +37,7 @@ class ModeNode(HumanReadableEnum):
 @dataclass
 class NodeNetmapInfo:
     node_id: str = None
-    node_status: ModeNode = None
+    node_status: NodeStatus = None
     node_data_ips: list[str] = None
     cluster_name: str = None
     continent: str = None
@@ -69,8 +70,26 @@ class NodeNetInfo:
     epoch_duration: str = None
     inner_ring_candidate_fee: str = None
     maximum_object_size: str = None
+    maximum_count_of_data_shards: str = None
+    maximum_count_of_parity_shards: str = None
     withdrawal_fee: str = None
     homomorphic_hashing_disabled: str = None
     maintenance_mode_allowed: str = None
     eigen_trust_alpha: str = None
     eigen_trust_iterations: str = None

+
+@dataclass
+class Chunk:
+    def __init__(self, object_id: str, required_nodes: list, confirmed_nodes: list, ec_parent_object_id: str, ec_index: int) -> None:
+        self.object_id = object_id
+        self.required_nodes = required_nodes
+        self.confirmed_nodes = confirmed_nodes
+        self.ec_parent_object_id = ec_parent_object_id
+        self.ec_index = ec_index
+
+    def __str__(self) -> str:
+        return self.object_id
+
+    def __repr__(self) -> str:
+        return self.object_id


@@ -1,13 +1,15 @@
 import json
 import logging
 import os
-import uuid
 from dataclasses import dataclass
 from typing import Optional

-from frostfs_testlib.resources.common import DEFAULT_WALLET_CONFIG, DEFAULT_WALLET_PASS
+import yaml
+
+from frostfs_testlib import reporter
+from frostfs_testlib.resources.common import ASSETS_DIR, DEFAULT_WALLET_CONFIG, DEFAULT_WALLET_PASS
 from frostfs_testlib.shell import Shell
-from frostfs_testlib.storage.cluster import Cluster, NodeBase
+from frostfs_testlib.storage.cluster import NodeBase
 from frostfs_testlib.utils.wallet_utils import get_last_address_from_wallet, init_wallet

 logger = logging.getLogger("frostfs.testlib.utils")
@@ -21,9 +23,13 @@ class WalletInfo:
     @staticmethod
     def from_node(node: NodeBase):
-        return WalletInfo(
-            node.get_wallet_path(), node.get_wallet_password(), node.get_wallet_config_path()
-        )
+        wallet_path = node.get_wallet_path()
+        wallet_password = node.get_wallet_password()
+        wallet_config_file = os.path.join(ASSETS_DIR, os.path.basename(node.get_wallet_config_path()))
+        with open(wallet_config_file, "w") as file:
+            file.write(yaml.dump({"wallet": wallet_path, "password": wallet_password}))
+
+        return WalletInfo(wallet_path, wallet_password, wallet_config_file)

     def get_address(self) -> str:
         """
@@ -47,22 +53,17 @@ class WalletInfo:
         """
         with open(self.path, "r") as wallet:
             wallet_json = json.load(wallet)
-        assert abs(account_id) + 1 <= len(
-            wallet_json["accounts"]
-        ), f"There is no index '{account_id}' in wallet: {wallet_json}"
+        assert abs(account_id) + 1 <= len(wallet_json["accounts"]), f"There is no index '{account_id}' in wallet: {wallet_json}"

         return wallet_json["accounts"][account_id]["address"]


 class WalletFactory:
-    def __init__(self, wallets_dir: str, shell: Shell, cluster: Cluster) -> None:
+    def __init__(self, wallets_dir: str, shell: Shell) -> None:
         self.shell = shell
         self.wallets_dir = wallets_dir
-        self.cluster = cluster

-    def create_wallet(
-        self, file_name: Optional[str] = None, password: Optional[str] = None
-    ) -> WalletInfo:
+    def create_wallet(self, file_name: str, password: Optional[str] = None) -> WalletInfo:
         """
         Creates new default wallet.
@@ -74,8 +75,6 @@ class WalletFactory:
             WalletInfo object of new wallet.
         """

-        if file_name is None:
-            file_name = str(uuid.uuid4())
         if password is None:
             password = ""

@@ -85,6 +84,8 @@ class WalletFactory:
         init_wallet(wallet_path, password)

         with open(wallet_config_path, "w") as config_file:
-            config_file.write(f'password: "{password}"')
+            config_file.write(f'wallet: {wallet_path}\npassword: "{password}"')
+
+        reporter.attach(wallet_path, os.path.basename(wallet_path))

         return WalletInfo(wallet_path, password, wallet_config_path)
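from_node now materializes a wallet config under ASSETS_DIR, and create_wallet requires an explicit file name (the old uuid fallback is gone). A sketch, with `wallets_dir` and `shell` assumed fixtures:

    factory = WalletFactory(wallets_dir, shell)
    wallet = factory.create_wallet(file_name="user-wallet", password="")
    # The config now carries the wallet path too, so CLI calls need only config_path.
    print(wallet.config_path)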


@@ -0,0 +1,14 @@
from frostfs_testlib.cli.frostfs_cli.cli import FrostfsCli
from frostfs_testlib.storage.grpc_operations import interfaces
from frostfs_testlib.storage.grpc_operations.implementations import container, object


class CliClientWrapper(interfaces.GrpcClientWrapper):
    def __init__(self, cli: FrostfsCli) -> None:
        self.cli = cli
        self.object: interfaces.ObjectInterface = object.ObjectOperations(self.cli)
        self.container: interfaces.ContainerInterface = container.ContainerOperations(self.cli)


class RpcClientWrapper(interfaces.GrpcClientWrapper):
    pass  # The next series
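The wrapper groups gRPC-level operations behind one object. A sketch assuming a configured FrostfsCli instance, and assuming the object group exposes the chunks interface implemented in the next file (not confirmed by this diff):

    client = CliClientWrapper(cli)
    # Hypothetical: chunks of an EC object via the ChunksOperations below.
    chunks = client.object.chunks.get_all(rpc_endpoint, cid, oid)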


@@ -0,0 +1,165 @@
import json
from typing import Optional

from frostfs_testlib import reporter
from frostfs_testlib.cli.frostfs_cli.cli import FrostfsCli
from frostfs_testlib.resources.cli import CLI_DEFAULT_TIMEOUT
from frostfs_testlib.storage.cluster import Cluster, ClusterNode
from frostfs_testlib.storage.controllers.shards_watcher import ShardsWatcher
from frostfs_testlib.storage.dataclasses.storage_object_info import Chunk, NodeNetmapInfo
from frostfs_testlib.storage.grpc_operations import interfaces
from frostfs_testlib.testing.test_control import wait_for_success
from frostfs_testlib.utils.cli_utils import parse_netmap_output


class ChunksOperations(interfaces.ChunksInterface):
    def __init__(self, cli: FrostfsCli) -> None:
        self.cli = cli

    @reporter.step("Search node without chunks")
    def search_node_without_chunks(self, chunks: list[Chunk], cluster: Cluster, endpoint: str = None) -> list[ClusterNode]:
        if not endpoint:
            endpoint = cluster.default_rpc_endpoint
        netmap = parse_netmap_output(self.cli.netmap.snapshot(endpoint, timeout=CLI_DEFAULT_TIMEOUT).stdout)
        chunks_node_key = []
        for chunk in chunks:
            chunks_node_key.extend(chunk.confirmed_nodes)
        for node_info in netmap.copy():
            if node_info.node_id in chunks_node_key and node_info in netmap:
                netmap.remove(node_info)
        result = []
        for node_info in netmap:
            for cluster_node in cluster.cluster_nodes:
                if node_info.node == cluster_node.host_ip:
                    result.append(cluster_node)
        return result

    @reporter.step("Search node with chunk {chunk}")
    def get_chunk_node(self, cluster: Cluster, chunk: Chunk) -> tuple[ClusterNode, NodeNetmapInfo]:
        netmap = parse_netmap_output(self.cli.netmap.snapshot(cluster.default_rpc_endpoint, timeout=CLI_DEFAULT_TIMEOUT).stdout)
        for node_info in netmap:
            if node_info.node_id in chunk.confirmed_nodes:
                for cluster_node in cluster.cluster_nodes:
                    if cluster_node.host_ip == node_info.node:
                        return (cluster_node, node_info)

    @wait_for_success(300, 5, fail_testcase=None)
    @reporter.step("Search shard with chunk {chunk}")
    def get_shard_chunk(self, node: ClusterNode, chunk: Chunk) -> str:
        oid_path = f"{chunk.object_id[0]}/{chunk.object_id[1]}/{chunk.object_id[2]}/{chunk.object_id[3]}"
        node_shell = node.storage_node.host.get_shell()
        shards_watcher = ShardsWatcher(node)

        with reporter.step("Search object file"):
            for shard_id, shard_info in shards_watcher.shards_snapshots[-1].items():
                check_dir = node_shell.exec(f" [ -d {shard_info['blobstor'][1]['path']}/{oid_path} ] && echo 1 || echo 0").stdout
                if "1" in check_dir.strip():
                    return shard_id

    @reporter.step("Get all chunks")
    def get_all(
        self,
        rpc_endpoint: str,
        cid: str,
        oid: str,
        address: Optional[str] = None,
        bearer: Optional[str] = None,
        generate_key: Optional[bool] = None,
        trace: bool = True,
        root: bool = False,
        verify_presence_all: bool = False,
        json: bool = True,
        ttl: Optional[int] = None,
        xhdr: Optional[dict] = None,
        timeout: Optional[str] = None,
    ) -> list[Chunk]:
        object_nodes = self.cli.object.nodes(
            rpc_endpoint=rpc_endpoint,
            cid=cid,
            address=address,
            bearer=bearer,
            generate_key=generate_key,
            oid=oid,
            trace=trace,
            root=root,
            verify_presence_all=verify_presence_all,
            json=json,
            ttl=ttl,
            xhdr=xhdr,
            timeout=timeout,
        )
        return self._parse_object_nodes(object_nodes.stdout.split("\n")[0])

    @reporter.step("Get last parity chunk")
    def get_parity(
        self,
        rpc_endpoint: str,
        cid: str,
        address: Optional[str] = None,
        bearer: Optional[str] = None,
        generate_key: Optional[bool] = None,
        oid: Optional[str] = None,
        trace: bool = True,
        root: bool = False,
        verify_presence_all: bool = False,
        json: bool = True,
        ttl: Optional[int] = None,
        xhdr: Optional[dict] = None,
        timeout: Optional[str] = None,
    ) -> Chunk:
        object_nodes = self.cli.object.nodes(
            rpc_endpoint=rpc_endpoint,
            cid=cid,
            address=address,
            bearer=bearer,
            generate_key=generate_key,
            oid=oid,
            trace=trace,
            root=root,
            verify_presence_all=verify_presence_all,
            json=json,
            ttl=ttl,
            xhdr=xhdr,
            timeout=timeout,
        )
        return self._parse_object_nodes(object_nodes.stdout.split("\n")[0])[-1]

    @reporter.step("Get first data chunk")
    def get_first_data(
        self,
        rpc_endpoint: str,
        cid: str,
        oid: Optional[str] = None,
        address: Optional[str] = None,
        bearer: Optional[str] = None,
        generate_key: Optional[bool] = None,
        trace: bool = True,
        root: bool = False,
        verify_presence_all: bool = False,
        json: bool = True,
        ttl: Optional[int] = None,
        xhdr: Optional[dict] = None,
        timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
    ) -> Chunk:
        object_nodes = self.cli.object.nodes(
            rpc_endpoint=rpc_endpoint,
            cid=cid,
address=address,
bearer=bearer,
generate_key=generate_key,
oid=oid,
trace=trace,
root=root,
verify_presence_all=verify_presence_all,
json=json,
ttl=ttl,
xhdr=xhdr,
timeout=timeout,
)
return self._parse_object_nodes(object_nodes.stdout.split("\n")[0])[0]
def _parse_object_nodes(self, object_nodes: str) -> list[Chunk]:
parse_result = json.loads(object_nodes)
if parse_result.get("errors"):
raise RuntimeError(parse_result["errors"])
return [Chunk(**chunk) for chunk in parse_result["data_objects"]]
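A short sketch of how these chunk helpers might be combined in an EC test; `client`, `cluster`, `cid` and `oid` are assumed to come from fixtures:

# All chunks of the object, as reported by `frostfs-cli object nodes`
chunks = client.object.chunks.get_all(rpc_endpoint=cluster.default_rpc_endpoint, cid=cid, oid=oid)

# Locate the cluster node and the shard that hold the first data chunk
first_data = client.object.chunks.get_first_data(rpc_endpoint=cluster.default_rpc_endpoint, cid=cid, oid=oid)
node, netmap_info = client.object.chunks.get_chunk_node(cluster, first_data)
shard_id = client.object.chunks.get_shard_chunk(node, first_data)

# Nodes holding no chunks of the object are candidates for recovery checks
free_nodes = client.object.chunks.search_node_without_chunks(chunks, cluster)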


@@ -0,0 +1,330 @@
import json
import logging
import re
from typing import List, Optional, Union
from frostfs_testlib import reporter
from frostfs_testlib.cli.frostfs_cli.cli import FrostfsCli
from frostfs_testlib.plugins import load_plugin
from frostfs_testlib.resources.cli import CLI_DEFAULT_TIMEOUT
from frostfs_testlib.s3.interfaces import BucketContainerResolver
from frostfs_testlib.storage.cluster import Cluster, ClusterNode
from frostfs_testlib.storage.grpc_operations import interfaces
from frostfs_testlib.utils import json_utils
logger = logging.getLogger("NeoLogger")
class ContainerOperations(interfaces.ContainerInterface):
def __init__(self, cli: FrostfsCli) -> None:
self.cli = cli
@reporter.step("Create Container")
def create(
self,
endpoint: str,
nns_zone: Optional[str] = None,
nns_name: Optional[str] = None,
address: Optional[str] = None,
attributes: Optional[dict] = None,
basic_acl: Optional[str] = None,
await_mode: bool = False,
disable_timestamp: bool = False,
force: bool = False,
trace: bool = False,
name: Optional[str] = None,
nonce: Optional[str] = None,
policy: Optional[str] = None,
session: Optional[str] = None,
subnet: Optional[str] = None,
ttl: Optional[int] = None,
xhdr: Optional[dict] = None,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
) -> str:
"""
A wrapper for `frostfs-cli container create` call.
Args:
wallet (WalletInfo): a wallet on whose behalf a container is created
rule (optional, str): placement rule for container
basic_acl (optional, str): an ACL for container, will be
appended to `--basic-acl` key
attributes (optional, dict): container attributes, will be
appended to `--attributes` key
session_token (optional, str): a path to session token file
session_wallet(optional, str): a path to the wallet which signed
the session token; this parameter makes sense
when paired with `session_token`
shell: executor for cli command
endpoint: FrostFS endpoint to send request to, appends to `--rpc-endpoint` key
options (optional, dict): any other options to pass to the call
name (optional, str): container name attribute
await_mode (bool): block execution until container is persisted
wait_for_creation (optional, bool): wait until the container appears in the container list
timeout: Timeout for the operation.
Returns:
(str): CID of the created container
"""
result = self.cli.container.create(
rpc_endpoint=endpoint,
policy=policy,
nns_zone=nns_zone,
nns_name=nns_name,
address=address,
attributes=attributes,
basic_acl=basic_acl,
await_mode=await_mode,
disable_timestamp=disable_timestamp,
force=force,
trace=trace,
name=name,
nonce=nonce,
session=session,
subnet=subnet,
ttl=ttl,
xhdr=xhdr,
timeout=timeout,
)
cid = self._parse_cid(result.stdout)
logger.info("Container created; waiting until it is persisted in the sidechain")
return cid
@reporter.step("List Containers")
def list(
self,
endpoint: str,
name: Optional[str] = None,
address: Optional[str] = None,
generate_key: Optional[bool] = None,
owner: Optional[str] = None,
ttl: Optional[int] = None,
xhdr: Optional[dict] = None,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
**params,
) -> List[str]:
"""
A wrapper for `frostfs-cli container list` call. It returns all the
available containers for the given wallet.
Args:
shell: executor for cli command
endpoint: FrostFS endpoint to send request to, appends to `--rpc-endpoint` key
timeout: Timeout for the operation.
Returns:
(list): list of containers
"""
result = self.cli.container.list(
rpc_endpoint=endpoint,
name=name,
address=address,
generate_key=generate_key,
owner=owner,
ttl=ttl,
xhdr=xhdr,
timeout=timeout,
**params,
)
return result.stdout.split()
@reporter.step("List Objects in container")
def list_objects(
self,
endpoint: str,
cid: str,
bearer: Optional[str] = None,
wallet: Optional[str] = None,
address: Optional[str] = None,
generate_key: Optional[bool] = None,
trace: bool = False,
ttl: Optional[int] = None,
xhdr: Optional[dict] = None,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
) -> List[str]:
"""
A wrapper for `frostfs-cli container list-objects` call. It returns all the
available objects in container.
Args:
container_id: cid of container
endpoint: FrostFS endpoint to send request to, appends to `--rpc-endpoint` key
timeout: Timeout for the operation.
Returns:
(list): list of objects
"""
result = self.cli.container.list_objects(
rpc_endpoint=endpoint,
cid=cid,
bearer=bearer,
wallet=wallet,
address=address,
generate_key=generate_key,
trace=trace,
ttl=ttl,
xhdr=xhdr,
timeout=timeout,
)
logger.info(f"Container objects: \n{result}")
return result.stdout.split()
@reporter.step("Delete container")
def delete(
self,
endpoint: str,
cid: str,
address: Optional[str] = None,
await_mode: bool = False,
session: Optional[str] = None,
ttl: Optional[int] = None,
xhdr: Optional[dict] = None,
force: bool = False,
trace: bool = False,
):
try:
return self.cli.container.delete(
rpc_endpoint=endpoint,
cid=cid,
address=address,
await_mode=await_mode,
session=session,
ttl=ttl,
xhdr=xhdr,
force=force,
trace=trace,
).stdout
except RuntimeError as e:
logger.error(f"Error request:\n{e}")
@reporter.step("Get container")
def get(
self,
endpoint: str,
cid: str,
address: Optional[str] = None,
generate_key: Optional[bool] = None,
await_mode: bool = False,
to: Optional[str] = None,
json_mode: bool = True,
trace: bool = False,
ttl: Optional[int] = None,
xhdr: Optional[dict] = None,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
) -> Union[dict, str]:
result = self.cli.container.get(
rpc_endpoint=endpoint,
cid=cid,
address=address,
generate_key=generate_key,
await_mode=await_mode,
to=to,
json_mode=json_mode,
trace=trace,
ttl=ttl,
xhdr=xhdr,
timeout=timeout,
)
container_info = json.loads(result.stdout)
attributes = dict()
for attr in container_info["attributes"]:
attributes[attr["key"]] = attr["value"]
container_info["attributes"] = attributes
container_info["ownerID"] = json_utils.json_reencode(container_info["ownerID"]["value"])
return container_info
@reporter.step("Get eacl container")
def get_eacl(
self,
endpoint: str,
cid: str,
address: Optional[str] = None,
generate_key: Optional[bool] = None,
await_mode: bool = False,
json_mode: bool = True,
trace: bool = False,
to: Optional[str] = None,
session: Optional[str] = None,
ttl: Optional[int] = None,
xhdr: Optional[dict] = None,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
):
return self.cli.container.get_eacl(
rpc_endpoint=endpoint,
cid=cid,
address=address,
generate_key=generate_key,
await_mode=await_mode,
to=to,
session=session,
ttl=ttl,
xhdr=xhdr,
timeout=timeout,
).stdout
@reporter.step("Get nodes container")
def nodes(
self,
endpoint: str,
cid: str,
cluster: Cluster,
address: Optional[str] = None,
ttl: Optional[int] = None,
from_file: Optional[str] = None,
trace: bool = False,
short: Optional[bool] = True,
xhdr: Optional[dict] = None,
generate_key: Optional[bool] = None,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
) -> List[ClusterNode]:
result = self.cli.container.search_node(
rpc_endpoint=endpoint,
cid=cid,
address=address,
ttl=ttl,
from_file=from_file,
trace=trace,
short=short,
xhdr=xhdr,
generate_key=generate_key,
timeout=timeout,
).stdout
pattern = r"[0-9]+(?:\.[0-9]+){3}"
nodes_ip = list(set(re.findall(pattern, result)))
with reporter.step(f"nodes ips = {nodes_ip}"):
nodes_list = cluster.get_nodes_by_ip(nodes_ip)
with reporter.step(f"Return nodes - {nodes_list}"):
return nodes_list
@reporter.step("Resolve container by name")
def resolve_container_by_name(self, name: str, node: ClusterNode):
resolver_cls = load_plugin("frostfs.testlib.bucket_cid_resolver", node.host.config.product)
resolver: BucketContainerResolver = resolver_cls()
return resolver.resolve(node, name)
def _parse_cid(self, output: str) -> str:
"""
Parses container ID from a given CLI output. The input string we expect:
container ID: 2tz86kVTDpJxWHrhw3h6PbKMwkLtBEwoqhHQCKTre1FN
awaiting...
container has been persisted on sidechain
We want to take 'container ID' value from the string.
Args:
output (str): CLI output to parse
Returns:
(str): extracted CID
"""
try:
# taking first line from command's output
first_line = output.split("\n")[0]
except Exception:
first_line = ""
logger.error(f"Got empty output: {output}")
splitted = first_line.split(": ")
if len(splitted) != 2:
raise ValueError(f"no CID was parsed from command output: \t{first_line}")
return splitted[1]
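A container lifecycle sketch against this implementation; the endpoint fixture and the placement policy string are illustrative only:

endpoint = cluster.default_rpc_endpoint  # assumed fixture
cid = client.container.create(endpoint, policy="REP 2 IN X CBF 1 SELECT 2 FROM * AS X", await_mode=True)
assert cid in client.container.list(endpoint)
client.container.delete(endpoint, cid, await_mode=True)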


@@ -0,0 +1,624 @@
import json
import logging
import os
import re
import uuid
from typing import Any, Optional
from frostfs_testlib import reporter, utils
from frostfs_testlib.cli.frostfs_cli.cli import FrostfsCli
from frostfs_testlib.resources.cli import CLI_DEFAULT_TIMEOUT
from frostfs_testlib.resources.common import ASSETS_DIR
from frostfs_testlib.shell.interfaces import CommandResult
from frostfs_testlib.storage.cluster import Cluster, ClusterNode
from frostfs_testlib.storage.grpc_operations import interfaces
from frostfs_testlib.storage.grpc_operations.implementations.chunks import ChunksOperations
from frostfs_testlib.testing.test_control import wait_for_success
from frostfs_testlib.utils import cli_utils, file_utils
logger = logging.getLogger("NeoLogger")
class ObjectOperations(interfaces.ObjectInterface):
def __init__(self, cli: FrostfsCli) -> None:
self.cli = cli
self.chunks: interfaces.ChunksInterface = ChunksOperations(self.cli)
@reporter.step("Delete object")
def delete(
self,
cid: str,
oid: str,
endpoint: str,
bearer: str = "",
xhdr: Optional[dict] = None,
session: Optional[str] = None,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
) -> str:
"""
DELETE an Object.
Args:
cid: ID of Container where we get the Object from
oid: ID of Object we are going to delete
bearer: path to Bearer Token file, appends to `--bearer` key
endpoint: FrostFS endpoint to send request to, appends to `--rpc-endpoint` key
xhdr: Request X-Headers in form of Key=Value
session: path to a JSON-encoded container session token
timeout: Timeout for the operation.
Returns:
(str): Tombstone ID
"""
result = self.cli.object.delete(
rpc_endpoint=endpoint,
cid=cid,
oid=oid,
bearer=bearer,
xhdr=xhdr,
session=session,
timeout=timeout,
)
id_str = result.stdout.split("\n")[1]
tombstone = id_str.split(":")[1]
return tombstone.strip()
@reporter.step("Get object")
def get(
self,
cid: str,
oid: str,
endpoint: str,
bearer: Optional[str] = None,
write_object: Optional[str] = None,
xhdr: Optional[dict] = None,
no_progress: bool = True,
session: Optional[str] = None,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
) -> file_utils.TestFile:
"""
GET from FrostFS.
Args:
cid (str): ID of Container where we get the Object from
oid (str): Object ID
bearer: path to Bearer Token file, appends to `--bearer` key
write_object: path to downloaded file, appends to `--file` key
endpoint: FrostFS endpoint to send request to, appends to `--rpc-endpoint` key
no_progress(optional, bool): do not show progress bar
xhdr (optional, dict): Request X-Headers in form of Key=Value
session (optional, dict): path to a JSON-encoded container session token
timeout: Timeout for the operation.
Returns:
(str): path to downloaded file
"""
if not write_object:
write_object = str(uuid.uuid4())
test_file = file_utils.TestFile(os.path.join(ASSETS_DIR, write_object))
self.cli.object.get(
rpc_endpoint=endpoint,
cid=cid,
oid=oid,
file=test_file,
bearer=bearer,
no_progress=no_progress,
xhdr=xhdr,
session=session,
timeout=timeout,
)
return test_file
@reporter.step("Get object from random node")
def get_from_random_node(
self,
cid: str,
oid: str,
cluster: Cluster,
bearer: Optional[str] = None,
write_object: Optional[str] = None,
xhdr: Optional[dict] = None,
no_progress: bool = True,
session: Optional[str] = None,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
) -> str:
"""
GET from FrostFS random storage node
Args:
cid: ID of Container where we get the Object from
oid: Object ID
cluster: cluster object
bearer (optional, str): path to Bearer Token file, appends to `--bearer` key
write_object (optional, str): path to downloaded file, appends to `--file` key
no_progress(optional, bool): do not show progress bar
xhdr (optional, dict): Request X-Headers in form of Key=Value
session (optional, dict): path to a JSON-encoded container session token
timeout: Timeout for the operation.
Returns:
(str): path to downloaded file
"""
endpoint = cluster.get_random_storage_rpc_endpoint()
return self.get(
cid,
oid,
endpoint,
bearer,
write_object,
xhdr,
no_progress,
session,
timeout,
)
@reporter.step("Get hash object")
def hash(
self,
rpc_endpoint: str,
cid: str,
oid: str,
address: Optional[str] = None,
bearer: Optional[str] = None,
generate_key: Optional[bool] = None,
range: Optional[str] = None,
salt: Optional[str] = None,
ttl: Optional[int] = None,
session: Optional[str] = None,
hash_type: Optional[str] = None,
xhdr: Optional[dict] = None,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
) -> str:
"""
Get object hash.
Args:
address: Address of wallet account.
bearer: File with signed JSON or binary encoded bearer token.
cid: Container ID.
generate_key: Generate new private key.
oid: Object ID.
range: Range to take hash from in the form offset1:length1,...
rpc_endpoint: Remote node address (as 'multiaddr' or '<host>:<port>').
salt: Salt in hex format.
ttl: TTL value in request meta header (default 2).
session: Filepath to a JSON- or binary-encoded token of the object RANGEHASH session.
hash_type: Hash type. Either 'sha256' or 'tz' (default "sha256").
wallet: WIF (NEP-2) string or path to the wallet or binary key.
xhdr: Dict with request X-Headers.
timeout: Timeout for the operation (default 15s).
Returns:
Command's result.
"""
result = self.cli.object.hash(
rpc_endpoint=rpc_endpoint,
cid=cid,
oid=oid,
address=address,
bearer=bearer,
generate_key=generate_key,
range=range,
salt=salt,
ttl=ttl,
xhdr=xhdr,
session=session,
hash_type=hash_type,
timeout=timeout,
)
return result.stdout
@reporter.step("Head object")
def head(
self,
cid: str,
oid: str,
endpoint: str,
bearer: str = "",
xhdr: Optional[dict] = None,
json_output: bool = True,
is_raw: bool = False,
is_direct: bool = False,
session: Optional[str] = None,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
) -> CommandResult | Any:
"""
HEAD an Object.
Args:
cid (str): ID of Container where we get the Object from
oid (str): ObjectID to HEAD
bearer (optional, str): path to Bearer Token file, appends to `--bearer` key
endpoint(optional, str): FrostFS endpoint to send request to
json_output(optional, bool): return response in JSON format or not; this flag
turns into `--json` key
is_raw(optional, bool): send "raw" request or not; this flag
turns into `--raw` key
is_direct(optional, bool): send request directly to the node or not; this flag
turns into `--ttl 1` key
xhdr (optional, dict): Request X-Headers in form of Key=Value
session (optional, dict): path to a JSON-encoded container session token
timeout: Timeout for the operation.
Returns:
depending on the `json_output` parameter value, the function returns
(dict): HEAD response in JSON format
or
(str): HEAD response as a plain text
"""
result = self.cli.object.head(
rpc_endpoint=endpoint,
cid=cid,
oid=oid,
bearer=bearer,
json_mode=json_output,
raw=is_raw,
ttl=1 if is_direct else None,
xhdr=xhdr,
session=session,
timeout=timeout,
)
if not json_output:
return result
try:
decoded = json.loads(result.stdout)
except Exception as exc:
# If we failed to parse output as JSON, the cause might be
# the plain text string in the beginning of the output.
# Here we cut off first string and try to parse again.
logger.info(f"failed to parse output: {exc}")
logger.info("parsing output in another way")
fst_line_idx = result.stdout.find("\n")
decoded = json.loads(result.stdout[fst_line_idx:])
# If response is an EC object header, it has "chunks" key
if "chunks" in decoded.keys():
logger.info("decoding ec chunks")
return decoded["chunks"]
# If response is Complex Object header, it has `splitId` key
if "splitId" in decoded.keys():
logger.info("decoding split header")
return utils.json_utils.decode_split_header(decoded)
# If response is Last or Linking Object header,
# it has `header` dictionary and non-null `split` dictionary
if "split" in decoded["header"].keys():
if decoded["header"]["split"]:
logger.info("decoding linking object")
return utils.json_utils.decode_linking_object(decoded)
if decoded["header"]["objectType"] == "STORAGE_GROUP":
logger.info("decoding storage group")
return utils.json_utils.decode_storage_group(decoded)
if decoded["header"]["objectType"] == "TOMBSTONE":
logger.info("decoding tombstone")
return utils.json_utils.decode_tombstone(decoded)
logger.info("decoding simple header")
return utils.json_utils.decode_simple_header(decoded)
@reporter.step("Lock Object")
def lock(
self,
cid: str,
oid: str,
endpoint: str,
lifetime: Optional[int] = None,
expire_at: Optional[int] = None,
address: Optional[str] = None,
bearer: Optional[str] = None,
session: Optional[str] = None,
ttl: Optional[int] = None,
xhdr: Optional[dict] = None,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
) -> str:
"""
Locks object in container.
Args:
address: Address of wallet account.
bearer: File with signed JSON or binary encoded bearer token.
cid: Container ID.
oid: Object ID.
lifetime: Lock lifetime.
expire_at: Lock expiration epoch.
shell: executor for cli command
endpoint: FrostFS endpoint to send request to, appends to `--rpc-endpoint` key
session: Path to a JSON-encoded container session token.
ttl: TTL value in request meta header (default 2).
wallet: WIF (NEP-2) string or path to the wallet or binary key.
xhdr: Dict with request X-Headers.
timeout: Timeout for the operation.
Returns:
Lock object ID
"""
result = self.cli.object.lock(
rpc_endpoint=endpoint,
lifetime=lifetime,
expire_at=expire_at,
address=address,
cid=cid,
oid=oid,
bearer=bearer,
xhdr=xhdr,
session=session,
ttl=ttl,
timeout=timeout,
)
# Splitting CLI output to separate lines and taking the first line
id_str = result.stdout.strip().split("\n")[0]
oid = id_str.split(":")[1]
return oid.strip()
@reporter.step("Put object")
def put(
self,
path: str,
cid: str,
endpoint: str,
bearer: Optional[str] = None,
copies_number: Optional[int] = None,
attributes: Optional[dict] = None,
xhdr: Optional[dict] = None,
expire_at: Optional[int] = None,
no_progress: bool = True,
session: Optional[str] = None,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
) -> str:
"""
PUT of given file.
Args:
path: path to file to be PUT
cid: ID of Container where we get the Object from
bearer: path to Bearer Token file, appends to `--bearer` key
copies_number: Number of copies of the object to store within the RPC call
attributes: User attributes in form of Key1=Value1,Key2=Value2
endpoint: FrostFS endpoint to send request to, appends to `--rpc-endpoint` key
no_progress: do not show progress bar
expire_at: Last epoch in the life of the object
xhdr: Request X-Headers in form of Key=Value
session: path to a JSON-encoded container session token
timeout: Timeout for the operation.
Returns:
(str): ID of uploaded Object
"""
result = self.cli.object.put(
rpc_endpoint=endpoint,
file=path,
cid=cid,
attributes=attributes,
bearer=bearer,
copies_number=copies_number,
expire_at=expire_at,
no_progress=no_progress,
xhdr=xhdr,
session=session,
timeout=timeout,
)
# Splitting CLI output to separate lines and taking the penultimate line
id_str = result.stdout.strip().split("\n")[-2]
oid = id_str.split(":")[1]
return oid.strip()
@reporter.step("Put object to random node")
def put_to_random_node(
self,
path: str,
cid: str,
cluster: Cluster,
bearer: Optional[str] = None,
copies_number: Optional[int] = None,
attributes: Optional[dict] = None,
xhdr: Optional[dict] = None,
expire_at: Optional[int] = None,
no_progress: bool = True,
session: Optional[str] = None,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
) -> str:
"""
PUT of given file to a random storage node.
Args:
path: path to file to be PUT
cid: ID of Container where we get the Object from
cluster: cluster under test
bearer: path to Bearer Token file, appends to `--bearer` key
copies_number: Number of copies of the object to store within the RPC call
attributes: User attributes in form of Key1=Value1,Key2=Value2
cluster: cluster under test
no_progress: do not show progress bar
expire_at: Last epoch in the life of the object
xhdr: Request X-Headers in form of Key=Value
session: path to a JSON-encoded container session token
timeout: Timeout for the operation.
Returns:
ID of uploaded Object
"""
endpoint = cluster.get_random_storage_rpc_endpoint()
return self.put(
path,
cid,
endpoint,
bearer,
copies_number,
attributes,
xhdr,
expire_at,
no_progress,
session,
timeout=timeout,
)
@reporter.step("Get Range")
def range(
self,
cid: str,
oid: str,
range_cut: str,
endpoint: str,
bearer: str = "",
xhdr: Optional[dict] = None,
session: Optional[str] = None,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
) -> tuple[file_utils.TestFile, bytes]:
"""
GETRANGE an Object.
Args:
wallet: wallet on whose behalf GETRANGE is done
cid: ID of Container where we get the Object from
oid: ID of Object we are going to request
range_cut: range to take data from in the form offset:length
shell: executor for cli command
endpoint: FrostFS endpoint to send request to, appends to `--rpc-endpoint` key
bearer: path to Bearer Token file, appends to `--bearer` key
xhdr: Request X-Headers in form of Key=Value
session: path to a JSON-encoded container session token
timeout: Timeout for the operation.
Returns:
(str, bytes) - path to the file with range content and content of this file as bytes
"""
test_file = file_utils.TestFile(os.path.join(ASSETS_DIR, str(uuid.uuid4())))
self.cli.object.range(
rpc_endpoint=endpoint,
cid=cid,
oid=oid,
range=range_cut,
file=test_file,
bearer=bearer,
xhdr=xhdr,
session=session,
timeout=timeout,
)
with open(test_file, "rb") as file:
content = file.read()
return test_file, content
@reporter.step("Search object")
def search(
self,
cid: str,
endpoint: str,
bearer: str = "",
oid: Optional[str] = None,
filters: Optional[dict] = None,
expected_objects_list: Optional[list] = None,
xhdr: Optional[dict] = None,
session: Optional[str] = None,
phy: bool = False,
root: bool = False,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
address: Optional[str] = None,
generate_key: Optional[bool] = None,
ttl: Optional[int] = None,
) -> list:
"""
SEARCH an Object.
Args:
wallet: wallet on whose behalf SEARCH is done
cid: ID of Container where we get the Object from
shell: executor for cli command
bearer: path to Bearer Token file, appends to `--bearer` key
endpoint: FrostFS endpoint to send request to, appends to `--rpc-endpoint` key
filters: key=value pairs to filter Objects
expected_objects_list: a list of ObjectIDs to compare found Objects with
xhdr: Request X-Headers in form of Key=Value
session: path to a JSON-encoded container session token
phy: Search physically stored objects.
root: Search for user objects.
timeout: Timeout for the operation.
Returns:
list of found ObjectIDs
"""
result = self.cli.object.search(
rpc_endpoint=endpoint,
cid=cid,
bearer=bearer,
oid=oid,
xhdr=xhdr,
filters=[f"{filter_key} EQ {filter_val}" for filter_key, filter_val in filters.items()] if filters else None,
session=session,
phy=phy,
root=root,
address=address,
generate_key=generate_key,
ttl=ttl,
timeout=timeout,
)
found_objects = re.findall(r"(\w{43,44})", result.stdout)
if expected_objects_list:
if sorted(found_objects) == sorted(expected_objects_list):
logger.info(f"Found objects list '{found_objects}' " f"is equal for expected list '{expected_objects_list}'")
else:
logger.warning(f"Found object list {found_objects} " f"is not equal to expected list '{expected_objects_list}'")
return found_objects
@wait_for_success()
@reporter.step("Search object nodes")
def nodes(
self,
cluster: Cluster,
cid: str,
oid: str,
alive_node: ClusterNode,
bearer: str = "",
xhdr: Optional[dict] = None,
is_direct: bool = False,
verify_presence_all: bool = False,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
) -> list[ClusterNode]:
endpoint = alive_node.storage_node.get_rpc_endpoint()
response = self.cli.object.nodes(
rpc_endpoint=endpoint,
cid=cid,
oid=oid,
bearer=bearer,
ttl=1 if is_direct else None,
json=True,
xhdr=xhdr,
timeout=timeout,
verify_presence_all=verify_presence_all,
)
response_json = json.loads(response.stdout)
# Currently, the command will show expected and confirmed nodes.
# And we (currently) count only nodes which are both expected and confirmed
object_nodes_id = {
required_node
for data_object in response_json["data_objects"]
for required_node in data_object["required_nodes"]
if required_node in data_object["confirmed_nodes"]
}
netmap_nodes_list = cli_utils.parse_netmap_output(
self.cli.netmap.snapshot(
rpc_endpoint=endpoint,
).stdout
)
netmap_nodes = [
netmap_node for object_node in object_nodes_id for netmap_node in netmap_nodes_list if object_node == netmap_node.node_id
]
object_nodes = [
cluster_node
for netmap_node in netmap_nodes
for cluster_node in cluster.cluster_nodes
if netmap_node.node == cluster_node.host_ip
]
return object_nodes
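A put/get round trip built on these operations might look like this (file size and fixtures are assumptions):

test_file = file_utils.generate_file(1024)  # 1 KiB of random data
oid = client.object.put(test_file, cid, endpoint)
downloaded = client.object.get(cid, oid, endpoint)
assert file_utils.get_file_hash(test_file) == file_utils.get_file_hash(downloaded)

tombstone_id = client.object.delete(cid, oid, endpoint)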


@@ -0,0 +1,392 @@
from abc import ABC, abstractmethod
from typing import Any, List, Optional
from frostfs_testlib.shell.interfaces import CommandResult
from frostfs_testlib.storage.cluster import Cluster, ClusterNode
from frostfs_testlib.storage.constants import PlacementRule
from frostfs_testlib.storage.dataclasses.storage_object_info import Chunk, NodeNetmapInfo
from frostfs_testlib.utils import file_utils
class ChunksInterface(ABC):
@abstractmethod
def search_node_without_chunks(self, chunks: list[Chunk], cluster: Cluster, endpoint: str = None) -> list[ClusterNode]:
pass
@abstractmethod
def get_chunk_node(self, cluster: Cluster, chunk: Chunk) -> tuple[ClusterNode, NodeNetmapInfo]:
pass
@abstractmethod
def get_shard_chunk(self, node: ClusterNode, chunk: Chunk) -> str:
pass
@abstractmethod
def get_all(
self,
rpc_endpoint: str,
cid: str,
oid: str,
wallet: Optional[str] = None,
address: Optional[str] = None,
bearer: Optional[str] = None,
generate_key: Optional[bool] = None,
trace: bool = False,
root: bool = False,
verify_presence_all: bool = False,
json: bool = True,
ttl: Optional[int] = None,
xhdr: Optional[dict] = None,
timeout: Optional[str] = None,
) -> list[Chunk]:
pass
@abstractmethod
def get_parity(
self,
rpc_endpoint: str,
cid: str,
wallet: Optional[str] = None,
address: Optional[str] = None,
bearer: Optional[str] = None,
generate_key: Optional[bool] = None,
oid: Optional[str] = None,
trace: bool = False,
root: bool = False,
verify_presence_all: bool = False,
json: bool = True,
ttl: Optional[int] = None,
xhdr: Optional[dict] = None,
timeout: Optional[str] = None,
) -> Chunk:
pass
@abstractmethod
def get_first_data(
self,
rpc_endpoint: str,
cid: str,
wallet: Optional[str] = None,
address: Optional[str] = None,
bearer: Optional[str] = None,
generate_key: Optional[bool] = None,
oid: Optional[str] = None,
trace: bool = False,
root: bool = False,
verify_presence_all: bool = False,
json: bool = True,
ttl: Optional[int] = None,
xhdr: Optional[dict] = None,
timeout: Optional[str] = None,
) -> Chunk:
pass
class ObjectInterface(ABC):
def __init__(self) -> None:
self.chunks: ChunksInterface
@abstractmethod
def delete(
self,
cid: str,
oid: str,
endpoint: str,
bearer: str = "",
xhdr: Optional[dict] = None,
session: Optional[str] = None,
timeout: Optional[str] = None,
) -> str:
pass
@abstractmethod
def get(
self,
cid: str,
oid: str,
endpoint: str,
bearer: Optional[str] = None,
write_object: Optional[str] = None,
xhdr: Optional[dict] = None,
no_progress: bool = True,
session: Optional[str] = None,
timeout: Optional[str] = None,
) -> file_utils.TestFile:
pass
@abstractmethod
def get_from_random_node(
self,
cid: str,
oid: str,
cluster: Cluster,
bearer: Optional[str] = None,
write_object: Optional[str] = None,
xhdr: Optional[dict] = None,
no_progress: bool = True,
session: Optional[str] = None,
timeout: Optional[str] = None,
) -> str:
pass
@abstractmethod
def hash(
self,
endpoint: str,
cid: str,
oid: str,
address: Optional[str] = None,
bearer: Optional[str] = None,
generate_key: Optional[bool] = None,
range: Optional[str] = None,
salt: Optional[str] = None,
ttl: Optional[int] = None,
session: Optional[str] = None,
hash_type: Optional[str] = None,
xhdr: Optional[dict] = None,
timeout: Optional[str] = None,
) -> str:
pass
@abstractmethod
def head(
self,
cid: str,
oid: str,
endpoint: str,
bearer: str = "",
xhdr: Optional[dict] = None,
json_output: bool = True,
is_raw: bool = False,
is_direct: bool = False,
session: Optional[str] = None,
timeout: Optional[str] = None,
) -> CommandResult | Any:
pass
@abstractmethod
def lock(
self,
cid: str,
oid: str,
endpoint: str,
lifetime: Optional[int] = None,
expire_at: Optional[int] = None,
address: Optional[str] = None,
bearer: Optional[str] = None,
session: Optional[str] = None,
ttl: Optional[int] = None,
xhdr: Optional[dict] = None,
timeout: Optional[str] = None,
) -> str:
pass
@abstractmethod
def put(
self,
path: str,
cid: str,
endpoint: str,
bearer: Optional[str] = None,
copies_number: Optional[int] = None,
attributes: Optional[dict] = None,
xhdr: Optional[dict] = None,
expire_at: Optional[int] = None,
no_progress: bool = True,
session: Optional[str] = None,
timeout: Optional[str] = None,
) -> str:
pass
@abstractmethod
def put_to_random_node(
self,
path: str,
cid: str,
cluster: Cluster,
bearer: Optional[str] = None,
copies_number: Optional[int] = None,
attributes: Optional[dict] = None,
xhdr: Optional[dict] = None,
expire_at: Optional[int] = None,
no_progress: bool = True,
session: Optional[str] = None,
timeout: Optional[str] = None,
) -> str:
pass
@abstractmethod
def range(
self,
cid: str,
oid: str,
range_cut: str,
endpoint: str,
bearer: str = "",
xhdr: Optional[dict] = None,
session: Optional[str] = None,
timeout: Optional[str] = None,
) -> tuple[file_utils.TestFile, bytes]:
pass
@abstractmethod
def search(
self,
cid: str,
endpoint: str,
bearer: str = "",
oid: Optional[str] = None,
filters: Optional[dict] = None,
expected_objects_list: Optional[list] = None,
xhdr: Optional[dict] = None,
session: Optional[str] = None,
phy: bool = False,
root: bool = False,
timeout: Optional[str] = None,
address: Optional[str] = None,
generate_key: Optional[bool] = None,
ttl: Optional[int] = None,
) -> List:
pass
@abstractmethod
def nodes(
self,
cluster: Cluster,
cid: str,
oid: str,
alive_node: ClusterNode,
bearer: str = "",
xhdr: Optional[dict] = None,
is_direct: bool = False,
verify_presence_all: bool = False,
timeout: Optional[str] = None,
) -> List[ClusterNode]:
pass
class ContainerInterface(ABC):
@abstractmethod
def create(
self,
endpoint: str,
nns_zone: Optional[str] = None,
nns_name: Optional[str] = None,
address: Optional[str] = None,
attributes: Optional[dict] = None,
basic_acl: Optional[str] = None,
await_mode: bool = False,
disable_timestamp: bool = False,
force: bool = False,
trace: bool = False,
name: Optional[str] = None,
nonce: Optional[str] = None,
policy: Optional[str] = None,
session: Optional[str] = None,
subnet: Optional[str] = None,
ttl: Optional[int] = None,
xhdr: Optional[dict] = None,
timeout: Optional[str] = None,
) -> str:
"""
Create a new container and register it in the FrostFS.
It will be stored in the sidechain when the Inner Ring accepts it.
"""
raise NotImplementedError("No implemethed method create")
@abstractmethod
def delete(
self,
endpoint: str,
cid: str,
address: Optional[str] = None,
await_mode: bool = False,
session: Optional[str] = None,
ttl: Optional[int] = None,
xhdr: Optional[dict] = None,
force: bool = False,
trace: bool = False,
) -> List[str]:
"""
Delete an existing container.
Only the owner of the container has permission to remove the container.
"""
raise NotImplementedError("No implemethed method delete")
@abstractmethod
def get(
self,
endpoint: str,
cid: str,
address: Optional[str] = None,
generate_key: Optional[bool] = None,
await_mode: bool = False,
to: Optional[str] = None,
json_mode: bool = True,
trace: bool = False,
ttl: Optional[int] = None,
xhdr: Optional[dict] = None,
timeout: Optional[str] = None,
) -> List[str]:
"""Get container field info."""
raise NotImplementedError("No implemethed method get")
@abstractmethod
def get_eacl(
self,
endpoint: str,
cid: str,
address: Optional[str] = None,
generate_key: Optional[bool] = None,
await_mode: bool = False,
json_mode: bool = True,
trace: bool = False,
to: Optional[str] = None,
session: Optional[str] = None,
ttl: Optional[int] = None,
xhdr: Optional[dict] = None,
timeout: Optional[str] = None,
) -> List[str]:
"""Get extended ACL table of container."""
raise NotImplementedError("No implemethed method get-eacl")
@abstractmethod
def list(
self,
endpoint: str,
name: Optional[str] = None,
address: Optional[str] = None,
generate_key: Optional[bool] = None,
trace: bool = False,
owner: Optional[str] = None,
ttl: Optional[int] = None,
xhdr: Optional[dict] = None,
timeout: Optional[str] = None,
**params,
) -> List[str]:
"""List all created containers."""
raise NotImplementedError("No implemethed method list")
@abstractmethod
def nodes(
self,
endpoint: str,
cid: str,
cluster: Cluster,
address: Optional[str] = None,
ttl: Optional[int] = None,
from_file: Optional[str] = None,
trace: bool = False,
short: Optional[bool] = True,
xhdr: Optional[dict] = None,
generate_key: Optional[bool] = None,
timeout: Optional[str] = None,
) -> List[ClusterNode]:
"""Show the nodes participating in the container in the current epoch."""
raise NotImplementedError("No implemethed method nodes")
class GrpcClientWrapper(ABC):
def __init__(self) -> None:
self.object: ObjectInterface
self.container: ContainerInterface
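Since tests depend only on these abstract interfaces, the backing transport can be swapped without touching test code; a hypothetical factory might look like:

def make_grpc_client(cli: FrostfsCli) -> GrpcClientWrapper:
    # Today the only implementation is CLI-backed; a native RpcClientWrapper
    # could be returned here later without changing any callers.
    return CliClientWrapper(cli)  # hypothetical import from the implementations package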


@@ -25,14 +25,10 @@ class ClusterTestBase:
         for _ in range(epochs_to_tick):
             self.tick_epoch(alive_node, wait_block)

-    def tick_epoch(
-        self,
-        alive_node: Optional[StorageNode] = None,
-        wait_block: int = None,
-    ):
-        epoch.tick_epoch(self.shell, self.cluster, alive_node=alive_node)
+    def tick_epoch(self, alive_node: Optional[StorageNode] = None, wait_block: int = None, delta: Optional[int] = None):
+        epoch.tick_epoch(self.shell, self.cluster, alive_node=alive_node, delta=delta)
         if wait_block:
-            time.sleep(datetime_utils.parse_time(MORPH_BLOCK_TIME) * wait_block)
+            self.wait_for_blocks(wait_block)

     def wait_for_epochs_align(self):
         epoch.wait_for_epochs_align(self.shell, self.cluster)
@@ -42,3 +38,6 @@ class ClusterTestBase:
     def ensure_fresh_epoch(self):
         return epoch.ensure_fresh_epoch(self.shell, self.cluster)
+
+    def wait_for_blocks(self, blocks_count: int = 1):
+        time.sleep(datetime_utils.parse_time(MORPH_BLOCK_TIME) * blocks_count)
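With the new signature, callers can forward an epoch delta and reuse the block wait in one call; a sketch, where the exact semantics of `delta` are defined by `epoch.tick_epoch`:

class ExampleTest(ClusterTestBase):
    def test_something(self):
        # Tick one epoch with an explicit delta, then wait two morph blocks
        self.tick_epoch(delta=1, wait_block=2)
        self.wait_for_blocks(1)  # sleeps MORPH_BLOCK_TIME * 1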


@@ -1,7 +1,22 @@
 import itertools
+import traceback
 from concurrent.futures import Future, ThreadPoolExecutor
+from contextlib import contextmanager
 from typing import Callable, Collection, Optional, Union

+MAX_WORKERS = 50
+
+
+@contextmanager
+def parallel_workers_limit(workers_count: int):
+    global MAX_WORKERS
+    original_value = MAX_WORKERS
+    MAX_WORKERS = workers_count
+    try:
+        yield
+    finally:
+        MAX_WORKERS = original_value
+

 def parallel(
     fn: Union[Callable, list[Callable]],
@@ -41,7 +56,42 @@ def parallel(
     # Check for exceptions
     exceptions = [future.exception() for future in futures if future.exception()]
     if exceptions:
-        message = "\n".join([str(e) for e in exceptions])
+        # Prettify exceptions from the parallel run together with all underlying stack traces.
+        # For example, if we had 3 RuntimeError exceptions during the run, this format gives us something like:
+        #
+        # RuntimeError: The following exceptions occurred during parallel run:
+        # 1) Exception one text
+        # 2) Exception two text
+        # 3) Exception three text
+        # TRACES:
+        # ==== 1 ====
+        # Traceback (most recent call last):
+        #   File "/usr/lib/python3.10/concurrent/futures/thread.py", line 58, in run
+        #     result = self.fn(*self.args, **self.kwargs)
+        #   File "frostfs_testcases/pytest_tests/testsuites/object/test_object_tombstone.py", line 17, in check_service
+        #     raise RuntimeError(f"Exception one text")
+        # RuntimeError: Exception one text
+        #
+        # ==== 2 ====
+        # Traceback (most recent call last):
+        #   File "/usr/lib/python3.10/concurrent/futures/thread.py", line 58, in run
+        #     result = self.fn(*self.args, **self.kwargs)
+        #   File "frostfs_testcases/pytest_tests/testsuites/object/test_object_tombstone.py", line 17, in check_service
+        #     raise RuntimeError(f"Exception two text")
+        # RuntimeError: Exception two text
+        #
+        # ==== 3 ====
+        # Traceback (most recent call last):
+        #   File "/usr/lib/python3.10/concurrent/futures/thread.py", line 58, in run
+        #     result = self.fn(*self.args, **self.kwargs)
+        #   File "frostfs_testcases/pytest_tests/testsuites/object/test_object_tombstone.py", line 17, in check_service
+        #     raise RuntimeError(f"Exception three text")
+        # RuntimeError: Exception three text
+        short_summary = "\n".join([f"{i}) {str(e)}" for i, e in enumerate(exceptions, 1)])
+        stack_traces = "\n".join(
+            [f"==== {i} ====\n{''.join(traceback.TracebackException.from_exception(e).format())}" for i, e in enumerate(exceptions, 1)]
+        )
+        message = f"{short_summary}\nTRACES:\n{stack_traces}"
         raise RuntimeError(f"The following exceptions occurred during parallel run:\n{message}")
     return futures
@@ -54,7 +104,7 @@ def _run_by_fn_list(fn_list: list[Callable], *args, **kwargs) -> list[Future]:
     futures: list[Future] = []

-    with ThreadPoolExecutor(max_workers=len(fn_list)) as executor:
+    with ThreadPoolExecutor(max_workers=min(len(fn_list), MAX_WORKERS)) as executor:
         for fn in fn_list:
             task_args = _get_args(*args)
             task_kwargs = _get_kwargs(**kwargs)
@@ -67,7 +117,7 @@ def _run_by_fn_list(fn_list: list[Callable], *args, **kwargs) -> list[Future]:

 def _run_by_items(fn: Callable, parallel_items: Collection, *args, **kwargs) -> list[Future]:
     futures: list[Future] = []

-    with ThreadPoolExecutor(max_workers=len(parallel_items)) as executor:
+    with ThreadPoolExecutor(max_workers=min(len(parallel_items), MAX_WORKERS)) as executor:
         for item in parallel_items:
             task_args = _get_args(*args)
             task_kwargs = _get_kwargs(**kwargs)
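Usage sketch for the new workers cap; the import path is assumed from the file layout, and the callable and node list are hypothetical:

from frostfs_testlib.testing.parallel import parallel, parallel_workers_limit  # assumed path

# Temporarily cap the thread pool at 10 workers; MAX_WORKERS is restored on exit.
with parallel_workers_limit(10):
    futures = parallel(restart_storage_service, cluster_nodes)
results = [future.result() for future in futures]

Note that the cap is a module-level global read at the moment `parallel` is invoked, so it applies only to calls made inside the `with` block and is not isolated between concurrently running sessions.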


@@ -1,13 +1,16 @@
 import inspect
 import logging
+import os
 from functools import wraps
 from time import sleep, time
 from typing import Any

+import yaml
 from _pytest.outcomes import Failed
 from pytest import fail

 from frostfs_testlib import reporter
+from frostfs_testlib.resources.common import ASSETS_DIR
 from frostfs_testlib.utils.func_utils import format_by_args

 logger = logging.getLogger("NeoLogger")
@@ -128,6 +131,42 @@ def run_optionally(enabled: bool, mock_value: Any = True):
     return deco

+def cached_fixture(enabled: bool):
+    """
+    Decorator to cache fixtures.
+    MUST be placed after @pytest.fixture and before @allure decorators.
+
+    Args:
+        enabled: if true, decorated func will be cached.
+    """
+
+    def deco(func):
+        @wraps(func)
+        def func_impl(*a, **kw):
+            # TODO: *a and **kw should be hashed and included in the filename to prevent cache hits across different parameters
+            cache_file = os.path.join(ASSETS_DIR, f"fixture_cache_{func.__name__}.yml")
+
+            if enabled and os.path.exists(cache_file):
+                with open(cache_file, "r") as cache_input:
+                    return yaml.load(cache_input, Loader=yaml.Loader)
+
+            result = func(*a, **kw)
+
+            if enabled:
+                with open(cache_file, "w") as cache_output:
+                    yaml.dump(result, cache_output)
+
+            return result
+
+        # TODO: cache yielding fixtures
+        @wraps(func)
+        def gen_impl(*a, **kw):
+            raise NotImplementedError("Not implemented for yielding fixtures")
+
+        return gen_impl if inspect.isgeneratorfunction(func) else func_impl
+
+    return deco
+

 def wait_for_success(
     max_wait_time: int = 60,
     interval: int = 1,
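A usage sketch for `cached_fixture`; decorator order matters, per the docstring, and `create_user` is a hypothetical helper:

import pytest

@pytest.fixture(scope="session")
@cached_fixture(enabled=True)  # in practice this could be driven by an env var
def default_user():
    # Runs once, then the result is served from ASSETS_DIR/fixture_cache_default_user.yml
    return create_user()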


@@ -9,13 +9,12 @@ import csv
 import json
 import logging
 import re
-import subprocess
 import sys
 from contextlib import suppress
 from datetime import datetime
 from io import StringIO
 from textwrap import shorten
-from typing import Dict, List, TypedDict, Union
+from typing import Any, Optional, Union

 import pexpect
@@ -41,7 +40,7 @@ def _run_with_passwd(cmd: str) -> str:
     return cmd.decode()


-def _configure_aws_cli(cmd: str, key_id: str, access_key: str, out_format: str = "json") -> str:
+def _configure_aws_cli(cmd: str, key_id: str, access_key: str, region: str, out_format: str = "json") -> str:
     child = pexpect.spawn(cmd)
     child.delaybeforesend = 1
@@ -52,7 +51,7 @@ def _configure_aws_cli(cmd: str, key_id: str, access_key: str, out_format: str =
     child.sendline(access_key)

     child.expect("Default region name.*")
-    child.sendline("")
+    child.sendline(region)

     child.expect("Default output format.*")
     child.sendline(out_format)
@@ -75,14 +74,75 @@ def _attach_allure_log(cmd: str, output: str, return_code: int, start_time: date
     reporter.attach(command_attachment, "Command execution")


-def log_command_execution(cmd: str, output: Union[str, TypedDict]) -> None:
+def log_command_execution(cmd: str, output: Union[str, dict], params: Optional[dict] = None, **kwargs) -> None:
     logger.info(f"{cmd}: {output}")
-    with suppress(Exception):
-        json_output = json.dumps(output, indent=4, sort_keys=True)
-        output = json_output
-    command_attachment = f"COMMAND: '{cmd}'\n" f"OUTPUT:\n {output}\n"
-    with reporter.step(f'COMMAND: {shorten(cmd, width=60, placeholder="...")}'):
-        reporter.attach(command_attachment, "Command execution")
+
+    if not params:
+        params = {}
+
+    output_params = params
+
+    try:
+        json_params = json.dumps(params, indent=4, sort_keys=True, default=str)
+    except TypeError as err:
+        logger.warning(f"Failed to serialize '{cmd}' request parameters:\n{params}\nException: {err}")
+    else:
+        output_params = json_params
+
+    output = json.dumps(output, indent=4, sort_keys=True, default=str)
+
+    command_execution = f"COMMAND: '{cmd}'\n" f"URL: {kwargs['endpoint']}\n" f"PARAMS:\n{output_params}\n" f"OUTPUT:\n{output}\n"
+    aws_command = _convert_request_to_aws_cli_command(cmd, params, **kwargs)
+
+    reporter.attach(command_execution, "Command execution")
+    reporter.attach(aws_command, "AWS CLI Command")
+
+
+def _convert_request_to_aws_cli_command(command: str, params: dict, **kwargs) -> str:
+    overriden_names = [_convert_json_name_to_aws_cli(name) for name in kwargs.keys()]
+    command = command.replace("_", "-")
+    options = []
+
+    for name, value in params.items():
+        name = _convert_json_name_to_aws_cli(name)
+
+        # To override parameters for AWS CLI
+        if name in overriden_names:
+            continue
+
+        if option := _create_option(name, value):
+            options.append(option)
+
+    for name, value in kwargs.items():
+        name = _convert_json_name_to_aws_cli(name)
+        if option := _create_option(name, value):
+            options.append(option)
+
+    options = " ".join(options)
+    api = "s3api" if "s3" in kwargs["endpoint"] else "iam"
+    return f"aws --no-verify-ssl --no-paginate {api} {command} {options}"
+
+
+def _convert_json_name_to_aws_cli(name: str) -> str:
+    specific_names = {"CORSConfiguration": "cors-configuration"}
+
+    if aws_cli_name := specific_names.get(name):
+        return aws_cli_name
+    return re.sub(r"([a-z])([A-Z])", r"\1 \2", name).lower().replace(" ", "-").replace("_", "-")
+
+
+def _create_option(name: str, value: Any) -> str | None:
+    if isinstance(value, bool) and value:
+        return f"--{name}"
+
+    if isinstance(value, dict):
+        value = json.dumps(value, indent=4, sort_keys=True, default=str)
+        return f"--{name} '{value}'"
+
+    if value:
+        return f"--{name} {value}"

+    return None


 def parse_netmap_output(output: str) -> list[NodeNetmapInfo]:
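To illustrate the conversion, a boto3-style call such as `list_objects_v2(Bucket=..., MaxKeys=...)` would be reported roughly as follows (the values are made up):

params = {"Bucket": "test-bucket", "MaxKeys": 2}
cmd = _convert_request_to_aws_cli_command("list_objects_v2", params, endpoint="http://s3.frostfs.devenv:8080")
# -> aws --no-verify-ssl --no-paginate s3api list-objects-v2 --bucket test-bucket --max-keys 2 --endpoint http://s3.frostfs.devenv:8080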


@@ -6,11 +6,46 @@ from typing import Any, Optional

 from frostfs_testlib import reporter
 from frostfs_testlib.resources.common import ASSETS_DIR
+from frostfs_testlib.utils import string_utils

 logger = logging.getLogger("NeoLogger")


-def generate_file(size: int) -> str:
+class TestFile(os.PathLike):
+    def __init__(self, path: str):
+        self.path = path
+
+    def __del__(self):
+        logger.debug(f"Removing file {self.path}")
+        if os.path.exists(self.path):
+            os.remove(self.path)
+
+    def __str__(self):
+        return self.path
+
+    def __repr__(self):
+        return self.path
+
+    def __fspath__(self):
+        return self.path
+
+
+def ensure_directory(path):
+    directory = os.path.dirname(path)
+
+    if not os.path.exists(directory):
+        os.makedirs(directory)
+
+
+def ensure_directory_opener(path, flags):
+    ensure_directory(path)
+    return os.open(path, flags)
+
+
+# TODO: Do not add {size} to title yet, since it produces dynamic info in top level steps
+# Use object_size dt in future as argument
+@reporter.step("Generate file")
+def generate_file(size: int, file_name: Optional[str] = None) -> TestFile:
     """Generates a binary file with the specified size in bytes.

     Args:
@@ -19,19 +54,26 @@ def generate_file(size: int) -> str:
     Returns:
         The path to the generated file.
     """
-    file_path = os.path.join(ASSETS_DIR, str(uuid.uuid4()))
-    with open(file_path, "wb") as file:
+    if file_name is None:
+        file_name = string_utils.unique_name("object-")
+
+    test_file = TestFile(os.path.join(ASSETS_DIR, file_name))
+    with open(test_file, "wb", opener=ensure_directory_opener) as file:
         file.write(os.urandom(size))
-    logger.info(f"File with size {size} bytes has been generated: {file_path}")
+    logger.info(f"File with size {size} bytes has been generated: {test_file}")

-    return file_path
+    return test_file


+# TODO: Do not add {size} to title yet, since it produces dynamic info in top level steps
+# Use object_size dt in future as argument
+@reporter.step("Generate file with content")
 def generate_file_with_content(
     size: int,
-    file_path: Optional[str] = None,
+    file_path: Optional[str | TestFile] = None,
     content: Optional[str] = None,
-) -> str:
+) -> TestFile:
     """Creates a new file with specified content.

     Args:
@@ -48,20 +90,22 @@ def generate_file_with_content(
         content = os.urandom(size)
         mode = "wb"

+    test_file = None
     if not file_path:
-        file_path = os.path.join(os.getcwd(), ASSETS_DIR, str(uuid.uuid4()))
+        test_file = TestFile(os.path.join(os.getcwd(), ASSETS_DIR, str(uuid.uuid4())))
+    elif isinstance(file_path, TestFile):
+        test_file = file_path
     else:
-        if not os.path.exists(os.path.dirname(file_path)):
-            os.makedirs(os.path.dirname(file_path))
+        test_file = TestFile(file_path)

-    with open(file_path, mode) as file:
+    with open(test_file, mode, opener=ensure_directory_opener) as file:
         file.write(content)

-    return file_path
+    return test_file


 @reporter.step("Get File Hash")
-def get_file_hash(file_path: str, len: Optional[int] = None, offset: Optional[int] = None) -> str:
+def get_file_hash(file_path: str | TestFile, len: Optional[int] = None, offset: Optional[int] = None) -> str:
     """Generates hash for the specified file.

     Args:
@@ -88,7 +132,7 @@ def get_file_hash(file_path: str, len: Optional[int] = None, offset: Optional[in

 @reporter.step("Concatenation set of files to one file")
-def concat_files(file_paths: list, resulting_file_path: Optional[str] = None) -> str:
+def concat_files(file_paths: list[str | TestFile], resulting_file_path: Optional[str | TestFile] = None) -> TestFile:
     """Concatenates several files into a single file.

     Args:
@@ -98,16 +142,24 @@ def concat_files(file_paths: list, resulting_file_path: Optional[str] = None) ->
     Returns:
         Path to the resulting file.
     """
+    test_file = None
     if not resulting_file_path:
-        resulting_file_path = os.path.join(os.getcwd(), ASSETS_DIR, str(uuid.uuid4()))
+        test_file = TestFile(os.path.join(os.getcwd(), ASSETS_DIR, str(uuid.uuid4())))
+    elif isinstance(resulting_file_path, TestFile):
+        test_file = resulting_file_path
+    else:
+        test_file = TestFile(resulting_file_path)

-    with open(resulting_file_path, "wb") as f:
+    with open(test_file, "wb", opener=ensure_directory_opener) as f:
         for file in file_paths:
             with open(file, "rb") as part_file:
                 f.write(part_file.read())

-    return resulting_file_path
+    return test_file


-def split_file(file_path: str, parts: int) -> list[str]:
+@reporter.step("Split file to {parts} parts")
+def split_file(file_path: str | TestFile, parts: int) -> list[TestFile]:
     """Splits specified file into several specified number of parts.

     Each part is saved under name `{original_file}_part_{i}`.
@@ -129,7 +181,7 @@ def split_file(file_path: str, parts: int) -> list[str]:
     part_file_paths = []
     for content_offset in range(0, content_size + 1, chunk_size):
         part_file_name = f"{file_path}_part_{part_id}"
-        part_file_paths.append(part_file_name)
+        part_file_paths.append(TestFile(part_file_name))
         with open(part_file_name, "wb") as out_file:
             out_file.write(content[content_offset : content_offset + chunk_size])
         part_id += 1
@@ -137,9 +189,8 @@ def split_file(file_path: str, parts: int) -> list[str]:
     return part_file_paths


-def get_file_content(
-    file_path: str, content_len: Optional[int] = None, mode: str = "r", offset: Optional[int] = None
-) -> Any:
+@reporter.step("Get file content")
+def get_file_content(file_path: str | TestFile, content_len: Optional[int] = None, mode: str = "r", offset: Optional[int] = None) -> Any:
     """Returns content of specified file.

     Args:
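A sketch of the resulting ergonomics: `TestFile` is `os.PathLike`, so it can be passed anywhere a path is accepted, and the backing file is removed when the object is garbage-collected. The part count below is illustrative.

test_file = generate_file(1024)   # created under ASSETS_DIR with an "object-" prefixed unique name
parts = split_file(test_file, 4)  # list[TestFile], one per part
merged = concat_files(parts)      # parent directories are created on demand by ensure_directory_opener
assert get_file_hash(test_file) == get_file_hash(merged)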

Some files were not shown because too many files have changed in this diff.