forked from TrueCloudLab/frostfs-testlib
Compare commits: support/v0...master (102 commits)
Commits (SHA1 only; author and date were not captured):

429698944e 376499a7e8 f4460194bc 3a4204f2e4 c9e4c2c7bb da16f3c3a5
f1b2fbd47b cb31d41f15 7a482152a8 bfd7f70b6c 10821f4c49 5d192524a0
a3b78559a9 ec42b156ac ea1b348120 e7423938e9 a563f089f6 37a1177a3c
b8ce75b299 3fee7aa197 3e64b52306 0306c09bed a32bd120f2 5b715877b3
c0e37c8138 80c65b454e 541a3e0636 70f0357960 a85070e957 82a8f9bab3
65ec50391e 863e74f161 6629b9bbaa e2a170d66e 338584069d 9cfaf1a618
076e444f84 653621fb7e 2dc5aa8a1e 11487e983d 9c508c4f66 f2bded64e4
0e247c2ff2 b323bcfd0a 25925c637b 09a7f66d1e 22b41b227f f5a7ff5c90
3fc3eaadf3 273f0d13a5 55cebc042c 751381cd60 4f3814690e d79fd87ede
8ba2cb8030 6caa77dedf 0d7a15877c 82f9df088a e04fac0770 328e43fe67
c0a25ab699 40fa2c24cc be36a10f1e df8d99d83c d6a2cf92a2 a3bda0b34f
a4d1082ed5 73c362c307 10a6efa333 663c144709 8e739adea5 3d63772f4a
02f3ef6b40 89522b607c be964e731f f1264bd473 54d26b226c 247d2fbab7
ae566b413b 81dfc723da e65fc359fe 17c1a4f14b dc6b0e407f 39a17f3634
47414eb866 c17f0f6173 d1ba7eb661 f072f88673 253bb3b1d8 9ab4def44f
ed8f90dfc0 ed70dada96 22647c6d59 61a1b28652 6519cfafc9 72bd467c53
f8562da7e0 c8227e80af 1f50166e78 03c45d7592 e970fe2788 8ee2985c89
106 changed files with 5022 additions and 2645 deletions
.forgejo/workflows/dco.yml (new file, +21)
@@ -0,0 +1,21 @@
+name: DCO action
+on: [pull_request]
+
+jobs:
+  dco:
+    name: DCO
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v3
+        with:
+          fetch-depth: 0
+
+      - name: Setup Go
+        uses: actions/setup-go@v3
+        with:
+          go-version: '1.21'
+
+      - name: Run commit format checker
+        uses: https://git.frostfs.info/TrueCloudLab/dco-go@v3
+        with:
+          from: 'origin/${{ github.event.pull_request.base.ref }}'
.github/CODEOWNERS (vendored, deleted, -1)
@@ -1 +0,0 @@
-* @aprasolova @vdomnich-yadro @dansingjulia @yadro-vavdeev @abereziny
.github/workflows/dco.yml (vendored, deleted, -21)
@@ -1,21 +0,0 @@
-name: DCO check
-
-on:
-  pull_request:
-    branches:
-      - master
-
-jobs:
-  commits_check_job:
-    runs-on: ubuntu-latest
-    name: Commits Check
-    steps:
-      - name: Get PR Commits
-        id: 'get-pr-commits'
-        uses: tim-actions/get-pr-commits@master
-        with:
-          token: ${{ secrets.GITHUB_TOKEN }}
-      - name: DCO Check
-        uses: tim-actions/dco@master
-        with:
-          commits: ${{ steps.get-pr-commits.outputs.commits }}
CONTRIBUTING.md
@@ -3,8 +3,8 @@
 First, thank you for contributing! We love and encourage pull requests from
 everyone. Please follow the guidelines:
 
-- Check the open [issues](https://github.com/TrueCloudLab/frostfs-testlib/issues) and
-  [pull requests](https://github.com/TrueCloudLab/frostfs-testlib/pulls) for existing
+- Check the open [issues](https://git.frostfs.info/TrueCloudLab/frostfs-testlib/issues) and
+  [pull requests](https://git.frostfs.info/TrueCloudLab/frostfs-testlib/pulls) for existing
   discussions.
 
 - Open an issue first, to discuss a new feature or enhancement.
@@ -26,8 +26,8 @@ Start by forking the `frostfs-testlib` repository, make changes in a branch and
 send a pull request. We encourage pull requests to discuss code changes. Here
 are the steps in details:
 
-### Set up your GitHub Repository
-Fork [FrostFS testlib upstream](https://github.com/TrueCloudLab/frostfs-testlib/fork) source
+### Set up your Git Repository
+Fork [FrostFS testlib upstream](https://git.frostfs.info/TrueCloudLab/frostfs-testlib/forks) source
 repository to your own personal repository. Copy the URL of your fork and clone it:
 
 ```shell
@@ -37,7 +37,7 @@ $ git clone <url of your fork>
 ### Set up git remote as ``upstream``
 ```shell
 $ cd frostfs-testlib
-$ git remote add upstream https://github.com/TrueCloudLab/frostfs-testlib
+$ git remote add upstream https://git.frostfs.info/TrueCloudLab/frostfs-testlib
 $ git fetch upstream
 ```
 
@@ -99,8 +99,8 @@ $ git push origin feature/123-something_awesome
 ```
 
 ### Create a Pull Request
-Pull requests can be created via GitHub. Refer to [this
-document](https://help.github.com/articles/creating-a-pull-request/) for
+Pull requests can be created via Git. Refer to [this
+document](https://docs.codeberg.org/collaborating/pull-requests-and-git-flow/) for
 detailed steps on how to create a pull request. After a Pull Request gets peer
 reviewed and approved, it will be merged.
README.md
@@ -92,4 +92,4 @@ The library provides the following primary components:
 
 ## Contributing
-Any contributions to the library should conform to the [contribution guideline](https://github.com/TrueCloudLab/frostfs-testlib/blob/master/CONTRIBUTING.md).
+Any contributions to the library should conform to the [contribution guideline](https://git.frostfs.info/TrueCloudLab/frostfs-testlib/src/branch/master/CONTRIBUTING.md).
pyproject.toml
@@ -36,7 +36,7 @@ requires-python = ">=3.10"
 dev = ["black", "bumpver", "isort", "pre-commit"]
 
 [project.urls]
-Homepage = "https://github.com/TrueCloudLab/frostfs-testlib"
+Homepage = "https://git.frostfs.info/TrueCloudLab/frostfs-testlib"
 
 [project.entry-points."frostfs.testlib.reporter"]
 allure = "frostfs_testlib.reporter.allure_handler:AllureHandler"
@@ -47,13 +47,30 @@ docker = "frostfs_testlib.hosting.docker_host:DockerHost"
 [project.entry-points."frostfs.testlib.healthcheck"]
 basic = "frostfs_testlib.healthcheck.basic_healthcheck:BasicHealthcheck"
 
+[project.entry-points."frostfs.testlib.csc_managers"]
+config = "frostfs_testlib.storage.controllers.state_managers.config_state_manager:ConfigStateManager"
+
+[project.entry-points."frostfs.testlib.services"]
+frostfs-storage = "frostfs_testlib.storage.dataclasses.frostfs_services:StorageNode"
+frostfs-s3 = "frostfs_testlib.storage.dataclasses.frostfs_services:S3Gate"
+frostfs-http = "frostfs_testlib.storage.dataclasses.frostfs_services:HTTPGate"
+neo-go = "frostfs_testlib.storage.dataclasses.frostfs_services:MorphChain"
+frostfs-ir = "frostfs_testlib.storage.dataclasses.frostfs_services:InnerRing"
+
+[project.entry-points."frostfs.testlib.credentials_providers"]
+authmate = "frostfs_testlib.credentials.authmate_s3_provider:AuthmateS3CredentialsProvider"
+wallet_factory = "frostfs_testlib.credentials.wallet_factory_provider:WalletFactoryProvider"
+
+[project.entry-points."frostfs.testlib.bucket_cid_resolver"]
+frostfs = "frostfs_testlib.s3.curl_bucket_resolver:CurlBucketContainerResolver"
+
 [tool.isort]
 profile = "black"
 src_paths = ["src", "tests"]
-line_length = 100
+line_length = 140
 
 [tool.black]
-line-length = 100
+line-length = 140
 target-version = ["py310"]
 
 [tool.bumpver]
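These entry-point groups are how the testlib discovers its plugins at runtime. A minimal sketch (not part of the diff) of enumerating them with the standard library, assuming Python 3.10+ and an installed frostfs-testlib:

```python
# Enumerate the service classes registered under "frostfs.testlib.services" above.
from importlib.metadata import entry_points

for ep in entry_points(group="frostfs.testlib.services"):
    # ep.load() would import and return the class, e.g. StorageNode for "frostfs-storage".
    print(f"{ep.name} -> {ep.value}")
```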
src/frostfs_testlib/cli/__init__.py
@@ -1,4 +1,5 @@
 from frostfs_testlib.cli.frostfs_adm import FrostfsAdm
 from frostfs_testlib.cli.frostfs_authmate import FrostfsAuthmate
 from frostfs_testlib.cli.frostfs_cli import FrostfsCli
+from frostfs_testlib.cli.generic_cli import GenericCli
 from frostfs_testlib.cli.neogo import NeoGo, NetworkType
src/frostfs_testlib/cli/frostfs_adm/morph.py
@@ -27,11 +27,7 @@ class FrostfsAdmMorph(CliCommand):
         """
         return self._execute(
             "morph deposit-notary",
-            **{
-                param: param_value
-                for param, param_value in locals().items()
-                if param not in ["self"]
-            },
+            **{param: param_value for param, param_value in locals().items() if param not in ["self"]},
         )
 
     def dump_balances(
@@ -74,11 +66,25 @@
         """
         return self._execute(
             "morph dump-config",
-            **{
-                param: param_value
-                for param, param_value in locals().items()
-                if param not in ["self"]
-            },
+            **{param: param_value for param, param_value in locals().items() if param not in ["self"]},
+        )
+
+    def set_config(
+        self, set_key_value: str, rpc_endpoint: Optional[str] = None, alphabet_wallets: Optional[str] = None
+    ) -> CommandResult:
+        """Add/update global config value in the FrostFS network.
+
+        Args:
+            set_key_value: key1=val1 [key2=val2 ...]
+            alphabet_wallets: Path to alphabet wallets dir
+            rpc_endpoint: N3 RPC node endpoint
+
+        Returns:
+            Command's result.
+        """
+        return self._execute(
+            f"morph set-config {set_key_value}",
+            **{param: param_value for param, param_value in locals().items() if param not in ["self", "set_key_value"]},
         )
 
     def dump_containers(

The same one-line collapse of the forwarding comprehension is applied to dump_balances, dump_containers, dump_hashes, force_new_epoch, generate_alphabet, generate_storage_wallet, init, refill_gas, restore_containers, and update_contracts (hunks @@ -56,11 +52,7 @@ through @@ -348,17 +322,13 @@). The remaining hunks change wording:

@@ -219,7 +205,7 @@ (init docstring)
             contracts: Path to archive with compiled FrostFS contracts
-                (default fetched from latest github release).
+                (default fetched from latest git release).
@@ -340,7 +314,7 @@ (update_contracts docstring, same change)
             contracts: Path to archive with compiled FrostFS contracts
-                (default fetched from latest github release).
+                (default fetched from latest git release).
@@ -348,17 +322,13 @@ (remove_nodes docstring)
-        """ Move node to the Offline state in the candidates list
+        """Move node to the Offline state in the candidates list
         and tick an epoch to update the netmap using frostfs-adm
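Every wrapper in this file uses the same forwarding idiom: snapshot the call's arguments with `locals()`, drop `self` (plus anything already interpolated into the command string, such as `set_key_value`), and pass the rest to `_execute` as keyword options. A self-contained illustration of the pattern (function and argument names here are hypothetical, not from the diff):

```python
# The locals() snapshot must be taken before any new local variable is bound,
# otherwise the extra name would leak into the forwarded options.
def refill_gas(self=None, rpc_endpoint="http://morph:30333", gas="50.0", alphabet_wallets=None):
    return {param: value for param, value in locals().items() if param not in ["self"]}

print(refill_gas())
# -> {'rpc_endpoint': 'http://morph:30333', 'gas': '50.0', 'alphabet_wallets': None}
```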
src/frostfs_testlib/cli/frostfs_cli/acl.py
@@ -22,7 +22,7 @@ class FrostfsCliACL(CliCommand):
         Well-known system object headers start with '$Object:' prefix.
         User defined headers start without prefix.
         Read more about filter keys at:
-        http://github.com/TrueCloudLab/frostfs-api/blob/master/proto-docs/acl.md#message-eaclrecordfilter
+        https://git.frostfs.info/TrueCloudLab/frostfs-api/src/branch/master/proto-docs/acl.md#message-eaclrecord-filter
         Match is '=' for matching and '!=' for non-matching filter.
         Value is a valid unicode string corresponding to object or request header value.
src/frostfs_testlib/cli/frostfs_cli/cli.py
@@ -3,6 +3,7 @@ from typing import Optional
 from frostfs_testlib.cli.frostfs_cli.accounting import FrostfsCliAccounting
 from frostfs_testlib.cli.frostfs_cli.acl import FrostfsCliACL
 from frostfs_testlib.cli.frostfs_cli.container import FrostfsCliContainer
+from frostfs_testlib.cli.frostfs_cli.control import FrostfsCliControl
 from frostfs_testlib.cli.frostfs_cli.netmap import FrostfsCliNetmap
 from frostfs_testlib.cli.frostfs_cli.object import FrostfsCliObject
 from frostfs_testlib.cli.frostfs_cli.session import FrostfsCliSession
@@ -25,6 +26,7 @@ class FrostfsCli:
     storagegroup: FrostfsCliStorageGroup
     util: FrostfsCliUtil
     version: FrostfsCliVersion
+    control: FrostfsCliControl
 
     def __init__(self, shell: Shell, frostfs_cli_exec_path: str, config_file: Optional[str] = None):
         self.accounting = FrostfsCliAccounting(shell, frostfs_cli_exec_path, config=config_file)
@@ -38,3 +40,4 @@ class FrostfsCli:
         self.util = FrostfsCliUtil(shell, frostfs_cli_exec_path, config=config_file)
         self.version = FrostfsCliVersion(shell, frostfs_cli_exec_path, config=config_file)
         self.tree = FrostfsCliTree(shell, frostfs_cli_exec_path, config=config_file)
+        self.control = FrostfsCliControl(shell, frostfs_cli_exec_path, config=config_file)
src/frostfs_testlib/cli/frostfs_cli/container.py
@@ -8,7 +8,9 @@ class FrostfsCliContainer(CliCommand):
     def create(
         self,
         rpc_endpoint: str,
-        wallet: str,
+        wallet: Optional[str] = None,
+        nns_zone: Optional[str] = None,
+        nns_name: Optional[str] = None,
         address: Optional[str] = None,
         attributes: Optional[dict] = None,
         basic_acl: Optional[str] = None,
@@ -45,6 +47,8 @@ (create docstring)
             wallet: WIF (NEP-2) string or path to the wallet or binary key.
             xhdr: Dict with request X-Headers.
             timeout: Timeout for the operation (default 15s).
+            nns_zone: Container nns zone attribute.
+            nns_name: Container nns name attribute.
 
         Returns:
             Command's result.
@@ -57,15 +61,14 @@
     def delete(
         self,
         rpc_endpoint: str,
-        wallet: str,
         cid: str,
+        wallet: Optional[str] = None,
         address: Optional[str] = None,
         await_mode: bool = False,
         session: Optional[str] = None,
         ttl: Optional[int] = None,
         xhdr: Optional[dict] = None,
         force: bool = False,
-        timeout: Optional[str] = None,
     ) -> CommandResult:
         """
         Delete an existing container.
@@ -81,7 +84,6 @@ (delete docstring)
             ttl: TTL value in request meta header (default 2).
             wallet: WIF (NEP-2) string or path to the wallet or binary key.
             xhdr: Dict with request X-Headers.
-            timeout: Timeout for the operation (default 15s).
 
         Returns:
             Command's result.

The same signature change — `wallet` becomes `Optional[str] = None` and moves after the required positional arguments — is applied to get, get_eacl, list, list_objects, set_eacl, and search_node (hunks @@ -95,8 +97,8 @@; @@ -131,8 +133,8 @@; @@ -168,7 +170,7 @@; @@ -199,8 +201,8 @@; @@ -229,8 +231,8 @@; @@ -266,8 +268,8 @@). The final hunk collapses a comprehension:

@@ -298,9 +300,5 @@
         return self._execute(
             f"container nodes {from_str}",
-            **{
-                param: value
-                for param, value in locals().items()
-                if param not in ["self", "from_file", "from_str"]
-            },
+            **{param: value for param, value in locals().items() if param not in ["self", "from_file", "from_str"]},
         )
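Since `wallet` is now optional across these methods, callers can lean on the wallet configured in the CLI config file. A hedged usage sketch — the shell, endpoint, and config path are invented for illustration, not taken from the hunks above:

```python
from frostfs_testlib.cli import FrostfsCli
from frostfs_testlib.shell import LocalShell

# Assumes a frostfs-cli binary on PATH and a config file naming a default wallet.
cli = FrostfsCli(LocalShell(), "frostfs-cli", config_file="/etc/frostfs/cli.yaml")
result = cli.container.create(
    rpc_endpoint="s01.frostfs.devenv:8080",
    nns_name="my-container",  # new parameter introduced by this diff
    nns_zone="container",     # new parameter introduced by this diff
)
print(result.stdout)  # stdout carries the new container ID
```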
src/frostfs_testlib/cli/frostfs_cli/control.py (new file, +232)
@@ -0,0 +1,232 @@
+from typing import Optional
+
+from frostfs_testlib.cli.cli_command import CliCommand
+from frostfs_testlib.shell import CommandResult
+
+
+class FrostfsCliControl(CliCommand):
+    def set_status(
+        self,
+        endpoint: str,
+        status: str,
+        wallet: Optional[str] = None,
+        force: Optional[bool] = None,
+        address: Optional[str] = None,
+        timeout: Optional[str] = None,
+    ) -> CommandResult:
+        """Set status of the storage node in FrostFS network map
+
+        Args:
+            wallet: Path to the wallet or binary key
+            address: Address of wallet account
+            endpoint: Remote node control address (as 'multiaddr' or '<host>:<port>')
+            force: Force turning to local maintenance
+            status: New netmap status keyword ('online', 'offline', 'maintenance')
+            timeout: Timeout for an operation (default 15s)
+
+        Returns:
+            Command's result.
+        """
+        return self._execute(
+            "control set-status",
+            **{param: value for param, value in locals().items() if param not in ["self"]},
+        )
+
+    def healthcheck(
+        self,
+        endpoint: str,
+        wallet: Optional[str] = None,
+        address: Optional[str] = None,
+        timeout: Optional[str] = None,
+    ) -> CommandResult:
+        """Health check for FrostFS storage nodes
+
+        Args:
+            wallet: Path to the wallet or binary key
+            address: Address of wallet account
+            endpoint: Remote node control address (as 'multiaddr' or '<host>:<port>')
+            timeout: Timeout for an operation (default 15s)
+
+        Returns:
+            Command's result.
+        """
+        return self._execute(
+            "control healthcheck",
+            **{param: value for param, value in locals().items() if param not in ["self"]},
+        )
+
+    def drop_objects(
+        self,
+        endpoint: str,
+        objects: str,
+        wallet: Optional[str] = None,
+        address: Optional[str] = None,
+        timeout: Optional[str] = None,
+    ) -> CommandResult:
+        """Drop objects from the node's local storage
+
+        Args:
+            wallet: Path to the wallet or binary key
+            address: Address of wallet account
+            endpoint: Remote node control address (as 'multiaddr' or '<host>:<port>')
+            objects: List of object addresses to be removed in string format
+            timeout: Timeout for an operation (default 15s)
+
+        Returns:
+            Command's result.
+        """
+        return self._execute(
+            "control drop-objects",
+            **{param: value for param, value in locals().items() if param not in ["self"]},
+        )
+
+    def add_rule(
+        self,
+        endpoint: str,
+        chain_id: str,
+        target_name: str,
+        target_type: str,
+        rule: Optional[list[str]] = None,
+        path: Optional[str] = None,
+        chain_id_hex: Optional[bool] = None,
+        wallet: Optional[str] = None,
+        address: Optional[str] = None,
+        timeout: Optional[str] = None,
+    ) -> CommandResult:
+        """Add APE rule chain for a target
+
+        Args:
+            address: Address of wallet account
+            chain-id: Assign ID to the parsed chain
+            chain-id-hex: Flag to parse chain ID as hex
+            endpoint: Remote node control address (as 'multiaddr' or '<host>:<port>')
+            path: Path to encoded chain in JSON or binary format
+            rule: Rule statement
+            target-name: Resource name in APE resource name format
+            target-type: Resource type (container/namespace)
+            timeout: Timeout for an operation (default 15s)
+            wallet: Path to the wallet or binary key
+
+        Returns:
+            Command's result.
+        """
+        return self._execute(
+            "control add-rule",
+            **{param: value for param, value in locals().items() if param not in ["self"]},
+        )
+
+    def get_rule(
+        self,
+        endpoint: str,
+        chain_id: str,
+        target_name: str,
+        target_type: str,
+        chain_id_hex: Optional[bool] = None,
+        wallet: Optional[str] = None,
+        address: Optional[str] = None,
+        timeout: Optional[str] = None,
+    ) -> CommandResult:
+        """Get APE rule chain for a target
+
+        Args:
+            address: Address of wallet account
+            chain-id: Chain id
+            chain-id-hex: Flag to parse chain ID as hex
+            endpoint: Remote node control address (as 'multiaddr' or '<host>:<port>')
+            target-name: Resource name in APE resource name format
+            target-type: Resource type (container/namespace)
+            timeout: Timeout for an operation (default 15s)
+            wallet: Path to the wallet or binary key
+
+        Returns:
+            Command's result.
+        """
+        return self._execute(
+            "control get-rule",
+            **{param: value for param, value in locals().items() if param not in ["self"]},
+        )
+
+    def list_rules(
+        self,
+        endpoint: str,
+        target_name: str,
+        target_type: str,
+        wallet: Optional[str] = None,
+        address: Optional[str] = None,
+        timeout: Optional[str] = None,
+    ) -> CommandResult:
+        """List APE rule chains for a target
+
+        Args:
+            address: Address of wallet account
+            endpoint: Remote node control address (as 'multiaddr' or '<host>:<port>')
+            target-name: Resource name in APE resource name format
+            target-type: Resource type (container/namespace)
+            timeout: Timeout for an operation (default 15s)
+            wallet: Path to the wallet or binary key
+
+        Returns:
+            Command's result.
+        """
+        return self._execute(
+            "control list-rules",
+            **{param: value for param, value in locals().items() if param not in ["self"]},
+        )
+
+    def list_targets(
+        self,
+        endpoint: str,
+        chain_name: str,
+        wallet: Optional[str] = None,
+        address: Optional[str] = None,
+        timeout: Optional[str] = None,
+    ) -> CommandResult:
+        """List APE targets of the node
+
+        Args:
+            address: Address of wallet account
+            chain-name: Chain name (ingress|s3)
+            endpoint: Remote node control address (as 'multiaddr' or '<host>:<port>')
+            timeout: Timeout for an operation (default 15s)
+            wallet: Path to the wallet or binary key
+
+        Returns:
+            Command's result.
+        """
+        return self._execute(
+            "control list-targets",
+            **{param: value for param, value in locals().items() if param not in ["self"]},
+        )
+
+    def remove_rule(
+        self,
+        endpoint: str,
+        chain_id: str,
+        target_name: str,
+        target_type: str,
+        all: Optional[bool] = None,
+        chain_id_hex: Optional[bool] = None,
+        wallet: Optional[str] = None,
+        address: Optional[str] = None,
+        timeout: Optional[str] = None,
+    ) -> CommandResult:
+        """Remove APE rule chain for a target
+
+        Args:
+            address: Address of wallet account
+            all: Remove all chains
+            chain-id: Assign ID to the parsed chain
+            chain-id-hex: Flag to parse chain ID as hex
+            endpoint: Remote node control address (as 'multiaddr' or '<host>:<port>')
+            target-name: Resource name in APE resource name format
+            target-type: Resource type (container/namespace)
+            timeout: Timeout for an operation (default 15s)
+            wallet: Path to the wallet or binary key
+
+        Returns:
+            Command's result.
+        """
+        return self._execute(
+            "control remove-rule",
+            **{param: value for param, value in locals().items() if param not in ["self"]},
+        )
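A hedged sketch of driving the new control surface; the control endpoint and config path are invented for illustration:

```python
from frostfs_testlib.cli import FrostfsCli
from frostfs_testlib.shell import LocalShell

cli = FrostfsCli(LocalShell(), "frostfs-cli", config_file="/etc/frostfs/cli.yaml")
# Put a node into maintenance via its control endpoint, then verify it still answers.
cli.control.set_status(endpoint="s01.frostfs.devenv:8081", status="maintenance", force=True)
cli.control.healthcheck(endpoint="s01.frostfs.devenv:8081")
```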
src/frostfs_testlib/cli/frostfs_cli/netmap.py
@@ -8,7 +8,7 @@ class FrostfsCliNetmap(CliCommand):
     def epoch(
         self,
         rpc_endpoint: str,
-        wallet: str,
+        wallet: Optional[str] = None,
         address: Optional[str] = None,
         generate_key: bool = False,
         ttl: Optional[int] = None,

The same change (`wallet: str` becomes `wallet: Optional[str] = None`) is applied to netinfo (@@ -38,7 +38,7 @@), nodeinfo (@@ -68,7 +68,7 @@), and snapshot (@@ -100,7 +100,7 @@).
src/frostfs_testlib/cli/frostfs_cli/object.py
@@ -8,9 +8,9 @@ class FrostfsCliObject(CliCommand):
     def delete(
         self,
         rpc_endpoint: str,
-        wallet: str,
         cid: str,
         oid: str,
+        wallet: Optional[str] = None,
         address: Optional[str] = None,
         bearer: Optional[str] = None,
         session: Optional[str] = None,

The same move — `wallet` becomes `Optional[str] = None` and shifts after the required arguments — is applied to get, hash, head, lock, put, range, and search. In hash, the forwarding comprehension is also collapsed:

@@ -124,17 +124,15 @@
         return self._execute(
             "object hash",
-            **{
-                param: value for param, value in locals().items() if param not in ["self", "params"]
-            },
+            **{param: value for param, value in locals().items() if param not in ["self", "params"]},
         )

The nodes helper additionally fixes a bare `Optional` annotation and gains a `json` flag:

@@ -355,15 +353,16 @@
     def nodes(
         self,
         rpc_endpoint: str,
-        wallet: str,
         cid: str,
+        wallet: Optional[str] = None,
         address: Optional[str] = None,
         bearer: Optional[str] = None,
-        generate_key: Optional = None,
+        generate_key: Optional[bool] = None,
         oid: Optional[str] = None,
         trace: bool = False,
         root: bool = False,
         verify_presence_all: bool = False,
+        json: bool = False,
         ttl: Optional[int] = None,
         xhdr: Optional[dict] = None,
         timeout: Optional[str] = None,
src/frostfs_testlib/cli/frostfs_cli/session.py
@@ -9,7 +9,6 @@ class FrostfsCliSession(CliCommand):
         self,
         rpc_endpoint: str,
         wallet: str,
-        wallet_password: str,
         out: str,
         lifetime: Optional[int] = None,
         address: Optional[str] = None,
@@ -30,12 +29,7 @@
         Returns:
             Command's result.
         """
-        return self._execute_with_password(
+        return self._execute(
             "session create",
-            wallet_password,
-            **{
-                param: value
-                for param, value in locals().items()
-                if param not in ["self", "wallet_password"]
-            },
+            **{param: value for param, value in locals().items() if param not in ["self"]},
         )
src/frostfs_testlib/cli/frostfs_cli/shards.py
@@ -39,10 +39,10 @@ class FrostfsCliShards(CliCommand):
     def set_mode(
         self,
         endpoint: str,
-        wallet: str,
-        wallet_password: str,
         mode: str,
         id: Optional[list[str]],
+        wallet: Optional[str] = None,
+        wallet_password: Optional[str] = None,
         address: Optional[str] = None,
         all: bool = False,
         clear_errors: bool = False,
@@ -65,14 +65,15 @@
         Returns:
             Command's result.
         """
+        if not wallet_password:
+            return self._execute(
+                "control shards set-mode",
+                **{param: value for param, value in locals().items() if param not in ["self"]},
+            )
         return self._execute_with_password(
             "control shards set-mode",
             wallet_password,
-            **{
-                param: value
-                for param, value in locals().items()
-                if param not in ["self", "wallet_password"]
-            },
+            **{param: value for param, value in locals().items() if param not in ["self", "wallet_password"]},
         )
 
     def dump(
@@ -105,18 +106,14 @@
         return self._execute_with_password(
             "control shards dump",
             wallet_password,
-            **{
-                param: value
-                for param, value in locals().items()
-                if param not in ["self", "wallet_password"]
-            },
+            **{param: value for param, value in locals().items() if param not in ["self", "wallet_password"]},
         )
 
     def list(
         self,
         endpoint: str,
-        wallet: str,
-        wallet_password: str,
+        wallet: Optional[str] = None,
+        wallet_password: Optional[str] = None,
         address: Optional[str] = None,
         json_mode: bool = False,
         timeout: Optional[str] = None,
@@ -135,12 +132,14 @@
         Returns:
             Command's result.
         """
+        if not wallet_password:
+            return self._execute(
+                "control shards list",
+                **{param: value for param, value in locals().items() if param not in ["self"]},
+            )
         return self._execute_with_password(
             "control shards list",
             wallet_password,
-            **{
-                param: value
-                for param, value in locals().items()
-                if param not in ["self", "wallet_password"]
-            },
+            **{param: value for param, value in locals().items() if param not in ["self", "wallet_password"]},
         )
src/frostfs_testlib/cli/frostfs_cli/tree.py
@@ -27,3 +27,27 @@ class FrostfsCliTree(CliCommand):
             "tree healthcheck",
             **{param: value for param, value in locals().items() if param not in ["self"]},
         )
+
+    def list(
+        self,
+        cid: str,
+        rpc_endpoint: Optional[str] = None,
+        wallet: Optional[str] = None,
+        timeout: Optional[str] = None,
+    ) -> CommandResult:
+        """Get Tree List
+
+        Args:
+            cid: Container ID.
+            rpc_endpoint: Remote node address (as 'multiaddr' or '<host>:<port>').
+            wallet: WIF (NEP-2) string or path to the wallet or binary key.
+            timeout: Timeout for the operation (default 15s).
+
+        Returns:
+            Command's result.
+        """
+        return self._execute(
+            "tree list",
+            **{param: value for param, value in locals().items() if param not in ["self"]},
+        )
src/frostfs_testlib/cli/frostfs_cli/util.py
@@ -6,12 +6,12 @@ from frostfs_testlib.shell import CommandResult
 
 class FrostfsCliUtil(CliCommand):
     def sign_bearer_token(
         self,
-        wallet: str,
         from_file: str,
         to_file: str,
+        wallet: Optional[str] = None,
         address: Optional[str] = None,
         json: Optional[bool] = False,
     ) -> CommandResult:
         """
         Sign bearer token to use it in requests.
@@ -33,9 +33,9 @@
     def sign_session_token(
         self,
-        wallet: str,
         from_file: str,
         to_file: str,
+        wallet: Optional[str] = None,
         address: Optional[str] = None,
     ) -> CommandResult:
         """
src/frostfs_testlib/cli/generic_cli.py (new file, +30)
@@ -0,0 +1,30 @@
+from typing import Optional
+
+from frostfs_testlib.hosting.interfaces import Host
+from frostfs_testlib.shell.interfaces import CommandOptions, Shell
+
+
+class GenericCli(object):
+    def __init__(self, cli_name: str, host: Host) -> None:
+        self.host = host
+        self.cli_name = cli_name
+
+    def __call__(
+        self,
+        args: Optional[str] = "",
+        pipes: Optional[str] = "",
+        shell: Optional[Shell] = None,
+        options: Optional[CommandOptions] = None,
+    ):
+        if not shell:
+            shell = self.host.get_shell()
+
+        cli_config = self.host.get_cli_config(self.cli_name, True)
+        extra_args = ""
+        exec_path = self.cli_name
+        if cli_config:
+            extra_args = " ".join(cli_config.extra_args)
+            exec_path = cli_config.exec_path
+
+        cmd = f"{exec_path} {args} {extra_args} {pipes}"
+        return shell.exec(cmd, options)
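`GenericCli` makes any binary known to the hosting config callable through a host's shell. A usage sketch, assuming a `Host` obtained from the hosting layer (the command is illustrative):

```python
from frostfs_testlib.cli.generic_cli import GenericCli
from frostfs_testlib.hosting.interfaces import Host

def tick_epoch(host: Host) -> str:
    # Wrap the frostfs-adm binary from the host's CLI config and run one command.
    frostfs_adm = GenericCli("frostfs-adm", host)
    result = frostfs_adm("morph force-new-epoch")
    return result.stdout
```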
91
src/frostfs_testlib/cli/netmap_parser.py
Normal file
91
src/frostfs_testlib/cli/netmap_parser.py
Normal file
|
@ -0,0 +1,91 @@
|
||||||
|
import re
|
||||||
|
|
||||||
|
from frostfs_testlib.storage.cluster import ClusterNode
|
||||||
|
from frostfs_testlib.storage.dataclasses.storage_object_info import NodeNetInfo, NodeNetmapInfo, NodeStatus
|
||||||
|
|
||||||
|
|
||||||
|
class NetmapParser:
|
||||||
|
@staticmethod
|
||||||
|
def netinfo(output: str) -> NodeNetInfo:
|
||||||
|
regexes = {
|
||||||
|
"epoch": r"Epoch: (?P<epoch>\d+)",
|
||||||
|
"network_magic": r"Network magic: (?P<network_magic>.*$)",
|
||||||
|
"time_per_block": r"Time per block: (?P<time_per_block>\d+\w+)",
|
            "container_fee": r"Container fee: (?P<container_fee>\d+)",
            "epoch_duration": r"Epoch duration: (?P<epoch_duration>\d+)",
            "inner_ring_candidate_fee": r"Inner Ring candidate fee: (?P<inner_ring_candidate_fee>\d+)",
            "maximum_object_size": r"Maximum object size: (?P<maximum_object_size>\d+)",
            "maximum_count_of_data_shards": r"Maximum count of data shards: (?P<maximum_count_of_data_shards>\d+)",
            "maximum_count_of_parity_shards": r"Maximum count of parity shards: (?P<maximum_count_of_parity_shards>\d+)",
            "withdrawal_fee": r"Withdrawal fee: (?P<withdrawal_fee>\d+)",
            "homomorphic_hashing_disabled": r"Homomorphic hashing disabled: (?P<homomorphic_hashing_disabled>true|false)",
            "maintenance_mode_allowed": r"Maintenance mode allowed: (?P<maintenance_mode_allowed>true|false)",
            "eigen_trust_alpha": r"EigenTrustAlpha: (?P<eigen_trust_alpha>\d+\w+$)",
            "eigen_trust_iterations": r"EigenTrustIterations: (?P<eigen_trust_iterations>\d+)",
        }
        parse_result = {}

        for key, regex in regexes.items():
            search_result = re.search(regex, output, flags=re.MULTILINE)
            if search_result is None:
                parse_result[key] = None
                continue
            parse_result[key] = search_result[key].strip()

        node_netinfo = NodeNetInfo(**parse_result)

        return node_netinfo

    @staticmethod
    def snapshot_all_nodes(output: str) -> list[NodeNetmapInfo]:
        """Parse each node from the netmap snapshot output and return it as a dataclass."""
        netmap_nodes = output.split("Node ")[1:]
        dataclasses_netmap = []
        result_netmap = {}

        regexes = {
            "node_id": r"\d+: (?P<node_id>\w+)",
            "node_data_ips": r"(?P<node_data_ips>/ip4/.+?)$",
            "node_status": r"(?P<node_status>ONLINE|MAINTENANCE|OFFLINE)",
            "cluster_name": r"ClusterName: (?P<cluster_name>\w+)",
            "continent": r"Continent: (?P<continent>\w+)",
            "country": r"Country: (?P<country>\w+)",
            "country_code": r"CountryCode: (?P<country_code>\w+)",
            "external_address": r"ExternalAddr: (?P<external_address>/ip[4].+?)$",
            "location": r"Location: (?P<location>\w+.*)",
            "node": r"Node: (?P<node>\d+\.\d+\.\d+\.\d+)",
            "price": r"Price: (?P<price>\d+)",
            "sub_div": r"SubDiv: (?P<sub_div>.*)",
            "sub_div_code": r"SubDivCode: (?P<sub_div_code>\w+)",
            "un_locode": r"UN-LOCODE: (?P<un_locode>\w+.*)",
            "role": r"role: (?P<role>\w+)",
        }

        for node in netmap_nodes:
            for key, regex in regexes.items():
                search_result = re.search(regex, node, flags=re.MULTILINE)
                if search_result is None:
                    result_netmap[key] = None
                    continue
                if key == "node_data_ips":
                    result_netmap[key] = search_result[key].strip().split(" ")
                    continue
                if key == "external_address":
                    result_netmap[key] = search_result[key].strip().split(",")
                    continue
                if key == "node_status":
                    result_netmap[key] = NodeStatus(search_result[key].strip().lower())
                    continue
                result_netmap[key] = search_result[key].strip()

            dataclasses_netmap.append(NodeNetmapInfo(**result_netmap))

        return dataclasses_netmap

    @staticmethod
    def snapshot_one_node(output: str, cluster_node: ClusterNode) -> NodeNetmapInfo | None:
        snapshot_nodes = NetmapParser.snapshot_all_nodes(output=output)
        snapshot_node = [node for node in snapshot_nodes if node.node == cluster_node.host_ip]
        if not snapshot_node:
            return None
        return snapshot_node[0]
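As a quick illustration of how these parsers are driven, a minimal sketch; the sample CLI output below is an assumption for demonstration only, not captured from a real node:

# Hypothetical fragment of `frostfs-cli netmap snapshot` output (format assumed).
sample_output = (
    "Node 1: 022f3e8d5e4d0f3c ONLINE /ip4/10.0.0.1/tcp/8080\n"
    "    Continent: Europe\n"
    "    Country: Germany\n"
    "    Price: 10\n"
)
for node_info in NetmapParser.snapshot_all_nodes(sample_output):
    print(node_info.node_id, node_info.node_status, node_info.node_data_ips)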
47  src/frostfs_testlib/credentials/authmate_s3_provider.py  Normal file
@@ -0,0 +1,47 @@
import re
from datetime import datetime
from typing import Optional

from frostfs_testlib import reporter
from frostfs_testlib.cli import FrostfsAuthmate
from frostfs_testlib.credentials.interfaces import S3Credentials, S3CredentialsProvider, User
from frostfs_testlib.resources.cli import FROSTFS_AUTHMATE_EXEC
from frostfs_testlib.shell import LocalShell
from frostfs_testlib.steps.cli.container import list_containers
from frostfs_testlib.storage.cluster import ClusterNode
from frostfs_testlib.storage.dataclasses.frostfs_services import S3Gate


class AuthmateS3CredentialsProvider(S3CredentialsProvider):
    @reporter.step("Init S3 Credentials using Authmate CLI")
    def provide(self, user: User, cluster_node: ClusterNode, location_constraints: Optional[str] = None) -> S3Credentials:
        cluster_nodes: list[ClusterNode] = self.cluster.cluster_nodes
        shell = LocalShell()
        wallet = user.wallet
        endpoint = cluster_node.storage_node.get_rpc_endpoint()

        gate_public_keys = [node.service(S3Gate).get_wallet_public_key() for node in cluster_nodes]
        # unique short bucket name
        bucket = f"bucket-{hex(int(datetime.now().timestamp()*1000000))}"

        frostfs_authmate: FrostfsAuthmate = FrostfsAuthmate(shell, FROSTFS_AUTHMATE_EXEC)
        issue_secret_output = frostfs_authmate.secret.issue(
            wallet=wallet.path,
            peer=endpoint,
            gate_public_key=gate_public_keys,
            wallet_password=wallet.password,
            container_policy=location_constraints,
            container_friendly_name=bucket,
        ).stdout

        aws_access_key_id = str(re.search(r"access_key_id.*:\s.(?P<aws_access_key_id>\w*)", issue_secret_output).group("aws_access_key_id"))
        aws_secret_access_key = str(
            re.search(r"secret_access_key.*:\s.(?P<aws_secret_access_key>\w*)", issue_secret_output).group("aws_secret_access_key")
        )
        cid = str(re.search(r"container_id.*:\s.(?P<container_id>\w*)", issue_secret_output).group("container_id"))

        containers_list = list_containers(wallet, shell, endpoint)
        assert cid in containers_list, f"Expected cid {cid} in {containers_list}"

        user.s3_credentials = S3Credentials(aws_access_key_id, aws_secret_access_key)
        return user.s3_credentials
51  src/frostfs_testlib/credentials/interfaces.py  Normal file
@@ -0,0 +1,51 @@
from abc import ABC, abstractmethod
from dataclasses import dataclass, field
from typing import Any, Optional

from frostfs_testlib.plugins import load_plugin
from frostfs_testlib.storage.cluster import Cluster, ClusterNode
from frostfs_testlib.storage.dataclasses.wallet import WalletInfo


@dataclass
class S3Credentials:
    access_key: str
    secret_key: str


@dataclass
class User:
    name: str
    attributes: dict[str, Any] = field(default_factory=dict)
    wallet: WalletInfo | None = None
    s3_credentials: S3Credentials | None = None


class S3CredentialsProvider(ABC):
    def __init__(self, cluster: Cluster) -> None:
        self.cluster = cluster

    @abstractmethod
    def provide(self, user: User, cluster_node: ClusterNode, location_constraints: Optional[str] = None, **kwargs) -> S3Credentials:
        raise NotImplementedError("Directly called abstract class?")


class GrpcCredentialsProvider(ABC):
    def __init__(self, cluster: Cluster) -> None:
        self.cluster = cluster

    @abstractmethod
    def provide(self, user: User, cluster_node: ClusterNode, **kwargs) -> WalletInfo:
        raise NotImplementedError("Directly called abstract class?")


class CredentialsProvider(object):
    S3: S3CredentialsProvider
    GRPC: GrpcCredentialsProvider

    def __init__(self, cluster: Cluster) -> None:
        config = cluster.cluster_nodes[0].host.config
        s3_cls = load_plugin("frostfs.testlib.credentials_providers", config.s3_creds_plugin_name)
        self.S3 = s3_cls(cluster)
        grpc_cls = load_plugin("frostfs.testlib.credentials_providers", config.grpc_creds_plugin_name)
        self.GRPC = grpc_cls(cluster)
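A minimal usage sketch of the provider facade above; that the plugins resolve to AuthmateS3CredentialsProvider and WalletFactoryProvider follows from the "authmate" and "wallet_factory" defaults in HostConfig, while the cluster fixture itself is an assumption:

provider = CredentialsProvider(cluster)  # cluster: Cluster, assumed from a test fixture
user = User(name="test-user")
provider.GRPC.provide(user, cluster.cluster_nodes[0])  # fills user.wallet
provider.S3.provide(user, cluster.cluster_nodes[0])    # fills user.s3_credentials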
14  src/frostfs_testlib/credentials/wallet_factory_provider.py  Normal file
@@ -0,0 +1,14 @@
from frostfs_testlib import reporter
from frostfs_testlib.credentials.interfaces import GrpcCredentialsProvider, User
from frostfs_testlib.resources.common import ASSETS_DIR, DEFAULT_WALLET_PASS
from frostfs_testlib.shell.local_shell import LocalShell
from frostfs_testlib.storage.cluster import ClusterNode
from frostfs_testlib.storage.dataclasses.wallet import WalletFactory, WalletInfo


class WalletFactoryProvider(GrpcCredentialsProvider):
    @reporter.step("Init gRPC Credentials using wallet generation")
    def provide(self, user: User, cluster_node: ClusterNode) -> WalletInfo:
        wallet_factory = WalletFactory(ASSETS_DIR, LocalShell())
        user.wallet = wallet_factory.create_wallet(file_name=user.name, password=DEFAULT_WALLET_PASS)
        return user.wallet
@@ -1,5 +1,5 @@
 class Options:
-    DEFAULT_SHELL_TIMEOUT = 90
+    DEFAULT_SHELL_TIMEOUT = 120

     @staticmethod
     def get_default_shell_timeout():
@@ -1,14 +1,109 @@
-from frostfs_testlib.healthcheck.interfaces import Healthcheck
-from frostfs_testlib.reporter import get_reporter
-from frostfs_testlib.steps.node_management import storage_node_healthcheck
-from frostfs_testlib.storage.cluster import ClusterNode
-
-reporter = get_reporter()
+from typing import Callable
+
+from frostfs_testlib import reporter
+from frostfs_testlib.cli.frostfs_cli.cli import FrostfsCli
+from frostfs_testlib.healthcheck.interfaces import Healthcheck
+from frostfs_testlib.resources.cli import FROSTFS_CLI_EXEC
+from frostfs_testlib.shell import CommandOptions
+from frostfs_testlib.steps.node_management import storage_node_healthcheck
+from frostfs_testlib.storage.cluster import ClusterNode, ServiceClass
+from frostfs_testlib.testing.test_control import wait_for_success
+from frostfs_testlib.utils.failover_utils import check_services_status


 class BasicHealthcheck(Healthcheck):
-    @reporter.step_deco("Perform healthcheck for {cluster_node}")
-    def perform(self, cluster_node: ClusterNode):
-        health_check = storage_node_healthcheck(cluster_node.storage_node)
-        if health_check.health_status != "READY" or health_check.network_status != "ONLINE":
-            raise AssertionError("Node {cluster_node} is not healthy")
+    def _perform(self, cluster_node: ClusterNode, checks: dict[Callable, dict]):
+        issues: list[str] = []
+        for check, kwargs in checks.items():
+            issue = check(cluster_node, **kwargs)
+            if issue:
+                issues.append(issue)
+
+        assert not issues, "Issues found:\n" + "\n".join(issues)
+
+    @wait_for_success(900, 30, title="Wait for full healthcheck for {cluster_node}")
+    def full_healthcheck(self, cluster_node: ClusterNode):
+        checks = {
+            self.storage_healthcheck: {},
+            self._tree_healthcheck: {},
+        }
+
+        self._perform(cluster_node, checks)
+
+    @wait_for_success(900, 30, title="Wait for startup healthcheck on {cluster_node}")
+    def startup_healthcheck(self, cluster_node: ClusterNode):
+        checks = {
+            self.storage_healthcheck: {},
+            self._tree_healthcheck: {},
+        }
+
+        self._perform(cluster_node, checks)
+
+    @wait_for_success(900, 30, title="Wait for storage healthcheck on {cluster_node}")
+    def storage_healthcheck(self, cluster_node: ClusterNode) -> str | None:
+        checks = {
+            self._storage_healthcheck: {},
+        }
+
+        self._perform(cluster_node, checks)
+
+    @wait_for_success(900, 30, title="Wait for tree healthcheck on {cluster_node}")
+    def tree_healthcheck(self, cluster_node: ClusterNode) -> str | None:
+        checks = {
+            self._tree_healthcheck: {},
+        }
+
+        self._perform(cluster_node, checks)
+
+    @wait_for_success(120, 5, title="Wait for service healthcheck on {cluster_node}")
+    def services_healthcheck(self, cluster_node: ClusterNode):
+        svcs_to_check = cluster_node.services
+        checks = {
+            check_services_status: {
+                "service_list": svcs_to_check,
+                "expected_status": "active",
+            },
+            self._check_services: {"services": svcs_to_check},
+        }
+
+        self._perform(cluster_node, checks)
+
+    def _check_services(self, cluster_node: ClusterNode, services: list[ServiceClass]):
+        for svc in services:
+            result = svc.service_healthcheck()
+            if result is False:
+                return f"Service {svc.get_service_systemctl_name()} healthcheck failed on node {cluster_node}."
+
+    @reporter.step("Storage healthcheck on {cluster_node}")
+    def _storage_healthcheck(self, cluster_node: ClusterNode) -> str | None:
+        result = storage_node_healthcheck(cluster_node.storage_node)
+        self._gather_socket_info(cluster_node)
+        if result.health_status != "READY" or result.network_status != "ONLINE":
+            return f"Node {cluster_node} is not healthy. Health={result.health_status}. Network={result.network_status}"
+
+    @reporter.step("Tree healthcheck on {cluster_node}")
+    def _tree_healthcheck(self, cluster_node: ClusterNode) -> str | None:
+        host = cluster_node.host
+        service_config = host.get_service_config(cluster_node.storage_node.name)
+        wallet_path = service_config.attributes["wallet_path"]
+        wallet_password = service_config.attributes["wallet_password"]
+
+        shell = host.get_shell()
+        wallet_config_path = f"/tmp/{cluster_node.storage_node.name}-config.yaml"
+        wallet_config = f'wallet: {wallet_path}\npassword: "{wallet_password}"'
+        shell.exec(f"echo '{wallet_config}' > {wallet_config_path}")
+
+        remote_cli = FrostfsCli(
+            shell,
+            host.get_cli_config(FROSTFS_CLI_EXEC).exec_path,
+            config_file=wallet_config_path,
+        )
+        result = remote_cli.tree.healthcheck(rpc_endpoint="127.0.0.1:8080")
+        if result.return_code != 0:
+            return f"Error during tree healthcheck (rc={result.return_code}): {result.stdout}. \n Stderr: {result.stderr}"
+
+    @reporter.step("Gather socket info for {cluster_node}")
+    def _gather_socket_info(self, cluster_node: ClusterNode):
+        cluster_node.host.get_shell().exec("ss -tuln | grep 8080", CommandOptions(check=False))
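The new _perform helper treats every check as a callable returning an error string or None, so additional checks compose the same way. A hedged sketch of a custom subclass; the memory threshold and the shell command are illustrative assumptions, not part of the library:

class ExtendedHealthcheck(BasicHealthcheck):
    def _memory_healthcheck(self, cluster_node: ClusterNode) -> str | None:
        # Available memory in MiB, column 7 of `free -m` (illustrative check only).
        result = cluster_node.host.get_shell().exec("free -m | awk 'NR==2 {print $7}'")
        if int(result.stdout.strip()) < 512:
            return f"Node {cluster_node} has less than 512 MiB of available memory"

    @wait_for_success(120, 5, title="Wait for memory healthcheck on {cluster_node}")
    def memory_healthcheck(self, cluster_node: ClusterNode):
        self._perform(cluster_node, {self._memory_healthcheck: {}})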
@@ -5,5 +5,21 @@ from frostfs_testlib.storage.cluster import ClusterNode

 class Healthcheck(ABC):
     @abstractmethod
-    def perform(self, cluster_node: ClusterNode):
-        """Perform healthcheck on the target cluster node"""
+    def full_healthcheck(self, cluster_node: ClusterNode):
+        """Perform full healthcheck on the target cluster node"""
+
+    @abstractmethod
+    def startup_healthcheck(self, cluster_node: ClusterNode):
+        """Perform healthcheck required on startup of target cluster node"""
+
+    @abstractmethod
+    def storage_healthcheck(self, cluster_node: ClusterNode):
+        """Perform storage service healthcheck on target cluster node"""
+
+    @abstractmethod
+    def services_healthcheck(self, cluster_node: ClusterNode):
+        """Perform service status check on target cluster node"""
+
+    @abstractmethod
+    def tree_healthcheck(self, cluster_node: ClusterNode):
+        """Perform tree healthcheck on target cluster node"""
@@ -10,9 +10,7 @@ class ParsedAttributes:
     def parse(cls, attributes: dict[str, Any]):
         # Pick attributes supported by the class
         field_names = set(field.name for field in fields(cls))
-        supported_attributes = {
-            key: value for key, value in attributes.items() if key in field_names
-        }
+        supported_attributes = {key: value for key, value in attributes.items() if key in field_names}
         return cls(**supported_attributes)
@@ -29,6 +27,7 @@ class CLIConfig:
     name: str
     exec_path: str
     attributes: dict[str, str] = field(default_factory=dict)
+    extra_args: list[str] = field(default_factory=list)


 @dataclass
@@ -63,10 +62,14 @@ class HostConfig:
     plugin_name: str
     healthcheck_plugin_name: str
     address: str
+    s3_creds_plugin_name: str = field(default="authmate")
+    grpc_creds_plugin_name: str = field(default="wallet_factory")
+    product: str = field(default="frostfs")
     services: list[ServiceConfig] = field(default_factory=list)
     clis: list[CLIConfig] = field(default_factory=list)
     attributes: dict[str, str] = field(default_factory=dict)
     interfaces: dict[str, str] = field(default_factory=dict)
+    environment: dict[str, str] = field(default_factory=dict)

     def __post_init__(self) -> None:
         self.services = [ServiceConfig(**service) for service in self.services or []]
@@ -152,9 +152,7 @@ class DockerHost(Host):
             timeout=service_attributes.start_timeout,
         )

-    def wait_for_service_to_be_in_state(
-        self, systemd_service_name: str, expected_state: str, timeout: int
-    ) -> None:
+    def wait_for_service_to_be_in_state(self, systemd_service_name: str, expected_state: str, timeout: int) -> None:
         raise NotImplementedError("Not implemented for docker")

     def get_data_directory(self, service_name: str) -> str:
@@ -181,6 +179,12 @@ class DockerHost(Host):
     def delete_pilorama(self, service_name: str) -> None:
         raise NotImplementedError("Not implemented for docker")

+    def delete_file(self, file_path: str) -> None:
+        raise NotImplementedError("Not implemented for docker")
+
+    def is_file_exist(self, file_path: str) -> None:
+        raise NotImplementedError("Not implemented for docker")
+
     def delete_storage_node_data(self, service_name: str, cache_only: bool = False) -> None:
         volume_path = self.get_data_directory(service_name)
@@ -235,6 +239,8 @@ class DockerHost(Host):
         since: Optional[datetime] = None,
         until: Optional[datetime] = None,
         unit: Optional[str] = None,
+        exclude_filter: Optional[str] = None,
+        priority: Optional[str] = None
     ) -> str:
         client = self._get_docker_client()
         filtered_logs = ""
@@ -246,8 +252,11 @@ class DockerHost(Host):
                 logger.info(f"Got exception while dumping logs of '{container_name}': {exc}")
                 continue

+            if exclude_filter:
+                filtered_logs = filtered_logs.replace(exclude_filter, "")
             matches = re.findall(filter_regex, filtered_logs, re.IGNORECASE + re.MULTILINE)
             found = list(matches)

             if found:
                 filtered_logs += f"{container_name}:\n{os.linesep.join(found)}"
@@ -301,9 +310,7 @@ class DockerHost(Host):
             return container
         return None

-    def _wait_for_container_to_be_in_state(
-        self, container_name: str, expected_state: str, timeout: int
-    ) -> None:
+    def _wait_for_container_to_be_in_state(self, container_name: str, expected_state: str, timeout: int) -> None:
         iterations = 10
         iteration_wait_time = timeout / iterations
@@ -5,6 +5,7 @@ from typing import Optional
 from frostfs_testlib.hosting.config import CLIConfig, HostConfig, ServiceConfig
 from frostfs_testlib.shell.interfaces import Shell
 from frostfs_testlib.testing.readable import HumanReadableEnum
+from frostfs_testlib.testing.test_control import retry


 class HostStatus(HumanReadableEnum):
@@ -25,9 +26,7 @@ class Host(ABC):

     def __init__(self, config: HostConfig) -> None:
         self._config = config
-        self._service_config_by_name = {
-            service_config.name: service_config for service_config in config.services
-        }
+        self._service_config_by_name = {service_config.name: service_config for service_config in config.services}
         self._cli_config_by_name = {cli_config.name: cli_config for cli_config in config.clis}

     @property
@@ -55,7 +54,7 @@ class Host(ABC):
             raise ValueError(f"Unknown service name: '{service_name}'")
         return service_config

-    def get_cli_config(self, cli_name: str) -> CLIConfig:
+    def get_cli_config(self, cli_name: str, allow_empty: bool = False) -> CLIConfig:
         """Returns config of CLI tool with specified name.

         The CLI must be located on this host.
@@ -67,7 +66,7 @@ class Host(ABC):
             Config of the CLI tool.
         """
         cli_config = self._cli_config_by_name.get(cli_name)
-        if cli_config is None:
+        if cli_config is None and not allow_empty:
             raise ValueError(f"Unknown CLI name: '{cli_name}'")
         return cli_config
@@ -220,12 +219,22 @@ class Host(ABC):
         """

     @abstractmethod
-    def delete_pilorama(self, service_name: str) -> None:
+    def delete_file(self, file_path: str) -> None:
         """
-        Deletes all pilorama.db files in the node.
+        Deletes file with provided file path

         Args:
-            service_name: Name of storage node service.
+            file_path: full path to the file to delete

         """
+
+    @abstractmethod
+    def is_file_exist(self, file_path: str) -> bool:
+        """
+        Checks if file exist
+
+        Args:
+            file_path: full path to the file to check
+
+        """
@@ -287,6 +296,8 @@ class Host(ABC):
         since: Optional[datetime] = None,
         until: Optional[datetime] = None,
         unit: Optional[str] = None,
+        exclude_filter: Optional[str] = None,
+        priority: Optional[str] = None
     ) -> str:
         """Get logs from host filtered by regex.
@@ -295,6 +306,8 @@ class Host(ABC):
             since: If set, limits the time from which logs should be collected. Must be in UTC.
             until: If set, limits the time until which logs should be collected. Must be in UTC.
             unit: required unit.
+            priority: logs level, 0 - emergency, 7 - debug. All messages with that code and higher.
+              For example, if we specify the -p 2 option, journalctl will show all messages with levels 2, 1 and 0.

         Returns:
             Found entries as str if any found.
@@ -322,9 +335,7 @@ class Host(ABC):
         """

     @abstractmethod
-    def wait_for_service_to_be_in_state(
-        self, systemd_service_name: str, expected_state: str, timeout: int
-    ) -> None:
+    def wait_for_service_to_be_in_state(self, systemd_service_name: str, expected_state: str, timeout: int) -> None:
         """
         Waits for service to be in specified state.
|
||||||
timeout: Seconds to wait
|
timeout: Seconds to wait
|
||||||
|
|
||||||
"""
|
"""
|
||||||
|
|
||||||
|
def down_interface(self, interface: str) -> None:
|
||||||
|
shell = self.get_shell()
|
||||||
|
shell.exec(f"ip link set {interface} down")
|
||||||
|
|
||||||
|
def up_interface(self, interface: str) -> None:
|
||||||
|
shell = self.get_shell()
|
||||||
|
shell.exec(f"ip link set {interface} up")
|
||||||
|
|
||||||
|
def check_state(self, interface: str) -> str:
|
||||||
|
shell = self.get_shell()
|
||||||
|
return shell.exec(f"ip link show {interface} | sed -z 's/.*state \(.*\) mode .*/\\1/'").stdout.strip()
|
||||||
|
|
||||||
|
@retry(max_attempts=5, sleep_interval=5, expected_result="UP")
|
||||||
|
def check_state_up(self, interface: str) -> str:
|
||||||
|
return self.check_state(interface=interface)
|
||||||
|
|
||||||
|
@retry(max_attempts=5, sleep_interval=5, expected_result="DOWN")
|
||||||
|
def check_state_down(self, interface: str) -> str:
|
||||||
|
return self.check_state(interface=interface)
|
||||||
|
|
|
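A short sketch of how the retry-decorated state checks above are intended to be used: check_state_down keeps polling `ip link` until it reports DOWN or the five retries are spent. The interface name and fixture object are assumptions for illustration:

host = cluster_node.host          # any Host implementation, assumed from a test fixture
host.down_interface("eth1")
assert host.check_state_down("eth1") == "DOWN"
host.up_interface("eth1")
assert host.check_state_up("eth1") == "UP"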
@@ -1,4 +1,5 @@
-from frostfs_testlib.load.interfaces import Loader, ScenarioRunner
+from frostfs_testlib.load.interfaces.loader import Loader
+from frostfs_testlib.load.interfaces.scenario_runner import ScenarioRunner
 from frostfs_testlib.load.load_config import (
     EndpointSelectionStrategy,
     K6ProcessAllocationStrategy,
@@ -11,4 +12,4 @@ from frostfs_testlib.load.load_config import (
 )
 from frostfs_testlib.load.load_report import LoadReport
 from frostfs_testlib.load.loaders import NodeLoader, RemoteLoader
-from frostfs_testlib.load.runners import DefaultRunner, LocalRunner
+from frostfs_testlib.load.runners import DefaultRunner, LocalRunner, S3LocalRunner
14  src/frostfs_testlib/load/interfaces/loader.py  Normal file
@@ -0,0 +1,14 @@
from abc import ABC, abstractmethod

from frostfs_testlib.shell.interfaces import Shell


class Loader(ABC):
    @abstractmethod
    def get_shell(self) -> Shell:
        """Get shell for the loader"""

    @property
    @abstractmethod
    def ip(self):
        """Get address of the loader"""
@@ -1,20 +1,8 @@
 from abc import ABC, abstractmethod

+from frostfs_testlib.load.k6 import K6
 from frostfs_testlib.load.load_config import LoadParams
-from frostfs_testlib.shell.interfaces import Shell
 from frostfs_testlib.storage.cluster import ClusterNode
-from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
-
-
-class Loader(ABC):
-    @abstractmethod
-    def get_shell(self) -> Shell:
-        """Get shell for the loader"""
-
-    @property
-    @abstractmethod
-    def ip(self):
-        """Get address of the loader"""


 class ScenarioRunner(ABC):
@@ -32,6 +20,10 @@ class ScenarioRunner(ABC):
     def init_k6_instances(self, load_params: LoadParams, endpoints: list[str], k6_dir: str):
         """Init K6 instances"""

+    @abstractmethod
+    def get_k6_instances(self) -> list[K6]:
+        """Get K6 instances"""
+
     @abstractmethod
     def start(self):
         """Start K6 instances"""
96  src/frostfs_testlib/load/interfaces/summarized.py  Normal file
@@ -0,0 +1,96 @@
from dataclasses import dataclass, field

from frostfs_testlib.load.load_config import LoadParams, LoadScenario
from frostfs_testlib.load.load_metrics import get_metrics_object


@dataclass
class SummarizedErorrs:
    total: int = field(default_factory=int)
    percent: float = field(default_factory=float)
    threshold: float = field(default_factory=float)
    by_node: dict[str, int] = field(default_factory=dict)

    def calc_stats(self, operations):
        self.total += sum(self.by_node.values())

        if not operations:
            return

        self.percent = self.total / operations * 100


@dataclass
class SummarizedLatencies:
    avg: float = field(default_factory=float)
    min: float = field(default_factory=float)
    max: float = field(default_factory=float)
    by_node: dict[str, dict[str, int]] = field(default_factory=dict)

    def calc_stats(self):
        if not self.by_node:
            return

        avgs = [lt["avg"] for lt in self.by_node.values()]
        self.avg = sum(avgs) / len(avgs)

        minimal = [lt["min"] for lt in self.by_node.values()]
        self.min = min(minimal)

        maximum = [lt["max"] for lt in self.by_node.values()]
        self.max = max(maximum)


@dataclass
class SummarizedStats:
    threads: int = field(default_factory=int)
    requested_rate: int = field(default_factory=int)
    operations: int = field(default_factory=int)
    rate: float = field(default_factory=float)
    throughput: float = field(default_factory=float)
    latencies: SummarizedLatencies = field(default_factory=SummarizedLatencies)
    errors: SummarizedErorrs = field(default_factory=SummarizedErorrs)
    total_bytes: int = field(default_factory=int)
    passed: bool = True

    def calc_stats(self):
        self.errors.calc_stats(self.operations)
        self.latencies.calc_stats()
        self.passed = self.errors.percent <= self.errors.threshold

    @staticmethod
    def collect(load_params: LoadParams, load_summaries: dict) -> dict[str, "SummarizedStats"]:
        if load_params.scenario in [LoadScenario.gRPC_CAR, LoadScenario.S3_CAR]:
            delete_vus = max(load_params.preallocated_deleters or 0, load_params.max_deleters or 0)
            write_vus = max(load_params.preallocated_writers or 0, load_params.max_writers or 0)
            read_vus = max(load_params.preallocated_readers or 0, load_params.max_readers or 0)
        else:
            write_vus = load_params.writers
            read_vus = load_params.readers
            delete_vus = load_params.deleters

        summarized = {
            "Write": SummarizedStats(threads=write_vus, requested_rate=load_params.write_rate),
            "Read": SummarizedStats(threads=read_vus, requested_rate=load_params.read_rate),
            "Delete": SummarizedStats(threads=delete_vus, requested_rate=load_params.delete_rate),
        }

        for node_key, load_summary in load_summaries.items():
            metrics = get_metrics_object(load_params.scenario, load_summary)
            for operation in metrics.operations:
                target = summarized[operation._NAME]
                if not operation.total_iterations:
                    continue
                target.operations += operation.total_iterations
                target.rate += operation.rate
                target.latencies.by_node[node_key] = operation.latency
                target.throughput += operation.throughput
                target.errors.threshold = load_params.error_threshold
                target.total_bytes += operation.total_bytes
                if operation.failed_iterations:
                    target.errors.by_node[node_key] = operation.failed_iterations

        for operation in summarized.values():
            operation.calc_stats()

        return summarized
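A minimal sketch of consuming the aggregation above; load_summaries is assumed to map node keys to parsed k6 summary.json dicts produced by the runners:

summarized = SummarizedStats.collect(load_params, load_summaries)
for op_name, stats in summarized.items():
    verdict = "PASS" if stats.passed else "FAIL"
    print(f"{op_name}: {stats.operations} ops, {stats.errors.percent:.2f}% errors -> {verdict}")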
@@ -4,29 +4,24 @@ import math
 import os
 from dataclasses import dataclass
 from datetime import datetime
+from threading import Event
 from time import sleep
 from typing import Any
 from urllib.parse import urlparse

-from frostfs_testlib.load.interfaces import Loader
-from frostfs_testlib.load.load_config import (
-    K6ProcessAllocationStrategy,
-    LoadParams,
-    LoadScenario,
-    LoadType,
-)
+from frostfs_testlib import reporter
+from frostfs_testlib.credentials.interfaces import User
+from frostfs_testlib.load.interfaces.loader import Loader
+from frostfs_testlib.load.load_config import K6ProcessAllocationStrategy, LoadParams, LoadScenario, LoadType
 from frostfs_testlib.processes.remote_process import RemoteProcess
-from frostfs_testlib.reporter import get_reporter
 from frostfs_testlib.resources.common import STORAGE_USER_NAME
 from frostfs_testlib.resources.load_params import K6_STOP_SIGNAL_TIMEOUT, K6_TEARDOWN_PERIOD
 from frostfs_testlib.shell import Shell
-from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
 from frostfs_testlib.testing.test_control import wait_for_success

 EXIT_RESULT_CODE = 0

 logger = logging.getLogger("NeoLogger")
-reporter = get_reporter()


 @dataclass
@@ -40,7 +35,6 @@ class LoadResults:

 class K6:
     _k6_process: RemoteProcess
-    _start_time: datetime

     def __init__(
         self,
@@ -49,16 +43,17 @@ class K6:
         k6_dir: str,
         shell: Shell,
         loader: Loader,
-        wallet: WalletInfo,
+        user: User,
     ):
         if load_params.scenario is None:
             raise RuntimeError("Scenario should not be none")

-        self.load_params: LoadParams = load_params
+        self.load_params = load_params
         self.endpoints = endpoints
-        self.loader: Loader = loader
-        self.shell: Shell = shell
-        self.wallet = wallet
+        self.loader = loader
+        self.shell = shell
+        self.user = user
+        self.preset_output: str = ""
         self.summary_json: str = os.path.join(
             self.load_params.working_dir,
             f"{self.load_params.load_id}_{self.load_params.scenario.value}_summary.json",
@@ -66,6 +61,27 @@ class K6:

         self._k6_dir: str = k6_dir

+        command = (
+            f"{self._generate_env_variables()}{self._k6_dir}/k6 run {self._generate_k6_variables()} "
+            f"{self._k6_dir}/scenarios/{self.load_params.scenario.value}.js"
+        )
+        remote_user = STORAGE_USER_NAME if self.load_params.scenario == LoadScenario.LOCAL else None
+        process_id = self.load_params.load_id if self.load_params.scenario != LoadScenario.VERIFY else f"{self.load_params.load_id}_verify"
+        self._k6_process = RemoteProcess.create(command, self.shell, self.load_params.working_dir, remote_user, process_id)
+
+    def _get_fill_percents(self):
+        fill_percents = self.shell.exec("df -H --output=source,pcent,target | grep frostfs | grep data").stdout.split("\n")
+        return [line.split() for line in fill_percents][:-1]
+
+    def check_fill_percent(self):
+        fill_percents = self._get_fill_percents()
+        percent_mean = 0
+        for line in fill_percents:
+            percent_mean += float(line[1].split("%")[0])
+        percent_mean = percent_mean / len(fill_percents)
+        logger.info(f"{self.loader.ip} mean fill percent is {percent_mean}")
+        return percent_mean >= self.load_params.fill_percent
+
     @property
     def process_dir(self) -> str:
         return self._k6_process.process_dir
@@ -84,8 +100,8 @@ class K6:
             preset_grpc: [
                 preset_grpc,
                 f"--endpoint {','.join(self.endpoints)}",
-                f"--wallet {self.wallet.path} ",
-                f"--config {self.wallet.config_path} ",
+                f"--wallet {self.user.wallet.path} ",
+                f"--config {self.user.wallet.config_path} ",
             ],
             preset_s3: [
                 preset_s3,
@@ -101,56 +117,54 @@ class K6:
         command = " ".join(command_args)
         result = self.shell.exec(command)

-        assert (
-            result.return_code == EXIT_RESULT_CODE
-        ), f"Return code of preset is not zero: {result.stdout}"
-        return result.stdout.strip("\n")
+        assert result.return_code == EXIT_RESULT_CODE, f"Return code of preset is not zero: {result.stdout}"

-    @reporter.step_deco("Generate K6 command")
-    def _generate_env_variables(self) -> str:
-        env_vars = self.load_params.get_env_vars()
+        self.preset_output = result.stdout.strip("\n")
+        return self.preset_output
+
+    @reporter.step("Generate K6 variables")
+    def _generate_k6_variables(self) -> str:
+        env_vars = self.load_params.get_k6_vars()

         env_vars[f"{self.load_params.load_type.value.upper()}_ENDPOINTS"] = ",".join(self.endpoints)
         env_vars["SUMMARY_JSON"] = self.summary_json

-        reporter.attach(
-            "\n".join(f"{param}: {value}" for param, value in env_vars.items()), "K6 ENV variables"
-        )
-        return " ".join(
-            [f"-e {param}='{value}'" for param, value in env_vars.items() if value is not None]
-        )
+        reporter.attach("\n".join(f"{param}: {value}" for param, value in env_vars.items()), "K6 ENV variables")
+        return " ".join([f"-e {param}='{value}'" for param, value in env_vars.items() if value is not None])
+
+    @reporter.step("Generate env variables")
+    def _generate_env_variables(self) -> str:
+        env_vars = self.load_params.get_env_vars()
+        if not env_vars:
+            return ""
+        reporter.attach("\n".join(f"{param}: {value}" for param, value in env_vars.items()), "ENV variables")
+        return " ".join([f"{param}='{value}'" for param, value in env_vars.items() if value is not None]) + " "
+
+    def get_start_time(self) -> datetime:
+        return datetime.fromtimestamp(self._k6_process.start_time())
+
+    def get_end_time(self) -> datetime:
+        return datetime.fromtimestamp(self._k6_process.end_time())

     def start(self) -> None:
-        with reporter.step(
-            f"Start load from loader {self.loader.ip} on endpoints {self.endpoints}"
-        ):
-            self._start_time = int(datetime.utcnow().timestamp())
-            command = (
-                f"{self._k6_dir}/k6 run {self._generate_env_variables()} "
-                f"{self._k6_dir}/scenarios/{self.load_params.scenario.value}.js"
-            )
-            user = STORAGE_USER_NAME if self.load_params.scenario == LoadScenario.LOCAL else None
-            self._k6_process = RemoteProcess.create(
-                command, self.shell, self.load_params.working_dir, user
-            )
+        with reporter.step(f"Start load from loader {self.loader.ip} on endpoints {self.endpoints}"):
+            self._k6_process.start()

-    def wait_until_finished(self, soft_timeout: int = 0) -> None:
-        with reporter.step(
-            f"Wait until load is finished from loader {self.loader.ip} on endpoints {self.endpoints}"
-        ):
+    def wait_until_finished(self, event: Event, soft_timeout: int = 0) -> None:
+        with reporter.step(f"Wait until load is finished from loader {self.loader.ip} on endpoints {self.endpoints}"):
             if self.load_params.scenario == LoadScenario.VERIFY:
                 timeout = self.load_params.verify_time or 0
             else:
                 timeout = self.load_params.load_time or 0

+            start_time = int(self.get_start_time().timestamp())
+
             current_time = int(datetime.utcnow().timestamp())
-            working_time = current_time - self._start_time
+            working_time = current_time - start_time
             remaining_time = timeout - working_time

             setup_teardown_time = (
-                int(K6_TEARDOWN_PERIOD)
-                + self.load_params.get_init_time()
-                + int(self.load_params.setup_timeout.replace("s", "").strip())
+                int(K6_TEARDOWN_PERIOD) + self.load_params.get_init_time() + int(self.load_params.setup_timeout.replace("s", "").strip())
             )
             remaining_time_including_setup_and_teardown = remaining_time + setup_teardown_time
             timeout = remaining_time_including_setup_and_teardown
@@ -161,7 +175,7 @@ class K6:
             original_timeout = timeout

             timeouts = {
-                "K6 start time": self._start_time,
+                "K6 start time": start_time,
                 "Current time": current_time,
                 "K6 working time": working_time,
                 "Remaining time for load": remaining_time,
@@ -177,10 +191,28 @@ class K6:
             wait_interval = min_wait_interval
             if self._k6_process is None:
                 assert "No k6 instances were executed"

             while timeout > 0:
+                if self.load_params.fill_percent is not None:
+                    with reporter.step(f"Check the percentage of filling of all data disks on the node"):
+                        if self.check_fill_percent():
+                            logger.info(f"Stopping load because disks are filled more than {self.load_params.fill_percent}%")
+                            event.set()
+                            self.stop()
+                            return
+
+                if event.is_set():
+                    self.stop()
+                    return
+
                 if not self._k6_process.running():
                     return
-                logger.info(f"K6 is running. Waiting {wait_interval} seconds...")
+
+                remaining_time_hours = f"{timeout//3600}h" if timeout // 3600 != 0 else ""
+                remaining_time_minutes = f"{timeout//60%60}m" if timeout // 60 % 60 != 0 else ""
+                logger.info(
+                    f"K6 is running. Remaining time {remaining_time_hours}{remaining_time_minutes}{timeout%60}s. Next check after {wait_interval} seconds..."
+                )
                 sleep(wait_interval)
                 timeout -= min(timeout, wait_interval)
                 wait_interval = max(
@@ -196,9 +228,7 @@ class K6:
             raise TimeoutError(f"Expected K6 to finish after {original_timeout} sec.")

     def get_results(self) -> Any:
-        with reporter.step(
-            f"Get load results from loader {self.loader.ip} on endpoints {self.endpoints}"
-        ):
+        with reporter.step(f"Get load results from loader {self.loader.ip} on endpoints {self.endpoints}"):
             self.__log_output()

             if not self.summary_json:
@@ -228,10 +258,8 @@ class K6:
             return self._k6_process.running()
         return False

-    @reporter.step_deco("Wait until K6 process end")
-    @wait_for_success(
-        K6_STOP_SIGNAL_TIMEOUT, 15, False, False, "Can not stop K6 process within timeout"
-    )
+    @reporter.step("Wait until K6 process end")
+    @wait_for_success(K6_STOP_SIGNAL_TIMEOUT, 15, False, False, "Can not stop K6 process within timeout")
    def _wait_until_process_end(self):
        return self._k6_process.running()
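The new wait_until_finished(event, ...) contract enables cooperative shutdown: several K6 wrappers share one threading.Event, and whichever instance trips the fill-percent guard sets it, which makes the others stop as well. A hedged sketch; k6_instances is assumed to come from a scenario runner:

from concurrent.futures import ThreadPoolExecutor
from threading import Event

stop_event = Event()
with ThreadPoolExecutor() as executor:
    futures = [executor.submit(k6.wait_until_finished, stop_event) for k6 in k6_instances]
    for future in futures:
        future.result()  # propagate TimeoutError or assertion failures, if any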
@@ -3,11 +3,38 @@ import os
 from dataclasses import dataclass, field, fields, is_dataclass
 from enum import Enum
 from types import MappingProxyType
-from typing import Any, Optional, get_args
+from typing import Any, Callable, Optional, get_args

 from frostfs_testlib.utils.converting_utils import calc_unit


+def convert_time_to_seconds(time: int | str | None) -> int | None:
+    if time is None:
+        return None
+    if str(time).isdigit():
+        seconds = int(time)
+    else:
+        days, hours, minutes = 0, 0, 0
+        if "d" in time:
+            days, time = time.split("d")
+        if "h" in time:
+            hours, time = time.split("h")
+        if "min" in time:
+            minutes = time.replace("min", "")
+        seconds = int(days) * 86400 + int(hours) * 3600 + int(minutes) * 60
+    return seconds
+
+
+def force_list(input: str | list[str]):
+    if input is None:
+        return None
+
+    if isinstance(input, list):
+        return list(map(str.strip, input))
+
+    return [input.strip()]
+
+
 class LoadType(Enum):
     gRPC = "grpc"
     S3 = "s3"
@@ -20,6 +47,7 @@ class LoadScenario(Enum):
     S3 = "s3"
     S3_CAR = "s3_car"
     S3_MULTIPART = "s3_multipart"
+    S3_LOCAL = "s3local"
     HTTP = "http"
     VERIFY = "verify"
     LOCAL = "local"
@@ -38,11 +66,19 @@ all_load_scenarios = [
     LoadScenario.S3_CAR,
     LoadScenario.gRPC_CAR,
     LoadScenario.LOCAL,
-    LoadScenario.S3_MULTIPART
+    LoadScenario.S3_MULTIPART,
+    LoadScenario.S3_LOCAL,
 ]
 all_scenarios = all_load_scenarios.copy() + [LoadScenario.VERIFY]

-constant_vus_scenarios = [LoadScenario.gRPC, LoadScenario.S3, LoadScenario.HTTP, LoadScenario.LOCAL, LoadScenario.S3_MULTIPART]
+constant_vus_scenarios = [
+    LoadScenario.gRPC,
+    LoadScenario.S3,
+    LoadScenario.HTTP,
+    LoadScenario.LOCAL,
+    LoadScenario.S3_MULTIPART,
+    LoadScenario.S3_LOCAL,
+]
 constant_arrival_rate_scenarios = [LoadScenario.gRPC_CAR, LoadScenario.S3_CAR]

 grpc_preset_scenarios = [
@@ -51,7 +87,7 @@ grpc_preset_scenarios = [
     LoadScenario.gRPC_CAR,
     LoadScenario.LOCAL,
 ]
-s3_preset_scenarios = [LoadScenario.S3, LoadScenario.S3_CAR, LoadScenario.S3_MULTIPART]
+s3_preset_scenarios = [LoadScenario.S3, LoadScenario.S3_CAR, LoadScenario.S3_MULTIPART, LoadScenario.S3_LOCAL]


 @dataclass
@@ -67,15 +103,19 @@ def metadata_field(
     scenario_variable: Optional[str] = None,
     string_repr: Optional[bool] = True,
     distributed: Optional[bool] = False,
+    formatter: Optional[Callable] = None,
+    env_variable: Optional[str] = None,
 ):
     return field(
         default=None,
         metadata={
             "applicable_scenarios": applicable_scenarios,
             "preset_argument": preset_param,
-            "env_variable": scenario_variable,
+            "scenario_variable": scenario_variable,
             "string_repr": string_repr,
             "distributed": distributed,
+            "formatter": formatter,
+            "env_variable": env_variable,
         },
     )
@@ -89,6 +129,8 @@ class NodesSelectionStrategy(Enum):
     ALL_EXCEPT_UNDER_TEST = "ALL_EXCEPT_UNDER_TEST"
     # Select ONE random node except under test (useful for failover).
     RANDOM_SINGLE_EXCEPT_UNDER_TEST = "RANDOM_SINGLE_EXCEPT_UNDER_TEST"
+    # Select node under test
+    NODE_UNDER_TEST = "NODE_UNDER_TEST"


 class EndpointSelectionStrategy(Enum):
@@ -110,8 +152,29 @@ class K6ProcessAllocationStrategy(Enum):
     PER_ENDPOINT = "PER_ENDPOINT"


+class MetaConfig:
+    def _get_field_formatter(self, field_name: str) -> Callable | None:
+        data_fields = fields(self)
+        formatters = [
+            field.metadata["formatter"]
+            for field in data_fields
+            if field.name == field_name and "formatter" in field.metadata and field.metadata["formatter"] is not None
+        ]
+        if formatters:
+            return formatters[0]
+
+        return None
+
+    def __setattr__(self, field_name, value):
+        formatter = self._get_field_formatter(field_name)
+        if formatter:
+            value = formatter(value)
+
+        super().__setattr__(field_name, value)
+
+
 @dataclass
-class Preset:
+class Preset(MetaConfig):
     # ------ COMMON ------
     # Amount of objects which should be created
     objects_count: Optional[int] = metadata_field(all_load_scenarios, "preload_obj", None, False)
@@ -119,22 +182,20 @@ class Preset(MetaConfig):
     pregen_json: Optional[str] = metadata_field(all_load_scenarios, "out", "PREGEN_JSON", False)
     # Workers count for preset
     workers: Optional[int] = metadata_field(all_load_scenarios, "workers", None, False)
+    # Acl for container/buckets
+    acl: Optional[str] = metadata_field(all_load_scenarios, "acl", None, False)

     # ------ GRPC ------
     # Amount of containers which should be created
-    containers_count: Optional[int] = metadata_field(
-        grpc_preset_scenarios, "containers", None, False
-    )
+    containers_count: Optional[int] = metadata_field(grpc_preset_scenarios, "containers", None, False)
     # Container placement policy for containers for gRPC
-    container_placement_policy: Optional[str] = metadata_field(
-        grpc_preset_scenarios, "policy", None, False
-    )
+    container_placement_policy: Optional[list[str]] = metadata_field(grpc_preset_scenarios, "policy", None, False, formatter=force_list)

     # ------ S3 ------
     # Amount of buckets which should be created
     buckets_count: Optional[int] = metadata_field(s3_preset_scenarios, "buckets", None, False)
     # S3 region (AKA placement policy for S3 buckets)
-    s3_location: Optional[str] = metadata_field(s3_preset_scenarios, "location", None, False)
+    s3_location: Optional[list[str]] = metadata_field(s3_preset_scenarios, "location", None, False, formatter=force_list)

     # Delay between containers creation and object upload for preset
     object_upload_delay: Optional[int] = metadata_field(all_load_scenarios, "sleep", None, False)
@@ -142,9 +203,22 @@ class Preset(MetaConfig):
     # Flag to control preset errors
     ignore_errors: Optional[bool] = metadata_field(all_load_scenarios, "ignore-errors", None, False)

+    # Flag to ensure created containers store data on local endpoints
+    local: Optional[bool] = metadata_field(grpc_preset_scenarios, "local", None, False)
+

 @dataclass
-class LoadParams:
+class PrometheusParams(MetaConfig):
+    # Prometheus server URL
+    server_url: Optional[str] = metadata_field(all_load_scenarios, env_variable="K6_PROMETHEUS_RW_SERVER_URL", string_repr=False)
+    # Prometheus trend stats
+    trend_stats: Optional[str] = metadata_field(all_load_scenarios, env_variable="K6_PROMETHEUS_RW_TREND_STATS", string_repr=False)
+    # Additional tags
+    metrics_tags: Optional[str] = metadata_field(all_load_scenarios, None, "METRIC_TAGS", False)
+
+
+@dataclass
+class LoadParams(MetaConfig):
     # ------- CONTROL PARAMS -------
     # Load type can be gRPC, HTTP, S3.
     load_type: LoadType
@@ -172,33 +246,56 @@ class LoadParams(MetaConfig):
     preset: Optional[Preset] = None
     # K6 download url
     k6_url: Optional[str] = None
+    # Requests module url
+    requests_module_url: Optional[str] = None
+    # aws cli download url
+    awscli_url: Optional[str] = None
     # No ssl verification flag
     no_verify_ssl: Optional[bool] = metadata_field(
-        [LoadScenario.S3, LoadScenario.S3_CAR, LoadScenario.S3_MULTIPART, LoadScenario.VERIFY, LoadScenario.HTTP],
+        [
+            LoadScenario.S3,
+            LoadScenario.S3_CAR,
+            LoadScenario.S3_MULTIPART,
+            LoadScenario.S3_LOCAL,
+            LoadScenario.VERIFY,
+            LoadScenario.HTTP,
+        ],
         "no-verify-ssl",
         "NO_VERIFY_SSL",
         False,
     )
+    # Percentage of filling of all data disks on all nodes
+    fill_percent: Optional[float] = None
+    # if specified, max payload size in GB of the storage engine. If the storage engine is already full, no new objects will be saved.
+    max_total_size_gb: Optional[float] = metadata_field([LoadScenario.LOCAL, LoadScenario.S3_LOCAL], None, "MAX_TOTAL_SIZE_GB")
+    # if set, the payload is generated on the fly and is not read into memory fully.
+    streaming: Optional[int] = metadata_field(all_load_scenarios, None, "STREAMING", False)
+    # Output format
+    output: Optional[str] = metadata_field(all_load_scenarios, None, "K6_OUT", False)
+    # Prometheus params
+    prometheus: Optional[PrometheusParams] = None

     # ------- COMMON SCENARIO PARAMS -------
     # Load time is the maximum duration for k6 to give load. Default is the BACKGROUND_LOAD_DEFAULT_TIME value.
-    load_time: Optional[int] = metadata_field(all_load_scenarios, None, "DURATION", False)
+    load_time: Optional[int] = metadata_field(all_load_scenarios, None, "DURATION", False, formatter=convert_time_to_seconds)
     # Object size in KB for load and preset.
     object_size: Optional[int] = metadata_field(all_load_scenarios, "size", "WRITE_OBJ_SIZE", False)
     # For read operations, controls from which set get objects to read
     read_from: Optional[ReadFrom] = None
+    # For read operations done from REGISTRY, controls delay which object should live before it will be used for read operation
+    read_age: Optional[int] = metadata_field(all_load_scenarios, None, "READ_AGE", False)
     # Output registry K6 file. Filled automatically.
     registry_file: Optional[str] = metadata_field(all_scenarios, None, "REGISTRY_FILE", False)
registry_file: Optional[str] = metadata_field(all_scenarios, None, "REGISTRY_FILE", False)
|
||||||
|
# In case if we want to use custom registry file left from another load run
|
||||||
|
custom_registry: Optional[str] = None
|
||||||
|
# In case if we want to use custom registry file left from another load run
|
||||||
|
force_fresh_registry: Optional[bool] = None
|
||||||
# Specifies the minimum duration of every single execution (i.e. iteration).
|
# Specifies the minimum duration of every single execution (i.e. iteration).
|
||||||
# Any iterations that are shorter than this value will cause that VU to
|
# Any iterations that are shorter than this value will cause that VU to
|
||||||
# sleep for the remainder of the time until the specified minimum duration is reached.
|
# sleep for the remainder of the time until the specified minimum duration is reached.
|
||||||
min_iteration_duration: Optional[str] = metadata_field(
|
min_iteration_duration: Optional[str] = metadata_field(all_load_scenarios, None, "K6_MIN_ITERATION_DURATION", False)
|
||||||
all_load_scenarios, None, "K6_MIN_ITERATION_DURATION", False
|
|
||||||
)
|
|
||||||
# Prepare/cut objects locally on client before sending
|
# Prepare/cut objects locally on client before sending
|
||||||
prepare_locally: Optional[bool] = metadata_field(
|
prepare_locally: Optional[bool] = metadata_field([LoadScenario.gRPC, LoadScenario.gRPC_CAR], None, "PREPARE_LOCALLY", False)
|
||||||
[LoadScenario.gRPC, LoadScenario.gRPC_CAR], None, "PREPARE_LOCALLY", False
|
|
||||||
)
|
|
||||||
# Specifies K6 setupTimeout time. Currently hardcoded in xk6 as 5 seconds for all scenarios
|
# Specifies K6 setupTimeout time. Currently hardcoded in xk6 as 5 seconds for all scenarios
|
||||||
# https://k6.io/docs/using-k6/k6-options/reference/#setup-timeout
|
# https://k6.io/docs/using-k6/k6-options/reference/#setup-timeout
|
||||||
setup_timeout: Optional[str] = metadata_field(all_scenarios, None, "K6_SETUP_TIMEOUT", False)
|
setup_timeout: Optional[str] = metadata_field(all_scenarios, None, "K6_SETUP_TIMEOUT", False)
|
||||||
|
@ -219,83 +316,77 @@ class LoadParams:
|
||||||
|
|
||||||
# ------- CONSTANT ARRIVAL RATE SCENARIO PARAMS -------
|
# ------- CONSTANT ARRIVAL RATE SCENARIO PARAMS -------
|
||||||
# Number of iterations to start during each timeUnit period for write.
|
# Number of iterations to start during each timeUnit period for write.
|
||||||
write_rate: Optional[int] = metadata_field(
|
write_rate: Optional[int] = metadata_field(constant_arrival_rate_scenarios, None, "WRITE_RATE", True, True)
|
||||||
constant_arrival_rate_scenarios, None, "WRITE_RATE", True, True
|
|
||||||
)
|
|
||||||
|
|
||||||
# Number of iterations to start during each timeUnit period for read.
|
# Number of iterations to start during each timeUnit period for read.
|
||||||
read_rate: Optional[int] = metadata_field(
|
read_rate: Optional[int] = metadata_field(constant_arrival_rate_scenarios, None, "READ_RATE", True, True)
|
||||||
constant_arrival_rate_scenarios, None, "READ_RATE", True, True
|
|
||||||
)
|
|
||||||
|
|
||||||
# Number of iterations to start during each timeUnit period for delete.
|
# Number of iterations to start during each timeUnit period for delete.
|
||||||
delete_rate: Optional[int] = metadata_field(
|
delete_rate: Optional[int] = metadata_field(constant_arrival_rate_scenarios, None, "DELETE_RATE", True, True)
|
||||||
constant_arrival_rate_scenarios, None, "DELETE_RATE", True, True
|
|
||||||
)
|
|
||||||
|
|
||||||
# Amount of preAllocatedVUs for write operations.
|
# Amount of preAllocatedVUs for write operations.
|
||||||
preallocated_writers: Optional[int] = metadata_field(
|
preallocated_writers: Optional[int] = metadata_field(constant_arrival_rate_scenarios, None, "PRE_ALLOC_WRITERS", True, True)
|
||||||
constant_arrival_rate_scenarios, None, "PRE_ALLOC_WRITERS", True, True
|
|
||||||
)
|
|
||||||
# Amount of maxVUs for write operations.
|
# Amount of maxVUs for write operations.
|
||||||
max_writers: Optional[int] = metadata_field(
|
max_writers: Optional[int] = metadata_field(constant_arrival_rate_scenarios, None, "MAX_WRITERS", False, True)
|
||||||
constant_arrival_rate_scenarios, None, "MAX_WRITERS", False, True
|
|
||||||
)
|
|
||||||
|
|
||||||
# Amount of preAllocatedVUs for read operations.
|
# Amount of preAllocatedVUs for read operations.
|
||||||
preallocated_readers: Optional[int] = metadata_field(
|
preallocated_readers: Optional[int] = metadata_field(constant_arrival_rate_scenarios, None, "PRE_ALLOC_READERS", True, True)
|
||||||
constant_arrival_rate_scenarios, None, "PRE_ALLOC_READERS", True, True
|
|
||||||
)
|
|
||||||
# Amount of maxVUs for read operations.
|
# Amount of maxVUs for read operations.
|
||||||
max_readers: Optional[int] = metadata_field(
|
max_readers: Optional[int] = metadata_field(constant_arrival_rate_scenarios, None, "MAX_READERS", False, True)
|
||||||
constant_arrival_rate_scenarios, None, "MAX_READERS", False, True
|
|
||||||
)
|
|
||||||
|
|
||||||
# Amount of preAllocatedVUs for read operations.
|
# Amount of preAllocatedVUs for read operations.
|
||||||
preallocated_deleters: Optional[int] = metadata_field(
|
preallocated_deleters: Optional[int] = metadata_field(constant_arrival_rate_scenarios, None, "PRE_ALLOC_DELETERS", True, True)
|
||||||
constant_arrival_rate_scenarios, None, "PRE_ALLOC_DELETERS", True, True
|
|
||||||
)
|
|
||||||
# Amount of maxVUs for delete operations.
|
# Amount of maxVUs for delete operations.
|
||||||
max_deleters: Optional[int] = metadata_field(
|
max_deleters: Optional[int] = metadata_field(constant_arrival_rate_scenarios, None, "MAX_DELETERS", False, True)
|
||||||
constant_arrival_rate_scenarios, None, "MAX_DELETERS", False, True
|
|
||||||
)
|
|
||||||
|
|
||||||
# Multipart
|
# Multipart
|
||||||
# Number of parts to upload in parallel
|
# Number of parts to upload in parallel
|
||||||
writers_multipart: Optional[int] = metadata_field(
|
writers_multipart: Optional[int] = metadata_field([LoadScenario.S3_MULTIPART], None, "WRITERS_MULTIPART", False, True)
|
||||||
[LoadScenario.S3_MULTIPART], None, "WRITERS_MULTIPART", False, True
|
|
||||||
)
|
|
||||||
# part size must be greater than (5 MB)
|
# part size must be greater than (5 MB)
|
||||||
write_object_part_size: Optional[int] = metadata_field([LoadScenario.S3_MULTIPART], None, "WRITE_OBJ_PART_SIZE", False)
|
write_object_part_size: Optional[int] = metadata_field([LoadScenario.S3_MULTIPART], None, "WRITE_OBJ_PART_SIZE", False)
|
||||||
|
|
||||||
# Period of time to apply the rate value.
|
# Period of time to apply the rate value.
|
||||||
time_unit: Optional[str] = metadata_field(
|
time_unit: Optional[str] = metadata_field(constant_arrival_rate_scenarios, None, "TIME_UNIT", False)
|
||||||
constant_arrival_rate_scenarios, None, "TIME_UNIT", False
|
|
||||||
)
|
|
||||||
|
|
||||||
# ------- VERIFY SCENARIO PARAMS -------
|
# ------- VERIFY SCENARIO PARAMS -------
|
||||||
# Maximum verification time for k6 to verify objects. Default is BACKGROUND_LOAD_MAX_VERIFY_TIME (3600).
|
# Maximum verification time for k6 to verify objects. Default is BACKGROUND_LOAD_MAX_VERIFY_TIME (3600).
|
||||||
verify_time: Optional[int] = metadata_field([LoadScenario.VERIFY], None, "TIME_LIMIT", False)
|
verify_time: Optional[int] = metadata_field([LoadScenario.VERIFY], None, "TIME_LIMIT", False)
|
||||||
# Amount of Verification VU.
|
# Amount of Verification VU.
|
||||||
verify_clients: Optional[int] = metadata_field(
|
verify_clients: Optional[int] = metadata_field([LoadScenario.VERIFY], None, "CLIENTS", True, False)
|
||||||
[LoadScenario.VERIFY], None, "CLIENTS", True, False
|
|
||||||
)
|
|
||||||
|
|
||||||
# ------- LOCAL SCENARIO PARAMS -------
|
# ------- LOCAL SCENARIO PARAMS -------
|
||||||
# Config file location (filled automatically)
|
# Config file location (filled automatically)
|
||||||
config_file: Optional[str] = metadata_field([LoadScenario.LOCAL], None, "CONFIG_FILE", False)
|
config_file: Optional[str] = metadata_field([LoadScenario.LOCAL, LoadScenario.S3_LOCAL], None, "CONFIG_FILE", False)
|
||||||
|
# Config directory location (filled automatically)
|
||||||
|
config_dir: Optional[str] = metadata_field([LoadScenario.LOCAL, LoadScenario.S3_LOCAL], None, "CONFIG_DIR", False)
|
||||||
|
|
||||||
def set_id(self, load_id):
|
def set_id(self, load_id):
|
||||||
self.load_id = load_id
|
self.load_id = load_id
|
||||||
|
|
||||||
if self.read_from == ReadFrom.REGISTRY:
|
if self.read_from == ReadFrom.REGISTRY:
|
||||||
self.registry_file = os.path.join(self.working_dir, f"{load_id}_registry.bolt")
|
self.registry_file = os.path.join(self.working_dir, f"{load_id}_registry.bolt")
|
||||||
|
|
||||||
|
# For now it's okay to have it this way
|
||||||
|
if self.custom_registry is not None:
|
||||||
|
self.registry_file = self.custom_registry
|
||||||
|
|
||||||
if self.read_from == ReadFrom.PRESET:
|
if self.read_from == ReadFrom.PRESET:
|
||||||
self.registry_file = None
|
self.registry_file = None
|
||||||
|
|
||||||
if self.preset:
|
if self.preset:
|
||||||
self.preset.pregen_json = os.path.join(self.working_dir, f"{load_id}_prepare.json")
|
self.preset.pregen_json = os.path.join(self.working_dir, f"{load_id}_prepare.json")
|
||||||
|
|
||||||
|
def get_k6_vars(self):
|
||||||
|
env_vars = {
|
||||||
|
meta_field.metadata["scenario_variable"]: meta_field.value
|
||||||
|
for meta_field in self._get_meta_fields(self)
|
||||||
|
if self.scenario in meta_field.metadata["applicable_scenarios"]
|
||||||
|
and meta_field.metadata["scenario_variable"]
|
||||||
|
and meta_field.value is not None
|
||||||
|
}
|
||||||
|
|
||||||
|
return env_vars
|
||||||
|
|
||||||
def get_env_vars(self):
|
def get_env_vars(self):
|
||||||
env_vars = {
|
env_vars = {
|
||||||
meta_field.metadata["env_variable"]: meta_field.value
|
meta_field.metadata["env_variable"]: meta_field.value
|
||||||
|
@ -333,10 +424,8 @@ class LoadParams:
|
||||||
return math.ceil(self._get_total_vus() * self.vu_init_time)
|
return math.ceil(self._get_total_vus() * self.vu_init_time)
|
||||||
|
|
||||||
def _get_total_vus(self) -> int:
|
def _get_total_vus(self) -> int:
|
||||||
vu_fields = ["writers", "preallocated_writers"]
|
vu_fields = ["writers", "preallocated_writers", "readers", "preallocated_readers"]
|
||||||
data_fields = [
|
data_fields = [getattr(self, field.name) or 0 for field in fields(self) if field.name in vu_fields]
|
||||||
getattr(self, field.name) or 0 for field in fields(self) if field.name in vu_fields
|
|
||||||
]
|
|
||||||
return sum(data_fields)
|
return sum(data_fields)
|
||||||
|
|
||||||
def _get_applicable_fields(self):
|
def _get_applicable_fields(self):
|
||||||
|
@ -354,6 +443,11 @@ class LoadParams:
|
||||||
# For preset calls, bool values are passed with just --<argument_name> if the value is True
|
# For preset calls, bool values are passed with just --<argument_name> if the value is True
|
||||||
return f"--{meta_field.metadata['preset_argument']}" if meta_field.value else ""
|
return f"--{meta_field.metadata['preset_argument']}" if meta_field.value else ""
|
||||||
|
|
||||||
|
if isinstance(meta_field.value, list):
|
||||||
|
return (
|
||||||
|
" ".join(f"--{meta_field.metadata['preset_argument']} '{value}'" for value in meta_field.value) if meta_field.value else ""
|
||||||
|
)
|
||||||
|
|
||||||
return f"--{meta_field.metadata['preset_argument']} '{meta_field.value}'"
|
return f"--{meta_field.metadata['preset_argument']} '{meta_field.value}'"
|
||||||
|
|
||||||
@staticmethod
|
@staticmethod
|
||||||
|
@ -367,9 +461,7 @@ class LoadParams:
|
||||||
]
|
]
|
||||||
|
|
||||||
for field in data_fields:
|
for field in data_fields:
|
||||||
actual_field_type = (
|
actual_field_type = get_args(field.type)[0] if len(get_args(field.type)) else get_args(field.type)
|
||||||
get_args(field.type)[0] if len(get_args(field.type)) else get_args(field.type)
|
|
||||||
)
|
|
||||||
if is_dataclass(actual_field_type) and getattr(instance, field.name):
|
if is_dataclass(actual_field_type) and getattr(instance, field.name):
|
||||||
fields_with_data += LoadParams._get_meta_fields(getattr(instance, field.name))
|
fields_with_data += LoadParams._get_meta_fields(getattr(instance, field.name))
|
||||||
|
|
||||||
|
@ -385,9 +477,7 @@ class LoadParams:
|
||||||
static_params = [f"{load_type_str}"]
|
static_params = [f"{load_type_str}"]
|
||||||
|
|
||||||
dynamic_params = [
|
dynamic_params = [
|
||||||
f"{meta_field.name}={meta_field.value}"
|
f"{meta_field.name}={meta_field.value}" for meta_field in self._get_applicable_fields() if meta_field.metadata["string_repr"]
|
||||||
for meta_field in self._get_applicable_fields()
|
|
||||||
if meta_field.metadata["string_repr"]
|
|
||||||
]
|
]
|
||||||
params = ", ".join(static_params + dynamic_params)
|
params = ", ".join(static_params + dynamic_params)
|
||||||
|
|
||||||
|
|
|
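The hunks above (apparently the load config module) turn container_placement_policy and s3_location into list-valued fields with formatter=force_list, and teach the preset-argument builder to expand lists into repeated CLI flags. A minimal illustrative sketch of that behavior follows; force_list and preset_argument are reimplemented here as assumptions, the real helpers live elsewhere in frostfs-testlib:

# Illustrative only: mirrors the behavior implied by the diff, not the testlib code.
from typing import Optional


def force_list(value) -> Optional[list]:
    # Wrap a scalar into a single-element list; pass lists (and None) through.
    if value is None or isinstance(value, list):
        return value
    return [value]


def preset_argument(name: str, value) -> str:
    # Lists become repeated flags, matching the new list branch in _get_preset_argument.
    if isinstance(value, list):
        return " ".join(f"--{name} '{item}'" for item in value) if value else ""
    return f"--{name} '{value}'"


print(force_list("REP 2 IN X"))                       # ['REP 2 IN X']
print(preset_argument("policy", ["REP 1", "REP 2"]))  # --policy 'REP 1' --policy 'REP 2'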
@@ -1,95 +1,47 @@
 from abc import ABC
-from typing import Any
+from typing import Any, Optional
 
 from frostfs_testlib.load.load_config import LoadScenario
 
 
-class MetricsBase(ABC):
-    _WRITE_SUCCESS = ""
-    _WRITE_ERRORS = ""
-    _WRITE_THROUGHPUT = "data_sent"
-    _WRITE_LATENCY = ""
-
-    _READ_SUCCESS = ""
-    _READ_ERRORS = ""
-    _READ_LATENCY = ""
-    _READ_THROUGHPUT = "data_received"
-
-    _DELETE_SUCCESS = ""
-    _DELETE_LATENCY = ""
-    _DELETE_ERRORS = ""
+class OperationMetric(ABC):
+    _NAME = ""
+    _SUCCESS = ""
+    _ERRORS = ""
+    _THROUGHPUT = ""
+    _LATENCY = ""
 
     def __init__(self, summary) -> None:
         self.summary = summary
         self.metrics = summary["metrics"]
 
     @property
-    def write_total_iterations(self) -> int:
-        return self._get_metric(self._WRITE_SUCCESS) + self._get_metric(self._WRITE_ERRORS)
+    def total_iterations(self) -> int:
+        return self._get_metric(self._SUCCESS) + self._get_metric(self._ERRORS)
 
     @property
-    def write_success_iterations(self) -> int:
-        return self._get_metric(self._WRITE_SUCCESS)
+    def success_iterations(self) -> int:
+        return self._get_metric(self._SUCCESS)
 
     @property
-    def write_latency(self) -> dict:
-        return self._get_metric(self._WRITE_LATENCY)
+    def latency(self) -> dict:
+        return self._get_metric(self._LATENCY)
 
     @property
-    def write_rate(self) -> float:
-        return self._get_metric_rate(self._WRITE_SUCCESS)
+    def rate(self) -> float:
+        return self._get_metric_rate(self._SUCCESS)
 
     @property
-    def write_failed_iterations(self) -> int:
-        return self._get_metric(self._WRITE_ERRORS)
+    def failed_iterations(self) -> int:
+        return self._get_metric(self._ERRORS)
 
     @property
-    def write_throughput(self) -> float:
-        return self._get_metric_rate(self._WRITE_THROUGHPUT)
+    def throughput(self) -> float:
+        return self._get_metric_rate(self._THROUGHPUT)
 
     @property
-    def read_total_iterations(self) -> int:
-        return self._get_metric(self._READ_SUCCESS) + self._get_metric(self._READ_ERRORS)
-
-    @property
-    def read_success_iterations(self) -> int:
-        return self._get_metric(self._READ_SUCCESS)
-
-    @property
-    def read_latency(self) -> dict:
-        return self._get_metric(self._READ_LATENCY)
-
-    @property
-    def read_rate(self) -> int:
-        return self._get_metric_rate(self._READ_SUCCESS)
-
-    @property
-    def read_failed_iterations(self) -> int:
-        return self._get_metric(self._READ_ERRORS)
-
-    @property
-    def read_throughput(self) -> float:
-        return self._get_metric_rate(self._READ_THROUGHPUT)
-
-    @property
-    def delete_total_iterations(self) -> int:
-        return self._get_metric(self._DELETE_SUCCESS) + self._get_metric(self._DELETE_ERRORS)
-
-    @property
-    def delete_success_iterations(self) -> int:
-        return self._get_metric(self._DELETE_SUCCESS)
-
-    @property
-    def delete_latency(self) -> dict:
-        return self._get_metric(self._DELETE_LATENCY)
-
-    @property
-    def delete_failed_iterations(self) -> int:
-        return self._get_metric(self._DELETE_ERRORS)
-
-    @property
-    def delete_rate(self) -> int:
-        return self._get_metric_rate(self._DELETE_SUCCESS)
+    def total_bytes(self) -> float:
+        return self._get_metric(self._THROUGHPUT)
 
     def _get_metric(self, metric: str) -> int:
         metrics_method_map = {
 
@@ -104,9 +56,7 @@ class MetricsBase(ABC):
         metric = self.metrics[metric]
         metric_type = metric["type"]
         if metric_type not in metrics_method_map:
-            raise Exception(
-                f"Unsupported metric type: {metric_type}, supported: {metrics_method_map.keys()}"
-            )
+            raise Exception(f"Unsupported metric type: {metric_type}, supported: {metrics_method_map.keys()}")
 
         return metrics_method_map[metric_type](metric)
 
@@ -119,9 +69,7 @@ class MetricsBase(ABC):
         metric = self.metrics[metric]
         metric_type = metric["type"]
         if metric_type not in metrics_method_map:
-            raise Exception(
-                f"Unsupported rate metric type: {metric_type}, supported: {metrics_method_map.keys()}"
-            )
+            raise Exception(f"Unsupported rate metric type: {metric_type}, supported: {metrics_method_map.keys()}")
 
         return metrics_method_map[metric_type](metric)
 
@@ -138,55 +86,145 @@ class MetricsBase(ABC):
         return metric["values"]
 
 
+class WriteOperationMetric(OperationMetric):
+    _NAME = "Write"
+    _SUCCESS = ""
+    _ERRORS = ""
+    _THROUGHPUT = "data_sent"
+    _LATENCY = ""
+
+
+class ReadOperationMetric(OperationMetric):
+    _NAME = "Read"
+    _SUCCESS = ""
+    _ERRORS = ""
+    _THROUGHPUT = "data_received"
+    _LATENCY = ""
+
+
+class DeleteOperationMetric(OperationMetric):
+    _NAME = "Delete"
+    _SUCCESS = ""
+    _ERRORS = ""
+    _THROUGHPUT = ""
+    _LATENCY = ""
+
+
+class GrpcWriteOperationMetric(WriteOperationMetric):
+    _SUCCESS = "frostfs_obj_put_success"
+    _ERRORS = "frostfs_obj_put_fails"
+    _LATENCY = "frostfs_obj_put_duration"
+
+
+class GrpcReadOperationMetric(ReadOperationMetric):
+    _SUCCESS = "frostfs_obj_get_success"
+    _ERRORS = "frostfs_obj_get_fails"
+    _LATENCY = "frostfs_obj_get_duration"
+
+
+class GrpcDeleteOperationMetric(DeleteOperationMetric):
+    _SUCCESS = "frostfs_obj_delete_success"
+    _ERRORS = "frostfs_obj_delete_fails"
+    _LATENCY = "frostfs_obj_delete_duration"
+
+
+class S3WriteOperationMetric(WriteOperationMetric):
+    _SUCCESS = "aws_obj_put_success"
+    _ERRORS = "aws_obj_put_fails"
+    _LATENCY = "aws_obj_put_duration"
+
+
+class S3ReadOperationMetric(ReadOperationMetric):
+    _SUCCESS = "aws_obj_get_success"
+    _ERRORS = "aws_obj_get_fails"
+    _LATENCY = "aws_obj_get_duration"
+
+
+class S3DeleteOperationMetric(DeleteOperationMetric):
+    _SUCCESS = "aws_obj_delete_success"
+    _ERRORS = "aws_obj_delete_fails"
+    _LATENCY = "aws_obj_delete_duration"
+
+
+class S3LocalWriteOperationMetric(WriteOperationMetric):
+    _SUCCESS = "s3local_obj_put_success"
+    _ERRORS = "s3local_obj_put_fails"
+    _LATENCY = "s3local_obj_put_duration"
+
+
+class S3LocalReadOperationMetric(ReadOperationMetric):
+    _SUCCESS = "s3local_obj_get_success"
+    _ERRORS = "s3local_obj_get_fails"
+    _LATENCY = "s3local_obj_get_duration"
+
+
+class LocalWriteOperationMetric(WriteOperationMetric):
+    _SUCCESS = "local_obj_put_success"
+    _ERRORS = "local_obj_put_fails"
+    _LATENCY = "local_obj_put_duration"
+
+
+class LocalReadOperationMetric(ReadOperationMetric):
+    _SUCCESS = "local_obj_get_success"
+    _ERRORS = "local_obj_get_fails"
+
+
+class LocalDeleteOperationMetric(DeleteOperationMetric):
+    _SUCCESS = "local_obj_delete_success"
+    _ERRORS = "local_obj_delete_fails"
+
+
+class VerifyReadOperationMetric(ReadOperationMetric):
+    _SUCCESS = "verified_obj"
+    _ERRORS = "invalid_obj"
+
+
+class MetricsBase(ABC):
+    def __init__(self) -> None:
+        self.write: Optional[WriteOperationMetric] = None
+        self.read: Optional[ReadOperationMetric] = None
+        self.delete: Optional[DeleteOperationMetric] = None
+
+    @property
+    def operations(self) -> list[OperationMetric]:
+        return [metric for metric in [self.write, self.read, self.delete] if metric is not None]
+
+
 class GrpcMetrics(MetricsBase):
-    _WRITE_SUCCESS = "frostfs_obj_put_total"
-    _WRITE_ERRORS = "frostfs_obj_put_fails"
-    _WRITE_LATENCY = "frostfs_obj_put_duration"
-
-    _READ_SUCCESS = "frostfs_obj_get_total"
-    _READ_ERRORS = "frostfs_obj_get_fails"
-    _READ_LATENCY = "frostfs_obj_get_duration"
-
-    _DELETE_SUCCESS = "frostfs_obj_delete_total"
-    _DELETE_ERRORS = "frostfs_obj_delete_fails"
-    _DELETE_LATENCY = "frostfs_obj_delete_duration"
+    def __init__(self, summary) -> None:
+        super().__init__()
+        self.write = GrpcWriteOperationMetric(summary)
+        self.read = GrpcReadOperationMetric(summary)
+        self.delete = GrpcDeleteOperationMetric(summary)
 
 
 class S3Metrics(MetricsBase):
-    _WRITE_SUCCESS = "aws_obj_put_total"
-    _WRITE_ERRORS = "aws_obj_put_fails"
-    _WRITE_LATENCY = "aws_obj_put_duration"
-
-    _READ_SUCCESS = "aws_obj_get_total"
-    _READ_ERRORS = "aws_obj_get_fails"
-    _READ_LATENCY = "aws_obj_get_duration"
-
-    _DELETE_SUCCESS = "aws_obj_delete_total"
-    _DELETE_ERRORS = "aws_obj_delete_fails"
-    _DELETE_LATENCY = "aws_obj_delete_duration"
+    def __init__(self, summary) -> None:
+        super().__init__()
+        self.write = S3WriteOperationMetric(summary)
+        self.read = S3ReadOperationMetric(summary)
+        self.delete = S3DeleteOperationMetric(summary)
+
+
+class S3LocalMetrics(MetricsBase):
+    def __init__(self, summary) -> None:
+        super().__init__()
+        self.write = S3LocalWriteOperationMetric(summary)
+        self.read = S3LocalReadOperationMetric(summary)
 
 
 class LocalMetrics(MetricsBase):
-    _WRITE_SUCCESS = "local_obj_put_total"
-    _WRITE_ERRORS = "local_obj_put_fails"
-    _WRITE_LATENCY = "local_obj_put_duration"
-
-    _READ_SUCCESS = "local_obj_get_total"
-    _READ_ERRORS = "local_obj_get_fails"
-
-    _DELETE_SUCCESS = "local_obj_delete_total"
-    _DELETE_ERRORS = "local_obj_delete_fails"
+    def __init__(self, summary) -> None:
+        super().__init__()
+        self.write = LocalWriteOperationMetric(summary)
+        self.read = LocalReadOperationMetric(summary)
+        self.delete = LocalDeleteOperationMetric(summary)
 
 
 class VerifyMetrics(MetricsBase):
-    _WRITE_SUCCESS = "N/A"
-    _WRITE_ERRORS = "N/A"
-
-    _READ_SUCCESS = "verified_obj"
-    _READ_ERRORS = "invalid_obj"
-
-    _DELETE_SUCCESS = "N/A"
-    _DELETE_ERRORS = "N/A"
+    def __init__(self, summary) -> None:
+        super().__init__()
+        self.read = VerifyReadOperationMetric(summary)
 
 
 def get_metrics_object(load_type: LoadScenario, summary: dict[str, Any]) -> MetricsBase:
 
@@ -197,6 +235,7 @@ def get_metrics_object(load_type: LoadScenario, summary: dict[str, Any]) -> MetricsBase:
         LoadScenario.S3: S3Metrics,
         LoadScenario.S3_CAR: S3Metrics,
         LoadScenario.S3_MULTIPART: S3Metrics,
+        LoadScenario.S3_LOCAL: S3LocalMetrics,
         LoadScenario.VERIFY: VerifyMetrics,
         LoadScenario.LOCAL: LocalMetrics,
     }
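The hunks above (apparently the k6 metrics module) replace the flat write_*/read_*/delete_* properties with per-operation objects hung off MetricsBase, so each backend only wires up the operations it actually supports. A stand-in sketch of how call sites change; the classes below are stubs invented for illustration, not the testlib ones:

class _Op:
    # Stub for an OperationMetric-like object.
    def __init__(self, success: int, errors: int) -> None:
        self.success_iterations = success
        self.failed_iterations = errors
        self.total_iterations = success + errors


class _Metrics:
    # Stub for a MetricsBase-like object with nested operations.
    def __init__(self) -> None:
        self.write = _Op(success=100, errors=2)
        self.read = _Op(success=500, errors=0)


metrics = _Metrics()
# Old style (removed):   metrics.write_success_iterations
# New style (this diff): metrics.write.success_iterations
assert metrics.write.success_iterations == 100
assert metrics.read.total_iterations == 500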
@@ -3,8 +3,8 @@ from typing import Optional
 
 import yaml
 
+from frostfs_testlib.load.interfaces.summarized import SummarizedStats
 from frostfs_testlib.load.load_config import K6ProcessAllocationStrategy, LoadParams, LoadScenario
-from frostfs_testlib.load.load_metrics import get_metrics_object
 from frostfs_testlib.utils.converting_utils import calc_unit
 
 
@@ -17,11 +17,15 @@ class LoadReport:
         self.start_time: Optional[datetime] = None
         self.end_time: Optional[datetime] = None
 
-    def set_start_time(self):
-        self.start_time = datetime.utcnow()
+    def set_start_time(self, time: datetime = None):
+        if time is None:
+            time = datetime.utcnow()
+        self.start_time = time
 
-    def set_end_time(self):
-        self.end_time = datetime.utcnow()
+    def set_end_time(self, time: datetime = None):
+        if time is None:
+            time = datetime.utcnow()
+        self.end_time = time
 
     def add_summaries(self, load_summaries: dict):
         self.load_summaries_list.append(load_summaries)
 
@@ -31,6 +35,7 @@ class LoadReport:
 
     def get_report_html(self):
         report_sections = [
+            [self.load_params, self._get_load_id_section_html],
             [self.load_test, self._get_load_params_section_html],
             [self.load_summaries_list, self._get_totals_section_html],
             [self.end_time, self._get_test_time_html],
 
@@ -44,9 +49,7 @@ class LoadReport:
         return html
 
     def _get_load_params_section_html(self) -> str:
-        params: str = yaml.safe_dump(
-            [self.load_test], sort_keys=False, indent=2, explicit_start=True
-        )
+        params: str = yaml.safe_dump([self.load_test], sort_keys=False, indent=2, explicit_start=True)
         params = params.replace("\n", "<br>").replace(" ", "&nbsp;")
         section_html = f"""<h3>Scenario params</h3>
 
@@ -55,8 +58,17 @@ class LoadReport:
 
         return section_html
 
+    def _get_load_id_section_html(self) -> str:
+        section_html = f"""<h3>Load ID: {self.load_params.load_id}</h3>
+        <hr>"""
+
+        return section_html
+
     def _get_test_time_html(self) -> str:
-        html = f"""<h3>Scenario duration in UTC time (from agent)</h3>
+        if not self.start_time or not self.end_time:
+            return ""
+
+        html = f"""<h3>Scenario duration</h3>
 {self.start_time} - {self.end_time}<br>
 <hr>
 """
 
@@ -97,72 +109,57 @@ class LoadReport:
             LoadScenario.gRPC_CAR: "open model",
             LoadScenario.S3_CAR: "open model",
             LoadScenario.LOCAL: "local fill",
+            LoadScenario.S3_LOCAL: "local fill",
         }
 
         return model_map[self.load_params.scenario]
 
-    def _get_operations_sub_section_html(
-        self,
-        operation_type: str,
-        total_operations: int,
-        requested_rate_str: str,
-        vus_str: str,
-        total_rate: float,
-        throughput: float,
-        errors: dict[str, int],
-        latency: dict[str, dict],
-    ):
+    def _get_operations_sub_section_html(self, operation_type: str, stats: SummarizedStats):
         throughput_html = ""
-        if throughput > 0:
-            throughput, unit = calc_unit(throughput)
+        if stats.throughput > 0:
+            throughput, unit = calc_unit(stats.throughput)
             throughput_html = self._row("Throughput", f"{throughput:.2f} {unit}/sec")
 
+        bytes_html = ""
+        if stats.total_bytes > 0:
+            total_bytes, total_bytes_unit = calc_unit(stats.total_bytes)
+            bytes_html = self._row("Total transferred", f"{total_bytes:.2f} {total_bytes_unit}")
+
         per_node_errors_html = ""
-        total_errors = 0
-        if errors:
-            total_errors: int = 0
-            for node_key, errors in errors.items():
-                total_errors += errors
-                if (
-                    self.load_params.k6_process_allocation_strategy
-                    == K6ProcessAllocationStrategy.PER_ENDPOINT
-                ):
-                    per_node_errors_html += self._row(f"At {node_key}", errors)
+        for node_key, errors in stats.errors.by_node.items():
+            if self.load_params.k6_process_allocation_strategy == K6ProcessAllocationStrategy.PER_ENDPOINT:
+                per_node_errors_html += self._row(f"At {node_key}", errors)
 
         latency_html = ""
-        if latency:
-            for node_key, latency_dict in latency.items():
-                latency_values = "N/A"
-                if latency_dict:
-                    latency_values = ""
-                    for param_name, param_val in latency_dict.items():
-                        latency_values += f"{param_name}={param_val:.2f}ms "
-
-                latency_html += self._row(
-                    f"{operation_type} latency {node_key.split(':')[0]}", latency_values
-                )
+        for node_key, latencies in stats.latencies.by_node.items():
+            latency_values = "N/A"
+            if latencies:
+                latency_values = ""
+                for param_name, param_val in latencies.items():
+                    latency_values += f"{param_name}={param_val:.2f}ms "
+
+            latency_html += self._row(f"{operation_type} latency {node_key.split(':')[0]}", latency_values)
 
         object_size, object_size_unit = calc_unit(self.load_params.object_size, 1)
         duration = self._seconds_to_formatted_duration(self.load_params.load_time)
         model = self._get_model_string()
+        requested_rate_str = f"{stats.requested_rate}op/sec" if stats.requested_rate else ""
         # write 8KB 15h49m 50op/sec 50th open model/closed model/min_iteration duration=1s - 1.636MB/s 199.57451/s
-        short_summary = f"{operation_type} {object_size}{object_size_unit} {duration} {requested_rate_str} {vus_str} {model} - {throughput:.2f}{unit}/s {total_rate:.2f}/s"
-        errors_percent = 0
-        if total_operations:
-            errors_percent = total_errors / total_operations * 100.0
+        short_summary = f"{operation_type} {object_size}{object_size_unit} {duration} {requested_rate_str} {stats.threads}th {model} - {throughput:.2f}{unit}/s {stats.rate:.2f}/s"
 
         html = f"""
         <table border="1" cellpadding="5px"><tbody>
         <tr><th colspan="2" bgcolor="gainsboro">{short_summary}</th></tr>
         <tr><th colspan="2" bgcolor="gainsboro">Metrics</th></tr>
-        {self._row("Total operations", total_operations)}
-        {self._row("OP/sec", f"{total_rate:.2f}")}
+        {self._row("Total operations", stats.operations)}
+        {self._row("OP/sec", f"{stats.rate:.2f}")}
+        {bytes_html}
        {throughput_html}
        {latency_html}
        <tr><th colspan="2" bgcolor="gainsboro">Errors</th></tr>
        {per_node_errors_html}
-        {self._row("Total", f"{total_errors} ({errors_percent:.2f}%)")}
-        {self._row("Threshold", f"{self.load_params.error_threshold:.2f}%")}
+        {self._row("Total", f"{stats.errors.total} ({stats.errors.percent:.2f}%)")}
+        {self._row("Threshold", f"{stats.errors.threshold:.2f}%")}
        </tbody></table><br><hr>
        """
 
@@ -170,121 +167,12 @@ class LoadReport:
 
     def _get_totals_section_html(self):
         html = ""
-        for i, load_summaries in enumerate(self.load_summaries_list, 1):
-            html += f"<h3>Load Results for load #{i}</h3>"
-
-            write_operations = 0
-            write_op_sec = 0
-            write_throughput = 0
-            write_latency = {}
-            write_errors = {}
-            requested_write_rate = self.load_params.write_rate
-            requested_write_rate_str = (
-                f"{requested_write_rate}op/sec" if requested_write_rate else ""
-            )
-
-            read_operations = 0
-            read_op_sec = 0
-            read_throughput = 0
-            read_latency = {}
-            read_errors = {}
-            requested_read_rate = self.load_params.read_rate
-            requested_read_rate_str = f"{requested_read_rate}op/sec" if requested_read_rate else ""
-
-            delete_operations = 0
-            delete_op_sec = 0
-            delete_latency = {}
-            delete_errors = {}
-            requested_delete_rate = self.load_params.delete_rate
-            requested_delete_rate_str = (
-                f"{requested_delete_rate}op/sec" if requested_delete_rate else ""
-            )
-
-            if self.load_params.scenario in [LoadScenario.gRPC_CAR, LoadScenario.S3_CAR]:
-                delete_vus = max(
-                    self.load_params.preallocated_deleters or 0, self.load_params.max_deleters or 0
-                )
-                write_vus = max(
-                    self.load_params.preallocated_writers or 0, self.load_params.max_writers or 0
-                )
-                read_vus = max(
-                    self.load_params.preallocated_readers or 0, self.load_params.max_readers or 0
-                )
-            else:
-                write_vus = self.load_params.writers
-                read_vus = self.load_params.readers
-                delete_vus = self.load_params.deleters
-
-            write_vus_str = f"{write_vus}th"
-            read_vus_str = f"{read_vus}th"
-            delete_vus_str = f"{delete_vus}th"
-
-            write_section_required = False
-            read_section_required = False
-            delete_section_required = False
-
-            for node_key, load_summary in load_summaries.items():
-                metrics = get_metrics_object(self.load_params.scenario, load_summary)
-                write_operations += metrics.write_total_iterations
-                if write_operations:
-                    write_section_required = True
-                    write_op_sec += metrics.write_rate
-                    write_latency[node_key] = metrics.write_latency
-                    write_throughput += metrics.write_throughput
-                    if metrics.write_failed_iterations:
-                        write_errors[node_key] = metrics.write_failed_iterations
-
-                read_operations += metrics.read_total_iterations
-                if read_operations:
-                    read_section_required = True
-                    read_op_sec += metrics.read_rate
-                    read_throughput += metrics.read_throughput
-                    read_latency[node_key] = metrics.read_latency
-                    if metrics.read_failed_iterations:
-                        read_errors[node_key] = metrics.read_failed_iterations
-
-                delete_operations += metrics.delete_total_iterations
-                if delete_operations:
-                    delete_section_required = True
-                    delete_op_sec += metrics.delete_rate
-                    delete_latency[node_key] = metrics.delete_latency
-                    if metrics.delete_failed_iterations:
-                        delete_errors[node_key] = metrics.delete_failed_iterations
-
-            if write_section_required:
-                html += self._get_operations_sub_section_html(
-                    "Write",
-                    write_operations,
-                    requested_write_rate_str,
-                    write_vus_str,
-                    write_op_sec,
-                    write_throughput,
-                    write_errors,
-                    write_latency,
-                )
-
-            if read_section_required:
-                html += self._get_operations_sub_section_html(
-                    "Read",
-                    read_operations,
-                    requested_read_rate_str,
-                    read_vus_str,
-                    read_op_sec,
-                    read_throughput,
-                    read_errors,
-                    read_latency,
-                )
-
-            if delete_section_required:
-                html += self._get_operations_sub_section_html(
-                    "Delete",
-                    delete_operations,
-                    requested_delete_rate_str,
-                    delete_vus_str,
-                    delete_op_sec,
-                    0,
-                    delete_errors,
-                    delete_latency,
-                )
+        for i in range(len(self.load_summaries_list)):
+            html += f"<h3>Load Results for load #{i+1}</h3>"
+
+            summarized = SummarizedStats.collect(self.load_params, self.load_summaries_list[i])
+            for operation_type, stats in summarized.items():
+                if stats.operations:
+                    html += self._get_operations_sub_section_html(operation_type, stats)
 
         return html
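The hunks above (apparently the load report builder) make the start and end times injectable and source all totals from SummarizedStats instead of re-aggregating per-node metrics inline. A minimal sketch of the new time injection, using a stub class and an invented timestamp purely for illustration:

from datetime import datetime
from typing import Optional


class _Report:  # stand-in for LoadReport, only the relevant method
    def __init__(self) -> None:
        self.start_time: Optional[datetime] = None

    def set_start_time(self, time: datetime = None):
        # Fall back to "now" only when the caller did not record a time elsewhere.
        if time is None:
            time = datetime.utcnow()
        self.start_time = time


report = _Report()
report.set_start_time(datetime(2023, 10, 1, 12, 0, 0))  # e.g. a timestamp recorded on the agent
print(report.start_time)  # 2023-10-01 12:00:00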
@@ -1,11 +1,7 @@
-import logging
-
+from frostfs_testlib import reporter
+from frostfs_testlib.load.interfaces.summarized import SummarizedStats
 from frostfs_testlib.load.load_config import LoadParams, LoadScenario
 from frostfs_testlib.load.load_metrics import get_metrics_object
-from frostfs_testlib.reporter import get_reporter
-
-reporter = get_reporter()
-logger = logging.getLogger("NeoLogger")
 
 
 class LoadVerifier:
 
@@ -13,66 +9,16 @@ class LoadVerifier:
         self.load_params = load_params
 
     def collect_load_issues(self, load_summaries: dict[str, dict]) -> list[str]:
-        write_operations = 0
-        write_errors = 0
-
-        read_operations = 0
-        read_errors = 0
-
-        delete_operations = 0
-        delete_errors = 0
-
-        writers = self.load_params.writers or self.load_params.preallocated_writers or 0
-        readers = self.load_params.readers or self.load_params.preallocated_readers or 0
-        deleters = self.load_params.deleters or self.load_params.preallocated_deleters or 0
-
-        for load_summary in load_summaries.values():
-            metrics = get_metrics_object(self.load_params.scenario, load_summary)
-
-            if writers:
-                write_operations += metrics.write_total_iterations
-                write_errors += metrics.write_failed_iterations
-
-            if readers:
-                read_operations += metrics.read_total_iterations
-                read_errors += metrics.read_failed_iterations
-
-            if deleters:
-                delete_operations += metrics.delete_total_iterations
-                delete_errors += metrics.delete_failed_iterations
-
+        summarized = SummarizedStats.collect(self.load_params, load_summaries)
         issues = []
-        if writers and not write_operations:
-            issues.append(f"No any write operation was performed")
-        if readers and not read_operations:
-            issues.append(f"No any read operation was performed")
-        if deleters and not delete_operations:
-            issues.append(f"No any delete operation was performed")
 
-        if (
-            write_operations
-            and writers
-            and write_errors / write_operations * 100 > self.load_params.error_threshold
-        ):
-            issues.append(
-                f"Write error rate is greater than threshold: {write_errors / write_operations * 100} > {self.load_params.error_threshold}"
-            )
-        if (
-            read_operations
-            and readers
-            and read_errors / read_operations * 100 > self.load_params.error_threshold
-        ):
-            issues.append(
-                f"Read error rate is greater than threshold: {read_errors / read_operations * 100} > {self.load_params.error_threshold}"
-            )
-        if (
-            delete_operations
-            and deleters
-            and delete_errors / delete_operations * 100 > self.load_params.error_threshold
-        ):
-            issues.append(
-                f"Delete error rate is greater than threshold: {delete_errors / delete_operations * 100} > {self.load_params.error_threshold}"
-            )
+        for operation_type, stats in summarized.items():
+            if stats.threads and not stats.operations:
+                issues.append(f"No any {operation_type.lower()} operation was performed")
+
+            if stats.errors.percent > stats.errors.threshold:
+                rate_str = self._get_rate_str(stats.errors.percent)
+                issues.append(f"{operation_type} errors exceeded threshold: {rate_str} > {stats.errors.threshold}%")
 
         return issues
 
@@ -89,9 +35,10 @@ class LoadVerifier:
         )
         return verify_issues
 
-    def _collect_verify_issues_on_process(
-        self, label, load_summary, verification_summary
-    ) -> list[str]:
+    def _get_rate_str(self, rate: float, minimal: float = 0.01) -> str:
+        return f"{rate:.2f}%" if rate >= minimal else f"~{minimal}%"
+
+    def _collect_verify_issues_on_process(self, label, load_summary, verification_summary) -> list[str]:
         issues = []
 
         load_metrics = get_metrics_object(self.load_params.scenario, load_summary)
 
@@ -102,14 +49,16 @@ class LoadVerifier:
         delete_success = 0
 
         if deleters > 0:
-            delete_success = load_metrics.delete_success_iterations
+            delete_success = load_metrics.delete.success_iterations
 
         if verification_summary:
             verify_metrics = get_metrics_object(LoadScenario.VERIFY, verification_summary)
-            verified_objects = verify_metrics.read_success_iterations
-            invalid_objects = verify_metrics.read_failed_iterations
-            total_left_objects = load_metrics.write_success_iterations - delete_success
+            verified_objects = verify_metrics.read.success_iterations
+            invalid_objects = verify_metrics.read.failed_iterations
+            total_left_objects = load_metrics.write.success_iterations - delete_success
 
+            if invalid_objects > 0:
+                issues.append(f"There were {invalid_objects} verification fails (hash mismatch).")
             # Due to interruptions we may see total verified objects to be less than written on writers count
             if abs(total_left_objects - verified_objects) > writers:
                 issues.append(
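The hunks above (apparently the load verifier) collapse the three per-operation threshold checks into one loop over SummarizedStats and add a _get_rate_str helper. The helper's logic is quoted from the diff, adapted to a standalone function here; the examples show why it exists, since a tiny non-zero error rate would otherwise print as 0.00%:

def _get_rate_str(rate: float, minimal: float = 0.01) -> str:
    # Rates below the printable minimum are shown as an approximation instead of 0.00%.
    return f"{rate:.2f}%" if rate >= minimal else f"~{minimal}%"


print(_get_rate_str(12.5))   # 12.50%
print(_get_rate_str(0.004))  # ~0.01%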
@@ -1,4 +1,4 @@
-from frostfs_testlib.load.interfaces import Loader
+from frostfs_testlib.load.interfaces.loader import Loader
 from frostfs_testlib.resources.load_params import (
     LOAD_NODE_SSH_PASSWORD,
     LOAD_NODE_SSH_PRIVATE_KEY_PASSPHRASE,
@@ -1,50 +1,44 @@
 import copy
 import itertools
 import math
-import re
 import time
-from concurrent.futures import ThreadPoolExecutor
 from dataclasses import fields
+from threading import Event
 from typing import Optional
 from urllib.parse import urlparse
 
-import yaml
-
-from frostfs_testlib.cli.frostfs_authmate.authmate import FrostfsAuthmate
-from frostfs_testlib.load.interfaces import Loader, ScenarioRunner
+from frostfs_testlib import reporter
+from frostfs_testlib.credentials.interfaces import S3Credentials, User
+from frostfs_testlib.load.interfaces.loader import Loader
+from frostfs_testlib.load.interfaces.scenario_runner import ScenarioRunner
 from frostfs_testlib.load.k6 import K6
 from frostfs_testlib.load.load_config import K6ProcessAllocationStrategy, LoadParams, LoadType
 from frostfs_testlib.load.loaders import NodeLoader, RemoteLoader
-from frostfs_testlib.reporter import get_reporter
 from frostfs_testlib.resources import optionals
-from frostfs_testlib.resources.cli import FROSTFS_AUTHMATE_EXEC
 from frostfs_testlib.resources.common import STORAGE_USER_NAME
-from frostfs_testlib.resources.load_params import (
-    BACKGROUND_LOAD_VUS_COUNT_DIVISOR,
-    LOAD_NODE_SSH_USER,
-    LOAD_NODES,
-)
+from frostfs_testlib.resources.load_params import BACKGROUND_LOAD_VUS_COUNT_DIVISOR, LOAD_NODE_SSH_USER, LOAD_NODES
+from frostfs_testlib.shell.command_inspectors import SuInspector
 from frostfs_testlib.shell.interfaces import CommandOptions, InteractiveInput
 from frostfs_testlib.storage.cluster import ClusterNode
 from frostfs_testlib.storage.controllers.cluster_state_controller import ClusterStateController
 from frostfs_testlib.storage.dataclasses.frostfs_services import S3Gate, StorageNode
-from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
 from frostfs_testlib.testing import parallel, run_optionally
-from frostfs_testlib.utils import FileKeeper, datetime_utils
-
-reporter = get_reporter()
+from frostfs_testlib.testing.test_control import retry
+from frostfs_testlib.utils import datetime_utils
+from frostfs_testlib.utils.file_keeper import FileKeeper
 
 
 class RunnerBase(ScenarioRunner):
     k6_instances: list[K6]
 
-    @reporter.step_deco("Run preset on loaders")
+    @reporter.step("Run preset on loaders")
     def preset(self):
         parallel([k6.preset for k6 in self.k6_instances])
 
-    @reporter.step_deco("Wait until load finish")
+    @reporter.step("Wait until load finish")
     def wait_until_finish(self, soft_timeout: int = 0):
-        parallel([k6.wait_until_finished for k6 in self.k6_instances], soft_timeout=soft_timeout)
+        event = Event()
+        parallel([k6.wait_until_finished for k6 in self.k6_instances], event=event, soft_timeout=soft_timeout)
 
     @property
     def is_running(self):
 
@@ -52,23 +46,26 @@ class RunnerBase(ScenarioRunner):
 
         return any([future.result() for future in futures])
 
+    def get_k6_instances(self):
+        return self.k6_instances
+
 
 class DefaultRunner(RunnerBase):
     loaders: list[Loader]
-    loaders_wallet: WalletInfo
+    user: User
 
     def __init__(
         self,
-        loaders_wallet: WalletInfo,
+        user: User,
         load_ip_list: Optional[list[str]] = None,
     ) -> None:
         if load_ip_list is None:
             load_ip_list = LOAD_NODES
         self.loaders = RemoteLoader.from_ip_list(load_ip_list)
-        self.loaders_wallet = loaders_wallet
+        self.user = user
 
     @run_optionally(optionals.OPTIONAL_BACKGROUND_LOAD_ENABLED)
-    @reporter.step_deco("Preparation steps")
+    @reporter.step("Preparation steps")
     def prepare(
         self,
         load_params: LoadParams,
 
@@ -76,62 +73,37 @@ class DefaultRunner(RunnerBase):
         nodes_under_load: list[ClusterNode],
         k6_dir: str,
     ):
+        if load_params.force_fresh_registry and load_params.custom_registry:
+            with reporter.step("Forcing fresh registry files"):
+                parallel(self._force_fresh_registry, self.loaders, load_params)
+
         if load_params.load_type != LoadType.S3:
             return
 
         with reporter.step("Init s3 client on loaders"):
-            storage_node = nodes_under_load[0].service(StorageNode)
-            s3_public_keys = [
-                node.service(S3Gate).get_wallet_public_key() for node in cluster_nodes
-            ]
-            grpc_peer = storage_node.get_rpc_endpoint()
-
-            parallel(
-                self._prepare_loader, self.loaders, load_params, grpc_peer, s3_public_keys, k6_dir
-            )
-
-    def _prepare_loader(
+            s3_credentials = self.user.s3_credentials
+            parallel(self._aws_configure_on_loader, self.loaders, s3_credentials)
+
+    def _force_fresh_registry(self, loader: Loader, load_params: LoadParams):
+        with reporter.step(f"Forcing fresh registry on {loader.ip}"):
+            shell = loader.get_shell()
+            shell.exec(f"rm -f {load_params.registry_file}")
+
+    def _aws_configure_on_loader(
         self,
         loader: Loader,
-        load_params: LoadParams,
-        grpc_peer: str,
-        s3_public_keys: list[str],
-        k6_dir: str,
+        s3_credentials: S3Credentials,
     ):
-        with reporter.step(f"Init s3 client on {loader.ip}"):
-            shell = loader.get_shell()
-            frostfs_authmate_exec: FrostfsAuthmate = FrostfsAuthmate(shell, FROSTFS_AUTHMATE_EXEC)
-            issue_secret_output = frostfs_authmate_exec.secret.issue(
-                wallet=self.loaders_wallet.path,
-                peer=grpc_peer,
-                gate_public_key=s3_public_keys,
-                container_placement_policy=load_params.preset.container_placement_policy,
-                container_policy=f"{k6_dir}/scenarios/files/policy.json",
-                wallet_password=self.loaders_wallet.password,
-            ).stdout
-            aws_access_key_id = str(
-                re.search(
-                    r"access_key_id.*:\s.(?P<aws_access_key_id>\w*)", issue_secret_output
-                ).group("aws_access_key_id")
-            )
-            aws_secret_access_key = str(
-                re.search(
-                    r"secret_access_key.*:\s.(?P<aws_secret_access_key>\w*)",
-                    issue_secret_output,
-                ).group("aws_secret_access_key")
-            )
-
+        with reporter.step(f"Aws configure on {loader.ip}"):
             configure_input = [
-                InteractiveInput(prompt_pattern=r"AWS Access Key ID.*", input=aws_access_key_id),
-                InteractiveInput(
-                    prompt_pattern=r"AWS Secret Access Key.*", input=aws_secret_access_key
-                ),
+                InteractiveInput(prompt_pattern=r"AWS Access Key ID.*", input=s3_credentials.access_key),
+                InteractiveInput(prompt_pattern=r"AWS Secret Access Key.*", input=s3_credentials.secret_key),
                 InteractiveInput(prompt_pattern=r".*", input=""),
                 InteractiveInput(prompt_pattern=r".*", input=""),
             ]
-            shell.exec("aws configure", CommandOptions(interactive_inputs=configure_input))
+            loader.get_shell().exec("aws configure", CommandOptions(interactive_inputs=configure_input))
 
-    @reporter.step_deco("Init k6 instances")
+    @reporter.step("Init k6 instances")
     def init_k6_instances(self, load_params: LoadParams, endpoints: list[str], k6_dir: str):
         self.k6_instances = []
         cycled_loaders = itertools.cycle(self.loaders)
 
@@ -142,16 +114,12 @@ class DefaultRunner(RunnerBase):
         }
         endpoints_generators = {
             K6ProcessAllocationStrategy.PER_LOAD_NODE: itertools.cycle([endpoints]),
-            K6ProcessAllocationStrategy.PER_ENDPOINT: itertools.cycle(
-                [[endpoint] for endpoint in endpoints]
-            ),
+            K6ProcessAllocationStrategy.PER_ENDPOINT: itertools.cycle([[endpoint] for endpoint in endpoints]),
         }
         k6_processes_count = k6_distribution_count[load_params.k6_process_allocation_strategy]
|
k6_processes_count = k6_distribution_count[load_params.k6_process_allocation_strategy]
|
||||||
endpoints_gen = endpoints_generators[load_params.k6_process_allocation_strategy]
|
endpoints_gen = endpoints_generators[load_params.k6_process_allocation_strategy]
|
||||||
|
|
||||||
distributed_load_params_list = self._get_distributed_load_params_list(
|
distributed_load_params_list = self._get_distributed_load_params_list(load_params, k6_processes_count)
|
||||||
load_params, k6_processes_count
|
|
||||||
)
|
|
||||||
|
|
||||||
futures = parallel(
|
futures = parallel(
|
||||||
self._init_k6_instance,
|
self._init_k6_instance,
|
||||||
|
@ -162,9 +130,7 @@ class DefaultRunner(RunnerBase):
|
||||||
)
|
)
|
||||||
self.k6_instances = [future.result() for future in futures]
|
self.k6_instances = [future.result() for future in futures]
|
||||||
|
|
||||||
def _init_k6_instance(
|
def _init_k6_instance(self, load_params_for_loader: LoadParams, loader: Loader, endpoints: list[str], k6_dir: str):
|
||||||
self, load_params_for_loader: LoadParams, loader: Loader, endpoints: list[str], k6_dir: str
|
|
||||||
):
|
|
||||||
shell = loader.get_shell()
|
shell = loader.get_shell()
|
||||||
with reporter.step(f"Init K6 instance on {loader.ip} for endpoints {endpoints}"):
|
with reporter.step(f"Init K6 instance on {loader.ip} for endpoints {endpoints}"):
|
||||||
with reporter.step(f"Make working directory"):
|
with reporter.step(f"Make working directory"):
|
||||||
|
@ -177,12 +143,10 @@ class DefaultRunner(RunnerBase):
|
||||||
k6_dir,
|
k6_dir,
|
||||||
shell,
|
shell,
|
||||||
loader,
|
loader,
|
||||||
self.loaders_wallet,
|
self.user,
|
||||||
)
|
)
|
||||||
|
|
||||||
def _get_distributed_load_params_list(
|
def _get_distributed_load_params_list(self, original_load_params: LoadParams, workers_count: int) -> list[LoadParams]:
|
||||||
self, original_load_params: LoadParams, workers_count: int
|
|
||||||
) -> list[LoadParams]:
|
|
||||||
divisor = int(BACKGROUND_LOAD_VUS_COUNT_DIVISOR)
|
divisor = int(BACKGROUND_LOAD_VUS_COUNT_DIVISOR)
|
||||||
distributed_load_params: list[LoadParams] = []
|
distributed_load_params: list[LoadParams] = []
|
||||||
|
|
||||||
|
@ -202,9 +166,7 @@ class DefaultRunner(RunnerBase):
|
||||||
and getattr(original_load_params, field.name) is not None
|
and getattr(original_load_params, field.name) is not None
|
||||||
):
|
):
|
||||||
original_value = getattr(original_load_params, field.name)
|
original_value = getattr(original_load_params, field.name)
|
||||||
distribution = self._get_distribution(
|
distribution = self._get_distribution(math.ceil(original_value / divisor), workers_count)
|
||||||
math.ceil(original_value / divisor), workers_count
|
|
||||||
)
|
|
||||||
for i in range(workers_count):
|
for i in range(workers_count):
|
||||||
setattr(distributed_load_params[i], field.name, distribution[i])
|
setattr(distributed_load_params[i], field.name, distribution[i])
|
||||||
|
|
||||||
|
@ -231,10 +193,7 @@ class DefaultRunner(RunnerBase):
|
||||||
# Remainder of clients left to be distributed
|
# Remainder of clients left to be distributed
|
||||||
remainder = clients_count - clients_per_worker * workers_count
|
remainder = clients_count - clients_per_worker * workers_count
|
||||||
|
|
||||||
distribution = [
|
distribution = [clients_per_worker + 1 if i < remainder else clients_per_worker for i in range(workers_count)]
|
||||||
clients_per_worker + 1 if i < remainder else clients_per_worker
|
|
||||||
for i in range(workers_count)
|
|
||||||
]
|
|
||||||
return distribution
|
return distribution
|
||||||
|
|
||||||
def start(self):
|
def start(self):
|
||||||
|
@ -243,9 +202,7 @@ class DefaultRunner(RunnerBase):
|
||||||
parallel([k6.start for k6 in self.k6_instances])
|
parallel([k6.start for k6 in self.k6_instances])
|
||||||
|
|
||||||
wait_after_start_time = datetime_utils.parse_time(load_params.setup_timeout) + 5
|
wait_after_start_time = datetime_utils.parse_time(load_params.setup_timeout) + 5
|
||||||
with reporter.step(
|
with reporter.step(f"Wait for start timeout + couple more seconds ({wait_after_start_time}) before moving on"):
|
||||||
f"Wait for start timeout + couple more seconds ({wait_after_start_time}) before moving on"
|
|
||||||
):
|
|
||||||
time.sleep(wait_after_start_time)
|
time.sleep(wait_after_start_time)
|
||||||
|
|
||||||
def stop(self):
|
def stop(self):
|
||||||
|
@ -274,21 +231,23 @@ class LocalRunner(RunnerBase):
|
||||||
loaders: list[Loader]
|
loaders: list[Loader]
|
||||||
cluster_state_controller: ClusterStateController
|
cluster_state_controller: ClusterStateController
|
||||||
file_keeper: FileKeeper
|
file_keeper: FileKeeper
|
||||||
wallet: WalletInfo
|
user: User
|
||||||
|
|
||||||
def __init__(
|
def __init__(
|
||||||
self,
|
self,
|
||||||
cluster_state_controller: ClusterStateController,
|
cluster_state_controller: ClusterStateController,
|
||||||
file_keeper: FileKeeper,
|
file_keeper: FileKeeper,
|
||||||
nodes_under_load: list[ClusterNode],
|
nodes_under_load: list[ClusterNode],
|
||||||
|
user: User,
|
||||||
) -> None:
|
) -> None:
|
||||||
self.cluster_state_controller = cluster_state_controller
|
self.cluster_state_controller = cluster_state_controller
|
||||||
self.file_keeper = file_keeper
|
self.file_keeper = file_keeper
|
||||||
self.loaders = [NodeLoader(node) for node in nodes_under_load]
|
self.loaders = [NodeLoader(node) for node in nodes_under_load]
|
||||||
self.nodes_under_load = nodes_under_load
|
self.nodes_under_load = nodes_under_load
|
||||||
|
self.user = user
|
||||||
|
|
||||||
@run_optionally(optionals.OPTIONAL_BACKGROUND_LOAD_ENABLED)
|
@run_optionally(optionals.OPTIONAL_BACKGROUND_LOAD_ENABLED)
|
||||||
@reporter.step_deco("Preparation steps")
|
@reporter.step("Preparation steps")
|
||||||
def prepare(
|
def prepare(
|
||||||
self,
|
self,
|
||||||
load_params: LoadParams,
|
load_params: LoadParams,
|
||||||
|
@ -296,42 +255,49 @@ class LocalRunner(RunnerBase):
|
||||||
nodes_under_load: list[ClusterNode],
|
nodes_under_load: list[ClusterNode],
|
||||||
k6_dir: str,
|
k6_dir: str,
|
||||||
):
|
):
|
||||||
@reporter.step_deco("Prepare node {cluster_node}")
|
parallel(self.prepare_node, nodes_under_load, k6_dir, load_params)
|
||||||
def prepare_node(cluster_node: ClusterNode):
|
|
||||||
shell = cluster_node.host.get_shell()
|
|
||||||
|
|
||||||
with reporter.step("Allow storage user to login into system"):
|
@retry(3, 5, expected_result=True)
|
||||||
shell.exec(f"sudo chsh -s /bin/bash {STORAGE_USER_NAME}")
|
def allow_user_to_login_in_system(self, cluster_node: ClusterNode):
|
||||||
shell.exec("sudo chattr +i /etc/passwd")
|
shell = cluster_node.host.get_shell()
|
||||||
|
|
||||||
with reporter.step("Update limits.conf"):
|
result = None
|
||||||
limits_path = "/etc/security/limits.conf"
|
try:
|
||||||
self.file_keeper.add(cluster_node.storage_node, limits_path)
|
shell.exec(f"sudo chsh -s /bin/bash {STORAGE_USER_NAME}")
|
||||||
content = f"{STORAGE_USER_NAME} hard nofile 65536\n{STORAGE_USER_NAME} soft nofile 65536\n"
|
self.lock_passwd_on_node(cluster_node)
|
||||||
shell.exec(f"echo '{content}' | sudo tee {limits_path}")
|
options = CommandOptions(check=False, extra_inspectors=[SuInspector(STORAGE_USER_NAME)])
|
||||||
|
result = shell.exec("whoami", options)
|
||||||
|
finally:
|
||||||
|
if not result or result.return_code:
|
||||||
|
self.restore_passwd_on_node(cluster_node)
|
||||||
|
return False
|
||||||
|
|
||||||
with reporter.step("Download K6"):
|
return True
|
||||||
shell.exec(f"sudo rm -rf {k6_dir};sudo mkdir {k6_dir}")
|
|
||||||
shell.exec(f"sudo curl -so {k6_dir}/k6.tar.gz {load_params.k6_url}")
|
|
||||||
shell.exec(f"sudo tar xf {k6_dir}/k6.tar.gz -C {k6_dir}")
|
|
||||||
shell.exec(f"sudo chmod -R 777 {k6_dir}")
|
|
||||||
|
|
||||||
with reporter.step("Create empty_passwd"):
|
@reporter.step("Prepare node {cluster_node}")
|
||||||
self.wallet = WalletInfo(
|
def prepare_node(self, cluster_node: ClusterNode, k6_dir: str, load_params: LoadParams):
|
||||||
f"{k6_dir}/scenarios/files/wallet.json", "", "/tmp/empty_passwd.yml"
|
shell = cluster_node.host.get_shell()
|
||||||
)
|
|
||||||
content = yaml.dump({"password": ""})
|
|
||||||
shell.exec(f'echo "{content}" | sudo tee {self.wallet.config_path}')
|
|
||||||
shell.exec(f"sudo chmod -R 777 {self.wallet.config_path}")
|
|
||||||
|
|
||||||
with ThreadPoolExecutor(max_workers=len(nodes_under_load)) as executor:
|
with reporter.step("Allow storage user to login into system"):
|
||||||
result = executor.map(prepare_node, nodes_under_load)
|
self.allow_user_to_login_in_system(cluster_node)
|
||||||
|
|
||||||
# Check for exceptions
|
with reporter.step("Update limits.conf"):
|
||||||
for _ in result:
|
limits_path = "/etc/security/limits.conf"
|
||||||
pass
|
self.file_keeper.add(cluster_node.storage_node, limits_path)
|
||||||
|
content = f"{STORAGE_USER_NAME} hard nofile 65536\n{STORAGE_USER_NAME} soft nofile 65536\n"
|
||||||
|
shell.exec(f"echo '{content}' | sudo tee {limits_path}")
|
||||||
|
|
||||||
@reporter.step_deco("Init k6 instances")
|
with reporter.step("Download K6"):
|
||||||
|
shell.exec(f"sudo rm -rf {k6_dir};sudo mkdir {k6_dir}")
|
||||||
|
shell.exec(f"sudo curl -so {k6_dir}/k6.tar.gz {load_params.k6_url}")
|
||||||
|
shell.exec(f"sudo tar xf {k6_dir}/k6.tar.gz --strip-components 2 -C {k6_dir}")
|
||||||
|
shell.exec(f"sudo chmod -R 777 {k6_dir}")
|
||||||
|
|
||||||
|
with reporter.step("chmod 777 wallet related files on loader"):
|
||||||
|
shell.exec(f"sudo chmod -R 777 {self.user.wallet.config_path}")
|
||||||
|
shell.exec(f"sudo chmod -R 777 {self.user.wallet.path}")
|
||||||
|
|
||||||
|
@reporter.step("Init k6 instances")
|
||||||
def init_k6_instances(self, load_params: LoadParams, endpoints: list[str], k6_dir: str):
|
def init_k6_instances(self, load_params: LoadParams, endpoints: list[str], k6_dir: str):
|
||||||
self.k6_instances = []
|
self.k6_instances = []
|
||||||
futures = parallel(
|
futures = parallel(
|
||||||
|
@ -362,36 +328,36 @@ class LocalRunner(RunnerBase):
|
||||||
k6_dir,
|
k6_dir,
|
||||||
shell,
|
shell,
|
||||||
loader,
|
loader,
|
||||||
self.wallet,
|
self.user,
|
||||||
)
|
)
|
||||||
|
|
||||||
def start(self):
|
def start(self):
|
||||||
load_params = self.k6_instances[0].load_params
|
load_params = self.k6_instances[0].load_params
|
||||||
|
|
||||||
self.cluster_state_controller.stop_all_s3_gates()
|
self.cluster_state_controller.stop_services_of_type(S3Gate)
|
||||||
self.cluster_state_controller.stop_all_storage_services()
|
self.cluster_state_controller.stop_services_of_type(StorageNode)
|
||||||
|
|
||||||
parallel([k6.start for k6 in self.k6_instances])
|
parallel([k6.start for k6 in self.k6_instances])
|
||||||
|
|
||||||
wait_after_start_time = datetime_utils.parse_time(load_params.setup_timeout) + 5
|
wait_after_start_time = datetime_utils.parse_time(load_params.setup_timeout) + 5
|
||||||
with reporter.step(
|
with reporter.step(f"Wait for start timeout + couple more seconds ({wait_after_start_time}) before moving on"):
|
||||||
f"Wait for start timeout + couple more seconds ({wait_after_start_time}) before moving on"
|
|
||||||
):
|
|
||||||
time.sleep(wait_after_start_time)
|
time.sleep(wait_after_start_time)
|
||||||
|
|
||||||
|
@reporter.step("Restore passwd on {cluster_node}")
|
||||||
|
def restore_passwd_on_node(self, cluster_node: ClusterNode):
|
||||||
|
shell = cluster_node.host.get_shell()
|
||||||
|
shell.exec("sudo chattr -i /etc/passwd")
|
||||||
|
|
||||||
|
@reporter.step("Lock passwd on {cluster_node}")
|
||||||
|
def lock_passwd_on_node(self, cluster_node: ClusterNode):
|
||||||
|
shell = cluster_node.host.get_shell()
|
||||||
|
shell.exec("sudo chattr +i /etc/passwd")
|
||||||
|
|
||||||
def stop(self):
|
def stop(self):
|
||||||
for k6_instance in self.k6_instances:
|
for k6_instance in self.k6_instances:
|
||||||
k6_instance.stop()
|
k6_instance.stop()
|
||||||
|
|
||||||
@reporter.step_deco("Restore passwd on {cluster_node}")
|
self.cluster_state_controller.start_all_stopped_services()
|
||||||
def restore_passwd_attr_on_node(cluster_node: ClusterNode):
|
|
||||||
shell = cluster_node.host.get_shell()
|
|
||||||
shell.exec("sudo chattr -i /etc/passwd")
|
|
||||||
|
|
||||||
parallel(restore_passwd_attr_on_node, self.nodes_under_load)
|
|
||||||
|
|
||||||
self.cluster_state_controller.start_stopped_storage_services()
|
|
||||||
self.cluster_state_controller.start_stopped_s3_gates()
|
|
||||||
|
|
||||||
def get_results(self) -> dict:
|
def get_results(self) -> dict:
|
||||||
results = {}
|
results = {}
|
||||||
|
@ -399,4 +365,100 @@ class LocalRunner(RunnerBase):
|
||||||
result = k6_instance.get_results()
|
result = k6_instance.get_results()
|
||||||
results[k6_instance.loader.ip] = result
|
results[k6_instance.loader.ip] = result
|
||||||
|
|
||||||
|
parallel(self.restore_passwd_on_node, self.nodes_under_load)
|
||||||
|
|
||||||
return results
|
return results
|
||||||
|
|
||||||
|
|
||||||
|
class S3LocalRunner(LocalRunner):
|
||||||
|
endpoints: list[str]
|
||||||
|
k6_dir: str
|
||||||
|
|
||||||
|
@reporter.step("Run preset on loaders")
|
||||||
|
def preset(self):
|
||||||
|
LocalRunner.preset(self)
|
||||||
|
with reporter.step(f"Resolve containers in preset"):
|
||||||
|
parallel(self._resolve_containers_in_preset, self.k6_instances)
|
||||||
|
|
||||||
|
@reporter.step("Resolve containers in preset")
|
||||||
|
def _resolve_containers_in_preset(self, k6_instance: K6):
|
||||||
|
k6_instance.shell.exec(
|
||||||
|
f"sudo {self.k6_dir}/scenarios/preset/resolve_containers_in_preset.py --endpoint {k6_instance.endpoints[0]} --preset_file {k6_instance.load_params.preset.pregen_json}"
|
||||||
|
)
|
||||||
|
|
||||||
|
@reporter.step("Init k6 instances")
|
||||||
|
def init_k6_instances(self, load_params: LoadParams, endpoints: list[str], k6_dir: str):
|
||||||
|
self.k6_instances = []
|
||||||
|
futures = parallel(
|
||||||
|
self._init_k6_instance_,
|
||||||
|
self.loaders,
|
||||||
|
load_params,
|
||||||
|
endpoints,
|
||||||
|
k6_dir,
|
||||||
|
)
|
||||||
|
self.k6_instances = [future.result() for future in futures]
|
||||||
|
|
||||||
|
def _init_k6_instance_(self, loader: Loader, load_params: LoadParams, endpoints: list[str], k6_dir: str):
|
||||||
|
shell = loader.get_shell()
|
||||||
|
with reporter.step(f"Init K6 instance on {loader.ip} for endpoints {endpoints}"):
|
||||||
|
with reporter.step(f"Make working directory"):
|
||||||
|
shell.exec(f"sudo mkdir -p {load_params.working_dir}")
|
||||||
|
# If we chmod /home/<user_name> folder we can no longer ssh to the node
|
||||||
|
# !! IMPORTANT !!
|
||||||
|
if (
|
||||||
|
load_params.working_dir
|
||||||
|
and not load_params.working_dir == f"/home/{LOAD_NODE_SSH_USER}"
|
||||||
|
and not load_params.working_dir == f"/home/{LOAD_NODE_SSH_USER}/"
|
||||||
|
):
|
||||||
|
shell.exec(f"sudo chmod -R 777 {load_params.working_dir}")
|
||||||
|
|
||||||
|
return K6(
|
||||||
|
load_params,
|
||||||
|
self.endpoints,
|
||||||
|
k6_dir,
|
||||||
|
shell,
|
||||||
|
loader,
|
||||||
|
self.user,
|
||||||
|
)
|
||||||
|
|
||||||
|
@run_optionally(optionals.OPTIONAL_BACKGROUND_LOAD_ENABLED)
|
||||||
|
@reporter.step("Preparation steps")
|
||||||
|
def prepare(
|
||||||
|
self,
|
||||||
|
load_params: LoadParams,
|
||||||
|
cluster_nodes: list[ClusterNode],
|
||||||
|
nodes_under_load: list[ClusterNode],
|
||||||
|
k6_dir: str,
|
||||||
|
):
|
||||||
|
self.k6_dir = k6_dir
|
||||||
|
parallel(self.prepare_node, nodes_under_load, k6_dir, load_params, cluster_nodes)
|
||||||
|
|
||||||
|
@reporter.step("Prepare node {cluster_node}")
|
||||||
|
def prepare_node(self, cluster_node: ClusterNode, k6_dir: str, load_params: LoadParams, cluster_nodes: list[ClusterNode]):
|
||||||
|
LocalRunner.prepare_node(self, cluster_node, k6_dir, load_params)
|
||||||
|
self.endpoints = cluster_node.s3_gate.get_all_endpoints()
|
||||||
|
shell = cluster_node.host.get_shell()
|
||||||
|
|
||||||
|
with reporter.step("Uninstall previous installation of aws cli"):
|
||||||
|
shell.exec(f"sudo rm -rf /usr/local/aws-cli")
|
||||||
|
shell.exec(f"sudo rm -rf /usr/local/bin/aws")
|
||||||
|
shell.exec(f"sudo rm -rf /usr/local/bin/aws_completer")
|
||||||
|
|
||||||
|
with reporter.step("Install aws cli"):
|
||||||
|
shell.exec(f"sudo curl {load_params.awscli_url} -o {k6_dir}/awscliv2.zip")
|
||||||
|
shell.exec(f"sudo unzip -q {k6_dir}/awscliv2.zip -d {k6_dir}")
|
||||||
|
shell.exec(f"sudo {k6_dir}/aws/install")
|
||||||
|
|
||||||
|
with reporter.step("Install requests python module"):
|
||||||
|
shell.exec(f"sudo apt-get -y install python3-pip")
|
||||||
|
shell.exec(f"sudo curl -so {k6_dir}/requests.tar.gz {load_params.requests_module_url}")
|
||||||
|
shell.exec(f"sudo python3 -m pip install -I {k6_dir}/requests.tar.gz")
|
||||||
|
|
||||||
|
with reporter.step(f"Init s3 client on {cluster_node.host_ip}"):
|
||||||
|
configure_input = [
|
||||||
|
InteractiveInput(prompt_pattern=r"AWS Access Key ID.*", input=self.user.s3_credentials.access_key),
|
||||||
|
InteractiveInput(prompt_pattern=r"AWS Secret Access Key.*", input=self.user.s3_credentials.secret_key),
|
||||||
|
InteractiveInput(prompt_pattern=r".*", input=""),
|
||||||
|
InteractiveInput(prompt_pattern=r".*", input=""),
|
||||||
|
]
|
||||||
|
shell.exec("aws configure", CommandOptions(interactive_inputs=configure_input))
|
||||||
|
|
|
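The net effect of this refactor is that every runner is now constructed around a `User` (bundling wallet and S3 credentials) instead of a bare `WalletInfo`, and loaders receive S3 credentials via `aws configure` rather than minting them on the fly with frostfs-authmate. A minimal sketch of driving the reworked API; the `LoadParams` fields and the endpoint value are illustrative assumptions (real runs set many more parameters):

from frostfs_testlib.load.load_config import LoadParams, LoadType
from frostfs_testlib.load.runners import DefaultRunner

def run_background_load(user, cluster_nodes, nodes_under_load, k6_dir: str) -> dict:
    runner = DefaultRunner(user)  # the runner now carries a User, not a wallet
    load_params = LoadParams(load_type=LoadType.S3)  # illustrative minimal params
    runner.prepare(load_params, cluster_nodes, nodes_under_load, k6_dir)
    runner.init_k6_instances(load_params, endpoints=["s3.example:8080"], k6_dir=k6_dir)
    runner.start()
    runner.wait_until_finish()
    runner.stop()
    return runner.get_results()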
@@ -17,3 +17,16 @@ def load_plugin(plugin_group: str, name: str) -> Any:
         return None
     plugin = plugins[name]
     return plugin.load()
+
+
+def load_all(group: str) -> Any:
+    """Loads all plugins using entry point specification.
+
+    Args:
+        group: Name of plugin group.
+
+    Returns:
+        Classes from specified group.
+    """
+    plugins = entry_points(group=group)
+    return [plugin.load() for plugin in plugins]
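`load_all` complements `load_plugin` when a caller wants every implementation registered under a group rather than a single one by name. A usage sketch; the group name below is illustrative:

from frostfs_testlib.plugins import load_all

# Instantiate every reporter handler advertised via entry points.
handler_classes = load_all(group="frostfs.testlib.reporter")
handlers = [cls() for cls in handler_classes]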
@@ -8,17 +8,15 @@ from tenacity import retry
 from tenacity.stop import stop_after_attempt
 from tenacity.wait import wait_fixed

-from frostfs_testlib.reporter import get_reporter
+from frostfs_testlib import reporter
 from frostfs_testlib.shell import Shell
 from frostfs_testlib.shell.command_inspectors import SuInspector
 from frostfs_testlib.shell.interfaces import CommandInspector, CommandOptions

-reporter = get_reporter()
-

 class RemoteProcess:
     def __init__(
-        self, cmd: str, process_dir: str, shell: Shell, cmd_inspector: Optional[CommandInspector]
+        self, cmd: str, process_dir: str, shell: Shell, cmd_inspector: Optional[CommandInspector], proc_id: str
     ):
         self.process_dir = process_dir
         self.cmd = cmd
@@ -26,15 +24,23 @@ class RemoteProcess:
         self.stderr_last_line_number = 0
         self.pid: Optional[str] = None
         self.proc_rc: Optional[int] = None
+        self.proc_start_time: Optional[int] = None
+        self.proc_end_time: Optional[int] = None
         self.saved_stdout: Optional[str] = None
         self.saved_stderr: Optional[str] = None
         self.shell = shell
+        self.proc_id: str = proc_id
         self.cmd_inspectors: list[CommandInspector] = [cmd_inspector] if cmd_inspector else []

     @classmethod
-    @reporter.step_deco("Create remote process")
+    @reporter.step("Create remote process")
     def create(
-        cls, command: str, shell: Shell, working_dir: str = "/tmp", user: Optional[str] = None
+        cls,
+        command: str,
+        shell: Shell,
+        working_dir: str = "/tmp",
+        user: Optional[str] = None,
+        proc_id: Optional[str] = None,
     ) -> RemoteProcess:
         """
         Create a process on a remote host.
@@ -46,6 +52,7 @@ class RemoteProcess:
         stderr: contains script errors
         stdout: contains script output
         user: user on behalf whom command will be executed
+        proc_id: process string identificator

         Args:
             shell: Shell instance
@@ -55,20 +62,32 @@ class RemoteProcess:
         Returns:
             RemoteProcess instance for further examination
         """
+        if proc_id is None:
+            proc_id = f"{uuid.uuid4()}"
+
         cmd_inspector = SuInspector(user) if user else None
         remote_process = cls(
             cmd=command,
-            process_dir=os.path.join(working_dir, f"proc_{uuid.uuid4()}"),
+            process_dir=os.path.join(working_dir, f"proc_{proc_id}"),
             shell=shell,
             cmd_inspector=cmd_inspector,
+            proc_id=proc_id,
         )
-        remote_process._create_process_dir()
-        remote_process._generate_command_script(command)
-        remote_process._start_process()
-        remote_process.pid = remote_process._get_pid()
         return remote_process

-    @reporter.step_deco("Get process stdout")
+    @reporter.step("Start remote process")
+    def start(self):
+        """
+        Starts a process on a remote host.
+        """
+
+        self._create_process_dir()
+        self._generate_command_script()
+        self._start_process()
+        self.pid = self._get_pid()
+
+    @reporter.step("Get process stdout")
     def stdout(self, full: bool = False) -> str:
         """
         Method to get process stdout, either fresh info or full.
@@ -100,7 +119,7 @@ class RemoteProcess:
             return resulted_stdout
         return ""

-    @reporter.step_deco("Get process stderr")
+    @reporter.step("Get process stderr")
     def stderr(self, full: bool = False) -> str:
         """
         Method to get process stderr, either fresh info or full.
@@ -131,28 +150,59 @@ class RemoteProcess:
             return resulted_stderr
         return ""

-    @reporter.step_deco("Get process rc")
+    @reporter.step("Get process rc")
     def rc(self) -> Optional[int]:
         if self.proc_rc is not None:
             return self.proc_rc

+        result = self._cat_proc_file("rc")
+        if not result:
+            return None
+
+        self.proc_rc = int(result)
+        return self.proc_rc
+
+    @reporter.step("Get process start time")
+    def start_time(self) -> Optional[int]:
+        if self.proc_start_time is not None:
+            return self.proc_start_time
+
+        result = self._cat_proc_file("start_time")
+        if not result:
+            return None
+
+        self.proc_start_time = int(result)
+        return self.proc_start_time
+
+    @reporter.step("Get process end time")
+    def end_time(self) -> Optional[int]:
+        if self.proc_end_time is not None:
+            return self.proc_end_time
+
+        result = self._cat_proc_file("end_time")
+        if not result:
+            return None
+
+        self.proc_end_time = int(result)
+        return self.proc_end_time
+
+    def _cat_proc_file(self, file: str) -> Optional[str]:
         terminal = self.shell.exec(
-            f"cat {self.process_dir}/rc",
+            f"cat {self.process_dir}/{file}",
             CommandOptions(check=False, extra_inspectors=self.cmd_inspectors, no_log=True),
         )
         if "No such file or directory" in terminal.stderr:
             return None
         elif terminal.stderr or terminal.return_code != 0:
-            raise AssertionError(f"cat process rc was not successful: {terminal.stderr}")
+            raise AssertionError(f"cat process {file} was not successful: {terminal.stderr}")

-        self.proc_rc = int(terminal.stdout)
-        return self.proc_rc
+        return terminal.stdout

-    @reporter.step_deco("Check if process is running")
+    @reporter.step("Check if process is running")
     def running(self) -> bool:
         return self.rc() is None

-    @reporter.step_deco("Send signal to process")
+    @reporter.step("Send signal to process")
     def send_signal(self, signal: int) -> None:
         kill_res = self.shell.exec(
             f"kill -{signal} {self.pid}",
@@ -161,27 +211,23 @@ class RemoteProcess:
         if "No such process" in kill_res.stderr:
             return
         if kill_res.return_code:
-            raise AssertionError(
-                f"Signal {signal} not sent. Return code of kill: {kill_res.return_code}"
-            )
+            raise AssertionError(f"Signal {signal} not sent. Return code of kill: {kill_res.return_code}")

-    @reporter.step_deco("Stop process")
+    @reporter.step("Stop process")
     def stop(self) -> None:
         self.send_signal(15)

-    @reporter.step_deco("Kill process")
+    @reporter.step("Kill process")
     def kill(self) -> None:
         self.send_signal(9)

-    @reporter.step_deco("Clear process directory")
+    @reporter.step("Clear process directory")
     def clear(self) -> None:
         if self.process_dir == "/":
             raise AssertionError(f"Invalid path to delete: {self.process_dir}")
-        self.shell.exec(
-            f"rm -rf {self.process_dir}", CommandOptions(extra_inspectors=self.cmd_inspectors)
-        )
+        self.shell.exec(f"rm -rf {self.process_dir}", CommandOptions(extra_inspectors=self.cmd_inspectors))

-    @reporter.step_deco("Start remote process")
+    @reporter.step("Start remote process")
     def _start_process(self) -> None:
         self.shell.exec(
             f"nohup {self.process_dir}/command.sh </dev/null "
@@ -190,40 +236,34 @@ class RemoteProcess:
             CommandOptions(extra_inspectors=self.cmd_inspectors),
         )

-    @reporter.step_deco("Create process directory")
+    @reporter.step("Create process directory")
     def _create_process_dir(self) -> None:
-        self.shell.exec(
-            f"mkdir -p {self.process_dir}", CommandOptions(extra_inspectors=self.cmd_inspectors)
-        )
-        self.shell.exec(
-            f"chmod 777 {self.process_dir}", CommandOptions(extra_inspectors=self.cmd_inspectors)
-        )
-        terminal = self.shell.exec(
-            f"realpath {self.process_dir}", CommandOptions(extra_inspectors=self.cmd_inspectors)
-        )
+        self.shell.exec(f"mkdir -p {self.process_dir}", CommandOptions(extra_inspectors=self.cmd_inspectors))
+        self.shell.exec(f"chmod 777 {self.process_dir}", CommandOptions(extra_inspectors=self.cmd_inspectors))
+        terminal = self.shell.exec(f"realpath {self.process_dir}", CommandOptions(extra_inspectors=self.cmd_inspectors))
         self.process_dir = terminal.stdout.strip()

-    @reporter.step_deco("Get pid")
+    @reporter.step("Get pid")
     @retry(wait=wait_fixed(10), stop=stop_after_attempt(5), reraise=True)
     def _get_pid(self) -> str:
-        terminal = self.shell.exec(
-            f"cat {self.process_dir}/pid", CommandOptions(extra_inspectors=self.cmd_inspectors)
-        )
+        terminal = self.shell.exec(f"cat {self.process_dir}/pid", CommandOptions(extra_inspectors=self.cmd_inspectors))
         assert terminal.stdout, f"invalid pid: {terminal.stdout}"
         return terminal.stdout.strip()

-    @reporter.step_deco("Generate command script")
-    def _generate_command_script(self, command: str) -> None:
-        command = command.replace('"', '\\"').replace("\\", "\\\\")
+    @reporter.step("Generate command script")
+    def _generate_command_script(self) -> None:
+        command = self.cmd.replace('"', '\\"').replace("\\", "\\\\")
         script = (
             f"#!/bin/bash\n"
             f"cd {self.process_dir}\n"
+            f"date +%s > {self.process_dir}/start_time\n"
             f"{command} &\n"
             f"pid=\$!\n"
             f"cd {self.process_dir}\n"
             f"echo \$pid > {self.process_dir}/pid\n"
             f"wait \$pid\n"
-            f"echo $? > {self.process_dir}/rc"
+            f"echo $? > {self.process_dir}/rc\n"
+            f"date +%s > {self.process_dir}/end_time\n"
         )

         self.shell.exec(
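With this change `create()` only builds the object; the process is launched by an explicit `start()`, and the generated command.sh records unix timestamps so `start_time()`/`end_time()` can report duration. A sketch of the new lifecycle under those assumptions (the module path for `RemoteProcess` is itself an assumption):

import time

from frostfs_testlib.processes.remote_process import RemoteProcess  # path assumed
from frostfs_testlib.shell.local_shell import LocalShell

def run_and_time(command: str) -> tuple[int, int]:
    proc = RemoteProcess.create(command, LocalShell(), working_dir="/tmp", proc_id="demo")
    proc.start()  # create() no longer starts the process implicitly
    while proc.running():
        time.sleep(1)
    # start_time/end_time are unix timestamps written by the generated command.sh
    return proc.rc(), proc.end_time() - proc.start_time()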
@@ -1,6 +1,9 @@
+from typing import Any
+
 from frostfs_testlib.reporter.allure_handler import AllureHandler
 from frostfs_testlib.reporter.interfaces import ReporterHandler
 from frostfs_testlib.reporter.reporter import Reporter
+from frostfs_testlib.reporter.steps_logger import StepsLogger

 __reporter = Reporter()

@@ -15,3 +18,11 @@ def get_reporter() -> Reporter:
         Singleton reporter instance.
     """
     return __reporter
+
+
+def step(title: str):
+    return __reporter.step(title)
+
+
+def attach(content: Any, file_name: str):
+    return __reporter.attach(content, file_name)
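The new module-level `step()` and `attach()` let call sites import the package itself instead of fetching the singleton, which is what the `from frostfs_testlib import reporter` imports elsewhere in this changeset rely on. A sketch:

from frostfs_testlib import reporter

@reporter.step("Upload {file_name}")
def upload(file_name: str) -> None:
    with reporter.step("Validate input"):
        pass  # real validation goes here
    reporter.attach("upload log body", "upload.log")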
@@ -1,5 +1,5 @@
 import os
-from contextlib import AbstractContextManager
+from contextlib import AbstractContextManager, ContextDecorator
 from textwrap import shorten
 from typing import Any, Callable

@@ -12,7 +12,7 @@ from frostfs_testlib.reporter.interfaces import ReporterHandler
 class AllureHandler(ReporterHandler):
     """Handler that stores test artifacts in Allure report."""

-    def step(self, name: str) -> AbstractContextManager:
+    def step(self, name: str) -> AbstractContextManager | ContextDecorator:
         name = shorten(name, width=140, placeholder="...")
         return allure.step(name)

@@ -21,9 +21,14 @@ class AllureHandler(ReporterHandler):

     def attach(self, body: Any, file_name: str) -> None:
         attachment_name, extension = os.path.splitext(file_name)
+        if extension.startswith("."):
+            extension = extension[1:]
         attachment_type = self._resolve_attachment_type(extension)

-        allure.attach(body, attachment_name, attachment_type, extension)
+        if os.path.exists(body):
+            allure.attach.file(body, file_name, attachment_type, extension)
+        else:
+            allure.attach(body, attachment_name, attachment_type, extension)

     def _resolve_attachment_type(self, extension: str) -> attachment_type:
         """Try to find matching Allure attachment type by extension.
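The `attach()` change branches on whether `body` is an existing filesystem path: paths are attached via `allure.attach.file`, anything else inline, and a leading dot is now stripped from the extension before the attachment type is resolved. A sketch of both paths:

from frostfs_testlib.reporter.allure_handler import AllureHandler

handler = AllureHandler()
handler.attach("/tmp/session.log", "session.log")  # existing path: attached as file
handler.attach('{"ok": true}', "result.json")      # plain body: attached inline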
@@ -1,5 +1,5 @@
 from abc import ABC, abstractmethod
-from contextlib import AbstractContextManager
+from contextlib import AbstractContextManager, ContextDecorator
 from typing import Any, Callable


@@ -7,7 +7,7 @@ class ReporterHandler(ABC):
     """Interface of handler that stores test artifacts in some reporting tool."""

     @abstractmethod
-    def step(self, name: str) -> AbstractContextManager:
+    def step(self, name: str) -> AbstractContextManager | ContextDecorator:
         """Register a new step in test execution.

         Args:
@@ -5,6 +5,7 @@ from typing import Any, Callable, Optional

 from frostfs_testlib.plugins import load_plugin
 from frostfs_testlib.reporter.interfaces import ReporterHandler
+from frostfs_testlib.utils.func_utils import format_by_args


 @contextmanager
@@ -63,7 +64,8 @@ class Reporter:
         def wrapper(*a, **kw):
             resulting_func = func
             for handler in self.handlers:
-                decorator = handler.step_decorator(name)
+                parsed_name = format_by_args(func, name, *a, **kw)
+                decorator = handler.step_decorator(parsed_name)
                 resulting_func = decorator(resulting_func)

             return resulting_func(*a, **kw)
@@ -81,11 +83,11 @@ class Reporter:
         Returns:
             Step context.
         """
-        if not self.handlers:
-            return _empty_step()
-
         step_contexts = [handler.step(name) for handler in self.handlers]
-        return AggregateContextManager(step_contexts)
+        if not step_contexts:
+            step_contexts = [_empty_step()]
+        decorated_wrapper = self.step_deco(name)
+        return AggregateContextManager(step_contexts, decorated_wrapper)

     def attach(self, content: Any, file_name: str) -> None:
         """Attach specified content with given file name to the test report.
@@ -104,9 +106,10 @@ class AggregateContextManager(AbstractContextManager):

     contexts: list[AbstractContextManager]

-    def __init__(self, contexts: list[AbstractContextManager]) -> None:
+    def __init__(self, contexts: list[AbstractContextManager], decorated_wrapper: Callable) -> None:
         super().__init__()
         self.contexts = contexts
+        self.wrapper = decorated_wrapper

     def __enter__(self):
         for context in self.contexts:
@@ -127,3 +130,6 @@ class AggregateContextManager(AbstractContextManager):
         # If all context agreed to suppress exception, then suppress it;
         # otherwise return None to reraise
         return True if all(suppress_decisions) else None
+
+    def __call__(self, *args: Any, **kwds: Any) -> Any:
+        return self.wrapper(*args, **kwds)
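Because `step()` now hands back an `AggregateContextManager` that also wraps `step_deco`, a single title works both as a context manager and as a decorator, and `{placeholders}` in decorator titles are resolved from the call arguments through `format_by_args`. A sketch:

from frostfs_testlib.reporter import get_reporter

reporter = get_reporter()

@reporter.step("Process {item}")
def process(item: str) -> str:
    return item.upper()

with reporter.step("Batch run"):
    process("alpha")  # reported step title becomes "Process alpha"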
56
src/frostfs_testlib/reporter/steps_logger.py
Normal file
@@ -0,0 +1,56 @@
+import logging
+import threading
+from contextlib import AbstractContextManager, ContextDecorator
+from functools import wraps
+from types import TracebackType
+from typing import Any, Callable
+
+from frostfs_testlib.reporter.interfaces import ReporterHandler
+
+
+class StepsLogger(ReporterHandler):
+    """Handler that prints steps to log."""
+
+    def step(self, name: str) -> AbstractContextManager | ContextDecorator:
+        return StepLoggerContext(name)
+
+    def step_decorator(self, name: str) -> Callable:
+        return StepLoggerContext(name)
+
+    def attach(self, body: Any, file_name: str) -> None:
+        pass
+
+
+class StepLoggerContext(AbstractContextManager):
+    INDENT = {}
+
+    def __init__(self, title: str):
+        self.title = title
+        self.logger = logging.getLogger("NeoLogger")
+        self.thread = threading.get_ident()
+        if self.thread not in StepLoggerContext.INDENT:
+            StepLoggerContext.INDENT[self.thread] = 1
+
+    def __enter__(self) -> Any:
+        indent = ">" * StepLoggerContext.INDENT[self.thread]
+        self.logger.info(f"[{self.thread}] {indent} {self.title}")
+        StepLoggerContext.INDENT[self.thread] += 1
+
+    def __exit__(
+        self,
+        __exc_type: type[BaseException] | None,
+        __exc_value: BaseException | None,
+        __traceback: TracebackType | None,
+    ) -> bool | None:
+
+        StepLoggerContext.INDENT[self.thread] -= 1
+        indent = "<" * StepLoggerContext.INDENT[self.thread]
+        self.logger.info(f"[{self.thread}] {indent} {self.title}")
+
+    def __call__(self, func):
+        @wraps(func)
+        def impl(*a, **kw):
+            with self:
+                return func(*a, **kw)
+
+        return impl
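A plausible way to enable the new handler alongside Allure, assuming the reporter's `register_handler` API; nested steps then log with per-thread `>`/`<` indentation markers:

from frostfs_testlib.reporter import get_reporter
from frostfs_testlib.reporter.allure_handler import AllureHandler
from frostfs_testlib.reporter.steps_logger import StepsLogger

reporter = get_reporter()
reporter.register_handler(AllureHandler())
reporter.register_handler(StepsLogger())
# Log output then looks like: "[140512...] > outer step" / "[140512...] >> inner step"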
@@ -9,4 +9,4 @@ FROSTFS_ADM_EXEC = os.getenv("FROSTFS_ADM_EXEC", "frostfs-adm")
 # Config for frostfs-adm utility. Optional if tests are running against devenv
 FROSTFS_ADM_CONFIG_PATH = os.getenv("FROSTFS_ADM_CONFIG_PATH")

-CLI_DEFAULT_TIMEOUT = os.getenv("CLI_DEFAULT_TIMEOUT", None)
+CLI_DEFAULT_TIMEOUT = os.getenv("CLI_DEFAULT_TIMEOUT", "100s")
@@ -43,6 +43,6 @@ with open(DEFAULT_WALLET_CONFIG, "w") as file:

 # Number of attempts that S3 clients will attempt per each request (1 means single attempt
 # without any retries)
-MAX_REQUEST_ATTEMPTS = 1
+MAX_REQUEST_ATTEMPTS = 5
 RETRY_MODE = "standard"
 CREDENTIALS_CREATE_TIMEOUT = "1m"
@@ -23,6 +23,8 @@ INVALID_RANGE_OVERFLOW = "invalid '{range}' range: uint64 overflow"
 INVALID_OFFSET_SPECIFIER = "invalid '{range}' range offset specifier"
 INVALID_LENGTH_SPECIFIER = "invalid '{range}' range length specifier"

-S3_MALFORMED_XML_REQUEST = (
-    "The XML you provided was not well-formed or did not validate against our published schema."
-)
+S3_BUCKET_DOES_NOT_ALLOW_ACL = "The bucket does not allow ACLs"
+S3_MALFORMED_XML_REQUEST = "The XML you provided was not well-formed or did not validate against our published schema."
+
+RULE_ACCESS_DENIED_CONTAINER = "access to container operation {operation} is denied by access policy engine: Access denied"
+RULE_ACCESS_DENIED_OBJECT = "access to object operation denied: ape denied request: method {operation}: Access denied"
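The new APE error patterns are format templates, so tests substitute the operation under check before matching; the operation name below is illustrative:

from frostfs_testlib.resources.error_patterns import RULE_ACCESS_DENIED_CONTAINER

expected = RULE_ACCESS_DENIED_CONTAINER.format(operation="PutContainer")
# `expected` is then matched against CLI stderr in the test assertion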
9
src/frostfs_testlib/resources/s3_acl_grants.py
Normal file
@@ -0,0 +1,9 @@
+ALL_USERS_GROUP_URI = "http://acs.amazonaws.com/groups/global/AllUsers"
+ALL_USERS_GROUP_WRITE_GRANT = {"Grantee": {"Type": "Group", "URI": ALL_USERS_GROUP_URI}, "Permission": "WRITE"}
+ALL_USERS_GROUP_READ_GRANT = {"Grantee": {"Type": "Group", "URI": ALL_USERS_GROUP_URI}, "Permission": "READ"}
+CANONICAL_USER_FULL_CONTROL_GRANT = {"Grantee": {"Type": "CanonicalUser"}, "Permission": "FULL_CONTROL"}
+
+# https://docs.aws.amazon.com/AmazonS3/latest/userguide/acl-overview.html#canned-acl
+PRIVATE_GRANTS = []
+PUBLIC_READ_GRANTS = [ALL_USERS_GROUP_READ_GRANT]
+PUBLIC_READ_WRITE_GRANTS = [ALL_USERS_GROUP_WRITE_GRANT, ALL_USERS_GROUP_READ_GRANT]
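These canned-ACL grant lists are meant to be compared against what `get_bucket_acl` returns; a hedged helper sketch, where `s3_client` is assumed to be any `S3ClientWrapper` implementation:

from frostfs_testlib.resources import s3_acl_grants

def assert_bucket_is_public_read(s3_client, bucket: str) -> None:
    grants = s3_client.get_bucket_acl(bucket)
    for grant in s3_acl_grants.PUBLIC_READ_GRANTS:
        assert grant in grants, f"Expected grant missing: {grant}"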
(One file diff suppressed because it is too large.)
@@ -1,7 +1,6 @@
 import json
 import logging
 import os
-import uuid
 from datetime import datetime
 from functools import wraps
 from time import sleep
@@ -13,17 +12,15 @@ from botocore.config import Config
 from botocore.exceptions import ClientError
 from mypy_boto3_s3 import S3Client

-from frostfs_testlib.reporter import get_reporter
-from frostfs_testlib.resources.common import (
-    ASSETS_DIR,
-    MAX_REQUEST_ATTEMPTS,
-    RETRY_MODE,
-    S3_SYNC_WAIT_TIME,
-)
+from frostfs_testlib import reporter
+from frostfs_testlib.resources.common import ASSETS_DIR, MAX_REQUEST_ATTEMPTS, RETRY_MODE, S3_SYNC_WAIT_TIME
 from frostfs_testlib.s3.interfaces import S3ClientWrapper, VersioningStatus, _make_objs_dict
-from frostfs_testlib.utils.cli_utils import log_command_execution
+from frostfs_testlib.utils import string_utils

-reporter = get_reporter()
+# TODO: Refactor this code to use shell instead of _cmd_run
+from frostfs_testlib.utils.cli_utils import _configure_aws_cli, log_command_execution
+from frostfs_testlib.utils.file_utils import TestFile

 logger = logging.getLogger("NeoLogger")

 # Disable warnings on self-signed certificate which the
@@ -46,11 +43,14 @@ def report_error(func):
 class Boto3ClientWrapper(S3ClientWrapper):
     __repr_name__: str = "Boto3 client"

-    @reporter.step_deco("Configure S3 client (boto3)")
+    @reporter.step("Configure S3 client (boto3)")
     @report_error
-    def __init__(self, access_key_id: str, secret_access_key: str, s3gate_endpoint: str) -> None:
+    def __init__(
+        self, access_key_id: str, secret_access_key: str, s3gate_endpoint: str, profile: str = "default", region: str = "us-east-1"
+    ) -> None:
         self.boto3_client: S3Client = None
         self.session = boto3.Session()
+        self.region = region
         self.config = Config(
             retries={
                 "max_attempts": MAX_REQUEST_ATTEMPTS,
@@ -60,9 +60,10 @@ class Boto3ClientWrapper(S3ClientWrapper):
         self.access_key_id: str = access_key_id
         self.secret_access_key: str = secret_access_key
         self.s3gate_endpoint: str = ""
+        self.boto3_iam_client: S3Client = None
         self.set_endpoint(s3gate_endpoint)

-    @reporter.step_deco("Set endpoint S3 to {s3gate_endpoint}")
+    @reporter.step("Set endpoint S3 to {s3gate_endpoint}")
     def set_endpoint(self, s3gate_endpoint: str):
         if self.s3gate_endpoint == s3gate_endpoint:
             return
@@ -73,11 +74,22 @@ class Boto3ClientWrapper(S3ClientWrapper):
             service_name="s3",
             aws_access_key_id=self.access_key_id,
             aws_secret_access_key=self.secret_access_key,
+            region_name=self.region,
             config=self.config,
             endpoint_url=s3gate_endpoint,
             verify=False,
         )

+    @reporter.step("Set endpoint IAM to {iam_endpoint}")
+    def set_iam_endpoint(self, iam_endpoint: str):
+        self.boto3_iam_client = self.session.client(
+            service_name="iam",
+            aws_access_key_id=self.access_key_id,
+            aws_secret_access_key=self.secret_access_key,
+            endpoint_url=iam_endpoint,
+            verify=False,
+        )
+
     def _to_s3_param(self, param: str):
         replacement_map = {
             "Acl": "ACL",
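Construction of the extended wrapper, sketched with placeholder endpoints; `set_iam_endpoint` must be called before any operation that goes through `boto3_iam_client`, and the import path is an assumption based on this library's s3 package layout:

from frostfs_testlib.s3.boto3_client import Boto3ClientWrapper  # path assumed

client = Boto3ClientWrapper(
    access_key_id="<access-key>",
    secret_access_key="<secret-key>",
    s3gate_endpoint="http://s3gate.example:8080",
    profile="default",
    region="us-east-1",
)
client.set_iam_endpoint("http://iam.example:8080")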
@ -90,7 +102,7 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
||||||
return result
|
return result
|
||||||
|
|
||||||
# BUCKET METHODS #
|
# BUCKET METHODS #
|
||||||
@reporter.step_deco("Create bucket S3")
|
@reporter.step("Create bucket S3")
|
||||||
@report_error
|
@report_error
|
||||||
def create_bucket(
|
def create_bucket(
|
||||||
self,
|
self,
|
||||||
|
@ -103,7 +115,7 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
||||||
location_constraint: Optional[str] = None,
|
location_constraint: Optional[str] = None,
|
||||||
) -> str:
|
) -> str:
|
||||||
if bucket is None:
|
if bucket is None:
|
||||||
bucket = str(uuid.uuid4())
|
bucket = string_utils.unique_name("bucket-")
|
||||||
|
|
||||||
params = {"Bucket": bucket}
|
params = {"Bucket": bucket}
|
||||||
if object_lock_enabled_for_bucket is not None:
|
if object_lock_enabled_for_bucket is not None:
|
||||||
|
@ -118,16 +130,13 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
||||||
elif grant_full_control:
|
elif grant_full_control:
|
||||||
params.update({"GrantFullControl": grant_full_control})
|
params.update({"GrantFullControl": grant_full_control})
|
||||||
if location_constraint:
|
if location_constraint:
|
||||||
params.update(
|
params.update({"CreateBucketConfiguration": {"LocationConstraint": location_constraint}})
|
||||||
{"CreateBucketConfiguration": {"LocationConstraint": location_constraint}}
|
|
||||||
)
|
|
||||||
|
|
||||||
s3_bucket = self.boto3_client.create_bucket(**params)
|
s3_bucket = self.boto3_client.create_bucket(**params)
|
||||||
log_command_execution(f"Created S3 bucket {bucket}", s3_bucket)
|
log_command_execution(f"Created S3 bucket {bucket}", s3_bucket)
|
||||||
sleep(S3_SYNC_WAIT_TIME)
|
|
||||||
return bucket
|
return bucket
|
||||||
|
|
||||||
@reporter.step_deco("List buckets S3")
|
@reporter.step("List buckets S3")
|
||||||
@report_error
|
@report_error
|
||||||
def list_buckets(self) -> list[str]:
|
def list_buckets(self) -> list[str]:
|
||||||
found_buckets = []
|
found_buckets = []
|
||||||
|
@ -140,28 +149,25 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
||||||
|
|
||||||
return found_buckets
|
return found_buckets
|
||||||
|
|
||||||
@reporter.step_deco("Delete bucket S3")
|
@reporter.step("Delete bucket S3")
|
||||||
@report_error
|
@report_error
|
||||||
def delete_bucket(self, bucket: str) -> None:
|
def delete_bucket(self, bucket: str) -> None:
|
||||||
response = self.boto3_client.delete_bucket(Bucket=bucket)
|
response = self.boto3_client.delete_bucket(Bucket=bucket)
|
||||||
log_command_execution("S3 Delete bucket result", response)
|
log_command_execution("S3 Delete bucket result", response)
|
||||||
sleep(S3_SYNC_WAIT_TIME)
|
|
||||||
|
|
||||||
@reporter.step_deco("Head bucket S3")
|
@reporter.step("Head bucket S3")
|
||||||
@report_error
|
@report_error
|
||||||
def head_bucket(self, bucket: str) -> None:
|
def head_bucket(self, bucket: str) -> None:
|
||||||
response = self.boto3_client.head_bucket(Bucket=bucket)
|
response = self.boto3_client.head_bucket(Bucket=bucket)
|
||||||
log_command_execution("S3 Head bucket result", response)
|
log_command_execution("S3 Head bucket result", response)
|
||||||
|
|
||||||
@reporter.step_deco("Put bucket versioning status")
|
@reporter.step("Put bucket versioning status")
|
||||||
@report_error
|
@report_error
|
||||||
def put_bucket_versioning(self, bucket: str, status: VersioningStatus) -> None:
|
def put_bucket_versioning(self, bucket: str, status: VersioningStatus) -> None:
|
||||||
response = self.boto3_client.put_bucket_versioning(
|
response = self.boto3_client.put_bucket_versioning(Bucket=bucket, VersioningConfiguration={"Status": status.value})
|
||||||
Bucket=bucket, VersioningConfiguration={"Status": status.value}
|
|
||||||
)
|
|
||||||
log_command_execution("S3 Set bucket versioning to", response)
|
log_command_execution("S3 Set bucket versioning to", response)
|
||||||
|
|
||||||
@reporter.step_deco("Get bucket versioning status")
|
@reporter.step("Get bucket versioning status")
|
||||||
@report_error
|
@report_error
|
||||||
def get_bucket_versioning_status(self, bucket: str) -> Literal["Enabled", "Suspended"]:
|
def get_bucket_versioning_status(self, bucket: str) -> Literal["Enabled", "Suspended"]:
|
||||||
response = self.boto3_client.get_bucket_versioning(Bucket=bucket)
|
response = self.boto3_client.get_bucket_versioning(Bucket=bucket)
|
||||||
|
@ -169,7 +175,7 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
||||||
log_command_execution("S3 Got bucket versioning status", response)
|
log_command_execution("S3 Got bucket versioning status", response)
|
||||||
return status
|
return status
|
||||||
|
|
||||||
@reporter.step_deco("Put bucket tagging")
|
@reporter.step("Put bucket tagging")
|
||||||
@report_error
|
@report_error
|
||||||
def put_bucket_tagging(self, bucket: str, tags: list) -> None:
|
def put_bucket_tagging(self, bucket: str, tags: list) -> None:
|
||||||
tags = [{"Key": tag_key, "Value": tag_value} for tag_key, tag_value in tags]
|
tags = [{"Key": tag_key, "Value": tag_value} for tag_key, tag_value in tags]
|
||||||
|
@ -177,27 +183,27 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
||||||
response = self.boto3_client.put_bucket_tagging(Bucket=bucket, Tagging=tagging)
|
response = self.boto3_client.put_bucket_tagging(Bucket=bucket, Tagging=tagging)
|
||||||
log_command_execution("S3 Put bucket tagging", response)
|
log_command_execution("S3 Put bucket tagging", response)
|
||||||
|
|
||||||
@reporter.step_deco("Get bucket tagging")
|
@reporter.step("Get bucket tagging")
|
||||||
@report_error
|
@report_error
|
||||||
def get_bucket_tagging(self, bucket: str) -> list:
|
def get_bucket_tagging(self, bucket: str) -> list:
|
||||||
response = self.boto3_client.get_bucket_tagging(Bucket=bucket)
|
response = self.boto3_client.get_bucket_tagging(Bucket=bucket)
|
||||||
log_command_execution("S3 Get bucket tagging", response)
|
log_command_execution("S3 Get bucket tagging", response)
|
||||||
return response.get("TagSet")
|
return response.get("TagSet")
|
||||||
|
|
||||||
@reporter.step_deco("Get bucket acl")
|
@reporter.step("Get bucket acl")
|
||||||
@report_error
|
@report_error
|
||||||
def get_bucket_acl(self, bucket: str) -> list:
|
def get_bucket_acl(self, bucket: str) -> list:
|
||||||
response = self.boto3_client.get_bucket_acl(Bucket=bucket)
|
response = self.boto3_client.get_bucket_acl(Bucket=bucket)
|
||||||
log_command_execution("S3 Get bucket acl", response)
|
log_command_execution("S3 Get bucket acl", response)
|
||||||
return response.get("Grants")
|
return response.get("Grants")
|
||||||
|
|
||||||
@reporter.step_deco("Delete bucket tagging")
|
@reporter.step("Delete bucket tagging")
|
||||||
@report_error
|
@report_error
|
||||||
def delete_bucket_tagging(self, bucket: str) -> None:
|
def delete_bucket_tagging(self, bucket: str) -> None:
|
||||||
response = self.boto3_client.delete_bucket_tagging(Bucket=bucket)
|
response = self.boto3_client.delete_bucket_tagging(Bucket=bucket)
|
||||||
log_command_execution("S3 Delete bucket tagging", response)
|
log_command_execution("S3 Delete bucket tagging", response)
|
||||||
|
|
||||||
@reporter.step_deco("Put bucket ACL")
|
@reporter.step("Put bucket ACL")
|
||||||
@report_error
|
@report_error
|
||||||
def put_bucket_acl(
|
def put_bucket_acl(
|
||||||
self,
|
self,
|
||||||
|
@ -206,68 +212,67 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
||||||
grant_write: Optional[str] = None,
|
grant_write: Optional[str] = None,
|
||||||
grant_read: Optional[str] = None,
|
grant_read: Optional[str] = None,
|
||||||
) -> None:
|
) -> None:
|
||||||
params = {
|
params = {self._to_s3_param(param): value for param, value in locals().items() if param not in ["self"] and value is not None}
|
||||||
self._to_s3_param(param): value
|
|
||||||
for param, value in locals().items()
|
|
||||||
if param not in ["self"] and value is not None
|
|
||||||
}
|
|
||||||
response = self.boto3_client.put_bucket_acl(**params)
|
response = self.boto3_client.put_bucket_acl(**params)
|
||||||
log_command_execution("S3 ACL bucket result", response)
|
log_command_execution("S3 ACL bucket result", response)
|
||||||
|
|
||||||
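The one-line rewrite above collapses the recurring locals()-based parameter builder that appears throughout this file. A minimal standalone sketch of the idea, assuming `_to_s3_param` simply converts snake_case names to the CamelCase keys boto3 expects (the real helper may special-case some names):

```python
def to_s3_param(name: str) -> str:
    # Assumed behavior: snake_case -> CamelCase ("version_id" -> "VersionId").
    return "".join(part.capitalize() for part in name.split("_"))


def build_params(**kwargs) -> dict:
    # Same shape as the dict comprehension over locals() in the wrapper:
    # drop None values, convert the remaining names to boto3 keyword style.
    return {to_s3_param(k): v for k, v in kwargs.items() if v is not None}


print(build_params(bucket="my-bucket", key="obj", version_id=None))
# {'Bucket': 'my-bucket', 'Key': 'obj'}
```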
@reporter.step_deco("Put object lock configuration")
|
@reporter.step("Put object lock configuration")
|
||||||
@report_error
|
@report_error
|
||||||
def put_object_lock_configuration(self, bucket: str, configuration: dict) -> dict:
|
def put_object_lock_configuration(self, bucket: str, configuration: dict) -> dict:
|
||||||
response = self.boto3_client.put_object_lock_configuration(
|
response = self.boto3_client.put_object_lock_configuration(Bucket=bucket, ObjectLockConfiguration=configuration)
|
||||||
Bucket=bucket, ObjectLockConfiguration=configuration
|
|
||||||
)
|
|
||||||
log_command_execution("S3 put_object_lock_configuration result", response)
|
log_command_execution("S3 put_object_lock_configuration result", response)
|
||||||
return response
|
return response
|
||||||
|
|
||||||
@reporter.step_deco("Get object lock configuration")
|
@reporter.step("Get object lock configuration")
|
||||||
@report_error
|
@report_error
|
||||||
def get_object_lock_configuration(self, bucket: str) -> dict:
|
def get_object_lock_configuration(self, bucket: str) -> dict:
|
||||||
response = self.boto3_client.get_object_lock_configuration(Bucket=bucket)
|
response = self.boto3_client.get_object_lock_configuration(Bucket=bucket)
|
||||||
log_command_execution("S3 get_object_lock_configuration result", response)
|
log_command_execution("S3 get_object_lock_configuration result", response)
|
||||||
return response.get("ObjectLockConfiguration")
|
return response.get("ObjectLockConfiguration")
|
||||||
|
|
||||||
@reporter.step_deco("Get bucket policy")
|
@reporter.step("Get bucket policy")
|
||||||
@report_error
|
@report_error
|
||||||
def get_bucket_policy(self, bucket: str) -> str:
|
def get_bucket_policy(self, bucket: str) -> str:
|
||||||
response = self.boto3_client.get_bucket_policy(Bucket=bucket)
|
response = self.boto3_client.get_bucket_policy(Bucket=bucket)
|
||||||
log_command_execution("S3 get_bucket_policy result", response)
|
log_command_execution("S3 get_bucket_policy result", response)
|
||||||
return response.get("Policy")
|
return response.get("Policy")
|
||||||
|
|
||||||
@reporter.step_deco("Put bucket policy")
|
@reporter.step("Delete bucket policy")
|
||||||
|
@report_error
|
||||||
|
def delete_bucket_policy(self, bucket: str) -> str:
|
||||||
|
response = self.boto3_client.delete_bucket_policy(Bucket=bucket)
|
||||||
|
log_command_execution("S3 delete_bucket_policy result", response)
|
||||||
|
return response
|
||||||
|
|
||||||
|
@reporter.step("Put bucket policy")
|
||||||
@report_error
|
@report_error
|
||||||
def put_bucket_policy(self, bucket: str, policy: dict) -> None:
|
def put_bucket_policy(self, bucket: str, policy: dict) -> None:
|
||||||
response = self.boto3_client.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
|
response = self.boto3_client.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
|
||||||
log_command_execution("S3 put_bucket_policy result", response)
|
log_command_execution("S3 put_bucket_policy result", response)
|
||||||
return response
|
return response
|
||||||
|
|
||||||
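The hunk above both renames the reporter decorator and introduces the new `delete_bucket_policy` method. A usage sketch of the policy round trip; the import path, credentials, and endpoint are placeholders assumed for illustration, not part of this diff:

```python
import json

from frostfs_testlib.s3.boto3_client import Boto3ClientWrapper  # assumed module path

s3 = Boto3ClientWrapper("<ACCESS_KEY>", "<SECRET_KEY>", "http://127.0.0.1:8084", "default", "us-east-1")

policy = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Principal": "*", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::my-bucket/*"}],
}
s3.put_bucket_policy("my-bucket", policy)             # the wrapper json.dumps() the dict itself
print(json.loads(s3.get_bucket_policy("my-bucket")))  # "Policy" comes back as a JSON string
s3.delete_bucket_policy("my-bucket")                  # new in this change set
```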
@reporter.step_deco("Get bucket cors")
|
@reporter.step("Get bucket cors")
|
||||||
@report_error
|
@report_error
|
||||||
def get_bucket_cors(self, bucket: str) -> dict:
|
def get_bucket_cors(self, bucket: str) -> dict:
|
||||||
response = self.boto3_client.get_bucket_cors(Bucket=bucket)
|
response = self.boto3_client.get_bucket_cors(Bucket=bucket)
|
||||||
log_command_execution("S3 get_bucket_cors result", response)
|
log_command_execution("S3 get_bucket_cors result", response)
|
||||||
return response.get("CORSRules")
|
return response.get("CORSRules")
|
||||||
|
|
||||||
@reporter.step_deco("Get bucket location")
|
@reporter.step("Get bucket location")
|
||||||
@report_error
|
@report_error
|
||||||
def get_bucket_location(self, bucket: str) -> str:
|
def get_bucket_location(self, bucket: str) -> str:
|
||||||
response = self.boto3_client.get_bucket_location(Bucket=bucket)
|
response = self.boto3_client.get_bucket_location(Bucket=bucket)
|
||||||
log_command_execution("S3 get_bucket_location result", response)
|
log_command_execution("S3 get_bucket_location result", response)
|
||||||
return response.get("LocationConstraint")
|
return response.get("LocationConstraint")
|
||||||
|
|
||||||
@reporter.step_deco("Put bucket cors")
|
@reporter.step("Put bucket cors")
|
||||||
@report_error
|
@report_error
|
||||||
def put_bucket_cors(self, bucket: str, cors_configuration: dict) -> None:
|
def put_bucket_cors(self, bucket: str, cors_configuration: dict) -> None:
|
||||||
response = self.boto3_client.put_bucket_cors(
|
response = self.boto3_client.put_bucket_cors(Bucket=bucket, CORSConfiguration=cors_configuration)
|
||||||
Bucket=bucket, CORSConfiguration=cors_configuration
|
|
||||||
)
|
|
||||||
log_command_execution("S3 put_bucket_cors result", response)
|
log_command_execution("S3 put_bucket_cors result", response)
|
||||||
return response
|
return response
|
||||||
|
|
||||||
@reporter.step_deco("Delete bucket cors")
|
@reporter.step("Delete bucket cors")
|
||||||
@report_error
|
@report_error
|
||||||
def delete_bucket_cors(self, bucket: str) -> None:
|
def delete_bucket_cors(self, bucket: str) -> None:
|
||||||
response = self.boto3_client.delete_bucket_cors(Bucket=bucket)
|
response = self.boto3_client.delete_bucket_cors(Bucket=bucket)
|
||||||
|
@ -276,7 +281,7 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
||||||
# END OF BUCKET METHODS #
|
# END OF BUCKET METHODS #
|
||||||
# OBJECT METHODS #
|
# OBJECT METHODS #
|
||||||
|
|
||||||
@reporter.step_deco("List objects S3 v2")
|
@reporter.step("List objects S3 v2")
|
||||||
@report_error
|
@report_error
|
||||||
def list_objects_v2(self, bucket: str, full_output: bool = False) -> Union[dict, list[str]]:
|
def list_objects_v2(self, bucket: str, full_output: bool = False) -> Union[dict, list[str]]:
|
||||||
response = self.boto3_client.list_objects_v2(Bucket=bucket)
|
response = self.boto3_client.list_objects_v2(Bucket=bucket)
|
||||||
|
@ -287,7 +292,7 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
||||||
|
|
||||||
return response if full_output else obj_list
|
return response if full_output else obj_list
|
||||||
|
|
||||||
@reporter.step_deco("List objects S3")
|
@reporter.step("List objects S3")
|
||||||
@report_error
|
@report_error
|
||||||
def list_objects(self, bucket: str, full_output: bool = False) -> Union[dict, list[str]]:
|
def list_objects(self, bucket: str, full_output: bool = False) -> Union[dict, list[str]]:
|
||||||
response = self.boto3_client.list_objects(Bucket=bucket)
|
response = self.boto3_client.list_objects(Bucket=bucket)
|
||||||
|
@ -298,21 +303,21 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
||||||
|
|
||||||
return response if full_output else obj_list
|
return response if full_output else obj_list
|
||||||
|
|
||||||
@reporter.step_deco("List objects versions S3")
|
@reporter.step("List objects versions S3")
|
||||||
@report_error
|
@report_error
|
||||||
def list_objects_versions(self, bucket: str, full_output: bool = False) -> dict:
|
def list_objects_versions(self, bucket: str, full_output: bool = False) -> dict:
|
||||||
response = self.boto3_client.list_object_versions(Bucket=bucket)
|
response = self.boto3_client.list_object_versions(Bucket=bucket)
|
||||||
log_command_execution("S3 List objects versions result", response)
|
log_command_execution("S3 List objects versions result", response)
|
||||||
return response if full_output else response.get("Versions", [])
|
return response if full_output else response.get("Versions", [])
|
||||||
|
|
||||||
@reporter.step_deco("List objects delete markers S3")
|
@reporter.step("List objects delete markers S3")
|
||||||
@report_error
|
@report_error
|
||||||
def list_delete_markers(self, bucket: str, full_output: bool = False) -> list:
|
def list_delete_markers(self, bucket: str, full_output: bool = False) -> list:
|
||||||
response = self.boto3_client.list_object_versions(Bucket=bucket)
|
response = self.boto3_client.list_object_versions(Bucket=bucket)
|
||||||
log_command_execution("S3 List objects delete markers result", response)
|
log_command_execution("S3 List objects delete markers result", response)
|
||||||
return response if full_output else response.get("DeleteMarkers", [])
|
return response if full_output else response.get("DeleteMarkers", [])
|
||||||
|
|
||||||
@reporter.step_deco("Put object S3")
|
@reporter.step("Put object S3")
|
||||||
@report_error
|
@report_error
|
||||||
def put_object(
|
def put_object(
|
||||||
self,
|
self,
|
||||||
|
@ -343,32 +348,23 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
||||||
log_command_execution("S3 Put object result", response)
|
log_command_execution("S3 Put object result", response)
|
||||||
return response.get("VersionId")
|
return response.get("VersionId")
|
||||||
|
|
||||||
@reporter.step_deco("Head object S3")
|
@reporter.step("Head object S3")
|
||||||
@report_error
|
@report_error
|
||||||
def head_object(self, bucket: str, key: str, version_id: Optional[str] = None) -> dict:
|
def head_object(self, bucket: str, key: str, version_id: Optional[str] = None) -> dict:
|
||||||
params = {
|
params = {self._to_s3_param(param): value for param, value in locals().items() if param not in ["self"] and value is not None}
|
||||||
self._to_s3_param(param): value
|
|
||||||
for param, value in locals().items()
|
|
||||||
if param not in ["self"] and value is not None
|
|
||||||
}
|
|
||||||
response = self.boto3_client.head_object(**params)
|
response = self.boto3_client.head_object(**params)
|
||||||
log_command_execution("S3 Head object result", response)
|
log_command_execution("S3 Head object result", response)
|
||||||
return response
|
return response
|
||||||
|
|
||||||
@reporter.step_deco("Delete object S3")
|
@reporter.step("Delete object S3")
|
||||||
@report_error
|
@report_error
|
||||||
def delete_object(self, bucket: str, key: str, version_id: Optional[str] = None) -> dict:
|
def delete_object(self, bucket: str, key: str, version_id: Optional[str] = None) -> dict:
|
||||||
params = {
|
params = {self._to_s3_param(param): value for param, value in locals().items() if param not in ["self"] and value is not None}
|
||||||
self._to_s3_param(param): value
|
|
||||||
for param, value in locals().items()
|
|
||||||
if param not in ["self"] and value is not None
|
|
||||||
}
|
|
||||||
response = self.boto3_client.delete_object(**params)
|
response = self.boto3_client.delete_object(**params)
|
||||||
log_command_execution("S3 Delete object result", response)
|
log_command_execution("S3 Delete object result", response)
|
||||||
sleep(S3_SYNC_WAIT_TIME)
|
|
||||||
return response
|
return response
|
||||||
|
|
||||||
@reporter.step_deco("Delete objects S3")
|
@reporter.step("Delete objects S3")
|
||||||
@report_error
|
@report_error
|
||||||
def delete_objects(self, bucket: str, keys: list[str]) -> dict:
|
def delete_objects(self, bucket: str, keys: list[str]) -> dict:
|
||||||
response = self.boto3_client.delete_objects(Bucket=bucket, Delete=_make_objs_dict(keys))
|
response = self.boto3_client.delete_objects(Bucket=bucket, Delete=_make_objs_dict(keys))
|
||||||
|
@ -376,10 +372,9 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
||||||
assert (
|
assert (
|
||||||
"Errors" not in response
|
"Errors" not in response
|
||||||
), f'The following objects have not been deleted: {[err_info["Key"] for err_info in response["Errors"]]}.\nError Message: {response["Errors"]["Message"]}'
|
), f'The following objects have not been deleted: {[err_info["Key"] for err_info in response["Errors"]]}.\nError Message: {response["Errors"]["Message"]}'
|
||||||
sleep(S3_SYNC_WAIT_TIME)
|
|
||||||
return response
|
return response
|
||||||
|
|
||||||
@reporter.step_deco("Delete object versions S3")
|
@reporter.step("Delete object versions S3")
|
||||||
@report_error
|
@report_error
|
||||||
def delete_object_versions(self, bucket: str, object_versions: list) -> dict:
|
def delete_object_versions(self, bucket: str, object_versions: list) -> dict:
|
||||||
# Build deletion list in S3 format
|
# Build deletion list in S3 format
|
||||||
|
@ -396,17 +391,15 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
||||||
log_command_execution("S3 Delete objects result", response)
|
log_command_execution("S3 Delete objects result", response)
|
||||||
return response
|
return response
|
||||||
|
|
||||||
@reporter.step_deco("Delete object versions S3 without delete markers")
|
@reporter.step("Delete object versions S3 without delete markers")
|
||||||
@report_error
|
@report_error
|
||||||
def delete_object_versions_without_dm(self, bucket: str, object_versions: list) -> None:
|
def delete_object_versions_without_dm(self, bucket: str, object_versions: list) -> None:
|
||||||
# Delete objects without creating delete markers
|
# Delete objects without creating delete markers
|
||||||
for object_version in object_versions:
|
for object_version in object_versions:
|
||||||
response = self.boto3_client.delete_object(
|
response = self.boto3_client.delete_object(Bucket=bucket, Key=object_version["Key"], VersionId=object_version["VersionId"])
|
||||||
Bucket=bucket, Key=object_version["Key"], VersionId=object_version["VersionId"]
|
|
||||||
)
|
|
||||||
log_command_execution("S3 Delete object result", response)
|
log_command_execution("S3 Delete object result", response)
|
||||||
|
|
||||||
@reporter.step_deco("Put object ACL")
|
@reporter.step("Put object ACL")
|
||||||
@report_error
|
@report_error
|
||||||
def put_object_acl(
|
def put_object_acl(
|
||||||
self,
|
self,
|
||||||
|
@ -416,22 +409,20 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
||||||
grant_write: Optional[str] = None,
|
grant_write: Optional[str] = None,
|
||||||
grant_read: Optional[str] = None,
|
grant_read: Optional[str] = None,
|
||||||
) -> list:
|
) -> list:
|
||||||
# pytest.skip("Method put_object_acl is not supported by boto3 client")
|
params = {self._to_s3_param(param): value for param, value in locals().items() if param not in ["self"] and value is not None}
|
||||||
raise NotImplementedError("Unsupported for boto3 client")
|
response = self.boto3_client.put_object_acl(**params)
|
||||||
|
log_command_execution("S3 put object ACL", response)
|
||||||
|
return response.get("Grants")
|
||||||
|
|
||||||
@reporter.step_deco("Get object ACL")
|
@reporter.step("Get object ACL")
|
||||||
@report_error
|
@report_error
|
||||||
def get_object_acl(self, bucket: str, key: str, version_id: Optional[str] = None) -> list:
|
def get_object_acl(self, bucket: str, key: str, version_id: Optional[str] = None) -> list:
|
||||||
params = {
|
params = {self._to_s3_param(param): value for param, value in locals().items() if param not in ["self"] and value is not None}
|
||||||
self._to_s3_param(param): value
|
|
||||||
for param, value in locals().items()
|
|
||||||
if param not in ["self"] and value is not None
|
|
||||||
}
|
|
||||||
response = self.boto3_client.get_object_acl(**params)
|
response = self.boto3_client.get_object_acl(**params)
|
||||||
log_command_execution("S3 ACL objects result", response)
|
log_command_execution("S3 ACL objects result", response)
|
||||||
return response.get("Grants")
|
return response.get("Grants")
|
||||||
|
|
||||||
@reporter.step_deco("Copy object S3")
|
@reporter.step("Copy object S3")
|
||||||
@report_error
|
@report_error
|
||||||
def copy_object(
|
def copy_object(
|
||||||
self,
|
self,
|
||||||
|
@ -448,7 +439,7 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
||||||
if bucket is None:
|
if bucket is None:
|
||||||
bucket = source_bucket
|
bucket = source_bucket
|
||||||
if key is None:
|
if key is None:
|
||||||
key = os.path.join(os.getcwd(), str(uuid.uuid4()))
|
key = string_utils.unique_name("copy-object-")
|
||||||
copy_source = f"{source_bucket}/{source_key}"
|
copy_source = f"{source_bucket}/{source_key}"
|
||||||
|
|
||||||
params = {
|
params = {
|
||||||
|
@ -460,7 +451,7 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
||||||
log_command_execution("S3 Copy objects result", response)
|
log_command_execution("S3 Copy objects result", response)
|
||||||
return key
|
return key
|
||||||
|
|
||||||
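`copy_object` now derives its default destination key from `string_utils.unique_name("copy-object-")` instead of a cwd-based uuid4 path, so default keys are short and readable. A usage sketch; the keyword names follow the lines visible in this hunk and the client setup is the same assumed placeholder as in the earlier sketch:

```python
from frostfs_testlib.s3.boto3_client import Boto3ClientWrapper  # assumed module path

s3 = Boto3ClientWrapper("<ACCESS_KEY>", "<SECRET_KEY>", "http://127.0.0.1:8084", "default", "us-east-1")

# Copy within the same bucket; the wrapper falls back to source_bucket and
# generates a "copy-object-..." key when the destination is omitted.
new_key = s3.copy_object(source_bucket="my-bucket", source_key="report.csv")
print(new_key)
```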
@reporter.step_deco("Get object S3")
|
@reporter.step("Get object S3")
|
||||||
@report_error
|
@report_error
|
||||||
def get_object(
|
def get_object(
|
||||||
self,
|
self,
|
||||||
|
@ -469,8 +460,7 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
||||||
version_id: Optional[str] = None,
|
version_id: Optional[str] = None,
|
||||||
object_range: Optional[tuple[int, int]] = None,
|
object_range: Optional[tuple[int, int]] = None,
|
||||||
full_output: bool = False,
|
full_output: bool = False,
|
||||||
) -> Union[dict, str]:
|
) -> dict | TestFile:
|
||||||
filename = os.path.join(os.getcwd(), ASSETS_DIR, str(uuid.uuid4()))
|
|
||||||
range_str = None
|
range_str = None
|
||||||
if object_range:
|
if object_range:
|
||||||
range_str = f"bytes={object_range[0]}-{object_range[1]}"
|
range_str = f"bytes={object_range[0]}-{object_range[1]}"
|
||||||
|
@ -478,20 +468,23 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
||||||
params = {
|
params = {
|
||||||
self._to_s3_param(param): value
|
self._to_s3_param(param): value
|
||||||
for param, value in {**locals(), **{"Range": range_str}}.items()
|
for param, value in {**locals(), **{"Range": range_str}}.items()
|
||||||
if param not in ["self", "object_range", "full_output", "range_str", "filename"]
|
if param not in ["self", "object_range", "full_output", "range_str", "filename"] and value is not None
|
||||||
and value is not None
|
|
||||||
}
|
}
|
||||||
response = self.boto3_client.get_object(**params)
|
response = self.boto3_client.get_object(**params)
|
||||||
log_command_execution("S3 Get objects result", response)
|
log_command_execution("S3 Get objects result", response)
|
||||||
|
|
||||||
with open(f"{filename}", "wb") as get_file:
|
if full_output:
|
||||||
|
return response
|
||||||
|
|
||||||
|
test_file = TestFile(os.path.join(os.getcwd(), ASSETS_DIR, string_utils.unique_name("dl-object-")))
|
||||||
|
with open(test_file, "wb") as file:
|
||||||
chunk = response["Body"].read(1024)
|
chunk = response["Body"].read(1024)
|
||||||
while chunk:
|
while chunk:
|
||||||
get_file.write(chunk)
|
file.write(chunk)
|
||||||
chunk = response["Body"].read(1024)
|
chunk = response["Body"].read(1024)
|
||||||
return response if full_output else filename
|
return test_file
|
||||||
|
|
||||||
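`get_object` now returns the raw response when `full_output=True` and a `TestFile` (a path-like object pointing at the downloaded bytes) otherwise, replacing the old response-or-filename union. A sketch, with client setup assumed as in the earlier examples:

```python
from frostfs_testlib.s3.boto3_client import Boto3ClientWrapper  # assumed module path

s3 = Boto3ClientWrapper("<ACCESS_KEY>", "<SECRET_KEY>", "http://127.0.0.1:8084", "default", "us-east-1")

test_file = s3.get_object("my-bucket", "report.csv")  # TestFile, opens like a plain path
with open(test_file, "rb") as f:
    data = f.read()

raw = s3.get_object("my-bucket", "report.csv", full_output=True)
print(raw["ContentLength"])  # standard boto3 GetObject response field

partial = s3.get_object("my-bucket", "report.csv", object_range=(0, 99))  # first 100 bytes
```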
@reporter.step_deco("Create multipart upload S3")
|
@reporter.step("Create multipart upload S3")
|
||||||
@report_error
|
@report_error
|
||||||
def create_multipart_upload(self, bucket: str, key: str) -> str:
|
def create_multipart_upload(self, bucket: str, key: str) -> str:
|
||||||
response = self.boto3_client.create_multipart_upload(Bucket=bucket, Key=key)
|
response = self.boto3_client.create_multipart_upload(Bucket=bucket, Key=key)
|
||||||
|
@ -500,7 +493,7 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
||||||
|
|
||||||
return response["UploadId"]
|
return response["UploadId"]
|
||||||
|
|
||||||
@reporter.step_deco("List multipart uploads S3")
|
@reporter.step("List multipart uploads S3")
|
||||||
@report_error
|
@report_error
|
||||||
def list_multipart_uploads(self, bucket: str) -> Optional[list[dict]]:
|
def list_multipart_uploads(self, bucket: str) -> Optional[list[dict]]:
|
||||||
response = self.boto3_client.list_multipart_uploads(Bucket=bucket)
|
response = self.boto3_client.list_multipart_uploads(Bucket=bucket)
|
||||||
|
@ -508,19 +501,15 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
||||||
|
|
||||||
return response.get("Uploads")
|
return response.get("Uploads")
|
||||||
|
|
||||||
@reporter.step_deco("Abort multipart upload S3")
|
@reporter.step("Abort multipart upload S3")
|
||||||
@report_error
|
@report_error
|
||||||
def abort_multipart_upload(self, bucket: str, key: str, upload_id: str) -> None:
|
def abort_multipart_upload(self, bucket: str, key: str, upload_id: str) -> None:
|
||||||
response = self.boto3_client.abort_multipart_upload(
|
response = self.boto3_client.abort_multipart_upload(Bucket=bucket, Key=key, UploadId=upload_id)
|
||||||
Bucket=bucket, Key=key, UploadId=upload_id
|
|
||||||
)
|
|
||||||
log_command_execution("S3 Abort multipart upload", response)
|
log_command_execution("S3 Abort multipart upload", response)
|
||||||
|
|
||||||
@reporter.step_deco("Upload part S3")
|
@reporter.step("Upload part S3")
|
||||||
@report_error
|
@report_error
|
||||||
def upload_part(
|
def upload_part(self, bucket: str, key: str, upload_id: str, part_num: int, filepath: str) -> str:
|
||||||
self, bucket: str, key: str, upload_id: str, part_num: int, filepath: str
|
|
||||||
) -> str:
|
|
||||||
with open(filepath, "rb") as put_file:
|
with open(filepath, "rb") as put_file:
|
||||||
body = put_file.read()
|
body = put_file.read()
|
||||||
|
|
||||||
|
@ -536,11 +525,9 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
||||||
|
|
||||||
return response["ETag"]
|
return response["ETag"]
|
||||||
|
|
||||||
@reporter.step_deco("Upload copy part S3")
|
@reporter.step("Upload copy part S3")
|
||||||
@report_error
|
@report_error
|
||||||
def upload_part_copy(
|
def upload_part_copy(self, bucket: str, key: str, upload_id: str, part_num: int, copy_source: str) -> str:
|
||||||
self, bucket: str, key: str, upload_id: str, part_num: int, copy_source: str
|
|
||||||
) -> str:
|
|
||||||
response = self.boto3_client.upload_part_copy(
|
response = self.boto3_client.upload_part_copy(
|
||||||
UploadId=upload_id,
|
UploadId=upload_id,
|
||||||
Bucket=bucket,
|
Bucket=bucket,
|
||||||
|
@ -549,13 +536,11 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
||||||
CopySource=copy_source,
|
CopySource=copy_source,
|
||||||
)
|
)
|
||||||
log_command_execution("S3 Upload copy part", response)
|
log_command_execution("S3 Upload copy part", response)
|
||||||
assert response.get("CopyPartResult", []).get(
|
assert response.get("CopyPartResult", []).get("ETag"), f"Expected ETag in response:\n{response}"
|
||||||
"ETag"
|
|
||||||
), f"Expected ETag in response:\n{response}"
|
|
||||||
|
|
||||||
return response["CopyPartResult"]["ETag"]
|
return response["CopyPartResult"]["ETag"]
|
||||||
|
|
||||||
@reporter.step_deco("List parts S3")
|
@reporter.step("List parts S3")
|
||||||
@report_error
|
@report_error
|
||||||
def list_parts(self, bucket: str, key: str, upload_id: str) -> list[dict]:
|
def list_parts(self, bucket: str, key: str, upload_id: str) -> list[dict]:
|
||||||
response = self.boto3_client.list_parts(UploadId=upload_id, Bucket=bucket, Key=key)
|
response = self.boto3_client.list_parts(UploadId=upload_id, Bucket=bucket, Key=key)
|
||||||
|
@ -564,16 +549,16 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
||||||
|
|
||||||
return response["Parts"]
|
return response["Parts"]
|
||||||
|
|
||||||
@reporter.step_deco("Complete multipart upload S3")
|
@reporter.step("Complete multipart upload S3")
|
||||||
@report_error
|
@report_error
|
||||||
def complete_multipart_upload(self, bucket: str, key: str, upload_id: str, parts: list) -> None:
|
def complete_multipart_upload(self, bucket: str, key: str, upload_id: str, parts: list) -> None:
|
||||||
parts = [{"ETag": etag, "PartNumber": part_num} for part_num, etag in parts]
|
parts = [{"ETag": etag, "PartNumber": part_num} for part_num, etag in parts]
|
||||||
response = self.boto3_client.complete_multipart_upload(
|
response = self.boto3_client.complete_multipart_upload(Bucket=bucket, Key=key, UploadId=upload_id, MultipartUpload={"Parts": parts})
|
||||||
Bucket=bucket, Key=key, UploadId=upload_id, MultipartUpload={"Parts": parts}
|
|
||||||
)
|
|
||||||
log_command_execution("S3 Complete multipart upload", response)
|
log_command_execution("S3 Complete multipart upload", response)
|
||||||
|
|
||||||
@reporter.step_deco("Put object retention")
|
return response
|
||||||
|
|
||||||
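`complete_multipart_upload` now returns the raw response instead of discarding it. An end-to-end multipart sketch over the methods above; the wrapper expects parts as `(part_num, etag)` tuples, and the paths and credentials below are placeholders:

```python
from frostfs_testlib.s3.boto3_client import Boto3ClientWrapper  # assumed module path

s3 = Boto3ClientWrapper("<ACCESS_KEY>", "<SECRET_KEY>", "http://127.0.0.1:8084", "default", "us-east-1")

upload_id = s3.create_multipart_upload("my-bucket", "big.bin")
parts = []
for part_num, chunk_path in enumerate(["/tmp/part1.bin", "/tmp/part2.bin"], start=1):
    etag = s3.upload_part("my-bucket", "big.bin", upload_id, part_num, chunk_path)
    parts.append((part_num, etag))  # converted to {"ETag", "PartNumber"} internally

response = s3.complete_multipart_upload("my-bucket", "big.bin", upload_id, parts)
print(response.get("ETag"))  # now available to callers
```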
+    @reporter.step("Put object retention")
     @report_error
     def put_object_retention(
         self,
@@ -583,15 +568,11 @@ class Boto3ClientWrapper(S3ClientWrapper):
         version_id: Optional[str] = None,
         bypass_governance_retention: Optional[bool] = None,
     ) -> None:
-        params = {
-            self._to_s3_param(param): value
-            for param, value in locals().items()
-            if param not in ["self"] and value is not None
-        }
+        params = {self._to_s3_param(param): value for param, value in locals().items() if param not in ["self"] and value is not None}
         response = self.boto3_client.put_object_retention(**params)
         log_command_execution("S3 Put object retention ", response)
 
-    @reporter.step_deco("Put object legal hold")
+    @reporter.step("Put object legal hold")
     @report_error
     def put_object_legal_hold(
         self,
@@ -609,33 +590,29 @@ class Boto3ClientWrapper(S3ClientWrapper):
         response = self.boto3_client.put_object_legal_hold(**params)
         log_command_execution("S3 Put object legal hold ", response)
 
-    @reporter.step_deco("Put object tagging")
+    @reporter.step("Put object tagging")
     @report_error
-    def put_object_tagging(self, bucket: str, key: str, tags: list) -> None:
+    def put_object_tagging(self, bucket: str, key: str, tags: list, version_id: Optional[str] = "") -> None:
         tags = [{"Key": tag_key, "Value": tag_value} for tag_key, tag_value in tags]
         tagging = {"TagSet": tags}
-        response = self.boto3_client.put_object_tagging(Bucket=bucket, Key=key, Tagging=tagging)
+        response = self.boto3_client.put_object_tagging(Bucket=bucket, Key=key, Tagging=tagging, VersionId=version_id)
         log_command_execution("S3 Put object tagging", response)
 
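`put_object_tagging` now accepts `version_id` (defaulting to an empty string), so a specific version can be tagged in a versioned bucket. A short sketch with the same assumed client setup:

```python
from frostfs_testlib.s3.boto3_client import Boto3ClientWrapper  # assumed module path

s3 = Boto3ClientWrapper("<ACCESS_KEY>", "<SECRET_KEY>", "http://127.0.0.1:8084", "default", "us-east-1")

tags = [("env", "test"), ("team", "qa")]  # the wrapper expects (key, value) pairs
s3.put_object_tagging("my-bucket", "report.csv", tags, version_id="<VERSION_ID>")
print(s3.get_object_tagging("my-bucket", "report.csv", version_id="<VERSION_ID>"))
```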
@reporter.step_deco("Get object tagging")
|
@reporter.step("Get object tagging")
|
||||||
@report_error
|
@report_error
|
||||||
def get_object_tagging(self, bucket: str, key: str, version_id: Optional[str] = None) -> list:
|
def get_object_tagging(self, bucket: str, key: str, version_id: Optional[str] = None) -> list:
|
||||||
params = {
|
params = {self._to_s3_param(param): value for param, value in locals().items() if param not in ["self"] and value is not None}
|
||||||
self._to_s3_param(param): value
|
|
||||||
for param, value in locals().items()
|
|
||||||
if param not in ["self"] and value is not None
|
|
||||||
}
|
|
||||||
response = self.boto3_client.get_object_tagging(**params)
|
response = self.boto3_client.get_object_tagging(**params)
|
||||||
log_command_execution("S3 Get object tagging", response)
|
log_command_execution("S3 Get object tagging", response)
|
||||||
return response.get("TagSet")
|
return response.get("TagSet")
|
||||||
|
|
||||||
@reporter.step_deco("Delete object tagging")
|
@reporter.step("Delete object tagging")
|
||||||
@report_error
|
@report_error
|
||||||
def delete_object_tagging(self, bucket: str, key: str) -> None:
|
def delete_object_tagging(self, bucket: str, key: str) -> None:
|
||||||
response = self.boto3_client.delete_object_tagging(Bucket=bucket, Key=key)
|
response = self.boto3_client.delete_object_tagging(Bucket=bucket, Key=key)
|
||||||
log_command_execution("S3 Delete object tagging", response)
|
log_command_execution("S3 Delete object tagging", response)
|
||||||
|
|
||||||
@reporter.step_deco("Get object attributes")
|
@reporter.step("Get object attributes")
|
||||||
@report_error
|
@report_error
|
||||||
def get_object_attributes(
|
def get_object_attributes(
|
||||||
self,
|
self,
|
||||||
|
@ -650,7 +627,7 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
||||||
logger.warning("Method get_object_attributes is not supported by boto3 client")
|
logger.warning("Method get_object_attributes is not supported by boto3 client")
|
||||||
return {}
|
return {}
|
||||||
|
|
||||||
@reporter.step_deco("Sync directory S3")
|
@reporter.step("Sync directory S3")
|
||||||
@report_error
|
@report_error
|
||||||
def sync(
|
def sync(
|
||||||
self,
|
self,
|
||||||
|
@ -661,7 +638,7 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
||||||
) -> dict:
|
) -> dict:
|
||||||
raise NotImplementedError("Sync is not supported for boto3 client")
|
raise NotImplementedError("Sync is not supported for boto3 client")
|
||||||
|
|
||||||
@reporter.step_deco("CP directory S3")
|
@reporter.step("CP directory S3")
|
||||||
@report_error
|
@report_error
|
||||||
def cp(
|
def cp(
|
||||||
self,
|
self,
|
||||||
|
@ -673,3 +650,270 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
||||||
raise NotImplementedError("Cp is not supported for boto3 client")
|
raise NotImplementedError("Cp is not supported for boto3 client")
|
||||||
|
|
||||||
# END OBJECT METHODS #
|
# END OBJECT METHODS #
|
||||||
|
|
||||||
|
# IAM METHODS #
|
||||||
|
# Some methods don't have checks because boto3 is silent in some cases (delete, attach, etc.)
|
||||||
|
|
||||||
|
@reporter.step("Adds the specified user to the specified group")
|
||||||
|
def iam_add_user_to_group(self, user_name: str, group_name: str) -> dict:
|
||||||
|
response = self.boto3_iam_client.add_user_to_group(UserName=user_name, GroupName=group_name)
|
||||||
|
return response
|
||||||
|
|
||||||
|
@reporter.step("Attaches the specified managed policy to the specified IAM group")
|
||||||
|
def iam_attach_group_policy(self, group_name: str, policy_arn: str) -> dict:
|
||||||
|
response = self.boto3_iam_client.attach_group_policy(GroupName=group_name, PolicyArn=policy_arn)
|
||||||
|
sleep(S3_SYNC_WAIT_TIME * 10)
|
||||||
|
return response
|
||||||
|
|
||||||
|
@reporter.step("Attaches the specified managed policy to the specified user")
|
||||||
|
def iam_attach_user_policy(self, user_name: str, policy_arn: str) -> dict:
|
||||||
|
response = self.boto3_iam_client.attach_user_policy(UserName=user_name, PolicyArn=policy_arn)
|
||||||
|
sleep(S3_SYNC_WAIT_TIME * 10)
|
||||||
|
return response
|
||||||
|
|
||||||
|
@reporter.step("Creates a new AWS secret access key and access key ID for the specified user")
|
||||||
|
def iam_create_access_key(self, user_name: str) -> dict:
|
||||||
|
response = self.boto3_iam_client.create_access_key(UserName=user_name)
|
||||||
|
|
||||||
|
access_key_id = response["AccessKey"].get("AccessKeyId")
|
||||||
|
secret_access_key = response["AccessKey"].get("SecretAccessKey")
|
||||||
|
assert access_key_id, f"Expected AccessKeyId in response:\n{response}"
|
||||||
|
assert secret_access_key, f"Expected SecretAccessKey in response:\n{response}"
|
||||||
|
|
||||||
|
return access_key_id, secret_access_key
|
||||||
|
|
||||||
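Note that `iam_create_access_key` returns an `(access_key_id, secret_access_key)` tuple even though it is annotated `-> dict`. A sketch of the user lifecycle with the new helpers (client setup assumed as in the earlier sketches):

```python
from frostfs_testlib.s3.boto3_client import Boto3ClientWrapper  # assumed module path

s3 = Boto3ClientWrapper("<ACCESS_KEY>", "<SECRET_KEY>", "http://127.0.0.1:8084", "default", "us-east-1")

s3.iam_create_user("qa-user")
access_key_id, secret_access_key = s3.iam_create_access_key("qa-user")

# ... run tests as qa-user ...

s3.iam_delete_access_key(access_key_id, "qa-user")  # keys must go before the user
s3.iam_delete_user("qa-user")
```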
+    @reporter.step("Creates a new group")
+    def iam_create_group(self, group_name: str) -> dict:
+        response = self.boto3_iam_client.create_group(GroupName=group_name)
+        assert response.get("Group"), f"Expected Group in response:\n{response}"
+        assert response["Group"].get("GroupName") == group_name, f"GroupName should be equal to {group_name}"
+
+        return response
+
+    @reporter.step("Creates a new managed policy for your AWS account")
+    def iam_create_policy(self, policy_name: str, policy_document: dict) -> dict:
+        response = self.boto3_iam_client.create_policy(PolicyName=policy_name, PolicyDocument=json.dumps(policy_document))
+        assert response.get("Policy"), f"Expected Policy in response:\n{response}"
+        assert response["Policy"].get("PolicyName") == policy_name, f"PolicyName should be equal to {policy_name}"
+
+        return response
+
+    @reporter.step("Creates a new IAM user for your AWS account")
+    def iam_create_user(self, user_name: str) -> dict:
+        response = self.boto3_iam_client.create_user(UserName=user_name)
+        assert response.get("User"), f"Expected User in response:\n{response}"
+        assert response["User"].get("UserName") == user_name, f"UserName should be equal to {user_name}"
+
+        return response
+
+    @reporter.step("Deletes the access key pair associated with the specified IAM user")
+    def iam_delete_access_key(self, access_key_id: str, user_name: str) -> dict:
+        response = self.boto3_iam_client.delete_access_key(AccessKeyId=access_key_id, UserName=user_name)
+        return response
+
+    @reporter.step("Deletes the specified IAM group")
+    def iam_delete_group(self, group_name: str) -> dict:
+        response = self.boto3_iam_client.delete_group(GroupName=group_name)
+        return response
+
+    @reporter.step("Deletes the specified inline policy that is embedded in the specified IAM group")
+    def iam_delete_group_policy(self, group_name: str, policy_name: str) -> dict:
+        response = self.boto3_iam_client.delete_group_policy(GroupName=group_name, PolicyName=policy_name)
+        return response
+
+    @reporter.step("Deletes the specified managed policy")
+    def iam_delete_policy(self, policy_arn: str) -> dict:
+        response = self.boto3_iam_client.delete_policy(PolicyArn=policy_arn)
+        return response
+
+    @reporter.step("Deletes the specified IAM user")
+    def iam_delete_user(self, user_name: str) -> dict:
+        response = self.boto3_iam_client.delete_user(UserName=user_name)
+        return response
+
+    @reporter.step("Deletes the specified inline policy that is embedded in the specified IAM user")
+    def iam_delete_user_policy(self, user_name: str, policy_name: str) -> dict:
+        response = self.boto3_iam_client.delete_user_policy(UserName=user_name, PolicyName=policy_name)
+        return response
+
+    @reporter.step("Removes the specified managed policy from the specified IAM group")
+    def iam_detach_group_policy(self, group_name: str, policy_arn: str) -> dict:
+        response = self.boto3_iam_client.detach_group_policy(GroupName=group_name, PolicyArn=policy_arn)
+        sleep(S3_SYNC_WAIT_TIME * 10)
+        return response
+
+    @reporter.step("Removes the specified managed policy from the specified user")
+    def iam_detach_user_policy(self, user_name: str, policy_arn: str) -> dict:
+        response = self.boto3_iam_client.detach_user_policy(UserName=user_name, PolicyArn=policy_arn)
+        sleep(S3_SYNC_WAIT_TIME * 10)
+        return response
+
+    @reporter.step("Returns a list of IAM users that are in the specified IAM group")
+    def iam_get_group(self, group_name: str) -> dict:
+        response = self.boto3_iam_client.get_group(GroupName=group_name)
+        assert response.get("Group").get("GroupName") == group_name, f"GroupName should be equal to {group_name}"
+
+        return response
+
+    @reporter.step("Retrieves the specified inline policy document that is embedded in the specified IAM group")
+    def iam_get_group_policy(self, group_name: str, policy_name: str) -> dict:
+        response = self.boto3_iam_client.get_group_policy(GroupName=group_name, PolicyName=policy_name)
+
+        return response
+
+    @reporter.step("Retrieves information about the specified managed policy")
+    def iam_get_policy(self, policy_arn: str) -> dict:
+        response = self.boto3_iam_client.get_policy(PolicyArn=policy_arn)
+        assert response.get("Policy"), f"Expected Policy in response:\n{response}"
+        assert response["Policy"].get("Arn") == policy_arn, f"Arn should be equal to {policy_arn}"
+
+        return response
+
+    @reporter.step("Retrieves information about the specified version of the specified managed policy")
+    def iam_get_policy_version(self, policy_arn: str, version_id: str) -> dict:
+        response = self.boto3_iam_client.get_policy_version(PolicyArn=policy_arn, VersionId=version_id)
+        assert response.get("PolicyVersion"), f"Expected PolicyVersion in response:\n{response}"
+        assert response["PolicyVersion"].get("VersionId") == version_id, f"VersionId should be equal to {version_id}"
+
+        return response
+
+    @reporter.step("Retrieves information about the specified IAM user")
+    def iam_get_user(self, user_name: str) -> dict:
+        response = self.boto3_iam_client.get_user(UserName=user_name)
+        assert response.get("User"), f"Expected User in response:\n{response}"
+        assert response["User"].get("UserName") == user_name, f"UserName should be equal to {user_name}"
+
+        return response
+
+    @reporter.step("Retrieves the specified inline policy document that is embedded in the specified IAM user")
+    def iam_get_user_policy(self, user_name: str, policy_name: str) -> dict:
+        response = self.boto3_iam_client.get_user_policy(UserName=user_name, PolicyName=policy_name)
+        assert response.get("UserName"), f"Expected UserName in response:\n{response}"
+
+        return response
+
+    @reporter.step("Returns information about the access key IDs associated with the specified IAM user")
+    def iam_list_access_keys(self, user_name: str) -> dict:
+        response = self.boto3_iam_client.list_access_keys(UserName=user_name)
+
+        return response
+
+    @reporter.step("Lists all managed policies that are attached to the specified IAM group")
+    def iam_list_attached_group_policies(self, group_name: str) -> dict:
+        response = self.boto3_iam_client.list_attached_group_policies(GroupName=group_name)
+        assert response.get("AttachedPolicies"), f"Expected AttachedPolicies in response:\n{response}"
+
+        return response
+
+    @reporter.step("Lists all managed policies that are attached to the specified IAM user")
+    def iam_list_attached_user_policies(self, user_name: str) -> dict:
+        response = self.boto3_iam_client.list_attached_user_policies(UserName=user_name)
+        assert response.get("AttachedPolicies"), f"Expected AttachedPolicies in response:\n{response}"
+
+        return response
+
+    @reporter.step("Lists all IAM users, groups, and roles that the specified managed policy is attached to")
+    def iam_list_entities_for_policy(self, policy_arn: str) -> dict:
+        response = self.boto3_iam_client.list_entities_for_policy(PolicyArn=policy_arn)
+
+        assert response.get("PolicyGroups"), f"Expected PolicyGroups in response:\n{response}"
+        assert response.get("PolicyUsers"), f"Expected PolicyUsers in response:\n{response}"
+
+        return response
+
+    @reporter.step("Lists the names of the inline policies that are embedded in the specified IAM group")
+    def iam_list_group_policies(self, group_name: str) -> dict:
+        response = self.boto3_iam_client.list_group_policies(GroupName=group_name)
+        assert response.get("PolicyNames"), f"Expected PolicyNames in response:\n{response}"
+
+        return response
+
+    @reporter.step("Lists the IAM groups")
+    def iam_list_groups(self) -> dict:
+        response = self.boto3_iam_client.list_groups()
+        assert response.get("Groups"), f"Expected Groups in response:\n{response}"
+
+        return response
+
+    @reporter.step("Lists the IAM groups that the specified IAM user belongs to")
+    def iam_list_groups_for_user(self, user_name: str) -> dict:
+        response = self.boto3_iam_client.list_groups_for_user(UserName=user_name)
+        assert response.get("Groups"), f"Expected Groups in response:\n{response}"
+
+        return response
+
+    @reporter.step("Lists all the managed policies that are available in your AWS account")
+    def iam_list_policies(self) -> dict:
+        response = self.boto3_iam_client.list_policies()
+        assert response.get("Policies"), f"Expected Policies in response:\n{response}"
+
+        return response
+
+    @reporter.step("Lists information about the versions of the specified managed policy")
+    def iam_list_policy_versions(self, policy_arn: str) -> dict:
+        response = self.boto3_iam_client.list_policy_versions(PolicyArn=policy_arn)
+        assert response.get("Versions"), f"Expected Versions in response:\n{response}"
+
+        return response
+
+    @reporter.step("Lists the names of the inline policies embedded in the specified IAM user")
+    def iam_list_user_policies(self, user_name: str) -> dict:
+        response = self.boto3_iam_client.list_user_policies(UserName=user_name)
+        assert response.get("PolicyNames"), f"Expected PolicyNames in response:\n{response}"
+
+        return response
+
+    @reporter.step("Lists the IAM users")
+    def iam_list_users(self) -> dict:
+        response = self.boto3_iam_client.list_users()
+        assert response.get("Users"), f"Expected Users in response:\n{response}"
+
+        return response
+
+    @reporter.step("Adds or updates an inline policy document that is embedded in the specified IAM group")
+    def iam_put_group_policy(self, group_name: str, policy_name: str, policy_document: dict) -> dict:
+        response = self.boto3_iam_client.put_group_policy(
+            GroupName=group_name, PolicyName=policy_name, PolicyDocument=json.dumps(policy_document)
+        )
+        sleep(S3_SYNC_WAIT_TIME * 10)
+        return response
+
+    @reporter.step("Adds or updates an inline policy document that is embedded in the specified IAM user")
+    def iam_put_user_policy(self, user_name: str, policy_name: str, policy_document: dict) -> dict:
+        response = self.boto3_iam_client.put_user_policy(
+            UserName=user_name, PolicyName=policy_name, PolicyDocument=json.dumps(policy_document)
+        )
+        sleep(S3_SYNC_WAIT_TIME * 10)
+        return response
+
+    @reporter.step("Removes the specified user from the specified group")
+    def iam_remove_user_from_group(self, group_name: str, user_name: str) -> dict:
+        response = self.boto3_iam_client.remove_user_from_group(GroupName=group_name, UserName=user_name)
+        return response
+
+    @reporter.step("Updates the name and/or the path of the specified IAM group")
+    def iam_update_group(self, group_name: str, new_name: str, new_path: Optional[str] = None) -> dict:
+        response = self.boto3_iam_client.update_group(GroupName=group_name, NewGroupName=new_name, NewPath="/")
+
+        return response
+
+    @reporter.step("Updates the name and/or the path of the specified IAM user")
+    def iam_update_user(self, user_name: str, new_name: str, new_path: Optional[str] = None) -> dict:
+        response = self.boto3_iam_client.update_user(UserName=user_name, NewUserName=new_name, NewPath="/")
+        return response
+
+    @reporter.step("Adds one or more tags to an IAM user")
+    def iam_tag_user(self, user_name: str, tags: list) -> dict:
+        tags_json = [{"Key": tag_key, "Value": tag_value} for tag_key, tag_value in tags]
+        response = self.boto3_iam_client.tag_user(UserName=user_name, Tags=tags_json)
+        return response
+
+    @reporter.step("List tags of IAM user")
+    def iam_list_user_tags(self, user_name: str) -> dict:
+        response = self.boto3_iam_client.list_user_tags(UserName=user_name)
+        return response
+
+    @reporter.step("Removes the specified tags from the user")
+    def iam_untag_user(self, user_name: str, tag_keys: list) -> dict:
+        response = self.boto3_iam_client.untag_user(UserName=user_name, TagKeys=tag_keys)
+        return response
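The attach/detach/put-policy helpers above sleep for `S3_SYNC_WAIT_TIME * 10` after the call because policy changes propagate asynchronously and boto3 reports success immediately. A typical flow (client setup assumed as before; `Arn` is the standard IAM CreatePolicy response key):

```python
from frostfs_testlib.s3.boto3_client import Boto3ClientWrapper  # assumed module path

s3 = Boto3ClientWrapper("<ACCESS_KEY>", "<SECRET_KEY>", "http://127.0.0.1:8084", "default", "us-east-1")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "s3:*", "Resource": "*"}],
}
policy_arn = s3.iam_create_policy("qa-allow-all", policy_document)["Policy"]["Arn"]

s3.iam_attach_user_policy("qa-user", policy_arn)  # waits internally for propagation
print(s3.iam_list_attached_user_policies("qa-user")["AttachedPolicies"])
s3.iam_detach_user_policy("qa-user", policy_arn)  # also waits internally
```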
16 src/frostfs_testlib/s3/curl_bucket_resolver.py Normal file
@@ -0,0 +1,16 @@
+import re
+
+from frostfs_testlib.cli.generic_cli import GenericCli
+from frostfs_testlib.s3.interfaces import BucketContainerResolver
+from frostfs_testlib.storage.cluster import ClusterNode
+
+
+class CurlBucketContainerResolver(BucketContainerResolver):
+    def resolve(self, node: ClusterNode, bucket_name: str, **kwargs: dict) -> str:
+        curl = GenericCli("curl", node.host)
+        output = curl(f"-I http://127.0.0.1:8084/{bucket_name}")
+        pattern = r"X-Container-Id: (\S+)"
+        cid = re.findall(pattern, output.stdout)
+        if cid:
+            return cid[0]
+        return None
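Usage sketch for the new resolver: it issues a HEAD request against the local S3 gate on the node and extracts the `X-Container-Id` header. The `node` value here is a hypothetical, already-initialized `ClusterNode` from a test fixture:

```python
from frostfs_testlib.s3.curl_bucket_resolver import CurlBucketContainerResolver

resolver = CurlBucketContainerResolver()
cid = resolver.resolve(node, "my-bucket")  # node: a ClusterNode from your cluster fixture
print(cid)  # container ID string, or None if the header was absent
```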
src/frostfs_testlib/s3/interfaces.py
@@ -1,8 +1,10 @@
-from abc import abstractmethod
+from abc import ABC, abstractmethod
 from datetime import datetime
 from typing import Literal, Optional, Union
 
+from frostfs_testlib.storage.cluster import ClusterNode
 from frostfs_testlib.testing.readable import HumanReadableABC, HumanReadableEnum
+from frostfs_testlib.utils.file_utils import TestFile
 
 
 def _make_objs_dict(key_names):
@@ -31,9 +33,25 @@ ACL_COPY = [
 ]
 
 
+class BucketContainerResolver(ABC):
+    @abstractmethod
+    def resolve(self, node: ClusterNode, bucket_name: str, **kwargs: dict) -> str:
+        """
+        Resolve Container ID from bucket name
+
+        Args:
+            node: node from where we want to resolve
+            bucket_name: name of the bucket
+            **kwargs: any other required params
+
+        Returns: Container ID
+        """
+        raise NotImplementedError("Call from abstract class")
+
+
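Any alternative lookup strategy can plug into the same seam by subclassing the new ABC. A minimal hypothetical implementation that resolves from a pre-built mapping instead of querying the node:

```python
from frostfs_testlib.s3.interfaces import BucketContainerResolver
from frostfs_testlib.storage.cluster import ClusterNode


class StaticBucketContainerResolver(BucketContainerResolver):
    """Resolves container IDs from a precomputed bucket -> CID mapping."""

    def __init__(self, mapping: dict[str, str]) -> None:
        self.mapping = mapping

    def resolve(self, node: ClusterNode, bucket_name: str, **kwargs: dict) -> str:
        # The node argument is unused here; the ABC fixes the signature.
        return self.mapping.get(bucket_name)
```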
 class S3ClientWrapper(HumanReadableABC):
     @abstractmethod
-    def __init__(self, access_key_id: str, secret_access_key: str, s3gate_endpoint: str) -> None:
+    def __init__(self, access_key_id: str, secret_access_key: str, s3gate_endpoint: str, profile: str, region: str) -> None:
         pass
 
     @abstractmethod
@@ -135,6 +153,10 @@ class S3ClientWrapper(HumanReadableABC):
     def get_bucket_policy(self, bucket: str) -> str:
         """Returns the policy of a specified bucket."""
 
+    @abstractmethod
+    def delete_bucket_policy(self, bucket: str) -> str:
+        """Deletes the policy of a specified bucket."""
+
     @abstractmethod
     def put_bucket_policy(self, bucket: str, policy: dict) -> None:
         """Applies S3 bucket policy to an S3 bucket."""
@@ -268,7 +290,7 @@ class S3ClientWrapper(HumanReadableABC):
         version_id: Optional[str] = None,
         object_range: Optional[tuple[int, int]] = None,
         full_output: bool = False,
-    ) -> Union[dict, str]:
+    ) -> dict | TestFile:
         """Retrieves objects from S3."""
 
     @abstractmethod
@@ -296,15 +318,11 @@ class S3ClientWrapper(HumanReadableABC):
         abort a given multipart upload multiple times in order to completely free all storage consumed by all parts."""
 
     @abstractmethod
-    def upload_part(
-        self, bucket: str, key: str, upload_id: str, part_num: int, filepath: str
-    ) -> str:
+    def upload_part(self, bucket: str, key: str, upload_id: str, part_num: int, filepath: str) -> str:
         """Uploads a part in a multipart upload."""
 
     @abstractmethod
-    def upload_part_copy(
-        self, bucket: str, key: str, upload_id: str, part_num: int, copy_source: str
-    ) -> str:
+    def upload_part_copy(self, bucket: str, key: str, upload_id: str, part_num: int, copy_source: str) -> str:
         """Uploads a part by copying data from an existing object as data source."""
 
     @abstractmethod
@@ -382,3 +400,165 @@ class S3ClientWrapper(HumanReadableABC):
         """cp directory TODO: Add proper description"""
 
     # END OF OBJECT METHODS #
 
+    # IAM METHODS #
+
+    @abstractmethod
+    def iam_add_user_to_group(self, user_name: str, group_name: str) -> dict:
+        """Adds the specified user to the specified group"""
+
+    @abstractmethod
+    def iam_attach_group_policy(self, group: str, policy_arn: str) -> dict:
+        """Attaches the specified managed policy to the specified IAM group"""
+
+    @abstractmethod
+    def iam_attach_user_policy(self, user_name: str, policy_arn: str) -> dict:
+        """Attaches the specified managed policy to the specified user"""
+
+    @abstractmethod
+    def iam_create_access_key(self, user_name: str) -> dict:
+        """Creates a new AWS secret access key and access key ID for the specified user"""
+
+    @abstractmethod
+    def iam_create_group(self, group_name: str) -> dict:
+        """Creates a new group"""
+
+    @abstractmethod
+    def iam_create_policy(self, policy_name: str, policy_document: dict) -> dict:
+        """Creates a new managed policy for your AWS account"""
+
+    @abstractmethod
+    def iam_create_user(self, user_name: str) -> dict:
+        """Creates a new IAM user for your AWS account"""
+
+    @abstractmethod
+    def iam_delete_access_key(self, access_key_id: str, user_name: str) -> dict:
+        """Deletes the access key pair associated with the specified IAM user"""
+
+    @abstractmethod
+    def iam_delete_group(self, group_name: str) -> dict:
+        """Deletes the specified IAM group"""
+
+    @abstractmethod
+    def iam_delete_group_policy(self, group_name: str, policy_name: str) -> dict:
+        """Deletes the specified inline policy that is embedded in the specified IAM group"""
+
+    @abstractmethod
+    def iam_delete_policy(self, policy_arn: str) -> dict:
+        """Deletes the specified managed policy"""
+
+    @abstractmethod
+    def iam_delete_user(self, user_name: str) -> dict:
+        """Deletes the specified IAM user"""
+
+    @abstractmethod
+    def iam_delete_user_policy(self, user_name: str, policy_name: str) -> dict:
+        """Deletes the specified inline policy that is embedded in the specified IAM user"""
+
+    @abstractmethod
+    def iam_detach_group_policy(self, group_name: str, policy_arn: str) -> dict:
+        """Removes the specified managed policy from the specified IAM group"""
+
+    @abstractmethod
+    def iam_detach_user_policy(self, user_name: str, policy_arn: str) -> dict:
+        """Removes the specified managed policy from the specified user"""
+
+    @abstractmethod
+    def iam_get_group(self, group_name: str) -> dict:
+        """Returns a list of IAM users that are in the specified IAM group"""
+
+    @abstractmethod
+    def iam_get_group_policy(self, group_name: str, policy_name: str) -> dict:
+        """Retrieves the specified inline policy document that is embedded in the specified IAM group"""
+
+    @abstractmethod
+    def iam_get_policy(self, policy_arn: str) -> dict:
+        """Retrieves information about the specified managed policy"""
+
+    @abstractmethod
+    def iam_get_policy_version(self, policy_arn: str, version_id: str) -> dict:
+        """Retrieves information about the specified version of the specified managed policy"""
+
+    @abstractmethod
+    def iam_get_user(self, user_name: str) -> dict:
+        """Retrieves information about the specified IAM user"""
+
+    @abstractmethod
+    def iam_get_user_policy(self, user_name: str, policy_name: str) -> dict:
+        """Retrieves the specified inline policy document that is embedded in the specified IAM user"""
+
+    @abstractmethod
+    def iam_list_access_keys(self, user_name: str) -> dict:
+        """Returns information about the access key IDs associated with the specified IAM user"""
+
+    @abstractmethod
+    def iam_list_attached_group_policies(self, group_name: str) -> dict:
+        """Lists all managed policies that are attached to the specified IAM group"""
+
+    @abstractmethod
+    def iam_list_attached_user_policies(self, user_name: str) -> dict:
+        """Lists all managed policies that are attached to the specified IAM user"""
+
+    @abstractmethod
+    def iam_list_entities_for_policy(self, policy_arn: str) -> dict:
+        """Lists all IAM users, groups, and roles that the specified managed policy is attached to"""
+
+    @abstractmethod
+    def iam_list_group_policies(self, group_name: str) -> dict:
+        """Lists the names of the inline policies that are embedded in the specified IAM group"""
+
+    @abstractmethod
+    def iam_list_groups(self) -> dict:
+        """Lists the IAM groups"""
+
+    @abstractmethod
+    def iam_list_groups_for_user(self, user_name: str) -> dict:
+        """Lists the IAM groups that the specified IAM user belongs to"""
+
+    @abstractmethod
+    def iam_list_policies(self) -> dict:
+        """Lists all the managed policies that are available in your AWS account"""
+
+    @abstractmethod
+    def iam_list_policy_versions(self, policy_arn: str) -> dict:
+        """Lists information about the versions of the specified managed policy"""
+
+    @abstractmethod
+    def iam_list_user_policies(self, user_name: str) -> dict:
+        """Lists the names of the inline policies embedded in the specified IAM user"""
+
+    @abstractmethod
+    def iam_list_users(self) -> dict:
+        """Lists the IAM users"""
+
+    @abstractmethod
+    def iam_put_group_policy(self, group_name: str, policy_name: str, policy_document: dict) -> dict:
+        """Adds or updates an inline policy document that is embedded in the specified IAM group"""
+
+    @abstractmethod
+    def iam_put_user_policy(self, user_name: str, policy_name: str, policy_document: dict) -> dict:
+        """Adds or updates an inline policy document that is embedded in the specified IAM user"""
+
+    @abstractmethod
+    def iam_remove_user_from_group(self, group_name: str, user_name: str) -> dict:
+        """Removes the specified user from the specified group"""
+
+    @abstractmethod
+    def iam_update_group(self, group_name: str, new_name: Optional[str] = None, new_path: Optional[str] = None) -> dict:
+        """Updates the name and/or the path of the specified IAM group"""
|
||||||
|
|
||||||
|
@abstractmethod
|
||||||
|
def iam_update_user(self, user_name: str, new_name: Optional[str] = None, new_path: Optional[str] = None) -> dict:
|
||||||
|
"""Updates the name and/or the path of the specified IAM user"""
|
||||||
|
|
||||||
|
@abstractmethod
|
||||||
|
def iam_tag_user(self, user_name: str, tags: list) -> dict:
|
||||||
|
"""Adds one or more tags to an IAM user"""
|
||||||
|
|
||||||
|
@abstractmethod
|
||||||
|
def iam_list_user_tags(self, user_name: str) -> dict:
|
||||||
|
"""List tags of IAM user"""
|
||||||
|
|
||||||
|
@abstractmethod
|
||||||
|
def iam_untag_user(self, user_name: str, tag_keys: list) -> dict:
|
||||||
|
"""Removes the specified tags from the user"""
|
||||||
|
|
|
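The new IAM surface mirrors the AWS IAM API, one abstract method per call. A rough usage sketch, assuming a concrete `s3_client` implementation of this interface (e.g. boto3- or AWS CLI-backed); the response shape is assumed to follow the AWS `CreatePolicy` reply and is not part of this diff:

```python
# Hypothetical flow granting a user read-only access through a group,
# using only the abstract methods added above.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": ["s3:GetObject"], "Resource": "*"}],
}

s3_client.iam_create_user(user_name="tester")
s3_client.iam_create_group(group_name="readers")
policy = s3_client.iam_create_policy(policy_name="read-only", policy_document=policy_document)
# Assumes the AWS-style response: {"Policy": {"Arn": ...}}.
s3_client.iam_attach_group_policy(group="readers", policy_arn=policy["Policy"]["Arn"])
s3_client.iam_add_user_to_group(user_name="tester", group_name="readers")

# Each method returns the raw service response as a dict.
access_key = s3_client.iam_create_access_key(user_name="tester")
```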
@@ -6,11 +6,10 @@ from typing import IO, Optional
 
 import pexpect
 
-from frostfs_testlib.reporter import get_reporter
+from frostfs_testlib import reporter
 from frostfs_testlib.shell.interfaces import CommandInspector, CommandOptions, CommandResult, Shell
 
 logger = logging.getLogger("frostfs.testlib.shell")
-reporter = get_reporter()
 
 
 class LocalShell(Shell):
@@ -62,7 +61,8 @@ class LocalShell(Shell):
         if options.check and result.return_code != 0:
             raise RuntimeError(
                 f"Command: {command}\nreturn code: {result.return_code}\n"
-                f"Output: {result.stdout}"
+                f"Output: {result.stdout}\n"
+                f"Stderr: {result.stderr}\n"
             )
         return result
@@ -94,9 +94,7 @@ class LocalShell(Shell):
                 return_code=exc.returncode,
             )
             raise RuntimeError(
-                f"Command: {command}\nError:\n"
-                f"return code: {exc.returncode}\n"
-                f"output: {exc.output}"
+                f"Command: {command}\nError:\n" f"return code: {exc.returncode}\n" f"output: {exc.output}"
             ) from exc
         except OSError as exc:
            raise RuntimeError(f"Command: {command}\nOutput: {exc.strerror}") from exc
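A recurring theme in this changeset: every module drops the `get_reporter()` indirection in favour of importing the `reporter` module directly, and the `step_deco` decorator is renamed to `step`. A minimal sketch of the resulting call pattern, exactly as it appears throughout the diff:

```python
from frostfs_testlib import reporter


# Decorator form: the step title may interpolate function arguments,
# as in @reporter.step("Get object from {endpoint}") later in this diff.
@reporter.step("Provision test user {name}")
def provision_user(name: str) -> None:
    # Context-manager form for finer-grained sub-steps.
    with reporter.step(f"Create wallet for {name}"):
        ...
```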
@@ -6,30 +6,13 @@ from functools import lru_cache, wraps
 from time import sleep
 from typing import ClassVar, Optional, Tuple
 
-from paramiko import (
-    AutoAddPolicy,
-    Channel,
-    ECDSAKey,
-    Ed25519Key,
-    PKey,
-    RSAKey,
-    SSHClient,
-    SSHException,
-    ssh_exception,
-)
+from paramiko import AutoAddPolicy, Channel, ECDSAKey, Ed25519Key, PKey, RSAKey, SSHClient, SSHException, ssh_exception
 from paramiko.ssh_exception import AuthenticationException
 
-from frostfs_testlib.reporter import get_reporter
-from frostfs_testlib.shell.interfaces import (
-    CommandInspector,
-    CommandOptions,
-    CommandResult,
-    Shell,
-    SshCredentials,
-)
+from frostfs_testlib import reporter
+from frostfs_testlib.shell.interfaces import CommandInspector, CommandOptions, CommandResult, Shell, SshCredentials
 
 logger = logging.getLogger("frostfs.testlib.shell")
-reporter = get_reporter()
 
 
 class SshConnectionProvider:
@@ -97,8 +80,7 @@ class SshConnectionProvider:
                 )
             else:
                 logger.info(
-                    f"Trying to connect to host {host} as {creds.ssh_login} using password "
-                    f"(attempt {attempt})"
+                    f"Trying to connect to host {host} as {creds.ssh_login} using password " f"(attempt {attempt})"
                 )
                 connection.connect(
                     hostname=host,
@@ -141,9 +123,7 @@ class HostIsNotAvailable(Exception):
 
 def log_command(func):
     @wraps(func)
-    def wrapper(
-        shell: "SSHShell", command: str, options: CommandOptions, *args, **kwargs
-    ) -> CommandResult:
+    def wrapper(shell: "SSHShell", command: str, options: CommandOptions, *args, **kwargs) -> CommandResult:
         command_info = command.removeprefix("$ProgressPreference='SilentlyContinue'\n")
         with reporter.step(command_info):
             logger.info(f'Execute command "{command}" on "{shell.host}"')
@@ -205,6 +185,7 @@ class SSHShell(Shell):
         private_key_passphrase: Optional[str] = None,
         port: str = "22",
         command_inspectors: Optional[list[CommandInspector]] = None,
+        custom_environment: Optional[dict] = None
     ) -> None:
         super().__init__()
         self.connection_provider = SshConnectionProvider()
@@ -216,6 +197,8 @@ class SSHShell(Shell):
 
         self.command_inspectors = command_inspectors or []
 
+        self.environment = custom_environment
+
     @property
     def _connection(self):
         return self.connection_provider.provide(self.host, self.port)
@@ -238,15 +221,13 @@ class SSHShell(Shell):
 
         if options.check and result.return_code != 0:
             raise RuntimeError(
-                f"Command: {command}\nreturn code: {result.return_code}\nOutput: {result.stdout}"
+                f"Command: {command}\nreturn code: {result.return_code}\nOutput: {result.stdout}\nStderr: {result.stderr}\n"
             )
         return result
 
     @log_command
     def _exec_interactive(self, command: str, options: CommandOptions) -> CommandResult:
-        stdin, stdout, stderr = self._connection.exec_command(
-            command, timeout=options.timeout, get_pty=True
-        )
+        stdin, stdout, stderr = self._connection.exec_command(command, timeout=options.timeout, get_pty=True, environment=self.environment)
         for interactive_input in options.interactive_inputs:
             input = interactive_input.input
             if not input.endswith("\n"):
@@ -273,7 +254,7 @@ class SSHShell(Shell):
     @log_command
     def _exec_non_interactive(self, command: str, options: CommandOptions) -> CommandResult:
         try:
-            stdin, stdout, stderr = self._connection.exec_command(command, timeout=options.timeout)
+            stdin, stdout, stderr = self._connection.exec_command(command, timeout=options.timeout, environment=self.environment)
 
             if options.close_stdin:
                 stdin.close()
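With `custom_environment` threaded through to paramiko's `exec_command`, callers can inject per-session environment variables. A rough sketch under stated assumptions (the host and credential arguments are abbreviated placeholders, and the variable name is illustrative):

```python
from frostfs_testlib.shell.ssh_shell import SSHShell

# Hypothetical values; real tests would take these from hosting config.
shell = SSHShell(
    host="10.0.0.5",
    login="service",
    private_key_path="/keys/service_key",
    custom_environment={"FROSTFS_LOGGER_LEVEL": "debug"},
)

# Every command now runs with the custom environment applied remotely.
result = shell.exec("env")
print(result.stdout)
```

One caveat worth noting: paramiko's `environment` parameter only takes effect if the remote `sshd` is configured to accept those variables (`AcceptEnv`), so whether the variables actually land depends on server configuration.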
@@ -8,29 +8,23 @@ from typing import List, Optional, Union
 
 import base58
 
+from frostfs_testlib import reporter
 from frostfs_testlib.cli import FrostfsCli
-from frostfs_testlib.reporter import get_reporter
 from frostfs_testlib.resources.cli import FROSTFS_CLI_EXEC
-from frostfs_testlib.resources.common import ASSETS_DIR, DEFAULT_WALLET_CONFIG
+from frostfs_testlib.resources.common import ASSETS_DIR
 from frostfs_testlib.shell import Shell
-from frostfs_testlib.storage.dataclasses.acl import (
-    EACL_LIFETIME,
-    FROSTFS_CONTRACT_CACHE_TIMEOUT,
-    EACLPubKey,
-    EACLRole,
-    EACLRule,
-)
+from frostfs_testlib.storage.dataclasses.acl import EACL_LIFETIME, FROSTFS_CONTRACT_CACHE_TIMEOUT, EACLPubKey, EACLRole, EACLRule
+from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
 from frostfs_testlib.utils import wallet_utils
 
-reporter = get_reporter()
 logger = logging.getLogger("NeoLogger")
 
 
-@reporter.step_deco("Get extended ACL")
-def get_eacl(wallet_path: str, cid: str, shell: Shell, endpoint: str) -> Optional[str]:
-    cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, DEFAULT_WALLET_CONFIG)
+@reporter.step("Get extended ACL")
+def get_eacl(wallet: WalletInfo, cid: str, shell: Shell, endpoint: str) -> Optional[str]:
+    cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet.config_path)
     try:
-        result = cli.container.get_eacl(wallet=wallet_path, rpc_endpoint=endpoint, cid=cid)
+        result = cli.container.get_eacl(rpc_endpoint=endpoint, cid=cid)
     except RuntimeError as exc:
         logger.info("Extended ACL table is not set for this container")
         logger.info(f"Got exception while getting eacl: {exc}")
@@ -40,18 +34,17 @@ def get_eacl(wallet_path: str, cid: str, shell: Shell, endpoint: str) -> Optional[str]:
     return result.stdout
 
 
-@reporter.step_deco("Set extended ACL")
+@reporter.step("Set extended ACL")
 def set_eacl(
-    wallet_path: str,
+    wallet: WalletInfo,
     cid: str,
     eacl_table_path: str,
     shell: Shell,
     endpoint: str,
     session_token: Optional[str] = None,
 ) -> None:
-    cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, DEFAULT_WALLET_CONFIG)
+    cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet.config_path)
     cli.container.set_eacl(
-        wallet=wallet_path,
         rpc_endpoint=endpoint,
         cid=cid,
         table=eacl_table_path,
@@ -67,7 +60,7 @@ def _encode_cid_for_eacl(cid: str) -> str:
 
 def create_eacl(cid: str, rules_list: List[EACLRule], shell: Shell) -> str:
     table_file_path = os.path.join(os.getcwd(), ASSETS_DIR, f"eacl_table_{str(uuid.uuid4())}.json")
-    cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, DEFAULT_WALLET_CONFIG)
+    cli = FrostfsCli(shell, FROSTFS_CLI_EXEC)
     cli.acl.extended_create(cid=cid, out=table_file_path, rule=rules_list)
 
     with open(table_file_path, "r") as file:
@@ -78,7 +71,7 @@ def create_eacl(cid: str, rules_list: List[EACLRule], shell: Shell) -> str:
 
 
 def form_bearertoken_file(
-    wif: str,
+    wallet: WalletInfo,
     cid: str,
     eacl_rule_list: List[Union[EACLRule, EACLPubKey]],
     shell: Shell,
@@ -93,7 +86,7 @@ def form_bearertoken_file(
     enc_cid = _encode_cid_for_eacl(cid) if cid else None
     file_path = os.path.join(os.getcwd(), ASSETS_DIR, str(uuid.uuid4()))
 
-    eacl = get_eacl(wif, cid, shell, endpoint)
+    eacl = get_eacl(wallet, cid, shell, endpoint)
     json_eacl = dict()
     if eacl:
         eacl = eacl.replace("eACL: ", "").split("Signature")[0]
@@ -134,7 +127,7 @@ def form_bearertoken_file(
     if sign:
         sign_bearer(
             shell=shell,
-            wallet_path=wif,
+            wallet=wallet,
            eacl_rules_file_from=file_path,
            eacl_rules_file_to=file_path,
            json=True,
@@ -165,27 +158,19 @@ def eacl_rules(access: str, verbs: list, user: str) -> list[str]:
     return rules
 
 
-def sign_bearer(
-    shell: Shell, wallet_path: str, eacl_rules_file_from: str, eacl_rules_file_to: str, json: bool
-) -> None:
-    frostfscli = FrostfsCli(
-        shell=shell, frostfs_cli_exec_path=FROSTFS_CLI_EXEC, config_file=DEFAULT_WALLET_CONFIG
-    )
-    frostfscli.util.sign_bearer_token(
-        wallet=wallet_path, from_file=eacl_rules_file_from, to_file=eacl_rules_file_to, json=json
-    )
+def sign_bearer(shell: Shell, wallet: WalletInfo, eacl_rules_file_from: str, eacl_rules_file_to: str, json: bool) -> None:
+    frostfscli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet.config_path)
+    frostfscli.util.sign_bearer_token(eacl_rules_file_from, eacl_rules_file_to, json=json)
 
 
-@reporter.step_deco("Wait for eACL cache expired")
+@reporter.step("Wait for eACL cache expired")
 def wait_for_cache_expired():
     sleep(FROSTFS_CONTRACT_CACHE_TIMEOUT)
     return
 
 
-@reporter.step_deco("Return bearer token in base64 to caller")
-def bearer_token_base64_from_file(
-    bearer_path: str,
-) -> str:
+@reporter.step("Return bearer token in base64 to caller")
+def bearer_token_base64_from_file(bearer_path: str) -> str:
     with open(bearer_path, "rb") as file:
         signed = file.read()
     return base64.b64encode(signed).decode("utf-8")
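Taken together, the eACL helpers now key everything off a `WalletInfo` instead of a raw wallet path. A condensed sketch of the bearer-token flow built from the functions above; `user_wallet`, `cid`, `shell`, `rpc_endpoint` and `rule_list` (a prepared list of `EACLRule`/`EACLPubKey` objects) are assumed fixtures, not part of the diff:

```python
# Hypothetical end-to-end flow: wrap prepared eACL rules in a bearer token,
# sign it with the user's wallet, and encode it for an Authorization header.
token_path = form_bearertoken_file(
    wallet=user_wallet,  # WalletInfo: its config_path now drives FrostfsCli
    cid=cid,
    eacl_rule_list=rule_list,
    shell=shell,
    endpoint=rpc_endpoint,
    sign=True,  # delegates to sign_bearer(), which signs with the same wallet
)
token_b64 = bearer_token_base64_from_file(token_path)
```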
@@ -5,10 +5,11 @@ from dataclasses import dataclass
 from time import sleep
 from typing import Optional, Union
 
+from frostfs_testlib import reporter
 from frostfs_testlib.cli import FrostfsCli
-from frostfs_testlib.reporter import get_reporter
+from frostfs_testlib.plugins import load_plugin
 from frostfs_testlib.resources.cli import CLI_DEFAULT_TIMEOUT, FROSTFS_CLI_EXEC
-from frostfs_testlib.resources.common import DEFAULT_WALLET_CONFIG
+from frostfs_testlib.s3.interfaces import BucketContainerResolver
 from frostfs_testlib.shell import Shell
 from frostfs_testlib.steps.cli.object import put_object, put_object_to_random_node
 from frostfs_testlib.storage.cluster import Cluster, ClusterNode
@@ -17,14 +18,13 @@ from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
 from frostfs_testlib.utils import json_utils
 from frostfs_testlib.utils.file_utils import generate_file, get_file_hash
 
-reporter = get_reporter()
 logger = logging.getLogger("NeoLogger")
 
 
 @dataclass
 class StorageContainerInfo:
     id: str
-    wallet_file: WalletInfo
+    wallet: WalletInfo
 
 
 class StorageContainer:
@@ -41,13 +41,10 @@ class StorageContainer:
     def get_id(self) -> str:
         return self.storage_container_info.id
 
-    def get_wallet_path(self) -> str:
-        return self.storage_container_info.wallet_file.path
+    def get_wallet(self) -> str:
+        return self.storage_container_info.wallet
 
-    def get_wallet_config_path(self) -> str:
-        return self.storage_container_info.wallet_file.config_path
-
-    @reporter.step_deco("Generate new object and put in container")
+    @reporter.step("Generate new object and put in container")
     def generate_object(
         self,
         size: int,
@@ -60,37 +57,34 @@ class StorageContainer:
         file_hash = get_file_hash(file_path)
 
         container_id = self.get_id()
-        wallet_path = self.get_wallet_path()
-        wallet_config = self.get_wallet_config_path()
+        wallet = self.get_wallet()
         with reporter.step(f"Put object with size {size} to container {container_id}"):
             if endpoint:
                 object_id = put_object(
-                    wallet=wallet_path,
+                    wallet=wallet,
                     path=file_path,
                     cid=container_id,
                     expire_at=expire_at,
                     shell=self.shell,
                     endpoint=endpoint,
                     bearer=bearer_token,
-                    wallet_config=wallet_config,
                 )
             else:
                 object_id = put_object_to_random_node(
-                    wallet=wallet_path,
+                    wallet=wallet,
                     path=file_path,
                     cid=container_id,
                     expire_at=expire_at,
                     shell=self.shell,
                     cluster=self.cluster,
                     bearer=bearer_token,
-                    wallet_config=wallet_config,
                 )
 
             storage_object = StorageObjectInfo(
                 container_id,
                 object_id,
                 size=size,
-                wallet_file_path=wallet_path,
+                wallet=wallet,
                 file_path=file_path,
                 file_hash=file_hash,
             )
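With the wallet-config plumbing gone, `StorageContainer.generate_object` needs only the `WalletInfo` stored in `StorageContainerInfo`. A rough usage sketch; the constructor argument order and the fixture names (`cid`, `user_wallet`, `shell`, `cluster`, `current_epoch`) are assumptions for illustration:

```python
# Hypothetical usage: the container carries its owner's WalletInfo, so no
# separate wallet_config path is threaded through anymore.
container = StorageContainer(
    StorageContainerInfo(id=cid, wallet=user_wallet),
    shell,
    cluster,
)
storage_object = container.generate_object(size=1024, expire_at=current_epoch + 2)
print(storage_object.size, storage_object.file_hash)
```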
@@ -101,18 +95,18 @@ class StorageContainer:
 DEFAULT_PLACEMENT_RULE = "REP 2 IN X CBF 1 SELECT 4 FROM * AS X"
 SINGLE_PLACEMENT_RULE = "REP 1 IN X CBF 1 SELECT 4 FROM * AS X"
 REP_2_FOR_3_NODES_PLACEMENT_RULE = "REP 2 IN X CBF 1 SELECT 3 FROM * AS X"
+DEFAULT_EC_PLACEMENT_RULE = "EC 3.1"
 
 
-@reporter.step_deco("Create Container")
+@reporter.step("Create Container")
 def create_container(
-    wallet: str,
+    wallet: WalletInfo,
     shell: Shell,
     endpoint: str,
     rule: str = DEFAULT_PLACEMENT_RULE,
     basic_acl: str = "",
     attributes: Optional[dict] = None,
     session_token: str = "",
-    session_wallet: str = "",
     name: Optional[str] = None,
     options: Optional[dict] = None,
     await_mode: bool = True,
@@ -123,7 +117,7 @@ def create_container(
     A wrapper for `frostfs-cli container create` call.
 
     Args:
-        wallet (str): a wallet on whose behalf a container is created
+        wallet (WalletInfo): a wallet on whose behalf a container is created
         rule (optional, str): placement rule for container
         basic_acl (optional, str): an ACL for container, will be
             appended to `--basic-acl` key
@@ -145,10 +139,9 @@ def create_container(
         (str): CID of the created container
     """
 
-    cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, DEFAULT_WALLET_CONFIG)
+    cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet.config_path)
     result = cli.container.create(
         rpc_endpoint=endpoint,
-        wallet=session_wallet if session_wallet else wallet,
         policy=rule,
         basic_acl=basic_acl,
         attributes=attributes,
@@ -169,23 +162,17 @@ def create_container(
     return cid
 
 
-def wait_for_container_creation(
-    wallet: str, cid: str, shell: Shell, endpoint: str, attempts: int = 15, sleep_interval: int = 1
-):
+def wait_for_container_creation(wallet: WalletInfo, cid: str, shell: Shell, endpoint: str, attempts: int = 15, sleep_interval: int = 1):
     for _ in range(attempts):
         containers = list_containers(wallet, shell, endpoint)
         if cid in containers:
             return
         logger.info(f"There is no {cid} in {containers} yet; sleep {sleep_interval} and continue")
         sleep(sleep_interval)
-    raise RuntimeError(
-        f"After {attempts * sleep_interval} seconds container {cid} hasn't been persisted; exiting"
-    )
+    raise RuntimeError(f"After {attempts * sleep_interval} seconds container {cid} hasn't been persisted; exiting")
 
 
-def wait_for_container_deletion(
-    wallet: str, cid: str, shell: Shell, endpoint: str, attempts: int = 30, sleep_interval: int = 1
-):
+def wait_for_container_deletion(wallet: WalletInfo, cid: str, shell: Shell, endpoint: str, attempts: int = 30, sleep_interval: int = 1):
     for _ in range(attempts):
         try:
             get_container(wallet, cid, shell=shell, endpoint=endpoint)
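The new `DEFAULT_EC_PLACEMENT_RULE = "EC 3.1"` adds an erasure-coding preset alongside the replication rules; in FrostFS policy syntax, `EC 3.1` requests 3 data chunks plus 1 parity chunk. A hedged sketch of the reworked `create_container` call (fixture names are placeholders):

```python
# Hypothetical call: create an erasure-coded container on behalf of a user.
# `user_wallet` is a WalletInfo; `shell` and `rpc_endpoint` come from fixtures.
cid = create_container(
    wallet=user_wallet,
    shell=shell,
    endpoint=rpc_endpoint,
    rule=DEFAULT_EC_PLACEMENT_RULE,  # "EC 3.1": 3 data + 1 parity chunk
    name="ec-test-container",
)
wait_for_container_creation(user_wallet, cid, shell, rpc_endpoint)
```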
@@ -198,30 +185,28 @@ def wait_for_container_deletion(
     raise AssertionError(f"Expected container deleted during {attempts * sleep_interval} sec.")
 
 
-@reporter.step_deco("List Containers")
-def list_containers(
-    wallet: str, shell: Shell, endpoint: str, timeout: Optional[str] = CLI_DEFAULT_TIMEOUT
-) -> list[str]:
+@reporter.step("List Containers")
+def list_containers(wallet: WalletInfo, shell: Shell, endpoint: str, timeout: Optional[str] = CLI_DEFAULT_TIMEOUT) -> list[str]:
     """
     A wrapper for `frostfs-cli container list` call. It returns all the
     available containers for the given wallet.
     Args:
-        wallet (str): a wallet on whose behalf we list the containers
+        wallet (WalletInfo): a wallet on whose behalf we list the containers
         shell: executor for cli command
         endpoint: FrostFS endpoint to send request to, appends to `--rpc-endpoint` key
         timeout: Timeout for the operation.
     Returns:
         (list): list of containers
     """
-    cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, DEFAULT_WALLET_CONFIG)
-    result = cli.container.list(rpc_endpoint=endpoint, wallet=wallet, timeout=timeout)
+    cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet.config_path)
+    result = cli.container.list(rpc_endpoint=endpoint, timeout=timeout)
     logger.info(f"Containers: \n{result}")
     return result.stdout.split()
 
 
-@reporter.step_deco("List Objects in container")
+@reporter.step("List Objects in container")
 def list_objects(
-    wallet: str,
+    wallet: WalletInfo,
     shell: Shell,
     container_id: str,
     endpoint: str,
@@ -231,7 +216,7 @@ def list_objects(
     A wrapper for `frostfs-cli container list-objects` call. It returns all the
     available objects in container.
     Args:
-        wallet (str): a wallet on whose behalf we list the containers objects
+        wallet (WalletInfo): a wallet on whose behalf we list the containers objects
         shell: executor for cli command
         container_id: cid of container
         endpoint: FrostFS endpoint to send request to, appends to `--rpc-endpoint` key
@@ -239,17 +224,15 @@ def list_objects(
     Returns:
         (list): list of containers
     """
-    cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, DEFAULT_WALLET_CONFIG)
-    result = cli.container.list_objects(
-        rpc_endpoint=endpoint, wallet=wallet, cid=container_id, timeout=timeout
-    )
+    cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet.config_path)
+    result = cli.container.list_objects(rpc_endpoint=endpoint, cid=container_id, timeout=timeout)
     logger.info(f"Container objects: \n{result}")
     return result.stdout.split()
 
 
-@reporter.step_deco("Get Container")
+@reporter.step("Get Container")
 def get_container(
-    wallet: str,
+    wallet: WalletInfo,
     cid: str,
     shell: Shell,
     endpoint: str,
@@ -260,7 +243,7 @@ def get_container(
     A wrapper for `frostfs-cli container get` call. It extracts container's
     attributes and rearranges them into a more compact view.
     Args:
-        wallet (str): path to a wallet on whose behalf we get the container
+        wallet (WalletInfo): path to a wallet on whose behalf we get the container
         cid (str): ID of the container to get
         shell: executor for cli command
         endpoint: FrostFS endpoint to send request to, appends to `--rpc-endpoint` key
@@ -270,10 +253,8 @@ def get_container(
         (dict, str): dict of container attributes
     """
 
-    cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, DEFAULT_WALLET_CONFIG)
-    result = cli.container.get(
-        rpc_endpoint=endpoint, wallet=wallet, cid=cid, json_mode=json_mode, timeout=timeout
-    )
+    cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet.config_path)
+    result = cli.container.get(rpc_endpoint=endpoint, cid=cid, json_mode=json_mode, timeout=timeout)
 
     if not json_mode:
         return result.stdout
@@ -287,40 +268,37 @@ def get_container(
     return container_info
 
 
-@reporter.step_deco("Delete Container")
+@reporter.step("Delete Container")
 # TODO: make the error message about a non-found container more user-friendly
 def delete_container(
-    wallet: str,
+    wallet: WalletInfo,
     cid: str,
     shell: Shell,
     endpoint: str,
     force: bool = False,
     session_token: Optional[str] = None,
     await_mode: bool = False,
-    timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
 ) -> None:
     """
     A wrapper for `frostfs-cli container delete` call.
     Args:
-        wallet (str): path to a wallet on whose behalf we delete the container
+        await_mode: Block execution until container is removed.
+        wallet (WalletInfo): path to a wallet on whose behalf we delete the container
         cid (str): ID of the container to delete
         shell: executor for cli command
         endpoint: FrostFS endpoint to send request to, appends to `--rpc-endpoint` key
         force (bool): do not check whether container contains locks and remove immediately
         session_token: a path to session token file
-        timeout: Timeout for the operation.
     This function doesn't return anything.
     """
 
-    cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, DEFAULT_WALLET_CONFIG)
+    cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet.config_path)
     cli.container.delete(
-        wallet=wallet,
         cid=cid,
         rpc_endpoint=endpoint,
         force=force,
         session=session_token,
         await_mode=await_mode,
-        timeout=timeout,
     )
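Since every wrapper now derives its CLI config from `wallet.config_path`, the explicit `wallet=` flag disappears from the underlying `frostfs-cli` invocations. A short sketch of the updated call sites, with fixture names (`user_wallet`, `shell`, `rpc_endpoint`) assumed:

```python
# Hypothetical sweep: the same WalletInfo drives listing, inspection, deletion.
cids = list_containers(user_wallet, shell, rpc_endpoint)
for cid in cids:
    info = get_container(user_wallet, cid, shell, rpc_endpoint, json_mode=True)
    if info.get("attributes", {}).get("Name") == "stale":
        delete_container(user_wallet, cid, shell, rpc_endpoint, await_mode=True)
        wait_for_container_deletion(user_wallet, cid, shell, rpc_endpoint)
```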
@@ -350,29 +328,24 @@ def _parse_cid(output: str) -> str:
     return splitted[1]
 
 
-@reporter.step_deco("Search container by name")
-def search_container_by_name(wallet: str, name: str, shell: Shell, endpoint: str):
-    list_cids = list_containers(wallet, shell, endpoint)
-    for cid in list_cids:
-        cont_info = get_container(wallet, cid, shell, endpoint, True)
-        if cont_info.get("attributes", {}).get("Name", None) == name:
-            return cid
-    return None
+@reporter.step("Search container by name")
+def search_container_by_name(name: str, node: ClusterNode):
+    resolver_cls = load_plugin("frostfs.testlib.bucket_cid_resolver", node.host.config.product)
+    resolver: BucketContainerResolver = resolver_cls()
+    return resolver.resolve(node, name)
 
 
-@reporter.step_deco("Search for nodes with a container")
+@reporter.step("Search for nodes with a container")
 def search_nodes_with_container(
-    wallet: str,
+    wallet: WalletInfo,
     cid: str,
     shell: Shell,
     endpoint: str,
     cluster: Cluster,
     timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
 ) -> list[ClusterNode]:
-    cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, DEFAULT_WALLET_CONFIG)
-    result = cli.container.search_node(
-        rpc_endpoint=endpoint, wallet=wallet, cid=cid, timeout=timeout
-    )
+    cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet.config_path)
+    result = cli.container.search_node(rpc_endpoint=endpoint, cid=cid, timeout=timeout)
 
     pattern = r"[0-9]+(?:\.[0-9]+){3}"
     nodes_ip = list(set(re.findall(pattern, result.stdout)))
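`search_container_by_name` stops scanning every container and instead resolves the name through a product-specific plugin registered under the `frostfs.testlib.bucket_cid_resolver` entry point. A sketch of what such a resolver might look like; the class name and its internals are illustrative, the only requirement visible in the diff is the `BucketContainerResolver` interface and a `resolve(node, name)` method:

```python
# Hypothetical resolver plugin, loadable via
# load_plugin("frostfs.testlib.bucket_cid_resolver", <product>).
class ExampleBucketContainerResolver(BucketContainerResolver):
    def resolve(self, node: ClusterNode, bucket_name: str) -> str:
        # A real implementation would query the node and map the
        # bucket name to its container ID.
        ...

# Call site after this change: no wallet, shell, or endpoint needed.
cid = search_container_by_name("my-bucket", node)
```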
@@ -5,23 +5,25 @@ import re
 import uuid
 from typing import Any, Optional
 
+from frostfs_testlib import reporter
 from frostfs_testlib.cli import FrostfsCli
 from frostfs_testlib.cli.neogo import NeoGo
-from frostfs_testlib.reporter import get_reporter
 from frostfs_testlib.resources.cli import CLI_DEFAULT_TIMEOUT, FROSTFS_CLI_EXEC, NEOGO_EXECUTABLE
-from frostfs_testlib.resources.common import ASSETS_DIR, DEFAULT_WALLET_CONFIG
+from frostfs_testlib.resources.common import ASSETS_DIR
 from frostfs_testlib.shell import Shell
 from frostfs_testlib.storage.cluster import Cluster, ClusterNode
+from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
+from frostfs_testlib.testing import wait_for_success
 from frostfs_testlib.utils import json_utils
 from frostfs_testlib.utils.cli_utils import parse_cmd_table, parse_netmap_output
+from frostfs_testlib.utils.file_utils import TestFile
 
 logger = logging.getLogger("NeoLogger")
-reporter = get_reporter()
 
 
-@reporter.step_deco("Get object from random node")
+@reporter.step("Get object from random node")
 def get_object_from_random_node(
-    wallet: str,
+    wallet: WalletInfo,
     cid: str,
     oid: str,
     shell: Shell,
@@ -29,7 +31,6 @@ def get_object_from_random_node(
     bearer: Optional[str] = None,
     write_object: Optional[str] = None,
     xhdr: Optional[dict] = None,
-    wallet_config: Optional[str] = None,
     no_progress: bool = True,
     session: Optional[str] = None,
     timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
@@ -45,7 +46,6 @@ def get_object_from_random_node(
         cluster: cluster object
         bearer (optional, str): path to Bearer Token file, appends to `--bearer` key
         write_object (optional, str): path to downloaded file, appends to `--file` key
-        wallet_config(optional, str): path to the wallet config
         no_progress(optional, bool): do not show progress bar
         xhdr (optional, dict): Request X-Headers in form of Key=Value
         session (optional, dict): path to a JSON-encoded container session token
@@ -63,16 +63,15 @@ def get_object_from_random_node(
         bearer,
         write_object,
         xhdr,
-        wallet_config,
         no_progress,
         session,
         timeout,
     )
 
 
-@reporter.step_deco("Get object from {endpoint}")
+@reporter.step("Get object from {endpoint}")
 def get_object(
-    wallet: str,
+    wallet: WalletInfo,
     cid: str,
     oid: str,
     shell: Shell,
@@ -80,23 +79,21 @@ def get_object(
     bearer: Optional[str] = None,
     write_object: Optional[str] = None,
     xhdr: Optional[dict] = None,
-    wallet_config: Optional[str] = None,
     no_progress: bool = True,
     session: Optional[str] = None,
     timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
-) -> str:
+) -> TestFile:
     """
     GET from FrostFS.
 
     Args:
-        wallet (str): wallet on whose behalf GET is done
+        wallet (WalletInfo): wallet on whose behalf GET is done
         cid (str): ID of Container where we get the Object from
         oid (str): Object ID
         shell: executor for cli command
         bearer: path to Bearer Token file, appends to `--bearer` key
         write_object: path to downloaded file, appends to `--file` key
         endpoint: FrostFS endpoint to send request to, appends to `--rpc-endpoint` key
-        wallet_config(optional, str): path to the wallet config
         no_progress(optional, bool): do not show progress bar
         xhdr (optional, dict): Request X-Headers in form of Key=Value
         session (optional, dict): path to a JSON-encoded container session token
@@ -107,15 +104,14 @@ def get_object(
 
     if not write_object:
         write_object = str(uuid.uuid4())
-    file_path = os.path.join(ASSETS_DIR, write_object)
+    test_file = TestFile(os.path.join(ASSETS_DIR, write_object))
 
-    cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet_config or DEFAULT_WALLET_CONFIG)
+    cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet.config_path)
     cli.object.get(
         rpc_endpoint=endpoint,
-        wallet=wallet,
         cid=cid,
         oid=oid,
-        file=file_path,
+        file=test_file,
         bearer=bearer,
         no_progress=no_progress,
         xhdr=xhdr,
@@ -123,19 +119,18 @@ def get_object(
         timeout=timeout,
     )
 
-    return file_path
+    return test_file
 
 
-@reporter.step_deco("Get Range Hash from {endpoint}")
+@reporter.step("Get Range Hash from {endpoint}")
 def get_range_hash(
-    wallet: str,
+    wallet: WalletInfo,
     cid: str,
     oid: str,
     range_cut: str,
     shell: Shell,
     endpoint: str,
     bearer: Optional[str] = None,
-    wallet_config: Optional[str] = None,
     xhdr: Optional[dict] = None,
     session: Optional[str] = None,
     timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
@@ -152,17 +147,15 @@ def get_range_hash(
         range_cut: Range to take hash from in the form offset1:length1,...,
             value to pass to the `--range` parameter
         endpoint: FrostFS endpoint to send request to, appends to `--rpc-endpoint` key
-        wallet_config: path to the wallet config
         xhdr: Request X-Headers in form of Key=Values
         session: Filepath to a JSON- or binary-encoded token of the object RANGEHASH session.
         timeout: Timeout for the operation.
     Returns:
         None
     """
-    cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet_config or DEFAULT_WALLET_CONFIG)
+    cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet.config_path)
     result = cli.object.hash(
         rpc_endpoint=endpoint,
-        wallet=wallet,
         cid=cid,
         oid=oid,
         range=range_cut,
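`get_object` now returns a `TestFile` wrapping the download path instead of a bare string; since the diff passes it straight to the CLI's `file=` argument and later to `open()`, it is evidently path-like. A hedged read-back sketch (fixture names are placeholders; `get_file_hash` is the helper from `frostfs_testlib.utils.file_utils`):

```python
# Hypothetical round-trip check; assumes TestFile behaves as os.PathLike,
# which is how this diff itself uses it.
test_file = get_object(
    wallet=user_wallet,
    cid=cid,
    oid=oid,
    shell=shell,
    endpoint=rpc_endpoint,
)
assert get_file_hash(test_file) == expected_hash
```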
@@ -176,9 +169,9 @@ def get_range_hash(
     return result.stdout.split(":")[1].strip()
 
 
-@reporter.step_deco("Put object to random node")
+@reporter.step("Put object to random node")
 def put_object_to_random_node(
-    wallet: str,
+    wallet: WalletInfo,
     path: str,
     cid: str,
     shell: Shell,
@@ -187,7 +180,6 @@ def put_object_to_random_node(
     copies_number: Optional[int] = None,
     attributes: Optional[dict] = None,
     xhdr: Optional[dict] = None,
-    wallet_config: Optional[str] = None,
     expire_at: Optional[int] = None,
     no_progress: bool = True,
     session: Optional[str] = None,
@@ -206,7 +198,6 @@ def put_object_to_random_node(
         copies_number: Number of copies of the object to store within the RPC call
         attributes: User attributes in form of Key1=Value1,Key2=Value2
         cluster: cluster under test
-        wallet_config: path to the wallet config
         no_progress: do not show progress bar
         expire_at: Last epoch in the life of the object
         xhdr: Request X-Headers in form of Key=Value
@@ -227,7 +218,6 @@ def put_object_to_random_node(
         copies_number,
         attributes,
         xhdr,
-        wallet_config,
         expire_at,
         no_progress,
         session,
@@ -235,9 +225,9 @@ def put_object_to_random_node(
     )
 
 
-@reporter.step_deco("Put object at {endpoint} in container {cid}")
+@reporter.step("Put object at {endpoint} in container {cid}")
 def put_object(
-    wallet: str,
+    wallet: WalletInfo,
     path: str,
     cid: str,
     shell: Shell,
@@ -246,7 +236,6 @@ def put_object(
     copies_number: Optional[int] = None,
     attributes: Optional[dict] = None,
     xhdr: Optional[dict] = None,
-    wallet_config: Optional[str] = None,
     expire_at: Optional[int] = None,
     no_progress: bool = True,
     session: Optional[str] = None,
@@ -264,7 +253,6 @@ def put_object(
         copies_number: Number of copies of the object to store within the RPC call
         attributes: User attributes in form of Key1=Value1,Key2=Value2
         endpoint: FrostFS endpoint to send request to, appends to `--rpc-endpoint` key
-        wallet_config: path to the wallet config
         no_progress: do not show progress bar
         expire_at: Last epoch in the life of the object
         xhdr: Request X-Headers in form of Key=Value
@@ -274,10 +262,9 @@ def put_object(
         (str): ID of uploaded Object
     """
 
-    cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet_config or DEFAULT_WALLET_CONFIG)
+    cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet.config_path)
     result = cli.object.put(
         rpc_endpoint=endpoint,
-        wallet=wallet,
         file=path,
         cid=cid,
         attributes=attributes,
@@ -296,15 +283,14 @@ def put_object(
     return oid.strip()
 
 
-@reporter.step_deco("Delete object {cid}/{oid} from {endpoint}")
+@reporter.step("Delete object {cid}/{oid} from {endpoint}")
 def delete_object(
-    wallet: str,
+    wallet: WalletInfo,
     cid: str,
     oid: str,
     shell: Shell,
     endpoint: str,
     bearer: str = "",
-    wallet_config: Optional[str] = None,
     xhdr: Optional[dict] = None,
     session: Optional[str] = None,
     timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
@@ -319,7 +305,6 @@ def delete_object(
         shell: executor for cli command
         bearer: path to Bearer Token file, appends to `--bearer` key
         endpoint: FrostFS endpoint to send request to, appends to `--rpc-endpoint` key
-        wallet_config: path to the wallet config
         xhdr: Request X-Headers in form of Key=Value
         session: path to a JSON-encoded container session token
         timeout: Timeout for the operation.
@@ -327,10 +312,9 @@ def delete_object(
         (str): Tombstone ID
     """
 
-    cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet_config or DEFAULT_WALLET_CONFIG)
+    cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet.config_path)
     result = cli.object.delete(
         rpc_endpoint=endpoint,
-        wallet=wallet,
         cid=cid,
         oid=oid,
         bearer=bearer,
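All object helpers now follow the same recipe: build a `FrostfsCli` from `wallet.config_path` and drop the per-call `wallet=` flag. A compact put/delete round trip under those assumptions (`user_wallet`, `cid`, `shell`, `cluster` and `rpc_endpoint` are assumed fixtures):

```python
# Hypothetical write-then-delete flow with the new WalletInfo-based API.
oid = put_object_to_random_node(
    wallet=user_wallet,
    path=generate_file(1024),  # helper from frostfs_testlib.utils.file_utils
    cid=cid,
    shell=shell,
    cluster=cluster,
)
tombstone = delete_object(user_wallet, cid, oid, shell, rpc_endpoint)
```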
@ -344,15 +328,14 @@ def delete_object(
|
||||||
return tombstone.strip()
|
return tombstone.strip()
|
||||||
|
|
||||||
|
|
||||||
@reporter.step_deco("Get Range")
|
@reporter.step("Get Range")
|
||||||
def get_range(
|
def get_range(
|
||||||
wallet: str,
|
wallet: WalletInfo,
|
||||||
cid: str,
|
cid: str,
|
||||||
oid: str,
|
oid: str,
|
||||||
range_cut: str,
|
range_cut: str,
|
||||||
shell: Shell,
|
shell: Shell,
|
||||||
endpoint: str,
|
endpoint: str,
|
||||||
wallet_config: Optional[str] = None,
|
|
||||||
bearer: str = "",
|
bearer: str = "",
|
||||||
xhdr: Optional[dict] = None,
|
xhdr: Optional[dict] = None,
|
||||||
session: Optional[str] = None,
|
session: Optional[str] = None,
|
||||||
|
@ -369,37 +352,35 @@ def get_range(
|
||||||
shell: executor for cli command
|
shell: executor for cli command
|
||||||
endpoint: FrostFS endpoint to send request to, appends to `--rpc-endpoint` key
|
endpoint: FrostFS endpoint to send request to, appends to `--rpc-endpoint` key
|
||||||
bearer: path to Bearer Token file, appends to `--bearer` key
|
bearer: path to Bearer Token file, appends to `--bearer` key
|
||||||
wallet_config: path to the wallet config
|
|
||||||
xhdr: Request X-Headers in form of Key=Value
|
xhdr: Request X-Headers in form of Key=Value
|
||||||
session: path to a JSON-encoded container session token
|
session: path to a JSON-encoded container session token
|
||||||
timeout: Timeout for the operation.
|
timeout: Timeout for the operation.
|
||||||
Returns:
|
Returns:
|
||||||
(str, bytes) - path to the file with range content and content of this file as bytes
|
(str, bytes) - path to the file with range content and content of this file as bytes
|
||||||
"""
|
"""
|
||||||
range_file_path = os.path.join(ASSETS_DIR, str(uuid.uuid4()))
|
test_file = TestFile(os.path.join(ASSETS_DIR, str(uuid.uuid4())))
|
||||||
|
|
||||||
cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet_config or DEFAULT_WALLET_CONFIG)
|
cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet.config_path)
|
||||||
cli.object.range(
|
cli.object.range(
|
||||||
rpc_endpoint=endpoint,
|
rpc_endpoint=endpoint,
|
||||||
wallet=wallet,
|
|
||||||
cid=cid,
|
cid=cid,
|
||||||
oid=oid,
|
oid=oid,
|
||||||
range=range_cut,
|
range=range_cut,
|
||||||
file=range_file_path,
|
file=test_file,
|
||||||
bearer=bearer,
|
bearer=bearer,
|
||||||
xhdr=xhdr,
|
xhdr=xhdr,
|
||||||
session=session,
|
session=session,
|
||||||
timeout=timeout,
|
timeout=timeout,
|
||||||
)
|
)
|
||||||
|
|
||||||
with open(range_file_path, "rb") as file:
|
with open(test_file, "rb") as file:
|
||||||
content = file.read()
|
content = file.read()
|
||||||
return range_file_path, content
|
return test_file, content
|
||||||
|
|
||||||
|
|
||||||
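Reviewer note: across these hunks the helpers drop the separate `wallet: str` + `wallet_config` pair and take a `WalletInfo` whose `config_path` is passed to FrostfsCli. A minimal sketch of the new call shape; `default_wallet`, `cid`, `oid`, `shell` and the endpoint below are hypothetical test fixtures, not part of this change:

    # Hedged sketch (assumed fixtures): the helper now reads wallet.config_path itself,
    # so no wallet= or wallet_config= arguments are passed through to the CLI.
    test_file, content = get_range(
        wallet=default_wallet,  # WalletInfo, no longer a path string
        cid=cid,
        oid=oid,
        range_cut="0:100",
        shell=shell,
        endpoint=cluster.default_rpc_endpoint,
    )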
@reporter.step_deco("Lock Object")
|
@reporter.step("Lock Object")
|
||||||
def lock_object(
|
def lock_object(
|
||||||
wallet: str,
|
wallet: WalletInfo,
|
||||||
cid: str,
|
cid: str,
|
||||||
oid: str,
|
oid: str,
|
||||||
shell: Shell,
|
shell: Shell,
|
||||||
|
@ -409,7 +390,6 @@ def lock_object(
|
||||||
address: Optional[str] = None,
|
address: Optional[str] = None,
|
||||||
bearer: Optional[str] = None,
|
bearer: Optional[str] = None,
|
||||||
session: Optional[str] = None,
|
session: Optional[str] = None,
|
||||||
wallet_config: Optional[str] = None,
|
|
||||||
ttl: Optional[int] = None,
|
ttl: Optional[int] = None,
|
||||||
xhdr: Optional[dict] = None,
|
xhdr: Optional[dict] = None,
|
||||||
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
|
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
|
||||||
|
@ -436,13 +416,12 @@ def lock_object(
|
||||||
Lock object ID
|
Lock object ID
|
||||||
"""
|
"""
|
||||||
|
|
||||||
cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet_config or DEFAULT_WALLET_CONFIG)
|
cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet.config_path)
|
||||||
result = cli.object.lock(
|
result = cli.object.lock(
|
||||||
rpc_endpoint=endpoint,
|
rpc_endpoint=endpoint,
|
||||||
lifetime=lifetime,
|
lifetime=lifetime,
|
||||||
expire_at=expire_at,
|
expire_at=expire_at,
|
||||||
address=address,
|
address=address,
|
||||||
wallet=wallet,
|
|
||||||
cid=cid,
|
cid=cid,
|
||||||
oid=oid,
|
oid=oid,
|
||||||
bearer=bearer,
|
bearer=bearer,
|
||||||
|
@ -458,16 +437,15 @@ def lock_object(
|
||||||
return oid.strip()
|
return oid.strip()
|
||||||
|
|
||||||
|
|
||||||
@reporter.step_deco("Search object")
|
@reporter.step("Search object")
|
||||||
def search_object(
|
def search_object(
|
||||||
wallet: str,
|
wallet: WalletInfo,
|
||||||
cid: str,
|
cid: str,
|
||||||
shell: Shell,
|
shell: Shell,
|
||||||
endpoint: str,
|
endpoint: str,
|
||||||
bearer: str = "",
|
bearer: str = "",
|
||||||
filters: Optional[dict] = None,
|
filters: Optional[dict] = None,
|
||||||
expected_objects_list: Optional[list] = None,
|
expected_objects_list: Optional[list] = None,
|
||||||
wallet_config: Optional[str] = None,
|
|
||||||
xhdr: Optional[dict] = None,
|
xhdr: Optional[dict] = None,
|
||||||
session: Optional[str] = None,
|
session: Optional[str] = None,
|
||||||
phy: bool = False,
|
phy: bool = False,
|
||||||
|
@ -485,7 +463,6 @@ def search_object(
|
||||||
endpoint: FrostFS endpoint to send request to, appends to `--rpc-endpoint` key
|
endpoint: FrostFS endpoint to send request to, appends to `--rpc-endpoint` key
|
||||||
filters: key=value pairs to filter Objects
|
filters: key=value pairs to filter Objects
|
||||||
expected_objects_list: a list of ObjectIDs to compare found Objects with
|
expected_objects_list: a list of ObjectIDs to compare found Objects with
|
||||||
wallet_config: path to the wallet config
|
|
||||||
xhdr: Request X-Headers in form of Key=Value
|
xhdr: Request X-Headers in form of Key=Value
|
||||||
session: path to a JSON-encoded container session token
|
session: path to a JSON-encoded container session token
|
||||||
phy: Search physically stored objects.
|
phy: Search physically stored objects.
|
||||||
|
@ -496,16 +473,13 @@ def search_object(
|
||||||
list of found ObjectIDs
|
list of found ObjectIDs
|
||||||
"""
|
"""
|
||||||
|
|
||||||
cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet_config or DEFAULT_WALLET_CONFIG)
|
cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet.config_path)
|
||||||
result = cli.object.search(
|
result = cli.object.search(
|
||||||
rpc_endpoint=endpoint,
|
rpc_endpoint=endpoint,
|
||||||
wallet=wallet,
|
|
||||||
cid=cid,
|
cid=cid,
|
||||||
bearer=bearer,
|
bearer=bearer,
|
||||||
xhdr=xhdr,
|
xhdr=xhdr,
|
||||||
filters=[f"{filter_key} EQ {filter_val}" for filter_key, filter_val in filters.items()]
|
filters=[f"{filter_key} EQ {filter_val}" for filter_key, filter_val in filters.items()] if filters else None,
|
||||||
if filters
|
|
||||||
else None,
|
|
||||||
session=session,
|
session=session,
|
||||||
phy=phy,
|
phy=phy,
|
||||||
root=root,
|
root=root,
|
||||||
|
@ -516,25 +490,18 @@ def search_object(
|
||||||
|
|
||||||
if expected_objects_list:
|
if expected_objects_list:
|
||||||
if sorted(found_objects) == sorted(expected_objects_list):
|
if sorted(found_objects) == sorted(expected_objects_list):
|
||||||
logger.info(
|
logger.info(f"Found objects list '{found_objects}' " f"is equal for expected list '{expected_objects_list}'")
|
||||||
f"Found objects list '{found_objects}' "
|
|
||||||
f"is equal for expected list '{expected_objects_list}'"
|
|
||||||
)
|
|
||||||
else:
|
else:
|
||||||
logger.warning(
|
logger.warning(f"Found object list {found_objects} " f"is not equal to expected list '{expected_objects_list}'")
|
||||||
f"Found object list {found_objects} "
|
|
||||||
f"is not equal to expected list '{expected_objects_list}'"
|
|
||||||
)
|
|
||||||
|
|
||||||
return found_objects
|
return found_objects
|
||||||
|
|
||||||
|
|
||||||
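Reviewer note: the collapsed `filters` expression above is behavior-preserving; it maps a `{key: value}` dict onto `Key EQ Value` CLI filter strings and leaves `None` alone. A self-contained illustration (values invented):

    # {"FileName": "cat.jpg"} -> ["FileName EQ cat.jpg"]; an empty/None dict stays None.
    filters = {"FileName": "cat.jpg", "Type": "image"}
    cli_filters = [f"{k} EQ {v}" for k, v in filters.items()] if filters else None
    assert cli_filters == ["FileName EQ cat.jpg", "Type EQ image"]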
@reporter.step_deco("Get netmap netinfo")
|
@reporter.step("Get netmap netinfo")
|
||||||
def get_netmap_netinfo(
|
def get_netmap_netinfo(
|
||||||
wallet: str,
|
wallet: WalletInfo,
|
||||||
shell: Shell,
|
shell: Shell,
|
||||||
endpoint: str,
|
endpoint: str,
|
||||||
wallet_config: Optional[str] = None,
|
|
||||||
address: Optional[str] = None,
|
address: Optional[str] = None,
|
||||||
ttl: Optional[int] = None,
|
ttl: Optional[int] = None,
|
||||||
xhdr: Optional[dict] = None,
|
xhdr: Optional[dict] = None,
|
||||||
|
@ -544,7 +511,7 @@ def get_netmap_netinfo(
|
||||||
Get netmap netinfo output from node
|
Get netmap netinfo output from node
|
||||||
|
|
||||||
Args:
|
Args:
|
||||||
wallet (str): wallet on whose behalf request is done
|
wallet (WalletInfo): wallet on whose behalf request is done
|
||||||
shell: executor for cli command
|
shell: executor for cli command
|
||||||
endpoint (optional, str): FrostFS endpoint to send request to, appends to `--rpc-endpoint` key
|
endpoint (optional, str): FrostFS endpoint to send request to, appends to `--rpc-endpoint` key
|
||||||
address: Address of wallet account
|
address: Address of wallet account
|
||||||
|
@ -557,9 +524,8 @@ def get_netmap_netinfo(
|
||||||
(dict): dict of parsed command output
|
(dict): dict of parsed command output
|
||||||
"""
|
"""
|
||||||
|
|
||||||
cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet_config or DEFAULT_WALLET_CONFIG)
|
cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet.config_path)
|
||||||
output = cli.netmap.netinfo(
|
output = cli.netmap.netinfo(
|
||||||
wallet=wallet,
|
|
||||||
rpc_endpoint=endpoint,
|
rpc_endpoint=endpoint,
|
||||||
address=address,
|
address=address,
|
||||||
ttl=ttl,
|
ttl=ttl,
|
||||||
|
@ -581,9 +547,9 @@ def get_netmap_netinfo(
|
||||||
return settings
|
return settings
|
||||||
|
|
||||||
|
|
||||||
@reporter.step_deco("Head object")
|
@reporter.step("Head object")
|
||||||
def head_object(
|
def head_object(
|
||||||
wallet: str,
|
wallet: WalletInfo,
|
||||||
cid: str,
|
cid: str,
|
||||||
oid: str,
|
oid: str,
|
||||||
shell: Shell,
|
shell: Shell,
|
||||||
|
@ -593,7 +559,6 @@ def head_object(
|
||||||
json_output: bool = True,
|
json_output: bool = True,
|
||||||
is_raw: bool = False,
|
is_raw: bool = False,
|
||||||
is_direct: bool = False,
|
is_direct: bool = False,
|
||||||
wallet_config: Optional[str] = None,
|
|
||||||
session: Optional[str] = None,
|
session: Optional[str] = None,
|
||||||
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
|
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
|
||||||
):
|
):
|
||||||
|
@ -601,7 +566,7 @@ def head_object(
|
||||||
HEAD an Object.
|
HEAD an Object.
|
||||||
|
|
||||||
Args:
|
Args:
|
||||||
wallet (str): wallet on whose behalf HEAD is done
|
wallet (WalletInfo): wallet on whose behalf HEAD is done
|
||||||
cid (str): ID of Container where we get the Object from
|
cid (str): ID of Container where we get the Object from
|
||||||
oid (str): ObjectID to HEAD
|
oid (str): ObjectID to HEAD
|
||||||
shell: executor for cli command
|
shell: executor for cli command
|
||||||
|
@ -613,7 +578,6 @@ def head_object(
|
||||||
turns into `--raw` key
|
turns into `--raw` key
|
||||||
is_direct(optional, bool): send request directly to the node or not; this flag
|
is_direct(optional, bool): send request directly to the node or not; this flag
|
||||||
turns into `--ttl 1` key
|
turns into `--ttl 1` key
|
||||||
wallet_config(optional, str): path to the wallet config
|
|
||||||
xhdr (optional, dict): Request X-Headers in form of Key=Value
|
xhdr (optional, dict): Request X-Headers in form of Key=Value
|
||||||
session (optional, dict): path to a JSON-encoded container session token
|
session (optional, dict): path to a JSON-encoded container session token
|
||||||
timeout: Timeout for the operation.
|
timeout: Timeout for the operation.
|
||||||
|
@ -624,10 +588,9 @@ def head_object(
|
||||||
(str): HEAD response as a plain text
|
(str): HEAD response as a plain text
|
||||||
"""
|
"""
|
||||||
|
|
||||||
cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet_config or DEFAULT_WALLET_CONFIG)
|
cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet.config_path)
|
||||||
result = cli.object.head(
|
result = cli.object.head(
|
||||||
rpc_endpoint=endpoint,
|
rpc_endpoint=endpoint,
|
||||||
wallet=wallet,
|
|
||||||
cid=cid,
|
cid=cid,
|
||||||
oid=oid,
|
oid=oid,
|
||||||
bearer=bearer,
|
bearer=bearer,
|
||||||
|
@ -653,6 +616,11 @@ def head_object(
|
||||||
fst_line_idx = result.stdout.find("\n")
|
fst_line_idx = result.stdout.find("\n")
|
||||||
decoded = json.loads(result.stdout[fst_line_idx:])
|
decoded = json.loads(result.stdout[fst_line_idx:])
|
||||||
|
|
||||||
|
# if response
|
||||||
|
if "chunks" in decoded.keys():
|
||||||
|
logger.info("decoding ec chunks")
|
||||||
|
return decoded["chunks"]
|
||||||
|
|
||||||
# If response is Complex Object header, it has `splitId` key
|
# If response is Complex Object header, it has `splitId` key
|
||||||
if "splitId" in decoded.keys():
|
if "splitId" in decoded.keys():
|
||||||
logger.info("decoding split header")
|
logger.info("decoding split header")
|
||||||
|
@ -677,8 +645,8 @@ def head_object(
|
||||||
return json_utils.decode_simple_header(decoded)
|
return json_utils.decode_simple_header(decoded)
|
||||||
|
|
||||||
|
|
||||||
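Reviewer note: the new `chunks` branch changes head_object's return shape for EC objects: callers get the raw chunks list instead of a decoded header. A hedged caller-side sketch; the fixtures and the `payloadLength` field name are assumptions, not part of this diff:

    # Hedged sketch: callers should branch on the return type now.
    head = head_object(default_wallet, cid, oid, shell, endpoint)  # fixtures assumed
    if isinstance(head, list):
        chunk_ids = [chunk["id"] for chunk in head]  # EC object: list of chunk descriptors
    else:
        payload_length = head["header"]["payloadLength"]  # regular decoded header (assumed field)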
@reporter.step_deco("Run neo-go dump-keys")
|
@reporter.step("Run neo-go dump-keys")
|
||||||
def neo_go_dump_keys(shell: Shell, wallet: str) -> dict:
|
def neo_go_dump_keys(shell: Shell, wallet: WalletInfo) -> dict:
|
||||||
"""
|
"""
|
||||||
Run neo-go dump keys command
|
Run neo-go dump keys command
|
||||||
|
|
||||||
|
@ -702,7 +670,7 @@ def neo_go_dump_keys(shell: Shell, wallet: str) -> dict:
|
||||||
return {address_id: wallet_key}
|
return {address_id: wallet_key}
|
||||||
|
|
||||||
|
|
||||||
@reporter.step_deco("Run neo-go query height")
|
@reporter.step("Run neo-go query height")
|
||||||
def neo_go_query_height(shell: Shell, endpoint: str) -> dict:
|
def neo_go_query_height(shell: Shell, endpoint: str) -> dict:
|
||||||
"""
|
"""
|
||||||
Run neo-go query height command
|
Run neo-go query height command
|
||||||
|
@ -734,41 +702,47 @@ def neo_go_query_height(shell: Shell, endpoint: str) -> dict:
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
||||||
@reporter.step_deco("Search object nodes")
|
@wait_for_success()
|
||||||
|
@reporter.step("Search object nodes")
|
||||||
def get_object_nodes(
|
def get_object_nodes(
|
||||||
cluster: Cluster,
|
cluster: Cluster,
|
||||||
wallet: str,
|
|
||||||
cid: str,
|
cid: str,
|
||||||
oid: str,
|
oid: str,
|
||||||
shell: Shell,
|
alive_node: ClusterNode,
|
||||||
endpoint: str,
|
|
||||||
bearer: str = "",
|
bearer: str = "",
|
||||||
xhdr: Optional[dict] = None,
|
xhdr: Optional[dict] = None,
|
||||||
is_direct: bool = False,
|
is_direct: bool = False,
|
||||||
verify_presence_all: bool = False,
|
verify_presence_all: bool = False,
|
||||||
wallet_config: Optional[str] = None,
|
|
||||||
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
|
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
|
||||||
) -> list[ClusterNode]:
|
) -> list[ClusterNode]:
|
||||||
cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet_config or DEFAULT_WALLET_CONFIG)
|
shell = alive_node.host.get_shell()
|
||||||
|
endpoint = alive_node.storage_node.get_rpc_endpoint()
|
||||||
|
wallet = alive_node.storage_node.get_remote_wallet_path()
|
||||||
|
wallet_config = alive_node.storage_node.get_remote_wallet_config_path()
|
||||||
|
|
||||||
result_object_nodes = cli.object.nodes(
|
cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet_config)
|
||||||
|
|
||||||
|
response = cli.object.nodes(
|
||||||
rpc_endpoint=endpoint,
|
rpc_endpoint=endpoint,
|
||||||
wallet=wallet,
|
|
||||||
cid=cid,
|
cid=cid,
|
||||||
oid=oid,
|
oid=oid,
|
||||||
bearer=bearer,
|
bearer=bearer,
|
||||||
ttl=1 if is_direct else None,
|
ttl=1 if is_direct else None,
|
||||||
|
json=True,
|
||||||
xhdr=xhdr,
|
xhdr=xhdr,
|
||||||
timeout=timeout,
|
timeout=timeout,
|
||||||
verify_presence_all=verify_presence_all,
|
verify_presence_all=verify_presence_all,
|
||||||
)
|
)
|
||||||
|
|
||||||
parsing_output = parse_cmd_table(result_object_nodes.stdout, "|")
|
response_json = json.loads(response.stdout)
|
||||||
list_object_nodes = [
|
# Currently, the command will show expected and confirmed nodes.
|
||||||
node
|
# And we (currently) count only nodes which are both expected and confirmed
|
||||||
for node in parsing_output
|
object_nodes_id = {
|
||||||
if node["should_contain_object"] == "true" and node["actually_contains_object"] == "true"
|
required_node
|
||||||
]
|
for data_object in response_json["data_objects"]
|
||||||
|
for required_node in data_object["required_nodes"]
|
||||||
|
if required_node in data_object["confirmed_nodes"]
|
||||||
|
}
|
||||||
|
|
||||||
netmap_nodes_list = parse_netmap_output(
|
netmap_nodes_list = parse_netmap_output(
|
||||||
cli.netmap.snapshot(
|
cli.netmap.snapshot(
|
||||||
|
@ -777,17 +751,11 @@ def get_object_nodes(
|
||||||
).stdout
|
).stdout
|
||||||
)
|
)
|
||||||
netmap_nodes = [
|
netmap_nodes = [
|
||||||
netmap_node
|
netmap_node for object_node in object_nodes_id for netmap_node in netmap_nodes_list if object_node == netmap_node.node_id
|
||||||
for object_node in list_object_nodes
|
|
||||||
for netmap_node in netmap_nodes_list
|
|
||||||
if object_node["node_id"] == netmap_node.node_id
|
|
||||||
]
|
]
|
||||||
|
|
||||||
result = [
|
object_nodes = [
|
||||||
cluster_node
|
cluster_node for netmap_node in netmap_nodes for cluster_node in cluster.cluster_nodes if netmap_node.node == cluster_node.host_ip
|
||||||
for netmap_node in netmap_nodes
|
|
||||||
for cluster_node in cluster.cluster_nodes
|
|
||||||
if netmap_node.node == cluster_node.host_ip
|
|
||||||
]
|
]
|
||||||
|
|
||||||
return result
|
return object_nodes
|
||||||
|
|
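Reviewer note: the rewrite above swaps table-scraping of `frostfs-cli object nodes` for its `--json` output. A compact, self-contained illustration of the JSON contract the new code consumes (payload invented, but it follows exactly the keys the code reads):

    import json

    # Hypothetical --json output: only nodes that are both required and confirmed count.
    stdout = '{"data_objects": [{"required_nodes": ["n1", "n2"], "confirmed_nodes": ["n2"]}]}'
    response_json = json.loads(stdout)
    object_nodes_id = {
        required_node
        for data_object in response_json["data_objects"]
        for required_node in data_object["required_nodes"]
        if required_node in data_object["confirmed_nodes"]
    }
    assert object_nodes_id == {"n2"}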
35  src/frostfs_testlib/steps/cli/tree.py  Normal file
@@ -0,0 +1,35 @@
+import logging
+from typing import Optional
+
+from frostfs_testlib import reporter
+from frostfs_testlib.cli import FrostfsCli
+from frostfs_testlib.plugins import load_plugin
+from frostfs_testlib.resources.cli import CLI_DEFAULT_TIMEOUT, FROSTFS_CLI_EXEC
+from frostfs_testlib.shell import Shell
+from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
+
+logger = logging.getLogger("NeoLogger")
+
+
+
+@reporter.step("Get Tree List")
+def get_tree_list(
+    wallet: WalletInfo,
+    cid: str,
+    shell: Shell,
+    endpoint: str,
+    timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
+) -> None:
+    """
+    A wrapper for `frostfs-cli tree list` call.
+    Args:
+        wallet (WalletInfo): wallet on whose behalf the tree list is requested
+        cid (str): ID of the container whose trees are listed
+        shell: executor for cli command
+        endpoint: FrostFS endpoint to send request to, appends to `--rpc-endpoint` key
+        timeout: Timeout for the operation.
+    This function doesn't return anything.
+    """
+
+    cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet.config_path)
+    cli.tree.list(cid=cid, rpc_endpoint=endpoint, timeout=timeout)
@@ -12,15 +12,14 @@
 import logging
 from typing import Optional, Tuple

-from frostfs_testlib.reporter import get_reporter
+from frostfs_testlib import reporter
 from frostfs_testlib.resources.cli import CLI_DEFAULT_TIMEOUT
-from frostfs_testlib.resources.common import DEFAULT_WALLET_CONFIG
 from frostfs_testlib.shell import Shell
 from frostfs_testlib.steps.cli.object import head_object
 from frostfs_testlib.storage.cluster import Cluster, StorageNode
 from frostfs_testlib.storage.dataclasses.storage_object_info import StorageObjectInfo
+from frostfs_testlib.storage.dataclasses.wallet import WalletInfo

-reporter = get_reporter()
 logger = logging.getLogger("NeoLogger")


@@ -45,7 +44,7 @@ def get_storage_object_chunks(

     with reporter.step(f"Get complex object chunks (f{storage_object.oid})"):
         split_object_id = get_link_object(
-            storage_object.wallet_file_path,
+            storage_object.wallet,
             storage_object.cid,
             storage_object.oid,
             shell,
@@ -54,7 +53,7 @@ def get_storage_object_chunks(
             timeout=timeout,
         )
         head = head_object(
-            storage_object.wallet_file_path,
+            storage_object.wallet,
             storage_object.cid,
             split_object_id,
             shell,
@@ -97,7 +96,7 @@ def get_complex_object_split_ranges(
     chunks_ids = get_storage_object_chunks(storage_object, shell, cluster)
     for chunk_id in chunks_ids:
         head = head_object(
-            storage_object.wallet_file_path,
+            storage_object.wallet,
             storage_object.cid,
             chunk_id,
             shell,
@@ -113,15 +112,14 @@ def get_complex_object_split_ranges(
     return ranges


-@reporter.step_deco("Get Link Object")
+@reporter.step("Get Link Object")
 def get_link_object(
-    wallet: str,
+    wallet: WalletInfo,
     cid: str,
     oid: str,
     shell: Shell,
     nodes: list[StorageNode],
     bearer: str = "",
-    wallet_config: str = DEFAULT_WALLET_CONFIG,
     is_direct: bool = True,
     timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
 ):
@@ -155,7 +153,6 @@ def get_link_object(
         is_raw=True,
         is_direct=is_direct,
         bearer=bearer,
-        wallet_config=wallet_config,
         timeout=timeout,
     )
     if resp["link"]:
@@ -166,9 +163,9 @@ def get_link_object(
     return None


-@reporter.step_deco("Get Last Object")
+@reporter.step("Get Last Object")
 def get_last_object(
-    wallet: str,
+    wallet: WalletInfo,
     cid: str,
     oid: str,
     shell: Shell,
@@ -2,15 +2,9 @@ import logging
 from time import sleep
 from typing import Optional

+from frostfs_testlib import reporter
 from frostfs_testlib.cli import FrostfsAdm, FrostfsCli, NeoGo
-from frostfs_testlib.reporter import get_reporter
-from frostfs_testlib.resources.cli import (
-    CLI_DEFAULT_TIMEOUT,
-    FROSTFS_ADM_CONFIG_PATH,
-    FROSTFS_ADM_EXEC,
-    FROSTFS_CLI_EXEC,
-    NEOGO_EXECUTABLE,
-)
+from frostfs_testlib.resources.cli import CLI_DEFAULT_TIMEOUT, FROSTFS_ADM_CONFIG_PATH, FROSTFS_ADM_EXEC, FROSTFS_CLI_EXEC, NEOGO_EXECUTABLE
 from frostfs_testlib.resources.common import MORPH_BLOCK_TIME
 from frostfs_testlib.shell import Shell
 from frostfs_testlib.steps.payment_neogo import get_contract_hash
@@ -19,11 +13,10 @@ from frostfs_testlib.storage.dataclasses.frostfs_services import InnerRing, Morp
 from frostfs_testlib.testing.test_control import wait_for_success
 from frostfs_testlib.utils import datetime_utils, wallet_utils

-reporter = get_reporter()
 logger = logging.getLogger("NeoLogger")


-@reporter.step_deco("Get epochs from nodes")
+@reporter.step("Get epochs from nodes")
 def get_epochs_from_nodes(shell: Shell, cluster: Cluster) -> dict[str, int]:
     """
     Get current epochs on each node.
@@ -41,10 +34,8 @@ def get_epochs_from_nodes(shell: Shell, cluster: Cluster) -> dict[str, int]:
     return epochs_by_node


-@reporter.step_deco("Ensure fresh epoch")
-def ensure_fresh_epoch(
-    shell: Shell, cluster: Cluster, alive_node: Optional[StorageNode] = None
-) -> int:
+@reporter.step("Ensure fresh epoch")
+def ensure_fresh_epoch(shell: Shell, cluster: Cluster, alive_node: Optional[StorageNode] = None) -> int:
     # ensure new fresh epoch to avoid epoch switch during test session
     alive_node = alive_node if alive_node else cluster.services(StorageNode)[0]
     current_epoch = get_epoch(shell, cluster, alive_node)
@@ -54,7 +45,7 @@ def ensure_fresh_epoch(
     return epoch


-@reporter.step_deco("Wait up to {timeout} seconds for nodes on cluster to align epochs")
+@reporter.step("Wait up to {timeout} seconds for nodes on cluster to align epochs")
 def wait_for_epochs_align(shell: Shell, cluster: Cluster, timeout=60):
     @wait_for_success(timeout, 5, None, True)
     def check_epochs():
@@ -64,7 +55,7 @@ def wait_for_epochs_align(shell: Shell, cluster: Cluster, timeout=60):
     check_epochs()


-@reporter.step_deco("Get Epoch")
+@reporter.step("Get Epoch")
 def get_epoch(shell: Shell, cluster: Cluster, alive_node: Optional[StorageNode] = None):
     alive_node = alive_node if alive_node else cluster.services(StorageNode)[0]
     endpoint = alive_node.get_rpc_endpoint()
@@ -77,7 +68,7 @@ def get_epoch(shell: Shell, cluster: Cluster, alive_node: Optional[StorageNode]
     return int(epoch.stdout)


-@reporter.step_deco("Tick Epoch")
+@reporter.step("Tick Epoch")
 def tick_epoch(shell: Shell, cluster: Cluster, alive_node: Optional[StorageNode] = None):
     """
     Tick epoch using frostfs-adm or NeoGo if frostfs-adm is not available (DevEnv)
@@ -90,7 +81,7 @@ def tick_epoch(shell: Shell, cluster: Cluster, alive_node: Optional[StorageNode]
     alive_node = alive_node if alive_node else cluster.services(StorageNode)[0]
     remote_shell = alive_node.host.get_shell()

-    if FROSTFS_ADM_EXEC and FROSTFS_ADM_CONFIG_PATH:
+    if "force_transactions" not in alive_node.host.config.attributes:
         # If frostfs-adm is available, then we tick epoch with it (to be consistent with UAT tests)
         frostfs_adm = FrostfsAdm(
             shell=remote_shell,
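Reviewer note: the tick_epoch guard now keys off a per-host attribute rather than the presence of frostfs-adm binaries. A hedged sketch of the decision, with the two branch helpers being hypothetical stand-ins for the frostfs-adm and NeoGo paths in the function body:

    # Hedged sketch: hosts that cannot run frostfs-adm mark themselves with the
    # "force_transactions" attribute; tick_epoch then falls back to NeoGo transactions.
    if "force_transactions" not in alive_node.host.config.attributes:
        tick_via_frostfs_adm()  # hypothetical helper for the frostfs-adm path
    else:
        tick_via_neogo()  # hypothetical helper for the NeoGo fallback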
@@ -10,31 +10,28 @@ from urllib.parse import quote_plus

 import requests

-from frostfs_testlib.reporter import get_reporter
-from frostfs_testlib.resources.common import SIMPLE_OBJECT_SIZE
+from frostfs_testlib import reporter
+from frostfs_testlib.cli import GenericCli
+from frostfs_testlib.resources.common import ASSETS_DIR, SIMPLE_OBJECT_SIZE
 from frostfs_testlib.s3.aws_cli_client import command_options
 from frostfs_testlib.shell import Shell
 from frostfs_testlib.shell.local_shell import LocalShell
 from frostfs_testlib.steps.cli.object import get_object
 from frostfs_testlib.steps.storage_policy import get_nodes_without_object
-from frostfs_testlib.storage.cluster import StorageNode
+from frostfs_testlib.storage.cluster import ClusterNode, StorageNode
 from frostfs_testlib.testing.test_control import retry
-from frostfs_testlib.utils.file_utils import get_file_hash
+from frostfs_testlib.utils.file_utils import TestFile, get_file_hash

-reporter = get_reporter()

 logger = logging.getLogger("NeoLogger")

-ASSETS_DIR = os.getenv("ASSETS_DIR", "TemporaryDir/")
 local_shell = LocalShell()


-@reporter.step_deco("Get via HTTP Gate")
+@reporter.step("Get via HTTP Gate")
 def get_via_http_gate(
     cid: str,
     oid: str,
-    endpoint: str,
-    http_hostname: str,
+    node: ClusterNode,
     request_path: Optional[str] = None,
     timeout: Optional[int] = 300,
 ):
@@ -42,51 +39,16 @@ def get_via_http_gate(
     This function gets given object from HTTP gate
     cid: container id to get object from
     oid: object ID
-    endpoint: http gate endpoint
-    http_hostname: http host name on the node
+    node: node to make request
     request_path: (optional) http request, if omitted - use default [{endpoint}/get/{cid}/{oid}]
     """

     # if `request_path` parameter omitted, use default
     if request_path is None:
-        request = f"{endpoint}/get/{cid}/{oid}"
+        request = f"{node.http_gate.get_endpoint()}/get/{cid}/{oid}"
     else:
-        request = f"{endpoint}{request_path}"
+        request = f"{node.http_gate.get_endpoint()}{request_path}"

-    resp = requests.get(
-        request, headers={"Host": http_hostname}, stream=True, timeout=timeout, verify=False
-    )
-
-    if not resp.ok:
-        raise Exception(
-            f"""Failed to get object via HTTP gate:
-                request: {resp.request.path_url},
-                response: {resp.text},
-                headers: {resp.headers},
-                status code: {resp.status_code} {resp.reason}"""
-        )
-
-    logger.info(f"Request: {request}")
-    _attach_allure_step(request, resp.status_code)
-
-    file_path = os.path.join(os.getcwd(), ASSETS_DIR, f"{cid}_{oid}")
-    with open(file_path, "wb") as file:
-        shutil.copyfileobj(resp.raw, file)
-    return file_path
-
-
-@reporter.step_deco("Get via Zip HTTP Gate")
-def get_via_zip_http_gate(
-    cid: str, prefix: str, endpoint: str, http_hostname: str, timeout: Optional[int] = 300
-):
-    """
-    This function gets given object from HTTP gate
-    cid: container id to get object from
-    prefix: common prefix
-    endpoint: http gate endpoint
-    http_hostname: http host name on the node
-    """
-    request = f"{endpoint}/zip/{cid}/{prefix}"
     resp = requests.get(request, stream=True, timeout=timeout, verify=False)

     if not resp.ok:
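Reviewer note: from this hunk on, the HTTP-gate helpers resolve the gate endpoint from a ClusterNode instead of taking `endpoint`/`http_hostname` strings. A hedged before/after sketch (fixtures assumed):

    # Old call (removed): get_via_http_gate(cid, oid, endpoint="http://...", http_hostname="...")
    # New call: the node object supplies its own gate endpoint via node.http_gate.get_endpoint().
    file_path = get_via_http_gate(cid=cid, oid=oid, node=cluster.cluster_nodes[0])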
@@ -101,44 +63,22 @@ def get_via_zip_http_gate(
     logger.info(f"Request: {request}")
     _attach_allure_step(request, resp.status_code)

-    file_path = os.path.join(os.getcwd(), ASSETS_DIR, f"{cid}_archive.zip")
-    with open(file_path, "wb") as file:
+    test_file = TestFile(os.path.join(os.getcwd(), ASSETS_DIR, f"{cid}_{oid}"))
+    with open(test_file, "wb") as file:
         shutil.copyfileobj(resp.raw, file)
+    return test_file

-    with zipfile.ZipFile(file_path, "r") as zip_ref:
-        zip_ref.extractall(ASSETS_DIR)
-
-    return os.path.join(os.getcwd(), ASSETS_DIR, prefix)
-
-
-@reporter.step_deco("Get via HTTP Gate by attribute")
-def get_via_http_gate_by_attribute(
-    cid: str,
-    attribute: dict,
-    endpoint: str,
-    http_hostname: str,
-    request_path: Optional[str] = None,
-    timeout: Optional[int] = 300,
-):
+@reporter.step("Get via Zip HTTP Gate")
+def get_via_zip_http_gate(cid: str, prefix: str, node: ClusterNode, timeout: Optional[int] = 300):
     """
     This function gets given object from HTTP gate
-    cid: CID to get object from
-    attribute: attribute {name: attribute} value pair
-    endpoint: http gate endpoint
-    http_hostname: http host name on the node
-    request_path: (optional) http request path, if ommited - use default [{endpoint}/get_by_attribute/{Key}/{Value}]
+    cid: container id to get object from
+    prefix: common prefix
+    node: node to make request
     """
-    attr_name = list(attribute.keys())[0]
-    attr_value = quote_plus(str(attribute.get(attr_name)))
-    # if `request_path` parameter ommited, use default
-    if request_path is None:
-        request = f"{endpoint}/get_by_attribute/{cid}/{quote_plus(str(attr_name))}/{attr_value}"
-    else:
-        request = f"{endpoint}{request_path}"
-
-    resp = requests.get(
-        request, stream=True, timeout=timeout, verify=False, headers={"Host": http_hostname}
-    )
+    request = f"{node.http_gate.get_endpoint()}/zip/{cid}/{prefix}"
+    resp = requests.get(request, stream=True, timeout=timeout, verify=False)

     if not resp.ok:
         raise Exception(
@@ -152,17 +92,61 @@ def get_via_http_gate_by_attribute(
     logger.info(f"Request: {request}")
     _attach_allure_step(request, resp.status_code)

-    file_path = os.path.join(os.getcwd(), ASSETS_DIR, f"{cid}_{str(uuid.uuid4())}")
-    with open(file_path, "wb") as file:
+    test_file = TestFile(os.path.join(os.getcwd(), ASSETS_DIR, f"{cid}_archive.zip"))
+    with open(test_file, "wb") as file:
         shutil.copyfileobj(resp.raw, file)
-    return file_path

+    with zipfile.ZipFile(test_file, "r") as zip_ref:
+        zip_ref.extractall(ASSETS_DIR)

-# TODO: pass http_hostname as a header
-@reporter.step_deco("Upload via HTTP Gate")
-def upload_via_http_gate(
-    cid: str, path: str, endpoint: str, headers: Optional[dict] = None, timeout: Optional[int] = 300
-) -> str:
+    return os.path.join(os.getcwd(), ASSETS_DIR, prefix)
+
+
+@reporter.step("Get via HTTP Gate by attribute")
+def get_via_http_gate_by_attribute(
+    cid: str,
+    attribute: dict,
+    node: ClusterNode,
+    request_path: Optional[str] = None,
+    timeout: Optional[int] = 300,
+):
+    """
+    This function gets given object from HTTP gate
+    cid: CID to get object from
+    attribute: attribute {name: attribute} value pair
+    endpoint: http gate endpoint
+    request_path: (optional) http request path, if omitted - use default [{endpoint}/get_by_attribute/{Key}/{Value}]
+    """
+    attr_name = list(attribute.keys())[0]
+    attr_value = quote_plus(str(attribute.get(attr_name)))
+    # if `request_path` parameter omitted, use default
+    if request_path is None:
+        request = f"{node.http_gate.get_endpoint()}/get_by_attribute/{cid}/{quote_plus(str(attr_name))}/{attr_value}"
+    else:
+        request = f"{node.http_gate.get_endpoint()}{request_path}"
+
+    resp = requests.get(request, stream=True, timeout=timeout, verify=False)
+
+    if not resp.ok:
+        raise Exception(
+            f"""Failed to get object via HTTP gate:
+                request: {resp.request.path_url},
+                response: {resp.text},
+                headers: {resp.headers},
+                status code: {resp.status_code} {resp.reason}"""
+        )
+
+    logger.info(f"Request: {request}")
+    _attach_allure_step(request, resp.status_code)
+
+    test_file = TestFile(os.path.join(os.getcwd(), ASSETS_DIR, f"{cid}_{str(uuid.uuid4())}"))
+    with open(test_file, "wb") as file:
+        shutil.copyfileobj(resp.raw, file)
+    return test_file
+
+
+@reporter.step("Upload via HTTP Gate")
+def upload_via_http_gate(cid: str, path: str, endpoint: str, headers: Optional[dict] = None, timeout: Optional[int] = 300) -> str:
     """
     This function uploads the given object through HTTP gate
     cid: CID to get object from
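Reviewer note: attribute lookups follow the same node-based convention. A hedged example; the attribute dict and fixtures are illustrative:

    # Hedged example: fetch by a single {name: value} attribute pair.
    file_path = get_via_http_gate_by_attribute(
        cid=cid,
        attribute={"FileName": "cat.jpg"},
        node=cluster.cluster_nodes[0],
    )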
@@ -173,9 +157,7 @@ def upload_via_http_gate(
     request = f"{endpoint}/upload/{cid}"
     files = {"upload_file": open(path, "rb")}
     body = {"filename": path}
-    resp = requests.post(
-        request, files=files, data=body, headers=headers, timeout=timeout, verify=False
-    )
+    resp = requests.post(request, files=files, data=body, headers=headers, timeout=timeout, verify=False)

     if not resp.ok:
         raise Exception(
@@ -193,7 +175,7 @@ def upload_via_http_gate(
     return resp.json().get("object_id")


-@reporter.step_deco("Check is the passed object large")
+@reporter.step("Check is the passed object large")
 def is_object_large(filepath: str) -> bool:
     """
     This function checks the passed file size and returns True if file_size > SIMPLE_OBJECT_SIZE
@@ -207,8 +189,7 @@ def is_object_large(filepath: str) -> bool:
     return False


-# TODO: pass http_hostname as a header
-@reporter.step_deco("Upload via HTTP Gate using Curl")
+@reporter.step("Upload via HTTP Gate using Curl")
 def upload_via_http_gate_curl(
     cid: str,
     filepath: str,
@@ -256,22 +237,21 @@ def upload_via_http_gate_curl(


 @retry(max_attempts=3, sleep_interval=1)
-@reporter.step_deco("Get via HTTP Gate using Curl")
-def get_via_http_curl(cid: str, oid: str, endpoint: str, http_hostname: str) -> str:
+@reporter.step("Get via HTTP Gate using Curl")
+def get_via_http_curl(cid: str, oid: str, node: ClusterNode) -> TestFile:
     """
     This function gets given object from HTTP gate using curl utility.
     cid: CID to get object from
     oid: object OID
-    endpoint: http gate endpoint
-    http_hostname: http host name of the node
+    node: node for request
     """
-    request = f"{endpoint}/get/{cid}/{oid}"
-    file_path = os.path.join(os.getcwd(), ASSETS_DIR, f"{cid}_{oid}_{str(uuid.uuid4())}")
+    request = f"{node.http_gate.get_endpoint()}/get/{cid}/{oid}"
+    test_file = TestFile(os.path.join(os.getcwd(), ASSETS_DIR, f"{cid}_{oid}_{str(uuid.uuid4())}"))

-    cmd = f'curl -k -H "Host: {http_hostname}" {request} > {file_path}'
-    local_shell.exec(cmd)
+    curl = GenericCli("curl", node.host)
+    curl(f"-k ", f"{request} > {test_file}", shell=local_shell)

-    return file_path
+    return test_file


 def _attach_allure_step(request: str, status_code: int, req_type="GET"):
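Reviewer note: the curl path now goes through GenericCli instead of a hand-built shell string. A minimal sketch of the wrapper's call shape, inferred from this diff alone (constructor and call signature are assumptions, not documented API):

    # Hedged sketch: GenericCli("curl", node.host) binds the binary to a host;
    # invoking the instance passes CLI arguments, and shell= picks the executor.
    curl = GenericCli("curl", node.host)
    curl("-k", f"{request} > {test_file}", shell=local_shell)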
@@ -280,37 +260,31 @@ def _attach_allure_step(request: str, status_code: int, req_type="GET"):
     reporter.attach(command_attachment, f"{req_type} Request")


-@reporter.step_deco("Try to get object and expect error")
+@reporter.step("Try to get object and expect error")
 def try_to_get_object_and_expect_error(
     cid: str,
     oid: str,
+    node: ClusterNode,
     error_pattern: str,
-    endpoint: str,
-    http_hostname: str,
 ) -> None:
     try:
-        get_via_http_gate(cid=cid, oid=oid, endpoint=endpoint, http_hostname=http_hostname)
+        get_via_http_gate(cid=cid, oid=oid, node=node)
         raise AssertionError(f"Expected error on getting object with cid: {cid}")
     except Exception as err:
         match = error_pattern.casefold() in str(err).casefold()
         assert match, f"Expected {err} to match {error_pattern}"


-@reporter.step_deco("Verify object can be get using HTTP header attribute")
+@reporter.step("Verify object can be get using HTTP header attribute")
 def get_object_by_attr_and_verify_hashes(
     oid: str,
     file_name: str,
     cid: str,
     attrs: dict,
-    endpoint: str,
-    http_hostname: str,
+    node: ClusterNode,
 ) -> None:
-    got_file_path_http = get_via_http_gate(
-        cid=cid, oid=oid, endpoint=endpoint, http_hostname=http_hostname
-    )
-    got_file_path_http_attr = get_via_http_gate_by_attribute(
-        cid=cid, attribute=attrs, endpoint=endpoint, http_hostname=http_hostname
-    )
+    got_file_path_http = get_via_http_gate(cid=cid, oid=oid, node=node)
+    got_file_path_http_attr = get_via_http_gate_by_attribute(cid=cid, attribute=attrs, node=node)
     assert_hashes_are_equal(file_name, got_file_path_http, got_file_path_http_attr)


@@ -321,8 +295,7 @@ def verify_object_hash(
     cid: str,
     shell: Shell,
     nodes: list[StorageNode],
-    endpoint: str,
-    http_hostname: str,
+    request_node: ClusterNode,
     object_getter=None,
 ) -> None:

@@ -348,9 +321,7 @@ def verify_object_hash(
         shell=shell,
         endpoint=random_node.get_rpc_endpoint(),
     )
-    got_file_path_http = object_getter(
-        cid=cid, oid=oid, endpoint=endpoint, http_hostname=http_hostname
-    )
+    got_file_path_http = object_getter(cid=cid, oid=oid, node=request_node)

     assert_hashes_are_equal(file_name, got_file_path, got_file_path_http)

@@ -359,18 +330,14 @@ def assert_hashes_are_equal(orig_file_name: str, got_file_1: str, got_file_2: st
     msg = "Expected hashes are equal for files {f1} and {f2}"
     got_file_hash_http = get_file_hash(got_file_1)
     assert get_file_hash(got_file_2) == got_file_hash_http, msg.format(f1=got_file_2, f2=got_file_1)
-    assert get_file_hash(orig_file_name) == got_file_hash_http, msg.format(
-        f1=orig_file_name, f2=got_file_1
-    )
+    assert get_file_hash(orig_file_name) == got_file_hash_http, msg.format(f1=orig_file_name, f2=got_file_1)


 def attr_into_header(attrs: dict) -> dict:
     return {f"X-Attribute-{_key}": _value for _key, _value in attrs.items()}


-@reporter.step_deco(
-    "Convert each attribute (Key=Value) to the following format: -H 'X-Attribute-Key: Value'"
-)
+@reporter.step("Convert each attribute (Key=Value) to the following format: -H 'X-Attribute-Key: Value'")
 def attr_into_str_header_curl(attrs: dict) -> list:
     headers = []
     for k, v in attrs.items():
@@ -379,16 +346,13 @@ def attr_into_str_header_curl(attrs: dict) -> list:
     return headers


-@reporter.step_deco(
-    "Try to get object via http (pass http_request and optional attributes) and expect error"
-)
+@reporter.step("Try to get object via http (pass http_request and optional attributes) and expect error")
 def try_to_get_object_via_passed_request_and_expect_error(
     cid: str,
     oid: str,
+    node: ClusterNode,
     error_pattern: str,
-    endpoint: str,
     http_request_path: str,
-    http_hostname: str,
     attrs: Optional[dict] = None,
 ) -> None:
     try:
@@ -396,17 +360,15 @@ def try_to_get_object_via_passed_request_and_expect_error(
             get_via_http_gate(
                 cid=cid,
                 oid=oid,
-                endpoint=endpoint,
+                node=node,
                 request_path=http_request_path,
-                http_hostname=http_hostname,
             )
         else:
             get_via_http_gate_by_attribute(
                 cid=cid,
                 attribute=attrs,
-                endpoint=endpoint,
+                node=node,
                 request_path=http_request_path,
-                http_hostname=http_hostname,
             )
         raise AssertionError(f"Expected error on getting object with cid: {cid}")
     except Exception as err:

45  src/frostfs_testlib/steps/metrics.py  Normal file
@@ -0,0 +1,45 @@
+import re
+
+from frostfs_testlib import reporter
+from frostfs_testlib.testing.test_control import wait_for_success
+from frostfs_testlib.storage.cluster import ClusterNode
+
+
+@reporter.step("Check metrics result")
+@wait_for_success(interval=10)
+def check_metrics_counter(
+    cluster_nodes: list[ClusterNode],
+    operator: str = "==",
+    counter_exp: int = 0,
+    parse_from_command: bool = False,
+    **metrics_greps: str,
+):
+    counter_act = 0
+    for cluster_node in cluster_nodes:
+        counter_act += get_metrics_value(cluster_node, parse_from_command, **metrics_greps)
+    assert eval(
+        f"{counter_act} {operator} {counter_exp}"
+    ), f"Expected: {counter_exp} {operator} Actual: {counter_act} in node: {cluster_node}"
+
+
+@reporter.step("Get metrics value from node: {node}")
+def get_metrics_value(node: ClusterNode, parse_from_command: bool = False, **metrics_greps: str):
+    try:
+        command_result = node.metrics.storage.get_metrics_search_by_greps(**metrics_greps)
+        if parse_from_command:
+            metrics_counter = calc_metrics_count_from_stdout(command_result.stdout, **metrics_greps)
+        else:
+            metrics_counter = calc_metrics_count_from_stdout(command_result.stdout)
+    except RuntimeError as e:
+        metrics_counter = 0
+
+    return metrics_counter
+
+
+@reporter.step("Parse metrics count and calc sum of result")
+def calc_metrics_count_from_stdout(metric_result_stdout: str, command: str = None):
+    if command:
+        result = re.findall(rf"{command}\s*([\d.e+-]+)", metric_result_stdout)
+    else:
+        result = re.findall(r"}\s*([\d.e+-]+)", metric_result_stdout)
+    return sum(map(lambda x: int(float(x)), result))
@@ -1,89 +1,19 @@
-from frostfs_testlib.reporter import get_reporter
+from frostfs_testlib.shell import CommandOptions
 from frostfs_testlib.storage.cluster import ClusterNode
-from frostfs_testlib.testing.test_control import retry
-
-reporter = get_reporter()


-class IpTablesHelper:
-    @staticmethod
-    def drop_input_traffic_to_port(node: ClusterNode, ports: list[str]) -> None:
-        shell = node.host.get_shell()
-        for port in ports:
-            shell.exec(f"iptables -A INPUT -p tcp --dport {port} -j DROP")
-
+class IpHelper:
     @staticmethod
     def drop_input_traffic_to_node(node: ClusterNode, block_ip: list[str]) -> None:
         shell = node.host.get_shell()
         for ip in block_ip:
-            shell.exec(f"iptables -A INPUT -s {ip} -j DROP")
-
-    @staticmethod
-    def restore_input_traffic_to_port(node: ClusterNode) -> None:
-        shell = node.host.get_shell()
-        ports = (
-            shell.exec("iptables -L --numeric | grep DROP | awk '{print $7}'")
-            .stdout.strip()
-            .split("\n")
-        )
-        if ports[0] == "":
-            return
-        for port in ports:
-            shell.exec(f"iptables -D INPUT -p tcp --dport {port.split(':')[-1]} -j DROP")
+            shell.exec(f"ip route add blackhole {ip}")

     @staticmethod
     def restore_input_traffic_to_node(node: ClusterNode) -> None:
         shell = node.host.get_shell()
-        unlock_ip = (
-            shell.exec("iptables -L --numeric | grep DROP | awk '{print $4}'")
-            .stdout.strip()
-            .split("\n")
-        )
-        if unlock_ip[0] == "":
+        unlock_ip = shell.exec("ip route list | grep blackhole", CommandOptions(check=False))
+        if unlock_ip.return_code != 0:
             return
-        for ip in unlock_ip:
-            shell.exec(f"iptables -D INPUT -s {ip} -j DROP")
-
-
-# TODO Move class to HOST
-class IfUpDownHelper:
-    @reporter.step_deco("Down {interface} to {node}")
-    def down_interface(self, node: ClusterNode, interface: str) -> None:
-        shell = node.host.get_shell()
-        shell.exec(f"ifdown {interface}")
-
-    @reporter.step_deco("Up {interface} to {node}")
-    def up_interface(self, node: ClusterNode, interface: str) -> None:
-        shell = node.host.get_shell()
-        shell.exec(f"ifup {interface}")
-
-    @reporter.step_deco("Up all interface to {node}")
-    def up_all_interface(self, node: ClusterNode) -> None:
-        shell = node.host.get_shell()
-        interfaces = list(node.host.config.interfaces.keys())
-        shell.exec("ifup -av")
-        for name_interface in interfaces:
-            self.check_state_up(node, name_interface)
-
-    @reporter.step_deco("Down all interface to {node}")
-    def down_all_interface(self, node: ClusterNode) -> None:
-        shell = node.host.get_shell()
-        interfaces = list(node.host.config.interfaces.keys())
-        shell.exec("ifdown -av")
-        for name_interface in interfaces:
-            self.check_state_down(node, name_interface)
-
-    @reporter.step_deco("Check {node} to {interface}")
-    def check_state(self, node: ClusterNode, interface: str) -> str:
-        shell = node.host.get_shell()
-        return shell.exec(
-            f"ip link show {interface} | sed -z 's/.*state \(.*\) mode .*/\\1/'"
-        ).stdout.strip()
-
-    @retry(max_attempts=5, sleep_interval=5, expected_result="UP")
-    def check_state_up(self, node: ClusterNode, interface: str) -> str:
-        return self.check_state(node=node, interface=interface)
-
-    @retry(max_attempts=5, sleep_interval=5, expected_result="DOWN")
-    def check_state_down(self, node: ClusterNode, interface: str) -> str:
-        return self.check_state(node=node, interface=interface)
+        for ip in unlock_ip.stdout.strip().split("\n"):
+            shell.exec(f"ip route del blackhole {ip.split(' ')[1]}")

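Reviewer note: traffic blocking moves from iptables DROP rules to blackhole routes, which are trivially enumerable via `ip route list` and, presumably, avoid depending on iptables being installed on the host. A sketch of the resulting route manipulation:

    # Hedged sketch of what IpHelper now runs on the target host:
    #   block:   ip route add blackhole 10.0.0.5
    #   inspect: ip route list | grep blackhole   -> "blackhole 10.0.0.5"
    #   restore: ip route del blackhole 10.0.0.5  (second token of each matched line)
    IpHelper.drop_input_traffic_to_node(node, ["10.0.0.5"])
    IpHelper.restore_input_traffic_to_node(node)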
@@ -6,21 +6,15 @@ from dataclasses import dataclass
 from time import sleep
 from typing import Optional

+from frostfs_testlib import reporter
 from frostfs_testlib.cli import FrostfsAdm, FrostfsCli
-from frostfs_testlib.reporter import get_reporter
-from frostfs_testlib.resources.cli import (
-    FROSTFS_ADM_CONFIG_PATH,
-    FROSTFS_ADM_EXEC,
-    FROSTFS_CLI_EXEC,
-)
+from frostfs_testlib.resources.cli import FROSTFS_ADM_CONFIG_PATH, FROSTFS_ADM_EXEC, FROSTFS_CLI_EXEC
 from frostfs_testlib.resources.common import MORPH_BLOCK_TIME
 from frostfs_testlib.shell import Shell
 from frostfs_testlib.steps.epoch import tick_epoch, wait_for_epochs_align
 from frostfs_testlib.storage.cluster import Cluster, StorageNode
-from frostfs_testlib.storage.dataclasses.frostfs_services import S3Gate
 from frostfs_testlib.utils import datetime_utils

-reporter = get_reporter()
 logger = logging.getLogger("NeoLogger")


@@ -40,7 +34,7 @@ class HealthStatus:
     return HealthStatus(network, health)


-@reporter.step_deco("Get Locode from random storage node")
+@reporter.step("Get Locode from random storage node")
 def get_locode_from_random_node(cluster: Cluster) -> str:
     node = random.choice(cluster.services(StorageNode))
     locode = node.get_un_locode()
@@ -48,7 +42,7 @@ def get_locode_from_random_node(cluster: Cluster) -> str:
     return locode


-@reporter.step_deco("Healthcheck for storage node {node}")
+@reporter.step("Healthcheck for storage node {node}")
 def storage_node_healthcheck(node: StorageNode) -> HealthStatus:
     """
     The function returns storage node's health status.
@@ -57,12 +51,27 @@ def storage_node_healthcheck(node: StorageNode) -> HealthStatus:
     Returns:
         health status as HealthStatus object.
     """
-    command = "control healthcheck"
-    output = _run_control_command_with_retries(node, command)
-    return HealthStatus.from_stdout(output)
+    host = node.host
+    service_config = host.get_service_config(node.name)
+    wallet_path = service_config.attributes["wallet_path"]
+    wallet_password = service_config.attributes["wallet_password"]
+    control_endpoint = service_config.attributes["control_endpoint"]
+
+    shell = host.get_shell()
+    wallet_config_path = f"/tmp/{node.name}-config.yaml"
+    wallet_config = f'wallet: {wallet_path}\npassword: "{wallet_password}"'
+    shell.exec(f"echo '{wallet_config}' > {wallet_config_path}")
+
+    cli_config = host.get_cli_config("frostfs-cli")
+
+    cli = FrostfsCli(shell, cli_config.exec_path, wallet_config_path)
+    result = cli.control.healthcheck(control_endpoint)
+
+    return HealthStatus.from_stdout(result.stdout)


-@reporter.step_deco("Set status for {node}")
+@reporter.step("Set status for {node}")
 def storage_node_set_status(node: StorageNode, status: str, retries: int = 0) -> None:
     """
     The function sets particular status for given node.
@@ -71,11 +80,24 @@ def storage_node_set_status(node: StorageNode, status: str, retries: int = 0) ->
         status: online or offline.
         retries (optional, int): number of retry attempts if it didn't work from the first time
     """
-    command = f"control set-status --status {status}"
-    _run_control_command_with_retries(node, command, retries)
+    host = node.host
+    service_config = host.get_service_config(node.name)
+    wallet_path = service_config.attributes["wallet_path"]
+    wallet_password = service_config.attributes["wallet_password"]
+    control_endpoint = service_config.attributes["control_endpoint"]
+
+    shell = host.get_shell()
+    wallet_config_path = f"/tmp/{node.name}-config.yaml"
+    wallet_config = f'wallet: {wallet_path}\npassword: "{wallet_password}"'
+    shell.exec(f"echo '{wallet_config}' > {wallet_config_path}")
+
+    cli_config = host.get_cli_config("frostfs-cli")
+
+    cli = FrostfsCli(shell, cli_config.exec_path, wallet_config_path)
+    cli.control.set_status(control_endpoint, status)


-@reporter.step_deco("Get netmap snapshot")
+@reporter.step("Get netmap snapshot")
 def get_netmap_snapshot(node: StorageNode, shell: Shell) -> str:
     """
     The function returns string representation of netmap snapshot.
@@ -95,8 +117,8 @@ def get_netmap_snapshot(node: StorageNode, shell: Shell) -> str:
     ).stdout


-@reporter.step_deco("Get shard list for {node}")
-def node_shard_list(node: StorageNode) -> list[str]:
+@reporter.step("Get shard list for {node}")
+def node_shard_list(node: StorageNode, json: Optional[bool] = None) -> list[str]:
     """
     The function returns list of shards for specified storage node.
||||||
Args:
|
Args:
|
||||||
|
@ -104,41 +126,82 @@ def node_shard_list(node: StorageNode) -> list[str]:
|
||||||
Returns:
|
Returns:
|
||||||
list of shards.
|
list of shards.
|
||||||
"""
|
"""
|
||||||
command = "control shards list"
|
host = node.host
|
||||||
output = _run_control_command_with_retries(node, command)
|
service_config = host.get_service_config(node.name)
|
||||||
return re.findall(r"Shard (.*):", output)
|
wallet_path = service_config.attributes["wallet_path"]
|
||||||
|
wallet_password = service_config.attributes["wallet_password"]
|
||||||
|
control_endpoint = service_config.attributes["control_endpoint"]
|
||||||
|
|
||||||
|
shell = host.get_shell()
|
||||||
|
wallet_config_path = f"/tmp/{node.name}-config.yaml"
|
||||||
|
wallet_config = f'wallet: {wallet_path}\npassword: "{wallet_password}"'
|
||||||
|
shell.exec(f"echo '{wallet_config}' > {wallet_config_path}")
|
||||||
|
|
||||||
|
cli_config = host.get_cli_config("frostfs-cli")
|
||||||
|
|
||||||
|
cli = FrostfsCli(shell, cli_config.exec_path, wallet_config_path)
|
||||||
|
result = cli.shards.list(endpoint=control_endpoint, json_mode=json)
|
||||||
|
|
||||||
|
return re.findall(r"Shard (.*):", result.stdout)
|
||||||
|
|
||||||
|
|
||||||
@reporter.step_deco("Shard set for {node}")
|
@reporter.step("Shard set for {node}")
|
||||||
def node_shard_set_mode(node: StorageNode, shard: str, mode: str) -> str:
|
def node_shard_set_mode(node: StorageNode, shard: list[str], mode: str) -> None:
|
||||||
"""
|
"""
|
||||||
The function sets mode for specified shard.
|
The function sets mode for specified shard.
|
||||||
Args:
|
Args:
|
||||||
node: node on which shard mode should be set.
|
node: node on which shard mode should be set.
|
||||||
"""
|
"""
|
||||||
command = f"control shards set-mode --id {shard} --mode {mode}"
|
host = node.host
|
||||||
return _run_control_command_with_retries(node, command)
|
service_config = host.get_service_config(node.name)
|
||||||
|
wallet_path = service_config.attributes["wallet_path"]
|
||||||
|
wallet_password = service_config.attributes["wallet_password"]
|
||||||
|
control_endpoint = service_config.attributes["control_endpoint"]
|
||||||
|
|
||||||
|
shell = host.get_shell()
|
||||||
|
wallet_config_path = f"/tmp/{node.name}-config.yaml"
|
||||||
|
wallet_config = f'wallet: {wallet_path}\npassword: "{wallet_password}"'
|
||||||
|
shell.exec(f"echo '{wallet_config}' > {wallet_config_path}")
|
||||||
|
|
||||||
|
cli_config = host.get_cli_config("frostfs-cli")
|
||||||
|
|
||||||
|
cli = FrostfsCli(shell, cli_config.exec_path, wallet_config_path)
|
||||||
|
cli.shards.set_mode(endpoint=control_endpoint, mode=mode, id=shard)
|
||||||
|
|
||||||
|
|
||||||
@reporter.step_deco("Drop object from {node}")
|
@reporter.step("Drop object from {node}")
|
||||||
def drop_object(node: StorageNode, cid: str, oid: str) -> str:
|
def drop_object(node: StorageNode, cid: str, oid: str) -> None:
|
||||||
"""
|
"""
|
||||||
The function drops object from specified node.
|
The function drops object from specified node.
|
||||||
Args:
|
Args:
|
||||||
node_id str: node from which object should be dropped.
|
node: node from which object should be dropped.
|
||||||
"""
|
"""
|
||||||
command = f"control drop-objects -o {cid}/{oid}"
|
host = node.host
|
||||||
return _run_control_command_with_retries(node, command)
|
service_config = host.get_service_config(node.name)
|
||||||
|
wallet_path = service_config.attributes["wallet_path"]
|
||||||
|
wallet_password = service_config.attributes["wallet_password"]
|
||||||
|
control_endpoint = service_config.attributes["control_endpoint"]
|
||||||
|
|
||||||
|
shell = host.get_shell()
|
||||||
|
wallet_config_path = f"/tmp/{node.name}-config.yaml"
|
||||||
|
wallet_config = f'wallet: {wallet_path}\npassword: "{wallet_password}"'
|
||||||
|
shell.exec(f"echo '{wallet_config}' > {wallet_config_path}")
|
||||||
|
|
||||||
|
cli_config = host.get_cli_config("frostfs-cli")
|
||||||
|
|
||||||
|
cli = FrostfsCli(shell, cli_config.exec_path, wallet_config_path)
|
||||||
|
objects = f"{cid}/{oid}"
|
||||||
|
cli.control.drop_objects(control_endpoint, objects)
|
||||||
|
|
||||||
|
|
||||||
@reporter.step_deco("Delete data from host for node {node}")
|
@reporter.step("Delete data from host for node {node}")
|
||||||
def delete_node_data(node: StorageNode) -> None:
|
def delete_node_data(node: StorageNode) -> None:
|
||||||
node.stop_service()
|
node.stop_service()
|
||||||
node.host.delete_storage_node_data(node.name)
|
node.host.delete_storage_node_data(node.name)
|
||||||
time.sleep(datetime_utils.parse_time(MORPH_BLOCK_TIME))
|
time.sleep(datetime_utils.parse_time(MORPH_BLOCK_TIME))
|
||||||
|
|
||||||
|
|
||||||
@reporter.step_deco("Exclude node {node_to_exclude} from network map")
|
@reporter.step("Exclude node {node_to_exclude} from network map")
|
||||||
def exclude_node_from_network_map(
|
def exclude_node_from_network_map(
|
||||||
node_to_exclude: StorageNode,
|
node_to_exclude: StorageNode,
|
||||||
alive_node: StorageNode,
|
alive_node: StorageNode,
|
||||||
|
@ -154,12 +217,10 @@ def exclude_node_from_network_map(
|
||||||
wait_for_epochs_align(shell, cluster)
|
wait_for_epochs_align(shell, cluster)
|
||||||
|
|
||||||
snapshot = get_netmap_snapshot(node=alive_node, shell=shell)
|
snapshot = get_netmap_snapshot(node=alive_node, shell=shell)
|
||||||
assert (
|
assert node_netmap_key not in snapshot, f"Expected node with key {node_netmap_key} to be absent in network map"
|
||||||
node_netmap_key not in snapshot
|
|
||||||
), f"Expected node with key {node_netmap_key} to be absent in network map"
|
|
||||||
|
|
||||||
|
|
||||||
@reporter.step_deco("Include node {node_to_include} into network map")
|
@reporter.step("Include node {node_to_include} into network map")
|
||||||
def include_node_to_network_map(
|
def include_node_to_network_map(
|
||||||
node_to_include: StorageNode,
|
node_to_include: StorageNode,
|
||||||
alive_node: StorageNode,
|
alive_node: StorageNode,
|
||||||
|
@ -169,7 +230,7 @@ def include_node_to_network_map(
|
||||||
storage_node_set_status(node_to_include, status="online")
|
storage_node_set_status(node_to_include, status="online")
|
||||||
|
|
||||||
# Per suggestion of @fyrchik we need to wait for 2 blocks after we set status and after tick epoch.
|
# Per suggestion of @fyrchik we need to wait for 2 blocks after we set status and after tick epoch.
|
||||||
# First sleep can be omitted after https://github.com/TrueCloudLab/frostfs-node/issues/60 complete.
|
# First sleep can be omitted after https://git.frostfs.info/TrueCloudLab/frostfs-node/issues/60 complete.
|
||||||
|
|
||||||
time.sleep(datetime_utils.parse_time(MORPH_BLOCK_TIME) * 2)
|
time.sleep(datetime_utils.parse_time(MORPH_BLOCK_TIME) * 2)
|
||||||
tick_epoch(shell, cluster)
|
tick_epoch(shell, cluster)
|
||||||
|
@ -178,39 +239,31 @@ def include_node_to_network_map(
|
||||||
check_node_in_map(node_to_include, shell, alive_node)
|
check_node_in_map(node_to_include, shell, alive_node)
|
||||||
|
|
||||||
|
|
||||||
@reporter.step_deco("Check node {node} in network map")
|
@reporter.step("Check node {node} in network map")
|
||||||
def check_node_in_map(
|
def check_node_in_map(node: StorageNode, shell: Shell, alive_node: Optional[StorageNode] = None) -> None:
|
||||||
node: StorageNode, shell: Shell, alive_node: Optional[StorageNode] = None
|
|
||||||
) -> None:
|
|
||||||
alive_node = alive_node or node
|
alive_node = alive_node or node
|
||||||
|
|
||||||
node_netmap_key = node.get_wallet_public_key()
|
node_netmap_key = node.get_wallet_public_key()
|
||||||
logger.info(f"Node ({node.label}) netmap key: {node_netmap_key}")
|
logger.info(f"Node ({node.label}) netmap key: {node_netmap_key}")
|
||||||
|
|
||||||
snapshot = get_netmap_snapshot(alive_node, shell)
|
snapshot = get_netmap_snapshot(alive_node, shell)
|
||||||
assert (
|
assert node_netmap_key in snapshot, f"Expected node with key {node_netmap_key} to be in network map"
|
||||||
node_netmap_key in snapshot
|
|
||||||
), f"Expected node with key {node_netmap_key} to be in network map"
|
|
||||||
|
|
||||||
|
|
||||||
@reporter.step_deco("Check node {node} NOT in network map")
|
@reporter.step("Check node {node} NOT in network map")
|
||||||
def check_node_not_in_map(
|
def check_node_not_in_map(node: StorageNode, shell: Shell, alive_node: Optional[StorageNode] = None) -> None:
|
||||||
node: StorageNode, shell: Shell, alive_node: Optional[StorageNode] = None
|
|
||||||
) -> None:
|
|
||||||
alive_node = alive_node or node
|
alive_node = alive_node or node
|
||||||
|
|
||||||
node_netmap_key = node.get_wallet_public_key()
|
node_netmap_key = node.get_wallet_public_key()
|
||||||
logger.info(f"Node ({node.label}) netmap key: {node_netmap_key}")
|
logger.info(f"Node ({node.label}) netmap key: {node_netmap_key}")
|
||||||
|
|
||||||
snapshot = get_netmap_snapshot(alive_node, shell)
|
snapshot = get_netmap_snapshot(alive_node, shell)
|
||||||
assert (
|
assert node_netmap_key not in snapshot, f"Expected node with key {node_netmap_key} to be NOT in network map"
|
||||||
node_netmap_key not in snapshot
|
|
||||||
), f"Expected node with key {node_netmap_key} to be NOT in network map"
|
|
||||||
|
|
||||||
|
|
||||||
@reporter.step_deco("Wait for node {node} is ready")
|
@reporter.step("Wait for node {node} is ready")
|
||||||
def wait_for_node_to_be_ready(node: StorageNode) -> None:
|
def wait_for_node_to_be_ready(node: StorageNode) -> None:
|
||||||
timeout, attempts = 30, 6
|
timeout, attempts = 60, 15
|
||||||
for _ in range(attempts):
|
for _ in range(attempts):
|
||||||
try:
|
try:
|
||||||
health_check = storage_node_healthcheck(node)
|
health_check = storage_node_healthcheck(node)
|
||||||
|
@ -219,12 +272,10 @@ def wait_for_node_to_be_ready(node: StorageNode) -> None:
|
||||||
except Exception as err:
|
except Exception as err:
|
||||||
logger.warning(f"Node {node} is not ready:\n{err}")
|
logger.warning(f"Node {node} is not ready:\n{err}")
|
||||||
sleep(timeout)
|
sleep(timeout)
|
||||||
raise AssertionError(
|
raise AssertionError(f"Node {node} hasn't gone to the READY state after {timeout * attempts} seconds")
|
||||||
f"Node {node} hasn't gone to the READY state after {timeout * attempts} seconds"
|
|
||||||
)
|
|
||||||
|
|
||||||
|
|
||||||
@reporter.step_deco("Remove nodes from network map trough cli-adm morph command")
|
@reporter.step("Remove nodes from network map trough cli-adm morph command")
|
||||||
def remove_nodes_from_map_morph(
|
def remove_nodes_from_map_morph(
|
||||||
shell: Shell,
|
shell: Shell,
|
||||||
cluster: Cluster,
|
cluster: Cluster,
|
||||||
|
@ -255,38 +306,3 @@ def remove_nodes_from_map_morph(
|
||||||
config_file=FROSTFS_ADM_CONFIG_PATH,
|
config_file=FROSTFS_ADM_CONFIG_PATH,
|
||||||
)
|
)
|
||||||
frostfsadm.morph.remove_nodes(node_netmap_keys)
|
frostfsadm.morph.remove_nodes(node_netmap_keys)
|
||||||
|
|
||||||
|
|
||||||
def _run_control_command_with_retries(node: StorageNode, command: str, retries: int = 0) -> str:
|
|
||||||
for attempt in range(1 + retries): # original attempt + specified retries
|
|
||||||
try:
|
|
||||||
return _run_control_command(node, command)
|
|
||||||
except AssertionError as err:
|
|
||||||
if attempt < retries:
|
|
||||||
logger.warning(f"Command {command} failed with error {err} and will be retried")
|
|
||||||
continue
|
|
||||||
raise AssertionError(f"Command {command} failed with error {err}") from err
|
|
||||||
|
|
||||||
|
|
||||||
def _run_control_command(node: StorageNode, command: str) -> None:
|
|
||||||
host = node.host
|
|
||||||
|
|
||||||
service_config = host.get_service_config(node.name)
|
|
||||||
wallet_path = service_config.attributes["wallet_path"]
|
|
||||||
wallet_password = service_config.attributes["wallet_password"]
|
|
||||||
control_endpoint = service_config.attributes["control_endpoint"]
|
|
||||||
|
|
||||||
shell = host.get_shell()
|
|
||||||
wallet_config_path = f"/tmp/{node.name}-config.yaml"
|
|
||||||
wallet_config = f'password: "{wallet_password}"'
|
|
||||||
shell.exec(f"echo '{wallet_config}' > {wallet_config_path}")
|
|
||||||
|
|
||||||
cli_config = host.get_cli_config("frostfs-cli")
|
|
||||||
|
|
||||||
# TODO: implement cli.control
|
|
||||||
# cli = FrostfsCli(shell, cli_config.exec_path, wallet_config_path)
|
|
||||||
result = shell.exec(
|
|
||||||
f"{cli_config.exec_path} {command} --endpoint {control_endpoint} "
|
|
||||||
f"--wallet {wallet_path} --config {wallet_config_path}"
|
|
||||||
)
|
|
||||||
return result.stdout
|
|
||||||
|
|
|
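Note how every control-plane helper in the hunks above now inlines the same preamble: read the node's service config, write a temporary wallet config on the host, build a FrostfsCli. That repetition could be factored into a single helper; a minimal sketch under that assumption (the `_get_node_cli` name is hypothetical and not part of this change):

    def _get_node_cli(node: StorageNode) -> tuple[FrostfsCli, str]:
        # Resolve wallet attributes and control endpoint from the node's service config.
        host = node.host
        service_config = host.get_service_config(node.name)
        wallet_path = service_config.attributes["wallet_path"]
        wallet_password = service_config.attributes["wallet_password"]
        control_endpoint = service_config.attributes["control_endpoint"]

        # Materialize a wallet config file on the host for frostfs-cli to consume.
        shell = host.get_shell()
        wallet_config_path = f"/tmp/{node.name}-config.yaml"
        wallet_config = f'wallet: {wallet_path}\npassword: "{wallet_password}"'
        shell.exec(f"echo '{wallet_config}' > {wallet_config_path}")

        cli_config = host.get_cli_config("frostfs-cli")
        return FrostfsCli(shell, cli_config.exec_path, wallet_config_path), control_endpoint

With such a helper, `storage_node_healthcheck` would reduce to `cli, endpoint = _get_node_cli(node)` followed by `cli.control.healthcheck(endpoint)`, and the other functions would shrink accordingly.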
@ -8,21 +8,21 @@ from typing import Optional
from neo3.wallet import utils as neo3_utils
from neo3.wallet import wallet as neo3_wallet

+from frostfs_testlib import reporter
from frostfs_testlib.cli import NeoGo
-from frostfs_testlib.reporter import get_reporter
from frostfs_testlib.resources.cli import NEOGO_EXECUTABLE
from frostfs_testlib.resources.common import FROSTFS_CONTRACT, GAS_HASH, MORPH_BLOCK_TIME
from frostfs_testlib.shell import Shell
from frostfs_testlib.storage.dataclasses.frostfs_services import MorphChain
from frostfs_testlib.utils import converting_utils, datetime_utils, wallet_utils

-reporter = get_reporter()
logger = logging.getLogger("NeoLogger")

EMPTY_PASSWORD = ""
TX_PERSIST_TIMEOUT = 15  # seconds
ASSET_POWER_SIDECHAIN = 10**12


def get_nns_contract_hash(morph_chain: MorphChain) -> str:
    return morph_chain.rpc_client.get_contract_state(1)["hash"]

@ -39,6 +39,7 @@ def get_contract_hash(morph_chain: MorphChain, resolve_name: str, shell: Shell)
    stack_data = json.loads(out.stdout.replace("\n", ""))["stack"][0]["value"]
    return bytes.decode(base64.b64decode(stack_data[0]["value"]))

+
def transaction_accepted(morph_chain: MorphChain, tx_id: str):
    """
    This function returns True in case of accepted TX.

@ -62,7 +63,7 @@ def transaction_accepted(morph_chain: MorphChain, tx_id: str):
    return False


-@reporter.step_deco("Get FrostFS Balance")
+@reporter.step("Get FrostFS Balance")
def get_balance(shell: Shell, morph_chain: MorphChain, wallet_path: str, wallet_password: str = ""):
    """
    This function returns FrostFS balance for given wallet.

@ -82,7 +83,8 @@ def get_balance(shell: Shell, morph_chain: MorphChain, wallet_path: str, wallet_
        logger.error(f"failed to get wallet balance: {out}")
        raise out

-@reporter.step_deco("Transfer Gas")
+
+@reporter.step("Transfer Gas")
def transfer_gas(
    shell: Shell,
    amount: int,

@ -111,16 +113,10 @@ def transfer_gas(
    """
    wallet_from_path = wallet_from_path or morph_chain.get_wallet_path()
    wallet_from_password = (
-        wallet_from_password
-        if wallet_from_password is not None
-        else morph_chain.get_wallet_password()
+        wallet_from_password if wallet_from_password is not None else morph_chain.get_wallet_password()
    )
-    address_from = address_from or wallet_utils.get_last_address_from_wallet(
-        wallet_from_path, wallet_from_password
-    )
-    address_to = address_to or wallet_utils.get_last_address_from_wallet(
-        wallet_to_path, wallet_to_password
-    )
+    address_from = address_from or wallet_utils.get_last_address_from_wallet(wallet_from_path, wallet_from_password)
+    address_to = address_to or wallet_utils.get_last_address_from_wallet(wallet_to_path, wallet_to_password)

    neogo = NeoGo(shell, neo_go_exec_path=NEOGO_EXECUTABLE)
    out = neogo.nep17.transfer(

@ -141,7 +137,7 @@ def transfer_gas(
    time.sleep(datetime_utils.parse_time(MORPH_BLOCK_TIME))


-@reporter.step_deco("Get Sidechain Balance")
+@reporter.step("Get Sidechain Balance")
def get_sidechain_balance(morph_chain: MorphChain, address: str):
    resp = morph_chain.rpc_client.get_nep17_balances(address=address)
    logger.info(f"Got getnep17balances response: {resp}")
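As a usage note: `transfer_gas` only sleeps for one `MORPH_BLOCK_TIME` after submitting the transfer, so callers that need a hard guarantee typically poll `transaction_accepted` up to `TX_PERSIST_TIMEOUT`. An illustrative wrapper under that assumption (`wait_for_tx` is a hypothetical name, and `transaction_accepted` may already retry internally):

    import time

    def wait_for_tx(morph_chain: MorphChain, tx_id: str) -> None:
        # Poll once per second until the transaction is persisted or we give up.
        for _ in range(TX_PERSIST_TIMEOUT):
            if transaction_accepted(morph_chain, tx_id):
                return
            time.sleep(1)
        raise AssertionError(f"Transaction {tx_id} was not accepted within {TX_PERSIST_TIMEOUT} seconds")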
@ -1,34 +1,21 @@
-import json
import logging
import os
-import re
-import uuid
from datetime import datetime, timedelta
from typing import Optional

from dateutil.parser import parse

-from frostfs_testlib.cli import FrostfsAuthmate
-from frostfs_testlib.reporter import get_reporter
-from frostfs_testlib.resources.cli import FROSTFS_AUTHMATE_EXEC
-from frostfs_testlib.resources.common import CREDENTIALS_CREATE_TIMEOUT
+from frostfs_testlib import reporter
from frostfs_testlib.s3 import S3ClientWrapper, VersioningStatus
-from frostfs_testlib.shell import CommandOptions, InteractiveInput, Shell
-from frostfs_testlib.shell.interfaces import SshCredentials
-from frostfs_testlib.steps.cli.container import (
-    search_container_by_name,
-    search_nodes_with_container,
-)
+from frostfs_testlib.shell import Shell
+from frostfs_testlib.steps.cli.container import search_container_by_name, search_nodes_with_container
from frostfs_testlib.storage.cluster import Cluster, ClusterNode
-from frostfs_testlib.storage.dataclasses.frostfs_services import S3Gate
from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
-from frostfs_testlib.utils.cli_utils import _run_with_passwd

-reporter = get_reporter()
logger = logging.getLogger("NeoLogger")


-@reporter.step_deco("Expected all objects are presented in the bucket")
+@reporter.step("Expected all objects are presented in the bucket")
def check_objects_in_bucket(
    s3_client: S3ClientWrapper,
    bucket: str,

@ -37,40 +24,29 @@ def check_objects_in_bucket(
) -> None:
    unexpected_objects = unexpected_objects or []
    bucket_objects = s3_client.list_objects(bucket)
-    assert len(bucket_objects) == len(
-        expected_objects
-    ), f"Expected {len(expected_objects)} objects in the bucket"
+    assert len(bucket_objects) == len(expected_objects), f"Expected {len(expected_objects)} objects in the bucket"
    for bucket_object in expected_objects:
-        assert (
-            bucket_object in bucket_objects
-        ), f"Expected object {bucket_object} in objects list {bucket_objects}"
+        assert bucket_object in bucket_objects, f"Expected object {bucket_object} in objects list {bucket_objects}"

    for bucket_object in unexpected_objects:
-        assert (
-            bucket_object not in bucket_objects
-        ), f"Expected object {bucket_object} not in objects list {bucket_objects}"
+        assert bucket_object not in bucket_objects, f"Expected object {bucket_object} not in objects list {bucket_objects}"


-@reporter.step_deco("Try to get object and got error")
-def try_to_get_objects_and_expect_error(
-    s3_client: S3ClientWrapper, bucket: str, object_keys: list
-) -> None:
+@reporter.step("Try to get object and got error")
+def try_to_get_objects_and_expect_error(s3_client: S3ClientWrapper, bucket: str, object_keys: list) -> None:
    for obj in object_keys:
        try:
            s3_client.get_object(bucket, obj)
            raise AssertionError(f"Object {obj} found in bucket {bucket}")
        except Exception as err:
-            assert "The specified key does not exist" in str(
-                err
-            ), f"Expected error in exception {err}"
+            assert "The specified key does not exist" in str(err), f"Expected error in exception {err}"


-@reporter.step_deco("Set versioning status to '{status}' for bucket '{bucket}'")
+@reporter.step("Set versioning status to '{status}' for bucket '{bucket}'")
def set_bucket_versioning(s3_client: S3ClientWrapper, bucket: str, status: VersioningStatus):
    if status == VersioningStatus.UNDEFINED:
        return

-    s3_client.get_bucket_versioning_status(bucket)
    s3_client.put_bucket_versioning(bucket, status=status)
    bucket_status = s3_client.get_bucket_versioning_status(bucket)
    assert bucket_status == status.value, f"Expected {bucket_status} status. Got {status.value}"

@ -80,15 +56,9 @@ def object_key_from_file_path(full_path: str) -> str:
    return os.path.basename(full_path)


-def assert_tags(
-    actual_tags: list, expected_tags: Optional[list] = None, unexpected_tags: Optional[list] = None
-) -> None:
-    expected_tags = (
-        [{"Key": key, "Value": value} for key, value in expected_tags] if expected_tags else []
-    )
-    unexpected_tags = (
-        [{"Key": key, "Value": value} for key, value in unexpected_tags] if unexpected_tags else []
-    )
+def assert_tags(actual_tags: list, expected_tags: Optional[list] = None, unexpected_tags: Optional[list] = None) -> None:
+    expected_tags = [{"Key": key, "Value": value} for key, value in expected_tags] if expected_tags else []
+    unexpected_tags = [{"Key": key, "Value": value} for key, value in unexpected_tags] if unexpected_tags else []
    if expected_tags == []:
        assert not actual_tags, f"Expected there is no tags, got {actual_tags}"
    assert len(expected_tags) == len(actual_tags)

@ -98,7 +68,7 @@ def assert_tags(
        assert tag not in actual_tags, f"Tag {tag} should not be in {actual_tags}"


-@reporter.step_deco("Expected all tags are presented in object")
+@reporter.step("Expected all tags are presented in object")
def check_tags_by_object(
    s3_client: S3ClientWrapper,
    bucket: str,

@ -107,12 +77,10 @@ def check_tags_by_object(
    unexpected_tags: Optional[list] = None,
) -> None:
    actual_tags = s3_client.get_object_tagging(bucket, key)
-    assert_tags(
-        expected_tags=expected_tags, unexpected_tags=unexpected_tags, actual_tags=actual_tags
-    )
+    assert_tags(expected_tags=expected_tags, unexpected_tags=unexpected_tags, actual_tags=actual_tags)


-@reporter.step_deco("Expected all tags are presented in bucket")
+@reporter.step("Expected all tags are presented in bucket")
def check_tags_by_bucket(
    s3_client: S3ClientWrapper,
    bucket: str,

@ -120,9 +88,7 @@ def check_tags_by_bucket(
    unexpected_tags: Optional[list] = None,
) -> None:
    actual_tags = s3_client.get_bucket_tagging(bucket)
-    assert_tags(
-        expected_tags=expected_tags, unexpected_tags=unexpected_tags, actual_tags=actual_tags
-    )
+    assert_tags(expected_tags=expected_tags, unexpected_tags=unexpected_tags, actual_tags=actual_tags)


def assert_object_lock_mode(

@ -135,102 +101,49 @@ def assert_object_lock_mode(
    retain_period: Optional[int] = None,
):
    object_dict = s3_client.get_object(bucket, file_name, full_output=True)
-    assert (
-        object_dict.get("ObjectLockMode") == object_lock_mode
-    ), f"Expected Object Lock Mode is {object_lock_mode}"
+    assert object_dict.get("ObjectLockMode") == object_lock_mode, f"Expected Object Lock Mode is {object_lock_mode}"
    assert (
        object_dict.get("ObjectLockLegalHoldStatus") == legal_hold_status
    ), f"Expected Object Lock Legal Hold Status is {legal_hold_status}"
    object_retain_date = object_dict.get("ObjectLockRetainUntilDate")
-    retain_date = (
-        parse(object_retain_date) if isinstance(object_retain_date, str) else object_retain_date
-    )
+    retain_date = parse(object_retain_date) if isinstance(object_retain_date, str) else object_retain_date
    if retain_until_date:
        assert retain_date.strftime("%Y-%m-%dT%H:%M:%S") == retain_until_date.strftime(
            "%Y-%m-%dT%H:%M:%S"
        ), f'Expected Object Lock Retain Until Date is {str(retain_until_date.strftime("%Y-%m-%dT%H:%M:%S"))}'
    elif retain_period:
        last_modify_date = object_dict.get("LastModified")
-        last_modify = (
-            parse(last_modify_date) if isinstance(last_modify_date, str) else last_modify_date
-        )
+        last_modify = parse(last_modify_date) if isinstance(last_modify_date, str) else last_modify_date
        assert (
            retain_date - last_modify + timedelta(seconds=1)
        ).days == retain_period, f"Expected retention period is {retain_period} days"


-def assert_s3_acl(acl_grants: list, permitted_users: str):
-    if permitted_users == "AllUsers":
-        grantees = {"AllUsers": 0, "CanonicalUser": 0}
-        for acl_grant in acl_grants:
-            if acl_grant.get("Grantee", {}).get("Type") == "Group":
-                uri = acl_grant.get("Grantee", {}).get("URI")
-                permission = acl_grant.get("Permission")
-                assert (uri, permission) == (
-                    "http://acs.amazonaws.com/groups/global/AllUsers",
-                    "FULL_CONTROL",
-                ), "All Groups should have FULL_CONTROL"
-                grantees["AllUsers"] += 1
-            if acl_grant.get("Grantee", {}).get("Type") == "CanonicalUser":
-                permission = acl_grant.get("Permission")
-                assert permission == "FULL_CONTROL", "Canonical User should have FULL_CONTROL"
-                grantees["CanonicalUser"] += 1
-        assert grantees["AllUsers"] >= 1, "All Users should have FULL_CONTROL"
-        assert grantees["CanonicalUser"] >= 1, "Canonical User should have FULL_CONTROL"
-
-    if permitted_users == "CanonicalUser":
-        for acl_grant in acl_grants:
-            if acl_grant.get("Grantee", {}).get("Type") == "CanonicalUser":
-                permission = acl_grant.get("Permission")
-                assert permission == "FULL_CONTROL", "Only CanonicalUser should have FULL_CONTROL"
-            else:
-                logger.error("FULL_CONTROL is given to All Users")
+def _format_grants_as_strings(grants: list[dict]) -> list:
+    grantee_format = "{g_type}::{uri}:{permission}"
+    return set(
+        [
+            grantee_format.format(
+                g_type=grant.get("Grantee", {}).get("Type", ""),
+                uri=grant.get("Grantee", {}).get("URI", ""),
+                permission=grant.get("Permission", ""),
+            )
+            for grant in grants
+        ]
+    )


-@reporter.step_deco("Init S3 Credentials")
-def init_s3_credentials(
-    wallet: WalletInfo,
-    shell: Shell,
-    cluster: Cluster,
-    policy: Optional[dict] = None,
-    s3gates: Optional[list[S3Gate]] = None,
-    container_placement_policy: Optional[str] = None,
-):
-    gate_public_keys = []
-    bucket = str(uuid.uuid4())
-    if not s3gates:
-        s3gates = [cluster.s3_gates[0]]
-    for s3gate in s3gates:
-        gate_public_keys.append(s3gate.get_wallet_public_key())
-    frostfs_authmate_exec: FrostfsAuthmate = FrostfsAuthmate(shell, FROSTFS_AUTHMATE_EXEC)
-    issue_secret_output = frostfs_authmate_exec.secret.issue(
-        wallet=wallet.path,
-        peer=cluster.default_rpc_endpoint,
-        gate_public_key=gate_public_keys,
-        wallet_password=wallet.password,
-        container_policy=policy,
-        container_friendly_name=bucket,
-        container_placement_policy=container_placement_policy,
-    ).stdout
-    aws_access_key_id = str(
-        re.search(r"access_key_id.*:\s.(?P<aws_access_key_id>\w*)", issue_secret_output).group(
-            "aws_access_key_id"
-        )
-    )
-    aws_secret_access_key = str(
-        re.search(
-            r"secret_access_key.*:\s.(?P<aws_secret_access_key>\w*)", issue_secret_output
-        ).group("aws_secret_access_key")
-    )
-    cid = str(
-        re.search(r"container_id.*:\s.(?P<container_id>\w*)", issue_secret_output).group(
-            "container_id"
-        )
-    )
-    return cid, aws_access_key_id, aws_secret_access_key
-
-
-@reporter.step_deco("Delete bucket with all objects")
+@reporter.step("Verify ACL permissions")
+def verify_acl_permissions(actual_acl_grants: list[dict], expected_acl_grants: list[dict], strict: bool = True):
+    actual_grants = _format_grants_as_strings(actual_acl_grants)
+    expected_grants = _format_grants_as_strings(expected_acl_grants)
+
+    assert expected_grants <= actual_grants, "Permissions mismatch"
+    if strict:
+        assert expected_grants == actual_grants, "Extra permissions found, must not be there"
+
+
+@reporter.step("Delete bucket with all objects")
def delete_bucket_with_objects(s3_client: S3ClientWrapper, bucket: str):
    versioning_status = s3_client.get_bucket_versioning_status(bucket)
    if versioning_status == VersioningStatus.ENABLED.value:

@ -255,16 +168,18 @@ def delete_bucket_with_objects(s3_client: S3ClientWrapper, bucket: str):
    s3_client.delete_bucket(bucket)


-@reporter.step_deco("Search nodes bucket")
+@reporter.step("Search nodes bucket")
def search_nodes_with_bucket(
    cluster: Cluster,
    bucket_name: str,
-    wallet: str,
+    wallet: WalletInfo,
    shell: Shell,
    endpoint: str,
) -> list[ClusterNode]:
-    cid = search_container_by_name(wallet=wallet, name=bucket_name, shell=shell, endpoint=endpoint)
-    nodes_list = search_nodes_with_container(
-        wallet=wallet, cid=cid, shell=shell, endpoint=endpoint, cluster=cluster
-    )
+    cid = None
+    for cluster_node in cluster.cluster_nodes:
+        cid = search_container_by_name(name=bucket_name, node=cluster_node)
+        if cid:
+            break
+    nodes_list = search_nodes_with_container(wallet=wallet, cid=cid, shell=shell, endpoint=endpoint, cluster=cluster)
    return nodes_list
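The new `_format_grants_as_strings`/`verify_acl_permissions` pair replaces the hard-coded `assert_s3_acl` checks: grants are normalized into `Type::URI:Permission` strings, so set inclusion expresses "at least these permissions" and set equality expresses "exactly these". Note the function returns a set (its `-> list` annotation notwithstanding), which is what makes the `<=` subset check work. For illustration only, with made-up grant data:

    actual = [
        {"Grantee": {"Type": "CanonicalUser", "ID": "owner-id"}, "Permission": "FULL_CONTROL"},
        {"Grantee": {"Type": "Group", "URI": "http://acs.amazonaws.com/groups/global/AllUsers"}, "Permission": "READ"},
    ]
    expected = [{"Grantee": {"Type": "CanonicalUser", "ID": "owner-id"}, "Permission": "FULL_CONTROL"}]

    verify_acl_permissions(actual, expected, strict=False)  # passes: expected grants are a subset
    verify_acl_permissions(actual, expected, strict=True)   # fails: the AllUsers READ grant is extra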
@ -4,20 +4,18 @@ import logging
import os
import uuid
from dataclasses import dataclass
-from enum import Enum
from typing import Any, Optional

+from frostfs_testlib import reporter
from frostfs_testlib.cli import FrostfsCli
-from frostfs_testlib.reporter import get_reporter
from frostfs_testlib.resources.cli import FROSTFS_CLI_EXEC
-from frostfs_testlib.resources.common import ASSETS_DIR, DEFAULT_WALLET_CONFIG
+from frostfs_testlib.resources.common import ASSETS_DIR
from frostfs_testlib.shell import Shell
from frostfs_testlib.storage.dataclasses.storage_object_info import StorageObjectInfo
from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
from frostfs_testlib.testing.readable import HumanReadableEnum
from frostfs_testlib.utils import json_utils, wallet_utils

-reporter = get_reporter()
logger = logging.getLogger("NeoLogger")

UNRELATED_KEY = "unrelated key in the session"

@ -50,7 +48,7 @@ class Lifetime:
    iat: int = 0


-@reporter.step_deco("Generate Session Token")
+@reporter.step("Generate Session Token")
def generate_session_token(
    owner_wallet: WalletInfo,
    session_wallet: WalletInfo,

@ -72,9 +70,7 @@ def generate_session_token(

    file_path = os.path.join(tokens_dir, str(uuid.uuid4()))

-    pub_key_64 = wallet_utils.get_wallet_public_key(
-        session_wallet.path, session_wallet.password, "base64"
-    )
+    pub_key_64 = wallet_utils.get_wallet_public_key(session_wallet.path, session_wallet.password, "base64")

    lifetime = lifetime or Lifetime()

@ -99,7 +95,7 @@ def generate_session_token(
    return file_path


-@reporter.step_deco("Generate Session Token For Container")
+@reporter.step("Generate Session Token For Container")
def generate_container_session_token(
    owner_wallet: WalletInfo,
    session_wallet: WalletInfo,

@ -126,11 +122,7 @@ def generate_container_session_token(
        "container": {
            "verb": verb.value,
            "wildcard": cid is None,
-            **(
-                {"containerID": {"value": f"{json_utils.encode_for_json(cid)}"}}
-                if cid is not None
-                else {}
-            ),
+            **({"containerID": {"value": f"{json_utils.encode_for_json(cid)}"}} if cid is not None else {}),
        },
    }

@ -143,7 +135,7 @@ def generate_container_session_token(
    )


-@reporter.step_deco("Generate Session Token For Object")
+@reporter.step("Generate Session Token For Object")
def generate_object_session_token(
    owner_wallet: WalletInfo,
    session_wallet: WalletInfo,

@ -185,7 +177,7 @@ def generate_object_session_token(
    )


-@reporter.step_deco("Get signed token for container session")
+@reporter.step("Get signed token for container session")
def get_container_signed_token(
    owner_wallet: WalletInfo,
    user_wallet: WalletInfo,

@ -207,7 +199,7 @@ def get_container_signed_token(
    return sign_session_token(shell, session_token_file, owner_wallet)


-@reporter.step_deco("Get signed token for object session")
+@reporter.step("Get signed token for object session")
def get_object_signed_token(
    owner_wallet: WalletInfo,
    user_wallet: WalletInfo,

@ -234,12 +226,11 @@ def get_object_signed_token(
    return sign_session_token(shell, session_token_file, owner_wallet)


-@reporter.step_deco("Create Session Token")
+@reporter.step("Create Session Token")
def create_session_token(
    shell: Shell,
    owner: str,
-    wallet_path: str,
-    wallet_password: str,
+    wallet: WalletInfo,
    rpc_endpoint: str,
) -> str:
    """

@ -254,19 +245,18 @@ def create_session_token(
        The path to the generated session token file.
    """
    session_token = os.path.join(os.getcwd(), ASSETS_DIR, str(uuid.uuid4()))
-    frostfscli = FrostfsCli(shell=shell, frostfs_cli_exec_path=FROSTFS_CLI_EXEC)
+    frostfscli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet.config_path)
    frostfscli.session.create(
        rpc_endpoint=rpc_endpoint,
        address=owner,
-        wallet=wallet_path,
-        wallet_password=wallet_password,
        out=session_token,
+        wallet=wallet.path,
    )
    return session_token


-@reporter.step_deco("Sign Session Token")
-def sign_session_token(shell: Shell, session_token_file: str, wlt: WalletInfo) -> str:
+@reporter.step("Sign Session Token")
+def sign_session_token(shell: Shell, session_token_file: str, wallet: WalletInfo) -> str:
    """
    This function signs the session token by the given wallet.

@ -279,10 +269,6 @@ def sign_session_token(shell: Shell, session_token_file: str, wlt: WalletInfo) -
        The path to the signed token.
    """
    signed_token_file = os.path.join(os.getcwd(), ASSETS_DIR, str(uuid.uuid4()))
-    frostfscli = FrostfsCli(
-        shell=shell, frostfs_cli_exec_path=FROSTFS_CLI_EXEC, config_file=DEFAULT_WALLET_CONFIG
-    )
-    frostfscli.util.sign_session_token(
-        wallet=wlt.path, from_file=session_token_file, to_file=signed_token_file
-    )
+    frostfscli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet.config_path)
+    frostfscli.util.sign_session_token(session_token_file, signed_token_file)
    return signed_token_file
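Taken together, these hunks move the CLI wallet plumbing behind `WalletInfo`: the caller hands over one object, its `config_path` feeds `FrostfsCli` directly, and explicit password arguments disappear. An illustrative round trip under the new signatures (the wallet and address values are placeholders):

    token_file = create_session_token(
        shell=shell,
        owner=owner_address,          # placeholder: address of the session owner
        wallet=owner_wallet,          # WalletInfo carrying .path and .config_path
        rpc_endpoint="s01.frostfs.devenv:8080",
    )
    signed_token = sign_session_token(shell, token_file, owner_wallet)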
@ -3,7 +3,7 @@ from time import sleep

import pytest

-from frostfs_testlib.reporter import get_reporter
+from frostfs_testlib import reporter
from frostfs_testlib.resources.error_patterns import OBJECT_ALREADY_REMOVED
from frostfs_testlib.shell import Shell
from frostfs_testlib.steps.cli.object import delete_object, get_object

@ -12,16 +12,13 @@ from frostfs_testlib.steps.tombstone import verify_head_tombstone
from frostfs_testlib.storage.cluster import Cluster
from frostfs_testlib.storage.dataclasses.storage_object_info import StorageObjectInfo

-reporter = get_reporter()
logger = logging.getLogger("NeoLogger")

CLEANUP_TIMEOUT = 10


-@reporter.step_deco("Delete Objects")
-def delete_objects(
-    storage_objects: list[StorageObjectInfo], shell: Shell, cluster: Cluster
-) -> None:
+@reporter.step("Delete Objects")
+def delete_objects(storage_objects: list[StorageObjectInfo], shell: Shell, cluster: Cluster) -> None:
    """
    Deletes given storage objects.

@ -33,14 +30,14 @@ def delete_objects(
    with reporter.step("Delete objects"):
        for storage_object in storage_objects:
            storage_object.tombstone = delete_object(
-                storage_object.wallet_file_path,
+                storage_object.wallet,
                storage_object.cid,
                storage_object.oid,
                shell=shell,
                endpoint=cluster.default_rpc_endpoint,
            )
            verify_head_tombstone(
-                wallet_path=storage_object.wallet_file_path,
+                wallet=storage_object.wallet,
                cid=storage_object.cid,
                oid_ts=storage_object.tombstone,
                oid=storage_object.oid,

@ -55,7 +52,7 @@ def delete_objects(
        for storage_object in storage_objects:
            with pytest.raises(Exception, match=OBJECT_ALREADY_REMOVED):
                get_object(
-                    storage_object.wallet_file_path,
+                    storage_object.wallet,
                    storage_object.cid,
                    storage_object.oid,
                    shell=shell,
@ -6,22 +6,21 @@
"""
import logging

-from frostfs_testlib.reporter import get_reporter
+from frostfs_testlib import reporter
from frostfs_testlib.resources.error_patterns import OBJECT_NOT_FOUND
from frostfs_testlib.shell import Shell
from frostfs_testlib.steps.cli.object import head_object
from frostfs_testlib.steps.complex_object_actions import get_last_object
from frostfs_testlib.storage.cluster import StorageNode
+from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
from frostfs_testlib.utils import string_utils

-reporter = get_reporter()
logger = logging.getLogger("NeoLogger")


-@reporter.step_deco("Get Object Copies")
-def get_object_copies(
-    complexity: str, wallet: str, cid: str, oid: str, shell: Shell, nodes: list[StorageNode]
-) -> int:
+# TODO: Unused, remove or make use of
+@reporter.step("Get Object Copies")
+def get_object_copies(complexity: str, wallet: WalletInfo, cid: str, oid: str, shell: Shell, nodes: list[StorageNode]) -> int:
    """
    The function performs requests to all nodes of the container and
    finds out if they store a copy of the object. The procedure is

@ -45,10 +44,8 @@ def get_object_copies(
    )


-@reporter.step_deco("Get Simple Object Copies")
-def get_simple_object_copies(
-    wallet: str, cid: str, oid: str, shell: Shell, nodes: list[StorageNode]
-) -> int:
+@reporter.step("Get Simple Object Copies")
+def get_simple_object_copies(wallet: WalletInfo, cid: str, oid: str, shell: Shell, nodes: list[StorageNode]) -> int:
    """
    To figure out the number of a simple object copies, only direct
    HEAD requests should be made to the every node of the container.

@ -66,9 +63,7 @@ def get_simple_object_copies(
    copies = 0
    for node in nodes:
        try:
-            response = head_object(
-                wallet, cid, oid, shell=shell, endpoint=node.get_rpc_endpoint(), is_direct=True
-            )
+            response = head_object(wallet, cid, oid, shell=shell, endpoint=node.get_rpc_endpoint(), is_direct=True)
            if response:
                logger.info(f"Found object {oid} on node {node}")
                copies += 1

@ -78,10 +73,8 @@ def get_simple_object_copies(
    return copies


-@reporter.step_deco("Get Complex Object Copies")
-def get_complex_object_copies(
-    wallet: str, cid: str, oid: str, shell: Shell, nodes: list[StorageNode]
-) -> int:
+@reporter.step("Get Complex Object Copies")
+def get_complex_object_copies(wallet: WalletInfo, cid: str, oid: str, shell: Shell, nodes: list[StorageNode]) -> int:
    """
    To figure out the number of a complex object copies, we firstly
    need to retrieve its Last object. We consider that the number of

@ -102,10 +95,8 @@ def get_complex_object_copies(
    return get_simple_object_copies(wallet, cid, last_oid, shell, nodes)


-@reporter.step_deco("Get Nodes With Object")
-def get_nodes_with_object(
-    cid: str, oid: str, shell: Shell, nodes: list[StorageNode]
-) -> list[StorageNode]:
+@reporter.step("Get Nodes With Object")
+def get_nodes_with_object(cid: str, oid: str, shell: Shell, nodes: list[StorageNode]) -> list[StorageNode]:
    """
    The function returns list of nodes which store
    the given object.

@ -120,8 +111,7 @@ def get_nodes_with_object(

    nodes_list = []
    for node in nodes:
-        wallet = node.get_wallet_path()
-        wallet_config = node.get_wallet_config_path()
+        wallet = WalletInfo.from_node(node)
        try:
            res = head_object(
                wallet,

@ -130,7 +120,6 @@ def get_nodes_with_object(
                shell=shell,
                endpoint=node.get_rpc_endpoint(),
                is_direct=True,
-                wallet_config=wallet_config,
            )
            if res is not None:
                logger.info(f"Found object {oid} on node {node}")

@ -141,10 +130,8 @@ def get_nodes_with_object(
    return nodes_list


-@reporter.step_deco("Get Nodes Without Object")
-def get_nodes_without_object(
-    wallet: str, cid: str, oid: str, shell: Shell, nodes: list[StorageNode]
-) -> list[StorageNode]:
+@reporter.step("Get Nodes Without Object")
+def get_nodes_without_object(wallet: WalletInfo, cid: str, oid: str, shell: Shell, nodes: list[StorageNode]) -> list[StorageNode]:
    """
    The function returns list of nodes which do not store
    the given object.

@ -160,9 +147,7 @@ def get_nodes_without_object(
    nodes_list = []
    for node in nodes:
        try:
-            res = head_object(
-                wallet, cid, oid, shell=shell, endpoint=node.get_rpc_endpoint(), is_direct=True
-            )
+            res = head_object(wallet, cid, oid, shell=shell, endpoint=node.get_rpc_endpoint(), is_direct=True)
            if res is None:
                nodes_list.append(node)
        except Exception as err:
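A typical caller of these helpers asserts a container's placement policy by counting direct HEAD hits across the container nodes. An illustrative sketch under the new `WalletInfo`-based signatures (the expected copy count of 2 is a placeholder):

    nodes = cluster.services(StorageNode)
    copies = get_simple_object_copies(wallet, cid, oid, shell, nodes)
    assert copies == 2, f"Expected 2 copies by placement policy, got {copies}"

    # Cross-check: the nodes actually holding the object should match the count.
    holders = get_nodes_with_object(cid, oid, shell, nodes)
    assert len(holders) == copies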
@@ -1,41 +1,24 @@
-import json
 import logging

-from neo3.wallet import wallet
-
-from frostfs_testlib.reporter import get_reporter
+from frostfs_testlib import reporter
 from frostfs_testlib.shell import Shell
 from frostfs_testlib.steps.cli.object import head_object
+from frostfs_testlib.storage.dataclasses.wallet import WalletInfo

-reporter = get_reporter()
 logger = logging.getLogger("NeoLogger")


-@reporter.step_deco("Verify Head Tombstone")
-def verify_head_tombstone(
-    wallet_path: str, cid: str, oid_ts: str, oid: str, shell: Shell, endpoint: str
-):
-    header = head_object(wallet_path, cid, oid_ts, shell=shell, endpoint=endpoint)["header"]
+@reporter.step("Verify Head Tombstone")
+def verify_head_tombstone(wallet: WalletInfo, cid: str, oid_ts: str, oid: str, shell: Shell, endpoint: str):
+    header = head_object(wallet, cid, oid_ts, shell=shell, endpoint=endpoint)["header"]

     s_oid = header["sessionToken"]["body"]["object"]["target"]["objects"]
     logger.info(f"Header Session OIDs is {s_oid}")
     logger.info(f"OID is {oid}")

     assert header["containerID"] == cid, "Tombstone Header CID is wrong"
-
-    with open(wallet_path, "r") as file:
-        wlt_data = json.loads(file.read())
-    wlt = wallet.Wallet.from_json(wlt_data, password="")
-    addr = wlt.accounts[0].address
-
-    assert header["ownerID"] == addr, "Tombstone Owner ID is wrong"
+    assert header["ownerID"] == wallet.get_address_from_json(0), "Tombstone Owner ID is wrong"
     assert header["objectType"] == "TOMBSTONE", "Header Type isn't Tombstone"
-    assert (
-        header["sessionToken"]["body"]["object"]["verb"] == "DELETE"
-    ), "Header Session Type isn't DELETE"
-    assert (
-        header["sessionToken"]["body"]["object"]["target"]["container"] == cid
-    ), "Header Session ID is wrong"
-    assert (
-        oid in header["sessionToken"]["body"]["object"]["target"]["objects"]
-    ), "Header Session OID is wrong"
+    assert header["sessionToken"]["body"]["object"]["verb"] == "DELETE", "Header Session Type isn't DELETE"
+    assert header["sessionToken"]["body"]["object"]["target"]["container"] == cid, "Header Session ID is wrong"
+    assert oid in header["sessionToken"]["body"]["object"]["target"]["objects"], "Header Session OID is wrong"
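The deleted lines show what the new WalletInfo.get_address_from_json presumably wraps; a sketch reconstructed from them (the method itself lives in the wallet dataclass, which this diff does not show):

    import json
    from neo3.wallet import wallet as neo3_wallet

    def get_address_from_json(wallet_path: str, account_id: int = 0) -> str:
        # load the wallet JSON and return the requested account address,
        # exactly as the removed in-line code did
        with open(wallet_path) as file:
            wlt_data = json.load(file)
        wlt = neo3_wallet.Wallet.from_json(wlt_data, password="")
        return wlt.accounts[account_id].address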
@@ -1,22 +1,7 @@
-from frostfs_testlib.storage.constants import _FrostfsServicesNames
-from frostfs_testlib.storage.dataclasses.frostfs_services import (
-    HTTPGate,
-    InnerRing,
-    MorphChain,
-    S3Gate,
-    StorageNode,
-)
 from frostfs_testlib.storage.service_registry import ServiceRegistry

 __class_registry = ServiceRegistry()

-# Register default public services
-__class_registry.register_service(_FrostfsServicesNames.STORAGE, StorageNode)
-__class_registry.register_service(_FrostfsServicesNames.INNER_RING, InnerRing)
-__class_registry.register_service(_FrostfsServicesNames.MORPH_CHAIN, MorphChain)
-__class_registry.register_service(_FrostfsServicesNames.S3_GATE, S3Gate)
-__class_registry.register_service(_FrostfsServicesNames.HTTP_GATE, HTTPGate)
-

 def get_service_registry() -> ServiceRegistry:
     """Returns registry with registered classes related to cluster and cluster nodes.
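The default registrations are gone from this module, so they presumably happen elsewhere now (this diff does not show where). A consumer could still register a class by hand; a sketch using the names from the deleted lines:

    registry = get_service_registry()
    # "s" was the old _FrostfsServicesNames.STORAGE value; the StorageNode import is assumed
    registry.register_service("s", StorageNode)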
@@ -4,23 +4,17 @@ import re
 import yaml
 from yarl import URL

+from frostfs_testlib import reporter
 from frostfs_testlib.hosting import Host, Hosting
 from frostfs_testlib.hosting.config import ServiceConfig
-from frostfs_testlib.reporter import get_reporter
 from frostfs_testlib.storage import get_service_registry
+from frostfs_testlib.storage.configuration.interfaces import ServiceConfigurationYml
 from frostfs_testlib.storage.constants import ConfigAttributes
-from frostfs_testlib.storage.dataclasses.frostfs_services import (
-    HTTPGate,
-    InnerRing,
-    MorphChain,
-    S3Gate,
-    StorageNode,
-)
+from frostfs_testlib.storage.dataclasses.frostfs_services import HTTPGate, InnerRing, MorphChain, S3Gate, StorageNode
 from frostfs_testlib.storage.dataclasses.node_base import NodeBase, ServiceClass
 from frostfs_testlib.storage.dataclasses.storage_object_info import Interfaces
 from frostfs_testlib.storage.service_registry import ServiceRegistry
-
-reporter = get_reporter()
+from frostfs_testlib.storage.dataclasses.metrics import Metrics


 class ClusterNode:
@@ -31,11 +25,13 @@ class ClusterNode:
     class_registry: ServiceRegistry
     id: int
     host: Host
+    metrics: Metrics

     def __init__(self, host: Host, id: int) -> None:
         self.host = host
         self.id = id
         self.class_registry = get_service_registry()
+        self.metrics = Metrics(host=self.host, metrics_endpoint=self.storage_node.get_metrics_endpoint())

     @property
     def host_ip(self):
@@ -78,6 +74,7 @@ class ClusterNode:
     def s3_gate(self) -> S3Gate:
         return self.service(S3Gate)

+    # TODO: Deprecated. Use config with ServiceConfigurationYml interface
     def get_config(self, config_file_path: str) -> dict:
         shell = self.host.get_shell()

@@ -87,12 +84,16 @@ class ClusterNode:
         config = yaml.safe_load(config_text)
         return config

+    # TODO: Deprecated. Use config with ServiceConfigurationYml interface
     def save_config(self, new_config: dict, config_file_path: str) -> None:
         shell = self.host.get_shell()

         config_str = yaml.dump(new_config)
         shell.exec(f"echo '{config_str}' | sudo tee {config_file_path}")

+    def config(self, service_type: type[ServiceClass]) -> ServiceConfigurationYml:
+        return self.service(service_type).config
+
     def service(self, service_type: type[ServiceClass]) -> ServiceClass:
         """
         Get a service cluster node of specified type.
@@ -108,7 +109,7 @@ class ClusterNode:
         service_entry = self.class_registry.get_entry(service_type)
         service_name = service_entry["hosting_service_name"]

-        pattern = f"{service_name}{self.id:02}"
+        pattern = f"{service_name}_{self.id:02}"
         config = self.host.get_service_config(pattern)

         return service_type(
@@ -117,10 +118,24 @@ class ClusterNode:
             self.host,
         )

-    def get_list_of_services(self) -> list[str]:
-        return [
-            config.attributes[ConfigAttributes.SERVICE_NAME] for config in self.host.config.services
-        ]
+    @property
+    def services(self) -> list[NodeBase]:
+        svcs: list[NodeBase] = []
+        svcs_names_on_node = [svc.name for svc in self.host.config.services]
+        for entry in self.class_registry._class_mapping.values():
+            hosting_svc_name = entry["hosting_service_name"]
+            pattern = f"{hosting_svc_name}_{self.id:02}"
+            if pattern in svcs_names_on_node:
+                config = self.host.get_service_config(pattern)
+                svcs.append(
+                    entry["cls"](
+                        self.id,
+                        config.name,
+                        self.host,
+                    )
+                )
+
+        return svcs

     def get_all_interfaces(self) -> dict[str, str]:
         return self.host.config.interfaces
@@ -129,32 +144,16 @@ class ClusterNode:
         return self.host.config.interfaces[interface.value]

     def get_data_interfaces(self) -> list[str]:
-        return [
-            ip_address
-            for name_interface, ip_address in self.host.config.interfaces.items()
-            if "data" in name_interface
-        ]
+        return [ip_address for name_interface, ip_address in self.host.config.interfaces.items() if "data" in name_interface]

     def get_data_interface(self, search_interface: str) -> list[str]:
-        return [
-            self.host.config.interfaces[interface]
-            for interface in self.host.config.interfaces.keys()
-            if search_interface == interface
-        ]
+        return [self.host.config.interfaces[interface] for interface in self.host.config.interfaces.keys() if search_interface == interface]

     def get_internal_interfaces(self) -> list[str]:
-        return [
-            ip_address
-            for name_interface, ip_address in self.host.config.interfaces.items()
-            if "internal" in name_interface
-        ]
+        return [ip_address for name_interface, ip_address in self.host.config.interfaces.items() if "internal" in name_interface]

     def get_internal_interface(self, search_internal: str) -> list[str]:
-        return [
-            self.host.config.interfaces[interface]
-            for interface in self.host.config.interfaces.keys()
-            if search_internal == interface
-        ]
+        return [self.host.config.interfaces[interface] for interface in self.host.config.interfaces.keys() if search_internal == interface]


 class Cluster:
@@ -165,8 +164,6 @@ class Cluster:
     default_rpc_endpoint: str
     default_s3_gate_endpoint: str
     default_http_gate_endpoint: str
-    default_http_hostname: str
-    default_s3_hostname: str

     def __init__(self, hosting: Hosting) -> None:
         self._hosting = hosting
@@ -175,8 +172,6 @@ class Cluster:
         self.default_rpc_endpoint = self.services(StorageNode)[0].get_rpc_endpoint()
         self.default_s3_gate_endpoint = self.services(S3Gate)[0].get_endpoint()
         self.default_http_gate_endpoint = self.services(HTTPGate)[0].get_endpoint()
-        self.default_http_hostname = self.services(StorageNode)[0].get_http_hostname()
-        self.default_s3_hostname = self.services(StorageNode)[0].get_s3_hostname()

     @property
     def hosts(self) -> list[Host]:
@@ -221,9 +216,7 @@ class Cluster:

         cluster_nodes = set()
         for service in services:
-            cluster_nodes.update(
-                [node for node in self.cluster_nodes if node.service(type(service)) == service]
-            )
+            cluster_nodes.update([node for node in self.cluster_nodes if node.service(type(service)) == service])

         return list(cluster_nodes)
@@ -260,13 +253,13 @@ class Cluster:
         service_name = service["hosting_service_name"]
         cls: type[NodeBase] = service["cls"]

-        pattern = f"{service_name}\d*$"
+        pattern = f"{service_name}_\d*$"
         configs = self.hosting.find_service_configs(pattern)

         found_nodes = []
         for config in configs:
             # config.name is something like s3-gate01. Cut last digits to know service type
-            service_type = re.findall(".*\D", config.name)[0]
+            service_type = re.findall("(.*)_\d+", config.name)[0]
             # exclude unsupported services
             if service_type != service_name:
                 continue
@@ -331,8 +324,6 @@ class Cluster:
         return [node.get_endpoint() for node in nodes]

     def get_nodes_by_ip(self, ips: list[str]) -> list[ClusterNode]:
-        cluster_nodes = [
-            node for node in self.cluster_nodes if URL(node.morph_chain.get_endpoint()).host in ips
-        ]
+        cluster_nodes = [node for node in self.cluster_nodes if URL(node.morph_chain.get_endpoint()).host in ips]
         with reporter.step(f"Return cluster nodes - {cluster_nodes}"):
             return cluster_nodes
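A usage sketch of the reworked ClusterNode surface (a built `cluster` is assumed): `services` now yields typed NodeBase instances instead of service-name strings, and per-service YAML access goes through the new config() accessor:

    node = cluster.cluster_nodes[0]
    for svc in node.services:  # NodeBase objects such as StorageNode or S3Gate
        print(svc.name)
    storage_yml = node.config(StorageNode)  # ServiceConfigurationYml
    resync = storage_yml.get("storage:shard:default:resync_metabase")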
65  src/frostfs_testlib/storage/configuration/interfaces.py  Normal file
@@ -0,0 +1,65 @@
+from abc import ABC, abstractmethod
+from typing import Any
+
+
+class ServiceConfigurationYml(ABC):
+    """
+    Class to manipulate yml configuration for service
+    """
+
+    def _find_option(self, key: str, data: dict):
+        tree = key.split(":")
+        current = data
+        for node in tree:
+            if isinstance(current, list) and len(current) - 1 >= int(node):
+                current = current[int(node)]
+                continue
+
+            if node not in current:
+                return None
+
+            current = current[node]
+
+        return current
+
+    def _set_option(self, key: str, value: Any, data: dict):
+        tree = key.split(":")
+        current = data
+        for node in tree[:-1]:
+            if isinstance(current, list) and len(current) - 1 >= int(node):
+                current = current[int(node)]
+                continue
+
+            if node not in current:
+                current[node] = {}
+
+            current = current[node]
+
+        current[tree[-1]] = value
+
+    @abstractmethod
+    def get(self, key: str) -> str:
+        """
+        Get parameter value from current configuration
+
+        Args:
+            key: key of the parameter in yaml format like 'storage:shard:default:resync_metabase'
+
+        Returns:
+            value of the parameter
+        """
+
+    @abstractmethod
+    def set(self, values: dict[str, Any]):
+        """
+        Sets parameters to configuration
+
+        Args:
+            values: dict where key is the key of the parameter in yaml format like 'storage:shard:default:resync_metabase' and value is the value of the option to set
+        """
+
+    @abstractmethod
+    def revert(self):
+        """
+        Revert changes
+        """
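A tiny worked example of the colon-separated key traversal above; the _Probe subclass exists only to make the ABC instantiable:

    class _Probe(ServiceConfigurationYml):
        def get(self, key): ...
        def set(self, values): ...
        def revert(self): ...

    data = {"storage": {"shard": [{"resync_metabase": True}]}}
    probe = _Probe()
    # numeric path segments index into lists, string segments into dicts
    assert probe._find_option("storage:shard:0:resync_metabase", data) is True
    probe._set_option("storage:shard:0:resync_metabase", False, data)
    assert data["storage"]["shard"][0]["resync_metabase"] is False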
@@ -0,0 +1,88 @@
+import os
+import re
+from typing import Any
+
+import yaml
+
+from frostfs_testlib import reporter
+from frostfs_testlib.shell.interfaces import CommandOptions, Shell
+from frostfs_testlib.storage.configuration.interfaces import ServiceConfigurationYml
+
+
+def extend_dict(extend_me: dict, extend_by: dict):
+    if isinstance(extend_by, dict):
+        for k, v in extend_by.items():
+            if k in extend_me:
+                extend_dict(extend_me.get(k), v)
+            else:
+                extend_me[k] = v
+    else:
+        extend_me += extend_by
+
+
+class ServiceConfiguration(ServiceConfigurationYml):
+    def __init__(self, service_name: str, shell: Shell, config_dir: str, main_config_path: str) -> None:
+        self.service_name = service_name
+        self.shell = shell
+        self.main_config_path = main_config_path
+        self.confd_path = os.path.join(config_dir, "conf.d")
+        self.custom_file = os.path.join(self.confd_path, "99_changes.yml")
+
+    def _path_exists(self, path: str) -> bool:
+        return not self.shell.exec(f"test -e {path}", options=CommandOptions(check=False)).return_code
+
+    def _get_config_files(self):
+        config_files = [self.main_config_path]
+
+        if self._path_exists(self.confd_path):
+            files = self.shell.exec(f"find {self.confd_path} -type f").stdout.strip().split()
+            # Sorting files in backwards order from latest to first one
+            config_files.extend(sorted(files, key=lambda x: -int(re.findall("^\d+", os.path.basename(x))[0])))
+
+        return config_files
+
+    def _get_configuration(self, config_files: list[str]) -> dict:
+        if not config_files:
+            return [{}]
+
+        splitter = "+++++"
+        files_str = " ".join(config_files)
+        all_content = self.shell.exec(
+            f"echo Getting config files; for file in {files_str}; do (echo {splitter}; sudo cat ${{file}}); done"
+        ).stdout
+        files_content = all_content.split("+++++")[1:]
+        files_data = [yaml.safe_load(file_content) for file_content in files_content]
+
+        mergedData = {}
+        for data in files_data:
+            extend_dict(mergedData, data)
+
+        return mergedData
+
+    def get(self, key: str) -> str | Any:
+        with reporter.step(f"Get {key} configuration value for {self.service_name}"):
+            config_files = self._get_config_files()
+            configuration = self._get_configuration(config_files)
+            result = self._find_option(key, configuration)
+            return result
+
+    def set(self, values: dict[str, Any]):
+        with reporter.step(f"Change configuration for {self.service_name}"):
+            if not self._path_exists(self.confd_path):
+                self.shell.exec(f"mkdir {self.confd_path}")
+
+            if self._path_exists(self.custom_file):
+                data = self._get_configuration([self.custom_file])
+            else:
+                data = {}
+
+            for key, value in values.items():
+                self._set_option(key, value, data)
+
+            content = yaml.dump(data)
+            self.shell.exec(f"echo '{content}' | sudo tee {self.custom_file}")
+            self.shell.exec(f"chmod 777 {self.custom_file}")
+
+    def revert(self):
+        with reporter.step(f"Revert changed options for {self.service_name}"):
+            self.shell.exec(f"rm -rf {self.custom_file}")
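A usage sketch (the paths and the `shell` fixture are illustrative): set() writes overrides into conf.d/99_changes.yml on the node, get() reads the merged view of the main config plus conf.d, and revert() simply deletes the override file:

    config = ServiceConfiguration("frostfs-storage", shell, "/etc/frostfs/storage", "/etc/frostfs/storage/config.yml")
    config.set({"storage:shard:0:resync_metabase": True})
    value = config.get("storage:shard:0:resync_metabase")
    config.revert()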
@@ -3,23 +3,16 @@ class ConfigAttributes:
     WALLET_PASSWORD = "wallet_password"
     WALLET_PATH = "wallet_path"
     WALLET_CONFIG = "wallet_config"
+    CONFIG_DIR = "service_config_dir"
     CONFIG_PATH = "config_path"
     SHARD_CONFIG_PATH = "shard_config_path"
+    LOGGER_CONFIG_PATH = "logger_config_path"
     LOCAL_WALLET_PATH = "local_wallet_path"
-    LOCAL_WALLET_CONFIG = "local_config_path"
+    LOCAL_WALLET_CONFIG = "local_wallet_config_path"
+    REMOTE_WALLET_CONFIG = "remote_wallet_config_path"
     ENDPOINT_DATA_0 = "endpoint_data0"
     ENDPOINT_DATA_1 = "endpoint_data1"
     ENDPOINT_INTERNAL = "endpoint_internal0"
     ENDPOINT_PROMETHEUS = "endpoint_prometheus"
     CONTROL_ENDPOINT = "control_endpoint"
     UN_LOCODE = "un_locode"
-    HTTP_HOSTNAME = "http_hostname"
-    S3_HOSTNAME = "s3_hostname"
-
-
-class _FrostfsServicesNames:
-    STORAGE = "s"
-    S3_GATE = "s3-gate"
-    HTTP_GATE = "http-gate"
-    MORPH_CHAIN = "morph-chain"
-    INNER_RING = "ir"
@@ -1,24 +1,17 @@
 import copy
-from typing import Optional
+from datetime import datetime

 import frostfs_testlib.resources.optionals as optionals
-from frostfs_testlib.load.interfaces import ScenarioRunner
-from frostfs_testlib.load.load_config import (
-    EndpointSelectionStrategy,
-    LoadParams,
-    LoadScenario,
-    LoadType,
-)
+from frostfs_testlib import reporter
+from frostfs_testlib.load.interfaces.scenario_runner import ScenarioRunner
+from frostfs_testlib.load.load_config import EndpointSelectionStrategy, LoadParams, LoadScenario, LoadType
 from frostfs_testlib.load.load_report import LoadReport
 from frostfs_testlib.load.load_verifiers import LoadVerifier
-from frostfs_testlib.reporter import get_reporter
 from frostfs_testlib.storage.cluster import ClusterNode
 from frostfs_testlib.storage.dataclasses.frostfs_services import S3Gate, StorageNode
-from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
+from frostfs_testlib.testing.parallel import parallel
 from frostfs_testlib.testing.test_control import run_optionally

-reporter = get_reporter()
-

 class BackgroundLoadController:
     k6_dir: str
@@ -28,17 +21,16 @@ class BackgroundLoadController:
     cluster_nodes: list[ClusterNode]
     nodes_under_load: list[ClusterNode]
     load_counter: int
-    loaders_wallet: WalletInfo
     load_summaries: dict
     endpoints: list[str]
     runner: ScenarioRunner
     started: bool
+    load_reporters: list[LoadReport]

     def __init__(
         self,
         k6_dir: str,
         load_params: LoadParams,
-        loaders_wallet: WalletInfo,
         cluster_nodes: list[ClusterNode],
         nodes_under_load: list[ClusterNode],
         runner: ScenarioRunner,
@@ -49,16 +41,14 @@ class BackgroundLoadController:
         self.cluster_nodes = cluster_nodes
         self.nodes_under_load = nodes_under_load
         self.load_counter = 1
-        self.loaders_wallet = loaders_wallet
         self.runner = runner
         self.started = False
+        self.load_reporters = []
         if load_params.endpoint_selection_strategy is None:
             raise RuntimeError("endpoint_selection_strategy should not be None")

     @run_optionally(optionals.OPTIONAL_BACKGROUND_LOAD_ENABLED, [])
-    def _get_endpoints(
-        self, load_type: LoadType, endpoint_selection_strategy: EndpointSelectionStrategy
-    ):
+    def _get_endpoints(self, load_type: LoadType, endpoint_selection_strategy: EndpointSelectionStrategy):
         all_endpoints = {
             LoadType.gRPC: {
                 EndpointSelectionStrategy.ALL: list(
@@ -69,10 +59,7 @@ class BackgroundLoadController:
                     )
                 ),
                 EndpointSelectionStrategy.FIRST: list(
-                    set(
-                        node_under_load.service(StorageNode).get_rpc_endpoint()
-                        for node_under_load in self.nodes_under_load
-                    )
+                    set(node_under_load.service(StorageNode).get_rpc_endpoint() for node_under_load in self.nodes_under_load)
                 ),
             },
             # for some reason xk6 appends http protocol on its own
@@ -85,10 +72,7 @@ class BackgroundLoadController:
                    )
                 ),
                 EndpointSelectionStrategy.FIRST: list(
-                    set(
-                        node_under_load.service(S3Gate).get_endpoint()
-                        for node_under_load in self.nodes_under_load
-                    )
+                    set(node_under_load.service(S3Gate).get_endpoint() for node_under_load in self.nodes_under_load)
                 ),
             },
         }
@@ -96,16 +80,20 @@ class BackgroundLoadController:
         return all_endpoints[load_type][endpoint_selection_strategy]

     @run_optionally(optionals.OPTIONAL_BACKGROUND_LOAD_ENABLED)
-    @reporter.step_deco("Prepare load instances")
-    def prepare(self):
-        self.endpoints = self._get_endpoints(
-            self.load_params.load_type, self.load_params.endpoint_selection_strategy
-        )
-        self.runner.prepare(
-            self.load_params, self.cluster_nodes, self.nodes_under_load, self.k6_dir
-        )
+    @reporter.step("Init k6 instances")
+    def init_k6(self):
+        self.endpoints = self._get_endpoints(self.load_params.load_type, self.load_params.endpoint_selection_strategy)
         self.runner.init_k6_instances(self.load_params, self.endpoints, self.k6_dir)

+    @run_optionally(optionals.OPTIONAL_BACKGROUND_LOAD_ENABLED)
+    @reporter.step("Prepare load instances")
+    def prepare(self):
+        self.runner.prepare(self.load_params, self.cluster_nodes, self.nodes_under_load, self.k6_dir)
+        self.init_k6()
+
+    def append_reporter(self, load_report: LoadReport):
+        self.load_reporters.append(load_report)
+
     @run_optionally(optionals.OPTIONAL_BACKGROUND_LOAD_ENABLED)
     def start(self):
         with reporter.step(f"Start load on nodes {self.nodes_under_load}"):
@@ -113,7 +101,7 @@ class BackgroundLoadController:
             self.started = True

     @run_optionally(optionals.OPTIONAL_BACKGROUND_LOAD_ENABLED)
-    @reporter.step_deco("Stop load")
+    @reporter.step("Stop load")
     def stop(self):
         self.runner.stop()
@@ -122,7 +110,7 @@ class BackgroundLoadController:
         return self.runner.is_running

     @run_optionally(optionals.OPTIONAL_BACKGROUND_LOAD_ENABLED)
-    @reporter.step_deco("Reset load")
+    @reporter.step("Reset load")
     def _reset_for_consequent_load(self):
         """This method is required if we want to run multiple loads during test run.
         Raise load counter by 1 and append it to load_id
@@ -132,7 +120,7 @@ class BackgroundLoadController:
         self.load_params.set_id(f"{self.load_params.load_id}_{self.load_counter}")

     @run_optionally(optionals.OPTIONAL_BACKGROUND_LOAD_ENABLED)
-    @reporter.step_deco("Startup load")
+    @reporter.step("Startup load")
     def startup(self):
         self.prepare()
         self.preset()
@@ -143,19 +131,33 @@ class BackgroundLoadController:
         self.runner.preset()

     @run_optionally(optionals.OPTIONAL_BACKGROUND_LOAD_ENABLED)
-    @reporter.step_deco("Stop and get results of load")
-    def teardown(self, load_report: Optional[LoadReport] = None):
+    @reporter.step("Stop and get results of load")
+    def teardown(self):
         if not self.started:
             return

         self.stop()
         self.load_summaries = self._get_results()
         self.started = False
-        if load_report:
+
+        start_time = min(self._get_start_times())
+        end_time = max(self._get_end_times())
+
+        for load_report in self.load_reporters:
+            load_report.set_start_time(start_time)
+            load_report.set_end_time(end_time)
             load_report.add_summaries(self.load_summaries)

+    def _get_start_times(self) -> list[datetime]:
+        futures = parallel([k6.get_start_time for k6 in self.runner.get_k6_instances()])
+        return [future.result() for future in futures]
+
+    def _get_end_times(self) -> list[datetime]:
+        futures = parallel([k6.get_end_time for k6 in self.runner.get_k6_instances()])
+        return [future.result() for future in futures]
+
     @run_optionally(optionals.OPTIONAL_BACKGROUND_LOAD_ENABLED)
-    @reporter.step_deco("Run post-load verification")
+    @reporter.step("Run post-load verification")
     def verify(self):
         try:
             load_issues = self._collect_load_issues()
@@ -167,7 +169,7 @@ class BackgroundLoadController:
             self._reset_for_consequent_load()

     @run_optionally(optionals.OPTIONAL_BACKGROUND_LOAD_ENABLED)
-    @reporter.step_deco("Collect load issues")
+    @reporter.step("Collect load issues")
     def _collect_load_issues(self):
         verifier = LoadVerifier(self.load_params)
         return verifier.collect_load_issues(self.load_summaries)
@@ -177,7 +179,7 @@ class BackgroundLoadController:
         self.runner.wait_until_finish(soft_timeout)

     @run_optionally(optionals.OPTIONAL_BACKGROUND_LOAD_ENABLED)
-    @reporter.step_deco("Verify loaded objects")
+    @reporter.step("Verify loaded objects")
     def _run_verify_scenario(self) -> list[str]:
         self.verification_params = LoadParams(
             verify_clients=self.load_params.verify_clients,
@@ -185,15 +187,19 @@ class BackgroundLoadController:
             read_from=self.load_params.read_from,
             registry_file=self.load_params.registry_file,
             verify_time=self.load_params.verify_time,
+            custom_registry=self.load_params.custom_registry,
             load_type=self.load_params.load_type,
             load_id=self.load_params.load_id,
             vu_init_time=0,
             working_dir=self.load_params.working_dir,
             endpoint_selection_strategy=self.load_params.endpoint_selection_strategy,
             k6_process_allocation_strategy=self.load_params.k6_process_allocation_strategy,
-            setup_timeout="1s",
+            setup_timeout=self.load_params.setup_timeout,
         )

+        if self.verification_params.custom_registry:
+            self.verification_params.registry_file = self.load_params.custom_registry
+
         if self.verification_params.verify_time is None:
             raise RuntimeError("verify_time should not be none")
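A sketch of the reworked reporting flow (the fixture construction and the LoadReport constructor arguments are assumed): reports are now attached up front instead of being passed into teardown():

    controller = BackgroundLoadController(k6_dir, load_params, cluster_nodes, nodes_under_load, runner)
    report = LoadReport(test_name)  # constructor signature assumed
    controller.append_reporter(report)
    controller.startup()
    # ... run the test ...
    controller.teardown()  # stamps min/max k6 start/end times into every attached report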
@@ -1,41 +1,82 @@
-import copy
+import datetime
+import logging
 import time
+from typing import TypeVar

 import frostfs_testlib.resources.optionals as optionals
-from frostfs_testlib.reporter import get_reporter
+from frostfs_testlib import reporter
+from frostfs_testlib.cli import FrostfsAdm, FrostfsCli
+from frostfs_testlib.cli.netmap_parser import NetmapParser
+from frostfs_testlib.healthcheck.interfaces import Healthcheck
+from frostfs_testlib.hosting.interfaces import HostStatus
+from frostfs_testlib.plugins import load_all
+from frostfs_testlib.resources.cli import FROSTFS_ADM_CONFIG_PATH, FROSTFS_ADM_EXEC, FROSTFS_CLI_EXEC
+from frostfs_testlib.resources.common import MORPH_BLOCK_TIME
 from frostfs_testlib.shell import CommandOptions, Shell, SshConnectionProvider
-from frostfs_testlib.steps.network import IfUpDownHelper, IpTablesHelper
-from frostfs_testlib.storage.cluster import Cluster, ClusterNode, StorageNode
+from frostfs_testlib.steps.network import IpHelper
+from frostfs_testlib.storage.cluster import Cluster, ClusterNode, S3Gate, StorageNode
 from frostfs_testlib.storage.controllers.disk_controller import DiskController
 from frostfs_testlib.storage.dataclasses.node_base import NodeBase, ServiceClass
+from frostfs_testlib.storage.dataclasses.storage_object_info import NodeStatus
+from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
 from frostfs_testlib.testing import parallel
-from frostfs_testlib.testing.test_control import run_optionally
-from frostfs_testlib.utils.failover_utils import (
-    wait_all_storage_nodes_returned,
-    wait_for_host_offline,
-    wait_for_host_online,
-    wait_for_node_online,
-)
+from frostfs_testlib.testing.test_control import retry, run_optionally, wait_for_success
+from frostfs_testlib.utils.datetime_utils import parse_time

-reporter = get_reporter()
-if_up_down_helper = IfUpDownHelper()
+logger = logging.getLogger("NeoLogger")
+
+
+class StateManager:
+    def __init__(self, cluster_state_controller: "ClusterStateController") -> None:
+        self.csc = cluster_state_controller
+
+
+StateManagerClass = TypeVar("StateManagerClass", bound=StateManager)


 class ClusterStateController:
-    def __init__(self, shell: Shell, cluster: Cluster) -> None:
+    def __init__(self, shell: Shell, cluster: Cluster, healthcheck: Healthcheck) -> None:
         self.stopped_nodes: list[ClusterNode] = []
         self.detached_disks: dict[str, DiskController] = {}
-        self.stopped_storage_nodes: list[ClusterNode] = []
-        self.stopped_s3_gates: list[ClusterNode] = []
         self.dropped_traffic: list[ClusterNode] = []
         self.stopped_services: set[NodeBase] = set()
         self.cluster = cluster
+        self.healthcheck = healthcheck
         self.shell = shell
         self.suspended_services: dict[str, list[ClusterNode]] = {}
         self.nodes_with_modified_interface: list[ClusterNode] = []
+        self.managers: list[StateManagerClass] = []
+
+        # TODO: move all functionality to managers
+        managers = set(load_all(group="frostfs.testlib.csc_managers"))
+        for manager in managers:
+            self.managers.append(manager(self))
+
+    def manager(self, manager_type: type[StateManagerClass]) -> StateManagerClass:
+        for manager in self.managers:
+            # Subclasses here for the future if we have overriding subclasses of base interface
+            if issubclass(type(manager), manager_type):
+                return manager
+
+    def _get_stopped_by_node(self, node: ClusterNode) -> set[NodeBase]:
+        stopped_by_node = [svc for svc in self.stopped_services if svc.host == node.host]
+        return set(stopped_by_node)
+
+    def _get_stopped_by_type(self, service_type: type[ServiceClass]) -> set[ServiceClass]:
+        stopped_by_type = [svc for svc in self.stopped_services if isinstance(svc, service_type)]
+        return set(stopped_by_type)
+
+    def _from_stopped_nodes(self, service_type: type[ServiceClass]) -> set[ServiceClass]:
+        stopped_on_nodes = set([node.service(service_type) for node in self.stopped_nodes])
+        return set(stopped_on_nodes)
+
+    def _get_online(self, service_type: type[ServiceClass]) -> set[ServiceClass]:
+        stopped_svc = self._get_stopped_by_type(service_type).union(self._from_stopped_nodes(service_type))
+        online_svc = set(self.cluster.services(service_type)) - stopped_svc
+        return online_svc

     @run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
-    @reporter.step_deco("Stop host of node {node}")
+    @reporter.step("Stop host of node {node}")
     def stop_node_host(self, node: ClusterNode, mode: str):
         # Drop ssh connection for this node before shutdown
         provider = SshConnectionProvider()
@@ -44,14 +85,12 @@ class ClusterStateController:
         self.stopped_nodes.append(node)
         with reporter.step(f"Stop host {node.host.config.address}"):
             node.host.stop_host(mode=mode)
-            wait_for_host_offline(self.shell, node.storage_node)
+            self._wait_for_host_offline(node)

     @run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
-    @reporter.step_deco("Shutdown whole cluster")
+    @reporter.step("Shutdown whole cluster")
     def shutdown_cluster(self, mode: str, reversed_order: bool = False):
-        nodes = (
-            reversed(self.cluster.cluster_nodes) if reversed_order else self.cluster.cluster_nodes
-        )
+        nodes = reversed(self.cluster.cluster_nodes) if reversed_order else self.cluster.cluster_nodes

         # Drop all ssh connections before shutdown
         provider = SshConnectionProvider()
@@ -63,39 +102,20 @@ class ClusterStateController:
             node.host.stop_host(mode=mode)

         for node in nodes:
-            wait_for_host_offline(self.shell, node.storage_node)
-
-    @run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
-    @reporter.step_deco("Stop all storage services on cluster")
-    def stop_all_storage_services(self, reversed_order: bool = False):
-        nodes = (
-            reversed(self.cluster.cluster_nodes) if reversed_order else self.cluster.cluster_nodes
-        )
-
-        for node in nodes:
-            self.stop_storage_service(node)
-
-    @run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
-    @reporter.step_deco("Stop all S3 gates on cluster")
-    def stop_all_s3_gates(self, reversed_order: bool = False):
-        nodes = (
-            reversed(self.cluster.cluster_nodes) if reversed_order else self.cluster.cluster_nodes
-        )
-
-        for node in nodes:
-            self.stop_s3_gate(node)
+            self._wait_for_host_offline(node)

     @run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
-    @reporter.step_deco("Start host of node {node}")
-    def start_node_host(self, node: ClusterNode):
+    @reporter.step("Start host of node {node}")
+    def start_node_host(self, node: ClusterNode, startup_healthcheck: bool = True):
         with reporter.step(f"Start host {node.host.config.address}"):
             node.host.start_host()
-            wait_for_host_online(self.shell, node.storage_node)
-            wait_for_node_online(node.storage_node)
-        self.stopped_nodes.remove(node)
+            self._wait_for_host_online(node)
+            self.stopped_nodes.remove(node)
+            if startup_healthcheck:
+                self.wait_startup_healthcheck()

     @run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
-    @reporter.step_deco("Start stopped hosts")
+    @reporter.step("Start stopped hosts")
     def start_stopped_hosts(self, reversed_order: bool = False):
         if not self.stopped_nodes:
             return
@@ -104,132 +124,167 @@ class ClusterStateController:
         for node in nodes:
             with reporter.step(f"Start host {node.host.config.address}"):
                 node.host.start_host()
-                if node in self.stopped_storage_nodes:
-                    self.stopped_storage_nodes.remove(node)
-
-                if node in self.stopped_s3_gates:
-                    self.stopped_s3_gates.remove(node)
+                self.stopped_services.difference_update(self._get_stopped_by_node(node))
         self.stopped_nodes = []
-        wait_all_storage_nodes_returned(self.shell, self.cluster)
+        with reporter.step("Wait for all nodes to go online"):
+            parallel(self._wait_for_host_online, self.cluster.cluster_nodes)
+
+        self.wait_after_storage_startup()

     @run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
-    @reporter.step_deco("Detach disk {device} at {mountpoint} on node {node}")
+    @reporter.step("Detach disk {device} at {mountpoint} on node {node}")
     def detach_disk(self, node: StorageNode, device: str, mountpoint: str):
         disk_controller = self._get_disk_controller(node, device, mountpoint)
         self.detached_disks[disk_controller.id] = disk_controller
         disk_controller.detach()

     @run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
-    @reporter.step_deco("Attach disk {device} at {mountpoint} on node {node}")
+    @reporter.step("Attach disk {device} at {mountpoint} on node {node}")
     def attach_disk(self, node: StorageNode, device: str, mountpoint: str):
         disk_controller = self._get_disk_controller(node, device, mountpoint)
         disk_controller.attach()
         self.detached_disks.pop(disk_controller.id, None)

     @run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
-    @reporter.step_deco("Restore detached disks")
+    @reporter.step("Restore detached disks")
     def restore_disks(self):
         for disk_controller in self.detached_disks.values():
             disk_controller.attach()
         self.detached_disks = {}

     @run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
-    @reporter.step_deco("Stop storage service on {node}")
-    def stop_storage_service(self, node: ClusterNode, mask: bool = True):
-        self.stopped_storage_nodes.append(node)
-        node.storage_node.stop_service(mask)
-
-    @run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
-    @reporter.step_deco("Stop all {service_type} services")
-    def stop_services_of_type(self, service_type: type[ServiceClass]):
+    @reporter.step("Stop all {service_type} services")
+    def stop_services_of_type(self, service_type: type[ServiceClass], mask: bool = True):
         services = self.cluster.services(service_type)
         self.stopped_services.update(services)
-        parallel([service.stop_service for service in services])
+        parallel([service.stop_service for service in services], mask=mask)

     @run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
-    @reporter.step_deco("Start all {service_type} services")
+    @reporter.step("Start all {service_type} services")
     def start_services_of_type(self, service_type: type[ServiceClass]):
         services = self.cluster.services(service_type)
         parallel([service.start_service for service in services])
+        self.stopped_services.difference_update(set(services))

         if service_type == StorageNode:
-            wait_all_storage_nodes_returned(self.shell, self.cluster)
+            self.wait_after_storage_startup()

-        self.stopped_services = self.stopped_services - set(services)
+    @wait_for_success(600, 60)
+    def wait_s3gate(self, s3gate: S3Gate):
+        with reporter.step(f"Wait for {s3gate} reconnection"):
+            result = s3gate.get_metric("frostfs_s3_gw_pool_current_nodes")
+            assert 'address="127.0.0.1' in result.stdout, "S3Gate should connect to local storage node"
+
+    @reporter.step("Wait for S3Gates reconnection to local storage")
+    def wait_s3gates(self):
+        online_s3gates = self._get_online(S3Gate)
+        if online_s3gates:
+            parallel(self.wait_s3gate, online_s3gates)
+
+    @reporter.step("Wait for cluster startup healtcheck")
+    def wait_startup_healthcheck(self):
+        nodes = self.cluster.nodes(self._get_online(StorageNode))
+        parallel(self.healthcheck.startup_healthcheck, nodes)
+
+    @reporter.step("Wait for storage reconnection to the system")
+    def wait_after_storage_startup(self):
+        self.wait_startup_healthcheck()
+        self.wait_s3gates()

     @run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
-    @reporter.step_deco("Start all stopped services")
+    @reporter.step("Start all stopped services")
     def start_all_stopped_services(self):
+        stopped_storages = self._get_stopped_by_type(StorageNode)
         parallel([service.start_service for service in self.stopped_services])
-
-        for service in self.stopped_services:
-            if isinstance(service, StorageNode):
-                wait_all_storage_nodes_returned(self.shell, self.cluster)
-                break
-
         self.stopped_services.clear()

+        if stopped_storages:
+            self.wait_after_storage_startup()
+
     @run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
-    @reporter.step_deco("Stop {service_type} service on {node}")
-    def stop_service_of_type(
-        self, node: ClusterNode, service_type: type[ServiceClass], mask: bool = True
-    ):
+    @reporter.step("Stop {service_type} service on {node}")
+    def stop_service_of_type(self, node: ClusterNode, service_type: type[ServiceClass], mask: bool = True):
         service = node.service(service_type)
         service.stop_service(mask)
         self.stopped_services.add(service)

     @run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
-    @reporter.step_deco("Start {service_type} service on {node}")
+    @reporter.step("Start {service_type} service on {node}")
     def start_service_of_type(self, node: ClusterNode, service_type: type[ServiceClass]):
         service = node.service(service_type)
         service.start_service()
-        if service in self.stopped_services:
-            self.stopped_services.remove(service)
+        self.stopped_services.discard(service)

     @run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
-    @reporter.step_deco("Start storage service on {node}")
-    def start_storage_service(self, node: ClusterNode):
-        node.storage_node.start_service()
-        self.stopped_storage_nodes.remove(node)
+    @reporter.step("Start all stopped {service_type} services")
+    def start_stopped_services_of_type(self, service_type: type[ServiceClass]):
+        stopped_svc = self._get_stopped_by_type(service_type)
+        if not stopped_svc:
+            return
+
+        parallel([svc.start_service for svc in stopped_svc])
+        self.stopped_services.difference_update(stopped_svc)
+
+        if service_type == StorageNode:
+            self.wait_after_storage_startup()
+
+    # TODO: Deprecated
+    @run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
+    @reporter.step("Stop all storage services on cluster")
+    def stop_all_storage_services(self, reversed_order: bool = False):
+        nodes = reversed(self.cluster.cluster_nodes) if reversed_order else self.cluster.cluster_nodes
+
+        for node in nodes:
+            self.stop_service_of_type(node, StorageNode)
+
+    # TODO: Deprecated
+    @run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
+    @reporter.step("Stop all S3 gates on cluster")
+    def stop_all_s3_gates(self, reversed_order: bool = False):
+        nodes = reversed(self.cluster.cluster_nodes) if reversed_order else self.cluster.cluster_nodes
+
+        for node in nodes:
+            self.stop_service_of_type(node, S3Gate)
+
+    # TODO: Deprecated
+    @run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
+    @reporter.step("Stop storage service on {node}")
+    def stop_storage_service(self, node: ClusterNode, mask: bool = True):
+        self.stop_service_of_type(node, StorageNode, mask)
+
+    # TODO: Deprecated
+    @run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
+    @reporter.step("Start storage service on {node}")
+    def start_storage_service(self, node: ClusterNode):
+        self.start_service_of_type(node, StorageNode)

+    # TODO: Deprecated
     @run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
-    @reporter.step_deco("Start stopped storage services")
+    @reporter.step("Start stopped storage services")
     def start_stopped_storage_services(self):
-        if not self.stopped_storage_nodes:
-            return
-
-        # In case if we stopped couple services, for example (s01-s04):
-        # After starting only s01, it may require connections to s02-s04, which is still down, and fail to start.
-        # Also, if something goes wrong here, we might skip s02-s04 start at all, and cluster will be left in a bad state.
-        # So in order to make sure that services are at least attempted to be started, using parallel runs here.
-        parallel(self.start_storage_service, copy.copy(self.stopped_storage_nodes))
-
-        wait_all_storage_nodes_returned(self.shell, self.cluster)
-        self.stopped_storage_nodes = []
+        self.start_stopped_services_of_type(StorageNode)

+    # TODO: Deprecated
     @run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
-    @reporter.step_deco("Stop s3 gate on {node}")
+    @reporter.step("Stop s3 gate on {node}")
     def stop_s3_gate(self, node: ClusterNode, mask: bool = True):
-        node.s3_gate.stop_service(mask)
-        self.stopped_s3_gates.append(node)
+        self.stop_service_of_type(node, S3Gate, mask)

+    # TODO: Deprecated
     @run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
-    @reporter.step_deco("Start s3 gate on {node}")
+    @reporter.step("Start s3 gate on {node}")
     def start_s3_gate(self, node: ClusterNode):
-        node.s3_gate.start_service()
-        self.stopped_s3_gates.remove(node)
+        self.start_service_of_type(node, S3Gate)

+    # TODO: Deprecated
     @run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
-    @reporter.step_deco("Start stopped S3 gates")
+    @reporter.step("Start stopped S3 gates")
     def start_stopped_s3_gates(self):
-        if not self.stopped_s3_gates:
-            return
-
-        parallel(self.start_s3_gate, copy.copy(self.stopped_s3_gates))
-        self.stopped_s3_gates = []
+        self.start_stopped_services_of_type(S3Gate)

     @run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
-    @reporter.step_deco("Suspend {process_name} service in {node}")
+    @reporter.step("Suspend {process_name} service in {node}")
     def suspend_service(self, process_name: str, node: ClusterNode):
         node.host.wait_success_suspend_process(process_name)
|
node.host.wait_success_suspend_process(process_name)
|
||||||
if self.suspended_services.get(process_name):
|
if self.suspended_services.get(process_name):
|
||||||
|
@ -238,81 +293,48 @@ class ClusterStateController:
|
||||||
self.suspended_services[process_name] = [node]
|
self.suspended_services[process_name] = [node]
|
||||||
|
|
||||||
@run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
|
@run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
|
||||||
@reporter.step_deco("Resume {process_name} service in {node}")
|
@reporter.step("Resume {process_name} service in {node}")
|
||||||
def resume_service(self, process_name: str, node: ClusterNode):
|
def resume_service(self, process_name: str, node: ClusterNode):
|
||||||
node.host.wait_success_resume_process(process_name)
|
node.host.wait_success_resume_process(process_name)
|
||||||
if (
|
if self.suspended_services.get(process_name) and node in self.suspended_services[process_name]:
|
||||||
self.suspended_services.get(process_name)
|
|
||||||
and node in self.suspended_services[process_name]
|
|
||||||
):
|
|
||||||
self.suspended_services[process_name].remove(node)
|
self.suspended_services[process_name].remove(node)
|
||||||
|
|
||||||
@run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
|
@run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
|
||||||
@reporter.step_deco("Start suspend processes services")
|
@reporter.step("Start suspend processes services")
|
||||||
def resume_suspended_services(self):
|
def resume_suspended_services(self):
|
||||||
for process_name, list_nodes in self.suspended_services.items():
|
for process_name, list_nodes in self.suspended_services.items():
|
||||||
[node.host.wait_success_resume_process(process_name) for node in list_nodes]
|
[node.host.wait_success_resume_process(process_name) for node in list_nodes]
|
||||||
self.suspended_services = {}
|
self.suspended_services = {}
|
||||||
|
|
||||||
@reporter.step_deco("Drop traffic to {node}, with ports - {ports}, nodes - {block_nodes}")
|
@reporter.step("Drop traffic to {node}, nodes - {block_nodes}")
|
||||||
def drop_traffic(
|
def drop_traffic(
|
||||||
self,
|
self,
|
||||||
mode: str,
|
|
||||||
node: ClusterNode,
|
node: ClusterNode,
|
||||||
wakeup_timeout: int,
|
wakeup_timeout: int,
|
||||||
ports: list[str] = None,
|
name_interface: str,
|
||||||
block_nodes: list[ClusterNode] = None,
|
block_nodes: list[ClusterNode] = None,
|
||||||
) -> None:
|
) -> None:
|
||||||
allowed_modes = ["ports", "nodes"]
|
list_ip = self._parse_interfaces(block_nodes, name_interface)
|
||||||
assert mode in allowed_modes
|
IpHelper.drop_input_traffic_to_node(node, list_ip)
|
||||||
|
|
||||||
match mode:
|
|
||||||
case "ports":
|
|
||||||
IpTablesHelper.drop_input_traffic_to_port(node, ports)
|
|
||||||
case "nodes":
|
|
||||||
list_ip = self._parse_intefaces(block_nodes)
|
|
||||||
IpTablesHelper.drop_input_traffic_to_node(node, list_ip)
|
|
||||||
time.sleep(wakeup_timeout)
|
time.sleep(wakeup_timeout)
|
||||||
self.dropped_traffic.append(node)
|
self.dropped_traffic.append(node)
|
||||||
|
|
||||||
@reporter.step_deco("Ping traffic")
|
@reporter.step("Start traffic to {node}")
|
||||||
def ping_traffic(
|
|
||||||
self,
|
|
||||||
node: ClusterNode,
|
|
||||||
nodes_list: list[ClusterNode],
|
|
||||||
expect_result: int,
|
|
||||||
) -> bool:
|
|
||||||
shell = node.host.get_shell()
|
|
||||||
options = CommandOptions(check=False)
|
|
||||||
ips = self._parse_intefaces(nodes_list)
|
|
||||||
for ip in ips:
|
|
||||||
code = shell.exec(f"ping {ip} -c 1", options).return_code
|
|
||||||
if code != expect_result:
|
|
||||||
return False
|
|
||||||
return True
|
|
||||||
|
|
||||||
@reporter.step_deco("Start traffic to {node}")
|
|
||||||
def restore_traffic(
|
def restore_traffic(
|
||||||
self,
|
self,
|
||||||
mode: str,
|
|
||||||
node: ClusterNode,
|
node: ClusterNode,
|
||||||
) -> None:
|
) -> None:
|
||||||
allowed_modes = ["ports", "nodes"]
|
IpHelper.restore_input_traffic_to_node(node=node)
|
||||||
assert mode in allowed_modes
|
|
||||||
|
|
||||||
match mode:
|
@reporter.step("Restore blocked nodes")
|
||||||
case "ports":
|
|
||||||
IpTablesHelper.restore_input_traffic_to_port(node=node)
|
|
||||||
case "nodes":
|
|
||||||
IpTablesHelper.restore_input_traffic_to_node(node=node)
|
|
||||||
|
|
||||||
@reporter.step_deco("Restore blocked nodes")
|
|
||||||
def restore_all_traffic(self):
|
def restore_all_traffic(self):
|
||||||
|
if not self.dropped_traffic:
|
||||||
|
return
|
||||||
parallel(self._restore_traffic_to_node, self.dropped_traffic)
|
parallel(self._restore_traffic_to_node, self.dropped_traffic)
|
||||||
|
|
||||||
@run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
|
@run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
|
||||||
@reporter.step_deco("Hard reboot host {node} via magic SysRq option")
|
@reporter.step("Hard reboot host {node} via magic SysRq option")
|
||||||
def panic_reboot_host(self, node: ClusterNode, wait_for_return: bool = True):
|
def panic_reboot_host(self, node: ClusterNode, wait_for_return: bool = True, startup_healthcheck: bool = True):
|
||||||
shell = node.host.get_shell()
|
shell = node.host.get_shell()
|
||||||
shell.exec('sudo sh -c "echo 1 > /proc/sys/kernel/sysrq"')
|
shell.exec('sudo sh -c "echo 1 > /proc/sys/kernel/sysrq"')
|
||||||
|
|
||||||
|
@ -327,32 +349,142 @@ class ClusterStateController:
|
||||||
# Let the things to be settled
|
# Let the things to be settled
|
||||||
# A little wait here to prevent ssh stuck during panic
|
# A little wait here to prevent ssh stuck during panic
|
||||||
time.sleep(10)
|
time.sleep(10)
|
||||||
wait_for_host_online(self.shell, node.storage_node)
|
self._wait_for_host_online(node)
|
||||||
wait_for_node_online(node.storage_node)
|
if startup_healthcheck:
|
||||||
|
self.wait_startup_healthcheck()
|
||||||
|
|
||||||
@reporter.step_deco("Down {interface} to {nodes}")
|
@reporter.step("Down {interface} to {nodes}")
|
||||||
def down_interface(self, nodes: list[ClusterNode], interface: str):
|
def down_interface(self, nodes: list[ClusterNode], interface: str):
|
||||||
for node in nodes:
|
for node in nodes:
|
||||||
if_up_down_helper.down_interface(node=node, interface=interface)
|
node.host.down_interface(interface=interface)
|
||||||
assert if_up_down_helper.check_state(node=node, interface=interface) == "DOWN"
|
assert node.host.check_state(interface=interface) == "DOWN"
|
||||||
self.nodes_with_modified_interface.append(node)
|
self.nodes_with_modified_interface.append(node)
|
||||||
|
|
||||||
@reporter.step_deco("Up {interface} to {nodes}")
|
@reporter.step("Up {interface} to {nodes}")
|
||||||
def up_interface(self, nodes: list[ClusterNode], interface: str):
|
def up_interface(self, nodes: list[ClusterNode], interface: str):
|
||||||
for node in nodes:
|
for node in nodes:
|
||||||
if_up_down_helper.up_interface(node=node, interface=interface)
|
node.host.up_interface(interface=interface)
|
||||||
assert if_up_down_helper.check_state(node=node, interface=interface) == "UP"
|
assert node.host.check_state(interface=interface) == "UP"
|
||||||
if node in self.nodes_with_modified_interface:
|
if node in self.nodes_with_modified_interface:
|
||||||
self.nodes_with_modified_interface.remove(node)
|
self.nodes_with_modified_interface.remove(node)
|
||||||
|
|
||||||
@reporter.step_deco("Restore interface")
|
@reporter.step("Restore interface")
|
||||||
def restore_interfaces(self):
|
def restore_interfaces(self):
|
||||||
for node in self.nodes_with_modified_interface:
|
for node in self.nodes_with_modified_interface:
|
||||||
if_up_down_helper.up_all_interface(node)
|
dict_interfaces = node.host.config.interfaces.keys()
|
||||||
|
for name_interface in dict_interfaces:
|
||||||
|
if "mgmt" not in name_interface:
|
||||||
|
node.host.up_interface(interface=name_interface)
|
||||||
|
|
||||||
def _get_disk_controller(
|
@reporter.step("Get node time")
|
||||||
self, node: StorageNode, device: str, mountpoint: str
|
def get_node_date(self, node: ClusterNode) -> datetime:
|
||||||
) -> DiskController:
|
shell = node.host.get_shell()
|
||||||
|
return datetime.datetime.strptime(shell.exec("hwclock -r").stdout.strip(), "%Y-%m-%d %H:%M:%S.%f%z")
|
||||||
|
|
||||||
|
@reporter.step("Set node time to {in_date}")
|
||||||
|
def change_node_date(self, node: ClusterNode, in_date: datetime) -> None:
|
||||||
|
shell = node.host.get_shell()
|
||||||
|
shell.exec(f"date -s @{time.mktime(in_date.timetuple())}")
|
||||||
|
shell.exec("hwclock --systohc")
|
||||||
|
node_time = self.get_node_date(node)
|
||||||
|
with reporter.step(f"Verify difference between {node_time} and {in_date} is less than a minute"):
|
||||||
|
assert (self.get_node_date(node) - in_date) < datetime.timedelta(minutes=1)
|
||||||
|
|
||||||
|
@reporter.step(f"Restore time")
|
||||||
|
def restore_node_date(self, node: ClusterNode) -> None:
|
||||||
|
shell = node.host.get_shell()
|
||||||
|
now_time = datetime.datetime.now(datetime.timezone.utc)
|
||||||
|
with reporter.step(f"Set {now_time} time"):
|
||||||
|
shell.exec(f"date -s @{time.mktime(now_time.timetuple())}")
|
||||||
|
shell.exec("hwclock --systohc")
|
||||||
|
|
||||||
|
@reporter.step("Change the synchronizer status to {status}")
|
||||||
|
def set_sync_date_all_nodes(self, status: str):
|
||||||
|
if status == "active":
|
||||||
|
parallel(self._enable_date_synchronizer, self.cluster.cluster_nodes)
|
||||||
|
return
|
||||||
|
parallel(self._disable_date_synchronizer, self.cluster.cluster_nodes)
|
||||||
|
|
||||||
|
@reporter.step("Set MaintenanceModeAllowed - {status}")
|
||||||
|
def set_maintenance_mode_allowed(self, status: str, cluster_node: ClusterNode) -> None:
|
||||||
|
frostfs_adm = FrostfsAdm(
|
||||||
|
shell=cluster_node.host.get_shell(),
|
||||||
|
frostfs_adm_exec_path=FROSTFS_ADM_EXEC,
|
||||||
|
config_file=FROSTFS_ADM_CONFIG_PATH,
|
||||||
|
)
|
||||||
|
frostfs_adm.morph.set_config(set_key_value=f"MaintenanceModeAllowed={status}")
|
||||||
|
|
||||||
|
@reporter.step("Set node status to {status} in CSC")
|
||||||
|
def set_node_status(self, cluster_node: ClusterNode, wallet: WalletInfo, status: NodeStatus, await_tick: bool = True) -> None:
|
||||||
|
rpc_endpoint = cluster_node.storage_node.get_rpc_endpoint()
|
||||||
|
control_endpoint = cluster_node.service(StorageNode).get_control_endpoint()
|
||||||
|
|
||||||
|
frostfs_adm, frostfs_cli, frostfs_cli_remote = self._get_cli(self.shell, wallet, cluster_node)
|
||||||
|
node_netinfo = NetmapParser.netinfo(frostfs_cli.netmap.netinfo(rpc_endpoint).stdout)
|
||||||
|
|
||||||
|
if node_netinfo.maintenance_mode_allowed == "false":
|
||||||
|
with reporter.step("Enable maintenance mode"):
|
||||||
|
frostfs_adm.morph.set_config("MaintenanceModeAllowed=true")
|
||||||
|
|
||||||
|
with reporter.step(f"Set node status to {status} using FrostfsCli"):
|
||||||
|
frostfs_cli_remote.control.set_status(control_endpoint, status.value)
|
||||||
|
|
||||||
|
if not await_tick:
|
||||||
|
return
|
||||||
|
|
||||||
|
with reporter.step("Tick 2 epoch with 2 block await."):
|
||||||
|
for _ in range(2):
|
||||||
|
frostfs_adm.morph.force_new_epoch()
|
||||||
|
time.sleep(parse_time(MORPH_BLOCK_TIME) * 2)
|
||||||
|
|
||||||
|
self.await_node_status(status, wallet, cluster_node)
|
||||||
|
|
||||||
|
@wait_for_success(80, 8, title="Wait for node status become {status}")
|
||||||
|
def await_node_status(self, status: NodeStatus, wallet: WalletInfo, cluster_node: ClusterNode, checker_node: ClusterNode = None):
|
||||||
|
frostfs_cli = FrostfsCli(self.shell, FROSTFS_CLI_EXEC, wallet.config_path)
|
||||||
|
if not checker_node:
|
||||||
|
checker_node = cluster_node
|
||||||
|
netmap = NetmapParser.snapshot_all_nodes(frostfs_cli.netmap.snapshot(checker_node.storage_node.get_rpc_endpoint()).stdout)
|
||||||
|
netmap = [node for node in netmap if cluster_node.host_ip == node.node]
|
||||||
|
if status == NodeStatus.OFFLINE:
|
||||||
|
assert cluster_node.host_ip not in netmap, f"{cluster_node.host_ip} not in Offline"
|
||||||
|
else:
|
||||||
|
assert netmap[0].node_status == status, f"Node status should be '{status}', but was '{netmap[0].node_status}'"
|
||||||
|
|
||||||
|
def _get_cli(
|
||||||
|
self, local_shell: Shell, local_wallet: WalletInfo, cluster_node: ClusterNode
|
||||||
|
) -> tuple[FrostfsAdm, FrostfsCli, FrostfsCli]:
|
||||||
|
# TODO Move to service config
|
||||||
|
host = cluster_node.host
|
||||||
|
service_config = host.get_service_config(cluster_node.storage_node.name)
|
||||||
|
wallet_path = service_config.attributes["wallet_path"]
|
||||||
|
wallet_password = service_config.attributes["wallet_password"]
|
||||||
|
|
||||||
|
shell = host.get_shell()
|
||||||
|
wallet_config_path = f"/tmp/{cluster_node.storage_node.name}-config.yaml"
|
||||||
|
wallet_config = f'wallet: {wallet_path}\npassword: "{wallet_password}"'
|
||||||
|
shell.exec(f"echo '{wallet_config}' > {wallet_config_path}")
|
||||||
|
|
||||||
|
frostfs_adm = FrostfsAdm(shell=shell, frostfs_adm_exec_path=FROSTFS_ADM_EXEC, config_file=FROSTFS_ADM_CONFIG_PATH)
|
||||||
|
frostfs_cli = FrostfsCli(local_shell, FROSTFS_CLI_EXEC, local_wallet.config_path)
|
||||||
|
frostfs_cli_remote = FrostfsCli(
|
||||||
|
shell=shell,
|
||||||
|
frostfs_cli_exec_path=FROSTFS_CLI_EXEC,
|
||||||
|
config_file=wallet_config_path,
|
||||||
|
)
|
||||||
|
return frostfs_adm, frostfs_cli, frostfs_cli_remote
|
||||||
|
|
||||||
|
def _enable_date_synchronizer(self, cluster_node: ClusterNode):
|
||||||
|
shell = cluster_node.host.get_shell()
|
||||||
|
shell.exec("timedatectl set-ntp true")
|
||||||
|
cluster_node.host.wait_for_service_to_be_in_state("systemd-timesyncd", "active", 15)
|
||||||
|
|
||||||
|
def _disable_date_synchronizer(self, cluster_node: ClusterNode):
|
||||||
|
shell = cluster_node.host.get_shell()
|
||||||
|
shell.exec("timedatectl set-ntp false")
|
||||||
|
cluster_node.host.wait_for_service_to_be_in_state("systemd-timesyncd", "inactive", 15)
|
||||||
|
|
||||||
|
def _get_disk_controller(self, node: StorageNode, device: str, mountpoint: str) -> DiskController:
|
||||||
disk_controller_id = DiskController.get_id(node, device)
|
disk_controller_id = DiskController.get_id(node, device)
|
||||||
if disk_controller_id in self.detached_disks.keys():
|
if disk_controller_id in self.detached_disks.keys():
|
||||||
disk_controller = self.detached_disks[disk_controller_id]
|
disk_controller = self.detached_disks[disk_controller_id]
|
||||||
|
@ -362,14 +494,40 @@ class ClusterStateController:
|
||||||
return disk_controller
|
return disk_controller
|
||||||
|
|
||||||
def _restore_traffic_to_node(self, node):
|
def _restore_traffic_to_node(self, node):
|
||||||
IpTablesHelper.restore_input_traffic_to_port(node)
|
IpHelper.restore_input_traffic_to_node(node)
|
||||||
IpTablesHelper.restore_input_traffic_to_node(node)
|
|
||||||
|
|
||||||
def _parse_intefaces(self, nodes: list[ClusterNode]):
|
def _parse_interfaces(self, nodes: list[ClusterNode], name_interface: str):
|
||||||
interfaces = []
|
interfaces = []
|
||||||
for node in nodes:
|
for node in nodes:
|
||||||
dict_interfaces = node.host.config.interfaces
|
dict_interfaces = node.host.config.interfaces
|
||||||
for type, ip in dict_interfaces.items():
|
for type, ip in dict_interfaces.items():
|
||||||
if "mgmt" not in type:
|
if name_interface in type:
|
||||||
interfaces.append(ip)
|
interfaces.append(ip)
|
||||||
return interfaces
|
return interfaces
|
||||||
|
|
||||||
|
@reporter.step("Ping node")
|
||||||
|
def _ping_host(self, node: ClusterNode):
|
||||||
|
options = CommandOptions(check=False)
|
||||||
|
return self.shell.exec(f"ping {node.host.config.address} -c 1", options).return_code
|
||||||
|
|
||||||
|
@retry(max_attempts=60, sleep_interval=10, expected_result=HostStatus.ONLINE, title="Waiting for {node} to go online")
|
||||||
|
def _wait_for_host_online(self, node: ClusterNode):
|
||||||
|
try:
|
||||||
|
ping_result = self._ping_host(node)
|
||||||
|
if ping_result != 0:
|
||||||
|
return HostStatus.OFFLINE
|
||||||
|
return node.host.get_host_status()
|
||||||
|
except Exception as err:
|
||||||
|
logger.warning(f"Host ping fails with error {err}")
|
||||||
|
return HostStatus.OFFLINE
|
||||||
|
|
||||||
|
@retry(max_attempts=60, sleep_interval=10, expected_result=HostStatus.OFFLINE, title="Waiting for {node} to go offline")
|
||||||
|
def _wait_for_host_offline(self, node: ClusterNode):
|
||||||
|
try:
|
||||||
|
ping_result = self._ping_host(node)
|
||||||
|
if ping_result == 0:
|
||||||
|
return HostStatus.ONLINE
|
||||||
|
return node.host.get_host_status()
|
||||||
|
except Exception as err:
|
||||||
|
logger.warning(f"Host ping fails with error {err}")
|
||||||
|
return HostStatus.ONLINE
|
||||||
|
|
|
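Reviewer note: the per-service helpers above are now thin wrappers over the generic service-type API, so new code can target any service class directly. A minimal usage sketch (the `cluster_state_controller` and `node` objects are assumed test fixtures, not part of this diff):

    from frostfs_testlib.storage.dataclasses.frostfs_services import S3Gate, StorageNode

    # Stop a single S3 gate, then restart every stopped gate of that type in parallel.
    cluster_state_controller.stop_service_of_type(node, S3Gate)
    cluster_state_controller.start_stopped_services_of_type(S3Gate)

    # For storage nodes the same call additionally waits for post-startup health.
    cluster_state_controller.start_stopped_services_of_type(StorageNode)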
@@ -79,9 +79,7 @@ class ShardsWatcher:
         assert self._is_shard_present(shard_id)
         shards_with_new_errors = self.get_shards_with_new_errors()
 
-        assert (
-            shard_id in shards_with_new_errors
-        ), f"Expected shard {shard_id} to have new errors, but haven't {self.shards_snapshots[-1]}"
+        assert shard_id in shards_with_new_errors, f"Expected shard {shard_id} to have new errors, but haven't {self.shards_snapshots[-1]}"
 
     @wait_for_success(300, 5)
     def await_for_shards_have_no_new_errors(self):
@@ -110,9 +108,9 @@ class ShardsWatcher:
             self.storage_node.host.get_cli_config("frostfs-cli").exec_path,
         )
         return shards_cli.set_mode(
-            self.storage_node.get_control_endpoint(),
-            self.storage_node.get_remote_wallet_path(),
-            self.storage_node.get_wallet_password(),
+            endpoint=self.storage_node.get_control_endpoint(),
+            wallet=self.storage_node.get_remote_wallet_path(),
+            wallet_password=self.storage_node.get_wallet_password(),
             mode=mode,
             id=[shard_id],
             clear_errors=clear_errors,
@@ -0,0 +1,49 @@
+from typing import Any
+
+from frostfs_testlib import reporter
+from frostfs_testlib.storage.cluster import ClusterNode
+from frostfs_testlib.storage.controllers.cluster_state_controller import ClusterStateController, StateManager
+from frostfs_testlib.storage.dataclasses.node_base import ServiceClass
+from frostfs_testlib.testing import parallel
+
+
+class ConfigStateManager(StateManager):
+    def __init__(self, cluster_state_controller: ClusterStateController) -> None:
+        super().__init__(cluster_state_controller)
+        self.services_with_changed_config: set[tuple[ClusterNode, ServiceClass]] = set()
+        self.cluster = self.csc.cluster
+
+    @reporter.step("Change configuration for {service_type} on all nodes")
+    def set_on_all_nodes(self, service_type: type[ServiceClass], values: dict[str, Any]):
+        services = self.cluster.services(service_type)
+        nodes = self.cluster.nodes(services)
+        self.services_with_changed_config.update([(node, service_type) for node in nodes])
+
+        self.csc.stop_services_of_type(service_type)
+        parallel([node.config(service_type).set for node in nodes], values=values)
+        self.csc.start_services_of_type(service_type)
+
+    @reporter.step("Change configuration for {service_type} on {node}")
+    def set_on_node(self, node: ClusterNode, service_type: type[ServiceClass], values: dict[str, Any]):
+        self.services_with_changed_config.add((node, service_type))
+
+        self.csc.stop_service_of_type(node, service_type)
+        node.config(service_type).set(values)
+        self.csc.start_service_of_type(node, service_type)
+
+    @reporter.step("Revert all configuration changes")
+    def revert_all(self):
+        if not self.services_with_changed_config:
+            return
+
+        parallel(self._revert_svc, self.services_with_changed_config)
+        self.services_with_changed_config.clear()
+
+        self.csc.start_all_stopped_services()
+
+    # TODO: parallel can't have multiple parallel_items :(
+    @reporter.step("Revert all configuration {node_and_service}")
+    def _revert_svc(self, node_and_service: tuple[ClusterNode, ServiceClass]):
+        node, service_type = node_and_service
+        self.csc.stop_service_of_type(node, service_type)
+        node.config(service_type).revert()
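Reviewer note: the new ConfigStateManager pairs every configuration change with a stop/start of the affected services and records what it touched, so one revert_all() call in teardown restores the whole cluster. A hedged usage sketch (the import path and the config key are assumptions for illustration):

    from frostfs_testlib.storage.controllers.state_managers.config_state_manager import ConfigStateManager
    from frostfs_testlib.storage.dataclasses.frostfs_services import StorageNode

    config_manager = ConfigStateManager(cluster_state_controller)
    # Stops all storage services, writes the new values on each node, starts them again.
    config_manager.set_on_all_nodes(StorageNode, {"logger:level": "debug"})

    # Later, in teardown: revert configs and restart whatever was stopped.
    config_manager.revert_all()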
@@ -1,8 +1,8 @@
 import logging
 from dataclasses import dataclass
-from enum import Enum
 from typing import Any, Dict, List, Optional, Union
 
+from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
 from frostfs_testlib.testing.readable import HumanReadableEnum
 from frostfs_testlib.utils import wallet_utils
 
@@ -65,11 +65,7 @@ class EACLFilters:
 
     def __str__(self):
         return ",".join(
-            [
-                f"{filter.header_type.value}:"
-                f"{filter.key}{filter.match_type.value}{filter.value}"
-                for filter in self.filters
-            ]
+            [f"{filter.header_type.value}:" f"{filter.key}{filter.match_type.value}{filter.value}" for filter in self.filters]
             if self.filters
             else []
         )
@@ -84,7 +80,7 @@ class EACLPubKey:
 class EACLRule:
     operation: Optional[EACLOperation] = None
     access: Optional[EACLAccess] = None
-    role: Optional[Union[EACLRole, str]] = None
+    role: Optional[Union[EACLRole, WalletInfo]] = None
     filters: Optional[EACLFilters] = None
 
     def to_dict(self) -> Dict[str, Any]:
@@ -96,9 +92,9 @@ class EACLRule:
         }
 
     def __str__(self):
-        role = (
-            self.role.value
-            if isinstance(self.role, EACLRole)
-            else f'pubkey:{wallet_utils.get_wallet_public_key(self.role, "")}'
-        )
+        role = ""
+        if isinstance(self.role, EACLRole):
+            role = self.role.value
+        if isinstance(self.role, WalletInfo):
+            role = f"pubkey:{wallet_utils.get_wallet_public_key(self.role.path, self.role.password)}"
         return f'{self.access.value} {self.operation.value} {self.filters or ""} {role}'
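Reviewer note: EACLRule.role now accepts a WalletInfo instead of a raw string, and __str__ derives the public key from the wallet's own path and password rather than assuming an empty password. A sketch (enum members exist elsewhere in the library; the rendered output strings are illustrative):

    rule = EACLRule(access=EACLAccess.DENY, operation=EACLOperation.PUT, role=EACLRole.OTHERS)
    print(rule)  # e.g. "deny put  others"

    rule = EACLRule(access=EACLAccess.ALLOW, operation=EACLOperation.GET, role=wallet)  # wallet: WalletInfo
    print(rule)  # e.g. "allow get  pubkey:<hex public key of the wallet>"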
@@ -3,6 +3,7 @@ import yaml
 from frostfs_testlib.blockchain import RPCClient
 from frostfs_testlib.storage.constants import ConfigAttributes
 from frostfs_testlib.storage.dataclasses.node_base import NodeBase
+from frostfs_testlib.storage.dataclasses.shard import Shard
 
 
 class InnerRing(NodeBase):
@@ -17,11 +18,7 @@ class InnerRing(NodeBase):
 
     def service_healthcheck(self) -> bool:
         health_metric = "frostfs_ir_ir_health"
-        output = (
-            self.host.get_shell()
-            .exec(f"curl -s localhost:6662 | grep {health_metric} | sed 1,2d")
-            .stdout
-        )
+        output = self.host.get_shell().exec(f"curl -s localhost:6662 | grep {health_metric} | sed 1,2d").stdout
         return health_metric in output
 
     def get_netmap_cleaner_threshold(self) -> str:
@@ -50,11 +47,7 @@ class S3Gate(NodeBase):
 
     def service_healthcheck(self) -> bool:
         health_metric = "frostfs_s3_gw_state_health"
-        output = (
-            self.host.get_shell()
-            .exec(f"curl -s localhost:8086 | grep {health_metric} | sed 1,2d")
-            .stdout
-        )
+        output = self.host.get_shell().exec(f"curl -s localhost:8086 | grep {health_metric} | sed 1,2d").stdout
         return health_metric in output
 
     @property
@@ -72,11 +65,7 @@ class HTTPGate(NodeBase):
 
     def service_healthcheck(self) -> bool:
         health_metric = "frostfs_http_gw_state_health"
-        output = (
-            self.host.get_shell()
-            .exec(f"curl -s localhost:5662 | grep {health_metric} | sed 1,2d")
-            .stdout
-        )
+        output = self.host.get_shell().exec(f"curl -s localhost:5662 | grep {health_metric} | sed 1,2d").stdout
         return health_metric in output
 
     @property
@@ -135,19 +124,27 @@ class StorageNode(NodeBase):
 
     def service_healthcheck(self) -> bool:
         health_metric = "frostfs_node_state_health"
-        output = (
-            self.host.get_shell()
-            .exec(f"curl -s localhost:6672 | grep {health_metric} | sed 1,2d")
-            .stdout
-        )
+        output = self.host.get_shell().exec(f"curl -s localhost:6672 | grep {health_metric} | sed 1,2d").stdout
         return health_metric in output
 
+    # TODO: Deprecated. Use new approach with config
     def get_shard_config_path(self) -> str:
         return self._get_attribute(ConfigAttributes.SHARD_CONFIG_PATH)
 
+    # TODO: Deprecated. Use new approach with config
     def get_shards_config(self) -> tuple[str, dict]:
         return self.get_config(self.get_shard_config_path())
 
+    def get_shards(self) -> list[Shard]:
+        shards = self.config.get("storage:shard")
+
+        if not shards:
+            raise RuntimeError(f"Cannot get shards information for {self.name} on {self.host.config.address}")
+
+        if "default" in shards:
+            shards.pop("default")
+        return [Shard.from_object(shard) for shard in shards.values()]
+
     def get_control_endpoint(self) -> str:
         return self._get_attribute(ConfigAttributes.CONTROL_ENDPOINT)
 
@@ -157,20 +154,17 @@ class StorageNode(NodeBase):
     def get_data_directory(self) -> str:
         return self.host.get_data_directory(self.name)
 
-    def get_http_hostname(self) -> str:
-        return self._get_attribute(ConfigAttributes.HTTP_HOSTNAME)
-
-    def get_s3_hostname(self) -> str:
-        return self._get_attribute(ConfigAttributes.S3_HOSTNAME)
-
     def delete_blobovnicza(self):
         self.host.delete_blobovnicza(self.name)
 
     def delete_fstree(self):
         self.host.delete_fstree(self.name)
 
-    def delete_pilorama(self):
-        self.host.delete_pilorama(self.name)
+    def delete_file(self, file_path: str) -> None:
+        self.host.delete_file(file_path)
+
+    def is_file_exist(self, file_path) -> bool:
+        return self.host.is_file_exist(file_path)
 
     def delete_metabase(self):
         self.host.delete_metabase(self.name)
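Reviewer note: get_shards() reads the shard layout through the new typed config property instead of the deprecated raw-YAML helpers. A usage sketch (the `cluster` object is an assumed fixture):

    storage_node = cluster.storage_nodes[0]
    for shard in storage_node.get_shards():
        # Each entry is a Shard dataclass parsed from the "storage:shard" section.
        print(shard.metabase, shard.writecache, [blobstor.path for blobstor in shard.blobstor])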
--- /dev/null
+++ b/src/frostfs_testlib/storage/dataclasses/metrics.py
@@ -0,0 +1,36 @@
+from frostfs_testlib.hosting import Host
+from frostfs_testlib.shell.interfaces import CommandResult
+
+
+class Metrics:
+    def __init__(self, host: Host, metrics_endpoint: str) -> None:
+        self.storage = StorageMetrics(host, metrics_endpoint)
+
+
+class StorageMetrics:
+    """
+    Class represents storage metrics in a cluster
+    """
+
+    def __init__(self, host: Host, metrics_endpoint: str) -> None:
+        self.host = host
+        self.metrics_endpoint = metrics_endpoint
+
+    def get_metrics_search_by_greps(self, **greps) -> CommandResult:
+        """
+        Get a metrics, search by: cid, metric_type, shard_id etc.
+        Args:
+            greps: dict of grep-command-name and value
+                for example get_metrics_search_by_greps(command='container_objects_total', cid='123456')
+        Return:
+            result of metrics
+        """
+        shell = self.host.get_shell()
+        additional_greps = " |grep ".join([grep_command for grep_command in greps.values()])
+        result = shell.exec(f"curl -s {self.metrics_endpoint} | grep {additional_greps}")
+        return result
+
+    def get_all_metrics(self) -> CommandResult:
+        shell = self.host.get_shell()
+        result = shell.exec(f"curl -s {self.metrics_endpoint}")
+        return result
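Reviewer note: each keyword value passed to get_metrics_search_by_greps becomes one more "| grep <value>" stage in the pipeline, so every argument narrows the match. A sketch (the `host` fixture, endpoint and cid values are illustrative; the keyword names come from the docstring above):

    metrics = Metrics(host, metrics_endpoint="http://localhost:6672")
    result = metrics.storage.get_metrics_search_by_greps(command="container_objects_total", cid="123456")
    print(result.stdout)

    all_metrics = metrics.storage.get_all_metrics().stdout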
@@ -1,18 +1,20 @@
 from abc import abstractmethod
 from dataclasses import dataclass
+from datetime import datetime, timezone
 from typing import Optional, TypedDict, TypeVar
 
 import yaml
+from dateutil import parser
+
+from frostfs_testlib import reporter
 from frostfs_testlib.hosting.config import ServiceConfig
 from frostfs_testlib.hosting.interfaces import Host
-from frostfs_testlib.reporter import get_reporter
+from frostfs_testlib.shell.interfaces import CommandResult
+from frostfs_testlib.storage.configuration.service_configuration import ServiceConfiguration, ServiceConfigurationYml
 from frostfs_testlib.storage.constants import ConfigAttributes
 from frostfs_testlib.testing.readable import HumanReadableABC
 from frostfs_testlib.utils import wallet_utils
 
-reporter = get_reporter()
-
 
 @dataclass
 class NodeBase(HumanReadableABC):
@@ -67,6 +69,12 @@ class NodeBase(HumanReadableABC):
     def service_healthcheck(self) -> bool:
         """Service healthcheck."""
 
+    # TODO: Migrate to sub-class Metrcis (not yet exists :))
+    def get_metric(self, metric: str) -> CommandResult:
+        shell = self.host.get_shell()
+        result = shell.exec(f"curl -s {self.get_metrics_endpoint()} | grep -e '^{metric}'")
+        return result
+
     def get_metrics_endpoint(self) -> str:
         return self._get_attribute(ConfigAttributes.ENDPOINT_PROMETHEUS)
 
@@ -107,12 +115,44 @@ class NodeBase(HumanReadableABC):
             ConfigAttributes.CONFIG_PATH,
         )
 
+    def get_remote_wallet_config_path(self) -> str:
+        """
+        Returns node config file path located on remote host
+        """
+        return self._get_attribute(
+            ConfigAttributes.REMOTE_WALLET_CONFIG,
+        )
+
     def get_wallet_config_path(self) -> str:
         return self._get_attribute(
             ConfigAttributes.LOCAL_WALLET_CONFIG,
             ConfigAttributes.WALLET_CONFIG,
         )
 
+    def get_logger_config_path(self) -> str:
+        """
+        Returns config path for logger located on remote host
+        """
+        config_attributes = self.host.get_service_config(self.name)
+        return (
+            self._get_attribute(ConfigAttributes.LOGGER_CONFIG_PATH)
+            if ConfigAttributes.LOGGER_CONFIG_PATH in config_attributes.attributes
+            else None
+        )
+
+    @property
+    def config_dir(self) -> str:
+        return self._get_attribute(ConfigAttributes.CONFIG_DIR)
+
+    @property
+    def main_config_path(self) -> str:
+        return self._get_attribute(ConfigAttributes.CONFIG_PATH)
+
+    @property
+    def config(self) -> ServiceConfigurationYml:
+        return ServiceConfiguration(self.name, self.host.get_shell(), self.config_dir, self.main_config_path)
+
+    # TODO: Deprecated. Use config with ServiceConfigurationYml interface
     def get_config(self, config_file_path: Optional[str] = None) -> tuple[str, dict]:
         if config_file_path is None:
             config_file_path = self._get_attribute(ConfigAttributes.CONFIG_PATH)
@@ -125,6 +165,7 @@ class NodeBase(HumanReadableABC):
         config = yaml.safe_load(config_text)
         return config_file_path, config
 
+    # TODO: Deprecated. Use config with ServiceConfigurationYml interface
     def save_config(self, new_config: dict, config_file_path: Optional[str] = None) -> None:
         if config_file_path is None:
             config_file_path = self._get_attribute(ConfigAttributes.CONFIG_PATH)
@@ -139,9 +180,7 @@ class NodeBase(HumanReadableABC):
         storage_wallet_pass = self.get_wallet_password()
         return wallet_utils.get_wallet_public_key(storage_wallet_path, storage_wallet_pass)
 
-    def _get_attribute(
-        self, attribute_name: str, default_attribute_name: Optional[str] = None
-    ) -> str:
+    def _get_attribute(self, attribute_name: str, default_attribute_name: Optional[str] = None) -> str:
         config = self.host.get_service_config(self.name)
 
         if attribute_name not in config.attributes:
@@ -157,6 +196,15 @@ class NodeBase(HumanReadableABC):
     def _get_service_config(self) -> ServiceConfig:
         return self.host.get_service_config(self.name)
 
+    def get_service_uptime(self, service: str) -> datetime:
+        result = self.host.get_shell().exec(
+            f"systemctl show {service} --property ActiveEnterTimestamp | cut -d '=' -f 2"
+        )
+        start_time = parser.parse(result.stdout.strip())
+        current_time = datetime.now(tz=timezone.utc)
+        active_time = current_time - start_time
+        return active_time
+
 
 ServiceClass = TypeVar("ServiceClass", bound=NodeBase)
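Reviewer note: the new uptime helper parses systemd's ActiveEnterTimestamp and returns the elapsed time, and the config property replaces raw get_config/save_config access. A sketch (the `node` fixture and the config key are illustrative, and set() is assumed from the ServiceConfigurationYml usage shown earlier in this diff):

    from datetime import timedelta

    uptime = node.storage_node.get_service_uptime(service=node.storage_node.name)
    assert uptime > timedelta(minutes=1)

    # Typed YAML access through the new property:
    node.storage_node.config.set({"logger:level": "debug"})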
--- /dev/null
+++ b/src/frostfs_testlib/storage/dataclasses/policy.py
@@ -0,0 +1,13 @@
+from dataclasses import dataclass
+
+
+@dataclass
+class PlacementPolicy:
+    name: str
+    value: str
+
+    def __str__(self) -> str:
+        return self.name
+
+    def __repr__(self) -> str:
+        return self.__str__()
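Reviewer note: PlacementPolicy is a small carrier for a named placement rule; because __str__ and __repr__ return only the short name, parametrized test IDs stay readable. A sketch (the policy string is illustrative):

    rep2 = PlacementPolicy("rep2", "REP 2 IN X CBF 1 SELECT 2 FROM * AS X")
    print(rep2)        # "rep2"
    print(rep2.value)  # the full placement rule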
--- /dev/null
+++ b/src/frostfs_testlib/storage/dataclasses/shard.py
@@ -0,0 +1,92 @@
+from dataclasses import dataclass
+
+from configobj import ConfigObj
+
+SHARD_PREFIX = "FROSTFS_STORAGE_SHARD_"
+BLOBSTOR_PREFIX = "_BLOBSTOR_"
+
+
+@dataclass
+class Blobstor:
+    path: str
+    path_type: str
+
+    def __eq__(self, other) -> bool:
+        if not isinstance(other, self.__class__):
+            raise RuntimeError(f"Only two {self.__class__.__name__} instances can be compared")
+        return self.path == other.path and self.path_type == other.path_type
+
+    def __hash__(self):
+        return hash((self.path, self.path_type))
+
+    @staticmethod
+    def from_config_object(section: ConfigObj, shard_id: str, blobstor_id: str):
+        var_prefix = f"{SHARD_PREFIX}{shard_id}{BLOBSTOR_PREFIX}{blobstor_id}"
+        return Blobstor(section.get(f"{var_prefix}_PATH"), section.get(f"{var_prefix}_TYPE"))
+
+
+@dataclass
+class Shard:
+    blobstor: list[Blobstor]
+    metabase: str
+    writecache: str
+    pilorama: str
+
+    def __eq__(self, other) -> bool:
+        if not isinstance(other, self.__class__):
+            raise RuntimeError(f"Only two {self.__class__.__name__} instances can be compared")
+        return (
+            set(self.blobstor) == set(other.blobstor)
+            and self.metabase == other.metabase
+            and self.writecache == other.writecache
+            and self.pilorama == other.pilorama
+        )
+
+    def __hash__(self):
+        return hash((self.metabase, self.writecache))
+
+    @staticmethod
+    def _get_blobstor_count_from_section(config_object: ConfigObj, shard_id: int):
+        pattern = f"{SHARD_PREFIX}{shard_id}{BLOBSTOR_PREFIX}"
+        blobstors = {key[: len(pattern) + 2] for key in config_object.keys() if pattern in key}
+        return len(blobstors)
+
+    @staticmethod
+    def from_config_object(config_object: ConfigObj, shard_id: int):
+        var_prefix = f"{SHARD_PREFIX}{shard_id}"
+
+        blobstor_count = Shard._get_blobstor_count_from_section(config_object, shard_id)
+        blobstors = [Blobstor.from_config_object(config_object, shard_id, blobstor_id) for blobstor_id in range(blobstor_count)]
+
+        write_cache_enabled = config_object.as_bool(f"{var_prefix}_WRITECACHE_ENABLED")
+
+        return Shard(
+            blobstors,
+            config_object.get(f"{var_prefix}_METABASE_PATH"),
+            config_object.get(f"{var_prefix}_WRITECACHE_PATH") if write_cache_enabled else "",
+        )
+
+    @staticmethod
+    def from_object(shard):
+        metabase = shard["metabase"]["path"] if "path" in shard["metabase"] else shard["metabase"]
+        writecache_enabled = True
+        if "enabled" in shard["writecache"]:
+            writecache_enabled = shard["writecache"]["enabled"]
+
+        writecache = shard["writecache"]["path"] if "path" in shard["writecache"] else shard["writecache"]
+        if not writecache_enabled:
+            writecache = ""
+
+        # Currently due to issue we need to check if pilorama exists in keys
+        # TODO: make pilorama mandatory after fix
+        if shard.get("pilorama"):
+            pilorama = shard["pilorama"]["path"] if "path" in shard["pilorama"] else shard["pilorama"]
+        else:
+            pilorama = None
+
+        return Shard(
+            blobstor=[Blobstor(path=blobstor["path"], path_type=blobstor["type"]) for blobstor in shard["blobstor"]],
+            metabase=metabase,
+            writecache=writecache,
+            pilorama=pilorama,
+        )
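Reviewer note: Shard.from_object accepts the dict shape produced by the node's YAML config, where metabase/writecache/pilorama may be either a plain path or a mapping with a "path" key. A sketch with illustrative paths:

    shard = Shard.from_object(
        {
            "blobstor": [{"path": "/srv/blob0", "type": "blobovnicza"}],
            "metabase": {"path": "/srv/meta"},
            "writecache": {"enabled": False, "path": "/srv/writecache"},
            "pilorama": {"path": "/srv/pilorama"},
        }
    )
    assert shard.writecache == ""  # a disabled write cache is normalized to an empty path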
@@ -1,7 +1,7 @@
 from dataclasses import dataclass
-from enum import Enum
 from typing import Optional
 
+from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
 from frostfs_testlib.testing.readable import HumanReadableEnum
 
 
@@ -20,7 +20,7 @@ class LockObjectInfo(ObjectRef):
 @dataclass
 class StorageObjectInfo(ObjectRef):
     size: Optional[int] = None
-    wallet_file_path: Optional[str] = None
+    wallet: Optional[WalletInfo] = None
     file_path: Optional[str] = None
     file_hash: Optional[str] = None
     attributes: Optional[list[dict[str, str]]] = None
@@ -28,10 +28,16 @@ class StorageObjectInfo(ObjectRef):
     locks: Optional[list[LockObjectInfo]] = None
 
 
+class NodeStatus(HumanReadableEnum):
+    MAINTENANCE: str = "maintenance"
+    ONLINE: str = "online"
+    OFFLINE: str = "offline"
+
+
 @dataclass
 class NodeNetmapInfo:
     node_id: str = None
-    node_status: str = None
+    node_status: NodeStatus = None
     node_data_ips: list[str] = None
     cluster_name: str = None
     continent: str = None
@@ -53,3 +59,21 @@ class Interfaces(HumanReadableEnum):
     MGMT: str = "mgmt"
     INTERNAL_0: str = "internal0"
     INTERNAL_1: str = "internal1"
+
+
+@dataclass
+class NodeNetInfo:
+    epoch: str = None
+    network_magic: str = None
+    time_per_block: str = None
+    container_fee: str = None
+    epoch_duration: str = None
+    inner_ring_candidate_fee: str = None
+    maximum_object_size: str = None
+    maximum_count_of_data_shards: str = None
+    maximum_count_of_parity_shards: str = None
+    withdrawal_fee: str = None
+    homomorphic_hashing_disabled: str = None
+    maintenance_mode_allowed: str = None
+    eigen_trust_alpha: str = None
+    eigen_trust_iterations: str = None
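Reviewer note: node_status is now a NodeStatus enum rather than a bare string, which makes checks like the controller's await_node_status earlier in this diff type-safe. A sketch (netmap_entry stands for one parsed snapshot row, illustrative):

    assert netmap_entry.node_status == NodeStatus.ONLINE  # previously compared against "online"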
@@ -1,13 +1,15 @@
 import json
 import logging
 import os
-import uuid
 from dataclasses import dataclass
 from typing import Optional
 
-from frostfs_testlib.resources.common import DEFAULT_WALLET_CONFIG, DEFAULT_WALLET_PASS
+import yaml
+
+from frostfs_testlib import reporter
+from frostfs_testlib.resources.common import ASSETS_DIR, DEFAULT_WALLET_CONFIG, DEFAULT_WALLET_PASS
 from frostfs_testlib.shell import Shell
-from frostfs_testlib.storage.cluster import Cluster, NodeBase
+from frostfs_testlib.storage.cluster import NodeBase
 from frostfs_testlib.utils.wallet_utils import get_last_address_from_wallet, init_wallet
 
 logger = logging.getLogger("frostfs.testlib.utils")
@@ -21,9 +23,13 @@ class WalletInfo:
 
     @staticmethod
     def from_node(node: NodeBase):
-        return WalletInfo(
-            node.get_wallet_path(), node.get_wallet_password(), node.get_wallet_config_path()
-        )
+        wallet_path = node.get_wallet_path()
+        wallet_password = node.get_wallet_password()
+        wallet_config_file = os.path.join(ASSETS_DIR, os.path.basename(node.get_wallet_config_path()))
+        with open(wallet_config_file, "w") as file:
+            file.write(yaml.dump({"wallet": wallet_path, "password": wallet_password}))
+
+        return WalletInfo(wallet_path, wallet_password, wallet_config_file)
 
     def get_address(self) -> str:
         """
@@ -47,22 +53,17 @@ class WalletInfo:
         """
         with open(self.path, "r") as wallet:
             wallet_json = json.load(wallet)
-        assert abs(account_id) + 1 <= len(
-            wallet_json["accounts"]
-        ), f"There is no index '{account_id}' in wallet: {wallet_json}"
+        assert abs(account_id) + 1 <= len(wallet_json["accounts"]), f"There is no index '{account_id}' in wallet: {wallet_json}"
 
         return wallet_json["accounts"][account_id]["address"]
 
 
 class WalletFactory:
-    def __init__(self, wallets_dir: str, shell: Shell, cluster: Cluster) -> None:
+    def __init__(self, wallets_dir: str, shell: Shell) -> None:
         self.shell = shell
         self.wallets_dir = wallets_dir
-        self.cluster = cluster
 
-    def create_wallet(
-        self, file_name: Optional[str] = None, password: Optional[str] = None
-    ) -> WalletInfo:
+    def create_wallet(self, file_name: str, password: Optional[str] = None) -> WalletInfo:
         """
         Creates new default wallet.
 
@@ -74,8 +75,6 @@ class WalletFactory:
             WalletInfo object of new wallet.
         """
 
-        if file_name is None:
-            file_name = str(uuid.uuid4())
         if password is None:
             password = ""
 
@@ -85,6 +84,8 @@ class WalletFactory:
         init_wallet(wallet_path, password)
 
         with open(wallet_config_path, "w") as config_file:
-            config_file.write(f'password: "{password}"')
+            config_file.write(f'wallet: {wallet_path}\npassword: "{password}"')
+
+        reporter.attach(wallet_path, os.path.basename(wallet_path))
 
         return WalletInfo(wallet_path, password, wallet_config_path)
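Reviewer note: WalletFactory no longer needs a Cluster, file_name is now a required argument, and the generated config records the wallet path alongside the password. A sketch (directory, shell and name are illustrative):

    factory = WalletFactory(wallets_dir="/tmp/wallets", shell=shell)
    wallet = factory.create_wallet(file_name="user", password="")
    print(wallet.config_path)  # YAML with both "wallet:" and "password:" keys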
@@ -1,7 +1,7 @@
 import time
 from typing import Optional
 
-from frostfs_testlib.reporter import get_reporter
+from frostfs_testlib import reporter
 from frostfs_testlib.resources.common import MORPH_BLOCK_TIME
 from frostfs_testlib.shell import Shell
 from frostfs_testlib.steps import epoch
@@ -9,15 +9,13 @@ from frostfs_testlib.storage.cluster import Cluster
 from frostfs_testlib.storage.dataclasses.frostfs_services import StorageNode
 from frostfs_testlib.utils import datetime_utils
 
-reporter = get_reporter()
-
-
 # To skip adding every mandatory singleton dependency to EACH test function
 class ClusterTestBase:
     shell: Shell
     cluster: Cluster
 
-    @reporter.step_deco("Tick {epochs_to_tick} epochs, wait {wait_block} block")
+    @reporter.step("Tick {epochs_to_tick} epochs, wait {wait_block} block")
     def tick_epochs(
         self,
         epochs_to_tick: int,
@@ -2,6 +2,8 @@ import itertools
 from concurrent.futures import Future, ThreadPoolExecutor
 from typing import Callable, Collection, Optional, Union
 
+MAX_WORKERS = 50
+
 
 def parallel(
     fn: Union[Callable, list[Callable]],
@@ -42,7 +44,7 @@ def parallel(
     exceptions = [future.exception() for future in futures if future.exception()]
     if exceptions:
         message = "\n".join([str(e) for e in exceptions])
-        raise RuntimeError(f"The following exceptions occured during parallel run:\n {message}")
+        raise RuntimeError(f"The following exceptions occured during parallel run:\n{message}")
     return futures
 
 
@@ -54,7 +56,7 @@ def _run_by_fn_list(fn_list: list[Callable], *args, **kwargs) -> list[Future]:
 
     futures: list[Future] = []
 
-    with ThreadPoolExecutor(max_workers=len(fn_list)) as executor:
+    with ThreadPoolExecutor(max_workers=min(len(fn_list), MAX_WORKERS)) as executor:
         for fn in fn_list:
             task_args = _get_args(*args)
             task_kwargs = _get_kwargs(**kwargs)
@@ -67,7 +69,7 @@ def _run_by_fn_list(fn_list: list[Callable], *args, **kwargs) -> list[Future]:
 def _run_by_items(fn: Callable, parallel_items: Collection, *args, **kwargs) -> list[Future]:
     futures: list[Future] = []
 
-    with ThreadPoolExecutor(max_workers=len(parallel_items)) as executor:
+    with ThreadPoolExecutor(max_workers=min(len(parallel_items), MAX_WORKERS)) as executor:
         for item in parallel_items:
             task_args = _get_args(*args)
             task_kwargs = _get_kwargs(**kwargs)
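Reviewer note: the pool size is now capped by MAX_WORKERS, so a run over hundreds of items no longer spawns one thread each. Usage stays the same (fn and items are illustrative):

    from frostfs_testlib.testing import parallel

    futures = parallel(fn, items)  # at most 50 workers regardless of len(items)
    results = [future.result() for future in futures]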
@@ -7,6 +7,9 @@ from typing import Any
 from _pytest.outcomes import Failed
 from pytest import fail
 
+from frostfs_testlib import reporter
+from frostfs_testlib.utils.func_utils import format_by_args
+
 logger = logging.getLogger("NeoLogger")
 
 # TODO: we may consider deprecating some methods here and use tenacity instead
@@ -50,7 +53,7 @@ class expect_not_raises:
         return impl
 
 
-def retry(max_attempts: int, sleep_interval: int = 1, expected_result: Any = None):
+def retry(max_attempts: int, sleep_interval: int = 1, expected_result: Any = None, title: str = None):
     """
     Decorator to wait for some conditions/functions to pass successfully.
     This is useful if you don't know exact time when something should pass successfully and do not
@@ -62,8 +65,7 @@ def retry(max_attempts: int, sleep_interval: int = 1, expected_result: Any = Non
     assert max_attempts >= 1, "Cannot apply retry decorator with max_attempts < 1"
 
     def wrapper(func):
-        @wraps(func)
-        def impl(*a, **kw):
+        def call(func, *a, **kw):
             last_exception = None
             for _ in range(max_attempts):
                 try:
@@ -84,6 +86,14 @@ def retry(max_attempts: int, sleep_interval: int = 1, expected_result: Any = Non
             if last_exception is not None:
                 raise last_exception
 
+        @wraps(func)
+        def impl(*a, **kw):
+            if title is not None:
+                with reporter.step(format_by_args(func, title, *a, **kw)):
+                    return call(func, *a, **kw)
+
+            return call(func, *a, **kw)
+
         return impl
 
     return wrapper
@@ -124,6 +134,7 @@ def wait_for_success(
     expected_result: Any = None,
     fail_testcase: bool = False,
     fail_message: str = "",
+    title: str = None,
 ):
     """
     Decorator to wait for some conditions/functions to pass successfully.
@@ -134,8 +145,7 @@ def wait_for_success(
     """
 
     def wrapper(func):
-        @wraps(func)
-        def impl(*a, **kw):
+        def call(func, *a, **kw):
             start = int(round(time()))
             last_exception = None
             while start + max_wait_time >= int(round(time())):
@@ -160,6 +170,14 @@ def wait_for_success(
             if last_exception is not None:
                 raise last_exception
 
+        @wraps(func)
+        def impl(*a, **kw):
+            if title is not None:
+                with reporter.step(format_by_args(func, title, *a, **kw)):
+                    return call(func, *a, **kw)
+
+            return call(func, *a, **kw)
+
         return impl
 
     return wrapper
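The net effect of the `call`/`impl` split: the retry loop itself moves into `call`, and the new outer `impl` optionally wraps the whole set of attempts in one reporter step whose `title` placeholders are rendered from the decorated function's arguments. A hedged usage sketch (function bodies are placeholders):

    from frostfs_testlib.testing.test_control import retry, wait_for_success


    @retry(max_attempts=3, sleep_interval=2, title="Ping {host}")
    def ping(host: str) -> None:
        ...  # raises until the host answers


    @wait_for_success(60, 5, title="Wait for {service} to become active")
    def ensure_active(service: str) -> None:
        ...  # asserts on the service status

Since `title` defaults to None, every existing call site keeps its current behavior.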
@@ -1,8 +1,9 @@
+"""
+Idea of utils is to have small utilitary functions which are not dependent of anything.
+"""
+
 import frostfs_testlib.utils.converting_utils
 import frostfs_testlib.utils.datetime_utils
 import frostfs_testlib.utils.json_utils
 import frostfs_testlib.utils.string_utils
 import frostfs_testlib.utils.wallet_utils
-
-# TODO: Circullar dependency FileKeeper -> NodeBase -> Utils -> FileKeeper -> NodeBase
-from frostfs_testlib.utils.file_keeper import FileKeeper
@@ -19,10 +19,9 @@ from typing import Dict, List, TypedDict, Union
 
 import pexpect
 
-from frostfs_testlib.reporter import get_reporter
+from frostfs_testlib import reporter
 from frostfs_testlib.storage.dataclasses.storage_object_info import NodeNetmapInfo
 
-reporter = get_reporter()
 logger = logging.getLogger("NeoLogger")
 COLOR_GREEN = "\033[92m"
 COLOR_OFF = "\033[0m"
@@ -42,7 +41,7 @@ def _run_with_passwd(cmd: str) -> str:
     return cmd.decode()
 
 
-def _configure_aws_cli(cmd: str, key_id: str, access_key: str, out_format: str = "json") -> str:
+def _configure_aws_cli(cmd: str, key_id: str, access_key: str, region: str, out_format: str = "json") -> str:
     child = pexpect.spawn(cmd)
     child.delaybeforesend = 1
 
@@ -53,7 +52,7 @@ def _configure_aws_cli(cmd: str, key_id: str, access_key: str, out_format: str =
     child.sendline(access_key)
 
     child.expect("Default region name.*")
-    child.sendline("")
+    child.sendline("region")
 
     child.expect("Default output format.*")
     child.sendline(out_format)
@@ -65,9 +64,7 @@ def _configure_aws_cli(cmd: str, key_id: str, access_key: str, out_format: str =
     return cmd.decode()
 
 
-def _attach_allure_log(
-    cmd: str, output: str, return_code: int, start_time: datetime, end_time: datetime
-) -> None:
+def _attach_allure_log(cmd: str, output: str, return_code: int, start_time: datetime, end_time: datetime) -> None:
     command_attachment = (
         f"COMMAND: '{cmd}'\n"
        f"OUTPUT:\n {output}\n"
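A new `region` parameter is threaded into the interactive `aws configure` dialogue; note that this hunk answers the region prompt with the literal string "region" rather than the new parameter. The pexpect flow itself is a plain prompt/response loop; a self-contained sketch with placeholder values:

    import pexpect

    child = pexpect.spawn("aws configure")
    for prompt, answer in [
        ("AWS Access Key ID.*", "EXAMPLE_KEY_ID"),      # placeholder, not a real key
        ("AWS Secret Access Key.*", "EXAMPLE_SECRET"),  # placeholder
        ("Default region name.*", "us-east-1"),
        ("Default output format.*", "json"),
    ]:
        child.expect(prompt)
        child.sendline(answer)
    child.expect(pexpect.EOF)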
@@ -1,13 +1,12 @@
 import logging
 import re
 
-from frostfs_testlib.reporter import get_reporter
+from frostfs_testlib import reporter
 
-reporter = get_reporter()
 logger = logging.getLogger("NeoLogger")
 
 
-@reporter.step_deco("Read environment.properties")
+@reporter.step("Read environment.properties")
 def read_env_properties(file_path: str) -> dict:
     with open(file_path, "r") as file:
         raw_content = file.read()
@@ -23,7 +22,7 @@ def read_env_properties(file_path: str) -> dict:
     return env_properties
 
 
-@reporter.step_deco("Update data in environment.properties")
+@reporter.step("Update data in environment.properties")
 def save_env_properties(file_path: str, env_data: dict) -> None:
     with open(file_path, "a+") as env_file:
         for env, env_value in env_data.items():
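For context, environment.properties is a flat `key=value` file, so the round trip through these helpers is short. An illustrative sketch (the file name and keys are arbitrary):

    props = read_env_properties("environment.properties")
    props["build"] = "1.2.3"
    save_env_properties("environment.properties", props)

Since `save_env_properties` opens the file in `a+` mode, updated keys are appended to the end rather than rewritten in place.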
@@ -3,72 +3,22 @@ from dataclasses import dataclass
 from time import sleep
 from typing import Optional
 
-from frostfs_testlib.hosting import Host
-from frostfs_testlib.reporter import get_reporter
+from frostfs_testlib import reporter
 from frostfs_testlib.resources.common import SERVICE_MAX_STARTUP_TIME
-from frostfs_testlib.shell import CommandOptions, Shell
+from frostfs_testlib.shell import Shell
 from frostfs_testlib.steps.cli.object import neo_go_dump_keys
 from frostfs_testlib.steps.node_management import storage_node_healthcheck
 from frostfs_testlib.steps.storage_policy import get_nodes_with_object
 from frostfs_testlib.storage.cluster import Cluster, ClusterNode, NodeBase, StorageNode
 from frostfs_testlib.storage.dataclasses.frostfs_services import MorphChain
-from frostfs_testlib.testing.test_control import retry, wait_for_success
+from frostfs_testlib.storage.dataclasses.node_base import ServiceClass
+from frostfs_testlib.testing.test_control import wait_for_success
 from frostfs_testlib.utils.datetime_utils import parse_time
 
-reporter = get_reporter()
 
 logger = logging.getLogger("NeoLogger")
 
 
-@reporter.step_deco("Ping node")
-def ping_host(shell: Shell, host: Host):
-    options = CommandOptions(check=False)
-    return shell.exec(f"ping {host.config.address} -c 1", options).return_code
-
-
-@reporter.step_deco("Wait for storage nodes returned to cluster")
-def wait_all_storage_nodes_returned(shell: Shell, cluster: Cluster) -> None:
-    for node in cluster.services(StorageNode):
-        with reporter.step(f"Run health check for storage at '{node}'"):
-            wait_for_host_online(shell, node)
-            wait_for_node_online(node)
-
-
-@retry(max_attempts=60, sleep_interval=5, expected_result=0)
-@reporter.step_deco("Waiting for host of {node} to go online")
-def wait_for_host_online(shell: Shell, node: StorageNode):
-    try:
-        # TODO: Quick solution for now, should be replaced by lib interactions
-        return ping_host(shell, node.host)
-    except Exception as err:
-        logger.warning(f"Host ping fails with error {err}")
-        return 1
-
-
-@retry(max_attempts=60, sleep_interval=5, expected_result=1)
-@reporter.step_deco("Waiting for host of {node} to go offline")
-def wait_for_host_offline(shell: Shell, node: StorageNode):
-    try:
-        # TODO: Quick solution for now, should be replaced by lib interactions
-        return ping_host(shell, node.host)
-    except Exception as err:
-        logger.warning(f"Host ping fails with error {err}")
-        return 0
-
-
-@retry(max_attempts=20, sleep_interval=30, expected_result=True)
-@reporter.step_deco("Waiting for node {node} to go online")
-def wait_for_node_online(node: StorageNode):
-    try:
-        health_check = storage_node_healthcheck(node)
-    except Exception as err:
-        logger.warning(f"Node healthcheck fails with error {err}")
-        return False
-
-    return health_check.health_status == "READY" and health_check.network_status == "ONLINE"
-
-
-@reporter.step_deco("Check and return status of given service")
+@reporter.step("Check and return status of given service")
 def service_status(service: str, shell: Shell) -> str:
     return shell.exec(f"sudo systemctl is-active {service}").stdout.rstrip()
@@ -121,14 +71,14 @@ class TopCommand:
     )
 
 
-@reporter.step_deco("Run `top` command with specified PID")
+@reporter.step("Run `top` command with specified PID")
 def service_status_top(service: str, shell: Shell) -> TopCommand:
     pid = service_pid(service, shell)
     output = shell.exec(f"sudo top -b -n 1 -p {pid}").stdout
     return TopCommand.from_stdout(output, pid)
 
 
-@reporter.step_deco("Restart service n times with sleep")
+@reporter.step("Restart service n times with sleep")
 def multiple_restart(
     service_type: type[NodeBase],
     node: ClusterNode,
@@ -139,19 +89,16 @@ def multiple_restart(
     service_name = node.service(service_type).name
     for _ in range(count):
         node.host.restart_service(service_name)
-        logger.info(
-            f"Restart {service_systemctl_name}; sleep {sleep_interval} seconds and continue"
-        )
+        logger.info(f"Restart {service_systemctl_name}; sleep {sleep_interval} seconds and continue")
         sleep(sleep_interval)
 
 
-@reporter.step_deco("Get status of list of services and check expected status")
-@wait_for_success(60, 5)
-def check_services_status(service_list: list[str], expected_status: str, shell: Shell):
+@wait_for_success(60, 5, title="Wait for services become {expected_status} on node {cluster_node}")
+def check_services_status(cluster_node: ClusterNode, service_list: list[ServiceClass], expected_status: str):
     cmd = ""
     for service in service_list:
-        cmd += f' sudo systemctl status {service} --lines=0 | grep "Active:";'
-    result = shell.exec(cmd).stdout.rstrip()
+        cmd += f' sudo systemctl status {service.get_service_systemctl_name()} --lines=0 | grep "Active:";'
+    result = cluster_node.host.get_shell().exec(cmd).stdout.rstrip()
     statuses = list()
     for line in result.split("\n"):
         status_substring = line.split()
@@ -162,19 +109,15 @@ def check_services_status(service_list: list[str], expected_status: str, shell:
     ), f"Requested status={expected_status} not found in requested services={service_list}, list of statuses={result}"
 
 
-@reporter.step_deco("Wait for active status of passed service")
-@wait_for_success(60, 5)
-def wait_service_in_desired_state(
-    service: str, shell: Shell, expected_status: Optional[str] = "active"
-):
+@wait_for_success(60, 5, title="Wait for {service} become active")
+def wait_service_in_desired_state(service: str, shell: Shell, expected_status: Optional[str] = "active"):
     real_status = service_status(service=service, shell=shell)
     assert (
         expected_status == real_status
     ), f"Service {service}: expected status= {expected_status}, real status {real_status}"
 
 
-@reporter.step_deco("Run healthcheck against passed service")
-@wait_for_success(parse_time(SERVICE_MAX_STARTUP_TIME), 1)
+@wait_for_success(parse_time(SERVICE_MAX_STARTUP_TIME), 1, title="Wait for {service_type} passes healtcheck on {node}")
 def service_type_healthcheck(
     service_type: type[NodeBase],
     node: ClusterNode,
@@ -185,26 +128,25 @@ def service_type_healthcheck(
     ), f"Healthcheck failed for {service.get_service_systemctl_name()}, IP={node.host_ip}"
 
 
-@reporter.step_deco("Kill by process name")
+@reporter.step("Kill by process name")
 def kill_by_service_name(service_type: type[NodeBase], node: ClusterNode):
     service_systemctl_name = node.service(service_type).get_service_systemctl_name()
     pid = service_pid(service_systemctl_name, node.host.get_shell())
     node.host.get_shell().exec(f"sudo kill -9 {pid}")
 
 
-@reporter.step_deco("Service {service} suspend")
+@reporter.step("Suspend {service}")
 def suspend_service(shell: Shell, service: str):
     shell.exec(f"sudo kill -STOP {service_pid(service, shell)}")
 
 
-@reporter.step_deco("Service {service} resume")
+@reporter.step("Resume {service}")
 def resume_service(shell: Shell, service: str):
     shell.exec(f"sudo kill -CONT {service_pid(service, shell)}")
 
 
-@reporter.step_deco("Retrieve service's pid")
 # retry mechanism cause when the task has been started recently '0' PID could be returned
-@wait_for_success(10, 1)
+@wait_for_success(10, 1, title="Get {service} pid")
 def service_pid(service: str, shell: Shell) -> int:
     output = shell.exec(f"systemctl show --property MainPID {service}").stdout.rstrip()
     splitted = output.split("=")
@@ -213,7 +155,7 @@ def service_pid(service: str, shell: Shell) -> int:
     return PID
 
 
-@reporter.step_deco("Wrapper for neo-go dump keys command")
+@reporter.step("Wrapper for neo-go dump keys command")
 def dump_keys(shell: Shell, node: ClusterNode) -> dict:
     host = node.host
     service_config = host.get_service_config(node.service(MorphChain).name)
@@ -221,7 +163,7 @@ def dump_keys(shell: Shell, node: ClusterNode) -> dict:
     return neo_go_dump_keys(shell=shell, wallet=wallet)
 
 
-@reporter.step_deco("Wait for object replication")
+@reporter.step("Wait for object replication")
 def wait_object_replication(
     cid: str,
     oid: str,
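The reworked `check_services_status` derives both the shell and the systemctl unit names from the node itself, so callers pass service objects instead of raw unit-name strings. A hedged call sketch (how `cluster_node` and the service instance are obtained is an assumption based on the surrounding code):

    node = cluster.cluster_nodes[0]  # hypothetical ClusterNode lookup
    check_services_status(
        cluster_node=node,
        service_list=[node.service(StorageNode)],
        expected_status="active",
    )

Because the decorator is now `@wait_for_success(60, 5, title=...)`, the reporter step title is rendered from these same arguments, replacing the old combination of `step_deco` plus a bare `wait_for_success`.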
@@ -1,17 +1,15 @@
 from concurrent.futures import ThreadPoolExecutor
 
-from frostfs_testlib.reporter import get_reporter
+from frostfs_testlib import reporter
 from frostfs_testlib.storage.dataclasses.node_base import NodeBase
 
-reporter = get_reporter()
-
 
 class FileKeeper:
     """This class is responsible to make backup copy of modified file and restore when required (mostly after the test)"""
 
     files_to_restore: dict[NodeBase, list[str]] = {}
 
-    @reporter.step_deco("Adding {file_to_restore} from node {node} to restore list")
+    @reporter.step("Adding {file_to_restore} from node {node} to restore list")
     def add(self, node: NodeBase, file_to_restore: str):
         if node in self.files_to_restore and file_to_restore in self.files_to_restore[node]:
             # Already added
@@ -26,7 +24,7 @@ class FileKeeper:
         shell = node.host.get_shell()
         shell.exec(f"cp {file_to_restore} {file_to_restore}.bak")
 
-    @reporter.step_deco("Restore files")
+    @reporter.step("Restore files")
     def restore_files(self):
         nodes = self.files_to_restore.keys()
         if not nodes:
@@ -41,7 +39,7 @@ class FileKeeper:
             # Iterate through results for exception check if any
             pass
 
-    @reporter.step_deco("Restore files on node {node}")
+    @reporter.step("Restore files on node {node}")
     def _restore_files_on_node(self, node: NodeBase):
         shell = node.host.get_shell()
         for file_to_restore in self.files_to_restore[node]:
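A typical FileKeeper lifecycle, sketched (the config path is hypothetical): back a file up before mutating it, then restore everything in one call after the test; `restore_files` fans the per-node restores out through a ThreadPoolExecutor.

    keeper = FileKeeper()
    keeper.add(storage_node, "/etc/frostfs/storage/config.yml")  # hypothetical path
    try:
        ...  # mutate the config and run the destructive part of the test
    finally:
        keeper.restore_files()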
@@ -4,14 +4,48 @@ import os
 import uuid
 from typing import Any, Optional
 
-from frostfs_testlib.reporter import get_reporter
+from frostfs_testlib import reporter
 from frostfs_testlib.resources.common import ASSETS_DIR
+from frostfs_testlib.utils import string_utils
 
-reporter = get_reporter()
 logger = logging.getLogger("NeoLogger")
 
 
-def generate_file(size: int) -> str:
+class TestFile(os.PathLike):
+    def __init__(self, path: str):
+        self.path = path
+
+    def __del__(self):
+        logger.debug(f"Removing file {self.path}")
+        if os.path.exists(self.path):
+            os.remove(self.path)
+
+    def __str__(self):
+        return self.path
+
+    def __repr__(self):
+        return self.path
+
+    def __fspath__(self):
+        return self.path
+
+
+def ensure_directory(path):
+    directory = os.path.dirname(path)
+
+    if not os.path.exists(directory):
+        os.makedirs(directory)
+
+
+def ensure_directory_opener(path, flags):
+    ensure_directory(path)
+    return os.open(path, flags)
+
+
+# TODO: Do not add {size} to title yet, since it produces dynamic info in top level steps
+# Use object_size dt in future as argument
+@reporter.step("Generate file")
+def generate_file(size: int) -> TestFile:
     """Generates a binary file with the specified size in bytes.
 
     Args:
@@ -20,19 +54,22 @@ def generate_file(size: int) -> str:
     Returns:
         The path to the generated file.
     """
-    file_path = os.path.join(ASSETS_DIR, str(uuid.uuid4()))
-    with open(file_path, "wb") as file:
+    test_file = TestFile(os.path.join(ASSETS_DIR, string_utils.unique_name("object-")))
+    with open(test_file, "wb", opener=ensure_directory_opener) as file:
         file.write(os.urandom(size))
-    logger.info(f"File with size {size} bytes has been generated: {file_path}")
+    logger.info(f"File with size {size} bytes has been generated: {test_file}")
 
-    return file_path
+    return test_file
 
 
+# TODO: Do not add {size} to title yet, since it produces dynamic info in top level steps
+# Use object_size dt in future as argument
+@reporter.step("Generate file with content")
 def generate_file_with_content(
     size: int,
-    file_path: Optional[str] = None,
+    file_path: Optional[str | TestFile] = None,
     content: Optional[str] = None,
-) -> str:
+) -> TestFile:
     """Creates a new file with specified content.
 
     Args:
@@ -49,20 +86,22 @@ def generate_file_with_content(
         content = os.urandom(size)
         mode = "wb"
 
+    test_file = None
     if not file_path:
-        file_path = os.path.join(os.getcwd(), ASSETS_DIR, str(uuid.uuid4()))
+        test_file = TestFile(os.path.join(os.getcwd(), ASSETS_DIR, str(uuid.uuid4())))
+    elif isinstance(file_path, TestFile):
+        test_file = file_path
     else:
-        if not os.path.exists(os.path.dirname(file_path)):
-            os.makedirs(os.path.dirname(file_path))
+        test_file = TestFile(file_path)
 
-    with open(file_path, mode) as file:
+    with open(test_file, mode, opener=ensure_directory_opener) as file:
         file.write(content)
 
-    return file_path
+    return test_file
 
 
-@reporter.step_deco("Get File Hash")
-def get_file_hash(file_path: str, len: Optional[int] = None, offset: Optional[int] = None) -> str:
+@reporter.step("Get File Hash")
+def get_file_hash(file_path: str | TestFile, len: Optional[int] = None, offset: Optional[int] = None) -> str:
     """Generates hash for the specified file.
 
     Args:
@@ -88,8 +127,8 @@ def get_file_hash(file_path: str, len: Optional[int] = None, offset: Optional[in
     return file_hash.hexdigest()
 
 
-@reporter.step_deco("Concatenation set of files to one file")
-def concat_files(file_paths: list, resulting_file_path: Optional[str] = None) -> str:
+@reporter.step("Concatenation set of files to one file")
+def concat_files(file_paths: list[str | TestFile], resulting_file_path: Optional[str | TestFile] = None) -> TestFile:
     """Concatenates several files into a single file.
 
     Args:
@@ -99,16 +138,24 @@ def concat_files(file_paths: list, resulting_file_path: Optional[str] = None) ->
     Returns:
         Path to the resulting file.
     """
+
+    test_file = None
     if not resulting_file_path:
-        resulting_file_path = os.path.join(os.getcwd(), ASSETS_DIR, str(uuid.uuid4()))
-    with open(resulting_file_path, "wb") as f:
+        test_file = TestFile(os.path.join(os.getcwd(), ASSETS_DIR, str(uuid.uuid4())))
+    elif isinstance(resulting_file_path, TestFile):
+        test_file = resulting_file_path
+    else:
+        test_file = TestFile(resulting_file_path)
+
+    with open(test_file, "wb", opener=ensure_directory_opener) as f:
        for file in file_paths:
            with open(file, "rb") as part_file:
                f.write(part_file.read())
-    return resulting_file_path
+    return test_file
 
 
-def split_file(file_path: str, parts: int) -> list[str]:
+@reporter.step("Split file to {parts} parts")
+def split_file(file_path: str | TestFile, parts: int) -> list[TestFile]:
     """Splits specified file into several specified number of parts.
 
     Each part is saved under name `{original_file}_part_{i}`.
@@ -130,7 +177,7 @@ def split_file(file_path: str, parts: int) -> list[str]:
     part_file_paths = []
     for content_offset in range(0, content_size + 1, chunk_size):
         part_file_name = f"{file_path}_part_{part_id}"
-        part_file_paths.append(part_file_name)
+        part_file_paths.append(TestFile(part_file_name))
         with open(part_file_name, "wb") as out_file:
             out_file.write(content[content_offset : content_offset + chunk_size])
         part_id += 1
@@ -138,9 +185,8 @@ def split_file(file_path: str, parts: int) -> list[str]:
     return part_file_paths
 
 
-def get_file_content(
-    file_path: str, content_len: Optional[int] = None, mode: str = "r", offset: Optional[int] = None
-) -> Any:
+@reporter.step("Get file content")
+def get_file_content(file_path: str | TestFile, content_len: Optional[int] = None, mode: str = "r", offset: Optional[int] = None) -> Any:
     """Returns content of specified file.
 
     Args:
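The key behavioral change above: the file helpers now return `TestFile`, which is `os.PathLike` (so `open()` and path functions accept it directly) and deletes its backing file in `__del__`. A small sketch of the resulting semantics under CPython's reference counting:

    import os

    test_file = generate_file(size=1024)
    with open(test_file, "rb") as f:  # os.PathLike: no str() conversion needed
        assert len(f.read()) == 1024

    path = str(test_file)
    del test_file                     # last reference dropped -> __del__ removes the file
    assert not os.path.exists(path)

The practical rule for tests: keep a reference to the `TestFile` for as long as the on-disk data must survive. The `ensure_directory_opener` passed to `open()` also means ASSETS_DIR no longer has to exist up front.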
58
src/frostfs_testlib/utils/func_utils.py
Normal file

@@ -0,0 +1,58 @@
+import collections
+import inspect
+import sys
+from typing import Callable
+
+
+def format_by_args(__func: Callable, __title: str, *a, **kw) -> str:
+    params = _func_parameters(__func, *a, **kw)
+    args = list(map(lambda x: _represent(x), a))
+
+    return __title.format(*args, **params)
+
+
+# These 2 functions are copied from allure_commons._allure
+# Duplicate it here in order to be independent of allure and make some adjustments.
+def _represent(item):
+    if isinstance(item, str):
+        return item
+    elif isinstance(item, (bytes, bytearray)):
+        return repr(type(item))
+    else:
+        return repr(item)
+
+
+def _func_parameters(func, *args, **kwargs):
+    parameters = {}
+    arg_spec = inspect.getfullargspec(func)
+    arg_order = list(arg_spec.args)
+    args_dict = dict(zip(arg_spec.args, args))
+
+    if arg_spec.defaults:
+        kwargs_defaults_dict = dict(zip(arg_spec.args[-len(arg_spec.defaults) :], arg_spec.defaults))
+        parameters.update(kwargs_defaults_dict)
+
+    if arg_spec.varargs:
+        arg_order.append(arg_spec.varargs)
+        varargs = args[len(arg_spec.args) :]
+        parameters.update({arg_spec.varargs: varargs} if varargs else {})
+
+    if arg_spec.args and arg_spec.args[0] in ["cls", "self"]:
+        args_dict.pop(arg_spec.args[0], None)
+
+    if kwargs:
+        if sys.version_info < (3, 7):
+            # Sort alphabetically as old python versions does
+            # not preserve call order for kwargs.
+            arg_order.extend(sorted(list(kwargs.keys())))
+        else:
+            # Keep py3.7 behaviour to preserve kwargs order
+            arg_order.extend(list(kwargs.keys()))
+        parameters.update(kwargs)
+
+    parameters.update(args_dict)
+
+    items = parameters.items()
+    sorted_items = sorted(map(lambda kv: (kv[0], _represent(kv[1])), items), key=lambda x: arg_order.index(x[0]))
+
+    return collections.OrderedDict(sorted_items)
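A quick demonstration of what the formatter produces, which is exactly what `retry` and `wait_for_success` feed into `reporter.step` (the example function and title are illustrative):

    from frostfs_testlib.utils.func_utils import format_by_args


    def restart(service: str, attempts: int = 3) -> None:
        ...


    print(format_by_args(restart, "Restart {service} up to {attempts} times", "frostfs-storage"))
    # -> Restart frostfs-storage up to 3 times

Defaults participate too: `attempts` was never passed at the call site, yet its default value fills the placeholder.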
Some files were not shown because too many files have changed in this diff.