forked from TrueCloudLab/frostfs-testlib

Compare commits: master...get_versio (1 commit)

Commit: 3cb2f28ef5
102 changed files with 2264 additions and 5626 deletions
.gitignore (vendored) | 2
@@ -1,7 +1,6 @@
# ignore IDE files
.vscode
.idea
venv.*

# ignore temp files under any path
.DS_Store

@@ -11,4 +10,3 @@ venv.*
/dist
/build
*.egg-info
wallet_config.yml
@@ -3,8 +3,8 @@
First, thank you for contributing! We love and encourage pull requests from
everyone. Please follow the guidelines:

- Check the open [issues](https://git.frostfs.info/TrueCloudLab/frostfs-testlib/issues) and
  [pull requests](https://git.frostfs.info/TrueCloudLab/frostfs-testlib/pulls) for existing
- Check the open [issues](https://github.com/TrueCloudLab/frostfs-testlib/issues) and
  [pull requests](https://github.com/TrueCloudLab/frostfs-testlib/pulls) for existing
  discussions.

- Open an issue first, to discuss a new feature or enhancement.

@@ -26,8 +26,8 @@ Start by forking the `frostfs-testlib` repository, make changes in a branch and
send a pull request. We encourage pull requests to discuss code changes. Here
are the steps in details:

### Set up your Git Repository
Fork [FrostFS testlib upstream](https://git.frostfs.info/TrueCloudLab/frostfs-testlib/forks) source
### Set up your GitHub Repository
Fork [FrostFS testlib upstream](https://github.com/TrueCloudLab/frostfs-testlib/fork) source
repository to your own personal repository. Copy the URL of your fork and clone it:

```shell

@@ -37,7 +37,7 @@ $ git clone <url of your fork>
### Set up git remote as ``upstream``
```shell
$ cd frostfs-testlib
$ git remote add upstream https://git.frostfs.info/TrueCloudLab/frostfs-testlib
$ git remote add upstream https://github.com/TrueCloudLab/frostfs-testlib
$ git fetch upstream
```

@@ -63,9 +63,9 @@ $ git checkout -b feature/123-something_awesome
```

### Test your changes
Before submitting any changes to the library, please, make sure that linter and all unit tests are passing. To run the tests, please, use the following command:
Before submitting any changes to the library, please, make sure that all unit tests are passing. To run the tests, please, use the following command:
```shell
$ make validation
$ python -m unittest discover --start-directory tests
```

To enable tests that interact with SSH server, please, setup SSH server and set the following environment variables before running the tests:

@@ -99,8 +99,8 @@ $ git push origin feature/123-something_awesome
```

### Create a Pull Request
Pull requests can be created via Git. Refer to [this
document](https://docs.codeberg.org/collaborating/pull-requests-and-git-flow/) for
Pull requests can be created via GitHub. Refer to [this
document](https://help.github.com/articles/creating-a-pull-request/) for
detailed steps on how to create a pull request. After a Pull Request gets peer
reviewed and approved, it will be merged.
Makefile | 41

@@ -1,11 +1,8 @@
SHELL := /bin/bash
PYTHON_VERSION := 3.10
VENV_NAME := frostfs-testlib
VENV_DIR := venv.${VENV_NAME}
VENV_DIR := venv.frostfs-testlib

current_dir := $(shell pwd)
DIRECTORIES := $(sort $(dir $(wildcard ../frostfs-testlib-plugin-*/ ../*-testcases/)))
FROM_VENV := . ${VENV_DIR}/bin/activate &&

venv: create requirements paths precommit
    @echo Ready

@@ -16,35 +13,15 @@ precommit:

paths:
    @echo Append paths for project
    @echo Virtual environment: ${current_dir}/${VENV_DIR}
    @rm -rf ${VENV_DIR}/lib/python${PYTHON_VERSION}/site-packages/_paths.pth
    @touch ${VENV_DIR}/lib/python${PYTHON_VERSION}/site-packages/_paths.pth
    @echo ${current_dir}/src | tee ${VENV_DIR}/lib/python${PYTHON_VERSION}/site-packages/_paths.pth
    @echo Virtual environment: ${VENV_DIR}
    @sudo rm -rf ${VENV_DIR}/lib/python${PYTHON_VERSION}/site-packages/_paths.pth
    @sudo touch ${VENV_DIR}/lib/python${PYTHON_VERSION}/site-packages/_paths.pth
    @echo ${current_dir}/src/frostfs_testlib_frostfs_testlib | sudo tee ${VENV_DIR}/lib/python${PYTHON_VERSION}/site-packages/_paths.pth

create: ${VENV_DIR}

${VENV_DIR}:
    @echo Create virtual environment ${current_dir}/${VENV_DIR}
    virtualenv --python=python${PYTHON_VERSION} --prompt=${VENV_NAME} ${VENV_DIR}
create:
    @echo Create virtual environment for
    virtualenv --python=python${PYTHON_VERSION} --prompt=frostfs-testlib ${VENV_DIR}

requirements:
    @echo Installing pip requirements
    . ${VENV_DIR}/bin/activate && pip install -Ur requirements.txt


#### VALIDATION SECTION ####
lint: create requirements
    ${FROM_VENV} pylint --disable R,C,W ./src

unit_test:
    @echo Starting unit tests
    ${FROM_VENV} python -m pytest tests

.PHONY: lint_dependent $(DIRECTORIES)
lint_dependent: $(DIRECTORIES)

$(DIRECTORIES):
    @echo checking dependent repo $@
    $(MAKE) validation -C $@

validation: lint unit_test lint_dependent
    . ${VENV_DIR}/bin/activate && pip install -Ur requirements.txt
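Note: the `paths` target above works by dropping a one-line `_paths.pth` file into the virtual environment's site-packages so the repository's `src` tree becomes importable. A minimal sketch of that mechanism, with hypothetical paths (the real target derives them from `PYTHON_VERSION` and `VENV_DIR`):

```python
# Sketch of the _paths.pth mechanism; paths are illustrative, not taken from the Makefile verbatim.
import pathlib
import site
import sys

site_packages = pathlib.Path("venv.frostfs-testlib/lib/python3.10/site-packages")
site_packages.mkdir(parents=True, exist_ok=True)
src_dir = pathlib.Path("src").resolve()
src_dir.mkdir(exist_ok=True)

# Each line of a *.pth file that names an existing directory is appended to sys.path.
(site_packages / "_paths.pth").write_text(f"{src_dir}\n")

# The interpreter normally processes site-packages at startup; addsitedir() does it on demand.
site.addsitedir(str(site_packages))
print(str(src_dir) in sys.path)
```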
@@ -92,4 +92,4 @@ The library provides the following primary components:


## Contributing
Any contributions to the library should conform to the [contribution guideline](https://git.frostfs.info/TrueCloudLab/frostfs-testlib/src/branch/master/CONTRIBUTING.md).
Any contributions to the library should conform to the [contribution guideline](https://github.com/TrueCloudLab/frostfs-testlib/blob/master/CONTRIBUTING.md).
@@ -18,11 +18,11 @@ keywords = ["frostfs", "test"]
dependencies = [
    "allure-python-commons>=2.13.2",
    "docker>=4.4.0",
    "pyyaml==6.0.1",
    "importlib_metadata>=5.0; python_version < '3.10'",
    "neo-mamba==1.0.0",
    "paramiko>=2.10.3",
    "pexpect>=4.8.0",
    "requests==2.28.1",
    "requests>=2.28.0",
    "docstring_parser>=0.15",
    "testrail-api>=1.12.0",
    "pytest==7.1.2",

@@ -36,7 +36,7 @@ requires-python = ">=3.10"
dev = ["black", "bumpver", "isort", "pre-commit"]

[project.urls]
Homepage = "https://git.frostfs.info/TrueCloudLab/frostfs-testlib"
Homepage = "https://github.com/TrueCloudLab/frostfs-testlib"

[project.entry-points."frostfs.testlib.reporter"]
allure = "frostfs_testlib.reporter.allure_handler:AllureHandler"

@@ -44,26 +44,13 @@ allure = "frostfs_testlib.reporter.allure_handler:AllureHandler"
[project.entry-points."frostfs.testlib.hosting"]
docker = "frostfs_testlib.hosting.docker_host:DockerHost"

[project.entry-points."frostfs.testlib.healthcheck"]
basic = "frostfs_testlib.healthcheck.basic_healthcheck:BasicHealthcheck"

[project.entry-points."frostfs.testlib.csc_managers"]
config = "frostfs_testlib.storage.controllers.state_managers.config_state_manager:ConfigStateManager"

[project.entry-points."frostfs.testlib.services"]
s = "frostfs_testlib.storage.dataclasses.frostfs_services:StorageNode"
s3-gate = "frostfs_testlib.storage.dataclasses.frostfs_services:S3Gate"
http-gate = "frostfs_testlib.storage.dataclasses.frostfs_services:HTTPGate"
morph-chain = "frostfs_testlib.storage.dataclasses.frostfs_services:MorphChain"
ir = "frostfs_testlib.storage.dataclasses.frostfs_services:InnerRing"

[tool.isort]
profile = "black"
src_paths = ["src", "tests"]
line_length = 120
line_length = 100

[tool.black]
line-length = 120
line-length = 100
target-version = ["py310"]

[tool.bumpver]

@@ -77,9 +64,3 @@ push = false
[tool.bumpver.file_patterns]
"pyproject.toml" = ['current_version = "{version}"', 'version = "{version}"']
"src/frostfs_testlib/__init__.py" = ["{version}"]

[tool.pytest.ini_options]
filterwarnings = [
    "ignore:Blowfish has been deprecated:cryptography.utils.CryptographyDeprecationWarning",
]
testpaths = ["tests"]
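Note: the `[project.entry-points."frostfs.testlib.*"]` tables register plugins (hosting backends, healthchecks, service dataclasses) that can be discovered at runtime. A minimal sketch, assuming the standard `importlib.metadata` mechanism is used for discovery (the loader function below is illustrative, not the library's actual code):

```python
# Sketch of entry-point discovery for the groups declared above (not the library's real loader).
from importlib.metadata import entry_points


def load_hosting_plugins() -> dict[str, type]:
    plugins = {}
    # Python 3.10+: entry_points() supports filtering by group name.
    for ep in entry_points(group="frostfs.testlib.hosting"):
        # e.g. ep.name == "docker", ep.load() imports frostfs_testlib.hosting.docker_host:DockerHost
        plugins[ep.name] = ep.load()
    return plugins


if __name__ == "__main__":
    print(load_hosting_plugins())
```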
@@ -1,5 +1,6 @@
allure-python-commons==2.13.2
allure-python-commons==2.9.45
docker==4.4.0
importlib_metadata==5.0.0
neo-mamba==1.0.0
paramiko==2.10.3
pexpect==4.8.0

@@ -16,7 +17,6 @@ black==22.8.0
bumpver==2022.1118
isort==5.12.0
pre-commit==2.20.0
pylint==2.17.4

# Packaging dependencies
build==0.8.0
@@ -6,7 +6,6 @@ from docstring_parser.google import DEFAULT_SECTIONS, Section, SectionType

DEFAULT_SECTIONS.append(Section("Steps", "steps", SectionType.MULTIPLE))


class TestCase:
    """
    Test case object implementation for use in collector and exporters

@@ -107,9 +106,7 @@ class TestCaseCollector:
        # Read test_case suite and section name from test class if possible and get test function from class
        if test.cls:
            suite_name = test.cls.__dict__.get("__test_case_suite_name__", suite_name)
            suite_section_name = test.cls.__dict__.get(
                "__test_case_suite_section__", suite_section_name
            )
            suite_section_name = test.cls.__dict__.get("__test_case_suite_section__", suite_section_name)
            test_function = test.cls.__dict__[test.originalname]
        else:
            # If no test class, read test function from module

@@ -120,9 +117,7 @@ class TestCaseCollector:
        test_case_title = test_function.__dict__.get("__test_case_title__", None)
        test_case_priority = test_function.__dict__.get("__test_case_priority__", None)
        suite_name = test_function.__dict__.get("__test_case_suite_name__", suite_name)
        suite_section_name = test_function.__dict__.get(
            "__test_case_suite_section__", suite_section_name
        )
        suite_section_name = test_function.__dict__.get("__test_case_suite_section__", suite_section_name)

        # Parse test_steps if they are defined in __doc__
        doc_string = parse(test_function.__doc__, style=DocstringStyle.GOOGLE)

@@ -130,9 +125,7 @@ class TestCaseCollector:
        if doc_string.short_description:
            test_case_description = doc_string.short_description
        if doc_string.long_description:
            test_case_description = (
                f"{doc_string.short_description}\r\n{doc_string.long_description}"
            )
            test_case_description = f"{doc_string.short_description}\r\n{doc_string.long_description}"

        if doc_string.meta:
            for meta in doc_string.meta:

@@ -147,27 +140,25 @@ class TestCaseCollector:
            test_case_params = test_case_call_spec.id
        # Format title with params
        if test_case_title:
            test_case_title = self.__format_string_with_params__(
                test_case_title, test_case_call_spec.params
            )
            test_case_title = self.__format_string_with_params__(test_case_title,test_case_call_spec.params)
        # Format steps with params
        if test_case_steps:
            for key, value in test_case_steps.items():
                value = self.__format_string_with_params__(value, test_case_call_spec.params)
                value = self.__format_string_with_params__(value,test_case_call_spec.params)
                test_case_steps[key] = value

        # If the basic test case attributes are set, create a TestCase and return it
        if test_case_id and test_case_title and suite_name and suite_name:
            test_case = TestCase(
                uuid_id=test_case_id,
                title=test_case_title,
                description=test_case_description,
                priority=test_case_priority,
                steps=test_case_steps,
                params=test_case_params,
                suite_name=suite_name,
                suite_section_name=suite_section_name,
            )
                id=test_case_id,
                title=test_case_title,
                description=test_case_description,
                priority=test_case_priority,
                steps=test_case_steps,
                params=test_case_params,
                suite_name=suite_name,
                suite_section_name=suite_section_name,
            )
            return test_case
        # Return None if there is not enough information to build a test case
        return None

@@ -196,4 +187,4 @@ class TestCaseCollector:
            test_case = self.__get_test_case_from_pytest_test__(test)
            if test_case:
                test_cases.append(test_case)
        return test_cases
        return test_cases
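Note: the collector above registers a custom `Steps` section with `docstring_parser` and then pulls titles, descriptions and steps out of Google-style docstrings. A hedged sketch of that flow with a made-up test docstring (the exact shape of `doc.meta` entries may differ between `docstring_parser` versions, so treat the steps extraction as an assumption):

```python
# Sketch of the docstring-parsing approach used by the collector; the example test and
# the meta-extraction below are illustrative, not copied from the repository.
from docstring_parser import DocstringStyle, parse
from docstring_parser.google import DEFAULT_SECTIONS, Section, SectionType

DEFAULT_SECTIONS.append(Section("Steps", "steps", SectionType.MULTIPLE))


def example_test():
    """Check that an object can be put and retrieved.

    Steps:
        1: Put object into container
        2: Get object back and compare payloads
    """


doc = parse(example_test.__doc__, style=DocstringStyle.GOOGLE)
print(doc.short_description)
# Assumed shape: each Steps entry appears in doc.meta with args == ["steps", <key>].
steps = {meta.args[1]: meta.description for meta in doc.meta if meta.args and meta.args[0] == "steps"}
print(steps)
```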
@@ -67,6 +67,6 @@ class TestExporter(ABC):
        steps = [{"content": value, "expected": " "} for key, value in test_case.steps.items()]

        if test_case_in_tms:
            self.update_test_case(test_case, test_case_in_tms, test_suite, test_section)
            self.update_test_case(test_case, test_case_in_tms)
        else:
            self.create_test_case(test_case, test_suite, test_section)
            self.create_test_case(test_case)
@@ -27,7 +27,11 @@ class FrostfsAdmMorph(CliCommand):
        """
        return self._execute(
            "morph deposit-notary",
            **{param: param_value for param, param_value in locals().items() if param not in ["self"]},
            **{
                param: param_value
                for param, param_value in locals().items()
                if param not in ["self"]
            },
        )

    def dump_balances(

@@ -52,7 +56,11 @@ class FrostfsAdmMorph(CliCommand):
        """
        return self._execute(
            "morph dump-balances",
            **{param: param_value for param, param_value in locals().items() if param not in ["self"]},
            **{
                param: param_value
                for param, param_value in locals().items()
                if param not in ["self"]
            },
        )

    def dump_config(self, rpc_endpoint: str) -> CommandResult:

@@ -66,25 +74,11 @@ class FrostfsAdmMorph(CliCommand):
        """
        return self._execute(
            "morph dump-config",
            **{param: param_value for param, param_value in locals().items() if param not in ["self"]},
        )

    def set_config(
        self, set_key_value: str, rpc_endpoint: Optional[str] = None, alphabet_wallets: Optional[str] = None
    ) -> CommandResult:
        """Add/update global config value in the FrostFS network.

        Args:
            set_key_value: key1=val1 [key2=val2 ...]
            alphabet_wallets: Path to alphabet wallets dir
            rpc_endpoint: N3 RPC node endpoint

        Returns:
            Command's result.
        """
        return self._execute(
            f"morph set-config {set_key_value}",
            **{param: param_value for param, param_value in locals().items() if param not in ["self", "set_key_value"]},
            **{
                param: param_value
                for param, param_value in locals().items()
                if param not in ["self"]
            },
        )

    def dump_containers(

@@ -107,7 +101,11 @@ class FrostfsAdmMorph(CliCommand):
        """
        return self._execute(
            "morph dump-containers",
            **{param: param_value for param, param_value in locals().items() if param not in ["self"]},
            **{
                param: param_value
                for param, param_value in locals().items()
                if param not in ["self"]
            },
        )

    def dump_hashes(self, rpc_endpoint: str) -> CommandResult:

@@ -121,7 +119,11 @@ class FrostfsAdmMorph(CliCommand):
        """
        return self._execute(
            "morph dump-hashes",
            **{param: param_value for param, param_value in locals().items() if param not in ["self"]},
            **{
                param: param_value
                for param, param_value in locals().items()
                if param not in ["self"]
            },
        )

    def force_new_epoch(

@@ -138,7 +140,11 @@ class FrostfsAdmMorph(CliCommand):
        """
        return self._execute(
            "morph force-new-epoch",
            **{param: param_value for param, param_value in locals().items() if param not in ["self"]},
            **{
                param: param_value
                for param, param_value in locals().items()
                if param not in ["self"]
            },
        )

    def generate_alphabet(

@@ -159,7 +165,11 @@ class FrostfsAdmMorph(CliCommand):
        """
        return self._execute(
            "morph generate-alphabet",
            **{param: param_value for param, param_value in locals().items() if param not in ["self"]},
            **{
                param: param_value
                for param, param_value in locals().items()
                if param not in ["self"]
            },
        )

    def generate_storage_wallet(

@@ -182,7 +192,11 @@ class FrostfsAdmMorph(CliCommand):
        """
        return self._execute(
            "morph generate-storage-wallet",
            **{param: param_value for param, param_value in locals().items() if param not in ["self"]},
            **{
                param: param_value
                for param, param_value in locals().items()
                if param not in ["self"]
            },
        )

    def init(

@@ -205,7 +219,7 @@ class FrostfsAdmMorph(CliCommand):
            container_alias_fee: Container alias fee (default 500).
            container_fee: Container registration fee (default 1000).
            contracts: Path to archive with compiled FrostFS contracts
                (default fetched from latest git release).
                (default fetched from latest github release).
            epoch_duration: Amount of side chain blocks in one FrostFS epoch (default 240).
            homomorphic_disabled: Disable object homomorphic hashing.
            local_dump: Path to the blocks dump file.

@@ -218,7 +232,11 @@ class FrostfsAdmMorph(CliCommand):
        """
        return self._execute(
            "morph init",
            **{param: param_value for param, param_value in locals().items() if param not in ["self"]},
            **{
                param: param_value
                for param, param_value in locals().items()
                if param not in ["self"]
            },
        )

    def refill_gas(

@@ -241,7 +259,11 @@ class FrostfsAdmMorph(CliCommand):
        """
        return self._execute(
            "morph refill-gas",
            **{param: param_value for param, param_value in locals().items() if param not in ["self"]},
            **{
                param: param_value
                for param, param_value in locals().items()
                if param not in ["self"]
            },
        )

    def restore_containers(

@@ -264,7 +286,11 @@ class FrostfsAdmMorph(CliCommand):
        """
        return self._execute(
            "morph restore-containers",
            **{param: param_value for param, param_value in locals().items() if param not in ["self"]},
            **{
                param: param_value
                for param, param_value in locals().items()
                if param not in ["self"]
            },
        )

    def set_policy(

@@ -314,7 +340,7 @@ class FrostfsAdmMorph(CliCommand):
        Args:
            alphabet_wallets: Path to alphabet wallets dir.
            contracts: Path to archive with compiled FrostFS contracts
                (default fetched from latest git release).
                (default fetched from latest github release).
            rpc_endpoint: N3 RPC node endpoint.

        Returns:

@@ -322,13 +348,17 @@ class FrostfsAdmMorph(CliCommand):
        """
        return self._execute(
            "morph update-contracts",
            **{param: param_value for param, param_value in locals().items() if param not in ["self"]},
            **{
                param: param_value
                for param, param_value in locals().items()
                if param not in ["self"]
            },
        )

    def remove_nodes(
        self, node_netmap_keys: list[str], rpc_endpoint: Optional[str] = None, alphabet_wallets: Optional[str] = None
    ) -> CommandResult:
        """Move node to the Offline state in the candidates list
        """ Move node to the Offline state in the candidates list
        and tick an epoch to update the netmap using frostfs-adm

        Args:

@@ -341,7 +371,7 @@ class FrostfsAdmMorph(CliCommand):
        """
        if not len(node_netmap_keys):
            raise AttributeError("Got empty node_netmap_keys list")


        return self._execute(
            f"morph remove-nodes {' '.join(node_netmap_keys)}",
            **{

@@ -349,4 +379,4 @@ class FrostfsAdmMorph(CliCommand):
                for param, param_value in locals().items()
                if param not in ["self", "node_netmap_keys"]
            },
            )
        )
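Note: every `FrostfsAdmMorph` method above forwards its own arguments by filtering `locals()` before splatting them into `_execute`. A minimal, self-contained sketch of that pattern (the class and the command rendering below are illustrative, not the testlib's real `CliCommand`):

```python
# Standalone sketch of the locals()-filtering pattern: collect a method's own keyword
# arguments and forward them, skipping `self` and anything already baked into the command.
from typing import Optional


class CliSketch:
    def _execute(self, command: str, **params) -> str:
        # Real code would build and run a CLI invocation; here we only render it.
        flags = " ".join(
            f"--{name.replace('_', '-')} {value}" for name, value in params.items() if value is not None
        )
        return f"{command} {flags}".strip()

    def dump_config(self, rpc_endpoint: str, alphabet_wallets: Optional[str] = None) -> str:
        return self._execute(
            "morph dump-config",
            **{param: value for param, value in locals().items() if param not in ["self"]},
        )


print(CliSketch().dump_config(rpc_endpoint="http://localhost:30333"))
# -> morph dump-config --rpc-endpoint http://localhost:30333
```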
@@ -6,8 +6,8 @@ from frostfs_testlib.shell import Shell


class FrostfsAuthmate:
    secret: FrostfsAuthmateSecret
    version: FrostfsAuthmateVersion
    secret: Optional[FrostfsAuthmateSecret] = None
    version: Optional[FrostfsAuthmateVersion] = None

    def __init__(self, shell: Shell, frostfs_authmate_exec_path: str):
        self.secret = FrostfsAuthmateSecret(shell, frostfs_authmate_exec_path)

@@ -44,6 +44,7 @@ class FrostfsAuthmateSecret(CliCommand):
        wallet: str,
        wallet_password: str,
        peer: str,
        bearer_rules: str,
        gate_public_key: Union[str, list[str]],
        address: Optional[str] = None,
        container_id: Optional[str] = None,
@@ -22,7 +22,7 @@ class FrostfsCliACL(CliCommand):
        Well-known system object headers start with '$Object:' prefix.
        User defined headers start without prefix.
        Read more about filter keys at:
        https://git.frostfs.info/TrueCloudLab/frostfs-api/src/branch/master/proto-docs/acl.md#message-eaclrecord-filter
        http://github.com/TrueCloudLab/frostfs-api/blob/master/proto-docs/acl.md#message-eaclrecordfilter
        Match is '=' for matching and '!=' for non-matching filter.
        Value is a valid unicode string corresponding to object or request header value.
@@ -3,13 +3,11 @@ from typing import Optional
from frostfs_testlib.cli.frostfs_cli.accounting import FrostfsCliAccounting
from frostfs_testlib.cli.frostfs_cli.acl import FrostfsCliACL
from frostfs_testlib.cli.frostfs_cli.container import FrostfsCliContainer
from frostfs_testlib.cli.frostfs_cli.control import FrostfsCliControl
from frostfs_testlib.cli.frostfs_cli.netmap import FrostfsCliNetmap
from frostfs_testlib.cli.frostfs_cli.object import FrostfsCliObject
from frostfs_testlib.cli.frostfs_cli.session import FrostfsCliSession
from frostfs_testlib.cli.frostfs_cli.shards import FrostfsCliShards
from frostfs_testlib.cli.frostfs_cli.storagegroup import FrostfsCliStorageGroup
from frostfs_testlib.cli.frostfs_cli.tree import FrostfsCliTree
from frostfs_testlib.cli.frostfs_cli.util import FrostfsCliUtil
from frostfs_testlib.cli.frostfs_cli.version import FrostfsCliVersion
from frostfs_testlib.shell import Shell

@@ -26,7 +24,6 @@ class FrostfsCli:
    storagegroup: FrostfsCliStorageGroup
    util: FrostfsCliUtil
    version: FrostfsCliVersion
    control: FrostfsCliControl

    def __init__(self, shell: Shell, frostfs_cli_exec_path: str, config_file: Optional[str] = None):
        self.accounting = FrostfsCliAccounting(shell, frostfs_cli_exec_path, config=config_file)

@@ -39,5 +36,3 @@ class FrostfsCli:
        self.storagegroup = FrostfsCliStorageGroup(shell, frostfs_cli_exec_path, config=config_file)
        self.util = FrostfsCliUtil(shell, frostfs_cli_exec_path, config=config_file)
        self.version = FrostfsCliVersion(shell, frostfs_cli_exec_path, config=config_file)
        self.tree = FrostfsCliTree(shell, frostfs_cli_exec_path, config=config_file)
        self.control = FrostfsCliControl(shell, frostfs_cli_exec_path, config=config_file)
@@ -262,45 +262,3 @@ class FrostfsCliContainer(CliCommand):
            "container set-eacl",
            **{param: value for param, value in locals().items() if param not in ["self"]},
        )

    def search_node(
        self,
        rpc_endpoint: str,
        wallet: str,
        cid: str,
        address: Optional[str] = None,
        ttl: Optional[int] = None,
        from_file: Optional[str] = None,
        short: Optional[bool] = True,
        xhdr: Optional[dict] = None,
        generate_key: Optional[bool] = None,
        timeout: Optional[str] = None,
    ) -> CommandResult:
        """
        Show the nodes participating in the container in the current epoch.

        Args:
            rpc_endpoint: string Remote host address (as 'multiaddr' or '<host>:<port>')
            wallet: WIF (NEP-2) string or path to the wallet or binary key.
            cid: Container ID.
            address: Address of wallet account.
            ttl: TTL value in request meta header (default 2).
            from_file: string File path with encoded container
            timeout: duration Timeout for the operation (default 15 s)
            short: shorten the output of node information.
            xhdr: Dict with request X-Headers.
            generate_key: Generate a new private key

        Returns:

        """
        from_str = f"--from {from_file}" if from_file else ""

        return self._execute(
            f"container nodes {from_str}",
            **{
                param: value
                for param, value in locals().items()
                if param not in ["self", "from_file", "from_str"]
            },
        )
@@ -1,58 +0,0 @@
from typing import Optional

from frostfs_testlib.cli.cli_command import CliCommand
from frostfs_testlib.shell import CommandResult


class FrostfsCliControl(CliCommand):
    def set_status(
        self,
        endpoint: str,
        status: str,
        wallet: Optional[str] = None,
        force: Optional[bool] = None,
        address: Optional[str] = None,
        timeout: Optional[str] = None,
    ) -> CommandResult:
        """Set status of the storage node in FrostFS network map

        Args:
            wallet: Path to the wallet or binary key
            address: Address of wallet account
            endpoint: Remote node control address (as 'multiaddr' or '<host>:<port>')
            force: Force turning to local maintenance
            status: New netmap status keyword ('online', 'offline', 'maintenance')
            timeout: Timeout for an operation (default 15s)

        Returns:
            Command's result.
        """
        return self._execute(
            "control set-status",
            **{param: value for param, value in locals().items() if param not in ["self"]},
        )

    def healthcheck(
        self,
        endpoint: str,
        wallet: Optional[str] = None,
        address: Optional[str] = None,
        timeout: Optional[str] = None,
    ) -> CommandResult:
        """Set status of the storage node in FrostFS network map

        Args:
            wallet: Path to the wallet or binary key
            address: Address of wallet account
            endpoint: Remote node control address (as 'multiaddr' or '<host>:<port>')
            force: Force turning to local maintenance
            status: New netmap status keyword ('online', 'offline', 'maintenance')
            timeout: Timeout for an operation (default 15s)

        Returns:
            Command's result.
        """
        return self._execute(
            "control healthcheck",
            **{param: value for param, value in locals().items() if param not in ["self"]},
        )
@@ -224,7 +224,6 @@ class FrostfsCliObject(CliCommand):
        address: Optional[str] = None,
        attributes: Optional[dict] = None,
        bearer: Optional[str] = None,
        copies_number: Optional[int] = None,
        disable_filename: bool = False,
        disable_timestamp: bool = False,
        expire_at: Optional[int] = None,

@@ -242,7 +241,6 @@ class FrostfsCliObject(CliCommand):
            address: Address of wallet account.
            attributes: User attributes in form of Key1=Value1,Key2=Value2.
            bearer: File with signed JSON or binary encoded bearer token.
            copies_number: Number of copies of the object to store within the RPC call.
            cid: Container ID.
            disable_filename: Do not set well-known filename attribute.
            disable_timestamp: Do not set well-known timestamp attribute.

@@ -351,45 +349,3 @@ class FrostfsCliObject(CliCommand):
            "object search",
            **{param: value for param, value in locals().items() if param not in ["self"]},
        )

    def nodes(
        self,
        rpc_endpoint: str,
        wallet: str,
        cid: str,
        address: Optional[str] = None,
        bearer: Optional[str] = None,
        generate_key: Optional = None,
        oid: Optional[str] = None,
        trace: bool = False,
        root: bool = False,
        verify_presence_all: bool = False,
        ttl: Optional[int] = None,
        xhdr: Optional[dict] = None,
        timeout: Optional[str] = None,
    ) -> CommandResult:
        """
        Search object nodes.

        Args:
            address: Address of wallet account.
            bearer: File with signed JSON or binary encoded bearer token.
            cid: Container ID.
            generate_key: Generate new private key.
            oid: Object ID.
            trace: Generate trace ID and print it.
            root: Search for user objects.
            rpc_endpoint: Remote node address (as 'multiaddr' or '<host>:<port>').
            verify_presence_all: Verify the actual presence of the object on all netmap nodes.
            ttl: TTL value in request meta header (default 2).
            wallet: WIF (NEP-2) string or path to the wallet or binary key.
            xhdr: Dict with request X-Headers.
            timeout: Timeout for the operation (default 15s).

        Returns:
            Command's result.
        """
        return self._execute(
            "object nodes",
            **{param: value for param, value in locals().items() if param not in ["self"]},
        )
@@ -68,7 +68,11 @@ class FrostfsCliShards(CliCommand):
        return self._execute_with_password(
            "control shards set-mode",
            wallet_password,
            **{param: value for param, value in locals().items() if param not in ["self", "wallet_password"]},
            **{
                param: value
                for param, value in locals().items()
                if param not in ["self", "wallet_password"]
            },
        )

    def dump(

@@ -101,14 +105,18 @@ class FrostfsCliShards(CliCommand):
        return self._execute_with_password(
            "control shards dump",
            wallet_password,
            **{param: value for param, value in locals().items() if param not in ["self", "wallet_password"]},
            **{
                param: value
                for param, value in locals().items()
                if param not in ["self", "wallet_password"]
            },
        )

    def list(
        self,
        endpoint: str,
        wallet: Optional[str] = None,
        wallet_password: Optional[str] = None,
        wallet: str,
        wallet_password: str,
        address: Optional[str] = None,
        json_mode: bool = False,
        timeout: Optional[str] = None,

@@ -127,13 +135,12 @@ class FrostfsCliShards(CliCommand):
        Returns:
            Command's result.
        """
        if not wallet_password:
            return self._execute(
                "control shards list",
                **{param: value for param, value in locals().items() if param not in ["self"]},
            )
        return self._execute_with_password(
            "control shards list",
            wallet_password,
            **{param: value for param, value in locals().items() if param not in ["self", "wallet_password"]},
            **{
                param: value
                for param, value in locals().items()
                if param not in ["self", "wallet_password"]
            },
        )
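Note: `FrostfsCliShards.list` above dispatches between `_execute` and `_execute_with_password` depending on whether a wallet password was supplied. A self-contained sketch of that dispatch with stand-in methods (none of the names below are the testlib's real API):

```python
# Standalone sketch of the optional-password dispatch shown above.
# `run` and `run_with_password` stand in for _execute / _execute_with_password.
from typing import Optional


class ShardsSketch:
    def run(self, command: str, **params) -> str:
        return f"{command} (no password) {params}"

    def run_with_password(self, command: str, password: str, **params) -> str:
        return f"{command} (password supplied) {params}"

    def list(self, endpoint: str, wallet: Optional[str] = None, wallet_password: Optional[str] = None) -> str:
        if not wallet_password:
            return self.run(
                "control shards list",
                **{param: value for param, value in locals().items() if param not in ["self"]},
            )
        return self.run_with_password(
            "control shards list",
            wallet_password,
            **{param: value for param, value in locals().items() if param not in ["self", "wallet_password"]},
        )


print(ShardsSketch().list(endpoint="localhost:8090"))
print(ShardsSketch().list(endpoint="localhost:8090", wallet="w.json", wallet_password="pass"))
```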
@@ -1,29 +0,0 @@
from typing import Optional

from frostfs_testlib.cli.cli_command import CliCommand
from frostfs_testlib.shell import CommandResult


class FrostfsCliTree(CliCommand):
    def healthcheck(
        self,
        wallet: Optional[str] = None,
        rpc_endpoint: Optional[str] = None,
        timeout: Optional[str] = None,
    ) -> CommandResult:
        """Get internal balance of FrostFS account

        Args:
            address: Address of wallet account.
            owner: Owner of balance account (omit to use owner from private key).
            rpc_endpoint: Remote node address (as 'multiaddr' or '<host>:<port>').
            wallet: WIF (NEP-2) string or path to the wallet or binary key.

        Returns:
            Command's result.

        """
        return self._execute(
            "tree healthcheck",
            **{param: value for param, value in locals().items() if param not in ["self"]},
        )
@@ -1,86 +0,0 @@
import re

from frostfs_testlib.storage.cluster import ClusterNode
from frostfs_testlib.storage.dataclasses.storage_object_info import NodeNetInfo, NodeNetmapInfo


class NetmapParser:
    @staticmethod
    def netinfo(output: str) -> NodeNetInfo:
        regexes = {
            "epoch": r"Epoch: (?P<epoch>\d+)",
            "network_magic": r"Network magic: (?P<network_magic>.*$)",
            "time_per_block": r"Time per block: (?P<time_per_block>\d+\w+)",
            "container_fee": r"Container fee: (?P<container_fee>\d+)",
            "epoch_duration": r"Epoch duration: (?P<epoch_duration>\d+)",
            "inner_ring_candidate_fee": r"Inner Ring candidate fee: (?P<inner_ring_candidate_fee>\d+)",
            "maximum_object_size": r"Maximum object size: (?P<maximum_object_size>\d+)",
            "withdrawal_fee": r"Withdrawal fee: (?P<withdrawal_fee>\d+)",
            "homomorphic_hashing_disabled": r"Homomorphic hashing disabled: (?P<homomorphic_hashing_disabled>true|false)",
            "maintenance_mode_allowed": r"Maintenance mode allowed: (?P<maintenance_mode_allowed>true|false)",
            "eigen_trust_alpha": r"EigenTrustAlpha: (?P<eigen_trust_alpha>\d+\w+$)",
            "eigen_trust_iterations": r"EigenTrustIterations: (?P<eigen_trust_iterations>\d+)",
        }
        parse_result = {}

        for key, regex in regexes.items():
            search_result = re.search(regex, output, flags=re.MULTILINE)
            if search_result == None:
                parse_result[key] = None
                continue
            parse_result[key] = search_result[key].strip()

        node_netinfo = NodeNetInfo(**parse_result)

        return node_netinfo

    @staticmethod
    def snapshot_all_nodes(output: str) -> list[NodeNetmapInfo]:
        """The code will parse each line and return each node as dataclass."""
        netmap_nodes = output.split("Node ")[1:]
        dataclasses_netmap = []
        result_netmap = {}

        regexes = {
            "node_id": r"\d+: (?P<node_id>\w+)",
            "node_data_ips": r"(?P<node_data_ips>/ip4/.+?)$",
            "node_status": r"(?P<node_status>ONLINE|OFFLINE)",
            "cluster_name": r"ClusterName: (?P<cluster_name>\w+)",
            "continent": r"Continent: (?P<continent>\w+)",
            "country": r"Country: (?P<country>\w+)",
            "country_code": r"CountryCode: (?P<country_code>\w+)",
            "external_address": r"ExternalAddr: (?P<external_address>/ip[4].+?)$",
            "location": r"Location: (?P<location>\w+.*)",
            "node": r"Node: (?P<node>\d+\.\d+\.\d+\.\d+)",
            "price": r"Price: (?P<price>\d+)",
            "sub_div": r"SubDiv: (?P<sub_div>.*)",
            "sub_div_code": r"SubDivCode: (?P<sub_div_code>\w+)",
            "un_locode": r"UN-LOCODE: (?P<un_locode>\w+.*)",
            "role": r"role: (?P<role>\w+)",
        }

        for node in netmap_nodes:
            for key, regex in regexes.items():
                search_result = re.search(regex, node, flags=re.MULTILINE)
                if key == "node_data_ips":
                    result_netmap[key] = search_result[key].strip().split(" ")
                    continue
                if key == "external_address":
                    result_netmap[key] = search_result[key].strip().split(",")
                    continue
                if search_result == None:
                    result_netmap[key] = None
                    continue
                result_netmap[key] = search_result[key].strip()

            dataclasses_netmap.append(NodeNetmapInfo(**result_netmap))

        return dataclasses_netmap

    @staticmethod
    def snapshot_one_node(output: str, cluster_node: ClusterNode) -> NodeNetmapInfo | None:
        snapshot_nodes = NetmapParser.snapshot_all_nodes(output=output)
        snapshot_node = [node for node in snapshot_nodes if node.node == cluster_node.host_ip]
        if not snapshot_node:
            return None
        return snapshot_node[0]
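Note: `NetmapParser.netinfo` converts free-form CLI output into a dataclass by running one named-group regular expression per field and tolerating missing fields. A self-contained sketch of the same approach with made-up sample output and a reduced field set:

```python
# Self-contained sketch of the named-group regex parsing used by NetmapParser.netinfo.
# The sample output and field set are illustrative only.
import re
from dataclasses import dataclass
from typing import Optional

SAMPLE_OUTPUT = """Epoch: 42
Maximum object size: 67108864
Homomorphic hashing disabled: true
"""

REGEXES = {
    "epoch": r"Epoch: (?P<epoch>\d+)",
    "maximum_object_size": r"Maximum object size: (?P<maximum_object_size>\d+)",
    "homomorphic_hashing_disabled": r"Homomorphic hashing disabled: (?P<homomorphic_hashing_disabled>true|false)",
    "withdrawal_fee": r"Withdrawal fee: (?P<withdrawal_fee>\d+)",  # absent in the sample -> None
}


@dataclass
class NetInfoSketch:
    epoch: Optional[str] = None
    maximum_object_size: Optional[str] = None
    homomorphic_hashing_disabled: Optional[str] = None
    withdrawal_fee: Optional[str] = None


def parse_netinfo(output: str) -> NetInfoSketch:
    parsed = {}
    for key, regex in REGEXES.items():
        match = re.search(regex, output, flags=re.MULTILINE)
        # Missing fields stay None instead of raising, mirroring the parser above.
        parsed[key] = match[key].strip() if match else None
    return NetInfoSketch(**parsed)


print(parse_netinfo(SAMPLE_OUTPUT))
```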
@@ -1,101 +0,0 @@
from typing import Callable

from frostfs_testlib import reporter
from frostfs_testlib.cli.frostfs_cli.cli import FrostfsCli
from frostfs_testlib.healthcheck.interfaces import Healthcheck
from frostfs_testlib.resources.cli import FROSTFS_CLI_EXEC
from frostfs_testlib.shell import CommandOptions
from frostfs_testlib.steps.node_management import storage_node_healthcheck
from frostfs_testlib.storage.cluster import ClusterNode, ServiceClass
from frostfs_testlib.testing.test_control import wait_for_success
from frostfs_testlib.utils.failover_utils import check_services_status


class BasicHealthcheck(Healthcheck):
    def _perform(self, cluster_node: ClusterNode, checks: dict[Callable, dict]):
        issues: list[str] = []
        for check, kwargs in checks.items():
            issue = check(cluster_node, **kwargs)
            if issue:
                issues.append(issue)

        assert not issues, "Issues found:\n" + "\n".join(issues)

    @wait_for_success(900, 30, title="Wait for full healthcheck for {cluster_node}")
    def full_healthcheck(self, cluster_node: ClusterNode):
        checks = {
            self.storage_healthcheck: {},
            self._tree_healthcheck: {},
        }

        self._perform(cluster_node, checks)

    @wait_for_success(900, 30, title="Wait for startup healthcheck on {cluster_node}")
    def startup_healthcheck(self, cluster_node: ClusterNode):
        checks = {
            self.storage_healthcheck: {},
            self._tree_healthcheck: {},
        }

        self._perform(cluster_node, checks)

    @wait_for_success(900, 30, title="Wait for storage healthcheck on {cluster_node}")
    def storage_healthcheck(self, cluster_node: ClusterNode) -> str | None:
        checks = {
            self._storage_healthcheck: {},
        }

        self._perform(cluster_node, checks)

    @wait_for_success(120, 5, title="Wait for service healthcheck on {cluster_node}")
    def services_healthcheck(self, cluster_node: ClusterNode):
        svcs_to_check = cluster_node.services
        checks = {
            check_services_status: {
                "service_list": svcs_to_check,
                "expected_status": "active",
            },
            self._check_services: {"services": svcs_to_check},
        }

        self._perform(cluster_node, checks)

    def _check_services(self, cluster_node: ClusterNode, services: list[ServiceClass]):
        for svc in services:
            result = svc.service_healthcheck()
            if result == False:
                return f"Service {svc.get_service_systemctl_name()} healthcheck failed on node {cluster_node}."

    @reporter.step("Storage healthcheck on {cluster_node}")
    def _storage_healthcheck(self, cluster_node: ClusterNode) -> str | None:
        result = storage_node_healthcheck(cluster_node.storage_node)
        self._gather_socket_info(cluster_node)
        if result.health_status != "READY" or result.network_status != "ONLINE":
            return f"Node {cluster_node} is not healthy. Health={result.health_status}. Network={result.network_status}"

    @reporter.step("Tree healthcheck on {cluster_node}")
    def _tree_healthcheck(self, cluster_node: ClusterNode) -> str | None:
        host = cluster_node.host
        service_config = host.get_service_config(cluster_node.storage_node.name)
        wallet_path = service_config.attributes["wallet_path"]
        wallet_password = service_config.attributes["wallet_password"]

        shell = host.get_shell()
        wallet_config_path = f"/tmp/{cluster_node.storage_node.name}-config.yaml"
        wallet_config = f'wallet: {wallet_path}\npassword: "{wallet_password}"'
        shell.exec(f"echo '{wallet_config}' > {wallet_config_path}")

        remote_cli = FrostfsCli(
            shell,
            host.get_cli_config(FROSTFS_CLI_EXEC).exec_path,
            config_file=wallet_config_path,
        )
        result = remote_cli.tree.healthcheck(rpc_endpoint="127.0.0.1:8080")
        if result.return_code != 0:
            return (
                f"Error during tree healthcheck (rc={result.return_code}): {result.stdout}. \n Stderr: {result.stderr}"
            )

    @reporter.step("Gather socket info for {cluster_node}")
    def _gather_socket_info(self, cluster_node: ClusterNode):
        cluster_node.host.get_shell().exec("ss -tuln | grep 8080", CommandOptions(check=False))
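Note: `BasicHealthcheck._perform` runs a dictionary of check callables, where each check returns `None` when healthy or an issue string otherwise, and all failures are reported at once. A standalone sketch of that aggregation pattern with hypothetical checks:

```python
# Standalone sketch of the checks-dict pattern used by BasicHealthcheck._perform.
# The checks below are hypothetical; real ones would probe the node.
from typing import Callable, Optional


def check_port_open(node: str, port: int = 8080) -> Optional[str]:
    return None if port == 8080 else f"{node}: port {port} is closed"


def check_disk_space(node: str, min_free_gb: int = 5) -> Optional[str]:
    free_gb = 2  # pretend measurement
    return None if free_gb >= min_free_gb else f"{node}: only {free_gb} GiB free"


def perform(node: str, checks: dict[Callable, dict]) -> None:
    # Collect issue strings from every check and fail once with the aggregate.
    issues = [issue for check, kwargs in checks.items() if (issue := check(node, **kwargs))]
    assert not issues, "Issues found:\n" + "\n".join(issues)


perform("node01", {check_port_open: {}, check_disk_space: {"min_free_gb": 1}})
```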
@@ -1,21 +0,0 @@
from abc import ABC, abstractmethod

from frostfs_testlib.storage.cluster import ClusterNode


class Healthcheck(ABC):
    @abstractmethod
    def full_healthcheck(self, cluster_node: ClusterNode):
        """Perform full healthcheck on the target cluster node"""

    @abstractmethod
    def startup_healthcheck(self, cluster_node: ClusterNode):
        """Perform healthcheck required on startup of target cluster node"""

    @abstractmethod
    def storage_healthcheck(self, cluster_node: ClusterNode):
        """Perform storage service healthcheck on target cluster node"""

    @abstractmethod
    def services_healthcheck(self, cluster_node: ClusterNode):
        """Perform service status check on target cluster node"""
@@ -52,7 +52,6 @@ class HostConfig:

    Attributes:
        plugin_name: Name of plugin that should be used to manage the host.
        healthcheck_plugin_name: Name of the plugin for healthcheck operations.
        address: Address of the machine (IP or DNS name).
        services: List of services hosted on the machine.
        clis: List of CLI tools available on the machine.

@@ -61,13 +60,10 @@
    """

    plugin_name: str
    healthcheck_plugin_name: str
    address: str
    services: list[ServiceConfig] = field(default_factory=list)
    clis: list[CLIConfig] = field(default_factory=list)
    attributes: dict[str, str] = field(default_factory=dict)
    interfaces: dict[str, str] = field(default_factory=dict)
    environment: dict[str, str] = field(default_factory=dict)

    def __post_init__(self) -> None:
        self.services = [ServiceConfig(**service) for service in self.services or []]
@@ -11,7 +11,7 @@ import docker
from requests import HTTPError

from frostfs_testlib.hosting.config import ParsedAttributes
from frostfs_testlib.hosting.interfaces import DiskInfo, Host, HostStatus
from frostfs_testlib.hosting.interfaces import DiskInfo, Host
from frostfs_testlib.shell import LocalShell, Shell, SSHShell
from frostfs_testlib.shell.command_inspectors import SudoInspector


@@ -61,10 +61,10 @@ class ServiceAttributes(ParsedAttributes):
class DockerHost(Host):
    """Manages services hosted in Docker containers running on a local or remote machine."""

    def get_shell(self, sudo: bool = False) -> Shell:
    def get_shell(self) -> Shell:
        host_attributes = HostAttributes.parse(self._config.attributes)
        command_inspectors = []
        if sudo:
        if host_attributes.sudo_shell:
            command_inspectors.append(SudoInspector())

        if not host_attributes.ssh_login:

@@ -87,15 +87,6 @@ class DockerHost(Host):
        for service_config in self._config.services:
            self.start_service(service_config.name)

    def get_host_status(self) -> HostStatus:
        # We emulate host status by checking all services.
        for service_config in self._config.services:
            state = self._get_container_state(service_config.name)
            if state != "running":
                return HostStatus.OFFLINE

        return HostStatus.ONLINE

    def stop_host(self) -> None:
        # We emulate stopping machine by stopping all services
        # As an alternative we can probably try to stop docker service...

@@ -126,20 +117,6 @@ class DockerHost(Host):
            timeout=service_attributes.stop_timeout,
        )

    def mask_service(self, service_name: str) -> None:
        # Not required for Docker
        return

    def unmask_service(self, service_name: str) -> None:
        # Not required for Docker
        return

    def wait_success_suspend_process(self, service_name: str):
        raise NotImplementedError("Not supported for docker")

    def wait_success_resume_process(self, service_name: str):
        raise NotImplementedError("Not supported for docker")

    def restart_service(self, service_name: str) -> None:
        service_attributes = self._get_service_attributes(service_name)


@@ -152,18 +129,6 @@ class DockerHost(Host):
            timeout=service_attributes.start_timeout,
        )

    def wait_for_service_to_be_in_state(self, systemd_service_name: str, expected_state: str, timeout: int) -> None:
        raise NotImplementedError("Not implemented for docker")

    def get_data_directory(self, service_name: str) -> str:
        service_attributes = self._get_service_attributes(service_name)

        client = self._get_docker_client()
        volume_info = client.inspect_volume(service_attributes.volume_name)
        volume_path = volume_info["Mountpoint"]

        return volume_path

    def delete_metabase(self, service_name: str) -> None:
        raise NotImplementedError("Not implemented for docker")


@@ -179,14 +144,12 @@ class DockerHost(Host):
    def delete_pilorama(self, service_name: str) -> None:
        raise NotImplementedError("Not implemented for docker")

    def delete_file(self, file_path: str) -> None:
        raise NotImplementedError("Not implemented for docker")

    def is_file_exist(self, file_path: str) -> None:
        raise NotImplementedError("Not implemented for docker")

    def delete_storage_node_data(self, service_name: str, cache_only: bool = False) -> None:
        volume_path = self.get_data_directory(service_name)
        service_attributes = self._get_service_attributes(service_name)

        client = self._get_docker_client()
        volume_info = client.inspect_volume(service_attributes.volume_name)
        volume_path = volume_info["Mountpoint"]

        shell = self.get_shell()
        meta_clean_cmd = f"rm -rf {volume_path}/meta*/*"

@@ -233,40 +196,11 @@ class DockerHost(Host):
        with open(file_path, "wb") as file:
            file.write(logs)

    def get_filtered_logs(
        self,
        filter_regex: str,
        since: Optional[datetime] = None,
        until: Optional[datetime] = None,
        unit: Optional[str] = None,
        exclude_filter: Optional[str] = None,
    ) -> str:
        client = self._get_docker_client()
        filtered_logs = ""
        for service_config in self._config.services:
            container_name = self._get_service_attributes(service_config.name).container_name
            try:
                filtered_logs = client.logs(container_name, since=since, until=until)
            except HTTPError as exc:
                logger.info(f"Got exception while dumping logs of '{container_name}': {exc}")
                continue

            if exclude_filter:
                filtered_logs = filtered_logs.replace(exclude_filter, "")
            matches = re.findall(filter_regex, filtered_logs, re.IGNORECASE + re.MULTILINE)
            found = list(matches)

            if found:
                filtered_logs += f"{container_name}:\n{os.linesep.join(found)}"

        return filtered_logs

    def is_message_in_logs(
        self,
        message_regex: str,
        since: Optional[datetime] = None,
        until: Optional[datetime] = None,
        unit: Optional[str] = None,
    ) -> bool:
        client = self._get_docker_client()
        for service_config in self._config.services:

@@ -309,23 +243,20 @@ class DockerHost(Host):
                return container
        return None

    def _wait_for_container_to_be_in_state(self, container_name: str, expected_state: str, timeout: int) -> None:
    def _wait_for_container_to_be_in_state(
        self, container_name: str, expected_state: str, timeout: int
    ) -> None:
        iterations = 10
        iteration_wait_time = timeout / iterations

        # To speed things up, we break timeout in smaller iterations and check container state
        # several times. This way waiting stops as soon as container reaches the expected state
        for _ in range(iterations):
            state = self._get_container_state(container_name)
            container = self._get_container_by_name(container_name)
            logger.debug(f"Current container state\n:{json.dumps(container, indent=2)}")

            if state == expected_state:
            if container and container["State"] == expected_state:
                return
            time.sleep(iteration_wait_time)

        raise RuntimeError(f"Container {container_name} is not in {expected_state} state.")

    def _get_container_state(self, container_name: str) -> str:
        container = self._get_container_by_name(container_name)
        logger.debug(f"Current container state\n:{json.dumps(container, indent=2)}")

        return container.get("State", None)
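Note: `_wait_for_container_to_be_in_state` splits its timeout into a fixed number of short iterations so the wait returns as soon as the container reaches the expected state. A generic sketch of that polling loop (`get_state` stands in for the Docker API query):

```python
# Generic sketch of the iterative polling used above: break a timeout into short
# slices and return early once the probed state matches.
import time
from typing import Callable


def wait_for_state(get_state: Callable[[], str], expected_state: str, timeout: float, iterations: int = 10) -> None:
    iteration_wait_time = timeout / iterations
    for _ in range(iterations):
        if get_state() == expected_state:
            return
        time.sleep(iteration_wait_time)
    raise RuntimeError(f"State did not become {expected_state!r} within {timeout} seconds")


# Usage: succeeds immediately because the fake probe already reports "running".
wait_for_state(lambda: "running", "running", timeout=5)
```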
@@ -4,14 +4,6 @@ from typing import Optional

from frostfs_testlib.hosting.config import CLIConfig, HostConfig, ServiceConfig
from frostfs_testlib.shell.interfaces import Shell
from frostfs_testlib.testing.readable import HumanReadableEnum
from frostfs_testlib.testing.test_control import retry


class HostStatus(HumanReadableEnum):
    ONLINE = "Online"
    OFFLINE = "Offline"
    UNKNOWN = "Unknown"


class DiskInfo(dict):

@@ -26,7 +18,9 @@ class Host(ABC):

    def __init__(self, config: HostConfig) -> None:
        self._config = config
        self._service_config_by_name = {service_config.name: service_config for service_config in config.services}
        self._service_config_by_name = {
            service_config.name: service_config for service_config in config.services
        }
        self._cli_config_by_name = {cli_config.name: cli_config for cli_config in config.clis}

    @property

@@ -71,12 +65,9 @@ class Host(ABC):
        return cli_config

    @abstractmethod
    def get_shell(self, sudo: bool = True) -> Shell:
    def get_shell(self) -> Shell:
        """Returns shell to this host.

        Args:
            sudo: if True, run all commands in shell with elevated rights

        Returns:
            Shell that executes commands on this host.
        """

@@ -85,10 +76,6 @@ class Host(ABC):
    def start_host(self) -> None:
        """Starts the host machine."""

    @abstractmethod
    def get_host_status(self) -> HostStatus:
        """Check host status."""

    @abstractmethod
    def stop_host(self, mode: str) -> None:
        """Stops the host machine.


@@ -117,26 +104,6 @@ class Host(ABC):
            service_name: Name of the service to stop.
        """

    @abstractmethod
    def mask_service(self, service_name: str) -> None:
        """Prevent the service from start by any activity by masking it.

        The service must be hosted on this host.

        Args:
            service_name: Name of the service to mask.
        """

    @abstractmethod
    def unmask_service(self, service_name: str) -> None:
        """Allow the service to start by any activity by unmasking it.

        The service must be hosted on this host.

        Args:
            service_name: Name of the service to unmask.
        """

    @abstractmethod
    def restart_service(self, service_name: str) -> None:
        """Restarts the service with specified name and waits until it starts.


@@ -145,30 +112,6 @@ class Host(ABC):
            service_name: Name of the service to restart.
        """

    @abstractmethod
    def get_data_directory(self, service_name: str) -> str:
        """
        Getting path to data directory on node for further usage
        (example: list databases pilorama.db)

        Args:
            service_name: Name of storage node service.
        """

    @abstractmethod
    def wait_success_suspend_process(self, process_name: str) -> None:
        """Search for a service ID by its name and stop the process
        Args:
            process_name: Name
        """

    @abstractmethod
    def wait_success_resume_process(self, process_name: str) -> None:
        """Search for a service by its ID and start the process
        Args:
            process_name: Name
        """

    @abstractmethod
    def delete_storage_node_data(self, service_name: str, cache_only: bool = False) -> None:
        """Erases all data of the storage node with specified name.


@@ -219,22 +162,12 @@ class Host(ABC):
        """

    @abstractmethod
    def delete_file(self, file_path: str) -> None:
    def delete_pilorama(self, service_name: str) -> None:
        """
        Deletes file with provided file path
        Deletes all pilorama.db files in the node.

        Args:
            file_path: full path to the file to delete

        """

    @abstractmethod
    def is_file_exist(self, file_path: str) -> bool:
        """
        Checks if file exist

        Args:
            file_path: full path to the file to check
            service_name: Name of storage node service.

        """


@@ -289,35 +222,12 @@ class Host(ABC):
            filter_regex: regex to filter output
        """

    @abstractmethod
    def get_filtered_logs(
        self,
        filter_regex: str,
        since: Optional[datetime] = None,
        until: Optional[datetime] = None,
        unit: Optional[str] = None,
        exclude_filter: Optional[str] = None,
    ) -> str:
        """Get logs from host filtered by regex.

        Args:
            filter_regex: regex filter for logs.
            since: If set, limits the time from which logs should be collected. Must be in UTC.
            until: If set, limits the time until which logs should be collected. Must be in UTC.
            unit: required unit.

        Returns:
            Found entries as str if any found.
            Empty string otherwise.
        """

    @abstractmethod
    def is_message_in_logs(
        self,
        message_regex: str,
        since: Optional[datetime] = None,
        until: Optional[datetime] = None,
        unit: Optional[str] = None,
    ) -> bool:
        """Checks logs on host for specified message regex.


@@ -330,35 +240,3 @@ class Host(ABC):
            True if message found in logs in the given time frame.
            False otherwise.
        """

    @abstractmethod
    def wait_for_service_to_be_in_state(self, systemd_service_name: str, expected_state: str, timeout: int) -> None:
        """
        Waits for service to be in specified state.

        Args:
            systemd_service_name: Service to wait state of.
            expected_state: State to wait for
            timeout: Seconds to wait

        """

    def down_interface(self, interface: str) -> None:
        shell = self.get_shell()
        shell.exec(f"ip link set {interface} down")

    def up_interface(self, interface: str) -> None:
        shell = self.get_shell()
        shell.exec(f"ip link set {interface} up")

    def check_state(self, interface: str) -> str:
        shell = self.get_shell()
        return shell.exec(f"ip link show {interface} | sed -z 's/.*state \(.*\) mode .*/\\1/'").stdout.strip()

    @retry(max_attempts=5, sleep_interval=5, expected_result="UP")
    def check_state_up(self, interface: str) -> str:
        return self.check_state(interface=interface)

    @retry(max_attempts=5, sleep_interval=5, expected_result="DOWN")
    def check_state_down(self, interface: str) -> str:
        return self.check_state(interface=interface)
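Note: `check_state_up` and `check_state_down` rely on a `retry` decorator that re-runs a method until it returns the expected result or the attempts run out. The sketch below illustrates that contract; it is an assumption about the decorator's behaviour, not the testlib's actual implementation:

```python
# Illustrative retry decorator matching the usage pattern
# @retry(max_attempts=5, sleep_interval=5, expected_result="UP"); a sketch, not the real code.
import time
from functools import wraps
from typing import Any, Callable


def retry(max_attempts: int, sleep_interval: float, expected_result: Any) -> Callable:
    def decorator(func: Callable) -> Callable:
        @wraps(func)
        def wrapper(*args, **kwargs):
            result = None
            for attempt in range(max_attempts):
                result = func(*args, **kwargs)
                if result == expected_result:
                    return result
                if attempt < max_attempts - 1:
                    time.sleep(sleep_interval)
            raise AssertionError(f"Expected {expected_result!r}, got {result!r} after {max_attempts} attempts")
        return wrapper
    return decorator


@retry(max_attempts=3, sleep_interval=0.1, expected_result="UP")
def check_state_stub() -> str:
    return "UP"


print(check_state_stub())
```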
@@ -1,15 +0,0 @@
from frostfs_testlib.load.interfaces.loader import Loader
from frostfs_testlib.load.interfaces.scenario_runner import ScenarioRunner
from frostfs_testlib.load.load_config import (
    EndpointSelectionStrategy,
    K6ProcessAllocationStrategy,
    LoadParams,
    LoadScenario,
    LoadType,
    NodesSelectionStrategy,
    Preset,
    ReadFrom,
)
from frostfs_testlib.load.load_report import LoadReport
from frostfs_testlib.load.loaders import NodeLoader, RemoteLoader
from frostfs_testlib.load.runners import DefaultRunner, LocalRunner, S3LocalRunner
@@ -1,14 +0,0 @@
from abc import ABC, abstractmethod

from frostfs_testlib.shell.interfaces import Shell


class Loader(ABC):
    @abstractmethod
    def get_shell(self) -> Shell:
        """Get shell for the loader"""

    @property
    @abstractmethod
    def ip(self):
        """Get address of the loader"""
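As a sketch only, a toy concrete `Loader` could look like the class below. It is not the library's `RemoteLoader`; the `SSHShell` import path and constructor arguments are assumed.

```python
from frostfs_testlib.shell import Shell, SSHShell  # SSHShell usage here is an assumption


class ExampleLoader(Loader):
    """Toy Loader that opens an SSH shell to a load generation machine."""

    def __init__(self, ip: str, login: str, private_key_path: str) -> None:
        self._ip = ip
        self._login = login
        self._private_key_path = private_key_path

    def get_shell(self) -> Shell:
        # Open an SSH shell to the loader machine on demand.
        return SSHShell(host=self._ip, login=self._login, private_key_path=self._private_key_path)

    @property
    def ip(self) -> str:
        return self._ip
```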
@@ -1,50 +0,0 @@
from abc import ABC, abstractmethod

from frostfs_testlib.load.k6 import K6
from frostfs_testlib.load.load_config import LoadParams
from frostfs_testlib.storage.cluster import ClusterNode


class ScenarioRunner(ABC):
    @abstractmethod
    def prepare(
        self,
        load_params: LoadParams,
        cluster_nodes: list[ClusterNode],
        nodes_under_load: list[ClusterNode],
        k6_dir: str,
    ):
        """Preparation steps before running the load"""

    @abstractmethod
    def init_k6_instances(self, load_params: LoadParams, endpoints: list[str], k6_dir: str):
        """Init K6 instances"""

    @abstractmethod
    def get_k6_instances(self) -> list[K6]:
        """Get K6 instances"""

    @abstractmethod
    def start(self):
        """Start K6 instances"""

    @abstractmethod
    def stop(self):
        """Stop K6 instances"""

    @abstractmethod
    def preset(self):
        """Run preset for load"""

    @property
    @abstractmethod
    def is_running(self) -> bool:
        """Returns True if load is running at the moment"""

    @abstractmethod
    def wait_until_finish(self, soft_timeout: int = 0):
        """Wait until load is finished"""

    @abstractmethod
    def get_results(self) -> dict:
        """Get results from K6 run"""
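The intended call order for a runner implementing this interface is roughly the following sketch; the exact sequencing inside the library's runners is not shown in this diff, and the parameter objects are assumed to be prepared by the caller.

```python
def run_load(
    runner: ScenarioRunner,
    load_params: LoadParams,
    cluster_nodes: list[ClusterNode],
    nodes_under_load: list[ClusterNode],
    endpoints: list[str],
    k6_dir: str,
) -> dict:
    # Prepare remote hosts, run the preset, then fan out K6 instances.
    runner.prepare(load_params, cluster_nodes, nodes_under_load, k6_dir)
    runner.preset()
    runner.init_k6_instances(load_params, endpoints, k6_dir)

    runner.start()
    try:
        runner.wait_until_finish()
    finally:
        if runner.is_running:
            runner.stop()

    return runner.get_results()
```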
@@ -1,94 +0,0 @@
from dataclasses import dataclass, field

from frostfs_testlib.load.load_config import LoadParams, LoadScenario
from frostfs_testlib.load.load_metrics import get_metrics_object


@dataclass
class SummarizedErorrs:
    total: int = field(default_factory=int)
    percent: float = field(default_factory=float)
    threshold: float = field(default_factory=float)
    by_node: dict[str, int] = field(default_factory=dict)

    def calc_stats(self, operations):
        self.total += sum(self.by_node.values())

        if not operations:
            return

        self.percent = self.total / operations * 100


@dataclass
class SummarizedLatencies:
    avg: float = field(default_factory=float)
    min: float = field(default_factory=float)
    max: float = field(default_factory=float)
    by_node: dict[str, dict[str, int]] = field(default_factory=dict)

    def calc_stats(self):
        if not self.by_node:
            return

        avgs = [lt["avg"] for lt in self.by_node.values()]
        self.avg = sum(avgs) / len(avgs)

        minimal = [lt["min"] for lt in self.by_node.values()]
        self.min = min(minimal)

        maximum = [lt["max"] for lt in self.by_node.values()]
        self.max = max(maximum)


@dataclass
class SummarizedStats:
    threads: int = field(default_factory=int)
    requested_rate: int = field(default_factory=int)
    operations: int = field(default_factory=int)
    rate: float = field(default_factory=float)
    throughput: float = field(default_factory=float)
    latencies: SummarizedLatencies = field(default_factory=SummarizedLatencies)
    errors: SummarizedErorrs = field(default_factory=SummarizedErorrs)
    passed: bool = True

    def calc_stats(self):
        self.errors.calc_stats(self.operations)
        self.latencies.calc_stats()
        self.passed = self.errors.percent <= self.errors.threshold

    @staticmethod
    def collect(load_params: LoadParams, load_summaries: dict) -> dict[str, "SummarizedStats"]:
        if load_params.scenario in [LoadScenario.gRPC_CAR, LoadScenario.S3_CAR]:
            delete_vus = max(load_params.preallocated_deleters or 0, load_params.max_deleters or 0)
            write_vus = max(load_params.preallocated_writers or 0, load_params.max_writers or 0)
            read_vus = max(load_params.preallocated_readers or 0, load_params.max_readers or 0)
        else:
            write_vus = load_params.writers
            read_vus = load_params.readers
            delete_vus = load_params.deleters

        summarized = {
            "Write": SummarizedStats(threads=write_vus, requested_rate=load_params.write_rate),
            "Read": SummarizedStats(threads=read_vus, requested_rate=load_params.read_rate),
            "Delete": SummarizedStats(threads=delete_vus, requested_rate=load_params.delete_rate),
        }

        for node_key, load_summary in load_summaries.items():
            metrics = get_metrics_object(load_params.scenario, load_summary)
            for operation in metrics.operations:
                target = summarized[operation._NAME]
                if not operation.total_iterations:
                    continue
                target.operations += operation.total_iterations
                target.rate += operation.rate
                target.latencies.by_node[node_key] = operation.latency
                target.throughput += operation.throughput
                target.errors.threshold = load_params.error_threshold
                if operation.failed_iterations:
                    target.errors.by_node[node_key] = operation.failed_iterations

        for operation in summarized.values():
            operation.calc_stats()

        return summarized
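Toy numbers make the aggregation above concrete: `calc_stats` folds per-node values into a total, a percentage against the operation count, and min/avg/max latencies.

```python
errors = SummarizedErorrs(by_node={"node1": 2, "node2": 3}, threshold=1.0)
errors.calc_stats(operations=1000)
print(errors.total, errors.percent)  # 5 0.5

latencies = SummarizedLatencies(
    by_node={
        "node1": {"avg": 12.0, "min": 3.0, "max": 40.0},
        "node2": {"avg": 18.0, "min": 5.0, "max": 90.0},
    }
)
latencies.calc_stats()
print(latencies.avg, latencies.min, latencies.max)  # 15.0 3.0 90.0
```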
@ -1,19 +1,19 @@
|
|||
import json
|
||||
import logging
|
||||
import math
|
||||
import os
|
||||
from dataclasses import dataclass
|
||||
from datetime import datetime
|
||||
from dataclasses import dataclass, fields
|
||||
from time import sleep
|
||||
from typing import Any
|
||||
from urllib.parse import urlparse
|
||||
|
||||
from frostfs_testlib import reporter
|
||||
from frostfs_testlib.load.interfaces.loader import Loader
|
||||
from frostfs_testlib.load.load_config import K6ProcessAllocationStrategy, LoadParams, LoadScenario, LoadType
|
||||
from frostfs_testlib.load.load_config import (
|
||||
K6ProcessAllocationStrategy,
|
||||
LoadParams,
|
||||
LoadScenario,
|
||||
LoadType,
|
||||
)
|
||||
from frostfs_testlib.processes.remote_process import RemoteProcess
|
||||
from frostfs_testlib.resources.common import STORAGE_USER_NAME
|
||||
from frostfs_testlib.resources.load_params import K6_STOP_SIGNAL_TIMEOUT, K6_TEARDOWN_PERIOD
|
||||
from frostfs_testlib.reporter import get_reporter
|
||||
from frostfs_testlib.resources.load_params import K6_STOP_SIGNAL_TIMEOUT, LOAD_NODE_SSH_USER
|
||||
from frostfs_testlib.shell import Shell
|
||||
from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
|
||||
from frostfs_testlib.testing.test_control import wait_for_success
|
||||
|
@ -21,6 +21,7 @@ from frostfs_testlib.testing.test_control import wait_for_success
|
|||
EXIT_RESULT_CODE = 0
|
||||
|
||||
logger = logging.getLogger("NeoLogger")
|
||||
reporter = get_reporter()
|
||||
|
||||
|
||||
@dataclass
|
||||
|
@ -41,7 +42,7 @@ class K6:
|
|||
endpoints: list[str],
|
||||
k6_dir: str,
|
||||
shell: Shell,
|
||||
loader: Loader,
|
||||
load_node: str,
|
||||
wallet: WalletInfo,
|
||||
):
|
||||
if load_params.scenario is None:
|
||||
|
@ -49,183 +50,133 @@ class K6:
|
|||
|
||||
self.load_params: LoadParams = load_params
|
||||
self.endpoints = endpoints
|
||||
self.loader: Loader = loader
|
||||
self.load_node: str = load_node
|
||||
self.shell: Shell = shell
|
||||
self.wallet = wallet
|
||||
self.preset_output: str = ""
|
||||
self.scenario: LoadScenario = load_params.scenario
|
||||
self.summary_json: str = os.path.join(
|
||||
self.load_params.working_dir,
|
||||
f"{self.load_params.load_id}_{self.load_params.scenario.value}_summary.json",
|
||||
f"{self.load_params.load_id}_{self.scenario.value}_summary.json",
|
||||
)
|
||||
|
||||
self._k6_dir: str = k6_dir
|
||||
|
||||
command = (
|
||||
f"{self._k6_dir}/k6 run {self._generate_env_variables()} "
|
||||
f"{self._k6_dir}/scenarios/{self.load_params.scenario.value}.js"
|
||||
)
|
||||
user = STORAGE_USER_NAME if self.load_params.scenario == LoadScenario.LOCAL else None
|
||||
process_id = (
|
||||
self.load_params.load_id
|
||||
if self.load_params.scenario != LoadScenario.VERIFY
|
||||
else f"{self.load_params.load_id}_verify"
|
||||
)
|
||||
self._k6_process = RemoteProcess.create(command, self.shell, self.load_params.working_dir, user, process_id)
|
||||
|
||||
def _get_fill_percents(self):
|
||||
fill_percents = self.shell.exec("df -H --output=source,pcent,target | grep frostfs").stdout.split("\n")
|
||||
return [line.split() for line in fill_percents][:-1]
|
||||
|
||||
def check_fill_percent(self):
|
||||
fill_percents = self._get_fill_percents()
|
||||
percent_mean = 0
|
||||
for line in fill_percents:
|
||||
percent_mean += float(line[1].split('%')[0])
|
||||
percent_mean = percent_mean / len(fill_percents)
|
||||
logger.info(f"{self.loader.ip} mean fill percent is {percent_mean}")
|
||||
return percent_mean >= self.load_params.fill_percent
|
||||
|
||||
@property
|
||||
def process_dir(self) -> str:
|
||||
return self._k6_process.process_dir
|
||||
|
||||
@reporter.step_deco("Preset containers and objects")
|
||||
def preset(self) -> str:
|
||||
with reporter.step(f"Run preset on loader {self.loader.ip} for endpoints {self.endpoints}"):
|
||||
preset_grpc = f"{self._k6_dir}/scenarios/preset/preset_grpc.py"
|
||||
preset_s3 = f"{self._k6_dir}/scenarios/preset/preset_s3.py"
|
||||
preset_map = {
|
||||
LoadType.gRPC: preset_grpc,
|
||||
LoadType.S3: preset_s3,
|
||||
LoadType.HTTP: preset_grpc,
|
||||
}
|
||||
preset_grpc = f"{self._k6_dir}/scenarios/preset/preset_grpc.py"
|
||||
preset_s3 = f"{self._k6_dir}/scenarios/preset/preset_s3.py"
|
||||
preset_map = {
|
||||
LoadType.gRPC: preset_grpc,
|
||||
LoadType.S3: preset_s3,
|
||||
LoadType.HTTP: preset_grpc,
|
||||
}
|
||||
|
||||
base_args = {
|
||||
preset_grpc: [
|
||||
preset_grpc,
|
||||
f"--endpoint {','.join(self.endpoints)}",
|
||||
f"--wallet {self.wallet.path} ",
|
||||
f"--config {self.wallet.config_path} ",
|
||||
],
|
||||
preset_s3: [
|
||||
preset_s3,
|
||||
f"--endpoint {','.join(self.endpoints)}",
|
||||
],
|
||||
}
|
||||
base_args = {
|
||||
preset_grpc: [
|
||||
preset_grpc,
|
||||
f"--endpoint {self.endpoints[0]}",
|
||||
f"--wallet {self.wallet.path} ",
|
||||
f"--config {self.wallet.config_path} ",
|
||||
],
|
||||
preset_s3: [
|
||||
preset_s3,
|
||||
f"--endpoint {self.endpoints[0]}",
|
||||
],
|
||||
}
|
||||
|
||||
preset_scenario = preset_map[self.load_params.load_type]
|
||||
command_args = base_args[preset_scenario].copy()
|
||||
preset_scenario = preset_map[self.load_params.load_type]
|
||||
command_args = base_args[preset_scenario].copy()
|
||||
|
||||
command_args += self.load_params.get_preset_arguments()
|
||||
command_args += [
|
||||
f"--{field.metadata['preset_argument']} '{getattr(self.load_params, field.name)}'"
|
||||
for field in fields(self.load_params)
|
||||
if field.metadata
|
||||
and self.scenario in field.metadata["applicable_scenarios"]
|
||||
and field.metadata["preset_argument"]
|
||||
and getattr(self.load_params, field.name) is not None
|
||||
]
|
||||
|
||||
command = " ".join(command_args)
|
||||
result = self.shell.exec(command)
|
||||
if self.load_params.preset:
|
||||
command_args += [
|
||||
f"--{field.metadata['preset_argument']} '{getattr(self.load_params.preset, field.name)}'"
|
||||
for field in fields(self.load_params.preset)
|
||||
if field.metadata
|
||||
and self.scenario in field.metadata["applicable_scenarios"]
|
||||
and field.metadata["preset_argument"]
|
||||
and getattr(self.load_params.preset, field.name) is not None
|
||||
]
|
||||
|
||||
assert result.return_code == EXIT_RESULT_CODE, f"Return code of preset is not zero: {result.stdout}"
|
||||
command = " ".join(command_args)
|
||||
result = self.shell.exec(command)
|
||||
|
||||
self.preset_output = result.stdout.strip("\n")
|
||||
return self.preset_output
|
||||
assert (
|
||||
result.return_code == EXIT_RESULT_CODE
|
||||
), f"Return code of preset is not zero: {result.stdout}"
|
||||
return result.stdout.strip("\n")
|
||||
|
||||
@reporter.step("Generate K6 command")
|
||||
@reporter.step_deco("Generate K6 command")
|
||||
def _generate_env_variables(self) -> str:
|
||||
env_vars = self.load_params.get_env_vars()
|
||||
env_vars = {
|
||||
field.metadata["env_variable"]: getattr(self.load_params, field.name)
|
||||
for field in fields(self.load_params)
|
||||
if field.metadata
|
||||
and self.scenario in field.metadata["applicable_scenarios"]
|
||||
and field.metadata["env_variable"]
|
||||
and getattr(self.load_params, field.name) is not None
|
||||
}
|
||||
|
||||
if self.load_params.preset:
|
||||
env_vars.update(
|
||||
{
|
||||
field.metadata["env_variable"]: getattr(self.load_params.preset, field.name)
|
||||
for field in fields(self.load_params.preset)
|
||||
if field.metadata
|
||||
and self.scenario in field.metadata["applicable_scenarios"]
|
||||
and field.metadata["env_variable"]
|
||||
and getattr(self.load_params.preset, field.name) is not None
|
||||
}
|
||||
)
|
||||
|
||||
env_vars[f"{self.load_params.load_type.value.upper()}_ENDPOINTS"] = ",".join(self.endpoints)
|
||||
env_vars["SUMMARY_JSON"] = self.summary_json
|
||||
|
||||
reporter.attach("\n".join(f"{param}: {value}" for param, value in env_vars.items()), "K6 ENV variables")
|
||||
return " ".join([f"-e {param}='{value}'" for param, value in env_vars.items() if value is not None])
|
||||
|
||||
def get_start_time(self) -> datetime:
|
||||
return datetime.fromtimestamp(self._k6_process.start_time())
|
||||
|
||||
def get_end_time(self) -> datetime:
|
||||
return datetime.fromtimestamp(self._k6_process.end_time())
|
||||
reporter.attach(
|
||||
"\n".join(f"{param}: {value}" for param, value in env_vars.items()), "K6 ENV variables"
|
||||
)
|
||||
return " ".join(
|
||||
[f"-e {param}='{value}'" for param, value in env_vars.items() if value is not None]
|
||||
)
|
||||
|
||||
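Illustratively, with assumed parameter values, the environment string built by `_generate_env_variables()` ends up on the `k6 run` command line like this; the dictionary contents and paths below are placeholders, the real values come from `LoadParams` metadata.

```python
env_vars = {
    "DURATION": 300,
    "WRITERS": 8,
    "GRPC_ENDPOINTS": "10.0.0.1:8080",
    "SUMMARY_JSON": "/var/log/load_1_grpc_summary.json",
}
env_string = " ".join(f"-e {param}='{value}'" for param, value in env_vars.items() if value is not None)
print(f"/opt/k6/k6 run {env_string} /opt/k6/scenarios/grpc.js")
# /opt/k6/k6 run -e DURATION='300' -e WRITERS='8' -e GRPC_ENDPOINTS='10.0.0.1:8080'
#   -e SUMMARY_JSON='/var/log/load_1_grpc_summary.json' /opt/k6/scenarios/grpc.js
```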
@reporter.step_deco("Start K6 on initiator")
|
||||
def start(self) -> None:
|
||||
with reporter.step(f"Start load from loader {self.loader.ip} on endpoints {self.endpoints}"):
|
||||
self._k6_process.start()
|
||||
|
||||
def wait_until_finished(self, event, soft_timeout: int = 0) -> None:
|
||||
with reporter.step(f"Wait until load is finished from loader {self.loader.ip} on endpoints {self.endpoints}"):
|
||||
if self.load_params.scenario == LoadScenario.VERIFY:
|
||||
timeout = self.load_params.verify_time or 0
|
||||
else:
|
||||
timeout = self.load_params.load_time or 0
|
||||
|
||||
start_time = int(self.get_start_time().timestamp())
|
||||
|
||||
current_time = int(datetime.utcnow().timestamp())
|
||||
working_time = current_time - start_time
|
||||
remaining_time = timeout - working_time
|
||||
|
||||
setup_teardown_time = (
|
||||
int(K6_TEARDOWN_PERIOD)
|
||||
+ self.load_params.get_init_time()
|
||||
+ int(self.load_params.setup_timeout.replace("s", "").strip())
|
||||
)
|
||||
remaining_time_including_setup_and_teardown = remaining_time + setup_teardown_time
|
||||
timeout = remaining_time_including_setup_and_teardown
|
||||
|
||||
if soft_timeout:
|
||||
timeout = min(timeout, soft_timeout)
|
||||
|
||||
original_timeout = timeout
|
||||
|
||||
timeouts = {
|
||||
"K6 start time": start_time,
|
||||
"Current time": current_time,
|
||||
"K6 working time": working_time,
|
||||
"Remaining time for load": remaining_time,
|
||||
"Setup and teardown": setup_teardown_time,
|
||||
"Remaining time including setup/teardown": remaining_time_including_setup_and_teardown,
|
||||
"Soft timeout": soft_timeout,
|
||||
"Selected timeout": original_timeout,
|
||||
}
|
||||
|
||||
reporter.attach("\n".join([f"{k}: {v}" for k, v in timeouts.items()]), "timeouts.txt")
|
||||
|
||||
min_wait_interval = 10
|
||||
wait_interval = min_wait_interval
|
||||
if self._k6_process is None:
|
||||
assert "No k6 instances were executed"
|
||||
|
||||
while timeout > 0:
|
||||
if not self.load_params.fill_percent is None:
|
||||
with reporter.step(f"Check the percentage of filling of all data disks on the node"):
|
||||
if self.check_fill_percent():
|
||||
logger.info(f"Stopping load on because disks is filled more then {self.load_params.fill_percent}%")
|
||||
event.set()
|
||||
self.stop()
|
||||
return
|
||||
|
||||
if event.is_set():
|
||||
self.stop()
|
||||
return
|
||||
|
||||
if not self._k6_process.running():
|
||||
return
|
||||
|
||||
remaining_time_hours = f"{timeout//3600}h" if timeout // 3600 != 0 else ""
|
||||
remaining_time_minutes = f"{timeout//60%60}m" if timeout // 60 % 60 != 0 else ""
|
||||
logger.info(
|
||||
f"K6 is running. Remaining time {remaining_time_hours}{remaining_time_minutes}{timeout%60}s. Next check after {wait_interval} seconds..."
|
||||
)
|
||||
sleep(wait_interval)
|
||||
timeout -= min(timeout, wait_interval)
|
||||
wait_interval = max(
|
||||
min(timeout, int(math.log2(timeout + 1)) * 15) - min_wait_interval,
|
||||
min_wait_interval,
|
||||
)
|
||||
command = (
|
||||
f"{self._k6_dir}/k6 run {self._generate_env_variables()} "
|
||||
f"{self._k6_dir}/scenarios/{self.scenario.value}.js"
|
||||
)
|
||||
self._k6_process = RemoteProcess.create(command, self.shell, self.load_params.working_dir)
|
||||
|
||||
@reporter.step_deco("Wait until K6 is finished")
|
||||
def wait_until_finished(self, timeout: int = 0, k6_should_be_running: bool = False) -> None:
|
||||
wait_interval = 10
|
||||
if self._k6_process is None:
|
||||
assert "No k6 instances were executed"
|
||||
if k6_should_be_running:
|
||||
assert self._k6_process.running(), "k6 should be running."
|
||||
while timeout > 0:
|
||||
if not self._k6_process.running():
|
||||
return
|
||||
|
||||
self.stop()
|
||||
if not soft_timeout:
|
||||
raise TimeoutError(f"Expected K6 to finish after {original_timeout} sec.")
|
||||
logger.info(f"K6 is running. Waiting {wait_interval} seconds...")
|
||||
sleep(wait_interval)
|
||||
timeout -= wait_interval
|
||||
self.stop()
|
||||
raise TimeoutError(f"Expected K6 finished in {timeout} sec.")
|
||||
|
||||
def get_results(self) -> Any:
|
||||
with reporter.step(f"Get load results from loader {self.loader.ip} on endpoints {self.endpoints}"):
|
||||
with reporter.step(f"K6 results from {self.load_node}"):
|
||||
self.__log_output()
|
||||
|
||||
if not self.summary_json:
|
||||
|
@ -233,30 +184,33 @@ class K6:
|
|||
|
||||
summary_text = self.shell.exec(f"cat {self.summary_json}").stdout
|
||||
summary_json = json.loads(summary_text)
|
||||
endpoint = urlparse(self.endpoints[0]).netloc or self.endpoints[0]
|
||||
|
||||
allure_filenames = {
|
||||
K6ProcessAllocationStrategy.PER_LOAD_NODE: f"{self.loader.ip}_{self.load_params.scenario.value}_summary.json",
|
||||
K6ProcessAllocationStrategy.PER_ENDPOINT: f"{self.loader.ip}_{self.load_params.scenario.value}_{endpoint}_summary.json",
|
||||
K6ProcessAllocationStrategy.PER_LOAD_NODE: f"{self.load_node}_{self.scenario.value}_summary.json",
|
||||
K6ProcessAllocationStrategy.PER_ENDPOINT: f"{self.load_node}_{self.scenario.value}_{self.endpoints[0]}_summary.json",
|
||||
}
|
||||
allure_filename = allure_filenames[self.load_params.k6_process_allocation_strategy]
|
||||
|
||||
reporter.attach(summary_text, allure_filename)
|
||||
return summary_json
|
||||
|
||||
@reporter.step_deco("Stop K6")
|
||||
def stop(self) -> None:
|
||||
with reporter.step(f"Stop load from loader {self.loader.ip} on endpoints {self.endpoints}"):
|
||||
if self.is_running():
|
||||
self._k6_process.stop()
|
||||
if self.is_running:
|
||||
self._k6_process.stop()
|
||||
|
||||
self._wait_until_process_end()
|
||||
self._wait_until_process_end()
|
||||
|
||||
@property
|
||||
def is_running(self) -> bool:
|
||||
if self._k6_process:
|
||||
return self._k6_process.running()
|
||||
return False
|
||||
|
||||
@reporter.step("Wait until K6 process end")
|
||||
@wait_for_success(K6_STOP_SIGNAL_TIMEOUT, 15, False, False, "Can not stop K6 process within timeout")
|
||||
@reporter.step_deco("Wait until process end")
|
||||
@wait_for_success(
|
||||
K6_STOP_SIGNAL_TIMEOUT, 15, False, False, "Can not stop K6 process within timeout"
|
||||
)
|
||||
def _wait_until_process_end(self):
|
||||
return self._k6_process.running()
|
||||
|
||||
|
|
|
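A rough outline of how the `K6` wrapper above is driven end to end; the event argument matches the `wait_until_finished(event, soft_timeout)` signature shown on the left side of this diff, and the helper function itself is illustrative, not library code.

```python
from threading import Event


def run_single_k6(k6: K6) -> dict:
    k6.preset()                         # create containers/buckets and pre-generate objects
    k6.start()                          # launch the remote k6 process

    stop_event = Event()                # set() by the caller to abort the load early
    k6.wait_until_finished(stop_event)  # returns once k6 exits or the time budget is spent

    k6.stop()                           # safe to call; it only signals a still-running process
    return k6.get_results()             # parsed summary JSON collected from the loader
```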
@@ -1,28 +1,7 @@
import math
import os
from dataclasses import dataclass, field, fields, is_dataclass
from dataclasses import dataclass, field
from enum import Enum
from types import MappingProxyType
from typing import Any, Callable, Optional, get_args

from frostfs_testlib.utils.converting_utils import calc_unit


def convert_time_to_seconds(time: int | str | None) -> int:
    if time is None:
        return None
    if str(time).isdigit():
        seconds = int(time)
    else:
        days, hours, minutes = 0, 0, 0
        if "d" in time:
            days, time = time.split("d")
        if "h" in time:
            hours, time = time.split("h")
        if "min" in time:
            minutes = time.replace("min", "")
        seconds = int(days) * 86400 + int(hours) * 3600 + int(minutes) * 60
    return seconds
from typing import Optional


class LoadType(Enum):
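Worked examples for `convert_time_to_seconds()` as defined above:

```python
assert convert_time_to_seconds(300) == 300             # plain integers pass through
assert convert_time_to_seconds("200") == 200           # digit-only strings are cast to int
assert convert_time_to_seconds("15min") == 900
assert convert_time_to_seconds("1d2h30min") == 95400   # 1*86400 + 2*3600 + 30*60
assert convert_time_to_seconds(None) is None
```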
@ -36,17 +15,8 @@ class LoadScenario(Enum):
|
|||
gRPC_CAR = "grpc_car"
|
||||
S3 = "s3"
|
||||
S3_CAR = "s3_car"
|
||||
S3_MULTIPART = "s3_multipart"
|
||||
S3_LOCAL = "s3local"
|
||||
HTTP = "http"
|
||||
VERIFY = "verify"
|
||||
LOCAL = "local"
|
||||
|
||||
|
||||
class ReadFrom(Enum):
|
||||
REGISTRY = "registry"
|
||||
PRESET = "preset"
|
||||
MANUAL = "manual"
|
||||
|
||||
|
||||
all_load_scenarios = [
|
||||
|
@ -55,45 +25,21 @@ all_load_scenarios = [
|
|||
LoadScenario.HTTP,
|
||||
LoadScenario.S3_CAR,
|
||||
LoadScenario.gRPC_CAR,
|
||||
LoadScenario.LOCAL,
|
||||
LoadScenario.S3_MULTIPART,
|
||||
LoadScenario.S3_LOCAL,
|
||||
]
|
||||
all_scenarios = all_load_scenarios.copy() + [LoadScenario.VERIFY]
|
||||
|
||||
constant_vus_scenarios = [
|
||||
LoadScenario.gRPC,
|
||||
LoadScenario.S3,
|
||||
LoadScenario.HTTP,
|
||||
LoadScenario.LOCAL,
|
||||
LoadScenario.S3_MULTIPART,
|
||||
LoadScenario.S3_LOCAL,
|
||||
]
|
||||
constant_vus_scenarios = [LoadScenario.gRPC, LoadScenario.S3, LoadScenario.HTTP]
|
||||
constant_arrival_rate_scenarios = [LoadScenario.gRPC_CAR, LoadScenario.S3_CAR]
|
||||
|
||||
grpc_preset_scenarios = [
|
||||
LoadScenario.gRPC,
|
||||
LoadScenario.HTTP,
|
||||
LoadScenario.gRPC_CAR,
|
||||
LoadScenario.LOCAL,
|
||||
]
|
||||
s3_preset_scenarios = [LoadScenario.S3, LoadScenario.S3_CAR, LoadScenario.S3_MULTIPART, LoadScenario.S3_LOCAL]
|
||||
|
||||
|
||||
@dataclass
|
||||
class MetaField:
|
||||
name: str
|
||||
metadata: MappingProxyType
|
||||
value: Any
|
||||
grpc_preset_scenarios = [LoadScenario.gRPC, LoadScenario.HTTP, LoadScenario.gRPC_CAR]
|
||||
s3_preset_scenarios = [LoadScenario.S3, LoadScenario.S3_CAR]
|
||||
|
||||
|
||||
def metadata_field(
|
||||
applicable_scenarios: list[LoadScenario],
|
||||
preset_param: Optional[str] = None,
|
||||
scenario_variable: Optional[str] = None,
|
||||
string_repr: Optional[bool] = True,
|
||||
distributed: Optional[bool] = False,
|
||||
formatter: Optional[Callable] = None,
|
||||
):
|
||||
return field(
|
||||
default=None,
|
||||
|
@ -101,9 +47,7 @@ def metadata_field(
|
|||
"applicable_scenarios": applicable_scenarios,
|
||||
"preset_argument": preset_param,
|
||||
"env_variable": scenario_variable,
|
||||
"string_repr": string_repr,
|
||||
"distributed": distributed,
|
||||
"formatter": formatter,
|
||||
},
|
||||
)
|
||||
|
||||
|
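A minimal sketch of how the metadata attached by `metadata_field()` is read back with `dataclasses.fields()`. The `ExampleParams` dataclass is hypothetical, and the five-argument call matches the left-hand form of `metadata_field` used in this diff.

```python
from dataclasses import dataclass, fields
from typing import Optional


@dataclass
class ExampleParams:
    writers: Optional[int] = metadata_field(constant_vus_scenarios, None, "WRITERS", True, True)


params = ExampleParams(writers=8)
for f in fields(params):
    if f.metadata and f.metadata["env_variable"] and getattr(params, f.name) is not None:
        print(f.metadata["env_variable"], getattr(params, f.name))  # WRITERS 8
```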
@ -142,29 +86,25 @@ class K6ProcessAllocationStrategy(Enum):
|
|||
class Preset:
|
||||
# ------ COMMON ------
|
||||
# Amount of objects which should be created
|
||||
objects_count: Optional[int] = metadata_field(all_load_scenarios, "preload_obj", None, False)
|
||||
objects_count: Optional[int] = metadata_field(all_load_scenarios, "preload_obj", None)
|
||||
# Preset json. Filled automatically.
|
||||
pregen_json: Optional[str] = metadata_field(all_load_scenarios, "out", "PREGEN_JSON", False)
|
||||
pregen_json: Optional[str] = metadata_field(all_load_scenarios, "out", "PREGEN_JSON")
|
||||
# Workers count for preset
|
||||
workers: Optional[int] = metadata_field(all_load_scenarios, "workers", None, False)
|
||||
workers: Optional[int] = metadata_field(all_load_scenarios, "workers", None)
|
||||
|
||||
# ------ GRPC ------
|
||||
# Amount of containers which should be created
|
||||
containers_count: Optional[int] = metadata_field(grpc_preset_scenarios, "containers", None, False)
|
||||
containers_count: Optional[int] = metadata_field(grpc_preset_scenarios, "containers", None)
|
||||
# Container placement policy for containers for gRPC
|
||||
container_placement_policy: Optional[str] = metadata_field(grpc_preset_scenarios, "policy", None, False)
|
||||
container_placement_policy: Optional[str] = metadata_field(
|
||||
grpc_preset_scenarios, "policy", None
|
||||
)
|
||||
|
||||
# ------ S3 ------
|
||||
# Amount of buckets which should be created
|
||||
buckets_count: Optional[int] = metadata_field(s3_preset_scenarios, "buckets", None, False)
|
||||
buckets_count: Optional[int] = metadata_field(s3_preset_scenarios, "buckets", None)
|
||||
# S3 region (AKA placement policy for S3 buckets)
|
||||
s3_location: Optional[str] = metadata_field(s3_preset_scenarios, "location", None, False)
|
||||
|
||||
# Delay between containers creation and object upload for preset
|
||||
object_upload_delay: Optional[int] = metadata_field(all_load_scenarios, "sleep", None, False)
|
||||
|
||||
# Flag to control preset erorrs
|
||||
ignore_errors: Optional[bool] = metadata_field(all_load_scenarios, "ignore-errors", None, False)
|
||||
s3_location: Optional[str] = metadata_field(s3_preset_scenarios, "location", None)
|
||||
|
||||
|
||||
@dataclass
|
||||
|
@ -185,262 +125,90 @@ class LoadParams:
|
|||
verify: Optional[bool] = None
|
||||
# Just id for load so distinct it between runs. Filled automatically.
|
||||
load_id: Optional[str] = None
|
||||
# Acceptable number of load errors in %
|
||||
# 100 means 100% errors allowed
|
||||
# 1.5 means 1.5% errors allowed
|
||||
# 0 means no errors allowed
|
||||
error_threshold: Optional[float] = None
|
||||
# Working directory
|
||||
working_dir: Optional[str] = None
|
||||
# Preset for the k6 run
|
||||
preset: Optional[Preset] = None
|
||||
# K6 download url
|
||||
k6_url: Optional[str] = None
|
||||
# Requests module url
|
||||
requests_module_url: Optional[str] = None
|
||||
# aws cli download url
|
||||
awscli_url: Optional[str] = None
|
||||
# No ssl verification flag
|
||||
no_verify_ssl: Optional[bool] = metadata_field(
|
||||
[
|
||||
LoadScenario.S3,
|
||||
LoadScenario.S3_CAR,
|
||||
LoadScenario.S3_MULTIPART,
|
||||
LoadScenario.S3_LOCAL,
|
||||
LoadScenario.VERIFY,
|
||||
LoadScenario.HTTP,
|
||||
],
|
||||
"no-verify-ssl",
|
||||
"NO_VERIFY_SSL",
|
||||
False,
|
||||
)
|
||||
# Percentage of filling of all data disks on all nodes
|
||||
fill_percent: Optional[float] = None
|
||||
|
||||
# ------- COMMON SCENARIO PARAMS -------
|
||||
# Load time is the maximum duration for k6 to give load. Default is the BACKGROUND_LOAD_DEFAULT_TIME value.
|
||||
load_time: Optional[int] = metadata_field(
|
||||
all_load_scenarios, None, "DURATION", False, formatter=convert_time_to_seconds
|
||||
)
|
||||
load_time: Optional[int] = metadata_field(all_load_scenarios, None, "DURATION")
|
||||
# Object size in KB for load and preset.
|
||||
object_size: Optional[int] = metadata_field(all_load_scenarios, "size", "WRITE_OBJ_SIZE", False)
|
||||
# For read operations, controls from which set get objects to read
|
||||
read_from: Optional[ReadFrom] = None
|
||||
# For read operations done from REGISTRY, controls delay which object should live before it will be used for read operation
|
||||
read_age: Optional[int] = metadata_field(all_load_scenarios, None, "READ_AGE", False)
|
||||
object_size: Optional[int] = metadata_field(all_load_scenarios, "size", "WRITE_OBJ_SIZE")
|
||||
# Output registry K6 file. Filled automatically.
|
||||
registry_file: Optional[str] = metadata_field(all_scenarios, None, "REGISTRY_FILE", False)
|
||||
# In case if we want to use custom registry file left from another load run
|
||||
custom_registry: Optional[str] = None
|
||||
registry_file: Optional[str] = metadata_field(all_scenarios, None, "REGISTRY_FILE")
|
||||
# Specifies the minimum duration of every single execution (i.e. iteration).
|
||||
# Any iterations that are shorter than this value will cause that VU to
|
||||
# sleep for the remainder of the time until the specified minimum duration is reached.
|
||||
min_iteration_duration: Optional[str] = metadata_field(all_load_scenarios, None, "K6_MIN_ITERATION_DURATION", False)
|
||||
# Prepare/cut objects locally on client before sending
|
||||
prepare_locally: Optional[bool] = metadata_field(
|
||||
[LoadScenario.gRPC, LoadScenario.gRPC_CAR], None, "PREPARE_LOCALLY", False
|
||||
min_iteration_duration: Optional[str] = metadata_field(
|
||||
all_load_scenarios, None, "K6_MIN_ITERATION_DURATION"
|
||||
)
|
||||
# Specifies K6 setupTimeout time. Currently hardcoded in xk6 as 5 seconds for all scenarios
|
||||
# https://k6.io/docs/using-k6/k6-options/reference/#setup-timeout
|
||||
setup_timeout: Optional[str] = metadata_field(all_scenarios, None, "K6_SETUP_TIMEOUT", False)
|
||||
|
||||
# Delay for read operations in case if we read from registry
|
||||
read_age: Optional[int] = metadata_field(all_load_scenarios, None, "READ_AGE", None, False)
|
||||
|
||||
# Initialization time for each VU for k6 load
|
||||
vu_init_time: Optional[float] = None
|
||||
setup_timeout: Optional[str] = metadata_field(all_scenarios, None, "K6_SETUP_TIMEOUT")
|
||||
|
||||
# ------- CONSTANT VUS SCENARIO PARAMS -------
|
||||
# Amount of Writers VU.
|
||||
writers: Optional[int] = metadata_field(constant_vus_scenarios, None, "WRITERS", True, True)
|
||||
writers: Optional[int] = metadata_field(constant_vus_scenarios, None, "WRITERS", True)
|
||||
# Amount of Readers VU.
|
||||
readers: Optional[int] = metadata_field(constant_vus_scenarios, None, "READERS", True, True)
|
||||
readers: Optional[int] = metadata_field(constant_vus_scenarios, None, "READERS", True)
|
||||
# Amount of Deleters VU.
|
||||
deleters: Optional[int] = metadata_field(constant_vus_scenarios, None, "DELETERS", True, True)
|
||||
deleters: Optional[int] = metadata_field(constant_vus_scenarios, None, "DELETERS", True)
|
||||
|
||||
# ------- CONSTANT ARRIVAL RATE SCENARIO PARAMS -------
|
||||
# Number of iterations to start during each timeUnit period for write.
|
||||
write_rate: Optional[int] = metadata_field(constant_arrival_rate_scenarios, None, "WRITE_RATE", True, True)
|
||||
write_rate: Optional[int] = metadata_field(
|
||||
constant_arrival_rate_scenarios, None, "WRITE_RATE", True
|
||||
)
|
||||
|
||||
# Number of iterations to start during each timeUnit period for read.
|
||||
read_rate: Optional[int] = metadata_field(constant_arrival_rate_scenarios, None, "READ_RATE", True, True)
|
||||
read_rate: Optional[int] = metadata_field(
|
||||
constant_arrival_rate_scenarios, None, "READ_RATE", True
|
||||
)
|
||||
|
||||
# Number of iterations to start during each timeUnit period for delete.
|
||||
delete_rate: Optional[int] = metadata_field(constant_arrival_rate_scenarios, None, "DELETE_RATE", True, True)
|
||||
delete_rate: Optional[int] = metadata_field(
|
||||
constant_arrival_rate_scenarios, None, "DELETE_RATE", True
|
||||
)
|
||||
|
||||
# Amount of preAllocatedVUs for write operations.
|
||||
preallocated_writers: Optional[int] = metadata_field(
|
||||
constant_arrival_rate_scenarios, None, "PRE_ALLOC_WRITERS", True, True
|
||||
constant_arrival_rate_scenarios, None, "PRE_ALLOC_WRITERS", True
|
||||
)
|
||||
# Amount of maxVUs for write operations.
|
||||
max_writers: Optional[int] = metadata_field(constant_arrival_rate_scenarios, None, "MAX_WRITERS", False, True)
|
||||
max_writers: Optional[int] = metadata_field(
|
||||
constant_arrival_rate_scenarios, None, "MAX_WRITERS", True
|
||||
)
|
||||
|
||||
# Amount of preAllocatedVUs for read operations.
|
||||
preallocated_readers: Optional[int] = metadata_field(
|
||||
constant_arrival_rate_scenarios, None, "PRE_ALLOC_READERS", True, True
|
||||
constant_arrival_rate_scenarios, None, "PRE_ALLOC_READERS", True
|
||||
)
|
||||
# Amount of maxVUs for read operations.
|
||||
max_readers: Optional[int] = metadata_field(constant_arrival_rate_scenarios, None, "MAX_READERS", False, True)
|
||||
max_readers: Optional[int] = metadata_field(
|
||||
constant_arrival_rate_scenarios, None, "MAX_READERS", True
|
||||
)
|
||||
|
||||
# Amount of preAllocatedVUs for read operations.
|
||||
preallocated_deleters: Optional[int] = metadata_field(
|
||||
constant_arrival_rate_scenarios, None, "PRE_ALLOC_DELETERS", True, True
|
||||
constant_arrival_rate_scenarios, None, "PRE_ALLOC_DELETERS", True
|
||||
)
|
||||
# Amount of maxVUs for delete operations.
|
||||
max_deleters: Optional[int] = metadata_field(constant_arrival_rate_scenarios, None, "MAX_DELETERS", False, True)
|
||||
|
||||
# Multipart
|
||||
# Number of parts to upload in parallel
|
||||
writers_multipart: Optional[int] = metadata_field(
|
||||
[LoadScenario.S3_MULTIPART], None, "WRITERS_MULTIPART", False, True
|
||||
)
|
||||
# part size must be greater than (5 MB)
|
||||
write_object_part_size: Optional[int] = metadata_field(
|
||||
[LoadScenario.S3_MULTIPART], None, "WRITE_OBJ_PART_SIZE", False
|
||||
max_deleters: Optional[int] = metadata_field(
|
||||
constant_arrival_rate_scenarios, None, "MAX_DELETERS", True
|
||||
)
|
||||
|
||||
# Period of time to apply the rate value.
|
||||
time_unit: Optional[str] = metadata_field(constant_arrival_rate_scenarios, None, "TIME_UNIT", False)
|
||||
time_unit: Optional[str] = metadata_field(constant_arrival_rate_scenarios, None, "TIME_UNIT")
|
||||
|
||||
# ------- VERIFY SCENARIO PARAMS -------
|
||||
# Maximum verification time for k6 to verify objects. Default is BACKGROUND_LOAD_MAX_VERIFY_TIME (3600).
|
||||
verify_time: Optional[int] = metadata_field([LoadScenario.VERIFY], None, "TIME_LIMIT", False)
|
||||
verify_time: Optional[int] = metadata_field([LoadScenario.VERIFY], None, "TIME_LIMIT")
|
||||
# Amount of Verification VU.
|
||||
verify_clients: Optional[int] = metadata_field([LoadScenario.VERIFY], None, "CLIENTS", True, False)
|
||||
|
||||
# ------- LOCAL SCENARIO PARAMS -------
|
||||
# Config file location (filled automatically)
|
||||
config_file: Optional[str] = metadata_field([LoadScenario.LOCAL, LoadScenario.S3_LOCAL], None, "CONFIG_FILE", False)
|
||||
# Config directory location (filled automatically)
|
||||
config_dir: Optional[str] = metadata_field([LoadScenario.S3_LOCAL], None, "CONFIG_DIR", False)
|
||||
verify_clients: Optional[int] = metadata_field([LoadScenario.VERIFY], None, "CLIENTS", True)
|
||||
|
||||
def set_id(self, load_id):
|
||||
self.load_id = load_id
|
||||
|
||||
if self.read_from == ReadFrom.REGISTRY:
|
||||
self.registry_file = os.path.join(self.working_dir, f"{load_id}_registry.bolt")
|
||||
|
||||
# For now it's okay to have it this way
|
||||
if self.custom_registry is not None:
|
||||
self.registry_file = self.custom_registry
|
||||
|
||||
if self.read_from == ReadFrom.PRESET:
|
||||
self.registry_file = None
|
||||
|
||||
self.registry_file = os.path.join(self.working_dir, f"{load_id}_registry.bolt")
|
||||
if self.preset:
|
||||
self.preset.pregen_json = os.path.join(self.working_dir, f"{load_id}_prepare.json")
|
||||
|
||||
def get_env_vars(self):
|
||||
env_vars = {
|
||||
meta_field.metadata["env_variable"]: meta_field.value
|
||||
for meta_field in self._get_meta_fields(self)
|
||||
if self.scenario in meta_field.metadata["applicable_scenarios"]
|
||||
and meta_field.metadata["env_variable"]
|
||||
and meta_field.value is not None
|
||||
}
|
||||
|
||||
return env_vars
|
||||
|
||||
def __post_init__(self):
|
||||
default_scenario_map = {
|
||||
LoadType.gRPC: LoadScenario.gRPC,
|
||||
LoadType.HTTP: LoadScenario.HTTP,
|
||||
LoadType.S3: LoadScenario.S3,
|
||||
}
|
||||
|
||||
if self.scenario is None:
|
||||
self.scenario = default_scenario_map[self.load_type]
|
||||
|
||||
def get_preset_arguments(self):
|
||||
command_args = [
|
||||
self._get_preset_argument(meta_field)
|
||||
for meta_field in self._get_meta_fields(self)
|
||||
if self.scenario in meta_field.metadata["applicable_scenarios"]
|
||||
and meta_field.metadata["preset_argument"]
|
||||
and meta_field.value is not None
|
||||
and self._get_preset_argument(meta_field)
|
||||
]
|
||||
|
||||
return command_args
|
||||
|
||||
def get_init_time(self) -> int:
|
||||
return math.ceil(self._get_total_vus() * self.vu_init_time)
|
||||
|
||||
def _get_total_vus(self) -> int:
|
||||
vu_fields = ["writers", "preallocated_writers", "readers", "preallocated_readers"]
|
||||
data_fields = [getattr(self, field.name) or 0 for field in fields(self) if field.name in vu_fields]
|
||||
return sum(data_fields)
|
||||
|
||||
def _get_applicable_fields(self):
|
||||
applicable_fields = [
|
||||
meta_field
|
||||
for meta_field in self._get_meta_fields(self)
|
||||
if self.scenario in meta_field.metadata["applicable_scenarios"] and meta_field.value
|
||||
]
|
||||
|
||||
return applicable_fields
|
||||
|
||||
@staticmethod
|
||||
def _get_preset_argument(meta_field: MetaField) -> str:
|
||||
if isinstance(meta_field.value, bool):
|
||||
# For preset calls, bool values are passed with just --<argument_name> if the value is True
|
||||
return f"--{meta_field.metadata['preset_argument']}" if meta_field.value else ""
|
||||
|
||||
return f"--{meta_field.metadata['preset_argument']} '{meta_field.value}'"
|
||||
|
||||
@staticmethod
|
||||
def _get_meta_fields(instance) -> list[MetaField]:
|
||||
data_fields = fields(instance)
|
||||
|
||||
fields_with_data = [
|
||||
MetaField(field.name, field.metadata, getattr(instance, field.name))
|
||||
for field in data_fields
|
||||
if field.metadata and getattr(instance, field.name) is not None
|
||||
]
|
||||
|
||||
for field in data_fields:
|
||||
actual_field_type = get_args(field.type)[0] if len(get_args(field.type)) else get_args(field.type)
|
||||
if is_dataclass(actual_field_type) and getattr(instance, field.name):
|
||||
fields_with_data += LoadParams._get_meta_fields(getattr(instance, field.name))
|
||||
|
||||
return fields_with_data or []
|
||||
|
||||
def _get_field_formatter(self, field_name: str) -> Callable | None:
|
||||
data_fields = fields(self)
|
||||
formatters = [
|
||||
field.metadata["formatter"]
|
||||
for field in data_fields
|
||||
if field.name == field_name and "formatter" in field.metadata and field.metadata["formatter"] != None
|
||||
]
|
||||
if formatters:
|
||||
return formatters[0]
|
||||
|
||||
return None
|
||||
|
||||
def __setattr__(self, field_name, value):
|
||||
formatter = self._get_field_formatter(field_name)
|
||||
if formatter:
|
||||
value = formatter(value)
|
||||
|
||||
super().__setattr__(field_name, value)
|
||||
|
||||
def __str__(self) -> str:
|
||||
load_type_str = self.scenario.value if self.scenario else self.load_type.value
|
||||
# TODO: migrate load_params defaults to testlib
|
||||
if self.object_size is not None:
|
||||
size, unit = calc_unit(self.object_size, 1)
|
||||
static_params = [f"{load_type_str} {size:.4g} {unit}"]
|
||||
else:
|
||||
static_params = [f"{load_type_str}"]
|
||||
|
||||
dynamic_params = [
|
||||
f"{meta_field.name}={meta_field.value}"
|
||||
for meta_field in self._get_applicable_fields()
|
||||
if meta_field.metadata["string_repr"]
|
||||
]
|
||||
params = ", ".join(static_params + dynamic_params)
|
||||
|
||||
return params
|
||||
|
||||
def __repr__(self) -> str:
|
||||
return self.__str__()
|
||||
|
|
|
@ -1,50 +1,83 @@
|
|||
from abc import ABC
|
||||
from typing import Any, Optional
|
||||
from typing import Any
|
||||
|
||||
from frostfs_testlib.load.load_config import LoadScenario
|
||||
|
||||
|
||||
class OperationMetric(ABC):
|
||||
_NAME = ""
|
||||
_SUCCESS = ""
|
||||
_ERRORS = ""
|
||||
_THROUGHPUT = ""
|
||||
_LATENCY = ""
|
||||
class MetricsBase(ABC):
|
||||
_WRITE_SUCCESS = ""
|
||||
_WRITE_ERRORS = ""
|
||||
_WRITE_THROUGHPUT = "data_sent"
|
||||
|
||||
_READ_SUCCESS = ""
|
||||
_READ_ERRORS = ""
|
||||
_READ_THROUGHPUT = "data_received"
|
||||
|
||||
_DELETE_SUCCESS = ""
|
||||
_DELETE_ERRORS = ""
|
||||
|
||||
def __init__(self, summary) -> None:
|
||||
self.summary = summary
|
||||
self.metrics = summary["metrics"]
|
||||
|
||||
@property
|
||||
def total_iterations(self) -> int:
|
||||
return self._get_metric(self._SUCCESS) + self._get_metric(self._ERRORS)
|
||||
def write_total_iterations(self) -> int:
|
||||
return self._get_metric(self._WRITE_SUCCESS) + self._get_metric(self._WRITE_ERRORS)
|
||||
|
||||
@property
|
||||
def success_iterations(self) -> int:
|
||||
return self._get_metric(self._SUCCESS)
|
||||
def write_success_iterations(self) -> int:
|
||||
return self._get_metric(self._WRITE_SUCCESS)
|
||||
|
||||
@property
|
||||
def latency(self) -> dict:
|
||||
return self._get_metric(self._LATENCY)
|
||||
def write_rate(self) -> float:
|
||||
return self._get_metric_rate(self._WRITE_SUCCESS)
|
||||
|
||||
@property
|
||||
def rate(self) -> float:
|
||||
return self._get_metric_rate(self._SUCCESS)
|
||||
def write_failed_iterations(self) -> int:
|
||||
return self._get_metric(self._WRITE_ERRORS)
|
||||
|
||||
@property
|
||||
def failed_iterations(self) -> int:
|
||||
return self._get_metric(self._ERRORS)
|
||||
def write_throughput(self) -> float:
|
||||
return self._get_metric_rate(self._WRITE_THROUGHPUT)
|
||||
|
||||
@property
|
||||
def throughput(self) -> float:
|
||||
return self._get_metric_rate(self._THROUGHPUT)
|
||||
def read_total_iterations(self) -> int:
|
||||
return self._get_metric(self._READ_SUCCESS) + self._get_metric(self._READ_ERRORS)
|
||||
|
||||
@property
|
||||
def read_success_iterations(self) -> int:
|
||||
return self._get_metric(self._READ_SUCCESS)
|
||||
|
||||
@property
|
||||
def read_rate(self) -> int:
|
||||
return self._get_metric_rate(self._READ_SUCCESS)
|
||||
|
||||
@property
|
||||
def read_failed_iterations(self) -> int:
|
||||
return self._get_metric(self._READ_ERRORS)
|
||||
|
||||
@property
|
||||
def read_throughput(self) -> float:
|
||||
return self._get_metric_rate(self._READ_THROUGHPUT)
|
||||
|
||||
@property
|
||||
def delete_total_iterations(self) -> int:
|
||||
return self._get_metric(self._DELETE_SUCCESS) + self._get_metric(self._DELETE_ERRORS)
|
||||
|
||||
@property
|
||||
def delete_success_iterations(self) -> int:
|
||||
return self._get_metric(self._DELETE_SUCCESS)
|
||||
|
||||
@property
|
||||
def delete_failed_iterations(self) -> int:
|
||||
return self._get_metric(self._DELETE_ERRORS)
|
||||
|
||||
@property
|
||||
def delete_rate(self) -> int:
|
||||
return self._get_metric_rate(self._DELETE_SUCCESS)
|
||||
|
||||
def _get_metric(self, metric: str) -> int:
|
||||
metrics_method_map = {
|
||||
"counter": self._get_counter_metric,
|
||||
"gauge": self._get_gauge_metric,
|
||||
"trend": self._get_trend_metrics,
|
||||
}
|
||||
metrics_method_map = {"counter": self._get_counter_metric, "gauge": self._get_gauge_metric}
|
||||
|
||||
if metric not in self.metrics:
|
||||
return 0
|
||||
|
@ -52,7 +85,9 @@ class OperationMetric(ABC):
|
|||
metric = self.metrics[metric]
|
||||
metric_type = metric["type"]
|
||||
if metric_type not in metrics_method_map:
|
||||
raise Exception(f"Unsupported metric type: {metric_type}, supported: {metrics_method_map.keys()}")
|
||||
raise Exception(
|
||||
f"Unsupported metric type: {metric_type}, supported: {metrics_method_map.keys()}"
|
||||
)
|
||||
|
||||
return metrics_method_map[metric_type](metric)
|
||||
|
||||
|
@ -65,7 +100,9 @@ class OperationMetric(ABC):
|
|||
metric = self.metrics[metric]
|
||||
metric_type = metric["type"]
|
||||
if metric_type not in metrics_method_map:
|
||||
raise Exception(f"Unsupported rate metric type: {metric_type}, supported: {metrics_method_map.keys()}")
|
||||
raise Exception(
|
||||
f"Unsupported rate metric type: {metric_type}, supported: {metrics_method_map.keys()}"
|
||||
)
|
||||
|
||||
return metrics_method_map[metric_type](metric)
|
||||
|
||||
|
@ -78,149 +115,38 @@ class OperationMetric(ABC):
|
|||
def _get_gauge_metric(self, metric: str) -> int:
|
||||
return metric["values"]["value"]
|
||||
|
||||
def _get_trend_metrics(self, metric: str) -> int:
|
||||
return metric["values"]
|
||||
|
||||
|
||||
class WriteOperationMetric(OperationMetric):
|
||||
_NAME = "Write"
|
||||
_SUCCESS = ""
|
||||
_ERRORS = ""
|
||||
_THROUGHPUT = "data_sent"
|
||||
_LATENCY = ""
|
||||
|
||||
|
||||
class ReadOperationMetric(OperationMetric):
|
||||
_NAME = "Read"
|
||||
_SUCCESS = ""
|
||||
_ERRORS = ""
|
||||
_THROUGHPUT = "data_received"
|
||||
_LATENCY = ""
|
||||
|
||||
|
||||
class DeleteOperationMetric(OperationMetric):
|
||||
_NAME = "Delete"
|
||||
_SUCCESS = ""
|
||||
_ERRORS = ""
|
||||
_THROUGHPUT = ""
|
||||
_LATENCY = ""
|
||||
|
||||
|
||||
class GrpcWriteOperationMetric(WriteOperationMetric):
|
||||
_SUCCESS = "frostfs_obj_put_total"
|
||||
_ERRORS = "frostfs_obj_put_fails"
|
||||
_LATENCY = "frostfs_obj_put_duration"
|
||||
|
||||
|
||||
class GrpcReadOperationMetric(ReadOperationMetric):
|
||||
_SUCCESS = "frostfs_obj_get_total"
|
||||
_ERRORS = "frostfs_obj_get_fails"
|
||||
_LATENCY = "frostfs_obj_get_duration"
|
||||
|
||||
|
||||
class GrpcDeleteOperationMetric(DeleteOperationMetric):
|
||||
_SUCCESS = "frostfs_obj_delete_total"
|
||||
_ERRORS = "frostfs_obj_delete_fails"
|
||||
_LATENCY = "frostfs_obj_delete_duration"
|
||||
|
||||
|
||||
class S3WriteOperationMetric(WriteOperationMetric):
|
||||
_SUCCESS = "aws_obj_put_total"
|
||||
_ERRORS = "aws_obj_put_fails"
|
||||
_LATENCY = "aws_obj_put_duration"
|
||||
|
||||
|
||||
class S3ReadOperationMetric(ReadOperationMetric):
|
||||
_SUCCESS = "aws_obj_get_total"
|
||||
_ERRORS = "aws_obj_get_fails"
|
||||
_LATENCY = "aws_obj_get_duration"
|
||||
|
||||
|
||||
class S3DeleteOperationMetric(DeleteOperationMetric):
|
||||
_SUCCESS = "aws_obj_delete_total"
|
||||
_ERRORS = "aws_obj_delete_fails"
|
||||
_LATENCY = "aws_obj_delete_duration"
|
||||
|
||||
|
||||
class S3LocalWriteOperationMetric(WriteOperationMetric):
|
||||
_SUCCESS = "s3local_obj_put_total"
|
||||
_ERRORS = "s3local_obj_put_fails"
|
||||
_LATENCY = "s3local_obj_put_duration"
|
||||
|
||||
|
||||
class S3LocalReadOperationMetric(ReadOperationMetric):
|
||||
_SUCCESS = "s3local_obj_get_total"
|
||||
_ERRORS = "s3local_obj_get_fails"
|
||||
_LATENCY = "s3local_obj_get_duration"
|
||||
|
||||
|
||||
class LocalWriteOperationMetric(WriteOperationMetric):
|
||||
_SUCCESS = "local_obj_put_total"
|
||||
_ERRORS = "local_obj_put_fails"
|
||||
_LATENCY = "local_obj_put_duration"
|
||||
|
||||
|
||||
class LocalReadOperationMetric(ReadOperationMetric):
|
||||
_SUCCESS = "local_obj_get_total"
|
||||
_ERRORS = "local_obj_get_fails"
|
||||
|
||||
|
||||
class LocalDeleteOperationMetric(DeleteOperationMetric):
|
||||
_SUCCESS = "local_obj_delete_total"
|
||||
_ERRORS = "local_obj_delete_fails"
|
||||
|
||||
|
||||
class VerifyReadOperationMetric(ReadOperationMetric):
|
||||
_SUCCESS = "verified_obj"
|
||||
_ERRORS = "invalid_obj"
|
||||
|
||||
|
||||
class MetricsBase(ABC):
|
||||
def __init__(self) -> None:
|
||||
self.write: Optional[WriteOperationMetric] = None
|
||||
self.read: Optional[ReadOperationMetric] = None
|
||||
self.delete: Optional[DeleteOperationMetric] = None
|
||||
|
||||
@property
|
||||
def operations(self) -> list[OperationMetric]:
|
||||
return [metric for metric in [self.write, self.read, self.delete] if metric is not None]
|
||||
|
||||
|
||||
class GrpcMetrics(MetricsBase):
|
||||
def __init__(self, summary) -> None:
|
||||
super().__init__()
|
||||
self.write = GrpcWriteOperationMetric(summary)
|
||||
self.read = GrpcReadOperationMetric(summary)
|
||||
self.delete = GrpcDeleteOperationMetric(summary)
|
||||
_WRITE_SUCCESS = "frostfs_obj_put_total"
|
||||
_WRITE_ERRORS = "frostfs_obj_put_fails"
|
||||
|
||||
_READ_SUCCESS = "frostfs_obj_get_total"
|
||||
_READ_ERRORS = "frostfs_obj_get_fails"
|
||||
|
||||
_DELETE_SUCCESS = "frostfs_obj_delete_total"
|
||||
_DELETE_ERRORS = "frostfs_obj_delete_fails"
|
||||
|
||||
|
||||
class S3Metrics(MetricsBase):
|
||||
def __init__(self, summary) -> None:
|
||||
super().__init__()
|
||||
self.write = S3WriteOperationMetric(summary)
|
||||
self.read = S3ReadOperationMetric(summary)
|
||||
self.delete = S3DeleteOperationMetric(summary)
|
||||
_WRITE_SUCCESS = "aws_obj_put_total"
|
||||
_WRITE_ERRORS = "aws_obj_put_fails"
|
||||
|
||||
_READ_SUCCESS = "aws_obj_get_total"
|
||||
_READ_ERRORS = "aws_obj_get_fails"
|
||||
|
||||
class S3LocalMetrics(MetricsBase):
|
||||
def __init__(self, summary) -> None:
|
||||
super().__init__()
|
||||
self.write = S3LocalWriteOperationMetric(summary)
|
||||
self.read = S3LocalReadOperationMetric(summary)
|
||||
|
||||
|
||||
class LocalMetrics(MetricsBase):
|
||||
def __init__(self, summary) -> None:
|
||||
super().__init__()
|
||||
self.write = LocalWriteOperationMetric(summary)
|
||||
self.read = LocalReadOperationMetric(summary)
|
||||
self.delete = LocalDeleteOperationMetric(summary)
|
||||
_DELETE_SUCCESS = "aws_obj_delete_total"
|
||||
_DELETE_ERRORS = "aws_obj_delete_fails"
|
||||
|
||||
|
||||
class VerifyMetrics(MetricsBase):
|
||||
def __init__(self, summary) -> None:
|
||||
super().__init__()
|
||||
self.read = VerifyReadOperationMetric(summary)
|
||||
_WRITE_SUCCESS = "N/A"
|
||||
_WRITE_ERRORS = "N/A"
|
||||
|
||||
_READ_SUCCESS = "verified_obj"
|
||||
_READ_ERRORS = "invalid_obj"
|
||||
|
||||
_DELETE_SUCCESS = "N/A"
|
||||
_DELETE_ERRORS = "N/A"
|
||||
|
||||
|
||||
def get_metrics_object(load_type: LoadScenario, summary: dict[str, Any]) -> MetricsBase:
|
||||
|
@ -230,10 +156,7 @@ def get_metrics_object(load_type: LoadScenario, summary: dict[str, Any]) -> Metr
|
|||
LoadScenario.HTTP: GrpcMetrics,
|
||||
LoadScenario.S3: S3Metrics,
|
||||
LoadScenario.S3_CAR: S3Metrics,
|
||||
LoadScenario.S3_MULTIPART: S3Metrics,
|
||||
LoadScenario.S3_LOCAL: S3LocalMetrics,
|
||||
LoadScenario.VERIFY: VerifyMetrics,
|
||||
LoadScenario.LOCAL: LocalMetrics,
|
||||
}
|
||||
|
||||
return class_map[load_type](summary)
|
||||
|
|
|
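To make the metrics plumbing above concrete, here is a toy k6 summary pushed through `get_metrics_object()`. The exact keys the counter getters read (`count`/`rate`) follow k6's standard summary format and are an assumption, since `_get_counter_metric` itself is outside the hunks shown; the property names used below come from the left-hand version of load_metrics.py.

```python
toy_summary = {
    "metrics": {
        "frostfs_obj_put_total": {"type": "counter", "values": {"count": 1000, "rate": 16.6}},
        "frostfs_obj_put_fails": {"type": "counter", "values": {"count": 5, "rate": 0.08}},
        "data_sent": {"type": "counter", "values": {"count": 8_192_000, "rate": 136_533.3}},
    }
}

metrics = get_metrics_object(LoadScenario.gRPC, toy_summary)
print(metrics.write.total_iterations)   # expected 1005 (successes + failures)
print(metrics.write.failed_iterations)  # expected 5
```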
@@ -1,11 +1,10 @@
from datetime import datetime
from typing import Optional
from typing import Optional, Tuple

import yaml

from frostfs_testlib.load.interfaces.summarized import SummarizedStats
from frostfs_testlib.load.load_config import K6ProcessAllocationStrategy, LoadParams, LoadScenario
from frostfs_testlib.utils.converting_utils import calc_unit
from frostfs_testlib.load.load_metrics import get_metrics_object


class LoadReport:
@ -17,15 +16,11 @@ class LoadReport:
|
|||
self.start_time: Optional[datetime] = None
|
||||
self.end_time: Optional[datetime] = None
|
||||
|
||||
def set_start_time(self, time: datetime = None):
|
||||
if time is None:
|
||||
time = datetime.utcnow()
|
||||
self.start_time = time
|
||||
def set_start_time(self):
|
||||
self.start_time = datetime.utcnow()
|
||||
|
||||
def set_end_time(self, time: datetime = None):
|
||||
if time is None:
|
||||
time = datetime.utcnow()
|
||||
self.end_time = time
|
||||
def set_end_time(self):
|
||||
self.end_time = datetime.utcnow()
|
||||
|
||||
def add_summaries(self, load_summaries: dict):
|
||||
self.load_summaries_list.append(load_summaries)
|
||||
|
@ -35,7 +30,6 @@ class LoadReport:
|
|||
|
||||
def get_report_html(self):
|
||||
report_sections = [
|
||||
[self.load_params, self._get_load_id_section_html],
|
||||
[self.load_test, self._get_load_params_section_html],
|
||||
[self.load_summaries_list, self._get_totals_section_html],
|
||||
[self.end_time, self._get_test_time_html],
|
||||
|
@ -49,8 +43,8 @@ class LoadReport:
|
|||
return html
|
||||
|
||||
def _get_load_params_section_html(self) -> str:
|
||||
params: str = yaml.safe_dump([self.load_test], sort_keys=False, indent=2, explicit_start=True)
|
||||
params = params.replace("\n", "<br>").replace(" ", " ")
|
||||
params: str = yaml.safe_dump(self.load_test, sort_keys=False)
|
||||
params = params.replace("\n", "<br>")
|
||||
section_html = f"""<h3>Scenario params</h3>
|
||||
|
||||
<pre>{params}</pre>
|
||||
|
@ -58,23 +52,25 @@ class LoadReport:
|
|||
|
||||
return section_html
|
||||
|
||||
def _get_load_id_section_html(self) -> str:
|
||||
section_html = f"""<h3>Load ID: {self.load_params.load_id}</h3>
|
||||
<hr>"""
|
||||
|
||||
return section_html
|
||||
|
||||
def _get_test_time_html(self) -> str:
|
||||
if not self.start_time or not self.end_time:
|
||||
return ""
|
||||
|
||||
html = f"""<h3>Scenario duration</h3>
|
||||
html = f"""<h3>Scenario duration in UTC time (from agent)</h3>
|
||||
{self.start_time} - {self.end_time}<br>
|
||||
<hr>
|
||||
"""
|
||||
|
||||
return html
|
||||
|
||||
def _calc_unit(self, value: float, skip_units: int = 0) -> Tuple[float, str]:
    units = ["B", "KiB", "MiB", "GiB", "TiB"]

    for unit in units[skip_units:]:
        if value < 1024:
            return value, unit

        value = value / 1024.0

    return value, unit
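Worked examples for `_calc_unit()` as defined above; the helper does not touch instance state, so it can be exercised on a bare instance.

```python
report = LoadReport.__new__(LoadReport)    # skip __init__; _calc_unit uses no attributes
print(report._calc_unit(512))              # (512, 'B')
print(report._calc_unit(2_621_440))        # (2.5, 'MiB')
print(report._calc_unit(8, skip_units=1))  # (8, 'KiB') -- the value is already in KiB
```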
def _seconds_to_formatted_duration(self, seconds: int) -> str:
|
||||
"""Converts N number of seconds to formatted output ignoring zeroes.
|
||||
Examples:
|
||||
|
@ -104,56 +100,57 @@ class LoadReport:
|
|||
model_map = {
|
||||
LoadScenario.gRPC: "closed model",
|
||||
LoadScenario.S3: "closed model",
|
||||
LoadScenario.S3_MULTIPART: "closed model",
|
||||
LoadScenario.HTTP: "closed model",
|
||||
LoadScenario.gRPC_CAR: "open model",
|
||||
LoadScenario.S3_CAR: "open model",
|
||||
LoadScenario.LOCAL: "local fill",
|
||||
LoadScenario.S3_LOCAL: "local fill",
|
||||
}
|
||||
|
||||
return model_map[self.load_params.scenario]
|
||||
|
||||
def _get_operations_sub_section_html(self, operation_type: str, stats: SummarizedStats):
|
||||
def _get_oprations_sub_section_html(
|
||||
self,
|
||||
operation_type: str,
|
||||
total_operations: int,
|
||||
requested_rate_str: str,
|
||||
vus_str: str,
|
||||
total_rate: float,
|
||||
throughput: float,
|
||||
errors: dict[str, int],
|
||||
):
|
||||
throughput_html = ""
|
||||
if stats.throughput > 0:
|
||||
throughput, unit = calc_unit(stats.throughput)
|
||||
if throughput > 0:
|
||||
throughput, unit = self._calc_unit(throughput)
|
||||
throughput_html = self._row("Throughput", f"{throughput:.2f} {unit}/sec")
|
||||
|
||||
per_node_errors_html = ""
|
||||
for node_key, errors in stats.errors.by_node.items():
|
||||
if self.load_params.k6_process_allocation_strategy == K6ProcessAllocationStrategy.PER_ENDPOINT:
|
||||
per_node_errors_html += self._row(f"At {node_key}", errors)
|
||||
total_errors = 0
|
||||
if errors:
|
||||
total_errors: int = 0
|
||||
for node_key, errors in errors.items():
|
||||
total_errors += errors
|
||||
if (
|
||||
self.load_params.k6_process_allocation_strategy
|
||||
== K6ProcessAllocationStrategy.PER_ENDPOINT
|
||||
):
|
||||
per_node_errors_html += self._row(f"At {node_key}", errors)
|
||||
|
||||
latency_html = ""
|
||||
for node_key, latencies in stats.latencies.by_node.items():
|
||||
latency_values = "N/A"
|
||||
if latencies:
|
||||
latency_values = ""
|
||||
for param_name, param_val in latencies.items():
|
||||
latency_values += f"{param_name}={param_val:.2f}ms "
|
||||
|
||||
latency_html += self._row(f"{operation_type} latency {node_key.split(':')[0]}", latency_values)
|
||||
|
||||
object_size, object_size_unit = calc_unit(self.load_params.object_size, 1)
|
||||
object_size, object_size_unit = self._calc_unit(self.load_params.object_size, 1)
|
||||
duration = self._seconds_to_formatted_duration(self.load_params.load_time)
|
||||
model = self._get_model_string()
|
||||
requested_rate_str = f"{stats.requested_rate}op/sec" if stats.requested_rate else ""
|
||||
# write 8KB 15h49m 50op/sec 50th open model/closed model/min_iteration duration=1s - 1.636MB/s 199.57451/s
|
||||
short_summary = f"{operation_type} {object_size}{object_size_unit} {duration} {requested_rate_str} {stats.threads}th {model} - {throughput:.2f}{unit}/s {stats.rate:.2f}/s"
|
||||
short_summary = f"{operation_type} {object_size}{object_size_unit} {duration} {requested_rate_str} {vus_str} {model} - {throughput:.2f}{unit} {total_rate:.2f}/s"
|
||||
|
||||
html = f"""
|
||||
<table border="1" cellpadding="5px"><tbody>
|
||||
<tr><th colspan="2" bgcolor="gainsboro">{short_summary}</th></tr>
|
||||
<tr><th colspan="2" bgcolor="gainsboro">Metrics</th></tr>
|
||||
{self._row("Total operations", stats.operations)}
|
||||
{self._row("OP/sec", f"{stats.rate:.2f}")}
|
||||
{self._row("Total operations", total_operations)}
|
||||
{self._row("OP/sec", f"{total_rate:.2f}")}
|
||||
{throughput_html}
|
||||
{latency_html}
|
||||
|
||||
<tr><th colspan="2" bgcolor="gainsboro">Errors</th></tr>
|
||||
{per_node_errors_html}
|
||||
{self._row("Total", f"{stats.errors.total} ({stats.errors.percent:.2f}%)")}
|
||||
{self._row("Threshold", f"{stats.errors.threshold:.2f}%")}
|
||||
{self._row("Total", f"{total_errors} ({total_errors/total_operations*100.0:.2f}%)")}
|
||||
</tbody></table><br><hr>
|
||||
"""
|
||||
|
||||
|
@ -161,12 +158,112 @@ class LoadReport:
|
|||
|
||||
def _get_totals_section_html(self):
|
||||
html = ""
|
||||
for i in range(len(self.load_summaries_list)):
|
||||
html += f"<h3>Load Results for load #{i+1}</h3>"
|
||||
for i, load_summaries in enumerate(self.load_summaries_list, 1):
|
||||
html += f"<h3>Load Results for load #{i}</h3>"
|
||||
|
||||
summarized = SummarizedStats.collect(self.load_params, self.load_summaries_list[i])
|
||||
for operation_type, stats in summarized.items():
|
||||
if stats.operations:
|
||||
html += self._get_operations_sub_section_html(operation_type, stats)
|
||||
write_operations = 0
|
||||
write_op_sec = 0
|
||||
write_throughput = 0
|
||||
write_errors = {}
|
||||
requested_write_rate = self.load_params.write_rate
|
||||
requested_write_rate_str = (
|
||||
f"{requested_write_rate}op/sec" if requested_write_rate else ""
|
||||
)
|
||||
|
||||
read_operations = 0
|
||||
read_op_sec = 0
|
||||
read_throughput = 0
|
||||
read_errors = {}
|
||||
requested_read_rate = self.load_params.read_rate
|
||||
requested_read_rate_str = f"{requested_read_rate}op/sec" if requested_read_rate else ""
|
||||
|
||||
delete_operations = 0
|
||||
delete_op_sec = 0
|
||||
delete_errors = {}
|
||||
requested_delete_rate = self.load_params.delete_rate
|
||||
requested_delete_rate_str = (
|
||||
f"{requested_delete_rate}op/sec" if requested_delete_rate else ""
|
||||
)
|
||||
|
||||
if self.load_params.scenario in [LoadScenario.gRPC_CAR, LoadScenario.S3_CAR]:
|
||||
delete_vus = max(
|
||||
self.load_params.preallocated_deleters or 0, self.load_params.max_deleters or 0
|
||||
)
|
||||
write_vus = max(
|
||||
self.load_params.preallocated_writers or 0, self.load_params.max_writers or 0
|
||||
)
|
||||
read_vus = max(
|
||||
self.load_params.preallocated_readers or 0, self.load_params.max_readers or 0
|
||||
)
|
||||
else:
|
||||
write_vus = self.load_params.writers
|
||||
read_vus = self.load_params.readers
|
||||
delete_vus = self.load_params.deleters
|
||||
|
||||
write_vus_str = f"{write_vus}th"
|
||||
read_vus_str = f"{read_vus}th"
|
||||
delete_vus_str = f"{delete_vus}th"
|
||||
|
||||
write_section_required = False
|
||||
read_section_required = False
|
||||
delete_section_required = False
|
||||
|
||||
for node_key, load_summary in load_summaries.items():
|
||||
metrics = get_metrics_object(self.load_params.scenario, load_summary)
|
||||
write_operations += metrics.write_total_iterations
|
||||
if write_operations:
|
||||
write_section_required = True
|
||||
write_op_sec += metrics.write_rate
|
||||
write_throughput += metrics.write_throughput
|
||||
if metrics.write_failed_iterations:
|
||||
write_errors[node_key] = metrics.write_failed_iterations
|
||||
|
||||
read_operations += metrics.read_total_iterations
|
||||
if read_operations:
|
||||
read_section_required = True
|
||||
read_op_sec += metrics.read_rate
|
||||
read_throughput += metrics.read_throughput
|
||||
if metrics.read_failed_iterations:
|
||||
read_errors[node_key] = metrics.read_failed_iterations
|
||||
|
||||
delete_operations += metrics.delete_total_iterations
|
||||
if delete_operations:
|
||||
delete_section_required = True
|
||||
delete_op_sec += metrics.delete_rate
|
||||
if metrics.delete_failed_iterations:
|
||||
delete_errors[node_key] = metrics.delete_failed_iterations
|
||||
|
||||
if write_section_required:
|
||||
html += self._get_oprations_sub_section_html(
|
||||
"Write",
|
||||
write_operations,
|
||||
requested_write_rate_str,
|
||||
write_vus_str,
|
||||
write_op_sec,
|
||||
write_throughput,
|
||||
write_errors,
|
||||
)
|
||||
|
||||
if read_section_required:
|
||||
html += self._get_oprations_sub_section_html(
|
||||
"Read",
|
||||
read_operations,
|
||||
requested_read_rate_str,
|
||||
read_vus_str,
|
||||
read_op_sec,
|
||||
read_throughput,
|
||||
read_errors,
|
||||
)
|
||||
|
||||
if delete_section_required:
|
||||
html += self._get_oprations_sub_section_html(
|
||||
"Delete",
|
||||
delete_operations,
|
||||
requested_delete_rate_str,
|
||||
delete_vus_str,
|
||||
delete_op_sec,
|
||||
0,
|
||||
delete_errors,
|
||||
)
|
||||
|
||||
return html
|
||||
|
|
src/frostfs_testlib/load/load_steps.py  (+191)

@@ -0,0 +1,191 @@
|
|||
import copy
|
||||
import itertools
|
||||
import math
|
||||
import re
|
||||
from dataclasses import fields
|
||||
|
||||
from frostfs_testlib.cli import FrostfsAuthmate
|
||||
from frostfs_testlib.load.k6 import K6
|
||||
from frostfs_testlib.load.load_config import K6ProcessAllocationStrategy, LoadParams
|
||||
from frostfs_testlib.reporter import get_reporter
|
||||
from frostfs_testlib.resources.cli import FROSTFS_AUTHMATE_EXEC
|
||||
from frostfs_testlib.resources.load_params import (
|
||||
BACKGROUND_LOAD_VUS_COUNT_DIVISOR,
|
||||
LOAD_NODE_SSH_USER,
|
||||
)
|
||||
from frostfs_testlib.shell import CommandOptions, SSHShell
|
||||
from frostfs_testlib.shell.interfaces import InteractiveInput, SshCredentials
|
||||
from frostfs_testlib.storage.cluster import ClusterNode
|
||||
from frostfs_testlib.storage.dataclasses.frostfs_services import S3Gate, StorageNode
|
||||
from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
|
||||
|
||||
reporter = get_reporter()
|
||||
|
||||
STOPPED_HOSTS = []
|
||||
|
||||
|
||||
@reporter.step_deco("Init s3 client on load nodes")
|
||||
def init_s3_client(
|
||||
load_nodes: list[str],
|
||||
load_params: LoadParams,
|
||||
k6_directory: str,
|
||||
ssh_credentials: SshCredentials,
|
||||
nodes_under_load: list[ClusterNode],
|
||||
wallet: WalletInfo,
|
||||
):
|
||||
storage_node = nodes_under_load[0].service(StorageNode)
|
||||
s3_public_keys = [node.service(S3Gate).get_wallet_public_key() for node in nodes_under_load]
|
||||
grpc_peer = storage_node.get_rpc_endpoint()
|
||||
|
||||
for load_node in load_nodes:
|
||||
ssh_client = _get_shell(ssh_credentials, load_node)
|
||||
frostfs_authmate_exec: FrostfsAuthmate = FrostfsAuthmate(ssh_client, FROSTFS_AUTHMATE_EXEC)
|
||||
issue_secret_output = frostfs_authmate_exec.secret.issue(
|
||||
wallet=wallet.path,
|
||||
peer=grpc_peer,
|
||||
bearer_rules=f"{k6_directory}/scenarios/files/rules.json",
|
||||
gate_public_key=s3_public_keys,
|
||||
container_placement_policy=load_params.preset.container_placement_policy,
|
||||
container_policy=f"{k6_directory}/scenarios/files/policy.json",
|
||||
wallet_password=wallet.password,
|
||||
).stdout
|
||||
aws_access_key_id = str(
|
||||
re.search(r"access_key_id.*:\s.(?P<aws_access_key_id>\w*)", issue_secret_output).group(
|
||||
"aws_access_key_id"
|
||||
)
|
||||
)
|
||||
aws_secret_access_key = str(
|
||||
re.search(
|
||||
r"secret_access_key.*:\s.(?P<aws_secret_access_key>\w*)", issue_secret_output
|
||||
).group("aws_secret_access_key")
|
||||
)
|
||||
# prompt_pattern doesn't work at the moment
|
||||
configure_input = [
|
||||
InteractiveInput(prompt_pattern=r"AWS Access Key ID.*", input=aws_access_key_id),
|
||||
InteractiveInput(
|
||||
prompt_pattern=r"AWS Secret Access Key.*", input=aws_secret_access_key
|
||||
),
|
||||
InteractiveInput(prompt_pattern=r".*", input=""),
|
||||
InteractiveInput(prompt_pattern=r".*", input=""),
|
||||
]
|
||||
ssh_client.exec("aws configure", CommandOptions(interactive_inputs=configure_input))
|
||||
|
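The credential setup above pulls the S3 access key pair out of the raw `frostfs-authmate secret issue` output with two named-group regexes. A self-contained illustration of those exact patterns; the sample output below is invented for this sketch and may not match the real tool's formatting:

```python
import re

# Invented sample output; the real authmate formatting may differ.
issue_secret_output = """
access_key_id: "CnbGyLBfFXnUWH2xmDF1BCFd3YXGaSyPXloHYyZMPKyU"
secret_access_key: "2f28e9a4a0d3b8b0a5e9f1d0c7b6a59483726150aabbccdd"
"""

aws_access_key_id = str(
    re.search(r"access_key_id.*:\s.(?P<aws_access_key_id>\w*)", issue_secret_output).group("aws_access_key_id")
)
aws_secret_access_key = str(
    re.search(r"secret_access_key.*:\s.(?P<aws_secret_access_key>\w*)", issue_secret_output).group("aws_secret_access_key")
)

print(aws_access_key_id)      # CnbGyLBfFXnUWH2xmDF1BCFd3YXGaSyPXloHYyZMPKyU
print(aws_secret_access_key)  # 2f28e9a4a0d3b8b0a5e9f1d0c7b6a59483726150aabbccdd
```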
||||
|
||||
@reporter.step_deco("Prepare K6 instances and objects")
|
||||
def prepare_k6_instances(
|
||||
load_nodes: list[str],
|
||||
ssh_credentials: SshCredentials,
|
||||
k6_dir: str,
|
||||
load_params: LoadParams,
|
||||
endpoints: list[str],
|
||||
loaders_wallet: WalletInfo,
|
||||
) -> list[K6]:
|
||||
k6_load_objects: list[K6] = []
|
||||
nodes = itertools.cycle(load_nodes)
|
||||
|
||||
k6_distribution_count = {
|
||||
K6ProcessAllocationStrategy.PER_LOAD_NODE: len(load_nodes),
|
||||
K6ProcessAllocationStrategy.PER_ENDPOINT: len(endpoints),
|
||||
}
|
||||
endpoints_generators = {
|
||||
K6ProcessAllocationStrategy.PER_LOAD_NODE: itertools.cycle([endpoints]),
|
||||
K6ProcessAllocationStrategy.PER_ENDPOINT: itertools.cycle(
|
||||
[[endpoint] for endpoint in endpoints]
|
||||
),
|
||||
}
|
||||
k6_processes_count = k6_distribution_count[load_params.k6_process_allocation_strategy]
|
||||
endpoints_gen = endpoints_generators[load_params.k6_process_allocation_strategy]
|
||||
|
||||
distributed_load_params_list = _get_distributed_load_params_list(
|
||||
load_params, k6_processes_count
|
||||
)
|
||||
|
||||
for distributed_load_params in distributed_load_params_list:
|
||||
load_node = next(nodes)
|
||||
shell = _get_shell(ssh_credentials, load_node)
|
||||
# Make working_dir directory
|
||||
shell.exec(f"sudo mkdir -p {distributed_load_params.working_dir}")
|
||||
shell.exec(f"sudo chown {LOAD_NODE_SSH_USER} {distributed_load_params.working_dir}")
|
||||
|
||||
k6_load_object = K6(
|
||||
distributed_load_params,
|
||||
next(endpoints_gen),
|
||||
k6_dir,
|
||||
shell,
|
||||
load_node,
|
||||
loaders_wallet,
|
||||
)
|
||||
k6_load_objects.append(k6_load_object)
|
||||
if load_params.preset:
|
||||
k6_load_object.preset()
|
||||
|
||||
return k6_load_objects
|
||||
|
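The `k6_distribution_count` and `endpoints_generators` maps in `prepare_k6_instances` above decide how many k6 processes get created and which endpoints each process receives. A compact stand-alone illustration of that round-robin wiring; the strategy strings mirror `K6ProcessAllocationStrategy` members, and the hosts are made up:

```python
import itertools

endpoints = ["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"]
load_nodes = ["loader-1", "loader-2"]

generators = {
    "PER_LOAD_NODE": itertools.cycle([endpoints]),             # every process gets the full endpoint list
    "PER_ENDPOINT": itertools.cycle([[e] for e in endpoints]),  # every process gets exactly one endpoint
}
process_counts = {"PER_LOAD_NODE": len(load_nodes), "PER_ENDPOINT": len(endpoints)}

strategy = "PER_ENDPOINT"
nodes = itertools.cycle(load_nodes)
for _ in range(process_counts[strategy]):
    print(next(nodes), next(generators[strategy]))
# loader-1 ['10.0.0.1:8080']
# loader-2 ['10.0.0.2:8080']
# loader-1 ['10.0.0.3:8080']
```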
||||
|
||||
def _get_shell(ssh_credentials: SshCredentials, load_node: str) -> SSHShell:
|
||||
ssh_client = SSHShell(
|
||||
host=load_node,
|
||||
login=ssh_credentials.ssh_login,
|
||||
password=ssh_credentials.ssh_password,
|
||||
private_key_path=ssh_credentials.ssh_key_path,
|
||||
private_key_passphrase=ssh_credentials.ssh_key_passphrase,
|
||||
)
|
||||
|
||||
return ssh_client
|
||||
|
||||
|
||||
def _get_distributed_load_params_list(
|
||||
original_load_params: LoadParams, workers_count: int
|
||||
) -> list[LoadParams]:
|
||||
divisor = int(BACKGROUND_LOAD_VUS_COUNT_DIVISOR)
|
||||
distributed_load_params: list[LoadParams] = []
|
||||
|
||||
for i in range(workers_count):
|
||||
load_params = copy.deepcopy(original_load_params)
|
||||
# Append #i here in case if multiple k6 processes goes into same load node
|
||||
load_params.set_id(f"{load_params.load_id}_{i}")
|
||||
distributed_load_params.append(load_params)
|
||||
|
||||
load_fields = fields(original_load_params)
|
||||
|
||||
for field in load_fields:
|
||||
if (
|
||||
field.metadata
|
||||
and original_load_params.scenario in field.metadata["applicable_scenarios"]
|
||||
and field.metadata["distributed"]
|
||||
and getattr(original_load_params, field.name) is not None
|
||||
):
|
||||
original_value = getattr(original_load_params, field.name)
|
||||
distribution = _get_distribution(math.ceil(original_value / divisor), workers_count)
|
||||
for i in range(workers_count):
|
||||
setattr(distributed_load_params[i], field.name, distribution[i])
|
||||
|
||||
return distributed_load_params
|
||||
|
||||
|
||||
def _get_distribution(clients_count: int, workers_count: int) -> list[int]:
    """
    Distributes X clients over Y workers as evenly as possible.
    For example, if we have 150 readers (clients) and want to spread them over 4 load nodes (workers),
    this will return [38, 38, 37, 37].

    Args:
        clients_count: number of clients that need to be distributed.
        workers_count: number of workers.

    Returns:
        list with the number of clients per worker.
    """
    if workers_count < 1:
        raise Exception("Workers cannot be less than 1")

    # Amount of guaranteed payload on one worker
    clients_per_worker = clients_count // workers_count
    # Remainder of clients left to be distributed
    remainder = clients_count - clients_per_worker * workers_count

    distribution = [
        clients_per_worker + 1 if i < remainder else clients_per_worker
        for i in range(workers_count)
    ]
    return distribution
|
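To make the split concrete: `_get_distributed_load_params_list` above first divides a distributed setting (e.g. `writers`) by `BACKGROUND_LOAD_VUS_COUNT_DIVISOR` and then spreads the result with `_get_distribution`. A runnable sketch with illustrative numbers; the divisor and counts below are stand-ins, not values from the repository:

```python
import math


def get_distribution(clients_count: int, workers_count: int) -> list[int]:
    # Same logic as _get_distribution above, restated for a self-contained demo.
    per_worker = clients_count // workers_count
    remainder = clients_count - per_worker * workers_count
    return [per_worker + 1 if i < remainder else per_worker for i in range(workers_count)]


divisor = 2        # stand-in for BACKGROUND_LOAD_VUS_COUNT_DIVISOR
writers = 50       # value taken from the original LoadParams
workers_count = 4  # number of k6 processes

print(get_distribution(math.ceil(writers / divisor), workers_count))  # [7, 6, 6, 6]
print(get_distribution(150, 4))                                       # [38, 38, 37, 37]
```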
|
@ -1,66 +1,63 @@
|
|||
from frostfs_testlib import reporter
|
||||
from frostfs_testlib.load.interfaces.summarized import SummarizedStats
|
||||
import logging
|
||||
|
||||
from frostfs_testlib.load.load_config import LoadParams, LoadScenario
|
||||
from frostfs_testlib.load.load_metrics import get_metrics_object
|
||||
|
||||
logger = logging.getLogger("NeoLogger")
|
||||
|
||||
|
||||
class LoadVerifier:
|
||||
def __init__(self, load_params: LoadParams) -> None:
|
||||
self.load_params = load_params
|
||||
|
||||
def collect_load_issues(self, load_summaries: dict[str, dict]) -> list[str]:
|
||||
summarized = SummarizedStats.collect(self.load_params, load_summaries)
|
||||
issues = []
|
||||
def verify_summaries(self, load_summary, verification_summary) -> None:
|
||||
exceptions = []
|
||||
|
||||
for operation_type, stats in summarized.items():
|
||||
if stats.threads and not stats.operations:
|
||||
issues.append(f"No any {operation_type.lower()} operation was performed")
|
||||
|
||||
if stats.errors.percent > stats.errors.threshold:
|
||||
rate_str = self._get_rate_str(stats.errors.percent)
|
||||
issues.append(f"{operation_type} errors exceeded threshold: {rate_str} > {stats.errors.threshold}%")
|
||||
|
||||
return issues
|
||||
|
||||
def collect_verify_issues(self, load_summaries, verification_summaries) -> list[str]:
|
||||
verify_issues: list[str] = []
|
||||
for k6_process_label in load_summaries:
|
||||
with reporter.step(f"Check verify scenario results for {k6_process_label}"):
|
||||
verify_issues.extend(
|
||||
self._collect_verify_issues_on_process(
|
||||
k6_process_label,
|
||||
load_summaries[k6_process_label],
|
||||
verification_summaries[k6_process_label],
|
||||
)
|
||||
)
|
||||
return verify_issues
|
||||
|
||||
def _get_rate_str(self, rate: float, minimal: float = 0.01) -> str:
|
||||
return f"{rate:.2f}%" if rate >= minimal else f"~{minimal}%"
|
||||
|
||||
def _collect_verify_issues_on_process(self, label, load_summary, verification_summary) -> list[str]:
|
||||
issues = []
|
||||
if not verification_summary or not load_summary:
|
||||
logger.info("Can't check load results due to missing summary")
|
||||
|
||||
load_metrics = get_metrics_object(self.load_params.scenario, load_summary)
|
||||
|
||||
writers = self.load_params.writers or self.load_params.preallocated_writers or 0
|
||||
readers = self.load_params.readers or self.load_params.preallocated_readers or 0
|
||||
deleters = self.load_params.deleters or self.load_params.preallocated_deleters or 0
|
||||
|
||||
delete_success = 0
|
||||
objects_count = load_metrics.write_success_iterations
|
||||
fails_count = load_metrics.write_failed_iterations
|
||||
|
||||
if writers > 0:
|
||||
if objects_count < 1:
|
||||
exceptions.append("Total put objects should be greater than 0")
|
||||
if fails_count > 0:
|
||||
exceptions.append(f"There were {fails_count} failed write operations")
|
||||
|
||||
if readers > 0:
|
||||
read_count = load_metrics.read_success_iterations
|
||||
read_fails_count = load_metrics.read_failed_iterations
|
||||
if read_count < 1:
|
||||
exceptions.append("Total read operations should be greater than 0")
|
||||
if read_fails_count > 0:
|
||||
exceptions.append(f"There were {read_fails_count} failed read operations")
|
||||
|
||||
if deleters > 0:
|
||||
delete_success = load_metrics.delete.success_iterations
|
||||
delete_count = load_metrics.delete_success_iterations
|
||||
delete_fails_count = load_metrics.delete_failed_iterations
|
||||
if delete_count < 1:
|
||||
exceptions.append("Total delete operations should be greater than 0")
|
||||
if delete_fails_count > 0:
|
||||
exceptions.append(f"There were {delete_fails_count} failed delete operations")
|
||||
|
||||
if verification_summary:
|
||||
verify_metrics = get_metrics_object(LoadScenario.VERIFY, verification_summary)
|
||||
verified_objects = verify_metrics.read.success_iterations
|
||||
invalid_objects = verify_metrics.read.failed_iterations
|
||||
total_left_objects = load_metrics.write.success_iterations - delete_success
|
||||
verified_objects = verify_metrics.read_success_iterations
|
||||
invalid_objects = verify_metrics.read_failed_iterations
|
||||
|
||||
if invalid_objects > 0:
|
||||
exceptions.append(f"There were {invalid_objects} verification fails")
|
||||
# Due to interruptions we may see total verified objects to be less than written on writers count
|
||||
if abs(total_left_objects - verified_objects) > writers:
|
||||
issues.append(
|
||||
f"Verified objects mismatch for {label}. Total: {total_left_objects}, Verified: {verified_objects}. Writers: {writers}."
|
||||
if abs(objects_count - verified_objects) > writers:
|
||||
exceptions.append(
|
||||
f"Verified objects mismatch. Total: {objects_count}, Verified: {verified_objects}. Writers: {writers}."
|
||||
)
|
||||
|
||||
return issues
|
||||
assert not exceptions, "\n".join(exceptions)
|
||||
|
|
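On the newer side of this diff, `LoadVerifier.collect_load_issues` checks two things per operation type: that configured threads actually performed operations, and that the error percentage stays under its threshold. A hedged sketch of that pattern; the `OpStats` dataclass below is a stand-in, not the real `SummarizedStats` interface:

```python
from dataclasses import dataclass


@dataclass
class OpStats:
    operations: int
    threads: int
    error_percent: float
    error_threshold: float


def collect_issues(stats_by_type: dict[str, OpStats]) -> list[str]:
    issues = []
    for op_type, stats in stats_by_type.items():
        # Threads were configured but nothing happened at all.
        if stats.threads and not stats.operations:
            issues.append(f"No any {op_type.lower()} operation was performed")
        # Error rate above the allowed threshold.
        if stats.error_percent > stats.error_threshold:
            issues.append(
                f"{op_type} errors exceeded threshold: {stats.error_percent:.2f}% > {stats.error_threshold}%"
            )
    return issues


print(collect_issues({"Write": OpStats(operations=0, threads=8, error_percent=0.0, error_threshold=10.0)}))
# ['No any write operation was performed']
```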
|
@ -1,60 +0,0 @@
|
|||
from frostfs_testlib.load.interfaces.loader import Loader
|
||||
from frostfs_testlib.resources.load_params import (
|
||||
LOAD_NODE_SSH_PASSWORD,
|
||||
LOAD_NODE_SSH_PRIVATE_KEY_PASSPHRASE,
|
||||
LOAD_NODE_SSH_PRIVATE_KEY_PATH,
|
||||
LOAD_NODE_SSH_USER,
|
||||
)
|
||||
from frostfs_testlib.shell.interfaces import Shell, SshCredentials
|
||||
from frostfs_testlib.shell.ssh_shell import SSHShell
|
||||
from frostfs_testlib.storage.cluster import ClusterNode
|
||||
|
||||
|
||||
class RemoteLoader(Loader):
|
||||
def __init__(self, ssh_credentials: SshCredentials, ip: str) -> None:
|
||||
self.ssh_credentials = ssh_credentials
|
||||
self._ip = ip
|
||||
|
||||
@property
|
||||
def ip(self):
|
||||
return self._ip
|
||||
|
||||
def get_shell(self) -> Shell:
|
||||
ssh_client = SSHShell(
|
||||
host=self.ip,
|
||||
login=self.ssh_credentials.ssh_login,
|
||||
password=self.ssh_credentials.ssh_password,
|
||||
private_key_path=self.ssh_credentials.ssh_key_path,
|
||||
private_key_passphrase=self.ssh_credentials.ssh_key_passphrase,
|
||||
)
|
||||
|
||||
return ssh_client
|
||||
|
||||
@classmethod
|
||||
def from_ip_list(cls, ip_list: list[str]) -> list[Loader]:
|
||||
loaders: list[Loader] = []
|
||||
ssh_credentials = SshCredentials(
|
||||
LOAD_NODE_SSH_USER,
|
||||
LOAD_NODE_SSH_PASSWORD,
|
||||
LOAD_NODE_SSH_PRIVATE_KEY_PATH,
|
||||
LOAD_NODE_SSH_PRIVATE_KEY_PASSPHRASE,
|
||||
)
|
||||
|
||||
for ip in ip_list:
|
||||
loaders.append(RemoteLoader(ssh_credentials, ip))
|
||||
|
||||
return loaders
|
||||
|
||||
|
||||
class NodeLoader(Loader):
|
||||
"""When ClusterNode is the loader for itself (for Local scenario only)."""
|
||||
|
||||
def __init__(self, cluster_node: ClusterNode) -> None:
|
||||
self.cluster_node = cluster_node
|
||||
|
||||
def get_shell(self) -> Shell:
|
||||
return self.cluster_node.host.get_shell()
|
||||
|
||||
@property
|
||||
def ip(self):
|
||||
return self.cluster_node.host_ip
|
|
@ -1,517 +0,0 @@
|
|||
import copy
|
||||
import itertools
|
||||
import math
|
||||
import re
|
||||
import time
|
||||
from dataclasses import fields
|
||||
from typing import Optional
|
||||
from urllib.parse import urlparse
|
||||
|
||||
import yaml
|
||||
|
||||
from frostfs_testlib import reporter
|
||||
from frostfs_testlib.cli.frostfs_authmate.authmate import FrostfsAuthmate
|
||||
from frostfs_testlib.load.interfaces.loader import Loader
|
||||
from frostfs_testlib.load.interfaces.scenario_runner import ScenarioRunner
|
||||
from frostfs_testlib.load.k6 import K6
|
||||
from frostfs_testlib.load.load_config import K6ProcessAllocationStrategy, LoadParams, LoadType
|
||||
from frostfs_testlib.load.loaders import NodeLoader, RemoteLoader
|
||||
from frostfs_testlib.resources import optionals
|
||||
from frostfs_testlib.resources.cli import FROSTFS_AUTHMATE_EXEC
|
||||
from frostfs_testlib.resources.common import STORAGE_USER_NAME
|
||||
from frostfs_testlib.resources.load_params import BACKGROUND_LOAD_VUS_COUNT_DIVISOR, LOAD_NODE_SSH_USER, LOAD_NODES
|
||||
from frostfs_testlib.shell.command_inspectors import SuInspector
|
||||
from frostfs_testlib.shell.interfaces import CommandOptions, InteractiveInput
|
||||
from frostfs_testlib.storage.cluster import ClusterNode
|
||||
from frostfs_testlib.storage.controllers.cluster_state_controller import ClusterStateController
|
||||
from frostfs_testlib.storage.dataclasses.frostfs_services import S3Gate, StorageNode
|
||||
from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
|
||||
from frostfs_testlib.testing import parallel, run_optionally
|
||||
from frostfs_testlib.testing.test_control import retry
|
||||
from frostfs_testlib.utils import datetime_utils
|
||||
from frostfs_testlib.utils.file_keeper import FileKeeper
|
||||
from threading import Event
|
||||
|
||||
|
||||
class RunnerBase(ScenarioRunner):
|
||||
k6_instances: list[K6]
|
||||
|
||||
@reporter.step("Run preset on loaders")
|
||||
def preset(self):
|
||||
parallel([k6.preset for k6 in self.k6_instances])
|
||||
|
||||
@reporter.step("Wait until load finish")
|
||||
def wait_until_finish(self, soft_timeout: int = 0):
|
||||
event = Event()
|
||||
parallel([k6.wait_until_finished for k6 in self.k6_instances], event=event, soft_timeout=soft_timeout)
|
||||
|
||||
@property
|
||||
def is_running(self):
|
||||
futures = parallel([k6.is_running for k6 in self.k6_instances])
|
||||
|
||||
return any([future.result() for future in futures])
|
||||
|
||||
def get_k6_instances(self):
|
||||
return self.k6_instances
|
||||
|
||||
|
||||
class DefaultRunner(RunnerBase):
|
||||
loaders: list[Loader]
|
||||
loaders_wallet: WalletInfo
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
loaders_wallet: WalletInfo,
|
||||
load_ip_list: Optional[list[str]] = None,
|
||||
) -> None:
|
||||
if load_ip_list is None:
|
||||
load_ip_list = LOAD_NODES
|
||||
self.loaders = RemoteLoader.from_ip_list(load_ip_list)
|
||||
self.loaders_wallet = loaders_wallet
|
||||
|
||||
@run_optionally(optionals.OPTIONAL_BACKGROUND_LOAD_ENABLED)
|
||||
@reporter.step("Preparation steps")
|
||||
def prepare(
|
||||
self,
|
||||
load_params: LoadParams,
|
||||
cluster_nodes: list[ClusterNode],
|
||||
nodes_under_load: list[ClusterNode],
|
||||
k6_dir: str,
|
||||
):
|
||||
if load_params.load_type != LoadType.S3:
|
||||
return
|
||||
|
||||
with reporter.step("Init s3 client on loaders"):
|
||||
storage_node = nodes_under_load[0].service(StorageNode)
|
||||
s3_public_keys = [node.service(S3Gate).get_wallet_public_key() for node in cluster_nodes]
|
||||
grpc_peer = storage_node.get_rpc_endpoint()
|
||||
|
||||
parallel(self._prepare_loader, self.loaders, load_params, grpc_peer, s3_public_keys, k6_dir)
|
||||
|
||||
def _prepare_loader(
|
||||
self,
|
||||
loader: Loader,
|
||||
load_params: LoadParams,
|
||||
grpc_peer: str,
|
||||
s3_public_keys: list[str],
|
||||
k6_dir: str,
|
||||
):
|
||||
with reporter.step(f"Init s3 client on {loader.ip}"):
|
||||
shell = loader.get_shell()
|
||||
frostfs_authmate_exec: FrostfsAuthmate = FrostfsAuthmate(shell, FROSTFS_AUTHMATE_EXEC)
|
||||
issue_secret_output = frostfs_authmate_exec.secret.issue(
|
||||
wallet=self.loaders_wallet.path,
|
||||
peer=grpc_peer,
|
||||
gate_public_key=s3_public_keys,
|
||||
container_placement_policy=load_params.preset.container_placement_policy,
|
||||
container_policy=f"{k6_dir}/scenarios/files/policy.json",
|
||||
wallet_password=self.loaders_wallet.password,
|
||||
).stdout
|
||||
aws_access_key_id = str(
|
||||
re.search(r"access_key_id.*:\s.(?P<aws_access_key_id>\w*)", issue_secret_output).group(
|
||||
"aws_access_key_id"
|
||||
)
|
||||
)
|
||||
aws_secret_access_key = str(
|
||||
re.search(
|
||||
r"secret_access_key.*:\s.(?P<aws_secret_access_key>\w*)",
|
||||
issue_secret_output,
|
||||
).group("aws_secret_access_key")
|
||||
)
|
||||
|
||||
configure_input = [
|
||||
InteractiveInput(prompt_pattern=r"AWS Access Key ID.*", input=aws_access_key_id),
|
||||
InteractiveInput(prompt_pattern=r"AWS Secret Access Key.*", input=aws_secret_access_key),
|
||||
InteractiveInput(prompt_pattern=r".*", input=""),
|
||||
InteractiveInput(prompt_pattern=r".*", input=""),
|
||||
]
|
||||
shell.exec("aws configure", CommandOptions(interactive_inputs=configure_input))
|
||||
|
||||
@reporter.step("Init k6 instances")
|
||||
def init_k6_instances(self, load_params: LoadParams, endpoints: list[str], k6_dir: str):
|
||||
self.k6_instances = []
|
||||
cycled_loaders = itertools.cycle(self.loaders)
|
||||
|
||||
k6_distribution_count = {
|
||||
K6ProcessAllocationStrategy.PER_LOAD_NODE: len(self.loaders),
|
||||
K6ProcessAllocationStrategy.PER_ENDPOINT: len(endpoints),
|
||||
}
|
||||
endpoints_generators = {
|
||||
K6ProcessAllocationStrategy.PER_LOAD_NODE: itertools.cycle([endpoints]),
|
||||
K6ProcessAllocationStrategy.PER_ENDPOINT: itertools.cycle([[endpoint] for endpoint in endpoints]),
|
||||
}
|
||||
k6_processes_count = k6_distribution_count[load_params.k6_process_allocation_strategy]
|
||||
endpoints_gen = endpoints_generators[load_params.k6_process_allocation_strategy]
|
||||
|
||||
distributed_load_params_list = self._get_distributed_load_params_list(load_params, k6_processes_count)
|
||||
|
||||
futures = parallel(
|
||||
self._init_k6_instance,
|
||||
distributed_load_params_list,
|
||||
loader=cycled_loaders,
|
||||
endpoints=endpoints_gen,
|
||||
k6_dir=k6_dir,
|
||||
)
|
||||
self.k6_instances = [future.result() for future in futures]
|
||||
|
||||
def _init_k6_instance(self, load_params_for_loader: LoadParams, loader: Loader, endpoints: list[str], k6_dir: str):
|
||||
shell = loader.get_shell()
|
||||
with reporter.step(f"Init K6 instance on {loader.ip} for endpoints {endpoints}"):
|
||||
with reporter.step(f"Make working directory"):
|
||||
shell.exec(f"sudo mkdir -p {load_params_for_loader.working_dir}")
|
||||
shell.exec(f"sudo chown {LOAD_NODE_SSH_USER} {load_params_for_loader.working_dir}")
|
||||
|
||||
return K6(
|
||||
load_params_for_loader,
|
||||
endpoints,
|
||||
k6_dir,
|
||||
shell,
|
||||
loader,
|
||||
self.loaders_wallet,
|
||||
)
|
||||
|
||||
def _get_distributed_load_params_list(
|
||||
self, original_load_params: LoadParams, workers_count: int
|
||||
) -> list[LoadParams]:
|
||||
divisor = int(BACKGROUND_LOAD_VUS_COUNT_DIVISOR)
|
||||
distributed_load_params: list[LoadParams] = []
|
||||
|
||||
for i in range(workers_count):
|
||||
load_params = copy.deepcopy(original_load_params)
|
||||
# Append #i here in case if multiple k6 processes goes into same load node
|
||||
load_params.set_id(f"{load_params.load_id}_{i}")
|
||||
distributed_load_params.append(load_params)
|
||||
|
||||
load_fields = fields(original_load_params)
|
||||
|
||||
for field in load_fields:
|
||||
if (
|
||||
field.metadata
|
||||
and original_load_params.scenario in field.metadata["applicable_scenarios"]
|
||||
and field.metadata["distributed"]
|
||||
and getattr(original_load_params, field.name) is not None
|
||||
):
|
||||
original_value = getattr(original_load_params, field.name)
|
||||
distribution = self._get_distribution(math.ceil(original_value / divisor), workers_count)
|
||||
for i in range(workers_count):
|
||||
setattr(distributed_load_params[i], field.name, distribution[i])
|
||||
|
||||
return distributed_load_params
|
||||
|
||||
def _get_distribution(self, clients_count: int, workers_count: int) -> list[int]:
|
||||
"""
|
||||
This function will distribute evenly as possible X clients to Y workers.
|
||||
For example if we have 150 readers (clients) and we want to spread it over 4 load nodes (workers)
|
||||
this will return [38, 38, 37, 37].
|
||||
|
||||
Args:
|
||||
clients_count: amount of things needs to be distributed.
|
||||
workers_count: amount of workers.
|
||||
|
||||
Returns:
|
||||
list of distribution.
|
||||
"""
|
||||
if workers_count < 1:
|
||||
raise Exception("Workers cannot be less then 1")
|
||||
|
||||
# Amount of guaranteed payload on one worker
|
||||
clients_per_worker = clients_count // workers_count
|
||||
# Remainder of clients left to be distributed
|
||||
remainder = clients_count - clients_per_worker * workers_count
|
||||
|
||||
distribution = [clients_per_worker + 1 if i < remainder else clients_per_worker for i in range(workers_count)]
|
||||
return distribution
|
||||
|
||||
def start(self):
|
||||
load_params = self.k6_instances[0].load_params
|
||||
|
||||
parallel([k6.start for k6 in self.k6_instances])
|
||||
|
||||
wait_after_start_time = datetime_utils.parse_time(load_params.setup_timeout) + 5
|
||||
with reporter.step(f"Wait for start timeout + couple more seconds ({wait_after_start_time}) before moving on"):
|
||||
time.sleep(wait_after_start_time)
|
||||
|
||||
def stop(self):
|
||||
for k6_instance in self.k6_instances:
|
||||
k6_instance.stop()
|
||||
|
||||
def get_results(self) -> dict:
|
||||
results = {}
|
||||
for k6_instance in self.k6_instances:
|
||||
if k6_instance.load_params.k6_process_allocation_strategy is None:
|
||||
raise RuntimeError("k6_process_allocation_strategy should not be none")
|
||||
|
||||
result = k6_instance.get_results()
|
||||
endpoint = urlparse(k6_instance.endpoints[0]).netloc or k6_instance.endpoints[0]
|
||||
keys_map = {
|
||||
K6ProcessAllocationStrategy.PER_LOAD_NODE: k6_instance.loader.ip,
|
||||
K6ProcessAllocationStrategy.PER_ENDPOINT: endpoint,
|
||||
}
|
||||
key = keys_map[k6_instance.load_params.k6_process_allocation_strategy]
|
||||
results[key] = result
|
||||
|
||||
return results
|
||||
|
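`DefaultRunner.get_results` above keys each k6 result either by loader IP or by the endpoint's `host:port`, depending on the allocation strategy. A quick standalone look at that keying; strategy strings and addresses are illustrative:

```python
from urllib.parse import urlparse


def result_key(strategy: str, loader_ip: str, endpoint: str) -> str:
    # Prefer the parsed netloc; fall back to the raw endpoint if there is no scheme.
    parsed = urlparse(endpoint).netloc or endpoint
    keys_map = {"PER_LOAD_NODE": loader_ip, "PER_ENDPOINT": parsed}
    return keys_map[strategy]


print(result_key("PER_ENDPOINT", "172.16.0.5", "http://10.0.0.1:8080"))   # 10.0.0.1:8080
print(result_key("PER_ENDPOINT", "172.16.0.5", "10.0.0.1:8080"))          # 10.0.0.1:8080 (no scheme -> fallback)
print(result_key("PER_LOAD_NODE", "172.16.0.5", "http://10.0.0.1:8080"))  # 172.16.0.5
```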
||||
|
||||
class LocalRunner(RunnerBase):
|
||||
loaders: list[Loader]
|
||||
cluster_state_controller: ClusterStateController
|
||||
file_keeper: FileKeeper
|
||||
wallet: WalletInfo
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
cluster_state_controller: ClusterStateController,
|
||||
file_keeper: FileKeeper,
|
||||
nodes_under_load: list[ClusterNode],
|
||||
) -> None:
|
||||
self.cluster_state_controller = cluster_state_controller
|
||||
self.file_keeper = file_keeper
|
||||
self.loaders = [NodeLoader(node) for node in nodes_under_load]
|
||||
self.nodes_under_load = nodes_under_load
|
||||
|
||||
@run_optionally(optionals.OPTIONAL_BACKGROUND_LOAD_ENABLED)
|
||||
@reporter.step("Preparation steps")
|
||||
def prepare(
|
||||
self,
|
||||
load_params: LoadParams,
|
||||
cluster_nodes: list[ClusterNode],
|
||||
nodes_under_load: list[ClusterNode],
|
||||
k6_dir: str,
|
||||
):
|
||||
parallel(self.prepare_node, nodes_under_load, k6_dir, load_params)
|
||||
|
||||
@retry(3, 5, expected_result=True)
|
||||
def allow_user_to_login_in_system(self, cluster_node: ClusterNode):
|
||||
shell = cluster_node.host.get_shell()
|
||||
|
||||
result = None
|
||||
try:
|
||||
shell.exec(f"sudo chsh -s /bin/bash {STORAGE_USER_NAME}")
|
||||
self.lock_passwd_on_node(cluster_node)
|
||||
options = CommandOptions(check=False, extra_inspectors=[SuInspector(STORAGE_USER_NAME)])
|
||||
result = shell.exec("whoami", options)
|
||||
finally:
|
||||
if not result or result.return_code:
|
||||
self.restore_passwd_on_node(cluster_node)
|
||||
return False
|
||||
|
||||
return True
|
||||
|
||||
@reporter.step("Prepare node {cluster_node}")
|
||||
def prepare_node(self, cluster_node: ClusterNode, k6_dir: str, load_params: LoadParams):
|
||||
shell = cluster_node.host.get_shell()
|
||||
|
||||
with reporter.step("Allow storage user to login into system"):
|
||||
self.allow_user_to_login_in_system(cluster_node)
|
||||
|
||||
with reporter.step("Update limits.conf"):
|
||||
limits_path = "/etc/security/limits.conf"
|
||||
self.file_keeper.add(cluster_node.storage_node, limits_path)
|
||||
content = f"{STORAGE_USER_NAME} hard nofile 65536\n{STORAGE_USER_NAME} soft nofile 65536\n"
|
||||
shell.exec(f"echo '{content}' | sudo tee {limits_path}")
|
||||
|
||||
with reporter.step("Download K6"):
|
||||
shell.exec(f"sudo rm -rf {k6_dir};sudo mkdir {k6_dir}")
|
||||
shell.exec(f"sudo curl -so {k6_dir}/k6.tar.gz {load_params.k6_url}")
|
||||
shell.exec(f"sudo tar xf {k6_dir}/k6.tar.gz -C {k6_dir}")
|
||||
shell.exec(f"sudo chmod -R 777 {k6_dir}")
|
||||
|
||||
with reporter.step("Create empty_passwd"):
|
||||
self.wallet = WalletInfo(f"{k6_dir}/scenarios/files/wallet.json", "", "/tmp/empty_passwd.yml")
|
||||
content = yaml.dump({"password": ""})
|
||||
shell.exec(f'echo "{content}" | sudo tee {self.wallet.config_path}')
|
||||
shell.exec(f"sudo chmod -R 777 {self.wallet.config_path}")
|
||||
|
||||
@reporter.step("Init k6 instances")
|
||||
def init_k6_instances(self, load_params: LoadParams, endpoints: list[str], k6_dir: str):
|
||||
self.k6_instances = []
|
||||
futures = parallel(
|
||||
self._init_k6_instance,
|
||||
self.loaders,
|
||||
load_params,
|
||||
k6_dir,
|
||||
)
|
||||
self.k6_instances = [future.result() for future in futures]
|
||||
|
||||
def _init_k6_instance(self, loader: Loader, load_params: LoadParams, k6_dir: str):
|
||||
shell = loader.get_shell()
|
||||
with reporter.step(f"Init K6 instance on {loader.ip}"):
|
||||
with reporter.step(f"Make working directory"):
|
||||
shell.exec(f"sudo mkdir -p {load_params.working_dir}")
|
||||
# If we chmod /home/<user_name> folder we can no longer ssh to the node
|
||||
# !! IMPORTANT !!
|
||||
if (
|
||||
load_params.working_dir
|
||||
and not load_params.working_dir == f"/home/{LOAD_NODE_SSH_USER}"
|
||||
and not load_params.working_dir == f"/home/{LOAD_NODE_SSH_USER}/"
|
||||
):
|
||||
shell.exec(f"sudo chmod -R 777 {load_params.working_dir}")
|
||||
|
||||
return K6(
|
||||
load_params,
|
||||
["localhost:8080"],
|
||||
k6_dir,
|
||||
shell,
|
||||
loader,
|
||||
self.wallet,
|
||||
)
|
||||
|
||||
def start(self):
|
||||
load_params = self.k6_instances[0].load_params
|
||||
|
||||
self.cluster_state_controller.stop_services_of_type(S3Gate)
|
||||
self.cluster_state_controller.stop_services_of_type(StorageNode)
|
||||
|
||||
parallel([k6.start for k6 in self.k6_instances])
|
||||
|
||||
wait_after_start_time = datetime_utils.parse_time(load_params.setup_timeout) + 5
|
||||
with reporter.step(f"Wait for start timeout + couple more seconds ({wait_after_start_time}) before moving on"):
|
||||
time.sleep(wait_after_start_time)
|
||||
|
||||
@reporter.step("Restore passwd on {cluster_node}")
|
||||
def restore_passwd_on_node(self, cluster_node: ClusterNode):
|
||||
shell = cluster_node.host.get_shell()
|
||||
shell.exec("sudo chattr -i /etc/passwd")
|
||||
|
||||
@reporter.step("Lock passwd on {cluster_node}")
|
||||
def lock_passwd_on_node(self, cluster_node: ClusterNode):
|
||||
shell = cluster_node.host.get_shell()
|
||||
shell.exec("sudo chattr +i /etc/passwd")
|
||||
|
||||
def stop(self):
|
||||
for k6_instance in self.k6_instances:
|
||||
k6_instance.stop()
|
||||
|
||||
self.cluster_state_controller.start_all_stopped_services()
|
||||
|
||||
def get_results(self) -> dict:
|
||||
results = {}
|
||||
for k6_instance in self.k6_instances:
|
||||
result = k6_instance.get_results()
|
||||
results[k6_instance.loader.ip] = result
|
||||
|
||||
parallel(self.restore_passwd_on_node, self.nodes_under_load)
|
||||
|
||||
return results
|
||||
|
||||
|
||||
class S3LocalRunner(LocalRunner):
|
||||
endpoints: list[str]
|
||||
k6_dir: str
|
||||
|
||||
@reporter.step("Run preset on loaders")
|
||||
def preset(self):
|
||||
LocalRunner.preset(self)
|
||||
with reporter.step(f"Resolve containers in preset"):
|
||||
parallel(self._resolve_containers_in_preset, self.k6_instances)
|
||||
|
||||
@reporter.step("Resolve containers in preset")
|
||||
def _resolve_containers_in_preset(self, k6_instance: K6):
|
||||
k6_instance.shell.exec(
|
||||
f"sudo {self.k6_dir}/scenarios/preset/resolve_containers_in_preset.py --endpoint {k6_instance.endpoints[0]} --preset_file {k6_instance.load_params.preset.pregen_json}"
|
||||
)
|
||||
|
||||
@reporter.step("Init k6 instances")
|
||||
def init_k6_instances(self, load_params: LoadParams, endpoints: list[str], k6_dir: str):
|
||||
self.k6_instances = []
|
||||
futures = parallel(
|
||||
self._init_k6_instance_,
|
||||
self.loaders,
|
||||
load_params,
|
||||
endpoints,
|
||||
k6_dir,
|
||||
)
|
||||
self.k6_instances = [future.result() for future in futures]
|
||||
|
||||
def _init_k6_instance_(self, loader: Loader, load_params: LoadParams, endpoints: list[str], k6_dir: str):
|
||||
shell = loader.get_shell()
|
||||
with reporter.step(f"Init K6 instance on {loader.ip} for endpoints {endpoints}"):
|
||||
with reporter.step(f"Make working directory"):
|
||||
shell.exec(f"sudo mkdir -p {load_params.working_dir}")
|
||||
# If we chmod /home/<user_name> folder we can no longer ssh to the node
|
||||
# !! IMPORTANT !!
|
||||
if (
|
||||
load_params.working_dir
|
||||
and not load_params.working_dir == f"/home/{LOAD_NODE_SSH_USER}"
|
||||
and not load_params.working_dir == f"/home/{LOAD_NODE_SSH_USER}/"
|
||||
):
|
||||
shell.exec(f"sudo chmod -R 777 {load_params.working_dir}")
|
||||
|
||||
return K6(
|
||||
load_params,
|
||||
self.endpoints,
|
||||
k6_dir,
|
||||
shell,
|
||||
loader,
|
||||
self.wallet,
|
||||
)
|
||||
|
||||
@run_optionally(optionals.OPTIONAL_BACKGROUND_LOAD_ENABLED)
|
||||
@reporter.step("Preparation steps")
|
||||
def prepare(
|
||||
self,
|
||||
load_params: LoadParams,
|
||||
cluster_nodes: list[ClusterNode],
|
||||
nodes_under_load: list[ClusterNode],
|
||||
k6_dir: str,
|
||||
):
|
||||
self.k6_dir = k6_dir
|
||||
with reporter.step("Init s3 client on loaders"):
|
||||
storage_node = nodes_under_load[0].service(StorageNode)
|
||||
s3_public_keys = [node.service(S3Gate).get_wallet_public_key() for node in cluster_nodes]
|
||||
grpc_peer = storage_node.get_rpc_endpoint()
|
||||
|
||||
parallel(self.prepare_node, nodes_under_load, k6_dir, load_params, s3_public_keys, grpc_peer)
|
||||
|
||||
@reporter.step("Prepare node {cluster_node}")
|
||||
def prepare_node(
|
||||
self, cluster_node: ClusterNode, k6_dir: str, load_params: LoadParams, s3_public_keys: list[str], grpc_peer: str
|
||||
):
|
||||
LocalRunner.prepare_node(self, cluster_node, k6_dir, load_params)
|
||||
self.endpoints = cluster_node.s3_gate.get_all_endpoints()
|
||||
shell = cluster_node.host.get_shell()
|
||||
|
||||
with reporter.step("Uninstall previous installation of aws cli"):
|
||||
shell.exec(f"sudo rm -rf /usr/local/aws-cli")
|
||||
shell.exec(f"sudo rm -rf /usr/local/bin/aws")
|
||||
shell.exec(f"sudo rm -rf /usr/local/bin/aws_completer")
|
||||
|
||||
with reporter.step("Install aws cli"):
|
||||
shell.exec(f"sudo curl {load_params.awscli_url} -o {k6_dir}/awscliv2.zip")
|
||||
shell.exec(f"sudo unzip -q {k6_dir}/awscliv2.zip -d {k6_dir}")
|
||||
shell.exec(f"sudo {k6_dir}/aws/install")
|
||||
|
||||
with reporter.step("Install requests python module"):
|
||||
shell.exec(f"sudo apt-get -y install python3-pip")
|
||||
shell.exec(f"sudo curl -so {k6_dir}/requests.tar.gz {load_params.requests_module_url}")
|
||||
shell.exec(f"sudo python3 -m pip install -I {k6_dir}/requests.tar.gz")
|
||||
|
||||
with reporter.step(f"Init s3 client on {cluster_node.host_ip}"):
|
||||
frostfs_authmate_exec: FrostfsAuthmate = FrostfsAuthmate(shell, FROSTFS_AUTHMATE_EXEC)
|
||||
issue_secret_output = frostfs_authmate_exec.secret.issue(
|
||||
wallet=self.wallet.path,
|
||||
peer=grpc_peer,
|
||||
gate_public_key=s3_public_keys,
|
||||
container_placement_policy=load_params.preset.container_placement_policy,
|
||||
container_policy=f"{k6_dir}/scenarios/files/policy.json",
|
||||
wallet_password=self.wallet.password,
|
||||
).stdout
|
||||
aws_access_key_id = str(
|
||||
re.search(r"access_key_id.*:\s.(?P<aws_access_key_id>\w*)", issue_secret_output).group(
|
||||
"aws_access_key_id"
|
||||
)
|
||||
)
|
||||
aws_secret_access_key = str(
|
||||
re.search(
|
||||
r"secret_access_key.*:\s.(?P<aws_secret_access_key>\w*)",
|
||||
issue_secret_output,
|
||||
).group("aws_secret_access_key")
|
||||
)
|
||||
configure_input = [
|
||||
InteractiveInput(prompt_pattern=r"AWS Access Key ID.*", input=aws_access_key_id),
|
||||
InteractiveInput(prompt_pattern=r"AWS Secret Access Key.*", input=aws_secret_access_key),
|
||||
InteractiveInput(prompt_pattern=r".*", input=""),
|
||||
InteractiveInput(prompt_pattern=r".*", input=""),
|
||||
]
|
||||
shell.exec("aws configure", CommandOptions(interactive_inputs=configure_input))
|
|
@@ -1,6 +1,12 @@
from importlib.metadata import entry_points
import sys
from typing import Any

if sys.version_info < (3, 10):
    # On Python prior to 3.10 we need to use the backport of entry points
    from importlib_metadata import entry_points
else:
    from importlib.metadata import entry_points


def load_plugin(plugin_group: str, name: str) -> Any:
    """Loads plugin using entry point specification.

@@ -17,16 +23,3 @@ def load_plugin(plugin_group: str, name: str) -> Any:
        return None
    plugin = plugins[name]
    return plugin.load()


def load_all(group: str) -> Any:
    """Loads all plugins using entry point specification.

    Args:
        group: Name of plugin group.

    Returns:
        Classes from specified group.
    """
    plugins = entry_points(group=group)
    return [plugin.load() for plugin in plugins]
|
||||
|
|
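For context, this is roughly how the plugin loader above is meant to be used. The membership check is a guess, since that part of `load_plugin` is collapsed in the diff, and the entry-point group and plugin name in the usage comment are hypothetical:

```python
import sys

if sys.version_info < (3, 10):
    from importlib_metadata import entry_points
else:
    from importlib.metadata import entry_points


def load_plugin(plugin_group: str, name: str):
    plugins = entry_points(group=plugin_group)
    if name not in plugins.names:  # guessed check; the collapsed original may differ
        return None
    plugin = plugins[name]
    return plugin.load()


# Hypothetical usage: load a driver that another package registered under an
# entry-point group (declared in its pyproject.toml / setup.cfg):
# driver_cls = load_plugin("frostfs.testlib.hosting", "docker")
```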
|
@ -8,40 +8,28 @@ from tenacity import retry
|
|||
from tenacity.stop import stop_after_attempt
|
||||
from tenacity.wait import wait_fixed
|
||||
|
||||
from frostfs_testlib import reporter
|
||||
from frostfs_testlib.reporter import get_reporter
|
||||
from frostfs_testlib.shell import Shell
|
||||
from frostfs_testlib.shell.command_inspectors import SuInspector
|
||||
from frostfs_testlib.shell.interfaces import CommandInspector, CommandOptions
|
||||
from frostfs_testlib.shell.interfaces import CommandOptions
|
||||
|
||||
reporter = get_reporter()
|
||||
|
||||
|
||||
class RemoteProcess:
|
||||
def __init__(
|
||||
self, cmd: str, process_dir: str, shell: Shell, cmd_inspector: Optional[CommandInspector], proc_id: str
|
||||
):
|
||||
def __init__(self, cmd: str, process_dir: str, shell: Shell):
|
||||
self.process_dir = process_dir
|
||||
self.cmd = cmd
|
||||
self.stdout_last_line_number = 0
|
||||
self.stderr_last_line_number = 0
|
||||
self.pid: Optional[str] = None
|
||||
self.proc_rc: Optional[int] = None
|
||||
self.proc_start_time: Optional[int] = None
|
||||
self.proc_end_time: Optional[int] = None
|
||||
self.saved_stdout: Optional[str] = None
|
||||
self.saved_stderr: Optional[str] = None
|
||||
self.shell = shell
|
||||
self.proc_id: str = proc_id
|
||||
self.cmd_inspectors: list[CommandInspector] = [cmd_inspector] if cmd_inspector else []
|
||||
|
||||
@classmethod
|
||||
@reporter.step("Create remote process")
|
||||
def create(
|
||||
cls,
|
||||
command: str,
|
||||
shell: Shell,
|
||||
working_dir: str = "/tmp",
|
||||
user: Optional[str] = None,
|
||||
proc_id: Optional[str] = None,
|
||||
) -> RemoteProcess:
|
||||
@reporter.step_deco("Create remote process")
|
||||
def create(cls, command: str, shell: Shell, working_dir: str = "/tmp") -> RemoteProcess:
|
||||
"""
|
||||
Create a process on a remote host.
|
||||
|
||||
|
@ -51,8 +39,6 @@ class RemoteProcess:
|
|||
rc: contains script return code
|
||||
stderr: contains script errors
|
||||
stdout: contains script output
|
||||
user: user on behalf whom command will be executed
|
||||
proc_id: process string identificator
|
||||
|
||||
Args:
|
||||
shell: Shell instance
|
||||
|
@ -62,32 +48,16 @@ class RemoteProcess:
|
|||
Returns:
|
||||
RemoteProcess instance for further examination
|
||||
"""
|
||||
if proc_id is None:
|
||||
proc_id = f"{uuid.uuid4()}"
|
||||
|
||||
cmd_inspector = SuInspector(user) if user else None
|
||||
remote_process = cls(
|
||||
cmd=command,
|
||||
process_dir=os.path.join(working_dir, f"proc_{proc_id}"),
|
||||
shell=shell,
|
||||
cmd_inspector=cmd_inspector,
|
||||
proc_id=proc_id,
|
||||
cmd=command, process_dir=os.path.join(working_dir, f"proc_{uuid.uuid4()}"), shell=shell
|
||||
)
|
||||
|
||||
remote_process._create_process_dir()
|
||||
remote_process._generate_command_script(command)
|
||||
remote_process._start_process()
|
||||
remote_process.pid = remote_process._get_pid()
|
||||
return remote_process
|
||||
|
||||
@reporter.step("Start remote process")
|
||||
def start(self):
|
||||
"""
|
||||
Starts a process on a remote host.
|
||||
"""
|
||||
|
||||
self._create_process_dir()
|
||||
self._generate_command_script()
|
||||
self._start_process()
|
||||
self.pid = self._get_pid()
|
||||
|
||||
@reporter.step("Get process stdout")
|
||||
@reporter.step_deco("Get process stdout")
|
||||
def stdout(self, full: bool = False) -> str:
|
||||
"""
|
||||
Method to get process stdout, either fresh info or full.
|
||||
|
@ -103,8 +73,7 @@ class RemoteProcess:
|
|||
cur_stdout = self.saved_stdout
|
||||
else:
|
||||
terminal = self.shell.exec(
|
||||
f"cat {self.process_dir}/stdout",
|
||||
options=CommandOptions(no_log=True, extra_inspectors=self.cmd_inspectors),
|
||||
f"cat {self.process_dir}/stdout", options=CommandOptions(no_log=True)
|
||||
)
|
||||
if self.proc_rc is not None:
|
||||
self.saved_stdout = terminal.stdout
|
||||
|
@ -119,7 +88,7 @@ class RemoteProcess:
|
|||
return resulted_stdout
|
||||
return ""
|
||||
|
||||
@reporter.step("Get process stderr")
|
||||
@reporter.step_deco("Get process stderr")
|
||||
def stderr(self, full: bool = False) -> str:
|
||||
"""
|
||||
Method to get process stderr, either fresh info or full.
|
||||
|
@ -135,8 +104,7 @@ class RemoteProcess:
|
|||
cur_stderr = self.saved_stderr
|
||||
else:
|
||||
terminal = self.shell.exec(
|
||||
f"cat {self.process_dir}/stderr",
|
||||
options=CommandOptions(no_log=True, extra_inspectors=self.cmd_inspectors),
|
||||
f"cat {self.process_dir}/stderr", options=CommandOptions(no_log=True)
|
||||
)
|
||||
if self.proc_rc is not None:
|
||||
self.saved_stderr = terminal.stdout
|
||||
|
@ -150,131 +118,84 @@ class RemoteProcess:
|
|||
return resulted_stderr
|
||||
return ""
|
||||
|
||||
@reporter.step("Get process rc")
|
||||
@reporter.step_deco("Get process rc")
|
||||
def rc(self) -> Optional[int]:
|
||||
if self.proc_rc is not None:
|
||||
return self.proc_rc
|
||||
|
||||
result = self._cat_proc_file("rc")
|
||||
if not result:
|
||||
return None
|
||||
|
||||
self.proc_rc = int(result)
|
||||
return self.proc_rc
|
||||
|
||||
@reporter.step("Get process start time")
|
||||
def start_time(self) -> Optional[int]:
|
||||
if self.proc_start_time is not None:
|
||||
return self.proc_start_time
|
||||
|
||||
result = self._cat_proc_file("start_time")
|
||||
if not result:
|
||||
return None
|
||||
|
||||
self.proc_start_time = int(result)
|
||||
return self.proc_start_time
|
||||
|
||||
@reporter.step("Get process end time")
|
||||
def end_time(self) -> Optional[int]:
|
||||
if self.proc_end_time is not None:
|
||||
return self.proc_end_time
|
||||
|
||||
result = self._cat_proc_file("end_time")
|
||||
if not result:
|
||||
return None
|
||||
|
||||
self.proc_end_time = int(result)
|
||||
return self.proc_end_time
|
||||
|
||||
def _cat_proc_file(self, file: str) -> Optional[str]:
|
||||
terminal = self.shell.exec(
|
||||
f"cat {self.process_dir}/{file}",
|
||||
CommandOptions(check=False, extra_inspectors=self.cmd_inspectors, no_log=True),
|
||||
)
|
||||
terminal = self.shell.exec(f"cat {self.process_dir}/rc", CommandOptions(check=False))
|
||||
if "No such file or directory" in terminal.stderr:
|
||||
return None
|
||||
elif terminal.stderr or terminal.return_code != 0:
|
||||
raise AssertionError(f"cat process {file} was not successful: {terminal.stderr}")
|
||||
raise AssertionError(f"cat process rc was not successful: {terminal.stderr}")
|
||||
|
||||
return terminal.stdout
|
||||
self.proc_rc = int(terminal.stdout)
|
||||
return self.proc_rc
|
||||
|
||||
@reporter.step("Check if process is running")
|
||||
@reporter.step_deco("Check if process is running")
|
||||
def running(self) -> bool:
|
||||
return self.rc() is None
|
||||
|
||||
@reporter.step("Send signal to process")
|
||||
@reporter.step_deco("Send signal to process")
|
||||
def send_signal(self, signal: int) -> None:
|
||||
kill_res = self.shell.exec(
|
||||
f"kill -{signal} {self.pid}",
|
||||
CommandOptions(check=False, extra_inspectors=self.cmd_inspectors),
|
||||
)
|
||||
kill_res = self.shell.exec(f"kill -{signal} {self.pid}", CommandOptions(check=False))
|
||||
if "No such process" in kill_res.stderr:
|
||||
return
|
||||
if kill_res.return_code:
|
||||
raise AssertionError(f"Signal {signal} not sent. Return code of kill: {kill_res.return_code}")
|
||||
raise AssertionError(
|
||||
f"Signal {signal} not sent. Return code of kill: {kill_res.return_code}"
|
||||
)
|
||||
|
||||
@reporter.step("Stop process")
|
||||
@reporter.step_deco("Stop process")
|
||||
def stop(self) -> None:
|
||||
self.send_signal(15)
|
||||
|
||||
@reporter.step("Kill process")
|
||||
@reporter.step_deco("Kill process")
|
||||
def kill(self) -> None:
|
||||
self.send_signal(9)
|
||||
|
||||
@reporter.step("Clear process directory")
|
||||
@reporter.step_deco("Clear process directory")
|
||||
def clear(self) -> None:
|
||||
if self.process_dir == "/":
|
||||
raise AssertionError(f"Invalid path to delete: {self.process_dir}")
|
||||
self.shell.exec(f"rm -rf {self.process_dir}", CommandOptions(extra_inspectors=self.cmd_inspectors))
|
||||
self.shell.exec(f"rm -rf {self.process_dir}")
|
||||
|
||||
@reporter.step("Start remote process")
|
||||
@reporter.step_deco("Start remote process")
|
||||
def _start_process(self) -> None:
|
||||
self.shell.exec(
|
||||
f"nohup {self.process_dir}/command.sh </dev/null "
|
||||
f">{self.process_dir}/stdout "
|
||||
f"2>{self.process_dir}/stderr &",
|
||||
CommandOptions(extra_inspectors=self.cmd_inspectors),
|
||||
f"2>{self.process_dir}/stderr &"
|
||||
)
|
||||
|
||||
@reporter.step("Create process directory")
|
||||
@reporter.step_deco("Create process directory")
|
||||
def _create_process_dir(self) -> None:
|
||||
self.shell.exec(f"mkdir -p {self.process_dir}", CommandOptions(extra_inspectors=self.cmd_inspectors))
|
||||
self.shell.exec(f"chmod 777 {self.process_dir}", CommandOptions(extra_inspectors=self.cmd_inspectors))
|
||||
terminal = self.shell.exec(f"realpath {self.process_dir}", CommandOptions(extra_inspectors=self.cmd_inspectors))
|
||||
self.shell.exec(f"mkdir {self.process_dir}")
|
||||
self.shell.exec(f"chmod 777 {self.process_dir}")
|
||||
terminal = self.shell.exec(f"realpath {self.process_dir}")
|
||||
self.process_dir = terminal.stdout.strip()
|
||||
|
||||
@reporter.step("Get pid")
|
||||
@reporter.step_deco("Get pid")
|
||||
@retry(wait=wait_fixed(10), stop=stop_after_attempt(5), reraise=True)
|
||||
def _get_pid(self) -> str:
|
||||
terminal = self.shell.exec(f"cat {self.process_dir}/pid", CommandOptions(extra_inspectors=self.cmd_inspectors))
|
||||
terminal = self.shell.exec(f"cat {self.process_dir}/pid")
|
||||
assert terminal.stdout, f"invalid pid: {terminal.stdout}"
|
||||
return terminal.stdout.strip()
|
||||
|
||||
@reporter.step("Generate command script")
|
||||
def _generate_command_script(self) -> None:
|
||||
command = self.cmd.replace('"', '\\"').replace("\\", "\\\\")
|
||||
@reporter.step_deco("Generate command script")
|
||||
def _generate_command_script(self, command: str) -> None:
|
||||
command = command.replace('"', '\\"').replace("\\", "\\\\")
|
||||
script = (
|
||||
f"#!/bin/bash\n"
|
||||
f"cd {self.process_dir}\n"
|
||||
f"date +%s > {self.process_dir}/start_time\n"
|
||||
f"{command} &\n"
|
||||
f"pid=\$!\n"
|
||||
f"cd {self.process_dir}\n"
|
||||
f"echo \$pid > {self.process_dir}/pid\n"
|
||||
f"wait \$pid\n"
|
||||
f"echo $? > {self.process_dir}/rc\n"
|
||||
f"date +%s > {self.process_dir}/end_time\n"
|
||||
f"echo $? > {self.process_dir}/rc"
|
||||
)
|
||||
|
||||
self.shell.exec(
|
||||
f'echo "{script}" > {self.process_dir}/command.sh',
|
||||
CommandOptions(extra_inspectors=self.cmd_inspectors),
|
||||
)
|
||||
self.shell.exec(
|
||||
f"cat {self.process_dir}/command.sh",
|
||||
CommandOptions(extra_inspectors=self.cmd_inspectors),
|
||||
)
|
||||
self.shell.exec(
|
||||
f"chmod +x {self.process_dir}/command.sh",
|
||||
CommandOptions(extra_inspectors=self.cmd_inspectors),
|
||||
)
|
||||
self.shell.exec(f'echo "{script}" > {self.process_dir}/command.sh')
|
||||
self.shell.exec(f"cat {self.process_dir}/command.sh")
|
||||
self.shell.exec(f"chmod +x {self.process_dir}/command.sh")
|
||||
|
|
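The newer `_generate_command_script` above wraps the user command in a small bash launcher that records start time, pid, return code and end time in separate files under the process directory. A sketch of what gets written for a concrete command; `process_dir` and the command are made up, and `\\$` is written so the output contains `\$`, which keeps the later `echo "..."` round-trip through the shell from expanding `$!`/`$pid`:

```python
process_dir = "/tmp/proc_1234"  # illustrative path
command = "sleep 5"             # illustrative command

script = (
    f"#!/bin/bash\n"
    f"cd {process_dir}\n"
    f"date +%s > {process_dir}/start_time\n"
    f"{command} &\n"
    f"pid=\\$!\n"
    f"cd {process_dir}\n"
    f"echo \\$pid > {process_dir}/pid\n"
    f"wait \\$pid\n"
    f"echo $? > {process_dir}/rc\n"
    f"date +%s > {process_dir}/end_time\n"
)
print(script)  # this text is what ends up in <process_dir>/command.sh
```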
|
@@ -1,9 +1,6 @@
from typing import Any

from frostfs_testlib.reporter.allure_handler import AllureHandler
from frostfs_testlib.reporter.interfaces import ReporterHandler
from frostfs_testlib.reporter.reporter import Reporter
from frostfs_testlib.reporter.steps_logger import StepsLogger

__reporter = Reporter()

@@ -18,11 +15,3 @@ def get_reporter() -> Reporter:
    Singleton reporter instance.
    """
    return __reporter


def step(title: str):
    return __reporter.step(title)


def attach(content: Any, file_name: str):
    return __reporter.attach(content, file_name)
|
||||
|
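On the newer side of this diff the `frostfs_testlib.reporter` package exposes module-level `step` and `attach` helpers backed by the singleton `Reporter`, which is how other files here call `reporter.step(...)`. A minimal usage sketch, assuming the testlib is installed and at least one handler (e.g. `AllureHandler`) is registered; the container id is a placeholder:

```python
from frostfs_testlib import reporter


def create_container() -> str:
    # Each `with reporter.step(...)` block appears as a nested step in every
    # registered handler (Allure report, StepsLogger, ...).
    with reporter.step("Create container"):
        container_id = "abc123"  # placeholder for the real CLI call
        reporter.attach(f"container-id: {container_id}", "container.txt")
        return container_id
```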
|
|
@ -1,5 +1,5 @@
|
|||
import os
|
||||
from contextlib import AbstractContextManager, ContextDecorator
|
||||
from contextlib import AbstractContextManager
|
||||
from textwrap import shorten
|
||||
from typing import Any, Callable
|
||||
|
||||
|
@ -12,8 +12,8 @@ from frostfs_testlib.reporter.interfaces import ReporterHandler
|
|||
class AllureHandler(ReporterHandler):
|
||||
"""Handler that stores test artifacts in Allure report."""
|
||||
|
||||
def step(self, name: str) -> AbstractContextManager | ContextDecorator:
|
||||
name = shorten(name, width=140, placeholder="...")
|
||||
def step(self, name: str) -> AbstractContextManager:
|
||||
name = shorten(name, width=70, placeholder="...")
|
||||
return allure.step(name)
|
||||
|
||||
def step_decorator(self, name: str) -> Callable:
|
||||
|
@ -21,14 +21,9 @@ class AllureHandler(ReporterHandler):
|
|||
|
||||
def attach(self, body: Any, file_name: str) -> None:
|
||||
attachment_name, extension = os.path.splitext(file_name)
|
||||
if extension.startswith("."):
|
||||
extension = extension[1:]
|
||||
attachment_type = self._resolve_attachment_type(extension)
|
||||
|
||||
if os.path.exists(body):
|
||||
allure.attach.file(body, file_name, attachment_type, extension)
|
||||
else:
|
||||
allure.attach(body, attachment_name, attachment_type, extension)
|
||||
allure.attach(body, attachment_name, attachment_type, extension)
|
||||
|
||||
def _resolve_attachment_type(self, extension: str) -> attachment_type:
|
||||
"""Try to find matching Allure attachment type by extension.
|
||||
|
|
|
@ -1,5 +1,5 @@
|
|||
from abc import ABC, abstractmethod
|
||||
from contextlib import AbstractContextManager, ContextDecorator
|
||||
from contextlib import AbstractContextManager
|
||||
from typing import Any, Callable
|
||||
|
||||
|
||||
|
@ -7,7 +7,7 @@ class ReporterHandler(ABC):
|
|||
"""Interface of handler that stores test artifacts in some reporting tool."""
|
||||
|
||||
@abstractmethod
|
||||
def step(self, name: str) -> AbstractContextManager | ContextDecorator:
|
||||
def step(self, name: str) -> AbstractContextManager:
|
||||
"""Register a new step in test execution.
|
||||
|
||||
Args:
|
||||
|
|
|
@@ -5,7 +5,6 @@ from typing import Any, Callable, Optional

from frostfs_testlib.plugins import load_plugin
from frostfs_testlib.reporter.interfaces import ReporterHandler
from frostfs_testlib.utils.func_utils import format_by_args


@contextmanager

@@ -64,8 +63,7 @@ class Reporter:
        def wrapper(*a, **kw):
            resulting_func = func
            for handler in self.handlers:
                parsed_name = format_by_args(func, name, *a, **kw)
                decorator = handler.step_decorator(parsed_name)
                decorator = handler.step_decorator(name)
                resulting_func = decorator(resulting_func)

            return resulting_func(*a, **kw)

@@ -83,11 +81,11 @@ class Reporter:
        Returns:
            Step context.
        """
        if not self.handlers:
            return _empty_step()

        step_contexts = [handler.step(name) for handler in self.handlers]
        if not step_contexts:
            step_contexts = [_empty_step()]
        decorated_wrapper = self.step_deco(name)
        return AggregateContextManager(step_contexts, decorated_wrapper)
        return AggregateContextManager(step_contexts)

    def attach(self, content: Any, file_name: str) -> None:
        """Attach specified content with given file name to the test report.

@@ -106,10 +104,9 @@ class AggregateContextManager(AbstractContextManager):

    contexts: list[AbstractContextManager]

    def __init__(self, contexts: list[AbstractContextManager], decorated_wrapper: Callable) -> None:
    def __init__(self, contexts: list[AbstractContextManager]) -> None:
        super().__init__()
        self.contexts = contexts
        self.wrapper = decorated_wrapper

    def __enter__(self):
        for context in self.contexts:

@@ -130,6 +127,3 @@ class AggregateContextManager(AbstractContextManager):
        # If all context agreed to suppress exception, then suppress it;
        # otherwise return None to reraise
        return True if all(suppress_decisions) else None

    def __call__(self, *args: Any, **kwds: Any) -> Any:
        return self.wrapper(*args, **kwds)
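The `wrapper` above folds every handler's `step_decorator` over the original function. The same chaining pattern in isolation (standalone sketch, not the library code):

```python
from typing import Callable


def chain_decorators(func: Callable, decorators: list[Callable]) -> Callable:
    """Apply each decorator in turn, innermost first, and return the result."""
    resulting_func = func
    for decorator in decorators:
        resulting_func = decorator(resulting_func)
    return resulting_func


def exclaim(func: Callable) -> Callable:
    return lambda *a, **kw: func(*a, **kw) + "!"


def shout(func: Callable) -> Callable:
    return lambda *a, **kw: str(func(*a, **kw)).upper()


greet = chain_decorators(lambda name: f"hello {name}", [exclaim, shout])
print(greet("frostfs"))  # HELLO FROSTFS!
```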
@@ -1,56 +0,0 @@
import logging
import threading
from contextlib import AbstractContextManager, ContextDecorator
from functools import wraps
from types import TracebackType
from typing import Any, Callable

from frostfs_testlib.reporter.interfaces import ReporterHandler


class StepsLogger(ReporterHandler):
    """Handler that prints steps to log."""

    def step(self, name: str) -> AbstractContextManager | ContextDecorator:
        return StepLoggerContext(name)

    def step_decorator(self, name: str) -> Callable:
        return StepLoggerContext(name)

    def attach(self, body: Any, file_name: str) -> None:
        pass


class StepLoggerContext(AbstractContextManager):
    INDENT = {}

    def __init__(self, title: str):
        self.title = title
        self.logger = logging.getLogger("NeoLogger")
        self.thread = threading.get_ident()
        if self.thread not in StepLoggerContext.INDENT:
            StepLoggerContext.INDENT[self.thread] = 1

    def __enter__(self) -> Any:
        indent = ">" * StepLoggerContext.INDENT[self.thread]
        self.logger.info(f"[{self.thread}] {indent} {self.title}")
        StepLoggerContext.INDENT[self.thread] += 1

    def __exit__(
        self,
        __exc_type: type[BaseException] | None,
        __exc_value: BaseException | None,
        __traceback: TracebackType | None,
    ) -> bool | None:

        StepLoggerContext.INDENT[self.thread] -= 1
        indent = "<" * StepLoggerContext.INDENT[self.thread]
        self.logger.info(f"[{self.thread}] {indent} {self.title}")

    def __call__(self, func):
        @wraps(func)
        def impl(*a, **kw):
            with self:
                return func(*a, **kw)

        return impl
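The deleted `StepLoggerContext` doubles as a context manager and a decorator and keeps a per-thread nesting depth, so nested steps come out indented in the log. A usage sketch against the master side of the diff (step names are made up; the thread id in the output will vary):

```python
import logging

from frostfs_testlib.reporter.steps_logger import StepsLogger

logging.basicConfig(level=logging.INFO)
handler = StepsLogger()

with handler.step("Outer step"):
    with handler.step("Inner step"):
        pass

# Expected NeoLogger output:
#   [<thread-id>] > Outer step
#   [<thread-id>] >> Inner step
#   [<thread-id>] << Inner step
#   [<thread-id>] < Outer step
```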
@@ -10,8 +10,6 @@ COMPLEX_OBJECT_TAIL_SIZE = os.getenv("COMPLEX_OBJECT_TAIL_SIZE", "1000")

SERVICE_MAX_STARTUP_TIME = os.getenv("SERVICE_MAX_STARTUP_TIME", "5m")

STORAGE_USER_NAME = "frostfs-storage"

MORPH_TIMEOUT = os.getenv("MORPH_BLOCK_TIME", "8s")
MORPH_BLOCK_TIME = os.getenv("MORPH_BLOCK_TIME", "8s")
FROSTFS_CONTRACT_CACHE_TIMEOUT = os.getenv("FROSTFS_CONTRACT_CACHE_TIMEOUT", "30s")

@@ -43,6 +41,6 @@ with open(DEFAULT_WALLET_CONFIG, "w") as file:

# Number of attempts that S3 clients will attempt per each request (1 means single attempt
# without any retries)
MAX_REQUEST_ATTEMPTS = 5
MAX_REQUEST_ATTEMPTS = 1
RETRY_MODE = "standard"
CREDENTIALS_CREATE_TIMEOUT = "1m"
@@ -11,9 +11,8 @@ BACKGROUND_WRITERS_COUNT = os.getenv("BACKGROUND_WRITERS_COUNT", 0)
BACKGROUND_READERS_COUNT = os.getenv("BACKGROUND_READERS_COUNT", 0)
BACKGROUND_DELETERS_COUNT = os.getenv("BACKGROUND_DELETERS_COUNT", 0)
BACKGROUND_VERIFIERS_COUNT = os.getenv("BACKGROUND_VERIFIERS_COUNT", 0)
BACKGROUND_LOAD_DEFAULT_TIME = os.getenv("BACKGROUND_LOAD_DEFAULT_TIME", 1800)
BACKGROUND_LOAD_DEFAULT_TIME = os.getenv("BACKGROUND_LOAD_DEFAULT_TIME", 600)
BACKGROUND_LOAD_DEFAULT_OBJECT_SIZE = os.getenv("BACKGROUND_LOAD_DEFAULT_OBJECT_SIZE", 32)
BACKGROUND_LOAD_DEFAULT_VU_INIT_TIME = float(os.getenv("BACKGROUND_LOAD_DEFAULT_VU_INIT_TIME", 0.8))
BACKGROUND_LOAD_SETUP_TIMEOUT = os.getenv("BACKGROUND_LOAD_SETUP_TIMEOUT", "5s")

# This will decrease load params for some weak environments

@@ -27,7 +26,7 @@ BACKGROUND_LOAD_CONTAINER_PLACEMENT_POLICY = os.getenv(
BACKGROUND_LOAD_S3_LOCATION = os.getenv("BACKGROUND_LOAD_S3_LOCATION", "node-off")
PRESET_CONTAINERS_COUNT = os.getenv("CONTAINERS_COUNT", "40")
# TODO: At lease one object is required due to bug in xk6 (buckets with no objects produce millions exceptions in read)
PRESET_OBJECTS_COUNT = os.getenv("OBJ_COUNT", "1")
PRESET_OBJECTS_COUNT = os.getenv("OBJ_COUNT", "10")
K6_DIRECTORY = os.getenv("K6_DIRECTORY", "/etc/k6")
K6_TEARDOWN_PERIOD = os.getenv("K6_TEARDOWN_PERIOD", "30")
K6_STOP_SIGNAL_TIMEOUT = int(os.getenv("K6_STOP_SIGNAL_TIMEOUT", 300))
@@ -6,48 +6,40 @@ from datetime import datetime
from time import sleep
from typing import Literal, Optional, Union

from frostfs_testlib import reporter
from frostfs_testlib.resources.common import ASSETS_DIR, MAX_REQUEST_ATTEMPTS, RETRY_MODE, S3_SYNC_WAIT_TIME
from frostfs_testlib.reporter import get_reporter
from frostfs_testlib.resources.common import (
    ASSETS_DIR,
    MAX_REQUEST_ATTEMPTS,
    RETRY_MODE,
    S3_SYNC_WAIT_TIME,
)
from frostfs_testlib.s3.interfaces import S3ClientWrapper, VersioningStatus, _make_objs_dict
from frostfs_testlib.shell import CommandOptions
from frostfs_testlib.shell.local_shell import LocalShell

# TODO: Refactor this code to use shell instead of _cmd_run
from frostfs_testlib.utils.cli_utils import _configure_aws_cli
from frostfs_testlib.utils.cli_utils import _cmd_run, _configure_aws_cli

reporter = get_reporter()
logger = logging.getLogger("NeoLogger")
command_options = CommandOptions(timeout=480)
LONG_TIMEOUT = 240


class AwsCliClient(S3ClientWrapper):
    __repr_name__: str = "AWS CLI"

    # Flags that we use for all S3 commands: disable SSL verification (as we use self-signed
    # certificate in devenv) and disable automatic pagination in CLI output
    common_flags = "--no-verify-ssl --no-paginate"
    s3gate_endpoint: str

    @reporter.step("Configure S3 client (aws cli)")
    def __init__(
        self, access_key_id: str, secret_access_key: str, s3gate_endpoint: str, profile: str = "default"
    ) -> None:
    @reporter.step_deco("Configure S3 client (aws cli)")
    def __init__(self, access_key_id: str, secret_access_key: str, s3gate_endpoint: str) -> None:
        self.s3gate_endpoint = s3gate_endpoint
        self.profile = profile
        self.local_shell = LocalShell()
        try:
            _configure_aws_cli(f"aws configure --profile {profile}", access_key_id, secret_access_key)
            self.local_shell.exec(f"aws configure set max_attempts {MAX_REQUEST_ATTEMPTS} --profile {profile}")
            self.local_shell.exec(
                f"aws configure set retry_mode {RETRY_MODE} --profile {profile}",
            )
            _configure_aws_cli("aws configure", access_key_id, secret_access_key)
            _cmd_run(f"aws configure set max_attempts {MAX_REQUEST_ATTEMPTS}")
            _cmd_run(f"aws configure set retry_mode {RETRY_MODE}")
        except Exception as err:
            raise RuntimeError("Error while configuring AwsCliClient") from err

    @reporter.step("Set endpoint S3 to {s3gate_endpoint}")
    def set_endpoint(self, s3gate_endpoint: str):
        self.s3gate_endpoint = s3gate_endpoint

    @reporter.step("Create bucket S3")
    @reporter.step_deco("Create bucket S3")
    def create_bucket(
        self,
        bucket: Optional[str] = None,

@@ -69,7 +61,7 @@ class AwsCliClient(S3ClientWrapper):
            object_lock = " --no-object-lock-enabled-for-bucket"
        cmd = (
            f"aws {self.common_flags} s3api create-bucket --bucket {bucket} "
            f"{object_lock} --endpoint {self.s3gate_endpoint} --profile {self.profile}"
            f"{object_lock} --endpoint {self.s3gate_endpoint}"
        )
        if acl:
            cmd += f" --acl {acl}"
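On the master side every S3 call is assembled as a plain `aws s3api ...` string with `--endpoint` and `--profile` appended and executed through `LocalShell`; the branch builds the same strings without `--profile` and runs them via `_cmd_run`. A reduced sketch of the string-building step (endpoint, bucket, and profile values are placeholders):

```python
common_flags = "--no-verify-ssl --no-paginate"
s3gate_endpoint = "https://s3.dev.example:8080"  # placeholder gateway endpoint
profile = "default"                              # placeholder profile name
bucket = "test-bucket"
object_lock = " --no-object-lock-enabled-for-bucket"

cmd = (
    f"aws {common_flags} s3api create-bucket --bucket {bucket} "
    f"{object_lock} --endpoint {s3gate_endpoint} --profile {profile}"
)
print(cmd)
# aws --no-verify-ssl --no-paginate s3api create-bucket --bucket test-bucket
#   --no-object-lock-enabled-for-bucket --endpoint https://s3.dev.example:8080 --profile default
# (output wrapped here for readability)
```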
|
@ -81,94 +73,96 @@ class AwsCliClient(S3ClientWrapper):
|
|||
cmd += f" --grant-read {grant_read}"
|
||||
if location_constraint:
|
||||
cmd += f" --create-bucket-configuration LocationConstraint={location_constraint}"
|
||||
self.local_shell.exec(cmd)
|
||||
_cmd_run(cmd)
|
||||
sleep(S3_SYNC_WAIT_TIME)
|
||||
|
||||
return bucket
|
||||
|
||||
@reporter.step("List buckets S3")
|
||||
@reporter.step_deco("List buckets S3")
|
||||
def list_buckets(self) -> list[str]:
|
||||
cmd = f"aws {self.common_flags} s3api list-buckets --endpoint {self.s3gate_endpoint} --profile {self.profile}"
|
||||
output = self.local_shell.exec(cmd).stdout
|
||||
cmd = f"aws {self.common_flags} s3api list-buckets --endpoint {self.s3gate_endpoint}"
|
||||
output = _cmd_run(cmd)
|
||||
buckets_json = self._to_json(output)
|
||||
return [bucket["Name"] for bucket in buckets_json["Buckets"]]
|
||||
|
||||
@reporter.step("Delete bucket S3")
|
||||
@reporter.step_deco("Delete bucket S3")
|
||||
def delete_bucket(self, bucket: str) -> None:
|
||||
cmd = f"aws {self.common_flags} s3api delete-bucket --bucket {bucket} --endpoint {self.s3gate_endpoint} --profile {self.profile}"
|
||||
self.local_shell.exec(cmd, command_options)
|
||||
cmd = f"aws {self.common_flags} s3api delete-bucket --bucket {bucket} --endpoint {self.s3gate_endpoint}"
|
||||
_cmd_run(cmd, LONG_TIMEOUT)
|
||||
sleep(S3_SYNC_WAIT_TIME)
|
||||
|
||||
@reporter.step("Head bucket S3")
|
||||
@reporter.step_deco("Head bucket S3")
|
||||
def head_bucket(self, bucket: str) -> None:
|
||||
cmd = f"aws {self.common_flags} s3api head-bucket --bucket {bucket} --endpoint {self.s3gate_endpoint} --profile {self.profile}"
|
||||
self.local_shell.exec(cmd)
|
||||
cmd = f"aws {self.common_flags} s3api head-bucket --bucket {bucket} --endpoint {self.s3gate_endpoint}"
|
||||
_cmd_run(cmd)
|
||||
|
||||
@reporter.step("Put bucket versioning status")
|
||||
@reporter.step_deco("Put bucket versioning status")
|
||||
def put_bucket_versioning(self, bucket: str, status: VersioningStatus) -> None:
|
||||
cmd = (
|
||||
f"aws {self.common_flags} s3api put-bucket-versioning --bucket {bucket} "
|
||||
f"--versioning-configuration Status={status.value} "
|
||||
f"--endpoint {self.s3gate_endpoint} --profile {self.profile}"
|
||||
f"--endpoint {self.s3gate_endpoint}"
|
||||
)
|
||||
self.local_shell.exec(cmd)
|
||||
_cmd_run(cmd)
|
||||
|
||||
@reporter.step("Get bucket versioning status")
|
||||
@reporter.step_deco("Get bucket versioning status")
|
||||
def get_bucket_versioning_status(self, bucket: str) -> Literal["Enabled", "Suspended"]:
|
||||
cmd = (
|
||||
f"aws {self.common_flags} s3api get-bucket-versioning --bucket {bucket} "
|
||||
f"--endpoint {self.s3gate_endpoint} --profile {self.profile}"
|
||||
f"--endpoint {self.s3gate_endpoint}"
|
||||
)
|
||||
output = self.local_shell.exec(cmd).stdout
|
||||
output = _cmd_run(cmd)
|
||||
response = self._to_json(output)
|
||||
return response.get("Status")
|
||||
|
||||
@reporter.step("Put bucket tagging")
|
||||
@reporter.step_deco("Put bucket tagging")
|
||||
def put_bucket_tagging(self, bucket: str, tags: list) -> None:
|
||||
tags_json = {"TagSet": [{"Key": tag_key, "Value": tag_value} for tag_key, tag_value in tags]}
|
||||
tags_json = {
|
||||
"TagSet": [{"Key": tag_key, "Value": tag_value} for tag_key, tag_value in tags]
|
||||
}
|
||||
cmd = (
|
||||
f"aws {self.common_flags} s3api put-bucket-tagging --bucket {bucket} "
|
||||
f"--tagging '{json.dumps(tags_json)}' --endpoint {self.s3gate_endpoint} --profile {self.profile}"
|
||||
f"--tagging '{json.dumps(tags_json)}' --endpoint {self.s3gate_endpoint}"
|
||||
)
|
||||
self.local_shell.exec(cmd)
|
||||
_cmd_run(cmd)
|
||||
|
||||
@reporter.step("Get bucket tagging")
|
||||
@reporter.step_deco("Get bucket tagging")
|
||||
def get_bucket_tagging(self, bucket: str) -> list:
|
||||
cmd = (
|
||||
f"aws {self.common_flags} s3api get-bucket-tagging --bucket {bucket} "
|
||||
f"--endpoint {self.s3gate_endpoint} --profile {self.profile}"
|
||||
f"--endpoint {self.s3gate_endpoint}"
|
||||
)
|
||||
output = self.local_shell.exec(cmd).stdout
|
||||
output = _cmd_run(cmd)
|
||||
response = self._to_json(output)
|
||||
return response.get("TagSet")
|
||||
|
||||
@reporter.step("Get bucket acl")
|
||||
@reporter.step_deco("Get bucket acl")
|
||||
def get_bucket_acl(self, bucket: str) -> list:
|
||||
cmd = (
|
||||
f"aws {self.common_flags} s3api get-bucket-acl --bucket {bucket} "
|
||||
f"--endpoint {self.s3gate_endpoint} --profile {self.profile}"
|
||||
f"--endpoint {self.s3gate_endpoint}"
|
||||
)
|
||||
output = self.local_shell.exec(cmd).stdout
|
||||
output = _cmd_run(cmd)
|
||||
response = self._to_json(output)
|
||||
return response.get("Grants")
|
||||
|
||||
@reporter.step("Get bucket location")
|
||||
@reporter.step_deco("Get bucket location")
|
||||
def get_bucket_location(self, bucket: str) -> dict:
|
||||
cmd = (
|
||||
f"aws {self.common_flags} s3api get-bucket-location --bucket {bucket} "
|
||||
f"--endpoint {self.s3gate_endpoint} --profile {self.profile}"
|
||||
f"--endpoint {self.s3gate_endpoint}"
|
||||
)
|
||||
output = self.local_shell.exec(cmd).stdout
|
||||
output = _cmd_run(cmd)
|
||||
response = self._to_json(output)
|
||||
return response.get("LocationConstraint")
|
||||
|
||||
@reporter.step("List objects S3")
|
||||
@reporter.step_deco("List objects S3")
|
||||
def list_objects(self, bucket: str, full_output: bool = False) -> Union[dict, list[str]]:
|
||||
cmd = (
|
||||
f"aws {self.common_flags} s3api list-objects --bucket {bucket} "
|
||||
f"--endpoint {self.s3gate_endpoint} --profile {self.profile}"
|
||||
f"--endpoint {self.s3gate_endpoint}"
|
||||
)
|
||||
output = self.local_shell.exec(cmd).stdout
|
||||
output = _cmd_run(cmd)
|
||||
response = self._to_json(output)
|
||||
|
||||
obj_list = [obj["Key"] for obj in response.get("Contents", [])]
|
||||
|
@ -176,13 +170,13 @@ class AwsCliClient(S3ClientWrapper):
|
|||
|
||||
return response if full_output else obj_list
|
||||
|
||||
@reporter.step("List objects S3 v2")
|
||||
@reporter.step_deco("List objects S3 v2")
|
||||
def list_objects_v2(self, bucket: str, full_output: bool = False) -> Union[dict, list[str]]:
|
||||
cmd = (
|
||||
f"aws {self.common_flags} s3api list-objects-v2 --bucket {bucket} "
|
||||
f"--endpoint {self.s3gate_endpoint} --profile {self.profile}"
|
||||
f"--endpoint {self.s3gate_endpoint}"
|
||||
)
|
||||
output = self.local_shell.exec(cmd).stdout
|
||||
output = _cmd_run(cmd)
|
||||
response = self._to_json(output)
|
||||
|
||||
obj_list = [obj["Key"] for obj in response.get("Contents", [])]
|
||||
|
@ -190,27 +184,27 @@ class AwsCliClient(S3ClientWrapper):
|
|||
|
||||
return response if full_output else obj_list
|
||||
|
||||
@reporter.step("List objects versions S3")
|
||||
@reporter.step_deco("List objects versions S3")
|
||||
def list_objects_versions(self, bucket: str, full_output: bool = False) -> dict:
|
||||
cmd = (
|
||||
f"aws {self.common_flags} s3api list-object-versions --bucket {bucket} "
|
||||
f"--endpoint {self.s3gate_endpoint} --profile {self.profile}"
|
||||
f"--endpoint {self.s3gate_endpoint}"
|
||||
)
|
||||
output = self.local_shell.exec(cmd).stdout
|
||||
output = _cmd_run(cmd)
|
||||
response = self._to_json(output)
|
||||
return response if full_output else response.get("Versions", [])
|
||||
|
||||
@reporter.step("List objects delete markers S3")
|
||||
@reporter.step_deco("List objects delete markers S3")
|
||||
def list_delete_markers(self, bucket: str, full_output: bool = False) -> list:
|
||||
cmd = (
|
||||
f"aws {self.common_flags} s3api list-object-versions --bucket {bucket} "
|
||||
f"--endpoint {self.s3gate_endpoint} --profile {self.profile}"
|
||||
f"--endpoint {self.s3gate_endpoint}"
|
||||
)
|
||||
output = self.local_shell.exec(cmd).stdout
|
||||
output = _cmd_run(cmd)
|
||||
response = self._to_json(output)
|
||||
return response if full_output else response.get("DeleteMarkers", [])
|
||||
|
||||
@reporter.step("Copy object S3")
|
||||
@reporter.step_deco("Copy object S3")
|
||||
def copy_object(
|
||||
self,
|
||||
source_bucket: str,
|
||||
|
@ -231,7 +225,7 @@ class AwsCliClient(S3ClientWrapper):
|
|||
|
||||
cmd = (
|
||||
f"aws {self.common_flags} s3api copy-object --copy-source {copy_source} "
|
||||
f"--bucket {bucket} --key {key} --endpoint {self.s3gate_endpoint} --profile {self.profile}"
|
||||
f"--bucket {bucket} --key {key} --endpoint {self.s3gate_endpoint}"
|
||||
)
|
||||
if acl:
|
||||
cmd += f" --acl {acl}"
|
||||
|
@ -245,10 +239,10 @@ class AwsCliClient(S3ClientWrapper):
|
|||
cmd += f" --tagging-directive {tagging_directive}"
|
||||
if tagging:
|
||||
cmd += f" --tagging {tagging}"
|
||||
self.local_shell.exec(cmd, command_options)
|
||||
_cmd_run(cmd, LONG_TIMEOUT)
|
||||
return key
|
||||
|
||||
@reporter.step("Put object S3")
|
||||
@reporter.step_deco("Put object S3")
|
||||
def put_object(
|
||||
self,
|
||||
bucket: str,
|
||||
|
@ -268,7 +262,7 @@ class AwsCliClient(S3ClientWrapper):
|
|||
|
||||
cmd = (
|
||||
f"aws {self.common_flags} s3api put-object --bucket {bucket} --key {key} "
|
||||
f"--body {filepath} --endpoint {self.s3gate_endpoint} --profile {self.profile}"
|
||||
f"--body {filepath} --endpoint {self.s3gate_endpoint}"
|
||||
)
|
||||
if metadata:
|
||||
cmd += " --metadata"
|
||||
|
@ -288,22 +282,22 @@ class AwsCliClient(S3ClientWrapper):
|
|||
cmd += f" --grant-full-control '{grant_full_control}'"
|
||||
if grant_read:
|
||||
cmd += f" --grant-read {grant_read}"
|
||||
output = self.local_shell.exec(cmd, command_options).stdout
|
||||
output = _cmd_run(cmd, LONG_TIMEOUT)
|
||||
response = self._to_json(output)
|
||||
return response.get("VersionId")
|
||||
|
||||
@reporter.step("Head object S3")
|
||||
@reporter.step_deco("Head object S3")
|
||||
def head_object(self, bucket: str, key: str, version_id: Optional[str] = None) -> dict:
|
||||
version = f" --version-id {version_id}" if version_id else ""
|
||||
cmd = (
|
||||
f"aws {self.common_flags} s3api head-object --bucket {bucket} --key {key} "
|
||||
f"{version} --endpoint {self.s3gate_endpoint} --profile {self.profile}"
|
||||
f"{version} --endpoint {self.s3gate_endpoint}"
|
||||
)
|
||||
output = self.local_shell.exec(cmd).stdout
|
||||
output = _cmd_run(cmd)
|
||||
response = self._to_json(output)
|
||||
return response
|
||||
|
||||
@reporter.step("Get object S3")
|
||||
@reporter.step_deco("Get object S3")
|
||||
def get_object(
|
||||
self,
|
||||
bucket: str,
|
||||
|
@ -316,26 +310,26 @@ class AwsCliClient(S3ClientWrapper):
|
|||
version = f" --version-id {version_id}" if version_id else ""
|
||||
cmd = (
|
||||
f"aws {self.common_flags} s3api get-object --bucket {bucket} --key {key} "
|
||||
f"{version} {file_path} --endpoint {self.s3gate_endpoint} --profile {self.profile}"
|
||||
f"{version} {file_path} --endpoint {self.s3gate_endpoint}"
|
||||
)
|
||||
if object_range:
|
||||
cmd += f" --range bytes={object_range[0]}-{object_range[1]}"
|
||||
output = self.local_shell.exec(cmd).stdout
|
||||
output = _cmd_run(cmd)
|
||||
response = self._to_json(output)
|
||||
return response if full_output else file_path
|
||||
|
||||
@reporter.step("Get object ACL")
|
||||
@reporter.step_deco("Get object ACL")
|
||||
def get_object_acl(self, bucket: str, key: str, version_id: Optional[str] = None) -> list:
|
||||
version = f" --version-id {version_id}" if version_id else ""
|
||||
cmd = (
|
||||
f"aws {self.common_flags} s3api get-object-acl --bucket {bucket} --key {key} "
|
||||
f"{version} --endpoint {self.s3gate_endpoint} --profile {self.profile}"
|
||||
f"{version} --endpoint {self.s3gate_endpoint}"
|
||||
)
|
||||
output = self.local_shell.exec(cmd).stdout
|
||||
output = _cmd_run(cmd)
|
||||
response = self._to_json(output)
|
||||
return response.get("Grants")
|
||||
|
||||
@reporter.step("Put object ACL")
|
||||
@reporter.step_deco("Put object ACL")
|
||||
def put_object_acl(
|
||||
self,
|
||||
bucket: str,
|
||||
|
@ -346,7 +340,7 @@ class AwsCliClient(S3ClientWrapper):
|
|||
) -> list:
|
||||
cmd = (
|
||||
f"aws {self.common_flags} s3api put-object-acl --bucket {bucket} --key {key} "
|
||||
f" --endpoint {self.s3gate_endpoint} --profile {self.profile}"
|
||||
f" --endpoint {self.s3gate_endpoint}"
|
||||
)
|
||||
if acl:
|
||||
cmd += f" --acl {acl}"
|
||||
|
@ -354,11 +348,11 @@ class AwsCliClient(S3ClientWrapper):
|
|||
cmd += f" --grant-write {grant_write}"
|
||||
if grant_read:
|
||||
cmd += f" --grant-read {grant_read}"
|
||||
output = self.local_shell.exec(cmd).stdout
|
||||
output = _cmd_run(cmd)
|
||||
response = self._to_json(output)
|
||||
return response.get("Grants")
|
||||
|
||||
@reporter.step("Put bucket ACL")
|
||||
@reporter.step_deco("Put bucket ACL")
|
||||
def put_bucket_acl(
|
||||
self,
|
||||
bucket: str,
|
||||
|
@ -368,7 +362,7 @@ class AwsCliClient(S3ClientWrapper):
|
|||
) -> None:
|
||||
cmd = (
|
||||
f"aws {self.common_flags} s3api put-bucket-acl --bucket {bucket} "
|
||||
f" --endpoint {self.s3gate_endpoint} --profile {self.profile}"
|
||||
f" --endpoint {self.s3gate_endpoint}"
|
||||
)
|
||||
if acl:
|
||||
cmd += f" --acl {acl}"
|
||||
|
@ -376,9 +370,9 @@ class AwsCliClient(S3ClientWrapper):
|
|||
cmd += f" --grant-write {grant_write}"
|
||||
if grant_read:
|
||||
cmd += f" --grant-read {grant_read}"
|
||||
self.local_shell.exec(cmd)
|
||||
_cmd_run(cmd)
|
||||
|
||||
@reporter.step("Delete objects S3")
|
||||
@reporter.step_deco("Delete objects S3")
|
||||
def delete_objects(self, bucket: str, keys: list[str]) -> dict:
|
||||
file_path = os.path.join(os.getcwd(), ASSETS_DIR, "delete.json")
|
||||
delete_structure = json.dumps(_make_objs_dict(keys))
|
||||
|
@ -388,25 +382,25 @@ class AwsCliClient(S3ClientWrapper):
|
|||
|
||||
cmd = (
|
||||
f"aws {self.common_flags} s3api delete-objects --bucket {bucket} "
|
||||
f"--delete file://{file_path} --endpoint {self.s3gate_endpoint} --profile {self.profile}"
|
||||
f"--delete file://{file_path} --endpoint {self.s3gate_endpoint}"
|
||||
)
|
||||
output = self.local_shell.exec(cmd, command_options).stdout
|
||||
output = _cmd_run(cmd, LONG_TIMEOUT)
|
||||
response = self._to_json(output)
|
||||
sleep(S3_SYNC_WAIT_TIME)
|
||||
return response
|
||||
|
||||
@reporter.step("Delete object S3")
|
||||
@reporter.step_deco("Delete object S3")
|
||||
def delete_object(self, bucket: str, key: str, version_id: Optional[str] = None) -> dict:
|
||||
version = f" --version-id {version_id}" if version_id else ""
|
||||
cmd = (
|
||||
f"aws {self.common_flags} s3api delete-object --bucket {bucket} "
|
||||
f"--key {key} {version} --endpoint {self.s3gate_endpoint} --profile {self.profile}"
|
||||
f"--key {key} {version} --endpoint {self.s3gate_endpoint}"
|
||||
)
|
||||
output = self.local_shell.exec(cmd, command_options).stdout
|
||||
output = _cmd_run(cmd, LONG_TIMEOUT)
|
||||
sleep(S3_SYNC_WAIT_TIME)
|
||||
return self._to_json(output)
|
||||
|
||||
@reporter.step("Delete object versions S3")
|
||||
@reporter.step_deco("Delete object versions S3")
|
||||
def delete_object_versions(self, bucket: str, object_versions: list) -> dict:
|
||||
# Build deletion list in S3 format
|
||||
delete_list = {
|
||||
|
@ -427,19 +421,21 @@ class AwsCliClient(S3ClientWrapper):
|
|||
|
||||
cmd = (
|
||||
f"aws {self.common_flags} s3api delete-objects --bucket {bucket} "
|
||||
f"--delete file://{file_path} --endpoint {self.s3gate_endpoint} --profile {self.profile}"
|
||||
f"--delete file://{file_path} --endpoint {self.s3gate_endpoint}"
|
||||
)
|
||||
output = self.local_shell.exec(cmd, command_options).stdout
|
||||
output = _cmd_run(cmd, LONG_TIMEOUT)
|
||||
sleep(S3_SYNC_WAIT_TIME)
|
||||
return self._to_json(output)
|
||||
|
||||
@reporter.step("Delete object versions S3 without delete markers")
|
||||
@reporter.step_deco("Delete object versions S3 without delete markers")
|
||||
def delete_object_versions_without_dm(self, bucket: str, object_versions: list) -> None:
|
||||
# Delete objects without creating delete markers
|
||||
for object_version in object_versions:
|
||||
self.delete_object(bucket=bucket, key=object_version["Key"], version_id=object_version["VersionId"])
|
||||
self.delete_object(
|
||||
bucket=bucket, key=object_version["Key"], version_id=object_version["VersionId"]
|
||||
)
|
||||
|
||||
@reporter.step("Get object attributes")
|
||||
@reporter.step_deco("Get object attributes")
|
||||
def get_object_attributes(
|
||||
self,
|
||||
bucket: str,
|
||||
|
@ -458,9 +454,9 @@ class AwsCliClient(S3ClientWrapper):
|
|||
cmd = (
|
||||
f"aws {self.common_flags} s3api get-object-attributes --bucket {bucket} "
|
||||
f"--key {key} {version} {parts} {part_number_str} --object-attributes {attrs} "
|
||||
f"--endpoint {self.s3gate_endpoint} --profile {self.profile}"
|
||||
f"--endpoint {self.s3gate_endpoint}"
|
||||
)
|
||||
output = self.local_shell.exec(cmd).stdout
|
||||
output = _cmd_run(cmd)
|
||||
response = self._to_json(output)
|
||||
|
||||
for attr in attributes:
|
||||
|
@ -471,17 +467,17 @@ class AwsCliClient(S3ClientWrapper):
|
|||
else:
|
||||
return response.get(attributes[0])
|
||||
|
||||
@reporter.step("Get bucket policy")
|
||||
@reporter.step_deco("Get bucket policy")
|
||||
def get_bucket_policy(self, bucket: str) -> dict:
|
||||
cmd = (
|
||||
f"aws {self.common_flags} s3api get-bucket-policy --bucket {bucket} "
|
||||
f"--endpoint {self.s3gate_endpoint} --profile {self.profile}"
|
||||
f"--endpoint {self.s3gate_endpoint}"
|
||||
)
|
||||
output = self.local_shell.exec(cmd).stdout
|
||||
output = _cmd_run(cmd)
|
||||
response = self._to_json(output)
|
||||
return response.get("Policy")
|
||||
|
||||
@reporter.step("Put bucket policy")
|
||||
@reporter.step_deco("Put bucket policy")
|
||||
def put_bucket_policy(self, bucket: str, policy: dict) -> None:
|
||||
# Leaving it as is was in test repo. Double dumps to escape resulting string
|
||||
# Example:
|
||||
|
@ -492,45 +488,45 @@ class AwsCliClient(S3ClientWrapper):
|
|||
dumped_policy = json.dumps(json.dumps(policy))
|
||||
cmd = (
|
||||
f"aws {self.common_flags} s3api put-bucket-policy --bucket {bucket} "
|
||||
f"--policy {dumped_policy} --endpoint {self.s3gate_endpoint} --profile {self.profile}"
|
||||
f"--policy {dumped_policy} --endpoint {self.s3gate_endpoint}"
|
||||
)
|
||||
self.local_shell.exec(cmd)
|
||||
_cmd_run(cmd)
|
||||
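The double `json.dumps` exists only to turn the policy into a single shell-safe token: the inner call serializes the dict, the outer call wraps that JSON in quotes and escapes the inner quotes. Illustration with a toy policy (not a real FrostFS policy):

```python
import json

policy = {"Version": "2012-10-17", "Statement": []}

single = json.dumps(policy)
double = json.dumps(json.dumps(policy))

print(single)  # {"Version": "2012-10-17", "Statement": []}
print(double)  # "{\"Version\": \"2012-10-17\", \"Statement\": []}"
```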
|
||||
@reporter.step("Get bucket cors")
|
||||
@reporter.step_deco("Get bucket cors")
|
||||
def get_bucket_cors(self, bucket: str) -> dict:
|
||||
cmd = (
|
||||
f"aws {self.common_flags} s3api get-bucket-cors --bucket {bucket} "
|
||||
f"--endpoint {self.s3gate_endpoint} --profile {self.profile}"
|
||||
f"--endpoint {self.s3gate_endpoint}"
|
||||
)
|
||||
output = self.local_shell.exec(cmd).stdout
|
||||
output = _cmd_run(cmd)
|
||||
response = self._to_json(output)
|
||||
return response.get("CORSRules")
|
||||
|
||||
@reporter.step("Put bucket cors")
|
||||
@reporter.step_deco("Put bucket cors")
|
||||
def put_bucket_cors(self, bucket: str, cors_configuration: dict) -> None:
|
||||
cmd = (
|
||||
f"aws {self.common_flags} s3api put-bucket-cors --bucket {bucket} "
|
||||
f"--cors-configuration '{json.dumps(cors_configuration)}' --endpoint {self.s3gate_endpoint} --profile {self.profile}"
|
||||
f"--cors-configuration '{json.dumps(cors_configuration)}' --endpoint {self.s3gate_endpoint}"
|
||||
)
|
||||
self.local_shell.exec(cmd)
|
||||
_cmd_run(cmd)
|
||||
|
||||
@reporter.step("Delete bucket cors")
|
||||
@reporter.step_deco("Delete bucket cors")
|
||||
def delete_bucket_cors(self, bucket: str) -> None:
|
||||
cmd = (
|
||||
f"aws {self.common_flags} s3api delete-bucket-cors --bucket {bucket} "
|
||||
f"--endpoint {self.s3gate_endpoint} --profile {self.profile}"
|
||||
f"--endpoint {self.s3gate_endpoint}"
|
||||
)
|
||||
self.local_shell.exec(cmd)
|
||||
_cmd_run(cmd)
|
||||
|
||||
@reporter.step("Delete bucket tagging")
|
||||
@reporter.step_deco("Delete bucket tagging")
|
||||
def delete_bucket_tagging(self, bucket: str) -> None:
|
||||
cmd = (
|
||||
f"aws {self.common_flags} s3api delete-bucket-tagging --bucket {bucket} "
|
||||
f"--endpoint {self.s3gate_endpoint} --profile {self.profile}"
|
||||
f"--endpoint {self.s3gate_endpoint}"
|
||||
)
|
||||
self.local_shell.exec(cmd)
|
||||
_cmd_run(cmd)
|
||||
|
||||
@reporter.step("Put object retention")
|
||||
@reporter.step_deco("Put object retention")
|
||||
def put_object_retention(
|
||||
self,
|
||||
bucket: str,
|
||||
|
@ -542,13 +538,13 @@ class AwsCliClient(S3ClientWrapper):
|
|||
version = f" --version-id {version_id}" if version_id else ""
|
||||
cmd = (
|
||||
f"aws {self.common_flags} s3api put-object-retention --bucket {bucket} --key {key} "
|
||||
f"{version} --retention '{json.dumps(retention, indent=4, sort_keys=True, default=str)}' --endpoint {self.s3gate_endpoint} --profile {self.profile}"
|
||||
f"{version} --retention '{json.dumps(retention, indent=4, sort_keys=True, default=str)}' --endpoint {self.s3gate_endpoint}"
|
||||
)
|
||||
if bypass_governance_retention is not None:
|
||||
cmd += " --bypass-governance-retention"
|
||||
self.local_shell.exec(cmd)
|
||||
_cmd_run(cmd)
|
||||
|
||||
@reporter.step("Put object legal hold")
|
||||
@reporter.step_deco("Put object legal hold")
|
||||
def put_object_legal_hold(
|
||||
self,
|
||||
bucket: str,
|
||||
|
@ -560,40 +556,40 @@ class AwsCliClient(S3ClientWrapper):
|
|||
legal_hold = json.dumps({"Status": legal_hold_status})
|
||||
cmd = (
|
||||
f"aws {self.common_flags} s3api put-object-legal-hold --bucket {bucket} --key {key} "
|
||||
f"{version} --legal-hold '{legal_hold}' --endpoint {self.s3gate_endpoint} --profile {self.profile}"
|
||||
f"{version} --legal-hold '{legal_hold}' --endpoint {self.s3gate_endpoint}"
|
||||
)
|
||||
self.local_shell.exec(cmd)
|
||||
_cmd_run(cmd)
|
||||
|
||||
@reporter.step("Put object tagging")
|
||||
@reporter.step_deco("Put object tagging")
|
||||
def put_object_tagging(self, bucket: str, key: str, tags: list) -> None:
|
||||
tags = [{"Key": tag_key, "Value": tag_value} for tag_key, tag_value in tags]
|
||||
tagging = {"TagSet": tags}
|
||||
cmd = (
|
||||
f"aws {self.common_flags} s3api put-object-tagging --bucket {bucket} --key {key} "
|
||||
f"--tagging '{json.dumps(tagging)}' --endpoint {self.s3gate_endpoint} --profile {self.profile}"
|
||||
f"--tagging '{json.dumps(tagging)}' --endpoint {self.s3gate_endpoint}"
|
||||
)
|
||||
self.local_shell.exec(cmd)
|
||||
_cmd_run(cmd)
|
||||
|
||||
@reporter.step("Get object tagging")
|
||||
@reporter.step_deco("Get object tagging")
|
||||
def get_object_tagging(self, bucket: str, key: str, version_id: Optional[str] = None) -> list:
|
||||
version = f" --version-id {version_id}" if version_id else ""
|
||||
cmd = (
|
||||
f"aws {self.common_flags} s3api get-object-tagging --bucket {bucket} --key {key} "
|
||||
f"{version} --endpoint {self.s3gate_endpoint} --profile {self.profile}"
|
||||
f"{version} --endpoint {self.s3gate_endpoint}"
|
||||
)
|
||||
output = self.local_shell.exec(cmd).stdout
|
||||
output = _cmd_run(cmd)
|
||||
response = self._to_json(output)
|
||||
return response.get("TagSet")
|
||||
|
||||
@reporter.step("Delete object tagging")
|
||||
@reporter.step_deco("Delete object tagging")
|
||||
def delete_object_tagging(self, bucket: str, key: str) -> None:
|
||||
cmd = (
|
||||
f"aws {self.common_flags} s3api delete-object-tagging --bucket {bucket} "
|
||||
f"--key {key} --endpoint {self.s3gate_endpoint} --profile {self.profile}"
|
||||
f"--key {key} --endpoint {self.s3gate_endpoint}"
|
||||
)
|
||||
self.local_shell.exec(cmd)
|
||||
_cmd_run(cmd)
|
||||
|
||||
@reporter.step("Sync directory S3")
|
||||
@reporter.step_deco("Sync directory S3")
|
||||
def sync(
|
||||
self,
|
||||
bucket: str,
|
||||
|
@ -603,7 +599,7 @@ class AwsCliClient(S3ClientWrapper):
|
|||
) -> dict:
|
||||
cmd = (
|
||||
f"aws {self.common_flags} s3 sync {dir_path} s3://{bucket} "
|
||||
f"--endpoint-url {self.s3gate_endpoint} --profile {self.profile}"
|
||||
f"--endpoint-url {self.s3gate_endpoint}"
|
||||
)
|
||||
if metadata:
|
||||
cmd += " --metadata"
|
||||
|
@ -611,10 +607,10 @@ class AwsCliClient(S3ClientWrapper):
|
|||
cmd += f" {key}={value}"
|
||||
if acl:
|
||||
cmd += f" --acl {acl}"
|
||||
output = self.local_shell.exec(cmd, command_options).stdout
|
||||
output = _cmd_run(cmd, LONG_TIMEOUT)
|
||||
return self._to_json(output)
|
||||
|
||||
@reporter.step("CP directory S3")
|
||||
@reporter.step_deco("CP directory S3")
|
||||
def cp(
|
||||
self,
|
||||
bucket: str,
|
||||
|
@ -624,7 +620,7 @@ class AwsCliClient(S3ClientWrapper):
|
|||
) -> dict:
|
||||
cmd = (
|
||||
f"aws {self.common_flags} s3 cp {dir_path} s3://{bucket} "
|
||||
f"--endpoint-url {self.s3gate_endpoint} --recursive --profile {self.profile}"
|
||||
f"--endpoint-url {self.s3gate_endpoint} --recursive"
|
||||
)
|
||||
if metadata:
|
||||
cmd += " --metadata"
|
||||
|
@ -632,79 +628,85 @@ class AwsCliClient(S3ClientWrapper):
|
|||
cmd += f" {key}={value}"
|
||||
if acl:
|
||||
cmd += f" --acl {acl}"
|
||||
output = self.local_shell.exec(cmd, command_options).stdout
|
||||
output = _cmd_run(cmd, LONG_TIMEOUT)
|
||||
return self._to_json(output)
|
||||
|
||||
@reporter.step("Create multipart upload S3")
|
||||
@reporter.step_deco("Create multipart upload S3")
|
||||
def create_multipart_upload(self, bucket: str, key: str) -> str:
|
||||
cmd = (
|
||||
f"aws {self.common_flags} s3api create-multipart-upload --bucket {bucket} "
|
||||
f"--key {key} --endpoint-url {self.s3gate_endpoint} --profile {self.profile}"
|
||||
f"--key {key} --endpoint-url {self.s3gate_endpoint}"
|
||||
)
|
||||
output = self.local_shell.exec(cmd).stdout
|
||||
output = _cmd_run(cmd)
|
||||
response = self._to_json(output)
|
||||
|
||||
assert response.get("UploadId"), f"Expected UploadId in response:\n{response}"
|
||||
|
||||
return response["UploadId"]
|
||||
|
||||
@reporter.step("List multipart uploads S3")
|
||||
@reporter.step_deco("List multipart uploads S3")
|
||||
def list_multipart_uploads(self, bucket: str) -> Optional[list[dict]]:
|
||||
cmd = (
|
||||
f"aws {self.common_flags} s3api list-multipart-uploads --bucket {bucket} "
|
||||
f"--endpoint-url {self.s3gate_endpoint} --profile {self.profile}"
|
||||
f"--endpoint-url {self.s3gate_endpoint}"
|
||||
)
|
||||
output = self.local_shell.exec(cmd).stdout
|
||||
output = _cmd_run(cmd)
|
||||
response = self._to_json(output)
|
||||
return response.get("Uploads")
|
||||
|
||||
@reporter.step("Abort multipart upload S3")
|
||||
@reporter.step_deco("Abort multipart upload S3")
|
||||
def abort_multipart_upload(self, bucket: str, key: str, upload_id: str) -> None:
|
||||
cmd = (
|
||||
f"aws {self.common_flags} s3api abort-multipart-upload --bucket {bucket} "
|
||||
f"--key {key} --upload-id {upload_id} --endpoint-url {self.s3gate_endpoint} --profile {self.profile}"
|
||||
f"--key {key} --upload-id {upload_id} --endpoint-url {self.s3gate_endpoint}"
|
||||
)
|
||||
self.local_shell.exec(cmd)
|
||||
_cmd_run(cmd)
|
||||
|
||||
@reporter.step("Upload part S3")
|
||||
def upload_part(self, bucket: str, key: str, upload_id: str, part_num: int, filepath: str) -> str:
|
||||
@reporter.step_deco("Upload part S3")
|
||||
def upload_part(
|
||||
self, bucket: str, key: str, upload_id: str, part_num: int, filepath: str
|
||||
) -> str:
|
||||
cmd = (
|
||||
f"aws {self.common_flags} s3api upload-part --bucket {bucket} --key {key} "
|
||||
f"--upload-id {upload_id} --part-number {part_num} --body {filepath} "
|
||||
f"--endpoint-url {self.s3gate_endpoint} --profile {self.profile}"
|
||||
f"--endpoint-url {self.s3gate_endpoint}"
|
||||
)
|
||||
output = self.local_shell.exec(cmd, command_options).stdout
|
||||
output = _cmd_run(cmd, LONG_TIMEOUT)
|
||||
response = self._to_json(output)
|
||||
assert response.get("ETag"), f"Expected ETag in response:\n{response}"
|
||||
return response["ETag"]
|
||||
|
||||
@reporter.step("Upload copy part S3")
|
||||
def upload_part_copy(self, bucket: str, key: str, upload_id: str, part_num: int, copy_source: str) -> str:
|
||||
@reporter.step_deco("Upload copy part S3")
|
||||
def upload_part_copy(
|
||||
self, bucket: str, key: str, upload_id: str, part_num: int, copy_source: str
|
||||
) -> str:
|
||||
cmd = (
|
||||
f"aws {self.common_flags} s3api upload-part-copy --bucket {bucket} --key {key} "
|
||||
f"--upload-id {upload_id} --part-number {part_num} --copy-source {copy_source} "
|
||||
f"--endpoint-url {self.s3gate_endpoint} --profile {self.profile}"
|
||||
f"--endpoint-url {self.s3gate_endpoint}"
|
||||
)
|
||||
output = self.local_shell.exec(cmd, command_options).stdout
|
||||
output = _cmd_run(cmd, LONG_TIMEOUT)
|
||||
response = self._to_json(output)
|
||||
assert response.get("CopyPartResult", []).get("ETag"), f"Expected ETag in response:\n{response}"
|
||||
assert response.get("CopyPartResult", []).get(
|
||||
"ETag"
|
||||
), f"Expected ETag in response:\n{response}"
|
||||
|
||||
return response["CopyPartResult"]["ETag"]
|
||||
|
||||
@reporter.step("List parts S3")
|
||||
@reporter.step_deco("List parts S3")
|
||||
def list_parts(self, bucket: str, key: str, upload_id: str) -> list[dict]:
|
||||
cmd = (
|
||||
f"aws {self.common_flags} s3api list-parts --bucket {bucket} --key {key} "
|
||||
f"--upload-id {upload_id} --endpoint-url {self.s3gate_endpoint} --profile {self.profile}"
|
||||
f"--upload-id {upload_id} --endpoint-url {self.s3gate_endpoint}"
|
||||
)
|
||||
output = self.local_shell.exec(cmd).stdout
|
||||
output = _cmd_run(cmd)
|
||||
response = self._to_json(output)
|
||||
|
||||
assert response.get("Parts"), f"Expected Parts in response:\n{response}"
|
||||
|
||||
return response["Parts"]
|
||||
|
||||
@reporter.step("Complete multipart upload S3")
|
||||
@reporter.step_deco("Complete multipart upload S3")
|
||||
def complete_multipart_upload(self, bucket: str, key: str, upload_id: str, parts: list) -> None:
|
||||
file_path = os.path.join(os.getcwd(), ASSETS_DIR, "parts.json")
|
||||
parts_dict = {"Parts": [{"ETag": etag, "PartNumber": part_num} for part_num, etag in parts]}
|
||||
|
@ -717,26 +719,26 @@ class AwsCliClient(S3ClientWrapper):
|
|||
cmd = (
|
||||
f"aws {self.common_flags} s3api complete-multipart-upload --bucket {bucket} "
|
||||
f"--key {key} --upload-id {upload_id} --multipart-upload file://{file_path} "
|
||||
f"--endpoint-url {self.s3gate_endpoint} --profile {self.profile}"
|
||||
f"--endpoint-url {self.s3gate_endpoint}"
|
||||
)
|
||||
self.local_shell.exec(cmd)
|
||||
_cmd_run(cmd)
|
||||
|
||||
@reporter.step("Put object lock configuration")
|
||||
@reporter.step_deco("Put object lock configuration")
|
||||
def put_object_lock_configuration(self, bucket: str, configuration: dict) -> dict:
|
||||
cmd = (
|
||||
f"aws {self.common_flags} s3api put-object-lock-configuration --bucket {bucket} "
|
||||
f"--object-lock-configuration '{json.dumps(configuration)}' --endpoint-url {self.s3gate_endpoint} --profile {self.profile}"
|
||||
f"--object-lock-configuration '{json.dumps(configuration)}' --endpoint-url {self.s3gate_endpoint}"
|
||||
)
|
||||
output = self.local_shell.exec(cmd).stdout
|
||||
output = _cmd_run(cmd)
|
||||
return self._to_json(output)
|
||||
|
||||
@reporter.step("Get object lock configuration")
|
||||
@reporter.step_deco("Get object lock configuration")
|
||||
def get_object_lock_configuration(self, bucket: str):
|
||||
cmd = (
|
||||
f"aws {self.common_flags} s3api get-object-lock-configuration --bucket {bucket} "
|
||||
f"--endpoint-url {self.s3gate_endpoint} --profile {self.profile}"
|
||||
f"--endpoint-url {self.s3gate_endpoint}"
|
||||
)
|
||||
output = self.local_shell.exec(cmd).stdout
|
||||
output = _cmd_run(cmd)
|
||||
response = self._to_json(output)
|
||||
return response.get("ObjectLockConfiguration")
|
||||
|
||||
|
|
|
@@ -13,11 +13,17 @@ from botocore.config import Config
from botocore.exceptions import ClientError
from mypy_boto3_s3 import S3Client

from frostfs_testlib import reporter
from frostfs_testlib.resources.common import ASSETS_DIR, MAX_REQUEST_ATTEMPTS, RETRY_MODE, S3_SYNC_WAIT_TIME
from frostfs_testlib.reporter import get_reporter
from frostfs_testlib.resources.common import (
    ASSETS_DIR,
    MAX_REQUEST_ATTEMPTS,
    RETRY_MODE,
    S3_SYNC_WAIT_TIME,
)
from frostfs_testlib.s3.interfaces import S3ClientWrapper, VersioningStatus, _make_objs_dict
from frostfs_testlib.utils.cli_utils import log_command_execution

reporter = get_reporter()
logger = logging.getLogger("NeoLogger")

# Disable warnings on self-signed certificate which the

@@ -38,38 +44,22 @@ def report_error(func):


class Boto3ClientWrapper(S3ClientWrapper):
    __repr_name__: str = "Boto3 client"

    @reporter.step("Configure S3 client (boto3)")
    @reporter.step_deco("Configure S3 client (boto3)")
    @report_error
    def __init__(
        self, access_key_id: str, secret_access_key: str, s3gate_endpoint: str, profile: str = "default"
    ) -> None:
        self.boto3_client: S3Client = None
        self.session = boto3.Session(profile_name=profile)
        self.config = Config(
    def __init__(self, access_key_id: str, secret_access_key: str, s3gate_endpoint: str) -> None:
        session = boto3.Session()
        config = Config(
            retries={
                "max_attempts": MAX_REQUEST_ATTEMPTS,
                "mode": RETRY_MODE,
            }
        )
        self.access_key_id: str = access_key_id
        self.secret_access_key: str = secret_access_key
        self.s3gate_endpoint: str = ""
        self.set_endpoint(s3gate_endpoint)

    @reporter.step("Set endpoint S3 to {s3gate_endpoint}")
    def set_endpoint(self, s3gate_endpoint: str):
        if self.s3gate_endpoint == s3gate_endpoint:
            return

        self.s3gate_endpoint = s3gate_endpoint

        self.boto3_client: S3Client = self.session.client(
        self.boto3_client: S3Client = session.client(
            service_name="s3",
            aws_access_key_id=self.access_key_id,
            aws_secret_access_key=self.secret_access_key,
            config=self.config,
            aws_access_key_id=access_key_id,
            aws_secret_access_key=secret_access_key,
            config=config,
            endpoint_url=s3gate_endpoint,
            verify=False,
        )
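Both sides drive retries through `botocore.config.Config`; the master side also keeps the session, credentials, and config on `self` so `set_endpoint` can rebuild the client later. A self-contained sketch of that client construction (the endpoint, the credentials, and the assumption that a `default` AWS profile exists are placeholders):

```python
import boto3
from botocore.config import Config

MAX_REQUEST_ATTEMPTS = 5
RETRY_MODE = "standard"

config = Config(retries={"max_attempts": MAX_REQUEST_ATTEMPTS, "mode": RETRY_MODE})
session = boto3.Session(profile_name="default")

s3 = session.client(
    service_name="s3",
    aws_access_key_id="<access-key-id>",            # placeholder credentials
    aws_secret_access_key="<secret-access-key>",
    config=config,
    endpoint_url="https://s3.dev.example:8080",     # placeholder gateway endpoint
    verify=False,                                   # devenv uses a self-signed certificate
)

# buckets = s3.list_buckets()  # would issue a real request against the gateway
```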
|
@ -86,7 +76,7 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
|||
return result
|
||||
|
||||
# BUCKET METHODS #
|
||||
@reporter.step("Create bucket S3")
|
||||
@reporter.step_deco("Create bucket S3")
|
||||
@report_error
|
||||
def create_bucket(
|
||||
self,
|
||||
|
@ -114,14 +104,16 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
|||
elif grant_full_control:
|
||||
params.update({"GrantFullControl": grant_full_control})
|
||||
if location_constraint:
|
||||
params.update({"CreateBucketConfiguration": {"LocationConstraint": location_constraint}})
|
||||
params.update(
|
||||
{"CreateBucketConfiguration": {"LocationConstraint": location_constraint}}
|
||||
)
|
||||
|
||||
s3_bucket = self.boto3_client.create_bucket(**params)
|
||||
log_command_execution(f"Created S3 bucket {bucket}", s3_bucket)
|
||||
sleep(S3_SYNC_WAIT_TIME)
|
||||
return bucket
|
||||
|
||||
@reporter.step("List buckets S3")
|
||||
@reporter.step_deco("List buckets S3")
|
||||
@report_error
|
||||
def list_buckets(self) -> list[str]:
|
||||
found_buckets = []
|
||||
|
@ -134,20 +126,20 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
|||
|
||||
return found_buckets
|
||||
|
||||
@reporter.step("Delete bucket S3")
|
||||
@reporter.step_deco("Delete bucket S3")
|
||||
@report_error
|
||||
def delete_bucket(self, bucket: str) -> None:
|
||||
response = self.boto3_client.delete_bucket(Bucket=bucket)
|
||||
log_command_execution("S3 Delete bucket result", response)
|
||||
sleep(S3_SYNC_WAIT_TIME)
|
||||
|
||||
@reporter.step("Head bucket S3")
|
||||
@reporter.step_deco("Head bucket S3")
|
||||
@report_error
|
||||
def head_bucket(self, bucket: str) -> None:
|
||||
response = self.boto3_client.head_bucket(Bucket=bucket)
|
||||
log_command_execution("S3 Head bucket result", response)
|
||||
|
||||
@reporter.step("Put bucket versioning status")
|
||||
@reporter.step_deco("Put bucket versioning status")
|
||||
@report_error
|
||||
def put_bucket_versioning(self, bucket: str, status: VersioningStatus) -> None:
|
||||
response = self.boto3_client.put_bucket_versioning(
|
||||
|
@ -155,7 +147,7 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
|||
)
|
||||
log_command_execution("S3 Set bucket versioning to", response)
|
||||
|
||||
@reporter.step("Get bucket versioning status")
|
||||
@reporter.step_deco("Get bucket versioning status")
|
||||
@report_error
|
||||
def get_bucket_versioning_status(self, bucket: str) -> Literal["Enabled", "Suspended"]:
|
||||
response = self.boto3_client.get_bucket_versioning(Bucket=bucket)
|
||||
|
@ -163,7 +155,7 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
|||
log_command_execution("S3 Got bucket versioning status", response)
|
||||
return status
|
||||
|
||||
@reporter.step("Put bucket tagging")
|
||||
@reporter.step_deco("Put bucket tagging")
|
||||
@report_error
|
||||
def put_bucket_tagging(self, bucket: str, tags: list) -> None:
|
||||
tags = [{"Key": tag_key, "Value": tag_value} for tag_key, tag_value in tags]
|
||||
|
@ -171,27 +163,27 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
|||
response = self.boto3_client.put_bucket_tagging(Bucket=bucket, Tagging=tagging)
|
||||
log_command_execution("S3 Put bucket tagging", response)
|
||||
|
||||
@reporter.step("Get bucket tagging")
|
||||
@reporter.step_deco("Get bucket tagging")
|
||||
@report_error
|
||||
def get_bucket_tagging(self, bucket: str) -> list:
|
||||
response = self.boto3_client.get_bucket_tagging(Bucket=bucket)
|
||||
log_command_execution("S3 Get bucket tagging", response)
|
||||
return response.get("TagSet")
|
||||
|
||||
@reporter.step("Get bucket acl")
|
||||
@reporter.step_deco("Get bucket acl")
|
||||
@report_error
|
||||
def get_bucket_acl(self, bucket: str) -> list:
|
||||
response = self.boto3_client.get_bucket_acl(Bucket=bucket)
|
||||
log_command_execution("S3 Get bucket acl", response)
|
||||
return response.get("Grants")
|
||||
|
||||
@reporter.step("Delete bucket tagging")
|
||||
@reporter.step_deco("Delete bucket tagging")
|
||||
@report_error
|
||||
def delete_bucket_tagging(self, bucket: str) -> None:
|
||||
response = self.boto3_client.delete_bucket_tagging(Bucket=bucket)
|
||||
log_command_execution("S3 Delete bucket tagging", response)
|
||||
|
||||
@reporter.step("Put bucket ACL")
|
||||
@reporter.step_deco("Put bucket ACL")
|
||||
@report_error
|
||||
def put_bucket_acl(
|
||||
self,
|
||||
|
@ -208,56 +200,60 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
|||
response = self.boto3_client.put_bucket_acl(**params)
|
||||
log_command_execution("S3 ACL bucket result", response)
|
||||
|
||||
@reporter.step("Put object lock configuration")
|
||||
@reporter.step_deco("Put object lock configuration")
|
||||
@report_error
|
||||
def put_object_lock_configuration(self, bucket: str, configuration: dict) -> dict:
|
||||
response = self.boto3_client.put_object_lock_configuration(Bucket=bucket, ObjectLockConfiguration=configuration)
|
||||
response = self.boto3_client.put_object_lock_configuration(
|
||||
Bucket=bucket, ObjectLockConfiguration=configuration
|
||||
)
|
||||
log_command_execution("S3 put_object_lock_configuration result", response)
|
||||
return response
|
||||
|
||||
@reporter.step("Get object lock configuration")
|
||||
@reporter.step_deco("Get object lock configuration")
|
||||
@report_error
|
||||
def get_object_lock_configuration(self, bucket: str) -> dict:
|
||||
response = self.boto3_client.get_object_lock_configuration(Bucket=bucket)
|
||||
log_command_execution("S3 get_object_lock_configuration result", response)
|
||||
return response.get("ObjectLockConfiguration")
|
||||
|
||||
@reporter.step("Get bucket policy")
|
||||
@reporter.step_deco("Get bucket policy")
|
||||
@report_error
|
||||
def get_bucket_policy(self, bucket: str) -> str:
|
||||
response = self.boto3_client.get_bucket_policy(Bucket=bucket)
|
||||
log_command_execution("S3 get_bucket_policy result", response)
|
||||
return response.get("Policy")
|
||||
|
||||
@reporter.step("Put bucket policy")
|
||||
@reporter.step_deco("Put bucket policy")
|
||||
@report_error
|
||||
def put_bucket_policy(self, bucket: str, policy: dict) -> None:
|
||||
response = self.boto3_client.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
|
||||
log_command_execution("S3 put_bucket_policy result", response)
|
||||
return response
|
||||
|
||||
@reporter.step("Get bucket cors")
|
||||
@reporter.step_deco("Get bucket cors")
|
||||
@report_error
|
||||
def get_bucket_cors(self, bucket: str) -> dict:
|
||||
response = self.boto3_client.get_bucket_cors(Bucket=bucket)
|
||||
log_command_execution("S3 get_bucket_cors result", response)
|
||||
return response.get("CORSRules")
|
||||
|
||||
@reporter.step("Get bucket location")
|
||||
@reporter.step_deco("Get bucket location")
|
||||
@report_error
|
||||
def get_bucket_location(self, bucket: str) -> str:
|
||||
response = self.boto3_client.get_bucket_location(Bucket=bucket)
|
||||
log_command_execution("S3 get_bucket_location result", response)
|
||||
return response.get("LocationConstraint")
|
||||
|
||||
@reporter.step("Put bucket cors")
|
||||
@reporter.step_deco("Put bucket cors")
|
||||
@report_error
|
||||
def put_bucket_cors(self, bucket: str, cors_configuration: dict) -> None:
|
||||
response = self.boto3_client.put_bucket_cors(Bucket=bucket, CORSConfiguration=cors_configuration)
|
||||
response = self.boto3_client.put_bucket_cors(
|
||||
Bucket=bucket, CORSConfiguration=cors_configuration
|
||||
)
|
||||
log_command_execution("S3 put_bucket_cors result", response)
|
||||
return response
|
||||
|
||||
@reporter.step("Delete bucket cors")
|
||||
@reporter.step_deco("Delete bucket cors")
|
||||
@report_error
|
||||
def delete_bucket_cors(self, bucket: str) -> None:
|
||||
response = self.boto3_client.delete_bucket_cors(Bucket=bucket)
|
||||
|
@ -266,7 +262,7 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
|||
# END OF BUCKET METHODS #
|
||||
# OBJECT METHODS #
|
||||
|
||||
@reporter.step("List objects S3 v2")
|
||||
@reporter.step_deco("List objects S3 v2")
|
||||
@report_error
|
||||
def list_objects_v2(self, bucket: str, full_output: bool = False) -> Union[dict, list[str]]:
|
||||
response = self.boto3_client.list_objects_v2(Bucket=bucket)
|
||||
|
@ -277,7 +273,7 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
|||
|
||||
return response if full_output else obj_list
|
||||
|
||||
@reporter.step("List objects S3")
|
||||
@reporter.step_deco("List objects S3")
|
||||
@report_error
|
||||
def list_objects(self, bucket: str, full_output: bool = False) -> Union[dict, list[str]]:
|
||||
response = self.boto3_client.list_objects(Bucket=bucket)
|
||||
|
@ -288,21 +284,21 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
|||
|
||||
return response if full_output else obj_list
|
||||
|
||||
@reporter.step("List objects versions S3")
|
||||
@reporter.step_deco("List objects versions S3")
|
||||
@report_error
|
||||
def list_objects_versions(self, bucket: str, full_output: bool = False) -> dict:
|
||||
response = self.boto3_client.list_object_versions(Bucket=bucket)
|
||||
log_command_execution("S3 List objects versions result", response)
|
||||
return response if full_output else response.get("Versions", [])
|
||||
|
||||
@reporter.step("List objects delete markers S3")
|
||||
@reporter.step_deco("List objects delete markers S3")
|
||||
@report_error
|
||||
def list_delete_markers(self, bucket: str, full_output: bool = False) -> list:
|
||||
response = self.boto3_client.list_object_versions(Bucket=bucket)
|
||||
log_command_execution("S3 List objects delete markers result", response)
|
||||
return response if full_output else response.get("DeleteMarkers", [])
|
||||
|
||||
@reporter.step("Put object S3")
|
||||
@reporter.step_deco("Put object S3")
|
||||
@report_error
|
||||
def put_object(
|
||||
self,
|
||||
|
@ -333,7 +329,7 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
|||
log_command_execution("S3 Put object result", response)
|
||||
return response.get("VersionId")
|
||||
|
||||
@reporter.step("Head object S3")
|
||||
@reporter.step_deco("Head object S3")
|
||||
@report_error
|
||||
def head_object(self, bucket: str, key: str, version_id: Optional[str] = None) -> dict:
|
||||
params = {
|
||||
|
@ -345,7 +341,7 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
|||
log_command_execution("S3 Head object result", response)
|
||||
return response
|
||||
|
||||
@reporter.step("Delete object S3")
|
||||
@reporter.step_deco("Delete object S3")
|
||||
@report_error
|
||||
def delete_object(self, bucket: str, key: str, version_id: Optional[str] = None) -> dict:
|
||||
params = {
|
||||
|
@ -358,7 +354,7 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
|||
sleep(S3_SYNC_WAIT_TIME)
|
||||
return response
|
||||
|
||||
@reporter.step("Delete objects S3")
|
||||
@reporter.step_deco("Delete objects S3")
|
||||
@report_error
|
||||
def delete_objects(self, bucket: str, keys: list[str]) -> dict:
|
||||
response = self.boto3_client.delete_objects(Bucket=bucket, Delete=_make_objs_dict(keys))
|
||||
|
@ -369,7 +365,7 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
|||
sleep(S3_SYNC_WAIT_TIME)
|
||||
return response
|
||||
|
||||
@reporter.step("Delete object versions S3")
|
||||
@reporter.step_deco("Delete object versions S3")
|
||||
@report_error
|
||||
def delete_object_versions(self, bucket: str, object_versions: list) -> dict:
|
||||
# Build deletion list in S3 format
|
||||
|
@ -386,7 +382,7 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
|||
log_command_execution("S3 Delete objects result", response)
|
||||
return response
|
||||
|
||||
@reporter.step("Delete object versions S3 without delete markers")
|
||||
@reporter.step_deco("Delete object versions S3 without delete markers")
|
||||
@report_error
|
||||
def delete_object_versions_without_dm(self, bucket: str, object_versions: list) -> None:
|
||||
# Delete objects without creating delete markers
|
||||
|
@ -396,7 +392,7 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
|||
)
|
||||
log_command_execution("S3 Delete object result", response)
|
||||
|
||||
@reporter.step("Put object ACL")
|
||||
@reporter.step_deco("Put object ACL")
|
||||
@report_error
|
||||
def put_object_acl(
|
||||
self,
|
||||
|
@ -409,7 +405,7 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
|||
# pytest.skip("Method put_object_acl is not supported by boto3 client")
|
||||
raise NotImplementedError("Unsupported for boto3 client")
|
||||
|
||||
@reporter.step("Get object ACL")
|
||||
@reporter.step_deco("Get object ACL")
|
||||
@report_error
|
||||
def get_object_acl(self, bucket: str, key: str, version_id: Optional[str] = None) -> list:
|
||||
params = {
|
||||
|
@ -421,7 +417,7 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
|||
log_command_execution("S3 ACL objects result", response)
|
||||
return response.get("Grants")
|
||||
|
||||
@reporter.step("Copy object S3")
|
||||
@reporter.step_deco("Copy object S3")
|
||||
@report_error
|
||||
def copy_object(
|
||||
self,
|
||||
|
@ -450,7 +446,7 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
|||
log_command_execution("S3 Copy objects result", response)
|
||||
return key
|
||||
|
||||
@reporter.step("Get object S3")
|
||||
@reporter.step_deco("Get object S3")
|
||||
@report_error
|
||||
def get_object(
|
||||
self,
|
||||
|
@ -468,7 +464,8 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
|||
params = {
|
||||
self._to_s3_param(param): value
|
||||
for param, value in {**locals(), **{"Range": range_str}}.items()
|
||||
if param not in ["self", "object_range", "full_output", "range_str", "filename"] and value is not None
|
||||
if param not in ["self", "object_range", "full_output", "range_str", "filename"]
|
||||
and value is not None
|
||||
}
|
||||
response = self.boto3_client.get_object(**params)
|
||||
log_command_execution("S3 Get objects result", response)
|
||||
|
@ -480,7 +477,7 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
|||
chunk = response["Body"].read(1024)
|
||||
return response if full_output else filename
|
||||
|
||||
@reporter.step("Create multipart upload S3")
|
||||
@reporter.step_deco("Create multipart upload S3")
|
||||
@report_error
|
||||
def create_multipart_upload(self, bucket: str, key: str) -> str:
|
||||
response = self.boto3_client.create_multipart_upload(Bucket=bucket, Key=key)
|
||||
|
@ -489,7 +486,7 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
|||
|
||||
return response["UploadId"]
|
||||
|
||||
@reporter.step("List multipart uploads S3")
|
||||
@reporter.step_deco("List multipart uploads S3")
|
||||
@report_error
|
||||
def list_multipart_uploads(self, bucket: str) -> Optional[list[dict]]:
|
||||
response = self.boto3_client.list_multipart_uploads(Bucket=bucket)
|
||||
|
@ -497,15 +494,19 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
|||
|
||||
return response.get("Uploads")
|
||||
|
||||
@reporter.step("Abort multipart upload S3")
|
||||
@reporter.step_deco("Abort multipart upload S3")
|
||||
@report_error
|
||||
def abort_multipart_upload(self, bucket: str, key: str, upload_id: str) -> None:
|
||||
response = self.boto3_client.abort_multipart_upload(Bucket=bucket, Key=key, UploadId=upload_id)
|
||||
response = self.boto3_client.abort_multipart_upload(
|
||||
Bucket=bucket, Key=key, UploadId=upload_id
|
||||
)
|
||||
log_command_execution("S3 Abort multipart upload", response)
|
||||
|
||||
@reporter.step("Upload part S3")
|
||||
@reporter.step_deco("Upload part S3")
|
||||
@report_error
|
||||
def upload_part(self, bucket: str, key: str, upload_id: str, part_num: int, filepath: str) -> str:
|
||||
def upload_part(
|
||||
self, bucket: str, key: str, upload_id: str, part_num: int, filepath: str
|
||||
) -> str:
|
||||
with open(filepath, "rb") as put_file:
|
||||
body = put_file.read()
|
||||
|
||||
|
@ -521,9 +522,11 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
|||
|
||||
return response["ETag"]
|
||||
|
||||
@reporter.step("Upload copy part S3")
|
||||
@reporter.step_deco("Upload copy part S3")
|
||||
@report_error
|
||||
def upload_part_copy(self, bucket: str, key: str, upload_id: str, part_num: int, copy_source: str) -> str:
|
||||
def upload_part_copy(
|
||||
self, bucket: str, key: str, upload_id: str, part_num: int, copy_source: str
|
||||
) -> str:
|
||||
response = self.boto3_client.upload_part_copy(
|
||||
UploadId=upload_id,
|
||||
Bucket=bucket,
|
||||
|
@ -532,11 +535,13 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
|||
CopySource=copy_source,
|
||||
)
|
||||
log_command_execution("S3 Upload copy part", response)
|
||||
assert response.get("CopyPartResult", []).get("ETag"), f"Expected ETag in response:\n{response}"
|
||||
assert response.get("CopyPartResult", []).get(
|
||||
"ETag"
|
||||
), f"Expected ETag in response:\n{response}"
|
||||
|
||||
return response["CopyPartResult"]["ETag"]
|
||||
|
||||
@reporter.step("List parts S3")
|
||||
@reporter.step_deco("List parts S3")
|
||||
@report_error
|
||||
def list_parts(self, bucket: str, key: str, upload_id: str) -> list[dict]:
|
||||
response = self.boto3_client.list_parts(UploadId=upload_id, Bucket=bucket, Key=key)
|
||||
|
@ -545,7 +550,7 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
|||
|
||||
return response["Parts"]
|
||||
|
||||
@reporter.step("Complete multipart upload S3")
|
||||
@reporter.step_deco("Complete multipart upload S3")
|
||||
@report_error
|
||||
def complete_multipart_upload(self, bucket: str, key: str, upload_id: str, parts: list) -> None:
|
||||
parts = [{"ETag": etag, "PartNumber": part_num} for part_num, etag in parts]
|
||||
|
@ -554,7 +559,7 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
|||
)
|
||||
log_command_execution("S3 Complete multipart upload", response)
|
||||
|
||||
@reporter.step("Put object retention")
|
||||
@reporter.step_deco("Put object retention")
|
||||
@report_error
|
||||
def put_object_retention(
|
||||
self,
|
||||
|
@ -572,7 +577,7 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
|||
response = self.boto3_client.put_object_retention(**params)
|
||||
log_command_execution("S3 Put object retention ", response)
|
||||
|
||||
@reporter.step("Put object legal hold")
|
||||
@reporter.step_deco("Put object legal hold")
|
||||
@report_error
|
||||
def put_object_legal_hold(
|
||||
self,
|
||||
|
@ -590,7 +595,7 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
|||
response = self.boto3_client.put_object_legal_hold(**params)
|
||||
log_command_execution("S3 Put object legal hold ", response)
|
||||
|
||||
@reporter.step("Put object tagging")
|
||||
@reporter.step_deco("Put object tagging")
|
||||
@report_error
|
||||
def put_object_tagging(self, bucket: str, key: str, tags: list) -> None:
|
||||
tags = [{"Key": tag_key, "Value": tag_value} for tag_key, tag_value in tags]
|
||||
|
@ -598,7 +603,7 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
|||
response = self.boto3_client.put_object_tagging(Bucket=bucket, Key=key, Tagging=tagging)
|
||||
log_command_execution("S3 Put object tagging", response)
|
||||
|
||||
@reporter.step("Get object tagging")
|
||||
@reporter.step_deco("Get object tagging")
|
||||
@report_error
|
||||
def get_object_tagging(self, bucket: str, key: str, version_id: Optional[str] = None) -> list:
|
||||
params = {
|
||||
|
@ -610,13 +615,13 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
|||
log_command_execution("S3 Get object tagging", response)
|
||||
return response.get("TagSet")
|
||||
|
||||
@reporter.step("Delete object tagging")
|
||||
@reporter.step_deco("Delete object tagging")
|
||||
@report_error
|
||||
def delete_object_tagging(self, bucket: str, key: str) -> None:
|
||||
response = self.boto3_client.delete_object_tagging(Bucket=bucket, Key=key)
|
||||
log_command_execution("S3 Delete object tagging", response)
|
||||
|
||||
@reporter.step("Get object attributes")
|
||||
@reporter.step_deco("Get object attributes")
|
||||
@report_error
|
||||
def get_object_attributes(
|
||||
self,
|
||||
|
@ -631,7 +636,7 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
|||
logger.warning("Method get_object_attributes is not supported by boto3 client")
|
||||
return {}
|
||||
|
||||
@reporter.step("Sync directory S3")
|
||||
@reporter.step_deco("Sync directory S3")
|
||||
@report_error
|
||||
def sync(
|
||||
self,
|
||||
|
@ -642,7 +647,7 @@ class Boto3ClientWrapper(S3ClientWrapper):
|
|||
) -> dict:
|
||||
raise NotImplementedError("Sync is not supported for boto3 client")
|
||||
|
||||
@reporter.step("CP directory S3")
|
||||
@reporter.step_deco("CP directory S3")
|
||||
@report_error
|
||||
def cp(
|
||||
self,
|
||||
|
|
|
@@ -1,9 +1,8 @@
from abc import abstractmethod
from abc import ABC, abstractmethod
from datetime import datetime
from enum import Enum
from typing import Literal, Optional, Union

from frostfs_testlib.testing.readable import HumanReadableABC, HumanReadableEnum


def _make_objs_dict(key_names):
    objs_list = []
@@ -14,8 +13,7 @@ def _make_objs_dict(key_names):
    return objs_dict


class VersioningStatus(HumanReadableEnum):
    UNDEFINED = None
class VersioningStatus(Enum):
    ENABLED = "Enabled"
    SUSPENDED = "Suspended"

@@ -31,15 +29,11 @@ ACL_COPY = [
]


class S3ClientWrapper(HumanReadableABC):
class S3ClientWrapper(ABC):
    @abstractmethod
    def __init__(self, access_key_id: str, secret_access_key: str, s3gate_endpoint: str, profile: str) -> None:
    def __init__(self, access_key_id: str, secret_access_key: str, s3gate_endpoint: str) -> None:
        pass

    @abstractmethod
    def set_endpoint(self, s3gate_endpoint: str):
        """Set endpoint"""

    @abstractmethod
    def create_bucket(
        self,
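The hunk above narrows the wrapper interface back to a plain `ABC` with a three-argument constructor and a `set_endpoint` hook. As a rough illustration of that contract (the ABC is re-declared locally here rather than imported, and `DummyS3Client` is purely hypothetical, not part of the library):

```python
from abc import ABC, abstractmethod


class S3ClientWrapper(ABC):
    @abstractmethod
    def __init__(self, access_key_id: str, secret_access_key: str, s3gate_endpoint: str) -> None:
        pass

    @abstractmethod
    def set_endpoint(self, s3gate_endpoint: str):
        """Set endpoint"""


class DummyS3Client(S3ClientWrapper):
    # Hypothetical implementation used only to show the contract.
    def __init__(self, access_key_id: str, secret_access_key: str, s3gate_endpoint: str) -> None:
        self.access_key_id = access_key_id
        self.secret_access_key = secret_access_key
        self.s3gate_endpoint = s3gate_endpoint

    def set_endpoint(self, s3gate_endpoint: str):
        self.s3gate_endpoint = s3gate_endpoint


client = DummyS3Client("key-id", "secret", "http://s3.example:8080")
client.set_endpoint("http://s3.example:8081")
```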
@@ -1,3 +1,3 @@
from frostfs_testlib.shell.interfaces import CommandOptions, CommandResult, InteractiveInput, Shell
from frostfs_testlib.shell.local_shell import LocalShell
from frostfs_testlib.shell.ssh_shell import SshConnectionProvider, SSHShell
from frostfs_testlib.shell.ssh_shell import SSHShell
@@ -7,23 +7,7 @@ class SudoInspector(CommandInspector):
    If command is already prepended with sudo, then has no effect.
    """

    def inspect(self, original_command: str, command: str) -> str:
    def inspect(self, command: str) -> str:
        if not command.startswith("sudo"):
            return f"sudo {command}"
        return command


class SuInspector(CommandInspector):
    """Allows to run command as another user via sudo su call

    If command is already prepended with sudo su, then has no effect.
    """

    def __init__(self, user: str) -> None:
        self.user = user

    def inspect(self, original_command: str, command: str) -> str:
        if not original_command.startswith("sudo su"):
            cmd = original_command.replace('"', '\\"').replace("\$", "\\\\\\$")
            return f'sudo su - {self.user} -c "{cmd}"'
        return original_command
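The two-argument `inspect(original_command, command)` form shown above lets each inspector in a chain see both the untouched command and the result of earlier inspectors, while the one-argument form only sees the latter. A small standalone sketch of such a chain (the `Shell` wiring is simplified and `EnvInspector` is a made-up example, not library code):

```python
class SudoInspector:
    def inspect(self, original_command: str, command: str) -> str:
        if not command.startswith("sudo"):
            return f"sudo {command}"
        return command


class EnvInspector:
    # Hypothetical inspector that forces a locale prefix.
    def inspect(self, original_command: str, command: str) -> str:
        return f"LC_ALL=C {command}"


def apply_inspectors(command: str, inspectors: list) -> str:
    original_command = command
    for inspector in inspectors:
        command = inspector.inspect(original_command, command)
    return command


print(apply_inspectors("systemctl status frostfs-storage", [EnvInspector(), SudoInspector()]))
# -> sudo LC_ALL=C systemctl status frostfs-storage
```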
@@ -22,12 +22,11 @@ class CommandInspector(ABC):
    """Interface of inspector that processes command text before execution."""

    @abstractmethod
    def inspect(self, original_command: str, command: str) -> str:
    def inspect(self, command: str) -> str:
        """Transforms command text and returns modified command.

        Args:
            command: Command to transform with this inspector.
            original_command: Untransformed command to transform with this inspector. Depending on type of the inspector it might be required to modify original command

        Returns:
            Transformed command text.
@@ -48,7 +47,6 @@ class CommandOptions:
        check: Controls whether to check return code of the command. Set to False to
            ignore non-zero return codes.
        no_log: Do not print output to logger if True.
        extra_inspectors: Exctra command inspectors to process command
    """

    interactive_inputs: Optional[list[InteractiveInput]] = None
@@ -56,7 +54,6 @@ class CommandOptions:
    timeout: Optional[int] = None
    check: bool = True
    no_log: bool = False
    extra_inspectors: Optional[list[CommandInspector]] = None

    def __post_init__(self):
        if self.timeout is None:
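A short usage sketch tying `CommandOptions.extra_inspectors` to a shell call. The `LocalShell`, `CommandOptions` and `exec` names come from this diff; the module path for `SudoInspector` is an assumption and may differ in the actual package layout:

```python
from frostfs_testlib.shell import CommandOptions, LocalShell
from frostfs_testlib.shell.command_inspectors import SudoInspector  # assumed module path

shell = LocalShell()

# Applied on top of the shell's own command_inspectors, for this call only.
options = CommandOptions(extra_inspectors=[SudoInspector()], timeout=30)
result = shell.exec("systemctl status frostfs-storage", options)
print(result.return_code, result.stdout)
```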
@@ -6,10 +6,11 @@ from typing import IO, Optional

import pexpect

from frostfs_testlib import reporter
from frostfs_testlib.reporter import get_reporter
from frostfs_testlib.shell.interfaces import CommandInspector, CommandOptions, CommandResult, Shell

logger = logging.getLogger("frostfs.testlib.shell")
reporter = get_reporter()


class LocalShell(Shell):
@@ -23,10 +24,8 @@ class LocalShell(Shell):
        # If no options were provided, use default options
        options = options or CommandOptions()

        original_command = command
        extra_inspectors = options.extra_inspectors if options.extra_inspectors else []
        for inspector in [*self.command_inspectors, *extra_inspectors]:
            command = inspector.inspect(original_command, command)
        for inspector in self.command_inspectors:
            command = inspector.inspect(command)

        logger.info(f"Executing command: {command}")
        if options.interactive_inputs:
@@ -38,7 +37,7 @@ class LocalShell(Shell):
        log_file = tempfile.TemporaryFile()  # File is reliable cross-platform way to capture output

        try:
            command_process = pexpect.spawn(command, timeout=options.timeout, use_poll=True)
            command_process = pexpect.spawn(command, timeout=options.timeout)
        except (pexpect.ExceptionPexpect, OSError) as exc:
            raise RuntimeError(f"Command: {command}") from exc

@@ -61,8 +60,7 @@ class LocalShell(Shell):
        if options.check and result.return_code != 0:
            raise RuntimeError(
                f"Command: {command}\nreturn code: {result.return_code}\n"
                f"Output: {result.stdout}\n"
                f"Stderr: {result.stderr}\n"
                f"Output: {result.stdout}"
            )
        return result

@@ -94,7 +92,9 @@ class LocalShell(Shell):
                return_code=exc.returncode,
            )
            raise RuntimeError(
                f"Command: {command}\nError:\n" f"return code: {exc.returncode}\n" f"output: {exc.output}"
                f"Command: {command}\nError:\n"
                f"return code: {exc.returncode}\n"
                f"output: {exc.output}"
            ) from exc
        except OSError as exc:
            raise RuntimeError(f"Command: {command}\nOutput: {exc.strerror}") from exc
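The `use_poll=True` change above matters when a long test session holds many open file descriptors: it switches pexpect from select() (limited to 1024 descriptors) to poll(). A minimal standalone sketch of the spawn-and-capture pattern used here, assuming a reasonably recent pexpect (>= 4.4) is installed:

```python
import tempfile

import pexpect

command = "echo hello"
log_file = tempfile.TemporaryFile()  # file is a reliable cross-platform way to capture output

# use_poll=True avoids the select() file-descriptor limit in long sessions.
child = pexpect.spawn(command, timeout=30, use_poll=True)
child.logfile = log_file
child.expect(pexpect.EOF)
child.wait()

log_file.seek(0)
print(log_file.read().decode(errors="ignore"), child.exitstatus)
```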
@@ -6,111 +6,24 @@ from functools import lru_cache, wraps
from time import sleep
from typing import ClassVar, Optional, Tuple

from paramiko import AutoAddPolicy, Channel, ECDSAKey, Ed25519Key, PKey, RSAKey, SSHClient, SSHException, ssh_exception
from paramiko import (
    AutoAddPolicy,
    Channel,
    ECDSAKey,
    Ed25519Key,
    PKey,
    RSAKey,
    SSHClient,
    SSHException,
    ssh_exception,
)
from paramiko.ssh_exception import AuthenticationException

from frostfs_testlib import reporter
from frostfs_testlib.shell.interfaces import CommandInspector, CommandOptions, CommandResult, Shell, SshCredentials
from frostfs_testlib.reporter import get_reporter
from frostfs_testlib.shell.interfaces import CommandInspector, CommandOptions, CommandResult, Shell

logger = logging.getLogger("frostfs.testlib.shell")


class SshConnectionProvider:
    SSH_CONNECTION_ATTEMPTS: ClassVar[int] = 4
    SSH_ATTEMPTS_INTERVAL: ClassVar[int] = 10
    CONNECTION_TIMEOUT = 60

    instance = None
    connections: dict[str, SSHClient] = {}
    creds: dict[str, SshCredentials] = {}

    def __new__(cls):
        if not cls.instance:
            cls.instance = super(SshConnectionProvider, cls).__new__(cls)
        return cls.instance

    def store_creds(self, host: str, ssh_creds: SshCredentials):
        self.creds[host] = ssh_creds

    def provide(self, host: str, port: str) -> SSHClient:
        if host not in self.creds:
            raise RuntimeError(f"Please add credentials for host {host}")

        if host in self.connections:
            client = self.connections[host]
            if client:
                return client

        creds = self.creds[host]
        client = self._create_connection(host, port, creds)
        self.connections[host] = client
        return client

    def drop(self, host: str):
        if host in self.connections:
            client = self.connections.pop(host)
            client.close()

    def drop_all(self):
        hosts = list(self.connections.keys())
        for host in hosts:
            self.drop(host)
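A short usage sketch of the provider shown above: credentials are registered once per host, and every shell created for that host then reuses a single cached paramiko client. The host address and credentials below are placeholders, and running this obviously requires the testlib installed plus a reachable SSH host:

```python
from frostfs_testlib.shell.interfaces import SshCredentials
from frostfs_testlib.shell.ssh_shell import SshConnectionProvider

provider = SshConnectionProvider()  # __new__ always returns the same instance
# Positional order mirrors the call shown later in this diff:
# login, password, private key path, key passphrase.
provider.store_creds("10.78.0.10", SshCredentials("service", "secret", None, None))

client = provider.provide("10.78.0.10", "22")       # connects on first use
same_client = provider.provide("10.78.0.10", "22")  # returns the cached SSHClient

provider.drop_all()  # close everything, e.g. in a session-level teardown
```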
def _create_connection(
|
||||
self,
|
||||
host: str,
|
||||
port: str,
|
||||
creds: SshCredentials,
|
||||
) -> SSHClient:
|
||||
for attempt in range(self.SSH_CONNECTION_ATTEMPTS):
|
||||
connection = SSHClient()
|
||||
connection.set_missing_host_key_policy(AutoAddPolicy())
|
||||
try:
|
||||
if creds.ssh_key_path:
|
||||
logger.info(
|
||||
f"Trying to connect to host {host} as {creds.ssh_login} using SSH key "
|
||||
f"{creds.ssh_key_path} (attempt {attempt})"
|
||||
)
|
||||
connection.connect(
|
||||
hostname=host,
|
||||
port=port,
|
||||
username=creds.ssh_login,
|
||||
pkey=_load_private_key(creds.ssh_key_path, creds.ssh_key_passphrase),
|
||||
timeout=self.CONNECTION_TIMEOUT,
|
||||
)
|
||||
else:
|
||||
logger.info(
|
||||
f"Trying to connect to host {host} as {creds.ssh_login} using password " f"(attempt {attempt})"
|
||||
)
|
||||
connection.connect(
|
||||
hostname=host,
|
||||
port=port,
|
||||
username=creds.ssh_login,
|
||||
password=creds.ssh_password,
|
||||
timeout=self.CONNECTION_TIMEOUT,
|
||||
)
|
||||
return connection
|
||||
except AuthenticationException:
|
||||
connection.close()
|
||||
logger.exception(f"Can't connect to host {host}")
|
||||
raise
|
||||
except (
|
||||
SSHException,
|
||||
ssh_exception.NoValidConnectionsError,
|
||||
AttributeError,
|
||||
socket.timeout,
|
||||
OSError,
|
||||
) as exc:
|
||||
connection.close()
|
||||
can_retry = attempt + 1 < self.SSH_CONNECTION_ATTEMPTS
|
||||
if can_retry:
|
||||
logger.warn(
|
||||
f"Can't connect to host {host}, will retry after {self.SSH_ATTEMPTS_INTERVAL}s. Error: {exc}"
|
||||
)
|
||||
sleep(self.SSH_ATTEMPTS_INTERVAL)
|
||||
continue
|
||||
logger.exception(f"Can't connect to host {host}")
|
||||
raise HostIsNotAvailable(host) from exc
|
||||
reporter = get_reporter()
|
||||
|
||||
|
||||
class HostIsNotAvailable(Exception):
|
||||
|
@ -123,7 +36,9 @@ class HostIsNotAvailable(Exception):
|
|||
|
||||
def log_command(func):
|
||||
@wraps(func)
|
||||
def wrapper(shell: "SSHShell", command: str, options: CommandOptions, *args, **kwargs) -> CommandResult:
|
||||
def wrapper(
|
||||
shell: "SSHShell", command: str, options: CommandOptions, *args, **kwargs
|
||||
) -> CommandResult:
|
||||
command_info = command.removeprefix("$ProgressPreference='SilentlyContinue'\n")
|
||||
with reporter.step(command_info):
|
||||
logger.info(f'Execute command "{command}" on "{shell.host}"')
|
||||
|
@ -176,6 +91,9 @@ class SSHShell(Shell):
|
|||
# to allow remote command to flush its output buffer
|
||||
DELAY_AFTER_EXIT = 0.2
|
||||
|
||||
SSH_CONNECTION_ATTEMPTS: ClassVar[int] = 3
|
||||
CONNECTION_TIMEOUT = 90
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
host: str,
|
||||
|
@ -185,34 +103,31 @@ class SSHShell(Shell):
|
|||
private_key_passphrase: Optional[str] = None,
|
||||
port: str = "22",
|
||||
command_inspectors: Optional[list[CommandInspector]] = None,
|
||||
custom_environment: Optional[dict] = None
|
||||
) -> None:
|
||||
super().__init__()
|
||||
self.connection_provider = SshConnectionProvider()
|
||||
self.connection_provider.store_creds(
|
||||
host, SshCredentials(login, password, private_key_path, private_key_passphrase)
|
||||
)
|
||||
self.host = host
|
||||
self.port = port
|
||||
|
||||
self.login = login
|
||||
self.password = password
|
||||
self.private_key_path = private_key_path
|
||||
self.private_key_passphrase = private_key_passphrase
|
||||
self.command_inspectors = command_inspectors or []
|
||||
|
||||
self.environment = custom_environment
|
||||
self.__connection: Optional[SSHClient] = None
|
||||
|
||||
@property
|
||||
def _connection(self):
|
||||
return self.connection_provider.provide(self.host, self.port)
|
||||
if not self.__connection:
|
||||
self.__connection = self._create_connection()
|
||||
return self.__connection
|
||||
|
||||
def drop(self):
|
||||
self.connection_provider.drop(self.host)
|
||||
self._reset_connection()
|
||||
|
||||
def exec(self, command: str, options: Optional[CommandOptions] = None) -> CommandResult:
|
||||
options = options or CommandOptions()
|
||||
|
||||
original_command = command
|
||||
extra_inspectors = options.extra_inspectors if options.extra_inspectors else []
|
||||
for inspector in [*self.command_inspectors, *extra_inspectors]:
|
||||
command = inspector.inspect(original_command, command)
|
||||
for inspector in self.command_inspectors:
|
||||
command = inspector.inspect(command)
|
||||
|
||||
if options.interactive_inputs:
|
||||
result = self._exec_interactive(command, options)
|
||||
|
@ -221,13 +136,15 @@ class SSHShell(Shell):
|
|||
|
||||
if options.check and result.return_code != 0:
|
||||
raise RuntimeError(
|
||||
f"Command: {command}\nreturn code: {result.return_code}\nOutput: {result.stdout}\nStderr: {result.stderr}\n"
|
||||
f"Command: {command}\nreturn code: {result.return_code}\nOutput: {result.stdout}"
|
||||
)
|
||||
return result
|
||||
|
||||
@log_command
|
||||
def _exec_interactive(self, command: str, options: CommandOptions) -> CommandResult:
|
||||
stdin, stdout, stderr = self._connection.exec_command(command, timeout=options.timeout, get_pty=True, environment=self.environment)
|
||||
stdin, stdout, stderr = self._connection.exec_command(
|
||||
command, timeout=options.timeout, get_pty=True
|
||||
)
|
||||
for interactive_input in options.interactive_inputs:
|
||||
input = interactive_input.input
|
||||
if not input.endswith("\n"):
|
||||
|
@ -254,7 +171,7 @@ class SSHShell(Shell):
|
|||
@log_command
|
||||
def _exec_non_interactive(self, command: str, options: CommandOptions) -> CommandResult:
|
||||
try:
|
||||
stdin, stdout, stderr = self._connection.exec_command(command, timeout=options.timeout, environment=self.environment)
|
||||
stdin, stdout, stderr = self._connection.exec_command(command, timeout=options.timeout)
|
||||
|
||||
if options.close_stdin:
|
||||
stdin.close()
|
||||
|
@ -276,7 +193,7 @@ class SSHShell(Shell):
|
|||
socket.timeout,
|
||||
) as exc:
|
||||
logger.exception(f"Can't execute command {command} on host: {self.host}")
|
||||
self.drop()
|
||||
self._reset_connection()
|
||||
raise HostIsNotAvailable(self.host) from exc
|
||||
|
||||
def _read_channels(
|
||||
|
@ -331,3 +248,57 @@ class SSHShell(Shell):
|
|||
full_stderr = b"".join(stderr_chunks)
|
||||
|
||||
return (full_stdout.decode(errors="ignore"), full_stderr.decode(errors="ignore"))
|
||||
|
||||
def _create_connection(self, attempts: int = SSH_CONNECTION_ATTEMPTS) -> SSHClient:
|
||||
for attempt in range(attempts):
|
||||
connection = SSHClient()
|
||||
connection.set_missing_host_key_policy(AutoAddPolicy())
|
||||
try:
|
||||
if self.private_key_path:
|
||||
logger.info(
|
||||
f"Trying to connect to host {self.host} as {self.login} using SSH key "
|
||||
f"{self.private_key_path} (attempt {attempt})"
|
||||
)
|
||||
connection.connect(
|
||||
hostname=self.host,
|
||||
port=self.port,
|
||||
username=self.login,
|
||||
pkey=_load_private_key(self.private_key_path, self.private_key_passphrase),
|
||||
timeout=self.CONNECTION_TIMEOUT,
|
||||
)
|
||||
else:
|
||||
logger.info(
|
||||
f"Trying to connect to host {self.host} as {self.login} using password "
|
||||
f"(attempt {attempt})"
|
||||
)
|
||||
connection.connect(
|
||||
hostname=self.host,
|
||||
port=self.port,
|
||||
username=self.login,
|
||||
password=self.password,
|
||||
timeout=self.CONNECTION_TIMEOUT,
|
||||
)
|
||||
return connection
|
||||
except AuthenticationException:
|
||||
connection.close()
|
||||
logger.exception(f"Can't connect to host {self.host}")
|
||||
raise
|
||||
except (
|
||||
SSHException,
|
||||
ssh_exception.NoValidConnectionsError,
|
||||
AttributeError,
|
||||
socket.timeout,
|
||||
OSError,
|
||||
) as exc:
|
||||
connection.close()
|
||||
can_retry = attempt + 1 < attempts
|
||||
if can_retry:
|
||||
logger.warn(f"Can't connect to host {self.host}, will retry. Error: {exc}")
|
||||
continue
|
||||
logger.exception(f"Can't connect to host {self.host}")
|
||||
raise HostIsNotAvailable(self.host) from exc
|
||||
|
||||
def _reset_connection(self) -> None:
|
||||
if self.__connection:
|
||||
self.__connection.close()
|
||||
self.__connection = None
|
||||
|
|
|
@ -8,8 +8,8 @@ from typing import List, Optional, Union
|
|||
|
||||
import base58
|
||||
|
||||
from frostfs_testlib import reporter
|
||||
from frostfs_testlib.cli import FrostfsCli
|
||||
from frostfs_testlib.reporter import get_reporter
|
||||
from frostfs_testlib.resources.cli import FROSTFS_CLI_EXEC
|
||||
from frostfs_testlib.resources.common import ASSETS_DIR, DEFAULT_WALLET_CONFIG
|
||||
from frostfs_testlib.shell import Shell
|
||||
|
@ -22,10 +22,11 @@ from frostfs_testlib.storage.dataclasses.acl import (
|
|||
)
|
||||
from frostfs_testlib.utils import wallet_utils
|
||||
|
||||
reporter = get_reporter()
|
||||
logger = logging.getLogger("NeoLogger")
|
||||
|
||||
|
||||
@reporter.step("Get extended ACL")
|
||||
@reporter.step_deco("Get extended ACL")
|
||||
def get_eacl(wallet_path: str, cid: str, shell: Shell, endpoint: str) -> Optional[str]:
|
||||
cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, DEFAULT_WALLET_CONFIG)
|
||||
try:
|
||||
|
@ -39,7 +40,7 @@ def get_eacl(wallet_path: str, cid: str, shell: Shell, endpoint: str) -> Optiona
|
|||
return result.stdout
|
||||
|
||||
|
||||
@reporter.step("Set extended ACL")
|
||||
@reporter.step_deco("Set extended ACL")
|
||||
def set_eacl(
|
||||
wallet_path: str,
|
||||
cid: str,
|
||||
|
@ -164,20 +165,24 @@ def eacl_rules(access: str, verbs: list, user: str) -> list[str]:
|
|||
return rules
|
||||
|
||||
|
||||
def sign_bearer(shell: Shell, wallet_path: str, eacl_rules_file_from: str, eacl_rules_file_to: str, json: bool) -> None:
|
||||
frostfscli = FrostfsCli(shell=shell, frostfs_cli_exec_path=FROSTFS_CLI_EXEC, config_file=DEFAULT_WALLET_CONFIG)
|
||||
def sign_bearer(
|
||||
shell: Shell, wallet_path: str, eacl_rules_file_from: str, eacl_rules_file_to: str, json: bool
|
||||
) -> None:
|
||||
frostfscli = FrostfsCli(
|
||||
shell=shell, frostfs_cli_exec_path=FROSTFS_CLI_EXEC, config_file=DEFAULT_WALLET_CONFIG
|
||||
)
|
||||
frostfscli.util.sign_bearer_token(
|
||||
wallet=wallet_path, from_file=eacl_rules_file_from, to_file=eacl_rules_file_to, json=json
|
||||
)
|
||||
|
||||
|
||||
@reporter.step("Wait for eACL cache expired")
|
||||
@reporter.step_deco("Wait for eACL cache expired")
|
||||
def wait_for_cache_expired():
|
||||
sleep(FROSTFS_CONTRACT_CACHE_TIMEOUT)
|
||||
return
|
||||
|
||||
|
||||
@reporter.step("Return bearer token in base64 to caller")
|
||||
@reporter.step_deco("Return bearer token in base64 to caller")
|
||||
def bearer_token_base64_from_file(
|
||||
bearer_path: str,
|
||||
) -> str:
|
||||
|
|
|
@ -1,23 +1,22 @@
|
|||
import json
|
||||
import logging
|
||||
import re
|
||||
import requests
|
||||
from dataclasses import dataclass
|
||||
from time import sleep
|
||||
from typing import Optional, Union
|
||||
|
||||
from frostfs_testlib import reporter
|
||||
from frostfs_testlib.cli import FrostfsCli
|
||||
from frostfs_testlib.reporter import get_reporter
|
||||
from frostfs_testlib.resources.cli import CLI_DEFAULT_TIMEOUT, FROSTFS_CLI_EXEC
|
||||
from frostfs_testlib.resources.common import DEFAULT_WALLET_CONFIG
|
||||
from frostfs_testlib.shell import Shell
|
||||
from frostfs_testlib.steps.cli.object import put_object, put_object_to_random_node
|
||||
from frostfs_testlib.storage.cluster import Cluster, ClusterNode
|
||||
from frostfs_testlib.storage.cluster import Cluster
|
||||
from frostfs_testlib.storage.dataclasses.storage_object_info import StorageObjectInfo
|
||||
from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
|
||||
from frostfs_testlib.utils import json_utils
|
||||
from frostfs_testlib.utils.file_utils import generate_file, get_file_hash
|
||||
|
||||
reporter = get_reporter()
|
||||
logger = logging.getLogger("NeoLogger")
|
||||
|
||||
|
||||
|
@ -47,7 +46,7 @@ class StorageContainer:
|
|||
def get_wallet_config_path(self) -> str:
|
||||
return self.storage_container_info.wallet_file.config_path
|
||||
|
||||
@reporter.step("Generate new object and put in container")
|
||||
@reporter.step_deco("Generate new object and put in container")
|
||||
def generate_object(
|
||||
self,
|
||||
size: int,
|
||||
|
@ -103,7 +102,7 @@ SINGLE_PLACEMENT_RULE = "REP 1 IN X CBF 1 SELECT 4 FROM * AS X"
|
|||
REP_2_FOR_3_NODES_PLACEMENT_RULE = "REP 2 IN X CBF 1 SELECT 3 FROM * AS X"
|
||||
|
||||
|
||||
@reporter.step("Create Container")
|
||||
@reporter.step_deco("Create Container")
|
||||
def create_container(
|
||||
wallet: str,
|
||||
shell: Shell,
|
||||
|
@ -178,7 +177,9 @@ def wait_for_container_creation(
|
|||
return
|
||||
logger.info(f"There is no {cid} in {containers} yet; sleep {sleep_interval} and continue")
|
||||
sleep(sleep_interval)
|
||||
raise RuntimeError(f"After {attempts * sleep_interval} seconds container {cid} hasn't been persisted; exiting")
|
||||
raise RuntimeError(
|
||||
f"After {attempts * sleep_interval} seconds container {cid} hasn't been persisted; exiting"
|
||||
)
|
||||
|
||||
|
||||
def wait_for_container_deletion(
|
||||
|
@ -196,7 +197,7 @@ def wait_for_container_deletion(
|
|||
raise AssertionError(f"Expected container deleted during {attempts * sleep_interval} sec.")
|
||||
|
||||
|
||||
@reporter.step("List Containers")
|
||||
@reporter.step_deco("List Containers")
|
||||
def list_containers(
|
||||
wallet: str, shell: Shell, endpoint: str, timeout: Optional[str] = CLI_DEFAULT_TIMEOUT
|
||||
) -> list[str]:
|
||||
|
@ -217,7 +218,7 @@ def list_containers(
|
|||
return result.stdout.split()
|
||||
|
||||
|
||||
@reporter.step("List Objects in container")
|
||||
@reporter.step_deco("List Objects in container")
|
||||
def list_objects(
|
||||
wallet: str,
|
||||
shell: Shell,
|
||||
|
@ -238,12 +239,14 @@ def list_objects(
|
|||
(list): list of containers
|
||||
"""
|
||||
cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, DEFAULT_WALLET_CONFIG)
|
||||
result = cli.container.list_objects(rpc_endpoint=endpoint, wallet=wallet, cid=container_id, timeout=timeout)
|
||||
result = cli.container.list_objects(
|
||||
rpc_endpoint=endpoint, wallet=wallet, cid=container_id, timeout=timeout
|
||||
)
|
||||
logger.info(f"Container objects: \n{result}")
|
||||
return result.stdout.split()
|
||||
|
||||
|
||||
@reporter.step("Get Container")
|
||||
@reporter.step_deco("Get Container")
|
||||
def get_container(
|
||||
wallet: str,
|
||||
cid: str,
|
||||
|
@ -267,7 +270,9 @@ def get_container(
|
|||
"""
|
||||
|
||||
cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, DEFAULT_WALLET_CONFIG)
|
||||
result = cli.container.get(rpc_endpoint=endpoint, wallet=wallet, cid=cid, json_mode=json_mode, timeout=timeout)
|
||||
result = cli.container.get(
|
||||
rpc_endpoint=endpoint, wallet=wallet, cid=cid, json_mode=json_mode, timeout=timeout
|
||||
)
|
||||
|
||||
if not json_mode:
|
||||
return result.stdout
|
||||
|
@ -281,7 +286,7 @@ def get_container(
|
|||
return container_info
|
||||
|
||||
|
||||
@reporter.step("Delete Container")
|
||||
@reporter.step_deco("Delete Container")
|
||||
# TODO: make the error message about a non-found container more user-friendly
|
||||
def delete_container(
|
||||
wallet: str,
|
||||
@@ -344,34 +349,11 @@ def _parse_cid(output: str) -> str:
    return splitted[1]


@reporter.step("Search container by name")
def search_container_by_name(name: str, node: ClusterNode):
    node_shell = node.host.get_shell()
    output = node_shell.exec(f"curl -I HEAD http://127.0.0.1:8084/{name}")
    pattern = r"X-Container-Id: (\S+)"
    cid = re.findall(pattern, output.stdout)
    if cid:
        return cid[0]
@reporter.step_deco("Search container by name")
def search_container_by_name(wallet: str, name: str, shell: Shell, endpoint: str):
    list_cids = list_containers(wallet, shell, endpoint)
    for cid in list_cids:
        cont_info = get_container(wallet, cid, shell, endpoint, True)
        if cont_info.get("attributes", {}).get("Name", None) == name:
            return cid
    return None


@reporter.step("Search for nodes with a container")
def search_nodes_with_container(
    wallet: str,
    cid: str,
    shell: Shell,
    endpoint: str,
    cluster: Cluster,
    timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
) -> list[ClusterNode]:
    cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, DEFAULT_WALLET_CONFIG)
    result = cli.container.search_node(rpc_endpoint=endpoint, wallet=wallet, cid=cid, timeout=timeout)

    pattern = r"[0-9]+(?:\.[0-9]+){3}"
    nodes_ip = list(set(re.findall(pattern, result.stdout)))

    with reporter.step(f"nodes ips = {nodes_ip}"):
        nodes_list = cluster.get_nodes_by_ip(nodes_ip)

    with reporter.step(f"Return nodes - {nodes_list}"):
        return nodes_list
||||
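The curl-based `search_container_by_name` above resolves a container name through the HTTP gate's `X-Container-Id` response header. A rough standalone equivalent using `requests`; the gateway address, port and container name are placeholders, and the header name simply mirrors the regex in the code above:

```python
import re
from typing import Optional

import requests

GATEWAY = "http://127.0.0.1:8084"  # placeholder HTTP gate address


def container_id_by_name(name: str) -> Optional[str]:
    # HEAD keeps the response small; only the headers are needed.
    resp = requests.head(f"{GATEWAY}/{name}", timeout=30)
    cid = resp.headers.get("X-Container-Id")
    if cid:
        return cid
    # Fall back to scraping raw header text, mirroring the regex above.
    match = re.search(r"X-Container-Id: (\S+)", str(resp.headers))
    return match.group(1) if match else None


print(container_id_by_name("my-container"))
```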
|
|
|
@ -5,20 +5,20 @@ import re
|
|||
import uuid
|
||||
from typing import Any, Optional
|
||||
|
||||
from frostfs_testlib import reporter
|
||||
from frostfs_testlib.cli import FrostfsCli
|
||||
from frostfs_testlib.cli.neogo import NeoGo
|
||||
from frostfs_testlib.reporter import get_reporter
|
||||
from frostfs_testlib.resources.cli import CLI_DEFAULT_TIMEOUT, FROSTFS_CLI_EXEC, NEOGO_EXECUTABLE
|
||||
from frostfs_testlib.resources.common import ASSETS_DIR, DEFAULT_WALLET_CONFIG
|
||||
from frostfs_testlib.shell import Shell
|
||||
from frostfs_testlib.storage.cluster import Cluster, ClusterNode
|
||||
from frostfs_testlib.storage.cluster import Cluster
|
||||
from frostfs_testlib.utils import json_utils
|
||||
from frostfs_testlib.utils.cli_utils import parse_cmd_table, parse_netmap_output
|
||||
|
||||
logger = logging.getLogger("NeoLogger")
|
||||
reporter = get_reporter()
|
||||
|
||||
|
||||
@reporter.step("Get object from random node")
|
||||
@reporter.step_deco("Get object from random node")
|
||||
def get_object_from_random_node(
|
||||
wallet: str,
|
||||
cid: str,
|
||||
|
@ -69,7 +69,7 @@ def get_object_from_random_node(
|
|||
)
|
||||
|
||||
|
||||
@reporter.step("Get object from {endpoint}")
|
||||
@reporter.step_deco("Get object from {endpoint}")
|
||||
def get_object(
|
||||
wallet: str,
|
||||
cid: str,
|
||||
|
@ -125,7 +125,7 @@ def get_object(
|
|||
return file_path
|
||||
|
||||
|
||||
@reporter.step("Get Range Hash from {endpoint}")
|
||||
@reporter.step_deco("Get Range Hash from {endpoint}")
|
||||
def get_range_hash(
|
||||
wallet: str,
|
||||
cid: str,
|
||||
|
@ -175,7 +175,7 @@ def get_range_hash(
|
|||
return result.stdout.split(":")[1].strip()
|
||||
|
||||
|
||||
@reporter.step("Put object to random node")
|
||||
@reporter.step_deco("Put object to random node")
|
||||
def put_object_to_random_node(
|
||||
wallet: str,
|
||||
path: str,
|
||||
|
@ -183,7 +183,6 @@ def put_object_to_random_node(
|
|||
shell: Shell,
|
||||
cluster: Cluster,
|
||||
bearer: Optional[str] = None,
|
||||
copies_number: Optional[int] = None,
|
||||
attributes: Optional[dict] = None,
|
||||
xhdr: Optional[dict] = None,
|
||||
wallet_config: Optional[str] = None,
|
||||
|
@ -202,7 +201,6 @@ def put_object_to_random_node(
|
|||
shell: executor for cli command
|
||||
cluster: cluster under test
|
||||
bearer: path to Bearer Token file, appends to `--bearer` key
|
||||
copies_number: Number of copies of the object to store within the RPC call
|
||||
attributes: User attributes in form of Key1=Value1,Key2=Value2
|
||||
cluster: cluster under test
|
||||
wallet_config: path to the wallet config
|
||||
|
@ -223,7 +221,6 @@ def put_object_to_random_node(
|
|||
shell,
|
||||
endpoint,
|
||||
bearer,
|
||||
copies_number,
|
||||
attributes,
|
||||
xhdr,
|
||||
wallet_config,
|
||||
|
@ -234,7 +231,7 @@ def put_object_to_random_node(
|
|||
)
|
||||
|
||||
|
||||
@reporter.step("Put object at {endpoint} in container {cid}")
|
||||
@reporter.step_deco("Put object at {endpoint} in container {cid}")
|
||||
def put_object(
|
||||
wallet: str,
|
||||
path: str,
|
||||
|
@ -242,7 +239,6 @@ def put_object(
|
|||
shell: Shell,
|
||||
endpoint: str,
|
||||
bearer: Optional[str] = None,
|
||||
copies_number: Optional[int] = None,
|
||||
attributes: Optional[dict] = None,
|
||||
xhdr: Optional[dict] = None,
|
||||
wallet_config: Optional[str] = None,
|
||||
|
@ -260,7 +256,6 @@ def put_object(
|
|||
cid: ID of Container where we get the Object from
|
||||
shell: executor for cli command
|
||||
bearer: path to Bearer Token file, appends to `--bearer` key
|
||||
copies_number: Number of copies of the object to store within the RPC call
|
||||
attributes: User attributes in form of Key1=Value1,Key2=Value2
|
||||
endpoint: FrostFS endpoint to send request to, appends to `--rpc-endpoint` key
|
||||
wallet_config: path to the wallet config
|
||||
|
@ -281,7 +276,6 @@ def put_object(
|
|||
cid=cid,
|
||||
attributes=attributes,
|
||||
bearer=bearer,
|
||||
copies_number=copies_number,
|
||||
expire_at=expire_at,
|
||||
no_progress=no_progress,
|
||||
xhdr=xhdr,
|
||||
|
@ -295,7 +289,7 @@ def put_object(
|
|||
return oid.strip()
|
||||
|
||||
|
||||
@reporter.step("Delete object {cid}/{oid} from {endpoint}")
|
||||
@reporter.step_deco("Delete object {cid}/{oid} from {endpoint}")
|
||||
def delete_object(
|
||||
wallet: str,
|
||||
cid: str,
|
||||
|
@ -343,7 +337,7 @@ def delete_object(
|
|||
return tombstone.strip()
|
||||
|
||||
|
||||
@reporter.step("Get Range")
|
||||
@reporter.step_deco("Get Range")
|
||||
def get_range(
|
||||
wallet: str,
|
||||
cid: str,
|
||||
|
@ -396,7 +390,7 @@ def get_range(
|
|||
return range_file_path, content
|
||||
|
||||
|
||||
@reporter.step("Lock Object")
|
||||
@reporter.step_deco("Lock Object")
|
||||
def lock_object(
|
||||
wallet: str,
|
||||
cid: str,
|
||||
|
@ -457,7 +451,7 @@ def lock_object(
|
|||
return oid.strip()
|
||||
|
||||
|
||||
@reporter.step("Search object")
|
||||
@reporter.step_deco("Search object")
|
||||
def search_object(
|
||||
wallet: str,
|
||||
cid: str,
|
||||
|
@ -502,7 +496,9 @@ def search_object(
|
|||
cid=cid,
|
||||
bearer=bearer,
|
||||
xhdr=xhdr,
|
||||
filters=[f"{filter_key} EQ {filter_val}" for filter_key, filter_val in filters.items()] if filters else None,
|
||||
filters=[f"{filter_key} EQ {filter_val}" for filter_key, filter_val in filters.items()]
|
||||
if filters
|
||||
else None,
|
||||
session=session,
|
||||
phy=phy,
|
||||
root=root,
|
||||
|
@ -514,17 +510,19 @@ def search_object(
|
|||
if expected_objects_list:
|
||||
if sorted(found_objects) == sorted(expected_objects_list):
|
||||
logger.info(
|
||||
f"Found objects list '{found_objects}' " f"is equal for expected list '{expected_objects_list}'"
|
||||
f"Found objects list '{found_objects}' "
|
||||
f"is equal for expected list '{expected_objects_list}'"
|
||||
)
|
||||
else:
|
||||
logger.warning(
|
||||
f"Found object list {found_objects} " f"is not equal to expected list '{expected_objects_list}'"
|
||||
f"Found object list {found_objects} "
|
||||
f"is not equal to expected list '{expected_objects_list}'"
|
||||
)
|
||||
|
||||
return found_objects
|
||||
|
||||
|
||||
@reporter.step("Get netmap netinfo")
|
||||
@reporter.step_deco("Get netmap netinfo")
|
||||
def get_netmap_netinfo(
|
||||
wallet: str,
|
||||
shell: Shell,
|
||||
|
@ -576,7 +574,7 @@ def get_netmap_netinfo(
|
|||
return settings
|
||||
|
||||
|
||||
@reporter.step("Head object")
|
||||
@reporter.step_deco("Head object")
|
||||
def head_object(
|
||||
wallet: str,
|
||||
cid: str,
|
||||
|
@ -672,7 +670,7 @@ def head_object(
|
|||
return json_utils.decode_simple_header(decoded)
|
||||
|
||||
|
||||
@reporter.step("Run neo-go dump-keys")
|
||||
@reporter.step_deco("Run neo-go dump-keys")
|
||||
def neo_go_dump_keys(shell: Shell, wallet: str) -> dict:
|
||||
"""
|
||||
Run neo-go dump keys command
|
||||
|
@ -697,7 +695,7 @@ def neo_go_dump_keys(shell: Shell, wallet: str) -> dict:
|
|||
return {address_id: wallet_key}
|
||||
|
||||
|
||||
@reporter.step("Run neo-go query height")
|
||||
@reporter.step_deco("Run neo-go query height")
|
||||
def neo_go_query_height(shell: Shell, endpoint: str) -> dict:
|
||||
"""
|
||||
Run neo-go query height command
|
||||
|
@ -727,62 +725,3 @@ def neo_go_query_height(shell: Shell, endpoint: str) -> dict:
|
|||
latest_block[0].replace(":", ""): int(latest_block[1]),
|
||||
validated_state[0].replace(":", ""): int(validated_state[1]),
|
||||
}
|
||||
|
||||
|
||||
@reporter.step("Search object nodes")
|
||||
def get_object_nodes(
|
||||
cluster: Cluster,
|
||||
wallet: str,
|
||||
cid: str,
|
||||
oid: str,
|
||||
shell: Shell,
|
||||
endpoint: str,
|
||||
bearer: str = "",
|
||||
xhdr: Optional[dict] = None,
|
||||
is_direct: bool = False,
|
||||
verify_presence_all: bool = False,
|
||||
wallet_config: Optional[str] = None,
|
||||
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
|
||||
) -> list[ClusterNode]:
|
||||
cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet_config or DEFAULT_WALLET_CONFIG)
|
||||
|
||||
result_object_nodes = cli.object.nodes(
|
||||
rpc_endpoint=endpoint,
|
||||
wallet=wallet,
|
||||
cid=cid,
|
||||
oid=oid,
|
||||
bearer=bearer,
|
||||
ttl=1 if is_direct else None,
|
||||
xhdr=xhdr,
|
||||
timeout=timeout,
|
||||
verify_presence_all=verify_presence_all,
|
||||
)
|
||||
|
||||
parsing_output = parse_cmd_table(result_object_nodes.stdout, "|")
|
||||
list_object_nodes = [
|
||||
node
|
||||
for node in parsing_output
|
||||
if node["should_contain_object"] == "true" and node["actually_contains_object"] == "true"
|
||||
]
|
||||
|
||||
netmap_nodes_list = parse_netmap_output(
|
||||
cli.netmap.snapshot(
|
||||
rpc_endpoint=endpoint,
|
||||
wallet=wallet,
|
||||
).stdout
|
||||
)
|
||||
netmap_nodes = [
|
||||
netmap_node
|
||||
for object_node in list_object_nodes
|
||||
for netmap_node in netmap_nodes_list
|
||||
if object_node["node_id"] == netmap_node.node_id
|
||||
]
|
||||
|
||||
result = [
|
||||
cluster_node
|
||||
for netmap_node in netmap_nodes
|
||||
for cluster_node in cluster.cluster_nodes
|
||||
if netmap_node.node == cluster_node.host_ip
|
||||
]
|
||||
|
||||
return result
|
||||
|
|
|
@ -12,7 +12,7 @@
|
|||
import logging
|
||||
from typing import Optional, Tuple
|
||||
|
||||
from frostfs_testlib import reporter
|
||||
from frostfs_testlib.reporter import get_reporter
|
||||
from frostfs_testlib.resources.cli import CLI_DEFAULT_TIMEOUT
|
||||
from frostfs_testlib.resources.common import DEFAULT_WALLET_CONFIG
|
||||
from frostfs_testlib.shell import Shell
|
||||
|
@ -20,6 +20,7 @@ from frostfs_testlib.steps.cli.object import head_object
|
|||
from frostfs_testlib.storage.cluster import Cluster, StorageNode
|
||||
from frostfs_testlib.storage.dataclasses.storage_object_info import StorageObjectInfo
|
||||
|
||||
reporter = get_reporter()
|
||||
logger = logging.getLogger("NeoLogger")
|
||||
|
||||
|
||||
|
@ -112,7 +113,7 @@ def get_complex_object_split_ranges(
|
|||
return ranges
|
||||
|
||||
|
||||
@reporter.step("Get Link Object")
|
||||
@reporter.step_deco("Get Link Object")
|
||||
def get_link_object(
|
||||
wallet: str,
|
||||
cid: str,
|
||||
|
@ -165,7 +166,7 @@ def get_link_object(
|
|||
return None
|
||||
|
||||
|
||||
@reporter.step("Get Last Object")
|
||||
@reporter.step_deco("Get Last Object")
|
||||
def get_last_object(
|
||||
wallet: str,
|
||||
cid: str,
|
||||
|
|
|
@ -2,8 +2,8 @@ import logging
|
|||
from time import sleep
|
||||
from typing import Optional
|
||||
|
||||
from frostfs_testlib import reporter
|
||||
from frostfs_testlib.cli import FrostfsAdm, FrostfsCli, NeoGo
|
||||
from frostfs_testlib.reporter import get_reporter
|
||||
from frostfs_testlib.resources.cli import (
|
||||
CLI_DEFAULT_TIMEOUT,
|
||||
FROSTFS_ADM_CONFIG_PATH,
|
||||
|
@ -19,10 +19,11 @@ from frostfs_testlib.storage.dataclasses.frostfs_services import InnerRing, Morp
|
|||
from frostfs_testlib.testing.test_control import wait_for_success
|
||||
from frostfs_testlib.utils import datetime_utils, wallet_utils
|
||||
|
||||
reporter = get_reporter()
|
||||
logger = logging.getLogger("NeoLogger")
|
||||
|
||||
@reporter.step("Get epochs from nodes")
@reporter.step_deco("Get epochs from nodes")
def get_epochs_from_nodes(shell: Shell, cluster: Cluster) -> dict[str, int]:
    """
    Get current epochs on each node.

@@ -40,8 +41,10 @@ def get_epochs_from_nodes(shell: Shell, cluster: Cluster) -> dict[str, int]:
    return epochs_by_node


@reporter.step("Ensure fresh epoch")
def ensure_fresh_epoch(shell: Shell, cluster: Cluster, alive_node: Optional[StorageNode] = None) -> int:
@reporter.step_deco("Ensure fresh epoch")
def ensure_fresh_epoch(
    shell: Shell, cluster: Cluster, alive_node: Optional[StorageNode] = None
) -> int:
    # ensure new fresh epoch to avoid epoch switch during test session
    alive_node = alive_node if alive_node else cluster.services(StorageNode)[0]
    current_epoch = get_epoch(shell, cluster, alive_node)
@@ -51,17 +54,19 @@ def ensure_fresh_epoch(shell: Shell, cluster: Cluster, alive_node: Optional[Stor
    return epoch


@reporter.step("Wait up to {timeout} seconds for nodes on cluster to align epochs")
def wait_for_epochs_align(shell: Shell, cluster: Cluster, timeout=60):
    @wait_for_success(timeout, 5, None, True)
    def check_epochs():
        epochs_by_node = get_epochs_from_nodes(shell, cluster)
        assert len(set(epochs_by_node.values())) == 1, f"unaligned epochs found: {epochs_by_node}"

    check_epochs()
@reporter.step_deco("Wait for epochs align in whole cluster")
@wait_for_success(60, 5)
def wait_for_epochs_align(shell: Shell, cluster: Cluster) -> None:
    epochs = []
    for node in cluster.services(StorageNode):
        epochs.append(get_epoch(shell, cluster, node))
    unique_epochs = list(set(epochs))
    assert (
        len(unique_epochs) == 1
    ), f"unaligned epochs found, {epochs}, count of unique epochs {len(unique_epochs)}"
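Both variants above reduce to the same pattern: poll an epoch snapshot from every node until the set of values collapses to one. A library-independent sketch of that polling loop; the 60-second timeout and 5-second interval mirror the values in the hunk, and `get_epochs` is a stand-in for `get_epochs_from_nodes`:

```python
import time
from typing import Callable


def wait_until_aligned(
    get_epochs: Callable[[], dict[str, int]], timeout: int = 60, interval: int = 5
) -> dict[str, int]:
    # Poll until every node reports the same epoch, or fail after `timeout` seconds.
    deadline = time.monotonic() + timeout
    while True:
        epochs_by_node = get_epochs()
        if len(set(epochs_by_node.values())) == 1:
            return epochs_by_node
        if time.monotonic() >= deadline:
            raise AssertionError(f"unaligned epochs found: {epochs_by_node}")
        time.sleep(interval)


# Demo with a fake snapshot source instead of a real cluster:
snapshots = iter([{"node1": 10, "node2": 9}, {"node1": 10, "node2": 10}])
print(wait_until_aligned(lambda: next(snapshots), timeout=60, interval=0))
```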
||||
|
||||
@reporter.step("Get Epoch")
|
||||
@reporter.step_deco("Get Epoch")
|
||||
def get_epoch(shell: Shell, cluster: Cluster, alive_node: Optional[StorageNode] = None):
|
||||
alive_node = alive_node if alive_node else cluster.services(StorageNode)[0]
|
||||
endpoint = alive_node.get_rpc_endpoint()
|
||||
|
@ -74,7 +79,7 @@ def get_epoch(shell: Shell, cluster: Cluster, alive_node: Optional[StorageNode]
|
|||
return int(epoch.stdout)
|
||||
|
||||
|
||||
@reporter.step("Tick Epoch")
|
||||
@reporter.step_deco("Tick Epoch")
|
||||
def tick_epoch(shell: Shell, cluster: Cluster, alive_node: Optional[StorageNode] = None):
|
||||
"""
|
||||
Tick epoch using frostfs-adm or NeoGo if frostfs-adm is not available (DevEnv)
|
||||
|
@ -87,7 +92,7 @@ def tick_epoch(shell: Shell, cluster: Cluster, alive_node: Optional[StorageNode]
|
|||
alive_node = alive_node if alive_node else cluster.services(StorageNode)[0]
|
||||
remote_shell = alive_node.host.get_shell()
|
||||
|
||||
if "force_transactions" not in alive_node.host.config.attributes:
|
||||
if FROSTFS_ADM_EXEC and FROSTFS_ADM_CONFIG_PATH:
|
||||
# If frostfs-adm is available, then we tick epoch with it (to be consistent with UAT tests)
|
||||
frostfs_adm = FrostfsAdm(
|
||||
shell=remote_shell,
|
||||
|
|
@@ -10,38 +10,30 @@ from urllib.parse import quote_plus

import requests

from frostfs_testlib import reporter
from frostfs_testlib.reporter import get_reporter
from frostfs_testlib.resources.common import SIMPLE_OBJECT_SIZE
from frostfs_testlib.s3.aws_cli_client import command_options
from frostfs_testlib.s3.aws_cli_client import LONG_TIMEOUT
from frostfs_testlib.shell import Shell
from frostfs_testlib.shell.local_shell import LocalShell
from frostfs_testlib.steps.cli.object import get_object
from frostfs_testlib.steps.storage_policy import get_nodes_without_object
from frostfs_testlib.storage.cluster import StorageNode
from frostfs_testlib.testing.test_control import retry
from frostfs_testlib.utils.cli_utils import _cmd_run
from frostfs_testlib.utils.file_utils import get_file_hash

reporter = get_reporter()

logger = logging.getLogger("NeoLogger")

ASSETS_DIR = os.getenv("ASSETS_DIR", "TemporaryDir/")
local_shell = LocalShell()


@reporter.step("Get via HTTP Gate")
def get_via_http_gate(
    cid: str,
    oid: str,
    endpoint: str,
    http_hostname: str,
    request_path: Optional[str] = None,
    timeout: Optional[int] = 300,
):
@reporter.step_deco("Get via HTTP Gate")
def get_via_http_gate(cid: str, oid: str, endpoint: str, request_path: Optional[str] = None):
    """
    This function gets given object from HTTP gate
    cid: container id to get object from
    oid: object ID
    endpoint: http gate endpoint
    http_hostname: http host name on the node
    request_path: (optional) http request, if ommited - use default [{endpoint}/get/{cid}/{oid}]
    """

@@ -51,14 +43,13 @@ def get_via_http_gate(
    else:
        request = f"{endpoint}{request_path}"

    resp = requests.get(request, headers={"Host": http_hostname}, stream=True, timeout=timeout, verify=False)
    resp = requests.get(request, stream=True)

    if not resp.ok:
        raise Exception(
            f"""Failed to get object via HTTP gate:
                request: {resp.request.path_url},
                response: {resp.text},
                headers: {resp.headers},
                status code: {resp.status_code} {resp.reason}"""
        )

@@ -71,24 +62,22 @@ def get_via_http_gate(
    return file_path
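The longer signature above adds an explicit `Host` header, a client timeout and `verify=False` to the gateway request. A compact standalone sketch of that download path with `requests`; the endpoint, host name, identifiers and output path are all placeholders:

```python
import uuid

import requests

endpoint = "http://10.78.0.10"           # placeholder HTTP gate endpoint
http_hostname = "grpc01.frostfs.devenv"  # placeholder virtual host name
cid, oid = "<container-id>", "<object-id>"

request_url = f"{endpoint}/get/{cid}/{oid}"
resp = requests.get(
    request_url,
    headers={"Host": http_hostname},  # route through the gate's virtual host
    stream=True,                      # avoid loading the whole object into memory
    timeout=300,
    verify=False,                     # test setups commonly use self-signed TLS
)
resp.raise_for_status()

file_path = f"/tmp/{cid}_{oid}_{uuid.uuid4()}"
with open(file_path, "wb") as out:
    for chunk in resp.iter_content(chunk_size=1024 * 1024):
        out.write(chunk)
```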
||||
|
||||
@reporter.step("Get via Zip HTTP Gate")
|
||||
def get_via_zip_http_gate(cid: str, prefix: str, endpoint: str, http_hostname: str, timeout: Optional[int] = 300):
|
||||
@reporter.step_deco("Get via Zip HTTP Gate")
|
||||
def get_via_zip_http_gate(cid: str, prefix: str, endpoint: str):
|
||||
"""
|
||||
This function gets given object from HTTP gate
|
||||
cid: container id to get object from
|
||||
prefix: common prefix
|
||||
endpoint: http gate endpoint
|
||||
http_hostname: http host name on the node
|
||||
"""
|
||||
request = f"{endpoint}/zip/{cid}/{prefix}"
|
||||
resp = requests.get(request, stream=True, timeout=timeout, verify=False)
|
||||
resp = requests.get(request, stream=True)
|
||||
|
||||
if not resp.ok:
|
||||
raise Exception(
|
||||
f"""Failed to get object via HTTP gate:
|
||||
request: {resp.request.path_url},
|
||||
response: {resp.text},
|
||||
headers: {resp.headers},
|
||||
status code: {resp.status_code} {resp.reason}"""
|
||||
)
|
||||
|
||||
|
@ -105,21 +94,15 @@ def get_via_zip_http_gate(cid: str, prefix: str, endpoint: str, http_hostname: s
|
|||
return os.path.join(os.getcwd(), ASSETS_DIR, prefix)
|
||||
|
||||
|
||||
@reporter.step("Get via HTTP Gate by attribute")
|
||||
@reporter.step_deco("Get via HTTP Gate by attribute")
|
||||
def get_via_http_gate_by_attribute(
|
||||
cid: str,
|
||||
attribute: dict,
|
||||
endpoint: str,
|
||||
http_hostname: str,
|
||||
request_path: Optional[str] = None,
|
||||
timeout: Optional[int] = 300,
|
||||
cid: str, attribute: dict, endpoint: str, request_path: Optional[str] = None
|
||||
):
|
||||
"""
|
||||
This function gets given object from HTTP gate
|
||||
cid: CID to get object from
|
||||
attribute: attribute {name: attribute} value pair
|
||||
endpoint: http gate endpoint
|
||||
http_hostname: http host name on the node
|
||||
request_path: (optional) http request path, if ommited - use default [{endpoint}/get_by_attribute/{Key}/{Value}]
|
||||
"""
|
||||
attr_name = list(attribute.keys())[0]
|
||||
|
@ -130,14 +113,13 @@ def get_via_http_gate_by_attribute(
|
|||
else:
|
||||
request = f"{endpoint}{request_path}"
|
||||
|
||||
resp = requests.get(request, stream=True, timeout=timeout, verify=False, headers={"Host": http_hostname})
|
||||
resp = requests.get(request, stream=True)
|
||||
|
||||
if not resp.ok:
|
||||
raise Exception(
|
||||
f"""Failed to get object via HTTP gate:
|
||||
request: {resp.request.path_url},
|
||||
response: {resp.text},
|
||||
headers: {resp.headers},
|
||||
status code: {resp.status_code} {resp.reason}"""
|
||||
)
|
||||
|
||||
|
@ -150,11 +132,8 @@ def get_via_http_gate_by_attribute(
|
|||
return file_path
|
||||
|
||||
|
||||
# TODO: pass http_hostname as a header
|
||||
@reporter.step("Upload via HTTP Gate")
|
||||
def upload_via_http_gate(
|
||||
cid: str, path: str, endpoint: str, headers: Optional[dict] = None, timeout: Optional[int] = 300
|
||||
) -> str:
|
||||
@reporter.step_deco("Upload via HTTP Gate")
|
||||
def upload_via_http_gate(cid: str, path: str, endpoint: str, headers: Optional[dict] = None) -> str:
|
||||
"""
|
||||
This function upload given object through HTTP gate
|
||||
cid: CID to get object from
|
||||
|
@ -165,7 +144,7 @@ def upload_via_http_gate(
|
|||
request = f"{endpoint}/upload/{cid}"
|
||||
files = {"upload_file": open(path, "rb")}
|
||||
body = {"filename": path}
|
||||
resp = requests.post(request, files=files, data=body, headers=headers, timeout=timeout, verify=False)
|
||||
resp = requests.post(request, files=files, data=body, headers=headers)
|
||||
|
||||
if not resp.ok:
|
||||
raise Exception(
|
||||
|
@ -183,7 +162,7 @@ def upload_via_http_gate(
|
|||
return resp.json().get("object_id")
|
||||
|
||||
|
||||
@reporter.step("Check is the passed object large")
|
||||
@reporter.step_deco("Check is the passed object large")
|
||||
def is_object_large(filepath: str) -> bool:
|
||||
"""
|
||||
This function check passed file size and return True if file_size > SIMPLE_OBJECT_SIZE
|
||||
|
@ -197,8 +176,7 @@ def is_object_large(filepath: str) -> bool:
|
|||
return False
|
||||
|
||||
|
||||
# TODO: pass http_hostname as a header
|
||||
@reporter.step("Upload via HTTP Gate using Curl")
|
||||
@reporter.step_deco("Upload via HTTP Gate using Curl")
|
||||
def upload_via_http_gate_curl(
|
||||
cid: str,
|
||||
filepath: str,
|
@@ -223,16 +201,16 @@ def upload_via_http_gate_curl(
    large_object = is_object_large(filepath)
    if large_object:
        # pre-clean
        local_shell.exec("rm pipe -f")
        _cmd_run("rm pipe -f")
        files = f"file=@pipe;filename={os.path.basename(filepath)}"
        cmd = f"mkfifo pipe;cat {filepath} > pipe & curl -k --no-buffer -F '{files}' {attributes} {request}"
        output = local_shell.exec(cmd, command_options)
        cmd = f"mkfifo pipe;cat {filepath} > pipe & curl --no-buffer -F '{files}' {attributes} {request}"
        output = _cmd_run(cmd, LONG_TIMEOUT)
        # clean up pipe
        local_shell.exec("rm pipe")
        _cmd_run("rm pipe")
    else:
        files = f"file=@{filepath};filename={os.path.basename(filepath)}"
        cmd = f"curl -k -F '{files}' {attributes} {request}"
        output = local_shell.exec(cmd)
        cmd = f"curl -F '{files}' {attributes} {request}"
        output = _cmd_run(cmd)

    if error_pattern:
        match = error_pattern.casefold() in str(output).casefold()
||||
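Large uploads above avoid buffering the whole file by streaming it through a named pipe into `curl --no-buffer`. A minimal sketch of that trick driven from Python; the gateway URL and file path are placeholders, object attributes are omitted, and `-k` only matters for self-signed TLS:

```python
import os
import subprocess

filepath = "/tmp/big_object.bin"                      # placeholder local file
request = "http://10.78.0.10/upload/<container-id>"   # placeholder gate upload URL

files = f"file=@pipe;filename={os.path.basename(filepath)}"
# cat feeds the fifo in the background while curl reads from it, so the
# object is streamed instead of being read into memory up front.
cmd = f"mkfifo pipe; cat {filepath} > pipe & curl -k --no-buffer -F '{files}' {request}; rm -f pipe"
output = subprocess.run(cmd, shell=True, capture_output=True, text=True, timeout=600)
print(output.stdout)
```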
|
@ -245,21 +223,19 @@ def upload_via_http_gate_curl(
|
|||
return oid_re.group(1)
|
||||
|
||||
|
||||
@retry(max_attempts=3, sleep_interval=1)
|
||||
@reporter.step("Get via HTTP Gate using Curl")
|
||||
def get_via_http_curl(cid: str, oid: str, endpoint: str, http_hostname: str) -> str:
|
||||
@reporter.step_deco("Get via HTTP Gate using Curl")
|
||||
def get_via_http_curl(cid: str, oid: str, endpoint: str) -> str:
|
||||
"""
|
||||
This function gets given object from HTTP gate using curl utility.
|
||||
cid: CID to get object from
|
||||
oid: object OID
|
||||
endpoint: http gate endpoint
|
||||
http_hostname: http host name of the node
|
||||
"""
|
||||
request = f"{endpoint}/get/{cid}/{oid}"
|
||||
file_path = os.path.join(os.getcwd(), ASSETS_DIR, f"{cid}_{oid}_{str(uuid.uuid4())}")
|
||||
|
||||
cmd = f'curl -k -H "Host: {http_hostname}" {request} > {file_path}'
|
||||
local_shell.exec(cmd)
|
||||
cmd = f"curl {request} > {file_path}"
|
||||
_cmd_run(cmd)
|
||||
|
||||
return file_path
|
||||
|
||||
|
@ -270,34 +246,25 @@ def _attach_allure_step(request: str, status_code: int, req_type="GET"):
|
|||
reporter.attach(command_attachment, f"{req_type} Request")
|
||||
|
||||
|
||||
@reporter.step("Try to get object and expect error")
|
||||
@reporter.step_deco("Try to get object and expect error")
|
||||
def try_to_get_object_and_expect_error(
|
||||
cid: str,
|
||||
oid: str,
|
||||
error_pattern: str,
|
||||
endpoint: str,
|
||||
http_hostname: str,
|
||||
cid: str, oid: str, error_pattern: str, endpoint: str
|
||||
) -> None:
|
||||
try:
|
||||
get_via_http_gate(cid=cid, oid=oid, endpoint=endpoint, http_hostname=http_hostname)
|
||||
get_via_http_gate(cid=cid, oid=oid, endpoint=endpoint)
|
||||
raise AssertionError(f"Expected error on getting object with cid: {cid}")
|
||||
except Exception as err:
|
||||
match = error_pattern.casefold() in str(err).casefold()
|
||||
assert match, f"Expected {err} to match {error_pattern}"
|
||||
|
||||
|
||||
@reporter.step("Verify object can be get using HTTP header attribute")
|
||||
@reporter.step_deco("Verify object can be get using HTTP header attribute")
|
||||
def get_object_by_attr_and_verify_hashes(
|
||||
oid: str,
|
||||
file_name: str,
|
||||
cid: str,
|
||||
attrs: dict,
|
||||
endpoint: str,
|
||||
http_hostname: str,
|
||||
oid: str, file_name: str, cid: str, attrs: dict, endpoint: str
|
||||
) -> None:
|
||||
got_file_path_http = get_via_http_gate(cid=cid, oid=oid, endpoint=endpoint, http_hostname=http_hostname)
|
||||
got_file_path_http = get_via_http_gate(cid=cid, oid=oid, endpoint=endpoint)
|
||||
got_file_path_http_attr = get_via_http_gate_by_attribute(
|
||||
cid=cid, attribute=attrs, endpoint=endpoint, http_hostname=http_hostname
|
||||
cid=cid, attribute=attrs, endpoint=endpoint
|
||||
)
|
||||
assert_hashes_are_equal(file_name, got_file_path_http, got_file_path_http_attr)
|
||||
|
||||
|
@ -310,7 +277,6 @@ def verify_object_hash(
|
|||
shell: Shell,
|
||||
nodes: list[StorageNode],
|
||||
endpoint: str,
|
||||
http_hostname: str,
|
||||
object_getter=None,
|
||||
) -> None:
|
||||
|
||||
|
@ -336,7 +302,7 @@ def verify_object_hash(
|
|||
shell=shell,
|
||||
endpoint=random_node.get_rpc_endpoint(),
|
||||
)
|
||||
got_file_path_http = object_getter(cid=cid, oid=oid, endpoint=endpoint, http_hostname=http_hostname)
|
||||
got_file_path_http = object_getter(cid=cid, oid=oid, endpoint=endpoint)
|
||||
|
||||
assert_hashes_are_equal(file_name, got_file_path, got_file_path_http)
|
||||
|
||||
|
@ -345,14 +311,18 @@ def assert_hashes_are_equal(orig_file_name: str, got_file_1: str, got_file_2: st
|
|||
msg = "Expected hashes are equal for files {f1} and {f2}"
|
||||
got_file_hash_http = get_file_hash(got_file_1)
|
||||
assert get_file_hash(got_file_2) == got_file_hash_http, msg.format(f1=got_file_2, f2=got_file_1)
|
||||
assert get_file_hash(orig_file_name) == got_file_hash_http, msg.format(f1=orig_file_name, f2=got_file_1)
|
||||
assert get_file_hash(orig_file_name) == got_file_hash_http, msg.format(
|
||||
f1=orig_file_name, f2=got_file_1
|
||||
)
|
||||
|
||||
|
||||
def attr_into_header(attrs: dict) -> dict:
|
||||
return {f"X-Attribute-{_key}": _value for _key, _value in attrs.items()}
|
||||
|
||||
|
||||
@reporter.step("Convert each attribute (Key=Value) to the following format: -H 'X-Attribute-Key: Value'")
|
||||
@reporter.step_deco(
|
||||
"Convert each attribute (Key=Value) to the following format: -H 'X-Attribute-Key: Value'"
|
||||
)
|
||||
def attr_into_str_header_curl(attrs: dict) -> list:
|
||||
headers = []
|
||||
for k, v in attrs.items():
|
||||
|
@ -361,32 +331,23 @@ def attr_into_str_header_curl(attrs: dict) -> list:
|
|||
return headers
|
||||
|
||||
|
||||
@reporter.step("Try to get object via http (pass http_request and optional attributes) and expect error")
|
||||
@reporter.step_deco(
|
||||
"Try to get object via http (pass http_request and optional attributes) and expect error"
|
||||
)
|
||||
def try_to_get_object_via_passed_request_and_expect_error(
|
||||
cid: str,
|
||||
oid: str,
|
||||
error_pattern: str,
|
||||
endpoint: str,
|
||||
http_request_path: str,
|
||||
http_hostname: str,
|
||||
attrs: Optional[dict] = None,
|
||||
) -> None:
|
||||
try:
|
||||
if attrs is None:
|
||||
get_via_http_gate(
|
||||
cid=cid,
|
||||
oid=oid,
|
||||
endpoint=endpoint,
|
||||
request_path=http_request_path,
|
||||
http_hostname=http_hostname,
|
||||
)
|
||||
get_via_http_gate(cid=cid, oid=oid, endpoint=endpoint, request_path=http_request_path)
|
||||
else:
|
||||
get_via_http_gate_by_attribute(
|
||||
cid=cid,
|
||||
attribute=attrs,
|
||||
endpoint=endpoint,
|
||||
request_path=http_request_path,
|
||||
http_hostname=http_hostname,
|
||||
cid=cid, attribute=attrs, endpoint=endpoint, request_path=http_request_path
|
||||
)
|
||||
raise AssertionError(f"Expected error on getting object with cid: {cid}")
|
||||
except Exception as err:
|
||||
|
|
|
@ -1,19 +0,0 @@
from frostfs_testlib.shell import CommandOptions
from frostfs_testlib.storage.cluster import ClusterNode


class IpHelper:
    @staticmethod
    def drop_input_traffic_to_node(node: ClusterNode, block_ip: list[str]) -> None:
        shell = node.host.get_shell()
        for ip in block_ip:
            shell.exec(f"ip route add blackhole {ip}")

    @staticmethod
    def restore_input_traffic_to_node(node: ClusterNode) -> None:
        shell = node.host.get_shell()
        unlock_ip = shell.exec("ip route list | grep blackhole", CommandOptions(check=False))
        if unlock_ip.return_code != 0:
            return
        for ip in unlock_ip.stdout.strip().split("\n"):
            shell.exec(f"ip route del blackhole {ip.split(' ')[1]}")

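For context, a minimal sketch of how the `IpHelper` shown in the removed file above is typically driven from a failover test. The import path and the fixture names here are assumptions, not part of this diff:

```python
# Hypothetical usage sketch; the IpHelper import path and fixtures are assumptions.
from frostfs_testlib.steps.iptables import IpHelper  # module path is an assumption
from frostfs_testlib.storage.cluster import ClusterNode


def isolate_node(node: ClusterNode, peer_ips: list[str]) -> None:
    # Black-hole inbound traffic from the listed peers, then always roll back
    # with the paired restore call once the scenario is done.
    IpHelper.drop_input_traffic_to_node(node, peer_ips)
    try:
        ...  # run the failover checks against the isolated node
    finally:
        IpHelper.restore_input_traffic_to_node(node)
```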
@ -6,16 +6,21 @@ from dataclasses import dataclass
from time import sleep
from typing import Optional

from frostfs_testlib import reporter
from frostfs_testlib.cli import FrostfsAdm, FrostfsCli
from frostfs_testlib.resources.cli import FROSTFS_ADM_CONFIG_PATH, FROSTFS_ADM_EXEC, FROSTFS_CLI_EXEC
from frostfs_testlib.reporter import get_reporter
from frostfs_testlib.resources.cli import (
    FROSTFS_ADM_CONFIG_PATH,
    FROSTFS_ADM_EXEC,
    FROSTFS_CLI_EXEC,
)
from frostfs_testlib.resources.common import MORPH_BLOCK_TIME
from frostfs_testlib.shell import Shell
from frostfs_testlib.steps.epoch import tick_epoch, wait_for_epochs_align
from frostfs_testlib.steps.epoch import tick_epoch
from frostfs_testlib.storage.cluster import Cluster, StorageNode
from frostfs_testlib.storage.dataclasses.frostfs_services import S3Gate
from frostfs_testlib.utils import datetime_utils

reporter = get_reporter()
logger = logging.getLogger("NeoLogger")

@ -35,7 +40,45 @@ class HealthStatus:
        return HealthStatus(network, health)


@reporter.step("Get Locode from random storage node")
@reporter.step_deco("Stop random storage nodes")
def stop_random_storage_nodes(number: int, nodes: list[StorageNode]) -> list[StorageNode]:
    """
    Shuts down the given number of randomly selected storage nodes.
    Args:
        number: the number of storage nodes to stop
        nodes: the list of storage nodes to stop
    Returns:
        the list of nodes that were stopped
    """
    nodes_to_stop = random.sample(nodes, number)
    for node in nodes_to_stop:
        node.stop_service()
    return nodes_to_stop


@reporter.step_deco("Start storage node")
def start_storage_nodes(nodes: list[StorageNode]) -> None:
    """
    The function starts specified storage nodes.
    Args:
        nodes: the list of nodes to start
    """
    for node in nodes:
        node.start_service()


@reporter.step_deco("Stop storage node")
def stop_storage_nodes(nodes: list[StorageNode]) -> None:
    """
    The function starts specified storage nodes.
    Args:
        nodes: the list of nodes to start
    """
    for node in nodes:
        node.stop_service()


@reporter.step_deco("Get Locode from random storage node")
def get_locode_from_random_node(cluster: Cluster) -> str:
    node = random.choice(cluster.services(StorageNode))
    locode = node.get_un_locode()

@ -43,7 +86,7 @@ def get_locode_from_random_node(cluster: Cluster) -> str:
    return locode


@reporter.step("Healthcheck for storage node {node}")
@reporter.step_deco("Healthcheck for storage node {node}")
def storage_node_healthcheck(node: StorageNode) -> HealthStatus:
    """
    The function returns storage node's health status.

@ -57,7 +100,7 @@ def storage_node_healthcheck(node: StorageNode) -> HealthStatus:
    return HealthStatus.from_stdout(output)


@reporter.step("Set status for {node}")
@reporter.step_deco("Set status for {node}")
def storage_node_set_status(node: StorageNode, status: str, retries: int = 0) -> None:
    """
    The function sets particular status for given node.

@ -70,7 +113,7 @@ def storage_node_set_status(node: StorageNode, status: str, retries: int = 0) ->
    _run_control_command_with_retries(node, command, retries)


@reporter.step("Get netmap snapshot")
@reporter.step_deco("Get netmap snapshot")
def get_netmap_snapshot(node: StorageNode, shell: Shell) -> str:
    """
    The function returns string representation of netmap snapshot.

@ -90,7 +133,7 @@ def get_netmap_snapshot(node: StorageNode, shell: Shell) -> str:
    ).stdout


@reporter.step("Get shard list for {node}")
@reporter.step_deco("Get shard list for {node}")
def node_shard_list(node: StorageNode) -> list[str]:
    """
    The function returns list of shards for specified storage node.

@ -104,7 +147,7 @@ def node_shard_list(node: StorageNode) -> list[str]:
    return re.findall(r"Shard (.*):", output)


@reporter.step("Shard set for {node}")
@reporter.step_deco("Shard set for {node}")
def node_shard_set_mode(node: StorageNode, shard: str, mode: str) -> str:
    """
    The function sets mode for specified shard.

@ -115,7 +158,7 @@ def node_shard_set_mode(node: StorageNode, shard: str, mode: str) -> str:
    return _run_control_command_with_retries(node, command)


@reporter.step("Drop object from {node}")
@reporter.step_deco("Drop object from {node}")
def drop_object(node: StorageNode, cid: str, oid: str) -> str:
    """
    The function drops object from specified node.

@ -126,14 +169,14 @@ def drop_object(node: StorageNode, cid: str, oid: str) -> str:
    return _run_control_command_with_retries(node, command)


@reporter.step("Delete data from host for node {node}")
@reporter.step_deco("Delete data from host for node {node}")
def delete_node_data(node: StorageNode) -> None:
    node.stop_service()
    node.host.delete_storage_node_data(node.name)
    time.sleep(datetime_utils.parse_time(MORPH_BLOCK_TIME))


@reporter.step("Exclude node {node_to_exclude} from network map")
@reporter.step_deco("Exclude node {node_to_exclude} from network map")
def exclude_node_from_network_map(
    node_to_exclude: StorageNode,
    alive_node: StorageNode,

@ -146,13 +189,14 @@ def exclude_node_from_network_map(

    time.sleep(datetime_utils.parse_time(MORPH_BLOCK_TIME))
    tick_epoch(shell, cluster)
    wait_for_epochs_align(shell, cluster)

    snapshot = get_netmap_snapshot(node=alive_node, shell=shell)
    assert node_netmap_key not in snapshot, f"Expected node with key {node_netmap_key} to be absent in network map"
    assert (
        node_netmap_key not in snapshot
    ), f"Expected node with key {node_netmap_key} to be absent in network map"


@reporter.step("Include node {node_to_include} into network map")
@reporter.step_deco("Include node {node_to_include} into network map")
def include_node_to_network_map(
    node_to_include: StorageNode,
    alive_node: StorageNode,

@ -162,7 +206,7 @@ def include_node_to_network_map(
    storage_node_set_status(node_to_include, status="online")

    # Per suggestion of @fyrchik we need to wait for 2 blocks after we set status and after tick epoch.
    # First sleep can be omitted after https://git.frostfs.info/TrueCloudLab/frostfs-node/issues/60 complete.
    # First sleep can be omitted after https://github.com/TrueCloudLab/frostfs-node/issues/60 complete.

    time.sleep(datetime_utils.parse_time(MORPH_BLOCK_TIME) * 2)
    tick_epoch(shell, cluster)

@ -171,29 +215,37 @@ def include_node_to_network_map(
    check_node_in_map(node_to_include, shell, alive_node)


@reporter.step("Check node {node} in network map")
def check_node_in_map(node: StorageNode, shell: Shell, alive_node: Optional[StorageNode] = None) -> None:
@reporter.step_deco("Check node {node} in network map")
def check_node_in_map(
    node: StorageNode, shell: Shell, alive_node: Optional[StorageNode] = None
) -> None:
    alive_node = alive_node or node

    node_netmap_key = node.get_wallet_public_key()
    logger.info(f"Node ({node.label}) netmap key: {node_netmap_key}")

    snapshot = get_netmap_snapshot(alive_node, shell)
    assert node_netmap_key in snapshot, f"Expected node with key {node_netmap_key} to be in network map"
    assert (
        node_netmap_key in snapshot
    ), f"Expected node with key {node_netmap_key} to be in network map"


@reporter.step("Check node {node} NOT in network map")
def check_node_not_in_map(node: StorageNode, shell: Shell, alive_node: Optional[StorageNode] = None) -> None:
@reporter.step_deco("Check node {node} NOT in network map")
def check_node_not_in_map(
    node: StorageNode, shell: Shell, alive_node: Optional[StorageNode] = None
) -> None:
    alive_node = alive_node or node

    node_netmap_key = node.get_wallet_public_key()
    logger.info(f"Node ({node.label}) netmap key: {node_netmap_key}")

    snapshot = get_netmap_snapshot(alive_node, shell)
    assert node_netmap_key not in snapshot, f"Expected node with key {node_netmap_key} to be NOT in network map"
    assert (
        node_netmap_key not in snapshot
    ), f"Expected node with key {node_netmap_key} to be NOT in network map"


@reporter.step("Wait for node {node} is ready")
@reporter.step_deco("Wait for node {node} is ready")
def wait_for_node_to_be_ready(node: StorageNode) -> None:
    timeout, attempts = 30, 6
    for _ in range(attempts):

@ -204,10 +256,12 @@ def wait_for_node_to_be_ready(node: StorageNode) -> None:
        except Exception as err:
            logger.warning(f"Node {node} is not ready:\n{err}")
        sleep(timeout)
    raise AssertionError(f"Node {node} hasn't gone to the READY state after {timeout * attempts} seconds")
    raise AssertionError(
        f"Node {node} hasn't gone to the READY state after {timeout * attempts} seconds"
    )


@reporter.step("Remove nodes from network map trough cli-adm morph command")
@reporter.step_deco("Remove nodes from network map trough cli-adm morph command")
def remove_nodes_from_map_morph(
    shell: Shell,
    cluster: Cluster,

@ -273,3 +327,25 @@ def _run_control_command(node: StorageNode, command: str) -> None:
        f"--wallet {wallet_path} --config {wallet_config_path}"
    )
    return result.stdout


@reporter.step_deco("Start services s3gate ")
def start_s3gates(cluster: Cluster) -> None:
    """
    The function starts specified storage nodes.
    Args:
        cluster: cluster instance under test
    """
    for gate in cluster.services(S3Gate):
        gate.start_service()


@reporter.step_deco("Stop services s3gate ")
def stop_s3gates(cluster: Cluster) -> None:
    """
    The function starts specified storage nodes.
    Args:
        cluster: cluster instance under test
    """
    for gate in cluster.services(S3Gate):
        gate.stop_service()

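As a reference for the node-management helpers above, here is a sketch of the usual exclude/include round-trip. `shell` and `cluster` are test fixtures, and the keyword names are assumptions where the diff truncates the signatures:

```python
# Sketch only: drive a node out of the network map and back in.
nodes = cluster.services(StorageNode)
alive_node, node_under_test = nodes[0], nodes[1]

exclude_node_from_network_map(node_under_test, alive_node, shell=shell, cluster=cluster)
check_node_not_in_map(node_under_test, shell, alive_node)

include_node_to_network_map(node_under_test, alive_node, shell=shell, cluster=cluster)
check_node_in_map(node_under_test, shell, alive_node)
```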
@ -8,18 +8,20 @@ from typing import Optional
from neo3.wallet import utils as neo3_utils
from neo3.wallet import wallet as neo3_wallet

from frostfs_testlib import reporter
from frostfs_testlib.cli import NeoGo
from frostfs_testlib.reporter import get_reporter
from frostfs_testlib.resources.cli import NEOGO_EXECUTABLE
from frostfs_testlib.resources.common import FROSTFS_CONTRACT, GAS_HASH, MORPH_BLOCK_TIME
from frostfs_testlib.shell import Shell
from frostfs_testlib.storage.dataclasses.frostfs_services import MorphChain
from frostfs_testlib.storage.dataclasses.frostfs_services import MainChain, MorphChain
from frostfs_testlib.utils import converting_utils, datetime_utils, wallet_utils

reporter = get_reporter()
logger = logging.getLogger("NeoLogger")

EMPTY_PASSWORD = ""
TX_PERSIST_TIMEOUT = 15  # seconds
ASSET_POWER_MAINCHAIN = 10**8
ASSET_POWER_SIDECHAIN = 10**12

@ -40,7 +42,32 @@ def get_contract_hash(morph_chain: MorphChain, resolve_name: str, shell: Shell)
    return bytes.decode(base64.b64decode(stack_data[0]["value"]))


def transaction_accepted(morph_chain: MorphChain, tx_id: str):
@reporter.step_deco("Withdraw Mainnet Gas")
def withdraw_mainnet_gas(shell: Shell, main_chain: MainChain, wlt: str, amount: int):
    address = wallet_utils.get_last_address_from_wallet(wlt, EMPTY_PASSWORD)
    scripthash = neo3_utils.address_to_script_hash(address)

    neogo = NeoGo(shell=shell, neo_go_exec_path=NEOGO_EXECUTABLE)
    out = neogo.contract.invokefunction(
        wallet=wlt,
        address=address,
        rpc_endpoint=main_chain.get_endpoint(),
        scripthash=FROSTFS_CONTRACT,
        method="withdraw",
        arguments=f"{scripthash} int:{amount}",
        multisig_hash=f"{scripthash}:Global",
        wallet_password="",
    )

    m = re.match(r"^Sent invocation transaction (\w{64})$", out.stdout)
    if m is None:
        raise Exception("Can not get Tx.")
    tx = m.group(1)
    if not transaction_accepted(main_chain, tx):
        raise AssertionError(f"TX {tx} hasn't been processed")


def transaction_accepted(main_chain: MainChain, tx_id: str):
    """
    This function returns True in case of accepted TX.
    Args:

@ -52,8 +79,8 @@ def transaction_accepted(morph_chain: MorphChain, tx_id: str):
    try:
        for _ in range(0, TX_PERSIST_TIMEOUT):
            time.sleep(1)
            neogo = NeoGo(shell=morph_chain.host.get_shell(), neo_go_exec_path=NEOGO_EXECUTABLE)
            resp = neogo.query.tx(tx_hash=tx_id, rpc_endpoint=morph_chain.get_endpoint())
            neogo = NeoGo(shell=main_chain.host.get_shell(), neo_go_exec_path=NEOGO_EXECUTABLE)
            resp = neogo.query.tx(tx_hash=tx_id, rpc_endpoint=main_chain.get_endpoint())
            if resp is not None:
                logger.info(f"TX is accepted in block: {resp}")
                return True, resp

@ -63,7 +90,7 @@ def transaction_accepted(morph_chain: MorphChain, tx_id: str):
    return False


@reporter.step("Get FrostFS Balance")
@reporter.step_deco("Get FrostFS Balance")
def get_balance(shell: Shell, morph_chain: MorphChain, wallet_path: str, wallet_password: str = ""):
    """
    This function returns FrostFS balance for given wallet.

@ -84,11 +111,11 @@ def get_balance(shell: Shell, morph_chain: MorphChain, wallet_path: str, wallet_
        raise out


@reporter.step("Transfer Gas")
@reporter.step_deco("Transfer Gas")
def transfer_gas(
    shell: Shell,
    amount: int,
    morph_chain: MorphChain,
    main_chain: MainChain,
    wallet_from_path: Optional[str] = None,
    wallet_from_password: Optional[str] = None,
    address_from: Optional[str] = None,

@ -111,16 +138,22 @@ def transfer_gas(
        address_to: The address of the wallet to transfer assets to.
        amount: Amount of gas to transfer.
    """
    wallet_from_path = wallet_from_path or morph_chain.get_wallet_path()
    wallet_from_path = wallet_from_path or main_chain.get_wallet_path()
    wallet_from_password = (
        wallet_from_password if wallet_from_password is not None else morph_chain.get_wallet_password()
        wallet_from_password
        if wallet_from_password is not None
        else main_chain.get_wallet_password()
    )
    address_from = address_from or wallet_utils.get_last_address_from_wallet(
        wallet_from_path, wallet_from_password
    )
    address_to = address_to or wallet_utils.get_last_address_from_wallet(
        wallet_to_path, wallet_to_password
    )
    address_from = address_from or wallet_utils.get_last_address_from_wallet(wallet_from_path, wallet_from_password)
    address_to = address_to or wallet_utils.get_last_address_from_wallet(wallet_to_path, wallet_to_password)

    neogo = NeoGo(shell, neo_go_exec_path=NEOGO_EXECUTABLE)
    out = neogo.nep17.transfer(
        rpc_endpoint=morph_chain.get_endpoint(),
        rpc_endpoint=main_chain.get_endpoint(),
        wallet=wallet_from_path,
        wallet_password=wallet_from_password,
        amount=amount,

@ -132,12 +165,50 @@ def transfer_gas(
    txid = out.stdout.strip().split("\n")[-1]
    if len(txid) != 64:
        raise Exception("Got no TXID after run the command")
    if not transaction_accepted(morph_chain, txid):
    if not transaction_accepted(main_chain, txid):
        raise AssertionError(f"TX {txid} hasn't been processed")
    time.sleep(datetime_utils.parse_time(MORPH_BLOCK_TIME))


@reporter.step("Get Sidechain Balance")
@reporter.step_deco("FrostFS Deposit")
def deposit_gas(
    shell: Shell,
    main_chain: MainChain,
    amount: int,
    wallet_from_path: str,
    wallet_from_password: str,
):
    """
    Transferring GAS from given wallet to FrostFS contract address.
    """
    # get FrostFS contract address
    deposit_addr = converting_utils.contract_hash_to_address(FROSTFS_CONTRACT)
    logger.info(f"FrostFS contract address: {deposit_addr}")
    address_from = wallet_utils.get_last_address_from_wallet(
        wallet_path=wallet_from_path, wallet_password=wallet_from_password
    )
    transfer_gas(
        shell=shell,
        main_chain=main_chain,
        amount=amount,
        wallet_from_path=wallet_from_path,
        wallet_from_password=wallet_from_password,
        address_to=deposit_addr,
        address_from=address_from,
    )


@reporter.step_deco("Get Mainnet Balance")
def get_mainnet_balance(main_chain: MainChain, address: str):
    resp = main_chain.rpc_client.get_nep17_balances(address=address)
    logger.info(f"Got getnep17balances response: {resp}")
    for balance in resp["balance"]:
        if balance["assethash"] == GAS_HASH:
            return float(balance["amount"]) / ASSET_POWER_MAINCHAIN
    return float(0)


@reporter.step_deco("Get Sidechain Balance")
def get_sidechain_balance(morph_chain: MorphChain, address: str):
    resp = morph_chain.rpc_client.get_nep17_balances(address=address)
    logger.info(f"Got getnep17balances response: {resp}")

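For reference, a sketch of the branch-side deposit flow built from the helpers above. `shell`, `main_chain` and `user_wallet_path` are test fixtures, and the amounts are arbitrary:

```python
# Sketch only: fund a user wallet from the chain wallet, deposit part of it
# to the FrostFS contract, then read the mainnet balance back.
transfer_gas(
    shell=shell,
    amount=1_000,
    main_chain=main_chain,
    wallet_to_path=user_wallet_path,
    wallet_to_password="",
)
deposit_gas(
    shell=shell,
    main_chain=main_chain,
    amount=100,
    wallet_from_path=user_wallet_path,
    wallet_from_password="",
)
address = wallet_utils.get_last_address_from_wallet(user_wallet_path, "")
assert get_mainnet_balance(main_chain, address) > 0
```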
@ -8,23 +8,19 @@ from typing import Optional

from dateutil.parser import parse

from frostfs_testlib import reporter
from frostfs_testlib.cli import FrostfsAuthmate
from frostfs_testlib.reporter import get_reporter
from frostfs_testlib.resources.cli import FROSTFS_AUTHMATE_EXEC
from frostfs_testlib.resources.common import CREDENTIALS_CREATE_TIMEOUT
from frostfs_testlib.s3 import S3ClientWrapper, VersioningStatus
from frostfs_testlib.shell import CommandOptions, InteractiveInput, Shell
from frostfs_testlib.shell.interfaces import SshCredentials
from frostfs_testlib.steps.cli.container import search_container_by_name, search_nodes_with_container
from frostfs_testlib.storage.cluster import Cluster, ClusterNode
from frostfs_testlib.storage.cluster import Cluster
from frostfs_testlib.storage.dataclasses.frostfs_services import S3Gate
from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
from frostfs_testlib.utils.cli_utils import _run_with_passwd

reporter = get_reporter()
logger = logging.getLogger("NeoLogger")


@reporter.step("Expected all objects are presented in the bucket")
@reporter.step_deco("Expected all objects are presented in the bucket")
def check_objects_in_bucket(
    s3_client: S3ClientWrapper,
    bucket: str,

@ -33,9 +29,13 @@ def check_objects_in_bucket(
) -> None:
    unexpected_objects = unexpected_objects or []
    bucket_objects = s3_client.list_objects(bucket)
    assert len(bucket_objects) == len(expected_objects), f"Expected {len(expected_objects)} objects in the bucket"
    assert len(bucket_objects) == len(
        expected_objects
    ), f"Expected {len(expected_objects)} objects in the bucket"
    for bucket_object in expected_objects:
        assert bucket_object in bucket_objects, f"Expected object {bucket_object} in objects list {bucket_objects}"
        assert (
            bucket_object in bucket_objects
        ), f"Expected object {bucket_object} in objects list {bucket_objects}"

    for bucket_object in unexpected_objects:
        assert (

@ -43,21 +43,22 @@ def check_objects_in_bucket(
        ), f"Expected object {bucket_object} not in objects list {bucket_objects}"


@reporter.step("Try to get object and got error")
def try_to_get_objects_and_expect_error(s3_client: S3ClientWrapper, bucket: str, object_keys: list) -> None:
@reporter.step_deco("Try to get object and got error")
def try_to_get_objects_and_expect_error(
    s3_client: S3ClientWrapper, bucket: str, object_keys: list
) -> None:
    for obj in object_keys:
        try:
            s3_client.get_object(bucket, obj)
            raise AssertionError(f"Object {obj} found in bucket {bucket}")
        except Exception as err:
            assert "The specified key does not exist" in str(err), f"Expected error in exception {err}"
            assert "The specified key does not exist" in str(
                err
            ), f"Expected error in exception {err}"


@reporter.step("Set versioning status to '{status}' for bucket '{bucket}'")
@reporter.step_deco("Set versioning status to '{status}' for bucket '{bucket}'")
def set_bucket_versioning(s3_client: S3ClientWrapper, bucket: str, status: VersioningStatus):
    if status == VersioningStatus.UNDEFINED:
        return

    s3_client.get_bucket_versioning_status(bucket)
    s3_client.put_bucket_versioning(bucket, status=status)
    bucket_status = s3_client.get_bucket_versioning_status(bucket)

@ -71,8 +72,12 @@ def object_key_from_file_path(full_path: str) -> str:
def assert_tags(
    actual_tags: list, expected_tags: Optional[list] = None, unexpected_tags: Optional[list] = None
) -> None:
    expected_tags = [{"Key": key, "Value": value} for key, value in expected_tags] if expected_tags else []
    unexpected_tags = [{"Key": key, "Value": value} for key, value in unexpected_tags] if unexpected_tags else []
    expected_tags = (
        [{"Key": key, "Value": value} for key, value in expected_tags] if expected_tags else []
    )
    unexpected_tags = (
        [{"Key": key, "Value": value} for key, value in unexpected_tags] if unexpected_tags else []
    )
    if expected_tags == []:
        assert not actual_tags, f"Expected there is no tags, got {actual_tags}"
    assert len(expected_tags) == len(actual_tags)

@ -82,7 +87,7 @@ def assert_tags(
        assert tag not in actual_tags, f"Tag {tag} should not be in {actual_tags}"


@reporter.step("Expected all tags are presented in object")
@reporter.step_deco("Expected all tags are presented in object")
def check_tags_by_object(
    s3_client: S3ClientWrapper,
    bucket: str,

@ -91,10 +96,12 @@ def check_tags_by_object(
    unexpected_tags: Optional[list] = None,
) -> None:
    actual_tags = s3_client.get_object_tagging(bucket, key)
    assert_tags(expected_tags=expected_tags, unexpected_tags=unexpected_tags, actual_tags=actual_tags)
    assert_tags(
        expected_tags=expected_tags, unexpected_tags=unexpected_tags, actual_tags=actual_tags
    )


@reporter.step("Expected all tags are presented in bucket")
@reporter.step_deco("Expected all tags are presented in bucket")
def check_tags_by_bucket(
    s3_client: S3ClientWrapper,
    bucket: str,

@ -102,7 +109,9 @@ def check_tags_by_bucket(
    unexpected_tags: Optional[list] = None,
) -> None:
    actual_tags = s3_client.get_bucket_tagging(bucket)
    assert_tags(expected_tags=expected_tags, unexpected_tags=unexpected_tags, actual_tags=actual_tags)
    assert_tags(
        expected_tags=expected_tags, unexpected_tags=unexpected_tags, actual_tags=actual_tags
    )


def assert_object_lock_mode(

@ -115,19 +124,25 @@ def assert_object_lock_mode(
    retain_period: Optional[int] = None,
):
    object_dict = s3_client.get_object(bucket, file_name, full_output=True)
    assert object_dict.get("ObjectLockMode") == object_lock_mode, f"Expected Object Lock Mode is {object_lock_mode}"
    assert (
        object_dict.get("ObjectLockMode") == object_lock_mode
    ), f"Expected Object Lock Mode is {object_lock_mode}"
    assert (
        object_dict.get("ObjectLockLegalHoldStatus") == legal_hold_status
    ), f"Expected Object Lock Legal Hold Status is {legal_hold_status}"
    object_retain_date = object_dict.get("ObjectLockRetainUntilDate")
    retain_date = parse(object_retain_date) if isinstance(object_retain_date, str) else object_retain_date
    retain_date = (
        parse(object_retain_date) if isinstance(object_retain_date, str) else object_retain_date
    )
    if retain_until_date:
        assert retain_date.strftime("%Y-%m-%dT%H:%M:%S") == retain_until_date.strftime(
            "%Y-%m-%dT%H:%M:%S"
        ), f'Expected Object Lock Retain Until Date is {str(retain_until_date.strftime("%Y-%m-%dT%H:%M:%S"))}'
    elif retain_period:
        last_modify_date = object_dict.get("LastModified")
        last_modify = parse(last_modify_date) if isinstance(last_modify_date, str) else last_modify_date
        last_modify = (
            parse(last_modify_date) if isinstance(last_modify_date, str) else last_modify_date
        )
        assert (
            retain_date - last_modify + timedelta(seconds=1)
        ).days == retain_period, f"Expected retention period is {retain_period} days"

@ -161,44 +176,53 @@ def assert_s3_acl(acl_grants: list, permitted_users: str):
        logger.error("FULL_CONTROL is given to All Users")


@reporter.step("Init S3 Credentials")
@reporter.step_deco("Init S3 Credentials")
def init_s3_credentials(
    wallet: WalletInfo,
    shell: Shell,
    wallet_path: str,
    cluster: Cluster,
    s3_bearer_rules_file: str,
    policy: Optional[dict] = None,
    s3gates: Optional[list[S3Gate]] = None,
    container_placement_policy: Optional[str] = None,
):
    gate_public_keys = []
    bucket = str(uuid.uuid4())
    if not s3gates:
        s3gates = [cluster.s3_gates[0]]
    for s3gate in s3gates:
        gate_public_keys.append(s3gate.get_wallet_public_key())
    frostfs_authmate_exec: FrostfsAuthmate = FrostfsAuthmate(shell, FROSTFS_AUTHMATE_EXEC)
    issue_secret_output = frostfs_authmate_exec.secret.issue(
        wallet=wallet.path,
        peer=cluster.default_rpc_endpoint,
        gate_public_key=gate_public_keys,
        wallet_password=wallet.password,
        container_policy=policy,
        container_friendly_name=bucket,
        container_placement_policy=container_placement_policy,
    ).stdout
    aws_access_key_id = str(
        re.search(r"access_key_id.*:\s.(?P<aws_access_key_id>\w*)", issue_secret_output).group("aws_access_key_id")

    s3gate_node = cluster.services(S3Gate)[0]
    gate_public_key = s3gate_node.get_wallet_public_key()
    cmd = (
        f"{FROSTFS_AUTHMATE_EXEC} --debug --with-log --timeout {CREDENTIALS_CREATE_TIMEOUT} "
        f"issue-secret --wallet {wallet_path} --gate-public-key={gate_public_key} "
        f"--peer {cluster.default_rpc_endpoint} --container-friendly-name {bucket} "
        f"--bearer-rules {s3_bearer_rules_file}"
    )
    aws_secret_access_key = str(
        re.search(r"secret_access_key.*:\s.(?P<aws_secret_access_key>\w*)", issue_secret_output).group(
            "aws_secret_access_key"
        )
    )
    cid = str(re.search(r"container_id.*:\s.(?P<container_id>\w*)", issue_secret_output).group("container_id"))
    return cid, aws_access_key_id, aws_secret_access_key
    if policy:
        cmd += f" --container-policy {policy}'"
    logger.info(f"Executing command: {cmd}")

    try:
        output = _run_with_passwd(cmd)
        logger.info(f"Command completed with output: {output}")

        # output contains some debug info and then several JSON structures, so we find each
        # JSON structure by curly brackets (naive approach, but works while JSON is not nested)
        # and then we take JSON containing secret_access_key
        json_blocks = re.findall(r"\{.*?\}", output, re.DOTALL)
        for json_block in json_blocks:
            try:
                parsed_json_block = json.loads(json_block)
                if "secret_access_key" in parsed_json_block:
                    return (
                        parsed_json_block["container_id"],
                        parsed_json_block["access_key_id"],
                        parsed_json_block["secret_access_key"],
                    )
            except json.JSONDecodeError:
                raise AssertionError(f"Could not parse info from output\n{output}")
        raise AssertionError(f"Could not find AWS credentials in output:\n{output}")

    except Exception as exc:
        raise RuntimeError(f"Failed to init s3 credentials because of error\n{exc}") from exc


@reporter.step("Delete bucket with all objects")
@reporter.step_deco("Delete bucket with all objects")
def delete_bucket_with_objects(s3_client: S3ClientWrapper, bucket: str):
    versioning_status = s3_client.get_bucket_versioning_status(bucket)
    if versioning_status == VersioningStatus.ENABLED.value:

@ -221,20 +245,3 @@ def delete_bucket_with_objects(s3_client: S3ClientWrapper, bucket: str):

    # Delete the bucket itself
    s3_client.delete_bucket(bucket)


@reporter.step("Search nodes bucket")
def search_nodes_with_bucket(
    cluster: Cluster,
    bucket_name: str,
    wallet: str,
    shell: Shell,
    endpoint: str,
) -> list[ClusterNode]:
    cid = None
    for cluster_node in cluster.cluster_nodes:
        cid = search_container_by_name(name=bucket_name, node=cluster_node)
        if cid:
            break
    nodes_list = search_nodes_with_container(wallet=wallet, cid=cid, shell=shell, endpoint=endpoint, cluster=cluster)
    return nodes_list

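A short usage sketch for the bucket helpers above. `s3_client` is any `S3ClientWrapper` implementation and `bucket` an existing bucket name; the `put_object` call is an assumption about that wrapper interface:

```python
# Sketch only: enable versioning, upload a file, then assert bucket contents.
set_bucket_versioning(s3_client, bucket, VersioningStatus.ENABLED)
s3_client.put_object(bucket, "/tmp/example.bin")  # put_object signature is assumed
check_objects_in_bucket(
    s3_client,
    bucket,
    expected_objects=["example.bin"],
    unexpected_objects=[],
)
```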
@ -7,16 +7,16 @@ from dataclasses import dataclass
from enum import Enum
from typing import Any, Optional

from frostfs_testlib import reporter
from frostfs_testlib.cli import FrostfsCli
from frostfs_testlib.reporter import get_reporter
from frostfs_testlib.resources.cli import FROSTFS_CLI_EXEC
from frostfs_testlib.resources.common import ASSETS_DIR, DEFAULT_WALLET_CONFIG
from frostfs_testlib.shell import Shell
from frostfs_testlib.storage.dataclasses.storage_object_info import StorageObjectInfo
from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
from frostfs_testlib.testing.readable import HumanReadableEnum
from frostfs_testlib.utils import json_utils, wallet_utils

reporter = get_reporter()
logger = logging.getLogger("NeoLogger")

UNRELATED_KEY = "unrelated key in the session"

@ -26,7 +26,7 @@ WRONG_VERB = "wrong verb of the session"
INVALID_SIGNATURE = "invalid signature of the session data"


class ObjectVerb(HumanReadableEnum):
class ObjectVerb(Enum):
    PUT = "PUT"
    DELETE = "DELETE"
    GET = "GET"

@ -36,7 +36,7 @@ class ObjectVerb(HumanReadableEnum):
    SEARCH = "SEARCH"


class ContainerVerb(HumanReadableEnum):
class ContainerVerb(Enum):
    CREATE = "PUT"
    DELETE = "DELETE"
    SETEACL = "SETEACL"

@ -49,7 +49,7 @@ class Lifetime:
    iat: int = 0


@reporter.step("Generate Session Token")
@reporter.step_deco("Generate Session Token")
def generate_session_token(
    owner_wallet: WalletInfo,
    session_wallet: WalletInfo,

@ -71,7 +71,9 @@ def generate_session_token(

    file_path = os.path.join(tokens_dir, str(uuid.uuid4()))

    pub_key_64 = wallet_utils.get_wallet_public_key(session_wallet.path, session_wallet.password, "base64")
    pub_key_64 = wallet_utils.get_wallet_public_key(
        session_wallet.path, session_wallet.password, "base64"
    )

    lifetime = lifetime or Lifetime()

@ -96,7 +98,7 @@ def generate_session_token(
    return file_path


@reporter.step("Generate Session Token For Container")
@reporter.step_deco("Generate Session Token For Container")
def generate_container_session_token(
    owner_wallet: WalletInfo,
    session_wallet: WalletInfo,

@ -123,7 +125,11 @@ def generate_container_session_token(
        "container": {
            "verb": verb.value,
            "wildcard": cid is None,
            **({"containerID": {"value": f"{json_utils.encode_for_json(cid)}"}} if cid is not None else {}),
            **(
                {"containerID": {"value": f"{json_utils.encode_for_json(cid)}"}}
                if cid is not None
                else {}
            ),
        },
    }

@ -136,7 +142,7 @@ def generate_container_session_token(
    )


@reporter.step("Generate Session Token For Object")
@reporter.step_deco("Generate Session Token For Object")
def generate_object_session_token(
    owner_wallet: WalletInfo,
    session_wallet: WalletInfo,

@ -178,7 +184,7 @@ def generate_object_session_token(
    )


@reporter.step("Get signed token for container session")
@reporter.step_deco("Get signed token for container session")
def get_container_signed_token(
    owner_wallet: WalletInfo,
    user_wallet: WalletInfo,

@ -200,7 +206,7 @@ def get_container_signed_token(
    return sign_session_token(shell, session_token_file, owner_wallet)


@reporter.step("Get signed token for object session")
@reporter.step_deco("Get signed token for object session")
def get_object_signed_token(
    owner_wallet: WalletInfo,
    user_wallet: WalletInfo,

@ -227,7 +233,7 @@ def get_object_signed_token(
    return sign_session_token(shell, session_token_file, owner_wallet)


@reporter.step("Create Session Token")
@reporter.step_deco("Create Session Token")
def create_session_token(
    shell: Shell,
    owner: str,

@ -258,7 +264,7 @@ def create_session_token(
    return session_token


@reporter.step("Sign Session Token")
@reporter.step_deco("Sign Session Token")
def sign_session_token(shell: Shell, session_token_file: str, wlt: WalletInfo) -> str:
    """
    This function signs the session token by the given wallet.

@ -272,6 +278,10 @@ def sign_session_token(shell: Shell, session_token_file: str, wlt: WalletInfo) -
        The path to the signed token.
    """
    signed_token_file = os.path.join(os.getcwd(), ASSETS_DIR, str(uuid.uuid4()))
    frostfscli = FrostfsCli(shell=shell, frostfs_cli_exec_path=FROSTFS_CLI_EXEC, config_file=DEFAULT_WALLET_CONFIG)
    frostfscli.util.sign_session_token(wallet=wlt.path, from_file=session_token_file, to_file=signed_token_file)
    frostfscli = FrostfsCli(
        shell=shell, frostfs_cli_exec_path=FROSTFS_CLI_EXEC, config_file=DEFAULT_WALLET_CONFIG
    )
    frostfscli.util.sign_session_token(
        wallet=wlt.path, from_file=session_token_file, to_file=signed_token_file
    )
    return signed_token_file

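For reference, a sketch of how a signed container session token is usually produced with the helpers above. The keyword arguments to `generate_container_session_token` beyond those visible in the diff (`verb`, `cid`, `tokens_dir`) are assumptions:

```python
# Sketch only: generate a container session token and sign it with the owner wallet.
token_file = generate_container_session_token(
    owner_wallet=owner_wallet,
    session_wallet=user_wallet,
    verb=ContainerVerb.CREATE,
    cid=None,  # wildcard token, see the "wildcard" field in the body above
    tokens_dir=tokens_dir,
)
signed_token = sign_session_token(shell, token_file, owner_wallet)
```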
@ -3,7 +3,7 @@ from time import sleep

import pytest

from frostfs_testlib import reporter
from frostfs_testlib.reporter import get_reporter
from frostfs_testlib.resources.error_patterns import OBJECT_ALREADY_REMOVED
from frostfs_testlib.shell import Shell
from frostfs_testlib.steps.cli.object import delete_object, get_object

@ -12,13 +12,16 @@ from frostfs_testlib.steps.tombstone import verify_head_tombstone
from frostfs_testlib.storage.cluster import Cluster
from frostfs_testlib.storage.dataclasses.storage_object_info import StorageObjectInfo

reporter = get_reporter()
logger = logging.getLogger("NeoLogger")

CLEANUP_TIMEOUT = 10


@reporter.step("Delete Objects")
def delete_objects(storage_objects: list[StorageObjectInfo], shell: Shell, cluster: Cluster) -> None:
@reporter.step_deco("Delete Objects")
def delete_objects(
    storage_objects: list[StorageObjectInfo], shell: Shell, cluster: Cluster
) -> None:
    """
    Deletes given storage objects.

@ -6,7 +6,7 @@
"""
import logging

from frostfs_testlib import reporter
from frostfs_testlib.reporter import get_reporter
from frostfs_testlib.resources.error_patterns import OBJECT_NOT_FOUND
from frostfs_testlib.shell import Shell
from frostfs_testlib.steps.cli.object import head_object

@ -14,11 +14,14 @@ from frostfs_testlib.steps.complex_object_actions import get_last_object
from frostfs_testlib.storage.cluster import StorageNode
from frostfs_testlib.utils import string_utils

reporter = get_reporter()
logger = logging.getLogger("NeoLogger")


@reporter.step("Get Object Copies")
def get_object_copies(complexity: str, wallet: str, cid: str, oid: str, shell: Shell, nodes: list[StorageNode]) -> int:
@reporter.step_deco("Get Object Copies")
def get_object_copies(
    complexity: str, wallet: str, cid: str, oid: str, shell: Shell, nodes: list[StorageNode]
) -> int:
    """
    The function performs requests to all nodes of the container and
    finds out if they store a copy of the object. The procedure is

@ -42,8 +45,10 @@ def get_object_copies(complexity: str, wallet: str, cid: str, oid: str, shell: S
    )


@reporter.step("Get Simple Object Copies")
def get_simple_object_copies(wallet: str, cid: str, oid: str, shell: Shell, nodes: list[StorageNode]) -> int:
@reporter.step_deco("Get Simple Object Copies")
def get_simple_object_copies(
    wallet: str, cid: str, oid: str, shell: Shell, nodes: list[StorageNode]
) -> int:
    """
    To figure out the number of a simple object copies, only direct
    HEAD requests should be made to the every node of the container.

@ -61,7 +66,9 @@ def get_simple_object_copies(wallet: str, cid: str, oid: str, shell: Shell, node
    copies = 0
    for node in nodes:
        try:
            response = head_object(wallet, cid, oid, shell=shell, endpoint=node.get_rpc_endpoint(), is_direct=True)
            response = head_object(
                wallet, cid, oid, shell=shell, endpoint=node.get_rpc_endpoint(), is_direct=True
            )
            if response:
                logger.info(f"Found object {oid} on node {node}")
                copies += 1

@ -71,8 +78,10 @@ def get_simple_object_copies(wallet: str, cid: str, oid: str, shell: Shell, node
    return copies


@reporter.step("Get Complex Object Copies")
def get_complex_object_copies(wallet: str, cid: str, oid: str, shell: Shell, nodes: list[StorageNode]) -> int:
@reporter.step_deco("Get Complex Object Copies")
def get_complex_object_copies(
    wallet: str, cid: str, oid: str, shell: Shell, nodes: list[StorageNode]
) -> int:
    """
    To figure out the number of a complex object copies, we firstly
    need to retrieve its Last object. We consider that the number of

@ -93,8 +102,10 @@ def get_complex_object_copies(wallet: str, cid: str, oid: str, shell: Shell, nod
    return get_simple_object_copies(wallet, cid, last_oid, shell, nodes)


@reporter.step("Get Nodes With Object")
def get_nodes_with_object(cid: str, oid: str, shell: Shell, nodes: list[StorageNode]) -> list[StorageNode]:
@reporter.step_deco("Get Nodes With Object")
def get_nodes_with_object(
    cid: str, oid: str, shell: Shell, nodes: list[StorageNode]
) -> list[StorageNode]:
    """
    The function returns list of nodes which store
    the given object.

@ -130,7 +141,7 @@ def get_nodes_with_object(cid: str, oid: str, shell: Shell, nodes: list[StorageN
    return nodes_list


@reporter.step("Get Nodes Without Object")
@reporter.step_deco("Get Nodes Without Object")
def get_nodes_without_object(
    wallet: str, cid: str, oid: str, shell: Shell, nodes: list[StorageNode]
) -> list[StorageNode]:

@ -149,7 +160,9 @@ def get_nodes_without_object(
    nodes_list = []
    for node in nodes:
        try:
            res = head_object(wallet, cid, oid, shell=shell, endpoint=node.get_rpc_endpoint(), is_direct=True)
            res = head_object(
                wallet, cid, oid, shell=shell, endpoint=node.get_rpc_endpoint(), is_direct=True
            )
            if res is None:
                nodes_list.append(node)
        except Exception as err:

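As an illustration of the copy-counting helpers above, a typical replication check; the `REP 2` expectation is an assumed container policy, and `wallet`, `cid`, `oid` come from the test:

```python
# Sketch only: count direct HEAD hits across container nodes and compare
# against the expected replication factor.
copies = get_simple_object_copies(wallet, cid, oid, shell, cluster.services(StorageNode))
assert copies == 2, f"Expected 2 copies of {oid}, got {copies}"
```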
@ -3,15 +3,18 @@ import logging

from neo3.wallet import wallet

from frostfs_testlib import reporter
from frostfs_testlib.reporter import get_reporter
from frostfs_testlib.shell import Shell
from frostfs_testlib.steps.cli.object import head_object

reporter = get_reporter()
logger = logging.getLogger("NeoLogger")


@reporter.step("Verify Head Tombstone")
def verify_head_tombstone(wallet_path: str, cid: str, oid_ts: str, oid: str, shell: Shell, endpoint: str):
@reporter.step_deco("Verify Head Tombstone")
def verify_head_tombstone(
    wallet_path: str, cid: str, oid_ts: str, oid: str, shell: Shell, endpoint: str
):
    header = head_object(wallet_path, cid, oid_ts, shell=shell, endpoint=endpoint)["header"]

    s_oid = header["sessionToken"]["body"]["object"]["target"]["objects"]

@ -27,6 +30,12 @@ def verify_head_tombstone(wallet_path: str, cid: str, oid_ts: str, oid: str, she

    assert header["ownerID"] == addr, "Tombstone Owner ID is wrong"
    assert header["objectType"] == "TOMBSTONE", "Header Type isn't Tombstone"
    assert header["sessionToken"]["body"]["object"]["verb"] == "DELETE", "Header Session Type isn't DELETE"
    assert header["sessionToken"]["body"]["object"]["target"]["container"] == cid, "Header Session ID is wrong"
    assert oid in header["sessionToken"]["body"]["object"]["target"]["objects"], "Header Session OID is wrong"
    assert (
        header["sessionToken"]["body"]["object"]["verb"] == "DELETE"
    ), "Header Session Type isn't DELETE"
    assert (
        header["sessionToken"]["body"]["object"]["target"]["container"] == cid
    ), "Header Session ID is wrong"
    assert (
        oid in header["sessionToken"]["body"]["object"]["target"]["objects"]
    ), "Header Session OID is wrong"

|
|||
from frostfs_testlib.storage.constants import _FrostfsServicesNames
|
||||
from frostfs_testlib.storage.dataclasses.frostfs_services import (
|
||||
HTTPGate,
|
||||
InnerRing,
|
||||
MainChain,
|
||||
MorphChain,
|
||||
S3Gate,
|
||||
StorageNode,
|
||||
)
|
||||
from frostfs_testlib.storage.service_registry import ServiceRegistry
|
||||
|
||||
__class_registry = ServiceRegistry()
|
||||
|
||||
# Register default public services
|
||||
__class_registry.register_service(_FrostfsServicesNames.STORAGE, StorageNode)
|
||||
__class_registry.register_service(_FrostfsServicesNames.INNER_RING, InnerRing)
|
||||
__class_registry.register_service(_FrostfsServicesNames.MORPH_CHAIN, MorphChain)
|
||||
__class_registry.register_service(_FrostfsServicesNames.S3_GATE, S3Gate)
|
||||
__class_registry.register_service(_FrostfsServicesNames.HTTP_GATE, HTTPGate)
|
||||
# # TODO: Remove this since we are no longer have main chain
|
||||
__class_registry.register_service(_FrostfsServicesNames.MAIN_CHAIN, MainChain)
|
||||
|
||||
|
||||
def get_service_registry() -> ServiceRegistry:
|
||||
"""Returns registry with registered classes related to cluster and cluster nodes.
|
||||
|
|
|
@ -2,18 +2,19 @@ import random
import re

import yaml
from yarl import URL

from frostfs_testlib import reporter
from frostfs_testlib.hosting import Host, Hosting
from frostfs_testlib.hosting.config import ServiceConfig
from frostfs_testlib.storage import get_service_registry
from frostfs_testlib.storage.configuration.interfaces import ServiceConfigurationYml
from frostfs_testlib.storage.configuration.service_configuration import ServiceConfiguration
from frostfs_testlib.storage.constants import ConfigAttributes
from frostfs_testlib.storage.dataclasses.frostfs_services import HTTPGate, InnerRing, MorphChain, S3Gate, StorageNode
from frostfs_testlib.storage.dataclasses.frostfs_services import (
    HTTPGate,
    InnerRing,
    MorphChain,
    S3Gate,
    StorageNode,
)
from frostfs_testlib.storage.dataclasses.node_base import NodeBase, ServiceClass
from frostfs_testlib.storage.dataclasses.storage_object_info import Interfaces
from frostfs_testlib.storage.service_registry import ServiceRegistry


@ -87,9 +88,6 @@ class ClusterNode:
        config_str = yaml.dump(new_config)
        shell.exec(f"echo '{config_str}' | sudo tee {config_file_path}")

    def config(self, service_type: type[ServiceClass]) -> ServiceConfigurationYml:
        return ServiceConfiguration(self.service(service_type))

    def service(self, service_type: type[ServiceClass]) -> ServiceClass:
        """
        Get a service cluster node of specified type.

@ -114,55 +112,9 @@ class ClusterNode:
            self.host,
        )

    @property
    def services(self) -> list[NodeBase]:
        svcs: list[NodeBase] = []
        svcs_names_on_node = [svc.name for svc in self.host.config.services]
        for entry in self.class_registry._class_mapping.values():
            hosting_svc_name = entry["hosting_service_name"]
            pattern = f"{hosting_svc_name}{self.id:02}"
            if pattern in svcs_names_on_node:
                config = self.host.get_service_config(pattern)
                svcs.append(
                    entry["cls"](
                        self.id,
                        config.name,
                        self.host,
                    )
                )

        return svcs

    def get_all_interfaces(self) -> dict[str, str]:
        return self.host.config.interfaces

    def get_interface(self, interface: Interfaces) -> str:
        return self.host.config.interfaces[interface.value]

    def get_data_interfaces(self) -> list[str]:
    def get_list_of_services(self) -> list[str]:
        return [
            ip_address for name_interface, ip_address in self.host.config.interfaces.items() if "data" in name_interface
        ]

    def get_data_interface(self, search_interface: str) -> list[str]:
        return [
            self.host.config.interfaces[interface]
            for interface in self.host.config.interfaces.keys()
            if search_interface == interface
        ]

    def get_internal_interfaces(self) -> list[str]:
        return [
            ip_address
            for name_interface, ip_address in self.host.config.interfaces.items()
            if "internal" in name_interface
        ]

    def get_internal_interface(self, search_internal: str) -> list[str]:
        return [
            self.host.config.interfaces[interface]
            for interface in self.host.config.interfaces.keys()
            if search_internal == interface
            config.attributes[ConfigAttributes.SERVICE_NAME] for config in self.host.config.services
        ]


@ -174,8 +126,6 @@ class Cluster:
    default_rpc_endpoint: str
    default_s3_gate_endpoint: str
    default_http_gate_endpoint: str
    default_http_hostname: str
    default_s3_hostname: str

    def __init__(self, hosting: Hosting) -> None:
        self._hosting = hosting

@ -184,8 +134,6 @@ class Cluster:
        self.default_rpc_endpoint = self.services(StorageNode)[0].get_rpc_endpoint()
        self.default_s3_gate_endpoint = self.services(S3Gate)[0].get_endpoint()
        self.default_http_gate_endpoint = self.services(HTTPGate)[0].get_endpoint()
        self.default_http_hostname = self.services(StorageNode)[0].get_http_hostname()
        self.default_s3_hostname = self.services(StorageNode)[0].get_s3_hostname()

    @property
    def hosts(self) -> list[Host]:

@ -217,40 +165,6 @@ class Cluster:
    def morph_chain(self) -> list[MorphChain]:
        return self.services(MorphChain)

    def nodes(self, services: list[ServiceClass]) -> list[ClusterNode]:
        """
        Resolve which cluster nodes hosting the specified services.

        Args:
            services: list of services to resolve hosting cluster nodes.

        Returns:
            list of cluster nodes which host specified services.
        """

        cluster_nodes = set()
        for service in services:
            cluster_nodes.update([node for node in self.cluster_nodes if node.service(type(service)) == service])

        return list(cluster_nodes)

    def node(self, service: ServiceClass) -> ClusterNode:
        """
        Resolve single cluster node hosting the specified service.

        Args:
            services: list of services to resolve hosting cluster nodes.

        Returns:
            list of cluster nodes which host specified services.
        """

        nodes = [node for node in self.cluster_nodes if node.service(type(service)) == service]
        if not len(nodes):
            raise RuntimeError(f"Cannot find service {service} on any node")

        return nodes[0]

    def services(self, service_type: type[ServiceClass]) -> list[ServiceClass]:
        """
        Get all services in a cluster of specified type.

@ -336,8 +250,3 @@ class Cluster:
    def get_morph_endpoints(self) -> list[str]:
        nodes: list[MorphChain] = self.services(MorphChain)
        return [node.get_endpoint() for node in nodes]

    def get_nodes_by_ip(self, ips: list[str]) -> list[ClusterNode]:
        cluster_nodes = [node for node in self.cluster_nodes if URL(node.morph_chain.get_endpoint()).host in ips]
        with reporter.step(f"Return cluster nodes - {cluster_nodes}"):
            return cluster_nodes

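For reference, a sketch of the master-side service-to-node resolution that this branch drops; `cluster` is the usual fixture and the methods used are the ones shown in the removed hunks above:

```python
# Sketch only: resolve which ClusterNode hosts a given service instance.
storage_service = cluster.services(StorageNode)[0]
owning_node = cluster.node(storage_service)          # single ClusterNode hosting it
s3_nodes = cluster.nodes(cluster.services(S3Gate))   # every node hosting an S3 gate
```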
@ -1,65 +0,0 @@
|
|||
from abc import ABC, abstractmethod
|
||||
from typing import Any
|
||||
|
||||
|
||||
class ServiceConfigurationYml(ABC):
|
||||
"""
|
||||
Class to manipulate yml configuration for service
|
||||
"""
|
||||
|
||||
def _find_option(self, key: str, data: dict):
|
||||
tree = key.split(":")
|
||||
current = data
|
||||
for node in tree:
|
||||
if isinstance(current, list) and len(current) - 1 >= int(node):
|
||||
current = current[int(node)]
|
||||
continue
|
||||
|
||||
if node not in current:
|
||||
return None
|
||||
|
||||
current = current[node]
|
||||
|
||||
return current
|
||||
|
||||
def _set_option(self, key: str, value: Any, data: dict):
|
||||
tree = key.split(":")
|
||||
current = data
|
||||
for node in tree[:-1]:
|
||||
if isinstance(current, list) and len(current) - 1 >= int(node):
|
||||
current = current[int(node)]
|
||||
continue
|
||||
|
||||
if node not in current:
|
||||
current[node] = {}
|
||||
|
||||
current = current[node]
|
||||
|
||||
current[tree[-1]] = value
|
||||
|
||||
@abstractmethod
|
||||
def get(self, key: str) -> str:
|
||||
"""
|
||||
Get parameter value from current configuration
|
||||
|
||||
Args:
|
||||
key: key of the parameter in yaml format like 'storage:shard:default:resync_metabase'
|
||||
|
||||
Returns:
|
||||
value of the parameter
|
||||
"""
|
||||
|
||||
@abstractmethod
|
||||
def set(self, values: dict[str, Any]):
|
||||
"""
|
||||
Sets parameters to configuration
|
||||
|
||||
Args:
|
||||
values: dict where key is the key of the parameter in yaml format like 'storage:shard:default:resync_metabase' and value is the value of the option to set
|
||||
"""
|
||||
|
||||
@abstractmethod
|
||||
def revert(self):
|
||||
"""
|
||||
Revert changes
|
||||
"""
|
|
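A small illustration (not from the diff) of how the colon-separated keys used by `get`/`set` map onto nested YAML data; the traversal below mirrors `_find_option`, with list elements addressed by numeric path segments.

```python
# Hypothetical YAML data, already parsed into Python structures.
data = {"storage": {"shard": [{"mode": "read-write"}, {"mode": "degraded"}]}}


def find_option(key: str, data):
    current = data
    for node in key.split(":"):
        # Numeric segments index into lists, e.g. 'shard:1' -> second shard.
        if isinstance(current, list) and len(current) - 1 >= int(node):
            current = current[int(node)]
            continue
        if node not in current:
            return None
        current = current[node]
    return current


assert find_option("storage:shard:1:mode", data) == "degraded"
assert find_option("storage:shard:2:mode", data) is None
```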
@ -1,65 +0,0 @@
|
|||
import os
|
||||
import re
|
||||
from typing import Any
|
||||
|
||||
import yaml
|
||||
|
||||
from frostfs_testlib import reporter
|
||||
from frostfs_testlib.shell.interfaces import CommandOptions
|
||||
from frostfs_testlib.storage.configuration.interfaces import ServiceConfigurationYml
|
||||
from frostfs_testlib.storage.dataclasses.node_base import ServiceClass
|
||||
|
||||
|
||||
class ServiceConfiguration(ServiceConfigurationYml):
|
||||
def __init__(self, service: "ServiceClass") -> None:
|
||||
self.service = service
|
||||
self.shell = self.service.host.get_shell()
|
||||
self.confd_path = os.path.join(self.service.config_dir, "conf.d")
|
||||
self.custom_file = os.path.join(self.confd_path, "99_changes.yml")
|
||||
|
||||
def _path_exists(self, path: str) -> bool:
|
||||
return not self.shell.exec(f"test -e {path}", options=CommandOptions(check=False)).return_code
|
||||
|
||||
def _get_data_from_file(self, path: str) -> dict:
|
||||
content = self.shell.exec(f"cat {path}").stdout
|
||||
data = yaml.safe_load(content)
|
||||
return data
|
||||
|
||||
def get(self, key: str) -> str:
|
||||
with reporter.step(f"Get {key} configuration value for {self.service}"):
|
||||
config_files = [self.service.main_config_path]
|
||||
|
||||
if self._path_exists(self.confd_path):
|
||||
files = self.shell.exec(f"find {self.confd_path} -type f").stdout.strip().split()
|
||||
# Sort override files in reverse order, from the latest one to the first
|
||||
config_files.extend(sorted(files, key=lambda x: -int(re.findall(r"^\d+", os.path.basename(x))[0])))
|
||||
|
||||
result = None
|
||||
for file in config_files:
|
||||
data = self._get_data_from_file(file)
|
||||
result = self._find_option(key, data)
|
||||
if result is not None:
|
||||
break
|
||||
|
||||
return result
|
||||
|
||||
def set(self, values: dict[str, Any]):
|
||||
with reporter.step(f"Change configuration for {self.service}"):
|
||||
if not self._path_exists(self.confd_path):
|
||||
self.shell.exec(f"mkdir {self.confd_path}")
|
||||
|
||||
if self._path_exists(self.custom_file):
|
||||
data = self._get_data_from_file(self.custom_file)
|
||||
else:
|
||||
data = {}
|
||||
|
||||
for key, value in values.items():
|
||||
self._set_option(key, value, data)
|
||||
|
||||
content = yaml.dump(data)
|
||||
self.shell.exec(f"echo '{content}' | sudo tee {self.custom_file}")
|
||||
self.shell.exec(f"chmod 777 {self.custom_file}")
|
||||
|
||||
def revert(self):
|
||||
with reporter.step(f"Revert changed options for {self.service}"):
|
||||
self.shell.exec(f"rm -rf {self.custom_file}")
|
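A usage sketch for the implementation above (the module path and the fixture name are assumptions, not taken from the diff): read an option, override it through `conf.d/99_changes.yml`, then roll the change back.

```python
from frostfs_testlib.storage.configuration.service_configuration import ServiceConfiguration

# `storage_node_service` is any NodeBase-derived service object (hypothetical fixture).
config = ServiceConfiguration(storage_node_service)

current = config.get("storage:shard:default:resync_metabase")
config.set({"storage:shard:default:resync_metabase": True})
# The service has to be restarted to pick up conf.d overrides
# (ConfigStateManager below does exactly that around set()/revert()).
config.revert()
```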
|
@ -3,20 +3,14 @@ class ConfigAttributes:
|
|||
WALLET_PASSWORD = "wallet_password"
|
||||
WALLET_PATH = "wallet_path"
|
||||
WALLET_CONFIG = "wallet_config"
|
||||
CONFIG_DIR = "service_config_dir"
|
||||
CONFIG_PATH = "config_path"
|
||||
SHARD_CONFIG_PATH = "shard_config_path"
|
||||
LOGGER_CONFIG_PATH = "logger_config_path"
|
||||
LOCAL_WALLET_PATH = "local_wallet_path"
|
||||
LOCAL_WALLET_CONFIG = "local_config_path"
|
||||
ENDPOINT_DATA_0 = "endpoint_data0"
|
||||
ENDPOINT_DATA_1 = "endpoint_data1"
|
||||
ENDPOINT_INTERNAL = "endpoint_internal0"
|
||||
ENDPOINT_PROMETHEUS = "endpoint_prometheus"
|
||||
CONTROL_ENDPOINT = "control_endpoint"
|
||||
UN_LOCODE = "un_locode"
|
||||
HTTP_HOSTNAME = "http_hostname"
|
||||
S3_HOSTNAME = "s3_hostname"
|
||||
|
||||
|
||||
class _FrostfsServicesNames:
|
||||
|
@ -25,3 +19,4 @@ class _FrostfsServicesNames:
|
|||
HTTP_GATE = "http-gate"
|
||||
MORPH_CHAIN = "morph-chain"
|
||||
INNER_RING = "ir"
|
||||
MAIN_CHAIN = "main-chain"
|
||||
|
|
|
@ -1,59 +1,84 @@
|
|||
import copy
|
||||
from datetime import datetime
|
||||
from typing import Optional
|
||||
import time
|
||||
|
||||
import frostfs_testlib.resources.optionals as optionals
|
||||
from frostfs_testlib import reporter
|
||||
from frostfs_testlib.load.interfaces.scenario_runner import ScenarioRunner
|
||||
from frostfs_testlib.load.load_config import EndpointSelectionStrategy, LoadParams, LoadScenario, LoadType
|
||||
from frostfs_testlib.load.k6 import K6
|
||||
from frostfs_testlib.load.load_config import (
|
||||
EndpointSelectionStrategy,
|
||||
K6ProcessAllocationStrategy,
|
||||
LoadParams,
|
||||
LoadScenario,
|
||||
LoadType,
|
||||
)
|
||||
from frostfs_testlib.load.load_report import LoadReport
|
||||
from frostfs_testlib.load.load_steps import init_s3_client, prepare_k6_instances
|
||||
from frostfs_testlib.load.load_verifiers import LoadVerifier
|
||||
from frostfs_testlib.reporter import get_reporter
|
||||
from frostfs_testlib.resources.load_params import (
|
||||
K6_TEARDOWN_PERIOD,
|
||||
LOAD_NODE_SSH_PASSWORD,
|
||||
LOAD_NODE_SSH_PRIVATE_KEY_PASSPHRASE,
|
||||
LOAD_NODE_SSH_PRIVATE_KEY_PATH,
|
||||
LOAD_NODE_SSH_USER,
|
||||
LOAD_NODES,
|
||||
)
|
||||
from frostfs_testlib.shell.interfaces import SshCredentials
|
||||
from frostfs_testlib.storage.cluster import ClusterNode
|
||||
from frostfs_testlib.storage.dataclasses.frostfs_services import S3Gate, StorageNode
|
||||
from frostfs_testlib.storage.dataclasses.wallet import WalletInfo
|
||||
from frostfs_testlib.testing.parallel import parallel
|
||||
from frostfs_testlib.testing.test_control import run_optionally
|
||||
from frostfs_testlib.utils import datetime_utils
|
||||
|
||||
reporter = get_reporter()
|
||||
|
||||
|
||||
class BackgroundLoadController:
|
||||
k6_instances: list[K6]
|
||||
k6_dir: str
|
||||
load_params: LoadParams
|
||||
original_load_params: LoadParams
|
||||
load_nodes: list[str]
|
||||
verification_params: LoadParams
|
||||
cluster_nodes: list[ClusterNode]
|
||||
nodes_under_load: list[ClusterNode]
|
||||
load_counter: int
|
||||
ssh_credentials: SshCredentials
|
||||
loaders_wallet: WalletInfo
|
||||
load_summaries: dict
|
||||
endpoints: list[str]
|
||||
runner: ScenarioRunner
|
||||
started: bool
|
||||
load_reporters: list[LoadReport]
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
k6_dir: str,
|
||||
load_params: LoadParams,
|
||||
loaders_wallet: WalletInfo,
|
||||
cluster_nodes: list[ClusterNode],
|
||||
nodes_under_load: list[ClusterNode],
|
||||
runner: ScenarioRunner,
|
||||
) -> None:
|
||||
self.k6_dir = k6_dir
|
||||
self.original_load_params = load_params
|
||||
self.load_params = copy.deepcopy(self.original_load_params)
|
||||
self.cluster_nodes = cluster_nodes
|
||||
self.nodes_under_load = nodes_under_load
|
||||
self.load_counter = 1
|
||||
self.load_nodes = LOAD_NODES
|
||||
self.loaders_wallet = loaders_wallet
|
||||
self.runner = runner
|
||||
self.started = False
|
||||
self.load_reporters = []
|
||||
|
||||
if load_params.endpoint_selection_strategy is None:
|
||||
raise RuntimeError("endpoint_selection_strategy should not be None")
|
||||
|
||||
self.endpoints = self._get_endpoints(
|
||||
load_params.load_type, load_params.endpoint_selection_strategy
|
||||
)
|
||||
|
||||
self.ssh_credentials = SshCredentials(
|
||||
LOAD_NODE_SSH_USER,
|
||||
LOAD_NODE_SSH_PASSWORD,
|
||||
LOAD_NODE_SSH_PRIVATE_KEY_PATH,
|
||||
LOAD_NODE_SSH_PRIVATE_KEY_PASSPHRASE,
|
||||
)
|
||||
|
||||
@run_optionally(optionals.OPTIONAL_BACKGROUND_LOAD_ENABLED, [])
|
||||
def _get_endpoints(self, load_type: LoadType, endpoint_selection_strategy: EndpointSelectionStrategy):
|
||||
def _get_endpoints(
|
||||
self, load_type: LoadType, endpoint_selection_strategy: EndpointSelectionStrategy
|
||||
):
|
||||
all_endpoints = {
|
||||
LoadType.gRPC: {
|
||||
EndpointSelectionStrategy.ALL: list(
|
||||
|
@ -74,13 +99,16 @@ class BackgroundLoadController:
|
|||
LoadType.S3: {
|
||||
EndpointSelectionStrategy.ALL: list(
|
||||
set(
|
||||
endpoint
|
||||
endpoint.replace("http://", "")
|
||||
for node_under_load in self.nodes_under_load
|
||||
for endpoint in node_under_load.service(S3Gate).get_all_endpoints()
|
||||
)
|
||||
),
|
||||
EndpointSelectionStrategy.FIRST: list(
|
||||
set(node_under_load.service(S3Gate).get_endpoint() for node_under_load in self.nodes_under_load)
|
||||
set(
|
||||
node_under_load.service(S3Gate).get_endpoint().replace("http://", "")
|
||||
for node_under_load in self.nodes_under_load
|
||||
)
|
||||
),
|
||||
},
|
||||
}
|
||||
|
@ -88,37 +116,69 @@ class BackgroundLoadController:
|
|||
return all_endpoints[load_type][endpoint_selection_strategy]
|
||||
|
||||
@run_optionally(optionals.OPTIONAL_BACKGROUND_LOAD_ENABLED)
|
||||
@reporter.step("Init k6 instances")
|
||||
def init_k6(self):
|
||||
self.endpoints = self._get_endpoints(self.load_params.load_type, self.load_params.endpoint_selection_strategy)
|
||||
self.runner.init_k6_instances(self.load_params, self.endpoints, self.k6_dir)
|
||||
|
||||
@run_optionally(optionals.OPTIONAL_BACKGROUND_LOAD_ENABLED)
|
||||
@reporter.step("Prepare load instances")
|
||||
@reporter.step_deco("Prepare background load instances")
|
||||
def prepare(self):
|
||||
self.runner.prepare(self.load_params, self.cluster_nodes, self.nodes_under_load, self.k6_dir)
|
||||
self.init_k6()
|
||||
if self.load_params.load_type == LoadType.S3:
|
||||
init_s3_client(
|
||||
self.load_nodes,
|
||||
self.load_params,
|
||||
self.k6_dir,
|
||||
self.ssh_credentials,
|
||||
self.nodes_under_load,
|
||||
self.loaders_wallet,
|
||||
)
|
||||
|
||||
def append_reporter(self, load_report: LoadReport):
|
||||
self.load_reporters.append(load_report)
|
||||
self._prepare(self.load_params)
|
||||
|
||||
def _prepare(self, load_params: LoadParams):
|
||||
self.k6_instances = prepare_k6_instances(
|
||||
load_nodes=LOAD_NODES,
|
||||
ssh_credentials=self.ssh_credentials,
|
||||
k6_dir=self.k6_dir,
|
||||
load_params=load_params,
|
||||
endpoints=self.endpoints,
|
||||
loaders_wallet=self.loaders_wallet,
|
||||
)
|
||||
|
||||
@run_optionally(optionals.OPTIONAL_BACKGROUND_LOAD_ENABLED)
|
||||
@reporter.step_deco("Start background load")
|
||||
def start(self):
|
||||
with reporter.step(f"Start load on nodes {self.nodes_under_load}"):
|
||||
self.runner.start()
|
||||
self.started = True
|
||||
if self.load_params.preset is None:
|
||||
raise RuntimeError("Preset should not be none at the moment of start")
|
||||
|
||||
with reporter.step(
|
||||
f"Start background load on nodes {self.nodes_under_load}: "
|
||||
f"writers = {self.load_params.writers}, "
|
||||
f"obj_size = {self.load_params.object_size}, "
|
||||
f"load_time = {self.load_params.load_time}, "
|
||||
f"prepare_json = {self.load_params.preset.pregen_json}, "
|
||||
f"endpoints = {self.endpoints}"
|
||||
):
|
||||
for k6_load_instance in self.k6_instances:
|
||||
k6_load_instance.start()
|
||||
|
||||
wait_after_start_time = datetime_utils.parse_time(self.load_params.setup_timeout) + 5
|
||||
with reporter.step(
|
||||
f"Wait for start timeout + couple more seconds ({wait_after_start_time}) before moving on"
|
||||
):
|
||||
time.sleep(wait_after_start_time)
|
||||
|
||||
@run_optionally(optionals.OPTIONAL_BACKGROUND_LOAD_ENABLED)
|
||||
@reporter.step("Stop load")
|
||||
@reporter.step_deco("Stop background load")
|
||||
def stop(self):
|
||||
self.runner.stop()
|
||||
for k6_load_instance in self.k6_instances:
|
||||
k6_load_instance.stop()
|
||||
|
||||
@run_optionally(optionals.OPTIONAL_BACKGROUND_LOAD_ENABLED, True)
|
||||
def is_running(self) -> bool:
|
||||
return self.runner.is_running
|
||||
def is_running(self):
|
||||
for k6_load_instance in self.k6_instances:
|
||||
if not k6_load_instance.is_running:
|
||||
return False
|
||||
|
||||
return True
|
||||
|
||||
@run_optionally(optionals.OPTIONAL_BACKGROUND_LOAD_ENABLED)
|
||||
@reporter.step("Reset load")
|
||||
@reporter.step_deco("Reset background load")
|
||||
def _reset_for_consequent_load(self):
|
||||
"""This method is required if we want to run multiple loads during test run.
|
||||
Increment the load counter by 1 and append it to the load_id.
|
||||
|
@ -128,102 +188,89 @@ class BackgroundLoadController:
|
|||
self.load_params.set_id(f"{self.load_params.load_id}_{self.load_counter}")
|
||||
|
||||
@run_optionally(optionals.OPTIONAL_BACKGROUND_LOAD_ENABLED)
|
||||
@reporter.step("Startup load")
|
||||
@reporter.step_deco("Startup background load")
|
||||
def startup(self):
|
||||
self.prepare()
|
||||
self.preset()
|
||||
self.start()
|
||||
|
||||
@run_optionally(optionals.OPTIONAL_BACKGROUND_LOAD_ENABLED)
|
||||
def preset(self):
|
||||
self.runner.preset()
|
||||
|
||||
@run_optionally(optionals.OPTIONAL_BACKGROUND_LOAD_ENABLED)
|
||||
@reporter.step("Stop and get results of load")
|
||||
def teardown(self):
|
||||
if not self.started:
|
||||
@reporter.step_deco("Stop and get results of background load")
|
||||
def teardown(self, load_report: LoadReport = None):
|
||||
if not self.k6_instances:
|
||||
return
|
||||
|
||||
self.stop()
|
||||
self.load_summaries = self._get_results()
|
||||
self.started = False
|
||||
|
||||
start_time = min(self._get_start_times())
|
||||
end_time = max(self._get_end_times())
|
||||
|
||||
for load_report in self.load_reporters:
|
||||
load_report.set_start_time(start_time)
|
||||
load_report.set_end_time(end_time)
|
||||
self.load_summaries = self.get_results()
|
||||
self.k6_instances = []
|
||||
if load_report:
|
||||
load_report.add_summaries(self.load_summaries)
|
||||
|
||||
def _get_start_times(self) -> list[datetime]:
|
||||
futures = parallel([k6.get_start_time for k6 in self.runner.get_k6_instances()])
|
||||
return [future.result() for future in futures]
|
||||
|
||||
def _get_end_times(self) -> list[datetime]:
|
||||
futures = parallel([k6.get_end_time for k6 in self.runner.get_k6_instances()])
|
||||
return [future.result() for future in futures]
|
||||
|
||||
@run_optionally(optionals.OPTIONAL_BACKGROUND_LOAD_ENABLED)
|
||||
@reporter.step("Run post-load verification")
|
||||
@reporter.step_deco("Verify results of background load")
|
||||
def verify(self):
|
||||
try:
|
||||
load_issues = self._collect_load_issues()
|
||||
if self.load_params.verify:
|
||||
load_issues.extend(self._run_verify_scenario())
|
||||
|
||||
assert not load_issues, "\n".join(load_issues)
|
||||
self.verification_params = LoadParams(
|
||||
verify_clients=self.load_params.verify_clients,
|
||||
scenario=LoadScenario.VERIFY,
|
||||
registry_file=self.load_params.registry_file,
|
||||
verify_time=self.load_params.verify_time,
|
||||
load_type=self.load_params.load_type,
|
||||
load_id=self.load_params.load_id,
|
||||
working_dir=self.load_params.working_dir,
|
||||
endpoint_selection_strategy=self.load_params.endpoint_selection_strategy,
|
||||
k6_process_allocation_strategy=self.load_params.k6_process_allocation_strategy,
|
||||
)
|
||||
self._run_verify_scenario()
|
||||
verification_summaries = self.get_results()
|
||||
self.verify_summaries(self.load_summaries, verification_summaries)
|
||||
finally:
|
||||
self._reset_for_consequent_load()
|
||||
|
||||
@run_optionally(optionals.OPTIONAL_BACKGROUND_LOAD_ENABLED)
|
||||
@reporter.step("Collect load issues")
|
||||
def _collect_load_issues(self):
|
||||
@reporter.step_deco("Verify summaries from k6")
|
||||
def verify_summaries(self, load_summaries: dict, verification_summaries: dict):
|
||||
verifier = LoadVerifier(self.load_params)
|
||||
return verifier.collect_load_issues(self.load_summaries)
|
||||
for node_or_endpoint in load_summaries:
|
||||
with reporter.step(f"Verify load summaries for {node_or_endpoint}"):
|
||||
verifier.verify_summaries(
|
||||
load_summaries[node_or_endpoint], verification_summaries[node_or_endpoint]
|
||||
)
|
||||
|
||||
@run_optionally(optionals.OPTIONAL_BACKGROUND_LOAD_ENABLED)
|
||||
def wait_until_finish(self, soft_timeout: int = 0):
|
||||
self.runner.wait_until_finish(soft_timeout)
|
||||
def wait_until_finish(self):
|
||||
if self.load_params.load_time is None:
|
||||
raise RuntimeError("LoadTime should not be none")
|
||||
|
||||
for k6_instance in self.k6_instances:
|
||||
k6_instance.wait_until_finished(self.load_params.load_time + int(K6_TEARDOWN_PERIOD))
|
||||
|
||||
@run_optionally(optionals.OPTIONAL_BACKGROUND_LOAD_ENABLED)
|
||||
@reporter.step("Verify loaded objects")
|
||||
def _run_verify_scenario(self) -> list[str]:
|
||||
self.verification_params = LoadParams(
|
||||
verify_clients=self.load_params.verify_clients,
|
||||
scenario=LoadScenario.VERIFY,
|
||||
read_from=self.load_params.read_from,
|
||||
registry_file=self.load_params.registry_file,
|
||||
verify_time=self.load_params.verify_time,
|
||||
load_type=self.load_params.load_type,
|
||||
load_id=self.load_params.load_id,
|
||||
vu_init_time=0,
|
||||
working_dir=self.load_params.working_dir,
|
||||
endpoint_selection_strategy=self.load_params.endpoint_selection_strategy,
|
||||
k6_process_allocation_strategy=self.load_params.k6_process_allocation_strategy,
|
||||
setup_timeout="1s",
|
||||
)
|
||||
|
||||
@reporter.step_deco("Run verify scenario for background load")
|
||||
def _run_verify_scenario(self):
|
||||
if self.verification_params.verify_time is None:
|
||||
raise RuntimeError("verify_time should not be none")
|
||||
|
||||
self.runner.init_k6_instances(self.verification_params, self.endpoints, self.k6_dir)
|
||||
with reporter.step("Run verify scenario"):
|
||||
self.runner.start()
|
||||
self.runner.wait_until_finish()
|
||||
|
||||
with reporter.step("Collect verify issues"):
|
||||
verification_summaries = self._get_results()
|
||||
verifier = LoadVerifier(self.load_params)
|
||||
return verifier.collect_verify_issues(self.load_summaries, verification_summaries)
|
||||
self._prepare(self.verification_params)
|
||||
with reporter.step("Run verify background load data"):
|
||||
for k6_verify_instance in self.k6_instances:
|
||||
k6_verify_instance.start()
|
||||
k6_verify_instance.wait_until_finished(self.verification_params.verify_time)
|
||||
|
||||
@run_optionally(optionals.OPTIONAL_BACKGROUND_LOAD_ENABLED)
|
||||
def _get_results(self) -> dict:
|
||||
with reporter.step(f"Get {self.load_params.scenario.value} scenario results"):
|
||||
return self.runner.get_results()
|
||||
@reporter.step_deco("K6 run results")
|
||||
def get_results(self) -> dict:
|
||||
results = {}
|
||||
for k6_instance in self.k6_instances:
|
||||
if k6_instance.load_params.k6_process_allocation_strategy is None:
|
||||
raise RuntimeError("k6_process_allocation_strategy should not be none")
|
||||
|
||||
def __str__(self) -> str:
|
||||
return self.load_params.__str__()
|
||||
result = k6_instance.get_results()
|
||||
keys_map = {
|
||||
K6ProcessAllocationStrategy.PER_LOAD_NODE: k6_instance.load_node,
|
||||
K6ProcessAllocationStrategy.PER_ENDPOINT: k6_instance.endpoints[0],
|
||||
}
|
||||
key = keys_map[k6_instance.load_params.k6_process_allocation_strategy]
|
||||
results[key] = result
|
||||
|
||||
def __repr__(self) -> str:
|
||||
return repr(self.load_params)
|
||||
return results
|
||||
|
|
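For orientation, a life-cycle sketch for the new-side `BackgroundLoadController` API; the fixtures (`load_params`, `wallet`, `cluster`, `runner`, `load_report`) are assumed to exist and are not defined by this diff.

```python
controller = BackgroundLoadController(
    k6_dir="/opt/k6",               # assumed location of the k6 binary and scenarios
    load_params=load_params,
    loaders_wallet=wallet,
    cluster_nodes=cluster.cluster_nodes,
    nodes_under_load=cluster.cluster_nodes,
    runner=runner,
)
controller.append_reporter(load_report)

controller.startup()        # prepare() + preset() + start()
# ... exercise the system while the background load is running ...
controller.teardown()       # stop, collect summaries, propagate start/end times to reporters
controller.verify()         # collect load issues and, if configured, run the verify scenario
```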
|
@ -1,489 +1,165 @@
|
|||
import datetime
|
||||
import logging
|
||||
import time
|
||||
from typing import TypeVar
|
||||
from concurrent.futures import ThreadPoolExecutor
|
||||
|
||||
import frostfs_testlib.resources.optionals as optionals
|
||||
from frostfs_testlib import reporter
|
||||
from frostfs_testlib.cli import FrostfsAdm, FrostfsCli
|
||||
from frostfs_testlib.cli.netmap_parser import NetmapParser
|
||||
from frostfs_testlib.healthcheck.interfaces import Healthcheck
|
||||
from frostfs_testlib.hosting.interfaces import HostStatus
|
||||
from frostfs_testlib.plugins import load_all
|
||||
from frostfs_testlib.resources.cli import FROSTFS_ADM_CONFIG_PATH, FROSTFS_ADM_EXEC, FROSTFS_CLI_EXEC
|
||||
from frostfs_testlib.resources.common import DEFAULT_WALLET_CONFIG, MORPH_BLOCK_TIME
|
||||
from frostfs_testlib.shell import CommandOptions, Shell, SshConnectionProvider
|
||||
from frostfs_testlib.steps.network import IpHelper
|
||||
from frostfs_testlib.storage.cluster import Cluster, ClusterNode, S3Gate, StorageNode
|
||||
from frostfs_testlib.reporter import get_reporter
|
||||
from frostfs_testlib.shell import CommandOptions, Shell
|
||||
from frostfs_testlib.steps import epoch
|
||||
from frostfs_testlib.storage.cluster import Cluster, ClusterNode, StorageNode
|
||||
from frostfs_testlib.storage.controllers.disk_controller import DiskController
|
||||
from frostfs_testlib.storage.dataclasses.node_base import NodeBase, ServiceClass
|
||||
from frostfs_testlib.testing import parallel
|
||||
from frostfs_testlib.testing.test_control import retry, run_optionally, wait_for_success
|
||||
from frostfs_testlib.utils.datetime_utils import parse_time
|
||||
from frostfs_testlib.testing.test_control import run_optionally, wait_for_success
|
||||
from frostfs_testlib.utils.failover_utils import (
|
||||
wait_all_storage_nodes_returned,
|
||||
wait_for_host_offline,
|
||||
wait_for_host_online,
|
||||
wait_for_node_online,
|
||||
)
|
||||
|
||||
logger = logging.getLogger("NeoLogger")
|
||||
|
||||
|
||||
class StateManager:
|
||||
def __init__(self, cluster_state_controller: "ClusterStateController") -> None:
|
||||
self.csc = cluster_state_controller
|
||||
|
||||
|
||||
StateManagerClass = TypeVar("StateManagerClass", bound=StateManager)
|
||||
reporter = get_reporter()
|
||||
|
||||
|
||||
class ClusterStateController:
|
||||
def __init__(self, shell: Shell, cluster: Cluster, healthcheck: Healthcheck) -> None:
|
||||
def __init__(self, shell: Shell, cluster: Cluster) -> None:
|
||||
self.stopped_nodes: list[ClusterNode] = []
|
||||
self.detached_disks: dict[str, DiskController] = {}
|
||||
self.dropped_traffic: list[ClusterNode] = []
|
||||
self.stopped_services: set[NodeBase] = set()
|
||||
self.stopped_storage_nodes: list[ClusterNode] = []
|
||||
self.cluster = cluster
|
||||
self.healthcheck = healthcheck
|
||||
self.shell = shell
|
||||
self.suspended_services: dict[str, list[ClusterNode]] = {}
|
||||
self.nodes_with_modified_interface: list[ClusterNode] = []
|
||||
self.managers: list[StateManagerClass] = []
|
||||
|
||||
# TODO: move all functionality to managers
|
||||
managers = set(load_all(group="frostfs.testlib.csc_managers"))
|
||||
for manager in managers:
|
||||
self.managers.append(manager(self))
|
||||
|
||||
def manager(self, manager_type: type[StateManagerClass]) -> StateManagerClass:
|
||||
for manager in self.managers:
|
||||
# Subclasses here for the future if we have overriding subclasses of base interface
|
||||
if issubclass(type(manager), manager_type):
|
||||
return manager
|
||||
|
||||
def _get_stopped_by_node(self, node: ClusterNode) -> set[NodeBase]:
|
||||
stopped_by_node = [svc for svc in self.stopped_services if svc.host == node.host]
|
||||
return set(stopped_by_node)
|
||||
|
||||
def _get_stopped_by_type(self, service_type: type[ServiceClass]) -> set[ServiceClass]:
|
||||
stopped_by_type = [svc for svc in self.stopped_services if isinstance(svc, service_type)]
|
||||
return set(stopped_by_type)
|
||||
|
||||
def _from_stopped_nodes(self, service_type: type[ServiceClass]) -> set[ServiceClass]:
|
||||
stopped_on_nodes = set([node.service(service_type) for node in self.stopped_nodes])
|
||||
return set(stopped_on_nodes)
|
||||
|
||||
def _get_online(self, service_type: type[ServiceClass]) -> set[ServiceClass]:
|
||||
stopped_svc = self._get_stopped_by_type(service_type).union(self._from_stopped_nodes(service_type))
|
||||
online_svc = set(self.cluster.services(service_type)) - stopped_svc
|
||||
return online_svc
|
||||
|
||||
@run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
|
||||
@reporter.step("Stop host of node {node}")
|
||||
@reporter.step_deco("Stop host of node {node}")
|
||||
def stop_node_host(self, node: ClusterNode, mode: str):
|
||||
# Drop ssh connection for this node before shutdown
|
||||
provider = SshConnectionProvider()
|
||||
provider.drop(node.host_ip)
|
||||
|
||||
self.stopped_nodes.append(node)
|
||||
with reporter.step(f"Stop host {node.host.config.address}"):
|
||||
node.host.stop_host(mode=mode)
|
||||
self._wait_for_host_offline(node)
|
||||
wait_for_host_offline(self.shell, node.storage_node)
|
||||
self.stopped_nodes.append(node)
|
||||
|
||||
@run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
|
||||
@reporter.step("Shutdown whole cluster")
|
||||
@reporter.step_deco("Shutdown whole cluster")
|
||||
def shutdown_cluster(self, mode: str, reversed_order: bool = False):
|
||||
nodes = reversed(self.cluster.cluster_nodes) if reversed_order else self.cluster.cluster_nodes
|
||||
|
||||
# Drop all ssh connections before shutdown
|
||||
provider = SshConnectionProvider()
|
||||
provider.drop_all()
|
||||
|
||||
nodes = (
|
||||
reversed(self.cluster.cluster_nodes) if reversed_order else self.cluster.cluster_nodes
|
||||
)
|
||||
for node in nodes:
|
||||
with reporter.step(f"Stop host {node.host.config.address}"):
|
||||
self.stopped_nodes.append(node)
|
||||
node.host.stop_host(mode=mode)
|
||||
|
||||
for node in nodes:
|
||||
self._wait_for_host_offline(node)
|
||||
wait_for_host_offline(self.shell, node.storage_node)
|
||||
|
||||
@run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
|
||||
@reporter.step("Start host of node {node}")
|
||||
def start_node_host(self, node: ClusterNode, startup_healthcheck: bool = True):
|
||||
@reporter.step_deco("Stop all storage services on cluster")
|
||||
def stop_all_storage_services(self, reversed_order: bool = False):
|
||||
nodes = (
|
||||
reversed(self.cluster.cluster_nodes) if reversed_order else self.cluster.cluster_nodes
|
||||
)
|
||||
|
||||
for node in nodes:
|
||||
self.stop_storage_service(node)
|
||||
|
||||
@run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
|
||||
@reporter.step_deco("Start host of node {node}")
|
||||
def start_node_host(self, node: ClusterNode):
|
||||
with reporter.step(f"Start host {node.host.config.address}"):
|
||||
node.host.start_host()
|
||||
self._wait_for_host_online(node)
|
||||
self.stopped_nodes.remove(node)
|
||||
if startup_healthcheck:
|
||||
self.wait_startup_healthcheck()
|
||||
wait_for_host_online(self.shell, node.storage_node)
|
||||
wait_for_node_online(node.storage_node)
|
||||
self.stopped_nodes.remove(node)
|
||||
|
||||
@run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
|
||||
@reporter.step("Start stopped hosts")
|
||||
@reporter.step_deco("Start stopped hosts")
|
||||
def start_stopped_hosts(self, reversed_order: bool = False):
|
||||
if not self.stopped_nodes:
|
||||
return
|
||||
|
||||
nodes = reversed(self.stopped_nodes) if reversed_order else self.stopped_nodes
|
||||
for node in nodes:
|
||||
with reporter.step(f"Start host {node.host.config.address}"):
|
||||
node.host.start_host()
|
||||
self.stopped_services.difference_update(self._get_stopped_by_node(node))
|
||||
|
||||
self.stopped_nodes = []
|
||||
with reporter.step("Wait for all nodes to go online"):
|
||||
parallel(self._wait_for_host_online, self.cluster.cluster_nodes)
|
||||
|
||||
self.wait_after_storage_startup()
|
||||
wait_all_storage_nodes_returned(self.shell, self.cluster)
|
||||
|
||||
@run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
|
||||
@reporter.step("Detach disk {device} at {mountpoint} on node {node}")
|
||||
@reporter.step_deco("Detach disk {device} at {mountpoint} on node {node}")
|
||||
def detach_disk(self, node: StorageNode, device: str, mountpoint: str):
|
||||
disk_controller = self._get_disk_controller(node, device, mountpoint)
|
||||
self.detached_disks[disk_controller.id] = disk_controller
|
||||
disk_controller.detach()
|
||||
|
||||
@run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
|
||||
@reporter.step("Attach disk {device} at {mountpoint} on node {node}")
|
||||
@reporter.step_deco("Attach disk {device} at {mountpoint} on node {node}")
|
||||
def attach_disk(self, node: StorageNode, device: str, mountpoint: str):
|
||||
disk_controller = self._get_disk_controller(node, device, mountpoint)
|
||||
disk_controller.attach()
|
||||
self.detached_disks.pop(disk_controller.id, None)
|
||||
|
||||
@run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
|
||||
@reporter.step("Restore detached disks")
|
||||
@reporter.step_deco("Restore detached disks")
|
||||
def restore_disks(self):
|
||||
for disk_controller in self.detached_disks.values():
|
||||
disk_controller.attach()
|
||||
self.detached_disks = {}
|
||||
|
||||
@run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
|
||||
@reporter.step("Stop all {service_type} services")
|
||||
def stop_services_of_type(self, service_type: type[ServiceClass], mask: bool = True):
|
||||
services = self.cluster.services(service_type)
|
||||
self.stopped_services.update(services)
|
||||
parallel([service.stop_service for service in services], mask=mask)
|
||||
@reporter.step_deco("Stop storage service on {node}")
|
||||
def stop_storage_service(self, node: ClusterNode):
|
||||
node.storage_node.stop_service()
|
||||
self.stopped_storage_nodes.append(node)
|
||||
|
||||
@run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
|
||||
@reporter.step("Start all {service_type} services")
|
||||
def start_services_of_type(self, service_type: type[ServiceClass]):
|
||||
services = self.cluster.services(service_type)
|
||||
parallel([service.start_service for service in services])
|
||||
self.stopped_services.difference_update(set(services))
|
||||
|
||||
if service_type == StorageNode:
|
||||
self.wait_after_storage_startup()
|
||||
|
||||
@wait_for_success(600, 60)
|
||||
def wait_s3gate(self, s3gate: S3Gate):
|
||||
with reporter.step(f"Wait for {s3gate} reconnection"):
|
||||
result = s3gate.get_metric("frostfs_s3_gw_pool_current_nodes")
|
||||
assert 'address="127.0.0.1' in result.stdout, "S3Gate should connect to local storage node"
|
||||
|
||||
@reporter.step("Wait for S3Gates reconnection to local storage")
|
||||
def wait_s3gates(self):
|
||||
online_s3gates = self._get_online(S3Gate)
|
||||
if online_s3gates:
|
||||
parallel(self.wait_s3gate, online_s3gates)
|
||||
|
||||
@reporter.step("Wait for cluster startup healtcheck")
|
||||
def wait_startup_healthcheck(self):
|
||||
nodes = self.cluster.nodes(self._get_online(StorageNode))
|
||||
parallel(self.healthcheck.startup_healthcheck, nodes)
|
||||
|
||||
@reporter.step("Wait for storage reconnection to the system")
|
||||
def wait_after_storage_startup(self):
|
||||
self.wait_startup_healthcheck()
|
||||
self.wait_s3gates()
|
||||
|
||||
@run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
|
||||
@reporter.step("Start all stopped services")
|
||||
def start_all_stopped_services(self):
|
||||
stopped_storages = self._get_stopped_by_type(StorageNode)
|
||||
parallel([service.start_service for service in self.stopped_services])
|
||||
self.stopped_services.clear()
|
||||
|
||||
if stopped_storages:
|
||||
self.wait_after_storage_startup()
|
||||
|
||||
@run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
|
||||
@reporter.step("Stop {service_type} service on {node}")
|
||||
def stop_service_of_type(self, node: ClusterNode, service_type: type[ServiceClass], mask: bool = True):
|
||||
service = node.service(service_type)
|
||||
service.stop_service(mask)
|
||||
self.stopped_services.add(service)
|
||||
|
||||
@run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
|
||||
@reporter.step("Start {service_type} service on {node}")
|
||||
def start_service_of_type(self, node: ClusterNode, service_type: type[ServiceClass]):
|
||||
service = node.service(service_type)
|
||||
service.start_service()
|
||||
self.stopped_services.discard(service)
|
||||
|
||||
@run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
|
||||
@reporter.step("Start all stopped {service_type} services")
|
||||
def start_stopped_services_of_type(self, service_type: type[ServiceClass]):
|
||||
stopped_svc = self._get_stopped_by_type(service_type)
|
||||
if not stopped_svc:
|
||||
return
|
||||
|
||||
parallel([svc.start_service for svc in stopped_svc])
|
||||
self.stopped_services.difference_update(stopped_svc)
|
||||
|
||||
if service_type == StorageNode:
|
||||
self.wait_after_storage_startup()
|
||||
|
||||
# TODO: Deprecated
|
||||
@run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
|
||||
@reporter.step("Stop all storage services on cluster")
|
||||
def stop_all_storage_services(self, reversed_order: bool = False):
|
||||
nodes = reversed(self.cluster.cluster_nodes) if reversed_order else self.cluster.cluster_nodes
|
||||
|
||||
for node in nodes:
|
||||
self.stop_service_of_type(node, StorageNode)
|
||||
|
||||
# TODO: Deprecated
|
||||
@run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
|
||||
@reporter.step("Stop all S3 gates on cluster")
|
||||
def stop_all_s3_gates(self, reversed_order: bool = False):
|
||||
nodes = reversed(self.cluster.cluster_nodes) if reversed_order else self.cluster.cluster_nodes
|
||||
|
||||
for node in nodes:
|
||||
self.stop_service_of_type(node, S3Gate)
|
||||
|
||||
# TODO: Deprecated
|
||||
@run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
|
||||
@reporter.step("Stop storage service on {node}")
|
||||
def stop_storage_service(self, node: ClusterNode, mask: bool = True):
|
||||
self.stop_service_of_type(node, StorageNode, mask)
|
||||
|
||||
# TODO: Deprecated
|
||||
@run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
|
||||
@reporter.step("Start storage service on {node}")
|
||||
@reporter.step_deco("Start storage service on {node}")
|
||||
def start_storage_service(self, node: ClusterNode):
|
||||
self.start_service_of_type(node, StorageNode)
|
||||
node.storage_node.start_service()
|
||||
self.stopped_storage_nodes.remove(node)
|
||||
|
||||
# TODO: Deprecated
|
||||
@run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
|
||||
@reporter.step("Start stopped storage services")
|
||||
@reporter.step_deco("Start stopped storage services")
|
||||
def start_stopped_storage_services(self):
|
||||
self.start_stopped_services_of_type(StorageNode)
|
||||
if self.stopped_storage_nodes:
|
||||
# In case we stopped several services, for example (s01-s04):
|
||||
# After starting only s01, it may require connections to s02-s04, which are still down, and fail to start.
|
||||
# Also, if something goes wrong here, we might skip starting s02-s04 altogether, and the cluster would be left in a bad state.
|
||||
# So, to make sure that starting each service is at least attempted, threads are used here.
|
||||
with ThreadPoolExecutor(max_workers=len(self.stopped_storage_nodes)) as executor:
|
||||
start_result = executor.map(self.start_storage_service, self.stopped_storage_nodes)
|
||||
|
||||
# TODO: Deprecated
|
||||
@run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
|
||||
@reporter.step("Stop s3 gate on {node}")
|
||||
def stop_s3_gate(self, node: ClusterNode, mask: bool = True):
|
||||
self.stop_service_of_type(node, S3Gate, mask)
|
||||
# Looks tricky, but if an exception is raised in any thread, it will be "eaten" by ThreadPoolExecutor
|
||||
# and re-raised here when the results are consumed.
|
||||
# Not an ideal solution, but okay for now.
|
||||
for _ in start_result:
|
||||
pass
|
||||
|
||||
# TODO: Deprecated
|
||||
@run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
|
||||
@reporter.step("Start s3 gate on {node}")
|
||||
def start_s3_gate(self, node: ClusterNode):
|
||||
self.start_service_of_type(node, S3Gate)
|
||||
|
||||
# TODO: Deprecated
|
||||
@run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
|
||||
@reporter.step("Start stopped S3 gates")
|
||||
def start_stopped_s3_gates(self):
|
||||
self.start_stopped_services_of_type(S3Gate)
|
||||
wait_all_storage_nodes_returned(self.shell, self.cluster)
|
||||
self.stopped_storage_nodes = []
|
||||
|
||||
@run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
|
||||
@reporter.step("Suspend {process_name} service in {node}")
|
||||
def suspend_service(self, process_name: str, node: ClusterNode):
|
||||
node.host.wait_success_suspend_process(process_name)
|
||||
if self.suspended_services.get(process_name):
|
||||
self.suspended_services[process_name].append(node)
|
||||
else:
|
||||
self.suspended_services[process_name] = [node]
|
||||
|
||||
@run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
|
||||
@reporter.step("Resume {process_name} service in {node}")
|
||||
def resume_service(self, process_name: str, node: ClusterNode):
|
||||
node.host.wait_success_resume_process(process_name)
|
||||
if self.suspended_services.get(process_name) and node in self.suspended_services[process_name]:
|
||||
self.suspended_services[process_name].remove(node)
|
||||
|
||||
@run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
|
||||
@reporter.step("Start suspend processes services")
|
||||
def resume_suspended_services(self):
|
||||
for process_name, list_nodes in self.suspended_services.items():
|
||||
[node.host.wait_success_resume_process(process_name) for node in list_nodes]
|
||||
self.suspended_services = {}
|
||||
|
||||
@reporter.step("Drop traffic to {node}, nodes - {block_nodes}")
|
||||
def drop_traffic(
|
||||
self,
|
||||
node: ClusterNode,
|
||||
wakeup_timeout: int,
|
||||
name_interface: str,
|
||||
block_nodes: list[ClusterNode] = None,
|
||||
) -> None:
|
||||
list_ip = self._parse_interfaces(block_nodes, name_interface)
|
||||
IpHelper.drop_input_traffic_to_node(node, list_ip)
|
||||
time.sleep(wakeup_timeout)
|
||||
self.dropped_traffic.append(node)
|
||||
|
||||
@reporter.step("Start traffic to {node}")
|
||||
def restore_traffic(
|
||||
self,
|
||||
node: ClusterNode,
|
||||
) -> None:
|
||||
IpHelper.restore_input_traffic_to_node(node=node)
|
||||
|
||||
@reporter.step("Restore blocked nodes")
|
||||
def restore_all_traffic(self):
|
||||
parallel(self._restore_traffic_to_node, self.dropped_traffic)
|
||||
|
||||
@run_optionally(optionals.OPTIONAL_FAILOVER_ENABLED)
|
||||
@reporter.step("Hard reboot host {node} via magic SysRq option")
|
||||
def panic_reboot_host(self, node: ClusterNode, wait_for_return: bool = True, startup_healthcheck: bool = True):
|
||||
@reporter.step_deco("Hard reboot host {node} via magic SysRq option")
|
||||
def panic_reboot_host(self, node: ClusterNode, wait_for_return: bool = True):
|
||||
shell = node.host.get_shell()
|
||||
shell.exec('sudo sh -c "echo 1 > /proc/sys/kernel/sysrq"')
|
||||
|
||||
options = CommandOptions(close_stdin=True, timeout=1, check=False)
|
||||
shell.exec('sudo sh -c "echo b > /proc/sysrq-trigger"', options)
|
||||
|
||||
# Drop ssh connection for this node
|
||||
provider = SshConnectionProvider()
|
||||
provider.drop(node.host_ip)
|
||||
|
||||
if wait_for_return:
|
||||
# Let the things to be settled
|
||||
# A little wait here to prevent ssh stuck during panic
|
||||
time.sleep(10)
|
||||
self._wait_for_host_online(node)
|
||||
if startup_healthcheck:
|
||||
self.wait_startup_healthcheck()
|
||||
wait_for_host_online(self.shell, node.storage_node)
|
||||
wait_for_node_online(node.storage_node)
|
||||
|
||||
@reporter.step("Down {interface} to {nodes}")
|
||||
def down_interface(self, nodes: list[ClusterNode], interface: str):
|
||||
for node in nodes:
|
||||
node.host.down_interface(interface=interface)
|
||||
assert node.host.check_state(interface=interface) == "DOWN"
|
||||
self.nodes_with_modified_interface.append(node)
|
||||
@reporter.step_deco("Wait up to {timeout} seconds for nodes on cluster to align epochs")
|
||||
def wait_for_epochs_align(self, timeout=60):
|
||||
@wait_for_success(timeout, 5, None, True)
|
||||
def check_epochs():
|
||||
epochs_by_node = epoch.get_epochs_from_nodes(self.shell, self.cluster)
|
||||
assert (
|
||||
len(set(epochs_by_node.values())) == 1
|
||||
), f"unaligned epochs found: {epochs_by_node}"
|
||||
|
||||
@reporter.step("Up {interface} to {nodes}")
|
||||
def up_interface(self, nodes: list[ClusterNode], interface: str):
|
||||
for node in nodes:
|
||||
node.host.up_interface(interface=interface)
|
||||
assert node.host.check_state(interface=interface) == "UP"
|
||||
if node in self.nodes_with_modified_interface:
|
||||
self.nodes_with_modified_interface.remove(node)
|
||||
check_epochs()
|
||||
|
||||
@reporter.step("Restore interface")
|
||||
def restore_interfaces(self):
|
||||
for node in self.nodes_with_modified_interface:
|
||||
dict_interfaces = node.host.config.interfaces.keys()
|
||||
for name_interface in dict_interfaces:
|
||||
if "mgmt" not in name_interface:
|
||||
node.host.up_interface(interface=name_interface)
|
||||
|
||||
@reporter.step("Get node time")
|
||||
def get_node_date(self, node: ClusterNode) -> datetime:
|
||||
shell = node.host.get_shell()
|
||||
return datetime.datetime.strptime(shell.exec("hwclock -r").stdout.strip(), "%Y-%m-%d %H:%M:%S.%f%z")
|
||||
|
||||
@reporter.step("Set node time to {in_date}")
|
||||
def change_node_date(self, node: ClusterNode, in_date: datetime) -> None:
|
||||
shell = node.host.get_shell()
|
||||
shell.exec(f"date -s @{time.mktime(in_date.timetuple())}")
|
||||
shell.exec("hwclock --systohc")
|
||||
node_time = self.get_node_date(node)
|
||||
with reporter.step(f"Verify difference between {node_time} and {in_date} is less than a minute"):
|
||||
assert (self.get_node_date(node) - in_date) < datetime.timedelta(minutes=1)
|
||||
|
||||
@reporter.step(f"Restore time")
|
||||
def restore_node_date(self, node: ClusterNode) -> None:
|
||||
shell = node.host.get_shell()
|
||||
now_time = datetime.datetime.now(datetime.timezone.utc)
|
||||
with reporter.step(f"Set {now_time} time"):
|
||||
shell.exec(f"date -s @{time.mktime(now_time.timetuple())}")
|
||||
shell.exec("hwclock --systohc")
|
||||
|
||||
@reporter.step("Change the synchronizer status to {status}")
|
||||
def set_sync_date_all_nodes(self, status: str):
|
||||
if status == "active":
|
||||
parallel(self._enable_date_synchronizer, self.cluster.cluster_nodes)
|
||||
return
|
||||
parallel(self._disable_date_synchronizer, self.cluster.cluster_nodes)
|
||||
|
||||
@reporter.step("Set MaintenanceModeAllowed - {status}")
|
||||
def set_maintenance_mode_allowed(self, status: str, cluster_node: ClusterNode) -> None:
|
||||
frostfs_adm = FrostfsAdm(
|
||||
shell=cluster_node.host.get_shell(),
|
||||
frostfs_adm_exec_path=FROSTFS_ADM_EXEC,
|
||||
config_file=FROSTFS_ADM_CONFIG_PATH,
|
||||
)
|
||||
frostfs_adm.morph.set_config(set_key_value=f"MaintenanceModeAllowed={status}")
|
||||
|
||||
@reporter.step("Set mode node to {status}")
|
||||
def set_mode_node(self, cluster_node: ClusterNode, wallet: str, status: str, await_tick: bool = True) -> None:
|
||||
rpc_endpoint = cluster_node.storage_node.get_rpc_endpoint()
|
||||
control_endpoint = cluster_node.service(StorageNode).get_control_endpoint()
|
||||
|
||||
frostfs_adm, frostfs_cli, frostfs_cli_remote = self._get_cli(local_shell=self.shell, cluster_node=cluster_node)
|
||||
node_netinfo = NetmapParser.netinfo(frostfs_cli.netmap.netinfo(rpc_endpoint=rpc_endpoint, wallet=wallet).stdout)
|
||||
|
||||
with reporter.step("If status maintenance, then check that the option is enabled"):
|
||||
if node_netinfo.maintenance_mode_allowed == "false":
|
||||
frostfs_adm.morph.set_config(set_key_value="MaintenanceModeAllowed=true")
|
||||
|
||||
with reporter.step(f"Change the status to {status}"):
|
||||
frostfs_cli_remote.control.set_status(endpoint=control_endpoint, status=status)
|
||||
|
||||
if not await_tick:
|
||||
return
|
||||
|
||||
with reporter.step("Tick 1 epoch, and await 2 block"):
|
||||
frostfs_adm.morph.force_new_epoch()
|
||||
time.sleep(parse_time(MORPH_BLOCK_TIME) * 2)
|
||||
|
||||
self.check_node_status(status=status, wallet=wallet, cluster_node=cluster_node)
|
||||
|
||||
@wait_for_success(80, 8, title="Wait for storage status become {status}")
|
||||
def check_node_status(self, status: str, wallet: str, cluster_node: ClusterNode):
|
||||
frostfs_cli = FrostfsCli(
|
||||
shell=self.shell, frostfs_cli_exec_path=FROSTFS_CLI_EXEC, config_file=DEFAULT_WALLET_CONFIG
|
||||
)
|
||||
netmap = NetmapParser.snapshot_all_nodes(
|
||||
frostfs_cli.netmap.snapshot(rpc_endpoint=cluster_node.storage_node.get_rpc_endpoint(), wallet=wallet).stdout
|
||||
)
|
||||
netmap = [node for node in netmap if cluster_node.host_ip == node.node]
|
||||
if status == "offline":
|
||||
assert not netmap, f"{cluster_node.host_ip} is not offline"
|
||||
else:
|
||||
assert netmap[0].node_status == status.upper(), f"Node state - {netmap[0].node_status} != {status} expect"
|
||||
|
||||
def _get_cli(self, local_shell: Shell, cluster_node: ClusterNode) -> tuple[FrostfsAdm, FrostfsCli, FrostfsCli]:
|
||||
# TODO Move to service config
|
||||
host = cluster_node.host
|
||||
service_config = host.get_service_config(cluster_node.storage_node.name)
|
||||
wallet_path = service_config.attributes["wallet_path"]
|
||||
wallet_password = service_config.attributes["wallet_password"]
|
||||
|
||||
shell = host.get_shell()
|
||||
wallet_config_path = f"/tmp/{cluster_node.storage_node.name}-config.yaml"
|
||||
wallet_config = f'wallet: {wallet_path}\npassword: "{wallet_password}"'
|
||||
shell.exec(f"echo '{wallet_config}' > {wallet_config_path}")
|
||||
|
||||
frostfs_adm = FrostfsAdm(
|
||||
shell=shell, frostfs_adm_exec_path=FROSTFS_ADM_EXEC, config_file=FROSTFS_ADM_CONFIG_PATH
|
||||
)
|
||||
frostfs_cli = FrostfsCli(
|
||||
shell=local_shell, frostfs_cli_exec_path=FROSTFS_CLI_EXEC, config_file=DEFAULT_WALLET_CONFIG
|
||||
)
|
||||
frostfs_cli_remote = FrostfsCli(
|
||||
shell=shell,
|
||||
frostfs_cli_exec_path=FROSTFS_CLI_EXEC,
|
||||
config_file=wallet_config_path,
|
||||
)
|
||||
return frostfs_adm, frostfs_cli, frostfs_cli_remote
|
||||
|
||||
def _enable_date_synchronizer(self, cluster_node: ClusterNode):
|
||||
shell = cluster_node.host.get_shell()
|
||||
shell.exec("timedatectl set-ntp true")
|
||||
cluster_node.host.wait_for_service_to_be_in_state("systemd-timesyncd", "active", 15)
|
||||
|
||||
def _disable_date_synchronizer(self, cluster_node: ClusterNode):
|
||||
shell = cluster_node.host.get_shell()
|
||||
shell.exec("timedatectl set-ntp false")
|
||||
cluster_node.host.wait_for_service_to_be_in_state("systemd-timesyncd", "inactive", 15)
|
||||
|
||||
def _get_disk_controller(self, node: StorageNode, device: str, mountpoint: str) -> DiskController:
|
||||
def _get_disk_controller(
|
||||
self, node: StorageNode, device: str, mountpoint: str
|
||||
) -> DiskController:
|
||||
disk_controller_id = DiskController.get_id(node, device)
|
||||
if disk_controller_id in self.detached_disks.keys():
|
||||
disk_controller = self.detached_disks[disk_controller_id]
|
||||
|
@ -491,46 +167,3 @@ class ClusterStateController:
|
|||
disk_controller = DiskController(node, device, mountpoint)
|
||||
|
||||
return disk_controller
|
||||
|
||||
def _restore_traffic_to_node(self, node):
|
||||
IpHelper.restore_input_traffic_to_node(node)
|
||||
|
||||
def _parse_interfaces(self, nodes: list[ClusterNode], name_interface: str):
|
||||
interfaces = []
|
||||
for node in nodes:
|
||||
dict_interfaces = node.host.config.interfaces
|
||||
for type, ip in dict_interfaces.items():
|
||||
if name_interface in type:
|
||||
interfaces.append(ip)
|
||||
return interfaces
|
||||
|
||||
@reporter.step("Ping node")
|
||||
def _ping_host(self, node: ClusterNode):
|
||||
options = CommandOptions(check=False)
|
||||
return self.shell.exec(f"ping {node.host.config.address} -c 1", options).return_code
|
||||
|
||||
@retry(
|
||||
max_attempts=60, sleep_interval=10, expected_result=HostStatus.ONLINE, title="Waiting for {node} to go online"
|
||||
)
|
||||
def _wait_for_host_online(self, node: ClusterNode):
|
||||
try:
|
||||
ping_result = self._ping_host(node)
|
||||
if ping_result != 0:
|
||||
return HostStatus.OFFLINE
|
||||
return node.host.get_host_status()
|
||||
except Exception as err:
|
||||
logger.warning(f"Host ping fails with error {err}")
|
||||
return HostStatus.OFFLINE
|
||||
|
||||
@retry(
|
||||
max_attempts=60, sleep_interval=10, expected_result=HostStatus.OFFLINE, title="Waiting for {node} to go offline"
|
||||
)
|
||||
def _wait_for_host_offline(self, node: ClusterNode):
|
||||
try:
|
||||
ping_result = self._ping_host(node)
|
||||
if ping_result == 0:
|
||||
return HostStatus.ONLINE
|
||||
return node.host.get_host_status()
|
||||
except Exception as err:
|
||||
logger.warning(f"Host ping fails with error {err}")
|
||||
return HostStatus.ONLINE
|
||||
|
|
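A failover-style sketch against the new-side `ClusterStateController` API (the fixtures and the healthcheck plugin instance are assumptions):

```python
from frostfs_testlib.storage.cluster import S3Gate, StorageNode

csc = ClusterStateController(shell, cluster, healthcheck)

# Stop every storage service (they are masked so systemd will not restart them),
# run the negative scenario, then bring everything back and wait for health.
csc.stop_services_of_type(StorageNode)
# ... assertions against the degraded cluster ...
csc.start_all_stopped_services()

# Or operate on a single node:
node = cluster.cluster_nodes[0]
csc.stop_service_of_type(node, S3Gate)
csc.start_service_of_type(node, S3Gate)
```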
|
@ -99,7 +99,6 @@ class ShardsWatcher:
|
|||
endpoint=self.storage_node.get_control_endpoint(),
|
||||
wallet=self.storage_node.get_remote_wallet_path(),
|
||||
wallet_password=self.storage_node.get_wallet_password(),
|
||||
json_mode=True,
|
||||
)
|
||||
|
||||
return json.loads(response.stdout.split(">", 1)[1])
|
||||
|
|
|
@ -1,49 +0,0 @@
|
|||
from typing import Any
|
||||
|
||||
from frostfs_testlib import reporter
|
||||
from frostfs_testlib.storage.cluster import ClusterNode
|
||||
from frostfs_testlib.storage.controllers.cluster_state_controller import ClusterStateController, StateManager
|
||||
from frostfs_testlib.storage.dataclasses.node_base import ServiceClass
|
||||
from frostfs_testlib.testing import parallel
|
||||
|
||||
|
||||
class ConfigStateManager(StateManager):
|
||||
def __init__(self, cluster_state_controller: ClusterStateController) -> None:
|
||||
super().__init__(cluster_state_controller)
|
||||
self.services_with_changed_config: set[tuple[ClusterNode, ServiceClass]] = set()
|
||||
self.cluster = self.csc.cluster
|
||||
|
||||
@reporter.step("Change configuration for {service_type} on all nodes")
|
||||
def set_on_all_nodes(self, service_type: type[ServiceClass], values: dict[str, Any]):
|
||||
services = self.cluster.services(service_type)
|
||||
nodes = self.cluster.nodes(services)
|
||||
self.services_with_changed_config.update([(node, service_type) for node in nodes])
|
||||
|
||||
self.csc.stop_services_of_type(service_type)
|
||||
parallel([node.config(service_type).set for node in nodes], values=values)
|
||||
self.csc.start_services_of_type(service_type)
|
||||
|
||||
@reporter.step("Change configuration for {service_type} on {node}")
|
||||
def set_on_node(self, node: ClusterNode, service_type: type[ServiceClass], values: dict[str, Any]):
|
||||
self.services_with_changed_config.add((node, service_type))
|
||||
|
||||
self.csc.stop_service_of_type(node, service_type)
|
||||
node.config(service_type).set(values)
|
||||
self.csc.start_service_of_type(node, service_type)
|
||||
|
||||
@reporter.step("Revert all configuration changes")
|
||||
def revert_all(self):
|
||||
if not self.services_with_changed_config:
|
||||
return
|
||||
|
||||
parallel(self._revert_svc, self.services_with_changed_config)
|
||||
self.services_with_changed_config.clear()
|
||||
|
||||
self.csc.start_all_stopped_services()
|
||||
|
||||
# TODO: parallel can't have multiple parallel_items :(
|
||||
@reporter.step("Revert all configuration {node_and_service}")
|
||||
def _revert_svc(self, node_and_service: tuple[ClusterNode, ServiceClass]):
|
||||
node, service_type = node_and_service
|
||||
self.csc.stop_service_of_type(node, service_type)
|
||||
node.config(service_type).revert()
|
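Continuing the previous sketch: configuration changes are normally applied through this manager, which is registered on the controller via the `frostfs.testlib.csc_managers` plugin group; the option key below is illustrative only.

```python
config_manager = csc.manager(ConfigStateManager)

# Stop the services, write the override, start them again on every node.
config_manager.set_on_all_nodes(StorageNode, {"storage:shard:default:resync_metabase": True})
# ... run the test against the reconfigured cluster ...
config_manager.revert_all()
```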
|
@ -3,7 +3,6 @@ from dataclasses import dataclass
|
|||
from enum import Enum
|
||||
from typing import Any, Dict, List, Optional, Union
|
||||
|
||||
from frostfs_testlib.testing.readable import HumanReadableEnum
|
||||
from frostfs_testlib.utils import wallet_utils
|
||||
|
||||
logger = logging.getLogger("NeoLogger")
|
||||
|
@ -11,7 +10,7 @@ EACL_LIFETIME = 100500
|
|||
FROSTFS_CONTRACT_CACHE_TIMEOUT = 30
|
||||
|
||||
|
||||
class EACLOperation(HumanReadableEnum):
|
||||
class EACLOperation(Enum):
|
||||
PUT = "put"
|
||||
GET = "get"
|
||||
HEAD = "head"
|
||||
|
@ -21,24 +20,24 @@ class EACLOperation(HumanReadableEnum):
|
|||
DELETE = "delete"
|
||||
|
||||
|
||||
class EACLAccess(HumanReadableEnum):
|
||||
class EACLAccess(Enum):
|
||||
ALLOW = "allow"
|
||||
DENY = "deny"
|
||||
|
||||
|
||||
class EACLRole(HumanReadableEnum):
|
||||
class EACLRole(Enum):
|
||||
OTHERS = "others"
|
||||
USER = "user"
|
||||
SYSTEM = "system"
|
||||
|
||||
|
||||
class EACLHeaderType(HumanReadableEnum):
|
||||
class EACLHeaderType(Enum):
|
||||
REQUEST = "req" # Filter request headers
|
||||
OBJECT = "obj" # Filter object headers
|
||||
SERVICE = "SERVICE" # Filter service headers. These are not processed by FrostFS nodes and exist for service use only
|
||||
|
||||
|
||||
class EACLMatchType(HumanReadableEnum):
|
||||
class EACLMatchType(Enum):
|
||||
STRING_EQUAL = "=" # Return true if strings are equal
|
||||
STRING_NOT_EQUAL = "!=" # Return true if strings are different
|
||||
|
||||
|
|
|
@ -3,7 +3,7 @@ import yaml
|
|||
from frostfs_testlib.blockchain import RPCClient
|
||||
from frostfs_testlib.storage.constants import ConfigAttributes
|
||||
from frostfs_testlib.storage.dataclasses.node_base import NodeBase
|
||||
from frostfs_testlib.storage.dataclasses.shard import Shard
|
||||
|
||||
|
||||
class InnerRing(NodeBase):
|
||||
"""
|
||||
|
@ -16,7 +16,7 @@ class InnerRing(NodeBase):
|
|||
"""
|
||||
|
||||
def service_healthcheck(self) -> bool:
|
||||
health_metric = "frostfs_ir_ir_health"
|
||||
health_metric = "frostfs_node_ir_health"
|
||||
output = (
|
||||
self.host.get_shell()
|
||||
.exec(f"curl -s localhost:6662 | grep {health_metric} | sed 1,2d")
|
||||
|
@ -110,8 +110,28 @@ class MorphChain(NodeBase):
|
|||
def label(self) -> str:
|
||||
return f"{self.name}: {self.get_endpoint()}"
|
||||
|
||||
def get_http_endpoint(self) -> str:
|
||||
return self._get_attribute("http_endpoint")
|
||||
|
||||
class MainChain(NodeBase):
|
||||
"""
|
||||
Class represents main-chain consensus node in a cluster
|
||||
|
||||
A consensus node is not always the same as a physical host:
|
||||
It can be a service running in a container or on a physical host (or physical node, if you will):
|
||||
From a testing perspective, it's not relevant how it is actually running,
|
||||
since frostfs network will still treat it as "node"
|
||||
"""
|
||||
|
||||
rpc_client: RPCClient
|
||||
|
||||
def construct(self):
|
||||
self.rpc_client = RPCClient(self.get_endpoint())
|
||||
|
||||
def get_endpoint(self) -> str:
|
||||
return self._get_attribute(ConfigAttributes.ENDPOINT_INTERNAL)
|
||||
|
||||
@property
|
||||
def label(self) -> str:
|
||||
return f"{self.name}: {self.get_endpoint()}"
|
||||
|
||||
|
||||
class StorageNode(NodeBase):
|
||||
|
@ -142,55 +162,20 @@ class StorageNode(NodeBase):
|
|||
)
|
||||
return health_metric in output
|
||||
|
||||
def get_shard_config_path(self) -> str:
|
||||
return self._get_attribute(ConfigAttributes.SHARD_CONFIG_PATH)
|
||||
|
||||
def get_shards_config(self) -> tuple[str, dict]:
|
||||
return self.get_config(self.get_shard_config_path())
|
||||
|
||||
def get_shards(self) -> list[Shard]:
|
||||
config = self.get_shards_config()[1]
|
||||
config["storage"]["shard"].pop("default")
|
||||
return [Shard.from_object(shard) for shard in config["storage"]["shard"].values()]
|
||||
|
||||
def get_shards_from_env(self) -> list[Shard]:
|
||||
config = self.get_shards_config()[0]
|
||||
configObj = ConfigObj(StringIO(config))
|
||||
|
||||
pattern = f"{SHARD_PREFIX}\d*"
|
||||
num_shards = len(set(re.findall(pattern, config)))
|
||||
|
||||
return [Shard.from_config_object(configObj, shard_id) for shard_id in range(num_shards)]
|
||||
|
||||
def get_control_endpoint(self) -> str:
|
||||
return self._get_attribute(ConfigAttributes.CONTROL_ENDPOINT)
|
||||
|
||||
def get_un_locode(self):
|
||||
return self._get_attribute(ConfigAttributes.UN_LOCODE)
|
||||
|
||||
def get_data_directory(self) -> str:
|
||||
return self.host.get_data_directory(self.name)
|
||||
|
||||
def get_storage_config(self) -> str:
|
||||
return self.host.get_storage_config(self.name)
|
||||
|
||||
def get_http_hostname(self) -> str:
|
||||
return self._get_attribute(ConfigAttributes.HTTP_HOSTNAME)
|
||||
|
||||
def get_s3_hostname(self) -> str:
|
||||
return self._get_attribute(ConfigAttributes.S3_HOSTNAME)
|
||||
|
||||
def delete_blobovnicza(self):
|
||||
self.host.delete_blobovnicza(self.name)
|
||||
|
||||
def delete_fstree(self):
|
||||
self.host.delete_fstree(self.name)
|
||||
|
||||
def delete_file(self, file_path: str) -> None:
|
||||
self.host.delete_file(file_path)
|
||||
|
||||
def is_file_exist(self, file_path) -> bool:
|
||||
return self.host.is_file_exist(file_path)
|
||||
def delete_pilorama(self):
|
||||
self.host.delete_pilorama(self.name)
|
||||
|
||||
def delete_metabase(self):
|
||||
self.host.delete_metabase(self.name)
|
||||
|
|
|
@@ -1,22 +1,17 @@
from abc import abstractmethod
from abc import ABC, abstractmethod
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional, TypedDict, TypeVar
from typing import Optional, Tuple, TypedDict, TypeVar

import yaml
from dateutil import parser

from frostfs_testlib import reporter
from frostfs_testlib.hosting.config import ServiceConfig
from frostfs_testlib.hosting.interfaces import Host
from frostfs_testlib.shell.interfaces import CommandResult
from frostfs_testlib.storage.constants import ConfigAttributes
from frostfs_testlib.testing.readable import HumanReadableABC
from frostfs_testlib.utils import wallet_utils


@dataclass
class NodeBase(HumanReadableABC):
class NodeBase(ABC):
    """
    Represents a node of some underlying service
    """

@@ -24,7 +19,6 @@ class NodeBase(HumanReadableABC):
    id: str
    name: str
    host: Host
    _process_name: str

    def __init__(self, id, name, host) -> None:
        self.id = id

@@ -54,40 +48,18 @@
    def get_service_systemctl_name(self) -> str:
        return self._get_attribute(ConfigAttributes.SERVICE_NAME)

    def get_process_name(self) -> str:
        return self._process_name

    def start_service(self):
        with reporter.step(f"Unmask {self.name} service on {self.host.config.address}"):
            self.host.unmask_service(self.name)

        with reporter.step(f"Start {self.name} service on {self.host.config.address}"):
            self.host.start_service(self.name)
        self.host.start_service(self.name)

    @abstractmethod
    def service_healthcheck(self) -> bool:
        """Service healthcheck."""

    # TODO: Migrate to sub-class Metrcis (not yet exists :))
    def get_metric(self, metric: str) -> CommandResult:
        shell = self.host.get_shell()
        result = shell.exec(f"curl -s {self.get_metrics_endpoint()} | grep -e '^{metric}'")
        return result

    def get_metrics_endpoint(self) -> str:
        return self._get_attribute(ConfigAttributes.ENDPOINT_PROMETHEUS)

    def stop_service(self, mask: bool = True):
        if mask:
            with reporter.step(f"Mask {self.name} service on {self.host.config.address}"):
                self.host.mask_service(self.name)

        with reporter.step(f"Stop {self.name} service on {self.host.config.address}"):
            self.host.stop_service(self.name)
    def stop_service(self):
        self.host.stop_service(self.name)

    def restart_service(self):
        with reporter.step(f"Restart {self.name} service on {self.host.config.address}"):
            self.host.restart_service(self.name)
        self.host.restart_service(self.name)

    def get_wallet_password(self) -> str:
        return self._get_attribute(ConfigAttributes.WALLET_PASSWORD)

@@ -120,27 +92,8 @@ class NodeBase(HumanReadableABC):
            ConfigAttributes.WALLET_CONFIG,
        )

    def get_logger_config_path(self) -> str:
        """
        Returns config path for logger located on remote host
        """
        config_attributes = self.host.get_service_config(self.name)
        return self._get_attribute(
            ConfigAttributes.LOGGER_CONFIG_PATH) if ConfigAttributes.LOGGER_CONFIG_PATH in config_attributes.attributes else None

    @property
    def config_dir(self) -> str:
        return self._get_attribute(ConfigAttributes.CONFIG_DIR)

    @property
    def main_config_path(self) -> str:
        return self._get_attribute(ConfigAttributes.CONFIG_PATH)

    # TODO: Deprecated
    def get_config(self, config_file_path: Optional[str] = None) -> tuple[str, dict]:
        if config_file_path is None:
            config_file_path = self._get_attribute(ConfigAttributes.CONFIG_PATH)

    def get_config(self) -> Tuple[str, dict]:
        config_file_path = self._get_attribute(ConfigAttributes.CONFIG_PATH)
        shell = self.host.get_shell()

        result = shell.exec(f"cat {config_file_path}")

@@ -149,11 +102,8 @@ class NodeBase(HumanReadableABC):
        config = yaml.safe_load(config_text)
        return config_file_path, config

    # TODO: Deprecated
    def save_config(self, new_config: dict, config_file_path: Optional[str] = None) -> None:
        if config_file_path is None:
            config_file_path = self._get_attribute(ConfigAttributes.CONFIG_PATH)

    def save_config(self, new_config: dict) -> None:
        config_file_path = self._get_attribute(ConfigAttributes.CONFIG_PATH)
        shell = self.host.get_shell()

        config_str = yaml.dump(new_config)

@@ -164,7 +114,9 @@ class NodeBase(HumanReadableABC):
        storage_wallet_pass = self.get_wallet_password()
        return wallet_utils.get_wallet_public_key(storage_wallet_path, storage_wallet_pass)

    def _get_attribute(self, attribute_name: str, default_attribute_name: Optional[str] = None) -> str:
    def _get_attribute(
        self, attribute_name: str, default_attribute_name: Optional[str] = None
    ) -> str:
        config = self.host.get_service_config(self.name)

        if attribute_name not in config.attributes:

@@ -180,15 +132,6 @@ class NodeBase(HumanReadableABC):
    def _get_service_config(self) -> ServiceConfig:
        return self.host.get_service_config(self.name)

    def get_service_uptime(self, service: str) -> datetime:
        result = self.host.get_shell().exec(
            f"systemctl show {service} --property ActiveEnterTimestamp | cut -d '=' -f 2"
        )
        start_time = parser.parse(result.stdout.strip())
        current_time = datetime.now(tz=timezone.utc)
        active_time = current_time - start_time
        return active_time


ServiceClass = TypeVar("ServiceClass", bound=NodeBase)

@@ -1,13 +0,0 @@
from dataclasses import dataclass


@dataclass
class ObjectSize:
    name: str
    value: int

    def __str__(self) -> str:
        return self.name

    def __repr__(self) -> str:
        return self.__str__()

@@ -1,99 +0,0 @@
import json
import pathlib
import re
from dataclasses import dataclass
from io import StringIO

import allure
import pytest
import yaml
from configobj import ConfigObj
from frostfs_testlib.cli import FrostfsCli
from frostfs_testlib.resources.cli import CLI_DEFAULT_TIMEOUT
from frostfs_testlib.resources.common import DEFAULT_WALLET_CONFIG

SHARD_PREFIX = "FROSTFS_STORAGE_SHARD_"
BLOBSTOR_PREFIX = "_BLOBSTOR_"


@dataclass
class Blobstor:
    path: str
    path_type: str

    def __eq__(self, other) -> bool:
        if not isinstance(other, self.__class__):
            raise RuntimeError(f"Only two {self.__class__.__name__} instances can be compared")
        return self.path == other.path and self.path_type == other.path_type

    def __hash__(self):
        return hash((self.path, self.path_type))

    @staticmethod
    def from_config_object(section: ConfigObj, shard_id: str, blobstor_id: str):
        var_prefix = f"{SHARD_PREFIX}{shard_id}{BLOBSTOR_PREFIX}{blobstor_id}"
        return Blobstor(section.get(f"{var_prefix}_PATH"), section.get(f"{var_prefix}_TYPE"))


@dataclass
class Shard:
    blobstor: list[Blobstor]
    metabase: str
    writecache: str
    pilorama: str

    def __eq__(self, other) -> bool:
        if not isinstance(other, self.__class__):
            raise RuntimeError(f"Only two {self.__class__.__name__} instances can be compared")
        return (
            set(self.blobstor) == set(other.blobstor)
            and self.metabase == other.metabase
            and self.writecache == other.writecache
            and self.pilorama == other.pilorama
        )

    def __hash__(self):
        return hash((self.metabase, self.writecache))

    @staticmethod
    def _get_blobstor_count_from_section(config_object: ConfigObj, shard_id: int):
        pattern = f"{SHARD_PREFIX}{shard_id}{BLOBSTOR_PREFIX}"
        blobstors = {key[: len(pattern) + 2] for key in config_object.keys() if pattern in key}
        return len(blobstors)

    @staticmethod
    def from_config_object(config_object: ConfigObj, shard_id: int):
        var_prefix = f"{SHARD_PREFIX}{shard_id}"

        blobstor_count = Shard._get_blobstor_count_from_section(config_object, shard_id)
        blobstors = [
            Blobstor.from_config_object(config_object, shard_id, blobstor_id) for blobstor_id in range(blobstor_count)
        ]

        write_cache_enabled = config_object.as_bool(f"{var_prefix}_WRITECACHE_ENABLED")

        return Shard(
            blobstors,
            config_object.get(f"{var_prefix}_METABASE_PATH"),
            config_object.get(f"{var_prefix}_WRITECACHE_PATH") if write_cache_enabled else "",
        )

    @staticmethod
    def from_object(shard):
        metabase = shard["metabase"]["path"] if "path" in shard["metabase"] else shard["metabase"]
        writecache = shard["writecache"]["path"] if "path" in shard["writecache"] else shard["writecache"]

        # Currently due to issue we need to check if pilorama exists in keys
        # TODO: make pilorama mandatory after fix
        if shard.get("pilorama"):
            pilorama = shard["pilorama"]["path"] if "path" in shard["pilorama"] else shard["pilorama"]
        else:
            pilorama = None

        return Shard(
            blobstor=[Blobstor(path=blobstor["path"], path_type=blobstor["type"]) for blobstor in shard["blobstor"]],
            metabase=metabase,
            writecache=writecache,
            pilorama=pilorama
        )

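# --- Illustrative usage sketch (not part of the diff above) ---
# A minimal, hypothetical example of feeding Shard.from_object with a mapping
# shaped like the "storage.shard" section of a node config; every path below
# is invented purely for illustration.
import yaml

sample_shard = yaml.safe_load(
    """
    blobstor:
      - path: /srv/frostfs/blobovnicza
        type: blobovnicza
      - path: /srv/frostfs/fstree
        type: fstree
    metabase:
      path: /srv/frostfs/meta.db
    writecache:
      path: /srv/frostfs/writecache
    pilorama:
      path: /srv/frostfs/pilorama.db
    """
)
shard = Shard.from_object(sample_shard)
assert shard.metabase == "/srv/frostfs/meta.db"
assert len(shard.blobstor) == 2
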
@@ -1,8 +1,6 @@
from dataclasses import dataclass
from typing import Optional

from frostfs_testlib.testing.readable import HumanReadableEnum


@dataclass
class ObjectRef:

@@ -25,52 +23,3 @@ class StorageObjectInfo(ObjectRef):
    attributes: Optional[list[dict[str, str]]] = None
    tombstone: Optional[str] = None
    locks: Optional[list[LockObjectInfo]] = None


class ModeNode(HumanReadableEnum):
    MAINTENANCE: str = "maintenance"
    ONLINE: str = "online"
    OFFLINE: str = "offline"


@dataclass
class NodeNetmapInfo:
    node_id: str = None
    node_status: ModeNode = None
    node_data_ips: list[str] = None
    cluster_name: str = None
    continent: str = None
    country: str = None
    country_code: str = None
    external_address: list[str] = None
    location: str = None
    node: str = None
    price: int = None
    sub_div: str = None
    sub_div_code: int = None
    un_locode: str = None
    role: str = None


class Interfaces(HumanReadableEnum):
    DATA_O: str = "data0"
    DATA_1: str = "data1"
    MGMT: str = "mgmt"
    INTERNAL_0: str = "internal0"
    INTERNAL_1: str = "internal1"


@dataclass
class NodeNetInfo:
    epoch: str = None
    network_magic: str = None
    time_per_block: str = None
    container_fee: str = None
    epoch_duration: str = None
    inner_ring_candidate_fee: str = None
    maximum_object_size: str = None
    withdrawal_fee: str = None
    homomorphic_hashing_disabled: str = None
    maintenance_mode_allowed: str = None
    eigen_trust_alpha: str = None
    eigen_trust_iterations: str = None

@@ -1,2 +0,0 @@
from frostfs_testlib.testing.parallel import parallel
from frostfs_testlib.testing.test_control import expect_not_raises, run_optionally, wait_for_success

@@ -1,13 +1,12 @@
import time
from typing import Optional

from frostfs_testlib import reporter
from frostfs_testlib.resources.common import MORPH_BLOCK_TIME
from frostfs_testlib.reporter import get_reporter
from frostfs_testlib.shell import Shell
from frostfs_testlib.steps import epoch
from frostfs_testlib.storage.cluster import Cluster
from frostfs_testlib.storage.dataclasses.frostfs_services import StorageNode
from frostfs_testlib.utils import datetime_utils

reporter = get_reporter()


# To skip adding every mandatory singleton dependency to EACH test function

@@ -15,24 +14,13 @@ class ClusterTestBase:
    shell: Shell
    cluster: Cluster

    @reporter.step("Tick {epochs_to_tick} epochs, wait {wait_block} block")
    def tick_epochs(
        self,
        epochs_to_tick: int,
        alive_node: Optional[StorageNode] = None,
        wait_block: int = None,
    ):
    @reporter.step_deco("Tick {epochs_to_tick} epochs")
    def tick_epochs(self, epochs_to_tick: int, alive_node: Optional[StorageNode] = None):
        for _ in range(epochs_to_tick):
            self.tick_epoch(alive_node, wait_block)
            self.tick_epoch(alive_node)

    def tick_epoch(
        self,
        alive_node: Optional[StorageNode] = None,
        wait_block: int = None,
    ):
    def tick_epoch(self, alive_node: Optional[StorageNode] = None):
        epoch.tick_epoch(self.shell, self.cluster, alive_node=alive_node)
        if wait_block:
            time.sleep(datetime_utils.parse_time(MORPH_BLOCK_TIME) * wait_block)

    def wait_for_epochs_align(self):
        epoch.wait_for_epochs_align(self.shell, self.cluster)

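# --- Illustrative usage sketch (not part of the diff above) ---
# A hypothetical test class built on ClusterTestBase, only to show how the
# tick_epochs/wait_for_epochs_align helpers above are meant to be called; the
# test name and assertion-free body are made up.
class TestEpochExample(ClusterTestBase):
    def test_object_lifetime_spans_two_epochs(self):
        # On the newer signature, wait one morph block after each tick.
        self.tick_epochs(2, wait_block=1)
        self.wait_for_epochs_align()
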
@@ -1,98 +0,0 @@
import itertools
from concurrent.futures import Future, ThreadPoolExecutor
from typing import Callable, Collection, Optional, Union


def parallel(
    fn: Union[Callable, list[Callable]],
    parallel_items: Optional[Collection] = None,
    *args,
    **kwargs,
) -> list[Future]:
    """Parallel execution of selected function or list of function using ThreadPoolExecutor.
    Also checks the exceptions of each thread.

    Args:
        fn: function(s) to run. Can work in 2 modes:
            1. If you have dedicated function with some items to process in parallel,
            like you do with executor.map(fn, parallel_items), pass this function as fn.
            2. If you need to process each item with it's own method, like you do
            with executor.submit(fn, args, kwargs), pass list of methods here.
            See examples in runners.py in this repo.
        parallel_items: items to iterate on (should be None in case of 2nd mode).
        args: any other args required in target function(s).
            if any arg is itertool.cycle, it will be iterated before passing to new thread.
        kwargs: any other kwargs required in target function(s)
            if any kwarg is itertool.cycle, it will be iterated before passing to new thread.

    Returns:
        list of futures.
    """

    if callable(fn):
        if not parallel_items:
            raise RuntimeError("Parallel items should not be none when fn is callable.")
        futures = _run_by_items(fn, parallel_items, *args, **kwargs)
    elif isinstance(fn, list):
        futures = _run_by_fn_list(fn, *args, **kwargs)
    else:
        raise RuntimeError("Nothing to run. fn should be either callable or list of callables.")

    # Check for exceptions
    exceptions = [future.exception() for future in futures if future.exception()]
    if exceptions:
        message = "\n".join([str(e) for e in exceptions])
        raise RuntimeError(f"The following exceptions occured during parallel run:\n{message}")
    return futures


def _run_by_fn_list(fn_list: list[Callable], *args, **kwargs) -> list[Future]:
    if not len(fn_list):
        return []
    if not all([callable(f) for f in fn_list]):
        raise RuntimeError("fn_list should contain only callables")

    futures: list[Future] = []

    with ThreadPoolExecutor(max_workers=len(fn_list)) as executor:
        for fn in fn_list:
            task_args = _get_args(*args)
            task_kwargs = _get_kwargs(**kwargs)

            futures.append(executor.submit(fn, *task_args, **task_kwargs))

    return futures


def _run_by_items(fn: Callable, parallel_items: Collection, *args, **kwargs) -> list[Future]:
    futures: list[Future] = []

    with ThreadPoolExecutor(max_workers=len(parallel_items)) as executor:
        for item in parallel_items:
            task_args = _get_args(*args)
            task_kwargs = _get_kwargs(**kwargs)
            task_args.insert(0, item)

            futures.append(executor.submit(fn, *task_args, **task_kwargs))

    return futures


def _get_kwargs(**kwargs):
    actkwargs = {}
    for key, arg in kwargs.items():
        if isinstance(arg, itertools.cycle):
            actkwargs[key] = next(arg)
        else:
            actkwargs[key] = arg
    return actkwargs


def _get_args(*args):
    actargs = []
    for arg in args:
        if isinstance(arg, itertools.cycle):
            actargs.append(next(arg))
        else:
            actargs.append(arg)
    return actargs

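# --- Illustrative usage sketch (not part of the diff above) ---
# Two hypothetical ways of calling the removed parallel() helper: mode 1 passes
# one callable plus the items to fan out over, mode 2 passes a list of
# callables; itertools.cycle arguments are unrolled once per thread. The
# restart() helper and node names are made up for the example.
import itertools

def restart(node, hard=False):
    print(f"restarting {node}, hard={hard}")

# Mode 1: one function, many items (like executor.map).
futures = parallel(restart, parallel_items=["node1", "node2"], hard=True)

# Mode 2: a list of callables (like executor.submit); the cycled kwarg gives
# each thread the next value from the cycle.
futures = parallel([restart, restart], "node3", hard=itertools.cycle([True, False]))
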
@@ -1,36 +0,0 @@
from abc import ABCMeta
from enum import Enum


class HumanReadableEnum(Enum):
    def __str__(self):
        return self._name_

    def __repr__(self):
        return self._name_


class HumanReadableABCMeta(ABCMeta):
    def __str__(cls):
        if "__repr_name__" in cls.__dict__:
            return cls.__dict__["__repr_name__"]
        return cls.__name__

    def __repr__(cls):
        if "__repr_name__" in cls.__dict__:
            return cls.__dict__["__repr_name__"]
        return cls.__name__


class HumanReadableABC(metaclass=HumanReadableABCMeta):
    @classmethod
    def __str__(cls):
        if "__repr_name__" in cls.__dict__:
            return cls.__dict__["__repr_name__"]
        return type(cls).__name__

    @classmethod
    def __repr__(cls):
        if "__repr_name__" in cls.__dict__:
            return cls.__dict__["__repr_name__"]
        return type(cls).__name__

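# --- Illustrative usage sketch (not part of the diff above) ---
# Hypothetical classes showing what the removed readable.py helpers provide:
# enum members print as their bare name, and a class derived from
# HumanReadableABC can override its printed name via __repr_name__.
class Color(HumanReadableEnum):
    RED = "red"


class MyClient(HumanReadableABC):
    __repr_name__ = "My client"


print(f"{Color.RED}")  # -> "RED" instead of "Color.RED"
print(f"{MyClient}")   # -> "My client" instead of the class name
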
@@ -7,9 +7,6 @@ from typing import Any
from _pytest.outcomes import Failed
from pytest import fail

from frostfs_testlib import reporter
from frostfs_testlib.utils.func_utils import format_by_args

logger = logging.getLogger("NeoLogger")

# TODO: we may consider deprecating some methods here and use tenacity instead

@@ -53,7 +50,7 @@ class expect_not_raises:
        return impl


def retry(max_attempts: int, sleep_interval: int = 1, expected_result: Any = None, title: str = None):
def retry(max_attempts: int, sleep_interval: int = 1, expected_result: Any = None):
    """
    Decorator to wait for some conditions/functions to pass successfully.
    This is useful if you don't know exact time when something should pass successfully and do not

@@ -65,7 +62,8 @@ def retry(max_attempts: int, sleep_interval: int = 1, expected_result: Any = Non
    assert max_attempts >= 1, "Cannot apply retry decorator with max_attempts < 1"

    def wrapper(func):
        def call(func, *a, **kw):
            @wraps(func)
            def impl(*a, **kw):
                last_exception = None
                for _ in range(max_attempts):
                    try:

@@ -86,14 +84,6 @@ def retry(max_attempts: int, sleep_interval: int = 1, expected_result: Any = Non
                if last_exception is not None:
                    raise last_exception

        @wraps(func)
        def impl(*a, **kw):
            if title is not None:
                with reporter.step(format_by_args(func, title, *a, **kw)):
                    return call(func, *a, **kw)

            return call(func, *a, **kw)

        return impl

    return wrapper

@@ -134,7 +124,6 @@ def wait_for_success(
    expected_result: Any = None,
    fail_testcase: bool = False,
    fail_message: str = "",
    title: str = None,
):
    """
    Decorator to wait for some conditions/functions to pass successfully.

@@ -145,7 +134,8 @@ def wait_for_success(
    """

    def wrapper(func):
        def call(func, *a, **kw):
            @wraps(func)
            def impl(*a, **kw):
                start = int(round(time()))
                last_exception = None
                while start + max_wait_time >= int(round(time())):

@@ -170,14 +160,6 @@ def wait_for_success(
                if last_exception is not None:
                    raise last_exception

        @wraps(func)
        def impl(*a, **kw):
            if title is not None:
                with reporter.step(format_by_args(func, title, *a, **kw)):
                    return call(func, *a, **kw)

            return call(func, *a, **kw)

        return impl

    return wrapper

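# --- Illustrative usage sketch (not part of the diff above) ---
# Hypothetical checks decorated with the helpers above. On the newer side of
# the diff both decorators accept a `title`, rendered as a reporter step via
# format_by_args; the function bodies below are stand-ins for real checks.
@retry(max_attempts=10, sleep_interval=2, expected_result=0, title="Wait for {service} to stop")
def service_return_code(service: str) -> int:
    return 0  # placeholder result for illustration


@wait_for_success(60, 5, title="Wait for cluster to settle")
def cluster_settled() -> bool:
    return True
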
@@ -1,7 +1,3 @@
"""
Idea of utils is to have small utilitary functions which are not dependent of anything.
"""

import frostfs_testlib.utils.converting_utils
import frostfs_testlib.utils.datetime_utils
import frostfs_testlib.utils.json_utils

@@ -5,28 +5,76 @@
"""
Helper functions to use with `frostfs-cli`, `neo-go` and other CLIs.
"""
import csv
import json
import logging
import re
import subprocess
import sys
from contextlib import suppress
from datetime import datetime
from io import StringIO
from textwrap import shorten
from typing import Dict, List, TypedDict, Union
from typing import TypedDict, Union

import pexpect

from frostfs_testlib import reporter
from frostfs_testlib.storage.dataclasses.storage_object_info import NodeNetmapInfo
from frostfs_testlib.reporter import get_reporter

reporter = get_reporter()
logger = logging.getLogger("NeoLogger")
COLOR_GREEN = "\033[92m"
COLOR_OFF = "\033[0m"


def _cmd_run(cmd: str, timeout: int = 90) -> str:
    """
    Runs given shell command <cmd>, in case of success returns its stdout,
    in case of failure returns error message.
    """
    compl_proc = None
    start_time = datetime.now()
    try:
        logger.info(f"{COLOR_GREEN}Executing command: {cmd}{COLOR_OFF}")
        start_time = datetime.utcnow()
        compl_proc = subprocess.run(
            cmd,
            check=True,
            universal_newlines=True,
            stdout=subprocess.PIPE,
            stderr=subprocess.STDOUT,
            timeout=timeout,
            shell=True,
        )
        output = compl_proc.stdout
        return_code = compl_proc.returncode
        end_time = datetime.utcnow()
        logger.info(f"{COLOR_GREEN}Output: {output}{COLOR_OFF}")
        _attach_allure_log(cmd, output, return_code, start_time, end_time)

        return output
    except subprocess.CalledProcessError as exc:
        logger.info(
            f"Command: {cmd}\n" f"Error:\nreturn code: {exc.returncode} " f"\nOutput: {exc.output}"
        )
        end_time = datetime.now()
        return_code, cmd_output = subprocess.getstatusoutput(cmd)
        _attach_allure_log(cmd, cmd_output, return_code, start_time, end_time)

        raise RuntimeError(
            f"Command: {cmd}\n" f"Error:\nreturn code: {exc.returncode}\n" f"Output: {exc.output}"
        ) from exc
    except OSError as exc:
        raise RuntimeError(f"Command: {cmd}\n" f"Output: {exc.strerror}") from exc
    except Exception as exc:
        return_code, cmd_output = subprocess.getstatusoutput(cmd)
        end_time = datetime.now()
        _attach_allure_log(cmd, cmd_output, return_code, start_time, end_time)
        logger.info(
            f"Command: {cmd}\n"
            f"Error:\nreturn code: {return_code}\n"
            f"Output: {exc.output.decode('utf-8') if type(exc.output) is bytes else exc.output}"
        )
        raise


def _run_with_passwd(cmd: str) -> str:
    child = pexpect.spawn(cmd)
    child.delaybeforesend = 1

@@ -64,7 +112,9 @@ def _configure_aws_cli(cmd: str, key_id: str, access_key: str, out_format: str =
    return cmd.decode()


def _attach_allure_log(cmd: str, output: str, return_code: int, start_time: datetime, end_time: datetime) -> None:
def _attach_allure_log(
    cmd: str, output: str, return_code: int, start_time: datetime, end_time: datetime
) -> None:
    command_attachment = (
        f"COMMAND: '{cmd}'\n"
        f"OUTPUT:\n {output}\n"

@@ -83,64 +133,3 @@ def log_command_execution(cmd: str, output: Union[str, TypedDict]) -> None:
    command_attachment = f"COMMAND: '{cmd}'\n" f"OUTPUT:\n {output}\n"
    with reporter.step(f'COMMAND: {shorten(cmd, width=60, placeholder="...")}'):
        reporter.attach(command_attachment, "Command execution")


def parse_netmap_output(output: str) -> list[NodeNetmapInfo]:
    """
    The code will parse each line and return each node as dataclass.
    """
    netmap_nodes = output.split("Node ")[1:]
    dataclasses_netmap = []
    result_netmap = {}

    regexes = {
        "node_id": r"\d+: (?P<node_id>\w+)",
        "node_data_ips": r"(?P<node_data_ips>/ip4/.+?)$",
        "node_status": r"(?P<node_status>ONLINE|OFFLINE)",
        "cluster_name": r"ClusterName: (?P<cluster_name>\w+)",
        "continent": r"Continent: (?P<continent>\w+)",
        "country": r"Country: (?P<country>\w+)",
        "country_code": r"CountryCode: (?P<country_code>\w+)",
        "external_address": r"ExternalAddr: (?P<external_address>/ip[4].+?)$",
        "location": r"Location: (?P<location>\w+.*)",
        "node": r"Node: (?P<node>\d+\.\d+\.\d+\.\d+)",
        "price": r"Price: (?P<price>\d+)",
        "sub_div": r"SubDiv: (?P<sub_div>.*)",
        "sub_div_code": r"SubDivCode: (?P<sub_div_code>\w+)",
        "un_locode": r"UN-LOCODE: (?P<un_locode>\w+.*)",
        "role": r"role: (?P<role>\w+)",
    }

    for node in netmap_nodes:
        for key, regex in regexes.items():
            search_result = re.search(regex, node, flags=re.MULTILINE)
            if key == "node_data_ips":
                result_netmap[key] = search_result[key].strip().split(" ")
                continue
            if key == "external_address":
                result_netmap[key] = search_result[key].strip().split(",")
                continue
            if search_result == None:
                result_netmap[key] = None
                continue
            result_netmap[key] = search_result[key].strip()

        dataclasses_netmap.append(NodeNetmapInfo(**result_netmap))

    return dataclasses_netmap


def parse_cmd_table(output: str, delimiter="|") -> list[dict[str, str]]:
    parsing_output = []
    reader = csv.reader(StringIO(output.strip()), delimiter=delimiter)
    iter_reader = iter(reader)
    header_row = next(iter_reader)
    for row in iter_reader:
        table = {}
        for i in range(len(row)):
            header = header_row[i].strip().lower().replace(" ", "_")
            value = row[i].strip().lower()
            if header:
                table[header] = value
        parsing_output.append(table)
    return parsing_output

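# --- Illustrative usage sketch (not part of the diff above) ---
# Hypothetical pipe-delimited CLI output fed to parse_cmd_table; headers are
# lower-cased and spaces become underscores. The shard IDs below are made up.
sample_output = (
    "Shard ID | Mode \n"
    "abc123   | read-write \n"
    "def456   | degraded \n"
)
rows = parse_cmd_table(sample_output)
# -> [{"shard_id": "abc123", "mode": "read-write"},
#     {"shard_id": "def456", "mode": "degraded"}]
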
@@ -1,23 +1,10 @@
import base64
import binascii
import json
from typing import Tuple

import base58


def calc_unit(value: float, skip_units: int = 0) -> Tuple[float, str]:
    units = ["B", "KiB", "MiB", "GiB", "TiB"]

    for unit in units[skip_units:]:
        if value < 1024:
            return value, unit

        value = value / 1024.0

    return value, unit


def str_to_ascii_hex(input: str) -> str:
    b = binascii.hexlify(input.encode())
    return str(b)[2:-1]

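# --- Illustrative usage sketch (not part of the diff above) ---
# calc_unit repeatedly divides by 1024 until the value fits the current unit;
# skip_units shifts the starting unit (e.g. 1 means the input is already KiB).
value, unit = calc_unit(5 * 1024 * 1024)
print(f"{value:.1f} {unit}")   # -> "5.0 MiB"

value, unit = calc_unit(2048, skip_units=1)
print(f"{value:.1f} {unit}")   # -> "2.0 MiB" (input treated as KiB)
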
@@ -1,12 +1,13 @@
import logging
import re

from frostfs_testlib import reporter
from frostfs_testlib.reporter import get_reporter

reporter = get_reporter()
logger = logging.getLogger("NeoLogger")


@reporter.step("Read environment.properties")
@reporter.step_deco("Read environment.properties")
def read_env_properties(file_path: str) -> dict:
    with open(file_path, "r") as file:
        raw_content = file.read()

@@ -22,7 +23,7 @@ def read_env_properties(file_path: str) -> dict:
    return env_properties


@reporter.step("Update data in environment.properties")
@reporter.step_deco("Update data in environment.properties")
def save_env_properties(file_path: str, env_data: dict) -> None:
    with open(file_path, "a+") as env_file:
        for env, env_value in env_data.items():

@@ -3,22 +3,72 @@ from dataclasses import dataclass
from time import sleep
from typing import Optional

from frostfs_testlib import reporter
from frostfs_testlib.hosting import Host
from frostfs_testlib.reporter import get_reporter
from frostfs_testlib.resources.common import SERVICE_MAX_STARTUP_TIME
from frostfs_testlib.shell import Shell
from frostfs_testlib.shell import CommandOptions, Shell
from frostfs_testlib.steps.cli.object import neo_go_dump_keys
from frostfs_testlib.steps.node_management import storage_node_healthcheck
from frostfs_testlib.steps.storage_policy import get_nodes_with_object
from frostfs_testlib.storage.cluster import Cluster, ClusterNode, NodeBase, StorageNode
from frostfs_testlib.storage.dataclasses.frostfs_services import MorphChain
from frostfs_testlib.storage.dataclasses.node_base import ServiceClass
from frostfs_testlib.testing.test_control import wait_for_success
from frostfs_testlib.testing.test_control import retry, wait_for_success
from frostfs_testlib.utils.datetime_utils import parse_time

reporter = get_reporter()

logger = logging.getLogger("NeoLogger")


@reporter.step("Check and return status of given service")
@reporter.step_deco("Ping node")
def ping_host(shell: Shell, host: Host):
    options = CommandOptions(check=False)
    return shell.exec(f"ping {host.config.address} -c 1", options).return_code


@reporter.step_deco("Wait for storage nodes returned to cluster")
def wait_all_storage_nodes_returned(shell: Shell, cluster: Cluster) -> None:
    for node in cluster.services(StorageNode):
        with reporter.step(f"Run health check for storage at '{node}'"):
            wait_for_host_online(shell, node)
            wait_for_node_online(node)


@retry(max_attempts=60, sleep_interval=5, expected_result=0)
@reporter.step_deco("Waiting for host of {node} to go online")
def wait_for_host_online(shell: Shell, node: StorageNode):
    try:
        # TODO: Quick solution for now, should be replaced by lib interactions
        return ping_host(shell, node.host)
    except Exception as err:
        logger.warning(f"Host ping fails with error {err}")
        return 1


@retry(max_attempts=60, sleep_interval=5, expected_result=1)
@reporter.step_deco("Waiting for host of {node} to go offline")
def wait_for_host_offline(shell: Shell, node: StorageNode):
    try:
        # TODO: Quick solution for now, should be replaced by lib interactions
        return ping_host(shell, node.host)
    except Exception as err:
        logger.warning(f"Host ping fails with error {err}")
        return 0


@retry(max_attempts=20, sleep_interval=30, expected_result=True)
@reporter.step_deco("Waiting for node {node} to go online")
def wait_for_node_online(node: StorageNode):
    try:
        health_check = storage_node_healthcheck(node)
    except Exception as err:
        logger.warning(f"Node healthcheck fails with error {err}")
        return False

    return health_check.health_status == "READY" and health_check.network_status == "ONLINE"


@reporter.step_deco("Check and return status of given service")
def service_status(service: str, shell: Shell) -> str:
    return shell.exec(f"sudo systemctl is-active {service}").stdout.rstrip()

@@ -71,14 +121,14 @@ class TopCommand:
    )


@reporter.step("Run `top` command with specified PID")
@reporter.step_deco("Run `top` command with specified PID")
def service_status_top(service: str, shell: Shell) -> TopCommand:
    pid = service_pid(service, shell)
    output = shell.exec(f"sudo top -b -n 1 -p {pid}").stdout
    return TopCommand.from_stdout(output, pid)


@reporter.step("Restart service n times with sleep")
@reporter.step_deco("Restart service n times with sleep")
def multiple_restart(
    service_type: type[NodeBase],
    node: ClusterNode,

@@ -89,16 +139,19 @@ def multiple_restart(
    service_name = node.service(service_type).name
    for _ in range(count):
        node.host.restart_service(service_name)
        logger.info(f"Restart {service_systemctl_name}; sleep {sleep_interval} seconds and continue")
        logger.info(
            f"Restart {service_systemctl_name}; sleep {sleep_interval} seconds and continue"
        )
        sleep(sleep_interval)


@wait_for_success(60, 5, title="Wait for services become {expected_status} on node {cluster_node}")
def check_services_status(cluster_node: ClusterNode, service_list: list[ServiceClass], expected_status: str):
@reporter.step_deco("Get status of list of services and check expected status")
@wait_for_success(60, 5)
def check_services_status(service_list: list[str], expected_status: str, shell: Shell):
    cmd = ""
    for service in service_list:
        cmd += f' sudo systemctl status {service.get_service_systemctl_name()} --lines=0 | grep "Active:";'
    result = cluster_node.host.get_shell().exec(cmd).stdout.rstrip()
        cmd += f' sudo systemctl status {service} --lines=0 | grep "Active:";'
    result = shell.exec(cmd).stdout.rstrip()
    statuses = list()
    for line in result.split("\n"):
        status_substring = line.split()

@@ -109,15 +162,19 @@ def check_services_status(cluster_node: ClusterNode, service_list: list[ServiceC
    ), f"Requested status={expected_status} not found in requested services={service_list}, list of statuses={result}"


@wait_for_success(60, 5, title="Wait for {service} become active")
def wait_service_in_desired_state(service: str, shell: Shell, expected_status: Optional[str] = "active"):
@reporter.step_deco("Wait for active status of passed service")
@wait_for_success(60, 5)
def wait_service_in_desired_state(
    service: str, shell: Shell, expected_status: Optional[str] = "active"
):
    real_status = service_status(service=service, shell=shell)
    assert (
        expected_status == real_status
    ), f"Service {service}: expected status= {expected_status}, real status {real_status}"


@wait_for_success(parse_time(SERVICE_MAX_STARTUP_TIME), 1, title="Wait for {service_type} passes healtcheck on {node}")
@reporter.step_deco("Run healthcheck against passed service")
@wait_for_success(parse_time(SERVICE_MAX_STARTUP_TIME), 1)
def service_type_healthcheck(
    service_type: type[NodeBase],
    node: ClusterNode,

@@ -128,25 +185,26 @@ def service_type_healthcheck(
    ), f"Healthcheck failed for {service.get_service_systemctl_name()}, IP={node.host_ip}"


@reporter.step("Kill by process name")
@reporter.step_deco("Kill by process name")
def kill_by_service_name(service_type: type[NodeBase], node: ClusterNode):
    service_systemctl_name = node.service(service_type).get_service_systemctl_name()
    pid = service_pid(service_systemctl_name, node.host.get_shell())
    node.host.get_shell().exec(f"sudo kill -9 {pid}")


@reporter.step("Suspend {service}")
@reporter.step_deco("Service {service} suspend")
def suspend_service(shell: Shell, service: str):
    shell.exec(f"sudo kill -STOP {service_pid(service, shell)}")


@reporter.step("Resume {service}")
@reporter.step_deco("Service {service} resume")
def resume_service(shell: Shell, service: str):
    shell.exec(f"sudo kill -CONT {service_pid(service, shell)}")


@reporter.step_deco("Retrieve service's pid")
# retry mechanism cause when the task has been started recently '0' PID could be returned
@wait_for_success(10, 1, title="Get {service} pid")
@wait_for_success(10, 1)
def service_pid(service: str, shell: Shell) -> int:
    output = shell.exec(f"systemctl show --property MainPID {service}").stdout.rstrip()
    splitted = output.split("=")

@@ -155,7 +213,7 @@ def service_pid(service: str, shell: Shell) -> int:
    return PID


@reporter.step("Wrapper for neo-go dump keys command")
@reporter.step_deco("Wrapper for neo-go dump keys command")
def dump_keys(shell: Shell, node: ClusterNode) -> dict:
    host = node.host
    service_config = host.get_service_config(node.service(MorphChain).name)

@@ -163,7 +221,7 @@ def dump_keys(shell: Shell, node: ClusterNode) -> dict:
    return neo_go_dump_keys(shell=shell, wallet=wallet)


@reporter.step("Wait for object replication")
@reporter.step_deco("Wait for object replication")
def wait_object_replication(
    cid: str,
    oid: str,

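# --- Illustrative usage sketch (not part of the diff above) ---
# A hypothetical flow combining the helpers above: freeze a service with
# SIGSTOP via its MainPID, then resume it with SIGCONT. The surrounding
# assertion is only an example of how a test might use service_status.
def freeze_and_thaw(shell: Shell, service: str) -> None:
    suspend_service(shell, service)
    # The unit is expected to still be reported by systemd while suspended.
    assert service_status(service, shell) == "active"
    resume_service(shell, service)
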
@@ -1,15 +1,17 @@
from concurrent.futures import ThreadPoolExecutor

from frostfs_testlib import reporter
from frostfs_testlib.reporter import get_reporter
from frostfs_testlib.storage.dataclasses.node_base import NodeBase

reporter = get_reporter()


class FileKeeper:
    """This class is responsible to make backup copy of modified file and restore when required (mostly after the test)"""

    files_to_restore: dict[NodeBase, list[str]] = {}

    @reporter.step("Adding {file_to_restore} from node {node} to restore list")
    @reporter.step_deco("Adding {file_to_restore} from node {node} to restore list")
    def add(self, node: NodeBase, file_to_restore: str):
        if node in self.files_to_restore and file_to_restore in self.files_to_restore[node]:
            # Already added

@@ -24,7 +26,7 @@ class FileKeeper:
        shell = node.host.get_shell()
        shell.exec(f"cp {file_to_restore} {file_to_restore}.bak")

    @reporter.step("Restore files")
    @reporter.step_deco("Restore files")
    def restore_files(self):
        nodes = self.files_to_restore.keys()
        if not nodes:

@@ -39,7 +41,7 @@ class FileKeeper:
            # Iterate through results for exception check if any
            pass

    @reporter.step("Restore files on node {node}")
    @reporter.step_deco("Restore files on node {node}")
    def _restore_files_on_node(self, node: NodeBase):
        shell = node.host.get_shell()
        for file_to_restore in self.files_to_restore[node]:

@@ -4,9 +4,10 @@ import os
import uuid
from typing import Any, Optional

from frostfs_testlib import reporter
from frostfs_testlib.reporter import get_reporter
from frostfs_testlib.resources.common import ASSETS_DIR

reporter = get_reporter()
logger = logging.getLogger("NeoLogger")


@@ -60,7 +61,7 @@ def generate_file_with_content(
    return file_path


@reporter.step("Get File Hash")
@reporter.step_deco("Get File Hash")
def get_file_hash(file_path: str, len: Optional[int] = None, offset: Optional[int] = None) -> str:
    """Generates hash for the specified file.

@@ -87,7 +88,7 @@ def get_file_hash(file_path: str, len: Optional[int] = None, offset: Optional[in
    return file_hash.hexdigest()


@reporter.step("Concatenation set of files to one file")
@reporter.step_deco("Concatenation set of files to one file")
def concat_files(file_paths: list, resulting_file_path: Optional[str] = None) -> str:
    """Concatenates several files into a single file.

@@ -1,58 +0,0 @@
import collections
import inspect
import sys
from typing import Callable


def format_by_args(__func: Callable, __title: str, *a, **kw) -> str:
    params = _func_parameters(__func, *a, **kw)
    args = list(map(lambda x: _represent(x), a))

    return __title.format(*args, **params)


# These 2 functions are copied from allure_commons._allure
# Duplicate it here in order to be independent of allure and make some adjustments.
def _represent(item):
    if isinstance(item, str):
        return item
    elif isinstance(item, (bytes, bytearray)):
        return repr(type(item))
    else:
        return repr(item)


def _func_parameters(func, *args, **kwargs):
    parameters = {}
    arg_spec = inspect.getfullargspec(func)
    arg_order = list(arg_spec.args)
    args_dict = dict(zip(arg_spec.args, args))

    if arg_spec.defaults:
        kwargs_defaults_dict = dict(zip(arg_spec.args[-len(arg_spec.defaults) :], arg_spec.defaults))
        parameters.update(kwargs_defaults_dict)

    if arg_spec.varargs:
        arg_order.append(arg_spec.varargs)
        varargs = args[len(arg_spec.args) :]
        parameters.update({arg_spec.varargs: varargs} if varargs else {})

    if arg_spec.args and arg_spec.args[0] in ["cls", "self"]:
        args_dict.pop(arg_spec.args[0], None)

    if kwargs:
        if sys.version_info < (3, 7):
            # Sort alphabetically as old python versions does
            # not preserve call order for kwargs.
            arg_order.extend(sorted(list(kwargs.keys())))
        else:
            # Keep py3.7 behaviour to preserve kwargs order
            arg_order.extend(list(kwargs.keys()))
        parameters.update(kwargs)

    parameters.update(args_dict)

    items = parameters.items()
    sorted_items = sorted(map(lambda kv: (kv[0], _represent(kv[1])), items), key=lambda x: arg_order.index(x[0]))

    return collections.OrderedDict(sorted_items)

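# --- Illustrative usage sketch (not part of the diff above) ---
# format_by_args resolves {placeholders} in a step title from the decorated
# function's call arguments, including defaults; the function and title below
# are made up for the example.
def restart_service(node: str, attempts: int = 3) -> None:
    pass

title = format_by_args(restart_service, "Restart {node} up to {attempts} times", "storage-01")
# -> "Restart storage-01 up to 3 times"
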
@@ -1,9 +1,15 @@
import logging
import re
import os

from frostfs_testlib.cli import FrostfsAdm, FrostfsCli
from frostfs_testlib.hosting import Hosting
from frostfs_testlib.resources.cli import FROSTFS_ADM_EXEC, FROSTFS_AUTHMATE_EXEC, FROSTFS_CLI_EXEC, NEOGO_EXECUTABLE
from frostfs_testlib.resources.cli import (
    FROSTFS_ADM_EXEC,
    FROSTFS_AUTHMATE_EXEC,
    FROSTFS_CLI_EXEC,
    NEOGO_EXECUTABLE,
)
from frostfs_testlib.resources.common import DEFAULT_WALLET_CONFIG
from frostfs_testlib.shell import Shell

@@ -13,7 +19,10 @@ logger = logging.getLogger("NeoLogger")
def get_local_binaries_versions(shell: Shell) -> dict[str, str]:
    versions = {}

    for binary in [NEOGO_EXECUTABLE, FROSTFS_AUTHMATE_EXEC]:
    # Extra binaries to get version from
    extra_binaries = os.getenv("EXTRA_BINARIES", "").split(',')

    for binary in [NEOGO_EXECUTABLE, FROSTFS_AUTHMATE_EXEC, *extra_binaries]:
        out = shell.exec(f"{binary} --version").stdout
        versions[binary] = _parse_version(out)

@@ -39,44 +48,36 @@ def get_remote_binaries_versions(hosting: Hosting) -> dict[str, str]:
        binary_path_by_name = {}  # Maps binary name to executable path
        for service_config in host.config.services:
            exec_path = service_config.attributes.get("exec_path")
            requires_check = service_config.attributes.get("requires_version_check", "true")
            if exec_path:
                binary_path_by_name[service_config.name] = {
                    "exec_path": exec_path,
                    "check": requires_check.lower() == "true",
                }
                binary_path_by_name[service_config.name] = exec_path
        for cli_config in host.config.clis:
            requires_check = cli_config.attributes.get("requires_version_check", "true")
            binary_path_by_name[cli_config.name] = {
                "exec_path": cli_config.exec_path,
                "check": requires_check.lower() == "true",
            }
            binary_path_by_name[cli_config.name] = cli_config.exec_path

        shell = host.get_shell()
        versions_at_host = {}
        for binary_name, binary in binary_path_by_name.items():
        for binary_name, binary_path in binary_path_by_name.items():
            try:
                binary_path = binary["exec_path"]
                result = shell.exec(f"{binary_path} --version")
                versions_at_host[binary_name] = {"version": _parse_version(result.stdout), "check": binary["check"]}
                versions_at_host[binary_name] = _parse_version(result.stdout)
            except Exception as exc:
                logger.error(f"Cannot get version for {binary_path} because of\n{exc}")
                versions_at_host[binary_name] = {"version": "Unknown", "check": binary["check"]}
                versions_at_host[binary_name] = "Unknown"
        versions_by_host[host.config.address] = versions_at_host

    # Consolidate versions across all hosts
    versions = {}
    for host, binary_versions in versions_by_host.items():
        for name, binary in binary_versions.items():
            captured_version = versions.get(name, {}).get("version")
            version = binary["version"]
        for name, version in binary_versions.items():
            captured_version = versions.get(name)
            if captured_version:
                assert captured_version == version, f"Binary {name} has inconsistent version on host {host}"
                assert (
                    captured_version == version
                ), f"Binary {name} has inconsistent version on host {host}"
            else:
                versions[name] = {"version": version, "check": binary["check"]}
                versions[name] = version
    return versions


def _parse_version(version_output: str) -> str:
    version = re.search(r"version[:\s]*v?(.+)", version_output, re.IGNORECASE)
    return version.group(1).strip() if version else version_output
    return version.group(1).strip() if version else "Unknown"

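# --- Illustrative usage sketch (not part of the diff above) ---
# _parse_version grabs everything after a "version"/"Version:" marker, and
# EXTRA_BINARIES lets get_local_binaries_versions probe additional tools, e.g.
#   export EXTRA_BINARIES=frostfs-node,frostfs-ir   # hypothetical values
# The version string below is made up to show the regex behaviour.
assert _parse_version("frostfs-cli version v0.0.1-216-ga4f0a8a") == "0.0.1-216-ga4f0a8a"
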
@@ -1,5 +0,0 @@
import os
import sys

app_dir = os.path.join(os.getcwd(), "src")
sys.path.insert(0, app_dir)

@@ -14,7 +14,11 @@ def format_error_details(error: Exception) -> str:
    Returns:
        String containing exception details.
    """
    detail_lines = traceback.format_exception(error)
    detail_lines = traceback.format_exception(
        etype=type(error),
        value=error,
        tb=error.__traceback__,
    )
    return "".join(detail_lines)

@@ -1,33 +0,0 @@
from typing import Any

import pytest

from frostfs_testlib.s3 import AwsCliClient, Boto3ClientWrapper
from frostfs_testlib.storage.dataclasses.acl import EACLRole
from frostfs_testlib.storage.dataclasses.frostfs_services import HTTPGate, InnerRing, MorphChain, S3Gate, StorageNode
from frostfs_testlib.storage.dataclasses.object_size import ObjectSize


class TestDataclassesStr:
    """Here we are testing important classes string representation."""

    @pytest.mark.parametrize(
        "obj, expected",
        [
            (Boto3ClientWrapper, "Boto3 client"),
            (AwsCliClient, "AWS CLI"),
            (ObjectSize("simple", 1), "simple"),
            (ObjectSize("simple", 10), "simple"),
            (ObjectSize("complex", 5000), "complex"),
            (ObjectSize("complex", 5555), "complex"),
            (StorageNode, "StorageNode"),
            (MorphChain, "MorphChain"),
            (S3Gate, "S3Gate"),
            (HTTPGate, "HTTPGate"),
            (InnerRing, "InnerRing"),
            (EACLRole.OTHERS, "OTHERS"),
        ],
    )
    def test_classes_string_representation(self, obj: Any, expected: str):
        assert f"{obj}" == expected
        assert repr(obj) == expected

@@ -15,7 +15,6 @@ class TestHosting(TestCase):
    HOST1 = {
        "address": HOST1_ADDRESS,
        "plugin_name": HOST1_PLUGIN,
        "healthcheck_plugin_name": "basic",
        "attributes": HOST1_ATTRIBUTES,
        "clis": HOST1_CLIS,
        "services": HOST1_SERVICES,

@@ -33,7 +32,6 @@ class TestHosting(TestCase):
    HOST2 = {
        "address": HOST2_ADDRESS,
        "plugin_name": HOST2_PLUGIN,
        "healthcheck_plugin_name": "basic",
        "attributes": HOST2_ATTRIBUTES,
        "clis": HOST2_CLIS,
        "services": HOST2_SERVICES,

@@ -54,14 +52,18 @@ class TestHosting(TestCase):
        self.assertEqual(host1.config.plugin_name, self.HOST1_PLUGIN)
        self.assertDictEqual(host1.config.attributes, self.HOST1_ATTRIBUTES)
        self.assertListEqual(host1.config.clis, [CLIConfig(**cli) for cli in self.HOST1_CLIS])
        self.assertListEqual(host1.config.services, [ServiceConfig(**service) for service in self.HOST1_SERVICES])
        self.assertListEqual(
            host1.config.services, [ServiceConfig(**service) for service in self.HOST1_SERVICES]
        )

        host2 = hosting.get_host_by_address(self.HOST2_ADDRESS)
        self.assertEqual(host2.config.address, self.HOST2_ADDRESS)
        self.assertEqual(host2.config.plugin_name, self.HOST2_PLUGIN)
        self.assertDictEqual(host2.config.attributes, self.HOST2_ATTRIBUTES)
        self.assertListEqual(host2.config.clis, [CLIConfig(**cli) for cli in self.HOST2_CLIS])
        self.assertListEqual(host2.config.services, [ServiceConfig(**service) for service in self.HOST2_SERVICES])
        self.assertListEqual(
            host2.config.services, [ServiceConfig(**service) for service in self.HOST2_SERVICES]
        )

    def test_get_host_by_service(self):
        hosting = Hosting()

@@ -102,7 +104,9 @@ class TestHosting(TestCase):
        services = hosting.find_service_configs(rf"^{self.SERVICE_NAME_PREFIX}")
        self.assertEqual(len(services), 2)
        for service in services:
            self.assertEqual(service.name[: len(self.SERVICE_NAME_PREFIX)], self.SERVICE_NAME_PREFIX)
            self.assertEqual(
                service.name[: len(self.SERVICE_NAME_PREFIX)], self.SERVICE_NAME_PREFIX
            )

        service1 = hosting.find_service_configs(self.SERVICE1["name"])
        self.assertEqual(len(service1), 1)

Some files were not shown because too many files have changed in this diff.