Compare commits

...

67 commits

Author SHA1 Message Date
c997e23194 Updates for testcases
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2023-03-14 12:21:40 +03:00
cff0e0f23e Update session token tests related to expiration rules
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2023-03-13 17:05:24 +03:00
ef5e142015 Add timeout for cli commands
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2023-03-09 14:43:14 +03:00
06dc226ef8 Add timeout for cli commands
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2023-03-09 14:41:38 +03:00
ac7dae0d2d Add timeout for cli commands
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2023-03-09 14:13:51 +03:00
3802df25fe Merge pull request 'import fix for some helpers and steps' (#12) from EliChin/fix/import into master
Reviewed-on: TrueCloudLab/frostfs-testcases#12
2023-03-07 11:22:15 +00:00
4755a2e167 import fix for some helpers and steps
Signed-off-by: Liza <e.chichindaeva@yadro.com>
2023-03-01 18:47:33 +03:00
565d740239 Update git links to clone from
Signed-off-by: Liza <e.chichindaeva@yadro.com>
2023-03-01 15:19:55 +03:00
Aleksei Chetaev
b549836b60 Add pytest lazy fixtures to requirements 2023-02-28 11:48:06 +01:00
Aleksei Chetaev
cb1b0c9bdd Fix issue with frostfs-authmate binary name
Signed-off-by: Aleksei Chetaev <alex.chetaev@gmail.com>
2023-02-28 11:48:06 +01:00
Aleksei Chetaev
aa145357f3 Fixing path to DevEnv
Signed-off-by: Aleksei Chetaev <alex.chetaev@gmail.com>
2023-02-28 11:48:06 +01:00
Aleksei Chetaev
1fb08e36c3 Fixing comments changed by auto replace imports
Signed-off-by: Aleksei Chetaev <alex.chetaev@gmail.com>
2023-02-28 11:48:06 +01:00
Aleksei Chetaev
b55731830e Remove trash from requirements.txt
Signed-off-by: Aleksei Chetaev <alex.chetaev@gmail.com>
2023-02-28 11:48:06 +01:00
Aleksei Chetaev
ee0c2527f7 Change python to 3.10
Signed-off-by: Aleksei Chetaev <alex.chetaev@gmail.com>
2023-02-28 11:48:06 +01:00
Aleksei Chetaev
52001dc23a Change all imports to imports from root and remove robot
Signed-off-by: Aleksei Chetaev <alex.chetaev@gmail.com>
2023-02-28 11:48:06 +01:00
Aleksei Chetaev
13bc98eecc Fixing imports after moving utils to frostfs-testlib
Signed-off-by: Aleksei Chetaev <alex.chetaev@gmail.com>
2023-02-27 11:44:19 +01:00
Aleksei Chetaev
d253e8f5fd Remove unused files from the repo 2023-02-27 11:44:19 +01:00
Aleksei Chetaev
25761428f7 Change frostfs-testlib version 2023-02-27 11:44:19 +01:00
Aleksei Chetaev
7a742d57fc Move errors templates to testlib
Signed-off-by: Aleksei Chetaev <alex.chetaev@gmail.com>
2023-02-27 11:44:19 +01:00
19809c5641 Rename to frostfs
Signed-off-by: Yulia Kovshova <y.kovshova@yadro.com>
2023-02-17 16:31:07 +03:00
f6056a4f79 Add @alexchetaev @abereziny to codeowners
Signed-off-by: Yulia Kovshova <y.kovshova@yadro.com>
2023-02-17 17:30:09 +04:00
Aleskei Chetaev
9395a8003f Add assert_s3_acl
Signed-off-by: Elizaveta Chichindaeva <elizaveta@nspcc.ru>
2023-02-16 11:53:39 +03:00
Aleskei Chetaev
c7a69b89e3 Apply isort to commit
Signed-off-by: Aleskei Chetaev <alex.chetaev@gmail.com>
2023-02-14 10:29:29 +01:00
Aleskei Chetaev
b94c106656 Revert removing venv environment files
Signed-off-by: Aleskei Chetaev <alex.chetaev@gmail.com>
2023-02-14 10:29:29 +01:00
Aleskei Chetaev
d76951ed4f Change mamba version, fix imports and support python 3.10
Signed-off-by: Aleskei Chetaev <alex.chetaev@gmail.com>
2023-02-14 10:29:29 +01:00
850c0e77ec Remove wait_for_success for start/stop service methods
Signed-off-by: Vladislav Karakozov <v.karakozov@yadro.com>
2023-02-07 12:29:39 +03:00
fc6f9ac162 Update expected errors
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2023-01-24 13:26:50 +03:00
f23bfe754e Update lib to 0.9.0
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2023-01-17 12:21:01 +03:00
baf0b4dd0f Add waiting for epoch align for storage group lifetime test
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2023-01-16 15:45:22 +03:00
c6ebe1d67d align all nodes to be in the same epoch
Signed-off-by: Vladislav Karakozov <v.karakozov@yadro.com>
2023-01-13 19:08:08 +03:00
anastasia prasolova
1aa94028a8 Remove aprasolova from CODEOWNERS file
Signed-off-by: anastasia prasolova <anastasia@nspcc.ru>
2022-12-30 15:55:23 +03:00
a942464de6 Fix epoch duration in http test
Signed-off-by: Vladislav Karakozov <v.karakozov@yadro.com>
2022-12-30 13:40:10 +03:00
690323e85d http test with bearer token
Signed-off-by: Vladislav Karakozov <v.karakozov@yadro.com>
2022-12-29 12:42:03 +03:00
ced72602ef Add "too many open files" to logs analyzer
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2022-12-29 11:28:24 +03:00
4099413577 #478 Update lock tests
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2022-12-28 11:27:19 +03:00
1abf544433 Add support data and internal ips
Signed-off-by: Yulia Kovshova <y.kovshova@yadro.com>
2022-12-28 11:26:22 +03:00
4f9294918d add http system header test
Signed-off-by: Vladislav Karakozov <v.karakozov@yadro.com>
2022-12-28 10:10:13 +03:00
6209a61258 Unskip static session token tests
Signed-off-by: Vladimir Avdeev <v.avdeev@yadro.com>
2022-12-27 11:28:29 +03:00
Мария Малыгина
ad2eafd230 [fix] s3 load
Signed-off-by: Мария Малыгина <m.malygina@MacBook-Pro-Maria.local>
2022-12-26 18:08:22 +03:00
2151f0e446 Add await to delete container
Signed-off-by: anikeev-yadro <a.anikeev@yadro.com>
2022-12-26 17:30:36 +03:00
Vlad K
d9474b9bc9 Revert "Fix: IndexError: list index out of range"
This reverts commit 1dc4516258.
2022-12-26 13:32:16 +03:00
1dc4516258 Fix: IndexError: list index out of range
Signed-off-by: Vladislav Karakozov <v.karakozov@yadro.com>
2022-12-26 13:28:03 +03:00
422636f68b Updates for local dev env runs
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2022-12-23 12:18:37 +03:00
5c4f6b6a7d new http test
Signed-off-by: Vladislav Karakozov <v.karakozov@yadro.com>
2022-12-21 17:07:29 +03:00
a2a234f1b2 Update range tests
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2022-12-20 11:26:51 +03:00
4f5aedebfe Fix wildcard flag value
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2022-12-16 15:17:02 +03:00
c7f832e77a Fix allure attaches for failover test
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2022-12-16 11:51:48 +03:00
ee204528b8 fix lock_mode
Signed-off-by: Yulia Kovshova <y.kovshova@yadro.com>
2022-12-15 16:44:18 +03:00
aa957639ec Fix policy test s3
Signed-off-by: Yulia Kovshova <y.kovshova@yadro.com>
2022-12-14 14:07:24 +03:00
4003d0115c Fix s3 range tests
Signed-off-by: anikeev-yadro <a.anikeev@yadro.com>
2022-12-14 11:22:11 +03:00
f89d66817b Add drop locked objects tests
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2022-12-13 14:59:04 +03:00
2f04775fce Fix delete all objects in bucket
Signed-off-by: anikeev-yadro <a.anikeev@yadro.com>
2022-12-13 11:40:56 +03:00
3497f3b23a Add load_param file, delete old tests, new universal, parametrized test, add stop unused nodes function.
Signed-off-by: a.lipay <a.lipay@yadro.com>
2022-12-12 19:14:17 +03:00
15677e89eb Add load_param file, delete old tests, new universal, parametrized test, add stop unused nodes function.
Signed-off-by: a.lipay <a.lipay@yadro.com>
2022-12-12 19:14:17 +03:00
1bb640a0db Add load_param file, delete old tests, new universal, parametrized test, add stop unused nodes function.
Signed-off-by: a.lipay <a.lipay@yadro.com>
2022-12-12 19:14:17 +03:00
614031a53a Fix s3 object tests after remove obj size hardcode
Signed-off-by: Vladimir Avdeev <v.avdeev@yadro.com>
2022-12-09 14:45:30 +03:00
ddf6406e10 fix generate file in http test
Signed-off-by: Vladislav Karakozov <v.karakozov@yadro.com>
2022-12-09 14:35:43 +03:00
bceea1926a Fix after remove obj size hardcode
Signed-off-by: Vladimir Avdeev <v.avdeev@yadro.com>
2022-12-09 14:34:10 +03:00
7ae0e8a21d Bump neofs-testlib to 0.8.1 (Fix logging problem)
Signed-off-by: Vladimir Avdeev <v.avdeev@yadro.com>
2022-12-09 14:02:02 +03:00
00bf387f34 Update shards test
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2022-12-09 11:59:37 +03:00
6230d2244e create http folder and add a new test for http attributes
Signed-off-by: Vladislav Karakozov <v.karakozov@yadro.com>
2022-12-08 17:24:20 +03:00
3afdaa0e2a Small fixes for tests
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2022-12-08 13:27:50 +03:00
05924784ab Remove SIMPLE_OBJ_SIZE and COMPLEX_OBJ_SIZE from env
Signed-off-by: Vladimir Avdeev <v.avdeev@yadro.com>
2022-12-08 13:21:19 +03:00
76c5d40e63 Return to session log analyzer
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2022-12-07 15:12:46 +03:00
12b592713b Add control shards test
Signed-off-by: Vladimir Avdeev <v.avdeev@yadro.com>
2022-12-07 12:46:21 +03:00
6567aa72a9 Add bearer token tests for s3 wallet api calls
Signed-off-by: Andrey Berezin <a.berezin@yadro.com>
2022-12-07 11:01:38 +03:00
522fc9dccd Delete node extra test
Signed-off-by: anikeev-yadro <a.anikeev@yadro.com>
2022-12-07 10:46:57 +03:00
98 changed files with 4144 additions and 2959 deletions


@@ -5,80 +5,98 @@ hosts:
- name: s01
attributes:
container_name: s01
config_path: ../neofs-dev-env/services/storage/.storage.env
wallet_path: ../neofs-dev-env/services/storage/wallet01.json
config_path: ../frostfs-dev-env/services/storage/.storage.env
wallet_path: ../frostfs-dev-env/services/storage/wallet01.json
local_config_path: ./TemporaryDir/empty-password.yml
local_wallet_path: ../frostfs-dev-env/services/storage/wallet01.json
wallet_password: ""
volume_name: storage_storage_s01
rpc_endpoint: s01.neofs.devenv:8080
control_endpoint: s01.neofs.devenv:8081
endpoint_data0: s01.frostfs.devenv:8080
control_endpoint: s01.frostfs.devenv:8081
un_locode: "RU MOW"
- name: s02
attributes:
container_name: s02
config_path: ../neofs-dev-env/services/storage/.storage.env
wallet_path: ../neofs-dev-env/services/storage/wallet02.json
config_path: ../frostfs-dev-env/services/storage/.storage.env
wallet_path: ../frostfs-dev-env/services/storage/wallet02.json
local_config_path: ./TemporaryDir/empty-password.yml
local_wallet_path: ../frostfs-dev-env/services/storage/wallet02.json
wallet_password: ""
volume_name: storage_storage_s02
rpc_endpoint: s02.neofs.devenv:8080
control_endpoint: s02.neofs.devenv:8081
endpoint_data0: s02.frostfs.devenv:8080
control_endpoint: s02.frostfs.devenv:8081
un_locode: "RU LED"
- name: s03
attributes:
container_name: s03
config_path: ../neofs-dev-env/services/storage/.storage.env
wallet_path: ../neofs-dev-env/services/storage/wallet03.json
config_path: ../frostfs-dev-env/services/storage/.storage.env
wallet_path: ../frostfs-dev-env/services/storage/wallet03.json
local_config_path: ./TemporaryDir/empty-password.yml
local_wallet_path: ../frostfs-dev-env/services/storage/wallet03.json
wallet_password: ""
volume_name: storage_storage_s03
rpc_endpoint: s03.neofs.devenv:8080
control_endpoint: s03.neofs.devenv:8081
endpoint_data0: s03.frostfs.devenv:8080
control_endpoint: s03.frostfs.devenv:8081
un_locode: "SE STO"
- name: s04
attributes:
container_name: s04
config_path: ../neofs-dev-env/services/storage/.storage.env
wallet_path: ../neofs-dev-env/services/storage/wallet04.json
config_path: ../frostfs-dev-env/services/storage/.storage.env
wallet_path: ../frostfs-dev-env/services/storage/wallet04.json
local_config_path: ./TemporaryDir/empty-password.yml
local_wallet_path: ../frostfs-dev-env/services/storage/wallet04.json
wallet_password: ""
volume_name: storage_storage_s04
rpc_endpoint: s04.neofs.devenv:8080
control_endpoint: s04.neofs.devenv:8081
endpoint_data0: s04.frostfs.devenv:8080
control_endpoint: s04.frostfs.devenv:8081
un_locode: "FI HEL"
- name: s3-gate01
attributes:
container_name: s3_gate
config_path: ../neofs-dev-env/services/s3_gate/.s3.env
wallet_path: ../neofs-dev-env/services/s3_gate/wallet.json
config_path: ../frostfs-dev-env/services/s3_gate/.s3.env
wallet_path: ../frostfs-dev-env/services/s3_gate/wallet.json
local_config_path: ./TemporaryDir/password-s3.yml
local_wallet_path: ../frostfs-dev-env/services/s3_gate/wallet.json
wallet_password: "s3"
endpoint: https://s3.neofs.devenv:8080
endpoint_data0: https://s3.frostfs.devenv:8080
- name: http-gate01
attributes:
container_name: http_gate
config_path: ../neofs-dev-env/services/http_gate/.http.env
wallet_path: ../neofs-dev-env/services/http_gate/wallet.json
config_path: ../frostfs-dev-env/services/http_gate/.http.env
wallet_path: ../frostfs-dev-env/services/http_gate/wallet.json
local_config_path: ./TemporaryDir/password-other.yml
local_wallet_path: ../frostfs-dev-env/services/http_gate/wallet.json
wallet_password: "one"
endpoint: http://http.neofs.devenv
endpoint_data0: http://http.frostfs.devenv
- name: ir01
attributes:
container_name: ir01
config_path: ../neofs-dev-env/services/ir/.ir.env
wallet_path: ../neofs-dev-env/services/ir/az.json
config_path: ../frostfs-dev-env/services/ir/.ir.env
wallet_path: ../frostfs-dev-env/services/ir/az.json
local_config_path: ./TemporaryDir/password-other.yml
local_wallet_path: ../frostfs-dev-env/services/ir/az.json
wallet_password: "one"
- name: morph-chain01
attributes:
container_name: morph_chain
config_path: ../neofs-dev-env/services/morph_chain/protocol.privnet.yml
wallet_path: ../neofs-dev-env/services/morph_chain/node-wallet.json
config_path: ../frostfs-dev-env/services/morph_chain/protocol.privnet.yml
wallet_path: ../frostfs-dev-env/services/morph_chain/node-wallet.json
local_config_path: ./TemporaryDir/password-other.yml
local_wallet_path: ../frostfs-dev-env/services/morph_chain/node-wallet.json
wallet_password: "one"
endpoint: http://morph-chain.neofs.devenv:30333
endpoint_internal0: http://morph-chain.frostfs.devenv:30333
- name: main-chain01
attributes:
container_name: main_chain
config_path: ../neofs-dev-env/services/chain/protocol.privnet.yml
wallet_path: ../neofs-dev-env/services/chain/node-wallet.json
config_path: ../frostfs-dev-env/services/chain/protocol.privnet.yml
wallet_path: ../frostfs-dev-env/services/chain/node-wallet.json
local_config_path: ./TemporaryDir/password-other.yml
local_wallet_path: ../frostfs-dev-env/services/chain/node-wallet.json
wallet_password: "one"
endpoint: http://main-chain.neofs.devenv:30333
endpoint_internal0: http://main-chain.frostfs.devenv:30333
- name: coredns01
attributes:
container_name: coredns
clis:
- name: neofs-cli
exec_path: neofs-cli
- name: frostfs-cli
exec_path: frostfs-cli
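
The functional change in this config, besides the `neofs` to `frostfs` renames, is the endpoint attribute renaming: `rpc_endpoint`/`endpoint` become `endpoint_data0`, and the chain endpoints become `endpoint_internal0`. A minimal sketch of reading the renamed attribute (PyYAML usage and the `hosting.yaml` file name are assumptions):

```python
import yaml

# Parse the hosting config shown above and resolve the data-plane endpoint
# of storage node s01; the attribute was renamed from rpc_endpoint to
# endpoint_data0 in this change.
with open("hosting.yaml") as f:  # hypothetical file name
    hosting_config = yaml.safe_load(f)

s01 = next(host for host in hosting_config["hosts"] if host["name"] == "s01")
print(s01["attributes"]["endpoint_data0"])  # s01.frostfs.devenv:8080
```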

.github/CODEOWNERS

@@ -1 +1 @@
* @aprasolova @vdomnich-yadro @dansingjulia @yadro-vavdeev
* @vdomnich-yadro @dansingjulia @yadro-vavdeev @alexchetaev @abereziny

.gitignore

@@ -1,5 +1,11 @@
# ignore IDE files
.vscode
.idea
.DS_Store
venv_macos
# ignore test results
**/log.html
@@ -20,4 +26,4 @@ TemporaryDir/*
artifacts/*
docs/*
venv.*/*
/*wallet_config.yml
wallet_config.yml


@@ -5,7 +5,7 @@ repos:
- id: black
language_version: python3.9
- repo: https://github.com/pycqa/isort
rev: 5.10.1
rev: 5.12.0
hooks:
- id: isort
name: isort (python)


@@ -3,8 +3,8 @@
First, thank you for contributing! We love and encourage pull requests from
everyone. Please follow the guidelines:
- Check the open [issues](https://github.com/nspcc-dev/neofs-testcases/issues) and
[pull requests](https://github.com/nspcc-dev/neofs-testcases/pulls) for existing
- Check the open [issues](https://github.com/TrueCloudLab/frostfs-testcases/issues) and
[pull requests](https://github.com/TrueCloudLab/frostfs-testcases/pulls) for existing
discussions.
- Open an issue first, to discuss a new feature or enhancement.
@@ -22,13 +22,13 @@ everyone. Please follow the guidelines:
## Development Workflow
Start by forking the `neofs-testcases` repository, make changes in a branch and then
Start by forking the `frostfs-testcases` repository, make changes in a branch and then
send a pull request. We encourage pull requests to discuss code changes. Here
are the steps in details:
### Set up your GitHub Repository
Fork [NeoFS testcases upstream](https://github.com/nspcc-dev/neofs-testcases/fork) source
Fork [FrostFS testcases upstream](https://github.com/TrueCloudLab/frostfs-testcases/fork) source
repository to your own personal repository. Copy the URL of your fork and clone it:
```shell
@@ -36,33 +36,20 @@ $ git clone <url of your fork>
```
### Set up git remote as ``upstream``
```shell
$ cd neofs-testcases
$ git remote add upstream https://github.com/nspcc-dev/neofs-testcases
```sh
$ cd frostfs-testcases
$ git remote add upstream https://github.com/TrueCloudLab/frostfs-testcases
$ git fetch upstream
```
### Set up development environment
To setup development environment for `neofs-testcases`, please, take the following steps:
To set up the development environment for `frostfs-testcases`, please take the following steps:
1. Prepare virtualenv
```shell
$ virtualenv --python=python3.9 venv
$ source venv/bin/activate
```
2. Install all dependencies:
```shell
$ pip install -r requirements.txt
```
3. Setup pre-commit hooks to run code formatters on staged files before you run a `git commit` command:
```shell
$ pre-commit install
$ make venv
$ source frostfs-testcases-3.10/bin/activate
```
Optionally you might want to integrate code formatters with your code editor to apply formatters to code files as you go:


@@ -1,28 +1,29 @@
#!/usr/bin/make -f
SHELL := /bin/bash
PYTHON_VERSION := 3.10
VENV_NAME = frostfs-testcases-${PYTHON_VERSION}
VENV_DIR := venv.${VENV_NAME}
.DEFAULT_GOAL := help
current_dir := $(shell pwd)
SHELL ?= bash
venv: create requirements paths precommit
@echo Ready
VENVS = $(shell ls -1d venv/*/ | sort -u | xargs basename -a)
precommit:
@echo Installing pre-commit hooks
. ${VENV_DIR}/bin/activate && pre-commit install
.PHONY: all
all: venvs
paths:
@echo Append paths for project
@echo Virtual environment: ${VENV_DIR}
@sudo rm -rf ${VENV_DIR}/lib/python${PYTHON_VERSION}/site-packages/_paths.pth
@sudo touch ${VENV_DIR}/lib/python${PYTHON_VERSION}/site-packages/_paths.pth
@echo ${current_dir} | sudo tee ${VENV_DIR}/lib/python${PYTHON_VERSION}/site-packages/_paths.pth
include venv_template.mk
create:
@echo Create virtual environment for
virtualenv --python=python${PYTHON_VERSION} --prompt=${VENV_NAME} ${VENV_DIR}
.PHONY: venvs
venvs:
$(foreach venv,$(VENVS),venv.$(venv))
$(foreach venv,$(VENVS),$(eval $(call VENV_template,$(venv))))
clean:
rm -rf venv.*
pytest-local:
@echo "⇒ Run Pytest"
python -m pytest pytest_tests/testsuites/
help:
@echo "⇒ run Run testcases ${R}"
requirements:
@echo Installing pip requirements
. ${VENV_DIR}/bin/activate && pip install -e ../frostfs-testlib
. ${VENV_DIR}/bin/activate && pip install -Ur pytest_tests/requirements.txt


@@ -2,70 +2,58 @@
Tests written with PyTest Framework are located under `pytest_tests/testsuites` directory.
These tests rely on resources and utility modules that have been originally developed for Robot Framework:
`robot/resources/files` - static files that are used in tests' commands.
`robot/resources/lib/` - common Python libraries that provide utility functions used as building blocks in tests.
`robot/variables/` - constants and configuration variables for tests.
These tests rely on resources and utility modules originally developed for the Pytest framework.
## Testcases execution
### Initial preparation
1. Install neofs-cli
- `git clone git@github.com:nspcc-dev/neofs-node.git`
- `cd neofs-node`
1. Install frostfs-cli
- `git clone git@github.com:TrueCloudLab/frostfs-node.git`
- `cd frostfs-node`
- `make`
- `sudo cp bin/neofs-cli /usr/local/bin/neofs-cli`
- `sudo cp bin/frostfs-cli /usr/local/bin/frostfs-cli`
2. Install neofs-authmate
- `git clone git@github.com:nspcc-dev/neofs-s3-gw.git`
- `cd neofs-s3-gw`
2. Install frostfs-authmate
- `git clone git@github.com:TrueCloudLab/frostfs-s3-gw.git`
- `cd frostfs-s3-gw`
- `make`
- `sudo cp bin/neofs-authmate /usr/local/bin/neofs-authmate`
- `sudo cp bin/frostfs-s3-authmate /usr/local/bin/frostfs-authmate`
3. Install neo-go
- `git clone git@github.com:nspcc-dev/neo-go.git`
- `cd neo-go`
- `git checkout v0.92.0` (or the current version in the neofs-dev-env)
- `git checkout v0.101.0` (or the current version in the frostfs-dev-env)
- `make`
- `sudo cp bin/neo-go /usr/local/bin/neo-go`
or download binary from releases: https://github.com/nspcc-dev/neo-go/releases
4. Clone neofs-dev-env
`git clone git@github.com:nspcc-dev/neofs-dev-env.git`
4. Clone frostfs-dev-env
`git clone git@github.com:TrueCloudLab/frostfs-dev-env.git`
Note that we expect neofs-dev-env to be located under
the `<testcases_root_dir>/../neofs-dev-env` directory. If you put this repo in any other place,
manually set the full path to neofs-dev-env in the environment variable `DEVENV_PATH` at this step.
Note that we expect frostfs-dev-env to be located under
the `<testcases_root_dir>/../frostfs-dev-env` directory. If you put this repo in any other place,
manually set the full path to frostfs-dev-env in the environment variable `DEVENV_PATH` at this step.
5. Make sure you have installed all of the following prerequisites on your machine
5. Make sure you have installed all the following prerequisites on your machine
```
make
python3.9
python3.9-dev
python3.10
python3.10-dev
libssl-dev
```
As we use neofs-dev-env, you'll also need to install
[prerequisites](https://github.com/nspcc-dev/neofs-dev-env#prerequisites) of this repository.
As we use frostfs-dev-env, you'll also need to install
[prerequisites](https://github.com/TrueCloudLab/frostfs-dev-env#prerequisites) of this repository.
6. Prepare virtualenv
```shell
$ make venv.local-pytest
$ . venv.local-pytest/bin/activate
$ make venv
$ source venv.frostfs-testcases-3.10/bin/activate
```
7. Setup pre-commit hooks to run code formatters on staged files before you run a `git commit` command:
```shell
$ pre-commit install
```
Optionally you might want to integrate code formatters with your code editor to apply formatters to code files as you go:
7. Optionally you might want to integrate code formatters with your code editor to apply formatters to code files as you go:
* isort is supported by [PyCharm](https://plugins.jetbrains.com/plugin/15434-isortconnect), [VS Code](https://cereblanco.medium.com/setup-black-and-isort-in-vscode-514804590bf9). Plugins exist for other IDEs/editors as well.
* black can be integrated with multiple editors; instructions are available [here](https://black.readthedocs.io/en/stable/integrations/editors.html).
@@ -122,7 +110,7 @@ $ docker run -p 5050:5050 -e CHECK_RESULTS_EVERY_SECONDS=30 -e KEEP_HISTORY=1 \
Then, you can check the allure report in your browser [by this link](http://localhost:5050/allure-docker-service/projects/default/reports/latest/index.html?redirect=false)
NOTE: feel free to select a different location for `allure-reports` directory, there is no requirement to have it inside `neofs-testcases`. For example, you can place it under `/tmp` path.
NOTE: feel free to select a different location for `allure-reports` directory, there is no requirement to have it inside `frostfs-testcases`. For example, you can place it under `/tmp` path.
# Contributing


@@ -1,8 +1,8 @@
[tool.isort]
profile = "black"
src_paths = ["neofs-keywords", "pytest_tests", "robot"]
src_paths = ["pytest_tests"]
line_length = 100
[tool.black]
line-length = 100
target-version = ["py39"]
target-version = ["py310"]


@@ -10,14 +10,15 @@ from typing import Any, Dict, List, Optional, Union
import allure
import base58
from common import ASSETS_DIR, NEOFS_CLI_EXEC, WALLET_CONFIG
from data_formatters import get_wallet_public_key
from neofs_testlib.cli import NeofsCli
from neofs_testlib.shell import Shell
from frostfs_testlib.cli import FrostfsCli
from frostfs_testlib.shell import Shell
from frostfs_testlib.utils import wallet_utils
from pytest_tests.resources.common import ASSETS_DIR, FROSTFS_CLI_EXEC, WALLET_CONFIG
logger = logging.getLogger("NeoLogger")
EACL_LIFETIME = 100500
NEOFS_CONTRACT_CACHE_TIMEOUT = 30
FROSTFS_CONTRACT_CACHE_TIMEOUT = 30
class EACLOperation(Enum):
@@ -44,7 +45,7 @@ class EACLRole(Enum):
class EACLHeaderType(Enum):
REQUEST = "req" # Filter request headers
OBJECT = "obj" # Filter object headers
SERVICE = "SERVICE" # Filter service headers. These are not processed by NeoFS nodes and exist for service use only
SERVICE = "SERVICE" # Filter service headers. These are not processed by FrostFS nodes and exist for service use only
class EACLMatchType(Enum):
@@ -110,14 +111,14 @@ class EACLRule:
role = (
self.role.value
if isinstance(self.role, EACLRole)
else f'pubkey:{get_wallet_public_key(self.role, "")}'
else f'pubkey:{wallet_utils.get_wallet_public_key(self.role, "")}'
)
return f'{self.access.value} {self.operation.value} {self.filters or ""} {role}'
@allure.title("Get extended ACL")
def get_eacl(wallet_path: str, cid: str, shell: Shell, endpoint: str) -> Optional[str]:
cli = NeofsCli(shell, NEOFS_CLI_EXEC, WALLET_CONFIG)
cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, WALLET_CONFIG)
try:
result = cli.container.get_eacl(wallet=wallet_path, rpc_endpoint=endpoint, cid=cid)
except RuntimeError as exc:
@@ -138,7 +139,7 @@
endpoint: str,
session_token: Optional[str] = None,
) -> None:
cli = NeofsCli(shell, NEOFS_CLI_EXEC, WALLET_CONFIG)
cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, WALLET_CONFIG)
cli.container.set_eacl(
wallet=wallet_path,
rpc_endpoint=endpoint,
@@ -156,7 +157,7 @@ def _encode_cid_for_eacl(cid: str) -> str:
def create_eacl(cid: str, rules_list: List[EACLRule], shell: Shell) -> str:
table_file_path = os.path.join(os.getcwd(), ASSETS_DIR, f"eacl_table_{str(uuid.uuid4())}.json")
cli = NeofsCli(shell, NEOFS_CLI_EXEC, WALLET_CONFIG)
cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, WALLET_CONFIG)
cli.acl.extended_create(cid=cid, out=table_file_path, rule=rules_list)
with open(table_file_path, "r") as file:
@@ -172,13 +173,14 @@ def form_bearertoken_file(
eacl_rule_list: List[Union[EACLRule, EACLPubKey]],
shell: Shell,
endpoint: str,
sign: Optional[bool] = True,
) -> str:
"""
This function fetches eACL for given <cid> on behalf of <wif>,
then extends it with filters taken from <eacl_rules>, signs
with bearer token and writes to file
"""
enc_cid = _encode_cid_for_eacl(cid)
enc_cid = _encode_cid_for_eacl(cid) if cid else None
file_path = os.path.join(os.getcwd(), ASSETS_DIR, str(uuid.uuid4()))
eacl = get_eacl(wif, cid, shell, endpoint)
@@ -189,7 +191,7 @@ def form_bearertoken_file(
logger.info(json_eacl)
eacl_result = {
"body": {
"eaclTable": {"containerID": {"value": enc_cid}, "records": []},
"eaclTable": {"containerID": {"value": enc_cid} if cid else enc_cid, "records": []},
"lifetime": {"exp": EACL_LIFETIME, "nbf": "1", "iat": "0"},
}
}
@@ -219,7 +221,14 @@ def form_bearertoken_file(
json.dump(eacl_result, eacl_file, ensure_ascii=False, indent=4)
logger.info(f"Got these extended ACL records: {eacl_result}")
sign_bearer(shell, wif, file_path)
if sign:
sign_bearer(
shell=shell,
wallet_path=wif,
eacl_rules_file_from=file_path,
eacl_rules_file_to=file_path,
json=True,
)
return file_path
@@ -236,7 +245,7 @@ def eacl_rules(access: str, verbs: list, user: str) -> list[str]:
(list): a list of eACL rules
"""
if user not in ("others", "user"):
pubkey = get_wallet_public_key(user, wallet_password="")
pubkey = wallet_utils.get_wallet_public_key(user, wallet_password="")
user = f"pubkey:{pubkey}"
rules = []
@@ -246,14 +255,27 @@ def eacl_rules(access: str, verbs: list, user: str) -> list[str]:
return rules
def sign_bearer(shell: Shell, wallet_path: str, eacl_rules_file: str) -> None:
neofscli = NeofsCli(shell=shell, neofs_cli_exec_path=NEOFS_CLI_EXEC, config_file=WALLET_CONFIG)
neofscli.util.sign_bearer_token(
wallet=wallet_path, from_file=eacl_rules_file, to_file=eacl_rules_file, json=True
def sign_bearer(
shell: Shell, wallet_path: str, eacl_rules_file_from: str, eacl_rules_file_to: str, json: bool
) -> None:
frostfscli = FrostfsCli(
shell=shell, frostfs_cli_exec_path=FROSTFS_CLI_EXEC, config_file=WALLET_CONFIG
)
frostfscli.util.sign_bearer_token(
wallet=wallet_path, from_file=eacl_rules_file_from, to_file=eacl_rules_file_to, json=json
)
@allure.title("Wait for eACL cache expired")
def wait_for_cache_expired():
sleep(NEOFS_CONTRACT_CACHE_TIMEOUT)
sleep(FROSTFS_CONTRACT_CACHE_TIMEOUT)
return
@allure.step("Return bearer token in base64 to caller")
def bearer_token_base64_from_file(
bearer_path: str,
) -> str:
with open(bearer_path, "rb") as file:
signed = file.read()
return base64.b64encode(signed).decode("utf-8")
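
A minimal sketch of the reworked signing flow above: `sign_bearer` now takes explicit source/destination paths plus a `json` flag, and the new `bearer_token_base64_from_file` helper encodes the signed token for gateway use (the `LocalShell` import and the file paths are assumptions):

```python
from frostfs_testlib.shell import LocalShell  # assumption: tests run against a local devenv

from pytest_tests.helpers.acl import bearer_token_base64_from_file, sign_bearer

shell = LocalShell()

# Sign an unsigned bearer token in place (paths are hypothetical).
sign_bearer(
    shell=shell,
    wallet_path="wallet.json",
    eacl_rules_file_from="bearer_token.json",
    eacl_rules_file_to="bearer_token.json",
    json=True,
)

# Base64-encode the signed token, e.g. for an S3/HTTP gateway header.
token_b64 = bearer_token_base64_from_file("bearer_token.json")
```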


@@ -5,8 +5,9 @@ from datetime import datetime
from typing import Optional
import allure
from cli_helpers import _cmd_run
from common import ASSETS_DIR
from pytest_tests.helpers.cli_helpers import _cmd_run
from pytest_tests.resources.common import ASSETS_DIR
logger = logging.getLogger("NeoLogger")
REGULAR_TIMEOUT = 90


@@ -1,10 +1,11 @@
import logging
import re
from common import NEOFS_ADM_EXEC, NEOFS_CLI_EXEC, WALLET_CONFIG
from neofs_testlib.cli import NeofsAdm, NeofsCli
from neofs_testlib.hosting import Hosting
from neofs_testlib.shell import Shell
from frostfs_testlib.cli import FrostfsAdm, FrostfsCli
from frostfs_testlib.hosting import Hosting
from frostfs_testlib.shell import Shell
from pytest_tests.resources.common import FROSTFS_ADM_EXEC, FROSTFS_CLI_EXEC, WALLET_CONFIG
logger = logging.getLogger("NeoLogger")
@@ -12,18 +13,18 @@ logger = logging.getLogger("NeoLogger")
def get_local_binaries_versions(shell: Shell) -> dict[str, str]:
versions = {}
for binary in ["neo-go", "neofs-authmate"]:
for binary in ["neo-go", "frostfs-authmate"]:
out = shell.exec(f"{binary} --version").stdout
versions[binary] = _parse_version(out)
neofs_cli = NeofsCli(shell, NEOFS_CLI_EXEC, WALLET_CONFIG)
versions["neofs-cli"] = _parse_version(neofs_cli.version.get().stdout)
frostfs_cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, WALLET_CONFIG)
versions["frostfs-cli"] = _parse_version(frostfs_cli.version.get().stdout)
try:
neofs_adm = NeofsAdm(shell, NEOFS_ADM_EXEC)
versions["neofs-adm"] = _parse_version(neofs_adm.version.get().stdout)
frostfs_adm = FrostfsAdm(shell, FROSTFS_ADM_EXEC)
versions["frostfs-adm"] = _parse_version(frostfs_adm.version.get().stdout)
except RuntimeError:
logger.info(f"neofs-adm not installed")
logger.info(f"frostfs-adm not installed")
out = shell.exec("aws --version").stdout
out_lines = out.split("\n")
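
A usage sketch for the renamed binaries check (the module path is hypothetical since the dump omits this file's name; `LocalShell` is an assumption):

```python
from frostfs_testlib.shell import LocalShell  # assumption: local run

# Hypothetical module path; the dump does not show this file's name.
from pytest_tests.helpers.binary_version import get_local_binaries_versions

versions = get_local_binaries_versions(LocalShell())
# After this change the dict is keyed by the frostfs names, e.g.
# "neo-go", "frostfs-authmate", "frostfs-cli" and, when installed, "frostfs-adm".
print(versions.get("frostfs-cli"))
```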


@@ -1,7 +1,7 @@
#!/usr/bin/python3.9
#!/usr/bin/python3.10
"""
Helper functions to use with `neofs-cli`, `neo-go` and other CLIs.
Helper functions to use with `frostfs-cli`, `neo-go` and other CLIs.
"""
import json
import logging


@@ -3,11 +3,11 @@ import re
from dataclasses import dataclass
from typing import Any
import data_formatters
from neofs_testlib.blockchain import RPCClient
from neofs_testlib.hosting import Host, Hosting
from neofs_testlib.hosting.config import ServiceConfig
from test_control import wait_for_success
import yaml
from frostfs_testlib.blockchain import RPCClient
from frostfs_testlib.hosting import Host, Hosting
from frostfs_testlib.hosting.config import ServiceConfig
from frostfs_testlib.utils import wallet_utils
@dataclass
@@ -45,11 +45,9 @@ class NodeBase:
def label(self) -> str:
return self.name
@wait_for_success(60, 1)
def start_service(self):
self.host.start_service(self.name)
@wait_for_success(60, 1)
def stop_service(self):
self.host.stop_service(self.name)
@@ -62,6 +60,22 @@
_ConfigAttributes.WALLET_PATH,
)
def get_remote_wallet_path(self) -> str:
"""
Returns node wallet file path located on remote host
"""
return self._get_attribute(
_ConfigAttributes.WALLET_PATH,
)
def get_remote_config_path(self) -> str:
"""
Returns node config file path located on remote host
"""
return self._get_attribute(
_ConfigAttributes.CONFIG_PATH,
)
def get_wallet_config_path(self):
return self._get_attribute(
_ConfigAttributes.LOCAL_WALLET_CONFIG,
@@ -71,7 +85,7 @@
def get_wallet_public_key(self):
storage_wallet_path = self.get_wallet_path()
storage_wallet_pass = self.get_wallet_password()
return data_formatters.get_wallet_public_key(storage_wallet_path, storage_wallet_pass)
return wallet_utils.get_wallet_public_key(storage_wallet_path, storage_wallet_pass)
def _get_attribute(self, attribute_name: str, default_attribute_name: str = None) -> list[str]:
config = self.host.get_service_config(self.name)
@@ -93,7 +107,7 @@ class InnerRingNode(NodeBase):
Inner ring node is not always the same as physical host (or physical node, if you will):
It can be service running in a container or on physical host
From a testing perspective, it's not relevant how it is actually running,
since neofs network will still treat it as "node"
since frostfs network will still treat it as "node"
"""
pass
@@ -105,7 +119,7 @@ class S3Gate(NodeBase):
"""
def get_endpoint(self) -> str:
return self._get_attribute(_ConfigAttributes.ENDPOINT)
return self._get_attribute(_ConfigAttributes.ENDPOINT_DATA)
@property
def label(self) -> str:
@@ -118,7 +132,7 @@ class HTTPGate(NodeBase):
"""
def get_endpoint(self) -> str:
return self._get_attribute(_ConfigAttributes.ENDPOINT)
return self._get_attribute(_ConfigAttributes.ENDPOINT_DATA)
@property
def label(self) -> str:
@@ -132,7 +146,7 @@ class MorphChain(NodeBase):
Consensus node is not always the same as physical host (or physical node, if you will):
It can be service running in a container or on physical host
From a testing perspective, it's not relevant how it is actually running,
since neofs network will still treat it as "node"
since frostfs network will still treat it as "node"
"""
rpc_client: RPCClient = None
@@ -141,7 +155,7 @@ class MorphChain(NodeBase):
self.rpc_client = RPCClient(self.get_endpoint())
def get_endpoint(self) -> str:
return self._get_attribute(_ConfigAttributes.ENDPOINT)
return self._get_attribute(_ConfigAttributes.ENDPOINT_INTERNAL)
@property
def label(self) -> str:
@@ -155,7 +169,7 @@ class MainChain(NodeBase):
Consensus node is not always the same as physical host:
It can be service running in a container or on physical host (or physical node, if you will):
From a testing perspective, it's not relevant how it is actually running,
since neofs network will still treat it as "node"
since frostfs network will still treat it as "node"
"""
rpc_client: RPCClient = None
@@ -164,7 +178,7 @@ class MainChain(NodeBase):
self.rpc_client = RPCClient(self.get_endpoint())
def get_endpoint(self) -> str:
return self._get_attribute(_ConfigAttributes.ENDPOINT)
return self._get_attribute(_ConfigAttributes.ENDPOINT_INTERNAL)
@property
def label(self) -> str:
@@ -178,11 +192,11 @@ class StorageNode(NodeBase):
Storage node is not always the same as physical host:
It can be service running in a container or on physical host (or physical node, if you will):
From a testing perspective, it's not relevant how it is actually running,
since neofs network will still treat it as "node"
since frostfs network will still treat it as "node"
"""
def get_rpc_endpoint(self) -> str:
return self._get_attribute(_ConfigAttributes.RPC_ENDPOINT)
return self._get_attribute(_ConfigAttributes.ENDPOINT_DATA)
def get_control_endpoint(self) -> str:
return self._get_attribute(_ConfigAttributes.CONTROL_ENDPOINT)
@@ -202,6 +216,7 @@ class Cluster:
default_rpc_endpoint: str
default_s3_gate_endpoint: str
default_http_gate_endpoint: str
def __init__(self, hosting: Hosting) -> None:
self._hosting = hosting
@@ -220,6 +235,25 @@
def hosting(self) -> Hosting:
return self._hosting
def _create_wallet_config(self, service: ServiceConfig) -> None:
wallet_path = service.attributes[_ConfigAttributes.LOCAL_WALLET_CONFIG]
wallet_password = service.attributes[_ConfigAttributes.WALLET_PASSWORD]
with open(wallet_path, "w") as file:
yaml.dump({"password": wallet_password}, file)
def create_wallet_configs(self, hosting: Hosting) -> None:
configs = hosting.find_service_configs(".*")
for config in configs:
if _ConfigAttributes.LOCAL_WALLET_CONFIG in config.attributes:
self._create_wallet_config(config)
def is_local_devevn(self) -> bool:
if len(self.hosting.hosts) == 1:
host = self.hosting.hosts[0]
if host.config.address == "localhost" and host.config.plugin_name == "docker":
return True
return False
@property
def storage_nodes(self) -> list[StorageNode]:
"""
@@ -294,10 +328,17 @@
def get_random_storage_rpc_endpoint(self) -> str:
return random.choice(self.get_storage_rpc_endpoints())
def get_random_storage_rpc_endpoint_mgmt(self) -> str:
return random.choice(self.get_storage_rpc_endpoints_mgmt())
def get_storage_rpc_endpoints(self) -> list[str]:
nodes = self.storage_nodes
return [node.get_rpc_endpoint() for node in nodes]
def get_storage_rpc_endpoints_mgmt(self) -> list[str]:
nodes = self.storage_nodes
return [node.get_rpc_endpoint_mgmt() for node in nodes]
def get_morph_endpoints(self) -> list[str]:
nodes = self.morph_chain_nodes
return [node.get_endpoint() for node in nodes]
@@ -316,9 +357,10 @@
WALLET_PASSWORD = "wallet_password"
WALLET_PATH = "wallet_path"
WALLET_CONFIG = "wallet_config"
CONFIG_PATH = "config_path"
LOCAL_WALLET_PATH = "local_wallet_path"
LOCAL_WALLET_CONFIG = "local_config_path"
RPC_ENDPOINT = "rpc_endpoint"
ENDPOINT = "endpoint"
ENDPOINT_DATA = "endpoint_data0"
ENDPOINT_INTERNAL = "endpoint_internal0"
CONTROL_ENDPOINT = "control_endpoint"
UN_LOCODE = "un_locode"
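
A minimal sketch tying the pieces together: wallet configs are generated from the new `local_config_path`/`wallet_password` attributes, and storage RPC endpoints now resolve from `endpoint_data0` (the `Hosting()`/`configure()` construction mirrors frostfs-testlib usage and is an assumption, as is the config file name):

```python
import yaml

from frostfs_testlib.hosting import Hosting

from pytest_tests.helpers.cluster import Cluster

# assumption: Hosting.configure() accepts the parsed hosting YAML, as in frostfs-testlib
with open("hosting.yaml") as f:  # hypothetical file name
    hosting = Hosting()
    hosting.configure(yaml.safe_load(f))

cluster = Cluster(hosting)

# Writes a {"password": ...} YAML for every service that defines
# local_config_path and wallet_password in its attributes.
cluster.create_wallet_configs(hosting)

# Resolved from the endpoint_data0 attribute (formerly rpc_endpoint).
endpoint = cluster.get_random_storage_rpc_endpoint()
```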


@@ -0,0 +1,211 @@
#!/usr/bin/python3
"""
This module contains functions which are used for Large Object assembling:
getting the Last Object of the split chain and getting the Link Object. It is
not enough to simply perform a "raw" HEAD request, as noted in the issue:
https://github.com/nspcc-dev/neofs-node/issues/1304. Therefore, the reliable
retrieval of the aforementioned objects must be done this way: send a direct
"raw" HEAD request to every Storage Node and return the desired OID on the
first non-null response.
"""
import logging
from typing import Optional, Tuple
import allure
from frostfs_testlib.shell import Shell
from pytest_tests.helpers import frostfs_verbs
from pytest_tests.helpers.cluster import Cluster, StorageNode
from pytest_tests.helpers.frostfs_verbs import head_object
from pytest_tests.helpers.storage_object_info import StorageObjectInfo
from pytest_tests.resources.common import CLI_DEFAULT_TIMEOUT, WALLET_CONFIG
logger = logging.getLogger("NeoLogger")
def get_storage_object_chunks(
storage_object: StorageObjectInfo,
shell: Shell,
cluster: Cluster,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
) -> list[str]:
"""
Get complex object split objects ids (no linker object)
Args:
storage_object: storage_object to get its chunks
shell: client shell to do cmd requests
cluster: cluster object under test
timeout: Timeout for an operation.
Returns:
list of object ids of complex object chunks
"""
with allure.step(f"Get complex object chunks (f{storage_object.oid})"):
split_object_id = get_link_object(
storage_object.wallet_file_path,
storage_object.cid,
storage_object.oid,
shell,
cluster.storage_nodes,
is_direct=False,
timeout=timeout,
)
head = head_object(
storage_object.wallet_file_path,
storage_object.cid,
split_object_id,
shell,
cluster.default_rpc_endpoint,
timeout=timeout,
)
chunks_object_ids = []
if "split" in head["header"] and "children" in head["header"]["split"]:
chunks_object_ids = head["header"]["split"]["children"]
return chunks_object_ids
def get_complex_object_split_ranges(
storage_object: StorageObjectInfo,
shell: Shell,
cluster: Cluster,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
) -> list[Tuple[int, int]]:
"""
Get list of split ranges tuples (offset, length) of a complex object
For example, if the object size is 100 and the max object size in the system is 30,
the returned list should be
[(0, 30), (30, 30), (60, 30), (90, 10)]
Args:
storage_object: storage_object to get its chunks
shell: client shell to do cmd requests
cluster: cluster object under test
timeout: Timeout for an operation.
Returns:
list of (offset, length) tuples of the complex object split ranges
"""
ranges: list = []
offset = 0
chunks_ids = get_storage_object_chunks(storage_object, shell, cluster)
for chunk_id in chunks_ids:
head = head_object(
storage_object.wallet_file_path,
storage_object.cid,
chunk_id,
shell,
cluster.default_rpc_endpoint,
timeout=timeout,
)
length = int(head["header"]["payloadLength"])
ranges.append((offset, length))
offset = offset + length
return ranges
@allure.step("Get Link Object")
def get_link_object(
wallet: str,
cid: str,
oid: str,
shell: Shell,
nodes: list[StorageNode],
bearer: str = "",
wallet_config: str = WALLET_CONFIG,
is_direct: bool = True,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
):
"""
Args:
wallet (str): path to the wallet on whose behalf the Storage Nodes
are requested
cid (str): Container ID which stores the Large Object
oid (str): Large Object ID
shell: executor for cli command
nodes: list of nodes to do search on
bearer (optional, str): path to Bearer token file
wallet_config (optional, str): path to the frostfs-cli config file
is_direct: send request directly to the node or not; this flag
turns into `--ttl 1` key
timeout: Timeout for an operation.
Returns:
(str): Link Object ID
When no Link Object ID is found after polling all Storage Nodes,
the function logs an error and returns None.
"""
for node in nodes:
endpoint = node.get_rpc_endpoint()
try:
resp = frostfs_verbs.head_object(
wallet,
cid,
oid,
shell=shell,
endpoint=endpoint,
is_raw=True,
is_direct=is_direct,
bearer=bearer,
wallet_config=wallet_config,
timeout=timeout,
)
if resp["link"]:
return resp["link"]
except Exception:
logger.info(f"No Link Object found on {endpoint}; continue")
logger.error(f"No Link Object for {cid}/{oid} found among all Storage Nodes")
return None
@allure.step("Get Last Object")
def get_last_object(
wallet: str,
cid: str,
oid: str,
shell: Shell,
nodes: list[StorageNode],
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
) -> Optional[str]:
"""
Args:
wallet (str): path to the wallet on whose behalf the Storage Nodes
are requested
cid (str): Container ID which stores the Large Object
oid (str): Large Object ID
shell: executor for cli command
nodes: list of nodes to do search on
timeout: Timeout for an operation.
Returns:
(str): Last Object ID
When no Last Object ID is found after polling all Storage Nodes,
the function logs an error and returns None.
"""
for node in nodes:
endpoint = node.get_rpc_endpoint()
try:
resp = frostfs_verbs.head_object(
wallet,
cid,
oid,
shell=shell,
endpoint=endpoint,
is_raw=True,
is_direct=True,
timeout=timeout,
)
if resp["lastPart"]:
return resp["lastPart"]
except Exception:
logger.info(f"No Last Object found on {endpoint}; continue")
logger.error(f"No Last Object for {cid}/{oid} found among all Storage Nodes")
return None
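
A short usage sketch of the two public helpers defined above (the module path is an assumption since the dump omits the file name; `storage_object`, `shell` and `cluster` are assumed to come from fixtures):

```python
# Hypothetical module path for the file added above.
from pytest_tests.helpers.complex_object_actions import (
    get_complex_object_split_ranges,
    get_storage_object_chunks,
)

# IDs of the split chunks, excluding the linker object.
chunk_ids = get_storage_object_chunks(storage_object, shell, cluster)

# For a 100-byte object with a 30-byte max object size this yields
# [(0, 30), (30, 30), (60, 30), (90, 10)].
split_ranges = get_complex_object_split_ranges(storage_object, shell, cluster)
```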


@@ -1,13 +1,22 @@
import json
import logging
from dataclasses import dataclass
from typing import Optional
from time import sleep
from typing import Optional, Union
import allure
from cluster import Cluster
from file_helper import generate_file, get_file_hash
from neofs_testlib.shell import Shell
from neofs_verbs import put_object_to_random_node
from storage_object import StorageObjectInfo
from wallet import WalletFile
from frostfs_testlib.cli import FrostfsCli
from frostfs_testlib.shell import Shell
from frostfs_testlib.utils import json_utils
from pytest_tests.helpers.cluster import Cluster
from pytest_tests.helpers.file_helper import generate_file, get_file_hash
from pytest_tests.helpers.frostfs_verbs import put_object, put_object_to_random_node
from pytest_tests.helpers.storage_object_info import StorageObjectInfo
from pytest_tests.helpers.wallet import WalletFile
from pytest_tests.resources.common import CLI_DEFAULT_TIMEOUT, FROSTFS_CLI_EXEC, WALLET_CONFIG
logger = logging.getLogger("NeoLogger")
@dataclass
@@ -33,16 +42,37 @@ class StorageContainer:
def get_wallet_path(self) -> str:
return self.storage_container_info.wallet_file.path
def get_wallet_config_path(self) -> str:
return self.storage_container_info.wallet_file.config_path
@allure.step("Generate new object and put in container")
def generate_object(self, size: int, expire_at: Optional[int] = None) -> StorageObjectInfo:
def generate_object(
self,
size: int,
expire_at: Optional[int] = None,
bearer_token: Optional[str] = None,
endpoint: Optional[str] = None,
) -> StorageObjectInfo:
with allure.step(f"Generate object with size {size}"):
file_path = generate_file(size)
file_hash = get_file_hash(file_path)
container_id = self.get_id()
wallet_path = self.get_wallet_path()
wallet_config = self.get_wallet_config_path()
with allure.step(f"Put object with size {size} to container {container_id}"):
if endpoint:
object_id = put_object(
wallet=wallet_path,
path=file_path,
cid=container_id,
expire_at=expire_at,
shell=self.shell,
endpoint=endpoint,
bearer=bearer_token,
wallet_config=wallet_config,
)
else:
object_id = put_object_to_random_node(
wallet=wallet_path,
path=file_path,
@@ -50,6 +80,8 @@ class StorageContainer:
expire_at=expire_at,
shell=self.shell,
cluster=self.cluster,
bearer=bearer_token,
wallet_config=wallet_config,
)
storage_object = StorageObjectInfo(
@@ -62,3 +94,238 @@ class StorageContainer:
)
return storage_object
DEFAULT_PLACEMENT_RULE = "REP 2 IN X CBF 1 SELECT 4 FROM * AS X"
SINGLE_PLACEMENT_RULE = "REP 1 IN X CBF 1 SELECT 4 FROM * AS X"
REP_2_FOR_3_NODES_PLACEMENT_RULE = "REP 2 IN X CBF 1 SELECT 3 FROM * AS X"
@allure.step("Create Container")
def create_container(
wallet: str,
shell: Shell,
endpoint: str,
rule: str = DEFAULT_PLACEMENT_RULE,
basic_acl: str = "",
attributes: Optional[dict] = None,
session_token: str = "",
session_wallet: str = "",
name: str = None,
options: dict = None,
await_mode: bool = True,
wait_for_creation: bool = True,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
) -> str:
"""
A wrapper for `frostfs-cli container create` call.
Args:
wallet (str): a wallet on whose behalf a container is created
rule (optional, str): placement rule for container
basic_acl (optional, str): an ACL for container, will be
appended to `--basic-acl` key
attributes (optional, dict): container attributes, will be
appended to `--attributes` key
session_token (optional, str): a path to session token file
session_wallet(optional, str): a path to the wallet which signed
the session token; this parameter makes sense
when paired with `session_token`
shell: executor for cli command
endpoint: FrostFS endpoint to send request to, appends to `--rpc-endpoint` key
options (optional, dict): any other options to pass to the call
name (optional, str): container name attribute
await_mode (bool): block execution until container is persisted
wait_for_creation (bool): wait for the container to appear in the container list
timeout: Timeout for the operation.
Returns:
(str): CID of the created container
"""
cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, WALLET_CONFIG)
result = cli.container.create(
rpc_endpoint=endpoint,
wallet=session_wallet if session_wallet else wallet,
policy=rule,
basic_acl=basic_acl,
attributes=attributes,
name=name,
session=session_token,
await_mode=await_mode,
timeout=timeout,
**options or {},
)
cid = _parse_cid(result.stdout)
logger.info("Container created; waiting until it is persisted in the sidechain")
if wait_for_creation:
wait_for_container_creation(wallet, cid, shell, endpoint)
return cid
def wait_for_container_creation(
wallet: str, cid: str, shell: Shell, endpoint: str, attempts: int = 15, sleep_interval: int = 1
):
for _ in range(attempts):
containers = list_containers(wallet, shell, endpoint)
if cid in containers:
return
logger.info(f"There is no {cid} in {containers} yet; sleep {sleep_interval} and continue")
sleep(sleep_interval)
raise RuntimeError(
f"After {attempts * sleep_interval} seconds container {cid} hasn't been persisted; exiting"
)
def wait_for_container_deletion(
wallet: str, cid: str, shell: Shell, endpoint: str, attempts: int = 30, sleep_interval: int = 1
):
for _ in range(attempts):
try:
get_container(wallet, cid, shell=shell, endpoint=endpoint)
sleep(sleep_interval)
continue
except Exception as err:
if "container not found" not in str(err):
raise AssertionError(f'Expected "container not found" in error, got\n{err}')
return
raise AssertionError(f"Expected container deleted during {attempts * sleep_interval} sec.")
@allure.step("List Containers")
def list_containers(
wallet: str, shell: Shell, endpoint: str, timeout: Optional[str] = CLI_DEFAULT_TIMEOUT
) -> list[str]:
"""
A wrapper for `frostfs-cli container list` call. It returns all the
available containers for the given wallet.
Args:
wallet (str): a wallet on whose behalf we list the containers
shell: executor for cli command
endpoint: FrostFS endpoint to send request to, appends to `--rpc-endpoint` key
timeout: Timeout for the operation.
Returns:
(list): list of containers
"""
cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, WALLET_CONFIG)
result = cli.container.list(rpc_endpoint=endpoint, wallet=wallet, timeout=timeout)
logger.info(f"Containers: \n{result}")
return result.stdout.split()
@allure.step("Get Container")
def get_container(
wallet: str,
cid: str,
shell: Shell,
endpoint: str,
json_mode: bool = True,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
) -> Union[dict, str]:
"""
A wrapper for `frostfs-cli container get` call. It extracts container's
attributes and rearranges them into a more compact view.
Args:
wallet (str): path to a wallet on whose behalf we get the container
cid (str): ID of the container to get
shell: executor for cli command
endpoint: FrostFS endpoint to send request to, appends to `--rpc-endpoint` key
json_mode (bool): return container in JSON format
timeout: Timeout for the operation.
Returns:
(dict, str): dict of container attributes
"""
cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, WALLET_CONFIG)
result = cli.container.get(
rpc_endpoint=endpoint, wallet=wallet, cid=cid, json_mode=json_mode, timeout=timeout
)
if not json_mode:
return result.stdout
container_info = json.loads(result.stdout)
attributes = dict()
for attr in container_info["attributes"]:
attributes[attr["key"]] = attr["value"]
container_info["attributes"] = attributes
container_info["ownerID"] = json_utils.json_reencode(container_info["ownerID"]["value"])
return container_info
@allure.step("Delete Container")
# TODO: make the error message about a non-found container more user-friendly
# https://github.com/nspcc-dev/frostfs-contract/issues/121
def delete_container(
wallet: str,
cid: str,
shell: Shell,
endpoint: str,
force: bool = False,
session_token: Optional[str] = None,
await_mode: bool = False,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
) -> None:
"""
A wrapper for `frostfs-cli container delete` call.
Args:
wallet (str): path to a wallet on whose behalf we delete the container
cid (str): ID of the container to delete
shell: executor for cli command
endpoint: FrostFS endpoint to send request to, appends to `--rpc-endpoint` key
force (bool): do not check whether container contains locks and remove immediately
session_token: a path to session token file
timeout: Timeout for the operation.
This function doesn't return anything.
"""
cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, WALLET_CONFIG)
cli.container.delete(
wallet=wallet,
cid=cid,
rpc_endpoint=endpoint,
force=force,
session=session_token,
await_mode=await_mode,
timeout=timeout,
)
def _parse_cid(output: str) -> str:
"""
Parses container ID from a given CLI output. The input string we expect:
container ID: 2tz86kVTDpJxWHrhw3h6PbKMwkLtBEwoqhHQCKTre1FN
awaiting...
container has been persisted on sidechain
We want to take 'container ID' value from the string.
Args:
output (str): CLI output to parse
Returns:
(str): extracted CID
"""
try:
# taking first line from command's output
first_line = output.split("\n")[0]
except Exception:
first_line = ""
logger.error(f"Got empty output: {output}")
splitted = first_line.split(": ")
if len(splitted) != 2:
raise ValueError(f"no CID was parsed from command output: \t{first_line}")
return splitted[1]
@allure.step("Search container by name")
def search_container_by_name(wallet: str, name: str, shell: Shell, endpoint: str):
list_cids = list_containers(wallet, shell, endpoint)
for cid in list_cids:
cont_info = get_container(wallet, cid, shell, endpoint, True)
if cont_info.get("attributes").get("Name", None) == name:
return cid
return None
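
A minimal end-to-end sketch of the container helpers consolidated above (`LocalShell`, the wallet path and the endpoint value are assumptions; the module path is hypothetical since the dump omits the file name):

```python
from frostfs_testlib.shell import LocalShell  # assumption: local devenv run

# Hypothetical module path for the helpers above.
from pytest_tests.helpers.container import (
    SINGLE_PLACEMENT_RULE,
    create_container,
    delete_container,
    search_container_by_name,
)

shell = LocalShell()
endpoint = "s01.frostfs.devenv:8080"  # endpoint_data0 from the hosting config above
wallet = "wallet.json"                # hypothetical wallet path

# Blocks until the container is persisted (await_mode=True) and, by default,
# until it shows up in `frostfs-cli container list` (wait_for_creation=True).
cid = create_container(wallet, shell, endpoint, rule=SINGLE_PLACEMENT_RULE, name="demo")
assert search_container_by_name(wallet, "demo", shell, endpoint) == cid

delete_container(wallet, cid, shell, endpoint, await_mode=True)
```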


@@ -1,9 +1,10 @@
from typing import List, Optional
from acl import EACLOperation
from cluster import Cluster
from neofs_testlib.shell import Shell
from python_keywords.object_access import (
from frostfs_testlib.shell import Shell
from pytest_tests.helpers.acl import EACLOperation
from pytest_tests.helpers.cluster import Cluster
from pytest_tests.helpers.object_access import (
can_delete_object,
can_get_head_object,
can_get_object,


@@ -0,0 +1,112 @@
import logging
from time import sleep
from typing import Optional
import allure
from frostfs_testlib.cli import FrostfsAdm, FrostfsCli, NeoGo
from frostfs_testlib.shell import Shell
from frostfs_testlib.utils import datetime_utils, wallet_utils
from pytest_tests.helpers.cluster import Cluster, StorageNode
from pytest_tests.helpers.payment_neogo import get_contract_hash
from pytest_tests.helpers.test_control import wait_for_success
from pytest_tests.resources.common import (
CLI_DEFAULT_TIMEOUT,
FROSTFS_ADM_CONFIG_PATH,
FROSTFS_ADM_EXEC,
FROSTFS_CLI_EXEC,
MAINNET_BLOCK_TIME,
NEOGO_EXECUTABLE,
)
logger = logging.getLogger("NeoLogger")
@allure.step("Ensure fresh epoch")
def ensure_fresh_epoch(
shell: Shell, cluster: Cluster, alive_node: Optional[StorageNode] = None
) -> int:
# ensure new fresh epoch to avoid epoch switch during test session
alive_node = alive_node if alive_node else cluster.storage_nodes[0]
current_epoch = get_epoch(shell, cluster, alive_node)
tick_epoch(shell, cluster, alive_node)
epoch = get_epoch(shell, cluster, alive_node)
assert epoch > current_epoch, "Epoch wasn't ticked"
return epoch
@allure.step("Wait for epochs align in whole cluster")
@wait_for_success(60, 5)
def wait_for_epochs_align(shell: Shell, cluster: Cluster) -> bool:
epochs = []
for node in cluster.storage_nodes:
epochs.append(get_epoch(shell, cluster, node))
unique_epochs = list(set(epochs))
assert (
len(unique_epochs) == 1
), f"unaligned epochs found, {epochs}, count of unique epochs {len(unique_epochs)}"
@allure.step("Get Epoch")
def get_epoch(shell: Shell, cluster: Cluster, alive_node: Optional[StorageNode] = None):
alive_node = alive_node if alive_node else cluster.storage_nodes[0]
endpoint = alive_node.get_rpc_endpoint()
wallet_path = alive_node.get_wallet_path()
wallet_config = alive_node.get_wallet_config_path()
cli = FrostfsCli(shell=shell, frostfs_cli_exec_path=FROSTFS_CLI_EXEC, config_file=wallet_config)
epoch = cli.netmap.epoch(endpoint, wallet_path, timeout=CLI_DEFAULT_TIMEOUT)
return int(epoch.stdout)
@allure.step("Tick Epoch")
def tick_epoch(shell: Shell, cluster: Cluster, alive_node: Optional[StorageNode] = None):
"""
Tick epoch using frostfs-adm or NeoGo if frostfs-adm is not available (DevEnv)
Args:
shell: local shell to make queries about current epoch. Remote shell will be used to tick new one
cluster: cluster instance under test
alive_node: node to send requests to (first node in cluster by default)
"""
alive_node = alive_node if alive_node else cluster.storage_nodes[0]
remote_shell = alive_node.host.get_shell()
if FROSTFS_ADM_EXEC and FROSTFS_ADM_CONFIG_PATH:
# If frostfs-adm is available, then we tick epoch with it (to be consistent with UAT tests)
frostfsadm = FrostfsAdm(
shell=remote_shell,
frostfs_adm_exec_path=FROSTFS_ADM_EXEC,
config_file=FROSTFS_ADM_CONFIG_PATH,
)
frostfsadm.morph.force_new_epoch()
return
# Otherwise we tick epoch using transaction
cur_epoch = get_epoch(shell, cluster)
# Use first node by default
ir_node = cluster.ir_nodes[0]
# If no local_wallet_path is provided, we use wallet_path
ir_wallet_path = ir_node.get_wallet_path()
ir_wallet_pass = ir_node.get_wallet_password()
ir_address = wallet_utils.get_last_address_from_wallet(ir_wallet_path, ir_wallet_pass)
morph_chain = cluster.morph_chain_nodes[0]
morph_endpoint = morph_chain.get_endpoint()
neogo = NeoGo(shell, neo_go_exec_path=NEOGO_EXECUTABLE)
neogo.contract.invokefunction(
wallet=ir_wallet_path,
wallet_password=ir_wallet_pass,
scripthash=get_contract_hash(morph_chain, "netmap.frostfs", shell=shell),
method="newEpoch",
arguments=f"int:{cur_epoch + 1}",
multisig_hash=f"{ir_address}:Global",
address=ir_address,
rpc_endpoint=morph_endpoint,
force=True,
gas=1,
)
sleep(datetime_utils.parse_time(MAINNET_BLOCK_TIME))
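
A usage sketch of the epoch helpers above (the module path is hypothetical since the dump omits the file name; `shell` and `cluster` are assumed to come from fixtures):

```python
# Hypothetical module path for the helpers above.
from pytest_tests.helpers.epoch import (
    ensure_fresh_epoch,
    get_epoch,
    wait_for_epochs_align,
)

# Tick once (frostfs-adm when configured, a NeoGo transaction otherwise)
# and return the new epoch number.
epoch = ensure_fresh_epoch(shell, cluster)

# Retried for up to 60s in 5s steps via the @wait_for_success decorator.
wait_for_epochs_align(shell, cluster)
assert get_epoch(shell, cluster) >= epoch
```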


@@ -2,10 +2,11 @@ import logging
from time import sleep
import allure
from cluster import Cluster, StorageNode
from neofs_testlib.shell import Shell
from python_keywords.node_management import storage_node_healthcheck
from storage_policy import get_nodes_with_object
from frostfs_testlib.shell import Shell
from pytest_tests.helpers.cluster import Cluster, StorageNode
from pytest_tests.helpers.node_management import storage_node_healthcheck
from pytest_tests.helpers.storage_policy import get_nodes_with_object
logger = logging.getLogger("NeoLogger")


@@ -5,12 +5,13 @@ import uuid
from typing import Any, Optional
import allure
from common import ASSETS_DIR, SIMPLE_OBJ_SIZE
from pytest_tests.resources.common import ASSETS_DIR
logger = logging.getLogger("NeoLogger")
def generate_file(size: int = SIMPLE_OBJ_SIZE) -> str:
def generate_file(size: int) -> str:
"""Generates a binary file with the specified size in bytes.
Args:
@ -28,6 +29,7 @@ def generate_file(size: int = SIMPLE_OBJ_SIZE) -> str:
def generate_file_with_content(
size: int,
file_path: Optional[str] = None,
content: Optional[str] = None,
) -> str:
@ -44,7 +46,7 @@ def generate_file_with_content(
"""
mode = "w+"
if content is None:
content = os.urandom(SIMPLE_OBJ_SIZE)
content = os.urandom(size)
mode = "wb"
if not file_path:


@ -6,11 +6,17 @@ import uuid
from typing import Any, Optional
import allure
import json_transformers
from cluster import Cluster
from common import ASSETS_DIR, NEOFS_CLI_EXEC, WALLET_CONFIG
from neofs_testlib.cli import NeofsCli
from neofs_testlib.shell import Shell
from frostfs_testlib.cli import FrostfsCli
from frostfs_testlib.shell import Shell
from frostfs_testlib.utils import json_utils
from pytest_tests.helpers.cluster import Cluster
from pytest_tests.resources.common import (
ASSETS_DIR,
CLI_DEFAULT_TIMEOUT,
FROSTFS_CLI_EXEC,
WALLET_CONFIG,
)
logger = logging.getLogger("NeoLogger")
@ -28,9 +34,10 @@ def get_object_from_random_node(
wallet_config: Optional[str] = None,
no_progress: bool = True,
session: Optional[str] = None,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
) -> str:
"""
GET from NeoFS random storage node
GET from FrostFS random storage node
Args:
wallet: wallet on whose behalf GET is done
@ -39,11 +46,12 @@ def get_object_from_random_node(
shell: executor for cli command
bearer (optional, str): path to Bearer Token file, appends to `--bearer` key
write_object (optional, str): path to downloaded file, appends to `--file` key
endpoint: NeoFS endpoint to send request to, appends to `--rpc-endpoint` key
endpoint: FrostFS endpoint to send request to, appends to `--rpc-endpoint` key
wallet_config(optional, str): path to the wallet config
no_progress(optional, bool): do not show progress bar
xhdr (optional, dict): Request X-Headers in form of Key=Value
session (optional, dict): path to a JSON-encoded container session token
timeout: Timeout for the operation.
Returns:
(str): path to downloaded file
"""
@ -60,6 +68,7 @@ def get_object_from_random_node(
wallet_config,
no_progress,
session,
timeout,
)
@ -76,9 +85,10 @@ def get_object(
wallet_config: Optional[str] = None,
no_progress: bool = True,
session: Optional[str] = None,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
) -> str:
"""
GET from NeoFS.
GET from FrostFS.
Args:
wallet (str): wallet on whose behalf GET is done
@ -87,11 +97,12 @@ def get_object(
shell: executor for cli command
bearer: path to Bearer Token file, appends to `--bearer` key
write_object: path to downloaded file, appends to `--file` key
endpoint: NeoFS endpoint to send request to, appends to `--rpc-endpoint` key
endpoint: FrostFS endpoint to send request to, appends to `--rpc-endpoint` key
wallet_config(optional, str): path to the wallet config
no_progress(optional, bool): do not show progress bar
xhdr (optional, dict): Request X-Headers in form of Key=Value
session (optional, dict): path to a JSON-encoded container session token
timeout: Timeout for the operation.
Returns:
(str): path to downloaded file
"""
@ -100,7 +111,7 @@ def get_object(
write_object = str(uuid.uuid4())
file_path = os.path.join(ASSETS_DIR, write_object)
cli = NeofsCli(shell, NEOFS_CLI_EXEC, wallet_config or WALLET_CONFIG)
cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet_config or WALLET_CONFIG)
cli.object.get(
rpc_endpoint=endpoint,
wallet=wallet,
@ -111,6 +122,7 @@ def get_object(
no_progress=no_progress,
xhdr=xhdr,
session=session,
timeout=timeout,
)
return file_path
@ -128,6 +140,7 @@ def get_range_hash(
wallet_config: Optional[str] = None,
xhdr: Optional[dict] = None,
session: Optional[str] = None,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
):
"""
GETRANGEHASH of given Object.
@ -140,14 +153,15 @@ def get_range_hash(
bearer: path to Bearer Token file, appends to `--bearer` key
range_cut: Range to take hash from in the form offset1:length1,...,
value to pass to the `--range` parameter
endpoint: NeoFS endpoint to send request to, appends to `--rpc-endpoint` key
endpoint: FrostFS endpoint to send request to, appends to `--rpc-endpoint` key
wallet_config: path to the wallet config
xhdr: Request X-Headers in form of Key=Values
session: Filepath to a JSON- or binary-encoded token of the object RANGEHASH session.
timeout: Timeout for the operation.
Returns:
None
"""
cli = NeofsCli(shell, NEOFS_CLI_EXEC, wallet_config or WALLET_CONFIG)
cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet_config or WALLET_CONFIG)
result = cli.object.hash(
rpc_endpoint=endpoint,
wallet=wallet,
@ -157,6 +171,7 @@ def get_range_hash(
bearer=bearer,
xhdr=xhdr,
session=session,
timeout=timeout,
)
# cutting off output about range offset and length
@ -177,6 +192,7 @@ def put_object_to_random_node(
expire_at: Optional[int] = None,
no_progress: bool = True,
session: Optional[str] = None,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
):
"""
PUT of given file to a random storage node.
@ -195,6 +211,7 @@ def put_object_to_random_node(
expire_at: Last epoch in the life of the object
xhdr: Request X-Headers in form of Key=Value
session: path to a JSON-encoded container session token
timeout: Timeout for the operation.
Returns:
ID of uploaded Object
"""
@ -213,6 +230,7 @@ def put_object_to_random_node(
expire_at,
no_progress,
session,
timeout,
)
@ -230,6 +248,7 @@ def put_object(
expire_at: Optional[int] = None,
no_progress: bool = True,
session: Optional[str] = None,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
):
"""
PUT of given file.
@ -241,17 +260,18 @@ def put_object(
shell: executor for cli command
bearer: path to Bearer Token file, appends to `--bearer` key
attributes: User attributes in form of Key1=Value1,Key2=Value2
endpoint: NeoFS endpoint to send request to, appends to `--rpc-endpoint` key
endpoint: FrostFS endpoint to send request to, appends to `--rpc-endpoint` key
wallet_config: path to the wallet config
no_progress: do not show progress bar
expire_at: Last epoch in the life of the object
xhdr: Request X-Headers in form of Key=Value
session: path to a JSON-encoded container session token
timeout: Timeout for the operation.
Returns:
(str): ID of uploaded Object
"""
cli = NeofsCli(shell, NEOFS_CLI_EXEC, wallet_config or WALLET_CONFIG)
cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet_config or WALLET_CONFIG)
result = cli.object.put(
rpc_endpoint=endpoint,
wallet=wallet,
@ -263,6 +283,7 @@ def put_object(
no_progress=no_progress,
xhdr=xhdr,
session=session,
timeout=timeout,
)
# splitting CLI output to lines and taking the penultimate line
@ -282,6 +303,7 @@ def delete_object(
wallet_config: Optional[str] = None,
xhdr: Optional[dict] = None,
session: Optional[str] = None,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
):
"""
DELETE an Object.
@ -292,15 +314,16 @@ def delete_object(
oid: ID of Object we are going to delete
shell: executor for cli command
bearer: path to Bearer Token file, appends to `--bearer` key
endpoint: NeoFS endpoint to send request to, appends to `--rpc-endpoint` key
endpoint: FrostFS endpoint to send request to, appends to `--rpc-endpoint` key
wallet_config: path to the wallet config
xhdr: Request X-Headers in form of Key=Value
session: path to a JSON-encoded container session token
timeout: Timeout for the operation.
Returns:
(str): Tombstone ID
"""
cli = NeofsCli(shell, NEOFS_CLI_EXEC, wallet_config or WALLET_CONFIG)
cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet_config or WALLET_CONFIG)
result = cli.object.delete(
rpc_endpoint=endpoint,
wallet=wallet,
@ -309,6 +332,7 @@ def delete_object(
bearer=bearer,
xhdr=xhdr,
session=session,
timeout=timeout,
)
id_str = result.stdout.split("\n")[1]
@ -328,6 +352,7 @@ def get_range(
bearer: str = "",
xhdr: Optional[dict] = None,
session: Optional[str] = None,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
):
"""
GETRANGE an Object.
@ -338,17 +363,18 @@ def get_range(
oid: ID of Object we are going to request
range_cut: range to take data from in the form offset:length
shell: executor for cli command
endpoint: NeoFS endpoint to send request to, appends to `--rpc-endpoint` key
endpoint: FrostFS endpoint to send request to, appends to `--rpc-endpoint` key
bearer: path to Bearer Token file, appends to `--bearer` key
wallet_config: path to the wallet config
xhdr: Request X-Headers in form of Key=Value
session: path to a JSON-encoded container session token
timeout: Timeout for the operation.
Returns:
(str, bytes) - path to the file with range content and content of this file as bytes
"""
range_file_path = os.path.join(ASSETS_DIR, str(uuid.uuid4()))
cli = NeofsCli(shell, NEOFS_CLI_EXEC, wallet_config or WALLET_CONFIG)
cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet_config or WALLET_CONFIG)
cli.object.range(
rpc_endpoint=endpoint,
wallet=wallet,
@ -359,6 +385,7 @@ def get_range(
bearer=bearer,
xhdr=xhdr,
session=session,
timeout=timeout,
)
with open(range_file_path, "rb") as file:
@ -381,6 +408,7 @@ def lock_object(
wallet_config: Optional[str] = None,
ttl: Optional[int] = None,
xhdr: Optional[dict] = None,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
) -> str:
"""
Lock object in container.
@ -393,17 +421,18 @@ def lock_object(
lifetime: Lock lifetime.
expire_at: Lock expiration epoch.
shell: executor for cli command
endpoint: NeoFS endpoint to send request to, appends to `--rpc-endpoint` key
endpoint: FrostFS endpoint to send request to, appends to `--rpc-endpoint` key
session: Path to a JSON-encoded container session token.
ttl: TTL value in request meta header (default 2).
wallet: WIF (NEP-2) string or path to the wallet or binary key.
xhdr: Dict with request X-Headers.
timeout: Timeout for the operation.
Returns:
Lock object ID
"""
cli = NeofsCli(shell, NEOFS_CLI_EXEC, wallet_config or WALLET_CONFIG)
cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet_config or WALLET_CONFIG)
result = cli.object.lock(
rpc_endpoint=endpoint,
lifetime=lifetime,
@ -416,6 +445,7 @@ def lock_object(
xhdr=xhdr,
session=session,
ttl=ttl,
timeout=timeout,
)
# splitting CLI output to lines and taking the penultimate line
@ -438,6 +468,7 @@ def search_object(
session: Optional[str] = None,
phy: bool = False,
root: bool = False,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
) -> list:
"""
SEARCH an Object.
@ -447,7 +478,7 @@ def search_object(
cid: ID of Container where we get the Object from
shell: executor for cli command
bearer: path to Bearer Token file, appends to `--bearer` key
endpoint: NeoFS endpoint to send request to, appends to `--rpc-endpoint` key
endpoint: FrostFS endpoint to send request to, appends to `--rpc-endpoint` key
filters: key=value pairs to filter Objects
expected_objects_list: a list of ObjectIDs to compare found Objects with
wallet_config: path to the wallet config
@ -455,12 +486,13 @@ def search_object(
session: path to a JSON-encoded container session token
phy: Search physically stored objects.
root: Search for user objects.
timeout: Timeout for the operation.
Returns:
list of found ObjectIDs
"""
cli = NeofsCli(shell, NEOFS_CLI_EXEC, wallet_config or WALLET_CONFIG)
cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet_config or WALLET_CONFIG)
result = cli.object.search(
rpc_endpoint=endpoint,
wallet=wallet,
@ -473,6 +505,7 @@ def search_object(
session=session,
phy=phy,
root=root,
timeout=timeout,
)
found_objects = re.findall(r"(\w{43,44})", result.stdout)
@ -501,6 +534,7 @@ def get_netmap_netinfo(
address: Optional[str] = None,
ttl: Optional[int] = None,
xhdr: Optional[dict] = None,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
) -> dict[str, Any]:
"""
Get netmap netinfo output from node
@ -508,7 +542,7 @@ def get_netmap_netinfo(
Args:
wallet (str): wallet on whose behalf request is done
shell: executor for cli command
endpoint (optional, str): NeoFS endpoint to send request to, appends to `--rpc-endpoint` key
endpoint (optional, str): FrostFS endpoint to send request to, appends to `--rpc-endpoint` key
address: Address of wallet account
ttl: TTL value in request meta header (default 2)
wallet: Path to the wallet or binary key
@ -518,13 +552,14 @@ def get_netmap_netinfo(
(dict): dict of parsed command output
"""
cli = NeofsCli(shell, NEOFS_CLI_EXEC, wallet_config or WALLET_CONFIG)
cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet_config or WALLET_CONFIG)
output = cli.netmap.netinfo(
wallet=wallet,
rpc_endpoint=endpoint,
address=address,
ttl=ttl,
xhdr=xhdr,
timeout=timeout,
)
settings = dict()
@ -555,6 +590,7 @@ def head_object(
is_direct: bool = False,
wallet_config: Optional[str] = None,
session: Optional[str] = None,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
):
"""
HEAD an Object.
@ -565,7 +601,7 @@ def head_object(
oid (str): ObjectID to HEAD
shell: executor for cli command
bearer (optional, str): path to Bearer Token file, appends to `--bearer` key
endpoint(optional, str): NeoFS endpoint to send request to
endpoint(optional, str): FrostFS endpoint to send request to
json_output(optional, bool): return response in JSON format or not; this flag
turns into `--json` key
is_raw(optional, bool): send "raw" request or not; this flag
@ -575,6 +611,7 @@ def head_object(
wallet_config(optional, str): path to the wallet config
xhdr (optional, dict): Request X-Headers in form of Key=Value
session (optional, dict): path to a JSON-encoded container session token
timeout: Timeout for the operation.
Returns:
depending on the `json_output` parameter value, the function returns
(dict): HEAD response in JSON format
@ -582,7 +619,7 @@ def head_object(
(str): HEAD response as a plain text
"""
cli = NeofsCli(shell, NEOFS_CLI_EXEC, wallet_config or WALLET_CONFIG)
cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, wallet_config or WALLET_CONFIG)
result = cli.object.head(
rpc_endpoint=endpoint,
wallet=wallet,
@ -594,6 +631,7 @@ def head_object(
ttl=1 if is_direct else None,
xhdr=xhdr,
session=session,
timeout=timeout,
)
if not json_output:
@ -613,22 +651,22 @@ def head_object(
# If response is Complex Object header, it has `splitId` key
if "splitId" in decoded.keys():
logger.info("decoding split header")
return json_transformers.decode_split_header(decoded)
return json_utils.decode_split_header(decoded)
# If response is Last or Linking Object header,
# it has `header` dictionary and non-null `split` dictionary
if "split" in decoded["header"].keys():
if decoded["header"]["split"]:
logger.info("decoding linking object")
return json_transformers.decode_linking_object(decoded)
return json_utils.decode_linking_object(decoded)
if decoded["header"]["objectType"] == "STORAGE_GROUP":
logger.info("decoding storage group")
return json_transformers.decode_storage_group(decoded)
return json_utils.decode_storage_group(decoded)
if decoded["header"]["objectType"] == "TOMBSTONE":
logger.info("decoding tombstone")
return json_transformers.decode_tombstone(decoded)
return json_utils.decode_tombstone(decoded)
logger.info("decoding simple header")
return json_transformers.decode_simple_header(decoded)
return json_utils.decode_simple_header(decoded)
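As a hedged illustration of how the decoded HEAD output above is typically consumed (the `wallet`, `cid`, `oid` and `endpoint` names are assumed fixtures; the field layout follows the decoding branches shown):

# Sketch: HEAD an object in JSON form and branch on the decoded object type.
decoded = head_object(wallet, cid, oid, shell=shell, endpoint=endpoint, json_output=True)
if decoded.get("header", {}).get("objectType") == "TOMBSTONE":
    logger.info("object was deleted; HEAD returned its tombstone")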


@ -1,32 +0,0 @@
import re
# Regex patterns of status codes of Container service (https://github.com/nspcc-dev/neofs-spec/blob/98b154848116223e486ce8b43eaa35fec08b4a99/20-api-v2/container.md)
CONTAINER_NOT_FOUND = "code = 3072.*message = container not found"
# Regex patterns of status codes of Object service (https://github.com/nspcc-dev/neofs-spec/blob/98b154848116223e486ce8b43eaa35fec08b4a99/20-api-v2/object.md)
MALFORMED_REQUEST = "code = 1024.*message = malformed request"
OBJECT_ACCESS_DENIED = "code = 2048.*message = access to object operation denied"
OBJECT_NOT_FOUND = "code = 2049.*message = object not found"
OBJECT_ALREADY_REMOVED = "code = 2052.*message = object already removed"
SESSION_NOT_FOUND = "code = 4096.*message = session token not found"
OUT_OF_RANGE = "code = 2053.*message = out of range"
# TODO: Due to https://github.com/nspcc-dev/neofs-node/issues/2092 we have to check only codes until fixed
# OBJECT_IS_LOCKED = "code = 2050.*message = object is locked"
# LOCK_NON_REGULAR_OBJECT = "code = 2051.*message = ..." will be available once 2092 is fixed
OBJECT_IS_LOCKED = "code = 2050"
LOCK_NON_REGULAR_OBJECT = "code = 2051"
LIFETIME_REQUIRED = "either expiration epoch of a lifetime is required"
LOCK_OBJECT_REMOVAL = "lock object removal"
LOCK_OBJECT_EXPIRATION = "lock object expiration: {expiration_epoch}; current: {current_epoch}"
def error_matches_status(error: Exception, status_pattern: str) -> bool:
"""
Determines whether exception matches specified status pattern.
We use re.search to be consistent with pytest.raises.
"""
match = re.search(status_pattern, str(error))
return match is not None


@ -0,0 +1,353 @@
import logging
import os
import random
import re
import shutil
import uuid
import zipfile
from typing import Optional
from urllib.parse import quote_plus
import allure
import requests
from frostfs_testlib.shell import Shell
from pytest_tests.helpers.aws_cli_client import LONG_TIMEOUT
from pytest_tests.helpers.cli_helpers import _cmd_run
from pytest_tests.helpers.cluster import StorageNode
from pytest_tests.helpers.file_helper import get_file_hash
from pytest_tests.helpers.frostfs_verbs import get_object
from pytest_tests.helpers.storage_policy import get_nodes_without_object
from pytest_tests.resources.common import SIMPLE_OBJECT_SIZE
logger = logging.getLogger("NeoLogger")
ASSETS_DIR = os.getenv("ASSETS_DIR", "TemporaryDir/")
@allure.step("Get via HTTP Gate")
def get_via_http_gate(cid: str, oid: str, endpoint: str, request_path: Optional[str] = None):
"""
This function gets the given object from the HTTP gate
cid: container id to get object from
oid: object ID
endpoint: http gate endpoint
request_path: (optional) HTTP request path; if omitted, the default [{endpoint}/get/{cid}/{oid}] is used
"""
# if `request_path` parameter is omitted, use the default
if request_path is None:
request = f"{endpoint}/get/{cid}/{oid}"
else:
request = f"{endpoint}{request_path}"
resp = requests.get(request, stream=True)
if not resp.ok:
raise Exception(
f"""Failed to get object via HTTP gate:
request: {resp.request.path_url},
response: {resp.text},
status code: {resp.status_code} {resp.reason}"""
)
logger.info(f"Request: {request}")
_attach_allure_step(request, resp.status_code)
file_path = os.path.join(os.getcwd(), ASSETS_DIR, f"{cid}_{oid}")
with open(file_path, "wb") as file:
shutil.copyfileobj(resp.raw, file)
return file_path
@allure.step("Get via Zip HTTP Gate")
def get_via_zip_http_gate(cid: str, prefix: str, endpoint: str):
"""
This function downloads a zip archive of objects with the given prefix from the HTTP gate and extracts it
cid: container id to get objects from
prefix: common prefix of the objects to archive
endpoint: http gate endpoint
"""
request = f"{endpoint}/zip/{cid}/{prefix}"
resp = requests.get(request, stream=True)
if not resp.ok:
raise Exception(
f"""Failed to get object via HTTP gate:
request: {resp.request.path_url},
response: {resp.text},
status code: {resp.status_code} {resp.reason}"""
)
logger.info(f"Request: {request}")
_attach_allure_step(request, resp.status_code)
file_path = os.path.join(os.getcwd(), ASSETS_DIR, f"{cid}_archive.zip")
with open(file_path, "wb") as file:
shutil.copyfileobj(resp.raw, file)
with zipfile.ZipFile(file_path, "r") as zip_ref:
zip_ref.extractall(ASSETS_DIR)
return os.path.join(os.getcwd(), ASSETS_DIR, prefix)
@allure.step("Get via HTTP Gate by attribute")
def get_via_http_gate_by_attribute(
cid: str, attribute: dict, endpoint: str, request_path: Optional[str] = None
):
"""
This function gets the given object from the HTTP gate by attribute
cid: CID to get object from
attribute: single {name: value} attribute pair to search by
endpoint: http gate endpoint
request_path: (optional) HTTP request path; if omitted, the default [{endpoint}/get_by_attribute/{cid}/{Key}/{Value}] is used
"""
attr_name = list(attribute.keys())[0]
attr_value = quote_plus(str(attribute.get(attr_name)))
# if `request_path` parameter is omitted, use the default
if request_path is None:
request = f"{endpoint}/get_by_attribute/{cid}/{quote_plus(str(attr_name))}/{attr_value}"
else:
request = f"{endpoint}{request_path}"
resp = requests.get(request, stream=True)
if not resp.ok:
raise Exception(
f"""Failed to get object via HTTP gate:
request: {resp.request.path_url},
response: {resp.text},
status code: {resp.status_code} {resp.reason}"""
)
logger.info(f"Request: {request}")
_attach_allure_step(request, resp.status_code)
file_path = os.path.join(os.getcwd(), ASSETS_DIR, f"{cid}_{str(uuid.uuid4())}")
with open(file_path, "wb") as file:
shutil.copyfileobj(resp.raw, file)
return file_path
@allure.step("Upload via HTTP Gate")
def upload_via_http_gate(cid: str, path: str, endpoint: str, headers: dict = None) -> str:
"""
This function uploads the given object through the HTTP gate
cid: CID of the container to upload the object to
path: File path to upload
endpoint: http gate endpoint
headers: Object header
"""
request = f"{endpoint}/upload/{cid}"
files = {"upload_file": open(path, "rb")}
body = {"filename": path}
resp = requests.post(request, files=files, data=body, headers=headers)
if not resp.ok:
raise Exception(
f"""Failed to get object via HTTP gate:
request: {resp.request.path_url},
response: {resp.text},
status code: {resp.status_code} {resp.reason}"""
)
logger.info(f"Request: {request}")
_attach_allure_step(request, resp.json(), req_type="POST")
assert resp.json().get("object_id"), f"OID found in response {resp}"
return resp.json().get("object_id")
@allure.step("Check is the passed object large")
def is_object_large(filepath: str) -> bool:
"""
This function checks the passed file size and returns True if file_size > SIMPLE_OBJECT_SIZE
filepath: File path to check
"""
file_size = os.path.getsize(filepath)
logger.info(f"Size= {file_size}")
return file_size > int(SIMPLE_OBJECT_SIZE)
@allure.step("Upload via HTTP Gate using Curl")
def upload_via_http_gate_curl(
cid: str,
filepath: str,
endpoint: str,
headers: list = None,
error_pattern: Optional[str] = None,
) -> str:
"""
This function uploads the given object through the HTTP gate using the curl utility.
cid: CID of the container to upload the object to
filepath: File path to upload
headers: Object header
endpoint: http gate endpoint
error_pattern: [optional] expected error message from the command
"""
request = f"{endpoint}/upload/{cid}"
attributes = ""
if headers:
# parse attributes
attributes = " ".join(headers)
large_object = is_object_large(filepath)
if large_object:
# pre-clean
_cmd_run("rm pipe -f")
files = f"file=@pipe;filename={os.path.basename(filepath)}"
cmd = f"mkfifo pipe;cat {filepath} > pipe & curl --no-buffer -F '{files}' {attributes} {request}"
output = _cmd_run(cmd, LONG_TIMEOUT)
# clean up pipe
_cmd_run("rm pipe")
else:
files = f"file=@{filepath};filename={os.path.basename(filepath)}"
cmd = f"curl -F '{files}' {attributes} {request}"
output = _cmd_run(cmd)
if error_pattern:
match = error_pattern.casefold() in str(output).casefold()
assert match, f"Expected {output} to match {error_pattern}"
return ""
oid_re = re.search(r'"object_id": "(.*)"', output)
if not oid_re:
raise AssertionError(f'Could not find "object_id" in {output}')
return oid_re.group(1)
@allure.step("Get via HTTP Gate using Curl")
def get_via_http_curl(cid: str, oid: str, endpoint: str) -> str:
"""
This function gets the given object from the HTTP gate using the curl utility.
cid: CID to get object from
oid: object OID
endpoint: http gate endpoint
"""
request = f"{endpoint}/get/{cid}/{oid}"
file_path = os.path.join(os.getcwd(), ASSETS_DIR, f"{cid}_{oid}_{str(uuid.uuid4())}")
cmd = f"curl {request} > {file_path}"
_cmd_run(cmd)
return file_path
def _attach_allure_step(request: str, status_code: int, req_type="GET"):
command_attachment = f"REQUEST: '{request}'\n" f"RESPONSE:\n {status_code}\n"
with allure.step(f"{req_type} Request"):
allure.attach(command_attachment, f"{req_type} Request", allure.attachment_type.TEXT)
@allure.step("Try to get object and expect error")
def try_to_get_object_and_expect_error(
cid: str, oid: str, error_pattern: str, endpoint: str
) -> None:
try:
get_via_http_gate(cid=cid, oid=oid, endpoint=endpoint)
raise AssertionError(f"Expected error on getting object with cid: {cid}")
except Exception as err:
match = error_pattern.casefold() in str(err).casefold()
assert match, f"Expected {err} to match {error_pattern}"
@allure.step("Verify object can be get using HTTP header attribute")
def get_object_by_attr_and_verify_hashes(
oid: str, file_name: str, cid: str, attrs: dict, endpoint: str
) -> None:
got_file_path_http = get_via_http_gate(cid=cid, oid=oid, endpoint=endpoint)
got_file_path_http_attr = get_via_http_gate_by_attribute(
cid=cid, attribute=attrs, endpoint=endpoint
)
assert_hashes_are_equal(file_name, got_file_path_http, got_file_path_http_attr)
def get_object_and_verify_hashes(
oid: str,
file_name: str,
wallet: str,
cid: str,
shell: Shell,
nodes: list[StorageNode],
endpoint: str,
object_getter=None,
) -> None:
nodes_list = get_nodes_without_object(
wallet=wallet,
cid=cid,
oid=oid,
shell=shell,
nodes=nodes,
)
# nodes_list can be empty when the object resides on all nodes
if nodes_list:
random_node = random.choice(nodes_list)
else:
random_node = random.choice(nodes)
object_getter = object_getter or get_via_http_gate
got_file_path = get_object(
wallet=wallet,
cid=cid,
oid=oid,
shell=shell,
endpoint=random_node.get_rpc_endpoint(),
)
got_file_path_http = object_getter(cid=cid, oid=oid, endpoint=endpoint)
assert_hashes_are_equal(file_name, got_file_path, got_file_path_http)
def assert_hashes_are_equal(orig_file_name: str, got_file_1: str, got_file_2: str) -> None:
msg = "Expected hashes are equal for files {f1} and {f2}"
got_file_hash_http = get_file_hash(got_file_1)
assert get_file_hash(got_file_2) == got_file_hash_http, msg.format(f1=got_file_2, f2=got_file_1)
assert get_file_hash(orig_file_name) == got_file_hash_http, msg.format(
f1=orig_file_name, f2=got_file_1
)
def attr_into_header(attrs: dict) -> dict:
return {f"X-Attribute-{_key}": _value for _key, _value in attrs.items()}
@allure.step(
"Convert each attribute (Key=Value) to the following format: -H 'X-Attribute-Key: Value'"
)
def attr_into_str_header_curl(attrs: dict) -> list:
headers = []
for k, v in attrs.items():
headers.append(f"-H 'X-Attribute-{k}: {v}'")
logger.info(f"[List of Attrs for curl:] {headers}")
return headers
@allure.step(
"Try to get object via http (pass http_request and optional attributes) and expect error"
)
def try_to_get_object_via_passed_request_and_expect_error(
cid: str,
oid: str,
error_pattern: str,
endpoint: str,
http_request_path: str,
attrs: dict = None,
) -> None:
try:
if attrs is None:
get_via_http_gate(cid=cid, oid=oid, endpoint=endpoint, request_path=http_request_path)
else:
get_via_http_gate_by_attribute(
cid=cid, attribute=attrs, endpoint=endpoint, request_path=http_request_path
)
raise AssertionError(f"Expected error on getting object with cid: {cid}")
except Exception as err:
match = error_pattern.casefold() in str(err).casefold()
assert match, f"Expected {err} to match {error_pattern}"


@ -1,4 +1,4 @@
from neofs_testlib.shell import Shell
from frostfs_testlib.shell import Shell
class IpTablesHelper:


@ -5,14 +5,15 @@ from time import sleep
from typing import Optional
import allure
from neofs_testlib.shell import Shell
from remote_process import RemoteProcess
from frostfs_testlib.shell import Shell
from pytest_tests.helpers.remote_process import RemoteProcess
EXIT_RESULT_CODE = 0
LOAD_RESULTS_PATTERNS = {
"grpc": {
"write_ops": r"neofs_obj_put_total\W*\d*\W*(?P<write_ops>\d*\.\d*)",
"read_ops": r"neofs_obj_get_total\W*\d*\W*(?P<read_ops>\d*\.\d*)",
"write_ops": r"frostfs_obj_put_total\W*\d*\W*(?P<write_ops>\d*\.\d*)",
"read_ops": r"frostfs_obj_get_total\W*\d*\W*(?P<read_ops>\d*\.\d*)",
},
"s3": {
"write_ops": r"aws_obj_put_total\W*\d*\W*(?P<write_ops>\d*\.\d*)",


@ -6,12 +6,13 @@ from dataclasses import dataclass
from typing import Optional
import allure
from cluster import Cluster, StorageNode
from common import MORPH_BLOCK_TIME, NEOFS_CLI_EXEC
from epoch import tick_epoch
from neofs_testlib.cli import NeofsCli
from neofs_testlib.shell import Shell
from utility import parse_time
from frostfs_testlib.cli import FrostfsCli
from frostfs_testlib.shell import Shell
from frostfs_testlib.utils import datetime_utils
from pytest_tests.helpers.cluster import Cluster, StorageNode
from pytest_tests.helpers.epoch import tick_epoch
from pytest_tests.resources.common import FROSTFS_CLI_EXEC, MORPH_BLOCK_TIME
logger = logging.getLogger("NeoLogger")
@ -107,7 +108,7 @@ def get_netmap_snapshot(node: StorageNode, shell: Shell) -> str:
storage_wallet_config = node.get_wallet_config_path()
storage_wallet_path = node.get_wallet_path()
cli = NeofsCli(shell, NEOFS_CLI_EXEC, config_file=storage_wallet_config)
cli = FrostfsCli(shell, FROSTFS_CLI_EXEC, config_file=storage_wallet_config)
return cli.netmap.snapshot(
rpc_endpoint=node.get_rpc_endpoint(),
wallet=storage_wallet_path,
@ -154,7 +155,7 @@ def drop_object(node: StorageNode, cid: str, oid: str) -> str:
def delete_node_data(node: StorageNode) -> None:
node.stop_service()
node.host.delete_storage_node_data(node.name)
time.sleep(parse_time(MORPH_BLOCK_TIME))
time.sleep(datetime_utils.parse_time(MORPH_BLOCK_TIME))
@allure.step("Exclude node {node_to_exclude} from network map")
@ -168,7 +169,7 @@ def exclude_node_from_network_map(
storage_node_set_status(node_to_exclude, status="offline")
time.sleep(parse_time(MORPH_BLOCK_TIME))
time.sleep(datetime_utils.parse_time(MORPH_BLOCK_TIME))
tick_epoch(shell, cluster)
snapshot = get_netmap_snapshot(node=alive_node, shell=shell)
@ -187,11 +188,11 @@ def include_node_to_network_map(
storage_node_set_status(node_to_include, status="online")
# Per suggestion of @fyrchik we need to wait for 2 blocks after we set status and after tick epoch.
# First sleep can be omitted after https://github.com/nspcc-dev/neofs-node/issues/1790 complete.
# First sleep can be omitted after https://github.com/nspcc-dev/frostfs-node/issues/1790 complete.
time.sleep(parse_time(MORPH_BLOCK_TIME) * 2)
time.sleep(datetime_utils.parse_time(MORPH_BLOCK_TIME) * 2)
tick_epoch(shell, cluster)
time.sleep(parse_time(MORPH_BLOCK_TIME) * 2)
time.sleep(datetime_utils.parse_time(MORPH_BLOCK_TIME) * 2)
check_node_in_map(node_to_include, shell, alive_node)
@ -235,10 +236,10 @@ def _run_control_command(node: StorageNode, command: str) -> None:
wallet_config = f'password: "{wallet_password}"'
shell.exec(f"echo '{wallet_config}' > {wallet_config_path}")
cli_config = host.get_cli_config("neofs-cli")
cli_config = host.get_cli_config("frostfs-cli")
# TODO: implement cli.control
# cli = NeofsCli(shell, cli_config.exec_path, wallet_config_path)
# cli = FrostfsCli(shell, cli_config.exec_path, wallet_config_path)
result = shell.exec(
f"{cli_config.exec_path} {command} --endpoint {control_endpoint} "
f"--wallet {wallet_path} --config {wallet_config_path}"


@ -1,11 +1,13 @@
from typing import Optional
import allure
from cluster import Cluster
from file_helper import get_file_hash
from grpc_responses import OBJECT_ACCESS_DENIED, error_matches_status
from neofs_testlib.shell import Shell
from python_keywords.neofs_verbs import (
from frostfs_testlib.resources.common import OBJECT_ACCESS_DENIED
from frostfs_testlib.shell import Shell
from frostfs_testlib.utils import string_utils
from pytest_tests.helpers.cluster import Cluster
from pytest_tests.helpers.file_helper import get_file_hash
from pytest_tests.helpers.frostfs_verbs import (
delete_object,
get_object_from_random_node,
get_range,
@ -14,6 +16,7 @@ from python_keywords.neofs_verbs import (
put_object_to_random_node,
search_object,
)
from pytest_tests.resources.common import CLI_DEFAULT_TIMEOUT
OPERATION_ERROR_TYPE = RuntimeError
@ -42,7 +45,7 @@ def can_get_object(
cluster=cluster,
)
except OPERATION_ERROR_TYPE as err:
assert error_matches_status(
assert string_utils.is_str_match_pattern(
err, OBJECT_ACCESS_DENIED
), f"Expected {err} to match {OBJECT_ACCESS_DENIED}"
return False
@ -75,7 +78,7 @@ def can_put_object(
cluster=cluster,
)
except OPERATION_ERROR_TYPE as err:
assert error_matches_status(
assert string_utils.is_str_match_pattern(
err, OBJECT_ACCESS_DENIED
), f"Expected {err} to match {OBJECT_ACCESS_DENIED}"
return False
@ -105,7 +108,7 @@ def can_delete_object(
endpoint=endpoint,
)
except OPERATION_ERROR_TYPE as err:
assert error_matches_status(
assert string_utils.is_str_match_pattern(
err, OBJECT_ACCESS_DENIED
), f"Expected {err} to match {OBJECT_ACCESS_DENIED}"
return False
@ -121,6 +124,7 @@ def can_get_head_object(
bearer: Optional[str] = None,
wallet_config: Optional[str] = None,
xhdr: Optional[dict] = None,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
) -> bool:
with allure.step("Try get head of object"):
try:
@ -133,9 +137,10 @@ def can_get_head_object(
xhdr=xhdr,
shell=shell,
endpoint=endpoint,
timeout=timeout,
)
except OPERATION_ERROR_TYPE as err:
assert error_matches_status(
assert string_utils.is_str_match_pattern(
err, OBJECT_ACCESS_DENIED
), f"Expected {err} to match {OBJECT_ACCESS_DENIED}"
return False
@ -151,6 +156,7 @@ def can_get_range_of_object(
bearer: Optional[str] = None,
wallet_config: Optional[str] = None,
xhdr: Optional[dict] = None,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
) -> bool:
with allure.step("Try get range of object"):
try:
@ -164,9 +170,10 @@ def can_get_range_of_object(
xhdr=xhdr,
shell=shell,
endpoint=endpoint,
timeout=timeout,
)
except OPERATION_ERROR_TYPE as err:
assert error_matches_status(
assert string_utils.is_str_match_pattern(
err, OBJECT_ACCESS_DENIED
), f"Expected {err} to match {OBJECT_ACCESS_DENIED}"
return False
@ -182,6 +189,7 @@ def can_get_range_hash_of_object(
bearer: Optional[str] = None,
wallet_config: Optional[str] = None,
xhdr: Optional[dict] = None,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
) -> bool:
with allure.step("Try get range hash of object"):
try:
@ -195,9 +203,10 @@ def can_get_range_hash_of_object(
xhdr=xhdr,
shell=shell,
endpoint=endpoint,
timeout=timeout,
)
except OPERATION_ERROR_TYPE as err:
assert error_matches_status(
assert string_utils.is_str_match_pattern(
err, OBJECT_ACCESS_DENIED
), f"Expected {err} to match {OBJECT_ACCESS_DENIED}"
return False
@ -213,6 +222,7 @@ def can_search_object(
bearer: Optional[str] = None,
wallet_config: Optional[str] = None,
xhdr: Optional[dict] = None,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
) -> bool:
with allure.step("Try search object in container"):
try:
@ -224,9 +234,10 @@ def can_search_object(
xhdr=xhdr,
shell=shell,
endpoint=endpoint,
timeout=timeout,
)
except OPERATION_ERROR_TYPE as err:
assert error_matches_status(
assert string_utils.is_str_match_pattern(
err, OBJECT_ACCESS_DENIED
), f"Expected {err} to match {OBJECT_ACCESS_DENIED}"
return False
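A sketch of how these can_* predicates are meant to be used in ACL tests; the argument lists are indicative only, and `user_wallet`, `cid`, `oid`, `file_path` and `cluster` are assumptions:

# Sketch: after EACL denies GET for the user, the predicate should return False
# (it asserts internally that the error matches OBJECT_ACCESS_DENIED).
assert can_put_object(user_wallet, cid, file_path, shell, cluster=cluster)
assert not can_get_object(user_wallet, cid, oid, file_path, shell, cluster=cluster)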


@ -6,14 +6,19 @@ import time
from typing import Optional
import allure
from cluster import MainChain, MorphChain
from common import GAS_HASH, MAINNET_BLOCK_TIME, NEOFS_CONTRACT, NEOGO_EXECUTABLE
from neo3 import wallet as neo3_wallet
from neofs_testlib.cli import NeoGo
from neofs_testlib.shell import Shell
from neofs_testlib.utils.converters import contract_hash_to_address
from neofs_testlib.utils.wallet import get_last_address_from_wallet
from utility import parse_time
from frostfs_testlib.cli import NeoGo
from frostfs_testlib.shell import Shell
from frostfs_testlib.utils import converting_utils, datetime_utils, wallet_utils
from neo3.wallet import utils as neo3_utils
from neo3.wallet import wallet as neo3_wallet
from pytest_tests.helpers.cluster import MainChain, MorphChain
from pytest_tests.resources.common import (
FROSTFS_CONTRACT,
GAS_HASH,
MAINNET_BLOCK_TIME,
NEOGO_EXECUTABLE,
)
logger = logging.getLogger("NeoLogger")
@ -42,15 +47,15 @@ def get_contract_hash(morph_chain: MorphChain, resolve_name: str, shell: Shell)
@allure.step("Withdraw Mainnet Gas")
def withdraw_mainnet_gas(shell: Shell, main_chain: MainChain, wlt: str, amount: int):
address = get_last_address_from_wallet(wlt, EMPTY_PASSWORD)
scripthash = neo3_wallet.Account.address_to_script_hash(address)
address = wallet_utils.get_last_address_from_wallet(wlt, EMPTY_PASSWORD)
scripthash = neo3_utils.address_to_script_hash(address)
neogo = NeoGo(shell=shell, neo_go_exec_path=NEOGO_EXECUTABLE)
out = neogo.contract.invokefunction(
wallet=wlt,
address=address,
rpc_endpoint=main_chain.get_endpoint(),
scripthash=NEOFS_CONTRACT,
scripthash=FROSTFS_CONTRACT,
method="withdraw",
arguments=f"{scripthash} int:{amount}",
multisig_hash=f"{scripthash}:Global",
@ -87,10 +92,10 @@ def transaction_accepted(main_chain: MainChain, tx_id: str):
return False
@allure.step("Get NeoFS Balance")
@allure.step("Get FrostFS Balance")
def get_balance(shell: Shell, morph_chain: MorphChain, wallet_path: str, wallet_password: str = ""):
"""
This function returns NeoFS balance for given wallet.
This function returns FrostFS balance for given wallet.
"""
with open(wallet_path) as wallet_file:
wallet = neo3_wallet.Wallet.from_json(json.load(wallet_file), password=wallet_password)
@ -98,7 +103,7 @@ def get_balance(shell: Shell, morph_chain: MorphChain, wallet_path: str, wallet_
payload = [{"type": "Hash160", "value": str(acc.script_hash)}]
try:
resp = morph_chain.rpc_client.invoke_function(
get_contract_hash(morph_chain, "balance.neofs", shell=shell), "balanceOf", payload
get_contract_hash(morph_chain, "balance.frostfs", shell=shell), "balanceOf", payload
)
logger.info(f"Got response \n{resp}")
value = int(resp["stack"][0]["value"])
@ -141,10 +146,12 @@ def transfer_gas(
if wallet_from_password is not None
else main_chain.get_wallet_password()
)
address_from = address_from or get_last_address_from_wallet(
address_from = address_from or wallet_utils.get_last_address_from_wallet(
wallet_from_path, wallet_from_password
)
address_to = address_to or get_last_address_from_wallet(wallet_to_path, wallet_to_password)
address_to = address_to or wallet_utils.get_last_address_from_wallet(
wallet_to_path, wallet_to_password
)
neogo = NeoGo(shell, neo_go_exec_path=NEOGO_EXECUTABLE)
out = neogo.nep17.transfer(
@ -162,10 +169,10 @@ def transfer_gas(
raise Exception("Got no TXID after run the command")
if not transaction_accepted(main_chain, txid):
raise AssertionError(f"TX {txid} hasn't been processed")
time.sleep(parse_time(MAINNET_BLOCK_TIME))
time.sleep(datetime_utils.parse_time(MAINNET_BLOCK_TIME))
@allure.step("NeoFS Deposit")
@allure.step("FrostFS Deposit")
def deposit_gas(
shell: Shell,
main_chain: MainChain,
@ -174,12 +181,12 @@ def deposit_gas(
wallet_from_password: str,
):
"""
Transferring GAS from given wallet to NeoFS contract address.
Transferring GAS from given wallet to FrostFS contract address.
"""
# get NeoFS contract address
deposit_addr = contract_hash_to_address(NEOFS_CONTRACT)
logger.info(f"NeoFS contract address: {deposit_addr}")
address_from = get_last_address_from_wallet(
# get FrostFS contract address
deposit_addr = converting_utils.contract_hash_to_address(FROSTFS_CONTRACT)
logger.info(f"FrostFS contract address: {deposit_addr}")
address_from = wallet_utils.get_last_address_from_wallet(
wallet_path=wallet_from_path, wallet_password=wallet_from_password
)
transfer_gas(

View file

@ -4,8 +4,8 @@ import uuid
from typing import Optional
import allure
from neofs_testlib.shell import Shell
from neofs_testlib.shell.interfaces import CommandOptions
from frostfs_testlib.shell import Shell
from frostfs_testlib.shell.interfaces import CommandOptions
from tenacity import retry, stop_after_attempt, wait_fixed

View file

@ -1,13 +1,15 @@
import datetime
import logging
import os
from datetime import datetime, timedelta
from typing import Optional
import allure
import s3_gate_bucket
import s3_gate_object
from dateutil.parser import parse
from pytest_tests.steps import s3_gate_bucket, s3_gate_object
logger = logging.getLogger("NeoLogger")
@allure.step("Expected all objects are presented in the bucket")
def check_objects_in_bucket(
@ -127,3 +129,31 @@ def assert_object_lock_mode(
assert (
retain_date - last_modify + timedelta(seconds=1)
).days == retain_period, f"Expected retention period is {retain_period} days"
def assert_s3_acl(acl_grants: list, permitted_users: str):
if permitted_users == "AllUsers":
grantees = {"AllUsers": 0, "CanonicalUser": 0}
for acl_grant in acl_grants:
if acl_grant.get("Grantee", {}).get("Type") == "Group":
uri = acl_grant.get("Grantee", {}).get("URI")
permission = acl_grant.get("Permission")
assert (uri, permission) == (
"http://acs.amazonaws.com/groups/global/AllUsers",
"FULL_CONTROL",
), "All Groups should have FULL_CONTROL"
grantees["AllUsers"] += 1
if acl_grant.get("Grantee", {}).get("Type") == "CanonicalUser":
permission = acl_grant.get("Permission")
assert permission == "FULL_CONTROL", "Canonical User should have FULL_CONTROL"
grantees["CanonicalUser"] += 1
assert grantees["AllUsers"] >= 1, "All Users should have FULL_CONTROL"
assert grantees["CanonicalUser"] >= 1, "Canonical User should have FULL_CONTROL"
if permitted_users == "CanonicalUser":
for acl_grant in acl_grants:
if acl_grant.get("Grantee", {}).get("Type") == "CanonicalUser":
permission = acl_grant.get("Permission")
assert permission == "FULL_CONTROL", "Only CanonicalUser should have FULL_CONTROL"
else:
logger.error("FULL_CONTROL is given to All Users")


@ -1,17 +1,18 @@
"""
This module contains keywords for working with Storage Groups.
It contains wrappers for `neofs-cli storagegroup` verbs.
It contains wrappers for `frostfs-cli storagegroup` verbs.
"""
import logging
from typing import Optional
import allure
from cluster import Cluster
from common import COMPLEX_OBJ_SIZE, NEOFS_CLI_EXEC, SIMPLE_OBJ_SIZE, WALLET_CONFIG
from complex_object_actions import get_link_object
from neofs_testlib.cli import NeofsCli
from neofs_testlib.shell import Shell
from neofs_verbs import head_object
from frostfs_testlib.cli import FrostfsCli
from frostfs_testlib.shell import Shell
from pytest_tests.helpers.cluster import Cluster
from pytest_tests.helpers.complex_object_actions import get_link_object
from pytest_tests.helpers.frostfs_verbs import head_object
from pytest_tests.resources.common import CLI_DEFAULT_TIMEOUT, FROSTFS_CLI_EXEC, WALLET_CONFIG
logger = logging.getLogger("NeoLogger")
@ -26,10 +27,11 @@ def put_storagegroup(
bearer: Optional[str] = None,
wallet_config: str = WALLET_CONFIG,
lifetime: int = 10,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
) -> str:
"""
Wrapper for `neofs-cli storagegroup put`. Before the SG is created,
neofs-cli performs HEAD on `objects`, so this verb must be allowed
Wrapper for `frostfs-cli storagegroup put`. Before the SG is created,
frostfs-cli performs HEAD on `objects`, so this verb must be allowed
for `wallet` in `cid`.
Args:
shell: Shell instance.
@ -38,12 +40,15 @@ def put_storagegroup(
lifetime: Storage group lifetime in epochs.
objects: List of Object IDs to include into the SG.
bearer: Path to Bearer token file.
wallet_config: Path to neofs-cli config file.
wallet_config: Path to frostfs-cli config file.
timeout: Timeout for an operation.
Returns:
Object ID of created Storage Group.
"""
neofscli = NeofsCli(shell=shell, neofs_cli_exec_path=NEOFS_CLI_EXEC, config_file=wallet_config)
result = neofscli.storagegroup.put(
frostfscli = FrostfsCli(
shell=shell, frostfs_cli_exec_path=FROSTFS_CLI_EXEC, config_file=wallet_config
)
result = frostfscli.storagegroup.put(
wallet=wallet,
cid=cid,
lifetime=lifetime,
@ -63,25 +68,30 @@ def list_storagegroup(
cid: str,
bearer: Optional[str] = None,
wallet_config: str = WALLET_CONFIG,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
) -> list:
"""
Wrapper for `neofs-cli storagegroup list`. This operation
Wrapper for `frostfs-cli storagegroup list`. This operation
requires SEARCH allowed for `wallet` in `cid`.
Args:
shell: Shell instance.
wallet: Path to wallet on whose behalf the SGs are listed in the container
cid: ID of Container to list.
bearer: Path to Bearer token file.
wallet_config: Path to neofs-cli config file.
wallet_config: Path to frostfs-cli config file.
timeout: Timeout for an operation.
Returns:
Object IDs of found Storage Groups.
"""
neofscli = NeofsCli(shell=shell, neofs_cli_exec_path=NEOFS_CLI_EXEC, config_file=wallet_config)
result = neofscli.storagegroup.list(
frostfscli = FrostfsCli(
shell=shell, frostfs_cli_exec_path=FROSTFS_CLI_EXEC, config_file=wallet_config
)
result = frostfscli.storagegroup.list(
wallet=wallet,
cid=cid,
bearer=bearer,
rpc_endpoint=endpoint,
timeout=timeout,
)
# discard the first line of output
found_objects = result.stdout.split("\n")[1:]
@ -97,30 +107,35 @@ def get_storagegroup(
gid: str,
bearer: str = "",
wallet_config: str = WALLET_CONFIG,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
) -> dict:
"""
Wrapper for `neofs-cli storagegroup get`.
Wrapper for `frostfs-cli storagegroup get`.
Args:
shell: Shell instance.
wallet: Path to wallet on whose behalf the SG is got.
cid: ID of Container where SG is stored.
gid: ID of the Storage Group.
bearer: Path to Bearer token file.
wallet_config: Path to neofs-cli config file.
wallet_config: Path to frostfs-cli config file.
timeout: Timeout for an operation.
Returns:
Detailed information on the Storage Group.
"""
neofscli = NeofsCli(shell=shell, neofs_cli_exec_path=NEOFS_CLI_EXEC, config_file=wallet_config)
result = neofscli.storagegroup.get(
frostfscli = FrostfsCli(
shell=shell, frostfs_cli_exec_path=FROSTFS_CLI_EXEC, config_file=wallet_config
)
result = frostfscli.storagegroup.get(
wallet=wallet,
cid=cid,
bearer=bearer,
id=gid,
rpc_endpoint=endpoint,
timeout=timeout,
)
# TODO: temporary solution for parsing output. Needs to be replaced with
# JSON parsing when https://github.com/nspcc-dev/neofs-node/issues/1355
# JSON parsing when https://github.com/nspcc-dev/frostfs-node/issues/1355
# is done.
strings = result.stdout.strip().split("\n")
# first three strings go to `data`;
@ -146,26 +161,31 @@ def delete_storagegroup(
gid: str,
bearer: str = "",
wallet_config: str = WALLET_CONFIG,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
) -> str:
"""
Wrapper for `neofs-cli storagegroup delete`.
Wrapper for `frostfs-cli storagegroup delete`.
Args:
shell: Shell instance.
wallet: Path to wallet on whose behalf the SG is deleted.
cid: ID of Container where SG is stored.
gid: ID of the Storage Group.
bearer: Path to Bearer token file.
wallet_config: Path to neofs-cli config file.
wallet_config: Path to frostfs-cli config file.
timeout: Timeout for an operation.
Returns:
Tombstone ID of the deleted Storage Group.
"""
neofscli = NeofsCli(shell=shell, neofs_cli_exec_path=NEOFS_CLI_EXEC, config_file=wallet_config)
result = neofscli.storagegroup.delete(
frostfscli = FrostfsCli(
shell=shell, frostfs_cli_exec_path=FROSTFS_CLI_EXEC, config_file=wallet_config
)
result = frostfscli.storagegroup.delete(
wallet=wallet,
cid=cid,
bearer=bearer,
id=gid,
rpc_endpoint=endpoint,
timeout=timeout,
)
tombstone_id = result.stdout.strip().split("\n")[1].split(": ")[1]
return tombstone_id
@ -180,6 +200,7 @@ def verify_list_storage_group(
gid: str,
bearer: str = None,
wallet_config: str = WALLET_CONFIG,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
):
storage_groups = list_storagegroup(
shell=shell,
@ -188,6 +209,7 @@ def verify_list_storage_group(
cid=cid,
bearer=bearer,
wallet_config=wallet_config,
timeout=timeout,
)
assert gid in storage_groups
@ -201,12 +223,14 @@ def verify_get_storage_group(
gid: str,
obj_list: list,
object_size: int,
max_object_size: int,
bearer: str = None,
wallet_config: str = WALLET_CONFIG,
timeout: Optional[str] = CLI_DEFAULT_TIMEOUT,
):
obj_parts = []
endpoint = cluster.default_rpc_endpoint
if object_size == COMPLEX_OBJ_SIZE:
if object_size > max_object_size:
for obj in obj_list:
link_oid = get_link_object(
wallet,
@ -216,6 +240,7 @@ def verify_get_storage_group(
nodes=cluster.storage_nodes,
bearer=bearer,
wallet_config=wallet_config,
timeout=timeout,
)
obj_head = head_object(
wallet=wallet,
@ -226,6 +251,7 @@ def verify_get_storage_group(
is_raw=True,
bearer=bearer,
wallet_config=wallet_config,
timeout=timeout,
)
obj_parts = obj_head["header"]["split"]["children"]
@ -238,12 +264,12 @@ def verify_get_storage_group(
gid=gid,
bearer=bearer,
wallet_config=wallet_config,
timeout=timeout,
)
if object_size == SIMPLE_OBJ_SIZE:
exp_size = SIMPLE_OBJ_SIZE * obj_num
exp_size = object_size * obj_num
if object_size < max_object_size:
assert int(storagegroup_data["Group size"]) == exp_size
assert storagegroup_data["Members"] == obj_list
else:
exp_size = COMPLEX_OBJ_SIZE * obj_num
assert int(storagegroup_data["Group size"]) == exp_size
assert storagegroup_data["Members"] == obj_parts


@ -6,14 +6,14 @@
"""
import logging
from typing import List
import allure
import complex_object_actions
import neofs_verbs
from cluster import StorageNode
from grpc_responses import OBJECT_NOT_FOUND, error_matches_status
from neofs_testlib.shell import Shell
from frostfs_testlib.resources.common import OBJECT_NOT_FOUND
from frostfs_testlib.shell import Shell
from frostfs_testlib.utils import string_utils
from pytest_tests.helpers import complex_object_actions, frostfs_verbs
from pytest_tests.helpers.cluster import StorageNode
logger = logging.getLogger("NeoLogger")
@ -66,7 +66,7 @@ def get_simple_object_copies(
copies = 0
for node in nodes:
try:
response = neofs_verbs.head_object(
response = frostfs_verbs.head_object(
wallet, cid, oid, shell=shell, endpoint=node.get_rpc_endpoint(), is_direct=True
)
if response:
@ -123,7 +123,7 @@ def get_nodes_with_object(
wallet = node.get_wallet_path()
wallet_config = node.get_wallet_config_path()
try:
res = neofs_verbs.head_object(
res = frostfs_verbs.head_object(
wallet,
cid,
oid,
@ -160,13 +160,13 @@ def get_nodes_without_object(
nodes_list = []
for node in nodes:
try:
res = neofs_verbs.head_object(
res = frostfs_verbs.head_object(
wallet, cid, oid, shell=shell, endpoint=node.get_rpc_endpoint(), is_direct=True
)
if res is None:
nodes_list.append(node)
except Exception as err:
if error_matches_status(err, OBJECT_NOT_FOUND):
if string_utils.is_str_match_pattern(err, OBJECT_NOT_FOUND):
nodes_list.append(node)
else:
raise Exception(f"Got error {err} on head object command") from err


@ -2,9 +2,10 @@ import json
import logging
import allure
from neo3 import wallet
from neofs_testlib.shell import Shell
from neofs_verbs import head_object
from frostfs_testlib.shell import Shell
from neo3.wallet import wallet
from pytest_tests.helpers.frostfs_verbs import head_object
logger = logging.getLogger("NeoLogger")


@ -1,33 +1,9 @@
import time
import allure
from common import STORAGE_GC_TIME
from frostfs_testlib.utils import datetime_utils
def parse_time(value: str) -> int:
"""Converts time interval in text form into time interval as number of seconds.
Args:
value: time interval as text.
Returns:
Number of seconds in the parsed time interval.
"""
value = value.lower()
for suffix in ["s", "sec"]:
if value.endswith(suffix):
return int(value[: -len(suffix)])
for suffix in ["m", "min"]:
if value.endswith(suffix):
return int(value[: -len(suffix)]) * 60
for suffix in ["h", "hr", "hour"]:
if value.endswith(suffix):
return int(value[: -len(suffix)]) * 60 * 60
raise ValueError(f"Unknown units in time value '{value}'")
from pytest_tests.resources.common import STORAGE_GC_TIME
def placement_policy_from_container(container_info: str) -> str:
@ -47,7 +23,7 @@ def placement_policy_from_container(container_info: str) -> str:
FILTER Country EQ Sweden AS LOC_SW
Args:
container_info: output from neofs-cli container get command
container_info: output from frostfs-cli container get command
Returns:
placement policy as a string
@ -57,6 +33,6 @@ def placement_policy_from_container(container_info: str) -> str:
def wait_for_gc_pass_on_storage_nodes() -> None:
wait_time = parse_time(STORAGE_GC_TIME)
wait_time = datetime_utils.parse_time(STORAGE_GC_TIME)
with allure.step(f"Wait {wait_time}s until GC completes on storage nodes"):
time.sleep(wait_time)
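Assuming `datetime_utils.parse_time` behaves like the removed local copy above, it maps interval strings such as STORAGE_GC_TIME to seconds:

# Sketch: s/sec, m/min and h/hr/hour suffixes, per the removed implementation.
assert datetime_utils.parse_time("75s") == 75
assert datetime_utils.parse_time("1min") == 60
assert datetime_utils.parse_time("2h") == 7200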


@ -1,19 +1,26 @@
import os
import uuid
from dataclasses import dataclass
from typing import Optional
from cluster import Cluster
from common import FREE_STORAGE, WALLET_PASS
from neofs_testlib.shell import Shell
from neofs_testlib.utils.wallet import get_last_address_from_wallet, init_wallet
from python_keywords.payment_neogo import deposit_gas, transfer_gas
from frostfs_testlib.shell import Shell
from frostfs_testlib.utils import wallet_utils
from pytest_tests.helpers.cluster import Cluster, NodeBase
from pytest_tests.helpers.payment_neogo import deposit_gas, transfer_gas
from pytest_tests.resources.common import FREE_STORAGE, WALLET_CONFIG, WALLET_PASS
@dataclass
class WalletFile:
path: str
password: str
password: str = WALLET_PASS
config_path: str = WALLET_CONFIG
@staticmethod
def from_node(node: NodeBase):
return WalletFile(
node.get_wallet_path(), node.get_wallet_password(), node.get_wallet_config_path()
)
def get_address(self) -> str:
"""
@ -22,7 +29,7 @@ class WalletFile:
Returns:
The address of the wallet.
"""
return get_last_address_from_wallet(self.path, self.password)
return wallet_utils.get_last_address_from_wallet(self.path, self.password)
class WalletFactory:
@ -41,7 +48,7 @@ class WalletFactory:
WalletFile object of new wallet
"""
wallet_path = os.path.join(self.wallets_dir, f"{str(uuid.uuid4())}.json")
init_wallet(wallet_path, password)
wallet_utils.init_wallet(wallet_path, password)
if not FREE_STORAGE:
main_chain = self.cluster.main_chain_nodes[0]


@ -6,8 +6,6 @@ log_format = %(asctime)s [%(levelname)4s] %(message)s
log_cli_date_format = %Y-%m-%d %H:%M:%S
log_date_format = %H:%M:%S
markers =
# controller markers
no_log_analyze: skip critical errors analyzer at the end of test
# special markers
staging: tests excluded from verifier/pr-validation/sanity jobs; run only in the staging job
sanity: test runs in sanity testrun
@ -15,6 +13,7 @@ markers =
# functional markers
container: tests for container creation
grpc_api: standard gRPC API tests
grpc_control: tests related to using frostfs-cli control commands
grpc_object_lock: gRPC lock tests
http_gate: HTTP gate contract
s3_gate: All S3 gate tests
@ -26,9 +25,10 @@ markers =
s3_gate_tagging: Tagging S3 gate tests
s3_gate_versioning: Versioning S3 gate tests
long: long tests (with long execution time)
node_mgmt: neofs control commands
node_mgmt: frostfs control commands
session_token: tests for operations with session token
static_session: tests for operations with static session token
bearer: tests for bearer tokens
acl: All tests for ACL
acl_basic: tests for basic ACL
acl_bearer: tests for ACL with bearer
@ -40,6 +40,6 @@ markers =
failover_network: tests for network failure
failover_reboot: tests for system recovery after reboot of a node
add_nodes: add nodes to cluster
check_binaries: check neofs installed binaries versions
check_binaries: check frostfs installed binaries versions
payments: tests for payment associated operations
load: performance tests


@ -4,14 +4,14 @@ import yaml
CONTAINER_WAIT_INTERVAL = "1m"
# TODO: Get object size data from a node config
SIMPLE_OBJ_SIZE = int(os.getenv("SIMPLE_OBJ_SIZE", "1000"))
COMPLEX_OBJ_SIZE = int(os.getenv("COMPLEX_OBJ_SIZE", "2000"))
SIMPLE_OBJECT_SIZE = os.getenv("SIMPLE_OBJECT_SIZE", "1000")
COMPLEX_OBJECT_CHUNKS_COUNT = os.getenv("COMPLEX_OBJECT_CHUNKS_COUNT", "3")
COMPLEX_OBJECT_TAIL_SIZE = os.getenv("COMPLEX_OBJECT_TAIL_SIZE", "1000")
MAINNET_BLOCK_TIME = os.getenv("MAINNET_BLOCK_TIME", "1s")
MAINNET_TIMEOUT = os.getenv("MAINNET_TIMEOUT", "1min")
MORPH_BLOCK_TIME = os.getenv("MORPH_BLOCK_TIME", "1s")
NEOFS_CONTRACT_CACHE_TIMEOUT = os.getenv("NEOFS_CONTRACT_CACHE_TIMEOUT", "30s")
FROSTFS_CONTRACT_CACHE_TIMEOUT = os.getenv("FROSTFS_CONTRACT_CACHE_TIMEOUT", "30s")
# Time interval that allows a GC pass on storage node (this includes GC sleep interval
# of 1min plus 15 seconds for GC pass itself)
@ -19,31 +19,23 @@ STORAGE_GC_TIME = os.getenv("STORAGE_GC_TIME", "75s")
GAS_HASH = os.getenv("GAS_HASH", "0xd2a4cff31913016155e38e474a2c06d08be276cf")
NEOFS_CONTRACT = os.getenv("NEOFS_IR_CONTRACTS_NEOFS")
FROSTFS_CONTRACT = os.getenv("FROSTFS_IR_CONTRACTS_FROSTFS")
ASSETS_DIR = os.getenv("ASSETS_DIR", "TemporaryDir")
DEVENV_PATH = os.getenv("DEVENV_PATH", os.path.join("..", "neofs-dev-env"))
DEVENV_PATH = os.getenv("DEVENV_PATH", os.path.join("..", "frostfs-dev-env"))
# Password of wallet owned by user on behalf of whom we are running tests
WALLET_PASS = os.getenv("WALLET_PASS", "")
# Load node parameters
LOAD_NODES = os.getenv("LOAD_NODES", "").split(",")
LOAD_NODE_SSH_USER = os.getenv("LOAD_NODE_SSH_USER", "root")
LOAD_NODE_SSH_PRIVATE_KEY_PATH = os.getenv("LOAD_NODE_SSH_PRIVATE_KEY_PATH")
BACKGROUND_WRITERS_COUNT = os.getenv("BACKGROUND_WRITERS_COUNT", 10)
BACKGROUND_READERS_COUNT = os.getenv("BACKGROUND_READERS_COUNT", 10)
BACKGROUND_OBJ_SIZE = os.getenv("BACKGROUND_OBJ_SIZE", 1024)
BACKGROUND_LOAD_MAX_TIME = os.getenv("BACKGROUND_LOAD_MAX_TIME", 600)
# Paths to CLI executables on machine that runs tests
NEOGO_EXECUTABLE = os.getenv("NEOGO_EXECUTABLE", "neo-go")
NEOFS_CLI_EXEC = os.getenv("NEOFS_CLI_EXEC", "neofs-cli")
NEOFS_AUTHMATE_EXEC = os.getenv("NEOFS_AUTHMATE_EXEC", "neofs-authmate")
NEOFS_ADM_EXEC = os.getenv("NEOFS_ADM_EXEC", "neofs-adm")
FROSTFS_CLI_EXEC = os.getenv("FROSTFS_CLI_EXEC", "frostfs-cli")
FROSTFS_AUTHMATE_EXEC = os.getenv("FROSTFS_AUTHMATE_EXEC", "frostfs-authmate")
FROSTFS_ADM_EXEC = os.getenv("FROSTFS_ADM_EXEC", "frostfs-adm")
# Config for neofs-adm utility. Optional if tests are running against devenv
NEOFS_ADM_CONFIG_PATH = os.getenv("NEOFS_ADM_CONFIG_PATH")
# Config for frostfs-adm utility. Optional if tests are running against devenv
FROSTFS_ADM_CONFIG_PATH = os.getenv("FROSTFS_ADM_CONFIG_PATH")
FREE_STORAGE = os.getenv("FREE_STORAGE", "false").lower() == "true"
BIN_VERSIONS_FILE = os.getenv("BIN_VERSIONS_FILE")
@ -53,6 +45,8 @@ STORAGE_NODE_SERVICE_NAME_REGEX = r"s\d\d"
HTTP_GATE_SERVICE_NAME_REGEX = r"http-gate\d\d"
S3_GATE_SERVICE_NAME_REGEX = r"s3-gate\d\d"
CLI_DEFAULT_TIMEOUT = os.getenv("CLI_DEFAULT_TIMEOUT", None)
# Generate wallet configs
# TODO: we should move all info about wallet configs to fixtures
WALLET_CONFIG = os.path.join(os.getcwd(), "wallet_config.yml")

View file

@ -0,0 +1,27 @@
import os
# Load node parameters
LOAD_NODES = os.getenv("LOAD_NODES", "").split(",")
LOAD_NODE_SSH_USER = os.getenv("LOAD_NODE_SSH_USER", "root")
LOAD_NODE_SSH_PRIVATE_KEY_PATH = os.getenv("LOAD_NODE_SSH_PRIVATE_KEY_PATH")
BACKGROUND_WRITERS_COUNT = os.getenv("BACKGROUND_WRITERS_COUNT", 10)
BACKGROUND_READERS_COUNT = os.getenv("BACKGROUND_READERS_COUNT", 10)
BACKGROUND_OBJ_SIZE = os.getenv("BACKGROUND_OBJ_SIZE", 1024)
BACKGROUND_LOAD_MAX_TIME = os.getenv("BACKGROUND_LOAD_MAX_TIME", 600)
# Load run parameters
OBJ_SIZE = os.getenv("OBJ_SIZE", "1000").split(",")
CONTAINERS_COUNT = os.getenv("CONTAINERS_COUNT", "1").split(",")
OUT_FILE = os.getenv("OUT_FILE", "1mb_200.json").split(",")
OBJ_COUNT = os.getenv("OBJ_COUNT", "4").split(",")
WRITERS = os.getenv("WRITERS", "200").split(",")
READERS = os.getenv("READER", "0").split(",")
DELETERS = os.getenv("DELETERS", "0").split(",")
LOAD_TIME = os.getenv("LOAD_TIME", "200").split(",")
LOAD_TYPE = os.getenv("LOAD_TYPE", "grpc").split(",")
LOAD_NODES_COUNT = os.getenv("LOAD_NODES_COUNT", "1").split(",")
STORAGE_NODE_COUNT = os.getenv("STORAGE_NODE_COUNT", "4").split(",")
CONTAINER_PLACEMENT_POLICY = os.getenv(
"CONTAINER_PLACEMENT_POLICY", "REP 1 IN X CBF 1 SELECT 1 FROM * AS X"
)

View file

@ -1,7 +1,9 @@
import epoch
import allure
import pytest
from cluster import Cluster
from neofs_testlib.shell import Shell
from frostfs_testlib.shell import Shell
from pytest_tests.helpers import epoch
from pytest_tests.helpers.cluster import Cluster
# To skip adding every mandatory singleton dependency to EACH test function
@ -15,9 +17,17 @@ class ClusterTestBase:
ClusterTestBase.cluster = cluster
yield
@allure.title("Tick {epochs_to_tick} epochs")
def tick_epochs(self, epochs_to_tick: int):
for _ in range(epochs_to_tick):
self.tick_epoch()
def tick_epoch(self):
epoch.tick_epoch(self.shell, self.cluster)
def wait_for_epochs_align(self):
epoch.wait_for_epochs_align(self.shell, self.cluster)
def get_epoch(self):
return epoch.get_epoch(self.shell, self.cluster)
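A sketch of how a test class consumes these helpers (the test itself is illustrative): tick_epochs is just a loop over tick_epoch, and wait_for_epochs_align blocks until every storage node reports the new epoch.

class TestEpochExample(ClusterTestBase):
    def test_tick_and_align(self):
        start = self.get_epoch()
        self.tick_epochs(2)           # two single-epoch ticks
        self.wait_for_epochs_align()  # all storage nodes agree on the new epoch
        assert self.get_epoch() == start + 2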

View file

@ -3,15 +3,17 @@ import re
from dataclasses import asdict
import allure
from common import STORAGE_NODE_SERVICE_NAME_REGEX
from k6 import K6, LoadParams, LoadResults
from neofs_testlib.cli.neofs_authmate import NeofsAuthmate
from neofs_testlib.cli.neogo import NeoGo
from neofs_testlib.hosting import Hosting
from neofs_testlib.shell import CommandOptions, SSHShell
from neofs_testlib.shell.interfaces import InteractiveInput
from frostfs_testlib.cli.frostfs_authmate import FrostfsAuthmate
from frostfs_testlib.cli.neogo import NeoGo
from frostfs_testlib.hosting import Hosting
from frostfs_testlib.shell import CommandOptions, SSHShell
from frostfs_testlib.shell.interfaces import InteractiveInput
NEOFS_AUTHMATE_PATH = "neofs-s3-authmate"
from pytest_tests.helpers.k6 import K6, LoadParams, LoadResults
from pytest_tests.resources.common import STORAGE_NODE_SERVICE_NAME_REGEX
FROSTFS_AUTHMATE_PATH = "frostfs-authmate"
STOPPED_HOSTS = []
@allure.title("Get services endpoints")
@ -22,14 +24,31 @@ def get_services_endpoints(
return [service_config.attributes[endpoint_attribute] for service_config in service_configs]
@allure.title("Stop nodes")
def stop_unused_nodes(storage_nodes: list, used_nodes_count: int):
for node in storage_nodes[used_nodes_count:]:
host = node.host
STOPPED_HOSTS.append(host)
host.stop_host("hard")
@allure.title("Start nodes")
def start_stopped_nodes():
# Iterate over a copy: removing items from STOPPED_HOSTS while iterating it would skip hosts
for host in list(STOPPED_HOSTS):
host.start_host()
STOPPED_HOSTS.clear()
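stop_unused_nodes records every host it powers off in the module-level STOPPED_HOSTS list, which is what lets start_stopped_nodes undo the operation later. A typical pairing (sketch; the try block stands in for the actual load run):

def with_reduced_cluster(storage_nodes: list, used_nodes_count: int):
    stop_unused_nodes(storage_nodes, used_nodes_count)  # hard-stop the unused tail
    try:
        pass  # run the load against the remaining nodes
    finally:
        start_stopped_nodes()  # restart every recorded host and clear the list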
@allure.title("Init s3 client")
def init_s3_client(load_nodes: list, login: str, pkey: str, hosting: Hosting):
def init_s3_client(
load_nodes: list, login: str, pkey: str, container_placement_policy: str, hosting: Hosting
):
service_configs = hosting.find_service_configs(STORAGE_NODE_SERVICE_NAME_REGEX)
host = hosting.get_host_by_service(service_configs[0].name)
wallet_path = service_configs[0].attributes["wallet_path"]
neogo_cli_config = host.get_cli_config("neo-go")
neogo_wallet = NeoGo(shell=host.get_shell(), neo_go_exec_path=neogo_cli_config.exec_path).wallet
dump_keys_output = neogo_wallet.dump_keys(wallet_config=wallet_path).stdout
dump_keys_output = neogo_wallet.dump_keys(wallet=wallet_path, wallet_config=None).stdout
public_key = str(re.search(r":\n(?P<public_key>.*)", dump_keys_output).group("public_key"))
node_endpoint = service_configs[0].attributes["rpc_endpoint"]
# prompt_pattern doesn't work at the moment
@ -38,13 +57,13 @@ def init_s3_client(load_nodes: list, login: str, pkey: str, hosting: Hosting):
path = ssh_client.exec(r"sudo find . -name 'k6' -exec dirname {} \; -quit").stdout.strip(
"\n"
)
neofs_authmate_exec = NeofsAuthmate(ssh_client, NEOFS_AUTHMATE_PATH)
issue_secret_output = neofs_authmate_exec.secret.issue(
frostfs_authmate_exec = FrostfsAuthmate(ssh_client, FROSTFS_AUTHMATE_PATH)
issue_secret_output = frostfs_authmate_exec.secret.issue(
wallet=f"{path}/scenarios/files/wallet.json",
peer=node_endpoint,
bearer_rules=f"{path}/scenarios/files/rules.json",
gate_public_key=public_key,
container_placement_policy="REP 1 IN X CBF 1 SELECT 1 FROM * AS X",
container_placement_policy=container_placement_policy,
container_policy=f"{path}/scenarios/files/policy.json",
wallet_password="",
).stdout
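With the new signature the placement policy is supplied by the caller instead of being hard-coded, so a call now looks like this (a sketch; the policy string is an example value):

init_s3_client(
    load_nodes=LOAD_NODES,
    login=LOAD_NODE_SSH_USER,
    pkey=LOAD_NODE_SSH_PRIVATE_KEY_PATH,
    container_placement_policy="REP 1 IN X CBF 1 SELECT 1 FROM * AS X",
    hosting=hosting,
)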
@ -88,7 +107,7 @@ def prepare_objects(k6_instance: K6):
@allure.title("Prepare K6 instances and objects")
def prepare_k6_instances(
load_nodes: list, login: str, pkey: str, load_params: LoadParams, prepare: bool = True
) -> list:
) -> list[K6]:
k6_load_objects = []
for load_node in load_nodes:
ssh_client = SSHShell(host=load_node, login=login, private_key_path=pkey)

View file

@ -8,19 +8,20 @@ from typing import Any, Optional
import allure
import boto3
import pytest
import s3_gate_bucket
import s3_gate_object
import urllib3
from aws_cli_client import AwsCliClient
from botocore.config import Config
from botocore.exceptions import ClientError
from cli_helpers import _cmd_run, _configure_aws_cli, _run_with_passwd
from cluster import Cluster
from cluster_test_base import ClusterTestBase
from common import NEOFS_AUTHMATE_EXEC
from neofs_testlib.shell import Shell
from frostfs_testlib.shell import Shell
from pytest import FixtureRequest
from python_keywords.container import list_containers
from pytest_tests.steps import s3_gate_bucket
from pytest_tests.steps import s3_gate_object
from pytest_tests.helpers.aws_cli_client import AwsCliClient
from pytest_tests.helpers.cli_helpers import _cmd_run, _configure_aws_cli, _run_with_passwd
from pytest_tests.helpers.cluster import Cluster
from pytest_tests.helpers.container import list_containers
from pytest_tests.resources.common import FROSTFS_AUTHMATE_EXEC
from pytest_tests.steps.cluster_test_base import ClusterTestBase
# Disable warnings on self-signed certificate which the
# boto library produces on requests to S3-gate in dev-env
@ -44,15 +45,11 @@ class TestS3GateBase(ClusterTestBase):
self, default_wallet, client_shell: Shell, request: FixtureRequest, cluster: Cluster
) -> Any:
wallet = default_wallet
s3_bearer_rules_file = f"{os.getcwd()}/robot/resources/files/s3_bearer_rules.json"
s3_bearer_rules_file = f"{os.getcwd()}/pytest_tests/resources/files/s3_bearer_rules.json"
policy = None if isinstance(request.param, str) else request.param[1]
(
cid,
bucket,
access_key_id,
secret_access_key,
owner_private_key,
) = init_s3_credentials(wallet, cluster, s3_bearer_rules_file=s3_bearer_rules_file)
(cid, bucket, access_key_id, secret_access_key, owner_private_key,) = init_s3_credentials(
wallet, cluster, s3_bearer_rules_file=s3_bearer_rules_file, policy=policy
)
containers_list = list_containers(
wallet, shell=client_shell, endpoint=self.cluster.default_rpc_endpoint
)
@ -88,15 +85,32 @@ class TestS3GateBase(ClusterTestBase):
def delete_all_object_in_bucket(self, bucket):
versioning_status = s3_gate_bucket.get_bucket_versioning_status(self.s3_client, bucket)
if versioning_status == s3_gate_bucket.VersioningStatus.ENABLED.value:
# From versioned bucket we should delete all versions of all objects
# From versioned bucket we should delete all versions and delete markers of all objects
objects_versions = s3_gate_object.list_objects_versions_s3(self.s3_client, bucket)
if objects_versions:
s3_gate_object.delete_object_versions_s3(self.s3_client, bucket, objects_versions)
s3_gate_object.delete_object_versions_s3_without_dm(
self.s3_client, bucket, objects_versions
)
objects_delete_markers = s3_gate_object.list_objects_delete_markers_s3(
self.s3_client, bucket
)
if objects_delete_markers:
s3_gate_object.delete_object_versions_s3_without_dm(
self.s3_client, bucket, objects_delete_markers
)
else:
# From non-versioned bucket it's sufficient to delete objects by key
objects = s3_gate_object.list_objects_s3(self.s3_client, bucket)
if objects:
s3_gate_object.delete_objects_s3(self.s3_client, bucket, objects)
objects_delete_markers = s3_gate_object.list_objects_delete_markers_s3(
self.s3_client, bucket
)
if objects_delete_markers:
s3_gate_object.delete_object_versions_s3_without_dm(
self.s3_client, bucket, objects_delete_markers
)
# Delete the bucket itself
s3_gate_bucket.delete_bucket_s3(self.s3_client, bucket)
@ -110,12 +124,12 @@ def init_s3_credentials(
policy: Optional[dict] = None,
):
bucket = str(uuid.uuid4())
s3_bearer_rules = s3_bearer_rules_file or "robot/resources/files/s3_bearer_rules.json"
s3_bearer_rules = s3_bearer_rules_file or "pytest_tests/resources/files/s3_bearer_rules.json"
s3gate_node = cluster.s3gates[0]
gate_public_key = s3gate_node.get_wallet_public_key()
cmd = (
f"{NEOFS_AUTHMATE_EXEC} --debug --with-log --timeout {CREDENTIALS_CREATE_TIMEOUT} "
f"{FROSTFS_AUTHMATE_EXEC} --debug --with-log --timeout {CREDENTIALS_CREATE_TIMEOUT} "
f"issue-secret --wallet {wallet_path} --gate-public-key={gate_public_key} "
f"--peer {cluster.default_rpc_endpoint} --container-friendly-name {bucket} "
f"--bearer-rules {s3_bearer_rules}"

View file

@ -7,7 +7,8 @@ from typing import Optional
import allure
from botocore.exceptions import ClientError
from cli_helpers import log_command_execution
from pytest_tests.helpers.cli_helpers import log_command_execution
logger = logging.getLogger("NeoLogger")

View file

@ -7,10 +7,11 @@ from typing import Optional
import allure
import pytest
import urllib3
from aws_cli_client import AwsCliClient
from botocore.exceptions import ClientError
from cli_helpers import log_command_execution
from s3_gate_bucket import S3_SYNC_WAIT_TIME
from pytest_tests.helpers.aws_cli_client import AwsCliClient
from pytest_tests.helpers.cli_helpers import log_command_execution
from pytest_tests.steps.s3_gate_bucket import S3_SYNC_WAIT_TIME
##########################################################
# Disabling warnings on self-signed certificate which the
@ -85,6 +86,21 @@ def list_objects_versions_s3(s3_client, bucket: str, full_output: bool = False)
) from err
@allure.step("List objects delete markers S3")
def list_objects_delete_markers_s3(s3_client, bucket: str, full_output: bool = False) -> list:
try:
response = s3_client.list_object_versions(Bucket=bucket)
delete_markers = response.get("DeleteMarkers", [])
log_command_execution("S3 List objects delete markers result", response)
return response if full_output else delete_markers
except ClientError as err:
raise Exception(
f'Error Message: {err.response["Error"]["Message"]}\n'
f'Http status code: {err.response["ResponseMetadata"]["HTTPStatusCode"]}'
) from err
@allure.step("Put object S3")
def put_object_s3(s3_client, bucket: str, filepath: str, **kwargs):
filename = os.path.basename(filepath)
@ -185,6 +201,27 @@ def delete_object_versions_s3(s3_client, bucket: str, object_versions: list):
) from err
@allure.step("Delete object versions S3 without delete markers")
def delete_object_versions_s3_without_dm(s3_client, bucket: str, object_versions: list):
try:
# Delete objects without creating delete markers
for object_version in object_versions:
params = {
"Bucket": bucket,
"Key": object_version["Key"],
"VersionId": object_version["VersionId"],
}
response = s3_client.delete_object(**params)
log_command_execution("S3 Delete object result", response)
return response
except ClientError as err:
raise Exception(
f'Error Message: {err.response["Error"]["Message"]}\n'
f'Http status code: {err.response["ResponseMetadata"]["HTTPStatusCode"]}'
) from err
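Together with list_objects_delete_markers_s3 above, this helper lets a test purge a versioned bucket completely without minting new delete markers (which a plain delete_object call without a VersionId would do). A minimal sketch:

def purge_versioned_bucket(s3_client, bucket: str) -> None:
    versions = list_objects_versions_s3(s3_client, bucket)
    if versions:
        # deleting by explicit VersionId removes the version itself
        delete_object_versions_s3_without_dm(s3_client, bucket, versions)
    markers = list_objects_delete_markers_s3(s3_client, bucket)
    if markers:
        # delete markers carry VersionIds too and are removed the same way
        delete_object_versions_s3_without_dm(s3_client, bucket, markers)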
@allure.step("Put object ACL")
def put_object_acl_s3(
s3_client,

View file

@ -8,14 +8,13 @@ from enum import Enum
from typing import Any, Optional
import allure
import json_transformers
from common import ASSETS_DIR, NEOFS_CLI_EXEC, WALLET_CONFIG
from data_formatters import get_wallet_public_key
from json_transformers import encode_for_json
from neofs_testlib.cli import NeofsCli
from neofs_testlib.shell import Shell
from storage_object_info import StorageObjectInfo
from wallet import WalletFile
from frostfs_testlib.cli import FrostfsCli
from frostfs_testlib.shell import Shell
from frostfs_testlib.utils import json_utils, wallet_utils
from pytest_tests.helpers.storage_object_info import StorageObjectInfo
from pytest_tests.helpers.wallet import WalletFile
from pytest_tests.resources.common import ASSETS_DIR, FROSTFS_CLI_EXEC, WALLET_CONFIG
logger = logging.getLogger("NeoLogger")
@ -71,16 +70,16 @@ def generate_session_token(
file_path = os.path.join(tokens_dir, str(uuid.uuid4()))
pub_key_64 = get_wallet_public_key(session_wallet.path, session_wallet.password, "base64")
pub_key_64 = wallet_utils.get_wallet_public_key(
session_wallet.path, session_wallet.password, "base64"
)
lifetime = lifetime or Lifetime()
session_token = {
"body": {
"id": f"{base64.b64encode(uuid.uuid4().bytes).decode('utf-8')}",
"ownerID": {
"value": f"{json_transformers.encode_for_json(owner_wallet.get_address())}"
},
"ownerID": {"value": f"{json_utils.encode_for_json(owner_wallet.get_address())}"},
"lifetime": {
"exp": f"{lifetime.exp}",
"nbf": f"{lifetime.nbf}",
@ -124,8 +123,12 @@ def generate_container_session_token(
session = {
"container": {
"verb": verb.value,
"wildcard": cid is not None,
**({"containerID": {"value": f"{encode_for_json(cid)}"}} if cid is not None else {}),
"wildcard": cid is None,
**(
{"containerID": {"value": f"{json_utils.encode_for_json(cid)}"}}
if cid is not None
else {}
),
},
}
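The wildcard fix inverts the original condition: a container session token is a wildcard token exactly when no container ID is pinned to it. The two resulting bodies look like this (values are illustrative placeholders):

# wildcard token: applies to any container owned by the issuer
wildcard_body = {"container": {"verb": "<verb>", "wildcard": True}}

# pinned token: applies only to the given container
pinned_body = {
    "container": {
        "verb": "<verb>",
        "wildcard": False,
        "containerID": {"value": "<base64-encoded cid>"},
    }
}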
@ -165,8 +168,8 @@ def generate_object_session_token(
"object": {
"verb": verb.value,
"target": {
"container": {"value": encode_for_json(cid)},
"objects": [{"value": encode_for_json(oid)} for oid in oids],
"container": {"value": json_utils.encode_for_json(cid)},
"objects": [{"value": json_utils.encode_for_json(oid)} for oid in oids],
},
},
}
@ -249,8 +252,8 @@ def create_session_token(
The path to the generated session token file.
"""
session_token = os.path.join(os.getcwd(), ASSETS_DIR, str(uuid.uuid4()))
neofscli = NeofsCli(shell=shell, neofs_cli_exec_path=NEOFS_CLI_EXEC)
neofscli.session.create(
frostfscli = FrostfsCli(shell=shell, frostfs_cli_exec_path=FROSTFS_CLI_EXEC)
frostfscli.session.create(
rpc_endpoint=rpc_endpoint,
address=owner,
wallet=wallet_path,
@ -274,8 +277,10 @@ def sign_session_token(shell: Shell, session_token_file: str, wlt: WalletFile) -
The path to the signed token.
"""
signed_token_file = os.path.join(os.getcwd(), ASSETS_DIR, str(uuid.uuid4()))
neofscli = NeofsCli(shell=shell, neofs_cli_exec_path=NEOFS_CLI_EXEC, config_file=WALLET_CONFIG)
neofscli.util.sign_session_token(
frostfscli = FrostfsCli(
shell=shell, frostfs_cli_exec_path=FROSTFS_CLI_EXEC, config_file=WALLET_CONFIG
)
frostfscli.util.sign_session_token(
wallet=wlt.path, from_file=session_token_file, to_file=signed_token_file
)
return signed_token_file
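Taken together, the two helpers implement a create-then-sign flow. A sketch of the intended usage; parameter names are assumed from the call sites visible in this hunk, and wallet is a WalletFile:

token_file = create_session_token(
    shell=shell,
    owner=owner_address,  # assumed: the owner address forwarded to the CLI's --address
    wallet_path=wallet.path,
    rpc_endpoint=cluster.default_rpc_endpoint,
)
signed_token_file = sign_session_token(shell, token_file, wallet)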

View file

@ -3,13 +3,14 @@ from time import sleep
import allure
import pytest
from cluster import Cluster
from epoch import tick_epoch
from grpc_responses import OBJECT_ALREADY_REMOVED
from neofs_testlib.shell import Shell
from python_keywords.neofs_verbs import delete_object, get_object
from storage_object_info import StorageObjectInfo
from tombstone import verify_head_tombstone
from frostfs_testlib.resources.common import OBJECT_ALREADY_REMOVED
from frostfs_testlib.shell import Shell
from pytest_tests.helpers.cluster import Cluster
from pytest_tests.helpers.epoch import tick_epoch
from pytest_tests.helpers.frostfs_verbs import delete_object, get_object
from pytest_tests.helpers.storage_object_info import StorageObjectInfo
from pytest_tests.helpers.tombstone import verify_head_tombstone
logger = logging.getLogger("NeoLogger")

View file

@ -5,15 +5,16 @@ from typing import Optional
import allure
import pytest
from cluster import Cluster
from common import WALLET_CONFIG, WALLET_PASS
from file_helper import generate_file
from neofs_testlib.shell import Shell
from neofs_testlib.utils.wallet import init_wallet
from python_keywords.acl import EACLRole
from python_keywords.container import create_container
from python_keywords.neofs_verbs import put_object_to_random_node
from wellknown_acl import PUBLIC_ACL
from frostfs_testlib.resources.common import PUBLIC_ACL
from frostfs_testlib.shell import Shell
from frostfs_testlib.utils import wallet_utils
from pytest_tests.helpers.acl import EACLRole
from pytest_tests.helpers.cluster import Cluster
from pytest_tests.helpers.container import create_container
from pytest_tests.helpers.file_helper import generate_file
from pytest_tests.helpers.frostfs_verbs import put_object_to_random_node
from pytest_tests.resources.common import WALLET_CONFIG, WALLET_PASS
OBJECT_COUNT = 5
@ -41,7 +42,7 @@ def wallets(default_wallet, temp_directory, cluster: Cluster) -> Wallets:
os.path.join(temp_directory, f"{str(uuid.uuid4())}.json") for _ in range(2)
]
for other_wallet_path in other_wallets_paths:
init_wallet(other_wallet_path, WALLET_PASS)
wallet_utils.init_wallet(other_wallet_path, WALLET_PASS)
ir_node = cluster.ir_nodes[0]
storage_node = cluster.storage_nodes[0]
@ -68,8 +69,8 @@ def wallets(default_wallet, temp_directory, cluster: Cluster) -> Wallets:
@pytest.fixture(scope="module")
def file_path():
yield generate_file()
def file_path(simple_object_size):
yield generate_file(simple_object_size)
@pytest.fixture(scope="function")

View file

@ -5,12 +5,10 @@ from typing import Optional
import allure
import pytest
from cluster_test_base import ClusterTestBase
from common import ASSETS_DIR, COMPLEX_OBJ_SIZE, FREE_STORAGE, SIMPLE_OBJ_SIZE, WALLET_PASS
from file_helper import generate_file
from grpc_responses import OBJECT_ACCESS_DENIED, OBJECT_NOT_FOUND
from neofs_testlib.utils.wallet import init_wallet
from python_keywords.acl import (
from frostfs_testlib.resources.common import OBJECT_ACCESS_DENIED, OBJECT_NOT_FOUND
from frostfs_testlib.utils import wallet_utils
from pytest_tests.helpers.acl import (
EACLAccess,
EACLOperation,
EACLRole,
@ -19,10 +17,11 @@ from python_keywords.acl import (
form_bearertoken_file,
set_eacl,
)
from python_keywords.container import create_container
from python_keywords.neofs_verbs import put_object_to_random_node
from python_keywords.payment_neogo import deposit_gas, transfer_gas
from python_keywords.storage_group import (
from pytest_tests.helpers.container import create_container
from pytest_tests.helpers.file_helper import generate_file
from pytest_tests.helpers.frostfs_verbs import put_object_to_random_node
from pytest_tests.helpers.payment_neogo import deposit_gas, transfer_gas
from pytest_tests.helpers.storage_group import (
delete_storagegroup,
get_storagegroup,
list_storagegroup,
@ -30,6 +29,8 @@ from python_keywords.storage_group import (
verify_get_storage_group,
verify_list_storage_group,
)
from pytest_tests.resources.common import ASSETS_DIR, FREE_STORAGE, WALLET_PASS
from pytest_tests.steps.cluster_test_base import ClusterTestBase
logger = logging.getLogger("NeoLogger")
deposit = 30
@ -37,7 +38,7 @@ deposit = 30
@pytest.mark.parametrize(
"object_size",
[SIMPLE_OBJ_SIZE, COMPLEX_OBJ_SIZE],
[pytest.lazy_fixture("simple_object_size"), pytest.lazy_fixture("complex_object_size")],
ids=["simple object", "complex object"],
)
@pytest.mark.sanity
@ -48,7 +49,7 @@ class TestStorageGroup(ClusterTestBase):
def prepare_two_wallets(self, default_wallet):
self.main_wallet = default_wallet
self.other_wallet = os.path.join(os.getcwd(), ASSETS_DIR, f"{str(uuid.uuid4())}.json")
init_wallet(self.other_wallet, WALLET_PASS)
wallet_utils.init_wallet(self.other_wallet, WALLET_PASS)
if not FREE_STORAGE:
main_chain = self.cluster.main_chain_nodes[0]
deposit = 30
@ -68,7 +69,7 @@ class TestStorageGroup(ClusterTestBase):
)
@allure.title("Test Storage Group in Private Container")
def test_storagegroup_basic_private_container(self, object_size):
def test_storagegroup_basic_private_container(self, object_size, max_object_size):
cid = create_container(
self.main_wallet, shell=self.shell, endpoint=self.cluster.default_rpc_endpoint
)
@ -88,6 +89,7 @@ class TestStorageGroup(ClusterTestBase):
cid=cid,
obj_list=objects,
object_size=object_size,
max_object_size=max_object_size,
)
self.expect_failure_for_storagegroup_operations(
wallet=self.other_wallet,
@ -100,10 +102,11 @@ class TestStorageGroup(ClusterTestBase):
cid=cid,
obj_list=objects,
object_size=object_size,
max_object_size=max_object_size,
)
@allure.title("Test Storage Group in Public Container")
def test_storagegroup_basic_public_container(self, object_size):
def test_storagegroup_basic_public_container(self, object_size, max_object_size):
cid = create_container(
self.main_wallet,
basic_acl="public-read-write",
@ -120,22 +123,25 @@ class TestStorageGroup(ClusterTestBase):
cid=cid,
obj_list=objects,
object_size=object_size,
max_object_size=max_object_size,
)
self.expect_success_for_storagegroup_operations(
wallet=self.other_wallet,
cid=cid,
obj_list=objects,
object_size=object_size,
max_object_size=max_object_size,
)
self.storagegroup_operations_by_system_ro_container(
wallet=self.main_wallet,
cid=cid,
obj_list=objects,
object_size=object_size,
max_object_size=max_object_size,
)
@allure.title("Test Storage Group in Read-Only Container")
def test_storagegroup_basic_ro_container(self, object_size):
def test_storagegroup_basic_ro_container(self, object_size, max_object_size):
cid = create_container(
self.main_wallet,
basic_acl="public-read",
@ -152,6 +158,7 @@ class TestStorageGroup(ClusterTestBase):
cid=cid,
obj_list=objects,
object_size=object_size,
max_object_size=max_object_size,
)
self.storagegroup_operations_by_other_ro_container(
owner_wallet=self.main_wallet,
@ -159,16 +166,18 @@ class TestStorageGroup(ClusterTestBase):
cid=cid,
obj_list=objects,
object_size=object_size,
max_object_size=max_object_size,
)
self.storagegroup_operations_by_system_ro_container(
wallet=self.main_wallet,
cid=cid,
obj_list=objects,
object_size=object_size,
max_object_size=max_object_size,
)
@allure.title("Test Storage Group with Bearer Allow")
def test_storagegroup_bearer_allow(self, object_size):
def test_storagegroup_bearer_allow(self, object_size, max_object_size):
cid = create_container(
self.main_wallet,
basic_acl="eacl-public-read-write",
@ -185,6 +194,7 @@ class TestStorageGroup(ClusterTestBase):
cid=cid,
obj_list=objects,
object_size=object_size,
max_object_size=max_object_size,
)
storage_group = put_storagegroup(
self.shell, self.cluster.default_rpc_endpoint, self.main_wallet, cid, objects
@ -219,6 +229,7 @@ class TestStorageGroup(ClusterTestBase):
cid=cid,
obj_list=objects,
object_size=object_size,
max_object_size=max_object_size,
bearer=bearer_file,
)
@ -243,6 +254,7 @@ class TestStorageGroup(ClusterTestBase):
with allure.step("Tick two epochs"):
for _ in range(2):
self.tick_epoch()
self.wait_for_epochs_align()
with pytest.raises(Exception, match=OBJECT_NOT_FOUND):
get_storagegroup(
shell=self.shell,
@ -259,6 +271,7 @@ class TestStorageGroup(ClusterTestBase):
cid: str,
obj_list: list,
object_size: int,
max_object_size: int,
bearer: Optional[str] = None,
):
"""
@ -285,6 +298,7 @@ class TestStorageGroup(ClusterTestBase):
gid=storage_group,
obj_list=obj_list,
object_size=object_size,
max_object_size=max_object_size,
bearer=bearer,
)
delete_storagegroup(
@ -342,6 +356,7 @@ class TestStorageGroup(ClusterTestBase):
cid: str,
obj_list: list,
object_size: int,
max_object_size: int,
):
storage_group = put_storagegroup(
self.shell, self.cluster.default_rpc_endpoint, owner_wallet, cid, obj_list
@ -369,6 +384,7 @@ class TestStorageGroup(ClusterTestBase):
gid=storage_group,
obj_list=obj_list,
object_size=object_size,
max_object_size=max_object_size,
)
with pytest.raises(Exception, match=OBJECT_ACCESS_DENIED):
delete_storagegroup(
@ -381,7 +397,12 @@ class TestStorageGroup(ClusterTestBase):
@allure.step("Run Storage Group Operations On Systems's Behalf In RO Container")
def storagegroup_operations_by_system_ro_container(
self, wallet: str, cid: str, obj_list: list, object_size: int
self,
wallet: str,
cid: str,
obj_list: list,
object_size: int,
max_object_size: int,
):
"""
In this function we create a Storage Group on behalf of the Inner Ring's key
@ -438,6 +459,7 @@ class TestStorageGroup(ClusterTestBase):
gid=storage_group,
obj_list=obj_list,
object_size=object_size,
max_object_size=max_object_size,
wallet_config=ir_wallet_config,
)
with pytest.raises(Exception, match=OBJECT_ACCESS_DENIED):

View file

@ -1,15 +1,16 @@
import allure
import pytest
from cluster_test_base import ClusterTestBase
from python_keywords.acl import EACLRole
from python_keywords.container import create_container
from python_keywords.container_access import (
from frostfs_testlib.resources.common import PRIVATE_ACL_F, PUBLIC_ACL_F, READONLY_ACL_F
from pytest_tests.helpers.acl import EACLRole
from pytest_tests.helpers.container import create_container
from pytest_tests.helpers.container_access import (
check_full_access_to_container,
check_no_access_to_container,
check_read_only_container,
)
from python_keywords.neofs_verbs import put_object_to_random_node
from wellknown_acl import PRIVATE_ACL_F, PUBLIC_ACL_F, READONLY_ACL_F
from pytest_tests.helpers.frostfs_verbs import put_object_to_random_node
from pytest_tests.steps.cluster_test_base import ClusterTestBase
@pytest.mark.sanity

View file

@ -1,7 +1,7 @@
import allure
import pytest
from cluster_test_base import ClusterTestBase
from python_keywords.acl import (
from pytest_tests.helpers.acl import (
EACLAccess,
EACLOperation,
EACLRole,
@ -11,11 +11,12 @@ from python_keywords.acl import (
set_eacl,
wait_for_cache_expired,
)
from python_keywords.container_access import (
from pytest_tests.helpers.container_access import (
check_custom_access_to_container,
check_full_access_to_container,
check_no_access_to_container,
)
from pytest_tests.steps.cluster_test_base import ClusterTestBase
@pytest.mark.sanity
@ -24,7 +25,9 @@ from python_keywords.container_access import (
class TestACLBearer(ClusterTestBase):
@pytest.mark.parametrize("role", [EACLRole.USER, EACLRole.OTHERS])
def test_bearer_token_operations(self, wallets, eacl_container_with_objects, role):
allure.dynamic.title(f"Testcase to validate NeoFS operations with {role.value} BearerToken")
allure.dynamic.title(
f"Testcase to validate FrostFS operations with {role.value} BearerToken"
)
cid, objects_oids, file_path = eacl_container_with_objects
user_wallet = wallets.get_wallet()
deny_wallet = wallets.get_wallet(role)

View file

@ -1,9 +1,8 @@
import allure
import pytest
from cluster_test_base import ClusterTestBase
from failover_utils import wait_object_replication
from neofs_testlib.shell import Shell
from python_keywords.acl import (
from frostfs_testlib.resources.common import PUBLIC_ACL
from pytest_tests.helpers.acl import (
EACLAccess,
EACLOperation,
EACLRole,
@ -12,14 +11,15 @@ from python_keywords.acl import (
set_eacl,
wait_for_cache_expired,
)
from python_keywords.container import create_container
from python_keywords.container_access import (
from pytest_tests.helpers.container import create_container
from pytest_tests.helpers.container_access import (
check_full_access_to_container,
check_no_access_to_container,
)
from python_keywords.neofs_verbs import put_object_to_random_node
from python_keywords.node_management import drop_object
from python_keywords.object_access import (
from pytest_tests.helpers.failover_utils import wait_object_replication
from pytest_tests.helpers.frostfs_verbs import put_object_to_random_node
from pytest_tests.helpers.node_management import drop_object
from pytest_tests.helpers.object_access import (
can_delete_object,
can_get_head_object,
can_get_object,
@ -28,7 +28,7 @@ from python_keywords.object_access import (
can_put_object,
can_search_object,
)
from wellknown_acl import PUBLIC_ACL
from pytest_tests.steps.cluster_test_base import ClusterTestBase
@pytest.mark.sanity
@ -74,7 +74,7 @@ class TestEACLContainer(ClusterTestBase):
not_deny_role_wallet = user_wallet if deny_role == EACLRole.OTHERS else other_wallet
deny_role_str = "all others" if deny_role == EACLRole.OTHERS else "user"
not_deny_role_str = "user" if deny_role == EACLRole.OTHERS else "all others"
allure.dynamic.title(f"Testcase to deny NeoFS operations for {deny_role_str}.")
allure.dynamic.title(f"Testcase to deny FrostFS operations for {deny_role_str}.")
cid, object_oids, file_path = eacl_container_with_objects
with allure.step(f"Deny all operations for {deny_role_str} via eACL"):
@ -148,7 +148,7 @@ class TestEACLContainer(ClusterTestBase):
cluster=self.cluster,
)
@allure.title("Testcase to allow NeoFS operations for only one other pubkey.")
@allure.title("Testcase to allow FrostFS operations for only one other pubkey.")
def test_extended_acl_deny_all_operations_exclude_pubkey(
self, wallets, eacl_container_with_objects
):
@ -209,7 +209,7 @@ class TestEACLContainer(ClusterTestBase):
cluster=self.cluster,
)
@allure.title("Testcase to validate NeoFS replication with eACL deny rules.")
@allure.title("Testcase to validate FrostFS replication with eACL deny rules.")
def test_extended_acl_deny_replication(
self,
wallets,
@ -251,7 +251,7 @@ class TestEACLContainer(ClusterTestBase):
storage_nodes,
)
@allure.title("Testcase to validate NeoFS system operations with extended ACL")
@allure.title("Testcase to validate FrostFS system operations with extended ACL")
def test_extended_actions_system(self, wallets, eacl_container_with_objects):
user_wallet = wallets.get_wallet()
ir_wallet, storage_wallet = wallets.get_wallets_list(role=EACLRole.SYSTEM)[:2]

View file

@ -1,7 +1,8 @@
import allure
import pytest
from cluster_test_base import ClusterTestBase
from python_keywords.acl import (
from frostfs_testlib.resources.common import PUBLIC_ACL
from pytest_tests.helpers.acl import (
EACLAccess,
EACLFilter,
EACLFilters,
@ -15,14 +16,14 @@ from python_keywords.acl import (
set_eacl,
wait_for_cache_expired,
)
from python_keywords.container import create_container, delete_container
from python_keywords.container_access import (
from pytest_tests.helpers.container import create_container, delete_container
from pytest_tests.helpers.container_access import (
check_full_access_to_container,
check_no_access_to_container,
)
from python_keywords.neofs_verbs import put_object_to_random_node
from python_keywords.object_access import can_get_head_object, can_get_object, can_put_object
from wellknown_acl import PUBLIC_ACL
from pytest_tests.helpers.frostfs_verbs import put_object_to_random_node
from pytest_tests.helpers.object_access import can_get_head_object, can_get_object, can_put_object
from pytest_tests.steps.cluster_test_base import ClusterTestBase
@pytest.mark.sanity
@ -128,7 +129,7 @@ class TestEACLFilters(ClusterTestBase):
"match_type", [EACLMatchType.STRING_EQUAL, EACLMatchType.STRING_NOT_EQUAL]
)
def test_extended_acl_filters_request(self, wallets, eacl_container_with_objects, match_type):
allure.dynamic.title(f"Validate NeoFS operations with request filter: {match_type.name}")
allure.dynamic.title(f"Validate FrostFS operations with request filter: {match_type.name}")
user_wallet = wallets.get_wallet()
other_wallet = wallets.get_wallet(EACLRole.OTHERS)
(
@ -243,7 +244,7 @@ class TestEACLFilters(ClusterTestBase):
self, wallets, eacl_container_with_objects, match_type
):
allure.dynamic.title(
f"Validate NeoFS operations with deny user headers filter: {match_type.name}"
f"Validate FrostFS operations with deny user headers filter: {match_type.name}"
)
user_wallet = wallets.get_wallet()
other_wallet = wallets.get_wallet(EACLRole.OTHERS)
@ -425,7 +426,7 @@ class TestEACLFilters(ClusterTestBase):
self, wallets, eacl_container_with_objects, match_type
):
allure.dynamic.title(
"Testcase to validate NeoFS operation with allow eACL user headers filters:"
"Testcase to validate FrostFS operation with allow eACL user headers filters:"
f"{match_type.name}"
)
user_wallet = wallets.get_wallet()

View file

@ -1,6 +1,5 @@
import logging
import os
import re
import shutil
import uuid
from datetime import datetime
@ -8,43 +7,56 @@ from datetime import datetime
import allure
import pytest
import yaml
from binary_version_helper import get_local_binaries_versions, get_remote_binaries_versions
from cluster import Cluster
from common import (
from frostfs_testlib.hosting import Hosting
from frostfs_testlib.reporter import AllureHandler, get_reporter
from frostfs_testlib.shell import LocalShell, Shell
from frostfs_testlib.utils import wallet_utils
from pytest_tests.helpers import binary_version, env_properties
from pytest_tests.helpers.cluster import Cluster
from pytest_tests.helpers.frostfs_verbs import get_netmap_netinfo
from pytest_tests.helpers.k6 import LoadParams
from pytest_tests.helpers.node_management import storage_node_healthcheck
from pytest_tests.helpers.payment_neogo import deposit_gas, transfer_gas
from pytest_tests.helpers.wallet import WalletFactory
from pytest_tests.resources.common import (
ASSETS_DIR,
COMPLEX_OBJECT_CHUNKS_COUNT,
COMPLEX_OBJECT_TAIL_SIZE,
FREE_STORAGE,
HOSTING_CONFIG_FILE,
SIMPLE_OBJECT_SIZE,
STORAGE_NODE_SERVICE_NAME_REGEX,
WALLET_PASS,
)
from pytest_tests.resources.load_params import (
BACKGROUND_LOAD_MAX_TIME,
BACKGROUND_OBJ_SIZE,
BACKGROUND_READERS_COUNT,
BACKGROUND_WRITERS_COUNT,
FREE_STORAGE,
HOSTING_CONFIG_FILE,
LOAD_NODE_SSH_PRIVATE_KEY_PATH,
LOAD_NODE_SSH_USER,
LOAD_NODES,
STORAGE_NODE_SERVICE_NAME_REGEX,
WALLET_PASS,
)
from env_properties import save_env_properties
from k6 import LoadParams
from load import get_services_endpoints, prepare_k6_instances
from neofs_testlib.hosting import Hosting
from neofs_testlib.reporter import AllureHandler, get_reporter
from neofs_testlib.shell import LocalShell, Shell
from neofs_testlib.utils.wallet import init_wallet
from payment_neogo import deposit_gas, transfer_gas
from pytest import FixtureRequest
from python_keywords.node_management import storage_node_healthcheck
from helpers.wallet import WalletFactory
from pytest_tests.steps.load import get_services_endpoints, prepare_k6_instances
logger = logging.getLogger("NeoLogger")
# Add the logs check test even if it does not match the mark selectors
def pytest_configure(config: pytest.Config):
markers = config.option.markexpr
if markers != "":
config.option.markexpr = f"logs_after_session or ({markers})"
# pytest hook. Do not rename
def pytest_collection_modifyitems(items):
# Make network tests last based on @pytest.mark.node_mgmt
# Run node management tests (@pytest.mark.node_mgmt) last and the logs check test at the very end
def priority(item: pytest.Item) -> int:
is_node_mgmt_test = item.get_closest_marker("node_mgmt")
return 0 if not is_node_mgmt_test else 1
is_node_mgmt_test = 1 if item.get_closest_marker("node_mgmt") else 0
is_logs_check_test = 100 if item.get_closest_marker("logs_after_session") else 0
return is_node_mgmt_test + is_logs_check_test
items.sort(key=lambda item: priority(item))
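The two hooks cooperate: pytest_configure widens any user-supplied -m expression so the log-analysis test is always collected, and the priority sort keeps node-management tests after regular ones with the logs check at the very end. A minimal model of the resulting order (illustrative names):

# pytest -m "grpc_api" is effectively rewritten to -m "logs_after_session or (grpc_api)"
collected = [("test_logs", 100), ("test_container", 0), ("test_node_mgmt", 1)]
collected.sort(key=lambda item: item[1])  # stable sort, like items.sort above
assert [name for name, _ in collected] == ["test_container", "test_node_mgmt", "test_logs"]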
@ -82,24 +94,49 @@ def require_multiple_hosts(hosting: Hosting):
yield
@pytest.fixture(scope="session")
def max_object_size(cluster: Cluster, client_shell: Shell) -> int:
storage_node = cluster.storage_nodes[0]
net_info = get_netmap_netinfo(
wallet=storage_node.get_wallet_path(),
wallet_config=storage_node.get_wallet_config_path(),
endpoint=storage_node.get_rpc_endpoint(),
shell=client_shell,
)
yield net_info["maximum_object_size"]
@pytest.fixture(scope="session")
def simple_object_size(max_object_size: int) -> int:
yield int(SIMPLE_OBJECT_SIZE) if int(SIMPLE_OBJECT_SIZE) < max_object_size else max_object_size
@pytest.fixture(scope="session")
def complex_object_size(max_object_size: int) -> int:
return max_object_size * int(COMPLEX_OBJECT_CHUNKS_COUNT) + int(COMPLEX_OBJECT_TAIL_SIZE)
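A worked example with the defaults from common.py (SIMPLE_OBJECT_SIZE=1000, COMPLEX_OBJECT_CHUNKS_COUNT=3, COMPLEX_OBJECT_TAIL_SIZE=1000) and an assumed node-reported maximum object size of 65536 bytes:

max_object_size = 65536  # assumed value reported by get_netmap_netinfo
simple_object_size = min(1000, max_object_size)   # 1000: fits in a single object
complex_object_size = max_object_size * 3 + 1000  # 197608: guaranteed to split into chunks
assert (simple_object_size, complex_object_size) == (1000, 197608)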
@pytest.fixture(scope="session")
def wallet_factory(temp_directory: str, client_shell: Shell, cluster: Cluster) -> WalletFactory:
return WalletFactory(temp_directory, client_shell, cluster)
@pytest.fixture(scope="session")
def cluster(hosting: Hosting) -> Cluster:
yield Cluster(hosting)
def cluster(temp_directory: str, hosting: Hosting) -> Cluster:
cluster = Cluster(hosting)
if cluster.is_local_devevn():
cluster.create_wallet_configs(hosting)
yield cluster
@pytest.fixture(scope="session", autouse=True)
@allure.title("Check binary versions")
def check_binary_versions(request, hosting: Hosting, client_shell: Shell):
local_versions = get_local_binaries_versions(client_shell)
remote_versions = get_remote_binaries_versions(hosting)
local_versions = binary_version.get_local_binaries_versions(client_shell)
remote_versions = binary_version.get_remote_binaries_versions(hosting)
all_versions = {**local_versions, **remote_versions}
save_env_properties(request.config, all_versions)
env_properties.save_env_properties(request.config, all_versions)
@pytest.fixture(scope="session")
@ -116,41 +153,16 @@ def temp_directory():
shutil.rmtree(full_path)
@pytest.fixture(scope="function", autouse=True)
@allure.title("Analyze logs")
def analyze_logs(temp_directory: str, hosting: Hosting, request: FixtureRequest):
start_time = datetime.utcnow()
yield
end_time = datetime.utcnow()
# Skip tests where we expect failures in logs
if request.node.get_closest_marker("no_log_analyze"):
with allure.step("Skip analyze logs due to no_log_analyze mark"):
return
# Test name may exceed os NAME_MAX (255 bytes), so we use test start datetime instead
start_time_str = start_time.strftime("%Y_%m_%d_%H_%M_%S_%f")
logs_dir = os.path.join(temp_directory, f"logs_{start_time_str}")
dump_logs(hosting, logs_dir, start_time, end_time)
check_logs(logs_dir)
@allure.step("[Autouse/Session] Test session start time")
@pytest.fixture(scope="session", autouse=True)
@allure.title("Collect logs")
def collect_logs(temp_directory, hosting: Hosting):
def session_start_time():
start_time = datetime.utcnow()
yield
end_time = datetime.utcnow()
# Dump logs to temp directory (because they might be too large to keep in RAM)
logs_dir = os.path.join(temp_directory, "logs")
dump_logs(hosting, logs_dir, start_time, end_time)
attach_logs(logs_dir)
return start_time
@pytest.fixture(scope="session", autouse=True)
@allure.title("Run health check for all storage nodes")
def run_health_check(collect_logs, cluster: Cluster):
def run_health_check(session_start_time, cluster: Cluster):
failed_nodes = []
for node in cluster.storage_nodes:
health_check = storage_node_healthcheck(node)
@ -162,7 +174,7 @@ def run_health_check(collect_logs, cluster: Cluster):
@pytest.fixture(scope="session")
def background_grpc_load(client_shell, default_wallet):
def background_grpc_load(client_shell: Shell, hosting: Hosting):
registry_file = os.path.join("/tmp/", f"{str(uuid.uuid4())}.bolt")
prepare_file = os.path.join("/tmp/", f"{str(uuid.uuid4())}.json")
allure.dynamic.title(
@ -230,7 +242,7 @@ def background_grpc_load(client_shell, default_wallet):
@allure.title("Prepare wallet and deposit")
def default_wallet(client_shell: Shell, temp_directory: str, cluster: Cluster):
wallet_path = os.path.join(os.getcwd(), ASSETS_DIR, f"{str(uuid.uuid4())}.json")
init_wallet(wallet_path, WALLET_PASS)
wallet_utils.init_wallet(wallet_path, WALLET_PASS)
allure.attach.file(wallet_path, os.path.basename(wallet_path), allure.attachment_type.JSON)
if not FREE_STORAGE:
@ -252,39 +264,3 @@ def default_wallet(client_shell: Shell, temp_directory: str, cluster: Cluster):
)
return wallet_path
def check_logs(logs_dir: str):
problem_pattern = r"\Wpanic\W|\Woom\W"
log_file_paths = []
for directory_path, _, file_names in os.walk(logs_dir):
log_file_paths += [
os.path.join(directory_path, file_name)
for file_name in file_names
if re.match(r"\.(txt|log)", os.path.splitext(file_name)[-1], flags=re.IGNORECASE)
]
logs_with_problem = []
for file_path in log_file_paths:
with open(file_path, "r") as log_file:
if re.search(problem_pattern, log_file.read(), flags=re.IGNORECASE):
attach_logs(logs_dir)
logs_with_problem.append(file_path)
if logs_with_problem:
raise pytest.fail(f"System logs {', '.join(logs_with_problem)} contain critical errors")
def dump_logs(hosting: Hosting, logs_dir: str, since: datetime, until: datetime) -> None:
# Dump logs to temp directory (because they might be too large to keep in RAM)
os.makedirs(logs_dir)
for host in hosting.hosts:
host.dump_logs(logs_dir, since=since, until=until)
def attach_logs(logs_dir: str) -> None:
# Zip all files and attach to Allure because it is more convenient to download a single
# zip with all logs rather than mess with individual log files per service or node
logs_zip_file_path = shutil.make_archive(logs_dir, "zip", logs_dir)
allure.attach.file(logs_zip_file_path, name="logs.zip", extension="zip")

View file

@ -2,8 +2,9 @@ import json
import allure
import pytest
from epoch import tick_epoch
from python_keywords.container import (
from frostfs_testlib.resources.common import PRIVATE_ACL_F
from pytest_tests.helpers.container import (
create_container,
delete_container,
get_container,
@ -11,10 +12,8 @@ from python_keywords.container import (
wait_for_container_creation,
wait_for_container_deletion,
)
from utility import placement_policy_from_container
from wellknown_acl import PRIVATE_ACL_F
from steps.cluster_test_base import ClusterTestBase
from pytest_tests.helpers.utility import placement_policy_from_container
from pytest_tests.steps.cluster_test_base import ClusterTestBase
@pytest.mark.container
@ -81,7 +80,7 @@ class TestContainer(ClusterTestBase):
delete_container(
wallet, cid, shell=self.shell, endpoint=self.cluster.default_rpc_endpoint
)
tick_epoch(self.shell, self.cluster)
self.tick_epoch()
wait_for_container_deletion(
wallet, cid, shell=self.shell, endpoint=self.cluster.default_rpc_endpoint
)
@ -121,7 +120,7 @@ class TestContainer(ClusterTestBase):
delete_container(
wallet, cid, shell=self.shell, endpoint=self.cluster.default_rpc_endpoint
)
tick_epoch(self.shell, self.cluster)
self.tick_epoch()
wait_for_container_deletion(
wallet, cid, shell=self.shell, endpoint=self.cluster.default_rpc_endpoint
)

View file

@ -4,15 +4,18 @@ from time import sleep
import allure
import pytest
from cluster import StorageNode
from failover_utils import wait_all_storage_nodes_returned, wait_object_replication
from file_helper import generate_file, get_file_hash
from iptables_helper import IpTablesHelper
from python_keywords.container import create_container
from python_keywords.neofs_verbs import get_object, put_object_to_random_node
from wellknown_acl import PUBLIC_ACL
from frostfs_testlib.resources.common import PUBLIC_ACL
from steps.cluster_test_base import ClusterTestBase
from pytest_tests.helpers.cluster import StorageNode
from pytest_tests.helpers.container import create_container
from pytest_tests.helpers.failover_utils import (
wait_all_storage_nodes_returned,
wait_object_replication,
)
from pytest_tests.helpers.file_helper import generate_file, get_file_hash
from pytest_tests.helpers.frostfs_verbs import get_object, put_object_to_random_node
from pytest_tests.helpers.iptables_helper import IpTablesHelper
from pytest_tests.steps.cluster_test_base import ClusterTestBase
logger = logging.getLogger("NeoLogger")
STORAGE_NODE_COMMUNICATION_PORT = "8080"
@ -38,7 +41,9 @@ class TestFailoverNetwork(ClusterTestBase):
wait_all_storage_nodes_returned(self.cluster)
@allure.title("Block Storage node traffic")
def test_block_storage_node_traffic(self, default_wallet, require_multiple_hosts):
def test_block_storage_node_traffic(
self, default_wallet, require_multiple_hosts, simple_object_size
):
"""
Block storage nodes traffic using iptables and wait for replication for objects.
"""
@ -47,7 +52,7 @@ class TestFailoverNetwork(ClusterTestBase):
wakeup_node_timeout = 10 # timeout to let nodes detect that traffic has blocked
nodes_to_block_count = 2
source_file_path = generate_file()
source_file_path = generate_file(simple_object_size)
cid = create_container(
wallet,
shell=self.shell,

View file

@ -2,23 +2,26 @@ import logging
import allure
import pytest
from cluster import Cluster, StorageNode
from failover_utils import wait_all_storage_nodes_returned, wait_object_replication
from file_helper import generate_file, get_file_hash
from neofs_testlib.hosting import Host
from neofs_testlib.shell import CommandOptions
from python_keywords.container import create_container
from python_keywords.neofs_verbs import get_object, put_object_to_random_node
from wellknown_acl import PUBLIC_ACL
from frostfs_testlib.hosting import Host
from frostfs_testlib.resources.common import PUBLIC_ACL
from frostfs_testlib.shell import CommandOptions
from steps.cluster_test_base import ClusterTestBase
from pytest_tests.helpers.cluster import Cluster, StorageNode
from pytest_tests.helpers.container import create_container
from pytest_tests.helpers.failover_utils import (
wait_all_storage_nodes_returned,
wait_object_replication,
)
from pytest_tests.helpers.file_helper import generate_file, get_file_hash
from pytest_tests.helpers.frostfs_verbs import get_object, put_object_to_random_node
from pytest_tests.steps.cluster_test_base import ClusterTestBase
logger = logging.getLogger("NeoLogger")
stopped_nodes: list[StorageNode] = []
@pytest.fixture(scope="function", autouse=True)
@allure.step("Return all stopped hosts")
@pytest.fixture(scope="function", autouse=True)
def after_run_return_all_stopped_hosts(cluster: Cluster):
yield
return_stopped_hosts(cluster)
@ -47,14 +50,11 @@ class TestFailoverStorage(ClusterTestBase):
@pytest.mark.parametrize("hard_reboot", [True, False])
@pytest.mark.failover_reboot
def test_lose_storage_node_host(
self,
default_wallet,
hard_reboot: bool,
require_multiple_hosts,
self, default_wallet, hard_reboot: bool, require_multiple_hosts, simple_object_size
):
wallet = default_wallet
placement_rule = "REP 2 IN X CBF 2 SELECT 2 FROM * AS X"
source_file_path = generate_file()
source_file_path = generate_file(simple_object_size)
cid = create_container(
wallet,
shell=self.shell,
@ -90,7 +90,7 @@ class TestFailoverStorage(ClusterTestBase):
)
assert get_file_hash(source_file_path) == get_file_hash(got_file_path)
with allure.step(f"Return all hosts"):
with allure.step("Return all hosts"):
return_stopped_hosts(self.cluster)
with allure.step("Check object data is not corrupted"):
@ -106,14 +106,11 @@ class TestFailoverStorage(ClusterTestBase):
@pytest.mark.parametrize("sequence", [True, False])
@pytest.mark.failover_panic
def test_panic_storage_node_host(
self,
default_wallet,
require_multiple_hosts,
sequence: bool,
self, default_wallet, require_multiple_hosts, sequence: bool, simple_object_size
):
wallet = default_wallet
placement_rule = "REP 2 IN X CBF 2 SELECT 2 FROM * AS X"
source_file_path = generate_file()
source_file_path = generate_file(simple_object_size)
cid = create_container(
wallet,
shell=self.shell,
@ -129,7 +126,7 @@ class TestFailoverStorage(ClusterTestBase):
cid, oid, 2, shell=self.shell, nodes=self.cluster.storage_nodes
)
allure.attach(
"\n".join(nodes),
"\n".join([str(node) for node in nodes]),
"Current nodes with object",
allure.attachment_type.TEXT,
)
@ -157,7 +154,7 @@ class TestFailoverStorage(ClusterTestBase):
)
allure.attach(
"\n".join(new_nodes),
"\n".join([str(new_node) for new_node in new_nodes]),
f"Nodes with object after {node} fail",
allure.attachment_type.TEXT,
)
@ -167,7 +164,7 @@ class TestFailoverStorage(ClusterTestBase):
cid, oid, 2, shell=self.shell, nodes=self.cluster.storage_nodes
)
allure.attach(
"\n".join(new_nodes),
"\n".join([str(new_node) for new_node in new_nodes]),
"Nodes with object after nodes fail",
allure.attachment_type.TEXT,
)

View file

@ -1,50 +1,78 @@
from enum import Enum
import allure
import pytest
from common import (
from frostfs_testlib.hosting import Hosting
from pytest_tests.helpers.k6 import LoadParams
from pytest_tests.resources.common import (
HTTP_GATE_SERVICE_NAME_REGEX,
LOAD_NODE_SSH_PRIVATE_KEY_PATH,
LOAD_NODE_SSH_USER,
LOAD_NODES,
S3_GATE_SERVICE_NAME_REGEX,
STORAGE_NODE_SERVICE_NAME_REGEX,
)
from k6 import LoadParams
from load import (
from pytest_tests.resources.load_params import (
CONTAINER_PLACEMENT_POLICY,
CONTAINERS_COUNT,
DELETERS,
LOAD_NODE_SSH_PRIVATE_KEY_PATH,
LOAD_NODE_SSH_USER,
LOAD_NODES,
LOAD_NODES_COUNT,
LOAD_TIME,
LOAD_TYPE,
OBJ_COUNT,
OBJ_SIZE,
OUT_FILE,
READERS,
STORAGE_NODE_COUNT,
WRITERS,
)
from pytest_tests.steps.cluster_test_base import ClusterTestBase
from pytest_tests.steps.load import (
clear_cache_and_data,
get_services_endpoints,
init_s3_client,
multi_node_k6_run,
prepare_k6_instances,
start_stopped_nodes,
stop_unused_nodes,
)
from neofs_testlib.hosting import Hosting
class LoadTime(Enum):
EXPECTED_MAXIMUM = 200
PMI_EXPECTATION = 900
CONTAINERS_COUNT = 1
OBJ_COUNT = 3
ENDPOINTS_ATTRIBUTES = {
"http": {"regex": HTTP_GATE_SERVICE_NAME_REGEX, "endpoint_attribute": "endpoint"},
"grpc": {"regex": STORAGE_NODE_SERVICE_NAME_REGEX, "endpoint_attribute": "rpc_endpoint"},
"s3": {"regex": S3_GATE_SERVICE_NAME_REGEX, "endpoint_attribute": "endpoint"},
}
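ENDPOINTS_ATTRIBUTES keys service discovery off the load type, so endpoint lookup for one parametrized load_type value reduces to the following sketch (hosting fixture assumed):

attrs = ENDPOINTS_ATTRIBUTES["grpc"]
endpoints_list = get_services_endpoints(
    hosting=hosting,
    service_name_regex=attrs["regex"],  # r"s\d\d" matches storage node services
    endpoint_attribute=attrs["endpoint_attribute"],  # "rpc_endpoint"
)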
@pytest.mark.load
class TestLoad:
class TestLoad(ClusterTestBase):
@pytest.fixture(autouse=True)
def clear_cache_and_data(self, hosting: Hosting):
clear_cache_and_data(hosting=hosting)
yield
start_stopped_nodes()
@pytest.mark.parametrize("obj_size, out_file", [(1000, "1mb_200.json")])
@pytest.mark.parametrize("writers, readers, deleters", [(140, 60, 0), (200, 0, 0)])
@pytest.mark.parametrize(
"load_time", [LoadTime.EXPECTED_MAXIMUM.value, LoadTime.PMI_EXPECTATION.value]
@pytest.fixture(scope="session", autouse=True)
def init_s3_client(self, hosting: Hosting):
if "s3" in list(map(lambda x: x.lower(), LOAD_TYPE)):
init_s3_client(
load_nodes=LOAD_NODES,
login=LOAD_NODE_SSH_USER,
pkey=LOAD_NODE_SSH_PRIVATE_KEY_PATH,
hosting=hosting,
container_placement_policy=CONTAINER_PLACEMENT_POLICY,
)
@pytest.mark.parametrize("node_count", [4])
@pytest.mark.parametrize("obj_size, out_file", list(zip(OBJ_SIZE, OUT_FILE)))
@pytest.mark.parametrize("writers, readers, deleters", list(zip(WRITERS, READERS, DELETERS)))
@pytest.mark.parametrize("load_time", LOAD_TIME)
@pytest.mark.parametrize("node_count", STORAGE_NODE_COUNT)
@pytest.mark.parametrize("containers_count", CONTAINERS_COUNT)
@pytest.mark.parametrize("load_type", LOAD_TYPE)
@pytest.mark.parametrize("obj_count", OBJ_COUNT)
@pytest.mark.parametrize("load_nodes_count", LOAD_NODES_COUNT)
@pytest.mark.benchmark
@pytest.mark.grpc
def test_grpc_benchmark(
def test_custom_load(
self,
obj_size,
out_file,
@ -53,604 +81,41 @@ class TestLoad:
deleters,
load_time,
node_count,
obj_count,
load_type,
load_nodes_count,
containers_count,
hosting: Hosting,
):
allure.dynamic.title(
f"Benchmark test - node_count = {node_count}, "
f"Load test - node_count = {node_count}, "
f"writers = {writers} readers = {readers}, "
f"deleters = {deleters}, obj_size = {obj_size}, "
f"load_time = {load_time}"
)
stop_unused_nodes(self.cluster.storage_nodes, int(node_count))  # node_count arrives as a string
with allure.step("Get endpoints"):
endpoints_list = get_services_endpoints(
hosting=hosting,
service_name_regex=STORAGE_NODE_SERVICE_NAME_REGEX,
endpoint_attribute="rpc_endpoint",
service_name_regex=ENDPOINTS_ATTRIBUTES[load_type]["regex"],
endpoint_attribute=ENDPOINTS_ATTRIBUTES[load_type]["endpoint_attribute"],
)
endpoints = ",".join(endpoints_list[:node_count])
load_params = LoadParams(
endpoint=endpoints,
obj_size=obj_size,
containers_count=CONTAINERS_COUNT,
containers_count=containers_count,
out_file=out_file,
obj_count=OBJ_COUNT,
obj_count=obj_count,
writers=writers,
readers=readers,
deleters=deleters,
load_time=load_time,
load_type="grpc",
load_type=load_type,
)
load_nodes_list = LOAD_NODES[:int(load_nodes_count)]
k6_load_instances = prepare_k6_instances(
load_nodes=load_nodes_list,
login=LOAD_NODE_SSH_USER,
pkey=LOAD_NODE_SSH_PRIVATE_KEY_PATH,
load_params=load_params,
)
with allure.step("Run load"):
multi_node_k6_run(k6_load_instances)
@pytest.mark.parametrize(
"obj_size, out_file, writers",
[
(4, "4kb_300.json", 300),
(16, "16kb_250.json", 250),
(64, "64kb_250.json", 250),
(128, "128kb_250.json", 250),
(512, "512kb_200.json", 200),
(1000, "1mb_200.json", 200),
(8000, "8mb_150.json", 150),
(32000, "32mb_150.json", 150),
(128000, "128mb_100.json", 100),
(512000, "512mb_50.json", 50),
],
)
@pytest.mark.parametrize(
"load_time", [LoadTime.EXPECTED_MAXIMUM.value, LoadTime.PMI_EXPECTATION.value]
)
@pytest.mark.benchmark
@pytest.mark.grpc
def test_grpc_benchmark_write(
self,
obj_size,
out_file,
writers,
load_time,
hosting: Hosting,
):
allure.dynamic.title(
f"Single gate benchmark write test - "
f"writers = {writers}, "
f"obj_size = {obj_size}, "
f"load_time = {load_time}"
)
with allure.step("Get endpoints"):
endpoints_list = get_services_endpoints(
hosting=hosting,
service_name_regex=STORAGE_NODE_SERVICE_NAME_REGEX,
endpoint_attribute="rpc_endpoint",
)
endpoints = ",".join(endpoints_list[:1])
load_params = LoadParams(
endpoint=endpoints,
obj_size=obj_size,
containers_count=CONTAINERS_COUNT,
out_file=out_file,
obj_count=OBJ_COUNT,
writers=writers,
readers=0,
deleters=0,
load_time=load_time,
load_type="grpc",
)
k6_load_instances = prepare_k6_instances(
load_nodes=LOAD_NODES,
login=LOAD_NODE_SSH_USER,
pkey=LOAD_NODE_SSH_PRIVATE_KEY_PATH,
load_params=load_params,
)
with allure.step("Run load"):
multi_node_k6_run(k6_load_instances)
@pytest.mark.parametrize(
"obj_size, out_file, writers, readers",
[
(8000, "8mb_350.json", 245, 105),
(32000, "32mb_300.json", 210, 90),
(128000, "128mb_100.json", 70, 30),
(512000, "512mb_70.json", 49, 21),
],
)
@pytest.mark.parametrize(
"load_time", [LoadTime.EXPECTED_MAXIMUM.value, LoadTime.PMI_EXPECTATION.value]
)
@pytest.mark.benchmark
@pytest.mark.grpc
def test_grpc_benchmark_write_read_70_30(
self,
obj_size,
out_file,
writers,
readers,
load_time,
hosting: Hosting,
):
allure.dynamic.title(
f"Single gate benchmark write + read (70%/30%) test - "
f"writers = {writers}, "
f"readers = {readers}, "
f"obj_size = {obj_size}, "
f"load_time = {load_time}"
)
with allure.step("Get endpoints"):
endpoints_list = get_services_endpoints(
hosting=hosting,
service_name_regex=STORAGE_NODE_SERVICE_NAME_REGEX,
endpoint_attribute="rpc_endpoint",
)
endpoints = ",".join(endpoints_list[:1])
load_params = LoadParams(
endpoint=endpoints,
obj_size=obj_size,
containers_count=CONTAINERS_COUNT,
out_file=out_file,
obj_count=500,
writers=writers,
readers=readers,
deleters=0,
load_time=load_time,
load_type="grpc",
)
k6_load_instances = prepare_k6_instances(
load_nodes=LOAD_NODES,
login=LOAD_NODE_SSH_USER,
pkey=LOAD_NODE_SSH_PRIVATE_KEY_PATH,
load_params=load_params,
)
with allure.step("Run load"):
multi_node_k6_run(k6_load_instances)
@pytest.mark.parametrize(
"obj_size, out_file, readers",
[
(4, "4kb_300.json", 300),
(16, "16kb_300.json", 300),
(64, "64kb_300.json", 300),
(128, "128kb_250.json", 250),
(512, "512kb_150.json", 150),
(1000, "1mb_150.json", 150),
(8000, "8mb_150.json", 150),
(32000, "32mb_100.json", 100),
(128000, "128mb_25.json", 25),
(512000, "512mb_25.json", 25),
],
)
@pytest.mark.parametrize(
"load_time", [LoadTime.EXPECTED_MAXIMUM.value, LoadTime.PMI_EXPECTATION.value]
)
@pytest.mark.benchmark
@pytest.mark.grpc
def test_grpc_benchmark_read(
self,
obj_size,
out_file,
readers,
load_time,
hosting: Hosting,
):
allure.dynamic.title(
f"Single gate benchmark read test - "
f"readers = {readers}, "
f"obj_size = {obj_size}, "
f"load_time = {load_time}"
)
with allure.step("Get endpoints"):
endpoints_list = get_services_endpoints(
hosting=hosting,
service_name_regex=STORAGE_NODE_SERVICE_NAME_REGEX,
endpoint_attribute="rpc_endpoint",
)
endpoints = ",".join(endpoints_list[:1])
load_params = LoadParams(
endpoint=endpoints,
obj_size=obj_size,
containers_count=1,
out_file=out_file,
obj_count=500,
writers=0,
readers=readers,
deleters=0,
load_time=load_time,
load_type="grpc",
)
k6_load_instances = prepare_k6_instances(
load_nodes=LOAD_NODES,
login=LOAD_NODE_SSH_USER,
pkey=LOAD_NODE_SSH_PRIVATE_KEY_PATH,
load_params=load_params,
)
with allure.step("Run load"):
multi_node_k6_run(k6_load_instances)
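# The out_file names encode the object size (table values are in kilobytes:
# 4 -> 4kb, 1000 -> 1mb, 512000 -> 512mb) plus the VU count. A hypothetical
# helper (not part of the suite) reproducing the convention of this read table:
def out_file_name(obj_size_kb: int, vus: int) -> str:
    size_label = f"{obj_size_kb // 1000}mb" if obj_size_kb >= 1000 else f"{obj_size_kb}kb"
    return f"{size_label}_{vus}.json"

assert out_file_name(4, 300) == "4kb_300.json"
assert out_file_name(512000, 25) == "512mb_25.json"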
@pytest.mark.parametrize(
"obj_size, out_file, writers",
[
(4, "4kb_300.json", 300),
(16, "16kb_250.json", 250),
(64, "64kb_250.json", 250),
(128, "128kb_250.json", 250),
(512, "512kb_200.json", 200),
(1000, "1mb_200.json", 200),
(8000, "8mb_150.json", 150),
(32000, "32mb_150.json", 150),
(128000, "128mb_100.json", 100),
(512000, "512mb_50.json", 50),
],
)
@pytest.mark.parametrize(
"load_time", [LoadTime.EXPECTED_MAXIMUM.value, LoadTime.PMI_EXPECTATION.value]
)
@pytest.mark.benchmark
@pytest.mark.http
def test_http_benchmark_write(
self,
obj_size,
out_file,
writers,
load_time,
hosting: Hosting,
):
allure.dynamic.title(
f"Single gate benchmark write test - "
f"writers = {writers}, "
f"obj_size = {obj_size}, "
f"load_time = {load_time}"
)
with allure.step("Get endpoints"):
endpoints_list = get_services_endpoints(
hosting=hosting,
service_name_regex=HTTP_GATE_SERVICE_NAME_REGEX,
endpoint_attribute="endpoint",
)
endpoints = ",".join(endpoints_list[:1])
load_params = LoadParams(
endpoint=endpoints,
obj_size=obj_size,
containers_count=CONTAINERS_COUNT,
out_file=out_file,
obj_count=OBJ_COUNT,
writers=writers,
readers=0,
deleters=0,
load_time=load_time,
load_type="http",
)
k6_load_instances = prepare_k6_instances(
load_nodes=LOAD_NODES,
login=LOAD_NODE_SSH_USER,
pkey=LOAD_NODE_SSH_PRIVATE_KEY_PATH,
load_params=load_params,
)
with allure.step("Run load"):
multi_node_k6_run(k6_load_instances)
@pytest.mark.parametrize(
"obj_size, out_file, writers, readers",
[
(8000, "8mb_350.json", 245, 105),
(32000, "32mb_300.json", 210, 90),
(128000, "128mb_100.json", 70, 30),
(512000, "512mb_70.json", 49, 21),
],
)
@pytest.mark.parametrize(
"load_time", [LoadTime.EXPECTED_MAXIMUM.value, LoadTime.PMI_EXPECTATION.value]
)
@pytest.mark.benchmark
@pytest.mark.http
def test_http_benchmark_write_read_70_30(
self,
obj_size,
out_file,
writers,
readers,
load_time,
hosting: Hosting,
):
allure.dynamic.title(
f"Single gate benchmark write + read (70%/30%) test - "
f"writers = {writers}, "
f"readers = {readers}, "
f"obj_size = {obj_size}, "
f"load_time = {load_time}"
)
with allure.step("Get endpoints"):
endpoints_list = get_services_endpoints(
hosting=hosting,
service_name_regex=HTTP_GATE_SERVICE_NAME_REGEX,
endpoint_attribute="endpoint",
)
endpoints = ",".join(endpoints_list[:1])
load_params = LoadParams(
endpoint=endpoints,
obj_size=obj_size,
containers_count=CONTAINERS_COUNT,
out_file=out_file,
obj_count=500,
writers=writers,
readers=readers,
deleters=0,
load_time=load_time,
load_type="http",
)
k6_load_instances = prepare_k6_instances(
load_nodes=LOAD_NODES,
login=LOAD_NODE_SSH_USER,
pkey=LOAD_NODE_SSH_PRIVATE_KEY_PATH,
load_params=load_params,
)
with allure.step("Run load"):
multi_node_k6_run(k6_load_instances)
@pytest.mark.parametrize(
"obj_size, out_file, readers",
[
(4, "4kb_300.json", 300),
(16, "16kb_300.json", 300),
(64, "64kb_300.json", 300),
(128, "128kb_250.json", 250),
(512, "512kb_150.json", 150),
(1000, "1mb_150.json", 150),
(8000, "8mb_150.json", 150),
(32000, "32mb_100.json", 100),
(128000, "128mb_25.json", 25),
(512000, "512mb_25.json", 25),
],
)
@pytest.mark.parametrize(
"load_time", [LoadTime.EXPECTED_MAXIMUM.value, LoadTime.PMI_EXPECTATION.value]
)
@pytest.mark.benchmark
@pytest.mark.http
def test_http_benchmark_read(
self,
obj_size,
out_file,
readers,
load_time,
hosting: Hosting,
):
allure.dynamic.title(
f"Single gate benchmark read test - "
f"readers = {readers}, "
f"obj_size = {obj_size}, "
f"load_time = {load_time}"
)
with allure.step("Get endpoints"):
endpoints_list = get_services_endpoints(
hosting=hosting,
service_name_regex=HTTP_GATE_SERVICE_NAME_REGEX,
endpoint_attribute="endpoint",
)
endpoints = ",".join(endpoints_list[:1])
load_params = LoadParams(
endpoint=endpoints,
obj_size=obj_size,
containers_count=1,
out_file=out_file,
obj_count=500,
writers=0,
readers=readers,
deleters=0,
load_time=load_time,
load_type="http",
)
k6_load_instances = prepare_k6_instances(
load_nodes=LOAD_NODES,
login=LOAD_NODE_SSH_USER,
pkey=LOAD_NODE_SSH_PRIVATE_KEY_PATH,
load_params=load_params,
)
with allure.step("Run load"):
multi_node_k6_run(k6_load_instances)
@pytest.mark.load
@pytest.mark.s3
class TestS3Load:
@pytest.fixture(autouse=True)
def clear_cache_and_data(self, hosting: Hosting):
clear_cache_and_data(hosting=hosting)
@pytest.fixture(scope="session", autouse=True)
def init_s3_client(self, hosting: Hosting):
init_s3_client(
load_nodes=LOAD_NODES,
login=LOAD_NODE_SSH_USER,
pkey=LOAD_NODE_SSH_PRIVATE_KEY_PATH,
hosting=hosting,
)
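# Both autouse fixtures above deliberately share a name with the helper they
# delegate to: pytest injects the fixture by name, while the bare call inside
# the method body is assumed to resolve to the imported module-level helper.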
@pytest.mark.parametrize(
"obj_size, out_file, writers",
[
(4, "4kb_300.json", 400),
(16, "16kb_250.json", 350),
(64, "64kb_250.json", 350),
(128, "128kb_250.json", 300),
(512, "512kb_200.json", 250),
(1000, "1mb_200.json", 250),
(8000, "8mb_150.json", 200),
(32000, "32mb_150.json", 200),
(128000, "128mb_100.json", 150),
(512000, "512mb_50.json", 50),
],
)
@pytest.mark.parametrize(
"load_time", [LoadTime.EXPECTED_MAXIMUM.value, LoadTime.PMI_EXPECTATION.value]
)
@pytest.mark.benchmark
@pytest.mark.s3
def test_s3_benchmark_write(
self,
obj_size,
out_file,
writers,
load_time,
hosting: Hosting,
):
allure.dynamic.title(
f"Single gate benchmark write test - "
f"writers = {writers}, "
f"obj_size = {obj_size}, "
f"load_time = {load_time}"
)
with allure.step("Get endpoints"):
endpoints_list = get_services_endpoints(
hosting=hosting,
service_name_regex=S3_GATE_SERVICE_NAME_REGEX,
endpoint_attribute="endpoint",
)
endpoints = ",".join(endpoints_list[:1])
load_params = LoadParams(
endpoint=endpoints,
obj_size=obj_size,
containers_count=CONTAINERS_COUNT,
out_file=out_file,
obj_count=OBJ_COUNT,
writers=writers,
readers=0,
deleters=0,
load_time=load_time,
load_type="s3",
)
k6_load_instances = prepare_k6_instances(
load_nodes=LOAD_NODES,
login=LOAD_NODE_SSH_USER,
pkey=LOAD_NODE_SSH_PRIVATE_KEY_PATH,
load_params=load_params,
)
with allure.step("Run load"):
multi_node_k6_run(k6_load_instances)
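# Note: unlike the gRPC write table, this S3 write table reuses the gRPC
# out_file names while raising the VU counts (e.g. "4kb_300.json" with 400
# writers), so file names and writer counts no longer match one-to-one.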
@pytest.mark.parametrize(
"obj_size, out_file, writers, readers",
[
(4, "4kb_350.json", 210, 90),
(16, "16kb_300.json", 210, 90),
(64, "64kb_300.json", 210, 90),
(128, "128kb_300.json", 210, 90),
(512, "512kb_300.json", 210, 90),
(1000, "1mb_300.json", 210, 90),
(8000, "8mb_250.json", 175, 75),
(32000, "32mb_200.json", 140, 60),
(128000, "128mb_100.json", 70, 30),
(512000, "512mb_50.json", 35, 15),
],
)
@pytest.mark.parametrize(
"load_time", [LoadTime.EXPECTED_MAXIMUM.value, LoadTime.PMI_EXPECTATION.value]
)
@pytest.mark.benchmark
@pytest.mark.s3
def test_s3_benchmark_write_read_70_30(
self,
obj_size,
out_file,
writers,
readers,
load_time,
hosting: Hosting,
):
allure.dynamic.title(
f"Single gate benchmark write + read (70%/30%) test - "
f"writers = {writers}, "
f"readers = {readers}, "
f"obj_size = {obj_size}, "
f"load_time = {load_time}"
)
with allure.step("Get endpoints"):
endpoints_list = get_services_endpoints(
hosting=hosting,
service_name_regex=S3_GATE_SERVICE_NAME_REGEX,
endpoint_attribute="endpoint",
)
endpoints = ",".join(endpoints_list[:1])
load_params = LoadParams(
endpoint=endpoints,
obj_size=obj_size,
containers_count=CONTAINERS_COUNT,
out_file=out_file,
obj_count=500,
writers=writers,
readers=readers,
deleters=0,
load_time=load_time,
load_type="s3",
)
k6_load_instances = prepare_k6_instances(
load_nodes=LOAD_NODES,
login=LOAD_NODE_SSH_USER,
pkey=LOAD_NODE_SSH_PRIVATE_KEY_PATH,
load_params=load_params,
)
with allure.step("Run load"):
multi_node_k6_run(k6_load_instances)
@pytest.mark.parametrize(
"obj_size, out_file, readers",
[
(4, "4kb_400.json", 400),
(16, "16kb_400.json", 400),
(64, "64kb_350.json", 350),
(128, "128kb_300.json", 300),
(512, "512kb_300.json", 300),
(1000, "1mb_300.json", 300),
(8000, "8mb_300.json", 300),
(32000, "32mb_200.json", 200),
(128000, "128mb_150.json", 150),
(512000, "512mb_50.json", 50),
],
)
@pytest.mark.parametrize(
"load_time", [LoadTime.EXPECTED_MAXIMUM.value, LoadTime.PMI_EXPECTATION.value]
)
@pytest.mark.benchmark
@pytest.mark.s3
def test_s3_benchmark_read(
self,
obj_size,
out_file,
readers,
load_time,
hosting: Hosting,
):
allure.dynamic.title(
f"Single gate benchmark read test - "
f"readers = {readers}, "
f"obj_size = {obj_size}, "
f"load_time = {load_time}"
)
with allure.step("Get endpoints"):
endpoints_list = get_services_endpoints(
hosting=hosting,
service_name_regex=S3_GATE_SERVICE_NAME_REGEX,
endpoint_attribute="endpoint",
)
endpoints = ",".join(endpoints_list[:1])
load_params = LoadParams(
endpoint=endpoints,
obj_size=obj_size,
containers_count=1,
out_file=out_file,
obj_count=500,
writers=0,
readers=readers,
deleters=0,
load_time=load_time,
load_type="s3",
)
k6_load_instances = prepare_k6_instances(
load_nodes=load_nodes_list,
login=LOAD_NODE_SSH_USER,
pkey=LOAD_NODE_SSH_PRIVATE_KEY_PATH,
load_params=load_params,
)
with allure.step("Run load"):
multi_node_k6_run(k6_load_instances)

@@ -5,15 +5,15 @@ from typing import Optional, Tuple
import allure
import pytest
from cluster import StorageNode
from cluster_test_base import ClusterTestBase
from common import COMPLEX_OBJ_SIZE, MORPH_BLOCK_TIME, NEOFS_CONTRACT_CACHE_TIMEOUT
from epoch import tick_epoch
from file_helper import generate_file
from grpc_responses import OBJECT_NOT_FOUND, error_matches_status
from python_keywords.container import create_container, get_container
from python_keywords.failover_utils import wait_object_replication
from python_keywords.neofs_verbs import (
from frostfs_testlib.resources.common import OBJECT_NOT_FOUND, PUBLIC_ACL
from frostfs_testlib.utils import datetime_utils, string_utils
from pytest_tests.helpers.cluster import StorageNode
from pytest_tests.helpers.container import create_container, get_container
from pytest_tests.helpers.epoch import tick_epoch
from pytest_tests.helpers.failover_utils import wait_object_replication
from pytest_tests.helpers.file_helper import generate_file
from pytest_tests.helpers.frostfs_verbs import (
delete_object,
get_object,
get_object_from_random_node,
@@ -21,23 +21,25 @@ from python_keywords.neofs_verbs import (
put_object,
put_object_to_random_node,
)
from python_keywords.node_management import (
from pytest_tests.helpers.node_management import (
check_node_in_map,
delete_node_data,
drop_object,
exclude_node_from_network_map,
get_locode_from_random_node,
get_netmap_snapshot,
include_node_to_network_map,
node_shard_list,
node_shard_set_mode,
start_storage_nodes,
storage_node_healthcheck,
storage_node_set_status,
)
from storage_policy import get_nodes_with_object, get_simple_object_copies
from utility import parse_time, placement_policy_from_container, wait_for_gc_pass_on_storage_nodes
from wellknown_acl import PUBLIC_ACL
from pytest_tests.helpers.storage_policy import get_nodes_with_object, get_simple_object_copies
from pytest_tests.helpers.utility import (
placement_policy_from_container,
wait_for_gc_pass_on_storage_nodes,
)
from pytest_tests.resources.common import FROSTFS_CONTRACT_CACHE_TIMEOUT, MORPH_BLOCK_TIME
from pytest_tests.steps.cluster_test_base import ClusterTestBase
logger = logging.getLogger("NeoLogger")
check_nodes: list[StorageNode] = []
@@ -49,9 +51,10 @@ check_nodes: list[StorageNode] = []
class TestNodeManagement(ClusterTestBase):
@pytest.fixture
@allure.title("Create container and pick the node with data")
def create_container_and_pick_node(self, default_wallet: str) -> Tuple[str, StorageNode]:
default_wallet
file_path = generate_file()
def create_container_and_pick_node(
self, default_wallet: str, simple_object_size
) -> Tuple[str, StorageNode]:
file_path = generate_file(simple_object_size)
placement_rule = "REP 1 IN X CBF 1 SELECT 1 FROM * AS X"
endpoint = self.cluster.default_rpc_endpoint
@@ -110,13 +113,13 @@ class TestNodeManagement(ClusterTestBase):
# We need to wait for the node to establish notifications from the morph chain,
# otherwise it will hang when we try to set its status
sleep(parse_time(MORPH_BLOCK_TIME))
sleep(datetime_utils.parse_time(MORPH_BLOCK_TIME))
with allure.step(f"Move node {node} to online state"):
storage_node_set_status(node, status="online", retries=2)
check_nodes.remove(node)
sleep(parse_time(MORPH_BLOCK_TIME))
sleep(datetime_utils.parse_time(MORPH_BLOCK_TIME))
self.tick_epoch_with_retries(3)
check_node_in_map(node, shell=self.shell, alive_node=alive_node)
@@ -126,11 +129,15 @@
self,
default_wallet,
return_nodes_after_test_run,
simple_object_size,
):
"""
This test removes one node from the cluster and then adds it back. The test uses base control operations with storage nodes (healthcheck, netmap-snapshot, set-status).
"""
wallet = default_wallet
placement_rule_3 = "REP 3 IN X CBF 1 SELECT 3 FROM * AS X"
placement_rule_4 = "REP 4 IN X CBF 1 SELECT 4 FROM * AS X"
source_file_path = generate_file()
source_file_path = generate_file(simple_object_size)
storage_nodes = self.cluster.storage_nodes
random_node = random.choice(storage_nodes[1:])
@@ -202,62 +209,6 @@
)
wait_object_replication(cid, oid, 4, shell=self.shell, nodes=storage_nodes)
@allure.title("Control Operations with storage nodes")
@pytest.mark.node_mgmt
def test_nodes_management(self, temp_directory):
"""
This test checks base control operations with storage nodes (healthcheck, netmap-snapshot, set-status).
"""
storage_nodes = self.cluster.storage_nodes
random_node = random.choice(storage_nodes)
alive_node = random.choice(list(set(storage_nodes) - {random_node}))
# Calculate public key that identifies node in netmap
random_node_netmap_key = random_node.get_wallet_public_key()
with allure.step(f"Check node ({random_node}) is in netmap"):
snapshot = get_netmap_snapshot(node=alive_node, shell=self.shell)
assert (
random_node_netmap_key in snapshot
), f"Expected node {random_node} to be in netmap"
with allure.step("Run health check for all storage nodes"):
for node in self.cluster.storage_nodes:
health_check = storage_node_healthcheck(node)
assert (
health_check.health_status == "READY"
and health_check.network_status == "ONLINE"
)
with allure.step(f"Move node ({random_node}) to offline state"):
storage_node_set_status(random_node, status="offline")
sleep(parse_time(MORPH_BLOCK_TIME))
tick_epoch(self.shell, self.cluster)
with allure.step(f"Check node {random_node} went to offline"):
health_check = storage_node_healthcheck(random_node)
assert (
health_check.health_status == "READY" and health_check.network_status == "OFFLINE"
)
snapshot = get_netmap_snapshot(node=alive_node, shell=self.shell)
assert (
random_node_netmap_key not in snapshot
), f"Expected node {random_node} not in netmap"
with allure.step(f"Check node {random_node} went to online"):
storage_node_set_status(random_node, status="online")
sleep(parse_time(MORPH_BLOCK_TIME))
tick_epoch(self.shell, self.cluster)
with allure.step(f"Check node {random_node} went to online"):
health_check = storage_node_healthcheck(random_node)
assert health_check.health_status == "READY" and health_check.network_status == "ONLINE"
snapshot = get_netmap_snapshot(node=alive_node, shell=self.shell)
assert random_node_netmap_key in snapshot, f"Expected node {random_node} in netmap"
@pytest.mark.parametrize(
"placement_rule,expected_copies",
[
@@ -272,12 +223,14 @@
)
@pytest.mark.node_mgmt
@allure.title("Test object copies based on placement policy")
def test_placement_policy(self, default_wallet, placement_rule, expected_copies):
def test_placement_policy(
self, default_wallet, placement_rule, expected_copies, simple_object_size
):
"""
This test checks object's copies based on container's placement policy.
"""
wallet = default_wallet
file_path = generate_file()
file_path = generate_file(simple_object_size)
self.validate_object_copies(wallet, placement_rule, file_path, expected_copies)
@pytest.mark.parametrize(
@@ -332,14 +285,19 @@ class TestNodeManagement(ClusterTestBase):
@pytest.mark.node_mgmt
@allure.title("Test object copies and storage nodes based on placement policy")
def test_placement_policy_with_nodes(
self, default_wallet, placement_rule, expected_copies, expected_nodes_id: set[int]
self,
default_wallet,
placement_rule,
expected_copies,
expected_nodes_id: set[int],
simple_object_size,
):
"""
Based on the container's placement policy, check that storage nodes are picked correctly and the object has the correct number of copies.
"""
wallet = default_wallet
file_path = generate_file()
file_path = generate_file(simple_object_size)
cid, oid, found_nodes = self.validate_object_copies(
wallet, placement_rule, file_path, expected_copies
)
@@ -356,24 +314,28 @@
)
@pytest.mark.node_mgmt
@allure.title("Negative cases for placement policy")
def test_placement_policy_negative(self, default_wallet, placement_rule, expected_copies):
def test_placement_policy_negative(
self, default_wallet, placement_rule, expected_copies, simple_object_size
):
"""
Negative test for placement policy.
"""
wallet = default_wallet
file_path = generate_file()
file_path = generate_file(simple_object_size)
with pytest.raises(RuntimeError, match=".*not enough nodes to SELECT from.*"):
self.validate_object_copies(wallet, placement_rule, file_path, expected_copies)
@pytest.mark.node_mgmt
@allure.title("NeoFS object could be dropped using control command")
def test_drop_object(self, default_wallet):
@allure.title("FrostFS object could be dropped using control command")
def test_drop_object(self, default_wallet, complex_object_size, simple_object_size):
"""
Test checks object could be dropped using `neofs-cli control drop-objects` command.
Test checks object could be dropped using `frostfs-cli control drop-objects` command.
"""
wallet = default_wallet
endpoint = self.cluster.default_rpc_endpoint
file_path_simple, file_path_complex = generate_file(), generate_file(COMPLEX_OBJ_SIZE)
file_path_simple, file_path_complex = generate_file(simple_object_size), generate_file(
complex_object_size
)
locode = get_locode_from_random_node(self.cluster)
rule = f"REP 1 CBF 1 SELECT 1 FROM * FILTER 'UN-LOCODE' EQ '{locode}' AS LOC"
@@ -411,9 +373,10 @@ class TestNodeManagement(ClusterTestBase):
self,
default_wallet,
create_container_and_pick_node,
simple_object_size,
):
wallet = default_wallet
file_path = generate_file()
file_path = generate_file(simple_object_size)
cid, node = create_container_and_pick_node
original_oid = put_object_to_random_node(wallet, file_path, cid, self.shell, self.cluster)
@@ -513,7 +476,7 @@ class TestNodeManagement(ClusterTestBase):
if copies == expected_copies:
break
tick_epoch(self.shell, self.cluster)
sleep(parse_time(NEOFS_CONTRACT_CACHE_TIMEOUT))
sleep(datetime_utils.parse_time(FROSTFS_CONTRACT_CACHE_TIMEOUT))
else:
raise AssertionError(f"There are no {expected_copies} copies during time")
@@ -524,7 +487,7 @@
checker(wallet, cid, oid, shell=self.shell, endpoint=endpoint)
wait_for_gc_pass_on_storage_nodes()
except Exception as err:
if error_matches_status(err, OBJECT_NOT_FOUND):
if string_utils.is_str_match_pattern(err, OBJECT_NOT_FOUND):
return
raise AssertionError(f'Expected "{OBJECT_NOT_FOUND}" error, got\n{err}')
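# string_utils.is_str_match_pattern is assumed to behave like a plain regex
# search over the stringified error; a hypothetical sketch (not the testlib code):
import re

def is_str_match_pattern(err: Exception, pattern: str) -> bool:
    # True if the error's string form contains a match for the pattern.
    return re.search(pattern, str(err)) is not None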

@@ -4,15 +4,21 @@ import sys
import allure
import pytest
from cluster import Cluster
from common import COMPLEX_OBJ_SIZE, SIMPLE_OBJ_SIZE
from container import create_container
from file_helper import generate_file, get_file_content, get_file_hash
from grpc_responses import OUT_OF_RANGE
from neofs_testlib.shell import Shell
from frostfs_testlib.resources.common import (
INVALID_LENGTH_SPECIFIER,
INVALID_OFFSET_SPECIFIER,
INVALID_RANGE_OVERFLOW,
INVALID_RANGE_ZERO_LENGTH,
OUT_OF_RANGE,
)
from frostfs_testlib.shell import Shell
from pytest import FixtureRequest
from python_keywords.neofs_verbs import (
get_netmap_netinfo,
from pytest_tests.helpers.cluster import Cluster
from pytest_tests.helpers.complex_object_actions import get_complex_object_split_ranges
from pytest_tests.helpers.container import create_container
from pytest_tests.helpers.file_helper import generate_file, get_file_content, get_file_hash
from pytest_tests.helpers.frostfs_verbs import (
get_object_from_random_node,
get_range,
get_range_hash,
@@ -20,11 +26,10 @@ from python_keywords.neofs_verbs import (
put_object_to_random_node,
search_object,
)
from python_keywords.storage_policy import get_complex_object_copies, get_simple_object_copies
from helpers.storage_object_info import StorageObjectInfo
from steps.cluster_test_base import ClusterTestBase
from steps.storage_object import delete_objects
from pytest_tests.helpers.storage_object_info import StorageObjectInfo
from pytest_tests.helpers.storage_policy import get_complex_object_copies, get_simple_object_copies
from pytest_tests.steps.cluster_test_base import ClusterTestBase
from pytest_tests.steps.storage_object import delete_objects
logger = logging.getLogger("NeoLogger")
@@ -42,48 +47,50 @@ RANGES_COUNT = 4 # by quarters
RANGE_MIN_LEN = 10
RANGE_MAX_LEN = 500
# Used for static ranges found with issues
STATIC_RANGES = {
SIMPLE_OBJ_SIZE: [],
COMPLEX_OBJ_SIZE: [],
}
STATIC_RANGES = {}
def generate_ranges(file_size: int, max_object_size: int) -> list[tuple[int, int]]:
file_range_step = file_size / RANGES_COUNT
def generate_ranges(
storage_object: StorageObjectInfo, max_object_size: int, shell: Shell, cluster: Cluster
) -> list[tuple[int, int]]:
file_range_step = storage_object.size / RANGES_COUNT
file_ranges = []
file_ranges_to_test = []
for i in range(0, RANGES_COUNT):
file_ranges.append((int(file_range_step * i), int(file_range_step * (i + 1))))
file_ranges.append((int(file_range_step * i), int(file_range_step)))
# For a simple object we can read all file ranges without spending too much test time
if file_size == SIMPLE_OBJ_SIZE:
if storage_object.size < max_object_size:
file_ranges_to_test.extend(file_ranges)
# For a complex object we need to fetch multiple child objects from different nodes.
if file_size == COMPLEX_OBJ_SIZE:
else:
assert (
file_size >= RANGE_MAX_LEN + max_object_size
), f"Complex object size should be at least {max_object_size + RANGE_MAX_LEN}. Current: {file_size}"
file_ranges_to_test.append((RANGE_MAX_LEN, RANGE_MAX_LEN + max_object_size))
storage_object.size >= RANGE_MAX_LEN + max_object_size
), f"Complex object size should be at least {max_object_size + RANGE_MAX_LEN}. Current: {storage_object.size}"
file_ranges_to_test.append((RANGE_MAX_LEN, max_object_size - RANGE_MAX_LEN))
file_ranges_to_test.extend(get_complex_object_split_ranges(storage_object, shell, cluster))
# Special cases to read some bytes from start and some bytes from end of object
file_ranges_to_test.append((0, RANGE_MIN_LEN))
file_ranges_to_test.append((file_size - RANGE_MIN_LEN, file_size))
file_ranges_to_test.append((storage_object.size - RANGE_MIN_LEN, RANGE_MIN_LEN))
for start, end in file_ranges:
for offset, length in file_ranges:
range_length = random.randint(RANGE_MIN_LEN, RANGE_MAX_LEN)
range_start = random.randint(start, end)
range_start = random.randint(offset, offset + length)
file_ranges_to_test.append((range_start, min(range_start + range_length, file_size)))
file_ranges_to_test.append(
(range_start, min(range_length, storage_object.size - range_start))
)
file_ranges_to_test.extend(STATIC_RANGES[file_size])
file_ranges_to_test.extend(STATIC_RANGES.get(storage_object.size, []))
return file_ranges_to_test
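# Worked example (with RANGES_COUNT = 4 and RANGE_MIN_LEN = 10): for a simple
# 1000-byte object the quarter ranges are (0, 250), (250, 250), (500, 250),
# (750, 250); the edge cases add (0, 10) and (990, 10), and one random
# (offset, length) pair is then drawn from inside each quarter.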
@pytest.fixture(
params=[SIMPLE_OBJ_SIZE, COMPLEX_OBJ_SIZE],
params=[pytest.lazy_fixture("simple_object_size"), pytest.lazy_fixture("complex_object_size")],
ids=["simple object", "complex object"],
# Module scope to upload/delete each file set only once
scope="module",
@@ -132,7 +139,7 @@ def storage_objects(
class TestObjectApi(ClusterTestBase):
@allure.title("Validate object storage policy by native API")
def test_object_storage_policies(
self, request: FixtureRequest, storage_objects: list[StorageObjectInfo]
self, request: FixtureRequest, storage_objects: list[StorageObjectInfo], simple_object_size
):
"""
Validate object storage policy
@@ -143,7 +150,7 @@ class TestObjectApi(ClusterTestBase):
with allure.step("Validate storage policy for objects"):
for storage_object in storage_objects:
if storage_object.size == SIMPLE_OBJ_SIZE:
if storage_object.size == simple_object_size:
copies = get_simple_object_copies(
storage_object.wallet_file_path,
storage_object.cid,
@@ -257,7 +264,9 @@ class TestObjectApi(ClusterTestBase):
@allure.title("Validate object search with removed items")
@pytest.mark.parametrize(
"object_size", [SIMPLE_OBJ_SIZE, COMPLEX_OBJ_SIZE], ids=["simple object", "complex object"]
"object_size",
[pytest.lazy_fixture("simple_object_size"), pytest.lazy_fixture("complex_object_size")],
ids=["simple object", "complex object"],
)
def test_object_search_should_return_tombstone_items(
self, default_wallet: str, request: FixtureRequest, object_size: int
@@ -330,10 +339,10 @@ class TestObjectApi(ClusterTestBase):
@pytest.mark.sanity
@pytest.mark.grpc_api
def test_object_get_range_hash(
self, request: FixtureRequest, storage_objects: list[StorageObjectInfo]
self, request: FixtureRequest, storage_objects: list[StorageObjectInfo], max_object_size
):
"""
Validate get_range_hash for object by common gRPC API
Validate get_range_hash for object by native gRPC API
"""
allure.dynamic.title(
f"Validate native get_range_hash object API for {request.node.callspec.id}"
@@ -343,16 +352,13 @@
cid = storage_objects[0].cid
oids = [storage_object.oid for storage_object in storage_objects[:2]]
file_path = storage_objects[0].file_path
net_info = get_netmap_netinfo(
wallet, self.shell, endpoint=self.cluster.default_rpc_endpoint
)
max_object_size = net_info["maximum_object_size"]
file_ranges_to_test = generate_ranges(storage_objects[0].size, max_object_size)
file_ranges_to_test = generate_ranges(
storage_objects[0], max_object_size, self.shell, self.cluster
)
logging.info(f"Ranges used in test {file_ranges_to_test}")
for range_start, range_end in file_ranges_to_test:
range_len = range_end - range_start
for range_start, range_len in file_ranges_to_test:
range_cut = f"{range_start}:{range_len}"
with allure.step(f"Get range hash ({range_cut})"):
for oid in oids:
@@ -372,10 +378,10 @@ class TestObjectApi(ClusterTestBase):
@pytest.mark.sanity
@pytest.mark.grpc_api
def test_object_get_range(
self, request: FixtureRequest, storage_objects: list[StorageObjectInfo]
self, request: FixtureRequest, storage_objects: list[StorageObjectInfo], max_object_size
):
"""
Validate get_range for object by common gRPC API
Validate get_range for object by native gRPC API
"""
allure.dynamic.title(f"Validate native get_range object API for {request.node.callspec.id}")
@@ -383,16 +389,13 @@
cid = storage_objects[0].cid
oids = [storage_object.oid for storage_object in storage_objects[:2]]
file_path = storage_objects[0].file_path
net_info = get_netmap_netinfo(
wallet, self.shell, endpoint=self.cluster.default_rpc_endpoint
)
max_object_size = net_info["maximum_object_size"]
file_ranges_to_test = generate_ranges(storage_objects[0].size, max_object_size)
file_ranges_to_test = generate_ranges(
storage_objects[0], max_object_size, self.shell, self.cluster
)
logging.info(f"Ranges used in test {file_ranges_to_test}")
for range_start, range_end in file_ranges_to_test:
range_len = range_end - range_start
for range_start, range_len in file_ranges_to_test:
range_cut = f"{range_start}:{range_len}"
with allure.step(f"Get range ({range_cut})"):
for oid in oids:
@@ -420,7 +423,7 @@ class TestObjectApi(ClusterTestBase):
storage_objects: list[StorageObjectInfo],
):
"""
Validate get_range negative for object by common gRPC API
Validate get_range negative for object by native gRPC API
"""
allure.dynamic.title(
f"Validate native get_range negative object API for {request.node.callspec.id}"
@@ -435,20 +438,30 @@
RANGE_MIN_LEN < file_size
), f"Incorrect test setup. File size ({file_size}) is less than RANGE_MIN_LEN ({RANGE_MIN_LEN})"
file_ranges_to_test = [
file_ranges_to_test: list[tuple[int, int, str]] = [
# Offset is bigger than the file size, the length is small.
(file_size + 1, RANGE_MIN_LEN),
(file_size + 1, RANGE_MIN_LEN, OUT_OF_RANGE),
# Offset is ok, but offset+length is too big.
(file_size - RANGE_MIN_LEN, RANGE_MIN_LEN * 2),
(file_size - RANGE_MIN_LEN, RANGE_MIN_LEN * 2, OUT_OF_RANGE),
# Offset is ok, and length is very-very big (e.g. MaxUint64) so that offset+length is wrapped and still "valid".
(RANGE_MIN_LEN, sys.maxsize * 2 + 1),
(RANGE_MIN_LEN, sys.maxsize * 2 + 1, INVALID_RANGE_OVERFLOW),
# Length is zero
(10, 0, INVALID_RANGE_ZERO_LENGTH),
# Negative values
(-1, 1, INVALID_OFFSET_SPECIFIER),
(10, -5, INVALID_LENGTH_SPECIFIER),
]
for range_start, range_len in file_ranges_to_test:
for range_start, range_len, expected_error in file_ranges_to_test:
range_cut = f"{range_start}:{range_len}"
expected_error = (
expected_error.format(range=range_cut)
if "{range}" in expected_error
else expected_error
)
with allure.step(f"Get range ({range_cut})"):
for oid in oids:
with pytest.raises(Exception, match=OUT_OF_RANGE):
with pytest.raises(Exception, match=expected_error):
get_range(
wallet,
cid,
@@ -465,7 +478,7 @@ class TestObjectApi(ClusterTestBase):
storage_objects: list[StorageObjectInfo],
):
"""
Validate get_range_hash negative for object by common gRPC API
Validate get_range_hash negative for object by native gRPC API
"""
allure.dynamic.title(
f"Validate native get_range_hash negative object API for {request.node.callspec.id}"
@@ -480,20 +493,30 @@
RANGE_MIN_LEN < file_size
), f"Incorrect test setup. File size ({file_size}) is less than RANGE_MIN_LEN ({RANGE_MIN_LEN})"
file_ranges_to_test = [
file_ranges_to_test: list[tuple[int, int, str]] = [
# Offset is bigger than the file size, the length is small.
(file_size + 1, RANGE_MIN_LEN),
(file_size + 1, RANGE_MIN_LEN, OUT_OF_RANGE),
# Offset is ok, but offset+length is too big.
(file_size - RANGE_MIN_LEN, RANGE_MIN_LEN * 2),
(file_size - RANGE_MIN_LEN, RANGE_MIN_LEN * 2, OUT_OF_RANGE),
# Offset is ok, and length is very-very big (e.g. MaxUint64) so that offset+length is wrapped and still "valid".
(RANGE_MIN_LEN, sys.maxsize * 2 + 1),
(RANGE_MIN_LEN, sys.maxsize * 2 + 1, INVALID_RANGE_OVERFLOW),
# Length is zero
(10, 0, INVALID_RANGE_ZERO_LENGTH),
# Negative values
(-1, 1, INVALID_OFFSET_SPECIFIER),
(10, -5, INVALID_LENGTH_SPECIFIER),
]
for range_start, range_len in file_ranges_to_test:
for range_start, range_len, expected_error in file_ranges_to_test:
range_cut = f"{range_start}:{range_len}"
with allure.step(f"Get range ({range_cut})"):
expected_error = (
expected_error.format(range=range_cut)
if "{range}" in expected_error
else expected_error
)
with allure.step(f"Get range hash ({range_cut})"):
for oid in oids:
with pytest.raises(Exception, match=OUT_OF_RANGE):
with pytest.raises(Exception, match=expected_error):
get_range_hash(
wallet,
cid,

@@ -0,0 +1,169 @@
import allure
import pytest
from frostfs_testlib.resources.common import EACL_PUBLIC_READ_WRITE
from frostfs_testlib.shell import Shell
from pytest import FixtureRequest
from pytest_tests.helpers.acl import (
EACLAccess,
EACLOperation,
EACLRole,
EACLRule,
form_bearertoken_file,
)
from pytest_tests.helpers.cluster import Cluster
from pytest_tests.helpers.container import (
REP_2_FOR_3_NODES_PLACEMENT_RULE,
SINGLE_PLACEMENT_RULE,
StorageContainer,
StorageContainerInfo,
create_container,
)
from pytest_tests.helpers.epoch import get_epoch
from pytest_tests.helpers.frostfs_verbs import delete_object, get_object
from pytest_tests.helpers.test_control import expect_not_raises
from pytest_tests.helpers.wallet import WalletFile
from pytest_tests.steps.cluster_test_base import ClusterTestBase
from pytest_tests.steps.storage_object import StorageObjectInfo
@pytest.fixture(scope="module")
@allure.title("Create bearer token for OTHERS with all operations allowed for all containers")
def bearer_token_file_all_allow(default_wallet: str, client_shell: Shell, cluster: Cluster) -> str:
bearer = form_bearertoken_file(
default_wallet,
"",
[
EACLRule(operation=op, access=EACLAccess.ALLOW, role=EACLRole.OTHERS)
for op in EACLOperation
],
shell=client_shell,
endpoint=cluster.default_rpc_endpoint,
)
return bearer
@pytest.fixture(scope="module")
@allure.title("Create user container for bearer token usage")
def user_container(
default_wallet: str, client_shell: Shell, cluster: Cluster, request: FixtureRequest
) -> StorageContainer:
container_id = create_container(
default_wallet,
shell=client_shell,
rule=request.param,
basic_acl=EACL_PUBLIC_READ_WRITE,
endpoint=cluster.default_rpc_endpoint,
)
# Deliberately using s3gate wallet here to test bearer token
s3gate = cluster.s3gates[0]
return StorageContainer(
StorageContainerInfo(container_id, WalletFile.from_node(s3gate)),
client_shell,
cluster,
)
@pytest.fixture()
def storage_objects(
user_container: StorageContainer,
bearer_token_file_all_allow: str,
request: FixtureRequest,
client_shell: Shell,
cluster: Cluster,
) -> list[StorageObjectInfo]:
epoch = get_epoch(client_shell, cluster)
storage_objects: list[StorageObjectInfo] = []
for node in cluster.storage_nodes:
storage_objects.append(
user_container.generate_object(
request.param,
epoch + 3,
bearer_token=bearer_token_file_all_allow,
endpoint=node.get_rpc_endpoint(),
)
)
return storage_objects
@pytest.mark.smoke
@pytest.mark.bearer
class TestObjectApiWithBearerToken(ClusterTestBase):
@pytest.mark.parametrize(
"user_container",
[SINGLE_PLACEMENT_RULE],
ids=["single replica for all nodes placement rule"],
indirect=True,
)
@pytest.mark.parametrize(
"storage_objects",
[pytest.lazy_fixture("simple_object_size"), pytest.lazy_fixture("complex_object_size")],
ids=["simple object", "complex object"],
indirect=True,
)
def test_delete_object_with_s3_wallet_bearer(
self,
storage_objects: list[StorageObjectInfo],
bearer_token_file_all_allow: str,
request: FixtureRequest,
):
allure.dynamic.title(
f"Object can be deleted from any node using s3gate wallet with bearer token for {request.node.callspec.id}"
)
s3_gate_wallet = self.cluster.s3gates[0]
with allure.step("Try to delete each object from first storage node"):
for storage_object in storage_objects:
with expect_not_raises():
delete_object(
s3_gate_wallet.get_wallet_path(),
storage_object.cid,
storage_object.oid,
self.shell,
endpoint=self.cluster.default_rpc_endpoint,
bearer=bearer_token_file_all_allow,
wallet_config=s3_gate_wallet.get_wallet_config_path(),
)
@pytest.mark.parametrize(
"user_container",
[REP_2_FOR_3_NODES_PLACEMENT_RULE],
ids=["2 replicas for 3 nodes placement rule"],
indirect=True,
)
@pytest.mark.parametrize(
"file_size",
[pytest.lazy_fixture("simple_object_size"), pytest.lazy_fixture("complex_object_size")],
ids=["simple object", "complex object"],
)
def test_get_object_with_s3_wallet_bearer_from_all_nodes(
self,
user_container: StorageContainer,
file_size: int,
bearer_token_file_all_allow: str,
request: FixtureRequest,
):
allure.dynamic.title(
f"Object can be fetched from any node using s3gate wallet with bearer token for {request.node.callspec.id}"
)
s3_gate_wallet = self.cluster.s3gates[0]
with allure.step("Put one object to container"):
epoch = self.get_epoch()
storage_object = user_container.generate_object(
file_size, epoch + 3, bearer_token=bearer_token_file_all_allow
)
with allure.step("Try to fetch object from each storage node"):
for node in self.cluster.storage_nodes:
with expect_not_raises():
get_object(
s3_gate_wallet.get_wallet_path(),
storage_object.cid,
storage_object.oid,
self.shell,
endpoint=node.get_rpc_endpoint(),
bearer=bearer_token_file_all_allow,
wallet_config=s3_gate_wallet.get_wallet_config_path(),
)
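# expect_not_raises is assumed to be a thin guard that turns any exception into
# a test failure with a readable message; a hypothetical sketch only:
from contextlib import contextmanager

@contextmanager
def expect_not_raises():
    try:
        yield
    except Exception as err:
        raise AssertionError(f"Expected no exception, got: {err}") from err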

@@ -2,26 +2,30 @@ import logging
import allure
import pytest
from common import COMPLEX_OBJ_SIZE, SIMPLE_OBJ_SIZE
from container import create_container
from epoch import get_epoch, tick_epoch
from file_helper import generate_file, get_file_hash
from grpc_responses import OBJECT_NOT_FOUND
from frostfs_testlib.resources.common import OBJECT_NOT_FOUND
from pytest import FixtureRequest
from python_keywords.neofs_verbs import get_object_from_random_node, put_object_to_random_node
from utility import wait_for_gc_pass_on_storage_nodes
from steps.cluster_test_base import ClusterTestBase
from pytest_tests.helpers.container import create_container
from pytest_tests.helpers.epoch import get_epoch
from pytest_tests.helpers.file_helper import generate_file, get_file_hash
from pytest_tests.helpers.frostfs_verbs import (
get_object_from_random_node,
put_object_to_random_node,
)
from pytest_tests.helpers.utility import wait_for_gc_pass_on_storage_nodes
from pytest_tests.steps.cluster_test_base import ClusterTestBase
logger = logging.getLogger("NeoLogger")
@pytest.mark.sanity
@pytest.mark.grpc_api
class ObjectApiLifetimeTest(ClusterTestBase):
class TestObjectApiLifetime(ClusterTestBase):
@allure.title("Test object life time")
@pytest.mark.parametrize(
"object_size", [SIMPLE_OBJ_SIZE, COMPLEX_OBJ_SIZE], ids=["simple object", "complex object"]
"object_size",
[pytest.lazy_fixture("simple_object_size"), pytest.lazy_fixture("complex_object_size")],
ids=["simple object", "complex object"],
)
def test_object_api_lifetime(
self, default_wallet: str, request: FixtureRequest, object_size: int
@@ -48,7 +52,7 @@ class ObjectApiLifetimeTest(ClusterTestBase):
with allure.step("Tick two epochs"):
for _ in range(2):
tick_epoch(self.shell, self.cluster)
self.tick_epoch()
# Wait for GC, because object with expiration is counted as alive until GC removes it
wait_for_gc_pass_on_storage_nodes()

@@ -3,13 +3,7 @@ import re
import allure
import pytest
from cluster import Cluster
from cluster_test_base import ClusterTestBase
from common import COMPLEX_OBJ_SIZE, SIMPLE_OBJ_SIZE, STORAGE_GC_TIME
from complex_object_actions import get_link_object
from container import create_container
from epoch import ensure_fresh_epoch, get_epoch, tick_epoch
from grpc_responses import (
from frostfs_testlib.resources.common import (
LIFETIME_REQUIRED,
LOCK_NON_REGULAR_OBJECT,
LOCK_OBJECT_EXPIRATION,
@@ -18,18 +12,24 @@ from grpc_responses import (
OBJECT_IS_LOCKED,
OBJECT_NOT_FOUND,
)
from neofs_testlib.shell import Shell
from frostfs_testlib.shell import Shell
from frostfs_testlib.utils import datetime_utils
from pytest import FixtureRequest
from python_keywords.neofs_verbs import delete_object, head_object, lock_object
from test_control import expect_not_raises, wait_for_success
from utility import parse_time, wait_for_gc_pass_on_storage_nodes
import steps
from helpers.container import StorageContainer, StorageContainerInfo
from helpers.storage_object_info import LockObjectInfo, StorageObjectInfo
from helpers.wallet import WalletFactory, WalletFile
from steps.cluster_test_base import ClusterTestBase
from steps.storage_object import delete_objects
from pytest_tests.helpers.cluster import Cluster
from pytest_tests.helpers.complex_object_actions import get_link_object, get_storage_object_chunks
from pytest_tests.helpers.container import StorageContainer, StorageContainerInfo, create_container
from pytest_tests.helpers.epoch import ensure_fresh_epoch, get_epoch, tick_epoch
from pytest_tests.helpers.frostfs_verbs import delete_object, head_object, lock_object
from pytest_tests.helpers.node_management import drop_object
from pytest_tests.helpers.storage_object_info import LockObjectInfo, StorageObjectInfo
from pytest_tests.helpers.storage_policy import get_nodes_with_object
from pytest_tests.helpers.test_control import expect_not_raises, wait_for_success
from pytest_tests.helpers.utility import wait_for_gc_pass_on_storage_nodes
from pytest_tests.helpers.wallet import WalletFactory, WalletFile
from pytest_tests.resources.common import STORAGE_GC_TIME
from pytest_tests.steps.cluster_test_base import ClusterTestBase
from pytest_tests.steps.storage_object import delete_objects
logger = logging.getLogger("NeoLogger")
@@ -65,6 +65,9 @@ def locked_storage_object(
cluster: Cluster,
request: FixtureRequest,
):
"""
Intention of this fixture is to provide a storage object which is NOT expected to be deleted during the test act phase
"""
with allure.step("Creating locked object"):
current_epoch = ensure_fresh_epoch(client_shell, cluster)
expiration_epoch = current_epoch + FIXTURE_LOCK_LIFETIME
@@ -117,33 +120,35 @@
@pytest.mark.sanity
@pytest.mark.grpc_object_lock
class TestObjectLockWithGrpc(ClusterTestBase):
def get_storage_object_chunks(self, storage_object: StorageObjectInfo):
with allure.step(f"Get complex object chunks (f{storage_object.oid})"):
split_object_id = get_link_object(
@pytest.fixture()
def new_locked_storage_object(
self, user_container: StorageContainer, request: FixtureRequest
) -> StorageObjectInfo:
"""
Intention of this fixture is to provide a new storage object for tests which may delete or corrupt the object or its complementary objects,
so we need a new one each time we ask for it
"""
with allure.step("Creating locked object"):
current_epoch = self.get_epoch()
storage_object = user_container.generate_object(
request.param, expire_at=current_epoch + FIXTURE_OBJECT_LIFETIME
)
lock_object(
storage_object.wallet_file_path,
storage_object.cid,
storage_object.oid,
self.shell,
self.cluster.storage_nodes,
is_direct=False,
)
head = head_object(
storage_object.wallet_file_path,
storage_object.cid,
split_object_id,
self.shell,
self.cluster.default_rpc_endpoint,
lifetime=FIXTURE_LOCK_LIFETIME,
)
chunks_object_ids = []
if "split" in head["header"] and "children" in head["header"]["split"]:
chunks_object_ids = head["header"]["split"]["children"]
return chunks_object_ids
return storage_object
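# Reminder for the lock tests below: a locked object must survive any
# user-initiated deletion until every lock on it expires, and the same
# protection is expected to extend to its complementary (link/chunk) objects.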
@allure.title("Locked object should be protected from deletion")
@pytest.mark.parametrize(
"locked_storage_object",
[SIMPLE_OBJ_SIZE, COMPLEX_OBJ_SIZE],
[pytest.lazy_fixture("simple_object_size"), pytest.lazy_fixture("complex_object_size")],
ids=["simple object", "complex object"],
indirect=True,
)
@@ -170,7 +175,9 @@ class TestObjectLockWithGrpc(ClusterTestBase):
@allure.title("Lock object itself should be protected from deletion")
# We operate with only lock object here so no complex object needed in this test
@pytest.mark.parametrize("locked_storage_object", [SIMPLE_OBJ_SIZE], indirect=True)
@pytest.mark.parametrize(
"locked_storage_object", [pytest.lazy_fixture("simple_object_size")], indirect=True
)
def test_lock_object_itself_cannot_be_deleted(
self,
locked_storage_object: StorageObjectInfo,
@@ -193,7 +200,9 @@ class TestObjectLockWithGrpc(ClusterTestBase):
@allure.title("Lock object itself cannot be locked")
# We operate with only lock object here so no complex object needed in this test
@pytest.mark.parametrize("locked_storage_object", [SIMPLE_OBJ_SIZE], indirect=True)
@pytest.mark.parametrize(
"locked_storage_object", [pytest.lazy_fixture("simple_object_size")], indirect=True
)
def test_lock_object_cannot_be_locked(
self,
locked_storage_object: StorageObjectInfo,
@@ -217,7 +226,9 @@ class TestObjectLockWithGrpc(ClusterTestBase):
@allure.title("Cannot lock object without lifetime and expire_at fields")
# We operate with only lock object here so no complex object needed in this test
@pytest.mark.parametrize("locked_storage_object", [SIMPLE_OBJ_SIZE], indirect=True)
@pytest.mark.parametrize(
"locked_storage_object", [pytest.lazy_fixture("simple_object_size")], indirect=True
)
@pytest.mark.parametrize(
"wrong_lifetime,wrong_expire_at,expected_error",
[
@@ -259,7 +270,9 @@ class TestObjectLockWithGrpc(ClusterTestBase):
@allure.title("Expired object should be deleted after locks are expired")
@pytest.mark.parametrize(
"object_size", [SIMPLE_OBJ_SIZE, COMPLEX_OBJ_SIZE], ids=["simple object", "complex object"]
"object_size",
[pytest.lazy_fixture("simple_object_size"), pytest.lazy_fixture("complex_object_size")],
ids=["simple object", "complex object"],
)
def test_expired_object_should_be_deleted_after_locks_are_expired(
self,
@@ -284,7 +297,7 @@ class TestObjectLockWithGrpc(ClusterTestBase):
storage_object.oid,
self.shell,
self.cluster.default_rpc_endpoint,
lifetime=3,
lifetime=2,
)
lock_object(
storage_object.wallet_file_path,
@@ -292,12 +305,11 @@
storage_object.oid,
self.shell,
self.cluster.default_rpc_endpoint,
expire_at=current_epoch + 3,
expire_at=current_epoch + 2,
)
with allure.step("Check object is not deleted at expiration time"):
self.tick_epoch()
self.tick_epoch()
self.tick_epochs(2)
# Must wait to ensure object is not deleted
wait_for_gc_pass_on_storage_nodes()
with expect_not_raises():
@@ -309,7 +321,7 @@ class TestObjectLockWithGrpc(ClusterTestBase):
self.cluster.default_rpc_endpoint,
)
@wait_for_success(parse_time(STORAGE_GC_TIME))
@wait_for_success(datetime_utils.parse_time(STORAGE_GC_TIME))
def check_object_not_found():
with pytest.raises(Exception, match=OBJECT_NOT_FOUND):
head_object(
@@ -327,7 +339,7 @@ class TestObjectLockWithGrpc(ClusterTestBase):
@allure.title("Should be possible to lock multiple objects at once")
@pytest.mark.parametrize(
"object_size",
[SIMPLE_OBJ_SIZE, COMPLEX_OBJ_SIZE],
[pytest.lazy_fixture("simple_object_size"), pytest.lazy_fixture("complex_object_size")],
ids=["simple object", "complex object"],
)
def test_should_be_possible_to_lock_multiple_objects_at_once(
@@ -382,7 +394,7 @@ class TestObjectLockWithGrpc(ClusterTestBase):
@allure.title("Already outdated lock should not be applied")
@pytest.mark.parametrize(
"object_size",
[SIMPLE_OBJ_SIZE, COMPLEX_OBJ_SIZE],
[pytest.lazy_fixture("simple_object_size"), pytest.lazy_fixture("complex_object_size")],
ids=["simple object", "complex object"],
)
def test_already_outdated_lock_should_not_be_applied(
@ -421,7 +433,7 @@ class TestObjectLockWithGrpc(ClusterTestBase):
@allure.title("After lock expiration with lifetime user should be able to delete object")
@pytest.mark.parametrize(
"object_size",
[SIMPLE_OBJ_SIZE, COMPLEX_OBJ_SIZE],
[pytest.lazy_fixture("simple_object_size"), pytest.lazy_fixture("complex_object_size")],
ids=["simple object", "complex object"],
)
@expect_not_raises()
@@ -439,7 +451,7 @@ class TestObjectLockWithGrpc(ClusterTestBase):
)
current_epoch = self.ensure_fresh_epoch()
storage_object = user_container.generate_object(object_size, expire_at=current_epoch + 1)
storage_object = user_container.generate_object(object_size, expire_at=current_epoch + 5)
lock_object(
storage_object.wallet_file_path,
@@ -450,7 +462,7 @@ class TestObjectLockWithGrpc(ClusterTestBase):
lifetime=1,
)
self.tick_epoch()
self.tick_epochs(2)
with expect_not_raises():
delete_object(
storage_object.wallet_file_path,
@@ -463,7 +475,7 @@ class TestObjectLockWithGrpc(ClusterTestBase):
@allure.title("After lock expiration with expire_at user should be able to delete object")
@pytest.mark.parametrize(
"object_size",
[SIMPLE_OBJ_SIZE, COMPLEX_OBJ_SIZE],
[pytest.lazy_fixture("simple_object_size"), pytest.lazy_fixture("complex_object_size")],
ids=["simple object", "complex object"],
)
@expect_not_raises()
@@ -493,7 +505,7 @@ class TestObjectLockWithGrpc(ClusterTestBase):
expire_at=current_epoch + 1,
)
self.tick_epoch()
self.tick_epochs(2)
with expect_not_raises():
delete_object(
@@ -508,7 +520,7 @@ class TestObjectLockWithGrpc(ClusterTestBase):
@pytest.mark.parametrize(
# Only complex objects are required for this test
"locked_storage_object",
[COMPLEX_OBJ_SIZE],
[pytest.lazy_fixture("complex_object_size")],
indirect=True,
)
def test_complex_object_chunks_should_also_be_protected_from_deletion(
@@ -519,7 +531,9 @@
Complex object chunks should also be protected from deletion
"""
chunk_object_ids = self.get_storage_object_chunks(locked_storage_object)
chunk_object_ids = get_storage_object_chunks(
locked_storage_object, self.shell, self.cluster
)
for chunk_object_id in chunk_object_ids:
with allure.step(f"Try to delete chunk object {chunk_object_id}"):
with pytest.raises(Exception, match=OBJECT_IS_LOCKED):
@@ -531,11 +545,90 @@
self.cluster.default_rpc_endpoint,
)
@allure.title("Link object of locked complex object can be dropped")
@pytest.mark.grpc_control
@pytest.mark.parametrize(
"new_locked_storage_object",
# Only complex object is required
[pytest.lazy_fixture("complex_object_size")],
indirect=True,
)
def test_link_object_of_locked_complex_object_can_be_dropped(
self, new_locked_storage_object: StorageObjectInfo
):
link_object_id = get_link_object(
new_locked_storage_object.wallet_file_path,
new_locked_storage_object.cid,
new_locked_storage_object.oid,
self.shell,
self.cluster.storage_nodes,
)
with allure.step(f"Drop link object with id {link_object_id} from nodes"):
nodes_with_object = get_nodes_with_object(
new_locked_storage_object.cid,
link_object_id,
shell=self.shell,
nodes=self.cluster.storage_nodes,
)
for node in nodes_with_object:
with expect_not_raises():
drop_object(node, new_locked_storage_object.cid, link_object_id)
@allure.title("Chunks of locked complex object can be dropped")
@pytest.mark.grpc_control
@pytest.mark.parametrize(
"new_locked_storage_object",
# Only complex object is required
[pytest.lazy_fixture("complex_object_size")],
indirect=True,
)
def test_chunks_of_locked_complex_object_can_be_dropped(
self, new_locked_storage_object: StorageObjectInfo
):
chunk_objects = get_storage_object_chunks(
new_locked_storage_object, self.shell, self.cluster
)
for chunk_object_id in chunk_objects:
with allure.step(f"Drop chunk object with id {chunk_object_id} from nodes"):
nodes_with_object = get_nodes_with_object(
new_locked_storage_object.cid,
chunk_object_id,
shell=self.shell,
nodes=self.cluster.storage_nodes,
)
for node in nodes_with_object:
with expect_not_raises():
drop_object(node, new_locked_storage_object.cid, chunk_object_id)
@pytest.mark.grpc_control
@pytest.mark.parametrize(
"new_locked_storage_object",
[pytest.lazy_fixture("simple_object_size"), pytest.lazy_fixture("complex_object_size")],
ids=["simple object", "complex object"],
indirect=True,
)
def test_locked_object_can_be_dropped(
self, new_locked_storage_object: StorageObjectInfo, request: FixtureRequest
):
allure.dynamic.title(f"Locked {request.node.callspec.id} can be dropped")
nodes_with_object = get_nodes_with_object(
new_locked_storage_object.cid,
new_locked_storage_object.oid,
shell=self.shell,
nodes=self.cluster.storage_nodes,
)
for node in nodes_with_object:
with expect_not_raises():
drop_object(node, new_locked_storage_object.cid, new_locked_storage_object.oid)
@allure.title("Link object of complex object should also be protected from deletion")
@pytest.mark.parametrize(
# Only complex objects are required for this test
"locked_storage_object",
[COMPLEX_OBJ_SIZE],
[pytest.lazy_fixture("complex_object_size")],
indirect=True,
)
def test_link_object_of_complex_object_should_also_be_protected_from_deletion(

@@ -4,11 +4,12 @@ import os
import allure
import pytest
import yaml
from cluster_test_base import ClusterTestBase
from common import FREE_STORAGE, NEOFS_CLI_EXEC, WALLET_CONFIG
from neofs_testlib.cli import NeofsCli
from neofs_testlib.shell import CommandResult, Shell
from wallet import WalletFactory, WalletFile
from frostfs_testlib.cli import FrostfsCli
from frostfs_testlib.shell import CommandResult, Shell
from pytest_tests.helpers.wallet import WalletFactory, WalletFile
from pytest_tests.resources.common import FREE_STORAGE, FROSTFS_CLI_EXEC, WALLET_CONFIG
from pytest_tests.steps.cluster_test_base import ClusterTestBase
logger = logging.getLogger("NeoLogger")
DEPOSIT_AMOUNT = 30
@@ -27,8 +28,8 @@ class TestBalanceAccounting(ClusterTestBase):
return wallet_factory.create_wallet()
@pytest.fixture(scope="class")
def cli(self, client_shell: Shell) -> NeofsCli:
return NeofsCli(client_shell, NEOFS_CLI_EXEC, WALLET_CONFIG)
def cli(self, client_shell: Shell) -> FrostfsCli:
return FrostfsCli(client_shell, FROSTFS_CLI_EXEC, WALLET_CONFIG)
@allure.step("Check deposit amount")
def check_amount(self, result: CommandResult) -> None:
@@ -53,13 +54,13 @@
"rpc-endpoint": endpoint,
"wallet": wallet,
}
api_config_file = os.path.join(config_dir, "neofs-cli-api-config.yaml")
api_config_file = os.path.join(config_dir, "frostfs-cli-api-config.yaml")
with open(api_config_file, "w") as file:
yaml.dump(api_config, file)
return api_config_file
@allure.title("Test balance request with wallet and address")
def test_balance_wallet_address(self, main_wallet: WalletFile, cli: NeofsCli):
def test_balance_wallet_address(self, main_wallet: WalletFile, cli: FrostfsCli):
result = cli.accounting.balance(
wallet=main_wallet.path,
rpc_endpoint=self.cluster.default_rpc_endpoint,
@@ -69,7 +70,7 @@ class TestBalanceAccounting(ClusterTestBase):
self.check_amount(result)
@allure.title("Test balance request with wallet only")
def test_balance_wallet(self, main_wallet: WalletFile, cli: NeofsCli):
def test_balance_wallet(self, main_wallet: WalletFile, cli: FrostfsCli):
result = cli.accounting.balance(
wallet=main_wallet.path, rpc_endpoint=self.cluster.default_rpc_endpoint
)
@@ -77,7 +78,7 @@
@allure.title("Test balance request with wallet and wrong address")
def test_balance_wrong_address(
self, main_wallet: WalletFile, other_wallet: WalletFile, cli: NeofsCli
self, main_wallet: WalletFile, other_wallet: WalletFile, cli: FrostfsCli
):
with pytest.raises(Exception, match="address option must be specified and valid"):
cli.accounting.balance(
@@ -95,7 +96,7 @@
)
logger.info(f"Config with API endpoint: {config_file}")
cli = NeofsCli(client_shell, NEOFS_CLI_EXEC, config_file=config_file)
cli = FrostfsCli(client_shell, FROSTFS_CLI_EXEC, config_file=config_file)
result = cli.accounting.balance()
self.check_amount(result)

@@ -0,0 +1,131 @@
import logging
import allure
import pytest
from frostfs_testlib.resources.common import PUBLIC_ACL
from pytest_tests.helpers.acl import (
EACLAccess,
EACLOperation,
EACLRole,
EACLRule,
bearer_token_base64_from_file,
create_eacl,
form_bearertoken_file,
set_eacl,
sign_bearer,
wait_for_cache_expired,
)
from pytest_tests.helpers.container import create_container
from pytest_tests.helpers.file_helper import generate_file
from pytest_tests.helpers.http_gate import get_object_and_verify_hashes, upload_via_http_gate_curl
from pytest_tests.steps.cluster_test_base import ClusterTestBase
logger = logging.getLogger("NeoLogger")
@pytest.mark.sanity
@pytest.mark.http_gate
class Test_http_bearer(ClusterTestBase):
PLACEMENT_RULE = "REP 2 IN X CBF 1 SELECT 2 FROM * AS X"
@pytest.fixture(scope="class", autouse=True)
@allure.title("[Class/Autouse]: Prepare wallet and deposit")
def prepare_wallet(self, default_wallet):
Test_http_bearer.wallet = default_wallet
@pytest.fixture(scope="class")
def user_container(self) -> str:
return create_container(
wallet=self.wallet,
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
rule=self.PLACEMENT_RULE,
basic_acl=PUBLIC_ACL,
)
@pytest.fixture(scope="class")
def eacl_deny_for_others(self, user_container: str) -> None:
with allure.step(f"Set deny all operations for {EACLRole.OTHERS} via eACL"):
eacl = EACLRule(
access=EACLAccess.DENY, role=EACLRole.OTHERS, operation=EACLOperation.PUT
)
set_eacl(
self.wallet,
user_container,
create_eacl(user_container, eacl, shell=self.shell),
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
)
wait_for_cache_expired()
@pytest.fixture(scope="class")
def bearer_token_no_limit_for_others(self, user_container: str) -> str:
with allure.step(f"Create bearer token for {EACLRole.OTHERS} with all operations allowed"):
bearer = form_bearertoken_file(
self.wallet,
user_container,
[
EACLRule(operation=op, access=EACLAccess.ALLOW, role=EACLRole.OTHERS)
for op in EACLOperation
],
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
sign=False,
)
bearer_signed = f"{bearer}_signed"
sign_bearer(
shell=self.shell,
wallet_path=self.wallet,
eacl_rules_file_from=bearer,
eacl_rules_file_to=bearer_signed,
json=False,
)
return bearer_token_base64_from_file(bearer_signed)
@allure.title(f"[negative] Put object without bearer token for {EACLRole.OTHERS}")
def test_unable_put_without_bearer_token(
self, simple_object_size: int, user_container: str, eacl_deny_for_others
):
        eacl_deny_for_others  # fixture requested only for its side effect: deny eACL is already set
upload_via_http_gate_curl(
cid=user_container,
filepath=generate_file(simple_object_size),
endpoint=self.cluster.default_http_gate_endpoint,
error_pattern="access to object operation denied",
)
@pytest.mark.parametrize(
"object_size",
[pytest.lazy_fixture("simple_object_size"), pytest.lazy_fixture("complex_object_size")],
ids=["simple object", "complex object"],
)
def test_put_with_bearer_when_eacl_restrict(
self,
object_size: int,
user_container: str,
eacl_deny_for_others,
bearer_token_no_limit_for_others: str,
):
        eacl_deny_for_others  # fixture requested only for its side effect: deny eACL is already set
bearer = bearer_token_no_limit_for_others
file_path = generate_file(object_size)
with allure.step(
f"Put object with bearer token for {EACLRole.OTHERS}, then get and verify hashes"
):
headers = [f" -H 'Authorization: Bearer {bearer}'"]
oid = upload_via_http_gate_curl(
cid=user_container,
filepath=file_path,
endpoint=self.cluster.default_http_gate_endpoint,
headers=headers,
)
get_object_and_verify_hashes(
oid=oid,
file_name=file_path,
wallet=self.wallet,
cid=user_container,
shell=self.shell,
nodes=self.cluster.storage_nodes,
endpoint=self.cluster.default_http_gate_endpoint,
)
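
For illustration, the upload in the test above amounts to a curl invocation of roughly this shape. This is a hypothetical sketch: the exact flags are encapsulated in upload_via_http_gate_curl, and the /upload/<cid> path follows the frostfs-http-gw README.

    # Hypothetical command assembled from the pieces used above:
    upload_cmd = (
        f"curl -F 'file=@{file_path}' "
        f"-H 'Authorization: Bearer {bearer}' "
        "http://<http_gate_endpoint>/upload/<cid>"
    )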


@ -1,43 +1,38 @@
import logging
import os
import random
from time import sleep
import allure
import pytest
from common import COMPLEX_OBJ_SIZE
from container import create_container
from epoch import get_epoch, tick_epoch
from file_helper import generate_file, get_file_hash
from python_keywords.http_gate import (
from frostfs_testlib.resources.common import PUBLIC_ACL
from pytest_tests.helpers.container import create_container
from pytest_tests.helpers.epoch import get_epoch
from pytest_tests.helpers.file_helper import generate_file, get_file_hash
from pytest_tests.helpers.frostfs_verbs import put_object_to_random_node
from pytest_tests.helpers.http_gate import (
attr_into_header,
get_object_and_verify_hashes,
get_object_by_attr_and_verify_hashes,
get_via_http_curl,
get_via_http_gate,
get_via_http_gate_by_attribute,
get_via_zip_http_gate,
try_to_get_object_and_expect_error,
upload_via_http_gate,
upload_via_http_gate_curl,
)
from python_keywords.neofs_verbs import get_object, put_object_to_random_node
from python_keywords.storage_policy import get_nodes_without_object
from utility import wait_for_gc_pass_on_storage_nodes
from wellknown_acl import PUBLIC_ACL
from steps.cluster_test_base import ClusterTestBase
from pytest_tests.helpers.utility import wait_for_gc_pass_on_storage_nodes
from pytest_tests.steps.cluster_test_base import ClusterTestBase
logger = logging.getLogger("NeoLogger")
OBJECT_NOT_FOUND_ERROR = "not found"
# For some reason object uploaded via http gateway is not immediately available for downloading
# Until this issue is resolved we are waiting for some time before attempting to read an object
# TODO: remove after https://github.com/nspcc-dev/neofs-http-gw/issues/176 is fixed
OBJECT_UPLOAD_DELAY = 10
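
A fixed sleep is the simplest workaround; a polling loop is a possible alternative. Below is a minimal sketch (a hypothetical helper, not part of this suite), assuming get_via_http_gate, imported above, raises while the object is still unavailable:

    import time

    def wait_until_downloadable(
        cid: str, oid: str, endpoint: str, timeout: int = OBJECT_UPLOAD_DELAY
    ) -> None:
        # Poll the HTTP gate instead of sleeping unconditionally.
        deadline = time.monotonic() + timeout
        while True:
            try:
                get_via_http_gate(cid=cid, oid=oid, endpoint=endpoint)
                return
            except Exception:
                if time.monotonic() >= deadline:
                    raise
                time.sleep(1)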
@allure.link(
"https://github.com/nspcc-dev/neofs-http-gw#neofs-http-gateway", name="neofs-http-gateway"
"https://github.com/TrueCloudLab/frostfs-http-gw#frostfs-http-gateway",
name="frostfs-http-gateway",
)
@allure.link("https://github.com/nspcc-dev/neofs-http-gw#uploading", name="uploading")
@allure.link("https://github.com/nspcc-dev/neofs-http-gw#downloading", name="downloading")
@allure.link("https://github.com/TrueCloudLab/frostfs-http-gw#uploading", name="uploading")
@allure.link("https://github.com/TrueCloudLab/frostfs-http-gw#downloading", name="downloading")
@pytest.mark.sanity
@pytest.mark.http_gate
class TestHttpGate(ClusterTestBase):
@ -50,15 +45,15 @@ class TestHttpGate(ClusterTestBase):
TestHttpGate.wallet = default_wallet
@allure.title("Test Put over gRPC, Get over HTTP")
def test_put_grpc_get_http(self):
def test_put_grpc_get_http(self, complex_object_size, simple_object_size):
"""
Test that object can be put using gRPC interface and get using HTTP.
Steps:
1. Create simple and large objects.
2. Put objects using gRPC (neofs-cli).
3. Download objects using HTTP gate (https://github.com/nspcc-dev/neofs-http-gw#downloading).
4. Get objects using gRPC (neofs-cli).
2. Put objects using gRPC (frostfs-cli).
3. Download objects using HTTP gate (https://github.com/TrueCloudLab/frostfs-http-gw#downloading).
4. Get objects using gRPC (frostfs-cli).
5. Compare hashes for got objects.
6. Compare hashes for got and original objects.
@ -72,7 +67,9 @@ class TestHttpGate(ClusterTestBase):
rule=self.PLACEMENT_RULE_1,
basic_acl=PUBLIC_ACL,
)
file_path_simple, file_path_large = generate_file(), generate_file(COMPLEX_OBJ_SIZE)
file_path_simple, file_path_large = generate_file(simple_object_size), generate_file(
complex_object_size
)
with allure.step("Put objects using gRPC"):
oid_simple = put_object_to_random_node(
@ -91,20 +88,28 @@ class TestHttpGate(ClusterTestBase):
)
for oid, file_path in ((oid_simple, file_path_simple), (oid_large, file_path_large)):
self.get_object_and_verify_hashes(oid, file_path, self.wallet, cid)
get_object_and_verify_hashes(
oid=oid,
file_name=file_path,
wallet=self.wallet,
cid=cid,
shell=self.shell,
nodes=self.cluster.storage_nodes,
endpoint=self.cluster.default_http_gate_endpoint,
)
@allure.link("https://github.com/nspcc-dev/neofs-http-gw#uploading", name="uploading")
@allure.link("https://github.com/nspcc-dev/neofs-http-gw#downloading", name="downloading")
@allure.link("https://github.com/TrueCloudLab/frostfs-http-gw#uploading", name="uploading")
@allure.link("https://github.com/TrueCloudLab/frostfs-http-gw#downloading", name="downloading")
@allure.title("Test Put over HTTP, Get over HTTP")
@pytest.mark.smoke
def test_put_http_get_http(self):
def test_put_http_get_http(self, complex_object_size, simple_object_size):
"""
Test that object can be put and get using HTTP interface.
Steps:
1. Create simple and large objects.
2. Upload objects using HTTP (https://github.com/nspcc-dev/neofs-http-gw#uploading).
3. Download objects using HTTP gate (https://github.com/nspcc-dev/neofs-http-gw#downloading).
2. Upload objects using HTTP (https://github.com/TrueCloudLab/frostfs-http-gw#uploading).
3. Download objects using HTTP gate (https://github.com/TrueCloudLab/frostfs-http-gw#downloading).
4. Compare hashes for got and original objects.
Expected result:
@ -117,7 +122,9 @@ class TestHttpGate(ClusterTestBase):
rule=self.PLACEMENT_RULE_2,
basic_acl=PUBLIC_ACL,
)
file_path_simple, file_path_large = generate_file(), generate_file(COMPLEX_OBJ_SIZE)
file_path_simple, file_path_large = generate_file(simple_object_size), generate_file(
complex_object_size
)
with allure.step("Put objects using HTTP"):
oid_simple = upload_via_http_gate(
@ -128,10 +135,19 @@ class TestHttpGate(ClusterTestBase):
)
for oid, file_path in ((oid_simple, file_path_simple), (oid_large, file_path_large)):
self.get_object_and_verify_hashes(oid, file_path, self.wallet, cid)
get_object_and_verify_hashes(
oid=oid,
file_name=file_path,
wallet=self.wallet,
cid=cid,
shell=self.shell,
nodes=self.cluster.storage_nodes,
endpoint=self.cluster.default_http_gate_endpoint,
)
@allure.link(
"https://github.com/nspcc-dev/neofs-http-gw#by-attributes", name="download by attributes"
"https://github.com/TrueCloudLab/frostfs-http-gw#by-attributes",
name="download by attributes",
)
@allure.title("Test Put over HTTP, Get over HTTP with headers")
@pytest.mark.parametrize(
@ -143,14 +159,14 @@ class TestHttpGate(ClusterTestBase):
],
ids=["simple", "hyphen", "percent"],
)
def test_put_http_get_http_with_headers(self, attributes: dict):
def test_put_http_get_http_with_headers(self, attributes: dict, simple_object_size):
"""
Test that object can be downloaded using different attributes in HTTP header.
Steps:
1. Create simple and large objects.
2. Upload objects using HTTP with particular attributes in the header.
3. Download objects by attributes using HTTP gate (https://github.com/nspcc-dev/neofs-http-gw#by-attributes).
3. Download objects by attributes using HTTP gate (https://github.com/TrueCloudLab/frostfs-http-gw#by-attributes).
4. Compare hashes for got and original objects.
Expected result:
@ -163,10 +179,10 @@ class TestHttpGate(ClusterTestBase):
rule=self.PLACEMENT_RULE_2,
basic_acl=PUBLIC_ACL,
)
file_path = generate_file()
file_path = generate_file(simple_object_size)
with allure.step("Put objects using HTTP with attribute"):
headers = self._attr_into_header(attributes)
headers = attr_into_header(attributes)
oid = upload_via_http_gate(
cid=cid,
path=file_path,
@ -174,12 +190,16 @@ class TestHttpGate(ClusterTestBase):
endpoint=self.cluster.default_http_gate_endpoint,
)
sleep(OBJECT_UPLOAD_DELAY)
self.get_object_by_attr_and_verify_hashes(oid, file_path, cid, attributes)
get_object_by_attr_and_verify_hashes(
oid=oid,
file_name=file_path,
cid=cid,
attrs=attributes,
endpoint=self.cluster.default_http_gate_endpoint,
)
@allure.title("Test Expiration-Epoch in HTTP header")
def test_expiration_epoch_in_http(self):
def test_expiration_epoch_in_http(self, simple_object_size):
endpoint = self.cluster.default_rpc_endpoint
http_endpoint = self.cluster.default_http_gate_endpoint
@ -190,14 +210,14 @@ class TestHttpGate(ClusterTestBase):
rule=self.PLACEMENT_RULE_2,
basic_acl=PUBLIC_ACL,
)
file_path = generate_file()
file_path = generate_file(simple_object_size)
oids = []
curr_epoch = get_epoch(self.shell, self.cluster)
epochs = (curr_epoch, curr_epoch + 1, curr_epoch + 2, curr_epoch + 100)
for epoch in epochs:
headers = {"X-Attribute-Neofs-Expiration-Epoch": str(epoch)}
headers = {"X-Attribute-Frostfs-Expiration-Epoch": str(epoch)}
with allure.step("Put objects using HTTP with attribute Expiration-Epoch"):
oids.append(
@ -213,14 +233,17 @@ class TestHttpGate(ClusterTestBase):
get_via_http_gate(cid=cid, oid=oid, endpoint=http_endpoint)
for expired_objects, not_expired_objects in [(oids[:1], oids[1:]), (oids[:2], oids[2:])]:
tick_epoch(self.shell, self.cluster)
self.tick_epoch()
# Wait for GC, because object with expiration is counted as alive until GC removes it
wait_for_gc_pass_on_storage_nodes()
for oid in expired_objects:
self.try_to_get_object_and_expect_error(
cid=cid, oid=oid, error_pattern=OBJECT_NOT_FOUND_ERROR
try_to_get_object_and_expect_error(
cid=cid,
oid=oid,
error_pattern=OBJECT_NOT_FOUND_ERROR,
endpoint=self.cluster.default_http_gate_endpoint,
)
with allure.step("Other objects can be get"):
@ -228,7 +251,7 @@ class TestHttpGate(ClusterTestBase):
get_via_http_gate(cid=cid, oid=oid, endpoint=http_endpoint)
@allure.title("Test Zip in HTTP header")
def test_zip_in_http(self):
def test_zip_in_http(self, complex_object_size, simple_object_size):
cid = create_container(
self.wallet,
shell=self.shell,
@ -236,7 +259,9 @@ class TestHttpGate(ClusterTestBase):
rule=self.PLACEMENT_RULE_2,
basic_acl=PUBLIC_ACL,
)
file_path_simple, file_path_large = generate_file(), generate_file(COMPLEX_OBJ_SIZE)
file_path_simple, file_path_large = generate_file(simple_object_size), generate_file(
complex_object_size
)
common_prefix = "my_files"
headers1 = {"X-Attribute-FilePath": f"{common_prefix}/file1"}
@ -255,8 +280,6 @@ class TestHttpGate(ClusterTestBase):
endpoint=self.cluster.default_http_gate_endpoint,
)
sleep(OBJECT_UPLOAD_DELAY)
dir_path = get_via_zip_http_gate(
cid=cid, prefix=common_prefix, endpoint=self.cluster.default_http_gate_endpoint
)
@ -267,7 +290,7 @@ class TestHttpGate(ClusterTestBase):
@pytest.mark.long
@allure.title("Test Put over HTTP/Curl, Get over HTTP/Curl for large object")
def test_put_http_get_http_large_file(self):
def test_put_http_get_http_large_file(self, complex_object_size):
"""
        This test checks upload and download using curl with a 'large' object.
        A large object here is up to 20 MB in size.
@ -280,7 +303,7 @@ class TestHttpGate(ClusterTestBase):
basic_acl=PUBLIC_ACL,
)
obj_size = int(os.getenv("BIG_OBJ_SIZE", COMPLEX_OBJ_SIZE))
obj_size = int(os.getenv("BIG_OBJ_SIZE", complex_object_size))
file_path = generate_file(obj_size)
with allure.step("Put objects using HTTP"):
@ -290,21 +313,31 @@ class TestHttpGate(ClusterTestBase):
oid_curl = upload_via_http_gate_curl(
cid=cid,
filepath=file_path,
large_object=True,
endpoint=self.cluster.default_http_gate_endpoint,
)
self.get_object_and_verify_hashes(oid_gate, file_path, self.wallet, cid)
self.get_object_and_verify_hashes(
oid_curl,
file_path,
self.wallet,
cid,
get_object_and_verify_hashes(
oid=oid_gate,
file_name=file_path,
wallet=self.wallet,
cid=cid,
shell=self.shell,
nodes=self.cluster.storage_nodes,
endpoint=self.cluster.default_http_gate_endpoint,
)
get_object_and_verify_hashes(
oid=oid_curl,
file_name=file_path,
wallet=self.wallet,
cid=cid,
shell=self.shell,
nodes=self.cluster.storage_nodes,
endpoint=self.cluster.default_http_gate_endpoint,
object_getter=get_via_http_curl,
)
@allure.title("Test Put/Get over HTTP using Curl utility")
def test_put_http_get_http_curl(self):
def test_put_http_get_http_curl(self, complex_object_size, simple_object_size):
"""
Test checks upload and download over HTTP using curl utility.
"""
@ -315,87 +348,28 @@ class TestHttpGate(ClusterTestBase):
rule=self.PLACEMENT_RULE_2,
basic_acl=PUBLIC_ACL,
)
file_path_simple, file_path_large = generate_file(), generate_file(COMPLEX_OBJ_SIZE)
file_path_simple, file_path_large = generate_file(simple_object_size), generate_file(
complex_object_size
)
with allure.step("Put objects using curl utility"):
oid_simple = upload_via_http_gate_curl(
cid=cid, filepath=file_path_simple, endpoint=self.cluster.default_http_gate_endpoint
)
oid_large = upload_via_http_gate_curl(
cid=cid, filepath=file_path_large, endpoint=self.cluster.default_http_gate_endpoint
cid=cid,
filepath=file_path_large,
endpoint=self.cluster.default_http_gate_endpoint,
)
for oid, file_path in ((oid_simple, file_path_simple), (oid_large, file_path_large)):
self.get_object_and_verify_hashes(
oid,
file_path,
self.wallet,
cid,
object_getter=get_via_http_curl,
)
@allure.step("Try to get object and expect error")
def try_to_get_object_and_expect_error(self, cid: str, oid: str, error_pattern: str) -> None:
try:
get_via_http_gate(cid=cid, oid=oid, endpoint=self.cluster.default_http_gate_endpoint)
raise AssertionError(f"Expected error on getting object with cid: {cid}")
except Exception as err:
match = error_pattern.casefold() in str(err).casefold()
assert match, f"Expected {err} to match {error_pattern}"
@allure.step("Verify object can be get using HTTP header attribute")
def get_object_by_attr_and_verify_hashes(
self, oid: str, file_name: str, cid: str, attrs: dict
) -> None:
got_file_path_http = get_via_http_gate(
cid=cid, oid=oid, endpoint=self.cluster.default_http_gate_endpoint
)
got_file_path_http_attr = get_via_http_gate_by_attribute(
cid=cid, attribute=attrs, endpoint=self.cluster.default_http_gate_endpoint
)
TestHttpGate._assert_hashes_are_equal(
file_name, got_file_path_http, got_file_path_http_attr
)
@allure.step("Verify object can be get using HTTP")
def get_object_and_verify_hashes(
self, oid: str, file_name: str, wallet: str, cid: str, object_getter=None
) -> None:
nodes = get_nodes_without_object(
wallet=wallet,
cid=cid,
get_object_and_verify_hashes(
oid=oid,
file_name=file_path,
wallet=self.wallet,
cid=cid,
shell=self.shell,
nodes=self.cluster.storage_nodes,
endpoint=self.cluster.default_http_gate_endpoint,
object_getter=get_via_http_curl,
)
random_node = random.choice(nodes)
object_getter = object_getter or get_via_http_gate
got_file_path = get_object(
wallet=wallet,
cid=cid,
oid=oid,
shell=self.shell,
endpoint=random_node.get_rpc_endpoint(),
)
got_file_path_http = object_getter(
cid=cid, oid=oid, endpoint=self.cluster.default_http_gate_endpoint
)
TestHttpGate._assert_hashes_are_equal(file_name, got_file_path, got_file_path_http)
@staticmethod
def _assert_hashes_are_equal(orig_file_name: str, got_file_1: str, got_file_2: str) -> None:
msg = "Expected hashes are equal for files {f1} and {f2}"
got_file_hash_http = get_file_hash(got_file_1)
assert get_file_hash(got_file_2) == got_file_hash_http, msg.format(
f1=got_file_2, f2=got_file_1
)
assert get_file_hash(orig_file_name) == got_file_hash_http, msg.format(
f1=orig_file_name, f2=got_file_1
)
@staticmethod
def _attr_into_header(attrs: dict) -> dict:
return {f"X-Attribute-{_key}": _value for _key, _value in attrs.items()}


@ -0,0 +1,228 @@
import logging
import os
import allure
import pytest
from frostfs_testlib.resources.common import PUBLIC_ACL
from pytest import FixtureRequest
from pytest_tests.helpers.container import (
create_container,
delete_container,
list_containers,
wait_for_container_deletion,
)
from pytest_tests.helpers.file_helper import generate_file
from pytest_tests.helpers.frostfs_verbs import delete_object
from pytest_tests.helpers.http_gate import (
attr_into_str_header_curl,
get_object_by_attr_and_verify_hashes,
try_to_get_object_and_expect_error,
try_to_get_object_via_passed_request_and_expect_error,
upload_via_http_gate_curl,
)
from pytest_tests.helpers.storage_object_info import StorageObjectInfo
from pytest_tests.steps.cluster_test_base import ClusterTestBase
OBJECT_ALREADY_REMOVED_ERROR = "object already removed"
logger = logging.getLogger("NeoLogger")
@pytest.mark.sanity
@pytest.mark.http_gate
class Test_http_headers(ClusterTestBase):
PLACEMENT_RULE = "REP 2 IN X CBF 1 SELECT 4 FROM * AS X"
obj1_keys = ["Writer", "Chapter1", "Chapter2"]
obj2_keys = ["Writer", "Ch@pter1", "chapter2"]
values = ["Leo Tolstoy", "peace", "w@r"]
OBJECT_ATTRIBUTES = [
{obj1_keys[0]: values[0], obj1_keys[1]: values[1], obj1_keys[2]: values[2]},
{obj2_keys[0]: values[0], obj2_keys[1]: values[1], obj2_keys[2]: values[2]},
]
@pytest.fixture(scope="class", autouse=True)
@allure.title("[Class/Autouse]: Prepare wallet and deposit")
def prepare_wallet(self, default_wallet):
Test_http_headers.wallet = default_wallet
@pytest.fixture(
params=[
pytest.lazy_fixture("simple_object_size"),
pytest.lazy_fixture("complex_object_size"),
],
ids=["simple object", "complex object"],
scope="class",
)
def storage_objects_with_attributes(self, request: FixtureRequest) -> list[StorageObjectInfo]:
storage_objects = []
wallet = self.wallet
cid = create_container(
wallet=self.wallet,
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
rule=self.PLACEMENT_RULE,
basic_acl=PUBLIC_ACL,
)
file_path = generate_file(request.param)
for attributes in self.OBJECT_ATTRIBUTES:
storage_object_id = upload_via_http_gate_curl(
cid=cid,
filepath=file_path,
endpoint=self.cluster.default_http_gate_endpoint,
headers=attr_into_str_header_curl(attributes),
)
storage_object = StorageObjectInfo(cid, storage_object_id)
storage_object.size = os.path.getsize(file_path)
storage_object.wallet_file_path = wallet
storage_object.file_path = file_path
storage_object.attributes = attributes
storage_objects.append(storage_object)
yield storage_objects
@allure.title("Get object1 by attribute")
def test_object1_can_be_get_by_attr(
self, storage_objects_with_attributes: list[StorageObjectInfo]
):
"""
        Test to get object#1 by attribute and compare hashes
Steps:
1. Download object#1 with attributes [Chapter2=w@r] and compare hashes
"""
storage_object_1 = storage_objects_with_attributes[0]
with allure.step(
f'Download object#1 via wget with attributes Chapter2: {storage_object_1.attributes["Chapter2"]} and compare hashes'
):
get_object_by_attr_and_verify_hashes(
oid=storage_object_1.oid,
file_name=storage_object_1.file_path,
cid=storage_object_1.cid,
attrs={"Chapter2": storage_object_1.attributes["Chapter2"]},
endpoint=self.cluster.default_http_gate_endpoint,
)
@allure.title("Test get object2 with different attributes, then delete object2 and get object1")
def test_object2_can_be_get_by_attr(
self, storage_objects_with_attributes: list[StorageObjectInfo]
):
"""
        Test to get object#2 with different attributes, then delete object#2 and get object#1 using the 1st attribute.
        Note: obj#1 and obj#2 share attribute#1, so after obj#2 is deleted, obj#1 can still be fetched by that attribute.
Steps:
1. Download object#2 with attributes [chapter2=w@r] and compare hashes
2. Download object#2 with attributes [Ch@pter1=peace] and compare hashes
3. Delete object#2
4. Download object#1 with attributes [Writer=Leo Tolstoy] and compare hashes
"""
storage_object_1 = storage_objects_with_attributes[0]
storage_object_2 = storage_objects_with_attributes[1]
with allure.step(
f'Download object#2 via wget with attributes [chapter2={storage_object_2.attributes["chapter2"]}] / [Ch@pter1={storage_object_2.attributes["Ch@pter1"]}] and compare hashes'
):
selected_attributes_object2 = [
{"chapter2": storage_object_2.attributes["chapter2"]},
{"Ch@pter1": storage_object_2.attributes["Ch@pter1"]},
]
for attributes in selected_attributes_object2:
get_object_by_attr_and_verify_hashes(
oid=storage_object_2.oid,
file_name=storage_object_2.file_path,
cid=storage_object_2.cid,
attrs=attributes,
endpoint=self.cluster.default_http_gate_endpoint,
)
with allure.step("Delete object#2 and verify is the container deleted"):
delete_object(
wallet=self.wallet,
cid=storage_object_2.cid,
oid=storage_object_2.oid,
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
)
try_to_get_object_and_expect_error(
cid=storage_object_2.cid,
oid=storage_object_2.oid,
error_pattern=OBJECT_ALREADY_REMOVED_ERROR,
endpoint=self.cluster.default_http_gate_endpoint,
)
storage_objects_with_attributes.remove(storage_object_2)
with allure.step(
f'Download object#1 with attributes [Writer={storage_object_1.attributes["Writer"]}] and compare hashes'
):
key_value_pair = {"Writer": storage_object_1.attributes["Writer"]}
get_object_by_attr_and_verify_hashes(
oid=storage_object_1.oid,
file_name=storage_object_1.file_path,
cid=storage_object_1.cid,
attrs=key_value_pair,
endpoint=self.cluster.default_http_gate_endpoint,
)
@allure.title("[Negative] Try to put object and get right after container is deleted")
def test_negative_put_and_get_object3(
self, storage_objects_with_attributes: list[StorageObjectInfo]
):
"""
        Test attempts to put an object, then tries to download it right after the container has been deleted
Steps:
1. [Negative] Allocate and attempt to put object#3 via http with attributes: [Writer=Leo Tolstoy, Writer=peace, peace=peace]
Expected: "Error duplication of attributes detected"
2. Delete container
3. [Negative] Try to download object with attributes [peace=peace]
Expected: "HTTP request sent, awaiting response... 404 Not Found"
"""
storage_object_1 = storage_objects_with_attributes[0]
with allure.step(
"[Negative] Allocate and attemt to put object#3 via http with attributes: [Writer=Leo Tolstoy, Writer=peace, peace=peace]"
):
file_path_3 = generate_file(storage_object_1.size)
attrs_obj3 = {"Writer": "Leo Tolstoy", "peace": "peace"}
headers = attr_into_str_header_curl(attrs_obj3)
headers.append(" ".join(attr_into_str_header_curl({"Writer": "peace"})))
error_pattern = f"key duplication error: X-Attribute-Writer"
upload_via_http_gate_curl(
cid=storage_object_1.cid,
filepath=file_path_3,
endpoint=self.cluster.default_http_gate_endpoint,
headers=headers,
error_pattern=error_pattern,
)
with allure.step("Delete container and verify container deletion"):
delete_container(
wallet=self.wallet,
cid=storage_object_1.cid,
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
)
self.tick_epoch()
wait_for_container_deletion(
self.wallet,
storage_object_1.cid,
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
)
assert storage_object_1.cid not in list_containers(
self.wallet, shell=self.shell, endpoint=self.cluster.default_rpc_endpoint
)
with allure.step(
"[Negative] Try to download (wget) object via wget with attributes [peace=peace]"
):
request = f"/get/{storage_object_1.cid}/peace/peace"
error_pattern = "404 Not Found"
try_to_get_object_via_passed_request_and_expect_error(
cid=storage_object_1.cid,
oid="",
error_pattern=error_pattern,
attrs=attrs_obj3,
http_request_path=request,
endpoint=self.cluster.default_http_gate_endpoint,
)
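
Judging from how it is used above (its elements are joined with spaces and appended as extra curl arguments), attr_into_str_header_curl presumably renders attributes as curl header flags. The shape below is an assumption inferred from usage, not the helper's documented contract:

    # attr_into_str_header_curl({"Writer": "Leo Tolstoy"})
    # -> ["-H", "X-Attribute-Writer: Leo Tolstoy"]   (assumed output shape)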


@ -0,0 +1,127 @@
import logging
import allure
import pytest
from frostfs_testlib.resources.common import PUBLIC_ACL
from pytest_tests.helpers.container import create_container
from pytest_tests.helpers.file_helper import generate_file
from pytest_tests.helpers.frostfs_verbs import put_object_to_random_node
from pytest_tests.helpers.http_gate import (
get_object_and_verify_hashes,
get_object_by_attr_and_verify_hashes,
try_to_get_object_via_passed_request_and_expect_error,
)
from pytest_tests.steps.cluster_test_base import ClusterTestBase
logger = logging.getLogger("NeoLogger")
@pytest.mark.sanity
@pytest.mark.http_gate
class Test_http_object(ClusterTestBase):
PLACEMENT_RULE = "REP 2 IN X CBF 1 SELECT 4 FROM * AS X"
@pytest.fixture(scope="class", autouse=True)
@allure.title("[Class/Autouse]: Prepare wallet and deposit")
def prepare_wallet(self, default_wallet):
Test_http_object.wallet = default_wallet
@allure.title("Test Put over gRPC, Get over HTTP")
@pytest.mark.parametrize(
"object_size",
[pytest.lazy_fixture("simple_object_size"), pytest.lazy_fixture("complex_object_size")],
ids=["simple object", "complex object"],
)
def test_object_put_get_attributes(self, object_size: int):
"""
Test that object can be put using gRPC interface and get using HTTP.
Steps:
1. Create object;
2. Put objects using gRPC (frostfs-cli) with attributes [--attributes chapter1=peace,chapter2=war];
3. Download object using HTTP gate (https://github.com/TrueCloudLab/frostfs-http-gw#downloading);
4. Compare hashes between original and downloaded object;
        5. [Negative] Try to get the object with specified attributes and `get` request: [get/$CID/chapter1/peace];
6. Download the object with specified attributes and `get_by_attribute` request: [get_by_attribute/$CID/chapter1/peace];
7. Compare hashes between original and downloaded object;
        8. [Negative] Try to get the object via `get_by_attribute` request: [get_by_attribute/$CID/$OID];
Expected result:
Hashes must be the same.
"""
with allure.step("Create public container"):
cid = create_container(
self.wallet,
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
rule=self.PLACEMENT_RULE,
basic_acl=PUBLIC_ACL,
)
# Generate file
file_path = generate_file(object_size)
# List of Key=Value attributes
obj_key1 = "chapter1"
obj_value1 = "peace"
obj_key2 = "chapter2"
obj_value2 = "war"
# Prepare for grpc PUT request
key_value1 = obj_key1 + "=" + obj_value1
key_value2 = obj_key2 + "=" + obj_value2
with allure.step("Put objects using gRPC [--attributes chapter1=peace,chapter2=war]"):
oid = put_object_to_random_node(
wallet=self.wallet,
path=file_path,
cid=cid,
shell=self.shell,
cluster=self.cluster,
attributes=f"{key_value1},{key_value2}",
)
with allure.step("Get object and verify hashes [ get/$CID/$OID ]"):
get_object_and_verify_hashes(
oid=oid,
file_name=file_path,
wallet=self.wallet,
cid=cid,
shell=self.shell,
nodes=self.cluster.storage_nodes,
endpoint=self.cluster.default_http_gate_endpoint,
)
with allure.step("[Negative] try to get object: [get/$CID/chapter1/peace]"):
attrs = {obj_key1: obj_value1, obj_key2: obj_value2}
request = f"/get/{cid}/{obj_key1}/{obj_value1}"
expected_err_msg = "Failed to get object via HTTP gate:"
try_to_get_object_via_passed_request_and_expect_error(
cid=cid,
oid=oid,
error_pattern=expected_err_msg,
http_request_path=request,
attrs=attrs,
endpoint=self.cluster.default_http_gate_endpoint,
)
with allure.step(
"Download the object with attribute [get_by_attribute/$CID/chapter1/peace]"
):
get_object_by_attr_and_verify_hashes(
oid=oid,
file_name=file_path,
cid=cid,
attrs=attrs,
endpoint=self.cluster.default_http_gate_endpoint,
)
with allure.step("[Negative] try to get object: get_by_attribute/$CID/$OID"):
request = f"/get_by_attribute/{cid}/{oid}"
try_to_get_object_via_passed_request_and_expect_error(
cid=cid,
oid=oid,
error_pattern=expected_err_msg,
http_request_path=request,
endpoint=self.cluster.default_http_gate_endpoint,
)


@ -0,0 +1,70 @@
import logging
import allure
import pytest
from frostfs_testlib.resources.common import PUBLIC_ACL
from pytest_tests.helpers.container import create_container
from pytest_tests.helpers.file_helper import generate_file
from pytest_tests.helpers.http_gate import get_object_and_verify_hashes, upload_via_http_gate_curl
from pytest_tests.steps.cluster_test_base import ClusterTestBase
logger = logging.getLogger("NeoLogger")
@pytest.mark.sanity
@pytest.mark.http_gate
class Test_http_streaming(ClusterTestBase):
PLACEMENT_RULE = "REP 2 IN X CBF 1 SELECT 4 FROM * AS X"
@pytest.fixture(scope="class", autouse=True)
@allure.title("[Class/Autouse]: Prepare wallet and deposit")
def prepare_wallet(self, default_wallet):
Test_http_streaming.wallet = default_wallet
@allure.title("Test Put via pipe (steaming), Get over HTTP and verify hashes")
@pytest.mark.parametrize(
"object_size",
[pytest.lazy_fixture("complex_object_size")],
ids=["complex object"],
)
def test_object_can_be_put_get_by_streaming(self, object_size: int):
"""
        Test that object can be put using HTTP streaming (curl pipe) and get using HTTP.
Steps:
1. Create big object;
2. Put object using curl with pipe (streaming);
3. Download object using HTTP gate (https://github.com/TrueCloudLab/frostfs-http-gw#downloading);
4. Compare hashes between original and downloaded object;
Expected result:
Hashes must be the same.
"""
with allure.step("Create public container and verify container creation"):
cid = create_container(
self.wallet,
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
rule=self.PLACEMENT_RULE,
basic_acl=PUBLIC_ACL,
)
with allure.step("Allocate big object"):
# Generate file
file_path = generate_file(object_size)
with allure.step(
"Put objects using curl utility and Get object and verify hashes [ get/$CID/$OID ]"
):
oid = upload_via_http_gate_curl(
cid=cid, filepath=file_path, endpoint=self.cluster.default_http_gate_endpoint
)
get_object_and_verify_hashes(
oid=oid,
file_name=file_path,
wallet=self.wallet,
cid=cid,
shell=self.shell,
nodes=self.cluster.storage_nodes,
endpoint=self.cluster.default_http_gate_endpoint,
)
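
The "pipe (streaming)" in the title means curl reads the object body from stdin rather than from a file on disk; a hypothetical sketch of such an invocation (the real command is built inside upload_via_http_gate_curl):

    # '@-' tells curl to take the form-field content from stdin:
    stream_cmd = f"cat {file_path} | curl -F 'file=@-' http://<http_gate_endpoint>/upload/<cid>"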


@ -0,0 +1,409 @@
import calendar
import datetime
import logging
from typing import Optional
import allure
import pytest
from frostfs_testlib.resources.common import OBJECT_NOT_FOUND, PUBLIC_ACL
from pytest_tests.helpers.container import create_container
from pytest_tests.helpers.epoch import get_epoch, wait_for_epochs_align
from pytest_tests.helpers.file_helper import generate_file
from pytest_tests.helpers.frostfs_verbs import (
get_netmap_netinfo,
get_object_from_random_node,
head_object,
)
from pytest_tests.helpers.http_gate import (
attr_into_str_header_curl,
get_object_and_verify_hashes,
try_to_get_object_and_expect_error,
upload_via_http_gate_curl,
)
from pytest_tests.steps.cluster_test_base import ClusterTestBase
logger = logging.getLogger("NeoLogger")
EXPIRATION_TIMESTAMP_HEADER = "__FROSTFS__EXPIRATION_TIMESTAMP"
EXPIRATION_EPOCH_HEADER = "__FROSTFS__EXPIRATION_EPOCH"
EXPIRATION_DURATION_HEADER = "__FROSTFS__EXPIRATION_DURATION"
EXPIRATION_EXPIRATION_RFC = "__FROSTFS__EXPIRATION_RFC3339"
FROSTFS_EXPIRATION_EPOCH = "Frostfs-Expiration-Epoch"
FROSTFS_EXPIRATION_DURATION = "Frostfs-Expiration-Duration"
FROSTFS_EXPIRATION_TIMESTAMP = "Frostfs-Expiration-Timestamp"
FROSTFS_EXPIRATION_RFC3339 = "Frostfs-Expiration-RFC3339"
@pytest.mark.sanity
@pytest.mark.http_gate
class Test_http_system_header(ClusterTestBase):
PLACEMENT_RULE = "REP 2 IN X CBF 1 SELECT 2 FROM * AS X"
@pytest.fixture(scope="class", autouse=True)
@allure.title("[Class/Autouse]: Prepare wallet and deposit")
def prepare_wallet(self, default_wallet):
Test_http_system_header.wallet = default_wallet
@pytest.fixture(scope="class")
@allure.title("Create container")
def user_container(self):
return create_container(
wallet=self.wallet,
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
rule=self.PLACEMENT_RULE,
basic_acl=PUBLIC_ACL,
)
@pytest.fixture(scope="class")
@allure.title("epoch_duration in seconds")
def epoch_duration(self) -> int:
net_info = get_netmap_netinfo(
wallet=self.wallet,
endpoint=self.cluster.default_rpc_endpoint,
shell=self.shell,
)
epoch_duration_in_blocks = net_info["epoch_duration"]
time_per_block = net_info["time_per_block"]
return int(epoch_duration_in_blocks * time_per_block)
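
Worked example with illustrative numbers (not network defaults): if the netmap reports epoch_duration = 240 blocks and time_per_block = 1 second, the fixture returns 240 * 1 = 240 seconds per epoch.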
@allure.title("Return N-epoch count in minutes")
def epoch_count_into_mins(self, epoch_duration: int, epoch: int) -> str:
mins = epoch_duration * epoch / 60
return f"{mins}m"
@allure.title("Return future timestamp after N epochs are passed")
def epoch_count_into_timestamp(
self, epoch_duration: int, epoch: int, rfc3339: Optional[bool] = False
) -> str:
current_datetime = datetime.datetime.utcnow()
epoch_count_in_seconds = epoch_duration * epoch
future_datetime = current_datetime + datetime.timedelta(seconds=epoch_count_in_seconds)
if rfc3339:
return future_datetime.isoformat("T") + "Z"
else:
return str(calendar.timegm(future_datetime.timetuple()))
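
Worked example with illustrative numbers: with epoch_duration = 240 seconds and epoch = 2, the offset is 480 seconds, so the method returns the Unix timestamp of now + 480 s, or, with rfc3339=True, an ISO string such as 2023-03-14T12:29:40Z.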
@allure.title("Check is (header_output) Key=Value exists and equal in passed (header_to_find)")
def check_key_value_presented_header(self, header_output: dict, header_to_find: dict) -> bool:
header_att = header_output["header"]["attributes"]
for key_to_check, val_to_check in header_to_find.items():
if key_to_check not in header_att or val_to_check != header_att[key_to_check]:
logger.info(f"Unable to find {key_to_check}: '{val_to_check}' in {header_att}")
return False
return True
@allure.title(
f"Validate that only {EXPIRATION_EPOCH_HEADER} exists in header and other headers are abesent"
)
def validation_for_http_header_attr(self, head_info: dict, expected_epoch: int) -> None:
# check that __FROSTFS__EXPIRATION_EPOCH attribute has corresponding epoch
assert self.check_key_value_presented_header(
head_info, {EXPIRATION_EPOCH_HEADER: str(expected_epoch)}
), f'Expected to find {EXPIRATION_EPOCH_HEADER}: {expected_epoch} in: {head_info["header"]["attributes"]}'
        # check that {EXPIRATION_DURATION_HEADER} is absent from the header output
        assert not (
            self.check_key_value_presented_header(head_info, {EXPIRATION_DURATION_HEADER: ""})
        ), f"Only {EXPIRATION_EPOCH_HEADER} can be displayed in header attributes"
        # check that {EXPIRATION_TIMESTAMP_HEADER} is absent from the header output
        assert not (
            self.check_key_value_presented_header(head_info, {EXPIRATION_TIMESTAMP_HEADER: ""})
        ), f"Only {EXPIRATION_EPOCH_HEADER} can be displayed in header attributes"
        # check that {EXPIRATION_EXPIRATION_RFC} is absent from the header output
        assert not (
            self.check_key_value_presented_header(head_info, {EXPIRATION_EXPIRATION_RFC: ""})
        ), f"Only {EXPIRATION_EPOCH_HEADER} can be displayed in header attributes"
@allure.title("Put / get / verify object and return head command result to invoker")
def oid_header_info_for_object(self, file_path: str, attributes: dict, user_container: str):
oid = upload_via_http_gate_curl(
cid=user_container,
filepath=file_path,
endpoint=self.cluster.default_http_gate_endpoint,
headers=attr_into_str_header_curl(attributes),
)
get_object_and_verify_hashes(
oid=oid,
file_name=file_path,
wallet=self.wallet,
cid=user_container,
shell=self.shell,
nodes=self.cluster.storage_nodes,
endpoint=self.cluster.default_http_gate_endpoint,
)
head = head_object(
wallet=self.wallet,
cid=user_container,
oid=oid,
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
)
return oid, head
@allure.title("[negative] attempt to put object with expired epoch")
def test_unable_put_expired_epoch(self, user_container: str, simple_object_size: int):
headers = attr_into_str_header_curl(
{"Frostfs-Expiration-Epoch": str(get_epoch(self.shell, self.cluster) - 1)}
)
file_path = generate_file(simple_object_size)
with allure.step(
"Put object using HTTP with attribute Expiration-Epoch where epoch is expired"
):
upload_via_http_gate_curl(
cid=user_container,
filepath=file_path,
endpoint=self.cluster.default_http_gate_endpoint,
headers=headers,
error_pattern="object has expired",
)
@allure.title("[negative] attempt to put object with negative Frostfs-Expiration-Duration")
def test_unable_put_negative_duration(self, user_container: str, simple_object_size: int):
headers = attr_into_str_header_curl({"Frostfs-Expiration-Duration": "-1h"})
file_path = generate_file(simple_object_size)
with allure.step(
"Put object using HTTP with attribute Frostfs-Expiration-Duration where duration is negative"
):
upload_via_http_gate_curl(
cid=user_container,
filepath=file_path,
endpoint=self.cluster.default_http_gate_endpoint,
headers=headers,
error_pattern=f"{EXPIRATION_DURATION_HEADER} must be positive",
)
@allure.title(
"[negative] attempt to put object with Frostfs-Expiration-Timestamp value in the past"
)
def test_unable_put_expired_timestamp(self, user_container: str, simple_object_size: int):
headers = attr_into_str_header_curl({"Frostfs-Expiration-Timestamp": "1635075727"})
file_path = generate_file(simple_object_size)
with allure.step(
"Put object using HTTP with attribute Frostfs-Expiration-Timestamp where duration is in the past"
):
upload_via_http_gate_curl(
cid=user_container,
filepath=file_path,
endpoint=self.cluster.default_http_gate_endpoint,
headers=headers,
error_pattern=f"{EXPIRATION_TIMESTAMP_HEADER} must be in the future",
)
@allure.title(
"[negative] Put object using HTTP with attribute Frostfs-Expiration-RFC3339 where duration is in the past"
)
def test_unable_put_expired_rfc(self, user_container: str, simple_object_size: int):
headers = attr_into_str_header_curl({"Frostfs-Expiration-RFC3339": "2021-11-22T09:55:49Z"})
file_path = generate_file(simple_object_size)
upload_via_http_gate_curl(
cid=user_container,
filepath=file_path,
endpoint=self.cluster.default_http_gate_endpoint,
headers=headers,
error_pattern=f"{EXPIRATION_EXPIRATION_RFC} must be in the future",
)
@allure.title("priority of attributes epoch>duration")
@pytest.mark.parametrize(
"object_size",
[pytest.lazy_fixture("simple_object_size"), pytest.lazy_fixture("complex_object_size")],
ids=["simple object", "complex object"],
)
def test_http_attr_priority_epoch_duration(
self, user_container: str, object_size: int, epoch_duration: int
):
self.tick_epoch()
epoch_count = 1
expected_epoch = get_epoch(self.shell, self.cluster) + epoch_count
logger.info(
f"epoch duration={epoch_duration}, current_epoch= {get_epoch(self.shell, self.cluster)} expected_epoch {expected_epoch}"
)
attributes = {FROSTFS_EXPIRATION_EPOCH: expected_epoch, FROSTFS_EXPIRATION_DURATION: "1m"}
file_path = generate_file(object_size)
with allure.step(
f"Put objects using HTTP with attributes and head command should display {EXPIRATION_EPOCH_HEADER}: {expected_epoch} attr"
):
oid, head_info = self.oid_header_info_for_object(
file_path=file_path, attributes=attributes, user_container=user_container
)
self.validation_for_http_header_attr(head_info=head_info, expected_epoch=expected_epoch)
with allure.step("Check that object becomes unavailable when epoch is expired"):
for _ in range(0, epoch_count + 1):
self.tick_epoch()
assert (
get_epoch(self.shell, self.cluster) == expected_epoch + 1
), f"Epochs should be equal: {get_epoch(self.shell, self.cluster)} != {expected_epoch + 1}"
with allure.step("Check object deleted because it expires-on epoch"):
wait_for_epochs_align(self.shell, self.cluster)
try_to_get_object_and_expect_error(
cid=user_container,
oid=oid,
error_pattern="404 Not Found",
endpoint=self.cluster.default_http_gate_endpoint,
)
# check that object is not available via grpc
with pytest.raises(Exception, match=OBJECT_NOT_FOUND):
get_object_from_random_node(
self.wallet, user_container, oid, self.shell, self.cluster
)
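
To make the precedence concrete, here is a sketch of the conversion the gateway presumably performs (illustrative numbers; the real logic lives in frostfs-http-gw):

    # Assumed conversion: a Duration attribute is divided by the epoch length
    # and added to the current epoch, while an explicit Epoch attribute
    # overrides the computed value.
    epoch_duration = 240                # seconds per epoch (illustrative)
    current_epoch = 10
    expected_epoch = current_epoch + (8 * 60) // epoch_duration  # "8m" -> 12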
@allure.title(
f"priority of attributes duration>timestamp, duration time has higher priority and should be converted {EXPIRATION_EPOCH_HEADER}"
)
@pytest.mark.parametrize(
"object_size",
[pytest.lazy_fixture("simple_object_size"), pytest.lazy_fixture("complex_object_size")],
ids=["simple object", "complex object"],
)
def test_http_attr_priority_dur_timestamp(
self, user_container: str, object_size: int, epoch_duration: int
):
self.tick_epoch()
epoch_count = 2
expected_epoch = get_epoch(self.shell, self.cluster) + epoch_count
logger.info(
f"epoch duration={epoch_duration}, current_epoch= {get_epoch(self.shell, self.cluster)} expected_epoch {expected_epoch}"
)
attributes = {
FROSTFS_EXPIRATION_DURATION: self.epoch_count_into_mins(
epoch_duration=epoch_duration, epoch=2
),
FROSTFS_EXPIRATION_TIMESTAMP: self.epoch_count_into_timestamp(
epoch_duration=epoch_duration, epoch=1
),
}
file_path = generate_file(object_size)
with allure.step(
f"Put objects using HTTP with attributes and head command should display {EXPIRATION_EPOCH_HEADER}: {expected_epoch} attr"
):
oid, head_info = self.oid_header_info_for_object(
file_path=file_path, attributes=attributes, user_container=user_container
)
self.validation_for_http_header_attr(head_info=head_info, expected_epoch=expected_epoch)
with allure.step("Check that object becomes unavailable when epoch is expired"):
for _ in range(0, epoch_count + 1):
self.tick_epoch()
assert (
get_epoch(self.shell, self.cluster) == expected_epoch + 1
), f"Epochs should be equal: {get_epoch(self.shell, self.cluster)} != {expected_epoch + 1}"
with allure.step("Check object deleted because it expires-on epoch"):
wait_for_epochs_align(self.shell, self.cluster)
try_to_get_object_and_expect_error(
cid=user_container,
oid=oid,
error_pattern="404 Not Found",
endpoint=self.cluster.default_http_gate_endpoint,
)
# check that object is not available via grpc
with pytest.raises(Exception, match=OBJECT_NOT_FOUND):
get_object_from_random_node(
self.wallet, user_container, oid, self.shell, self.cluster
)
@allure.title(
f"priority of attributes timestamp>Expiration-RFC, timestamp has higher priority and should be converted {EXPIRATION_EPOCH_HEADER}"
)
@pytest.mark.parametrize(
"object_size",
[pytest.lazy_fixture("simple_object_size"), pytest.lazy_fixture("complex_object_size")],
ids=["simple object", "complex object"],
)
def test_http_attr_priority_timestamp_rfc(
self, user_container: str, object_size: int, epoch_duration: int
):
self.tick_epoch()
epoch_count = 2
expected_epoch = get_epoch(self.shell, self.cluster) + epoch_count
logger.info(
f"epoch duration={epoch_duration}, current_epoch= {get_epoch(self.shell, self.cluster)} expected_epoch {expected_epoch}"
)
attributes = {
FROSTFS_EXPIRATION_TIMESTAMP: self.epoch_count_into_timestamp(
epoch_duration=epoch_duration, epoch=2
),
FROSTFS_EXPIRATION_RFC3339: self.epoch_count_into_timestamp(
epoch_duration=epoch_duration, epoch=1, rfc3339=True
),
}
file_path = generate_file(object_size)
with allure.step(
f"Put objects using HTTP with attributes and head command should display {EXPIRATION_EPOCH_HEADER}: {expected_epoch} attr"
):
oid, head_info = self.oid_header_info_for_object(
file_path=file_path, attributes=attributes, user_container=user_container
)
self.validation_for_http_header_attr(head_info=head_info, expected_epoch=expected_epoch)
with allure.step("Check that object becomes unavailable when epoch is expired"):
for _ in range(0, epoch_count + 1):
self.tick_epoch()
assert (
get_epoch(self.shell, self.cluster) == expected_epoch + 1
), f"Epochs should be equal: {get_epoch(self.shell, self.cluster)} != {expected_epoch + 1}"
with allure.step("Check object deleted because it expires-on epoch"):
wait_for_epochs_align(self.shell, self.cluster)
try_to_get_object_and_expect_error(
cid=user_container,
oid=oid,
error_pattern="404 Not Found",
endpoint=self.cluster.default_http_gate_endpoint,
)
# check that object is not available via grpc
with pytest.raises(Exception, match=OBJECT_NOT_FOUND):
get_object_from_random_node(
self.wallet, user_container, oid, self.shell, self.cluster
)
@allure.title("Test that object is automatically delete when expiration passed")
@pytest.mark.parametrize(
"object_size",
[pytest.lazy_fixture("simple_object_size"), pytest.lazy_fixture("complex_object_size")],
ids=["simple object", "complex object"],
)
def test_http_rfc_object_unavailable_after_expir(
self, user_container: str, object_size: int, epoch_duration: int
):
self.tick_epoch()
epoch_count = 2
expected_epoch = get_epoch(self.shell, self.cluster) + epoch_count
logger.info(
f"epoch duration={epoch_duration}, current_epoch= {get_epoch(self.shell, self.cluster)} expected_epoch {expected_epoch}"
)
attributes = {
FROSTFS_EXPIRATION_RFC3339: self.epoch_count_into_timestamp(
epoch_duration=epoch_duration, epoch=2, rfc3339=True
)
}
file_path = generate_file(object_size)
with allure.step(
f"Put objects using HTTP with attributes and head command should display {EXPIRATION_EPOCH_HEADER}: {expected_epoch} attr"
):
oid, head_info = self.oid_header_info_for_object(
file_path=file_path,
attributes=attributes,
user_container=user_container,
)
self.validation_for_http_header_attr(head_info=head_info, expected_epoch=expected_epoch)
with allure.step("Check that object becomes unavailable when epoch is expired"):
for _ in range(0, epoch_count + 1):
self.tick_epoch()
assert (
get_epoch(self.shell, self.cluster) == expected_epoch + 1
), f"Epochs should be equal: {get_epoch(self.shell, self.cluster)} != {expected_epoch + 1}"
with allure.step("Check object deleted because it expires-on epoch"):
wait_for_epochs_align(self.shell, self.cluster)
try_to_get_object_and_expect_error(
cid=user_container,
oid=oid,
error_pattern="404 Not Found",
endpoint=self.cluster.default_http_gate_endpoint,
)
# check that object is not available via grpc
with pytest.raises(Exception, match=OBJECT_NOT_FOUND):
get_object_from_random_node(
self.wallet, user_container, oid, self.shell, self.cluster
)


@ -1,10 +1,10 @@
import allure
import pytest
from file_helper import generate_file
from s3_helper import object_key_from_file_path
from steps import s3_gate_bucket, s3_gate_object
from steps.s3_gate_base import TestS3GateBase
from pytest_tests.helpers.file_helper import generate_file
from pytest_tests.helpers.s3_helper import assert_s3_acl, object_key_from_file_path
from pytest_tests.steps import s3_gate_bucket, s3_gate_object
from pytest_tests.steps.s3_gate_base import TestS3GateBase
def pytest_generate_tests(metafunc):
@ -17,8 +17,8 @@ def pytest_generate_tests(metafunc):
@pytest.mark.s3_gate
class TestS3GateACL(TestS3GateBase):
@allure.title("Test S3: Object ACL")
def test_s3_object_ACL(self, bucket):
file_path = generate_file()
def test_s3_object_ACL(self, bucket, simple_object_size):
file_path = generate_file(simple_object_size)
file_name = object_key_from_file_path(file_path)
with allure.step("Put object into bucket, Check ACL is empty"):
@ -29,19 +29,12 @@ class TestS3GateACL(TestS3GateBase):
with allure.step("Put object ACL = public-read"):
s3_gate_object.put_object_acl_s3(self.s3_client, bucket, file_name, "public-read")
obj_acl = s3_gate_object.get_object_acl_s3(self.s3_client, bucket, file_name)
obj_permission = [permission.get("Permission") for permission in obj_acl]
assert obj_permission == [
"FULL_CONTROL",
"FULL_CONTROL",
], "Permission for all groups is FULL_CONTROL"
assert_s3_acl(acl_grants=obj_acl, permitted_users="AllUsers")
with allure.step("Put object ACL = private"):
s3_gate_object.put_object_acl_s3(self.s3_client, bucket, file_name, "private")
obj_acl = s3_gate_object.get_object_acl_s3(self.s3_client, bucket, file_name)
obj_permission = [permission.get("Permission") for permission in obj_acl]
assert obj_permission == [
"FULL_CONTROL",
], "Permission for Canonical User is FULL_CONTROL"
assert_s3_acl(acl_grants=obj_acl, permitted_users="CanonicalUser")
with allure.step(
"Put object with grant-read uri=http://acs.amazonaws.com/groups/global/AllUsers"
@ -53,30 +46,19 @@ class TestS3GateACL(TestS3GateBase):
grant_read="uri=http://acs.amazonaws.com/groups/global/AllUsers",
)
obj_acl = s3_gate_object.get_object_acl_s3(self.s3_client, bucket, file_name)
obj_permission = [permission.get("Permission") for permission in obj_acl]
assert obj_permission == [
"FULL_CONTROL",
"FULL_CONTROL",
], "Permission for all groups is FULL_CONTROL"
assert_s3_acl(acl_grants=obj_acl, permitted_users="AllUsers")
@allure.title("Test S3: Bucket ACL")
def test_s3_bucket_ACL(self):
with allure.step("Create bucket with ACL = public-read-write"):
bucket = s3_gate_bucket.create_bucket_s3(self.s3_client, True, acl="public-read-write")
bucket_acl = s3_gate_bucket.get_bucket_acl(self.s3_client, bucket)
bucket_permission = [permission.get("Permission") for permission in bucket_acl]
assert bucket_permission == [
"FULL_CONTROL",
"FULL_CONTROL",
], "Permission for all groups is FULL_CONTROL"
assert_s3_acl(acl_grants=bucket_acl, permitted_users="AllUsers")
with allure.step("Change bucket ACL to private"):
s3_gate_bucket.put_bucket_acl_s3(self.s3_client, bucket, acl="private")
bucket_acl = s3_gate_bucket.get_bucket_acl(self.s3_client, bucket)
bucket_permission = [permission.get("Permission") for permission in bucket_acl]
assert bucket_permission == [
"FULL_CONTROL"
], "Permission for CanonicalUser is FULL_CONTROL"
assert_s3_acl(acl_grants=bucket_acl, permitted_users="CanonicalUser")
with allure.step(
"Change bucket acl to --grant-write uri=http://acs.amazonaws.com/groups/global/AllUsers"
@ -87,8 +69,4 @@ class TestS3GateACL(TestS3GateBase):
grant_write="uri=http://acs.amazonaws.com/groups/global/AllUsers",
)
bucket_acl = s3_gate_bucket.get_bucket_acl(self.s3_client, bucket)
bucket_permission = [permission.get("Permission") for permission in bucket_acl]
assert bucket_permission == [
"FULL_CONTROL",
"FULL_CONTROL",
], "Permission for all groups is FULL_CONTROL"
assert_s3_acl(acl_grants=bucket_acl, permitted_users="AllUsers")
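
A minimal sketch of what a helper like assert_s3_acl presumably checks, assuming the grants follow the shape returned by the S3 get-ACL calls (the actual helper lives in pytest_tests.helpers.s3_helper):

    def assert_s3_acl(acl_grants: list, permitted_users: str) -> None:
        # Sketch: every grant carries FULL_CONTROL; the grantee type separates
        # a public ACL (Group/AllUsers) from a private one (CanonicalUser only).
        grantee_types = [grant["Grantee"]["Type"] for grant in acl_grants]
        for grant in acl_grants:
            assert grant.get("Permission") == "FULL_CONTROL", "Expected FULL_CONTROL grant"
        if permitted_users == "AllUsers":
            assert "Group" in grantee_types, "Expected a grant for the AllUsers group"
        else:
            assert grantee_types == ["CanonicalUser"], "Expected only the owner's grant"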


@ -2,11 +2,16 @@ from datetime import datetime, timedelta
import allure
import pytest
from file_helper import generate_file
from s3_helper import assert_object_lock_mode, check_objects_in_bucket, object_key_from_file_path
from steps import s3_gate_bucket, s3_gate_object
from steps.s3_gate_base import TestS3GateBase
from pytest_tests.helpers.file_helper import generate_file
from pytest_tests.helpers.s3_helper import (
assert_object_lock_mode,
assert_s3_acl,
check_objects_in_bucket,
object_key_from_file_path,
)
from pytest_tests.steps import s3_gate_bucket, s3_gate_object
from pytest_tests.steps.s3_gate_base import TestS3GateBase
def pytest_generate_tests(metafunc):
@ -24,41 +29,26 @@ class TestS3GateBucket(TestS3GateBase):
with allure.step("Create bucket with ACL private"):
bucket = s3_gate_bucket.create_bucket_s3(self.s3_client, True, acl="private")
bucket_acl = s3_gate_bucket.get_bucket_acl(self.s3_client, bucket)
bucket_permission = [permission.get("Permission") for permission in bucket_acl]
assert bucket_permission == [
"FULL_CONTROL"
], "Permission for CanonicalUser is FULL_CONTROL"
assert_s3_acl(acl_grants=bucket_acl, permitted_users="CanonicalUser")
with allure.step("Create bucket with ACL = public-read"):
bucket_1 = s3_gate_bucket.create_bucket_s3(self.s3_client, True, acl="public-read")
bucket_acl_1 = s3_gate_bucket.get_bucket_acl(self.s3_client, bucket_1)
bucket_permission_1 = [permission.get("Permission") for permission in bucket_acl_1]
assert bucket_permission_1 == [
"FULL_CONTROL",
"FULL_CONTROL",
], "Permission for all groups is FULL_CONTROL"
assert_s3_acl(acl_grants=bucket_acl_1, permitted_users="AllUsers")
with allure.step("Create bucket with ACL public-read-write"):
bucket_2 = s3_gate_bucket.create_bucket_s3(
self.s3_client, True, acl="public-read-write"
)
bucket_acl_2 = s3_gate_bucket.get_bucket_acl(self.s3_client, bucket_2)
bucket_permission_2 = [permission.get("Permission") for permission in bucket_acl_2]
assert bucket_permission_2 == [
"FULL_CONTROL",
"FULL_CONTROL",
], "Permission for CanonicalUser is FULL_CONTROL"
assert_s3_acl(acl_grants=bucket_acl_2, permitted_users="AllUsers")
with allure.step("Create bucket with ACL = authenticated-read"):
bucket_3 = s3_gate_bucket.create_bucket_s3(
self.s3_client, True, acl="authenticated-read"
)
bucket_acl_3 = s3_gate_bucket.get_bucket_acl(self.s3_client, bucket_3)
bucket_permission_3 = [permission.get("Permission") for permission in bucket_acl_3]
assert bucket_permission_3 == [
"FULL_CONTROL",
"FULL_CONTROL",
], "Permission for all groups is FULL_CONTROL"
assert_s3_acl(acl_grants=bucket_acl_3, permitted_users="AllUsers")
@allure.title("Test S3: Create Bucket with different ACL by grand")
def test_s3_create_bucket_with_grands(self):
@ -70,11 +60,7 @@ class TestS3GateBucket(TestS3GateBase):
grant_read="uri=http://acs.amazonaws.com/groups/global/AllUsers",
)
bucket_acl = s3_gate_bucket.get_bucket_acl(self.s3_client, bucket)
bucket_permission = [permission.get("Permission") for permission in bucket_acl]
assert bucket_permission == [
"FULL_CONTROL",
"FULL_CONTROL",
], "Permission for CanonicalUser is FULL_CONTROL"
assert_s3_acl(acl_grants=bucket_acl, permitted_users="AllUsers")
with allure.step("Create bucket with --grant-wtite"):
bucket_1 = s3_gate_bucket.create_bucket_s3(
@ -83,11 +69,7 @@ class TestS3GateBucket(TestS3GateBase):
grant_write="uri=http://acs.amazonaws.com/groups/global/AllUsers",
)
bucket_acl_1 = s3_gate_bucket.get_bucket_acl(self.s3_client, bucket_1)
bucket_permission_1 = [permission.get("Permission") for permission in bucket_acl_1]
assert bucket_permission_1 == [
"FULL_CONTROL",
"FULL_CONTROL",
], "Permission for all groups is FULL_CONTROL"
assert_s3_acl(acl_grants=bucket_acl_1, permitted_users="AllUsers")
with allure.step("Create bucket with --grant-full-control"):
bucket_2 = s3_gate_bucket.create_bucket_s3(
@ -96,15 +78,11 @@ class TestS3GateBucket(TestS3GateBase):
grant_full_control="uri=http://acs.amazonaws.com/groups/global/AllUsers",
)
bucket_acl_2 = s3_gate_bucket.get_bucket_acl(self.s3_client, bucket_2)
bucket_permission_2 = [permission.get("Permission") for permission in bucket_acl_2]
assert bucket_permission_2 == [
"FULL_CONTROL",
"FULL_CONTROL",
], "Permission for CanonicalUser is FULL_CONTROL"
assert_s3_acl(acl_grants=bucket_acl_2, permitted_users="AllUsers")
@allure.title("Test S3: create bucket with object lock")
def test_s3_bucket_object_lock(self):
file_path = generate_file()
def test_s3_bucket_object_lock(self, simple_object_size):
file_path = generate_file(simple_object_size)
file_name = object_key_from_file_path(file_path)
with allure.step("Create bucket with --no-object-lock-enabled-for-bucket"):
@ -138,10 +116,10 @@ class TestS3GateBucket(TestS3GateBase):
)
@allure.title("Test S3: delete bucket")
def test_s3_delete_bucket(self):
file_path_1 = generate_file()
def test_s3_delete_bucket(self, simple_object_size):
file_path_1 = generate_file(simple_object_size)
file_name_1 = object_key_from_file_path(file_path_1)
file_path_2 = generate_file()
file_path_2 = generate_file(simple_object_size)
file_name_2 = object_key_from_file_path(file_path_2)
bucket = s3_gate_bucket.create_bucket_s3(self.s3_client)


@ -4,26 +4,26 @@ from random import choice, choices
import allure
import pytest
from aws_cli_client import AwsCliClient
from common import ASSETS_DIR, COMPLEX_OBJ_SIZE, SIMPLE_OBJ_SIZE
from epoch import tick_epoch
from file_helper import (
from pytest_tests.helpers.aws_cli_client import AwsCliClient
from pytest_tests.helpers.epoch import tick_epoch
from pytest_tests.helpers.file_helper import (
generate_file,
generate_file_with_content,
get_file_content,
get_file_hash,
split_file,
)
from s3_helper import (
from pytest_tests.helpers.s3_helper import (
check_objects_in_bucket,
check_tags_by_bucket,
check_tags_by_object,
set_bucket_versioning,
try_to_get_objects_and_expect_error,
)
from steps import s3_gate_bucket, s3_gate_object
from steps.s3_gate_base import TestS3GateBase
from pytest_tests.resources.common import ASSETS_DIR
from pytest_tests.steps import s3_gate_bucket, s3_gate_object
from pytest_tests.steps.s3_gate_base import TestS3GateBase
logger = logging.getLogger("NeoLogger")
@ -33,18 +33,20 @@ def pytest_generate_tests(metafunc):
metafunc.parametrize("s3_client", ["aws cli", "boto3"], indirect=True)
@allure.link("https://github.com/nspcc-dev/neofs-s3-gw#neofs-s3-gateway", name="neofs-s3-gateway")
@allure.link(
"https://github.com/TrueCloudLab/frostfs-s3-gw#frostfs-s3-gw", name="frostfs-s3-gateway"
)
@pytest.mark.sanity
@pytest.mark.s3_gate
@pytest.mark.s3_gate_base
class TestS3Gate(TestS3GateBase):
@allure.title("Test S3 Bucket API")
def test_s3_buckets(self):
def test_s3_buckets(self, simple_object_size):
"""
Test base S3 Bucket API (Create/List/Head/Delete).
"""
file_path = generate_file()
file_path = generate_file(simple_object_size)
file_name = self.object_key_from_file_path(file_path)
with allure.step("Create buckets"):
@ -99,7 +101,7 @@ class TestS3Gate(TestS3GateBase):
with allure.step(f"Delete bucket {bucket_1}"):
s3_gate_bucket.delete_bucket_s3(self.s3_client, bucket_1)
tick_epoch(self.shell, self.cluster)
self.tick_epoch()
with allure.step(f"Check bucket {bucket_1} deleted"):
with pytest.raises(Exception, match=r".*Not Found.*"):
@ -109,11 +111,13 @@ class TestS3Gate(TestS3GateBase):
@pytest.mark.parametrize(
"file_type", ["simple", "large"], ids=["Simple object", "Large object"]
)
def test_s3_api_object(self, file_type, two_buckets):
def test_s3_api_object(self, file_type, two_buckets, simple_object_size, complex_object_size):
"""
Test base S3 Object API (Put/Head/List) for simple and large objects.
"""
file_path = generate_file(SIMPLE_OBJ_SIZE if file_type == "simple" else COMPLEX_OBJ_SIZE)
file_path = generate_file(
simple_object_size if file_type == "simple" else complex_object_size
)
file_name = self.object_key_from_file_path(file_path)
bucket_1, bucket_2 = two_buckets
@ -136,7 +140,7 @@ class TestS3Gate(TestS3GateBase):
s3_gate_object.get_object_attributes(self.s3_client, bucket, file_name, *attrs)
@allure.title("Test S3 Sync directory")
def test_s3_sync_dir(self, bucket):
def test_s3_sync_dir(self, bucket, simple_object_size):
"""
Test checks sync directory with AWS CLI utility.
"""
@ -147,8 +151,8 @@ class TestS3Gate(TestS3GateBase):
if not isinstance(self.s3_client, AwsCliClient):
pytest.skip("This test is not supported with boto3 client")
generate_file_with_content(file_path=file_path_1)
generate_file_with_content(file_path=file_path_2)
generate_file_with_content(simple_object_size, file_path=file_path_1)
generate_file_with_content(simple_object_size, file_path=file_path_2)
self.s3_client.sync(bucket_name=bucket, dir_path=os.path.dirname(file_path_1))
@ -166,19 +170,21 @@ class TestS3Gate(TestS3GateBase):
), "Expected hashes are the same"
@allure.title("Test S3 Object versioning")
def test_s3_api_versioning(self, bucket):
def test_s3_api_versioning(self, bucket, simple_object_size):
"""
Test checks basic versioning functionality for S3 bucket.
"""
version_1_content = "Version 1"
version_2_content = "Version 2"
file_name_simple = generate_file_with_content(content=version_1_content)
file_name_simple = generate_file_with_content(simple_object_size, content=version_1_content)
obj_key = os.path.basename(file_name_simple)
set_bucket_versioning(self.s3_client, bucket, s3_gate_bucket.VersioningStatus.ENABLED)
with allure.step("Put several versions of object into bucket"):
version_id_1 = s3_gate_object.put_object_s3(self.s3_client, bucket, file_name_simple)
generate_file_with_content(file_path=file_name_simple, content=version_2_content)
generate_file_with_content(
simple_object_size, file_path=file_name_simple, content=version_2_content
)
version_id_2 = s3_gate_object.put_object_s3(self.s3_client, bucket, file_name_simple)
with allure.step("Check bucket shows all versions"):
@ -246,13 +252,15 @@ class TestS3Gate(TestS3GateBase):
@pytest.mark.s3_gate_multipart
@allure.title("Test S3 Object Multipart API")
def test_s3_api_multipart(self, bucket):
def test_s3_api_multipart(self, bucket, simple_object_size):
"""
Test checks S3 Multipart API (Create multipart upload/Abort multipart upload/List multipart upload/
Upload part/List parts/Complete multipart upload).
"""
parts_count = 3
file_name_large = generate_file(SIMPLE_OBJ_SIZE * 1024 * 6 * parts_count) # 5Mb - min part
file_name_large = generate_file(
simple_object_size * 1024 * 6 * parts_count
)  # each part must be at least 5 MB (the S3 minimum part size)
object_key = self.object_key_from_file_path(file_name_large)
part_files = split_file(file_name_large, parts_count)
parts = []
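For reference, the multipart flow named in the docstring maps onto the plain boto3 calls below; this sketch uses boto3 directly rather than the project's s3_gate_object wrappers, and the endpoint, bucket, key and part files are placeholders:

# Reference sketch with plain boto3 instead of the s3_gate_object wrappers.
import boto3

client = boto3.client("s3", endpoint_url="http://localhost:8080")  # placeholder
part_files = ["part_1.bin", "part_2.bin", "part_3.bin"]  # split_file output in the test

upload_id = client.create_multipart_upload(Bucket="bucket", Key="key")["UploadId"]
uploaded_parts = []
for number, part_path in enumerate(part_files, start=1):
    with open(part_path, "rb") as file:
        etag = client.upload_part(
            Bucket="bucket", Key="key", UploadId=upload_id,
            PartNumber=number, Body=file,
        )["ETag"]
    uploaded_parts.append({"ETag": etag, "PartNumber": number})
client.complete_multipart_upload(
    Bucket="bucket", Key="key", UploadId=upload_id,
    MultipartUpload={"Parts": uploaded_parts},
)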
@ -320,7 +328,7 @@ class TestS3Gate(TestS3GateBase):
check_tags_by_bucket(self.s3_client, bucket, [])
@allure.title("Test S3 Object tagging API")
def test_s3_api_object_tagging(self, bucket):
def test_s3_api_object_tagging(self, bucket, simple_object_size):
"""
Test checks S3 Object tagging API (Put tag/Get tag/Update tag).
"""
@ -330,7 +338,7 @@ class TestS3Gate(TestS3GateBase):
("some-key--obj2", "some-value--obj2"),
]
key_value_pair_obj_new = [("some-key-obj-new", "some-value-obj-new")]
file_name_simple = generate_file(SIMPLE_OBJ_SIZE)
file_name_simple = generate_file(simple_object_size)
obj_key = self.object_key_from_file_path(file_name_simple)
s3_gate_bucket.put_bucket_tagging(self.s3_client, bucket, key_value_pair_bucket)
@ -350,7 +358,7 @@ class TestS3Gate(TestS3GateBase):
check_tags_by_object(self.s3_client, bucket, obj_key, [])
@allure.title("Test S3: Delete object & delete objects S3 API")
def test_s3_api_delete(self, two_buckets):
def test_s3_api_delete(self, two_buckets, simple_object_size, complex_object_size):
"""
Check delete_object and delete_objects S3 API operation. From first bucket some objects deleted one by one.
From second bucket some objects deleted all at once.
@ -359,7 +367,7 @@ class TestS3Gate(TestS3GateBase):
max_delete_objects = 17
put_objects = []
file_paths = []
obj_sizes = [SIMPLE_OBJ_SIZE, COMPLEX_OBJ_SIZE]
obj_sizes = [simple_object_size, complex_object_size]
bucket_1, bucket_2 = two_buckets
@ -406,12 +414,14 @@ class TestS3Gate(TestS3GateBase):
try_to_get_objects_and_expect_error(self.s3_client, bucket_2, objects_to_delete_b2)
@allure.title("Test S3: Copy object to the same bucket")
def test_s3_copy_same_bucket(self, bucket):
def test_s3_copy_same_bucket(self, bucket, complex_object_size, simple_object_size):
"""
Test object can be copied to the same bucket.
#TODO: delete after test_s3_copy_object is merged
"""
file_path_simple, file_path_large = generate_file(), generate_file(COMPLEX_OBJ_SIZE)
file_path_simple, file_path_large = generate_file(simple_object_size), generate_file(
complex_object_size
)
file_name_simple = self.object_key_from_file_path(file_path_simple)
file_name_large = self.object_key_from_file_path(file_path_large)
bucket_objects = [file_name_simple, file_name_large]
@ -448,12 +458,14 @@ class TestS3Gate(TestS3GateBase):
)
@allure.title("Test S3: Copy object to another bucket")
def test_s3_copy_to_another_bucket(self, two_buckets):
def test_s3_copy_to_another_bucket(self, two_buckets, complex_object_size, simple_object_size):
"""
Test object can be copied to another bucket.
#TODO: delete after test_s3_copy_object is merged
"""
file_path_simple, file_path_large = generate_file(), generate_file(COMPLEX_OBJ_SIZE)
file_path_simple, file_path_large = generate_file(simple_object_size), generate_file(
complex_object_size
)
file_name_simple = self.object_key_from_file_path(file_path_simple)
file_name_large = self.object_key_from_file_path(file_path_large)
bucket_1_objects = [file_name_simple, file_name_large]


@ -3,11 +3,15 @@ from datetime import datetime, timedelta
import allure
import pytest
from file_helper import generate_file, generate_file_with_content
from s3_helper import assert_object_lock_mode, check_objects_in_bucket, object_key_from_file_path
from steps import s3_gate_bucket, s3_gate_object
from steps.s3_gate_base import TestS3GateBase
from pytest_tests.helpers.file_helper import generate_file, generate_file_with_content
from pytest_tests.helpers.s3_helper import (
assert_object_lock_mode,
check_objects_in_bucket,
object_key_from_file_path,
)
from pytest_tests.steps import s3_gate_bucket, s3_gate_object
from pytest_tests.steps.s3_gate_base import TestS3GateBase
def pytest_generate_tests(metafunc):
@ -21,8 +25,8 @@ def pytest_generate_tests(metafunc):
@pytest.mark.parametrize("version_id", [None, "second"])
class TestS3GateLocking(TestS3GateBase):
@allure.title("Test S3: Checking the operation of retention period & legal lock on the object")
def test_s3_object_locking(self, version_id):
file_path = generate_file()
def test_s3_object_locking(self, version_id, simple_object_size):
file_path = generate_file(simple_object_size)
file_name = object_key_from_file_path(file_path)
retention_period = 2
@ -30,7 +34,7 @@ class TestS3GateLocking(TestS3GateBase):
with allure.step("Put several versions of object into bucket"):
s3_gate_object.put_object_s3(self.s3_client, bucket, file_path)
file_name_1 = generate_file_with_content(file_path=file_path)
file_name_1 = generate_file_with_content(simple_object_size, file_path=file_path)
version_id_2 = s3_gate_object.put_object_s3(self.s3_client, bucket, file_name_1)
check_objects_in_bucket(self.s3_client, bucket, [file_name])
if version_id:
@ -74,8 +78,8 @@ class TestS3GateLocking(TestS3GateBase):
s3_gate_object.delete_object_s3(self.s3_client, bucket, file_name, version_id)
@allure.title("Test S3: Checking the impossibility to change the retention mode COMPLIANCE")
def test_s3_mode_compliance(self, version_id):
file_path = generate_file()
def test_s3_mode_compliance(self, version_id, simple_object_size):
file_path = generate_file(simple_object_size)
file_name = object_key_from_file_path(file_path)
retention_period = 2
retention_period_1 = 1
@ -115,8 +119,8 @@ class TestS3GateLocking(TestS3GateBase):
)
@allure.title("Test S3: Checking the ability to change retention mode GOVERNANCE")
def test_s3_mode_governance(self, version_id):
file_path = generate_file()
def test_s3_mode_governance(self, version_id, simple_object_size):
file_path = generate_file(simple_object_size)
file_name = object_key_from_file_path(file_path)
retention_period = 3
retention_period_1 = 2
@ -183,8 +187,8 @@ class TestS3GateLocking(TestS3GateBase):
)
@allure.title("Test S3: Checking if an Object Cannot Be Locked")
def test_s3_legal_hold(self, version_id):
file_path = generate_file()
def test_s3_legal_hold(self, version_id, simple_object_size):
file_path = generate_file(simple_object_size)
file_name = object_key_from_file_path(file_path)
bucket = s3_gate_bucket.create_bucket_s3(self.s3_client, False)
@ -205,8 +209,8 @@ class TestS3GateLocking(TestS3GateBase):
@pytest.mark.s3_gate
class TestS3GateLockingBucket(TestS3GateBase):
@allure.title("Test S3: Bucket Lock")
def test_s3_bucket_lock(self):
file_path = generate_file()
def test_s3_bucket_lock(self, simple_object_size):
file_path = generate_file(simple_object_size)
file_name = object_key_from_file_path(file_path)
configuration = {"Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 1}}}
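The configuration dict above matches boto3's ObjectLockConfiguration shape. A sketch of applying and reading back a bucket-level default retention with plain boto3 (endpoint and bucket are placeholders, and the bucket must have been created with object lock enabled):

# Sketch with plain boto3; the test goes through the s3_gate_bucket wrappers instead.
import boto3

client = boto3.client("s3", endpoint_url="http://localhost:8080")  # placeholder
client.put_object_lock_configuration(
    Bucket="bucket",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 1}},
    },
)
lock = client.get_object_lock_configuration(Bucket="bucket")
assert lock["ObjectLockConfiguration"]["Rule"]["DefaultRetention"]["Days"] == 1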


@ -1,10 +1,14 @@
import allure
import pytest
from file_helper import generate_file, get_file_hash, split_file
from s3_helper import check_objects_in_bucket, object_key_from_file_path, set_bucket_versioning
from steps import s3_gate_bucket, s3_gate_object
from steps.s3_gate_base import TestS3GateBase
from pytest_tests.helpers.file_helper import generate_file, get_file_hash, split_file
from pytest_tests.helpers.s3_helper import (
check_objects_in_bucket,
object_key_from_file_path,
set_bucket_versioning,
)
from pytest_tests.steps import s3_gate_bucket, s3_gate_object
from pytest_tests.steps.s3_gate_base import TestS3GateBase
PART_SIZE = 5 * 1024 * 1024


@ -6,16 +6,25 @@ from random import choices, sample
import allure
import pytest
from aws_cli_client import AwsCliClient
from common import ASSETS_DIR, COMPLEX_OBJ_SIZE, FREE_STORAGE, SIMPLE_OBJ_SIZE, WALLET_PASS
from data_formatters import get_wallet_public_key
from file_helper import concat_files, generate_file, generate_file_with_content, get_file_hash
from neofs_testlib.utils.wallet import init_wallet
from python_keywords.payment_neogo import deposit_gas, transfer_gas
from s3_helper import assert_object_lock_mode, check_objects_in_bucket, set_bucket_versioning
from frostfs_testlib.utils import wallet_utils
from steps import s3_gate_bucket, s3_gate_object
from steps.s3_gate_base import TestS3GateBase
from pytest_tests.helpers.aws_cli_client import AwsCliClient
from pytest_tests.helpers.file_helper import (
concat_files,
generate_file,
generate_file_with_content,
get_file_hash,
)
from pytest_tests.helpers.payment_neogo import deposit_gas, transfer_gas
from pytest_tests.helpers.s3_helper import (
assert_object_lock_mode,
assert_s3_acl,
check_objects_in_bucket,
set_bucket_versioning,
)
from pytest_tests.resources.common import ASSETS_DIR, FREE_STORAGE, WALLET_PASS
from pytest_tests.steps import s3_gate_bucket, s3_gate_object
from pytest_tests.steps.s3_gate_base import TestS3GateBase
def pytest_generate_tests(metafunc):
@ -32,8 +41,8 @@ class TestS3GateObject(TestS3GateBase):
return os.path.basename(full_path)
@allure.title("Test S3: Copy object")
def test_s3_copy_object(self, two_buckets):
file_path = generate_file()
def test_s3_copy_object(self, two_buckets, simple_object_size):
file_path = generate_file(simple_object_size)
file_name = self.object_key_from_file_path(file_path)
bucket_1_objects = [file_name]
@ -79,9 +88,9 @@ class TestS3GateObject(TestS3GateBase):
s3_gate_object.copy_object_s3(self.s3_client, bucket_1, file_name)
@allure.title("Test S3: Copy version of object")
def test_s3_copy_version_object(self, two_buckets):
def test_s3_copy_version_object(self, two_buckets, simple_object_size):
version_1_content = "Version 1"
file_name_simple = generate_file_with_content(content=version_1_content)
file_name_simple = generate_file_with_content(simple_object_size, content=version_1_content)
obj_key = os.path.basename(file_name_simple)
bucket_1, bucket_2 = two_buckets
@ -115,9 +124,9 @@ class TestS3GateObject(TestS3GateBase):
s3_gate_object.copy_object_s3(self.s3_client, bucket_1, obj_key)
@allure.title("Test S3: Checking copy with acl")
def test_s3_copy_acl(self, bucket):
def test_s3_copy_acl(self, bucket, simple_object_size):
version_1_content = "Version 1"
file_name_simple = generate_file_with_content(content=version_1_content)
file_name_simple = generate_file_with_content(simple_object_size, content=version_1_content)
obj_key = os.path.basename(file_name_simple)
set_bucket_versioning(self.s3_client, bucket, s3_gate_bucket.VersioningStatus.ENABLED)
@ -131,15 +140,12 @@ class TestS3GateObject(TestS3GateBase):
self.s3_client, bucket, obj_key, ACL="public-read-write"
)
obj_acl = s3_gate_object.get_object_acl_s3(self.s3_client, bucket, copy_obj_path)
for control in obj_acl:
assert (
control.get("Permission") == "FULL_CONTROL"
), "Permission for all groups is FULL_CONTROL"
assert_s3_acl(acl_grants=obj_acl, permitted_users="CanonicalUser")
@allure.title("Test S3: Copy object with metadata")
def test_s3_copy_metadate(self, bucket):
def test_s3_copy_metadate(self, bucket, simple_object_size):
object_metadata = {f"{uuid.uuid4()}": f"{uuid.uuid4()}"}
file_path = generate_file()
file_path = generate_file(simple_object_size)
file_name = self.object_key_from_file_path(file_path)
bucket_1_objects = [file_name]
@ -187,9 +193,9 @@ class TestS3GateObject(TestS3GateBase):
), f"Metadata must be {object_metadata_1}"
@allure.title("Test S3: Copy object with tagging")
def test_s3_copy_tagging(self, bucket):
def test_s3_copy_tagging(self, bucket, simple_object_size):
object_tagging = [(f"{uuid.uuid4()}", f"{uuid.uuid4()}")]
file_path = generate_file()
file_path = generate_file(simple_object_size)
file_name_simple = self.object_key_from_file_path(file_path)
bucket_1_objects = [file_name_simple]
@ -239,10 +245,10 @@ class TestS3GateObject(TestS3GateBase):
assert tag in got_tags, f"Expected tag {tag} in {got_tags}"
@allure.title("Test S3: Delete version of object")
def test_s3_delete_versioning(self, bucket):
def test_s3_delete_versioning(self, bucket, complex_object_size, simple_object_size):
version_1_content = "Version 1"
version_2_content = "Version 2"
file_name_simple = generate_file_with_content(content=version_1_content)
file_name_simple = generate_file_with_content(simple_object_size, content=version_1_content)
obj_key = os.path.basename(file_name_simple)
set_bucket_versioning(self.s3_client, bucket, s3_gate_bucket.VersioningStatus.ENABLED)
@ -250,7 +256,7 @@ class TestS3GateObject(TestS3GateBase):
with allure.step("Put several versions of object into bucket"):
version_id_1 = s3_gate_object.put_object_s3(self.s3_client, bucket, file_name_simple)
file_name_1 = generate_file_with_content(
file_path=file_name_simple, content=version_2_content
simple_object_size, file_path=file_name_simple, content=version_2_content
)
version_id_2 = s3_gate_object.put_object_s3(self.s3_client, bucket, file_name_1)
@ -287,7 +293,7 @@ class TestS3GateObject(TestS3GateBase):
assert not "DeleteMarkers" in delete_obj.keys(), "Delete markes not found"
with allure.step("Put new object into bucket"):
file_name_simple = generate_file(COMPLEX_OBJ_SIZE)
file_name_simple = generate_file(complex_object_size)
obj_key = os.path.basename(file_name_simple)
version_id = s3_gate_object.put_object_s3(self.s3_client, bucket, file_name_simple)
@ -298,12 +304,12 @@ class TestS3GateObject(TestS3GateBase):
assert "DeleteMarker" in delete_obj.keys(), f"Expected delete Marker"
@allure.title("Test S3: bulk delete version of object")
def test_s3_bulk_delete_versioning(self, bucket):
def test_s3_bulk_delete_versioning(self, bucket, simple_object_size):
version_1_content = "Version 1"
version_2_content = "Version 2"
version_3_content = "Version 3"
version_4_content = "Version 4"
file_name_1 = generate_file_with_content(content=version_1_content)
file_name_1 = generate_file_with_content(simple_object_size, content=version_1_content)
obj_key = os.path.basename(file_name_1)
set_bucket_versioning(self.s3_client, bucket, s3_gate_bucket.VersioningStatus.ENABLED)
@ -311,15 +317,15 @@ class TestS3GateObject(TestS3GateBase):
with allure.step("Put several versions of object into bucket"):
version_id_1 = s3_gate_object.put_object_s3(self.s3_client, bucket, file_name_1)
file_name_2 = generate_file_with_content(
file_path=file_name_1, content=version_2_content
simple_object_size, file_path=file_name_1, content=version_2_content
)
version_id_2 = s3_gate_object.put_object_s3(self.s3_client, bucket, file_name_2)
file_name_3 = generate_file_with_content(
file_path=file_name_1, content=version_3_content
simple_object_size, file_path=file_name_1, content=version_3_content
)
version_id_3 = s3_gate_object.put_object_s3(self.s3_client, bucket, file_name_3)
file_name_4 = generate_file_with_content(
file_path=file_name_1, content=version_4_content
simple_object_size, file_path=file_name_1, content=version_4_content
)
version_id_4 = s3_gate_object.put_object_s3(self.s3_client, bucket, file_name_4)
version_ids = {version_id_1, version_id_2, version_id_3, version_id_4}
@ -349,17 +355,17 @@ class TestS3GateObject(TestS3GateBase):
), f"Expected object has versions: {version_to_save}"
@allure.title("Test S3: Get versions of object")
def test_s3_get_versioning(self, bucket):
def test_s3_get_versioning(self, bucket, simple_object_size):
version_1_content = "Version 1"
version_2_content = "Version 2"
file_name_simple = generate_file_with_content(content=version_1_content)
file_name_simple = generate_file_with_content(simple_object_size, content=version_1_content)
obj_key = os.path.basename(file_name_simple)
set_bucket_versioning(self.s3_client, bucket, s3_gate_bucket.VersioningStatus.ENABLED)
with allure.step("Put several versions of object into bucket"):
version_id_1 = s3_gate_object.put_object_s3(self.s3_client, bucket, file_name_simple)
file_name_1 = generate_file_with_content(
file_path=file_name_simple, content=version_2_content
simple_object_size, file_path=file_name_simple, content=version_2_content
)
version_id_2 = s3_gate_object.put_object_s3(self.s3_client, bucket, file_name_1)
@ -388,14 +394,14 @@ class TestS3GateObject(TestS3GateBase):
), f"Get object with version {version_id_2}"
@allure.title("Test S3: Get range")
def test_s3_get_range(self, bucket):
file_path = generate_file(COMPLEX_OBJ_SIZE)
def test_s3_get_range(self, bucket, complex_object_size: int, simple_object_size: int):
file_path = generate_file(complex_object_size)
file_name = self.object_key_from_file_path(file_path)
file_hash = get_file_hash(file_path)
set_bucket_versioning(self.s3_client, bucket, s3_gate_bucket.VersioningStatus.ENABLED)
with allure.step("Put several versions of object into bucket"):
version_id_1 = s3_gate_object.put_object_s3(self.s3_client, bucket, file_path)
file_name_1 = generate_file_with_content(file_path=file_path)
file_name_1 = generate_file_with_content(simple_object_size, file_path=file_path)
version_id_2 = s3_gate_object.put_object_s3(self.s3_client, bucket, file_name_1)
with allure.step("Get first version of object"):
@ -404,42 +410,46 @@ class TestS3GateObject(TestS3GateBase):
bucket,
file_name,
version_id_1,
range=[0, int(COMPLEX_OBJ_SIZE / 3)],
range=[0, int(complex_object_size / 3)],
)
object_1_part_2 = s3_gate_object.get_object_s3(
self.s3_client,
bucket,
file_name,
version_id_1,
range=[int(COMPLEX_OBJ_SIZE / 3) + 1, 2 * int(COMPLEX_OBJ_SIZE / 3)],
range=[int(complex_object_size / 3) + 1, 2 * int(complex_object_size / 3)],
)
object_1_part_3 = s3_gate_object.get_object_s3(
self.s3_client,
bucket,
file_name,
version_id_1,
range=[2 * int(COMPLEX_OBJ_SIZE / 3) + 1, COMPLEX_OBJ_SIZE],
range=[2 * int(complex_object_size / 3) + 1, complex_object_size],
)
con_file = concat_files([object_1_part_1, object_1_part_2, object_1_part_3])
assert get_file_hash(con_file) == file_hash, "Hashes must be the same"
with allure.step("Get second version of object"):
object_2_part_1 = s3_gate_object.get_object_s3(
self.s3_client, bucket, file_name, version_id_2, range=[0, int(SIMPLE_OBJ_SIZE / 3)]
self.s3_client,
bucket,
file_name,
version_id_2,
range=[0, int(simple_object_size / 3)],
)
object_2_part_2 = s3_gate_object.get_object_s3(
self.s3_client,
bucket,
file_name,
version_id_2,
range=[int(SIMPLE_OBJ_SIZE / 3) + 1, 2 * int(SIMPLE_OBJ_SIZE / 3)],
range=[int(simple_object_size / 3) + 1, 2 * int(simple_object_size / 3)],
)
object_2_part_3 = s3_gate_object.get_object_s3(
self.s3_client,
bucket,
file_name,
version_id_2,
range=[2 * int(SIMPLE_OBJ_SIZE / 3) + 1, COMPLEX_OBJ_SIZE],
range=[2 * int(simple_object_size / 3) + 1, simple_object_size],
)
con_file_1 = concat_files([object_2_part_1, object_2_part_2, object_2_part_3])
assert get_file_hash(con_file_1) == get_file_hash(
@ -448,28 +458,28 @@ class TestS3GateObject(TestS3GateBase):
with allure.step("Get object"):
object_3_part_1 = s3_gate_object.get_object_s3(
self.s3_client, bucket, file_name, range=[0, int(SIMPLE_OBJ_SIZE / 3)]
self.s3_client, bucket, file_name, range=[0, int(simple_object_size / 3)]
)
object_3_part_2 = s3_gate_object.get_object_s3(
self.s3_client,
bucket,
file_name,
range=[int(SIMPLE_OBJ_SIZE / 3) + 1, 2 * int(SIMPLE_OBJ_SIZE / 3)],
range=[int(simple_object_size / 3) + 1, 2 * int(simple_object_size / 3)],
)
object_3_part_3 = s3_gate_object.get_object_s3(
self.s3_client,
bucket,
file_name,
range=[2 * int(SIMPLE_OBJ_SIZE / 3) + 1, COMPLEX_OBJ_SIZE],
range=[2 * int(simple_object_size / 3) + 1, simple_object_size],
)
con_file = concat_files([object_3_part_1, object_3_part_2, object_3_part_3])
assert get_file_hash(con_file) == get_file_hash(file_name_1), "Hashes must be the same"
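The range=[start, end] lists above presumably translate into an inclusive HTTP Range header on GetObject. A sketch with plain boto3 (endpoint, bucket, key and sizes are placeholders):

# Sketch with plain boto3: fetch the first third of a specific object version.
import boto3

client = boto3.client("s3", endpoint_url="http://localhost:8080")  # placeholder
simple_object_size = 1000  # a fixture in the tests
first_third = client.get_object(
    Bucket="bucket",
    Key="key",
    VersionId="version-id-2",  # omit to read the latest version
    Range=f"bytes=0-{int(simple_object_size / 3)}",  # inclusive byte range
)["Body"].read()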
@allure.title("Test S3: Copy object with metadata")
@pytest.mark.smoke
def test_s3_head_object(self, bucket):
def test_s3_head_object(self, bucket, complex_object_size, simple_object_size):
object_metadata = {f"{uuid.uuid4()}": f"{uuid.uuid4()}"}
file_path = generate_file(COMPLEX_OBJ_SIZE)
file_path = generate_file(complex_object_size)
file_name = self.object_key_from_file_path(file_path)
set_bucket_versioning(self.s3_client, bucket, s3_gate_bucket.VersioningStatus.ENABLED)
@ -477,7 +487,7 @@ class TestS3GateObject(TestS3GateBase):
version_id_1 = s3_gate_object.put_object_s3(
self.s3_client, bucket, file_path, Metadata=object_metadata
)
file_name_1 = generate_file_with_content(file_path=file_path)
file_name_1 = generate_file_with_content(simple_object_size, file_path=file_path)
version_id_2 = s3_gate_object.put_object_s3(self.s3_client, bucket, file_name_1)
with allure.step("Get head of first version of object"):
@ -506,10 +516,10 @@ class TestS3GateObject(TestS3GateBase):
@allure.title("Test S3: list of object with versions")
@pytest.mark.parametrize("list_type", ["v1", "v2"])
def test_s3_list_object(self, list_type: str, bucket):
file_path_1 = generate_file(COMPLEX_OBJ_SIZE)
def test_s3_list_object(self, list_type: str, bucket, complex_object_size):
file_path_1 = generate_file(complex_object_size)
file_name = self.object_key_from_file_path(file_path_1)
file_path_2 = generate_file(COMPLEX_OBJ_SIZE)
file_path_2 = generate_file(complex_object_size)
file_name_2 = self.object_key_from_file_path(file_path_2)
set_bucket_versioning(self.s3_client, bucket, s3_gate_bucket.VersioningStatus.ENABLED)
@ -543,8 +553,8 @@ class TestS3GateObject(TestS3GateBase):
assert "DeleteMarker" in delete_obj.keys(), f"Expected delete Marker"
@allure.title("Test S3: put object")
def test_s3_put_object(self, bucket):
file_path_1 = generate_file(COMPLEX_OBJ_SIZE)
def test_s3_put_object(self, bucket, complex_object_size, simple_object_size):
file_path_1 = generate_file(complex_object_size)
file_name = self.object_key_from_file_path(file_path_1)
object_1_metadata = {f"{uuid.uuid4()}": f"{uuid.uuid4()}"}
tag_key_1 = "tag1"
@ -569,7 +579,7 @@ class TestS3GateObject(TestS3GateBase):
], "Tags must be the same"
with allure.step("Rewrite file into bucket"):
file_path_2 = generate_file_with_content(file_path=file_path_1)
file_path_2 = generate_file_with_content(simple_object_size, file_path=file_path_1)
s3_gate_object.put_object_s3(
self.s3_client, bucket, file_path_2, Metadata=object_2_metadata, Tagging=tag_2
)
@ -583,7 +593,7 @@ class TestS3GateObject(TestS3GateBase):
set_bucket_versioning(self.s3_client, bucket, s3_gate_bucket.VersioningStatus.ENABLED)
file_path_3 = generate_file(COMPLEX_OBJ_SIZE)
file_path_3 = generate_file(complex_object_size)
file_hash = get_file_hash(file_path_3)
file_name_3 = self.object_key_from_file_path(file_path_3)
object_3_metadata = {f"{uuid.uuid4()}": f"{uuid.uuid4()}"}
@ -604,7 +614,7 @@ class TestS3GateObject(TestS3GateBase):
], "Tags must be the same"
with allure.step("Put new version of file into bucket"):
file_path_4 = generate_file_with_content(file_path=file_path_3)
file_path_4 = generate_file_with_content(simple_object_size, file_path=file_path_3)
version_id_2 = s3_gate_object.put_object_s3(self.s3_client, bucket, file_path_4)
versions = s3_gate_object.list_objects_versions_s3(self.s3_client, bucket)
obj_versions = {
@ -655,10 +665,10 @@ class TestS3GateObject(TestS3GateBase):
@pytest.fixture
def prepare_two_wallets(self, default_wallet, client_shell):
self.main_wallet = default_wallet
self.main_public_key = get_wallet_public_key(self.main_wallet, WALLET_PASS)
self.main_public_key = wallet_utils.get_wallet_public_key(self.main_wallet, WALLET_PASS)
self.other_wallet = os.path.join(os.getcwd(), ASSETS_DIR, f"{str(uuid.uuid4())}.json")
init_wallet(self.other_wallet, WALLET_PASS)
self.other_public_key = get_wallet_public_key(self.other_wallet, WALLET_PASS)
wallet_utils.init_wallet(self.other_wallet, WALLET_PASS)
self.other_public_key = wallet_utils.get_wallet_public_key(self.other_wallet, WALLET_PASS)
if not FREE_STORAGE:
main_chain = self.cluster.main_chain_nodes[0]
@ -680,8 +690,15 @@ class TestS3GateObject(TestS3GateBase):
@allure.title("Test S3: put object with ACL")
@pytest.mark.parametrize("bucket_versioning", ["ENABLED", "SUSPENDED"])
def test_s3_put_object_acl(self, prepare_two_wallets, bucket_versioning, bucket):
file_path_1 = generate_file(COMPLEX_OBJ_SIZE)
def test_s3_put_object_acl(
self,
prepare_two_wallets,
bucket_versioning,
bucket,
complex_object_size,
simple_object_size,
):
file_path_1 = generate_file(complex_object_size)
file_name = self.object_key_from_file_path(file_path_1)
if bucket_versioning == "ENABLED":
status = s3_gate_bucket.VersioningStatus.ENABLED
@ -692,56 +709,43 @@ class TestS3GateObject(TestS3GateBase):
with allure.step("Put object with acl private"):
s3_gate_object.put_object_s3(self.s3_client, bucket, file_path_1, ACL="private")
obj_acl = s3_gate_object.get_object_acl_s3(self.s3_client, bucket, file_name)
obj_permission = [permission.get("Permission") for permission in obj_acl]
assert obj_permission == ["FULL_CONTROL"], "Permission for all groups is FULL_CONTROL"
assert_s3_acl(acl_grants=obj_acl, permitted_users="CanonicalUser")
object_1 = s3_gate_object.get_object_s3(self.s3_client, bucket, file_name)
assert get_file_hash(file_path_1) == get_file_hash(object_1), "Hashes must be the same"
with allure.step("Put object with acl public-read"):
file_path_2 = generate_file_with_content(file_path=file_path_1)
file_path_2 = generate_file_with_content(simple_object_size, file_path=file_path_1)
s3_gate_object.put_object_s3(self.s3_client, bucket, file_path_2, ACL="public-read")
obj_acl = s3_gate_object.get_object_acl_s3(self.s3_client, bucket, file_name)
obj_permission = [permission.get("Permission") for permission in obj_acl]
assert obj_permission == [
"FULL_CONTROL",
"FULL_CONTROL",
], "Permission for all groups is FULL_CONTROL"
assert_s3_acl(acl_grants=obj_acl, permitted_users="AllUsers")
object_2 = s3_gate_object.get_object_s3(self.s3_client, bucket, file_name)
assert get_file_hash(file_path_2) == get_file_hash(object_2), "Hashes must be the same"
with allure.step("Put object with acl public-read-write"):
file_path_3 = generate_file_with_content(file_path=file_path_1)
file_path_3 = generate_file_with_content(simple_object_size, file_path=file_path_1)
s3_gate_object.put_object_s3(
self.s3_client, bucket, file_path_3, ACL="public-read-write"
)
obj_acl = s3_gate_object.get_object_acl_s3(self.s3_client, bucket, file_name)
obj_permission = [permission.get("Permission") for permission in obj_acl]
assert obj_permission == [
"FULL_CONTROL",
"FULL_CONTROL",
], "Permission for all groups is FULL_CONTROL"
assert_s3_acl(acl_grants=obj_acl, permitted_users="AllUsers")
object_3 = s3_gate_object.get_object_s3(self.s3_client, bucket, file_name)
assert get_file_hash(file_path_3) == get_file_hash(object_3), "Hashes must be the same"
with allure.step("Put object with acl authenticated-read"):
file_path_4 = generate_file_with_content(file_path=file_path_1)
file_path_4 = generate_file_with_content(simple_object_size, file_path=file_path_1)
s3_gate_object.put_object_s3(
self.s3_client, bucket, file_path_4, ACL="authenticated-read"
)
obj_acl = s3_gate_object.get_object_acl_s3(self.s3_client, bucket, file_name)
obj_permission = [permission.get("Permission") for permission in obj_acl]
assert obj_permission == [
"FULL_CONTROL",
"FULL_CONTROL",
], "Permission for all groups is FULL_CONTROL"
assert_s3_acl(acl_grants=obj_acl, permitted_users="AllUsers")
object_4 = s3_gate_object.get_object_s3(self.s3_client, bucket, file_name)
assert get_file_hash(file_path_4) == get_file_hash(object_4), "Hashes must be the same"
file_path_5 = generate_file(COMPLEX_OBJ_SIZE)
file_path_5 = generate_file(complex_object_size)
file_name_5 = self.object_key_from_file_path(file_path_5)
with allure.step("Put object with --grant-full-control id=mycanonicaluserid"):
file_path_6 = generate_file_with_content(file_path=file_path_5)
file_path_6 = generate_file_with_content(simple_object_size, file_path=file_path_5)
s3_gate_object.put_object_s3(
self.s3_client,
bucket,
@ -749,18 +753,14 @@ class TestS3GateObject(TestS3GateBase):
GrantFullControl=f"id={self.other_public_key}",
)
obj_acl = s3_gate_object.get_object_acl_s3(self.s3_client, bucket, file_name_5)
obj_permission = [permission.get("Permission") for permission in obj_acl]
assert obj_permission == [
"FULL_CONTROL",
"FULL_CONTROL",
], "Permission for all groups is FULL_CONTROL"
assert_s3_acl(acl_grants=obj_acl, permitted_users="CanonicalUser")
object_4 = s3_gate_object.get_object_s3(self.s3_client, bucket, file_name_5)
assert get_file_hash(file_path_5) == get_file_hash(object_4), "Hashes must be the same"
with allure.step(
"Put object with --grant-read uri=http://acs.amazonaws.com/groups/global/AllUsers"
):
file_path_7 = generate_file_with_content(file_path=file_path_5)
file_path_7 = generate_file_with_content(simple_object_size, file_path=file_path_5)
s3_gate_object.put_object_s3(
self.s3_client,
bucket,
@ -768,19 +768,16 @@ class TestS3GateObject(TestS3GateBase):
GrantRead="uri=http://acs.amazonaws.com/groups/global/AllUsers",
)
obj_acl = s3_gate_object.get_object_acl_s3(self.s3_client, bucket, file_name_5)
obj_permission = [permission.get("Permission") for permission in obj_acl]
assert obj_permission == [
"FULL_CONTROL",
"FULL_CONTROL",
], "Permission for all groups is FULL_CONTROL"
assert_s3_acl(acl_grants=obj_acl, permitted_users="AllUsers")
object_7 = s3_gate_object.get_object_s3(self.s3_client, bucket, file_name_5)
assert get_file_hash(file_path_7) == get_file_hash(object_7), "Hashes must be the same"
@allure.title("Test S3: put object with lock-mode")
def test_s3_put_object_lock_mode(self, bucket):
def test_s3_put_object_lock_mode(self, complex_object_size, simple_object_size):
file_path_1 = generate_file(COMPLEX_OBJ_SIZE)
file_path_1 = generate_file(complex_object_size)
file_name = self.object_key_from_file_path(file_path_1)
bucket = s3_gate_bucket.create_bucket_s3(self.s3_client, True)
set_bucket_versioning(self.s3_client, bucket, s3_gate_bucket.VersioningStatus.ENABLED)
with allure.step(
@ -803,7 +800,7 @@ class TestS3GateObject(TestS3GateBase):
"Put new version of object with [--object-lock-mode COMPLIANCE] и [--object-lock-retain-until-date +3days]"
):
date_obj = datetime.utcnow() + timedelta(days=2)
file_name_1 = generate_file_with_content(file_path=file_path_1)
file_name_1 = generate_file_with_content(simple_object_size, file_path=file_path_1)
s3_gate_object.put_object_s3(
self.s3_client,
bucket,
@ -819,7 +816,7 @@ class TestS3GateObject(TestS3GateBase):
"Put new version of object with [--object-lock-mode COMPLIANCE] и [--object-lock-retain-until-date +2days]"
):
date_obj = datetime.utcnow() + timedelta(days=3)
file_name_1 = generate_file_with_content(file_path=file_path_1)
file_name_1 = generate_file_with_content(simple_object_size, file_path=file_path_1)
s3_gate_object.put_object_s3(
self.s3_client,
bucket,
@ -857,7 +854,7 @@ class TestS3GateObject(TestS3GateBase):
@allure.title("Test S3 Sync directory")
@pytest.mark.parametrize("sync_type", ["sync", "cp"])
def test_s3_sync_dir(self, sync_type, bucket):
def test_s3_sync_dir(self, sync_type, bucket, simple_object_size):
file_path_1 = os.path.join(os.getcwd(), ASSETS_DIR, "test_sync", "test_file_1")
file_path_2 = os.path.join(os.getcwd(), ASSETS_DIR, "test_sync", "test_file_2")
object_metadata = {f"{uuid.uuid4()}": f"{uuid.uuid4()}"}
@ -866,8 +863,8 @@ class TestS3GateObject(TestS3GateBase):
if not isinstance(self.s3_client, AwsCliClient):
pytest.skip("This test is not supported with boto3 client")
generate_file_with_content(file_path=file_path_1)
generate_file_with_content(file_path=file_path_2)
generate_file_with_content(simple_object_size, file_path=file_path_1)
generate_file_with_content(simple_object_size, file_path=file_path_2)
set_bucket_versioning(self.s3_client, bucket, s3_gate_bucket.VersioningStatus.ENABLED)
# TODO: return ACL when https://github.com/nspcc-dev/neofs-s3-gw/issues/685 is closed
if sync_type == "sync":
@ -901,18 +898,15 @@ class TestS3GateObject(TestS3GateBase):
assert (
obj_head.get("Metadata") == object_metadata
), f"Metadata of object is {object_metadata}"
# Uncomment after https://github.com/nspcc-dev/neofs-s3-gw/issues/685 is solved
# obj_acl = s3_gate_object.get_object_acl_s3(self.s3_client, bucket, obj_key)
# obj_permission = [permission.get("Permission") for permission in obj_acl]
# assert obj_permission == [
# "FULL_CONTROL",
# "FULL_CONTROL",
# ], "Permission for all groups is FULL_CONTROL"
# assert_s3_acl(acl_grants = obj_acl, permitted_users = "AllUsers")
@allure.title("Test S3 Put 10 nested level object")
def test_s3_put_10_folder(self, bucket, temp_directory):
def test_s3_put_10_folder(self, bucket, temp_directory, simple_object_size):
path = "/".join(["".join(choices(string.ascii_letters, k=3)) for _ in range(10)])
file_path_1 = os.path.join(temp_directory, path, "test_file_1")
generate_file_with_content(file_path=file_path_1)
generate_file_with_content(simple_object_size, file_path=file_path_1)
file_name = self.object_key_from_file_path(file_path_1)
objects_list = s3_gate_object.list_objects_s3(self.s3_client, bucket)
assert not objects_list, f"Expected empty bucket, got {objects_list}"


@ -1,28 +1,22 @@
import os
import time
from datetime import datetime, timedelta
from random import choice
from string import ascii_letters
from typing import Tuple
import allure
import pytest
from file_helper import generate_file, generate_file_with_content
from python_keywords.container import search_container_by_name
from python_keywords.storage_policy import get_simple_object_copies
from s3_helper import (
assert_object_lock_mode,
from pytest_tests.helpers.container import search_container_by_name
from pytest_tests.helpers.file_helper import generate_file
from pytest_tests.helpers.s3_helper import (
check_objects_in_bucket,
object_key_from_file_path,
set_bucket_versioning,
)
from steps import s3_gate_bucket, s3_gate_object
from steps.s3_gate_base import TestS3GateBase
from pytest_tests.helpers.storage_policy import get_simple_object_copies
from pytest_tests.steps import s3_gate_bucket, s3_gate_object
from pytest_tests.steps.s3_gate_base import TestS3GateBase
def pytest_generate_tests(metafunc):
policy = f"{os.getcwd()}/robot/resources/files/policy.json"
policy = f"{os.getcwd()}/pytest_tests/resources/files/policy.json"
if "s3_client" in metafunc.fixturenames:
metafunc.parametrize(
"s3_client",
@ -35,10 +29,10 @@ def pytest_generate_tests(metafunc):
@pytest.mark.s3_gate
class TestS3GatePolicy(TestS3GateBase):
@allure.title("Test S3: Verify bucket creation with retention policy applied")
def test_s3_bucket_location(self):
file_path_1 = generate_file()
def test_s3_bucket_location(self, simple_object_size):
file_path_1 = generate_file(simple_object_size)
file_name_1 = object_key_from_file_path(file_path_1)
file_path_2 = generate_file()
file_path_2 = generate_file(simple_object_size)
file_name_2 = object_key_from_file_path(file_path_2)
with allure.step("Create two buckets with different bucket configuration"):
@ -105,7 +99,7 @@ class TestS3GatePolicy(TestS3GateBase):
s3_gate_bucket.get_bucket_policy(self.s3_client, bucket)
with allure.step("Put new policy"):
custom_policy = f"file://{os.getcwd()}/robot/resources/files/bucket_policy.json"
custom_policy = f"file://{os.getcwd()}/pytest_tests/resources/files/bucket_policy.json"
custom_policy = {
"Version": "2008-10-17",
"Id": "aaaa-bbbb-cccc-dddd",


@ -1,16 +1,18 @@
import os
import uuid
from random import choice
from string import ascii_letters
from typing import Tuple
import allure
import pytest
from file_helper import generate_file
from s3_helper import check_tags_by_bucket, check_tags_by_object, object_key_from_file_path
from steps import s3_gate_bucket, s3_gate_object
from steps.s3_gate_base import TestS3GateBase
from pytest_tests.helpers.file_helper import generate_file
from pytest_tests.helpers.s3_helper import (
check_tags_by_bucket,
check_tags_by_object,
object_key_from_file_path,
)
from pytest_tests.steps import s3_gate_bucket, s3_gate_object
from pytest_tests.steps.s3_gate_base import TestS3GateBase
def pytest_generate_tests(metafunc):
@ -32,8 +34,8 @@ class TestS3GateTagging(TestS3GateBase):
return tags
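On the wire, object tags are passed as a URL-encoded key=value string on upload and as a TagSet structure afterwards. A sketch with plain boto3, reusing the tag-pair shape produced by the helper above (endpoint, bucket and key are placeholders):

# Sketch with plain boto3: attach tags on upload and read them back.
import boto3

client = boto3.client("s3", endpoint_url="http://localhost:8080")  # placeholder
tags = [("tag1", "value1"), ("tag2", "value2"), ("tag3", "value3")]
client.put_object(
    Bucket="bucket",
    Key="key",
    Body=b"payload",
    Tagging="&".join(f"{key}={value}" for key, value in tags),
)
tag_set = client.get_object_tagging(Bucket="bucket", Key="key")["TagSet"]
assert {(tag["Key"], tag["Value"]) for tag in tag_set} == set(tags)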
@allure.title("Test S3: Object tagging")
def test_s3_object_tagging(self, bucket):
file_path = generate_file()
def test_s3_object_tagging(self, bucket, simple_object_size):
file_path = generate_file(simple_object_size)
file_name = object_key_from_file_path(file_path)
with allure.step("Put with 3 tags object into bucket"):


@ -2,11 +2,11 @@ import os
import allure
import pytest
from file_helper import generate_file, generate_file_with_content
from s3_helper import set_bucket_versioning
from steps import s3_gate_bucket, s3_gate_object
from steps.s3_gate_base import TestS3GateBase
from pytest_tests.helpers.file_helper import generate_file, generate_file_with_content
from pytest_tests.helpers.s3_helper import set_bucket_versioning
from pytest_tests.steps import s3_gate_bucket, s3_gate_object
from pytest_tests.steps.s3_gate_base import TestS3GateBase
def pytest_generate_tests(metafunc):
@ -30,8 +30,8 @@ class TestS3GateVersioning(TestS3GateBase):
set_bucket_versioning(self.s3_client, bucket, s3_gate_bucket.VersioningStatus.SUSPENDED)
@allure.title("Test S3: Enable and disable versioning")
def test_s3_version(self):
file_path = generate_file()
def test_s3_version(self, simple_object_size):
file_path = generate_file(simple_object_size)
file_name = self.object_key_from_file_path(file_path)
bucket_objects = [file_name]
bucket = s3_gate_bucket.create_bucket_s3(self.s3_client, False)
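set_bucket_versioning presumably wraps boto3's put_bucket_versioning. A sketch of enabling and suspending versioning and reading the status back (endpoint and bucket are placeholders):

# Sketch with plain boto3: toggle bucket versioning.
import boto3

client = boto3.client("s3", endpoint_url="http://localhost:8080")  # placeholder
client.put_bucket_versioning(
    Bucket="bucket", VersioningConfiguration={"Status": "Enabled"}
)
assert client.get_bucket_versioning(Bucket="bucket").get("Status") == "Enabled"
client.put_bucket_versioning(
    Bucket="bucket", VersioningConfiguration={"Status": "Suspended"}
)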
@ -61,7 +61,7 @@ class TestS3GateVersioning(TestS3GateBase):
with allure.step("Put several versions of object into bucket"):
version_id_1 = s3_gate_object.put_object_s3(self.s3_client, bucket, file_path)
file_name_1 = generate_file_with_content(file_path=file_path)
file_name_1 = generate_file_with_content(simple_object_size, file_path=file_path)
version_id_2 = s3_gate_object.put_object_s3(self.s3_client, bucket, file_name_1)
with allure.step("Check bucket shows all versions"):


@ -5,10 +5,11 @@ from re import match
import allure
import pytest
import requests
from binary_version_helper import get_remote_binaries_versions
from common import BIN_VERSIONS_FILE
from env_properties import read_env_properties, save_env_properties
from neofs_testlib.hosting import Hosting
from frostfs_testlib.hosting import Hosting
from pytest_tests.helpers.binary_version import get_remote_binaries_versions
from pytest_tests.helpers.env_properties import read_env_properties, save_env_properties
from pytest_tests.resources.common import BIN_VERSIONS_FILE
logger = logging.getLogger("NeoLogger")


@ -1,5 +1,6 @@
import pytest
from wallet import WalletFactory, WalletFile
from pytest_tests.helpers.wallet import WalletFactory, WalletFile
@pytest.fixture(scope="module")


@ -2,15 +2,15 @@ import random
import allure
import pytest
from cluster_test_base import ClusterTestBase
from common import COMPLEX_OBJ_SIZE, SIMPLE_OBJ_SIZE, WALLET_PASS
from file_helper import generate_file
from grpc_responses import SESSION_NOT_FOUND
from neofs_testlib.utils.wallet import get_last_address_from_wallet
from python_keywords.container import create_container
from python_keywords.neofs_verbs import delete_object, put_object, put_object_to_random_node
from frostfs_testlib.resources.common import SESSION_NOT_FOUND
from frostfs_testlib.utils import wallet_utils
from steps.session_token import create_session_token
from pytest_tests.helpers.container import create_container
from pytest_tests.helpers.file_helper import generate_file
from pytest_tests.helpers.frostfs_verbs import delete_object, put_object, put_object_to_random_node
from pytest_tests.resources.common import WALLET_PASS
from pytest_tests.steps.cluster_test_base import ClusterTestBase
from pytest_tests.steps.session_token import create_session_token
@pytest.mark.sanity
@ -19,7 +19,7 @@ class TestDynamicObjectSession(ClusterTestBase):
@allure.title("Test Object Operations with Session Token")
@pytest.mark.parametrize(
"object_size",
[SIMPLE_OBJ_SIZE, COMPLEX_OBJ_SIZE],
[pytest.lazy_fixture("simple_object_size"), pytest.lazy_fixture("complex_object_size")],
ids=["simple object", "complex object"],
)
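pytest cannot evaluate fixtures inside parametrize directly, which is why the size constants are replaced with pytest.lazy_fixture markers here (and why pytest-lazy-fixture was added to the requirements); each marker resolves to the fixture's value when the parametrized test runs. A minimal standalone example:

# Minimal standalone example of the pytest-lazy-fixture pattern used above.
# Requires the pytest-lazy-fixture plugin to be installed.
import pytest

@pytest.fixture
def simple_object_size() -> int:
    return 1000

@pytest.mark.parametrize(
    "object_size",
    [pytest.lazy_fixture("simple_object_size")],
    ids=["simple object"],
)
def test_sizes(object_size):
    assert object_size == 1000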
def test_object_session_token(self, default_wallet, object_size):
@ -38,7 +38,7 @@ class TestDynamicObjectSession(ClusterTestBase):
with allure.step("Init wallet"):
wallet = default_wallet
address = get_last_address_from_wallet(wallet, "")
address = wallet_utils.get_last_address_from_wallet(wallet, "")
with allure.step("Nodes Settlements"):
(


@ -2,18 +2,21 @@ import logging
import allure
import pytest
from cluster import Cluster
from cluster_test_base import ClusterTestBase
from common import COMPLEX_OBJ_SIZE, SIMPLE_OBJ_SIZE
from epoch import ensure_fresh_epoch
from file_helper import generate_file
from grpc_responses import MALFORMED_REQUEST, OBJECT_ACCESS_DENIED, OBJECT_NOT_FOUND
from neofs_testlib.shell import Shell
from frostfs_testlib.resources.common import (
EXPIRED_SESSION_TOKEN,
MALFORMED_REQUEST,
OBJECT_ACCESS_DENIED,
OBJECT_NOT_FOUND,
)
from frostfs_testlib.shell import Shell
from pytest import FixtureRequest
from python_keywords.container import create_container
from python_keywords.neofs_verbs import (
from pytest_tests.helpers.cluster import Cluster
from pytest_tests.helpers.container import create_container
from pytest_tests.helpers.epoch import ensure_fresh_epoch
from pytest_tests.helpers.file_helper import generate_file
from pytest_tests.helpers.frostfs_verbs import (
delete_object,
get_netmap_netinfo,
get_object,
get_object_from_random_node,
get_range,
@ -22,10 +25,11 @@ from python_keywords.neofs_verbs import (
put_object_to_random_node,
search_object,
)
from wallet import WalletFile
from helpers.storage_object_info import StorageObjectInfo
from steps.session_token import (
from pytest_tests.helpers.storage_object_info import StorageObjectInfo
from pytest_tests.helpers.test_control import expect_not_raises
from pytest_tests.helpers.wallet import WalletFile
from pytest_tests.steps.cluster_test_base import ClusterTestBase
from pytest_tests.steps.session_token import (
INVALID_SIGNATURE,
UNRELATED_CONTAINER,
UNRELATED_KEY,
@ -37,7 +41,7 @@ from steps.session_token import (
get_object_signed_token,
sign_session_token,
)
from steps.storage_object import delete_objects
from pytest_tests.steps.storage_object import delete_objects
logger = logging.getLogger("NeoLogger")
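The tests below construct tokens with Lifetime(exp, nbf, iat) triples, and requests past the exp epoch now fail with EXPIRED_SESSION_TOKEN instead of the generic MALFORMED_REQUEST. A sketch of the assumed field semantics, consistent with how the tokens behave in the tests below; the real definition lives in pytest_tests/steps/session_token.py:

# Assumed shape of Lifetime, mirroring how it is constructed below.
from dataclasses import dataclass

@dataclass
class Lifetime:
    exp: int  # last epoch at which the token is still valid
    nbf: int  # first epoch at which the token becomes valid
    iat: int  # epoch at which the token was issued

# Lifetime(epoch + 1, epoch, epoch): usable now, expired after epoch + 1.
# Lifetime(epoch + 2, epoch + 1, epoch): issued now, valid only from the next epoch.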
@ -58,7 +62,7 @@ def storage_containers(
@pytest.fixture(
params=[SIMPLE_OBJ_SIZE, COMPLEX_OBJ_SIZE],
params=[pytest.lazy_fixture("simple_object_size"), pytest.lazy_fixture("complex_object_size")],
ids=["simple object", "complex object"],
# Scope module to upload/delete each files set only once
scope="module",
@ -98,16 +102,15 @@ def storage_objects(
@allure.step("Get ranges for test")
def get_ranges(storage_object: StorageObjectInfo, shell: Shell, endpoint: str) -> list[str]:
def get_ranges(
storage_object: StorageObjectInfo, max_object_size: int, shell: Shell, endpoint: str
) -> list[str]:
"""
Returns ranges to test range/hash methods via static session
"""
object_size = storage_object.size
if object_size == COMPLEX_OBJ_SIZE:
net_info = get_netmap_netinfo(storage_object.wallet_file_path, shell, endpoint)
max_object_size = net_info["maximum_object_size"]
# make sure to test multiple parts of complex object
if object_size > max_object_size:
assert object_size >= max_object_size + RANGE_OFFSET_FOR_COMPLEX_OBJECT
return [
"0:10",
@ -160,9 +163,9 @@ class TestObjectStaticSession(ClusterTestBase):
self,
user_wallet: WalletFile,
storage_objects: list[StorageObjectInfo],
static_sessions: list[str],
static_sessions: dict[ObjectVerb, str],
method_under_test,
verb: str,
verb: ObjectVerb,
request: FixtureRequest,
):
"""
@ -175,16 +178,15 @@ class TestObjectStaticSession(ClusterTestBase):
for node in self.cluster.storage_nodes:
for storage_object in storage_objects[0:2]:
method_under_test(
user_wallet.path,
storage_object.cid,
storage_object.oid,
wallet=user_wallet.path,
cid=storage_object.cid,
oid=storage_object.oid,
shell=self.shell,
endpoint=node.get_rpc_endpoint(),
session=static_sessions[verb],
)
@allure.title("Validate static session with range operations")
@pytest.mark.static_session
@pytest.mark.parametrize(
"method_under_test,verb",
[(get_range, ObjectVerb.RANGE), (get_range_hash, ObjectVerb.RANGEHASH)],
@ -193,10 +195,11 @@ class TestObjectStaticSession(ClusterTestBase):
self,
user_wallet: WalletFile,
storage_objects: list[StorageObjectInfo],
static_sessions: list[str],
static_sessions: dict[ObjectVerb, str],
method_under_test,
verb: str,
verb: ObjectVerb,
request: FixtureRequest,
max_object_size,
):
"""
Validate static session with range operations
@ -205,10 +208,13 @@ class TestObjectStaticSession(ClusterTestBase):
f"Validate static session with range operations for {request.node.callspec.id}"
)
storage_object = storage_objects[0]
ranges_to_test = get_ranges(storage_object, self.shell, self.cluster.default_rpc_endpoint)
ranges_to_test = get_ranges(
storage_object, max_object_size, self.shell, self.cluster.default_rpc_endpoint
)
for range_to_test in ranges_to_test:
with allure.step(f"Check range {range_to_test}"):
with expect_not_raises():
method_under_test(
user_wallet.path,
storage_object.cid,
@ -220,14 +226,13 @@ class TestObjectStaticSession(ClusterTestBase):
)
@allure.title("Validate static session with search operation")
@pytest.mark.static_session
@pytest.mark.xfail
# (see https://github.com/nspcc-dev/neofs-node/issues/2030)
def test_static_session_search(
self,
user_wallet: WalletFile,
storage_objects: list[StorageObjectInfo],
static_sessions: list[str],
static_sessions: dict[ObjectVerb, str],
request: FixtureRequest,
):
"""
@ -248,12 +253,11 @@ class TestObjectStaticSession(ClusterTestBase):
assert expected_object_ids == actual_object_ids
@allure.title("Validate static session with object id not in session")
@pytest.mark.static_session
def test_static_session_unrelated_object(
self,
user_wallet: WalletFile,
storage_objects: list[StorageObjectInfo],
static_sessions: list[str],
static_sessions: dict[ObjectVerb, str],
request: FixtureRequest,
):
"""
@ -273,12 +277,11 @@ class TestObjectStaticSession(ClusterTestBase):
)
@allure.title("Validate static session with user id not in session")
@pytest.mark.static_session
def test_static_session_head_unrelated_user(
self,
stranger_wallet: WalletFile,
storage_objects: list[StorageObjectInfo],
static_sessions: list[str],
static_sessions: dict[ObjectVerb, str],
request: FixtureRequest,
):
"""
@ -300,12 +303,11 @@ class TestObjectStaticSession(ClusterTestBase):
)
@allure.title("Validate static session with wrong verb in session")
@pytest.mark.static_session
def test_static_session_head_wrong_verb(
self,
user_wallet: WalletFile,
storage_objects: list[StorageObjectInfo],
static_sessions: list[str],
static_sessions: dict[ObjectVerb, str],
request: FixtureRequest,
):
"""
@ -327,13 +329,12 @@ class TestObjectStaticSession(ClusterTestBase):
)
@allure.title("Validate static session with container id not in session")
@pytest.mark.static_session
def test_static_session_unrelated_container(
self,
user_wallet: WalletFile,
storage_objects: list[StorageObjectInfo],
storage_containers: list[str],
static_sessions: list[str],
static_sessions: dict[ObjectVerb, str],
request: FixtureRequest,
):
"""
@ -355,13 +356,12 @@ class TestObjectStaticSession(ClusterTestBase):
)
@allure.title("Validate static session which signed by another wallet")
@pytest.mark.static_session
def test_static_session_signed_by_other(
self,
owner_wallet: WalletFile,
user_wallet: WalletFile,
stranger_wallet: WalletFile,
storage_containers: list[int],
storage_containers: list[str],
storage_objects: list[StorageObjectInfo],
temp_directory: str,
request: FixtureRequest,
@ -394,7 +394,6 @@ class TestObjectStaticSession(ClusterTestBase):
)
@allure.title("Validate static session which signed for another container")
@pytest.mark.static_session
def test_static_session_signed_for_other_container(
self,
owner_wallet: WalletFile,
@ -433,7 +432,6 @@ class TestObjectStaticSession(ClusterTestBase):
)
@allure.title("Validate static session which wasn't signed")
@pytest.mark.static_session
def test_static_session_without_sign(
self,
owner_wallet: WalletFile,
@ -470,7 +468,6 @@ class TestObjectStaticSession(ClusterTestBase):
)
@allure.title("Validate static session which expires at next epoch")
@pytest.mark.static_session
def test_static_session_expiration_at_next(
self,
owner_wallet: WalletFile,
@ -492,6 +489,7 @@ class TestObjectStaticSession(ClusterTestBase):
object_id = storage_objects[0].oid
expiration = Lifetime(epoch + 1, epoch, epoch)
with allure.step("Create session token"):
token_expire_at_next_epoch = get_object_signed_token(
owner_wallet,
user_wallet,
@ -503,6 +501,8 @@ class TestObjectStaticSession(ClusterTestBase):
expiration,
)
with allure.step("Object should be available with session token after token creation"):
with expect_not_raises():
head_object(
user_wallet.path,
container,
@ -512,9 +512,23 @@ class TestObjectStaticSession(ClusterTestBase):
session=token_expire_at_next_epoch,
)
with allure.step(
"Object should be available at last epoch before session token expiration"
):
self.tick_epoch()
with expect_not_raises():
head_object(
user_wallet.path,
container,
object_id,
self.shell,
self.cluster.default_rpc_endpoint,
session=token_expire_at_next_epoch,
)
with pytest.raises(Exception, match=MALFORMED_REQUEST):
with allure.step("Object should NOT be available after session token expiration epoch"):
self.tick_epoch()
with pytest.raises(Exception, match=EXPIRED_SESSION_TOKEN):
head_object(
user_wallet.path,
container,
@ -525,7 +539,6 @@ class TestObjectStaticSession(ClusterTestBase):
)
@allure.title("Validate static session which is valid starting from next epoch")
@pytest.mark.static_session
def test_static_session_start_at_next(
self,
owner_wallet: WalletFile,
@ -547,6 +560,7 @@ class TestObjectStaticSession(ClusterTestBase):
object_id = storage_objects[0].oid
expiration = Lifetime(epoch + 2, epoch + 1, epoch)
with allure.step("Create session token"):
token_start_at_next_epoch = get_object_signed_token(
owner_wallet,
user_wallet,
@ -558,6 +572,7 @@ class TestObjectStaticSession(ClusterTestBase):
expiration,
)
with allure.step("Object should NOT be available with session token after token creation"):
with pytest.raises(Exception, match=MALFORMED_REQUEST):
head_object(
user_wallet.path,
@ -568,7 +583,11 @@ class TestObjectStaticSession(ClusterTestBase):
session=token_start_at_next_epoch,
)
with allure.step(
"Object should be available with session token starting from token nbf epoch"
):
self.tick_epoch()
with expect_not_raises():
head_object(
user_wallet.path,
container,
@ -578,8 +597,23 @@ class TestObjectStaticSession(ClusterTestBase):
session=token_start_at_next_epoch,
)
with allure.step(
"Object should be available at last epoch before session token expiration"
):
self.tick_epoch()
with pytest.raises(Exception, match=MALFORMED_REQUEST):
with expect_not_raises():
head_object(
user_wallet.path,
container,
object_id,
self.shell,
self.cluster.default_rpc_endpoint,
session=token_start_at_next_epoch,
)
with allure.step("Object should NOT be available after session token expiration epoch"):
self.tick_epoch()
with pytest.raises(Exception, match=EXPIRED_SESSION_TOKEN):
head_object(
user_wallet.path,
container,
@ -590,7 +624,6 @@ class TestObjectStaticSession(ClusterTestBase):
)
@allure.title("Validate static session which is already expired")
@pytest.mark.static_session
def test_static_session_already_expired(
self,
owner_wallet: WalletFile,
@ -623,7 +656,7 @@ class TestObjectStaticSession(ClusterTestBase):
expiration,
)
with pytest.raises(Exception, match=MALFORMED_REQUEST):
with pytest.raises(Exception, match=EXPIRED_SESSION_TOKEN):
head_object(
user_wallet.path,
container,
@ -638,7 +671,7 @@ class TestObjectStaticSession(ClusterTestBase):
self,
user_wallet: WalletFile,
storage_objects: list[StorageObjectInfo],
static_sessions: list[str],
static_sessions: dict[ObjectVerb, str],
request: FixtureRequest,
):
"""
@ -663,7 +696,7 @@ class TestObjectStaticSession(ClusterTestBase):
self,
user_wallet: WalletFile,
storage_objects: list[StorageObjectInfo],
static_sessions: list[str],
static_sessions: dict[ObjectVerb, str],
request: FixtureRequest,
):
"""
@ -684,7 +717,6 @@ class TestObjectStaticSession(ClusterTestBase):
)
@allure.title("Validate static session which is issued in future epoch")
@pytest.mark.static_session
def test_static_session_invalid_issued_epoch(
self,
owner_wallet: WalletFile,


@ -1,8 +1,9 @@
import allure
import pytest
from file_helper import generate_file
from neofs_testlib.shell import Shell
from python_keywords.acl import (
from frostfs_testlib.resources.common import PUBLIC_ACL
from frostfs_testlib.shell import Shell
from pytest_tests.helpers.acl import (
EACLAccess,
EACLOperation,
EACLRole,
@ -11,20 +12,20 @@ from python_keywords.acl import (
set_eacl,
wait_for_cache_expired,
)
from python_keywords.container import (
from pytest_tests.helpers.container import (
create_container,
delete_container,
get_container,
list_containers,
)
from python_keywords.object_access import can_put_object
from wallet import WalletFile
from wellknown_acl import PUBLIC_ACL
from steps.cluster_test_base import ClusterTestBase
from steps.session_token import ContainerVerb, get_container_signed_token
from pytest_tests.helpers.file_helper import generate_file
from pytest_tests.helpers.object_access import can_put_object
from pytest_tests.helpers.wallet import WalletFile
from pytest_tests.steps.cluster_test_base import ClusterTestBase
from pytest_tests.steps.session_token import ContainerVerb, get_container_signed_token
@pytest.mark.static_session_container
class TestSessionTokenContainer(ClusterTestBase):
@pytest.fixture(scope="module")
def static_sessions(
@ -74,7 +75,6 @@ class TestSessionTokenContainer(ClusterTestBase):
owner_wallet.path, shell=self.shell, endpoint=self.cluster.default_rpc_endpoint
)
@pytest.mark.skip("Failed with timeout")
def test_static_session_token_container_create_with_other_verb(
self,
user_wallet: WalletFile,
@ -94,7 +94,6 @@ class TestSessionTokenContainer(ClusterTestBase):
wait_for_creation=False,
)
@pytest.mark.skip("Failed with timeout")
def test_static_session_token_container_create_with_other_wallet(
self,
stranger_wallet: WalletFile,
@ -136,6 +135,7 @@ class TestSessionTokenContainer(ClusterTestBase):
session_token=static_sessions[ContainerVerb.DELETE],
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
await_mode=True,
)
assert cid not in list_containers(
@ -148,6 +148,7 @@ class TestSessionTokenContainer(ClusterTestBase):
user_wallet: WalletFile,
stranger_wallet: WalletFile,
static_sessions: dict[ContainerVerb, str],
simple_object_size,
):
"""
Validate static session with set eacl operation
@ -159,7 +160,7 @@ class TestSessionTokenContainer(ClusterTestBase):
shell=self.shell,
endpoint=self.cluster.default_rpc_endpoint,
)
file_path = generate_file()
file_path = generate_file(simple_object_size)
assert can_put_object(stranger_wallet.path, cid, file_path, self.shell, self.cluster)
with allure.step(f"Deny all operations for other via eACL"):


@ -0,0 +1,155 @@
import json
import pathlib
import re
from dataclasses import dataclass
from io import StringIO
import allure
import pytest
import yaml
from configobj import ConfigObj
from frostfs_testlib.cli import FrostfsCli
from pytest_tests.helpers.cluster import Cluster, StorageNode
from pytest_tests.resources.common import CLI_DEFAULT_TIMEOUT, WALLET_CONFIG
SHARD_PREFIX = "FROSTFS_STORAGE_SHARD_"
BLOBSTOR_PREFIX = "_BLOBSTOR_"
@dataclass
class Blobstor:
path: str
path_type: str
def __eq__(self, other) -> bool:
if not isinstance(other, self.__class__):
raise RuntimeError(f"Only two {self.__class__.__name__} instances can be compared")
return self.path == other.path and self.path_type == other.path_type
def __hash__(self):
return hash((self.path, self.path_type))
@staticmethod
def from_config_object(section: ConfigObj, shard_id: str, blobstor_id: str):
var_prefix = f"{SHARD_PREFIX}{shard_id}{BLOBSTOR_PREFIX}{blobstor_id}"
return Blobstor(section.get(f"{var_prefix}_PATH"), section.get(f"{var_prefix}_TYPE"))
@dataclass
class Shard:
blobstor: list[Blobstor]
metabase: str
writecache: str
def __eq__(self, other) -> bool:
if not isinstance(other, self.__class__):
raise RuntimeError(f"Only two {self.__class__.__name__} instances can be compared")
return (
set(self.blobstor) == set(other.blobstor)
and self.metabase == other.metabase
and self.writecache == other.writecache
)
def __hash__(self):
return hash((self.metabase, self.writecache))
@staticmethod
def _get_blobstor_count_from_section(config_object: ConfigObj, shard_id: int):
pattern = f"{SHARD_PREFIX}{shard_id}{BLOBSTOR_PREFIX}"
blobstors = {key[: len(pattern) + 2] for key in config_object.keys() if pattern in key}
return len(blobstors)
@staticmethod
def from_config_object(config_object: ConfigObj, shard_id: int):
var_prefix = f"{SHARD_PREFIX}{shard_id}"
blobstor_count = Shard._get_blobstor_count_from_section(config_object, shard_id)
blobstors = [
Blobstor.from_config_object(config_object, shard_id, blobstor_id)
for blobstor_id in range(blobstor_count)
]
write_cache_enabled = config_object.as_bool(f"{var_prefix}_WRITECACHE_ENABLED")
return Shard(
blobstors,
config_object.get(f"{var_prefix}_METABASE_PATH"),
config_object.get(f"{var_prefix}_WRITECACHE_PATH") if write_cache_enabled else "",
)
@staticmethod
def from_object(shard):
metabase = shard["metabase"]["path"] if "path" in shard["metabase"] else shard["metabase"]
writecache = (
shard["writecache"]["path"] if "path" in shard["writecache"] else shard["writecache"]
)
return Shard(
blobstor=[
Blobstor(path=blobstor["path"], path_type=blobstor["type"])
for blobstor in shard["blobstor"]
],
metabase=metabase,
writecache=writecache,
)
def shards_from_yaml(contents: str) -> list[Shard]:
config = yaml.safe_load(contents)
config["storage"]["shard"].pop("default")
return [Shard.from_object(shard) for shard in config["storage"]["shard"].values()]
def shards_from_env(contents: str) -> list[Shard]:
configObj = ConfigObj(StringIO(contents))
pattern = f"{SHARD_PREFIX}\d*"
num_shards = len(set(re.findall(pattern, contents)))
return [Shard.from_config_object(configObj, shard_id) for shard_id in range(num_shards)]
@pytest.mark.sanity
@pytest.mark.shard
class TestControlShard:
@staticmethod
def get_shards_from_config(node: StorageNode) -> list[Shard]:
config_file = node.get_remote_config_path()
file_type = pathlib.Path(config_file).suffix
contents = node.host.get_shell().exec(f"cat {config_file}").stdout
parser_method = {
".env": shards_from_env,
".yaml": shards_from_yaml,
".yml": shards_from_yaml,
}
shards = parser_method[file_type](contents)
return shards
@staticmethod
def get_shards_from_cli(node: StorageNode) -> list[Shard]:
wallet_path = node.get_remote_wallet_path()
wallet_password = node.get_wallet_password()
control_endpoint = node.get_control_endpoint()
cli_config = node.host.get_cli_config("frostfs-cli")
cli = FrostfsCli(node.host.get_shell(), cli_config.exec_path, WALLET_CONFIG)
result = cli.shards.list(
endpoint=control_endpoint,
wallet=wallet_path,
wallet_password=wallet_password,
json_mode=True,
timeout=CLI_DEFAULT_TIMEOUT,
)
return [Shard.from_object(shard) for shard in json.loads(result.stdout.split(">", 1)[1])]
@allure.title("All shards are available")
def test_control_shard(self, cluster: Cluster):
for storage_node in cluster.storage_nodes:
shards_from_config = self.get_shards_from_config(storage_node)
shards_from_cli = self.get_shards_from_cli(storage_node)
assert set(shards_from_config) == set(shards_from_cli)
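For reference, a minimal sketch of how shards_from_env consumes an env-style node config; the sample variable values are invented, and the sketch assumes the Shard/Blobstor helpers above are importable:

SAMPLE_ENV = """
FROSTFS_STORAGE_SHARD_0_METABASE_PATH=/storage/meta0
FROSTFS_STORAGE_SHARD_0_WRITECACHE_ENABLED=false
FROSTFS_STORAGE_SHARD_0_WRITECACHE_PATH=/storage/cache0
FROSTFS_STORAGE_SHARD_0_BLOBSTOR_0_PATH=/storage/blob0
FROSTFS_STORAGE_SHARD_0_BLOBSTOR_0_TYPE=blobovnicza
"""

shards = shards_from_env(SAMPLE_ENV)
assert len(shards) == 1
assert shards[0].metabase == "/storage/meta0"
assert shards[0].writecache == ""  # write-cache disabled, so its path is dropped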


@ -0,0 +1,47 @@
import os
import shutil
from datetime import datetime
import allure
import pytest
from pytest_tests.steps.cluster_test_base import ClusterTestBase
class TestLogs(ClusterTestBase):
@pytest.mark.logs_after_session
def test_logs_after_session(self, temp_directory: str, session_start_time: datetime):
"""
This test is automatically added to any test run to check cluster logs for critical errors.
"""
end_time = datetime.utcnow()
logs_dir = os.path.join(temp_directory, "logs")
os.makedirs(logs_dir)
issues_regex = r"\Wpanic\W|\Woom\W|\Wtoo many open files\W"
hosts_with_problems = []
for host in self.cluster.hosts:
with allure.step(f"Check logs on {host.config.address}"):
if host.is_message_in_logs(issues_regex, session_start_time, end_time):
hosts_with_problems.append(host.config.address)
host.dump_logs(
logs_dir,
since=session_start_time,
until=end_time,
filter_regex=issues_regex,
)
if hosts_with_problems:
self._attach_logs(logs_dir)
assert (
not hosts_with_problems
), f"The following hosts contains contain critical errors in system logs: {', '.join(hosts_with_problems)}"
def _attach_logs(self, logs_dir: str) -> None:
# Zip all files and attach to Allure because it is more convenient to download a single
# zip with all logs rather than mess with individual log files per service or node
logs_zip_file_path = shutil.make_archive(logs_dir, "zip", logs_dir)
allure.attach.file(logs_zip_file_path, name="logs.zip", extension="zip")
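The issues_regex above requires a non-word character on each side of the keyword, so it matches whole words in log lines but not substrings. A quick self-contained check (the sample log lines are invented):

import re

issues_regex = r"\Wpanic\W|\Woom\W|\Wtoo many open files\W"

assert re.search(issues_regex, "node: panic: runtime error")      # word with delimiters
assert re.search(issues_regex, "kernel: oom killer invoked ...")  # same for oom
assert not re.search(issues_regex, "loaded module panicware v1")  # substring only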


@ -1 +0,0 @@
password: ""


@ -1,69 +1,16 @@
aiodns==3.0.0
aiohttp==3.7.4.post0
aioresponses==0.7.2
allure-pytest==2.9.45
allure-python-commons==2.9.45
async-timeout==3.0.1
asynctest==0.13.0
attrs==21.4.0
base58==2.1.0
bitarray==2.3.4
black==22.8.0
boto3==1.16.33
botocore==1.19.33
certifi==2022.5.18
cffi==1.15.0
chardet==4.0.0
charset-normalizer==2.0.12
coverage==6.3.3
docker==4.4.0
docutils==0.17.1
Events==0.4
flake8==4.0.1
idna==3.3
iniconfig==1.1.1
isort==5.10.1
jmespath==0.10.0
jsonschema==4.5.1
lz4==3.1.3
mccabe==0.6.1
mmh3==3.0.0
multidict==6.0.2
mypy==0.950
mypy-extensions==0.4.3
neo-mamba==0.10.0
neo3crypto==0.2.1
neo3vm==0.9.0
neo3vm-stubs==0.9.0
neofs-testlib==0.7.0
netaddr==0.8.0
orjson==3.6.8
packaging==21.3
paramiko==2.10.3
configobj==5.0.6
frostfs-testlib==1.3.1
neo-mamba==1.0.0
pexpect==4.8.0
pluggy==1.0.0
pre-commit==2.20.0
ptyprocess==0.7.0
py==1.11.0
pybiginteger==1.2.6
pybiginteger-stubs==1.2.6
pycares==4.1.2
pycodestyle==2.8.0
pycparser==2.21
pycryptodome==3.11.0
pyflakes==2.4.0
pyparsing==3.0.9
pyrsistent==0.18.1
pytest==7.1.2
python-dateutil==2.8.2
pyyaml==6.0
pytest==7.1.2
pytest-lazy-fixture==0.6.3
python-dateutil==2.8.2
requests==2.28.0
robotframework==4.1.2
s3transfer==0.3.7
six==1.16.0
tenacity==8.0.1
tomli==2.0.1
typing-extensions==4.2.0
urllib3==1.26.9
websocket-client==1.3.2
yarl==1.7.2

requirements_dev.txt Normal file

@ -0,0 +1,2 @@
pre-commit==2.20.0
isort==5.12.0


@ -1,103 +0,0 @@
#!/usr/bin/python3
"""
This module contains functions which are used for Large Object assembling:
getting the Last Object of a split and getting the Link Object. It is not
enough to simply perform a "raw" HEAD request, as noted in the issue:
https://github.com/nspcc-dev/neofs-node/issues/1304. Therefore, reliable
retrieval of the aforementioned objects must be done this way: send a direct
"raw" HEAD request to every Storage Node and return the desired OID on the
first non-null response.
"""
import logging
from typing import Optional
import allure
import neofs_verbs
from cluster import StorageNode
from common import WALLET_CONFIG
from neofs_testlib.shell import Shell
logger = logging.getLogger("NeoLogger")
@allure.step("Get Link Object")
def get_link_object(
wallet: str,
cid: str,
oid: str,
shell: Shell,
nodes: list[StorageNode],
bearer: str = "",
wallet_config: str = WALLET_CONFIG,
is_direct: bool = True,
):
"""
Args:
wallet (str): path to the wallet on whose behalf the Storage Nodes
are requested
cid (str): Container ID which stores the Large Object
oid (str): Large Object ID
shell: executor for cli command
nodes: list of nodes to do search on
bearer (optional, str): path to Bearer token file
wallet_config (optional, str): path to the neofs-cli config file
is_direct: send request directly to the node or not; this flag
turns into `--ttl 1` key
Returns:
(str): Link Object ID
When no Link Object ID is found after polling all Storage Nodes,
the function logs an error and returns None.
"""
for node in nodes:
endpoint = node.get_rpc_endpoint()
try:
resp = neofs_verbs.head_object(
wallet,
cid,
oid,
shell=shell,
endpoint=endpoint,
is_raw=True,
is_direct=is_direct,
bearer=bearer,
wallet_config=wallet_config,
)
if resp["link"]:
return resp["link"]
except Exception:
logger.info(f"No Link Object found on {endpoint}; continue")
logger.error(f"No Link Object for {cid}/{oid} found among all Storage Nodes")
return None
@allure.step("Get Last Object")
def get_last_object(
wallet: str, cid: str, oid: str, shell: Shell, nodes: list[StorageNode]
) -> Optional[str]:
"""
Args:
wallet (str): path to the wallet on whose behalf the Storage Nodes
are requested
cid (str): Container ID which stores the Large Object
oid (str): Large Object ID
shell: executor for cli command
nodes: list of nodes to do search on
Returns:
(str): Last Object ID
When no Last Object ID is found after polling all Storage Nodes,
the function logs an error and returns None.
"""
for node in nodes:
endpoint = node.get_rpc_endpoint()
try:
resp = neofs_verbs.head_object(
wallet, cid, oid, shell=shell, endpoint=endpoint, is_raw=True, is_direct=True
)
if resp["lastPart"]:
return resp["lastPart"]
except Exception:
logger.info(f"No Last Object found on {endpoint}; continue")
logger.error(f"No Last Object for {cid}/{oid} found among all Storage Nodes")
return None
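The module docstring above describes the retrieval strategy; reduced to its core, the pattern is "poll every node directly and take the first non-empty answer". A sketch under that reading, where head_raw stands in for the real neofs_verbs.head_object call:

from typing import Callable, Optional

def poll_nodes_for_field(
    endpoints: list[str], head_raw: Callable[[str], dict], field: str
) -> Optional[str]:
    for endpoint in endpoints:
        try:
            # Direct (TTL 1) raw HEAD: only a node that actually stores
            # the split-chain part can answer with the field we need.
            resp = head_raw(endpoint)
            if resp.get(field):
                return resp[field]
        except Exception:
            continue  # this node doesn't hold the object locally; try the next
    return None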


@ -1,231 +0,0 @@
#!/usr/bin/python3.9
"""
This module contains keywords that utilize `neofs-cli container` commands.
"""
import json
import logging
from time import sleep
from typing import Optional, Union
import allure
import json_transformers
from common import NEOFS_CLI_EXEC, WALLET_CONFIG
from neofs_testlib.cli import NeofsCli
from neofs_testlib.shell import Shell
logger = logging.getLogger("NeoLogger")
DEFAULT_PLACEMENT_RULE = "REP 2 IN X CBF 1 SELECT 4 FROM * AS X"
@allure.step("Create Container")
def create_container(
wallet: str,
shell: Shell,
endpoint: str,
rule: str = DEFAULT_PLACEMENT_RULE,
basic_acl: str = "",
attributes: Optional[dict] = None,
session_token: str = "",
session_wallet: str = "",
name: str = None,
options: dict = None,
await_mode: bool = True,
wait_for_creation: bool = True,
) -> str:
"""
A wrapper for `neofs-cli container create` call.
Args:
wallet (str): a wallet on whose behalf a container is created
rule (optional, str): placement rule for container
basic_acl (optional, str): an ACL for container, will be
appended to `--basic-acl` key
attributes (optional, dict): container attributes, will be
appended to `--attributes` key
session_token (optional, str): a path to session token file
session_wallet (optional, str): a path to the wallet which signed
the session token; this parameter makes sense
when paired with `session_token`
shell: executor for cli command
endpoint: NeoFS endpoint to send request to, appends to `--rpc-endpoint` key
options (optional, dict): any other options to pass to the call
name (optional, str): container name attribute
await_mode (bool): block execution until container is persisted
wait_for_creation (bool): wait until the container shows up in the container list
Returns:
(str): CID of the created container
"""
cli = NeofsCli(shell, NEOFS_CLI_EXEC, WALLET_CONFIG)
result = cli.container.create(
rpc_endpoint=endpoint,
wallet=session_wallet if session_wallet else wallet,
policy=rule,
basic_acl=basic_acl,
attributes=attributes,
name=name,
session=session_token,
await_mode=await_mode,
**options or {},
)
cid = _parse_cid(result.stdout)
logger.info("Container created; waiting until it is persisted in the sidechain")
if wait_for_creation:
wait_for_container_creation(wallet, cid, shell, endpoint)
return cid
def wait_for_container_creation(
wallet: str, cid: str, shell: Shell, endpoint: str, attempts: int = 15, sleep_interval: int = 1
):
for _ in range(attempts):
containers = list_containers(wallet, shell, endpoint)
if cid in containers:
return
logger.info(f"There is no {cid} in {containers} yet; sleep {sleep_interval} and continue")
sleep(sleep_interval)
raise RuntimeError(
f"After {attempts * sleep_interval} seconds container {cid} hasn't been persisted; exiting"
)
def wait_for_container_deletion(
wallet: str, cid: str, shell: Shell, endpoint: str, attempts: int = 30, sleep_interval: int = 1
):
for _ in range(attempts):
try:
get_container(wallet, cid, shell=shell, endpoint=endpoint)
sleep(sleep_interval)
continue
except Exception as err:
if "container not found" not in str(err):
raise AssertionError(f'Expected "container not found" in error, got\n{err}')
return
raise AssertionError(f"Expected container deleted during {attempts * sleep_interval} sec.")
@allure.step("List Containers")
def list_containers(wallet: str, shell: Shell, endpoint: str) -> list[str]:
"""
A wrapper for `neofs-cli container list` call. It returns all the
available containers for the given wallet.
Args:
wallet (str): a wallet on whose behalf we list the containers
shell: executor for cli command
endpoint: NeoFS endpoint to send request to, appends to `--rpc-endpoint` key
Returns:
(list): list of containers
"""
cli = NeofsCli(shell, NEOFS_CLI_EXEC, WALLET_CONFIG)
result = cli.container.list(rpc_endpoint=endpoint, wallet=wallet)
logger.info(f"Containers: \n{result}")
return result.stdout.split()
@allure.step("Get Container")
def get_container(
wallet: str,
cid: str,
shell: Shell,
endpoint: str,
json_mode: bool = True,
) -> Union[dict, str]:
"""
A wrapper for `neofs-cli container get` call. It extracts container's
attributes and rearranges them into a more compact view.
Args:
wallet (str): path to a wallet on whose behalf we get the container
cid (str): ID of the container to get
shell: executor for cli command
endpoint: NeoFS endpoint to send request to, appends to `--rpc-endpoint` key
json_mode (bool): return container in JSON format
Returns:
(dict, str): dict of container attributes
"""
cli = NeofsCli(shell, NEOFS_CLI_EXEC, WALLET_CONFIG)
result = cli.container.get(rpc_endpoint=endpoint, wallet=wallet, cid=cid, json_mode=json_mode)
if not json_mode:
return result.stdout
container_info = json.loads(result.stdout)
attributes = dict()
for attr in container_info["attributes"]:
attributes[attr["key"]] = attr["value"]
container_info["attributes"] = attributes
container_info["ownerID"] = json_transformers.json_reencode(container_info["ownerID"]["value"])
return container_info
@allure.step("Delete Container")
# TODO: make the error message about a non-found container more user-friendly
# https://github.com/nspcc-dev/neofs-contract/issues/121
def delete_container(
wallet: str,
cid: str,
shell: Shell,
endpoint: str,
force: bool = False,
session_token: Optional[str] = None,
) -> None:
"""
A wrapper for `neofs-cli container delete` call.
Args:
wallet (str): path to a wallet on whose behalf we delete the container
cid (str): ID of the container to delete
shell: executor for cli command
endpoint: NeoFS endpoint to send request to, appends to `--rpc-endpoint` key
force (bool): do not check whether container contains locks and remove immediately
session_token: a path to session token file
This function doesn't return anything.
"""
cli = NeofsCli(shell, NEOFS_CLI_EXEC, WALLET_CONFIG)
cli.container.delete(
wallet=wallet, cid=cid, rpc_endpoint=endpoint, force=force, session=session_token
)
def _parse_cid(output: str) -> str:
"""
Parses container ID from a given CLI output. The input string we expect:
container ID: 2tz86kVTDpJxWHrhw3h6PbKMwkLtBEwoqhHQCKTre1FN
awaiting...
container has been persisted on sidechain
We want to take 'container ID' value from the string.
Args:
output (str): CLI output to parse
Returns:
(str): extracted CID
"""
try:
# taking first line from command's output
first_line = output.split("\n")[0]
except Exception:
first_line = ""
logger.error(f"Got empty output: {output}")
splitted = first_line.split(": ")
if len(splitted) != 2:
raise ValueError(f"no CID was parsed from command output: \t{first_line}")
return splitted[1]
@allure.step("Search container by name")
def search_container_by_name(wallet: str, name: str, shell: Shell, endpoint: str):
list_cids = list_containers(wallet, shell, endpoint)
for cid in list_cids:
cont_info = get_container(wallet, cid, shell, endpoint, True)
if cont_info.get("attributes").get("Name", None) == name:
return cid
return None
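A quick check of the parsing contract that _parse_cid documents; the sample output below mirrors the CLI format quoted in its docstring:

sample_output = (
    "container ID: 2tz86kVTDpJxWHrhw3h6PbKMwkLtBEwoqhHQCKTre1FN\n"
    "awaiting...\n"
    "container has been persisted on sidechain"
)

# Same two steps _parse_cid performs: take the first line, split on ": ".
first_line = sample_output.split("\n")[0]
cid = first_line.split(": ")[1]
assert cid == "2tz86kVTDpJxWHrhw3h6PbKMwkLtBEwoqhHQCKTre1FN"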


@ -1,50 +0,0 @@
import base64
import json
import base58
from neo3 import wallet
def dict_to_attrs(attrs: dict) -> str:
"""
This function takes a dictionary of object's attributes and converts them
into string. The string is passed to `--attributes` key of neofs-cli.
Args:
attrs (dict): object attributes in {"a": "b", "c": "d"} format.
Returns:
(str): string in "a=b,c=d" format.
"""
return ",".join(f"{key}={value}" for key, value in attrs.items())
def __fix_wallet_schema(wallet: dict) -> None:
# Temporary function to fix wallets that do not conform to the schema
# TODO: get rid of it once issue is solved
if "name" not in wallet:
wallet["name"] = None
for account in wallet["accounts"]:
if "extra" not in account:
account["extra"] = None
def get_wallet_public_key(wallet_path: str, wallet_password: str, format: str = "hex") -> str:
# Get public key from wallet file
with open(wallet_path, "r") as file:
wallet_content = json.load(file)
__fix_wallet_schema(wallet_content)
wallet_from_json = wallet.Wallet.from_json(wallet_content, password=wallet_password)
public_key_hex = str(wallet_from_json.accounts[0].public_key)
# Convert public key to specified format
if format == "hex":
return public_key_hex
if format == "base58":
public_key_base58 = base58.b58encode(bytes.fromhex(public_key_hex))
return public_key_base58.decode("utf-8")
if format == "base64":
public_key_base64 = base64.b64encode(bytes.fromhex(public_key_hex))
return public_key_base64.decode("utf-8")
raise ValueError(f"Invalid public key format: {format}")


@ -1,79 +0,0 @@
import json
import logging
from time import sleep
import allure
from cluster import Cluster
from common import MAINNET_BLOCK_TIME, NEOFS_ADM_CONFIG_PATH, NEOFS_ADM_EXEC, NEOGO_EXECUTABLE
from neofs_testlib.cli import NeofsAdm, NeoGo
from neofs_testlib.shell import Shell
from neofs_testlib.utils.wallet import get_last_address_from_wallet
from payment_neogo import get_contract_hash
from utility import parse_time
logger = logging.getLogger("NeoLogger")
@allure.step("Ensure fresh epoch")
def ensure_fresh_epoch(shell: Shell, cluster: Cluster) -> int:
# ensure new fresh epoch to avoid epoch switch during test session
current_epoch = get_epoch(shell, cluster)
tick_epoch(shell, cluster)
epoch = get_epoch(shell, cluster)
assert epoch > current_epoch, "Epoch wasn't ticked"
return epoch
@allure.step("Get Epoch")
def get_epoch(shell: Shell, cluster: Cluster):
morph_chain = cluster.morph_chain_nodes[0]
morph_endpoint = morph_chain.get_endpoint()
neogo = NeoGo(shell=shell, neo_go_exec_path=NEOGO_EXECUTABLE)
out = neogo.contract.testinvokefunction(
scripthash=get_contract_hash(morph_chain, "netmap.neofs", shell=shell),
method="epoch",
rpc_endpoint=morph_endpoint,
)
return int(json.loads(out.stdout.replace("\n", ""))["stack"][0]["value"])
@allure.step("Tick Epoch")
def tick_epoch(shell: Shell, cluster: Cluster):
if NEOFS_ADM_EXEC and NEOFS_ADM_CONFIG_PATH:
# If neofs-adm is available, then we tick epoch with it (to be consistent with UAT tests)
neofsadm = NeofsAdm(
shell=shell, neofs_adm_exec_path=NEOFS_ADM_EXEC, config_file=NEOFS_ADM_CONFIG_PATH
)
neofsadm.morph.force_new_epoch()
return
# Otherwise we tick the epoch using a transaction,
# sent via the first IR node by default
cur_epoch = get_epoch(shell, cluster)
ir_node = cluster.ir_nodes[0]
# If no local_wallet_path is provided, we use wallet_path
ir_wallet_path = ir_node.get_wallet_path()
ir_wallet_pass = ir_node.get_wallet_password()
ir_address = get_last_address_from_wallet(ir_wallet_path, ir_wallet_pass)
morph_chain = cluster.morph_chain_nodes[0]
morph_endpoint = morph_chain.get_endpoint()
neogo = NeoGo(shell, neo_go_exec_path=NEOGO_EXECUTABLE)
neogo.contract.invokefunction(
wallet=ir_wallet_path,
wallet_password=ir_wallet_pass,
scripthash=get_contract_hash(morph_chain, "netmap.neofs", shell=shell),
method="newEpoch",
arguments=f"int:{cur_epoch + 1}",
multisig_hash=f"{ir_address}:Global",
address=ir_address,
rpc_endpoint=morph_endpoint,
force=True,
gas=1,
)
sleep(parse_time(MAINNET_BLOCK_TIME))
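ensure_fresh_epoch above guards tests against an epoch switching in the background mid-run: tick once, then verify the counter actually moved. The same pattern with a fake chain (a sketch, not the helper's real dependencies):

def ensure_fresh(get_value, tick):
    # Tick once and verify the counter moved, so the test starts
    # at a known-fresh value instead of racing a background switch.
    before = get_value()
    tick()
    after = get_value()
    assert after > before, "Counter wasn't ticked"
    return after

class FakeChain:
    def __init__(self):
        self.epoch = 41

    def get_epoch(self):
        return self.epoch

    def tick_epoch(self):
        self.epoch += 1

chain = FakeChain()
assert ensure_fresh(chain.get_epoch, chain.tick_epoch) == 42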


@ -1,182 +0,0 @@
import logging
import os
import re
import shutil
import uuid
import zipfile
from urllib.parse import quote_plus
import allure
import requests
from cli_helpers import _cmd_run
logger = logging.getLogger("NeoLogger")
ASSETS_DIR = os.getenv("ASSETS_DIR", "TemporaryDir/")
@allure.step("Get via HTTP Gate")
def get_via_http_gate(cid: str, oid: str, endpoint: str):
"""
This function gets the given object from the HTTP gate
cid: container id to get object from
oid: object ID
endpoint: http gate endpoint
"""
request = f"{endpoint}/get/{cid}/{oid}"
resp = requests.get(request, stream=True)
if not resp.ok:
raise Exception(
f"""Failed to get object via HTTP gate:
request: {resp.request.path_url},
response: {resp.text},
status code: {resp.status_code} {resp.reason}"""
)
logger.info(f"Request: {request}")
_attach_allure_step(request, resp.status_code)
file_path = os.path.join(os.getcwd(), ASSETS_DIR, f"{cid}_{oid}")
with open(file_path, "wb") as file:
shutil.copyfileobj(resp.raw, file)
return file_path
@allure.step("Get via Zip HTTP Gate")
def get_via_zip_http_gate(cid: str, prefix: str, endpoint: str):
"""
This function gets objects with the given prefix from the HTTP gate as a ZIP archive
cid: container id to get object from
prefix: common prefix
endpoint: http gate endpoint
"""
request = f"{endpoint}/zip/{cid}/{prefix}"
resp = requests.get(request, stream=True)
if not resp.ok:
raise Exception(
f"""Failed to get object via HTTP gate:
request: {resp.request.path_url},
response: {resp.text},
status code: {resp.status_code} {resp.reason}"""
)
logger.info(f"Request: {request}")
_attach_allure_step(request, resp.status_code)
file_path = os.path.join(os.getcwd(), ASSETS_DIR, f"{cid}_archive.zip")
with open(file_path, "wb") as file:
shutil.copyfileobj(resp.raw, file)
with zipfile.ZipFile(file_path, "r") as zip_ref:
zip_ref.extractall(ASSETS_DIR)
return os.path.join(os.getcwd(), ASSETS_DIR, prefix)
@allure.step("Get via HTTP Gate by attribute")
def get_via_http_gate_by_attribute(cid: str, attribute: dict, endpoint: str):
"""
This function gets the given object from the HTTP gate by attribute
cid: CID to get object from
attribute: attribute {name: attribute} value pair
endpoint: http gate endpoint
"""
attr_name = list(attribute.keys())[0]
attr_value = quote_plus(str(attribute.get(attr_name)))
request = f"{endpoint}/get_by_attribute/{cid}/{quote_plus(str(attr_name))}/{attr_value}"
resp = requests.get(request, stream=True)
if not resp.ok:
raise Exception(
f"""Failed to get object via HTTP gate:
request: {resp.request.path_url},
response: {resp.text},
status code: {resp.status_code} {resp.reason}"""
)
logger.info(f"Request: {request}")
_attach_allure_step(request, resp.status_code)
file_path = os.path.join(os.getcwd(), ASSETS_DIR, f"{cid}_{str(uuid.uuid4())}")
with open(file_path, "wb") as file:
shutil.copyfileobj(resp.raw, file)
return file_path
@allure.step("Upload via HTTP Gate")
def upload_via_http_gate(cid: str, path: str, endpoint: str, headers: dict = None) -> str:
"""
This function uploads the given object through the HTTP gate
cid: CID to get object from
path: File path to upload
endpoint: http gate endpoint
headers: Object header
"""
request = f"{endpoint}/upload/{cid}"
files = {"upload_file": open(path, "rb")}
body = {"filename": path}
resp = requests.post(request, files=files, data=body, headers=headers)
if not resp.ok:
raise Exception(
f"""Failed to get object via HTTP gate:
request: {resp.request.path_url},
response: {resp.text},
status code: {resp.status_code} {resp.reason}"""
)
logger.info(f"Request: {request}")
_attach_allure_step(request, resp.json(), req_type="POST")
assert resp.json().get("object_id"), f"OID not found in response {resp}"
return resp.json().get("object_id")
@allure.step("Upload via HTTP Gate using Curl")
def upload_via_http_gate_curl(
cid: str, filepath: str, endpoint: str, large_object=False, headers: dict = None
) -> str:
"""
This function uploads the given object through the HTTP gate using the curl utility.
cid: CID to get object from
filepath: File path to upload
headers: Object header
endpoint: http gate endpoint
"""
request = f"{endpoint}/upload/{cid}"
files = f"file=@{filepath};filename={os.path.basename(filepath)}"
cmd = f"curl -F '{files}' {request}"
if large_object:
files = f"file=@pipe;filename={os.path.basename(filepath)}"
cmd = f"mkfifo pipe;cat {filepath} > pipe & curl --no-buffer -F '{files}' {request}"
output = _cmd_run(cmd)
oid_re = re.search(r'"object_id": "(.*)"', output)
if not oid_re:
raise AssertionError(f'Could not find "object_id" in {output}')
return oid_re.group(1)
@allure.step("Get via HTTP Gate using Curl")
def get_via_http_curl(cid: str, oid: str, endpoint: str) -> str:
"""
This function gets the given object from the HTTP gate using the curl utility.
cid: CID to get object from
oid: object OID
endpoint: http gate endpoint
"""
request = f"{endpoint}/get/{cid}/{oid}"
file_path = os.path.join(os.getcwd(), ASSETS_DIR, f"{cid}_{oid}_{str(uuid.uuid4())}")
cmd = f"curl {request} > {file_path}"
_cmd_run(cmd)
return file_path
def _attach_allure_step(request: str, status_code: int, req_type="GET"):
command_attachment = f"REQUEST: '{request}'\n" f"RESPONSE:\n {status_code}\n"
with allure.step(f"{req_type} Request"):
allure.attach(command_attachment, f"{req_type} Request", allure.attachment_type.TEXT)
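The large-object branch of upload_via_http_gate_curl streams the file through a named pipe so curl never buffers the whole payload in memory. The same idea in plain Python, as a sketch using requests' chunked transfer (not the module's actual code; the URL is whatever /upload/{cid} endpoint applies):

import requests

def stream_upload(url: str, filepath: str, chunk_size: int = 1 << 20) -> requests.Response:
    # Feed the file lazily; passing a generator as `data` makes requests
    # use chunked transfer encoding, so memory use stays flat.
    def chunks():
        with open(filepath, "rb") as f:
            while block := f.read(chunk_size):
                yield block

    return requests.post(url, data=chunks())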


@ -1,136 +0,0 @@
"""
When doing requests to NeoFS, we get JSON output as an automatically decoded
structure from protobuf. Some fields are decoded with boilerplates and binary
values are Base64-encoded.
This module contains functions which rearrange the structure and reencode binary
data from Base64 to Base58.
"""
import base64
import base58
def decode_simple_header(data: dict) -> dict:
"""
This function reencodes Simple Object header and its attributes.
"""
try:
data = decode_common_fields(data)
# Normalize object attributes
data["header"]["attributes"] = {
attr["key"]: attr["value"] for attr in data["header"]["attributes"]
}
except Exception as exc:
raise ValueError(f"failed to decode JSON output: {exc}") from exc
return data
def decode_split_header(data: dict) -> dict:
"""
This function rearranges Complex Object header.
The header holds SplitID, a random unique
number, which is common among all split objects, and IDs of the Linking
Object and the last split Object.
"""
try:
data["splitId"] = json_reencode(data["splitId"])
data["lastPart"] = json_reencode(data["lastPart"]["value"]) if data["lastPart"] else None
data["link"] = json_reencode(data["link"]["value"]) if data["link"] else None
except Exception as exc:
raise ValueError(f"failed to decode JSON output: {exc}") from exc
return data
def decode_linking_object(data: dict) -> dict:
"""
This function reencodes Linking Object header.
It contains IDs of child Objects and Split Chain data.
"""
try:
data = decode_simple_header(data)
split = data["header"]["split"]
split["children"] = [json_reencode(item["value"]) for item in split["children"]]
split["splitID"] = json_reencode(split["splitID"])
split["previous"] = json_reencode(split["previous"]["value"]) if split["previous"] else None
split["parent"] = json_reencode(split["parent"]["value"]) if split["parent"] else None
except Exception as exc:
raise ValueError(f"failed to decode JSON output: {exc}") from exc
return data
def decode_storage_group(data: dict) -> dict:
"""
This function reencodes Storage Group header.
"""
try:
data = decode_common_fields(data)
except Exception as exc:
raise ValueError(f"failed to decode JSON output: {exc}") from exc
return data
def decode_tombstone(data: dict) -> dict:
"""
This function reencodes Tombstone header.
"""
try:
data = decode_simple_header(data)
data["header"]["sessionToken"] = decode_session_token(data["header"]["sessionToken"])
except Exception as exc:
raise ValueError(f"failed to decode JSON output: {exc}") from exc
return data
def decode_session_token(data: dict) -> dict:
"""
This function reencodes a fragment of header which contains
information about session token.
"""
target = data["body"]["object"]["target"]
target["container"] = json_reencode(target["container"]["value"])
target["objects"] = [json_reencode(obj["value"]) for obj in target["objects"]]
return data
def json_reencode(data: str) -> str:
"""
In protobuf JSON output, binary data (Object/Container/Storage Group IDs, etc.)
is Base64-encoded, while we usually operate with Base58-encoded strings.
This function reencodes a given Base64 string into Base58.
"""
return base58.b58encode(base64.b64decode(data)).decode("utf-8")
def encode_for_json(data: str) -> str:
"""
This function encodes binary data for sending them as protobuf
structures.
"""
return base64.b64encode(base58.b58decode(data)).decode("utf-8")
def decode_common_fields(data: dict) -> dict:
"""
Regardless of type (simple/complex Object, Storage Group, etc.), every Object
header contains several common fields.
This function rearranges these fields.
"""
data["objectID"] = json_reencode(data["objectID"]["value"])
header = data["header"]
header["containerID"] = json_reencode(header["containerID"]["value"])
header["ownerID"] = json_reencode(header["ownerID"]["value"])
header["payloadHash"] = json_reencode(header["payloadHash"]["sum"])
header["version"] = f"{header['version']['major']}{header['version']['minor']}"
# Homomorphic hash is optional and its calculation might be disabled in trusted network
if header.get("homomorphicHash"):
header["homomorphicHash"] = json_reencode(header["homomorphicHash"]["sum"])
return data
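A round-trip check of the two reencoders defined above; the pair is redefined here verbatim so the snippet runs standalone:

import base58
import base64

def json_reencode(data: str) -> str:
    return base58.b58encode(base64.b64decode(data)).decode("utf-8")

def encode_for_json(data: str) -> str:
    return base64.b64encode(base58.b58decode(data)).decode("utf-8")

raw = base64.b64encode(b"\x01\x02\x03").decode()
assert encode_for_json(json_reencode(raw)) == raw  # exact inverses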


@ -1,23 +0,0 @@
EACL_OBJ_FILTERS = {
"$Object:objectID": "objectID",
"$Object:containerID": "containerID",
"$Object:ownerID": "ownerID",
"$Object:creationEpoch": "creationEpoch",
"$Object:payloadLength": "payloadLength",
"$Object:payloadHash": "payloadHash",
"$Object:objectType": "objectType",
"$Object:homomorphicHash": "homomorphicHash",
"$Object:version": "version",
}
VERB_FILTER_DEP = {
"$Object:objectID": ["GET", "HEAD", "DELETE", "RANGE", "RANGEHASH"],
"$Object:containerID": ["GET", "PUT", "HEAD", "DELETE", "SEARCH", "RANGE", "RANGEHASH"],
"$Object:ownerID": ["GET", "HEAD"],
"$Object:creationEpoch": ["GET", "PUT", "HEAD"],
"$Object:payloadLength": ["GET", "PUT", "HEAD"],
"$Object:payloadHash": ["GET", "PUT", "HEAD"],
"$Object:objectType": ["GET", "PUT", "HEAD"],
"$Object:homomorphicHash": ["GET", "PUT", "HEAD"],
"$Object:version": ["GET", "PUT", "HEAD"],
}
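VERB_FILTER_DEP maps each object filter to the verbs it applies to, which is exactly the cross-product a parametrized test would iterate. A sketch over a two-entry excerpt of the mapping above:

VERB_FILTER_DEP_EXCERPT = {
    "$Object:objectID": ["GET", "HEAD", "DELETE", "RANGE", "RANGEHASH"],
    "$Object:ownerID": ["GET", "HEAD"],
}

pairs = [
    (obj_filter, verb)
    for obj_filter, verbs in VERB_FILTER_DEP_EXCERPT.items()
    for verb in verbs
]
assert ("$Object:objectID", "DELETE") in pairs
assert ("$Object:ownerID", "PUT") not in pairs  # ownerID applies to GET/HEAD only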


@ -1,9 +0,0 @@
# ACLs with final flag
PUBLIC_ACL_F = "1FBFBFFF"
PRIVATE_ACL_F = "1C8C8CCC"
READONLY_ACL_F = "1FBF8CFF"
# ACLs without final flag set
PUBLIC_ACL = "0FBFBFFF"
INACCESSIBLE_ACL = "40000000"
STICKYBIT_PUB_ACL = "3FFFFFFF"

venv/local-pytest/environment.sh Normal file → Executable file

@ -1,8 +1,6 @@
# DevEnv variables
export NEOFS_MORPH_DISABLE_CACHE=true
export DEVENV_PATH="${DEVENV_PATH:-${VIRTUAL_ENV}/../../neofs-dev-env}"
export DEVENV_PATH="${DEVENV_PATH:-${VIRTUAL_ENV}/../../frostfs-dev-env}"
pushd $DEVENV_PATH > /dev/null
export `make env`
popd > /dev/null
export PYTHONPATH=${PYTHONPATH}:${VIRTUAL_ENV}/../robot/resources/lib/:${VIRTUAL_ENV}/../robot/resources/lib/python_keywords:${VIRTUAL_ENV}/../robot/resources/lib/robot:${VIRTUAL_ENV}/../robot/variables:${VIRTUAL_ENV}/../pytest_tests/helpers:${VIRTUAL_ENV}/../pytest_tests/steps