Compare commits

...

67 commits

Author SHA1 Message Date
e0ac3a583f [#1523] metabase: Remove (*DB).IterateCoveredByTombstones
Remove this method because it isn't used anywhere since 7799f8e4c.

Signed-off-by: Aleksey Savchuk <a.savchuk@yadro.com>
2024-11-29 10:49:24 +00:00
00c608c05e [#1524] tree: Make check APE error get wrapped to api status
Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2024-11-29 10:48:16 +00:00
bba1892fa1 [#1524] ape: Make APE checker return error without status
Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2024-11-29 10:48:16 +00:00
01acec708f [#1525] pilorama: Use AppendUint* helpers from stdlib
gopatch:
```
@@
var slice, e expression
@@
+import "encoding/binary"

-append(slice, byte(e), byte(e >> 8))
+binary.LittleEndian.AppendUint16(slice, e)
```
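For context, a minimal standalone Go sketch of the equivalence the gopatch expresses; the buffer contents here are made up purely for illustration:
```
package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	var e uint16 = 0x1234
	buf := []byte{0xff}

	// Manual little-endian encoding, as the pilorama code did before the patch.
	manual := append(buf, byte(e), byte(e>>8))

	// Equivalent stdlib helper used after the patch (available since Go 1.19).
	helper := binary.LittleEndian.AppendUint16(buf, e)

	fmt.Printf("%x\n%x\n", manual, helper) // ff3412 in both cases
}
```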

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-28 09:40:20 +03:00
aac65001e5 [#1522] adm/frostfsid: Remove unreachable condition
SendConsensusTx() modifies the SentTxs field; if that is not the case, there
is a bug in the code.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-26 08:13:35 +00:00
1170370753 [#1522] adm/helper: Rename createSingleAccounts() -> getSingleAccounts()
It doesn't create any accounts; it merely finds them in the wallet.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-26 08:13:35 +00:00
9e275d44c8 [#1522] adm/helper: Unexport DefaultClientContext()
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-26 08:13:35 +00:00
2469e0c683 [#1522] adm/helper: Remove NewActor() helper
It is used only once, only internally, and it is a single statement.
I see no justification for having it as a separate function.
It introduces confusion, because we also have NewLocalActor().

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-26 08:13:35 +00:00
a6ef4ab524 [#1522] adm/helper: Rename GetN3Client() -> NewRemoteClient()
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-26 08:13:35 +00:00
49959c4166 [#1522] adm/helper: Unexport GetFrostfsIDAdmin()
It is used only in the `helper` package, apart from unit tests.
Move the unit tests to the same package, where they belong.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-26 08:13:35 +00:00
61ee1b5610 [#1522] adm: Simplify LocalClient.SendRawTransaction()
The old code predates the introduction of the Copy() method.
It was also supposed to check errors; however, they are already checked
server-side.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-26 08:13:35 +00:00
b10c954377 [#1522] adm: Split NewLocalClient() into functions
No functional changes.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-26 08:13:35 +00:00
1605391628 [#1522] adm/helper: Simplify Client interface
Just reuse `actor.RPCActor`. No functional changes.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-26 08:13:35 +00:00
b1766e47c7 [#1522] adm/helper: Remove unused GetCommittee() method from the Client interface
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-26 08:13:35 +00:00
caa4253249 [#1522] adm: Remove unnecessary variable declaration
It is better to keep the scope small.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-26 08:13:35 +00:00
7eac5fb18b Release v0.44.0
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-25 14:41:36 +03:00
0e5524dac7 [#1515] adm: Print address in base58 format in morph ape get-admin
Signed-off-by: George Bartolomey <george@bh4.ru>
2024-11-25 10:38:05 +00:00
3ebd560f42 [#1519] cli: Make descriptive help for --rule option
Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2024-11-25 07:21:05 +00:00
1ed7ab75fb [#1517] cli: Print the reason for the ape manager error
Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2024-11-22 15:24:32 +03:00
99f9e59de9 [#1514] adm: Remove --alphabet-wallets flag from readonly commands
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-21 14:37:34 +00:00
256f96e252 [#1514] adm/nns: Rename getRPCClient() to nnsWriter()
Make it more specific and similar to nnsReader().

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-21 14:37:34 +00:00
e5ea95c045 [#1514] adm/nns: Do not return hash from getRPCClient()
It was unused and we employ better abstractions now.
gopatch:
```
@@
var a, b expression
@@
-a, b, _ := getRPCClient(...)
+a, b := getRPCClient(...)
```

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-21 14:37:34 +00:00
9073e555db [#1514] adm/nns: Do not create actor for readonly commands
The `nns get-records` and `nns tokens` commands do not need to sign anything,
so remove the useless actor and use the invoker directly.
`NewLocalActor()` is only used in the `ape` and `nns` packages. The `ape`
package seems to use it correctly, only when alphabet wallets are
provided, so no changes there.
Also, remove the --alphabet-wallets flag from commands that do not need it.
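To illustrate the distinction this commit relies on, here is a hedged neo-go sketch: a read-only invocation goes through an `invoker.Invoker` constructed with `nil` signers, so no alphabet wallet is needed, whereas state-changing calls still require an actor that holds signer accounts. The endpoint and contract hash below are placeholders, not values from the repo:
```
package main

import (
	"context"
	"fmt"

	"github.com/nspcc-dev/neo-go/pkg/rpcclient"
	"github.com/nspcc-dev/neo-go/pkg/rpcclient/invoker"
	"github.com/nspcc-dev/neo-go/pkg/util"
)

func main() {
	c, err := rpcclient.New(context.Background(), "http://localhost:30333", rpcclient.Options{})
	if err != nil {
		panic(err)
	}
	if err := c.Init(); err != nil {
		panic(err)
	}

	// nil signers: nothing is signed, so no wallet has to be opened.
	inv := invoker.New(c, nil)

	// Placeholder NNS contract hash; real code resolves it via the management contract.
	nnsHash, _ := util.Uint160DecodeStringLE("5fdfd1e6a04c23e9bec11ba1b5e1d5d1d1f1f1f1")

	res, err := inv.Call(nnsHash, "getAllRecords", "container.frostfs")
	if err != nil {
		panic(err)
	}
	fmt.Println("VM state:", res.State)
}
```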

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-21 14:37:34 +00:00
2771fdb8c7 [#1514] adm/nns: Use nns.GetAllRecords() wrapper
It was not possible previously, because GetAllRecords() was not declared
safe in frostfs-contract.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-21 14:37:34 +00:00
efa4ce00b8 [#1514] go.mod: Update frostfs-contract to v0.21.0-rc.3
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-21 14:37:34 +00:00
f12f04199e [#1516] traverser: Check for placement vector out of range
The placement vector may contain fewer nodes than required by the policy
due to an outage of one of the nodes.
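A minimal, purely illustrative Go sketch of the kind of guard this implies; the node type and function below are hypothetical and not the traverser's actual API:
```
package main

import "fmt"

// node stands in for the SDK's netmap node type; the names are illustrative only.
type node struct{ addr string }

// nthNode guards against a placement vector that is shorter than the number of
// replicas required by the policy (for example, because one node is down).
func nthNode(vector []node, idx int) (node, bool) {
	if idx >= len(vector) {
		return node{}, false
	}
	return vector[idx], true
}

func main() {
	vector := []node{{addr: "node1:8080"}} // policy expects 2 nodes, only 1 is alive
	if _, ok := nthNode(vector, 1); !ok {
		fmt.Println("placement vector shorter than the policy requires; skip this replica")
	}
}
```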

Signed-off-by: Anton Nikiforov <an.nikiforov@yadro.com>
2024-11-21 14:18:55 +03:00
2e2c62147d [#1513] adm: Move ProtoConfigPath from constants to commonflags package
Refs #932

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-21 09:39:06 +03:00
6ae8667fb4 [#1509] .forgejo: Run actions on push to master
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-20 11:42:12 +03:00
49a4e727fd [#1507] timer/test: Use const for constants
Make it easy to see what the test is about.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-20 08:36:25 +00:00
2e974f734c [#1507] timer/test: Improve test coverage
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-20 08:36:25 +00:00
3042490340 [#1507] timer: Remove unused OnDelta() method
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-20 08:36:25 +00:00
a339b52a60 [#1501] adm: Refactor APE-chains managing subcommands
* Use `cmd/internal/common/ape` parser commands within `ape`
  subcommands
* Use flag names from `cmd/internal/common/ape`

Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2024-11-20 07:58:32 +00:00
4ab4ed6f96 [#1501] cli: Refactor bearer subcommand
* Use `cmd/internal/common/ape` parser commands within `generate-ape-override`
  subcommand
* Use flag names from `cmd/internal/common/ape`

Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2024-11-20 07:58:32 +00:00
3b1364e4cf [#1501] cli: Refactor ape-manager subcommands
* Refactor ape-manager subcommands
* Use `cmd/internal/common/ape` parser commands within ape-manager subcommands
* Use flag names from `cmd/internal/common/ape`

Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2024-11-20 07:58:32 +00:00
daff77b273 [#1501] cli: Refactor local override managing subcommands
* Refactor local override managing subcommands
* Use `cmd/internal/common/ape` parser commands within local
  override subcommands
* Use flag names from `cmd/internal/common/ape`

Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2024-11-20 07:58:32 +00:00
7a7ee71a4d [#1501] cmd: Introduce common APE-chain parser commands
* Introduce common parsing commands to use them in `frostfs-cli`
  and `frostfs-adm` APE-related subcommands
* Introduce common flags for these parsing commands

Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2024-11-20 07:58:32 +00:00
ffe9906266 [#1501] cli: Move APE-chain parser methods to pkg/util
Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2024-11-20 07:58:32 +00:00
ae31ef3602 [#1501] cli: Move PrintHumanReadableAPEChain to a common package
* Both `frostfs-cli` and `frostfs-adm` APE-related subcommands use
  `PrintHumanReadableAPEChain` to print a parsed APE-chain, so it is
  more appropriate to have it in a common package above the `frostfs-cli`
  and `frostfs-adm` folders.

Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2024-11-20 07:58:32 +00:00
e2cb0640f1 [#1501] util: Move eACL-to-APE converter to pkg/util
* `ConvertEACLToAPE` is a useful method that couldn't be imported from
  outside frostfs-node so far, as it lived in `internal`
* Now `ConvertEACLToAPE` and the related structures and unit tests are
  placed in `pkg/util`

Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2024-11-20 07:58:32 +00:00
9f4ce600ac [#1505] adm: Allow to manage additional keys in frostfsid
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-19 16:25:16 +03:00
d82f0d1926 [#1496] node/control: Await until SetNetmapStatus() persists
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-15 16:36:07 +03:00
acd5babd86 [#1496] morph: Merge InvokeRes and WaitParams
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-15 16:36:07 +03:00
b65874d1c3 [#1496] morph: Return InvokeRes from all invoke*() methods
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-15 16:36:07 +03:00
69c63006da [#1496] morph: Move tx waiter to morph package
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-15 16:36:07 +03:00
d77a218f7c [#1493] metabase: Merge Inhume() and DropGraves() for tombstones
DropGraves() is only used to drop gravemarks after a tombstone
removal. Thus, it makes sense to do Inhume() and DropGraves() in one
transaction: it has less overhead and no unexpected problems in the case
of a sudden power failure.
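A minimal bbolt sketch of the underlying idea: when both steps happen inside a single Update(), they either both persist or neither does, even across a power failure. The bucket and key names here are illustrative and not the metabase's real layout:
```
package main

import (
	"log"

	bolt "go.etcd.io/bbolt"
)

func inhumeAndDropGraves(db *bolt.DB, tombstone, object []byte) error {
	return db.Update(func(tx *bolt.Tx) error {
		graves, err := tx.CreateBucketIfNotExists([]byte("graveyard"))
		if err != nil {
			return err
		}
		// Step 1: mark the object as inhumed by the tombstone.
		if err := graves.Put(object, tombstone); err != nil {
			return err
		}
		// Step 2: drop gravemarks left by the tombstone itself.
		// Both steps commit atomically with the enclosing Update().
		return graves.Delete(tombstone)
	})
}

func main() {
	db, err := bolt.Open("meta.db", 0o600, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	if err := inhumeAndDropGraves(db, []byte("TS"), []byte("OBJ")); err != nil {
		log.Fatal(err)
	}
}
```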

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-14 06:47:04 +00:00
44df67492f [#1493] metabase: Split inhumeTx() into 2 functions
No functional changes.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-14 06:47:04 +00:00
1e6f132b4e [#1493] metabase: Pass InhumePrm by value
Unify with the other code, no functional changes.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-14 06:47:04 +00:00
6dc0dc6691 [#1493] shard: Take mode mutex in HandleExpiredTombstones()
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-14 06:47:04 +00:00
f7cb6b4d87 [#1482] Makefile: Update golangci-lint
Signed-off-by: Alexander Chuprov <a.chuprov@yadro.com>
2024-11-13 15:01:41 +00:00
7fc6101bec [#1491] engine/test: Rework engine test utils
- Remove `testNewShard` and `setInitializedShards` because they
violated the default engine workflow. The correct workflow is:
first use `New()`, followed by `Open()`, and then `Init()` (see the
sketch after this list). As a result, adding new logic to
`(*StorageEngine).Init` caused several tests to fail with a panic when
attempting to access uninitialized resources. Now all engines created
with the test utils must be initialized manually. The new helper method
`prepare` can be used for that purpose.
- Additionally, `setInitializedShards` hardcoded the shard worker
pool size, which prevented it from being configured in tests and
benchmarks. This has been fixed as well.
- Ensure engine initialization is done wherever it was missing.
- Refactor `setShardsNumOpts`, `setShardsNumAdditionalOpts`, and
`setShardsNum`. Make them all depend on `setShardsNumOpts`.
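A hedged sketch of that workflow as a test helper; the option wiring and exact signatures are illustrative rather than copied from the repo:
```
package engine_test

import (
	"context"
	"testing"

	"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/engine"
	"github.com/stretchr/testify/require"
)

// newTestEngine follows the New -> Open -> Init order described above.
// Shard options and exact method signatures are illustrative only.
func newTestEngine(t *testing.T) *engine.StorageEngine {
	e := engine.New( /* shard options provided by the test */ )

	ctx := context.Background()
	require.NoError(t, e.Open(ctx))
	require.NoError(t, e.Init(ctx)) // Init now does real work and must not be skipped
	t.Cleanup(func() { _ = e.Close(ctx) })
	return e
}
```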

Signed-off-by: Aleksey Savchuk <a.savchuk@yadro.com>
2024-11-13 14:42:53 +03:00
7ef36749d0 [#1491] engine/test: Move BenchmarkExists to exists_test.go
Move `BenchmarkExists` from `engine_test.go` to `exists_test.go`
for better organization and clarity.

Signed-off-by: Aleksey Savchuk <a.savchuk@yadro.com>
2024-11-13 14:09:29 +03:00
c6066d6ee4 [#1491] engine/test: Use more suitable testing utils here and there
Use `setShardsNum` instead of `setInitializedShards` wherever possible.

Signed-off-by: Aleksey Savchuk <a.savchuk@yadro.com>
2024-11-13 14:09:29 +03:00
612b34d570 [#1437] logger: Add caller skip to log original caller position
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-11-13 10:36:12 +03:00
7429553266 [#1437] node: Fix contextcheck linter
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-11-13 10:36:10 +03:00
6921a89061 [#1437] ir: Fix contextcheck linters
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-11-13 10:36:10 +03:00
16598553d9 [#1437] shard: Fix contextcheck linter
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-11-13 10:36:09 +03:00
c139892117 [#1437] ir: Fix contextcheck linter
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-11-13 10:36:09 +03:00
62b5181618 [#1437] blobovnicza: Fix contextcheck linter
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-11-13 10:36:08 +03:00
6db46257c0 [#1437] node: Use ctx for logging
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-11-13 10:36:07 +03:00
c16dae8b4d [#1437] logger: Use context to log trace id
Signed-off-by: Dmitrii Stepanov
2024-11-13 10:36:07 +03:00
fd004add00 [#1492] metabase: Fix import formatting
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-13 07:30:25 +00:00
8ed7a676d5 [#1492] metabase: Ensure Unmarshal() is called on a cloned slice
The slice returned from bucket.Get() is only valid during the tx
lifetime. Cloning it is not necessary everywhere, but better safe than
sorry.
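A minimal bbolt sketch of the hazard being addressed: the byte slice returned by bucket.Get() aliases a page owned by the transaction, so it must be copied (for example with bytes.Clone) before being retained or unmarshaled outside the transaction. Bucket and key names are illustrative:
```
package main

import (
	"bytes"
	"log"

	bolt "go.etcd.io/bbolt"
)

func readObject(db *bolt.DB, key []byte) ([]byte, error) {
	var data []byte
	err := db.View(func(tx *bolt.Tx) error {
		b := tx.Bucket([]byte("objects"))
		if b == nil {
			return nil
		}
		// bytes.Clone copies the value out of the page owned by the tx,
		// so it remains valid after the transaction ends.
		data = bytes.Clone(b.Get(key))
		return nil
	})
	return data, err // safe to Unmarshal(data) after the tx has ended
}

func main() {
	db, err := bolt.Open("meta.db", 0o600, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	if _, err := readObject(db, []byte("OID")); err != nil {
		log.Fatal(err)
	}
}
```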

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-13 07:30:25 +00:00
b451de94c8 [#1492] metabase: Fix typo in objData
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-13 07:30:25 +00:00
f1556e3c42 [#1488] Makefile: Drop all containers created on env-up
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-11-12 17:24:44 +03:00
e122ff6013 [#1488] dev: Add Jaeger image and enable tracing on debug
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-11-12 17:24:44 +03:00
e2658c7519 [#1488] tracing: KV attributes for spans from config
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-11-12 17:24:43 +03:00
c00f4bab18 [#1488] go.mod: Bump observability version
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-11-12 17:24:43 +03:00
387 changed files with 3418 additions and 3360 deletions

View file

@ -1,6 +1,10 @@
name: Build
on: [pull_request]
on:
pull_request:
push:
branches:
- master
jobs:
build:

View file

@ -1,5 +1,10 @@
name: Pre-commit hooks
on: [pull_request]
on:
pull_request:
push:
branches:
- master
jobs:
precommit:

View file

@ -1,5 +1,10 @@
name: Tests and linters
on: [pull_request]
on:
pull_request:
push:
branches:
- master
jobs:
lint:

View file

@ -1,5 +1,10 @@
name: Vulncheck
on: [pull_request]
on:
pull_request:
push:
branches:
- master
jobs:
vulncheck:

View file

@ -9,6 +9,30 @@ Changelog for FrostFS Node
### Removed
### Updated
## [v0.44.0] - 2024-25-11 - Rongbuk
### Added
- Allow to prioritize nodes during GET traversal via attributes (#1439)
- Add metrics for the frostfsid cache (#1464)
- Customize constant attributes attached to every tracing span (#1488)
- Manage additional keys in the `frostfsid` contract (#1505)
- Describe `--rule` flag in detail for `frostfs-cli ape-manager` subcommands (#1519)
### Changed
- Support richer interaction with the console in `frostfs-cli container policy-playground` (#1396)
- Print address in base58 format in `frostfs-adm morph policy set-admin` (#1515)
### Fixed
- Fix EC object search (#1408)
- Fix EC object put when one of the nodes is unavailable (#1427)
### Removed
- Drop most of the eACL-related code (#1425)
- Remove `--basic-acl` flag from `frostfs-cli container create` (#1483)
### Upgrading from v0.43.0
The metabase schema has changed completely, resync is required.
## [v0.42.0]
### Added

View file

@ -8,8 +8,8 @@ HUB_IMAGE ?= git.frostfs.info/truecloudlab/frostfs
HUB_TAG ?= "$(shell echo ${VERSION} | sed 's/^v//')"
GO_VERSION ?= 1.22
LINT_VERSION ?= 1.61.0
TRUECLOUDLAB_LINT_VERSION ?= 0.0.7
LINT_VERSION ?= 1.62.0
TRUECLOUDLAB_LINT_VERSION ?= 0.0.8
PROTOC_VERSION ?= 25.0
PROTOGEN_FROSTFS_VERSION ?= $(shell go list -f '{{.Version}}' -m git.frostfs.info/TrueCloudLab/frostfs-sdk-go)
PROTOC_OS_VERSION=osx-x86_64
@ -282,7 +282,6 @@ env-up: all
# Shutdown dev environment
env-down:
docker compose -f dev/docker-compose.yml down
docker volume rm -f frostfs-node_neo-go
docker compose -f dev/docker-compose.yml down -v
rm -rf ./$(TMP_DIR)/state
rm -rf ./$(TMP_DIR)/storage

View file

@ -1 +1 @@
v0.42.0
v0.44.0

View file

@ -20,6 +20,7 @@ const (
AlphabetWalletsFlagDesc = "Path to alphabet wallets dir"
LocalDumpFlag = "local-dump"
ProtoConfigPath = "protocol"
ContractsInitFlag = "contracts"
ContractsInitFlagDesc = "Path to archive with compiled FrostFS contracts (the default is to fetch the latest release from the official repository)"
ContractsURLFlag = "contracts-url"

View file

@ -5,35 +5,19 @@ import (
"encoding/json"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/commonflags"
parseutil "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/modules/util"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
apeCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common/ape"
apechain "git.frostfs.info/TrueCloudLab/policy-engine/pkg/chain"
"github.com/nspcc-dev/neo-go/pkg/util"
"github.com/nspcc-dev/neo-go/pkg/encoding/address"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
const (
namespaceTarget = "namespace"
containerTarget = "container"
userTarget = "user"
groupTarget = "group"
jsonFlag = "json"
jsonFlagDesc = "Output rule chains in JSON format"
chainIDFlag = "chain-id"
chainIDDesc = "Rule chain ID"
ruleFlag = "rule"
ruleFlagDesc = "Rule chain in text format"
pathFlag = "path"
pathFlagDesc = "path to encoded chain in JSON or binary format"
targetNameFlag = "target-name"
targetNameDesc = "Resource name in APE resource name format"
targetTypeFlag = "target-type"
targetTypeDesc = "Resource type(container/namespace)"
addrAdminFlag = "addr"
addrAdminDesc = "The address of the admins wallet"
chainNameFlag = "chain-name"
chainNameFlagDesc = "Chain name(ingress|s3)"
jsonFlag = "json"
jsonFlagDesc = "Output rule chains in JSON format"
addrAdminFlag = "addr"
addrAdminDesc = "The address of the admins wallet"
)
var (
@ -101,17 +85,17 @@ func initAddRuleChainCmd() {
addRuleChainCmd.Flags().StringP(commonflags.EndpointFlag, commonflags.EndpointFlagShort, "", commonflags.EndpointFlagDesc)
addRuleChainCmd.Flags().String(commonflags.AlphabetWalletsFlag, "", commonflags.AlphabetWalletsFlagDesc)
addRuleChainCmd.Flags().String(targetTypeFlag, "", targetTypeDesc)
_ = addRuleChainCmd.MarkFlagRequired(targetTypeFlag)
addRuleChainCmd.Flags().String(targetNameFlag, "", targetNameDesc)
_ = addRuleChainCmd.MarkFlagRequired(targetNameFlag)
addRuleChainCmd.Flags().String(apeCmd.TargetTypeFlag, "", apeCmd.TargetTypeFlagDesc)
_ = addRuleChainCmd.MarkFlagRequired(apeCmd.TargetTypeFlag)
addRuleChainCmd.Flags().String(apeCmd.TargetNameFlag, "", apeCmd.TargetTypeFlagDesc)
_ = addRuleChainCmd.MarkFlagRequired(apeCmd.TargetNameFlag)
addRuleChainCmd.Flags().String(chainIDFlag, "", chainIDDesc)
_ = addRuleChainCmd.MarkFlagRequired(chainIDFlag)
addRuleChainCmd.Flags().StringArray(ruleFlag, []string{}, ruleFlagDesc)
addRuleChainCmd.Flags().String(pathFlag, "", pathFlagDesc)
addRuleChainCmd.Flags().String(chainNameFlag, ingress, chainNameFlagDesc)
addRuleChainCmd.MarkFlagsMutuallyExclusive(ruleFlag, pathFlag)
addRuleChainCmd.Flags().String(apeCmd.ChainIDFlag, "", apeCmd.ChainIDFlagDesc)
_ = addRuleChainCmd.MarkFlagRequired(apeCmd.ChainIDFlag)
addRuleChainCmd.Flags().StringArray(apeCmd.RuleFlag, []string{}, apeCmd.RuleFlagDesc)
addRuleChainCmd.Flags().String(apeCmd.PathFlag, "", apeCmd.PathFlagDesc)
addRuleChainCmd.Flags().String(apeCmd.ChainNameFlag, apeCmd.Ingress, apeCmd.ChainNameFlagDesc)
addRuleChainCmd.MarkFlagsMutuallyExclusive(apeCmd.RuleFlag, apeCmd.PathFlag)
}
func initRemoveRuleChainCmd() {
@ -120,26 +104,25 @@ func initRemoveRuleChainCmd() {
removeRuleChainCmd.Flags().StringP(commonflags.EndpointFlag, commonflags.EndpointFlagShort, "", commonflags.EndpointFlagDesc)
removeRuleChainCmd.Flags().String(commonflags.AlphabetWalletsFlag, "", commonflags.AlphabetWalletsFlagDesc)
removeRuleChainCmd.Flags().String(targetTypeFlag, "", targetTypeDesc)
_ = removeRuleChainCmd.MarkFlagRequired(targetTypeFlag)
removeRuleChainCmd.Flags().String(targetNameFlag, "", targetNameDesc)
_ = removeRuleChainCmd.MarkFlagRequired(targetNameFlag)
removeRuleChainCmd.Flags().String(chainIDFlag, "", chainIDDesc)
removeRuleChainCmd.Flags().String(chainNameFlag, ingress, chainNameFlagDesc)
removeRuleChainCmd.Flags().String(apeCmd.TargetTypeFlag, "", apeCmd.TargetTypeFlagDesc)
_ = removeRuleChainCmd.MarkFlagRequired(apeCmd.TargetTypeFlag)
removeRuleChainCmd.Flags().String(apeCmd.TargetNameFlag, "", apeCmd.TargetNameFlagDesc)
_ = removeRuleChainCmd.MarkFlagRequired(apeCmd.TargetNameFlag)
removeRuleChainCmd.Flags().String(apeCmd.ChainIDFlag, "", apeCmd.ChainIDFlagDesc)
removeRuleChainCmd.Flags().String(apeCmd.ChainNameFlag, apeCmd.Ingress, apeCmd.ChainNameFlagDesc)
removeRuleChainCmd.Flags().Bool(commonflags.AllFlag, false, "Remove all chains for target")
removeRuleChainCmd.MarkFlagsMutuallyExclusive(commonflags.AllFlag, chainIDFlag)
removeRuleChainCmd.MarkFlagsMutuallyExclusive(commonflags.AllFlag, apeCmd.ChainIDFlag)
}
func initListRuleChainsCmd() {
Cmd.AddCommand(listRuleChainsCmd)
listRuleChainsCmd.Flags().StringP(commonflags.EndpointFlag, commonflags.EndpointFlagShort, "", commonflags.EndpointFlagDesc)
listRuleChainsCmd.Flags().StringP(targetTypeFlag, "t", "", targetTypeDesc)
_ = listRuleChainsCmd.MarkFlagRequired(targetTypeFlag)
listRuleChainsCmd.Flags().String(targetNameFlag, "", targetNameDesc)
_ = listRuleChainsCmd.MarkFlagRequired(targetNameFlag)
listRuleChainsCmd.Flags().StringP(apeCmd.TargetTypeFlag, "t", "", apeCmd.TargetTypeFlagDesc)
_ = listRuleChainsCmd.MarkFlagRequired(apeCmd.TargetTypeFlag)
listRuleChainsCmd.Flags().String(apeCmd.TargetNameFlag, "", apeCmd.TargetNameFlagDesc)
listRuleChainsCmd.Flags().Bool(jsonFlag, false, jsonFlagDesc)
listRuleChainsCmd.Flags().String(chainNameFlag, ingress, chainNameFlagDesc)
listRuleChainsCmd.Flags().String(apeCmd.ChainNameFlag, apeCmd.Ingress, apeCmd.ChainNameFlagDesc)
}
func initSetAdminCmd() {
@ -161,15 +144,15 @@ func initListTargetsCmd() {
Cmd.AddCommand(listTargetsCmd)
listTargetsCmd.Flags().StringP(commonflags.EndpointFlag, commonflags.EndpointFlagShort, "", commonflags.EndpointFlagDesc)
listTargetsCmd.Flags().StringP(targetTypeFlag, "t", "", targetTypeDesc)
_ = listTargetsCmd.MarkFlagRequired(targetTypeFlag)
listTargetsCmd.Flags().StringP(apeCmd.TargetTypeFlag, "t", "", apeCmd.TargetTypeFlagDesc)
_ = listTargetsCmd.MarkFlagRequired(apeCmd.TargetTypeFlag)
}
func addRuleChain(cmd *cobra.Command, _ []string) {
chain := parseChain(cmd)
chain := apeCmd.ParseChain(cmd)
target := parseTarget(cmd)
pci, ac := newPolicyContractInterface(cmd)
h, vub, err := pci.AddMorphRuleChain(parseChainName(cmd), target, chain)
h, vub, err := pci.AddMorphRuleChain(apeCmd.ParseChainName(cmd), target, chain)
cmd.Println("Waiting for transaction to persist...")
_, err = ac.Wait(h, vub, err)
commonCmd.ExitOnErr(cmd, "add rule chain error: %w", err)
@ -181,14 +164,14 @@ func removeRuleChain(cmd *cobra.Command, _ []string) {
pci, ac := newPolicyContractInterface(cmd)
removeAll, _ := cmd.Flags().GetBool(commonflags.AllFlag)
if removeAll {
h, vub, err := pci.RemoveMorphRuleChainsByTarget(parseChainName(cmd), target)
h, vub, err := pci.RemoveMorphRuleChainsByTarget(apeCmd.ParseChainName(cmd), target)
cmd.Println("Waiting for transaction to persist...")
_, err = ac.Wait(h, vub, err)
commonCmd.ExitOnErr(cmd, "remove rule chain error: %w", err)
cmd.Println("All chains for target removed successfully")
} else {
chainID := parseChainID(cmd)
h, vub, err := pci.RemoveMorphRuleChain(parseChainName(cmd), target, chainID)
chainID := apeCmd.ParseChainID(cmd)
h, vub, err := pci.RemoveMorphRuleChain(apeCmd.ParseChainName(cmd), target, chainID)
cmd.Println("Waiting for transaction to persist...")
_, err = ac.Wait(h, vub, err)
commonCmd.ExitOnErr(cmd, "remove rule chain error: %w", err)
@ -199,7 +182,7 @@ func removeRuleChain(cmd *cobra.Command, _ []string) {
func listRuleChains(cmd *cobra.Command, _ []string) {
target := parseTarget(cmd)
pci, _ := newPolicyContractReaderInterface(cmd)
chains, err := pci.ListMorphRuleChains(parseChainName(cmd), target)
chains, err := pci.ListMorphRuleChains(apeCmd.ParseChainName(cmd), target)
commonCmd.ExitOnErr(cmd, "list rule chains error: %w", err)
if len(chains) == 0 {
return
@ -210,14 +193,14 @@ func listRuleChains(cmd *cobra.Command, _ []string) {
prettyJSONFormat(cmd, chains)
} else {
for _, c := range chains {
parseutil.PrintHumanReadableAPEChain(cmd, c)
apeCmd.PrintHumanReadableAPEChain(cmd, c)
}
}
}
func setAdmin(cmd *cobra.Command, _ []string) {
s, _ := cmd.Flags().GetString(addrAdminFlag)
addr, err := util.Uint160DecodeStringLE(s)
addr, err := address.StringToUint160(s)
commonCmd.ExitOnErr(cmd, "can't decode admin addr: %w", err)
pci, ac := newPolicyContractInterface(cmd)
h, vub, err := pci.SetAdmin(addr)
@ -231,12 +214,11 @@ func getAdmin(cmd *cobra.Command, _ []string) {
pci, _ := newPolicyContractReaderInterface(cmd)
addr, err := pci.GetAdmin()
commonCmd.ExitOnErr(cmd, "unable to get admin: %w", err)
cmd.Println(addr.StringLE())
cmd.Println(address.Uint160ToString(addr))
}
func listTargets(cmd *cobra.Command, _ []string) {
typ, err := parseTargetType(cmd)
commonCmd.ExitOnErr(cmd, "parse target type error: %w", err)
typ := apeCmd.ParseTargetType(cmd)
pci, inv := newPolicyContractReaderInterface(cmd)
sid, it, err := pci.ListTargetsIterator(typ)

View file

@ -2,13 +2,12 @@ package ape
import (
"errors"
"strings"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/modules/morph/constants"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/modules/morph/helper"
parseutil "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/modules/util"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
apechain "git.frostfs.info/TrueCloudLab/policy-engine/pkg/chain"
apeCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common/ape"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
policyengine "git.frostfs.info/TrueCloudLab/policy-engine/pkg/engine"
morph "git.frostfs.info/TrueCloudLab/policy-engine/pkg/morph/policy"
"github.com/nspcc-dev/neo-go/pkg/rpcclient/invoker"
@ -18,90 +17,29 @@ import (
"github.com/spf13/viper"
)
const (
ingress = "ingress"
s3 = "s3"
)
var mChainName = map[string]apechain.Name{
ingress: apechain.Ingress,
s3: apechain.S3,
}
var (
errUnknownTargetType = errors.New("unknown target type")
errChainIDCannotBeEmpty = errors.New("chain id cannot be empty")
errRuleIsNotParsed = errors.New("rule is not passed")
errUnsupportedChainName = errors.New("unsupported chain name")
)
var errUnknownTargetType = errors.New("unknown target type")
func parseTarget(cmd *cobra.Command) policyengine.Target {
name, _ := cmd.Flags().GetString(targetNameFlag)
typ, err := parseTargetType(cmd)
// interpret "root" namespace as empty
if typ == policyengine.Namespace && name == "root" {
name = ""
}
commonCmd.ExitOnErr(cmd, "read target type error: %w", err)
return policyengine.Target{
Name: name,
Type: typ,
}
}
func parseTargetType(cmd *cobra.Command) (policyengine.TargetType, error) {
typ, _ := cmd.Flags().GetString(targetTypeFlag)
typ := apeCmd.ParseTargetType(cmd)
name, _ := cmd.Flags().GetString(apeCmd.TargetNameFlag)
switch typ {
case namespaceTarget:
return policyengine.Namespace, nil
case containerTarget:
return policyengine.Container, nil
case userTarget:
return policyengine.User, nil
case groupTarget:
return policyengine.Group, nil
case policyengine.Namespace:
if name == "root" {
name = ""
}
return policyengine.NamespaceTarget(name)
case policyengine.Container:
var cnr cid.ID
commonCmd.ExitOnErr(cmd, "can't decode container ID: %w", cnr.DecodeString(name))
return policyengine.ContainerTarget(name)
case policyengine.User:
return policyengine.UserTarget(name)
case policyengine.Group:
return policyengine.GroupTarget(name)
default:
commonCmd.ExitOnErr(cmd, "read target type error: %w", errUnknownTargetType)
}
return -1, errUnknownTargetType
}
func parseChainID(cmd *cobra.Command) apechain.ID {
chainID, _ := cmd.Flags().GetString(chainIDFlag)
if chainID == "" {
commonCmd.ExitOnErr(cmd, "read chain id error: %w",
errChainIDCannotBeEmpty)
}
return apechain.ID(chainID)
}
func parseChain(cmd *cobra.Command) *apechain.Chain {
chain := new(apechain.Chain)
if rules, _ := cmd.Flags().GetStringArray(ruleFlag); len(rules) > 0 {
commonCmd.ExitOnErr(cmd, "parser error: %w", parseutil.ParseAPEChain(chain, rules))
} else if encPath, _ := cmd.Flags().GetString(pathFlag); encPath != "" {
commonCmd.ExitOnErr(cmd, "decode binary or json error: %w", parseutil.ParseAPEChainBinaryOrJSON(chain, encPath))
} else {
commonCmd.ExitOnErr(cmd, "parser error: %w", errRuleIsNotParsed)
}
chain.ID = parseChainID(cmd)
cmd.Println("Parsed chain:")
parseutil.PrintHumanReadableAPEChain(cmd, chain)
return chain
}
func parseChainName(cmd *cobra.Command) apechain.Name {
chainName, _ := cmd.Flags().GetString(chainNameFlag)
apeChainName, ok := mChainName[strings.ToLower(chainName)]
if !ok {
commonCmd.ExitOnErr(cmd, "", errUnsupportedChainName)
}
return apeChainName
panic("unreachable")
}
// invokerAdapter adapats invoker.Invoker to ContractStorageInvoker interface.
@ -115,16 +53,15 @@ func (n *invokerAdapter) GetRPCInvoker() invoker.RPCInvoke {
}
func newPolicyContractReaderInterface(cmd *cobra.Command) (*morph.ContractStorageReader, *invoker.Invoker) {
c, err := helper.GetN3Client(viper.GetViper())
c, err := helper.NewRemoteClient(viper.GetViper())
commonCmd.ExitOnErr(cmd, "unable to create NEO rpc client: %w", err)
inv := invoker.New(c, nil)
var ch util.Uint160
r := management.NewReader(inv)
nnsCs, err := helper.GetContractByID(r, 1)
commonCmd.ExitOnErr(cmd, "can't get NNS contract state: %w", err)
ch, err = helper.NNSResolveHash(inv, nnsCs.Hash, helper.DomainOf(constants.PolicyContract))
ch, err := helper.NNSResolveHash(inv, nnsCs.Hash, helper.DomainOf(constants.PolicyContract))
commonCmd.ExitOnErr(cmd, "unable to resolve policy contract hash: %w", err)
invokerAdapter := &invokerAdapter{
@ -136,7 +73,7 @@ func newPolicyContractReaderInterface(cmd *cobra.Command) (*morph.ContractStorag
}
func newPolicyContractInterface(cmd *cobra.Command) (*morph.ContractStorage, *helper.LocalActor) {
c, err := helper.GetN3Client(viper.GetViper())
c, err := helper.NewRemoteClient(viper.GetViper())
commonCmd.ExitOnErr(cmd, "unable to create NEO rpc client: %w", err)
ac, err := helper.NewLocalActor(cmd, c, constants.ConsensusAccountName)

View file

@ -51,7 +51,7 @@ func dumpBalances(cmd *cobra.Command, _ []string) error {
nmHash util.Uint160
)
c, err := helper.GetN3Client(viper.GetViper())
c, err := helper.NewRemoteClient(viper.GetViper())
if err != nil {
return err
}

View file

@ -26,7 +26,7 @@ import (
const forceConfigSet = "force"
func dumpNetworkConfig(cmd *cobra.Command, _ []string) error {
c, err := helper.GetN3Client(viper.GetViper())
c, err := helper.NewRemoteClient(viper.GetViper())
if err != nil {
return fmt.Errorf("can't create N3 client: %w", err)
}

View file

@ -4,7 +4,6 @@ import "time"
const (
ConsensusAccountName = "consensus"
ProtoConfigPath = "protocol"
// MaxAlphabetNodes is the maximum number of candidates allowed, which is currently limited by the size
// of the invocation script.

View file

@ -76,7 +76,7 @@ func dumpContainers(cmd *cobra.Command, _ []string) error {
return fmt.Errorf("invalid filename: %w", err)
}
c, err := helper.GetN3Client(viper.GetViper())
c, err := helper.NewRemoteClient(viper.GetViper())
if err != nil {
return fmt.Errorf("can't create N3 client: %w", err)
}
@ -157,7 +157,7 @@ func dumpSingleContainer(bw *io.BufBinWriter, ch util.Uint160, inv *invoker.Invo
}
func listContainers(cmd *cobra.Command, _ []string) error {
c, err := helper.GetN3Client(viper.GetViper())
c, err := helper.NewRemoteClient(viper.GetViper())
if err != nil {
return fmt.Errorf("can't create N3 client: %w", err)
}

View file

@ -36,7 +36,7 @@ type contractDumpInfo struct {
}
func dumpContractHashes(cmd *cobra.Command, _ []string) error {
c, err := helper.GetN3Client(viper.GetViper())
c, err := helper.NewRemoteClient(viper.GetViper())
if err != nil {
return fmt.Errorf("can't create N3 client: %w", err)
}

View file

@ -0,0 +1,83 @@
package frostfsid
import (
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/commonflags"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
var (
frostfsidAddSubjectKeyCmd = &cobra.Command{
Use: "add-subject-key",
Short: "Add a public key to the subject in frostfsid contract",
PreRun: func(cmd *cobra.Command, _ []string) {
_ = viper.BindPFlag(commonflags.AlphabetWalletsFlag, cmd.Flags().Lookup(commonflags.AlphabetWalletsFlag))
_ = viper.BindPFlag(commonflags.EndpointFlag, cmd.Flags().Lookup(commonflags.EndpointFlag))
},
Run: frostfsidAddSubjectKey,
}
frostfsidRemoveSubjectKeyCmd = &cobra.Command{
Use: "remove-subject-key",
Short: "Remove a public key from the subject in frostfsid contract",
PreRun: func(cmd *cobra.Command, _ []string) {
_ = viper.BindPFlag(commonflags.AlphabetWalletsFlag, cmd.Flags().Lookup(commonflags.AlphabetWalletsFlag))
_ = viper.BindPFlag(commonflags.EndpointFlag, cmd.Flags().Lookup(commonflags.EndpointFlag))
},
Run: frostfsidRemoveSubjectKey,
}
)
func initFrostfsIDAddSubjectKeyCmd() {
Cmd.AddCommand(frostfsidAddSubjectKeyCmd)
ff := frostfsidAddSubjectKeyCmd.Flags()
ff.StringP(commonflags.EndpointFlag, commonflags.EndpointFlagShort, "", commonflags.EndpointFlagDesc)
ff.String(commonflags.AlphabetWalletsFlag, "", commonflags.AlphabetWalletsFlagDesc)
ff.String(subjectAddressFlag, "", "Subject address")
_ = frostfsidAddSubjectKeyCmd.MarkFlagRequired(subjectAddressFlag)
ff.String(subjectKeyFlag, "", "Public key to add")
_ = frostfsidAddSubjectKeyCmd.MarkFlagRequired(subjectKeyFlag)
}
func initFrostfsIDRemoveSubjectKeyCmd() {
Cmd.AddCommand(frostfsidRemoveSubjectKeyCmd)
ff := frostfsidRemoveSubjectKeyCmd.Flags()
ff.StringP(commonflags.EndpointFlag, commonflags.EndpointFlagShort, "", commonflags.EndpointFlagDesc)
ff.String(commonflags.AlphabetWalletsFlag, "", commonflags.AlphabetWalletsFlagDesc)
ff.String(subjectAddressFlag, "", "Subject address")
_ = frostfsidAddSubjectKeyCmd.MarkFlagRequired(subjectAddressFlag)
ff.String(subjectKeyFlag, "", "Public key to remove")
_ = frostfsidAddSubjectKeyCmd.MarkFlagRequired(subjectKeyFlag)
}
func frostfsidAddSubjectKey(cmd *cobra.Command, _ []string) {
addr := getFrostfsIDSubjectAddress(cmd)
pub := getFrostfsIDSubjectKey(cmd)
ffsid, err := newFrostfsIDClient(cmd)
commonCmd.ExitOnErr(cmd, "init contract client: %w", err)
ffsid.addCall(ffsid.roCli.AddSubjectKeyCall(addr, pub))
err = ffsid.sendWait()
commonCmd.ExitOnErr(cmd, "add subject key: %w", err)
}
func frostfsidRemoveSubjectKey(cmd *cobra.Command, _ []string) {
addr := getFrostfsIDSubjectAddress(cmd)
pub := getFrostfsIDSubjectKey(cmd)
ffsid, err := newFrostfsIDClient(cmd)
commonCmd.ExitOnErr(cmd, "init contract client: %w", err)
ffsid.addCall(ffsid.roCli.RemoveSubjectKeyCall(addr, pub))
err = ffsid.sendWait()
commonCmd.ExitOnErr(cmd, "remove subject key: %w", err)
}

View file

@ -1,7 +1,6 @@
package frostfsid
import (
"errors"
"fmt"
"math/big"
"sort"
@ -61,7 +60,6 @@ var (
Use: "list-namespaces",
Short: "List all namespaces in frostfsid",
PreRun: func(cmd *cobra.Command, _ []string) {
_ = viper.BindPFlag(commonflags.AlphabetWalletsFlag, cmd.Flags().Lookup(commonflags.AlphabetWalletsFlag))
_ = viper.BindPFlag(commonflags.EndpointFlag, cmd.Flags().Lookup(commonflags.EndpointFlag))
},
Run: frostfsidListNamespaces,
@ -91,7 +89,6 @@ var (
Use: "list-subjects",
Short: "List subjects in namespace",
PreRun: func(cmd *cobra.Command, _ []string) {
_ = viper.BindPFlag(commonflags.AlphabetWalletsFlag, cmd.Flags().Lookup(commonflags.AlphabetWalletsFlag))
_ = viper.BindPFlag(commonflags.EndpointFlag, cmd.Flags().Lookup(commonflags.EndpointFlag))
},
Run: frostfsidListSubjects,
@ -121,7 +118,6 @@ var (
Use: "list-groups",
Short: "List groups in namespace",
PreRun: func(cmd *cobra.Command, _ []string) {
_ = viper.BindPFlag(commonflags.AlphabetWalletsFlag, cmd.Flags().Lookup(commonflags.AlphabetWalletsFlag))
_ = viper.BindPFlag(commonflags.EndpointFlag, cmd.Flags().Lookup(commonflags.EndpointFlag))
},
Run: frostfsidListGroups,
@ -151,7 +147,6 @@ var (
Use: "list-group-subjects",
Short: "List subjects in group",
PreRun: func(cmd *cobra.Command, _ []string) {
_ = viper.BindPFlag(commonflags.AlphabetWalletsFlag, cmd.Flags().Lookup(commonflags.AlphabetWalletsFlag))
_ = viper.BindPFlag(commonflags.EndpointFlag, cmd.Flags().Lookup(commonflags.EndpointFlag))
},
Run: frostfsidListGroupSubjects,
@ -169,7 +164,6 @@ func initFrostfsIDCreateNamespaceCmd() {
func initFrostfsIDListNamespacesCmd() {
Cmd.AddCommand(frostfsidListNamespacesCmd)
frostfsidListNamespacesCmd.Flags().StringP(commonflags.EndpointFlag, commonflags.EndpointFlagShort, "", commonflags.EndpointFlagDesc)
frostfsidListNamespacesCmd.Flags().String(commonflags.AlphabetWalletsFlag, "", commonflags.AlphabetWalletsFlagDesc)
}
func initFrostfsIDCreateSubjectCmd() {
@ -193,7 +187,6 @@ func initFrostfsIDListSubjectsCmd() {
frostfsidListSubjectsCmd.Flags().StringP(commonflags.EndpointFlag, commonflags.EndpointFlagShort, "", commonflags.EndpointFlagDesc)
frostfsidListSubjectsCmd.Flags().String(namespaceFlag, "", "Namespace to list subjects")
frostfsidListSubjectsCmd.Flags().Bool(includeNamesFlag, false, "Whether include subject name (require additional requests)")
frostfsidListSubjectsCmd.Flags().String(commonflags.AlphabetWalletsFlag, "", commonflags.AlphabetWalletsFlagDesc)
}
func initFrostfsIDCreateGroupCmd() {
@ -217,7 +210,6 @@ func initFrostfsIDListGroupsCmd() {
Cmd.AddCommand(frostfsidListGroupsCmd)
frostfsidListGroupsCmd.Flags().StringP(commonflags.EndpointFlag, commonflags.EndpointFlagShort, "", commonflags.EndpointFlagDesc)
frostfsidListGroupsCmd.Flags().String(namespaceFlag, "", "Namespace to list groups")
frostfsidListGroupsCmd.Flags().String(commonflags.AlphabetWalletsFlag, "", commonflags.AlphabetWalletsFlagDesc)
}
func initFrostfsIDAddSubjectToGroupCmd() {
@ -242,7 +234,6 @@ func initFrostfsIDListGroupSubjectsCmd() {
frostfsidListGroupSubjectsCmd.Flags().String(namespaceFlag, "", "Namespace name")
frostfsidListGroupSubjectsCmd.Flags().Int64(groupIDFlag, 0, "Group id")
frostfsidListGroupSubjectsCmd.Flags().Bool(includeNamesFlag, false, "Whether include subject name (require additional requests)")
frostfsidListGroupSubjectsCmd.Flags().String(commonflags.AlphabetWalletsFlag, "", commonflags.AlphabetWalletsFlagDesc)
}
func frostfsidCreateNamespace(cmd *cobra.Command, _ []string) {
@ -497,10 +488,6 @@ func (f *frostfsidClient) sendWaitRes() (*state.AppExecResult, error) {
}
f.bw.Reset()
if len(f.wCtx.SentTxs) == 0 {
return nil, errors.New("no transactions to wait")
}
f.wCtx.Command.Println("Waiting for transactions to persist...")
return f.roCli.Wait(f.wCtx.SentTxs[0].Hash, f.wCtx.SentTxs[0].Vub, nil)
}
@ -522,7 +509,7 @@ func readIterator(inv *invoker.Invoker, iter *result.Iterator, batchSize int, se
}
func initInvoker(cmd *cobra.Command) (*invoker.Invoker, *state.Contract, util.Uint160) {
c, err := helper.GetN3Client(viper.GetViper())
c, err := helper.NewRemoteClient(viper.GetViper())
commonCmd.ExitOnErr(cmd, "can't create N3 client: %w", err)
inv := invoker.New(c, nil)

View file

@ -1,59 +1,12 @@
package frostfsid
import (
"encoding/hex"
"testing"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/modules/morph/helper"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/ape"
"github.com/nspcc-dev/neo-go/pkg/crypto/keys"
"github.com/nspcc-dev/neo-go/pkg/encoding/address"
"github.com/spf13/viper"
"github.com/stretchr/testify/require"
)
func TestFrostfsIDConfig(t *testing.T) {
pks := make([]*keys.PrivateKey, 4)
for i := range pks {
pk, err := keys.NewPrivateKey()
require.NoError(t, err)
pks[i] = pk
}
fmts := []string{
pks[0].GetScriptHash().StringLE(),
address.Uint160ToString(pks[1].GetScriptHash()),
hex.EncodeToString(pks[2].PublicKey().UncompressedBytes()),
hex.EncodeToString(pks[3].PublicKey().Bytes()),
}
for i := range fmts {
v := viper.New()
v.Set("frostfsid.admin", fmts[i])
actual, found, err := helper.GetFrostfsIDAdmin(v)
require.NoError(t, err)
require.True(t, found)
require.Equal(t, pks[i].GetScriptHash(), actual)
}
t.Run("bad key", func(t *testing.T) {
v := viper.New()
v.Set("frostfsid.admin", "abc")
_, found, err := helper.GetFrostfsIDAdmin(v)
require.Error(t, err)
require.True(t, found)
})
t.Run("missing key", func(t *testing.T) {
v := viper.New()
_, found, err := helper.GetFrostfsIDAdmin(v)
require.NoError(t, err)
require.False(t, found)
})
}
func TestNamespaceRegexp(t *testing.T) {
for _, tc := range []struct {
name string

View file

@ -12,4 +12,6 @@ func init() {
initFrostfsIDAddSubjectToGroupCmd()
initFrostfsIDRemoveSubjectFromGroupCmd()
initFrostfsIDListGroupSubjectsCmd()
initFrostfsIDAddSubjectKeyCmd()
initFrostfsIDRemoveSubjectKeyCmd()
}

View file

@ -38,38 +38,24 @@ func NewLocalActor(cmd *cobra.Command, c actor.RPCActor, accName string) (*Local
walletDir := config.ResolveHomePath(viper.GetString(commonflags.AlphabetWalletsFlag))
var act *actor.Actor
var accounts []*wallet.Account
if walletDir == "" {
account, err := wallet.NewAccount()
commonCmd.ExitOnErr(cmd, "unable to create dummy account: %w", err)
act, err = actor.New(c, []actor.SignerAccount{{
Signer: transaction.Signer{
Account: account.Contract.ScriptHash(),
Scopes: transaction.Global,
},
Account: account,
}})
if err != nil {
return nil, err
}
} else {
wallets, err := GetAlphabetWallets(viper.GetViper(), walletDir)
commonCmd.ExitOnErr(cmd, "unable to get alphabet wallets: %w", err)
for _, w := range wallets {
acc, err := GetWalletAccount(w, accName)
commonCmd.ExitOnErr(cmd, fmt.Sprintf("can't find %s account: %%w", accName), err)
accounts = append(accounts, acc)
}
act, err = actor.New(c, []actor.SignerAccount{{
Signer: transaction.Signer{
Account: accounts[0].Contract.ScriptHash(),
Scopes: transaction.Global,
},
Account: accounts[0],
}})
if err != nil {
return nil, err
}
wallets, err := GetAlphabetWallets(viper.GetViper(), walletDir)
commonCmd.ExitOnErr(cmd, "unable to get alphabet wallets: %w", err)
for _, w := range wallets {
acc, err := GetWalletAccount(w, accName)
commonCmd.ExitOnErr(cmd, fmt.Sprintf("can't find %s account: %%w", accName), err)
accounts = append(accounts, acc)
}
act, err = actor.New(c, []actor.SignerAccount{{
Signer: transaction.Signer{
Account: accounts[0].Contract.ScriptHash(),
Scopes: transaction.Global,
},
Account: accounts[0],
}})
if err != nil {
return nil, err
}
return &LocalActor{
neoActor: act,

View file

@ -82,7 +82,7 @@ func GetContractDeployData(c *InitializeContext, ctrName string, keysParam []any
h, found, err = getFrostfsIDAdminFromContract(c.ReadOnlyInvoker)
}
if method != constants.UpdateMethodName || err == nil && !found {
h, found, err = GetFrostfsIDAdmin(viper.GetViper())
h, found, err = getFrostfsIDAdmin(viper.GetViper())
}
if err != nil {
return nil, err

View file

@ -11,7 +11,7 @@ import (
const frostfsIDAdminConfigKey = "frostfsid.admin"
func GetFrostfsIDAdmin(v *viper.Viper) (util.Uint160, bool, error) {
func getFrostfsIDAdmin(v *viper.Viper) (util.Uint160, bool, error) {
admin := v.GetString(frostfsIDAdminConfigKey)
if admin == "" {
return util.Uint160{}, false, nil

View file

@ -0,0 +1,53 @@
package helper
import (
"encoding/hex"
"testing"
"github.com/nspcc-dev/neo-go/pkg/crypto/keys"
"github.com/nspcc-dev/neo-go/pkg/encoding/address"
"github.com/spf13/viper"
"github.com/stretchr/testify/require"
)
func TestFrostfsIDConfig(t *testing.T) {
pks := make([]*keys.PrivateKey, 4)
for i := range pks {
pk, err := keys.NewPrivateKey()
require.NoError(t, err)
pks[i] = pk
}
fmts := []string{
pks[0].GetScriptHash().StringLE(),
address.Uint160ToString(pks[1].GetScriptHash()),
hex.EncodeToString(pks[2].PublicKey().UncompressedBytes()),
hex.EncodeToString(pks[3].PublicKey().Bytes()),
}
for i := range fmts {
v := viper.New()
v.Set("frostfsid.admin", fmts[i])
actual, found, err := getFrostfsIDAdmin(v)
require.NoError(t, err)
require.True(t, found)
require.Equal(t, pks[i].GetScriptHash(), actual)
}
t.Run("bad key", func(t *testing.T) {
v := viper.New()
v.Set("frostfsid.admin", "abc")
_, found, err := getFrostfsIDAdmin(v)
require.Error(t, err)
require.True(t, found)
})
t.Run("missing key", func(t *testing.T) {
v := viper.New()
_, found, err := getFrostfsIDAdmin(v)
require.NoError(t, err)
require.False(t, found)
})
}

View file

@ -134,12 +134,12 @@ func NewInitializeContext(cmd *cobra.Command, v *viper.Viper) (*InitializeContex
return nil, err
}
accounts, err := createWalletAccounts(wallets)
accounts, err := getSingleAccounts(wallets)
if err != nil {
return nil, err
}
cliCtx, err := DefaultClientContext(c, committeeAcc)
cliCtx, err := defaultClientContext(c, committeeAcc)
if err != nil {
return nil, fmt.Errorf("client context: %w", err)
}
@ -191,7 +191,7 @@ func createClient(cmd *cobra.Command, v *viper.Viper, wallets []*wallet.Wallet)
}
c, err = NewLocalClient(cmd, v, wallets, ldf.Value.String())
} else {
c, err = GetN3Client(v)
c, err = NewRemoteClient(v)
}
if err != nil {
return nil, fmt.Errorf("can't create N3 client: %w", err)
@ -211,7 +211,7 @@ func getContractsPath(cmd *cobra.Command, needContracts bool) (string, error) {
return ctrPath, nil
}
func createWalletAccounts(wallets []*wallet.Wallet) ([]*wallet.Account, error) {
func getSingleAccounts(wallets []*wallet.Wallet) ([]*wallet.Account, error) {
accounts := make([]*wallet.Account, len(wallets))
for i, w := range wallets {
acc, err := GetWalletAccount(w, constants.SingleAccountName)

View file

@ -8,6 +8,7 @@ import (
"sort"
"time"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/modules/morph/constants"
"github.com/google/uuid"
"github.com/nspcc-dev/neo-go/pkg/config"
@ -47,7 +48,7 @@ type LocalClient struct {
}
func NewLocalClient(cmd *cobra.Command, v *viper.Viper, wallets []*wallet.Wallet, dumpPath string) (*LocalClient, error) {
cfg, err := config.LoadFile(v.GetString(constants.ProtoConfigPath))
cfg, err := config.LoadFile(v.GetString(commonflags.ProtoConfigPath))
if err != nil {
return nil, err
}
@ -57,17 +58,59 @@ func NewLocalClient(cmd *cobra.Command, v *viper.Viper, wallets []*wallet.Wallet
return nil, err
}
m := smartcontract.GetDefaultHonestNodeCount(int(cfg.ProtocolConfiguration.ValidatorsCount))
accounts := make([]*wallet.Account, len(wallets))
for i := range accounts {
accounts[i], err = GetWalletAccount(wallets[i], constants.ConsensusAccountName)
if err != nil {
return nil, err
go bc.Run()
accounts, err := getBlockSigningAccounts(cfg.ProtocolConfiguration, wallets)
if err != nil {
return nil, err
}
if cmd.Name() != "init" {
if err := restoreDump(bc, dumpPath); err != nil {
return nil, fmt.Errorf("restore dump: %w", err)
}
}
return &LocalClient{
bc: bc,
dumpPath: dumpPath,
accounts: accounts,
}, nil
}
func restoreDump(bc *core.Blockchain, dumpPath string) error {
f, err := os.OpenFile(dumpPath, os.O_RDONLY, 0o600)
if err != nil {
return fmt.Errorf("can't open local dump: %w", err)
}
defer f.Close()
r := io.NewBinReaderFromIO(f)
var skip uint32
if bc.BlockHeight() != 0 {
skip = bc.BlockHeight() + 1
}
count := r.ReadU32LE() - skip
if err := chaindump.Restore(bc, r, skip, count, nil); err != nil {
return err
}
return nil
}
func getBlockSigningAccounts(cfg config.ProtocolConfiguration, wallets []*wallet.Wallet) ([]*wallet.Account, error) {
accounts := make([]*wallet.Account, len(wallets))
for i := range accounts {
acc, err := GetWalletAccount(wallets[i], constants.ConsensusAccountName)
if err != nil {
return nil, err
}
accounts[i] = acc
}
indexMap := make(map[string]int)
for i, pub := range cfg.ProtocolConfiguration.StandbyCommittee {
for i, pub := range cfg.StandbyCommittee {
indexMap[pub] = i
}
@ -76,37 +119,12 @@ func NewLocalClient(cmd *cobra.Command, v *viper.Viper, wallets []*wallet.Wallet
pj := accounts[j].PrivateKey().PublicKey().Bytes()
return indexMap[string(pi)] < indexMap[string(pj)]
})
sort.Slice(accounts[:cfg.ProtocolConfiguration.ValidatorsCount], func(i, j int) bool {
sort.Slice(accounts[:cfg.ValidatorsCount], func(i, j int) bool {
return accounts[i].PublicKey().Cmp(accounts[j].PublicKey()) == -1
})
go bc.Run()
if cmd.Name() != "init" {
f, err := os.OpenFile(dumpPath, os.O_RDONLY, 0o600)
if err != nil {
return nil, fmt.Errorf("can't open local dump: %w", err)
}
defer f.Close()
r := io.NewBinReaderFromIO(f)
var skip uint32
if bc.BlockHeight() != 0 {
skip = bc.BlockHeight() + 1
}
count := r.ReadU32LE() - skip
if err := chaindump.Restore(bc, r, skip, count, nil); err != nil {
return nil, fmt.Errorf("can't restore local dump: %w", err)
}
}
return &LocalClient{
bc: bc,
dumpPath: dumpPath,
accounts: accounts[:m],
}, nil
m := smartcontract.GetDefaultHonestNodeCount(int(cfg.ValidatorsCount))
return accounts[:m], nil
}
func (l *LocalClient) GetBlockCount() (uint32, error) {
@ -127,11 +145,6 @@ func (l *LocalClient) GetApplicationLog(h util.Uint256, t *trigger.Type) (*resul
return &a, nil
}
func (l *LocalClient) GetCommittee() (keys.PublicKeys, error) {
// not used by `morph init` command
panic("unexpected call")
}
// InvokeFunction is implemented via `InvokeScript`.
func (l *LocalClient) InvokeFunction(h util.Uint160, method string, sPrm []smartcontract.Parameter, ss []transaction.Signer) (*result.Invoke, error) {
var err error
@ -295,13 +308,7 @@ func (l *LocalClient) InvokeScript(script []byte, signers []transaction.Signer)
}
func (l *LocalClient) SendRawTransaction(tx *transaction.Transaction) (util.Uint256, error) {
// We need to test that transaction was formed correctly to catch as many errors as we can.
bs := tx.Bytes()
_, err := transaction.NewTransactionFromBytes(bs)
if err != nil {
return tx.Hash(), fmt.Errorf("invalid transaction: %w", err)
}
tx = tx.Copy()
l.transactions = append(l.transactions, tx)
return tx.Hash(), nil
}

View file

@ -10,7 +10,6 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/commonflags"
"github.com/nspcc-dev/neo-go/pkg/core/state"
"github.com/nspcc-dev/neo-go/pkg/core/transaction"
"github.com/nspcc-dev/neo-go/pkg/crypto/keys"
"github.com/nspcc-dev/neo-go/pkg/neorpc/result"
"github.com/nspcc-dev/neo-go/pkg/rpcclient"
"github.com/nspcc-dev/neo-go/pkg/rpcclient/actor"
@ -25,15 +24,10 @@ import (
// Client represents N3 client interface capable of test-invoking scripts
// and sending signed transactions to chain.
type Client interface {
invoker.RPCInvoke
actor.RPCActor
GetBlockCount() (uint32, error)
GetNativeContracts() ([]state.Contract, error)
GetApplicationLog(util.Uint256, *trigger.Type) (*result.ApplicationLog, error)
GetVersion() (*result.Version, error)
SendRawTransaction(*transaction.Transaction) (util.Uint256, error)
GetCommittee() (keys.PublicKeys, error)
CalculateNetworkFee(tx *transaction.Transaction) (int64, error)
}
type HashVUBPair struct {
@ -48,7 +42,7 @@ type ClientContext struct {
SentTxs []HashVUBPair
}
func GetN3Client(v *viper.Viper) (Client, error) {
func NewRemoteClient(v *viper.Viper) (Client, error) {
// number of opened connections
// by neo-go client per one host
const (
@ -88,8 +82,14 @@ func GetN3Client(v *viper.Viper) (Client, error) {
return c, nil
}
func DefaultClientContext(c Client, committeeAcc *wallet.Account) (*ClientContext, error) {
commAct, err := NewActor(c, committeeAcc)
func defaultClientContext(c Client, committeeAcc *wallet.Account) (*ClientContext, error) {
commAct, err := actor.New(c, []actor.SignerAccount{{
Signer: transaction.Signer{
Account: committeeAcc.Contract.ScriptHash(),
Scopes: transaction.Global,
},
Account: committeeAcc,
}})
if err != nil {
return nil, err
}

View file

@ -15,10 +15,8 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/modules/morph/constants"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/innerring"
"github.com/nspcc-dev/neo-go/pkg/core/state"
"github.com/nspcc-dev/neo-go/pkg/core/transaction"
"github.com/nspcc-dev/neo-go/pkg/crypto/keys"
"github.com/nspcc-dev/neo-go/pkg/encoding/fixedn"
"github.com/nspcc-dev/neo-go/pkg/rpcclient/actor"
"github.com/nspcc-dev/neo-go/pkg/rpcclient/management"
"github.com/nspcc-dev/neo-go/pkg/wallet"
"github.com/spf13/viper"
@ -87,16 +85,6 @@ func openAlphabetWallets(v *viper.Viper, walletDir string) ([]*wallet.Wallet, er
return wallets, nil
}
func NewActor(c actor.RPCActor, committeeAcc *wallet.Account) (*actor.Actor, error) {
return actor.New(c, []actor.SignerAccount{{
Signer: transaction.Signer{
Account: committeeAcc.Contract.ScriptHash(),
Scopes: transaction.Global,
},
Account: committeeAcc,
}})
}
func ReadContract(ctrPath, ctrName string) (*ContractState, error) {
rawNef, err := os.ReadFile(filepath.Join(ctrPath, ctrName+"_contract.nef"))
if err != nil {

View file

@ -62,7 +62,7 @@ func testInitialize(t *testing.T, committeeSize int) {
v := viper.GetViper()
require.NoError(t, generateTestData(testdataDir, committeeSize))
v.Set(constants.ProtoConfigPath, filepath.Join(testdataDir, protoFileName))
v.Set(commonflags.ProtoConfigPath, filepath.Join(testdataDir, protoFileName))
// Set to the path or remove the next statement to download from the network.
require.NoError(t, Cmd.Flags().Set(commonflags.ContractsInitFlag, contractsPath))

View file

@ -2,7 +2,6 @@ package initialize
import (
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/modules/morph/constants"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
@ -32,7 +31,7 @@ var Cmd = &cobra.Command{
_ = viper.BindPFlag(commonflags.ContainerFeeInitFlag, cmd.Flags().Lookup(containerFeeCLIFlag))
_ = viper.BindPFlag(commonflags.ContainerAliasFeeInitFlag, cmd.Flags().Lookup(containerAliasFeeCLIFlag))
_ = viper.BindPFlag(commonflags.WithdrawFeeInitFlag, cmd.Flags().Lookup(withdrawFeeCLIFlag))
_ = viper.BindPFlag(constants.ProtoConfigPath, cmd.Flags().Lookup(constants.ProtoConfigPath))
_ = viper.BindPFlag(commonflags.ProtoConfigPath, cmd.Flags().Lookup(commonflags.ProtoConfigPath))
},
RunE: initializeSideChainCmd,
}
@ -48,7 +47,7 @@ func initInitCmd() {
// Defaults are taken from neo-preodolenie.
Cmd.Flags().Uint64(containerFeeCLIFlag, 1000, "Container registration fee")
Cmd.Flags().Uint64(containerAliasFeeCLIFlag, 500, "Container alias fee")
Cmd.Flags().String(constants.ProtoConfigPath, "", "Path to the consensus node configuration")
Cmd.Flags().String(commonflags.ProtoConfigPath, "", "Path to the consensus node configuration")
Cmd.Flags().String(commonflags.LocalDumpFlag, "", "Path to the blocks dump file")
Cmd.MarkFlagsMutuallyExclusive(commonflags.ContractsInitFlag, commonflags.ContractsURLFlag)
}

View file

@ -13,7 +13,7 @@ import (
)
func listNetmapCandidatesNodes(cmd *cobra.Command, _ []string) {
c, err := helper.GetN3Client(viper.GetViper())
c, err := helper.NewRemoteClient(viper.GetViper())
commonCmd.ExitOnErr(cmd, "can't create N3 client: %w", err)
inv := invoker.New(c, nil)

View file

@ -12,7 +12,6 @@ var (
Short: "List netmap candidates nodes",
PreRun: func(cmd *cobra.Command, _ []string) {
_ = viper.BindPFlag(commonflags.EndpointFlag, cmd.Flags().Lookup(commonflags.EndpointFlag))
_ = viper.BindPFlag(commonflags.AlphabetWalletsFlag, cmd.Flags().Lookup(commonflags.AlphabetWalletsFlag))
},
Run: listNetmapCandidatesNodes,
}

View file

@ -24,7 +24,7 @@ func initRegisterCmd() {
}
func registerDomain(cmd *cobra.Command, _ []string) {
c, actor, _ := getRPCClient(cmd)
c, actor := nnsWriter(cmd)
name, _ := cmd.Flags().GetString(nnsNameFlag)
email, _ := cmd.Flags().GetString(nnsEmailFlag)
@ -53,7 +53,7 @@ func initDeleteCmd() {
}
func deleteDomain(cmd *cobra.Command, _ []string) {
c, actor, _ := getRPCClient(cmd)
c, actor := nnsWriter(cmd)
name, _ := cmd.Flags().GetString(nnsNameFlag)
h, vub, err := c.DeleteDomain(name)

View file

@ -5,15 +5,15 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/modules/morph/constants"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/modules/morph/helper"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"github.com/nspcc-dev/neo-go/pkg/rpcclient/invoker"
"github.com/nspcc-dev/neo-go/pkg/rpcclient/management"
"github.com/nspcc-dev/neo-go/pkg/util"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
func getRPCClient(cmd *cobra.Command) (*client.Contract, *helper.LocalActor, util.Uint160) {
func nnsWriter(cmd *cobra.Command) (*client.Contract, *helper.LocalActor) {
v := viper.GetViper()
c, err := helper.GetN3Client(v)
c, err := helper.NewRemoteClient(v)
commonCmd.ExitOnErr(cmd, "unable to create NEO rpc client: %w", err)
ac, err := helper.NewLocalActor(cmd, c, constants.CommitteeAccountName)
@ -22,5 +22,17 @@ func getRPCClient(cmd *cobra.Command) (*client.Contract, *helper.LocalActor, uti
r := management.NewReader(ac.Invoker)
nnsCs, err := helper.GetContractByID(r, 1)
commonCmd.ExitOnErr(cmd, "can't get NNS contract state: %w", err)
return client.New(ac, nnsCs.Hash), ac, nnsCs.Hash
return client.New(ac, nnsCs.Hash), ac
}
func nnsReader(cmd *cobra.Command) (*client.ContractReader, *invoker.Invoker) {
c, err := helper.NewRemoteClient(viper.GetViper())
commonCmd.ExitOnErr(cmd, "unable to create NEO rpc client: %w", err)
inv := invoker.New(c, nil)
r := management.NewReader(inv)
nnsCs, err := helper.GetContractByID(r, 1)
commonCmd.ExitOnErr(cmd, "can't get NNS contract state: %w", err)
return client.NewReader(inv, nnsCs.Hash), inv
}
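
For illustration only (not part of the change set), a read-only subcommand could use the nnsReader helper above in the same way getRecords and listTokens do further down. The helper, nnsNameFlag and the GetRecords call are taken from this diff; the command itself and the shape of the returned stack items are assumptions:

```
// Hypothetical sketch reusing helpers visible in this diff (nnsReader,
// nnsNameFlag, commonCmd); not code from the change set.
func printCNAMERecords(cmd *cobra.Command, _ []string) {
	c, _ := nnsReader(cmd) // contract reader + invoker, no wallet required
	name, _ := cmd.Flags().GetString(nnsNameFlag)

	items, err := c.GetRecords(name, big.NewInt(int64(nns.CNAME)))
	commonCmd.ExitOnErr(cmd, "unable to get records: %w", err)

	for _, item := range items {
		bs, err := item.TryBytes()
		commonCmd.ExitOnErr(cmd, "malformed record: %w", err)
		cmd.Println(string(bs))
	}
}
```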


@ -8,7 +8,6 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-contract/nns"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/commonflags"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"github.com/nspcc-dev/neo-go/pkg/rpcclient/unwrap"
"github.com/nspcc-dev/neo-go/pkg/vm/stackitem"
"github.com/spf13/cobra"
)
@ -29,7 +28,6 @@ func initAddRecordCmd() {
func initGetRecordsCmd() {
Cmd.AddCommand(getRecordsCmd)
getRecordsCmd.Flags().StringP(commonflags.EndpointFlag, commonflags.EndpointFlagShort, "", commonflags.EndpointFlagDesc)
getRecordsCmd.Flags().String(commonflags.AlphabetWalletsFlag, "", commonflags.AlphabetWalletsFlagDesc)
getRecordsCmd.Flags().String(nnsNameFlag, "", nnsNameFlagDesc)
getRecordsCmd.Flags().String(nnsRecordTypeFlag, "", nnsRecordTypeFlagDesc)
@ -61,7 +59,7 @@ func initDelRecordCmd() {
}
func addRecord(cmd *cobra.Command, _ []string) {
c, actor, _ := getRPCClient(cmd)
c, actor := nnsWriter(cmd)
name, _ := cmd.Flags().GetString(nnsNameFlag)
data, _ := cmd.Flags().GetString(nnsRecordDataFlag)
recordType, _ := cmd.Flags().GetString(nnsRecordTypeFlag)
@ -77,16 +75,16 @@ func addRecord(cmd *cobra.Command, _ []string) {
}
func getRecords(cmd *cobra.Command, _ []string) {
c, act, hash := getRPCClient(cmd)
c, inv := nnsReader(cmd)
name, _ := cmd.Flags().GetString(nnsNameFlag)
recordType, _ := cmd.Flags().GetString(nnsRecordTypeFlag)
if recordType == "" {
sid, r, err := unwrap.SessionIterator(act.Invoker.Call(hash, "getAllRecords", name))
sid, r, err := c.GetAllRecords(name)
commonCmd.ExitOnErr(cmd, "unable to get records: %w", err)
defer func() {
_ = act.Invoker.TerminateSession(sid)
_ = inv.TerminateSession(sid)
}()
items, err := act.Invoker.TraverseIterator(sid, &r, 0)
items, err := inv.TraverseIterator(sid, &r, 0)
commonCmd.ExitOnErr(cmd, "unable to get records: %w", err)
for len(items) != 0 {
for j := range items {
@ -97,7 +95,7 @@ func getRecords(cmd *cobra.Command, _ []string) {
recordTypeToString(nns.RecordType(rs[1].Value().(*big.Int).Int64())),
string(bs))
}
items, err = act.Invoker.TraverseIterator(sid, &r, 0)
items, err = inv.TraverseIterator(sid, &r, 0)
commonCmd.ExitOnErr(cmd, "unable to get records: %w", err)
}
} else {
@ -114,7 +112,7 @@ func getRecords(cmd *cobra.Command, _ []string) {
}
func delRecords(cmd *cobra.Command, _ []string) {
c, actor, _ := getRPCClient(cmd)
c, actor := nnsWriter(cmd)
name, _ := cmd.Flags().GetString(nnsNameFlag)
recordType, _ := cmd.Flags().GetString(nnsRecordTypeFlag)
typ, err := getRecordType(recordType)
@ -129,7 +127,7 @@ func delRecords(cmd *cobra.Command, _ []string) {
}
func delRecord(cmd *cobra.Command, _ []string) {
c, actor, _ := getRPCClient(cmd)
c, actor := nnsWriter(cmd)
name, _ := cmd.Flags().GetString(nnsNameFlag)
data, _ := cmd.Flags().GetString(nnsRecordDataFlag)
recordType, _ := cmd.Flags().GetString(nnsRecordTypeFlag)


@ -14,7 +14,7 @@ func initRenewCmd() {
}
func renewDomain(cmd *cobra.Command, _ []string) {
c, actor, _ := getRPCClient(cmd)
c, actor := nnsWriter(cmd)
name, _ := cmd.Flags().GetString(nnsNameFlag)
h, vub, err := c.Renew(name)
commonCmd.ExitOnErr(cmd, "unable to renew domain: %w", err)


@ -18,12 +18,11 @@ const (
func initTokensCmd() {
Cmd.AddCommand(tokensCmd)
tokensCmd.Flags().StringP(commonflags.EndpointFlag, commonflags.EndpointFlagShort, "", commonflags.EndpointFlagDesc)
tokensCmd.Flags().String(commonflags.AlphabetWalletsFlag, "", commonflags.AlphabetWalletsFlagDesc)
tokensCmd.Flags().BoolP(commonflags.Verbose, commonflags.VerboseShorthand, false, verboseDesc)
}
func listTokens(cmd *cobra.Command, _ []string) {
c, _, _ := getRPCClient(cmd)
c, _ := nnsReader(cmd)
it, err := c.Tokens()
commonCmd.ExitOnErr(cmd, "unable to get tokens: %w", err)
for toks, err := it.Next(10); err == nil && len(toks) > 0; toks, err = it.Next(10) {
@ -41,7 +40,7 @@ func listTokens(cmd *cobra.Command, _ []string) {
}
}
func getCnameRecord(c *client.Contract, token []byte) (string, error) {
func getCnameRecord(c *client.ContractReader, token []byte) (string, error) {
items, err := c.GetRecords(string(token), big.NewInt(int64(nns.CNAME)))
// GetRecords returns the error "not an array" if the domain does not contain records.


@ -30,7 +30,7 @@ func initUpdateCmd() {
}
func updateSOA(cmd *cobra.Command, _ []string) {
c, actor, _ := getRPCClient(cmd)
c, actor := nnsWriter(cmd)
name, _ := cmd.Flags().GetString(nnsNameFlag)
email, _ := cmd.Flags().GetString(nnsEmailFlag)


@ -89,7 +89,7 @@ func depositNotary(cmd *cobra.Command, _ []string) error {
}
func transferGas(cmd *cobra.Command, acc *wallet.Account, accHash util.Uint160, gasAmount fixedn.Fixed8, till int64) error {
c, err := helper.GetN3Client(viper.GetViper())
c, err := helper.NewRemoteClient(viper.GetViper())
if err != nil {
return err
}


@ -62,7 +62,7 @@ func SetPolicyCmd(cmd *cobra.Command, args []string) error {
}
func dumpPolicyCmd(cmd *cobra.Command, _ []string) error {
c, err := helper.GetN3Client(viper.GetViper())
c, err := helper.NewRemoteClient(viper.GetViper())
commonCmd.ExitOnErr(cmd, "can't create N3 client:", err)
inv := invoker.New(c, nil)


@ -1,44 +1,19 @@
package apemanager
import (
"encoding/hex"
"errors"
"fmt"
internalclient "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/modules/util"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
apeCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common/ape"
apeSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/ape"
client_sdk "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
apechain "git.frostfs.info/TrueCloudLab/policy-engine/pkg/chain"
"git.frostfs.info/TrueCloudLab/policy-engine/pkg/engine"
"github.com/spf13/cobra"
)
const (
chainIDFlag = "chain-id"
chainIDHexFlag = "chain-id-hex"
ruleFlag = "rule"
pathFlag = "path"
)
const (
targetNameFlag = "target-name"
targetNameDesc = "Resource name in APE resource name format"
targetTypeFlag = "target-type"
targetTypeDesc = "Resource type(container/namespace)"
)
const (
namespaceTarget = "namespace"
containerTarget = "container"
userTarget = "user"
groupTarget = "group"
)
var errUnknownTargetType = errors.New("unknown target type")
var addCmd = &cobra.Command{
Use: "add",
Short: "Add rule chain for a target",
@ -49,55 +24,28 @@ var addCmd = &cobra.Command{
}
func parseTarget(cmd *cobra.Command) (ct apeSDK.ChainTarget) {
typ, _ := cmd.Flags().GetString(targetTypeFlag)
name, _ := cmd.Flags().GetString(targetNameFlag)
t := apeCmd.ParseTarget(cmd)
ct.Name = name
ct.Name = t.Name
switch typ {
case namespaceTarget:
switch t.Type {
case engine.Namespace:
ct.TargetType = apeSDK.TargetTypeNamespace
case containerTarget:
var cnr cid.ID
commonCmd.ExitOnErr(cmd, "can't decode container ID: %w", cnr.DecodeString(name))
case engine.Container:
ct.TargetType = apeSDK.TargetTypeContainer
case userTarget:
case engine.User:
ct.TargetType = apeSDK.TargetTypeUser
case groupTarget:
case engine.Group:
ct.TargetType = apeSDK.TargetTypeGroup
default:
commonCmd.ExitOnErr(cmd, "read target type error: %w", errUnknownTargetType)
commonCmd.ExitOnErr(cmd, "conversion error: %w", fmt.Errorf("unknown type '%c'", t.Type))
}
return ct
}
func parseChain(cmd *cobra.Command) apeSDK.Chain {
chainID, _ := cmd.Flags().GetString(chainIDFlag)
hexEncoded, _ := cmd.Flags().GetBool(chainIDHexFlag)
chainIDRaw := []byte(chainID)
if hexEncoded {
var err error
chainIDRaw, err = hex.DecodeString(chainID)
commonCmd.ExitOnErr(cmd, "can't decode chain ID as hex: %w", err)
}
chain := new(apechain.Chain)
chain.ID = apechain.ID(chainIDRaw)
if rules, _ := cmd.Flags().GetStringArray(ruleFlag); len(rules) > 0 {
commonCmd.ExitOnErr(cmd, "parser error: %w", util.ParseAPEChain(chain, rules))
} else if encPath, _ := cmd.Flags().GetString(pathFlag); encPath != "" {
commonCmd.ExitOnErr(cmd, "decode binary or json error: %w", util.ParseAPEChainBinaryOrJSON(chain, encPath))
} else {
commonCmd.ExitOnErr(cmd, "parser error: %w", errors.New("rule is not passed"))
}
cmd.Println("Parsed chain:")
util.PrintHumanReadableAPEChain(cmd, chain)
serialized := chain.Bytes()
c := apeCmd.ParseChain(cmd)
serialized := c.Bytes()
return apeSDK.Chain{
Raw: serialized,
}
@ -126,13 +74,13 @@ func initAddCmd() {
commonflags.Init(addCmd)
ff := addCmd.Flags()
ff.StringArray(ruleFlag, []string{}, "Rule statement")
ff.String(pathFlag, "", "Path to encoded chain in JSON or binary format")
ff.String(chainIDFlag, "", "Assign ID to the parsed chain")
ff.String(targetNameFlag, "", targetNameDesc)
ff.String(targetTypeFlag, "", targetTypeDesc)
_ = addCmd.MarkFlagRequired(targetTypeFlag)
ff.Bool(chainIDHexFlag, false, "Flag to parse chain ID as hex")
ff.StringArray(apeCmd.RuleFlag, []string{}, apeCmd.RuleFlagDesc)
ff.String(apeCmd.PathFlag, "", apeCmd.PathFlagDesc)
ff.String(apeCmd.ChainIDFlag, "", apeCmd.ChainIDFlagDesc)
ff.String(apeCmd.TargetNameFlag, "", apeCmd.TargetNameFlagDesc)
ff.String(apeCmd.TargetTypeFlag, "", apeCmd.TargetTypeFlagDesc)
_ = addCmd.MarkFlagRequired(apeCmd.TargetTypeFlag)
ff.Bool(apeCmd.ChainIDHexFlag, false, apeCmd.ChainIDHexFlagDesc)
addCmd.MarkFlagsMutuallyExclusive(pathFlag, ruleFlag)
addCmd.MarkFlagsMutuallyExclusive(apeCmd.PathFlag, apeCmd.RuleFlag)
}


@ -4,8 +4,8 @@ import (
internalclient "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
apeutil "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/modules/util"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
apeCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common/ape"
client_sdk "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client"
apechain "git.frostfs.info/TrueCloudLab/policy-engine/pkg/chain"
"github.com/spf13/cobra"
@ -35,7 +35,7 @@ func list(cmd *cobra.Command, _ []string) {
for _, respChain := range resp.Chains {
var chain apechain.Chain
commonCmd.ExitOnErr(cmd, "decode error: %w", chain.DecodeBytes(respChain.Raw))
apeutil.PrintHumanReadableAPEChain(cmd, &chain)
apeCmd.PrintHumanReadableAPEChain(cmd, &chain)
}
}
@ -43,7 +43,7 @@ func initListCmd() {
commonflags.Init(listCmd)
ff := listCmd.Flags()
ff.String(targetNameFlag, "", targetNameDesc)
ff.String(targetTypeFlag, "", targetTypeDesc)
_ = listCmd.MarkFlagRequired(targetTypeFlag)
ff.String(apeCmd.TargetNameFlag, "", apeCmd.TargetNameFlagDesc)
ff.String(apeCmd.TargetTypeFlag, "", apeCmd.TargetTypeFlagDesc)
_ = listCmd.MarkFlagRequired(apeCmd.TargetTypeFlag)
}


@ -1,29 +1,23 @@
package apemanager
import (
"encoding/hex"
"errors"
internalclient "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
apeCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common/ape"
client_sdk "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client"
"github.com/spf13/cobra"
)
var (
errEmptyChainID = errors.New("chain id cannot be empty")
removeCmd = &cobra.Command{
Use: "remove",
Short: "Remove rule chain for a target",
Run: remove,
PersistentPreRun: func(cmd *cobra.Command, _ []string) {
commonflags.Bind(cmd)
},
}
)
var removeCmd = &cobra.Command{
Use: "remove",
Short: "Remove rule chain for a target",
Run: remove,
PersistentPreRun: func(cmd *cobra.Command, _ []string) {
commonflags.Bind(cmd)
},
}
func remove(cmd *cobra.Command, _ []string) {
target := parseTarget(cmd)
@ -31,19 +25,9 @@ func remove(cmd *cobra.Command, _ []string) {
key := key.Get(cmd)
cli := internalclient.GetSDKClientByFlag(cmd, key, commonflags.RPC)
chainID, _ := cmd.Flags().GetString(chainIDFlag)
if chainID == "" {
commonCmd.ExitOnErr(cmd, "read chain id error: %w", errEmptyChainID)
}
chainID := apeCmd.ParseChainID(cmd)
chainIDRaw := []byte(chainID)
hexEncoded, _ := cmd.Flags().GetBool(chainIDHexFlag)
if hexEncoded {
var err error
chainIDRaw, err = hex.DecodeString(chainID)
commonCmd.ExitOnErr(cmd, "can't decode chain ID as hex: %w", err)
}
_, err := cli.APEManagerRemoveChain(cmd.Context(), client_sdk.PrmAPEManagerRemoveChain{
ChainTarget: target,
ChainID: chainIDRaw,
@ -58,9 +42,10 @@ func initRemoveCmd() {
commonflags.Init(removeCmd)
ff := removeCmd.Flags()
ff.String(targetNameFlag, "", targetNameDesc)
ff.String(targetTypeFlag, "", targetTypeDesc)
_ = removeCmd.MarkFlagRequired(targetTypeFlag)
ff.String(chainIDFlag, "", "Chain id")
ff.Bool(chainIDHexFlag, false, "Flag to parse chain ID as hex")
ff.String(apeCmd.TargetNameFlag, "", apeCmd.TargetNameFlagDesc)
ff.String(apeCmd.TargetTypeFlag, "", apeCmd.TargetTypeFlagDesc)
_ = removeCmd.MarkFlagRequired(apeCmd.TargetTypeFlag)
ff.String(apeCmd.ChainIDFlag, "", apeCmd.ChainIDFlagDesc)
_ = removeCmd.MarkFlagRequired(apeCmd.ChainIDFlag)
ff.Bool(apeCmd.ChainIDHexFlag, false, apeCmd.ChainIDHexFlagDesc)
}


@ -1,31 +1,20 @@
package bearer
import (
"errors"
"fmt"
"os"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
parseutil "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/modules/util"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
apeCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common/ape"
apeSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/ape"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/bearer"
cidSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
apechain "git.frostfs.info/TrueCloudLab/policy-engine/pkg/chain"
"github.com/spf13/cobra"
)
var (
errChainIDCannotBeEmpty = errors.New("chain id cannot be empty")
errRuleIsNotParsed = errors.New("rule is not passed")
)
const (
chainIDFlag = "chain-id"
chainIDHexFlag = "chain-id-hex"
ruleFlag = "rule"
pathFlag = "path"
outputFlag = "output"
outputFlag = "output"
)
var generateAPEOverrideCmd = &cobra.Command{
@ -40,7 +29,7 @@ Generated APE override can be dumped to a file in JSON format that is passed to
}
func genereateAPEOverride(cmd *cobra.Command, _ []string) {
c := parseChain(cmd)
c := apeCmd.ParseChain(cmd)
targetCID, _ := cmd.Flags().GetString(commonflags.CIDFlag)
var cid cidSDK.ID
@ -77,39 +66,11 @@ func init() {
ff.StringP(commonflags.CIDFlag, "", "", "Target container ID.")
_ = cobra.MarkFlagRequired(createCmd.Flags(), commonflags.CIDFlag)
ff.StringArray(ruleFlag, []string{}, "Rule statement")
ff.String(pathFlag, "", "Path to encoded chain in JSON or binary format")
ff.String(chainIDFlag, "", "Assign ID to the parsed chain")
ff.Bool(chainIDHexFlag, false, "Flag to parse chain ID as hex")
ff.StringArray(apeCmd.RuleFlag, []string{}, "Rule statement")
ff.String(apeCmd.PathFlag, "", "Path to encoded chain in JSON or binary format")
ff.String(apeCmd.ChainIDFlag, "", "Assign ID to the parsed chain")
ff.Bool(apeCmd.ChainIDHexFlag, false, "Flag to parse chain ID as hex")
ff.String(outputFlag, "", "Output path to dump result JSON-encoded APE override")
_ = cobra.MarkFlagFilename(createCmd.Flags(), outputFlag)
}
func parseChainID(cmd *cobra.Command) apechain.ID {
chainID, _ := cmd.Flags().GetString(chainIDFlag)
if chainID == "" {
commonCmd.ExitOnErr(cmd, "read chain id error: %w",
errChainIDCannotBeEmpty)
}
return apechain.ID(chainID)
}
func parseChain(cmd *cobra.Command) *apechain.Chain {
chain := new(apechain.Chain)
if rules, _ := cmd.Flags().GetStringArray(ruleFlag); len(rules) > 0 {
commonCmd.ExitOnErr(cmd, "parser error: %w", parseutil.ParseAPEChain(chain, rules))
} else if encPath, _ := cmd.Flags().GetString(pathFlag); encPath != "" {
commonCmd.ExitOnErr(cmd, "decode binary or json error: %w", parseutil.ParseAPEChainBinaryOrJSON(chain, encPath))
} else {
commonCmd.ExitOnErr(cmd, "parser error: %w", errRuleIsNotParsed)
}
chain.ID = parseChainID(cmd)
cmd.Println("Parsed chain:")
parseutil.PrintHumanReadableAPEChain(cmd, chain)
return chain
}


@ -1,23 +1,14 @@
package control
import (
"encoding/hex"
"errors"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/modules/util"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
apeCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common/ape"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/rpc/client"
apechain "git.frostfs.info/TrueCloudLab/policy-engine/pkg/chain"
"github.com/spf13/cobra"
)
const (
ruleFlag = "rule"
pathFlag = "path"
)
var addRuleCmd = &cobra.Command{
Use: "add-rule",
Short: "Add local override",
@ -31,41 +22,12 @@ control add-rule --endpoint ... -w ... --address ... --chain-id ChainID --cid ..
Run: addRule,
}
func parseChain(cmd *cobra.Command) *apechain.Chain {
chainID, _ := cmd.Flags().GetString(chainIDFlag)
hexEncoded, _ := cmd.Flags().GetBool(chainIDHexFlag)
chainIDRaw := []byte(chainID)
if hexEncoded {
var err error
chainIDRaw, err = hex.DecodeString(chainID)
commonCmd.ExitOnErr(cmd, "can't decode chain ID as hex: %w", err)
}
chain := new(apechain.Chain)
chain.ID = apechain.ID(chainIDRaw)
if rules, _ := cmd.Flags().GetStringArray(ruleFlag); len(rules) > 0 {
commonCmd.ExitOnErr(cmd, "parser error: %w", util.ParseAPEChain(chain, rules))
} else if encPath, _ := cmd.Flags().GetString(pathFlag); encPath != "" {
commonCmd.ExitOnErr(cmd, "decode binary or json error: %w", util.ParseAPEChainBinaryOrJSON(chain, encPath))
} else {
commonCmd.ExitOnErr(cmd, "parser error", errors.New("rule is not passed"))
}
cmd.Println("Parsed chain:")
util.PrintHumanReadableAPEChain(cmd, chain)
return chain
}
func addRule(cmd *cobra.Command, _ []string) {
pk := key.Get(cmd)
target := parseTarget(cmd)
parsed := parseChain(cmd)
parsed := apeCmd.ParseChain(cmd)
req := &control.AddChainLocalOverrideRequest{
Body: &control.AddChainLocalOverrideRequest_Body{
@ -94,13 +56,13 @@ func initControlAddRuleCmd() {
initControlFlags(addRuleCmd)
ff := addRuleCmd.Flags()
ff.StringArray(ruleFlag, []string{}, "Rule statement")
ff.String(pathFlag, "", "Path to encoded chain in JSON or binary format")
ff.String(chainIDFlag, "", "Assign ID to the parsed chain")
ff.String(targetNameFlag, "", targetNameDesc)
ff.String(targetTypeFlag, "", targetTypeDesc)
_ = addRuleCmd.MarkFlagRequired(targetTypeFlag)
ff.Bool(chainIDHexFlag, false, "Flag to parse chain ID as hex")
ff.StringArray(apeCmd.RuleFlag, []string{}, "Rule statement")
ff.String(apeCmd.PathFlag, "", "Path to encoded chain in JSON or binary format")
ff.String(apeCmd.ChainIDFlag, "", "Assign ID to the parsed chain")
ff.String(apeCmd.TargetNameFlag, "", apeCmd.TargetNameFlagDesc)
ff.String(apeCmd.TargetTypeFlag, "", apeCmd.TargetTypeFlagDesc)
_ = addRuleCmd.MarkFlagRequired(apeCmd.TargetTypeFlag)
ff.Bool(apeCmd.ChainIDHexFlag, false, "Flag to parse chain ID as hex")
addRuleCmd.MarkFlagsMutuallyExclusive(pathFlag, ruleFlag)
addRuleCmd.MarkFlagsMutuallyExclusive(apeCmd.PathFlag, apeCmd.RuleFlag)
}


@ -4,8 +4,8 @@ import (
"encoding/hex"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/modules/util"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
apecmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common/ape"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/rpc/client"
apechain "git.frostfs.info/TrueCloudLab/policy-engine/pkg/chain"
@ -24,8 +24,8 @@ func getRule(cmd *cobra.Command, _ []string) {
target := parseTarget(cmd)
chainID, _ := cmd.Flags().GetString(chainIDFlag)
hexEncoded, _ := cmd.Flags().GetBool(chainIDHexFlag)
chainID, _ := cmd.Flags().GetString(apecmd.ChainIDFlag)
hexEncoded, _ := cmd.Flags().GetBool(apecmd.ChainIDHexFlag)
if hexEncoded {
chainIDBytes, err := hex.DecodeString(chainID)
@ -56,16 +56,16 @@ func getRule(cmd *cobra.Command, _ []string) {
var chain apechain.Chain
commonCmd.ExitOnErr(cmd, "decode error: %w", chain.DecodeBytes(resp.GetBody().GetChain()))
util.PrintHumanReadableAPEChain(cmd, &chain)
apecmd.PrintHumanReadableAPEChain(cmd, &chain)
}
func initControGetRuleCmd() {
initControlFlags(getRuleCmd)
ff := getRuleCmd.Flags()
ff.String(targetNameFlag, "", targetNameDesc)
ff.String(targetTypeFlag, "", targetTypeDesc)
_ = getRuleCmd.MarkFlagRequired(targetTypeFlag)
ff.String(chainIDFlag, "", "Chain id")
ff.Bool(chainIDHexFlag, false, "Flag to parse chain ID as hex")
ff.String(apecmd.TargetNameFlag, "", apecmd.TargetNameFlagDesc)
ff.String(apecmd.TargetTypeFlag, "", apecmd.TargetTypeFlagDesc)
_ = getRuleCmd.MarkFlagRequired(apecmd.TargetTypeFlag)
ff.String(apecmd.ChainIDFlag, "", "Chain id")
ff.Bool(apecmd.ChainIDHexFlag, false, "Flag to parse chain ID as hex")
}


@ -1,18 +1,16 @@
package control
import (
"errors"
"fmt"
"strings"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/modules/util"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
apeCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common/ape"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/rpc/client"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
apechain "git.frostfs.info/TrueCloudLab/policy-engine/pkg/chain"
"github.com/nspcc-dev/neo-go/cli/input"
policyengine "git.frostfs.info/TrueCloudLab/policy-engine/pkg/engine"
"github.com/spf13/cobra"
)
@ -23,65 +21,25 @@ var listRulesCmd = &cobra.Command{
Run: listRules,
}
const (
defaultNamespace = "root"
namespaceTarget = "namespace"
containerTarget = "container"
userTarget = "user"
groupTarget = "group"
)
const (
targetNameFlag = "target-name"
targetNameDesc = "Resource name in APE resource name format"
targetTypeFlag = "target-type"
targetTypeDesc = "Resource type(container/namespace)"
)
var (
errSettingDefaultValueWasDeclined = errors.New("setting default value was declined")
errUnknownTargetType = errors.New("unknown target type")
)
var engineToControlSvcType = map[policyengine.TargetType]control.ChainTarget_TargetType{
policyengine.Namespace: control.ChainTarget_NAMESPACE,
policyengine.Container: control.ChainTarget_CONTAINER,
policyengine.User: control.ChainTarget_USER,
policyengine.Group: control.ChainTarget_GROUP,
}
func parseTarget(cmd *cobra.Command) *control.ChainTarget {
typ, _ := cmd.Flags().GetString(targetTypeFlag)
name, _ := cmd.Flags().GetString(targetNameFlag)
switch typ {
case namespaceTarget:
if name == "" {
ln, err := input.ReadLine(fmt.Sprintf("Target name is not set. Confirm to use %s namespace (n|Y)> ", defaultNamespace))
commonCmd.ExitOnErr(cmd, "read line error: %w", err)
ln = strings.ToLower(ln)
if len(ln) > 0 && (ln[0] == 'n') {
commonCmd.ExitOnErr(cmd, "read namespace error: %w", errSettingDefaultValueWasDeclined)
}
name = defaultNamespace
}
return &control.ChainTarget{
Name: name,
Type: control.ChainTarget_NAMESPACE,
}
case containerTarget:
var cnr cid.ID
commonCmd.ExitOnErr(cmd, "can't decode container ID: %w", cnr.DecodeString(name))
return &control.ChainTarget{
Name: name,
Type: control.ChainTarget_CONTAINER,
}
case userTarget:
return &control.ChainTarget{
Name: name,
Type: control.ChainTarget_USER,
}
case groupTarget:
return &control.ChainTarget{
Name: name,
Type: control.ChainTarget_GROUP,
}
default:
commonCmd.ExitOnErr(cmd, "read target type error: %w", errUnknownTargetType)
target := apeCmd.ParseTarget(cmd)
typ, ok := engineToControlSvcType[target.Type]
if !ok {
commonCmd.ExitOnErr(cmd, "%w", fmt.Errorf("unknown type '%c", target.Type))
}
return &control.ChainTarget{
Name: target.Name,
Type: typ,
}
return nil
}
func listRules(cmd *cobra.Command, _ []string) {
@ -117,7 +75,7 @@ func listRules(cmd *cobra.Command, _ []string) {
for _, c := range chains {
var chain apechain.Chain
commonCmd.ExitOnErr(cmd, "decode error: %w", chain.DecodeBytes(c))
util.PrintHumanReadableAPEChain(cmd, &chain)
apeCmd.PrintHumanReadableAPEChain(cmd, &chain)
}
}
@ -125,7 +83,7 @@ func initControlListRulesCmd() {
initControlFlags(listRulesCmd)
ff := listRulesCmd.Flags()
ff.String(targetNameFlag, "", targetNameDesc)
ff.String(targetTypeFlag, "", targetTypeDesc)
_ = listRulesCmd.MarkFlagRequired(targetTypeFlag)
ff.String(apeCmd.TargetNameFlag, "", apeCmd.TargetNameFlagDesc)
ff.String(apeCmd.TargetTypeFlag, "", apeCmd.TargetTypeFlagDesc)
_ = listRulesCmd.MarkFlagRequired(apeCmd.TargetTypeFlag)
}


@ -2,26 +2,20 @@ package control
import (
"bytes"
"crypto/sha256"
"fmt"
"strconv"
"text/tabwriter"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
apeCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common/ape"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/rpc/client"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
"github.com/spf13/cobra"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
)
const (
chainNameFlag = "chain-name"
chainNameFlagUsage = "Chain name(ingress|s3)"
)
var listTargetsCmd = &cobra.Command{
Use: "list-targets",
Short: "List local targets",
@ -32,15 +26,11 @@ var listTargetsCmd = &cobra.Command{
func listTargets(cmd *cobra.Command, _ []string) {
pk := key.Get(cmd)
var cnr cid.ID
chainName, _ := cmd.Flags().GetString(chainNameFlag)
rawCID := make([]byte, sha256.Size)
cnr.Encode(rawCID)
chainName := apeCmd.ParseChainName(cmd)
req := &control.ListTargetsLocalOverridesRequest{
Body: &control.ListTargetsLocalOverridesRequest_Body{
ChainName: chainName,
ChainName: string(chainName),
},
}
@ -82,7 +72,7 @@ func initControlListTargetsCmd() {
initControlFlags(listTargetsCmd)
ff := listTargetsCmd.Flags()
ff.String(chainNameFlag, "", chainNameFlagUsage)
ff.String(apeCmd.ChainNameFlag, "", apeCmd.ChainNameFlagDesc)
_ = cobra.MarkFlagRequired(ff, chainNameFlag)
_ = cobra.MarkFlagRequired(ff, apeCmd.ChainNameFlag)
}


@ -6,17 +6,12 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
apecmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common/ape"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/rpc/client"
"github.com/spf13/cobra"
)
const (
chainIDFlag = "chain-id"
chainIDHexFlag = "chain-id-hex"
allFlag = "all"
)
var (
errEmptyChainID = errors.New("chain id cannot be empty")
@ -30,8 +25,8 @@ var (
func removeRule(cmd *cobra.Command, _ []string) {
pk := key.Get(cmd)
hexEncoded, _ := cmd.Flags().GetBool(chainIDHexFlag)
removeAll, _ := cmd.Flags().GetBool(allFlag)
hexEncoded, _ := cmd.Flags().GetBool(apecmd.ChainIDHexFlag)
removeAll, _ := cmd.Flags().GetBool(apecmd.AllFlag)
if removeAll {
req := &control.RemoveChainLocalOverridesByTargetRequest{
Body: &control.RemoveChainLocalOverridesByTargetRequest_Body{
@ -52,7 +47,7 @@ func removeRule(cmd *cobra.Command, _ []string) {
return
}
chainID, _ := cmd.Flags().GetString(chainIDFlag)
chainID, _ := cmd.Flags().GetString(apecmd.ChainIDFlag)
if chainID == "" {
commonCmd.ExitOnErr(cmd, "read chain id error: %w", errEmptyChainID)
}
@ -92,11 +87,11 @@ func initControlRemoveRuleCmd() {
initControlFlags(removeRuleCmd)
ff := removeRuleCmd.Flags()
ff.String(targetNameFlag, "", targetNameDesc)
ff.String(targetTypeFlag, "", targetTypeDesc)
_ = removeRuleCmd.MarkFlagRequired(targetTypeFlag)
ff.String(chainIDFlag, "", "Chain id")
ff.Bool(chainIDHexFlag, false, "Flag to parse chain ID as hex")
ff.Bool(allFlag, false, "Remove all chains")
removeRuleCmd.MarkFlagsMutuallyExclusive(allFlag, chainIDFlag)
ff.String(apecmd.TargetNameFlag, "", apecmd.TargetNameFlagDesc)
ff.String(apecmd.TargetTypeFlag, "", apecmd.TargetTypeFlagDesc)
_ = removeRuleCmd.MarkFlagRequired(apecmd.TargetTypeFlag)
ff.String(apecmd.ChainIDFlag, "", apecmd.ChainIDFlagDesc)
ff.Bool(apecmd.ChainIDHexFlag, false, apecmd.ChainIDHexFlagDesc)
ff.Bool(apecmd.AllFlag, false, "Remove all chains")
removeRuleCmd.MarkFlagsMutuallyExclusive(apecmd.AllFlag, apecmd.ChainIDFlag)
}


@ -6,7 +6,7 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
apeutil "git.frostfs.info/TrueCloudLab/frostfs-node/internal/ape"
apeutil "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/ape"
"git.frostfs.info/TrueCloudLab/policy-engine/pkg/chain"
"github.com/spf13/cobra"
)


@ -1,6 +1,7 @@
package main
import (
"context"
"os"
"os/signal"
"syscall"
@ -46,7 +47,7 @@ func reloadConfig() error {
return logPrm.Reload()
}
func watchForSignal(cancel func()) {
func watchForSignal(ctx context.Context, cancel func()) {
ch := make(chan os.Signal, 1)
signal.Notify(ch, syscall.SIGINT, syscall.SIGTERM)
@ -58,49 +59,49 @@ func watchForSignal(cancel func()) {
// signals causing application to shut down should have priority over
// reconfiguration signal
case <-ch:
log.Info(logs.FrostFSNodeTerminationSignalHasBeenReceivedStopping)
log.Info(ctx, logs.FrostFSNodeTerminationSignalHasBeenReceivedStopping)
cancel()
shutdown()
log.Info(logs.FrostFSNodeTerminationSignalProcessingIsComplete)
shutdown(ctx)
log.Info(ctx, logs.FrostFSNodeTerminationSignalProcessingIsComplete)
return
case err := <-intErr: // internal application error
log.Info(logs.FrostFSIRInternalError, zap.String("msg", err.Error()))
log.Info(ctx, logs.FrostFSIRInternalError, zap.String("msg", err.Error()))
cancel()
shutdown()
shutdown(ctx)
return
default:
// block until any signal is received
select {
case <-ch:
log.Info(logs.FrostFSNodeTerminationSignalHasBeenReceivedStopping)
log.Info(ctx, logs.FrostFSNodeTerminationSignalHasBeenReceivedStopping)
cancel()
shutdown()
log.Info(logs.FrostFSNodeTerminationSignalProcessingIsComplete)
shutdown(ctx)
log.Info(ctx, logs.FrostFSNodeTerminationSignalProcessingIsComplete)
return
case err := <-intErr: // internal application error
log.Info(logs.FrostFSIRInternalError, zap.String("msg", err.Error()))
log.Info(ctx, logs.FrostFSIRInternalError, zap.String("msg", err.Error()))
cancel()
shutdown()
shutdown(ctx)
return
case <-sighupCh:
log.Info(logs.FrostFSNodeSIGHUPHasBeenReceivedRereadingConfiguration)
if !innerRing.CompareAndSwapHealthStatus(control.HealthStatus_READY, control.HealthStatus_RECONFIGURING) {
log.Info(logs.FrostFSNodeSIGHUPSkip)
log.Info(ctx, logs.FrostFSNodeSIGHUPHasBeenReceivedRereadingConfiguration)
if !innerRing.CompareAndSwapHealthStatus(ctx, control.HealthStatus_READY, control.HealthStatus_RECONFIGURING) {
log.Info(ctx, logs.FrostFSNodeSIGHUPSkip)
break
}
err := reloadConfig()
if err != nil {
log.Error(logs.FrostFSNodeConfigurationReading, zap.Error(err))
log.Error(ctx, logs.FrostFSNodeConfigurationReading, zap.Error(err))
}
pprofCmp.reload()
metricsCmp.reload()
log.Info(logs.FrostFSIRReloadExtraWallets)
pprofCmp.reload(ctx)
metricsCmp.reload(ctx)
log.Info(ctx, logs.FrostFSIRReloadExtraWallets)
err = innerRing.SetExtraWallets(cfg)
if err != nil {
log.Error(logs.FrostFSNodeConfigurationReading, zap.Error(err))
log.Error(ctx, logs.FrostFSNodeConfigurationReading, zap.Error(err))
}
innerRing.CompareAndSwapHealthStatus(control.HealthStatus_RECONFIGURING, control.HealthStatus_READY)
log.Info(logs.FrostFSNodeConfigurationHasBeenReloadedSuccessfully)
innerRing.CompareAndSwapHealthStatus(ctx, control.HealthStatus_RECONFIGURING, control.HealthStatus_READY)
log.Info(ctx, logs.FrostFSNodeConfigurationHasBeenReloadedSuccessfully)
}
}
}


@ -1,6 +1,7 @@
package main
import (
"context"
"net/http"
"time"
@ -24,8 +25,8 @@ const (
shutdownTimeoutKeyPostfix = ".shutdown_timeout"
)
func (c *httpComponent) init() {
log.Info("init " + c.name)
func (c *httpComponent) init(ctx context.Context) {
log.Info(ctx, "init "+c.name)
c.enabled = cfg.GetBool(c.name + enabledKeyPostfix)
c.address = cfg.GetString(c.name + addressKeyPostfix)
c.shutdownDur = cfg.GetDuration(c.name + shutdownTimeoutKeyPostfix)
@ -39,14 +40,14 @@ func (c *httpComponent) init() {
httputil.WithShutdownTimeout(c.shutdownDur),
)
} else {
log.Info(c.name + " is disabled, skip")
log.Info(ctx, c.name+" is disabled, skip")
c.srv = nil
}
}
func (c *httpComponent) start() {
func (c *httpComponent) start(ctx context.Context) {
if c.srv != nil {
log.Info("start " + c.name)
log.Info(ctx, "start "+c.name)
wg.Add(1)
go func() {
defer wg.Done()
@ -55,10 +56,10 @@ func (c *httpComponent) start() {
}
}
func (c *httpComponent) shutdown() error {
func (c *httpComponent) shutdown(ctx context.Context) error {
if c.srv != nil {
log.Info("shutdown " + c.name)
return c.srv.Shutdown()
log.Info(ctx, "shutdown "+c.name)
return c.srv.Shutdown(ctx)
}
return nil
}
@ -70,17 +71,17 @@ func (c *httpComponent) needReload() bool {
return enabled != c.enabled || enabled && (address != c.address || dur != c.shutdownDur)
}
func (c *httpComponent) reload() {
log.Info("reload " + c.name)
func (c *httpComponent) reload(ctx context.Context) {
log.Info(ctx, "reload "+c.name)
if c.needReload() {
log.Info(c.name + " config updated")
if err := c.shutdown(); err != nil {
log.Debug(logs.FrostFSIRCouldNotShutdownHTTPServer,
log.Info(ctx, c.name+" config updated")
if err := c.shutdown(ctx); err != nil {
log.Debug(ctx, logs.FrostFSIRCouldNotShutdownHTTPServer,
zap.String("error", err.Error()),
)
} else {
c.init()
c.start()
c.init(ctx)
c.start(ctx)
}
}
}


@ -87,48 +87,48 @@ func main() {
ctx, cancel := context.WithCancel(context.Background())
pprofCmp = newPprofComponent()
pprofCmp.init()
pprofCmp.init(ctx)
metricsCmp = newMetricsComponent()
metricsCmp.init()
metricsCmp.init(ctx)
audit.Store(cfg.GetBool("audit.enabled"))
innerRing, err = innerring.New(ctx, log, cfg, intErr, metrics, cmode, audit)
exitErr(err)
pprofCmp.start()
metricsCmp.start()
pprofCmp.start(ctx)
metricsCmp.start(ctx)
// start inner ring
err = innerRing.Start(ctx, intErr)
exitErr(err)
log.Info(logs.CommonApplicationStarted,
log.Info(ctx, logs.CommonApplicationStarted,
zap.String("version", misc.Version))
watchForSignal(cancel)
watchForSignal(ctx, cancel)
<-ctx.Done() // graceful shutdown
log.Debug(logs.FrostFSNodeWaitingForAllProcessesToStop)
log.Debug(ctx, logs.FrostFSNodeWaitingForAllProcessesToStop)
wg.Wait()
log.Info(logs.FrostFSIRApplicationStopped)
log.Info(ctx, logs.FrostFSIRApplicationStopped)
}
func shutdown() {
innerRing.Stop()
if err := metricsCmp.shutdown(); err != nil {
log.Debug(logs.FrostFSIRCouldNotShutdownHTTPServer,
func shutdown(ctx context.Context) {
innerRing.Stop(ctx)
if err := metricsCmp.shutdown(ctx); err != nil {
log.Debug(ctx, logs.FrostFSIRCouldNotShutdownHTTPServer,
zap.String("error", err.Error()),
)
}
if err := pprofCmp.shutdown(); err != nil {
log.Debug(logs.FrostFSIRCouldNotShutdownHTTPServer,
if err := pprofCmp.shutdown(ctx); err != nil {
log.Debug(ctx, logs.FrostFSIRCouldNotShutdownHTTPServer,
zap.String("error", err.Error()),
)
}
if err := sdnotify.ClearStatus(); err != nil {
log.Error(logs.FailedToReportStatusToSystemd, zap.Error(err))
log.Error(ctx, logs.FailedToReportStatusToSystemd, zap.Error(err))
}
}


@ -1,6 +1,7 @@
package main
import (
"context"
"runtime"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs"
@ -28,8 +29,8 @@ func newPprofComponent() *pprofComponent {
}
}
func (c *pprofComponent) init() {
c.httpComponent.init()
func (c *pprofComponent) init(ctx context.Context) {
c.httpComponent.init(ctx)
if c.enabled {
c.blockRate = cfg.GetInt(pprofBlockRateKey)
@ -51,17 +52,17 @@ func (c *pprofComponent) needReload() bool {
c.enabled && (c.blockRate != blockRate || c.mutexRate != mutexRate)
}
func (c *pprofComponent) reload() {
log.Info("reload " + c.name)
func (c *pprofComponent) reload(ctx context.Context) {
log.Info(ctx, "reload "+c.name)
if c.needReload() {
log.Info(c.name + " config updated")
if err := c.shutdown(); err != nil {
log.Debug(logs.FrostFSIRCouldNotShutdownHTTPServer,
log.Info(ctx, c.name+" config updated")
if err := c.shutdown(ctx); err != nil {
log.Debug(ctx, logs.FrostFSIRCouldNotShutdownHTTPServer,
zap.String("error", err.Error()))
return
}
c.init()
c.start()
c.init(ctx)
c.start(ctx)
}
}


@ -28,7 +28,7 @@ func inspectFunc(cmd *cobra.Command, _ []string) {
common.ExitOnErr(cmd, common.Errf("invalid address argument: %w", err))
blz := openBlobovnicza(cmd)
defer blz.Close()
defer blz.Close(cmd.Context())
var prm blobovnicza.GetPrm
prm.SetAddress(addr)


@ -32,7 +32,7 @@ func listFunc(cmd *cobra.Command, _ []string) {
}
blz := openBlobovnicza(cmd)
defer blz.Close()
defer blz.Close(cmd.Context())
err := blobovnicza.IterateAddresses(context.Background(), blz, wAddr)
common.ExitOnErr(cmd, common.Errf("blobovnicza iterator failure: %w", err))


@ -27,7 +27,7 @@ func openBlobovnicza(cmd *cobra.Command) *blobovnicza.Blobovnicza {
blobovnicza.WithPath(vPath),
blobovnicza.WithReadOnly(true),
)
common.ExitOnErr(cmd, blz.Open())
common.ExitOnErr(cmd, blz.Open(cmd.Context()))
return blz
}


@ -31,7 +31,7 @@ func inspectFunc(cmd *cobra.Command, _ []string) {
common.ExitOnErr(cmd, common.Errf("invalid address argument: %w", err))
db := openMeta(cmd)
defer db.Close()
defer db.Close(cmd.Context())
storageID := meta.StorageIDPrm{}
storageID.SetAddress(addr)


@ -19,7 +19,7 @@ func init() {
func listGarbageFunc(cmd *cobra.Command, _ []string) {
db := openMeta(cmd)
defer db.Close()
defer db.Close(cmd.Context())
var garbPrm meta.GarbageIterationPrm
garbPrm.SetHandler(


@ -19,7 +19,7 @@ func init() {
func listGraveyardFunc(cmd *cobra.Command, _ []string) {
db := openMeta(cmd)
defer db.Close()
defer db.Close(cmd.Context())
var gravePrm meta.GraveyardIterationPrm
gravePrm.SetHandler(


@ -397,16 +397,16 @@ type internals struct {
}
// starts node's maintenance.
func (c *cfg) startMaintenance() {
func (c *cfg) startMaintenance(ctx context.Context) {
c.isMaintenance.Store(true)
c.cfgNetmap.state.setControlNetmapStatus(control.NetmapStatus_MAINTENANCE)
c.log.Info(logs.FrostFSNodeStartedLocalNodesMaintenance)
c.log.Info(ctx, logs.FrostFSNodeStartedLocalNodesMaintenance)
}
// stops node's maintenance.
func (c *internals) stopMaintenance() {
func (c *internals) stopMaintenance(ctx context.Context) {
if c.isMaintenance.CompareAndSwap(true, false) {
c.log.Info(logs.FrostFSNodeStoppedLocalNodesMaintenance)
c.log.Info(ctx, logs.FrostFSNodeStoppedLocalNodesMaintenance)
}
}
@ -705,7 +705,7 @@ func initCfg(appCfg *config.Config) *cfg {
log, err := logger.NewLogger(logPrm)
fatalOnErr(err)
if loggerconfig.ToLokiConfig(appCfg).Enabled {
log.Logger = log.Logger.WithOptions(zap.WrapCore(func(core zapcore.Core) zapcore.Core {
log.WithOptions(zap.WrapCore(func(core zapcore.Core) zapcore.Core {
lokiCore := lokicore.New(core, loggerconfig.ToLokiConfig(appCfg))
return lokiCore
}))
@ -1089,7 +1089,7 @@ func (c *cfg) LocalAddress() network.AddressGroup {
func initLocalStorage(ctx context.Context, c *cfg) {
ls := engine.New(c.engineOpts()...)
addNewEpochAsyncNotificationHandler(c, func(ev event.Event) {
addNewEpochAsyncNotificationHandler(c, func(ctx context.Context, ev event.Event) {
ls.HandleNewEpoch(ctx, ev.(netmap2.NewEpoch).EpochNumber())
})
@ -1103,10 +1103,10 @@ func initLocalStorage(ctx context.Context, c *cfg) {
shard.WithTombstoneSource(c.createTombstoneSource()),
shard.WithContainerInfoProvider(c.createContainerInfoProvider(ctx)))...)
if err != nil {
c.log.Error(logs.FrostFSNodeFailedToAttachShardToEngine, zap.Error(err))
c.log.Error(ctx, logs.FrostFSNodeFailedToAttachShardToEngine, zap.Error(err))
} else {
shardsAttached++
c.log.Info(logs.FrostFSNodeShardAttachedToEngine, zap.Stringer("id", id))
c.log.Info(ctx, logs.FrostFSNodeShardAttachedToEngine, zap.Stringer("id", id))
}
}
if shardsAttached == 0 {
@ -1116,23 +1116,23 @@ func initLocalStorage(ctx context.Context, c *cfg) {
c.cfgObject.cfgLocalStorage.localStorage = ls
c.onShutdown(func() {
c.log.Info(logs.FrostFSNodeClosingComponentsOfTheStorageEngine)
c.log.Info(ctx, logs.FrostFSNodeClosingComponentsOfTheStorageEngine)
err := ls.Close(context.WithoutCancel(ctx))
if err != nil {
c.log.Info(logs.FrostFSNodeStorageEngineClosingFailure,
c.log.Info(ctx, logs.FrostFSNodeStorageEngineClosingFailure,
zap.String("error", err.Error()),
)
} else {
c.log.Info(logs.FrostFSNodeAllComponentsOfTheStorageEngineClosedSuccessfully)
c.log.Info(ctx, logs.FrostFSNodeAllComponentsOfTheStorageEngineClosedSuccessfully)
}
})
}
func initAccessPolicyEngine(_ context.Context, c *cfg) {
func initAccessPolicyEngine(ctx context.Context, c *cfg) {
var localOverrideDB chainbase.LocalOverrideDatabase
if nodeconfig.PersistentPolicyRules(c.appCfg).Path() == "" {
c.log.Warn(logs.FrostFSNodePersistentRuleStorageDBPathIsNotSetInmemoryWillBeUsed)
c.log.Warn(ctx, logs.FrostFSNodePersistentRuleStorageDBPathIsNotSetInmemoryWillBeUsed)
localOverrideDB = chainbase.NewInmemoryLocalOverrideDatabase()
} else {
localOverrideDB = chainbase.NewBoltLocalOverrideDatabase(
@ -1157,7 +1157,7 @@ func initAccessPolicyEngine(_ context.Context, c *cfg) {
c.onShutdown(func() {
if err := ape.LocalOverrideDatabaseCore().Close(); err != nil {
c.log.Warn(logs.FrostFSNodeAccessPolicyEngineClosingFailure,
c.log.Warn(ctx, logs.FrostFSNodeAccessPolicyEngineClosingFailure,
zap.Error(err),
)
}
@ -1206,10 +1206,10 @@ func (c *cfg) setContractNodeInfo(ni *netmap.NodeInfo) {
c.cfgNetmap.state.setNodeInfo(ni)
}
func (c *cfg) updateContractNodeInfo(epoch uint64) {
func (c *cfg) updateContractNodeInfo(ctx context.Context, epoch uint64) {
ni, err := c.netmapLocalNodeState(epoch)
if err != nil {
c.log.Error(logs.FrostFSNodeCouldNotUpdateNodeStateOnNewEpoch,
c.log.Error(ctx, logs.FrostFSNodeCouldNotUpdateNodeStateOnNewEpoch,
zap.Uint64("epoch", epoch),
zap.String("error", err.Error()))
return
@ -1221,19 +1221,19 @@ func (c *cfg) updateContractNodeInfo(epoch uint64) {
// bootstrapWithState calls "addPeer" method of the Sidechain Netmap contract
// with the binary-encoded information from the current node's configuration.
// The state is set using the provided setter which MUST NOT be nil.
func (c *cfg) bootstrapWithState(stateSetter func(*netmap.NodeInfo)) error {
func (c *cfg) bootstrapWithState(ctx context.Context, stateSetter func(*netmap.NodeInfo)) error {
ni := c.cfgNodeInfo.localInfo
stateSetter(&ni)
prm := nmClient.AddPeerPrm{}
prm.SetNodeInfo(ni)
return c.cfgNetmap.wrapper.AddPeer(prm)
return c.cfgNetmap.wrapper.AddPeer(ctx, prm)
}
// bootstrapOnline calls cfg.bootstrapWithState with "online" state.
func bootstrapOnline(c *cfg) error {
return c.bootstrapWithState(func(ni *netmap.NodeInfo) {
func bootstrapOnline(ctx context.Context, c *cfg) error {
return c.bootstrapWithState(ctx, func(ni *netmap.NodeInfo) {
ni.SetStatus(netmap.Online)
})
}
@ -1241,21 +1241,21 @@ func bootstrapOnline(c *cfg) error {
// bootstrap calls bootstrapWithState with:
// - "maintenance" state if maintenance is in progress on the current node
// - "online", otherwise
func (c *cfg) bootstrap() error {
func (c *cfg) bootstrap(ctx context.Context) error {
// switch to online except when under maintenance
st := c.cfgNetmap.state.controlNetmapStatus()
if st == control.NetmapStatus_MAINTENANCE {
c.log.Info(logs.FrostFSNodeBootstrappingWithTheMaintenanceState)
return c.bootstrapWithState(func(ni *netmap.NodeInfo) {
c.log.Info(ctx, logs.FrostFSNodeBootstrappingWithTheMaintenanceState)
return c.bootstrapWithState(ctx, func(ni *netmap.NodeInfo) {
ni.SetStatus(netmap.Maintenance)
})
}
c.log.Info(logs.FrostFSNodeBootstrappingWithOnlineState,
c.log.Info(ctx, logs.FrostFSNodeBootstrappingWithOnlineState,
zap.Stringer("previous", st),
)
return bootstrapOnline(c)
return bootstrapOnline(ctx, c)
}
// needBootstrap checks if local node should be registered in network on bootup.
@ -1280,19 +1280,19 @@ func (c *cfg) signalWatcher(ctx context.Context) {
// signals causing application to shut down should have priority over
// reconfiguration signal
case <-ch:
c.log.Info(logs.FrostFSNodeTerminationSignalHasBeenReceivedStopping)
c.log.Info(ctx, logs.FrostFSNodeTerminationSignalHasBeenReceivedStopping)
c.shutdown()
c.shutdown(ctx)
c.log.Info(logs.FrostFSNodeTerminationSignalProcessingIsComplete)
c.log.Info(ctx, logs.FrostFSNodeTerminationSignalProcessingIsComplete)
return
case err := <-c.internalErr: // internal application error
c.log.Warn(logs.FrostFSNodeInternalApplicationError,
c.log.Warn(ctx, logs.FrostFSNodeInternalApplicationError,
zap.String("message", err.Error()))
c.shutdown()
c.shutdown(ctx)
c.log.Info(logs.FrostFSNodeInternalErrorProcessingIsComplete)
c.log.Info(ctx, logs.FrostFSNodeInternalErrorProcessingIsComplete)
return
default:
// block until any signal is received
@ -1300,19 +1300,19 @@ func (c *cfg) signalWatcher(ctx context.Context) {
case <-sighupCh:
c.reloadConfig(ctx)
case <-ch:
c.log.Info(logs.FrostFSNodeTerminationSignalHasBeenReceivedStopping)
c.log.Info(ctx, logs.FrostFSNodeTerminationSignalHasBeenReceivedStopping)
c.shutdown()
c.shutdown(ctx)
c.log.Info(logs.FrostFSNodeTerminationSignalProcessingIsComplete)
c.log.Info(ctx, logs.FrostFSNodeTerminationSignalProcessingIsComplete)
return
case err := <-c.internalErr: // internal application error
c.log.Warn(logs.FrostFSNodeInternalApplicationError,
c.log.Warn(ctx, logs.FrostFSNodeInternalApplicationError,
zap.String("message", err.Error()))
c.shutdown()
c.shutdown(ctx)
c.log.Info(logs.FrostFSNodeInternalErrorProcessingIsComplete)
c.log.Info(ctx, logs.FrostFSNodeInternalErrorProcessingIsComplete)
return
}
}
@ -1320,17 +1320,17 @@ func (c *cfg) signalWatcher(ctx context.Context) {
}
func (c *cfg) reloadConfig(ctx context.Context) {
c.log.Info(logs.FrostFSNodeSIGHUPHasBeenReceivedRereadingConfiguration)
c.log.Info(ctx, logs.FrostFSNodeSIGHUPHasBeenReceivedRereadingConfiguration)
if !c.compareAndSwapHealthStatus(control.HealthStatus_READY, control.HealthStatus_RECONFIGURING) {
c.log.Info(logs.FrostFSNodeSIGHUPSkip)
if !c.compareAndSwapHealthStatus(ctx, control.HealthStatus_READY, control.HealthStatus_RECONFIGURING) {
c.log.Info(ctx, logs.FrostFSNodeSIGHUPSkip)
return
}
defer c.compareAndSwapHealthStatus(control.HealthStatus_RECONFIGURING, control.HealthStatus_READY)
defer c.compareAndSwapHealthStatus(ctx, control.HealthStatus_RECONFIGURING, control.HealthStatus_READY)
err := c.reloadAppConfig()
if err != nil {
c.log.Error(logs.FrostFSNodeConfigurationReading, zap.Error(err))
c.log.Error(ctx, logs.FrostFSNodeConfigurationReading, zap.Error(err))
return
}
@ -1341,7 +1341,7 @@ func (c *cfg) reloadConfig(ctx context.Context) {
logPrm, err := c.loggerPrm()
if err != nil {
c.log.Error(logs.FrostFSNodeLoggerConfigurationPreparation, zap.Error(err))
c.log.Error(ctx, logs.FrostFSNodeLoggerConfigurationPreparation, zap.Error(err))
return
}
@ -1362,25 +1362,25 @@ func (c *cfg) reloadConfig(ctx context.Context) {
err = c.cfgObject.cfgLocalStorage.localStorage.Reload(ctx, rcfg)
if err != nil {
c.log.Error(logs.FrostFSNodeStorageEngineConfigurationUpdate, zap.Error(err))
c.log.Error(ctx, logs.FrostFSNodeStorageEngineConfigurationUpdate, zap.Error(err))
return
}
for _, component := range components {
err = component.reloadFunc()
if err != nil {
c.log.Error(logs.FrostFSNodeUpdatedConfigurationApplying,
c.log.Error(ctx, logs.FrostFSNodeUpdatedConfigurationApplying,
zap.String("component", component.name),
zap.Error(err))
}
}
if err := c.dialerSource.Update(internalNetConfig(c.appCfg, c.metricsCollector.MultinetMetrics())); err != nil {
c.log.Error(logs.FailedToUpdateMultinetConfiguration, zap.Error(err))
c.log.Error(ctx, logs.FailedToUpdateMultinetConfiguration, zap.Error(err))
return
}
c.log.Info(logs.FrostFSNodeConfigurationHasBeenReloadedSuccessfully)
c.log.Info(ctx, logs.FrostFSNodeConfigurationHasBeenReloadedSuccessfully)
}
func (c *cfg) getComponents(ctx context.Context, logPrm *logger.Prm) []dCmp {
@ -1388,7 +1388,7 @@ func (c *cfg) getComponents(ctx context.Context, logPrm *logger.Prm) []dCmp {
components = append(components, dCmp{"logger", logPrm.Reload})
components = append(components, dCmp{"runtime", func() error {
setRuntimeParameters(c)
setRuntimeParameters(ctx, c)
return nil
}})
components = append(components, dCmp{"audit", func() error {
@ -1403,7 +1403,7 @@ func (c *cfg) getComponents(ctx context.Context, logPrm *logger.Prm) []dCmp {
}
updated, err := tracing.Setup(ctx, *traceConfig)
if updated {
c.log.Info(logs.FrostFSNodeTracingConfigationUpdated)
c.log.Info(ctx, logs.FrostFSNodeTracingConfigationUpdated)
}
return err
}})
@ -1438,7 +1438,7 @@ func (c *cfg) reloadPools() error {
func (c *cfg) reloadPool(p *ants.Pool, newSize int, name string) {
oldSize := p.Cap()
if oldSize != newSize {
c.log.Info(logs.FrostFSNodePoolConfigurationUpdate, zap.String("field", name),
c.log.Info(context.Background(), logs.FrostFSNodePoolConfigurationUpdate, zap.String("field", name),
zap.Int("old", oldSize), zap.Int("new", newSize))
p.Tune(newSize)
}
@ -1474,14 +1474,14 @@ func (c *cfg) createContainerInfoProvider(ctx context.Context) container.InfoPro
})
}
func (c *cfg) shutdown() {
old := c.swapHealthStatus(control.HealthStatus_SHUTTING_DOWN)
func (c *cfg) shutdown(ctx context.Context) {
old := c.swapHealthStatus(ctx, control.HealthStatus_SHUTTING_DOWN)
if old == control.HealthStatus_SHUTTING_DOWN {
c.log.Info(logs.FrostFSNodeShutdownSkip)
c.log.Info(ctx, logs.FrostFSNodeShutdownSkip)
return
}
if old == control.HealthStatus_STARTING {
c.log.Warn(logs.FrostFSNodeShutdownWhenNotReady)
c.log.Warn(ctx, logs.FrostFSNodeShutdownWhenNotReady)
}
c.ctxCancel()
@ -1491,6 +1491,6 @@ func (c *cfg) shutdown() {
}
if err := sdnotify.ClearStatus(); err != nil {
c.log.Error(logs.FailedToReportStatusToSystemd, zap.Error(err))
c.log.Error(ctx, logs.FailedToReportStatusToSystemd, zap.Error(err))
}
}


@ -5,6 +5,7 @@ import (
"errors"
"fmt"
"os"
"strconv"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config"
"git.frostfs.info/TrueCloudLab/frostfs-node/misc"
@ -24,6 +25,7 @@ func ToTracingConfig(c *config.Config) (*tracing.Config, error) {
Service: "frostfs-node",
InstanceID: getInstanceIDOrDefault(c),
Version: misc.Version,
Attributes: make(map[string]string),
}
if trustedCa := config.StringSafe(c.Sub(subsection), "trusted_ca"); trustedCa != "" {
@ -38,11 +40,30 @@ func ToTracingConfig(c *config.Config) (*tracing.Config, error) {
}
conf.ServerCaCertPool = certPool
}
i := uint64(0)
for ; ; i++ {
si := strconv.FormatUint(i, 10)
ac := c.Sub(subsection).Sub("attributes").Sub(si)
k := config.StringSafe(ac, "key")
if k == "" {
break
}
v := config.StringSafe(ac, "value")
if v == "" {
return nil, fmt.Errorf("empty tracing attribute value for key %s", k)
}
if _, ok := conf.Attributes[k]; ok {
return nil, fmt.Errorf("tracing attribute key %s defined more than once", k)
}
conf.Attributes[k] = v
}
return conf, nil
}
func getInstanceIDOrDefault(c *config.Config) string {
s := config.StringSlice(c.Sub("node"), "addresses")
s := config.StringSliceSafe(c.Sub("node"), "addresses")
if len(s) > 0 {
return s[0]
}
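
For reference, a hedged sketch of how the new tracing attributes could be declared in the example node configuration. The enabled, endpoint, key0 and key1 values mirror the test below; the exact YAML layout (numbered subsections under attributes, matching the attributes.<index>.key / attributes.<index>.value reads in the loop above) is an assumption:

```
# Hypothetical excerpt of config/example/node.yaml; layout assumed from the
# parsing loop above, not taken verbatim from the repository.
tracing:
  enabled: true
  endpoint: localhost
  attributes:
    0:
      key: key0
      value: value
    1:
      key: key1
      value: value
```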


@ -0,0 +1,46 @@
package tracing
import (
"testing"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config"
configtest "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/test"
"git.frostfs.info/TrueCloudLab/frostfs-observability/tracing"
"github.com/stretchr/testify/require"
)
func TestTracingSection(t *testing.T) {
t.Run("defaults", func(t *testing.T) {
tc, err := ToTracingConfig(configtest.EmptyConfig())
require.NoError(t, err)
require.Equal(t, false, tc.Enabled)
require.Equal(t, tracing.Exporter(""), tc.Exporter)
require.Equal(t, "", tc.Endpoint)
require.Equal(t, "frostfs-node", tc.Service)
require.Equal(t, "", tc.InstanceID)
require.Nil(t, tc.ServerCaCertPool)
require.Empty(t, tc.Attributes)
})
const path = "../../../../config/example/node"
fileConfigTest := func(c *config.Config) {
tc, err := ToTracingConfig(c)
require.NoError(t, err)
require.Equal(t, true, tc.Enabled)
require.Equal(t, tracing.OTLPgRPCExporter, tc.Exporter)
require.Equal(t, "localhost", tc.Endpoint)
require.Equal(t, "frostfs-node", tc.Service)
require.Nil(t, tc.ServerCaCertPool)
require.EqualValues(t, map[string]string{
"key0": "value",
"key1": "value",
}, tc.Attributes)
}
configtest.ForEachFileType(path, fileConfigTest)
t.Run("ENV", func(t *testing.T) {
configtest.ForEnvFileType(t, path, fileConfigTest)
})
}


@ -89,7 +89,7 @@ func configureEACLAndContainerSources(c *cfg, client *cntClient.Client, cnrSrc c
if c.cfgMorph.containerCacheSize > 0 {
containerCache := newCachedContainerStorage(cnrSrc, c.cfgMorph.cacheTTL, c.cfgMorph.containerCacheSize)
subscribeToContainerCreation(c, func(e event.Event) {
subscribeToContainerCreation(c, func(ctx context.Context, e event.Event) {
ev := e.(containerEvent.PutSuccess)
// read owner of the created container in order to update the reading cache.
@ -102,21 +102,21 @@ func configureEACLAndContainerSources(c *cfg, client *cntClient.Client, cnrSrc c
} else {
// unlike removal, we expect successful receive of the container
// after successful creation, so logging can be useful
c.log.Error(logs.FrostFSNodeReadNewlyCreatedContainerAfterTheNotification,
c.log.Error(ctx, logs.FrostFSNodeReadNewlyCreatedContainerAfterTheNotification,
zap.Stringer("id", ev.ID),
zap.Error(err),
)
}
c.log.Debug(logs.FrostFSNodeContainerCreationEventsReceipt,
c.log.Debug(ctx, logs.FrostFSNodeContainerCreationEventsReceipt,
zap.Stringer("id", ev.ID),
)
})
subscribeToContainerRemoval(c, func(e event.Event) {
subscribeToContainerRemoval(c, func(ctx context.Context, e event.Event) {
ev := e.(containerEvent.DeleteSuccess)
containerCache.handleRemoval(ev.ID)
c.log.Debug(logs.FrostFSNodeContainerRemovalEventsReceipt,
c.log.Debug(ctx, logs.FrostFSNodeContainerRemovalEventsReceipt,
zap.Stringer("id", ev.ID),
)
})
@ -237,10 +237,10 @@ type morphContainerWriter struct {
neoClient *cntClient.Client
}
func (m morphContainerWriter) Put(cnr containerCore.Container) (*cid.ID, error) {
return cntClient.Put(m.neoClient, cnr)
func (m morphContainerWriter) Put(ctx context.Context, cnr containerCore.Container) (*cid.ID, error) {
return cntClient.Put(ctx, m.neoClient, cnr)
}
func (m morphContainerWriter) Delete(witness containerCore.RemovalWitness) error {
return cntClient.Delete(m.neoClient, witness)
func (m morphContainerWriter) Delete(ctx context.Context, witness containerCore.RemovalWitness) error {
return cntClient.Delete(ctx, m.neoClient, witness)
}


@ -16,7 +16,7 @@ import (
const serviceNameControl = "control"
func initControlService(c *cfg) {
func initControlService(ctx context.Context, c *cfg) {
endpoint := controlconfig.GRPC(c.appCfg).Endpoint()
if endpoint == controlconfig.GRPCEndpointDefault {
return
@ -46,21 +46,21 @@ func initControlService(c *cfg) {
lis, err := net.Listen("tcp", endpoint)
if err != nil {
c.log.Error(logs.FrostFSNodeCantListenGRPCEndpointControl, zap.Error(err))
c.log.Error(ctx, logs.FrostFSNodeCantListenGRPCEndpointControl, zap.Error(err))
return
}
c.cfgControlService.server = grpc.NewServer()
c.onShutdown(func() {
stopGRPC("FrostFS Control API", c.cfgControlService.server, c.log)
stopGRPC(ctx, "FrostFS Control API", c.cfgControlService.server, c.log)
})
control.RegisterControlServiceServer(c.cfgControlService.server, ctlSvc)
c.workers = append(c.workers, newWorkerFromFunc(func(ctx context.Context) {
runAndLog(ctx, c, serviceNameControl, false, func(context.Context, *cfg) {
c.log.Info(logs.FrostFSNodeStartListeningEndpoint,
c.log.Info(ctx, logs.FrostFSNodeStartListeningEndpoint,
zap.String("service", serviceNameControl),
zap.String("endpoint", endpoint))
fatalOnErr(c.cfgControlService.server.Serve(lis))
@ -72,23 +72,23 @@ func (c *cfg) NetmapStatus() control.NetmapStatus {
return c.cfgNetmap.state.controlNetmapStatus()
}
func (c *cfg) setHealthStatus(st control.HealthStatus) {
c.notifySystemd(st)
func (c *cfg) setHealthStatus(ctx context.Context, st control.HealthStatus) {
c.notifySystemd(ctx, st)
c.healthStatus.Store(int32(st))
c.metricsCollector.State().SetHealth(int32(st))
}
func (c *cfg) compareAndSwapHealthStatus(oldSt, newSt control.HealthStatus) (swapped bool) {
func (c *cfg) compareAndSwapHealthStatus(ctx context.Context, oldSt, newSt control.HealthStatus) (swapped bool) {
if swapped = c.healthStatus.CompareAndSwap(int32(oldSt), int32(newSt)); swapped {
c.notifySystemd(newSt)
c.notifySystemd(ctx, newSt)
c.metricsCollector.State().SetHealth(int32(newSt))
}
return
}
func (c *cfg) swapHealthStatus(st control.HealthStatus) (old control.HealthStatus) {
func (c *cfg) swapHealthStatus(ctx context.Context, st control.HealthStatus) (old control.HealthStatus) {
old = control.HealthStatus(c.healthStatus.Swap(int32(st)))
c.notifySystemd(st)
c.notifySystemd(ctx, st)
c.metricsCollector.State().SetHealth(int32(st))
return
}
@ -97,7 +97,7 @@ func (c *cfg) HealthStatus() control.HealthStatus {
return control.HealthStatus(c.healthStatus.Load())
}
func (c *cfg) notifySystemd(st control.HealthStatus) {
func (c *cfg) notifySystemd(ctx context.Context, st control.HealthStatus) {
if !c.sdNotify {
return
}
@ -113,6 +113,6 @@ func (c *cfg) notifySystemd(st control.HealthStatus) {
err = sdnotify.Status(fmt.Sprintf("%v", st))
}
if err != nil {
c.log.Error(logs.FailedToReportStatusToSystemd, zap.Error(err))
c.log.Error(ctx, logs.FailedToReportStatusToSystemd, zap.Error(err))
}
}

View file

@ -1,6 +1,7 @@
package main
import (
"context"
"crypto/tls"
"errors"
"net"
@ -18,11 +19,11 @@ import (
const maxRecvMsgSize = 256 << 20
func initGRPC(c *cfg) {
func initGRPC(ctx context.Context, c *cfg) {
var endpointsToReconnect []string
var successCount int
grpcconfig.IterateEndpoints(c.appCfg, func(sc *grpcconfig.Config) {
serverOpts, ok := getGrpcServerOpts(c, sc)
serverOpts, ok := getGrpcServerOpts(ctx, c, sc)
if !ok {
return
}
@ -30,7 +31,7 @@ func initGRPC(c *cfg) {
lis, err := net.Listen("tcp", sc.Endpoint())
if err != nil {
c.metricsCollector.GrpcServerMetrics().MarkUnhealthy(sc.Endpoint())
c.log.Error(logs.FrostFSNodeCantListenGRPCEndpoint, zap.Error(err))
c.log.Error(ctx, logs.FrostFSNodeCantListenGRPCEndpoint, zap.Error(err))
endpointsToReconnect = append(endpointsToReconnect, sc.Endpoint())
return
}
@ -39,7 +40,7 @@ func initGRPC(c *cfg) {
srv := grpc.NewServer(serverOpts...)
c.onShutdown(func() {
stopGRPC("FrostFS Public API", srv, c.log)
stopGRPC(ctx, "FrostFS Public API", srv, c.log)
})
c.cfgGRPC.append(sc.Endpoint(), lis, srv)
@ -52,11 +53,11 @@ func initGRPC(c *cfg) {
c.cfgGRPC.reconnectTimeout = grpcconfig.ReconnectTimeout(c.appCfg)
for _, endpoint := range endpointsToReconnect {
scheduleReconnect(endpoint, c)
scheduleReconnect(ctx, endpoint, c)
}
}
func scheduleReconnect(endpoint string, c *cfg) {
func scheduleReconnect(ctx context.Context, endpoint string, c *cfg) {
c.wg.Add(1)
go func() {
defer c.wg.Done()
@ -65,7 +66,7 @@ func scheduleReconnect(endpoint string, c *cfg) {
for {
select {
case <-t.C:
if tryReconnect(endpoint, c) {
if tryReconnect(ctx, endpoint, c) {
return
}
case <-c.done:
@ -75,20 +76,20 @@ func scheduleReconnect(endpoint string, c *cfg) {
}()
}
func tryReconnect(endpoint string, c *cfg) bool {
c.log.Info(logs.FrostFSNodeGRPCReconnecting, zap.String("endpoint", endpoint))
func tryReconnect(ctx context.Context, endpoint string, c *cfg) bool {
c.log.Info(ctx, logs.FrostFSNodeGRPCReconnecting, zap.String("endpoint", endpoint))
serverOpts, found := getGRPCEndpointOpts(endpoint, c)
serverOpts, found := getGRPCEndpointOpts(ctx, endpoint, c)
if !found {
c.log.Warn(logs.FrostFSNodeGRPCServerConfigNotFound, zap.String("endpoint", endpoint))
c.log.Warn(ctx, logs.FrostFSNodeGRPCServerConfigNotFound, zap.String("endpoint", endpoint))
return true
}
lis, err := net.Listen("tcp", endpoint)
if err != nil {
c.metricsCollector.GrpcServerMetrics().MarkUnhealthy(endpoint)
c.log.Error(logs.FrostFSNodeCantListenGRPCEndpoint, zap.Error(err))
c.log.Warn(logs.FrostFSNodeGRPCReconnectFailed, zap.Duration("next_try_in", c.cfgGRPC.reconnectTimeout))
c.log.Error(ctx, logs.FrostFSNodeCantListenGRPCEndpoint, zap.Error(err))
c.log.Warn(ctx, logs.FrostFSNodeGRPCReconnectFailed, zap.Duration("next_try_in", c.cfgGRPC.reconnectTimeout))
return false
}
c.metricsCollector.GrpcServerMetrics().MarkHealthy(endpoint)
@ -96,16 +97,16 @@ func tryReconnect(endpoint string, c *cfg) bool {
srv := grpc.NewServer(serverOpts...)
c.onShutdown(func() {
stopGRPC("FrostFS Public API", srv, c.log)
stopGRPC(ctx, "FrostFS Public API", srv, c.log)
})
c.cfgGRPC.appendAndHandle(endpoint, lis, srv)
c.log.Info(logs.FrostFSNodeGRPCReconnectedSuccessfully, zap.String("endpoint", endpoint))
c.log.Info(ctx, logs.FrostFSNodeGRPCReconnectedSuccessfully, zap.String("endpoint", endpoint))
return true
}
func getGRPCEndpointOpts(endpoint string, c *cfg) (result []grpc.ServerOption, found bool) {
func getGRPCEndpointOpts(ctx context.Context, endpoint string, c *cfg) (result []grpc.ServerOption, found bool) {
unlock := c.LockAppConfigShared()
defer unlock()
grpcconfig.IterateEndpoints(c.appCfg, func(sc *grpcconfig.Config) {
@ -116,7 +117,7 @@ func getGRPCEndpointOpts(endpoint string, c *cfg) (result []grpc.ServerOption, f
return
}
var ok bool
result, ok = getGrpcServerOpts(c, sc)
result, ok = getGrpcServerOpts(ctx, c, sc)
if !ok {
return
}
@ -125,7 +126,7 @@ func getGRPCEndpointOpts(endpoint string, c *cfg) (result []grpc.ServerOption, f
return
}
func getGrpcServerOpts(c *cfg, sc *grpcconfig.Config) ([]grpc.ServerOption, bool) {
func getGrpcServerOpts(ctx context.Context, c *cfg, sc *grpcconfig.Config) ([]grpc.ServerOption, bool) {
serverOpts := []grpc.ServerOption{
grpc.MaxRecvMsgSize(maxRecvMsgSize),
grpc.ChainUnaryInterceptor(
@ -143,7 +144,7 @@ func getGrpcServerOpts(c *cfg, sc *grpcconfig.Config) ([]grpc.ServerOption, bool
if tlsCfg != nil {
cert, err := tls.LoadX509KeyPair(tlsCfg.CertificateFile(), tlsCfg.KeyFile())
if err != nil {
c.log.Error(logs.FrostFSNodeCouldNotReadCertificateFromFile, zap.Error(err))
c.log.Error(ctx, logs.FrostFSNodeCouldNotReadCertificateFromFile, zap.Error(err))
return nil, false
}
@ -174,38 +175,38 @@ func getGrpcServerOpts(c *cfg, sc *grpcconfig.Config) ([]grpc.ServerOption, bool
return serverOpts, true
}
func serveGRPC(c *cfg) {
func serveGRPC(ctx context.Context, c *cfg) {
c.cfgGRPC.performAndSave(func(e string, l net.Listener, s *grpc.Server) {
c.wg.Add(1)
go func() {
defer func() {
c.log.Info(logs.FrostFSNodeStopListeningGRPCEndpoint,
c.log.Info(ctx, logs.FrostFSNodeStopListeningGRPCEndpoint,
zap.Stringer("endpoint", l.Addr()),
)
c.wg.Done()
}()
c.log.Info(logs.FrostFSNodeStartListeningEndpoint,
c.log.Info(ctx, logs.FrostFSNodeStartListeningEndpoint,
zap.String("service", "gRPC"),
zap.Stringer("endpoint", l.Addr()),
)
if err := s.Serve(l); err != nil {
c.metricsCollector.GrpcServerMetrics().MarkUnhealthy(e)
c.log.Error(logs.FrostFSNodeGRPCServerError, zap.Error(err))
c.log.Error(ctx, logs.FrostFSNodeGRPCServerError, zap.Error(err))
c.cfgGRPC.dropConnection(e)
scheduleReconnect(e, c)
scheduleReconnect(ctx, e, c)
}
}()
})
}
func stopGRPC(name string, s *grpc.Server, l *logger.Logger) {
l = &logger.Logger{Logger: l.With(zap.String("name", name))}
func stopGRPC(ctx context.Context, name string, s *grpc.Server, l *logger.Logger) {
l = l.With(zap.String("name", name))
l.Info(logs.FrostFSNodeStoppingGRPCServer)
l.Info(ctx, logs.FrostFSNodeStoppingGRPCServer)
// GracefulStop() may freeze forever, see #1270
done := make(chan struct{})
@ -217,9 +218,9 @@ func stopGRPC(name string, s *grpc.Server, l *logger.Logger) {
select {
case <-done:
case <-time.After(1 * time.Minute):
l.Info(logs.FrostFSNodeGRPCCannotShutdownGracefullyForcingStop)
l.Info(ctx, logs.FrostFSNodeGRPCCannotShutdownGracefullyForcingStop)
s.Stop()
}
l.Info(logs.FrostFSNodeGRPCServerStoppedSuccessfully)
l.Info(ctx, logs.FrostFSNodeGRPCServerStoppedSuccessfully)
}
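The hunk above shows only part of the shutdown path (the goroutine that calls GracefulStop is elided by the diff). A minimal, self-contained sketch of the same pattern — attempt a graceful stop first, then force termination after a deadline — might look like the following; the helper name and the one-minute deadline are illustrative, not taken from the repository:

```go
package main

import (
	"log"
	"net"
	"time"

	"google.golang.org/grpc"
)

// stopWithTimeout attempts GracefulStop and, if it does not return within d,
// forces termination with Stop — mirroring the select/timeout shown above.
func stopWithTimeout(s *grpc.Server, d time.Duration) {
	done := make(chan struct{})
	go func() {
		s.GracefulStop() // may block while requests are in flight
		close(done)
	}()
	select {
	case <-done:
		log.Println("gRPC server stopped gracefully")
	case <-time.After(d):
		log.Println("graceful stop timed out, forcing stop")
		s.Stop()
	}
}

func main() {
	srv := grpc.NewServer()
	lis, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		log.Fatal(err)
	}
	go func() { _ = srv.Serve(lis) }()
	stopWithTimeout(srv, time.Minute)
}
```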

View file

@ -20,9 +20,9 @@ type httpComponent struct {
preReload func(c *cfg)
}
func (cmp *httpComponent) init(c *cfg) {
func (cmp *httpComponent) init(ctx context.Context, c *cfg) {
if !cmp.enabled {
c.log.Info(cmp.name + " is disabled")
c.log.Info(ctx, cmp.name+" is disabled")
return
}
// Init server with parameters
@ -39,14 +39,14 @@ func (cmp *httpComponent) init(c *cfg) {
go func() {
defer c.wg.Done()
c.log.Info(logs.FrostFSNodeStartListeningEndpoint,
c.log.Info(ctx, logs.FrostFSNodeStartListeningEndpoint,
zap.String("service", cmp.name),
zap.String("endpoint", cmp.address))
fatalOnErr(srv.Serve())
}()
c.closers = append(c.closers, closer{
cmp.name,
func() { stopAndLog(c, cmp.name, srv.Shutdown) },
func() { stopAndLog(ctx, c, cmp.name, srv.Shutdown) },
})
}
@ -62,7 +62,7 @@ func (cmp *httpComponent) reload(ctx context.Context) error {
// Cleanup
delCloser(cmp.cfg, cmp.name)
// Init server with new parameters
cmp.init(cmp.cfg)
cmp.init(ctx, cmp.cfg)
// Start worker
if cmp.enabled {
startWorker(ctx, cmp.cfg, *getWorker(cmp.cfg, cmp.name))

View file

@ -61,21 +61,21 @@ func main() {
var ctx context.Context
ctx, c.ctxCancel = context.WithCancel(context.Background())
c.setHealthStatus(control.HealthStatus_STARTING)
c.setHealthStatus(ctx, control.HealthStatus_STARTING)
initApp(ctx, c)
bootUp(ctx, c)
c.compareAndSwapHealthStatus(control.HealthStatus_STARTING, control.HealthStatus_READY)
c.compareAndSwapHealthStatus(ctx, control.HealthStatus_STARTING, control.HealthStatus_READY)
wait(c)
}
func initAndLog(c *cfg, name string, initializer func(*cfg)) {
c.log.Info(fmt.Sprintf("initializing %s service...", name))
func initAndLog(ctx context.Context, c *cfg, name string, initializer func(*cfg)) {
c.log.Info(ctx, fmt.Sprintf("initializing %s service...", name))
initializer(c)
c.log.Info(name + " service has been successfully initialized")
c.log.Info(ctx, name+" service has been successfully initialized")
}
func initApp(ctx context.Context, c *cfg) {
@ -85,72 +85,72 @@ func initApp(ctx context.Context, c *cfg) {
c.wg.Done()
}()
setRuntimeParameters(c)
setRuntimeParameters(ctx, c)
metrics, _ := metricsComponent(c)
initAndLog(c, "profiler", initProfilerService)
initAndLog(c, metrics.name, metrics.init)
initAndLog(ctx, c, "profiler", func(c *cfg) { initProfilerService(ctx, c) })
initAndLog(ctx, c, metrics.name, func(c *cfg) { metrics.init(ctx, c) })
initAndLog(c, "tracing", func(c *cfg) { initTracing(ctx, c) })
initAndLog(ctx, c, "tracing", func(c *cfg) { initTracing(ctx, c) })
initLocalStorage(ctx, c)
initAndLog(c, "storage engine", func(c *cfg) {
initAndLog(ctx, c, "storage engine", func(c *cfg) {
fatalOnErr(c.cfgObject.cfgLocalStorage.localStorage.Open(ctx))
fatalOnErr(c.cfgObject.cfgLocalStorage.localStorage.Init(ctx))
})
initAndLog(c, "gRPC", initGRPC)
initAndLog(c, "netmap", func(c *cfg) { initNetmapService(ctx, c) })
initAndLog(ctx, c, "gRPC", func(c *cfg) { initGRPC(ctx, c) })
initAndLog(ctx, c, "netmap", func(c *cfg) { initNetmapService(ctx, c) })
initAccessPolicyEngine(ctx, c)
initAndLog(c, "access policy engine", func(c *cfg) {
initAndLog(ctx, c, "access policy engine", func(c *cfg) {
fatalOnErr(c.cfgObject.cfgAccessPolicyEngine.accessPolicyEngine.LocalOverrideDatabaseCore().Open(ctx))
fatalOnErr(c.cfgObject.cfgAccessPolicyEngine.accessPolicyEngine.LocalOverrideDatabaseCore().Init())
})
initAndLog(c, "accounting", func(c *cfg) { initAccountingService(ctx, c) })
initAndLog(c, "container", func(c *cfg) { initContainerService(ctx, c) })
initAndLog(c, "session", initSessionService)
initAndLog(c, "object", initObjectService)
initAndLog(c, "tree", initTreeService)
initAndLog(c, "apemanager", initAPEManagerService)
initAndLog(c, "control", initControlService)
initAndLog(ctx, c, "accounting", func(c *cfg) { initAccountingService(ctx, c) })
initAndLog(ctx, c, "container", func(c *cfg) { initContainerService(ctx, c) })
initAndLog(ctx, c, "session", initSessionService)
initAndLog(ctx, c, "object", initObjectService)
initAndLog(ctx, c, "tree", initTreeService)
initAndLog(ctx, c, "apemanager", initAPEManagerService)
initAndLog(ctx, c, "control", func(c *cfg) { initControlService(ctx, c) })
initAndLog(c, "morph notifications", func(c *cfg) { listenMorphNotifications(ctx, c) })
initAndLog(ctx, c, "morph notifications", func(c *cfg) { listenMorphNotifications(ctx, c) })
}
func runAndLog(ctx context.Context, c *cfg, name string, logSuccess bool, starter func(context.Context, *cfg)) {
c.log.Info(fmt.Sprintf("starting %s service...", name))
c.log.Info(ctx, fmt.Sprintf("starting %s service...", name))
starter(ctx, c)
if logSuccess {
c.log.Info(name + " service started successfully")
c.log.Info(ctx, name+" service started successfully")
}
}
func stopAndLog(c *cfg, name string, stopper func() error) {
c.log.Debug(fmt.Sprintf("shutting down %s service", name))
func stopAndLog(ctx context.Context, c *cfg, name string, stopper func(context.Context) error) {
c.log.Debug(ctx, fmt.Sprintf("shutting down %s service", name))
err := stopper()
err := stopper(ctx)
if err != nil {
c.log.Debug(fmt.Sprintf("could not shutdown %s server", name),
c.log.Debug(ctx, fmt.Sprintf("could not shutdown %s server", name),
zap.String("error", err.Error()),
)
}
c.log.Debug(name + " service has been stopped")
c.log.Debug(ctx, name+" service has been stopped")
}
func bootUp(ctx context.Context, c *cfg) {
runAndLog(ctx, c, "gRPC", false, func(_ context.Context, c *cfg) { serveGRPC(c) })
runAndLog(ctx, c, "gRPC", false, func(_ context.Context, c *cfg) { serveGRPC(ctx, c) })
runAndLog(ctx, c, "notary", true, makeAndWaitNotaryDeposit)
bootstrapNode(c)
bootstrapNode(ctx, c)
startWorkers(ctx, c)
}
func wait(c *cfg) {
c.log.Info(logs.CommonApplicationStarted,
c.log.Info(context.Background(), logs.CommonApplicationStarted,
zap.String("version", misc.Version))
<-c.done // graceful shutdown
@ -160,12 +160,12 @@ func wait(c *cfg) {
go func() {
defer drain.Done()
for err := range c.internalErr {
c.log.Warn(logs.FrostFSNodeInternalApplicationError,
c.log.Warn(context.Background(), logs.FrostFSNodeInternalApplicationError,
zap.String("message", err.Error()))
}
}()
c.log.Debug(logs.FrostFSNodeWaitingForAllProcessesToStop)
c.log.Debug(context.Background(), logs.FrostFSNodeWaitingForAllProcessesToStop)
c.wg.Wait()

View file

@ -17,11 +17,7 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/rand"
"github.com/nspcc-dev/neo-go/pkg/core/block"
"github.com/nspcc-dev/neo-go/pkg/core/state"
"github.com/nspcc-dev/neo-go/pkg/neorpc/result"
"github.com/nspcc-dev/neo-go/pkg/rpcclient/waiter"
"github.com/nspcc-dev/neo-go/pkg/smartcontract/trigger"
"github.com/nspcc-dev/neo-go/pkg/util"
"github.com/nspcc-dev/neo-go/pkg/vm/vmstate"
"go.uber.org/zap"
)
@ -48,7 +44,7 @@ func (c *cfg) initMorphComponents(ctx context.Context) {
fatalOnErr(err)
}
c.log.Info(logs.FrostFSNodeNotarySupport,
c.log.Info(ctx, logs.FrostFSNodeNotarySupport,
zap.Bool("sidechain_enabled", c.cfgMorph.notaryEnabled),
)
@ -64,7 +60,7 @@ func (c *cfg) initMorphComponents(ctx context.Context) {
msPerBlock, err := c.cfgMorph.client.MsPerBlock()
fatalOnErr(err)
c.cfgMorph.cacheTTL = time.Duration(msPerBlock) * time.Millisecond
c.log.Debug(logs.FrostFSNodeMorphcacheTTLFetchedFromNetwork, zap.Duration("value", c.cfgMorph.cacheTTL))
c.log.Debug(ctx, logs.FrostFSNodeMorphcacheTTLFetchedFromNetwork, zap.Duration("value", c.cfgMorph.cacheTTL))
}
if c.cfgMorph.cacheTTL < 0 {
@ -102,7 +98,7 @@ func initMorphClient(ctx context.Context, c *cfg) {
client.WithDialerSource(c.dialerSource),
)
if err != nil {
c.log.Info(logs.FrostFSNodeFailedToCreateNeoRPCClient,
c.log.Info(ctx, logs.FrostFSNodeFailedToCreateNeoRPCClient,
zap.Any("endpoints", addresses),
zap.String("error", err.Error()),
)
@ -111,12 +107,12 @@ func initMorphClient(ctx context.Context, c *cfg) {
}
c.onShutdown(func() {
c.log.Info(logs.FrostFSNodeClosingMorphComponents)
c.log.Info(ctx, logs.FrostFSNodeClosingMorphComponents)
cli.Close()
})
if err := cli.SetGroupSignerScope(); err != nil {
c.log.Info(logs.FrostFSNodeFailedToSetGroupSignerScopeContinueWithGlobal, zap.Error(err))
c.log.Info(ctx, logs.FrostFSNodeFailedToSetGroupSignerScopeContinueWithGlobal, zap.Error(err))
}
c.cfgMorph.client = cli
@ -129,14 +125,14 @@ func makeAndWaitNotaryDeposit(ctx context.Context, c *cfg) {
return
}
tx, vub, err := makeNotaryDeposit(c)
tx, vub, err := makeNotaryDeposit(ctx, c)
fatalOnErr(err)
if tx.Equals(util.Uint256{}) {
// non-error deposit with an empty TX hash means
// that the deposit has already been made; no
// need to wait it.
c.log.Info(logs.FrostFSNodeNotaryDepositHasAlreadyBeenMade)
c.log.Info(ctx, logs.FrostFSNodeNotaryDepositHasAlreadyBeenMade)
return
}
@ -144,7 +140,7 @@ func makeAndWaitNotaryDeposit(ctx context.Context, c *cfg) {
fatalOnErr(err)
}
func makeNotaryDeposit(c *cfg) (util.Uint256, uint32, error) {
func makeNotaryDeposit(ctx context.Context, c *cfg) (util.Uint256, uint32, error) {
const (
// gasMultiplier defines how many times more the notary
// balance must be compared to the GAS balance of the node:
@ -161,51 +157,16 @@ func makeNotaryDeposit(c *cfg) (util.Uint256, uint32, error) {
return util.Uint256{}, 0, fmt.Errorf("could not calculate notary deposit: %w", err)
}
return c.cfgMorph.client.DepositEndlessNotary(depositAmount)
}
var (
errNotaryDepositFail = errors.New("notary deposit tx has faulted")
errNotaryDepositTimeout = errors.New("notary deposit tx has not appeared in the network")
)
type waiterClient struct {
c *client.Client
}
func (w *waiterClient) Context() context.Context {
return context.Background()
}
func (w *waiterClient) GetApplicationLog(hash util.Uint256, trig *trigger.Type) (*result.ApplicationLog, error) {
return w.c.GetApplicationLog(hash, trig)
}
func (w *waiterClient) GetBlockCount() (uint32, error) {
return w.c.BlockCount()
}
func (w *waiterClient) GetVersion() (*result.Version, error) {
return w.c.GetVersion()
return c.cfgMorph.client.DepositEndlessNotary(ctx, depositAmount)
}
func waitNotaryDeposit(ctx context.Context, c *cfg, tx util.Uint256, vub uint32) error {
w, err := waiter.NewPollingBased(&waiterClient{c: c.cfgMorph.client})
if err != nil {
return fmt.Errorf("could not create notary deposit waiter: %w", err)
if err := c.cfgMorph.client.WaitTxHalt(ctx, client.InvokeRes{Hash: tx, VUB: vub}); err != nil {
return err
}
res, err := w.WaitAny(ctx, vub, tx)
if err != nil {
if errors.Is(err, waiter.ErrTxNotAccepted) {
return errNotaryDepositTimeout
}
return fmt.Errorf("could not wait for notary deposit persists in chain: %w", err)
}
if res.Execution.VMState.HasFlag(vmstate.Halt) {
c.log.Info(logs.ClientNotaryDepositTransactionWasSuccessfullyPersisted)
return nil
}
return errNotaryDepositFail
c.log.Info(ctx, logs.ClientNotaryDepositTransactionWasSuccessfullyPersisted)
return nil
}
func listenMorphNotifications(ctx context.Context, c *cfg) {
@ -217,7 +178,7 @@ func listenMorphNotifications(ctx context.Context, c *cfg) {
fromSideChainBlock, err := c.persistate.UInt32(persistateSideChainLastBlockKey)
if err != nil {
fromSideChainBlock = 0
c.log.Warn(logs.FrostFSNodeCantGetLastProcessedSideChainBlockNumber, zap.String("error", err.Error()))
c.log.Warn(ctx, logs.FrostFSNodeCantGetLastProcessedSideChainBlockNumber, zap.String("error", err.Error()))
}
subs, err = subscriber.New(ctx, &subscriber.Params{
@ -246,7 +207,7 @@ func listenMorphNotifications(ctx context.Context, c *cfg) {
setNetmapNotificationParser(c, newEpochNotification, func(src *state.ContainedNotificationEvent) (event.Event, error) {
res, err := netmapEvent.ParseNewEpoch(src)
if err == nil {
c.log.Info(logs.FrostFSNodeNewEpochEventFromSidechain,
c.log.Info(ctx, logs.FrostFSNodeNewEpochEventFromSidechain,
zap.Uint64("number", res.(netmapEvent.NewEpoch).EpochNumber()),
)
}
@ -256,12 +217,12 @@ func listenMorphNotifications(ctx context.Context, c *cfg) {
registerNotificationHandlers(c.cfgNetmap.scriptHash, lis, c.cfgNetmap.parsers, c.cfgNetmap.subscribers)
registerNotificationHandlers(c.cfgContainer.scriptHash, lis, c.cfgContainer.parsers, c.cfgContainer.subscribers)
registerBlockHandler(lis, func(block *block.Block) {
c.log.Debug(logs.FrostFSNodeNewBlock, zap.Uint32("index", block.Index))
registerBlockHandler(lis, func(ctx context.Context, block *block.Block) {
c.log.Debug(ctx, logs.FrostFSNodeNewBlock, zap.Uint32("index", block.Index))
err = c.persistate.SetUInt32(persistateSideChainLastBlockKey, block.Index)
if err != nil {
c.log.Warn(logs.FrostFSNodeCantUpdatePersistentState,
c.log.Warn(ctx, logs.FrostFSNodeCantUpdatePersistentState,
zap.String("chain", "side"),
zap.Uint32("block_index", block.Index))
}

View file

@ -145,7 +145,7 @@ func initNetmapService(ctx context.Context, c *cfg) {
c.initMorphComponents(ctx)
initNetmapState(c)
initNetmapState(ctx, c)
server := netmapTransportGRPC.New(
netmapService.NewSignService(
@ -175,29 +175,29 @@ func initNetmapService(ctx context.Context, c *cfg) {
}
func addNewEpochNotificationHandlers(c *cfg) {
addNewEpochNotificationHandler(c, func(ev event.Event) {
addNewEpochNotificationHandler(c, func(_ context.Context, ev event.Event) {
c.cfgNetmap.state.setCurrentEpoch(ev.(netmapEvent.NewEpoch).EpochNumber())
})
addNewEpochAsyncNotificationHandler(c, func(ev event.Event) {
addNewEpochAsyncNotificationHandler(c, func(ctx context.Context, ev event.Event) {
e := ev.(netmapEvent.NewEpoch).EpochNumber()
c.updateContractNodeInfo(e)
c.updateContractNodeInfo(ctx, e)
if !c.needBootstrap() || c.cfgNetmap.reBoostrapTurnedOff.Load() { // fixes #470
return
}
if err := c.bootstrap(); err != nil {
c.log.Warn(logs.FrostFSNodeCantSendRebootstrapTx, zap.Error(err))
if err := c.bootstrap(ctx); err != nil {
c.log.Warn(ctx, logs.FrostFSNodeCantSendRebootstrapTx, zap.Error(err))
}
})
if c.cfgMorph.notaryEnabled {
addNewEpochAsyncNotificationHandler(c, func(_ event.Event) {
_, _, err := makeNotaryDeposit(c)
addNewEpochAsyncNotificationHandler(c, func(ctx context.Context, _ event.Event) {
_, _, err := makeNotaryDeposit(ctx, c)
if err != nil {
c.log.Error(logs.FrostFSNodeCouldNotMakeNotaryDeposit,
c.log.Error(ctx, logs.FrostFSNodeCouldNotMakeNotaryDeposit,
zap.String("error", err.Error()),
)
}
@ -207,13 +207,13 @@ func addNewEpochNotificationHandlers(c *cfg) {
// bootstrapNode adds current node to the Network map.
// Must be called after initNetmapService.
func bootstrapNode(c *cfg) {
func bootstrapNode(ctx context.Context, c *cfg) {
if c.needBootstrap() {
if c.IsMaintenance() {
c.log.Info(logs.FrostFSNodeNodeIsUnderMaintenanceSkipInitialBootstrap)
c.log.Info(ctx, logs.FrostFSNodeNodeIsUnderMaintenanceSkipInitialBootstrap)
return
}
err := c.bootstrap()
err := c.bootstrap(ctx)
fatalOnErrDetails("bootstrap error", err)
}
}
@ -240,17 +240,17 @@ func setNetmapNotificationParser(c *cfg, sTyp string, p event.NotificationParser
// initNetmapState inits current Network map state.
// Must be called after Morph components initialization.
func initNetmapState(c *cfg) {
func initNetmapState(ctx context.Context, c *cfg) {
epoch, err := c.cfgNetmap.wrapper.Epoch()
fatalOnErrDetails("could not initialize current epoch number", err)
var ni *netmapSDK.NodeInfo
ni, err = c.netmapInitLocalNodeState(epoch)
ni, err = c.netmapInitLocalNodeState(ctx, epoch)
fatalOnErrDetails("could not init network state", err)
stateWord := nodeState(ni)
c.log.Info(logs.FrostFSNodeInitialNetworkState,
c.log.Info(ctx, logs.FrostFSNodeInitialNetworkState,
zap.Uint64("epoch", epoch),
zap.String("state", stateWord),
)
@ -279,7 +279,7 @@ func nodeState(ni *netmapSDK.NodeInfo) string {
return "undefined"
}
func (c *cfg) netmapInitLocalNodeState(epoch uint64) (*netmapSDK.NodeInfo, error) {
func (c *cfg) netmapInitLocalNodeState(ctx context.Context, epoch uint64) (*netmapSDK.NodeInfo, error) {
nmNodes, err := c.cfgNetmap.wrapper.GetCandidates()
if err != nil {
return nil, err
@ -307,7 +307,7 @@ func (c *cfg) netmapInitLocalNodeState(epoch uint64) (*netmapSDK.NodeInfo, error
if nmState != candidateState {
// This happens when the node was switched to maintenance without epoch tick.
// We expect it to continue staying in maintenance.
c.log.Info(logs.CandidateStatusPriority,
c.log.Info(ctx, logs.CandidateStatusPriority,
zap.String("netmap", nmState),
zap.String("candidate", candidateState))
}
@ -353,16 +353,16 @@ func addNewEpochAsyncNotificationHandler(c *cfg, h event.Handler) {
var errRelayBootstrap = errors.New("setting netmap status is forbidden in relay mode")
func (c *cfg) SetNetmapStatus(st control.NetmapStatus) error {
func (c *cfg) SetNetmapStatus(ctx context.Context, st control.NetmapStatus) error {
switch st {
default:
return fmt.Errorf("unsupported status %v", st)
case control.NetmapStatus_MAINTENANCE:
return c.setMaintenanceStatus(false)
return c.setMaintenanceStatus(ctx, false)
case control.NetmapStatus_ONLINE, control.NetmapStatus_OFFLINE:
}
c.stopMaintenance()
c.stopMaintenance(ctx)
if !c.needBootstrap() {
return errRelayBootstrap
@ -370,12 +370,12 @@ func (c *cfg) SetNetmapStatus(st control.NetmapStatus) error {
if st == control.NetmapStatus_ONLINE {
c.cfgNetmap.reBoostrapTurnedOff.Store(false)
return bootstrapOnline(c)
return bootstrapOnline(ctx, c)
}
c.cfgNetmap.reBoostrapTurnedOff.Store(true)
return c.updateNetMapState(func(*nmClient.UpdatePeerPrm) {})
return c.updateNetMapState(ctx, func(*nmClient.UpdatePeerPrm) {})
}
func (c *cfg) GetNetmapStatus() (control.NetmapStatus, uint64, error) {
@ -387,11 +387,11 @@ func (c *cfg) GetNetmapStatus() (control.NetmapStatus, uint64, error) {
return st, epoch, nil
}
func (c *cfg) ForceMaintenance() error {
return c.setMaintenanceStatus(true)
func (c *cfg) ForceMaintenance(ctx context.Context) error {
return c.setMaintenanceStatus(ctx, true)
}
func (c *cfg) setMaintenanceStatus(force bool) error {
func (c *cfg) setMaintenanceStatus(ctx context.Context, force bool) error {
netSettings, err := c.cfgNetmap.wrapper.ReadNetworkConfiguration()
if err != nil {
err = fmt.Errorf("read network settings to check maintenance allowance: %w", err)
@ -400,10 +400,10 @@ func (c *cfg) setMaintenanceStatus(force bool) error {
}
if err == nil || force {
c.startMaintenance()
c.startMaintenance(ctx)
if err == nil {
err = c.updateNetMapState((*nmClient.UpdatePeerPrm).SetMaintenance)
err = c.updateNetMapState(ctx, (*nmClient.UpdatePeerPrm).SetMaintenance)
}
if err != nil {
@ -416,13 +416,16 @@ func (c *cfg) setMaintenanceStatus(force bool) error {
// calls UpdatePeerState operation of Netmap contract's client for the local node.
// State setter is used to specify node state to switch to.
func (c *cfg) updateNetMapState(stateSetter func(*nmClient.UpdatePeerPrm)) error {
func (c *cfg) updateNetMapState(ctx context.Context, stateSetter func(*nmClient.UpdatePeerPrm)) error {
var prm nmClient.UpdatePeerPrm
prm.SetKey(c.key.PublicKey().Bytes())
stateSetter(&prm)
_, err := c.cfgNetmap.wrapper.UpdatePeerState(prm)
return err
res, err := c.cfgNetmap.wrapper.UpdatePeerState(ctx, prm)
if err != nil {
return err
}
return c.cfgNetmap.wrapper.Morph().WaitTxHalt(ctx, res)
}
type netInfo struct {

View file

@ -58,7 +58,7 @@ type objectSvc struct {
func (c *cfg) MaxObjectSize() uint64 {
sz, err := c.cfgNetmap.wrapper.MaxObjectSize()
if err != nil {
c.log.Error(logs.FrostFSNodeCouldNotGetMaxObjectSizeValue,
c.log.Error(context.Background(), logs.FrostFSNodeCouldNotGetMaxObjectSizeValue,
zap.String("error", err.Error()),
)
}
@ -66,11 +66,11 @@ func (c *cfg) MaxObjectSize() uint64 {
return sz
}
func (s *objectSvc) Put() (objectService.PutObjectStream, error) {
func (s *objectSvc) Put(_ context.Context) (objectService.PutObjectStream, error) {
return s.put.Put()
}
func (s *objectSvc) Patch() (objectService.PatchObjectStream, error) {
func (s *objectSvc) Patch(_ context.Context) (objectService.PatchObjectStream, error) {
return s.patch.Patch()
}
@ -223,7 +223,7 @@ func initObjectService(c *cfg) {
func addPolicer(c *cfg, keyStorage *util.KeyStorage, clientConstructor *cache.ClientCache) {
if policerconfig.UnsafeDisable(c.appCfg) {
c.log.Warn(logs.FrostFSNodePolicerIsDisabled)
c.log.Warn(context.Background(), logs.FrostFSNodePolicerIsDisabled)
return
}
@ -287,7 +287,7 @@ func addPolicer(c *cfg, keyStorage *util.KeyStorage, clientConstructor *cache.Cl
_, err := ls.Inhume(ctx, inhumePrm)
if err != nil {
c.log.Warn(logs.FrostFSNodeCouldNotInhumeMarkRedundantCopyAsGarbage,
c.log.Warn(ctx, logs.FrostFSNodeCouldNotInhumeMarkRedundantCopyAsGarbage,
zap.String("error", err.Error()),
)
}

View file

@ -1,17 +1,18 @@
package main
import (
"context"
"runtime"
profilerconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/profiler"
httputil "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/http"
)
func initProfilerService(c *cfg) {
func initProfilerService(ctx context.Context, c *cfg) {
tuneProfilers(c)
pprof, _ := pprofComponent(c)
pprof.init(c)
pprof.init(ctx, c)
}
func pprofComponent(c *cfg) (*httpComponent, bool) {

View file

@ -1,6 +1,7 @@
package main
import (
"context"
"os"
"runtime/debug"
@ -9,17 +10,17 @@ import (
"go.uber.org/zap"
)
func setRuntimeParameters(c *cfg) {
func setRuntimeParameters(ctx context.Context, c *cfg) {
if len(os.Getenv("GOMEMLIMIT")) != 0 {
// default limit < yaml limit < app env limit < GOMEMLIMIT
c.log.Warn(logs.RuntimeSoftMemoryDefinedWithGOMEMLIMIT)
c.log.Warn(ctx, logs.RuntimeSoftMemoryDefinedWithGOMEMLIMIT)
return
}
memLimitBytes := runtime.GCMemoryLimitBytes(c.appCfg)
previous := debug.SetMemoryLimit(memLimitBytes)
if memLimitBytes != previous {
c.log.Info(logs.RuntimeSoftMemoryLimitUpdated,
c.log.Info(ctx, logs.RuntimeSoftMemoryLimitUpdated,
zap.Int64("new_value", memLimitBytes),
zap.Int64("old_value", previous))
}

View file

@ -48,7 +48,7 @@ func initSessionService(c *cfg) {
_ = c.privateTokenStore.Close()
})
addNewEpochNotificationHandler(c, func(ev event.Event) {
addNewEpochNotificationHandler(c, func(_ context.Context, ev event.Event) {
c.privateTokenStore.RemoveOld(ev.(netmap.NewEpoch).EpochNumber())
})

View file

@ -13,12 +13,12 @@ import (
func initTracing(ctx context.Context, c *cfg) {
conf, err := tracingconfig.ToTracingConfig(c.appCfg)
if err != nil {
c.log.Error(logs.FrostFSNodeFailedInitTracing, zap.Error(err))
c.log.Error(ctx, logs.FrostFSNodeFailedInitTracing, zap.Error(err))
return
}
_, err = tracing.Setup(ctx, *conf)
if err != nil {
c.log.Error(logs.FrostFSNodeFailedInitTracing, zap.Error(err))
c.log.Error(ctx, logs.FrostFSNodeFailedInitTracing, zap.Error(err))
return
}
@ -29,7 +29,7 @@ func initTracing(ctx context.Context, c *cfg) {
defer cancel()
err := tracing.Shutdown(ctx) // cfg context cancels before close
if err != nil {
c.log.Error(logs.FrostFSNodeFailedShutdownTracing, zap.Error(err))
c.log.Error(ctx, logs.FrostFSNodeFailedShutdownTracing, zap.Error(err))
}
},
})

View file

@ -44,7 +44,7 @@ func (c cnrSource) List() ([]cid.ID, error) {
func initTreeService(c *cfg) {
treeConfig := treeconfig.Tree(c.appCfg)
if !treeConfig.Enabled() {
c.log.Info(logs.FrostFSNodeTreeServiceIsNotEnabledSkipInitialization)
c.log.Info(context.Background(), logs.FrostFSNodeTreeServiceIsNotEnabledSkipInitialization)
return
}
@ -80,10 +80,10 @@ func initTreeService(c *cfg) {
}))
if d := treeConfig.SyncInterval(); d == 0 {
addNewEpochNotificationHandler(c, func(_ event.Event) {
addNewEpochNotificationHandler(c, func(ctx context.Context, _ event.Event) {
err := c.treeService.SynchronizeAll()
if err != nil {
c.log.Error(logs.FrostFSNodeCouldNotSynchronizeTreeService, zap.Error(err))
c.log.Error(ctx, logs.FrostFSNodeCouldNotSynchronizeTreeService, zap.Error(err))
}
})
} else {
@ -94,7 +94,7 @@ func initTreeService(c *cfg) {
for range tick.C {
err := c.treeService.SynchronizeAll()
if err != nil {
c.log.Error(logs.FrostFSNodeCouldNotSynchronizeTreeService, zap.Error(err))
c.log.Error(context.Background(), logs.FrostFSNodeCouldNotSynchronizeTreeService, zap.Error(err))
if errors.Is(err, tree.ErrShuttingDown) {
return
}
@ -103,15 +103,15 @@ func initTreeService(c *cfg) {
}()
}
subscribeToContainerRemoval(c, func(e event.Event) {
subscribeToContainerRemoval(c, func(ctx context.Context, e event.Event) {
ev := e.(containerEvent.DeleteSuccess)
// This is executed asynchronously, so we don't care about the operation taking some time.
c.log.Debug(logs.FrostFSNodeRemovingAllTreesForContainer, zap.Stringer("cid", ev.ID))
err := c.treeService.DropTree(context.Background(), ev.ID, "")
c.log.Debug(ctx, logs.FrostFSNodeRemovingAllTreesForContainer, zap.Stringer("cid", ev.ID))
err := c.treeService.DropTree(ctx, ev.ID, "")
if err != nil && !errors.Is(err, pilorama.ErrTreeNotFound) {
// Ignore pilorama.ErrTreeNotFound but other errors, including shard.ErrReadOnly, should be logged.
c.log.Error(logs.FrostFSNodeContainerRemovalEventReceivedButTreesWerentRemoved,
c.log.Error(ctx, logs.FrostFSNodeContainerRemovalEventReceivedButTreesWerentRemoved,
zap.Stringer("cid", ev.ID),
zap.String("error", err.Error()))
}

View file

@ -0,0 +1,167 @@
package ape
import (
"encoding/hex"
"errors"
"fmt"
"strconv"
"strings"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
apeutil "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/ape"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
apechain "git.frostfs.info/TrueCloudLab/policy-engine/pkg/chain"
"git.frostfs.info/TrueCloudLab/policy-engine/pkg/engine"
"github.com/nspcc-dev/neo-go/cli/input"
"github.com/spf13/cobra"
)
const (
defaultNamespace = "root"
namespaceTarget = "namespace"
containerTarget = "container"
userTarget = "user"
groupTarget = "group"
Ingress = "ingress"
S3 = "s3"
)
var mChainName = map[string]apechain.Name{
Ingress: apechain.Ingress,
S3: apechain.S3,
}
var (
errSettingDefaultValueWasDeclined = errors.New("setting default value was declined")
errUnknownTargetType = errors.New("unknown target type")
errUnsupportedChainName = errors.New("unsupported chain name")
)
// PrintHumanReadableAPEChain print APE chain rules.
func PrintHumanReadableAPEChain(cmd *cobra.Command, chain *apechain.Chain) {
cmd.Println("Chain ID: " + string(chain.ID))
cmd.Printf(" HEX: %x\n", chain.ID)
cmd.Println("Rules:")
for _, rule := range chain.Rules {
cmd.Println("\n\tStatus: " + rule.Status.String())
cmd.Println("\tAny: " + strconv.FormatBool(rule.Any))
cmd.Println("\tConditions:")
for _, c := range rule.Condition {
var ot string
switch c.Kind {
case apechain.KindResource:
ot = "Resource"
case apechain.KindRequest:
ot = "Request"
default:
panic("unknown object type")
}
cmd.Println(fmt.Sprintf("\t\t%s %s %s %s", ot, c.Key, c.Op, c.Value))
}
cmd.Println("\tActions:\tInverted:" + strconv.FormatBool(rule.Actions.Inverted))
for _, name := range rule.Actions.Names {
cmd.Println("\t\t" + name)
}
cmd.Println("\tResources:\tInverted:" + strconv.FormatBool(rule.Resources.Inverted))
for _, name := range rule.Resources.Names {
cmd.Println("\t\t" + name)
}
}
}
// ParseTarget handles target parsing of an APE chain.
func ParseTarget(cmd *cobra.Command) engine.Target {
typ := ParseTargetType(cmd)
name, _ := cmd.Flags().GetString(TargetNameFlag)
switch typ {
case engine.Namespace:
if name == "" {
ln, err := input.ReadLine(fmt.Sprintf("Target name is not set. Confirm to use %s namespace (n|Y)> ", defaultNamespace))
commonCmd.ExitOnErr(cmd, "read line error: %w", err)
ln = strings.ToLower(ln)
if len(ln) > 0 && (ln[0] == 'n') {
commonCmd.ExitOnErr(cmd, "read namespace error: %w", errSettingDefaultValueWasDeclined)
}
name = defaultNamespace
}
return engine.NamespaceTarget(name)
case engine.Container:
var cnr cid.ID
commonCmd.ExitOnErr(cmd, "can't decode container ID: %w", cnr.DecodeString(name))
return engine.ContainerTarget(name)
case engine.User:
return engine.UserTarget(name)
case engine.Group:
return engine.GroupTarget(name)
default:
commonCmd.ExitOnErr(cmd, "read target type error: %w", errUnknownTargetType)
}
panic("unreachable")
}
// ParseTargetType handles target type parsing of an APE chain.
func ParseTargetType(cmd *cobra.Command) engine.TargetType {
typ, _ := cmd.Flags().GetString(TargetTypeFlag)
switch typ {
case namespaceTarget:
return engine.Namespace
case containerTarget:
return engine.Container
case userTarget:
return engine.User
case groupTarget:
return engine.Group
default:
commonCmd.ExitOnErr(cmd, "parse target type error: %w", errUnknownTargetType)
}
panic("unreachable")
}
// ParseChainID handles the parsing of APE-chain identifier.
// For some subcommands, chain ID is optional as an input parameter and should be generated by
// the service instead.
func ParseChainID(cmd *cobra.Command) (id apechain.ID) {
chainID, _ := cmd.Flags().GetString(ChainIDFlag)
id = apechain.ID(chainID)
hexEncoded, _ := cmd.Flags().GetBool(ChainIDHexFlag)
if !hexEncoded {
return
}
chainIDRaw, err := hex.DecodeString(chainID)
commonCmd.ExitOnErr(cmd, "can't decode chain ID as hex: %w", err)
id = apechain.ID(chainIDRaw)
return
}
// ParseChain parses an APE chain which can be provided either as a rule statement
// or loaded from a binary/JSON file path.
func ParseChain(cmd *cobra.Command) *apechain.Chain {
chain := new(apechain.Chain)
chain.ID = ParseChainID(cmd)
if rules, _ := cmd.Flags().GetStringArray(RuleFlag); len(rules) > 0 {
commonCmd.ExitOnErr(cmd, "parser error: %w", apeutil.ParseAPEChain(chain, rules))
} else if encPath, _ := cmd.Flags().GetString(PathFlag); encPath != "" {
commonCmd.ExitOnErr(cmd, "decode binary or json error: %w", apeutil.ParseAPEChainBinaryOrJSON(chain, encPath))
} else {
commonCmd.ExitOnErr(cmd, "parser error", errors.New("rule is not passed"))
}
cmd.Println("Parsed chain:")
PrintHumanReadableAPEChain(cmd, chain)
return chain
}
// ParseChainName parses chain name: the place in the request lifecycle where policy is applied.
func ParseChainName(cmd *cobra.Command) apechain.Name {
chainName, _ := cmd.Flags().GetString(ChainNameFlag)
apeChainName, ok := mChainName[strings.ToLower(chainName)]
if !ok {
commonCmd.ExitOnErr(cmd, "", errUnsupportedChainName)
}
return apeChainName
}

View file

@ -0,0 +1,79 @@
package ape
const (
RuleFlag = "rule"
PathFlag = "path"
PathFlagDesc = "Path to encoded chain in JSON or binary format"
TargetNameFlag = "target-name"
TargetNameFlagDesc = "Resource name in APE resource name format"
TargetTypeFlag = "target-type"
TargetTypeFlagDesc = "Resource type(container/namespace)"
ChainIDFlag = "chain-id"
ChainIDFlagDesc = "Chain id"
ChainIDHexFlag = "chain-id-hex"
ChainIDHexFlagDesc = "Flag to parse chain ID as hex"
ChainNameFlag = "chain-name"
ChainNameFlagDesc = "Chain name(ingress|s3)"
AllFlag = "all"
)
const RuleFlagDesc = `Defines an Access Policy Engine (APE) rule in the format:
<status>[:status_detail] <action>... <condition>... <resource>...
Status:
- allow Permits specified actions
- deny Prohibits specified actions
- deny:QuotaLimitReached Denies access due to quota limits
Actions:
Object operations:
- Object.Put, Object.Get, etc.
- Object.* (all object operations)
Container operations:
- Container.Put, Container.Get, etc.
- Container.* (all container operations)
Conditions:
ResourceCondition:
Format: ResourceCondition:"key"=value, "key"!=value
Reserved properties (use '\' before '$'):
- $Object:version
- $Object:objectID
- $Object:containerID
- $Object:ownerID
- $Object:creationEpoch
- $Object:payloadLength
- $Object:payloadHash
- $Object:objectType
- $Object:homomorphicHash
RequestCondition:
Format: RequestCondition:"key"=value, "key"!=value
Reserved properties (use '\' before '$'):
- $Actor:publicKey
- $Actor:role
Example:
ResourceCondition:"check_key"!="check_value" RequestCondition:"$Actor:role"=others
Resources:
For objects:
- namespace/cid/oid (specific object)
- namespace/cid/* (all objects in container)
- namespace/* (all objects in namespace)
- * (all objects)
- /* (all objects in root namespace)
- /cid/* (all objects in root container)
- /cid/oid (specific object in root container)
For containers:
- namespace/cid (specific container)
- namespace/* (all containers in namespace)
- * (all containers)
- /cid (root container)
- /* (all root containers)
Notes:
- Cannot mix object and container operations in one rule
- Default behavior is Any=false unless 'any' is specified
- Use 'all' keyword to explicitly set Any=false`
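Purely for illustration, a rule assembled from this grammar — combining a status, object actions, the condition examples given above, and the literal namespace/cid/* resource notation — could look like:

```
allow Object.Get Object.Head ResourceCondition:"check_key"!="check_value" RequestCondition:"$Actor:role"=others namespace/cid/*
```

Whether multiple actions and conditions may be mixed exactly this way is determined by the parser in pkg/util/ape; treat this as a sketch of the notation rather than a verified rule.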

View file

@ -26,13 +26,15 @@ func ExitOnErr(cmd *cobra.Command, errFmt string, err error) {
_ = iota
internal
aclDenied
apemanagerDenied
)
var (
code int
internalErr = new(sdkstatus.ServerInternal)
accessErr = new(sdkstatus.ObjectAccessDenied)
internalErr = new(sdkstatus.ServerInternal)
accessErr = new(sdkstatus.ObjectAccessDenied)
apemanagerErr = new(sdkstatus.APEManagerAccessDenied)
)
switch {
@ -41,6 +43,9 @@ func ExitOnErr(cmd *cobra.Command, errFmt string, err error) {
case errors.As(err, &accessErr):
code = aclDenied
err = fmt.Errorf("%w: %s", err, accessErr.Reason())
case errors.As(err, &apemanagerErr):
code = apemanagerDenied
err = fmt.Errorf("%w: %s", err, apemanagerErr.Reason())
default:
code = internal
}

View file

@ -203,6 +203,10 @@ FROSTFS_TRACING_ENABLED=true
FROSTFS_TRACING_ENDPOINT="localhost"
FROSTFS_TRACING_EXPORTER="otlp_grpc"
FROSTFS_TRACING_TRUSTED_CA=""
FROSTFS_TRACING_ATTRIBUTES_0_KEY=key0
FROSTFS_TRACING_ATTRIBUTES_0_VALUE=value
FROSTFS_TRACING_ATTRIBUTES_1_KEY=key1
FROSTFS_TRACING_ATTRIBUTES_1_VALUE=value
FROSTFS_RUNTIME_SOFT_MEMORY_LIMIT=1073741824

View file

@ -259,9 +259,19 @@
},
"tracing": {
"enabled": true,
"endpoint": "localhost:9090",
"endpoint": "localhost",
"exporter": "otlp_grpc",
"trusted_ca": "/etc/ssl/tracing.pem"
"trusted_ca": "",
"attributes":[
{
"key": "key0",
"value": "value"
},
{
"key": "key1",
"value": "value"
}
]
},
"runtime": {
"soft_memory_limit": 1073741824

View file

@ -239,6 +239,11 @@ tracing:
exporter: "otlp_grpc"
endpoint: "localhost"
trusted_ca: ""
attributes:
- key: key0
value: value
- key: key1
value: value
runtime:
soft_memory_limit: 1gb

View file

@ -78,7 +78,12 @@
"FROSTFS_STORAGE_SHARD_1_PILORAMA_PATH":"${workspaceFolder}/.cache/storage/s1/pilorama1",
"FROSTFS_PROMETHEUS_ENABLED":"true",
"FROSTFS_PROMETHEUS_ADDRESS":"127.0.0.1:9090",
"FROSTFS_PROMETHEUS_SHUTDOWN_TIMEOUT":"15s"
"FROSTFS_PROMETHEUS_SHUTDOWN_TIMEOUT":"15s",
"FROSTFS_TRACING_ENABLED":"true",
"FROSTFS_TRACING_EXPORTER":"otlp_grpc",
"FROSTFS_TRACING_ENDPOINT":"127.0.0.1:4317",
"FROSTFS_TRACING_ATTRIBUTES_0_KEY":"host.ip",
"FROSTFS_TRACING_ATTRIBUTES_0_VALUE":"127.0.0.1:8080"
},
"postDebugTask": "env-down"
},
@ -129,7 +134,12 @@
"FROSTFS_STORAGE_SHARD_1_PILORAMA_PATH":"${workspaceFolder}/.cache/storage/s2/pilorama1",
"FROSTFS_PROMETHEUS_ENABLED":"true",
"FROSTFS_PROMETHEUS_ADDRESS":"127.0.0.1:9091",
"FROSTFS_PROMETHEUS_SHUTDOWN_TIMEOUT":"15s"
"FROSTFS_PROMETHEUS_SHUTDOWN_TIMEOUT":"15s",
"FROSTFS_TRACING_ENABLED":"true",
"FROSTFS_TRACING_EXPORTER":"otlp_grpc",
"FROSTFS_TRACING_ENDPOINT":"127.0.0.1:4317",
"FROSTFS_TRACING_ATTRIBUTES_0_KEY":"host.ip",
"FROSTFS_TRACING_ATTRIBUTES_0_VALUE":"127.0.0.1:8082"
},
"postDebugTask": "env-down"
},
@ -180,7 +190,12 @@
"FROSTFS_STORAGE_SHARD_1_PILORAMA_PATH":"${workspaceFolder}/.cache/storage/s3/pilorama1",
"FROSTFS_PROMETHEUS_ENABLED":"true",
"FROSTFS_PROMETHEUS_ADDRESS":"127.0.0.1:9092",
"FROSTFS_PROMETHEUS_SHUTDOWN_TIMEOUT":"15s"
"FROSTFS_PROMETHEUS_SHUTDOWN_TIMEOUT":"15s",
"FROSTFS_TRACING_ENABLED":"true",
"FROSTFS_TRACING_EXPORTER":"otlp_grpc",
"FROSTFS_TRACING_ENDPOINT":"127.0.0.1:4317",
"FROSTFS_TRACING_ATTRIBUTES_0_KEY":"host.ip",
"FROSTFS_TRACING_ATTRIBUTES_0_VALUE":"127.0.0.1:8084"
},
"postDebugTask": "env-down"
},
@ -231,7 +246,12 @@
"FROSTFS_STORAGE_SHARD_1_PILORAMA_PATH":"${workspaceFolder}/.cache/storage/s4/pilorama1",
"FROSTFS_PROMETHEUS_ENABLED":"true",
"FROSTFS_PROMETHEUS_ADDRESS":"127.0.0.1:9093",
"FROSTFS_PROMETHEUS_SHUTDOWN_TIMEOUT":"15s"
"FROSTFS_PROMETHEUS_SHUTDOWN_TIMEOUT":"15s",
"FROSTFS_TRACING_ENABLED":"true",
"FROSTFS_TRACING_EXPORTER":"otlp_grpc",
"FROSTFS_TRACING_ENDPOINT":"127.0.0.1:4317",
"FROSTFS_TRACING_ATTRIBUTES_0_KEY":"host.ip",
"FROSTFS_TRACING_ATTRIBUTES_0_VALUE":"127.0.0.1:8086"
},
"postDebugTask": "env-down"
}

View file

@ -14,3 +14,15 @@ services:
- ./neo-go/node-wallet.json:/wallets/node-wallet.json
- ./neo-go/config.yml:/wallets/config.yml
- ./neo-go/wallet.json:/wallets/wallet.json
jaeger:
image: jaegertracing/all-in-one:latest
container_name: jaeger
ports:
- '4317:4317' #OTLP over gRPC
- '4318:4318' #OTLP over HTTP
- '16686:16686' #frontend
stop_signal: SIGKILL
environment:
- COLLECTOR_OTLP_ENABLED=true
- SPAN_STORAGE_TYPE=badger
- BADGER_EPHEMERAL=true
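For local debugging, this Jaeger container pairs with the tracing variables added to the launch configurations above — the OTLP gRPC port 4317 published here matches the endpoint they use, and collected traces are then browsable via the frontend on port 16686:

```
FROSTFS_TRACING_ENABLED=true
FROSTFS_TRACING_EXPORTER=otlp_grpc
FROSTFS_TRACING_ENDPOINT=127.0.0.1:4317
```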

go.mod (4 changed lines)
View file

@ -4,10 +4,10 @@ go 1.22
require (
code.gitea.io/sdk/gitea v0.17.1
git.frostfs.info/TrueCloudLab/frostfs-contract v0.20.0
git.frostfs.info/TrueCloudLab/frostfs-contract v0.21.0-rc.4
git.frostfs.info/TrueCloudLab/frostfs-crypto v0.6.0
git.frostfs.info/TrueCloudLab/frostfs-locode-db v0.4.1-0.20240710074952-65761deb5c0d
git.frostfs.info/TrueCloudLab/frostfs-observability v0.0.0-20240909114314-666d326cc573
git.frostfs.info/TrueCloudLab/frostfs-observability v0.0.0-20241112082307-f17779933e88
git.frostfs.info/TrueCloudLab/frostfs-sdk-go v0.0.0-20241107121119-cb813e27a823
git.frostfs.info/TrueCloudLab/hrw v1.2.1
git.frostfs.info/TrueCloudLab/multinet v0.0.0-20241015075604-6cb0d80e0972

go.sum (BIN)

Binary file not shown.

View file

@ -1,6 +1,8 @@
package audit
import (
"context"
crypto "git.frostfs.info/TrueCloudLab/frostfs-crypto"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/logger"
@ -17,15 +19,15 @@ type Target interface {
String() string
}
func LogRequest(log *logger.Logger, operation string, req Request, target Target, status bool) {
func LogRequest(ctx context.Context, log *logger.Logger, operation string, req Request, target Target, status bool) {
var key []byte
if req != nil {
key = req.GetVerificationHeader().GetBodySignature().GetKey()
}
LogRequestWithKey(log, operation, key, target, status)
LogRequestWithKey(ctx, log, operation, key, target, status)
}
func LogRequestWithKey(log *logger.Logger, operation string, key []byte, target Target, status bool) {
func LogRequestWithKey(ctx context.Context, log *logger.Logger, operation string, key []byte, target Target, status bool) {
object, subject := NotDefined, NotDefined
publicKey := crypto.UnmarshalPublicKey(key)
@ -37,7 +39,7 @@ func LogRequestWithKey(log *logger.Logger, operation string, key []byte, target
object = target.String()
}
log.Info(logs.AuditEventLogRecord,
log.Info(ctx, logs.AuditEventLogRecord,
zap.String("operation", operation),
zap.String("object", object),
zap.String("subject", subject),

View file

@ -117,7 +117,7 @@ func (v *FormatValidator) Validate(ctx context.Context, obj *objectSDK.Object, u
}
if !unprepared {
if err := v.validateSignatureKey(obj); err != nil {
if err := v.validateSignatureKey(ctx, obj); err != nil {
return fmt.Errorf("(%T) could not validate signature key: %w", v, err)
}
@ -134,7 +134,7 @@ func (v *FormatValidator) Validate(ctx context.Context, obj *objectSDK.Object, u
return nil
}
func (v *FormatValidator) validateSignatureKey(obj *objectSDK.Object) error {
func (v *FormatValidator) validateSignatureKey(ctx context.Context, obj *objectSDK.Object) error {
sig := obj.Signature()
if sig == nil {
return errMissingSignature
@ -156,7 +156,7 @@ func (v *FormatValidator) validateSignatureKey(obj *objectSDK.Object) error {
ownerID := obj.OwnerID()
if token == nil && obj.ECHeader() != nil {
role, err := v.isIROrContainerNode(obj, binKey)
role, err := v.isIROrContainerNode(ctx, obj, binKey)
if err != nil {
return err
}
@ -172,7 +172,7 @@ func (v *FormatValidator) validateSignatureKey(obj *objectSDK.Object) error {
}
if v.verifyTokenIssuer {
role, err := v.isIROrContainerNode(obj, binKey)
role, err := v.isIROrContainerNode(ctx, obj, binKey)
if err != nil {
return err
}
@ -190,7 +190,7 @@ func (v *FormatValidator) validateSignatureKey(obj *objectSDK.Object) error {
return nil
}
func (v *FormatValidator) isIROrContainerNode(obj *objectSDK.Object, signerKey []byte) (acl.Role, error) {
func (v *FormatValidator) isIROrContainerNode(ctx context.Context, obj *objectSDK.Object, signerKey []byte) (acl.Role, error) {
cnrID, containerIDSet := obj.ContainerID()
if !containerIDSet {
return acl.RoleOthers, errNilCID
@ -204,7 +204,7 @@ func (v *FormatValidator) isIROrContainerNode(obj *objectSDK.Object, signerKey [
return acl.RoleOthers, fmt.Errorf("failed to get container (id=%s): %w", cnrID.EncodeToString(), err)
}
res, err := v.senderClassifier.IsInnerRingOrContainerNode(signerKey, cnrID, cnr.Value)
res, err := v.senderClassifier.IsInnerRingOrContainerNode(ctx, signerKey, cnrID, cnr.Value)
if err != nil {
return acl.RoleOthers, err
}

View file

@ -65,7 +65,7 @@ func TestFormatValidator_Validate(t *testing.T) {
epoch: curEpoch,
}),
WithLockSource(ls),
WithLogger(&logger.Logger{Logger: zaptest.NewLogger(t)}),
WithLogger(logger.NewLoggerWrapper(zaptest.NewLogger(t))),
)
ownerKey, err := keys.NewPrivateKey()
@ -290,7 +290,7 @@ func TestFormatValidator_ValidateTokenIssuer(t *testing.T) {
}),
WithLockSource(ls),
WithVerifySessionTokenIssuer(false),
WithLogger(&logger.Logger{Logger: zaptest.NewLogger(t)}),
WithLogger(logger.NewLoggerWrapper(zaptest.NewLogger(t))),
)
tok := sessiontest.Object()
@ -339,7 +339,7 @@ func TestFormatValidator_ValidateTokenIssuer(t *testing.T) {
},
},
),
WithLogger(&logger.Logger{Logger: zaptest.NewLogger(t)}),
WithLogger(logger.NewLoggerWrapper(zaptest.NewLogger(t))),
)
tok := sessiontest.Object()
@ -417,7 +417,7 @@ func TestFormatValidator_ValidateTokenIssuer(t *testing.T) {
currentEpoch: curEpoch,
},
),
WithLogger(&logger.Logger{Logger: zaptest.NewLogger(t)}),
WithLogger(logger.NewLoggerWrapper(zaptest.NewLogger(t))),
)
require.NoError(t, v.Validate(context.Background(), obj, false))
@ -491,7 +491,7 @@ func TestFormatValidator_ValidateTokenIssuer(t *testing.T) {
currentEpoch: curEpoch,
},
),
WithLogger(&logger.Logger{Logger: zaptest.NewLogger(t)}),
WithLogger(logger.NewLoggerWrapper(zaptest.NewLogger(t))),
)
require.NoError(t, v.Validate(context.Background(), obj, false))
@ -567,7 +567,7 @@ func TestFormatValidator_ValidateTokenIssuer(t *testing.T) {
currentEpoch: curEpoch,
},
),
WithLogger(&logger.Logger{Logger: zaptest.NewLogger(t)}),
WithLogger(logger.NewLoggerWrapper(zaptest.NewLogger(t))),
)
require.Error(t, v.Validate(context.Background(), obj, false))

View file

@ -2,6 +2,7 @@ package object
import (
"bytes"
"context"
"crypto/sha256"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs"
@ -40,6 +41,7 @@ type ClassifyResult struct {
}
func (c SenderClassifier) Classify(
ctx context.Context,
ownerID *user.ID,
ownerKey *keys.PublicKey,
idCnr cid.ID,
@ -57,14 +59,14 @@ func (c SenderClassifier) Classify(
}, nil
}
return c.IsInnerRingOrContainerNode(ownerKeyInBytes, idCnr, cnr)
return c.IsInnerRingOrContainerNode(ctx, ownerKeyInBytes, idCnr, cnr)
}
func (c SenderClassifier) IsInnerRingOrContainerNode(ownerKeyInBytes []byte, idCnr cid.ID, cnr container.Container) (*ClassifyResult, error) {
func (c SenderClassifier) IsInnerRingOrContainerNode(ctx context.Context, ownerKeyInBytes []byte, idCnr cid.ID, cnr container.Container) (*ClassifyResult, error) {
isInnerRingNode, err := c.isInnerRingKey(ownerKeyInBytes)
if err != nil {
// do not throw error, try best case matching
c.log.Debug(logs.V2CantCheckIfRequestFromInnerRing,
c.log.Debug(ctx, logs.V2CantCheckIfRequestFromInnerRing,
zap.String("error", err.Error()))
} else if isInnerRingNode {
return &ClassifyResult{
@ -81,7 +83,7 @@ func (c SenderClassifier) IsInnerRingOrContainerNode(ownerKeyInBytes []byte, idC
// error might happen if request has `RoleOther` key and placement
// is not possible for previous epoch, so
// do not throw error, try best case matching
c.log.Debug(logs.V2CantCheckIfRequestFromContainerNode,
c.log.Debug(ctx, logs.V2CantCheckIfRequestFromContainerNode,
zap.String("error", err.Error()))
} else if isContainerNode {
return &ClassifyResult{

View file

@ -29,7 +29,7 @@ type (
emitDuration uint32 // in blocks
}
depositor func() (util.Uint256, error)
depositor func(context.Context) (util.Uint256, error)
awaiter func(context.Context, util.Uint256) error
)
@ -66,11 +66,11 @@ func newEpochTimer(args *epochTimerArgs) *timer.BlockTimer {
)
}
func newEmissionTimer(args *emitTimerArgs) *timer.BlockTimer {
func newEmissionTimer(ctx context.Context, args *emitTimerArgs) *timer.BlockTimer {
return timer.NewBlockTimer(
timer.StaticBlockMeter(args.emitDuration),
func() {
args.ap.HandleGasEmission(timerEvent.NewAlphabetEmitTick{})
args.ap.HandleGasEmission(ctx, timerEvent.NewAlphabetEmitTick{})
},
)
}

View file

@ -35,7 +35,7 @@ import (
"google.golang.org/grpc"
)
func (s *Server) initNetmapProcessor(cfg *viper.Viper,
func (s *Server) initNetmapProcessor(ctx context.Context, cfg *viper.Viper,
alphaSync event.Handler,
) error {
locodeValidator, err := s.newLocodeValidator(cfg)
@ -48,10 +48,13 @@ func (s *Server) initNetmapProcessor(cfg *viper.Viper,
var netMapCandidateStateValidator statevalidation.NetMapCandidateValidator
netMapCandidateStateValidator.SetNetworkSettings(netSettings)
poolSize := cfg.GetInt("workers.netmap")
s.log.Debug(ctx, logs.NetmapNetmapWorkerPool, zap.Int("size", poolSize))
s.netmapProcessor, err = netmap.New(&netmap.Params{
Log: s.log,
Metrics: s.irMetrics,
PoolSize: cfg.GetInt("workers.netmap"),
PoolSize: poolSize,
NetmapClient: netmap.NewNetmapClient(s.netmapClient),
EpochTimer: s,
EpochState: s,
@ -97,7 +100,7 @@ func (s *Server) initMainnet(ctx context.Context, cfg *viper.Viper, morphChain *
fromMainChainBlock, err := s.persistate.UInt32(persistateMainChainLastBlockKey)
if err != nil {
fromMainChainBlock = 0
s.log.Warn(logs.InnerringCantGetLastProcessedMainChainBlockNumber, zap.String("error", err.Error()))
s.log.Warn(ctx, logs.InnerringCantGetLastProcessedMainChainBlockNumber, zap.String("error", err.Error()))
}
mainnetChain.from = fromMainChainBlock
@ -137,12 +140,12 @@ func (s *Server) enableNotarySupport() error {
return nil
}
func (s *Server) initNotaryConfig() {
func (s *Server) initNotaryConfig(ctx context.Context) {
s.mainNotaryConfig = notaryConfigs(
!s.withoutMainNet && s.mainnetClient.ProbeNotary(), // if mainnet disabled then notary flag must be disabled too
)
s.log.Info(logs.InnerringNotarySupport,
s.log.Info(ctx, logs.InnerringNotarySupport,
zap.Bool("sidechain_enabled", true),
zap.Bool("mainchain_enabled", !s.mainNotaryConfig.disabled),
)
@ -152,8 +155,8 @@ func (s *Server) createAlphaSync(cfg *viper.Viper, frostfsCli *frostfsClient.Cli
var alphaSync event.Handler
if s.withoutMainNet || cfg.GetBool("governance.disable") {
alphaSync = func(event.Event) {
s.log.Debug(logs.InnerringAlphabetKeysSyncIsDisabled)
alphaSync = func(ctx context.Context, _ event.Event) {
s.log.Debug(ctx, logs.InnerringAlphabetKeysSyncIsDisabled)
}
} else {
// create governance processor
@ -196,16 +199,16 @@ func (s *Server) createIRFetcher() irFetcher {
return irf
}
func (s *Server) initTimers(cfg *viper.Viper) {
func (s *Server) initTimers(ctx context.Context, cfg *viper.Viper) {
s.epochTimer = newEpochTimer(&epochTimerArgs{
newEpochHandlers: s.newEpochTickHandlers(),
newEpochHandlers: s.newEpochTickHandlers(ctx),
epoch: s,
})
s.addBlockTimer(s.epochTimer)
// initialize emission timer
emissionTimer := newEmissionTimer(&emitTimerArgs{
emissionTimer := newEmissionTimer(ctx, &emitTimerArgs{
ap: s.alphabetProcessor,
emitDuration: cfg.GetUint32("timers.emit"),
})
@ -213,18 +216,20 @@ func (s *Server) initTimers(cfg *viper.Viper) {
s.addBlockTimer(emissionTimer)
}
func (s *Server) initAlphabetProcessor(cfg *viper.Viper) error {
func (s *Server) initAlphabetProcessor(ctx context.Context, cfg *viper.Viper) error {
parsedWallets, err := parseWalletAddressesFromStrings(cfg.GetStringSlice("emit.extra_wallets"))
if err != nil {
return err
}
poolSize := cfg.GetInt("workers.alphabet")
s.log.Debug(ctx, logs.AlphabetAlphabetWorkerPool, zap.Int("size", poolSize))
// create alphabet processor
s.alphabetProcessor, err = alphabet.New(&alphabet.Params{
ParsedWallets: parsedWallets,
Log: s.log,
Metrics: s.irMetrics,
PoolSize: cfg.GetInt("workers.alphabet"),
PoolSize: poolSize,
AlphabetContracts: s.contracts.alphabet,
NetmapClient: s.netmapClient,
MorphClient: s.morphClient,
@ -239,12 +244,14 @@ func (s *Server) initAlphabetProcessor(cfg *viper.Viper) error {
return err
}
func (s *Server) initContainerProcessor(cfg *viper.Viper, cnrClient *container.Client, frostfsIDClient *frostfsid.Client) error {
func (s *Server) initContainerProcessor(ctx context.Context, cfg *viper.Viper, cnrClient *container.Client, frostfsIDClient *frostfsid.Client) error {
poolSize := cfg.GetInt("workers.container")
s.log.Debug(ctx, logs.ContainerContainerWorkerPool, zap.Int("size", poolSize))
// container processor
containerProcessor, err := cont.New(&cont.Params{
Log: s.log,
Metrics: s.irMetrics,
PoolSize: cfg.GetInt("workers.container"),
PoolSize: poolSize,
AlphabetState: s,
ContainerClient: cnrClient,
MorphClient: cnrClient.Morph(),
@ -258,12 +265,14 @@ func (s *Server) initContainerProcessor(cfg *viper.Viper, cnrClient *container.C
return bindMorphProcessor(containerProcessor, s)
}
func (s *Server) initBalanceProcessor(cfg *viper.Viper, frostfsCli *frostfsClient.Client) error {
func (s *Server) initBalanceProcessor(ctx context.Context, cfg *viper.Viper, frostfsCli *frostfsClient.Client) error {
poolSize := cfg.GetInt("workers.balance")
s.log.Debug(ctx, logs.BalanceBalanceWorkerPool, zap.Int("size", poolSize))
// create balance processor
balanceProcessor, err := balance.New(&balance.Params{
Log: s.log,
Metrics: s.irMetrics,
PoolSize: cfg.GetInt("workers.balance"),
PoolSize: poolSize,
FrostFSClient: frostfsCli,
BalanceSC: s.contracts.balance,
AlphabetState: s,
@ -276,15 +285,17 @@ func (s *Server) initBalanceProcessor(cfg *viper.Viper, frostfsCli *frostfsClien
return bindMorphProcessor(balanceProcessor, s)
}
func (s *Server) initFrostFSMainnetProcessor(cfg *viper.Viper) error {
func (s *Server) initFrostFSMainnetProcessor(ctx context.Context, cfg *viper.Viper) error {
if s.withoutMainNet {
return nil
}
poolSize := cfg.GetInt("workers.frostfs")
s.log.Debug(ctx, logs.FrostFSFrostfsWorkerPool, zap.Int("size", poolSize))
frostfsProcessor, err := frostfs.New(&frostfs.Params{
Log: s.log,
Metrics: s.irMetrics,
PoolSize: cfg.GetInt("workers.frostfs"),
PoolSize: poolSize,
FrostFSContract: s.contracts.frostfs,
BalanceClient: s.balanceClient,
NetmapClient: s.netmapClient,
@ -304,10 +315,10 @@ func (s *Server) initFrostFSMainnetProcessor(cfg *viper.Viper) error {
return bindMainnetProcessor(frostfsProcessor, s)
}
func (s *Server) initGRPCServer(cfg *viper.Viper, log *logger.Logger, audit *atomic.Bool) error {
func (s *Server) initGRPCServer(ctx context.Context, cfg *viper.Viper, log *logger.Logger, audit *atomic.Bool) error {
controlSvcEndpoint := cfg.GetString("control.grpc.endpoint")
if controlSvcEndpoint == "" {
s.log.Info(logs.InnerringNoControlServerEndpointSpecified)
s.log.Info(ctx, logs.InnerringNoControlServerEndpointSpecified)
return nil
}
@ -403,7 +414,7 @@ func (s *Server) initClientsFromMorph() (*serverMorphClients, error) {
return result, nil
}
func (s *Server) initProcessors(cfg *viper.Viper, morphClients *serverMorphClients) error {
func (s *Server) initProcessors(ctx context.Context, cfg *viper.Viper, morphClients *serverMorphClients) error {
irf := s.createIRFetcher()
s.statusIndex = newInnerRingIndexer(
@ -418,27 +429,27 @@ func (s *Server) initProcessors(cfg *viper.Viper, morphClients *serverMorphClien
return err
}
err = s.initNetmapProcessor(cfg, alphaSync)
err = s.initNetmapProcessor(ctx, cfg, alphaSync)
if err != nil {
return err
}
err = s.initContainerProcessor(cfg, morphClients.CnrClient, morphClients.FrostFSIDClient)
err = s.initContainerProcessor(ctx, cfg, morphClients.CnrClient, morphClients.FrostFSIDClient)
if err != nil {
return err
}
err = s.initBalanceProcessor(cfg, morphClients.FrostFSClient)
err = s.initBalanceProcessor(ctx, cfg, morphClients.FrostFSClient)
if err != nil {
return err
}
err = s.initFrostFSMainnetProcessor(cfg)
err = s.initFrostFSMainnetProcessor(ctx, cfg)
if err != nil {
return err
}
err = s.initAlphabetProcessor(cfg)
err = s.initAlphabetProcessor(ctx, cfg)
return err
}
@ -446,7 +457,7 @@ func (s *Server) initMorph(ctx context.Context, cfg *viper.Viper, errChan chan<-
fromSideChainBlock, err := s.persistate.UInt32(persistateSideChainLastBlockKey)
if err != nil {
fromSideChainBlock = 0
s.log.Warn(logs.InnerringCantGetLastProcessedSideChainBlockNumber, zap.String("error", err.Error()))
s.log.Warn(ctx, logs.InnerringCantGetLastProcessedSideChainBlockNumber, zap.String("error", err.Error()))
}
morphChain := &chainParams{
@ -471,7 +482,7 @@ func (s *Server) initMorph(ctx context.Context, cfg *viper.Viper, errChan chan<-
return nil, err
}
if err := s.morphClient.SetGroupSignerScope(); err != nil {
morphChain.log.Info(logs.InnerringFailedToSetGroupSignerScope, zap.Error(err))
morphChain.log.Info(ctx, logs.InnerringFailedToSetGroupSignerScope, zap.Error(err))
}
return morphChain, nil


@ -140,10 +140,10 @@ var (
// Start runs all event providers.
func (s *Server) Start(ctx context.Context, intError chan<- error) (err error) {
s.setHealthStatus(control.HealthStatus_STARTING)
s.setHealthStatus(ctx, control.HealthStatus_STARTING)
defer func() {
if err == nil {
s.setHealthStatus(control.HealthStatus_READY)
s.setHealthStatus(ctx, control.HealthStatus_READY)
}
}()
@ -152,12 +152,12 @@ func (s *Server) Start(ctx context.Context, intError chan<- error) (err error) {
return err
}
err = s.initConfigFromBlockchain()
err = s.initConfigFromBlockchain(ctx)
if err != nil {
return err
}
if s.IsAlphabet() {
if s.IsAlphabet(ctx) {
err = s.initMainNotary(ctx)
if err != nil {
return err
@ -173,14 +173,14 @@ func (s *Server) Start(ctx context.Context, intError chan<- error) (err error) {
prm.Validators = s.predefinedValidators
// vote for sidechain validator if it is prepared in config
err = s.voteForSidechainValidator(prm)
err = s.voteForSidechainValidator(ctx, prm)
if err != nil {
// we don't stop inner ring execution on this error
s.log.Warn(logs.InnerringCantVoteForPreparedValidators,
s.log.Warn(ctx, logs.InnerringCantVoteForPreparedValidators,
zap.String("error", err.Error()))
}
s.tickInitialExpoch()
s.tickInitialExpoch(ctx)
morphErr := make(chan error)
mainnnetErr := make(chan error)
@ -217,14 +217,14 @@ func (s *Server) Start(ctx context.Context, intError chan<- error) (err error) {
}
func (s *Server) registerMorphNewBlockEventHandler() {
s.morphListener.RegisterBlockHandler(func(b *block.Block) {
s.log.Debug(logs.InnerringNewBlock,
s.morphListener.RegisterBlockHandler(func(ctx context.Context, b *block.Block) {
s.log.Debug(ctx, logs.InnerringNewBlock,
zap.Uint32("index", b.Index),
)
err := s.persistate.SetUInt32(persistateSideChainLastBlockKey, b.Index)
if err != nil {
s.log.Warn(logs.InnerringCantUpdatePersistentState,
s.log.Warn(ctx, logs.InnerringCantUpdatePersistentState,
zap.String("chain", "side"),
zap.Uint32("block_index", b.Index))
}
@ -235,10 +235,10 @@ func (s *Server) registerMorphNewBlockEventHandler() {
func (s *Server) registerMainnetNewBlockEventHandler() {
if !s.withoutMainNet {
s.mainnetListener.RegisterBlockHandler(func(b *block.Block) {
s.mainnetListener.RegisterBlockHandler(func(ctx context.Context, b *block.Block) {
err := s.persistate.SetUInt32(persistateMainChainLastBlockKey, b.Index)
if err != nil {
s.log.Warn(logs.InnerringCantUpdatePersistentState,
s.log.Warn(ctx, logs.InnerringCantUpdatePersistentState,
zap.String("chain", "main"),
zap.Uint32("block_index", b.Index))
}
@ -283,11 +283,11 @@ func (s *Server) initSideNotary(ctx context.Context) error {
)
}
func (s *Server) tickInitialExpoch() {
func (s *Server) tickInitialExpoch(ctx context.Context) {
initialEpochTicker := timer.NewOneTickTimer(
timer.StaticBlockMeter(s.initialEpochTickDelta),
func() {
s.netmapProcessor.HandleNewEpochTick(timerEvent.NewEpochTick{})
s.netmapProcessor.HandleNewEpochTick(ctx, timerEvent.NewEpochTick{})
})
s.addBlockTimer(initialEpochTicker)
}
@ -299,15 +299,15 @@ func (s *Server) startWorkers(ctx context.Context) {
}
// Stop closes all subscription channels.
func (s *Server) Stop() {
s.setHealthStatus(control.HealthStatus_SHUTTING_DOWN)
func (s *Server) Stop(ctx context.Context) {
s.setHealthStatus(ctx, control.HealthStatus_SHUTTING_DOWN)
go s.morphListener.Stop()
go s.mainnetListener.Stop()
for _, c := range s.closers {
if err := c(); err != nil {
s.log.Warn(logs.InnerringCloserError,
s.log.Warn(ctx, logs.InnerringCloserError,
zap.String("error", err.Error()),
)
}
@ -349,7 +349,7 @@ func New(ctx context.Context, log *logger.Logger, cfg *viper.Viper, errChan chan
return nil, err
}
server.setHealthStatus(control.HealthStatus_HEALTH_STATUS_UNDEFINED)
server.setHealthStatus(ctx, control.HealthStatus_HEALTH_STATUS_UNDEFINED)
// parse notary support
server.feeConfig = config.NewFeeConfig(cfg)
@ -376,7 +376,7 @@ func New(ctx context.Context, log *logger.Logger, cfg *viper.Viper, errChan chan
return nil, err
}
server.initNotaryConfig()
server.initNotaryConfig(ctx)
err = server.initContracts(cfg)
if err != nil {
@ -400,14 +400,14 @@ func New(ctx context.Context, log *logger.Logger, cfg *viper.Viper, errChan chan
return nil, err
}
err = server.initProcessors(cfg, morphClients)
err = server.initProcessors(ctx, cfg, morphClients)
if err != nil {
return nil, err
}
server.initTimers(cfg)
server.initTimers(ctx, cfg)
err = server.initGRPCServer(cfg, log, audit)
err = server.initGRPCServer(ctx, cfg, log, audit)
if err != nil {
return nil, err
}
@ -438,7 +438,7 @@ func createListener(ctx context.Context, cli *client.Client, p *chainParams) (ev
}
listener, err := event.NewListener(event.ListenerParams{
Logger: &logger.Logger{Logger: p.log.With(zap.String("chain", p.name))},
Logger: p.log.With(zap.String("chain", p.name)),
Subscriber: sub,
})
if err != nil {
@ -573,7 +573,7 @@ func parseMultinetConfig(cfg *viper.Viper, m metrics.MultinetMetrics) internalNe
return nc
}
func (s *Server) initConfigFromBlockchain() error {
func (s *Server) initConfigFromBlockchain(ctx context.Context) error {
// get current epoch
epoch, err := s.netmapClient.Epoch()
if err != nil {
@ -602,9 +602,9 @@ func (s *Server) initConfigFromBlockchain() error {
return err
}
s.log.Debug(logs.InnerringReadConfigFromBlockchain,
zap.Bool("active", s.IsActive()),
zap.Bool("alphabet", s.IsAlphabet()),
s.log.Debug(ctx, logs.InnerringReadConfigFromBlockchain,
zap.Bool("active", s.IsActive(ctx)),
zap.Bool("alphabet", s.IsAlphabet(ctx)),
zap.Uint64("epoch", epoch),
zap.Uint32("precision", balancePrecision),
zap.Uint32("init_epoch_tick_delta", s.initialEpochTickDelta),
@ -635,17 +635,17 @@ func (s *Server) nextEpochBlockDelta() (uint32, error) {
// onlyAlphabet wrapper around event handler that executes it
// only if inner ring node is alphabet node.
func (s *Server) onlyAlphabetEventHandler(f event.Handler) event.Handler {
return func(ev event.Event) {
if s.IsAlphabet() {
f(ev)
return func(ctx context.Context, ev event.Event) {
if s.IsAlphabet(ctx) {
f(ctx, ev)
}
}
}
func (s *Server) newEpochTickHandlers() []newEpochHandler {
func (s *Server) newEpochTickHandlers(ctx context.Context) []newEpochHandler {
newEpochHandlers := []newEpochHandler{
func() {
s.netmapProcessor.HandleNewEpochTick(timerEvent.NewEpochTick{})
s.netmapProcessor.HandleNewEpochTick(ctx, timerEvent.NewEpochTick{})
},
}


@ -28,38 +28,39 @@ const (
gasDivisor = 2
)
func (s *Server) depositMainNotary() (tx util.Uint256, err error) {
func (s *Server) depositMainNotary(ctx context.Context) (tx util.Uint256, err error) {
depositAmount, err := client.CalculateNotaryDepositAmount(s.mainnetClient, gasMultiplier, gasDivisor)
if err != nil {
return util.Uint256{}, fmt.Errorf("could not calculate main notary deposit amount: %w", err)
}
return s.mainnetClient.DepositNotary(
ctx,
depositAmount,
uint32(s.epochDuration.Load())+notaryExtraBlocks,
)
}
func (s *Server) depositSideNotary() (util.Uint256, error) {
func (s *Server) depositSideNotary(ctx context.Context) (util.Uint256, error) {
depositAmount, err := client.CalculateNotaryDepositAmount(s.morphClient, gasMultiplier, gasDivisor)
if err != nil {
return util.Uint256{}, fmt.Errorf("could not calculate side notary deposit amount: %w", err)
}
tx, _, err := s.morphClient.DepositEndlessNotary(depositAmount)
tx, _, err := s.morphClient.DepositEndlessNotary(ctx, depositAmount)
return tx, err
}
func (s *Server) notaryHandler(_ event.Event) {
func (s *Server) notaryHandler(ctx context.Context, _ event.Event) {
if !s.mainNotaryConfig.disabled {
_, err := s.depositMainNotary()
_, err := s.depositMainNotary(ctx)
if err != nil {
s.log.Error(logs.InnerringCantMakeNotaryDepositInMainChain, zap.Error(err))
s.log.Error(ctx, logs.InnerringCantMakeNotaryDepositInMainChain, zap.Error(err))
}
}
if _, err := s.depositSideNotary(); err != nil {
s.log.Error(logs.InnerringCantMakeNotaryDepositInSideChain, zap.Error(err))
if _, err := s.depositSideNotary(ctx); err != nil {
s.log.Error(ctx, logs.InnerringCantMakeNotaryDepositInSideChain, zap.Error(err))
}
}
@ -72,7 +73,7 @@ func (s *Server) awaitSideNotaryDeposit(ctx context.Context, tx util.Uint256) er
}
func (s *Server) initNotary(ctx context.Context, deposit depositor, await awaiter, msg string) error {
tx, err := deposit()
tx, err := deposit(ctx)
if err != nil {
return err
}
@ -81,11 +82,11 @@ func (s *Server) initNotary(ctx context.Context, deposit depositor, await awaite
// non-error deposit with an empty TX hash means
// that the deposit has already been made; no
// need to wait it.
s.log.Info(logs.InnerringNotaryDepositHasAlreadyBeenMade)
s.log.Info(ctx, logs.InnerringNotaryDepositHasAlreadyBeenMade)
return nil
}
s.log.Info(msg)
s.log.Info(ctx, msg)
return await(ctx, tx)
}


@ -1,6 +1,8 @@
package alphabet
import (
"context"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/innerring/processors"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/innerring/timers"
@ -8,16 +10,16 @@ import (
"go.uber.org/zap"
)
func (ap *Processor) HandleGasEmission(ev event.Event) {
func (ap *Processor) HandleGasEmission(ctx context.Context, ev event.Event) {
_ = ev.(timers.NewAlphabetEmitTick)
ap.log.Info(logs.AlphabetTick, zap.String("type", "alphabet gas emit"))
ap.log.Info(ctx, logs.AlphabetTick, zap.String("type", "alphabet gas emit"))
// send event to the worker pool
err := processors.SubmitEvent(ap.pool, ap.metrics, "alphabet_emit_gas", ap.processEmit)
err := processors.SubmitEvent(ap.pool, ap.metrics, "alphabet_emit_gas", func() bool { return ap.processEmit(ctx) })
if err != nil {
// there system can be moved into controlled degradation stage
ap.log.Warn(logs.AlphabetAlphabetProcessorWorkerPoolDrained,
ap.log.Warn(ctx, logs.AlphabetAlphabetProcessorWorkerPoolDrained,
zap.Int("capacity", ap.pool.Cap()))
}
}


@ -1,11 +1,13 @@
package alphabet_test
import (
"context"
"testing"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/innerring"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/innerring/processors/alphabet"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/innerring/timers"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/logger/test"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/netmap"
"github.com/nspcc-dev/neo-go/pkg/crypto/keys"
@ -60,7 +62,7 @@ func TestProcessorEmitsGasToNetmapAndAlphabet(t *testing.T) {
processor, err := alphabet.New(params)
require.NoError(t, err, "failed to create processor instance")
processor.HandleGasEmission(timers.NewAlphabetEmitTick{})
processor.HandleGasEmission(context.Background(), timers.NewAlphabetEmitTick{})
processor.WaitPoolRunning()
@ -137,7 +139,7 @@ func TestProcessorEmitsGasToNetmapIfNoParsedWallets(t *testing.T) {
processor, err := alphabet.New(params)
require.NoError(t, err, "failed to create processor instance")
processor.HandleGasEmission(timers.NewAlphabetEmitTick{})
processor.HandleGasEmission(context.Background(), timers.NewAlphabetEmitTick{})
processor.WaitPoolRunning()
@ -198,7 +200,7 @@ func TestProcessorDoesntEmitGasIfNoNetmapOrParsedWallets(t *testing.T) {
processor, err := alphabet.New(params)
require.NoError(t, err, "failed to create processor instance")
processor.HandleGasEmission(timers.NewAlphabetEmitTick{})
processor.HandleGasEmission(context.Background(), timers.NewAlphabetEmitTick{})
processor.WaitPoolRunning()
@ -219,7 +221,7 @@ type testIndexer struct {
index int
}
func (i *testIndexer) AlphabetIndex() int {
func (i *testIndexer) AlphabetIndex(context.Context) int {
return i.index
}
@ -246,7 +248,7 @@ type testMorphClient struct {
batchTransferedGas []batchTransferGas
}
func (c *testMorphClient) Invoke(contract util.Uint160, fee fixedn.Fixed8, method string, args ...any) (uint32, error) {
func (c *testMorphClient) Invoke(_ context.Context, contract util.Uint160, fee fixedn.Fixed8, method string, args ...any) (client.InvokeRes, error) {
c.invokedMethods = append(c.invokedMethods,
invokedMethod{
contract: contract,
@ -254,7 +256,7 @@ func (c *testMorphClient) Invoke(contract util.Uint160, fee fixedn.Fixed8, metho
method: method,
args: args,
})
return 0, nil
return client.InvokeRes{}, nil
}
func (c *testMorphClient) TransferGas(receiver util.Uint160, amount fixedn.Fixed8) error {


@ -1,6 +1,7 @@
package alphabet
import (
"context"
"crypto/elliptic"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs"
@ -13,39 +14,39 @@ import (
const emitMethod = "emit"
func (ap *Processor) processEmit() bool {
index := ap.irList.AlphabetIndex()
func (ap *Processor) processEmit(ctx context.Context) bool {
index := ap.irList.AlphabetIndex(ctx)
if index < 0 {
ap.log.Info(logs.AlphabetNonAlphabetModeIgnoreGasEmissionEvent)
ap.log.Info(ctx, logs.AlphabetNonAlphabetModeIgnoreGasEmissionEvent)
return true
}
contract, ok := ap.alphabetContracts.GetByIndex(index)
if !ok {
ap.log.Debug(logs.AlphabetNodeIsOutOfAlphabetRangeIgnoreGasEmissionEvent,
ap.log.Debug(ctx, logs.AlphabetNodeIsOutOfAlphabetRangeIgnoreGasEmissionEvent,
zap.Int("index", index))
return false
}
// there is no signature collecting, so we don't need extra fee
_, err := ap.morphClient.Invoke(contract, 0, emitMethod)
_, err := ap.morphClient.Invoke(ctx, contract, 0, emitMethod)
if err != nil {
ap.log.Warn(logs.AlphabetCantInvokeAlphabetEmitMethod, zap.String("error", err.Error()))
ap.log.Warn(ctx, logs.AlphabetCantInvokeAlphabetEmitMethod, zap.String("error", err.Error()))
return false
}
if ap.storageEmission == 0 {
ap.log.Info(logs.AlphabetStorageNodeEmissionIsOff)
ap.log.Info(ctx, logs.AlphabetStorageNodeEmissionIsOff)
return true
}
networkMap, err := ap.netmapClient.NetMap()
if err != nil {
ap.log.Warn(logs.AlphabetCantGetNetmapSnapshotToEmitGasToStorageNodes,
ap.log.Warn(ctx, logs.AlphabetCantGetNetmapSnapshotToEmitGasToStorageNodes,
zap.String("error", err.Error()))
return false
@ -58,7 +59,7 @@ func (ap *Processor) processEmit() bool {
ap.pwLock.RUnlock()
extraLen := len(pw)
ap.log.Debug(logs.AlphabetGasEmission,
ap.log.Debug(ctx, logs.AlphabetGasEmission,
zap.Int("network_map", nmLen),
zap.Int("extra_wallets", extraLen))
@ -68,20 +69,20 @@ func (ap *Processor) processEmit() bool {
gasPerNode := fixedn.Fixed8(ap.storageEmission / uint64(nmLen+extraLen))
ap.transferGasToNetmapNodes(nmNodes, gasPerNode)
ap.transferGasToNetmapNodes(ctx, nmNodes, gasPerNode)
ap.transferGasToExtraNodes(pw, gasPerNode)
ap.transferGasToExtraNodes(ctx, pw, gasPerNode)
return true
}
func (ap *Processor) transferGasToNetmapNodes(nmNodes []netmap.NodeInfo, gasPerNode fixedn.Fixed8) {
func (ap *Processor) transferGasToNetmapNodes(ctx context.Context, nmNodes []netmap.NodeInfo, gasPerNode fixedn.Fixed8) {
for i := range nmNodes {
keyBytes := nmNodes[i].PublicKey()
key, err := keys.NewPublicKeyFromBytes(keyBytes, elliptic.P256())
if err != nil {
ap.log.Warn(logs.AlphabetCantParseNodePublicKey,
ap.log.Warn(ctx, logs.AlphabetCantParseNodePublicKey,
zap.String("error", err.Error()))
continue
@ -89,7 +90,7 @@ func (ap *Processor) transferGasToNetmapNodes(nmNodes []netmap.NodeInfo, gasPerN
err = ap.morphClient.TransferGas(key.GetScriptHash(), gasPerNode)
if err != nil {
ap.log.Warn(logs.AlphabetCantTransferGas,
ap.log.Warn(ctx, logs.AlphabetCantTransferGas,
zap.String("receiver", key.Address()),
zap.Int64("amount", int64(gasPerNode)),
zap.String("error", err.Error()),
@ -98,7 +99,7 @@ func (ap *Processor) transferGasToNetmapNodes(nmNodes []netmap.NodeInfo, gasPerN
}
}
func (ap *Processor) transferGasToExtraNodes(pw []util.Uint160, gasPerNode fixedn.Fixed8) {
func (ap *Processor) transferGasToExtraNodes(ctx context.Context, pw []util.Uint160, gasPerNode fixedn.Fixed8) {
if len(pw) > 0 {
err := ap.morphClient.BatchTransferGas(pw, gasPerNode)
if err != nil {
@ -106,7 +107,7 @@ func (ap *Processor) transferGasToExtraNodes(pw []util.Uint160, gasPerNode fixed
for i, addr := range pw {
receiversLog[i] = addr.StringLE()
}
ap.log.Warn(logs.AlphabetCantTransferGasToWallet,
ap.log.Warn(ctx, logs.AlphabetCantTransferGasToWallet,
zap.Strings("receivers", receiversLog),
zap.Int64("amount", int64(gasPerNode)),
zap.String("error", err.Error()),


@ -1,26 +1,26 @@
package alphabet
import (
"context"
"errors"
"fmt"
"sync"
"time"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/innerring/metrics"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/event"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/logger"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/netmap"
"github.com/nspcc-dev/neo-go/pkg/encoding/fixedn"
"github.com/nspcc-dev/neo-go/pkg/util"
"github.com/panjf2000/ants/v2"
"go.uber.org/zap"
)
type (
// Indexer is a callback interface for inner ring global state.
Indexer interface {
AlphabetIndex() int
AlphabetIndex(context.Context) int
}
// Contracts is an interface of the storage
@ -40,7 +40,7 @@ type (
}
morphClient interface {
Invoke(contract util.Uint160, fee fixedn.Fixed8, method string, args ...any) (uint32, error)
Invoke(ctx context.Context, contract util.Uint160, fee fixedn.Fixed8, method string, args ...any) (client.InvokeRes, error)
TransferGas(receiver util.Uint160, amount fixedn.Fixed8) error
BatchTransferGas(receivers []util.Uint160, amount fixedn.Fixed8) error
}
@ -85,8 +85,6 @@ func New(p *Params) (*Processor, error) {
return nil, errors.New("ir/alphabet: global state is not set")
}
p.Log.Debug(logs.AlphabetAlphabetWorkerPool, zap.Int("size", p.PoolSize))
pool, err := ants.NewPool(p.PoolSize, ants.WithNonblocking(true))
if err != nil {
return nil, fmt.Errorf("ir/frostfs: can't create worker pool: %w", err)


@ -1,6 +1,7 @@
package balance
import (
"context"
"encoding/hex"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs"
@ -10,20 +11,20 @@ import (
"go.uber.org/zap"
)
func (bp *Processor) handleLock(ev event.Event) {
func (bp *Processor) handleLock(ctx context.Context, ev event.Event) {
lock := ev.(balanceEvent.Lock)
bp.log.Info(logs.Notification,
bp.log.Info(ctx, logs.Notification,
zap.String("type", "lock"),
zap.String("value", hex.EncodeToString(lock.ID())))
// send an event to the worker pool
err := processors.SubmitEvent(bp.pool, bp.metrics, "lock", func() bool {
return bp.processLock(&lock)
return bp.processLock(ctx, &lock)
})
if err != nil {
// there system can be moved into controlled degradation stage
bp.log.Warn(logs.BalanceBalanceWorkerPoolDrained,
bp.log.Warn(ctx, logs.BalanceBalanceWorkerPoolDrained,
zap.Int("capacity", bp.pool.Cap()))
}
}


@ -1,6 +1,7 @@
package balance
import (
"context"
"testing"
"time"
@ -30,7 +31,7 @@ func TestProcessorCallsFrostFSContractForLockEvent(t *testing.T) {
})
require.NoError(t, err, "failed to create processor")
processor.handleLock(balanceEvent.Lock{})
processor.handleLock(context.Background(), balanceEvent.Lock{})
for processor.pool.Running() > 0 {
time.Sleep(10 * time.Millisecond)
@ -56,7 +57,7 @@ func TestProcessorDoesntCallFrostFSContractIfNotAlphabet(t *testing.T) {
})
require.NoError(t, err, "failed to create processor")
processor.handleLock(balanceEvent.Lock{})
processor.handleLock(context.Background(), balanceEvent.Lock{})
for processor.pool.Running() > 0 {
time.Sleep(10 * time.Millisecond)
@ -69,7 +70,7 @@ type testAlphabetState struct {
isAlphabet bool
}
func (s *testAlphabetState) IsAlphabet() bool {
func (s *testAlphabetState) IsAlphabet(context.Context) bool {
return s.isAlphabet
}
@ -83,7 +84,7 @@ type testFrostFSContractClient struct {
chequeCalls int
}
func (c *testFrostFSContractClient) Cheque(p frostfscontract.ChequePrm) error {
func (c *testFrostFSContractClient) Cheque(_ context.Context, p frostfscontract.ChequePrm) error {
c.chequeCalls++
return nil
}


@ -1,6 +1,8 @@
package balance
import (
"context"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs"
frostfsContract "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client/frostfs"
balanceEvent "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/event/balance"
@ -9,9 +11,9 @@ import (
// Process lock event by invoking Cheque method in main net to send assets
// back to the withdraw issuer.
func (bp *Processor) processLock(lock *balanceEvent.Lock) bool {
if !bp.alphabetState.IsAlphabet() {
bp.log.Info(logs.BalanceNonAlphabetModeIgnoreBalanceLock)
func (bp *Processor) processLock(ctx context.Context, lock *balanceEvent.Lock) bool {
if !bp.alphabetState.IsAlphabet(ctx) {
bp.log.Info(ctx, logs.BalanceNonAlphabetModeIgnoreBalanceLock)
return true
}
@ -23,9 +25,9 @@ func (bp *Processor) processLock(lock *balanceEvent.Lock) bool {
prm.SetLock(lock.LockAccount())
prm.SetHash(lock.TxHash())
err := bp.frostfsClient.Cheque(prm)
err := bp.frostfsClient.Cheque(ctx, prm)
if err != nil {
bp.log.Error(logs.BalanceCantSendLockAssetTx, zap.Error(err))
bp.log.Error(ctx, logs.BalanceCantSendLockAssetTx, zap.Error(err))
return false
}

Some files were not shown because too many files have changed in this diff Show more