Compare commits


104 commits

Author SHA1 Message Date
3821645085
[#1555] engine: Refactor (*StorageEngine).GetLocks
Refactored after renaming the method to replace the confusing `locked`
variable with `locks`.

Signed-off-by: Aleksey Savchuk <a.savchuk@yadro.com>
2024-12-11 15:06:38 +03:00
72470d6b48
[#1555] local_object_storage: Rename method GetLocked -> GetLocks
Renamed to better reflect the method's purpose of returning locks
for the specified object.

Signed-off-by: Aleksey Savchuk <a.savchuk@yadro.com>
2024-12-11 15:06:37 +03:00
e9837bbcf9 [#1554] morph/event: Remove unused AlphabetUpdate event
Refs TrueCloudLab/frostfs-contract#138.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-12-11 12:01:17 +00:00
a641c91594 [#1550] Add CODEOWNERS
Signed-off-by: Vitaliy Potyarkin <v.potyarkin@yadro.com>
2024-12-11 10:34:57 +00:00
b1614a284d [#1546] morph/event: Export NotificationHandlerInfo fields
Hiding them achieves nothing, as the struct has no methods and is not
used concurrently.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-12-11 07:39:49 +00:00
d0ce835fbf [#1546] morph/event: Merge notification parser and handlers
They are decoupled, but it is an error to have a handler without a
corresponding parser. Register them together on the code level and get
rid of unreachable code.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-12-11 07:39:49 +00:00
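A minimal sketch of the registration pattern this commit describes (the types and method below are illustrative, not the actual frostfs-node API): the parser and its handler are supplied in one call, so a handler can never be registered without a corresponding parser.

```
package event

// Illustrative types only; the real frostfs-node definitions differ.
type NotificationParser func(raw []byte) (any, error)
type Handler func(parsed any)

// HandlerInfo carries the parser and the handler as one unit.
type HandlerInfo struct {
	Type    string
	Parser  NotificationParser
	Handler Handler
}

type Listener struct {
	parsers  map[string]NotificationParser
	handlers map[string]Handler
}

func NewListener() *Listener {
	return &Listener{
		parsers:  make(map[string]NotificationParser),
		handlers: make(map[string]Handler),
	}
}

// Register stores both entries in a single call, so the maps can never
// disagree and no parser-less handler (unreachable code) can exist.
func (l *Listener) Register(hi HandlerInfo) {
	l.parsers[hi.Type] = hi.Parser
	l.handlers[hi.Type] = hi.Handler
}
```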
dfa51048a8 [#1546] morph/event: Remove "is started" checks from event handler registrar
This code path can hide real bugs.
All initialization functions should run before the init stage.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-12-11 07:39:49 +00:00
670305a721 [#1546] morph/event: Remove nil checks from event handler registrar
This code path can hide real bugs.
We would rather panic than silently fail.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-12-11 07:39:49 +00:00
1f6cf57e30 [#1548] metabase: Check if EC parent is removed or expired
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-12-11 07:26:33 +00:00
386a12eea4 [#1548] engine: Rename parent -> ecParent
Parent could mean split parent or EC parent. In this case it is EC parent only.

Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-12-11 07:26:33 +00:00
15139d80c9 [#1548] policer: Do not replicate EC chunk if object already removed
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-12-11 07:26:33 +00:00
41da27dad5
[#1549] engine: Drop Async flag from evacuation parameters
Evacuation is now always asynchronous.

Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-12-10 17:00:00 +03:00
ac0511d214
[#1549] controlSvc: Drop deprecated EvacuateShard rpc
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-12-10 16:59:52 +03:00
7e542906ef [#1539] go.mod: Bump frostfs-sdk-go version
* Also fix placement unit-test in object manager

Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2024-12-06 15:29:37 +03:00
d1bc4351c3
[#1545] morph/event: Simplify frostfs contract event parsing
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-12-06 14:00:23 +03:00
1c12f23b84 [#1541] morph/event: Simplify netmap contract event parsing
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-12-06 10:26:39 +00:00
a353d45742 [#1541] morph/event: Simplify container contract event parsing
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-12-06 10:26:39 +00:00
d5c46d812a [#1541] go.mod: Update frostfs-contract
New version contains more idiomatic types in the auto-generated code.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-12-06 10:26:39 +00:00
d5d5ce2074 [#1541] morph/event: Simplify balance contract event parsing
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-12-06 10:26:39 +00:00
7df3520d48 [#1540] getSvc: Drop redundant returns
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-12-05 12:39:49 +00:00
5fe78e51d1 [#1540] getSvc: Do not log context canceled errors during EC assemble
These errors are fired when enough chunks have been retrieved and the
error group cancels the remaining requests.

Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-12-05 12:39:49 +00:00
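The pattern at play, as a hedged sketch (`fetchChunk` and `ids` are hypothetical stand-ins, not the real getSvc names): once enough EC chunks have arrived, the errgroup's context is canceled, and the remaining requests fail with `context.Canceled`, which is expected rather than a failure worth logging.

```
package getsvc

import (
	"context"
	"errors"

	"golang.org/x/sync/errgroup"
)

// fetchChunk stands in for one remote EC-chunk request.
func fetchChunk(ctx context.Context, id int) error { return ctx.Err() }

// assemble fetches chunks concurrently; cancellations triggered by an
// early exit of the group are filtered out instead of being logged.
func assemble(ctx context.Context, ids []int) error {
	eg, egCtx := errgroup.WithContext(ctx)
	for _, id := range ids {
		id := id // capture for the goroutine
		eg.Go(func() error {
			err := fetchChunk(egCtx, id)
			if err != nil && !errors.Is(err, context.Canceled) {
				return err // a real failure
			}
			return nil // success or expected cancellation
		})
	}
	return eg.Wait()
}
```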
84b4051b4d
[#1538] morph/container: Make opts struct similar to that of other contracts
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-12-04 15:30:58 +03:00
6a51086030
[#1538] morph/client: Remove TryNotary() option from side-chain contracts
The notary is always enabled, so this option always takes effect.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-12-04 15:30:58 +03:00
5c3b2d95ba
[#1538] node: Assume notary is enabled
Notaryless environments have not been tested for a while.
We use neo-go only, and it has the notary contract enabled.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-12-04 15:30:58 +03:00
2d5d4093be
[#1537] morph: Use (user.ID).ScriptHash() where possible
Pick up changes from TrueCloudLab/frostfs-sdk-go#198.

gopatch:
```
@@
var user expression
@@
-address.StringToUint160(user.EncodeToString())
+user.ScriptHash()
```

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-12-04 13:25:44 +03:00
e3487d5af5 [#1535] morph: Unify test invoke error messages
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-12-04 09:50:20 +00:00
e37dcdf88b [#1535] morph/netmap: Unify error messages for config retrieval
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-12-04 09:50:20 +00:00
6c679d1535 [#1535] morph: Unify client creation error messages
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-12-04 09:50:20 +00:00
281d65435e
[#1450] engine: Group object by shard before Inhume
```
goos: linux
goarch: amd64
pkg: git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/engine
cpu: 12th Gen Intel(R) Core(TM) i5-1235U
                                 │   old.txt    │              new.txt                │
                                 │    sec/op    │   sec/op     vs base                │
InhumeMultipart/objects=1-12        11.42m ± 1%   10.71m ± 0%   -6.27% (p=0.000 n=10)
InhumeMultipart/objects=10-12       113.5m ± 0%   100.9m ± 3%  -11.08% (p=0.000 n=10)
InhumeMultipart/objects=100-12     1135.4m ± 1%   681.3m ± 2%  -40.00% (p=0.000 n=10)
InhumeMultipart/objects=1000-12     11.358 ± 0%    1.089 ± 1%  -90.41% (p=0.000 n=10)
InhumeMultipart/objects=10000-12   113.251 ± 0%    1.645 ± 1%  -98.55% (p=0.000 n=10)
geomean                              1.136        265.5m       -76.63%
```

Signed-off-by: Aleksey Savchuk <a.savchuk@yadro.com>
2024-12-04 10:09:00 +03:00
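The shape of the optimization, as a hedged sketch (the `Address` alias and `shardOf` callback are illustrative, not the engine's real types): group addresses by shard once, then issue one batched call per shard instead of one call per object.

```
package engine

// Address stands in for oid.Address; shardOf for the engine's shard sorting.
type Address = string

// groupByShard buckets object addresses by their target shard, turning
// per-object Inhume calls into one batched call per shard.
func groupByShard(addrs []Address, shardOf func(Address) string) map[string][]Address {
	groups := make(map[string][]Address)
	for _, a := range addrs {
		sh := shardOf(a)
		groups[sh] = append(groups[sh], a)
	}
	return groups
}
```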
b348b20289
[#1450] engine: Add benchmark for Inhume operation
Signed-off-by: Aleksey Savchuk <a.savchuk@yadro.com>
2024-12-04 10:08:34 +03:00
748edd1999
[#1450] engine: Return shard-level error if object is expired on inhume
Since we have errors defined at the shard level, it looks strange that
we check an error against the shard-level error `ErrLockObjectRemoval`
but then return the metabase-level error. Let's return the same
shard-level error instead.

Signed-off-by: Aleksey Savchuk <a.savchuk@yadro.com>
2024-12-04 10:06:57 +03:00
47dfd8840c [#1532] node: Allow to omit metabase.path if shard is disabled
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-12-04 03:30:19 +00:00
432042c534
[#1527] engine: Add tests for handling expired objects on inhume and lock
Currently, it's allowed to inhume or lock an expired object.
Consider the following scenario:

1) A user inhumes or locks an object
2) The object expires
3) GC hasn't yet deleted the object
4) The node loses the associated tombstone or lock
5) Another node replicates the tombstone or lock to the first node

In this case, the second node succeeds, which is the desired behavior.

Signed-off-by: Aleksey Savchuk <a.savchuk@yadro.com>
2024-12-03 12:29:45 +03:00
9cabca9dfe
[#1527] engine/test: Move default metabase options to separate function
Signed-off-by: Aleksey Savchuk <a.savchuk@yadro.com>
2024-12-02 16:13:37 +03:00
60feed3b5f
[#1527] engine/test: Allow to specify current epoch in epochState
Signed-off-by: Aleksey Savchuk <a.savchuk@yadro.com>
2024-12-02 15:37:25 +03:00
635a292ae4 [#1528] cli: Keep order for required nodes in the result of object nodes
Signed-off-by: Anton Nikiforov <an.nikiforov@yadro.com>
2024-12-02 13:58:24 +03:00
edfa3f4825 [#1528] node: Keep order of equal elements when sorting priority metrics
Signed-off-by: Anton Nikiforov <an.nikiforov@yadro.com>
2024-12-02 13:58:19 +03:00
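The distinction in one runnable example: `sort.SliceStable` preserves the input order of elements whose metric values compare equal, which a plain `sort.Slice` does not guarantee (the `node` struct and values here are hypothetical).

```
package main

import (
	"fmt"
	"sort"
)

type node struct {
	name     string
	priority int
}

func main() {
	nodes := []node{{"a", 2}, {"b", 1}, {"c", 1}}
	// Equal priorities keep their input order: "b" stays before "c".
	sort.SliceStable(nodes, func(i, j int) bool {
		return nodes[i].priority < nodes[j].priority
	})
	fmt.Println(nodes) // [{b 1} {c 1} {a 2}]
}
```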
e0ac3a583f [#1523] metabase: Remove (*DB).IterateCoveredByTombstones
Remove this method because it isn't used anywhere since 7799f8e4c.

Signed-off-by: Aleksey Savchuk <a.savchuk@yadro.com>
2024-11-29 10:49:24 +00:00
00c608c05e [#1524] tree: Make check APE error get wrapped to api status
Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2024-11-29 10:48:16 +00:00
bba1892fa1 [#1524] ape: Make APE checker return error without status
Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2024-11-29 10:48:16 +00:00
01acec708f
[#1525] pilorama: Use AppendUint* helpers from stdlib
gopatch:
```
@@
var slice, e expression
@@
+import "encoding/binary"

-append(slice, byte(e), byte(e >> 8))
+binary.LittleEndian.AppendUint16(slice, e)
```

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-28 09:40:20 +03:00
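For reference, the two forms the gopatch rewrites between are equivalent; the stdlib helper exists since Go 1.19:

```
package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	var e uint16 = 0x1234
	// Hand-rolled little-endian append, as in the old pilorama code:
	manual := append([]byte(nil), byte(e), byte(e>>8))
	// The stdlib helper the commit switches to:
	std := binary.LittleEndian.AppendUint16(nil, e)
	fmt.Printf("%x %x\n", manual, std) // 3412 3412
}
```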
aac65001e5 [#1522] adm/frostfsid: Remove unreachable condition
SendConsensusTx() modifies the SentTxs field; if that is not the case,
there is a bug in the code.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-26 08:13:35 +00:00
1170370753 [#1522] adm/helper: Rename createSingleAccounts() -> getSingleAccounts()
It doesn't create any accounts; it merely finds them in the wallet.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-26 08:13:35 +00:00
9e275d44c8 [#1522] adm/helper: Unexport DefaultClientContext()
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-26 08:13:35 +00:00
2469e0c683 [#1522] adm/helper: Remove NewActor() helper
It is used once, only internally, and is a single statement.
I see no justification for keeping it as a separate function.
It introduces confusion, because we also have NewLocalActor().

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-26 08:13:35 +00:00
a6ef4ab524 [#1522] adm/helper: Rename GetN3Client() -> NewRemoteClient()
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-26 08:13:35 +00:00
49959c4166 [#1522] adm/helper: Unexport GetFrostfsIDAdmin()
It is used only in the `helper` package, apart from unit tests.
Move the unit tests to the same package, where they belong.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-26 08:13:35 +00:00
61ee1b5610 [#1522] adm: Simplify LocalClient.SendRawTransaction()
The old code predates the introduction of the Copy() method.
It was also supposed to check errors; however, they are already checked
server-side.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-26 08:13:35 +00:00
b10c954377 [#1522] adm: Split NewLocalClient() into functions
No functional changes.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-26 08:13:35 +00:00
1605391628 [#1522] adm/helper: Simplify Client interface
Just reuse `actor.RPCActor`. No functional changes.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-26 08:13:35 +00:00
b1766e47c7 [#1522] adm/helper: Remove unused GetCommittee() method from the Client interface
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-26 08:13:35 +00:00
caa4253249 [#1522] adm: Remove unnecessary variable declaration
It is better to keep variable scopes small.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-26 08:13:35 +00:00
7eac5fb18b
Release v0.44.0
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-25 14:41:36 +03:00
0e5524dac7 [#1515] adm: Print address in base58 format in morph ape get-admin
Signed-off-by: George Bartolomey <george@bh4.ru>
2024-11-25 10:38:05 +00:00
3ebd560f42 [#1519] cli: Make descriptive help for --rule option
Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2024-11-25 07:21:05 +00:00
1ed7ab75fb [#1517] cli: Print the reason of ape manager error
Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2024-11-22 15:24:32 +03:00
99f9e59de9 [#1514] adm: Remove --alphabet-wallets flag from readonly commands
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-21 14:37:34 +00:00
256f96e252 [#1514] adm/nns: Rename getRPCClient() to nnsWriter()
Make it more specific and similar to nnsReader().

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-21 14:37:34 +00:00
e5ea95c045 [#1514] adm/nns: Do not return hash from getRPCClient()
It was unused and we employ better abstractions now.
gopatch:
```
@@
var a, b expression
@@
-a, b, _ := getRPCClient(...)
+a, b := getRPCClient(...)
```

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-21 14:37:34 +00:00
9073e555db [#1514] adm/nns: Do not create actor for readonly commands
The `nns get-records` and `nns tokens` commands do not need to sign
anything, so remove the useless actor and use the invoker directly.
`NewLocalActor()` is only used in the `ape` and `nns` packages. The `ape`
package seems to use it correctly, only when alphabet wallets are
provided, so no changes there.
Also, remove the --alphabet-wallets flag from commands that do not need it.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-21 14:37:34 +00:00
2771fdb8c7 [#1514] adm/nns: Use nns.GetAllRecords() wrapper
It was not possible previously, because GetAllRecords() was not declared
safe in frostfs-contract.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-21 14:37:34 +00:00
efa4ce00b8 [#1514] go.mod: Update frostfs-contract to v0.21.0-rc.3
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-21 14:37:34 +00:00
f12f04199e [#1516] traverser: Check for placement vector out of range
The placement vector may contain fewer nodes than required by the policy
due to an outage of one of the nodes.

Signed-off-by: Anton Nikiforov <an.nikiforov@yadro.com>
2024-11-21 14:18:55 +03:00
2e2c62147d
[#1513] adm: Move ProtoConfigPath from constants to commonflags package
Refs #932

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-21 09:39:06 +03:00
6ae8667fb4
[#1509] .forgejo: Run actions on push to master
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-20 11:42:12 +03:00
49a4e727fd [#1507] timer/test: Use const for constants
Make it easy to see what the test is about.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-20 08:36:25 +00:00
2e974f734c [#1507] timer/test: Improve test coverage
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-20 08:36:25 +00:00
3042490340 [#1507] timer: Remove unused OnDelta() method
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-20 08:36:25 +00:00
a339b52a60 [#1501] adm: Refactor APE-chains managing subcommands
* Use `cmd/internal/common/ape` parser commands within `ape`
  subcommands
* Use flag names from `cmd/internal/common/ape`

Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2024-11-20 07:58:32 +00:00
4ab4ed6f96 [#1501] cli: Refactor bearer subcommand
* Use `cmd/internal/common/ape` parser commands within `generate-ape-override`
  subcommand
* Use flag names from `cmd/internal/common/ape`

Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2024-11-20 07:58:32 +00:00
3b1364e4cf [#1501] cli: Refactor ape-manager subcommands
* Refactor ape-manager subcommands
* Use `cmd/internal/common/ape` parser commands within ape-manager subcommands
* Use flag names from `cmd/internal/common/ape`

Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2024-11-20 07:58:32 +00:00
daff77b273 [#1501] cli: Refactor local override managing subcommands
* Refactor local override managing subcommands
* Use `cmd/internal/common/ape` parser commands within local
  override subcommands
* Use flag names from `cmd/internal/common/ape`

Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2024-11-20 07:58:32 +00:00
7a7ee71a4d [#1501] cmd: Introduce common APE-chain parser commands
* Introduce common parsing commands to use them in `frostfs-cli`
  and `frostfs-adm` APE-related subcommands
* Introduce common flags for these parsing commands

Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2024-11-20 07:58:32 +00:00
ffe9906266 [#1501] cli: Move APE-chain parser methods to pkg/util
Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2024-11-20 07:58:32 +00:00
ae31ef3602 [#1501] cli: Move PrintHumanReadableAPEChain to a common package
* Both `frostfs-cli` and `frostfs-adm` APE-related subcommands use
  `PrintHumanReadableAPEChain` to print a parsed APE chain, so it is
  more correct to keep it in a common package shared by the `frostfs-cli`
  and `frostfs-adm` folders.

Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2024-11-20 07:58:32 +00:00
e2cb0640f1 [#1501] util: Move eACL-to-APE converter to pkg/util
* `ConvertEACLToAPE` is a useful method that couldn't be imported from
  outside frostfs-node so far, as it lived in `internal`
* Now `ConvertEACLToAPE` and the related structures and unit tests are
  placed in `pkg/util`

Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2024-11-20 07:58:32 +00:00
9f4ce600ac
[#1505] adm: Allow to manage additional keys in frostfsid
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-19 16:25:16 +03:00
d82f0d1926
[#1496] node/control: Await until SetNetmapStatus() persists
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-15 16:36:07 +03:00
acd5babd86
[#1496] morph: Merge InvokeRes and WaitParams
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-15 16:36:07 +03:00
b65874d1c3
[#1496] morph: Return InvokeRes from all invoke*() methods
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-15 16:36:07 +03:00
69c63006da
[#1496] morph: Move tx waiter to morph package
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-15 16:36:07 +03:00
d77a218f7c [#1493] metabase: Merge Inhume() and DropGraves() for tombstones
DropGraves() is only used to drop gravemarks after a tombstone
removal. Thus, it makes sense to do Inhume() and DropGraves() in one
transaction: it has less overhead and avoids unexpected problems in
case of a sudden power failure.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-14 06:47:04 +00:00
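A hedged bbolt sketch of the idea (the helper names are made up; the real metabase code is more involved): both steps run inside one `Update` transaction, so they commit atomically with a single fsync.

```
package meta

import bbolt "go.etcd.io/bbolt"

// Hypothetical helpers standing in for the real metabase logic.
func inhumeTombstone(tx *bbolt.Tx, addr []byte) error { return nil }
func dropGraves(tx *bbolt.Tx, addr []byte) error      { return nil }

// inhumeTx performs both steps in one transaction: a power failure can
// no longer land between the inhume and the gravemark cleanup.
func inhumeTx(db *bbolt.DB, addr []byte) error {
	return db.Update(func(tx *bbolt.Tx) error {
		if err := inhumeTombstone(tx, addr); err != nil {
			return err
		}
		return dropGraves(tx, addr)
	})
}
```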
44df67492f [#1493] metabase: Split inhumeTx() into 2 functions
No functional changes.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-14 06:47:04 +00:00
1e6f132b4e [#1493] metabase: Pass InhumePrm by value
Unify with the other code, no functional changes.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-14 06:47:04 +00:00
6dc0dc6691 [#1493] shard: Take mode mutex in HandleExpiredTombstones()
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-14 06:47:04 +00:00
f7cb6b4d87 [#1482] Makefile: Update golangci-lint
Signed-off-by: Alexander Chuprov <a.chuprov@yadro.com>
2024-11-13 15:01:41 +00:00
7fc6101bec
[#1491] engine/test: Rework engine test utils
- Remove `testNewShard` and `setInitializedShards` because they
violated the default engine workflow. The correct workflow is:
first use `New()`, followed by `Open()`, and then `Init()`. As a
result, adding new logic to `(*StorageEngine).Init` caused several
tests to fail with a panic when attempting to access uninitialized
resources. Now, all engines created with the test utils must be
initialized manually. The new helper method `prepare` can be used
for that purpose.
- Additionally, `setInitializedShards` hardcoded the shard worker
pool size, which prevented it from being configured in tests and
benchmarks. This has been fixed as well.
- Ensure engine initialization is done wherever it was missing.
- Refactor `setShardsNumOpts`, `setShardsNumAdditionalOpts`, and
`setShardsNum`. Make them all depend on `setShardsNumOpts`.

Signed-off-by: Aleksey Savchuk <a.savchuk@yadro.com>
2024-11-13 14:42:53 +03:00
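The lifecycle the commit enforces, sketched under assumptions (the interface below stands in for `*StorageEngine`; the exact `Open`/`Init` signatures in frostfs-node may differ): every engine built by the test utils goes through the same New -> Open -> Init order as production code, e.g. via the new `prepare` helper.

```
package engine

import (
	"context"
	"testing"

	"github.com/stretchr/testify/require"
)

// testEngine stands in for *StorageEngine in this sketch.
type testEngine interface {
	Open(context.Context) error
	Init(context.Context) error
}

// prepare finishes the New -> Open -> Init lifecycle for a test engine.
func prepare(t *testing.T, e testEngine) {
	ctx := context.Background()
	require.NoError(t, e.Open(ctx)) // after New(): open underlying resources
	require.NoError(t, e.Init(ctx)) // then initialize the shards
}
```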
7ef36749d0
[#1491] engine/test: Move BenchmarkExists to exists_test.go
Move `BenchmarkExists` from `engine_test.go` to `exists_test.go`
for better organization and clarity.

Signed-off-by: Aleksey Savchuk <a.savchuk@yadro.com>
2024-11-13 14:09:29 +03:00
c6066d6ee4
[#1491] engine/test: Use more suitable testing utils here and there
Use `setShardsNum` instead of `setInitializedShards` wherever possible.

Signed-off-by: Aleksey Savchuk <a.savchuk@yadro.com>
2024-11-13 14:09:29 +03:00
612b34d570
[#1437] logger: Add caller skip to log original caller position
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-11-13 10:36:12 +03:00
7429553266
[#1437] node: Fix contextcheck linter
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-11-13 10:36:10 +03:00
6921a89061
[#1437] ir: Fix contextcheck linters
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-11-13 10:36:10 +03:00
16598553d9
[#1437] shard: Fix contextcheck linter
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-11-13 10:36:09 +03:00
c139892117
[#1437] ir: Fix contextcheck linter
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-11-13 10:36:09 +03:00
62b5181618
[#1437] blobovnicza: Fix contextcheck linter
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-11-13 10:36:08 +03:00
6db46257c0
[#1437] node: Use ctx for logging
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-11-13 10:36:07 +03:00
c16dae8b4d
[#1437] logger: Use context to log trace id
Signed-off-by: Dmitrii Stepanov
2024-11-13 10:36:07 +03:00
fd004add00 [#1492] metabase: Fix import formatting
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-13 07:30:25 +00:00
8ed7a676d5 [#1492] metabase: Ensure Unmarshal() is called on a cloned slice
The slice returned from bucket.Get() is only valid during the tx
lifetime. Cloning it is not necessary everywhere, but better safe than
sorry.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-13 07:30:25 +00:00
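The rule being enforced, in a hedged sketch (bucket and key are placeholders): bbolt's `bucket.Get()` returns memory that is only valid while the transaction is open, so the value is cloned before the closure returns.

```
package meta

import (
	"bytes"

	bbolt "go.etcd.io/bbolt"
)

// get copies the stored value so the caller may use it after the tx ends;
// the slice returned by Get() is invalidated when the transaction closes.
func get(db *bbolt.DB, bucket, key []byte) ([]byte, error) {
	var data []byte
	err := db.View(func(tx *bbolt.Tx) error {
		if b := tx.Bucket(bucket); b != nil {
			data = bytes.Clone(b.Get(key)) // clone inside the tx
		}
		return nil
	})
	return data, err
}
```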
b451de94c8 [#1492] metabase: Fix typo in objData
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-13 07:30:25 +00:00
f1556e3c42
[#1488] Makefile: Drop all containers created on env-up
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-11-12 17:24:44 +03:00
e122ff6013
[#1488] dev: Add Jaeger image and enable tracing on debug
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-11-12 17:24:44 +03:00
e2658c7519
[#1488] tracing: KV attributes for spans from config
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-11-12 17:24:43 +03:00
c00f4bab18
[#1488] go.mod: Bump observability version
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-11-12 17:24:43 +03:00
427 changed files with 3974 additions and 4260 deletions

View file

@@ -1,6 +1,10 @@
 name: Build
-on: [pull_request]
+on:
+  pull_request:
+  push:
+    branches:
+      - master
 jobs:
   build:

View file

@@ -1,5 +1,10 @@
 name: Pre-commit hooks
-on: [pull_request]
+on:
+  pull_request:
+  push:
+    branches:
+      - master
 jobs:
   precommit:

View file

@@ -1,5 +1,10 @@
 name: Tests and linters
-on: [pull_request]
+on:
+  pull_request:
+  push:
+    branches:
+      - master
 jobs:
   lint:

View file

@@ -1,5 +1,10 @@
 name: Vulncheck
-on: [pull_request]
+on:
+  pull_request:
+  push:
+    branches:
+      - master
 jobs:
   vulncheck:

View file

@@ -9,6 +9,30 @@ Changelog for FrostFS Node
 ### Removed
 ### Updated
+## [v0.44.0] - 2024-25-11 - Rongbuk
+### Added
+- Allow to prioritize nodes during GET traversal via attributes (#1439)
+- Add metrics for the frostfsid cache (#1464)
+- Customize constant attributes attached to every tracing span (#1488)
+- Manage additional keys in the `frostfsid` contract (#1505)
+- Describe `--rule` flag in detail for `frostfs-cli ape-manager` subcommands (#1519)
+### Changed
+- Support richer interaction with the console in `frostfs-cli container policy-playground` (#1396)
+- Print address in base58 format in `frostfs-adm morph policy set-admin` (#1515)
+### Fixed
+- Fix EC object search (#1408)
+- Fix EC object put when one of the nodes is unavailable (#1427)
+### Removed
+- Drop most of the eACL-related code (#1425)
+- Remove `--basic-acl` flag from `frostfs-cli container create` (#1483)
+### Upgrading from v0.43.0
+The metabase schema has changed completely, resync is required.
 ## [v0.42.0]
 ### Added

CODEOWNERS (new file, +3)
View file

@@ -0,0 +1,3 @@
+.* @TrueCloudLab/storage-core-committers @TrueCloudLab/storage-core-developers
+.forgejo/.* @potyarkin
+Makefile @potyarkin

View file

@@ -8,8 +8,8 @@ HUB_IMAGE ?= git.frostfs.info/truecloudlab/frostfs
 HUB_TAG ?= "$(shell echo ${VERSION} | sed 's/^v//')"
 GO_VERSION ?= 1.22
-LINT_VERSION ?= 1.61.0
-TRUECLOUDLAB_LINT_VERSION ?= 0.0.7
+LINT_VERSION ?= 1.62.0
+TRUECLOUDLAB_LINT_VERSION ?= 0.0.8
 PROTOC_VERSION ?= 25.0
 PROTOGEN_FROSTFS_VERSION ?= $(shell go list -f '{{.Version}}' -m git.frostfs.info/TrueCloudLab/frostfs-sdk-go)
 PROTOC_OS_VERSION=osx-x86_64
@@ -282,7 +282,6 @@ env-up: all
 # Shutdown dev environment
 env-down:
-	docker compose -f dev/docker-compose.yml down
-	docker volume rm -f frostfs-node_neo-go
+	docker compose -f dev/docker-compose.yml down -v
 	rm -rf ./$(TMP_DIR)/state
 	rm -rf ./$(TMP_DIR)/storage

View file

@@ -1 +1 @@
-v0.42.0
+v0.44.0

View file

@@ -20,6 +20,7 @@ const (
 	AlphabetWalletsFlagDesc = "Path to alphabet wallets dir"
 	LocalDumpFlag = "local-dump"
+	ProtoConfigPath = "protocol"
 	ContractsInitFlag = "contracts"
 	ContractsInitFlagDesc = "Path to archive with compiled FrostFS contracts (the default is to fetch the latest release from the official repository)"
 	ContractsURLFlag = "contracts-url"

View file

@@ -135,7 +135,7 @@ func createContainerInfoProvider(cli *client.Client) (container.InfoProvider, er
 	if err != nil {
 		return nil, fmt.Errorf("resolve container contract hash: %w", err)
 	}
-	cc, err := morphcontainer.NewFromMorph(cli, sh, 0, morphcontainer.TryNotary())
+	cc, err := morphcontainer.NewFromMorph(cli, sh, 0)
 	if err != nil {
 		return nil, fmt.Errorf("create morph container client: %w", err)
 	}

View file

@@ -8,7 +8,7 @@ import (
 	commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
 	apeCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common/ape"
 	apechain "git.frostfs.info/TrueCloudLab/policy-engine/pkg/chain"
-	"github.com/nspcc-dev/neo-go/pkg/util"
+	"github.com/nspcc-dev/neo-go/pkg/encoding/address"
 	"github.com/spf13/cobra"
 	"github.com/spf13/viper"
 )
@@ -200,7 +200,7 @@ func listRuleChains(cmd *cobra.Command, _ []string) {
 func setAdmin(cmd *cobra.Command, _ []string) {
 	s, _ := cmd.Flags().GetString(addrAdminFlag)
-	addr, err := util.Uint160DecodeStringLE(s)
+	addr, err := address.StringToUint160(s)
 	commonCmd.ExitOnErr(cmd, "can't decode admin addr: %w", err)
 	pci, ac := newPolicyContractInterface(cmd)
 	h, vub, err := pci.SetAdmin(addr)
@@ -214,7 +214,7 @@ func getAdmin(cmd *cobra.Command, _ []string) {
 	pci, _ := newPolicyContractReaderInterface(cmd)
 	addr, err := pci.GetAdmin()
 	commonCmd.ExitOnErr(cmd, "unable to get admin: %w", err)
-	cmd.Println(addr.StringLE())
+	cmd.Println(address.Uint160ToString(addr))
 }
 func listTargets(cmd *cobra.Command, _ []string) {

View file

@@ -53,16 +53,15 @@ func (n *invokerAdapter) GetRPCInvoker() invoker.RPCInvoke {
 }
 func newPolicyContractReaderInterface(cmd *cobra.Command) (*morph.ContractStorageReader, *invoker.Invoker) {
-	c, err := helper.GetN3Client(viper.GetViper())
+	c, err := helper.NewRemoteClient(viper.GetViper())
 	commonCmd.ExitOnErr(cmd, "unable to create NEO rpc client: %w", err)
 	inv := invoker.New(c, nil)
-	var ch util.Uint160
 	r := management.NewReader(inv)
 	nnsCs, err := helper.GetContractByID(r, 1)
 	commonCmd.ExitOnErr(cmd, "can't get NNS contract state: %w", err)
-	ch, err = helper.NNSResolveHash(inv, nnsCs.Hash, helper.DomainOf(constants.PolicyContract))
+	ch, err := helper.NNSResolveHash(inv, nnsCs.Hash, helper.DomainOf(constants.PolicyContract))
 	commonCmd.ExitOnErr(cmd, "unable to resolve policy contract hash: %w", err)
 	invokerAdapter := &invokerAdapter{
@@ -74,7 +73,7 @@ func newPolicyContractReaderInterface(cmd *cobra.Command) (*morph.ContractStorag
 }
 func newPolicyContractInterface(cmd *cobra.Command) (*morph.ContractStorage, *helper.LocalActor) {
-	c, err := helper.GetN3Client(viper.GetViper())
+	c, err := helper.NewRemoteClient(viper.GetViper())
 	commonCmd.ExitOnErr(cmd, "unable to create NEO rpc client: %w", err)
 	ac, err := helper.NewLocalActor(cmd, c, constants.ConsensusAccountName)

View file

@@ -51,7 +51,7 @@ func dumpBalances(cmd *cobra.Command, _ []string) error {
 		nmHash util.Uint160
 	)
-	c, err := helper.GetN3Client(viper.GetViper())
+	c, err := helper.NewRemoteClient(viper.GetViper())
 	if err != nil {
 		return err
 	}

View file

@@ -26,7 +26,7 @@ import (
 const forceConfigSet = "force"
 func dumpNetworkConfig(cmd *cobra.Command, _ []string) error {
-	c, err := helper.GetN3Client(viper.GetViper())
+	c, err := helper.NewRemoteClient(viper.GetViper())
 	if err != nil {
 		return fmt.Errorf("can't create N3 client: %w", err)
 	}

View file

@@ -4,7 +4,6 @@ import "time"
 const (
 	ConsensusAccountName = "consensus"
-	ProtoConfigPath = "protocol"
 	// MaxAlphabetNodes is the maximum number of candidates allowed, which is currently limited by the size
 	// of the invocation script.

View file

@@ -76,7 +76,7 @@ func dumpContainers(cmd *cobra.Command, _ []string) error {
 		return fmt.Errorf("invalid filename: %w", err)
 	}
-	c, err := helper.GetN3Client(viper.GetViper())
+	c, err := helper.NewRemoteClient(viper.GetViper())
 	if err != nil {
 		return fmt.Errorf("can't create N3 client: %w", err)
 	}
@@ -157,7 +157,7 @@ func dumpSingleContainer(bw *io.BufBinWriter, ch util.Uint160, inv *invoker.Invo
 }
 func listContainers(cmd *cobra.Command, _ []string) error {
-	c, err := helper.GetN3Client(viper.GetViper())
+	c, err := helper.NewRemoteClient(viper.GetViper())
 	if err != nil {
 		return fmt.Errorf("can't create N3 client: %w", err)
 	}

View file

@@ -36,7 +36,7 @@ type contractDumpInfo struct {
 }
 func dumpContractHashes(cmd *cobra.Command, _ []string) error {
-	c, err := helper.GetN3Client(viper.GetViper())
+	c, err := helper.NewRemoteClient(viper.GetViper())
 	if err != nil {
 		return fmt.Errorf("can't create N3 client: %w", err)
 	}

View file

@@ -0,0 +1,83 @@
+package frostfsid
+
+import (
+	"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/commonflags"
+	commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
+	"github.com/spf13/cobra"
+	"github.com/spf13/viper"
+)
+
+var (
+	frostfsidAddSubjectKeyCmd = &cobra.Command{
+		Use:   "add-subject-key",
+		Short: "Add a public key to the subject in frostfsid contract",
+		PreRun: func(cmd *cobra.Command, _ []string) {
+			_ = viper.BindPFlag(commonflags.AlphabetWalletsFlag, cmd.Flags().Lookup(commonflags.AlphabetWalletsFlag))
+			_ = viper.BindPFlag(commonflags.EndpointFlag, cmd.Flags().Lookup(commonflags.EndpointFlag))
+		},
+		Run: frostfsidAddSubjectKey,
+	}
+	frostfsidRemoveSubjectKeyCmd = &cobra.Command{
+		Use:   "remove-subject-key",
+		Short: "Remove a public key from the subject in frostfsid contract",
+		PreRun: func(cmd *cobra.Command, _ []string) {
+			_ = viper.BindPFlag(commonflags.AlphabetWalletsFlag, cmd.Flags().Lookup(commonflags.AlphabetWalletsFlag))
+			_ = viper.BindPFlag(commonflags.EndpointFlag, cmd.Flags().Lookup(commonflags.EndpointFlag))
+		},
+		Run: frostfsidRemoveSubjectKey,
+	}
+)
+
+func initFrostfsIDAddSubjectKeyCmd() {
+	Cmd.AddCommand(frostfsidAddSubjectKeyCmd)
+	ff := frostfsidAddSubjectKeyCmd.Flags()
+	ff.StringP(commonflags.EndpointFlag, commonflags.EndpointFlagShort, "", commonflags.EndpointFlagDesc)
+	ff.String(commonflags.AlphabetWalletsFlag, "", commonflags.AlphabetWalletsFlagDesc)
+	ff.String(subjectAddressFlag, "", "Subject address")
+	_ = frostfsidAddSubjectKeyCmd.MarkFlagRequired(subjectAddressFlag)
+	ff.String(subjectKeyFlag, "", "Public key to add")
+	_ = frostfsidAddSubjectKeyCmd.MarkFlagRequired(subjectKeyFlag)
+}
+
+func initFrostfsIDRemoveSubjectKeyCmd() {
+	Cmd.AddCommand(frostfsidRemoveSubjectKeyCmd)
+	ff := frostfsidRemoveSubjectKeyCmd.Flags()
+	ff.StringP(commonflags.EndpointFlag, commonflags.EndpointFlagShort, "", commonflags.EndpointFlagDesc)
+	ff.String(commonflags.AlphabetWalletsFlag, "", commonflags.AlphabetWalletsFlagDesc)
+	ff.String(subjectAddressFlag, "", "Subject address")
+	_ = frostfsidAddSubjectKeyCmd.MarkFlagRequired(subjectAddressFlag)
+	ff.String(subjectKeyFlag, "", "Public key to remove")
+	_ = frostfsidAddSubjectKeyCmd.MarkFlagRequired(subjectKeyFlag)
+}
+
+func frostfsidAddSubjectKey(cmd *cobra.Command, _ []string) {
+	addr := getFrostfsIDSubjectAddress(cmd)
+	pub := getFrostfsIDSubjectKey(cmd)
+	ffsid, err := newFrostfsIDClient(cmd)
+	commonCmd.ExitOnErr(cmd, "init contract client: %w", err)
+	ffsid.addCall(ffsid.roCli.AddSubjectKeyCall(addr, pub))
+	err = ffsid.sendWait()
+	commonCmd.ExitOnErr(cmd, "add subject key: %w", err)
+}
+
+func frostfsidRemoveSubjectKey(cmd *cobra.Command, _ []string) {
+	addr := getFrostfsIDSubjectAddress(cmd)
+	pub := getFrostfsIDSubjectKey(cmd)
+	ffsid, err := newFrostfsIDClient(cmd)
+	commonCmd.ExitOnErr(cmd, "init contract client: %w", err)
+	ffsid.addCall(ffsid.roCli.RemoveSubjectKeyCall(addr, pub))
+	err = ffsid.sendWait()
+	commonCmd.ExitOnErr(cmd, "remove subject key: %w", err)
+}

View file

@@ -1,7 +1,6 @@
 package frostfsid
 import (
-	"errors"
 	"fmt"
 	"math/big"
 	"sort"
@@ -61,7 +60,6 @@ var (
 		Use:   "list-namespaces",
 		Short: "List all namespaces in frostfsid",
 		PreRun: func(cmd *cobra.Command, _ []string) {
-			_ = viper.BindPFlag(commonflags.AlphabetWalletsFlag, cmd.Flags().Lookup(commonflags.AlphabetWalletsFlag))
 			_ = viper.BindPFlag(commonflags.EndpointFlag, cmd.Flags().Lookup(commonflags.EndpointFlag))
 		},
 		Run: frostfsidListNamespaces,
@@ -91,7 +89,6 @@ var (
 		Use:   "list-subjects",
 		Short: "List subjects in namespace",
 		PreRun: func(cmd *cobra.Command, _ []string) {
-			_ = viper.BindPFlag(commonflags.AlphabetWalletsFlag, cmd.Flags().Lookup(commonflags.AlphabetWalletsFlag))
 			_ = viper.BindPFlag(commonflags.EndpointFlag, cmd.Flags().Lookup(commonflags.EndpointFlag))
 		},
 		Run: frostfsidListSubjects,
@@ -121,7 +118,6 @@ var (
 		Use:   "list-groups",
 		Short: "List groups in namespace",
 		PreRun: func(cmd *cobra.Command, _ []string) {
-			_ = viper.BindPFlag(commonflags.AlphabetWalletsFlag, cmd.Flags().Lookup(commonflags.AlphabetWalletsFlag))
 			_ = viper.BindPFlag(commonflags.EndpointFlag, cmd.Flags().Lookup(commonflags.EndpointFlag))
 		},
 		Run: frostfsidListGroups,
@@ -151,7 +147,6 @@ var (
 		Use:   "list-group-subjects",
 		Short: "List subjects in group",
 		PreRun: func(cmd *cobra.Command, _ []string) {
-			_ = viper.BindPFlag(commonflags.AlphabetWalletsFlag, cmd.Flags().Lookup(commonflags.AlphabetWalletsFlag))
 			_ = viper.BindPFlag(commonflags.EndpointFlag, cmd.Flags().Lookup(commonflags.EndpointFlag))
 		},
 		Run: frostfsidListGroupSubjects,
@@ -169,7 +164,6 @@ func initFrostfsIDCreateNamespaceCmd() {
 func initFrostfsIDListNamespacesCmd() {
 	Cmd.AddCommand(frostfsidListNamespacesCmd)
 	frostfsidListNamespacesCmd.Flags().StringP(commonflags.EndpointFlag, commonflags.EndpointFlagShort, "", commonflags.EndpointFlagDesc)
-	frostfsidListNamespacesCmd.Flags().String(commonflags.AlphabetWalletsFlag, "", commonflags.AlphabetWalletsFlagDesc)
 }
 func initFrostfsIDCreateSubjectCmd() {
@@ -193,7 +187,6 @@ func initFrostfsIDListSubjectsCmd() {
 	frostfsidListSubjectsCmd.Flags().StringP(commonflags.EndpointFlag, commonflags.EndpointFlagShort, "", commonflags.EndpointFlagDesc)
 	frostfsidListSubjectsCmd.Flags().String(namespaceFlag, "", "Namespace to list subjects")
 	frostfsidListSubjectsCmd.Flags().Bool(includeNamesFlag, false, "Whether include subject name (require additional requests)")
-	frostfsidListSubjectsCmd.Flags().String(commonflags.AlphabetWalletsFlag, "", commonflags.AlphabetWalletsFlagDesc)
 }
 func initFrostfsIDCreateGroupCmd() {
@@ -217,7 +210,6 @@ func initFrostfsIDListGroupsCmd() {
 	Cmd.AddCommand(frostfsidListGroupsCmd)
 	frostfsidListGroupsCmd.Flags().StringP(commonflags.EndpointFlag, commonflags.EndpointFlagShort, "", commonflags.EndpointFlagDesc)
 	frostfsidListGroupsCmd.Flags().String(namespaceFlag, "", "Namespace to list groups")
-	frostfsidListGroupsCmd.Flags().String(commonflags.AlphabetWalletsFlag, "", commonflags.AlphabetWalletsFlagDesc)
 }
 func initFrostfsIDAddSubjectToGroupCmd() {
@@ -242,7 +234,6 @@ func initFrostfsIDListGroupSubjectsCmd() {
 	frostfsidListGroupSubjectsCmd.Flags().String(namespaceFlag, "", "Namespace name")
 	frostfsidListGroupSubjectsCmd.Flags().Int64(groupIDFlag, 0, "Group id")
 	frostfsidListGroupSubjectsCmd.Flags().Bool(includeNamesFlag, false, "Whether include subject name (require additional requests)")
-	frostfsidListGroupSubjectsCmd.Flags().String(commonflags.AlphabetWalletsFlag, "", commonflags.AlphabetWalletsFlagDesc)
 }
 func frostfsidCreateNamespace(cmd *cobra.Command, _ []string) {
@@ -497,10 +488,6 @@ func (f *frostfsidClient) sendWaitRes() (*state.AppExecResult, error) {
 	}
 	f.bw.Reset()
-	if len(f.wCtx.SentTxs) == 0 {
-		return nil, errors.New("no transactions to wait")
-	}
 	f.wCtx.Command.Println("Waiting for transactions to persist...")
 	return f.roCli.Wait(f.wCtx.SentTxs[0].Hash, f.wCtx.SentTxs[0].Vub, nil)
 }
@@ -522,7 +509,7 @@ func readIterator(inv *invoker.Invoker, iter *result.Iterator, batchSize int, se
 }
 func initInvoker(cmd *cobra.Command) (*invoker.Invoker, *state.Contract, util.Uint160) {
-	c, err := helper.GetN3Client(viper.GetViper())
+	c, err := helper.NewRemoteClient(viper.GetViper())
 	commonCmd.ExitOnErr(cmd, "can't create N3 client: %w", err)
 	inv := invoker.New(c, nil)

View file

@@ -1,59 +1,12 @@
 package frostfsid
 import (
-	"encoding/hex"
 	"testing"
-	"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/modules/morph/helper"
 	"git.frostfs.info/TrueCloudLab/frostfs-node/internal/ape"
-	"github.com/nspcc-dev/neo-go/pkg/crypto/keys"
-	"github.com/nspcc-dev/neo-go/pkg/encoding/address"
-	"github.com/spf13/viper"
 	"github.com/stretchr/testify/require"
 )
-func TestFrostfsIDConfig(t *testing.T) {
-	pks := make([]*keys.PrivateKey, 4)
-	for i := range pks {
-		pk, err := keys.NewPrivateKey()
-		require.NoError(t, err)
-		pks[i] = pk
-	}
-	fmts := []string{
-		pks[0].GetScriptHash().StringLE(),
-		address.Uint160ToString(pks[1].GetScriptHash()),
-		hex.EncodeToString(pks[2].PublicKey().UncompressedBytes()),
-		hex.EncodeToString(pks[3].PublicKey().Bytes()),
-	}
-	for i := range fmts {
-		v := viper.New()
-		v.Set("frostfsid.admin", fmts[i])
-		actual, found, err := helper.GetFrostfsIDAdmin(v)
-		require.NoError(t, err)
-		require.True(t, found)
-		require.Equal(t, pks[i].GetScriptHash(), actual)
-	}
-	t.Run("bad key", func(t *testing.T) {
-		v := viper.New()
-		v.Set("frostfsid.admin", "abc")
-		_, found, err := helper.GetFrostfsIDAdmin(v)
-		require.Error(t, err)
-		require.True(t, found)
-	})
-	t.Run("missing key", func(t *testing.T) {
-		v := viper.New()
-		_, found, err := helper.GetFrostfsIDAdmin(v)
-		require.NoError(t, err)
-		require.False(t, found)
-	})
-}
 func TestNamespaceRegexp(t *testing.T) {
 	for _, tc := range []struct {
 		name string

View file

@@ -12,4 +12,6 @@ func init() {
 	initFrostfsIDAddSubjectToGroupCmd()
 	initFrostfsIDRemoveSubjectFromGroupCmd()
 	initFrostfsIDListGroupSubjectsCmd()
+	initFrostfsIDAddSubjectKeyCmd()
+	initFrostfsIDRemoveSubjectKeyCmd()
 }

View file

@@ -38,20 +38,7 @@ func NewLocalActor(cmd *cobra.Command, c actor.RPCActor, accName string) (*Local
 	walletDir := config.ResolveHomePath(viper.GetString(commonflags.AlphabetWalletsFlag))
 	var act *actor.Actor
 	var accounts []*wallet.Account
-	if walletDir == "" {
-		account, err := wallet.NewAccount()
-		commonCmd.ExitOnErr(cmd, "unable to create dummy account: %w", err)
-		act, err = actor.New(c, []actor.SignerAccount{{
-			Signer: transaction.Signer{
-				Account: account.Contract.ScriptHash(),
-				Scopes:  transaction.Global,
-			},
-			Account: account,
-		}})
-		if err != nil {
-			return nil, err
-		}
-	} else {
 	wallets, err := GetAlphabetWallets(viper.GetViper(), walletDir)
 	commonCmd.ExitOnErr(cmd, "unable to get alphabet wallets: %w", err)
@@ -70,7 +57,6 @@ func NewLocalActor(cmd *cobra.Command, c actor.RPCActor, accName string) (*Local
 	if err != nil {
 		return nil, err
 	}
-	}
 	return &LocalActor{
 		neoActor: act,
 		accounts: accounts,

View file

@@ -82,7 +82,7 @@ func GetContractDeployData(c *InitializeContext, ctrName string, keysParam []any
 		h, found, err = getFrostfsIDAdminFromContract(c.ReadOnlyInvoker)
 	}
 	if method != constants.UpdateMethodName || err == nil && !found {
-		h, found, err = GetFrostfsIDAdmin(viper.GetViper())
+		h, found, err = getFrostfsIDAdmin(viper.GetViper())
 	}
 	if err != nil {
 		return nil, err

View file

@@ -11,7 +11,7 @@ import (
 const frostfsIDAdminConfigKey = "frostfsid.admin"
-func GetFrostfsIDAdmin(v *viper.Viper) (util.Uint160, bool, error) {
+func getFrostfsIDAdmin(v *viper.Viper) (util.Uint160, bool, error) {
 	admin := v.GetString(frostfsIDAdminConfigKey)
 	if admin == "" {
 		return util.Uint160{}, false, nil

View file

@@ -0,0 +1,53 @@
+package helper
+
+import (
+	"encoding/hex"
+	"testing"
+
+	"github.com/nspcc-dev/neo-go/pkg/crypto/keys"
+	"github.com/nspcc-dev/neo-go/pkg/encoding/address"
+	"github.com/spf13/viper"
+	"github.com/stretchr/testify/require"
+)
+
+func TestFrostfsIDConfig(t *testing.T) {
+	pks := make([]*keys.PrivateKey, 4)
+	for i := range pks {
+		pk, err := keys.NewPrivateKey()
+		require.NoError(t, err)
+		pks[i] = pk
+	}
+	fmts := []string{
+		pks[0].GetScriptHash().StringLE(),
+		address.Uint160ToString(pks[1].GetScriptHash()),
+		hex.EncodeToString(pks[2].PublicKey().UncompressedBytes()),
+		hex.EncodeToString(pks[3].PublicKey().Bytes()),
+	}
+	for i := range fmts {
+		v := viper.New()
+		v.Set("frostfsid.admin", fmts[i])
+		actual, found, err := getFrostfsIDAdmin(v)
+		require.NoError(t, err)
+		require.True(t, found)
+		require.Equal(t, pks[i].GetScriptHash(), actual)
+	}
+	t.Run("bad key", func(t *testing.T) {
+		v := viper.New()
+		v.Set("frostfsid.admin", "abc")
+		_, found, err := getFrostfsIDAdmin(v)
+		require.Error(t, err)
+		require.True(t, found)
+	})
+	t.Run("missing key", func(t *testing.T) {
+		v := viper.New()
+		_, found, err := getFrostfsIDAdmin(v)
+		require.NoError(t, err)
+		require.False(t, found)
+	})
+}

View file

@@ -134,12 +134,12 @@ func NewInitializeContext(cmd *cobra.Command, v *viper.Viper) (*InitializeContex
 		return nil, err
 	}
-	accounts, err := createWalletAccounts(wallets)
+	accounts, err := getSingleAccounts(wallets)
 	if err != nil {
 		return nil, err
 	}
-	cliCtx, err := DefaultClientContext(c, committeeAcc)
+	cliCtx, err := defaultClientContext(c, committeeAcc)
 	if err != nil {
 		return nil, fmt.Errorf("client context: %w", err)
 	}
@@ -191,7 +191,7 @@ func createClient(cmd *cobra.Command, v *viper.Viper, wallets []*wallet.Wallet)
 		}
 		c, err = NewLocalClient(cmd, v, wallets, ldf.Value.String())
 	} else {
-		c, err = GetN3Client(v)
+		c, err = NewRemoteClient(v)
 	}
 	if err != nil {
 		return nil, fmt.Errorf("can't create N3 client: %w", err)
@@ -211,7 +211,7 @@ func getContractsPath(cmd *cobra.Command, needContracts bool) (string, error) {
 	return ctrPath, nil
 }
-func createWalletAccounts(wallets []*wallet.Wallet) ([]*wallet.Account, error) {
+func getSingleAccounts(wallets []*wallet.Wallet) ([]*wallet.Account, error) {
 	accounts := make([]*wallet.Account, len(wallets))
 	for i, w := range wallets {
 		acc, err := GetWalletAccount(w, constants.SingleAccountName)

View file

@@ -8,6 +8,7 @@ import (
 	"sort"
 	"time"
+	"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/commonflags"
 	"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/modules/morph/constants"
 	"github.com/google/uuid"
 	"github.com/nspcc-dev/neo-go/pkg/config"
@@ -47,7 +48,7 @@ type LocalClient struct {
 }
 func NewLocalClient(cmd *cobra.Command, v *viper.Viper, wallets []*wallet.Wallet, dumpPath string) (*LocalClient, error) {
-	cfg, err := config.LoadFile(v.GetString(constants.ProtoConfigPath))
+	cfg, err := config.LoadFile(v.GetString(commonflags.ProtoConfigPath))
 	if err != nil {
 		return nil, err
 	}
@@ -57,35 +58,30 @@ func NewLocalClient(cmd *cobra.Command, v *viper.Viper, wallets []*wallet.Wallet
 		return nil, err
 	}
-	m := smartcontract.GetDefaultHonestNodeCount(int(cfg.ProtocolConfiguration.ValidatorsCount))
-	accounts := make([]*wallet.Account, len(wallets))
-	for i := range accounts {
-		accounts[i], err = GetWalletAccount(wallets[i], constants.ConsensusAccountName)
-		if err != nil {
-			return nil, err
-		}
-	}
-	indexMap := make(map[string]int)
-	for i, pub := range cfg.ProtocolConfiguration.StandbyCommittee {
-		indexMap[pub] = i
-	}
-	sort.Slice(accounts, func(i, j int) bool {
-		pi := accounts[i].PrivateKey().PublicKey().Bytes()
-		pj := accounts[j].PrivateKey().PublicKey().Bytes()
-		return indexMap[string(pi)] < indexMap[string(pj)]
-	})
-	sort.Slice(accounts[:cfg.ProtocolConfiguration.ValidatorsCount], func(i, j int) bool {
-		return accounts[i].PublicKey().Cmp(accounts[j].PublicKey()) == -1
-	})
 	go bc.Run()
+	accounts, err := getBlockSigningAccounts(cfg.ProtocolConfiguration, wallets)
+	if err != nil {
+		return nil, err
+	}
 	if cmd.Name() != "init" {
+		if err := restoreDump(bc, dumpPath); err != nil {
+			return nil, fmt.Errorf("restore dump: %w", err)
+		}
+	}
+	return &LocalClient{
+		bc:       bc,
+		dumpPath: dumpPath,
+		accounts: accounts,
+	}, nil
+}
+func restoreDump(bc *core.Blockchain, dumpPath string) error {
 	f, err := os.OpenFile(dumpPath, os.O_RDONLY, 0o600)
 	if err != nil {
-		return nil, fmt.Errorf("can't open local dump: %w", err)
+		return fmt.Errorf("can't open local dump: %w", err)
 	}
 	defer f.Close()
@@ -98,15 +94,37 @@ func NewLocalClient(cmd *cobra.Command, v *viper.Viper, wallets []*wallet.Wallet
 	count := r.ReadU32LE() - skip
 	if err := chaindump.Restore(bc, r, skip, count, nil); err != nil {
-		return nil, fmt.Errorf("can't restore local dump: %w", err)
+		return err
 	}
+	return nil
 }
-	return &LocalClient{
-		bc:       bc,
-		dumpPath: dumpPath,
-		accounts: accounts[:m],
-	}, nil
-}
+func getBlockSigningAccounts(cfg config.ProtocolConfiguration, wallets []*wallet.Wallet) ([]*wallet.Account, error) {
+	accounts := make([]*wallet.Account, len(wallets))
+	for i := range accounts {
+		acc, err := GetWalletAccount(wallets[i], constants.ConsensusAccountName)
+		if err != nil {
+			return nil, err
+		}
+		accounts[i] = acc
+	}
+	indexMap := make(map[string]int)
+	for i, pub := range cfg.StandbyCommittee {
+		indexMap[pub] = i
+	}
+	sort.Slice(accounts, func(i, j int) bool {
+		pi := accounts[i].PrivateKey().PublicKey().Bytes()
+		pj := accounts[j].PrivateKey().PublicKey().Bytes()
+		return indexMap[string(pi)] < indexMap[string(pj)]
+	})
+	sort.Slice(accounts[:cfg.ValidatorsCount], func(i, j int) bool {
+		return accounts[i].PublicKey().Cmp(accounts[j].PublicKey()) == -1
+	})
+	m := smartcontract.GetDefaultHonestNodeCount(int(cfg.ValidatorsCount))
+	return accounts[:m], nil
 }
@@ -127,11 +145,6 @@ func (l *LocalClient) GetApplicationLog(h util.Uint256, t *trigger.Type) (*resul
 	return &a, nil
 }
-func (l *LocalClient) GetCommittee() (keys.PublicKeys, error) {
-	// not used by `morph init` command
-	panic("unexpected call")
-}
 // InvokeFunction is implemented via `InvokeScript`.
 func (l *LocalClient) InvokeFunction(h util.Uint160, method string, sPrm []smartcontract.Parameter, ss []transaction.Signer) (*result.Invoke, error) {
 	var err error
@@ -295,13 +308,7 @@ func (l *LocalClient) InvokeScript(script []byte, signers []transaction.Signer)
 }
 func (l *LocalClient) SendRawTransaction(tx *transaction.Transaction) (util.Uint256, error) {
-	// We need to test that transaction was formed correctly to catch as many errors as we can.
-	bs := tx.Bytes()
-	_, err := transaction.NewTransactionFromBytes(bs)
-	if err != nil {
-		return tx.Hash(), fmt.Errorf("invalid transaction: %w", err)
-	}
+	tx = tx.Copy()
 	l.transactions = append(l.transactions, tx)
 	return tx.Hash(), nil
 }

View file

@@ -10,7 +10,6 @@ import (
 	"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/commonflags"
 	"github.com/nspcc-dev/neo-go/pkg/core/state"
 	"github.com/nspcc-dev/neo-go/pkg/core/transaction"
-	"github.com/nspcc-dev/neo-go/pkg/crypto/keys"
 	"github.com/nspcc-dev/neo-go/pkg/neorpc/result"
 	"github.com/nspcc-dev/neo-go/pkg/rpcclient"
 	"github.com/nspcc-dev/neo-go/pkg/rpcclient/actor"
@@ -25,15 +24,10 @@ import (
 // Client represents N3 client interface capable of test-invoking scripts
 // and sending signed transactions to chain.
 type Client interface {
-	invoker.RPCInvoke
-	GetBlockCount() (uint32, error)
+	actor.RPCActor
 	GetNativeContracts() ([]state.Contract, error)
 	GetApplicationLog(util.Uint256, *trigger.Type) (*result.ApplicationLog, error)
-	GetVersion() (*result.Version, error)
-	SendRawTransaction(*transaction.Transaction) (util.Uint256, error)
-	GetCommittee() (keys.PublicKeys, error)
-	CalculateNetworkFee(tx *transaction.Transaction) (int64, error)
 }
 type HashVUBPair struct {
@@ -48,7 +42,7 @@ type ClientContext struct {
 	SentTxs []HashVUBPair
 }
-func GetN3Client(v *viper.Viper) (Client, error) {
+func NewRemoteClient(v *viper.Viper) (Client, error) {
 	// number of opened connections
 	// by neo-go client per one host
 	const (
@@ -88,8 +82,14 @@ func GetN3Client(v *viper.Viper) (Client, error) {
 	return c, nil
 }
-func DefaultClientContext(c Client, committeeAcc *wallet.Account) (*ClientContext, error) {
-	commAct, err := NewActor(c, committeeAcc)
+func defaultClientContext(c Client, committeeAcc *wallet.Account) (*ClientContext, error) {
+	commAct, err := actor.New(c, []actor.SignerAccount{{
+		Signer: transaction.Signer{
+			Account: committeeAcc.Contract.ScriptHash(),
+			Scopes:  transaction.Global,
+		},
+		Account: committeeAcc,
+	}})
 	if err != nil {
 		return nil, err
 	}


@ -15,10 +15,8 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/modules/morph/constants" "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/modules/morph/constants"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/innerring" "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/innerring"
"github.com/nspcc-dev/neo-go/pkg/core/state" "github.com/nspcc-dev/neo-go/pkg/core/state"
"github.com/nspcc-dev/neo-go/pkg/core/transaction"
"github.com/nspcc-dev/neo-go/pkg/crypto/keys" "github.com/nspcc-dev/neo-go/pkg/crypto/keys"
"github.com/nspcc-dev/neo-go/pkg/encoding/fixedn" "github.com/nspcc-dev/neo-go/pkg/encoding/fixedn"
"github.com/nspcc-dev/neo-go/pkg/rpcclient/actor"
"github.com/nspcc-dev/neo-go/pkg/rpcclient/management" "github.com/nspcc-dev/neo-go/pkg/rpcclient/management"
"github.com/nspcc-dev/neo-go/pkg/wallet" "github.com/nspcc-dev/neo-go/pkg/wallet"
"github.com/spf13/viper" "github.com/spf13/viper"
@ -87,16 +85,6 @@ func openAlphabetWallets(v *viper.Viper, walletDir string) ([]*wallet.Wallet, er
return wallets, nil return wallets, nil
} }
func NewActor(c actor.RPCActor, committeeAcc *wallet.Account) (*actor.Actor, error) {
return actor.New(c, []actor.SignerAccount{{
Signer: transaction.Signer{
Account: committeeAcc.Contract.ScriptHash(),
Scopes: transaction.Global,
},
Account: committeeAcc,
}})
}
func ReadContract(ctrPath, ctrName string) (*ContractState, error) { func ReadContract(ctrPath, ctrName string) (*ContractState, error) {
rawNef, err := os.ReadFile(filepath.Join(ctrPath, ctrName+"_contract.nef")) rawNef, err := os.ReadFile(filepath.Join(ctrPath, ctrName+"_contract.nef"))
if err != nil { if err != nil {


@ -62,7 +62,7 @@ func testInitialize(t *testing.T, committeeSize int) {
v := viper.GetViper() v := viper.GetViper()
require.NoError(t, generateTestData(testdataDir, committeeSize)) require.NoError(t, generateTestData(testdataDir, committeeSize))
v.Set(constants.ProtoConfigPath, filepath.Join(testdataDir, protoFileName)) v.Set(commonflags.ProtoConfigPath, filepath.Join(testdataDir, protoFileName))
// Set to the path or remove the next statement to download from the network. // Set to the path or remove the next statement to download from the network.
require.NoError(t, Cmd.Flags().Set(commonflags.ContractsInitFlag, contractsPath)) require.NoError(t, Cmd.Flags().Set(commonflags.ContractsInitFlag, contractsPath))


@ -2,7 +2,6 @@ package initialize
import ( import (
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/commonflags" "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/modules/morph/constants"
"github.com/spf13/cobra" "github.com/spf13/cobra"
"github.com/spf13/viper" "github.com/spf13/viper"
) )
@ -32,7 +31,7 @@ var Cmd = &cobra.Command{
_ = viper.BindPFlag(commonflags.ContainerFeeInitFlag, cmd.Flags().Lookup(containerFeeCLIFlag)) _ = viper.BindPFlag(commonflags.ContainerFeeInitFlag, cmd.Flags().Lookup(containerFeeCLIFlag))
_ = viper.BindPFlag(commonflags.ContainerAliasFeeInitFlag, cmd.Flags().Lookup(containerAliasFeeCLIFlag)) _ = viper.BindPFlag(commonflags.ContainerAliasFeeInitFlag, cmd.Flags().Lookup(containerAliasFeeCLIFlag))
_ = viper.BindPFlag(commonflags.WithdrawFeeInitFlag, cmd.Flags().Lookup(withdrawFeeCLIFlag)) _ = viper.BindPFlag(commonflags.WithdrawFeeInitFlag, cmd.Flags().Lookup(withdrawFeeCLIFlag))
_ = viper.BindPFlag(constants.ProtoConfigPath, cmd.Flags().Lookup(constants.ProtoConfigPath)) _ = viper.BindPFlag(commonflags.ProtoConfigPath, cmd.Flags().Lookup(commonflags.ProtoConfigPath))
}, },
RunE: initializeSideChainCmd, RunE: initializeSideChainCmd,
} }
@ -48,7 +47,7 @@ func initInitCmd() {
// Defaults are taken from neo-preodolenie. // Defaults are taken from neo-preodolenie.
Cmd.Flags().Uint64(containerFeeCLIFlag, 1000, "Container registration fee") Cmd.Flags().Uint64(containerFeeCLIFlag, 1000, "Container registration fee")
Cmd.Flags().Uint64(containerAliasFeeCLIFlag, 500, "Container alias fee") Cmd.Flags().Uint64(containerAliasFeeCLIFlag, 500, "Container alias fee")
Cmd.Flags().String(constants.ProtoConfigPath, "", "Path to the consensus node configuration") Cmd.Flags().String(commonflags.ProtoConfigPath, "", "Path to the consensus node configuration")
Cmd.Flags().String(commonflags.LocalDumpFlag, "", "Path to the blocks dump file") Cmd.Flags().String(commonflags.LocalDumpFlag, "", "Path to the blocks dump file")
Cmd.MarkFlagsMutuallyExclusive(commonflags.ContractsInitFlag, commonflags.ContractsURLFlag) Cmd.MarkFlagsMutuallyExclusive(commonflags.ContractsInitFlag, commonflags.ContractsURLFlag)
} }


@ -13,7 +13,7 @@ import (
) )
func listNetmapCandidatesNodes(cmd *cobra.Command, _ []string) { func listNetmapCandidatesNodes(cmd *cobra.Command, _ []string) {
c, err := helper.GetN3Client(viper.GetViper()) c, err := helper.NewRemoteClient(viper.GetViper())
commonCmd.ExitOnErr(cmd, "can't create N3 client: %w", err) commonCmd.ExitOnErr(cmd, "can't create N3 client: %w", err)
inv := invoker.New(c, nil) inv := invoker.New(c, nil)


@ -12,7 +12,6 @@ var (
Short: "List netmap candidates nodes", Short: "List netmap candidates nodes",
PreRun: func(cmd *cobra.Command, _ []string) { PreRun: func(cmd *cobra.Command, _ []string) {
_ = viper.BindPFlag(commonflags.EndpointFlag, cmd.Flags().Lookup(commonflags.EndpointFlag)) _ = viper.BindPFlag(commonflags.EndpointFlag, cmd.Flags().Lookup(commonflags.EndpointFlag))
_ = viper.BindPFlag(commonflags.AlphabetWalletsFlag, cmd.Flags().Lookup(commonflags.AlphabetWalletsFlag))
}, },
Run: listNetmapCandidatesNodes, Run: listNetmapCandidatesNodes,
} }


@ -24,7 +24,7 @@ func initRegisterCmd() {
} }
func registerDomain(cmd *cobra.Command, _ []string) { func registerDomain(cmd *cobra.Command, _ []string) {
c, actor, _ := getRPCClient(cmd) c, actor := nnsWriter(cmd)
name, _ := cmd.Flags().GetString(nnsNameFlag) name, _ := cmd.Flags().GetString(nnsNameFlag)
email, _ := cmd.Flags().GetString(nnsEmailFlag) email, _ := cmd.Flags().GetString(nnsEmailFlag)
@ -53,7 +53,7 @@ func initDeleteCmd() {
} }
func deleteDomain(cmd *cobra.Command, _ []string) { func deleteDomain(cmd *cobra.Command, _ []string) {
c, actor, _ := getRPCClient(cmd) c, actor := nnsWriter(cmd)
name, _ := cmd.Flags().GetString(nnsNameFlag) name, _ := cmd.Flags().GetString(nnsNameFlag)
h, vub, err := c.DeleteDomain(name) h, vub, err := c.DeleteDomain(name)


@ -5,15 +5,15 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/modules/morph/constants" "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/modules/morph/constants"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/modules/morph/helper" "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/modules/morph/helper"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common" commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"github.com/nspcc-dev/neo-go/pkg/rpcclient/invoker"
"github.com/nspcc-dev/neo-go/pkg/rpcclient/management" "github.com/nspcc-dev/neo-go/pkg/rpcclient/management"
"github.com/nspcc-dev/neo-go/pkg/util"
"github.com/spf13/cobra" "github.com/spf13/cobra"
"github.com/spf13/viper" "github.com/spf13/viper"
) )
func getRPCClient(cmd *cobra.Command) (*client.Contract, *helper.LocalActor, util.Uint160) { func nnsWriter(cmd *cobra.Command) (*client.Contract, *helper.LocalActor) {
v := viper.GetViper() v := viper.GetViper()
c, err := helper.GetN3Client(v) c, err := helper.NewRemoteClient(v)
commonCmd.ExitOnErr(cmd, "unable to create NEO rpc client: %w", err) commonCmd.ExitOnErr(cmd, "unable to create NEO rpc client: %w", err)
ac, err := helper.NewLocalActor(cmd, c, constants.CommitteeAccountName) ac, err := helper.NewLocalActor(cmd, c, constants.CommitteeAccountName)
@ -22,5 +22,17 @@ func getRPCClient(cmd *cobra.Command) (*client.Contract, *helper.LocalActor, uti
r := management.NewReader(ac.Invoker) r := management.NewReader(ac.Invoker)
nnsCs, err := helper.GetContractByID(r, 1) nnsCs, err := helper.GetContractByID(r, 1)
commonCmd.ExitOnErr(cmd, "can't get NNS contract state: %w", err) commonCmd.ExitOnErr(cmd, "can't get NNS contract state: %w", err)
return client.New(ac, nnsCs.Hash), ac, nnsCs.Hash return client.New(ac, nnsCs.Hash), ac
}
func nnsReader(cmd *cobra.Command) (*client.ContractReader, *invoker.Invoker) {
c, err := helper.NewRemoteClient(viper.GetViper())
commonCmd.ExitOnErr(cmd, "unable to create NEO rpc client: %w", err)
inv := invoker.New(c, nil)
r := management.NewReader(inv)
nnsCs, err := helper.GetContractByID(r, 1)
commonCmd.ExitOnErr(cmd, "can't get NNS contract state: %w", err)
return client.NewReader(inv, nnsCs.Hash), inv
} }
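The former getRPCClient, which always built a committee actor, is split in two: nnsWriter keeps the actor for state-changing calls, while nnsReader needs only a plain invoker, which is why the alphabet-wallets flag disappears from the read-only subcommands below. A hedged sketch of a read-only caller (resolveExample is illustrative, not part of this diff):

    // Queries need only an RPC endpoint, no alphabet wallets.
    func resolveExample(cmd *cobra.Command, name string) {
        c, _ := nnsReader(cmd) // *client.ContractReader over a plain invoker
        items, err := c.GetRecords(name, big.NewInt(int64(nns.CNAME)))
        commonCmd.ExitOnErr(cmd, "unable to get records: %w", err)
        _ = items
    }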


@ -8,7 +8,6 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-contract/nns" "git.frostfs.info/TrueCloudLab/frostfs-contract/nns"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/commonflags" "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/commonflags"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common" commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"github.com/nspcc-dev/neo-go/pkg/rpcclient/unwrap"
"github.com/nspcc-dev/neo-go/pkg/vm/stackitem" "github.com/nspcc-dev/neo-go/pkg/vm/stackitem"
"github.com/spf13/cobra" "github.com/spf13/cobra"
) )
@ -29,7 +28,6 @@ func initAddRecordCmd() {
func initGetRecordsCmd() { func initGetRecordsCmd() {
Cmd.AddCommand(getRecordsCmd) Cmd.AddCommand(getRecordsCmd)
getRecordsCmd.Flags().StringP(commonflags.EndpointFlag, commonflags.EndpointFlagShort, "", commonflags.EndpointFlagDesc) getRecordsCmd.Flags().StringP(commonflags.EndpointFlag, commonflags.EndpointFlagShort, "", commonflags.EndpointFlagDesc)
getRecordsCmd.Flags().String(commonflags.AlphabetWalletsFlag, "", commonflags.AlphabetWalletsFlagDesc)
getRecordsCmd.Flags().String(nnsNameFlag, "", nnsNameFlagDesc) getRecordsCmd.Flags().String(nnsNameFlag, "", nnsNameFlagDesc)
getRecordsCmd.Flags().String(nnsRecordTypeFlag, "", nnsRecordTypeFlagDesc) getRecordsCmd.Flags().String(nnsRecordTypeFlag, "", nnsRecordTypeFlagDesc)
@ -61,7 +59,7 @@ func initDelRecordCmd() {
} }
func addRecord(cmd *cobra.Command, _ []string) { func addRecord(cmd *cobra.Command, _ []string) {
c, actor, _ := getRPCClient(cmd) c, actor := nnsWriter(cmd)
name, _ := cmd.Flags().GetString(nnsNameFlag) name, _ := cmd.Flags().GetString(nnsNameFlag)
data, _ := cmd.Flags().GetString(nnsRecordDataFlag) data, _ := cmd.Flags().GetString(nnsRecordDataFlag)
recordType, _ := cmd.Flags().GetString(nnsRecordTypeFlag) recordType, _ := cmd.Flags().GetString(nnsRecordTypeFlag)
@ -77,16 +75,16 @@ func addRecord(cmd *cobra.Command, _ []string) {
} }
func getRecords(cmd *cobra.Command, _ []string) { func getRecords(cmd *cobra.Command, _ []string) {
c, act, hash := getRPCClient(cmd) c, inv := nnsReader(cmd)
name, _ := cmd.Flags().GetString(nnsNameFlag) name, _ := cmd.Flags().GetString(nnsNameFlag)
recordType, _ := cmd.Flags().GetString(nnsRecordTypeFlag) recordType, _ := cmd.Flags().GetString(nnsRecordTypeFlag)
if recordType == "" { if recordType == "" {
sid, r, err := unwrap.SessionIterator(act.Invoker.Call(hash, "getAllRecords", name)) sid, r, err := c.GetAllRecords(name)
commonCmd.ExitOnErr(cmd, "unable to get records: %w", err) commonCmd.ExitOnErr(cmd, "unable to get records: %w", err)
defer func() { defer func() {
_ = act.Invoker.TerminateSession(sid) _ = inv.TerminateSession(sid)
}() }()
items, err := act.Invoker.TraverseIterator(sid, &r, 0) items, err := inv.TraverseIterator(sid, &r, 0)
commonCmd.ExitOnErr(cmd, "unable to get records: %w", err) commonCmd.ExitOnErr(cmd, "unable to get records: %w", err)
for len(items) != 0 { for len(items) != 0 {
for j := range items { for j := range items {
@ -97,7 +95,7 @@ func getRecords(cmd *cobra.Command, _ []string) {
recordTypeToString(nns.RecordType(rs[1].Value().(*big.Int).Int64())), recordTypeToString(nns.RecordType(rs[1].Value().(*big.Int).Int64())),
string(bs)) string(bs))
} }
items, err = act.Invoker.TraverseIterator(sid, &r, 0) items, err = inv.TraverseIterator(sid, &r, 0)
commonCmd.ExitOnErr(cmd, "unable to get records: %w", err) commonCmd.ExitOnErr(cmd, "unable to get records: %w", err)
} }
} else { } else {
@ -114,7 +112,7 @@ func getRecords(cmd *cobra.Command, _ []string) {
} }
func delRecords(cmd *cobra.Command, _ []string) { func delRecords(cmd *cobra.Command, _ []string) {
c, actor, _ := getRPCClient(cmd) c, actor := nnsWriter(cmd)
name, _ := cmd.Flags().GetString(nnsNameFlag) name, _ := cmd.Flags().GetString(nnsNameFlag)
recordType, _ := cmd.Flags().GetString(nnsRecordTypeFlag) recordType, _ := cmd.Flags().GetString(nnsRecordTypeFlag)
typ, err := getRecordType(recordType) typ, err := getRecordType(recordType)
@ -129,7 +127,7 @@ func delRecords(cmd *cobra.Command, _ []string) {
} }
func delRecord(cmd *cobra.Command, _ []string) { func delRecord(cmd *cobra.Command, _ []string) {
c, actor, _ := getRPCClient(cmd) c, actor := nnsWriter(cmd)
name, _ := cmd.Flags().GetString(nnsNameFlag) name, _ := cmd.Flags().GetString(nnsNameFlag)
data, _ := cmd.Flags().GetString(nnsRecordDataFlag) data, _ := cmd.Flags().GetString(nnsRecordDataFlag)
recordType, _ := cmd.Flags().GetString(nnsRecordTypeFlag) recordType, _ := cmd.Flags().GetString(nnsRecordTypeFlag)
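getRecords now drives the listing through the generated reader: GetAllRecords opens a server-side iterator session, TraverseIterator pages through it, and TerminateSession closes it. A hedged distillation of that pattern (iterateRecords is illustrative, not part of this diff):

    // c and inv are the *client.ContractReader and *invoker.Invoker from nnsReader.
    func iterateRecords(c *client.ContractReader, inv *invoker.Invoker, name string) error {
        sid, r, err := c.GetAllRecords(name)
        if err != nil {
            return err
        }
        defer func() { _ = inv.TerminateSession(sid) }()
        for {
            items, err := inv.TraverseIterator(sid, &r, 0)
            if err != nil || len(items) == 0 {
                return err
            }
            // decode each stack item here, as the command body does
        }
    }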


@ -14,7 +14,7 @@ func initRenewCmd() {
} }
func renewDomain(cmd *cobra.Command, _ []string) { func renewDomain(cmd *cobra.Command, _ []string) {
c, actor, _ := getRPCClient(cmd) c, actor := nnsWriter(cmd)
name, _ := cmd.Flags().GetString(nnsNameFlag) name, _ := cmd.Flags().GetString(nnsNameFlag)
h, vub, err := c.Renew(name) h, vub, err := c.Renew(name)
commonCmd.ExitOnErr(cmd, "unable to renew domain: %w", err) commonCmd.ExitOnErr(cmd, "unable to renew domain: %w", err)


@ -18,12 +18,11 @@ const (
func initTokensCmd() { func initTokensCmd() {
Cmd.AddCommand(tokensCmd) Cmd.AddCommand(tokensCmd)
tokensCmd.Flags().StringP(commonflags.EndpointFlag, commonflags.EndpointFlagShort, "", commonflags.EndpointFlagDesc) tokensCmd.Flags().StringP(commonflags.EndpointFlag, commonflags.EndpointFlagShort, "", commonflags.EndpointFlagDesc)
tokensCmd.Flags().String(commonflags.AlphabetWalletsFlag, "", commonflags.AlphabetWalletsFlagDesc)
tokensCmd.Flags().BoolP(commonflags.Verbose, commonflags.VerboseShorthand, false, verboseDesc) tokensCmd.Flags().BoolP(commonflags.Verbose, commonflags.VerboseShorthand, false, verboseDesc)
} }
func listTokens(cmd *cobra.Command, _ []string) { func listTokens(cmd *cobra.Command, _ []string) {
c, _, _ := getRPCClient(cmd) c, _ := nnsReader(cmd)
it, err := c.Tokens() it, err := c.Tokens()
commonCmd.ExitOnErr(cmd, "unable to get tokens: %w", err) commonCmd.ExitOnErr(cmd, "unable to get tokens: %w", err)
for toks, err := it.Next(10); err == nil && len(toks) > 0; toks, err = it.Next(10) { for toks, err := it.Next(10); err == nil && len(toks) > 0; toks, err = it.Next(10) {
@ -41,7 +40,7 @@ func listTokens(cmd *cobra.Command, _ []string) {
} }
} }
func getCnameRecord(c *client.Contract, token []byte) (string, error) { func getCnameRecord(c *client.ContractReader, token []byte) (string, error) {
items, err := c.GetRecords(string(token), big.NewInt(int64(nns.CNAME))) items, err := c.GetRecords(string(token), big.NewInt(int64(nns.CNAME)))
// GetRecords returns the error "not an array" if the domain does not contain records. // GetRecords returns the error "not an array" if the domain does not contain records.


@ -30,7 +30,7 @@ func initUpdateCmd() {
} }
func updateSOA(cmd *cobra.Command, _ []string) { func updateSOA(cmd *cobra.Command, _ []string) {
c, actor, _ := getRPCClient(cmd) c, actor := nnsWriter(cmd)
name, _ := cmd.Flags().GetString(nnsNameFlag) name, _ := cmd.Flags().GetString(nnsNameFlag)
email, _ := cmd.Flags().GetString(nnsEmailFlag) email, _ := cmd.Flags().GetString(nnsEmailFlag)


@ -89,7 +89,7 @@ func depositNotary(cmd *cobra.Command, _ []string) error {
} }
func transferGas(cmd *cobra.Command, acc *wallet.Account, accHash util.Uint160, gasAmount fixedn.Fixed8, till int64) error { func transferGas(cmd *cobra.Command, acc *wallet.Account, accHash util.Uint160, gasAmount fixedn.Fixed8, till int64) error {
c, err := helper.GetN3Client(viper.GetViper()) c, err := helper.NewRemoteClient(viper.GetViper())
if err != nil { if err != nil {
return err return err
} }


@ -62,7 +62,7 @@ func SetPolicyCmd(cmd *cobra.Command, args []string) error {
} }
func dumpPolicyCmd(cmd *cobra.Command, _ []string) error { func dumpPolicyCmd(cmd *cobra.Command, _ []string) error {
c, err := helper.GetN3Client(viper.GetViper()) c, err := helper.NewRemoteClient(viper.GetViper())
commonCmd.ExitOnErr(cmd, "can't create N3 client:", err) commonCmd.ExitOnErr(cmd, "can't create N3 client:", err)
inv := invoker.New(c, nil) inv := invoker.New(c, nil)


@ -1,56 +0,0 @@
package control
import (
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/rpc/client"
"github.com/spf13/cobra"
)
const ignoreErrorsFlag = "no-errors"
var evacuateShardCmd = &cobra.Command{
Use: "evacuate",
Short: "Evacuate objects from shard",
Long: "Evacuate objects from shard to other shards",
Run: evacuateShard,
Deprecated: "use frostfs-cli control shards evacuation start",
}
func evacuateShard(cmd *cobra.Command, _ []string) {
pk := key.Get(cmd)
req := &control.EvacuateShardRequest{Body: new(control.EvacuateShardRequest_Body)}
req.Body.Shard_ID = getShardIDList(cmd)
req.Body.IgnoreErrors, _ = cmd.Flags().GetBool(ignoreErrorsFlag)
signRequest(cmd, pk, req)
cli := getClient(cmd, pk)
var resp *control.EvacuateShardResponse
var err error
err = cli.ExecRaw(func(client *client.Client) error {
resp, err = control.EvacuateShard(client, req)
return err
})
commonCmd.ExitOnErr(cmd, "rpc error: %w", err)
cmd.Printf("Objects moved: %d\n", resp.GetBody().GetCount())
verifyResponse(cmd, resp.GetSignature(), resp.GetBody())
cmd.Println("Shard has successfully been evacuated.")
}
func initControlEvacuateShardCmd() {
initControlFlags(evacuateShardCmd)
flags := evacuateShardCmd.Flags()
flags.StringSlice(shardIDFlag, nil, "List of shard IDs in base58 encoding")
flags.Bool(shardAllFlag, false, "Process all shards")
flags.Bool(ignoreErrorsFlag, false, "Skip invalid/unreadable objects")
evacuateShardCmd.MarkFlagsMutuallyExclusive(shardIDFlag, shardAllFlag)
}


@ -21,6 +21,7 @@ const (
noProgressFlag = "no-progress" noProgressFlag = "no-progress"
scopeFlag = "scope" scopeFlag = "scope"
repOneOnlyFlag = "rep-one-only" repOneOnlyFlag = "rep-one-only"
ignoreErrorsFlag = "no-errors"
containerWorkerCountFlag = "container-worker-count" containerWorkerCountFlag = "container-worker-count"
objectWorkerCountFlag = "object-worker-count" objectWorkerCountFlag = "object-worker-count"


@ -13,7 +13,6 @@ var shardsCmd = &cobra.Command{
func initControlShardsCmd() { func initControlShardsCmd() {
shardsCmd.AddCommand(listShardsCmd) shardsCmd.AddCommand(listShardsCmd)
shardsCmd.AddCommand(setShardModeCmd) shardsCmd.AddCommand(setShardModeCmd)
shardsCmd.AddCommand(evacuateShardCmd)
shardsCmd.AddCommand(evacuationShardCmd) shardsCmd.AddCommand(evacuationShardCmd)
shardsCmd.AddCommand(flushCacheCmd) shardsCmd.AddCommand(flushCacheCmd)
shardsCmd.AddCommand(doctorCmd) shardsCmd.AddCommand(doctorCmd)
@ -23,7 +22,6 @@ func initControlShardsCmd() {
initControlShardsListCmd() initControlShardsListCmd()
initControlSetShardModeCmd() initControlSetShardModeCmd()
initControlEvacuateShardCmd()
initControlEvacuationShardCmd() initControlEvacuationShardCmd()
initControlFlushCacheCmd() initControlFlushCacheCmd()
initControlDoctorCmd() initControlDoctorCmd()


@ -1,15 +1,12 @@
package object package object
import ( import (
"bytes"
"cmp"
"context" "context"
"crypto/ecdsa" "crypto/ecdsa"
"encoding/hex" "encoding/hex"
"encoding/json" "encoding/json"
"errors" "errors"
"fmt" "fmt"
"slices"
"sync" "sync"
internalclient "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/client" internalclient "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/client"
@ -507,7 +504,6 @@ func isObjectStoredOnNode(ctx context.Context, cmd *cobra.Command, cnrID cid.ID,
} }
func printPlacement(cmd *cobra.Command, objID oid.ID, objects []phyObject, result *objectNodesResult) { func printPlacement(cmd *cobra.Command, objID oid.ID, objects []phyObject, result *objectNodesResult) {
normilizeObjectNodesResult(objects, result)
if json, _ := cmd.Flags().GetBool(commonflags.JSON); json { if json, _ := cmd.Flags().GetBool(commonflags.JSON); json {
printObjectNodesAsJSON(cmd, objID, objects, result) printObjectNodesAsJSON(cmd, objID, objects, result)
} else { } else {
@ -515,34 +511,6 @@ func printPlacement(cmd *cobra.Command, objID oid.ID, objects []phyObject, resul
} }
} }
func normilizeObjectNodesResult(objects []phyObject, result *objectNodesResult) {
slices.SortFunc(objects, func(lhs, rhs phyObject) int {
if lhs.ecHeader == nil && rhs.ecHeader == nil {
return bytes.Compare(lhs.objectID[:], rhs.objectID[:])
}
if lhs.ecHeader == nil {
return -1
}
if rhs.ecHeader == nil {
return 1
}
if lhs.ecHeader.parent == rhs.ecHeader.parent {
return cmp.Compare(lhs.ecHeader.index, rhs.ecHeader.index)
}
return bytes.Compare(lhs.ecHeader.parent[:], rhs.ecHeader.parent[:])
})
for _, obj := range objects {
op := result.placements[obj.objectID]
slices.SortFunc(op.confirmedNodes, func(lhs, rhs netmapSDK.NodeInfo) int {
return bytes.Compare(lhs.PublicKey(), rhs.PublicKey())
})
slices.SortFunc(op.requiredNodes, func(lhs, rhs netmapSDK.NodeInfo) int {
return bytes.Compare(lhs.PublicKey(), rhs.PublicKey())
})
result.placements[obj.objectID] = op
}
}
func printObjectNodesAsText(cmd *cobra.Command, objID oid.ID, objects []phyObject, result *objectNodesResult) { func printObjectNodesAsText(cmd *cobra.Command, objID oid.ID, objects []phyObject, result *objectNodesResult) {
fmt.Fprintf(cmd.OutOrStdout(), "Object %s stores payload in %d data objects:\n", objID.EncodeToString(), len(objects)) fmt.Fprintf(cmd.OutOrStdout(), "Object %s stores payload in %d data objects:\n", objID.EncodeToString(), len(objects))


@ -1,6 +1,7 @@
package main package main
import ( import (
"context"
"os" "os"
"os/signal" "os/signal"
"syscall" "syscall"
@ -46,7 +47,7 @@ func reloadConfig() error {
return logPrm.Reload() return logPrm.Reload()
} }
func watchForSignal(cancel func()) { func watchForSignal(ctx context.Context, cancel func()) {
ch := make(chan os.Signal, 1) ch := make(chan os.Signal, 1)
signal.Notify(ch, syscall.SIGINT, syscall.SIGTERM) signal.Notify(ch, syscall.SIGINT, syscall.SIGTERM)
@ -58,49 +59,49 @@ func watchForSignal(cancel func()) {
// signals causing application to shut down should have priority over // signals causing application to shut down should have priority over
// reconfiguration signal // reconfiguration signal
case <-ch: case <-ch:
log.Info(logs.FrostFSNodeTerminationSignalHasBeenReceivedStopping) log.Info(ctx, logs.FrostFSNodeTerminationSignalHasBeenReceivedStopping)
cancel() cancel()
shutdown() shutdown(ctx)
log.Info(logs.FrostFSNodeTerminationSignalProcessingIsComplete) log.Info(ctx, logs.FrostFSNodeTerminationSignalProcessingIsComplete)
return return
case err := <-intErr: // internal application error case err := <-intErr: // internal application error
log.Info(logs.FrostFSIRInternalError, zap.String("msg", err.Error())) log.Info(ctx, logs.FrostFSIRInternalError, zap.String("msg", err.Error()))
cancel() cancel()
shutdown() shutdown(ctx)
return return
default: default:
// block until any signal is received // block until any signal is received
select { select {
case <-ch: case <-ch:
log.Info(logs.FrostFSNodeTerminationSignalHasBeenReceivedStopping) log.Info(ctx, logs.FrostFSNodeTerminationSignalHasBeenReceivedStopping)
cancel() cancel()
shutdown() shutdown(ctx)
log.Info(logs.FrostFSNodeTerminationSignalProcessingIsComplete) log.Info(ctx, logs.FrostFSNodeTerminationSignalProcessingIsComplete)
return return
case err := <-intErr: // internal application error case err := <-intErr: // internal application error
log.Info(logs.FrostFSIRInternalError, zap.String("msg", err.Error())) log.Info(ctx, logs.FrostFSIRInternalError, zap.String("msg", err.Error()))
cancel() cancel()
shutdown() shutdown(ctx)
return return
case <-sighupCh: case <-sighupCh:
log.Info(logs.FrostFSNodeSIGHUPHasBeenReceivedRereadingConfiguration) log.Info(ctx, logs.FrostFSNodeSIGHUPHasBeenReceivedRereadingConfiguration)
if !innerRing.CompareAndSwapHealthStatus(control.HealthStatus_READY, control.HealthStatus_RECONFIGURING) { if !innerRing.CompareAndSwapHealthStatus(ctx, control.HealthStatus_READY, control.HealthStatus_RECONFIGURING) {
log.Info(logs.FrostFSNodeSIGHUPSkip) log.Info(ctx, logs.FrostFSNodeSIGHUPSkip)
break break
} }
err := reloadConfig() err := reloadConfig()
if err != nil { if err != nil {
log.Error(logs.FrostFSNodeConfigurationReading, zap.Error(err)) log.Error(ctx, logs.FrostFSNodeConfigurationReading, zap.Error(err))
} }
pprofCmp.reload() pprofCmp.reload(ctx)
metricsCmp.reload() metricsCmp.reload(ctx)
log.Info(logs.FrostFSIRReloadExtraWallets) log.Info(ctx, logs.FrostFSIRReloadExtraWallets)
err = innerRing.SetExtraWallets(cfg) err = innerRing.SetExtraWallets(cfg)
if err != nil { if err != nil {
log.Error(logs.FrostFSNodeConfigurationReading, zap.Error(err)) log.Error(ctx, logs.FrostFSNodeConfigurationReading, zap.Error(err))
} }
innerRing.CompareAndSwapHealthStatus(control.HealthStatus_RECONFIGURING, control.HealthStatus_READY) innerRing.CompareAndSwapHealthStatus(ctx, control.HealthStatus_RECONFIGURING, control.HealthStatus_READY)
log.Info(logs.FrostFSNodeConfigurationHasBeenReloadedSuccessfully) log.Info(ctx, logs.FrostFSNodeConfigurationHasBeenReloadedSuccessfully)
} }
} }
} }
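The hunks in this file and the ones below apply one mechanical rule: the logger and the component lifecycle methods now take a context.Context as their first argument, so main() threads ctx through every call. A hedged before/after distillation:

    log.Info(ctx, logs.CommonApplicationStarted) // was: log.Info(logs.CommonApplicationStarted)
    shutdown(ctx)                                // was: shutdown()
    pprofCmp.reload(ctx)                         // was: pprofCmp.reload()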


@ -1,6 +1,7 @@
package main package main
import ( import (
"context"
"net/http" "net/http"
"time" "time"
@ -24,8 +25,8 @@ const (
shutdownTimeoutKeyPostfix = ".shutdown_timeout" shutdownTimeoutKeyPostfix = ".shutdown_timeout"
) )
func (c *httpComponent) init() { func (c *httpComponent) init(ctx context.Context) {
log.Info("init " + c.name) log.Info(ctx, "init "+c.name)
c.enabled = cfg.GetBool(c.name + enabledKeyPostfix) c.enabled = cfg.GetBool(c.name + enabledKeyPostfix)
c.address = cfg.GetString(c.name + addressKeyPostfix) c.address = cfg.GetString(c.name + addressKeyPostfix)
c.shutdownDur = cfg.GetDuration(c.name + shutdownTimeoutKeyPostfix) c.shutdownDur = cfg.GetDuration(c.name + shutdownTimeoutKeyPostfix)
@ -39,14 +40,14 @@ func (c *httpComponent) init() {
httputil.WithShutdownTimeout(c.shutdownDur), httputil.WithShutdownTimeout(c.shutdownDur),
) )
} else { } else {
log.Info(c.name + " is disabled, skip") log.Info(ctx, c.name+" is disabled, skip")
c.srv = nil c.srv = nil
} }
} }
func (c *httpComponent) start() { func (c *httpComponent) start(ctx context.Context) {
if c.srv != nil { if c.srv != nil {
log.Info("start " + c.name) log.Info(ctx, "start "+c.name)
wg.Add(1) wg.Add(1)
go func() { go func() {
defer wg.Done() defer wg.Done()
@ -55,10 +56,10 @@ func (c *httpComponent) start() {
} }
} }
func (c *httpComponent) shutdown() error { func (c *httpComponent) shutdown(ctx context.Context) error {
if c.srv != nil { if c.srv != nil {
log.Info("shutdown " + c.name) log.Info(ctx, "shutdown "+c.name)
return c.srv.Shutdown() return c.srv.Shutdown(ctx)
} }
return nil return nil
} }
@ -70,17 +71,17 @@ func (c *httpComponent) needReload() bool {
return enabled != c.enabled || enabled && (address != c.address || dur != c.shutdownDur) return enabled != c.enabled || enabled && (address != c.address || dur != c.shutdownDur)
} }
func (c *httpComponent) reload() { func (c *httpComponent) reload(ctx context.Context) {
log.Info("reload " + c.name) log.Info(ctx, "reload "+c.name)
if c.needReload() { if c.needReload() {
log.Info(c.name + " config updated") log.Info(ctx, c.name+" config updated")
if err := c.shutdown(); err != nil { if err := c.shutdown(ctx); err != nil {
log.Debug(logs.FrostFSIRCouldNotShutdownHTTPServer, log.Debug(ctx, logs.FrostFSIRCouldNotShutdownHTTPServer,
zap.String("error", err.Error()), zap.String("error", err.Error()),
) )
} else { } else {
c.init() c.init(ctx)
c.start() c.start(ctx)
} }
} }
} }


@ -87,48 +87,48 @@ func main() {
ctx, cancel := context.WithCancel(context.Background()) ctx, cancel := context.WithCancel(context.Background())
pprofCmp = newPprofComponent() pprofCmp = newPprofComponent()
pprofCmp.init() pprofCmp.init(ctx)
metricsCmp = newMetricsComponent() metricsCmp = newMetricsComponent()
metricsCmp.init() metricsCmp.init(ctx)
audit.Store(cfg.GetBool("audit.enabled")) audit.Store(cfg.GetBool("audit.enabled"))
innerRing, err = innerring.New(ctx, log, cfg, intErr, metrics, cmode, audit) innerRing, err = innerring.New(ctx, log, cfg, intErr, metrics, cmode, audit)
exitErr(err) exitErr(err)
pprofCmp.start() pprofCmp.start(ctx)
metricsCmp.start() metricsCmp.start(ctx)
// start inner ring // start inner ring
err = innerRing.Start(ctx, intErr) err = innerRing.Start(ctx, intErr)
exitErr(err) exitErr(err)
log.Info(logs.CommonApplicationStarted, log.Info(ctx, logs.CommonApplicationStarted,
zap.String("version", misc.Version)) zap.String("version", misc.Version))
watchForSignal(cancel) watchForSignal(ctx, cancel)
<-ctx.Done() // graceful shutdown <-ctx.Done() // graceful shutdown
log.Debug(logs.FrostFSNodeWaitingForAllProcessesToStop) log.Debug(ctx, logs.FrostFSNodeWaitingForAllProcessesToStop)
wg.Wait() wg.Wait()
log.Info(logs.FrostFSIRApplicationStopped) log.Info(ctx, logs.FrostFSIRApplicationStopped)
} }
func shutdown() { func shutdown(ctx context.Context) {
innerRing.Stop() innerRing.Stop(ctx)
if err := metricsCmp.shutdown(); err != nil { if err := metricsCmp.shutdown(ctx); err != nil {
log.Debug(logs.FrostFSIRCouldNotShutdownHTTPServer, log.Debug(ctx, logs.FrostFSIRCouldNotShutdownHTTPServer,
zap.String("error", err.Error()), zap.String("error", err.Error()),
) )
} }
if err := pprofCmp.shutdown(); err != nil { if err := pprofCmp.shutdown(ctx); err != nil {
log.Debug(logs.FrostFSIRCouldNotShutdownHTTPServer, log.Debug(ctx, logs.FrostFSIRCouldNotShutdownHTTPServer,
zap.String("error", err.Error()), zap.String("error", err.Error()),
) )
} }
if err := sdnotify.ClearStatus(); err != nil { if err := sdnotify.ClearStatus(); err != nil {
log.Error(logs.FailedToReportStatusToSystemd, zap.Error(err)) log.Error(ctx, logs.FailedToReportStatusToSystemd, zap.Error(err))
} }
} }


@ -1,6 +1,7 @@
package main package main
import ( import (
"context"
"runtime" "runtime"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs" "git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs"
@ -28,8 +29,8 @@ func newPprofComponent() *pprofComponent {
} }
} }
func (c *pprofComponent) init() { func (c *pprofComponent) init(ctx context.Context) {
c.httpComponent.init() c.httpComponent.init(ctx)
if c.enabled { if c.enabled {
c.blockRate = cfg.GetInt(pprofBlockRateKey) c.blockRate = cfg.GetInt(pprofBlockRateKey)
@ -51,17 +52,17 @@ func (c *pprofComponent) needReload() bool {
c.enabled && (c.blockRate != blockRate || c.mutexRate != mutexRate) c.enabled && (c.blockRate != blockRate || c.mutexRate != mutexRate)
} }
func (c *pprofComponent) reload() { func (c *pprofComponent) reload(ctx context.Context) {
log.Info("reload " + c.name) log.Info(ctx, "reload "+c.name)
if c.needReload() { if c.needReload() {
log.Info(c.name + " config updated") log.Info(ctx, c.name+" config updated")
if err := c.shutdown(); err != nil { if err := c.shutdown(ctx); err != nil {
log.Debug(logs.FrostFSIRCouldNotShutdownHTTPServer, log.Debug(ctx, logs.FrostFSIRCouldNotShutdownHTTPServer,
zap.String("error", err.Error())) zap.String("error", err.Error()))
return return
} }
c.init() c.init(ctx)
c.start() c.start(ctx)
} }
} }


@ -28,7 +28,7 @@ func inspectFunc(cmd *cobra.Command, _ []string) {
common.ExitOnErr(cmd, common.Errf("invalid address argument: %w", err)) common.ExitOnErr(cmd, common.Errf("invalid address argument: %w", err))
blz := openBlobovnicza(cmd) blz := openBlobovnicza(cmd)
defer blz.Close() defer blz.Close(cmd.Context())
var prm blobovnicza.GetPrm var prm blobovnicza.GetPrm
prm.SetAddress(addr) prm.SetAddress(addr)


@ -32,7 +32,7 @@ func listFunc(cmd *cobra.Command, _ []string) {
} }
blz := openBlobovnicza(cmd) blz := openBlobovnicza(cmd)
defer blz.Close() defer blz.Close(cmd.Context())
err := blobovnicza.IterateAddresses(context.Background(), blz, wAddr) err := blobovnicza.IterateAddresses(context.Background(), blz, wAddr)
common.ExitOnErr(cmd, common.Errf("blobovnicza iterator failure: %w", err)) common.ExitOnErr(cmd, common.Errf("blobovnicza iterator failure: %w", err))


@ -27,7 +27,7 @@ func openBlobovnicza(cmd *cobra.Command) *blobovnicza.Blobovnicza {
blobovnicza.WithPath(vPath), blobovnicza.WithPath(vPath),
blobovnicza.WithReadOnly(true), blobovnicza.WithReadOnly(true),
) )
common.ExitOnErr(cmd, blz.Open()) common.ExitOnErr(cmd, blz.Open(cmd.Context()))
return blz return blz
} }


@ -31,7 +31,7 @@ func inspectFunc(cmd *cobra.Command, _ []string) {
common.ExitOnErr(cmd, common.Errf("invalid address argument: %w", err)) common.ExitOnErr(cmd, common.Errf("invalid address argument: %w", err))
db := openMeta(cmd) db := openMeta(cmd)
defer db.Close() defer db.Close(cmd.Context())
storageID := meta.StorageIDPrm{} storageID := meta.StorageIDPrm{}
storageID.SetAddress(addr) storageID.SetAddress(addr)


@ -19,7 +19,7 @@ func init() {
func listGarbageFunc(cmd *cobra.Command, _ []string) { func listGarbageFunc(cmd *cobra.Command, _ []string) {
db := openMeta(cmd) db := openMeta(cmd)
defer db.Close() defer db.Close(cmd.Context())
var garbPrm meta.GarbageIterationPrm var garbPrm meta.GarbageIterationPrm
garbPrm.SetHandler( garbPrm.SetHandler(


@ -19,7 +19,7 @@ func init() {
func listGraveyardFunc(cmd *cobra.Command, _ []string) { func listGraveyardFunc(cmd *cobra.Command, _ []string) {
db := openMeta(cmd) db := openMeta(cmd)
defer db.Close() defer db.Close(cmd.Context())
var gravePrm meta.GraveyardIterationPrm var gravePrm meta.GraveyardIterationPrm
gravePrm.SetHandler( gravePrm.SetHandler(


@ -397,16 +397,16 @@ type internals struct {
} }
// starts node's maintenance. // starts node's maintenance.
func (c *cfg) startMaintenance() { func (c *cfg) startMaintenance(ctx context.Context) {
c.isMaintenance.Store(true) c.isMaintenance.Store(true)
c.cfgNetmap.state.setControlNetmapStatus(control.NetmapStatus_MAINTENANCE) c.cfgNetmap.state.setControlNetmapStatus(control.NetmapStatus_MAINTENANCE)
c.log.Info(logs.FrostFSNodeStartedLocalNodesMaintenance) c.log.Info(ctx, logs.FrostFSNodeStartedLocalNodesMaintenance)
} }
// stops node's maintenance. // stops node's maintenance.
func (c *internals) stopMaintenance() { func (c *internals) stopMaintenance(ctx context.Context) {
if c.isMaintenance.CompareAndSwap(true, false) { if c.isMaintenance.CompareAndSwap(true, false) {
c.log.Info(logs.FrostFSNodeStoppedLocalNodesMaintenance) c.log.Info(ctx, logs.FrostFSNodeStoppedLocalNodesMaintenance)
} }
} }
@ -591,8 +591,6 @@ type cfgMorph struct {
client *client.Client client *client.Client
notaryEnabled bool
// TTL of Sidechain cached values. Non-positive value disables caching. // TTL of Sidechain cached values. Non-positive value disables caching.
cacheTTL time.Duration cacheTTL time.Duration
@ -705,7 +703,7 @@ func initCfg(appCfg *config.Config) *cfg {
log, err := logger.NewLogger(logPrm) log, err := logger.NewLogger(logPrm)
fatalOnErr(err) fatalOnErr(err)
if loggerconfig.ToLokiConfig(appCfg).Enabled { if loggerconfig.ToLokiConfig(appCfg).Enabled {
log.Logger = log.Logger.WithOptions(zap.WrapCore(func(core zapcore.Core) zapcore.Core { log.WithOptions(zap.WrapCore(func(core zapcore.Core) zapcore.Core {
lokiCore := lokicore.New(core, loggerconfig.ToLokiConfig(appCfg)) lokiCore := lokicore.New(core, loggerconfig.ToLokiConfig(appCfg))
return lokiCore return lokiCore
})) }))
@ -1089,7 +1087,7 @@ func (c *cfg) LocalAddress() network.AddressGroup {
func initLocalStorage(ctx context.Context, c *cfg) { func initLocalStorage(ctx context.Context, c *cfg) {
ls := engine.New(c.engineOpts()...) ls := engine.New(c.engineOpts()...)
addNewEpochAsyncNotificationHandler(c, func(ev event.Event) { addNewEpochAsyncNotificationHandler(c, func(ctx context.Context, ev event.Event) {
ls.HandleNewEpoch(ctx, ev.(netmap2.NewEpoch).EpochNumber()) ls.HandleNewEpoch(ctx, ev.(netmap2.NewEpoch).EpochNumber())
}) })
@ -1103,10 +1101,10 @@ func initLocalStorage(ctx context.Context, c *cfg) {
shard.WithTombstoneSource(c.createTombstoneSource()), shard.WithTombstoneSource(c.createTombstoneSource()),
shard.WithContainerInfoProvider(c.createContainerInfoProvider(ctx)))...) shard.WithContainerInfoProvider(c.createContainerInfoProvider(ctx)))...)
if err != nil { if err != nil {
c.log.Error(logs.FrostFSNodeFailedToAttachShardToEngine, zap.Error(err)) c.log.Error(ctx, logs.FrostFSNodeFailedToAttachShardToEngine, zap.Error(err))
} else { } else {
shardsAttached++ shardsAttached++
c.log.Info(logs.FrostFSNodeShardAttachedToEngine, zap.Stringer("id", id)) c.log.Info(ctx, logs.FrostFSNodeShardAttachedToEngine, zap.Stringer("id", id))
} }
} }
if shardsAttached == 0 { if shardsAttached == 0 {
@ -1116,23 +1114,23 @@ func initLocalStorage(ctx context.Context, c *cfg) {
c.cfgObject.cfgLocalStorage.localStorage = ls c.cfgObject.cfgLocalStorage.localStorage = ls
c.onShutdown(func() { c.onShutdown(func() {
c.log.Info(logs.FrostFSNodeClosingComponentsOfTheStorageEngine) c.log.Info(ctx, logs.FrostFSNodeClosingComponentsOfTheStorageEngine)
err := ls.Close(context.WithoutCancel(ctx)) err := ls.Close(context.WithoutCancel(ctx))
if err != nil { if err != nil {
c.log.Info(logs.FrostFSNodeStorageEngineClosingFailure, c.log.Info(ctx, logs.FrostFSNodeStorageEngineClosingFailure,
zap.String("error", err.Error()), zap.String("error", err.Error()),
) )
} else { } else {
c.log.Info(logs.FrostFSNodeAllComponentsOfTheStorageEngineClosedSuccessfully) c.log.Info(ctx, logs.FrostFSNodeAllComponentsOfTheStorageEngineClosedSuccessfully)
} }
}) })
} }
func initAccessPolicyEngine(_ context.Context, c *cfg) { func initAccessPolicyEngine(ctx context.Context, c *cfg) {
var localOverrideDB chainbase.LocalOverrideDatabase var localOverrideDB chainbase.LocalOverrideDatabase
if nodeconfig.PersistentPolicyRules(c.appCfg).Path() == "" { if nodeconfig.PersistentPolicyRules(c.appCfg).Path() == "" {
c.log.Warn(logs.FrostFSNodePersistentRuleStorageDBPathIsNotSetInmemoryWillBeUsed) c.log.Warn(ctx, logs.FrostFSNodePersistentRuleStorageDBPathIsNotSetInmemoryWillBeUsed)
localOverrideDB = chainbase.NewInmemoryLocalOverrideDatabase() localOverrideDB = chainbase.NewInmemoryLocalOverrideDatabase()
} else { } else {
localOverrideDB = chainbase.NewBoltLocalOverrideDatabase( localOverrideDB = chainbase.NewBoltLocalOverrideDatabase(
@ -1157,7 +1155,7 @@ func initAccessPolicyEngine(_ context.Context, c *cfg) {
c.onShutdown(func() { c.onShutdown(func() {
if err := ape.LocalOverrideDatabaseCore().Close(); err != nil { if err := ape.LocalOverrideDatabaseCore().Close(); err != nil {
c.log.Warn(logs.FrostFSNodeAccessPolicyEngineClosingFailure, c.log.Warn(ctx, logs.FrostFSNodeAccessPolicyEngineClosingFailure,
zap.Error(err), zap.Error(err),
) )
} }
@ -1206,10 +1204,10 @@ func (c *cfg) setContractNodeInfo(ni *netmap.NodeInfo) {
c.cfgNetmap.state.setNodeInfo(ni) c.cfgNetmap.state.setNodeInfo(ni)
} }
func (c *cfg) updateContractNodeInfo(epoch uint64) { func (c *cfg) updateContractNodeInfo(ctx context.Context, epoch uint64) {
ni, err := c.netmapLocalNodeState(epoch) ni, err := c.netmapLocalNodeState(epoch)
if err != nil { if err != nil {
c.log.Error(logs.FrostFSNodeCouldNotUpdateNodeStateOnNewEpoch, c.log.Error(ctx, logs.FrostFSNodeCouldNotUpdateNodeStateOnNewEpoch,
zap.Uint64("epoch", epoch), zap.Uint64("epoch", epoch),
zap.String("error", err.Error())) zap.String("error", err.Error()))
return return
@ -1221,19 +1219,19 @@ func (c *cfg) updateContractNodeInfo(epoch uint64) {
// bootstrapWithState calls "addPeer" method of the Sidechain Netmap contract // bootstrapWithState calls "addPeer" method of the Sidechain Netmap contract
// with the binary-encoded information from the current node's configuration. // with the binary-encoded information from the current node's configuration.
// The state is set using the provided setter which MUST NOT be nil. // The state is set using the provided setter which MUST NOT be nil.
func (c *cfg) bootstrapWithState(stateSetter func(*netmap.NodeInfo)) error { func (c *cfg) bootstrapWithState(ctx context.Context, stateSetter func(*netmap.NodeInfo)) error {
ni := c.cfgNodeInfo.localInfo ni := c.cfgNodeInfo.localInfo
stateSetter(&ni) stateSetter(&ni)
prm := nmClient.AddPeerPrm{} prm := nmClient.AddPeerPrm{}
prm.SetNodeInfo(ni) prm.SetNodeInfo(ni)
return c.cfgNetmap.wrapper.AddPeer(prm) return c.cfgNetmap.wrapper.AddPeer(ctx, prm)
} }
// bootstrapOnline calls cfg.bootstrapWithState with "online" state. // bootstrapOnline calls cfg.bootstrapWithState with "online" state.
func bootstrapOnline(c *cfg) error { func bootstrapOnline(ctx context.Context, c *cfg) error {
return c.bootstrapWithState(func(ni *netmap.NodeInfo) { return c.bootstrapWithState(ctx, func(ni *netmap.NodeInfo) {
ni.SetStatus(netmap.Online) ni.SetStatus(netmap.Online)
}) })
} }
@ -1241,21 +1239,21 @@ func bootstrapOnline(c *cfg) error {
// bootstrap calls bootstrapWithState with: // bootstrap calls bootstrapWithState with:
// - "maintenance" state if maintenance is in progress on the current node // - "maintenance" state if maintenance is in progress on the current node
// - "online", otherwise // - "online", otherwise
func (c *cfg) bootstrap() error { func (c *cfg) bootstrap(ctx context.Context) error {
// switch to online except when under maintenance // switch to online except when under maintenance
st := c.cfgNetmap.state.controlNetmapStatus() st := c.cfgNetmap.state.controlNetmapStatus()
if st == control.NetmapStatus_MAINTENANCE { if st == control.NetmapStatus_MAINTENANCE {
c.log.Info(logs.FrostFSNodeBootstrappingWithTheMaintenanceState) c.log.Info(ctx, logs.FrostFSNodeBootstrappingWithTheMaintenanceState)
return c.bootstrapWithState(func(ni *netmap.NodeInfo) { return c.bootstrapWithState(ctx, func(ni *netmap.NodeInfo) {
ni.SetStatus(netmap.Maintenance) ni.SetStatus(netmap.Maintenance)
}) })
} }
c.log.Info(logs.FrostFSNodeBootstrappingWithOnlineState, c.log.Info(ctx, logs.FrostFSNodeBootstrappingWithOnlineState,
zap.Stringer("previous", st), zap.Stringer("previous", st),
) )
return bootstrapOnline(c) return bootstrapOnline(ctx, c)
} }
// needBootstrap checks if local node should be registered in network on bootup. // needBootstrap checks if local node should be registered in network on bootup.
@ -1280,19 +1278,19 @@ func (c *cfg) signalWatcher(ctx context.Context) {
// signals causing application to shut down should have priority over // signals causing application to shut down should have priority over
// reconfiguration signal // reconfiguration signal
case <-ch: case <-ch:
c.log.Info(logs.FrostFSNodeTerminationSignalHasBeenReceivedStopping) c.log.Info(ctx, logs.FrostFSNodeTerminationSignalHasBeenReceivedStopping)
c.shutdown() c.shutdown(ctx)
c.log.Info(logs.FrostFSNodeTerminationSignalProcessingIsComplete) c.log.Info(ctx, logs.FrostFSNodeTerminationSignalProcessingIsComplete)
return return
case err := <-c.internalErr: // internal application error case err := <-c.internalErr: // internal application error
c.log.Warn(logs.FrostFSNodeInternalApplicationError, c.log.Warn(ctx, logs.FrostFSNodeInternalApplicationError,
zap.String("message", err.Error())) zap.String("message", err.Error()))
c.shutdown() c.shutdown(ctx)
c.log.Info(logs.FrostFSNodeInternalErrorProcessingIsComplete) c.log.Info(ctx, logs.FrostFSNodeInternalErrorProcessingIsComplete)
return return
default: default:
// block until any signal is received // block until any signal is received
@ -1300,19 +1298,19 @@ func (c *cfg) signalWatcher(ctx context.Context) {
case <-sighupCh: case <-sighupCh:
c.reloadConfig(ctx) c.reloadConfig(ctx)
case <-ch: case <-ch:
c.log.Info(logs.FrostFSNodeTerminationSignalHasBeenReceivedStopping) c.log.Info(ctx, logs.FrostFSNodeTerminationSignalHasBeenReceivedStopping)
c.shutdown() c.shutdown(ctx)
c.log.Info(logs.FrostFSNodeTerminationSignalProcessingIsComplete) c.log.Info(ctx, logs.FrostFSNodeTerminationSignalProcessingIsComplete)
return return
case err := <-c.internalErr: // internal application error case err := <-c.internalErr: // internal application error
c.log.Warn(logs.FrostFSNodeInternalApplicationError, c.log.Warn(ctx, logs.FrostFSNodeInternalApplicationError,
zap.String("message", err.Error())) zap.String("message", err.Error()))
c.shutdown() c.shutdown(ctx)
c.log.Info(logs.FrostFSNodeInternalErrorProcessingIsComplete) c.log.Info(ctx, logs.FrostFSNodeInternalErrorProcessingIsComplete)
return return
} }
} }
@ -1320,17 +1318,17 @@ func (c *cfg) signalWatcher(ctx context.Context) {
} }
func (c *cfg) reloadConfig(ctx context.Context) { func (c *cfg) reloadConfig(ctx context.Context) {
c.log.Info(logs.FrostFSNodeSIGHUPHasBeenReceivedRereadingConfiguration) c.log.Info(ctx, logs.FrostFSNodeSIGHUPHasBeenReceivedRereadingConfiguration)
if !c.compareAndSwapHealthStatus(control.HealthStatus_READY, control.HealthStatus_RECONFIGURING) { if !c.compareAndSwapHealthStatus(ctx, control.HealthStatus_READY, control.HealthStatus_RECONFIGURING) {
c.log.Info(logs.FrostFSNodeSIGHUPSkip) c.log.Info(ctx, logs.FrostFSNodeSIGHUPSkip)
return return
} }
defer c.compareAndSwapHealthStatus(control.HealthStatus_RECONFIGURING, control.HealthStatus_READY) defer c.compareAndSwapHealthStatus(ctx, control.HealthStatus_RECONFIGURING, control.HealthStatus_READY)
err := c.reloadAppConfig() err := c.reloadAppConfig()
if err != nil { if err != nil {
c.log.Error(logs.FrostFSNodeConfigurationReading, zap.Error(err)) c.log.Error(ctx, logs.FrostFSNodeConfigurationReading, zap.Error(err))
return return
} }
@ -1341,7 +1339,7 @@ func (c *cfg) reloadConfig(ctx context.Context) {
logPrm, err := c.loggerPrm() logPrm, err := c.loggerPrm()
if err != nil { if err != nil {
c.log.Error(logs.FrostFSNodeLoggerConfigurationPreparation, zap.Error(err)) c.log.Error(ctx, logs.FrostFSNodeLoggerConfigurationPreparation, zap.Error(err))
return return
} }
@ -1362,25 +1360,25 @@ func (c *cfg) reloadConfig(ctx context.Context) {
err = c.cfgObject.cfgLocalStorage.localStorage.Reload(ctx, rcfg) err = c.cfgObject.cfgLocalStorage.localStorage.Reload(ctx, rcfg)
if err != nil { if err != nil {
c.log.Error(logs.FrostFSNodeStorageEngineConfigurationUpdate, zap.Error(err)) c.log.Error(ctx, logs.FrostFSNodeStorageEngineConfigurationUpdate, zap.Error(err))
return return
} }
for _, component := range components { for _, component := range components {
err = component.reloadFunc() err = component.reloadFunc()
if err != nil { if err != nil {
c.log.Error(logs.FrostFSNodeUpdatedConfigurationApplying, c.log.Error(ctx, logs.FrostFSNodeUpdatedConfigurationApplying,
zap.String("component", component.name), zap.String("component", component.name),
zap.Error(err)) zap.Error(err))
} }
} }
if err := c.dialerSource.Update(internalNetConfig(c.appCfg, c.metricsCollector.MultinetMetrics())); err != nil { if err := c.dialerSource.Update(internalNetConfig(c.appCfg, c.metricsCollector.MultinetMetrics())); err != nil {
c.log.Error(logs.FailedToUpdateMultinetConfiguration, zap.Error(err)) c.log.Error(ctx, logs.FailedToUpdateMultinetConfiguration, zap.Error(err))
return return
} }
c.log.Info(logs.FrostFSNodeConfigurationHasBeenReloadedSuccessfully) c.log.Info(ctx, logs.FrostFSNodeConfigurationHasBeenReloadedSuccessfully)
} }
func (c *cfg) getComponents(ctx context.Context, logPrm *logger.Prm) []dCmp { func (c *cfg) getComponents(ctx context.Context, logPrm *logger.Prm) []dCmp {
@ -1388,7 +1386,7 @@ func (c *cfg) getComponents(ctx context.Context, logPrm *logger.Prm) []dCmp {
components = append(components, dCmp{"logger", logPrm.Reload}) components = append(components, dCmp{"logger", logPrm.Reload})
components = append(components, dCmp{"runtime", func() error { components = append(components, dCmp{"runtime", func() error {
setRuntimeParameters(c) setRuntimeParameters(ctx, c)
return nil return nil
}}) }})
components = append(components, dCmp{"audit", func() error { components = append(components, dCmp{"audit", func() error {
@ -1403,7 +1401,7 @@ func (c *cfg) getComponents(ctx context.Context, logPrm *logger.Prm) []dCmp {
} }
updated, err := tracing.Setup(ctx, *traceConfig) updated, err := tracing.Setup(ctx, *traceConfig)
if updated { if updated {
c.log.Info(logs.FrostFSNodeTracingConfigationUpdated) c.log.Info(ctx, logs.FrostFSNodeTracingConfigationUpdated)
} }
return err return err
}}) }})
@ -1438,7 +1436,7 @@ func (c *cfg) reloadPools() error {
func (c *cfg) reloadPool(p *ants.Pool, newSize int, name string) { func (c *cfg) reloadPool(p *ants.Pool, newSize int, name string) {
oldSize := p.Cap() oldSize := p.Cap()
if oldSize != newSize { if oldSize != newSize {
c.log.Info(logs.FrostFSNodePoolConfigurationUpdate, zap.String("field", name), c.log.Info(context.Background(), logs.FrostFSNodePoolConfigurationUpdate, zap.String("field", name),
zap.Int("old", oldSize), zap.Int("new", newSize)) zap.Int("old", oldSize), zap.Int("new", newSize))
p.Tune(newSize) p.Tune(newSize)
} }
@ -1466,7 +1464,7 @@ func (c *cfg) createTombstoneSource() *tombstone.ExpirationChecker {
func (c *cfg) createContainerInfoProvider(ctx context.Context) container.InfoProvider { func (c *cfg) createContainerInfoProvider(ctx context.Context) container.InfoProvider {
return container.NewInfoProvider(func() (container.Source, error) { return container.NewInfoProvider(func() (container.Source, error) {
c.initMorphComponents(ctx) c.initMorphComponents(ctx)
cc, err := containerClient.NewFromMorph(c.cfgMorph.client, c.cfgContainer.scriptHash, 0, containerClient.TryNotary()) cc, err := containerClient.NewFromMorph(c.cfgMorph.client, c.cfgContainer.scriptHash, 0)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@ -1474,14 +1472,14 @@ func (c *cfg) createContainerInfoProvider(ctx context.Context) container.InfoPro
}) })
} }
func (c *cfg) shutdown() { func (c *cfg) shutdown(ctx context.Context) {
old := c.swapHealthStatus(control.HealthStatus_SHUTTING_DOWN) old := c.swapHealthStatus(ctx, control.HealthStatus_SHUTTING_DOWN)
if old == control.HealthStatus_SHUTTING_DOWN { if old == control.HealthStatus_SHUTTING_DOWN {
c.log.Info(logs.FrostFSNodeShutdownSkip) c.log.Info(ctx, logs.FrostFSNodeShutdownSkip)
return return
} }
if old == control.HealthStatus_STARTING { if old == control.HealthStatus_STARTING {
c.log.Warn(logs.FrostFSNodeShutdownWhenNotReady) c.log.Warn(ctx, logs.FrostFSNodeShutdownWhenNotReady)
} }
c.ctxCancel() c.ctxCancel()
@ -1491,6 +1489,6 @@ func (c *cfg) shutdown() {
} }
if err := sdnotify.ClearStatus(); err != nil { if err := sdnotify.ClearStatus(); err != nil {
c.log.Error(logs.FailedToReportStatusToSystemd, zap.Error(err)) c.log.Error(ctx, logs.FailedToReportStatusToSystemd, zap.Error(err))
} }
} }


@ -41,6 +41,10 @@ func IterateShards(c *config.Config, required bool, f func(*shardconfig.Config)
c.Sub(si), c.Sub(si),
) )
if sc.Mode() == mode.Disabled {
continue
}
// Path for the blobstor can't be present in the default section, because different shards // Path for the blobstor can't be present in the default section, because different shards
// must have different paths, so if it is missing, the shard is not here. // must have different paths, so if it is missing, the shard is not here.
// At the same time checking for "blobstor" section doesn't work properly // At the same time checking for "blobstor" section doesn't work properly
@ -50,10 +54,6 @@ func IterateShards(c *config.Config, required bool, f func(*shardconfig.Config)
} }
(*config.Config)(sc).SetDefault(def) (*config.Config)(sc).SetDefault(def)
if sc.Mode() == mode.Disabled {
continue
}
if err := f(sc); err != nil { if err := f(sc); err != nil {
return err return err
} }
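The disabled-shard check moves above the default-section merge, so a shard marked mode: disabled is skipped before its remaining keys are read or validated; the new TestIterateShards below exercises exactly this layout with shard 1 disabled. The resulting loop body, in order:

    if sc.Mode() == mode.Disabled {
        continue // skipped before defaults and per-shard path checks run
    }
    (*config.Config)(sc).SetDefault(def)
    if err := f(sc); err != nil {
        return err
    }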


@ -18,6 +18,22 @@ import (
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
) )
func TestIterateShards(t *testing.T) {
fileConfigTest := func(c *config.Config) {
var res []string
require.NoError(t,
engineconfig.IterateShards(c, false, func(sc *shardconfig.Config) error {
res = append(res, sc.Metabase().Path())
return nil
}))
require.Equal(t, []string{"abc", "xyz"}, res)
}
const cfgDir = "./testdata/shards"
configtest.ForEachFileType(cfgDir, fileConfigTest)
configtest.ForEnvFileType(t, cfgDir, fileConfigTest)
}
func TestEngineSection(t *testing.T) { func TestEngineSection(t *testing.T) {
t.Run("defaults", func(t *testing.T) { t.Run("defaults", func(t *testing.T) {
empty := configtest.EmptyConfig() empty := configtest.EmptyConfig()


@ -0,0 +1,3 @@
FROSTFS_STORAGE_SHARD_0_METABASE_PATH=abc
FROSTFS_STORAGE_SHARD_1_MODE=disabled
FROSTFS_STORAGE_SHARD_2_METABASE_PATH=xyz


@ -0,0 +1,13 @@
{
"storage.shard": {
"0": {
"metabase.path": "abc"
},
"1": {
"mode": "disabled"
},
"2": {
"metabase.path": "xyz"
}
}
}


@ -0,0 +1,7 @@
storage.shard:
0:
metabase.path: abc
1:
mode: disabled
2:
metabase.path: xyz


@ -5,6 +5,7 @@ import (
"errors" "errors"
"fmt" "fmt"
"os" "os"
"strconv"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config" "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config"
"git.frostfs.info/TrueCloudLab/frostfs-node/misc" "git.frostfs.info/TrueCloudLab/frostfs-node/misc"
@ -24,6 +25,7 @@ func ToTracingConfig(c *config.Config) (*tracing.Config, error) {
Service: "frostfs-node", Service: "frostfs-node",
InstanceID: getInstanceIDOrDefault(c), InstanceID: getInstanceIDOrDefault(c),
Version: misc.Version, Version: misc.Version,
Attributes: make(map[string]string),
} }
if trustedCa := config.StringSafe(c.Sub(subsection), "trusted_ca"); trustedCa != "" { if trustedCa := config.StringSafe(c.Sub(subsection), "trusted_ca"); trustedCa != "" {
@ -38,11 +40,30 @@ func ToTracingConfig(c *config.Config) (*tracing.Config, error) {
} }
conf.ServerCaCertPool = certPool conf.ServerCaCertPool = certPool
} }
i := uint64(0)
for ; ; i++ {
si := strconv.FormatUint(i, 10)
ac := c.Sub(subsection).Sub("attributes").Sub(si)
k := config.StringSafe(ac, "key")
if k == "" {
break
}
v := config.StringSafe(ac, "value")
if v == "" {
return nil, fmt.Errorf("empty tracing attribute value for key %s", k)
}
if _, ok := conf.Attributes[k]; ok {
return nil, fmt.Errorf("tracing attribute key %s defined more than once", k)
}
conf.Attributes[k] = v
}
return conf, nil return conf, nil
} }
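The loop above reads indexed attribute entries until the first index with an empty key, rejecting empty values and duplicate keys. Assuming the usual env-variable mapping of nested keys (upper-cased and underscore-joined, an assumption, not shown here), the fixture expected by the test below would look like:

    // Hedged example of the (assumed) env shape consumed by the loop above:
    //   FROSTFS_TRACING_ATTRIBUTES_0_KEY=key0
    //   FROSTFS_TRACING_ATTRIBUTES_0_VALUE=value
    //   FROSTFS_TRACING_ATTRIBUTES_1_KEY=key1
    //   FROSTFS_TRACING_ATTRIBUTES_1_VALUE=value
    conf.Attributes = map[string]string{"key0": "value", "key1": "value"} // resulting map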
func getInstanceIDOrDefault(c *config.Config) string { func getInstanceIDOrDefault(c *config.Config) string {
s := config.StringSlice(c.Sub("node"), "addresses") s := config.StringSliceSafe(c.Sub("node"), "addresses")
if len(s) > 0 { if len(s) > 0 {
return s[0] return s[0]
} }


@ -0,0 +1,46 @@
+package tracing
+
+import (
+	"testing"
+
+	"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config"
+	configtest "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/test"
+	"git.frostfs.info/TrueCloudLab/frostfs-observability/tracing"
+	"github.com/stretchr/testify/require"
+)
+
+func TestTracingSection(t *testing.T) {
+	t.Run("defaults", func(t *testing.T) {
+		tc, err := ToTracingConfig(configtest.EmptyConfig())
+		require.NoError(t, err)
+		require.Equal(t, false, tc.Enabled)
+		require.Equal(t, tracing.Exporter(""), tc.Exporter)
+		require.Equal(t, "", tc.Endpoint)
+		require.Equal(t, "frostfs-node", tc.Service)
+		require.Equal(t, "", tc.InstanceID)
+		require.Nil(t, tc.ServerCaCertPool)
+		require.Empty(t, tc.Attributes)
+	})
+
+	const path = "../../../../config/example/node"
+
+	fileConfigTest := func(c *config.Config) {
+		tc, err := ToTracingConfig(c)
+		require.NoError(t, err)
+		require.Equal(t, true, tc.Enabled)
+		require.Equal(t, tracing.OTLPgRPCExporter, tc.Exporter)
+		require.Equal(t, "localhost", tc.Endpoint)
+		require.Equal(t, "frostfs-node", tc.Service)
+		require.Nil(t, tc.ServerCaCertPool)
+		require.EqualValues(t, map[string]string{
+			"key0": "value",
+			"key1": "value",
+		}, tc.Attributes)
+	}
+
+	configtest.ForEachFileType(path, fileConfigTest)
+
+	t.Run("ENV", func(t *testing.T) {
+		configtest.ForEnvFileType(t, path, fileConfigTest)
+	})
+}

View file

@ -28,7 +28,7 @@ import (
 func initContainerService(_ context.Context, c *cfg) {
 	// container wrapper that tries to invoke notary
 	// requests if chain is configured so
-	wrap, err := cntClient.NewFromMorph(c.cfgMorph.client, c.cfgContainer.scriptHash, 0, cntClient.TryNotary())
+	wrap, err := cntClient.NewFromMorph(c.cfgMorph.client, c.cfgContainer.scriptHash, 0)
 	fatalOnErr(err)

 	c.shared.cnrClient = wrap
@ -89,7 +89,7 @@ func configureEACLAndContainerSources(c *cfg, client *cntClient.Client, cnrSrc c
 	if c.cfgMorph.containerCacheSize > 0 {
 		containerCache := newCachedContainerStorage(cnrSrc, c.cfgMorph.cacheTTL, c.cfgMorph.containerCacheSize)

-		subscribeToContainerCreation(c, func(e event.Event) {
+		subscribeToContainerCreation(c, func(ctx context.Context, e event.Event) {
 			ev := e.(containerEvent.PutSuccess)

 			// read owner of the created container in order to update the reading cache.
@ -102,21 +102,21 @@ func configureEACLAndContainerSources(c *cfg, client *cntClient.Client, cnrSrc c
 			} else {
 				// unlike removal, we expect successful receive of the container
 				// after successful creation, so logging can be useful
-				c.log.Error(logs.FrostFSNodeReadNewlyCreatedContainerAfterTheNotification,
+				c.log.Error(ctx, logs.FrostFSNodeReadNewlyCreatedContainerAfterTheNotification,
 					zap.Stringer("id", ev.ID),
 					zap.Error(err),
 				)
 			}

-			c.log.Debug(logs.FrostFSNodeContainerCreationEventsReceipt,
+			c.log.Debug(ctx, logs.FrostFSNodeContainerCreationEventsReceipt,
 				zap.Stringer("id", ev.ID),
 			)
 		})

-		subscribeToContainerRemoval(c, func(e event.Event) {
+		subscribeToContainerRemoval(c, func(ctx context.Context, e event.Event) {
 			ev := e.(containerEvent.DeleteSuccess)
 			containerCache.handleRemoval(ev.ID)
-			c.log.Debug(logs.FrostFSNodeContainerRemovalEventsReceipt,
+			c.log.Debug(ctx, logs.FrostFSNodeContainerRemovalEventsReceipt,
 				zap.Stringer("id", ev.ID),
 			)
 		})
@ -237,10 +237,10 @@ type morphContainerWriter struct {
 	neoClient *cntClient.Client
 }

-func (m morphContainerWriter) Put(cnr containerCore.Container) (*cid.ID, error) {
-	return cntClient.Put(m.neoClient, cnr)
+func (m morphContainerWriter) Put(ctx context.Context, cnr containerCore.Container) (*cid.ID, error) {
+	return cntClient.Put(ctx, m.neoClient, cnr)
 }

-func (m morphContainerWriter) Delete(witness containerCore.RemovalWitness) error {
-	return cntClient.Delete(m.neoClient, witness)
+func (m morphContainerWriter) Delete(ctx context.Context, witness containerCore.RemovalWitness) error {
+	return cntClient.Delete(ctx, m.neoClient, witness)
 }

View file

@ -16,7 +16,7 @@ import (
 const serviceNameControl = "control"

-func initControlService(c *cfg) {
+func initControlService(ctx context.Context, c *cfg) {
 	endpoint := controlconfig.GRPC(c.appCfg).Endpoint()
 	if endpoint == controlconfig.GRPCEndpointDefault {
 		return
@ -46,21 +46,21 @@ func initControlService(c *cfg) {
 	lis, err := net.Listen("tcp", endpoint)
 	if err != nil {
-		c.log.Error(logs.FrostFSNodeCantListenGRPCEndpointControl, zap.Error(err))
+		c.log.Error(ctx, logs.FrostFSNodeCantListenGRPCEndpointControl, zap.Error(err))
 		return
 	}

 	c.cfgControlService.server = grpc.NewServer()

 	c.onShutdown(func() {
-		stopGRPC("FrostFS Control API", c.cfgControlService.server, c.log)
+		stopGRPC(ctx, "FrostFS Control API", c.cfgControlService.server, c.log)
 	})

 	control.RegisterControlServiceServer(c.cfgControlService.server, ctlSvc)

 	c.workers = append(c.workers, newWorkerFromFunc(func(ctx context.Context) {
 		runAndLog(ctx, c, serviceNameControl, false, func(context.Context, *cfg) {
-			c.log.Info(logs.FrostFSNodeStartListeningEndpoint,
+			c.log.Info(ctx, logs.FrostFSNodeStartListeningEndpoint,
 				zap.String("service", serviceNameControl),
 				zap.String("endpoint", endpoint))
 			fatalOnErr(c.cfgControlService.server.Serve(lis))
@ -72,23 +72,23 @@ func (c *cfg) NetmapStatus() control.NetmapStatus {
 	return c.cfgNetmap.state.controlNetmapStatus()
 }

-func (c *cfg) setHealthStatus(st control.HealthStatus) {
-	c.notifySystemd(st)
+func (c *cfg) setHealthStatus(ctx context.Context, st control.HealthStatus) {
+	c.notifySystemd(ctx, st)
 	c.healthStatus.Store(int32(st))
 	c.metricsCollector.State().SetHealth(int32(st))
 }

-func (c *cfg) compareAndSwapHealthStatus(oldSt, newSt control.HealthStatus) (swapped bool) {
+func (c *cfg) compareAndSwapHealthStatus(ctx context.Context, oldSt, newSt control.HealthStatus) (swapped bool) {
 	if swapped = c.healthStatus.CompareAndSwap(int32(oldSt), int32(newSt)); swapped {
-		c.notifySystemd(newSt)
+		c.notifySystemd(ctx, newSt)
 		c.metricsCollector.State().SetHealth(int32(newSt))
 	}
 	return
 }

-func (c *cfg) swapHealthStatus(st control.HealthStatus) (old control.HealthStatus) {
+func (c *cfg) swapHealthStatus(ctx context.Context, st control.HealthStatus) (old control.HealthStatus) {
 	old = control.HealthStatus(c.healthStatus.Swap(int32(st)))
-	c.notifySystemd(st)
+	c.notifySystemd(ctx, st)
 	c.metricsCollector.State().SetHealth(int32(st))
 	return
 }
@ -97,7 +97,7 @@ func (c *cfg) HealthStatus() control.HealthStatus {
 	return control.HealthStatus(c.healthStatus.Load())
 }

-func (c *cfg) notifySystemd(st control.HealthStatus) {
+func (c *cfg) notifySystemd(ctx context.Context, st control.HealthStatus) {
 	if !c.sdNotify {
 		return
 	}
@ -113,6 +113,6 @@ func (c *cfg) notifySystemd(st control.HealthStatus) {
 		err = sdnotify.Status(fmt.Sprintf("%v", st))
 	}
 	if err != nil {
-		c.log.Error(logs.FailedToReportStatusToSystemd, zap.Error(err))
+		c.log.Error(ctx, logs.FailedToReportStatusToSystemd, zap.Error(err))
 	}
 }

View file

@ -1,6 +1,7 @@
 package main

 import (
+	"context"
 	"crypto/tls"
 	"errors"
 	"net"
@ -18,11 +19,11 @@ import (
 const maxRecvMsgSize = 256 << 20

-func initGRPC(c *cfg) {
+func initGRPC(ctx context.Context, c *cfg) {
 	var endpointsToReconnect []string
 	var successCount int
 	grpcconfig.IterateEndpoints(c.appCfg, func(sc *grpcconfig.Config) {
-		serverOpts, ok := getGrpcServerOpts(c, sc)
+		serverOpts, ok := getGrpcServerOpts(ctx, c, sc)
 		if !ok {
 			return
 		}
@ -30,7 +31,7 @@ func initGRPC(c *cfg) {
 		lis, err := net.Listen("tcp", sc.Endpoint())
 		if err != nil {
 			c.metricsCollector.GrpcServerMetrics().MarkUnhealthy(sc.Endpoint())
-			c.log.Error(logs.FrostFSNodeCantListenGRPCEndpoint, zap.Error(err))
+			c.log.Error(ctx, logs.FrostFSNodeCantListenGRPCEndpoint, zap.Error(err))
 			endpointsToReconnect = append(endpointsToReconnect, sc.Endpoint())
 			return
 		}
@ -39,7 +40,7 @@ func initGRPC(c *cfg) {
 		srv := grpc.NewServer(serverOpts...)

 		c.onShutdown(func() {
-			stopGRPC("FrostFS Public API", srv, c.log)
+			stopGRPC(ctx, "FrostFS Public API", srv, c.log)
 		})

 		c.cfgGRPC.append(sc.Endpoint(), lis, srv)
@ -52,11 +53,11 @@ func initGRPC(c *cfg) {
 	c.cfgGRPC.reconnectTimeout = grpcconfig.ReconnectTimeout(c.appCfg)

 	for _, endpoint := range endpointsToReconnect {
-		scheduleReconnect(endpoint, c)
+		scheduleReconnect(ctx, endpoint, c)
 	}
 }

-func scheduleReconnect(endpoint string, c *cfg) {
+func scheduleReconnect(ctx context.Context, endpoint string, c *cfg) {
 	c.wg.Add(1)
 	go func() {
 		defer c.wg.Done()
@ -65,7 +66,7 @@ func scheduleReconnect(endpoint string, c *cfg) {
 		for {
 			select {
 			case <-t.C:
-				if tryReconnect(endpoint, c) {
+				if tryReconnect(ctx, endpoint, c) {
 					return
 				}
 			case <-c.done:
@ -75,20 +76,20 @@ func scheduleReconnect(endpoint string, c *cfg) {
 	}()
 }

-func tryReconnect(endpoint string, c *cfg) bool {
-	c.log.Info(logs.FrostFSNodeGRPCReconnecting, zap.String("endpoint", endpoint))
+func tryReconnect(ctx context.Context, endpoint string, c *cfg) bool {
+	c.log.Info(ctx, logs.FrostFSNodeGRPCReconnecting, zap.String("endpoint", endpoint))

-	serverOpts, found := getGRPCEndpointOpts(endpoint, c)
+	serverOpts, found := getGRPCEndpointOpts(ctx, endpoint, c)
 	if !found {
-		c.log.Warn(logs.FrostFSNodeGRPCServerConfigNotFound, zap.String("endpoint", endpoint))
+		c.log.Warn(ctx, logs.FrostFSNodeGRPCServerConfigNotFound, zap.String("endpoint", endpoint))
 		return true
 	}

 	lis, err := net.Listen("tcp", endpoint)
 	if err != nil {
 		c.metricsCollector.GrpcServerMetrics().MarkUnhealthy(endpoint)
-		c.log.Error(logs.FrostFSNodeCantListenGRPCEndpoint, zap.Error(err))
-		c.log.Warn(logs.FrostFSNodeGRPCReconnectFailed, zap.Duration("next_try_in", c.cfgGRPC.reconnectTimeout))
+		c.log.Error(ctx, logs.FrostFSNodeCantListenGRPCEndpoint, zap.Error(err))
+		c.log.Warn(ctx, logs.FrostFSNodeGRPCReconnectFailed, zap.Duration("next_try_in", c.cfgGRPC.reconnectTimeout))
 		return false
 	}
 	c.metricsCollector.GrpcServerMetrics().MarkHealthy(endpoint)
@ -96,16 +97,16 @@ func tryReconnect(endpoint string, c *cfg) bool {
 	srv := grpc.NewServer(serverOpts...)

 	c.onShutdown(func() {
-		stopGRPC("FrostFS Public API", srv, c.log)
+		stopGRPC(ctx, "FrostFS Public API", srv, c.log)
 	})

 	c.cfgGRPC.appendAndHandle(endpoint, lis, srv)

-	c.log.Info(logs.FrostFSNodeGRPCReconnectedSuccessfully, zap.String("endpoint", endpoint))
+	c.log.Info(ctx, logs.FrostFSNodeGRPCReconnectedSuccessfully, zap.String("endpoint", endpoint))
 	return true
 }

-func getGRPCEndpointOpts(endpoint string, c *cfg) (result []grpc.ServerOption, found bool) {
+func getGRPCEndpointOpts(ctx context.Context, endpoint string, c *cfg) (result []grpc.ServerOption, found bool) {
 	unlock := c.LockAppConfigShared()
 	defer unlock()
 	grpcconfig.IterateEndpoints(c.appCfg, func(sc *grpcconfig.Config) {
@ -116,7 +117,7 @@ func getGRPCEndpointOpts(endpoint string, c *cfg) (result []grpc.ServerOption, f
 			return
 		}
 		var ok bool
-		result, ok = getGrpcServerOpts(c, sc)
+		result, ok = getGrpcServerOpts(ctx, c, sc)
 		if !ok {
 			return
 		}
@ -125,7 +126,7 @@ func getGRPCEndpointOpts(endpoint string, c *cfg) (result []grpc.ServerOption, f
 	return
 }

-func getGrpcServerOpts(c *cfg, sc *grpcconfig.Config) ([]grpc.ServerOption, bool) {
+func getGrpcServerOpts(ctx context.Context, c *cfg, sc *grpcconfig.Config) ([]grpc.ServerOption, bool) {
 	serverOpts := []grpc.ServerOption{
 		grpc.MaxRecvMsgSize(maxRecvMsgSize),
 		grpc.ChainUnaryInterceptor(
@ -143,7 +144,7 @@ func getGrpcServerOpts(c *cfg, sc *grpcconfig.Config) ([]grpc.ServerOption, bool
 	if tlsCfg != nil {
 		cert, err := tls.LoadX509KeyPair(tlsCfg.CertificateFile(), tlsCfg.KeyFile())
 		if err != nil {
-			c.log.Error(logs.FrostFSNodeCouldNotReadCertificateFromFile, zap.Error(err))
+			c.log.Error(ctx, logs.FrostFSNodeCouldNotReadCertificateFromFile, zap.Error(err))
 			return nil, false
 		}
@ -174,38 +175,38 @@ func getGrpcServerOpts(c *cfg, sc *grpcconfig.Config) ([]grpc.ServerOption, bool
 	return serverOpts, true
 }

-func serveGRPC(c *cfg) {
+func serveGRPC(ctx context.Context, c *cfg) {
 	c.cfgGRPC.performAndSave(func(e string, l net.Listener, s *grpc.Server) {
 		c.wg.Add(1)

 		go func() {
 			defer func() {
-				c.log.Info(logs.FrostFSNodeStopListeningGRPCEndpoint,
+				c.log.Info(ctx, logs.FrostFSNodeStopListeningGRPCEndpoint,
 					zap.Stringer("endpoint", l.Addr()),
 				)
 				c.wg.Done()
 			}()

-			c.log.Info(logs.FrostFSNodeStartListeningEndpoint,
+			c.log.Info(ctx, logs.FrostFSNodeStartListeningEndpoint,
 				zap.String("service", "gRPC"),
 				zap.Stringer("endpoint", l.Addr()),
 			)

 			if err := s.Serve(l); err != nil {
 				c.metricsCollector.GrpcServerMetrics().MarkUnhealthy(e)
-				c.log.Error(logs.FrostFSNodeGRPCServerError, zap.Error(err))
+				c.log.Error(ctx, logs.FrostFSNodeGRPCServerError, zap.Error(err))
 				c.cfgGRPC.dropConnection(e)
-				scheduleReconnect(e, c)
+				scheduleReconnect(ctx, e, c)
 			}
 		}()
 	})
 }

-func stopGRPC(name string, s *grpc.Server, l *logger.Logger) {
-	l = &logger.Logger{Logger: l.With(zap.String("name", name))}
+func stopGRPC(ctx context.Context, name string, s *grpc.Server, l *logger.Logger) {
+	l = l.With(zap.String("name", name))

-	l.Info(logs.FrostFSNodeStoppingGRPCServer)
+	l.Info(ctx, logs.FrostFSNodeStoppingGRPCServer)

 	// GracefulStop() may freeze forever, see #1270
 	done := make(chan struct{})
@ -217,9 +218,9 @@ func stopGRPC(name string, s *grpc.Server, l *logger.Logger) {
 	select {
 	case <-done:
 	case <-time.After(1 * time.Minute):
-		l.Info(logs.FrostFSNodeGRPCCannotShutdownGracefullyForcingStop)
+		l.Info(ctx, logs.FrostFSNodeGRPCCannotShutdownGracefullyForcingStop)
 		s.Stop()
 	}

-	l.Info(logs.FrostFSNodeGRPCServerStoppedSuccessfully)
+	l.Info(ctx, logs.FrostFSNodeGRPCServerStoppedSuccessfully)
 }

View file

@ -20,9 +20,9 @@ type httpComponent struct {
 	preReload func(c *cfg)
 }

-func (cmp *httpComponent) init(c *cfg) {
+func (cmp *httpComponent) init(ctx context.Context, c *cfg) {
 	if !cmp.enabled {
-		c.log.Info(cmp.name + " is disabled")
+		c.log.Info(ctx, cmp.name+" is disabled")
 		return
 	}
 	// Init server with parameters
@ -39,14 +39,14 @@ func (cmp *httpComponent) init(c *cfg) {
 		go func() {
 			defer c.wg.Done()

-			c.log.Info(logs.FrostFSNodeStartListeningEndpoint,
+			c.log.Info(ctx, logs.FrostFSNodeStartListeningEndpoint,
 				zap.String("service", cmp.name),
 				zap.String("endpoint", cmp.address))
 			fatalOnErr(srv.Serve())
 		}()
 		c.closers = append(c.closers, closer{
 			cmp.name,
-			func() { stopAndLog(c, cmp.name, srv.Shutdown) },
+			func() { stopAndLog(ctx, c, cmp.name, srv.Shutdown) },
 		})
 	}
@ -62,7 +62,7 @@ func (cmp *httpComponent) reload(ctx context.Context) error {
 	// Cleanup
 	delCloser(cmp.cfg, cmp.name)
 	// Init server with new parameters
-	cmp.init(cmp.cfg)
+	cmp.init(ctx, cmp.cfg)
 	// Start worker
 	if cmp.enabled {
 		startWorker(ctx, cmp.cfg, *getWorker(cmp.cfg, cmp.name))

View file

@ -61,21 +61,21 @@ func main() {
 	var ctx context.Context
 	ctx, c.ctxCancel = context.WithCancel(context.Background())

-	c.setHealthStatus(control.HealthStatus_STARTING)
+	c.setHealthStatus(ctx, control.HealthStatus_STARTING)

 	initApp(ctx, c)

 	bootUp(ctx, c)

-	c.compareAndSwapHealthStatus(control.HealthStatus_STARTING, control.HealthStatus_READY)
+	c.compareAndSwapHealthStatus(ctx, control.HealthStatus_STARTING, control.HealthStatus_READY)

 	wait(c)
 }

-func initAndLog(c *cfg, name string, initializer func(*cfg)) {
-	c.log.Info(fmt.Sprintf("initializing %s service...", name))
+func initAndLog(ctx context.Context, c *cfg, name string, initializer func(*cfg)) {
+	c.log.Info(ctx, fmt.Sprintf("initializing %s service...", name))
 	initializer(c)
-	c.log.Info(name + " service has been successfully initialized")
+	c.log.Info(ctx, name+" service has been successfully initialized")
 }

 func initApp(ctx context.Context, c *cfg) {
@ -85,72 +85,72 @@ func initApp(ctx context.Context, c *cfg) {
 		c.wg.Done()
 	}()

-	setRuntimeParameters(c)
+	setRuntimeParameters(ctx, c)
 	metrics, _ := metricsComponent(c)
-	initAndLog(c, "profiler", initProfilerService)
-	initAndLog(c, metrics.name, metrics.init)
+	initAndLog(ctx, c, "profiler", func(c *cfg) { initProfilerService(ctx, c) })
+	initAndLog(ctx, c, metrics.name, func(c *cfg) { metrics.init(ctx, c) })

-	initAndLog(c, "tracing", func(c *cfg) { initTracing(ctx, c) })
+	initAndLog(ctx, c, "tracing", func(c *cfg) { initTracing(ctx, c) })

 	initLocalStorage(ctx, c)

-	initAndLog(c, "storage engine", func(c *cfg) {
+	initAndLog(ctx, c, "storage engine", func(c *cfg) {
 		fatalOnErr(c.cfgObject.cfgLocalStorage.localStorage.Open(ctx))
 		fatalOnErr(c.cfgObject.cfgLocalStorage.localStorage.Init(ctx))
 	})

-	initAndLog(c, "gRPC", initGRPC)
-	initAndLog(c, "netmap", func(c *cfg) { initNetmapService(ctx, c) })
+	initAndLog(ctx, c, "gRPC", func(c *cfg) { initGRPC(ctx, c) })
+	initAndLog(ctx, c, "netmap", func(c *cfg) { initNetmapService(ctx, c) })

 	initAccessPolicyEngine(ctx, c)
-	initAndLog(c, "access policy engine", func(c *cfg) {
+	initAndLog(ctx, c, "access policy engine", func(c *cfg) {
 		fatalOnErr(c.cfgObject.cfgAccessPolicyEngine.accessPolicyEngine.LocalOverrideDatabaseCore().Open(ctx))
 		fatalOnErr(c.cfgObject.cfgAccessPolicyEngine.accessPolicyEngine.LocalOverrideDatabaseCore().Init())
 	})

-	initAndLog(c, "accounting", func(c *cfg) { initAccountingService(ctx, c) })
-	initAndLog(c, "container", func(c *cfg) { initContainerService(ctx, c) })
-	initAndLog(c, "session", initSessionService)
-	initAndLog(c, "object", initObjectService)
-	initAndLog(c, "tree", initTreeService)
-	initAndLog(c, "apemanager", initAPEManagerService)
-	initAndLog(c, "control", initControlService)
+	initAndLog(ctx, c, "accounting", func(c *cfg) { initAccountingService(ctx, c) })
+	initAndLog(ctx, c, "container", func(c *cfg) { initContainerService(ctx, c) })
+	initAndLog(ctx, c, "session", initSessionService)
+	initAndLog(ctx, c, "object", initObjectService)
+	initAndLog(ctx, c, "tree", initTreeService)
+	initAndLog(ctx, c, "apemanager", initAPEManagerService)
+	initAndLog(ctx, c, "control", func(c *cfg) { initControlService(ctx, c) })

-	initAndLog(c, "morph notifications", func(c *cfg) { listenMorphNotifications(ctx, c) })
+	initAndLog(ctx, c, "morph notifications", func(c *cfg) { listenMorphNotifications(ctx, c) })
 }

 func runAndLog(ctx context.Context, c *cfg, name string, logSuccess bool, starter func(context.Context, *cfg)) {
-	c.log.Info(fmt.Sprintf("starting %s service...", name))
+	c.log.Info(ctx, fmt.Sprintf("starting %s service...", name))
 	starter(ctx, c)

 	if logSuccess {
-		c.log.Info(name + " service started successfully")
+		c.log.Info(ctx, name+" service started successfully")
 	}
 }

-func stopAndLog(c *cfg, name string, stopper func() error) {
-	c.log.Debug(fmt.Sprintf("shutting down %s service", name))
+func stopAndLog(ctx context.Context, c *cfg, name string, stopper func(context.Context) error) {
+	c.log.Debug(ctx, fmt.Sprintf("shutting down %s service", name))

-	err := stopper()
+	err := stopper(ctx)
 	if err != nil {
-		c.log.Debug(fmt.Sprintf("could not shutdown %s server", name),
+		c.log.Debug(ctx, fmt.Sprintf("could not shutdown %s server", name),
 			zap.String("error", err.Error()),
 		)
 	}

-	c.log.Debug(name + " service has been stopped")
+	c.log.Debug(ctx, name+" service has been stopped")
 }

 func bootUp(ctx context.Context, c *cfg) {
-	runAndLog(ctx, c, "gRPC", false, func(_ context.Context, c *cfg) { serveGRPC(c) })
+	runAndLog(ctx, c, "gRPC", false, func(_ context.Context, c *cfg) { serveGRPC(ctx, c) })
 	runAndLog(ctx, c, "notary", true, makeAndWaitNotaryDeposit)

-	bootstrapNode(c)
+	bootstrapNode(ctx, c)
 	startWorkers(ctx, c)
 }

 func wait(c *cfg) {
-	c.log.Info(logs.CommonApplicationStarted,
+	c.log.Info(context.Background(), logs.CommonApplicationStarted,
 		zap.String("version", misc.Version))

 	<-c.done // graceful shutdown
@ -160,12 +160,12 @@ func wait(c *cfg) {
 	go func() {
 		defer drain.Done()
 		for err := range c.internalErr {
-			c.log.Warn(logs.FrostFSNodeInternalApplicationError,
+			c.log.Warn(context.Background(), logs.FrostFSNodeInternalApplicationError,
 				zap.String("message", err.Error()))
 		}
 	}()

-	c.log.Debug(logs.FrostFSNodeWaitingForAllProcessesToStop)
+	c.log.Debug(context.Background(), logs.FrostFSNodeWaitingForAllProcessesToStop)

 	c.wg.Wait()

View file

@ -17,11 +17,7 @@ import (
 	"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/rand"
 	"github.com/nspcc-dev/neo-go/pkg/core/block"
 	"github.com/nspcc-dev/neo-go/pkg/core/state"
-	"github.com/nspcc-dev/neo-go/pkg/neorpc/result"
-	"github.com/nspcc-dev/neo-go/pkg/rpcclient/waiter"
-	"github.com/nspcc-dev/neo-go/pkg/smartcontract/trigger"
 	"github.com/nspcc-dev/neo-go/pkg/util"
-	"github.com/nspcc-dev/neo-go/pkg/vm/vmstate"
 	"go.uber.org/zap"
 )
@ -39,20 +35,16 @@ func (c *cfg) initMorphComponents(ctx context.Context) {
 	lookupScriptHashesInNNS(c) // smart contract auto negotiation

-	if c.cfgMorph.notaryEnabled {
-		err := c.cfgMorph.client.EnableNotarySupport(
-			client.WithProxyContract(
-				c.cfgMorph.proxyScriptHash,
-			),
-		)
-		fatalOnErr(err)
-	}
+	err := c.cfgMorph.client.EnableNotarySupport(
+		client.WithProxyContract(
+			c.cfgMorph.proxyScriptHash,
+		),
+	)
+	fatalOnErr(err)

-	c.log.Info(logs.FrostFSNodeNotarySupport,
-		zap.Bool("sidechain_enabled", c.cfgMorph.notaryEnabled),
-	)
+	c.log.Info(ctx, logs.FrostFSNodeNotarySupport)

-	wrap, err := nmClient.NewFromMorph(c.cfgMorph.client, c.cfgNetmap.scriptHash, 0, nmClient.TryNotary())
+	wrap, err := nmClient.NewFromMorph(c.cfgMorph.client, c.cfgNetmap.scriptHash, 0)
 	fatalOnErr(err)

 	var netmapSource netmap.Source
@ -64,7 +56,7 @@ func (c *cfg) initMorphComponents(ctx context.Context) {
 		msPerBlock, err := c.cfgMorph.client.MsPerBlock()
 		fatalOnErr(err)
 		c.cfgMorph.cacheTTL = time.Duration(msPerBlock) * time.Millisecond
-		c.log.Debug(logs.FrostFSNodeMorphcacheTTLFetchedFromNetwork, zap.Duration("value", c.cfgMorph.cacheTTL))
+		c.log.Debug(ctx, logs.FrostFSNodeMorphcacheTTLFetchedFromNetwork, zap.Duration("value", c.cfgMorph.cacheTTL))
 	}

 	if c.cfgMorph.cacheTTL < 0 {
@ -102,7 +94,7 @@ func initMorphClient(ctx context.Context, c *cfg) {
 		client.WithDialerSource(c.dialerSource),
 	)
 	if err != nil {
-		c.log.Info(logs.FrostFSNodeFailedToCreateNeoRPCClient,
+		c.log.Info(ctx, logs.FrostFSNodeFailedToCreateNeoRPCClient,
 			zap.Any("endpoints", addresses),
 			zap.String("error", err.Error()),
 		)
@ -111,32 +103,26 @@
 	}

 	c.onShutdown(func() {
-		c.log.Info(logs.FrostFSNodeClosingMorphComponents)
+		c.log.Info(ctx, logs.FrostFSNodeClosingMorphComponents)
 		cli.Close()
 	})

 	if err := cli.SetGroupSignerScope(); err != nil {
-		c.log.Info(logs.FrostFSNodeFailedToSetGroupSignerScopeContinueWithGlobal, zap.Error(err))
+		c.log.Info(ctx, logs.FrostFSNodeFailedToSetGroupSignerScopeContinueWithGlobal, zap.Error(err))
 	}

 	c.cfgMorph.client = cli
-	c.cfgMorph.notaryEnabled = cli.ProbeNotary()
 }

 func makeAndWaitNotaryDeposit(ctx context.Context, c *cfg) {
-	// skip notary deposit in non-notary environments
-	if !c.cfgMorph.notaryEnabled {
-		return
-	}
-
-	tx, vub, err := makeNotaryDeposit(c)
+	tx, vub, err := makeNotaryDeposit(ctx, c)
 	fatalOnErr(err)

 	if tx.Equals(util.Uint256{}) {
 		// non-error deposit with an empty TX hash means
 		// that the deposit has already been made; no
 		// need to wait it.
-		c.log.Info(logs.FrostFSNodeNotaryDepositHasAlreadyBeenMade)
+		c.log.Info(ctx, logs.FrostFSNodeNotaryDepositHasAlreadyBeenMade)
 		return
 	}
@ -144,7 +130,7 @@ func makeAndWaitNotaryDeposit(ctx context.Context, c *cfg) {
 	fatalOnErr(err)
 }

-func makeNotaryDeposit(c *cfg) (util.Uint256, uint32, error) {
+func makeNotaryDeposit(ctx context.Context, c *cfg) (util.Uint256, uint32, error) {
 	const (
 		// gasMultiplier defines how many times more the notary
 		// balance must be compared to the GAS balance of the node:
@ -161,52 +147,17 @@ func makeNotaryDeposit(c *cfg) (util.Uint256, uint32, error) {
 		return util.Uint256{}, 0, fmt.Errorf("could not calculate notary deposit: %w", err)
 	}

-	return c.cfgMorph.client.DepositEndlessNotary(depositAmount)
-}
-
-var (
-	errNotaryDepositFail    = errors.New("notary deposit tx has faulted")
-	errNotaryDepositTimeout = errors.New("notary deposit tx has not appeared in the network")
-)
-
-type waiterClient struct {
-	c *client.Client
-}
-
-func (w *waiterClient) Context() context.Context {
-	return context.Background()
-}
-
-func (w *waiterClient) GetApplicationLog(hash util.Uint256, trig *trigger.Type) (*result.ApplicationLog, error) {
-	return w.c.GetApplicationLog(hash, trig)
-}
-
-func (w *waiterClient) GetBlockCount() (uint32, error) {
-	return w.c.BlockCount()
-}
-
-func (w *waiterClient) GetVersion() (*result.Version, error) {
-	return w.c.GetVersion()
-}
+	return c.cfgMorph.client.DepositEndlessNotary(ctx, depositAmount)
+}

 func waitNotaryDeposit(ctx context.Context, c *cfg, tx util.Uint256, vub uint32) error {
-	w, err := waiter.NewPollingBased(&waiterClient{c: c.cfgMorph.client})
-	if err != nil {
-		return fmt.Errorf("could not create notary deposit waiter: %w", err)
-	}
-	res, err := w.WaitAny(ctx, vub, tx)
-	if err != nil {
-		if errors.Is(err, waiter.ErrTxNotAccepted) {
-			return errNotaryDepositTimeout
-		}
-		return fmt.Errorf("could not wait for notary deposit persists in chain: %w", err)
-	}
-	if res.Execution.VMState.HasFlag(vmstate.Halt) {
-		c.log.Info(logs.ClientNotaryDepositTransactionWasSuccessfullyPersisted)
-		return nil
-	}
-	return errNotaryDepositFail
+	if err := c.cfgMorph.client.WaitTxHalt(ctx, client.InvokeRes{Hash: tx, VUB: vub}); err != nil {
+		return err
+	}
+
+	c.log.Info(ctx, logs.ClientNotaryDepositTransactionWasSuccessfullyPersisted)
+	return nil
 }

@ -217,7 +168,7 @@ func listenMorphNotifications(ctx context.Context, c *cfg) {
 	fromSideChainBlock, err := c.persistate.UInt32(persistateSideChainLastBlockKey)
 	if err != nil {
 		fromSideChainBlock = 0
-		c.log.Warn(logs.FrostFSNodeCantGetLastProcessedSideChainBlockNumber, zap.String("error", err.Error()))
+		c.log.Warn(ctx, logs.FrostFSNodeCantGetLastProcessedSideChainBlockNumber, zap.String("error", err.Error()))
 	}

 	subs, err = subscriber.New(ctx, &subscriber.Params{
@ -246,7 +197,7 @@ func listenMorphNotifications(ctx context.Context, c *cfg) {
 	setNetmapNotificationParser(c, newEpochNotification, func(src *state.ContainedNotificationEvent) (event.Event, error) {
 		res, err := netmapEvent.ParseNewEpoch(src)
 		if err == nil {
-			c.log.Info(logs.FrostFSNodeNewEpochEventFromSidechain,
+			c.log.Info(ctx, logs.FrostFSNodeNewEpochEventFromSidechain,
 				zap.Uint64("number", res.(netmapEvent.NewEpoch).EpochNumber()),
 			)
 		}
@ -256,12 +207,12 @@ func listenMorphNotifications(ctx context.Context, c *cfg) {
 	registerNotificationHandlers(c.cfgNetmap.scriptHash, lis, c.cfgNetmap.parsers, c.cfgNetmap.subscribers)
 	registerNotificationHandlers(c.cfgContainer.scriptHash, lis, c.cfgContainer.parsers, c.cfgContainer.subscribers)

-	registerBlockHandler(lis, func(block *block.Block) {
-		c.log.Debug(logs.FrostFSNodeNewBlock, zap.Uint32("index", block.Index))
+	registerBlockHandler(lis, func(ctx context.Context, block *block.Block) {
+		c.log.Debug(ctx, logs.FrostFSNodeNewBlock, zap.Uint32("index", block.Index))

 		err = c.persistate.SetUInt32(persistateSideChainLastBlockKey, block.Index)
 		if err != nil {
-			c.log.Warn(logs.FrostFSNodeCantUpdatePersistentState,
+			c.log.Warn(ctx, logs.FrostFSNodeCantUpdatePersistentState,
 				zap.String("chain", "side"),
 				zap.Uint32("block_index", block.Index))
 		}
@ -272,27 +223,17 @@ func registerNotificationHandlers(scHash util.Uint160, lis event.Listener, parse
 	subs map[event.Type][]event.Handler,
 ) {
 	for typ, handlers := range subs {
-		pi := event.NotificationParserInfo{}
-		pi.SetType(typ)
-		pi.SetScriptHash(scHash)
-
 		p, ok := parsers[typ]
 		if !ok {
 			panic(fmt.Sprintf("missing parser for event %s", typ))
 		}

-		pi.SetParser(p)
-
-		lis.SetNotificationParser(pi)
-
-		for _, h := range handlers {
-			hi := event.NotificationHandlerInfo{}
-			hi.SetType(typ)
-			hi.SetScriptHash(scHash)
-			hi.SetHandler(h)
-
-			lis.RegisterNotificationHandler(hi)
-		}
+		lis.RegisterNotificationHandler(event.NotificationHandlerInfo{
+			Contract: scHash,
+			Type:     typ,
+			Parser:   p,
+			Handlers: handlers,
+		})
 	}
 }
@ -321,10 +262,6 @@ func lookupScriptHashesInNNS(c *cfg) {
 	)

 	for _, t := range targets {
-		if t.nnsName == client.NNSProxyContractName && !c.cfgMorph.notaryEnabled {
-			continue // ignore proxy contract if notary disabled
-		}
-
 		if emptyHash.Equals(*t.h) {
 			*t.h, err = c.cfgMorph.client.NNSContractAddress(t.nnsName)
 			fatalOnErrDetails(fmt.Sprintf("can't resolve %s in NNS", t.nnsName), err)

View file

@ -145,7 +145,7 @@ func initNetmapService(ctx context.Context, c *cfg) {
 	c.initMorphComponents(ctx)

-	initNetmapState(c)
+	initNetmapState(ctx, c)

 	server := netmapTransportGRPC.New(
 		netmapService.NewSignService(
@ -175,45 +175,43 @@ func initNetmapService(ctx context.Context, c *cfg) {
 }

 func addNewEpochNotificationHandlers(c *cfg) {
-	addNewEpochNotificationHandler(c, func(ev event.Event) {
+	addNewEpochNotificationHandler(c, func(_ context.Context, ev event.Event) {
 		c.cfgNetmap.state.setCurrentEpoch(ev.(netmapEvent.NewEpoch).EpochNumber())
 	})

-	addNewEpochAsyncNotificationHandler(c, func(ev event.Event) {
+	addNewEpochAsyncNotificationHandler(c, func(ctx context.Context, ev event.Event) {
 		e := ev.(netmapEvent.NewEpoch).EpochNumber()

-		c.updateContractNodeInfo(e)
+		c.updateContractNodeInfo(ctx, e)

 		if !c.needBootstrap() || c.cfgNetmap.reBoostrapTurnedOff.Load() { // fixes #470
 			return
 		}

-		if err := c.bootstrap(); err != nil {
-			c.log.Warn(logs.FrostFSNodeCantSendRebootstrapTx, zap.Error(err))
+		if err := c.bootstrap(ctx); err != nil {
+			c.log.Warn(ctx, logs.FrostFSNodeCantSendRebootstrapTx, zap.Error(err))
 		}
 	})

-	if c.cfgMorph.notaryEnabled {
-		addNewEpochAsyncNotificationHandler(c, func(_ event.Event) {
-			_, _, err := makeNotaryDeposit(c)
-			if err != nil {
-				c.log.Error(logs.FrostFSNodeCouldNotMakeNotaryDeposit,
-					zap.String("error", err.Error()),
-				)
-			}
-		})
-	}
+	addNewEpochAsyncNotificationHandler(c, func(ctx context.Context, _ event.Event) {
+		_, _, err := makeNotaryDeposit(ctx, c)
+		if err != nil {
+			c.log.Error(ctx, logs.FrostFSNodeCouldNotMakeNotaryDeposit,
+				zap.String("error", err.Error()),
+			)
+		}
+	})
 }

 // bootstrapNode adds current node to the Network map.
 // Must be called after initNetmapService.
-func bootstrapNode(c *cfg) {
+func bootstrapNode(ctx context.Context, c *cfg) {
 	if c.needBootstrap() {
 		if c.IsMaintenance() {
-			c.log.Info(logs.FrostFSNodeNodeIsUnderMaintenanceSkipInitialBootstrap)
+			c.log.Info(ctx, logs.FrostFSNodeNodeIsUnderMaintenanceSkipInitialBootstrap)
 			return
 		}
-		err := c.bootstrap()
+		err := c.bootstrap(ctx)
 		fatalOnErrDetails("bootstrap error", err)
 	}
 }
@ -240,17 +238,17 @@ func setNetmapNotificationParser(c *cfg, sTyp string, p event.NotificationParser
 // initNetmapState inits current Network map state.
 // Must be called after Morph components initialization.
-func initNetmapState(c *cfg) {
+func initNetmapState(ctx context.Context, c *cfg) {
 	epoch, err := c.cfgNetmap.wrapper.Epoch()
 	fatalOnErrDetails("could not initialize current epoch number", err)

 	var ni *netmapSDK.NodeInfo
-	ni, err = c.netmapInitLocalNodeState(epoch)
+	ni, err = c.netmapInitLocalNodeState(ctx, epoch)
 	fatalOnErrDetails("could not init network state", err)

 	stateWord := nodeState(ni)

-	c.log.Info(logs.FrostFSNodeInitialNetworkState,
+	c.log.Info(ctx, logs.FrostFSNodeInitialNetworkState,
 		zap.Uint64("epoch", epoch),
 		zap.String("state", stateWord),
 	)
@ -279,7 +277,7 @@ func nodeState(ni *netmapSDK.NodeInfo) string {
 	return "undefined"
 }

-func (c *cfg) netmapInitLocalNodeState(epoch uint64) (*netmapSDK.NodeInfo, error) {
+func (c *cfg) netmapInitLocalNodeState(ctx context.Context, epoch uint64) (*netmapSDK.NodeInfo, error) {
 	nmNodes, err := c.cfgNetmap.wrapper.GetCandidates()
 	if err != nil {
 		return nil, err
@ -307,7 +305,7 @@ func (c *cfg) netmapInitLocalNodeState(epoch uint64) (*netmapSDK.NodeInfo, error
 	if nmState != candidateState {
 		// This happens when the node was switched to maintenance without epoch tick.
 		// We expect it to continue staying in maintenance.
-		c.log.Info(logs.CandidateStatusPriority,
+		c.log.Info(ctx, logs.CandidateStatusPriority,
 			zap.String("netmap", nmState),
 			zap.String("candidate", candidateState))
 	}
@ -353,16 +351,16 @@ func addNewEpochAsyncNotificationHandler(c *cfg, h event.Handler) {
 var errRelayBootstrap = errors.New("setting netmap status is forbidden in relay mode")

-func (c *cfg) SetNetmapStatus(st control.NetmapStatus) error {
+func (c *cfg) SetNetmapStatus(ctx context.Context, st control.NetmapStatus) error {
 	switch st {
 	default:
 		return fmt.Errorf("unsupported status %v", st)
 	case control.NetmapStatus_MAINTENANCE:
-		return c.setMaintenanceStatus(false)
+		return c.setMaintenanceStatus(ctx, false)
 	case control.NetmapStatus_ONLINE, control.NetmapStatus_OFFLINE:
 	}

-	c.stopMaintenance()
+	c.stopMaintenance(ctx)

 	if !c.needBootstrap() {
 		return errRelayBootstrap
@ -370,12 +368,12 @@ func (c *cfg) SetNetmapStatus(st control.NetmapStatus) error {
 	if st == control.NetmapStatus_ONLINE {
 		c.cfgNetmap.reBoostrapTurnedOff.Store(false)
-		return bootstrapOnline(c)
+		return bootstrapOnline(ctx, c)
 	}

 	c.cfgNetmap.reBoostrapTurnedOff.Store(true)

-	return c.updateNetMapState(func(*nmClient.UpdatePeerPrm) {})
+	return c.updateNetMapState(ctx, func(*nmClient.UpdatePeerPrm) {})
 }

 func (c *cfg) GetNetmapStatus() (control.NetmapStatus, uint64, error) {
@ -387,11 +385,11 @@ func (c *cfg) GetNetmapStatus() (control.NetmapStatus, uint64, error) {
 	return st, epoch, nil
 }

-func (c *cfg) ForceMaintenance() error {
-	return c.setMaintenanceStatus(true)
+func (c *cfg) ForceMaintenance(ctx context.Context) error {
+	return c.setMaintenanceStatus(ctx, true)
 }

-func (c *cfg) setMaintenanceStatus(force bool) error {
+func (c *cfg) setMaintenanceStatus(ctx context.Context, force bool) error {
 	netSettings, err := c.cfgNetmap.wrapper.ReadNetworkConfiguration()
 	if err != nil {
 		err = fmt.Errorf("read network settings to check maintenance allowance: %w", err)
@ -400,10 +398,10 @@ func (c *cfg) setMaintenanceStatus(force bool) error {
 	}

 	if err == nil || force {
-		c.startMaintenance()
+		c.startMaintenance(ctx)

 		if err == nil {
-			err = c.updateNetMapState((*nmClient.UpdatePeerPrm).SetMaintenance)
+			err = c.updateNetMapState(ctx, (*nmClient.UpdatePeerPrm).SetMaintenance)
 		}

 		if err != nil {
@ -416,14 +414,17 @@ func (c *cfg) setMaintenanceStatus(force bool) error {
 // calls UpdatePeerState operation of Netmap contract's client for the local node.
 // State setter is used to specify node state to switch to.
-func (c *cfg) updateNetMapState(stateSetter func(*nmClient.UpdatePeerPrm)) error {
+func (c *cfg) updateNetMapState(ctx context.Context, stateSetter func(*nmClient.UpdatePeerPrm)) error {
 	var prm nmClient.UpdatePeerPrm
 	prm.SetKey(c.key.PublicKey().Bytes())
 	stateSetter(&prm)

-	_, err := c.cfgNetmap.wrapper.UpdatePeerState(prm)
-	return err
-}
+	res, err := c.cfgNetmap.wrapper.UpdatePeerState(ctx, prm)
+	if err != nil {
+		return err
+	}
+	return c.cfgNetmap.wrapper.Morph().WaitTxHalt(ctx, res)
+}

 type netInfo struct {
 	netState netmap.State

View file

@ -13,7 +13,6 @@ import (
 	"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/core/netmap"
 	"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/engine"
 	morphClient "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client"
-	nmClient "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client/netmap"
 	"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/network/cache"
 	objectTransportGRPC "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/network/transport/object/grpc"
 	objectService "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/object"
@ -58,7 +57,7 @@ type objectSvc struct {
 func (c *cfg) MaxObjectSize() uint64 {
 	sz, err := c.cfgNetmap.wrapper.MaxObjectSize()
 	if err != nil {
-		c.log.Error(logs.FrostFSNodeCouldNotGetMaxObjectSizeValue,
+		c.log.Error(context.Background(), logs.FrostFSNodeCouldNotGetMaxObjectSizeValue,
 			zap.String("error", err.Error()),
 		)
 	}
@ -66,11 +65,11 @@ func (c *cfg) MaxObjectSize() uint64 {
 	return sz
 }

-func (s *objectSvc) Put() (objectService.PutObjectStream, error) {
+func (s *objectSvc) Put(_ context.Context) (objectService.PutObjectStream, error) {
 	return s.put.Put()
 }

-func (s *objectSvc) Patch() (objectService.PatchObjectStream, error) {
+func (s *objectSvc) Patch(_ context.Context) (objectService.PatchObjectStream, error) {
 	return s.patch.Patch()
 }
@ -137,24 +136,6 @@ func (fn *innerRingFetcherWithNotary) InnerRingKeys() ([][]byte, error) {
 	return result, nil
 }

-type innerRingFetcherWithoutNotary struct {
-	nm *nmClient.Client
-}
-
-func (f *innerRingFetcherWithoutNotary) InnerRingKeys() ([][]byte, error) {
-	keys, err := f.nm.GetInnerRingList()
-	if err != nil {
-		return nil, fmt.Errorf("can't get inner ring keys from netmap contract: %w", err)
-	}
-
-	result := make([][]byte, 0, len(keys))
-	for i := range keys {
-		result = append(result, keys[i].Bytes())
-	}
-
-	return result, nil
-}
-
 func initObjectService(c *cfg) {
 	keyStorage := util.NewKeyStorage(&c.key.PrivateKey, c.privateTokenStore, c.cfgNetmap.state)
@ -223,7 +204,7 @@ func initObjectService(c *cfg) {
 func addPolicer(c *cfg, keyStorage *util.KeyStorage, clientConstructor *cache.ClientCache) {
 	if policerconfig.UnsafeDisable(c.appCfg) {
-		c.log.Warn(logs.FrostFSNodePolicerIsDisabled)
+		c.log.Warn(context.Background(), logs.FrostFSNodePolicerIsDisabled)
 		return
 	}
@ -287,7 +268,7 @@ func addPolicer(c *cfg, keyStorage *util.KeyStorage, clientConstructor *cache.Cl
 			_, err := ls.Inhume(ctx, inhumePrm)
 			if err != nil {
-				c.log.Warn(logs.FrostFSNodeCouldNotInhumeMarkRedundantCopyAsGarbage,
+				c.log.Warn(ctx, logs.FrostFSNodeCouldNotInhumeMarkRedundantCopyAsGarbage,
 					zap.String("error", err.Error()),
 				)
 			}
@ -305,15 +286,10 @@ func addPolicer(c *cfg, keyStorage *util.KeyStorage, clientConstructor *cache.Cl
 }

 func createInnerRingFetcher(c *cfg) v2.InnerRingFetcher {
-	if c.cfgMorph.client.ProbeNotary() {
-		return &innerRingFetcherWithNotary{
-			sidechain: c.cfgMorph.client,
-		}
-	}
-
-	return &innerRingFetcherWithoutNotary{
-		nm: c.cfgNetmap.wrapper,
-	}
+	return &innerRingFetcherWithNotary{
+		sidechain: c.cfgMorph.client,
+	}
 }

 func createReplicator(c *cfg, keyStorage *util.KeyStorage, cache *cache.ClientCache) *replicator.Replicator {
 	ls := c.cfgObject.cfgLocalStorage.localStorage

View file

@ -1,17 +1,18 @@
 package main

 import (
+	"context"
 	"runtime"

 	profilerconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/profiler"
 	httputil "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/http"
 )

-func initProfilerService(c *cfg) {
+func initProfilerService(ctx context.Context, c *cfg) {
 	tuneProfilers(c)

 	pprof, _ := pprofComponent(c)
-	pprof.init(c)
+	pprof.init(ctx, c)
 }

 func pprofComponent(c *cfg) (*httpComponent, bool) {

View file

@ -1,6 +1,7 @@
 package main

 import (
+	"context"
 	"os"
 	"runtime/debug"
@ -9,17 +10,17 @@ import (
 	"go.uber.org/zap"
 )

-func setRuntimeParameters(c *cfg) {
+func setRuntimeParameters(ctx context.Context, c *cfg) {
 	if len(os.Getenv("GOMEMLIMIT")) != 0 {
 		// default limit < yaml limit < app env limit < GOMEMLIMIT
-		c.log.Warn(logs.RuntimeSoftMemoryDefinedWithGOMEMLIMIT)
+		c.log.Warn(ctx, logs.RuntimeSoftMemoryDefinedWithGOMEMLIMIT)
 		return
 	}

 	memLimitBytes := runtime.GCMemoryLimitBytes(c.appCfg)
 	previous := debug.SetMemoryLimit(memLimitBytes)
 	if memLimitBytes != previous {
-		c.log.Info(logs.RuntimeSoftMemoryLimitUpdated,
+		c.log.Info(ctx, logs.RuntimeSoftMemoryLimitUpdated,
 			zap.Int64("new_value", memLimitBytes),
 			zap.Int64("old_value", previous))
 	}

View file

@ -48,7 +48,7 @@ func initSessionService(c *cfg) {
 		_ = c.privateTokenStore.Close()
 	})

-	addNewEpochNotificationHandler(c, func(ev event.Event) {
+	addNewEpochNotificationHandler(c, func(_ context.Context, ev event.Event) {
 		c.privateTokenStore.RemoveOld(ev.(netmap.NewEpoch).EpochNumber())
 	})

View file

@ -13,12 +13,12 @@ import (
 func initTracing(ctx context.Context, c *cfg) {
 	conf, err := tracingconfig.ToTracingConfig(c.appCfg)
 	if err != nil {
-		c.log.Error(logs.FrostFSNodeFailedInitTracing, zap.Error(err))
+		c.log.Error(ctx, logs.FrostFSNodeFailedInitTracing, zap.Error(err))
 		return
 	}

 	_, err = tracing.Setup(ctx, *conf)
 	if err != nil {
-		c.log.Error(logs.FrostFSNodeFailedInitTracing, zap.Error(err))
+		c.log.Error(ctx, logs.FrostFSNodeFailedInitTracing, zap.Error(err))
 		return
 	}
@ -29,7 +29,7 @@ func initTracing(ctx context.Context, c *cfg) {
 			defer cancel()

 			err := tracing.Shutdown(ctx) // cfg context cancels before close
 			if err != nil {
-				c.log.Error(logs.FrostFSNodeFailedShutdownTracing, zap.Error(err))
+				c.log.Error(ctx, logs.FrostFSNodeFailedShutdownTracing, zap.Error(err))
 			}
 		},
 	})

View file

@ -44,7 +44,7 @@ func (c cnrSource) List() ([]cid.ID, error) {
 func initTreeService(c *cfg) {
 	treeConfig := treeconfig.Tree(c.appCfg)
 	if !treeConfig.Enabled() {
-		c.log.Info(logs.FrostFSNodeTreeServiceIsNotEnabledSkipInitialization)
+		c.log.Info(context.Background(), logs.FrostFSNodeTreeServiceIsNotEnabledSkipInitialization)
 		return
 	}
@ -80,10 +80,10 @@ func initTreeService(c *cfg) {
 	}))

 	if d := treeConfig.SyncInterval(); d == 0 {
-		addNewEpochNotificationHandler(c, func(_ event.Event) {
+		addNewEpochNotificationHandler(c, func(ctx context.Context, _ event.Event) {
 			err := c.treeService.SynchronizeAll()
 			if err != nil {
-				c.log.Error(logs.FrostFSNodeCouldNotSynchronizeTreeService, zap.Error(err))
+				c.log.Error(ctx, logs.FrostFSNodeCouldNotSynchronizeTreeService, zap.Error(err))
 			}
 		})
 	} else {
@ -94,7 +94,7 @@ func initTreeService(c *cfg) {
 			for range tick.C {
 				err := c.treeService.SynchronizeAll()
 				if err != nil {
-					c.log.Error(logs.FrostFSNodeCouldNotSynchronizeTreeService, zap.Error(err))
+					c.log.Error(context.Background(), logs.FrostFSNodeCouldNotSynchronizeTreeService, zap.Error(err))
 					if errors.Is(err, tree.ErrShuttingDown) {
 						return
 					}
@ -103,15 +103,15 @@ func initTreeService(c *cfg) {
 		}()
 	}

-	subscribeToContainerRemoval(c, func(e event.Event) {
+	subscribeToContainerRemoval(c, func(ctx context.Context, e event.Event) {
 		ev := e.(containerEvent.DeleteSuccess)

 		// This is executed asynchronously, so we don't care about the operation taking some time.
-		c.log.Debug(logs.FrostFSNodeRemovingAllTreesForContainer, zap.Stringer("cid", ev.ID))
-		err := c.treeService.DropTree(context.Background(), ev.ID, "")
+		c.log.Debug(ctx, logs.FrostFSNodeRemovingAllTreesForContainer, zap.Stringer("cid", ev.ID))
+		err := c.treeService.DropTree(ctx, ev.ID, "")
 		if err != nil && !errors.Is(err, pilorama.ErrTreeNotFound) {
 			// Ignore pilorama.ErrTreeNotFound but other errors, including shard.ErrReadOnly, should be logged.
-			c.log.Error(logs.FrostFSNodeContainerRemovalEventReceivedButTreesWerentRemoved,
+			c.log.Error(ctx, logs.FrostFSNodeContainerRemovalEventReceivedButTreesWerentRemoved,
 				zap.Stringer("cid", ev.ID),
 				zap.String("error", err.Error()))
 		}

View file

@ -2,7 +2,6 @@ package ape

 const (
 	RuleFlag     = "rule"
-	RuleFlagDesc = "Rule statement"
 	PathFlag     = "path"
 	PathFlagDesc = "Path to encoded chain in JSON or binary format"
 	TargetNameFlag = "target-name"
@ -17,3 +16,64 @@ const (
 	ChainNameFlagDesc = "Chain name(ingress|s3)"
 	AllFlag           = "all"
 )
+
+const RuleFlagDesc = `Defines an Access Policy Engine (APE) rule in the format:
+<status>[:status_detail] <action>... <condition>... <resource>...
+
+Status:
+  - allow    Permits specified actions
+  - deny     Prohibits specified actions
+  - deny:QuotaLimitReached    Denies access due to quota limits
+
+Actions:
+  Object operations:
+    - Object.Put, Object.Get, etc.
+    - Object.* (all object operations)
+  Container operations:
+    - Container.Put, Container.Get, etc.
+    - Container.* (all container operations)
+
+Conditions:
+  ResourceCondition:
+    Format: ResourceCondition:"key"=value, "key"!=value
+    Reserved properties (use '\' before '$'):
+      - $Object:version
+      - $Object:objectID
+      - $Object:containerID
+      - $Object:ownerID
+      - $Object:creationEpoch
+      - $Object:payloadLength
+      - $Object:payloadHash
+      - $Object:objectType
+      - $Object:homomorphicHash
+
+  RequestCondition:
+    Format: RequestCondition:"key"=value, "key"!=value
+    Reserved properties (use '\' before '$'):
+      - $Actor:publicKey
+      - $Actor:role
+
+  Example:
+    ResourceCondition:"check_key"!="check_value" RequestCondition:"$Actor:role"=others
+
+Resources:
+  For objects:
+    - namespace/cid/oid (specific object)
+    - namespace/cid/* (all objects in container)
+    - namespace/* (all objects in namespace)
+    - * (all objects)
+    - /* (all objects in root namespace)
+    - /cid/* (all objects in root container)
+    - /cid/oid (specific object in root container)
+
+  For containers:
+    - namespace/cid (specific container)
+    - namespace/* (all containers in namespace)
+    - * (all containers)
+    - /cid (root container)
+    - /* (all root containers)
+
+Notes:
+  - Cannot mix object and container operations in one rule
+  - Default behavior is Any=false unless 'any' is specified
+  - Use 'all' keyword to explicitly set Any=false`

View file
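For orientation, the pieces of the grammar above compose left to right into a single statement. One complete rule it admits (an illustrative example, not taken from this diff) is:

allow Object.Get Object.Put RequestCondition:"$Actor:role"=others namespace/cid/*

Read as: status "allow", two object actions, a request condition on the actor's role, and a resource covering all objects in container "cid" of "namespace" (both placeholder names here).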

@@ -26,6 +26,7 @@ func ExitOnErr(cmd *cobra.Command, errFmt string, err error) {
 _ = iota
 internal
 aclDenied
+apemanagerDenied
 )
 var (
@@ -33,6 +34,7 @@ func ExitOnErr(cmd *cobra.Command, errFmt string, err error) {
 internalErr = new(sdkstatus.ServerInternal)
 accessErr = new(sdkstatus.ObjectAccessDenied)
+apemanagerErr = new(sdkstatus.APEManagerAccessDenied)
 )
 switch {
@@ -41,6 +43,9 @@ func ExitOnErr(cmd *cobra.Command, errFmt string, err error) {
 case errors.As(err, &accessErr):
 code = aclDenied
 err = fmt.Errorf("%w: %s", err, accessErr.Reason())
+case errors.As(err, &apemanagerErr):
+code = apemanagerDenied
+err = fmt.Errorf("%w: %s", err, apemanagerErr.Reason())
 default:
 code = internal
 }
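Given the iota block above, the exit codes work out to internal = 1, aclDenied = 2, and the new apemanagerDenied = 3, so scripts can tell an APE manager denial apart from a generic internal failure by exit status alone.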


@@ -203,6 +203,10 @@ FROSTFS_TRACING_ENABLED=true
 FROSTFS_TRACING_ENDPOINT="localhost"
 FROSTFS_TRACING_EXPORTER="otlp_grpc"
 FROSTFS_TRACING_TRUSTED_CA=""
+FROSTFS_TRACING_ATTRIBUTES_0_KEY=key0
+FROSTFS_TRACING_ATTRIBUTES_0_VALUE=value
+FROSTFS_TRACING_ATTRIBUTES_1_KEY=key1
+FROSTFS_TRACING_ATTRIBUTES_1_VALUE=value
 FROSTFS_RUNTIME_SOFT_MEMORY_LIMIT=1073741824


@@ -259,9 +259,19 @@
 },
 "tracing": {
 "enabled": true,
-"endpoint": "localhost:9090",
+"endpoint": "localhost",
 "exporter": "otlp_grpc",
-"trusted_ca": "/etc/ssl/tracing.pem"
+"trusted_ca": "",
+"attributes":[
+{
+"key": "key0",
+"value": "value"
+},
+{
+"key": "key1",
+"value": "value"
+}
+]
 },
 "runtime": {
 "soft_memory_limit": 1073741824


@@ -239,6 +239,11 @@ tracing:
 exporter: "otlp_grpc"
 endpoint: "localhost"
 trusted_ca: ""
+attributes:
+  - key: key0
+    value: value
+  - key: key1
+    value: value
 runtime:
 soft_memory_limit: 1gb


@@ -78,7 +78,12 @@
 "FROSTFS_STORAGE_SHARD_1_PILORAMA_PATH":"${workspaceFolder}/.cache/storage/s1/pilorama1",
 "FROSTFS_PROMETHEUS_ENABLED":"true",
 "FROSTFS_PROMETHEUS_ADDRESS":"127.0.0.1:9090",
-"FROSTFS_PROMETHEUS_SHUTDOWN_TIMEOUT":"15s"
+"FROSTFS_PROMETHEUS_SHUTDOWN_TIMEOUT":"15s",
+"FROSTFS_TRACING_ENABLED":"true",
+"FROSTFS_TRACING_EXPORTER":"otlp_grpc",
+"FROSTFS_TRACING_ENDPOINT":"127.0.0.1:4317",
+"FROSTFS_TRACING_ATTRIBUTES_0_KEY":"host.ip",
+"FROSTFS_TRACING_ATTRIBUTES_0_VALUE":"127.0.0.1:8080"
 },
 "postDebugTask": "env-down"
 },
@@ -129,7 +134,12 @@
 "FROSTFS_STORAGE_SHARD_1_PILORAMA_PATH":"${workspaceFolder}/.cache/storage/s2/pilorama1",
 "FROSTFS_PROMETHEUS_ENABLED":"true",
 "FROSTFS_PROMETHEUS_ADDRESS":"127.0.0.1:9091",
-"FROSTFS_PROMETHEUS_SHUTDOWN_TIMEOUT":"15s"
+"FROSTFS_PROMETHEUS_SHUTDOWN_TIMEOUT":"15s",
+"FROSTFS_TRACING_ENABLED":"true",
+"FROSTFS_TRACING_EXPORTER":"otlp_grpc",
+"FROSTFS_TRACING_ENDPOINT":"127.0.0.1:4317",
+"FROSTFS_TRACING_ATTRIBUTES_0_KEY":"host.ip",
+"FROSTFS_TRACING_ATTRIBUTES_0_VALUE":"127.0.0.1:8082"
 },
 "postDebugTask": "env-down"
 },
@@ -180,7 +190,12 @@
 "FROSTFS_STORAGE_SHARD_1_PILORAMA_PATH":"${workspaceFolder}/.cache/storage/s3/pilorama1",
 "FROSTFS_PROMETHEUS_ENABLED":"true",
 "FROSTFS_PROMETHEUS_ADDRESS":"127.0.0.1:9092",
-"FROSTFS_PROMETHEUS_SHUTDOWN_TIMEOUT":"15s"
+"FROSTFS_PROMETHEUS_SHUTDOWN_TIMEOUT":"15s",
+"FROSTFS_TRACING_ENABLED":"true",
+"FROSTFS_TRACING_EXPORTER":"otlp_grpc",
+"FROSTFS_TRACING_ENDPOINT":"127.0.0.1:4317",
+"FROSTFS_TRACING_ATTRIBUTES_0_KEY":"host.ip",
+"FROSTFS_TRACING_ATTRIBUTES_0_VALUE":"127.0.0.1:8084"
 },
 "postDebugTask": "env-down"
 },
@@ -231,7 +246,12 @@
 "FROSTFS_STORAGE_SHARD_1_PILORAMA_PATH":"${workspaceFolder}/.cache/storage/s4/pilorama1",
 "FROSTFS_PROMETHEUS_ENABLED":"true",
 "FROSTFS_PROMETHEUS_ADDRESS":"127.0.0.1:9093",
-"FROSTFS_PROMETHEUS_SHUTDOWN_TIMEOUT":"15s"
+"FROSTFS_PROMETHEUS_SHUTDOWN_TIMEOUT":"15s",
+"FROSTFS_TRACING_ENABLED":"true",
+"FROSTFS_TRACING_EXPORTER":"otlp_grpc",
+"FROSTFS_TRACING_ENDPOINT":"127.0.0.1:4317",
+"FROSTFS_TRACING_ATTRIBUTES_0_KEY":"host.ip",
+"FROSTFS_TRACING_ATTRIBUTES_0_VALUE":"127.0.0.1:8086"
 },
 "postDebugTask": "env-down"
 }


@@ -14,3 +14,15 @@ services:
 - ./neo-go/node-wallet.json:/wallets/node-wallet.json
 - ./neo-go/config.yml:/wallets/config.yml
 - ./neo-go/wallet.json:/wallets/wallet.json
+  jaeger:
+    image: jaegertracing/all-in-one:latest
+    container_name: jaeger
+    ports:
+      - '4317:4317' # OTLP over gRPC
+      - '4318:4318' # OTLP over HTTP
+      - '16686:16686' # frontend
+    stop_signal: SIGKILL
+    environment:
+      - COLLECTOR_OTLP_ENABLED=true
+      - SPAN_STORAGE_TYPE=badger
+      - BADGER_EPHEMERAL=true
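With this Jaeger service up, a node can ship spans to it through the tracing section shown in the config examples above; a minimal sketch, assuming the node and the collector share a host (the Jaeger UI is then served on port 16686):

    tracing:
      enabled: true
      exporter: "otlp_grpc"
      endpoint: "localhost:4317"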

go.mod

@@ -4,11 +4,11 @@ go 1.22
 require (
 code.gitea.io/sdk/gitea v0.17.1
-git.frostfs.info/TrueCloudLab/frostfs-contract v0.20.0
+git.frostfs.info/TrueCloudLab/frostfs-contract v0.21.1-0.20241205083807-762d7f9f9f08
 git.frostfs.info/TrueCloudLab/frostfs-crypto v0.6.0
 git.frostfs.info/TrueCloudLab/frostfs-locode-db v0.4.1-0.20240710074952-65761deb5c0d
-git.frostfs.info/TrueCloudLab/frostfs-observability v0.0.0-20240909114314-666d326cc573
+git.frostfs.info/TrueCloudLab/frostfs-observability v0.0.0-20241112082307-f17779933e88
-git.frostfs.info/TrueCloudLab/frostfs-sdk-go v0.0.0-20241107121119-cb813e27a823
+git.frostfs.info/TrueCloudLab/frostfs-sdk-go v0.0.0-20241206094944-81c423e7094d
 git.frostfs.info/TrueCloudLab/hrw v1.2.1
 git.frostfs.info/TrueCloudLab/multinet v0.0.0-20241015075604-6cb0d80e0972
 git.frostfs.info/TrueCloudLab/policy-engine v0.0.0-20240814080254-96225afacb88

go.sum: diff not shown.

@@ -1,6 +1,8 @@
 package audit
 import (
+"context"
 crypto "git.frostfs.info/TrueCloudLab/frostfs-crypto"
 "git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs"
 "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/logger"
@@ -17,15 +19,15 @@ type Target interface {
 String() string
 }
-func LogRequest(log *logger.Logger, operation string, req Request, target Target, status bool) {
+func LogRequest(ctx context.Context, log *logger.Logger, operation string, req Request, target Target, status bool) {
 var key []byte
 if req != nil {
 key = req.GetVerificationHeader().GetBodySignature().GetKey()
 }
-LogRequestWithKey(log, operation, key, target, status)
+LogRequestWithKey(ctx, log, operation, key, target, status)
 }
-func LogRequestWithKey(log *logger.Logger, operation string, key []byte, target Target, status bool) {
+func LogRequestWithKey(ctx context.Context, log *logger.Logger, operation string, key []byte, target Target, status bool) {
 object, subject := NotDefined, NotDefined
 publicKey := crypto.UnmarshalPublicKey(key)
@@ -37,7 +39,7 @@ func LogRequestWithKey(log *logger.Logger, operation string, key []byte, target
 object = target.String()
 }
-log.Info(logs.AuditEventLogRecord,
+log.Info(ctx, logs.AuditEventLogRecord,
 zap.String("operation", operation),
 zap.String("object", object),
 zap.String("subject", subject),


@@ -164,17 +164,9 @@ const (
 EventNotaryParserNotSet = "notary parser not set"
 EventCouldNotParseNotaryEvent = "could not parse notary event"
 EventNotaryHandlersForParsedNotificationEventWereNotRegistered = "notary handlers for parsed notification event were not registered"
-EventIgnoreNilEventParser = "ignore nil event parser"
-EventListenerHasBeenAlreadyStartedIgnoreParser = "listener has been already started, ignore parser"
 EventRegisteredNewEventParser = "registered new event parser"
-EventIgnoreNilEventHandler = "ignore nil event handler"
-EventIgnoreHandlerOfEventWoParser = "ignore handler of event w/o parser"
 EventRegisteredNewEventHandler = "registered new event handler"
-EventIgnoreNilNotaryEventParser = "ignore nil notary event parser"
-EventListenerHasBeenAlreadyStartedIgnoreNotaryParser = "listener has been already started, ignore notary parser"
-EventIgnoreNilNotaryEventHandler = "ignore nil notary event handler"
 EventIgnoreHandlerOfNotaryEventWoParser = "ignore handler of notary event w/o parser"
-EventIgnoreNilBlockHandler = "ignore nil block handler"
 StorageOperation = "local object storage operation"
 BlobovniczaCreatingDirectoryForBoltDB = "creating directory for BoltDB"
 BlobovniczaOpeningBoltDB = "opening BoltDB"


@@ -117,7 +117,7 @@ func (v *FormatValidator) Validate(ctx context.Context, obj *objectSDK.Object, u
 }
 if !unprepared {
-if err := v.validateSignatureKey(obj); err != nil {
+if err := v.validateSignatureKey(ctx, obj); err != nil {
 return fmt.Errorf("(%T) could not validate signature key: %w", v, err)
 }
@@ -134,7 +134,7 @@ func (v *FormatValidator) Validate(ctx context.Context, obj *objectSDK.Object, u
 return nil
 }
-func (v *FormatValidator) validateSignatureKey(obj *objectSDK.Object) error {
+func (v *FormatValidator) validateSignatureKey(ctx context.Context, obj *objectSDK.Object) error {
 sig := obj.Signature()
 if sig == nil {
 return errMissingSignature
@@ -156,7 +156,7 @@ func (v *FormatValidator) validateSignatureKey(obj *objectSDK.Object) error {
 ownerID := obj.OwnerID()
 if token == nil && obj.ECHeader() != nil {
-role, err := v.isIROrContainerNode(obj, binKey)
+role, err := v.isIROrContainerNode(ctx, obj, binKey)
 if err != nil {
 return err
 }
@@ -172,7 +172,7 @@ func (v *FormatValidator) validateSignatureKey(obj *objectSDK.Object) error {
 }
 if v.verifyTokenIssuer {
-role, err := v.isIROrContainerNode(obj, binKey)
+role, err := v.isIROrContainerNode(ctx, obj, binKey)
 if err != nil {
 return err
 }
@@ -190,7 +190,7 @@ func (v *FormatValidator) validateSignatureKey(obj *objectSDK.Object) error {
 return nil
 }
-func (v *FormatValidator) isIROrContainerNode(obj *objectSDK.Object, signerKey []byte) (acl.Role, error) {
+func (v *FormatValidator) isIROrContainerNode(ctx context.Context, obj *objectSDK.Object, signerKey []byte) (acl.Role, error) {
 cnrID, containerIDSet := obj.ContainerID()
 if !containerIDSet {
 return acl.RoleOthers, errNilCID
@@ -204,7 +204,7 @@ func (v *FormatValidator) isIROrContainerNode(obj *objectSDK.Object, signerKey [
 return acl.RoleOthers, fmt.Errorf("failed to get container (id=%s): %w", cnrID.EncodeToString(), err)
 }
-res, err := v.senderClassifier.IsInnerRingOrContainerNode(signerKey, cnrID, cnr.Value)
+res, err := v.senderClassifier.IsInnerRingOrContainerNode(ctx, signerKey, cnrID, cnr.Value)
 if err != nil {
 return acl.RoleOthers, err
 }
View file

@@ -65,7 +65,7 @@ func TestFormatValidator_Validate(t *testing.T) {
 epoch: curEpoch,
 }),
 WithLockSource(ls),
-WithLogger(&logger.Logger{Logger: zaptest.NewLogger(t)}),
+WithLogger(logger.NewLoggerWrapper(zaptest.NewLogger(t))),
 )
 ownerKey, err := keys.NewPrivateKey()
@@ -290,7 +290,7 @@ func TestFormatValidator_ValidateTokenIssuer(t *testing.T) {
 }),
 WithLockSource(ls),
 WithVerifySessionTokenIssuer(false),
-WithLogger(&logger.Logger{Logger: zaptest.NewLogger(t)}),
+WithLogger(logger.NewLoggerWrapper(zaptest.NewLogger(t))),
 )
 tok := sessiontest.Object()
@@ -339,7 +339,7 @@ func TestFormatValidator_ValidateTokenIssuer(t *testing.T) {
 },
 },
 ),
-WithLogger(&logger.Logger{Logger: zaptest.NewLogger(t)}),
+WithLogger(logger.NewLoggerWrapper(zaptest.NewLogger(t))),
 )
 tok := sessiontest.Object()
@@ -417,7 +417,7 @@ func TestFormatValidator_ValidateTokenIssuer(t *testing.T) {
 currentEpoch: curEpoch,
 },
 ),
-WithLogger(&logger.Logger{Logger: zaptest.NewLogger(t)}),
+WithLogger(logger.NewLoggerWrapper(zaptest.NewLogger(t))),
 )
 require.NoError(t, v.Validate(context.Background(), obj, false))
@@ -491,7 +491,7 @@ func TestFormatValidator_ValidateTokenIssuer(t *testing.T) {
 currentEpoch: curEpoch,
 },
 ),
-WithLogger(&logger.Logger{Logger: zaptest.NewLogger(t)}),
+WithLogger(logger.NewLoggerWrapper(zaptest.NewLogger(t))),
 )
 require.NoError(t, v.Validate(context.Background(), obj, false))
@@ -567,7 +567,7 @@ func TestFormatValidator_ValidateTokenIssuer(t *testing.T) {
 currentEpoch: curEpoch,
 },
 ),
-WithLogger(&logger.Logger{Logger: zaptest.NewLogger(t)}),
+WithLogger(logger.NewLoggerWrapper(zaptest.NewLogger(t))),
 )
 require.Error(t, v.Validate(context.Background(), obj, false))


@@ -2,6 +2,7 @@ package object
 import (
 "bytes"
+"context"
 "crypto/sha256"
 "git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs"
@@ -40,6 +41,7 @@ type ClassifyResult struct {
 }
 func (c SenderClassifier) Classify(
+ctx context.Context,
 ownerID *user.ID,
 ownerKey *keys.PublicKey,
 idCnr cid.ID,
@@ -57,14 +59,14 @@ func (c SenderClassifier) Classify(
 }, nil
 }
-return c.IsInnerRingOrContainerNode(ownerKeyInBytes, idCnr, cnr)
+return c.IsInnerRingOrContainerNode(ctx, ownerKeyInBytes, idCnr, cnr)
 }
-func (c SenderClassifier) IsInnerRingOrContainerNode(ownerKeyInBytes []byte, idCnr cid.ID, cnr container.Container) (*ClassifyResult, error) {
+func (c SenderClassifier) IsInnerRingOrContainerNode(ctx context.Context, ownerKeyInBytes []byte, idCnr cid.ID, cnr container.Container) (*ClassifyResult, error) {
 isInnerRingNode, err := c.isInnerRingKey(ownerKeyInBytes)
 if err != nil {
 // do not throw error, try best case matching
-c.log.Debug(logs.V2CantCheckIfRequestFromInnerRing,
+c.log.Debug(ctx, logs.V2CantCheckIfRequestFromInnerRing,
 zap.String("error", err.Error()))
 } else if isInnerRingNode {
 return &ClassifyResult{
@@ -81,7 +83,7 @@ func (c SenderClassifier) IsInnerRingOrContainerNode(ownerKeyInBytes []byte, idC
 // error might happen if request has `RoleOther` key and placement
 // is not possible for previous epoch, so
 // do not throw error, try best case matching
-c.log.Debug(logs.V2CantCheckIfRequestFromContainerNode,
+c.log.Debug(ctx, logs.V2CantCheckIfRequestFromContainerNode,
 zap.String("error", err.Error()))
 } else if isContainerNode {
 return &ClassifyResult{


@@ -8,7 +8,6 @@ type (
 // ContractProcessor interface defines functions for binding event producers
 // such as event.Listener and Timers with contract processor.
 ContractProcessor interface {
-ListenerNotificationParsers() []event.NotificationParserInfo
 ListenerNotificationHandlers() []event.NotificationHandlerInfo
 ListenerNotaryParsers() []event.NotaryParserInfo
 ListenerNotaryHandlers() []event.NotaryHandlerInfo
@@ -16,11 +15,6 @@ type (
 )
 func connectListenerWithProcessor(l event.Listener, p ContractProcessor) {
-// register notification parsers
-for _, parser := range p.ListenerNotificationParsers() {
-l.SetNotificationParser(parser)
-}
 // register notification handlers
 for _, handler := range p.ListenerNotificationHandlers() {
 l.RegisterNotificationHandler(handler)
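Since parsers are no longer registered on their own, each event.NotificationHandlerInfo is expected to carry its parser along with the handlers. A rough sketch of what a processor might now return (all field and helper names below are assumptions for illustration, not the actual struct definition):

    func (p *Processor) ListenerNotificationHandlers() []event.NotificationHandlerInfo {
    	// Field names are assumed; the point is that the parser travels with its handlers.
    	return []event.NotificationHandlerInfo{{
    		Contract: p.contractHash,       // assumed: contract the notification comes from
    		Type:     deleteSuccessType,    // assumed: notification event type
    		Parser:   parseDeleteSuccess,   // assumed: parser registered together with handlers
    		Handlers: []event.Handler{p.handleDeleteSuccess},
    	}}
    }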


@@ -29,7 +29,7 @@ type (
 emitDuration uint32 // in blocks
 }
-depositor func() (util.Uint256, error)
+depositor func(context.Context) (util.Uint256, error)
 awaiter func(context.Context, util.Uint256) error
 )
@@ -66,11 +66,11 @@ func newEpochTimer(args *epochTimerArgs) *timer.BlockTimer {
 )
 }
-func newEmissionTimer(args *emitTimerArgs) *timer.BlockTimer {
+func newEmissionTimer(ctx context.Context, args *emitTimerArgs) *timer.BlockTimer {
 return timer.NewBlockTimer(
 timer.StaticBlockMeter(args.emitDuration),
 func() {
-args.ap.HandleGasEmission(timerEvent.NewAlphabetEmitTick{})
+args.ap.HandleGasEmission(ctx, timerEvent.NewAlphabetEmitTick{})
 },
 )
 }


@@ -35,7 +35,7 @@ import (
 "google.golang.org/grpc"
 )
-func (s *Server) initNetmapProcessor(cfg *viper.Viper,
+func (s *Server) initNetmapProcessor(ctx context.Context, cfg *viper.Viper,
 alphaSync event.Handler,
 ) error {
 locodeValidator, err := s.newLocodeValidator(cfg)
@@ -48,10 +48,13 @@ func (s *Server) initNetmapProcessor(cfg *viper.Viper,
 var netMapCandidateStateValidator statevalidation.NetMapCandidateValidator
 netMapCandidateStateValidator.SetNetworkSettings(netSettings)
+poolSize := cfg.GetInt("workers.netmap")
+s.log.Debug(ctx, logs.NetmapNetmapWorkerPool, zap.Int("size", poolSize))
 s.netmapProcessor, err = netmap.New(&netmap.Params{
 Log: s.log,
 Metrics: s.irMetrics,
-PoolSize: cfg.GetInt("workers.netmap"),
+PoolSize: poolSize,
 NetmapClient: netmap.NewNetmapClient(s.netmapClient),
 EpochTimer: s,
 EpochState: s,
@@ -97,7 +100,7 @@ func (s *Server) initMainnet(ctx context.Context, cfg *viper.Viper, morphChain *
 fromMainChainBlock, err := s.persistate.UInt32(persistateMainChainLastBlockKey)
 if err != nil {
 fromMainChainBlock = 0
-s.log.Warn(logs.InnerringCantGetLastProcessedMainChainBlockNumber, zap.String("error", err.Error()))
+s.log.Warn(ctx, logs.InnerringCantGetLastProcessedMainChainBlockNumber, zap.String("error", err.Error()))
 }
 mainnetChain.from = fromMainChainBlock
@@ -137,12 +140,12 @@ func (s *Server) enableNotarySupport() error {
 return nil
 }
-func (s *Server) initNotaryConfig() {
+func (s *Server) initNotaryConfig(ctx context.Context) {
 s.mainNotaryConfig = notaryConfigs(
 !s.withoutMainNet && s.mainnetClient.ProbeNotary(), // if mainnet disabled then notary flag must be disabled too
 )
-s.log.Info(logs.InnerringNotarySupport,
+s.log.Info(ctx, logs.InnerringNotarySupport,
 zap.Bool("sidechain_enabled", true),
 zap.Bool("mainchain_enabled", !s.mainNotaryConfig.disabled),
 )
@@ -152,8 +155,8 @@ func (s *Server) createAlphaSync(cfg *viper.Viper, frostfsCli *frostfsClient.Cli
 var alphaSync event.Handler
 if s.withoutMainNet || cfg.GetBool("governance.disable") {
-alphaSync = func(event.Event) {
+alphaSync = func(ctx context.Context, _ event.Event) {
-s.log.Debug(logs.InnerringAlphabetKeysSyncIsDisabled)
+s.log.Debug(ctx, logs.InnerringAlphabetKeysSyncIsDisabled)
 }
 } else {
 // create governance processor
@@ -196,16 +199,16 @@ func (s *Server) createIRFetcher() irFetcher {
 return irf
 }
-func (s *Server) initTimers(cfg *viper.Viper) {
+func (s *Server) initTimers(ctx context.Context, cfg *viper.Viper) {
 s.epochTimer = newEpochTimer(&epochTimerArgs{
-newEpochHandlers: s.newEpochTickHandlers(),
+newEpochHandlers: s.newEpochTickHandlers(ctx),
 epoch: s,
 })
 s.addBlockTimer(s.epochTimer)
 // initialize emission timer
-emissionTimer := newEmissionTimer(&emitTimerArgs{
+emissionTimer := newEmissionTimer(ctx, &emitTimerArgs{
 ap: s.alphabetProcessor,
 emitDuration: cfg.GetUint32("timers.emit"),
 })
@@ -213,18 +216,20 @@ func (s *Server) initTimers(cfg *viper.Viper) {
 s.addBlockTimer(emissionTimer)
 }
-func (s *Server) initAlphabetProcessor(cfg *viper.Viper) error {
+func (s *Server) initAlphabetProcessor(ctx context.Context, cfg *viper.Viper) error {
 parsedWallets, err := parseWalletAddressesFromStrings(cfg.GetStringSlice("emit.extra_wallets"))
 if err != nil {
 return err
 }
+poolSize := cfg.GetInt("workers.alphabet")
+s.log.Debug(ctx, logs.AlphabetAlphabetWorkerPool, zap.Int("size", poolSize))
 // create alphabet processor
 s.alphabetProcessor, err = alphabet.New(&alphabet.Params{
 ParsedWallets: parsedWallets,
 Log: s.log,
 Metrics: s.irMetrics,
-PoolSize: cfg.GetInt("workers.alphabet"),
+PoolSize: poolSize,
 AlphabetContracts: s.contracts.alphabet,
 NetmapClient: s.netmapClient,
 MorphClient: s.morphClient,
@@ -239,12 +244,14 @@ func (s *Server) initAlphabetProcessor(cfg *viper.Viper) error {
 return err
 }
-func (s *Server) initContainerProcessor(cfg *viper.Viper, cnrClient *container.Client, frostfsIDClient *frostfsid.Client) error {
+func (s *Server) initContainerProcessor(ctx context.Context, cfg *viper.Viper, cnrClient *container.Client, frostfsIDClient *frostfsid.Client) error {
+poolSize := cfg.GetInt("workers.container")
+s.log.Debug(ctx, logs.ContainerContainerWorkerPool, zap.Int("size", poolSize))
 // container processor
 containerProcessor, err := cont.New(&cont.Params{
 Log: s.log,
 Metrics: s.irMetrics,
-PoolSize: cfg.GetInt("workers.container"),
+PoolSize: poolSize,
 AlphabetState: s,
 ContainerClient: cnrClient,
 MorphClient: cnrClient.Morph(),
@@ -258,12 +265,14 @@ func (s *Server) initContainerProcessor(cfg *viper.Viper, cnrClient *container.C
 return bindMorphProcessor(containerProcessor, s)
 }
-func (s *Server) initBalanceProcessor(cfg *viper.Viper, frostfsCli *frostfsClient.Client) error {
+func (s *Server) initBalanceProcessor(ctx context.Context, cfg *viper.Viper, frostfsCli *frostfsClient.Client) error {
+poolSize := cfg.GetInt("workers.balance")
+s.log.Debug(ctx, logs.BalanceBalanceWorkerPool, zap.Int("size", poolSize))
 // create balance processor
 balanceProcessor, err := balance.New(&balance.Params{
 Log: s.log,
 Metrics: s.irMetrics,
-PoolSize: cfg.GetInt("workers.balance"),
+PoolSize: poolSize,
 FrostFSClient: frostfsCli,
 BalanceSC: s.contracts.balance,
 AlphabetState: s,
@@ -276,15 +285,17 @@ func (s *Server) initBalanceProcessor(cfg *viper.Viper, frostfsCli *frostfsClien
 return bindMorphProcessor(balanceProcessor, s)
 }
-func (s *Server) initFrostFSMainnetProcessor(cfg *viper.Viper) error {
+func (s *Server) initFrostFSMainnetProcessor(ctx context.Context, cfg *viper.Viper) error {
 if s.withoutMainNet {
 return nil
 }
+poolSize := cfg.GetInt("workers.frostfs")
+s.log.Debug(ctx, logs.FrostFSFrostfsWorkerPool, zap.Int("size", poolSize))
 frostfsProcessor, err := frostfs.New(&frostfs.Params{
 Log: s.log,
 Metrics: s.irMetrics,
-PoolSize: cfg.GetInt("workers.frostfs"),
+PoolSize: poolSize,
 FrostFSContract: s.contracts.frostfs,
 BalanceClient: s.balanceClient,
 NetmapClient: s.netmapClient,
@@ -304,10 +315,10 @@ func (s *Server) initFrostFSMainnetProcessor(cfg *viper.Viper) error {
 return bindMainnetProcessor(frostfsProcessor, s)
 }
-func (s *Server) initGRPCServer(cfg *viper.Viper, log *logger.Logger, audit *atomic.Bool) error {
+func (s *Server) initGRPCServer(ctx context.Context, cfg *viper.Viper, log *logger.Logger, audit *atomic.Bool) error {
 controlSvcEndpoint := cfg.GetString("control.grpc.endpoint")
 if controlSvcEndpoint == "" {
-s.log.Info(logs.InnerringNoControlServerEndpointSpecified)
+s.log.Info(ctx, logs.InnerringNoControlServerEndpointSpecified)
 return nil
 }
@@ -369,7 +380,6 @@ func (s *Server) initClientsFromMorph() (*serverMorphClients, error) {
 // form morph container client's options
 morphCnrOpts := make([]container.Option, 0, 3)
 morphCnrOpts = append(morphCnrOpts,
-container.TryNotary(),
 container.AsAlphabet(),
 )
@@ -379,12 +389,12 @@ func (s *Server) initClientsFromMorph() (*serverMorphClients, error) {
 }
 s.containerClient = result.CnrClient
-s.netmapClient, err = nmClient.NewFromMorph(s.morphClient, s.contracts.netmap, fee, nmClient.TryNotary(), nmClient.AsAlphabet())
+s.netmapClient, err = nmClient.NewFromMorph(s.morphClient, s.contracts.netmap, fee, nmClient.AsAlphabet())
 if err != nil {
 return nil, err
 }
-s.balanceClient, err = balanceClient.NewFromMorph(s.morphClient, s.contracts.balance, fee, balanceClient.TryNotary(), balanceClient.AsAlphabet())
+s.balanceClient, err = balanceClient.NewFromMorph(s.morphClient, s.contracts.balance, fee, balanceClient.AsAlphabet())
 if err != nil {
 return nil, err
 }
@@ -403,7 +413,7 @@ func (s *Server) initClientsFromMorph() (*serverMorphClients, error) {
 return result, nil
 }
-func (s *Server) initProcessors(cfg *viper.Viper, morphClients *serverMorphClients) error {
+func (s *Server) initProcessors(ctx context.Context, cfg *viper.Viper, morphClients *serverMorphClients) error {
 irf := s.createIRFetcher()
 s.statusIndex = newInnerRingIndexer(
@@ -418,27 +428,27 @@ func (s *Server) initProcessors(cfg *viper.Viper, morphClients *serverMorphClien
 return err
 }
-err = s.initNetmapProcessor(cfg, alphaSync)
+err = s.initNetmapProcessor(ctx, cfg, alphaSync)
 if err != nil {
 return err
 }
-err = s.initContainerProcessor(cfg, morphClients.CnrClient, morphClients.FrostFSIDClient)
+err = s.initContainerProcessor(ctx, cfg, morphClients.CnrClient, morphClients.FrostFSIDClient)
 if err != nil {
 return err
 }
-err = s.initBalanceProcessor(cfg, morphClients.FrostFSClient)
+err = s.initBalanceProcessor(ctx, cfg, morphClients.FrostFSClient)
 if err != nil {
 return err
 }
-err = s.initFrostFSMainnetProcessor(cfg)
+err = s.initFrostFSMainnetProcessor(ctx, cfg)
 if err != nil {
 return err
 }
-err = s.initAlphabetProcessor(cfg)
+err = s.initAlphabetProcessor(ctx, cfg)
 return err
 }
@@ -446,7 +456,7 @@ func (s *Server) initMorph(ctx context.Context, cfg *viper.Viper, errChan chan<-
 fromSideChainBlock, err := s.persistate.UInt32(persistateSideChainLastBlockKey)
 if err != nil {
 fromSideChainBlock = 0
-s.log.Warn(logs.InnerringCantGetLastProcessedSideChainBlockNumber, zap.String("error", err.Error()))
+s.log.Warn(ctx, logs.InnerringCantGetLastProcessedSideChainBlockNumber, zap.String("error", err.Error()))
 }
 morphChain := &chainParams{
@@ -471,7 +481,7 @@ func (s *Server) initMorph(ctx context.Context, cfg *viper.Viper, errChan chan<-
 return nil, err
 }
 if err := s.morphClient.SetGroupSignerScope(); err != nil {
-morphChain.log.Info(logs.InnerringFailedToSetGroupSignerScope, zap.Error(err))
+morphChain.log.Info(ctx, logs.InnerringFailedToSetGroupSignerScope, zap.Error(err))
 }
 return morphChain, nil


@@ -140,10 +140,10 @@ var (
 // Start runs all event providers.
 func (s *Server) Start(ctx context.Context, intError chan<- error) (err error) {
-s.setHealthStatus(control.HealthStatus_STARTING)
+s.setHealthStatus(ctx, control.HealthStatus_STARTING)
 defer func() {
 if err == nil {
-s.setHealthStatus(control.HealthStatus_READY)
+s.setHealthStatus(ctx, control.HealthStatus_READY)
 }
 }()
@@ -152,12 +152,12 @@ func (s *Server) Start(ctx context.Context, intError chan<- error) (err error) {
 return err
 }
-err = s.initConfigFromBlockchain()
+err = s.initConfigFromBlockchain(ctx)
 if err != nil {
 return err
 }
-if s.IsAlphabet() {
+if s.IsAlphabet(ctx) {
 err = s.initMainNotary(ctx)
 if err != nil {
 return err
@@ -173,14 +173,14 @@ func (s *Server) Start(ctx context.Context, intError chan<- error) (err error) {
 prm.Validators = s.predefinedValidators
 // vote for sidechain validator if it is prepared in config
-err = s.voteForSidechainValidator(prm)
+err = s.voteForSidechainValidator(ctx, prm)
 if err != nil {
 // we don't stop inner ring execution on this error
-s.log.Warn(logs.InnerringCantVoteForPreparedValidators,
+s.log.Warn(ctx, logs.InnerringCantVoteForPreparedValidators,
 zap.String("error", err.Error()))
 }
-s.tickInitialExpoch()
+s.tickInitialExpoch(ctx)
 morphErr := make(chan error)
 mainnnetErr := make(chan error)
@@ -217,14 +217,14 @@ func (s *Server) Start(ctx context.Context, intError chan<- error) (err error) {
 }
 func (s *Server) registerMorphNewBlockEventHandler() {
-s.morphListener.RegisterBlockHandler(func(b *block.Block) {
+s.morphListener.RegisterBlockHandler(func(ctx context.Context, b *block.Block) {
-s.log.Debug(logs.InnerringNewBlock,
+s.log.Debug(ctx, logs.InnerringNewBlock,
 zap.Uint32("index", b.Index),
 )
 err := s.persistate.SetUInt32(persistateSideChainLastBlockKey, b.Index)
 if err != nil {
-s.log.Warn(logs.InnerringCantUpdatePersistentState,
+s.log.Warn(ctx, logs.InnerringCantUpdatePersistentState,
 zap.String("chain", "side"),
 zap.Uint32("block_index", b.Index))
 }
@@ -235,10 +235,10 @@ func (s *Server) registerMorphNewBlockEventHandler() {
 func (s *Server) registerMainnetNewBlockEventHandler() {
 if !s.withoutMainNet {
-s.mainnetListener.RegisterBlockHandler(func(b *block.Block) {
+s.mainnetListener.RegisterBlockHandler(func(ctx context.Context, b *block.Block) {
 err := s.persistate.SetUInt32(persistateMainChainLastBlockKey, b.Index)
 if err != nil {
-s.log.Warn(logs.InnerringCantUpdatePersistentState,
+s.log.Warn(ctx, logs.InnerringCantUpdatePersistentState,
 zap.String("chain", "main"),
 zap.Uint32("block_index", b.Index))
 }
@@ -283,11 +283,11 @@ func (s *Server) initSideNotary(ctx context.Context) error {
 )
 }
-func (s *Server) tickInitialExpoch() {
+func (s *Server) tickInitialExpoch(ctx context.Context) {
 initialEpochTicker := timer.NewOneTickTimer(
 timer.StaticBlockMeter(s.initialEpochTickDelta),
 func() {
-s.netmapProcessor.HandleNewEpochTick(timerEvent.NewEpochTick{})
+s.netmapProcessor.HandleNewEpochTick(ctx, timerEvent.NewEpochTick{})
 })
 s.addBlockTimer(initialEpochTicker)
 }
@@ -299,15 +299,15 @@ func (s *Server) startWorkers(ctx context.Context) {
 }
 // Stop closes all subscription channels.
-func (s *Server) Stop() {
+func (s *Server) Stop(ctx context.Context) {
-s.setHealthStatus(control.HealthStatus_SHUTTING_DOWN)
+s.setHealthStatus(ctx, control.HealthStatus_SHUTTING_DOWN)
 go s.morphListener.Stop()
 go s.mainnetListener.Stop()
 for _, c := range s.closers {
 if err := c(); err != nil {
-s.log.Warn(logs.InnerringCloserError,
+s.log.Warn(ctx, logs.InnerringCloserError,
 zap.String("error", err.Error()),
 )
 }
@@ -349,7 +349,7 @@ func New(ctx context.Context, log *logger.Logger, cfg *viper.Viper, errChan chan
 return nil, err
 }
-server.setHealthStatus(control.HealthStatus_HEALTH_STATUS_UNDEFINED)
+server.setHealthStatus(ctx, control.HealthStatus_HEALTH_STATUS_UNDEFINED)
 // parse notary support
 server.feeConfig = config.NewFeeConfig(cfg)
@@ -376,7 +376,7 @@ func New(ctx context.Context, log *logger.Logger, cfg *viper.Viper, errChan chan
 return nil, err
 }
-server.initNotaryConfig()
+server.initNotaryConfig(ctx)
 err = server.initContracts(cfg)
 if err != nil {
@@ -400,14 +400,14 @@ func New(ctx context.Context, log *logger.Logger, cfg *viper.Viper, errChan chan
 return nil, err
 }
-err = server.initProcessors(cfg, morphClients)
+err = server.initProcessors(ctx, cfg, morphClients)
 if err != nil {
 return nil, err
 }
-server.initTimers(cfg)
+server.initTimers(ctx, cfg)
-err = server.initGRPCServer(cfg, log, audit)
+err = server.initGRPCServer(ctx, cfg, log, audit)
 if err != nil {
 return nil, err
 }
@@ -438,7 +438,7 @@ func createListener(ctx context.Context, cli *client.Client, p *chainParams) (ev
 }
 listener, err := event.NewListener(event.ListenerParams{
-Logger: &logger.Logger{Logger: p.log.With(zap.String("chain", p.name))},
+Logger: p.log.With(zap.String("chain", p.name)),
 Subscriber: sub,
 })
 if err != nil {
@@ -573,7 +573,7 @@ func parseMultinetConfig(cfg *viper.Viper, m metrics.MultinetMetrics) internalNe
 return nc
 }
-func (s *Server) initConfigFromBlockchain() error {
+func (s *Server) initConfigFromBlockchain(ctx context.Context) error {
 // get current epoch
 epoch, err := s.netmapClient.Epoch()
 if err != nil {
@@ -602,9 +602,9 @@ func (s *Server) initConfigFromBlockchain() error {
 return err
 }
-s.log.Debug(logs.InnerringReadConfigFromBlockchain,
+s.log.Debug(ctx, logs.InnerringReadConfigFromBlockchain,
-zap.Bool("active", s.IsActive()),
+zap.Bool("active", s.IsActive(ctx)),
-zap.Bool("alphabet", s.IsAlphabet()),
+zap.Bool("alphabet", s.IsAlphabet(ctx)),
 zap.Uint64("epoch", epoch),
 zap.Uint32("precision", balancePrecision),
 zap.Uint32("init_epoch_tick_delta", s.initialEpochTickDelta),
@@ -635,17 +635,17 @@ func (s *Server) nextEpochBlockDelta() (uint32, error) {
 // onlyAlphabet wrapper around event handler that executes it
 // only if inner ring node is alphabet node.
 func (s *Server) onlyAlphabetEventHandler(f event.Handler) event.Handler {
-return func(ev event.Event) {
+return func(ctx context.Context, ev event.Event) {
-if s.IsAlphabet() {
+if s.IsAlphabet(ctx) {
-f(ev)
+f(ctx, ev)
 }
 }
 }
-func (s *Server) newEpochTickHandlers() []newEpochHandler {
+func (s *Server) newEpochTickHandlers(ctx context.Context) []newEpochHandler {
 newEpochHandlers := []newEpochHandler{
 func() {
-s.netmapProcessor.HandleNewEpochTick(timerEvent.NewEpochTick{})
+s.netmapProcessor.HandleNewEpochTick(ctx, timerEvent.NewEpochTick{})
 },
 }


@@ -28,38 +28,39 @@ const (
 gasDivisor = 2
 )
-func (s *Server) depositMainNotary() (tx util.Uint256, err error) {
+func (s *Server) depositMainNotary(ctx context.Context) (tx util.Uint256, err error) {
 depositAmount, err := client.CalculateNotaryDepositAmount(s.mainnetClient, gasMultiplier, gasDivisor)
 if err != nil {
 return util.Uint256{}, fmt.Errorf("could not calculate main notary deposit amount: %w", err)
 }
 return s.mainnetClient.DepositNotary(
+ctx,
 depositAmount,
 uint32(s.epochDuration.Load())+notaryExtraBlocks,
 )
 }
-func (s *Server) depositSideNotary() (util.Uint256, error) {
+func (s *Server) depositSideNotary(ctx context.Context) (util.Uint256, error) {
 depositAmount, err := client.CalculateNotaryDepositAmount(s.morphClient, gasMultiplier, gasDivisor)
 if err != nil {
 return util.Uint256{}, fmt.Errorf("could not calculate side notary deposit amount: %w", err)
 }
-tx, _, err := s.morphClient.DepositEndlessNotary(depositAmount)
+tx, _, err := s.morphClient.DepositEndlessNotary(ctx, depositAmount)
 return tx, err
 }
-func (s *Server) notaryHandler(_ event.Event) {
+func (s *Server) notaryHandler(ctx context.Context, _ event.Event) {
 if !s.mainNotaryConfig.disabled {
-_, err := s.depositMainNotary()
+_, err := s.depositMainNotary(ctx)
 if err != nil {
-s.log.Error(logs.InnerringCantMakeNotaryDepositInMainChain, zap.Error(err))
+s.log.Error(ctx, logs.InnerringCantMakeNotaryDepositInMainChain, zap.Error(err))
 }
 }
-if _, err := s.depositSideNotary(); err != nil {
+if _, err := s.depositSideNotary(ctx); err != nil {
-s.log.Error(logs.InnerringCantMakeNotaryDepositInSideChain, zap.Error(err))
+s.log.Error(ctx, logs.InnerringCantMakeNotaryDepositInSideChain, zap.Error(err))
 }
 }
@@ -72,7 +73,7 @@ func (s *Server) awaitSideNotaryDeposit(ctx context.Context, tx util.Uint256) er
 }
 func (s *Server) initNotary(ctx context.Context, deposit depositor, await awaiter, msg string) error {
-tx, err := deposit()
+tx, err := deposit(ctx)
 if err != nil {
 return err
 }
@@ -81,11 +82,11 @@ func (s *Server) initNotary(ctx context.Context, deposit depositor, await awaite
 // non-error deposit with an empty TX hash means
 // that the deposit has already been made; no
 // need to wait it.
-s.log.Info(logs.InnerringNotaryDepositHasAlreadyBeenMade)
+s.log.Info(ctx, logs.InnerringNotaryDepositHasAlreadyBeenMade)
 return nil
 }
-s.log.Info(msg)
+s.log.Info(ctx, msg)
 return await(ctx, tx)
 }
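Both deposit methods now satisfy the updated depositor type from the timers file above, so the existing wiring into initNotary stays shape-compatible; a sketch under assumed call-site details (the actual message string and argument choice are not shown in this diff):

    err := s.initNotary(ctx,
    	s.depositSideNotary,      // depositor: func(context.Context) (util.Uint256, error)
    	s.awaitSideNotaryDeposit, // awaiter: func(context.Context, util.Uint256) error
    	"waiting to accept side notary deposit", // assumed message
    )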


@@ -1,6 +1,8 @@
 package alphabet
 import (
+"context"
 "git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs"
 "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/innerring/processors"
 "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/innerring/timers"
@@ -8,16 +10,16 @@ import (
 "go.uber.org/zap"
 )
-func (ap *Processor) HandleGasEmission(ev event.Event) {
+func (ap *Processor) HandleGasEmission(ctx context.Context, ev event.Event) {
 _ = ev.(timers.NewAlphabetEmitTick)
-ap.log.Info(logs.AlphabetTick, zap.String("type", "alphabet gas emit"))
+ap.log.Info(ctx, logs.AlphabetTick, zap.String("type", "alphabet gas emit"))
 // send event to the worker pool
-err := processors.SubmitEvent(ap.pool, ap.metrics, "alphabet_emit_gas", ap.processEmit)
+err := processors.SubmitEvent(ap.pool, ap.metrics, "alphabet_emit_gas", func() bool { return ap.processEmit(ctx) })
 if err != nil {
 // there system can be moved into controlled degradation stage
-ap.log.Warn(logs.AlphabetAlphabetProcessorWorkerPoolDrained,
+ap.log.Warn(ctx, logs.AlphabetAlphabetProcessorWorkerPoolDrained,
 zap.Int("capacity", ap.pool.Cap()))
 }
 }
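Note the pattern here: the worker-pool helper still takes a plain func() bool, so the handler's context is carried into the task by closure capture rather than by widening the pool API. The same idiom recurs in the other processors in this change:

    // ctx comes from the handler signature; the pool itself never sees it.
    err := processors.SubmitEvent(ap.pool, ap.metrics, "alphabet_emit_gas", func() bool {
    	return ap.processEmit(ctx)
    })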


@@ -1,11 +1,13 @@
 package alphabet_test
 import (
+"context"
 "testing"
 "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/innerring"
 "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/innerring/processors/alphabet"
 "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/innerring/timers"
+"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client"
 "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/logger/test"
 "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/netmap"
 "github.com/nspcc-dev/neo-go/pkg/crypto/keys"
@@ -60,7 +62,7 @@ func TestProcessorEmitsGasToNetmapAndAlphabet(t *testing.T) {
 processor, err := alphabet.New(params)
 require.NoError(t, err, "failed to create processor instance")
-processor.HandleGasEmission(timers.NewAlphabetEmitTick{})
+processor.HandleGasEmission(context.Background(), timers.NewAlphabetEmitTick{})
 processor.WaitPoolRunning()
@@ -137,7 +139,7 @@ func TestProcessorEmitsGasToNetmapIfNoParsedWallets(t *testing.T) {
 processor, err := alphabet.New(params)
 require.NoError(t, err, "failed to create processor instance")
-processor.HandleGasEmission(timers.NewAlphabetEmitTick{})
+processor.HandleGasEmission(context.Background(), timers.NewAlphabetEmitTick{})
 processor.WaitPoolRunning()
@@ -198,7 +200,7 @@ func TestProcessorDoesntEmitGasIfNoNetmapOrParsedWallets(t *testing.T) {
 processor, err := alphabet.New(params)
 require.NoError(t, err, "failed to create processor instance")
-processor.HandleGasEmission(timers.NewAlphabetEmitTick{})
+processor.HandleGasEmission(context.Background(), timers.NewAlphabetEmitTick{})
 processor.WaitPoolRunning()
@@ -219,7 +221,7 @@ type testIndexer struct {
 index int
 }
-func (i *testIndexer) AlphabetIndex() int {
+func (i *testIndexer) AlphabetIndex(context.Context) int {
 return i.index
 }
@@ -246,7 +248,7 @@ type testMorphClient struct {
 batchTransferedGas []batchTransferGas
 }
-func (c *testMorphClient) Invoke(contract util.Uint160, fee fixedn.Fixed8, method string, args ...any) (uint32, error) {
+func (c *testMorphClient) Invoke(_ context.Context, contract util.Uint160, fee fixedn.Fixed8, method string, args ...any) (client.InvokeRes, error) {
 c.invokedMethods = append(c.invokedMethods,
 invokedMethod{
 contract: contract,
@@ -254,7 +256,7 @@ func (c *testMorphClient) Invoke(contract util.Uint160, fee fixedn.Fixed8, metho
 method: method,
 args: args,
 })
-return 0, nil
+return client.InvokeRes{}, nil
 }
 func (c *testMorphClient) TransferGas(receiver util.Uint160, amount fixedn.Fixed8) error {


@@ -1,6 +1,7 @@
 package alphabet
 import (
+"context"
 "crypto/elliptic"
 "git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs"
@@ -13,39 +14,39 @@ import (
 const emitMethod = "emit"
-func (ap *Processor) processEmit() bool {
+func (ap *Processor) processEmit(ctx context.Context) bool {
-index := ap.irList.AlphabetIndex()
+index := ap.irList.AlphabetIndex(ctx)
 if index < 0 {
-ap.log.Info(logs.AlphabetNonAlphabetModeIgnoreGasEmissionEvent)
+ap.log.Info(ctx, logs.AlphabetNonAlphabetModeIgnoreGasEmissionEvent)
 return true
 }
 contract, ok := ap.alphabetContracts.GetByIndex(index)
 if !ok {
-ap.log.Debug(logs.AlphabetNodeIsOutOfAlphabetRangeIgnoreGasEmissionEvent,
+ap.log.Debug(ctx, logs.AlphabetNodeIsOutOfAlphabetRangeIgnoreGasEmissionEvent,
 zap.Int("index", index))
 return false
 }
 // there is no signature collecting, so we don't need extra fee
-_, err := ap.morphClient.Invoke(contract, 0, emitMethod)
+_, err := ap.morphClient.Invoke(ctx, contract, 0, emitMethod)
 if err != nil {
-ap.log.Warn(logs.AlphabetCantInvokeAlphabetEmitMethod, zap.String("error", err.Error()))
+ap.log.Warn(ctx, logs.AlphabetCantInvokeAlphabetEmitMethod, zap.String("error", err.Error()))
 return false
 }
 if ap.storageEmission == 0 {
-ap.log.Info(logs.AlphabetStorageNodeEmissionIsOff)
+ap.log.Info(ctx, logs.AlphabetStorageNodeEmissionIsOff)
 return true
 }
 networkMap, err := ap.netmapClient.NetMap()
 if err != nil {
-ap.log.Warn(logs.AlphabetCantGetNetmapSnapshotToEmitGasToStorageNodes,
+ap.log.Warn(ctx, logs.AlphabetCantGetNetmapSnapshotToEmitGasToStorageNodes,
 zap.String("error", err.Error()))
 return false
@@ -58,7 +59,7 @@ func (ap *Processor) processEmit() bool {
 ap.pwLock.RUnlock()
 extraLen := len(pw)
-ap.log.Debug(logs.AlphabetGasEmission,
+ap.log.Debug(ctx, logs.AlphabetGasEmission,
 zap.Int("network_map", nmLen),
 zap.Int("extra_wallets", extraLen))
@@ -68,20 +69,20 @@ func (ap *Processor) processEmit() bool {
 gasPerNode := fixedn.Fixed8(ap.storageEmission / uint64(nmLen+extraLen))
-ap.transferGasToNetmapNodes(nmNodes, gasPerNode)
+ap.transferGasToNetmapNodes(ctx, nmNodes, gasPerNode)
-ap.transferGasToExtraNodes(pw, gasPerNode)
+ap.transferGasToExtraNodes(ctx, pw, gasPerNode)
 return true
 }
-func (ap *Processor) transferGasToNetmapNodes(nmNodes []netmap.NodeInfo, gasPerNode fixedn.Fixed8) {
+func (ap *Processor) transferGasToNetmapNodes(ctx context.Context, nmNodes []netmap.NodeInfo, gasPerNode fixedn.Fixed8) {
 for i := range nmNodes {
 keyBytes := nmNodes[i].PublicKey()
 key, err := keys.NewPublicKeyFromBytes(keyBytes, elliptic.P256())
 if err != nil {
-ap.log.Warn(logs.AlphabetCantParseNodePublicKey,
+ap.log.Warn(ctx, logs.AlphabetCantParseNodePublicKey,
 zap.String("error", err.Error()))
 continue
@@ -89,7 +90,7 @@ func (ap *Processor) transferGasToNetmapNodes(nmNodes []netmap.NodeInfo, gasPerN
 err = ap.morphClient.TransferGas(key.GetScriptHash(), gasPerNode)
 if err != nil {
-ap.log.Warn(logs.AlphabetCantTransferGas,
+ap.log.Warn(ctx, logs.AlphabetCantTransferGas,
 zap.String("receiver", key.Address()),
 zap.Int64("amount", int64(gasPerNode)),
 zap.String("error", err.Error()),
@@ -98,7 +99,7 @@ func (ap *Processor) transferGasToNetmapNodes(nmNodes []netmap.NodeInfo, gasPerN
 }
 }
-func (ap *Processor) transferGasToExtraNodes(pw []util.Uint160, gasPerNode fixedn.Fixed8) {
+func (ap *Processor) transferGasToExtraNodes(ctx context.Context, pw []util.Uint160, gasPerNode fixedn.Fixed8) {
 if len(pw) > 0 {
 err := ap.morphClient.BatchTransferGas(pw, gasPerNode)
 if err != nil {
@@ -106,7 +107,7 @@ func (ap *Processor) transferGasToExtraNodes(pw []util.Uint160, gasPerNode fixed
 for i, addr := range pw {
 receiversLog[i] = addr.StringLE()
 }
-ap.log.Warn(logs.AlphabetCantTransferGasToWallet,
+ap.log.Warn(ctx, logs.AlphabetCantTransferGasToWallet,
 zap.Strings("receivers", receiversLog),
 zap.Int64("amount", int64(gasPerNode)),
 zap.String("error", err.Error()),


@@ -1,26 +1,26 @@
 package alphabet
 import (
+"context"
 "errors"
 "fmt"
 "sync"
 "time"
-"git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs"
 "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/innerring/metrics"
+"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client"
 "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/event"
 "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/logger"
 "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/netmap"
 "github.com/nspcc-dev/neo-go/pkg/encoding/fixedn"
 "github.com/nspcc-dev/neo-go/pkg/util"
 "github.com/panjf2000/ants/v2"
-"go.uber.org/zap"
 )
 type (
 // Indexer is a callback interface for inner ring global state.
 Indexer interface {
-AlphabetIndex() int
+AlphabetIndex(context.Context) int
 }
 // Contracts is an interface of the storage
@@ -40,7 +40,7 @@ type (
 }
 morphClient interface {
-Invoke(contract util.Uint160, fee fixedn.Fixed8, method string, args ...any) (uint32, error)
+Invoke(ctx context.Context, contract util.Uint160, fee fixedn.Fixed8, method string, args ...any) (client.InvokeRes, error)
 TransferGas(receiver util.Uint160, amount fixedn.Fixed8) error
 BatchTransferGas(receivers []util.Uint160, amount fixedn.Fixed8) error
 }
@@ -85,8 +85,6 @@ func New(p *Params) (*Processor, error) {
 return nil, errors.New("ir/alphabet: global state is not set")
 }
-p.Log.Debug(logs.AlphabetAlphabetWorkerPool, zap.Int("size", p.PoolSize))
 pool, err := ants.NewPool(p.PoolSize, ants.WithNonblocking(true))
 if err != nil {
 return nil, fmt.Errorf("ir/frostfs: can't create worker pool: %w", err)
@@ -116,11 +114,6 @@ func (ap *Processor) SetParsedWallets(parsedWallets []util.Uint160) {
 ap.pwLock.Unlock()
 }
-// ListenerNotificationParsers for the 'event.Listener' event producer.
-func (ap *Processor) ListenerNotificationParsers() []event.NotificationParserInfo {
-return nil
-}
 // ListenerNotificationHandlers for the 'event.Listener' event producer.
 func (ap *Processor) ListenerNotificationHandlers() []event.NotificationHandlerInfo {
 return nil


@@ -1,6 +1,7 @@
 package balance
 import (
+"context"
 "encoding/hex"
 "git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs"
@@ -10,20 +11,20 @@ import (
 "go.uber.org/zap"
 )
-func (bp *Processor) handleLock(ev event.Event) {
+func (bp *Processor) handleLock(ctx context.Context, ev event.Event) {
 lock := ev.(balanceEvent.Lock)
-bp.log.Info(logs.Notification,
+bp.log.Info(ctx, logs.Notification,
 zap.String("type", "lock"),
 zap.String("value", hex.EncodeToString(lock.ID())))
 // send an event to the worker pool
 err := processors.SubmitEvent(bp.pool, bp.metrics, "lock", func() bool {
-return bp.processLock(&lock)
+return bp.processLock(ctx, &lock)
 })
 if err != nil {
 // there system can be moved into controlled degradation stage
-bp.log.Warn(logs.BalanceBalanceWorkerPoolDrained,
+bp.log.Warn(ctx, logs.BalanceBalanceWorkerPoolDrained,
 zap.Int("capacity", bp.pool.Cap()))
 }
 }
} }

Some files were not shown because too many files have changed in this diff.