Compare commits

...

103 commits

Author SHA1 Message Date
c98357606b
[#1606] Use slices.Clone()/bytes.Clone() where possible
gopatch:
```
@@
var from, to expression
@@
+import "bytes"
-to := make([]byte, len(from))
-copy(to, from)
+to := bytes.Clone(from)

@@
var from, to expression
@@
+import "bytes"
-to = make([]byte, len(from))
-copy(to, from)
+to = bytes.Clone(from)

@@
var from, to, typ expression
@@
+import "slices"
-to := make([]typ, len(from))
-copy(to, from)
+to := slices.Clone(from)

@@
var from, to, typ expression
@@
+import "slices"
-to = make([]typ, len(from))
-copy(to, from)
+to = slices.Clone(from)
```
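
For reference, a runnable sketch of the before/after idiom this patch rewrites (variable names are illustrative, not taken from the codebase):

```go
package main

import (
	"bytes"
	"fmt"
	"slices"
)

func main() {
	from := []byte("payload")

	// Before the patch: manual allocate-and-copy.
	to := make([]byte, len(from))
	copy(to, from)

	// After the patch: equivalent stdlib one-liners
	// (bytes.Clone since Go 1.20, slices.Clone since Go 1.21).
	to2 := bytes.Clone(from)
	ids := slices.Clone([]int{1, 2, 3})

	fmt.Println(string(to), string(to2), ids)
}
```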

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2025-01-17 14:50:14 +03:00
80de5d70bf [#1593] node: Fix initialization of ape_chain cache
Signed-off-by: Alexander Chuprov <a.chuprov@yadro.com>
2025-01-17 08:58:47 +00:00
57efa0bc8e
[#1604] policer: Properly handle maintenance nodes
Consider `REP 1 REP 1` placement (selects/filters are omitted).
The placement is `[1, 2], [1, 0]`. We are node 0.
Node 1 is under maintenance, so we do not replicate the object
to node 2. In the second replication group node 1 is under maintenance too,
but the current caching logic treats it as a "replica holder" and removes the
local copy. Voilà: data loss if the object is missing from node 1.
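
A minimal sketch of the status-based bookkeeping idea behind this fix; all names and types below are hypothetical, not the policer's actual code:

```go
package main

import "fmt"

// nodeStatus replaces a bare bool: "responded" is no longer conflated
// with "confirmed replica holder".
type nodeStatus int

const (
	statusUnknown nodeStatus = iota
	statusHoldsReplica
	statusMaintenance
)

// shouldRemoveLocalCopy drops the local copy only when enough nodes in the
// replication group are confirmed holders; maintenance nodes must not count.
func shouldRemoveLocalCopy(statuses map[int]nodeStatus, group []int, required int) bool {
	confirmed := 0
	for _, id := range group {
		if statuses[id] == statusHoldsReplica {
			confirmed++
		}
	}
	return confirmed >= required
}

func main() {
	// Node 1 is under maintenance in the group [1, 0]: the local copy on
	// node 0 must be kept, because nothing is confirmed to hold a replica.
	statuses := map[int]nodeStatus{1: statusMaintenance}
	fmt.Println(shouldRemoveLocalCopy(statuses, []int{1, 0}, 1)) // false
}
```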

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2025-01-16 16:37:52 +03:00
26e0c82fb8
[#1604] policer/test: Add test for MAINTENANCE runtime status
The node can have MAINTENANCE status in the network map, but can also be
ONLINE while responding with MAINTENANCE. These are 2 different code
paths, let's test them separately.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2025-01-16 16:37:16 +03:00
4538ccb12a
[#1604] policer: Do not process the same node twice
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2025-01-16 16:37:16 +03:00
84e1599997
[#1604] policer: Remove one-line helpers
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2025-01-16 16:37:16 +03:00
5a270e2e61
[#1604] policer: Use status instead of bool value in node cache
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2025-01-16 16:37:16 +03:00
436d65d784 [#1591] Build and host OCI images on our own infra
Similar to TrueCloudLab/frostfs-s3-gw#587,
this PR introduces a CI pipeline that builds Docker images and pushes them
to our self-hosted registry.

Signed-off-by: Vitaliy Potyarkin <v.potyarkin@yadro.com>
2025-01-16 07:46:53 +00:00
c3c034ecca [#1601] util: Correctly parse 'root' name for container resources
* Convert `root/*` to `//`;
* Add a unit test case for the parser to check parsing correctness.

Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2025-01-15 12:13:02 +00:00
05fd999162
[#1600] fstree: Handle incomplete writes
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2025-01-14 14:52:35 +03:00
eff95bd632
[#1598] engine: Drop unnecessary result structs
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2025-01-14 11:15:21 +03:00
fb928616cc
[#1598] golangci: Enable unparam linter
To drop unnecessary parameters and return values.

Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2025-01-14 09:06:47 +03:00
4d5ae59a52
[#1598] golangci: Enable unconvert linters
To drop unnecessary conversions.

Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2025-01-14 09:06:40 +03:00
a9f27e074b [#1243] object: Look for X-Headers within origin before APE check
* X-Headers can be found in the `origin` field of `MetaHeader` if the request
  has been forwarded from a non-container node.

Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2025-01-13 12:07:27 +00:00
6c51f48aab [#1596] metrics: Create public aliases for internal engine metrics
Signed-off-by: Anton Nikiforov <an.nikiforov@yadro.com>
2025-01-13 10:05:01 +00:00
a2485637bb
[#1593] node/config_example: Add description of morph/cache_ttl=0 behavior
Signed-off-by: Alexander Chuprov <a.chuprov@yadro.com>
2025-01-10 15:13:10 +03:00
09faca034c
[#1593] node: Fix initialization of frostfsid cache
Signed-off-by: Alexander Chuprov <a.chuprov@yadro.com>
2025-01-10 15:01:36 +03:00
ceac1c8709
[#1594] dev: Remove unused parameter 'FROSTFS_MORPH_INACTIVITY_TIMEOUT'
Signed-off-by: Alexander Chuprov <a.chuprov@yadro.com>
2025-01-09 20:52:24 +03:00
f7e75b13b0 [#1506] ape_manager: Await tx persist before returning response
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2025-01-09 12:04:21 +00:00
198aaebc94 [#1506] morph: Simplify WaitTxHalt() signature
Avoid a dependency on the `morph/client` package caused by `InvokeRes`.
Make the signature resemble the `WaitAny()` method of `waiter.Waiter` from neo-go.
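
A sketch of what such a signature might look like; this shape is an assumption based on the message above, not the repository's actual code:

```go
package morphsketch // illustrative package, not part of the repository

import (
	"context"

	"github.com/nspcc-dev/neo-go/pkg/core/state"
	"github.com/nspcc-dev/neo-go/pkg/util"
)

// Assumed shape only: the tx hash and its valid-until-block are passed
// directly, mirroring waiter.Waiter.WaitAny from neo-go, so callers no
// longer need morph/client.InvokeRes.
type TxWaiter interface {
	WaitTxHalt(ctx context.Context, vub uint32, h util.Uint256) (*state.AppExecResult, error)
}
```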

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2025-01-09 12:04:21 +00:00
85af6bcd5c [#1506] ape: Use contract reader in ListMorphRuleChains()
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2025-01-09 12:04:21 +00:00
8a658de0b2 [#1506] ape: Do not create cosigners slice on each contract invocation
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2025-01-09 12:04:21 +00:00
3900b92927
Revert "[#1492] metabase: Ensure Unmarshal() is called on a cloned slice"
This reverts commit 8ed7a676d5.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2025-01-09 14:34:20 +03:00
5ccb3394b4
[#1592] go.mod: Update sdk-go
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2025-01-09 14:34:16 +03:00
dc410fca90 [#1590] adm: Accept many accounts in proxy-* commands
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2025-01-09 07:51:54 +00:00
cddcd73f04 [#1590] adm: Make --account flag required in proxy-* commands
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2025-01-09 07:51:54 +00:00
d7fcc5ce30 [#1586] objsvc: Allow to send search response in multiple messages
Previously, `ln` was only set once, so search really worked only for a
small number of objects.

Fix panic:
```
panic: runtime error: slice bounds out of range [:43690] with capacity 21238
goroutine 6859775 [running]:
git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/object.(*searchStreamMsgSizeCtrl).Send(0xc001eec8d0, 0xc005734000)
        git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/object/transport_splitter.go:173 +0x1f0
git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/object/search/v2.(*streamWriter).WriteIDs(0xc000520320, {0xc00eb1a000, 0x4fd9c, 0x7fd6475a9a68?})
        git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/object/search/v2/streamer.go:28 +0x155
git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/object/search.(*uniqueIDWriter).WriteIDs(0xc001386420, {0xc00eb1a000?, 0xc0013ea9c0?, 0x113eef3?})
        git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/object/search/util.go:62 +0x202
git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/object/search.(*execCtx).writeIDList(0xc00011aa38?, {0xc00eb1a000?, 0xc001eec9f0?, 0xc0008f4380?})
        git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/object/search/exec.go:68 +0x91
git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/object/search.(*execCtx).executeLocal(0xc0008f4380, {0x176c538, 0xc001eec9f0})
        git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/object/search/local.go:18 +0x16b
```
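
A minimal reproduction of the bug class fixed here; the cap constant and helper below are illustrative, not the service's real code:

```go
package main

import "fmt"

const maxIDsPerMessage = 4 // illustrative cap; the real limit comes from the gRPC message size

// sendInBatches recomputes the batch length on every pass. Computing it once
// before the loop is exactly the "ln was only set once" bug: the slice
// expression eventually overruns the remaining tail and panics.
func sendInBatches(ids []int, send func([]int) error) error {
	for len(ids) > 0 {
		ln := min(len(ids), maxIDsPerMessage)
		if err := send(ids[:ln]); err != nil {
			return err
		}
		ids = ids[ln:]
	}
	return nil
}

func main() {
	ids := []int{1, 2, 3, 4, 5, 6, 7, 8, 9}
	_ = sendInBatches(ids, func(batch []int) error {
		fmt.Println(batch)
		return nil
	})
}
```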

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-12-28 12:29:22 +00:00
c0221d76e6 [#1577] node/container: Fix typo
Signed-off-by: Ekaterina Lebedeva <ekaterina.lebedeva@yadro.com>
2024-12-28 12:05:01 +03:00
242f0095d0 [#1577] container: Reduce iterations through container list
* Separated iteration through container IDs from `ContainersOf()`
  so that it can be reused.
* When listing containers, we used to iterate through the
  whole list of containers twice: first when reading from
  a contract, then when sending them. Now we can send batches
  of containers while reading from the contract (see the sketch below).
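
A rough sketch of the single-pass batching idea referenced above; all names below are hypothetical:

```go
package main

import "fmt"

// iterateIDs streams IDs from a source (e.g. a contract iterator) and flushes
// them downstream in fixed-size batches, avoiding a second full pass.
func iterateIDs(next func() (int, bool), batchSize int, flush func([]int) error) error {
	batch := make([]int, 0, batchSize)
	for {
		id, ok := next()
		if !ok {
			break
		}
		batch = append(batch, id)
		if len(batch) == batchSize {
			if err := flush(batch); err != nil {
				return err
			}
			batch = batch[:0]
		}
	}
	if len(batch) > 0 {
		return flush(batch)
	}
	return nil
}

func main() {
	i := 0
	next := func() (int, bool) { i++; return i, i <= 7 }
	_ = iterateIDs(next, 3, func(b []int) error { fmt.Println(b); return nil })
}
```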

Signed-off-by: Ekaterina Lebedeva <ekaterina.lebedeva@yadro.com>
2024-12-27 15:30:26 +03:00
6fe34d266a [#1577] morph: Fix typo
Signed-off-by: Ekaterina Lebedeva <ekaterina.lebedeva@yadro.com>
2024-12-27 14:03:19 +03:00
fa08bfa553
[#1583] metabase/test: Update TestLisObjectsWithCursor
Update this test following recent changes to ensure
that `(*DB).ListWithCursor` skips expired objects.

Signed-off-by: Aleksey Savchuk <a.savchuk@yadro.com>
2024-12-26 14:39:50 +03:00
0da998ef50
[#1583] metabase: Skip expired objects in ListWithCursor
Signed-off-by: Aleksey Savchuk <a.savchuk@yadro.com>
2024-12-26 14:39:49 +03:00
e44782473a [#1512] object: Fix writePart for EC-container
* Immediately return after an `ObjectAlreadyRemoved` error.

Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2024-12-26 11:27:55 +00:00
9cd1bcef06 [#1512] object: Make raw PutSingle check status within response
Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2024-12-26 11:27:55 +00:00
ca0a33ea0f [#465] objsvc: Set NETMAP_EPOCH xheader for auxiliary requests
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-12-26 09:17:58 +00:00
f6c5222952 [#1581] services/session: Use user.ID.EncodeToString() where possible
gopatch:
```
@@
var id expression
@@
-base58.Encode(id.WalletBytes())
+id.EncodeToString()
```

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-12-25 18:09:36 +00:00
ea868e09f8
[#1582] adm: Use int64 type and the default value for --till flag
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-12-25 14:22:28 +03:00
31d3d299bf
[#1582] adm: Unify prompts for reading a password
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-12-25 14:22:28 +03:00
b5b4f78b49
[#1582] adm: Allow using the default account in deposit-notary
It has never worked, actually.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-12-25 14:22:28 +03:00
2832f44437 [#1531] metrics: Rename app_info metric
Signed-off-by: Ekaterina Lebedeva <ekaterina.lebedeva@yadro.com>
2024-12-23 10:40:18 +00:00
7c3bcb0f44
[#1578] Makefile: Refill GAS with a single command in env-up
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-12-23 11:17:22 +03:00
e64871c3fd
[#1578] adm: Allow to transfer GAS to multiple recipients
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-12-23 11:17:22 +03:00
303cd35a01
[#1578] adm: Remove unnecessary comments in RefillGasCmd
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-12-23 11:17:22 +03:00
bb9ba1bce2
[#1578] adm: Remove bool flag from refillGas()
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-12-23 11:17:22 +03:00
db03742d33
[#1578] adm: Reword help message for morph refill-gas
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-12-23 11:17:22 +03:00
148d68933b [#1573] node: Simplify bootstrapWithState()
After #1382 we have no need to use lambdas.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-12-20 08:17:05 +00:00
51ee132ea3
[#1342] network/cache: Add node address to multiClient errors
Signed-off-by: Alexander Chuprov <a.chuprov@yadro.com>
2024-12-18 19:27:35 +03:00
226dd25dd0 [#1568] pilorama: Replace "containerID" with "container ID" in the error message
It is "container ID" in every other place.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-12-18 15:52:26 +00:00
bd0197eaa8 [#1568] storage: Remove "could not/can't/failed to" from error messages
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-12-18 15:52:26 +00:00
e44b84c18c
[#1569] cli: Remove unnecessary variable after refactoring
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-12-18 10:17:04 +03:00
bed49e6ace
[#1569] cli: Make --range flag required in object hash
Previously, `object head` was used if no range was provided.
This is wrong on multiple levels:
1. We print an error if the checksum is missing from the header,
   even though computing the hash is possible.
2. We silently ignore the --salt parameter.
3. `--range` is required for the Object.RANGEHASH RPC; custom logic for one
   specific use case has no value.

So we make the flag required and make the CLI command follow
the FrostFS API more closely.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-12-18 10:17:04 +03:00
df05057ed4 [#1452] container: Add ListStream method
* Added a new method for listing containers to the container service.
  It opens a stream and sends containers in batches.

* Added a TransportSplitter wrapper around ExecutionService to
  split the container ID list read from the contract into parts that are
  smaller than the gRPC max message size. The batch size can be changed
  in the node configuration file (as in the example config file).

* Changed the `container list` implementation in the CLI: ListStream
  is now called by default. The old List is called only if ListStream
  is not implemented.

* Changed `internalclient.ListContainersPrm`.`Account` to
  `OwnerID`, since `client.PrmContainerList`.`Account` was
  renamed to `OwnerID` in the SDK.

Signed-off-by: Ekaterina Lebedeva <ekaterina.lebedeva@yadro.com>
2024-12-17 16:22:43 +03:00
b6c8ebf493 [#1453] container: Replace sort.Slice with slices.SortFunc
* Replaced `sort.Slice` with `slices.SortFunc` in
  `ListContainersRes.SortedIDList()` as it is a bit faster,
  according to 15102e6dfd.

Signed-off-by: Ekaterina Lebedeva <ekaterina.lebedeva@yadro.com>
2024-12-17 13:33:43 +03:00
6e82661c35 [#1563] tree: Wrap only ChainRouterError errors with ObjectAccessDenied
* Such wrapping helps to differentiate logical check errors from server internal
  errors.

Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2024-12-16 15:16:07 +03:00
1a091ea7bb [#1563] object: Wrap only ChainRouterError errors with ObjectAccessDenied
* Such wrapping helps to differentiate logical check errors from server internal
  errors.

Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2024-12-16 15:15:25 +03:00
7ac3542714 [#1563] ape: Introduce ChainRouterError error type
Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2024-12-16 15:12:30 +03:00
f0c43c8d80
[#1502] Use zap.Error for logging errors
Use `zap.Error` instead of `zap.String` for logging errors: change all expressions like
`zap.String("error", err.Error())` or `zap.String("err", err.Error())` to `zap.Error(err)`.
Leave similar expressions with other messages unchanged, for example,
`zap.String("last_error", lastErr.Error())` or `zap.String("reason", ctx.Err().Error())`.

This change was made by applying the following patch:
```diff
@@
var err expression
@@
-zap.String("error", err.Error())
+zap.Error(err)

@@
var err expression
@@
-zap.String("err", err.Error())
+zap.Error(err)
```
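
For reference, a small runnable example of the substitution; the logger setup is illustrative:

```go
package main

import (
	"errors"

	"go.uber.org/zap"
)

func main() {
	logger, _ := zap.NewDevelopment()
	defer logger.Sync()

	err := errors.New("connection reset")

	// Before the patch: the error is stringified into an ad-hoc field.
	logger.Warn("request failed", zap.String("error", err.Error()))

	// After the patch: the dedicated typed field, consistent across the
	// codebase and friendlier to structured encoders.
	logger.Warn("request failed", zap.Error(err))
}
```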

Signed-off-by: Aleksey Savchuk <a.savchuk@yadro.com>
2024-12-16 11:13:42 +03:00
8ba9f31fca
[#1510] metabase/test: Fix BenchmarkListWithCursor
- Fix misplaced `(*DB).Close` (broken after 47dcfa20f3)
- Use `errors.Is` for error checking (broken after fcdbf5e509)

Signed-off-by: Aleksey Savchuk <a.savchuk@yadro.com>
2024-12-13 13:19:15 +03:00
2af3409d39
[#1510] metabase/test: Fix BenchmarkGet
Fix misplaced `(*DB).Close` (broken after 47dcfa20f3)

Signed-off-by: Aleksey Savchuk <a.savchuk@yadro.com>
2024-12-13 13:18:43 +03:00
d165ac042c
[#1558] morph/client: Reuse notary rpcclient wrapper
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-12-12 15:30:12 +03:00
7151c71d51
[#1558] morph/client: Remove "could not"/"can't"/"failed to" from error messages
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-12-12 15:30:12 +03:00
91d9dc2676
[#1558] morph/event: Remove "could not" from error messages
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-12-12 15:30:12 +03:00
7853dbc315 [#1557] morph/event: Remove embedded structs from scriptHashWithValue
Also, make them public, because otherwise the `unused` linter complains:
```
pkg/morph/event/utils.go:25:2  unused  field `typ` is unused
```
This complaint is wrong, though: we _use_ the `typ` field, because the whole
struct is used as a map key.
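
A minimal reproduction of the false positive; the field types are stand-ins (the real struct presumably uses neo-go's util.Uint160):

```go
package main

import "fmt"

// typ is never read on its own, so `unused` flags it, yet it clearly
// matters: the struct is a map key, and typ takes part in key equality.
type scriptHashWithType struct {
	hash [20]byte
	typ  string
}

func main() {
	handlers := map[scriptHashWithType]string{
		{typ: "Deposit"}:  "deposit handler",
		{typ: "Withdraw"}: "withdraw handler",
	}
	fmt.Println(len(handlers)) // 2: the keys differ only in typ
}
```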

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-12-12 11:55:09 +00:00
3821645085
[#1555] engine: Refactor (*StorageEngine).GetLocks
Refactored after renaming the method to replace the confusing `locked`
variable with `locks`.

Signed-off-by: Aleksey Savchuk <a.savchuk@yadro.com>
2024-12-11 15:06:38 +03:00
72470d6b48
[#1555] local_object_storage: Rename method GetLocked -> GetLocks
Renamed to better reflect the method's purpose of returning locks
for the specified object.

Signed-off-by: Aleksey Savchuk <a.savchuk@yadro.com>
2024-12-11 15:06:37 +03:00
e9837bbcf9 [#1554] morph/event: Remove unused AlphabetUpdate event
Refs TrueCloudLab/frostfs-contract#138.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-12-11 12:01:17 +00:00
a641c91594 [#1550] Add CODEOWNERS
Signed-off-by: Vitaliy Potyarkin <v.potyarkin@yadro.com>
2024-12-11 10:34:57 +00:00
b1614a284d [#1546] morph/event: Export NotificationHandlerInfo fields
Hiding them achieves nothing, as the struct has no methods and is not
used concurrently.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-12-11 07:39:49 +00:00
d0ce835fbf [#1546] morph/event: Merge notification parser and handlers
They are decoupled, but it is an error to have a handler without a
corresponding parser. Register them together on the code level and get
rid of unreachable code.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-12-11 07:39:49 +00:00
dfa51048a8 [#1546] morph/event: Remove "is started" checks from event handler registrar
This code path can hide bugs.
All initialization functions should run before the init stage.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-12-11 07:39:49 +00:00
670305a721 [#1546] morph/event: Remove nil checks from event handler registrar
This code path can hide bugs.
We would rather panic than silently fail.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-12-11 07:39:49 +00:00
1f6cf57e30 [#1548] metabase: Check if EC parent is removed or expired
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-12-11 07:26:33 +00:00
386a12eea4 [#1548] engine: Rename parent -> ecParent
Parent could mean a split parent or an EC parent. In this case it is the EC parent only.

Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-12-11 07:26:33 +00:00
15139d80c9 [#1548] policer: Do not replicate EC chunk if object already removed
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-12-11 07:26:33 +00:00
41da27dad5
[#1549] engine: Drop Async flag from evacuation parameters
Now only asynchronous evacuation is supported.

Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-12-10 17:00:00 +03:00
ac0511d214
[#1549] controlSvc: Drop deprecated EvacuateShard rpc
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-12-10 16:59:52 +03:00
7e542906ef [#1539] go.mod: Bump frostfs-sdk-go version
* Also fix placement unit-test in object manager

Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2024-12-06 15:29:37 +03:00
d1bc4351c3
[#1545] morph/event: Simplify frostfs contract event parsing
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-12-06 14:00:23 +03:00
1c12f23b84 [#1541] morph/event: Simplify netmap contract event parsing
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-12-06 10:26:39 +00:00
a353d45742 [#1541] morph/event: Simplify container contract event parsing
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-12-06 10:26:39 +00:00
d5c46d812a [#1541] go.mod: Update frostfs-contract
New version contains more idiomatic types in the auto-generated code.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-12-06 10:26:39 +00:00
d5d5ce2074 [#1541] morph/event: Simplify balance contract event parsing
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-12-06 10:26:39 +00:00
7df3520d48 [#1540] getSvc: Drop redundant returns
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-12-05 12:39:49 +00:00
5fe78e51d1 [#1540] getSvc: Do not log context canceled errors during EC assemble
Those errors are fired when enough chunks have been retrieved and the error
group cancels the remaining requests.
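
An illustrative sketch, not the actual getSvc code, of why context.Canceled shows up on this path:

```go
package main

import (
	"context"
	"errors"
	"fmt"

	"golang.org/x/sync/errgroup"
)

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()
	eg, ctx := errgroup.WithContext(ctx)

	// Pretend 3 chunks are enough to assemble the object out of 6 requests.
	results := make(chan int, 3)
	for i := 0; i < 6; i++ {
		eg.Go(func() error {
			select {
			case results <- i: // chunk arrived
				if len(results) == cap(results) {
					cancel() // enough chunks: stop the in-flight siblings
				}
				return nil
			case <-ctx.Done():
				return ctx.Err() // expected cancellation, not a real failure
			}
		})
	}

	if err := eg.Wait(); err != nil && !errors.Is(err, context.Canceled) {
		fmt.Println("real error:", err)
	} else {
		fmt.Println("assembled with enough chunks") // canceled errors are not logged
	}
}
```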

Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-12-05 12:39:49 +00:00
84b4051b4d
[#1538] morph/container: Make opts struct similar to that of other contracts
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-12-04 15:30:58 +03:00
6a51086030
[#1538] morph/client: Remove TryNotary() option from side-chain contracts
The notary is always enabled, so this option always takes effect.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-12-04 15:30:58 +03:00
5c3b2d95ba
[#1538] node: Assume notary is enabled
Notaryless environments have not been tested for a while.
We use neo-go only, and it has the notary contract enabled.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-12-04 15:30:58 +03:00
2d5d4093be
[#1537] morph: Use (user.ID).ScriptHash() where possible
Pick up changes from TrueCloudLab/frostfs-sdk-go#198.

gopatch:
```
@@
var user expression
@@
-address.StringToUint160(user.EncodeToString())
+user.ScriptHash()
```

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-12-04 13:25:44 +03:00
e3487d5af5 [#1535] morph: Unify test invoke error messages
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-12-04 09:50:20 +00:00
e37dcdf88b [#1535] morph/netmap: Unify error messages for config retrieval
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-12-04 09:50:20 +00:00
6c679d1535 [#1535] morph: Unify client creation error messages
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-12-04 09:50:20 +00:00
281d65435e
[#1450] engine: Group object by shard before Inhume
```
goos: linux
goarch: amd64
pkg: git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/engine
cpu: 12th Gen Intel(R) Core(TM) i5-1235U
                                 │   old.txt    │              new.txt                │
                                 │    sec/op    │   sec/op     vs base                │
InhumeMultipart/objects=1-12        11.42m ± 1%   10.71m ± 0%   -6.27% (p=0.000 n=10)
InhumeMultipart/objects=10-12       113.5m ± 0%   100.9m ± 3%  -11.08% (p=0.000 n=10)
InhumeMultipart/objects=100-12     1135.4m ± 1%   681.3m ± 2%  -40.00% (p=0.000 n=10)
InhumeMultipart/objects=1000-12     11.358 ± 0%    1.089 ± 1%  -90.41% (p=0.000 n=10)
InhumeMultipart/objects=10000-12   113.251 ± 0%    1.645 ± 1%  -98.55% (p=0.000 n=10)
geomean                              1.136        265.5m       -76.63%
```
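
A hedged sketch of the optimization's shape; the real code works with object addresses and the shard HRW ordering, not strings and ints:

```go
package main

import "fmt"

// groupByShard buckets addresses by their target shard so that inhume becomes
// one call per shard instead of one shard resolution per object.
func groupByShard(addrs []string, shardOf func(string) int) map[int][]string {
	groups := make(map[int][]string)
	for _, a := range addrs {
		id := shardOf(a)
		groups[id] = append(groups[id], a)
	}
	return groups
}

func main() {
	shardOf := func(a string) int { return len(a) % 2 } // stand-in for HRW sorting
	fmt.Println(groupByShard([]string{"obj1", "obj22", "obj333"}, shardOf))
}
```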

Signed-off-by: Aleksey Savchuk <a.savchuk@yadro.com>
2024-12-04 10:09:00 +03:00
b348b20289
[#1450] engine: Add benchmark for Inhume operation
Signed-off-by: Aleksey Savchuk <a.savchuk@yadro.com>
2024-12-04 10:08:34 +03:00
748edd1999
[#1450] engine: Return shard-level error if object is expired on inhume
Since we have errors defined at the shard level, it looks strange that we
check an error against the shard-level error `ErrLockObjectRemoval` but
then return the metabase-level error. Let's return the same shard-level
error instead.

Signed-off-by: Aleksey Savchuk <a.savchuk@yadro.com>
2024-12-04 10:06:57 +03:00
47dfd8840c [#1532] node: Allow to omit metabase.path if shard is disabled
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-12-04 03:30:19 +00:00
432042c534
[#1527] engine: Add tests for handling expired objects on inhume and lock
Currently, it's allowed to inhume or lock an expired object.
Consider the following scenario:

1) A user inhumes or locks an object
2) The object expires
3) GC hasn't yet deleted the object
4) The node loses the associated tombstone or lock
5) Another node replicates the tombstone or lock to the first node

In this case, the second node succeeds, which is the desired behavior.

Signed-off-by: Aleksey Savchuk <a.savchuk@yadro.com>
2024-12-03 12:29:45 +03:00
9cabca9dfe
[#1527] engine/test: Move default metabase options to separate function
Signed-off-by: Aleksey Savchuk <a.savchuk@yadro.com>
2024-12-02 16:13:37 +03:00
60feed3b5f
[#1527] engine/test: Allow to specify current epoch in epochState
Signed-off-by: Aleksey Savchuk <a.savchuk@yadro.com>
2024-12-02 15:37:25 +03:00
635a292ae4 [#1528] cli: Keep order for required nodes in the result of object nodes
Signed-off-by: Anton Nikiforov <an.nikiforov@yadro.com>
2024-12-02 13:58:24 +03:00
edfa3f4825 [#1528] node: Keep order for equal elements when sort priority metrics
Signed-off-by: Anton Nikiforov <an.nikiforov@yadro.com>
2024-12-02 13:58:19 +03:00
e0ac3a583f [#1523] metabase: Remove (*DB).IterateCoveredByTombstones
Remove this method because it isn't used anywhere since 7799f8e4c.

Signed-off-by: Aleksey Savchuk <a.savchuk@yadro.com>
2024-11-29 10:49:24 +00:00
00c608c05e [#1524] tree: Make APE check errors get wrapped into API status
Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2024-11-29 10:48:16 +00:00
bba1892fa1 [#1524] ape: Make APE checker return error without status
Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2024-11-29 10:48:16 +00:00
259 changed files with 2484 additions and 2536 deletions

View file

@@ -0,0 +1,28 @@
name: OCI image

on:
  push:
  workflow_dispatch:

jobs:
  image:
    name: Build container images
    runs-on: docker
    container: git.frostfs.info/truecloudlab/env:oci-image-builder-bookworm
    steps:
      - name: Clone git repo
        uses: actions/checkout@v3

      - name: Build OCI image
        run: make images

      - name: Push image to OCI registry
        run: |
          echo "$REGISTRY_PASSWORD" \
            | docker login --username truecloudlab --password-stdin git.frostfs.info
          make push-images
        if: >-
          startsWith(github.ref, 'refs/tags/v') &&
          (github.event_name == 'workflow_dispatch' || github.event_name == 'push')
        env:
          REGISTRY_PASSWORD: ${{secrets.FORGEJO_OCI_REGISTRY_PUSH_TOKEN}}

View file

@@ -89,5 +89,7 @@ linters:
   - protogetter
   - intrange
   - tenv
+  - unconvert
+  - unparam
 disable-all: true
 fast: false

CODEOWNERS (new file)

View file

@@ -0,0 +1,3 @@
.* @TrueCloudLab/storage-core-committers @TrueCloudLab/storage-core-developers
.forgejo/.* @potyarkin
Makefile @potyarkin

View file

@@ -139,6 +139,15 @@ images: image-storage image-ir image-cli image-adm
 # Build dirty local Docker images
 dirty-images: image-dirty-storage image-dirty-ir image-dirty-cli image-dirty-adm

+# Push FrostFS components' docker image to the registry
+push-image-%:
+	@echo "⇒ Publish FrostFS $* docker image "
+	@docker push $(HUB_IMAGE)-$*:$(HUB_TAG)
+
+# Push all Docker images to the registry
+.PHONY: push-images
+push-images: push-image-storage push-image-ir push-image-cli push-image-adm
+
 # Run `make %` in Golang container
 docker/%:
 	docker run --rm -t \
@@ -270,10 +279,12 @@ env-up: all
 		echo "Frostfs contracts not found"; exit 1; \
 	fi
 	${BIN}/frostfs-adm --config ./dev/adm/frostfs-adm.yml morph init --contracts ${FROSTFS_CONTRACTS_PATH}
-	${BIN}/frostfs-adm --config ./dev/adm/frostfs-adm.yml morph refill-gas --storage-wallet ./dev/storage/wallet01.json --gas 10.0
-	${BIN}/frostfs-adm --config ./dev/adm/frostfs-adm.yml morph refill-gas --storage-wallet ./dev/storage/wallet02.json --gas 10.0
-	${BIN}/frostfs-adm --config ./dev/adm/frostfs-adm.yml morph refill-gas --storage-wallet ./dev/storage/wallet03.json --gas 10.0
-	${BIN}/frostfs-adm --config ./dev/adm/frostfs-adm.yml morph refill-gas --storage-wallet ./dev/storage/wallet04.json --gas 10.0
+	${BIN}/frostfs-adm --config ./dev/adm/frostfs-adm.yml morph refill-gas --gas 10.0 \
+		--storage-wallet ./dev/storage/wallet01.json \
+		--storage-wallet ./dev/storage/wallet02.json \
+		--storage-wallet ./dev/storage/wallet03.json \
+		--storage-wallet ./dev/storage/wallet04.json
 	@if [ ! -f "$(LOCODE_DB_PATH)" ]; then \
 		make locode-download; \
 	fi

View file

@@ -135,7 +135,7 @@ func createContainerInfoProvider(cli *client.Client) (container.InfoProvider, er
 	if err != nil {
 		return nil, fmt.Errorf("resolve container contract hash: %w", err)
 	}
-	cc, err := morphcontainer.NewFromMorph(cli, sh, 0, morphcontainer.TryNotary())
+	cc, err := morphcontainer.NewFromMorph(cli, sh, 0)
 	if err != nil {
 		return nil, fmt.Errorf("create morph container client: %w", err)
 	}

View file

@@ -253,7 +253,7 @@ func frostfsidListNamespaces(cmd *cobra.Command, _ []string) {
 	reader := frostfsidrpclient.NewReader(inv, hash)
 	sessionID, it, err := reader.ListNamespaces()
 	commonCmd.ExitOnErr(cmd, "can't get namespace: %w", err)
-	items, err := readIterator(inv, &it, iteratorBatchSize, sessionID)
+	items, err := readIterator(inv, &it, sessionID)
 	commonCmd.ExitOnErr(cmd, "can't read iterator: %w", err)
 	namespaces, err := frostfsidclient.ParseNamespaces(items)
@@ -305,7 +305,7 @@ func frostfsidListSubjects(cmd *cobra.Command, _ []string) {
 	sessionID, it, err := reader.ListNamespaceSubjects(ns)
 	commonCmd.ExitOnErr(cmd, "can't get namespace: %w", err)
-	subAddresses, err := frostfsidclient.UnwrapArrayOfUint160(readIterator(inv, &it, iteratorBatchSize, sessionID))
+	subAddresses, err := frostfsidclient.UnwrapArrayOfUint160(readIterator(inv, &it, sessionID))
 	commonCmd.ExitOnErr(cmd, "can't unwrap: %w", err)
 	sort.Slice(subAddresses, func(i, j int) bool { return subAddresses[i].Less(subAddresses[j]) })
@@ -319,7 +319,7 @@ func frostfsidListSubjects(cmd *cobra.Command, _ []string) {
 	sessionID, it, err := reader.ListSubjects()
 	commonCmd.ExitOnErr(cmd, "can't get subject: %w", err)
-	items, err := readIterator(inv, &it, iteratorBatchSize, sessionID)
+	items, err := readIterator(inv, &it, sessionID)
 	commonCmd.ExitOnErr(cmd, "can't read iterator: %w", err)
 	subj, err := frostfsidclient.ParseSubject(items)
@@ -365,7 +365,7 @@ func frostfsidListGroups(cmd *cobra.Command, _ []string) {
 	sessionID, it, err := reader.ListGroups(ns)
 	commonCmd.ExitOnErr(cmd, "can't get namespace: %w", err)
-	items, err := readIterator(inv, &it, iteratorBatchSize, sessionID)
+	items, err := readIterator(inv, &it, sessionID)
 	commonCmd.ExitOnErr(cmd, "can't list groups: %w", err)
 	groups, err := frostfsidclient.ParseGroups(items)
 	commonCmd.ExitOnErr(cmd, "can't parse groups: %w", err)
@@ -415,7 +415,7 @@ func frostfsidListGroupSubjects(cmd *cobra.Command, _ []string) {
 	sessionID, it, err := reader.ListGroupSubjects(ns, big.NewInt(groupID))
 	commonCmd.ExitOnErr(cmd, "can't list groups: %w", err)
-	items, err := readIterator(inv, &it, iteratorBatchSize, sessionID)
+	items, err := readIterator(inv, &it, sessionID)
 	commonCmd.ExitOnErr(cmd, "can't read iterator: %w", err)
 	subjects, err := frostfsidclient.UnwrapArrayOfUint160(items, err)
@@ -492,17 +492,17 @@ func (f *frostfsidClient) sendWaitRes() (*state.AppExecResult, error) {
 	return f.roCli.Wait(f.wCtx.SentTxs[0].Hash, f.wCtx.SentTxs[0].Vub, nil)
 }
-func readIterator(inv *invoker.Invoker, iter *result.Iterator, batchSize int, sessionID uuid.UUID) ([]stackitem.Item, error) {
+func readIterator(inv *invoker.Invoker, iter *result.Iterator, sessionID uuid.UUID) ([]stackitem.Item, error) {
 	var shouldStop bool
 	res := make([]stackitem.Item, 0)
 	for !shouldStop {
-		items, err := inv.TraverseIterator(sessionID, iter, batchSize)
+		items, err := inv.TraverseIterator(sessionID, iter, iteratorBatchSize)
 		if err != nil {
 			return nil, err
 		}
 		res = append(res, items...)
-		shouldStop = len(items) < batchSize
+		shouldStop = len(items) < iteratorBatchSize
 	}
 	return res, nil

View file

@@ -12,7 +12,6 @@ import (
 	"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/modules/morph/helper"
 	"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/innerring"
 	"github.com/nspcc-dev/neo-go/pkg/crypto/keys"
-	"github.com/nspcc-dev/neo-go/pkg/encoding/address"
 	"github.com/nspcc-dev/neo-go/pkg/io"
 	"github.com/nspcc-dev/neo-go/pkg/rpcclient/gas"
 	"github.com/nspcc-dev/neo-go/pkg/smartcontract"
@@ -141,60 +140,29 @@ func addMultisigAccount(w *wallet.Wallet, m int, name, password string, pubs key
 }

 func generateStorageCreds(cmd *cobra.Command, _ []string) error {
-	return refillGas(cmd, storageGasConfigFlag, true)
-}
-
-func refillGas(cmd *cobra.Command, gasFlag string, createWallet bool) (err error) {
-	// storage wallet path is not part of the config
-	storageWalletPath, _ := cmd.Flags().GetString(commonflags.StorageWalletFlag)
-	// wallet address is not part of the config
-	walletAddress, _ := cmd.Flags().GetString(walletAddressFlag)
-
-	var gasReceiver util.Uint160
-
-	if len(walletAddress) != 0 {
-		gasReceiver, err = address.StringToUint160(walletAddress)
-		if err != nil {
-			return fmt.Errorf("invalid wallet address %s: %w", walletAddress, err)
-		}
-	} else {
-		if storageWalletPath == "" {
-			return fmt.Errorf("missing wallet path (use '--%s <out.json>')", commonflags.StorageWalletFlag)
-		}
-
-		var w *wallet.Wallet
-
-		if createWallet {
-			w, err = wallet.NewWallet(storageWalletPath)
-		} else {
-			w, err = wallet.NewWalletFromFile(storageWalletPath)
-		}
-
-		if err != nil {
-			return fmt.Errorf("can't create wallet: %w", err)
-		}
-
-		if createWallet {
-			var password string
-
-			label, _ := cmd.Flags().GetString(storageWalletLabelFlag)
-			password, err := config.GetStoragePassword(viper.GetViper(), label)
-			if err != nil {
-				return fmt.Errorf("can't fetch password: %w", err)
-			}
-
-			if label == "" {
-				label = constants.SingleAccountName
-			}
-
-			if err := w.CreateAccount(label, password); err != nil {
-				return fmt.Errorf("can't create account: %w", err)
-			}
-		}
-
-		gasReceiver = w.Accounts[0].Contract.ScriptHash()
-	}
+	walletPath, _ := cmd.Flags().GetString(commonflags.StorageWalletFlag)
+	w, err := wallet.NewWallet(walletPath)
+	if err != nil {
+		return fmt.Errorf("create wallet: %w", err)
+	}
+
+	label, _ := cmd.Flags().GetString(storageWalletLabelFlag)
+	password, err := config.GetStoragePassword(viper.GetViper(), label)
+	if err != nil {
+		return fmt.Errorf("can't fetch password: %w", err)
+	}
+
+	if label == "" {
+		label = constants.SingleAccountName
+	}
+
+	if err := w.CreateAccount(label, password); err != nil {
+		return fmt.Errorf("can't create account: %w", err)
+	}
+
+	return refillGas(cmd, storageGasConfigFlag, w.Accounts[0].ScriptHash())
+}

+func refillGas(cmd *cobra.Command, gasFlag string, gasReceivers ...util.Uint160) (err error) {
 	gasStr := viper.GetString(gasFlag)

 	gasAmount, err := helper.ParseGASAmount(gasStr)
@@ -208,9 +176,11 @@ func refillGas(cmd *cobra.Command, gasFlag string, createWallet bool) (err error
 	}

 	bw := io.NewBufBinWriter()
-	emit.AppCall(bw.BinWriter, gas.Hash, "transfer", callflag.All,
-		wCtx.CommitteeAcc.Contract.ScriptHash(), gasReceiver, int64(gasAmount), nil)
-	emit.Opcodes(bw.BinWriter, opcode.ASSERT)
+	for _, gasReceiver := range gasReceivers {
+		emit.AppCall(bw.BinWriter, gas.Hash, "transfer", callflag.All,
+			wCtx.CommitteeAcc.Contract.ScriptHash(), gasReceiver, int64(gasAmount), nil)
+		emit.Opcodes(bw.BinWriter, opcode.ASSERT)
+	}
 	if bw.Err != nil {
 		return fmt.Errorf("BUG: invalid transfer arguments: %w", bw.Err)
 	}

View file

@@ -1,7 +1,12 @@
 package generate

 import (
+	"fmt"
+
 	"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/commonflags"
+	"github.com/nspcc-dev/neo-go/pkg/encoding/address"
+	"github.com/nspcc-dev/neo-go/pkg/util"
+	"github.com/nspcc-dev/neo-go/pkg/wallet"
 	"github.com/spf13/cobra"
 	"github.com/spf13/viper"
 )
@@ -33,7 +38,27 @@ var (
 		_ = viper.BindPFlag(commonflags.RefillGasAmountFlag, cmd.Flags().Lookup(commonflags.RefillGasAmountFlag))
 	},
 	RunE: func(cmd *cobra.Command, _ []string) error {
-		return refillGas(cmd, commonflags.RefillGasAmountFlag, false)
+		storageWalletPaths, _ := cmd.Flags().GetStringArray(commonflags.StorageWalletFlag)
+		walletAddresses, _ := cmd.Flags().GetStringArray(walletAddressFlag)
+
+		var gasReceivers []util.Uint160
+		for _, walletAddress := range walletAddresses {
+			addr, err := address.StringToUint160(walletAddress)
+			if err != nil {
+				return fmt.Errorf("invalid wallet address %s: %w", walletAddress, err)
+			}
+
+			gasReceivers = append(gasReceivers, addr)
+		}
+		for _, storageWalletPath := range storageWalletPaths {
+			w, err := wallet.NewWalletFromFile(storageWalletPath)
+			if err != nil {
+				return fmt.Errorf("can't create wallet: %w", err)
+			}
+
+			gasReceivers = append(gasReceivers, w.Accounts[0].Contract.ScriptHash())
+		}
+		return refillGas(cmd, commonflags.RefillGasAmountFlag, gasReceivers...)
 	},
 }
 GenerateAlphabetCmd = &cobra.Command{
@@ -50,10 +75,10 @@ var (
 func initRefillGasCmd() {
 	RefillGasCmd.Flags().String(commonflags.AlphabetWalletsFlag, "", commonflags.AlphabetWalletsFlagDesc)
 	RefillGasCmd.Flags().StringP(commonflags.EndpointFlag, commonflags.EndpointFlagShort, "", commonflags.EndpointFlagDesc)
-	RefillGasCmd.Flags().String(commonflags.StorageWalletFlag, "", "Path to storage node wallet")
-	RefillGasCmd.Flags().String(walletAddressFlag, "", "Address of wallet")
+	RefillGasCmd.Flags().StringArray(commonflags.StorageWalletFlag, nil, "Path to storage node wallet")
+	RefillGasCmd.Flags().StringArray(walletAddressFlag, nil, "Address of wallet")
 	RefillGasCmd.Flags().String(commonflags.RefillGasAmountFlag, "", "Additional amount of GAS to transfer")
-	RefillGasCmd.MarkFlagsMutuallyExclusive(walletAddressFlag, commonflags.StorageWalletFlag)
+	RefillGasCmd.MarkFlagsOneRequired(walletAddressFlag, commonflags.StorageWalletFlag)
 }

 func initGenerateStorageCmd() {

View file

@@ -4,7 +4,6 @@ import (
 	"errors"
 	"fmt"
 	"math/big"
-	"strconv"

 	"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/commonflags"
 	"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/modules/morph/helper"
@@ -41,7 +40,8 @@ func depositNotary(cmd *cobra.Command, _ []string) error {
 	}

 	accHash := w.GetChangeAddress()
-	if addr, err := cmd.Flags().GetString(walletAccountFlag); err == nil {
+	addr, _ := cmd.Flags().GetString(walletAccountFlag)
+	if addr != "" {
 		accHash, err = address.StringToUint160(addr)
 		if err != nil {
 			return fmt.Errorf("invalid address: %s", addr)
@@ -53,7 +53,7 @@ func depositNotary(cmd *cobra.Command, _ []string) error {
 		return fmt.Errorf("can't find account for %s", accHash)
 	}

-	prompt := fmt.Sprintf("Enter password for %s >", address.Uint160ToString(accHash))
+	prompt := fmt.Sprintf("Enter password for %s > ", address.Uint160ToString(accHash))
 	pass, err := input.ReadPassword(prompt)
 	if err != nil {
 		return fmt.Errorf("can't get password: %v", err)
@@ -73,16 +73,9 @@ func depositNotary(cmd *cobra.Command, _ []string) error {
 		return err
 	}

-	till := int64(defaultNotaryDepositLifetime)
-	tillStr, err := cmd.Flags().GetString(notaryDepositTillFlag)
-	if err != nil {
-		return err
-	}
-	if tillStr != "" {
-		till, err = strconv.ParseInt(tillStr, 10, 64)
-		if err != nil || till <= 0 {
-			return errInvalidNotaryDepositLifetime
-		}
+	till, _ := cmd.Flags().GetInt64(notaryDepositTillFlag)
+	if till <= 0 {
+		return errInvalidNotaryDepositLifetime
 	}

 	return transferGas(cmd, acc, accHash, gasAmount, till)

View file

@@ -20,7 +20,7 @@ func initDepositoryNotaryCmd() {
 	DepositCmd.Flags().String(commonflags.StorageWalletFlag, "", "Path to storage node wallet")
 	DepositCmd.Flags().String(walletAccountFlag, "", "Wallet account address")
 	DepositCmd.Flags().String(commonflags.RefillGasAmountFlag, "", "Amount of GAS to deposit")
-	DepositCmd.Flags().String(notaryDepositTillFlag, "", "Notary deposit duration in blocks")
+	DepositCmd.Flags().Int64(notaryDepositTillFlag, defaultNotaryDepositLifetime, "Notary deposit duration in blocks")
 }

 func init() {
func init() { func init() {

View file

@@ -20,23 +20,32 @@ const (
 	accountAddressFlag = "account"
 )

+func parseAddresses(cmd *cobra.Command) []util.Uint160 {
+	var addrs []util.Uint160
+
+	accs, _ := cmd.Flags().GetStringArray(accountAddressFlag)
+	for _, acc := range accs {
+		addr, err := address.StringToUint160(acc)
+		commonCmd.ExitOnErr(cmd, "invalid account: %w", err)
+
+		addrs = append(addrs, addr)
+	}
+	return addrs
+}
+
 func addProxyAccount(cmd *cobra.Command, _ []string) {
-	acc, _ := cmd.Flags().GetString(accountAddressFlag)
-	addr, err := address.StringToUint160(acc)
-	commonCmd.ExitOnErr(cmd, "invalid account: %w", err)
-
-	err = processAccount(cmd, addr, "addAccount")
+	addrs := parseAddresses(cmd)
+	err := processAccount(cmd, addrs, "addAccount")
 	commonCmd.ExitOnErr(cmd, "processing error: %w", err)
 }

 func removeProxyAccount(cmd *cobra.Command, _ []string) {
-	acc, _ := cmd.Flags().GetString(accountAddressFlag)
-	addr, err := address.StringToUint160(acc)
-	commonCmd.ExitOnErr(cmd, "invalid account: %w", err)
-
-	err = processAccount(cmd, addr, "removeAccount")
+	addrs := parseAddresses(cmd)
+	err := processAccount(cmd, addrs, "removeAccount")
 	commonCmd.ExitOnErr(cmd, "processing error: %w", err)
 }

-func processAccount(cmd *cobra.Command, addr util.Uint160, method string) error {
+func processAccount(cmd *cobra.Command, addrs []util.Uint160, method string) error {
 	wCtx, err := helper.NewInitializeContext(cmd, viper.GetViper())
 	if err != nil {
 		return fmt.Errorf("can't initialize context: %w", err)
@@ -54,7 +63,9 @@ func processAccount(cmd *cobra.Command, addr util.Uint160, method string) error
 	}

 	bw := io.NewBufBinWriter()
-	emit.AppCall(bw.BinWriter, proxyHash, method, callflag.All, addr)
+	for _, addr := range addrs {
+		emit.AppCall(bw.BinWriter, proxyHash, method, callflag.All, addr)
+	}

 	if err := wCtx.SendConsensusTx(bw.Bytes()); err != nil {
 		return err

View file

@@ -29,13 +29,15 @@ var (
 func initProxyAddAccount() {
 	AddAccountCmd.Flags().StringP(commonflags.EndpointFlag, commonflags.EndpointFlagShort, "", commonflags.EndpointFlagDesc)
-	AddAccountCmd.Flags().String(accountAddressFlag, "", "Wallet address string")
+	AddAccountCmd.Flags().StringArray(accountAddressFlag, nil, "Wallet address string")
+	_ = AddAccountCmd.MarkFlagRequired(accountAddressFlag)
 	AddAccountCmd.Flags().String(commonflags.AlphabetWalletsFlag, "", commonflags.AlphabetWalletsFlagDesc)
 }

 func initProxyRemoveAccount() {
 	RemoveAccountCmd.Flags().StringP(commonflags.EndpointFlag, commonflags.EndpointFlagShort, "", commonflags.EndpointFlagDesc)
-	RemoveAccountCmd.Flags().String(accountAddressFlag, "", "Wallet address string")
+	RemoveAccountCmd.Flags().StringArray(accountAddressFlag, nil, "Wallet address string")
+	_ = AddAccountCmd.MarkFlagRequired(accountAddressFlag)
 	RemoveAccountCmd.Flags().String(commonflags.AlphabetWalletsFlag, "", commonflags.AlphabetWalletsFlagDesc)
 }

View file

@@ -11,6 +11,7 @@ import (
 	"net/url"
 	"os"
 	"path/filepath"
+	"slices"
 	"strconv"
 	"strings"
 	"text/template"
@@ -105,7 +106,7 @@ func storageConfig(cmd *cobra.Command, args []string) {
 		fatalOnErr(errors.New("can't find account in wallet"))
 	}

-	c.Wallet.Password, err = input.ReadPassword(fmt.Sprintf("Account password for %s: ", c.Wallet.Account))
+	c.Wallet.Password, err = input.ReadPassword(fmt.Sprintf("Enter password for %s > ", c.Wallet.Account))
 	fatalOnErr(err)

 	err = acc.Decrypt(c.Wallet.Password, keys.NEP2ScryptParams())
@@ -410,8 +411,7 @@ func initClient(rpc []string) *rpcclient.Client {
 	var c *rpcclient.Client
 	var err error

-	shuffled := make([]string, len(rpc))
-	copy(shuffled, rpc)
+	shuffled := slices.Clone(rpc)
 	rand.Shuffle(len(shuffled), func(i, j int) { shuffled[i], shuffled[j] = shuffled[j], shuffled[i] })

 	for _, endpoint := range shuffled {

View file

@@ -9,7 +9,6 @@ import (
 	"io"
 	"os"
 	"slices"
-	"sort"
 	"strings"

 	"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/accounting"
@@ -78,13 +77,31 @@ func ListContainers(ctx context.Context, prm ListContainersPrm) (res ListContain
 // SortedIDList returns sorted list of identifiers of user's containers.
 func (x ListContainersRes) SortedIDList() []cid.ID {
 	list := x.cliRes.Containers()
-	sort.Slice(list, func(i, j int) bool {
-		lhs, rhs := list[i].EncodeToString(), list[j].EncodeToString()
-		return strings.Compare(lhs, rhs) < 0
+	slices.SortFunc(list, func(lhs, rhs cid.ID) int {
+		return strings.Compare(lhs.EncodeToString(), rhs.EncodeToString())
 	})
 	return list
 }

+func ListContainersStream(ctx context.Context, prm ListContainersPrm, processCnr func(id cid.ID) bool) (err error) {
+	cliPrm := &client.PrmContainerListStream{
+		XHeaders: prm.XHeaders,
+		OwnerID:  prm.OwnerID,
+		Session:  prm.Session,
+	}
+	rdr, err := prm.cli.ContainerListInit(ctx, *cliPrm)
+	if err != nil {
+		return fmt.Errorf("init container list: %w", err)
+	}
+
+	err = rdr.Iterate(processCnr)
+	if err != nil {
+		return fmt.Errorf("read container list: %w", err)
+	}
+	return
+}
+
 // PutContainerPrm groups parameters of PutContainer operation.
 type PutContainerPrm struct {
 	Client *client.Client

View file

@@ -52,7 +52,7 @@ func genereateAPEOverride(cmd *cobra.Command, _ []string) {
 	outputPath, _ := cmd.Flags().GetString(outputFlag)
 	if outputPath != "" {
-		err := os.WriteFile(outputPath, []byte(overrideMarshalled), 0o644)
+		err := os.WriteFile(outputPath, overrideMarshalled, 0o644)
 		commonCmd.ExitOnErr(cmd, "dump error: %w", err)
 	} else {
 		fmt.Print("\n")

View file

@@ -6,8 +6,11 @@ import (
 	"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
 	commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
 	containerSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container"
+	cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
 	"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/user"
 	"github.com/spf13/cobra"
+	"google.golang.org/grpc/codes"
+	"google.golang.org/grpc/status"
 )

 // flags of list command.
@@ -51,44 +54,60 @@ var listContainersCmd = &cobra.Command{
 		var prm internalclient.ListContainersPrm
 		prm.SetClient(cli)
-		prm.Account = idUser
-
-		res, err := internalclient.ListContainers(cmd.Context(), prm)
-		commonCmd.ExitOnErr(cmd, "rpc error: %w", err)
+		prm.OwnerID = idUser

 		prmGet := internalclient.GetContainerPrm{
 			Client: cli,
 		}

+		var containerIDs []cid.ID
+		err := internalclient.ListContainersStream(cmd.Context(), prm, func(id cid.ID) bool {
+			printContainer(cmd, prmGet, id)
+			return false
+		})
+		if err == nil {
+			return
+		}
+
+		if e, ok := status.FromError(err); ok && e.Code() == codes.Unimplemented {
+			res, err := internalclient.ListContainers(cmd.Context(), prm)
+			commonCmd.ExitOnErr(cmd, "rpc error: %w", err)
+			containerIDs = res.SortedIDList()
+		} else {
+			commonCmd.ExitOnErr(cmd, "rpc error: %w", err)
+		}
+
-		containerIDs := res.SortedIDList()
 		for _, cnrID := range containerIDs {
-			if flagVarListName == "" && !flagVarListPrintAttr {
-				cmd.Println(cnrID.String())
-				continue
-			}
-
-			prmGet.ClientParams.ContainerID = &cnrID
-			res, err := internalclient.GetContainer(cmd.Context(), prmGet)
-			if err != nil {
-				cmd.Printf("  failed to read attributes: %v\n", err)
-				continue
-			}
-
-			cnr := res.Container()
-			if cnrName := containerSDK.Name(cnr); flagVarListName != "" && cnrName != flagVarListName {
-				continue
-			}
-			cmd.Println(cnrID.String())
-
-			if flagVarListPrintAttr {
-				cnr.IterateUserAttributes(func(key, val string) {
-					cmd.Printf("  %s: %s\n", key, val)
-				})
-			}
+			printContainer(cmd, prmGet, cnrID)
 		}
 	},
 }

+func printContainer(cmd *cobra.Command, prmGet internalclient.GetContainerPrm, id cid.ID) {
+	if flagVarListName == "" && !flagVarListPrintAttr {
+		cmd.Println(id.String())
+		return
+	}
+
+	prmGet.ClientParams.ContainerID = &id
+	res, err := internalclient.GetContainer(cmd.Context(), prmGet)
+	if err != nil {
+		cmd.Printf("  failed to read attributes: %v\n", err)
+		return
+	}
+
+	cnr := res.Container()
+	if cnrName := containerSDK.Name(cnr); flagVarListName != "" && cnrName != flagVarListName {
+		return
+	}
+	cmd.Println(id.String())
+
+	if flagVarListPrintAttr {
+		cnr.IterateUserAttributes(func(key, val string) {
+			cmd.Printf("  %s: %s\n", key, val)
+		})
+	}
+}
+
 func initContainerListContainersCmd() {
 	commonflags.Init(listContainersCmd)

View file

@@ -23,11 +23,11 @@ type policyPlaygroundREPL struct {
 	nodes map[string]netmap.NodeInfo
 }

-func newPolicyPlaygroundREPL(cmd *cobra.Command) (*policyPlaygroundREPL, error) {
+func newPolicyPlaygroundREPL(cmd *cobra.Command) *policyPlaygroundREPL {
 	return &policyPlaygroundREPL{
 		cmd:   cmd,
 		nodes: map[string]netmap.NodeInfo{},
-	}, nil
+	}
 }

 func (repl *policyPlaygroundREPL) handleLs(args []string) error {
@@ -246,8 +246,7 @@ var policyPlaygroundCmd = &cobra.Command{
 	Long: `A REPL for testing placement policies.
 If a wallet and endpoint is provided, the initial netmap data will be loaded from the snapshot of the node. Otherwise, an empty playground is created.`,
 	Run: func(cmd *cobra.Command, _ []string) {
-		repl, err := newPolicyPlaygroundREPL(cmd)
-		commonCmd.ExitOnErr(cmd, "could not create policy playground: %w", err)
+		repl := newPolicyPlaygroundREPL(cmd)
 		commonCmd.ExitOnErr(cmd, "policy playground failed: %w", repl.run())
 	},
 }

View file

@@ -1,56 +0,0 @@
package control
import (
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/rpc/client"
"github.com/spf13/cobra"
)
const ignoreErrorsFlag = "no-errors"
var evacuateShardCmd = &cobra.Command{
Use: "evacuate",
Short: "Evacuate objects from shard",
Long: "Evacuate objects from shard to other shards",
Run: evacuateShard,
Deprecated: "use frostfs-cli control shards evacuation start",
}
func evacuateShard(cmd *cobra.Command, _ []string) {
pk := key.Get(cmd)
req := &control.EvacuateShardRequest{Body: new(control.EvacuateShardRequest_Body)}
req.Body.Shard_ID = getShardIDList(cmd)
req.Body.IgnoreErrors, _ = cmd.Flags().GetBool(ignoreErrorsFlag)
signRequest(cmd, pk, req)
cli := getClient(cmd, pk)
var resp *control.EvacuateShardResponse
var err error
err = cli.ExecRaw(func(client *client.Client) error {
resp, err = control.EvacuateShard(client, req)
return err
})
commonCmd.ExitOnErr(cmd, "rpc error: %w", err)
cmd.Printf("Objects moved: %d\n", resp.GetBody().GetCount())
verifyResponse(cmd, resp.GetSignature(), resp.GetBody())
cmd.Println("Shard has successfully been evacuated.")
}
func initControlEvacuateShardCmd() {
initControlFlags(evacuateShardCmd)
flags := evacuateShardCmd.Flags()
flags.StringSlice(shardIDFlag, nil, "List of shard IDs in base58 encoding")
flags.Bool(shardAllFlag, false, "Process all shards")
flags.Bool(ignoreErrorsFlag, false, "Skip invalid/unreadable objects")
evacuateShardCmd.MarkFlagsMutuallyExclusive(shardIDFlag, shardAllFlag)
}

View file

@@ -17,10 +17,11 @@ import (
 )

 const (
 	awaitFlag      = "await"
 	noProgressFlag = "no-progress"
 	scopeFlag      = "scope"
 	repOneOnlyFlag = "rep-one-only"
+	ignoreErrorsFlag = "no-errors"

 	containerWorkerCountFlag = "container-worker-count"
 	objectWorkerCountFlag    = "object-worker-count"
View file

@@ -13,7 +13,6 @@ var shardsCmd = &cobra.Command{
 func initControlShardsCmd() {
 	shardsCmd.AddCommand(listShardsCmd)
 	shardsCmd.AddCommand(setShardModeCmd)
-	shardsCmd.AddCommand(evacuateShardCmd)
 	shardsCmd.AddCommand(evacuationShardCmd)
 	shardsCmd.AddCommand(flushCacheCmd)
 	shardsCmd.AddCommand(doctorCmd)
@@ -23,7 +22,6 @@ func initControlShardsCmd() {
 	initControlShardsListCmd()
 	initControlSetShardModeCmd()
-	initControlEvacuateShardCmd()
 	initControlEvacuationShardCmd()
 	initControlFlushCacheCmd()
 	initControlDoctorCmd()

View file

@@ -9,7 +9,6 @@ import (
 	"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
 	"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
 	commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
-	"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/checksum"
 	cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
 	oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
 	"github.com/spf13/cobra"
@@ -43,6 +42,8 @@ func initObjectHashCmd() {
 	_ = objectHashCmd.MarkFlagRequired(commonflags.OIDFlag)
 
 	flags.String("range", "", "Range to take hash from in the form offset1:length1,...")
+	_ = objectHashCmd.MarkFlagRequired("range")
+
 	flags.String("type", hashSha256, "Hash type. Either 'sha256' or 'tz'")
 	flags.String(getRangeHashSaltFlag, "", "Salt in hex format")
 }
@@ -66,36 +67,6 @@ func getObjectHash(cmd *cobra.Command, _ []string) {
 	pk := key.GetOrGenerate(cmd)
 	cli := internalclient.GetSDKClientByFlag(cmd, pk, commonflags.RPC)
 
-	tz := typ == hashTz
-	fullHash := len(ranges) == 0
-	if fullHash {
-		var headPrm internalclient.HeadObjectPrm
-		headPrm.SetClient(cli)
-		Prepare(cmd, &headPrm)
-		headPrm.SetAddress(objAddr)
-
-		// get hash of full payload through HEAD (may be user can do it through dedicated command?)
-		res, err := internalclient.HeadObject(cmd.Context(), headPrm)
-		commonCmd.ExitOnErr(cmd, "rpc error: %w", err)
-
-		var cs checksum.Checksum
-		var csSet bool
-
-		if tz {
-			cs, csSet = res.Header().PayloadHomomorphicHash()
-		} else {
-			cs, csSet = res.Header().PayloadChecksum()
-		}
-
-		if csSet {
-			cmd.Println(hex.EncodeToString(cs.Value()))
-		} else {
-			cmd.Println("Missing checksum in object header.")
-		}
-		return
-	}
-
 	var hashPrm internalclient.HashPayloadRangesPrm
 	hashPrm.SetClient(cli)
 	Prepare(cmd, &hashPrm)
@@ -104,7 +75,7 @@ func getObjectHash(cmd *cobra.Command, _ []string) {
 	hashPrm.SetSalt(salt)
 	hashPrm.SetRanges(ranges)
 
-	if tz {
+	if typ == hashTz {
 		hashPrm.TZ()
 	}

View file

@@ -1,15 +1,12 @@
 package object
 
 import (
-	"bytes"
-	"cmp"
 	"context"
 	"crypto/ecdsa"
 	"encoding/hex"
 	"encoding/json"
 	"errors"
 	"fmt"
-	"slices"
 	"sync"
 
 	internalclient "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/client"
@@ -507,7 +504,6 @@ func isObjectStoredOnNode(ctx context.Context, cmd *cobra.Command, cnrID cid.ID,
 }
 
 func printPlacement(cmd *cobra.Command, objID oid.ID, objects []phyObject, result *objectNodesResult) {
-	normilizeObjectNodesResult(objects, result)
 	if json, _ := cmd.Flags().GetBool(commonflags.JSON); json {
 		printObjectNodesAsJSON(cmd, objID, objects, result)
 	} else {
@@ -515,34 +511,6 @@ func printPlacement(cmd *cobra.Command, objID oid.ID, objects []phyObject, result *objectNodesResult) {
 	}
 }
 
-func normilizeObjectNodesResult(objects []phyObject, result *objectNodesResult) {
-	slices.SortFunc(objects, func(lhs, rhs phyObject) int {
-		if lhs.ecHeader == nil && rhs.ecHeader == nil {
-			return bytes.Compare(lhs.objectID[:], rhs.objectID[:])
-		}
-		if lhs.ecHeader == nil {
-			return -1
-		}
-		if rhs.ecHeader == nil {
-			return 1
-		}
-		if lhs.ecHeader.parent == rhs.ecHeader.parent {
-			return cmp.Compare(lhs.ecHeader.index, rhs.ecHeader.index)
-		}
-		return bytes.Compare(lhs.ecHeader.parent[:], rhs.ecHeader.parent[:])
-	})
-	for _, obj := range objects {
-		op := result.placements[obj.objectID]
-		slices.SortFunc(op.confirmedNodes, func(lhs, rhs netmapSDK.NodeInfo) int {
-			return bytes.Compare(lhs.PublicKey(), rhs.PublicKey())
-		})
-		slices.SortFunc(op.requiredNodes, func(lhs, rhs netmapSDK.NodeInfo) int {
-			return bytes.Compare(lhs.PublicKey(), rhs.PublicKey())
-		})
-		result.placements[obj.objectID] = op
-	}
-}
-
 func printObjectNodesAsText(cmd *cobra.Command, objID oid.ID, objects []phyObject, result *objectNodesResult) {
 	fmt.Fprintf(cmd.OutOrStdout(), "Object %s stores payload in %d data objects:\n", objID.EncodeToString(), len(objects))

View file

@@ -77,7 +77,7 @@ func (c *httpComponent) reload(ctx context.Context) {
 	log.Info(ctx, c.name+" config updated")
 	if err := c.shutdown(ctx); err != nil {
 		log.Debug(ctx, logs.FrostFSIRCouldNotShutdownHTTPServer,
-			zap.String("error", err.Error()),
+			zap.Error(err),
 		)
 	} else {
 		c.init(ctx)

View file

@@ -119,12 +119,12 @@ func shutdown(ctx context.Context) {
 	innerRing.Stop(ctx)
 	if err := metricsCmp.shutdown(ctx); err != nil {
 		log.Debug(ctx, logs.FrostFSIRCouldNotShutdownHTTPServer,
-			zap.String("error", err.Error()),
+			zap.Error(err),
 		)
 	}
 	if err := pprofCmp.shutdown(ctx); err != nil {
 		log.Debug(ctx, logs.FrostFSIRCouldNotShutdownHTTPServer,
-			zap.String("error", err.Error()),
+			zap.Error(err),
 		)
 	}

View file

@@ -58,7 +58,7 @@ func (c *pprofComponent) reload(ctx context.Context) {
 	log.Info(ctx, c.name+" config updated")
 	if err := c.shutdown(ctx); err != nil {
 		log.Debug(ctx, logs.FrostFSIRCouldNotShutdownHTTPServer,
-			zap.String("error", err.Error()))
+			zap.Error(err))
 		return
 	}

View file

@@ -124,10 +124,7 @@ func (v *BucketsView) loadNodeChildren(
 	path := parentBucket.Path
 	parser := parentBucket.NextParser
 
-	buffer, err := LoadBuckets(ctx, v.ui.db, path, v.ui.loadBufferSize)
-	if err != nil {
-		return err
-	}
+	buffer := LoadBuckets(ctx, v.ui.db, path, v.ui.loadBufferSize)
 
 	for item := range buffer {
 		if item.err != nil {
@@ -135,6 +132,7 @@ func (v *BucketsView) loadNodeChildren(
 		}
 		bucket := item.val
 
+		var err error
 		bucket.Entry, bucket.NextParser, err = parser(bucket.Name, nil)
 		if err != nil {
 			return err
@@ -180,10 +178,7 @@ func (v *BucketsView) bucketSatisfiesFilter(
 	defer cancel()
 
 	// Check the current bucket's nested buckets if exist
-	bucketsBuffer, err := LoadBuckets(ctx, v.ui.db, bucket.Path, v.ui.loadBufferSize)
-	if err != nil {
-		return false, err
-	}
+	bucketsBuffer := LoadBuckets(ctx, v.ui.db, bucket.Path, v.ui.loadBufferSize)
 
 	for item := range bucketsBuffer {
 		if item.err != nil {
@@ -191,6 +186,7 @@ func (v *BucketsView) bucketSatisfiesFilter(
 		}
 		b := item.val
 
+		var err error
 		b.Entry, b.NextParser, err = bucket.NextParser(b.Name, nil)
 		if err != nil {
 			return false, err
@@ -206,10 +202,7 @@ func (v *BucketsView) bucketSatisfiesFilter(
 	}
 
 	// Check the current bucket's nested records if exist
-	recordsBuffer, err := LoadRecords(ctx, v.ui.db, bucket.Path, v.ui.loadBufferSize)
-	if err != nil {
-		return false, err
-	}
+	recordsBuffer := LoadRecords(ctx, v.ui.db, bucket.Path, v.ui.loadBufferSize)
 
 	for item := range recordsBuffer {
 		if item.err != nil {
@@ -217,6 +210,7 @@ func (v *BucketsView) bucketSatisfiesFilter(
 		}
 		r := item.val
 
+		var err error
 		r.Entry, _, err = bucket.NextParser(r.Key, r.Value)
 		if err != nil {
 			return false, err

View file

@@ -35,7 +35,7 @@ func resolvePath(tx *bbolt.Tx, path [][]byte) (*bbolt.Bucket, error) {
 func load[T any](
 	ctx context.Context, db *bbolt.DB, path [][]byte, bufferSize int,
 	filter func(key, value []byte) bool, transform func(key, value []byte) T,
-) (<-chan Item[T], error) {
+) <-chan Item[T] {
 	buffer := make(chan Item[T], bufferSize)
 
 	go func() {
@@ -77,13 +77,13 @@ func load[T any](
 		}
 	}()
 
-	return buffer, nil
+	return buffer
 }
 
 func LoadBuckets(
 	ctx context.Context, db *bbolt.DB, path [][]byte, bufferSize int,
-) (<-chan Item[*Bucket], error) {
-	buffer, err := load(
+) <-chan Item[*Bucket] {
+	buffer := load(
 		ctx, db, path, bufferSize,
 		func(_, value []byte) bool {
 			return value == nil
@@ -98,17 +98,14 @@ func LoadBuckets(
 			}
 		},
 	)
-	if err != nil {
-		return nil, fmt.Errorf("can't start iterating bucket: %w", err)
-	}
 
-	return buffer, nil
+	return buffer
 }
 
 func LoadRecords(
 	ctx context.Context, db *bbolt.DB, path [][]byte, bufferSize int,
-) (<-chan Item[*Record], error) {
-	buffer, err := load(
+) <-chan Item[*Record] {
+	buffer := load(
 		ctx, db, path, bufferSize,
 		func(_, value []byte) bool {
 			return value != nil
@@ -124,11 +121,8 @@ func LoadRecords(
 			}
 		},
 	)
-	if err != nil {
-		return nil, fmt.Errorf("can't start iterating bucket: %w", err)
-	}
 
-	return buffer, nil
+	return buffer
 }
 
 // HasBuckets checks if a bucket has nested buckets. It relies on assumption
@@ -137,24 +131,21 @@ func HasBuckets(ctx context.Context, db *bbolt.DB, path [][]byte) (bool, error) {
 	ctx, cancel := context.WithCancel(ctx)
 	defer cancel()
 
-	buffer, err := load(
+	buffer := load(
 		ctx, db, path, 1,
 		nil,
 		func(_, value []byte) []byte { return value },
 	)
-	if err != nil {
-		return false, err
-	}
 
 	x, ok := <-buffer
 	if !ok {
 		return false, nil
 	}
 	if x.err != nil {
-		return false, err
+		return false, x.err
 	}
 	if x.val != nil {
-		return false, err
+		return false, nil
 	}
 	return true, nil
 }
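The hunk above changes the loader contract from `(<-chan Item[T], error)` to a bare channel, with failures delivered in-band through the item's `err` field. A minimal standalone sketch of the resulting consumption pattern; the `Item` type here is a stand-in mirroring the `val`/`err` fields used in the diff, not the package's actual definition:

```go
package main

import (
	"errors"
	"fmt"
)

// Item mirrors the val/err pair the loaders above stream over the channel.
type Item[T any] struct {
	val T
	err error
}

// produce stands in for load(): it always returns a channel and reports
// failures in-band, so callers have a single error path.
func produce() <-chan Item[int] {
	ch := make(chan Item[int], 2)
	go func() {
		defer close(ch)
		ch <- Item[int]{val: 42}
		ch <- Item[int]{err: errors.New("iteration failed")}
	}()
	return ch
}

func main() {
	for item := range produce() {
		if item.err != nil { // the single place errors surface
			fmt.Println("stop:", item.err)
			return
		}
		fmt.Println("got:", item.val)
	}
}
```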

View file

@@ -62,10 +62,7 @@ func (v *RecordsView) Mount(ctx context.Context) error {
 	ctx, v.onUnmount = context.WithCancel(ctx)
 
-	tempBuffer, err := LoadRecords(ctx, v.ui.db, v.bucket.Path, v.ui.loadBufferSize)
-	if err != nil {
-		return err
-	}
+	tempBuffer := LoadRecords(ctx, v.ui.db, v.bucket.Path, v.ui.loadBufferSize)
 
 	v.buffer = make(chan *Record, v.ui.loadBufferSize)
 	go func() {
@@ -73,11 +70,12 @@ func (v *RecordsView) Mount(ctx context.Context) error {
 		for item := range tempBuffer {
 			if item.err != nil {
-				v.ui.stopOnError(err)
+				v.ui.stopOnError(item.err)
 				break
 			}
 			record := item.val
 
+			var err error
 			record.Entry, _, err = v.bucket.NextParser(record.Key, record.Value)
 			if err != nil {
 				v.ui.stopOnError(err)

View file

@@ -19,6 +19,7 @@ func initAPEManagerService(c *cfg) {
 		c.cfgObject.cfgAccessPolicyEngine.policyContractHash)
 
 	execsvc := apemanager.New(c.cfgObject.cnrSource, contractStorage,
+		c.cfgMorph.client,
 		apemanager.WithLogger(c.log))
 	sigsvc := apemanager.NewSignService(&c.key.PrivateKey, execsvc)
 	auditSvc := apemanager.NewAuditService(sigsvc, c.log, c.audit)

View file

@@ -591,8 +591,6 @@ type cfgMorph struct {
 	client *client.Client
 
-	notaryEnabled bool
-
 	// TTL of Sidechain cached values. Non-positive value disables caching.
 	cacheTTL time.Duration
@@ -608,9 +606,10 @@ type cfgAccounting struct {
 type cfgContainer struct {
 	scriptHash neogoutil.Uint160
 
 	parsers     map[event.Type]event.NotificationParser
 	subscribers map[event.Type][]event.Handler
 	workerPool  util.WorkerPool // pool for asynchronous handlers
+	containerBatchSize uint32
 }
 
 type cfgFrostfsID struct {
@@ -699,8 +698,7 @@ func initCfg(appCfg *config.Config) *cfg {
 	netState.metrics = c.metricsCollector
 
-	logPrm, err := c.loggerPrm()
-	fatalOnErr(err)
+	logPrm := c.loggerPrm()
 	logPrm.SamplingHook = c.metricsCollector.LogMetrics().GetSamplingHook()
 	log, err := logger.NewLogger(logPrm)
 	fatalOnErr(err)
@@ -854,8 +852,8 @@ func initFrostfsID(appCfg *config.Config) cfgFrostfsID {
 }
 
 func initCfgGRPC() cfgGRPC {
 	maxChunkSize := uint64(maxMsgSize) * 3 / 4 // 25% to meta, 75% to payload
-	maxAddrAmount := uint64(maxChunkSize) / addressSize // each address is about 72 bytes
+	maxAddrAmount := maxChunkSize / addressSize // each address is about 72 bytes
 
 	return cfgGRPC{
 		maxChunkSize: maxChunkSize,
@@ -1060,7 +1058,7 @@ func (c *cfg) getShardOpts(ctx context.Context, shCfg shardCfg) shardOptsWithID {
 	return sh
 }
 
-func (c *cfg) loggerPrm() (*logger.Prm, error) {
+func (c *cfg) loggerPrm() *logger.Prm {
 	// check if it has been inited before
 	if c.dynamicConfiguration.logger == nil {
 		c.dynamicConfiguration.logger = new(logger.Prm)
@@ -1079,7 +1077,7 @@ func (c *cfg) loggerPrm() *logger.Prm {
 	}
 	c.dynamicConfiguration.logger.PrependTimestamp = c.LoggerCfg.timestamp
 
-	return c.dynamicConfiguration.logger, nil
+	return c.dynamicConfiguration.logger
 }
 
 func (c *cfg) LocalAddress() network.AddressGroup {
@@ -1121,7 +1119,7 @@ func initLocalStorage(ctx context.Context, c *cfg) {
 		err := ls.Close(context.WithoutCancel(ctx))
 		if err != nil {
 			c.log.Info(ctx, logs.FrostFSNodeStorageEngineClosingFailure,
-				zap.String("error", err.Error()),
+				zap.Error(err),
 			)
 		} else {
 			c.log.Info(ctx, logs.FrostFSNodeAllComponentsOfTheStorageEngineClosedSuccessfully)
@@ -1148,7 +1146,7 @@ func initAccessPolicyEngine(ctx context.Context, c *cfg) {
 		c.cfgObject.cfgAccessPolicyEngine.policyContractHash)
 
 	cacheSize := morphconfig.APEChainCacheSize(c.appCfg)
-	if cacheSize > 0 {
+	if cacheSize > 0 && c.cfgMorph.cacheTTL > 0 {
 		morphRuleStorage = newMorphCache(morphRuleStorage, int(cacheSize), c.cfgMorph.cacheTTL)
 	}
@@ -1211,7 +1209,7 @@ func (c *cfg) updateContractNodeInfo(ctx context.Context, epoch uint64) {
 	if err != nil {
 		c.log.Error(ctx, logs.FrostFSNodeCouldNotUpdateNodeStateOnNewEpoch,
 			zap.Uint64("epoch", epoch),
-			zap.String("error", err.Error()))
+			zap.Error(err))
 		return
 	}
@@ -1221,9 +1219,9 @@ func (c *cfg) updateContractNodeInfo(ctx context.Context, epoch uint64) {
 // bootstrapWithState calls "addPeer" method of the Sidechain Netmap contract
 // with the binary-encoded information from the current node's configuration.
 // The state is set using the provided setter which MUST NOT be nil.
-func (c *cfg) bootstrapWithState(ctx context.Context, stateSetter func(*netmap.NodeInfo)) error {
+func (c *cfg) bootstrapWithState(ctx context.Context, state netmap.NodeState) error {
 	ni := c.cfgNodeInfo.localInfo
-	stateSetter(&ni)
+	ni.SetStatus(state)
 
 	prm := nmClient.AddPeerPrm{}
 	prm.SetNodeInfo(ni)
@@ -1233,9 +1231,7 @@ func (c *cfg) bootstrapWithState(ctx context.Context, state netmap.NodeState) error {
 // bootstrapOnline calls cfg.bootstrapWithState with "online" state.
 func bootstrapOnline(ctx context.Context, c *cfg) error {
-	return c.bootstrapWithState(ctx, func(ni *netmap.NodeInfo) {
-		ni.SetStatus(netmap.Online)
-	})
+	return c.bootstrapWithState(ctx, netmap.Online)
 }
 
 // bootstrap calls bootstrapWithState with:
@@ -1246,9 +1242,7 @@ func (c *cfg) bootstrap(ctx context.Context) error {
 	st := c.cfgNetmap.state.controlNetmapStatus()
 	if st == control.NetmapStatus_MAINTENANCE {
 		c.log.Info(ctx, logs.FrostFSNodeBootstrappingWithTheMaintenanceState)
-		return c.bootstrapWithState(ctx, func(ni *netmap.NodeInfo) {
-			ni.SetStatus(netmap.Maintenance)
-		})
+		return c.bootstrapWithState(ctx, netmap.Maintenance)
 	}
 
 	c.log.Info(ctx, logs.FrostFSNodeBootstrappingWithOnlineState,
@@ -1339,11 +1333,7 @@ func (c *cfg) reloadConfig(ctx context.Context) {
 	// Logger
 
-	logPrm, err := c.loggerPrm()
-	if err != nil {
-		c.log.Error(ctx, logs.FrostFSNodeLoggerConfigurationPreparation, zap.Error(err))
-		return
-	}
+	logPrm := c.loggerPrm()
 
 	components := c.getComponents(ctx, logPrm)
@@ -1466,7 +1456,7 @@ func (c *cfg) createTombstoneSource() *tombstone.ExpirationChecker {
 func (c *cfg) createContainerInfoProvider(ctx context.Context) container.InfoProvider {
 	return container.NewInfoProvider(func() (container.Source, error) {
 		c.initMorphComponents(ctx)
-		cc, err := containerClient.NewFromMorph(c.cfgMorph.client, c.cfgContainer.scriptHash, 0, containerClient.TryNotary())
+		cc, err := containerClient.NewFromMorph(c.cfgMorph.client, c.cfgContainer.scriptHash, 0)
 		if err != nil {
 			return nil, err
 		}

View file

@@ -1,6 +1,7 @@
 package config
 
 import (
+	"slices"
 	"strings"
 
 	configViper "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common/config"
@@ -52,6 +53,5 @@ func (x *Config) Value(name string) any {
 // It supports only one level of nesting and is intended to be used
 // to provide default values.
 func (x *Config) SetDefault(from *Config) {
-	x.defaultPath = make([]string, len(from.path))
-	copy(x.defaultPath, from.path)
+	x.defaultPath = slices.Clone(from.path)
 }

View file

@ -0,0 +1,27 @@
package containerconfig
import "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config"
const (
subsection = "container"
listStreamSubsection = "list_stream"
// ContainerBatchSizeDefault represents the maximum amount of containers to send via stream at once.
ContainerBatchSizeDefault = 1000
)
// ContainerBatchSize returns the value of "batch_size" config parameter
// from "list_stream" subsection of "container" section.
//
// Returns ContainerBatchSizeDefault if the value is missing or if
// the value is not positive integer.
func ContainerBatchSize(c *config.Config) uint32 {
if c.Sub(subsection).Sub(listStreamSubsection).Value("batch_size") == nil {
return ContainerBatchSizeDefault
}
size := config.Uint32Safe(c.Sub(subsection).Sub(listStreamSubsection), "batch_size")
if size == 0 {
return ContainerBatchSizeDefault
}
return size
}

View file

@ -0,0 +1,27 @@
package containerconfig_test
import (
"testing"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config"
containerconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/container"
configtest "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/test"
"github.com/stretchr/testify/require"
)
func TestContainerSection(t *testing.T) {
t.Run("defaults", func(t *testing.T) {
empty := configtest.EmptyConfig()
require.Equal(t, uint32(containerconfig.ContainerBatchSizeDefault), containerconfig.ContainerBatchSize(empty))
})
const path = "../../../../config/example/node"
fileConfigTest := func(c *config.Config) {
require.Equal(t, uint32(500), containerconfig.ContainerBatchSize(c))
}
configtest.ForEachFileType(path, fileConfigTest)
t.Run("ENV", func(t *testing.T) {
configtest.ForEnvFileType(t, path, fileConfigTest)
})
}

View file

@@ -41,6 +41,10 @@ func IterateShards(c *config.Config, required bool, f func(*shardconfig.Config) error) error {
 			c.Sub(si),
 		)
 
+		if sc.Mode() == mode.Disabled {
+			continue
+		}
+
 		// Path for the blobstor can't be present in the default section, because different shards
 		// must have different paths, so if it is missing, the shard is not here.
 		// At the same time checking for "blobstor" section doesn't work proper
@@ -50,10 +54,6 @@ func IterateShards(c *config.Config, required bool, f func(*shardconfig.Config) error) error {
 		}
 		(*config.Config)(sc).SetDefault(def)
 
-		if sc.Mode() == mode.Disabled {
-			continue
-		}
-
 		if err := f(sc); err != nil {
 			return err
 		}

View file

@@ -18,6 +18,22 @@ import (
 	"github.com/stretchr/testify/require"
 )
 
+func TestIterateShards(t *testing.T) {
+	fileConfigTest := func(c *config.Config) {
+		var res []string
+		require.NoError(t,
+			engineconfig.IterateShards(c, false, func(sc *shardconfig.Config) error {
+				res = append(res, sc.Metabase().Path())
+				return nil
+			}))
+		require.Equal(t, []string{"abc", "xyz"}, res)
+	}
+
+	const cfgDir = "./testdata/shards"
+	configtest.ForEachFileType(cfgDir, fileConfigTest)
+	configtest.ForEnvFileType(t, cfgDir, fileConfigTest)
+}
+
 func TestEngineSection(t *testing.T) {
 	t.Run("defaults", func(t *testing.T) {
 		empty := configtest.EmptyConfig()

View file

@ -0,0 +1,3 @@
FROSTFS_STORAGE_SHARD_0_METABASE_PATH=abc
FROSTFS_STORAGE_SHARD_1_MODE=disabled
FROSTFS_STORAGE_SHARD_2_METABASE_PATH=xyz

View file

@ -0,0 +1,13 @@
{
"storage.shard": {
"0": {
"metabase.path": "abc"
},
"1": {
"mode": "disabled"
},
"2": {
"metabase.path": "xyz"
}
}
}

View file

@ -0,0 +1,7 @@
storage.shard:
0:
metabase.path: abc
1:
mode: disabled
2:
metabase.path: xyz

View file

@@ -198,7 +198,7 @@ func (l PersistentPolicyRulesConfig) Path() string {
 //
 // Returns PermDefault if the value is not a positive number.
 func (l PersistentPolicyRulesConfig) Perm() fs.FileMode {
-	p := config.UintSafe((*config.Config)(l.cfg), "perm")
+	p := config.UintSafe(l.cfg, "perm")
 	if p == 0 {
 		p = PermDefault
 	}
@@ -210,7 +210,7 @@ func (l PersistentPolicyRulesConfig) Perm() fs.FileMode {
 //
 // Returns false if the value is not a boolean.
 func (l PersistentPolicyRulesConfig) NoSync() bool {
-	return config.BoolSafe((*config.Config)(l.cfg), "no_sync")
+	return config.BoolSafe(l.cfg, "no_sync")
 }
 
 // CompatibilityMode returns true if need to run node in compatibility with previous versions mode.

View file

@@ -5,6 +5,7 @@ import (
 	"context"
 	"net"
 
+	containerconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/container"
 	morphconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/morph"
 	"git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs"
 	"git.frostfs.info/TrueCloudLab/frostfs-node/internal/metrics"
@@ -28,7 +29,7 @@ import (
 func initContainerService(_ context.Context, c *cfg) {
 	// container wrapper that tries to invoke notary
 	// requests if chain is configured so
-	wrap, err := cntClient.NewFromMorph(c.cfgMorph.client, c.cfgContainer.scriptHash, 0, cntClient.TryNotary())
+	wrap, err := cntClient.NewFromMorph(c.cfgMorph.client, c.cfgContainer.scriptHash, 0)
 	fatalOnErr(err)
 
 	c.shared.cnrClient = wrap
@@ -42,11 +43,12 @@ func initContainerService(_ context.Context, c *cfg) {
 	fatalOnErr(err)
 
 	cacheSize := morphconfig.FrostfsIDCacheSize(c.appCfg)
-	if cacheSize > 0 {
+	if cacheSize > 0 && c.cfgMorph.cacheTTL > 0 {
 		frostfsIDSubjectProvider = newMorphFrostfsIDCache(frostfsIDSubjectProvider, int(cacheSize), c.cfgMorph.cacheTTL, metrics.NewCacheMetrics("frostfs_id"))
 	}
 
 	c.shared.frostfsidClient = frostfsIDSubjectProvider
+	c.cfgContainer.containerBatchSize = containerconfig.ContainerBatchSize(c.appCfg)
 
 	defaultChainRouter := engine.NewDefaultChainRouterWithLocalOverrides(
 		c.cfgObject.cfgAccessPolicyEngine.accessPolicyEngine.MorphRuleChainStorage(),
@@ -56,7 +58,9 @@ func initContainerService(_ context.Context, c *cfg) {
 		&c.key.PrivateKey,
 		containerService.NewAPEServer(defaultChainRouter, cnrRdr,
 			newCachedIRFetcher(createInnerRingFetcher(c)), c.netMapSource, c.shared.frostfsidClient,
-			containerService.NewExecutionService(containerMorph.NewExecutor(cnrRdr, cnrWrt), c.respSvc),
+			containerService.NewSplitterService(
+				c.cfgContainer.containerBatchSize, c.respSvc,
+				containerService.NewExecutionService(containerMorph.NewExecutor(cnrRdr, cnrWrt), c.respSvc)),
 		),
 	)
 	service = containerService.NewAuditService(service, c.log, c.audit)
@@ -218,6 +222,7 @@ type morphContainerReader struct {
 	lister interface {
 		ContainersOf(*user.ID) ([]cid.ID, error)
+		IterateContainersOf(*user.ID, func(cid.ID) error) error
 	}
 }
 
@@ -233,6 +238,10 @@ func (x *morphContainerReader) ContainersOf(id *user.ID) ([]cid.ID, error) {
 	return x.lister.ContainersOf(id)
 }
 
+func (x *morphContainerReader) IterateContainersOf(id *user.ID, processCID func(cid.ID) error) error {
+	return x.lister.IterateContainersOf(id, processCID)
+}
+
 type morphContainerWriter struct {
 	neoClient *cntClient.Client
 }
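The `IterateContainersOf` addition pairs with the `SplitterService` wired in above: container IDs are streamed through a callback and sent in batches of `containerBatchSize` instead of being materialized as a single slice. A rough standalone sketch of that callback-plus-batching shape, with all names illustrative rather than the actual service types:

```go
package main

import "fmt"

// iterate stands in for IterateContainersOf: it streams IDs to a callback.
func iterate(ids []string, f func(string) error) error {
	for _, id := range ids {
		if err := f(id); err != nil {
			return err
		}
	}
	return nil
}

// sendBatched groups streamed IDs into batches of at most batchSize and
// flushes each full batch and the final partial one, roughly what a
// splitter in front of a response stream does.
func sendBatched(ids []string, batchSize int, send func([]string) error) error {
	batch := make([]string, 0, batchSize)
	flush := func() error {
		if len(batch) == 0 {
			return nil
		}
		err := send(batch) // send must consume the slice synchronously: it is reused
		batch = batch[:0]
		return err
	}
	if err := iterate(ids, func(id string) error {
		batch = append(batch, id)
		if len(batch) == batchSize {
			return flush()
		}
		return nil
	}); err != nil {
		return err
	}
	return flush()
}

func main() {
	_ = sendBatched([]string{"a", "b", "c", "d", "e"}, 2, func(b []string) error {
		fmt.Println("send", b) // prints [a b], [c d], then [e]
		return nil
	})
}
```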

View file

@@ -134,7 +134,7 @@ func stopAndLog(ctx context.Context, c *cfg, name string, stopper func(context.Context) error) {
 	err := stopper(ctx)
 	if err != nil {
 		c.log.Debug(ctx, fmt.Sprintf("could not shutdown %s server", name),
-			zap.String("error", err.Error()),
+			zap.Error(err),
 		)
 	}

View file

@@ -35,20 +35,16 @@ func (c *cfg) initMorphComponents(ctx context.Context) {
 	lookupScriptHashesInNNS(c) // smart contract auto negotiation
 
-	if c.cfgMorph.notaryEnabled {
-		err := c.cfgMorph.client.EnableNotarySupport(
-			client.WithProxyContract(
-				c.cfgMorph.proxyScriptHash,
-			),
-		)
-		fatalOnErr(err)
-	}
-
-	c.log.Info(ctx, logs.FrostFSNodeNotarySupport,
-		zap.Bool("sidechain_enabled", c.cfgMorph.notaryEnabled),
-	)
+	err := c.cfgMorph.client.EnableNotarySupport(
+		client.WithProxyContract(
+			c.cfgMorph.proxyScriptHash,
+		),
+	)
+	fatalOnErr(err)
+
+	c.log.Info(ctx, logs.FrostFSNodeNotarySupport)
 
-	wrap, err := nmClient.NewFromMorph(c.cfgMorph.client, c.cfgNetmap.scriptHash, 0, nmClient.TryNotary())
+	wrap, err := nmClient.NewFromMorph(c.cfgMorph.client, c.cfgNetmap.scriptHash, 0)
 	fatalOnErr(err)
 
 	var netmapSource netmap.Source
@@ -100,7 +96,7 @@ func initMorphClient(ctx context.Context, c *cfg) {
 	if err != nil {
 		c.log.Info(ctx, logs.FrostFSNodeFailedToCreateNeoRPCClient,
 			zap.Any("endpoints", addresses),
-			zap.String("error", err.Error()),
+			zap.Error(err),
 		)
 
 		fatalOnErr(err)
@@ -116,15 +112,9 @@ func initMorphClient(ctx context.Context, c *cfg) {
 	}
 
 	c.cfgMorph.client = cli
-
-	c.cfgMorph.notaryEnabled = cli.ProbeNotary()
 }
 
 func makeAndWaitNotaryDeposit(ctx context.Context, c *cfg) {
-	// skip notary deposit in non-notary environments
-	if !c.cfgMorph.notaryEnabled {
-		return
-	}
-
 	tx, vub, err := makeNotaryDeposit(ctx, c)
 	fatalOnErr(err)
@@ -161,7 +151,7 @@ func makeNotaryDeposit(ctx context.Context, c *cfg) (util.Uint256, uint32, error) {
 }
 
 func waitNotaryDeposit(ctx context.Context, c *cfg, tx util.Uint256, vub uint32) error {
-	if err := c.cfgMorph.client.WaitTxHalt(ctx, client.InvokeRes{Hash: tx, VUB: vub}); err != nil {
+	if err := c.cfgMorph.client.WaitTxHalt(ctx, vub, tx); err != nil {
 		return err
 	}
@@ -178,7 +168,7 @@ func listenMorphNotifications(ctx context.Context, c *cfg) {
 	fromSideChainBlock, err := c.persistate.UInt32(persistateSideChainLastBlockKey)
 	if err != nil {
 		fromSideChainBlock = 0
-		c.log.Warn(ctx, logs.FrostFSNodeCantGetLastProcessedSideChainBlockNumber, zap.String("error", err.Error()))
+		c.log.Warn(ctx, logs.FrostFSNodeCantGetLastProcessedSideChainBlockNumber, zap.Error(err))
 	}
 
 	subs, err = subscriber.New(ctx, &subscriber.Params{
@@ -233,27 +223,17 @@ func registerNotificationHandlers(scHash util.Uint160, lis event.Listener, parsers map[event.Type]event.NotificationParser,
 	subs map[event.Type][]event.Handler,
 ) {
 	for typ, handlers := range subs {
-		pi := event.NotificationParserInfo{}
-		pi.SetType(typ)
-		pi.SetScriptHash(scHash)
-
 		p, ok := parsers[typ]
 		if !ok {
 			panic(fmt.Sprintf("missing parser for event %s", typ))
 		}
 
-		pi.SetParser(p)
-
-		lis.SetNotificationParser(pi)
-
-		for _, h := range handlers {
-			hi := event.NotificationHandlerInfo{}
-			hi.SetType(typ)
-			hi.SetScriptHash(scHash)
-			hi.SetHandler(h)
-
-			lis.RegisterNotificationHandler(hi)
-		}
+		lis.RegisterNotificationHandler(event.NotificationHandlerInfo{
+			Contract: scHash,
+			Type:     typ,
+			Parser:   p,
+			Handlers: handlers,
+		})
 	}
 }
 
@@ -282,10 +262,6 @@ func lookupScriptHashesInNNS(c *cfg) {
 	)
 
 	for _, t := range targets {
-		if t.nnsName == client.NNSProxyContractName && !c.cfgMorph.notaryEnabled {
-			continue // ignore proxy contract if notary disabled
-		}
-
 		if emptyHash.Equals(*t.h) {
 			*t.h, err = c.cfgMorph.client.NNSContractAddress(t.nnsName)
 			fatalOnErrDetails(fmt.Sprintf("can't resolve %s in NNS", t.nnsName), err)

View file

@@ -86,7 +86,7 @@ func (s *networkState) setNodeInfo(ni *netmapSDK.NodeInfo) {
 		}
 	}
 
-	s.setControlNetmapStatus(control.NetmapStatus(ctrlNetSt))
+	s.setControlNetmapStatus(ctrlNetSt)
 }
 
 // sets the current node state to the given value. Subsequent cfg.bootstrap
@@ -193,16 +193,14 @@ func addNewEpochNotificationHandlers(c *cfg) {
 		}
 	})
 
-	if c.cfgMorph.notaryEnabled {
-		addNewEpochAsyncNotificationHandler(c, func(ctx context.Context, _ event.Event) {
-			_, _, err := makeNotaryDeposit(ctx, c)
-			if err != nil {
-				c.log.Error(ctx, logs.FrostFSNodeCouldNotMakeNotaryDeposit,
-					zap.String("error", err.Error()),
-				)
-			}
-		})
-	}
+	addNewEpochAsyncNotificationHandler(c, func(ctx context.Context, _ event.Event) {
+		_, _, err := makeNotaryDeposit(ctx, c)
+		if err != nil {
+			c.log.Error(ctx, logs.FrostFSNodeCouldNotMakeNotaryDeposit,
+				zap.Error(err),
+			)
+		}
+	})
 }
 
 // bootstrapNode adds current node to the Network map.
@@ -425,7 +423,7 @@ func (c *cfg) updateNetMapState(ctx context.Context, stateSetter func(*nmClient.UpdatePeerPrm)) error {
 	if err != nil {
 		return err
 	}
-	return c.cfgNetmap.wrapper.Morph().WaitTxHalt(ctx, res)
+	return c.cfgNetmap.wrapper.Morph().WaitTxHalt(ctx, res.VUB, res.Hash)
 }
 
 type netInfo struct {

View file

@@ -13,7 +13,6 @@ import (
 	"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/core/netmap"
 	"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/engine"
 	morphClient "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client"
-	nmClient "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client/netmap"
 	"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/network/cache"
 	objectTransportGRPC "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/network/transport/object/grpc"
 	objectService "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/object"
@@ -59,7 +58,7 @@ func (c *cfg) MaxObjectSize() uint64 {
 	sz, err := c.cfgNetmap.wrapper.MaxObjectSize()
 	if err != nil {
 		c.log.Error(context.Background(), logs.FrostFSNodeCouldNotGetMaxObjectSizeValue,
-			zap.String("error", err.Error()),
+			zap.Error(err),
 		)
 	}
@@ -137,24 +136,6 @@ func (fn *innerRingFetcherWithNotary) InnerRingKeys() ([][]byte, error) {
 	return result, nil
 }
 
-type innerRingFetcherWithoutNotary struct {
-	nm *nmClient.Client
-}
-
-func (f *innerRingFetcherWithoutNotary) InnerRingKeys() ([][]byte, error) {
-	keys, err := f.nm.GetInnerRingList()
-	if err != nil {
-		return nil, fmt.Errorf("can't get inner ring keys from netmap contract: %w", err)
-	}
-
-	result := make([][]byte, 0, len(keys))
-	for i := range keys {
-		result = append(result, keys[i].Bytes())
-	}
-
-	return result, nil
-}
-
 func initObjectService(c *cfg) {
 	keyStorage := util.NewKeyStorage(&c.key.PrivateKey, c.privateTokenStore, c.cfgNetmap.state)
@@ -234,8 +215,7 @@ func addPolicer(c *cfg, keyStorage *util.KeyStorage, clientConstructor *cache.ClientCache) {
 		prm.MarkAsGarbage(addr)
 		prm.WithForceRemoval()
 
-		_, err := ls.Inhume(ctx, prm)
-		return err
+		return ls.Inhume(ctx, prm)
 	}
 
 	remoteReader := objectService.NewRemoteReader(keyStorage, clientConstructor)
@@ -285,10 +265,9 @@ func addPolicer(c *cfg, keyStorage *util.KeyStorage, clientConstructor *cache.ClientCache) {
 			var inhumePrm engine.InhumePrm
 			inhumePrm.MarkAsGarbage(addr)
 
-			_, err := ls.Inhume(ctx, inhumePrm)
-			if err != nil {
+			if err := ls.Inhume(ctx, inhumePrm); err != nil {
 				c.log.Warn(ctx, logs.FrostFSNodeCouldNotInhumeMarkRedundantCopyAsGarbage,
-					zap.String("error", err.Error()),
+					zap.Error(err),
 				)
 			}
 		}),
@@ -305,13 +284,8 @@ func addPolicer(c *cfg, keyStorage *util.KeyStorage, clientConstructor *cache.ClientCache) {
 }
 
 func createInnerRingFetcher(c *cfg) v2.InnerRingFetcher {
-	if c.cfgMorph.client.ProbeNotary() {
-		return &innerRingFetcherWithNotary{
-			sidechain: c.cfgMorph.client,
-		}
-	}
-
-	return &innerRingFetcherWithoutNotary{
-		nm: c.cfgNetmap.wrapper,
+	return &innerRingFetcherWithNotary{
+		sidechain: c.cfgMorph.client,
 	}
 }
@@ -500,8 +474,7 @@ func (e engineWithoutNotifications) Delete(ctx context.Context, tombstone oid.Address, addrs []oid.Address) error {
 	prm.WithTarget(tombstone, addrs...)
 
-	_, err := e.engine.Inhume(ctx, prm)
-	return err
+	return e.engine.Inhume(ctx, prm)
 }
 
 func (e engineWithoutNotifications) Lock(ctx context.Context, locker oid.Address, toLock []oid.ID) error {

View file

@@ -113,7 +113,7 @@ func initTreeService(c *cfg) {
 			// Ignore pilorama.ErrTreeNotFound but other errors, including shard.ErrReadOnly, should be logged.
 			c.log.Error(ctx, logs.FrostFSNodeContainerRemovalEventReceivedButTreesWerentRemoved,
 				zap.Stringer("cid", ev.ID),
-				zap.String("error", err.Error()))
+				zap.Error(err))
 		}
 	})

View file

@@ -83,6 +83,9 @@ FROSTFS_POLICER_HEAD_TIMEOUT=15s
 FROSTFS_REPLICATOR_PUT_TIMEOUT=15s
 FROSTFS_REPLICATOR_POOL_SIZE=10
 
+# Container service section
+FROSTFS_CONTAINER_LIST_STREAM_BATCH_SIZE=500
+
 # Object service section
 FROSTFS_OBJECT_PUT_REMOTE_POOL_SIZE=100
 FROSTFS_OBJECT_PUT_LOCAL_POOL_SIZE=200

View file

@@ -124,6 +124,11 @@
 		"pool_size": 10,
 		"put_timeout": "15s"
 	},
+	"container": {
+		"list_stream": {
+			"batch_size": "500"
+		}
+	},
 	"object": {
 		"delete": {
 			"tombstone_lifetime": 10

View file

@@ -79,7 +79,8 @@ contracts: # side chain NEOFS contract script hashes; optional, override values
 
 morph:
   dial_timeout: 30s # timeout for side chain NEO RPC client connection
-  cache_ttl: 15s # Sidechain cache TTL value (min interval between similar calls). Negative value disables caching.
+  cache_ttl: 15s # Sidechain cache TTL value (min interval between similar calls).
+  # Negative value disables caching. A zero value sets the default value.
   # Default value: block time. It is recommended to have this value less or equal to block time.
   # Cached entities: containers, container lists, eACL tables.
   container_cache_size: 100 # container_cache_size is is the maximum number of containers in the cache.
@@ -108,6 +109,10 @@ replicator:
   put_timeout: 15s # timeout for the Replicator PUT remote operation
   pool_size: 10 # maximum amount of concurrent replications
 
+container:
+  list_stream:
+    batch_size: 500 # container_batch_size is the maximum amount of containers to send via stream at once
+
 object:
   delete:
     tombstone_lifetime: 10 # tombstone "local" lifetime in epochs

View file

@@ -42,7 +42,6 @@
 				"FROSTFS_MORPH_DIAL_TIMEOUT":"30s",
 				"FROSTFS_MORPH_RPC_ENDPOINT_0_ADDRESS":"ws://127.0.0.1:30333/ws",
 				"FROSTFS_MORPH_RPC_ENDPOINT_0_PRIORITY":"0",
-				"FROSTFS_MORPH_INACTIVITY_TIMEOUT":"60s",
 				"FROSTFS_NODE_WALLET_PATH":"${workspaceFolder}/dev/storage/wallet01.json",
 				"FROSTFS_NODE_WALLET_PASSWORD":"",
 				"FROSTFS_NODE_ADDRESSES":"127.0.0.1:8080",
@@ -98,7 +97,6 @@
 				"FROSTFS_MORPH_DIAL_TIMEOUT":"30s",
 				"FROSTFS_MORPH_RPC_ENDPOINT_0_ADDRESS":"ws://127.0.0.1:30333/ws",
 				"FROSTFS_MORPH_RPC_ENDPOINT_0_PRIORITY":"0",
-				"FROSTFS_MORPH_INACTIVITY_TIMEOUT":"60s",
 				"FROSTFS_NODE_WALLET_PATH":"${workspaceFolder}/dev/storage/wallet02.json",
 				"FROSTFS_NODE_WALLET_PASSWORD":"",
 				"FROSTFS_NODE_ADDRESSES":"127.0.0.1:8082",
@@ -154,7 +152,6 @@
 				"FROSTFS_MORPH_DIAL_TIMEOUT":"30s",
 				"FROSTFS_MORPH_RPC_ENDPOINT_0_ADDRESS":"ws://127.0.0.1:30333/ws",
 				"FROSTFS_MORPH_RPC_ENDPOINT_0_PRIORITY":"0",
-				"FROSTFS_MORPH_INACTIVITY_TIMEOUT":"60s",
 				"FROSTFS_NODE_WALLET_PATH":"${workspaceFolder}/dev/storage/wallet03.json",
 				"FROSTFS_NODE_WALLET_PASSWORD":"",
 				"FROSTFS_NODE_ADDRESSES":"127.0.0.1:8084",
@@ -210,7 +207,6 @@
 				"FROSTFS_MORPH_DIAL_TIMEOUT":"30s",
 				"FROSTFS_MORPH_RPC_ENDPOINT_0_ADDRESS":"ws://127.0.0.1:30333/ws",
 				"FROSTFS_MORPH_RPC_ENDPOINT_0_PRIORITY":"0",
-				"FROSTFS_MORPH_INACTIVITY_TIMEOUT":"60s",
 				"FROSTFS_NODE_WALLET_PATH":"${workspaceFolder}/dev/storage/wallet04.json",
 				"FROSTFS_NODE_WALLET_PASSWORD":"",
 				"FROSTFS_NODE_ADDRESSES":"127.0.0.1:8086",

View file

@@ -95,19 +95,15 @@ $ git push origin ${FROSTFS_TAG_PREFIX}${FROSTFS_REVISION}
 
 ## Post-release
 
-### Prepare and push images to a Docker Hub (if not automated)
+### Prepare and push images to a Docker registry (automated)
 
-Create Docker images for all applications and push them into Docker Hub
-(requires [organization](https://hub.docker.com/u/truecloudlab) privileges)
+Create Docker images for all applications and push them into container registry
+(executed automatically in Forgejo Actions upon pushing a release tag):
 
 ```shell
 $ git checkout ${FROSTFS_TAG_PREFIX}${FROSTFS_REVISION}
 $ make images
-$ docker push truecloudlab/frostfs-storage:${FROSTFS_REVISION}
-$ docker push truecloudlab/frostfs-storage-testnet:${FROSTFS_REVISION}
-$ docker push truecloudlab/frostfs-ir:${FROSTFS_REVISION}
-$ docker push truecloudlab/frostfs-cli:${FROSTFS_REVISION}
-$ docker push truecloudlab/frostfs-adm:${FROSTFS_REVISION}
+$ make push-images
 ```
 
 ### Make a proper release (if not automated)

go.mod
View file

@@ -4,11 +4,11 @@ go 1.22
 
 require (
 	code.gitea.io/sdk/gitea v0.17.1
-	git.frostfs.info/TrueCloudLab/frostfs-contract v0.21.0-rc.4
+	git.frostfs.info/TrueCloudLab/frostfs-contract v0.21.1-0.20241205083807-762d7f9f9f08
 	git.frostfs.info/TrueCloudLab/frostfs-crypto v0.6.0
 	git.frostfs.info/TrueCloudLab/frostfs-locode-db v0.4.1-0.20240710074952-65761deb5c0d
 	git.frostfs.info/TrueCloudLab/frostfs-observability v0.0.0-20241112082307-f17779933e88
-	git.frostfs.info/TrueCloudLab/frostfs-sdk-go v0.0.0-20241107121119-cb813e27a823
+	git.frostfs.info/TrueCloudLab/frostfs-sdk-go v0.0.0-20250109084609-328d214d2d76
 	git.frostfs.info/TrueCloudLab/hrw v1.2.1
 	git.frostfs.info/TrueCloudLab/multinet v0.0.0-20241015075604-6cb0d80e0972
 	git.frostfs.info/TrueCloudLab/policy-engine v0.0.0-20240814080254-96225afacb88
@@ -27,7 +27,7 @@ require (
 	github.com/klauspost/compress v1.17.4
 	github.com/mailru/easyjson v0.7.7
 	github.com/mr-tron/base58 v1.2.0
-	github.com/multiformats/go-multiaddr v0.12.1
+	github.com/multiformats/go-multiaddr v0.14.0
 	github.com/nspcc-dev/neo-go v0.106.3
 	github.com/olekukonko/tablewriter v0.0.5
 	github.com/panjf2000/ants/v2 v2.9.0
@@ -40,15 +40,15 @@ require (
 	github.com/ssgreg/journald v1.0.0
 	github.com/stretchr/testify v1.9.0
 	go.etcd.io/bbolt v1.3.10
-	go.opentelemetry.io/otel v1.28.0
-	go.opentelemetry.io/otel/trace v1.28.0
+	go.opentelemetry.io/otel v1.31.0
+	go.opentelemetry.io/otel/trace v1.31.0
 	go.uber.org/zap v1.27.0
 	golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56
-	golang.org/x/sync v0.7.0
-	golang.org/x/sys v0.22.0
-	golang.org/x/term v0.21.0
-	google.golang.org/grpc v1.66.2
-	google.golang.org/protobuf v1.34.2
+	golang.org/x/sync v0.10.0
+	golang.org/x/sys v0.28.0
+	golang.org/x/term v0.27.0
+	google.golang.org/grpc v1.69.2
+	google.golang.org/protobuf v1.36.1
 	gopkg.in/yaml.v3 v3.0.1
 )
@@ -119,15 +119,15 @@ require (
 	go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.28.0 // indirect
 	go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.28.0 // indirect
 	go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.28.0 // indirect
-	go.opentelemetry.io/otel/metric v1.28.0 // indirect
-	go.opentelemetry.io/otel/sdk v1.28.0 // indirect
+	go.opentelemetry.io/otel/metric v1.31.0 // indirect
+	go.opentelemetry.io/otel/sdk v1.31.0 // indirect
 	go.opentelemetry.io/proto/otlp v1.3.1 // indirect
 	go.uber.org/multierr v1.11.0 // indirect
-	golang.org/x/crypto v0.24.0 // indirect
-	golang.org/x/net v0.26.0 // indirect
-	golang.org/x/text v0.16.0 // indirect
-	google.golang.org/genproto/googleapis/api v0.0.0-20240701130421-f6361c86f094 // indirect
-	google.golang.org/genproto/googleapis/rpc v0.0.0-20240701130421-f6361c86f094 // indirect
+	golang.org/x/crypto v0.31.0 // indirect
+	golang.org/x/net v0.30.0 // indirect
+	golang.org/x/text v0.21.0 // indirect
+	google.golang.org/genproto/googleapis/api v0.0.0-20241015192408-796eee8c2d53 // indirect
+	google.golang.org/genproto/googleapis/rpc v0.0.0-20241015192408-796eee8c2d53 // indirect
 	gopkg.in/ini.v1 v1.67.0 // indirect
 	lukechampine.com/blake3 v1.2.1 // indirect
 	rsc.io/tmplfunc v0.0.3 // indirect

go.sum
Binary file not shown.

View file

@@ -146,7 +146,6 @@ const (
 	ClientCantGetBlockchainHeight    = "can't get blockchain height"
 	ClientCantGetBlockchainHeight243 = "can't get blockchain height"
 	EventCouldNotSubmitHandlerToWorkerPool = "could not Submit handler to worker pool"
-	EventCouldNotStartListenToEvents       = "could not start listen to events"
 	EventStopEventListenerByError          = "stop event listener by error"
 	EventStopEventListenerByContext        = "stop event listener by context"
 	EventStopEventListenerByNotificationChannel = "stop event listener by notification channel"
@@ -164,17 +163,9 @@ const (
 	EventNotaryParserNotSet       = "notary parser not set"
 	EventCouldNotParseNotaryEvent = "could not parse notary event"
 	EventNotaryHandlersForParsedNotificationEventWereNotRegistered = "notary handlers for parsed notification event were not registered"
-	EventIgnoreNilEventParser                      = "ignore nil event parser"
-	EventListenerHasBeenAlreadyStartedIgnoreParser = "listener has been already started, ignore parser"
 	EventRegisteredNewEventParser                  = "registered new event parser"
-	EventIgnoreNilEventHandler                     = "ignore nil event handler"
-	EventIgnoreHandlerOfEventWoParser              = "ignore handler of event w/o parser"
 	EventRegisteredNewEventHandler                 = "registered new event handler"
-	EventIgnoreNilNotaryEventParser                = "ignore nil notary event parser"
-	EventListenerHasBeenAlreadyStartedIgnoreNotaryParser = "listener has been already started, ignore notary parser"
-	EventIgnoreNilNotaryEventHandler               = "ignore nil notary event handler"
 	EventIgnoreHandlerOfNotaryEventWoParser        = "ignore handler of notary event w/o parser"
-	EventIgnoreNilBlockHandler                     = "ignore nil block handler"
 	StorageOperation                               = "local object storage operation"
 	BlobovniczaCreatingDirectoryForBoltDB          = "creating directory for BoltDB"
 	BlobovniczaOpeningBoltDB                       = "opening BoltDB"
@@ -392,7 +383,6 @@ const (
 	FrostFSNodeShutdownSkip         = "node is already shutting down, skipped shutdown"
 	FrostFSNodeShutdownWhenNotReady = "node is going to shut down when subsystems are still initializing"
 	FrostFSNodeConfigurationReading = "configuration reading"
-	FrostFSNodeLoggerConfigurationPreparation = "logger configuration preparation"
 	FrostFSNodeTracingConfigationUpdated        = "tracing configation updated"
 	FrostFSNodeStorageEngineConfigurationUpdate = "storage engine configuration update"
 	FrostFSNodePoolConfigurationUpdate          = "adjust pool configuration"

View file

@@ -12,8 +12,9 @@ type ApplicationInfo struct {
func NewApplicationInfo(version string) *ApplicationInfo {
	appInfo := &ApplicationInfo{
		versionValue: metrics.NewGaugeVec(prometheus.GaugeOpts{
+			Namespace: namespace,
			Name:      "app_info",
			Help:      "General information about the application.",
		}, []string{"version"}),
	}
	appInfo.versionValue.With(prometheus.Labels{"version": version})
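Note: in the Prometheus client library, `Namespace` is joined into the fully qualified metric name, so the gauge above is now exposed as `<namespace>_app_info` rather than a bare `app_info`. A minimal sketch of the mechanism; the concrete namespace value here is an assumption, not taken from this diff:

```go
package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
)

func main() {
	const namespace = "frostfs_node" // assumed value, for illustration only

	g := prometheus.NewGaugeVec(prometheus.GaugeOpts{
		Namespace: namespace,
		Name:      "app_info",
		Help:      "General information about the application.",
	}, []string{"version"})
	g.With(prometheus.Labels{"version": "v1.0.0"}).Set(1)

	// The client assembles the exposed name as namespace_subsystem_name:
	fmt.Println(prometheus.BuildFQName(namespace, "", "app_info")) // frostfs_node_app_info
}
```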


@@ -31,9 +31,7 @@ type RPCActorProvider interface {
type ProxyVerificationContractStorage struct {
	rpcActorProvider RPCActorProvider

-	acc *wallet.Account
-	proxyScriptHash util.Uint160
+	cosigners []actor.SignerAccount

	policyScriptHash util.Uint160
}
@@ -41,12 +39,27 @@ type ProxyVerificationContractStorage struct {
var _ ProxyAdaptedContractStorage = (*ProxyVerificationContractStorage)(nil)

func NewProxyVerificationContractStorage(rpcActorProvider RPCActorProvider, key *keys.PrivateKey, proxyScriptHash, policyScriptHash util.Uint160) *ProxyVerificationContractStorage {
+	acc := wallet.NewAccountFromPrivateKey(key)
	return &ProxyVerificationContractStorage{
		rpcActorProvider: rpcActorProvider,
-		acc:              wallet.NewAccountFromPrivateKey(key),
-		proxyScriptHash:  proxyScriptHash,
+		cosigners: []actor.SignerAccount{
+			{
+				Signer: transaction.Signer{
+					Account:          proxyScriptHash,
+					Scopes:           transaction.CustomContracts,
+					AllowedContracts: []util.Uint160{policyScriptHash},
+				},
+				Account: notary.FakeContractAccount(proxyScriptHash),
+			},
+			{
+				Signer: transaction.Signer{
+					Account: acc.Contract.ScriptHash(),
+					Scopes:  transaction.CalledByEntry,
+				},
+				Account: acc,
+			},
+		},
		policyScriptHash: policyScriptHash,
	}
@@ -64,7 +77,7 @@ func (n *contractStorageActorAdapter) GetRPCInvoker() invoker.RPCInvoke {
func (contractStorage *ProxyVerificationContractStorage) newContractStorageActor() (policy_morph.ContractStorageActor, error) {
	rpcActor := contractStorage.rpcActorProvider.GetRPCActor()
-	act, err := actor.New(rpcActor, cosigners(contractStorage.acc, contractStorage.proxyScriptHash, contractStorage.policyScriptHash))
+	act, err := actor.New(rpcActor, contractStorage.cosigners)
	if err != nil {
		return nil, err
	}
@@ -98,31 +111,16 @@ func (contractStorage *ProxyVerificationContractStorage) RemoveMorphRuleChain(na
// ListMorphRuleChains lists morph rule chains from Policy contract using both Proxy contract and storage account as consigners.
func (contractStorage *ProxyVerificationContractStorage) ListMorphRuleChains(name chain.Name, target engine.Target) ([]*chain.Chain, error) {
-	// contractStorageActor is reconstructed per each method invocation because RPCActor's (that is, basically, WSClient) connection may get invalidated, but
-	// ProxyVerificationContractStorage does not manage reconnections.
-	contractStorageActor, err := contractStorage.newContractStorageActor()
-	if err != nil {
-		return nil, err
-	}
-	return policy_morph.NewContractStorage(contractStorageActor, contractStorage.policyScriptHash).ListMorphRuleChains(name, target)
+	rpcActor := contractStorage.rpcActorProvider.GetRPCActor()
+	inv := &invokerAdapter{Invoker: invoker.New(rpcActor, nil), rpcInvoker: rpcActor}
+	return policy_morph.NewContractStorageReader(inv, contractStorage.policyScriptHash).ListMorphRuleChains(name, target)
}

-func cosigners(acc *wallet.Account, proxyScriptHash, policyScriptHash util.Uint160) []actor.SignerAccount {
-	return []actor.SignerAccount{
-		{
-			Signer: transaction.Signer{
-				Account:          proxyScriptHash,
-				Scopes:           transaction.CustomContracts,
-				AllowedContracts: []util.Uint160{policyScriptHash},
-			},
-			Account: notary.FakeContractAccount(proxyScriptHash),
-		},
-		{
-			Signer: transaction.Signer{
-				Account: acc.Contract.ScriptHash(),
-				Scopes:  transaction.CalledByEntry,
-			},
-			Account: acc,
-		},
-	}
+type invokerAdapter struct {
+	*invoker.Invoker
+	rpcInvoker invoker.RPCInvoke
+}
+
+func (n *invokerAdapter) GetRPCInvoker() invoker.RPCInvoke {
+	return n.rpcInvoker
}
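Note: the rationale visible in the hunk above appears to be that listing rule chains is a read-only call, so the rewritten `ListMorphRuleChains` no longer builds a signing actor with cosigners. It wraps a freshly fetched RPC actor in a plain `invoker.Invoker` and hands it to `policy_morph.NewContractStorageReader`; the small `invokerAdapter` presumably exists only to expose `GetRPCInvoker` on top of the embedded invoker. Fetching the RPC actor on every call also preserves the tolerance to WSClient reconnections that the removed comment warned about, while the cosigner list for write paths is now built once in the constructor instead of per invocation.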


@@ -67,7 +67,7 @@ func (c SenderClassifier) IsInnerRingOrContainerNode(ctx context.Context, ownerK
	if err != nil {
		// do not throw error, try best case matching
		c.log.Debug(ctx, logs.V2CantCheckIfRequestFromInnerRing,
-			zap.String("error", err.Error()))
+			zap.Error(err))
	} else if isInnerRingNode {
		return &ClassifyResult{
			Role: acl.RoleInnerRing,
@@ -84,7 +84,7 @@ func (c SenderClassifier) IsInnerRingOrContainerNode(ctx context.Context, ownerK
		// is not possible for previous epoch, so
		// do not throw error, try best case matching
		c.log.Debug(ctx, logs.V2CantCheckIfRequestFromContainerNode,
-			zap.String("error", err.Error()))
+			zap.Error(err))
	} else if isContainerNode {
		return &ClassifyResult{
			Role: acl.RoleContainer,
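Note: the `zap.String("error", err.Error())` → `zap.Error(err)` rewrite repeated across the files below keeps the same `error` log key (`zap.Error` is shorthand for `zap.NamedError("error", err)`) while passing the error value itself to the encoder. It is also nil-safe: a nil error simply emits no field, whereas calling `err.Error()` on a nil error panics. A minimal sketch:

```go
package main

import (
	"errors"

	"go.uber.org/zap"
)

func main() {
	logger, _ := zap.NewDevelopment()
	defer logger.Sync()

	var err error
	// Safe even though err is nil: zap.Error(nil) emits no field,
	// while zap.String("error", err.Error()) would panic here.
	logger.Warn("check failed", zap.Error(err))

	err = errors.New("connection refused")
	// Logged under the same "error" key as the old zap.String form.
	logger.Warn("check failed", zap.Error(err))
}
```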


@@ -8,7 +8,6 @@ type (
	// ContractProcessor interface defines functions for binding event producers
	// such as event.Listener and Timers with contract processor.
	ContractProcessor interface {
-		ListenerNotificationParsers() []event.NotificationParserInfo
		ListenerNotificationHandlers() []event.NotificationHandlerInfo
		ListenerNotaryParsers() []event.NotaryParserInfo
		ListenerNotaryHandlers() []event.NotaryHandlerInfo
@@ -16,11 +15,6 @@
)

func connectListenerWithProcessor(l event.Listener, p ContractProcessor) {
-	// register notification parsers
-	for _, parser := range p.ListenerNotificationParsers() {
-		l.SetNotificationParser(parser)
-	}
-
	// register notification handlers
	for _, handler := range p.ListenerNotificationHandlers() {
		l.RegisterNotificationHandler(handler)


@@ -38,10 +38,7 @@ import (
func (s *Server) initNetmapProcessor(ctx context.Context, cfg *viper.Viper,
	alphaSync event.Handler,
) error {
-	locodeValidator, err := s.newLocodeValidator(cfg)
-	if err != nil {
-		return err
-	}
+	locodeValidator := s.newLocodeValidator(cfg)

	netSettings := (*networkSettings)(s.netmapClient)
@@ -51,6 +48,7 @@ func (s *Server) initNetmapProcessor(ctx context.Context, cfg *viper.Viper,
	poolSize := cfg.GetInt("workers.netmap")
	s.log.Debug(ctx, logs.NetmapNetmapWorkerPool, zap.Int("size", poolSize))

+	var err error
	s.netmapProcessor, err = netmap.New(&netmap.Params{
		Log:     s.log,
		Metrics: s.irMetrics,
@@ -100,7 +98,7 @@ func (s *Server) initMainnet(ctx context.Context, cfg *viper.Viper, morphChain *
	fromMainChainBlock, err := s.persistate.UInt32(persistateMainChainLastBlockKey)
	if err != nil {
		fromMainChainBlock = 0
-		s.log.Warn(ctx, logs.InnerringCantGetLastProcessedMainChainBlockNumber, zap.String("error", err.Error()))
+		s.log.Warn(ctx, logs.InnerringCantGetLastProcessedMainChainBlockNumber, zap.Error(err))
	}
	mainnetChain.from = fromMainChainBlock
@@ -380,7 +378,6 @@ func (s *Server) initClientsFromMorph() (*serverMorphClients, error) {
	// form morph container client's options
	morphCnrOpts := make([]container.Option, 0, 3)
	morphCnrOpts = append(morphCnrOpts,
-		container.TryNotary(),
		container.AsAlphabet(),
	)
@@ -390,12 +387,12 @@ func (s *Server) initClientsFromMorph() (*serverMorphClients, error) {
	}
	s.containerClient = result.CnrClient

-	s.netmapClient, err = nmClient.NewFromMorph(s.morphClient, s.contracts.netmap, fee, nmClient.TryNotary(), nmClient.AsAlphabet())
+	s.netmapClient, err = nmClient.NewFromMorph(s.morphClient, s.contracts.netmap, fee, nmClient.AsAlphabet())
	if err != nil {
		return nil, err
	}

-	s.balanceClient, err = balanceClient.NewFromMorph(s.morphClient, s.contracts.balance, fee, balanceClient.TryNotary(), balanceClient.AsAlphabet())
+	s.balanceClient, err = balanceClient.NewFromMorph(s.morphClient, s.contracts.balance, fee, balanceClient.AsAlphabet())
	if err != nil {
		return nil, err
	}
@@ -457,7 +454,7 @@ func (s *Server) initMorph(ctx context.Context, cfg *viper.Viper, errChan chan<-
	fromSideChainBlock, err := s.persistate.UInt32(persistateSideChainLastBlockKey)
	if err != nil {
		fromSideChainBlock = 0
-		s.log.Warn(ctx, logs.InnerringCantGetLastProcessedSideChainBlockNumber, zap.String("error", err.Error()))
+		s.log.Warn(ctx, logs.InnerringCantGetLastProcessedSideChainBlockNumber, zap.Error(err))
	}

	morphChain := &chainParams{


@@ -177,7 +177,7 @@ func (s *Server) Start(ctx context.Context, intError chan<- error) (err error) {
	if err != nil {
		// we don't stop inner ring execution on this error
		s.log.Warn(ctx, logs.InnerringCantVoteForPreparedValidators,
-			zap.String("error", err.Error()))
+			zap.Error(err))
	}

	s.tickInitialExpoch(ctx)
@@ -308,7 +308,7 @@ func (s *Server) Stop(ctx context.Context) {
	for _, c := range s.closers {
		if err := c(); err != nil {
			s.log.Warn(ctx, logs.InnerringCloserError,
-				zap.String("error", err.Error()),
+				zap.Error(err),
			)
		}
	}


@@ -9,7 +9,7 @@ import (
	"github.com/spf13/viper"
)

-func (s *Server) newLocodeValidator(cfg *viper.Viper) (netmap.NodeValidator, error) {
+func (s *Server) newLocodeValidator(cfg *viper.Viper) netmap.NodeValidator {
	locodeDB := locodebolt.New(locodebolt.Prm{
		Path: cfg.GetString("locode.db.path"),
	},
@@ -21,7 +21,7 @@ func (s *Server) newLocodeValidator(cfg *viper.Viper) (netmap.NodeValidator, err
	return irlocode.New(irlocode.Prm{
		DB: (*locodeBoltDBWrapper)(locodeDB),
-	}), nil
+	})
}

type locodeBoltEntryWrapper struct {


@@ -33,7 +33,7 @@ func (ap *Processor) processEmit(ctx context.Context) bool {
	// there is no signature collecting, so we don't need extra fee
	_, err := ap.morphClient.Invoke(ctx, contract, 0, emitMethod)
	if err != nil {
-		ap.log.Warn(ctx, logs.AlphabetCantInvokeAlphabetEmitMethod, zap.String("error", err.Error()))
+		ap.log.Warn(ctx, logs.AlphabetCantInvokeAlphabetEmitMethod, zap.Error(err))
		return false
	}
@@ -47,7 +47,7 @@ func (ap *Processor) processEmit(ctx context.Context) bool {
	networkMap, err := ap.netmapClient.NetMap()
	if err != nil {
		ap.log.Warn(ctx, logs.AlphabetCantGetNetmapSnapshotToEmitGasToStorageNodes,
-			zap.String("error", err.Error()))
+			zap.Error(err))
		return false
	}
@@ -83,7 +83,7 @@ func (ap *Processor) transferGasToNetmapNodes(ctx context.Context, nmNodes []net
	key, err := keys.NewPublicKeyFromBytes(keyBytes, elliptic.P256())
	if err != nil {
		ap.log.Warn(ctx, logs.AlphabetCantParseNodePublicKey,
-			zap.String("error", err.Error()))
+			zap.Error(err))
		continue
	}
@@ -93,7 +93,7 @@ func (ap *Processor) transferGasToNetmapNodes(ctx context.Context, nmNodes []net
		ap.log.Warn(ctx, logs.AlphabetCantTransferGas,
			zap.String("receiver", key.Address()),
			zap.Int64("amount", int64(gasPerNode)),
-			zap.String("error", err.Error()),
+			zap.Error(err),
		)
	}
}
@@ -110,7 +110,7 @@ func (ap *Processor) transferGasToExtraNodes(ctx context.Context, pw []util.Uint
		ap.log.Warn(ctx, logs.AlphabetCantTransferGasToWallet,
			zap.Strings("receivers", receiversLog),
			zap.Int64("amount", int64(gasPerNode)),
-			zap.String("error", err.Error()),
+			zap.Error(err),
		)
	}
}


@@ -114,11 +114,6 @@ func (ap *Processor) SetParsedWallets(parsedWallets []util.Uint160) {
	ap.pwLock.Unlock()
}

-// ListenerNotificationParsers for the 'event.Listener' event producer.
-func (ap *Processor) ListenerNotificationParsers() []event.NotificationParserInfo {
-	return nil
-}
-
// ListenerNotificationHandlers for the 'event.Listener' event producer.
func (ap *Processor) ListenerNotificationHandlers() []event.NotificationHandlerInfo {
	return nil


@@ -88,32 +88,16 @@ func New(p *Params) (*Processor, error) {
	}, nil
}

-// ListenerNotificationParsers for the 'event.Listener' event producer.
-func (bp *Processor) ListenerNotificationParsers() []event.NotificationParserInfo {
-	var parsers []event.NotificationParserInfo
-
-	// new lock event
-	lock := event.NotificationParserInfo{}
-	lock.SetType(lockNotification)
-	lock.SetScriptHash(bp.balanceSC)
-	lock.SetParser(balanceEvent.ParseLock)
-	parsers = append(parsers, lock)
-
-	return parsers
-}
-
// ListenerNotificationHandlers for the 'event.Listener' event producer.
func (bp *Processor) ListenerNotificationHandlers() []event.NotificationHandlerInfo {
-	var handlers []event.NotificationHandlerInfo
-
-	// lock handler
-	lock := event.NotificationHandlerInfo{}
-	lock.SetType(lockNotification)
-	lock.SetScriptHash(bp.balanceSC)
-	lock.SetHandler(bp.handleLock)
-	handlers = append(handlers, lock)
-
-	return handlers
+	return []event.NotificationHandlerInfo{
+		{
+			Contract: bp.balanceSC,
+			Type:     lockNotification,
+			Parser:   balanceEvent.ParseLock,
+			Handlers: []event.Handler{bp.handleLock},
+		},
+	}
}

// ListenerNotaryParsers for the 'event.Listener' event producer.
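Note: the same refactoring repeats for the container, frostfs, governance and netmap processors below. The separate `ListenerNotificationParsers` method is gone, and each `event.NotificationHandlerInfo` literal now carries the contract hash, event type, parser and handlers together. Bundling a parser with its handlers is what makes the parser-registration log messages removed at the top of this diff (nil parser, handler of an event without a parser, parser added after the listener started) obsolete.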


@@ -50,7 +50,7 @@ func (cp *Processor) processContainerPut(ctx context.Context, put putEvent) bool
	err := cp.checkPutContainer(pctx)
	if err != nil {
		cp.log.Error(ctx, logs.ContainerPutContainerCheckFailed,
-			zap.String("error", err.Error()),
+			zap.Error(err),
		)
		return false
@@ -58,7 +58,7 @@ func (cp *Processor) processContainerPut(ctx context.Context, put putEvent) bool
	if err := cp.morphClient.NotarySignAndInvokeTX(pctx.e.NotaryRequest().MainTransaction); err != nil {
		cp.log.Error(ctx, logs.ContainerCouldNotApprovePutContainer,
-			zap.String("error", err.Error()),
+			zap.Error(err),
		)
		return false
	}
@@ -113,7 +113,7 @@ func (cp *Processor) processContainerDelete(ctx context.Context, e containerEven
	err := cp.checkDeleteContainer(e)
	if err != nil {
		cp.log.Error(ctx, logs.ContainerDeleteContainerCheckFailed,
-			zap.String("error", err.Error()),
+			zap.Error(err),
		)
		return false
@@ -121,7 +121,7 @@ func (cp *Processor) processContainerDelete(ctx context.Context, e containerEven
	if err := cp.morphClient.NotarySignAndInvokeTX(e.NotaryRequest().MainTransaction); err != nil {
		cp.log.Error(ctx, logs.ContainerCouldNotApproveDeleteContainer,
-			zap.String("error", err.Error()),
+			zap.Error(err),
		)
		return false


@@ -118,11 +118,6 @@ func New(p *Params) (*Processor, error) {
	}, nil
}

-// ListenerNotificationParsers for the 'event.Listener' event producer.
-func (cp *Processor) ListenerNotificationParsers() []event.NotificationParserInfo {
-	return nil
-}
-
// ListenerNotificationHandlers for the 'event.Listener' event producer.
func (cp *Processor) ListenerNotificationHandlers() []event.NotificationHandlerInfo {
	return nil


@@ -73,7 +73,7 @@ func (np *Processor) processDeposit(ctx context.Context, deposit frostfsEvent.De
	err = np.morphClient.TransferGas(receiver, np.mintEmitValue)
	if err != nil {
		np.log.Error(ctx, logs.FrostFSCantTransferNativeGasToReceiver,
-			zap.String("error", err.Error()))
+			zap.Error(err))
		return false
	}


@@ -142,70 +142,34 @@ func New(p *Params) (*Processor, error) {
	}, nil
}

-// ListenerNotificationParsers for the 'event.Listener' event producer.
-func (np *Processor) ListenerNotificationParsers() []event.NotificationParserInfo {
-	var (
-		parsers = make([]event.NotificationParserInfo, 0, 6)
-
-		p event.NotificationParserInfo
-	)
-
-	p.SetScriptHash(np.frostfsContract)
-
-	// deposit event
-	p.SetType(event.TypeFromString(depositNotification))
-	p.SetParser(frostfsEvent.ParseDeposit)
-	parsers = append(parsers, p)
-
-	// withdraw event
-	p.SetType(event.TypeFromString(withdrawNotification))
-	p.SetParser(frostfsEvent.ParseWithdraw)
-	parsers = append(parsers, p)
-
-	// cheque event
-	p.SetType(event.TypeFromString(chequeNotification))
-	p.SetParser(frostfsEvent.ParseCheque)
-	parsers = append(parsers, p)
-
-	// config event
-	p.SetType(event.TypeFromString(configNotification))
-	p.SetParser(frostfsEvent.ParseConfig)
-	parsers = append(parsers, p)
-
-	return parsers
-}
-
// ListenerNotificationHandlers for the 'event.Listener' event producer.
func (np *Processor) ListenerNotificationHandlers() []event.NotificationHandlerInfo {
-	var (
-		handlers = make([]event.NotificationHandlerInfo, 0, 6)
-
-		h event.NotificationHandlerInfo
-	)
-
-	h.SetScriptHash(np.frostfsContract)
-
-	// deposit handler
-	h.SetType(event.TypeFromString(depositNotification))
-	h.SetHandler(np.handleDeposit)
-	handlers = append(handlers, h)
-
-	// withdraw handler
-	h.SetType(event.TypeFromString(withdrawNotification))
-	h.SetHandler(np.handleWithdraw)
-	handlers = append(handlers, h)
-
-	// cheque handler
-	h.SetType(event.TypeFromString(chequeNotification))
-	h.SetHandler(np.handleCheque)
-	handlers = append(handlers, h)
-
-	// config handler
-	h.SetType(event.TypeFromString(configNotification))
-	h.SetHandler(np.handleConfig)
-	handlers = append(handlers, h)
-
-	return handlers
+	return []event.NotificationHandlerInfo{
+		{
+			Contract: np.frostfsContract,
+			Type:     event.TypeFromString(depositNotification),
+			Parser:   frostfsEvent.ParseDeposit,
+			Handlers: []event.Handler{np.handleDeposit},
+		},
+		{
+			Contract: np.frostfsContract,
+			Type:     event.TypeFromString(withdrawNotification),
+			Parser:   frostfsEvent.ParseWithdraw,
+			Handlers: []event.Handler{np.handleWithdraw},
+		},
+		{
+			Contract: np.frostfsContract,
+			Type:     event.TypeFromString(chequeNotification),
+			Parser:   frostfsEvent.ParseCheque,
+			Handlers: []event.Handler{np.handleCheque},
+		},
+		{
+			Contract: np.frostfsContract,
+			Type:     event.TypeFromString(configNotification),
+			Parser:   frostfsEvent.ParseConfig,
+			Handlers: []event.Handler{np.handleConfig},
+		},
+	}
}

// ListenerNotaryParsers for the 'event.Listener' event producer.


@@ -28,21 +28,21 @@ func (gp *Processor) processAlphabetSync(ctx context.Context, txHash util.Uint25
	mainnetAlphabet, err := gp.mainnetClient.NeoFSAlphabetList()
	if err != nil {
		gp.log.Error(ctx, logs.GovernanceCantFetchAlphabetListFromMainNet,
-			zap.String("error", err.Error()))
+			zap.Error(err))
		return false
	}

	sidechainAlphabet, err := gp.morphClient.Committee()
	if err != nil {
		gp.log.Error(ctx, logs.GovernanceCantFetchAlphabetListFromSideChain,
-			zap.String("error", err.Error()))
+			zap.Error(err))
		return false
	}

	newAlphabet, err := newAlphabetList(sidechainAlphabet, mainnetAlphabet)
	if err != nil {
		gp.log.Error(ctx, logs.GovernanceCantMergeAlphabetListsFromMainNetAndSideChain,
-			zap.String("error", err.Error()))
+			zap.Error(err))
		return false
	}
@@ -65,7 +65,7 @@ func (gp *Processor) processAlphabetSync(ctx context.Context, txHash util.Uint25
	err = gp.voter.VoteForSidechainValidator(ctx, votePrm)
	if err != nil {
		gp.log.Error(ctx, logs.GovernanceCantVoteForSideChainCommittee,
-			zap.String("error", err.Error()))
+			zap.Error(err))
	}

	// 2. Update NeoFSAlphabet role in the sidechain.
@@ -98,14 +98,14 @@ func (gp *Processor) updateNeoFSAlphabetRoleInSidechain(ctx context.Context, sid
	innerRing, err := gp.irFetcher.InnerRingKeys()
	if err != nil {
		gp.log.Error(ctx, logs.GovernanceCantFetchInnerRingListFromSideChain,
-			zap.String("error", err.Error()))
+			zap.Error(err))
		return
	}

	newInnerRing, err := updateInnerRing(innerRing, sidechainAlphabet, newAlphabet)
	if err != nil {
		gp.log.Error(ctx, logs.GovernanceCantCreateNewInnerRingListWithNewAlphabetKeys,
-			zap.String("error", err.Error()))
+			zap.Error(err))
		return
	}
@@ -122,7 +122,7 @@ func (gp *Processor) updateNeoFSAlphabetRoleInSidechain(ctx context.Context, sid
	if err = gp.morphClient.UpdateNeoFSAlphabetList(ctx, updPrm); err != nil {
		gp.log.Error(ctx, logs.GovernanceCantUpdateInnerRingListWithNewAlphabetKeys,
-			zap.String("error", err.Error()))
+			zap.Error(err))
	}
}
@@ -135,7 +135,7 @@ func (gp *Processor) updateNotaryRoleInSidechain(ctx context.Context, newAlphabe
	err := gp.morphClient.UpdateNotaryList(ctx, updPrm)
	if err != nil {
		gp.log.Error(ctx, logs.GovernanceCantUpdateListOfNotaryNodesInSideChain,
-			zap.String("error", err.Error()))
+			zap.Error(err))
	}
}
@@ -155,6 +155,6 @@ func (gp *Processor) updateFrostFSContractInMainnet(ctx context.Context, newAlph
	err := gp.frostfsClient.AlphabetUpdate(ctx, prm)
	if err != nil {
		gp.log.Error(ctx, logs.GovernanceCantUpdateListOfAlphabetNodesInFrostfsContract,
-			zap.String("error", err.Error()))
+			zap.Error(err))
	}
}


@@ -155,22 +155,16 @@ func New(p *Params) (*Processor, error) {
	}, nil
}

-// ListenerNotificationParsers for the 'event.Listener' event producer.
-func (gp *Processor) ListenerNotificationParsers() []event.NotificationParserInfo {
-	var pi event.NotificationParserInfo
-	pi.SetScriptHash(gp.designate)
-	pi.SetType(event.TypeFromString(native.DesignationEventName))
-	pi.SetParser(rolemanagement.ParseDesignate)
-	return []event.NotificationParserInfo{pi}
-}
-
// ListenerNotificationHandlers for the 'event.Listener' event producer.
func (gp *Processor) ListenerNotificationHandlers() []event.NotificationHandlerInfo {
-	var hi event.NotificationHandlerInfo
-	hi.SetScriptHash(gp.designate)
-	hi.SetType(event.TypeFromString(native.DesignationEventName))
-	hi.SetHandler(gp.HandleAlphabetSync)
-	return []event.NotificationHandlerInfo{hi}
+	return []event.NotificationHandlerInfo{
+		{
+			Contract: gp.designate,
+			Type:     event.TypeFromString(native.DesignationEventName),
+			Parser:   rolemanagement.ParseDesignate,
+			Handlers: []event.Handler{gp.HandleAlphabetSync},
+		},
+	}
}

// ListenerNotaryParsers for the 'event.Listener' event producer.


@@ -49,7 +49,7 @@ func (np *Processor) processNetmapCleanupTick(ctx context.Context, ev netmapClea
	})
	if err != nil {
		np.log.Warn(ctx, logs.NetmapCantIterateOnNetmapCleanerCache,
-			zap.String("error", err.Error()))
+			zap.Error(err))
		return false
	}


@@ -17,7 +17,7 @@ func (np *Processor) processNewEpoch(ctx context.Context, ev netmapEvent.NewEpoc
	epochDuration, err := np.netmapClient.EpochDuration()
	if err != nil {
		np.log.Warn(ctx, logs.NetmapCantGetEpochDuration,
-			zap.String("error", err.Error()))
+			zap.Error(err))
	} else {
		np.epochState.SetEpochDuration(epochDuration)
	}
@@ -28,19 +28,19 @@ func (np *Processor) processNewEpoch(ctx context.Context, ev netmapEvent.NewEpoc
	if err != nil {
		np.log.Warn(ctx, logs.NetmapCantGetTransactionHeight,
			zap.String("hash", ev.TxHash().StringLE()),
-			zap.String("error", err.Error()))
+			zap.Error(err))
	}

	if err := np.epochTimer.ResetEpochTimer(h); err != nil {
		np.log.Warn(ctx, logs.NetmapCantResetEpochTimer,
-			zap.String("error", err.Error()))
+			zap.Error(err))
	}

	// get new netmap snapshot
	networkMap, err := np.netmapClient.NetMap()
	if err != nil {
		np.log.Warn(ctx, logs.NetmapCantGetNetmapSnapshotToPerformCleanup,
-			zap.String("error", err.Error()))
+			zap.Error(err))
		return false
	}


@@ -42,7 +42,7 @@ func (np *Processor) processAddPeer(ctx context.Context, ev netmapEvent.AddPeer)
	err = np.nodeValidator.VerifyAndUpdate(&nodeInfo)
	if err != nil {
		np.log.Warn(ctx, logs.NetmapCouldNotVerifyAndUpdateInformationAboutNetworkMapCandidate,
-			zap.String("error", err.Error()),
+			zap.Error(err),
		)
		return false


@@ -161,36 +161,16 @@ func New(p *Params) (*Processor, error) {
	}, nil
}

-// ListenerNotificationParsers for the 'event.Listener' event producer.
-func (np *Processor) ListenerNotificationParsers() []event.NotificationParserInfo {
-	parsers := make([]event.NotificationParserInfo, 0, 3)
-
-	var p event.NotificationParserInfo
-
-	p.SetScriptHash(np.netmapClient.ContractAddress())
-
-	// new epoch event
-	p.SetType(newEpochNotification)
-	p.SetParser(netmapEvent.ParseNewEpoch)
-	parsers = append(parsers, p)
-
-	return parsers
-}
-
// ListenerNotificationHandlers for the 'event.Listener' event producer.
func (np *Processor) ListenerNotificationHandlers() []event.NotificationHandlerInfo {
-	handlers := make([]event.NotificationHandlerInfo, 0, 3)
-
-	var i event.NotificationHandlerInfo
-
-	i.SetScriptHash(np.netmapClient.ContractAddress())
-
-	// new epoch handler
-	i.SetType(newEpochNotification)
-	i.SetHandler(np.handleNewEpoch)
-	handlers = append(handlers, i)
-
-	return handlers
+	return []event.NotificationHandlerInfo{
+		{
+			Contract: np.netmapClient.ContractAddress(),
+			Type:     newEpochNotification,
+			Parser:   netmapEvent.ParseNewEpoch,
+			Handlers: []event.Handler{np.handleNewEpoch},
+		},
+	}
}

// ListenerNotaryParsers for the 'event.Listener' event producer.


@@ -62,7 +62,7 @@ func (s *Server) IsAlphabet(ctx context.Context) bool {
func (s *Server) InnerRingIndex(ctx context.Context) int {
	index, err := s.statusIndex.InnerRingIndex()
	if err != nil {
-		s.log.Error(ctx, logs.InnerringCantGetInnerRingIndex, zap.String("error", err.Error()))
+		s.log.Error(ctx, logs.InnerringCantGetInnerRingIndex, zap.Error(err))
		return -1
	}
@@ -74,7 +74,7 @@ func (s *Server) InnerRingIndex(ctx context.Context) int {
func (s *Server) InnerRingSize(ctx context.Context) int {
	size, err := s.statusIndex.InnerRingSize()
	if err != nil {
-		s.log.Error(ctx, logs.InnerringCantGetInnerRingSize, zap.String("error", err.Error()))
+		s.log.Error(ctx, logs.InnerringCantGetInnerRingSize, zap.Error(err))
		return 0
	}
@@ -86,7 +86,7 @@ func (s *Server) InnerRingSize(ctx context.Context) int {
func (s *Server) AlphabetIndex(ctx context.Context) int {
	index, err := s.statusIndex.AlphabetIndex()
	if err != nil {
-		s.log.Error(ctx, logs.InnerringCantGetAlphabetIndex, zap.String("error", err.Error()))
+		s.log.Error(ctx, logs.InnerringCantGetAlphabetIndex, zap.Error(err))
		return -1
	}
@@ -132,7 +132,7 @@ func (s *Server) voteForSidechainValidator(ctx context.Context, prm governance.V
			s.log.Warn(ctx, logs.InnerringCantInvokeVoteMethodInAlphabetContract,
				zap.Int8("alphabet_index", int8(letter)),
				zap.Uint64("epoch", epoch),
-				zap.String("error", err.Error()))
+				zap.Error(err))
		}
	})


@@ -129,7 +129,7 @@ func (b *Blobovnicza) initializeCounters(ctx context.Context) error {
		})
	})
	if err != nil {
-		return fmt.Errorf("can't determine DB size: %w", err)
+		return fmt.Errorf("determine DB size: %w", err)
	}
	if (!sizeExists || !itemsCountExists) && !b.boltOptions.ReadOnly {
		b.log.Debug(ctx, logs.BlobovniczaSavingCountersToMeta, zap.Uint64("size", size), zap.Uint64("items", items))
@@ -140,7 +140,7 @@ func (b *Blobovnicza) initializeCounters(ctx context.Context) error {
			return saveItemsCount(tx, items)
		}); err != nil {
			b.log.Debug(ctx, logs.BlobovniczaSavingCountersToMetaFailed, zap.Uint64("size", size), zap.Uint64("items", items))
-			return fmt.Errorf("can't save blobovnicza's size and items count: %w", err)
+			return fmt.Errorf("save blobovnicza's size and items count: %w", err)
		}
		b.log.Debug(ctx, logs.BlobovniczaSavingCountersToMetaSuccess, zap.Uint64("size", size), zap.Uint64("items", items))
	}
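Note: the `can't`/`could not` prefixes dropped here and in several files below follow the usual `%w`-chaining rationale: every wrapping layer adds its own context, so verb prefixes only pile up as an error bubbles toward the caller. A runnable comparison (names are illustrative):

```go
package main

import (
	"errors"
	"fmt"
)

func main() {
	base := errors.New("no such bucket")

	// Prefixed style stacks a verb at every layer:
	old := fmt.Errorf("can't initialize shard: %w",
		fmt.Errorf("can't determine DB size: %w", base))
	fmt.Println(old) // can't initialize shard: can't determine DB size: no such bucket

	// Trimmed style reads as a clean chain of contexts:
	cur := fmt.Errorf("initialize shard: %w",
		fmt.Errorf("determine DB size: %w", base))
	fmt.Println(cur) // initialize shard: determine DB size: no such bucket
}
```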


@@ -146,7 +146,7 @@ func (b *Blobovnicza) Iterate(ctx context.Context, prm IteratePrm) (IterateRes,
			if prm.ignoreErrors {
				return nil
			}
-			return fmt.Errorf("could not decode address key: %w", err)
+			return fmt.Errorf("decode address key: %w", err)
		}
	}


@@ -19,7 +19,10 @@ import (
	"go.uber.org/zap"
)

-var errObjectIsDeleteProtected = errors.New("object is delete protected")
+var (
+	errObjectIsDeleteProtected = errors.New("object is delete protected")
+	deleteRes                  = common.DeleteRes{}
+)

// Delete deletes object from blobovnicza tree.
//
@@ -43,17 +46,17 @@ func (b *Blobovniczas) Delete(ctx context.Context, prm common.DeletePrm) (res co
	defer span.End()

	if b.readOnly {
-		return common.DeleteRes{}, common.ErrReadOnly
+		return deleteRes, common.ErrReadOnly
	}

	if b.rebuildGuard.TryRLock() {
		defer b.rebuildGuard.RUnlock()
	} else {
-		return common.DeleteRes{}, errRebuildInProgress
+		return deleteRes, errRebuildInProgress
	}

	if b.deleteProtectedObjects.Contains(prm.Address) {
-		return common.DeleteRes{}, errObjectIsDeleteProtected
+		return deleteRes, errObjectIsDeleteProtected
	}

	var bPrm blobovnicza.DeletePrm
@@ -82,7 +85,7 @@ func (b *Blobovniczas) Delete(ctx context.Context, prm common.DeletePrm) (res co
				if !client.IsErrObjectNotFound(err) {
					b.log.Debug(ctx, logs.BlobovniczatreeCouldNotRemoveObjectFromLevel,
						zap.String("level", p),
-						zap.String("error", err.Error()),
+						zap.Error(err),
						zap.String("trace_id", tracingPkg.GetTraceID(ctx)),
					)
				}
@@ -98,7 +101,7 @@ func (b *Blobovniczas) Delete(ctx context.Context, prm common.DeletePrm) (res co

	if err == nil && !objectFound {
		// not found in any blobovnicza
-		return common.DeleteRes{}, logicerr.Wrap(new(apistatus.ObjectNotFound))
+		return deleteRes, logicerr.Wrap(new(apistatus.ObjectNotFound))
	}

	success = err == nil
@@ -112,7 +115,7 @@ func (b *Blobovniczas) deleteObjectFromLevel(ctx context.Context, prm blobovnicz
	shBlz := b.getBlobovnicza(ctx, blzPath)
	blz, err := shBlz.Open(ctx)
	if err != nil {
-		return common.DeleteRes{}, err
+		return deleteRes, err
	}
	defer shBlz.Close(ctx)
@@ -122,5 +125,5 @@ func (b *Blobovniczas) deleteObjectFromLevel(ctx context.Context, prm blobovnicz
// removes object from blobovnicza and returns common.DeleteRes.
func (b *Blobovniczas) deleteObject(ctx context.Context, blz *blobovnicza.Blobovnicza, prm blobovnicza.DeletePrm) (common.DeleteRes, error) {
	_, err := blz.Delete(ctx, prm)
-	return common.DeleteRes{}, err
+	return deleteRes, err
}
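Note: the package-level `deleteRes` introduced above is a shared zero value of `common.DeleteRes`. Returning it instead of spelling out `common.DeleteRes{}` at each site is behaviorally identical (the value is returned by copy and never mutated) and simply trims repetition across the many return paths.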


@@ -57,7 +57,7 @@ func (b *Blobovniczas) Exists(ctx context.Context, prm common.ExistsPrm) (common
			if !client.IsErrObjectNotFound(err) {
				b.log.Debug(ctx, logs.BlobovniczatreeCouldNotGetObjectFromLevel,
					zap.String("level", p),
-					zap.String("error", err.Error()),
+					zap.Error(err),
					zap.String("trace_id", tracingPkg.GetTraceID(ctx)))
			}
		}


@@ -69,7 +69,7 @@ func (b *Blobovniczas) Get(ctx context.Context, prm common.GetPrm) (res common.G
			if !client.IsErrObjectNotFound(err) {
				b.log.Debug(ctx, logs.BlobovniczatreeCouldNotGetObjectFromLevel,
					zap.String("level", p),
-					zap.String("error", err.Error()),
+					zap.Error(err),
					zap.String("trace_id", tracingPkg.GetTraceID(ctx)),
				)
			}
@@ -115,13 +115,13 @@ func (b *Blobovniczas) getObject(ctx context.Context, blz *blobovnicza.Blobovnic
	// decompress the data
	data, err := b.compression.Decompress(res.Object())
	if err != nil {
-		return common.GetRes{}, fmt.Errorf("could not decompress object data: %w", err)
+		return common.GetRes{}, fmt.Errorf("decompress object data: %w", err)
	}

	// unmarshal the object
	obj := objectSDK.New()
	if err := obj.Unmarshal(data); err != nil {
-		return common.GetRes{}, fmt.Errorf("could not unmarshal the object: %w", err)
+		return common.GetRes{}, fmt.Errorf("unmarshal the object: %w", err)
	}

	return common.GetRes{Object: obj, RawData: data}, nil


@@ -71,7 +71,7 @@ func (b *Blobovniczas) GetRange(ctx context.Context, prm common.GetRangePrm) (re
			if !outOfBounds && !client.IsErrObjectNotFound(err) {
				b.log.Debug(ctx, logs.BlobovniczatreeCouldNotGetObjectFromLevel,
					zap.String("level", p),
-					zap.String("error", err.Error()),
+					zap.Error(err),
					zap.String("trace_id", tracingPkg.GetTraceID(ctx)))
			}
			if outOfBounds {
@@ -130,13 +130,13 @@ func (b *Blobovniczas) getObjectRange(ctx context.Context, blz *blobovnicza.Blob
	// decompress the data
	data, err := b.compression.Decompress(res.Object())
	if err != nil {
-		return common.GetRangeRes{}, fmt.Errorf("could not decompress object data: %w", err)
+		return common.GetRangeRes{}, fmt.Errorf("decompress object data: %w", err)
	}

	// unmarshal the object
	obj := objectSDK.New()
	if err := obj.Unmarshal(data); err != nil {
-		return common.GetRangeRes{}, fmt.Errorf("could not unmarshal the object: %w", err)
+		return common.GetRangeRes{}, fmt.Errorf("unmarshal the object: %w", err)
	}

	from := prm.Range.GetOffset()


@@ -44,12 +44,12 @@ func (b *Blobovniczas) Iterate(ctx context.Context, prm common.IteratePrm) (comm
			if prm.IgnoreErrors {
				b.log.Warn(ctx, logs.BlobstorErrorOccurredDuringTheIteration,
					zap.Stringer("address", elem.Address()),
-					zap.String("err", err.Error()),
+					zap.Error(err),
					zap.String("storage_id", p),
					zap.String("root_path", b.rootPath))
				return nil
			}
-			return fmt.Errorf("could not decompress object data: %w", err)
+			return fmt.Errorf("decompress object data: %w", err)
		}

		if prm.Handler != nil {
@@ -77,12 +77,12 @@ func (b *Blobovniczas) iterateBlobovniczas(ctx context.Context, ignoreErrors boo
		if err != nil {
			if ignoreErrors {
				b.log.Warn(ctx, logs.BlobstorErrorOccurredDuringTheIteration,
-					zap.String("err", err.Error()),
+					zap.Error(err),
					zap.String("storage_id", p),
					zap.String("root_path", b.rootPath))
				return false, nil
			}
-			return false, fmt.Errorf("could not open blobovnicza %s: %w", p, err)
+			return false, fmt.Errorf("open blobovnicza %s: %w", p, err)
		}
		defer shBlz.Close(ctx)
@@ -249,6 +249,12 @@ func (b *Blobovniczas) iterateSortedDBPaths(ctx context.Context, addr oid.Addres
}

func (b *Blobovniczas) iterateSordedDBPathsInternal(ctx context.Context, path string, addr oid.Address, f func(string) (bool, error)) (bool, error) {
+	select {
+	case <-ctx.Done():
+		return false, ctx.Err()
+	default:
+	}
+
	sysPath := filepath.Join(b.rootPath, path)
	entries, err := os.ReadDir(sysPath)
	if os.IsNotExist(err) && b.readOnly && path == "" { // non initialized tree in read only mode
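Note: the `select` added at the top of `iterateSordedDBPathsInternal` is the standard non-blocking cancellation check. Because the function recurses once per directory level, testing `ctx.Done()` on every entry bounds how much of the walk keeps running after cancellation. The pattern in isolation (a sketch; the walk body is assumed):

```go
package sketch

import "context"

// walk shows where the non-blocking check sits in a recursive traversal:
// the select falls through immediately unless the context is already done.
func walk(ctx context.Context, path string) error {
	select {
	case <-ctx.Done():
		return ctx.Err() // stop descending once the context is cancelled
	default:
	}

	// ... read the entries of path and recurse into subdirectories ...
	return nil
}
```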


@@ -69,10 +69,10 @@ func (b *sharedDB) Open(ctx context.Context) (*blobovnicza.Blobovnicza, error) {
		)...)

	if err := blz.Open(ctx); err != nil {
-		return nil, fmt.Errorf("could not open blobovnicza %s: %w", b.path, err)
+		return nil, fmt.Errorf("open blobovnicza %s: %w", b.path, err)
	}
	if err := blz.Init(ctx); err != nil {
-		return nil, fmt.Errorf("could not init blobovnicza %s: %w", b.path, err)
+		return nil, fmt.Errorf("init blobovnicza %s: %w", b.path, err)
	}

	b.refCount++
@@ -97,7 +97,7 @@ func (b *sharedDB) Close(ctx context.Context) {
		if err := b.blcza.Close(ctx); err != nil {
			b.log.Error(ctx, logs.BlobovniczatreeCouldNotCloseBlobovnicza,
				zap.String("id", b.path),
-				zap.String("error", err.Error()),
+				zap.Error(err),
			)
		}
		b.blcza = nil
@@ -125,9 +125,9 @@ func (b *sharedDB) CloseAndRemoveFile(ctx context.Context) error {
	if err := b.blcza.Close(ctx); err != nil {
		b.log.Error(ctx, logs.BlobovniczatreeCouldNotCloseBlobovnicza,
			zap.String("id", b.path),
-			zap.String("error", err.Error()),
+			zap.Error(err),
		)
-		return fmt.Errorf("failed to close blobovnicza (path = %s): %w", b.path, err)
+		return fmt.Errorf("close blobovnicza (path = %s): %w", b.path, err)
	}

	b.refCount = 0


@@ -83,7 +83,7 @@ func (i *putIterator) iterate(ctx context.Context, lvlPath string) (bool, error)
			i.B.reportError(ctx, logs.BlobovniczatreeCouldNotGetActiveBlobovnicza, err)
		} else {
			i.B.log.Debug(ctx, logs.BlobovniczatreeCouldNotGetActiveBlobovnicza,
-				zap.String("error", err.Error()),
+				zap.Error(err),
				zap.String("trace_id", tracingPkg.GetTraceID(ctx)))
		}
@@ -106,7 +106,7 @@ func (i *putIterator) iterate(ctx context.Context, lvlPath string) (bool, error)
		} else {
			i.B.log.Debug(ctx, logs.BlobovniczatreeCouldNotPutObjectToActiveBlobovnicza,
				zap.String("path", active.SystemPath()),
-				zap.String("error", err.Error()),
+				zap.Error(err),
				zap.String("trace_id", tracingPkg.GetTraceID(ctx)))
		}

		if errors.Is(err, blobovnicza.ErrNoSpace) {


@@ -74,7 +74,7 @@ func (b *BlobStor) Close(ctx context.Context) error {
	for i := range b.storage {
		err := b.storage[i].Storage.Close(ctx)
		if err != nil {
-			b.log.Info(ctx, logs.BlobstorCouldntCloseStorage, zap.String("error", err.Error()))
+			b.log.Info(ctx, logs.BlobstorCouldntCloseStorage, zap.Error(err))
			if firstErr == nil {
				firstErr = err
			}


@@ -75,7 +75,7 @@ func (b *BlobStor) Exists(ctx context.Context, prm common.ExistsPrm) (common.Exi
	for _, err := range errors[:len(errors)-1] {
		b.log.Warn(ctx, logs.BlobstorErrorOccurredDuringObjectExistenceChecking,
			zap.Stringer("address", prm.Address),
-			zap.String("error", err.Error()),
+			zap.Error(err),
			zap.String("trace_id", tracingPkg.GetTraceID(ctx)))
	}


@@ -153,7 +153,7 @@ func (t *FSTree) iterate(ctx context.Context, depth uint64, curPath []string, pr
	if err != nil {
		if prm.IgnoreErrors {
			t.log.Warn(ctx, logs.BlobstorErrorOccurredDuringTheIteration,
-				zap.String("err", err.Error()),
+				zap.Error(err),
				zap.String("directory_path", dirPath))
			return nil
		}
@@ -202,7 +202,7 @@ func (t *FSTree) iterate(ctx context.Context, depth uint64, curPath []string, pr
			if prm.IgnoreErrors {
				t.log.Warn(ctx, logs.BlobstorErrorOccurredDuringTheIteration,
					zap.Stringer("address", addr),
-					zap.String("err", err.Error()),
+					zap.Error(err),
					zap.String("path", path))
				continue
			}
@@ -538,7 +538,7 @@ func (t *FSTree) countFiles() (uint64, uint64, error) {
		},
	)
	if err != nil {
-		return 0, 0, fmt.Errorf("could not walk through %s directory: %w", t.RootPath, err)
+		return 0, 0, fmt.Errorf("walk through %s directory: %w", t.RootPath, err)
	}

	return count, size, nil
@@ -577,7 +577,7 @@ func (t *FSTree) ObjectsCount(ctx context.Context) (uint64, error) {
		},
	)
	if err != nil {
-		return 0, fmt.Errorf("could not walk through %s directory: %w", t.RootPath, err)
+		return 0, fmt.Errorf("walk through %s directory: %w", t.RootPath, err)
	}

	success = true
	return result, nil


@@ -136,6 +136,6 @@ func (w *genericWriter) removeWithCounter(p string, size uint64) error {
	if err := os.Remove(p); err != nil {
		return err
	}
-	w.fileCounter.Dec(uint64(size))
+	w.fileCounter.Dec(size)
	return nil
}


@@ -69,10 +69,13 @@ func (w *linuxWriter) writeFile(p string, data []byte) error {
	if err != nil {
		return err
	}
+	written := 0
	tmpPath := "/proc/self/fd/" + strconv.FormatUint(uint64(fd), 10)
	n, err := unix.Write(fd, data)
-	if err == nil {
-		if n == len(data) {
+	for err == nil {
+		written += n
+		if written == len(data) {
			err = unix.Linkat(unix.AT_FDCWD, tmpPath, unix.AT_FDCWD, p, unix.AT_SYMLINK_FOLLOW)
			if err == nil {
				w.fileCounter.Inc(uint64(len(data)))
@@ -80,9 +83,23 @@ func (w *linuxWriter) writeFile(p string, data []byte) error {
			if errors.Is(err, unix.EEXIST) {
				err = nil
			}
-		} else {
-			err = errors.New("incomplete write")
+			break
		}
+
+		// From man 2 write:
+		// https://www.man7.org/linux/man-pages/man2/write.2.html
+		//
+		// Note that a successful write() may transfer fewer than count
+		// bytes. Such partial writes can occur for various reasons; for
+		// example, because there was insufficient space on the disk device
+		// to write all of the requested bytes, or because a blocked write()
+		// to a socket, pipe, or similar was interrupted by a signal handler
+		// after it had transferred some, but before it had transferred all
+		// of the requested bytes. In the event of a partial write, the
+		// caller can make another write() call to transfer the remaining
+		// bytes. The subsequent call will either transfer further bytes or
+		// may result in an error (e.g., if the disk is now full).
+		n, err = unix.Write(fd, data[written:])
	}
	errClose := unix.Close(fd)
	if err != nil {
@@ -114,7 +131,7 @@ func (w *linuxWriter) removeFile(p string, size uint64) error {
		return logicerr.Wrap(new(apistatus.ObjectNotFound))
	}
	if err == nil {
-		w.fileCounter.Dec(uint64(size))
+		w.fileCounter.Dec(size)
	}
	return err
}
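Note: the rewrite above replaces the single-shot `unix.Write` plus the artificial `incomplete write` error with a retry loop, the conventional write-all idiom the quoted man page describes. The core of the loop in isolation (a sketch, not the repository's code):

```go
package sketch

import "golang.org/x/sys/unix"

// writeAll keeps calling write(2) on the remaining slice until every byte
// has been transferred or an error surfaces (e.g. ENOSPC once the device
// actually fills up mid-write).
func writeAll(fd int, data []byte) error {
	written := 0
	for written < len(data) {
		n, err := unix.Write(fd, data[written:])
		if err != nil {
			return err
		}
		written += n
	}
	return nil
}
```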


@@ -0,0 +1,42 @@
+//go:build linux && integration
+
+package fstree
+
+import (
+	"context"
+	"errors"
+	"os"
+	"testing"
+
+	"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/common"
+	"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/shard/mode"
+	"github.com/stretchr/testify/require"
+	"golang.org/x/sys/unix"
+)
+
+func TestENOSPC(t *testing.T) {
+	dir, err := os.MkdirTemp(t.TempDir(), "ramdisk")
+	require.NoError(t, err)
+
+	f, err := os.CreateTemp(t.TempDir(), "ramdisk_*")
+	require.NoError(t, err)
+
+	err = unix.Mount(f.Name(), dir, "tmpfs", 0, "size=1M")
+	if errors.Is(err, unix.EPERM) {
+		t.Skipf("skip size tests: no permission to mount: %v", err)
+		return
+	}
+	require.NoError(t, err)
+	defer func() {
+		require.NoError(t, unix.Unmount(dir, 0))
+	}()
+
+	fst := New(WithPath(dir), WithDepth(1))
+	require.NoError(t, fst.Open(mode.ComponentReadWrite))
+	require.NoError(t, fst.Init())
+
+	_, err = fst.Put(context.Background(), common.PutPrm{
+		RawData: make([]byte, 10<<20),
+	})
+	require.ErrorIs(t, err, common.ErrNoSpace)
+}
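Note: the new `TestENOSPC` above is gated by the `//go:build linux && integration` constraint, so it runs only on Linux when the integration build tag is set (e.g. `go test -tags integration`), and it skips itself when the environment lacks permission to mount a tmpfs, which keeps it harmless in unprivileged CI runs. It pins down exactly the behavior the `writeFile` rewrite depends on: a put larger than the 1M filesystem must surface `common.ErrNoSpace` rather than an incomplete write.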


@@ -45,7 +45,7 @@ func (b *BlobStor) Iterate(ctx context.Context, prm common.IteratePrm) (common.I
				b.log.Warn(ctx, logs.BlobstorErrorOccurredDuringTheIteration,
					zap.String("storage_path", b.storage[i].Storage.Path()),
					zap.String("storage_type", b.storage[i].Storage.Type()),
-					zap.String("err", err.Error()))
+					zap.Error(err))
				continue
			}
			return common.IterateRes{}, fmt.Errorf("blobstor iterator failure: %w", err)


@@ -47,13 +47,13 @@ func (s *memstoreImpl) Get(_ context.Context, req common.GetPrm) (common.GetRes,
	// Decompress the data.
	var err error
	if data, err = s.compression.Decompress(data); err != nil {
-		return common.GetRes{}, fmt.Errorf("could not decompress object data: %w", err)
+		return common.GetRes{}, fmt.Errorf("decompress object data: %w", err)
	}

	// Unmarshal the SDK object.
	obj := objectSDK.New()
	if err := obj.Unmarshal(data); err != nil {
-		return common.GetRes{}, fmt.Errorf("could not unmarshal the object: %w", err)
+		return common.GetRes{}, fmt.Errorf("unmarshal the object: %w", err)
	}

	return common.GetRes{Object: obj, RawData: data}, nil
@@ -133,11 +133,11 @@ func (s *memstoreImpl) Iterate(_ context.Context, req common.IteratePrm) (common
		elem := common.IterationElement{
			ObjectData: v,
		}
-		if err := elem.Address.DecodeString(string(k)); err != nil {
+		if err := elem.Address.DecodeString(k); err != nil {
			if req.IgnoreErrors {
				continue
			}
-			return common.IterateRes{}, logicerr.Wrap(fmt.Errorf("(%T) decoding address string %q: %v", s, string(k), err))
+			return common.IterateRes{}, logicerr.Wrap(fmt.Errorf("(%T) decoding address string %q: %v", s, k, err))
		}
		var err error
		if elem.ObjectData, err = s.compression.Decompress(elem.ObjectData); err != nil {


@@ -27,7 +27,7 @@ func (b *BlobStor) SetMode(ctx context.Context, m mode.Mode) error {
		}
	}
	if err != nil {
-		return fmt.Errorf("can't set blobstor mode (old=%s, new=%s): %w", b.mode, m, err)
+		return fmt.Errorf("set blobstor mode (old=%s, new=%s): %w", b.mode, m, err)
	}

	b.mode = m


@@ -52,7 +52,7 @@ func (b *BlobStor) Put(ctx context.Context, prm common.PutPrm) (common.PutRes, e
		// marshal object
		data, err := prm.Object.Marshal()
		if err != nil {
-			return common.PutRes{}, fmt.Errorf("could not marshal the object: %w", err)
+			return common.PutRes{}, fmt.Errorf("marshal the object: %w", err)
		}
		prm.RawData = data
	}


@@ -48,8 +48,8 @@ func (e *StorageEngine) ContainerSize(ctx context.Context, prm ContainerSizePrm)
 	defer elapsed("ContainerSize", e.metrics.AddMethodDuration)()
 
 	err = e.execIfNotBlocked(func() error {
-		res, err = e.containerSize(ctx, prm)
-		return err
+		res = e.containerSize(ctx, prm)
+		return nil
 	})
 
 	return
@@ -69,7 +69,7 @@ func ContainerSize(ctx context.Context, e *StorageEngine, id cid.ID) (uint64, er
 	return res.Size(), nil
 }
 
-func (e *StorageEngine) containerSize(ctx context.Context, prm ContainerSizePrm) (res ContainerSizeRes, err error) {
+func (e *StorageEngine) containerSize(ctx context.Context, prm ContainerSizePrm) (res ContainerSizeRes) {
 	e.iterateOverUnsortedShards(func(sh hashedShard) (stop bool) {
 		var csPrm shard.ContainerSizePrm
 		csPrm.SetContainerID(prm.cnr)
@@ -96,8 +96,8 @@ func (e *StorageEngine) ListContainers(ctx context.Context, _ ListContainersPrm)
 	defer elapsed("ListContainers", e.metrics.AddMethodDuration)()
 
 	err = e.execIfNotBlocked(func() error {
-		res, err = e.listContainers(ctx)
-		return err
+		res = e.listContainers(ctx)
+		return nil
 	})
 
 	return
@@ -115,7 +115,7 @@ func ListContainers(ctx context.Context, e *StorageEngine) ([]cid.ID, error) {
 	return res.Containers(), nil
 }
 
-func (e *StorageEngine) listContainers(ctx context.Context) (ListContainersRes, error) {
+func (e *StorageEngine) listContainers(ctx context.Context) ListContainersRes {
 	uniqueIDs := make(map[string]cid.ID)
 
 	e.iterateOverUnsortedShards(func(sh hashedShard) (stop bool) {
@@ -142,5 +142,5 @@ func (e *StorageEngine) listContainers(ctx context.Context) (ListContainersRes,
 	return ListContainersRes{
 		containers: result,
-	}, nil
+	}
 }
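These signature changes drop error results that could only ever be nil (the unparam lint finding), so callers stop threading a meaningless `err`. A minimal sketch of the before/after shape, with hypothetical names:

```
package main

import "fmt"

// Before: the error is always nil, so callers carry dead error handling.
func listIDsOld() ([]string, error) {
	return []string{"a", "b"}, nil
}

// After: the impossible error is dropped from the signature.
func listIDs() []string {
	return []string{"a", "b"}
}

func main() {
	ids, _ := listIDsOld() // the discarded error can never be non-nil
	fmt.Println(ids)
	fmt.Println(listIDs())
}
```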


@@ -95,7 +95,7 @@ func (e *StorageEngine) Init(ctx context.Context) error {
 	err := eg.Wait()
 	close(errCh)
 	if err != nil {
-		return fmt.Errorf("failed to initialize shards: %w", err)
+		return fmt.Errorf("initialize shards: %w", err)
 	}
 
 	for res := range errCh {
@@ -117,7 +117,7 @@ func (e *StorageEngine) Init(ctx context.Context) error {
 			continue
 		}
 
-		return fmt.Errorf("could not initialize shard %s: %w", res.id, res.err)
+		return fmt.Errorf("initialize shard %s: %w", res.id, res.err)
 	}
 }
@@ -167,7 +167,7 @@ func (e *StorageEngine) close(ctx context.Context, releasePools bool) error {
 		if err := sh.Close(ctx); err != nil {
 			e.log.Debug(ctx, logs.EngineCouldNotCloseShard,
 				zap.String("id", id),
-				zap.String("error", err.Error()),
+				zap.Error(err),
 			)
 		}
 	}
@@ -320,7 +320,7 @@ loop:
 	for _, newID := range shardsToAdd {
 		sh, err := e.createShard(ctx, rcfg.shards[newID])
 		if err != nil {
-			return fmt.Errorf("could not add new shard with '%s' metabase path: %w", newID, err)
+			return fmt.Errorf("add new shard with '%s' metabase path: %w", newID, err)
 		}
 
 		idStr := sh.ID().String()
@@ -331,13 +331,13 @@ loop:
 		}
 		if err != nil {
 			_ = sh.Close(ctx)
-			return fmt.Errorf("could not init %s shard: %w", idStr, err)
+			return fmt.Errorf("init %s shard: %w", idStr, err)
 		}
 
 		err = e.addShard(sh)
 		if err != nil {
 			_ = sh.Close(ctx)
-			return fmt.Errorf("could not add %s shard: %w", idStr, err)
+			return fmt.Errorf("add %s shard: %w", idStr, err)
 		}
 
 		e.log.Info(ctx, logs.EngineAddedNewShard, zap.String("id", idStr))
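The logging hunks replace hand-built error fields with zap's dedicated helper. A small sketch (the logger setup is illustrative): `zap.Error(err)` always emits the canonical "error" key, is nil-safe, and lets the encoder attach richer error detail than a preformatted string.

```
package main

import (
	"errors"

	"go.uber.org/zap"
)

func main() {
	logger, _ := zap.NewDevelopment()
	defer func() { _ = logger.Sync() }()

	err := errors.New("disk unplugged")
	// Preferred: zap.Error(err) instead of zap.String("error", err.Error()).
	logger.Warn("could not close shard",
		zap.String("id", "shard-0"),
		zap.Error(err),
	)
}
```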


@@ -24,9 +24,6 @@ type DeletePrm struct {
 	forceRemoval bool
 }
 
-// DeleteRes groups the resulting values of Delete operation.
-type DeleteRes struct{}
-
 // WithAddress is a Delete option to set the addresses of the objects to delete.
 //
 // Option is required.
@@ -51,7 +48,7 @@ func (p *DeletePrm) WithForceRemoval() {
 // NOTE: Marks any object to be deleted (despite any prohibitions
 // on operations with that object) if WithForceRemoval option has
 // been provided.
-func (e *StorageEngine) Delete(ctx context.Context, prm DeletePrm) (res DeleteRes, err error) {
+func (e *StorageEngine) Delete(ctx context.Context, prm DeletePrm) error {
 	ctx, span := tracing.StartSpanFromContext(ctx, "StorageEngine.Delete",
 		trace.WithAttributes(
 			attribute.String("address", prm.addr.EncodeToString()),
@@ -60,15 +57,12 @@ func (e *StorageEngine) Delete(ctx context.Context, prm DeletePrm) (res DeleteRe
 	defer span.End()
 	defer elapsed("Delete", e.metrics.AddMethodDuration)()
 
-	err = e.execIfNotBlocked(func() error {
-		res, err = e.delete(ctx, prm)
-		return err
+	return e.execIfNotBlocked(func() error {
+		return e.delete(ctx, prm)
 	})
-	return
 }
 
-func (e *StorageEngine) delete(ctx context.Context, prm DeletePrm) (DeleteRes, error) {
+func (e *StorageEngine) delete(ctx context.Context, prm DeletePrm) error {
 	var locked struct {
 		is bool
 	}
@@ -126,14 +120,14 @@ func (e *StorageEngine) delete(ctx context.Context, prm DeletePrm) (DeleteRes, e
 	})
 
 	if locked.is {
-		return DeleteRes{}, new(apistatus.ObjectLocked)
+		return new(apistatus.ObjectLocked)
 	}
 
 	if splitInfo != nil {
 		e.deleteChildren(ctx, prm.addr, prm.forceRemoval, splitInfo.SplitID())
 	}
 
-	return DeleteRes{}, nil
+	return nil
 }
 
 func (e *StorageEngine) deleteChildren(ctx context.Context, addr oid.Address, force bool, splitID *objectSDK.SplitID) {
@@ -154,7 +148,7 @@ func (e *StorageEngine) deleteChildren(ctx context.Context, addr oid.Address, fo
 		if err != nil {
 			e.log.Warn(ctx, logs.EngineErrorDuringSearchingForObjectChildren,
 				zap.Stringer("addr", addr),
-				zap.String("error", err.Error()),
+				zap.Error(err),
 				zap.String("trace_id", tracingPkg.GetTraceID(ctx)))
 			return false
 		}
@@ -166,7 +160,7 @@ func (e *StorageEngine) deleteChildren(ctx context.Context, addr oid.Address, fo
 		if err != nil {
 			e.log.Debug(ctx, logs.EngineCouldNotInhumeObjectInShard,
 				zap.Stringer("addr", addr),
-				zap.String("err", err.Error()),
+				zap.Error(err),
 				zap.String("trace_id", tracingPkg.GetTraceID(ctx)))
 			continue
 		}
@@ -196,7 +190,7 @@ func (e *StorageEngine) deleteChunks(
 		if err != nil {
 			e.log.Debug(ctx, logs.EngineCouldNotInhumeObjectInShard,
 				zap.Stringer("addr", addr),
-				zap.String("err", err.Error()),
+				zap.Error(err),
 				zap.String("trace_id", tracingPkg.GetTraceID(ctx)))
 			continue
 		}
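`Delete` previously returned an empty `DeleteRes` alongside the error; the struct carried no information, so the diff removes it and returns the error alone. A minimal sketch of the pattern with hypothetical names:

```
package main

import "errors"

// Before: an empty result struct that every caller must discard.
type deleteRes struct{}

func deleteOld(locked bool) (deleteRes, error) {
	if locked {
		return deleteRes{}, errors.New("object is locked")
	}
	return deleteRes{}, nil
}

// After: nothing to report besides success or failure.
func deleteNew(locked bool) error {
	if locked {
		return errors.New("object is locked")
	}
	return nil
}

func main() {
	_, _ = deleteOld(false) // callers must discard the empty result
	_ = deleteNew(false)    // callers handle only the error
}
```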


@@ -70,8 +70,7 @@ func TestDeleteBigObject(t *testing.T) {
 	deletePrm.WithForceRemoval()
 	deletePrm.WithAddress(addrParent)
 
-	_, err := e.Delete(context.Background(), deletePrm)
-	require.NoError(t, err)
+	require.NoError(t, e.Delete(context.Background(), deletePrm))
 
 	checkGetError[*apistatus.ObjectNotFound](t, e, addrParent, true)
 	checkGetError[*apistatus.ObjectNotFound](t, e, addrLink, true)
@@ -141,8 +140,7 @@ func TestDeleteBigObjectWithoutGC(t *testing.T) {
 	deletePrm.WithForceRemoval()
 	deletePrm.WithAddress(addrParent)
 
-	_, err := e.Delete(context.Background(), deletePrm)
-	require.NoError(t, err)
+	require.NoError(t, e.Delete(context.Background(), deletePrm))
 
 	checkGetError[*apistatus.ObjectNotFound](t, e, addrParent, true)
 	checkGetError[*apistatus.ObjectNotFound](t, e, addrLink, true)
@@ -153,7 +151,7 @@ func TestDeleteBigObjectWithoutGC(t *testing.T) {
 	// delete physical
 	var delPrm shard.DeletePrm
 	delPrm.SetAddresses(addrParent)
-	_, err = s1.Delete(context.Background(), delPrm)
+	_, err := s1.Delete(context.Background(), delPrm)
 	require.NoError(t, err)
 
 	delPrm.SetAddresses(addrLink)
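With the single error return, the tests fold the assign-then-assert pair into one line. An illustrative test under the same assumption (`deleteObject` is hypothetical):

```
package example

import (
	"errors"
	"testing"

	"github.com/stretchr/testify/require"
)

// deleteObject stands in for an API that now returns only an error.
func deleteObject(force bool) error {
	if !force {
		return errors.New("object is locked")
	}
	return nil
}

func TestDeleteObject(t *testing.T) {
	// Old form: _, err := e.Delete(...); require.NoError(t, err)
	// New form: the call nests directly into the assertion.
	require.NoError(t, deleteObject(true))
}
```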


@@ -140,7 +140,7 @@ func (e *StorageEngine) reportShardError(
 	if isLogical(err) {
 		e.log.Warn(ctx, msg,
 			zap.Stringer("shard_id", sh.ID()),
-			zap.String("error", err.Error()))
+			zap.Error(err))
 		return
 	}
@@ -151,7 +151,7 @@ func (e *StorageEngine) reportShardError(
 	e.log.Warn(ctx, msg, append([]zap.Field{
 		zap.Stringer("shard_id", sid),
 		zap.Uint32("error count", errCount),
-		zap.String("error", err.Error()),
+		zap.Error(err),
 	}, fields...)...)
 
 	if e.errorsThreshold == 0 || errCount < e.errorsThreshold {


@@ -17,10 +17,12 @@ import (
 	"github.com/stretchr/testify/require"
 )
 
-type epochState struct{}
+type epochState struct {
+	currEpoch uint64
+}
 
 func (s epochState) CurrentEpoch() uint64 {
-	return 0
+	return s.currEpoch
 }
 
 type testEngineWrapper struct {
@@ -87,12 +89,16 @@ func testGetDefaultShardOptions(t testing.TB) []shard.Option {
 			blobstor.WithLogger(test.NewLogger(t)),
 		),
 		shard.WithPiloramaOptions(pilorama.WithPath(filepath.Join(t.TempDir(), "pilorama"))),
-		shard.WithMetaBaseOptions(
-			meta.WithPath(filepath.Join(t.TempDir(), "metabase")),
-			meta.WithPermissions(0o700),
-			meta.WithEpochState(epochState{}),
-			meta.WithLogger(test.NewLogger(t)),
-		),
+		shard.WithMetaBaseOptions(testGetDefaultMetabaseOptions(t)...),
 	}
 }
+
+func testGetDefaultMetabaseOptions(t testing.TB) []meta.Option {
+	return []meta.Option{
+		meta.WithPath(filepath.Join(t.TempDir(), "metabase")),
+		meta.WithPermissions(0o700),
+		meta.WithEpochState(epochState{}),
+		meta.WithLogger(test.NewLogger(t)),
+	}
+}
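Two small test-fixture refactors here: the `epochState` stub gains a configurable `currEpoch` instead of a hard-coded 0, and the metabase option list moves into a reusable helper. A sketch of the helper pattern under assumed functional-option types (not the repo's API):

```
package example

import (
	"path/filepath"
	"testing"
)

// config and Option stand in for the metabase's functional options.
type config struct {
	path string
	perm uint32
}

type Option func(*config)

func WithPath(p string) Option        { return func(c *config) { c.path = p } }
func WithPermissions(m uint32) Option { return func(c *config) { c.perm = m } }

// defaultOptions mirrors testGetDefaultMetabaseOptions: build the default
// option set in one place so several tests can splice it in with opts...
func defaultOptions(t testing.TB) []Option {
	t.Helper()
	return []Option{
		WithPath(filepath.Join(t.TempDir(), "metabase")),
		WithPermissions(0o700),
	}
}

func apply(opts ...Option) *config {
	c := new(config)
	for _, o := range opts {
		o(c)
	}
	return c
}
```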


@@ -4,6 +4,7 @@ import (
 	"context"
 	"errors"
 	"fmt"
+	"slices"
 	"strings"
 	"sync"
 	"sync/atomic"
@@ -86,7 +87,6 @@ type EvacuateShardPrm struct {
 	ObjectsHandler func(context.Context, oid.Address, *objectSDK.Object) (bool, error)
 	TreeHandler    func(context.Context, cid.ID, string, pilorama.Forest) (bool, string, error)
 	IgnoreErrors   bool
-	Async          bool
 	Scope          EvacuateScope
 	RepOneOnly     bool
@@ -211,10 +211,10 @@ var errMustHaveTwoShards = errors.New("must have at least 1 spare shard")
 
 // Evacuate moves data from one shard to the others.
 // The shard being moved must be in read-only mode.
-func (e *StorageEngine) Evacuate(ctx context.Context, prm EvacuateShardPrm) (*EvacuateShardRes, error) {
+func (e *StorageEngine) Evacuate(ctx context.Context, prm EvacuateShardPrm) error {
 	select {
 	case <-ctx.Done():
-		return nil, ctx.Err()
+		return ctx.Err()
 	default:
 	}
@@ -226,7 +226,6 @@ func (e *StorageEngine) Evacuate(ctx context.Context, prm EvacuateShardPrm) (*Ev
 	ctx, span := tracing.StartSpanFromContext(ctx, "StorageEngine.Evacuate",
 		trace.WithAttributes(
 			attribute.StringSlice("shardIDs", shardIDs),
-			attribute.Bool("async", prm.Async),
 			attribute.Bool("ignoreErrors", prm.IgnoreErrors),
 			attribute.Stringer("scope", prm.Scope),
 		))
@@ -234,7 +233,7 @@ func (e *StorageEngine) Evacuate(ctx context.Context, prm EvacuateShardPrm) (*Ev
 	shards, err := e.getActualShards(shardIDs, prm)
 	if err != nil {
-		return nil, err
+		return err
 	}
 
 	shardsToEvacuate := make(map[string]*shard.Shard)
@@ -247,36 +246,24 @@ func (e *StorageEngine) Evacuate(ctx context.Context, prm EvacuateShardPrm) (*Ev
 	}
 
 	res := NewEvacuateShardRes()
-	ctx = ctxOrBackground(ctx, prm.Async)
-	eg, egCtx, err := e.evacuateLimiter.TryStart(ctx, shardIDs, res)
+	ctx = context.WithoutCancel(ctx)
+	eg, ctx, err := e.evacuateLimiter.TryStart(ctx, shardIDs, res)
 	if err != nil {
-		return nil, err
+		return err
 	}
 
 	var mtx sync.RWMutex
 	copyShards := func() []pooledShard {
 		mtx.RLock()
 		defer mtx.RUnlock()
-		t := make([]pooledShard, len(shards))
-		copy(t, shards)
+		t := slices.Clone(shards)
 		return t
 	}
 	eg.Go(func() error {
-		return e.evacuateShards(egCtx, shardIDs, prm, res, copyShards, shardsToEvacuate)
+		return e.evacuateShards(ctx, shardIDs, prm, res, copyShards, shardsToEvacuate)
 	})
 
-	if prm.Async {
-		return nil, nil
-	}
-
-	return res, eg.Wait()
-}
-
-func ctxOrBackground(ctx context.Context, background bool) context.Context {
-	if background {
-		return context.Background()
-	}
-	return ctx
+	return nil
 }
 
 func (e *StorageEngine) evacuateShards(ctx context.Context, shardIDs []string, prm EvacuateShardPrm, res *EvacuateShardRes,
@@ -286,7 +273,6 @@ func (e *StorageEngine) evacuateShards(ctx context.Context, shardIDs []string, p
 	ctx, span := tracing.StartSpanFromContext(ctx, "StorageEngine.evacuateShards",
 		trace.WithAttributes(
 			attribute.StringSlice("shardIDs", shardIDs),
-			attribute.Bool("async", prm.Async),
 			attribute.Bool("ignoreErrors", prm.IgnoreErrors),
 			attribute.Stringer("scope", prm.Scope),
 			attribute.Bool("repOneOnly", prm.RepOneOnly),
@@ -592,7 +578,7 @@ func (e *StorageEngine) evacuateTrees(ctx context.Context, sh *shard.Shard, tree
 func (e *StorageEngine) evacuateTreeToOtherNode(ctx context.Context, sh *shard.Shard, tree pilorama.ContainerIDTreeID, prm EvacuateShardPrm) (bool, string, error) {
 	if prm.TreeHandler == nil {
-		return false, "", fmt.Errorf("failed to evacuate tree '%s' for container %s from shard %s: local evacuation failed, but no remote evacuation available", tree.TreeID, tree.CID, sh.ID())
+		return false, "", fmt.Errorf("evacuate tree '%s' for container %s from shard %s: local evacuation failed, but no remote evacuation available", tree.TreeID, tree.CID, sh.ID())
 	}
 
 	return prm.TreeHandler(ctx, tree.CID, tree.TreeID, sh)
@@ -738,7 +724,7 @@ func (e *StorageEngine) getActualShards(shardIDs []string, prm EvacuateShardPrm)
 	shards := make([]pooledShard, 0, len(e.shards))
 	for id := range e.shards {
 		shards = append(shards, pooledShard{
-			hashedShard: hashedShard(e.shards[id]),
+			hashedShard: e.shards[id],
 			pool:        e.shardPools[id],
 		})
 	}
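Removing the `Async` flag means `Evacuate` now always starts the work detached and returns immediately, with progress tracked through the evacuation limiter rather than an inline result. Two stdlib pieces make this tidy, sketched below: `context.WithoutCancel` (Go 1.21+) keeps the caller's context values, such as trace IDs, while ignoring its cancellation, and `slices.Clone` replaces the manual make-plus-copy snapshot.

```
package main

import (
	"context"
	"fmt"
	"slices"
)

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	// Detached: bg inherits ctx's values but ignores its cancellation,
	// so background work keeps running after the caller returns.
	bg := context.WithoutCancel(ctx)
	cancel()
	fmt.Println(ctx.Err()) // context canceled
	fmt.Println(bg.Err())  // <nil>

	// Snapshot under a lock: slices.Clone replaces make+copy.
	shards := []string{"shard-1", "shard-2"}
	snapshot := slices.Clone(shards)
	fmt.Println(snapshot) // [shard-1 shard-2]
}
```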

Some files were not shown because too many files have changed in this diff.