Compare commits

...

68 commits

Author SHA1 Message Date
Evgenii Stratonikov
9426fd5046 WIP: pilorama: add custom batches
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-20 12:13:14 +03:00
Evgenii Stratonikov
34d20fd592 services/tree: allow to customize some parameters
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-19 11:03:13 +03:00
Evgenii Stratonikov
609dbe83db [#1559] engine: Do not count logical errors as storage ones
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-13 10:09:30 +03:00
Evgenii Stratonikov
f9eb15254e engine: remove default error threshold
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:47:39 +03:00
Evgenii Stratonikov
c5bd51e934 neofs-node: initialize storage before other services
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:47:39 +03:00
Evgenii Stratonikov
85aa30e89c local_object_storage: ignore pilorama errors
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:47:39 +03:00
Evgenii Stratonikov
61ae8b0a2c shard: ignore errors in UpdateID
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:47:39 +03:00
Evgenii Stratonikov
b193352d1e [#1548] morph/client: Execute close callback without switch mutex
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:45:57 +03:00
Evgenii Stratonikov
7b5b735fb2 [#1550] engine: Split errors on write- and meta- errors
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:45:57 +03:00
Evgenii Stratonikov
dafc21b052 [#1550] engine: Set default error threshold
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:45:57 +03:00
Leonard Lyubich
a6d1eefeff [#1549] shard: Always close metabase
Make `meta.DB` call `Close` on the `bbolt.DB` instance only if it is
non-nil. Call `meta.DB.Close` in `shard.Shard.Close` anyway.

Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
2022-07-08 13:45:57 +03:00
Leonard Lyubich
596d877a44 [#1549] engine: Disable shard on blobovnicza init failure
There is a need to support working without a shard if it has problems with
the blobovnicza tree.

Make `BlobStor.Init` return a new `ErrInitBlobovniczas` error. Remove a
shard from the storage engine's shard set if it returns this error from
the `Init` call. This way, if some of the shards (but not all) return this
error, the node is able to continue working without them.

Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
2022-07-08 13:45:57 +03:00
Leonard Lyubich
263497a92b [#1549] shard: Turn to ModeDegraded on metabase failure
Make `Shard` work in degraded mode if the metabase is unavailable at the
opening/init stage. Close the metabase in non-degraded mode only.

Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
2022-07-08 13:45:57 +03:00
Pavel Karpy
1684cd63fa [#1558] node: Do not put SHA256 hash as homomorphic
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
2022-07-08 13:45:16 +03:00
Pavel Karpy
33676ad832 [#1370] adm: Support changing NeoFS config value
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
2022-07-08 13:45:13 +03:00
Pavel Karpy
83dd963ab7 [#1367] adm: Support homomorphic hashing config in dump-config
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
2022-07-08 13:44:13 +03:00
Evgenii Stratonikov
e0e4f1f7ee engine: initialize shards in parallel
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:44:05 +03:00
Pavel Karpy
90b4820ee0 [#1365] morph: Do not return errors if config key is missing
Return default values instead of casting errors in `HomomorphicHashDisabled`
method.
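
A rough sketch of that "default instead of error" shape; `readBoolConfig` and `errConfigNotFound` are hypothetical stand-ins for illustration, not the actual morph client API:

```
package main

import (
	"errors"
	"fmt"
)

var errConfigNotFound = errors.New("config value not found")

type client struct{ cfg map[string]bool }

func (c *client) readBoolConfig(key string) (bool, error) {
	v, ok := c.cfg[key]
	if !ok {
		return false, errConfigNotFound
	}
	return v, nil
}

// HomomorphicHashDisabled treats a missing key as "use the default"
// rather than as an error, which is the behavior change described above.
func (c *client) HomomorphicHashDisabled() (bool, error) {
	const defaultValue = false // hashing stays enabled unless the contract says otherwise

	v, err := c.readBoolConfig("HomomorphicHashingDisabled")
	if err != nil {
		if errors.Is(err, errConfigNotFound) {
			return defaultValue, nil
		}
		return false, fmt.Errorf("reading netmap config: %w", err)
	}
	return v, nil
}

func main() {
	c := &client{cfg: map[string]bool{}}
	fmt.Println(c.HomomorphicHashDisabled()) // false <nil>: default value, no error
}
```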

Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
2022-07-08 13:43:46 +03:00
Pavel Karpy
2d9c805c81 [#1365] adm: Add homomorphic hash disabling option
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
2022-07-08 13:43:45 +03:00
Pavel Karpy
adcda361a7 [#1365] node: Calculate object homomorphic hash flexibly
Do not calculate and do not write homomorphic hash for containers that were
configured to store objects without hash.

Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
2022-07-08 13:41:35 +03:00
Pavel Karpy
e9c534b0a0 [#1365] ir: Check homomorphic hash flexibly in audit
Do not perform that check if it was turned off for the container being
checked.

Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
2022-07-08 13:39:19 +03:00
Pavel Karpy
455096ab53 [#1365] ir: Check homomorphic hash setting on ContainerPut
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
2022-07-08 13:36:22 +03:00
Pavel Karpy
fdc934a360 [#1365] morph: Add HomomorphicHashDisabled config getter
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
2022-07-08 13:34:40 +03:00
Pavel Karpy
ab749460cd [#1365] cli: Calculate homomorphic hash flexibly
Do not use homomorphic hash in storage group for containers that have
`homomorphic_hashing_disabled` set to `true`.

Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
2022-07-08 13:34:39 +03:00
Pavel Karpy
a455f4e3a7 [#1365] cli: Sync container with network config
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
2022-07-08 13:32:42 +03:00
Pavel Karpy
7308c333cc [#1365] cli: Add SyncContainerSettings func to internal client
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
2022-07-08 13:32:19 +03:00
Evgenii Stratonikov
9857a20c0d [#1505] pilorama: Provide timeout to bbolt.Open
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:28:23 +03:00
Evgenii Stratonikov
1fed255c5b [#1505] pilorama: Allow to customize database parameters
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:27:59 +03:00
Evgenii Stratonikov
2c8a87a469 [#1334] services/tree: Document *.proto files
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:27:57 +03:00
Evgenii Stratonikov
c8fce0d3e4 [#1333] neofs-cli: add control synchronize-tree command
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:27:54 +03:00
Evgenii Stratonikov
681df24547 [#1333] services/control: allow to synchronize local trees
Do not check that a node indeed belongs to the container, because the
synchronization will fail in this case anyway.

Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:27:49 +03:00
Evgenii Stratonikov
5af89b4bbe [#1333] neofs-node: initialize tree service before the control one
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:27:46 +03:00
Evgenii Stratonikov
982cb987a3 [#1333] engine: Increase error counter for pilorama errors
1. Modifying operations are not expected to fail, unless the shard is
   read-only.
2. `Get*` operations should increase error counter too, unless the
   error is `ErrTreeNotFound`.

Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:27:38 +03:00
Evgenii Stratonikov
5408efef82 [#1333] services/control: Return pilorama info in ListShards RPC
Do not return the backend type from the service for now, because the
in-memory backend is expected to vanish.

Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:27:17 +03:00
Evgenii Stratonikov
62b2769a66 [#1333] local_object_storage: Support ReadOnly mode in pilorama
The tricky part here is the engine itself: we stop iteration on
`ErrReadOnly` because it is better to synchronize the shard later than
to have partial trees stored in 2 shards.

Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:26:53 +03:00
Evgenii Stratonikov
199ee3a680 [#1481] pilorama: Fix TreeApply
The current implementation prevents invalid operations from becoming valid
at some later point (consider adding a child to a non-existent parent and
then adding the parent). This seems to diverge from the paper's algorithm
and complicates the implementation. Make it simpler.

Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:26:30 +03:00
Evgenii Stratonikov
73df95b8d3 [#1456] services/tree: wait some time before reconnecting after failure
If a node is down or failing for some reason, we can expect `Dial` to
fail. If we actively try to replicate and every `Dial` takes 2 seconds,
the replication-related channels quickly fill up. That affects the
latency of all other write operations.
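
A minimal sketch of one way to implement such a wait, assuming a per-address cooldown; the names are illustrative, not the actual tree service types:

```
package main

import (
	"fmt"
	"sync"
	"time"
)

// dialGate remembers when dialing an address last failed and refuses to
// retry within the cooldown window, so repeated slow Dials cannot keep
// the replication pipeline busy.
type dialGate struct {
	mu       sync.Mutex
	lastFail map[string]time.Time
	cooldown time.Duration
}

func newDialGate(cooldown time.Duration) *dialGate {
	return &dialGate{lastFail: make(map[string]time.Time), cooldown: cooldown}
}

// shouldTry reports whether a dial to addr may be attempted now.
// Unseen addresses have a zero timestamp, so they are always allowed.
func (g *dialGate) shouldTry(addr string) bool {
	g.mu.Lock()
	defer g.mu.Unlock()
	return time.Since(g.lastFail[addr]) >= g.cooldown
}

// markFailed records a failed dial, delaying subsequent attempts.
func (g *dialGate) markFailed(addr string) {
	g.mu.Lock()
	defer g.mu.Unlock()
	g.lastFail[addr] = time.Now()
}

func main() {
	g := newDialGate(5 * time.Second)
	fmt.Println(g.shouldTry("10.0.0.1:8080")) // true
	g.markFailed("10.0.0.1:8080")
	fmt.Println(g.shouldTry("10.0.0.1:8080")) // false until the cooldown passes
}
```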

Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:26:26 +03:00
Evgenii Stratonikov
96277c650f [#1445] services/tree: Cache the list of container nodes
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:26:24 +03:00
Evgenii Stratonikov
879c1de59d [#1446] services/tree: Use grpc.WithInsecure only for nodes without TLS
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:23:45 +03:00
Evgenii Stratonikov
6b02df7b8c [#1444] pilorama: Fix TreeMove in bbolt backend
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:23:23 +03:00
Evgenii Stratonikov
578fbdca57 [#1427] services/tree: Parallelize replicator
Before this commit, the replication channel filled up quickly under heavy
load. This led to continuously increasing latency for all write
operations. Now it looks better.
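
The shape of the fix is a plain worker pool draining the channel; a hedged sketch with illustrative types (the real replicator's task carries tree operations and node addresses):

```
package main

import (
	"fmt"
	"sync"
)

// task is a stand-in for one replication unit of work.
type task struct {
	addr string
	op   string
}

// startWorkers launches n goroutines that drain ch concurrently, so one
// slow peer no longer blocks every other replication in the queue.
func startWorkers(n int, ch <-chan task, send func(task)) *sync.WaitGroup {
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for t := range ch {
				send(t)
			}
		}()
	}
	return &wg
}

func main() {
	ch := make(chan task, 8)
	wg := startWorkers(4, ch, func(t task) { fmt.Println("replicated", t.op, "to", t.addr) })

	for i := 0; i < 8; i++ {
		ch <- task{addr: "node", op: fmt.Sprint(i)}
	}
	close(ch)
	wg.Wait()
}
```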

Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:23:19 +03:00
Evgenii Stratonikov
aec4f54a00 [#1444] pilorama: Optimize internal encoding/decoding
```
name                      old time/op    new time/op    delta
ApplySequential/bbolt-8     55.5µs ± 4%    55.5µs ± 3%     ~     (p=1.000 n=10+7)
ApplyReorderLast/bbolt-8     108µs ± 6%     112µs ± 8%     ~     (p=0.077 n=9+9)

name                      old alloc/op   new alloc/op   delta
ApplySequential/bbolt-8     28.8kB ± 3%    27.7kB ± 6%   -3.79%  (p=0.005 n=10+10)
ApplyReorderLast/bbolt-8    41.4kB ± 5%    38.9kB ± 5%   -6.19%  (p=0.001 n=10+9)

name                      old allocs/op  new allocs/op  delta
ApplySequential/bbolt-8        262 ± 2%       235 ±10%  -10.41%  (p=0.000 n=10+10)
ApplyReorderLast/bbolt-8       684 ± 6%       616 ± 7%  -10.04%  (p=0.000 n=10+9)
```

Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:22:00 +03:00
Evgenii Stratonikov
c9ddc8fbeb [#1446] services/tree: Cache connections to the container nodes
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:20:33 +03:00
Evgenii Stratonikov
06f2681178 [#1442] pilorama: Generate timestamp based on node position in the container
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:19:50 +03:00
Evgenii Stratonikov
55a9a39f9e [#1442] services/tree: Fix log message for failed Apply
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:19:45 +03:00
Evgenii Stratonikov
d244b2658a [#1401] services/tree: Marshal public key once
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:19:31 +03:00
Evgenii Stratonikov
86c6c24b86 [#1401] services/tree: Retransmit queries to container nodes
Also fix a bug with the replicator using the multiaddress instead of
the <host>:<port> format expected by the gRPC library.

Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:19:21 +03:00
Evgenii Stratonikov
fa57a8be44 [#1431] pilorama: Use Batch for write transactions
Helps a lot in case of concurrent request flow.

```
name                      old time/op    new time/op    delta
ApplySequential/bbolt-8     78.0µs ± 9%    59.8µs ± 4%  -23.39%  (p=0.000 n=10+9)
ApplyReorderLast/bbolt-8     143µs ± 5%     113µs ±15%  -21.06%  (p=0.000 n=10+10)

name                      old alloc/op   new alloc/op   delta
ApplySequential/bbolt-8     56.9kB ± 8%    28.9kB ± 3%  -49.22%  (p=0.000 n=10+10)
ApplyReorderLast/bbolt-8    87.3kB ± 3%    40.9kB ±10%  -53.16%  (p=0.000 n=10+10)

name                      old allocs/op  new allocs/op  delta
ApplySequential/bbolt-8        224 ±11%       262 ± 5%  +16.93%  (p=0.000 n=9+10)
ApplyReorderLast/bbolt-8       518 ± 4%       674 ±11%  +30.09%  (p=0.000 n=10+10)
```
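
For context, `bbolt` exposes this through `DB.Batch`, which coalesces write transactions issued by concurrent goroutines into a single commit (one fsync for many callers). A minimal usage sketch; the bucket and key names here are illustrative, not the pilorama schema:

```
package main

import (
	"log"

	bolt "go.etcd.io/bbolt"
)

// applyOp is safe to call from many goroutines: bbolt merges concurrent
// Batch calls into one underlying read-write transaction.
func applyOp(db *bolt.DB, key, value []byte) error {
	return db.Batch(func(tx *bolt.Tx) error {
		b, err := tx.CreateBucketIfNotExists([]byte("log"))
		if err != nil {
			return err
		}
		return b.Put(key, value)
	})
}

func main() {
	db, err := bolt.Open("pilorama.db", 0660, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if err := applyOp(db, []byte("op-1"), []byte("payload")); err != nil {
		log.Fatal(err)
	}
}
```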

Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:18:55 +03:00
Evgenii Stratonikov
d6d7e35454 [#1431] pilorama: Cache attributes in the index
Currently, to find a node by path we iterate over all the children on
each level. This is far from optimal and scales badly with the number of
nodes on a single level. Thus we introduce "indexed attributes", for
which additional information is stored and which can be used in
`*ByPath` operations. Currently this set only includes the `FileName`
attribute, but this may change in the future.
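
A toy model of what such an index buys, assuming integer node IDs; the map stands in for the extra bbolt bucket, so the types and layout are illustrative, not the actual schema:

```
package main

import "fmt"

type nodeID = uint64

// fileNameKey identifies a child by its parent and its FileName value,
// the one attribute that is currently indexed.
type fileNameKey struct {
	parent nodeID
	name   string
}

// fileNameIndex maps (parent, FileName) directly to the child node, so a
// path component is resolved with one lookup instead of a scan over all
// children of the current node.
type fileNameIndex map[fileNameKey]nodeID

func (idx fileNameIndex) resolvePath(root nodeID, path []string) (nodeID, bool) {
	cur := root
	for _, component := range path {
		next, ok := idx[fileNameKey{parent: cur, name: component}]
		if !ok {
			return 0, false
		}
		cur = next
	}
	return cur, true
}

func main() {
	idx := fileNameIndex{
		{parent: 0, name: "dir"}:      1,
		{parent: 1, name: "file.txt"}: 2,
	}
	fmt.Println(idx.resolvePath(0, []string{"dir", "file.txt"})) // 2 true
}
```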

Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:18:23 +03:00
Evgenii Stratonikov
241d4d6810 [#1431] engine: Add benchmark for Select vs TreeGetByPath
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:18:15 +03:00
Evgenii Stratonikov
b3ca9ce775 [#1329] services/tree: Synchronize from the last stored height
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:18:09 +03:00
Evgenii Stratonikov
35fa445195 [#1329] pilorama: Allow to benchmark all tree backends
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:18:01 +03:00
Evgenii Stratonikov
9cbd4271f1 [#1329] services/tree: Implement GetOpLog RPC
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:17:22 +03:00
Evgenii Stratonikov
b19de6116f [#1426] services/tree: Do not replicate to a local node
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:17:15 +03:00
Evgenii Stratonikov
3cc67db083 [#1419] pilorama: Create new nodes in path if needed
Consider a node `{FileName: "dir", Attribute: "xxx"}`. If we add
a new node by path `["dir", "file.txt"]`, a new intermediate node
with a single attribute is created.

`GetByPath` now also considers only nodes with a single attribute while building a path.

Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:16:16 +03:00
Evgenii Stratonikov
730f14e4eb [#1406] pilorama: Return parent from TreeGetMeta
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:15:52 +03:00
Denis Kirillov
7af3424bad [#1404] services/tree: fix nodeId in GetSubTree
Signed-off-by: Denis Kirillov <denis@nspcc.ru>
2022-07-08 13:15:48 +03:00
Evgenii Stratonikov
427f63e359 [#1328] services/tree: Fix grpc import path
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:15:43 +03:00
Evgenii Stratonikov
035963d147 [#1328] services/tree: Implement access control
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:15:41 +03:00
Evgenii Stratonikov
f6589331b6 [#1328] services/tree: Fix proto field numbers
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:13:53 +03:00
Evgenii Stratonikov
319fd212dc [#1342] neofs-node: Use the default endpoint for tree service
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:13:48 +03:00
Evgenii Stratonikov
34cab7be82 [#1344] pilorama: Document errors for Get* methods
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:13:39 +03:00
Evgenii Stratonikov
59bd5ac973 [#1344] engine: Log errors in Tree* operations
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:13:32 +03:00
Evgenii Stratonikov
e2c88a9983 [#1344] pilorama: Use require.ErrorIs in tests
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 13:13:27 +03:00
Evgenii Stratonikov
dd7c4385c6 [#1326] services/tree: Implement GetSubTree RPC
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 12:50:13 +03:00
Evgenii Stratonikov
375c30e687 [#1324] services/tree: Implement Object Tree Service
Object Tree Service allows changing trees associated with
a container at runtime.

Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 12:50:12 +03:00
Evgenii Stratonikov
4a65eb7e5f [#1324] engine: Implement Forest interface for storage engine
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 12:47:40 +03:00
Evgenii Stratonikov
cf73feb3f8 [#1324] local_object_storage: Implement tree service backend
In this commit we implement the algorithm for CRDT trees from
https://martin.klepmann.com/papers/move-op.pdf

Each tree is identified by the ID of the container it belongs to
and the tree name itself. Essentially, a tree is a sequence of operations
that should be applied in chronological order to get a usual tree
representation.

There are two backends for now: a bbolt database and an in-memory one.
The in-memory backend is here for debugging and will eventually act
as a memory cache for the on-disk database.
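
A minimal sketch of the log-replay idea, assuming integer node IDs and a single parent link per node; the cycle checks and the undo/redo handling for out-of-order arrivals described in the paper are omitted, and the names are illustrative rather than the actual pilorama types:

```
package main

import (
	"fmt"
	"sort"
)

// Move is one operation of the log: at logical time Time, Child is
// re-parented under Parent. A totally ordered timestamp is what lets
// every replica converge to the same tree.
type Move struct {
	Time   uint64
	Parent uint64
	Child  uint64
}

// replay applies the operation log in timestamp order and returns the
// resulting child->parent mapping.
func replay(log []Move) map[uint64]uint64 {
	sort.Slice(log, func(i, j int) bool { return log[i].Time < log[j].Time })

	parent := make(map[uint64]uint64)
	for _, op := range log {
		// A later move simply wins: this is how concurrent moves of the
		// same node are resolved identically on all replicas.
		parent[op.Child] = op.Parent
	}
	return parent
}

func main() {
	log := []Move{{Time: 2, Parent: 0, Child: 7}, {Time: 1, Parent: 5, Child: 7}}
	fmt.Println(replay(log)) // map[7:0] - the op with Time=2 wins
}
```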

Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2022-07-08 12:47:40 +03:00
107 changed files with 5545 additions and 229 deletions

View file

@@ -70,6 +70,8 @@ credentials: # passwords for consensus node / alphabet wallets
 #### Network maintenance
+- `set-config` Add/update configuration values in the Netmap contract.
 - `force-new-epoch` increments NeoFS epoch number and executes new epoch
   handlers in NeoFS nodes.

View file

@@ -25,6 +25,7 @@ type configTemplate struct {
 	ContainerAliasFee int
 	WithdrawFee       int
 	Glagolitics       []string
+	HomomorphicHashDisabled bool
 }

 const configTxtTemplate = `rpc-endpoint: {{ .Endpoint}}
@@ -33,6 +34,7 @@ network:
   max_object_size: {{ .MaxObjectSize}}
   epoch_duration: {{ .EpochDuration}}
   basic_income_rate: {{ .BasicIncomeRate}}
+  homomorphic_hash_disabled: {{ .HomomorphicHashDisabled}}
   fee:
     audit: {{ .AuditFee}}
     candidate: {{ .CandidateFee}}
@@ -110,6 +112,7 @@ func generateConfigExample(appDir string, credSize int) (string, error) {
 		MaxObjectSize:   67108864,    // 64 MiB
 		EpochDuration:   240,         // 1 hour with 15s per block
 		BasicIncomeRate: 1_0000_0000, // 0.0001 GAS per GiB (Fixed12)
+		HomomorphicHashDisabled: false, // object homomorphic hash is enabled
 		AuditFee:     1_0000,        // 0.00000001 GAS per audit (Fixed12)
 		CandidateFee: 100_0000_0000, // 100.0 GAS (Fixed8)
 		ContainerFee: 1000,          // 0.000000001 * 7 GAS per container (Fixed12)

View file

@@ -6,6 +6,8 @@ import (
 	"encoding/hex"
 	"errors"
 	"fmt"
+	"strconv"
+	"strings"
 	"text/tabwriter"

 	"github.com/nspcc-dev/neo-go/pkg/io"
@@ -184,7 +186,7 @@ func dumpNetworkConfig(cmd *cobra.Command, _ []string) error {
 		v, err := tuple[1].TryBytes()
 		if err != nil {
-			return errors.New("invalid config value from netmap contract")
+			return invalidConfigValueErr(k)
 		}

 		switch string(k) {
@@ -199,6 +201,13 @@ func dumpNetworkConfig(cmd *cobra.Command, _ []string) error {
 			_, _ = tw.Write([]byte(fmt.Sprintf("%s:\t%d (int)\n", k, n)))
 		case netmapEigenTrustAlphaKey:
 			_, _ = tw.Write([]byte(fmt.Sprintf("%s:\t%s (str)\n", k, v)))
+		case netmapHomomorphicHashDisabledKey:
+			vBool, err := tuple[1].TryBool()
+			if err != nil {
+				return invalidConfigValueErr(k)
+			}
+
+			_, _ = tw.Write([]byte(fmt.Sprintf("%s:\t%t (bool)\n", k, vBool)))
 		default:
 			_, _ = tw.Write([]byte(fmt.Sprintf("%s:\t%s (hex)\n", k, hex.EncodeToString(v))))
 		}
@@ -209,3 +218,93 @@ func dumpNetworkConfig(cmd *cobra.Command, _ []string) error {

 	return nil
 }
+
+func invalidConfigValueErr(key []byte) error {
+	return fmt.Errorf("invalid %s config value from netmap contract", key)
+}
+
+func setConfigCmd(cmd *cobra.Command, args []string) error {
+	if len(args) == 0 {
+		return errors.New("empty config pairs")
+	}
+
+	wCtx, err := newInitializeContext(cmd, viper.GetViper())
+	if err != nil {
+		return fmt.Errorf("can't initialize context: %w", err)
+	}
+
+	cs, err := wCtx.Client.GetContractStateByID(1)
+	if err != nil {
+		return fmt.Errorf("can't get NNS contract info: %w", err)
+	}
+
+	nmHash, err := nnsResolveHash(wCtx.Client, cs.Hash, netmapContract+".neofs")
+	if err != nil {
+		return fmt.Errorf("can't get netmap contract hash: %w", err)
+	}
+
+	bw := io.NewBufBinWriter()
+	for _, arg := range args {
+		k, v, err := parseConfigPair(arg)
+		if err != nil {
+			return err
+		}
+
+		// In NeoFS this is done via the Notary contract. Here, however, we can form
+		// the transaction locally. The first `nil` argument is required only for a
+		// notary-disabled environment, which is not supported by this command.
+		emit.AppCall(bw.BinWriter, nmHash, "setConfig", callflag.All, nil, k, v)
+		if bw.Err != nil {
+			return fmt.Errorf("can't form raw transaction: %w", bw.Err)
+		}
+	}
+
+	err = wCtx.sendCommitteeTx(bw.Bytes(), -1, true)
+	if err != nil {
+		return err
+	}
+
+	return wCtx.awaitTx()
+}
+
+func parseConfigPair(kvStr string) (key string, val interface{}, err error) {
+	kv := strings.SplitN(kvStr, "=", 2)
+	if len(kv) != 2 {
+		return "", nil, fmt.Errorf("invalid parameter format: must be 'key=val', got: %s", kvStr)
+	}
+
+	key = kv[0]
+	valRaw := kv[1]

+	switch key {
+	case netmapAuditFeeKey, netmapBasicIncomeRateKey,
+		netmapContainerFeeKey, netmapContainerAliasFeeKey,
+		netmapEigenTrustIterationsKey,
+		netmapEpochKey, netmapInnerRingCandidateFeeKey,
+		netmapMaxObjectSizeKey, netmapWithdrawFeeKey:
+		val, err = strconv.ParseInt(valRaw, 10, 64)
+		if err != nil {
+			err = fmt.Errorf("could not parse %s's value '%s' as int: %w", key, valRaw, err)
+		}
+	case netmapEigenTrustAlphaKey:
+		// just check that it can be parsed correctly
+		_, err = strconv.ParseFloat(kv[1], 64)
+		if err != nil {
+			err = fmt.Errorf("could not parse %s's value '%s' as float: %w", key, valRaw, err)
+		}
+
+		val = valRaw
+	case netmapHomomorphicHashDisabledKey:
+		val, err = strconv.ParseBool(valRaw)
+		if err != nil {
+			err = fmt.Errorf("could not parse %s's value '%s' as bool: %w", key, valRaw, err)
+		}
+	default:
+		// print some warning that the user is setting an unknown config key?
+		val = valRaw
+	}
+
+	return
+}
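
Given the command registration shown further below, a typical invocation presumably looks something like `neofs-adm morph set-config -r <endpoint> --alphabet-wallets <dir> HomomorphicHashingDisabled=true`, with each positional `key=val` argument going through `parseConfigPair` above.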

View file

@@ -57,6 +57,7 @@ const (
 	netmapBasicIncomeRateKey       = "BasicIncomeRate"
 	netmapInnerRingCandidateFeeKey = "InnerRingCandidateFee"
 	netmapWithdrawFeeKey           = "WithdrawFee"
+	netmapHomomorphicHashDisabledKey = "HomomorphicHashingDisabled"

 	defaultEigenTrustIterations = 4
 	defaultEigenTrustAlpha      = "0.1"
@@ -544,6 +545,7 @@ func (c *initializeContext) getContractDeployData(ctrName string, keysParam []in
 		netmapBasicIncomeRateKey, viper.GetInt64(incomeRateInitFlag),
 		netmapInnerRingCandidateFeeKey, viper.GetInt64(candidateFeeInitFlag),
 		netmapWithdrawFeeKey, viper.GetInt64(withdrawFeeInitFlag),
+		netmapHomomorphicHashDisabledKey, viper.GetBool(homomorphicHashDisabledInitFlag),
 	}

 	items = append(items,
 		c.Contracts[balanceContract].Hash,

View file

@@ -28,6 +28,8 @@ const (
 	containerAliasFeeCLIFlag = "container-alias-fee"
 	candidateFeeInitFlag     = "network.fee.candidate"
 	candidateFeeCLIFlag      = "candidate-fee"
+	homomorphicHashDisabledInitFlag = "network.homomorphic_hash_disabled"
+	homomorphicHashDisabledCLIFlag  = "homomorphic-disabled"
 	withdrawFeeInitFlag      = "network.fee.withdraw"
 	withdrawFeeCLIFlag       = "withdraw-fee"
 	containerDumpFlag        = "dump"
@@ -66,6 +68,7 @@ var (
 		_ = viper.BindPFlag(epochDurationInitFlag, cmd.Flags().Lookup(epochDurationCLIFlag))
 		_ = viper.BindPFlag(maxObjectSizeInitFlag, cmd.Flags().Lookup(maxObjectSizeCLIFlag))
 		_ = viper.BindPFlag(incomeRateInitFlag, cmd.Flags().Lookup(incomeRateCLIFlag))
+		_ = viper.BindPFlag(homomorphicHashDisabledInitFlag, cmd.Flags().Lookup(homomorphicHashDisabledCLIFlag))
 		_ = viper.BindPFlag(auditFeeInitFlag, cmd.Flags().Lookup(auditFeeCLIFlag))
 		_ = viper.BindPFlag(candidateFeeInitFlag, cmd.Flags().Lookup(candidateFeeCLIFlag))
 		_ = viper.BindPFlag(containerFeeInitFlag, cmd.Flags().Lookup(containerFeeCLIFlag))
@@ -122,6 +125,17 @@ var (
 		RunE: removeNodesCmd,
 	}

+	setConfig = &cobra.Command{
+		Use:                   "set-config key1=val1 [key2=val2 ...]",
+		DisableFlagsInUseLine: true,
+		Short:                 "Add/update global config value in the NeoFS network",
+		PreRun: func(cmd *cobra.Command, _ []string) {
+			_ = viper.BindPFlag(alphabetWalletsFlag, cmd.Flags().Lookup(alphabetWalletsFlag))
+			_ = viper.BindPFlag(endpointFlag, cmd.Flags().Lookup(endpointFlag))
+		},
+		RunE: setConfigCmd,
+	}
+
 	setPolicy = &cobra.Command{
 		Use:                   "set-policy [ExecFeeFactor=<n1>] [StoragePrice=<n2>] [FeePerByte=<n3>]",
 		DisableFlagsInUseLine: true,
@@ -210,6 +224,7 @@ func init() {
 	initCmd.Flags().String(contractsInitFlag, "", "path to archive with compiled NeoFS contracts (default fetched from latest github release)")
 	initCmd.Flags().Uint(epochDurationCLIFlag, 240, "amount of side chain blocks in one NeoFS epoch")
 	initCmd.Flags().Uint(maxObjectSizeCLIFlag, 67108864, "max single object size in bytes")
+	initCmd.Flags().Bool(homomorphicHashDisabledCLIFlag, false, "disable object homomorphic hashing")
 	// Defaults are taken from neo-preodolenie.
 	initCmd.Flags().Uint64(containerFeeCLIFlag, 1000, "container registration fee")
 	initCmd.Flags().Uint64(containerAliasFeeCLIFlag, 500, "container alias fee")
@@ -241,6 +256,10 @@ func init() {
 	RootCmd.AddCommand(dumpNetworkConfigCmd)
 	dumpNetworkConfigCmd.Flags().StringP(endpointFlag, "r", "", "N3 RPC node endpoint")

+	RootCmd.AddCommand(setConfig)
+	setConfig.Flags().String(alphabetWalletsFlag, "", "path to alphabet wallets dir")
+	setConfig.Flags().StringP(endpointFlag, "r", "", "N3 RPC node endpoint")
+
 	RootCmd.AddCommand(dumpBalancesCmd)
 	dumpBalancesCmd.Flags().StringP(endpointFlag, "r", "", "N3 RPC node endpoint")
 	dumpBalancesCmd.Flags().BoolP(dumpBalancesStorageFlag, "s", false, "dump balances of storage nodes from the current netmap")

View file

@@ -9,7 +9,7 @@ import (
 	"github.com/nspcc-dev/neofs-sdk-go/accounting"
 	"github.com/nspcc-dev/neofs-sdk-go/client"
-	"github.com/nspcc-dev/neofs-sdk-go/container"
+	containerSDK "github.com/nspcc-dev/neofs-sdk-go/container"
 	cid "github.com/nspcc-dev/neofs-sdk-go/container/id"
 	"github.com/nspcc-dev/neofs-sdk-go/eacl"
 	"github.com/nspcc-dev/neofs-sdk-go/netmap"
@@ -123,7 +123,7 @@ type GetContainerRes struct {
 }

 // Container returns structured of the requested container.
-func (x GetContainerRes) Container() container.Container {
+func (x GetContainerRes) Container() containerSDK.Container {
 	return x.cliRes.Container()
 }
@@ -833,3 +833,37 @@ func PayloadRange(prm PayloadRangePrm) (*PayloadRangeRes, error) {
 	return new(PayloadRangeRes), nil
 }
+
+// SyncContainerPrm groups parameters of SyncContainerSettings operation.
+type SyncContainerPrm struct {
+	commonPrm
+	c *containerSDK.Container
+}
+
+// SetContainer sets a container that is required to be synced.
+func (s *SyncContainerPrm) SetContainer(c *containerSDK.Container) {
+	s.c = c
+}
+
+// SyncContainerRes groups resulting values of SyncContainerSettings
+// operation.
+type SyncContainerRes struct{}
+
+// SyncContainerSettings reads global network config from NeoFS and
+// syncs container settings with it.
+//
+// Interrupts on any writer error.
+//
+// Panics if a container passed as a parameter is nil.
+func SyncContainerSettings(prm SyncContainerPrm) (*SyncContainerRes, error) {
+	if prm.c == nil {
+		panic("sync container settings with the network: nil container")
+	}
+
+	err := client.SyncContainerWithNetwork(context.Background(), prm.c, prm.cli)
+	if err != nil {
+		return nil, err
+	}
+
+	return new(SyncContainerRes), nil
+}

View file

@@ -80,6 +80,13 @@ It will be stored in sidechain when inner ring will accepts it.`,
 		cli := internalclient.GetSDKClientByFlag(cmd, key, commonflags.RPC)

+		var syncContainerPrm internalclient.SyncContainerPrm
+		syncContainerPrm.SetClient(cli)
+		syncContainerPrm.SetContainer(&cnr)
+
+		_, err = internalclient.SyncContainerSettings(syncContainerPrm)
+		common.ExitOnErr(cmd, "syncing container's settings rpc error: %w", err)
+
 		var putPrm internalclient.PutContainerPrm
 		putPrm.SetClient(cli)
 		putPrm.SetContainer(cnr)
@@ -89,7 +96,7 @@ It will be stored in sidechain when inner ring will accepts it.`,
 		}

 		res, err := internalclient.PutContainer(putPrm)
-		common.ExitOnErr(cmd, "rpc error: %w", err)
+		common.ExitOnErr(cmd, "put container rpc error: %w", err)

 		id := res.ID()

View file

@@ -32,6 +32,7 @@ func init() {
 		dropObjectsCmd,
 		snapshotCmd,
 		shardsCmd,
+		synchronizeTreeCmd,
 	)

 	initControlHealthCheckCmd()
@@ -39,4 +40,5 @@ func init() {
 	initControlDropObjectsCmd()
 	initControlSnapshotCmd()
 	initControlShardsCmd()
+	initControlSynchronizeTreeCmd()
 }

View file

@@ -93,6 +93,7 @@ func prettyPrintShards(cmd *cobra.Command, ii []*control.ShardInfo) {
 			pathPrinter("Metabase", i.GetMetabasePath())+
 			pathPrinter("Blobstor", i.GetBlobstorPath())+
 			pathPrinter("Write-cache", i.GetWritecachePath())+
+			pathPrinter("Pilorama", i.GetPiloramaPath())+
 			fmt.Sprintf("Error count: %d\n", i.GetErrorCount()),
 			base58.Encode(i.Shard_ID),
 			shardModeToString(i.GetMode()),

View file

@@ -0,0 +1,79 @@
package control

import (
	"crypto/sha256"
	"errors"

	rawclient "github.com/nspcc-dev/neofs-api-go/v2/rpc/client"
	"github.com/nspcc-dev/neofs-node/cmd/neofs-cli/internal/common"
	"github.com/nspcc-dev/neofs-node/cmd/neofs-cli/internal/commonflags"
	"github.com/nspcc-dev/neofs-node/cmd/neofs-cli/internal/key"
	"github.com/nspcc-dev/neofs-node/pkg/services/control"
	controlSvc "github.com/nspcc-dev/neofs-node/pkg/services/control/server"
	cid "github.com/nspcc-dev/neofs-sdk-go/container/id"
	"github.com/spf13/cobra"
)

const (
	synchronizeTreeIDFlag     = "tree-id"
	synchronizeTreeHeightFlag = "height"
)

var synchronizeTreeCmd = &cobra.Command{
	Use:   "synchronize-tree",
	Short: "Synchronize log for the tree",
	Long:  "Synchronize log for the tree in an object tree service.",
	Run:   synchronizeTree,
}

func initControlSynchronizeTreeCmd() {
	commonflags.InitWithoutRPC(synchronizeTreeCmd)

	flags := synchronizeTreeCmd.Flags()
	flags.String(controlRPC, controlRPCDefault, controlRPCUsage)
	flags.String("cid", "", "Container ID")
	flags.String(synchronizeTreeIDFlag, "", "Tree ID")
	flags.Uint64(synchronizeTreeHeightFlag, 0, "Starting height")
}

func synchronizeTree(cmd *cobra.Command, _ []string) {
	pk := key.Get(cmd)

	var cnr cid.ID
	cidStr, _ := cmd.Flags().GetString("cid")
	common.ExitOnErr(cmd, "can't decode container ID: %w", cnr.DecodeString(cidStr))

	treeID, _ := cmd.Flags().GetString("tree-id")
	if treeID == "" {
		common.ExitOnErr(cmd, "", errors.New("tree ID must not be empty"))
	}

	height, _ := cmd.Flags().GetUint64("height")

	rawCID := make([]byte, sha256.Size)
	cnr.Encode(rawCID)

	req := &control.SynchronizeTreeRequest{
		Body: &control.SynchronizeTreeRequest_Body{
			ContainerId: rawCID,
			TreeId:      treeID,
			Height:      height,
		},
	}

	err := controlSvc.SignMessage(pk, req)
	common.ExitOnErr(cmd, "could not sign request: %w", err)

	cli := getClient(cmd, pk)

	var resp *control.SynchronizeTreeResponse
	err = cli.ExecRaw(func(client *rawclient.Client) error {
		resp, err = control.SynchronizeTree(client, req)
		return err
	})
	common.ExitOnErr(cmd, "rpc error: %w", err)

	verifyResponse(cmd, resp.GetSignature(), resp.GetBody())

	cmd.Println("Tree has been synchronized successfully.")
}

View file

@@ -13,6 +13,7 @@ import (
 	objectCli "github.com/nspcc-dev/neofs-node/cmd/neofs-cli/modules/object"
 	sessionCli "github.com/nspcc-dev/neofs-node/cmd/neofs-cli/modules/session"
 	"github.com/nspcc-dev/neofs-node/pkg/services/object_manager/storagegroup"
+	"github.com/nspcc-dev/neofs-sdk-go/container"
 	cid "github.com/nspcc-dev/neofs-sdk-go/container/id"
 	"github.com/nspcc-dev/neofs-sdk-go/object"
 	oid "github.com/nspcc-dev/neofs-sdk-go/object/id"
@@ -71,8 +72,16 @@ func putSG(cmd *cobra.Command, _ []string) {
 	var (
 		headPrm internalclient.HeadObjectPrm
 		putPrm  internalclient.PutObjectPrm
+		getCnrPrm internalclient.GetContainerPrm
 	)

+	cli := internalclient.GetSDKClientByFlag(cmd, pk, commonflags.RPC)
+	getCnrPrm.SetClient(cli)
+	getCnrPrm.SetContainer(cnr)
+
+	resGetCnr, err := internalclient.GetContainer(getCnrPrm)
+	common.ExitOnErr(cmd, "get container RPC call: %w", err)
+
 	sessionCli.Prepare(cmd, cnr, nil, pk, &putPrm)
 	objectCli.Prepare(cmd, &headPrm, &putPrm)

@@ -83,11 +92,9 @@ func putSG(cmd *cobra.Command, _ []string) {
 		key:     pk,
 		ownerID: &ownerID,
 		prm:     headPrm,
-	}, cnr, members)
+	}, cnr, members, !container.IsHomomorphicHashingDisabled(resGetCnr.Container()))
 	common.ExitOnErr(cmd, "could not collect storage group members: %w", err)

-	cli := internalclient.GetSDKClientByFlag(cmd, pk, commonflags.RPC)
-
 	var netInfoPrm internalclient.NetworkInfoPrm
 	netInfoPrm.SetClient(cli)

View file

@@ -24,6 +24,7 @@ import (
 	"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/blobstor"
 	"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/engine"
 	meta "github.com/nspcc-dev/neofs-node/pkg/local_object_storage/metabase"
+	"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/pilorama"
 	"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/shard"
 	"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/writecache"
 	"github.com/nspcc-dev/neofs-node/pkg/metrics"
@@ -40,6 +41,7 @@ import (
 	tsourse "github.com/nspcc-dev/neofs-node/pkg/services/object_manager/tombstone/source"
 	trustcontroller "github.com/nspcc-dev/neofs-node/pkg/services/reputation/local/controller"
 	truststorage "github.com/nspcc-dev/neofs-node/pkg/services/reputation/local/storage"
+	"github.com/nspcc-dev/neofs-node/pkg/services/tree"
 	"github.com/nspcc-dev/neofs-node/pkg/services/util/response"
 	"github.com/nspcc-dev/neofs-node/pkg/util"
 	"github.com/nspcc-dev/neofs-node/pkg/util/logger"
@@ -111,6 +113,8 @@ type cfg struct {
 	cfgControlService cfgControlService

+	treeService *tree.Service
+
 	healthStatus *atomic.Int32

 	closers []func()
@@ -418,6 +422,19 @@ func initShardOptions(c *cfg) {
 		metabaseCfg := sc.Metabase()
 		gcCfg := sc.GC()

+		piloramaCfg := sc.Pilorama()
+		piloramaPath := piloramaCfg.Path()
+		if piloramaPath == "" {
+			piloramaPath = filepath.Join(blobStorCfg.Path(), "pilorama.db")
+		}
+
+		piloramaOpts := []pilorama.Option{
+			pilorama.WithPath(piloramaPath),
+			pilorama.WithPerm(piloramaCfg.Perm()),
+			pilorama.WithNoSync(piloramaCfg.NoSync()),
+			pilorama.WithMaxBatchSize(piloramaCfg.MaxBatchSize()),
+			pilorama.WithMaxBatchDelay(piloramaCfg.MaxBatchDelay())}
+
 		metaPath := metabaseCfg.Path()
 		metaPerm := metabaseCfg.BoltDB().Perm()
 		fatalOnErr(util.MkdirAllX(filepath.Dir(metaPath), metaPerm))
@@ -453,6 +470,7 @@ func initShardOptions(c *cfg) {
 					Timeout: 100 * time.Millisecond,
 				}),
 			),
+			shard.WithPiloramaOptions(piloramaOpts...),
 			shard.WithWriteCache(writeCacheCfg.Enabled()),
 			shard.WithWriteCacheOptions(writeCacheOpts...),
 			shard.WithRemoverBatchSize(gcCfg.RemoverBatchSize()),

View file

@@ -8,6 +8,7 @@ import (
 	"github.com/nspcc-dev/neofs-node/cmd/neofs-node/config"
 	engineconfig "github.com/nspcc-dev/neofs-node/cmd/neofs-node/config/engine"
 	shardconfig "github.com/nspcc-dev/neofs-node/cmd/neofs-node/config/engine/shard"
+	piloramaconfig "github.com/nspcc-dev/neofs-node/cmd/neofs-node/config/engine/shard/pilorama"
 	configtest "github.com/nspcc-dev/neofs-node/cmd/neofs-node/config/test"
 	"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/shard"
 	"github.com/stretchr/testify/require"
@@ -53,10 +54,17 @@ func TestEngineSection(t *testing.T) {
 			meta := sc.Metabase()
 			blob := sc.BlobStor()
 			blz := blob.Blobovnicza()
+			pl := sc.Pilorama()
 			gc := sc.GC()

 			switch num {
 			case 0:
+				require.Equal(t, "tmp/0/blob/pilorama.db", pl.Path())
+				require.Equal(t, fs.FileMode(piloramaconfig.PermDefault), pl.Perm())
+				require.False(t, pl.NoSync())
+				require.Equal(t, pl.MaxBatchDelay(), 10*time.Millisecond)
+				require.Equal(t, pl.MaxBatchSize(), 200)
+
 				require.Equal(t, false, wc.Enabled())
 				require.Equal(t, "tmp/0/cache", wc.Path())
@@ -89,6 +97,12 @@ func TestEngineSection(t *testing.T) {
 				require.Equal(t, false, sc.RefillMetabase())
 				require.Equal(t, shard.ModeReadOnly, sc.Mode())
 			case 1:
+				require.Equal(t, "tmp/1/blob/pilorama.db", pl.Path())
+				require.Equal(t, fs.FileMode(0644), pl.Perm())
+				require.True(t, pl.NoSync())
+				require.Equal(t, 5*time.Millisecond, pl.MaxBatchDelay())
+				require.Equal(t, 100, pl.MaxBatchSize())
+
 				require.Equal(t, true, wc.Enabled())
 				require.Equal(t, "tmp/1/cache", wc.Path())

View file

@@ -7,6 +7,7 @@ import (
 	blobstorconfig "github.com/nspcc-dev/neofs-node/cmd/neofs-node/config/engine/shard/blobstor"
 	gcconfig "github.com/nspcc-dev/neofs-node/cmd/neofs-node/config/engine/shard/gc"
 	metabaseconfig "github.com/nspcc-dev/neofs-node/cmd/neofs-node/config/engine/shard/metabase"
+	piloramaconfig "github.com/nspcc-dev/neofs-node/cmd/neofs-node/config/engine/shard/pilorama"
 	writecacheconfig "github.com/nspcc-dev/neofs-node/cmd/neofs-node/config/engine/shard/writecache"
 	"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/shard"
 )
@@ -44,6 +45,14 @@ func (x *Config) WriteCache() *writecacheconfig.Config {
 	)
 }

+// Pilorama returns "pilorama" subsection as a piloramaconfig.Config.
+func (x *Config) Pilorama() *piloramaconfig.Config {
+	return piloramaconfig.From(
+		(*config.Config)(x).
+			Sub("pilorama"),
+	)
+}
+
 // GC returns "gc" subsection as a gcconfig.Config.
 func (x *Config) GC() *gcconfig.Config {
 	return gcconfig.From(

View file

@@ -0,0 +1,70 @@
package piloramaconfig

import (
	"io/fs"
	"time"

	"github.com/nspcc-dev/neofs-node/cmd/neofs-node/config"
)

// Config is a wrapper over the config section
// which provides access to Pilorama configurations.
type Config config.Config

const (
	// PermDefault are the default permission bits for the pilorama file.
	PermDefault = 0660
)

// From wraps config section into Config.
func From(c *config.Config) *Config {
	return (*Config)(c)
}

// Path returns the value of "path" config parameter.
//
// Returns empty string if missing, for compatibility with older configurations.
func (x *Config) Path() string {
	return config.String((*config.Config)(x), "path")
}

// Perm returns the value of "perm" config parameter as a fs.FileMode.
//
// Returns PermDefault if the value is not a positive number.
func (x *Config) Perm() fs.FileMode {
	p := config.UintSafe((*config.Config)(x), "perm")
	if p == 0 {
		p = PermDefault
	}

	return fs.FileMode(p)
}

// NoSync returns the value of "no_sync" config parameter as a bool value.
//
// Returns false if the value is not a boolean.
func (x *Config) NoSync() bool {
	return config.BoolSafe((*config.Config)(x), "no_sync")
}

// MaxBatchDelay returns the value of "max_batch_delay" config parameter.
//
// Returns 0 if the value is not a positive number.
func (x *Config) MaxBatchDelay() time.Duration {
	d := config.DurationSafe((*config.Config)(x), "max_batch_delay")
	if d <= 0 {
		d = 0
	}

	return d
}

// MaxBatchSize returns the value of "max_batch_size" config parameter.
//
// Returns 0 if the value is not a positive number.
func (x *Config) MaxBatchSize() int {
	s := int(config.IntSafe((*config.Config)(x), "max_batch_size"))
	if s <= 0 {
		s = 0
	}

	return s
}

View file

@@ -42,6 +42,7 @@ func initControlService(c *cfg) {
 			return err
 		}),
 		controlSvc.WithLocalStorage(c.cfgObject.cfgLocalStorage.localStorage),
+		controlSvc.WithTreeService(c.treeService),
 	)

 	lis, err := net.Listen("tcp", endpoint)

View file

@@ -75,6 +75,11 @@ func initAndLog(c *cfg, name string, initializer func(*cfg)) {
 func initApp(c *cfg) {
 	c.ctx, c.ctxCancel = signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM, syscall.SIGHUP)

+	initAndLog(c, "storage engine", func(c *cfg) {
+		fatalOnErr(c.cfgObject.cfgLocalStorage.localStorage.Open())
+		fatalOnErr(c.cfgObject.cfgLocalStorage.localStorage.Init())
+	})
+
 	initAndLog(c, "gRPC", initGRPC)
 	initAndLog(c, "netmap", initNetmapService)
 	initAndLog(c, "accounting", initAccountingService)
@@ -85,13 +90,9 @@ func initApp(c *cfg) {
 	initAndLog(c, "object", initObjectService)
 	initAndLog(c, "profiler", initProfiler)
 	initAndLog(c, "metrics", initMetrics)
+	initAndLog(c, "tree", initTreeService)
 	initAndLog(c, "control", initControlService)

-	initAndLog(c, "storage engine", func(c *cfg) {
-		fatalOnErr(c.cfgObject.cfgLocalStorage.localStorage.Open())
-		fatalOnErr(c.cfgObject.cfgLocalStorage.localStorage.Init())
-	})
-
 	initAndLog(c, "morph notifications", listenMorphNotifications)
 }

cmd/neofs-node/tree.go (new file, 31 lines)

View file

@@ -0,0 +1,31 @@
package main

import (
	"context"

	"github.com/nspcc-dev/neofs-node/cmd/neofs-node/config"
	"github.com/nspcc-dev/neofs-node/pkg/services/tree"
)

func initTreeService(c *cfg) {
	sub := c.appCfg.Sub("tree")

	c.treeService = tree.New(
		tree.WithContainerSource(c.cfgObject.cnrSource),
		tree.WithNetmapSource(c.netMapSource),
		tree.WithPrivateKey(&c.key.PrivateKey),
		tree.WithLogger(c.log),
		tree.WithStorage(c.cfgObject.cfgLocalStorage.localStorage),
		tree.WithContainerCacheSize(int(config.IntSafe(sub, "cache_size"))),
		tree.WithReplicationChannelCapacity(int(config.IntSafe(sub, "replication_channel_capacity"))),
		tree.WithReplicationWorkerCount(int(config.IntSafe(sub, "replication_worker_count"))))

	for _, srv := range c.cfgGRPC.servers {
		tree.RegisterTreeServiceServer(srv, c.treeService)
	}

	c.workers = append(c.workers, newWorkerFromFunc(func(ctx context.Context) {
		c.treeService.Start(ctx)
	}))

	c.onShutdown(c.treeService.Shutdown)
}

View file

@@ -105,6 +105,10 @@ NEOFS_STORAGE_SHARD_0_BLOBSTOR_BLOBOVNICZA_SIZE=4194304
 NEOFS_STORAGE_SHARD_0_BLOBSTOR_BLOBOVNICZA_DEPTH=1
 NEOFS_STORAGE_SHARD_0_BLOBSTOR_BLOBOVNICZA_WIDTH=4
 NEOFS_STORAGE_SHARD_0_BLOBSTOR_BLOBOVNICZA_OPENED_CACHE_CAPACITY=50
+### Pilorama config
+NEOFS_STORAGE_SHARD_0_PILORAMA_PATH="tmp/0/blob/pilorama.db"
+NEOFS_STORAGE_SHARD_0_PILORAMA_MAX_BATCH_DELAY=10ms
+NEOFS_STORAGE_SHARD_0_PILORAMA_MAX_BATCH_SIZE=200
 ### GC config
 #### Limit of the single data remover's batching operation in number of objects
 NEOFS_STORAGE_SHARD_0_GC_REMOVER_BATCH_SIZE=150
@@ -140,6 +144,12 @@ NEOFS_STORAGE_SHARD_1_BLOBSTOR_BLOBOVNICZA_SIZE=4194304
 NEOFS_STORAGE_SHARD_1_BLOBSTOR_BLOBOVNICZA_DEPTH=1
 NEOFS_STORAGE_SHARD_1_BLOBSTOR_BLOBOVNICZA_WIDTH=4
 NEOFS_STORAGE_SHARD_1_BLOBSTOR_BLOBOVNICZA_OPENED_CACHE_CAPACITY=50
+### Pilorama config
+NEOFS_STORAGE_SHARD_1_PILORAMA_PATH="tmp/1/blob/pilorama.db"
+NEOFS_STORAGE_SHARD_1_PILORAMA_PERM=0644
+NEOFS_STORAGE_SHARD_1_PILORAMA_NO_SYNC=true
+NEOFS_STORAGE_SHARD_1_PILORAMA_MAX_BATCH_DELAY=5ms
+NEOFS_STORAGE_SHARD_1_PILORAMA_MAX_BATCH_SIZE=100
 ### GC config
 #### Limit of the single data remover's batching operation in number of objects
 NEOFS_STORAGE_SHARD_1_GC_REMOVER_BATCH_SIZE=200

View file

@@ -156,6 +156,11 @@
         "opened_cache_capacity": 50
       }
     },
+    "pilorama": {
+      "path": "tmp/0/blob/pilorama.db",
+      "max_batch_delay": "10ms",
+      "max_batch_size": 200
+    },
     "gc": {
       "remover_batch_size": 150,
       "remover_sleep_interval": "2m"
@@ -192,6 +197,13 @@
         "opened_cache_capacity": 50
       }
     },
+    "pilorama": {
+      "path": "tmp/1/blob/pilorama.db",
+      "perm": "0644",
+      "no_sync": true,
+      "max_batch_delay": "5ms",
+      "max_batch_size": 100
+    },
     "gc": {
       "remover_batch_size": 200,
       "remover_sleep_interval": "5m"

View file

@@ -60,6 +60,11 @@ grpc:
     enabled: true
     use_insecure_crypto: true # allow using insecure ciphers with TLS 1.2

+tree:
+  cache_size: 10
+  replication_worker_count: 64
+  replication_channel_capacity: 64
+
 control:
   authorized_keys: # list of hex-encoded public keys that have rights to use the Control Service
     - 035839e45d472a3b7769a2a1bd7d54c4ccd4943c3b40f547870e83a8fcbfb3ce11
@@ -119,6 +124,10 @@ storage:
       max_batch_size: 200
       max_batch_delay: 20ms

+    pilorama:
+      max_batch_delay: 5ms # maximum delay for a batch of operations to be executed
+      max_batch_size: 100 # maximum amount of operations in a single batch
+
     blobstor:
       compress: false # turn on/off zstd(level 3) compression of stored objects
       perm: 0644 # permissions for blobstor files(directories: +x for current user and group)
@@ -157,6 +166,11 @@ storage:
         - audio/*
         - video/*

+    pilorama:
+      path: tmp/0/blob/pilorama.db # path to the pilorama database; if omitted, a `pilorama.db` file is created in blobstor.path
+      max_batch_delay: 10ms
+      max_batch_size: 200
+
     gc:
       remover_batch_size: 150 # number of objects to be removed by the garbage collector
       remover_sleep_interval: 2m # frequency of the garbage collector invocation
@@ -171,3 +185,9 @@ storage:
     blobstor:
       path: tmp/1/blob # blobstor path

+    pilorama:
+      path: tmp/1/blob/pilorama.db
+      no_sync: true # USE WITH CAUTION. Returns to the user before pages have been persisted.
+      perm: 0644 # permission to use for the database file and intermediate directories

View file

@@ -82,6 +82,12 @@ func (cp *Processor) checkPutContainer(ctx *putContainerContext) error {
 		return fmt.Errorf("incorrect subnetwork: %w", err)
 	}

+	// check homomorphic hashing setting
+	err = checkHomomorphicHashing(cp.netState, cnr)
+	if err != nil {
+		return fmt.Errorf("incorrect homomorphic hashing setting: %w", err)
+	}
+
 	// check native name and zone
 	err = checkNNS(ctx, cnr)
 	if err != nil {
@@ -237,3 +243,16 @@ func checkSubnet(subCli *morphsubnet.Client, cnr containerSDK.Container) error {
 	return nil
 }
+
+func checkHomomorphicHashing(ns NetworkState, cnr containerSDK.Container) error {
+	netSetting, err := ns.HomomorphicHashDisabled()
+	if err != nil {
+		return fmt.Errorf("could not get setting in contract: %w", err)
+	}
+
+	if cnrSetting := containerSDK.IsHomomorphicHashingDisabled(cnr); netSetting != cnrSetting {
+		return fmt.Errorf("network setting: %t, container setting: %t", netSetting, cnrSetting)
+	}
+
+	return nil
+}

View file

@@ -53,6 +53,14 @@ type NetworkState interface {
 	// Must return any error encountered
 	// which did not allow reading the value.
 	Epoch() (uint64, error)
+
+	// HomomorphicHashDisabled must return a boolean that
+	// represents the homomorphic hashing network state:
+	//   * true if hashing is disabled;
+	//   * false if hashing is enabled.
+	//
+	// Must return any error encountered
+	// which did not allow reading the value.
+	HomomorphicHashDisabled() (bool, error)
 }

 const (
const ( const (

View file

@@ -1,5 +1,10 @@
 package blobstor

+import (
+	"errors"
+	"fmt"
+)
+
 // Open opens BlobStor.
 func (b *BlobStor) Open() error {
 	b.log.Debug("opening...")
@@ -7,13 +12,23 @@ func (b *BlobStor) Open() error {
 	return nil
 }

+// ErrInitBlobovniczas is returned when blobovnicza initialization fails.
+var ErrInitBlobovniczas = errors.New("failure on blobovnicza initialization stage")
+
 // Init initializes internal data structures and system resources.
 //
 // If BlobStor is already initialized, no action is taken.
+//
+// Returns wrapped ErrInitBlobovniczas on blobovnicza tree's initialization failure.
 func (b *BlobStor) Init() error {
 	b.log.Debug("initializing...")

-	return b.blobovniczas.init()
+	err := b.blobovniczas.init()
+	if err != nil {
+		return fmt.Errorf("%w: %v", ErrInitBlobovniczas, err)
+	}
+
+	return nil
 }

 // Close releases all internal resources of BlobStor.

View file

@@ -73,7 +73,7 @@ func (e *StorageEngine) containerSize(prm ContainerSizePrm) (res ContainerSizeRe
 	e.iterateOverUnsortedShards(func(sh hashedShard) (stop bool) {
 		size, err := shard.ContainerSize(sh.Shard, prm.cnr)
 		if err != nil {
-			e.reportShardError(sh, "can't get container size", err,
+			e.reportShardError(sh, sh.metaErrorCount, "can't get container size", err,
 				zap.Stringer("container_id", prm.cnr),
 			)
 			return false
@@ -121,7 +121,7 @@ func (e *StorageEngine) listContainers() (ListContainersRes, error) {
 	e.iterateOverUnsortedShards(func(sh hashedShard) (stop bool) {
 		cnrs, err := shard.ListContainers(sh.Shard)
 		if err != nil {
-			e.reportShardError(sh, "can't get list of containers", err)
+			e.reportShardError(sh, sh.metaErrorCount, "can't get list of containers", err)
 			return false
 		}

View file

@@ -3,7 +3,10 @@ package engine
 import (
 	"errors"
 	"fmt"
+	"sync"
 
+	"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/blobstor"
+	"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/shard"
 	"go.uber.org/zap"
 )
 
@@ -16,9 +19,23 @@ func (e *StorageEngine) open() error {
 	e.mtx.RLock()
 	defer e.mtx.RUnlock()
 
+	var wg sync.WaitGroup
+	var errCh = make(chan error, len(e.shards))
+
 	for id, sh := range e.shards {
+		wg.Add(1)
+		go func(id string, sh *shard.Shard) {
+			defer wg.Done()
 			if err := sh.Open(); err != nil {
-				return fmt.Errorf("could not open shard %s: %w", id, err)
+				errCh <- fmt.Errorf("could not open shard %s: %w", id, err)
+			}
+		}(id, sh.Shard)
+	}
+	wg.Wait()
+	close(errCh)
+
+	for err := range errCh {
+		if err != nil {
+			return err
 		}
 	}
 
@@ -32,10 +49,25 @@ func (e *StorageEngine) Init() error {
 	for id, sh := range e.shards {
 		if err := sh.Init(); err != nil {
+			if errors.Is(err, blobstor.ErrInitBlobovniczas) {
+				delete(e.shards, id)
+				e.log.Error("shard initialization failure, skipping",
+					zap.String("id", id),
+					zap.Error(err),
+				)
+				continue
+			}
 			return fmt.Errorf("could not initialize shard %s: %w", id, err)
 		}
 	}
 
+	if len(e.shards) == 0 {
+		return errors.New("failed initialization on all shards")
+	}
+
 	return nil
 }
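The parallel open above is an instance of a common fan-out pattern: one goroutine per shard, an error channel buffered to the number of workers so sends never block, and a drain loop after wg.Wait. A standalone sketch under those assumptions (names are illustrative):

package main

import (
	"fmt"
	"sync"
)

// openAll opens every item concurrently and reports the first failure.
// The channel capacity equals len(items), so every goroutine can send
// without blocking; wg.Wait plus close make the drain loop terminate.
func openAll(items map[string]func() error) error {
	var wg sync.WaitGroup
	errCh := make(chan error, len(items))

	for id, open := range items {
		wg.Add(1)
		go func(id string, open func() error) {
			defer wg.Done()
			if err := open(); err != nil {
				errCh <- fmt.Errorf("could not open shard %s: %w", id, err)
			}
		}(id, open)
	}
	wg.Wait()
	close(errCh)

	for err := range errCh {
		return err // first collected error, if any
	}
	return nil
}

func main() {
	fmt.Println(openAll(map[string]func() error{
		"a": func() error { return nil },
		"b": func() error { return nil },
	}))
}

golang.org/x/sync/errgroup expresses the same idea more compactly; the WaitGroup variant keeps the module's dependency set unchanged.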


@@ -5,6 +5,7 @@ import (
 	"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/shard"
 	apistatus "github.com/nspcc-dev/neofs-sdk-go/client/status"
+	objectSDK "github.com/nspcc-dev/neofs-sdk-go/object"
 	oid "github.com/nspcc-dev/neofs-sdk-go/object/id"
 )
 
@@ -57,7 +58,13 @@ func (e *StorageEngine) delete(prm DeletePrm) (DeleteRes, error) {
 		resExists, err := sh.Exists(existsPrm)
 		if err != nil {
-			e.reportShardError(sh, "could not check object existence", err)
+			_, ok := err.(*objectSDK.SplitInfoError)
+			if ok || shard.IsErrNotFound(err) || shard.IsErrRemoved(err) {
+				return true
+			}
+			if resExists.FromMeta() {
+				e.reportShardError(sh, sh.metaErrorCount, "could not check object existence", err)
+			}
 			return false
 		} else if !resExists.Exists() {
 			return false
@@ -68,7 +75,9 @@ func (e *StorageEngine) delete(prm DeletePrm) (DeleteRes, error) {
 		_, err = sh.Inhume(shPrm)
 		if err != nil {
-			e.reportShardError(sh, "could not inhume object in shard", err)
+			if sh.GetMode() == shard.ModeReadWrite {
+				e.reportShardError(sh, sh.metaErrorCount, "could not inhume object in shard", err)
+			}
 
 			locked.is = errors.As(err, &locked.err)


@@ -28,7 +28,8 @@ type StorageEngine struct {
 }
 
 type shardWrapper struct {
-	errorCount *atomic.Uint32
+	metaErrorCount  *atomic.Uint32
+	writeErrorCount *atomic.Uint32
 	*shard.Shard
 }
 
@@ -36,10 +37,11 @@ type shardWrapper struct {
 // If it does, shard is set to read-only mode.
 func (e *StorageEngine) reportShardError(
 	sh hashedShard,
+	errorCount *atomic.Uint32,
 	msg string,
 	err error,
 	fields ...zap.Field) {
-	errCount := sh.errorCount.Inc()
+	errCount := errorCount.Inc()
 	e.log.Warn(msg, append([]zap.Field{
 		zap.Stringer("shard_id", sh.ID()),
 		zap.Uint32("error count", errCount),
@@ -50,7 +52,11 @@ func (e *StorageEngine) reportShardError(
 		return
 	}
 
-	err = sh.SetMode(shard.ModeDegraded)
+	if errorCount == sh.writeErrorCount {
+		err = sh.SetMode(sh.GetMode() | shard.ModeReadOnly)
+	} else {
+		err = sh.SetMode(sh.GetMode() | shard.ModeDegraded)
+	}
 	if err != nil {
 		e.log.Error("failed to move shard in degraded mode",
 			zap.Uint32("error count", errCount),
@@ -123,6 +129,8 @@ func WithShardPoolSize(sz uint32) Option {
 // shard is moved to read-only mode.
 func WithErrorThreshold(sz uint32) Option {
 	return func(c *cfg) {
+		if sz != 0 {
 			c.errorsThreshold = sz
+		}
 	}
 }
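With two counters feeding one threshold, a shard drifts to read-only when blobstor writes fail and to degraded mode when the metabase fails. A toy model of that bookkeeping using the standard sync/atomic counters rather than the go.uber.org/atomic ones used in the change (all names are illustrative):

package main

import (
	"fmt"
	"sync/atomic"
)

type shardCounters struct {
	metaErrors  atomic.Uint32 // metabase failures push towards degraded mode
	writeErrors atomic.Uint32 // blobstor write failures push towards read-only mode
}

// report bumps the given counter and says which mode the shard should
// switch to once the configured threshold is reached.
func (c *shardCounters) report(counter *atomic.Uint32, threshold uint32) (string, bool) {
	n := counter.Add(1)
	if threshold == 0 || n < threshold {
		return "", false
	}
	if counter == &c.writeErrors {
		return "read-only", true
	}
	return "degraded", true
}

func main() {
	var c shardCounters
	for i := 0; i < 3; i++ {
		if mode, hit := c.report(&c.metaErrors, 3); hit {
			fmt.Println("switching shard to", mode) // fires on the third error
		}
	}
}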


@@ -8,6 +8,7 @@ import (
 	"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/blobstor"
 	meta "github.com/nspcc-dev/neofs-node/pkg/local_object_storage/metabase"
+	"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/pilorama"
 	"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/shard"
 	"github.com/nspcc-dev/neofs-sdk-go/checksum"
 	checksumtest "github.com/nspcc-dev/neofs-sdk-go/checksum/test"
@@ -77,7 +78,8 @@ func testNewEngineWithShards(shards ...*shard.Shard) *StorageEngine {
 	}
 
 		engine.shards[s.ID().String()] = shardWrapper{
-			errorCount: atomic.NewUint32(0),
+			writeErrorCount: atomic.NewUint32(0),
+			metaErrorCount:  atomic.NewUint32(0),
 			Shard: s,
 		}
 		engine.shardPools[s.ID().String()] = pool
@@ -99,6 +101,7 @@ func testNewShard(t testing.TB, id int) *shard.Shard {
 			blobstor.WithBlobovniczaShallowDepth(2),
 			blobstor.WithRootPerm(0700),
 		),
+		shard.WithPiloramaOptions(pilorama.WithPath(filepath.Join(t.Name(), fmt.Sprintf("%d.pilorama", id)))),
 		shard.WithMetaBaseOptions(
 			meta.WithPath(filepath.Join(t.Name(), fmt.Sprintf("%d.metabase", id))),
 			meta.WithPermissions(0700),
@@ -123,7 +126,10 @@ func testEngineFromShardOpts(t *testing.T, num int, extraOpts func(int) []shard.
 			shard.WithMetaBaseOptions(
 				meta.WithPath(filepath.Join(t.Name(), fmt.Sprintf("metabase%d", i))),
 				meta.WithPermissions(0700),
-			)}, extraOpts(i)...)...)
+			),
+			shard.WithPiloramaOptions(
+				pilorama.WithPath(filepath.Join(t.Name(), fmt.Sprintf("pilorama%d", i)))),
+		}, extraOpts(i)...)...)
 		require.NoError(t, err)
 	}


@@ -10,6 +10,7 @@ import (
 	"github.com/nspcc-dev/neofs-node/pkg/core/object"
 	"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/blobstor"
 	meta "github.com/nspcc-dev/neofs-node/pkg/local_object_storage/metabase"
+	"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/pilorama"
 	"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/shard"
 	cidtest "github.com/nspcc-dev/neofs-sdk-go/container/id/test"
 	objectSDK "github.com/nspcc-dev/neofs-sdk-go/object"
@@ -19,7 +20,7 @@ import (
 const errSmallSize = 256
 
-func newEngineWithErrorThreshold(t *testing.T, dir string, errThreshold uint32) (*StorageEngine, string, [2]*shard.ID) {
+func newEngineWithErrorThreshold(t testing.TB, dir string, errThreshold uint32) (*StorageEngine, string, [2]*shard.ID) {
 	if dir == "" {
 		var err error
@@ -48,7 +49,10 @@ func newEngineWithErrorThreshold(t *testing.T, dir string, errThreshold uint32)
 				blobstor.WithRootPerm(0700)),
 			shard.WithMetaBaseOptions(
 				meta.WithPath(filepath.Join(dir, fmt.Sprintf("%d.metabase", i))),
-				meta.WithPermissions(0700)))
+				meta.WithPermissions(0700)),
+			shard.WithPiloramaOptions(
+				pilorama.WithPath(filepath.Join(dir, fmt.Sprintf("%d.pilorama", i))),
+				pilorama.WithPerm(0700)))
 		require.NoError(t, err)
 	}
 	require.NoError(t, e.Open())
@@ -59,6 +63,7 @@ func newEngineWithErrorThreshold(t *testing.T, dir string, errThreshold uint32)
 func TestErrorReporting(t *testing.T) {
 	t.Run("ignore errors by default", func(t *testing.T) {
+		t.Skip()
 		e, dir, id := newEngineWithErrorThreshold(t, "", 0)
 
 		obj := generateObjectWithCID(t, cidtest.ID())
@@ -107,10 +112,16 @@ func TestErrorReporting(t *testing.T) {
 		checkShardState(t, e, id[0], 0, shard.ModeReadWrite)
 		checkShardState(t, e, id[1], 0, shard.ModeReadWrite)
 
+		e.mtx.RLock()
+		sh := e.shards[id[0].String()]
+		e.mtx.RUnlock()
+		fmt.Println(sh.writeErrorCount, sh.metaErrorCount, errThreshold)
 		corruptSubDir(t, filepath.Join(dir, "0"))
 
 		for i := uint32(1); i < errThreshold; i++ {
 			_, err = e.Get(GetPrm{addr: object.AddressOf(obj)})
+			fmt.Println(sh.writeErrorCount, sh.metaErrorCount)
 			require.Error(t, err)
 			checkShardState(t, e, id[0], i, shard.ModeReadWrite)
 			checkShardState(t, e, id[1], 0, shard.ModeReadWrite)
@@ -119,12 +130,12 @@ func TestErrorReporting(t *testing.T) {
 		for i := uint32(0); i < 2; i++ {
 			_, err = e.Get(GetPrm{addr: object.AddressOf(obj)})
 			require.Error(t, err)
-			checkShardState(t, e, id[0], errThreshold+i, shard.ModeDegraded)
+			checkShardState(t, e, id[0], errThreshold, shard.ModeDegraded)
 			checkShardState(t, e, id[1], 0, shard.ModeReadWrite)
 		}
 
 		require.NoError(t, e.SetShardMode(id[0], shard.ModeReadWrite, false))
-		checkShardState(t, e, id[0], errThreshold+1, shard.ModeReadWrite)
+		checkShardState(t, e, id[0], errThreshold, shard.ModeReadWrite)
 		require.NoError(t, e.SetShardMode(id[0], shard.ModeReadWrite, true))
 		checkShardState(t, e, id[0], 0, shard.ModeReadWrite)
@@ -187,7 +198,7 @@ func TestBlobstorFailback(t *testing.T) {
 		require.ErrorIs(t, err, object.ErrRangeOutOfBounds)
 	}
 
-	checkShardState(t, e, id[0], 4, shard.ModeDegraded)
+	checkShardState(t, e, id[0], 2, shard.ModeDegraded)
 	checkShardState(t, e, id[1], 0, shard.ModeReadWrite)
 }
 
@@ -197,7 +208,7 @@ func checkShardState(t *testing.T, e *StorageEngine, id *shard.ID, errCount uint
 	e.mtx.RUnlock()
 
 	require.Equal(t, mode, sh.GetMode())
-	require.Equal(t, errCount, sh.errorCount.Load())
+	require.Equal(t, errCount, sh.writeErrorCount.Load()+sh.metaErrorCount.Load())
 }
 
 // corruptSubDir makes random directory except "blobovnicza" in blobstor FSTree unreadable.


@@ -3,6 +3,7 @@ package engine
 import (
 	"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/shard"
 	apistatus "github.com/nspcc-dev/neofs-sdk-go/client/status"
+	objectSDK "github.com/nspcc-dev/neofs-sdk-go/object"
 	oid "github.com/nspcc-dev/neofs-sdk-go/object/id"
 )
 
@@ -21,7 +22,16 @@ func (e *StorageEngine) exists(addr oid.Address) (bool, error) {
 			return true
 		}
 
-		e.reportShardError(sh, "could not check existence of object in shard", err)
+		_, ok := err.(*objectSDK.SplitInfoError)
+		if ok || shard.IsErrNotFound(err) {
+			return true
+		}
+
+		if res.FromMeta() {
+			e.reportShardError(sh, sh.metaErrorCount, "could not check existence of object in shard", err)
+		}
+		return false
 	}
 
 	if !exists {


@@ -107,7 +107,9 @@ func (e *StorageEngine) get(prm GetPrm) (GetRes, error) {
 			return true // stop, return it back
 		default:
-			e.reportShardError(sh, "could not get object from shard", err)
+			if sh.GetMode()&shard.ModeDegraded == 0 {
+				e.reportShardError(sh, sh.metaErrorCount, "could not get object from shard", err)
+			}
 			return false
 		}
 	}
@@ -139,8 +141,9 @@ func (e *StorageEngine) get(prm GetPrm) (GetRes, error) {
 		if obj == nil {
 			return GetRes{}, outError
 		}
-		e.reportShardError(shardWithMeta, "meta info was present, but object is missing",
-			metaError, zap.Stringer("address", prm.addr))
+		e.log.Warn("meta info was present, but object is missing",
+			zap.String("err", metaError.Error()),
+			zap.Stringer("address", prm.addr))
 	}
 
 	return GetRes{


@@ -112,7 +112,9 @@ func (e *StorageEngine) head(prm HeadPrm) (HeadRes, error) {
 			return true // stop, return it back
 		default:
-			e.reportShardError(sh, "could not head object from shard", err)
+			if res.FromMeta() {
+				e.reportShardError(sh, sh.metaErrorCount, "could not head object from shard", err)
+			}
 			return false
 		}
 	}


@@ -18,7 +18,7 @@ func (e *StorageEngine) DumpInfo() (i Info) {
 	for _, sh := range e.shards {
 		info := sh.DumpInfo()
-		info.ErrorCount = sh.errorCount.Load()
+		info.ErrorCount = sh.metaErrorCount.Load()
 		i.Shards = append(i.Shards, info)
 	}


@@ -108,6 +108,11 @@ func (e *StorageEngine) inhumeAddr(addr oid.Address, prm shard.InhumePrm, checkE
 		}
 	}()
 
+	if sh.GetMode() != shard.ModeReadWrite {
+		// Inhume is a modifying operation on the metabase, so return here.
+		return false
+	}
+
 	if checkExists {
 		existPrm.WithAddress(addr)
 		exRes, err := sh.Exists(existPrm)
@@ -120,7 +125,9 @@ func (e *StorageEngine) inhumeAddr(addr oid.Address, prm shard.InhumePrm, checkE
 			var siErr *objectSDK.SplitInfoError
 			if !errors.As(err, &siErr) {
-				e.reportShardError(sh, "could not check for presence in shard", err)
+				if exRes.FromMeta() {
+					e.reportShardError(sh, sh.metaErrorCount, "could not check for presence in shard", err)
+				}
 				return
 			}
 
@@ -132,13 +139,12 @@ func (e *StorageEngine) inhumeAddr(addr oid.Address, prm shard.InhumePrm, checkE
 	_, err := sh.Inhume(prm)
 	if err != nil {
-		e.reportShardError(sh, "could not inhume object in shard", err)
-
 		if errors.As(err, &errLocked) {
 			status = 1
 			return true
 		}
 
+		e.reportShardError(sh, sh.metaErrorCount, "could not inhume object in shard", err)
 		return false
 	}


@@ -72,7 +72,10 @@ func (e *StorageEngine) lockSingle(idCnr cid.ID, locker, locked oid.ID, checkExi
 		if err != nil {
 			var siErr *objectSDK.SplitInfoError
 			if !errors.As(err, &siErr) {
-				e.reportShardError(sh, "could not check locked object for presence in shard", err)
+				// In non-degraded mode the error originated from the metabase.
+				if exRes.FromMeta() {
+					e.reportShardError(sh, sh.metaErrorCount, "could not check locked object for presence in shard", err)
+				}
 				return
 			}
 
@@ -84,7 +87,7 @@ func (e *StorageEngine) lockSingle(idCnr cid.ID, locker, locked oid.ID, checkExi
 	err := sh.Lock(idCnr, locker, []oid.ID{locked})
 	if err != nil {
-		e.reportShardError(sh, "could not lock object in shard", err)
+		e.reportShardError(sh, sh.metaErrorCount, "could not lock object in shard", err)
 		if errors.As(err, &errIrregular) {
 			status = 1


@@ -76,6 +76,9 @@ func (e *StorageEngine) put(prm PutPrm) (PutRes, error) {
 		exists, err := sh.Exists(existPrm)
 		if err != nil {
+			if exists.FromMeta() {
+				e.reportShardError(sh, sh.metaErrorCount, "could not check object existence", err)
+			}
 			return // this is not ErrAlreadyRemoved error so we can go to the next shard
 		}
 
@@ -101,12 +104,20 @@ func (e *StorageEngine) put(prm PutPrm) (PutRes, error) {
 		var putPrm shard.PutPrm
 		putPrm.WithObject(prm.obj)
 
-		_, err = sh.Put(putPrm)
+		var res shard.PutRes
+		res, err = sh.Put(putPrm)
 		if err != nil {
+			if res.FromMeta() {
+				e.reportShardError(sh, sh.metaErrorCount, "could not put object in shard", err)
+				return
+			} else if res.FromBlobstor() {
+				e.reportShardError(sh, sh.writeErrorCount, "could not put object in shard", err)
+				return
+			} else {
 				e.log.Warn("could not put object in shard",
 					zap.Stringer("shard", sh.ID()),
-					zap.String("error", err.Error()),
-				)
+					zap.String("error", err.Error()))
+			}
 			return
 		}


@@ -126,7 +126,9 @@ func (e *StorageEngine) getRange(prm RngPrm) (RngRes, error) {
 			return true // stop, return it back
 		default:
-			e.reportShardError(sh, "could not get object from shard", err)
+			if !res.HasMeta() {
+				e.reportShardError(sh, sh.metaErrorCount, "could not get object from shard", err)
+			}
 			return false
 		}
 	}
@@ -162,7 +164,8 @@ func (e *StorageEngine) getRange(prm RngPrm) (RngRes, error) {
 		if obj == nil {
 			return RngRes{}, outError
 		}
-		e.reportShardError(shardWithMeta, "meta info was present, but object is missing",
+		e.reportShardError(shardWithMeta, shardWithMeta.metaErrorCount,
+			"meta info was present, but object is missing",
 			metaError,
 			zap.Stringer("address", prm.addr),
 		)


@@ -68,7 +68,7 @@ func (e *StorageEngine) _select(prm SelectPrm) (SelectRes, error) {
 	e.iterateOverUnsortedShards(func(sh hashedShard) (stop bool) {
 		res, err := sh.Select(shPrm)
 		if err != nil {
-			e.reportShardError(sh, "could not select objects from shard", err)
+			e.reportShardError(sh, sh.metaErrorCount, "could not select objects from shard", err)
 			return false
 		}
 
@@ -113,7 +113,7 @@ func (e *StorageEngine) list(limit uint64) (SelectRes, error) {
 	e.iterateOverUnsortedShards(func(sh hashedShard) (stop bool) {
 		res, err := sh.List() // consider limit result of shard iterator
 		if err != nil {
-			e.reportShardError(sh, "could not select objects from shard", err)
+			e.reportShardError(sh, sh.metaErrorCount, "could not select objects from shard", err)
 		} else {
 			for _, addr := range res.AddressList() { // save only unique values
 				if _, ok := uniqueMap[addr.EncodeToString()]; !ok {


@@ -50,7 +50,8 @@ func (e *StorageEngine) AddShard(opts ...shard.Option) (*shard.ID, error) {
 	}
 
 	e.shards[strID] = shardWrapper{
-		errorCount: atomic.NewUint32(0),
+		metaErrorCount:  atomic.NewUint32(0),
+		writeErrorCount: atomic.NewUint32(0),
 		Shard: sh,
 	}
 
@@ -135,7 +136,8 @@ func (e *StorageEngine) SetShardMode(id *shard.ID, m shard.Mode, resetErrorCount
 	for shID, sh := range e.shards {
 		if id.String() == shID {
 			if resetErrorCounter {
-				sh.errorCount.Store(0)
+				sh.metaErrorCount.Store(0)
+				sh.writeErrorCount.Store(0)
 			}
 			return sh.SetMode(m)
 		}


@@ -0,0 +1,148 @@
package engine
import (
"errors"
"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/pilorama"
"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/shard"
cidSDK "github.com/nspcc-dev/neofs-sdk-go/container/id"
)
var _ pilorama.Forest = (*StorageEngine)(nil)
// TreeMove implements the pilorama.Forest interface.
func (e *StorageEngine) TreeMove(d pilorama.CIDDescriptor, treeID string, m *pilorama.Move) (*pilorama.LogMove, error) {
var err error
var lm *pilorama.LogMove
for _, sh := range e.sortShardsByWeight(d.CID) {
lm, err = sh.TreeMove(d, treeID, m)
if err != nil {
if errors.Is(err, shard.ErrReadOnlyMode) {
return nil, err
}
//e.reportShardError(sh, sh.writeErrorCount, "can't perform `TreeMove`", err,
// zap.Stringer("cid", d.CID),
// zap.String("tree", treeID))
continue
}
return lm, nil
}
return nil, err
}
// TreeAddByPath implements the pilorama.Forest interface.
func (e *StorageEngine) TreeAddByPath(d pilorama.CIDDescriptor, treeID string, attr string, path []string, m []pilorama.KeyValue) ([]pilorama.LogMove, error) {
var err error
var lm []pilorama.LogMove
for _, sh := range e.sortShardsByWeight(d.CID) {
lm, err = sh.TreeAddByPath(d, treeID, attr, path, m)
if err != nil {
if errors.Is(err, shard.ErrReadOnlyMode) {
return nil, err
}
//e.reportShardError(sh, sh.writeErrorCount, "can't perform `TreeAddByPath`", err,
// zap.Stringer("cid", d.CID),
// zap.String("tree", treeID))
continue
}
return lm, nil
}
return nil, err
}
// TreeApply implements the pilorama.Forest interface.
func (e *StorageEngine) TreeApply(d pilorama.CIDDescriptor, treeID string, m []pilorama.Move) error {
var err error
for _, sh := range e.sortShardsByWeight(d.CID) {
err = sh.TreeApply(d, treeID, m)
if err != nil {
if errors.Is(err, shard.ErrReadOnlyMode) {
return err
}
//e.reportShardError(sh, sh.writeErrorCount, "can't perform `TreeApply`", err,
// zap.Stringer("cid", d.CID),
// zap.String("tree", treeID))
continue
}
return nil
}
return err
}
// TreeGetByPath implements the pilorama.Forest interface.
func (e *StorageEngine) TreeGetByPath(cid cidSDK.ID, treeID string, attr string, path []string, latest bool) ([]pilorama.Node, error) {
var err error
var nodes []pilorama.Node
for _, sh := range e.sortShardsByWeight(cid) {
nodes, err = sh.TreeGetByPath(cid, treeID, attr, path, latest)
if err != nil {
if !errors.Is(err, pilorama.ErrTreeNotFound) {
//e.reportShardError(sh, "can't perform `TreeGetByPath`", err,
// zap.Stringer("cid", cid),
// zap.String("tree", treeID))
}
continue
}
return nodes, nil
}
return nil, err
}
// TreeGetMeta implements the pilorama.Forest interface.
func (e *StorageEngine) TreeGetMeta(cid cidSDK.ID, treeID string, nodeID pilorama.Node) (pilorama.Meta, uint64, error) {
var err error
var m pilorama.Meta
var p uint64
for _, sh := range e.sortShardsByWeight(cid) {
m, p, err = sh.TreeGetMeta(cid, treeID, nodeID)
if err != nil {
if !errors.Is(err, pilorama.ErrTreeNotFound) {
//e.reportShardError(sh, sh.writeErrorCount, "can't perform `TreeGetMeta`", err,
// zap.Stringer("cid", cid),
// zap.String("tree", treeID))
}
continue
}
return m, p, nil
}
return pilorama.Meta{}, 0, err
}
// TreeGetChildren implements the pilorama.Forest interface.
func (e *StorageEngine) TreeGetChildren(cid cidSDK.ID, treeID string, nodeID pilorama.Node) ([]uint64, error) {
var err error
var nodes []uint64
for _, sh := range e.sortShardsByWeight(cid) {
nodes, err = sh.TreeGetChildren(cid, treeID, nodeID)
if err != nil {
if !errors.Is(err, pilorama.ErrTreeNotFound) {
//e.reportShardError(sh, "can't perform `TreeGetChildren`", err,
// zap.Stringer("cid", cid),
// zap.String("tree", treeID))
}
continue
}
return nodes, nil
}
return nil, err
}
// TreeGetOpLog implements the pilorama.Forest interface.
func (e *StorageEngine) TreeGetOpLog(cid cidSDK.ID, treeID string, height uint64) (pilorama.Move, error) {
var err error
var lm pilorama.Move
for _, sh := range e.sortShardsByWeight(cid) {
lm, err = sh.TreeGetOpLog(cid, treeID, height)
if err != nil {
if !errors.Is(err, pilorama.ErrTreeNotFound) {
//e.reportShardError(sh, "can't perform `TreeGetOpLog`", err,
// zap.Stringer("cid", cid),
// zap.String("tree", treeID))
}
continue
}
return lm, nil
}
return lm, err
}
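All the Tree* wrappers above share one control flow: try shards in weight order, abort on shard.ErrReadOnlyMode, otherwise remember the error and fall through to the next shard. A generic sketch of that loop (a hypothetical helper, not part of the change; type parameters need Go 1.18+):

package main

import (
	"errors"
	"fmt"
)

// errReadOnly stands in for shard.ErrReadOnlyMode.
var errReadOnly = errors.New("shard is in read-only mode")

// tryShards mirrors the loop shared by the Tree* methods: run op against
// shards in order, stop immediately on a read-only error, otherwise keep
// the last failure and fall through to the next shard.
func tryShards[S, T any](shards []S, op func(S) (T, error)) (T, error) {
	var zero T
	var last error
	for _, sh := range shards {
		res, err := op(sh)
		if err == nil {
			return res, nil
		}
		if errors.Is(err, errReadOnly) {
			return zero, err
		}
		last = err
	}
	return zero, last
}

func main() {
	shards := []string{"broken", "healthy"}
	res, err := tryShards(shards, func(s string) (int, error) {
		if s == "broken" {
			return 0, errors.New("io error")
		}
		return 42, nil
	})
	fmt.Println(res, err) // 42 <nil>: the second shard served the request
}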


@@ -0,0 +1,73 @@
package engine
import (
"strconv"
"testing"
"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/pilorama"
cidtest "github.com/nspcc-dev/neofs-sdk-go/container/id/test"
"github.com/nspcc-dev/neofs-sdk-go/object"
)
func BenchmarkTreeVsSearch(b *testing.B) {
b.Run("10 objects", func(b *testing.B) {
benchmarkTreeVsSearch(b, 10)
})
b.Run("100 objects", func(b *testing.B) {
benchmarkTreeVsSearch(b, 100)
})
b.Run("1000 objects", func(b *testing.B) {
benchmarkTreeVsSearch(b, 1000)
})
}
func benchmarkTreeVsSearch(b *testing.B, objCount int) {
e, _, _ := newEngineWithErrorThreshold(b, "", 0)
cid := cidtest.ID()
d := pilorama.CIDDescriptor{CID: cid, Position: 0, Size: 1}
treeID := "someTree"
for i := 0; i < objCount; i++ {
obj := generateObjectWithCID(b, cid)
addAttribute(obj, pilorama.AttributeFilename, strconv.Itoa(i))
err := Put(e, obj)
if err != nil {
b.Fatal(err)
}
_, err = e.TreeAddByPath(d, treeID, pilorama.AttributeFilename, nil,
[]pilorama.KeyValue{{pilorama.AttributeFilename, []byte(strconv.Itoa(i))}})
if err != nil {
b.Fatal(err)
}
}
b.Run("search", func(b *testing.B) {
var prm SelectPrm
prm.WithContainerID(cid)
var fs object.SearchFilters
fs.AddFilter(pilorama.AttributeFilename, strconv.Itoa(objCount/2), object.MatchStringEqual)
prm.WithFilters(fs)
for i := 0; i < b.N; i++ {
res, err := e.Select(prm)
if err != nil {
b.Fatal(err)
}
if count := len(res.addrList); count != 1 {
b.Fatalf("expected 1 object, got %d", count)
}
}
})
b.Run("TreeGetByPath", func(b *testing.B) {
for i := 0; i < b.N; i++ {
nodes, err := e.TreeGetByPath(cid, treeID, pilorama.AttributeFilename, []string{strconv.Itoa(objCount / 2)}, true)
if err != nil {
b.Fatal(err)
}
if count := len(nodes); count != 1 {
b.Fatalf("expected 1 object, got %d", count)
}
}
})
}
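The benchmark pits a metabase Select over the FileName attribute against a pilorama TreeGetByPath lookup for the same value; both are expected to return exactly one result. It can be run with the usual tooling, e.g. `go test -run='^$' -bench=BenchmarkTreeVsSearch` inside the engine package (exact invocation assumed).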


@@ -114,5 +114,9 @@ func (db *DB) init(reset bool) error {
 
 // Close closes boltDB instance.
 func (db *DB) Close() error {
+	if db.boltDB != nil {
 		return db.boltDB.Close()
+	}
+	return nil
 }


@@ -6,8 +6,10 @@ import (
 	"github.com/nspcc-dev/neofs-node/pkg/core/object"
 	meta "github.com/nspcc-dev/neofs-node/pkg/local_object_storage/metabase"
+	apistatus "github.com/nspcc-dev/neofs-sdk-go/client/status"
 	cidtest "github.com/nspcc-dev/neofs-sdk-go/container/id/test"
 	objectSDK "github.com/nspcc-dev/neofs-sdk-go/object"
+	oidtest "github.com/nspcc-dev/neofs-sdk-go/object/id/test"
 	"github.com/stretchr/testify/require"
 )
 
@@ -29,6 +31,15 @@ func TestDB_Exists(t *testing.T) {
 		exists, err := meta.Exists(db, object.AddressOf(regular))
 		require.NoError(t, err)
 		require.True(t, exists)
+
+		t.Run("removed object", func(t *testing.T) {
+			err := meta.Inhume(db, object.AddressOf(regular), oidtest.Address())
+			require.NoError(t, err)
+
+			exists, err := meta.Exists(db, object.AddressOf(regular))
+			require.ErrorAs(t, err, new(apistatus.ObjectAlreadyRemoved))
+			require.False(t, exists)
+		})
 	})
 
 	t.Run("tombstone object", func(t *testing.T) {
@@ -153,4 +164,12 @@ func TestDB_Exists(t *testing.T) {
 			require.Equal(t, id1, id2)
 		})
 	})
+
+	t.Run("random object", func(t *testing.T) {
+		addr := oidtest.Address()
+
+		exists, err := meta.Exists(db, addr)
+		require.NoError(t, err)
+		require.False(t, exists)
+	})
 }


@@ -0,0 +1,764 @@
package pilorama
import (
"bytes"
"encoding/binary"
"fmt"
"math/rand"
"os"
"path/filepath"
"sort"
"sync"
"time"
"github.com/nspcc-dev/neo-go/pkg/io"
"github.com/nspcc-dev/neofs-node/pkg/util"
cidSDK "github.com/nspcc-dev/neofs-sdk-go/container/id"
"go.etcd.io/bbolt"
)
type boltForest struct {
db *bbolt.DB
// mtx protects batches field.
mtx sync.Mutex
batches []batch
batchesCh chan batch
closeCh chan struct{}
cfg
}
type batch struct {
cid cidSDK.ID
treeID string
ch []chan error
m []Move
}
var (
dataBucket = []byte{0}
logBucket = []byte{1}
)
// NewBoltForest returns storage wrapper for storing operations on CRDT trees.
//
// Each tree is stored in a separate bucket by `CID + treeID` key.
// All integers are stored in little-endian unless explicitly specified otherwise.
//
// DB schema (for a single tree):
// timestamp is 8-byte, id is 4-byte.
//
// log storage (logBucket):
// timestamp in big-endian -> log operation
//
// tree storage (dataBucket):
// 't' + node (id) -> timestamp when the node first appeared
// 'p' + node (id) -> parent (id)
// 'm' + node (id) -> serialized meta
// 'c' + parent (id) + child (id) -> 0/1
// 'i' + 0 + attrKey + 0 + attrValue + 0 + parent (id) + node (id) -> 0/1 (1 for automatically created nodes)
func NewBoltForest(opts ...Option) ForestStorage {
b := boltForest{
cfg: cfg{
perm: os.ModePerm,
maxBatchDelay: bbolt.DefaultMaxBatchDelay,
maxBatchSize: bbolt.DefaultMaxBatchSize,
},
}
for i := range opts {
opts[i](&b.cfg)
}
return &b
}
func (t *boltForest) Init() error {
t.closeCh = make(chan struct{})
batchWorkersCount := t.maxBatchSize
t.batchesCh = make(chan batch, batchWorkersCount)
go func() {
tick := time.NewTicker(time.Millisecond * 20)
defer tick.Stop()
for {
select {
case <-t.closeCh:
return
case <-tick.C:
t.trigger()
}
}
}()
for i := 0; i < batchWorkersCount; i++ {
go t.applier()
}
return nil
}
func (t *boltForest) Open() error {
err := util.MkdirAllX(filepath.Dir(t.path), t.perm)
if err != nil {
return fmt.Errorf("can't create dir %s for the pilorama: %w", t.path, err)
}
opts := *bbolt.DefaultOptions
opts.NoSync = t.noSync
opts.Timeout = 100 * time.Millisecond
t.db, err = bbolt.Open(t.path, t.perm, &opts)
if err != nil {
return fmt.Errorf("can't open the pilorama DB: %w", err)
}
t.db.MaxBatchSize = t.maxBatchSize
t.db.MaxBatchDelay = t.maxBatchDelay
return t.db.Update(func(tx *bbolt.Tx) error {
_, err := tx.CreateBucketIfNotExists(dataBucket)
if err != nil {
return err
}
_, err = tx.CreateBucketIfNotExists(logBucket)
if err != nil {
return err
}
return nil
})
}
func (t *boltForest) Close() error {
if t.closeCh != nil {
close(t.closeCh)
t.closeCh = nil
}
return t.db.Close()
}
// TreeMove implements the Forest interface.
func (t *boltForest) TreeMove(d CIDDescriptor, treeID string, m *Move) (*LogMove, error) {
if !d.checkValid() {
return nil, ErrInvalidCIDDescriptor
}
var lm LogMove
return &lm, t.db.Batch(func(tx *bbolt.Tx) error {
bLog, bTree, err := t.getTreeBuckets(tx, d.CID, treeID)
if err != nil {
return err
}
m.Time = t.getLatestTimestamp(bLog, d.Position, d.Size)
if m.Child == RootID {
m.Child = t.findSpareID(bTree)
}
return t.applyOperation(bLog, bTree, []Move{*m}, &lm)
})
}
// TreeAddByPath implements the Forest interface.
func (t *boltForest) TreeAddByPath(d CIDDescriptor, treeID string, attr string, path []string, meta []KeyValue) ([]LogMove, error) {
if !d.checkValid() {
return nil, ErrInvalidCIDDescriptor
}
if !isAttributeInternal(attr) {
return nil, ErrNotPathAttribute
}
var lm []LogMove
var key [17]byte
err := t.db.Batch(func(tx *bbolt.Tx) error {
bLog, bTree, err := t.getTreeBuckets(tx, d.CID, treeID)
if err != nil {
return err
}
i, node, err := t.getPathPrefix(bTree, attr, path)
if err != nil {
return err
}
ts := t.getLatestTimestamp(bLog, d.Position, d.Size)
lm = make([]LogMove, len(path)-i+1)
for j := i; j < len(path); j++ {
lm[j-i].Move = Move{
Parent: node,
Meta: Meta{
Time: ts,
Items: []KeyValue{{Key: attr, Value: []byte(path[j])}},
},
Child: t.findSpareID(bTree),
}
err := t.do(bLog, bTree, key[:], &lm[j-i])
if err != nil {
return err
}
ts = nextTimestamp(ts, uint64(d.Position), uint64(d.Size))
node = lm[j-i].Child
}
lm[len(lm)-1].Move = Move{
Parent: node,
Meta: Meta{
Time: ts,
Items: meta,
},
Child: t.findSpareID(bTree),
}
return t.do(bLog, bTree, key[:], &lm[len(lm)-1])
})
return lm, err
}
// getLatestTimestamp returns timestamp for a new operation which is guaranteed to be bigger than
// all timestamps corresponding to already stored operations.
func (t *boltForest) getLatestTimestamp(bLog *bbolt.Bucket, pos, size int) uint64 {
var ts uint64
c := bLog.Cursor()
key, _ := c.Last()
if len(key) != 0 {
ts = binary.BigEndian.Uint64(key)
}
return nextTimestamp(ts, uint64(pos), uint64(size))
}
// findSpareID returns random unused ID.
func (t *boltForest) findSpareID(bTree *bbolt.Bucket) uint64 {
id := uint64(rand.Int63())
var key [9]byte
key[0] = 't'
binary.LittleEndian.PutUint64(key[1:], id)
for {
if bTree.Get(key[:]) == nil {
return id
}
id = uint64(rand.Int63())
binary.LittleEndian.PutUint64(key[1:], id)
}
}
// TreeApply implements the Forest interface.
func (t *boltForest) TreeApply(d CIDDescriptor, treeID string, m []Move) error {
if !d.checkValid() {
return ErrInvalidCIDDescriptor
}
ch := make(chan error, 1)
t.addBatch(d, treeID, m, ch)
return <-ch
}
func (t *boltForest) addBatch(d CIDDescriptor, treeID string, m []Move, ch chan error) {
t.mtx.Lock()
defer t.mtx.Unlock()
for i := range t.batches {
if t.batches[i].cid.Equals(d.CID) && t.batches[i].treeID == treeID {
t.batches[i].ch = append(t.batches[i].ch, ch)
t.batches[i].m = append(t.batches[i].m, m...)
return
}
}
t.batches = append(t.batches, batch{
cid: d.CID,
treeID: treeID,
ch: []chan error{ch},
m: m,
})
}
func (t *boltForest) trigger() {
t.mtx.Lock()
for i := range t.batches {
t.batchesCh <- t.batches[i]
}
t.batches = t.batches[:0]
t.mtx.Unlock()
}
func (t *boltForest) applier() {
for b := range t.batchesCh {
sort.Slice(b.m, func(i, j int) bool {
return b.m[i].Time < b.m[j].Time
})
err := t.db.Batch(func(tx *bbolt.Tx) error {
bLog, bTree, err := t.getTreeBuckets(tx, b.cid, b.treeID)
if err != nil {
return err
}
var lm LogMove
return t.applyOperation(bLog, bTree, b.m, &lm)
})
for i := range b.ch {
b.ch[i] <- err
}
}
}
func (t *boltForest) getTreeBuckets(tx *bbolt.Tx, cid cidSDK.ID, treeID string) (*bbolt.Bucket, *bbolt.Bucket, error) {
treeRoot := bucketName(cid, treeID)
child, err := tx.CreateBucket(treeRoot)
if err != nil && err != bbolt.ErrBucketExists {
return nil, nil, err
}
var bLog, bData *bbolt.Bucket
if err == nil {
if bLog, err = child.CreateBucket(logBucket); err != nil {
return nil, nil, err
}
if bData, err = child.CreateBucket(dataBucket); err != nil {
return nil, nil, err
}
} else {
child = tx.Bucket(treeRoot)
bLog = child.Bucket(logBucket)
bData = child.Bucket(dataBucket)
}
return bLog, bData, nil
}
// applyOperation applies log operations. Assumes ms is sorted by timestamp.
func (t *boltForest) applyOperation(logBucket, treeBucket *bbolt.Bucket, ms []Move, lm *LogMove) error {
var tmp LogMove
var cKey [17]byte
c := logBucket.Cursor()
key, value := c.Last()
b := bytes.NewReader(nil)
r := io.NewBinReaderFromIO(b)
// 1. Undo up until the desired timestamp is here.
for len(key) == 8 && binary.BigEndian.Uint64(key) > ms[0].Time {
b.Reset(value)
if err := t.logFromBytes(&tmp, r); err != nil {
return err
}
if err := t.undo(&tmp.Move, &tmp, treeBucket, cKey[:]); err != nil {
return err
}
key, value = c.Prev()
}
var i int
for {
// 2. Insert the operation.
if len(key) != 8 || binary.BigEndian.Uint64(key) != ms[i].Time {
lm.Move = ms[i]
if err := t.do(logBucket, treeBucket, cKey[:], lm); err != nil {
return err
}
}
key, value = c.Next()
i++
// 3. Re-apply all other operations.
for len(key) == 8 && (i == len(ms) || binary.BigEndian.Uint64(key) < ms[i].Time) {
b.Reset(value)
if err := t.logFromBytes(&tmp, r); err != nil {
return err
}
if err := t.do(logBucket, treeBucket, cKey[:], &tmp); err != nil {
return err
}
key, value = c.Next()
}
if i == len(ms) {
return nil
}
}
}
func (t *boltForest) do(lb *bbolt.Bucket, b *bbolt.Bucket, key []byte, op *LogMove) error {
shouldPut := !t.isAncestor(b, key, op.Child, op.Parent)
currParent := b.Get(parentKey(key, op.Child))
if currParent != nil { // node is already in tree
op.HasOld = true
op.Old.Parent = binary.LittleEndian.Uint64(currParent)
if err := op.Old.Meta.FromBytes(b.Get(metaKey(key, op.Child))); err != nil {
return err
}
}
binary.BigEndian.PutUint64(key, op.Time)
if err := lb.Put(key[:8], t.logToBytes(op)); err != nil {
return err
}
if !shouldPut {
return nil
}
if currParent == nil {
if err := b.Put(timestampKey(key, op.Child), toUint64(op.Time)); err != nil {
return err
}
} else {
parent := binary.LittleEndian.Uint64(currParent)
if err := b.Delete(childrenKey(key, op.Child, parent)); err != nil {
return err
}
var meta Meta
var k = metaKey(key, op.Child)
if err := meta.FromBytes(b.Get(k)); err == nil {
for i := range meta.Items {
if isAttributeInternal(meta.Items[i].Key) {
err := b.Delete(internalKey(nil, meta.Items[i].Key, string(meta.Items[i].Value), parent, op.Child))
if err != nil {
return err
}
}
}
}
}
return t.addNode(b, key, op.Child, op.Parent, op.Meta)
}
// removeNode removes node keys from the tree except the children key or its parent.
func (t *boltForest) removeNode(b *bbolt.Bucket, key []byte, node, parent Node) error {
if err := b.Delete(parentKey(key, node)); err != nil {
return err
}
var meta Meta
var k = metaKey(key, node)
if err := meta.FromBytes(b.Get(k)); err == nil {
for i := range meta.Items {
if isAttributeInternal(meta.Items[i].Key) {
err := b.Delete(internalKey(nil, meta.Items[i].Key, string(meta.Items[i].Value), parent, node))
if err != nil {
return err
}
}
}
}
if err := b.Delete(metaKey(key, node)); err != nil {
return err
}
return b.Delete(timestampKey(key, node))
}
// addNode adds node keys to the tree except the timestamp key.
func (t *boltForest) addNode(b *bbolt.Bucket, key []byte, child, parent Node, meta Meta) error {
err := b.Put(parentKey(key, child), toUint64(parent))
if err != nil {
return err
}
err = b.Put(childrenKey(key, child, parent), []byte{1})
if err != nil {
return err
}
err = b.Put(metaKey(key, child), meta.Bytes())
if err != nil {
return err
}
for i := range meta.Items {
if !isAttributeInternal(meta.Items[i].Key) {
continue
}
key = internalKey(key, meta.Items[i].Key, string(meta.Items[i].Value), parent, child)
if len(meta.Items) == 1 {
err = b.Put(key, []byte{1})
} else {
err = b.Put(key, []byte{0})
}
if err != nil {
return err
}
}
return nil
}
func (t *boltForest) undo(m *Move, lm *LogMove, b *bbolt.Bucket, key []byte) error {
if err := b.Delete(childrenKey(key, m.Child, m.Parent)); err != nil {
return err
}
if !lm.HasOld {
return t.removeNode(b, key, m.Child, m.Parent)
}
return t.addNode(b, key, m.Child, lm.Old.Parent, lm.Old.Meta)
}
func (t *boltForest) isAncestor(b *bbolt.Bucket, key []byte, parent, child Node) bool {
key[0] = 'p'
for c := child; c != parent; {
binary.LittleEndian.PutUint64(key[1:], c)
rawParent := b.Get(key[:9])
if len(rawParent) != 8 {
return false
}
c = binary.LittleEndian.Uint64(rawParent)
}
return true
}
// TreeGetByPath implements the Forest interface.
func (t *boltForest) TreeGetByPath(cid cidSDK.ID, treeID string, attr string, path []string, latest bool) ([]Node, error) {
if !isAttributeInternal(attr) {
return nil, ErrNotPathAttribute
}
if len(path) == 0 {
return nil, nil
}
var nodes []Node
return nodes, t.db.View(func(tx *bbolt.Tx) error {
treeRoot := tx.Bucket(bucketName(cid, treeID))
if treeRoot == nil {
return ErrTreeNotFound
}
b := treeRoot.Bucket(dataBucket)
i, curNode, err := t.getPathPrefix(b, attr, path[:len(path)-1])
if err != nil {
return err
}
if i < len(path)-1 {
return nil
}
var (
childID [9]byte
maxTimestamp uint64
)
c := b.Cursor()
attrKey := internalKey(nil, attr, path[len(path)-1], curNode, 0)
attrKey = attrKey[:len(attrKey)-8]
childKey, _ := c.Seek(attrKey)
for len(childKey) == len(attrKey)+8 && bytes.Equal(attrKey, childKey[:len(childKey)-8]) {
child := binary.LittleEndian.Uint64(childKey[len(childKey)-8:])
if latest {
ts := binary.LittleEndian.Uint64(b.Get(timestampKey(childID[:], child)))
if ts >= maxTimestamp {
nodes = append(nodes[:0], child)
maxTimestamp = ts
}
} else {
nodes = append(nodes, child)
}
childKey, _ = c.Next()
}
return nil
})
}
// TreeGetMeta implements the Forest interface.
func (t *boltForest) TreeGetMeta(cid cidSDK.ID, treeID string, nodeID Node) (Meta, Node, error) {
key := parentKey(make([]byte, 9), nodeID)
var m Meta
var parentID uint64
err := t.db.View(func(tx *bbolt.Tx) error {
treeRoot := tx.Bucket(bucketName(cid, treeID))
if treeRoot == nil {
return ErrTreeNotFound
}
b := treeRoot.Bucket(dataBucket)
if data := b.Get(key); len(data) == 8 {
parentID = binary.LittleEndian.Uint64(data)
}
return m.FromBytes(b.Get(metaKey(key, nodeID)))
})
return m, parentID, err
}
// TreeGetChildren implements the Forest interface.
func (t *boltForest) TreeGetChildren(cid cidSDK.ID, treeID string, nodeID Node) ([]uint64, error) {
key := make([]byte, 9)
key[0] = 'c'
binary.LittleEndian.PutUint64(key[1:], nodeID)
var children []uint64
err := t.db.View(func(tx *bbolt.Tx) error {
treeRoot := tx.Bucket(bucketName(cid, treeID))
if treeRoot == nil {
return ErrTreeNotFound
}
b := treeRoot.Bucket(dataBucket)
c := b.Cursor()
for k, _ := c.Seek(key); len(k) == 17 && binary.LittleEndian.Uint64(k[1:]) == nodeID; k, _ = c.Next() {
children = append(children, binary.LittleEndian.Uint64(k[9:]))
}
return nil
})
return children, err
}
// TreeGetOpLog implements the Forest interface.
func (t *boltForest) TreeGetOpLog(cid cidSDK.ID, treeID string, height uint64) (Move, error) {
key := make([]byte, 8)
binary.BigEndian.PutUint64(key, height)
var lm Move
err := t.db.View(func(tx *bbolt.Tx) error {
treeRoot := tx.Bucket(bucketName(cid, treeID))
if treeRoot == nil {
return ErrTreeNotFound
}
c := treeRoot.Bucket(logBucket).Cursor()
if _, data := c.Seek(key); data != nil {
return t.moveFromBytes(&lm, data)
}
return nil
})
return lm, err
}
func (t *boltForest) getPathPrefix(bTree *bbolt.Bucket, attr string, path []string) (int, Node, error) {
c := bTree.Cursor()
var curNode Node
var attrKey []byte
loop:
for i := range path {
attrKey = internalKey(attrKey, attr, path[i], curNode, 0)
attrKey = attrKey[:len(attrKey)-8]
childKey, value := c.Seek(attrKey)
for len(childKey) == len(attrKey)+8 && bytes.Equal(attrKey, childKey[:len(childKey)-8]) {
if len(value) == 1 && value[0] == 1 {
curNode = binary.LittleEndian.Uint64(childKey[len(childKey)-8:])
continue loop
}
childKey, value = c.Next()
}
return i, curNode, nil
}
return len(path), curNode, nil
}
func (t *boltForest) moveFromBytes(m *Move, data []byte) error {
r := io.NewBinReaderFromBuf(data)
m.Child = r.ReadU64LE()
m.Parent = r.ReadU64LE()
m.Meta.DecodeBinary(r)
return r.Err
}
func (t *boltForest) logFromBytes(lm *LogMove, r *io.BinReader) error {
lm.Child = r.ReadU64LE()
lm.Parent = r.ReadU64LE()
lm.Meta.DecodeBinary(r)
lm.HasOld = r.ReadBool()
if lm.HasOld {
lm.Old.Parent = r.ReadU64LE()
lm.Old.Meta.DecodeBinary(r)
}
return r.Err
}
func (t *boltForest) logToBytes(lm *LogMove) []byte {
w := io.NewBufBinWriter()
size := 8 + 8 + lm.Meta.Size() + 1
if lm.HasOld {
size += 8 + lm.Old.Meta.Size()
}
w.Grow(size)
w.WriteU64LE(lm.Child)
w.WriteU64LE(lm.Parent)
lm.Meta.EncodeBinary(w.BinWriter)
w.WriteBool(lm.HasOld)
if lm.HasOld {
w.WriteU64LE(lm.Old.Parent)
lm.Old.Meta.EncodeBinary(w.BinWriter)
}
return w.Bytes()
}
func bucketName(cid cidSDK.ID, treeID string) []byte {
return []byte(cid.String() + treeID)
}
// 't' + node (id) -> timestamp when the node first appeared
func timestampKey(key []byte, child Node) []byte {
key[0] = 't'
binary.LittleEndian.PutUint64(key[1:], child)
return key[:9]
}
// 'p' + node (id) -> parent (id)
func parentKey(key []byte, child Node) []byte {
key[0] = 'p'
binary.LittleEndian.PutUint64(key[1:], child)
return key[:9]
}
// 'm' + node (id) -> serialized meta
func metaKey(key []byte, child Node) []byte {
key[0] = 'm'
binary.LittleEndian.PutUint64(key[1:], child)
return key[:9]
}
// 'c' + parent (id) + child (id) -> 0/1
func childrenKey(key []byte, child, parent Node) []byte {
key[0] = 'c'
binary.LittleEndian.PutUint64(key[1:], parent)
binary.LittleEndian.PutUint64(key[9:], child)
return key[:17]
}
// 'i' + attribute name (string) + attribute value (string) + parent (id) + node (id) -> 0/1
func internalKey(key []byte, k, v string, parent, node Node) []byte {
size := 1 /* prefix */ + 2*2 /* len */ + 2*8 /* nodes */ + len(k) + len(v)
if cap(key) < size {
key = make([]byte, 0, size)
}
key = key[:0]
key = append(key, 'i')
l := len(k)
key = append(key, byte(l), byte(l>>8))
key = append(key, k...)
l = len(v)
key = append(key, byte(l), byte(l>>8))
key = append(key, v...)
var raw [8]byte
binary.LittleEndian.PutUint64(raw[:], parent)
key = append(key, raw[:]...)
binary.LittleEndian.PutUint64(raw[:], node)
key = append(key, raw[:]...)
return key
}
func toUint64(x uint64) []byte {
var a [8]byte
binary.LittleEndian.PutUint64(a[:], x)
return a[:]
}
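getLatestTimestamp above relies on nextTimestamp producing a strictly larger value that stays congruent to the node's position modulo the network size, which is what keeps timestamps drawn by different nodes collision-free. nextTimestamp itself is outside this diff; a plausible implementation under that invariant (an assumption, the in-tree version may differ):

package main

import "fmt"

// nextTimestamp is a sketch of the invariant the code above relies on:
// return the smallest value strictly greater than ts that is congruent
// to pos modulo size, so each of the `size` nodes draws from a disjoint
// residue class and timestamps never collide.
func nextTimestamp(ts, pos, size uint64) uint64 {
	rem := ts % size
	if rem < pos {
		return ts - rem + pos
	}
	return ts - rem + pos + size
}

func main() {
	// A node at position 1 of 3 always produces t with t%3 == 1.
	fmt.Println(nextTimestamp(0, 1, 3), nextTimestamp(4, 1, 3)) // 1 7
}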


@@ -0,0 +1,185 @@
package pilorama
import (
"sort"
cidSDK "github.com/nspcc-dev/neofs-sdk-go/container/id"
)
// memoryForest represents multiple replicating trees sharing a single storage.
type memoryForest struct {
// treeMap maps tree identifier (container ID + name) to the replicated log.
treeMap map[string]*state
}
var _ Forest = (*memoryForest)(nil)
// NewMemoryForest creates new empty forest.
// TODO: this function will eventually be removed and is here for debugging.
func NewMemoryForest() ForestStorage {
return &memoryForest{
treeMap: make(map[string]*state),
}
}
// TreeMove implements the Forest interface.
func (f *memoryForest) TreeMove(d CIDDescriptor, treeID string, op *Move) (*LogMove, error) {
if !d.checkValid() {
return nil, ErrInvalidCIDDescriptor
}
fullID := d.CID.String() + "/" + treeID
s, ok := f.treeMap[fullID]
if !ok {
s = newState()
f.treeMap[fullID] = s
}
op.Time = s.timestamp(d.Position, d.Size)
if op.Child == RootID {
op.Child = s.findSpareID()
}
lm := s.do(op)
s.operations = append(s.operations, lm)
return &lm, nil
}
// TreeAddByPath implements the Forest interface.
func (f *memoryForest) TreeAddByPath(d CIDDescriptor, treeID string, attr string, path []string, m []KeyValue) ([]LogMove, error) {
if !d.checkValid() {
return nil, ErrInvalidCIDDescriptor
}
if !isAttributeInternal(attr) {
return nil, ErrNotPathAttribute
}
fullID := d.CID.String() + "/" + treeID
s, ok := f.treeMap[fullID]
if !ok {
s = newState()
f.treeMap[fullID] = s
}
i, node := s.getPathPrefix(attr, path)
lm := make([]LogMove, len(path)-i+1)
for j := i; j < len(path); j++ {
lm[j-i] = s.do(&Move{
Parent: node,
Meta: Meta{
Time: s.timestamp(d.Position, d.Size),
Items: []KeyValue{{Key: attr, Value: []byte(path[j])}}},
Child: s.findSpareID(),
})
node = lm[j-i].Child
s.operations = append(s.operations, lm[j-i])
}
mCopy := make([]KeyValue, len(m))
copy(mCopy, m)
lm[len(lm)-1] = s.do(&Move{
Parent: node,
Meta: Meta{
Time: s.timestamp(d.Position, d.Size),
Items: mCopy,
},
Child: s.findSpareID(),
})
return lm, nil
}
// TreeApply implements the Forest interface.
func (f *memoryForest) TreeApply(d CIDDescriptor, treeID string, op []Move) error {
if !d.checkValid() {
return ErrInvalidCIDDescriptor
}
fullID := d.CID.String() + "/" + treeID
s, ok := f.treeMap[fullID]
if !ok {
s = newState()
f.treeMap[fullID] = s
}
for i := range op {
err := s.Apply(&op[i])
if err != nil {
return err
}
}
return nil
}
func (f *memoryForest) Init() error {
return nil
}
func (f *memoryForest) Open() error {
return nil
}
func (f *memoryForest) Close() error {
return nil
}
// TreeGetByPath implements the Forest interface.
func (f *memoryForest) TreeGetByPath(cid cidSDK.ID, treeID string, attr string, path []string, latest bool) ([]Node, error) {
if !isAttributeInternal(attr) {
return nil, ErrNotPathAttribute
}
fullID := cid.String() + "/" + treeID
s, ok := f.treeMap[fullID]
if !ok {
return nil, ErrTreeNotFound
}
return s.get(attr, path, latest), nil
}
// TreeGetMeta implements the Forest interface.
func (f *memoryForest) TreeGetMeta(cid cidSDK.ID, treeID string, nodeID Node) (Meta, Node, error) {
fullID := cid.String() + "/" + treeID
s, ok := f.treeMap[fullID]
if !ok {
return Meta{}, 0, ErrTreeNotFound
}
return s.getMeta(nodeID), s.infoMap[nodeID].Parent, nil
}
// TreeGetChildren implements the Forest interface.
func (f *memoryForest) TreeGetChildren(cid cidSDK.ID, treeID string, nodeID Node) ([]uint64, error) {
fullID := cid.String() + "/" + treeID
s, ok := f.treeMap[fullID]
if !ok {
return nil, ErrTreeNotFound
}
children, ok := s.childMap[nodeID]
if !ok {
return nil, nil
}
res := make([]Node, len(children))
copy(res, children)
return res, nil
}
// TreeGetOpLog implements the Forest interface.
func (f *memoryForest) TreeGetOpLog(cid cidSDK.ID, treeID string, height uint64) (Move, error) {
fullID := cid.String() + "/" + treeID
s, ok := f.treeMap[fullID]
if !ok {
return Move{}, ErrTreeNotFound
}
n := sort.Search(len(s.operations), func(i int) bool {
return s.operations[i].Time >= height
})
if n == len(s.operations) {
return Move{}, nil
}
return s.operations[n].Move, nil
}
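TreeGetOpLog above leans on sort.Search, which returns the smallest index in [0, n) for which the predicate is true, or n when there is none; with operations sorted by time this yields the first operation at or above the requested height. A minimal demonstration:

package main

import (
	"fmt"
	"sort"
)

func main() {
	times := []uint64{1, 4, 4, 9} // operation timestamps, sorted
	height := uint64(5)

	// Smallest index with times[i] >= height; len(times) when no such op.
	n := sort.Search(len(times), func(i int) bool { return times[i] >= height })
	if n == len(times) {
		fmt.Println("no operation at or above height", height)
		return
	}
	fmt.Println("first op index:", n, "time:", times[n]) // first op index: 3 time: 9
}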


@@ -0,0 +1,690 @@
package pilorama
import (
"math/rand"
"os"
"path/filepath"
"strconv"
"testing"
cidSDK "github.com/nspcc-dev/neofs-sdk-go/container/id"
cidtest "github.com/nspcc-dev/neofs-sdk-go/container/id/test"
"github.com/stretchr/testify/require"
)
var providers = []struct {
name string
construct func(t testing.TB) Forest
}{
{"inmemory", func(t testing.TB) Forest {
f := NewMemoryForest()
require.NoError(t, f.Init())
require.NoError(t, f.Open())
t.Cleanup(func() {
require.NoError(t, f.Close())
})
return f
}},
{"bbolt", func(t testing.TB) Forest {
// Use `os.TempDir` because we construct multiple times in the same test.
tmpDir, err := os.MkdirTemp(os.TempDir(), "*")
require.NoError(t, err)
f := NewBoltForest(WithPath(filepath.Join(tmpDir, "test.db")))
require.NoError(t, f.Init())
require.NoError(t, f.Open())
t.Cleanup(func() {
require.NoError(t, f.Close())
require.NoError(t, os.RemoveAll(tmpDir))
})
return f
}},
}
func testMeta(t *testing.T, f Forest, cid cidSDK.ID, treeID string, nodeID, parentID Node, expected Meta) {
actualMeta, actualParent, err := f.TreeGetMeta(cid, treeID, nodeID)
require.NoError(t, err)
require.Equal(t, parentID, actualParent)
require.Equal(t, expected, actualMeta)
}
func TestForest_TreeMove(t *testing.T) {
for i := range providers {
t.Run(providers[i].name, func(t *testing.T) {
testForestTreeMove(t, providers[i].construct(t))
})
}
}
func testForestTreeMove(t *testing.T, s Forest) {
cid := cidtest.ID()
d := CIDDescriptor{cid, 0, 1}
treeID := "version"
meta := []KeyValue{
{Key: AttributeVersion, Value: []byte("XXX")},
{Key: AttributeFilename, Value: []byte("file.txt")}}
lm, err := s.TreeAddByPath(d, treeID, AttributeFilename, []string{"path", "to"}, meta)
require.NoError(t, err)
require.Equal(t, 3, len(lm))
nodeID := lm[2].Child
t.Run("invalid descriptor", func(t *testing.T) {
_, err = s.TreeMove(CIDDescriptor{cid, 0, 0}, treeID, &Move{
Parent: lm[1].Child,
Meta: Meta{Items: append(meta, KeyValue{Key: "NewKey", Value: []byte("NewValue")})},
Child: nodeID,
})
require.ErrorIs(t, err, ErrInvalidCIDDescriptor)
})
t.Run("same parent, update meta", func(t *testing.T) {
res, err := s.TreeMove(d, treeID, &Move{
Parent: lm[1].Child,
Meta: Meta{Items: append(meta, KeyValue{Key: "NewKey", Value: []byte("NewValue")})},
Child: nodeID,
})
require.NoError(t, err)
require.Equal(t, res.Child, nodeID)
nodes, err := s.TreeGetByPath(cid, treeID, AttributeFilename, []string{"path", "to", "file.txt"}, false)
require.NoError(t, err)
require.ElementsMatch(t, []Node{nodeID}, nodes)
})
t.Run("different parent", func(t *testing.T) {
res, err := s.TreeMove(d, treeID, &Move{
Parent: RootID,
Meta: Meta{Items: append(meta, KeyValue{Key: "NewKey", Value: []byte("NewValue")})},
Child: nodeID,
})
require.NoError(t, err)
require.Equal(t, res.Child, nodeID)
nodes, err := s.TreeGetByPath(cid, treeID, AttributeFilename, []string{"path", "to", "file.txt"}, false)
require.NoError(t, err)
require.True(t, len(nodes) == 0)
nodes, err = s.TreeGetByPath(cid, treeID, AttributeFilename, []string{"file.txt"}, false)
require.NoError(t, err)
require.ElementsMatch(t, []Node{nodeID}, nodes)
})
}
func TestMemoryForest_TreeGetChildren(t *testing.T) {
for i := range providers {
t.Run(providers[i].name, func(t *testing.T) {
testForestTreeGetChildren(t, providers[i].construct(t))
})
}
}
func testForestTreeGetChildren(t *testing.T, s Forest) {
cid := cidtest.ID()
d := CIDDescriptor{cid, 0, 1}
treeID := "version"
treeAdd := func(t *testing.T, child, parent Node) {
_, err := s.TreeMove(d, treeID, &Move{
Parent: parent,
Child: child,
})
require.NoError(t, err)
}
// 0
// |- 10
// | |- 3
// | |- 6
// | |- 11
// |- 2
// |- 7
treeAdd(t, 10, 0)
treeAdd(t, 3, 10)
treeAdd(t, 6, 10)
treeAdd(t, 11, 6)
treeAdd(t, 2, 0)
treeAdd(t, 7, 0)
testGetChildren := func(t *testing.T, nodeID Node, expected []Node) {
actual, err := s.TreeGetChildren(cid, treeID, nodeID)
require.NoError(t, err)
require.ElementsMatch(t, expected, actual)
}
testGetChildren(t, 0, []uint64{10, 2, 7})
testGetChildren(t, 10, []uint64{3, 6})
testGetChildren(t, 3, nil)
testGetChildren(t, 6, []uint64{11})
testGetChildren(t, 11, nil)
testGetChildren(t, 2, nil)
testGetChildren(t, 7, nil)
t.Run("missing node", func(t *testing.T) {
testGetChildren(t, 42, nil)
})
t.Run("missing tree", func(t *testing.T) {
_, err := s.TreeGetChildren(cid, treeID+"123", 0)
require.ErrorIs(t, err, ErrTreeNotFound)
})
}
func TestForest_TreeAdd(t *testing.T) {
for i := range providers {
t.Run(providers[i].name, func(t *testing.T) {
testForestTreeAdd(t, providers[i].construct(t))
})
}
}
func testForestTreeAdd(t *testing.T, s Forest) {
cid := cidtest.ID()
d := CIDDescriptor{cid, 0, 1}
treeID := "version"
meta := []KeyValue{
{Key: AttributeVersion, Value: []byte("XXX")},
{Key: AttributeFilename, Value: []byte("file.txt")}}
m := &Move{
Parent: RootID,
Child: RootID,
Meta: Meta{Items: meta},
}
t.Run("invalid descriptor", func(t *testing.T) {
_, err := s.TreeMove(CIDDescriptor{cid, 0, 0}, treeID, m)
require.ErrorIs(t, err, ErrInvalidCIDDescriptor)
})
lm, err := s.TreeMove(d, treeID, m)
require.NoError(t, err)
testMeta(t, s, cid, treeID, lm.Child, lm.Parent, Meta{Time: lm.Time, Items: meta})
nodes, err := s.TreeGetByPath(cid, treeID, AttributeFilename, []string{"file.txt"}, false)
require.NoError(t, err)
require.ElementsMatch(t, []Node{lm.Child}, nodes)
t.Run("other trees are unaffected", func(t *testing.T) {
_, err := s.TreeGetByPath(cid, treeID+"123", AttributeFilename, []string{"file.txt"}, false)
require.ErrorIs(t, err, ErrTreeNotFound)
_, _, err = s.TreeGetMeta(cid, treeID+"123", 0)
require.ErrorIs(t, err, ErrTreeNotFound)
})
}
func TestForest_TreeAddByPath(t *testing.T) {
for i := range providers {
t.Run(providers[i].name, func(t *testing.T) {
testForestTreeAddByPath(t, providers[i].construct(t))
})
}
}
func testForestTreeAddByPath(t *testing.T, s Forest) {
cid := cidtest.ID()
d := CIDDescriptor{cid, 0, 1}
treeID := "version"
meta := []KeyValue{
{Key: AttributeVersion, Value: []byte("XXX")},
{Key: AttributeFilename, Value: []byte("file.txt")}}
t.Run("invalid descriptor", func(t *testing.T) {
_, err := s.TreeAddByPath(CIDDescriptor{cid, 0, 0}, treeID, AttributeFilename, []string{"yyy"}, meta)
require.ErrorIs(t, err, ErrInvalidCIDDescriptor)
})
t.Run("invalid attribute", func(t *testing.T) {
_, err := s.TreeAddByPath(d, treeID, AttributeVersion, []string{"yyy"}, meta)
require.ErrorIs(t, err, ErrNotPathAttribute)
})
lm, err := s.TreeAddByPath(d, treeID, AttributeFilename, []string{"path", "to"}, meta)
require.NoError(t, err)
require.Equal(t, 3, len(lm))
testMeta(t, s, cid, treeID, lm[0].Child, lm[0].Parent, Meta{Time: lm[0].Time, Items: []KeyValue{{AttributeFilename, []byte("path")}}})
testMeta(t, s, cid, treeID, lm[1].Child, lm[1].Parent, Meta{Time: lm[1].Time, Items: []KeyValue{{AttributeFilename, []byte("to")}}})
firstID := lm[2].Child
testMeta(t, s, cid, treeID, firstID, lm[2].Parent, Meta{Time: lm[2].Time, Items: meta})
meta[0].Value = []byte("YYY")
lm, err = s.TreeAddByPath(d, treeID, AttributeFilename, []string{"path", "to"}, meta)
require.NoError(t, err)
require.Equal(t, 1, len(lm))
secondID := lm[0].Child
testMeta(t, s, cid, treeID, secondID, lm[0].Parent, Meta{Time: lm[0].Time, Items: meta})
t.Run("get versions", func(t *testing.T) {
// All versions.
nodes, err := s.TreeGetByPath(cid, treeID, AttributeFilename, []string{"path", "to", "file.txt"}, false)
require.NoError(t, err)
require.ElementsMatch(t, []Node{firstID, secondID}, nodes)
// Latest version.
nodes, err = s.TreeGetByPath(cid, treeID, AttributeFilename, []string{"path", "to", "file.txt"}, true)
require.NoError(t, err)
require.Equal(t, []Node{secondID}, nodes)
})
meta[0].Value = []byte("ZZZ")
meta[1].Value = []byte("cat.jpg")
lm, err = s.TreeAddByPath(d, treeID, AttributeFilename, []string{"path", "dir"}, meta)
require.NoError(t, err)
require.Equal(t, 2, len(lm))
testMeta(t, s, cid, treeID, lm[0].Child, lm[0].Parent, Meta{Time: lm[0].Time, Items: []KeyValue{{AttributeFilename, []byte("dir")}}})
testMeta(t, s, cid, treeID, lm[1].Child, lm[1].Parent, Meta{Time: lm[1].Time, Items: meta})
t.Run("create internal nodes", func(t *testing.T) {
meta[0].Value = []byte("SomeValue")
meta[1].Value = []byte("another")
lm, err = s.TreeAddByPath(d, treeID, AttributeFilename, []string{"path"}, meta)
require.NoError(t, err)
require.Equal(t, 1, len(lm))
oldMove := lm[0]
meta[0].Value = []byte("Leaf")
meta[1].Value = []byte("file.txt")
lm, err = s.TreeAddByPath(d, treeID, AttributeFilename, []string{"path", "another"}, meta)
require.NoError(t, err)
require.Equal(t, 2, len(lm))
testMeta(t, s, cid, treeID, lm[0].Child, lm[0].Parent,
Meta{Time: lm[0].Time, Items: []KeyValue{{AttributeFilename, []byte("another")}}})
testMeta(t, s, cid, treeID, lm[1].Child, lm[1].Parent, Meta{Time: lm[1].Time, Items: meta})
require.NotEqual(t, lm[0].Child, oldMove.Child)
testMeta(t, s, cid, treeID, oldMove.Child, oldMove.Parent,
Meta{Time: oldMove.Time, Items: []KeyValue{
{AttributeVersion, []byte("SomeValue")},
{AttributeFilename, []byte("another")}}})
t.Run("get by path", func(t *testing.T) {
nodes, err := s.TreeGetByPath(cid, treeID, AttributeFilename, []string{"path", "another"}, false)
require.NoError(t, err)
require.Equal(t, 2, len(nodes))
require.ElementsMatch(t, []Node{lm[0].Child, oldMove.Child}, nodes)
nodes, err = s.TreeGetByPath(cid, treeID, AttributeFilename, []string{"path", "another", "file.txt"}, false)
require.NoError(t, err)
require.Equal(t, 1, len(nodes))
require.Equal(t, lm[1].Child, nodes[0])
})
})
}
func TestForest_Apply(t *testing.T) {
for i := range providers {
t.Run(providers[i].name, func(t *testing.T) {
testForestTreeApply(t, providers[i].construct)
})
}
}
func testForestTreeApply(t *testing.T, constructor func(t testing.TB) Forest) {
cid := cidtest.ID()
d := CIDDescriptor{cid, 0, 1}
treeID := "version"
t.Run("invalid descriptor", func(t *testing.T) {
s := constructor(t)
err := s.TreeApply(CIDDescriptor{cid, 0, 0}, treeID, []Move{{
Child: 10,
Parent: 0,
Meta: Meta{Time: 1, Items: []KeyValue{{"grand", []byte{1}}}},
}})
require.ErrorIs(t, err, ErrInvalidCIDDescriptor)
})
testApply := func(t *testing.T, s Forest, child, parent Node, meta Meta) {
require.NoError(t, s.TreeApply(d, treeID, []Move{{
Child: child,
Parent: parent,
Meta: meta,
}}))
}
t.Run("add a child, then insert a parent removal", func(t *testing.T) {
s := constructor(t)
testApply(t, s, 10, 0, Meta{Time: 1, Items: []KeyValue{{"grand", []byte{1}}}})
meta := Meta{Time: 3, Items: []KeyValue{{"child", []byte{3}}}}
testApply(t, s, 11, 10, meta)
testMeta(t, s, cid, treeID, 11, 10, meta)
testApply(t, s, 10, TrashID, Meta{Time: 2, Items: []KeyValue{{"parent", []byte{2}}}})
testMeta(t, s, cid, treeID, 11, 10, meta)
})
t.Run("add a child to non-existent parent, then add a parent", func(t *testing.T) {
s := constructor(t)
meta := Meta{Time: 1, Items: []KeyValue{{"child", []byte{3}}}}
testApply(t, s, 11, 10, meta)
testMeta(t, s, cid, treeID, 11, 10, meta)
testApply(t, s, 10, 0, Meta{Time: 2, Items: []KeyValue{{"grand", []byte{1}}}})
testMeta(t, s, cid, treeID, 11, 10, meta)
})
}
func TestForest_GetOpLog(t *testing.T) {
for i := range providers {
t.Run(providers[i].name, func(t *testing.T) {
testForestTreeGetOpLog(t, providers[i].construct)
})
}
}
func testForestTreeGetOpLog(t *testing.T, constructor func(t testing.TB) Forest) {
cid := cidtest.ID()
d := CIDDescriptor{cid, 0, 1}
treeID := "version"
logs := []Move{
{
Meta: Meta{Time: 4, Items: []KeyValue{{"grand", []byte{1}}}},
Child: 1,
},
{
Meta: Meta{Time: 5, Items: []KeyValue{{"second", []byte{1, 2, 3}}}},
Child: 4,
},
{
Parent: 10,
Meta: Meta{Time: 256 + 4, Items: []KeyValue{}}, // make sure keys are big-endian
Child: 11,
},
}
s := constructor(t)
t.Run("empty log, no panic", func(t *testing.T) {
_, err := s.TreeGetOpLog(cid, treeID, 0)
require.ErrorIs(t, err, ErrTreeNotFound)
})
for i := range logs {
require.NoError(t, s.TreeApply(d, treeID, logs[i:i+1]))
}
testGetOpLog := func(t *testing.T, height uint64, m Move) {
lm, err := s.TreeGetOpLog(cid, treeID, height)
require.NoError(t, err)
require.Equal(t, m, lm)
}
testGetOpLog(t, 0, logs[0])
testGetOpLog(t, 4, logs[0])
testGetOpLog(t, 5, logs[1])
testGetOpLog(t, 6, logs[2])
testGetOpLog(t, 260, logs[2])
t.Run("missing entry", func(t *testing.T) {
testGetOpLog(t, 261, Move{})
})
t.Run("missing tree", func(t *testing.T) {
_, err := s.TreeGetOpLog(cid, treeID+"123", 4)
require.ErrorIs(t, err, ErrTreeNotFound)
})
}
func TestForest_ApplyRandom(t *testing.T) {
for i := range providers {
t.Run(providers[i].name, func(t *testing.T) {
testForestTreeApplyRandom(t, providers[i].construct)
})
}
}
func testForestTreeApplyRandom(t *testing.T, constructor func(t testing.TB) Forest) {
rand.Seed(42)
const (
nodeCount = 4
opCount = 10
iterCount = 100
)
cid := cidtest.ID()
d := CIDDescriptor{cid, 0, 1}
treeID := "version"
expected := constructor(t)
ops := make([]Move, nodeCount+opCount)
for i := 0; i < nodeCount; i++ {
ops[i] = Move{
Parent: 0,
Meta: Meta{
Time: Timestamp(i),
Items: []KeyValue{
{Key: AttributeFilename, Value: []byte(strconv.Itoa(i))},
{Value: make([]byte, 10)},
},
},
Child: uint64(i) + 1,
}
rand.Read(ops[i].Meta.Items[1].Value)
}
for i := nodeCount; i < len(ops); i++ {
ops[i] = Move{
Parent: rand.Uint64() % (nodeCount + 1),
Meta: Meta{
Time: Timestamp(i + nodeCount),
Items: []KeyValue{
{Key: AttributeFilename, Value: []byte(strconv.Itoa(i))},
{Value: make([]byte, 10)},
},
},
Child: rand.Uint64() % (nodeCount + 1),
}
if rand.Uint32()%5 == 0 {
ops[i].Parent = TrashID
}
rand.Read(ops[i].Meta.Items[1].Value)
}
for i := range ops {
require.NoError(t, expected.TreeApply(d, treeID, ops[i:i+1]))
}
for i := 0; i < iterCount; i++ {
// Shuffle random operations, leave initialization in place.
rand.Shuffle(len(ops)-nodeCount, func(i, j int) { ops[i+nodeCount], ops[j+nodeCount] = ops[j+nodeCount], ops[i+nodeCount] })
actual := constructor(t)
for i := range ops {
require.NoError(t, actual.TreeApply(d, treeID, ops[i:i+1]))
}
for i := uint64(0); i < nodeCount; i++ {
expectedMeta, expectedParent, err := expected.TreeGetMeta(cid, treeID, i)
require.NoError(t, err)
actualMeta, actualParent, err := actual.TreeGetMeta(cid, treeID, i)
require.NoError(t, err)
require.Equal(t, expectedParent, actualParent, "node id: %d", i)
require.Equal(t, expectedMeta, actualMeta, "node id: %d", i)
if _, ok := actual.(*memoryForest); ok {
require.Equal(t, expected, actual, i)
}
}
}
}
const benchNodeCount = 1000
func BenchmarkApplySequential(b *testing.B) {
for i := range providers {
if providers[i].name == "inmemory" { // memory backend is not thread-safe
continue
}
b.Run(providers[i].name, func(b *testing.B) {
for _, bs := range []int{1, 2, 4} {
b.Run("batchsize="+strconv.Itoa(bs), func(b *testing.B) {
benchmarkApply(b, providers[i].construct(b), bs, func(opCount int) []Move {
ops := make([]Move, opCount)
for i := range ops {
ops[i] = Move{
Parent: uint64(rand.Intn(benchNodeCount)),
Meta: Meta{
Time: Timestamp(i),
Items: []KeyValue{{Value: []byte{0, 1, 2, 3, 4}}},
},
Child: uint64(rand.Intn(benchNodeCount)),
}
}
return ops
})
})
}
})
}
}
func BenchmarkApplyReorderLast(b *testing.B) {
// Group operations in blocks of 10, order the blocks by increasing timestamp,
// and reverse the order of operations within each block.
const blockSize = 10
for i := range providers {
if providers[i].name == "inmemory" { // memory backend is not thread-safe
continue
}
b.Run(providers[i].name, func(b *testing.B) {
for _, bs := range []int{1, 2, 4} {
b.Run("batchsize="+strconv.Itoa(bs), func(b *testing.B) {
benchmarkApply(b, providers[i].construct(b), bs, func(opCount int) []Move {
ops := make([]Move, opCount)
for i := range ops {
ops[i] = Move{
Parent: uint64(rand.Intn(benchNodeCount)),
Meta: Meta{
Time: Timestamp(i),
Items: []KeyValue{{Value: []byte{0, 1, 2, 3, 4}}},
},
Child: uint64(rand.Intn(benchNodeCount)),
}
if i != 0 && i%blockSize == 0 {
for j := 0; j < blockSize/2; j++ {
ops[i-j], ops[i+j-blockSize] = ops[i+j-blockSize], ops[i-j]
}
}
}
return ops
})
})
}
})
}
}
func benchmarkApply(b *testing.B, s Forest, batchSize int, genFunc func(int) []Move) {
rand.Seed(42)
ops := genFunc(b.N)
cid := cidtest.ID()
d := CIDDescriptor{cid, 0, 1}
treeID := "version"
ch := make(chan int, b.N)
for i := 0; i < b.N; i++ {
ch <- i
}
b.ResetTimer()
b.ReportAllocs()
b.SetParallelism(20)
b.RunParallel(func(pb *testing.PB) {
batch := make([]Move, 0, batchSize)
for pb.Next() {
batch = append(batch, ops[<-ch])
if len(batch) == batchSize {
if err := s.TreeApply(d, treeID, batch); err != nil {
b.Fatalf("error in `Apply`: %v", err)
}
batch = batch[:0]
}
}
if len(batch) > 0 {
if err := s.TreeApply(d, treeID, batch); err != nil {
b.Fatalf("error in `Apply`: %v", err)
}
}
})
}
func TestTreeGetByPath(t *testing.T) {
for i := range providers {
t.Run(providers[i].name, func(t *testing.T) {
testTreeGetByPath(t, providers[i].construct(t))
})
}
}
func testTreeGetByPath(t *testing.T, s Forest) {
cid := cidtest.ID()
d := CIDDescriptor{cid, 0, 1}
treeID := "version"
// /
// |- a (1)
// |- cat1.jpg, Version=TTT (3)
// |- b (2)
// |- cat1.jpg, Version=XXX (4)
// |- cat1.jpg, Version=YYY (5)
// |- cat2.jpg, Version=ZZZ (6)
testMove(t, s, 0, 1, 0, d, treeID, "a", "")
testMove(t, s, 1, 2, 0, d, treeID, "b", "")
testMove(t, s, 2, 3, 1, d, treeID, "cat1.jpg", "TTT")
testMove(t, s, 3, 4, 2, d, treeID, "cat1.jpg", "XXX")
testMove(t, s, 4, 5, 2, d, treeID, "cat1.jpg", "YYY")
testMove(t, s, 5, 6, 2, d, treeID, "cat2.jpg", "ZZZ")
if mf, ok := s.(*memoryForest); ok {
single := mf.treeMap[cid.String()+"/"+treeID]
t.Run("test meta", func(t *testing.T) {
for i := 0; i < 6; i++ {
require.Equal(t, uint64(i), single.infoMap[Node(i+1)].Timestamp)
}
})
}
t.Run("invalid attribute", func(t *testing.T) {
_, err := s.TreeGetByPath(cid, treeID, AttributeVersion, []string{"", "TTT"}, false)
require.ErrorIs(t, err, ErrNotPathAttribute)
})
nodes, err := s.TreeGetByPath(cid, treeID, AttributeFilename, []string{"b", "cat1.jpg"}, false)
require.NoError(t, err)
require.Equal(t, []Node{4, 5}, nodes)
nodes, err = s.TreeGetByPath(cid, treeID, AttributeFilename, []string{"a", "cat1.jpg"}, false)
require.NoError(t, err)
require.Equal(t, []Node{3}, nodes)
t.Run("missing child", func(t *testing.T) {
nodes, err = s.TreeGetByPath(cid, treeID, AttributeFilename, []string{"a", "cat3.jpg"}, false)
require.Empty(t, nodes)
})
t.Run("missing parent", func(t *testing.T) {
nodes, err = s.TreeGetByPath(cid, treeID, AttributeFilename, []string{"xyz", "cat1.jpg"}, false)
require.Empty(t, nodes)
})
t.Run("empty path", func(t *testing.T) {
nodes, err = s.TreeGetByPath(cid, treeID, AttributeFilename, nil, false)
require.Empty(t, nodes)
})
}
func testMove(t *testing.T, s Forest, ts int, node, parent Node, d CIDDescriptor, treeID, filename, version string) {
items := make([]KeyValue, 1, 2)
items[0] = KeyValue{AttributeFilename, []byte(filename)}
if version != "" {
items = append(items, KeyValue{AttributeVersion, []byte(version)})
}
require.NoError(t, s.TreeApply(d, treeID, []Move{{
Parent: parent,
Child: node,
Meta: Meta{
Time: uint64(ts),
Items: items,
},
}}))
}

View file

@ -0,0 +1,24 @@
package pilorama
// Info groups the information about the pilorama.
type Info struct {
// Path contains the path to the root directory of the pilorama.
Path string
// Backend is the pilorama storage type. Either "boltdb" or "memory".
Backend string
}
// DumpInfo implements the ForestStorage interface.
func (t *boltForest) DumpInfo() Info {
return Info{
Path: t.path,
Backend: "boltdb",
}
}
// DumpInfo implements the ForestStorage interface.
func (f *memoryForest) DumpInfo() Info {
return Info{
Backend: "memory",
}
}

View file

@ -0,0 +1,224 @@
package pilorama
// nodeInfo couples parent and metadata.
type nodeInfo struct {
Parent Node
Meta Meta
Timestamp Timestamp
}
// state represents state being replicated.
type state struct {
operations []LogMove
tree
}
// newState constructs new empty tree.
func newState() *state {
return &state{
tree: *newTree(),
}
}
// undo reverts op and changes s in-place.
func (s *state) undo(op *LogMove) {
children := s.tree.childMap[op.Parent]
for i := range children {
if children[i] == op.Child {
if len(children) > 1 {
s.tree.childMap[op.Parent] = append(children[:i], children[i+1:]...)
} else {
delete(s.tree.childMap, op.Parent)
}
break
}
}
if op.HasOld {
s.tree.infoMap[op.Child] = op.Old
oldChildren := s.tree.childMap[op.Old.Parent]
for i := range oldChildren {
if oldChildren[i] == op.Child {
return
}
}
s.tree.childMap[op.Old.Parent] = append(oldChildren, op.Child)
} else {
delete(s.tree.infoMap, op.Child)
}
}
// Apply inserts op into the log at the proper position, re-applies all
// subsequent operations from the log, and changes s in-place.
func (s *state) Apply(op *Move) error {
var index int
for index = len(s.operations); index > 0; index-- {
if s.operations[index-1].Time <= op.Time {
break
}
}
if index == len(s.operations) {
s.operations = append(s.operations, s.do(op))
return nil
}
s.operations = append(s.operations[:index+1], s.operations[index:]...)
for i := len(s.operations) - 1; i > index; i-- {
s.undo(&s.operations[i])
}
s.operations[index] = s.do(op)
for i := index + 1; i < len(s.operations); i++ {
s.operations[i] = s.do(&s.operations[i].Move)
}
return nil
}
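// A hedged sketch of how out-of-order delivery plays out (hypothetical
// values; only names defined in this package are used): the late op is
// spliced into the log, later entries are undone and then re-applied.
//
//	s := newState()
//	_ = s.Apply(&Move{Parent: RootID, Child: 1, Meta: Meta{Time: 1}})
//	_ = s.Apply(&Move{Parent: 1, Child: 2, Meta: Meta{Time: 3}})
//	// Arrives late: the Time==3 op is undone, this op is inserted,
//	// then the Time==3 op is re-applied on top.
//	_ = s.Apply(&Move{Parent: RootID, Child: 3, Meta: Meta{Time: 2}})
//	// s.operations is now ordered by Meta.Time: 1, 2, 3.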
// do performs a single move operation on a tree.
func (s *state) do(op *Move) LogMove {
lm := LogMove{
Move: Move{
Parent: op.Parent,
Meta: op.Meta,
Child: op.Child,
},
}
shouldPut := !s.tree.isAncestor(op.Child, op.Parent)
p, ok := s.tree.infoMap[op.Child]
if ok {
lm.HasOld = true
lm.Old = p
}
if !shouldPut {
return lm
}
if !ok {
p.Timestamp = op.Time
} else {
s.removeChild(op.Child, p.Parent)
}
p.Meta = op.Meta
p.Parent = op.Parent
s.tree.infoMap[op.Child] = p
s.tree.childMap[op.Parent] = append(s.tree.childMap[op.Parent], op.Child)
return lm
}
func (s *state) removeChild(child, parent Node) {
oldChildren := s.tree.childMap[parent]
for i := range oldChildren {
if oldChildren[i] == child {
s.tree.childMap[parent] = append(oldChildren[:i], oldChildren[i+1:]...)
break
}
}
}
func (s *state) timestamp(pos, size int) Timestamp {
if len(s.operations) == 0 {
return nextTimestamp(0, uint64(pos), uint64(size))
}
return nextTimestamp(s.operations[len(s.operations)-1].Time, uint64(pos), uint64(size))
}
func (s *state) findSpareID() Node {
id := uint64(1)
for _, ok := s.infoMap[id]; ok; _, ok = s.infoMap[id] {
id++
}
return id
}
// tree is a mapping from the child nodes to their parent and metadata.
type tree struct {
infoMap map[Node]nodeInfo
childMap map[Node][]Node
}
func newTree() *tree {
return &tree{
childMap: make(map[Node][]Node),
infoMap: make(map[Node]nodeInfo),
}
}
// isAncestor returns true if parent is an ancestor of child.
// For convenience, it also returns true if parent == child.
func (t tree) isAncestor(parent, child Node) bool {
for c := child; c != parent; {
p, ok := t.infoMap[c]
if !ok {
return false
}
c = p.Parent
}
return true
}
// getPathPrefix descends along the path constructed from the values of attr
// until there is no node corresponding to a path element. It returns the number
// of path elements processed and the ID of the last node.
func (t tree) getPathPrefix(attr string, path []string) (int, Node) {
var curNode Node
loop:
for i := range path {
children := t.childMap[curNode]
for j := range children {
meta := t.infoMap[children[j]].Meta
f := meta.GetAttr(attr)
if len(meta.Items) == 1 && string(f) == path[i] {
curNode = children[j]
continue loop
}
}
return i, curNode
}
return len(path), curNode
}
// get returns the list of nodes that have the specified path from the root,
// descending by the values of attr from meta.
func (t tree) get(attr string, path []string, latest bool) []Node {
if len(path) == 0 {
return nil
}
i, curNode := t.getPathPrefix(attr, path[:len(path)-1])
if i < len(path)-1 {
return nil
}
var nodes []Node
var lastTs Timestamp
children := t.childMap[curNode]
for i := range children {
info := t.infoMap[children[i]]
fileName := string(info.Meta.GetAttr(attr))
if fileName == path[len(path)-1] {
if latest {
if info.Timestamp >= lastTs {
lastTs = info.Timestamp
nodes = append(nodes[:0], children[i])
}
} else {
nodes = append(nodes, children[i])
}
}
}
return nodes
}
// getMeta returns meta information of node n.
func (t tree) getMeta(n Node) Meta {
return t.infoMap[n].Meta
}

View file

@ -0,0 +1,66 @@
package pilorama
import (
"errors"
cidSDK "github.com/nspcc-dev/neofs-sdk-go/container/id"
)
// Forest represents a CRDT tree.
type Forest interface {
// TreeMove moves a node within the tree.
// If the parent of the move operation is TrashID, the node is removed.
// If the child of the move operation is RootID, a new ID is generated and the node is added to the tree.
TreeMove(d CIDDescriptor, treeID string, m *Move) (*LogMove, error)
// TreeAddByPath adds a new node to the tree using the provided path.
// The path is constructed by descending from the root using the values of attr in meta.
// Internal nodes in the path should have exactly one attribute, otherwise a new node is created.
TreeAddByPath(d CIDDescriptor, treeID string, attr string, path []string, meta []KeyValue) ([]LogMove, error)
// TreeApply applies replicated operation from another node.
TreeApply(d CIDDescriptor, treeID string, m []Move) error
// TreeGetByPath returns all nodes corresponding to the path.
// The path is constructed by descending from the root using the values of the
// AttributeFilename in meta.
// The last argument determines whether only the node with the latest timestamp is returned.
// Should return ErrTreeNotFound if the tree is not found, and empty result if the path is not in the tree.
TreeGetByPath(cid cidSDK.ID, treeID string, attr string, path []string, latest bool) ([]Node, error)
// TreeGetMeta returns meta information of the node with the specified ID.
// Should return ErrTreeNotFound if the tree is not found, and empty result if the node is not in the tree.
TreeGetMeta(cid cidSDK.ID, treeID string, nodeID Node) (Meta, Node, error)
// TreeGetChildren returns children of the node with the specified ID. The order is arbitrary.
// Should return ErrTreeNotFound if the tree is not found, and empty result if the node is not in the tree.
TreeGetChildren(cid cidSDK.ID, treeID string, nodeID Node) ([]uint64, error)
// TreeGetOpLog returns the first log operation stored at or above the height.
// If no such operation is found, an empty Move and a nil error should be returned.
TreeGetOpLog(cid cidSDK.ID, treeID string, height uint64) (Move, error)
}
type ForestStorage interface {
// DumpInfo returns information about the pilorama.
DumpInfo() Info
Init() error
Open() error
Close() error
Forest
}
const (
AttributeFilename = "FileName"
AttributeVersion = "Version"
)
// CIDDescriptor contains container ID and information about the node position
// in the list of container nodes.
type CIDDescriptor struct {
CID cidSDK.ID
Position int
Size int
}
// ErrInvalidCIDDescriptor is returned when info about the node position
// in the container is invalid.
var ErrInvalidCIDDescriptor = errors.New("cid descriptor is invalid")
func (d CIDDescriptor) checkValid() bool {
return 0 <= d.Position && d.Position < d.Size
}
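// A minimal usage sketch, assuming a bolt-backed forest (NewBoltForest and the
// pilorama options appear elsewhere in this changeset; error handling elided):
//
//	f := NewBoltForest(WithPath("/tmp/pilorama.db"))
//	_ = f.Open()
//	_ = f.Init()
//	defer f.Close()
//	cnr := cidtest.ID() // test helper; any container ID works
//	d := CIDDescriptor{CID: cnr, Position: 0, Size: 1}
//	meta := []KeyValue{{Key: AttributeFilename, Value: []byte("file.txt")}}
//	_, _ = f.TreeAddByPath(d, "version", AttributeFilename, []string{"path", "to"}, meta)
//	nodes, _ := f.TreeGetByPath(cnr, "version", AttributeFilename, []string{"path", "to", "file.txt"}, true)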

View file

@ -0,0 +1,86 @@
package pilorama
import "github.com/nspcc-dev/neo-go/pkg/io"
func (x *Meta) FromBytes(data []byte) error {
if len(data) == 0 {
x.Items = nil
x.Time = 0
return nil
}
r := io.NewBinReaderFromBuf(data)
x.DecodeBinary(r)
return r.Err
}
func (x Meta) Bytes() []byte {
w := io.NewBufBinWriter()
x.EncodeBinary(w.BinWriter)
return w.Bytes()
}
func (x Meta) GetAttr(name string) []byte {
for _, kv := range x.Items {
if kv.Key == name {
return kv.Value
}
}
return nil
}
// DecodeBinary implements the io.Serializable interface.
func (x *Meta) DecodeBinary(r *io.BinReader) {
ts := r.ReadVarUint()
size := r.ReadVarUint()
m := make([]KeyValue, size)
for i := range m {
m[i].Key = r.ReadString()
m[i].Value = r.ReadVarBytes()
}
if r.Err != nil {
return
}
x.Time = ts
x.Items = m
}
// EncodeBinary implements the io.Serializable interface.
func (x Meta) EncodeBinary(w *io.BinWriter) {
w.WriteVarUint(x.Time)
w.WriteVarUint(uint64(len(x.Items)))
for _, e := range x.Items {
w.WriteString(e.Key)
w.WriteVarBytes(e.Value)
}
}
// Size returns size of x in bytes.
func (x Meta) Size() int {
size := getVarIntSize(x.Time)
size += getVarIntSize(uint64(len(x.Items)))
for i := range x.Items {
ln := len(x.Items[i].Key)
size += getVarIntSize(uint64(ln)) + ln
ln = len(x.Items[i].Value)
size += getVarIntSize(uint64(ln)) + ln
}
return size
}
// getVarIntSize returns the size in bytes of a variable-length integer.
// (reference: GetVarSize(int value), https://github.com/neo-project/neo/blob/master/neo/IO/Helper.cs)
func getVarIntSize(value uint64) int {
var size int
if value < 0xFD {
size = 1 // uint8
} else if value <= 0xFFFF {
size = 3 // byte + uint16
} else {
size = 5 // byte + uint32
}
return size
}
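// A worked size example, using the Meta from TestMeta_Bytes in this package:
// Meta{Time: 123, Items: [{"abc", 3-byte value}, {"xyz", 4-byte value}]}
//
//	1                    // Time = 123 (< 0xFD)
//	+ 1                  // item count = 2
//	+ (1 + 3) + (1 + 3)  // "abc" key and its value
//	+ (1 + 3) + (1 + 4)  // "xyz" key and its value
//	= 19 bytes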

View file

@ -0,0 +1,54 @@
package pilorama
import (
"math/rand"
"testing"
"github.com/stretchr/testify/require"
)
func TestMeta_Bytes(t *testing.T) {
t.Run("empty", func(t *testing.T) {
var m Meta
require.NoError(t, m.FromBytes(nil))
require.Empty(t, m.Items)
require.Equal(t, uint64(0), m.Time)
require.Equal(t, []byte{0, 0}, m.Bytes())
})
t.Run("filled", func(t *testing.T) {
expected := Meta{
Time: 123,
Items: []KeyValue{
{"abc", []byte{1, 2, 3}},
{"xyz", []byte{5, 6, 7, 8}},
}}
data := expected.Bytes()
var actual Meta
require.NoError(t, actual.FromBytes(data))
require.Equal(t, expected, actual)
t.Run("error", func(t *testing.T) {
require.Error(t, new(Meta).FromBytes(data[:len(data)/2]))
})
})
}
func TestMeta_GetAttr(t *testing.T) {
attr := [][]byte{
make([]byte, 5),
make([]byte, 10),
}
for i := range attr {
rand.Read(attr[i])
}
m := Meta{Items: []KeyValue{{"abc", attr[0]}, {"xyz", attr[1]}}}
require.Equal(t, attr[0], m.GetAttr("abc"))
require.Equal(t, attr[1], m.GetAttr("xyz"))
require.Nil(t, m.GetAttr("a"))
require.Nil(t, m.GetAttr("xyza"))
require.Nil(t, m.GetAttr(""))
}

View file

@ -0,0 +1,46 @@
package pilorama
import (
"io/fs"
"time"
)
type Option func(*cfg)
type cfg struct {
path string
perm fs.FileMode
noSync bool
maxBatchDelay time.Duration
maxBatchSize int
}
// WithPath sets the path to the bolt database file.
func WithPath(path string) Option {
return func(c *cfg) {
c.path = path
}
}
// WithPerm sets the file permissions for the database file.
func WithPerm(perm fs.FileMode) Option {
return func(c *cfg) {
c.perm = perm
}
}
// WithNoSync disables fsync on each database write.
func WithNoSync(noSync bool) Option {
return func(c *cfg) {
c.noSync = noSync
}
}
// WithMaxBatchDelay sets the maximum batching delay for the database.
func WithMaxBatchDelay(d time.Duration) Option {
return func(c *cfg) {
c.maxBatchDelay = d
}
}
// WithMaxBatchSize sets the maximum batch size for the database.
func WithMaxBatchSize(size int) Option {
return func(c *cfg) {
c.maxBatchSize = size
}
}

View file

@ -0,0 +1,63 @@
package pilorama
import (
"errors"
"math"
)
// Timestamp is an alias for the integer timestamp type.
// TODO: remove after debugging.
type Timestamp = uint64
// Node is used to represent nodes.
// TODO: remove after debugging.
type Node = uint64
// Meta represents arbitrary meta information.
// TODO: remove after debugging or create a proper interface.
type Meta struct {
Time Timestamp
Items []KeyValue
}
// KeyValue represents a key-value pair.
type KeyValue struct {
Key string
Value []byte
}
// Move represents a single move operation.
type Move struct {
Parent Node
Meta
// Child represents the ID of a node being moved. If zero, a new ID is generated.
Child Node
}
// LogMove represents log record for a single move operation.
type LogMove struct {
Move
HasOld bool
Old nodeInfo
}
const (
// RootID represents the ID of a root node.
RootID = 0
// TrashID is a parent for all removed nodes.
TrashID = math.MaxUint64
)
var (
// ErrTreeNotFound is returned when the requested tree is not found.
ErrTreeNotFound = errors.New("tree not found")
// ErrNotPathAttribute is returned when a path is constructed from a non-internal
// attribute. Currently, the only allowed attribute is AttributeFilename.
ErrNotPathAttribute = errors.New("attribute can't be used in path construction")
)
// isAttributeInternal returns true iff key can be used in `*ByPath` methods.
// For such attributes an additional index is maintained in the database.
func isAttributeInternal(key string) bool {
return key == AttributeFilename
}
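// For illustration, removal is an ordinary move that re-parents the node
// under TrashID (a sketch; the node ID and timestamp are hypothetical):
//
//	removal := Move{
//		Parent: TrashID,
//		Child:  42,
//		Meta:   Meta{Time: ts}, // ts obtained via nextTimestamp
//	}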

View file

@ -0,0 +1,11 @@
package pilorama
// nextTimestamp accepts the latest local timestamp, node position in a container and container size.
// Returns the next timestamp which can be generated by this node.
func nextTimestamp(ts Timestamp, pos, size uint64) Timestamp {
base := ts/size*size + pos
if ts < base {
return base
}
return base + size
}
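// A quick sanity check of the formula, assuming a container of size 4 and a
// node at position 1:
//
//	nextTimestamp(0, 1, 4) == 1 // base = 0/4*4+1 = 1; 0 < 1, return base
//	nextTimestamp(1, 1, 4) == 5 // base = 1; ts == base, return base+size
//	nextTimestamp(5, 1, 4) == 9 // base = 5/4*4+1 = 5; return base+size
//
// Every timestamp of a node is congruent to its position modulo the container
// size, so timestamps generated by different container nodes never collide.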

View file

@ -0,0 +1,38 @@
package pilorama
import (
"testing"
"github.com/stretchr/testify/require"
)
func TestNextTimestamp(t *testing.T) {
testCases := []struct {
latest Timestamp
pos, size uint64
expected Timestamp
}{
{0, 0, 1, 1},
{2, 0, 1, 3},
{0, 0, 2, 2},
{0, 1, 2, 1},
{10, 0, 4, 12},
{11, 0, 4, 12},
{12, 0, 4, 16},
{10, 1, 4, 13},
{11, 1, 4, 13},
{12, 1, 4, 13},
{10, 2, 4, 14},
{11, 2, 4, 14},
{12, 2, 4, 14},
{10, 3, 4, 11},
{11, 3, 4, 15},
{12, 3, 4, 15},
}
for _, tc := range testCases {
actual := nextTimestamp(tc.latest, tc.pos, tc.size)
require.Equal(t, tc.expected, actual,
"latest %d, pos %d, size %d", tc.latest, tc.pos, tc.size)
}
}

View file

@ -12,10 +12,25 @@ import (
"go.uber.org/zap" "go.uber.org/zap"
) )
func (s *Shard) handleMetabaseFailure(stage string, err error) error {
s.log.Error("metabase failure, switching mode",
zap.String("stage", stage),
zap.Stringer("mode", ModeDegraded),
zap.Error(err),
)
err = s.SetMode(ModeDegraded)
if err != nil {
return fmt.Errorf("could not switch to mode %s", ModeDegraded)
}
return nil
}
// Open opens all Shard's components. // Open opens all Shard's components.
func (s *Shard) Open() error { func (s *Shard) Open() error {
components := []interface{ Open() error }{ components := []interface{ Open() error }{
s.blobStor, s.metaBase, s.blobStor, s.metaBase, s.pilorama,
} }
if s.hasWriteCache() { if s.hasWriteCache() {
@ -24,32 +39,70 @@ func (s *Shard) Open() error {
for _, component := range components { for _, component := range components {
if err := component.Open(); err != nil { if err := component.Open(); err != nil {
if component == s.metaBase {
err = s.handleMetabaseFailure("open", err)
if err != nil {
return err
}
continue
}
return fmt.Errorf("could not open %T: %w", component, err) return fmt.Errorf("could not open %T: %w", component, err)
} }
} }
return nil return nil
} }
type metabaseSynchronizer Shard
func (x *metabaseSynchronizer) Init() error {
return (*Shard)(x).refillMetabase()
}
// Init initializes all Shard's components. // Init initializes all Shard's components.
func (s *Shard) Init() error { func (s *Shard) Init() error {
var fMetabase func() error type initializer interface {
Init() error
if s.needRefillMetabase() {
fMetabase = s.refillMetabase
} else {
fMetabase = s.metaBase.Init
} }
components := []func() error{ var components []initializer
s.blobStor.Init, fMetabase,
metaIndex := -1
if s.GetMode() != ModeDegraded {
var initMetabase initializer
if s.needRefillMetabase() {
initMetabase = (*metabaseSynchronizer)(s)
} else {
initMetabase = s.metaBase
}
metaIndex = 1
components = []initializer{
s.blobStor, initMetabase, s.pilorama,
}
} else {
components = []initializer{s.blobStor, s.pilorama}
} }
if s.hasWriteCache() { if s.hasWriteCache() {
components = append(components, s.writeCache.Init) components = append(components, s.writeCache)
}
for i, component := range components {
if err := component.Init(); err != nil {
if i == metaIndex {
err = s.handleMetabaseFailure("init", err)
if err != nil {
return err
}
continue
} }
for _, component := range components {
if err := component(); err != nil {
return fmt.Errorf("could not initialize %T: %w", component, err) return fmt.Errorf("could not initialize %T: %w", component, err)
} }
} }
@ -154,7 +207,7 @@ func (s *Shard) Close() error {
components = append(components, s.writeCache) components = append(components, s.writeCache)
} }
components = append(components, s.blobStor, s.metaBase) components = append(components, s.pilorama, s.blobStor, s.metaBase)
for _, component := range components { for _, component := range components {
if err := component.Close(); err != nil { if err := component.Close(); err != nil {

View file

@ -9,6 +9,7 @@ import (
"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/blobstor" "github.com/nspcc-dev/neofs-node/pkg/local_object_storage/blobstor"
"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/blobstor/fstree" "github.com/nspcc-dev/neofs-node/pkg/local_object_storage/blobstor/fstree"
meta "github.com/nspcc-dev/neofs-node/pkg/local_object_storage/metabase" meta "github.com/nspcc-dev/neofs-node/pkg/local_object_storage/metabase"
"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/pilorama"
apistatus "github.com/nspcc-dev/neofs-sdk-go/client/status" apistatus "github.com/nspcc-dev/neofs-sdk-go/client/status"
cid "github.com/nspcc-dev/neofs-sdk-go/container/id" cid "github.com/nspcc-dev/neofs-sdk-go/container/id"
cidtest "github.com/nspcc-dev/neofs-sdk-go/container/id/test" cidtest "github.com/nspcc-dev/neofs-sdk-go/container/id/test"
@ -31,6 +32,7 @@ func TestRefillMetabaseCorrupted(t *testing.T) {
sh := New( sh := New(
WithBlobStorOptions(blobOpts...), WithBlobStorOptions(blobOpts...),
WithPiloramaOptions(pilorama.WithPath(filepath.Join(dir, "pilorama"))),
WithMetaBaseOptions(meta.WithPath(filepath.Join(dir, "meta")))) WithMetaBaseOptions(meta.WithPath(filepath.Join(dir, "meta"))))
require.NoError(t, sh.Open()) require.NoError(t, sh.Open())
require.NoError(t, sh.Init()) require.NoError(t, sh.Init())
@ -55,6 +57,7 @@ func TestRefillMetabaseCorrupted(t *testing.T) {
sh = New( sh = New(
WithBlobStorOptions(blobOpts...), WithBlobStorOptions(blobOpts...),
WithPiloramaOptions(pilorama.WithPath(filepath.Join(dir, "pilorama"))),
WithMetaBaseOptions(meta.WithPath(filepath.Join(dir, "meta_new"))), WithMetaBaseOptions(meta.WithPath(filepath.Join(dir, "meta_new"))),
WithRefillMetabase(true)) WithRefillMetabase(true))
require.NoError(t, sh.Open()) require.NoError(t, sh.Open())
@ -83,6 +86,8 @@ func TestRefillMetabase(t *testing.T) {
WithMetaBaseOptions( WithMetaBaseOptions(
meta.WithPath(filepath.Join(p, "meta")), meta.WithPath(filepath.Join(p, "meta")),
), ),
WithPiloramaOptions(
pilorama.WithPath(filepath.Join(p, "pilorama"))),
) )
// open Blobstor // open Blobstor
@ -246,6 +251,8 @@ func TestRefillMetabase(t *testing.T) {
WithMetaBaseOptions( WithMetaBaseOptions(
meta.WithPath(filepath.Join(p, "meta_restored")), meta.WithPath(filepath.Join(p, "meta_restored")),
), ),
WithPiloramaOptions(
pilorama.WithPath(filepath.Join(p, "pilorama_another"))),
) )
// open Blobstor // open Blobstor

View file

@ -29,7 +29,8 @@ func (p *DeletePrm) WithAddresses(addr ...oid.Address) {
// Delete removes data from the shard's writeCache, metaBase and // Delete removes data from the shard's writeCache, metaBase and
// blobStor. // blobStor.
func (s *Shard) Delete(prm DeletePrm) (DeleteRes, error) { func (s *Shard) Delete(prm DeletePrm) (DeleteRes, error) {
if s.GetMode() != ModeReadWrite { mode := s.GetMode()
if s.GetMode()&ModeReadOnly != 0 {
return DeleteRes{}, ErrReadOnlyMode return DeleteRes{}, ErrReadOnlyMode
} }
@ -61,10 +62,13 @@ func (s *Shard) Delete(prm DeletePrm) (DeleteRes, error) {
} }
} }
err := meta.Delete(s.metaBase, prm.addr...) var err error
if mode&ModeDegraded == 0 { // Skip metabase errors in degraded mode.
err = meta.Delete(s.metaBase, prm.addr...)
if err != nil { if err != nil {
return DeleteRes{}, err // stop on metabase error ? return DeleteRes{}, err // stop on metabase error ?
} }
}
for i := range prm.addr { // delete small object for i := range prm.addr { // delete small object
if id, ok := smalls[prm.addr[i]]; ok { if id, ok := smalls[prm.addr[i]]; ok {

View file

@ -15,6 +15,7 @@ type ExistsPrm struct {
// ExistsRes groups the resulting values of Exists operation. // ExistsRes groups the resulting values of Exists operation.
type ExistsRes struct { type ExistsRes struct {
ex bool ex bool
metaErr bool
} }
// WithAddress is an Exists option to set object checked for existence. // WithAddress is an Exists option to set object checked for existence.
@ -31,6 +32,11 @@ func (p ExistsRes) Exists() bool {
return p.ex return p.ex
} }
// FromMeta returns true if the error resulted from the metabase.
func (p ExistsRes) FromMeta() bool {
return p.metaErr
}
// Exists checks if object is presented in shard. // Exists checks if object is presented in shard.
// //
// Returns any error encountered that does not allow to // Returns any error encountered that does not allow to
@ -38,11 +44,16 @@ func (p ExistsRes) Exists() bool {
// //
// Returns an error of type apistatus.ObjectAlreadyRemoved if object has been marked as removed. // Returns an error of type apistatus.ObjectAlreadyRemoved if object has been marked as removed.
func (s *Shard) Exists(prm ExistsPrm) (ExistsRes, error) { func (s *Shard) Exists(prm ExistsPrm) (ExistsRes, error) {
exists, err := meta.Exists(s.metaBase, prm.addr) var exists bool
if err != nil { var err error
// If the shard is in degraded mode, try to consult blobstor directly.
// Otherwise, just return an error. mode := s.GetMode()
if s.GetMode() == ModeDegraded { if mode&ModeDegraded == 0 { // In Degraded mode skip metabase consulting.
exists, err = meta.Exists(s.metaBase, prm.addr)
}
metaErr := err != nil
if err != nil && mode&ModeDegraded != 0 {
var p blobstor.ExistsPrm var p blobstor.ExistsPrm
p.SetAddress(prm.addr) p.SetAddress(prm.addr)
@ -53,11 +64,13 @@ func (s *Shard) Exists(prm ExistsPrm) (ExistsRes, error) {
zap.Stringer("address", prm.addr), zap.Stringer("address", prm.addr),
zap.String("error", err.Error())) zap.String("error", err.Error()))
err = nil err = nil
} } else if err == nil {
err = bErr
} }
} }
return ExistsRes{ return ExistsRes{
ex: exists, ex: exists,
metaErr: metaErr,
}, err }, err
} }

View file

@ -77,7 +77,6 @@ func (s *Shard) Get(prm GetPrm) (GetRes, error) {
return res.Object(), nil return res.Object(), nil
} }
small = func(stor *blobstor.BlobStor, id *blobovnicza.ID) (*objectSDK.Object, error) { small = func(stor *blobstor.BlobStor, id *blobovnicza.ID) (*objectSDK.Object, error) {
var getSmallPrm blobstor.GetSmallPrm var getSmallPrm blobstor.GetSmallPrm
getSmallPrm.SetAddress(prm.addr) getSmallPrm.SetAddress(prm.addr)

View file

@ -18,6 +18,7 @@ type HeadPrm struct {
// HeadRes groups the resulting values of Head operation. // HeadRes groups the resulting values of Head operation.
type HeadRes struct { type HeadRes struct {
obj *objectSDK.Object obj *objectSDK.Object
meta bool
} }
// WithAddress is a Head option to set the address of the requested object. // WithAddress is a Head option to set the address of the requested object.
@ -43,6 +44,11 @@ func (r HeadRes) Object() *objectSDK.Object {
return r.obj return r.obj
} }
// FromMeta returns true if the error is related to the metabase.
func (r HeadRes) FromMeta() bool {
return r.meta
}
// Head reads header of the object from the shard. // Head reads header of the object from the shard.
// //
// Returns any error encountered. // Returns any error encountered.
@ -67,13 +73,25 @@ func (s *Shard) Head(prm HeadPrm) (HeadRes, error) {
// otherwise object seems to be flushed to metabase // otherwise object seems to be flushed to metabase
} }
if s.GetMode()&ModeDegraded != 0 { // In degraded mode, fallback to blobstor.
var getPrm GetPrm
getPrm.WithIgnoreMeta(true)
getPrm.WithAddress(getPrm.addr)
res, err := s.Get(getPrm)
if err != nil {
return HeadRes{}, err
}
return HeadRes{obj: res.obj.CutPayload()}, nil
}
var headParams meta.GetPrm var headParams meta.GetPrm
headParams.WithAddress(prm.addr) headParams.WithAddress(prm.addr)
headParams.WithRaw(prm.raw) headParams.WithRaw(prm.raw)
res, err := s.metaBase.Get(headParams) res, err := s.metaBase.Get(headParams)
if err != nil { if err != nil {
return HeadRes{}, err return HeadRes{meta: true}, err
} }
return HeadRes{ return HeadRes{

View file

@ -27,7 +27,7 @@ func (s *Shard) ID() *ID {
// UpdateID reads shard ID saved in the metabase and updates it if it is missing. // UpdateID reads shard ID saved in the metabase and updates it if it is missing.
func (s *Shard) UpdateID() (err error) { func (s *Shard) UpdateID() (err error) {
if err = s.metaBase.Open(); err != nil { if err = s.metaBase.Open(); err != nil {
return err return s.handleMetabaseFailure("open", err)
} }
defer func() { defer func() {
cErr := s.metaBase.Close() cErr := s.metaBase.Close()

View file

@ -3,6 +3,7 @@ package shard
import ( import (
"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/blobstor" "github.com/nspcc-dev/neofs-node/pkg/local_object_storage/blobstor"
meta "github.com/nspcc-dev/neofs-node/pkg/local_object_storage/metabase" meta "github.com/nspcc-dev/neofs-node/pkg/local_object_storage/metabase"
"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/pilorama"
"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/writecache" "github.com/nspcc-dev/neofs-node/pkg/local_object_storage/writecache"
) )
@ -28,6 +29,9 @@ type Info struct {
// ErrorCount contains amount of errors occurred in shard operations. // ErrorCount contains amount of errors occurred in shard operations.
ErrorCount uint32 ErrorCount uint32
// PiloramaInfo contains information about trees stored on this shard.
PiloramaInfo pilorama.Info
} }
// DumpInfo returns information about the Shard. // DumpInfo returns information about the Shard.

View file

@ -15,7 +15,10 @@ type PutPrm struct {
} }
// PutRes groups the resulting values of Put operation. // PutRes groups the resulting values of Put operation.
type PutRes struct{} type PutRes struct {
metaErr bool
blobErr bool
}
// WithObject is a Put option to set object to save. // WithObject is a Put option to set object to save.
func (p *PutPrm) WithObject(obj *object.Object) { func (p *PutPrm) WithObject(obj *object.Object) {
@ -24,6 +27,14 @@ func (p *PutPrm) WithObject(obj *object.Object) {
} }
} }
// FromMeta returns true if the error resulted from the metabase.
func (r *PutRes) FromMeta() bool {
return r.metaErr
}
// FromBlobstor returns true if the error resulted from the blobstor.
func (r *PutRes) FromBlobstor() bool {
return r.blobErr
}
// Put saves the object in shard. // Put saves the object in shard.
// //
// Returns any error encountered that // Returns any error encountered that
@ -31,7 +42,8 @@ func (p *PutPrm) WithObject(obj *object.Object) {
// //
// Returns ErrReadOnlyMode error if shard is in "read-only" mode. // Returns ErrReadOnlyMode error if shard is in "read-only" mode.
func (s *Shard) Put(prm PutPrm) (PutRes, error) { func (s *Shard) Put(prm PutPrm) (PutRes, error) {
if s.GetMode() != ModeReadWrite { mode := s.GetMode()
if mode&ModeReadOnly != 0 {
return PutRes{}, ErrReadOnlyMode return PutRes{}, ErrReadOnlyMode
} }
@ -56,14 +68,16 @@ func (s *Shard) Put(prm PutPrm) (PutRes, error) {
) )
if res, err = s.blobStor.Put(putPrm); err != nil { if res, err = s.blobStor.Put(putPrm); err != nil {
return PutRes{}, fmt.Errorf("could not put object to BLOB storage: %w", err) return PutRes{blobErr: true}, fmt.Errorf("could not put object to BLOB storage: %w", err)
} }
if mode&ModeDegraded == 0 { // In degraded mode, skip metabase.
// put to metabase // put to metabase
if err := meta.Put(s.metaBase, prm.obj, res.BlobovniczaID()); err != nil { if err := meta.Put(s.metaBase, prm.obj, res.BlobovniczaID()); err != nil {
// may we need to handle this case in a special way // may we need to handle this case in a special way
// since the object has been successfully written to BlobStor // since the object has been successfully written to BlobStor
return PutRes{}, fmt.Errorf("could not put object to metabase: %w", err) return PutRes{metaErr: true}, fmt.Errorf("could not put object to metabase: %w", err)
}
} }
return PutRes{}, nil return PutRes{}, nil

View file

@ -7,6 +7,7 @@ import (
"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/blobstor" "github.com/nspcc-dev/neofs-node/pkg/local_object_storage/blobstor"
meta "github.com/nspcc-dev/neofs-node/pkg/local_object_storage/metabase" meta "github.com/nspcc-dev/neofs-node/pkg/local_object_storage/metabase"
"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/pilorama"
"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/writecache" "github.com/nspcc-dev/neofs-node/pkg/local_object_storage/writecache"
"github.com/nspcc-dev/neofs-node/pkg/util" "github.com/nspcc-dev/neofs-node/pkg/util"
"github.com/nspcc-dev/neofs-node/pkg/util/logger" "github.com/nspcc-dev/neofs-node/pkg/util/logger"
@ -24,6 +25,8 @@ type Shard struct {
blobStor *blobstor.BlobStor blobStor *blobstor.BlobStor
pilorama pilorama.ForestStorage
metaBase *meta.DB metaBase *meta.DB
tsSource TombstoneSource tsSource TombstoneSource
@ -55,6 +58,8 @@ type cfg struct {
writeCacheOpts []writecache.Option writeCacheOpts []writecache.Option
piloramaOpts []pilorama.Option
log *logger.Logger log *logger.Logger
gcCfg *gcCfg gcCfg *gcCfg
@ -99,6 +104,7 @@ func New(opts ...Option) *Shard {
metaBase: mb, metaBase: mb,
writeCache: writeCache, writeCache: writeCache,
tsSource: c.tsSource, tsSource: c.tsSource,
pilorama: pilorama.NewBoltForest(c.piloramaOpts...),
} }
s.fillInfo() s.fillInfo()
@ -134,6 +140,13 @@ func WithWriteCacheOptions(opts ...writecache.Option) Option {
} }
} }
// WithPiloramaOptions returns option to set pilorama options.
func WithPiloramaOptions(opts ...pilorama.Option) Option {
return func(c *cfg) {
c.piloramaOpts = opts
}
}
// WithLogger returns option to set Shard's logger. // WithLogger returns option to set Shard's logger.
func WithLogger(l *logger.Logger) Option { func WithLogger(l *logger.Logger) Option {
return func(c *cfg) { return func(c *cfg) {
@ -237,4 +250,5 @@ func (s *Shard) fillInfo() {
if s.cfg.useWriteCache { if s.cfg.useWriteCache {
s.cfg.info.WriteCacheInfo = s.writeCache.DumpInfo() s.cfg.info.WriteCacheInfo = s.writeCache.DumpInfo()
} }
s.cfg.info.PiloramaInfo = s.pilorama.DumpInfo()
} }

View file

@ -10,6 +10,7 @@ import (
"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/blobstor" "github.com/nspcc-dev/neofs-node/pkg/local_object_storage/blobstor"
meta "github.com/nspcc-dev/neofs-node/pkg/local_object_storage/metabase" meta "github.com/nspcc-dev/neofs-node/pkg/local_object_storage/metabase"
"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/pilorama"
"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/shard" "github.com/nspcc-dev/neofs-node/pkg/local_object_storage/shard"
"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/writecache" "github.com/nspcc-dev/neofs-node/pkg/local_object_storage/writecache"
"github.com/nspcc-dev/neofs-sdk-go/checksum" "github.com/nspcc-dev/neofs-sdk-go/checksum"
@ -49,6 +50,7 @@ func newCustomShard(t testing.TB, rootPath string, enableWriteCache bool, wcOpts
shard.WithMetaBaseOptions( shard.WithMetaBaseOptions(
meta.WithPath(filepath.Join(rootPath, "meta")), meta.WithPath(filepath.Join(rootPath, "meta")),
), ),
shard.WithPiloramaOptions(pilorama.WithPath(filepath.Join(rootPath, "pilorama"))),
shard.WithWriteCache(enableWriteCache), shard.WithWriteCache(enableWriteCache),
shard.WithWriteCacheOptions( shard.WithWriteCacheOptions(
append( append(

View file

@ -0,0 +1,52 @@
package shard
import (
"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/pilorama"
cidSDK "github.com/nspcc-dev/neofs-sdk-go/container/id"
)
var _ pilorama.Forest = (*Shard)(nil)
// TreeMove implements the pilorama.Forest interface.
func (s *Shard) TreeMove(d pilorama.CIDDescriptor, treeID string, m *pilorama.Move) (*pilorama.LogMove, error) {
if s.GetMode() == ModeReadOnly {
return nil, ErrReadOnlyMode
}
return s.pilorama.TreeMove(d, treeID, m)
}
// TreeAddByPath implements the pilorama.Forest interface.
func (s *Shard) TreeAddByPath(d pilorama.CIDDescriptor, treeID string, attr string, path []string, meta []pilorama.KeyValue) ([]pilorama.LogMove, error) {
if s.GetMode() == ModeReadOnly {
return nil, ErrReadOnlyMode
}
return s.pilorama.TreeAddByPath(d, treeID, attr, path, meta)
}
// TreeApply implements the pilorama.Forest interface.
func (s *Shard) TreeApply(d pilorama.CIDDescriptor, treeID string, m []pilorama.Move) error {
if s.GetMode() == ModeReadOnly {
return ErrReadOnlyMode
}
return s.pilorama.TreeApply(d, treeID, m)
}
// TreeGetByPath implements the pilorama.Forest interface.
func (s *Shard) TreeGetByPath(cid cidSDK.ID, treeID string, attr string, path []string, latest bool) ([]pilorama.Node, error) {
return s.pilorama.TreeGetByPath(cid, treeID, attr, path, latest)
}
// TreeGetMeta implements the pilorama.Forest interface.
func (s *Shard) TreeGetMeta(cid cidSDK.ID, treeID string, nodeID pilorama.Node) (pilorama.Meta, uint64, error) {
return s.pilorama.TreeGetMeta(cid, treeID, nodeID)
}
// TreeGetChildren implements the pilorama.Forest interface.
func (s *Shard) TreeGetChildren(cid cidSDK.ID, treeID string, nodeID pilorama.Node) ([]uint64, error) {
return s.pilorama.TreeGetChildren(cid, treeID, nodeID)
}
// TreeGetOpLog implements the pilorama.Forest interface.
func (s *Shard) TreeGetOpLog(cid cidSDK.ID, treeID string, height uint64) (pilorama.Move, error) {
return s.pilorama.TreeGetOpLog(cid, treeID, height)
}

View file

@ -579,10 +579,9 @@ func (c *Client) NotificationChannel() <-chan client.Notification {
// - inactiveModeCb is called if not nil. // - inactiveModeCb is called if not nil.
func (c *Client) inactiveMode() { func (c *Client) inactiveMode() {
c.switchLock.Lock() c.switchLock.Lock()
defer c.switchLock.Unlock()
close(c.notifications) close(c.notifications)
c.inactive = true c.inactive = true
c.switchLock.Unlock()
if c.cfg.inactiveModeCb != nil { if c.cfg.inactiveModeCb != nil {
c.cfg.inactiveModeCb() c.cfg.inactiveModeCb()

View file

@ -1,6 +1,7 @@
package netmap package netmap
import ( import (
"errors"
"fmt" "fmt"
"strconv" "strconv"
@ -20,6 +21,7 @@ const (
etAlphaConfig = "EigenTrustAlpha" etAlphaConfig = "EigenTrustAlpha"
irCandidateFeeConfig = "InnerRingCandidateFee" irCandidateFeeConfig = "InnerRingCandidateFee"
withdrawFeeConfig = "WithdrawFee" withdrawFeeConfig = "WithdrawFee"
homomorphicHashingDisabledKey = "HomomorphicHashingDisabled"
) )
// MaxObjectSize receives max object size configuration // MaxObjectSize receives max object size configuration
@ -109,6 +111,25 @@ func (c *Client) EigenTrustAlpha() (float64, error) {
return strconv.ParseFloat(strAlpha, 64) return strconv.ParseFloat(strAlpha, 64)
} }
// HomomorphicHashDisabled returns the global configuration value of the
// homomorphic hashing setting.
//
// Returns (false, nil) if config key is not found in the contract.
func (c *Client) HomomorphicHashDisabled() (bool, error) {
const defaultValue = false
hashingDisabled, err := c.readBoolConfig(homomorphicHashingDisabledKey)
if err != nil {
if errors.Is(err, ErrConfigNotFound) {
return defaultValue, nil
}
return false, fmt.Errorf("(%T) could not get homomorphic hash state: %w", c, err)
}
return hashingDisabled, nil
}
// InnerRingCandidateFee returns global configuration value of fee paid by // InnerRingCandidateFee returns global configuration value of fee paid by
// node to be in inner ring candidates list. // node to be in inner ring candidates list.
func (c *Client) InnerRingCandidateFee() (uint64, error) { func (c *Client) InnerRingCandidateFee() (uint64, error) {
@ -151,6 +172,16 @@ func (c *Client) readStringConfig(key string) (string, error) {
return v.(string), nil return v.(string), nil
} }
func (c *Client) readBoolConfig(key string) (bool, error) {
v, err := c.config([]byte(key), BoolAssert)
if err != nil {
return false, err
}
// BoolAssert is guaranteed to return bool if the error is nil.
return v.(bool), nil
}
// SetConfigPrm groups parameters of SetConfig operation. // SetConfigPrm groups parameters of SetConfig operation.
type SetConfigPrm struct { type SetConfigPrm struct {
id []byte id []byte
@ -297,8 +328,14 @@ func bytesToUint64(val []byte) uint64 {
return bigint.FromBytes(val).Uint64() return bigint.FromBytes(val).Uint64()
} }
// ErrConfigNotFound is returned when the requested key was not found
// in the network config (returned value is `Null`).
var ErrConfigNotFound = errors.New("config value not found")
// config performs the test invoke of get config value // config performs the test invoke of get config value
// method of NeoFS Netmap contract. // method of NeoFS Netmap contract.
//
// Returns ErrConfigNotFound if config key is not found in the contract.
func (c *Client) config(key []byte, assert func(stackitem.Item) (interface{}, error)) (interface{}, error) { func (c *Client) config(key []byte, assert func(stackitem.Item) (interface{}, error)) (interface{}, error) {
prm := client.TestInvokePrm{} prm := client.TestInvokePrm{}
prm.SetMethod(configMethod) prm.SetMethod(configMethod)
@ -315,6 +352,10 @@ func (c *Client) config(key []byte, assert func(stackitem.Item) (interface{}, er
configMethod, ln) configMethod, ln)
} }
if _, ok := items[0].(stackitem.Null); ok {
return nil, ErrConfigNotFound
}
return assert(items[0]) return assert(items[0])
} }
@ -328,6 +369,11 @@ func StringAssert(item stackitem.Item) (interface{}, error) {
return client.StringFromStackItem(item) return client.StringFromStackItem(item)
} }
// BoolAssert converts stack item to bool.
func BoolAssert(item stackitem.Item) (interface{}, error) {
return client.BoolFromStackItem(item)
}
// iterateRecords iterates over all config records and passes them to f. // iterateRecords iterates over all config records and passes them to f.
// //
// Returns f's errors directly. // Returns f's errors directly.
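// A hedged caller-side sketch of the new config getter (cli is an assumed
// *Client; error handling shortened):
//
//	disabled, err := cli.HomomorphicHashDisabled()
//	if err != nil {
//		return err // contract read failed for a reason other than a missing key
//	}
//	if !disabled {
//		// compute and attach homomorphic (Tillich-Zémor) hashes
//	}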

View file

@ -7,6 +7,7 @@ import (
"github.com/nspcc-dev/neofs-node/pkg/services/object_manager/placement" "github.com/nspcc-dev/neofs-node/pkg/services/object_manager/placement"
"github.com/nspcc-dev/neofs-node/pkg/util/rand" "github.com/nspcc-dev/neofs-node/pkg/util/rand"
containerSDK "github.com/nspcc-dev/neofs-sdk-go/container"
oid "github.com/nspcc-dev/neofs-sdk-go/object/id" oid "github.com/nspcc-dev/neofs-sdk-go/object/id"
storagegroupSDK "github.com/nspcc-dev/neofs-sdk-go/storagegroup" storagegroupSDK "github.com/nspcc-dev/neofs-sdk-go/storagegroup"
"github.com/nspcc-dev/tzhash/tz" "github.com/nspcc-dev/tzhash/tz"
@ -50,6 +51,8 @@ func (c *Context) checkStorageGroupPoR(sgID oid.ID, sg storagegroupSDK.StorageGr
getHeaderPrm.CID = c.task.ContainerID() getHeaderPrm.CID = c.task.ContainerID()
getHeaderPrm.NodeIsRelay = true getHeaderPrm.NodeIsRelay = true
homomorphicHashingEnabled := !containerSDK.IsHomomorphicHashingDisabled(c.task.ContainerStructure())
for i := range members { for i := range members {
objectPlacement, err := c.buildPlacement(members[i]) objectPlacement, err := c.buildPlacement(members[i])
if err != nil { if err != nil {
@ -90,20 +93,24 @@ func (c *Context) checkStorageGroupPoR(sgID oid.ID, sg storagegroupSDK.StorageGr
// update cache for PoR and PDP audit checks // update cache for PoR and PDP audit checks
c.updateHeadResponses(hdr) c.updateHeadResponses(hdr)
if homomorphicHashingEnabled {
cs, _ := hdr.PayloadHomomorphicHash() cs, _ := hdr.PayloadHomomorphicHash()
if len(tzHash) == 0 { if len(tzHash) == 0 {
tzHash = cs.Value() tzHash = cs.Value()
} else { } else {
tzHash, err = tz.Concat([][]byte{tzHash, cs.Value()}) tzHash, err = tz.Concat([][]byte{
tzHash,
cs.Value(),
})
if err != nil { if err != nil {
c.log.Debug("can't concatenate tz hash", c.log.Debug("can't concatenate tz hash",
zap.Stringer("oid", members[i]), zap.String("oid", members[i].String()),
zap.String("error", err.Error())) zap.String("error", err.Error()))
break break
} }
} }
}
totalSize += hdr.PayloadSize() totalSize += hdr.PayloadSize()
@ -116,7 +123,7 @@ func (c *Context) checkStorageGroupPoR(sgID oid.ID, sg storagegroupSDK.StorageGr
sizeCheck := sg.ValidationDataSize() == totalSize sizeCheck := sg.ValidationDataSize() == totalSize
cs, _ := sg.ValidationDataHash() cs, _ := sg.ValidationDataHash()
tzCheck := bytes.Equal(tzHash, cs.Value()) tzCheck := !homomorphicHashingEnabled || bytes.Equal(tzHash, cs.Value())
if sizeCheck && tzCheck { if sizeCheck && tzCheck {
c.report.PassedPoR(sgID) // write report c.report.PassedPoR(sgID) // write report

View file

@ -166,3 +166,21 @@ func (w *restoreShardResponseWrapper) FromGRPCMessage(m grpc.Message) error {
w.RestoreShardResponse = r w.RestoreShardResponse = r
return nil return nil
} }
type synchronizeTreeResponseWrapper struct {
*SynchronizeTreeResponse
}
func (w *synchronizeTreeResponseWrapper) ToGRPCMessage() grpc.Message {
return w.SynchronizeTreeResponse
}
func (w *synchronizeTreeResponseWrapper) FromGRPCMessage(m grpc.Message) error {
r, ok := m.(*SynchronizeTreeResponse)
if !ok {
return message.NewUnexpectedMessageType(m, (*SynchronizeTreeResponse)(nil))
}
w.SynchronizeTreeResponse = r
return nil
}

View file

@ -16,6 +16,7 @@ const (
rpcSetShardMode = "SetShardMode" rpcSetShardMode = "SetShardMode"
rpcDumpShard = "DumpShard" rpcDumpShard = "DumpShard"
rpcRestoreShard = "RestoreShard" rpcRestoreShard = "RestoreShard"
rpcSynchronizeTree = "SynchronizeTree"
) )
// HealthCheck executes ControlService.HealthCheck RPC. // HealthCheck executes ControlService.HealthCheck RPC.
@ -172,3 +173,16 @@ func RestoreShard(cli *client.Client, req *RestoreShardRequest, opts ...client.C
return wResp.RestoreShardResponse, nil return wResp.RestoreShardResponse, nil
} }
// SynchronizeTree executes ControlService.SynchronizeTree RPC.
func SynchronizeTree(cli *client.Client, req *SynchronizeTreeRequest, opts ...client.CallOption) (*SynchronizeTreeResponse, error) {
wResp := &synchronizeTreeResponseWrapper{new(SynchronizeTreeResponse)}
wReq := &requestWrapper{m: req}
err := client.SendUnary(cli, common.CallMethodInfoUnary(serviceName, rpcSynchronizeTree), wReq, wResp, opts...)
if err != nil {
return nil, err
}
return wResp.SynchronizeTreeResponse, nil
}
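// A usage sketch, assuming an established control client and raw container ID
// bytes (rawCID and cli are hypothetical variables; the request must be signed
// with the control key, following the same pattern as the other RPCs here):
//
//	req := new(SynchronizeTreeRequest)
//	req.SetBody(&SynchronizeTreeRequest_Body{
//		ContainerId: rawCID,
//		TreeId:      "version",
//		Height:      0, // start from the beginning of the log
//	})
//	resp, err := SynchronizeTree(cli, req)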

View file

@ -32,6 +32,7 @@ func (s *Server) ListShards(_ context.Context, req *control.ListShardsRequest) (
si.SetMetabasePath(sh.MetaBaseInfo.Path) si.SetMetabasePath(sh.MetaBaseInfo.Path)
si.SetBlobstorPath(sh.BlobStorInfo.RootPath) si.SetBlobstorPath(sh.BlobStorInfo.RootPath)
si.SetWriteCachePath(sh.WriteCacheInfo.Path) si.SetWriteCachePath(sh.WriteCacheInfo.Path)
si.SetPiloramaPath(sh.PiloramaInfo.Path)
var mode control.ShardMode var mode control.ShardMode

View file

@ -3,9 +3,8 @@ package control
import ( import (
"crypto/ecdsa" "crypto/ecdsa"
"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/engine"
"github.com/nspcc-dev/neofs-node/pkg/core/netmap" "github.com/nspcc-dev/neofs-node/pkg/core/netmap"
"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/engine"
"github.com/nspcc-dev/neofs-node/pkg/services/control" "github.com/nspcc-dev/neofs-node/pkg/services/control"
) )
@ -52,6 +51,8 @@ type cfg struct {
delObjHandler DeletedObjectHandler delObjHandler DeletedObjectHandler
treeService TreeService
s *engine.StorageEngine s *engine.StorageEngine
} }
@ -125,3 +126,10 @@ func WithLocalStorage(engine *engine.StorageEngine) Option {
c.s = engine c.s = engine
} }
} }
// WithTreeService returns an option to set tree service.
func WithTreeService(s TreeService) Option {
return func(c *cfg) {
c.treeService = s
}
}

View file

@ -0,0 +1,48 @@
package control
import (
"context"
"github.com/nspcc-dev/neofs-node/pkg/services/control"
cid "github.com/nspcc-dev/neofs-sdk-go/container/id"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
)
// TreeService represents a tree service instance.
type TreeService interface {
Synchronize(ctx context.Context, cnr cid.ID, treeID string) error
}
// SynchronizeTree implements the SynchronizeTree RPC of the control service:
// it triggers synchronization of all log operations for the specified tree.
func (s *Server) SynchronizeTree(ctx context.Context, req *control.SynchronizeTreeRequest) (*control.SynchronizeTreeResponse, error) {
err := s.isValidRequest(req)
if err != nil {
return nil, status.Error(codes.PermissionDenied, err.Error())
}
if s.treeService == nil {
return nil, status.Error(codes.Internal, "tree service is disabled")
}
b := req.GetBody()
var cnr cid.ID
if err := cnr.Decode(b.GetContainerId()); err != nil {
return nil, status.Error(codes.Internal, err.Error())
}
err = s.treeService.Synchronize(ctx, cnr, b.GetTreeId())
if err != nil {
return nil, status.Error(codes.Internal, err.Error())
}
resp := new(control.SynchronizeTreeResponse)
resp.SetBody(new(control.SynchronizeTreeResponse_Body))
err = SignMessage(s.key, resp)
if err != nil {
return nil, status.Error(codes.Internal, err.Error())
}
return resp, nil
}

@@ -200,3 +200,17 @@ func (x *RestoreShardResponse) SetBody(v *RestoreShardResponse_Body) {
x.Body = v
}
}
// SetBody sets synchronize tree request body.
func (x *SynchronizeTreeRequest) SetBody(v *SynchronizeTreeRequest_Body) {
if x != nil {
x.Body = v
}
}
// SetBody sets synchronize tree response body.
func (x *SynchronizeTreeResponse) SetBody(v *SynchronizeTreeResponse_Body) {
if x != nil {
x.Body = v
}
}

@@ -31,6 +31,9 @@ service ControlService {
// Restore objects from dump.
rpc RestoreShard (RestoreShardRequest) returns (RestoreShardResponse);
// Synchronizes all log operations for the specified tree.
rpc SynchronizeTree (SynchronizeTreeRequest) returns (SynchronizeTreeResponse);
}
// Health check request.
@@ -279,3 +282,33 @@ message RestoreShardResponse {
// Body signature.
Signature signature = 2;
}
// SynchronizeTree request.
message SynchronizeTreeRequest {
// Request body structure.
message Body {
bytes container_id = 1;
string tree_id = 2;
// Starting height for the synchronization. Can be omitted.
uint64 height = 3;
}
// Body of synchronize tree request message.
Body body = 1;
// Body signature.
Signature signature = 2;
}
// SynchronizeTree response.
message SynchronizeTreeResponse {
// Response body structure.
message Body {
}
// Body of synchronize tree response message.
Body body = 1;
// Body signature.
Signature signature = 2;
}

@@ -103,6 +103,7 @@ func equalListShardResponseBodies(b1, b2 *control.ListShardsResponse_Body) bool
if b1.Shards[i].GetMetabasePath() != b2.Shards[i].GetMetabasePath() ||
b1.Shards[i].GetBlobstorPath() != b2.Shards[i].GetBlobstorPath() ||
b1.Shards[i].GetWritecachePath() != b2.Shards[i].GetWritecachePath() ||
b1.Shards[i].GetPiloramaPath() != b2.Shards[i].GetPiloramaPath() ||
!bytes.Equal(b1.Shards[i].GetShard_ID(), b2.Shards[i].GetShard_ID()) {
return false
}
@@ -160,3 +161,21 @@ func equalSetShardModeRequestBodies(b1, b2 *control.SetShardModeRequest_Body) bool {
return true
}
func TestSynchronizeTreeRequest_Body_StableMarshal(t *testing.T) {
testStableMarshal(t,
&control.SynchronizeTreeRequest_Body{
ContainerId: []byte{1, 2, 3, 4, 5, 6, 7},
TreeId: "someID",
Height: 42,
},
new(control.SynchronizeTreeRequest_Body),
func(m1, m2 protoMessage) bool {
b1 := m1.(*control.SynchronizeTreeRequest_Body)
b2 := m2.(*control.SynchronizeTreeRequest_Body)
return bytes.Equal(b1.GetContainerId(), b2.GetContainerId()) &&
b1.GetTreeId() == b2.GetTreeId() &&
b1.GetHeight() == b2.GetHeight()
},
)
}

@@ -107,6 +107,11 @@ func (x *ShardInfo) SetWriteCachePath(v string) {
x.WritecachePath = v
}
// SetPiloramaPath sets path to shard's pilorama.
func (x *ShardInfo) SetPiloramaPath(v string) {
x.PiloramaPath = v
}
// SetMode sets shard's work mode.
func (x *ShardInfo) SetMode(v ShardMode) {
x.Mode = v

@@ -139,6 +139,9 @@ message ShardInfo {
// Amount of errors occurred.
uint32 errorCount = 6;
// Path to shard's pilorama storage.
string pilorama_path = 7 [json_name = "piloramaPath"];
}
// Work mode of the shard. // Work mode of the shard.

@@ -140,6 +140,7 @@ func generateShardInfo(id int) *control.ShardInfo {
si.SetMetabasePath(filepath.Join(path, "meta"))
si.SetBlobstorPath(filepath.Join(path, "blobstor"))
si.SetWriteCachePath(filepath.Join(path, "writecache"))
si.SetPiloramaPath(filepath.Join(path, "pilorama"))
return si
}

@@ -4,6 +4,7 @@ import (
"github.com/nspcc-dev/neofs-node/pkg/core/client"
"github.com/nspcc-dev/neofs-node/pkg/services/object/util"
"github.com/nspcc-dev/neofs-node/pkg/services/object_manager/placement"
containerSDK "github.com/nspcc-dev/neofs-sdk-go/container"
"github.com/nspcc-dev/neofs-sdk-go/object"
)
@@ -12,6 +13,8 @@ type PutInitPrm struct {
hdr *object.Object
cnr containerSDK.Container
traverseOpts []placement.Option
relay func(client.NodeInfo, client.MultiAddressClient) error

@@ -10,6 +10,7 @@ import (
"github.com/nspcc-dev/neofs-node/pkg/services/object/util"
"github.com/nspcc-dev/neofs-node/pkg/services/object_manager/placement"
"github.com/nspcc-dev/neofs-node/pkg/services/object_manager/transformer"
containerSDK "github.com/nspcc-dev/neofs-sdk-go/container"
"github.com/nspcc-dev/neofs-sdk-go/object"
"github.com/nspcc-dev/neofs-sdk-go/user"
)
@@ -119,6 +120,7 @@ func (p *Streamer) initTarget(prm *PutInitPrm) error {
unpreparedObject: true,
nextTarget: transformer.NewPayloadSizeLimiter(
p.maxPayloadSz,
containerSDK.IsHomomorphicHashingDisabled(prm.cnr),
func() transformer.ObjectTarget {
return transformer.NewFormatTarget(&transformer.FormatterParams{
Key: sessionKey,
@@ -148,15 +150,17 @@ func (p *Streamer) preparePrm(prm *PutInitPrm) error {
}
// get container to store the object
cnrInfo, err := p.cnrSrc.Get(idCnr)
if err != nil {
return fmt.Errorf("(%T) could not get container by ID: %w", p, err)
}
prm.cnr = cnrInfo.Value
// add common options
prm.traverseOpts = append(prm.traverseOpts,
// set processing container
placement.ForContainer(prm.cnr),
)
if id, ok := prm.hdr.ID(); ok {

@@ -14,7 +14,7 @@ import (
// with information about members collected via HeadReceiver.
//
// Resulting storage group consists of physically stored objects only.
func CollectMembers(r objutil.HeadReceiver, cnr cid.ID, members []oid.ID, calcHomoHash bool) (*storagegroup.StorageGroup, error) {
var (
sumPhySize uint64
phyMembers []oid.ID
@@ -37,12 +37,19 @@ func CollectMembers(r objutil.HeadReceiver, cnr cid.ID, members []oid.ID) (*storagegroup.StorageGroup, error) {
phyMembers = append(phyMembers, id)
sumPhySize += leaf.PayloadSize()
cs, _ := leaf.PayloadHomomorphicHash()
if calcHomoHash {
phyHashes = append(phyHashes, cs.Value())
}
}); err != nil {
return nil, err
}
}
sg.SetMembers(phyMembers)
sg.SetValidationDataSize(sumPhySize)
if calcHomoHash {
sumHash, err := tz.Concat(phyHashes)
if err != nil {
return nil, err
@@ -53,9 +60,8 @@ func CollectMembers(r objutil.HeadReceiver, cnr cid.ID, members []oid.ID) (*storagegroup.StorageGroup, error) {
copy(tzHash[:], sumHash)
cs.SetTillichZemor(tzHash)
sg.SetValidationDataHash(cs)
}
return &sg, nil
}
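Callers now decide whether the Tillich-Zemor hash is aggregated. A plausible call site, assuming a container value cnr fetched beforehand (the IsHomomorphicHashingDisabled helper appears in the Streamer change earlier in this diff; the other variable names are assumptions):

// Sketch: compute the homomorphic hash only when the container allows it.
calcHomoHash := !containerSDK.IsHomomorphicHashingDisabled(cnr)
sg, err := CollectMembers(headReceiver, cnrID, memberIDs, calcHomoHash)
if err != nil {
	return err
}
_ = sg // sign and store the storage group as before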

@@ -15,6 +15,8 @@ import (
type payloadSizeLimiter struct {
maxSize, written uint64
withoutHomomorphicHash bool
targetInit func() ObjectTarget
target ObjectTarget
@@ -41,10 +43,14 @@ type payloadChecksumHasher struct {
// NewPayloadSizeLimiter returns ObjectTarget instance that restricts payload length
// of the writing object and writes generated objects to targets from initializer.
//
// Calculates and adds homomorphic hash to resulting objects only if withoutHomomorphicHash
// is false.
//
// Objects w/ payload size less or equal than max size remain untouched.
func NewPayloadSizeLimiter(maxSize uint64, withoutHomomorphicHash bool, targetInit TargetInitializer) ObjectTarget {
return &payloadSizeLimiter{
maxSize: maxSize,
withoutHomomorphicHash: withoutHomomorphicHash,
targetInit: targetInit,
splitID: object.NewSplitID(),
}
@@ -108,7 +114,7 @@ func (s *payloadSizeLimiter) initializeCurrent() {
s.target = s.targetInit()
// create payload hashers
s.currentHashers = payloadHashersForObject(s.current, s.withoutHomomorphicHash)
// compose multi-writer from target and all payload hashers
ws := make([]io.Writer, 0, 1+len(s.currentHashers)+len(s.parentHashers))
@@ -126,9 +132,10 @@ func (s *payloadSizeLimiter) initializeCurrent() {
s.chunkWriter = io.MultiWriter(ws...)
}
func payloadHashersForObject(obj *object.Object, withoutHomomorphicHash bool) []*payloadChecksumHasher {
hashers := make([]*payloadChecksumHasher, 0, 2)
hashers = append(hashers, &payloadChecksumHasher{
hasher: sha256.New(),
checksumWriter: func(binChecksum []byte) {
if ln := len(binChecksum); ln != sha256.Size {
@@ -143,8 +150,10 @@ func payloadHashersForObject(obj *object.Object) []*payloadChecksumHasher {
obj.SetPayloadChecksum(cs)
},
})
if !withoutHomomorphicHash {
hashers = append(hashers, &payloadChecksumHasher{
hasher: tz.New(),
checksumWriter: func(binChecksum []byte) {
if ln := len(binChecksum); ln != tz.Size {
@@ -159,8 +168,10 @@ func payloadHashersForObject(obj *object.Object) []*payloadChecksumHasher {
obj.SetPayloadHomomorphicHash(cs)
},
})
}
return hashers
}
func (s *payloadSizeLimiter) release(close bool) (*AccessIdentifiers, error) {
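Condensed from the Streamer change earlier in this diff, the flag flows into the limiter roughly like this (the target initializer and variable names are elided assumptions):

// With withoutHomomorphicHash == true only the SHA-256 hasher is created,
// so no Tillich-Zemor state is kept per object.
limiter := transformer.NewPayloadSizeLimiter(
	maxPayloadSz,
	containerSDK.IsHomomorphicHashingDisabled(cnr), // per-container switch
	newTargetInit, // transformer.TargetInitializer, elided here
)
_ = limiter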

@@ -0,0 +1,96 @@
package tree
import (
"context"
"fmt"
"strings"
"sync"
"time"
"github.com/hashicorp/golang-lru/simplelru"
"github.com/nspcc-dev/neofs-node/pkg/network"
"google.golang.org/grpc"
"google.golang.org/grpc/connectivity"
)
type clientCache struct {
sync.Mutex
simplelru.LRU
}
type cacheItem struct {
cc *grpc.ClientConn
lastTry time.Time
}
const (
defaultClientCacheSize = 10
defaultClientConnectTimeout = time.Second * 2
defaultReconnectInterval = time.Second * 15
)
func (c *clientCache) init() {
l, _ := simplelru.NewLRU(defaultClientCacheSize, func(key, value interface{}) {
_ = value.(*grpc.ClientConn).Close()
})
c.LRU = *l
}
func (c *clientCache) get(ctx context.Context, netmapAddr string) (TreeServiceClient, error) {
c.Lock()
ccInt, ok := c.LRU.Get(netmapAddr)
c.Unlock()
if ok {
item := ccInt.(cacheItem)
if item.cc == nil {
if d := time.Since(item.lastTry); d < defaultReconnectInterval {
return nil, fmt.Errorf("skip connecting to %s (time since last error %s)",
netmapAddr, d)
}
} else {
if s := item.cc.GetState(); s == connectivity.Idle || s == connectivity.Ready {
return NewTreeServiceClient(item.cc), nil
}
_ = item.cc.Close()
}
}
cc, err := dialTreeService(ctx, netmapAddr)
lastTry := time.Now()
c.Lock()
if err != nil {
c.LRU.Add(netmapAddr, cacheItem{cc: nil, lastTry: lastTry})
} else {
c.LRU.Add(netmapAddr, cacheItem{cc: cc, lastTry: lastTry})
}
c.Unlock()
if err != nil {
return nil, err
}
return NewTreeServiceClient(cc), nil
}
func dialTreeService(ctx context.Context, netmapAddr string) (*grpc.ClientConn, error) {
var netAddr network.Address
if err := netAddr.FromString(netmapAddr); err != nil {
return nil, err
}
opts := make([]grpc.DialOption, 1, 2)
opts[0] = grpc.WithBlock()
// FIXME(@fyrchik): ugly hack #1322
if !strings.HasPrefix(netAddr.URIAddr(), "grpcs:") {
opts = append(opts, grpc.WithInsecure())
}
ctx, cancel := context.WithTimeout(ctx, defaultClientConnectTimeout)
cc, err := grpc.DialContext(ctx, netAddr.URIAddr(), opts...)
cancel()
return cc, err
}
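A usage sketch of the cache from service code; the multiaddress and request are illustrative:

// Repeated lookups reuse a live gRPC connection, while a failed dial is
// remembered and not retried until defaultReconnectInterval has passed.
cli, err := s.cache.get(ctx, "/dns4/tree-node/tcp/8080")
if err != nil {
	return err // dial failed or still inside the backoff window
}
resp, err := cli.Apply(ctx, req) // any TreeServiceClient method
_ = resp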

@@ -0,0 +1,90 @@
package tree
import (
"bytes"
"crypto/sha256"
"fmt"
"sync"
"github.com/hashicorp/golang-lru/simplelru"
"github.com/nspcc-dev/neofs-node/pkg/core/container"
"github.com/nspcc-dev/neofs-node/pkg/services/object_manager/placement"
cidSDK "github.com/nspcc-dev/neofs-sdk-go/container/id"
netmapSDK "github.com/nspcc-dev/neofs-sdk-go/netmap"
)
type containerCache struct {
sync.Mutex
nm *netmapSDK.NetMap
lru *simplelru.LRU
}
func (c *containerCache) init(size int) {
c.lru, _ = simplelru.NewLRU(size, nil) // no error, size is positive
}
type containerCacheItem struct {
cnr *container.Container
local int
nodes []netmapSDK.NodeInfo
}
const defaultContainerCacheSize = 10
// getContainerNodes returns the list of nodes in the container and the position of the local key in that list.
func (s *Service) getContainerNodes(cid cidSDK.ID) ([]netmapSDK.NodeInfo, int, error) {
nm, err := s.nmSource.GetNetMap(0)
if err != nil {
return nil, -1, fmt.Errorf("can't get netmap: %w", err)
}
cnr, err := s.cnrSource.Get(cid)
if err != nil {
return nil, -1, fmt.Errorf("can't get container: %w", err)
}
cidStr := cid.String()
s.containerCache.Lock()
if s.containerCache.nm != nm {
s.containerCache.lru.Purge()
} else if v, ok := s.containerCache.lru.Get(cidStr); ok {
item := v.(containerCacheItem)
if item.cnr == cnr {
s.containerCache.Unlock()
return item.nodes, item.local, nil
}
}
s.containerCache.Unlock()
policy := cnr.Value.PlacementPolicy()
rawCID := make([]byte, sha256.Size)
cid.Encode(rawCID)
cntNodes, err := nm.ContainerNodes(policy, rawCID)
if err != nil {
return nil, -1, err
}
nodes := placement.FlattenNodes(cntNodes)
localPos := -1
for i := range nodes {
if bytes.Equal(nodes[i].PublicKey(), s.rawPub) {
localPos = i
break
}
}
s.containerCache.Lock()
s.containerCache.nm = nm
s.containerCache.lru.Add(cidStr, containerCacheItem{
cnr: cnr,
local: localPos,
nodes: nodes,
})
s.containerCache.Unlock()
return nodes, localPos, err
}

@@ -0,0 +1,90 @@
package tree
import (
"crypto/ecdsa"
"github.com/nspcc-dev/neo-go/pkg/crypto/keys"
"github.com/nspcc-dev/neofs-node/pkg/core/container"
"github.com/nspcc-dev/neofs-node/pkg/core/netmap"
"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/pilorama"
"go.uber.org/zap"
)
type cfg struct {
log *zap.Logger
key *ecdsa.PrivateKey
rawPub []byte
nmSource netmap.Source
cnrSource container.Source
forest pilorama.Forest
// replication-related parameters
replicatorChannelCapacity int
replicatorWorkerCount int
containerCacheSize int
}
// Option represents configuration option for a tree service.
type Option func(*cfg)
// WithContainerSource sets a container source for a tree service.
// This option is required.
func WithContainerSource(src container.Source) Option {
return func(c *cfg) {
c.cnrSource = src
}
}
// WithNetmapSource sets a netmap source for a tree service.
// This option is required.
func WithNetmapSource(src netmap.Source) Option {
return func(c *cfg) {
c.nmSource = src
}
}
// WithPrivateKey sets a private key for a tree service.
// This option is required.
func WithPrivateKey(key *ecdsa.PrivateKey) Option {
return func(c *cfg) {
c.key = key
c.rawPub = (*keys.PublicKey)(&key.PublicKey).Bytes()
}
}
// WithLogger sets logger for a tree service.
func WithLogger(log *zap.Logger) Option {
return func(c *cfg) {
c.log = log
}
}
// WithStorage sets tree storage for a service.
func WithStorage(s pilorama.Forest) Option {
return func(c *cfg) {
c.forest = s
}
}
// WithReplicationChannelCapacity sets the replication queue capacity.
// Non-positive values are ignored.
func WithReplicationChannelCapacity(n int) Option {
return func(c *cfg) {
if n > 0 {
c.replicatorChannelCapacity = n
}
}
}
// WithReplicationWorkerCount sets the number of replication workers.
// Non-positive values are ignored.
func WithReplicationWorkerCount(n int) Option {
return func(c *cfg) {
if n > 0 {
c.replicatorWorkerCount = n
}
}
}
// WithContainerCacheSize sets the size of the container cache.
// Non-positive values are ignored.
func WithContainerCacheSize(n int) Option {
return func(c *cfg) {
if n > 0 {
c.containerCacheSize = n
}
}
}
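Putting the options together, a plausible construction in node-initialization code (variable names are assumptions; New, Start and Shutdown appear later in this diff):

treeSvc := tree.New(
	tree.WithContainerSource(cnrSrc),
	tree.WithNetmapSource(nmSrc),
	tree.WithPrivateKey(&nodeKey.PrivateKey),
	tree.WithLogger(log),
	tree.WithStorage(piloramaForest),        // pilorama.Forest backing the trees
	tree.WithReplicationWorkerCount(32),     // optional tuning
)
treeSvc.Start(ctx)
defer treeSvc.Shutdown()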

@@ -0,0 +1,45 @@
package tree
import (
"bytes"
"context"
"errors"
netmapSDK "github.com/nspcc-dev/neofs-sdk-go/netmap"
"go.uber.org/zap"
)
var errNoSuitableNode = errors.New("no node was found to execute the request")
// forEachNode executes callback for each node in the container until true is returned.
// It returns nil immediately if the local node is among the container nodes.
// Returns errNoSuitableNode if there was no successful attempt to dial any node.
func (s *Service) forEachNode(ctx context.Context, cntNodes []netmapSDK.NodeInfo, f func(c TreeServiceClient) bool) error {
for _, n := range cntNodes {
if bytes.Equal(n.PublicKey(), s.rawPub) {
return nil
}
}
var called bool
for _, n := range cntNodes {
var stop bool
n.IterateNetworkEndpoints(func(endpoint string) bool {
c, err := s.cache.get(ctx, endpoint)
if err != nil {
return false
}
s.log.Debug("redirecting tree service query", zap.String("endpoint", endpoint))
called = true
stop = f(c)
return true
})
if stop {
return nil
}
}
if !called {
return errNoSuitableNode
}
return nil
}

@@ -0,0 +1,142 @@
package tree
import (
"context"
"crypto/sha256"
"encoding/hex"
"fmt"
"time"
"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/pilorama"
cidSDK "github.com/nspcc-dev/neofs-sdk-go/container/id"
netmapSDK "github.com/nspcc-dev/neofs-sdk-go/netmap"
"go.uber.org/zap"
)
type movePair struct {
cid cidSDK.ID
treeID string
op *pilorama.LogMove
}
type replicationTask struct {
n netmapSDK.NodeInfo
req *ApplyRequest
}
const (
defaultReplicatorCapacity = 64
defaultReplicatorWorkerCount = 64
defaultReplicatorSendTimeout = time.Second * 5
)
func (s *Service) replicationWorker() {
for {
select {
case <-s.closeCh:
return
case task := <-s.replicationTasks:
var lastErr error
var lastAddr string
task.n.IterateNetworkEndpoints(func(addr string) bool {
lastAddr = addr
c, err := s.cache.get(context.Background(), addr)
if err != nil {
lastErr = fmt.Errorf("can't create client: %w", err)
return false
}
ctx, cancel := context.WithTimeout(context.Background(), defaultReplicatorSendTimeout)
_, lastErr = c.Apply(ctx, task.req)
cancel()
return lastErr == nil
})
if lastErr != nil {
s.log.Warn("failed to sent update to the node",
zap.String("last_error", lastErr.Error()),
zap.String("address", lastAddr),
zap.String("key", hex.EncodeToString(task.n.PublicKey())))
}
}
}
}
func (s *Service) replicateLoop(ctx context.Context) {
for i := 0; i < s.replicatorWorkerCount; i++ {
go s.replicationWorker()
}
defer func() {
for len(s.replicationTasks) != 0 {
<-s.replicationTasks
}
}()
for {
select {
case <-s.closeCh:
return
case <-ctx.Done():
return
case op := <-s.replicateCh:
err := s.replicate(op)
if err != nil {
s.log.Error("error during replication",
zap.String("err", err.Error()),
zap.Stringer("cid", op.cid),
zap.String("treeID", op.treeID))
}
}
}
}
func (s *Service) replicate(op movePair) error {
req := newApplyRequest(&op)
err := signMessage(req, s.key)
if err != nil {
return fmt.Errorf("can't sign data: %w", err)
}
nodes, localIndex, err := s.getContainerNodes(op.cid)
if err != nil {
return fmt.Errorf("can't get container nodes: %w", err)
}
for i := range nodes {
if i != localIndex {
s.replicationTasks <- replicationTask{nodes[i], req}
}
}
return nil
}
func (s *Service) pushToQueue(cid cidSDK.ID, treeID string, op *pilorama.LogMove) {
select {
case s.replicateCh <- movePair{
cid: cid,
treeID: treeID,
op: op,
}:
case <-s.closeCh:
}
}
func newApplyRequest(op *movePair) *ApplyRequest {
rawCID := make([]byte, sha256.Size)
op.cid.Encode(rawCID)
return &ApplyRequest{
Body: &ApplyRequest_Body{
ContainerId: rawCID,
TreeId: op.treeID,
Operation: &LogMove{
ParentId: op.op.Parent,
Meta: op.op.Meta.Bytes(),
ChildId: op.op.Child,
},
},
}
}
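For context, the write path that feeds this pipeline, condensed from the Add handler later in this diff: a successful local TreeMove is pushed to replicateCh, replicateLoop hands it to a worker, and the worker signs an ApplyRequest and sends it to every other container node.

// Condensed sketch; error handling and meta are elided.
logMove, err := s.forest.TreeMove(d, treeID, &pilorama.Move{
	Parent: parentID,
	Child:  pilorama.RootID,
})
if err == nil {
	s.pushToQueue(cid, treeID, logMove) // asynchronous fan-out to container nodes
}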

@@ -0,0 +1,558 @@
package tree
import (
"bytes"
"context"
"errors"
"fmt"
"github.com/nspcc-dev/neofs-node/pkg/local_object_storage/pilorama"
cidSDK "github.com/nspcc-dev/neofs-sdk-go/container/id"
"github.com/nspcc-dev/neofs-sdk-go/eacl"
netmapSDK "github.com/nspcc-dev/neofs-sdk-go/netmap"
"go.uber.org/zap"
)
// Service represents tree-service capable of working with multiple
// instances of CRDT trees.
type Service struct {
cfg
cache clientCache
replicateCh chan movePair
replicationTasks chan replicationTask
closeCh chan struct{}
containerCache containerCache
}
// MaxGetSubTreeDepth represents maximum allowed traversal depth in GetSubTree RPC.
const MaxGetSubTreeDepth = 10
var _ TreeServiceServer = (*Service)(nil)
// New creates new tree service instance.
func New(opts ...Option) *Service {
var s Service
s.containerCacheSize = defaultContainerCacheSize
s.replicatorChannelCapacity = defaultReplicatorCapacity
s.replicatorWorkerCount = defaultReplicatorWorkerCount
for i := range opts {
opts[i](&s.cfg)
}
if s.log == nil {
s.log = zap.NewNop()
}
s.cache.init()
s.closeCh = make(chan struct{})
s.replicateCh = make(chan movePair, s.replicatorChannelCapacity)
s.replicationTasks = make(chan replicationTask, s.replicatorWorkerCount)
s.containerCache.init(s.containerCacheSize)
return &s
}
// Start starts the service.
func (s *Service) Start(ctx context.Context) {
go s.replicateLoop(ctx)
}
// Shutdown stops the service and its background replication workers.
func (s *Service) Shutdown() {
close(s.closeCh)
}
// Add adds new node to the tree. Invoked by a client.
func (s *Service) Add(ctx context.Context, req *AddRequest) (*AddResponse, error) {
b := req.GetBody()
var cid cidSDK.ID
if err := cid.Decode(b.GetContainerId()); err != nil {
return nil, err
}
err := s.verifyClient(req, cid, b.GetBearerToken(), eacl.OperationPut)
if err != nil {
return nil, err
}
ns, pos, err := s.getContainerNodes(cid)
if err != nil {
return nil, err
}
if pos < 0 {
var resp *AddResponse
var outErr error
err = s.forEachNode(ctx, ns, func(c TreeServiceClient) bool {
resp, outErr = c.Add(ctx, req)
return true
})
if err != nil {
return nil, err
}
return resp, outErr
}
d := pilorama.CIDDescriptor{CID: cid, Position: pos, Size: len(ns)}
log, err := s.forest.TreeMove(d, b.GetTreeId(), &pilorama.Move{
Parent: b.GetParentId(),
Child: pilorama.RootID,
Meta: pilorama.Meta{Items: protoToMeta(b.GetMeta())},
})
if err != nil {
return nil, err
}
s.pushToQueue(cid, b.GetTreeId(), log)
return &AddResponse{
Body: &AddResponse_Body{
NodeId: log.Child,
},
}, nil
}
// AddByPath adds new node to the tree by path. Invoked by a client.
func (s *Service) AddByPath(ctx context.Context, req *AddByPathRequest) (*AddByPathResponse, error) {
b := req.GetBody()
var cid cidSDK.ID
if err := cid.Decode(b.GetContainerId()); err != nil {
return nil, err
}
err := s.verifyClient(req, cid, b.GetBearerToken(), eacl.OperationPut)
if err != nil {
return nil, err
}
ns, pos, err := s.getContainerNodes(cid)
if err != nil {
return nil, err
}
if pos < 0 {
var resp *AddByPathResponse
var outErr error
err = s.forEachNode(ctx, ns, func(c TreeServiceClient) bool {
resp, outErr = c.AddByPath(ctx, req)
return true
})
if err != nil {
return nil, err
}
return resp, outErr
}
meta := protoToMeta(b.GetMeta())
attr := b.GetPathAttribute()
if len(attr) == 0 {
attr = pilorama.AttributeFilename
}
d := pilorama.CIDDescriptor{CID: cid, Position: pos, Size: len(ns)}
logs, err := s.forest.TreeAddByPath(d, b.GetTreeId(), attr, b.GetPath(), meta)
if err != nil {
return nil, err
}
for i := range logs {
s.pushToQueue(cid, b.GetTreeId(), &logs[i])
}
nodes := make([]uint64, len(logs))
nodes[0] = logs[len(logs)-1].Child
for i, l := range logs[:len(logs)-1] {
nodes[i+1] = l.Child
}
return &AddByPathResponse{
Body: &AddByPathResponse_Body{
Nodes: nodes,
},
}, nil
}
// Remove removes node from the tree. Invoked by a client.
func (s *Service) Remove(ctx context.Context, req *RemoveRequest) (*RemoveResponse, error) {
b := req.GetBody()
var cid cidSDK.ID
if err := cid.Decode(b.GetContainerId()); err != nil {
return nil, err
}
err := s.verifyClient(req, cid, b.GetBearerToken(), eacl.OperationPut)
if err != nil {
return nil, err
}
ns, pos, err := s.getContainerNodes(cid)
if err != nil {
return nil, err
}
if pos < 0 {
var resp *RemoveResponse
var outErr error
err = s.forEachNode(ctx, ns, func(c TreeServiceClient) bool {
resp, outErr = c.Remove(ctx, req)
return true
})
if err != nil {
return nil, err
}
return resp, outErr
}
if b.GetNodeId() == pilorama.RootID {
return nil, fmt.Errorf("node with ID %d is root and can't be removed", b.GetNodeId())
}
d := pilorama.CIDDescriptor{CID: cid, Position: pos, Size: len(ns)}
log, err := s.forest.TreeMove(d, b.GetTreeId(), &pilorama.Move{
Parent: pilorama.TrashID,
Child: b.GetNodeId(),
})
if err != nil {
return nil, err
}
s.pushToQueue(cid, b.GetTreeId(), log)
return new(RemoveResponse), nil
}
// Move applies client operation to the specified tree and pushes in queue
// for replication on other nodes.
func (s *Service) Move(ctx context.Context, req *MoveRequest) (*MoveResponse, error) {
b := req.GetBody()
var cid cidSDK.ID
if err := cid.Decode(b.GetContainerId()); err != nil {
return nil, err
}
err := s.verifyClient(req, cid, b.GetBearerToken(), eacl.OperationPut)
if err != nil {
return nil, err
}
ns, pos, err := s.getContainerNodes(cid)
if err != nil {
return nil, err
}
if pos < 0 {
var resp *MoveResponse
var outErr error
err = s.forEachNode(ctx, ns, func(c TreeServiceClient) bool {
resp, outErr = c.Move(ctx, req)
return true
})
if err != nil {
return nil, err
}
return resp, outErr
}
if b.GetNodeId() == pilorama.RootID {
return nil, fmt.Errorf("node with ID %d is root and can't be moved", b.GetNodeId())
}
d := pilorama.CIDDescriptor{CID: cid, Position: pos, Size: len(ns)}
log, err := s.forest.TreeMove(d, b.GetTreeId(), &pilorama.Move{
Parent: b.GetParentId(),
Child: b.GetNodeId(),
Meta: pilorama.Meta{Items: protoToMeta(b.GetMeta())},
})
if err != nil {
return nil, err
}
s.pushToQueue(cid, b.GetTreeId(), log)
return new(MoveResponse), nil
}
// GetNodeByPath returns the list of node IDs corresponding to a specific filepath.
func (s *Service) GetNodeByPath(ctx context.Context, req *GetNodeByPathRequest) (*GetNodeByPathResponse, error) {
b := req.GetBody()
var cid cidSDK.ID
if err := cid.Decode(b.GetContainerId()); err != nil {
return nil, err
}
err := s.verifyClient(req, cid, b.GetBearerToken(), eacl.OperationGet)
if err != nil {
return nil, err
}
ns, pos, err := s.getContainerNodes(cid)
if err != nil {
return nil, err
}
if pos < 0 {
var resp *GetNodeByPathResponse
var outErr error
err = s.forEachNode(ctx, ns, func(c TreeServiceClient) bool {
resp, outErr = c.GetNodeByPath(ctx, req)
return true
})
if err != nil {
return nil, err
}
return resp, outErr
}
attr := b.GetPathAttribute()
if len(attr) == 0 {
attr = pilorama.AttributeFilename
}
nodes, err := s.forest.TreeGetByPath(cid, b.GetTreeId(), attr, b.GetPath(), b.GetLatestOnly())
if err != nil {
return nil, err
}
info := make([]*GetNodeByPathResponse_Info, 0, len(nodes))
for _, node := range nodes {
m, _, err := s.forest.TreeGetMeta(cid, b.GetTreeId(), node)
if err != nil {
return nil, err
}
var x GetNodeByPathResponse_Info
x.NodeId = node
x.Timestamp = m.Time
if b.AllAttributes {
x.Meta = metaToProto(m.Items)
} else {
for _, kv := range m.Items {
for _, attr := range b.GetAttributes() {
if kv.Key == attr {
x.Meta = append(x.Meta, &KeyValue{
Key: kv.Key,
Value: kv.Value,
})
break
}
}
}
}
info = append(info, &x)
}
return &GetNodeByPathResponse{
Body: &GetNodeByPathResponse_Body{
Nodes: info,
},
}, nil
}
type nodeDepthPair struct {
nodes []uint64
depth uint32
}
// GetSubTree streams the subtree rooted at a specific node, breadth-first, down to the requested depth.
func (s *Service) GetSubTree(req *GetSubTreeRequest, srv TreeService_GetSubTreeServer) error {
b := req.GetBody()
if b.GetDepth() > MaxGetSubTreeDepth {
return fmt.Errorf("too big depth: max=%d, got=%d", MaxGetSubTreeDepth, b.GetDepth())
}
var cid cidSDK.ID
if err := cid.Decode(b.GetContainerId()); err != nil {
return err
}
err := s.verifyClient(req, cid, b.GetBearerToken(), eacl.OperationGet)
if err != nil {
return err
}
ns, pos, err := s.getContainerNodes(cid)
if err != nil {
return err
}
if pos < 0 {
var cli TreeService_GetSubTreeClient
var outErr error
err = s.forEachNode(srv.Context(), ns, func(c TreeServiceClient) bool {
cli, outErr = c.GetSubTree(srv.Context(), req)
return true
})
if err != nil {
return err
} else if outErr != nil {
return outErr
}
for resp, err := cli.Recv(); err == nil; resp, err = cli.Recv() {
if err := srv.Send(resp); err != nil {
return err
}
}
return nil
}
queue := []nodeDepthPair{{[]uint64{b.GetRootId()}, 0}}
for len(queue) != 0 {
for _, nodeID := range queue[0].nodes {
m, p, err := s.forest.TreeGetMeta(cid, b.GetTreeId(), nodeID)
if err != nil {
return err
}
err = srv.Send(&GetSubTreeResponse{
Body: &GetSubTreeResponse_Body{
NodeId: nodeID,
ParentId: p,
Timestamp: m.Time,
Meta: metaToProto(m.Items),
},
})
if err != nil {
return err
}
}
if queue[0].depth < b.GetDepth() {
for _, nodeID := range queue[0].nodes {
children, err := s.forest.TreeGetChildren(cid, b.GetTreeId(), nodeID)
if err != nil {
return err
}
queue = append(queue, nodeDepthPair{children, queue[0].depth + 1})
}
}
queue = queue[1:]
}
return nil
}
// Apply locally applies operation from the remote node to the tree.
func (s *Service) Apply(_ context.Context, req *ApplyRequest) (*ApplyResponse, error) {
err := verifyMessage(req)
if err != nil {
return nil, err
}
var cid cidSDK.ID
if err := cid.Decode(req.GetBody().GetContainerId()); err != nil {
return nil, err
}
key := req.GetSignature().GetKey()
_, pos, size, err := s.getContainerInfo(cid, key)
if err != nil {
return nil, err
}
if pos < 0 {
return nil, errors.New("`Apply` request must be signed by a container node")
}
op := req.GetBody().GetOperation()
var meta pilorama.Meta
if err := meta.FromBytes(op.GetMeta()); err != nil {
return nil, fmt.Errorf("can't parse meta-information: %w", err)
}
d := pilorama.CIDDescriptor{CID: cid, Position: pos, Size: size}
resp := &ApplyResponse{Body: &ApplyResponse_Body{}, Signature: &Signature{}}
return resp, s.forest.TreeApply(d, req.GetBody().GetTreeId(), []pilorama.Move{{
Parent: op.GetParentId(),
Child: op.GetChildId(),
Meta: meta,
}})
}
// GetOpLog streams logged operations starting from the requested height.
func (s *Service) GetOpLog(req *GetOpLogRequest, srv TreeService_GetOpLogServer) error {
b := req.GetBody()
var cid cidSDK.ID
if err := cid.Decode(req.GetBody().GetContainerId()); err != nil {
return err
}
ns, pos, err := s.getContainerNodes(cid)
if err != nil {
return err
}
if pos < 0 {
var cli TreeService_GetOpLogClient
var outErr error
err := s.forEachNode(srv.Context(), ns, func(c TreeServiceClient) bool {
cli, outErr = c.GetOpLog(srv.Context(), req)
return true
})
if err != nil {
return err
} else if outErr != nil {
return outErr
}
for resp, err := cli.Recv(); err == nil; resp, err = cli.Recv() {
if err := srv.Send(resp); err != nil {
return err
}
}
return nil
}
h := b.GetHeight()
for {
lm, err := s.forest.TreeGetOpLog(cid, b.GetTreeId(), h)
if err != nil || lm.Time == 0 {
return err
}
err = srv.Send(&GetOpLogResponse{
Body: &GetOpLogResponse_Body{
Operation: &LogMove{
ParentId: lm.Parent,
Meta: lm.Meta.Bytes(),
ChildId: lm.Child,
},
},
})
if err != nil {
return err
}
h = lm.Time + 1
}
}
func protoToMeta(arr []*KeyValue) []pilorama.KeyValue {
meta := make([]pilorama.KeyValue, len(arr))
for i, kv := range arr {
if kv != nil {
meta[i].Key = kv.Key
meta[i].Value = kv.Value
}
}
return meta
}
func metaToProto(arr []pilorama.KeyValue) []*KeyValue {
meta := make([]*KeyValue, len(arr))
for i, kv := range arr {
meta[i] = &KeyValue{
Key: kv.Key,
Value: kv.Value,
}
}
return meta
}
// getContainerInfo returns the list of container nodes, the position in the container of the node
// with the given public key, and the total number of nodes in all replicas.
func (s *Service) getContainerInfo(cid cidSDK.ID, pub []byte) ([]netmapSDK.NodeInfo, int, int, error) {
cntNodes, _, err := s.getContainerNodes(cid)
if err != nil {
return nil, 0, 0, err
}
for i, node := range cntNodes {
if bytes.Equal(node.PublicKey(), pub) {
return cntNodes, i, len(cntNodes), nil
}
}
return cntNodes, -1, len(cntNodes), nil
}

BIN pkg/services/tree/service.pb.go (generated binary, not shown)

@@ -0,0 +1,313 @@
/**
* Service for working with CRDT tree.
*/
syntax = "proto3";
package tree;
import "pkg/services/tree/types.proto";
option go_package = "github.com/nspcc-dev/neofs-node/pkg/services/tree";
service TreeService {
/* Client API */
// Add adds new node to the tree. Invoked by a client.
rpc Add (AddRequest) returns (AddResponse);
// AddByPath adds new node to the tree by path. Invoked by a client.
rpc AddByPath (AddByPathRequest) returns (AddByPathResponse);
// Remove removes node from the tree. Invoked by a client.
rpc Remove (RemoveRequest) returns (RemoveResponse);
// Move moves node from one parent to another. Invoked by a client.
rpc Move (MoveRequest) returns (MoveResponse);
// GetNodeByPath returns list of IDs corresponding to a specific filepath.
rpc GetNodeByPath (GetNodeByPathRequest) returns (GetNodeByPathResponse);
// GetSubTree returns tree corresponding to a specific node.
rpc GetSubTree (GetSubTreeRequest) returns (stream GetSubTreeResponse);
/* Synchronization API */
// Apply pushes log operation from another node to the current.
// The request must be signed by a container node.
rpc Apply (ApplyRequest) returns (ApplyResponse);
// GetOpLog returns a stream of logged operations starting from some height.
rpc GetOpLog(GetOpLogRequest) returns (stream GetOpLogResponse);
}
message AddRequest {
message Body {
// Container ID in V2 format.
bytes container_id = 1;
// The name of the tree.
string tree_id = 2;
// ID of the parent to attach node to.
uint64 parent_id = 3;
// Key-Value pairs with meta information.
repeated KeyValue meta = 4;
// Bearer token in V2 format.
bytes bearer_token = 5;
}
// Request body.
Body body = 1;
// Request signature.
Signature signature = 2;
}
message AddResponse {
message Body {
// ID of the created node.
uint64 node_id = 1;
}
// Response body.
Body body = 1;
// Response signature.
Signature signature = 2;
};
message AddByPathRequest {
message Body {
// Container ID in V2 format.
bytes container_id = 1;
// The name of the tree.
string tree_id = 2;
// Attribute to build path with. Default: "FileName".
string path_attribute = 3;
// List of path components.
repeated string path = 4;
// Node meta-information.
repeated KeyValue meta = 5;
// Bearer token in V2 format.
bytes bearer_token = 6;
}
// Request body.
Body body = 1;
// Request signature.
Signature signature = 2;
}
message AddByPathResponse {
message Body {
// List of all created nodes. The first one is the leaf.
repeated uint64 nodes = 1;
// ID of the parent node where new nodes were attached.
uint64 parent_id = 2;
}
// Response body.
Body body = 1;
// Response signature.
Signature signature = 2;
};
message RemoveRequest {
message Body {
// Container ID in V2 format.
bytes container_id = 1;
// The name of the tree.
string tree_id = 2;
// ID of the node to remove.
uint64 node_id = 3;
// Bearer token in V2 format.
bytes bearer_token = 4;
}
// Request body.
Body body = 1;
// Request signature.
Signature signature = 2;
}
message RemoveResponse {
message Body {
}
// Response body.
Body body = 1;
// Response signature.
Signature signature = 2;
};
message MoveRequest {
message Body {
// TODO import neo.fs.v2.refs.ContainerID directly.
// Container ID in V2 format.
bytes container_id = 1;
// The name of the tree.
string tree_id = 2;
// ID of the new parent.
uint64 parent_id = 3;
// ID of the node to move.
uint64 node_id = 4;
// Node meta-information.
repeated KeyValue meta = 5;
// Bearer token in V2 format.
bytes bearer_token = 6;
}
// Request body.
Body body = 1;
// Request signature.
Signature signature = 2;
}
message MoveResponse {
message Body {
}
// Response body.
Body body = 1;
// Response signature.
Signature signature = 2;
};
message GetNodeByPathRequest {
message Body {
// Container ID in V2 format.
bytes container_id = 1;
// The name of the tree.
string tree_id = 2;
// Attribute to build path with. Default: "FileName".
string path_attribute = 3;
// List of path components.
repeated string path = 4;
// List of attributes to include in response.
repeated string attributes = 5;
// Flag to return only the latest version of node.
bool latest_only = 6;
// Flag to return all stored attributes.
bool all_attributes = 7;
// Bearer token in V2 format.
bytes bearer_token = 8;
}
// Request body.
Body body = 1;
// Request signature.
Signature signature = 2;
}
message GetNodeByPathResponse {
// Information about a single tree node.
message Info {
// Node ID.
uint64 node_id = 1;
// Timestamp of the last operation with the node.
uint64 timestamp = 2;
// Node meta-information.
repeated KeyValue meta = 3;
}
message Body {
// List of nodes stored by path.
repeated Info nodes = 1;
}
// Response body.
Body body = 1;
// Response signature.
Signature signature = 2;
};
message GetSubTreeRequest {
message Body {
// Container ID in V2 format.
bytes container_id = 1;
// The name of the tree.
string tree_id = 2;
// ID of the root node of a subtree.
uint64 root_id = 3;
// Optional depth of the traversal. Zero means return only root.
// Maximum depth is 10.
uint32 depth = 4;
// Bearer token in V2 format.
bytes bearer_token = 5;
}
// Request body.
Body body = 1;
// Request signature.
Signature signature = 2;
}
message GetSubTreeResponse {
message Body {
// ID of the node.
uint64 node_id = 1;
// ID of the parent.
uint64 parent_id = 2;
// Time node was first added to a tree.
uint64 timestamp = 3;
// Node meta-information.
repeated KeyValue meta = 4;
}
// Response body.
Body body = 1;
// Response signature.
Signature signature = 2;
};
message ApplyRequest {
message Body {
// Container ID in V2 format.
bytes container_id = 1;
// The name of the tree.
string tree_id = 2;
// Operation to be applied.
LogMove operation = 3;
}
// Request body.
Body body = 1;
// Request signature.
Signature signature = 2;
}
message ApplyResponse {
message Body {
}
// Response body.
Body body = 1;
// Response signature.
Signature signature = 2;
};
message GetOpLogRequest {
message Body {
// Container ID in V2 format.
bytes container_id = 1;
// The name of the tree.
string tree_id = 2;
// Starting height to return logs from.
uint64 height = 3;
// Amount of operations to return.
uint64 count = 4;
}
// Request body.
Body body = 1;
// Request signature.
Signature signature = 2;
}
message GetOpLogResponse {
message Body {
// Operation on a tree.
LogMove operation = 1;
}
// Response body.
Body body = 1;
// Response signature.
Signature signature = 2;
};
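A client-side consumption sketch for the GetOpLog stream (client acquisition and variable names are assumptions; resuming from the next height mirrors the server loop shown earlier):

// Drain the operation log from height 0 until the stream ends.
stream, err := cli.GetOpLog(ctx, &GetOpLogRequest{
	Body: &GetOpLogRequest_Body{
		ContainerId: rawCID,
		TreeId:      treeID,
		Height:      0,
	},
})
if err != nil {
	return err
}
for {
	resp, err := stream.Recv()
	if err != nil {
		break // io.EOF on a clean end of stream
	}
	op := resp.GetBody().GetOperation()
	_ = op // apply locally, e.g. through pilorama.Forest.TreeApply
}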

BIN pkg/services/tree/service_grpc.pb.go (generated binary, not shown)

Some files were not shown because too many files have changed in this diff.