Commit graph

2216 commits

cedbd380f2 [#2197] pilorama: Close database in degraded mode
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2023-01-25 15:31:47 +03:00
Pavel Karpy
1d21b1e3e8 [#1978] node: Do not drop clients on split errors
After the reconnection interval feature was introduced, there was a bug
related to collecting big objects: a split error is returned by a client
directly, not via an API status, and was therefore treated as a connection
error.
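
A minimal sketch of the distinction this fix relies on (the error type below
is a stand-in for the SDK's split-info error): a split error returned directly
by the client describes the object layout, not the transport, so it must not
count against the connection.

```go
package main

import (
	"errors"
	"fmt"
)

// SplitInfoError stands in for the SDK error a client returns directly
// (not as an API status) when it hits a "virtual" big object.
type SplitInfoError struct{ LastPart string }

func (e *SplitInfoError) Error() string { return "object is split, last part: " + e.LastPart }

// isConnectionError decides whether an error should trigger the reconnection
// logic. Split errors describe the object, not the transport, so they must
// not mark the client as failed.
func isConnectionError(err error) bool {
	var si *SplitInfoError
	if errors.As(err, &si) {
		return false
	}
	return err != nil
}

func main() {
	err := fmt.Errorf("get object: %w", &SplitInfoError{LastPart: "abc"})
	fmt.Println(isConnectionError(err)) // false: keep the client, assemble the parts instead
}
```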

Signed-off-by: Pavel Karpy <p.karpy@yadro.com>
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2023-01-25 15:31:47 +03:00
554b85411f [#2190] services/object: Log service error with INFO level
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2023-01-25 15:31:47 +03:00
b0ad1b9ed2 [#2193] pilorama: Use do in TreeMove
It should be similar to `TreeAddByPath`: `applyOperation` is meant for
`Apply`, where the operation can be inserted in the middle of the log.
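
A rough illustration of the split, assuming locally produced operations always
land at the end of the log (the helper names come from the message; their
bodies here are invented):

```go
package main

import "fmt"

type op struct{ time uint64 }

type forest struct{ log []op }

// do appends a locally produced operation: its timestamp is always the newest,
// so no reordering is needed (the path TreeMove and TreeAddByPath take).
func (f *forest) do(o op) { f.log = append(f.log, o) }

// applyOperation handles an operation received from another node: it may land
// in the middle of the log, after which the tail has to be re-applied.
func (f *forest) applyOperation(o op) {
	i := len(f.log)
	for i > 0 && f.log[i-1].time > o.time {
		i--
	}
	f.log = append(f.log[:i], append([]op{o}, f.log[i:]...)...)
}

func main() {
	var f forest
	f.do(op{1})
	f.do(op{3})
	f.applyOperation(op{2}) // inserted between 1 and 3
	fmt.Println(f.log)
}
```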

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2023-01-25 15:31:47 +03:00
ba393e3e91 [#2188] engine: Fix panic during setting shard mode
Under load, changing the shard mode can lead to the shard being removed from
the list while some other PUT is still using it.
```
Dec 28 07:01:26 az neofs-node[364505]: panic: runtime error: invalid memory address or nil pointer dereference
Dec 28 07:01:26 az neofs-node[364505]: [signal SIGSEGV: segmentation violation code=0x1 addr=0x20 pc=0xc9fbb1]
Dec 28 07:01:26 az neofs-node[364505]: goroutine 11791912 [running]:
Dec 28 07:01:26 az neofs-node[364505]: github.com/nspcc-dev/neofs-node/pkg/local_object_storage/engine.(*StorageEngine).putToShard(0xc000435490, {0xc0003f7a28?, 0xc0001192c0?}, 0x2, {0x0, 0x>
Dec 28 07:01:26 az neofs-node[364505]:         github.com/nspcc-dev/neofs-node/pkg/local_object_storage/engine/put.go:91 +0x1b1
Dec 28 07:01:26 az neofs-node[364505]: github.com/nspcc-dev/neofs-node/pkg/local_object_storage/engine.(*StorageEngine).put.func1(0xc000435490?, {0xc0003f7a28?, 0xc0001192c0?})
Dec 28 07:01:26 az neofs-node[364505]:         github.com/nspcc-dev/neofs-node/pkg/local_object_storage/engine/put.go:71 +0x19c
Dec 28 07:01:26 az neofs-node[364505]: github.com/nspcc-dev/neofs-node/pkg/local_object_storage/engine.(*StorageEngine).iterateOverSortedShards(0x1?, {{0x62, 0x23, 0xfe, 0x60, 0x67, 0xd5, 0x>
Dec 28 07:01:26 az neofs-node[364505]:         github.com/nspcc-dev/neofs-node/pkg/local_object_storage/engine/shards.go:225 +0xc8
Dec 28 07:01:26 az neofs-node[364505]: github.com/nspcc-dev/neofs-node/pkg/local_object_storage/engine.(*StorageEngine).put(0xc000435490, {0x1?})
Dec 28 07:01:26 az neofs-node[364505]:         github.com/nspcc-dev/neofs-node/pkg/local_object_storage/engine/put.go:66 +0x2a9
Dec 28 07:01:26 az neofs-node[364505]: github.com/nspcc-dev/neofs-node/pkg/local_object_storage/engine.(*StorageEngine).Put.func1()
Dec 28 07:01:26 az neofs-node[364505]:         github.com/nspcc-dev/neofs-node/pkg/local_object_storage/engine/put.go:43 +0x2a
Dec 28 07:01:26 az neofs-node[364505]: github.com/nspcc-dev/neofs-node/pkg/local_object_storage/engine.(*StorageEngine).execIfNotBlocked(0x8?, 0x38?)
Dec 28 07:01:26 az neofs-node[364505]:         github.com/nspcc-dev/neofs-node/pkg/local_object_storage/engine/control.go:147 +0xcf
Dec 28 07:01:26 az neofs-node[364505]: github.com/nspcc-dev/neofs-node/pkg/local_object_storage/engine.(*StorageEngine).Put(0xc4df775a80?, {0x0?})
Dec 28 07:01:26 az neofs-node[364505]:         github.com/nspcc-dev/neofs-node/pkg/local_object_storage/engine/put.go:42 +0x65
Dec 28 07:01:26 az neofs-node[364505]: github.com/nspcc-dev/neofs-node/pkg/local_object_storage/engine.Put(0xc06d928b80?, 0xc06b1b8dc8?)
Dec 28 07:01:26 az neofs-node[364505]:         github.com/nspcc-dev/neofs-node/pkg/local_object_storage/engine/put.go:158 +0x19
Dec 28 07:01:26 az neofs-node[364505]: main.engineWithoutNotifications.Put({0x20301b?}, 0x20301b?)
```
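
A hedged sketch of one way to avoid such a race (the engine's real types and
locking differ): iterate over a snapshot of the shard list taken under a lock,
so a concurrent mode change cannot leave the PUT goroutine with a dangling
entry.

```go
package main

import (
	"fmt"
	"sync"
)

type shard struct{ id string }

type engine struct {
	mtx    sync.RWMutex
	shards map[string]*shard
}

// sortedShards returns a snapshot of the shard list. Iterating over the copy
// means a concurrent mode change removing an entry cannot leave the PUT
// goroutine dereferencing a nil pointer.
func (e *engine) sortedShards() []*shard {
	e.mtx.RLock()
	defer e.mtx.RUnlock()
	res := make([]*shard, 0, len(e.shards))
	for _, sh := range e.shards {
		res = append(res, sh)
	}
	return res
}

func (e *engine) removeShard(id string) {
	e.mtx.Lock()
	defer e.mtx.Unlock()
	delete(e.shards, id)
}

func main() {
	e := &engine{shards: map[string]*shard{"a": {id: "a"}, "b": {id: "b"}}}
	snapshot := e.sortedShards()
	e.removeShard("a") // concurrent shard-mode change
	for _, sh := range snapshot {
		fmt.Println("put to", sh.id) // snapshot entries are still valid pointers
	}
}
```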

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2023-01-25 15:31:47 +03:00
3d57f4c961 [#2179] test: Fix test TestEvacuateNetwork/multiple_shards,_evacuate_many
Signed-off-by: Anton Nikiforov <an.nikiforov@yadro.com>
2023-01-24 13:37:49 +03:00
3d1d2ee7b1 [#11] Regenerate proto files
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2023-01-12 08:55:47 +03:00
054bc4a727 [#11] Rename contract-related NeoFS occurrences
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2023-01-12 08:55:47 +03:00
9cb4b4cc17 [#11] Rename neofsid contract to frostfsid
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2023-01-12 08:55:47 +03:00
19d180b510 [#11] Rename neofs contract to frostfs
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2023-01-12 08:55:47 +03:00
9936b112b8 [#5] blobstor: Use generic LRU cache
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2022-12-31 23:04:06 +03:00
f0be0befc5 [#5] services/object_manager: Use generic LRU cache
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2022-12-31 23:04:06 +03:00
1b3374ac7f [#5] services/tree: Use generic LRU cache
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2022-12-31 23:04:06 +03:00
d0a0432a51 [#5] innerring: Use generic LRU cache
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2022-12-31 23:04:06 +03:00
4155c1bdff [#5] writecache: Use generic LRU cache
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2022-12-31 23:04:06 +03:00
8f61cc1dcc [#5] policer: Use generic LRU client
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2022-12-31 23:04:06 +03:00
55b403e0ee [#5] morph/client: Use generic LRU cache
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2022-12-31 23:04:06 +03:00
0272218eb9 [#2184] compression: Properly calculate upper bound
If the data is not compressible, allocating `len(data)` leads to a slice
reallocation. For compressible data the results at small sizes are flaky and
we allocate a bit more. Still, it feels right to use the provided function if
we need to pick any size at all.
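
A sketch of the allocation pattern, assuming a klauspost/compress zstd encoder
recent enough to expose a worst-case bound via `MaxEncodedSize` (the actual
call site in the codebase may look different):

```go
package main

import (
	"fmt"

	"github.com/klauspost/compress/zstd"
)

func main() {
	enc, err := zstd.NewWriter(nil)
	if err != nil {
		panic(err)
	}
	defer enc.Close()

	data := make([]byte, 1024) // we don't know in advance whether it compresses

	// Allocating len(data) is not enough for incompressible input (zstd framing
	// adds overhead), so EncodeAll would have to grow the slice. Asking the
	// encoder for its worst-case bound avoids the reallocation.
	dst := make([]byte, 0, enc.MaxEncodedSize(len(data)))
	compressed := enc.EncodeAll(data, dst)

	fmt.Println(len(compressed), cap(dst))
}
```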

```
name                                                           old time/op    new time/op    delta
Compression/size=128/zeroed_slice-8                              2.23µs ±12%    2.06µs ± 6%   -7.35%  (p=0.009 n=10+10)
Compression/size=128/not_so_random_slice_(block_=_123)-8         19.0µs ±10%    15.8µs ±16%  -17.09%  (p=0.000 n=9+10)
Compression/size=128/random_slice-8                              17.6µs ±15%    16.1µs ±16%     ~     (p=0.075 n=10+10)
Compression/size=1024/zeroed_slice-8                             3.05µs ±11%    2.84µs ±10%     ~     (p=0.089 n=10+10)
Compression/size=1024/not_so_random_slice_(block_=_123)-8        18.1µs ± 6%    18.2µs ±12%     ~     (p=0.971 n=10+10)
Compression/size=1024/random_slice-8                             48.6µs ± 6%    45.6µs ± 5%   -6.07%  (p=0.006 n=10+9)
Compression/size=32768/zeroed_slice-8                            26.8µs ± 3%    28.7µs ± 8%   +7.23%  (p=0.001 n=10+10)
Compression/size=32768/not_so_random_slice_(block_=_123)-8       44.3µs ± 8%    43.7µs ±13%     ~     (p=0.762 n=8+10)
Compression/size=32768/random_slice-8                            97.3µs ±32%    68.9µs ±15%  -29.13%  (p=0.000 n=10+10)
Compression/size=33554432/zeroed_slice-8                         29.8ms ± 9%    30.3ms ±17%     ~     (p=1.000 n=9+9)
Compression/size=33554432/not_so_random_slice_(block_=_123)-8    33.1ms ±14%    30.3ms ±11%   -8.61%  (p=0.043 n=10+10)
Compression/size=33554432/random_slice-8                         41.7ms ± 3%    30.1ms ± 8%  -27.72%  (p=0.000 n=9+10)

name                                                           old alloc/op   new alloc/op   delta
Compression/size=128/zeroed_slice-8                                128B ± 0%      144B ± 0%  +12.50%  (p=0.000 n=10+10)
Compression/size=128/not_so_random_slice_(block_=_123)-8           384B ± 0%      144B ± 0%  -62.50%  (p=0.000 n=10+10)
Compression/size=128/random_slice-8                                384B ± 0%      144B ± 0%  -62.50%  (p=0.000 n=10+10)
Compression/size=1024/zeroed_slice-8                             1.02kB ± 0%    1.15kB ± 0%  +12.50%  (p=0.000 n=10+10)
Compression/size=1024/not_so_random_slice_(block_=_123)-8        1.02kB ± 0%    1.15kB ± 0%  +12.50%  (p=0.000 n=10+10)
Compression/size=1024/random_slice-8                             2.56kB ± 0%    1.15kB ± 0%  -55.00%  (p=0.000 n=10+10)
Compression/size=32768/zeroed_slice-8                            32.8kB ± 0%    41.0kB ± 0%  +25.00%  (p=0.000 n=10+10)
Compression/size=32768/not_so_random_slice_(block_=_123)-8       32.8kB ± 0%    41.0kB ± 0%  +25.00%  (p=0.000 n=10+10)
Compression/size=32768/random_slice-8                            81.9kB ± 0%    41.0kB ± 0%  -50.00%  (p=0.000 n=10+10)
Compression/size=33554432/zeroed_slice-8                         33.6MB ± 0%    33.6MB ± 0%   +0.02%  (p=0.000 n=9+9)
Compression/size=33554432/not_so_random_slice_(block_=_123)-8    33.6MB ± 0%    33.6MB ± 0%   +0.02%  (p=0.000 n=8+10)
Compression/size=33554432/random_slice-8                         75.5MB ± 0%    33.6MB ± 0%  -55.55%  (p=0.000 n=10+10)

name                                                           old allocs/op  new allocs/op  delta
Compression/size=128/zeroed_slice-8                                1.00 ± 0%      1.00 ± 0%     ~     (all equal)
Compression/size=128/not_so_random_slice_(block_=_123)-8           2.00 ± 0%      1.00 ± 0%  -50.00%  (p=0.000 n=10+10)
Compression/size=128/random_slice-8                                2.00 ± 0%      1.00 ± 0%  -50.00%  (p=0.000 n=10+10)
Compression/size=1024/zeroed_slice-8                               1.00 ± 0%      1.00 ± 0%     ~     (all equal)
Compression/size=1024/not_so_random_slice_(block_=_123)-8          1.00 ± 0%      1.00 ± 0%     ~     (all equal)
Compression/size=1024/random_slice-8                               2.00 ± 0%      1.00 ± 0%  -50.00%  (p=0.000 n=10+10)
Compression/size=32768/zeroed_slice-8                              1.00 ± 0%      1.00 ± 0%     ~     (all equal)
Compression/size=32768/not_so_random_slice_(block_=_123)-8         1.00 ± 0%      1.00 ± 0%     ~     (all equal)
Compression/size=32768/random_slice-8                              2.00 ± 0%      1.00 ± 0%  -50.00%  (p=0.000 n=10+10)
Compression/size=33554432/zeroed_slice-8                           1.00 ± 0%      1.00 ± 0%     ~     (all equal)
Compression/size=33554432/not_so_random_slice_(block_=_123)-8      1.00 ± 0%      1.00 ± 0%     ~     (all equal)
Compression/size=33554432/random_slice-8                           2.00 ± 0%      1.00 ± 0%  -50.00%  (p=0.000 n=10+10)
```

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2022-12-30 11:07:35 +03:00
741482c26f [#2164] neofs-node: Allow setting the reconnect interval
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2022-12-30 11:07:35 +03:00
6f5edac730 [#2164] network/cache: Do not reconnect to failed clients immediately
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2022-12-30 11:07:35 +03:00
f3caf6acfe [#2164] network/cache: Separate mutex for addr
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2022-12-30 11:07:35 +03:00
0ace28e43d [#2175] blobovniczatree: Close all non-active blobovniczas
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2022-12-30 11:07:35 +03:00
c1cf418956 [#2175] blobovniczatree: Make function parameters more descriptive
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2022-12-30 11:07:35 +03:00
b4e90cdf51 [#2165] pilorama: Optimize TreeApply when used for synchronization
Because synchronization will most likely apply already existing operations,
it is much faster to check for their presence in a read transaction. However,
always doing this would degrade the performance of a normal `Apply`, which,
let's be honest, is already not great. Thus we add a separate parameter that
specifies whether this logic is enabled.
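
A hedged sketch of the two paths on top of bbolt (bucket layout and names are
invented): with the flag set, existence is checked in a cheap read
transaction, and only genuinely new operations pay for a write transaction.

```go
package main

import (
	"encoding/binary"
	"fmt"

	bolt "go.etcd.io/bbolt"
)

// applyOp stores an operation keyed by its timestamp. When backgroundSync is
// true (the synchronization path), most operations already exist, so a read
// transaction filters them out before the expensive write transaction.
func applyOp(db *bolt.DB, ts uint64, body []byte, backgroundSync bool) error {
	key := make([]byte, 8)
	binary.BigEndian.PutUint64(key, ts)

	if backgroundSync {
		var exists bool
		if err := db.View(func(tx *bolt.Tx) error {
			if b := tx.Bucket([]byte("log")); b != nil {
				exists = b.Get(key) != nil
			}
			return nil
		}); err != nil {
			return err
		}
		if exists {
			return nil // already applied, skip the write transaction
		}
	}

	return db.Update(func(tx *bolt.Tx) error {
		b, err := tx.CreateBucketIfNotExists([]byte("log"))
		if err != nil {
			return err
		}
		return b.Put(key, body)
	})
}

func main() {
	db, err := bolt.Open("pilorama-example.db", 0o600, nil)
	if err != nil {
		panic(err)
	}
	defer db.Close()

	fmt.Println(applyOp(db, 42, []byte("op"), true))
}
```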

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2022-12-30 11:07:35 +03:00
f9fcd85363 [#2165] services/tree: Remember starting height for the synchronization
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2022-12-30 11:07:35 +03:00
06137dbf8e [#2165] services/tree: Do not export synchronizeAllTrees
It is used only privately.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2022-12-30 11:07:35 +03:00
c299b98afe [#2165] services/tree: Parallelize synchronization
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2022-12-30 11:07:35 +03:00
de9957e076 [#2165] services/tree: Always synchronize all containers
In case of split-brain we must synchronize everything.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2022-12-30 11:07:35 +03:00
Pavel Karpy
6a4e5e6f0a [#2144] node: Try node's private key if dynamic token fetching failed
A `GETRANGEHASH` request spawns `GETRANGE` requests if the object cannot be
found locally. If the original request carries a session token, it can be
static, and therefore fetching the session key may fail. As a best effort,
the node can request the object's range with its own key.
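
A minimal sketch of the fallback with placeholder types instead of the node's
key storage: if the session key cannot be fetched for the token, the spawned
request is signed with the node's own key rather than failing the whole
request.

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"errors"
	"fmt"
)

var errNoSessionKey = errors.New("private token not found")

// sessionKey stands in for fetching the key bound to a session token; for a
// static session created by the client the node does not own that key.
func sessionKey(tokenID []byte) (*ecdsa.PrivateKey, error) {
	return nil, errNoSessionKey
}

// keyForRange returns the key used to sign the spawned GETRANGE request: the
// session key when available, the node's own key as a best effort otherwise.
func keyForRange(tokenID []byte, nodeKey *ecdsa.PrivateKey) (*ecdsa.PrivateKey, error) {
	k, err := sessionKey(tokenID)
	if err == nil {
		return k, nil
	}
	if errors.Is(err, errNoSessionKey) {
		return nodeKey, nil // fall back to the node's private key
	}
	return nil, err
}

func main() {
	nodeKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	k, err := keyForRange([]byte("token"), nodeKey)
	fmt.Println(k == nodeKey, err) // true <nil>
}
```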

Signed-off-by: Pavel Karpy <p.karpy@yadro.com>
2022-12-30 11:07:35 +03:00
Pavel Karpy
86a4fba571 [#2144] node: Clarify KeyStorage.GetKey method
Update the doc and fix the API status error return.

Signed-off-by: Pavel Karpy <p.karpy@yadro.com>
2022-12-30 11:07:35 +03:00
Pavel Karpy
21717262ec [#2016] shard: Check meta first on Get
`meta` should prevent returning removed objects (`GCMark` and `TS` relations
are `meta` abstractions).
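
A simplified sketch of the ordering (the component interfaces are invented):
meta is consulted first, so a GC-marked or tombstoned object is reported as
removed even if its payload still physically sits in the blobstor.

```go
package main

import (
	"errors"
	"fmt"
)

var errObjectRemoved = errors.New("object already removed")

type object struct{ payload []byte }

type shard struct {
	removedInMeta map[string]bool    // GC marks and tombstone relations live in meta
	blobstor      map[string]*object // payload storage
}

// get checks meta before touching any payload storage: a removed object must
// not be returned even though its data may still be on disk.
func (s *shard) get(addr string) (*object, error) {
	if s.removedInMeta[addr] {
		return nil, errObjectRemoved
	}
	if obj, ok := s.blobstor[addr]; ok {
		return obj, nil
	}
	return nil, errors.New("object not found")
}

func main() {
	s := &shard{
		removedInMeta: map[string]bool{"addr1": true},
		blobstor:      map[string]*object{"addr1": {payload: []byte("stale")}},
	}
	_, err := s.get("addr1")
	fmt.Println(err) // object already removed
}
```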

Signed-off-by: Pavel Karpy <p.karpy@yadro.com>
2022-12-30 11:07:35 +03:00
Pavel Karpy
74ec71446f [#2167] shard: Do not use write-cache by default in Head
Both `meta` and `write-cache` are expected to sit on a fast underlying disk,
so using the write-cache here is not really an optimisation. Moreover,
`write-cache`'s `Head` is a `Get` with the payload cut off, so it uses more
memory for no reason (`meta` was created exactly for such requests). Also,
`write-cache` cannot perform any "meta" relation checks (such as locking or
tombstoning).
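
A simplified sketch of the default path (the flag and storage shapes are
illustrative): `Head` answers from meta, and the write-cache is consulted only
when explicitly asked for.

```go
package main

import "fmt"

type header struct{ addr, owner string }

type shard struct {
	meta       map[string]header // headers and object relations, cheap to query
	writeCache map[string][]byte // full raw objects
}

// head answers from meta by default: write-cache's Head is effectively a Get
// with the payload thrown away, and it cannot check lock/tombstone relations.
func (s *shard) head(addr string, useWriteCache bool) (header, bool) {
	if useWriteCache {
		if raw, ok := s.writeCache[addr]; ok {
			_ = raw // the real code would cut the header out of the raw object here
		}
	}
	h, ok := s.meta[addr]
	return h, ok
}

func main() {
	s := &shard{
		meta:       map[string]header{"addr1": {addr: "addr1", owner: "alice"}},
		writeCache: map[string][]byte{"addr1": make([]byte, 1<<20)},
	}
	h, ok := s.head("addr1", false)
	fmt.Println(h.owner, ok) // alice true, without touching the 1 MiB payload
}
```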

Signed-off-by: Pavel Karpy <p.karpy@yadro.com>
2022-12-30 11:07:35 +03:00
Pavel Karpy
1608fd1c07 [#2167] write-cache: Add "write-cache" to its logs
Signed-off-by: Pavel Karpy <p.karpy@yadro.com>
2022-12-30 11:07:35 +03:00
Pavel Karpy
eea2892109 [#1956] node: Lock shard's mode on its methods switch
Signed-off-by: Pavel Karpy <p.karpy@yadro.com>
2022-12-30 11:07:35 +03:00
Evgenii Stratonikov
04b5ec759b [#2139] object/put: Use sync.Pool for temporary payloads
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2022-12-30 11:07:35 +03:00
9e0decd12d [#2162] services/tree: Close connection after the synchronization
There was a goroutine leak here.
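
A minimal sketch of the fix pattern with gRPC (the tree service's actual
dialing code differs): the connection opened for synchronization must be
closed, otherwise every round leaks it along with its background goroutines.

```go
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

// syncWithNode dials a remote node, pulls its tree log and, crucially, closes
// the connection. Without the deferred Close every synchronization round
// leaked the connection together with its background goroutines.
func syncWithNode(ctx context.Context, addr string) error {
	cc, err := grpc.DialContext(ctx, addr,
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		return err
	}
	defer cc.Close() // the fix: do not keep the connection after the sync is done

	// ... exchange tree operations over cc ...
	return nil
}

func main() {
	fmt.Println(syncWithNode(context.Background(), "127.0.0.1:8080"))
}
```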

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2022-12-30 11:07:35 +03:00
Evgenii Stratonikov
e1c3bdbfa6 [#1621] pilorama: Remove Timestamp field from nodeInfo
It is already present in `Meta`.

Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
Signed-off-by: Evgenii Stratonikov <evgeniy@morphbits.ru>
2022-12-30 11:07:35 +03:00
Evgenii Stratonikov
1044adbe94 [#1621] pilorama: Improve memory allocation
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
Signed-off-by: Evgenii Stratonikov <evgeniy@morphbits.ru>
2022-12-30 11:07:35 +03:00
Evgenii Stratonikov
2539d466a6 [#1621] pilorama: Seek after cursor invalidation
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
Signed-off-by: Evgenii Stratonikov <evgeniy@morphbits.ru>
2022-12-30 11:07:35 +03:00
Evgenii Stratonikov
e9ba8931f8 [#1621] pilorama: Simplify bucket creation
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
Signed-off-by: Evgenii Stratonikov <evgeniy@morphbits.ru>
2022-12-30 11:07:35 +03:00
Evgenii Stratonikov
fe7ddfdc6a [#1621] pilorama: Compare memory forests properly
Node children are not sorted and could occur in any order.
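
A small sketch of an order-insensitive comparison (the node shape is
invented): children are sorted before comparing, since the two forests may
list them in different orders.

```go
package main

import (
	"fmt"
	"sort"
)

type node struct {
	id       uint64
	children []uint64
}

// equalNodes compares two nodes treating children as a set: both forests hold
// the same children, but nothing guarantees they appear in the same order.
func equalNodes(a, b node) bool {
	if a.id != b.id || len(a.children) != len(b.children) {
		return false
	}
	ac := append([]uint64(nil), a.children...)
	bc := append([]uint64(nil), b.children...)
	sort.Slice(ac, func(i, j int) bool { return ac[i] < ac[j] })
	sort.Slice(bc, func(i, j int) bool { return bc[i] < bc[j] })
	for i := range ac {
		if ac[i] != bc[i] {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(equalNodes(node{1, []uint64{2, 3}}, node{1, []uint64{3, 2}})) // true
}
```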

Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
Signed-off-by: Evgenii Stratonikov <evgeniy@morphbits.ru>
2022-12-30 11:07:35 +03:00
Pavel Karpy
306609030a [#2159] node: Add tree replication timeout configuration
Signed-off-by: Pavel Karpy <p.karpy@yadro.com>
2022-12-30 11:07:35 +03:00
edb1428248 [#2022] Add metric readonly to get shards mode
Signed-off-by: Anton Nikiforov <an.nikiforov@yadro.com>
2022-12-30 11:07:35 +03:00
e5c304536b [#2161] pilorama: Do not apply already existing operations
Speeds up synchronization a bit.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2022-12-30 11:07:35 +03:00
3bb5a320d7 [#2154] services/tree: Do not log an error when synchronizing container of 1 node
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2022-12-30 11:07:35 +03:00
387d1e2977 [#2127] services/tree: Randomize node order for synchronization
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2022-12-30 11:07:35 +03:00
Pavel Karpy
21c58c92a9 [#2145] meta: Do allow force inhuming a locked object
Signed-off-by: Pavel Karpy <p.karpy@yadro.com>
2022-12-30 11:07:35 +03:00
b207dc424f [#2158] policer: Reduce default cache size
We use the cache to avoid policing the same object multiple times in a short
time span (< 30 seconds). With 200_000 objects in the blobstor this is largely
useless: if processing one object takes 1 second and `replicator.pool_size: 20`
is set in the config, the next iteration over the same object happens in
10_000 seconds, which is far more than 30 seconds. Yet we still consume a lot
of memory, so it makes sense to use a saner default.
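
A back-of-the-envelope check of the numbers from the message, plus a cache
built with hashicorp/golang-lru (the size here is only indicative; the real
default lives in the policer configuration):

```go
package main

import (
	"fmt"

	lru "github.com/hashicorp/golang-lru/v2"
)

func main() {
	// The arithmetic from the message: with ~1 s per object and 20 replicator
	// workers, revisiting 200_000 cached objects takes far longer than the
	// 30 s "recently seen" window, so a huge cache only burns memory.
	const objects, poolSize, perObjectSec = 200_000, 20, 1
	fmt.Println("full pass takes ~", objects*perObjectSec/poolSize, "seconds") // ~10000

	// Illustrative smaller default.
	cache, err := lru.New[string, struct{}](1024)
	if err != nil {
		panic(err)
	}
	cache.Add("object-address", struct{}{})
	fmt.Println(cache.Len())
}
```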

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2022-12-30 11:07:35 +03:00
0244f2e5ce [#2156] core/netmap: Change node addresses iteration order
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2022-12-30 11:07:35 +03:00
Pavel Karpy
b413094704 [#2095] node: Fix collecting child objects
Stop collecting child objects if the last returned object (the left-most
object in the collected chain) starts exactly at the `GETRANGE` request's
`from` value.
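
A hedged sketch of the stop condition with simplified offsets (the real code
walks SDK objects rather than plain structs): the chain is traversed right to
left and collection stops once the newest collected child starts at or before
the requested `from`.

```go
package main

import "fmt"

type child struct {
	start, length uint64
}

// collectForRange walks children from the rightmost backwards and stops once
// the requested range is fully covered, i.e. the last (left-most) collected
// child starts at or before `from`.
func collectForRange(children []child, from, to uint64) []child {
	var res []child
	for i := len(children) - 1; i >= 0; i-- {
		c := children[i]
		if c.start >= to {
			continue // entirely to the right of the requested range
		}
		res = append(res, c)
		if c.start <= from {
			break // the range is covered; no need to touch earlier children
		}
	}
	return res
}

func main() {
	children := []child{{0, 10}, {10, 10}, {20, 10}}
	fmt.Println(len(collectForRange(children, 10, 25))) // 2: the children starting at 20 and 10
}
```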

Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
2022-12-30 11:07:35 +03:00