When we have N items, sorting and then iterating gives `O(n log n)` latency
in the worst case (a flat bucket), because items from a level must be
returned in sorted order. Some heap implementations allow O(1)
insertion and O(log n) dequeue, so we can decrease the
latency for the first received item to O(log n), albeit with a
slight increase in the total time.
The pairing heap was chosen as one of the simplest implementations.
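For illustration, a minimal pairing-heap sketch (not the actual frostfs-node implementation): `insert` is an O(1) meld of a singleton, and `deleteMin` does the classic two-pass pairing of children, which is O(log n) amortized.

```go
package main

import "fmt"

// node is a pairing-heap node (illustrative minimal sketch).
type node struct {
	key      int
	children []*node
}

// meld merges two heaps in O(1) by attaching the larger root
// as a child of the smaller one.
func meld(a, b *node) *node {
	if a == nil {
		return b
	}
	if b == nil {
		return a
	}
	if a.key <= b.key {
		a.children = append(a.children, b)
		return a
	}
	b.children = append(b.children, a)
	return b
}

// insert is O(1): meld a singleton node into the heap.
func insert(h *node, key int) *node {
	return meld(h, &node{key: key})
}

// deleteMin removes the minimum in O(log n) amortized via
// two-pass pairing: meld children pairwise left to right,
// then meld the pairs right to left.
func deleteMin(h *node) (int, *node) {
	min := h.key
	c := h.children
	var pairs []*node
	for i := 0; i+1 < len(c); i += 2 {
		pairs = append(pairs, meld(c[i], c[i+1]))
	}
	if len(c)%2 == 1 {
		pairs = append(pairs, c[len(c)-1])
	}
	var res *node
	for i := len(pairs) - 1; i >= 0; i-- {
		res = meld(res, pairs[i])
	}
	return min, res
}

func main() {
	var h *node
	for _, k := range []int{5, 1, 4, 2, 3} {
		h = insert(h, k) // each insert is O(1)
	}
	for h != nil {
		var k int
		k, h = deleteMin(h)
		fmt.Print(k, " ") // keys come out in sorted order
	}
	fmt.Println()
}
```

This is why only the first dequeue pays O(log n) latency instead of waiting for a full O(n log n) sort, at the cost of some extra total work, matching the benchmark below.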
```
goos: linux
goarch: amd64
pkg: git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/tree
cpu: 11th Gen Intel(R) Core(TM) i5-1135G7 @ 2.40GHz
│ old │ new │
│ sec/op │ sec/op vs base │
GetSubTree/latency-8 5.034m ± 23% 1.110m ± 22% -77.95% (p=0.000 n=10)
GetSubTree/total_time-8 81.03m ± 1% 95.02m ± 14% +17.26% (p=0.000 n=10)
geomean 20.20m 10.27m -49.15%
│ old │ new │
│ B/op │ B/op vs base │
GetSubTree/latency-8 32.14Mi ± 0% 37.49Mi ± 0% +16.63% (p=0.000 n=10)
GetSubTree/total_time-8 32.14Mi ± 0% 37.49Mi ± 0% +16.63% (p=0.000 n=10)
geomean 32.14Mi 37.49Mi +16.63%
│ old │ new │
│ allocs/op │ allocs/op vs base │
GetSubTree/latency-8 400.0k ± 0% 400.0k ± 0% +0.00% (p=0.000 n=10)
GetSubTree/total_time-8 400.0k ± 0% 400.0k ± 0% +0.00% (p=0.000 n=10)
geomean 400.0k 400.0k +0.00%
```
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
This fixes a shutdown panic:
1. Some morph connection gets an error and passes it to the internalErr channel.
2. The storage node starts to shut down and closes the internalErr channel.
3. Another morph connection gets an error and tries to pass it to the closed internalErr channel, causing a panic.
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
Tombstone objects must be present on all container nodes.
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
Blobovnicza initialization takes a long time because of the bucket
Stat() call. So now the blobovnicza stores counters in the META bucket.
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
Even if an object was GC-marked, deleted, or expired, it is still required
to update its storageID in order to physically delete the object.
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
If the actual small object size value is lower than the default
object size limit, then unnecessary buckets are created.
If the actual small object size value is greater than the default
object size limit, then an error occurs.
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
Put stores the object in the next active DB, so there is no need to sort DBs.
In addition, sorting causes unnecessary DB openings.
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
Move info is now stored in the blobovnicza, so in case of failover
the rebuild completes the previous operation first.
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
Because data is flushed from the writecache to the main storage
concurrently with deletion, partial deletion is possible.
As a solution, deletion is allowed only when the object is in the main storage;
the object will be deleted from the writecache by the flush goroutine.
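The rule can be modeled with a toy two-tier store (types and names are illustrative, not the actual frostfs-node writecache API): delete succeeds only for flushed objects, and a not-yet-flushed object is left for the flush goroutine.

```go
package main

import (
	"errors"
	"fmt"
)

// errNotFlushed signals that the object is still in the writecache
// and will be handled by the flush goroutine.
var errNotFlushed = errors.New("object is still in writecache")

// store is a toy model of a writecache in front of the main storage.
type store struct {
	writecache map[string][]byte
	mainStore  map[string][]byte
}

// delete refuses to remove an object that has not been flushed yet,
// avoiding a partial deletion racing with the flush goroutine.
func (s *store) delete(addr string) error {
	if _, ok := s.mainStore[addr]; ok {
		delete(s.mainStore, addr)
		return nil
	}
	if _, ok := s.writecache[addr]; ok {
		return errNotFlushed
	}
	return errors.New("not found")
}

// flush simulates the flush goroutine moving an object to main storage.
func (s *store) flush(addr string) {
	if data, ok := s.writecache[addr]; ok {
		s.mainStore[addr] = data
		delete(s.writecache, addr)
	}
}

func main() {
	s := &store{
		writecache: map[string][]byte{"obj1": []byte("payload")},
		mainStore:  map[string][]byte{},
	}
	fmt.Println(s.delete("obj1")) // refused: still in writecache
	s.flush("obj1")
	fmt.Println(s.delete("obj1")) // nil: deleted from main storage
}
```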
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
Because of this check, under certain conditions,
the node could be removed from the network map
even though it was functioning normally.
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
It was introduced in 69e1e6ca to help the node determine faulty shards.
However, the following situation is possible in a real-life scenario:
1. Object O is evacuated from shard A to B.
2. Shard A is unmounted because of lower-level errors.
3. We now have the object in the metabase on A and in the blobstor on B.
   Technically we have it in the metabase on shard B too, but we would
   still get the error if B goes into degraded mode.
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
Supposedly, this was added to allow creating 2 different shards without
subtests. Now we use t.TempDir() everywhere, so this should not be a
problem.
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
newCustomShard() has many parameters, but only the first is obligatory.
`enableWriteCache` is left as-is because it directly affects the
functionality.
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>