Commit graph

7 commits

Dmitrii Stepanov
47dcfa20f3 [#895] test: Use t.Cleanup only for external resources
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-01-11 12:32:09 +00:00
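
The commit above encodes a testing guideline: register `t.Cleanup` only for external resources, not for in-memory state. The following is a minimal illustrative sketch, not code from the repository; the test name, package name, and directory handling are invented for the example.

```go
// Sketch of the guideline from #895: t.Cleanup is reserved for external
// resources (files, databases, listeners); in-memory objects need no
// explicit teardown.
package blobstoretest

import (
	"os"
	"testing"
)

func TestStorageSketch(t *testing.T) {
	// External resource: a directory on disk must be removed after the test.
	dir, err := os.MkdirTemp("", "storage-test")
	if err != nil {
		t.Fatal(err)
	}
	t.Cleanup(func() { _ = os.RemoveAll(dir) })

	// In-memory state: no t.Cleanup needed, it is simply garbage collected.
	cache := map[string][]byte{"key": []byte("value")}

	if len(cache) == 0 {
		t.Fatal("unexpected empty cache")
	}
	_ = dir
}
```
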
Dmitrii Stepanov
a8526d45e9 [#373] blobovnicza: Add missing/fix tracing spans
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-06-21 15:13:26 +03:00
Dmitrii Stepanov
0920d848d0 [#135] get-object: Add tracing spans
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-04-12 06:52:00 +00:00
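
For context on what a tracing span around a get-object call looks like in Go, here is a generic OpenTelemetry sketch. It is illustrative only; the project may use its own tracing helpers, and the tracer, span, and attribute names below are invented.

```go
// Generic OpenTelemetry sketch of wrapping a get-object call in a span.
package getsketch

import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
)

// getObject is a placeholder for the real retrieval logic.
func getObject(ctx context.Context, address string) ([]byte, error) {
	// Start a span for this operation; the returned ctx carries it so
	// nested calls can attach child spans.
	ctx, span := otel.Tracer("blobstor").Start(ctx, "GetObject")
	defer span.End()

	span.SetAttributes(attribute.String("object.address", address))

	// ... actual lookup would happen here, using ctx ...
	_ = ctx
	return nil, nil
}
```
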
Alex Vanin
20de74a505 Rename package
The package name changed due to relocation of the source code from GitHub.

Signed-off-by: Alex Vanin <a.vanin@yadro.com>
2023-03-07 16:38:26 +03:00
Pavel Karpy
923f84722a Move to frostfs-node
Signed-off-by: Pavel Karpy <p.karpy@yadro.com>
2022-12-28 15:04:29 +03:00
Evgenii Stratonikov
6ad2b5d5b8 [#2068] blobovnicza: Add Exists method
Signed-off-by: Evgenii Stratonikov <evgeniy@morphbits.ru>
2022-12-02 11:52:05 +03:00
Leonard Lyubich
8e8265d5ea [#1707] blobovnicza/get: Iterate over all buckets regardless of config
Re-configuration of Blobovnicza's object size limit must not affect
already stored objects. In the previous implementation, `Blobovnicza` did
not see objects stored in buckets that became too big after the size limit
was re-configured. For example, consider an initial configuration with a
64 KB size limit for stored objects. Assume we stored a 64 KB object and
then re-configured `Blobovnicza` with a 32 KB limit. After a reboot the
object should still be available, but in fact it is not. This is caused by
the `Get` operation's algorithm, which iterates over the configured size
ranges only and does not look into any other existing size bucket. Note
that increasing the object size limit did not lead to this problem even in
the previous implementation.

Make the `Blobovnicza.Get` method iterate over all size buckets regardless
of the current configuration. This covers the described scenario; see the
sketch after this entry.

Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
2022-09-08 12:24:28 +04:00
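
For illustration, here is a minimal sketch of the lookup strategy described in the message above, assuming a bbolt-backed store with one top-level bucket per size range. It is not the actual frostfs-node code; the function, variable, and error names are invented for the example.

```go
// Sketch: look up an object in every size bucket, so objects stored under
// a previous size-limit configuration remain retrievable.
package blobsketch

import (
	"errors"

	"go.etcd.io/bbolt"
)

var (
	errObjectNotFound = errors.New("object not found")
	errFound          = errors.New("found") // sentinel to stop iteration early
)

// getFromAnyBucket searches all top-level buckets for addrKey instead of
// only the buckets derived from the currently configured size limits.
func getFromAnyBucket(db *bbolt.DB, addrKey []byte) ([]byte, error) {
	var data []byte

	err := db.View(func(tx *bbolt.Tx) error {
		// ForEach visits every top-level bucket, regardless of how the
		// size ranges are configured right now.
		return tx.ForEach(func(_ []byte, b *bbolt.Bucket) error {
			if v := b.Get(addrKey); v != nil {
				data = append([]byte(nil), v...) // copy value out of the tx
				return errFound                  // non-nil error stops ForEach
			}
			return nil
		})
	})

	switch {
	case errors.Is(err, errFound):
		return data, nil
	case err != nil:
		return nil, err
	default:
		return nil, errObjectNotFound
	}
}
```
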