Use clever batches for blobovnicza and metabase #627
In pilorama we have a custom batching scheme. The difference is that our batch can accept new requests after the configured timeout has elapsed, but before the batch has actually started executing. This makes sense because we spend a lot of time in `Fdatasync`.

The suggestion is to reuse this batching scheme for blobovnicza and metabase; a sketch of the idea follows.
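Here is a minimal sketch of such a batch. The names and structure are illustrative, not the actual pilorama code; the point is that the cutoff for joining is the moment execution actually starts, not the timer firing:

```go
// Sketch of a "clever" batch: illustrative only, not the pilorama code.
package batching

import (
	"sync"
	"time"
)

type batch struct {
	mtx     sync.Mutex
	started bool
	ops     [][]byte
	err     error
	done    chan struct{}
	exec    func(ops [][]byte) error // commits all ops with a single Fdatasync
}

func newBatch(exec func([][]byte) error, delay time.Duration) *batch {
	b := &batch{done: make(chan struct{}), exec: exec}
	time.AfterFunc(delay, b.run)
	return b
}

// Add joins the batch. It succeeds even after the delay has elapsed,
// as long as run() has not taken the mutex yet.
func (b *batch) Add(op []byte) bool {
	b.mtx.Lock()
	defer b.mtx.Unlock()
	if b.started {
		return false // too late: the caller must open a new batch
	}
	b.ops = append(b.ops, op)
	return true
}

// Wait blocks until the batch is committed and returns the shared result.
func (b *batch) Wait() error {
	<-b.done
	return b.err
}

func (b *batch) run() {
	b.mtx.Lock()
	b.started = true // from here on, Add refuses new requests
	ops := b.ops
	b.mtx.Unlock()

	b.err = b.exec(ops) // one expensive sync amortized over all ops
	close(b.done)       // wake all waiters
}
```

Requests arriving between the timer firing and `run` taking the mutex still join the batch, which matters precisely because `exec` (and its `Fdatasync`) dominates the latency.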
Metabase batches are also a bit harder, because operations there can fail with logical errors, so take this into account.
As a nice side-effect, we can handle `context.Context` cancellation better.
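For example, `Wait` from the sketch above could take a context (add `"context"` to the imports), so a caller that gives up stops blocking while its operation still commits with the rest of the batch:

```go
// Context-aware variant of Wait, replacing the one in the sketch above.
func (b *batch) Wait(ctx context.Context) error {
	select {
	case <-ctx.Done():
		return ctx.Err() // note: the op may still be committed by the batch
	case <-b.done:
		return b.err
	}
}
```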
Actually, for blobovnicza we can go even further: it has a simple key-value structure, so we can manage the transaction manually and cache the bucket (`tx.Bucket()` always returns the same value). This may alleviate our problems with degradation, because we can perform PUT immediately, amortizing the cost over time instead of waiting for the batch delay and only then doing the work. In the current scheme `Update`/`Batch` lock the database exclusively.

For metabase a similar optimization can be done, but we need to ensure that all GET operations (where a logical error can occur) are performed before the PUT ones. A sketch of the blobovnicza variant is given below.
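A minimal sketch of the manual-transaction idea, assuming `go.etcd.io/bbolt`, a bucket created beforehand, and an arbitrary flush threshold (all names are illustrative, not the frostfs-node API):

```go
package blobstore

import (
	"sync"

	"go.etcd.io/bbolt"
)

var bucketName = []byte("objects") // assumed to be created beforehand

type manualTx struct {
	db     *bbolt.DB
	mtx    sync.Mutex
	tx     *bbolt.Tx     // long-lived writable transaction
	bucket *bbolt.Bucket // cached: tx.Bucket() always returns the same value
	count  int
}

// Put applies the write immediately inside the open transaction, so the
// cost is amortized over time; the expensive sync happens only in flush.
func (m *manualTx) Put(key, value []byte) error {
	m.mtx.Lock()
	defer m.mtx.Unlock()

	if m.tx == nil {
		tx, err := m.db.Begin(true) // manually managed write transaction
		if err != nil {
			return err
		}
		m.tx = tx
		m.bucket = tx.Bucket(bucketName)
	}
	if err := m.bucket.Put(key, value); err != nil {
		return err
	}
	m.count++
	if m.count >= 128 { // flush threshold, chosen arbitrarily here
		return m.flushLocked()
	}
	return nil
}

// flushLocked commits the transaction, paying one fsync for all
// accumulated writes; callers learn the real durability result here.
func (m *manualTx) flushLocked() error {
	tx := m.tx
	m.tx, m.bucket, m.count = nil, nil, 0
	return tx.Commit()
}
```

Unlike per-call `Update`/`Batch`, the single writer transaction stays open across PUTs, while readers keep working on their own snapshots; the durability guarantee is only given at commit time.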
We will eventually move to a badger store, or another storage which doesn't panic on disk removal, so closing this.