neo-go/pkg/core/storage
Roman Khimov 9513780c45 core: adjust in-memory processed set dynamically
Instead of tick-tocking with sync/async and having an unpredictable data
set we can just check the real number of keys that can be processed by
the underlying DB. It can't be perfect, but it still puts a hard limit on
the amount of in-memory data. It's also adaptive: slower machines will
keep less and faster machines will keep more.

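A minimal sketch of the idea in Go follows; it's not the actual
MemCachedStore code, and the type, method and parameter names as well as
the tolerance factor in it are illustrative assumptions only:

    // adaptiveCache mimics the adaptive flushing: the flush threshold is
    // re-derived from the number of keys the backing DB actually persisted
    // in the previous cycle. All names here are hypothetical.
    package main

    import "fmt"

    type adaptiveCache struct {
        mem       map[string][]byte // pending in-memory changes
        keysLimit int               // adjusted after every persist
    }

    // full tells the block import loop to stop accumulating and persist
    // now; this replaces the old sync/async tick-tock.
    func (c *adaptiveCache) full() bool {
        return len(c.mem) >= c.keysLimit
    }

    // persist flushes the pending set and adapts the limit to what the DB
    // really managed to process: slower machines end up keeping less in
    // memory, faster ones more.
    func (c *adaptiveCache) persist(putBatch func(map[string][]byte) (int, error)) error {
        processed, err := putBatch(c.mem)
        if err != nil {
            return err
        }
        const tolerance = 3 // assumed factor; plain 2× is too tight for low key counts
        c.keysLimit = processed * tolerance
        c.mem = make(map[string][]byte)
        return nil
    }

    func main() {
        c := &adaptiveCache{mem: map[string][]byte{"a": {1}}, keysLimit: 1 << 15}
        _ = c.persist(func(m map[string][]byte) (int, error) { return len(m), nil })
        fmt.Println(c.full(), c.keysLimit) // false 3
    }
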
This gives almost perfect 4s cycles for mainnet BoltDB with no tail
cutting; it makes zero sense to process more blocks since we're clearly
DB-bound:

2025-01-15T11:35:00.567+0300    INFO    persisted to disk       {"blocks": 1469, "keys": 40579, "headerHeight": 5438141, "blockHeight": 5438140, "velocity": 9912, "took": "4.378939648s"}
2025-01-15T11:35:04.699+0300    INFO    persisted to disk       {"blocks": 1060, "keys": 39976, "headerHeight": 5439201, "blockHeight": 5439200, "velocity": 9888, "took": "4.131985438s"}
2025-01-15T11:35:08.752+0300    INFO    persisted to disk       {"blocks": 1508, "keys": 39658, "headerHeight": 5440709, "blockHeight": 5440708, "velocity": 9877, "took": "4.052347569s"}
2025-01-15T11:35:12.807+0300    INFO    persisted to disk       {"blocks": 1645, "keys": 39565, "headerHeight": 5442354, "blockHeight": 5442353, "velocity": 9864, "took": "4.05547743s"}
2025-01-15T11:35:17.011+0300    INFO    persisted to disk       {"blocks": 1472, "keys": 39519, "headerHeight": 5443826, "blockHeight": 5443825, "velocity": 9817, "took": "4.203258142s"}
2025-01-15T11:35:21.089+0300    INFO    persisted to disk       {"blocks": 1345, "keys": 39529, "headerHeight": 5445171, "blockHeight": 5445170, "velocity": 9804, "took": "4.078297579s"}
2025-01-15T11:35:25.090+0300    INFO    persisted to disk       {"blocks": 1054, "keys": 39326, "headerHeight": 5446225, "blockHeight": 5446224, "velocity": 9806, "took": "4.000524899s"}
2025-01-15T11:35:30.372+0300    INFO    persisted to disk       {"blocks": 1239, "keys": 39349, "headerHeight": 5447464, "blockHeight": 5447463, "velocity": 9744, "took": "4.281444939s"}

2× can be considered, but this calculation isn't perfect for a low number
of keys, so a somewhat bigger tolerance is preferable for now. Overall it
doesn't degrade performance; my mainnet/bolt run was even 8% better with
this.

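A back-of-the-envelope reading of the log above, assuming the limit is
derived as persisted keys × tolerance (as in the sketch earlier): each ~4s
cycle persists roughly 39-40K keys, so a plain 2× factor would cap the
in-memory set at around 80K keys for this run, while a bigger tolerance
just leaves proportionally more headroom.
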
Fixes #3249; we don't need any option this way.

Fixes #3783 as well; it no longer OOMs in that scenario. It can, however,
still OOM in case of a big GarbageCollectionPeriod (like 400K), but that
can't be solved easily.

Signed-off-by: Roman Khimov <roman@nspcc.ru>
2025-01-15 22:08:08 +03:00
dbconfig dbconfig: fix DBConfiguration description 2023-09-03 18:02:38 +01:00
dboper storage: move Operation into package of its own 2022-07-08 23:30:30 +03:00
boltdb_store.go storage: bytes.Clone(nil) == nil 2024-05-16 19:29:11 +03:00
boltdb_store_test.go core: close BoltDB on failed root bucket creation 2022-10-10 10:12:34 +03:00
leveldb_store.go core: allow RO mode for Bolt and Level 2022-10-07 15:56:29 +03:00
leveldb_store_test.go core: allow RO mode for Bolt and Level 2022-10-07 15:56:29 +03:00
memcached_store.go *: use slices package for sorting and searching 2024-08-27 12:29:44 +03:00
memcached_store_test.go core: adjust in-memory processed set dynamically 2025-01-15 22:08:08 +03:00
memory_store.go core: adjust in-memory processed set dynamically 2025-01-15 22:08:08 +03:00
memory_store_test.go *: improve for loop syntax 2024-08-30 21:45:18 +03:00
store.go *: use slices.Concat where appropriate 2024-08-30 17:00:11 +03:00
store_test.go storage: move Operation into package of its own 2022-07-08 23:30:30 +03:00
store_type_test.go docs: fix supported database types 2022-10-07 15:56:34 +03:00
storeandbatch_test.go *: use slices package for sorting and searching 2024-08-27 12:29:44 +03:00