Instead of tick-tocking with sync/async and having an unpredictable data
set, we can check the real number of keys that can be processed by the
underlying DB. It can't be perfect, but it still puts some hard limit on
the amount of in-memory data. It's also adaptive: slower machines will
keep less and faster machines will keep more.
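
A rough sketch of the idea (Go, hypothetical names, not the actual code):
measure how many keys the last persist run managed to flush per interval
and keep a smoothed estimate of that velocity.

    package core // illustrative package name

    import "time"

    // persistInterval is the target persist cycle length (assumed 4s here).
    const persistInterval = 4 * time.Second

    // keysPerPersist is the smoothed estimate of how many keys the DB can
    // persist within one interval; it adapts to the machine's actual speed.
    var keysPerPersist int

    // updateVelocity recalculates the estimate after a persist run that
    // flushed keys keys in took time, averaging with the previous value so
    // that a single outlier cycle doesn't swing the limit too much.
    func updateVelocity(keys int, took time.Duration) {
        observed := int(float64(keys) * float64(persistInterval) / float64(took))
        if keysPerPersist == 0 {
            keysPerPersist = observed
            return
        }
        keysPerPersist = (keysPerPersist + observed) / 2
    }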
This gives almost perfect 4s cycles for mainnet BoltDB with no tail cutting;
it makes zero sense to process more blocks since we're clearly DB-bound:
2025-01-15T11:35:00.567+0300 INFO persisted to disk {"blocks": 1469, "keys": 40579, "headerHeight": 5438141, "blockHeight": 5438140, "velocity": 9912, "took": "4.378939648s"}
2025-01-15T11:35:04.699+0300 INFO persisted to disk {"blocks": 1060, "keys": 39976, "headerHeight": 5439201, "blockHeight": 5439200, "velocity": 9888, "took": "4.131985438s"}
2025-01-15T11:35:08.752+0300 INFO persisted to disk {"blocks": 1508, "keys": 39658, "headerHeight": 5440709, "blockHeight": 5440708, "velocity": 9877, "took": "4.052347569s"}
2025-01-15T11:35:12.807+0300 INFO persisted to disk {"blocks": 1645, "keys": 39565, "headerHeight": 5442354, "blockHeight": 5442353, "velocity": 9864, "took": "4.05547743s"}
2025-01-15T11:35:17.011+0300 INFO persisted to disk {"blocks": 1472, "keys": 39519, "headerHeight": 5443826, "blockHeight": 5443825, "velocity": 9817, "took": "4.203258142s"}
2025-01-15T11:35:21.089+0300 INFO persisted to disk {"blocks": 1345, "keys": 39529, "headerHeight": 5445171, "blockHeight": 5445170, "velocity": 9804, "took": "4.078297579s"}
2025-01-15T11:35:25.090+0300 INFO persisted to disk {"blocks": 1054, "keys": 39326, "headerHeight": 5446225, "blockHeight": 5446224, "velocity": 9806, "took": "4.000524899s"}
2025-01-15T11:35:30.372+0300 INFO persisted to disk {"blocks": 1239, "keys": 39349, "headerHeight": 5447464, "blockHeight": 5447463, "velocity": 9744, "took": "4.281444939s"}
2× can be considered, but this calculation isn't perfect for a low number
of keys, so a somewhat bigger tolerance is preferable for now. Overall it
doesn't degrade performance; my mainnet/bolt run was even 8% better with this.
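
For illustration only (same hypothetical names as above), the resulting
check that decides whether block processing should wait for the background
persist could look like this, with the tolerance kept above 2×:

    // blockUntilPersisted reports whether storing the next block should
    // wait for the background persist to finish: the in-memory store
    // already holds more keys than the DB is expected to flush in one
    // interval, times the tolerance multiplier.
    func blockUntilPersisted(inMemKeys int) bool {
        const tolerance = 4 // hypothetical; something above 2× is preferred
        return keysPerPersist > 0 && inMemKeys > keysPerPersist*tolerance
    }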
Fixes #3249; we don't need any option this way.
Fixes #3783 as well; it no longer OOMs in that scenario. It can, however,
still OOM with a big GarbageCollectionPeriod (like 400K), but that can't be
solved easily.
Signed-off-by: Roman Khimov <roman@nspcc.ru>
Move them to the core/network packages, close #2950. The name of the
mempool's unsorted transactions metric has been changed along the way to
match the core's metrics naming convention.
Signed-off-by: Anna Shaleva <shaleva.ann@nspcc.ru>