Commit graph

8 commits

Author SHA1 Message Date
Roman Khimov
9513780c45 core: adjust in-memory processed set dynamically
Instead of tick-tocking with sync/async and having an unpredictable data
set, we can simply check the real number of keys that can be processed
by the underlying DB. It can't be perfect, but it still puts a hard
limit on the amount of in-memory data. It's also adaptive: slower machines
will keep less data and faster machines will keep more.

This gives almost perfect 4s cycles for mainnet BoltDB with no tail cutting;
it makes zero sense to process more blocks since we're clearly DB-bound:

2025-01-15T11:35:00.567+0300    INFO    persisted to disk       {"blocks": 1469, "keys": 40579, "headerHeight": 5438141, "blockHeight": 5438140, "velocity": 9912, "took": "4.378939648s"}
2025-01-15T11:35:04.699+0300    INFO    persisted to disk       {"blocks": 1060, "keys": 39976, "headerHeight": 5439201, "blockHeight": 5439200, "velocity": 9888, "took": "4.131985438s"}
2025-01-15T11:35:08.752+0300    INFO    persisted to disk       {"blocks": 1508, "keys": 39658, "headerHeight": 5440709, "blockHeight": 5440708, "velocity": 9877, "took": "4.052347569s"}
2025-01-15T11:35:12.807+0300    INFO    persisted to disk       {"blocks": 1645, "keys": 39565, "headerHeight": 5442354, "blockHeight": 5442353, "velocity": 9864, "took": "4.05547743s"}
2025-01-15T11:35:17.011+0300    INFO    persisted to disk       {"blocks": 1472, "keys": 39519, "headerHeight": 5443826, "blockHeight": 5443825, "velocity": 9817, "took": "4.203258142s"}
2025-01-15T11:35:21.089+0300    INFO    persisted to disk       {"blocks": 1345, "keys": 39529, "headerHeight": 5445171, "blockHeight": 5445170, "velocity": 9804, "took": "4.078297579s"}
2025-01-15T11:35:25.090+0300    INFO    persisted to disk       {"blocks": 1054, "keys": 39326, "headerHeight": 5446225, "blockHeight": 5446224, "velocity": 9806, "took": "4.000524899s"}
2025-01-15T11:35:30.372+0300    INFO    persisted to disk       {"blocks": 1239, "keys": 39349, "headerHeight": 5447464, "blockHeight": 5447463, "velocity": 9744, "took": "4.281444939s"}

A 2× limit can be considered, but this calculation isn't perfect for a low
number of keys, so a somewhat bigger tolerance is preferable for now. Overall
it doesn't degrade performance; my mainnet/bolt run was even 8% faster with this.

Fixes #3249; we don't need any option this way.

Fixes #3783 as well; it no longer OOMs in that scenario. It can still OOM
with a big GarbageCollectionPeriod (like 400K), but that can't be solved easily.
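
A minimal Go sketch of the adaptive limit described above, assuming a fixed
target persist cycle; all names here (targetCycle, adjustLimit) are
hypothetical illustrations, not NeoGo's actual code:

    // Hypothetical sketch: after each persist, estimate how many keys the
    // DB can write per target cycle and cap the in-memory set accordingly.
    package main

    import (
        "fmt"
        "time"
    )

    const targetCycle = 4 * time.Second // desired persist period (assumption)

    // adjustLimit derives a new in-memory key limit from the measured
    // throughput of the last persist.
    func adjustLimit(keysWritten int, took time.Duration) int {
        if keysWritten <= 0 || took <= 0 {
            return keysWritten // nothing measured yet
        }
        velocity := float64(keysWritten) / took.Seconds() // keys per second
        limit := int(velocity * targetCycle.Seconds())
        // The estimate is noisy for a low number of keys, so allow extra
        // slack instead of a strict 2x cap (see the note above).
        return limit + limit/2
    }

    func main() {
        // Numbers taken from one of the log lines above.
        fmt.Println(adjustLimit(39576, 4052*time.Millisecond))
    }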

Signed-off-by: Roman Khimov <roman@nspcc.ru>
2025-01-15 22:08:08 +03:00
Anna Shaleva
3a71aafc43 core: distinguish notarypool/mempool metrics
Move them to the core/network packages, close #2950. The name of the
mempool's unsorted transactions metric has been changed along the
way to match the core's metrics naming convention.
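
A gauge following such a convention might be declared like this with
client_golang; the exact metric name and help text below are assumptions,
not necessarily the ones this commit introduced:

    package metrics

    import "github.com/prometheus/client_golang/prometheus"

    // Illustrative gauge: "neogo" namespace plus a snake_case name,
    // matching the core naming convention mentioned above (name assumed).
    var mempoolUnsortedTx = prometheus.NewGauge(prometheus.GaugeOpts{
        Namespace: "neogo",
        Name:      "mempool_unsorted_tx",
        Help:      "Number of unsorted transactions in the mempool",
    })

    func init() {
        prometheus.MustRegister(mempoolUnsortedTx)
    }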

Signed-off-by: Anna Shaleva <shaleva.ann@nspcc.ru>
2023-04-13 18:40:19 +03:00
Anna Shaleva
7bcc62d99c *: fix Prometheus metrics comment formatting
Signed-off-by: Anna Shaleva <shaleva.ann@nspcc.ru>
2023-04-13 18:36:08 +03:00
Roman Khimov
0ad6e295ea core: make GetHeaderHash accept uint32
It should've always been this way because block indexes are uint32.
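
The change is easiest to see as a signature sketch; the types below are
simplified stand-ins for NeoGo's actual ones:

    package core

    // Uint256 stands in for util.Uint256 here.
    type Uint256 [32]byte

    type Blockchain struct {
        headerHashes []Uint256
    }

    // GetHeaderHash now takes uint32, matching the block index type, so
    // callers no longer cast and negative indexes are unrepresentable.
    func (bc *Blockchain) GetHeaderHash(i uint32) Uint256 {
        if uint64(i) >= uint64(len(bc.headerHashes)) {
            return Uint256{} // unknown index yields a zero hash (assumption)
        }
        return bc.headerHashes[i]
    }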
2022-11-25 14:30:51 +03:00
Evgeniy Stratonikov
bf20db09e0 stateroot: move state-root related logic to core/stateroot 2021-03-09 13:48:29 +03:00
Roman Khimov
e21c65c59f core: add state height to prometheus metrics
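
A sketch of what such a metric could look like, reusing the gauge pattern
shown earlier; the metric name and the update hook are assumptions:

    package metrics

    import "github.com/prometheus/client_golang/prometheus"

    // Assumed gauge; the real metric name may differ.
    var stateHeight = prometheus.NewGauge(prometheus.GaugeOpts{
        Namespace: "neogo",
        Name:      "current_state_height",
        Help:      "Current verified state height",
    })

    // Called whenever a new state root is verified (hypothetical hook).
    func updateStateHeightMetric(h uint32) {
        stateHeight.Set(float64(h))
    }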
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
2020-07-30 12:42:15 +03:00
Evgenii Stratonikov
fed6fba9b6 core: refactor out MemPool 2020-01-16 10:16:24 +03:00
Vsevolod Brekelov
d374175170 monitoring: add prometheus monitoring
Add an initial metrics service that uses Prometheus;
add configuration for the metrics service;
add monitoring metrics for blockchain, RPC and server.
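
A minimal sketch of such a service, assuming the standard client_golang
promhttp handler; the port and path are illustrative, not NeoGo's actual
configuration:

    package main

    import (
        "log"
        "net/http"

        "github.com/prometheus/client_golang/prometheus/promhttp"
    )

    func main() {
        // Expose every metric registered on the default registry.
        http.Handle("/metrics", promhttp.Handler())
        // Dedicated monitoring port, separate from RPC/P2P (assumption).
        log.Fatal(http.ListenAndServe(":2112", nil))
    }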
2019-10-29 20:51:17 +03:00