Commit graph

134 commits

Author SHA1 Message Date
Roman Khimov
f80680187e storage: expose private storage map for more efficient MPT batch
It couldn't be done previously with two maps and mixed storage, but now all of
the storage changes are located in a single map, so it's trivial to do exact
slice allocations and avoid string->[]byte conversions.
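
A minimal sketch of the idea (hypothetical kv type, not the actual MPT batch
code): with one map holding all the changes, the destination slice can be
allocated to its exact final size up front.

package mpt

// kv is a hypothetical key-value pair as consumed by an MPT batch.
type kv struct {
	key   []byte
	value []byte
}

// fromMap builds a batch from the single change map; len() of that map gives
// the exact slice size up front, so append never reallocates.
func fromMap(changes map[string][]byte) []kv {
	batch := make([]kv, 0, len(changes))
	for k, v := range changes {
		batch = append(batch, kv{key: []byte(k), value: v})
	}
	return batch
}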
2022-02-17 23:41:10 +03:00
Roman Khimov
9bfb3357f2 storage: add "private" mode to MemCachedStore
Most of the time we don't need locking on the higher-level stores and we drop
them after Persist, so that's what private MemCachedStore is for.

It doesn't improve things in any noticeable way; some ~1% gain can be
observed in neo-bench under various loads and even less than that in chain
processing. But it still seems to be a bit better anyway (fewer allocations,
fewer locks).
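
Roughly, "private" means a cache owned by a single goroutine (created,
filled, persisted, dropped), so the mutex can be skipped; a simplified
sketch, not the actual MemCachedStore code:

package storage

import "sync"

// cache is a simplified cached store. A private cache is owned by a single
// goroutine for its whole lifetime, so its lock is skipped entirely.
type cache struct {
	mut     sync.RWMutex
	private bool
	mem     map[string][]byte
}

func (c *cache) Put(key, value []byte) {
	if !c.private {
		c.mut.Lock()
		defer c.mut.Unlock()
	}
	c.mem[string(key)] = value
}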
2022-02-17 22:27:39 +03:00
Roman Khimov
9d2ef775cf storage: simplify (*MemCachedStore).Put/Delete interface
They never return errors, so their interface should reflect that. This
allows removing quite a lot of useless and never-tested code.

Notice that Get still does return an error. It can be made not to do that, but
usually we need to differentiate between successful/unsuccessful accesses
anyway, so this doesn't help much.
2022-02-16 18:24:20 +03:00
Roman Khimov
be24bf6412 storage: drop Put and Delete from Store interface
It's only changed via PutChangeSet; single KV operations are handled by
MemCachedStore.
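
Together with the previous commit this roughly yields the following split (a
sketch; the exact neo-go signatures may differ in detail):

package storage

// Store is what real backing stores (LevelDB, BoltDB, in-memory) implement:
// reads plus bulk writes, no single-KV Put/Delete.
type Store interface {
	Get(key []byte) ([]byte, error)
	PutChangeSet(puts map[string][]byte, dels map[string]bool) error
	Seek(prefix []byte, f func(k, v []byte) bool)
	Close() error
}

// Single KV operations live on MemCachedStore only, touch nothing but
// in-memory maps and therefore return no error:
//
//	func (s *MemCachedStore) Put(key, value []byte)
//	func (s *MemCachedStore) Delete(key []byte)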
2022-02-16 18:24:20 +03:00
Roman Khimov
017795c9c1 storage: completely remove MemoryBatch
If you need something like that, just wrap another MemCachedStore layer around
it.
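
For example, a staging layer in the spirit of the old MemoryBatch could look
like this (assuming NewMemCachedStore/Put/Persist call shapes as they are at
this point in the history):

package example

import "github.com/nspcc-dev/neo-go/pkg/core/storage"

// stagedPut stages a write in a throw-away MemCachedStore layer and then
// flushes it down into the wrapped store, which is what MemoryBatch
// effectively did.
func stagedPut(lower storage.Store, key, value []byte) error {
	upper := storage.NewMemCachedStore(lower)
	upper.Put(key, value) // cached only, cannot fail
	_, err := upper.Persist() // moves the change set into lower
	return err
}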
2022-02-16 16:13:12 +03:00
Roman Khimov
17a43b19e0 storage: remove Batch from Store
We never use it for real underlying stores, so these implementations are
useless (everything goes through PutChangeSet now).
2022-02-16 15:55:48 +03:00
Roman Khimov
35bdfc5eca storage: use two maps for MemoryStore
Simple and dumb as it is, this allows separating contract storage from other
things and dramatically improves Seek() time over storage (even though it's
still unordered!), which in turn improves block processing speed.

        LevelDB             LevelDB (KeepOnlyLatest)  BoltDB              BoltDB (KeepOnlyLatest)
Master  real    16m27,936s  real    10m9,440s         real    16m39,369s  real    8m1,227s
        user    20m12,619s  user    26m13,925s        user    18m9,162s   user    18m5,846s
        sys     2m56,377s   sys     1m32,051s         sys     9m52,576s   sys     2m9,455s

2 maps  real    10m49,495s  real    8m53,342s         real    11m46,204s  real    5m56,043s
        user    14m19,922s  user    24m6,225s         user    13m25,691s  user    15m4,694s
        sys     1m53,021s   sys     1m23,006s         sys     4m31,735s   sys     2m8,714s

neo-bench performance is mostly unaffected: ~0.5% for the 1-1 test and 4%
for the 10K-10K test, both falling within the regular test error range.
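
A sketch of the two-map dispatch (simplified; STStorage is the
contract-storage key prefix as in neo-go's storage package):

package storage

// STStorage is the key prefix of contract storage items.
const STStorage byte = 0x70

// memoryStore splits contract storage from everything else so that a Seek
// over contract items only has to scan the relevant map.
type memoryStore struct {
	mem  map[string][]byte // all non-contract data
	stor map[string][]byte // contract storage items only
}

func (m *memoryStore) mapFor(key []byte) map[string][]byte {
	if len(key) > 0 && key[0] == STStorage {
		return m.stor
	}
	return m.mem
}

func (m *memoryStore) Put(key, value []byte) {
	m.mapFor(key)[string(key)] = value
}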
2022-02-16 15:55:48 +03:00
Roman Khimov
b6a4947cfd storage: drop STAccount prefix
It has been unused for a long time.
2022-02-16 13:06:57 +03:00
Roman Khimov
54bc603831 core: remove old storage items synchronously during jump
There won't be a lot of them, and our GC experience shows that iterating
over 1-2M entries is not a huge problem for the DBs we use.
2022-02-16 13:03:13 +03:00
Roman Khimov
c4b49a2d52 storage: deduplicate storage closing in tests 2022-02-14 17:29:21 +03:00
Roman Khimov
ccdda21718 stateroot: add and use DataMPTAux for auxiliary data
Use DataMPT for nodes only; otherwise, with 1M blocks we have 1M
height-stateroot mapping entries that our GC has to iterate over for no
reason.
2022-02-14 17:29:21 +03:00
Roman Khimov
ad606101c7 storage: add SeekGC interface for GC
It's a very special, single-purpose thing, but it improves cumulative time
spent in GC by ~10% for LevelDB and by ~36% for BoltDB during 1050K mainnet
chain processing. While the overall chain import time doesn't change in any
noticeable way (~1%), I think it's still worth it; for machines with slower
disks the difference might be more noticeable.
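
The shape of it is roughly this (a sketch; the real method operates on the
package's seek-range type rather than a bare prefix):

package storage

// SeekGC iterates like Seek, but the callback decides each item's fate:
// returning true keeps the key, returning false lets the store delete it
// within the same pass, avoiding a separate collect-then-delete round trip.
type gcStore interface {
	SeekGC(prefix []byte, keep func(k, v []byte) bool) error
}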
2022-02-14 17:29:21 +03:00
Roman Khimov
51b804ab0e storage: generalize Level/Bolt seek implementations
Too much in common. Just refactoring, no functional changes.
2022-02-12 23:09:37 +03:00
Roman Khimov
a5f8b8870a storage: use Update for changeset processing
Batch is only relevant in a multithreaded context; internally it'll do some
magic and use the same locking/updating that Update does, so it makes little
sense for us. This doesn't change benchmarks in any noticeable way.
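
For BoltDB that means applying the changeset in a plain Update transaction,
something like the sketch below (assuming the single "DB" bucket the BoltDB
store keeps everything in):

package example

import bolt "go.etcd.io/bbolt"

var bucket = []byte("DB") // the one bucket the store uses, assumed to exist

// putChangeSet applies accumulated puts and deletes within one Update
// transaction; Batch would only add cross-goroutine coalescing magic on top
// of the same locking, which a single persisting writer doesn't need.
func putChangeSet(db *bolt.DB, puts map[string][]byte, dels map[string]bool) error {
	return db.Update(func(tx *bolt.Tx) error {
		b := tx.Bucket(bucket)
		for k, v := range puts {
			if err := b.Put([]byte(k), v); err != nil {
				return err
			}
		}
		for k := range dels {
			if err := b.Delete([]byte(k)); err != nil {
				return err
			}
		}
		return nil
	})
}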
2022-02-11 16:48:35 +03:00
Roman Khimov
de99c3acdb storage: provide a way to escape from SeekAsync goroutine
A routine blocked on a channel send here can't really exit, so provide a way
to avoid goroutine leaks like these (see the sketch after the traces):

goroutine 2813725 [chan send, 6 minutes]:
github.com/nspcc-dev/neo-go/pkg/core/storage.(*MemCachedStore).SeekAsync.func1.1(0xc01a7118f7, 0x2, 0x25, 0xc01a7118f9, 0x23, 0x23, 0xc0366c7c01)
        github.com/nspcc-dev/neo-go/pkg/core/storage/memcached_store.go:120 +0x86
github.com/nspcc-dev/neo-go/pkg/core/storage.(*MemCachedStore).seek.func2(0xc0079e7920, 0xa, 0x30, 0xc0079e792a, 0x26, 0x26, 0x1)
        github.com/nspcc-dev/neo-go/pkg/core/storage/memcached_store.go:183 +0x347
github.com/nspcc-dev/neo-go/pkg/core/storage.(*MemCachedStore).seek(0xc000458480, 0x135c028, 0xc0000445d0, 0xc00f1721d0, 0x7, 0x7, 0x0, 0x0, 0x0, 0x0, ...)
        github.com/nspcc-dev/neo-go/pkg/core/storage/memcached_store.go:224 +0x4f4
github.com/nspcc-dev/neo-go/pkg/core/storage.(*MemCachedStore).Seek(0xc000458480, 0xc00f1721d0, 0x7, 0x7, 0x0, 0x0, 0x0, 0x0, 0xc0357c6620)
        github.com/nspcc-dev/neo-go/pkg/core/storage/memcached_store.go:110 +0x8a
github.com/nspcc-dev/neo-go/pkg/core/storage.(*MemCachedStore).seek(0xc0331a4f00, 0x135bff0, 0xc00ae26ec0, 0xc00f1721d0, 0x7, 0x7, 0x0, 0x0, 0x0, 0x0, ...)
        github.com/nspcc-dev/neo-go/pkg/core/storage/memcached_store.go:210 +0x379
github.com/nspcc-dev/neo-go/pkg/core/storage.(*MemCachedStore).SeekAsync.func1(0xc0331a4f00, 0x135bff0, 0xc00ae26ec0, 0xc00f1721d0, 0x7, 0x7, 0x0, 0x0, 0x0, 0x0, ...)
        github.com/nspcc-dev/neo-go/pkg/core/storage/memcached_store.go:119 +0xc5
created by github.com/nspcc-dev/neo-go/pkg/core/storage.(*MemCachedStore).SeekAsync
        github.com/nspcc-dev/neo-go/pkg/core/storage/memcached_store.go:118 +0xc8

goroutine 2822823 [chan send, 6 minutes]:
github.com/nspcc-dev/neo-go/pkg/core/storage.(*MemCachedStore).SeekAsync.func1.1(0xc011859b77, 0x3, 0x3, 0xc017bea8d0, 0x26, 0x26, 0xc00f1afc00)
        github.com/nspcc-dev/neo-go/pkg/core/storage/memcached_store.go:120 +0x86
github.com/nspcc-dev/neo-go/pkg/core/storage.(*MemCachedStore).seek.func2(0xc011859b60, 0xa, 0xa, 0xc017bea8a0, 0x26, 0x26, 0xc00ad9fb00)
        github.com/nspcc-dev/neo-go/pkg/core/storage/memcached_store.go:200 +0x47e
github.com/nspcc-dev/neo-go/pkg/core/storage.(*MemCachedStore).seek.func2(0xc01d5d8c90, 0xa, 0x30, 0xc01d5d8c9a, 0x26, 0x26, 0x1)
        github.com/nspcc-dev/neo-go/pkg/core/storage/memcached_store.go:200 +0x47e
github.com/nspcc-dev/neo-go/pkg/core/storage.(*MemCachedStore).seek(0xc035e12900, 0x135c028, 0xc0000445d0, 0xc01773bf60, 0x7, 0x7, 0x0, 0x0, 0x0, 0x0, ...)
        github.com/nspcc-dev/neo-go/pkg/core/storage/memcached_store.go:224 +0x4f4
github.com/nspcc-dev/neo-go/pkg/core/storage.(*MemCachedStore).Seek(0xc035e12900, 0xc01773bf60, 0x7, 0x7, 0x0, 0x0, 0x0, 0x0, 0xc030c9e0e0)
        github.com/nspcc-dev/neo-go/pkg/core/storage/memcached_store.go:110 +0x8a
github.com/nspcc-dev/neo-go/pkg/core/storage.(*MemCachedStore).seek(0xc000458480, 0x135c028, 0xc0000445d0, 0xc01773bf60, 0x7, 0x7, 0x0, 0x0, 0x0, 0x0, ...)
        github.com/nspcc-dev/neo-go/pkg/core/storage/memcached_store.go:210 +0x379
github.com/nspcc-dev/neo-go/pkg/core/storage.(*MemCachedStore).Seek(0xc000458480, 0xc01773bf60, 0x7, 0x7, 0x0, 0x0, 0x0, 0x0, 0xc030c9e070)
        github.com/nspcc-dev/neo-go/pkg/core/storage/memcached_store.go:110 +0x8a
github.com/nspcc-dev/neo-go/pkg/core/storage.(*MemCachedStore).seek(0xc00b340c60, 0x135bff0, 0xc00f1afbc0, 0xc01773bf60, 0x7, 0x7, 0x0, 0x0, 0x0, 0x0, ...)
        github.com/nspcc-dev/neo-go/pkg/core/storage/memcached_store.go:210 +0x379
github.com/nspcc-dev/neo-go/pkg/core/storage.(*MemCachedStore).SeekAsync.func1(0xc00b340c60, 0x135bff0, 0xc00f1afbc0, 0xc01773bf60, 0x7, 0x7, 0x0, 0x0, 0x0, 0x0, ...)
        github.com/nspcc-dev/neo-go/pkg/core/storage/memcached_store.go:119 +0xc5
created by github.com/nspcc-dev/neo-go/pkg/core/storage.(*MemCachedStore).SeekAsync
        github.com/nspcc-dev/neo-go/pkg/core/storage/memcached_store.go:118 +0xc8
...
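
A sketch of the escape hatch (hypothetical helper mirroring the idea; the
real change wires a similar cancellation channel through seek):

package example

// sendAll streams items to res but bails out once cancel is closed, so an
// abandoned consumer no longer leaves the producer blocked on a channel
// send forever.
func sendAll(res chan<- []byte, cancel <-chan struct{}, items [][]byte) {
	defer close(res)
	for _, item := range items {
		select {
		case res <- item: // consumer is still reading
		case <-cancel: // consumer is gone, exit instead of leaking
			return
		}
	}
}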
2022-02-09 00:22:42 +03:00
Roman Khimov
3307292597 storage: rework MemoryStore with a single map
Doesn't affect any benchmarks or tests, but makes things a bit simpler.
2022-02-09 00:22:42 +03:00
Roman Khimov
42769d11ff
Merge pull request #2330 from nspcc-dev/transfer-logs-opt
core: improve Seek and optimise seek time for transfer logs
2022-01-19 21:55:40 +03:00
Anna Shaleva
cd42b8b20c core: allow early Seek stop
This simple approach improves the performance of
BoltDB and LevelDB in terms of both speed and allocations
when retrieving the GasPerVote value from the storage.
MemoryPS's speed suffers a bit, but we don't use it in
production environments.

Part of #2322.
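
The mechanism is a bool-returning callback, roughly as sketched below
(signatures simplified; the real Seek takes the package's seek-range
parameter):

package example

// seeker mirrors the post-change Seek shape: the callback returns a bool
// and iteration stops on the first false.
type seeker interface {
	Seek(prefix []byte, f func(k, v []byte) bool)
}

// firstN collects at most n values and then stops the scan early instead of
// walking the rest of the range.
func firstN(s seeker, prefix []byte, n int) [][]byte {
	if n <= 0 {
		return nil
	}
	var vals [][]byte
	s.Seek(prefix, func(k, v []byte) bool {
		vals = append(vals, v)
		return len(vals) < n // false → stop seeking
	})
	return vals
}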

Benchmark results:

name                                                              old time/op    new time/op    delta
NEO_GetGASPerVote/MemPS_10RewardRecords_1RewardDistance-8           25.3µs ± 1%    26.4µs ± 9%   +4.41%  (p=0.043 n=10+9)
NEO_GetGASPerVote/MemPS_10RewardRecords_10RewardDistance-8          27.9µs ± 1%    30.1µs ±15%   +7.97%  (p=0.000 n=10+9)
NEO_GetGASPerVote/MemPS_10RewardRecords_100RewardDistance-8         55.1µs ± 1%    60.2µs ± 7%   +9.27%  (p=0.000 n=10+10)
NEO_GetGASPerVote/MemPS_10RewardRecords_1000RewardDistance-8         353µs ± 2%     416µs ±13%  +17.88%  (p=0.000 n=8+8)
NEO_GetGASPerVote/MemPS_100RewardRecords_1RewardDistance-8           195µs ± 1%     216µs ± 7%  +10.42%  (p=0.000 n=10+10)
NEO_GetGASPerVote/MemPS_100RewardRecords_10RewardDistance-8          200µs ± 4%     214µs ± 9%   +6.99%  (p=0.002 n=9+8)
NEO_GetGASPerVote/MemPS_100RewardRecords_100RewardDistance-8         223µs ± 2%     247µs ± 9%  +10.60%  (p=0.000 n=10+10)
NEO_GetGASPerVote/MemPS_100RewardRecords_1000RewardDistance-8        612µs ±23%     855µs ±52%  +39.60%  (p=0.001 n=9+10)
NEO_GetGASPerVote/MemPS_1000RewardRecords_1RewardDistance-8         11.3ms ±53%    10.7ms ±50%     ~     (p=0.739 n=10+10)
NEO_GetGASPerVote/MemPS_1000RewardRecords_10RewardDistance-8        12.0ms ±37%    10.4ms ±65%     ~     (p=0.853 n=10+10)
NEO_GetGASPerVote/MemPS_1000RewardRecords_100RewardDistance-8       11.3ms ±40%    10.4ms ±49%     ~     (p=0.631 n=10+10)
NEO_GetGASPerVote/MemPS_1000RewardRecords_1000RewardDistance-8      3.80ms ±45%    3.69ms ±27%     ~     (p=0.931 n=6+5)
NEO_GetGASPerVote/BoltPS_10RewardRecords_1RewardDistance-8          23.0µs ± 9%    22.6µs ± 4%     ~     (p=0.059 n=8+9)
NEO_GetGASPerVote/BoltPS_10RewardRecords_10RewardDistance-8         25.9µs ± 5%    24.8µs ± 4%   -4.17%  (p=0.006 n=10+8)
NEO_GetGASPerVote/BoltPS_10RewardRecords_100RewardDistance-8        42.7µs ±13%    38.9µs ± 1%   -8.85%  (p=0.000 n=9+8)
NEO_GetGASPerVote/BoltPS_10RewardRecords_1000RewardDistance-8       80.8µs ±12%    84.9µs ± 9%     ~     (p=0.114 n=8+9)
NEO_GetGASPerVote/BoltPS_100RewardRecords_1RewardDistance-8         64.3µs ±16%    22.1µs ±23%  -65.64%  (p=0.000 n=10+10)
NEO_GetGASPerVote/BoltPS_100RewardRecords_10RewardDistance-8        61.0µs ±34%    23.2µs ± 8%  -62.04%  (p=0.000 n=10+9)
NEO_GetGASPerVote/BoltPS_100RewardRecords_100RewardDistance-8       62.2µs ±14%    25.7µs ±13%  -58.66%  (p=0.000 n=9+10)
NEO_GetGASPerVote/BoltPS_100RewardRecords_1000RewardDistance-8       359µs ±60%     325µs ±60%     ~     (p=0.739 n=10+10)
NEO_GetGASPerVote/BoltPS_1000RewardRecords_1RewardDistance-8         242µs ±21%      13µs ±28%  -94.49%  (p=0.000 n=10+8)
NEO_GetGASPerVote/BoltPS_1000RewardRecords_10RewardDistance-8        229µs ±23%      18µs ±70%  -92.02%  (p=0.000 n=10+9)
NEO_GetGASPerVote/BoltPS_1000RewardRecords_100RewardDistance-8       238µs ±28%     20µs ±109%  -91.38%  (p=0.000 n=10+9)
NEO_GetGASPerVote/BoltPS_1000RewardRecords_1000RewardDistance-8      265µs ±20%      77µs ±62%  -71.04%  (p=0.000 n=10+10)
NEO_GetGASPerVote/LevelPS_10RewardRecords_1RewardDistance-8         25.5µs ± 3%    24.7µs ± 7%     ~     (p=0.143 n=10+10)
NEO_GetGASPerVote/LevelPS_10RewardRecords_10RewardDistance-8        27.4µs ± 2%    27.9µs ± 6%     ~     (p=0.280 n=10+10)
NEO_GetGASPerVote/LevelPS_10RewardRecords_100RewardDistance-8       50.2µs ± 7%    47.4µs ±10%     ~     (p=0.156 n=9+10)
NEO_GetGASPerVote/LevelPS_10RewardRecords_1000RewardDistance-8      98.2µs ± 9%    94.6µs ±10%     ~     (p=0.218 n=10+10)
NEO_GetGASPerVote/LevelPS_100RewardRecords_1RewardDistance-8        82.9µs ±13%    32.1µs ±22%  -61.30%  (p=0.000 n=10+10)
NEO_GetGASPerVote/LevelPS_100RewardRecords_10RewardDistance-8       92.2µs ±11%    33.7µs ±12%  -63.42%  (p=0.000 n=10+9)
NEO_GetGASPerVote/LevelPS_100RewardRecords_100RewardDistance-8      88.3µs ±22%    39.4µs ±14%  -55.36%  (p=0.000 n=10+9)
NEO_GetGASPerVote/LevelPS_100RewardRecords_1000RewardDistance-8      106µs ±18%      78µs ±24%  -26.20%  (p=0.000 n=9+10)
NEO_GetGASPerVote/LevelPS_1000RewardRecords_1RewardDistance-8        360µs ±24%      29µs ±53%  -91.91%  (p=0.000 n=10+9)
NEO_GetGASPerVote/LevelPS_1000RewardRecords_10RewardDistance-8       353µs ±16%      50µs ±70%  -85.72%  (p=0.000 n=10+10)
NEO_GetGASPerVote/LevelPS_1000RewardRecords_100RewardDistance-8      381µs ±20%     47µs ±111%  -87.64%  (p=0.000 n=10+10)
NEO_GetGASPerVote/LevelPS_1000RewardRecords_1000RewardDistance-8     434µs ±19%     113µs ±41%  -74.04%  (p=0.000 n=10+10)

name                                                              old alloc/op   new alloc/op   delta
NEO_GetGASPerVote/MemPS_10RewardRecords_1RewardDistance-8           4.82kB ± 0%    4.26kB ± 1%  -11.62%  (p=0.000 n=10+9)
NEO_GetGASPerVote/MemPS_10RewardRecords_10RewardDistance-8          4.99kB ± 0%    4.41kB ± 1%  -11.56%  (p=0.000 n=10+10)
NEO_GetGASPerVote/MemPS_10RewardRecords_100RewardDistance-8         8.45kB ± 0%    7.87kB ± 0%   -6.88%  (p=0.000 n=10+10)
NEO_GetGASPerVote/MemPS_10RewardRecords_1000RewardDistance-8        55.0kB ± 0%    54.5kB ± 0%   -0.81%  (p=0.000 n=10+10)
NEO_GetGASPerVote/MemPS_100RewardRecords_1RewardDistance-8          29.1kB ± 0%    21.7kB ± 2%  -25.56%  (p=0.000 n=9+9)
NEO_GetGASPerVote/MemPS_100RewardRecords_10RewardDistance-8         29.3kB ± 1%    21.8kB ± 2%  -25.74%  (p=0.000 n=10+10)
NEO_GetGASPerVote/MemPS_100RewardRecords_100RewardDistance-8        31.3kB ± 1%    23.6kB ± 1%  -24.50%  (p=0.000 n=10+10)
NEO_GetGASPerVote/MemPS_100RewardRecords_1000RewardDistance-8       92.5kB ± 5%    84.7kB ± 3%   -8.50%  (p=0.000 n=10+9)
NEO_GetGASPerVote/MemPS_1000RewardRecords_1RewardDistance-8          324kB ±29%     222kB ±44%  -31.33%  (p=0.007 n=10+10)
NEO_GetGASPerVote/MemPS_1000RewardRecords_10RewardDistance-8         308kB ±32%     174kB ±14%  -43.56%  (p=0.000 n=10+8)
NEO_GetGASPerVote/MemPS_1000RewardRecords_100RewardDistance-8        298kB ±23%     178kB ±36%  -40.26%  (p=0.000 n=10+10)
NEO_GetGASPerVote/MemPS_1000RewardRecords_1000RewardDistance-8       362kB ± 6%     248kB ± 6%  -31.54%  (p=0.004 n=6+5)
NEO_GetGASPerVote/BoltPS_10RewardRecords_1RewardDistance-8          5.15kB ± 3%    4.64kB ± 2%   -9.92%  (p=0.000 n=10+9)
NEO_GetGASPerVote/BoltPS_10RewardRecords_10RewardDistance-8         5.36kB ± 1%    4.75kB ± 5%  -11.42%  (p=0.000 n=10+10)
NEO_GetGASPerVote/BoltPS_10RewardRecords_100RewardDistance-8        8.15kB ± 4%    7.53kB ± 1%   -7.62%  (p=0.000 n=10+9)
NEO_GetGASPerVote/BoltPS_10RewardRecords_1000RewardDistance-8       33.2kB ± 5%    33.2kB ± 7%     ~     (p=0.829 n=8+10)
NEO_GetGASPerVote/BoltPS_100RewardRecords_1RewardDistance-8         20.1kB ± 7%     5.8kB ±13%  -70.90%  (p=0.000 n=10+10)
NEO_GetGASPerVote/BoltPS_100RewardRecords_10RewardDistance-8        19.8kB ±14%     6.2kB ± 5%  -68.87%  (p=0.000 n=10+9)
NEO_GetGASPerVote/BoltPS_100RewardRecords_100RewardDistance-8       21.7kB ± 6%     8.0kB ± 7%  -63.20%  (p=0.000 n=9+10)
NEO_GetGASPerVote/BoltPS_100RewardRecords_1000RewardDistance-8      98.5kB ±44%    81.8kB ±48%     ~     (p=0.143 n=10+10)
NEO_GetGASPerVote/BoltPS_1000RewardRecords_1RewardDistance-8         130kB ± 4%       4kB ± 9%  -96.69%  (p=0.000 n=10+10)
NEO_GetGASPerVote/BoltPS_1000RewardRecords_10RewardDistance-8        131kB ± 4%       5kB ±21%  -96.48%  (p=0.000 n=9+9)
NEO_GetGASPerVote/BoltPS_1000RewardRecords_100RewardDistance-8       132kB ± 4%       6kB ±10%  -95.39%  (p=0.000 n=10+8)
NEO_GetGASPerVote/BoltPS_1000RewardRecords_1000RewardDistance-8      151kB ± 4%      26kB ±10%  -82.46%  (p=0.000 n=9+9)
NEO_GetGASPerVote/LevelPS_10RewardRecords_1RewardDistance-8         5.92kB ± 3%    5.32kB ± 2%  -10.01%  (p=0.000 n=10+10)
NEO_GetGASPerVote/LevelPS_10RewardRecords_10RewardDistance-8        6.09kB ± 2%    5.48kB ± 2%  -10.00%  (p=0.000 n=10+10)
NEO_GetGASPerVote/LevelPS_10RewardRecords_100RewardDistance-8       9.61kB ± 1%    9.00kB ± 0%   -6.29%  (p=0.000 n=10+10)
NEO_GetGASPerVote/LevelPS_10RewardRecords_1000RewardDistance-8      33.4kB ± 7%    32.2kB ± 5%   -3.60%  (p=0.037 n=10+10)
NEO_GetGASPerVote/LevelPS_100RewardRecords_1RewardDistance-8        22.3kB ±10%     9.0kB ±16%  -59.78%  (p=0.000 n=10+10)
NEO_GetGASPerVote/LevelPS_100RewardRecords_10RewardDistance-8       23.6kB ± 6%     8.5kB ±20%  -63.76%  (p=0.000 n=10+10)
NEO_GetGASPerVote/LevelPS_100RewardRecords_100RewardDistance-8      24.2kB ± 9%    11.5kB ± 4%  -52.34%  (p=0.000 n=10+8)
NEO_GetGASPerVote/LevelPS_100RewardRecords_1000RewardDistance-8     44.2kB ± 6%    30.8kB ± 9%  -30.24%  (p=0.000 n=10+10)
NEO_GetGASPerVote/LevelPS_1000RewardRecords_1RewardDistance-8        144kB ± 4%      10kB ±24%  -93.39%  (p=0.000 n=9+10)
NEO_GetGASPerVote/LevelPS_1000RewardRecords_10RewardDistance-8       146kB ± 1%      11kB ±37%  -92.14%  (p=0.000 n=7+10)
NEO_GetGASPerVote/LevelPS_1000RewardRecords_100RewardDistance-8      149kB ± 3%      11kB ±12%  -92.28%  (p=0.000 n=10+9)
NEO_GetGASPerVote/LevelPS_1000RewardRecords_1000RewardDistance-8     171kB ± 4%      34kB ±12%  -80.00%  (p=0.000 n=10+10)

name                                                              old allocs/op  new allocs/op  delta
NEO_GetGASPerVote/MemPS_10RewardRecords_1RewardDistance-8             95.0 ± 0%      74.0 ± 0%  -22.11%  (p=0.001 n=8+9)
NEO_GetGASPerVote/MemPS_10RewardRecords_10RewardDistance-8             100 ± 0%        78 ± 1%  -21.70%  (p=0.000 n=10+10)
NEO_GetGASPerVote/MemPS_10RewardRecords_100RewardDistance-8            153 ± 0%       131 ± 2%  -14.25%  (p=0.000 n=6+10)
NEO_GetGASPerVote/MemPS_10RewardRecords_1000RewardDistance-8           799 ± 2%       797 ± 4%     ~     (p=0.956 n=10+10)
NEO_GetGASPerVote/MemPS_100RewardRecords_1RewardDistance-8             438 ± 6%       167 ± 0%  -61.86%  (p=0.000 n=10+9)
NEO_GetGASPerVote/MemPS_100RewardRecords_10RewardDistance-8            446 ± 5%       172 ± 0%  -61.38%  (p=0.000 n=10+10)
NEO_GetGASPerVote/MemPS_100RewardRecords_100RewardDistance-8           506 ± 4%       232 ± 1%  -54.21%  (p=0.000 n=10+10)
NEO_GetGASPerVote/MemPS_100RewardRecords_1000RewardDistance-8        1.31k ± 5%     0.97k ± 4%  -26.20%  (p=0.000 n=10+10)
NEO_GetGASPerVote/MemPS_1000RewardRecords_1RewardDistance-8          5.06k ± 1%     1.09k ± 2%  -78.53%  (p=0.000 n=10+10)
NEO_GetGASPerVote/MemPS_1000RewardRecords_10RewardDistance-8         5.02k ± 3%     1.08k ± 0%  -78.45%  (p=0.000 n=10+8)
NEO_GetGASPerVote/MemPS_1000RewardRecords_100RewardDistance-8        5.09k ± 3%     1.15k ± 2%  -77.48%  (p=0.000 n=10+10)
NEO_GetGASPerVote/MemPS_1000RewardRecords_1000RewardDistance-8       5.83k ± 1%     1.87k ± 3%  -68.02%  (p=0.004 n=6+5)
NEO_GetGASPerVote/BoltPS_10RewardRecords_1RewardDistance-8             103 ± 2%        82 ± 1%  -20.83%  (p=0.000 n=10+10)
NEO_GetGASPerVote/BoltPS_10RewardRecords_10RewardDistance-8            107 ± 0%        86 ± 0%  -19.63%  (p=0.000 n=8+8)
NEO_GetGASPerVote/BoltPS_10RewardRecords_100RewardDistance-8           164 ± 1%       139 ± 0%  -15.45%  (p=0.000 n=10+9)
NEO_GetGASPerVote/BoltPS_10RewardRecords_1000RewardDistance-8          820 ± 1%       789 ± 1%   -3.70%  (p=0.000 n=9+10)
NEO_GetGASPerVote/BoltPS_100RewardRecords_1RewardDistance-8            475 ± 0%        94 ± 3%  -80.15%  (p=0.000 n=10+9)
NEO_GetGASPerVote/BoltPS_100RewardRecords_10RewardDistance-8           481 ± 0%       100 ± 2%  -79.26%  (p=0.000 n=9+9)
NEO_GetGASPerVote/BoltPS_100RewardRecords_100RewardDistance-8          549 ± 0%       161 ± 2%  -70.69%  (p=0.000 n=10+10)
NEO_GetGASPerVote/BoltPS_100RewardRecords_1000RewardDistance-8       1.61k ±19%     1.19k ±25%  -26.05%  (p=0.000 n=10+10)
NEO_GetGASPerVote/BoltPS_1000RewardRecords_1RewardDistance-8         4.12k ± 0%     0.08k ± 2%  -98.02%  (p=0.000 n=10+10)
NEO_GetGASPerVote/BoltPS_1000RewardRecords_10RewardDistance-8        4.14k ± 0%     0.09k ± 3%  -97.90%  (p=0.000 n=9+9)
NEO_GetGASPerVote/BoltPS_1000RewardRecords_100RewardDistance-8       4.19k ± 0%     0.15k ± 3%  -96.52%  (p=0.000 n=9+9)
NEO_GetGASPerVote/BoltPS_1000RewardRecords_1000RewardDistance-8      4.82k ± 1%     0.74k ± 1%  -84.58%  (p=0.000 n=10+9)
NEO_GetGASPerVote/LevelPS_10RewardRecords_1RewardDistance-8            112 ± 4%        90 ± 3%  -19.45%  (p=0.000 n=10+10)
NEO_GetGASPerVote/LevelPS_10RewardRecords_10RewardDistance-8           116 ± 2%        95 ± 2%  -17.90%  (p=0.000 n=10+10)
NEO_GetGASPerVote/LevelPS_10RewardRecords_100RewardDistance-8          170 ± 3%       148 ± 3%  -12.99%  (p=0.000 n=10+10)
NEO_GetGASPerVote/LevelPS_10RewardRecords_1000RewardDistance-8         800 ± 2%       772 ± 2%   -3.50%  (p=0.000 n=10+10)
NEO_GetGASPerVote/LevelPS_100RewardRecords_1RewardDistance-8           480 ± 3%       118 ± 3%  -75.32%  (p=0.000 n=10+10)
NEO_GetGASPerVote/LevelPS_100RewardRecords_10RewardDistance-8          479 ± 2%       123 ± 3%  -74.33%  (p=0.000 n=10+9)
NEO_GetGASPerVote/LevelPS_100RewardRecords_100RewardDistance-8         542 ± 1%       183 ± 3%  -66.34%  (p=0.000 n=10+9)
NEO_GetGASPerVote/LevelPS_100RewardRecords_1000RewardDistance-8      1.19k ± 1%     0.79k ± 1%  -33.41%  (p=0.000 n=10+10)
NEO_GetGASPerVote/LevelPS_1000RewardRecords_1RewardDistance-8        4.21k ± 1%     0.13k ±21%  -96.83%  (p=0.000 n=10+10)
NEO_GetGASPerVote/LevelPS_1000RewardRecords_10RewardDistance-8       4.23k ± 1%     0.15k ±17%  -96.48%  (p=0.000 n=10+10)
NEO_GetGASPerVote/LevelPS_1000RewardRecords_100RewardDistance-8      4.27k ± 0%     0.19k ± 6%  -95.51%  (p=0.000 n=10+9)
NEO_GetGASPerVote/LevelPS_1000RewardRecords_1000RewardDistance-8     4.89k ± 1%     0.79k ± 2%  -83.80%  (p=0.000 n=10+10)
2022-01-19 20:54:35 +03:00
Roman Khimov
2e452df13a rpc/storage: add storage changes to invoke* diagnostics 2022-01-19 00:02:19 +03:00
Anna Shaleva
04a8e6666f storage: allow to seek backwards 2022-01-13 12:44:29 +03:00
AnnaShaleva
6bc92abe19 storage: allow to seek starting from some point 2022-01-13 12:44:29 +03:00
Anna Shaleva
5770a581c3 store: improve Seek tests
After #2193 Seek results are sorted in ascending order, so technically
the test needed to be fixed along with those changes.
2022-01-13 12:44:29 +03:00
Roman Khimov
4c39b6600d *: store application log along with tx/block
Two times fewer keys inserted into the DB per tx lead to ~13% TPS
improvement. We also drop one goroutine with it, which isn't bad either.
2021-12-09 15:39:26 +03:00
Roman Khimov
9576e10d04 storage: use map size hints to optimize subsequent persist()
We're likely to have something comparable to the current changeset in the
subsequent one. If it's bigger, no big deal, it'll be reallocated; if it's
smaller, no big deal, the next one will be preallocated smaller.
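
In Go terms it's just the capacity hint of make, as a sketch:

package example

// freshMap allocates the next cycle's map sized after the changeset that
// was just persisted, so a comparable subsequent changeset causes no map
// growth.
func freshMap(justPersisted map[string][]byte) map[string][]byte {
	return make(map[string][]byte, len(justPersisted))
}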
2021-12-01 21:36:25 +03:00
Roman Khimov
33e37e60e5
Merge pull request #2264 from nspcc-dev/fix-win-tests
*: Windows compatibility fixes
2021-11-29 11:25:35 +03:00
AnnaShaleva
aefb6f9fee *: fix tests failing due to path.Join usage
Solution:
Use the `path/filepath` package to construct the expected path. This package is OS-aware, see https://github.com/golang/go/issues/30616.
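
The difference in one function (standard library only):

package example

import (
	"path"
	"path/filepath"
)

// joined shows why tests must use filepath: path.Join always separates
// with '/', while filepath.Join uses the OS separator, '\' on Windows.
func joined(dir, file string) (portable, osAware string) {
	portable = path.Join(dir, file)
	osAware = filepath.Join(dir, file)
	return
}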
2021-11-29 11:11:09 +03:00
AnnaShaleva
06beb4d534 core: fix TestMemCachedPersist test
Problem:
```
--- FAIL: TestMemCachedPersist (0.07s)
    --- FAIL: TestMemCachedPersist/BoltDBStore (0.07s)
        testing.go:894: TempDir RemoveAll cleanup: remove C:\Users\Anna\AppData\Local\Temp\TestMemCachedPersist_BoltDBStore294966711\001\test_bolt_db: The process cannot access the file because it is being used by another process.
```

Solution:
Release the resources occupied by the DB.
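
A sketch of the pattern (hypothetical open callback): registering Close via
t.Cleanup after t.TempDir makes Close run first, since cleanups run in LIFO
order, so the file is no longer held open when the directory is removed.

package example

import "testing"

// closer is whatever store the test opens.
type closer interface{ Close() error }

// newTestStore opens a DB in a test temp dir and guarantees it's closed
// before the directory is removed.
func newTestStore(t *testing.T, open func(dir string) (closer, error)) closer {
	dir := t.TempDir()
	s, err := open(dir)
	if err != nil {
		t.Fatal(err)
	}
	t.Cleanup(func() {
		if err := s.Close(); err != nil { // release the file before RemoveAll
			t.Error(err)
		}
	})
	return s
}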
2021-11-26 18:26:27 +03:00
Evgeniy Stratonikov
fac595bbdf core: remove old storage items asynchronously
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-11-24 16:25:11 +03:00
Roman Khimov
5fbe838fd4 core: add and use synchronous persist to avoid OOM
b9be892bf9 made Persist asynchronous, which is very effective in allowing
the system to continue processing blocks/transactions while flushing things
to disk. At the same time it is very dangerous: if the disk is slow and
flushing the KV set takes more time than the persisting interval, there
might be an even bigger new KV set in MemCachedStore by the time it
finishes. Even if the system immediately starts to flush this new data set,
it (being bigger) can take more time than the previous one, and while it is
being flushed yet another data set will appear in memory, potentially bigger
still.

So we can easily end up with the system going out of control, consuming more
and more memory and taking more and more time to persist a single set of
data. To avoid this we need to detect such a condition and just wait for
Persist to really finish its job and release the resources.
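
A minimal sketch of the resulting back-pressure logic (not the actual
Blockchain code):

package example

// persistLoop flushes in the background as long as the disk keeps up; once
// a tick arrives while the previous flush is still running, it switches to
// a synchronous flush and waits, so the in-memory changeset can't snowball.
func persistLoop(tick <-chan struct{}, persist func()) {
	done := make(chan struct{}, 1)
	done <- struct{}{} // no flush is running initially
	for range tick {
		select {
		case <-done: // previous flush finished: run the next one in background
			go func() {
				persist()
				done <- struct{}{}
			}()
		default: // disk is behind: wait for it, then flush synchronously
			<-done
			persist()
			done <- struct{}{}
		}
	}
}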
2021-11-22 10:41:40 +03:00
Roman Khimov
125e4231b0 core: store NEP-11 transfers, add accessor functions 2021-11-18 00:09:10 +03:00
Roman Khimov
1144a03486 storage: drop RedisDB, close #2130 2021-10-27 17:32:25 +03:00
Roman Khimov
fb4b87bb96 storage: drop BadgerDB support, close #2130 2021-10-27 17:31:55 +03:00
Anna Shaleva
3450371910 core: split (*MemCachedStore) Seek and SeekAsync methods
Use SeekAsync for System.Storage.Find and Seek for the rest of the cases.
2021-10-21 10:05:12 +03:00
Anna Shaleva
dcda7bec63 core: squash PS-seeking and merging routines in (*MemCachedStore).Seek
We don't need a separate routine to merge seek results.
2021-10-21 10:05:12 +03:00
Anna Shaleva
c88720bf45 core: remove memstore routine from (*MemCachedStore).SeekAsync
It adds unnecessary overhead to computations.
2021-10-21 10:05:12 +03:00
Anna Shaleva
dfe2c667e1 core: do not hold the lock while seeking over persistent store 2021-10-21 10:05:12 +03:00
Anna Shaleva
0a4f45c9b0 core: add ability to free storage.Iterator resources 2021-10-21 10:05:12 +03:00
Anna Shaleva
89ee2e7720 core: refactor storage.Find and storage.Iterator to work with channel
Add SeekAsync methods in order to fetch matching storage items
on demand. Refactor storage.Find and storage.Iterator wrt these changes.
2021-10-21 10:05:12 +03:00
Anna Shaleva
7ba88e98e2 core: optimize (*MemCachedStore).Seek operation
Real persistent storage guarantees that the result of Seek is sorted
by key. The idea of the optimisation is to merge two sorted seek
results (memStore+persistentStore) into one, so that
(*MemCachedStore).Seek returns a sorted list. The only thing
that remains is to sort the items returned by (*MemoryStore).Seek.
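
The merge step itself is the classic two-way merge of sorted sequences,
sketched below (in the real code cached deletions additionally filter out
persisted keys):

package example

import "bytes"

type keyValue struct {
	Key   []byte
	Value []byte
}

// mergeSorted merges in-memory and persistent seek results, both already
// ascending by key; on equal keys the in-memory item wins as the newer one.
func mergeSorted(mem, ps []keyValue) []keyValue {
	res := make([]keyValue, 0, len(mem)+len(ps))
	for len(mem) > 0 && len(ps) > 0 {
		switch cmp := bytes.Compare(mem[0].Key, ps[0].Key); {
		case cmp < 0:
			res = append(res, mem[0])
			mem = mem[1:]
		case cmp > 0:
			res = append(res, ps[0])
			ps = ps[1:]
		default: // cached value shadows the persisted one
			res = append(res, mem[0])
			mem, ps = mem[1:], ps[1:]
		}
	}
	res = append(res, mem...)
	return append(res, ps...)
}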
2021-10-21 10:05:12 +03:00
Anna Shaleva
191cc45032 core: sort items in MemoryStore.Seek
MemoryStore is used in a MemCachedStore as a persistent layer in tests.
Further commits assume that persistent storage returns sorted values
from Seek, so sort the result of MemoryStore.Seek.

Benchmark results for 10000 matching items in MemoryStore compared to
master:
name          old time/op    new time/op    delta
MemorySeek-8     712µs ± 0%    3850µs ± 0%   +440.52%  (p=0.000 n=8+8)

name          old alloc/op   new alloc/op   delta
MemorySeek-8     160kB ± 0%    2724kB ± 0%  +1602.61%  (p=0.000 n=10+8)

name          old allocs/op  new allocs/op  delta
MemorySeek-8     10.0k ± 0%     10.0k ± 0%     +0.24%  (p=0.000 n=10+10)

For details on implementation efficiency see
https://github.com/nspcc-dev/neo-go/pull/2193#discussion_r722993358.
2021-10-21 10:05:12 +03:00
Anna Shaleva
d8210c0137 core: add benchmarks for iterator.Next, MemCached.Seek, Mem.Seek 2021-10-21 10:05:12 +03:00
Anna Shaleva
8d8071f97e core: distinguish storage.KeyValue and storage.KeyValueExists
We need the Exists field for storage-batch-related code; other cases can go
without Exists, so add a new KeyValue structure and refactor the related
code.
2021-10-21 10:05:12 +03:00
Anna Shaleva
5cd78c31af core: allow to recover after state jump interruption
We need several stages to manage the state jump process in order not to mix
up old and new contract storage items and to be sure that genesis state data
is properly removed from the storage. Other operations do not require a
separate stage and can be performed each time `jumpToStateInternal` is
called.
2021-09-07 19:43:27 +03:00
Anna Shaleva
6381173293 core: store statesync-related storage items under temp prefix
State jump should be an atomic operation; we can't modify contract
storage item state on the fly. Thus, store fresh items under a temp
prefix and replace the outdated ones after state sync is completed.
Related:
https://github.com/nspcc-dev/neo-go/pull/2019#discussion_r693350460.
2021-09-07 19:43:27 +03:00
Roman Khimov
6d074a96e9 *: make tests use TempDir(), fix #1319
Simplify things, drop TempFile at the same time (refs. #1764)
2021-08-26 17:29:40 +03:00
Roman Khimov
adc660c3e0
Merge pull request #2123 from nspcc-dev/store-better
Store better
2021-08-13 12:50:24 +03:00
Roman Khimov
ae071d4542 storage: introduce PutChangeSet and use it for Persist
We're using batches in the wrong way during persist: we already have all
changes accumulated in two maps, then we move them into a batch, and only
then is that applied. For some DBs like BoltDB this batch is just another
MemoryStore, so we essentially shuffle the changeset from one map to
another; for others like LevelDB the batch is just a serialized set of KV
pairs, which doesn't help much on the subsequent PutBatch, we just duplicate
the changeset once more.

So introduce PutChangeSet that takes the two maps with sets and deletes
directly. It also allows simplifying MemCachedStore logic.
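
For the in-memory store the new method degenerates into two map loops with
no intermediate batch at all, roughly (signature hedged from this
description):

package example

// putChangeSet applies accumulated changes directly to the store's map:
// one loop for the sets, one for the deletes.
func putChangeSet(mem map[string][]byte, puts map[string][]byte, dels map[string]bool) {
	for k, v := range puts {
		mem[k] = v
	}
	for k := range dels {
		delete(mem, k)
	}
}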

neo-bench for single node with 10 workers, LevelDB:

  Reference:

  RPS    30189.132 30556.448 30390.482 ≈ 30379    ±  0.61%
  TPS    29427.344 29418.687 29434.273 ≈ 29427    ±  0.03%
  CPU %     33.304    27.179    33.860 ≈    31.45 ± 11.79%
  Mem MB   800.677   798.389   715.042 ≈   771    ±  6.33%

  Patched:

  RPS    30264.326 30386.364 30166.231 ≈ 30272    ± 0.36% ⇅
  TPS    29444.673 29407.440 29452.478 ≈ 29435    ± 0.08% ⇅
  CPU %     34.012    32.597    33.467 ≈   33.36  ± 2.14% ⇅
  Mem MB   549.126   523.656   517.684 ≈  530     ± 3.15% ↓ 31.26%

BoltDB:

  Reference:

  RPS    31937.647 31551.684 31850.408 ≈ 31780    ±  0.64%
  TPS    31292.049 30368.368 31307.724 ≈ 30989    ±  1.74%
  CPU %     33.792    22.339    35.887 ≈    30.67 ± 23.78%
  Mem MB  1271.687  1254.472  1215.639 ≈  1247    ±  2.30%

  Patched:

  RPS    31746.818 30859.485 31689.761 ≈ 31432    ± 1.58% ⇅
  TPS    31271.499 30340.726 30342.568 ≈ 30652    ± 1.75% ⇅
  CPU %     34.611    34.414    31.553 ≈    33.53 ± 5.11% ⇅
  Mem MB  1262.960  1231.389  1335.569 ≈  1277    ± 4.18% ⇅
2021-08-12 17:42:16 +03:00
Roman Khimov
18682f2409 storage: don't use locks for memory batches
They're inherently single-threaded, so locking makes no sense for them.
2021-08-11 18:55:07 +03:00
Anna Shaleva
0e3b9c48a2 core: add API to store StateSyncPoint and StateSyncCurrentBlockHeight
We need it in order not to mess up the blockchain which has its own
CurrentBlockHeight.
2021-08-10 14:06:28 +03:00
Roman Khimov
b9be892bf9 storage: allow accessing MemCachedStore during Persist
Persist by its definition doesn't change MemCachedStore's visible state: all
KV pairs that were accessible via it before Persist remain accessible after
it. The only thing it does is flush the current set of KV pairs from memory
to the persistent store. To do that it needs read-only access to the current
KV pair set, but technically it then replaces the maps, so we had to take a
full write lock, which makes MemCachedStore inaccessible for the duration of
Persist. And Persist can take a lot of time, since it means disk access for
regular DBs.

What we do here is create new in-memory maps for MemCachedStore before
flushing the old ones to the persistent store. Then a fake persistent store
is created, which actually is a MemCachedStore with the old maps, so it has
exactly the same visible state. This Store is never accessed for writes, so
we can read it without taking any internal locks, and at the same time we no
longer need write locks on the original MemCachedStore, since we're not
using it. All of this makes it possible to use MemCachedStore as usual:
reads are handled by going down to whatever level is needed and writes are
handled by the new maps. So while Persist for (*Blockchain).dao does its
most time-consuming work we can process other blocks (reading data for
transactions and persisting storeBlock caches to (*Blockchain).dao).
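
A condensed sketch of the swap (the fake read-only store is elided; the
point is the narrow lock scope):

package example

import "sync"

type flusher interface {
	Flush(changes map[string][]byte) error
}

// cached holds the in-memory changeset over a persistent layer.
type cached struct {
	mut sync.RWMutex
	mem map[string][]byte
	ps  flusher
}

// persist takes the write lock only for the map swap; the slow disk flush
// runs on the detached old map while readers and writers keep using the
// store (in the real code the old map stays readable via a fake lower
// MemCachedStore until the flush completes).
func (s *cached) persist() error {
	s.mut.Lock()
	old := s.mem
	s.mem = make(map[string][]byte, len(old))
	s.mut.Unlock()

	return s.ps.Flush(old)
}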

The change was tested for performance with neo-bench (single node, 10 workers,
LevelDB) on two machines and block dump processing (RC4 testnet up to 62800
with VerifyBlocks set to false) on i7-8565U.

Reference results (bbe4e9cd7b):

Ryzen 9 5950X:
RPS     23616.969 22817.086 23222.378  ≈ 23218   ± 1.72%
TPS     23047.316 22608.578 22735.540  ≈ 22797   ± 0.99%
CPU %      23.434    25.553    23.848  ≈    24.3 ± 4.63%
Mem MB    600.636   503.060   582.043  ≈   562   ± 9.22%

Core i7-8565U:
RPS     6594.007 6499.501 6572.902  ≈ 6555   ± 0.76%
TPS     6561.680 6444.545 6510.120  ≈ 6505   ± 0.90%
CPU %     58.452   60.568   62.474    ≈ 60.5 ± 3.33%
Mem MB   234.893  285.067  269.081   ≈ 263   ± 9.75%

DB restore:
real    0m22.237s 0m23.471s 0m23.409s  ≈ 23.04 ± 3.02%
user    0m35.435s 0m38.943s 0m39.247s  ≈ 37.88 ± 5.59%
sys      0m3.085s  0m3.360s  0m3.144s  ≈  3.20 ± 4.53%

After the change:

Ryzen 9 5950X:
RPS     27747.349 27407.726 27520.210  ≈ 27558   ± 0.63%  ↑ 18.69%
TPS     26992.010 26993.468 27010.966  ≈ 26999   ± 0.04%  ↑ 18.43%
CPU %      28.928    28.096    29.105  ≈    28.7 ± 1.88%  ↑ 18.1%
Mem MB    760.385   726.320   756.118  ≈   748   ± 2.48%  ↑ 33.10%

Core i7-8565U:
RPS     7783.229 7628.409 7542.340  ≈ 7651   ± 1.60%  ↑ 16.72%
TPS     7708.436 7607.397 7489.459  ≈ 7602   ± 1.44%  ↑ 16.85%
CPU %     74.899   71.020   72.697  ≈   72.9 ± 2.67%  ↑ 20.50%
Mem MB   438.047  436.967  416.350  ≈  430   ± 2.84%  ↑ 63.50%

DB restore:
real    0m20.838s 0m21.895s 0m21.794s  ≈ 21.51 ± 2.71%  ↓ 6.64%
user    0m39.091s 0m40.565s 0m41.493s  ≈ 40.38 ± 3.00%  ↑ 6.60%
sys      0m3.184s  0m2.923s  0m3.062s  ≈  3.06 ± 4.27%  ↓ 4.38%

It obviously uses more memory now and utilizes CPU more aggressively, but at
the same time it allows improving all relevant metrics and finally reaching
a situation where we process 50K transactions in less than a second on Ryzen
9 5950X (going higher than 25K TPS). The other observation is a much more
stable block time; on Ryzen 9 it's as close to 1 second as it could be.
2021-08-02 16:33:00 +03:00