Real persistent storage guarantees that the result of Seek is sorted
by key. The idea of the optimisation is to merge the two sorted seek
results (memStore + persistentStore) into one, so that
(*MemCachedStore).Seek returns a sorted list. The only thing
that remains is to sort the items returned by (*MemoryStore).Seek.
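A minimal sketch of such a merge, assuming both inputs are already sorted in bytes.Compare order (KeyValue and the callback shape are simplified for illustration, not the exact neo-go types; deletion shadowing is omitted):

    import "bytes"

    type KeyValue struct {
        Key   []byte
        Value []byte
    }

    // mergedSeek feeds f with the sorted union of two sorted KV
    // slices; on equal keys the in-memory value shadows the
    // persisted one, mirroring MemCachedStore semantics.
    func mergedSeek(mem, ps []KeyValue, f func(k, v []byte)) {
        var i, j int
        for i < len(mem) && j < len(ps) {
            switch cmp := bytes.Compare(mem[i].Key, ps[j].Key); {
            case cmp < 0:
                f(mem[i].Key, mem[i].Value)
                i++
            case cmp > 0:
                f(ps[j].Key, ps[j].Value)
                j++
            default: // same key: the cached item wins
                f(mem[i].Key, mem[i].Value)
                i++
                j++
            }
        }
        for ; i < len(mem); i++ {
            f(mem[i].Key, mem[i].Value)
        }
        for ; j < len(ps); j++ {
            f(ps[j].Key, ps[j].Value)
        }
    }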
MemoryStore is used as the persistent layer of MemCachedStore in tests.
Further commits assume that persistent storage returns values from
Seek sorted by key, so sort the result of MemoryStore.Seek.
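The change itself is small; a sketch (the real method walks the internal map, collects matching pairs and sorts before invoking the callback; needs the bytes, sort and strings imports, KeyValue as in the sketch above):

    // seekSorted returns pairs with the given prefix in key order,
    // matching what a real persistent store's Seek would produce.
    func seekSorted(mem map[string][]byte, prefix []byte) []KeyValue {
        var kvs []KeyValue
        for k, v := range mem {
            if strings.HasPrefix(k, string(prefix)) {
                kvs = append(kvs, KeyValue{Key: []byte(k), Value: v})
            }
        }
        sort.Slice(kvs, func(i, j int) bool {
            return bytes.Compare(kvs[i].Key, kvs[j].Key) < 0
        })
        return kvs
    }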
Benchmark results for 10000 matching items in MemoryStore compared to
master:
name old time/op new time/op delta
MemorySeek-8 712µs ± 0% 3850µs ± 0% +440.52% (p=0.000 n=8+8)
name old alloc/op new alloc/op delta
MemorySeek-8 160kB ± 0% 2724kB ± 0% +1602.61% (p=0.000 n=10+8)
name old allocs/op new allocs/op delta
MemorySeek-8 10.0k ± 0% 10.0k ± 0% +0.24% (p=0.000 n=10+10)
For details on the implementation efficiency see
https://github.com/nspcc-dev/neo-go/pull/2193#discussion_r722993358.
We need several stages to manage the state jump process in order not to mix
up old and new contract storage items and to be sure that genesis state data
are properly removed from the storage. Other operations do not require a
separate stage and can be performed each time `jumpToStateInternal` is
called.
The state jump should be an atomic operation; we can't modify the contract
storage items' state on the fly. Thus, store fresh items under a temporary
prefix and replace the outdated ones after state sync is completed.
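Roughly, the promotion step looks like this (a sketch only: the Store interface is a minimal illustrative subset, the prefix values are illustrative, and mutating the store from inside Seek is assumed to be safe here):

    type Store interface {
        Put(k, v []byte) error
        Delete(k []byte) error
        Seek(k []byte, f func(k, v []byte))
    }

    const (
        STStorage     byte = 0x70 // outdated items live here
        STTempStorage byte = 0x71 // fresh items accumulated during sync
    )

    // finishStateJump removes outdated contract storage items and
    // promotes the ones stored under the temporary prefix.
    func finishStateJump(s Store) {
        s.Seek([]byte{STStorage}, func(k, _ []byte) {
            _ = s.Delete(append([]byte(nil), k...)) // copy: k is valid only inside f
        })
        s.Seek([]byte{STTempStorage}, func(k, v []byte) {
            newKey := append([]byte{STStorage}, k[1:]...)
            _ = s.Put(newKey, append([]byte(nil), v...))
            _ = s.Delete(append([]byte(nil), k...))
        })
    }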
Related:
https://github.com/nspcc-dev/neo-go/pull/2019#discussion_r693350460.
We're using batches the wrong way during persist: we already have all changes
accumulated in two maps, then we move them into a batch, and only then is the
batch applied. For some DBs like BoltDB this batch is just another MemoryStore,
so we essentially shuffle the changeset from one map to another; for others
like LevelDB the batch is just a serialized set of KV pairs, which doesn't help
much with the subsequent PutBatch, we just duplicate the changeset again.
So introduce PutChangeSet that takes the two maps with sets and deletes
directly. It also allows MemCachedStore logic to be simplified.
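The addition boils down to one extra method (a sketch; the exact signature in the PR may differ):

    // PutChangeSet applies the accumulated puts and deletes in one
    // go, without first copying them into an intermediate Batch.
    type ChangeSetPutter interface {
        PutChangeSet(puts map[string][]byte, dels map[string]bool) error
    }

MemCachedStore.Persist can then hand its two internal maps to the layer below as-is.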
neo-bench for single node with 10 workers, LevelDB:
Reference:
RPS 30189.132 30556.448 30390.482 ≈ 30379 ± 0.61%
TPS 29427.344 29418.687 29434.273 ≈ 29427 ± 0.03%
CPU % 33.304 27.179 33.860 ≈ 31.45 ± 11.79%
Mem MB 800.677 798.389 715.042 ≈ 771 ± 6.33%
Patched:
RPS 30264.326 30386.364 30166.231 ≈ 30272 ± 0.36% ⇅
TPS 29444.673 29407.440 29452.478 ≈ 29435 ± 0.08% ⇅
CPU % 34.012 32.597 33.467 ≈ 33.36 ± 2.14% ⇅
Mem MB 549.126 523.656 517.684 ≈ 530 ± 3.15% ↓ 31.26%
BoltDB:
Reference:
RPS 31937.647 31551.684 31850.408 ≈ 31780 ± 0.64%
TPS 31292.049 30368.368 31307.724 ≈ 30989 ± 1.74%
CPU % 33.792 22.339 35.887 ≈ 30.67 ± 23.78%
Mem MB 1271.687 1254.472 1215.639 ≈ 1247 ± 2.30%
Patched:
RPS 31746.818 30859.485 31689.761 ≈ 31432 ± 1.58% ⇅
TPS 31271.499 30340.726 30342.568 ≈ 30652 ± 1.75% ⇅
CPU % 34.611 34.414 31.553 ≈ 33.53 ± 5.11% ⇅
Mem MB 1262.960 1231.389 1335.569 ≈ 1277 ± 4.18% ⇅
Persist by definition doesn't change the visible state of MemCachedStore: all
KV pairs that were accessible via it before Persist remain accessible after
Persist. The only thing it does is flush the current set of KV pairs
from memory to the persistent store. To do that it needs read-only access to
the current KV pair set, but technically it then replaces the maps, so we have
to take a full write lock, which makes MemCachedStore inaccessible for the
duration of Persist. And Persist can take a lot of time, since for regular DBs
it means disk access.
What we do here is create new in-memory maps for MemCachedStore before
flushing the old ones to the persistent store. Then a fake persistent store is
created, which actually is a MemCachedStore with the old maps, so it has
exactly the same visible state. This Store is never accessed for writes, so we
can read it without taking any internal locks, and at the same time we no
longer need write locks for the original MemCachedStore, since we're not using
it. All of this makes it possible to use MemCachedStore normally: reads are
handled by going down to whatever level is needed, and writes are handled by
the new maps. So while Persist for (*Blockchain).dao does its most
time-consuming work, we can process other blocks (reading data for
transactions and persisting storeBlock caches to (*Blockchain).dao).
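Schematically the new Persist looks like this (a heavily simplified sketch: field names are illustrative, imports and the failure path that merges the maps back are condensed into comments, and PutChangeSet from the change above is assumed to be part of the Store interface):

    type MemCachedStore struct {
        mut sync.RWMutex
        mem map[string][]byte // pending puts
        del map[string]bool   // pending deletes
        ps  Store             // the layer below; MemCachedStore implements Store too
    }

    func (s *MemCachedStore) Persist() (int, error) {
        s.mut.Lock()
        // Detach the current changeset into a fake persistent store
        // and start with fresh maps, so writers may proceed at once.
        old := &MemCachedStore{mem: s.mem, del: s.del, ps: s.ps}
        s.mem = make(map[string][]byte)
        s.del = make(map[string]bool)
        s.ps = old // reads fall through to the old maps meanwhile
        s.mut.Unlock()

        // Flush without holding s.mut; 'old' is never written to,
        // so it can be read lock-free.
        err := old.ps.PutChangeSet(old.mem, old.del)

        s.mut.Lock()
        s.ps = old.ps // cut the fake store out again
        // on error the old maps would be merged back into s here
        s.mut.Unlock()
        return len(old.mem) + len(old.del), err
    }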
The change was tested for performance with neo-bench (single node, 10 workers,
LevelDB) on two machines and block dump processing (RC4 testnet up to 62800
with VerifyBlocks set to false) on i7-8565U.
Reference results (bbe4e9cd7b):
Ryzen 9 5950X:
RPS 23616.969 22817.086 23222.378 ≈ 23218 ± 1.72%
TPS 23047.316 22608.578 22735.540 ≈ 22797 ± 0.99%
CPU % 23.434 25.553 23.848 ≈ 24.3 ± 4.63%
Mem MB 600.636 503.060 582.043 ≈ 562 ± 9.22%
Core i7-8565U:
RPS 6594.007 6499.501 6572.902 ≈ 6555 ± 0.76%
TPS 6561.680 6444.545 6510.120 ≈ 6505 ± 0.90%
CPU % 58.452 60.568 62.474 ≈ 60.5 ± 3.33%
Mem MB 234.893 285.067 269.081 ≈ 263 ± 9.75%
DB restore:
real 0m22.237s 0m23.471s 0m23.409s ≈ 23.04 ± 3.02%
user 0m35.435s 0m38.943s 0m39.247s ≈ 37.88 ± 5.59%
sys 0m3.085s 0m3.360s 0m3.144s ≈ 3.20 ± 4.53%
After the change:
Ryzen 9 5950X:
RPS 27747.349 27407.726 27520.210 ≈ 27558 ± 0.63% ↑ 18.69%
TPS 26992.010 26993.468 27010.966 ≈ 26999 ± 0.04% ↑ 18.43%
CPU % 28.928 28.096 29.105 ≈ 28.7 ± 1.88% ↑ 18.1%
Mem MB 760.385 726.320 756.118 ≈ 748 ± 2.48% ↑ 33.10%
Core i7-8565U:
RPS 7783.229 7628.409 7542.340 ≈ 7651 ± 1.60% ↑ 16.72%
TPS 7708.436 7607.397 7489.459 ≈ 7602 ± 1.44% ↑ 16.85%
CPU % 74.899 71.020 72.697 ≈ 72.9 ± 2.67% ↑ 20.50%
Mem MB 438.047 436.967 416.350 ≈ 430 ± 2.84% ↑ 63.50%
DB restore:
real 0m20.838s 0m21.895s 0m21.794s ≈ 21.51 ± 2.71% ↓ 6.64%
user 0m39.091s 0m40.565s 0m41.493s ≈ 40.38 ± 3.00% ↑ 6.60%
sys 0m3.184s 0m2.923s 0m3.062s ≈ 3.06 ± 4.27% ↓ 4.38%
It obviously uses more memory now and utilizes CPU more aggressively, but at
the same time it improves all relevant metrics and finally reaches a
situation where we process 50K transactions in less than two seconds on the
Ryzen 9 5950X (going higher than 25K TPS). The other observation is a much
more stable block time; on the Ryzen 9 it's as close to 1 second as it can be.
Balances are to be removed from state.NEP17TransferInfo, so the remaining
fields are NextTransferBatch, NewBatch and a map of LastUpdatedBlocks;
these fields are transfer-tracking bookkeeping rather than balance data.
Also rename dao.[Get, Put, put]NEP17Balances and the STNEP17Balances
prefix.
Also rename NEP17TransferInfo.Trackers to LastUpdatedBlockTrackers,
because NEP17TransferInfo.Balances are to be removed.
Everything was wrong here: the wrong file was used, the cleanup procedure was
wrong, and the net result was this (plus some occasionally failing tests):
$ ls -l /tmp/test_bolt_db* | wc -l
30939
Related to #1468, ported from #1475.
We should copy the key to avoid byte slice reuse. Otherwise there's a
chance that by the end of dao.Store.Seek(...) execution some keys won't
be the same as the original keys seen inside the saveToMap function,
because storage.Seek only guarantees that the provided key and value are
valid until the next `f` call.
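I.e. inside the callback anything that outlives the call has to be copied first (a sketch; saveToMap stands for whatever consumer retains the pair):

    store.Seek(prefix, func(k, v []byte) {
        // k (and v) are only valid until the next invocation of
        // this callback, so copy before retaining.
        key := make([]byte, len(k))
        copy(key, k)
        saveToMap(key, v)
    })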
We're constantly checking for transactions there, and most of the time this
check is not successful (meaning that the transaction in question is
new). A Bloom filter easily removes the need to search the DB in 99% of
these cases and gives some 13% increase in single-node TPS.
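The pattern is a plain negative cache (a sketch; txFilter is a hypothetical in-memory Bloom filter field with a Test method, not the actual implementation):

    // HasTransaction asks the filter first: "no" is definite and
    // skips the DB entirely, "maybe" falls through to a real lookup.
    func (bc *Blockchain) HasTransaction(hash util.Uint256) bool {
        if !bc.txFilter.Test(hash[:]) {
            return false // definitely not stored
        }
        return bc.dao.HasTransaction(hash) // rare false positives end up here
    }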
MPT is a trie with a branching factor of 16, i.e. its paths are sequences
over a 16-element alphabet.
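So keys are converted into half-byte nibbles before trie traversal, roughly like this:

    // toNibbles splits each key byte into two 4-bit values, the
    // 16-element alphabet MPT paths are made of.
    func toNibbles(key []byte) []byte {
        res := make([]byte, len(key)*2)
        for i, b := range key {
            res[i*2] = b >> 4
            res[i*2+1] = b & 0x0F
        }
        return res
    }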
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
The notion of NativeContractState shouldn't ever have existed: a native
contract is a contract, and its state is saved as regular contract state,
which is critical because we'll have MPT calculations over this state soon.
Initial minting should be done in Neo.Native.Deploy, because it generates a
notification that should have a proper transaction context.
RegisterNative() shouldn't exist as a public method; native contracts are only
registered at block 0 and they can do it internally, no outside user should be
able to mess with it.
Move some structures from the `native` package to `interop`, also to avoid
circular references, as interop.Context has to have a list of native contracts
(exposing them via Blockchainer is again too dangerous, it's too powerful a
tool).
Most of the time it's persisted into a MemoryStore or MemCachedStore; when
that's the case, there is no real need to go through the Batch mechanism, as
it incurs multiple copies of the data.
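I.e. persist can special-case a memory-backed lower layer (a sketch with locking omitted; the lower-case put/drop helpers are illustrative internal methods, not the actual API):

    func (s *MemCachedStore) persist() error {
        if ms, ok := s.ps.(*MemoryStore); ok {
            // Write directly, no intermediate Batch copies.
            for k, v := range s.mem {
                ms.put(k, v)
            }
            for k := range s.del {
                ms.drop(k)
            }
            return nil
        }
        b := s.ps.Batch()
        // ... fill the batch from s.mem/s.del for on-disk stores ...
        return s.ps.PutBatch(b)
    }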
Importing 1.5M mainnet blocks with verification turned off, before:
real 12m39,484s
user 20m48,300s
sys 2m25,022s
After:
real 11m15,053s
user 18m2,755s
sys 2m4,162s
So it's around a 10% improvement, which looks good enough.
An error in Seek means something is terribly wrong (e.g. the DB was not
opened), and dropping the error is not the right thing to do, because the
caller will continue working with a wrong view.
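For the LevelDB backend that means surfacing the iterator error instead of ignoring it, along these lines (a sketch using the goleveldb API; panicking here is one option, returning the error to the caller works too):

    iter := db.NewIterator(util.BytesPrefix(key), nil)
    for iter.Next() {
        f(iter.Key(), iter.Value())
    }
    iter.Release()
    if err := iter.Error(); err != nil {
        panic(err) // a failing iterator means a broken DB; don't hide it
    }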
* add dao, which takes care of all CRUD operations on storage
* remove blockchain state, since everything is stored on change
* remove storage operations from structs (entities)
* move structs to the entities package
It's used a lot and looks a lot like MemoryStore; it just must not return
errors from Put and Delete, so make it use MemoryStore internally with an
adjusted interface.
Make it look more like a real transaction: put/delete things under a single
lock. Also make a copy of the value in Put, just for safety purposes; no one
knows how the value slice might be used after the Put.
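The resulting shape, simplified (a sketch close to, but not necessarily identical with, the actual code):

    // MemoryBatch can't fail, so it wraps MemoryStore with an
    // error-free interface and copies values defensively.
    type MemoryBatch struct {
        MemoryStore
    }

    func (b *MemoryBatch) Put(k, v []byte) {
        vcopy := make([]byte, len(v))
        copy(vcopy, v) // the caller may reuse v right after Put
        _ = b.MemoryStore.Put(k, vcopy)
    }

    func (b *MemoryBatch) Delete(k []byte) {
        _ = b.MemoryStore.Delete(k)
    }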
Using pointers is just plain wrong here, because the batch can be updated with
newer values for the same keys.
Also fixes Seek() to use HasPrefix, because that is the intended behavior.
BoltDB doesn't have an internal batching mechanism, so we have a substitute
for it, but this substitute is absolutely identical to MemoryBatch, so it's
better to unify them and import the ac5d2f94d3 fix into MemoryBatch.
Commit 578ac414d4 was wrong in that it saved
only a part of the block, so depending on how you use the blockchain, you
might still see that the block was not really processed properly. To really
fix it, this commit introduces an intermediate storage layer in the form of
memStore, which actually is a MemoryStore that supports the full Store API
(thus easily fitting into the existing code) plus one extension that allows it
to flush its data to some other Store.
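The extension is a single method, roughly (a sketch; a minimal Store with Batch()/PutBatch() is assumed):

    // Persist flushes everything accumulated in the MemoryStore
    // to ps in one batch and empties the in-memory map.
    func (s *MemoryStore) Persist(ps Store) (int, error) {
        s.mut.Lock()
        defer s.mut.Unlock()
        batch := ps.Batch()
        for k, v := range s.mem {
            batch.Put([]byte(k), v)
        }
        if err := ps.PutBatch(batch); err != nil {
            return 0, err
        }
        n := len(s.mem)
        s.mem = make(map[string][]byte)
        return n, nil
    }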
It also changes AddBlock() semantics in that it now only accepts successive
blocks, but when it does, it guarantees that they're properly added into the
Blockchain and can be referred to in any way. Pending block queuing is now
moved into the server (see 8c0c055ac657813fe3ed10257bce199e9527d5ed).
So the only thing persist() does now is move data from memStore to Store,
which probably should've always been the case (notice also that previously
headers and some other metadata were written into the Store bypassing the
caching/batching mechanism, leading to some inefficiency).
It must copy both the value and the key, because they can be reused for other
purposes between Put() and PutBatch(). This actually happens with values in
header processing, leading to wrong data being written into the DB.
Extend the batch test to check for that.
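The added test is along these lines (a sketch assuming neo-go's storage package and testify's require, not the verbatim test):

    func TestBatchCopiesKeyAndValue(t *testing.T) {
        s := storage.NewMemoryStore()
        b := s.Batch()
        k := []byte("key")
        v := []byte("value")
        b.Put(k, v)
        k[0], v[0] = 'X', 'X' // caller reuses both slices after Put
        require.NoError(t, s.PutBatch(b))
        res, err := s.Get([]byte("key"))
        require.NoError(t, err)
        require.Equal(t, []byte("value"), res) // unaffected by the reuse
    }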
The logic here is that we'll have all binary encoding/decoding done via our io
package, which simplifies error handling. This functionality doesn't belong in
util, so it's moved.
This also extends BufBinWriter with a Reset() method to fit the needs of the
core package.
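Reset itself is trivial, roughly (assuming an embedded error field and buffer named as here):

    // Reset allows reusing the writer after an error or after the
    // collected data has been taken out of the buffer.
    func (bw *BufBinWriter) Reset() {
        bw.Err = nil
        bw.buf.Reset()
    }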
* add a Close function to the storage interface
* add a common deferred call that closes the DB connection
* remove the context as soon as it's no longer needed
* update unit tests
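The resulting interface and usage look roughly like this (a sketch; the exact method set and constructor may differ):

    type Store interface {
        Put(k, v []byte) error
        Get(k []byte) ([]byte, error)
        Delete(k []byte) error
        Close() error // releases DB handles, file locks and so on
    }

    // typical usage:
    //   store, err := storage.NewLevelDBStore(cfg) // illustrative constructor
    //   if err != nil { /* handle */ }
    //   defer store.Close()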
In the unlikely event of an overlapping hash block being written to the DB we
might end up with a wrong hash list. That happened to me for some reason when
syncing with the testnet, leading to the following keys with their respective
values:
150000 -> 2000 hashes
152000 -> 2000 hashes
153999 -> 2000 hashes
Reading this back, hashes number 153999 and 154000 got the same values and the
chain couldn't sync correctly.
* Made Encode/Decode message public.
* Added Redis storage driver and made some optimizations for initialising the blockchain.
* Removed log lines in tcp_peer.
* Added missing comments on exported methods.
* Bumped version.