Related to #1468, ported from #1475.
We should copy the key to avoid its bytes being substituted later. Otherwise
there's a chance that by the end of dao.Store.Seek(...) execution some keys
won't be the same as the original keys found inside the saveToMap function,
because storage.Seek only guarantees that the provided key and value are
valid until the next `f` call.
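As a sketch of the fix (the helper name and callback shape here are
illustrative, not the exact code):

    // collectKeys returns stable copies of all keys under the given prefix.
    // Without the explicit copy, the saved slices could be overwritten by
    // the underlying store before Seek even returns.
    func collectKeys(st storage.Store, prefix []byte) [][]byte {
        var keys [][]byte
        st.Seek(prefix, func(k, v []byte) {
            key := make([]byte, len(k))
            copy(key, k) // k is only valid until the next `f` call
            keys = append(keys, key)
        })
        return keys
    }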
We're constantly checking for transactions there, and most of the time this
check is unsuccessful (meaning that the transaction in question is new). A
Bloom filter easily removes the need to search over the DB in 99% of these
cases and gives some 13% increase in single-node TPS.
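The fast path, roughly (the filter field and the fallback helper are
assumptions for this sketch, not the actual code):

    // txFilter is some Bloom filter answering "definitely absent" /
    // "possibly present" for transaction hashes.
    func (d *dao) HasTransaction(hash util.Uint256) bool {
        if !d.txFilter.Test(hash.BytesBE()) {
            return false // guaranteed absent, no DB lookup needed
        }
        // Possible false positive, so do the real (expensive) check.
        return d.hasTransactionInStore(hash)
    }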
MPT is a trie with a branching factor of 16, i.e. it operates on key
sequences over a 16-element alphabet.
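Every key byte thus splits into two nibbles, each selecting one of 16
children; a sketch of the conversion:

    // toNibbles converts a byte key into the MPT path: each byte becomes
    // two 4-bit symbols of the 16-element alphabet.
    func toNibbles(key []byte) []byte {
        res := make([]byte, 2*len(key))
        for i, b := range key {
            res[2*i] = b >> 4     // high nibble
            res[2*i+1] = b & 0x0F // low nibble
        }
        return res
    }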
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
The notion of NativeContractState shouldn't ever have existed: a native
contract is a contract, and its state is saved as regular contract state,
which is critical because we'll have MPT calculations over this state soon.
Initial minting should be done in Neo.Native.Deploy, because it generates a
notification that should have proper transaction context.
RegisterNative() shouldn't exist as a public method: native contracts are only
registered at block 0, and they can do it internally; no outside user should be
able to mess with it.
Also move some structures from the `native` package to `interop` to avoid
circular references, as interop.Context has to have a list of native contracts
(exposing them via Blockchainer is again too dangerous; it's too powerful a
tool).
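The dependency then points one way only: `native` imports `interop`, never the
reverse. A minimal sketch of the shape (the interface and field names are
assumptions):

    // In package interop: the context carries native contracts described
    // by an interface defined here, so package native can import interop
    // without creating an import cycle.
    type NativeContract interface {
        Metadata() *ContractMD // ContractMD assumed defined in interop
    }

    type Context struct {
        // VM, DAO and other fields elided ...
        Natives []NativeContract
    }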
Most of the time it's persisted into a MemoryStore or MemCachedStore; when
that's the case there is no real need to go through the Batch mechanism, as it
incurs multiple copies of the data.
Importing 1.5M mainnet blocks with verification turned off, before:
real 12m39,484s
user 20m48,300s
sys 2m25,022s
After:
real 11m15,053s
user 18m2,755s
sys 2m4,162s
So it's around a 10% improvement, which looks good enough.
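A sketch of the special-cased path (the type assertion, the mem map field and
the persistViaBatch helper are assumptions; deletion handling is elided):

    // Persist flushes accumulated changes into ps. When ps is an
    // in-memory store, writing directly avoids building a Batch and
    // copying every key/value one extra time.
    func (s *MemCachedStore) Persist(ps storage.Store) error {
        if mem, ok := ps.(*storage.MemoryStore); ok {
            for k, v := range s.mem {
                mem.Put([]byte(k), v) // direct write, no Batch
            }
            return nil
        }
        return s.persistViaBatch(ps) // generic path for other stores
    }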
An error in Seek means something is terribly wrong (e.g. the DB was not
opened), and dropping the error is not the right thing to do, because the
caller will continue working with a wrong view.
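For example, in a BoltDB-backed Seek, a hedged sketch of surfacing the error
instead of silently dropping it (the Bucket name is an assumption):

    func (s *BoltDBStore) Seek(key []byte, f func(k, v []byte)) {
        err := s.db.View(func(tx *bbolt.Tx) error {
            c := tx.Bucket(Bucket).Cursor()
            for k, v := c.Seek(key); k != nil && bytes.HasPrefix(k, key); k, v = c.Next() {
                f(k, v)
            }
            return nil
        })
        if err != nil {
            panic(err) // no way to return a partial view, fail loudly
        }
    }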
add dao which takes care of all CRUD operations on storage
remove blockchain state since everything is stored on change
remove storage operations from structs (entities)
move structs to the entities package
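The resulting shape, roughly (the prefix constant and method names are
illustrative only):

    // dao owns the Store and is the only place that knows how entities
    // are keyed and serialized; entities themselves stay plain data.
    type dao struct {
        store storage.Store
    }

    const contractPrefix byte = 0x50 // hypothetical prefix byte

    func makeKey(prefix byte, hash []byte) []byte {
        return append([]byte{prefix}, hash...)
    }

    func (d *dao) GetContractState(hash []byte) ([]byte, error) {
        return d.store.Get(makeKey(contractPrefix, hash))
    }

    func (d *dao) PutContractState(hash, value []byte) error {
        return d.store.Put(makeKey(contractPrefix, hash), value)
    }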
It's used a lot and looks a lot like MemoryStore; it just doesn't need to
return errors from Put and Delete, so make it use MemoryStore internally with
an adjusted interface.
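A sketch of the adjusted interface (all names here are assumptions):

    // Cache (hypothetical name) wraps MemoryStore; in-memory Put/Delete
    // can't fail, so the wrapper exposes them without an error result.
    type Cache struct {
        mem *MemoryStore
    }

    func (c *Cache) Put(key, value []byte) {
        _ = c.mem.Put(key, value) // never fails in-memory
    }

    func (c *Cache) Delete(key []byte) {
        _ = c.mem.Delete(key) // same
    }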
Make it look more like a real transaction: put/delete things under a single
lock. Also make a copy of the value in Put, just for safety; no one knows how
this value slice may be used after the Put.
Using pointers is just plain wrong here, because the batch can be updated with
newer values for the same keys. Also fixes Seek() to use HasPrefix, because
that is the intended behavior.
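Put together, a sketch of the map-based batch (assuming MemoryStore keeps a
sync.RWMutex-guarded map[string][]byte; nothing here is the exact code):

    // MemoryBatch keeps the latest value per key; a map (instead of
    // slices of key/value pointers) makes "newer value wins" automatic.
    type MemoryBatch struct {
        m map[string][]byte // nil value marks a deletion
    }

    func (b *MemoryBatch) Put(k, v []byte) {
        val := make([]byte, len(v))
        copy(val, v) // the caller may reuse v afterwards
        b.m[string(k)] = val // string conversion copies the key
    }

    func (b *MemoryBatch) Delete(k []byte) {
        b.m[string(k)] = nil
    }

    // PutBatch applies the whole batch under one lock, transaction-style.
    func (s *MemoryStore) PutBatch(b *MemoryBatch) error {
        s.mut.Lock()
        defer s.mut.Unlock()
        for k, v := range b.m {
            if v == nil {
                delete(s.mem, k)
            } else {
                s.mem[k] = v
            }
        }
        return nil
    }

    // Seek visits all pairs whose key starts with prefix, as intended.
    func (s *MemoryStore) Seek(prefix []byte, f func(k, v []byte)) {
        s.mut.RLock()
        defer s.mut.RUnlock()
        for k, v := range s.mem {
            if bytes.HasPrefix([]byte(k), prefix) {
                f([]byte(k), v)
            }
        }
    }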
BoltDB doesn't have an internal batching mechanism, so we have a substitute for
it, but this substitute is absolutely identical to MemoryBatch. It's better to
unify them and import the ac5d2f94d3 fix into MemoryBatch.
Commit 578ac414d4 was wrong in that it saved only a part of the block, so
depending on how you use the blockchain, you might still see that the block was
not really processed properly. To really fix it, this commit introduces an
intermediate storage layer in the form of memStore, which is actually a
MemoryStore that supports the full Store API (thus easily fitting into the
existing code) plus one extension that allows it to flush its data to some
other Store.
It also changes AddBlock() semantics in that it now only accepts successive
blocks, but when it does, it guarantees that they're properly added into the
Blockchain and can be referred to in any way. Pending block queueing is now
moved into the server (see 8c0c055ac657813fe3ed10257bce199e9527d5ed).
So the only thing persist() does now is move data from memStore to Store, which
probably should've always been the case (note also that previously headers and
some other metadata were written into the Store bypassing the caching/batching
mechanism, leading to some inefficiency).
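A sketch of that flushing extension (assuming the mutex-guarded map from
above; the signature is illustrative):

    // Persist moves everything accumulated in the memStore into ps and
    // resets the in-memory state; until then, ps never sees partially
    // processed blocks.
    func (s *MemoryStore) Persist(ps Store) (int, error) {
        s.mut.Lock()
        defer s.mut.Unlock()
        batch := ps.Batch()
        for k, v := range s.mem {
            batch.Put([]byte(k), v)
        }
        if err := ps.PutBatch(batch); err != nil {
            return 0, err
        }
        n := len(s.mem)
        s.mem = make(map[string][]byte)
        return n, nil
    }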
It must copy both the value and the key, because they can be reused for other
purposes between Put() and PutBatch(). This actually happens with values in
header processing, leading to wrong data being written into the DB.
Extend the batch test to check for that.
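A sketch of such a regression test against the map-based batch above (nothing
here is the exact test code):

    func TestBatchCopiesKeyAndValue(t *testing.T) {
        b := &MemoryBatch{m: make(map[string][]byte)}
        k := []byte{1, 2, 3}
        v := []byte{4, 5, 6}
        b.Put(k, v)
        // Callers are allowed to reuse their slices between Put()
        // and PutBatch(); the batch must not see these changes.
        k[0], v[0] = 0xFF, 0xFF
        got, ok := b.m["\x01\x02\x03"]
        if !ok || !bytes.Equal(got, []byte{4, 5, 6}) {
            t.Fatal("batch did not copy the key/value on Put")
        }
    }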