We need to compact our in-memory MPT from time to time, otherwise it quickly
fills up all available memory. This raises two obvious questions: when to do
it and to what level.
As for 'when', I think it's quite easy to use our regular persistence interval
as an anchor (and it also frees up some memory), but we can't do that in the
persistence routine itself because of synchronization issues (and adding
synchronization primitives would add cost that I'd also like to avoid), so we
do it indirectly by comparing persisted and current height in `storeBlock`.
Choosing the proper level is another problem, but if we roughly estimate one
full branch node to use 1K of memory (usually it's way less than that), then
we can easily keep 1K of these nodes in memory, and that gives us a depth of
10 for our trie.
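A minimal sketch of how such a trigger could look in `storeBlock`, assuming a
`Collapse` method on the trie and the field/constant names below (all of them
illustrative, not the actual code):

```go
// Stand-ins for the real types; names here are illustrative only.
type Trie struct{}

// Collapse would prune everything below the given depth (see the pruning
// commit below for what that means).
func (t *Trie) Collapse(depth int) {}

type Blockchain struct {
	mpt                *Trie
	persistedHeight    uint32 // updated by the persistence routine
	lastCollapseHeight uint32 // height at which we last collapsed the trie
}

// Depth estimate from above: roughly 1K branch nodes at ~1K each.
const mptCollapseDepth = 10

// maybeCollapseMPT is a hypothetical hook called from storeBlock: instead of
// collapsing inside the persistence routine (synchronization issues), compare
// persisted and current heights and collapse here.
func (bc *Blockchain) maybeCollapseMPT(currentHeight uint32) {
	if bc.persistedHeight >= bc.lastCollapseHeight {
		bc.mpt.Collapse(mptCollapseDepth)
		bc.lastCollapseHeight = currentHeight
	}
}
```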
Items were serialized several times if there were several successful
transactions in a block; prevent that by using the State field as a bitfield
(as it was almost intended to be) and adding one more bit. This also
eliminates useless duplicate MPT traversals.
Confirmed not to break storage changes up to 3.3M on testnet.
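Roughly, the bitfield idea looks like this (a sketch; flag names and the exact
set of bits are assumptions, not the actual constants):

```go
// ItemState is used as a bitfield rather than a plain enum.
type ItemState byte

const (
	stAdded ItemState = 1 << iota
	stChanged
	stDeleted
	stSerialized // the extra bit: item already went into the MPT in this block
)

// markSerialized is a hypothetical helper: once an item has been serialized
// into the trie, later successful transactions in the same block skip it.
func markSerialized(s *ItemState) { *s |= stSerialized }

// needsSerialization tells whether the item still has to be traversed and
// serialized into the MPT.
func needsSerialization(s ItemState) bool { return s&stSerialized == 0 }
```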
This differed from the C# notion of PrevHash. It's not the previous root, but
rather the hash of the previous serialized MPTRoot structure (the one that is
to be signed by CNs).
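For illustration only (the field layout and hash choice are assumptions, not
the exact on-chain format), the point is that PrevHash covers the whole
previous serialized MPTRoot structure, not just its Root:

```go
package example

import (
	"bytes"
	"crypto/sha256"
	"encoding/binary"
)

// MPTRoot is a stand-in for the real structure that CNs sign.
type MPTRoot struct {
	Version  byte
	Index    uint32
	PrevHash [32]byte // hash of the previous serialized MPTRoot, not the previous state root
	Root     [32]byte
}

func (r *MPTRoot) serialize() []byte {
	var buf bytes.Buffer
	buf.WriteByte(r.Version)
	_ = binary.Write(&buf, binary.LittleEndian, r.Index)
	buf.Write(r.PrevHash[:])
	buf.Write(r.Root[:])
	return buf.Bytes()
}

// prevHashFor computes the PrevHash to embed into the next MPTRoot.
func prevHashFor(prev *MPTRoot) [32]byte {
	return sha256.Sum256(prev.serialize())
}
```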
Implement secp256k1 and secp256r1 recover interops, closes #1003.
Note:
We have to implement Koblitz-related math to recover keys properly
with the Neo.Cryptography.Secp256k1Recover interop, since the standard
Go elliptic package supports only short-form Weierstrass curves with
a=-3 (see https://github.com/golang/go/issues/26776 for details).
However, it's not the best choice to have a lot of such math in our
project, so it would be better to use a ready-made solution for
Koblitz-related cryptography.
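The recovery step itself is the textbook Q = r^-1 * (s*R - z*G); a sketch for
the r1 curve (where the standard library is usable since a=-3) could look like
the one below. It is illustrative only, not the actual interop code, and it
skips the r+n candidate and other edge cases; for k1 the same steps need the
hand-written Koblitz math mentioned above because a=0 there.

```go
package example

import (
	"crypto/elliptic"
	"errors"
	"math/big"
)

// recoverP256 recovers a candidate public key Q = r^-1 * (s*R - z*G) from an
// ECDSA signature (r, s) over message hash z on secp256r1. yOdd selects which
// of the two possible R points to use.
func recoverP256(r, s, z *big.Int, yOdd bool) (*big.Int, *big.Int, error) {
	curve := elliptic.P256()
	p, n, b := curve.Params().P, curve.Params().N, curve.Params().B

	// Reconstruct R from its x coordinate: y^2 = x^3 - 3x + b (mod p).
	y2 := new(big.Int).Exp(r, big.NewInt(3), p)
	y2.Sub(y2, new(big.Int).Mul(big.NewInt(3), r))
	y2.Add(y2, b)
	y2.Mod(y2, p)
	ry := new(big.Int).ModSqrt(y2, p)
	if ry == nil {
		return nil, nil, errors.New("invalid r: no curve point with this x")
	}
	if (ry.Bit(0) == 1) != yOdd {
		ry.Sub(p, ry)
	}

	// Q = r^-1 * (s*R - z*G), with -zG being (x, p-y) of zG.
	rInv := new(big.Int).ModInverse(r, n)
	sRx, sRy := curve.ScalarMult(r, ry, s.Bytes())
	zGx, zGy := curve.ScalarBaseMult(z.Bytes())
	dx, dy := curve.Add(sRx, sRy, zGx, new(big.Int).Sub(p, zGy))
	qx, qy := curve.ScalarMult(dx, dy, rInv.Bytes())
	return qx, qy, nil
}
```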
Because the trie is rather big, it can't be stored in memory, so some form of
caching should also be implemented. To avoid marshaling/unmarshaling of items
which are close to the root and are used very frequently, we can keep them
across persists.
This commit implements pruning items at the specified depth,
replacing them with hash nodes.
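A rough sketch of what that pruning might look like, assuming a node interface
with a Hash method (types and hashing below are stand-ins, not the actual MPT
node encoding):

```go
package example

import "crypto/sha256"

// Node is a stand-in for the real MPT node interface.
type Node interface{ Hash() [32]byte }

// HashNode only remembers the hash of the subtree it replaced.
type HashNode struct{ hash [32]byte }

func (h *HashNode) Hash() [32]byte { return h.hash }

// BranchNode is a simplified branch with 17 children (16 nibbles + value).
type BranchNode struct{ Children [17]Node }

// Hash is placeholder hashing over child hashes, not the real serialization.
func (b *BranchNode) Hash() [32]byte {
	var buf []byte
	for _, c := range b.Children {
		if c != nil {
			h := c.Hash()
			buf = append(buf, h[:]...)
		}
	}
	return sha256.Sum256(buf)
}

// collapse keeps the top `depth` levels in memory and replaces everything
// below with hash nodes, which is the pruning described above.
func collapse(n Node, depth int) Node {
	if n == nil {
		return nil
	}
	if depth == 0 {
		if _, ok := n.(*HashNode); ok {
			return n
		}
		return &HashNode{hash: n.Hash()}
	}
	if b, ok := n.(*BranchNode); ok {
		for i, c := range b.Children {
			b.Children[i] = collapse(c, depth-1)
		}
	}
	return n
}
```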
There is nothing wrong with iterators being implemented in other parts
of code (e.g. Storage.Find). In this case type assertions can
prevent bugs at compile time.
Reproduce the behavior of the reference implementation:
- if an item was Put into the cache after it was encountered during
Storage.Find, it must appear twice
- checking whether an item is in the cache must be performed in real time
during `Iterator.Next()` (see the sketch below)
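A sketch of the second point, with a hypothetical iterator that consults the
block's write cache on every Next() call (names are illustrative, not the
actual interop types):

```go
// kv is one result item produced by a hypothetical Storage.Find.
type kv struct{ Key, Value []byte }

// findIterator merges items found in the underlying storage with the current
// block's write cache.
type findIterator struct {
	results []kv              // items from the underlying storage
	cache   map[string][]byte // items Put during contract execution
	index   int
	cur     kv
}

// Next checks the cache at call time rather than at Find time, so a value Put
// into the cache after Find started is still observed.
func (it *findIterator) Next() bool {
	if it.index >= len(it.results) {
		return false
	}
	it.cur = it.results[it.index]
	if v, ok := it.cache[string(it.cur.Key)]; ok {
		it.cur.Value = v
	}
	it.index++
	return true
}

// Value returns the current item.
func (it *findIterator) Value() kv { return it.cur }
```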
The bafdb916a0 change was wrong (probably brought over
from neo-vm 3.0 in the state it existed in back then); neo-vm 2.x
doesn't allow PICKITEM for arbitrary types.
The order in which storage.Find items are returned depends on what items
were processed in previous transactions of the same block.
The easiest way to implement this sort of caching is to cache operations
with storage, flushing them only in `Persist()`.
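A minimal sketch of that approach (the real dao/storage layer is more
involved; names here are illustrative): operations go into a per-block cache
and touch the backing store only in Persist().

```go
// op is one pending storage operation: either a put or a delete.
type op struct {
	value   []byte
	deleted bool
}

// cachedStore buffers storage operations and flushes them only on Persist.
type cachedStore struct {
	backend map[string][]byte // stand-in for the real DB
	ops     map[string]op     // operations accumulated within the block
}

func (s *cachedStore) Put(key, value []byte) { s.ops[string(key)] = op{value: value} }

func (s *cachedStore) Delete(key []byte) { s.ops[string(key)] = op{deleted: true} }

// Get consults pending operations first, then the backend.
func (s *cachedStore) Get(key []byte) ([]byte, bool) {
	if o, ok := s.ops[string(key)]; ok {
		if o.deleted {
			return nil, false
		}
		return o.value, true
	}
	v, ok := s.backend[string(key)]
	return v, ok
}

// Persist applies the accumulated operations to the backend and clears the cache.
func (s *cachedStore) Persist() {
	for k, o := range s.ops {
		if o.deleted {
			delete(s.backend, k)
		} else {
			s.backend[k] = o.value
		}
	}
	s.ops = make(map[string]op)
}
```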