In general, NEP5 contracts are not limited to int64, and now we have a real
example, the pnWETH Flamingo token (with 18 decimals), that easily overflows
int64, so for correctness we need to store big.Int.
And as TransferLog is shared for different purposes, I've decided not to make
it variable-length on Neo 2.
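To see the scale (a standalone sketch, not the TransferLog code itself): a transfer of just ten 18-decimal tokens already produces a raw amount above math.MaxInt64, while a big.Int represents it without trouble.

```go
package main

import (
	"fmt"
	"math"
	"math/big"
)

func main() {
	// 10 tokens with 18 decimals = 10 * 10^18 raw units.
	one := new(big.Int).Exp(big.NewInt(10), big.NewInt(18), nil)
	amount := new(big.Int).Mul(big.NewInt(10), one)

	// math.MaxInt64 is about 9.22 * 10^18, so this amount doesn't fit into int64.
	fmt.Println(amount.IsInt64())                          // false
	fmt.Println(amount.Cmp(big.NewInt(math.MaxInt64)) > 0) // true
}
```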
We should not return transaction metadata from `getrawtransaction` while the
transaction is still in the mempool (not yet in a block), and a height
shouldn't be returned from `gettransactionheight` for a mempooled transaction
either.
After a contract is migrated there is no way to retrieve its state.
This commit implements some metadata for NEP5 contracts, so that
values important for displaying the transfer log aren't lost.
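Roughly, the idea is to keep something like the following around per token contract (names here are illustrative, not the exact fields added):

```go
// Sketch with hypothetical names: metadata keyed by the NEP5 contract's
// script hash, stored independently of the contract so that a later
// migration can't erase it.
type nep5Metadata struct {
	Symbol   string
	Decimals int64
}

var tokenMeta = make(map[[20]byte]nep5Metadata) // script hash -> metadata

// rememberToken records the values needed to display transfer amounts later,
// even after the originating contract has been migrated.
func rememberToken(scriptHash [20]byte, symbol string, decimals int64) {
	tokenMeta[scriptHash] = nep5Metadata{Symbol: symbol, Decimals: decimals}
}
```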
When synchronizing with a stateroot-enabled network from genesis, if
stateroot is not enabled at block zero we were failing to update the state
height, because initially it's updated with a jump from 0 to
StateRootEnableIndex; we should allow that jump to happen to keep the state
height correct.
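The accepted transitions then look roughly like this (a sketch with hypothetical names, not the actual neo-go code):

```go
import "errors"

// updateStateHeight: the usual transition is current+1, but the very first
// verified root is allowed to jump straight from 0 to StateRootEnableIndex,
// since earlier blocks have no state roots at all.
func updateStateHeight(current, next, stateRootEnableIndex uint32) (uint32, error) {
	switch {
	case next == current+1:
		return next, nil // normal case: the next root extends the verified chain
	case current == 0 && next == stateRootEnableIndex:
		return next, nil // initial jump when stateroot isn't enabled at block zero
	default:
		return current, errors.New("unexpected state height update")
	}
}
```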
We need to compact our in-memory MPT from time to time, otherwise it quickly
fills up all available memory. This raises two obvious questions: when to do
that and to what level to do it.
As for 'when', I think it's quite easy to use our regular persistence interval
as an anchor (and it also frees up some memory), but we can't do that in the
persistence routine itself because of synchronization issues (adding some
synchronization primitives would add some cost that I'd also like to avoid),
so we do it indirectly by comparing persisted and current heights in `storeBlock`.
Choosing the proper level is another problem, but if we roughly estimate one
full branch node to use 1K of memory (usually it's way less than that), then
we can easily store 1K of these nodes, and that gives us a depth of 10 for our
trie.
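Put together, the trigger could look roughly like this (hypothetical type and method names, not the exact neo-go API):

```go
// keepDepth: ~1K full branch nodes at ~1K each is cheap to keep resident.
const keepDepth = 10

type trie struct{ /* in-memory MPT nodes */ }

// Collapse drops all nodes deeper than depth, keeping only their hashes;
// they can be reloaded from the DB when needed again.
func (t *trie) Collapse(depth int) { /* ... */ }

type chain struct {
	persistedHeight uint32 // height already flushed by the persistence routine
	mpt             *trie
}

// storeBlock compacts the trie only when persistence has caught up with the
// previous block, avoiding extra synchronization with the persist routine.
func (c *chain) storeBlock(index uint32) {
	// ... block processing and storage updates ...
	if c.persistedHeight == index-1 {
		c.mpt.Collapse(keepDepth)
	}
}
```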
This differed from the C# notion of PrevHash: it's not the previous root, but
rather a hash of the previous serialized MPTRoot structure (the one that is to
be signed by CNs).
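In other words (a sketch with stand-in helpers; serialize and the concrete hash function are placeholders for whatever the node actually uses):

```go
import "crypto/sha256"

// prevHash: PrevHash covers the whole serialized previous MPTRoot structure
// (the thing CNs sign), not just its Root field. serialize() is a stand-in
// for the real serialization; SHA-256 here is just for illustration.
func prevHash(prev *MPTRoot) [32]byte {
	return sha256.Sum256(serialize(prev))
}
```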
The order in which storage.Find items are returned depends on what items
were processed in previous transactions of the same block.
The easiest way to implement this sort of caching is to cache storage
operations, flushing them only in `Persist()`.
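A minimal sketch of that layer (hypothetical types; the real cached store in neo-go is more involved):

```go
// Backend is whatever persistent store sits below the cache.
type Backend interface {
	Put(key, value []byte) error
	Get(key []byte) ([]byte, error)
}

// CachedStore keeps all writes in memory until Persist() is called.
type CachedStore struct {
	mem  map[string][]byte
	back Backend
}

func NewCachedStore(b Backend) *CachedStore {
	return &CachedStore{mem: make(map[string][]byte), back: b}
}

// Put only touches the in-memory map.
func (s *CachedStore) Put(key, value []byte) {
	s.mem[string(key)] = value
}

// Get prefers cached values over the backend.
func (s *CachedStore) Get(key []byte) ([]byte, error) {
	if v, ok := s.mem[string(key)]; ok {
		return v, nil
	}
	return s.back.Get(key)
}

// Persist flushes all cached operations to the backend and resets the cache.
func (s *CachedStore) Persist() error {
	for k, v := range s.mem {
		if err := s.back.Put([]byte(k), v); err != nil {
			return err
		}
	}
	s.mem = make(map[string][]byte)
	return nil
}
```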
Getting the batch, updating Prometheus metrics and pushing events don't
require any locking: the batch is a local cache batch that no one outside
cares about, Prometheus metrics are not critical to be in perfect sync and
events are asynchronous anyway.