There is nothing wrong with iterators being implemented in other parts
of code (e.g. Storage.Find). In this case type assertions can
prevent bugs at compile-time.
Reproduce the behavior of the reference implementation (a sketch follows the
list):
- if an item was Put into the cache after it was encountered during
Storage.Find, it must appear twice
- checking whether an item is in the cache must be performed in real time
during `Iterator.Next()`
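A minimal sketch of that behavior with made-up types (cacheIterator, its
found/cache fields and the plain string keys are all hypothetical, not the
actual interop code):

```go
package main

import "fmt"

// cacheIterator walks the keys already returned by the underlying
// Storage.Find and then the keys present in the cache, re-checking the cache
// on every Next() so that writes made during iteration are visible.
type cacheIterator struct {
	found []string          // keys from the underlying Storage.Find
	cache map[string]string // live cache shared with Put
	index int
	extra []string // cached keys already emitted in the second pass
}

func (it *cacheIterator) Next() (string, bool) {
	// First pass: storage results.
	if it.index < len(it.found) {
		k := it.found[it.index]
		it.index++
		return k, true
	}
	// Second pass: the cache is inspected at call time, not at Find time,
	// so an item Put after it was already seen shows up a second time.
	for k := range it.cache {
		if !contains(it.extra, k) {
			it.extra = append(it.extra, k)
			return k, true
		}
	}
	return "", false
}

func contains(s []string, v string) bool {
	for _, e := range s {
		if e == v {
			return true
		}
	}
	return false
}

func main() {
	cache := map[string]string{}
	it := &cacheIterator{found: []string{"a", "b"}, cache: cache}

	fmt.Println(it.Next()) // "a"
	cache["a"] = "updated" // Put after "a" was already encountered
	fmt.Println(it.Next()) // "b"
	fmt.Println(it.Next()) // "a" again, taken from the cache this time
	fmt.Println(it.Next()) // done
}
```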
The order in which storage.Find items are returned depends on what items
were processed in previous transactions of the same block.
The easiest way to implement this sort of caching is to cache operations
with storage, flushing them only in `Persist()`.
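A rough sketch of that approach, assuming a hypothetical cachedStore over a
plain map (the real code works with the dao/storage layers): Get consults the
pending operations first and only `Persist()` touches the underlying store.

```go
package main

import "fmt"

// store is whatever backs the cache; a plain map stands in for it here.
type store map[string]string

// cachedStore records Put/Delete operations locally and only flushes them
// to the underlying store in Persist().
type cachedStore struct {
	base    store
	pending map[string]*string // nil value marks a deletion
}

func newCachedStore(base store) *cachedStore {
	return &cachedStore{base: base, pending: map[string]*string{}}
}

func (c *cachedStore) Put(k, v string) { c.pending[k] = &v }
func (c *cachedStore) Delete(k string) { c.pending[k] = nil }

func (c *cachedStore) Get(k string) (string, bool) {
	if v, ok := c.pending[k]; ok {
		if v == nil {
			return "", false // deleted in this cache
		}
		return *v, true
	}
	v, ok := c.base[k]
	return v, ok
}

// Persist flushes all cached operations to the base store at once.
func (c *cachedStore) Persist() {
	for k, v := range c.pending {
		if v == nil {
			delete(c.base, k)
		} else {
			c.base[k] = *v
		}
	}
	c.pending = map[string]*string{}
}

func main() {
	base := store{"x": "1"}
	c := newCachedStore(base)
	c.Put("y", "2")
	c.Delete("x")
	fmt.Println(base) // unchanged: map[x:1]
	c.Persist()
	fmt.Println(base) // map[y:2]
}
```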
This syscall should only work for contracts created by the current transaction,
and that is what is supposed to be checked here. Do so by looking at the
differences between ic.dao and the original lower DAO.
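Roughly, the check could look like the sketch below; the dao type, the
GetContractState signature and the string hashes are simplified stand-ins,
not the real API:

```go
package main

import (
	"errors"
	"fmt"
)

// Contract and dao are stand-ins for the real types; the point is only the
// shape of the check.
type Contract struct{ Hash string }

type dao map[string]*Contract

var errNotFound = errors.New("not found")

func (d dao) GetContractState(hash string) (*Contract, error) {
	if c, ok := d[hash]; ok {
		return c, nil
	}
	return nil, errNotFound
}

// createdByCurrentTx reports whether a contract exists in the upper
// (transaction-level) DAO but not in the lower one it wraps, i.e. whether it
// was created by the current transaction.
func createdByCurrentTx(upper, lower dao, hash string) bool {
	if _, err := upper.GetContractState(hash); err != nil {
		return false
	}
	_, err := lower.GetContractState(hash)
	return err != nil
}

func main() {
	lower := dao{"old": {Hash: "old"}}
	upper := dao{"old": {Hash: "old"}, "new": {Hash: "new"}}
	fmt.Println(createdByCurrentTx(upper, lower, "new")) // true
	fmt.Println(createdByCurrentTx(upper, lower, "old")) // false
}
```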
Our block.Block was JSONized in a slightly different fashion than result.Block
in its Nonce and NextConsensus fields. That's not good for notifications
because third-party clients would probably expect to see the same format.
Also, using a completely different Block representation in result probably
makes our client a bit weaker, as this representation is harder to use with
other neo-go components.
So use the same approach we took for Transactions and wrap block.Base, which
is then serialized in the proper way.
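One way such wrapping can work with encoding/json (not necessarily exactly how
result.Block does it; the base and resultBlock types and the field formats are
illustrative assumptions): embed the base and shadow the fields that need a
different JSON form, since shallower fields win during marshaling.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// base mimics block.Base: raw integer Nonce and a binary NextConsensus.
type base struct {
	Index         uint32 `json:"index"`
	Nonce         uint64 `json:"nonce"`
	NextConsensus []byte `json:"nextconsensus"`
}

// resultBlock wraps the base and re-declares the fields whose JSON form
// should differ: the outer (shallower) fields win during marshaling, so the
// wrapped base serializes with a string Nonce and a string NextConsensus.
type resultBlock struct {
	base
	Nonce         string `json:"nonce"`
	NextConsensus string `json:"nextconsensus"`
}

func wrap(b base) resultBlock {
	return resultBlock{
		base:          b,
		Nonce:         fmt.Sprintf("%016x", b.Nonce),
		NextConsensus: fmt.Sprintf("%x", b.NextConsensus), // address encoding elided
	}
}

func main() {
	b := base{Index: 42, Nonce: 7, NextConsensus: []byte{0xde, 0xad}}
	data, _ := json.Marshal(wrap(b))
	fmt.Println(string(data))
}
```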
Getting the batch, updating Prometheus metrics and pushing events don't
require any locking: the batch is a local cache batch that no one outside
cares about, Prometheus metrics are not critical to be in perfect sync and
events are asynchronous anyway.
This makes iterating over the map stable, which is important for serialization
and even fixes occasional test failures. We use the same ordering here as
NEO 3.0 uses, but it should also be fine for NEO 2.0 because it has no
defined order.
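The general technique, shown with plain string keys for brevity (the real code
orders the actual map key type), is to sort the keys and range over that
slice:

```go
package main

import (
	"fmt"
	"sort"
)

func main() {
	m := map[string]int{"b": 2, "a": 1, "c": 3}

	// Collect and sort the keys, then iterate in that order: the result is
	// the same on every run, which keeps serialization deterministic.
	keys := make([]string, 0, len(m))
	for k := range m {
		keys = append(keys, k)
	}
	sort.Strings(keys)

	for _, k := range keys {
		fmt.Println(k, m[k])
	}
}
```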
Most of the time it's persisted into the MemoryStore or MemCachedStore; when
that's the case there is no real need to go through the Batch mechanism as it
incurs multiple copies of the data (a sketch follows the numbers below).
Importing 1.5M mainnet blocks with verification turned off, before:
real 12m39,484s
user 20m48,300s
sys 2m25,022s
After:
real 11m15,053s
user 18m2,755s
sys 2m4,162s
So it's around a 10% improvement, which looks good enough.
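A sketch of the fast path mentioned above, with made-up memStore/batchStore
types standing in for the real storage interfaces: when the target supports a
plain Put, pending items are written directly instead of being copied into a
Batch first.

```go
package main

import "fmt"

// memStore stands in for MemoryStore/MemCachedStore; batchStore for anything
// that only exposes a batch API. Both are hypothetical simplifications.
type memStore struct{ m map[string][]byte }

func (s *memStore) Put(k string, v []byte) { s.m[k] = v }

type batch map[string][]byte

type batchStore interface {
	PutBatch(b batch) error
}

// persist writes pending items directly when the target supports plain Put,
// avoiding the extra copy a Batch would make; otherwise it falls back to
// the batch path.
func persist(pending map[string][]byte, target interface{}) error {
	if ms, ok := target.(*memStore); ok {
		for k, v := range pending {
			ms.Put(k, v)
		}
		return nil
	}
	if bs, ok := target.(batchStore); ok {
		b := make(batch, len(pending))
		for k, v := range pending {
			b[k] = v // this copy is exactly what the fast path avoids
		}
		return bs.PutBatch(b)
	}
	return fmt.Errorf("unsupported store type %T", target)
}

func main() {
	ms := &memStore{m: map[string][]byte{}}
	_ = persist(map[string][]byte{"key": []byte("value")}, ms)
	fmt.Println(ms.m)
}
```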
Frequently one needs to check whether a struct serializes/deserializes
properly. This commit implements helpers for such cases (a sketch follows
below), including:
1. JSON
2. io.Serializable interface
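A minimal sketch of the JSON helper (the marshalUnmarshalJSON name and the
point type are illustrative; the io.Serializable helper follows the same
round-trip-and-compare pattern with EncodeBinary/DecodeBinary instead):

```go
package main

import (
	"encoding/json"
	"fmt"
	"reflect"
)

// marshalUnmarshalJSON round-trips a value through JSON and checks that the
// result equals the original.
func marshalUnmarshalJSON(expected, actual interface{}) error {
	data, err := json.Marshal(expected)
	if err != nil {
		return err
	}
	if err := json.Unmarshal(data, actual); err != nil {
		return err
	}
	if !reflect.DeepEqual(expected, actual) {
		return fmt.Errorf("round-trip mismatch: %v != %v", expected, actual)
	}
	return nil
}

type point struct {
	X int `json:"x"`
	Y int `json:"y"`
}

func main() {
	p := &point{X: 1, Y: 2}
	fmt.Println(marshalUnmarshalJSON(p, new(point))) // <nil>
}
```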
When serializing multiple accounts, the cost of buffer growth
can become significant. This commit tries to amortize it by
reusing the same buffer in a single `Persist()` call.
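The idea, sketched with a plain bytes.Buffer and a made-up account type (the
real code uses the package's own binary writer): one buffer is allocated per
`Persist()` pass and Reset() between accounts, so its capacity is grown once
and then reused.

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

type account struct {
	ID      uint32
	Balance uint64
}

// serialize writes an account into the provided buffer instead of allocating
// a new one, so repeated calls reuse already-grown capacity.
func serialize(buf *bytes.Buffer, a account) []byte {
	buf.Reset() // keep the underlying array, drop the old contents
	binary.Write(buf, binary.LittleEndian, a.ID)
	binary.Write(buf, binary.LittleEndian, a.Balance)
	return buf.Bytes()
}

func main() {
	accounts := []account{{1, 100}, {2, 200}, {3, 300}}
	var buf bytes.Buffer // one buffer for the whole "Persist" pass

	for _, a := range accounts {
		data := serialize(&buf, a)
		fmt.Printf("% x\n", data)
		// data is only valid until the next serialize call; copy it if the
		// store keeps the slice around.
	}
}
```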
Fixes a difference in state changes at mainnet's block 2442790: contract
migration in b4eb2dc35226e6520ee4e09a56197dff91547b50a7f57edc82930fc18c75dffc
doesn't actually transfer the storage state, it only deletes the old one.
Also add an error check just in case.
This is an append-only log which is read only during some RPCs.
It is rather slow to get it from the base store every time we need to append
to it. This commit stores all NEP5Transfers in batches, so that
only the last batch needs to be unmarshaled during block processing.
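A sketch of the batching idea with a hypothetical transferLog type and JSON
encoding for brevity (the real code uses binary serialization and a tuned
batch size): appending only ever unmarshals and rewrites the last batch.

```go
package main

import (
	"encoding/json"
	"fmt"
)

const batchSize = 3 // hypothetical; the real size would be tuned

type transfer struct {
	Block  uint32 `json:"block"`
	Amount int64  `json:"amount"`
}

// transferLog is a stand-in for per-account transfer storage: each entry is a
// marshaled batch of up to batchSize transfers.
type transferLog [][]byte

// appendTransfer unmarshals only the last batch (or starts a new one when it
// is full), appends the transfer and writes that single batch back.
func appendTransfer(l transferLog, t transfer) (transferLog, error) {
	var batch []transfer
	if n := len(l); n > 0 {
		if err := json.Unmarshal(l[n-1], &batch); err != nil {
			return l, err
		}
		if len(batch) >= batchSize {
			batch = nil // the last batch is full, start a new one
		} else {
			l = l[:n-1] // the last batch will be rewritten below
		}
	}
	batch = append(batch, t)
	data, err := json.Marshal(batch)
	if err != nil {
		return l, err
	}
	return append(l, data), nil
}

func main() {
	var l transferLog
	for i := int64(1); i <= 5; i++ {
		l, _ = appendTransfer(l, transfer{Block: uint32(i), Amount: i * 10})
	}
	fmt.Println(len(l)) // 2 batches: 3 + 2 transfers
	for _, b := range l {
		fmt.Println(string(b))
	}
}
```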
That's how it was intended to behave originally. One questionable thing here
is the contract price (a policy thing, basically) being moved to the
smartcontract package, but it's probably fine for NEO 2.0 (as it won't change)
and we'll make something better for NEO 3.0.
1.5M block import time (VerifyBlocks disabled) on AMD Ryzen 5 1600/16GB/HDD,
before:
real 159m16.551s
user 69m58.279s
sys 7m34.334s
after:
real 139m41.836s
user 67m12.477s
sys 6m19.420s
That's 12%, even a bit more than could be expected from the input analysis
(which shows around 10% cache hits for a block-wide cache), so it's worth doing.
This change reduces pressure on the DB by doing the following things:
* not storing an additional KV pair for SpentCoin
* storing the Output right in the UnspentCoin, thus eliminating the need to
get a full transaction from the DB
At the same time it makes UnspentCoin fatter and hotter, but it should
probably be worth it.
Also drop `GetUnspentCoinStateOrNew` as it shouldn't have ever existed, UTXOs
can't come out of nowhere.
1.5M block import time (VerifyBlocks disabled) on AMD Ryzen 5 1600/16GB/HDD,
before:
real 302m9.895s
user 96m17.200s
sys 13m37.084s
after:
real 159m16.551s
user 69m58.279s
sys 7m34.334s
So it's an almost two-fold speedup, which is a great improvement.
C# uses ToArray() or UintXXX(bytes) here, which interprets hashes as they
should be interpreted (BE, although they always convert to LE when converting
to String, just for the fun of it). It leads to a state difference for us at
block 2025204 where, even though we have the same value for the key, the key
itself differs; ours:
dd2b538e2a0c1db1ae5061c15be14f916bd1e678e512ffcda6d9499d8e7fe97ee71fd6b8004583d9afe09cc4dadbd5deb63d01e061009b7cffdaa674beae0f930ebe6085af900093e5fe56b34a5c220ccdcf6efc336fc5000000000000000000000000000000000010
theirs:
dd2b538e2a0c1db1ae5061c15be14f916bd1e67861e0013db6ded5dbdac49ce0afd9834500b8d61fe77ee97f8e9d49d9a6cdff12e5009b7cffdaa674beae0f930ebe6085af900093e5fe56b34a5c220ccdcf6efc336fc5000000000000000000000000000000000010
In this key there is a tx hash encoded
(e512ffcda6d9499d8e7fe97ee71fd6b84583d9afe09cc4dadbd5deb63d01e061 in LE, the
form used by all the tools like neoscan).
I love Neo.
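For reference, the difference between the two key forms above is just byte
order; a sketch of the reversal that maps one to the other:

```go
package main

import (
	"encoding/hex"
	"fmt"
)

// reverse returns a copy of b with the byte order flipped: this is exactly
// the difference between the LE form shown by tools like neoscan and the BE
// form the C# code writes into the key.
func reverse(b []byte) []byte {
	r := make([]byte, len(b))
	for i := range b {
		r[i] = b[len(b)-1-i]
	}
	return r
}

func main() {
	le := "e512ffcda6d9499d8e7fe97ee71fd6b84583d9afe09cc4dadbd5deb63d01e061"
	b, _ := hex.DecodeString(le)
	fmt.Println(hex.EncodeToString(reverse(b)))
	// 61e0013db6ded5dbdac49ce0afd98345b8d61fe77ee97f8e9d49d9a6cdff12e5
	// (the byte-reversed form, matching the hash bytes in the C# key above)
}
```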
Both verification and invocation scripts need to
be unmarshaled from hex.
Also fix failing RPC tests: the block contains a non-pointer
`transaction.Witness` field, so the (*Witness).MarshalJSON method
is not called.
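A minimal demonstration of the gotcha with made-up types: a MarshalJSON
defined on the pointer receiver is skipped when the field value is not
addressable during marshaling.

```go
package main

import (
	"encoding/json"
	"fmt"
)

type witness struct {
	Invocation []byte
}

// MarshalJSON is defined on the pointer receiver only.
func (w *witness) MarshalJSON() ([]byte, error) {
	return []byte(fmt.Sprintf("%q", fmt.Sprintf("%x", w.Invocation))), nil
}

type block struct {
	Script witness // non-pointer field
}

func main() {
	b := block{Script: witness{Invocation: []byte{0xbe, 0xef}}}

	// Marshaling the block by value: the field is not addressable, so the
	// pointer-receiver MarshalJSON is NOT used and the raw struct leaks out.
	v, _ := json.Marshal(b)
	fmt.Println(string(v)) // {"Script":{"Invocation":"vu8="}}

	// Marshaling through a pointer makes the field addressable and the
	// custom marshaler kicks in.
	p, _ := json.Marshal(&b)
	fmt.Println(string(p)) // {"Script":"beef"}
}
```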
They have it specified right in the transaction. Unfortunately, this little
change rendered our RPC test chain invalid, but I think it became even better
after it, especially given that chain generation is a nice test by itself, so
it should be run as a regular test.