The order in which storage.Find returns items depends on what items
were processed in previous transactions of the same block.
The easiest way to implement this sort of caching is to cache operations
with storage, flushing them only in `Persist()`.
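A minimal sketch of the idea (the type and field names are illustrative,
not the actual neo-go storage interfaces): writes go into a local map and
hit the underlying store only on `Persist()`.

```go
// cachedStore buffers storage operations; nothing touches the
// underlying store until Persist() is called.
type cachedStore struct {
	mem   map[string][]byte // pending writes, keyed by raw key
	inner map[string][]byte // underlying store, simplified to a map here
}

func (s *cachedStore) Put(key, value []byte) {
	s.mem[string(key)] = value // cache the operation, no I/O yet
}

func (s *cachedStore) Get(key []byte) ([]byte, bool) {
	if v, ok := s.mem[string(key)]; ok { // cached writes shadow the store
		return v, true
	}
	v, ok := s.inner[string(key)]
	return v, ok
}

// Persist flushes all cached operations to the underlying store at once.
func (s *cachedStore) Persist() {
	for k, v := range s.mem {
		s.inner[k] = v
	}
	s.mem = make(map[string][]byte)
}
```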
Our block.Block was JSONized in a slightly different fashion than
result.Block in its Nonce and NextConsensus fields. That's bad for
notifications, because third-party clients would probably expect to see the
same format. Also, using a completely different Block representation in
result probably makes our client a bit weaker, as this representation is
harder to use with other neo-go components.
So use the same approach we took for Transactions and wrap block.Base, which
is to be serialized in the proper way.
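A sketch of the wrapping approach (the import path matches the current
neo-go layout, and the string encoding of Nonce below is only an
illustration of "a different fashion", not necessarily the exact format
result.Block uses):

```go
import "github.com/nspcc-dev/neo-go/pkg/core/block"

// prettyBlock wraps block.Base for JSON output. The embedded Base
// marshals inline; the outer Nonce field shadows the embedded value
// with the encoding clients expect.
type prettyBlock struct {
	block.Base
	Nonce string `json:"nonce"`
}
```

Shadowing only the problematic fields keeps the rest of the encoding in
sync with block.Base automatically.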
Getting the batch, updating Prometheus metrics and pushing events don't
require any locking: the batch is a local cache batch that no one outside
cares about, Prometheus metrics don't have to be in perfect sync and events
are asynchronous anyway.
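Roughly, the resulting critical section looks like this (all names are
hypothetical):

```go
func (s *subscriber) flush() {
	s.mtx.Lock()
	batch := s.batch     // take the local cache batch...
	s.batch = newBatch() // ...and reset it; only this needs the lock
	s.mtx.Unlock()

	// No locking needed from here on.
	opsProcessed.Add(float64(len(batch.ops))) // metrics needn't be in perfect sync
	for _, ev := range batch.events {
		s.events <- ev // events are delivered asynchronously anyway
	}
}
```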
Note that the protocol differs a bit from #895 in its notifications format;
to avoid additional server-side processing we're omitting some metadata, like:
* block size and confirmations
* transaction fees, confirmations, block hash and timestamp
* application execution doesn't have ScriptHash populated
Some block fields may also differ in encoding compared to `getblock` results
(like nonce field).
I think these differences are unnoticeable for most use cases, so we can
leave them as is, but this can be changed in the future.
We actually have to do that in order to answer getapplicationlog requests for
transactions that leave some interop items on the stack. It follows the same
logic our binary serializer/deserializer does, leaving the type and stripping
the value (whatever that is).
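A sketch of the approach with illustrative item types (the real ones live
in the VM package): for interop items only the type survives in the JSON
output, the value is dropped.

```go
import "encoding/hex"

type stackItem interface{ isItem() }

type interopItem struct{ value interface{} }
type byteArrayItem struct{ value []byte }

func (interopItem) isItem()   {}
func (byteArrayItem) isItem() {}

// toJSONItem mirrors the binary serializer's behavior: interop items
// keep their type, their value is stripped.
func toJSONItem(it stackItem) map[string]interface{} {
	switch v := it.(type) {
	case interopItem:
		return map[string]interface{}{"type": "InteropInterface"} // no value
	case byteArrayItem:
		return map[string]interface{}{
			"type":  "ByteArray",
			"value": hex.EncodeToString(v.value),
		}
	default:
		return map[string]interface{}{"type": "Unknown"}
	}
}
```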
It will be important for proper subscription testing and it doesn't hurt,
even though technically we've got two HTTP servers listening after this
change (one is the regular Server's http.Server and one is httptest's
Server). Reusing rpc.Server would be nice, but it requires some changes to
the Start sequence to start the Listener with net.Listen and then
communicate back its resulting Addr. That's not very convenient, especially
given that no other code needs it, so making these changes just for a bit
cleaner testing seems like overkill.
Update config appropriately. Update Start comment along the way.
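In tests that means something like the following (the helper and handler
names are hypothetical): httptest picks a free port by itself and reports
the resulting address, so rpc.Server's Start sequence stays untouched.

```go
import (
	"net/http"
	"net/http/httptest"
	"testing"
)

func TestSubscriptions(t *testing.T) {
	srv := initTestRPCServer(t) // hypothetical helper setting up rpc.Server
	ts := httptest.NewServer(http.HandlerFunc(srv.handleHTTPRequest))
	defer ts.Close()

	// ts.URL points at the second, test-only HTTP server,
	// e.g. "http://127.0.0.1:53712".
	dialWebsocket(t, ts.URL) // hypothetical subscription client setup
}
```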
Get new blocks directly from the Blockchain. It may lead to some duplication
(as we'll also receive our own blocks), but at the same time it's more
correct, because technically we can also get blocks via means other than the
network server, like RPC (the submitblock call). And it simplifies the
network server at the same time.
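Schematically (assuming a channel-based subscription API on the Blockchain;
the exact names may differ):

```go
import (
	"github.com/nspcc-dev/neo-go/pkg/core"
	"github.com/nspcc-dev/neo-go/pkg/core/block"
)

// listenBlocks forwards every new block to subscribers regardless of
// how it got into the chain: network server, RPC submitblock, anything.
func (s *Server) listenBlocks(bc *core.Blockchain) {
	blockCh := make(chan *block.Block)
	bc.SubscribeForBlocks(blockCh) // assumed subscription method
	for b := range blockCh {
		s.notifyClients(b) // hypothetical fan-out to subscribers
	}
}
```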
The CanTransfer function checks if the "to" and "from" values are
correct script hashes. If one of these values is correct and the other
is incorrect, the function returns a false positive result. It uses the
"and" operator, which requires both "to" and "from" script hashes to be
incorrect to fail the transaction.
Instead, the transaction must fail if at least one argument is incorrect,
so it should be the "or" operator.
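In essence (isValidScriptHash is a stand-in for the actual check):

```go
// Buggy: with "and" the transfer is rejected only when BOTH script
// hashes are invalid, so one bad argument slips through.
func canTransferBuggy(from, to []byte) bool {
	if !isValidScriptHash(from) && !isValidScriptHash(to) {
		return false
	}
	return true
}

// Fixed: with "or" the transaction fails when AT LEAST ONE script
// hash is invalid.
func canTransfer(from, to []byte) bool {
	if !isValidScriptHash(from) || !isValidScriptHash(to) {
		return false
	}
	return true
}

// isValidScriptHash is a stand-in: a script hash is 20 bytes long.
func isValidScriptHash(h []byte) bool {
	return len(h) == 20
}
```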
A part of integration with NEO Blockchain Toolkit (see #902). To be
able to deploy a smart contract compiled with the neo-go compiler via NEO
Express, we have to generate an additional .abi.json file. This file
contains the following information:
- hash of the compiled contract
- smart-contract metadata (title, description, version, author,
email, has-storage, has-dynamic-invoke, is-payable)
- smart-contract entry point
- functions
- events
However, this .abi.json file is slightly different from the one
described in manifest.go, so we have to add auxiliary structures for
JSON marshalling. The .abi.json format used by NEO-Express is described
[here](https://github.com/neo-project/neo-devpack-dotnet/blob/master/src/Neo.Compiler.MSIL/FuncExport.cs#L66).
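A sketch of such auxiliary structures (field names follow the linked
FuncExport.cs format; the Go types themselves are illustrative):

```go
// ABI mirrors the NEO-Express .abi.json layout for marshalling.
type ABI struct {
	Hash       string   `json:"hash"`
	Metadata   Metadata `json:"metadata"`
	EntryPoint string   `json:"entrypoint"`
	Functions  []Method `json:"functions"`
	Events     []Method `json:"events"`
}

// Metadata carries the smart-contract metadata listed above.
type Metadata struct {
	Title            string `json:"title"`
	Description      string `json:"description"`
	Version          string `json:"version"`
	Author           string `json:"author"`
	Email            string `json:"email"`
	HasStorage       bool   `json:"has-storage"`
	HasDynamicInvoke bool   `json:"has-dynamic-invoke"`
	IsPayable        bool   `json:"is-payable"`
}

// Method describes a function or an event.
type Method struct {
	Name       string  `json:"name"`
	Parameters []Param `json:"parameters"`
	ReturnType string  `json:"returntype"`
}

type Param struct {
	Name string `json:"name"`
	Type string `json:"type"`
}
```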
Method `methodInfoFromScope(...)` always returned an empty parameters
set, so we were missing this information in both .abi.json and
.debug.json files. Fixed now.