Add the ability to generate a NEO3-compatible *.manifest.json file to the compiler.
This file represents the contract manifest and includes ABI information, so
we don't need to create a separate *.abi.json file. The NEO3 debugger also
only needs *.manifest.json. So, switched from the *.abi.json file to the
*.manifest.json file.
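For illustration only, a minimal sketch of what writing such a manifest might look like; the field names below are hypothetical and reduced, the real file follows the NEO3 contract manifest specification and carries more than just the ABI:

package manifest

import (
	"encoding/json"
	"os"
)

// Parameter, Method, ABI and Manifest are hypothetical reduced versions of the
// real manifest structures, shown only to illustrate the *.manifest.json layout.
type Parameter struct {
	Name string `json:"name"`
	Type string `json:"type"`
}

type Method struct {
	Name       string      `json:"name"`
	Parameters []Parameter `json:"parameters"`
	ReturnType string      `json:"returntype"`
}

type ABI struct {
	Methods []Method `json:"methods"`
	Events  []Method `json:"events"`
}

type Manifest struct {
	ABI ABI `json:"abi"`
}

// Write stores the manifest next to the compiled script as *.manifest.json.
func Write(m *Manifest, path string) error {
	data, err := json.Marshal(m)
	if err != nil {
		return err
	}
	return os.WriteFile(path, data, 0644)
}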
In order to avoid a dependency cycle in the next commits:
imports github.com/nspcc-dev/neo-go/pkg/config
imports github.com/nspcc-dev/neo-go/pkg/wallet
imports github.com/nspcc-dev/neo-go/pkg/vm
imports github.com/nspcc-dev/neo-go/pkg/smartcontract/nef
imports github.com/nspcc-dev/neo-go/pkg/config
When a CN is not up to date with the network it synchronizes blocks first and
only then starts the consensus process. But while synchronizing it receives
consensus payloads and tries to process them even though the message reader
routine is not started yet. This leads to lots of goroutines waiting to send
their messages:
Jun 25 23:55:53 nodoka neo-go[32733]: goroutine 1639919 [chan send, 4 minutes]:
Jun 25 23:55:53 nodoka neo-go[32733]: github.com/nspcc-dev/neo-go/pkg/consensus.(*service).OnPayload(0xc0000ecb40, 0xc005bd7680)
Jun 25 23:55:53 nodoka neo-go[32733]: #011/go/src/github.com/nspcc-dev/neo-go/pkg/consensus/consensus.go:329 +0x31b
Jun 25 23:55:53 nodoka neo-go[32733]: github.com/nspcc-dev/neo-go/pkg/network.(*Server).handleConsensusCmd(...)
Jun 25 23:55:53 nodoka neo-go[32733]: #011/go/src/github.com/nspcc-dev/neo-go/pkg/network/server.go:687
Jun 25 23:55:53 nodoka neo-go[32733]: github.com/nspcc-dev/neo-go/pkg/network.(*Server).handleMessage(0xc0000ba160, 0x1053260, 0xc00507d170, 0xc005bdd560, 0x0, 0x0)
Jun 25 23:55:53 nodoka neo-go[32733]: #011/go/src/github.com/nspcc-dev/neo-go/pkg/network/server.go:806 +0xd58
Jun 25 23:55:53 nodoka neo-go[32733]: github.com/nspcc-dev/neo-go/pkg/network.(*TCPPeer).handleConn(0xc00507d170)
Jun 25 23:55:53 nodoka neo-go[32733]: #011/go/src/github.com/nspcc-dev/neo-go/pkg/network/tcp_peer.go:160 +0x294
Jun 25 23:55:53 nodoka neo-go[32733]: created by github.com/nspcc-dev/neo-go/pkg/network.(*TCPTransport).Dial
Jun 25 23:55:53 nodoka neo-go[32733]: #011/go/src/github.com/nspcc-dev/neo-go/pkg/network/tcp_transport.go:38 +0x1ad
Jun 25 23:55:53 nodoka neo-go[32733]: goroutine 1639181 [chan send, 10 minutes]:
Jun 25 23:55:53 nodoka neo-go[32733]: github.com/nspcc-dev/neo-go/pkg/consensus.(*service).OnPayload(0xc0000ecb40, 0xc013bb6600)
Jun 25 23:55:53 nodoka neo-go[32733]: #011/go/src/github.com/nspcc-dev/neo-go/pkg/consensus/consensus.go:329 +0x31b
Jun 25 23:55:53 nodoka neo-go[32733]: github.com/nspcc-dev/neo-go/pkg/network.(*Server).handleConsensusCmd(...)
Jun 25 23:55:53 nodoka neo-go[32733]: #011/go/src/github.com/nspcc-dev/neo-go/pkg/network/server.go:687
Jun 25 23:55:53 nodoka neo-go[32733]: github.com/nspcc-dev/neo-go/pkg/network.(*Server).handleMessage(0xc0000ba160, 0x1053260, 0xc01361ee10, 0xc01342c780, 0x0, 0x0)
Jun 25 23:55:53 nodoka neo-go[32733]: #011/go/src/github.com/nspcc-dev/neo-go/pkg/network/server.go:806 +0xd58
Jun 25 23:55:53 nodoka neo-go[32733]: github.com/nspcc-dev/neo-go/pkg/network.(*TCPPeer).handleConn(0xc01361ee10)
Jun 25 23:55:53 nodoka neo-go[32733]: #011/go/src/github.com/nspcc-dev/neo-go/pkg/network/tcp_peer.go:160 +0x294
Jun 25 23:55:53 nodoka neo-go[32733]: created by github.com/nspcc-dev/neo-go/pkg/network.(*TCPTransport).Dial
Jun 25 23:55:53 nodoka neo-go[32733]: #011/go/src/github.com/nspcc-dev/neo-go/pkg/network/tcp_transport.go:38 +0x1ad
Jun 25 23:55:53 nodoka neo-go[32733]: goroutine 39454 [chan send, 32 minutes]:
Jun 25 23:55:53 nodoka neo-go[32733]: github.com/nspcc-dev/neo-go/pkg/consensus.(*service).OnPayload(0xc0000ecb40, 0xc014fea680)
Jun 25 23:55:53 nodoka neo-go[32733]: #011/go/src/github.com/nspcc-dev/neo-go/pkg/consensus/consensus.go:329 +0x31b
Jun 25 23:55:53 nodoka neo-go[32733]: github.com/nspcc-dev/neo-go/pkg/network.(*Server).handleConsensusCmd(...)
Jun 25 23:55:53 nodoka neo-go[32733]: #011/go/src/github.com/nspcc-dev/neo-go/pkg/network/server.go:687
Jun 25 23:55:53 nodoka neo-go[32733]: github.com/nspcc-dev/neo-go/pkg/network.(*Server).handleMessage(0xc0000ba160, 0x1053260, 0xc0140b2ea0, 0xc014fe0ed0, 0x0, 0x0)
Jun 25 23:55:53 nodoka neo-go[32733]: #011/go/src/github.com/nspcc-dev/neo-go/pkg/network/server.go:806 +0xd58
Jun 25 23:55:53 nodoka neo-go[32733]: github.com/nspcc-dev/neo-go/pkg/network.(*TCPPeer).handleConn(0xc0140b2ea0)
Jun 25 23:55:53 nodoka neo-go[32733]: #011/go/src/github.com/nspcc-dev/neo-go/pkg/network/tcp_peer.go:160 +0x294
Jun 25 23:55:53 nodoka neo-go[32733]: created by github.com/nspcc-dev/neo-go/pkg/network.(*TCPTransport).Dial
Jun 25 23:55:53 nodoka neo-go[32733]: #011/go/src/github.com/nspcc-dev/neo-go/pkg/network/tcp_transport.go:38 +0x1ad
Luckily it doesn't break synchronization completely, as eventually connection
timers fire, the node breaks all connections, creates new ones and these new
ones request blocks successfully until another consensus payload stalls them
too. In the end the node reaches synchronization, the message processing loop
starts and releases all of these waiting goroutines, but it's better for us to
avoid this happening at all.
This also makes double-starting a no-op, which is a nice property.
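A rough sketch of the idea with simplified, hypothetical types: an atomic flag makes Start() idempotent and lets OnPayload() drop payloads until the message loop is actually running:

package consensus

import "sync/atomic"

type payload struct{}

type service struct {
	started  int32
	messages chan payload
}

// Start launches the message loop exactly once, making double-starting a no-op.
func (s *service) Start() {
	if atomic.CompareAndSwapInt32(&s.started, 0, 1) {
		go s.eventLoop()
	}
}

// OnPayload drops consensus payloads received before Start(), so network reader
// goroutines never block on the messages channel during synchronization.
func (s *service) OnPayload(p payload) {
	if atomic.LoadInt32(&s.started) == 0 {
		return
	}
	s.messages <- p
}

func (s *service) eventLoop() {
	for range s.messages {
		// process consensus messages here
	}
}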
Using the view number from the recovery message is just plain wrong, it's going
to be higher than our current view and these messages will be treated as coming
from the future, even though they have their original view number included.
Implement (*Param).GetBoolean() for converting a parameter to a bool value.
It is used for the verbosity flag and is false iff the parameter is either a
zero number or an empty string.
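A minimal sketch of the conversion rule (the Param type here is a simplified stand-in for the real RPC request parameter type):

package request

// Param is a simplified stand-in for the RPC request parameter type.
type Param struct {
	Value interface{}
}

// GetBoolean converts the parameter to a bool value: only a zero number or an
// empty string yield false, everything else is true.
func (p *Param) GetBoolean() bool {
	switch v := p.Value.(type) {
	case int:
		return v != 0
	case float64:
		return v != 0
	case string:
		return v != ""
	case bool:
		return v
	default:
		return true
	}
}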
This error message makes no sense when shutting down the server:
2020-06-25T19:29:53.251+0300 ERROR failed to start RPC server {"error": "http: Server closed"}
And ListenAndServe is documented to always return a non-nil error, one of which
is http.ErrServerClosed (see the sketch after the race trace below). This should
also fix the following test failure:
==================
WARNING: DATA RACE
Read at 0x00c000254243 by goroutine 49:
testing.(*common).logDepth()
/usr/local/go/src/testing/testing.go:665 +0xa1
testing.(*common).Logf()
/usr/local/go/src/testing/testing.go:658 +0x8f
testing.(*T).Logf()
<autogenerated>:1 +0x75
go.uber.org/zap/zaptest.testingWriter.Write()
/go/pkg/mod/go.uber.org/zap@v1.10.0/zaptest/logger.go:130 +0x11f
go.uber.org/zap/zaptest.(*testingWriter).Write()
<autogenerated>:1 +0xa9
go.uber.org/zap/zapcore.(*ioCore).Write()
/go/pkg/mod/go.uber.org/zap@v1.10.0/zapcore/core.go:90 +0x1c3
go.uber.org/zap/zapcore.(*CheckedEntry).Write()
/go/pkg/mod/go.uber.org/zap@v1.10.0/zapcore/entry.go:215 +0x1e7
go.uber.org/zap.(*Logger).Error()
/go/pkg/mod/go.uber.org/zap@v1.10.0/logger.go:203 +0x95
github.com/nspcc-dev/neo-go/pkg/rpc/server.(*Server).Start()
/go/src/github.com/nspcc-dev/neo-go/pkg/rpc/server/server.go:179 +0x5c5
Previous write at 0x00c000254243 by goroutine 44:
testing.tRunner.func1()
/usr/local/go/src/testing/testing.go:900 +0x353
testing.tRunner()
/usr/local/go/src/testing/testing.go:913 +0x1bb
Goroutine 49 (running) created at:
github.com/nspcc-dev/neo-go/pkg/rpc/server.initClearServerWithInMemoryChain()
/go/src/github.com/nspcc-dev/neo-go/pkg/rpc/server/server_helper_test.go:69 +0x305
github.com/nspcc-dev/neo-go/pkg/rpc/server.initServerWithInMemoryChain()
/go/src/github.com/nspcc-dev/neo-go/pkg/rpc/server/server_helper_test.go:78 +0x3c
github.com/nspcc-dev/neo-go/pkg/rpc/server.testRPCProtocol()
/go/src/github.com/nspcc-dev/neo-go/pkg/rpc/server/server_test.go:805 +0x53
github.com/nspcc-dev/neo-go/pkg/rpc/server.TestRPC.func1()
/go/src/github.com/nspcc-dev/neo-go/pkg/rpc/server/server_test.go:793 +0x44
testing.tRunner()
/usr/local/go/src/testing/testing.go:909 +0x199
Goroutine 44 (finished) created at:
testing.(*T).Run()
/usr/local/go/src/testing/testing.go:960 +0x651
github.com/nspcc-dev/neo-go/pkg/rpc/server.TestRPC()
/go/src/github.com/nspcc-dev/neo-go/pkg/rpc/server/server_test.go:792 +0x5d
testing.tRunner()
/usr/local/go/src/testing/testing.go:909 +0x199
==================
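The sketch referenced above, roughly the usual pattern for this check; srv and log are hypothetical placeholders for the server's actual fields:

package server

import (
	"errors"
	"net/http"

	"go.uber.org/zap"
)

// startHTTP runs ListenAndServe and ignores http.ErrServerClosed, which only
// signals a graceful shutdown and is not a startup failure.
func startHTTP(srv *http.Server, log *zap.Logger) {
	go func() {
		err := srv.ListenAndServe()
		if err != nil && !errors.Is(err, http.ErrServerClosed) {
			log.Error("failed to start RPC server", zap.Error(err))
		}
	}()
}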
Emit CONVERT for converting between different types. NeoVM behavior is
different from that of Go (e.g. type assertions to `int` and `uint32` are
equivalent there).
Even if the value is zero, the GAS distribution updates the balance height, so
the storage item must be updated too. Fixes the following on the preview2 testnet:
block 74227: value mismatch for key ffffffff1454a6cb279fbcedc66162ad4ad5d1d910202b92743e000000000000000000000005: 1041032104809fd5002103f3210128010000 vs 1041032104809fd50021033f110128010000
They make no sense. Fixes a preview2 testnet state problem:
file BlockStorage_100000/dump-block-70000.json: block 69935: state mismatch for key ffffffff1454a6cb279fbcedc66162ad4ad5d1d910202b92743e000000000000000000000005: Deleted vs Added
Regular CALLs don't update it, only Contract.Call does. Fixes the following
mismatch on preview2 testnet:
file BlockStorage_100000/dump-block-49000.json: block 48644, changes number mismatch: 6 vs 4
And fix contract creation to really update the ID, eliminating this difference
in the storage (preview2 testnet):
file BlockStorage_100000/dump-block-39000.json: block 38043: key mismatch: 0c000000617373657464df4ebe92334d1fc7e64b10f1d1e33942d9905e510000000000000009 vs 00000000617373657464df4ebe92334d1fc7e64b10f1d1e33942d9905e510000000000000009
Preview2 testnet:
file BlockStorage_100000/dump-block-12000.json: block 11562: key mismatch: feffffff1454a6cb279fbcedc66162ad4ad5d1d910202b92743e000000000000000000000005 vs feffffff1431b7e7aea5131f74721e002c6a56b610885813f79e000000000000000000000005
Originally this code was written to run after transaction processing, but
after 0fa4c49735 it works in a different manner.
ValidatorsCount is not initialized at block 0 with the C# node (the first voter
initializes it) and until that initialization happens the standby validators
list is returned as is, without sorting.
Fixes the state mismatch for the key ffffffff0e00000000000000000000000000000001 in
the first blocks.
It also affects tests, as now the first validator is different and it receives
the network fees.
After a block is stored it's possible to have a new FeePerByte constraint,
so we should remove all transactions that do not meet this requirement.
Caching of FeePerByte was also added in order not to re-verify
transactions each time the mempool needs to be updated.
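A simplified sketch of the idea with hypothetical types (the real mempool keeps its own indexes and locking): cache the policy value and drop anything below the new threshold after the block is persisted:

package mempool

// tx is a hypothetical reduced transaction holding only what the check needs.
type tx struct {
	netFee int64
	size   int
}

// Pool caches the FeePerByte policy value to avoid re-verifying transactions
// on every update.
type Pool struct {
	feePerByte int64
	verified   []tx
}

// RemoveStale re-checks cached transactions against the possibly changed
// FeePerByte value after a new block is stored.
func (p *Pool) RemoveStale(newFeePerByte int64) {
	p.feePerByte = newFeePerByte
	kept := p.verified[:0]
	for _, t := range p.verified {
		if t.netFee >= p.feePerByte*int64(t.size) {
			kept = append(kept, t)
		}
	}
	p.verified = kept
}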
MarshalJSON should be defined on the structure (not the pointer), as we use
structures to marshal parameters (e.g. in NotificationEvent and
Invoke of the RPC result package) and never use pointers for that purpose.
Also added marshalling of a nil array into `[]` instead of `null` to
follow the C# implementation (see the sketch below).
Part of #904.
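The sketch below shows the general shape, with the Parameter type reduced to the relevant fields: a value receiver so that both values and pointers marshal the same way, plus the nil-to-[] normalization:

package smartcontract

import "encoding/json"

// Parameter is a reduced hypothetical version with just the fields needed to
// show the marshalling behavior.
type Parameter struct {
	Type  string      `json:"type"`
	Value []Parameter `json:"value"`
}

// MarshalJSON uses a value receiver so that Parameter works both as a value
// (e.g. inside NotificationEvent) and via a pointer. A nil array is emitted
// as [] instead of null to match the C# implementation.
func (p Parameter) MarshalJSON() ([]byte, error) {
	type alias Parameter // drop methods to avoid recursive MarshalJSON calls
	a := alias(p)
	if a.Value == nil {
		a.Value = []Parameter{}
	}
	return json.Marshal(a)
}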
1. We now have MaxTransactionsPerBlock set in the native Policy contract,
so this value should be used in the (dbft).GetVerified method instead
of passing it as an argument.
2. Removed (dbft).WithTxPerBlock.
3. The DBFT API has changed, so update its version.
4. Removed MaxTransactionsPerBlock from the node configuration, as we
have it set in the native Policy contract.
System.Enumerator.Next costs 1_000_000 gas, which is rather big for a
single iteration. Reimplement this by saving current counters on the stack.
This approach will use 2 PICKITEMs (270_000 gas) for maps, which is
still cheaper than the *.Next syscall.
The C# implementation uses NEWARRAY for creating arguments.
Don't change our implementation in `emit`, because PACK is cheaper and
this script must not depend on the internal details of the `emit` package anyway.
There is no such thing as high/low priority transactions, as there are
no free transactions anymore and they are ordered by the fees contained
in the transaction itself.
Closes #1063.
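A sketch of the resulting ordering with hypothetical item fields: fee per byte first, absolute network fee as a tie-breaker, no priority classes involved:

package mempool

// item is a hypothetical pool entry carrying only the fields needed for ordering.
type item struct {
	netFee int64
	size   int
}

// CompareTo orders items by fee per byte and then by absolute network fee;
// there is no free/high-priority class anymore.
func (p item) CompareTo(other item) int {
	pRatio := p.netFee / int64(p.size)
	oRatio := other.netFee / int64(other.size)
	if pRatio != oRatio {
		if pRatio > oRatio {
			return 1
		}
		return -1
	}
	if p.netFee != other.netFee {
		if p.netFee > other.netFee {
			return 1
		}
		return -1
	}
	return 0
}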
It's just JSON; io.Serializable is only used for DB storage, where the length
should be obtained from the stream. Fixes:
2020-06-18T22:14:10.571+0300 WARN contract invocation failed {"tx": "1ffd475a9c246495d6206cb80a9a78e9d14a433ded60cd37aa87d897655606e1", "block": 25893, "error": "error encountered at instruction 3696 (SYSCALL): failed to invoke syscall: invalid character ':' after top-level value"}
We make it explicit in the appropriate Block/Transaction structures, not via a
singleton as the C# node does. I think this approach has a bit more potential and
allows better package reuse for different purposes.
And implement it for Transaction, the only user of ParameterContext for
now. This makes correct signing/verification possible for cases when
serialization for general transmission and for signing differ.
Turns out, Neo uses block compression and not the streaming lz4 format. And it
doesn't contain the uncompressed payload size, which makes guessing it somewhat
suboptimal.
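One way to handle the missing size, sketched with the github.com/pierrec/lz4/v4 package (the package choice, initial guess and upper bound here are assumptions, not something fixed by this change): guess a destination size and grow it until block decompression succeeds:

package chaindump

import (
	"errors"

	"github.com/pierrec/lz4/v4"
)

// decompressBlock uncompresses a single lz4 block without knowing the original
// payload size in advance, growing the guessed buffer on failure.
func decompressBlock(src []byte) ([]byte, error) {
	for size := 4*len(src) + 64; size <= 1<<26; size *= 2 {
		dst := make([]byte, size)
		if n, err := lz4.UncompressBlock(src, dst); err == nil {
			return dst[:n], nil
		}
	}
	return nil, errors.New("can't decompress lz4 block")
}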
It doesn't affect anything yet, but it's going to be used in the future for
network-specific behavior. It also renames the short '--timeout' form to '-s',
avoiding a conflict with '-t' used for '--testnet'.
1. Remove GetScript, IsPayable, GetStorageContext.
2. Revert 82319538 related to GetStorageContext.
3. Rename Migrate to Update.
4. Move the remaining ones to System.Contract.*.
Related #1031.
The updated System.Blockchain.GetBlock interop replaces the functionality of
the following interops:
System.Block.GetTransactions
System.Block.GetTransactionCount
Neo.Block.GetTransactions
Neo.Block.GetTransactionsCount
Closes #1025.
Now we put a stackitem.Array on the stack instead of an Interop item, so we're
able to use all available block properties without extra interop getters.
Removed the Neo.Blockchain.GetBlock interop as we don't need it anymore.
Removed the Neo.Block.GetTransaction and System.Block.GetTransaction
interops. These interops were replaced by the new
System.Blockchain.GetTransactionFromBlock interop.
Closes #1023.
Now we put a stackitem.Array on the stack instead of an Interop item, so we
don't need the old transaction-related interops anymore. Removed the following
interops:
System.Transaction.GetHash
Neo.Transaction.GetAttributes
Neo.Transaction.GetHash
Neo.Transaction.GetWitnesses
Neo.Attribute.GetData
Neo.Attribute.GetUsage
Also removed the following duplicated NEO interop:
Neo.Blockchain.GetTransaction
Closes #913.
Provide package info in the funcScope to check if a function is defined
inside an interop package. As a good side effect, bytecode for builtins from
`util` is no longer emitted.
Related #941.
Two changes are being done here, because they require a lot of updates to
tests. Now we're back to version 0 and we only have one type of
transaction.
It also removes the GetType and GetScript interops, both of which are obsolete in Neo 3.
Neo 3 can emit Null in the `from` or `to` fields of its transfer notifications
when minting/burning tokens (unlike Neo 2, which emitted util.Uint256{} in this
case). It then gets converted to a Parameter of AnyType and we have to JSONize
it somehow for proper RPC functioning.
When money is being sent it usually goes away from someone's pocket, so that
there is a little less money left there. Not in our case, as it turns out: we
were actually adding money both to the sender and the receiver, which is nice,
but a bit different from the usual economic expectations.
They actually use the same types as for messages. Fixes
2020-05-29T00:06:17.593+0300 WARN peer disconnected {"addr": "168.62.167.190:20333", "reason": "handling CMDInv message: invalid inventory type", "peerCount": 3}
It's useless. Even though there is a Neo.Transaction.GetUnspentCoins syscall
that can be used, its return type is an interop structure that's not accepted
by any other syscall, so you can't really do anything with it. And there is no
such interface in the .NET Framework implementation.
This syscall should only work for contracts created by the current transaction
and that is what is supposed to be checked here. Do so by looking at the
differences between ic.dao and the original lower DAO.
Our block.Block was JSONized in a slightly different fashion than result.Block in
its NextConsensus and Index fields. That's not good for notifications, because
third-party clients would probably expect to see the same format. Also, using a
completely different Block representation probably makes our client a bit
weaker, as this representation is harder to use with other neo-go components.
So use the same approach we took for Transactions and wrap block.Block so that
it is serialized in the proper way.
Fix `Script` JSONization along the way, the 3.0 node wraps it within `witnesses`.
Getting the batch, updating Prometheus metrics and pushing events doesn't require
any locking: the batch is a local cache batch that no one outside cares about,
Prometheus metrics are not critical to be in perfect sync and events are
asynchronous anyway.
Native contracts also don't require any locks and they should be processed
before dumping storage changes.
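Roughly, with heavily simplified and hypothetical types: only the shared-state update stays under the lock, while metrics and event pushes happen outside of it:

package core

import (
	"sync"

	"github.com/prometheus/client_golang/prometheus"
)

// persistedHeight is a hypothetical gauge standing in for the real metrics.
var persistedHeight = prometheus.NewGauge(prometheus.GaugeOpts{
	Name: "neogo_persisted_height",
})

type Blockchain struct {
	lock   sync.Mutex
	height uint32
	events chan uint32
}

// storeBlock keeps only the state update under the lock; metrics aren't
// critical to be perfectly in sync and events are asynchronous anyway.
func (bc *Blockchain) storeBlock(h uint32) {
	bc.lock.Lock()
	bc.height = h
	bc.lock.Unlock()

	persistedHeight.Set(float64(h))
	select {
	case bc.events <- h:
	default: // don't block if no one is listening right now
	}
}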
Note that the protocol differs a bit from #895 in its notifications format:
to avoid additional server-side processing we're omitting some metadata like:
* block size and confirmations
* transaction fees, confirmations, block hash and timestamp
* application execution doesn't have ScriptHash populated
Some block fields may also differ in encoding compared to `getblock` results
(like the nonce field).
I think these differences are unnoticeable for most use cases, so we can
leave them as is, but it can be changed in the future.
We actually have to do that in order to answer getapplicationlog requests for
transactions that leave some interop items on the stack. It follows the same
logic our binary serializer/deserializer does, leaving the type and stripping
the value (whatever it is).
It will be important for proper subscription testing and it doesn't hurt, even
though technically we've got two HTTP servers listening after this change (one
is the regular Server's http.Server and one is httptest's Server). Reusing
rpc.Server would be nice, but it requires some changes to the Start sequence to
start the Listener with net.Listen and then communicate back its resulting
Addr. It's not very convenient, especially given that no other code needs it,
so doing these changes just for a bit cleaner testing seems like overkill.
Update the config appropriately. Update the Start comment along the way.
Get new blocks directly from the Blockchain. It may lead to some duplication
(as we'll also receive our own blocks), but at the same time it's more
correct, because technically we can also get blocks via means other than the
network server, like RPC (the submitblock call). And it simplifies the network
server at the same time.
When using Go's `map` for decoding, the order can change from time to time,
but we may want to check it too. Use a custom decoder to save items
in order.
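A sketch of such a decoder using only the standard library: json.Decoder tokens keep the original key order, unlike unmarshalling into a map:

package ordered

import (
	"bytes"
	"encoding/json"
	"errors"
)

// KV is a single object field; a slice of KVs preserves the order keys had
// in the input, which a map can't guarantee.
type KV struct {
	Key   string
	Value json.RawMessage
}

// DecodeObject reads a JSON object keeping its fields in the original order.
func DecodeObject(data []byte) ([]KV, error) {
	dec := json.NewDecoder(bytes.NewReader(data))
	tok, err := dec.Token()
	if err != nil {
		return nil, err
	}
	if d, ok := tok.(json.Delim); !ok || d != '{' {
		return nil, errors.New("not a JSON object")
	}
	var items []KV
	for dec.More() {
		keyTok, err := dec.Token()
		if err != nil {
			return nil, err
		}
		var val json.RawMessage
		if err := dec.Decode(&val); err != nil {
			return nil, err
		}
		items = append(items, KV{Key: keyTok.(string), Value: val})
	}
	if _, err := dec.Token(); err != nil { // consume the closing '}'
		return nil, err
	}
	return items, nil
}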