Fix mempool and chain locking
This allows us to easily make 1000 Tx/s in a 4-node privnet, fixes potential
double spends and improves mempool testing coverage.
Fixes the GolangCI warning:
Error return value of
(*github.com/CityOfZion/neo-go/pkg/core/mempool.Pool).Add is not checked
(from errcheck)
and allows us to almost completely forget about the mempool here.
Our mempool only contains valid verified transactions all the time, it never
has any unverified ones. An unverified pool made some sense for quickly
unverifying transactions after a new block is accepted (with gradual
background reverification), but reverification needs some non-trivial locking
between the blockchain and the mempool, as well as internal mempool state
locking (reverifying a tx and moving it between the unverified and verified
pools must be atomic). Our current reverification is fast enough (and takes
all the appropriate locks), so bothering with an unverified pool makes little
sense.
We not only need to remove transactions stored in the block, but also to
invalidate some potential double spends caused by these transactions. Usually
a new block contains a substantial number of transactions from the pool, so
it's easier to make one pass over it, keeping only valid items, rather than
remove them one by one and make an additional pass to recheck
inputs/witnesses.
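A minimal sketch of that single pass, filtering in place; Pool, item and the
stillValid callback are hypothetical stand-ins for the real mempool types:

```go
// Sketch only: one pass drops block transactions and anything they
// invalidate, reusing the backing array of the fee-ordered slice.
type item struct{ txn *transaction.Transaction }

type Pool struct{ verified []item }

func (mp *Pool) removeStale(inBlock map[util.Uint256]bool, stillValid func(*transaction.Transaction) bool) {
	kept := mp.verified[:0] // reuse the backing array, order is preserved
	for _, it := range mp.verified {
		if !inBlock[it.txn.Hash()] && stillValid(it.txn) {
			kept = append(kept, it)
		}
	}
	mp.verified = kept
}
```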
It does no harm as we have transactions naturally ordered by fee anyway, and
it makes managing them a little easier. This also makes the slices store items
themselves instead of pointers to them, which reduces the pressure on the
memory subsystem.
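Illustratively (item is a stand-in name):

```go
// Before: a slice of pointers, each element a separate heap allocation.
var verified []*item

// After: a slice of values, elements laid out contiguously, which is
// friendlier to the cache and cheaper for the GC to scan.
var verified []item
```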
They shouldn't depend on the chain state, and for the same transaction they
should always produce the same result. Thus, it makes no sense to recalculate
them over and over again.
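A minimal sketch of such caching, assuming hypothetical field and helper
names (and single-goroutine access); util.Uint256 is neo-go's hash type:

```go
type Transaction struct {
	// ... actual transaction fields ...
	hash       util.Uint256 // cached on first use
	hashCached bool
}

// Hash computes the hash once and reuses it afterwards; it only
// depends on the immutable transaction contents.
func (t *Transaction) Hash() util.Uint256 {
	if !t.hashCached {
		t.hash = t.computeHash() // hypothetical helper doing the hashing
		t.hashCached = true
	}
	return t.hash
}
```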
We can only add one block of the given height and we have two competing
goroutines to do that: consensus and the block queue. Whoever adds the block
first shouldn't trigger an error in the other one.
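One way to express that, sketched with hypothetical names; the height check
turns the loser's attempt into a no-op:

```go
func (bc *Blockchain) AddBlock(block *block.Block) error {
	bc.addLock.Lock()
	defer bc.addLock.Unlock()
	if block.Index <= bc.BlockHeight() {
		return nil // the other goroutine already added this height
	}
	return bc.storeBlock(block) // hypothetical internal helper
}
```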
Fix block relaying for blocks added via the block queue too; previously only
consensus-generated blocks were broadcast.
Eliminate races between tx checks and adding them to the mempool, and ensure
the chain doesn't change while we're working with the new tx. Ensure only one
block addition attempt can be in progress.
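Sketched with hypothetical names, assuming a chain-wide RWMutex where block
addition takes the write lock and tx processing the read lock:

```go
func (bc *Blockchain) PoolTx(tx *transaction.Transaction) error {
	bc.lock.RLock() // the chain cannot advance while we hold this
	defer bc.lock.RUnlock()
	if err := bc.verifyTx(tx); err != nil {
		return err
	}
	return bc.mempool.Add(tx, bc)
}
```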
The chain may already be more current than our dBFT state (like when the node
has committed something at view 0, but all the other nodes changed view and
accepted something at view 1), so in this case we should reinitialize dBFT at
the new height.
Because the constants are loaded directly via `emitLoadConst`, there is no
need to store them in the array of locals. Storing them there can have a big
overhead, because it is done at the beginning of every function.
It can lead to some goroutine explosion, but supposedly it's better than
stalling other processing, and eventually all of these goroutines should
finish their sends. Note that this doesn't change the behavior for RPC-relayed
transactions, which still wait for the broadcast to finish to ensure proper
transaction distribution before returning the result to the client.
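Roughly, the change is (hypothetical names):

```go
for _, p := range s.Peers() {
	p := p // capture the loop variable for the goroutine
	go func() {
		// Blocks only this goroutine if the peer's queue is full;
		// the caller moves on to other peers immediately.
		p.EnqueueMessage(msg)
	}()
}
```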
If we have already got the Version message, we don't need the rest of the
handshake to complete before being able to properly answer PeerAddr()
requests. Fixes some duplicate connections between machines.
This one is designed to give more priority to direct node communication, that
is, their messages take priority over generic broadcasts. It should improve
the consensus process under TX pressure and allow pings to be handled in time
(preventing disconnects).
They have the opposite order: height first and nonce second. It was done
wrong in 4e6ed902 and never fixed since. Fixes sending a wrong peer state
leading to useless getheaders messages (and disconnects when the other side is
lagging behind).
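A sketch of the corrected wire order; the payload type and field set are
simplified here and not the real definitions:

```go
func (p *ping) EncodeBinary(w *io.BinWriter) {
	w.WriteU32LE(p.LastBlockIndex) // height first
	w.WriteU32LE(p.Nonce)          // nonce second
}

func (p *ping) DecodeBinary(r *io.BinReader) {
	p.LastBlockIndex = r.ReadU32LE() // must mirror the encoder
	p.Nonce = r.ReadU32LE()
}
```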
We can have more than one connection attempt in progress that has not yet
completed the handshake, so if a Version was already received we should use
it.
Returning an error string as a result (not an error) is utterly wrong, but
the C# implementation just returns a zero balance for unknown addresses, so we
should follow that.
While decoding payloads, local implementations of Recovery* messages were
used, but when creating a RecoveryMessage inside the dBFT library the default
NewRecoveryMessage was invoked. This led to parsing errors.
Append should leave its result on top of the stack.
Thus we need to transform the top of the stack:
(top) a . b --> (top) a . b . b
It can be done with just OVER + SWAP.
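Step by step:

```
(top) a . b         initial state
(top) b . a . b     after OVER (copies the second item to the top)
(top) a . b . b     after SWAP (swaps the two topmost items)
```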
Our node was too pingy because of wrong timer setups (that divided a timeout
Duration by time.Second); it was also wrong in its time calculations (using
UTC time to calculate intervals). At the same time, a missing block is a
server-wide problem, so it's better solved with a server-wide protocol loop.
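The timer bug class, illustrated (values are made up):

```go
timeout := 30 * time.Second
// Wrong: Duration/Duration strips the units, leaving a plain 30,
// which re-read as a Duration means 30 nanoseconds.
t := time.NewTimer(timeout / time.Second)
// Correct: pass the Duration as-is.
t = time.NewTimer(timeout)
_ = t
```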
A while ago the VM serialization format for Integer items was changed, but
the compiler continued to emit Integers in the old format. This commit changes
the compiler behavior to be compatible with the VM.
Recursive execute() calls can affect gas calculation. This commit makes
execute() be called only for real opcodes and moves the duplicated CALL/JMP
logic into a separate function.
1) Make timeout a timeout, don't do magic ping counts.
2) Drop additional timer from the main peer's protocol loop, create it
dynamically and make it disconnect the peer.
3) Don't expose the ping counter to the outside, handle more logic inside the
Peer.
Relates to #430.
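A sketch of item 2, with hypothetical names:

```go
// Armed when a ping is sent; expiry drops the peer instead of
// incrementing a counter somewhere outside.
p.pingTimer = time.AfterFunc(pingTimeout, func() {
	p.Disconnect(errPingPongTimeout)
})

// ...and in the pong handler:
p.pingTimer.Stop()
```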
We don't and won't have synchronized clocks in the network, so the only
timestamp we can compare our local time with is one we made ourselves. What
this ping mechanism is used for is to recover from missing a block broadcast,
thus it's appropriate for it to trigger after X seconds of local time since
the last block was received.
Relates to #430.
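Sketched with hypothetical names:

```go
// lastBlockTime is recorded locally when a block is accepted, so the
// comparison never involves anyone else's clock.
if time.Since(s.lastBlockTime) > blockWaitTime {
	s.requestBlocksViaPing() // hypothetical trigger
}
```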
In reality it will never be true exactly in the case where we want this ping
mechanism to work: when the node failed to get a block from the net. It won't
get the header either, and thus its block height will be equal to its header
height. The only moment this condition is met is when the node does initial
synchronization, and that synchronization works just fine without any pings.
Relates to #430.
Two queues, for high-priority and ordinary messages. Fixes #590. These queues
are deliberately made small to avoid the bufferbloat problem; there is going
to be another queueing layer above them to compensate for that. The queues are
designed to be synchronous in enqueueing; async capabilities are to be added
a layer above later.
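A sketch of the prioritized dequeueing on the sending side (channel and
method names are hypothetical):

```go
p.hpSendQ = make(chan []byte, 1) // deliberately tiny buffers
p.sendQ = make(chan []byte, 1)

go func() {
	for {
		select {
		case msg := <-p.hpSendQ: // high priority wins when ready
			p.write(msg)
		default:
			select {
			case msg := <-p.hpSendQ:
				p.write(msg)
			case msg := <-p.sendQ:
				p.write(msg)
			}
		}
	}
}()
```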
Big.Int Bytes()/SetBytes() methods are not symmetric.
Moreover, we need to mimic the C# node behavior:
- if a positive number has its MSB set, a 0x00 byte should be appended
  to distinguish positive numbers from negative ones
- negative numbers should be serialized in two's complement
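A sketch of these rules in the little-endian form that C#
BigInteger.ToByteArray() produces; toNeoBytes and reverse are our names for
illustration, not the real API:

```go
func toNeoBytes(x *big.Int) []byte {
	switch x.Sign() {
	case 0:
		return []byte{} // zero is an empty array on the VM side
	case 1:
		b := reverse(x.Bytes()) // little-endian magnitude
		if b[len(b)-1]&0x80 != 0 {
			b = append(b, 0x00) // keep the sign bit clear for positives
		}
		return b
	default:
		// Two's complement: x + 2^(8n) for the minimal n that fits x.
		n := x.BitLen()/8 + 1
		t := new(big.Int).Add(x, new(big.Int).Lsh(big.NewInt(1), uint(8*n)))
		b := reverse(t.Bytes()) // t always takes exactly n bytes
		// Drop a redundant trailing 0xFF if the sign is already clear.
		for len(b) > 1 && b[len(b)-1] == 0xFF && b[len(b)-2]&0x80 != 0 {
			b = b[:len(b)-1]
		}
		return b
	}
}

func reverse(b []byte) []byte {
	for i, j := 0, len(b)-1; i < j; i, j = i+1, j-1 {
		b[i], b[j] = b[j], b[i]
	}
	return b
}
```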
- add pingInterval, the same as used in the reference C# implementation, with
  the same logic
- add pingTimeout, which is used to check whether a pong was received; if not,
  drop the peer
- add pingLimit, which is hardcoded to 4 in TCPPeer; it's the limit of
  unsuccessful ping/pong calls (where a pong wasn't received within the
  pingTimeout interval)
It wasn't actually requesting transactions, but rather sending an inventory
message telling everyone that we have them, which is completely wrong and
easily leads to a ChangeView that could be avoided.
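In protocol terms (helper names are approximate):

```go
// Wrong: inv announces hashes we claim to have.
p.EnqueueMessage(NewMessage(CMDInv, payload.NewInventory(payload.TXType, hashes)))
// Right: getdata asks the peer to send us those hashes.
p.EnqueueMessage(NewMessage(CMDGetData, payload.NewInventory(payload.TXType, hashes)))
```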
When system and network pressure is high, it can be beneficial to reuse
transactions that were already proposed. The assumption is that they will be
in other nodes' memory pools with higher probability.
If the blockchain is not closed, logging in a defer can occur after the test
has finished, which will lead to a panic with "Log in goroutine after Test*
has completed".
There is no point in encoding the output of this function in the WIF format;
most users actually want the real key, and those who need a WIF can easily
get it from the key (and that's simpler than getting the key from the WIF).
It also fixes a severe bug in NEP2Decrypt: base58 decoding errors were not
processed correctly.
An error in Seek means something is terribly wrong (e.g. the DB was not
opened), and dropping the error is not the right thing to do, because the
caller will continue working with the wrong view.