Execution events are followed by block events, not vice versa, thus we can wait
until the VUB (ValidUntilBlock) block is accepted to be sure that the
transaction wasn't accepted to the chain.
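As an illustration of that ordering argument, here is a minimal Go sketch with
hypothetical subscription channels (awaitTx, execCh and blockCh are made up for
this example, they are not the real API):

    // execCh delivers the execution event for our transaction, blockCh delivers
    // indexes of accepted blocks. Since an execution event always precedes the
    // corresponding block event, seeing a block with index >= vub without a
    // prior execution event means the transaction can no longer be accepted.
    func awaitTx(vub uint32, execCh <-chan struct{}, blockCh <-chan uint32) bool {
        for {
            select {
            case <-execCh:
                return true // executed, hence accepted to the chain
            case index := <-blockCh:
                if index >= vub {
                    return false // VUB reached without execution
                }
            }
        }
    }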
Requesting addresses every 1000 blocks seems to be OK for big networks (which
previously only did some initial requests and then effectively never requested
addresses again because there was a sufficient number of them already), and it
won't hurt smaller ones either (which effectively keep doing this on every
connect/disconnect; peer changes are very rare there, but when they happen we
want to react to them quickly).
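A rough sketch of this heuristic with made-up names (onBlockAccepted,
requestAddresses):

    const addrRequestInterval = 1000 // blocks

    // requestAddresses stands for whatever sends a getaddr round to the peers.
    func onBlockAccepted(height uint32, requestAddresses func()) {
        if height%addrRequestInterval == 0 {
            requestAddresses()
        }
    }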
32 is a very good number, but we all know 42 is a better one. And it can even
be proven by tests with higher peak TPS values.
You may wonder why it is so good? Because we're mostly using packet-switching
networks and a packet is a packet almost irrespective of how big it is. Yet a
packet has some maximum possible size (hi, MTU) and this size most of the time
is 1500 (or a little less than that, hi VPN). Subtract the IP header (20 for
IPv4 or 40 for IPv6, not counting options), the TCP header (another 20) and the
Neo message/payload headers (~8 in this case) and we have just a little more
than 1400 bytes for our dear hashes. Which means that most of the time a single
packet can carry 42-44 of them, maybe 45. Choosing between these numbers is not
hard then.
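A back-of-the-envelope check of that arithmetic (header sizes are the usual
values, not taken from any particular code):

    const (
        mtu       = 1500
        ipHeader  = 40 // IPv6 without options; 20 for IPv4
        tcpHeader = 20
        msgHeader = 8 // approximate Neo message/payload overhead
        hashSize  = 32
    )

    // (1500 - 40 - 20 - 8) / 32 = 44 hashes per packet in the IPv6 case,
    // (1500 - 20 - 20 - 8) / 32 = 45 with IPv4, so 42-45 is the realistic range.
    const hashesPerPacket = (mtu - ipHeader - tcpHeader - msgHeader) / hashSize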
We have the closely related AttemptConnPeers: the more peers we attempt to
connect to, the bigger the network supposedly is, so it's a much better basis
than a magic minPoolCount.
When a block is being spread through the network we can get a lot of invs with
the same hash. Some stale nodes may also announce the previous or some earlier
block. We can avoid a full DB lookup for these hashes and minimize inv handling
time (timeouts in the inv handler happened in #2744).
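A minimal sketch of the shortcut, assuming some in-memory set of recently seen
hashes (handleInvHashes and seen are illustrative names):

    // Filter announced hashes against a cheap in-memory cache before doing any
    // DB lookups; only the remaining ones go through the DB/getdata path.
    func handleInvHashes(hashes [][32]byte, seen map[[32]byte]bool) [][32]byte {
        unknown := make([][32]byte, 0, len(hashes))
        for _, h := range hashes {
            if seen[h] {
                continue // we've just seen this one, skip the full DB lookup
            }
            unknown = append(unknown, h)
        }
        return unknown
    }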
It doesn't affect tests, it just makes the node a little less likely to spend a
considerable amount of time in the inv handler.
Sometimes we already have the transaction, but it's not yet processed, so we
can save on the getdata request. This only matters for very high-speed networks
like the 4-1 scenario, and even there the difference is small, but still we can
do it.
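Schematically (needTx and both predicates are made-up names), the check boils
down to:

    // getdata for a hash is only sent if it's neither in the mempool nor in the
    // queue of received-but-not-yet-processed transactions.
    func needTx(h [32]byte, inMempool, inUnprocessedQueue func([32]byte) bool) bool {
        return !inMempool(h) && !inUnprocessedQueue(h)
    }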
This is not exactly the protocol-level batching that was tried in #1770 and
proposed by neo-project/neo#2365; it's a TCP-level change in that we now
Write() a set of messages, and given that Go sets up TCP sockets with
TCP_NODELAY by default this is a substantial change: we generate fewer packets
for the same amount of data. It doesn't change anything on properly connected
networks, but the ones with delays benefit from it a lot.
This also improves queueing, because we no longer generate 32 messages to
deliver in response to a transaction GetData, it's just one stream of bytes
with 32 messages inside.
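A simplified sketch of what the change amounts to (Message and sendBatch are
stand-ins, not the real types):

    import (
        "bytes"
        "net"
    )

    // Message stands in for the real network message type.
    type Message interface {
        Encode(*bytes.Buffer) error
    }

    // Serialize all replies into one buffer and push them to the kernel with a
    // single Write(), so with TCP_NODELAY they don't each become their own
    // small packet.
    func sendBatch(conn net.Conn, msgs []Message) error {
        var buf bytes.Buffer
        for _, m := range msgs {
            if err := m.Encode(&buf); err != nil {
                return err
            }
        }
        _, err := conn.Write(buf.Bytes())
        return err
    }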
Do the same with GetBlocksByIndex, we can have a lot of messages there too. But
don't forget about potential peer DoS attacks: if a peer requests a lot of big
blocks, we need to flush them as we go rather than buffer the whole set before
sending.
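Schematically, with hypothetical helpers (getBlock, enqueue, flush) and an
illustrative flushEvery value:

    const flushEvery = 10 // illustrative, not taken from the code

    // Flushing inside the loop bounds the memory a single GetBlocksByIndex
    // request can make us buffer.
    func serveBlockRange(from, count uint32, getBlock func(uint32) ([]byte, error),
        enqueue func([]byte), flush func() error) error {
        for i := uint32(0); i < count; i++ {
            b, err := getBlock(from + i)
            if err != nil {
                return err
            }
            enqueue(b)
            if (i+1)%flushEvery == 0 {
                if err := flush(); err != nil {
                    return err
                }
            }
        }
        return flush()
    }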
This allows transaction processing to scale naturally if we have some peer that
is sending a lot of transactions while others are mostly silent. It can also
help somewhat in the event we have 50 peers that all send transactions. The 4+1
scenario benefits a lot from it, while 7+2 slows down a little. Delayed
scenarios don't care.
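A rough model of the idea (peerTxQueue and Transaction are stand-ins, the queue
size is illustrative):

    // Transaction stands in for the real transaction type.
    type Transaction struct{}

    // Each peer gets its own bounded queue and its own handler goroutine, so a
    // very active peer doesn't have to wait behind others and mostly silent
    // peers cost nothing extra.
    type peerTxQueue struct {
        txes chan *Transaction
    }

    func newPeerTxQueue(handle func(*Transaction)) *peerTxQueue {
        q := &peerTxQueue{txes: make(chan *Transaction, 32)}
        go func() {
            for tx := range q.txes {
                handle(tx)
            }
        }()
        return q
    }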
Surprisingly, this also makes disconnects (#2744) much rarer, the 4-node
scenario almost never sees them now. Most probably this is a case where peers
affect each other a lot: a single-threaded transaction receiver can be slow
enough to trigger a timeout in the getdata handler of its peer (because it
tries to push a number of replies).