It makes sense in general (it further narrows down the time window when
transactions are processed by the consensus thread) and it improves block
times a little too, especially in the 7+2 scenario.
Related to #2744.
They can fail right in getPeers or they can fail later, when a packet send
is attempted. Of course, they can complete the handshake in between these
events, but most likely they won't, and we'll waste more resources on this
attempt. So rule out bad peers immediately.
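A minimal sketch of that early filtering, with a hypothetical Peer interface (Handshaked and EnqueueMessage are illustrative names, not the actual node API):

```go
package network

// Peer is a hypothetical connection abstraction; only the handshake
// state matters for this sketch.
type Peer interface {
	// Handshaked reports whether the version/verack exchange has completed.
	Handshaked() bool
	// EnqueueMessage queues a packet for sending.
	EnqueueMessage(msg []byte) error
}

// handshakedPeers drops peers that haven't finished the handshake, so a
// broadcast doesn't waste a send attempt on a connection that is likely
// to fail anyway.
func handshakedPeers(peers []Peer) []Peer {
	good := make([]Peer, 0, len(peers))
	for _, p := range peers {
		if p.Handshaked() {
			good = append(good, p)
		}
	}
	return good
}
```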
Some tests are failing on Windows due to slow runners with errors like the following:
```
2022-02-09T17:11:20.3127016Z --- FAIL: TestGetData/transaction (1.82s)
2022-02-09T17:11:20.3127385Z server_test.go:500:
2022-02-09T17:11:20.3127878Z Error Trace: server_test.go:500
2022-02-09T17:11:20.3128533Z server_test.go:520
2022-02-09T17:11:20.3128978Z Error: Condition never satisfied
2022-02-09T17:11:20.3129479Z Test: TestGetData/transaction
```
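One common mitigation is to stretch the polling deadline on slow platforms. A sketch, assuming the tests poll via testify's require.Eventually; the helper name and timeout values are illustrative:

```go
package server

import (
	"runtime"
	"testing"
	"time"

	"github.com/stretchr/testify/require"
)

// eventually polls cond with a deadline that is stretched on Windows,
// where CI runners can be too slow for the default timeout.
func eventually(t *testing.T, cond func() bool) {
	timeout := 2 * time.Second
	if runtime.GOOS == "windows" {
		timeout = 10 * time.Second // slow runners need more headroom
	}
	require.Eventually(t, cond, timeout, 10*time.Millisecond)
}
```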
Consensus can require conflicting transactions, and it can require more
transactions than the mempool can fit; all of this should work. Transactions
will be checked anyway, using the consensus service's secondary mempool. See
the scenario from #668.
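Roughly, the idea could look like the following sketch, where all types and method names (getVerifiedTx, TryGet, the pools) are hypothetical stand-ins rather than the real consensus code:

```go
package consensus

// Hash and Transaction stand in for the real types.
type Hash [32]byte

type Transaction struct{ hash Hash }

// pool is a minimal mempool abstraction.
type pool interface {
	TryGet(Hash) (*Transaction, bool)
	Add(*Transaction) error // verifies and inserts
}

type service struct {
	mainPool      pool
	secondaryPool pool
	fetchTx       func(Hash) (*Transaction, error)
}

// getVerifiedTx returns a transaction usable for the proposed block even
// if it conflicts with the main mempool contents or doesn't fit into it:
// such transactions are verified through the consensus-local (secondary)
// pool, which is independent of the main one.
func (s *service) getVerifiedTx(h Hash) (*Transaction, error) {
	if tx, ok := s.mainPool.TryGet(h); ok {
		return tx, nil
	}
	tx, err := s.fetchTx(h) // ask peers or local storage
	if err != nil {
		return nil, err
	}
	if err := s.secondaryPool.Add(tx); err != nil {
		return nil, err // failed verification in the secondary pool
	}
	return tx, nil
}
```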
Notice that it makes the node accept Extensible payloads with any category,
which is the way the C# node works. We're trusting Extensible senders;
improper payloads are harmless unless they're used to DoS the network, and we
have some protections against that too (besides, spamming with a proper
category doesn't differ much).
Most of the time on a healthy network we see new transactions appearing that
are not present in the mempool. Once they get into the mempool, we don't ask
for them again when some other peer sends an Inv with them. Then these
transactions are usually added into a block, removed from the mempool, and no
one actually sends them to us again. Some stale nodes can do that, but it's
not very likely to happen.
At the receiving end, at the same time, it's quite expensive to do a
full-chain HasTransaction() query, so if we can avoid doing that, it's always
good. Here it technically allows resending an old transaction that will be
re-requested, and an attempt to add it to the mempool will be made. But it'll
inevitably fail, because the same HasTransaction() check is done there too.
One can try to maliciously flood the node with stale transactions, but that
doesn't differ from flooding it with any other invalid transactions, so no
new attack vector is added.
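The resulting lookup order might be sketched like this; the mempool and chain interfaces and the function names are illustrative stand-ins, but the cheap-check-first logic is the one described above:

```go
package network

import "errors"

// Hash stands in for a 256-bit transaction hash.
type Hash [32]byte

// Transaction stands in for the real transaction type.
type Transaction struct{ Hash Hash }

type mempool interface {
	ContainsKey(Hash) bool // cheap in-memory lookup
}

type chain interface {
	HasTransaction(Hash) bool // expensive persistent-storage lookup
}

// needTx decides whether an Inv entry should be requested from a peer:
// only the cheap mempool check runs here, and the full-chain
// HasTransaction() query is skipped entirely.
func needTx(mp mempool, h Hash) bool {
	return !mp.ContainsKey(h)
}

// addToMempool is where the authoritative check still lives, so a stale
// transaction that slips past needTx is rejected here anyway.
func addToMempool(mp mempool, bc chain, tx *Transaction) error {
	if bc.HasTransaction(tx.Hash) {
		return errors.New("transaction already on chain")
	}
	// ... actual verification and pool insertion would follow ...
	return nil
}
```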
Baseline, 4 nodes with 10 workers:
```
RPS    6902.296 6465.662 6856.044 6785.515 6157.024 ≈ 6633 ± 4.26%
TPS    6468.431 6218.867 6610.565 6288.596 5790.556 ≈ 6275 ± 4.44%
CPU %  50.231   42.925   49.481   48.396   42.662   ≈ 46.7 ± 7.01%
Mem MB 2856.841 2684.103 2756.195 2733.485 2422.787 ≈ 2691 ± 5.40%
```
Patched:
```
RPS    7176.784 7014.511 6139.663 7191.280 7080.852 ≈ 6921 ± 5.72% ↑ 4.34%
TPS    6945.409 6562.756 5927.050 6681.187 6821.794 ≈ 6588 ± 5.38% ↑ 4.99%
CPU %  44.400   43.842   40.418   49.211   49.370   ≈ 45.4 ± 7.53% ↓ 2.78%
Mem MB 2693.414 2640.602 2472.007 2731.482 2707.879 ≈ 2649 ± 3.53% ↓ 1.56%
```
It's only used to sign/verify the payload and is not a part of the
structure. It's still needed in consensus.Payload, though, because that's how
the dbft library interface works.
It's not network-tied any more; the network magic is only needed to
sign/verify. Unfortunately, we still have to keep the network in consensus
data structures because of the dbft library interface.
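For illustration, signing could be confined to a helper like the one below; the exact byte layout and the function name are assumptions, the point is only that the network magic enters at the signing step and nowhere else:

```go
package consensus

import "encoding/binary"

// signedData builds the byte string that is actually signed/verified:
// the network magic enters here, so the payload structure itself stays
// network-independent. The little-endian uint32-plus-hash layout is
// illustrative.
func signedData(netMagic uint32, payloadHash [32]byte) []byte {
	buf := make([]byte, 4+len(payloadHash))
	binary.LittleEndian.PutUint32(buf, netMagic)
	copy(buf[4:], payloadHash[:])
	return buf
}
```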
Prices are defined as coefficients of `BaseExecFee`, which is defined by the
Policy contract (TBD later).
Native method prices are defined directly, without the need to multiply.
Closes #1192.
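For example, the multiplication could look like this sketch; the opcodes, coefficient values and names are illustrative, only the coefficient-times-BaseExecFee formula comes from the description above:

```go
package core

// Opcode coefficients, not absolute prices: the actual fee charged for
// an opcode is its coefficient multiplied by the Policy contract's
// BaseExecFee. Opcodes and values below are illustrative, not the real
// tables.
var opcodeCoefficients = map[byte]int64{
	0x11: 1,       // a cheap PUSH-like opcode
	0x41: 1 << 15, // an expensive SYSCALL-like opcode
}

// opcodePrice returns the fee for an opcode under the current
// BaseExecFee; native method prices would be returned as-is, with no
// multiplication.
func opcodePrice(op byte, baseExecFee int64) int64 {
	return opcodeCoefficients[op] * baseExecFee
}
```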
1. We now have CMDGetBlockByIndex, so there's no need to request headers
first when we can just ask for blocks.
2. We don't ask for headers (i.e. we don't send CMDGetHeaders);
consequently, we shouldn't react to CMDHeaders.
3. But we still keep reacting to the CMDGetHeaders command, as
there could be a node that needs headers (a dispatch sketch follows this list).
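A hypothetical dispatch reflecting these three points; the command values, names and the server type are illustrative:

```go
package network

import "errors"

// Command stands in for the real P2P command type; the constants'
// values are illustrative.
type Command byte

const (
	CMDGetHeaders      Command = 0x20
	CMDHeaders         Command = 0x21
	CMDGetBlockByIndex Command = 0x10
)

type server struct{}

func (s *server) serveHeaders(p []byte) error { return nil } // stub
func (s *server) serveBlocks(p []byte) error  { return nil } // stub

// handleMessage sketches the resulting dispatch: header requests are
// still served for peers that need them, incoming CMDHeaders is dropped
// because we never send CMDGetHeaders ourselves, and syncing goes
// through CMDGetBlockByIndex.
func (s *server) handleMessage(cmd Command, payload []byte) error {
	switch cmd {
	case CMDGetHeaders:
		return s.serveHeaders(payload) // other nodes may still need headers
	case CMDHeaders:
		return nil // we never asked for these, ignore
	case CMDGetBlockByIndex:
		return s.serveBlocks(payload)
	default:
		return errors.New("unhandled command")
	}
}
```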