Refactor the code and be fine with sending to just 2/3 of proper peers. Previously
it was an edge case, but now it can also be the normal thing to do, since
broadcasting to everyone is obviously too expensive and excessive (hi, #608).
Baseline (four nodes, 10 workers):
RPS 8180.760 8137.822 7858.358 7820.011 8051.076 ≈ 8010 ± 2.04%
TPS 7819.831 7521.172 7519.023 7242.965 7426.000 ≈ 7506 ± 2.78%
CPU % 41.983 38.775 40.606 39.375 35.537 ≈ 39.3 ± 6.15%
Mem MB 2947.189 2743.658 2896.688 2813.276 2863.108 ≈ 2853 ± 2.74%
Patched:
RPS 9714.567 9676.102 9358.609 9371.408 9301.372 ≈ 9484 ± 2.05% ↑ 18.40%
TPS 8809.796 8796.854 8534.754 8661.158 8426.162 ≈ 8646 ± 1.92% ↑ 15.19%
CPU % 44.980 45.018 33.640 29.645 43.830 ≈ 39.4 ± 18.41% ↑ 0.25%
Mem MB 2989.078 2976.577 2306.185 2351.929 2910.479 ≈ 2707 ± 12.80% ↓ 5.12%
There is a nuance with this patch, however. While it typically works the way
outlined above, sometimes it works like this:
RPS ≈ 6734.368
TPS ≈ 6299.332
CPU ≈ 25.552%
Mem ≈ 2706.046MB
And that's because the log looks like this:
DeltaTime, TransactionsCount, TPS
5014, 44212, 8817.710
5163, 49690, 9624.249
5166, 49523, 9586.334
5189, 49693, 9576.604
5198, 49339, 9491.920
5147, 49559, 9628.716
5192, 49680, 9568.567
5163, 49750, 9635.871
5183, 49189, 9490.450
5159, 49653, 9624.540
5167, 47945, 9279.079
5179, 2051, 396.022
5015, 4, 0.798
5004, 0, 0.000
5003, 0, 0.000
5003, 0, 0.000
5003, 0, 0.000
5003, 0, 0.000
5004, 0, 0.000
5003, 2925, 584.649
5040, 49099, 9741.865
5161, 49718, 9633.404
5170, 49228, 9521.857
5179, 49773, 9610.543
5167, 47253, 9145.152
5202, 49788, 9570.934
5177, 47704, 9214.603
5209, 46610, 8947.975
5249, 49156, 9364.831
5163, 18284, 3541.352
5072, 174, 34.306
On a network with 4 CNs and 1 RPC node there is a 1/256 probability that a block
won't be broadcast to the RPC node (presumably because each of the four CNs
independently picks 3 of its 4 peers, skipping the RPC node with probability
1/4, and (1/4)^4 = 1/256), so it won't see the block until the ping timeout
kicks in. While it doesn't see the block it can't accept new incoming
transactions, so the benchmark basically gets stuck. To me that's an acceptable
trade-off, because normal networks are much larger than that and the effect of
this patch is far more important there, but still that's what we have and we
need to take it into account.
send() can return errStateMismatch, errGone and errBusy. errGone means the
peer is dead and will never be active again, so it makes no sense to retry
sends to it. errStateMismatch is technically "not yet ready", but we can't
wait for it either: no one knows how long it will take to complete the
handshake. So only errBusy means we can retry.
So keep track of dead peers and adjust the try counting appropriately. It
doesn't change much, we can't magically get more valid peers, and if some die
while we're iterating we'll detect that via the error returned from send().
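A minimal sketch of that accounting, assuming a simplified peer interface and error values named as above (illustrative only, not the actual server code):

```go
package network

import "errors"

// Assumed error values and peer shape, named after the ones described above.
var (
	errBusy          = errors.New("peer is busy")
	errGone          = errors.New("peer is gone")
	errStateMismatch = errors.New("handshake not yet completed")
)

type peer interface {
	send(msg []byte) error
}

// trySend attempts to deliver msg to at least wanted peers, retrying only
// the ones that returned errBusy and writing off dead/unready ones.
func trySend(peers []peer, msg []byte, wanted, maxTries int) {
	var (
		done  = make([]bool, len(peers)) // sent successfully or written off
		nSent int
		nDead int
	)
	for t := 0; t < maxTries && nSent < wanted && nSent+nDead < len(peers); t++ {
		for i, p := range peers {
			if done[i] || nSent >= wanted {
				continue
			}
			switch err := p.send(msg); {
			case err == nil:
				done[i] = true
				nSent++
			case errors.Is(err, errBusy):
				// The only error worth retrying on the next pass.
			default:
				// errGone or errStateMismatch: retrying won't help,
				// count the peer out so the loop can terminate.
				done[i] = true
				nDead++
			}
		}
	}
}
```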
When a transaction spreads through the network, many nodes are likely to get
it at roughly the same time, and they will also rebroadcast it at roughly the
same time. As we have a number of peers, it's quite likely that we'll get an
Inv with the same transaction from multiple peers simultaneously. We will ask
them for this transaction (independently!) and, again, we're likely to get it
at roughly the same time. So we can easily end up with multiple threads
processing the same transaction. Only one will succeed, but we can easily
avoid doing that in the first place, saving some CPU cycles for other things.
Notice that we can't do that _before_ receiving a transaction, because nothing
guarantees that the peer will respond to our transaction request, so the
communication overhead is unavoidable at the moment, but saving on processing
already gives quite interesting results.
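A minimal sketch of one way to avoid the duplicated work, assuming a mutex-protected set of in-flight hashes (the type and method names are illustrative, not the actual server code):

```go
package network

import (
	"sync"

	"github.com/nspcc-dev/neo-go/pkg/util"
)

// txInFlight tracks transaction hashes that are currently being processed,
// so concurrent handlers for the same transaction can bail out early.
type txInFlight struct {
	lock   sync.Mutex
	hashes map[util.Uint256]struct{}
}

func newTxInFlight() *txInFlight {
	return &txInFlight{hashes: make(map[util.Uint256]struct{})}
}

// tryStart returns false if the hash is already being handled elsewhere.
func (t *txInFlight) tryStart(h util.Uint256) bool {
	t.lock.Lock()
	defer t.lock.Unlock()
	if _, ok := t.hashes[h]; ok {
		return false
	}
	t.hashes[h] = struct{}{}
	return true
}

// done removes the hash once processing (successful or not) is finished.
func (t *txInFlight) done(h util.Uint256) {
	t.lock.Lock()
	delete(t.hashes, h)
	t.lock.Unlock()
}
```

A transaction handler would call tryStart() before verification and done() afterwards, dropping the message immediately if tryStart() returns false.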
Baseline, four nodes with 10 workers:
RPS 7176.784 7014.511 6139.663 7191.280 7080.852 ≈ 6921 ± 5.72%
TPS 6945.409 6562.756 5927.050 6681.187 6821.794 ≈ 6588 ± 5.38%
CPU % 44.400 43.842 40.418 49.211 49.370 ≈ 45.4 ± 7.53%
Mem MB 2693.414 2640.602 2472.007 2731.482 2707.879 ≈ 2649 ± 3.53%
Patched:
RPS 7791.675 7996.559 7834.504 7746.705 7891.614 ≈ 7852 ± 1.10% ↑ 13.45%
TPS 7241.497 7711.765 7520.211 7425.890 7334.443 ≈ 7447 ± 2.17% ↑ 13.04%
CPU % 29.853 39.936 39.945 36.371 39.999 ≈ 37.2 ± 10.57% ↓ 18.06%
Mem MB 2749.635 2791.609 2828.610 2910.431 2863.344 ≈ 2829 ± 1.97% ↑ 6.80%
Most of the time on a healthy network we see new transactions appearing that
are not present in the mempool. Once they get into the mempool we don't ask for
them again when some other peer sends an Inv with them. Then these transactions
are usually added into a block, removed from the mempool, and no one actually
sends them to us again. Some stale nodes can do that, but it's not very likely
to happen.
At the same time, on the receiving end it's quite expensive to do a full chain
HasTransaction() query, so if we can avoid doing that it's always good. Here it
technically allows resending an old transaction that will be re-requested and
an attempt to add it to the mempool will be made. But it'll inevitably fail,
because the same HasTransaction() check is done there too. One can try to
maliciously flood the node with stale transactions, but that doesn't differ
from flooding it with any other invalid transactions, so no new attack vector
is added.
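A hedged sketch of the Inv-side check, assuming a minimal mempool interface (ContainsKey here is an assumed name; the point is only that the check is in-memory and cheap):

```go
package network

import "github.com/nspcc-dev/neo-go/pkg/util"

// Assumed minimal mempool shape: an in-memory lookup, no DB access.
type txPool interface {
	ContainsKey(h util.Uint256) bool
}

// filterTxInv returns the hashes worth requesting from the peer. Only the
// cheap mempool check is done here; the expensive full-chain
// HasTransaction() check happens later, when the received transaction is
// actually being added to the mempool.
func filterTxInv(pool txPool, hashes []util.Uint256) []util.Uint256 {
	res := make([]util.Uint256, 0, len(hashes))
	for _, h := range hashes {
		if !pool.ContainsKey(h) {
			res = append(res, h)
		}
	}
	return res
}
```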
Baseline, 4 nodes with 10 workers:
RPS 6902.296 6465.662 6856.044 6785.515 6157.024 ≈ 6633 ± 4.26%
TPS 6468.431 6218.867 6610.565 6288.596 5790.556 ≈ 6275 ± 4.44%
CPU % 50.231 42.925 49.481 48.396 42.662 ≈ 46.7 ± 7.01%
Mem MB 2856.841 2684.103 2756.195 2733.485 2422.787 ≈ 2691 ± 5.40%
Patched:
RPS 7176.784 7014.511 6139.663 7191.280 7080.852 ≈ 6921 ± 5.72% ↑ 4.34%
TPS 6945.409 6562.756 5927.050 6681.187 6821.794 ≈ 6588 ± 5.38% ↑ 4.99%
CPU % 44.400 43.842 40.418 49.211 49.370 ≈ 45.4 ± 7.53% ↓ 2.78%
Mem MB 2693.414 2640.602 2472.007 2731.482 2707.879 ≈ 2649 ± 3.53% ↓ 1.56%
Network communication takes time. Handling some messages (like transactions)
also takes time. We can overlap the two by making the handler a separate
goroutine, so while a message is being handled the receiver can already get
and parse the next one.
It doesn't improve the metrics a lot, but I still think it makes sense, and in
some scenarios this can be more beneficial than it is here.
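A minimal sketch of the pipelining, assuming a placeholder message type and a channel-based queue (the real code structure may differ):

```go
package network

// Assumed placeholder types: a decoded message and its handler.
type message struct{ payload []byte }

func handleMessage(m message) {
	// Transaction verification, mempool insertion, etc. would happen here.
}

// startPeerLoop splits receiving/parsing and handling into two goroutines:
// while one message is being handled, the next one can already be read
// from the wire and decoded.
func startPeerLoop(recv func() (message, error)) {
	queue := make(chan message, 32) // small buffer to absorb bursts
	go func() {
		for m := range queue {
			handleMessage(m)
		}
	}()
	go func() {
		defer close(queue)
		for {
			m, err := recv()
			if err != nil {
				return
			}
			queue <- m
		}
	}()
}
```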
Baseline (e41fc2fd1b, 4 nodes, 10 workers):
RPS 6732.979 6396.160 6759.624 6246.398 6589.841 ≈ 6545 ± 3.02%
TPS 6491.062 5984.190 6275.652 5867.477 6360.797 ≈ 6196 ± 3.77%
CPU % 42.053 43.515 44.768 40.344 44.112 ≈ 43.0 ± 3.69%
Mem MB 2564.130 2744.236 2636.267 2589.505 2765.926 ≈ 2660 ± 3.06%
Patched:
RPS 6902.296 6465.662 6856.044 6785.515 6157.024 ≈ 6633 ± 4.26% ↑ 1.34%
TPS 6468.431 6218.867 6610.565 6288.596 5790.556 ≈ 6275 ± 4.44% ↑ 1.28%
CPU % 50.231 42.925 49.481 48.396 42.662 ≈ 46.7 ± 7.01% ↑ 8.60%
Mem MB 2856.841 2684.103 2756.195 2733.485 2422.787 ≈ 2691 ± 5.40% ↑ 1.17%
Asynchronous tryAddress() routines may get the dial result AFTER the switch to
another test, so we need to ensure that they get the result intended for this
particular call. Fixes:
2021-07-07T20:25:40.1624521Z === RUN TestDefaultDiscoverer
2021-07-07T20:25:40.1625316Z discovery_test.go:159: timeout expecting for transport dial; i: 2, j: 1
2021-07-07T20:25:40.1626319Z --- FAIL: TestDefaultDiscoverer (1.19s)
We don't care much about dialing, but the same constant is used in the outer
discoverer loop in case no connections are established.
It's only used to sign/verify it and is not a part of the structure. It's
still needed in consensus.Payload though, because that's the way the dbft
library works.
It's not network-tied any more, the network is only needed to
sign/verify. Unfortunately we still have to keep the network in consensus data
structures because of the dbft library interface.
There was a deadlock while trying to finalize a transaction during
PostBlock:
1) (*Notary).PostBlock is called under the blockchain lock
2) (*Notary).onTransaction is called inside PostBlock
3) (*Notary).onTransaction needs to RLock the blockchain to add the
completed transaction to the memory pool (and the blockchain is Lock'ed
by this moment)
The problem is fixed by using the notification subsystem, because it's not
required to call (*Notary).PostBlock under the blockchain lock.
And stop dropping connections if we're to receive them. Proper handling is the
subject of #1701, but we need at least some connection-level stability for
now.
A node receiving an extensible payload from the future gets confused and drops
the connection. Note that this can still happen if the node is to lose its
synchrony.
Calling `IsInSync()` is quite expensive, so we stop doing that once synchrony
is reached (hence the bool flag).
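A minimal sketch of that flag, assuming a simplified chain interface (the names are illustrative, not the real fields):

```go
package network

// Assumed minimal chain shape.
type ledger interface {
	IsInSync() bool
}

// syncTracker remembers that synchrony has been reached so the expensive
// IsInSync() call is not repeated afterwards. Losing synchrony later is
// not detected here, which matches the caveat above.
type syncTracker struct {
	chain  ledger
	inSync bool
}

func (s *syncTracker) isInSync() bool {
	if s.inSync {
		return true
	}
	s.inSync = s.chain.IsInSync()
	return s.inSync
}
```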
1. Initialization is performed via `Blockchain` methods.
2. Native Oracle contract updates list of oracle nodes
and in-fly requests in `PostPersist`.
3. RPC uses Oracle module directly.
1) It duplicates the registration in the `version` message handler, and no
valid connection can work without version exchange.
2) On public networks we have seed nodes defined by names, so we register
connections to them using these names, but then if a connection is dropped we
delist them by IP:PORT combinations, which can lead to a zero PeerCount() with
all seeds still being registered as connected in the discovery subsystem
and thus no reconnection attempts being made.
It could be the case that checks are performed simultaneously and the number
of peer connections goes down from 2 to 0. We must take such a case into
account and register the address as good in discovery.
Right now a single slow peer can slow down the whole network.
Do the broadcast in 2 parts (sketched below):
1. Perform a non-blocking send to all peers if possible.
2. Perform blocking sends until the message is sent to 2/3 of good peers.
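A hedged sketch of that two-phase loop, assuming a peer type with separate non-blocking and blocking sends (the names are illustrative, not the actual server code):

```go
package network

// Assumed peer shape: TrySend fails fast when the peer's queue is full,
// Send blocks until the message is queued.
type bcastPeer interface {
	TrySend(msg []byte) error
	Send(msg []byte) error
}

// broadcast first offers msg to everyone without blocking, then falls back
// to blocking sends until at least 2/3 of peers have it, so a single slow
// peer can no longer stall the whole loop.
func broadcast(peers []bcastPeer, msg []byte) {
	var (
		sent    int
		wanted  = 2*len(peers)/3 + 1
		pending []bcastPeer
	)
	for _, p := range peers {
		if p.TrySend(msg) == nil {
			sent++
		} else {
			pending = append(pending, p)
		}
	}
	for _, p := range pending {
		if sent >= wanted {
			break
		}
		if p.Send(msg) == nil {
			sent++
		}
	}
}
```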
Prices are defined as coefficients to `BaseExecFee`, which is defined by the
Policy contract (TBD later). Native method prices are defined without the need
to multiply.
It happens from time to time in a four-node private network where there are
seeds (aka CNs) and not a lot of other nodes to connect to.
I don't know how to test for an infinite loop that has no side-effects, so no
test added here.
If the node starts with its seeds unavailable, it will try connecting to each
of them three times, blacklist them and then sit forever waiting for
something. That's not good behavior, it should always try connecting to seeds
if nothing else works.
Now we have VerifyTx() and PoolTx() APIs that either verify a transaction in
isolation or verify it against the mempool (either the primary one or the one
given) and then add it there. There is no way to check against the mempool
without adding the transaction to it, but I doubt we really need one.
It allows removing some duplication between the old PoolTx and verifyTx, where
both tried to check the transaction against the mempool (verifying first and
then adding). It also saves us the utility token balance check, because it's
done by the mempool anyway and we no longer need to do it explicitly in
verifyTx.
It makes the transaction checks in AddBlock() and verifyBlock() more correct,
because previously they could miss that even though sender S has enough
balance to pay for A, B or C, he can't pay for all of them.
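A hedged sketch of the intended split between the two calls (the signatures are assumptions for illustration, not necessarily the exact `Blockchain` methods):

```go
package core

import (
	"github.com/nspcc-dev/neo-go/pkg/core/mempool"
	"github.com/nspcc-dev/neo-go/pkg/core/transaction"
)

// Assumed shape of the two entry points described above.
type txVerifier interface {
	// VerifyTx checks the transaction in isolation, without touching
	// any mempool.
	VerifyTx(t *transaction.Transaction) error
	// PoolTx verifies the transaction against the given pool (or the
	// primary one if none is given) and adds it there. The pool tracks
	// cumulative sender fees, so the utility token balance check isn't
	// duplicated in the verifier.
	PoolTx(t *transaction.Transaction, pools ...*mempool.Pool) error
}

// relayTx is how a network/RPC caller would typically use it: one call
// both validates the transaction and puts it into the mempool.
func relayTx(bc txVerifier, t *transaction.Transaction) error {
	return bc.PoolTx(t)
}
```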
Caveats:
* consensus is running concurrently to other processes, so things could
change while verifyBlock() is iterating over transactions, this will be
mitigated in subsequent commits
Improves the TPS value for a single node by at least 11%.
Fixes #667, fixes #668.
The GetBlockByIndex handler starts sending blocks right from the start index,
and if that index is s.chain.BlockHeight() then we're requesting and receiving
a block we already have.
Turns out, the C# node no longer broadcasts an Inv when it's creating a block;
instead it sends a ping, and if we're not paying attention to the height
specified there we're technically missing a new block. Of course we'll get it
later, after the ping timer expires and the regular ping/pong sequence happens,
but that delays it for no good reason.
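A hedged sketch of the height check on ping (the payload field and helper names are assumptions used for illustration):

```go
package network

// Assumed minimal shapes of the ping payload and the chain.
type pingPayload struct {
	LastBlockIndex uint32
}

type chain interface {
	BlockHeight() uint32
}

// handlePing looks at the height advertised by the peer: if it's ahead of
// ours, there is a new block we haven't seen, so request it right away
// instead of waiting for the regular ping/pong cycle.
func handlePing(c chain, p pingPayload, requestBlocks func(from uint32)) {
	if h := c.BlockHeight(); p.LastBlockIndex > h {
		requestBlocks(h + 1)
	}
}
```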
It no longer depends on the blockchain state and there can't ever be an error;
in fact, we can always iterate over signers, so copying these hashes doesn't
make much sense, and neither does sorting arrays in verifyTxWitnesses (the
witnesses order must match the signers order).
It's not needed any more with Go 1.13, as we have wrapping/unwrapping in the
standard library. All errors.Wrap calls are replaced with fmt.Errorf, and some
strings are improved along the way.
Closes #1192
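An illustrative example of the replacement (not a specific hunk from the change), showing that callers can still match the underlying error via the standard library:

```go
package core

import (
	"errors"
	"fmt"
)

var errKeyNotFound = errors.New("key not found")

// Before: return pkgerrors.Wrap(err, "failed to get block")
// After, with Go 1.13+ wrapping built into fmt:
func wrapGetBlockError(err error) error {
	return fmt.Errorf("failed to get block: %w", err)
}

// Unwrapping works without github.com/pkg/errors:
func isMissing(err error) bool {
	return errors.Is(err, errKeyNotFound)
}
```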
1. We now have CMDGetBlockByIndex, so there's no need to request headers
first when we can just ask for blocks.
2. We don't ask for headers (i.e. we don't send CMDGetHeaders), so,
consequently, we shouldn't react to CMDHeaders.
3. But we still keep reacting to the CMDGetHeaders command, as there could be
a node that needs headers.
It returned an error if a block wasn't found (which can happen when our chain
is lower). Fixed. It also should return all requested blocks, not just the
first one.
We can't lock them (or there will be a deadlock), but we need to fix this:
fatal error: concurrent map iteration and map write
goroutine 1 [running]:
runtime.throw(0xdec086, 0x26)
/usr/lib64/go/1.12/src/runtime/panic.go:617 +0x72 fp=0xc02fec2bf8 sp=0xc02fec2bc8 pc=0x42d932
runtime.mapiternext(0xc02fec2d40)
/usr/lib64/go/1.12/src/runtime/map.go:860 +0x597 fp=0xc02fec2c80 sp=0xc02fec2bf8 pc=0x40efe7
github.com/nspcc-dev/neo-go/pkg/network.(*Server).Shutdown(0xc0000fc160)
/home/rik/dev/neo-go2/pkg/network/server.go:194 +0x238 fp=0xc02fec2db0 sp=0xc02fec2c80 pc=0xa89da8
github.com/nspcc-dev/neo-go/cli/server.startServer(0xc0000fcc60, 0x0, 0x0)
/home/rik/dev/neo-go2/cli/server/server.go:399 +0x7a9 fp=0xc02fec3820 sp=0xc02fec2db0 pc=0xae2079
...
GetValidators without a parameter is called upon DBFT initialization and
should return the validators for the next block (the ones that will create it);
the parameterized GetValidators is used for the NextConsensus calculation,
where we need a list for the current state of the chain.
In order to avoid a dependency cycle in the next commits:
imports github.com/nspcc-dev/neo-go/pkg/config
imports github.com/nspcc-dev/neo-go/pkg/wallet
imports github.com/nspcc-dev/neo-go/pkg/vm
imports github.com/nspcc-dev/neo-go/pkg/smartcontract/nef
imports github.com/nspcc-dev/neo-go/pkg/config
There is no such thing as high/low priority transactions, as there are no
free transactions any more and transactions are ordered by the fees contained
in the transaction itself.
Closes #1063.
We make it explicit in the appropriate Block/Transaction structures, not via a
singleton as the C# node does. I think this approach has a bit more potential
and allows better package reuse for different purposes.
Turns out, Neo uses block compression and not the streaming lz4 format. And it
doesn't contain the uncompressed payload size, which makes guessing it somewhat
suboptimal.
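A hedged sketch of what dealing with that looks like, assuming github.com/pierrec/lz4 as the library and a guessed initial buffer size (exactly the suboptimal guessing described above):

```go
package payload

import (
	"errors"

	"github.com/pierrec/lz4/v4"
)

// decompress unpacks an lz4 block (not a streaming frame). The payload
// carries no uncompressed size, so we can only guess a buffer and grow it
// until decompression succeeds or the limit is hit.
func decompress(src []byte, limit int) ([]byte, error) {
	if len(src) == 0 {
		return nil, errors.New("empty payload")
	}
	for guess := 4 * len(src); guess <= limit; guess *= 2 {
		dst := make([]byte, guess)
		if n, err := lz4.UncompressBlock(src, dst); err == nil {
			return dst[:n], nil
		}
	}
	return nil, errors.New("can't decompress payload within the limit")
}
```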
They actually use the same types as for messages. Fixes:
2020-05-29T00:06:17.593+0300 WARN peer disconnected {"addr": "168.62.167.190:20333", "reason": "handling CMDInv message: invalid inventory type", "peerCount": 3}
Get new blocks directly from the Blockchain. It may lead to some duplication
(as we'll also receive our own blocks), but at the same time it's more correct,
because technically we can also get blocks via means other than the network
server, like RPC (the submitblock call). And it simplifies the network server
at the same time.