Commit graph

211 commits

Author SHA1 Message Date
Roman Khimov
5aff82aef4
Merge pull request #2119 from nspcc-dev/states-exchange/insole
core, network: prepare basis for Insole module
2021-08-12 10:35:02 +03:00
Anna Shaleva
6ca7983be8 network: fix typo in error message 2021-08-10 11:00:39 +03:00
Roman Khimov
7bb82f1f99 network: merge two loops in iteratePeersWithSendMsg, send to 2/3
Refactor the code and be fine with sending to just 2/3 of proper peers. Previously
that was an edge case, but it can also be a normal thing to do, since broadcasting
to everyone is obviously too expensive and excessive (hi, #608).
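
A minimal Go sketch of the merged loop with the 2/3 threshold (the Peer
interface and all names here are simplified stand-ins, not NeoGo's actual
code):

```go
package network

// Peer is a stand-in for the real network peer abstraction.
type Peer interface {
	EnqueueP2PMessage(msg []byte) error
}

// broadcast delivers msg in a single pass, counting successful sends and
// stopping once 2/3 of the peers have the message.
func broadcast(peers []Peer, msg []byte) bool {
	enough := (2*len(peers) + 2) / 3 // ceil(2N/3)
	sent := 0
	for _, p := range peers {
		if p.EnqueueP2PMessage(msg) == nil {
			sent++
			if sent >= enough {
				return true // 2/3 of proper peers got it, that's fine
			}
		}
	}
	return false
}
```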

Baseline (four nodes, 10 workers):

RPS    8180.760 8137.822 7858.358 7820.011 8051.076 ≈ 8010   ± 2.04%
TPS    7819.831 7521.172 7519.023 7242.965 7426.000 ≈ 7506   ± 2.78%
CPU %    41.983   38.775   40.606   39.375   35.537 ≈   39.3 ± 6.15%
Mem MB 2947.189 2743.658 2896.688 2813.276 2863.108 ≈ 2853   ± 2.74%

Patched:

RPS    9714.567 9676.102 9358.609 9371.408 9301.372 ≈ 9484   ±  2.05% ↑ 18.40%
TPS    8809.796 8796.854 8534.754 8661.158 8426.162 ≈ 8646   ±  1.92% ↑ 15.19%
CPU %    44.980   45.018   33.640   29.645   43.830 ≈   39.4 ± 18.41% ↑  0.25%
Mem MB 2989.078 2976.577 2306.185 2351.929 2910.479 ≈ 2707   ± 12.80% ↓  5.12%

There is a nuance with this patch, however. While it typically works the way
outlined above, sometimes it works like this:

RPS ≈ 6734.368
TPS ≈ 6299.332
CPU ≈ 25.552%
Mem ≈ 2706.046MB

And that's because the log looks like this:

DeltaTime, TransactionsCount, TPS
5014, 44212, 8817.710
5163, 49690, 9624.249
5166, 49523, 9586.334
5189, 49693, 9576.604
5198, 49339, 9491.920
5147, 49559, 9628.716
5192, 49680, 9568.567
5163, 49750, 9635.871
5183, 49189, 9490.450
5159, 49653, 9624.540
5167, 47945, 9279.079
5179, 2051, 396.022
5015, 4, 0.798
5004, 0, 0.000
5003, 0, 0.000
5003, 0, 0.000
5003, 0, 0.000
5003, 0, 0.000
5004, 0, 0.000
5003, 2925, 584.649
5040, 49099, 9741.865
5161, 49718, 9633.404
5170, 49228, 9521.857
5179, 49773, 9610.543
5167, 47253, 9145.152
5202, 49788, 9570.934
5177, 47704, 9214.603
5209, 46610, 8947.975
5249, 49156, 9364.831
5163, 18284, 3541.352
5072, 174, 34.306

On a network with 4 CNs and 1 RPC node there is a 1/256 probability that a block
won't be broadcast to the RPC node, so it won't see it until the ping timeout
kicks in. While it doesn't see the block it can't accept new incoming
transactions, so the bench basically gets stuck. To me that's an acceptable
trade-off, because normal networks are much larger than that and the effect of
this patch is way more important there, but still that's what we have and we
need to take it into account.
2021-08-06 21:10:34 +03:00
Roman Khimov
966a16e80e network: keep track of dead peers in iteratePeersWithSendMsg()
send() can return errStateMismatch, errGone and errBusy. errGone means the
peer is dead and won't ever be active again, so it doesn't make sense to retry
sends to it. errStateMismatch is technically "not yet ready", but we can't
wait for it either, since no one knows how long completing the handshake will
take. So only errBusy means we can retry.

So keep track of dead peers and adjust tries counting appropriately.
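
A sketch of this retry policy (the error values and Peer interface are
illustrative stand-ins for the real network package):

```go
package network

import "errors"

// Illustrative counterparts of the errors described above.
var (
	errBusy          = errors.New("peer is busy")
	errGone          = errors.New("peer is gone")
	errStateMismatch = errors.New("handshake not yet completed")
)

type Peer interface {
	EnqueueP2PMessage(msg []byte) error
}

// trySend retries only peers that returned errBusy; dead (errGone) and
// not-yet-ready (errStateMismatch) peers are excluded from further passes.
func trySend(peers []Peer, msg []byte, maxTries int) {
	done := make([]bool, len(peers)) // delivered or peer is unusable
	for t := 0; t < maxTries; t++ {
		retriable := 0
		for i, p := range peers {
			if done[i] {
				continue
			}
			err := p.EnqueueP2PMessage(msg)
			switch {
			case err == nil:
				done[i] = true // delivered
			case errors.Is(err, errBusy):
				retriable++ // the only case worth another pass
			default:
				done[i] = true // errGone/errStateMismatch: never retry
			}
		}
		if retriable == 0 {
			return
		}
	}
}
```
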
2021-08-06 21:10:34 +03:00
Roman Khimov
80f3ec2312 network: move peer filtering to getPeers()
It doesn't change much: we can't magically get more valid peers, and if some
die while we're iterating we'd detect that by an error returned from send().
2021-08-06 21:10:34 +03:00
Roman Khimov
de6f4987f6 network: microoptimize iteratePeersWithSendMsg()
Now that s.getPeers() returns a slice we can use a slice for `success` too;
maps are more expensive.
2021-08-06 21:10:34 +03:00
Roman Khimov
d51db20405 network: randomize peer iteration order
While iterating over a map in getPeers() is non-deterministic, it's not really
random enough for our purposes (usually maps have 2-3 iteration paths through
them); we need to fill our peers' queues more uniformly.
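
A minimal sketch of explicit randomization (assuming peers are kept in a map;
names are illustrative):

```go
package network

import "math/rand"

type Peer interface {
	EnqueueP2PMessage(msg []byte) error
}

// shuffledPeers copies peers into a slice and shuffles it explicitly,
// since map iteration order isn't uniformly random.
func shuffledPeers(peers map[string]Peer) []Peer {
	list := make([]Peer, 0, len(peers))
	for _, p := range peers {
		list = append(list, p)
	}
	rand.Shuffle(len(list), func(i, j int) {
		list[i], list[j] = list[j], list[i]
	})
	return list
}
```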

Believe it or not, but it does affect performance metrics, baseline (four
nodes, 10 workers):

RPS    7791.675 7996.559 7834.504 7746.705 7891.614 ≈ 7852   ±  1.10%
TPS    7241.497 7711.765 7520.211 7425.890 7334.443 ≈ 7447   ±  2.17%
CPU %    29.853   39.936   39.945   36.371   39.999 ≈   37.2 ± 10.57%
Mem MB 2749.635 2791.609 2828.610 2910.431 2863.344 ≈ 2829   ±  1.97%

Patched:

RPS    8180.760 8137.822 7858.358 7820.011 8051.076 ≈ 8010   ± 2.04% ↑ 2.01%
TPS    7819.831 7521.172 7519.023 7242.965 7426.000 ≈ 7506   ± 2.78% ↑ 0.79%
CPU %    41.983   38.775   40.606   39.375   35.537 ≈   39.3 ± 6.15% ↑ 5.65%
Mem MB 2947.189 2743.658 2896.688 2813.276 2863.108 ≈ 2853   ± 2.74% ↑ 0.85%
2021-08-06 21:10:34 +03:00
Roman Khimov
b55c75d59d network: hide Peers, make it return a slice
A slice is a bit more efficient, we don't need a map for Peers() users, and
it's not really interesting to outside users, so it's better to hide this
method.
2021-08-06 21:10:34 +03:00
Roman Khimov
119b4200ac network: add fail-fast route for tx double processing
When a transaction spreads through the network many nodes are likely to get it
at roughly the same time, and they will rebroadcast it at roughly the same
time too. As we have a number of peers it's quite likely that we'd get an Inv
with the same transaction from multiple peers simultaneously. We will ask them
for this transaction (independently!) and again we're likely to get it at
roughly the same time. So we can easily end up with multiple threads
processing the same transaction. Only one will succeed, but we can easily
avoid doing this in the first place, saving some CPU cycles for other things.

Notice that we can't do it _before_ receiving a transaction, because nothing
guarantees that the peer will respond to our transaction request, so
communication overhead is unavoidable at the moment; but saving on processing
already gives quite interesting results.
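
A minimal sketch of such a fail-fast gate (names are illustrative, not the
actual NeoGo implementation):

```go
package network

import "sync"

// txGate remembers which transaction hashes are already being processed
// so that duplicates can be dropped immediately.
type txGate struct {
	mtx      sync.Mutex
	inFlight map[[32]byte]bool
}

func newTxGate() *txGate {
	return &txGate{inFlight: make(map[[32]byte]bool)}
}

// start returns false if the same transaction is already being processed
// by another goroutine; the caller then skips the expensive verification.
func (g *txGate) start(h [32]byte) bool {
	g.mtx.Lock()
	defer g.mtx.Unlock()
	if g.inFlight[h] {
		return false
	}
	g.inFlight[h] = true
	return true
}

// done releases the hash after processing, successful or not.
func (g *txGate) done(h [32]byte) {
	g.mtx.Lock()
	delete(g.inFlight, h)
	g.mtx.Unlock()
}
```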

Baseline, four nodes with 10 workers:

RPS    7176.784 7014.511 6139.663 7191.280 7080.852 ≈ 6921   ± 5.72%
TPS    6945.409 6562.756 5927.050 6681.187 6821.794 ≈ 6588   ± 5.38%
CPU %    44.400   43.842   40.418   49.211   49.370 ≈   45.4 ± 7.53%
Mem MB 2693.414 2640.602 2472.007 2731.482 2707.879 ≈ 2649   ± 3.53%

Patched:

RPS    7791.675 7996.559 7834.504 7746.705 7891.614 ≈ 7852   ±  1.10% ↑ 13.45%
TPS    7241.497 7711.765 7520.211 7425.890 7334.443 ≈ 7447   ±  2.17% ↑ 13.04%
CPU %    29.853   39.936   39.945   36.371   39.999 ≈   37.2 ± 10.57% ↓ 18.06%
Mem MB 2749.635 2791.609 2828.610 2910.431 2863.344 ≈ 2829   ±  1.97% ↑  6.80%
2021-08-06 21:10:25 +03:00
Roman Khimov
7fc153ed2a network: only ask mempool for intersections with received Inv
Most of the time on a healthy network we see new transactions appearing that
are not present in the mempool. Once they get into the mempool we don't ask
for them again when some other peer sends an Inv with them. Then these
transactions are usually added into a block, removed from the mempool, and no
one actually sends them to us again. Some stale nodes can do that, but it's
not very likely to happen.

At the same time, a full chain HasTransaction() query is quite expensive at
the receiving end, so if we can avoid doing it, that's always good. Here it
technically allows resending an old transaction that will be re-requested, and
an attempt to add it to the mempool will be made. But it'll inevitably fail,
because the same HasTransaction() check is done there too. One can try to
maliciously flood the node with stale transactions, but that doesn't differ
from flooding it with any other invalid transactions, so no new attack vector
is added.
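
A sketch of the mempool-only Inv filter (the Mempool interface is a
hypothetical stand-in for the real mempool's containment check):

```go
package network

// Mempool is a stand-in exposing only the containment check we need.
type Mempool interface {
	ContainsKey(hash [32]byte) bool
}

// hashesToRequest keeps only the hashes missing from the mempool; stale
// (already persisted) transactions still get filtered out later, when
// adding them to the mempool fails its own HasTransaction() check.
func hashesToRequest(pool Mempool, inv [][32]byte) [][32]byte {
	res := inv[:0] // filter in place, the Inv slice is ours anyway
	for _, h := range inv {
		if !pool.ContainsKey(h) {
			res = append(res, h)
		}
	}
	return res
}
```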

Baseline, 4 nodes with 10 workers:

RPS    6902.296 6465.662 6856.044 6785.515 6157.024 ≈ 6633   ± 4.26%
TPS    6468.431 6218.867 6610.565 6288.596 5790.556 ≈ 6275   ± 4.44%
CPU %    50.231   42.925   49.481   48.396   42.662 ≈   46.7 ± 7.01%
Mem MB 2856.841 2684.103 2756.195 2733.485 2422.787 ≈ 2691   ± 5.40%

Patched:

RPS    7176.784 7014.511 6139.663 7191.280 7080.852 ≈ 6921   ± 5.72% ↑ 4.34%
TPS    6945.409 6562.756 5927.050 6681.187 6821.794 ≈ 6588   ± 5.38% ↑ 4.99%
CPU %    44.400   43.842   40.418   49.211   49.370 ≈   45.4 ± 7.53% ↓ 2.78%
Mem MB 2693.414 2640.602 2472.007 2731.482 2707.879 ≈ 2649   ± 3.53% ↓ 1.56%
2021-08-06 20:53:02 +03:00
Roman Khimov
f9663a97a1 network: fix Ping messages
* NewPing() accepts the block index first and the nonce second.
* Block height should be used; it'll be important for state-exchanging nodes.
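
For illustration, a hedged re-sketch of the constructor as described (the
struct and field names are illustrative, not the exact payload package):

```go
package payload

// Ping carries the sender's block height and a per-node nonce.
type Ping struct {
	LastBlockIndex uint32 // block height of the sender
	Nonce          uint32
}

// NewPing takes the block index first and the nonce second, which is the
// argument order this fix makes callers respect.
func NewPing(blockIndex uint32, nonce uint32) *Ping {
	return &Ping{LastBlockIndex: blockIndex, Nonce: nonce}
}
```
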
2021-08-06 11:28:09 +03:00
Roman Khimov
1cea0dd894
Merge pull request #1997 from nspcc-dev/drop-syncreached-check
network: drop useless flag check
2021-06-04 23:39:34 +03:00
Roman Khimov
f6da88af0d network: drop useless flag check
It's the first thing done in tryStartServices(), so checking it here doesn't
make much sense.
2021-06-04 20:29:47 +03:00
Anna Shaleva
1dbf1d4310 rpc: allow to track notary requests via Notification subsystem 2021-06-01 16:29:04 +03:00
Roman Khimov
c4e084b0d8 *: fix whitespace errors
leading/trailing newlines
2021-05-12 22:51:41 +03:00
Roman Khimov
99108c620f network: fix errcheck warning 2021-05-12 20:14:35 +03:00
Roman Khimov
cfc067dd24 *: remove dead code
Found by deadcode via golangci-lint.
2021-05-12 18:13:14 +03:00
Evgeniy Stratonikov
275a5c9daa network: limit message number from the same sender 2021-05-12 10:52:11 +03:00
Anna Shaleva
09bb162de0 network: add ability to specify port for P2P version exchange 2021-04-30 11:27:55 +03:00
Roman Khimov
99b71bbbd1 network: move service starts to tryStartServices
All of them only make sense on a fully synchronized node; doing anything
during the initial sync is just a waste of time.
2021-04-02 13:12:06 +03:00
Roman Khimov
690a1db589 network: replace consensusStarted/canHandleExtens with syncReached flag
They're essentially the same.
2021-04-02 12:55:56 +03:00
Roman Khimov
a01636a1b0 stateroot: set networking callback in a more straightforward way 2021-04-02 12:12:36 +03:00
Roman Khimov
546faf5e70
Merge pull request #1859 from nspcc-dev/rework-signing-fix-stateroots
Rework signing, fix stateroots
2021-03-26 14:04:23 +03:00
Roman Khimov
d314f82db3 transaction: drop Network from Transaction
We only need it when signing/verifying.
2021-03-26 13:45:18 +03:00
Roman Khimov
fa4380c9da network: prevent putting duplicate addresses into pool from peer's data
It can't be trusted.
2021-03-26 12:31:07 +03:00
Anna Shaleva
23a3514cc0 consensus: store ProtocolConfiguration in consensus config 2021-03-15 16:58:27 +03:00
Evgeniy Stratonikov
2f3abf95a2 stateroot: broadcast state on new blocks 2021-03-09 13:51:11 +03:00
Evgeniy Stratonikov
3c65ed1507 stateroot: allow to sign new roots 2021-03-09 13:51:11 +03:00
Evgeniy Stratonikov
ac227a80fe stateroot: use RoleStateValidator for verification 2021-03-09 13:51:10 +03:00
Anna Shaleva
94430ef3ca network: refactor RelayTx error handling
We don't need to wrap different core errors in the server. Also it would be
good to provide more error info to the user.
2021-02-18 12:40:40 +03:00
Anna Shaleva
9f6fba5926 network: specify error message
For better user experience.
2021-02-16 14:11:42 +03:00
Anna Shaleva
bcb82b457d config: move notary module config to ApplicationConfiguration 2021-02-16 13:58:25 +03:00
Anna Shaleva
8444f3d816 network: refactor notary service's PostBlock
There was a deadlock while trying to finalize a transaction during PostBlock:
	1) (*Notary).PostBlock is called under the blockchain lock
	2) (*Notary).onTransaction is called inside the PostBlock
	3) (*Notary).onTransaction needs to RLock the blockchain to add the
	   completed transaction to the memory pool (and the blockchain is
	   Lock'ed by this moment)

The problem is fixed by using the notifications subsystem, because it's not
required to call (*Notary).PostBlock under the blockchain lock.
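
A minimal sketch of the decoupling (names are illustrative; the real fix
goes through the mempool notification subsystem):

```go
package notary

// finalizedTx stands in for a completed transaction.
type finalizedTx struct{}

type Notary struct {
	queue chan finalizedTx
}

func NewNotary() *Notary {
	// Buffered so that the producer, which runs under the blockchain
	// Lock, never blocks waiting for the consumer.
	return &Notary{queue: make(chan finalizedTx, 100)}
}

// PostBlock runs under the blockchain Lock and must not acquire it
// again, so it only queues the finished transactions.
func (n *Notary) PostBlock(txs []finalizedTx) {
	for _, tx := range txs {
		select {
		case n.queue <- tx:
		default: // drop rather than risk blocking under the lock
		}
	}
}

// Run drains the queue in a separate goroutine; adding to the mempool
// may RLock the blockchain safely here, since PostBlock has returned
// and the Lock has been released.
func (n *Notary) Run(addToMempool func(finalizedTx)) {
	for tx := range n.queue {
		addToMempool(tx)
	}
}
```
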
2021-02-11 17:11:36 +03:00
Anna Shaleva
5d6fdda664 network: fix P2PNotaryRequest payload broadcaster 2021-02-11 17:11:36 +03:00
Anna Shaleva
c14e34cdb5 network: add RelayP2PNotaryRequest method 2021-02-11 16:56:24 +03:00
Roman Khimov
a87b8578b2 network: stub "StateService" payloads out for now
And stop dropping connections if we're to receive them. Proper handling is
subject of #1701, but we need at least some connection-level stability for
now.
2021-02-05 14:59:41 +03:00
Roman Khimov
686f983ccf network: prevent disconnects during initial sync
A node receiving an extensible payload from the future is confused and drops
the connection. Note that this can still happen if the node is to lose its
synchrony.

Calling `IsInSync()` is quite expensive, so we stop doing that once synchrony
is reached (hence the bool flag).
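
A sketch of that flag (using atomic.Bool from Go 1.19+ for brevity; the
actual field and type names differ):

```go
package network

import "sync/atomic"

// Chain is a stand-in for the blockchain with its expensive sync check.
type Chain interface {
	IsInSync() bool
}

type Server struct {
	chain       Chain
	syncReached atomic.Bool // set once, never cleared
}

// isInSync caches the positive answer: once synchrony is reached we stop
// paying for IsInSync() on every payload.
func (s *Server) isInSync() bool {
	if s.syncReached.Load() {
		return true
	}
	if s.chain.IsInSync() {
		s.syncReached.Store(true)
		return true
	}
	return false
}
```
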
2021-02-05 14:54:43 +03:00
Anna Shaleva
bfbd096fed core: introduce mempool notifications 2021-02-02 22:01:32 +03:00
Anna Shaleva
19fa0daaa6 core, network: add Notary module 2021-02-02 22:01:20 +03:00
Evgeniy Stratonikov
9592f3e052 network: implement pool for Extensible payloads 2021-01-28 17:09:06 +03:00
Evgenii Stratonikov
43e4d3af88 oracle: integrate module in core and RPC
1. Initialization is performed via `Blockchain` methods.
2. The native Oracle contract updates the list of oracle nodes
   and in-flight requests in `PostPersist`.
3. RPC uses the Oracle module directly.
2021-01-28 13:00:58 +03:00
Evgeniy Stratonikov
5d83c28bc9 network: replace ConsensusType with ExtensibleType 2021-01-22 10:38:33 +03:00
Evgenii Stratonikov
5bd6c1e5cc network: fix a bug in discovery with a peer connected twice
It could be the case that the checks are performed simultaneously and a
peer's connection count goes down from 2 to 0. We must take such a case into
account and register the address as good in discovery.
2020-12-25 14:36:53 +03:00
Evgenii Stratonikov
2cb536a6a1 network: provide NullPayload where necessary 2020-12-25 14:36:53 +03:00
Evgenii Stratonikov
0a5049658f network: support non-blocking broadcast
Right now a single slow peer can slow down the whole network.
Do the broadcast in 2 parts:
1. Perform a non-blocking send to all peers where possible.
2. Perform blocking sends until the message is sent to 2/3 of good peers.
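
A minimal sketch of this two-phase broadcast (TrySend/Send are hypothetical
non-blocking and blocking peer methods):

```go
package network

type Peer interface {
	TrySend(msg []byte) bool // non-blocking; false if the queue is full
	Send(msg []byte) error   // blocking
}

func broadcast(peers []Peer, msg []byte) {
	enough := (2*len(peers) + 2) / 3 // 2/3 of good peers, rounded up
	sent := 0
	var slow []Peer
	// Part 1: non-blocking sends, fast peers get the message immediately.
	for _, p := range peers {
		if p.TrySend(msg) {
			sent++
		} else {
			slow = append(slow, p)
		}
	}
	// Part 2: block on the slow ones only until the 2/3 quorum is met.
	for _, p := range slow {
		if sent >= enough {
			break
		}
		if p.Send(msg) == nil {
			sent++
		}
	}
}
```
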
2020-12-25 14:36:52 +03:00
Anna Shaleva
0b5cf78468 network: add notary request payload 2020-12-10 18:17:31 +03:00
Evgenii Stratonikov
27624946d9 network/test: add tests for server commands 2020-12-09 15:23:49 +03:00
Evgenii Stratonikov
bd81b19a7a network: fix requestTx()
2 bugs were fixed here:
1. If the number of transactions was small, no messages were sent at all.
2. Correctly cut the byte slice when the last message is small.
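
A sketch of the corrected chunking (names are illustrative):

```go
package network

// chunkHashes splits hashes into messages of at most maxHashes each;
// both bugs above are about its edge cases: a batch smaller than
// maxHashes must still produce a message, and the final short chunk
// must be cut at the right boundary.
func chunkHashes(hashes [][32]byte, maxHashes int) [][][32]byte {
	var msgs [][][32]byte
	for len(hashes) > 0 {
		n := maxHashes
		if n > len(hashes) {
			n = len(hashes) // the last, smaller chunk
		}
		msgs = append(msgs, hashes[:n])
		hashes = hashes[n:]
	}
	return msgs
}
```
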
2020-12-09 12:04:10 +03:00
Evgenii Stratonikov
074ba5f394 network: fix GetBlocks command
Return exactly the requested number of hashes.
2020-12-09 12:04:10 +03:00
Evgenii Stratonikov
4aa1a37f3f network: fetch blocks in parallel
The block cache size is 2000, while the max request size is 500.
Try to fetch blocks in chunks starting from the current height.
Lower heights have priority.
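
A minimal sketch of chunked fetching under these limits (the interface and
names are illustrative):

```go
package network

const (
	blockCacheSize  = 2000 // how many blocks fit into the cache
	maxBlocksPerReq = 500  // cap on a single block request
)

// BlockRequester is a stand-in for a peer able to serve block ranges.
type BlockRequester interface {
	RequestBlocks(fromHeight uint32, count int) error
}

// fetchBlocks asks several peers for consecutive 500-block chunks
// starting from the current height; lower (earlier) chunks go out
// first, since they unblock persisting.
func fetchBlocks(peers []BlockRequester, currentHeight uint32) {
	from := currentHeight + 1
	chunks := blockCacheSize / maxBlocksPerReq
	for i := 0; i < chunks && i < len(peers); i++ {
		if err := peers[i].RequestBlocks(from, maxBlocksPerReq); err != nil {
			continue // retry the same chunk with the next peer
		}
		from += maxBlocksPerReq
	}
}
```
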
2020-12-02 10:50:35 +03:00