network: batch getdata replies

This is not exactly the protocol-level batching that was tried in #1770 and
proposed by neo-project/neo#2365, but a TCP-level change: we now Write() a
whole set of messages at once. Given that Go sets up TCP sockets with
TCP_NODELAY by default, this is a substantial change: fewer packets are
generated for the same amount of data. It doesn't change anything on
properly connected networks, but ones with delays benefit from it a lot.
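A minimal sketch of the TCP-level effect, not the actual neo-go code
(writeBatched and the pre-serialized msgs slices are assumptions): with
TCP_NODELAY, each Write() of a small message can leave as its own segment,
so concatenating the serialized messages and issuing a single Write() lets
the same bytes travel in far fewer packets.

package main

import "net"

// writeBatched is a hypothetical helper: it concatenates already-serialized
// messages and sends them with a single Write() call instead of one call
// (and potentially one TCP segment, given TCP_NODELAY) per message.
func writeBatched(conn net.Conn, msgs [][]byte) error {
	var buf []byte
	for _, b := range msgs {
		buf = append(buf, b...)
	}
	_, err := conn.Write(buf)
	return err
}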

This also improves queueing, because we no longer generate 32 separate
messages to deliver in reply to a GetData for transactions; it's just one
stream of bytes with 32 messages inside.

Do the same for GetBlocksByIndex, since we can have a lot of messages there
too (see the sketch below).
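Roughly what the reply-side batching looks like, as a hedged sketch rather
than the real handler, continuing the sketch package above: Uint256,
replyGetData and the serialize callback are made-up names, while
EnqueueP2PPacket mirrors the Peer method shown in the diff below.

// Uint256 stands in for the hash type of the requested items (assumption,
// not the real neo-go definition).
type Uint256 [32]byte

// packetQueuer mirrors the EnqueueP2PPacket method from the diff below.
type packetQueuer interface {
	EnqueueP2PPacket([]byte) error
}

// replyGetData serializes one reply message per requested hash into a
// single byte slice and enqueues it once, so a 32-hash GetData produces one
// packet in the send queue instead of 32 messages.
func replyGetData(p packetQueuer, hashes []Uint256, serialize func(Uint256) ([]byte, error)) error {
	var pkt []byte
	for _, h := range hashes {
		b, err := serialize(h)
		if err != nil {
			continue // skip items we can't serialize (e.g. unknown hash)
		}
		pkt = append(pkt, b...)
	}
	if len(pkt) == 0 {
		return nil
	}
	return p.EnqueueP2PPacket(pkt)
}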

But don't forget about potential peer DoS attacks: if a peer requests a lot
of big blocks, we need to flush what has already been serialized before we
process the whole set.
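A sketch of that flush guard, reusing the hypothetical packetQueuer
interface from the sketch above; flushLimit and replyBlocks are assumed
names and the threshold is illustrative, not the real constant.

// flushLimit is an assumed batch size limit, not the actual value used.
const flushLimit = 2 * 1024 * 1024

// replyBlocks batches serialized block messages but flushes the accumulated
// bytes to the peer once they exceed flushLimit, so a request for many big
// blocks doesn't buffer the whole set in memory before anything is sent.
func replyBlocks(p packetQueuer, blocks [][]byte) error {
	var pkt []byte
	for _, b := range blocks {
		pkt = append(pkt, b...)
		if len(pkt) >= flushLimit {
			if err := p.EnqueueP2PPacket(pkt); err != nil {
				return err
			}
			pkt = nil // start a fresh batch after flushing
		}
	}
	if len(pkt) == 0 {
		return nil
	}
	return p.EnqueueP2PPacket(pkt)
}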
Roman Khimov 2022-10-21 10:49:44 +03:00
parent bf4636f70a
commit cfb5058018
4 changed files with 72 additions and 14 deletions

@@ -128,6 +128,16 @@ func (p *TCPPeer) EnqueueHPMessage(msg *Message) error {
	return p.putMsgIntoQueue(p.hpSendQ, msg)
}

// EnqueueP2PPacket implements the Peer interface.
func (p *TCPPeer) EnqueueP2PPacket(b []byte) error {
	return p.putPacketIntoQueue(context.Background(), p.p2pSendQ, b)
}

// EnqueueHPPacket implements the Peer interface.
func (p *TCPPeer) EnqueueHPPacket(b []byte) error {
	return p.putPacketIntoQueue(context.Background(), p.hpSendQ, b)
}

func (p *TCPPeer) writeMsg(msg *Message) error {
	b, err := msg.Bytes()
	if err != nil {