Commit graph

625 commits

Anna Shaleva
da757fa387 network: fix grammar typo in the error message 2023-02-20 11:08:07 +03:00
Anna Shaleva
28927228f0 *: adjust subscription-related doc
Add a warning about received events modification where applicable.
2023-01-17 17:11:19 +03:00
Anna Shaleva
9b364aa7ee network: do not allow requesting an invalid block count
The problem is in peer disconnection due to invalid GetBlockByIndex
payload (the logs are from some patched neo-go version):
```
дек 15 16:02:39 glagoli neo-go[928530]: 2022-12-15T16:02:39.490Z        INFO        new peer connected        {"addr": "10.78.69.115:50846", "peerCount": 3}
дек 15 16:02:39 glagoli neo-go[928530]: 2022-12-15T16:02:39.490Z        WARN        peer disconnected        {"addr": "10.78.69.115:50846", "error": "invalid block count", "peerCount": 2}
дек 15 16:02:39 glagoli neo-go[928530]: 2022-12-15T16:02:39.490Z        INFO        started protocol        {"addr": "10.78.69.115:50846", "userAgent": "/NEO-GO:1.0.0/", "startHeight": 0, "id": 1339571820}
дек 15 16:02:39 glagoli neo-go[928530]: 2022-12-15T16:02:39.491Z        INFO        new peer connected        {"addr": "10.78.69.115:50856", "peerCount": 3}
дек 15 16:02:39 glagoli neo-go[928530]: 2022-12-15T16:02:39.492Z        WARN        peer disconnected        {"addr": "10.78.69.115:50856", "error": "invalid block count", "peerCount": 2}
дек 15 16:02:39 glagoli neo-go[928530]: 2022-12-15T16:02:39.492Z        INFO        started protocol        {"addr": "10.78.69.115:50856", "userAgent": "/NEO-GO:1.0.0/", "startHeight": 0, "id": 1339571820}
дек 15 16:02:39 glagoli neo-go[928530]: 2022-12-15T16:02:39.492Z        INFO        new peer connected        {"addr": "10.78.69.115:50858", "peerCount": 3}
дек 15 16:02:39 glagoli neo-go[928530]: 2022-12-15T16:02:39.493Z        INFO        started protocol        {"addr": "10.78.69.115:50858", "userAgent": "/NEO-GO:1.0.0/", "startHeight": 0, "id": 1339571820}
дек 15 16:02:39 glagoli neo-go[928530]: 2022-12-15T16:02:39.493Z        WARN        peer disconnected        {"addr": "10.78.69.115:50858", "error": "invalid block count", "peerCount": 2}
дек 15 16:02:39 glagoli neo-go[928530]: 2022-12-15T16:02:39.494Z        INFO        new peer connected        {"addr": "10.78.69.115:50874", "peerCount": 3}
дек 15 16:02:39 glagoli neo-go[928530]: 2022-12-15T16:02:39.494Z        INFO        started protocol        {"addr": "10.78.69.115:50874", "userAgent": "/NEO-GO:1.0.0/", "startHeight": 0, "id": 1339571820}
дек 15 16:02:39 glagoli neo-go[928530]: 2022-12-15T16:02:39.494Z        WARN        peer disconnected        {"addr": "10.78.69.115:50874", "error": "invalid block count", "peerCount": 2}
```

The GetBlockByIndex payload can't be decoded, and the only possible cause
is a zero (or less than -1, though that's unlikely) requested block count.

The error message is improved as well.
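A rough Go sketch of the kind of count validation involved (the constant name and cap value are made up for illustration; the real payload code lives in pkg/network/payload):
```
package main

import "fmt"

// maxBlockBatch is a hypothetical cap on how many blocks a single request
// may ask for; the real limit lives in the payload package.
const maxBlockBatch = 500

// checkBlockCount mirrors the rule described above: -1 means "as many as
// possible", any positive value up to the cap is fine, everything else
// (notably 0) is invalid.
func checkBlockCount(count int16) error {
    if count != -1 && (count < 1 || count > maxBlockBatch) {
        return fmt.Errorf("invalid block count: %d", count)
    }
    return nil
}

func main() {
    for _, c := range []int16{-1, 0, 10, 501} {
        if err := checkBlockCount(c); err != nil {
            fmt.Println(err)
        } else {
            fmt.Printf("count %d is acceptable\n", c)
        }
    }
}
```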
2022-12-28 13:04:56 +03:00
Anna Shaleva
c0a453a53b network: adjust requestBlocks logic
If the lastQueued block index is the same as the one we'd like to
request in payload, then we need to increment the payload's count.
2022-12-28 12:50:30 +03:00
Roman Khimov
e79dec15f9 *: use zap.Stringer instead of zap.String where it can be used
It's a bit more efficient when we're not logging the message (mostly the case
for debug entries) and it makes the code somewhat simpler as well.
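A minimal standalone illustration of the difference (not neo-go code): zap.String converts the value up front, while zap.Stringer defers the String() call until the entry is actually written.
```
package main

import (
    "net"

    "go.uber.org/zap"
)

func main() {
    logger, _ := zap.NewProduction() // debug entries are dropped at this level
    defer logger.Sync()

    addr := &net.TCPAddr{IP: net.IPv4(10, 78, 69, 115), Port: 50846}

    // addr.String() is evaluated right here, whether or not the entry is logged.
    logger.Debug("peer connected", zap.String("addr", addr.String()))

    // String() is only called if the entry actually gets written.
    logger.Debug("peer connected", zap.Stringer("addr", addr))
}
```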
2022-12-13 12:44:54 +03:00
Roman Khimov
7589733017 config: add a special Blockchain type to configure Blockchain
And include some node-specific configurations there with backwards
compatibility. Note that in the future we'll remove Ledger's
fields from the ProtocolConfiguration and it'll be possible to access them in
Blockchain directly (not via .Ledger).

The other option tried was using two separate configuration types, but that
incurs more changes to the codebase; a single structure that behaves almost like
the old one is better for backwards compatibility.

Fixes #2676.
2022-12-07 17:35:53 +03:00
Anna Shaleva
82221b0ca7 *: fix Neo and NeoGo misuses 2022-12-07 17:29:09 +03:00
Anna Shaleva
54c2aa8582 config: move P2P options to a separate config section
And convert time-related settings to a Duration format along the way.
2022-12-07 13:06:05 +03:00
Anna Shaleva
9cf6cc61f4 network: allow multiple bind addresses for server
And replace Transporter.Address() with Transporter.HostPort() along the way.
2022-12-07 13:06:03 +03:00
Roman Khimov
c2adbf768b config: add TimePerBlock to replace SecondsPerBlock
It's more generic and convenient than MillisecondsPerBlock. This setting is
made in a backwards-compatible fashion, but it'll override SecondsPerBlock if
both are used. Configurations are specifically not changed here; it's
important to check compatibility.

Fixes #2675.
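A tiny Go sketch of the override rule described above (simplified; the real config package handles this with its own types):
```
package main

import (
    "fmt"
    "time"
)

// effectiveTimePerBlock shows the compatibility rule from the message above:
// TimePerBlock wins when set, otherwise SecondsPerBlock is used.
func effectiveTimePerBlock(timePerBlock time.Duration, secondsPerBlock int) time.Duration {
    if timePerBlock != 0 {
        return timePerBlock
    }
    return time.Duration(secondsPerBlock) * time.Second
}

func main() {
    fmt.Println(effectiveTimePerBlock(0, 15))                    // falls back to SecondsPerBlock
    fmt.Println(effectiveTimePerBlock(500*time.Millisecond, 15)) // TimePerBlock overrides
}
```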
2022-12-02 19:52:14 +03:00
Roman Khimov
0ad6e295ea core: make GetHeaderHash accept uint32
It should've always been this way because block indexes are uint32.
2022-11-25 14:30:51 +03:00
Roman Khimov
b8c09f509f network: add random slight delay to connection attempts
Small (especially dockerized/virtualized) networks often start all nodes at
once and then we see a lot of connection flapping in the log. This happens
because nodes try to connect to each other simultaneously and establish two
connections, then each one finds a duplicate and drops it, but the two sides
may drop different duplicates, so they retry and this goes on for some time.
Eventually everything settles, but we get a lot of garbage in the log and a
lot of useless attempts.

This random waiting timeout doesn't change the logic much and adds a minimal
delay, but it increases the chances that both nodes establish a proper single
connection first and only then see another one and drop it on both sides as
well. It leads to almost no flapping in small networks and doesn't affect
bigger ones much. The delay is close to unnoticeable, especially if there is
something in the DB for the node to process during startup.
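A Go sketch of the jitter idea (function name and numbers are illustrative, not the actual neo-go code):
```
package main

import (
    "fmt"
    "math/rand"
    "net"
    "time"
)

// dialWithJitter sleeps for a random fraction of maxDelay before dialing, so
// two nodes started at the same moment are unlikely to connect to each other
// simultaneously.
func dialWithJitter(addr string, maxDelay time.Duration) (net.Conn, error) {
    time.Sleep(time.Duration(rand.Int63n(int64(maxDelay))))
    return net.DialTimeout("tcp", addr, 10*time.Second)
}

func main() {
    conn, err := dialWithJitter("127.0.0.1:20333", 500*time.Millisecond)
    if err != nil {
        fmt.Println("dial failed:", err)
        return
    }
    defer conn.Close()
    fmt.Println("connected to", conn.RemoteAddr())
}
```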
2022-11-17 18:42:43 +03:00
Roman Khimov
075a54192c network: don't try too many connections
Consider mainnet: it has an AttemptConnPeers of 20, so we may already have 3
peers and request 20 more, then have a 4th one connected and attempt 20 more
again; this easily leads to a huge number of connections.
2022-11-17 18:03:04 +03:00
Roman Khimov
6bce973ac2 network: drop duplicating check from handleAddrCmd()
It was relevant with the queue-based discoverer; now it's not, the discoverer
handles this internally.
2022-11-17 17:42:36 +03:00
Roman Khimov
1c7487b8e4 network: add a timer to check for peers
Consider initial connection phase for public networks:
 * simultaneous connections to seeds
 * very quick handshakes
 * got five handshaked peers and some getaddr requests sent
 * but addr replies won't trigger new connections
 * so we can stay with just five connections until any of them breaks or a
   (long) address checking timer fires

This new timer solves the problem and it's adaptive at the same time: if we
have enough peers we won't be waking up often.
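A Go sketch of an adaptive re-check timer along these lines (names and intervals are made up for the example):
```
package main

import (
    "fmt"
    "time"
)

// peerCheckInterval picks how soon to re-check connectivity: back off when we
// already have enough peers, poll quickly while we're short of them.
func peerCheckInterval(connected, minPeers int) time.Duration {
    if connected >= minPeers {
        return 30 * time.Second // enough peers, no need to wake up often
    }
    return 200 * time.Millisecond // short of peers, check again soon
}

func main() {
    const minPeers = 5
    connected := 1

    timer := time.NewTimer(peerCheckInterval(connected, minPeers))
    defer timer.Stop()

    for connected < minPeers {
        <-timer.C
        fmt.Println("short of peers, trying more connections")
        connected += 2 // pretend a couple of dials succeed
        timer.Reset(peerCheckInterval(connected, minPeers))
    }
    fmt.Println("connected peers:", connected)
}
```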
2022-11-17 17:32:05 +03:00
Roman Khimov
23f118a1a9 network: rework discoverer/server interaction
 * treat connected/handshaked peers separately in the discoverer, save
   "original" address for connected ones, it can be a name instead of IP and
   it's important to keep it to avoid reconnections
 * store name->IP mapping for seeds if and when they're connected to avoid
   reconnections
 * block seed if it's detected to be our own node (which is often the case for
   small private networks)
 * add an event for handshaked peers in the server, connected but
   non-handshaked ones are not really helpful for MinPeers or GetAddr logic

Fixes #2796.
2022-11-17 17:07:19 +03:00
Roman Khimov
6ba4afc977 network: consider handshaked peers only when comparing with MinPeers
We don't know a lot about non-handshaked ones, so it's safer to try more
connections.
2022-11-17 16:40:29 +03:00
Anna Shaleva
6f3a0a6b4c network: adjust warning for deposit expiration
Provide additional info for better user experience.
2022-11-15 14:16:34 +03:00
Roman Khimov
c405092953 network: pre-filter transactions going into dbft
Drop some load from dbft loop during consensus process.
2022-11-11 15:32:51 +03:00
Roman Khimov
e19d867d4e
Merge pull request #2761 from nspcc-dev/fancy-getaddr
Fancy getaddr
2022-10-25 16:51:38 +07:00
Roman Khimov
28f54d352a network: do getaddr requests periodically, fix #2745
Every 1000 blocks seems to be OK for big networks (which previously only did
some initial requests and then effectively never requested addresses again
because there was a sufficient number of them) and it won't hurt smaller ones
either (which effectively keep doing this on every connect/disconnect; peer
changes are very rare there, but when they happen we want to react to them
quickly).
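The periodic trigger itself is simple enough to show as a sketch (the constant name is made up for the example):
```
package main

import "fmt"

// peersRequestInterval is the "every 1000 blocks" trigger from the message above.
const peersRequestInterval = 1000

func main() {
    for height := uint32(998); height <= 1002; height++ {
        if height%peersRequestInterval == 0 {
            fmt.Printf("block %d: broadcasting getaddr\n", height)
        }
    }
}
```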
2022-10-24 15:10:51 +03:00
Roman Khimov
9efc110058 network: it is 42
32 is a very good number, but we all know 42 is a better one. And it can even
be proven by tests with higher peak TPS values.

You may wonder why it is so good? Because we're mostly using packet-switching
networks and a packet is a packet almost irrespective of how big it
is. Yet a packet has some maximum possible size (hi, MTU) and this size most
of the time is 1500 (or a little less than that, hi VPN). Subtract the IP header
(20 for IPv4 or 40 for IPv6 not counting options), the TCP header (another 20) and
Neo message/payload headers (~8 for this case) and we have just a little more
than 1400 bytes for our dear hashes. Which means that in a single packet most
of the time we can have 42-44 of them, maybe 45. Choosing between these
numbers is not hard then.
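The back-of-the-envelope arithmetic from the message, reproduced as a runnable Go snippet:
```
package main

import "fmt"

func main() {
    const (
        mtu        = 1500 // typical Ethernet MTU
        tcpHeader  = 20   // TCP header without options
        neoHeaders = 8    // approximate Neo message/payload overhead
        hashSize   = 32   // Uint256 hash
    )
    for _, ipHeader := range []int{20, 40} { // IPv4 vs IPv6, no options
        payload := mtu - ipHeader - tcpHeader - neoHeaders
        fmt.Printf("IP header %d: %d bytes left, %d hashes per packet\n",
            ipHeader, payload, payload/hashSize)
    }
}
```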
2022-10-24 14:44:19 +03:00
Roman Khimov
9d6b18adec network: drop minPoolCount magic constant
We have AttemptConnPeers that is closely related: the more we have there, the
bigger the network supposedly is, so it's much better than a magic minPoolCount.
2022-10-24 14:36:10 +03:00
Roman Khimov
af24051bf5 network: sleep a bit before retrying reconnects
If Dial() is to exit quickly we can end up in a retry loop eating CPU.
2022-10-24 14:34:48 +03:00
Roman Khimov
f42b8e78fc
Merge pull request #2758 from nspcc-dev/check-inflight-tx-invs
network: check inv against currently processed transactions
2022-10-24 14:16:33 +07:00
Roman Khimov
e26055190e network: check inv against currently processed transactions
Sometimes we already have it, but it's not yet processed, so we can save on a
getdata request. This only matters for very high-speed networks like the 4-1
scenario, and even there the effect isn't big, but we still can do it.
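A simplified Go sketch of such an in-flight filter (types and names are illustrative, not the actual neo-go code):
```
package main

import (
    "fmt"
    "sync"
)

// inFlight tracks hashes that are being processed right now, so an inv for
// one of them doesn't trigger a redundant getdata request.
type inFlight struct {
    mu     sync.Mutex
    hashes map[string]bool
}

// filterInv returns only the hashes we still need to request.
func (f *inFlight) filterInv(inv []string) []string {
    f.mu.Lock()
    defer f.mu.Unlock()
    var need []string
    for _, h := range inv {
        if !f.hashes[h] {
            need = append(need, h)
        }
    }
    return need
}

func main() {
    f := &inFlight{hashes: map[string]bool{"aa": true}}
    fmt.Println(f.filterInv([]string{"aa", "bb"})) // only "bb" still needs getdata
}
```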
2022-10-21 21:16:18 +03:00
Roman Khimov
cfb5058018 network: batch getdata replies
This is not exactly the protocol-level batching as was tried in #1770 and
proposed by neo-project/neo#2365, but it's a TCP-level change in that we now
Write() a set of messages at once. Given that Go sets up TCP sockets with
TCP_NODELAY by default this is a substantial change: we generate fewer packets
for the same amount of data. It doesn't change anything on properly
connected networks, but the ones with delays benefit from it a lot.

This also improves queueing because we no longer generate 32 messages to
deliver on transaction's GetData, it's just one stream of bytes with 32
messages inside.

Do the same with GetBlocksByIndex, we can have a lot of messages there too.

But don't forget about potential peer DoS attacks: if a peer requests a lot of
big blocks we need to flush them before we process the whole set.
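A standalone Go sketch of the write-batching idea with a flush threshold (buffer size and names are made up; the real code works on network messages):
```
package main

import (
    "bytes"
    "fmt"
    "io"
)

// flushThreshold is a hypothetical cap so a peer asking for many big blocks
// doesn't make us buffer the whole set before sending anything.
const flushThreshold = 64 * 1024

// sendBatched appends pre-serialized messages to one buffer and writes it out
// in as few Write calls as possible, instead of one Write per message.
func sendBatched(w io.Writer, messages [][]byte) error {
    var buf bytes.Buffer
    for _, msg := range messages {
        buf.Write(msg)
        if buf.Len() >= flushThreshold {
            if _, err := w.Write(buf.Bytes()); err != nil {
                return err
            }
            buf.Reset()
        }
    }
    if buf.Len() == 0 {
        return nil
    }
    _, err := w.Write(buf.Bytes())
    return err
}

func main() {
    var sink bytes.Buffer
    msgs := [][]byte{[]byte("inv"), []byte("tx"), []byte("block")}
    if err := sendBatched(&sink, msgs); err != nil {
        fmt.Println("send failed:", err)
        return
    }
    fmt.Printf("wrote %d bytes in a single flush\n", sink.Len())
}
```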
2022-10-21 17:16:32 +03:00
Roman Khimov
e1b5ac9b81 network: separate tx handling from msg handling
This allows transaction processing to scale naturally if we have some peer
that is sending a lot of them while others are mostly silent. It also can help
somewhat in the event we have 50 peers that all send transactions. The 4+1
scenario benefits a lot from it, while 7+2 slows down a little. Delayed
scenarios don't care.

Surprisingly, this also makes disconnects (#2744) much rarer, the 4-node
scenario almost never sees them now. Most probably this is a case where peers
affect each other a lot: a single-threaded transaction receiver can be slow
enough to trigger some timeout in the getdata handler of its peer (because it
tries to push a number of replies).
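A minimal Go sketch of handing transactions off to a dedicated worker so the per-peer message loop stays responsive (simplified, not the actual neo-go code):
```
package main

import (
    "fmt"
    "sync"
)

type tx struct{ id int }

func main() {
    txCh := make(chan tx, 64) // dedicated transaction queue
    var wg sync.WaitGroup

    // Separate worker: a peer flooding us with transactions only fills this
    // queue and doesn't stall handling of other message types.
    wg.Add(1)
    go func() {
        defer wg.Done()
        for t := range txCh {
            fmt.Println("verified and relayed tx", t.id)
        }
    }()

    // Per-peer message loop: transactions are handed off, everything else
    // would be processed inline.
    for i := 0; i < 5; i++ {
        txCh <- tx{id: i}
    }
    close(txCh)
    wg.Wait()
}
```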
2022-10-21 12:11:24 +03:00
Roman Khimov
e003b67418 network: reuse inventory hash list for request hashes
Microoptimization, we can do this because we only use them in handleInvCmd().
2022-10-21 11:28:40 +03:00
Roman Khimov
0f625f04f0
Merge pull request #2748 from nspcc-dev/stop-tx-flow
network/consensus: use new dbft StopTxFlow callback
2022-10-18 16:29:37 +07:00
Roman Khimov
73ce898e27 network/consensus: use new dbft StopTxFlow callback
It makes sense in general (further narrowing down the time window when
transactions are processed by consensus thread) and it improves block times a
little too, especially in the 7+2 scenario.

Related to #2744.
2022-10-18 11:06:20 +03:00
Roman Khimov
2791127ee4 network: add prometheus histogram with cmd processing time
It can be useful to detect some performance issues.
2022-10-17 22:51:16 +03:00
Roman Khimov
73079745ab
Merge pull request #2746 from nspcc-dev/optimize-tx-callbacks
network: only call tx callback if we're waiting for transactions
2022-10-17 16:39:41 +07:00
Roman Khimov
dce9f80585
Merge pull request #2743 from nspcc-dev/log-fan-out
Logarithmic gossip fan out
2022-10-14 23:18:34 +07:00
Roman Khimov
4dd3fd4ac0 network: only call tx callback if we're waiting for transactions
Until the consensus process starts for a new block and until it really needs
some transactions we can spare some cycles by not delivering transactions to
it. In tests this doesn't affect TPS, but makes block delays a bit more
stable. Related to #2744, I think it also may cause timeouts during
transaction processing (waiting on the consensus process channel while it does
something dBFT-related).
2022-10-14 18:45:48 +03:00
Roman Khimov
65f0fadddb network: register peer only if it's not a duplicate 2022-10-14 15:53:32 +03:00
Roman Khimov
851cbc7dab network: implement adaptive peer requests
When the network is big enough, MinPeers may be suboptimal for good network
connectivity, but if we know the network size we can do some estimation on the
number of sufficient peers.
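An illustrative Go sketch of such an estimate; the logarithmic formula and the factor of 3 are assumptions for the example, not the estimation neo-go actually uses:
```
package main

import (
    "fmt"
    "math"
)

// optimalPeerCount keeps at least MinPeers but grows the target with the
// estimated network size.
func optimalPeerCount(minPeers, netSize int) int {
    if netSize <= 1 {
        return minPeers
    }
    est := int(math.Ceil(3 * math.Log2(float64(netSize))))
    if est < minPeers {
        return minPeers
    }
    return est
}

func main() {
    for _, size := range []int{5, 50, 500, 5000} {
        fmt.Printf("estimated network size %4d -> target peers %d\n",
            size, optimalPeerCount(5, size))
    }
}
```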
2022-10-14 15:53:32 +03:00
Roman Khimov
c17b2afab5 network: add BroadcastFactor to control gossip, fix #2678 2022-10-14 15:53:32 +03:00
Roman Khimov
215e8704f1 network: simplify discoverer, make it almost a lib
We already have two basic lists: connected and unconnected nodes, we don't
need an additional channel and we don't need a goroutine to handle it.
2022-10-14 15:53:32 +03:00
Roman Khimov
c1ef326183 network: re-add addresses to the pool on UnregisterConnectedAddr
That's what we do anyway, but this way we can be a bit more efficient.
2022-10-14 14:12:33 +03:00
Roman Khimov
631f166709 network: broadcast to log-dependent number of nodes
Fixes #608.
2022-10-14 14:12:33 +03:00
Roman Khimov
dc62046019 network: add network size estimation metric 2022-10-12 22:29:55 +03:00
Roman Khimov
bcf77c3c42 network: filter out not-yet-ready nodes when broadcasting
They can fail right in the getPeers or they can fail later when packet send
is attempted. Of course they can complete handshake in-between these events,
but most likely they won't and we'll waste more resources on this attempt. So
rule out bad peers immediately.
2022-10-12 16:51:01 +03:00
Roman Khimov
137f2cb192 network: deduplicate TCPPeer code a bit
context.Background() is never canceled and has no deadline, so we can avoid
duplicating some code.
2022-10-12 15:43:31 +03:00
Roman Khimov
104da8caff network: broadcast messages, enqueue packets
Drop EnqueueP2PPacket, replace EnqueueHPPacket with EnqueueHPMessage. We use
Enqueue* when we have a specific per-peer message, it makes zero sense
duplicating serialization code for it (unlike Broadcast*).
2022-10-12 15:39:20 +03:00
Roman Khimov
d5f2ad86a1 network: drop unused EnqueueMessage interface from Peer 2022-10-12 15:27:08 +03:00
Roman Khimov
b345581c72 network: pings are broadcasted, don't send them to everyone
Follow the general rules of broadcasts, even though it's somewhat different
from Inv, we just want to get some reply from our neighbors to see if we're
behind. We don't strictly need all neighbors for it.
2022-10-12 15:25:03 +03:00
Roman Khimov
e1d5f18ff4 network: fix outdated Peer interface comments 2022-10-12 10:16:07 +03:00
Roman Khimov
8b26d9475b network: speculatively set GetAddrSent status
Otherwise we routinely get "unexpected addr received" error.
2022-10-11 18:42:40 +03:00
Roman Khimov
e80c60a3b9 network: rework broadcast logic
We have a number of queues for different purposes:
 * regular broadcast queue
 * direct p2p queue
 * high-priority queue

And two basic egress scenarios:
 * direct p2p messages (replies to requests in Server's handle* methods)
 * broadcasted messages

Low priority broadcasted messages:
 * transaction inventories
 * block inventories
 * notary inventories
 * non-consensus extensibles

High-priority broadcasted messages:
 * consensus extensibles
 * getdata transaction requests from consensus process
 * getaddr requests

P2P messages are a bit more complicated, most of the time they use p2p queue,
but extensible message requests/replies use HP queue.

Server's handle* code is run from Peer's handleIncoming, every peer has this
thread that handles incoming messages. When working with the peer it's
important to reply to requests and blocking this thread until we send (queue)
a reply is fine, if the peer is slow we just won't get anything new from
it. The queue used is irrelevant wrt this issue.

Broadcasted messages are radically different, we want them to be delivered to
many peers, but we don't care about specific ones. If it's delivered to 2/3 of
the peers we're fine, if it's delivered to more of them --- it's not an
issue. But doing this fairly is not an easy thing: the current code tries
performing non-blocking sends and if this doesn't yield enough results it then
blocks (but with a timeout, we can't wait indefinitely). And it does so in a
sequential manner: once a peer is chosen the code will wait for it (and only
it) until the timeout happens.

What can be done instead is an attempt to push the message to all of the peers
simultaneously (or close to that). If they all deliver --- OK, if some block
and wait then we can wait until _any_ of them pushes the message through (or
global timeout happens, we still can't wait forever). If we have enough
deliveries then we can cancel pending ones and it's again not an error if
these canceled threads still do their job.

This makes the system more dynamic and adds some substantial processing
overhead, but it's networking code, any of this overhead is much lower than
the actual packet delivery time. It also allows spreading the load more
fairly: if there is any spare queue it'll get the packet and release the
broadcaster. On the next broadcast iteration another peer is more likely to be
chosen just because it didn't get a message previously (and had some time to
deliver already queued messages).

It works perfectly in tests: with optimal networking conditions we have much
better block times and TPS increases by 5-25% depending on the scenario.

I'd go as far as to say that it fixes the original problem of #2678, because
in this particular scenario we have empty queues in ~100% of the cases and
this new logic will likely lead to 100% fan out in this case (cancelation just
won't happen fast enough). But when the load grows and there is some waiting
in the queue it will optimize out the slowest links.
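A standalone Go sketch of the fan-out-with-cancelation idea described above (simulated peers and delays; not the actual neo-go implementation):
```
package main

import (
    "context"
    "fmt"
    "math/rand"
    "sync/atomic"
    "time"
)

// peer simulates a neighbor whose queue takes a random time to accept a message.
type peer struct{ id int }

func (p *peer) send(ctx context.Context, msg string) error {
    select {
    case <-time.After(time.Duration(rand.Intn(300)) * time.Millisecond):
        return nil
    case <-ctx.Done():
        return ctx.Err()
    }
}

// broadcast pushes msg to all peers at once and returns as soon as "enough"
// of them have taken it; the remaining sends are canceled, and it's fine if
// some of them still complete.
func broadcast(peers []*peer, msg string, enough int, timeout time.Duration) {
    ctx, cancel := context.WithTimeout(context.Background(), timeout)
    defer cancel()

    var delivered int32
    done := make(chan struct{}, len(peers))
    for _, p := range peers {
        go func(p *peer) {
            if p.send(ctx, msg) == nil {
                atomic.AddInt32(&delivered, 1)
            }
            done <- struct{}{}
        }(p)
    }
    for range peers {
        <-done
        if int(atomic.LoadInt32(&delivered)) >= enough {
            cancel() // enough deliveries, release the slow peers
            break
        }
    }
    fmt.Printf("delivered to %d peers\n", atomic.LoadInt32(&delivered))
}

func main() {
    broadcast([]*peer{{1}, {2}, {3}, {4}, {5}}, "inv", 3, time.Second)
}
```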
2022-10-11 18:42:40 +03:00
Roman Khimov
dabdad20ad network: don't wait indefinitely for packet to be sent
Peers can be slow, very slow, slow enough to affect the node's regular
operation. We can't wait for them indefinitely, there has to be a timeout for
send operations.

This patch uses TimePerBlock as a reference for its timeout. It's relatively
big and it doesn't affect tests much: 4+1 scenarios tend to perform a little
worse with it, while 7+2 scenarios work a little better. The difference is a
few percent, but all of these tests easily have 10-15% variations from run to
run.

It's an important step in making our gossip better because we can't have any
behavior where neighbors directly block the node forever, refs. #2678 and
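A minimal Go sketch of a bounded send (the channel stands in for a peer's outgoing queue; the timeout is just a parameter here, derived from TimePerBlock in the commit above):
```
package main

import (
    "errors"
    "fmt"
    "time"
)

var errSendTimeout = errors.New("send queue timeout")

// enqueueWithTimeout gives up on a peer whose queue doesn't accept the
// message within the deadline instead of blocking forever.
func enqueueWithTimeout(queue chan<- []byte, msg []byte, timeout time.Duration) error {
    select {
    case queue <- msg:
        return nil
    case <-time.After(timeout):
        return errSendTimeout
    }
}

func main() {
    queue := make(chan []byte, 1)
    queue <- []byte("already pending") // fill the queue to force a timeout

    if err := enqueueWithTimeout(queue, []byte("inv"), 100*time.Millisecond); err != nil {
        fmt.Println("slow peer:", err)
    }
}
```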
2022-10-10 22:15:21 +03:00
Roman Khimov
317dd42513 *: use uint*Size and SignatureLen constants where appropriate 2022-10-05 10:45:52 +03:00
Roman Khimov
4f3ffe7290 golangci: enable errorlint and fix everything it found 2022-09-02 18:36:23 +03:00
Roman Khimov
779a5c070f network: wait for exit in discoverer
And synchronize other threads with channels instead of mutexes. Overall this
scheme is more reliable.
2022-08-19 22:23:47 +03:00
Roman Khimov
eeeb0f6f0e core: accept two-side channels for sub/unsub, read on unsub
Blockchain's notificationDispatcher sends events to channels and these
channels must be read from. Unfortunately, the regular service shutdown
procedure does unsubscription first (outside of the read loop) and only then
drains the channel. While it waits for the unsubscription request to be
accepted, notificationDispatcher can try pushing more data into the same
channel, which will lead to a deadlock. Reading in the same method solves
this: any number of events can be pushed until the unsub channel accepts the
data.
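A Go sketch of the drain-while-unsubscribing pattern (simplified event and channel types, not the actual neo-go code):
```
package main

import "fmt"

type event struct{ id int }

// unsubscribe keeps draining events while trying to hand the channel back to
// the dispatcher, so the dispatcher can never deadlock on a full channel.
func unsubscribe(unsubCh chan<- chan event, events chan event) {
    for {
        select {
        case unsubCh <- events: // dispatcher accepted the unsubscription
            return
        case ev := <-events: // keep reading whatever is pushed meanwhile
            fmt.Println("drained event", ev.id)
        }
    }
}

func main() {
    events := make(chan event) // unbuffered subscriber channel
    unsubCh := make(chan chan event)
    done := make(chan struct{})

    // Dispatcher: pushes a few more events before it gets to the unsub request.
    go func() {
        for i := 0; i < 3; i++ {
            events <- event{id: i}
        }
        <-unsubCh // accept the unsubscription
        close(done)
    }()

    unsubscribe(unsubCh, events)
    <-done
    fmt.Println("shut down cleanly")
}
```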
2022-08-19 22:08:40 +03:00
Roman Khimov
dea75a4211 network: wait for the relayer thread to finish on shutdown
Unsubscribe and drain first, then return from the Shutdown method. It's
important wrt the subsequent chain shutdown process (normally it's closed right
after the network server).
2022-08-19 22:08:40 +03:00
Roman Khimov
155089f4e5 network: drop cleanup from TestVerifyNotaryRequest
It never runs the server, so 746644a4eb was a
bit wrong with this.
2022-08-19 20:54:06 +03:00
Anna Shaleva
916f2293b8 *: apply go 1.19 formatter heuristics
And make manual corrections where needed. See the "Common mistakes
and pitfalls" section of https://tip.golang.org/doc/comment.
2022-08-09 15:37:52 +03:00
Anna Shaleva
bb751535d3 *: bump minimum supported go version
Close #2497.
2022-08-08 13:59:32 +03:00
Roman Khimov
9b0ea2c21b network/consensus: always process dBFT messages as high priority
Move the category definition from consensus to payload; the consensus service
is the only one of its kind (HP), so network.Server can be adjusted accordingly.
2022-08-02 13:07:18 +03:00
Roman Khimov
94a8784dcb network: allow to drop services and solve concurrency issues
Now that services can come and go we need to protect all of the associated
fields and allow deregistering them.
2022-08-02 13:05:39 +03:00
Roman Khimov
5a7fa2d3df cli: restart consensus service on USR2
Fix #1949. Also drop wallet from the ServerConfig since it's not used in any
meaningful way after this change.
2022-08-02 13:05:07 +03:00
Roman Khimov
2e27c3d829 metrics: move package to services
Where it belongs.
2022-07-21 23:38:23 +03:00
Anna Shaleva
1ae601787d network: allow to handle GetMPTData with KeepOnlyLatestState on
And adjust documentation along the way.
2022-07-14 14:33:20 +03:00
Roman Khimov
dc59dc991b config: move metrics.Config into config.BasicService
The config package should be as lightweight as possible, yet now it depends on
the whole metrics package just to get one structure from it.
2022-07-08 23:30:30 +03:00
Roman Khimov
3fbc1331aa
Merge pull request #2582 from nspcc-dev/fix-server-sync
network: adjust the way (*Server).IsInSync() works
2022-07-05 12:28:20 +03:00
Roman Khimov
9f05009d1a
Merge pull request #2580 from nspcc-dev/service-review
Service review
2022-07-05 12:23:25 +03:00
Anna Shaleva
0835581fa9 network: adjust the way (*Server).IsInSync() works
Always return true if sync was reached once. Fix #2564.
2022-07-05 12:20:31 +03:00
Roman Khimov
3e2eda6752 *: add some comments to service Start/Shutdown methods 2022-07-04 23:03:50 +03:00
Roman Khimov
c26a962b55 *: use localhost address instead of 127.0.0.1, fix #2575 2022-06-30 16:19:07 +03:00
Anna Shaleva
8ab422da66 *: properly unsubscribe from Blockchain events 2022-06-28 19:09:25 +03:00
Roman Khimov
75d06d18c9
Merge pull request #2466 from nspcc-dev/rules-fixes
Rules scope fixes
2022-05-06 11:09:39 +03:00
Roman Khimov
bd352daab4 transaction: fix Rules stringer, it's WitnessRules in C#
See neo-project/neo#2720.
2022-05-06 10:08:09 +03:00
Elizaveta Chichindaeva
28908aa3cf [#2442] English Check
Signed-off-by: Elizaveta Chichindaeva <elizaveta@nspcc.ru>
2022-05-04 19:48:27 +03:00
Roman Khimov
53423b7c37 network: fix panic in blockqueue during shutdown
panic: send on closed channel

goroutine 116 [running]:
github.com/nspcc-dev/neo-go/pkg/network.(*blockQueue).putBlock(0xc00011b650, 0xc01e371200)
        github.com/nspcc-dev/neo-go/pkg/network/blockqueue.go:129 +0x185
github.com/nspcc-dev/neo-go/pkg/network.(*Server).handleBlockCmd(0xc0002d3c00, {0xf69b7f?, 0xc001520010?}, 0xc02eb44000?)
        github.com/nspcc-dev/neo-go/pkg/network/server.go:607 +0x6f
github.com/nspcc-dev/neo-go/pkg/network.(*Server).handleMessage(0xc0002d3c00, {0x121f4c8?, 0xc001528000?}, 0xc01e35cf80)
        github.com/nspcc-dev/neo-go/pkg/network/server.go:1160 +0x6c5
github.com/nspcc-dev/neo-go/pkg/network.(*TCPPeer).handleIncoming(0xc001528000)
        github.com/nspcc-dev/neo-go/pkg/network/tcp_peer.go:189 +0x98
created by github.com/nspcc-dev/neo-go/pkg/network.(*TCPPeer).handleConn
        github.com/nspcc-dev/neo-go/pkg/network/tcp_peer.go:164 +0xcf
2022-04-26 00:31:48 +03:00
Roman Khimov
2593bb0535 network: extend Service with Name, use it to distinguish services 2022-04-26 00:31:48 +03:00
Evgeniy Stratonikov
34b1b52784 network: check compressed payload size in decompress
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2022-03-24 17:22:55 +03:00
Anna Shaleva
753d604784 network: use net.ErrClosed to check network connection was closed
Close #1765.
2022-03-17 19:39:18 +03:00
Anna Shaleva
9bbd94d0fa network: tune waiting limits in tests
Some tests are failing on Windows due to slow runners with errors like the following:
```
2022-02-09T17:11:20.3127016Z     --- FAIL: TestGetData/transaction (1.82s)
2022-02-09T17:11:20.3127385Z         server_test.go:500:
2022-02-09T17:11:20.3127878Z             	Error Trace:	server_test.go:500
2022-02-09T17:11:20.3128533Z             	            				server_test.go:520
2022-02-09T17:11:20.3128978Z             	Error:      	Condition never satisfied
2022-02-09T17:11:20.3129479Z             	Test:       	TestGetData/transaction
```
2022-02-10 18:58:50 +03:00
Roman Khimov
e621f746a7 config/core: allow to change the number of validators
Fixes #2320.
2022-01-31 23:14:38 +03:00
Roman Khimov
60d6fa1125 network: keep a copy of the config inside of Server
Avoid copying the configuration again and again, make things a bit more
efficient.
2022-01-24 18:43:01 +03:00
Roman Khimov
89d754da6f network: don't request blocks we already have in the queue
Fixes #2258.
2022-01-18 00:04:41 +03:00
Roman Khimov
03fd91e857 network: use assert.Eventually in bq test
Simpler and more efficient (polls more often and completes the test sooner).
2022-01-18 00:04:29 +03:00
Roman Khimov
d52a06a82d network: move index-position relation into helper
Just to make things more clear, no functional changes.
2022-01-18 00:02:16 +03:00
Roman Khimov
bc6d6e58bc network: always pass transactions to consensus process
Consensus can require conflicting transactions and it can require more
transactions than the mempool can fit; all of this should work. Transactions
will be checked anyway using its secondary mempool. See the scenario from #668.
2022-01-14 20:08:40 +03:00
Roman Khimov
746644a4eb network: decouple it from blockchainer.Blockchainer
We don't need all of it.
2022-01-14 19:57:16 +03:00
Roman Khimov
bf1604454c blockchainer/network: move StateSync interface to the user
Only network package cares about it.
2022-01-14 19:57:14 +03:00
Roman Khimov
af87cb082f network: decouple Server from the notary service 2022-01-14 19:55:53 +03:00
Roman Khimov
508d36f698 network: drop consensus dependency 2022-01-14 19:55:53 +03:00
Roman Khimov
66aafd868b network: unplug stateroot service from the Server
Notice that it makes the node accept Extensible payloads with any category,
which is the way the C# node works. We're trusting Extensible senders;
improper payloads are harmless until they DoS the network, but we have some
protections against that too (and spamming with a proper category doesn't
differ a lot).
2022-01-14 19:55:50 +03:00
Roman Khimov
0ad3ea5944 network/cli: move Oracle service instantiation out of the network 2022-01-14 19:53:45 +03:00
Roman Khimov
5dd4db2c02 network/services: unify service lifecycle management
Run with Start, Stop with Shutdown, make behavior uniform.
2022-01-14 19:53:45 +03:00
Roman Khimov
c942402957 blockchainer: drop Policer interface
We never use it as a proper interface, so it makes no sense keeping it this
way.
2022-01-12 00:58:03 +03:00
Roman Khimov
48de82d902 network: fix data race in TestHandleMPTData, fix #2241 2021-11-15 12:37:01 +03:00
Roman Khimov
fe50f6edc7
Merge pull request #2240 from nspcc-dev/fix-panic-in-network
Fix panic on peer disconnect
2021-11-01 12:44:15 +03:00
Roman Khimov
774dee3cd4 network: fix disconnection race between handleConn() and handleIncoming()
handleIncoming() winning the race for p.Disconnect() call might lead to nil
error passed as the reason for peer unregistration.
2021-11-01 12:20:55 +03:00
Roman Khimov
2eeec73770 network: don't panic if there is no reason for disconnect
Although error should always be there, we shouldn't fail like this if it's not:
    | panic: runtime error: invalid memory address or nil pointer dereference
    | [signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0xc8884c]
    |
    | goroutine 113 [running]:
    | github.com/nspcc-dev/neo-go/pkg/network.(*Server).run(0xc000150580)
    |         github.com/nspcc-dev/neo-go/pkg/network/server.go:396 +0x7ac
    | github.com/nspcc-dev/neo-go/pkg/network.(*Server).Start(0xc000150580, 0x0)
    |         github.com/nspcc-dev/neo-go/pkg/network/server.go:294 +0x3fb
    | created by github.com/nspcc-dev/neo-go/cli/server.startServer
    |         github.com/nspcc-dev/neo-go/cli/server/server.go:344 +0x56f
2021-11-01 12:19:00 +03:00
Roman Khimov
8bb1ecb45a network: remove priority queue from block queue
Use a circular buffer, which is a bit more appropriate. The problem is that
the priority queue accepts and stores equal items, which wastes memory even in
the normal usage scenario, but it's especially dangerous if the node is stuck
for some reason. In this case it'll keep accepting the same blocks from peers
and putting them into the queue again and again, leaking memory up to an OOM
condition.

Notice that the queue length calculation might be wrong in case the circular
buffer wraps, but that's not very likely to happen (usually blocks not coming
from the queue are added by consensus and it's not very fast in doing so).
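A simplified Go sketch of an index-keyed ring for blocks, showing why duplicates can't grow it (not the actual neo-go blockqueue):
```
package main

import "fmt"

type block struct{ Index uint32 }

// blockQueue is a ring keyed by block index: a re-sent block just lands in
// the slot it already occupies, so duplicates can't grow the queue the way
// they could with a priority queue.
type blockQueue struct {
    ring []*block
    next uint32 // next index expected by the chain
}

func newBlockQueue(capacity int, next uint32) *blockQueue {
    return &blockQueue{ring: make([]*block, capacity), next: next}
}

func (q *blockQueue) put(b *block) {
    if b.Index < q.next || b.Index >= q.next+uint32(len(q.ring)) {
        return // too old or too far ahead for the current window
    }
    q.ring[b.Index%uint32(len(q.ring))] = b
}

// pop returns the next expected block if we already have it.
func (q *blockQueue) pop() *block {
    slot := q.next % uint32(len(q.ring))
    b := q.ring[slot]
    if b == nil {
        return nil
    }
    q.ring[slot] = nil
    q.next++
    return b
}

func main() {
    q := newBlockQueue(4, 100)
    q.put(&block{Index: 101})
    q.put(&block{Index: 100})
    q.put(&block{Index: 100}) // duplicate, overwrites the same slot
    for b := q.pop(); b != nil; b = q.pop() {
        fmt.Println("dequeued block", b.Index)
    }
}
```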
2021-11-01 11:49:01 +03:00
AnnaShaleva
2d196b3f35 rpc: refactor calculatenetworkfee handler
Use (Blockchainer).VerifyWitness() to calculate network fee for
contract-based witnesses.
2021-10-25 19:07:25 +03:00
Evgeniy Stratonikov
4dd3a0d503 network: request headers in parallel, fix #2158
Do this similarly to how blocks are requested.
See also 4aa1a37.

Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-10-06 15:25:54 +03:00