Move them to the core/network packages, close #2950. The name of the
mempool's unsorted transactions metric has been changed along the way to
match the core's metrics naming convention.
Signed-off-by: Anna Shaleva <shaleva.ann@nspcc.ru>
I've carefully checked the way a new service can be added to the
Blockchain instance or removed from it. The current implementation of the
SetNotary and SetOracle methods doesn't contain dangerous code, and native
contracts use atomic values everywhere a service is stored.
The current implementation of the Notary, Oracle and StateRoot services'
reload/disabling/enabling on SIGUSR1 is safe and doesn't require any
adjustment.
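A minimal sketch of the atomic-value pattern mentioned above, with hypothetical names (not the actual native contract code):
```
package sketch

import "sync/atomic"

// Service is a placeholder for whatever a native contract delegates to.
type Service interface{ Name() string }

// serviceHolder stores the service reference behind an atomic.Value so that
// a SIGUSR1 reconfiguration can swap it while other goroutines read it.
// Note that atomic.Value requires the same concrete type on every Store.
type serviceHolder struct {
	service atomic.Value
}

func (h *serviceHolder) SetService(s Service) {
	h.service.Store(s)
}

func (h *serviceHolder) Service() Service {
	s, _ := h.service.Load().(Service)
	return s // nil until a service has been set
}
```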
This commit closes #2944; it's not a bug in the code, just stale
documentation.
Signed-off-by: Anna Shaleva <shaleva.ann@nspcc.ru>
The `cache` persisting operation is concurrent with the storage modifications
made by further state reset stages. We can't allow a situation where data
from the next stage leaks into the previous one. State reset stages must be
atomic in terms of DB persisting, thus another `upperCache` MemCached store
is used to keep them apart.
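A minimal sketch of that layering, assuming neo-go's storage.NewMemCachedStore/Persist API; the wrapper function itself is hypothetical:
```
package sketch

import "github.com/nspcc-dev/neo-go/pkg/core/storage"

// runStageIsolated persists the current cache in the background while the
// next stage collects its changes in a fresh upper layer; the upper layer is
// merged down only after the previous stage has been fully persisted.
func runStageIsolated(cache *storage.MemCachedStore, nextStage func(*storage.MemCachedStore)) error {
	upperCache := storage.NewMemCachedStore(cache) // next stage writes land here, not in cache
	errCh := make(chan error, 1)
	go func() {
		_, err := cache.Persist() // previous stage data goes down to the lower store
		errCh <- err
	}()
	nextStage(upperCache) // collect the next stage changes in isolation
	if err := <-errCh; err != nil {
		return err
	}
	_, err := upperCache.Persist() // now it's safe to merge them into cache
	return err
}
```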
Here are the results of mainnet's BoltDB reset from 1.1M to 600K:
This patch:
```
anna@kiwi:~/Documents/GitProjects/nspcc-dev/neo-go$ ./bin/neo-go db reset -m --debug --height 600000
2023-04-11T16:15:25.783+0300 INFO MaxBlockSize is not set or wrong, setting default value {"MaxBlockSize": 262144}
2023-04-11T16:15:25.783+0300 INFO MaxBlockSystemFee is not set or wrong, setting default value {"MaxBlockSystemFee": 900000000000}
2023-04-11T16:15:25.783+0300 INFO MaxTransactionsPerBlock is not set or wrong, using default value {"MaxTransactionsPerBlock": 512}
2023-04-11T16:15:25.783+0300 INFO MaxValidUntilBlockIncrement is not set or wrong, using default value {"MaxValidUntilBlockIncrement": 5760}
2023-04-11T16:15:25.787+0300 INFO restoring blockchain {"version": "0.2.8"}
2023-04-11T16:15:25.906+0300 INFO initializing state reset {"target height": 600000}
2023-04-11T16:15:25.906+0300 DEBUG trying to reset blocks, transactions and AERs
2023-04-11T16:15:57.031+0300 INFO intermediate batch of removed blocks, transactions and AERs is collected {"batch": 1, "took": "31.125057214s"}
2023-04-11T16:16:12.644+0300 DEBUG intermediate batch of removed blocks, transactions and AERs is persisted {"batch": 1, "took": "15.613156971s", "keys": 321895}
2023-04-11T16:16:13.895+0300 INFO intermediate batch of removed blocks, transactions and AERs is collected {"batch": 2, "took": "16.663444208s"}
2023-04-11T16:16:19.784+0300 INFO last batch of removed blocks, transactions and AERs is collected {"batch": 3, "took": "5.760308543s"}
2023-04-11T16:16:19.784+0300 INFO blocks, transactions ans AERs are reset {"took": "53.878632911s"}
2023-04-11T16:16:22.870+0300 DEBUG intermediate batch of removed blocks, transactions and AERs is persisted {"batch": 2, "took": "8.974838893s", "keys": 334823}
2023-04-11T16:16:22.870+0300 DEBUG trying to reset contract storage items
2023-04-11T16:16:27.272+0300 DEBUG last batch of removed blocks, transactions and AERs is persisted {"batch": 3, "took": "7.487357441s", "keys": 208913}
2023-04-11T16:17:23.678+0300 INFO intermediate batch of contract storage items and IDs is collected {"batch": 1, "took": "1m0.80711106s"}
2023-04-11T16:18:00.769+0300 INFO intermediate batch of contract storage items and IDs is collected {"batch": 2, "took": "36.967660061s"}
2023-04-11T16:18:20.478+0300 DEBUG intermediate batch of contract storage items is persisted {"batch": 1, "took": "56.796257788s", "keys": 200000}
2023-04-11T16:18:54.115+0300 INFO intermediate batch of contract storage items and IDs is collected {"batch": 3, "took": "33.637412437s"}
2023-04-11T16:19:18.844+0300 DEBUG intermediate batch of contract storage items is persisted {"batch": 2, "took": "1m18.0737668s", "keys": 200000}
2023-04-11T16:19:27.650+0300 INFO last batch of contract storage items is collected {"batch": 4, "took": "8.806264019s"}
2023-04-11T16:19:27.650+0300 INFO contract storage items are reset {"took": "3m4.780232077s", "keys": 656944}
2023-04-11T16:20:15.660+0300 DEBUG intermediate batch of contract storage items is persisted {"batch": 3, "took": "1m21.544386403s", "keys": 200000}
2023-04-11T16:20:15.660+0300 DEBUG trying to reset headers information
2023-04-11T16:20:16.385+0300 INFO headers information is reset {"took": "725.174932ms"}
2023-04-11T16:20:19.586+0300 DEBUG last batch of contract storage items and IDs is persisted {"batch": 4, "took": "51.936278608s", "keys": 56945}
2023-04-11T16:20:19.587+0300 DEBUG trying to reset state root information and NEP transfers
2023-04-11T16:20:35.845+0300 INFO state root information and NEP transfers are reset {"took": "16.25852114s"}
2023-04-11T16:21:10.000+0300 DEBUG headers information is persisted {"took": "53.613638429s", "keys": 528438}
2023-04-11T16:21:10.003+0300 DEBUG trying to remove stale storage items
2023-04-11T16:21:18.108+0300 INFO stale storage items are reset {"took": "8.105140658s", "keys": 1350176}
2023-04-11T16:21:18.108+0300 DEBUG trying to remove state reset point
2023-04-11T16:21:18.108+0300 INFO state reset point is removed {"took": "8.554µs"}
2023-04-11T16:21:20.151+0300 DEBUG state root information and NEP transfers are persisted {"took": "44.305707049s", "keys": 602578}
2023-04-11T16:21:20.212+0300 INFO state reset point information is persisted {"took": "2.103764633s", "keys": 2}
2023-04-11T16:21:20.213+0300 INFO reset finished successfully {"took": "5m54.306861367s"}
```
The previous commit:
```
anna@kiwi:~/Documents/GitProjects/nspcc-dev/neo-go$ ./bin/neo-go db reset -m --debug --height 600000
2023-04-11T16:24:04.256+0300 INFO MaxBlockSize is not set or wrong, setting default value {"MaxBlockSize": 262144}
2023-04-11T16:24:04.256+0300 INFO MaxBlockSystemFee is not set or wrong, setting default value {"MaxBlockSystemFee": 900000000000}
2023-04-11T16:24:04.256+0300 INFO MaxTransactionsPerBlock is not set or wrong, using default value {"MaxTransactionsPerBlock": 512}
2023-04-11T16:24:04.256+0300 INFO MaxValidUntilBlockIncrement is not set or wrong, using default value {"MaxValidUntilBlockIncrement": 5760}
2023-04-11T16:24:04.261+0300 INFO restoring blockchain {"version": "0.2.8"}
2023-04-11T16:24:04.368+0300 INFO initializing state reset {"target height": 600000}
2023-04-11T16:24:04.368+0300 DEBUG trying to reset blocks, transactions and AERs
2023-04-11T16:24:30.363+0300 INFO intermediate batch of removed blocks, transactions and AERs is collected {"batch": 1, "took": "25.995261037s"}
2023-04-11T16:24:44.947+0300 DEBUG intermediate batch of removed blocks, transactions and AERs is persisted {"batch": 1, "took": "14.584447338s", "keys": 321897}
2023-04-11T16:24:45.791+0300 INFO intermediate batch of removed blocks, transactions and AERs is collected {"batch": 2, "took": "15.428492824s"}
2023-04-11T16:24:51.252+0300 INFO last batch of removed blocks, transactions and AERs is collected {"batch": 3, "took": "5.460662766s"}
2023-04-11T16:24:51.252+0300 INFO blocks, transactions ans AERs are reset {"took": "46.884558096s"}
2023-04-11T16:24:55.399+0300 DEBUG intermediate batch of removed blocks, transactions and AERs is persisted {"batch": 2, "took": "9.607820004s", "keys": 334821}
2023-04-11T16:24:55.399+0300 DEBUG trying to reset contract storage items
2023-04-11T16:24:59.981+0300 DEBUG last batch of removed blocks, transactions and AERs is persisted {"batch": 3, "took": "8.728713255s", "keys": 208913}
2023-04-11T16:25:50.827+0300 INFO intermediate batch of contract storage items and IDs is collected {"batch": 1, "took": "55.426411416s"}
2023-04-11T16:26:28.734+0300 INFO intermediate batch of contract storage items and IDs is collected {"batch": 2, "took": "37.902647706s"}
2023-04-11T16:26:53.960+0300 DEBUG intermediate batch of contract storage items is persisted {"batch": 1, "took": "1m3.129453265s", "keys": 200001}
2023-04-11T16:27:27.645+0300 INFO intermediate batch of contract storage items and IDs is collected {"batch": 3, "took": "33.685283662s"}
2023-04-11T16:27:52.173+0300 DEBUG intermediate batch of contract storage items is persisted {"batch": 2, "took": "1m23.438465575s", "keys": 199999}
2023-04-11T16:28:00.995+0300 INFO last batch of contract storage items is collected {"batch": 4, "took": "8.821990443s"}
2023-04-11T16:28:00.995+0300 INFO contract storage items are reset {"took": "3m5.595950958s", "keys": 656944}
2023-04-11T16:28:49.164+0300 DEBUG intermediate batch of contract storage items is persisted {"batch": 3, "took": "1m21.518344712s", "keys": 200000}
2023-04-11T16:28:49.164+0300 DEBUG trying to reset headers information
2023-04-11T16:28:49.936+0300 INFO headers information is reset {"took": "772.36435ms"}
2023-04-11T16:28:53.122+0300 DEBUG last batch of contract storage items and IDs is persisted {"batch": 4, "took": "52.126928092s", "keys": 56945}
2023-04-11T16:28:53.122+0300 DEBUG trying to reset state root information and NEP transfers
2023-04-11T16:29:09.332+0300 INFO state root information and NEP transfers are reset {"took": "16.20921699s"}
2023-04-11T16:29:46.264+0300 DEBUG headers information is persisted {"took": "56.326715249s", "keys": 528438}
2023-04-11T16:29:46.267+0300 DEBUG trying to remove stale storage items
2023-04-11T16:29:53.986+0300 INFO stale storage items are reset {"took": "7.718950145s", "keys": 1350176}
2023-04-11T16:29:53.986+0300 DEBUG trying to remove state reset point
2023-04-11T16:29:53.986+0300 INFO state reset point is removed {"took": "6.013µs"}
2023-04-11T16:29:55.899+0300 DEBUG state root information and NEP transfers are persisted {"took": "46.567302762s", "keys": 602578}
2023-04-11T16:29:55.929+0300 INFO state reset point information is persisted {"took": "1.942392208s", "keys": 2}
2023-04-11T16:29:55.929+0300 INFO reset finished successfully {"took": "5m51.561573137s"}
```
Master:
```
anna@kiwi:~/Documents/GitProjects/nspcc-dev/neo-go$ ./bin/neo-go db reset -m --debug --height 600000
2023-04-11T16:34:12.410+0300 INFO MaxBlockSize is not set or wrong, setting default value {"MaxBlockSize": 262144}
2023-04-11T16:34:12.410+0300 INFO MaxBlockSystemFee is not set or wrong, setting default value {"MaxBlockSystemFee": 900000000000}
2023-04-11T16:34:12.410+0300 INFO MaxTransactionsPerBlock is not set or wrong, using default value {"MaxTransactionsPerBlock": 512}
2023-04-11T16:34:12.410+0300 INFO MaxValidUntilBlockIncrement is not set or wrong, using default value {"MaxValidUntilBlockIncrement": 5760}
2023-04-11T16:34:12.413+0300 INFO restoring blockchain {"version": "0.2.8"}
2023-04-11T16:34:12.495+0300 INFO initialize state reset {"target height": 600000}
2023-04-11T16:34:12.513+0300 INFO trying to reset blocks, transactions and AERs
2023-04-11T16:35:03.582+0300 INFO intermediate batch of removed blocks, transactions and AERs is persisted {"batches persisted": 1, "took": "51.087226195s", "keys": 321895}
2023-04-11T16:35:31.302+0300 INFO intermediate batch of removed blocks, transactions and AERs is persisted {"batches persisted": 2, "took": "27.719871393s", "keys": 334823}
2023-04-11T16:35:41.309+0300 INFO last batch of removed blocks, transactions and AERs is persisted {"batches persisted": 3, "took": "10.007017388s", "keys": 208913}
2023-04-11T16:35:41.309+0300 INFO blocks, transactions ans AERs are reset {"took": "1m28.814245057s", "overall persisted keys": 865631}
2023-04-11T16:35:41.309+0300 INFO trying to reset contract storage items
2023-04-11T16:37:38.315+0300 INFO intermediate batch of contract storage items and IDs is persisted {"batch": 1, "took": "1m57.00650253s", "keys": 200000}
2023-04-11T16:39:29.704+0300 INFO intermediate batch of contract storage items and IDs is persisted {"batch": 2, "took": "1m51.385224725s", "keys": 200000}
2023-04-11T16:41:14.991+0300 INFO intermediate batch of contract storage items and IDs is persisted {"batch": 3, "took": "1m45.287483794s", "keys": 200000}
2023-04-11T16:41:31.667+0300 INFO last batch of contract storage items and IDs is persisted {"batch": 4, "took": "16.675347478s", "keys": 56945}
2023-04-11T16:41:31.667+0300 INFO contract storage items and IDs are reset {"took": "5m50.357775401s", "keys": 656944}
2023-04-11T16:41:31.667+0300 INFO trying to reset headers information
2023-04-11T16:42:16.779+0300 INFO headers information is reset {"took": "45.111354262s", "keys": 528438}
2023-04-11T16:42:16.784+0300 INFO trying to reset state root information and NEP transfers
2023-04-11T16:42:35.778+0300 INFO state root information and NEP transfers are reset {"took": "18.99373117s", "keys": 602578}
2023-04-11T16:42:35.781+0300 INFO trying to remove stale storage items
2023-04-11T16:42:43.884+0300 INFO stale storage items are reset {"took": "8.103929306s", "keys": 1350176}
2023-04-11T16:42:43.885+0300 INFO trying to remove state reset point
2023-04-11T16:42:43.926+0300 INFO stale reset point is removed {"took": "41.858883ms", "keys": 2}
2023-04-11T16:42:43.932+0300 INFO reset finished successfully {"took": "8m31.437493325s"}
```
Signed-off-by: Anna Shaleva <shaleva.ann@nspcc.ru>
Everywhere, including examples, external interop APIs, bindings generator
code and other valuable places. A couple of `interface{}` usages are
intentionally left in CHANGELOG.md, documentation and tests.
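For illustration, the replacement is purely mechanical (hypothetical function; any has been an alias for interface{} since Go 1.18, so behaviour doesn't change):
```
// Before:
//	func toSlice(vs ...interface{}) []interface{} { return vs }

// After (identical behaviour):
func toSlice(vs ...any) []any { return vs }
```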
We have both from and to here, so technically we can either drop the Neg/Neg
trick from processTokenTransfer() or drop one field from the structure (the
other side is a part of the key). Drop the field, since this makes the DB a
bit more compact. Change Amount to be a pointer along the way, since that's
the "native" form for big.Int; we've used a non-pointer field specifically to
avoid Neg/Neg problems, but it looks like this is not necessary.
This structure is only used by the RPC server, and I doubt anyone uses it via
*Blockchain.
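A hedged illustration of the resulting layout (field and type names are made up for the sketch, not the exact ones from the code):
```
package sketch

import "math/big"

// transferRecord keeps only one side of the transfer: the owning account is
// already a part of the DB key, so storing it again would only waste space.
type transferRecord struct {
	Counterparty [20]byte // the other side of the transfer
	Amount       *big.Int // pointer is the natural form for big.Int values
}
```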
d5a9af5860 is incompatible with the NeoFS mainnet sidechain, so we add the
old logic to the pre-Aspidochelone behaviour. Changing flags in
newMethodAndPrice() is unfortunately a bit less convenient, because it would
affect interop validity checks, so let's have this kludge here.
It directly affects node security, and the default here MUST BE the safe
choice, which is to do the verification. Otherwise it's just dangerous:
absent any VerifyBlocks configuration, we'll get an insecure node. This
option is not supposed to be frequently used and it doesn't affect the
ability to process blocks, so breaking compatibility (in a safe manner)
should be OK here.
And include some node-specific configurations there with backwards
compatibility. Note that in the future we'll remove Ledger's
fields from the ProtocolConfiguration and it'll be possible to access them in
Blockchain directly (not via .Ledger).
The other option tried was using two configuration types separately, but
that incurs more changes to the codebase; a single structure that behaves
almost like the old one is better for backwards compatibility.
Fixes #2676.
It doesn't store id->hash mappings for native contracts. We need blockchain's
GetContractScriptHash to serve both anyway, so it was changed a bit. The only
other direct user of native.GetContractScriptHash is the VM CLI, but I doubt
anyone will use it for native contracts (they have ~zero VM code anyway).
It's more generic and convenient than MillisecondsPerBlock. This setting is
made in a backwards-compatible fashion, but it'll override SecondsPerBlock if
both are used. Configurations are specifically not changed here; it's
important to check compatibility.
Fixes #2675.
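A hedged sketch of the override logic described above (the helper and its arguments are assumptions, not the actual configuration code):
```
package sketch

import "time"

// effectiveTimePerBlock resolves the block time from the new duration-based
// setting and the legacy SecondsPerBlock one: the new setting wins when both
// are present, otherwise the old value is converted for backwards
// compatibility.
func effectiveTimePerBlock(timePerBlock time.Duration, secondsPerBlock int) time.Duration {
	if timePerBlock != 0 {
		return timePerBlock
	}
	return time.Duration(secondsPerBlock) * time.Second
}
```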
Follow neo-project/neo#2807. Notice that this data is not cached; our
previous implementation didn't cache it either, and it shouldn't be a problem
(it's not on the hot path).
They can stay in the memory pool forever, because the consensus process will
never accept these transactions (and they may even block the consensus
process entirely).
We're paging these hashes, so we only need the previous full page and the
current one, plus some cache for various requests. Storing 1M hashes takes
32M of memory and it grows quickly. It also seriously affects node startup
time: most of what the node does at startup is reading these hashes, and the
longer the chain, the more time it needs.
Notice that this doesn't change the underlying DB scheme in any way.
If we only have the genesis block (or fewer than 2000 headers), then we
might as well use the generic logic below with a zero targetHash, because the
genesis block has a zero PrevHash (and its hash will naturally be the last
one on the chain going backwards).
Sometimes it can be hard to persist all changes at once; the process can
take almost all RAM and a lot of time. Here's an example of a mainnet reset
from 2.4M to 1:
```
anna@kiwi:~/Documents/GitProjects/nspcc-dev/neo-go$ ./bin/neo-go db reset -m --height 1
2022-11-20T17:16:48.236+0300 INFO MaxBlockSize is not set or wrong, setting default value {"MaxBlockSize": 262144}
2022-11-20T17:16:48.236+0300 INFO MaxBlockSystemFee is not set or wrong, setting default value {"MaxBlockSystemFee": 900000000000}
2022-11-20T17:16:48.237+0300 INFO MaxTransactionsPerBlock is not set or wrong, using default value {"MaxTransactionsPerBlock": 512}
2022-11-20T17:16:48.237+0300 INFO MaxValidUntilBlockIncrement is not set or wrong, using default value {"MaxValidUntilBlockIncrement": 5760}
2022-11-20T17:16:48.240+0300 INFO restoring blockchain {"version": "0.2.6"}
2022-11-20T17:16:48.297+0300 INFO initialize state reset {"target height": 1}
2022-11-20T17:16:48.300+0300 INFO trying to reset blocks, transactions and AERs
2022-11-20T17:19:29.313+0300 INFO blocks, transactions ans AERs are reset {"took": "2m41.015126493s", "keys": 3958420}
...
```
To avoid the OOM killer, split the blocks reset into multiple stages. It
increases operation time due to intermediate DB persists, but makes things
cleaner. Here's the result for almost the same DB height with the new
approach (a sketch of the batching idea follows the log):
```
anna@kiwi:~/Documents/GitProjects/nspcc-dev/neo-go$ ./bin/neo-go db reset -m --height 1
2022-11-20T17:39:42.023+0300 INFO MaxBlockSize is not set or wrong, setting default value {"MaxBlockSize": 262144}
2022-11-20T17:39:42.023+0300 INFO MaxBlockSystemFee is not set or wrong, setting default value {"MaxBlockSystemFee": 900000000000}
2022-11-20T17:39:42.023+0300 INFO MaxTransactionsPerBlock is not set or wrong, using default value {"MaxTransactionsPerBlock": 512}
2022-11-20T17:39:42.023+0300 INFO MaxValidUntilBlockIncrement is not set or wrong, using default value {"MaxValidUntilBlockIncrement": 5760}
2022-11-20T17:39:42.026+0300 INFO restoring blockchain {"version": "0.2.6"}
2022-11-20T17:39:42.071+0300 INFO initialize state reset {"target height": 1}
2022-11-20T17:39:42.073+0300 INFO trying to reset blocks, transactions and AERs
2022-11-20T17:40:11.735+0300 INFO intermediate batch of removed blocks, transactions and AERs is persisted {"batches persisted": 1, "took": "29.66363737s", "keys": 210973}
2022-11-20T17:40:33.574+0300 INFO intermediate batch of removed blocks, transactions and AERs is persisted {"batches persisted": 2, "took": "21.839208683s", "keys": 241203}
2022-11-20T17:41:29.325+0300 INFO intermediate batch of removed blocks, transactions and AERs is persisted {"batches persisted": 3, "took": "55.750698386s", "keys": 250593}
2022-11-20T17:42:12.532+0300 INFO intermediate batch of removed blocks, transactions and AERs is persisted {"batches persisted": 4, "took": "43.205892757s", "keys": 321896}
2022-11-20T17:43:07.978+0300 INFO intermediate batch of removed blocks, transactions and AERs is persisted {"batches persisted": 5, "took": "55.445398156s", "keys": 334822}
2022-11-20T17:43:35.603+0300 INFO intermediate batch of removed blocks, transactions and AERs is persisted {"batches persisted": 6, "took": "27.625292032s", "keys": 317131}
2022-11-20T17:43:51.747+0300 INFO intermediate batch of removed blocks, transactions and AERs is persisted {"batches persisted": 7, "took": "16.144359017s", "keys": 355832}
2022-11-20T17:44:05.176+0300 INFO intermediate batch of removed blocks, transactions and AERs is persisted {"batches persisted": 8, "took": "13.428733899s", "keys": 357690}
2022-11-20T17:44:32.895+0300 INFO intermediate batch of removed blocks, transactions and AERs is persisted {"batches persisted": 9, "took": "27.718548783s", "keys": 393356}
2022-11-20T17:44:51.814+0300 INFO intermediate batch of removed blocks, transactions and AERs is persisted {"batches persisted": 10, "took": "18.917954658s", "keys": 366492}
2022-11-20T17:45:07.208+0300 INFO intermediate batch of removed blocks, transactions and AERs is persisted {"batches persisted": 11, "took": "15.392642196s", "keys": 326030}
2022-11-20T17:45:18.776+0300 INFO intermediate batch of removed blocks, transactions and AERs is persisted {"batches persisted": 12, "took": "11.568255716s", "keys": 299884}
2022-11-20T17:45:25.862+0300 INFO last batch of removed blocks, transactions and AERs is persisted {"batches persisted": 13, "took": "7.086079594s", "keys": 190399}
2022-11-20T17:45:25.862+0300 INFO blocks, transactions ans AERs are reset {"took": "5m43.791214084s", "overall persisted keys": 3966301}
...
```
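A simplified sketch of the batching idea referenced above (the store interface here is hypothetical and only mirrors what the description needs):
```
package sketch

// batchedStore is a hypothetical minimal view of the cache used for the reset.
type batchedStore interface {
	Delete(key []byte)
	Persist() (int, error) // flushes accumulated changes to the lower store
}

// removeInBatches deletes the given keys, persisting after every
// maxKeysPerBatch removals so that the in-memory batch (and thus RAM usage)
// stays bounded.
func removeInBatches(s batchedStore, keys [][]byte, maxKeysPerBatch int) error {
	for i, k := range keys {
		s.Delete(k)
		if (i+1)%maxKeysPerBatch == 0 {
			if _, err := s.Persist(); err != nil {
				return err
			}
		}
	}
	_, err := s.Persist() // final, possibly partial, batch
	return err
}
```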
We need to keep the headers information consistent with header batches and
headers. This commit fixes a bug with blockchain initialization failing when
recovering from a state reset interrupted after the second stage
(blocks/txs/AERs removal):
```
anna@kiwi:~/Documents/GitProjects/nspcc-dev/neo-go$ ./bin/neo-go db reset -t --height 83000
2022-11-20T16:28:29.437+0300 INFO MaxValidUntilBlockIncrement is not set or wrong, using default value {"MaxValidUntilBlockIncrement": 5760}
2022-11-20T16:28:29.440+0300 INFO restoring blockchain {"version": "0.2.6"}
failed to create Blockchain instance: could not initialize blockchain: could not get header 1898cd356a4a2688ed1c6c7ba1fd6ba7d516959d8add3f8dd26232474d4539bd: key not found
```
Don't use the cache because it's not yet initialized. Also, perform safety
checks only if the state reset hasn't yet started. These fixes solve the
following problem when recovering from an interrupted state reset:
```
anna@kiwi:~/Documents/GitProjects/nspcc-dev/neo-go$ ./bin/neo-go db reset -t --height 83000
2022-11-20T15:51:31.431+0300 INFO MaxValidUntilBlockIncrement is not set or wrong, using default value {"MaxValidUntilBlockIncrement": 5760}
2022-11-20T15:51:31.434+0300 INFO restoring blockchain {"version": "0.2.6"}
failed to create Blockchain instance: could not initialize blockchain: current block height is 0, can't reset state to height 83000
```
We don't use all of the Stack functionality for it, so drop useless methods
and avoid some interface conversions. It increases single-node TPS by about
0.9%, so nothing really important there, but not a bad change either. Maybe it
can be reworked again with generics though.
Do not block subscribers until the unsubscription request to the RPC server
is completed. Otherwise, another notification may be received from the RPC
server, which will block the unsubscription process.
At the same time, fix the event-based waiter. We must not block the receiver
channel during unsubscription, because there's a chance that a subsequent
event will be sent by the server. We need to read this event in order not to
block the WSClient's readloop.
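A hedged sketch of the pattern (a hypothetical helper, not the actual WSClient/waiter code): keep reading incoming events until the unsubscription call returns, so the client's readloop is never blocked on us.
```
// unsubscribeAndDrain runs the (potentially slow) unsubscription call in the
// background and drains the receiver channel until it completes, so any
// event sent by the server in the meantime is consumed rather than blocking.
func unsubscribeAndDrain(unsubscribe func() error, events <-chan any) error {
	done := make(chan error, 1)
	go func() { done <- unsubscribe() }()
	for {
		select {
		case <-events: // drop events arriving while the unsubscription is in flight
		case err := <-done:
			return err
		}
	}
}
```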
Bad contract -> no contract. Unfortunately we've got a broken
6f1837723768f27a6f6a14452977e3e0e264f2cc contract on the mainnet which can't
be decoded (even though it had been saved successfully), so this is a
temporary fix for #2801 that allows starting a mainnet node after shutdown.
There is a security issue found in github.com/btcsuite/btcd that we don't
care about (we're only using the secp256k1 implementation), but GitHub
complains about it. We could update to github.com/btcsuite/btcd/btcec/v2, but
it's now just a thin wrapper over github.com/decred/dcrd/dcrec/secp256k1/v4,
so we'd better use it directly.
When a block is being spread through the network, we can get a lot of invs
with the same hash. Some stale nodes may also announce the previous or some
earlier block. We can avoid a full DB lookup for them and minimize inv
handling time (timeouts in the inv handler have happened in #2744).
It doesn't affect tests, it just makes the node a little less likely to spend
a considerable amount of time in the inv handler.
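A hedged sketch of such a shortcut (hypothetical types and names, not the actual server code), checking a tiny cache of recently processed block hashes before going to the DB:
```
package sketch

import "sync"

// recentHashes remembers a few recently processed block hashes so that
// repeated inv messages for the same (or a just-processed) block can be
// answered without a DB lookup.
type recentHashes struct {
	mtx    sync.RWMutex
	hashes [8][32]byte
	next   int
}

// Seen reports whether h is among the recently remembered hashes.
func (r *recentHashes) Seen(h [32]byte) bool {
	r.mtx.RLock()
	defer r.mtx.RUnlock()
	for _, cached := range r.hashes {
		if cached == h {
			return true
		}
	}
	return false
}

// Add remembers h, overwriting the oldest entry.
func (r *recentHashes) Add(h [32]byte) {
	r.mtx.Lock()
	r.hashes[r.next] = h
	r.next = (r.next + 1) % len(r.hashes)
	r.mtx.Unlock()
}
```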
Blockchain's notificationDispatcher sends events to channels, and these
channels must be read from. Unfortunately, the regular service shutdown
procedure does unsubscription first (outside of the read loop) and only then
drains the channel. While it waits for the unsubscription request to be
accepted, notificationDispatcher can try pushing more data into the same
channel, which will lead to a deadlock. Reading in the same method solves
this: any number of events can be pushed until the unsub channel accepts the
data.
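A hedged sketch of that reading-while-unsubscribing pattern (hypothetical channel types, not the actual shutdown code):
```
// sendUnsubAndDrain keeps consuming events while the unsubscription request
// is being delivered, so the dispatcher can never deadlock on a full channel.
func sendUnsubAndDrain(unsubCh chan<- chan any, events chan any) {
	for {
		select {
		case unsubCh <- events: // dispatcher accepted the unsubscription request
			return
		case <-events: // dispatcher pushed another event meanwhile, consume it
		}
	}
}
```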