wsReader() closes c.done first and then goes over the list of
c.respChannels. Technically this means that either of the two can be taken in
this select.
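To illustrate the point (a minimal standalone sketch, with channel names mirroring the description; the real WSClient wiring is more involved):
```
package main

import "fmt"

func main() {
	done := make(chan struct{})
	respCh := make(chan string, 1)

	close(done)        // shutdown is signalled first...
	respCh <- "result" // ...but a response is already queued

	// Both cases are ready; select picks one pseudo-randomly, so either
	// branch can run - exactly the situation described above.
	select {
	case <-done:
		fmt.Println("observed shutdown")
	case resp := <-respCh:
		fmt.Println("observed response:", resp)
	}
}
```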
Signed-off-by: Roman Khimov <roman@nspcc.ru>
Previously the RPC server shutdown procedure listened to the execution
channel and stopped at the first element that arrived in the queue. This
could lead to the following problems:
* the stopper could steal the execution result from a subscriber
* the stopper didn't wait for other subscription actions to complete
Add dedicated channel to `Server` for subscription routine. Close the
channel on `handleSubEvents` return and wait for signal in `Shutdown`.
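A minimal sketch of the pattern described above; the field and method names are illustrative, not the actual `Server` layout:
```
package main

type Server struct {
	subEventsDone chan struct{} // closed when handleSubEvents returns
}

func (s *Server) handleSubEvents() {
	defer close(s.subEventsDone) // signal completion to Shutdown
	// ... process subscription/unsubscription events here ...
}

func (s *Server) Shutdown() {
	// ... close inputs, stop accepting new subscriptions ...
	<-s.subEventsDone // wait for the subscription routine to finish
}

func main() {
	s := &Server{subEventsDone: make(chan struct{})}
	go s.handleSubEvents()
	s.Shutdown()
}
```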
Signed-off-by: Leonard Lyubich <leonard@morphbits.io>
Previously the RPC server could never be shut down completely if it failed
some start precondition (in particular, inability to serve HTTP on any
configured endpoint). The problem was caused by the following facts:
* the start method ran the subscription routine only after HTTP init succeeded
* the stop method blocked waiting for the subscription routine to return
Run the `handleSubEvents` routine unconditionally on fresh `Start`. With
this change, the `Shutdown` method won't deadlock since `handleSubEvents`
closes the wait channel.
Refs #2896.
Signed-off-by: Leonard Lyubich <leonard@morphbits.io>
There is an existing problem with RPC server shutdown freezing after a start
failure caused by some init actions (at least HTTP listen), as described in
#2896.
Add a dedicated unit test which checks that `Shutdown` returns within 5s
after the `Start` method encounters internal problems.
Signed-off-by: Leonard Lyubich <leonard@morphbits.io>
Move them to the core/network packages, close #2950. The name of the
mempool's unsorted transactions metric has been changed along the way to
match the core's metrics naming convention.
Signed-off-by: Anna Shaleva <shaleva.ann@nspcc.ru>
I've carefully checked the way a new service can be added to the
Blockchain instance or removed from it. The current implementation
of the SetNotary and SetOracle methods doesn't contain dangerous code,
and native contracts have atomic values everywhere a service
is stored.
The current implementation of Notary, Oracle and StateRoot services'
reload/disabling/enabling on SIGUSR1 is safe and doesn't require
any adjustment.
This commit closes #2944; it's not a bug in the code, it's just
stale documentation.
Signed-off-by: Anna Shaleva <shaleva.ann@nspcc.ru>
The `cache` persisting operation is concurrent with the storage modifications
made by further state reset stages. We can't allow a situation where data
from the next stage leaks into the previous stage. State reset stages
must be atomic in terms of DB persisting, thus use another `upperCache`
MemCached store to keep them apart.
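A minimal sketch of the layering idea, assuming neo-go's `storage` package API (`NewMemCachedStore`/`Persist`); the real reset code wires this differently and checks errors:
```
package main

import "github.com/nspcc-dev/neo-go/pkg/core/storage"

func main() {
	// Bottom layer: the real DB would be here; MemoryStore keeps the sketch simple.
	store := storage.NewMemoryStore()

	cache := storage.NewMemCachedStore(store)      // persisted in the background
	upperCache := storage.NewMemCachedStore(cache) // accumulates the current stage

	// The current reset stage writes to upperCache only.
	upperCache.Put([]byte("key"), []byte("value"))

	// Once the stage is finished, move it down as a whole, then persist the
	// lower cache concurrently with collecting the next stage.
	upperCache.Persist()
	go cache.Persist()
}
```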
Here are the results of mainnet's BoltDB reset from 1.1M to 6K:
This patch:
```
anna@kiwi:~/Documents/GitProjects/nspcc-dev/neo-go$ ./bin/neo-go db reset -m --debug --height 600000
2023-04-11T16:15:25.783+0300 INFO MaxBlockSize is not set or wrong, setting default value {"MaxBlockSize": 262144}
2023-04-11T16:15:25.783+0300 INFO MaxBlockSystemFee is not set or wrong, setting default value {"MaxBlockSystemFee": 900000000000}
2023-04-11T16:15:25.783+0300 INFO MaxTransactionsPerBlock is not set or wrong, using default value {"MaxTransactionsPerBlock": 512}
2023-04-11T16:15:25.783+0300 INFO MaxValidUntilBlockIncrement is not set or wrong, using default value {"MaxValidUntilBlockIncrement": 5760}
2023-04-11T16:15:25.787+0300 INFO restoring blockchain {"version": "0.2.8"}
2023-04-11T16:15:25.906+0300 INFO initializing state reset {"target height": 600000}
2023-04-11T16:15:25.906+0300 DEBUG trying to reset blocks, transactions and AERs
2023-04-11T16:15:57.031+0300 INFO intermediate batch of removed blocks, transactions and AERs is collected {"batch": 1, "took": "31.125057214s"}
2023-04-11T16:16:12.644+0300 DEBUG intermediate batch of removed blocks, transactions and AERs is persisted {"batch": 1, "took": "15.613156971s", "keys": 321895}
2023-04-11T16:16:13.895+0300 INFO intermediate batch of removed blocks, transactions and AERs is collected {"batch": 2, "took": "16.663444208s"}
2023-04-11T16:16:19.784+0300 INFO last batch of removed blocks, transactions and AERs is collected {"batch": 3, "took": "5.760308543s"}
2023-04-11T16:16:19.784+0300 INFO blocks, transactions ans AERs are reset {"took": "53.878632911s"}
2023-04-11T16:16:22.870+0300 DEBUG intermediate batch of removed blocks, transactions and AERs is persisted {"batch": 2, "took": "8.974838893s", "keys": 334823}
2023-04-11T16:16:22.870+0300 DEBUG trying to reset contract storage items
2023-04-11T16:16:27.272+0300 DEBUG last batch of removed blocks, transactions and AERs is persisted {"batch": 3, "took": "7.487357441s", "keys": 208913}
2023-04-11T16:17:23.678+0300 INFO intermediate batch of contract storage items and IDs is collected {"batch": 1, "took": "1m0.80711106s"}
2023-04-11T16:18:00.769+0300 INFO intermediate batch of contract storage items and IDs is collected {"batch": 2, "took": "36.967660061s"}
2023-04-11T16:18:20.478+0300 DEBUG intermediate batch of contract storage items is persisted {"batch": 1, "took": "56.796257788s", "keys": 200000}
2023-04-11T16:18:54.115+0300 INFO intermediate batch of contract storage items and IDs is collected {"batch": 3, "took": "33.637412437s"}
2023-04-11T16:19:18.844+0300 DEBUG intermediate batch of contract storage items is persisted {"batch": 2, "took": "1m18.0737668s", "keys": 200000}
2023-04-11T16:19:27.650+0300 INFO last batch of contract storage items is collected {"batch": 4, "took": "8.806264019s"}
2023-04-11T16:19:27.650+0300 INFO contract storage items are reset {"took": "3m4.780232077s", "keys": 656944}
2023-04-11T16:20:15.660+0300 DEBUG intermediate batch of contract storage items is persisted {"batch": 3, "took": "1m21.544386403s", "keys": 200000}
2023-04-11T16:20:15.660+0300 DEBUG trying to reset headers information
2023-04-11T16:20:16.385+0300 INFO headers information is reset {"took": "725.174932ms"}
2023-04-11T16:20:19.586+0300 DEBUG last batch of contract storage items and IDs is persisted {"batch": 4, "took": "51.936278608s", "keys": 56945}
2023-04-11T16:20:19.587+0300 DEBUG trying to reset state root information and NEP transfers
2023-04-11T16:20:35.845+0300 INFO state root information and NEP transfers are reset {"took": "16.25852114s"}
2023-04-11T16:21:10.000+0300 DEBUG headers information is persisted {"took": "53.613638429s", "keys": 528438}
2023-04-11T16:21:10.003+0300 DEBUG trying to remove stale storage items
2023-04-11T16:21:18.108+0300 INFO stale storage items are reset {"took": "8.105140658s", "keys": 1350176}
2023-04-11T16:21:18.108+0300 DEBUG trying to remove state reset point
2023-04-11T16:21:18.108+0300 INFO state reset point is removed {"took": "8.554µs"}
2023-04-11T16:21:20.151+0300 DEBUG state root information and NEP transfers are persisted {"took": "44.305707049s", "keys": 602578}
2023-04-11T16:21:20.212+0300 INFO state reset point information is persisted {"took": "2.103764633s", "keys": 2}
2023-04-11T16:21:20.213+0300 INFO reset finished successfully {"took": "5m54.306861367s"}
```
The previous commit:
```
anna@kiwi:~/Documents/GitProjects/nspcc-dev/neo-go$ ./bin/neo-go db reset -m --debug --height 600000
2023-04-11T16:24:04.256+0300 INFO MaxBlockSize is not set or wrong, setting default value {"MaxBlockSize": 262144}
2023-04-11T16:24:04.256+0300 INFO MaxBlockSystemFee is not set or wrong, setting default value {"MaxBlockSystemFee": 900000000000}
2023-04-11T16:24:04.256+0300 INFO MaxTransactionsPerBlock is not set or wrong, using default value {"MaxTransactionsPerBlock": 512}
2023-04-11T16:24:04.256+0300 INFO MaxValidUntilBlockIncrement is not set or wrong, using default value {"MaxValidUntilBlockIncrement": 5760}
2023-04-11T16:24:04.261+0300 INFO restoring blockchain {"version": "0.2.8"}
2023-04-11T16:24:04.368+0300 INFO initializing state reset {"target height": 600000}
2023-04-11T16:24:04.368+0300 DEBUG trying to reset blocks, transactions and AERs
2023-04-11T16:24:30.363+0300 INFO intermediate batch of removed blocks, transactions and AERs is collected {"batch": 1, "took": "25.995261037s"}
2023-04-11T16:24:44.947+0300 DEBUG intermediate batch of removed blocks, transactions and AERs is persisted {"batch": 1, "took": "14.584447338s", "keys": 321897}
2023-04-11T16:24:45.791+0300 INFO intermediate batch of removed blocks, transactions and AERs is collected {"batch": 2, "took": "15.428492824s"}
2023-04-11T16:24:51.252+0300 INFO last batch of removed blocks, transactions and AERs is collected {"batch": 3, "took": "5.460662766s"}
2023-04-11T16:24:51.252+0300 INFO blocks, transactions ans AERs are reset {"took": "46.884558096s"}
2023-04-11T16:24:55.399+0300 DEBUG intermediate batch of removed blocks, transactions and AERs is persisted {"batch": 2, "took": "9.607820004s", "keys": 334821}
2023-04-11T16:24:55.399+0300 DEBUG trying to reset contract storage items
2023-04-11T16:24:59.981+0300 DEBUG last batch of removed blocks, transactions and AERs is persisted {"batch": 3, "took": "8.728713255s", "keys": 208913}
2023-04-11T16:25:50.827+0300 INFO intermediate batch of contract storage items and IDs is collected {"batch": 1, "took": "55.426411416s"}
2023-04-11T16:26:28.734+0300 INFO intermediate batch of contract storage items and IDs is collected {"batch": 2, "took": "37.902647706s"}
2023-04-11T16:26:53.960+0300 DEBUG intermediate batch of contract storage items is persisted {"batch": 1, "took": "1m3.129453265s", "keys": 200001}
2023-04-11T16:27:27.645+0300 INFO intermediate batch of contract storage items and IDs is collected {"batch": 3, "took": "33.685283662s"}
2023-04-11T16:27:52.173+0300 DEBUG intermediate batch of contract storage items is persisted {"batch": 2, "took": "1m23.438465575s", "keys": 199999}
2023-04-11T16:28:00.995+0300 INFO last batch of contract storage items is collected {"batch": 4, "took": "8.821990443s"}
2023-04-11T16:28:00.995+0300 INFO contract storage items are reset {"took": "3m5.595950958s", "keys": 656944}
2023-04-11T16:28:49.164+0300 DEBUG intermediate batch of contract storage items is persisted {"batch": 3, "took": "1m21.518344712s", "keys": 200000}
2023-04-11T16:28:49.164+0300 DEBUG trying to reset headers information
2023-04-11T16:28:49.936+0300 INFO headers information is reset {"took": "772.36435ms"}
2023-04-11T16:28:53.122+0300 DEBUG last batch of contract storage items and IDs is persisted {"batch": 4, "took": "52.126928092s", "keys": 56945}
2023-04-11T16:28:53.122+0300 DEBUG trying to reset state root information and NEP transfers
2023-04-11T16:29:09.332+0300 INFO state root information and NEP transfers are reset {"took": "16.20921699s"}
2023-04-11T16:29:46.264+0300 DEBUG headers information is persisted {"took": "56.326715249s", "keys": 528438}
2023-04-11T16:29:46.267+0300 DEBUG trying to remove stale storage items
2023-04-11T16:29:53.986+0300 INFO stale storage items are reset {"took": "7.718950145s", "keys": 1350176}
2023-04-11T16:29:53.986+0300 DEBUG trying to remove state reset point
2023-04-11T16:29:53.986+0300 INFO state reset point is removed {"took": "6.013µs"}
2023-04-11T16:29:55.899+0300 DEBUG state root information and NEP transfers are persisted {"took": "46.567302762s", "keys": 602578}
2023-04-11T16:29:55.929+0300 INFO state reset point information is persisted {"took": "1.942392208s", "keys": 2}
2023-04-11T16:29:55.929+0300 INFO reset finished successfully {"took": "5m51.561573137s"}
```
Master:
```
anna@kiwi:~/Documents/GitProjects/nspcc-dev/neo-go$ ./bin/neo-go db reset -m --debug --height 600000
2023-04-11T16:34:12.410+0300 INFO MaxBlockSize is not set or wrong, setting default value {"MaxBlockSize": 262144}
2023-04-11T16:34:12.410+0300 INFO MaxBlockSystemFee is not set or wrong, setting default value {"MaxBlockSystemFee": 900000000000}
2023-04-11T16:34:12.410+0300 INFO MaxTransactionsPerBlock is not set or wrong, using default value {"MaxTransactionsPerBlock": 512}
2023-04-11T16:34:12.410+0300 INFO MaxValidUntilBlockIncrement is not set or wrong, using default value {"MaxValidUntilBlockIncrement": 5760}
2023-04-11T16:34:12.413+0300 INFO restoring blockchain {"version": "0.2.8"}
2023-04-11T16:34:12.495+0300 INFO initialize state reset {"target height": 600000}
2023-04-11T16:34:12.513+0300 INFO trying to reset blocks, transactions and AERs
2023-04-11T16:35:03.582+0300 INFO intermediate batch of removed blocks, transactions and AERs is persisted {"batches persisted": 1, "took": "51.087226195s", "keys": 321895}
2023-04-11T16:35:31.302+0300 INFO intermediate batch of removed blocks, transactions and AERs is persisted {"batches persisted": 2, "took": "27.719871393s", "keys": 334823}
2023-04-11T16:35:41.309+0300 INFO last batch of removed blocks, transactions and AERs is persisted {"batches persisted": 3, "took": "10.007017388s", "keys": 208913}
2023-04-11T16:35:41.309+0300 INFO blocks, transactions ans AERs are reset {"took": "1m28.814245057s", "overall persisted keys": 865631}
2023-04-11T16:35:41.309+0300 INFO trying to reset contract storage items
2023-04-11T16:37:38.315+0300 INFO intermediate batch of contract storage items and IDs is persisted {"batch": 1, "took": "1m57.00650253s", "keys": 200000}
2023-04-11T16:39:29.704+0300 INFO intermediate batch of contract storage items and IDs is persisted {"batch": 2, "took": "1m51.385224725s", "keys": 200000}
2023-04-11T16:41:14.991+0300 INFO intermediate batch of contract storage items and IDs is persisted {"batch": 3, "took": "1m45.287483794s", "keys": 200000}
2023-04-11T16:41:31.667+0300 INFO last batch of contract storage items and IDs is persisted {"batch": 4, "took": "16.675347478s", "keys": 56945}
2023-04-11T16:41:31.667+0300 INFO contract storage items and IDs are reset {"took": "5m50.357775401s", "keys": 656944}
2023-04-11T16:41:31.667+0300 INFO trying to reset headers information
2023-04-11T16:42:16.779+0300 INFO headers information is reset {"took": "45.111354262s", "keys": 528438}
2023-04-11T16:42:16.784+0300 INFO trying to reset state root information and NEP transfers
2023-04-11T16:42:35.778+0300 INFO state root information and NEP transfers are reset {"took": "18.99373117s", "keys": 602578}
2023-04-11T16:42:35.781+0300 INFO trying to remove stale storage items
2023-04-11T16:42:43.884+0300 INFO stale storage items are reset {"took": "8.103929306s", "keys": 1350176}
2023-04-11T16:42:43.885+0300 INFO trying to remove state reset point
2023-04-11T16:42:43.926+0300 INFO stale reset point is removed {"took": "41.858883ms", "keys": 2}
2023-04-11T16:42:43.932+0300 INFO reset finished successfully {"took": "8m31.437493325s"}
```
Signed-off-by: Anna Shaleva <shaleva.ann@nspcc.ru>
Everywhere, including examples, external interop APIs, bindings generators
code and other valuable places. A couple of `interface{}` usages are
intentionally left in the CHANGELOG.md, documentation and tests.
Do not add them directly to the chain, it will be done by the block queue
manager. Close https://github.com/nspcc-dev/neo-go/issues/2923. However,
this commit is not valid without
https://github.com/roman-khimov/dbft/pull/4.
It's neo-go's duty to initialize consensus after subsequent block
addition; dBFT itself must wait for neo-go to complete the block
addition and notify it, so that it can initialize at the 0-th view to
collect the next block.
It has a special `requestF` and a special initialization function, but other
than that it's an absolutely regular WSClient. Can be used to call, can be
used to subscribe. Fixes #2909.
According to the docs, `Server` uses the provided error channel only to write
encountered errors to it. In this case, there is no need to accept a rw
channel to create a `Server` instance. Strengthening the type to
write-only allows the caller to retain control of reading errors
from the provided channel.
The change is backward compatible since any `chan` is assignable to `chan<-`.
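For illustration (the function name is made up), a bidirectional channel is accepted implicitly where a send-only one is expected, so callers don't change:
```
package main

import (
	"errors"
	"fmt"
)

// startServer only writes errors, so it takes a send-only channel.
func startServer(errCh chan<- error) {
	errCh <- errors.New("listen failed")
}

func main() {
	errs := make(chan error, 1)
	startServer(errs) // chan error satisfies chan<- error implicitly
	fmt.Println(<-errs)
}
```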
Signed-off-by: Leonard Lyubich <ctulhurider@gmail.com>
The `Actor.MakeUnsignedUncheckedRun` method imposes a restriction on
`CalculateNetworkFee` implementations: the `Hash` or `Size` methods
must not be called on the pointer to the given transaction.
Add docs describing this requirement.
Signed-off-by: Leonard Lyubich <ctulhurider@gmail.com>
We have both from and to here, so technically we can either drop the neg/neg
trick from the processTokenTransfer() or drop one field from the structure
(the other side is a part of the key). Drop the field since this can make the
DB a bit more compact. Change Amount to be a pointer along the way since
that's the "native" thing for big.Int, we've used non-pointer field
specifically to avoid Neg/Neg problems, but it looks like this is not
necessary.
This structure is only used by the RPC server and I doubt anyone uses it via
the *Blockchain.
In some cases n.Add() can reuse the []Word buffer and n.Sub() can reallocate it
away. If that happens, we're out of luck with 0.99.0+ versions (since
3945e81857). I'm not sure why it does that, bit
width doesn't change in most of the cases and even if it does, we still have
enough of it in cap() to hold the old Abs() value (when we have a negative
value we in fact decrease its Abs() first and increase it back
afterwards). Still, that's what we have.
So when we have processTokenTransfer() doing Neg/Neg in-place its value is not
affected, but the original []Word bits that are reused by amount value are
(they're shared initially, Amount: *amount).
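A standalone illustration of the shared []Word pitfall (not the actual processTokenTransfer() code): copying a big.Int by value only copies the slice header, so an in-place operation that fits into the existing buffer rewrites both values:
```
package main

import (
	"fmt"
	"math/big"
)

func main() {
	amount := big.NewInt(5)
	// Value copy: the struct is copied, but the []Word buffer is shared.
	tr := struct{ Amount big.Int }{Amount: *amount}

	// In-place Add fits into the existing buffer and rewrites the shared word.
	amount.Add(amount, big.NewInt(1))

	fmt.Println(tr.Amount.String()) // prints 6, not the 5 that was "copied"
}
```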
name                     old time/op    new time/op    delta
ToPreallocatedBytes-8    65.8ns ± 2%    45.6ns ± 2%    -30.73%  (p=0.008 n=5+5)

name                     old alloc/op   new alloc/op   delta
ToPreallocatedBytes-8    0.00B          0.00B          ~        (all equal)

name                     old allocs/op  new allocs/op  delta
ToPreallocatedBytes-8    0.00           0.00           ~        (all equal)
d5a9af5860 is incompatible with the NeoFS
mainnet sidechain, so we add the old logic to the pre-Aspidochelone
behaviour. Changing flags at newMethodAndPrice() is a bit less convenient
unfortunately because this will affect interop validity checks, so let's have
this kludge here.
6b4dd5703e made it to be a uint16 which was
somewhat important for RPC, but now it's irrelevant and the fact that it was a
string in some cases may lead to errors like these:
    failed to unmarshal config YAML: yaml: unmarshal errors:
      line 48: cannot unmarshal !!str `20011` into uint16
      line 52: cannot unmarshal !!str `40001` into uint16
So for maximum backwards compatibility we'd better have a string here, and
eventually it'll be deleted anyway.
It directly affects node security and the default here MUST BE the safe choice
which is to do the verification. Otherwise it's just dangerous, absent any
VerifyBlocks configuration we'll get an insecure node. This option is not
supposed to be frequently used and it doesn't affect the ability to process
blocks, so breaking compatibility (in a safe manner) should be OK here.
And include some node-specific configurations there with backwards
compatibility. Note that in the future we'll remove Ledger's
fields from the ProtocolConfiguration and it'll be possible to access them in
Blockchain directly (not via .Ledger).
The other option tried was using two configuration types separately, but that
incurs more changes to the codebase, single structure that behaves almost like
the old one is better for backwards compatibility.
Fixes#2676.
It doesn't store id->hash mappings for native contracts. We need blockchain's
GetContractScriptHash to serve both anyway, so it was changed a bit. The only
other direct user of native.GetContractScriptHash is the VM CLI, but I doubt
anyone will use it for native contracts (they have ~zero VM code anyway).
There are no changes visible from the user side (at least for those
users who don't put Prometheus's or pprof's port in quotes), just
internal refactoring. From now and on, BasicService configuration is
used by RPC server config, TLS for RPC server, pprof and Prometheus.
It's more generic and convenient than MillisecondsPerBlock. This setting is
made in backwards-compatible fashion, but it'll override SecondsPerBlock if
both are used. Configurations are specifically not changed here, it's
important to check compatibility.
Fixes #2675.
Follow neo-project/neo#2807. Notice that this data is not cached; our previous
implementation wasn't either and it shouldn't be a problem (not on the hot path).
They can stay in the memory pool forever because the consensus process will never
accept these transactions (and they may even block the consensus process entirely).
We're paging these hashes, so we need a previous full page and a current one
plus some cache for various requests. Storing 1M of hashes is 32M of memory
and it grows quickly. It also seriously affects node startup time, most of
what it's doing is reading these hashes, the longer the chain the more time it
needs to do that.
Notice that this doesn't change the underlying DB scheme in any way.
If we only have genesis block (or <2000 headers) then we might as well use
generic logic below with zero targetHash because genesis block has zero
PrevHash (and its hash will naturally be the last on the chain going
backwards).
Let upper-layer APIs like actor.Send() return it as well. Server can return
"already exists" which is an error and yet at the same time a very special
one, in many cases it means we can proceed with waiting for the TX to settle.
Sometimes it can be hard to persist all changes at once; the process
can take almost all RAM and a lot of time. Here's an example of a reset
for mainnet from 2.4M to 1:
```
anna@kiwi:~/Documents/GitProjects/nspcc-dev/neo-go$ ./bin/neo-go db reset -m --height 1
2022-11-20T17:16:48.236+0300 INFO MaxBlockSize is not set or wrong, setting default value {"MaxBlockSize": 262144}
2022-11-20T17:16:48.236+0300 INFO MaxBlockSystemFee is not set or wrong, setting default value {"MaxBlockSystemFee": 900000000000}
2022-11-20T17:16:48.237+0300 INFO MaxTransactionsPerBlock is not set or wrong, using default value {"MaxTransactionsPerBlock": 512}
2022-11-20T17:16:48.237+0300 INFO MaxValidUntilBlockIncrement is not set or wrong, using default value {"MaxValidUntilBlockIncrement": 5760}
2022-11-20T17:16:48.240+0300 INFO restoring blockchain {"version": "0.2.6"}
2022-11-20T17:16:48.297+0300 INFO initialize state reset {"target height": 1}
2022-11-20T17:16:48.300+0300 INFO trying to reset blocks, transactions and AERs
2022-11-20T17:19:29.313+0300 INFO blocks, transactions ans AERs are reset {"took": "2m41.015126493s", "keys": 3958420}
...
```
To avoid the OOM killer, split the blocks reset into multiple stages. It increases
operation time due to intermediate DB persists, but makes things cleaner; the
result for almost the same DB height with the new approach:
```
anna@kiwi:~/Documents/GitProjects/nspcc-dev/neo-go$ ./bin/neo-go db reset -m --height 1
2022-11-20T17:39:42.023+0300 INFO MaxBlockSize is not set or wrong, setting default value {"MaxBlockSize": 262144}
2022-11-20T17:39:42.023+0300 INFO MaxBlockSystemFee is not set or wrong, setting default value {"MaxBlockSystemFee": 900000000000}
2022-11-20T17:39:42.023+0300 INFO MaxTransactionsPerBlock is not set or wrong, using default value {"MaxTransactionsPerBlock": 512}
2022-11-20T17:39:42.023+0300 INFO MaxValidUntilBlockIncrement is not set or wrong, using default value {"MaxValidUntilBlockIncrement": 5760}
2022-11-20T17:39:42.026+0300 INFO restoring blockchain {"version": "0.2.6"}
2022-11-20T17:39:42.071+0300 INFO initialize state reset {"target height": 1}
2022-11-20T17:39:42.073+0300 INFO trying to reset blocks, transactions and AERs
2022-11-20T17:40:11.735+0300 INFO intermediate batch of removed blocks, transactions and AERs is persisted {"batches persisted": 1, "took": "29.66363737s", "keys": 210973}
2022-11-20T17:40:33.574+0300 INFO intermediate batch of removed blocks, transactions and AERs is persisted {"batches persisted": 2, "took": "21.839208683s", "keys": 241203}
2022-11-20T17:41:29.325+0300 INFO intermediate batch of removed blocks, transactions and AERs is persisted {"batches persisted": 3, "took": "55.750698386s", "keys": 250593}
2022-11-20T17:42:12.532+0300 INFO intermediate batch of removed blocks, transactions and AERs is persisted {"batches persisted": 4, "took": "43.205892757s", "keys": 321896}
2022-11-20T17:43:07.978+0300 INFO intermediate batch of removed blocks, transactions and AERs is persisted {"batches persisted": 5, "took": "55.445398156s", "keys": 334822}
2022-11-20T17:43:35.603+0300 INFO intermediate batch of removed blocks, transactions and AERs is persisted {"batches persisted": 6, "took": "27.625292032s", "keys": 317131}
2022-11-20T17:43:51.747+0300 INFO intermediate batch of removed blocks, transactions and AERs is persisted {"batches persisted": 7, "took": "16.144359017s", "keys": 355832}
2022-11-20T17:44:05.176+0300 INFO intermediate batch of removed blocks, transactions and AERs is persisted {"batches persisted": 8, "took": "13.428733899s", "keys": 357690}
2022-11-20T17:44:32.895+0300 INFO intermediate batch of removed blocks, transactions and AERs is persisted {"batches persisted": 9, "took": "27.718548783s", "keys": 393356}
2022-11-20T17:44:51.814+0300 INFO intermediate batch of removed blocks, transactions and AERs is persisted {"batches persisted": 10, "took": "18.917954658s", "keys": 366492}
2022-11-20T17:45:07.208+0300 INFO intermediate batch of removed blocks, transactions and AERs is persisted {"batches persisted": 11, "took": "15.392642196s", "keys": 326030}
2022-11-20T17:45:18.776+0300 INFO intermediate batch of removed blocks, transactions and AERs is persisted {"batches persisted": 12, "took": "11.568255716s", "keys": 299884}
2022-11-20T17:45:25.862+0300 INFO last batch of removed blocks, transactions and AERs is persisted {"batches persisted": 13, "took": "7.086079594s", "keys": 190399}
2022-11-20T17:45:25.862+0300 INFO blocks, transactions ans AERs are reset {"took": "5m43.791214084s", "overall persisted keys": 3966301}
...
```
We need to keep the headers information consistent with header batches
and headers. This commit fixes the bug with blockchain initialization
failing on recovering from a state reset interrupted after the
second stage (blocks/txs/AERs removal):
```
anna@kiwi:~/Documents/GitProjects/nspcc-dev/neo-go$ ./bin/neo-go db reset -t --height 83000
2022-11-20T16:28:29.437+0300 INFO MaxValidUntilBlockIncrement is not set or wrong, using default value {"MaxValidUntilBlockIncrement": 5760}
2022-11-20T16:28:29.440+0300 INFO restoring blockchain {"version": "0.2.6"}
failed to create Blockchain instance: could not initialize blockchain: could not get header 1898cd356a4a2688ed1c6c7ba1fd6ba7d516959d8add3f8dd26232474d4539bd: key not found
```
Don't use the cache because it's not yet initialized. Also, perform
safety checks only if the state reset hasn't started yet. These fixes
allow us to solve the following problem while recovering from an
interrupted state reset:
```
anna@kiwi:~/Documents/GitProjects/nspcc-dev/neo-go$ ./bin/neo-go db reset -t --height 83000
2022-11-20T15:51:31.431+0300 INFO MaxValidUntilBlockIncrement is not set or wrong, using default value {"MaxValidUntilBlockIncrement": 5760}
2022-11-20T15:51:31.434+0300 INFO restoring blockchain {"version": "0.2.6"}
failed to create Blockchain instance: could not initialize blockchain: current block height is 0, can't reset state to height 83000
```
We don't use all of the Stack functionality for it, so drop useless methods
and avoid some interface conversions. It increases single-node TPS by about
0.9%, so nothing really important there, but not a bad change either. Maybe it
can be reworked again with generics though.
Blockchain's subscriptions, unsubscriptions and notifications are
handled by a single notificationDispatcher routine. Thus, while it
attempts to send the subsequent event to Blockchain's subscribers, the
dispatcher can't handle subscriptions/unsubscriptions. Make subscription and
unsubscription non-blocking operations for the blockchain on the
server side, otherwise they may lock up the dispatcher.
To achieve this, use a separate lock for the code that makes calls
to the blockchain's subscription API and for the subscription counters on
the server side.
Small (especially dockerized/virtualized) networks often start all nodes at
once and then we see a lot of connection flapping in the log. This happens
because nodes try to connect to each other simultaneously, establish two
connections, then each one finds a duplicate and drops it, but these can be
different duplicate connections on the two sides, so they retry and it all
goes on for some time. Eventually everything settles, but we have a lot of
garbage in the log and a lot of useless attempts.
This random waiting timeout doesn't change the logic much, adds a minimal
delay, but increases chances for both nodes to establish a proper single
connection on both sides to only then see another one and drop it on both
sides as well. It leads to almost no flapping in small networks, doesn't
affect much bigger ones. The delay is close to unnoticeable, especially if
there is something in the DB for the node to process during startup.
Consider mainnet: it has an AttemptConnPeers of 20, so a node may already have 3
peers and request 20 more, then have a 4th connected and attempt 20 more again;
this easily leads to a huge number of connections.
Consider initial connection phase for public networks:
* simultaneous connections to seeds
* very quick handshakes
* got five handshaked peers and some getaddr requests sent
* but addr replies won't trigger new connections
* so we can stay with just five connections until any of them breaks or a
(long) address checking timer fires
This new timer solves the problem and it's adaptive at the same time. If we have
enough peers we won't be waking up often.
* treat connected/handshaked peers separately in the discoverer, save
"original" address for connected ones, it can be a name instead of IP and
it's important to keep it to avoid reconnections
* store name->IP mapping for seeds if and when they're connected to avoid
reconnections
* block seed if it's detected to be our own node (which is often the case for
small private networks)
* add an event for handshaked peers in the server, connected but
non-handshaked ones are not really helpful for MinPeers or GetAddr logic
Fixes #2796.
If the VUB-th block is received, we still can't guarantee that the transaction
wasn't accepted to the chain. Handle this situation by falling back to a
poll-based waiter.
Do not block subscribers until the unsubscription request to RPC server
is completed. Otherwise, another notification may be received from the
RPC server which will block the unsubscription process.
At the same time, fix the event-based waiter. We must not block the receiver
channel during unsubscription because there's a chance that a subsequent
event will be sent by the server. We need to read this event in order not
to block the WSClient's readloop.
Bad contract -> no contract. Unfortunately we've got a broken
6f1837723768f27a6f6a14452977e3e0e264f2cc contract on the mainnet which can't
be decoded (even though it had been saved successfully), so this is a
temporary fix for #2801 to be able to start mainnet node after shutdown.
v.estack is used throughout the code to work with estack, while ctx.sc.estack
is (theoretically) just a reference to it that is saved on script load and
restored to v.estack on context unload. The problem is that v.estack can grow
as we use it and can be reallocated away from its original slice (saved in the
ctx.sc.estack), so either ctx.sc.estack should be a pointer or we need to
ensure that it's correct when loading a new script. The second approach is a
bit safer for now and it fixes #2798.
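An illustration of the underlying slice-aliasing issue with plain ints instead of the VM's stack items: once append reallocates, a previously saved header keeps pointing at the old array:
```
package main

import "fmt"

func main() {
	live := make([]int, 2, 2)
	saved := live // copied slice header, same backing array for now

	live = append(live, 3) // over capacity: live moves to a new array
	live[0] = 42

	fmt.Println(saved[0], live[0]) // 0 42: saved no longer tracks live
}
```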
client_test.go:1935:
Error Trace: /home/rik/dev/neo-go/pkg/services/rpcsrv/client_test.go:1935
Error: Should NOT be empty, but was 00000000-0000-0000-0000-000000000000
Test: TestClient_Iterator_SessionConfigVariations/sessions_disabled
It's obviously empty, since we have sessions disabled, but it was not
considered to be empty in testify 1.7.0, now it is, see 840cb80149
There is a security issue found in github.com/btcsuite/btcd that we don't care
about (we're only using 256k1 implementation), but GitHub complains about
it. We could update to github.com/btcsuite/btcd/btcec/v2, but it's now just a
thin wrapper over github.com/decred/dcrd/dcrec/secp256k1/v4, so we better use
it directly.
* strip NEP-XX methods before going into generator to avoid unused imports
* nepXX.Invoker types already include Call
* always import util, it's used for Hash
Execution events are followed by block events, not vice versa, thus
we can wait until the VUB block is accepted to be sure that the
transaction wasn't accepted to the chain.
Every 1000 blocks seems to be OK for big networks (they had only done some
initial requests previously and then effectively never requested addresses
again because there was a sufficient number of addresses) and won't hurt smaller
ones either (they effectively keep doing this on every connect/disconnect;
peer changes are very rare there, but when they happen we want to have some
quick reaction to them).
32 is a very good number, but we all know 42 is a better one. And it can even
be proven by tests with higher peaking TPS values.
You may wonder why it is so good? Because we're mostly using packet-switched
networks and a packet is a packet almost irrespective of how big it
is. Yet a packet has some maximum possible size (hi, MTU) and this size most
of the time is 1500 (or a little less than that, hi VPN). Subtract the IP header
(20 for IPv4 or 40 for IPv6, not counting options), the TCP header (another 20) and
the Neo message/payload headers (~8 for this case) and we have just a little more
than 1400 bytes for our dear hashes. Which means that in a single packet most
of the time we can have 42-44 of them, maybe 45. Choosing between these
numbers is not hard then.
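The same arithmetic spelled out as a tiny calculation (the ~8-byte Neo overhead is the approximation used above):
```
package main

import "fmt"

func main() {
	const (
		mtu      = 1500
		ipv4Hdr  = 20 // 40 for IPv6, more with options
		tcpHdr   = 20
		neoHdrs  = 8 // message + payload headers, approximate
		hashSize = 32
	)
	payload := mtu - ipv4Hdr - tcpHdr - neoHdrs
	fmt.Println(payload, payload/hashSize) // 1452 bytes, 45 hashes at best
}
```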
We have AttemptConnPeers that is closely related, the more we have there the
bigger the network supposedly is, so it's much better than magic minPoolCount.
When a block is being spread through the network we can get a lot of invs with
the same hash. Some more stale nodes may also announce a previous or some
earlier block. We can avoid full DB lookup for them and minimize inv handling
time (timeouts in inv handler had happened in #2744).
It doesn't affect tests, just makes node a little less likely to spend some
considerable amount of time in the inv handler.
Sometimes we already have it, but it's not yet processed, so we can save on
getdata request. It only affects very high-speed networks like 4-1 scenario
and it doesn't affect it a lot, but still we can do it.
This is not exactly the protocol-level batching as was tried in #1770 and
proposed by neo-project/neo#2365, but it's a TCP-level change in that we now
Write() a set of messages, and given that Go sets up TCP sockets with
TCP_NODELAY by default this is a substantial change: we have fewer packets
generated with the same amount of data. It doesn't change anything on properly
connected networks, but the ones with delays benefit from it a lot.
This also improves queueing because we no longer generate 32 messages to
deliver on transaction's GetData, it's just one stream of bytes with 32
messages inside.
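A minimal sketch of the idea, assuming messages are already serialized (the server's real queueing code differs): one Write() for the whole set instead of one per message:
```
package main

import (
	"bytes"
	"net"
)

// writeBatch concatenates already-serialized messages and sends them with a
// single Write(), so a TCP_NODELAY socket emits fewer packets for the same data.
func writeBatch(conn net.Conn, msgs [][]byte) error {
	var buf bytes.Buffer
	for _, m := range msgs {
		buf.Write(m)
	}
	_, err := conn.Write(buf.Bytes())
	return err
}

func main() {
	client, server := net.Pipe()
	go func() {
		b := make([]byte, 64)
		server.Read(b) // the peer sees one byte stream with both messages inside
		server.Close()
	}()
	writeBatch(client, [][]byte{[]byte("msg1"), []byte("msg2")})
	client.Close()
}
```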
Do the same with GetBlocksByIndex, we can have a lot of messages there too.
But don't forget about potential peer DoS attacks, if a peer is to request a
lot of big blocks we need to flush them before we process the whole set.
This allows transaction processing to scale naturally if we have some peer
that is sending a lot of them while others are mostly silent. It also can help
somewhat in the event we have 50 peers that all send transactions. 4+1
scenario benefits a lot from it, while 7+2 slows down a little. Delayed
scenarios don't care.
Surprisingly, this also makes disconnects (#2744) much more rare, 4-node
scenario almost never sees it now. Most probably this is the case where peers
affect each other a lot, single-threaded transaction receiver can be slow
enough to trigger some timeout in getdata handler of its peer (because it
tries to push a number of replies).
It makes sense in general (further narrowing down the time window when
transactions are processed by consensus thread) and it improves block times a
little too, especially in the 7+2 scenario.
Related to #2744.
Until the consensus process starts for a new block and until it really needs
some transactions we can spare some cycles by not delivering transactions to
it. In tests this doesn't affect TPS, but makes block delays a bit more
stable. Related to #2744, I think it also may cause timeouts during
transaction processing (waiting on the consensus process channel while it does
something dBFT-related).
When the network is big enough, MinPeers may be suboptimal for good network
connectivity, but if we know the network size we can do some estimation on the
number of sufficient peers.
Share parameters parsing code between 'contract invokefunction' and
'vm run' commands. It allows VM CLI to parse more complicated parameter
types including arrays and file-backed bytestrings.
They can fail right in the getPeers or they can fail later when packet send
is attempted. Of course they can complete handshake in-between these events,
but most likely they won't and we'll waste more resources on this attempt. So
rule out bad peers immediately.
Drop EnqueueP2PPacket, replace EnqueueHPPacket with EnqueueHPMessage. We use
Enqueue* when we have a specific per-peer message, it makes zero sense
duplicating serialization code for it (unlike Broadcast*).
Follow the general rules of broadcasts, even though it's somewhat different
from Inv, we just want to get some reply from our neighbors to see if we're
behind. We don't strictly need all neighbors for it.
We have a number of queues for different purposes:
* regular broadcast queue
* direct p2p queue
* high-priority queue
And two basic egress scenarios:
* direct p2p messages (replies to requests in Server's handle* methods)
* broadcasted messages
Low priority broadcasted messages:
* transaction inventories
* block inventories
* notary inventories
* non-consensus extensibles
High-priority broadcasted messages:
* consensus extensibles
* getdata transaction requests from consensus process
* getaddr requests
P2P messages are a bit more complicated, most of the time they use p2p queue,
but extensible message requests/replies use HP queue.
Server's handle* code is run from Peer's handleIncoming, every peer has this
thread that handles incoming messages. When working with the peer it's
important to reply to requests and blocking this thread until we send (queue)
a reply is fine, if the peer is slow we just won't get anything new from
it. The queue used is irrelevant wrt this issue.
Broadcasted messages are radically different, we want them to be delivered to
many peers, but we don't care about specific ones. If it's delivered to 2/3 of
the peers we're fine, if it's delivered to more of them --- it's not an
issue. But doing this fairly is not an easy thing, current code tries performing
unblocked sends and if this doesn't yield enough results it then blocks (but
has a timeout, we can't wait indefinitely). But it does so in sequential
manner, once the peer is chosen the code will wait for it (and only it) until
timeout happens.
What can be done instead is an attempt to push the message to all of the peers
simultaneously (or close to that). If they all deliver --- OK, if some block
and wait then we can wait until _any_ of them pushes the message through (or
global timeout happens, we still can't wait forever). If we have enough
deliveries then we can cancel pending ones and it's again not an error if
these canceled threads still do their job.
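A minimal sketch of this fan-out approach; the Peer interface, timings and the notion of "enough" deliveries are assumptions, not the server's real types:
```
package main

import (
	"context"
	"fmt"
	"time"
)

// Peer is an assumed abstraction: Send blocks until the message is queued or
// the context is canceled.
type Peer interface {
	Send(ctx context.Context, msg []byte) error
}

// broadcast pushes msg to every peer in parallel and returns once enough
// sends went through; the remaining sends are canceled via the context.
func broadcast(ctx context.Context, peers []Peer, msg []byte, enough int) {
	ctx, cancel := context.WithCancel(ctx)
	defer cancel() // cancels sends still pending once we have enough

	done := make(chan struct{}, len(peers))
	for _, p := range peers {
		go func(p Peer) {
			if p.Send(ctx, msg) == nil {
				done <- struct{}{}
			}
		}(p)
	}
	for i := 0; i < enough; i++ {
		select {
		case <-done:
		case <-ctx.Done():
			return // global timeout/cancellation, stop waiting
		}
	}
}

// slowPeer simulates a peer with a fixed queueing delay.
type slowPeer struct{ delay time.Duration }

func (s slowPeer) Send(ctx context.Context, _ []byte) error {
	select {
	case <-time.After(s.delay):
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

func main() {
	peers := []Peer{slowPeer{time.Millisecond}, slowPeer{time.Millisecond}, slowPeer{time.Second}}
	start := time.Now()
	broadcast(context.Background(), peers, []byte("inv"), 2)
	fmt.Println("returned after", time.Since(start)) // ~1ms, the slow peer gets canceled
}
```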
This makes the system more dynamic and adds some substantial processing
overhead, but it's a networking code, any of this overhead is much lower than
the actual packet delivery time. It also allows spreading the load more
fairly: if there is any spare queue it'll get the packet and release the
broadcaster. On the next broadcast iteration another peer is more likely to be
chosen just because it didn't get a message previously (and had some time to
deliver already queued messages).
It works perfectly in tests, with optimal networking conditions we have much
better block times and TPS increases by 5-25% depending on the scenario.
I'd go as far as to say that it fixes the original problem of #2678, because
in this particular scenario we have empty queues in ~100% of the cases and
this new logic will likely lead to 100% fan out in this case (cancelation just
won't happen fast enough). But when the load grows and there is some waiting
in the queue it will optimize out the slowest links.
The value of a PublicKey parameter always stores public key bytes, not the
deserialized representation. All other code (CLI parameters parsing with
its NewParameterFromString, Parameter unmarshaller, etc.) is based on
the idea that value of PublicKey is []byte.
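For illustration, assuming neo-go's smartcontract and keys packages (the exact construction site varies), the parameter's Value holds the serialized key bytes:
```
package main

import (
	"fmt"

	"github.com/nspcc-dev/neo-go/pkg/crypto/keys"
	"github.com/nspcc-dev/neo-go/pkg/smartcontract"
)

func main() {
	priv, err := keys.NewPrivateKey()
	if err != nil {
		panic(err)
	}
	// The PublicKey parameter's Value is the serialized key, not *keys.PublicKey.
	p := smartcontract.Parameter{
		Type:  smartcontract.PublicKeyType,
		Value: priv.PublicKey().Bytes(),
	}
	fmt.Printf("%T\n", p.Value) // []uint8
}
```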
Peers can be slow, very slow, slow enough to affect node's regular
operation. We can't wait for them indefinitely, there has to be a timeout for
send operations.
This patch uses TimePerBlock as a reference for its timeout. It's relatively
big and it doesn't affect tests much: 4+1 scenarios tend to perform a little
worse with it, while 7+2 scenarios work a little better. The difference is within a few
percent, but all of these tests easily have 10-15% variations from run to
run.
It's an important step in making our gossip better because we can't have any
behavior where neighbors directly block the node forever, refs. #2678 and