There are no changes visible from the user side (at least for those
users who don't put Prometheus's or pprof's port in quotes), just
internal refactoring. From now on, the BasicService configuration is
used by the RPC server config, TLS for the RPC server, pprof and
Prometheus.
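For reference, a minimal sketch of what such a shared section could look
like (field names and types below are assumptions for illustration, not
the exact neo-go definitions):

```go
package config

// BasicService sketches a shared configuration section for simple services
// (pprof, Prometheus, the RPC server). Field names and types are assumptions,
// the real neo-go definitions may differ.
type BasicService struct {
	Enabled bool   `yaml:"Enabled"`
	Address string `yaml:"Address"`
	Port    uint16 `yaml:"Port"` // numeric, which is why a port quoted in YAML would now misbehave
}

// Service-specific configs can then simply embed it.
type Prometheus struct {
	BasicService `yaml:",inline"`
}
```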
It's more generic and convenient than MillisecondsPerBlock. This setting is
made in a backwards-compatible fashion, but it'll override SecondsPerBlock if
both are used. Configurations are deliberately not changed here, since it's
important to check compatibility.
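A minimal sketch of that backward-compatible resolution, assuming the new
option is a duration-typed setting (all names below are illustrative, not
the exact neo-go API):

```go
package config

import "time"

// blockInterval resolves the effective block time: the newer duration-typed
// setting wins whenever it's set, otherwise the legacy SecondsPerBlock value
// is converted.
func blockInterval(timePerBlock time.Duration, secondsPerBlock int) time.Duration {
	if timePerBlock != 0 {
		return timePerBlock // overrides SecondsPerBlock when both are set
	}
	return time.Duration(secondsPerBlock) * time.Second
}
```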
Fixes #2675.
HF_ prefix makes zero sense to me. If it's "hardfork", then it's in the
"Hardforks" section already. If it's "hotfix", then it made some sense back
when it was HF_2712_FixSyscallFees, but now it's codenamed anyway. So we can
drop it and have a cleaner config.
https://github.com/nspcc-dev/neo-go/pull/2435 breaks compatibility
between newer RPC clients and older RPC servers with the following
error:
```
failed to get network magic: json: cannot unmarshal string into Go struct field Protocol.protocol.initialgasdistribution of type int64
```
This behaviour is expected, but we can't allow such a radical change.
Thus, the following solution is implemented:
1. The RPC server responds with a proper non-stringified
   InitialGasDistribution value. The value is the integral Fixed8
   representation, i.e. the amount multiplied by 10^8 (the number of
   decimals).
2. The RPC client is able to distinguish older and newer responses. For
   older ones, the stringified value without the decimals part is
   expected; for newer responses, the int64 value with the decimals part
   is expected (see the sketch after this list).
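A hedged sketch of how the client can accept both formats (the helper and
its name are hypothetical, not the actual neo-go code): try the plain
integer first, then fall back to the older stringified whole-GAS form and
restore the Fixed8 scale:

```go
package result

import (
	"encoding/json"
	"strconv"
)

// decimalsFactor is 10^8, the Fixed8 scale used by GAS.
const decimalsFactor = 100_000_000

// initialGASFromJSON accepts both the newer int64 form and the older
// stringified form without the decimals part.
func initialGASFromJSON(raw json.RawMessage) (int64, error) {
	var i int64
	if err := json.Unmarshal(raw, &i); err == nil {
		return i, nil // newer server: integer already scaled by 10^8
	}
	var s string
	if err := json.Unmarshal(raw, &s); err != nil {
		return 0, err
	}
	v, err := strconv.ParseInt(s, 10, 64)
	if err != nil {
		return 0, err
	}
	return v * decimalsFactor, nil // older server: restore the Fixed8 scale
}
```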
The kludge will be present in the code for a while until nodes of
version <=0.98.3 become completely obsolete.
The key idea here is that even though we can't ensure the MPT code won't make
the node active again, we can order the changes made to the persistent store
in such a way that it practically doesn't matter. What happens is:
* after persist, if it's time to collect our garbage, we do it synchronously
  right in the same thread, working with the underlying persistent store
  directly
* all the other node code doesn't see much of it, since it works with bc.dao
  or layers above it
* if MPT doesn't find some stale deactivated node in the storage, that's OK,
  it'll recreate it in bc.dao
* if MPT finds it and activates it, it's OK too, bc.dao will store it
* while GC is being performed nothing else changes the persistent store
* all subsequent bc.dao persists only happen after the GC is completed, which
  means that any changes to the (potentially) deleted nodes take priority;
  it's OK for GC to delete something that'll be recreated with the next
  persist cycle (see the sketch after this list)
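A simplified sketch of that ordering (types and method names below are
illustrative stubs, not the actual neo-go code):

```go
package core

// Illustrative stubs; the real neo-go types are richer.
type dao interface{ Persist() error }

type Blockchain struct {
	dao      dao
	gcPeriod uint32
}

// removeStaleMPTNodes walks the underlying store directly and deletes nodes
// that have been inactive for longer than the retention window (sketch only).
func (bc *Blockchain) removeStaleMPTNodes(height uint32) {}

// persistWithGC flushes bc.dao to the persistent store and, when the GC
// period comes, collects stale MPT nodes synchronously in the same thread,
// so no other persist can interleave with the deletions.
func (bc *Blockchain) persistWithGC(height uint32) error {
	if err := bc.dao.Persist(); err != nil {
		return err
	}
	if height%bc.gcPeriod == 0 {
		// The store is quiescent here; anything GC deletes that's still
		// needed will simply be recreated by a subsequent persist cycle.
		bc.removeStaleMPTNodes(height)
	}
	return nil
}
```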
Otherwise it's a simple scheme with node status/last active height stored in
the value. Preliminary tests show that it works ~18% worse than the simple
KeepOnlyLatest scheme, but this seems to be the best result so far.
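One plausible value layout for such a scheme, purely as an assumption for
illustration (the real neo-go encoding may differ):

```go
package mpt

import "encoding/binary"

// makeNodeValue appends a status byte (1 = active) and the little-endian
// height at which the node was last active to the serialized node bytes.
func makeNodeValue(node []byte, active bool, height uint32) []byte {
	v := make([]byte, 0, len(node)+5)
	v = append(v, node...)
	status := byte(0)
	if active {
		status = 1
	}
	v = append(v, status)
	var h [4]byte
	binary.LittleEndian.PutUint32(h[:], height)
	return append(v, h[:]...)
}
```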
Fixes #2095.