Added log syncing for every service separately to provide the ability to
have a custom logger for each service. This commit makes the code follow
the zap usage rules: `Sync calls the underlying Core's Sync method,
flushing any buffered log entries. Applications should take care to
call Sync before exiting.`
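For reference, a minimal sketch of that zap pattern (illustrative, not the
actual service code):

```go
package main

import "go.uber.org/zap"

func main() {
	logger, err := zap.NewProduction()
	if err != nil {
		panic(err)
	}
	// Flush any buffered entries before the process exits, per the rule
	// quoted above. Sync's error is ignored here for brevity.
	defer logger.Sync()

	logger.Info("service started")
}
```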
Signed-off-by: Ekaterina Pavlova <ekt@morphbits.io>
The start of some services is bound to blockchain subscriptions, and thus
they can't be run before the blockchain notification dispatcher.
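A toy model of that constraint (all names here are hypothetical): a service
subscribing on Start() blocks until the dispatcher loop serves the request,
so the dispatcher must be running first.

```go
package main

import "fmt"

type blockchain struct {
	subCh chan chan string // subscription requests (illustrative)
}

// notificationDispatcher serves subscription requests; nothing can
// subscribe until this loop is running.
func (bc *blockchain) notificationDispatcher() {
	for ch := range bc.subCh {
		ch <- "subscribed" // acknowledge with a first event
	}
}

func main() {
	bc := &blockchain{subCh: make(chan chan string)}
	go bc.notificationDispatcher() // start the dispatcher first...
	svcEvents := make(chan string)
	bc.subCh <- svcEvents // ...then services may subscribe in Start()
	fmt.Println(<-svcEvents)
}
```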
Signed-off-by: Ekaterina Pavlova <ekt@morphbits.io>
dBFT doesn't use the validators obtained from this call to the GetValidators
callback, because NeoGo doesn't properly set WithGetConsensusAddress, and thus
this call can be safely skipped. Instead, NeoGo fills the NextConsensus field
by itself in the NewBlockFromContext callback.
This commit technically doesn't perform any functional changes and doesn't
affect the problem described in #3253 in any way; it just removes code that
was never used by the NeoGo library.
This commit is a direct consequence of https://github.com/nspcc-dev/dbft/issues/84.
Signed-off-by: Anna Shaleva <shaleva.ann@nspcc.ru>
If it's the end of an epoch, then it contains the updated validators list
recalculated during the last block's PostPersist. If it's the middle of an
epoch, then it contains the previously calculated value (the value for the
previous completed epoch) that is equal to the current nextValidators cache
value.
Signed-off-by: Anna Shaleva <shaleva.ann@nspcc.ru>
Recalculate them once per epoch. Consensus is aware of it and must
call CalculateNextValidators exactly when needed.
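A sketch of the once-per-epoch contract, assuming the usual end-of-epoch
condition (height+1 being a multiple of the committee size); the helper is
illustrative, not the actual NeoGo code:

```go
package main

import "fmt"

// shouldRecalculate reports whether the validators list must be recomputed
// for the block at the given height: only at the end of an epoch, i.e. when
// the next block starts a new batch of committeeSize blocks.
func shouldRecalculate(height, committeeSize uint32) bool {
	return (height+1)%committeeSize == 0
}

func main() {
	const committeeSize = 21
	for h := uint32(18); h <= 22; h++ {
		fmt.Printf("height %d: recalculate = %v\n", h, shouldRecalculate(h, committeeSize))
	}
}
```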
Signed-off-by: Anna Shaleva <shaleva.ann@nspcc.ru>
We have two similar blockchain APIs: GetNextBlockValidators and GetValidators.
It's hard to distinguish them, thus we rename them to match their meaning, so
what we have now is:
GetNextBlockValidators literally just returns the top of the committee that
was elected at the start of the current batch of CommitteeSize blocks. It
doesn't change its value every block.
ComputeNextBlockValidators literally computes the list of validators based on
the freshest committee member information taken from the NeoToken's storage
and on the latest register/unregister/vote events. The list returned by this
method may be updated every block.
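Illustrative-only signatures contrasting the two (the real methods live on
*core.Blockchain and may differ in detail):

```go
package ledger

import "github.com/nspcc-dev/neo-go/pkg/crypto/keys"

// Ledger is a hypothetical interface used only to contrast the two APIs.
type Ledger interface {
	// GetNextBlockValidators returns the cached list elected at the start
	// of the current CommitteeSize-block batch; stable within the batch.
	GetNextBlockValidators() keys.PublicKeys
	// ComputeNextBlockValidators recomputes the list from the freshest
	// NeoToken storage state; the result may change every block.
	ComputeNextBlockValidators() keys.PublicKeys
}
```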
Signed-off-by: Anna Shaleva <shaleva.ann@nspcc.ru>
Blockchain passes its own pure unwrapped DAO to
(*Blockchain).ComputeNextBlockValidators, which means that the native
RW NEO cache structure stored inside this DAO can be modified by
anyone who uses the exported ComputeNextBlockValidators Blockchain API.
Technically it's valid, and we should allow this, because it's
the only purpose of `validators` caching. However, at the same time
some RPC server is allowed to request a subsequent wrapped DAO for
some test invocation. It means that the descendant wrapped DAO
will eventually request the RW NEO cache and try to `Copy()`
the underlying DAO's cache, which is in direct use by
ComputeNextBlockValidators. Here's the race:
ComputeNextBlockValidators called by the Consensus service tries to
update the cached `validators` value, while the descendant wrapped DAO
created by the RPC server tries to copy the DAO's native cache and
read the cached `validators` value.
So the problem is that the native cache is not designed to handle
concurrent access between the parent DAO layer and derived (wrapped)
DAO layers. I've carefully reviewed all the usages of the native cache,
and it turns out that the described situation is the only place where
the parent DAO is used directly to modify its cache concurrently with
some descendant DAO that is trying to access the cache. All other
usages of the native cache (not only NEO, but also all other native
contracts) strictly rely on the hierarchical DAO structure and don't
try to perform these concurrent operations between DAO layers.
There's also the persist operation, but it keeps the cache's RW lock
taken, so it doesn't have this problem so far. Thus, in this commit we
rework NEO's `validators` cache value so that it always contains the
relevant list for the upper Blockchain's DAO and is updated on every
PostPersist (if needed).
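A distilled, hypothetical model of the race described above: a parent layer
mutating a shared cache while a derived layer copies it. The real NeoGo
DAO/native cache types are more involved; all names here are illustrative.

```go
package main

import "sync"

type cache struct {
	mtx        sync.RWMutex
	validators []string
}

// update models ComputeNextBlockValidators refreshing the parent cache.
func (c *cache) update(v []string) {
	c.mtx.Lock()
	defer c.mtx.Unlock()
	c.validators = v
}

// copyFor models a wrapped DAO taking a snapshot of the parent cache;
// without the read lock this copy would race with update above.
func (c *cache) copyFor() []string {
	c.mtx.RLock()
	defer c.mtx.RUnlock()
	return append([]string(nil), c.validators...)
}

func main() {
	c := &cache{}
	var wg sync.WaitGroup
	wg.Add(2)
	go func() { defer wg.Done(); c.update([]string{"validator1"}) }() // consensus path
	go func() { defer wg.Done(); _ = c.copyFor() }()                  // RPC test-invocation path
	wg.Wait()
}
```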
Note: we must be very careful when extending our native cache in the
future; every usage of the native cache must be checked against the
described problem.
Close #2989.
Signed-off-by: Anna Shaleva <shaleva.ann@nspcc.ru>
go.uber.org/atomic deprecated CAS methods in version 1.10 (the one that
introduced CompareAndSwap), so we need to fix their usage.
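The mechanical shape of the fix, as a minimal sketch (the actual change
touches various atomic types across the codebase):

```go
package main

import (
	"fmt"

	"go.uber.org/atomic"
)

func main() {
	var flag atomic.Bool
	// Was: flag.CAS(false, true) — deprecated since v1.10.
	// CompareAndSwap has identical semantics: store the new value only if
	// the current one matches the old value.
	swapped := flag.CompareAndSwap(false, true)
	fmt.Println(swapped) // true
}
```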
Signed-off-by: Roman Khimov <roman@nspcc.ru>
Move them to the core/network packages, close #2950. The name of the
mempool's unsorted transactions metric has been changed along the way to
match the core's metrics naming convention.
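A hypothetical example of what a convention-following metric looks like with
the Prometheus Go client (the metric names here are guesses, not the real
NeoGo ones):

```go
package main

import "github.com/prometheus/client_golang/prometheus"

// mempoolUnsortedTx sketches the convention: a common namespace prefix
// instead of an ad-hoc metric name.
var mempoolUnsortedTx = prometheus.NewGauge(prometheus.GaugeOpts{
	Namespace: "neogo",
	Name:      "mempool_unsorted_tx",
	Help:      "Mempool unsorted transactions count",
})

func main() {
	prometheus.MustRegister(mempoolUnsortedTx)
	mempoolUnsortedTx.Set(0)
}
```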
Signed-off-by: Anna Shaleva <shaleva.ann@nspcc.ru>
Everywhere, including examples, external interop APIs, bindings generator
code and other valuable places. A couple of `interface{}` usages are
intentionally left in the CHANGELOG.md, documentation and tests.
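The change itself is mechanical; for example (an illustrative function, not
code from the repo):

```go
package main

import "fmt"

// printValue shows the replacement: `any` is a predeclared alias for
// `interface{}` since Go 1.18, so the change is purely cosmetic.
func printValue(v any) { // was: func printValue(v interface{})
	fmt.Println(v)
}

func main() {
	printValue(42)
	printValue("hello")
}
```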
Do not add them directly to the chain, it will be done by the block queue
manager. Close https://github.com/nspcc-dev/neo-go/issues/2923. However,
this commit is not valid without
https://github.com/roman-khimov/dbft/pull/4.
It's neo-go's duty to initialize consensus after the subsequent block
addition; the dBFT itself must wait for neo-go to complete the block
addition and notify the dBFT, so that it can initialize at the 0-th view to
collect the next block.
And include some node-specific configurations there with backwards
compatibility. Note that in the future we'll remove Ledger's
fields from the ProtocolConfiguration and it'll be possible to access them in
Blockchain directly (not via .Ledger).
The other option tried was using two configuration types separately, but that
incurs more changes to the codebase; a single structure that behaves almost
like the old one is better for backwards compatibility.
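An illustrative sketch of the single-structure approach (simplified, not the
actual NeoGo types): embedding promotes both field sets, so old code reading
protocol fields keeps working while node-specific settings get their own
place.

```go
package config

// ProtocolConfiguration keeps the protocol-level settings (simplified).
type ProtocolConfiguration struct {
	Magic           uint32 `yaml:"Magic"`
	SecondsPerBlock int    `yaml:"SecondsPerBlock"`
}

// Ledger holds node-specific settings (simplified).
type Ledger struct {
	SkipBlockVerification bool `yaml:"SkipBlockVerification"`
}

// Blockchain behaves almost like the old ProtocolConfiguration thanks to
// embedding: fields of both embedded structs are promoted.
type Blockchain struct {
	ProtocolConfiguration
	Ledger
}
```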
Fixes #2676.
It's more generic and convenient than MillisecondsPerBlock. This setting is
made in a backwards-compatible fashion, but it'll override SecondsPerBlock if
both are used. Configurations are specifically not changed here; it's
important to check compatibility.
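A simplified sketch of the precedence rule (the duration-based option is
TimePerBlock in the actual NeoGo config; the helper here is illustrative):

```go
package config

import "time"

// ProtocolConfiguration is trimmed down to the two relevant fields.
type ProtocolConfiguration struct {
	SecondsPerBlock int           `yaml:"SecondsPerBlock"`
	TimePerBlock    time.Duration `yaml:"TimePerBlock"`
}

// GetTimePerBlock applies the override: the new duration-based setting wins
// whenever both are present.
func (p ProtocolConfiguration) GetTimePerBlock() time.Duration {
	if p.TimePerBlock != 0 {
		return p.TimePerBlock
	}
	return time.Duration(p.SecondsPerBlock) * time.Second
}
```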
Fixes #2675.
It makes sense in general (further narrowing down the time window when
transactions are processed by the consensus thread) and it improves block
times a little too, especially in the 7+2 scenario.
Related to #2744.
Blockchain's notificationDispatcher sends events to channels, and these
channels must be read from. Unfortunately, the regular service shutdown
procedure does unsubscription first (outside of the read loop) and only then
drains the channel. While it waits for the unsubscription request to be
accepted, notificationDispatcher can try pushing more data into the same
channel, which will lead to a deadlock. Reading in the same method solves
this: any number of events can be pushed until the unsub channel accepts the
data.
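A distilled sketch of the pattern (names are illustrative, not the actual
service code): keep reading from the events channel while the unsubscription
request is pending, so the dispatcher can never block on a full channel.

```go
package main

type event struct{}

// unsubscribe sends ch to the dispatcher's unsub channel while continuing to
// read events from ch, avoiding the deadlock described above.
func unsubscribe(unsub chan chan event, ch chan event) {
	for {
		select {
		case unsub <- ch:
			return // dispatcher accepted the unsubscription
		case <-ch:
			// Drain events the dispatcher pushed meanwhile.
		}
	}
}

func main() {
	unsub := make(chan chan event, 1)
	ch := make(chan event, 1)
	ch <- event{} // an event in flight at shutdown time
	unsubscribe(unsub, ch)
}
```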
See neo-project/neo#2390. Can't see it there? No wonder, that's why we've had
this bug for a year and a half. Not critical, we don't care about versions,
but _very_ annoying.
It's not an ideal solution, but at least it solves the problem for
now. Caveats:
* consensus only needs one method, so it's mirrored to Blockchain
* rpcsrv uses the core.* definition of the StateRoot (so technically it might
  as well not have an internal Ledger), but it uses core already, unfortunately