Linter is updated to v1.60.1, the following issue is fixed:
```
predeclared variable max has same name as predeclared identifier
```
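For context, a minimal illustration of the kind of shadowing this rule flags
(hypothetical snippet, not the actual spot in the repo): since Go 1.21 `max`
is a predeclared function, so a local variable with that name shadows it and
gets renamed.
```
package main

import "fmt"

func main() {
    // Before: a local variable named `max` shadows the predeclared
    // Go 1.21 builtin and triggers the linter warning quoted above.
    //   max := 0
    //   for _, v := range values { if v > max { max = v } }

    // After: rename the variable so the builtin stays accessible.
    values := []int{3, 1, 4}
    maxValue := 0
    for _, v := range values {
        maxValue = max(maxValue, v) // the predeclared max is usable again
    }
    fmt.Println(maxValue)
}
```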
Signed-off-by: Anna Shaleva <shaleva.ann@nspcc.ru>
We have a cache update mechanism (Neo's votesChanged cache flag); it must
be used to update the cached current-epoch and new-epoch values. The
cached current/new epoch values themselves must always contain valid
information for the current/new epoch. These cached values must only be
changed once per epoch and never be set to nil.
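A minimal sketch of this invariant, with hypothetical names (epochCache,
onVote, onEpochEnd stand in for the real native NEO cache and its
votesChanged handling): recalculation happens at most once per epoch and
the cached slices are replaced, never set to nil.
```
package main

import "fmt"

// epochCache is a stand-in for the native NEO cache: it keeps validators
// for the current and the new epoch plus a dirty flag analogous to
// votesChanged.
type epochCache struct {
    votesChanged       bool
    curEpochValidators []string
    newEpochValidators []string
}

// onVote only marks the cache dirty; the cached values stay valid and are
// never reset to nil in the middle of an epoch.
func (c *epochCache) onVote() { c.votesChanged = true }

// onEpochEnd rolls the new-epoch value into the current one and
// recalculates the new-epoch value only if votes actually changed.
func (c *epochCache) onEpochEnd(recalculate func() []string) {
    c.curEpochValidators = c.newEpochValidators
    if c.votesChanged {
        c.newEpochValidators = recalculate()
        c.votesChanged = false
    }
}

func main() {
    c := &epochCache{
        curEpochValidators: []string{"a", "b"},
        newEpochValidators: []string{"a", "b"},
    }
    c.onVote()
    c.onEpochEnd(func() []string { return []string{"a", "c"} })
    fmt.Println(c.curEpochValidators, c.newEpochValidators) // [a b] [a c]
}
```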
This commit prevents the CN node panic described in #3253 that happens
when dBFT tries to retrieve the new epoch validators after some vote
modifications were made earlier within the same dBFT epoch.
Close #3253.
Signed-off-by: Anna Shaleva <shaleva.ann@nspcc.ru>
If it's the end of an epoch, then it contains the updated validators list
recalculated during the last block's PostPersist. If it's the middle of an
epoch, then it contains the previously calculated value (the value for the
previous completed epoch), which is equal to the current nextValidators
cache value.
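Roughly, the PostPersist behaviour described above looks like this sketch
(simplified, hypothetical names; the real epoch-boundary check lives in the
native NEO contract):
```
package main

import "fmt"

// committeeSize is a placeholder for the configured committee size.
const committeeSize = 21

// isLastBlockOfEpoch is a simplified stand-in for the boundary check
// performed in PostPersist.
func isLastBlockOfEpoch(height uint32) bool {
    return (height+1)%committeeSize == 0
}

// postPersist returns the cached validators value to keep for the next
// block: recalculated at the end of an epoch, untouched in the middle.
func postPersist(height uint32, cached []string, recalc func() []string) []string {
    if isLastBlockOfEpoch(height) {
        return recalc()
    }
    return cached // mid-epoch: the value of the previous completed epoch
}

func main() {
    cached := []string{"a", "b"}
    cached = postPersist(10, cached, func() []string { return []string{"a", "c"} })
    fmt.Println(cached) // [a b]: unchanged in the middle of the epoch
    cached = postPersist(20, cached, func() []string { return []string{"a", "c"} })
    fmt.Println(cached) // [a c]: refreshed by the last block of the epoch
}
```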
Signed-off-by: Anna Shaleva <shaleva.ann@nspcc.ru>
Do not recalculate the new committee/validators value at the start of every
subsequent epoch. Use the value that was calculated in the PostPersist method
of the previously processed block at the end of the previous epoch.
Signed-off-by: Anna Shaleva <shaleva.ann@nspcc.ru>
No functional changes, just refactoring. It doesn't need the whole cache,
only the set of committee keys with votes.
Signed-off-by: Anna Shaleva <shaleva.ann@nspcc.ru>
Recalculate them once per epoch. Consensus is aware of it and must
call CalculateNextValidators exactly when needed.
Signed-off-by: Anna Shaleva <shaleva.ann@nspcc.ru>
Blockchain passes its own pure unwrapped DAO to
(*Blockchain).ComputeNextBlockValidators, which means that the native
RW NEO cache structure stored inside this DAO can be modified by anyone
who uses the exported ComputeNextBlockValidators Blockchain API.
Technically it's valid and we should allow it, because that is the only
purpose of `validators` caching. However, at the same time some RPC
server is allowed to request a subsequent wrapped DAO for some test
invocation. It means that the descendant wrapped DAO will eventually
request the RW NEO cache and try to `Copy()` the underlying DAO's
cache, which is in direct use by ComputeNextBlockValidators. Here's the
race: ComputeNextBlockValidators called by the Consensus service tries
to update the cached `validators` value, while the descendant wrapped
DAO created by the RPC server tries to copy the DAO's native cache and
read the cached `validators` value.
So the problem is that the native cache is not designed to handle
concurrent access between a parent DAO layer and a derived (wrapped)
DAO layer. I've carefully reviewed all the usages of the native cache,
and it turns out that the described situation is the only place where a
parent DAO is used directly to modify its cache concurrently with some
descendant DAO that is trying to access the cache. All other usages of
the native cache (not only NEO, but also all other native contracts)
strictly rely on the hierarchical DAO structure and don't try to
perform these concurrent operations between DAO layers. There's also
the persist operation, but it keeps the cache RW lock taken, so it
doesn't have this problem so far. Thus, in this commit we rework NEO's
`validators` cache value so that it always contains the relevant list
for the upper Blockchain's DAO and is updated on every PostPersist (if
needed).
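Here's a stripped-down illustration of the race shape (toy types, not the
real DAO/native cache code); running it with `go run -race` reports the
conflicting write and read:
```
package main

import "sync"

// cache is a stand-in for a native contract cache kept inside a DAO layer.
type cache struct {
    validators []string
}

// Copy is what a derived (wrapped) DAO layer does when it needs its own
// private view of the parent's native cache.
func (c *cache) Copy() *cache {
    cp := make([]string, len(c.validators))
    copy(cp, c.validators)
    return &cache{validators: cp}
}

func main() {
    parent := &cache{validators: []string{"a", "b"}}

    var wg sync.WaitGroup
    wg.Add(2)
    // Writer: a ComputeNextBlockValidators-style update of the parent cache.
    go func() {
        defer wg.Done()
        parent.validators = []string{"a", "c"}
    }()
    // Reader: an RPC test invocation wrapping the same DAO and copying its
    // native cache. The race detector flags this pair; the commit avoids it
    // by never mutating the cached `validators` outside the locked
    // PostPersist path.
    go func() {
        defer wg.Done()
        _ = parent.Copy()
    }()
    wg.Wait()
}
```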
Note: we must be very careful when extending our native cache in the
future; every usage of the native cache must be checked against the
described problem.
Close #2989.
Signed-off-by: Anna Shaleva <shaleva.ann@nspcc.ru>
Make the contracts cache initialization unified. The order of cache
initialization is not important and the Notary contract is added to
bc.contracts.Contracts wrt the P2PSigExtensions setting, thus no functional
changes, just refactoring for future applications.
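A rough shape of such unified initialization, as a sketch with an assumed
interface (not the actual bc.contracts API):
```
package main

import "fmt"

// cacheInitializer is a hypothetical common hook; the point of the
// refactoring is that every native contract can be initialized through the
// same loop, so the iteration order doesn't matter and Notary is simply
// present in the contracts slice or not, depending on P2PSigExtensions.
type cacheInitializer interface {
    Name() string
    InitializeCache(height uint32) error
}

func initCaches(height uint32, contracts []cacheInitializer) error {
    for _, c := range contracts {
        if err := c.InitializeCache(height); err != nil {
            return fmt.Errorf("can't initialize cache for %s: %w", c.Name(), err)
        }
    }
    return nil
}

type neoContract struct{}

func (neoContract) Name() string                 { return "NeoToken" }
func (neoContract) InitializeCache(uint32) error { return nil }

func main() {
    fmt.Println(initCaches(0, []cacheInitializer{neoContract{}}))
}
```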
Signed-off-by: Anna Shaleva <shaleva.ann@nspcc.ru>
It can change the committee even if no one voted. Fixes the state diff at
block 390726 of the T5 testnet where there are no transactions, but the
committee changes because there were some registrations in the previous 21
blocks.
This commit fixes the T5 statediff at block #0. Management's storage item
differs between neo-go and C# nodes, the reason is in the native NEO
contract state.
The parameter should have the `pubKey` name, unlike the other `pubkey`
arguments in this contract.
Turns out, it's almost always allocating because we're mostly dealing with
small integers while the buffer size is calculated in 8-byte chunks here, so
the preallocated buffer is always insufficient.
```
name                   old time/op    new time/op    delta
ToPreallocatedBytes-8    28.5ns ± 7%    19.7ns ± 5%  -30.72%  (p=0.000 n=10+10)

name                   old alloc/op   new alloc/op   delta
ToPreallocatedBytes-8    16.0B ± 0%      0.0B       -100.00%  (p=0.000 n=10+10)

name                   old allocs/op  new allocs/op  delta
ToPreallocatedBytes-8      1.00 ± 0%      0.00       -100.00%  (p=0.000 n=10+10)
```
Fix StorageItem reuse at the same time. We don't copy when getting values
from the storage, but we do when we're putting them, so buffer reuse could
corrupt old values.
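To make the reuse hazard concrete, here is a standalone sketch using plain
byte slices (not the real StorageItem code): a value handed out without
copying shares its backing array with the reusable buffer, so serializing
the next value into that buffer silently rewrites the earlier one.
```
package main

import "fmt"

func main() {
    // A single reusable buffer kept as an allocation optimisation.
    buf := make([]byte, 0, 8)

    // The first value is serialized into the buffer and handed out without
    // copying (the "get" path described above).
    first := append(buf, 0x01, 0x02)

    // Reusing the same buffer for the next value overwrites the bytes that
    // `first` still points to.
    second := append(buf, 0xAA, 0xBB)

    fmt.Println(first, second) // both print [170 187]: the old value is gone
}
```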
1. Use a layered natives cache. With a layered cache the storeblock
process includes the following steps: create a wrapper over the
current nativeCache, put changes into the upper nativeCache layer,
persist (or discard) the changes.
2. Split contract getters into read-only and read-and-change ones. Read-only
getters don't require a copy of an existing nativeCache item.
Read-and-change getters create a copy and then modify that copy (see the
sketch after this list).
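A minimal sketch of the layering idea (layer, getRO, getRW and persist are
placeholder names, not the real nativeCache API): read-only access walks the
layers without copying, read-and-change access copies the item into the
upper layer first, and persist pushes the upper layer's changes down.
```
package main

import "fmt"

// item is a stand-in for a cached native contract value.
type item struct{ validators []string }

// layer is one level of the layered native cache: it holds local changes
// and falls back to its parent for everything else.
type layer struct {
    parent *layer
    items  map[string]*item
}

func newLayer(parent *layer) *layer {
    return &layer{parent: parent, items: make(map[string]*item)}
}

// getRO walks the layers without copying; callers must not modify the result.
func (l *layer) getRO(key string) *item {
    for cur := l; cur != nil; cur = cur.parent {
        if it, ok := cur.items[key]; ok {
            return it
        }
    }
    return nil
}

// getRW returns an item that is safe to modify: if it comes from a lower
// layer, a copy is placed into the current (upper) layer first.
func (l *layer) getRW(key string) *item {
    if it, ok := l.items[key]; ok {
        return it
    }
    if it := l.getRO(key); it != nil {
        cp := &item{validators: append([]string(nil), it.validators...)}
        l.items[key] = cp
        return cp
    }
    return nil
}

// persist moves the layer's changes down into its parent; dropping the
// layer without calling persist discards them.
func (l *layer) persist() {
    for k, v := range l.items {
        l.parent.items[k] = v
    }
}

func main() {
    base := newLayer(nil)
    base.items["neo"] = &item{validators: []string{"a", "b"}}

    upper := newLayer(base) // the wrapper created for block processing
    upper.getRW("neo").validators = []string{"a", "c"}

    fmt.Println(base.getRO("neo").validators) // [a b]: base is untouched so far
    upper.persist()
    fmt.Println(base.getRO("neo").validators) // [a c]: changes persisted
}
```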
They never return errors, so their interface should reflect that. This
allows us to remove quite a lot of useless and never-tested code.
Notice that Get still returns an error. It could be made not to, but
usually we need to differentiate between successful and unsuccessful
accesses anyway, so this doesn't help much.
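A sketch of the resulting interface shape (names assumed, not the exact
neo-go storage API), together with a trivial in-memory implementation:
```
package main

import (
    "errors"
    "fmt"
)

// Store sketches the reshaped interface: Put and Delete cannot fail, so
// they no longer return errors, while Get keeps its error so that callers
// can tell hits from misses.
type Store interface {
    Get(key []byte) ([]byte, error)
    Put(key, value []byte)
    Delete(key []byte)
}

var errNotFound = errors.New("key not found")

// memStore is a trivial in-memory implementation demonstrating the shape.
type memStore map[string][]byte

func (m memStore) Get(key []byte) ([]byte, error) {
    if v, ok := m[string(key)]; ok {
        return v, nil
    }
    return nil, errNotFound
}
func (m memStore) Put(key, value []byte) { m[string(key)] = value }
func (m memStore) Delete(key []byte)     { delete(m, string(key)) }

func main() {
    var s Store = memStore{}
    s.Put([]byte("a"), []byte{1})
    v, err := s.Get([]byte("a"))
    fmt.Println(v, err) // [1] <nil>
    s.Delete([]byte("a"))
    _, err = s.Get([]byte("a"))
    fmt.Println(err) // key not found
}
```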
They're misleading now that we have a variable number of committee
members/validators. The standby list can be seen in the configuration and
the appropriate numbers can be derived from it as well.