Provide cosigners explicitly during deploy and don't read the wallet twice.
This is needed because manifest validation requires a valid sender address.
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
We use them quite frequently (consider the children of a new branch
node), so it is better to get rid of unneeded allocations.
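As a rough illustration of the kind of allocation this avoids (the types below are hypothetical, not the actual MPT code): empty children can be a single shared zero-size value and children can live in a fixed-size array, so a fresh branch node costs one allocation.
```
package main

import "fmt"

// Node is a hypothetical trie-node interface, used only to illustrate the
// allocation pattern; the real MPT types differ.
type Node interface {
	IsEmpty() bool
}

// emptyNode is a zero-size placeholder for a missing child, so storing it in
// an interface value doesn't allocate and a single shared value serves every
// empty slot.
type emptyNode struct{}

func (emptyNode) IsEmpty() bool { return true }

// childrenCount is the number of children in a branch node (16 nibbles plus
// the value slot).
const childrenCount = 17

// BranchNode keeps its children in a fixed-size array, so constructing one is
// a single allocation with no per-child slices or empty-node objects.
type BranchNode struct {
	Children [childrenCount]Node
}

// NewBranchNode fills every slot with the shared empty placeholder instead of
// allocating a fresh node per child.
func NewBranchNode() *BranchNode {
	b := new(BranchNode)
	for i := range b.Children {
		b.Children[i] = emptyNode{}
	}
	return b
}

func main() {
	b := NewBranchNode()
	fmt.Println(len(b.Children), b.Children[0].IsEmpty())
}
```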
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
`WriteArray` involves reflection, so it makes sense to optimize
serialization of transactions and application logs, which are serialized
constantly. Adding a case to the type switch in `WriteArray` is not an
option because we don't want new dependencies for the `io` package.
```
name old time/op new time/op delta
AppExecResult_EncodeBinary-8 852ns ± 3% 656ns ± 2% -22.94% (p=0.000 n=10+9)
name old alloc/op new alloc/op delta
AppExecResult_EncodeBinary-8 448B ± 0% 376B ± 0% -16.07% (p=0.000 n=10+10)
name old allocs/op new allocs/op delta
AppExecResult_EncodeBinary-8 7.00 ± 0% 5.00 ± 0% -28.57% (p=0.000 n=10+10)
```
```
name old time/op new time/op delta
Transaction_Bytes-8 1.29µs ± 3% 0.76µs ± 5% -41.52% (p=0.000 n=9+10)
name old alloc/op new alloc/op delta
Transaction_Bytes-8 1.21kB ± 0% 1.01kB ± 0% -16.56% (p=0.000 n=10+10)
name old allocs/op new allocs/op delta
Transaction_Bytes-8 12.0 ± 0% 7.0 ± 0% -41.67% (p=0.000 n=10+10)
```
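To make the "no reflection" point concrete, here is a minimal sketch of what a dedicated serializer looks like compared to a generic `WriteArray`; `Witness`, `writeVarBytes` and friends are simplified stand-ins, not the project's actual `io` package.
```
package main

import (
	"bytes"
	"fmt"
)

// Witness is a toy element type; the real transaction/application log types
// are richer, this only shows the shape of the hand-rolled path.
type Witness struct {
	InvocationScript   []byte
	VerificationScript []byte
}

func (w *Witness) EncodeBinary(buf *bytes.Buffer) {
	writeVarBytes(buf, w.InvocationScript)
	writeVarBytes(buf, w.VerificationScript)
}

// writeVarUint/writeVarBytes mimic NEO-style variable-length encoding.
func writeVarUint(buf *bytes.Buffer, n uint64) {
	if n < 0xfd {
		buf.WriteByte(byte(n))
		return
	}
	// Values up to 0xffff get the 0xfd prefix and two little-endian bytes;
	// the larger 0xfe/0xff prefixes are omitted for brevity.
	buf.WriteByte(0xfd)
	buf.WriteByte(byte(n))
	buf.WriteByte(byte(n >> 8))
}

func writeVarBytes(buf *bytes.Buffer, b []byte) {
	writeVarUint(buf, uint64(len(b)))
	buf.Write(b)
}

// encodeWitnesses is the hand-rolled equivalent of a reflection-based
// WriteArray([]Witness): write the count, then each element directly, with no
// reflect calls and no per-element interface conversions.
func encodeWitnesses(buf *bytes.Buffer, ws []Witness) {
	writeVarUint(buf, uint64(len(ws)))
	for i := range ws {
		ws[i].EncodeBinary(buf)
	}
}

func main() {
	var buf bytes.Buffer
	encodeWitnesses(&buf, []Witness{{InvocationScript: []byte{1, 2}, VerificationScript: []byte{3}}})
	fmt.Printf("% x\n", buf.Bytes())
}
```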
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
We know it already, but with the current loading code the VM will hash it once
more. It doesn't help a lot, but it costs nothing to avoid this overhead.
```
name old time/op new time/op delta
VerifyWitness-8 93.4µs ± 3% 92.7µs ± 2% ~ (p=0.353 n=10+10)
name old alloc/op new alloc/op delta
VerifyWitness-8 3.43kB ± 0% 3.40kB ± 0% -0.70% (p=0.000 n=9+9)
name old allocs/op new allocs/op delta
VerifyWitness-8 67.0 ± 0% 66.0 ± 0% -1.49% (p=0.000 n=10+10)
```
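A sketch of the "pass the known hash in" idea; the real VM API differs in types and parameters, this only shows the shape of the optimization.
```
package main

import (
	"crypto/sha256"
	"fmt"
)

// Context is a trimmed-down execution context: a script plus its hash.
type Context struct {
	script     []byte
	scriptHash []byte
}

// VM is a minimal stand-in for the real machine; only the invocation stack
// matters for this sketch.
type VM struct {
	istack []*Context
}

// scriptHash stands in for the real script-hash function (NEO uses RIPEMD160
// over SHA256); a truncated SHA256 keeps the sketch dependency-free.
func scriptHash(script []byte) []byte {
	h := sha256.Sum256(script)
	return h[:20]
}

// LoadScript is the generic path: the caller doesn't know the hash, so the VM
// has to compute it.
func (v *VM) LoadScript(script []byte) {
	v.LoadScriptWithHash(script, scriptHash(script))
}

// LoadScriptWithHash lets callers that already know the hash (witness
// verification matches scripts against a known hash anyway) skip the extra
// hashing round.
func (v *VM) LoadScriptWithHash(script, hash []byte) {
	v.istack = append(v.istack, &Context{script: script, scriptHash: hash})
}

func main() {
	v := new(VM)
	script := []byte{0x51} // just a placeholder opcode
	known := scriptHash(script)
	v.LoadScriptWithHash(script, known) // no re-hashing inside the VM
	fmt.Printf("%x\n", v.istack[0].scriptHash)
}
```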
ReadArray() can return some error and we shouldn't overwrite it. At the same
time, limiting ReadArray() to the number of Signers can make it return the
wrong error if the number of witnesses is actually bigger than the number of
signers, so use MaxAttributes.
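A sketch of the decoding order this describes, with a stubbed-out reader (`BinReader`, `ReadArray` and the constants below are illustrative, not the real `io` types): decode with the `MaxAttributes` cap, keep the reader's own error if it failed, and only then compare counts.
```
package main

import (
	"errors"
	"fmt"
)

// MaxAttributes is the decode-time cap on array length; using it (rather than
// len(t.Signers)) lets the reader report its own "too many items" error on
// genuinely oversized input instead of a misleading count mismatch.
const MaxAttributes = 16

// BinReader is a tiny stand-in for the project's binary reader: it records the
// first error and becomes a no-op afterwards.
type BinReader struct {
	Err error
}

// ReadArray is a stub of the reflection-based array reader; only the error
// behaviour relevant to the example is modelled.
func (r *BinReader) ReadArray(dst *[]Witness, maxSize int) {
	if r.Err != nil {
		return
	}
	// ... real decoding elided; it sets r.Err itself if the encoded count
	// exceeds maxSize or an element fails to decode ...
}

type (
	Witness struct{}
	Signer  struct{}
)

type Transaction struct {
	Signers []Signer
	Scripts []Witness
}

// decodeWitnesses reads witnesses with the MaxAttributes cap and checks the
// signer/witness count only after making sure ReadArray itself succeeded, so
// ReadArray's error is never overwritten.
func (t *Transaction) decodeWitnesses(r *BinReader) {
	r.ReadArray(&t.Scripts, MaxAttributes)
	if r.Err != nil {
		return
	}
	if len(t.Scripts) != len(t.Signers) {
		r.Err = errors.New("witnesses count doesn't match signers")
	}
}

func main() {
	t := &Transaction{Signers: []Signer{{}}}
	r := new(BinReader)
	t.decodeWitnesses(r)
	fmt.Println(r.Err) // count mismatch here, since the stub decoded nothing
}
```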
We burn GAS in OnPersist for every transaction, so some buffer reuse here is
quite natural.
This also doesn't change a lot in the overall TPS picture, maybe adding about
1%.
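A sketch of the buffer-reuse pattern, with a hypothetical key layout and callback: the key slice is allocated once and re-sliced per transaction instead of being rebuilt for every burn.
```
package main

import "fmt"

// accountPrefix is an illustrative storage-key prefix; the real GAS contract
// layout differs, only the reuse pattern matters here.
const accountPrefix = 0x14

// burnFees sketches the OnPersist loop: the key buffer is allocated once with
// enough capacity and re-sliced for every transaction's sender. The burn
// callback must not retain key past the call, since the buffer is reused.
func burnFees(senders [][]byte, burn func(key []byte)) {
	key := make([]byte, 0, 1+20) // prefix + 20-byte account hash, reused
	for _, sender := range senders {
		key = append(key[:0], accountPrefix)
		key = append(key, sender...)
		burn(key)
	}
}

func main() {
	senders := [][]byte{make([]byte, 20), make([]byte, 20)}
	burnFees(senders, func(key []byte) {
		fmt.Printf("burning for key % x\n", key)
	})
}
```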
This duplicates the check, but deduplicates the error path. The check forced
double balance deserialization, which is quite a costly operation, so we'd
better do it once.
It's hardly noticeable in TPS metrics though, maybe some 1-2%.
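A sketch of the "deserialize once" idea with hypothetical types: decode the balance a single time and use the same value for both the check and the fee subtraction.
```
package main

import (
	"errors"
	"fmt"
	"math/big"
)

// balanceState is a hypothetical deserialized account state; decoding it from
// raw storage bytes is the costly operation we want to do exactly once.
type balanceState struct {
	GAS *big.Int
}

func decodeBalance(raw []byte) (*balanceState, error) {
	// ... real deserialization elided ...
	return &balanceState{GAS: new(big.Int).SetBytes(raw)}, nil
}

func encodeBalance(b *balanceState) []byte {
	return b.GAS.Bytes()
}

// checkAndSubFee decodes the sender's balance once and uses the same decoded
// value both for the sufficiency check and for the subtraction, instead of
// deserializing it again on the second use.
func checkAndSubFee(raw []byte, fee *big.Int) ([]byte, error) {
	balance, err := decodeBalance(raw)
	if err != nil {
		return nil, err
	}
	if balance.GAS.Cmp(fee) < 0 {
		return nil, errors.New("insufficient funds")
	}
	balance.GAS.Sub(balance.GAS, fee)
	return encodeBalance(balance), nil
}

func main() {
	raw := big.NewInt(1_0000_0000).Bytes()
	out, err := checkAndSubFee(raw, big.NewInt(123))
	fmt.Println(new(big.Int).SetBytes(out), err)
}
```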
Persist by its definition doesn't change MemCachedStore's visible state: all KV
pairs that were accessible via it before Persist remain accessible after
Persist. The only thing it does is flush the current set of KV pairs from
memory to the persistent store. To do that it needs read-only access to the
current KV pair set, but technically it then replaces the maps, so we have to
use a full write lock, which makes MemCachedStore inaccessible for the duration
of Persist. And Persist can take a lot of time, as it means disk access for
regular DBs.
What we do here is create new in-memory maps for MemCachedStore before
flushing the old ones to the persistent store. Then a fake persistent store is
created, which actually is a MemCachedStore with the old maps, so it has
exactly the same visible state. This store is never accessed for writes, so we
can read it without taking any internal locks, and at the same time we no
longer need write locks for the original MemCachedStore, as we're not using
it. All of this makes it possible to use MemCachedStore as usual: reads are
handled going down to whatever level is needed and writes are handled by the
new maps. So while Persist for (*Blockchain).dao does its most time-consuming
work we can process other blocks (reading data for transactions and persisting
storeBlock caches to (*Blockchain).dao).
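A sketch of the scheme with simplified types (it mirrors the description above, not the actual neo-go storage package; error recovery and concurrent Persist calls are left out):
```
package main

import (
	"fmt"
	"sync"
)

// Store is the interface both the bottom (persistent) store and the cache
// satisfy in this sketch.
type Store interface {
	Get(key string) ([]byte, bool)
	PutBatch(batch map[string][]byte) error
}

// fakeDB stands in for a real persistent DB (which would do its own
// synchronization); a plain map is enough for the sketch.
type fakeDB struct{ m map[string][]byte }

func (s *fakeDB) Get(key string) ([]byte, bool) { v, ok := s.m[key]; return v, ok }

func (s *fakeDB) PutBatch(batch map[string][]byte) error {
	for k, v := range batch {
		s.m[k] = v
	}
	return nil
}

// MemCachedStore keeps uncommitted KV pairs in memory on top of another Store.
type MemCachedStore struct {
	mut sync.RWMutex
	mem map[string][]byte
	ps  Store
}

func NewMemCachedStore(ps Store) *MemCachedStore {
	return &MemCachedStore{mem: make(map[string][]byte), ps: ps}
}

func (s *MemCachedStore) Get(key string) ([]byte, bool) {
	s.mut.RLock()
	defer s.mut.RUnlock()
	if v, ok := s.mem[key]; ok {
		return v, true
	}
	return s.ps.Get(key)
}

// PutBatch adds pairs to the in-memory map; it also makes *MemCachedStore a
// Store itself, which is what the "fake persistent store" trick relies on.
func (s *MemCachedStore) PutBatch(batch map[string][]byte) error {
	s.mut.Lock()
	defer s.mut.Unlock()
	for k, v := range batch {
		s.mem[k] = v
	}
	return nil
}

// Persist flushes accumulated pairs down to s.ps. The write lock is held only
// while the maps are swapped: old pairs stay visible through a temporary
// MemCachedStore layered in as s.ps, so s remains fully usable while the slow
// PutBatch runs.
func (s *MemCachedStore) Persist() error {
	s.mut.Lock()
	old := &MemCachedStore{mem: s.mem, ps: s.ps} // same visible state, never written to again
	s.mem = make(map[string][]byte)              // fresh map for new writes
	s.ps = old                                   // reads of old pairs go through the fake store
	s.mut.Unlock()

	// Slow part, done without holding s.mut: flush the old map to the real store.
	err := old.ps.PutBatch(old.mem)

	s.mut.Lock()
	s.ps = old.ps // drop the intermediate layer, its pairs are in the real store now
	s.mut.Unlock()
	return err
}

func main() {
	db := &fakeDB{m: make(map[string][]byte)}
	cache := NewMemCachedStore(db)
	cache.PutBatch(map[string][]byte{"key": []byte("value")})
	_ = cache.Persist()
	v, _ := cache.Get("key") // still visible after Persist, now coming from db
	fmt.Println(string(v))
}
```
The important property is that s.mut is held only for the two quick map/pointer swaps and never across PutBatch, which is where the disk time goes.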
The change was tested for performance with neo-bench (single node, 10 workers,
LevelDB) on two machines, and with block dump processing (RC4 testnet up to
block 62800, VerifyBlocks set to false) on the i7-8565U.
Reference results (bbe4e9cd7b):
```
Ryzen 9 5950X:
RPS 23616.969 22817.086 23222.378 ≈ 23218 ± 1.72%
TPS 23047.316 22608.578 22735.540 ≈ 22797 ± 0.99%
CPU % 23.434 25.553 23.848 ≈ 24.3 ± 4.63%
Mem MB 600.636 503.060 582.043 ≈ 562 ± 9.22%
Core i7-8565U:
RPS 6594.007 6499.501 6572.902 ≈ 6555 ± 0.76%
TPS 6561.680 6444.545 6510.120 ≈ 6505 ± 0.90%
CPU % 58.452 60.568 62.474 ≈ 60.5 ± 3.33%
Mem MB 234.893 285.067 269.081 ≈ 263 ± 9.75%
DB restore:
real 0m22.237s 0m23.471s 0m23.409s ≈ 23.04 ± 3.02%
user 0m35.435s 0m38.943s 0m39.247s ≈ 37.88 ± 5.59%
sys 0m3.085s 0m3.360s 0m3.144s ≈ 3.20 ± 4.53%
```
After the change:
```
Ryzen 9 5950X:
RPS 27747.349 27407.726 27520.210 ≈ 27558 ± 0.63% ↑ 18.69%
TPS 26992.010 26993.468 27010.966 ≈ 26999 ± 0.04% ↑ 18.43%
CPU % 28.928 28.096 29.105 ≈ 28.7 ± 1.88% ↑ 18.1%
Mem MB 760.385 726.320 756.118 ≈ 748 ± 2.48% ↑ 33.10%
Core i7-8565U:
RPS 7783.229 7628.409 7542.340 ≈ 7651 ± 1.60% ↑ 16.72%
TPS 7708.436 7607.397 7489.459 ≈ 7602 ± 1.44% ↑ 16.85%
CPU % 74.899 71.020 72.697 ≈ 72.9 ± 2.67% ↑ 20.50%
Mem MB 438.047 436.967 416.350 ≈ 430 ± 2.84% ↑ 63.50%
DB restore:
real 0m20.838s 0m21.895s 0m21.794s ≈ 21.51 ± 2.71% ↓ 6.64%
user 0m39.091s 0m40.565s 0m41.493s ≈ 40.38 ± 3.00% ↑ 6.60%
sys 0m3.184s 0m2.923s 0m3.062s ≈ 3.06 ± 4.27% ↓ 4.38%
```
It obviously uses more memory now and utilizes the CPU more aggressively, but
at the same time it improves all relevant metrics and finally reaches the
point where we process 50K transactions in less than a second on the Ryzen 9
5950X (going higher than 25K TPS). The other observation is a much more stable
block time; on the Ryzen 9 it's as close to 1 second as it could be.