Functions are usually replaced immediately (and it's OK for them to be nil,
searching through a zero-length array is fine), while Notifications are
usually appended to (and are absolutely useless in verification contexts).
* both 'to' and 'from' are either Null or Hash160, there is no other
  possibility for valid NEP-17, so returning util.Uint160{} in case of a
  parsing error is wrong
* yet that is exactly what allowed burns/mints to work, at the expense of an
  error allocation inside util.Uint160DecodeBytesBE()
* a Uint160 can technically fit into a regular VM integer, so even though it'd
  be quite surprising to see one there, TryBytes() is more correct (and
  easier!) to use (see the sketch after this list)
* same thing with `amount`: we have `TryInteger()` that easily covers all
  possible cases and does the appropriate error checking inside
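A minimal sketch of this parsing approach, with stand-in types instead of the
real util and stackitem packages (all names below are illustrative):
```go
package nep17parse

import (
	"errors"
	"fmt"
	"math/big"
)

// Uint160 is a simplified stand-in for util.Uint160.
type Uint160 [20]byte

// Item mimics the relevant part of the VM stack item interface.
type Item interface {
	TryBytes() ([]byte, error)
	TryInteger() (*big.Int, error)
}

// Null marks a missing party (mint for 'from', burn for 'to').
type Null struct{}

func (Null) TryBytes() ([]byte, error)     { return nil, errors.New("Null to bytes") }
func (Null) TryInteger() (*big.Int, error) { return nil, errors.New("Null to integer") }

// parseParty parses 'from' or 'to' of a Transfer event. Errors are
// propagated instead of being masked with a zero Uint160, which would
// make malformed input look like a valid zero address.
func parseParty(it Item) (addr Uint160, isNull bool, err error) {
	if _, ok := it.(Null); ok {
		return Uint160{}, true, nil
	}
	b, err := it.TryBytes() // also covers an Integer-typed item
	if err != nil {
		return Uint160{}, false, err
	}
	if len(b) != 20 {
		return Uint160{}, false, fmt.Errorf("bad Uint160 length %d", len(b))
	}
	copy(addr[:], b)
	return addr, false, nil
}

// parseAmount relies on TryInteger doing all type and range checks.
func parseAmount(it Item) (*big.Int, error) {
	return it.TryInteger()
}
```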
Squash (*DAO).StoreAsTransaction and
(*DAO).StoreConflictingTransactions. It's better to keep them together,
because StoreAsTransaction was always followed by
StoreConflictingTransactions, so the pair is effectively one atomic operation.
The logic wasn't changed.
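A self-contained illustration of the squashed method; all type and helper
names here are made up for the sketch, not the real dao package API:
```go
package daosketch

// Minimal stand-ins for the real transaction and DAO types.
type Transaction struct {
	Hash      [32]byte
	Conflicts [][32]byte // hashes from Conflicts attributes
}

type DAO struct {
	kv map[[33]byte][]byte
}

func (d *DAO) put(prefix byte, hash [32]byte, value []byte) {
	var key [33]byte
	key[0] = prefix
	copy(key[1:], hash[:])
	d.kv[key] = value
}

// StoreAsTransaction stores the transaction and all of its conflict
// records in one call: since the second step always followed the first
// one, squashing them makes the pair atomic by construction.
func (d *DAO) StoreAsTransaction(tx *Transaction, raw []byte) {
	d.put('t', tx.Hash, raw)
	for _, h := range tx.Conflicts {
		d.put('c', h, tx.Hash[:])
	}
}
```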
We're using batches in the wrong way during persist: we already have all the
changes accumulated in two maps, then we move them into a batch and only then
apply it. For some DBs like BoltDB this batch is just another MemoryStore, so
we essentially shuffle the changeset from one map to another; for others like
LevelDB the batch is a serialized set of KV pairs, which doesn't help much on
the subsequent PutBatch, we just duplicate the changeset again.
So introduce PutChangeSet, which takes the two maps of puts and deletes
directly. It also allows simplifying the MemCachedStore logic.
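A sketch of the idea, assuming a minimal Store interface (real backends
implement this natively, e.g. by filling a LevelDB batch in one pass):
```go
package changeset

// Store is a minimal subset of the storage backend interface assumed
// for this sketch.
type Store interface {
	Put(key, value []byte) error
	Delete(key []byte) error
}

// PutChangeSet applies the accumulated changes directly from the two
// maps MemCachedStore already has, skipping the intermediate Batch
// that only duplicated the same data.
func PutChangeSet(s Store, puts map[string][]byte, deleted map[string]bool) error {
	for k, v := range puts {
		if err := s.Put([]byte(k), v); err != nil {
			return err
		}
	}
	for k, del := range deleted {
		if !del {
			continue
		}
		if err := s.Delete([]byte(k)); err != nil {
			return err
		}
	}
	return nil
}
```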
neo-bench for single node with 10 workers, LevelDB:
Reference:
RPS 30189.132 30556.448 30390.482 ≈ 30379 ± 0.61%
TPS 29427.344 29418.687 29434.273 ≈ 29427 ± 0.03%
CPU % 33.304 27.179 33.860 ≈ 31.45 ± 11.79%
Mem MB 800.677 798.389 715.042 ≈ 771 ± 6.33%
Patched:
RPS 30264.326 30386.364 30166.231 ≈ 30272 ± 0.36% ⇅
TPS 29444.673 29407.440 29452.478 ≈ 29435 ± 0.08% ⇅
CPU % 34.012 32.597 33.467 ≈ 33.36 ± 2.14% ⇅
Mem MB 549.126 523.656 517.684 ≈ 530 ± 3.15% ↓ 31.26%
BoltDB:
Reference:
RPS 31937.647 31551.684 31850.408 ≈ 31780 ± 0.64%
TPS 31292.049 30368.368 31307.724 ≈ 30989 ± 1.74%
CPU % 33.792 22.339 35.887 ≈ 30.67 ± 23.78%
Mem MB 1271.687 1254.472 1215.639 ≈ 1247 ± 2.30%
Patched:
RPS 31746.818 30859.485 31689.761 ≈ 31432 ± 1.58% ⇅
TPS 31271.499 30340.726 30342.568 ≈ 30652 ± 1.75% ⇅
CPU % 34.611 34.414 31.553 ≈ 33.53 ± 5.11% ⇅
Mem MB 1262.960 1231.389 1335.569 ≈ 1277 ± 4.18% ⇅
It requires only two methods from Blockchainer: AddBlock and BlockHeight.
The new interface will make it easy to reuse the block queue for state
exchange purposes.
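A sketch of such a narrow interface; the placeholder Block type and the
interface name are illustrative:
```go
package bqueue

// Block is a placeholder for the real block type.
type Block struct{ Index uint32 }

// Blockqueuer is the narrow interface the block queue actually needs,
// so both the full Blockchain and a future state-exchange module can
// satisfy it.
type Blockqueuer interface {
	AddBlock(block *Block) error
	BlockHeight() uint32
}
```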
Do not allocate a separate buffer for the transfer.
```
name old time/op new time/op delta
NEP17TransferLog_Append-8 58.8µs ± 3% 32.1µs ± 1% -45.40% (p=0.000 n=10+9)
name old alloc/op new alloc/op delta
NEP17TransferLog_Append-8 118kB ± 1% 44kB ± 3% -63.00% (p=0.000 n=9+10)
name old allocs/op new allocs/op delta
NEP17TransferLog_Append-8 901 ± 1% 513 ± 3% -43.08% (p=0.000 n=9+8)
```
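A sketch of the change under simplified assumptions: the log keeps serialized
transfers back to back in one slice, and Append encodes straight into it
(stand-in types and a fixed-width encoding, not the real transfer format):
```go
package transferlog

import "encoding/binary"

// Transfer is a simplified stand-in for the real NEP-17 transfer record.
type Transfer struct {
	From, To [20]byte
	Amount   int64
}

// TransferLog keeps serialized transfers back to back in one slice.
type TransferLog struct {
	raw []byte
}

// Append encodes the transfer straight into the log's backing slice;
// the previous code first serialized it into a separate buffer and
// then copied that buffer into the log.
func (l *TransferLog) Append(t *Transfer) {
	l.raw = append(l.raw, t.From[:]...)
	l.raw = append(l.raw, t.To[:]...)
	l.raw = binary.LittleEndian.AppendUint64(l.raw, uint64(t.Amount))
}
```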
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
We use them quite frequently (consider the children of a new branch node),
so it is better to get rid of unneeded allocations.
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
`WriteArray` involves reflection, so it makes sense to optimize the
serialization of transactions and application logs, which are serialized
constantly. Adding a case to a type switch in `WriteArray` is not an option
because we don't want new dependencies for the `io` package.
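A hand-rolled equivalent of what `WriteArray` produces via reflection, namely
a length prefix followed by every element's encoding, sketched with a
stand-in Witness type and Go's own varint encoding instead of NEO's:
```go
package serlz

import (
	"bytes"
	"encoding/binary"
)

// Witness is a simplified stand-in for transaction.Witness.
type Witness struct {
	Invocation   []byte
	Verification []byte
}

func writeVarBytes(w *bytes.Buffer, b []byte) {
	var l [binary.MaxVarintLen64]byte
	n := binary.PutUvarint(l[:], uint64(len(b)))
	w.Write(l[:n])
	w.Write(b)
}

// encodeWitnesses writes the array manually, avoiding the reflect
// overhead of a generic array writer.
func encodeWitnesses(w *bytes.Buffer, ws []Witness) {
	var l [binary.MaxVarintLen64]byte
	n := binary.PutUvarint(l[:], uint64(len(ws)))
	w.Write(l[:n])
	for i := range ws {
		writeVarBytes(w, ws[i].Invocation)
		writeVarBytes(w, ws[i].Verification)
	}
}
```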
```
name old time/op new time/op delta
AppExecResult_EncodeBinary-8 852ns ± 3% 656ns ± 2% -22.94% (p=0.000 n=10+9)
name old alloc/op new alloc/op delta
AppExecResult_EncodeBinary-8 448B ± 0% 376B ± 0% -16.07% (p=0.000 n=10+10)
name old allocs/op new allocs/op delta
AppExecResult_EncodeBinary-8 7.00 ± 0% 5.00 ± 0% -28.57% (p=0.000 n=10+10)
```
```
name old time/op new time/op delta
Transaction_Bytes-8 1.29µs ± 3% 0.76µs ± 5% -41.52% (p=0.000 n=9+10)
name old alloc/op new alloc/op delta
Transaction_Bytes-8 1.21kB ± 0% 1.01kB ± 0% -16.56% (p=0.000 n=10+10)
name old allocs/op new allocs/op delta
Transaction_Bytes-8 12.0 ± 0% 7.0 ± 0% -41.67% (p=0.000 n=10+10)
```
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
We know it already, but with the current loading code the VM will hash it
once more. It doesn't help a lot, but avoiding this overhead costs nothing.
```
name old time/op new time/op delta
VerifyWitness-8 93.4µs ± 3% 92.7µs ± 2% ~ (p=0.353 n=10+10)
name old alloc/op new alloc/op delta
VerifyWitness-8 3.43kB ± 0% 3.40kB ± 0% -0.70% (p=0.000 n=9+9)
name old allocs/op new allocs/op delta
VerifyWitness-8 67.0 ± 0% 66.0 ± 0% -1.49% (p=0.000 n=10+10)
```
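A sketch of loading with a known hash; everything below is a simplified
stand-in (the real method lives on the VM type and uses util.Uint160 with a
proper Hash160, not the SHA256 placeholder used here):
```go
package vmload

import "crypto/sha256"

// Uint160 stands in for util.Uint160.
type Uint160 [20]byte

type context struct {
	script     []byte
	scriptHash Uint160
}

// loadScriptWithHash accepts the hash the caller already computed (as
// the witness verification path did to look the contract up) instead
// of recomputing it from the script on every load.
func loadScriptWithHash(script []byte, known *Uint160) *context {
	ctx := &context{script: script}
	if known != nil {
		ctx.scriptHash = *known // reuse the caller's hash
	} else {
		sum := sha256.Sum256(script) // placeholder for the real Hash160
		copy(ctx.scriptHash[:], sum[:20])
	}
	return ctx
}
```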
ReadArray() can return its own error and we shouldn't overwrite it. At the
same time, limiting ReadArray() to the number of Signers can make it return
the wrong error if the number of witnesses actually is bigger than the number
of signers, so use MaxAttributes as the limit.
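A sketch of the intended error behaviour, with a simplified single-byte count
instead of the real ReadArray and an assumed value for the attribute limit:
```go
package decode

import (
	"errors"
	"fmt"
	"io"
)

// MaxAttributes is an assumed protocol-level limit used as the
// witness array bound.
const MaxAttributes = 16

// readWitnessCount returns reader errors as-is, bounds the array by
// MaxAttributes rather than by the signer count, and reports a
// signers/witnesses mismatch with its own explicit error.
func readWitnessCount(r io.Reader, signers int) (int, error) {
	var buf [1]byte
	if _, err := io.ReadFull(r, buf[:]); err != nil {
		return 0, err // don't overwrite the original read error
	}
	n := int(buf[0])
	if n > MaxAttributes {
		return 0, errors.New("too many witnesses")
	}
	if n != signers {
		return 0, fmt.Errorf("got %d witnesses for %d signers", n, signers)
	}
	return n, nil
}
```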
We burn GAS in OnPersist for every transaction, so some buffer reuse here is
quite natural.
This doesn't change much in the overall TPS picture, maybe adding some 1%.
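A sketch of the buffer-reuse idea; the type and helper names are
illustrative, not the real native-contract API:
```go
package burn

import "math/big"

// gasBurner keeps scratch buffers alive between calls: OnPersist burns
// the system fee of every transaction in a block, so rebuilding the
// account key and the amount in place avoids allocations per
// transaction.
type gasBurner struct {
	key []byte   // reused storage-key buffer
	amt *big.Int // reused fee accumulator
}

func (g *gasBurner) burn(prefix byte, acc [20]byte, fee int64, balance *big.Int) {
	g.key = append(g.key[:0], prefix) // rebuild the key in place
	g.key = append(g.key, acc[:]...)
	if g.amt == nil {
		g.amt = new(big.Int)
	}
	g.amt.SetInt64(fee)
	balance.Sub(balance, g.amt) // subtract the fee without a fresh big.Int
	// ... persist the updated balance under g.key here ...
}
```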
This duplicates the check but deduplicates the error path. The check forced
a double balance deserialization, which is quite a costly operation, so we'd
better do it once.
It's hardly noticeable in TPS metrics though, maybe some 1-2%.
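A sketch of the single-deserialization flow, reducing a balance to a bare
big.Int for illustration (real balances carry more state than that):
```go
package balance

import (
	"errors"
	"math/big"
)

// applyDelta decodes the stored balance exactly once and uses the same
// decoded value for both the sufficiency check and the update; the old
// flow deserialized it twice, once per code path.
func applyDelta(raw []byte, delta *big.Int) ([]byte, error) {
	balance := new(big.Int).SetBytes(raw) // single deserialization
	balance.Add(balance, delta)
	if balance.Sign() < 0 {
		return nil, errors.New("insufficient funds")
	}
	return balance.Bytes(), nil
}
```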
Persist by its definition doesn't change MemCachedStore's visible state: all
KV pairs that were accessible via it before Persist remain accessible after
Persist. The only thing it does is flush the current set of KV pairs from
memory to the persistent store. To do that it needs read-only access to the
current KV pair set, but technically it then replaces the maps, so we have to
take a full write lock, which makes MemCachedStore inaccessible for the
duration of Persist. And Persist can take a lot of time, since for regular
DBs it means disk access.

What we do here is create new in-memory maps for MemCachedStore before
flushing the old ones to the persistent store. Then a fake persistent store
is created, which actually is a MemCachedStore with the old maps, so it has
exactly the same visible state. This Store is never accessed for writes, so
we can read it without taking any internal locks, and at the same time we no
longer need write locks on the original MemCachedStore, since we're not using
it. All of this makes it possible to use MemCachedStore normally: reads are
handled going down to whatever level is needed and writes are handled by the
new maps. So while Persist for (*Blockchain).dao does its most time-consuming
work we can process other blocks (reading data for transactions and
persisting storeBlock caches to (*Blockchain).dao).
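A condensed sketch of the scheme, assuming a one-layer cache with no deletion
tracking or error handling (the real MemCachedStore is more involved):
```go
package mcache

import "sync"

// Store is the minimal backend interface assumed by this sketch.
type Store interface {
	Get(key []byte) ([]byte, bool)
	Put(key, value []byte) error
}

// memCached is a drastically simplified MemCachedStore: one map
// layered over a persistent Store.
type memCached struct {
	mu  sync.RWMutex
	mem map[string][]byte
	ps  Store
}

func (s *memCached) Get(key []byte) ([]byte, bool) {
	s.mu.RLock()
	v, ok := s.mem[string(key)]
	ps := s.ps
	s.mu.RUnlock()
	if ok {
		return v, true
	}
	return ps.Get(key)
}

func (s *memCached) Put(key, value []byte) error {
	s.mu.Lock()
	s.mem[string(key)] = value
	s.mu.Unlock()
	return nil
}

// Persist swaps in fresh maps under a short write lock, wraps the old
// map into a read-only fake store with the same visible state, and
// only then does the slow flush to disk, without blocking readers and
// writers of the live store.
func (s *memCached) Persist() error {
	s.mu.Lock()
	old := s.mem
	s.mem = make(map[string][]byte)        // new writes land here
	fake := &memCached{mem: old, ps: s.ps} // same visible state
	s.ps = fake                            // reads fall through to it
	s.mu.Unlock()

	for k, v := range old { // slow part: nobody writes to 'old' anymore
		if err := fake.ps.Put([]byte(k), v); err != nil {
			return err
		}
	}

	s.mu.Lock()
	s.ps = fake.ps // drop the fake layer once everything is flushed
	s.mu.Unlock()
	return nil
}
```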
The change was tested for performance with neo-bench (single node, 10 workers,
LevelDB) on two machines and block dump processing (RC4 testnet up to 62800
with VerifyBlocks set to false) on i7-8565U.
Reference results (bbe4e9cd7b):
Ryzen 9 5950X:
RPS 23616.969 22817.086 23222.378 ≈ 23218 ± 1.72%
TPS 23047.316 22608.578 22735.540 ≈ 22797 ± 0.99%
CPU % 23.434 25.553 23.848 ≈ 24.3 ± 4.63%
Mem MB 600.636 503.060 582.043 ≈ 562 ± 9.22%
Core i7-8565U:
RPS 6594.007 6499.501 6572.902 ≈ 6555 ± 0.76%
TPS 6561.680 6444.545 6510.120 ≈ 6505 ± 0.90%
CPU % 58.452 60.568 62.474 ≈ 60.5 ± 3.33%
Mem MB 234.893 285.067 269.081 ≈ 263 ± 9.75%
DB restore:
real 0m22.237s 0m23.471s 0m23.409s ≈ 23.04 ± 3.02%
user 0m35.435s 0m38.943s 0m39.247s ≈ 37.88 ± 5.59%
sys 0m3.085s 0m3.360s 0m3.144s ≈ 3.20 ± 4.53%
After the change:
Ryzen 9 5950X:
RPS 27747.349 27407.726 27520.210 ≈ 27558 ± 0.63% ↑ 18.69%
TPS 26992.010 26993.468 27010.966 ≈ 26999 ± 0.04% ↑ 18.43%
CPU % 28.928 28.096 29.105 ≈ 28.7 ± 1.88% ↑ 18.1%
Mem MB 760.385 726.320 756.118 ≈ 748 ± 2.48% ↑ 33.10%
Core i7-8565U:
RPS 7783.229 7628.409 7542.340 ≈ 7651 ± 1.60% ↑ 16.72%
TPS 7708.436 7607.397 7489.459 ≈ 7602 ± 1.44% ↑ 16.85%
CPU % 74.899 71.020 72.697 ≈ 72.9 ± 2.67% ↑ 20.50%
Mem MB 438.047 436.967 416.350 ≈ 430 ± 2.84% ↑ 63.50%
DB restore:
real 0m20.838s 0m21.895s 0m21.794s ≈ 21.51 ± 2.71% ↓ 6.64%
user 0m39.091s 0m40.565s 0m41.493s ≈ 40.38 ± 3.00% ↑ 6.60%
sys 0m3.184s 0m2.923s 0m3.062s ≈ 3.06 ± 4.27% ↓ 4.38%
It obviously uses more memory now and utilizes the CPU more aggressively, but
at the same time it improves all relevant metrics and finally reaches the
point where we process 50K transactions in less than a second on the Ryzen 9
5950X (going higher than 25K TPS). The other observation is a much more
stable block time; on the Ryzen 9 it's as close to 1 second as it can be.