Commit graph

1573 commits

Roman Khimov
de4ed7d020 runtime: fix CustomGroups witness
See neo-project/neo#2586.
2021-08-24 15:50:24 +03:00
Roman Khimov
7808762ba0 transaction: avoid reencoding and reading what can't be read
name               old time/op    new time/op    delta
DecodeFromBytes-8    1.79µs ± 2%    1.46µs ± 4%  -18.44%  (p=0.000 n=10+10)

name               old alloc/op   new alloc/op   delta
DecodeFromBytes-8      800B ± 0%      624B ± 0%  -22.00%  (p=0.000 n=10+10)

name               old allocs/op  new allocs/op  delta
DecodeFromBytes-8      10.0 ± 0%       8.0 ± 0%  -20.00%  (p=0.000 n=10+10)
2021-08-23 21:41:38 +03:00
Roman Khimov
2808f6857d interop: don't allocate for Functions and Notifications in New
Functions are usually replaced immediately (and it's OK for them to be nil,
since searching through a zero-length array is fine), while Notifications are
usually only appended to (and are completely useless in verification contexts).
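
A tiny standalone Go illustration of the nil-slice behaviour this relies on
(not neo-go code): ranging over a nil slice is a no-op and append allocates
only when something is actually added.

```go
package main

import "fmt"

func main() {
	var functions []func()        // nil, no allocation has happened
	for _, f := range functions { // ranging over a nil slice does nothing
		f()
	}

	var notifications []string // stays nil until something is emitted
	notifications = append(notifications, "Transfer") // append allocates lazily
	fmt.Println(len(functions), len(notifications))   // 0 1
}
```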
2021-08-20 11:56:28 +03:00
Roman Khimov
a68a8aa8fc core: simplify and correct notification handling
* both 'to' and 'from' are either Null or Hash160, there is no other
  possibility for a valid NEP-17 transfer, so returning util.Uint160{} in case
  of a parsing error is wrong
* but this is what allowed burns/mints to work, at the expense of an error
  allocation inside of util.Uint160DecodeBytesBE()
* a Uint160 can technically fit into a regular VM integer, so even though it'd
  be quite surprising to see one there, TryBytes() is more correct (and
  easier!) to use (see the sketch below)
* the same goes for `amount`: we have `TryInteger()`, which easily covers all
  possible cases and does the appropriate error checking inside
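
A rough sketch of the parsing approach, with a simplified stand-in for the
stack item interface (not the real neo-go stackitem types; Null is modelled
here as an empty byte string): errors are propagated instead of silently
producing a zero address.

```go
package nep17

import (
	"errors"
	"fmt"
	"math/big"
)

// Item is a simplified stand-in for a VM stack item.
type Item interface {
	TryBytes() ([]byte, error)
	TryInteger() (*big.Int, error)
}

// parseTransfer extracts 'to' and 'amount' from notification arguments,
// returning errors instead of falling back to a zero address.
func parseTransfer(to, amount Item) ([]byte, *big.Int, error) {
	b, err := to.TryBytes()
	if err != nil {
		return nil, nil, fmt.Errorf("bad 'to': %w", err)
	}
	if len(b) != 0 && len(b) != 20 { // Null (empty here) or Hash160 only
		return nil, nil, errors.New("'to' is neither Null nor Hash160")
	}
	amt, err := amount.TryInteger()
	if err != nil {
		return nil, nil, fmt.Errorf("bad 'amount': %w", err)
	}
	return b, amt, nil
}
```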
2021-08-20 11:26:16 +03:00
Roman Khimov
abc48229a3 block: Grow buffer on Trim, avoid reallocations 2021-08-20 11:05:46 +03:00
Anna Shaleva
5f9d38f640 core: refactor (*DAO).StoreAsTransaction
Squash (*DAO).StoreAsTransaction and
(*DAO).StoreConflictingTransactions. It's better to keep them together,
because StoreAsTransaction is always followed by
StoreConflictingTransactions, so the pair is effectively one atomic operation.

The logic wasn't changed.
2021-08-18 13:39:28 +03:00
Anna Shaleva
4b35a1cf92 core: remove conflicting transactions wrt MaxTraceableBlocks 2021-08-18 13:31:47 +03:00
Roman Khimov
f477a48758 contract: block calls to contracts via Policy contract
See neo-project/neo#2567.
2021-08-17 15:24:06 +03:00
Roman Khimov
adc660c3e0
Merge pull request #2123 from nspcc-dev/store-better
Store better
2021-08-13 12:50:24 +03:00
Roman Khimov
ae071d4542 storage: introduce PutChangeSet and use it for Persist
We're using batches in the wrong way during persist: we already have all
changes accumulated in two maps, yet we move them into a batch that is then
applied. For some DBs like BoltDB this batch is just another MemoryStore, so
we essentially shuffle the changeset from one map to another; for others like
LevelDB the batch is just a serialized set of KV pairs, which doesn't help
much on the subsequent PutBatch, we just duplicate the changeset again.

So introduce PutChangeSet, which takes the two maps of puts and deletes
directly. It also allows simplifying the MemCachedStore logic.
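
A minimal sketch of the interface change with hypothetical names (the real
PutChangeSet signature may differ): the persister consumes the accumulated
maps directly instead of copying them into an intermediate batch.

```go
package storage

// Store is a hypothetical persistent KV backend.
type Store interface {
	Put(key, value []byte) error
	Delete(key []byte) error
}

// PutChangeSet applies the accumulated puts and deletes in one pass,
// skipping the map -> batch -> store copy described above.
func PutChangeSet(s Store, puts map[string][]byte, deletes map[string]bool) error {
	for k, v := range puts {
		if err := s.Put([]byte(k), v); err != nil {
			return err
		}
	}
	for k := range deletes {
		if err := s.Delete([]byte(k)); err != nil {
			return err
		}
	}
	return nil
}
```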

neo-bench for single node with 10 workers, LevelDB:

  Reference:

  RPS    30189.132 30556.448 30390.482 ≈ 30379    ±  0.61%
  TPS    29427.344 29418.687 29434.273 ≈ 29427    ±  0.03%
  CPU %     33.304    27.179    33.860 ≈    31.45 ± 11.79%
  Mem MB   800.677   798.389   715.042 ≈   771    ±  6.33%

  Patched:

  RPS    30264.326 30386.364 30166.231 ≈ 30272    ± 0.36% ⇅
  TPS    29444.673 29407.440 29452.478 ≈ 29435    ± 0.08% ⇅
  CPU %     34.012    32.597    33.467 ≈   33.36  ± 2.14% ⇅
  Mem MB   549.126   523.656   517.684 ≈  530     ± 3.15% ↓ 31.26%

BoltDB:

  Reference:

  RPS    31937.647 31551.684 31850.408 ≈ 31780    ±  0.64%
  TPS    31292.049 30368.368 31307.724 ≈ 30989    ±  1.74%
  CPU %     33.792    22.339    35.887 ≈    30.67 ± 23.78%
  Mem MB  1271.687  1254.472  1215.639 ≈  1247    ±  2.30%

  Patched:

  RPS    31746.818 30859.485 31689.761 ≈ 31432    ± 1.58% ⇅
  TPS    31271.499 30340.726 30342.568 ≈ 30652    ± 1.75% ⇅
  CPU %     34.611    34.414    31.553 ≈    33.53 ± 5.11% ⇅
  Mem MB  1262.960  1231.389  1335.569 ≈  1277    ± 4.18% ⇅
2021-08-12 17:42:16 +03:00
Roman Khimov
5aff82aef4
Merge pull request #2119 from nspcc-dev/states-exchange/insole
core, network: prepare basis for Insole module
2021-08-12 10:35:02 +03:00
Roman Khimov
47f0f4c45f dao: completely drop Cached
It was very useful in 2.0 days, but today it only serves one purpose that
could easily (and more effectively!) be solved in another way.
2021-08-11 23:06:17 +03:00
Roman Khimov
3e60771175 core: deduplicate and simplify processNEP17Transfer a bit
Just refactoring, no functional changes.
2021-08-11 22:36:26 +03:00
Roman Khimov
50ee1a1f91 *: don't use dao.Cached in tests
There is no need to use it.
2021-08-11 21:02:50 +03:00
Roman Khimov
18682f2409 storage: don't use locks for memory batches
They're inherently single-threaded, so locking makes no sense for them.
2021-08-11 18:55:07 +03:00
Roman Khimov
13da1b62fb interop: fetch baseExecFee once and keep it in the Context
It never changes during a single execution, so we can cache it and avoid going
to the Policer via the Chain for every instruction.
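
A small sketch of the caching pattern with illustrative names (not the actual
interop.Context fields): the fee factor is read once when the context is
created and reused for every instruction.

```go
package interop

// Policer is a stand-in for the policy source reachable via the chain.
type Policer interface {
	GetBaseExecFee() int64
}

// Context caches per-execution constants.
type Context struct {
	baseExecFee int64
}

// NewContext reads the base execution fee once; it can't change within
// a single execution, so no further Policer calls are needed.
func NewContext(p Policer) *Context {
	return &Context{baseExecFee: p.GetBaseExecFee()}
}

// GetPrice scales an opcode's base cost by the cached factor, avoiding
// a Policer round-trip per instruction.
func (c *Context) GetPrice(opBaseCost int64) int64 {
	return opBaseCost * c.baseExecFee
}
```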
2021-08-11 15:42:23 +03:00
Anna Shaleva
0e3b9c48a2 core: add API to store StateSyncPoint and StateSyncCurrentBlockHeight
We need it in order not to mess up the blockchain, which has its own
CurrentBlockHeight.
2021-08-10 14:06:28 +03:00
Anna Shaleva
cb01f533c0 core: store conflicting transactions in a separate method
(DAO).StoreConflictingTransactions will be reused from the state sync
module.
2021-08-10 13:47:13 +03:00
Anna Shaleva
72e654332e core: refactor block queue
It requires only two methods from Blockchainer: AddBlock and
BlockHeight. The new interface will make it easy to reuse the block queue
for state exchange purposes.
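
The kind of narrow dependency this enables, sketched with illustrative names:
the queue only needs the two methods it actually calls, so a state-sync
component can satisfy it just as well as the full Blockchainer.

```go
package blockqueue

// Block is a minimal placeholder for the chain's block type.
type Block struct {
	Index uint32
}

// Blockqueuer is the minimal surface the block queue needs: anything
// that accepts blocks and reports its current height will do.
type Blockqueuer interface {
	AddBlock(b *Block) error
	BlockHeight() uint32
}

// Queue feeds blocks to whatever implements Blockqueuer.
type Queue struct {
	chain Blockqueuer
}

// PutBlock hands a block over once it is the next expected one; the
// real queue also buffers out-of-order blocks, which is omitted here.
func (q *Queue) PutBlock(b *Block) error {
	if b.Index != q.chain.BlockHeight()+1 {
		return nil
	}
	return q.chain.AddBlock(b)
}
```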
2021-08-10 13:47:13 +03:00
Anna Shaleva
35501a281a core: remove untraceable blocks wrt StateSyncInterval 2021-08-10 13:47:10 +03:00
Anna Shaleva
76c687aaa1 config: add P2PStateExchangeExtensions and StateSyncInterval settings 2021-08-10 11:00:32 +03:00
Roman Khimov
1e0c70ecb0
Merge pull request #2117 from nspcc-dev/io-grow
Some io package improvements
2021-08-10 09:57:31 +03:00
Evgeniy Stratonikov
73e4040628 mpt: use BinWriter.Grow() instead of custom buffer
Also add benchmarks.

Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-10 09:34:05 +03:00
Evgeniy Stratonikov
620295efe3 transaction: add benchmark for transaction serialization
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-09 12:01:38 +03:00
Evgeniy Stratonikov
23adb1e2fc state: optimize NEP17TransferLog.Append
Do not allocate a separate buffer for the transfer.
```
name                       old time/op    new time/op    delta
NEP17TransferLog_Append-8    58.8µs ± 3%    32.1µs ± 1%  -45.40%  (p=0.000 n=10+9)

name                       old alloc/op   new alloc/op   delta
NEP17TransferLog_Append-8     118kB ± 1%      44kB ± 3%  -63.00%  (p=0.000 n=9+10)

name                       old allocs/op  new allocs/op  delta
NEP17TransferLog_Append-8       901 ± 1%       513 ± 3%  -43.08%  (p=0.000 n=9+8)
```

Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-09 11:09:49 +03:00
Evgeniy Stratonikov
403a4b75de state/test: add benchmark for NEP17TransferLog.Append
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-09 11:09:49 +03:00
Evgeniy Stratonikov
b210a34b1e state: optimize NEP17Balance deserialization
```
BenchmarkNEP17BalanceFromBytes/stackitem-8         	 2402318	       503.3 ns/op	     208 B/op	      10 allocs/op
BenchmarkNEP17BalanceFromBytes/from_bytes-8        	 7623139	       160.7 ns/op	      72 B/op	       3 allocs/op
```

Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-09 11:09:49 +03:00
Evgeniy Stratonikov
3218b74ea5 state: optimize NEP17Balance serialization
Write to the slice directly and allow providing a pre-allocated buffer.
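
A sketch of the append-style encoding (the field layout is simplified, not the
real NEP17Balance format): the caller may pass a pre-allocated buffer and the
encoder appends to it, so a hot loop can reuse one buffer via
`buf = b.Bytes(buf[:0])`.

```go
package state

import "encoding/binary"

// Balance is a simplified stand-in with two fixed-width fields.
type Balance struct {
	Amount        uint64
	UpdatedHeight uint32
}

// Bytes appends the encoded balance to buf (which may be nil or
// pre-allocated by the caller) and returns the resulting slice.
func (b *Balance) Bytes(buf []byte) []byte {
	buf = binary.LittleEndian.AppendUint64(buf, b.Amount)
	buf = binary.LittleEndian.AppendUint32(buf, b.UpdatedHeight)
	return buf
}
```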
```
BenchmarkNEP17BalanceBytes/stackitem-8         	 1712475	       673.4 ns/op	     448 B/op	       9 allocs/op
BenchmarkNEP17BalanceBytes/bytes-8             	13422715	        75.80 ns/op	      32 B/op	       2 allocs/op
BenchmarkNEP17BalanceBytes/bytes,_prealloc-8   	25990371	        46.46 ns/op	      16 B/op	       1 allocs/op
```

Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-09 11:09:06 +03:00
Roman Khimov
b989504d74
Merge pull request #2108 from nspcc-dev/optimize-mpt
Some allocation optimizations
2021-08-06 14:51:10 +03:00
Evgeniy Stratonikov
bd2b1a0521 mpt: add Size method to trie nodes
Knowing the serialized size of the node is useful for
preallocating a byte slice in advance.

Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-06 12:01:16 +03:00
Evgeniy Stratonikov
db80ef28df mpt: move empty hash node in a separate type
We use them quite frequently (consider the children of a new branch
node), so it is better to get rid of unneeded allocations.
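
A sketch of the idea with illustrative types: the empty child is a dedicated
zero-size type, so filling a new branch node's children costs no heap
allocations (all EmptyNode{} values are identical).

```go
package mpt

// Node is the common interface of trie nodes (simplified).
type Node interface {
	isNode()
}

// EmptyNode marks an absent child; it has zero size, so storing it in
// an interface value does not allocate.
type EmptyNode struct{}

func (EmptyNode) isNode() {}

// BranchNode keeps 16 children plus one value child (17 slots).
type BranchNode struct {
	Children [17]Node
}

// NewBranchNode fills every slot with EmptyNode{} without allocating
// anything besides the node itself.
func NewBranchNode() *BranchNode {
	b := new(BranchNode)
	for i := range b.Children {
		b.Children[i] = EmptyNode{}
	}
	return b
}
```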

Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-06 12:01:16 +03:00
Evgeniy Stratonikov
291a29af1e *: do not use WriteArray for frequently used items
`WriteArray` involves reflection, so it makes sense to optimize
serialization of transactions and application logs, which are serialized
constantly. Adding a case to the type switch in `WriteArray` is not an
option, because we don't want new dependencies for the `io` package.
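
A standalone sketch of the hand-rolled approach (Go's Uvarint stands in for
the protocol's own variable-length encoding, and Witness is simplified): the
length prefix and each element are written explicitly, with no reflection in
the path.

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// Witness is a simplified element type.
type Witness struct {
	Invocation   []byte
	Verification []byte
}

// writeVarBytes writes a length prefix followed by the raw bytes.
func writeVarBytes(buf *bytes.Buffer, b []byte) {
	var l [binary.MaxVarintLen64]byte
	buf.Write(l[:binary.PutUvarint(l[:], uint64(len(b)))])
	buf.Write(b)
}

// encodeWitnesses spells the loop out instead of handing the slice to a
// reflection-based WriteArray: one count prefix, then each element.
func encodeWitnesses(buf *bytes.Buffer, ws []Witness) {
	var l [binary.MaxVarintLen64]byte
	buf.Write(l[:binary.PutUvarint(l[:], uint64(len(ws)))])
	for i := range ws {
		writeVarBytes(buf, ws[i].Invocation)
		writeVarBytes(buf, ws[i].Verification)
	}
}

func main() {
	var buf bytes.Buffer
	encodeWitnesses(&buf, []Witness{{Invocation: []byte{0x0c}, Verification: []byte{0x41}}})
	fmt.Printf("% x\n", buf.Bytes())
}
```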

```
name                          old time/op    new time/op    delta
AppExecResult_EncodeBinary-8     852ns ± 3%     656ns ± 2%  -22.94%  (p=0.000 n=10+9)

name                          old alloc/op   new alloc/op   delta
AppExecResult_EncodeBinary-8      448B ± 0%      376B ± 0%  -16.07%  (p=0.000 n=10+10)

name                          old allocs/op  new allocs/op  delta
AppExecResult_EncodeBinary-8      7.00 ± 0%      5.00 ± 0%  -28.57%  (p=0.000 n=10+10)
```

```
name                 old time/op    new time/op    delta
Transaction_Bytes-8    1.29µs ± 3%    0.76µs ± 5%  -41.52%  (p=0.000 n=9+10)

name                 old alloc/op   new alloc/op   delta
Transaction_Bytes-8    1.21kB ± 0%    1.01kB ± 0%  -16.56%  (p=0.000 n=10+10)

name                 old allocs/op  new allocs/op  delta
Transaction_Bytes-8      12.0 ± 0%       7.0 ± 0%  -41.67%  (p=0.000 n=10+10)
```

Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-06 11:59:20 +03:00
Roman Khimov
39f874d03f core: don't recalculate witness script hash
We know it already, but with the current loading code the VM will hash it once
more. It doesn't change a lot, but it costs nothing to avoid this
overhead.

name             old time/op    new time/op    delta
VerifyWitness-8    93.4µs ± 3%    92.7µs ± 2%    ~     (p=0.353 n=10+10)

name             old alloc/op   new alloc/op   delta
VerifyWitness-8    3.43kB ± 0%    3.40kB ± 0%  -0.70%  (p=0.000 n=9+9)

name             old allocs/op  new allocs/op  delta
VerifyWitness-8      67.0 ± 0%      66.0 ± 0%  -1.49%  (p=0.000 n=10+10)
2021-08-06 11:25:09 +03:00
Evgeniy Stratonikov
43ee671f36 mpt: do not allocate NodeObject for serialization
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-06 10:28:19 +03:00
Roman Khimov
e41fc2fd1b
Merge pull request #2111 from nspcc-dev/drop-refuel
native: drop Refuel method from GAS
2021-08-05 16:42:29 +03:00
Roman Khimov
d6bd6b6888 native: drop Refuel method from GAS
It can be used to attack the network (DoS amplification), so it's broken
beyond repair. This reverts ac601601c1.

See also neo-project/neo#2560 and neo-project/neo#2561.
2021-08-05 10:27:13 +03:00
Roman Khimov
892c9785ad transaction: don't allocate new buffer to calculate hash
We can write directly to hash.Hash.
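
A standalone illustration of the pattern (sha256 and the fields are just
stand-ins for the real transaction hashing): the encoder writes straight into
the hash state, which implements io.Writer, instead of into a temporary
buffer that is hashed afterwards.

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// txHash streams encoded fields directly into the hash.Hash writer.
func txHash(version byte, nonce uint32, script []byte) [sha256.Size]byte {
	h := sha256.New() // hash.Hash implements io.Writer
	h.Write([]byte{version})
	binary.Write(h, binary.LittleEndian, nonce) // hash writers never return errors
	h.Write(script)
	var out [sha256.Size]byte
	copy(out[:], h.Sum(nil))
	return out
}

func main() {
	fmt.Printf("%x\n", txHash(0, 42, []byte{0x40}))
}
```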

name               old time/op    new time/op    delta
DecodeBinary-8       2.89µs ± 3%    2.82µs ± 5%     ~     (p=0.052 n=10+10)
DecodeJSON-8         13.0µs ± 1%    12.8µs ± 1%   -1.54%  (p=0.002 n=10+8)
DecodeFromBytes-8    2.37µs ± 1%    2.25µs ± 5%   -5.25%  (p=0.000 n=9+10)

name               old alloc/op   new alloc/op   delta
DecodeBinary-8       1.75kB ± 0%    1.53kB ± 0%  -12.79%  (p=0.000 n=10+10)
DecodeJSON-8         3.49kB ± 0%    3.26kB ± 0%   -6.42%  (p=0.000 n=10+10)
DecodeFromBytes-8    1.37kB ± 0%    1.14kB ± 0%  -16.37%  (p=0.000 n=10+10)

name               old allocs/op  new allocs/op  delta
DecodeBinary-8         26.0 ± 0%      23.0 ± 0%  -11.54%  (p=0.000 n=10+10)
DecodeJSON-8           58.0 ± 0%      55.0 ± 0%   -5.17%  (p=0.000 n=10+10)
DecodeFromBytes-8      18.0 ± 0%      15.0 ± 0%  -16.67%  (p=0.000 n=10+10)
2021-08-04 23:43:20 +03:00
Roman Khimov
6d10cdc2f6 transaction: avoid ReadArray()
Reflection adds some real cost to it:

name               old time/op    new time/op    delta
DecodeBinary-8       3.14µs ± 5%    2.89µs ± 3%   -8.19%  (p=0.000 n=10+10)
DecodeJSON-8         12.6µs ± 3%    13.0µs ± 1%   +3.77%  (p=0.000 n=10+10)
DecodeFromBytes-8    2.73µs ± 2%    2.37µs ± 1%  -13.12%  (p=0.000 n=9+9)

name               old alloc/op   new alloc/op   delta
DecodeBinary-8       1.82kB ± 0%    1.75kB ± 0%   -3.95%  (p=0.000 n=10+10)
DecodeJSON-8         3.49kB ± 0%    3.49kB ± 0%     ~     (all equal)
DecodeFromBytes-8    1.44kB ± 0%    1.37kB ± 0%   -5.00%  (p=0.000 n=10+10)

name               old allocs/op  new allocs/op  delta
DecodeBinary-8         29.0 ± 0%      26.0 ± 0%  -10.34%  (p=0.000 n=10+10)
DecodeJSON-8           58.0 ± 0%      58.0 ± 0%     ~     (all equal)
DecodeFromBytes-8      21.0 ± 0%      18.0 ± 0%  -14.29%  (p=0.000 n=10+10)
2021-08-04 23:34:57 +03:00
Roman Khimov
d2732a71d8 transaction: don't overwrite error and witnesses length check
ReadArray() can return an error and we shouldn't overwrite it. At the same
time, limiting ReadArray() to the number of Signers can make it return the
wrong error if the number of witnesses is actually bigger than the number of
signers, so use MaxAttributes instead.
2021-08-04 23:17:50 +03:00
Roman Khimov
d487b54612 transaction: don't recalculate size when decoding from buffer
name               old time/op    new time/op    delta
DecodeBinary-8       3.17µs ± 6%    3.14µs ± 5%     ~     (p=0.579 n=10+10)
DecodeJSON-8         12.8µs ± 3%    12.6µs ± 3%     ~     (p=0.105 n=10+10)
DecodeFromBytes-8    3.45µs ± 4%    2.73µs ± 2%  -20.70%  (p=0.000 n=10+9)

name               old alloc/op   new alloc/op   delta
DecodeBinary-8       1.82kB ± 0%    1.82kB ± 0%     ~     (all equal)
DecodeJSON-8         3.49kB ± 0%    3.49kB ± 0%     ~     (all equal)
DecodeFromBytes-8    1.82kB ± 0%    1.44kB ± 0%  -21.05%  (p=0.000 n=10+10)

name               old allocs/op  new allocs/op  delta
DecodeBinary-8         29.0 ± 0%      29.0 ± 0%     ~     (all equal)
DecodeJSON-8           58.0 ± 0%      58.0 ± 0%     ~     (all equal)
DecodeFromBytes-8      29.0 ± 0%      21.0 ± 0%  -27.59%  (p=0.000 n=10+10)
2021-08-04 23:13:58 +03:00
Roman Khimov
64c780ad7a native: optimize totalSupply operations during token burn/mint
We burn GAS in OnPersist for every transaction, so some buffer reuse here is
quite natural.

This also doesn't change a lot in the overall TPS picture, maybe adding some
1%.
2021-08-03 17:59:38 +03:00
Roman Khimov
dede4fa7b1 state: convert NEO balance to stack item directly
Avoid calling Append(), which would reallocate the slice; we know
the exact length of the slice in advance.
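
A plain-Go illustration of the slice handling (the three fields are a
simplified stand-in for the NEO balance state): sizing the slice with make()
up front fills it in place, with none of the growth reallocations repeated
appends would cause.

```go
package main

import "fmt"

// toStackItems builds the representation directly; the element count is
// known in advance, so the slice is allocated exactly once.
func toStackItems(balance, updateHeight int64, voteTo []byte) []interface{} {
	items := make([]interface{}, 3)
	items[0] = balance
	items[1] = updateHeight
	items[2] = voteTo
	return items
}

func main() {
	fmt.Println(toStackItems(100, 2000, nil))
}
```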
2021-08-03 17:59:38 +03:00
Roman Khimov
5c65d33439 native: move required balance check to token contracts
This duplicates the check, but deduplicates the error path. The check forced
double balance deserialization, which is quite a costly operation, so we'd
better do it once.

It's hardly noticeable in TPS metrics though, maybe some 1-2%.
2021-08-03 17:59:38 +03:00
Roman Khimov
3c1325035e fee: use array for opcodes
Uses less memory and gives faster access.
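
A sketch of the lookup structure (the opcode values and fees below are purely
illustrative): a 256-entry array indexed by the opcode byte covers every
possible opcode with no hashing and less memory than a map.

```go
package main

import "fmt"

// Opcode is a single-byte VM instruction code.
type Opcode byte

// Illustrative opcodes and prices, not the real table.
const (
	NOP Opcode = 0x21
	ADD Opcode = 0x9E
)

// prices is indexed directly by the opcode byte.
var prices = [256]int64{
	NOP: 1,
	ADD: 8,
}

func opcodePrice(op Opcode) int64 {
	return prices[op]
}

func main() {
	fmt.Println(opcodePrice(ADD))
}
```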

name       old time/op  new time/op  delta
Opcode1-8  22.4ns ± 6%   3.0ns ± 6%  -86.63%  (p=0.000 n=10+10)
2021-08-02 20:18:33 +03:00
Roman Khimov
dfc514eda0
Merge pull request #2102 from nspcc-dev/store4
Improve (*MemCachedStore).Persist
2021-08-02 20:10:05 +03:00
Roman Khimov
82f481e143
Merge pull request #2105 from nspcc-dev/json-restrict
native/std: restrict amount of items in JSON deserialization
2021-08-02 19:41:54 +03:00
Evgeniy Stratonikov
bdb9748c1b native/std: restrict amount of items in JSON deserialization
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-02 18:57:47 +03:00
Roman Khimov
f8174ca64c core: ensure data logged is from persistent store
Using bc.dao here is wrong, it can contain unpersisted data.
2021-08-02 16:33:09 +03:00
Roman Khimov
8277b7a19a core: don't spawn goroutine for persist function
It doesn't make any sense; in some situations it leads to a number of
goroutines being created that will Persist one after another (as we can't
Persist concurrently). We can manage this better in a single thread.

This doesn't change performance in any way, but it somewhat reduces resource
consumption. It was tested with neo-bench (single node, 10 workers, LevelDB)
on two machines and with block dump processing (RC4 testnet up to 62800 with
VerifyBlocks set to false) on an i7-8565U.

Reference (b9be892bf9):

Ryzen 9 5950X:
RPS     27747.349 27407.726 27520.210  ≈ 27558   ± 0.63%
TPS     26992.010 26993.468 27010.966  ≈ 26999   ± 0.04%
CPU %      28.928    28.096    29.105  ≈    28.7 ± 1.88%
Mem MB    760.385   726.320   756.118  ≈   748   ± 2.48%

Core i7-8565U:
RPS     7783.229 7628.409 7542.340  ≈ 7651   ± 1.60%
TPS     7708.436 7607.397 7489.459  ≈ 7602   ± 1.44%
CPU %     74.899   71.020   72.697  ≈   72.9 ± 2.67%
Mem MB   438.047  436.967  416.350  ≈  430   ± 2.84%

DB restore:
real    0m20.838s 0m21.895s 0m21.794s  ≈ 21.51 ± 2.71%
user    0m39.091s 0m40.565s 0m41.493s  ≈ 40.38 ± 3.00%
sys      0m3.184s  0m2.923s  0m3.062s  ≈  3.06 ± 4.27%

Patched:

Ryzen 9 5950X:
RPS     27636.957 27246.911 27462.036  ≈ 27449   ±  0.71%  ↓ 0.40%
TPS     27003.672 26993.468 27011.696  ≈ 27003   ±  0.03%  ↑ 0.01%
CPU %      28.562    28.475    28.012  ≈    28.3 ±  1.04%  ↓ 1.39%
Mem MB    627.007   648.110   794.895  ≈   690   ± 13.25%  ↓ 7.75%

Core i7-8565U:
RPS     7497.210 7527.797 7897.532  ≈ 7641   ±  2.92%  ↓ 0.13%
TPS     7461.128 7482.678 7841.723  ≈ 7595   ±  2.81%  ↓ 0.09%
CPU %     71.559   73.423   69.005  ≈   71.3 ±  3.11%  ↓ 2.19%
Mem MB   393.090  395.899  482.264  ≈  424   ± 11.96%  ↓ 1.40%

DB restore:
real    0m20.773s 0m21.583s 0m20.522s  ≈ 20.96 ±  2.65%  ↓ 2.56%
user    0m39.322s 0m42.268s 0m38.626s  ≈ 40.07 ±  4.82%  ↓ 0.77%
sys      0m3.006s  0m3.597s  0m3.042s  ≈  3.22 ± 10.31%  ↑ 5.23%
2021-08-02 16:33:00 +03:00
Roman Khimov
b9be892bf9 storage: allow accessing MemCachedStore during Persist
Persist by definition doesn't change the visible state of MemCachedStore: all
KV pairs that were accessible via it before Persist remain accessible after
Persist. The only thing it does is flush the current set of KV pairs from
memory to the persistent store. To do that it needs read-only access to the
current KV pair set, but since it technically replaces the maps, we have to
take a full write lock, which makes MemCachedStore inaccessible for the
duration of Persist. And Persist can take a lot of time, as it means disk
access for regular DBs.

What we do here is create new in-memory maps for MemCachedStore before
flushing the old ones to the persistent store. Then a fake persistent store is
created, which actually is a MemCachedStore with the old maps, so it has
exactly the same visible state. This Store is never accessed for writes, so we
can read it without taking any internal locks, and at the same time we no
longer need write locks on the original MemCachedStore, as we're not using its
old maps anymore. All of this makes it possible to use MemCachedStore as
usual: reads are handled by going down to whatever level is needed and writes
are handled by the new maps. So while Persist for (*Blockchain).dao does its
most time-consuming work, we can process other blocks (reading data for
transactions and persisting storeBlock caches to (*Blockchain).dao).
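
A simplified sketch of the map-swap pattern (not the actual MemCachedStore
code; deletions are omitted and the backend is assumed to be safe for
concurrent use, as real DBs are): Persist detaches the current map under a
short write lock and flushes it outside the lock, while readers keep seeing
the same state through the layered Get.

```go
package storage

import "sync"

// Backend stands in for the persistent store.
type Backend interface {
	Put(key string, value []byte)
	Get(key string) ([]byte, bool)
}

type cachedStore struct {
	mu   sync.RWMutex
	mem  map[string][]byte // current change set
	flux map[string][]byte // change set being persisted (read-only while set)
	back Backend
}

// Persist swaps in a fresh map under a short write lock, then flushes
// the detached old map without blocking readers or writers of the new one.
func (s *cachedStore) Persist() {
	s.mu.Lock()
	s.flux = s.mem
	s.mem = make(map[string][]byte)
	s.mu.Unlock()

	for k, v := range s.flux { // nothing writes to flux, reading it is safe
		s.back.Put(k, v)
	}

	s.mu.Lock()
	s.flux = nil // everything has reached the backend
	s.mu.Unlock()
}

// Get reads through the layers: fresh map, detached map, then backend,
// so the visible state is the same before, during and after Persist.
func (s *cachedStore) Get(key string) ([]byte, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	if v, ok := s.mem[key]; ok {
		return v, true
	}
	if s.flux != nil {
		if v, ok := s.flux[key]; ok {
			return v, true
		}
	}
	return s.back.Get(key)
}
```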

The change was tested for performance with neo-bench (single node, 10 workers,
LevelDB) on two machines and with block dump processing (RC4 testnet up to
62800 with VerifyBlocks set to false) on an i7-8565U.

Reference results (bbe4e9cd7b):

Ryzen 9 5950X:
RPS     23616.969 22817.086 23222.378  ≈ 23218   ± 1.72%
TPS     23047.316 22608.578 22735.540  ≈ 22797   ± 0.99%
CPU %      23.434    25.553    23.848  ≈    24.3 ± 4.63%
Mem MB    600.636   503.060   582.043  ≈   562   ± 9.22%

Core i7-8565U:
RPS     6594.007 6499.501 6572.902  ≈ 6555   ± 0.76%
TPS     6561.680 6444.545 6510.120  ≈ 6505   ± 0.90%
CPU %     58.452   60.568   62.474    ≈ 60.5 ± 3.33%
Mem MB   234.893  285.067  269.081   ≈ 263   ± 9.75%

DB restore:
real    0m22.237s 0m23.471s 0m23.409s  ≈ 23.04 ± 3.02%
user    0m35.435s 0m38.943s 0m39.247s  ≈ 37.88 ± 5.59%
sys      0m3.085s  0m3.360s  0m3.144s  ≈  3.20 ± 4.53%

After the change:

Ryzen 9 5950X:
RPS     27747.349 27407.726 27520.210  ≈ 27558   ± 0.63%  ↑ 18.69%
TPS     26992.010 26993.468 27010.966  ≈ 26999   ± 0.04%  ↑ 18.43%
CPU %      28.928    28.096    29.105  ≈    28.7 ± 1.88%  ↑ 18.1%
Mem MB    760.385   726.320   756.118  ≈   748   ± 2.48%  ↑ 33.10%

Core i7-8565U:
RPS     7783.229 7628.409 7542.340  ≈ 7651   ± 1.60%  ↑ 16.72%
TPS     7708.436 7607.397 7489.459  ≈ 7602   ± 1.44%  ↑ 16.85%
CPU %     74.899   71.020   72.697  ≈   72.9 ± 2.67%  ↑ 20.50%
Mem MB   438.047  436.967  416.350  ≈  430   ± 2.84%  ↑ 63.50%

DB restore:
real    0m20.838s 0m21.895s 0m21.794s  ≈ 21.51 ± 2.71%  ↓ 6.64%
user    0m39.091s 0m40.565s 0m41.493s  ≈ 40.38 ± 3.00%  ↑ 6.60%
sys      0m3.184s  0m2.923s  0m3.062s  ≈  3.06 ± 4.27%  ↓ 4.38%

It obviously uses more memory now and utilizes the CPU more aggressively, but
at the same time it improves all relevant metrics and finally reaches the
point where we process 50K transactions in less than a second on the Ryzen 9
5950X (going higher than 25K TPS). The other observation is a much more stable
block time; on the Ryzen 9 it's as close to 1 second as it can be.
2021-08-02 16:33:00 +03:00