Commit graph

4848 commits

Author SHA1 Message Date
Evgeniy Stratonikov
8c31831626 util: reduce allocations in util.Uint256DecodeStringLE
It is used a lot in clients (including our benchmark).
`Uint160` is already optimized.

```
name                     old time/op    new time/op    delta
Uint256DecodeStringLE-8     150ns ±15%     112ns ± 3%  -25.23%  (p=0.000 n=10+10)

name                     old alloc/op   new alloc/op   delta
Uint256DecodeStringLE-8     96.0B ± 0%     64.0B ± 0%  -33.33%  (p=0.000 n=10+10)

name                     old allocs/op  new allocs/op  delta
Uint256DecodeStringLE-8      2.00 ± 0%      1.00 ± 0%  -50.00%  (p=0.000 n=10+10)
```

Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-17 16:53:39 +03:00
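
A minimal sketch of the general technique behind this kind of optimization (illustrative names, not the actual neo-go code): decode the hex string straight into the fixed-size array, so no intermediate slice from hex.DecodeString is needed.

```
package util

import (
	"encoding/hex"
	"fmt"
)

// Uint256 mirrors a 32-byte little-endian hash type.
type Uint256 [32]byte

// uint256DecodeStringLE decodes a 64-character hex string directly into the
// array, avoiding the extra slice that hex.DecodeString would allocate.
func uint256DecodeStringLE(s string) (Uint256, error) {
	var u Uint256
	if len(s) != hex.EncodedLen(len(u)) {
		return u, fmt.Errorf("expected %d hex characters, got %d", hex.EncodedLen(len(u)), len(s))
	}
	if _, err := hex.Decode(u[:], []byte(s)); err != nil {
		return u, err
	}
	// The string is big-endian, the in-memory representation is little-endian.
	for i, j := 0, len(u)-1; i < j; i, j = i+1, j-1 {
		u[i], u[j] = u[j], u[i]
	}
	return u, nil
}
```
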
Roman Khimov
11351b9702
Merge pull request #2114 from nspcc-dev/optimize-rpc
rpc/request: delay parameter unmarshaling
2021-08-13 16:30:42 +03:00
Evgeniy Stratonikov
3c34e6fa21 rpc/request: delay parameter unmarshaling
It is rather costly to try to unmarshal many structs in order.

Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-13 16:22:54 +03:00
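
A hedged sketch of the idea (simplified types, not the actual rpc/request API): keep each parameter as raw JSON and unmarshal it into a concrete type only when a handler asks for it, instead of eagerly trying a list of candidate structs.

```
package request

import "encoding/json"

// Param holds the raw JSON until a concrete type is actually needed.
type Param struct {
	json.RawMessage
}

// GetInt unmarshals the parameter as an integer on demand.
func (p *Param) GetInt() (int, error) {
	var v int
	err := json.Unmarshal(p.RawMessage, &v)
	return v, err
}

// GetString unmarshals the parameter as a string on demand.
func (p *Param) GetString() (string, error) {
	var v string
	err := json.Unmarshal(p.RawMessage, &v)
	return v, err
}
```
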
Roman Khimov
5b12dd2025
Merge pull request #2128 from nspcc-dev/vm-update-int
Some VM optimizations
2021-08-13 16:16:01 +03:00
Evgeniy Stratonikov
6879f76a13 stackitem: make Buffer an alias to []byte
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-13 14:41:26 +03:00
Evgeniy Stratonikov
1dfef4ba26 stackitem: make ByteArray an alias to []byte
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-13 14:41:26 +03:00
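
The two []byte-alias commits above follow the same pattern; a rough illustration (simplified, hypothetical method set):

```
package stackitem

// Before (illustrative): a wrapper struct means an extra allocation per item.
type byteArrayWrapped struct {
	value []byte
}

// After (illustrative): the item is the slice itself, converted for free.
type ByteArray []byte

// Value returns the underlying data.
func (i ByteArray) Value() interface{} {
	return []byte(i)
}
```
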
Evgeniy Stratonikov
4f98ec2f53 vm: embed reference counter in compound items
```
name              old time/op  new time/op  delta
RefCounter_Add-8  44.8ns ± 4%  11.7ns ± 3%  -73.94%  (p=0.000 n=10+10)
```

Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-13 14:41:26 +03:00
Roman Khimov
adc660c3e0
Merge pull request #2123 from nspcc-dev/store-better
Store better
2021-08-13 12:50:24 +03:00
Evgeniy Stratonikov
dc9287bf5c compiler: use parameter directly in writeJumps
`Next` no longer copies the parameter.

Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-13 11:59:04 +03:00
Evgeniy Stratonikov
f5d1277bfd vm: do not copy parameter
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-13 11:52:38 +03:00
Roman Khimov
4a714425bf
Merge pull request #2125 from nspcc-dev/cli-convert-public
vm/cli: add public key -> address conversion, fix #2121
2021-08-13 11:49:49 +03:00
Evgeniy Stratonikov
e2910a7cb4 vm/cli: add public key -> address conversion, fix #2121
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-13 10:43:49 +03:00
Evgeniy Stratonikov
bb137abb03 crypto/keys: enforce length in PublicKey.DecodeBytes()
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-13 10:38:09 +03:00
Evgeniy Stratonikov
a5516e8c96 stackitem: make BigInteger alias to big.Int
Remove one indirection step.
```
name       old time/op    new time/op    delta
MakeInt-8    79.7ns ± 8%    56.2ns ± 8%  -29.44%  (p=0.000 n=10+10)

name       old alloc/op   new alloc/op   delta
MakeInt-8     48.0B ± 0%     40.0B ± 0%  -16.67%  (p=0.000 n=10+10)

name       old allocs/op  new allocs/op  delta
MakeInt-8      3.00 ± 0%      2.00 ± 0%  -33.33%  (p=0.000 n=10+10)
```

Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-12 17:53:36 +03:00
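
The indirection removed, sketched with illustrative names (the real constructor and method set may differ):

```
package stackitem

import "math/big"

// Before (illustrative): a separate object holding a pointer to the big.Int.
type bigIntegerWrapped struct {
	value *big.Int
}

// After (illustrative): the item is the big.Int, no extra hop or allocation.
type BigInteger big.Int

// Big returns the underlying *big.Int via a free pointer conversion.
func (i *BigInteger) Big() *big.Int {
	return (*big.Int)(i)
}

// NewBigInteger wraps an existing *big.Int without allocating.
func NewBigInteger(v *big.Int) *BigInteger {
	return (*BigInteger)(v)
}
```
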
Evgeniy Stratonikov
cff8b1c24e stackitem: use Bool item directly
It is always copied.

Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-12 17:53:36 +03:00
Roman Khimov
ae071d4542 storage: introduce PutChangeSet and use it for Persist
We're using batches the wrong way during persist: we already have all changes
accumulated in two maps, then we move them into a batch, and only then is that
batch applied. For some DBs like BoltDB this batch is just another MemoryStore,
so we essentially just shuffle the changeset from one map to another; for
others like LevelDB the batch is a serialized set of KV pairs, which doesn't
help much on the subsequent PutBatch, we just duplicate the changeset again.

So introduce PutChangeSet that accepts the two maps with sets and deletes
directly. It also allows simplifying the MemCachedStore logic.

neo-bench for single node with 10 workers, LevelDB:

  Reference:

  RPS    30189.132 30556.448 30390.482 ≈ 30379    ±  0.61%
  TPS    29427.344 29418.687 29434.273 ≈ 29427    ±  0.03%
  CPU %     33.304    27.179    33.860 ≈    31.45 ± 11.79%
  Mem MB   800.677   798.389   715.042 ≈   771    ±  6.33%

  Patched:

  RPS    30264.326 30386.364 30166.231 ≈ 30272    ± 0.36% ⇅
  TPS    29444.673 29407.440 29452.478 ≈ 29435    ± 0.08% ⇅
  CPU %     34.012    32.597    33.467 ≈   33.36  ± 2.14% ⇅
  Mem MB   549.126   523.656   517.684 ≈  530     ± 3.15% ↓ 31.26%

BoltDB:

  Reference:

  RPS    31937.647 31551.684 31850.408 ≈ 31780    ±  0.64%
  TPS    31292.049 30368.368 31307.724 ≈ 30989    ±  1.74%
  CPU %     33.792    22.339    35.887 ≈    30.67 ± 23.78%
  Mem MB  1271.687  1254.472  1215.639 ≈  1247    ±  2.30%

  Patched:

  RPS    31746.818 30859.485 31689.761 ≈ 31432    ± 1.58% ⇅
  TPS    31271.499 30340.726 30342.568 ≈ 30652    ± 1.75% ⇅
  CPU %     34.611    34.414    31.553 ≈    33.53 ± 5.11% ⇅
  Mem MB  1262.960  1231.389  1335.569 ≈  1277    ± 4.18% ⇅
2021-08-12 17:42:16 +03:00
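
A simplified sketch of the interface described above (hypothetical signatures, string keys instead of the real ones): the store consumes the two accumulated maps directly, so nothing is copied into an intermediate batch first.

```
package storage

import "sync"

// MemoryStore is a reduced in-memory store for illustration.
type MemoryStore struct {
	mut sync.Mutex
	mem map[string][]byte
	del map[string]bool
}

// PutChangeSet applies the accumulated puts and deletes in a single call
// under one lock, instead of building and replaying a batch.
func (s *MemoryStore) PutChangeSet(puts map[string][]byte, dels map[string]bool) error {
	s.mut.Lock()
	defer s.mut.Unlock()
	for k, v := range puts {
		s.mem[k] = v
		delete(s.del, k)
	}
	for k := range dels {
		delete(s.mem, k)
		s.del[k] = true
	}
	return nil
}
```
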
Roman Khimov
5aff82aef4
Merge pull request #2119 from nspcc-dev/states-exchange/insole
core, network: prepare basis for Insole module
2021-08-12 10:35:02 +03:00
Roman Khimov
3360c4a0d8
Merge pull request #2122 from nspcc-dev/vm-microopt
VM-related microoptimizations
2021-08-12 10:34:49 +03:00
Roman Khimov
47f0f4c45f dao: completely drop Cached
It was very useful in 2.0 days, but today it only serves one purpose that
could easily (and more effectively!) be solved in another way.
2021-08-11 23:06:17 +03:00
Roman Khimov
3e60771175 core: deduplicate and simplify processNEP17Transfer a bit
Just refactoring, no functional changes.
2021-08-11 22:36:26 +03:00
Roman Khimov
50ee1a1f91 *: don't use dao.Cached in tests
There is no need to use it.
2021-08-11 21:02:50 +03:00
Roman Khimov
18682f2409 storage: don't use locks for memory batches
They're inherently single-threaded, so locking makes no sense for them.
2021-08-11 18:55:07 +03:00
Roman Khimov
13da1b62fb interop: fetch baseExecFee once and keep it in the Context
It never changes during a single execution, so we can cache it and avoid going
to the Policer via the Chain for every instruction.
2021-08-11 15:42:23 +03:00
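
A sketch of the caching described above (hypothetical types; the real interop context is larger): the fee is read once when the context is created and reused for pricing every instruction.

```
package interop

// Policer is the part of the chain that knows the base execution fee.
type Policer interface {
	GetBaseExecFee() int64
}

// Context caches the fee for the whole execution.
type Context struct {
	baseExecFee int64
}

// NewContext queries the Policer exactly once.
func NewContext(p Policer) *Context {
	return &Context{baseExecFee: p.GetBaseExecFee()}
}

// GetPrice uses the cached value for every instruction.
func (c *Context) GetPrice(opcodeFactor int64) int64 {
	return c.baseExecFee * opcodeFactor
}
```
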
Roman Khimov
bdb2d24a5a vm: remove istack redirection in VM
VM always has an istack and it doesn't even change, so doing this
microallocation makes no sense. Note that estack is a bit harder to change: we
do replace it in some cases and we compare pointers to it as well.
2021-08-11 14:42:01 +03:00
Roman Khimov
ff7d594bef vm: store refcounter directly in VM
VM always has it, so allocating yet another object makes no sense.
2021-08-11 13:25:58 +03:00
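
Both VM commits above replace a pointer to a separately allocated object with a value embedded in the VM struct; roughly (illustrative):

```
package vm

// refCounter tracks how many items are referenced by all stacks.
type refCounter int

func (r *refCounter) Add(n int) { *r += refCounter(n) }

// VM embeds the counter (and its stacks) by value: the VM always has them,
// so allocating separate objects for them is wasted work.
type VM struct {
	refs refCounter
}
```
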
Anna Shaleva
0e3b9c48a2 core: add API to store StateSyncPoint and StateSyncCurrentBlockHeight
We need it in order not to mess up the blockchain, which has its own
CurrentBlockHeight.
2021-08-10 14:06:28 +03:00
Anna Shaleva
cb01f533c0 core: store conflicting transactions in a separate method
(DAO).StoreConflictingTransactions will be reused from the state sync
module.
2021-08-10 13:47:13 +03:00
Anna Shaleva
72e654332e core: refactor block queue
It requires only two methods from Blockchainer: AddBlock and BlockHeight. The
new interface will allow the block queue to be easily reused for state
exchange purposes.
2021-08-10 13:47:13 +03:00
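
The narrowed dependency, roughly (illustrative names; the real types live in neo-go):

```
package bqueue

// Block is whatever the queue stores; only its height matters here.
type Block struct {
	Index uint32
}

// Blockqueuer is all the queue needs from the chain: no full Blockchainer,
// so a state sync module can satisfy it just as easily.
type Blockqueuer interface {
	AddBlock(b *Block) error
	BlockHeight() uint32
}
```
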
Anna Shaleva
35501a281a core: remove untraceable blocks wrt StateSyncInterval 2021-08-10 13:47:10 +03:00
Roman Khimov
0a2bbf3c04
Merge pull request #2118 from nspcc-dev/neopt2
Networking improvements
2021-08-10 13:29:40 +03:00
Anna Shaleva
6ca7983be8 network: fix typo in error message 2021-08-10 11:00:39 +03:00
Anna Shaleva
76c687aaa1 config: add P2PStateExchangeExtensions and StateSyncInterval settings 2021-08-10 11:00:32 +03:00
Roman Khimov
1e0c70ecb0
Merge pull request #2117 from nspcc-dev/io-grow
Some io package improvements
2021-08-10 09:57:31 +03:00
Evgeniy Stratonikov
73e4040628 mpt: use BinWriter.Grow() instead of custom buffer
Also add benchmarks.

Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-10 09:34:05 +03:00
Evgeniy Stratonikov
c74de9a579 network: preallocate buffer for message
```
name            old time/op    new time/op    delta
MessageBytes-8     740ns ± 0%     684ns ± 2%   -7.58%  (p=0.000 n=10+10)

name            old alloc/op   new alloc/op   delta
MessageBytes-8    1.39kB ± 0%    1.20kB ± 0%  -13.79%  (p=0.000 n=10+10)

name            old allocs/op  new allocs/op  delta
MessageBytes-8      11.0 ± 0%      10.0 ± 0%   -9.09%  (p=0.000 n=10+10)
```

Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-10 09:33:52 +03:00
Roman Khimov
cf13f30dbf
Merge pull request #2112 from nspcc-dev/gas-balance
Improve NEP17 transfer serialization
2021-08-09 12:18:46 +03:00
Evgeniy Stratonikov
dacf025dd9 io: add Grow to BinWriter
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-09 12:05:31 +03:00
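
A hedged sketch of what such a Grow method can look like on a buffer-backed writer (simplified; the real BinWriter also wraps an io.Writer and carries error state):

```
package io

import "bytes"

// BinWriter is a reduced binary writer over a growable buffer.
type BinWriter struct {
	buf *bytes.Buffer
	Err error
}

// Grow pre-allocates capacity for upcoming writes so that serializing a large
// structure (an MPT node, a network message) does not reallocate repeatedly.
func (w *BinWriter) Grow(n int) {
	if w.Err == nil {
		w.buf.Grow(n)
	}
}
```
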
Evgeniy Stratonikov
c69670c85b io: use a single slice for numbers
A slice header takes 24 bytes of memory, while we really need only 9.
```
name                 old time/op    new time/op    delta
Transaction_Bytes-8     667ns ±17%     583ns ± 6%  -12.50%  (p=0.000 n=10+10)
GetVarSize-8            283ns ±11%     189ns ± 5%  -33.37%  (p=0.000 n=10+10)

name                 old alloc/op   new alloc/op   delta
Transaction_Bytes-8    1.01kB ± 0%    0.88kB ± 0%  -12.70%  (p=0.000 n=10+10)
GetVarSize-8             184B ± 0%       56B ± 0%  -69.57%  (p=0.000 n=10+10)

name                 old allocs/op  new allocs/op  delta
Transaction_Bytes-8      7.00 ± 0%      6.00 ± 0%  -14.29%  (p=0.000 n=10+10)
GetVarSize-8             3.00 ± 0%      2.00 ± 0%  -33.33%  (p=0.000 n=10+10)
```

Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-09 12:04:28 +03:00
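
The 24-vs-9 bytes point refers to a slice header versus a fixed scratch array; a sketch of the pattern (simplified writer, illustrative method):

```
package io

import "encoding/binary"

// BinWriter keeps one fixed 9-byte scratch array for all numeric encodings
// (a 1-byte prefix plus up to 8 bytes of value) instead of a 24-byte slice
// header pointing at a separately allocated buffer.
type BinWriter struct {
	buf []byte  // destination, simplified
	uv  [9]byte // shared scratch space for numbers
	Err error
}

// WriteU64LE encodes v through the shared scratch array.
func (w *BinWriter) WriteU64LE(v uint64) {
	if w.Err != nil {
		return
	}
	binary.LittleEndian.PutUint64(w.uv[:8], v)
	w.buf = append(w.buf, w.uv[:8]...)
}
```
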
Evgeniy Stratonikov
620295efe3 transaction: add benchmark for transaction serialization
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-09 12:01:38 +03:00
Evgeniy Stratonikov
23adb1e2fc state: optimize NEP17TransferLog.Append
Do not allocate a separate buffer for the transfer.
```
name                       old time/op    new time/op    delta
NEP17TransferLog_Append-8    58.8µs ± 3%    32.1µs ± 1%  -45.40%  (p=0.000 n=10+9)

name                       old alloc/op   new alloc/op   delta
NEP17TransferLog_Append-8     118kB ± 1%      44kB ± 3%  -63.00%  (p=0.000 n=9+10)

name                       old allocs/op  new allocs/op  delta
NEP17TransferLog_Append-8       901 ± 1%       513 ± 3%  -43.08%  (p=0.000 n=9+8)
```

Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-09 11:09:49 +03:00
Evgeniy Stratonikov
403a4b75de state/test: add benchmark for NEP17TransferLog.Append
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-09 11:09:49 +03:00
Evgeniy Stratonikov
b210a34b1e state: optimize NEP17Balance deserialization
```
BenchmarkNEP17BalanceFromBytes/stackitem-8         	 2402318	       503.3 ns/op	     208 B/op	      10 allocs/op
BenchmarkNEP17BalanceFromBytes/from_bytes-8        	 7623139	       160.7 ns/op	      72 B/op	       3 allocs/op
```

Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-09 11:09:49 +03:00
Evgeniy Stratonikov
3218b74ea5 state: optimize NEP17Balance serialization
Write to the slice directly and allow providing a pre-allocated buffer.
```
BenchmarkNEP17BalanceBytes/stackitem-8         	 1712475	       673.4 ns/op	     448 B/op	       9 allocs/op
BenchmarkNEP17BalanceBytes/bytes-8             	13422715	        75.80 ns/op	      32 B/op	       2 allocs/op
BenchmarkNEP17BalanceBytes/bytes,_prealloc-8   	25990371	        46.46 ns/op	      16 B/op	       1 allocs/op
```

Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-09 11:09:06 +03:00
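
A sketch of the direct serialization with a caller-provided buffer (illustrative field names and layout, not the real on-disk format):

```
package state

import (
	"encoding/binary"
	"math/big"
)

// NEP17Balance is a reduced balance record.
type NEP17Balance struct {
	Balance     big.Int
	LastUpdated uint32
}

// Bytes appends the serialized form to buf, which the caller may pre-allocate
// (or pass as nil), skipping the generic stackitem round trip entirely.
func (b *NEP17Balance) Bytes(buf []byte) []byte {
	bal := b.Balance.Bytes()
	buf = append(buf, byte(len(bal)))
	buf = append(buf, bal...)
	var upd [4]byte
	binary.LittleEndian.PutUint32(upd[:], b.LastUpdated)
	return append(buf, upd[:]...)
}
```

With a pre-allocated buf (e.g. make([]byte, 0, 16)) the whole record can be serialized with at most one allocation, in the spirit of the "bytes, prealloc" case above.
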
Roman Khimov
7bb82f1f99 network: merge two loops in iteratePeersWithSendMsg, send to 2/3
Refactor the code and be fine with sending to just 2/3 of proper peers.
Previously that was an edge case, but it can also be a normal thing to do, as
broadcasting to everyone is obviously too expensive and excessive (hi, #608).

Baseline (four node, 10 workers):

RPS    8180.760 8137.822 7858.358 7820.011 8051.076 ≈ 8010   ± 2.04%
TPS    7819.831 7521.172 7519.023 7242.965 7426.000 ≈ 7506   ± 2.78%
CPU %    41.983   38.775   40.606   39.375   35.537 ≈   39.3 ± 6.15%
Mem MB 2947.189 2743.658 2896.688 2813.276 2863.108 ≈ 2853   ± 2.74%

Patched:

RPS    9714.567 9676.102 9358.609 9371.408 9301.372 ≈ 9484   ±  2.05% ↑ 18.40%
TPS    8809.796 8796.854 8534.754 8661.158 8426.162 ≈ 8646   ±  1.92% ↑ 15.19%
CPU %    44.980   45.018   33.640   29.645   43.830 ≈   39.4 ± 18.41% ↑  0.25%
Mem MB 2989.078 2976.577 2306.185 2351.929 2910.479 ≈ 2707   ± 12.80% ↓  5.12%

There is a nuance with this patch, however: while it typically works the way
outlined above, sometimes it works like this:

RPS ≈ 6734.368
TPS ≈ 6299.332
CPU ≈ 25.552%
Mem ≈ 2706.046MB

And that's because the log looks like this:

DeltaTime, TransactionsCount, TPS
5014, 44212, 8817.710
5163, 49690, 9624.249
5166, 49523, 9586.334
5189, 49693, 9576.604
5198, 49339, 9491.920
5147, 49559, 9628.716
5192, 49680, 9568.567
5163, 49750, 9635.871
5183, 49189, 9490.450
5159, 49653, 9624.540
5167, 47945, 9279.079
5179, 2051, 396.022
5015, 4, 0.798
5004, 0, 0.000
5003, 0, 0.000
5003, 0, 0.000
5003, 0, 0.000
5003, 0, 0.000
5004, 0, 0.000
5003, 2925, 584.649
5040, 49099, 9741.865
5161, 49718, 9633.404
5170, 49228, 9521.857
5179, 49773, 9610.543
5167, 47253, 9145.152
5202, 49788, 9570.934
5177, 47704, 9214.603
5209, 46610, 8947.975
5249, 49156, 9364.831
5163, 18284, 3541.352
5072, 174, 34.306

On a network with 4 CNs and 1 RPC node there is a 1/256 probability that a block
won't be broadcast to the RPC node, so the node won't see it until the ping
timeout kicks in. While it doesn't see the block it can't accept new incoming
transactions, so the bench basically gets stuck. To me that's an acceptable
trade-off, because normal networks are much larger than that and the effect of
this patch is way more important there, but still that's what we have and we
need to take it into account.
2021-08-06 21:10:34 +03:00
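
The 2/3 logic, as a stripped-down sketch (hypothetical helper; the real loop also handles retries and per-peer queues):

```
package network

// Peer stands in for the real peer type.
type Peer interface{}

// broadcastToTwoThirds stops once at least 2/3 of proper peers have accepted
// the message; sending to absolutely everyone is excessive.
func broadcastToTwoThirds(peers []Peer, send func(Peer) error) {
	need := (len(peers)*2 + 2) / 3 // ceil(2*N/3)
	sent := 0
	for _, p := range peers {
		if sent >= need {
			return
		}
		if send(p) == nil {
			sent++
		}
	}
}
```
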
Roman Khimov
966a16e80e network: keep track of dead peers in iteratePeersWithSendMsg()
send() can return errStateMismatch, errGone and errBusy. errGone means the
peer is dead and won't ever be active again, so it makes no sense to retry
sends to it. errStateMismatch is technically "not yet ready", but we can't
wait for it either, since no one knows how long completing the handshake will
take. So only errBusy means we can retry.

So keep track of dead peers and adjust tries counting appropriately.
2021-08-06 21:10:34 +03:00
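
The error triage described above, roughly (the sentinel errors here are illustrative):

```
package network

import "errors"

var (
	errBusy          = errors.New("peer is busy")
	errGone          = errors.New("peer is gone")
	errStateMismatch = errors.New("handshake not completed")
)

// shouldRetry reports whether a failed send is worth retrying: only a
// transient busy error is; dead or not-yet-ready peers are not.
func shouldRetry(err error) bool {
	return errors.Is(err, errBusy)
}
```
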
Roman Khimov
80f3ec2312 network: move peer filtering to getPeers()
It doesn't change much: we can't magically get more valid peers, and if some
die while we're iterating we'd detect that from an error returned by send().
2021-08-06 21:10:34 +03:00
Roman Khimov
de6f4987f6 network: microoptimize iteratePeersWithSendMsg()
Now that s.getPeers() returns a slice we can use a slice for `success` too;
maps are more expensive.
2021-08-06 21:10:34 +03:00
Roman Khimov
d51db20405 network: randomize peer iteration order
While iterating over a map in getPeers() is non-deterministic, it's not really
random enough for our purposes (usually maps have just 2-3 paths through
them), and we need to fill our peer queues more uniformly.

Believe it or not, it does affect performance metrics. Baseline (four nodes,
10 workers):

RPS ≈  7791.675 7996.559 7834.504 7746.705 7891.614 ≈ 7852   ±  1.10%
TPS ≈  7241.497 7711.765 7520.211 7425.890 7334.443 ≈ 7447   ±  2.17%
CPU %    29.853   39.936   39.945   36.371   39.999 ≈   37.2 ± 10.57%
Mem MB 2749.635 2791.609 2828.610 2910.431 2863.344 ≈ 2829   ±  1.97%

Patched:

RPS    8180.760 8137.822 7858.358 7820.011 8051.076 ≈ 8010   ± 2.04% ↑ 2.01%
TPS    7819.831 7521.172 7519.023 7242.965 7426.000 ≈ 7506   ± 2.78% ↑ 0.79%
CPU %    41.983   38.775   40.606   39.375   35.537 ≈   39.3 ± 6.15% ↑ 5.65%
Mem MB 2947.189 2743.658 2896.688 2813.276 2863.108 ≈ 2853   ± 2.74% ↑ 0.85%
2021-08-06 21:10:34 +03:00
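
A sketch of the shuffle (simplified peer set; math/rand is good enough here, no cryptographic randomness is needed):

```
package network

import "math/rand"

// Peer stands in for the real peer type.
type Peer interface{}

// getPeers copies the peer set into a slice and shuffles it: map iteration
// order alone is not uniform enough to fill peer queues evenly.
func getPeers(m map[string]Peer) []Peer {
	peers := make([]Peer, 0, len(m))
	for _, p := range m {
		peers = append(peers, p)
	}
	rand.Shuffle(len(peers), func(i, j int) {
		peers[i], peers[j] = peers[j], peers[i]
	})
	return peers
}
```
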
Roman Khimov
b55c75d59d network: hide Peers, make it return a slice
A slice is a bit more efficient, we don't need a map for Peers() users, and
it's not really interesting to outside users, so it's better to hide this
method.
2021-08-06 21:10:34 +03:00
Roman Khimov
119b4200ac network: add fail-fast route for tx double processing
When a transaction spreads through the network, many nodes are likely to get
it at roughly the same time, and they will rebroadcast it at roughly the same
time too. As we have a number of peers, it's quite likely that we'd get an Inv
with the same transaction from multiple peers simultaneously. We will ask them
for this transaction (independently!) and, again, we're likely to get it at
roughly the same time. So we can easily end up with multiple threads
processing the same transaction. Only one will succeed, but we can actually
avoid doing that in the first place, saving some CPU cycles for other things.

Notice that we can't do this _before_ receiving a transaction, because nothing
guarantees that the peer will respond to our transaction request, so
communication overhead is unavoidable at the moment, but saving on processing
already gives quite interesting results.

Baseline, four nodes with 10 workers:

RPS    7176.784 7014.511 6139.663 7191.280 7080.852 ≈ 6921   ± 5.72%
TPS    6945.409 6562.756 5927.050 6681.187 6821.794 ≈ 6588   ± 5.38%
CPU %    44.400   43.842   40.418   49.211   49.370 ≈   45.4 ± 7.53%
Mem MB 2693.414 2640.602 2472.007 2731.482 2707.879 ≈ 2649   ± 3.53%

Patched:

RPS ≈  7791.675 7996.559 7834.504 7746.705 7891.614 ≈ 7852   ±  1.10% ↑ 13.45%
TPS ≈  7241.497 7711.765 7520.211 7425.890 7334.443 ≈ 7447   ±  2.17% ↑ 13.04%
CPU %    29.853   39.936   39.945   36.371   39.999 ≈   37.2 ± 10.57% ↓ 18.06%
Mem MB 2749.635 2791.609 2828.610 2910.431 2863.344 ≈ 2829   ±  1.97% ↑  6.80%
2021-08-06 21:10:25 +03:00
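
A sketch of the fail-fast check (illustrative types): the first goroutine to receive a transaction claims its hash, and concurrent copies arriving from other peers are dropped before any expensive verification.

```
package network

import "sync"

// Uint256 stands in for the transaction hash type.
type Uint256 [32]byte

// txInFlight tracks transactions currently being processed.
type txInFlight struct {
	mut sync.Mutex
	m   map[Uint256]struct{}
}

// tryClaim reports whether the caller should process this transaction;
// a second claim on the same hash fails fast.
func (t *txInFlight) tryClaim(h Uint256) bool {
	t.mut.Lock()
	defer t.mut.Unlock()
	if _, ok := t.m[h]; ok {
		return false
	}
	t.m[h] = struct{}{}
	return true
}

// done releases the hash once processing (successful or not) completes.
func (t *txInFlight) done(h Uint256) {
	t.mut.Lock()
	delete(t.m, h)
	t.mut.Unlock()
}
```
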