Problem:
```
--- FAIL: TestMakeDirForFile_HappyPath (0.01s)
testing.go:894: TempDir RemoveAll cleanup: remove C:\Users\Anna\AppData\Local\Temp\TestMakeDirForFile_HappyPath402638411\001\testDir\testFile.test: The process cannot access the file because it is being used by another process.
--- FAIL: TestMakeDirForFile_Negative (0.01s)
testing.go:894: TempDir RemoveAll cleanup: remove C:\Users\Anna\AppData\Local\Temp\TestMakeDirForFile_Negative672737582\001\testFile.test: The process cannot access the file because it is being used by another process.
FAIL
```
Solution:
Release resources occupied by os.Create.
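A minimal sketch of the fix (the test body here is illustrative, not the actual test):
```
import (
	"os"
	"path/filepath"
	"testing"

	"github.com/stretchr/testify/require"
)

func TestCreateFile(t *testing.T) {
	f, err := os.Create(filepath.Join(t.TempDir(), "testFile.test"))
	require.NoError(t, err)
	// Close the handle right away: Windows can't remove files that are
	// still open, so TempDir cleanup would fail otherwise.
	require.NoError(t, f.Close())
}
```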
Problem:
```
--- FAIL: TestMemCachedPersist (0.07s)
--- FAIL: TestMemCachedPersist/BoltDBStore (0.07s)
testing.go:894: TempDir RemoveAll cleanup: remove C:\Users\Anna\AppData\Local\Temp\TestMemCachedPersist_BoltDBStore294966711\001\test_bolt_db: The process cannot access the file because it is being used by another process.
```
Solution:
Release the resources occupied by the DB.
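Roughly (a sketch assuming the go.etcd.io/bbolt store used in the test):
```
db, err := bbolt.Open(filepath.Join(t.TempDir(), "test_bolt_db"), 0600, nil)
require.NoError(t, err)
// Release the DB file before the test ends, so that TempDir cleanup
// can remove it on Windows.
t.Cleanup(func() { require.NoError(t, db.Close()) })
```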
We use 2 prefixes for storing items because of state synchronization.
This commit allows parametrizing dao with the default prefix.
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
Copying just the scope doesn't work for CustomContracts, CustomGroups and
Rules because they all contain additional metadata. Thanks @mialbu for
reporting this.
b9be892bf9 has made Persist asynchronous, which is very effective in
allowing the system to continue processing blocks/transactions while
flushing things to disk. At the same time it is very dangerous: if the disk
is slow and it takes a lot of time to flush the KV set (more than the
persisting interval), there might be an even bigger new KV set in
MemCachedStore by the time it finishes. Even if the system immediately
starts to flush this new data set, it (being bigger) can take more time than
the previous one. And while doing so, a new data set will appear in memory,
potentially bigger again.
So we can easily end up with the system going out of control, consuming more
and more memory and taking more and more time to persist a single set of
data. To avoid this we need to detect such a condition and just wait for
Persist to really finish its job and release the resources.
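A sketch of the idea (names here are illustrative, not the actual code):
```
// persistLoop keeps at most one flush in flight; the next persist cycle
// waits for the previous one to complete instead of piling up changesets.
func (bc *Blockchain) persistLoop(interval time.Duration) {
	done := make(chan struct{}, 1)
	done <- struct{}{} // the first flush may start immediately
	for range time.Tick(interval) {
		<-done // block here if the previous flush is still running
		go func() {
			bc.dao.Persist() // flush the accumulated KV set to disk
			done <- struct{}{}
		}()
	}
}
```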
N3 contracts require a method and an array of arguments to function
correctly, but CreateFunctionInvocationScript can produce scripts that won't
work if anything other than a single array parameter is passed to it. This
doesn't make much sense, so we can always expect an array of parameters in
the third positional invokefunction argument (the same way the C# node
does).
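For illustration (the hash and method here are made up), the positional arguments are now always expected to look like this:
```
params := []interface{}{
	"0x50befd26fdf6e4d957c11e078b24ebce6291456f", // contract hash
	"someMethod",    // method name
	[]interface{}{}, // arguments: may be empty, but must be an array
}
```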
Everywhere it matters (and that's callExFromNative() now) it's incremented
already, so when we're doing Call() at the same time (and it's done to invoke
the `_initialize` method) we're effectively double-incrementing it.
Standards are NEP-11 and NEP-17, not NEP11, not NEP17, not anything
else. Variable/function names of course can use whatever fits, but documents
and comments should be consistent wrt this.
Oracle responses must use the same set of signers as oracle requests even
though the transaction itself is signed by oracle nodes/contract.
We can probably improve interop.Context by removing Tx field completely and
adding more functionality to Container, but it's not very convenient for
VerifyWitness and will require adding more stub-like methods for Block, so Tx
is used for now (and we do have it in every relevant case).
I don't think it's possible during regular service functioning, but it
happens in tests because of pointer reuse:
```
WARNING: DATA RACE
Read at 0x00c003a0e3f0 by goroutine 114:
github.com/nspcc-dev/neo-go/pkg/services/notary.(*Notary).verifyIncompleteWitnesses()
/home/runner/work/neo-go/neo-go/pkg/services/notary/notary.go:441 +0x1dc
github.com/nspcc-dev/neo-go/pkg/services/notary.(*Notary).OnNewRequest()
/home/runner/work/neo-go/neo-go/pkg/services/notary/notary.go:188 +0x205
github.com/nspcc-dev/neo-go/pkg/core.TestNotary.func11()
/home/runner/work/neo-go/neo-go/pkg/core/notary_test.go:347 +0x612
github.com/nspcc-dev/neo-go/pkg/core.TestNotary()
/home/runner/work/neo-go/neo-go/pkg/core/notary_test.go:443 +0xe33
testing.tRunner()
/opt/hostedtoolcache/go/1.16.10/x64/src/testing/testing.go:1193 +0x202
Previous write at 0x00c003a0e3f0 by goroutine 104:
github.com/nspcc-dev/neo-go/pkg/services/notary.(*Notary).finalize()
/home/runner/work/neo-go/neo-go/pkg/services/notary/notary.go:338 +0x50a
github.com/nspcc-dev/neo-go/pkg/services/notary.(*Notary).PostPersist()
/home/runner/work/neo-go/neo-go/pkg/services/notary/notary.go:314 +0x297
github.com/nspcc-dev/neo-go/pkg/services/notary.(*Notary).Run()
/home/runner/work/neo-go/neo-go/pkg/services/notary/notary.go:169 +0x4a7
```
Serializing/deserializing the payload yields this:
Error: Received unexpected error:
both main and fallback transactions should have the same ValidUntil value
CONVERT call base price is 8192. DUP, ISTYPE and JMPIF all cost 2, so we add
0.07% overhead in the worst case (6/8192 ≈ 0.07%) and save 99.93% otherwise.
At the same time, the old code was just two bytes and the new one is seven,
but I think it's tolerable considering how much GAS it can potentially save.
Fix #2250.
See neo-project/neo#2622. The implementation is somewhat asymmetric (and not
very efficient) for binary/JSON encoding/decoding, but it should be
sufficient.
Sometimes a user needs to construct a transaction manually, so it's better
to unify the nonce sources for auto-generated and manually-generated
transactions to avoid nonce collisions in tests.
Eventually this will be replaced by `pkg/neotest` invocations but for
now it allows us to remove NNS constants together with the tests.
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
Implementing a separate `Signer` interface is beneficial in multiple
ways:
1. Support both single and multiple transaction witnesses.
2. It should be easy to add a contract signer this way.
Tests should use accounts created with `NewAccount`, so hiding all the
details doesn't seem to be an issue.
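A rough sketch of what the interface can look like (the method set here is illustrative):
```
// Signer is anything that can provide a witness for a transaction: a
// simple account, a multisignature account or (eventually) a contract.
type Signer interface {
	// ScriptHash of the signer, used to fill transaction signers.
	ScriptHash() util.Uint160
	// SignTx adds a witness for this signer to the transaction.
	SignTx(magic netmode.Magic, tx *transaction.Transaction) error
}
```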
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
Use a circular buffer, which is a bit more appropriate. The problem is that
the priority queue accepts and stores equal items, which wastes memory even
in the normal usage scenario, but it's especially dangerous if the node is
stuck for some reason. In this case it'll accept the same blocks from peers
and put them into the queue again and again, leaking memory up to an OOM
condition.
Notice that the queue length calculation might be wrong in case the circular
buffer wraps, but that's not very likely to happen (usually blocks not coming
from the queue are added by consensus and it's not very fast in doing so).
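The gist of it (simplified):
```
// queue is a circular buffer of blocks indexed by height, so a block
// received twice lands in the same cell instead of a new queue entry.
type queue struct {
	blocks []*block.Block
	len    int // occupied cells; may be slightly off when the buffer wraps
}

func (q *queue) put(b *block.Block) {
	i := int(b.Index) % len(q.blocks)
	if q.blocks[i] == nil {
		q.len++
	}
	q.blocks[i] = b
}
```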
In application logs hashes are serialized as base64, so it is useful to be
able to convert them back to an address via `util convert`.
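Roughly what the conversion boils down to (the byte order and helpers here are an assumption):
```
b, err := base64.StdEncoding.DecodeString(s) // s comes from the log
if err == nil && len(b) == util.Uint160Size {
	u, _ := util.Uint160DecodeBytesBE(b)
	fmt.Println(address.Uint160ToString(u)) // N3 address form
}
```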
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
Set all the necessary context before file traversal, not only the import
maps. Also, we can skip restoring import maps because all our code is
processed via `For*` iterators which set the necessary context.
We can also refactor this a bit to have all the context in one place; this
will be done in #2086.
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
Notes for witnesses:
* [N sig + M multisig + K contract] combination is possible where N, M, K >= 0.
* Each verification script should be properly filled in.
* Each invocation script should either be empty or contain exactly one
signature.
Real persistent storage guarantees that the result of Seek is sorted by
keys. The idea of the optimisation is to merge the two sorted seek results
(memStore + persistentStore) into one, so that (*MemCachedStore).Seek
returns a sorted list. The only thing that remains is to sort the items we
get from (*MemoryStore).Seek.
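The merge itself is the usual two-pointer pass over two key-sorted slices (a sketch, KeyValue stands for whatever KV pair type Seek yields):
```
func mergeSeekResults(mem, ps []KeyValue) []KeyValue {
	res := make([]KeyValue, 0, len(mem)+len(ps))
	for len(mem) > 0 && len(ps) > 0 {
		switch cmp := bytes.Compare(mem[0].Key, ps[0].Key); {
		case cmp < 0:
			res = append(res, mem[0])
			mem = mem[1:]
		case cmp > 0:
			res = append(res, ps[0])
			ps = ps[1:]
		default: // same key: the in-memory value shadows the persisted one
			res = append(res, mem[0])
			mem, ps = mem[1:], ps[1:]
		}
	}
	return append(append(res, mem...), ps...)
}
```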
MemoryStore is used in a MemCachedStore as the persistent layer in tests.
Further commits assume that persistent storage returns sorted values from
Seek, so sort the result of MemoryStore.Seek.
Benchmark results for 10000 matching items in MemoryStore compared to
master:
name old time/op new time/op delta
MemorySeek-8 712µs ± 0% 3850µs ± 0% +440.52% (p=0.000 n=8+8)
name old alloc/op new alloc/op delta
MemorySeek-8 160kB ± 0% 2724kB ± 0% +1602.61% (p=0.000 n=10+8)
name old allocs/op new allocs/op delta
MemorySeek-8 10.0k ± 0% 10.0k ± 0% +0.24% (p=0.000 n=10+10)
For details on implementation efficiency see
https://github.com/nspcc-dev/neo-go/pull/2193#discussion_r722993358.
In N3 no arguments passed should be treated as an empty argument array, not
as a missing array of arguments, because the array must be present even for
functions that accept no parameters.
Our current algorithm marks a function as used if it is called at least
once, even if the calling function is itself unused. This commit implements
a more clever traversal to collect usage information more precisely.
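A hypothetical example of what the old algorithm got wrong:
```
func unusedEntry() { // not called from anywhere, dead
	helper()
}

// helper used to be marked as used because of the call above, although
// its only caller is itself dead; the new traversal drops both.
func helper() {}
```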
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
(*Billet).Traverse changes:
1. Get rid of the `offset` argument. We can cut `from` and pass just the
part that remains. This implies that the node with a path matching `from`
will also be included in the result, so an additional check needs to be
added to the callback function.
2. Pass `path` and `from` without the search prefix. Append the prefix to
the result inside the callback.
3. Remove duplicated code.
(*Trie).Find changes:
1. Properly prepare the `from` argument for the traversing function. It
closely depends on the `path` argument.
Because `MaxKeySize` is bigger than the integer size, we fail on the integer
cast while retrieving items from a map. SETITEM is not affected.
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
Close immediately after read/write. This can be a bit slower, but we store
everything in memory anyway and the code size is smaller.
The `wallet.Close()` method is now a no-op.
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
If a method is missing from the manifest, it is most likely a typo or a
regression after refactoring. There is no "turn-off" flag for this error
because the check can be done precisely.
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
If the wallet was opened via `NewWalletFromFile`, open it as read-only first
and re-open it for writing if needed.
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
Instead of flushing everything to `cache` and then to `bc.dao`, wrap `bc.dao`
directly for block/tx data and AERs and then flush to it. Block/transactions
are usually processed more quickly than other components, so they easily end
up in `cache` where they directly affect Seek performance for any executing
transaction.
Simple as it is, this change improves the voter NEO transfer benchmark with
1000 accounts by more than 25%, from ~18500 TPS to ~23500 TPS. It doesn't
affect other cases much.
GAS can only be distributed once per block for a particular address, so it
makes little sense trying to calculate it again and again. This fixes
neo-bench for the NEO voter case: without it we get ~2500 TPS for the
single-address test, and with it, it jumps 13-fold to normal values like
~33500.
We need to store the NEO balance's LastUpdateHeight before the GAS mint,
because the mint can call onNEP17Payment, and onNEP17Payment can call NEO
transfer, which also calls GAS mint. Storing the balance height first allows
avoiding this recursion.
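Schematically (field and method names here are illustrative):
```
acc.Balance = newBalance
acc.BalanceHeight = ic.Block.Index // LastUpdateHeight is stored first
// A re-entrant NEO transfer from onNEP17Payment now computes zero
// unclaimed GAS for this account instead of recursing into another mint.
n.GAS.mint(ic, addr, gasToDistribute)
```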
`StateRootInHeader` is duplicated similarly to `Network`.
It will be removed in the future as it is surely a protocol parameter.
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
CLI:
* Typos are fixed
* Documentation on NEP-11 tokens is added
* NeoGo node configuration is moved to a separate file
Compiler:
* Typos and indentations are fixed
* Ops dump example is updated
Consensus:
* Typos are fixed
* Links are fixed
Notifications:
* Minor adjustments
RPC:
* `getversion` response is updated
* `getunclaimedgas` comment is removed (not valid since
https://github.com/neo-project/neo-modules/pull/243)
VM:
* Update help message
* `load*` command adjustments
* `astack` command removal
We need to copy the result of the `TryGet` method, otherwise the slice can
be modified inside the `Add` or `Update` methods, which leads to an
inconsistent MPT pool state.
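The fix is a plain defensive copy (a sketch):
```
v := p.TryGet(key)
// Return a copy: Add/Update may modify the slice in place, while the
// original must stay intact in the pool.
cp := make([]byte, len(v))
copy(cp, v)
return cp
```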
We need several stages to manage the state jump process in order not to mess
up old and new contract storage items and to be sure that genesis state data
is properly removed from the storage. Other operations do not require a
separate stage and can be performed each time `jumpToStateInternal` is
called.
We don't need this method to be exposed, its only user is the StateSync
module. At the same time the StateSync module manages its state by itself,
which guarantees that (*Blockchain).jumpToState will be called with the
proper StateSync stage.
A state jump should be an atomic operation, we can't modify contract storage
items' state on-the-fly. Thus, store fresh items under a temp prefix and
replace the outdated ones after state sync is completed.
Related
https://github.com/nspcc-dev/neo-go/pull/2019#discussion_r693350460.
Before the state sync process can be started, outdated MPT nodes should be
removed from the storage. After state sync is completed, outdated
blocks/transactions/AERs should also be removed.
In this commit:
1. Request unknown MPT nodes from peers. Note that the StateSync module
itself shouldn't be responsible for node requests, that's a server duty.
2. Do not request the same node twice; check if it is in the storage
already. If so, then the only thing remaining is to update the refcounter.
The MPT restore process is much simpler than regular MPT maintenance: the
trie has a fixed structure, we don't need to remove or rebuild MPT nodes.
The only thing we should do is to replace Hash nodes with their unhashed
counterparts and increment the refcount. It's better not to touch the
regular MPT code and to create a separate structure for this.
The C# node does not return an empty proof anymore if the path is bad. The
C# node also throws an exception on a bad Put.
Our node does not return an error on delete if the key is empty.
Allow it for (*Trie).Put. And distinguish between an empty value and a nil
value for (*Trie).PutBatch, because the batch is already capable of handling
both. For (*Trie).PutBatch putting a nil value means deletion, while putting
an empty value means putting a LeafNode with an empty value.
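Illustrating the distinction (the exact batch API shape is an assumption):
```
tr.Put(key, []byte{}) // now allowed: puts a LeafNode with an empty value
b.Put(key, nil)       // in a batch, nil still means "delete the key"
b.Put(key, []byte{})  // while an empty value puts an empty LeafNode
```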
Attempts to reuse elliptic.Unmarshal() and elliptic.UnmarshalCompressed() lead
to this:
name old time/op new time/op delta
PublicDecodeBytes-8 59.5µs ± 2% 61.8µs ± 1% +3.78% (p=0.000 n=10+9)
name old alloc/op new alloc/op delta
PublicDecodeBytes-8 3.99kB ± 0% 4.27kB ± 0% +6.81% (p=0.000 n=9+10)
name old allocs/op new allocs/op delta
PublicDecodeBytes-8 136 ± 0% 135 ± 0% -0.74% (p=0.000 n=10+10)
So it makes no sense. Refs. #1319.
Go 1.15 provides a native (*ecdsa.PublicKey).Equal method, but we can't drop
our own Equal because the types are different and there is still code using
our Equal (forcing it to convert types is counterproductive), while changing
(*PublicKey).Equal to use (*ecdsa.PublicKey).Equal internally with some kind
of
(*ecdsa.PublicKey)(p).Equal((*ecdsa.PublicKey)(key))
slows it down:
name old time/op new time/op delta
PublicEqual-8 14.9ns ± 1% 18.4ns ± 2% +23.55% (p=0.000 n=9+10)
name old alloc/op new alloc/op delta
PublicEqual-8 0.00B 0.00B ~ (all equal)
name old allocs/op new allocs/op delta
PublicEqual-8 0.00 0.00 ~ (all equal)
So leave it as is, but add this micro-bench. Refs. #1319.
Similar to c69670c85b, this allows eliminating one
allocation and reducing the memory footprint a bit (tested on tx decoding):
name old time/op new time/op delta
DecodeFromBytes-8 1.78µs ± 3% 1.79µs ± 2% ~ (p=1.000 n=10+10)
name old alloc/op new alloc/op delta
DecodeFromBytes-8 888B ± 0% 800B ± 0% -9.91% (p=0.000 n=10+10)
name old allocs/op new allocs/op delta
DecodeFromBytes-8 11.0 ± 0% 10.0 ± 0% -9.09% (p=0.000 n=10+10)
Doesn't change much, but still simpler.
name old time/op new time/op delta
SerializeSimple-8 452ns ±10% 435ns ± 4% ~ (p=0.356 n=10+9)
name old alloc/op new alloc/op delta
SerializeSimple-8 432B ± 0% 432B ± 0% ~ (all equal)
name old allocs/op new allocs/op delta
SerializeSimple-8 7.00 ± 0% 7.00 ± 0% ~ (all equal)
Functions are usually immediately replaced (and it's OK for them to be nil,
searching through an array of length zero is fine), while Notifications are
usually appended to (and are absolutely useless in verification contexts).
* both 'to' and 'from' are either Null or Hash160, there is no other
possibility for valid NEP-17. So returning util.Uint160{} in case of a
parsing error is wrong.
* but this is what allowed burns/mints to work, at the expense of an error
allocation inside of util.Uint160DecodeBytesBE()
* Uint160 can technically fit into a regular VM integer, so even though it'd
be quite surprising to see it there, TryBytes() is more correct (and
easier!) to use
* same thing with `amount`: we have `TryInteger()` that easily covers all
possible cases and does appropriate error checking inside
Squash (*DAO).StoreAsTransaction and
(*DAO).StoreConflictingTransactions. It's better to keep them this way,
because StoreAsTransaction is always followed by
StoreConflictingTransactions, so it's an atomic operation.
The logic wasn't changed.
It is used a lot in clients (including our benchmark).
`Uint160` is already optimized.
```
name old time/op new time/op delta
Uint256DecodeStringLE-8 150ns ±15% 112ns ± 3% -25.23% (p=0.000 n=10+10)
name old alloc/op new alloc/op delta
Uint256DecodeStringLE-8 96.0B ± 0% 64.0B ± 0% -33.33% (p=0.000 n=10+10)
name old allocs/op new allocs/op delta
Uint256DecodeStringLE-8 2.00 ± 0% 1.00 ± 0% -50.00% (p=0.000 n=10+10)
```
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
We're using batches in the wrong way during persist: we already have all
changes accumulated in two maps, we then move them to a batch and only then
apply it. For some DBs like BoltDB this batch is just another MemoryStore,
so we essentially just shuffle the changeset from one map to another; for
others like LevelDB the batch is just a serialized set of KV pairs, and it
doesn't help much on the subsequent PutBatch, we just duplicate the
changeset again.
So introduce PutChangeSet that takes the two maps with sets and deletes
directly. It also allows simplifying MemCachedStore logic.
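The new method takes the accumulated maps as-is (the signature here is a sketch):
```
// PutChangeSet persists the accumulated puts and deletes in one call,
// with no intermediate Batch to copy them into.
type ChangeSetPersister interface {
	PutChangeSet(puts map[string][]byte, dels map[string]bool) error
}
```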
neo-bench for single node with 10 workers, LevelDB:
Reference:
RPS 30189.132 30556.448 30390.482 ≈ 30379 ± 0.61%
TPS 29427.344 29418.687 29434.273 ≈ 29427 ± 0.03%
CPU % 33.304 27.179 33.860 ≈ 31.45 ± 11.79%
Mem MB 800.677 798.389 715.042 ≈ 771 ± 6.33%
Patched:
RPS 30264.326 30386.364 30166.231 ≈ 30272 ± 0.36% ⇅
TPS 29444.673 29407.440 29452.478 ≈ 29435 ± 0.08% ⇅
CPU % 34.012 32.597 33.467 ≈ 33.36 ± 2.14% ⇅
Mem MB 549.126 523.656 517.684 ≈ 530 ± 3.15% ↓ 31.26%
BoltDB:
Reference:
RPS 31937.647 31551.684 31850.408 ≈ 31780 ± 0.64%
TPS 31292.049 30368.368 31307.724 ≈ 30989 ± 1.74%
CPU % 33.792 22.339 35.887 ≈ 30.67 ± 23.78%
Mem MB 1271.687 1254.472 1215.639 ≈ 1247 ± 2.30%
Patched:
RPS 31746.818 30859.485 31689.761 ≈ 31432 ± 1.58% ⇅
TPS 31271.499 30340.726 30342.568 ≈ 30652 ± 1.75% ⇅
CPU % 34.611 34.414 31.553 ≈ 33.53 ± 5.11% ⇅
Mem MB 1262.960 1231.389 1335.569 ≈ 1277 ± 4.18% ⇅
The VM always has an istack and it doesn't even change, so doing this
microallocation makes no sense. Notice that the estack is a bit harder to
change: we do replace it in some cases and we compare pointers to it as
well.
It requires only two methods from Blockchainer: AddBlock and BlockHeight.
The new interface will allow reusing the block queue easily for state
exchange purposes.
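Roughly (the interface name here is made up):
```
// Blockqueuer is everything the block queue needs from the chain.
type Blockqueuer interface {
	AddBlock(block *block.Block) error
	BlockHeight() uint32
}
```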
Do not allocate a separate buffer for the transfer.
```
name old time/op new time/op delta
NEP17TransferLog_Append-8 58.8µs ± 3% 32.1µs ± 1% -45.40% (p=0.000 n=10+9)
name old alloc/op new alloc/op delta
NEP17TransferLog_Append-8 118kB ± 1% 44kB ± 3% -63.00% (p=0.000 n=9+10)
name old allocs/op new allocs/op delta
NEP17TransferLog_Append-8 901 ± 1% 513 ± 3% -43.08% (p=0.000 n=9+8)
```
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
Refactor the code and be fine with sending to just 2/3 of the proper peers.
Previously it was an edge case, but it can also be a normal thing to do, as
broadcasting to everyone is obviously too expensive and excessive (hi, #608).
Baseline (four nodes, 10 workers):
RPS 8180.760 8137.822 7858.358 7820.011 8051.076 ≈ 8010 ± 2.04%
TPS 7819.831 7521.172 7519.023 7242.965 7426.000 ≈ 7506 ± 2.78%
CPU % 41.983 38.775 40.606 39.375 35.537 ≈ 39.3 ± 6.15%
Mem MB 2947.189 2743.658 2896.688 2813.276 2863.108 ≈ 2853 ± 2.74%
Patched:
RPS 9714.567 9676.102 9358.609 9371.408 9301.372 ≈ 9484 ± 2.05% ↑ 18.40%
TPS 8809.796 8796.854 8534.754 8661.158 8426.162 ≈ 8646 ± 1.92% ↑ 15.19%
CPU % 44.980 45.018 33.640 29.645 43.830 ≈ 39.4 ± 18.41% ↑ 0.25%
Mem MB 2989.078 2976.577 2306.185 2351.929 2910.479 ≈ 2707 ± 12.80% ↓ 5.12%
There is a nuance with this patch, however. While typically it works the way
outlined above, sometimes it works like this:
RPS ≈ 6734.368
TPS ≈ 6299.332
CPU ≈ 25.552%
Mem ≈ 2706.046MB
And that's because the log looks like this:
DeltaTime, TransactionsCount, TPS
5014, 44212, 8817.710
5163, 49690, 9624.249
5166, 49523, 9586.334
5189, 49693, 9576.604
5198, 49339, 9491.920
5147, 49559, 9628.716
5192, 49680, 9568.567
5163, 49750, 9635.871
5183, 49189, 9490.450
5159, 49653, 9624.540
5167, 47945, 9279.079
5179, 2051, 396.022
5015, 4, 0.798
5004, 0, 0.000
5003, 0, 0.000
5003, 0, 0.000
5003, 0, 0.000
5003, 0, 0.000
5004, 0, 0.000
5003, 2925, 584.649
5040, 49099, 9741.865
5161, 49718, 9633.404
5170, 49228, 9521.857
5179, 49773, 9610.543
5167, 47253, 9145.152
5202, 49788, 9570.934
5177, 47704, 9214.603
5209, 46610, 8947.975
5249, 49156, 9364.831
5163, 18284, 3541.352
5072, 174, 34.306
On a network with 4 CNs and 1 RPC node there is a 1/256 probability that a
block won't be broadcast to the RPC node, so it won't see it until the ping
timeout kicks in. While it doesn't see a block it can't accept new incoming
transactions, so the bench basically gets stuck. To me that's an acceptable
trade-off, because normal networks are much larger than that and the effect
of this patch is way more important there, but still that's what we have and
we need to take it into account.
send() can return errStateMismatch, errGone and errBusy. errGone means the
peer is dead and it won't ever be active again, so it makes no sense
retrying sends to it. errStateMismatch is technically "not yet ready", but
we can't wait for it either, no one knows how long it will take to complete
the handshake. So only errBusy means we can retry.
So keep track of dead peers and adjust the tries counting appropriately.
It doesn't change much, we can't magically get more valid peers and if some
die while we're iterating we'd detect that by an error returned from send().
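In a sketch (only the error handling shape matters here):
```
var (
	sent, dead int
	busy       []peer // the only peers worth retrying
)
for _, p := range peers {
	switch err := p.send(pkt); {
	case err == nil:
		sent++
	case errors.Is(err, errBusy):
		busy = append(busy, p) // transient, retry later
	default: // errGone or errStateMismatch
		dead++ // exclude from the tries counting, retrying won't help
	}
}
```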
When a transaction spreads through the network, many nodes are likely to get
it at roughly the same time, and they will also rebroadcast it at roughly
the same time. As we have a number of peers, it's quite likely that we'd get
an Inv with the same transaction from multiple peers simultaneously. We will
ask them for this transaction (independently!) and again we're likely to get
it at roughly the same time. So we can easily end up with multiple threads
processing the same transaction. Only one will succeed, but we can actually
easily avoid doing it in the first place, saving some CPU cycles for other
things.
Notice that we can't do it _before_ receiving a transaction, because nothing
guarantees that the peer will respond to our transaction request, so the
communication overhead is unavoidable at the moment, but saving on
processing already gives quite interesting results.
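A sketch of the deduplication (names here are made up):
```
var txInFlight sync.Map // util.Uint256 -> struct{}

func (s *Server) handleTx(tx *transaction.Transaction) error {
	h := tx.Hash()
	if _, loaded := txInFlight.LoadOrStore(h, struct{}{}); loaded {
		return nil // some other goroutine is already processing it
	}
	defer txInFlight.Delete(h)
	return s.chain.PoolTx(tx) // verification/mempooling happens only once
}
```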
Baseline, four nodes with 10 workers:
RPS 7176.784 7014.511 6139.663 7191.280 7080.852 ≈ 6921 ± 5.72%
TPS 6945.409 6562.756 5927.050 6681.187 6821.794 ≈ 6588 ± 5.38%
CPU % 44.400 43.842 40.418 49.211 49.370 ≈ 45.4 ± 7.53%
Mem MB 2693.414 2640.602 2472.007 2731.482 2707.879 ≈ 2649 ± 3.53%
Patched:
RPS 7791.675 7996.559 7834.504 7746.705 7891.614 ≈ 7852 ± 1.10% ↑ 13.45%
TPS 7241.497 7711.765 7520.211 7425.890 7334.443 ≈ 7447 ± 2.17% ↑ 13.04%
CPU % 29.853 39.936 39.945 36.371 39.999 ≈ 37.2 ± 10.57% ↓ 18.06%
Mem MB 2749.635 2791.609 2828.610 2910.431 2863.344 ≈ 2829 ± 1.97% ↑ 6.80%
Most of the time on a healthy network we see new transactions appearing that
are not present in the mempool. Once they get into the mempool, we don't ask
for them again when some other peer sends an Inv with them. Then these
transactions are usually added into a block, removed from the mempool, and
no one actually sends them to us again. Some stale nodes can do that, but
it's not very likely to happen.
At the receiving end, at the same time, it's quite expensive to do a full
chain HasTransaction() query, so if we can avoid doing it, it's always good.
Here it technically allows resending an old transaction that will be
re-requested, and an attempt to add it to the mempool will be made. But
it'll inevitably fail, because the same HasTransaction() check is done there
too. One can try to maliciously flood the node with stale transactions, but
that doesn't differ from flooding it with any other invalid transactions, so
no new attack vector is added.
Baseline, 4 nodes with 10 workers:
RPS 6902.296 6465.662 6856.044 6785.515 6157.024 ≈ 6633 ± 4.26%
TPS 6468.431 6218.867 6610.565 6288.596 5790.556 ≈ 6275 ± 4.44%
CPU % 50.231 42.925 49.481 48.396 42.662 ≈ 46.7 ± 7.01%
Mem MB 2856.841 2684.103 2756.195 2733.485 2422.787 ≈ 2691 ± 5.40%
Patched:
RPS 7176.784 7014.511 6139.663 7191.280 7080.852 ≈ 6921 ± 5.72% ↑ 4.34%
TPS 6945.409 6562.756 5927.050 6681.187 6821.794 ≈ 6588 ± 5.38% ↑ 4.99%
CPU % 44.400 43.842 40.418 49.211 49.370 ≈ 45.4 ± 7.53% ↓ 2.78%
Mem MB 2693.414 2640.602 2472.007 2731.482 2707.879 ≈ 2649 ± 3.53% ↓ 1.56%
Network communication takes time. Handling some messages (like a
transaction) also takes time. We can share this time by making the handler a
separate goroutine, so while a message is being handled the receiver can
already get and parse the next one.
It doesn't improve the metrics a lot, but I still think it makes sense, and
in some scenarios this can be even more beneficial than these numbers
suggest.
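Schematically (a sketch, p.incoming is an illustrative queue filled by the receiver loop):
```
go func() {
	for msg := range p.incoming { // parsed by the receiver loop
		if err := s.handleMessage(p, msg); err != nil {
			p.Disconnect(err)
			return
		}
	}
}()
```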
e41fc2fd1b, 4 nodes, 10 workers
RPS 6732.979 6396.160 6759.624 6246.398 6589.841 ≈ 6545 ± 3.02%
TPS 6491.062 5984.190 6275.652 5867.477 6360.797 ≈ 6196 ± 3.77%
CPU % 42.053 43.515 44.768 40.344 44.112 ≈ 43.0 ± 3.69%
Mem MB 2564.130 2744.236 2636.267 2589.505 2765.926 ≈ 2660 ± 3.06%
Patched:
RPS 6902.296 6465.662 6856.044 6785.515 6157.024 ≈ 6633 ± 4.26% ↑ 1.34%
TPS 6468.431 6218.867 6610.565 6288.596 5790.556 ≈ 6275 ± 4.44% ↑ 1.28%
CPU % 50.231 42.925 49.481 48.396 42.662 ≈ 46.7 ± 7.01% ↑ 8.60%
Mem MB 2856.841 2684.103 2756.195 2733.485 2422.787 ≈ 2691 ± 5.40% ↑ 1.17%
We use them quite frequently (consider the children of a new branch node)
and it is better to get rid of unneeded allocations.
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
`WriteArray` involves reflection, so it makes sense to optimize the
serialization of transactions and application logs, which are serialized
constantly. Adding a case to the type switch in `WriteArray` is not an
option, because we don't want new dependencies for the `io` package.
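The reflection-free variant is just an explicit loop (a sketch using application log events as an example):
```
// Instead of w.WriteArray(aer.Events), which goes through reflection:
w.WriteVarUint(uint64(len(aer.Events)))
for i := range aer.Events {
	aer.Events[i].EncodeBinary(w)
}
```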
```
name old time/op new time/op delta
AppExecResult_EncodeBinary-8 852ns ± 3% 656ns ± 2% -22.94% (p=0.000 n=10+9)
name old alloc/op new alloc/op delta
AppExecResult_EncodeBinary-8 448B ± 0% 376B ± 0% -16.07% (p=0.000 n=10+10)
name old allocs/op new allocs/op delta
AppExecResult_EncodeBinary-8 7.00 ± 0% 5.00 ± 0% -28.57% (p=0.000 n=10+10)
```
```
name old time/op new time/op delta
Transaction_Bytes-8 1.29µs ± 3% 0.76µs ± 5% -41.52% (p=0.000 n=9+10)
name old alloc/op new alloc/op delta
Transaction_Bytes-8 1.21kB ± 0% 1.01kB ± 0% -16.56% (p=0.000 n=10+10)
name old allocs/op new allocs/op delta
Transaction_Bytes-8 12.0 ± 0% 7.0 ± 0% -41.67% (p=0.000 n=10+10)
```
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
We know it already, but with the current loading code the VM will hash it
once more. It doesn't help a lot, but avoiding this overhead costs nothing.
name old time/op new time/op delta
VerifyWitness-8 93.4µs ± 3% 92.7µs ± 2% ~ (p=0.353 n=10+10)
name old alloc/op new alloc/op delta
VerifyWitness-8 3.43kB ± 0% 3.40kB ± 0% -0.70% (p=0.000 n=9+9)
name old allocs/op new allocs/op delta
VerifyWitness-8 67.0 ± 0% 66.0 ± 0% -1.49% (p=0.000 n=10+10)