Commit graph

3824 commits

Author SHA1 Message Date
Roman Khimov
56dd7b7364
Merge pull request #2177 from nspcc-dev/fix-lint
Replace golint with revive
2021-09-15 17:59:04 +03:00
Roman Khimov
68af14100c
Merge pull request #2174 from nspcc-dev/states-diff_testnet_284177
smartcontract: escape non-ascii characters for manifest.Extra SI
2021-09-15 15:07:25 +03:00
Anna Shaleva
dfc0b25cfe gomod: use nspcc-dev's fork of go-ordered-json
Escape non-ASCII characters while JSON encoding.
2021-09-15 15:01:01 +03:00
Roman Khimov
1480e29548
Merge pull request #2178 from nspcc-dev/fix-oracle-unsupported-code
transaction: fix ContentTypeNotSupported oracle code processing
2021-09-14 18:01:40 +03:00
Roman Khimov
24a3cce1ca
Merge pull request #2169 from nspcc-dev/states-diff_mainnet_131795
core: allow transfer 0 GAS/NEO with zero balance
2021-09-14 17:30:41 +03:00
Roman Khimov
8a440a4016 transaction: fix ContentTypeNotSupported oracle code processing
Fix testnet block 311487 processing and synchronization errors:
  2021-09-14T15:18:53.611+0300    WARN    peer disconnected       {"addr": "20.198.226.132:20333", "reason": "invalid oracle response code", "peerCount": 10}

Fix 8e9302f40b.
2021-09-14 15:18:38 +03:00
Evgeniy Stratonikov
176b61e317 *: fix linter issues
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-09-14 14:39:39 +03:00
Evgeniy Stratonikov
918d7e65bf smartcontract: unmarshal null values properly
First we unmarshal `null` to `[]byte`, then we marshal it to the empty
string.
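
For illustration only (this is a plain encoding/json sketch, not the actual
smartcontract parameter code): JSON `null` unmarshalled into a `[]byte` leaves
the slice nil, and a nil byte slice then converts to the empty string.

```
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var b []byte
	// Unmarshalling JSON null leaves the destination slice as nil.
	if err := json.Unmarshal([]byte(`null`), &b); err != nil {
		panic(err)
	}
	// A nil []byte converts to the empty string.
	fmt.Printf("%q\n", string(b)) // ""
}
```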

Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-09-14 13:28:01 +03:00
Roman Khimov
5209fe1e09
Merge pull request #2173 from nspcc-dev/fix-race
network: fix race in StateSync module tests
2021-09-14 11:57:21 +03:00
Roman Khimov
621478296c
Merge pull request #2161 from nspcc-dev/rpc-get-version
rpc: return protocol parameters in `getversion`, fix #2160
2021-09-13 19:10:43 +03:00
Anna Shaleva
6357af0bb0 network: fix race in TestHandleGetMPTData
Initialize the server config before starting the server. This fixes the following data race:

```
WARNING: DATA RACE
Write at 0x00c00032ef20 by goroutine 26:
  github.com/nspcc-dev/neo-go/pkg/network.TestHandleGetMPTData.func2()
      /go/src/github.com/nspcc-dev/neo-go/pkg/network/server_test.go:755 +0x10a
  testing.tRunner()
      /usr/local/go/src/testing/testing.go:1193 +0x202

Previous read at 0x00c00032ef20 by goroutine 24:
  github.com/nspcc-dev/neo-go/internal/fakechain.(*FakeChain).GetConfig()
      /go/src/github.com/nspcc-dev/neo-go/internal/fakechain/fakechain.go:167 +0x6f
  github.com/nspcc-dev/neo-go/pkg/network.(*Server).initStaleMemPools()
      /go/src/github.com/nspcc-dev/neo-go/pkg/network/server.go:1433 +0x89
  github.com/nspcc-dev/neo-go/pkg/network.(*Server).Start()
      /go/src/github.com/nspcc-dev/neo-go/pkg/network/server.go:284 +0x288
  github.com/nspcc-dev/neo-go/pkg/network.startWithChannel.func1()
      /go/src/github.com/nspcc-dev/neo-go/pkg/network/server_test.go:91 +0x44

Goroutine 26 (running) created at:
  testing.(*T).Run()
      /usr/local/go/src/testing/testing.go:1238 +0x5d7
  github.com/nspcc-dev/neo-go/pkg/network.TestHandleGetMPTData()
      /go/src/github.com/nspcc-dev/neo-go/pkg/network/server_test.go:752 +0x8c
  testing.tRunner()
      /usr/local/go/src/testing/testing.go:1193 +0x202

Goroutine 24 (running) created at:
  github.com/nspcc-dev/neo-go/pkg/network.startWithChannel()
      /go/src/github.com/nspcc-dev/neo-go/pkg/network/server_test.go:90 +0x78
  github.com/nspcc-dev/neo-go/pkg/network.startTestServer()
      /go/src/github.com/nspcc-dev/neo-go/pkg/network/server_test.go:384 +0xbd
  github.com/nspcc-dev/neo-go/pkg/network.TestHandleGetMPTData.func2()
      /go/src/github.com/nspcc-dev/neo-go/pkg/network/server_test.go:753 +0x55
  testing.tRunner()
      /usr/local/go/src/testing/testing.go:1193 +0x202
```
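
The general shape of the fix, as a sketch (the types and field names are
illustrative, not the actual test code): anything the server goroutine reads
must be written before Start() spawns that goroutine, otherwise the write
races with the read.

```
package main

// Config and Server only model the ordering problem from the race above.
type Config struct{ OracleCfg bool }

type Server struct{ cfg Config }

func (s *Server) Start(done chan struct{}) {
	go func() {
		_ = s.cfg // the spawned goroutine reads the config here
		close(done)
	}()
}

func main() {
	s := &Server{}
	s.cfg = Config{OracleCfg: true} // initialize BEFORE Start(), never after
	done := make(chan struct{})
	s.Start(done)
	<-done
}
```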
2021-09-13 11:45:48 +03:00
Anna Shaleva
29ef076f4b network: fix race in TestTryInitStateSync
Register peers properly. Fixes the following data race:
```
Read at 0x00c001184ac8 by goroutine 116:
  github.com/nspcc-dev/neo-go/pkg/network.(*localPeer).EnqueueHPPacket()
      /go/src/github.com/nspcc-dev/neo-go/pkg/network/helper_test.go:127 +0x1f2
  github.com/nspcc-dev/neo-go/pkg/network.(*localPeer).EnqueuePacket()
      /go/src/github.com/nspcc-dev/neo-go/pkg/network/helper_test.go:114 +0xac
  github.com/nspcc-dev/neo-go/pkg/network.(*localPeer).EnqueueMessage()
      /go/src/github.com/nspcc-dev/neo-go/pkg/network/helper_test.go:111 +0xc1
  github.com/nspcc-dev/neo-go/pkg/network.(*localPeer).SendPing()
      /go/src/github.com/nspcc-dev/neo-go/pkg/network/helper_test.go:159 +0x88
  github.com/nspcc-dev/neo-go/pkg/network.(*Server).runProto()
      /go/src/github.com/nspcc-dev/neo-go/pkg/network/server.go:446 +0x409

Previous write at 0x00c001184ac8 by goroutine 102:
  github.com/nspcc-dev/neo-go/pkg/network.newLocalPeer()
      /go/src/github.com/nspcc-dev/neo-go/pkg/network/helper_test.go:83 +0x476
  github.com/nspcc-dev/neo-go/pkg/network.TestTryInitStateSync.func3()
      /go/src/github.com/nspcc-dev/neo-go/pkg/network/server_test.go:1064 +0x40f
  testing.tRunner()
      /usr/local/go/src/testing/testing.go:1123 +0x202

Goroutine 116 (running) created at:
  github.com/nspcc-dev/neo-go/pkg/network.(*Server).run()
      /go/src/github.com/nspcc-dev/neo-go/pkg/network/server.go:358 +0x69
  github.com/nspcc-dev/neo-go/pkg/network.(*Server).Start()
      /go/src/github.com/nspcc-dev/neo-go/pkg/network/server.go:292 +0x488
  github.com/nspcc-dev/neo-go/pkg/network.startWithChannel.func1()
      /go/src/github.com/nspcc-dev/neo-go/pkg/network/server_test.go:91 +0x44

Goroutine 102 (running) created at:
  testing.(*T).Run()
      /usr/local/go/src/testing/testing.go:1168 +0x5bb
  github.com/nspcc-dev/neo-go/pkg/network.TestTryInitStateSync()
      /go/src/github.com/nspcc-dev/neo-go/pkg/network/server_test.go:1056 +0xbb
  testing.tRunner()
      /usr/local/go/src/testing/testing.go:1123 +0x202
```
2021-09-13 11:45:48 +03:00
Evgeniy Stratonikov
8a3e05096b *: gofmt -s
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-09-10 21:49:11 +03:00
Evgeniy Stratonikov
c465b18cb2 rpc: return protocol parameters in getversion, fix #2160
`StateRootInHeader` is duplicated similarly to `Network`.
It will be removed in the future as it is surely a protocol parameter.

Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-09-10 21:45:59 +03:00
Roman Khimov
63e00ac128
Merge pull request #2166 from nspcc-dev/fix-nns-compat
Fix NNS compatibility
2021-09-10 18:10:17 +03:00
Anna Shaleva
7fc57c9d58 core: allow transfer 0 GAS/NEO with zero balance
This commit fixes the state diff at mainnet block 131795.

Transaction:
```
NEO-GO-VM > loadbase64 DAAQDBSPsxdYh6cITC3gUKI4oWmYxJs49gwUj7MXWIenCEwt4FCiOKFpmMSbOPYUwB8MCHRyYW5zZmVyDBT1Y+pAvCg9TQ4FxI6jBbPyoHNA70FifVtSOQwAEQwUj7MXWIenCEwt4FCiOKFpmMSbOPYMFL1Mb4Fqp6gHiEwzM6xSc8fLS+RpFMAfDAh0cmFuc2ZlcgwU9WPqQLwoPU0OBcSOowWz8qBzQO9BYn1bUjk=
READY: loaded 176 instructions
NEO-GO-VM 0 > ops
INDEX    OPCODE       PARAMETER
0        PUSHDATA1     ("")                                       <<
2        PUSH0
3        PUSHDATA1    8fb3175887a7084c2de050a238a16998c49b38f6
25       PUSHDATA1    8fb3175887a7084c2de050a238a16998c49b38f6
47       PUSH4
48       PACK
49       PUSH15
50       PUSHDATA1    7472616e73666572 ("transfer")
60       PUSHDATA1    f563ea40bc283d4d0e05c48ea305b3f2a07340ef    // NEO token
82       SYSCALL      System.Contract.Call (627d5b52)
87       ASSERT
88       PUSHDATA1     ("")
90       PUSH1
91       PUSHDATA1    8fb3175887a7084c2de050a238a16998c49b38f6
113      PUSHDATA1    bd4c6f816aa7a807884c3333ac5273c7cb4be469
135      PUSH4
136      PACK
137      PUSH15
138      PUSHDATA1    7472616e73666572 ("transfer")
148      PUSHDATA1    f563ea40bc283d4d0e05c48ea305b3f2a07340ef    // NEO token
170      SYSCALL      System.Contract.Call (627d5b52)
175      ASSERT

```

Go's applog:
```
{
   "id" : 1,
   "result" : {
      "txid" : "0x97d2ccb01467b22c73a2cb95f7af298f3a5bd8c849d7044371898b8efecdaabd",
      "executions" : [
         {
            "exception" : "at instruction 87 (ASSERT): ASSERT failed",
            "stack" : [],
            "gasconsumed" : "4988995",
            "notifications" : [],
            "trigger" : "Application",
            "vmstate" : "FAULT"
         }
      ]
   },
   "jsonrpc" : "2.0"
}
```

C#'s applog:
```
{
   "jsonrpc" : "2.0",
   "result" : {
      "executions" : [
         {
            "stack" : [],
            "notifications" : [
               {
                  "contract" : "0xef4073a0f2b305a38ec4050e4d3d28bc40ea63f5",
                  "state" : {
                     "type" : "Array",
                     "value" : [
                        {
                           "type" : "ByteString",
                           "value" : "j7MXWIenCEwt4FCiOKFpmMSbOPY="
                        },
                        {
                           "type" : "ByteString",
                           "value" : "j7MXWIenCEwt4FCiOKFpmMSbOPY="
                        },
                        {
                           "value" : "0",
                           "type" : "Integer"
                        }
                     ]
                  },
                  "eventname" : "Transfer"
               },
               {
                  "contract" : "0xd2a4cff31913016155e38e474a2c06d08be276cf",
                  "state" : {
                     "value" : [
                        {
                           "type" : "Any"
                        },
                        {
                           "type" : "ByteString",
                           "value" : "vUxvgWqnqAeITDMzrFJzx8tL5Gk="
                        },
                        {
                           "value" : "2490",
                           "type" : "Integer"
                        }
                     ],
                     "type" : "Array"
                  },
                  "eventname" : "Transfer"
               },
               {
                  "contract" : "0xef4073a0f2b305a38ec4050e4d3d28bc40ea63f5",
                  "state" : {
                     "value" : [
                        {
                           "value" : "vUxvgWqnqAeITDMzrFJzx8tL5Gk=",
                           "type" : "ByteString"
                        },
                        {
                           "value" : "j7MXWIenCEwt4FCiOKFpmMSbOPY=",
                           "type" : "ByteString"
                        },
                        {
                           "value" : "1",
                           "type" : "Integer"
                        }
                     ],
                     "type" : "Array"
                  },
                  "eventname" : "Transfer"
               }
            ],
            "vmstate" : "HALT",
            "gasconsumed" : "9977990",
            "trigger" : "Application",
            "exception" : null
         }
      ],
      "txid" : "0x97d2ccb01467b22c73a2cb95f7af298f3a5bd8c849d7044371898b8efecdaabd"
   },
   "id" : 1
}

```
2021-09-10 17:18:09 +03:00
Roman Khimov
aaccf748ac nft-nd-nns: add getAllRecords method
See neo-project/non-native-contracts#5.
2021-09-10 16:30:45 +03:00
Roman Khimov
c4637514d4
Merge pull request #2165 from nspcc-dev/rpc/audit
rpc: request handlers audit
2021-09-10 13:16:40 +03:00
Anna Shaleva
b989fdb462 rpc: fill transaction witnesses during invokescript handling 2021-09-10 11:38:59 +03:00
Anna Shaleva
61fd7bd6ba core: avoid nil values during natives manifest marshalling 2021-09-10 11:38:59 +03:00
Anna Shaleva
61faf28738 rpc: avoid null unverified transactions in getrawmempool response 2021-09-10 11:38:59 +03:00
Anna Shaleva
ed9cdfe667 rpc: use core Header for getblockheader response
The Nonce and Primary fields were missing from the response.
2021-09-09 18:47:22 +03:00
Anna Shaleva
db13362e86 core: marshal Block.Nonce in upper-case hex 2021-09-09 15:52:51 +03:00
Anna Shaleva
3b04b6d238 vm: refactor stack dump commands 2021-09-09 13:45:10 +03:00
Anna Shaleva
6da458365d vm CLI: allow to dump slots 2021-09-09 13:45:10 +03:00
Roman Khimov
b502c5f148
Merge pull request #2162 from nspcc-dev/docs/update
docs: minor documentation updates and adjustments
2021-09-09 12:38:20 +03:00
Anna Shaleva
2f23d83a49 interop: adjust documentation 2021-09-08 17:53:09 +03:00
Anna Shaleva
df8141ff7d rpc: adjust client documentation 2021-09-08 17:53:09 +03:00
Anna Shaleva
913e3878c5 vm CLI: check whether VM is ready before jumping to the instruction
This avoids the following panic:
```
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0xdab469]

goroutine 1 [running]:
github.com/nspcc-dev/neo-go/pkg/vm.(*VM).Jump(...)
	github.com/nspcc-dev/neo-go/pkg/vm/vm.go:1506
github.com/nspcc-dev/neo-go/pkg/vm/cli.handleRun(0xc0005988f0)
	github.com/nspcc-dev/neo-go/pkg/vm/cli/cli.go:413 +0x2e9
github.com/abiosoft/ishell/v2.(*Shell).handleCommand(0xc0004320f0, {0xc00032c7c0, 0xc0002a3920, 0x0})
	github.com/abiosoft/ishell/v2@v2.0.2/ishell.go:279 +0x143
github.com/abiosoft/ishell/v2.handleInput(0xc0004320f0, {0xc00032c7c0, 0x3, 0x4})
	github.com/abiosoft/ishell/v2@v2.0.2/ishell.go:233 +0x31
github.com/abiosoft/ishell/v2.(*Shell).run(0xc0004320f0)
	github.com/abiosoft/ishell/v2@v2.0.2/ishell.go:212 +0x30f
github.com/abiosoft/ishell/v2.(*Shell).Run(0xc0004320f0)
	github.com/abiosoft/ishell/v2@v2.0.2/ishell.go:112 +0x28
github.com/nspcc-dev/neo-go/pkg/vm/cli.(*VMCLI).Run(0xc000224030)
	github.com/nspcc-dev/neo-go/pkg/vm/cli/cli.go:538 +0x39
github.com/nspcc-dev/neo-go/cli/vm.startVMPrompt(0xc0001f46e0)
	github.com/nspcc-dev/neo-go/cli/vm/vm.go:28 +0xb4
github.com/urfave/cli.HandleAction({0xe65fa0, 0x1161c68}, 0x2)
	github.com/urfave/cli@v1.22.5/app.go:524 +0xa8
github.com/urfave/cli.Command.Run({{0xfed435, 0x2}, {0x0, 0x0}, {0x0, 0x0, 0x0}, {0x100576d, 0x19}, {0x0, ...}, ...}, ...)
	github.com/urfave/cli@v1.22.5/command.go:173 +0x652
github.com/urfave/cli.(*App).Run(0xc0001016c0, {0xc0000c6000, 0x2, 0x2})
	github.com/urfave/cli@v1.22.5/app.go:277 +0x705
main.main()
	./main.go:19 +0x33
```
2021-09-08 17:53:09 +03:00
Anna Shaleva
cbc75afd4d docs: refactor documentation
CLI:
* Typos are fixed
* Documentation on NEP-11 tokens is added
* NeoGo node configuration is moved to a separate file

Compiler:
* Typos and indentations are fixed
* Ops dump example is updated

Consensus:
* Typos are fixed
* Links are fixed

Notifications:
* Minor adjustments

RPC:
* `getversion` response is updated
* `getunclaimedgas` comment is removed (not valid since
https://github.com/neo-project/neo-modules/pull/243)

VM:
* Update help message
* `load*` command adjustments
* `astack` command removal
2021-09-08 17:52:46 +03:00
Anna Shaleva
0fa48691f7 network: do not duplicate MPT nodes in GetMPTNodes response
Also tests are added.
2021-09-08 14:25:54 +03:00
Anna Shaleva
51c8c0d82b core: add tests for StateSync module 2021-09-07 19:43:27 +03:00
Anna Shaleva
0aedfd0038 core: fix bug in MPT pool during Update
We need to copy the result of the `TryGet` method, otherwise the slice can
be modified inside the `Add` or `Update` methods, which leads to an
inconsistent MPT pool state.
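
A sketch of the defensive-copy pattern the fix relies on (plain Go, not the
actual MPT pool code): the slice fetched from the pool is copied before the
caller is allowed to mutate it, so the stored value stays intact.

```
// copyValue returns a private copy of a slice fetched from a cache/pool so
// that later appends or in-place edits can't corrupt the stored value.
func copyValue(got []byte) []byte {
	if got == nil {
		return nil
	}
	cp := make([]byte, len(got))
	copy(cp, got)
	return cp
}
```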
2021-09-07 19:43:27 +03:00
Anna Shaleva
36808b8904 core: clone MPT node while restoring it multiple times
We need this to avoid collapse collisions. An example of such a collapse is
described in
https://github.com/nspcc-dev/neo-go/pull/2019#discussion_r689629704.
2021-09-07 19:43:27 +03:00
Anna Shaleva
5cd78c31af core: allow to recover after state jump interruption
We need several stages to manage the state jump process in order not to mix
up old and new contract storage items and to be sure that genesis state data
is properly removed from the storage. Other operations do not require a
separate stage and can be performed each time `jumpToStateInternal` is
called.
2021-09-07 19:43:27 +03:00
Anna Shaleva
5cda24b3af core: initialize headers before current block 2021-09-07 19:43:27 +03:00
Anna Shaleva
0e0b55350a core: convert (*Blockchain).JumpToState to a callback
We don't need this method to be exposed, its only user is the StateSync
module. At the same time, the StateSync module manages its state by itself,
which guarantees that (*Blockchain).jumpToState will be called with the
proper StateSync stage.
2021-09-07 19:43:27 +03:00
Anna Shaleva
6381173293 core: store statesync-related storage items under temp prefix
The state jump should be an atomic operation, we can't modify the contract
storage item state on-the-fly. Thus, store fresh items under a temp prefix
and replace the outdated ones after state sync is completed.
Related:
https://github.com/nspcc-dev/neo-go/pull/2019#discussion_r693350460.
2021-09-07 19:43:27 +03:00
Anna Shaleva
51f405471e core: remove outdated blocks/txs/AERs/MPT nodes during state sync
Before the state sync process can be started, outdated MPT nodes should be
removed from storage. After state sync is completed, outdated
blocks/transactions/AERs should also be removed.
2021-09-07 19:43:27 +03:00
Anna Shaleva
a276a85b72 core: unify code of state sync module initialization 2021-09-07 19:43:27 +03:00
Anna Shaleva
3b7807e897 network: request unknown MPT nodes
In this commit:

1. Request unknown MPT nodes from peers. Note that the StateSync module itself
shouldn't be responsible for node requests, that's a server duty.
2. Do not request the same node twice, check whether it is already in storage.
If so, the only thing remaining is to update the refcounter (see the sketch
below).
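
A sketch of the second point (the helper functions are hypothetical, not the
actual StateSync/server API): before queueing a request for a node hash, look
it up locally and, if it is already there, only bump its reference counter.

```
// requestIfUnknown is illustrative: have() and incRefCount() stand in for the
// real storage lookups, enqueue() for the actual network request.
func requestIfUnknown(h [32]byte,
	have func([32]byte) bool,
	incRefCount func([32]byte),
	enqueue func([32]byte)) {
	if have(h) {
		incRefCount(h) // node already stored, only the refcount changes
		return
	}
	enqueue(h) // otherwise ask peers for it
}
```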
2021-09-07 19:43:27 +03:00
Anna Shaleva
6a04880b49 core: collapse completed parts of Billet
Some kind of marker is needed to check whether a node has been collapsed or
not, so introduce (HashNode).Collapsed.
2021-09-07 19:43:27 +03:00
Anna Shaleva
74f1848d19 core: adjust LastUpdatedBlock calculation for NEP17 balances
...wrt P2PStateExchange setting.
2021-09-07 19:43:27 +03:00
Anna Shaleva
d67ff30704 core: implement statesync module
And support GetMPTData and MPTData P2P commands.
2021-09-07 19:43:27 +03:00
Anna Shaleva
a22b1caa3e core: implement MPT Billet structure for MPT restore
The MPT restore process is much simpler than regular MPT maintenance: the
trie has a fixed structure, and we don't need to remove or rebuild MPT nodes.
The only thing we should do is replace Hash nodes with their unhashed
counterparts and increment the refcount. It's better not to touch the regular
MPT code and to create a separate structure for this.
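
A rough, hypothetical sketch of the restore step (the types below are
illustrative and not the actual Billet implementation): a hash placeholder is
replaced with the real node once its data arrives, and repeated arrivals of
the same node only increment its reference counter.

```
// Node models an MPT node for illustration purposes only.
type Node interface{ Hash() [32]byte }

type restoredNode struct {
	node     Node
	refCount int
}

// restore records an unhashed node in place of its hash placeholder and
// increments the refcount on every further occurrence of the same node.
func restore(seen map[[32]byte]*restoredNode, n Node) {
	if r, ok := seen[n.Hash()]; ok {
		r.refCount++
		return
	}
	seen[n.Hash()] = &restoredNode{node: n, refCount: 1}
}
```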
2021-09-07 19:43:27 +03:00
Roman Khimov
c9e62769a6
Merge pull request #2143 from nspcc-dev/mpt/add_empty_values
core: allow empty MPT Leaf values
2021-09-07 09:18:48 +03:00
Anna Shaleva
c95f2079d5 core: adjust comments on behaviour differences for MPT TestCompatibility
The C# node does not return an empty proof anymore if the path is bad. The C#
node also throws an exception on a bad Put.

Our node does not return an error on delete if the key is empty.
2021-09-03 13:46:52 +03:00
Anna Shaleva
f721384ead core: allow empty MPT Leaf values
Allow it for (*Trie).Put and distinguish between empty and nil values for
(*Trie).PutBatch, because the batch is already capable of handling both. For
(*Trie).PutBatch, putting a nil value means deletion, while putting an empty
value means putting a LeafNode with an empty value.
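
The distinction is possible because Go keeps nil and empty byte slices apart
even though both have zero length; a minimal illustration (unrelated to the
Trie API itself):

```
package main

import "fmt"

func main() {
	var nilVal []byte    // in PutBatch terms: delete the key
	emptyVal := []byte{} // in PutBatch terms: store a leaf with an empty value

	fmt.Println(len(nilVal), len(emptyVal))     // 0 0
	fmt.Println(nilVal == nil, emptyVal == nil) // true false
}
```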
2021-09-03 13:46:48 +03:00
Evgeniy Stratonikov
7371593bdc native/policy: disallow blocking native contracts
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-09-03 11:11:06 +03:00
Evgeniy Stratonikov
9d34547118 rpc/client: add MaxConnsPerHost option, fix #2149
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-09-02 11:26:17 +03:00
Roman Khimov
b07347e602 core: reuse PushItem for interops
Probably less critical here, but still let's push things faster.
2021-08-30 23:43:58 +03:00
Roman Khimov
a3892aa662 vm: don't use PushVal when item type is known
PushVal is very convenient, but its type switch is somewhat expensive.

name                    old time/op    new time/op    delta
ScriptFibonacci-8          736µs ± 1%     602µs ± 1%  -18.21%  (p=0.000 n=9+10)
ScriptNestedRefCount-8    1.08ms ± 2%    0.96ms ± 1%  -11.13%  (p=0.000 n=10+10)
ScriptPushPop/4-8         1.48µs ± 3%    1.35µs ± 2%   -9.14%  (p=0.000 n=10+9)
ScriptPushPop/16-8        3.59µs ± 1%    3.38µs ± 1%   -6.01%  (p=0.000 n=10+10)
ScriptPushPop/128-8       23.7µs ± 1%    22.6µs ± 1%   -4.39%  (p=0.000 n=10+8)
ScriptPushPop/1024-8       176µs ± 2%     167µs ± 3%   -5.24%  (p=0.000 n=9+10)

name                    old alloc/op   new alloc/op   delta
ScriptFibonacci-8          123kB ± 0%     114kB ± 0%   -6.88%  (p=0.000 n=10+9)
ScriptNestedRefCount-8     266kB ± 0%     241kB ± 0%   -9.23%  (p=0.000 n=8+10)
ScriptPushPop/4-8           160B ± 0%      160B ± 0%     ~     (all equal)
ScriptPushPop/16-8          640B ± 0%      640B ± 0%     ~     (all equal)
ScriptPushPop/128-8       8.70kB ± 0%    8.70kB ± 0%     ~     (all equal)
ScriptPushPop/1024-8      73.2kB ± 0%    73.2kB ± 0%     ~     (all equal)

name                    old allocs/op  new allocs/op  delta
ScriptFibonacci-8          3.53k ± 0%     3.17k ± 0%   -9.98%  (p=0.000 n=10+10)
ScriptNestedRefCount-8     11.8k ± 0%     10.7k ± 0%   -8.70%  (p=0.000 n=10+10)
ScriptPushPop/4-8           8.00 ± 0%      8.00 ± 0%     ~     (all equal)
ScriptPushPop/16-8          32.0 ± 0%      32.0 ± 0%     ~     (all equal)
ScriptPushPop/128-8          259 ± 0%       259 ± 0%     ~     (all equal)
ScriptPushPop/1024-8       2.05k ± 0%     2.05k ± 0%     ~     (all equal)
2021-08-30 23:43:58 +03:00
Roman Khimov
bc31c97c32 vm: simplify access to context, don't call Context() twice
Avoid going through Value(), avoid doing type casts twice for every
instruction.

name                    old time/op    new time/op    delta
ScriptFibonacci-8          793µs ± 3%     736µs ± 1%  -7.18%  (p=0.000 n=10+9)
ScriptNestedRefCount-8    1.09ms ± 1%    1.08ms ± 2%  -0.96%  (p=0.035 n=10+10)
ScriptPushPop/4-8         1.51µs ± 3%    1.48µs ± 3%    ~     (p=0.072 n=10+10)
ScriptPushPop/16-8        3.76µs ± 1%    3.59µs ± 1%  -4.56%  (p=0.000 n=10+10)
ScriptPushPop/128-8       25.0µs ± 1%    23.7µs ± 1%  -5.28%  (p=0.000 n=10+10)
ScriptPushPop/1024-8       184µs ± 1%     176µs ± 2%  -4.22%  (p=0.000 n=9+9)
2021-08-30 23:43:58 +03:00
Roman Khimov
e09a0f3969 vm: don't allocate for break points in NewContext
They're rarely used and when they're used they're appended to.
2021-08-29 13:15:12 +03:00
Roman Khimov
7b9558d756
Merge pull request #2142 from nspcc-dev/fix-customgroups-witness-scope
runtime: fix CustomGroups witness
2021-08-26 18:13:08 +03:00
Roman Khimov
734eef3290
Merge pull request #2147 from nspcc-dev/drop-go-1.14
Drop Go 1.14, use 1.17
2021-08-26 17:59:21 +03:00
Roman Khimov
932a57e1e4 keys: reuse coordLen where appropriate 2021-08-26 17:30:04 +03:00
Roman Khimov
6d074a96e9 *: make tests use TempDir(), fix #1319
Simplify things, drop TempFile at the same time (refs. #1764)
2021-08-26 17:29:40 +03:00
Roman Khimov
40c6c065d2
Merge pull request #2140 from nspcc-dev/vm-optimize-stack
Optimize VM stack
2021-08-26 10:31:31 +03:00
Roman Khimov
f4ba21a41a keys: use (*Int).FillBytes where appropriate
Allows avoiding some allocations. Refs. #1319.
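
For context, (*big.Int).FillBytes (available since Go 1.15) writes the value
into a caller-supplied, left-zero-padded buffer instead of allocating a fresh
slice the way Bytes() does; a minimal sketch:

```
package main

import (
	"fmt"
	"math/big"
)

func main() {
	x := big.NewInt(0xABCD)

	var buf [32]byte
	x.FillBytes(buf[:]) // big-endian value, zero-padded on the left, no allocation

	fmt.Printf("%x\n", buf[30:]) // abcd
}
```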
2021-08-25 22:35:39 +03:00
Roman Khimov
76eca07961 keys: simplify NewPrivateKeyFrom* functions
Avoid allocating a slice and doing double calculations.
2021-08-25 22:35:39 +03:00
Roman Khimov
61ea42c570 keys: simplify end of buffer check 2021-08-25 22:35:39 +03:00
Roman Khimov
4803cc15c7 keys: add (*PublicKey).DecodeBytes benchmark
Attempts to reuse elliptic.Unmarshal() and elliptic.UnmarshalCompressed() lead
to this:
name                 old time/op    new time/op    delta
PublicDecodeBytes-8    59.5µs ± 2%    61.8µs ± 1%  +3.78%  (p=0.000 n=10+9)

name                 old alloc/op   new alloc/op   delta
PublicDecodeBytes-8    3.99kB ± 0%    4.27kB ± 0%  +6.81%  (p=0.000 n=9+10)

name                 old allocs/op  new allocs/op  delta
PublicDecodeBytes-8       136 ± 0%       135 ± 0%  -0.74%  (p=0.000 n=10+10)

So it makes no sense. Refs. #1319.
2021-08-25 22:35:39 +03:00
Roman Khimov
a1d96a7d7d keys: use elliptic package marshalling functions, #1319
name                       old time/op    new time/op    delta
PublicBytes-8                81.4ns ± 6%    71.2ns ± 8%  -12.56%  (p=0.000 n=10+10)
PublicUncompressedBytes-8    93.2ns ±17%    72.5ns ±14%  -22.25%  (p=0.000 n=10+10)

name                       old alloc/op   new alloc/op   delta
PublicBytes-8                 80.0B ± 0%     48.0B ± 0%  -40.00%  (p=0.000 n=10+10)
PublicUncompressedBytes-8     80.0B ± 0%     48.0B ± 0%  -40.00%  (p=0.000 n=10+10)

name                       old allocs/op  new allocs/op  delta
PublicBytes-8                  2.00 ± 0%      1.00 ± 0%  -50.00%  (p=0.000 n=10+10)
PublicUncompressedBytes-8      2.00 ± 0%      1.00 ± 0%  -50.00%  (p=0.000 n=10+10)
2021-08-25 22:35:39 +03:00
Roman Khimov
217d7bdf44 keys: add equality benchmark
Go 1.15 provides the native (*ecdsa.PublicKey).Equal method, but we can't drop our
own Equal because the types are different and there is still code using our
Equal (forcing it to convert types is counterproductive), while changing
(*PublicKey).Equal to use (*ecdsa.PublicKey).Equal internally with something like

  (*ecdsa.PublicKey)(p).Equal((*ecdsa.PublicKey)(key))

slows it down:

name           old time/op    new time/op    delta
PublicEqual-8    14.9ns ± 1%    18.4ns ± 2%  +23.55%  (p=0.000 n=9+10)

name           old alloc/op   new alloc/op   delta
PublicEqual-8     0.00B          0.00B          ~     (all equal)

name           old allocs/op  new allocs/op  delta
PublicEqual-8      0.00           0.00          ~     (all equal)

So leave it as is, but add this micro-bench. Refs. #1319.
2021-08-25 15:18:26 +03:00
Roman Khimov
fd87cd4c54 vm: fix (*Stack).Clear to clean up references 2021-08-25 11:20:43 +03:00
Roman Khimov
de4ed7d020 runtime: fix CustomGroups witness
See neo-project/neo#2586.
2021-08-24 15:50:24 +03:00
Roman Khimov
930653418d vm: rework stack as a simple slice
A doubly-linked list is quite expensive to manage, especially given that it
requires microallocations for each Element. It can be replaced by a simple slice,
which is much more effective for the simple push/pop operations that are very
typical in a VM. I worried a little about more complex operations like
XDROP/1024 or REVERSEN/1024 because these require copying quite a substantial
number of elements, but it turns out these work fine too.

At the moment Element is kept as a convenient wrapper for Bytes/BigInt/Bool/etc.
methods, but that can be changed in the future. Many other potential optimizations
are also possible now.
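
A minimal sketch of the slice-backed approach (not the actual vm.Stack code):
push appends, pop reslices, and no per-element allocation is needed.

```
// Element stands in for the VM's stack element type.
type Element struct{ value interface{} }

// Stack is a simple slice-backed LIFO.
type Stack struct{ elems []Element }

func (s *Stack) Push(e Element) { s.elems = append(s.elems, e) }

func (s *Stack) Pop() Element {
	l := len(s.elems)
	e := s.elems[l-1]
	s.elems = s.elems[:l-1] // reslice, nothing is deallocated or relinked
	return e
}

func (s *Stack) Len() int { return len(s.elems) }
```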

Complex scripts:
name                    old time/op    new time/op    delta
ScriptFibonacci-8         1.11ms ± 2%    0.85ms ± 2%  -23.40%  (p=0.000 n=10+10)
ScriptNestedRefCount-8    1.46ms ± 2%    1.16ms ± 1%  -20.65%  (p=0.000 n=10+10)
ScriptPushPop/4-8         1.81µs ± 1%    1.54µs ± 4%  -14.96%  (p=0.000 n=8+10)
ScriptPushPop/16-8        4.88µs ± 2%    3.91µs ± 2%  -19.87%  (p=0.000 n=9+9)
ScriptPushPop/128-8       31.9µs ± 9%    26.7µs ± 3%  -16.28%  (p=0.000 n=9+8)
ScriptPushPop/1024-8       235µs ± 1%     192µs ± 3%  -18.31%  (p=0.000 n=9+10)

name                    old alloc/op   new alloc/op   delta
ScriptFibonacci-8          392kB ± 0%     123kB ± 0%  -68.68%  (p=0.000 n=8+8)
ScriptNestedRefCount-8     535kB ± 0%     266kB ± 0%  -50.38%  (p=0.000 n=6+10)
ScriptPushPop/4-8           352B ± 0%      160B ± 0%  -54.55%  (p=0.000 n=10+10)
ScriptPushPop/16-8        1.41kB ± 0%    0.64kB ± 0%  -54.55%  (p=0.000 n=10+10)
ScriptPushPop/128-8       11.3kB ± 0%     8.7kB ± 0%  -22.73%  (p=0.000 n=10+10)
ScriptPushPop/1024-8      90.1kB ± 0%    73.2kB ± 0%  -18.75%  (p=0.000 n=10+10)

name                    old allocs/op  new allocs/op  delta
ScriptFibonacci-8          9.14k ± 0%     3.53k ± 0%  -61.41%  (p=0.000 n=10+10)
ScriptNestedRefCount-8     17.4k ± 0%     11.8k ± 0%  -32.35%  (p=0.000 n=10+10)
ScriptPushPop/4-8           12.0 ± 0%       8.0 ± 0%  -33.33%  (p=0.000 n=10+10)
ScriptPushPop/16-8          48.0 ± 0%      32.0 ± 0%  -33.33%  (p=0.000 n=10+10)
ScriptPushPop/128-8          384 ± 0%       259 ± 0%  -32.55%  (p=0.000 n=10+10)
ScriptPushPop/1024-8       3.07k ± 0%     2.05k ± 0%  -33.14%  (p=0.000 n=10+10)

Some stack-management opcodes:

name                                 old time/op    new time/op    delta
Opcodes/XDROP/0/1-8                     255ns ± 9%     273ns ±11%    +6.92%  (p=0.016 n=11+10)
Opcodes/XDROP/0/1024-8                  362ns ± 2%     365ns ± 8%      ~     (p=0.849 n=10+11)
Opcodes/XDROP/1024/1024-8              3.20µs ± 2%    1.99µs ±12%   -37.69%  (p=0.000 n=11+11)
Opcodes/XDROP/2047/2048-8              6.55µs ± 3%    1.75µs ± 5%   -73.26%  (p=0.000 n=10+11)
Opcodes/DUP/null-8                      414ns ± 6%     245ns ±12%   -40.88%  (p=0.000 n=11+11)
Opcodes/DUP/boolean-8                   411ns ± 8%     245ns ± 6%   -40.31%  (p=0.000 n=11+11)
Opcodes/DUP/integer/small-8             684ns ± 8%     574ns ± 3%   -16.02%  (p=0.000 n=11+10)
Opcodes/DUP/integer/big-8               675ns ± 6%     601ns ±10%   -10.98%  (p=0.000 n=11+11)
Opcodes/DUP/bytearray/small-8           675ns ±10%     566ns ±10%   -16.22%  (p=0.000 n=11+11)
Opcodes/DUP/bytearray/big-8            6.39µs ±11%    6.13µs ± 3%      ~     (p=0.148 n=10+10)
Opcodes/DUP/buffer/small-8              412ns ± 5%     261ns ± 8%   -36.55%  (p=0.000 n=9+11)
Opcodes/DUP/buffer/big-8                586ns ±10%     337ns ± 7%   -42.53%  (p=0.000 n=11+11)
Opcodes/DUP/struct/small-8              458ns ±12%     256ns ±12%   -44.09%  (p=0.000 n=11+11)
Opcodes/DUP/struct/big-8                489ns ± 7%     274ns ± 5%   -44.06%  (p=0.000 n=10+10)
Opcodes/DUP/pointer-8                   586ns ± 7%     494ns ± 7%   -15.67%  (p=0.000 n=11+11)
Opcodes/OVER/null-8                     450ns ±14%     264ns ±10%   -41.30%  (p=0.000 n=11+11)
Opcodes/OVER/boolean-8                  450ns ±14%     264ns ±10%   -41.31%  (p=0.000 n=11+11)
Opcodes/OVER/integer/small-8            716ns ± 9%     604ns ± 6%   -15.65%  (p=0.000 n=11+11)
Opcodes/OVER/integer/big-8              696ns ± 5%     634ns ± 6%    -8.89%  (p=0.000 n=10+11)
Opcodes/OVER/bytearray/small-8          693ns ± 1%     539ns ± 9%   -22.18%  (p=0.000 n=9+10)
Opcodes/OVER/bytearray/big-8           6.33µs ± 2%    6.16µs ± 4%    -2.79%  (p=0.004 n=8+10)
Opcodes/OVER/buffer/small-8             415ns ± 4%     263ns ± 8%   -36.76%  (p=0.000 n=9+11)
Opcodes/OVER/buffer/big-8               587ns ± 5%     342ns ± 7%   -41.70%  (p=0.000 n=11+11)
Opcodes/OVER/struct/small-8             446ns ±14%     257ns ± 8%   -42.42%  (p=0.000 n=11+11)
Opcodes/OVER/struct/big-8               607ns ±26%     278ns ± 7%   -54.25%  (p=0.000 n=11+11)
Opcodes/OVER/pointer-8                  645ns ±12%     476ns ±10%   -26.21%  (p=0.000 n=11+11)
Opcodes/PICK/2/null-8                   460ns ±11%     264ns ± 9%   -42.68%  (p=0.000 n=11+11)
Opcodes/PICK/2/boolean-8                460ns ± 4%     260ns ± 4%   -43.37%  (p=0.000 n=8+11)
Opcodes/PICK/2/integer/small-8          725ns ± 7%     557ns ± 4%   -23.19%  (p=0.000 n=11+10)
Opcodes/PICK/2/integer/big-8            722ns ±12%     582ns ± 6%   -19.51%  (p=0.000 n=11+11)
Opcodes/PICK/2/bytearray/small-8        705ns ± 6%     545ns ± 4%   -22.69%  (p=0.000 n=11+11)
Opcodes/PICK/2/bytearray/big-8         7.17µs ±36%    6.37µs ± 8%      ~     (p=0.065 n=11+11)
Opcodes/PICK/2/buffer/small-8           427ns ± 8%     253ns ± 8%   -40.82%  (p=0.000 n=11+11)
Opcodes/PICK/2/buffer/big-8             590ns ± 3%     331ns ± 6%   -43.83%  (p=0.000 n=11+11)
Opcodes/PICK/2/struct/small-8           428ns ± 8%     254ns ± 7%   -40.64%  (p=0.000 n=11+11)
Opcodes/PICK/2/struct/big-8             489ns ±15%     283ns ± 7%   -42.11%  (p=0.000 n=11+11)
Opcodes/PICK/2/pointer-8                553ns ± 7%     414ns ± 8%   -25.18%  (p=0.000 n=11+11)
Opcodes/PICK/1024/null-8                531ns ± 4%     327ns ± 6%   -38.49%  (p=0.000 n=10+10)
Opcodes/PICK/1024/boolean-8             527ns ± 5%     318ns ± 5%   -39.78%  (p=0.000 n=11+9)
Opcodes/PICK/1024/integer/small-8       861ns ± 4%     683ns ± 4%   -20.66%  (p=0.000 n=11+11)
Opcodes/PICK/1024/integer/big-8         882ns ± 4%    1060ns ±47%      ~     (p=0.748 n=11+11)
Opcodes/PICK/1024/bytearray/small-8     850ns ± 4%     671ns ± 5%   -21.12%  (p=0.000 n=10+11)
Opcodes/PICK/1024/bytearray/big-8      6.32µs ±26%    6.75µs ± 4%    +6.86%  (p=0.019 n=10+11)
Opcodes/PICK/1024/buffer/small-8        530ns ± 6%     324ns ± 5%   -38.86%  (p=0.000 n=10+11)
Opcodes/PICK/1024/buffer/big-8          570ns ± 4%     417ns ±45%   -26.82%  (p=0.001 n=11+10)
Opcodes/PICK/1024/struct/small-8      1.11µs ±122%    0.34µs ±11%   -69.38%  (p=0.000 n=11+10)
Opcodes/PICK/1024/pointer-8             693ns ± 5%     568ns ±31%   -18.10%  (p=0.002 n=10+10)
Opcodes/TUCK/null-8                     450ns ±10%     275ns ± 8%   -38.93%  (p=0.000 n=11+11)
Opcodes/TUCK/boolean-8                  449ns ±13%     268ns ± 9%   -40.16%  (p=0.000 n=11+10)
Opcodes/TUCK/integer/small-8            716ns ± 7%     599ns ± 7%   -16.30%  (p=0.000 n=11+11)
Opcodes/TUCK/integer/big-8              718ns ± 8%     613ns ±11%   -14.55%  (p=0.000 n=11+11)
Opcodes/TUCK/bytearray/small-8          700ns ±12%     558ns ± 7%   -20.39%  (p=0.000 n=11+11)
Opcodes/TUCK/bytearray/big-8           5.88µs ± 7%    6.37µs ± 3%    +8.31%  (p=0.000 n=10+11)
Opcodes/TUCK/buffer/small-8             425ns ± 6%     258ns ±12%   -39.28%  (p=0.000 n=11+11)
Opcodes/TUCK/buffer/big-8               553ns ±19%     334ns ± 6%   -39.57%  (p=0.000 n=11+11)
Opcodes/TUCK/struct/small-8             474ns ± 3%     263ns ±12%   -44.51%  (p=0.000 n=10+11)
Opcodes/TUCK/struct/big-8               641ns ±24%     284ns ± 8%   -55.63%  (p=0.000 n=11+11)
Opcodes/TUCK/pointer-8                  635ns ±13%     468ns ±16%   -26.31%  (p=0.000 n=11+11)
Opcodes/SWAP/null-8                     227ns ±31%     212ns ±11%      ~     (p=0.847 n=11+11)
Opcodes/SWAP/integer-8                  233ns ±32%     210ns ±14%      ~     (p=0.072 n=10+11)
Opcodes/SWAP/big_bytes-8                263ns ±39%     211ns ±11%      ~     (p=0.056 n=11+11)
Opcodes/ROT/null-8                      308ns ±68%     223ns ±12%      ~     (p=0.519 n=11+11)
Opcodes/ROT/integer-8                   226ns ±25%     228ns ± 9%      ~     (p=0.705 n=10+11)
Opcodes/ROT/big_bytes-8                 215ns ±18%     218ns ± 7%      ~     (p=0.756 n=10+11)
Opcodes/ROLL/4/null-8                   269ns ±10%     295ns ± 9%    +9.42%  (p=0.002 n=10+11)
Opcodes/ROLL/4/integer-8                344ns ±48%     280ns ± 2%      ~     (p=0.882 n=11+9)
Opcodes/ROLL/4/big_bytes-8              276ns ±13%     288ns ± 4%    +4.38%  (p=0.046 n=9+11)
Opcodes/ROLL/1024/null-8               4.21µs ±70%    1.01µs ± 9%   -76.15%  (p=0.000 n=11+11)
Opcodes/ROLL/1024/integer-8            4.78µs ±82%    0.71µs ± 3%   -85.06%  (p=0.000 n=11+11)
Opcodes/ROLL/1024/big_bytes-8          3.28µs ± 5%    1.35µs ±36%   -58.91%  (p=0.000 n=9+11)
Opcodes/REVERSE3/null-8                 219ns ± 9%     224ns ± 9%      ~     (p=0.401 n=11+11)
Opcodes/REVERSE3/integer-8              261ns ±28%     220ns ± 6%   -15.67%  (p=0.015 n=11+11)
Opcodes/REVERSE3/big_bytes-8            245ns ±31%     218ns ± 7%      ~     (p=0.051 n=10+11)
Opcodes/REVERSE4/null-8                 223ns ±10%     218ns ± 6%      ~     (p=0.300 n=11+11)
Opcodes/REVERSE4/integer-8              233ns ±10%     220ns ± 7%    -5.74%  (p=0.016 n=11+11)
Opcodes/REVERSE4/big_bytes-8            225ns ±10%     220ns ± 7%      ~     (p=0.157 n=10+11)
Opcodes/REVERSEN/5/null-8               281ns ±12%     277ns ± 4%      ~     (p=0.847 n=11+11)
Opcodes/REVERSEN/5/integer-8            280ns ±11%     275ns ± 5%      ~     (p=0.243 n=11+11)
Opcodes/REVERSEN/5/big_bytes-8          283ns ± 9%     276ns ± 7%      ~     (p=0.133 n=11+11)
Opcodes/REVERSEN/1024/null-8           4.85µs ± 6%    1.94µs ± 6%   -60.07%  (p=0.000 n=10+11)
Opcodes/REVERSEN/1024/integer-8        4.97µs ± 7%    1.99µs ±22%   -59.88%  (p=0.000 n=11+11)
Opcodes/REVERSEN/1024/big_bytes-8      5.11µs ±10%    2.00µs ± 4%   -60.87%  (p=0.000 n=10+9)
Opcodes/PACK/1-8                       1.22µs ± 7%    0.95µs ± 6%   -22.17%  (p=0.000 n=10+11)
Opcodes/PACK/255-8                     11.1µs ± 4%    10.2µs ± 6%    -7.96%  (p=0.000 n=11+11)
Opcodes/PACK/1024-8                    38.9µs ± 4%    37.4µs ± 9%      ~     (p=0.173 n=10+11)
Opcodes/UNPACK/1-8                     1.32µs ±34%    0.96µs ± 6%   -27.57%  (p=0.000 n=10+11)
Opcodes/UNPACK/255-8                   27.2µs ±14%    16.0µs ±13%   -41.04%  (p=0.000 n=11+11)
Opcodes/UNPACK/1024-8                   102µs ±10%      64µs ±16%   -37.33%  (p=0.000 n=10+11)

name                                 old alloc/op   new alloc/op   delta
Opcodes/XDROP/0/1-8                     0.00B          0.00B           ~     (all equal)
Opcodes/XDROP/0/1024-8                  0.00B          0.00B           ~     (all equal)
Opcodes/XDROP/1024/1024-8               0.00B          0.00B           ~     (all equal)
Opcodes/XDROP/2047/2048-8               0.00B          0.00B           ~     (all equal)
Opcodes/DUP/null-8                      48.0B ± 0%      0.0B       -100.00%  (p=0.000 n=11+11)
Opcodes/DUP/boolean-8                   48.0B ± 0%      0.0B       -100.00%  (p=0.000 n=11+11)
Opcodes/DUP/integer/small-8             96.0B ± 0%     48.0B ± 0%   -50.00%  (p=0.000 n=11+11)
Opcodes/DUP/integer/big-8                104B ± 0%       56B ± 0%   -46.15%  (p=0.000 n=11+11)
Opcodes/DUP/bytearray/small-8           88.0B ± 0%     40.0B ± 0%   -54.55%  (p=0.000 n=11+11)
Opcodes/DUP/bytearray/big-8            65.6kB ± 0%    65.6kB ± 0%    -0.07%  (p=0.000 n=10+9)
Opcodes/DUP/buffer/small-8              48.0B ± 0%      0.0B       -100.00%  (p=0.000 n=11+11)
Opcodes/DUP/buffer/big-8                48.0B ± 0%      0.0B       -100.00%  (p=0.000 n=11+11)
Opcodes/DUP/struct/small-8              48.0B ± 0%      0.0B       -100.00%  (p=0.000 n=11+11)
Opcodes/DUP/struct/big-8                48.0B ± 0%      0.0B       -100.00%  (p=0.000 n=11+11)
Opcodes/DUP/pointer-8                    112B ± 0%       64B ± 0%   -42.86%  (p=0.000 n=11+11)
Opcodes/OVER/null-8                     48.0B ± 0%      0.0B       -100.00%  (p=0.000 n=11+11)
Opcodes/OVER/boolean-8                  48.0B ± 0%      0.0B       -100.00%  (p=0.000 n=11+11)
Opcodes/OVER/integer/small-8            96.0B ± 0%     48.0B ± 0%   -50.00%  (p=0.000 n=11+11)
Opcodes/OVER/integer/big-8               104B ± 0%       56B ± 0%   -46.15%  (p=0.000 n=11+11)
Opcodes/OVER/bytearray/small-8          88.0B ± 0%     40.0B ± 0%   -54.55%  (p=0.000 n=11+11)
Opcodes/OVER/bytearray/big-8           65.6kB ± 0%    65.6kB ± 0%    -0.07%  (p=0.000 n=9+11)
Opcodes/OVER/buffer/small-8             48.0B ± 0%      0.0B       -100.00%  (p=0.000 n=11+11)
Opcodes/OVER/buffer/big-8               48.0B ± 0%      0.0B       -100.00%  (p=0.000 n=11+11)
Opcodes/OVER/struct/small-8             48.0B ± 0%      0.0B       -100.00%  (p=0.000 n=11+11)
Opcodes/OVER/struct/big-8               48.0B ± 0%      0.0B       -100.00%  (p=0.000 n=11+11)
Opcodes/OVER/pointer-8                   112B ± 0%       64B ± 0%   -42.86%  (p=0.000 n=11+11)
Opcodes/PICK/2/null-8                   48.0B ± 0%      0.0B       -100.00%  (p=0.000 n=11+11)
Opcodes/PICK/2/boolean-8                48.0B ± 0%      0.0B       -100.00%  (p=0.000 n=11+11)
Opcodes/PICK/2/integer/small-8          96.0B ± 0%     48.0B ± 0%   -50.00%  (p=0.000 n=11+11)
Opcodes/PICK/2/integer/big-8             104B ± 0%       56B ± 0%   -46.15%  (p=0.000 n=11+11)
Opcodes/PICK/2/bytearray/small-8        88.0B ± 0%     40.0B ± 0%   -54.55%  (p=0.000 n=11+11)
Opcodes/PICK/2/bytearray/big-8         65.6kB ± 0%    65.6kB ± 0%    -0.07%  (p=0.001 n=9+11)
Opcodes/PICK/2/buffer/small-8           48.0B ± 0%      0.0B       -100.00%  (p=0.000 n=11+11)
Opcodes/PICK/2/buffer/big-8             48.0B ± 0%      0.0B       -100.00%  (p=0.000 n=11+11)
Opcodes/PICK/2/struct/small-8           48.0B ± 0%      0.0B       -100.00%  (p=0.000 n=11+11)
Opcodes/PICK/2/struct/big-8             48.0B ± 0%      0.0B       -100.00%  (p=0.000 n=11+11)
Opcodes/PICK/2/pointer-8                 112B ± 0%       64B ± 0%   -42.86%  (p=0.000 n=11+11)
Opcodes/PICK/1024/null-8                48.0B ± 0%      0.0B       -100.00%  (p=0.000 n=11+11)
Opcodes/PICK/1024/boolean-8             48.0B ± 0%      0.0B       -100.00%  (p=0.000 n=11+11)
Opcodes/PICK/1024/integer/small-8       96.0B ± 0%     48.0B ± 0%   -50.00%  (p=0.000 n=11+11)
Opcodes/PICK/1024/integer/big-8          104B ± 0%       56B ± 0%   -46.15%  (p=0.000 n=11+11)
Opcodes/PICK/1024/bytearray/small-8     88.0B ± 0%     40.0B ± 0%   -54.55%  (p=0.000 n=11+11)
Opcodes/PICK/1024/bytearray/big-8      65.6kB ± 0%    65.6kB ± 0%    -0.07%  (p=0.000 n=11+11)
Opcodes/PICK/1024/buffer/small-8        48.0B ± 0%      0.0B       -100.00%  (p=0.000 n=11+11)
Opcodes/PICK/1024/buffer/big-8          48.0B ± 0%      0.0B       -100.00%  (p=0.000 n=11+11)
Opcodes/PICK/1024/struct/small-8        48.0B ± 0%      0.0B       -100.00%  (p=0.000 n=11+11)
Opcodes/PICK/1024/pointer-8              112B ± 0%       64B ± 0%   -42.86%  (p=0.000 n=11+11)
Opcodes/TUCK/null-8                     48.0B ± 0%      0.0B       -100.00%  (p=0.000 n=11+11)
Opcodes/TUCK/boolean-8                  48.0B ± 0%      0.0B       -100.00%  (p=0.000 n=11+11)
Opcodes/TUCK/integer/small-8            96.0B ± 0%     48.0B ± 0%   -50.00%  (p=0.000 n=11+11)
Opcodes/TUCK/integer/big-8               104B ± 0%       56B ± 0%   -46.15%  (p=0.000 n=11+11)
Opcodes/TUCK/bytearray/small-8          88.0B ± 0%     40.0B ± 0%   -54.55%  (p=0.000 n=11+11)
Opcodes/TUCK/bytearray/big-8           65.6kB ± 0%    65.6kB ± 0%    -0.07%  (p=0.000 n=10+11)
Opcodes/TUCK/buffer/small-8             48.0B ± 0%      0.0B       -100.00%  (p=0.000 n=11+11)
Opcodes/TUCK/buffer/big-8               48.0B ± 0%      0.0B       -100.00%  (p=0.000 n=11+11)
Opcodes/TUCK/struct/small-8             48.0B ± 0%      0.0B       -100.00%  (p=0.000 n=11+11)
Opcodes/TUCK/struct/big-8               48.0B ± 0%      0.0B       -100.00%  (p=0.000 n=11+11)
Opcodes/TUCK/pointer-8                   112B ± 0%       64B ± 0%   -42.86%  (p=0.000 n=11+11)
Opcodes/SWAP/null-8                     0.00B          0.00B           ~     (all equal)
Opcodes/SWAP/integer-8                  0.00B          0.00B           ~     (all equal)
Opcodes/SWAP/big_bytes-8                0.00B          0.00B           ~     (all equal)
Opcodes/ROT/null-8                      0.00B          0.00B           ~     (all equal)
Opcodes/ROT/integer-8                   0.00B          0.00B           ~     (all equal)
Opcodes/ROT/big_bytes-8                 0.00B          0.00B           ~     (all equal)
Opcodes/ROLL/4/null-8                   0.00B          0.00B           ~     (all equal)
Opcodes/ROLL/4/integer-8                0.00B          0.00B           ~     (all equal)
Opcodes/ROLL/4/big_bytes-8              0.00B          0.00B           ~     (all equal)
Opcodes/ROLL/1024/null-8                0.00B          0.00B           ~     (all equal)
Opcodes/ROLL/1024/integer-8             0.00B          0.00B           ~     (all equal)
Opcodes/ROLL/1024/big_bytes-8           0.00B          0.00B           ~     (all equal)
Opcodes/REVERSE3/null-8                 0.00B          0.00B           ~     (all equal)
Opcodes/REVERSE3/integer-8              0.00B          0.00B           ~     (all equal)
Opcodes/REVERSE3/big_bytes-8            0.00B          0.00B           ~     (all equal)
Opcodes/REVERSE4/null-8                 0.00B          0.00B           ~     (all equal)
Opcodes/REVERSE4/integer-8              0.00B          0.00B           ~     (all equal)
Opcodes/REVERSE4/big_bytes-8            0.00B          0.00B           ~     (all equal)
Opcodes/REVERSEN/5/null-8               0.00B          0.00B           ~     (all equal)
Opcodes/REVERSEN/5/integer-8            0.00B          0.00B           ~     (all equal)
Opcodes/REVERSEN/5/big_bytes-8          0.00B          0.00B           ~     (all equal)
Opcodes/REVERSEN/1024/null-8            0.00B          0.00B           ~     (all equal)
Opcodes/REVERSEN/1024/integer-8         0.00B          0.00B           ~     (all equal)
Opcodes/REVERSEN/1024/big_bytes-8       0.00B          0.00B           ~     (all equal)
Opcodes/PACK/1-8                         144B ± 0%       96B ± 0%   -33.33%  (p=0.000 n=11+11)
Opcodes/PACK/255-8                     4.22kB ± 0%    4.18kB ± 0%    -1.14%  (p=0.000 n=11+11)
Opcodes/PACK/1024-8                    16.5kB ± 0%    16.5kB ± 0%    -0.29%  (p=0.000 n=11+11)
Opcodes/UNPACK/1-8                       168B ± 0%       72B ± 0%   -57.14%  (p=0.000 n=11+11)
Opcodes/UNPACK/255-8                   12.4kB ± 0%     7.8kB ± 0%   -37.28%  (p=0.000 n=11+11)
Opcodes/UNPACK/1024-8                  49.3kB ± 0%    52.8kB ± 0%    +7.18%  (p=0.000 n=11+11)

name                                 old allocs/op  new allocs/op  delta
Opcodes/XDROP/0/1-8                      0.00           0.00           ~     (all equal)
Opcodes/XDROP/0/1024-8                   0.00           0.00           ~     (all equal)
Opcodes/XDROP/1024/1024-8                0.00           0.00           ~     (all equal)
Opcodes/XDROP/2047/2048-8                0.00           0.00           ~     (all equal)
Opcodes/DUP/null-8                       1.00 ± 0%      0.00       -100.00%  (p=0.000 n=11+11)
Opcodes/DUP/boolean-8                    1.00 ± 0%      0.00       -100.00%  (p=0.000 n=11+11)
Opcodes/DUP/integer/small-8              3.00 ± 0%      2.00 ± 0%   -33.33%  (p=0.000 n=11+11)
Opcodes/DUP/integer/big-8                3.00 ± 0%      2.00 ± 0%   -33.33%  (p=0.000 n=11+11)
Opcodes/DUP/bytearray/small-8            3.00 ± 0%      2.00 ± 0%   -33.33%  (p=0.000 n=11+11)
Opcodes/DUP/bytearray/big-8              3.00 ± 0%      2.00 ± 0%   -33.33%  (p=0.000 n=11+11)
Opcodes/DUP/buffer/small-8               1.00 ± 0%      0.00       -100.00%  (p=0.000 n=11+11)
Opcodes/DUP/buffer/big-8                 1.00 ± 0%      0.00       -100.00%  (p=0.000 n=11+11)
Opcodes/DUP/struct/small-8               1.00 ± 0%      0.00       -100.00%  (p=0.000 n=11+11)
Opcodes/DUP/struct/big-8                 1.00 ± 0%      0.00       -100.00%  (p=0.000 n=11+11)
Opcodes/DUP/pointer-8                    2.00 ± 0%      1.00 ± 0%   -50.00%  (p=0.000 n=11+11)
Opcodes/OVER/null-8                      1.00 ± 0%      0.00       -100.00%  (p=0.000 n=11+11)
Opcodes/OVER/boolean-8                   1.00 ± 0%      0.00       -100.00%  (p=0.000 n=11+11)
Opcodes/OVER/integer/small-8             3.00 ± 0%      2.00 ± 0%   -33.33%  (p=0.000 n=11+11)
Opcodes/OVER/integer/big-8               3.00 ± 0%      2.00 ± 0%   -33.33%  (p=0.000 n=11+11)
Opcodes/OVER/bytearray/small-8           3.00 ± 0%      2.00 ± 0%   -33.33%  (p=0.000 n=11+11)
Opcodes/OVER/bytearray/big-8             3.00 ± 0%      2.00 ± 0%   -33.33%  (p=0.000 n=11+11)
Opcodes/OVER/buffer/small-8              1.00 ± 0%      0.00       -100.00%  (p=0.000 n=11+11)
Opcodes/OVER/buffer/big-8                1.00 ± 0%      0.00       -100.00%  (p=0.000 n=11+11)
Opcodes/OVER/struct/small-8              1.00 ± 0%      0.00       -100.00%  (p=0.000 n=11+11)
Opcodes/OVER/struct/big-8                1.00 ± 0%      0.00       -100.00%  (p=0.000 n=11+11)
Opcodes/OVER/pointer-8                   2.00 ± 0%      1.00 ± 0%   -50.00%  (p=0.000 n=11+11)
Opcodes/PICK/2/null-8                    1.00 ± 0%      0.00       -100.00%  (p=0.000 n=11+11)
Opcodes/PICK/2/boolean-8                 1.00 ± 0%      0.00       -100.00%  (p=0.000 n=11+11)
Opcodes/PICK/2/integer/small-8           3.00 ± 0%      2.00 ± 0%   -33.33%  (p=0.000 n=11+11)
Opcodes/PICK/2/integer/big-8             3.00 ± 0%      2.00 ± 0%   -33.33%  (p=0.000 n=11+11)
Opcodes/PICK/2/bytearray/small-8         3.00 ± 0%      2.00 ± 0%   -33.33%  (p=0.000 n=11+11)
Opcodes/PICK/2/bytearray/big-8           3.00 ± 0%      2.00 ± 0%   -33.33%  (p=0.000 n=11+11)
Opcodes/PICK/2/buffer/small-8            1.00 ± 0%      0.00       -100.00%  (p=0.000 n=11+11)
Opcodes/PICK/2/buffer/big-8              1.00 ± 0%      0.00       -100.00%  (p=0.000 n=11+11)
Opcodes/PICK/2/struct/small-8            1.00 ± 0%      0.00       -100.00%  (p=0.000 n=11+11)
Opcodes/PICK/2/struct/big-8              1.00 ± 0%      0.00       -100.00%  (p=0.000 n=11+11)
Opcodes/PICK/2/pointer-8                 2.00 ± 0%      1.00 ± 0%   -50.00%  (p=0.000 n=11+11)
Opcodes/PICK/1024/null-8                 1.00 ± 0%      0.00       -100.00%  (p=0.000 n=11+11)
Opcodes/PICK/1024/boolean-8              1.00 ± 0%      0.00       -100.00%  (p=0.000 n=11+11)
Opcodes/PICK/1024/integer/small-8        3.00 ± 0%      2.00 ± 0%   -33.33%  (p=0.000 n=11+11)
Opcodes/PICK/1024/integer/big-8          3.00 ± 0%      2.00 ± 0%   -33.33%  (p=0.000 n=11+11)
Opcodes/PICK/1024/bytearray/small-8      3.00 ± 0%      2.00 ± 0%   -33.33%  (p=0.000 n=11+11)
Opcodes/PICK/1024/bytearray/big-8        3.00 ± 0%      2.00 ± 0%   -33.33%  (p=0.000 n=11+11)
Opcodes/PICK/1024/buffer/small-8         1.00 ± 0%      0.00       -100.00%  (p=0.000 n=11+11)
Opcodes/PICK/1024/buffer/big-8           1.00 ± 0%      0.00       -100.00%  (p=0.000 n=11+11)
Opcodes/PICK/1024/struct/small-8         1.00 ± 0%      0.00       -100.00%  (p=0.000 n=11+11)
Opcodes/PICK/1024/pointer-8              2.00 ± 0%      1.00 ± 0%   -50.00%  (p=0.000 n=11+11)
Opcodes/TUCK/null-8                      1.00 ± 0%      0.00       -100.00%  (p=0.000 n=11+11)
Opcodes/TUCK/boolean-8                   1.00 ± 0%      0.00       -100.00%  (p=0.000 n=11+11)
Opcodes/TUCK/integer/small-8             3.00 ± 0%      2.00 ± 0%   -33.33%  (p=0.000 n=11+11)
Opcodes/TUCK/integer/big-8               3.00 ± 0%      2.00 ± 0%   -33.33%  (p=0.000 n=11+11)
Opcodes/TUCK/bytearray/small-8           3.00 ± 0%      2.00 ± 0%   -33.33%  (p=0.000 n=11+11)
Opcodes/TUCK/bytearray/big-8             3.00 ± 0%      2.00 ± 0%   -33.33%  (p=0.000 n=11+11)
Opcodes/TUCK/buffer/small-8              1.00 ± 0%      0.00       -100.00%  (p=0.000 n=11+11)
Opcodes/TUCK/buffer/big-8                1.00 ± 0%      0.00       -100.00%  (p=0.000 n=11+11)
Opcodes/TUCK/struct/small-8              1.00 ± 0%      0.00       -100.00%  (p=0.000 n=11+11)
Opcodes/TUCK/struct/big-8                1.00 ± 0%      0.00       -100.00%  (p=0.000 n=11+11)
Opcodes/TUCK/pointer-8                   2.00 ± 0%      1.00 ± 0%   -50.00%  (p=0.000 n=11+11)
Opcodes/SWAP/null-8                      0.00           0.00           ~     (all equal)
Opcodes/SWAP/integer-8                   0.00           0.00           ~     (all equal)
Opcodes/SWAP/big_bytes-8                 0.00           0.00           ~     (all equal)
Opcodes/ROT/null-8                       0.00           0.00           ~     (all equal)
Opcodes/ROT/integer-8                    0.00           0.00           ~     (all equal)
Opcodes/ROT/big_bytes-8                  0.00           0.00           ~     (all equal)
Opcodes/ROLL/4/null-8                    0.00           0.00           ~     (all equal)
Opcodes/ROLL/4/integer-8                 0.00           0.00           ~     (all equal)
Opcodes/ROLL/4/big_bytes-8               0.00           0.00           ~     (all equal)
Opcodes/ROLL/1024/null-8                 0.00           0.00           ~     (all equal)
Opcodes/ROLL/1024/integer-8              0.00           0.00           ~     (all equal)
Opcodes/ROLL/1024/big_bytes-8            0.00           0.00           ~     (all equal)
Opcodes/REVERSE3/null-8                  0.00           0.00           ~     (all equal)
Opcodes/REVERSE3/integer-8               0.00           0.00           ~     (all equal)
Opcodes/REVERSE3/big_bytes-8             0.00           0.00           ~     (all equal)
Opcodes/REVERSE4/null-8                  0.00           0.00           ~     (all equal)
Opcodes/REVERSE4/integer-8               0.00           0.00           ~     (all equal)
Opcodes/REVERSE4/big_bytes-8             0.00           0.00           ~     (all equal)
Opcodes/REVERSEN/5/null-8                0.00           0.00           ~     (all equal)
Opcodes/REVERSEN/5/integer-8             0.00           0.00           ~     (all equal)
Opcodes/REVERSEN/5/big_bytes-8           0.00           0.00           ~     (all equal)
Opcodes/REVERSEN/1024/null-8             0.00           0.00           ~     (all equal)
Opcodes/REVERSEN/1024/integer-8          0.00           0.00           ~     (all equal)
Opcodes/REVERSEN/1024/big_bytes-8        0.00           0.00           ~     (all equal)
Opcodes/PACK/1-8                         5.00 ± 0%      4.00 ± 0%   -20.00%  (p=0.000 n=11+11)
Opcodes/PACK/255-8                       5.00 ± 0%      4.00 ± 0%   -20.00%  (p=0.000 n=11+11)
Opcodes/PACK/1024-8                      5.00 ± 0%      4.00 ± 0%   -20.00%  (p=0.000 n=11+11)
Opcodes/UNPACK/1-8                       5.00 ± 0%      3.00 ± 0%   -40.00%  (p=0.000 n=11+11)
Opcodes/UNPACK/255-8                      259 ± 0%         7 ± 0%   -97.30%  (p=0.000 n=11+11)
Opcodes/UNPACK/1024-8                   1.03k ± 0%     0.01k ± 0%   -98.93%  (p=0.000 n=11+11)
2021-08-24 15:28:14 +03:00
Roman Khimov
7808762ba0 transaction: avoid reencoding and reading what can't be read
name               old time/op    new time/op    delta
DecodeFromBytes-8    1.79µs ± 2%    1.46µs ± 4%  -18.44%  (p=0.000 n=10+10)

name               old alloc/op   new alloc/op   delta
DecodeFromBytes-8      800B ± 0%      624B ± 0%  -22.00%  (p=0.000 n=10+10)

name               old allocs/op  new allocs/op  delta
DecodeFromBytes-8      10.0 ± 0%       8.0 ± 0%  -20.00%  (p=0.000 n=10+10)
2021-08-23 21:41:38 +03:00
Roman Khimov
d0620b24ec io: simplify BinReader uint buffer
Similar to c69670c85b, this allows eliminating one
allocation and reducing the memory footprint a bit (tested on tx decoding):

name               old time/op    new time/op    delta
DecodeFromBytes-8    1.78µs ± 3%    1.79µs ± 2%    ~     (p=1.000 n=10+10)

name               old alloc/op   new alloc/op   delta
DecodeFromBytes-8      888B ± 0%      800B ± 0%  -9.91%  (p=0.000 n=10+10)

name               old allocs/op  new allocs/op  delta
DecodeFromBytes-8      11.0 ± 0%      10.0 ± 0%  -9.09%  (p=0.000 n=10+10)
2021-08-23 21:18:07 +03:00
Roman Khimov
ed35cf8f12 vm: store exception stack in the context
Avoid allocating it.
2021-08-23 18:29:27 +03:00
Roman Khimov
d3198c3082 stackitem: avoid going through Value() in serialization
Doesn't change much, but still simpler.

name               old time/op    new time/op    delta
SerializeSimple-8     452ns ±10%     435ns ± 4%   ~     (p=0.356 n=10+9)

name               old alloc/op   new alloc/op   delta
SerializeSimple-8      432B ± 0%      432B ± 0%   ~     (all equal)

name               old allocs/op  new allocs/op  delta
SerializeSimple-8      7.00 ± 0%      7.00 ± 0%   ~     (all equal)
2021-08-23 18:29:07 +03:00
Roman Khimov
2808f6857d interop: don't allocate for Functions and Notifications in New
Functions are usually immediately replaced (and it's OK for them to be nil,
since searching through a zero-length array is fine), while Notifications are
usually appended to (and are absolutely useless in verification contexts).
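
This works because Go treats nil slices as empty for both iteration and
append, so nothing has to be preallocated; a tiny illustration:

```
package main

import "fmt"

func main() {
	var fns []func() // nil slice: ranging over it is a no-op
	for _, f := range fns {
		f()
	}

	var notes []string // nil slice: append allocates only on first use
	notes = append(notes, "first event")
	fmt.Println(len(fns), notes) // 0 [first event]
}
```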
2021-08-20 11:56:28 +03:00
Roman Khimov
2e39f1a1e3 io: drop one allocation from NewBufBinWriter 2021-08-20 11:38:42 +03:00
Roman Khimov
a68a8aa8fc core: simplify and correct notification handling
* Both 'to' and 'from' are either Null or Hash160; there is no other
   possibility for valid NEP-17. So returning util.Uint160{} in case of a
   parsing error is wrong.
 * But this is what allowed burns/mints to work, at the expense of error
   allocation inside of util.Uint160DecodeBytesBE().
 * Uint160 can technically fit into a regular VM integer, so even though it'd
   be quite surprising to see it there, TryBytes() is more correct (and
   easier!) to use.
 * The same goes for `amount`: we have `TryInteger()` that easily covers all
   possible cases and does the appropriate error checking inside.
2021-08-20 11:26:16 +03:00
Roman Khimov
abc48229a3 block: Grow buffer on Trim, avoid reallocations 2021-08-20 11:05:46 +03:00
Roman Khimov
b8dd284d3d io: don't allocate new error on every call to Bytes()
It makes no sense and we're using Bytes() pretty often.
2021-08-20 10:58:51 +03:00
Anna Shaleva
5f9d38f640 core: refactor (*DAO).StoreAsTransaction
Squash (*DAO).StoreAsTransaction and
(*DAO).StoreConflictingTransactions. It's better to keep them this way,
because StoreAsTransaction is always followed by
StoreConflictingTransactions, so it's an atomic operation.

The logic wasn't changed.
2021-08-18 13:39:28 +03:00
Anna Shaleva
4b35a1cf92 core: remove conflicting transactions wrt MaxTraceableBlocks 2021-08-18 13:31:47 +03:00
Roman Khimov
483934d3a6
Merge pull request #2133 from nspcc-dev/optimize-util
util: reduce allocations in `util.Uint256DecodeStringLE`
2021-08-17 19:20:16 +03:00
Evgeniy Stratonikov
8c31831626 util: reduce allocations in util.Uint256DecodeStringLE
It is used a lot in clients (including our benchmark).
`Uint160` is already optimized.

```
name                     old time/op    new time/op    delta
Uint256DecodeStringLE-8     150ns ±15%     112ns ± 3%  -25.23%  (p=0.000 n=10+10)

name                     old alloc/op   new alloc/op   delta
Uint256DecodeStringLE-8     96.0B ± 0%     64.0B ± 0%  -33.33%  (p=0.000 n=10+10)

name                     old allocs/op  new allocs/op  delta
Uint256DecodeStringLE-8      2.00 ± 0%      1.00 ± 0%  -50.00%  (p=0.000 n=10+10)
```

Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-17 16:53:39 +03:00
Roman Khimov
f477a48758 contract: block calls to contracts via Policy contract
See neo-project/neo#2567.
2021-08-17 15:24:06 +03:00
Roman Khimov
11351b9702
Merge pull request #2114 from nspcc-dev/optimize-rpc
rpc/request: delay parameter unmarshaling
2021-08-13 16:30:42 +03:00
Evgeniy Stratonikov
3c34e6fa21 rpc/request: delay parameter unmarshaling
It is rather costly to try to unmarshal many structs in order.

Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-13 16:22:54 +03:00
Roman Khimov
5b12dd2025
Merge pull request #2128 from nspcc-dev/vm-update-int
Some VM optimizations
2021-08-13 16:16:01 +03:00
Evgeniy Stratonikov
6879f76a13 stackitem: make Buffer an alias to []byte
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-13 14:41:26 +03:00
Evgeniy Stratonikov
1dfef4ba26 stackitem: make ByteArray an alias to []byte
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-13 14:41:26 +03:00
Evgeniy Stratonikov
4f98ec2f53 vm: embed reference counter in compound items
```
name              old time/op  new time/op  delta
RefCounter_Add-8  44.8ns ± 4%  11.7ns ± 3%  -73.94%  (p=0.000 n=10+10)
```

Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-13 14:41:26 +03:00
Roman Khimov
adc660c3e0
Merge pull request #2123 from nspcc-dev/store-better
Store better
2021-08-13 12:50:24 +03:00
Evgeniy Stratonikov
dc9287bf5c compiler: use parameter directly in writeJumps
`Next` no longer copies the parameter.

Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-13 11:59:04 +03:00
Evgeniy Stratonikov
f5d1277bfd vm: do not copy parameter
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-13 11:52:38 +03:00
Evgeniy Stratonikov
e2910a7cb4 vm/cli: add public key -> address conversion, fix #2121
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-13 10:43:49 +03:00
Evgeniy Stratonikov
bb137abb03 crypto/keys: enforce length in PublicKey.DecodeBytes()
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-13 10:38:09 +03:00
Evgeniy Stratonikov
a5516e8c96 stackitem: make BigInteger alias to big.Int
Remove one indirection step.
```
name       old time/op    new time/op    delta
MakeInt-8    79.7ns ± 8%    56.2ns ± 8%  -29.44%  (p=0.000 n=10+10)

name       old alloc/op   new alloc/op   delta
MakeInt-8     48.0B ± 0%     40.0B ± 0%  -16.67%  (p=0.000 n=10+10)

name       old allocs/op  new allocs/op  delta
MakeInt-8      3.00 ± 0%      2.00 ± 0%  -33.33%  (p=0.000 n=10+10)
```

Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-12 17:53:36 +03:00
Evgeniy Stratonikov
cff8b1c24e stackitem: use Bool item directly
It is always copied.

Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-12 17:53:36 +03:00
Roman Khimov
ae071d4542 storage: introduce PutChangeSet and use it for Persist
We're using batches the wrong way during persist: we already have all changes
accumulated in two maps, then we move them into a batch and only then is it
applied. For some DBs like BoltDB this batch is just another MemoryStore, so
we essentially just shuffle the changeset from one map to another; for others
like LevelDB the batch is just a serialized set of KV pairs, which doesn't
help much on the subsequent PutBatch, we just duplicate the changeset again.

So introduce PutChangeSet that takes the two maps with sets and deletes
directly. It also allows simplifying MemCachedStore logic.

neo-bench for single node with 10 workers, LevelDB:

  Reference:

  RPS    30189.132 30556.448 30390.482 ≈ 30379    ±  0.61%
  TPS    29427.344 29418.687 29434.273 ≈ 29427    ±  0.03%
  CPU %     33.304    27.179    33.860 ≈    31.45 ± 11.79%
  Mem MB   800.677   798.389   715.042 ≈   771    ±  6.33%

  Patched:

  RPS    30264.326 30386.364 30166.231 ≈ 30272    ± 0.36% ⇅
  TPS    29444.673 29407.440 29452.478 ≈ 29435    ± 0.08% ⇅
  CPU %     34.012    32.597    33.467 ≈   33.36  ± 2.14% ⇅
  Mem MB   549.126   523.656   517.684 ≈  530     ± 3.15% ↓ 31.26%

BoltDB:

  Reference:

  RPS    31937.647 31551.684 31850.408 ≈ 31780    ±  0.64%
  TPS    31292.049 30368.368 31307.724 ≈ 30989    ±  1.74%
  CPU %     33.792    22.339    35.887 ≈    30.67 ± 23.78%
  Mem MB  1271.687  1254.472  1215.639 ≈  1247    ±  2.30%

  Patched:

  RPS    31746.818 30859.485 31689.761 ≈ 31432    ± 1.58% ⇅
  TPS    31271.499 30340.726 30342.568 ≈ 30652    ± 1.75% ⇅
  CPU %     34.611    34.414    31.553 ≈    33.53 ± 5.11% ⇅
  Mem MB  1262.960  1231.389  1335.569 ≈  1277    ± 4.18% ⇅
2021-08-12 17:42:16 +03:00
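A minimal sketch of the PutChangeSet idea from the commit above; the interface and map types are simplified assumptions, not the exact neo-go signatures:
```
package main

import "fmt"

// Store is a simplified KV store interface; PutChangeSet applies a whole
// set of puts and deletes at once instead of copying them into a batch first.
type Store interface {
	PutChangeSet(puts map[string][]byte, deletes map[string]bool) error
}

// MemoryStore is a toy in-memory implementation.
type MemoryStore struct {
	mem map[string][]byte
}

func NewMemoryStore() *MemoryStore {
	return &MemoryStore{mem: make(map[string][]byte)}
}

// PutChangeSet takes the accumulated changes directly, no intermediate batch.
func (s *MemoryStore) PutChangeSet(puts map[string][]byte, deletes map[string]bool) error {
	for k, v := range puts {
		s.mem[k] = v
	}
	for k := range deletes {
		delete(s.mem, k)
	}
	return nil
}

func main() {
	s := NewMemoryStore()
	_ = s.PutChangeSet(map[string][]byte{"a": {1}}, map[string]bool{"b": true})
	fmt.Println(len(s.mem)) // 1
}
```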
Roman Khimov
5aff82aef4
Merge pull request #2119 from nspcc-dev/states-exchange/insole
core, network: prepare basis for Insole module
2021-08-12 10:35:02 +03:00
Roman Khimov
47f0f4c45f dao: completely drop Cached
It was very useful in 2.0 days, but today it only serves one purpose that
could easily (and more effectively!) be solved in another way.
2021-08-11 23:06:17 +03:00
Roman Khimov
3e60771175 core: deduplicate and simplify processNEP17Transfer a bit
Just refactoring, no functional changes.
2021-08-11 22:36:26 +03:00
Roman Khimov
50ee1a1f91 *: don't use dao.Cached in tests
There is no need to use it.
2021-08-11 21:02:50 +03:00
Roman Khimov
18682f2409 storage: don't use locks for memory batches
They're inherently single-threaded, so locking makes no sense for them.
2021-08-11 18:55:07 +03:00
Roman Khimov
13da1b62fb interop: fetch baseExecFee once and keep it in the Context
It never changes during a single execution, so we can cache it and avoid going
to Policer via Chain for every instruction.
2021-08-11 15:42:23 +03:00
Roman Khimov
bdb2d24a5a vm: remove istack redirection in VM
VM always has istack and it doesn't even change, so doing this microallocation
makes no sense. Notice that estack is a bit harder to change: we do replace it
in some cases and we compare pointers to it as well.
2021-08-11 14:42:01 +03:00
Roman Khimov
ff7d594bef vm: store refcounter directly in VM
VM always has it, so allocating yet another object makes no sense.
2021-08-11 13:25:58 +03:00
Anna Shaleva
0e3b9c48a2 core: add API to store StateSyncPoint and StateSyncCurrentBlockHeight
We need it in order not to mess up the blockchain which has its own
CurrentBlockHeight.
2021-08-10 14:06:28 +03:00
Anna Shaleva
cb01f533c0 core: store conflicting transactions in a separate method
(DAO).StoreConflictingTransactions will be reused from the state sync
module.
2021-08-10 13:47:13 +03:00
Anna Shaleva
72e654332e core: refactor block queue
It requires only two methods from Blockchainer: AddBlock and
BlockHeight. The new interface will allow easily reusing the block queue
for state exchange purposes.
2021-08-10 13:47:13 +03:00
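A rough illustration of the narrowed dependency described above: the queue only needs something that can add blocks and report the current height. Interface and type names here are illustrative assumptions:
```
package main

import "fmt"

// Block is a placeholder for the real block type.
type Block struct {
	Index uint32
}

// Blockqueuer is the minimal subset of Blockchainer the queue needs.
type Blockqueuer interface {
	AddBlock(b *Block) error
	BlockHeight() uint32
}

// drain feeds blocks into anything implementing Blockqueuer.
func drain(bq Blockqueuer, blocks []*Block) {
	for _, b := range blocks {
		if b.Index <= bq.BlockHeight() {
			continue // already have it
		}
		if err := bq.AddBlock(b); err != nil {
			fmt.Println("add failed:", err)
			return
		}
	}
}

// fakeChain is a trivial implementation used only for the example.
type fakeChain struct{ height uint32 }

func (c *fakeChain) AddBlock(b *Block) error { c.height = b.Index; return nil }
func (c *fakeChain) BlockHeight() uint32     { return c.height }

func main() {
	c := &fakeChain{}
	drain(c, []*Block{{Index: 1}, {Index: 2}})
	fmt.Println(c.height) // 2
}
```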
Anna Shaleva
35501a281a core: remove untraceable blocks wrt StateSyncInterval 2021-08-10 13:47:10 +03:00
Roman Khimov
0a2bbf3c04
Merge pull request #2118 from nspcc-dev/neopt2
Networking improvements
2021-08-10 13:29:40 +03:00
Anna Shaleva
6ca7983be8 network: fix typo in error message 2021-08-10 11:00:39 +03:00
Anna Shaleva
76c687aaa1 config: add P2PStateExchangeExtensions and StateSyncInterval settings 2021-08-10 11:00:32 +03:00
Roman Khimov
1e0c70ecb0
Merge pull request #2117 from nspcc-dev/io-grow
Some io package improvements
2021-08-10 09:57:31 +03:00
Evgeniy Stratonikov
73e4040628 mpt: use BinWriter.Grow() instead of custom buffer
Also add benchmarks.

Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-10 09:34:05 +03:00
Evgeniy Stratonikov
c74de9a579 network: preallocate buffer for message
```
name            old time/op    new time/op    delta
MessageBytes-8     740ns ± 0%     684ns ± 2%   -7.58%  (p=0.000 n=10+10)

name            old alloc/op   new alloc/op   delta
MessageBytes-8    1.39kB ± 0%    1.20kB ± 0%  -13.79%  (p=0.000 n=10+10)

name            old allocs/op  new allocs/op  delta
MessageBytes-8      11.0 ± 0%      10.0 ± 0%   -9.09%  (p=0.000 n=10+10)
```

Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-10 09:33:52 +03:00
Evgeniy Stratonikov
dacf025dd9 io: add Grow to BinWriter
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-09 12:05:31 +03:00
Evgeniy Stratonikov
c69670c85b io: use a single slice for numbers
Slice takes 24 bytes of memory, while we really need only 9.
```
name                 old time/op    new time/op    delta
Transaction_Bytes-8     667ns ±17%     583ns ± 6%  -12.50%  (p=0.000 n=10+10)
GetVarSize-8            283ns ±11%     189ns ± 5%  -33.37%  (p=0.000 n=10+10)

name                 old alloc/op   new alloc/op   delta
Transaction_Bytes-8    1.01kB ± 0%    0.88kB ± 0%  -12.70%  (p=0.000 n=10+10)
GetVarSize-8             184B ± 0%       56B ± 0%  -69.57%  (p=0.000 n=10+10)

name                 old allocs/op  new allocs/op  delta
Transaction_Bytes-8      7.00 ± 0%      6.00 ± 0%  -14.29%  (p=0.000 n=10+10)
GetVarSize-8             3.00 ± 0%      2.00 ± 0%  -33.33%  (p=0.000 n=10+10)
```

Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-09 12:04:28 +03:00
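To illustrate the "only 9 bytes" point above: a fixed-size array covers the largest case (1 prefix byte plus an 8-byte value), so a writer can keep one array instead of a 24-byte slice header plus a heap-allocated backing slice. A standalone sketch with illustrative names (the prefixes follow the usual 0xfd/0xfe/0xff var-uint scheme):
```
package main

import (
	"encoding/binary"
	"fmt"
)

// numWriter keeps a single fixed array that is large enough for any
// var-uint encoding: 1 prefix byte + up to 8 value bytes.
type numWriter struct {
	uv [9]byte
}

// putVarUint encodes x into the internal array and returns the used prefix.
func (w *numWriter) putVarUint(x uint64) []byte {
	switch {
	case x < 0xfd:
		w.uv[0] = byte(x)
		return w.uv[:1]
	case x <= 0xffff:
		w.uv[0] = 0xfd
		binary.LittleEndian.PutUint16(w.uv[1:], uint16(x))
		return w.uv[:3]
	case x <= 0xffffffff:
		w.uv[0] = 0xfe
		binary.LittleEndian.PutUint32(w.uv[1:], uint32(x))
		return w.uv[:5]
	default:
		w.uv[0] = 0xff
		binary.LittleEndian.PutUint64(w.uv[1:], x)
		return w.uv[:9]
	}
}

func main() {
	var w numWriter
	fmt.Println(w.putVarUint(300)) // [253 44 1]
}
```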
Evgeniy Stratonikov
620295efe3 transaction: add benchmark for transaction serialization
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-09 12:01:38 +03:00
Evgeniy Stratonikov
23adb1e2fc state: optimize NEP17TransferLog.Append
Do not allocate a separate buffer for the transfer.
```
name                       old time/op    new time/op    delta
NEP17TransferLog_Append-8    58.8µs ± 3%    32.1µs ± 1%  -45.40%  (p=0.000 n=10+9)

name                       old alloc/op   new alloc/op   delta
NEP17TransferLog_Append-8     118kB ± 1%      44kB ± 3%  -63.00%  (p=0.000 n=9+10)

name                       old allocs/op  new allocs/op  delta
NEP17TransferLog_Append-8       901 ± 1%       513 ± 3%  -43.08%  (p=0.000 n=9+8)
```

Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-09 11:09:49 +03:00
Evgeniy Stratonikov
403a4b75de state/test: add benchmark for NEP17TransferLog.Append
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-09 11:09:49 +03:00
Evgeniy Stratonikov
b210a34b1e state: optimize NEP17Balance deserialization
```
BenchmarkNEP17BalanceFromBytes/stackitem-8         	 2402318	       503.3 ns/op	     208 B/op	      10 allocs/op
BenchmarkNEP17BalanceFromBytes/from_bytes-8        	 7623139	       160.7 ns/op	      72 B/op	       3 allocs/op
```

Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-09 11:09:49 +03:00
Evgeniy Stratonikov
3218b74ea5 state: optimize NEP17Balance serialization
Write to the slice directly and allow providing a pre-allocated buffer.
```
BenchmarkNEP17BalanceBytes/stackitem-8         	 1712475	       673.4 ns/op	     448 B/op	       9 allocs/op
BenchmarkNEP17BalanceBytes/bytes-8             	13422715	        75.80 ns/op	      32 B/op	       2 allocs/op
BenchmarkNEP17BalanceBytes/bytes,_prealloc-8   	25990371	        46.46 ns/op	      16 B/op	       1 allocs/op
```

Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-09 11:09:06 +03:00
Roman Khimov
7bb82f1f99 network: merge two loops in iteratePeersWithSendMsg, send to 2/3
Refactor the code and be fine with sending to just 2/3 of proper peers.
Previously it was an edge case, but it can also be a normal thing to do, as
broadcasting to everyone is obviously too expensive and excessive (hi, #608).

Baseline (four node, 10 workers):

RPS    8180.760 8137.822 7858.358 7820.011 8051.076 ≈ 8010   ± 2.04%
TPS    7819.831 7521.172 7519.023 7242.965 7426.000 ≈ 7506   ± 2.78%
CPU %    41.983   38.775   40.606   39.375   35.537 ≈   39.3 ± 6.15%
Mem MB 2947.189 2743.658 2896.688 2813.276 2863.108 ≈ 2853   ± 2.74%

Patched:

RPS    9714.567 9676.102 9358.609 9371.408 9301.372 ≈ 9484   ±  2.05% ↑ 18.40%
TPS    8809.796 8796.854 8534.754 8661.158 8426.162 ≈ 8646   ±  1.92% ↑ 15.19%
CPU %    44.980   45.018   33.640   29.645   43.830 ≈   39.4 ± 18.41% ↑  0.25%
Mem MB 2989.078 2976.577 2306.185 2351.929 2910.479 ≈ 2707   ± 12.80% ↓  5.12%

There is a nuance with this patch however. While typically it works the way
outlined above, sometimes it works like this:

RPS ≈ 6734.368
TPS ≈ 6299.332
CPU ≈ 25.552%
Mem ≈ 2706.046MB

And that's because the log looks like this:

DeltaTime, TransactionsCount, TPS
5014, 44212, 8817.710
5163, 49690, 9624.249
5166, 49523, 9586.334
5189, 49693, 9576.604
5198, 49339, 9491.920
5147, 49559, 9628.716
5192, 49680, 9568.567
5163, 49750, 9635.871
5183, 49189, 9490.450
5159, 49653, 9624.540
5167, 47945, 9279.079
5179, 2051, 396.022
5015, 4, 0.798
5004, 0, 0.000
5003, 0, 0.000
5003, 0, 0.000
5003, 0, 0.000
5003, 0, 0.000
5004, 0, 0.000
5003, 2925, 584.649
5040, 49099, 9741.865
5161, 49718, 9633.404
5170, 49228, 9521.857
5179, 49773, 9610.543
5167, 47253, 9145.152
5202, 49788, 9570.934
5177, 47704, 9214.603
5209, 46610, 8947.975
5249, 49156, 9364.831
5163, 18284, 3541.352
5072, 174, 34.306

On a network with 4 CNs and 1 RPC node there is a 1/256 probability that a
block won't be broadcast to the RPC node, so it won't see it until the ping
timeout kicks in. While it doesn't see a block it can't accept new incoming
transactions, so the bench basically gets stuck. To me that's an acceptable
trade-off, because normal networks are much larger than that and the effect of
this patch is way more important there, but still that's what we have and we
need to take it into account.
2021-08-06 21:10:34 +03:00
Roman Khimov
966a16e80e network: keep track of dead peers in iteratePeersWithSendMsg()
send() can return errStateMismatch, errGone and errBusy. errGone means the
peer is dead and won't ever be active again, so it makes no sense to retry
sends to it. errStateMismatch is technically "not yet ready", but we can't
wait for it either, as no one knows how long completing the handshake will
take. So only errBusy means we can retry.

So keep track of dead peers and adjust tries counting appropriately.
2021-08-06 21:10:34 +03:00
Roman Khimov
80f3ec2312 network: move peer filtering to getPeers()
It doesn't change much, we can't magically get more valid peers and if some
die while we're iterating we'd detect that by an error returned from send().
2021-08-06 21:10:34 +03:00
Roman Khimov
de6f4987f6 network: microoptimize iteratePeersWithSendMsg()
Now that s.getPeers() returns a slice we can use slice for `success` too, maps
are more expensive.
2021-08-06 21:10:34 +03:00
Roman Khimov
d51db20405 network: randomize peer iteration order
While iterating over a map in getPeers() is non-deterministic, it's not really
random enough for our purposes (usually maps have 2-3 iteration paths through
them), and we need to fill our peer queues more uniformly.

Believe it or not, but it does affect performance metrics, baseline (four
nodes, 10 workers):

RPS ≈  7791.675 7996.559 7834.504 7746.705 7891.614 ≈ 7852   ±  1.10%
TPS ≈  7241.497 7711.765 7520.211 7425.890 7334.443 ≈ 7447   ±  2.17%
CPU %    29.853   39.936   39.945   36.371   39.999 ≈   37.2 ± 10.57%
Mem MB 2749.635 2791.609 2828.610 2910.431 2863.344 ≈ 2829   ±  1.97%

Patched:

RPS    8180.760 8137.822 7858.358 7820.011 8051.076 ≈ 8010   ± 2.04% ↑ 2.01%
TPS    7819.831 7521.172 7519.023 7242.965 7426.000 ≈ 7506   ± 2.78% ↑ 0.79%
CPU %    41.983   38.775   40.606   39.375   35.537 ≈   39.3 ± 6.15% ↑ 5.65%
Mem MB 2947.189 2743.658 2896.688 2813.276 2863.108 ≈ 2853   ± 2.74% ↑ 0.85%
2021-08-06 21:10:34 +03:00
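A minimal sketch of getting a uniformly shuffled peer list out of a map, as in the commit above (names are illustrative, not the actual neo-go code):
```
package main

import (
	"fmt"
	"math/rand"
)

type peer struct{ addr string }

// shuffledPeers copies map values into a slice and shuffles it so that
// every peer has an equal chance of being tried first.
func shuffledPeers(m map[string]*peer) []*peer {
	list := make([]*peer, 0, len(m))
	for _, p := range m {
		list = append(list, p)
	}
	rand.Shuffle(len(list), func(i, j int) {
		list[i], list[j] = list[j], list[i]
	})
	return list
}

func main() {
	m := map[string]*peer{
		"a": {addr: "10.0.0.1:20333"},
		"b": {addr: "10.0.0.2:20333"},
		"c": {addr: "10.0.0.3:20333"},
	}
	for _, p := range shuffledPeers(m) {
		fmt.Println(p.addr)
	}
}
```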
Roman Khimov
b55c75d59d network: hide Peers, make it return a slice
Slice is a bit more efficient, we don't need a map for Peers() users and it's
not really interesting to outside users, so better hide this method.
2021-08-06 21:10:34 +03:00
Roman Khimov
119b4200ac network: add fail-fast route for tx double processing
When a transaction spreads through the network, many nodes are likely to get
it at roughly the same time, and they will also rebroadcast it at roughly the
same time. As we have a number of peers, it's quite likely that we'd get an
Inv with the same transaction from multiple peers simultaneously. We will ask
them for this transaction (independently!) and again we're likely to get it at
roughly the same time. So we can easily end up with multiple threads
processing the same transaction. Only one will succeed, but we can actually
easily avoid doing it in the first place, saving some CPU cycles for other
things.

Notice that we can't do it _before_ receiving a transaction because nothing
guarantees that the peer will respond to our transaction request, so
communication overhead is unavoidable at the moment, but saving on processing
already gives quite interesting results.

Baseline, four nodes with 10 workers:

RPS    7176.784 7014.511 6139.663 7191.280 7080.852 ≈ 6921   ± 5.72%
TPS    6945.409 6562.756 5927.050 6681.187 6821.794 ≈ 6588   ± 5.38%
CPU %    44.400   43.842   40.418   49.211   49.370 ≈   45.4 ± 7.53%
Mem MB 2693.414 2640.602 2472.007 2731.482 2707.879 ≈ 2649   ± 3.53%

Patched:

RPS ≈  7791.675 7996.559 7834.504 7746.705 7891.614 ≈ 7852   ±  1.10% ↑ 13.45%
TPS ≈  7241.497 7711.765 7520.211 7425.890 7334.443 ≈ 7447   ±  2.17% ↑ 13.04%
CPU %    29.853   39.936   39.945   36.371   39.999 ≈   37.2 ± 10.57% ↓ 18.06%
Mem MB 2749.635 2791.609 2828.610 2910.431 2863.344 ≈ 2829   ±  1.97% ↑  6.80%
2021-08-06 21:10:25 +03:00
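A hedged sketch of the fail-fast idea above: remember which transaction hashes are already being processed and drop duplicates before doing any expensive work (types and names here are illustrative):
```
package main

import (
	"fmt"
	"sync"
)

type Uint256 [32]byte

// txInFlight tracks transactions that are currently being processed so that
// concurrent handlers can bail out early instead of verifying the same tx twice.
type txInFlight struct {
	mu  sync.Mutex
	set map[Uint256]bool
}

func newTxInFlight() *txInFlight {
	return &txInFlight{set: make(map[Uint256]bool)}
}

// begin reports whether the caller should process this hash; the first caller
// wins, everyone else gets false.
func (t *txInFlight) begin(h Uint256) bool {
	t.mu.Lock()
	defer t.mu.Unlock()
	if t.set[h] {
		return false
	}
	t.set[h] = true
	return true
}

// done removes the hash once processing has finished (successfully or not).
func (t *txInFlight) done(h Uint256) {
	t.mu.Lock()
	delete(t.set, h)
	t.mu.Unlock()
}

func main() {
	t := newTxInFlight()
	var h Uint256
	fmt.Println(t.begin(h)) // true, we process it
	fmt.Println(t.begin(h)) // false, someone is already on it
	t.done(h)
}
```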
Roman Khimov
7fc153ed2a network: only ask mempool for intersections with received Inv
Most of the time on a healthy network we see new transactions appearing that
are not present in the mempool. Once they get into the mempool we don't ask
for them again when some other peer sends an Inv with them. Then these
transactions are usually added into a block, removed from the mempool and no
one actually sends them to us again. Some stale nodes can do that, but it's
not very likely to happen.

At the same time, on the receiving end it's quite expensive to do a full chain
HasTransaction() query, so if we can avoid doing that it's always good.
Technically this allows resending an old transaction that will be re-requested
and an attempt to add it to the mempool will be made. But it'll inevitably
fail because the same HasTransaction() check is done there too. One can try to
maliciously flood the node with stale transactions, but that doesn't differ
from flooding it with any other invalid transactions, so no new attack vector
is added.

Baseline, 4 nodes with 10 workers:

RPS    6902.296 6465.662 6856.044 6785.515 6157.024 ≈ 6633   ± 4.26%
TPS    6468.431 6218.867 6610.565 6288.596 5790.556 ≈ 6275   ± 4.44%
CPU %    50.231   42.925   49.481   48.396   42.662 ≈   46.7 ± 7.01%
Mem MB 2856.841 2684.103 2756.195 2733.485 2422.787 ≈ 2691   ± 5.40%

Patched:

RPS    7176.784 7014.511 6139.663 7191.280 7080.852 ≈ 6921   ± 5.72% ↑ 4.34%
TPS    6945.409 6562.756 5927.050 6681.187 6821.794 ≈ 6588   ± 5.38% ↑ 4.99%
CPU %    44.400   43.842   40.418   49.211   49.370 ≈   45.4 ± 7.53% ↓ 2.78%
Mem MB 2693.414 2640.602 2472.007 2731.482 2707.879 ≈ 2649   ± 3.53% ↓ 1.56%
2021-08-06 20:53:02 +03:00
Roman Khimov
f78bd6474f network: handle incoming message in a separate goroutine
Network communication takes time. Handling some messages (like a transaction)
also takes time. We can overlap these by making the handler a separate
goroutine, so while one message is being handled the receiver can already get
and parse the next one.

It doesn't improve metrics a lot, but I still think it makes sense, and in
some scenarios this can be even more beneficial than shown here.

e41fc2fd1b, 4 nodes, 10 workers

RPS    6732.979 6396.160 6759.624 6246.398 6589.841 ≈ 6545   ± 3.02%
TPS    6491.062 5984.190 6275.652 5867.477 6360.797 ≈ 6196   ± 3.77%
CPU %    42.053   43.515   44.768   40.344   44.112 ≈   43.0 ± 3.69%
Mem MB 2564.130 2744.236 2636.267 2589.505 2765.926 ≈ 2660   ± 3.06%

Patched:

RPS    6902.296 6465.662 6856.044 6785.515 6157.024 ≈ 6633   ± 4.26% ↑ 1.34%
TPS    6468.431 6218.867 6610.565 6288.596 5790.556 ≈ 6275   ± 4.44% ↑ 1.28%
CPU %    50.231   42.925   49.481   48.396   42.662 ≈   46.7 ± 7.01% ↑ 8.60%
Mem MB 2856.841 2684.103 2756.195 2733.485 2422.787 ≈ 2691   ± 5.40% ↑ 1.17%
2021-08-06 19:37:37 +03:00
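The receive-loop change above boils down to handing the decoded message off to a goroutine so the reader can immediately go back to parsing. A simplified sketch; the message type and handler are placeholders:
```
package main

import (
	"fmt"
	"sync"
	"time"
)

type message struct{ id int }

// handle simulates message processing that takes a while.
func handle(m message, wg *sync.WaitGroup) {
	defer wg.Done()
	time.Sleep(10 * time.Millisecond)
	fmt.Println("handled", m.id)
}

func main() {
	incoming := make(chan message, 3)
	for i := 0; i < 3; i++ {
		incoming <- message{id: i}
	}
	close(incoming)

	var wg sync.WaitGroup
	// The "receiver" loop: instead of handling the message inline, spawn a
	// goroutine so the next message can be read and parsed right away.
	for m := range incoming {
		wg.Add(1)
		go handle(m, &wg)
	}
	wg.Wait()
}
```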
Roman Khimov
b989504d74
Merge pull request #2108 from nspcc-dev/optimize-mpt
Some allocation optimizations
2021-08-06 14:51:10 +03:00
Evgeniy Stratonikov
bd2b1a0521 mpt: add Size method to trie nodes
Knowing the serialized size of the node is useful for
preallocating the byte slice in advance.

Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-06 12:01:16 +03:00
Evgeniy Stratonikov
db80ef28df mpt: move empty hash node in a separate type
We use them quite frequently (consider children for a new branch
node) and it is better to get rid of unneeded allocations.

Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-06 12:01:16 +03:00
Evgeniy Stratonikov
f02d8b4ec4 stackitem: serialize integers to the pre-allocated slice
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-06 11:59:24 +03:00
Evgeniy Stratonikov
291a29af1e *: do not use WriteArray for frequently used items
`WriteArray` involves reflection, so it makes sense to optimize the
serialization of transactions and application logs, which are serialized
constantly. Adding a case to the type switch in `WriteArray` is not an
option, because we don't want new dependencies for the `io` package.

```
name                          old time/op    new time/op    delta
AppExecResult_EncodeBinary-8     852ns ± 3%     656ns ± 2%  -22.94%  (p=0.000 n=10+9)

name                          old alloc/op   new alloc/op   delta
AppExecResult_EncodeBinary-8      448B ± 0%      376B ± 0%  -16.07%  (p=0.000 n=10+10)

name                          old allocs/op  new allocs/op  delta
AppExecResult_EncodeBinary-8      7.00 ± 0%      5.00 ± 0%  -28.57%  (p=0.000 n=10+10)
```

```
name                 old time/op    new time/op    delta
Transaction_Bytes-8    1.29µs ± 3%    0.76µs ± 5%  -41.52%  (p=0.000 n=9+10)

name                 old alloc/op   new alloc/op   delta
Transaction_Bytes-8    1.21kB ± 0%    1.01kB ± 0%  -16.56%  (p=0.000 n=10+10)

name                 old allocs/op  new allocs/op  delta
Transaction_Bytes-8      12.0 ± 0%       7.0 ± 0%  -41.67%  (p=0.000 n=10+10)
```

Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-06 11:59:20 +03:00
Roman Khimov
95e1f5f77b
Merge pull request #2113 from nspcc-dev/optimize-witness-hashing
core: don't recalculate witness script hash
2021-08-06 11:57:54 +03:00
Roman Khimov
f9663a97a1 network: fix Ping messages
* NewPing() accepts the block index first and the nonce second.
 * Block height should be used, it'll be important for state exchanging nodes.
2021-08-06 11:28:09 +03:00
Roman Khimov
39f874d03f core: don't recalculate witness script hash
We know it already, but with the current loading code the VM will hash it once
more. It doesn't help a lot, but still it costs nothing to avoid this
overhead.

name             old time/op    new time/op    delta
VerifyWitness-8    93.4µs ± 3%    92.7µs ± 2%    ~     (p=0.353 n=10+10)

name             old alloc/op   new alloc/op   delta
VerifyWitness-8    3.43kB ± 0%    3.40kB ± 0%  -0.70%  (p=0.000 n=9+9)

name             old allocs/op  new allocs/op  delta
VerifyWitness-8      67.0 ± 0%      66.0 ± 0%  -1.49%  (p=0.000 n=10+10)
2021-08-06 11:25:09 +03:00
Evgeniy Stratonikov
43ee671f36 mpt: do not allocate NodeObject for serialization
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-06 10:28:19 +03:00
Roman Khimov
e41fc2fd1b
Merge pull request #2111 from nspcc-dev/drop-refuel
native: drop Refuel method from GAS
2021-08-05 16:42:29 +03:00
Roman Khimov
d6bd6b6888 native: drop Refuel method from GAS
It can be used to attack the network (amplifying DOS), so it's broken
beyond repair. This reverts ac601601c1.

See also neo-project/neo#2560 and neo-project/neo#2561.
2021-08-05 10:27:13 +03:00
Roman Khimov
1b186e046b network: use optimized decoder for transactions
NewTransactionFromBytes() works a bit faster and uses less memory.
2021-08-04 23:49:07 +03:00
Roman Khimov
892c9785ad transaction: don't allocate new buffer to calculate hash
We can write directly to hash.Hash.

name               old time/op    new time/op    delta
DecodeBinary-8       2.89µs ± 3%    2.82µs ± 5%     ~     (p=0.052 n=10+10)
DecodeJSON-8         13.0µs ± 1%    12.8µs ± 1%   -1.54%  (p=0.002 n=10+8)
DecodeFromBytes-8    2.37µs ± 1%    2.25µs ± 5%   -5.25%  (p=0.000 n=9+10)

name               old alloc/op   new alloc/op   delta
DecodeBinary-8       1.75kB ± 0%    1.53kB ± 0%  -12.79%  (p=0.000 n=10+10)
DecodeJSON-8         3.49kB ± 0%    3.26kB ± 0%   -6.42%  (p=0.000 n=10+10)
DecodeFromBytes-8    1.37kB ± 0%    1.14kB ± 0%  -16.37%  (p=0.000 n=10+10)

name               old allocs/op  new allocs/op  delta
DecodeBinary-8         26.0 ± 0%      23.0 ± 0%  -11.54%  (p=0.000 n=10+10)
DecodeJSON-8           58.0 ± 0%      55.0 ± 0%   -5.17%  (p=0.000 n=10+10)
DecodeFromBytes-8      18.0 ± 0%      15.0 ± 0%  -16.67%  (p=0.000 n=10+10)
2021-08-04 23:43:20 +03:00
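The hashing change above relies on hash.Hash implementing io.Writer, so the serializer can write straight into the hasher instead of filling an intermediate buffer first. A standalone sketch with made-up fields:
```
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// hashPayload writes the fields straight into the SHA-256 state via its
// io.Writer interface; no intermediate byte buffer is allocated.
func hashPayload(version uint32, script []byte) [32]byte {
	h := sha256.New()
	var tmp [4]byte
	binary.LittleEndian.PutUint32(tmp[:], version)
	h.Write(tmp[:])
	h.Write(script)
	var res [32]byte
	copy(res[:], h.Sum(nil))
	return res
}

func main() {
	fmt.Printf("%x\n", hashPayload(0, []byte{0x51}))
}
```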
Roman Khimov
6d10cdc2f6 transaction: avoid ReadArray()
Reflection adds some real cost to it:

name               old time/op    new time/op    delta
DecodeBinary-8       3.14µs ± 5%    2.89µs ± 3%   -8.19%  (p=0.000 n=10+10)
DecodeJSON-8         12.6µs ± 3%    13.0µs ± 1%   +3.77%  (p=0.000 n=10+10)
DecodeFromBytes-8    2.73µs ± 2%    2.37µs ± 1%  -13.12%  (p=0.000 n=9+9)

name               old alloc/op   new alloc/op   delta
DecodeBinary-8       1.82kB ± 0%    1.75kB ± 0%   -3.95%  (p=0.000 n=10+10)
DecodeJSON-8         3.49kB ± 0%    3.49kB ± 0%     ~     (all equal)
DecodeFromBytes-8    1.44kB ± 0%    1.37kB ± 0%   -5.00%  (p=0.000 n=10+10)

name               old allocs/op  new allocs/op  delta
DecodeBinary-8         29.0 ± 0%      26.0 ± 0%  -10.34%  (p=0.000 n=10+10)
DecodeJSON-8           58.0 ± 0%      58.0 ± 0%     ~     (all equal)
DecodeFromBytes-8      21.0 ± 0%      18.0 ± 0%  -14.29%  (p=0.000 n=10+10)
2021-08-04 23:34:57 +03:00
Roman Khimov
d2732a71d8 transaction: don't overwrite error and witnesses length check
ReadArray() can return some error and we shouldn't overwrite it. At the same
time, limiting ReadArray() to the number of Signers can make it return the
wrong error if the number of witnesses is actually bigger than the number of
signers, so use MaxAttributes.
2021-08-04 23:17:50 +03:00
Roman Khimov
d487b54612 transaction: don't recalculate size when decoding from buffer
name               old time/op    new time/op    delta
DecodeBinary-8       3.17µs ± 6%    3.14µs ± 5%     ~     (p=0.579 n=10+10)
DecodeJSON-8         12.8µs ± 3%    12.6µs ± 3%     ~     (p=0.105 n=10+10)
DecodeFromBytes-8    3.45µs ± 4%    2.73µs ± 2%  -20.70%  (p=0.000 n=10+9)

name               old alloc/op   new alloc/op   delta
DecodeBinary-8       1.82kB ± 0%    1.82kB ± 0%     ~     (all equal)
DecodeJSON-8         3.49kB ± 0%    3.49kB ± 0%     ~     (all equal)
DecodeFromBytes-8    1.82kB ± 0%    1.44kB ± 0%  -21.05%  (p=0.000 n=10+10)

name               old allocs/op  new allocs/op  delta
DecodeBinary-8         29.0 ± 0%      29.0 ± 0%     ~     (all equal)
DecodeJSON-8           58.0 ± 0%      58.0 ± 0%     ~     (all equal)
DecodeFromBytes-8      29.0 ± 0%      21.0 ± 0%  -27.59%  (p=0.000 n=10+10)
2021-08-04 23:13:58 +03:00
Roman Khimov
64c780ad7a native: optimize totalSupply operations during token burn/mint
We burn GAS in OnPersist for every transaction so some buffer reuse here is
quite natural.

This also doesn't change a lot in the overall TPS picture, maybe adding some
1%.
2021-08-03 17:59:38 +03:00
Roman Khimov
dede4fa7b1 state: convert NEO balance to stack item directly
Avoid calling Append(), which would reallocate the slice; we know the length
of the slice exactly.
2021-08-03 17:59:38 +03:00
Roman Khimov
5c65d33439 native: move required balance check to token contracts
This duplicates the check, but deduplicates the error path. The check forced
double balance deserialization, which is quite a costly operation, so we
better do it once.

It's hardly noticeable in TPS metrics though, maybe some 1-2%.
2021-08-03 17:59:38 +03:00
Roman Khimov
85936de254 vm: don't create reference counter when it's not needed
* invocation stack doesn't need refcounting
 * exception stack doesn't need refcounting
 * evaluation stack always has VM-level refcounter
2021-08-02 22:38:41 +03:00
Roman Khimov
2c2ccdca74 opcode: optimize IsValid
Map access costs much more than array access.

name       old time/op  new time/op  delta
IsValid-8  17.6ns ± 2%   1.1ns ± 2%  -93.68%  (p=0.008 n=5+5)
2021-08-02 21:46:19 +03:00
Roman Khimov
3c1325035e fee: use array for opcodes
Use less memory and have faster access.

name       old time/op  new time/op  delta
Opcode1-8  22.4ns ± 6%   3.0ns ± 6%  -86.63%  (p=0.000 n=10+10)
2021-08-02 20:18:33 +03:00
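A minimal illustration of the map-to-array switch described above: opcodes are single bytes, so a 256-entry array gives constant-time, allocation-free lookups. The opcode values and prices below are illustrative only:
```
package main

import "fmt"

type Opcode byte

const (
	PUSH1 Opcode = 0x11
	ADD   Opcode = 0x9e
)

// feeTable is indexed by the opcode byte itself; unknown opcodes stay at 0.
var feeTable = [256]int64{
	PUSH1: 1,
	ADD:   8,
}

// price is a plain array index, no hashing involved.
func price(op Opcode) int64 {
	return feeTable[op]
}

func main() {
	fmt.Println(price(ADD)) // 8
}
```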
Roman Khimov
dfc514eda0
Merge pull request #2102 from nspcc-dev/store4
Improve (*MemCachedStore).Persist
2021-08-02 20:10:05 +03:00
Roman Khimov
82f481e143
Merge pull request #2105 from nspcc-dev/json-restrict
native/std: restrict amount of items in JSON deserialization
2021-08-02 19:41:54 +03:00
Evgeniy Stratonikov
bdb9748c1b native/std: restrict amount of items in JSON deserialization
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-02 18:57:47 +03:00
Roman Khimov
f8174ca64c core: ensure data logged is from persistent store
Using bc.dao here is wrong, it can contain unpersisted data.
2021-08-02 16:33:09 +03:00
Roman Khimov
8277b7a19a core: don't spawn goroutine for persist function
It doesn't make any sense: in some situations it leads to a number of
goroutines being created that will Persist one after another (as we can't
Persist concurrently). We can manage it better in a single thread.

This doesn't change performance in any way, but somewhat reduces resource
consumption. It was tested with neo-bench (single node, 10 workers, LevelDB)
on two machines and with block dump processing (RC4 testnet up to 62800 with
VerifyBlocks set to false) on i7-8565U.

Reference (b9be892bf9):

Ryzen 9 5950X:
RPS     27747.349 27407.726 27520.210  ≈ 27558   ± 0.63%
TPS     26992.010 26993.468 27010.966  ≈ 26999   ± 0.04%
CPU %      28.928    28.096    29.105  ≈    28.7 ± 1.88%
Mem MB    760.385   726.320   756.118  ≈   748   ± 2.48%

Core i7-8565U:
RPS     7783.229 7628.409 7542.340  ≈ 7651   ± 1.60%
TPS     7708.436 7607.397 7489.459  ≈ 7602   ± 1.44%
CPU %     74.899   71.020   72.697  ≈   72.9 ± 2.67%
Mem MB   438.047  436.967  416.350  ≈  430   ± 2.84%

DB restore:
real    0m20.838s 0m21.895s 0m21.794s  ≈ 21.51 ± 2.71%
user    0m39.091s 0m40.565s 0m41.493s  ≈ 40.38 ± 3.00%
sys      0m3.184s  0m2.923s  0m3.062s  ≈  3.06 ± 4.27%

Patched:

Ryzen 9 5950X:
RPS     27636.957 27246.911 27462.036  ≈ 27449   ±  0.71%  ↓ 0.40%
TPS     27003.672 26993.468 27011.696  ≈ 27003   ±  0.03%  ↑ 0.01%
CPU %      28.562    28.475    28.012  ≈    28.3 ±  1.04%  ↓ 1.39%
Mem MB    627.007   648.110   794.895  ≈   690   ± 13.25%  ↓ 7.75%

Core i7-8565U:
RPS     7497.210 7527.797 7897.532  ≈ 7641   ±  2.92%  ↓ 0.13%
TPS     7461.128 7482.678 7841.723  ≈ 7595   ±  2.81%  ↓ 0.09%
CPU %     71.559   73.423   69.005  ≈   71.3 ±  3.11%  ↓ 2.19%
Mem MB   393.090  395.899  482.264  ≈  424   ± 11.96%  ↓ 1.40%

DB restore:
real    0m20.773s 0m21.583s 0m20.522s  ≈ 20.96 ±  2.65%  ↓ 2.56%
user    0m39.322s 0m42.268s 0m38.626s  ≈ 40.07 ±  4.82%  ↓ 0.77%
sys      0m3.006s  0m3.597s  0m3.042s  ≈  3.22 ± 10.31%  ↑ 5.23%
2021-08-02 16:33:00 +03:00
Roman Khimov
b9be892bf9 storage: allow accessing MemCachedStore during Persist
Persist by definition doesn't change the visible state of MemCachedStore: all
KV pairs that were accessible via it before Persist remain accessible after
Persist. The only thing it does is flush the current set of KV pairs from
memory to the persistent store. To do that it needs read-only access to the
current KV pair set, but technically it then replaces the maps, so we have to
take a full write lock, which makes MemCachedStore inaccessible for the
duration of Persist. And Persist can take a lot of time, as it means disk
access for regular DBs.

What we do here is create new in-memory maps for MemCachedStore before
flushing the old ones to the persistent store. Then a fake persistent store is
created, which actually is a MemCachedStore with the old maps, so it has
exactly the same visible state. This Store is never accessed for writes, so we
can read it without taking any internal locks, and at the same time we no
longer need write locks for the original MemCachedStore since we're not using
it. All of this makes it possible to use MemCachedStore as usual: reads are
handled by going down to whatever level is needed and writes are handled by
the new maps. So while Persist for (*Blockchain).dao does its most
time-consuming work we can process other blocks (reading data for transactions
and persisting storeBlock caches to (*Blockchain).dao).

The change was tested for performance with neo-bench (single node, 10 workers,
LevelDB) on two machines and block dump processing (RC4 testnet up to 62800
with VerifyBlocks set to false) on i7-8565U.

Reference results (bbe4e9cd7b):

Ryzen 9 5950X:
RPS     23616.969 22817.086 23222.378  ≈ 23218   ± 1.72%
TPS     23047.316 22608.578 22735.540  ≈ 22797   ± 0.99%
CPU %      23.434    25.553    23.848  ≈    24.3 ± 4.63%
Mem MB    600.636   503.060   582.043  ≈   562   ± 9.22%

Core i7-8565U:
RPS     6594.007 6499.501 6572.902  ≈ 6555   ± 0.76%
TPS     6561.680 6444.545 6510.120  ≈ 6505   ± 0.90%
CPU %     58.452   60.568   62.474    ≈ 60.5 ± 3.33%
Mem MB   234.893  285.067  269.081   ≈ 263   ± 9.75%

DB restore:
real    0m22.237s 0m23.471s 0m23.409s  ≈ 23.04 ± 3.02%
user    0m35.435s 0m38.943s 0m39.247s  ≈ 37.88 ± 5.59%
sys      0m3.085s  0m3.360s  0m3.144s  ≈  3.20 ± 4.53%

After the change:

Ryzen 9 5950X:
RPS     27747.349 27407.726 27520.210  ≈ 27558   ± 0.63%  ↑ 18.69%
TPS     26992.010 26993.468 27010.966  ≈ 26999   ± 0.04%  ↑ 18.43%
CPU %      28.928    28.096    29.105  ≈    28.7 ± 1.88%  ↑ 18.1%
Mem MB    760.385   726.320   756.118  ≈   748   ± 2.48%  ↑ 33.10%

Core i7-8565U:
RPS     7783.229 7628.409 7542.340  ≈ 7651   ± 1.60%  ↑ 16.72%
TPS     7708.436 7607.397 7489.459  ≈ 7602   ± 1.44%  ↑ 16.85%
CPU %     74.899   71.020   72.697  ≈   72.9 ± 2.67%  ↑ 20.50%
Mem MB   438.047  436.967  416.350  ≈  430   ± 2.84%  ↑ 63.50%

DB restore:
real    0m20.838s 0m21.895s 0m21.794s  ≈ 21.51 ± 2.71%  ↓ 6.64%
user    0m39.091s 0m40.565s 0m41.493s  ≈ 40.38 ± 3.00%  ↑ 6.60%
sys      0m3.184s  0m2.923s  0m3.062s  ≈  3.06 ± 4.27%  ↓ 4.38%

It obviously uses more memory now and utilizes the CPU more aggressively, but
at the same time it allows improving all relevant metrics and finally reaching
a situation where we process 50K transactions in less than a second on Ryzen 9
5950X (going higher than 25K TPS). The other observation is a much more stable
block time, on Ryzen 9 it's as close to 1 second as it could be.
2021-08-02 16:33:00 +03:00
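A hedged sketch of the map-swapping trick described above: take the write lock only for the cheap pointer swap, then flush the detached maps without blocking readers or writers. This is a simplification with made-up names, not the actual MemCachedStore code:
```
package main

import (
	"fmt"
	"sync"
)

type memCached struct {
	mu       sync.RWMutex
	mem      map[string][]byte
	flushing map[string][]byte // old map being persisted, read-only
	next     map[string][]byte // simulates the persistent layer
}

// Get reads through the cache layers down to the "persistent" one, as usual.
func (s *memCached) Get(k string) ([]byte, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	if v, ok := s.mem[k]; ok {
		return v, true
	}
	if v, ok := s.flushing[k]; ok {
		return v, true
	}
	v, ok := s.next[k]
	return v, ok
}

func (s *memCached) Put(k string, v []byte) {
	s.mu.Lock()
	s.mem[k] = v
	s.mu.Unlock()
}

// Persist swaps in a fresh map under a short write lock and then flushes the
// old one while readers/writers keep using the store.
func (s *memCached) Persist() int {
	s.mu.Lock()
	old := s.mem
	s.mem = make(map[string][]byte)
	s.flushing = old
	s.mu.Unlock()

	// Slow part (disk access for real DBs) happens without the write lock.
	for k, v := range old {
		s.next[k] = v
	}

	s.mu.Lock()
	s.flushing = nil
	s.mu.Unlock()
	return len(old)
}

func main() {
	s := &memCached{mem: map[string][]byte{}, next: map[string][]byte{}}
	s.Put("a", []byte{1})
	fmt.Println(s.Persist()) // 1
	_, ok := s.Get("a")
	fmt.Println(ok) // true, still visible after Persist
}
```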
Roman Khimov
3cebd2b129 interop: use non-Cached wrapped DAO
Cached only caches NEP-17 tracking data now, it makes no sense here.
2021-07-30 15:45:17 +03:00
Roman Khimov
fa7314ea90 dao: drop dropNEP17Cache from Cached
It's not used now.
2021-07-30 15:45:17 +03:00
Roman Khimov
49be753850 core: spread storeBlock actions to three goroutines
Block processing consists of:
 * saving block/transactions to the DB
 * executing blocks/transactions
 * processing notifications/saving AERs
 * updating MPT
 * atomically updating Blockchain state

Of these the first one is completely independent of the others and can easily
be done in a separate routine. The third one technically depends on the
second, it just doesn't have data until something is executed. At the same
time it doesn't affect future executions in any way, so we can offload
AER/notification processing to a separate goroutine (while the main thread
proceeds with other transactions).

The MPT update depends on all executions, so it can't be offloaded, but it can
be done concurrently with AER processing. And only the last thing actually
needs all previous ones to be finished, so it's a natural synchronization
point.

So we spawn two additional routines and let the main one execute transactions
and update the MPT as fast as it can. While technically all of these routines
could share a single DAO (they are working with different KV sets),
benchmarking shows that using separate DAOs and then persisting them to the
lower one actually works about 7-8% better. At the same time we can simplify
the DAOs used: the Cached one is only relevant for AER processing because it
caches NEP-17 tracking data, everything else can do just fine with Simple.

The change was tested for performance with neo-bench (single node, 10 workers,
LevelDB) on two machines and block dump processing (RC4 testnet up to 50825
with VerifyBlocks set to false) on i7-8565U. neo-bench creates huge blocks
with lots of transactions while RC4 dump mostly consists of empty blocks.

Reference results (06c3dda5d1):

Ryzen 9 5950X:
RPS ≈ 20059.569   21186.328   20158.983   ≈ 20468   ±  3.05%
TPS ≈ 19544.993   20585.450   19658.338   ≈ 19930   ±  2.86%
CPU ≈    18.682%     23.877%     22.852%  ≈    21.8 ± 12.62%
Mem ≈   618.981MB   559.246MB   541.539MB ≈   573   ±  7.08%

Core i7-8565U:
RPS ≈ 5927.082   6526.739   6372.115   ≈ 6275   ± 4.96%
TPS ≈ 5899.531   6477.187   6329.515   ≈ 6235   ± 4.81%
CPU ≈   56.346%    61.955%    58.125%  ≈   58.8 ± 4.87%
Mem ≈  212.191MB  224.974MB  205.479MB ≈  214   ± 4.62%

DB restore:
real    0m12.683s 0m13.222s 0m13.382s  ≈ 13.096 ±  2.80%
user    0m18.501s 0m19.163s 0m19.489s  ≈ 19.051 ±  2.64%
sys      0m1.404s  0m1.396s  0m1.666s  ≈  1.489 ± 10.32%

After the change:

Ryzen 9 5950X:
RPS ≈ 23056.899   22822.015   23006.543   ≈ 22962   ± 0.54%
TPS ≈ 22594.785   22292.071   22800.857   ≈ 22562   ± 1.13%
CPU ≈    24.262%     23.185%     25.921%  ≈    24.5 ± 5.65%
Mem ≈   614.254MB   613.204MB   555.491MB ≈   594   ± 5.66%

Core i7-8565U:
RPS ≈ 6378.702   6423.927   6363.788      ≈ 6389   ± 0.49%
TPS ≈ 6327.072   6372.552   6311.179      ≈ 6337   ± 0.50%
CPU ≈   57.599%    58.622%    59.737%     ≈   58.7 ± 1.82%
Mem ≈  198.697MB  188.746MB  200.235MB    ≈  196   ± 3.18%

DB restore:
real    0m13.576s 0m13.334s 0m12.757s  ≈  13.222 ±  3.18%
user    0m19.113s 0m19.490s 0m20.197s  ≈  19.600 ±  2.81%
sys      0m2.211s  0m1.558s  0m1.559s  ≈   1.776 ± 21.21%

On Ryzen 9 we've got 12% better RPS, 13% better TPS with 12% CPU and 3% RAM
more used. Core i7-8565U changes don't seem to be statistically significant:
1.8% more RPS, 1.6% more TPS with about the same CPU and 8.5% less RAM
used. It also is 1% worse in DB restore time.

The result is somewhat expected: on a powerful machine with lots of spare
cores we get 10%+ better results, while on an average resource-constrained
laptop it doesn't change much (the machine is already saturated). Overall,
this seems to be worthwhile.
2021-07-30 15:45:17 +03:00
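A simplified sketch of the three-way split described above, using channels to feed the two helper goroutines and a WaitGroup as the final synchronization point. It is not the actual storeBlock code, just the shape of the idea:
```
package main

import (
	"fmt"
	"sync"
)

type tx struct{ id int }

func storeBlock(txs []tx) {
	var wg sync.WaitGroup
	blockCh := make(chan tx, len(txs)) // raw block/tx storage
	aerCh := make(chan tx, len(txs))   // AER/notification processing

	wg.Add(2)
	go func() { // routine 1: save block/transactions to the DB
		defer wg.Done()
		for t := range blockCh {
			_ = t // write to its own DAO here
		}
	}()
	go func() { // routine 2: process notifications / save AERs
		defer wg.Done()
		for t := range aerCh {
			_ = t // write to its own DAO here
		}
	}()

	// Main routine: execute transactions and update the MPT as fast as it can.
	for _, t := range txs {
		blockCh <- t
		// ... execute t ...
		aerCh <- t
		// ... update MPT with the execution results ...
	}
	close(blockCh)
	close(aerCh)

	wg.Wait() // natural synchronization point before the atomic state update
	fmt.Println("block stored,", len(txs), "transactions")
}

func main() {
	storeBlock([]tx{{1}, {2}, {3}})
}
```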
Roman Khimov
06c3dda5d1
Merge pull request #2093 from nspcc-dev/states-exchange/drop-nep17-balance-state
core: implement dynamic NEP17 balances tracking
2021-07-29 19:08:42 +03:00
Evgeniy Stratonikov
283173bb9d wallet: use named constants in Seek
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-07-29 17:11:50 +03:00
Evgeniy Stratonikov
a429aa3e68 wallet: truncate file when writing
If the wallet size decreases, we need to remove trailing garbage if it
exists. This can happen when removing an account or reading a pretty-printed
wallet. It doesn't affect our CLI (we decode only the file prefix), but
it is nice to always have a valid JSON file.

Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-07-29 17:11:49 +03:00
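The truncation fix above can be illustrated with the standard library alone: marshal first (matching the "marshal before writing" commit below), write the bytes, then cut the file down to exactly what was written so a previously longer file leaves no trailing garbage. A sketch, not the wallet code itself:
```
package main

import (
	"encoding/json"
	"log"
	"os"
)

type wallet struct {
	Accounts []string `json:"accounts"`
}

// save marshals first, writes at offset 0 and truncates to the new length.
func save(path string, w *wallet) error {
	data, err := json.Marshal(w)
	if err != nil {
		return err
	}
	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o644)
	if err != nil {
		return err
	}
	defer f.Close()
	if _, err := f.WriteAt(data, 0); err != nil {
		return err
	}
	// Remove trailing garbage left over from a previously bigger wallet.
	return f.Truncate(int64(len(data)))
}

func main() {
	if err := save("wallet.json", &wallet{Accounts: []string{"one"}}); err != nil {
		log.Fatal(err)
	}
}
```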
Evgeniy Stratonikov
8f196c8222 wallet: marshal before writing to file
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-07-29 16:07:36 +03:00
Anna Shaleva
a30e48ff90 core: increment the DB version
The DB schema has been changed.
2021-07-29 10:23:13 +03:00
Anna Shaleva
e8bed184d5 core: implement dynamic NEP17 balances tracking
Request NEP17 balances from a set of NEP17 contracts instead of getting
them from storage. LastUpdatedBlock tracking remains untouched, because
there's no way to retrieve it dynamically.
2021-07-29 10:23:01 +03:00
Anna Shaleva
e46d76d7aa core: rename state.NEP17Balances to state.NEP17TransferInfo
Balances are to be removed from state.NEP17TransferInfo, so the remnant
fields are NextTransferBatch, NewBatch and a map of LastUpdatedBlocks.
These fields are more staff-related.

Also rename dao.[Get, Put, put]NEP17Balances and STNEP17Balances
prefix.

Also rename NEP17TransferInfo.Trackers to LastUpdatedBlockTrackers
because NEP17TransferInfo.Balances are to be removed.
2021-07-28 13:22:53 +03:00
Anna Shaleva
c0a2c74e0c core: maintain a set of NEP17-compliant contracts 2021-07-28 13:22:53 +03:00
Roman Khimov
50d99464e0
Merge pull request #2064 from nspcc-dev/fix-remove-stale-hang
mempool: send events in a separate goroutine
2021-07-23 18:16:14 +03:00
Evgeniy Stratonikov
e2f2addf95 notary: fix possible deadlock in UpdateNotaryNodes
`UpdateNotaryNodes` takes account then request mutex, and `PostPersist` takes
them in a different order. Because they are executed concurrently a deadlock
can appear.

```
2021-07-23T11:06:58.3732405Z panic: test timed out after 10m0s
2021-07-23T11:06:58.3732642Z
2021-07-23T11:06:58.3742610Z goroutine 7351 [semacquire, 9 minutes]:
2021-07-23T11:06:58.3743140Z sync.runtime_SemacquireMutex(0xc00010e4dc, 0x1100000000, 0x1)
2021-07-23T11:06:58.3743747Z 	/opt/hostedtoolcache/go/1.14.15/x64/src/runtime/sema.go:71 +0x47
2021-07-23T11:06:58.3744222Z sync.(*Mutex).lockSlow(0xc00010e4d8)
2021-07-23T11:06:58.3744742Z 	/opt/hostedtoolcache/go/1.14.15/x64/src/sync/mutex.go:138 +0x1c1
2021-07-23T11:06:58.3745209Z sync.(*Mutex).Lock(0xc00010e4d8)
2021-07-23T11:06:58.3745692Z 	/opt/hostedtoolcache/go/1.14.15/x64/src/sync/mutex.go:81 +0x7d
2021-07-23T11:06:58.3746162Z sync.(*RWMutex).Lock(0xc00010e4d8)
2021-07-23T11:06:58.3746764Z 	/opt/hostedtoolcache/go/1.14.15/x64/src/sync/rwmutex.go:98 +0x4a
2021-07-23T11:06:58.3747699Z github.com/nspcc-dev/neo-go/pkg/services/notary.(*Notary).UpdateNotaryNodes(0xc00010e480, 0xc000105b90, 0x1, 0x1)
2021-07-23T11:06:58.3748621Z 	/home/runner/work/neo-go/neo-go/pkg/services/notary/node.go:44 +0x3ba
2021-07-23T11:06:58.3749367Z github.com/nspcc-dev/neo-go/pkg/core.TestNotary(0xc0003677a0)
2021-07-23T11:06:58.3750116Z 	/home/runner/work/neo-go/neo-go/pkg/core/notary_test.go:594 +0x2dba
2021-07-23T11:06:58.3750641Z testing.tRunner(0xc0003677a0, 0x16f3308)
2021-07-23T11:06:58.3751202Z 	/opt/hostedtoolcache/go/1.14.15/x64/src/testing/testing.go:1050 +0x1ec
2021-07-23T11:06:58.3751696Z created by testing.(*T).Run
2021-07-23T11:06:58.3752225Z 	/opt/hostedtoolcache/go/1.14.15/x64/src/testing/testing.go:1095 +0x538
2021-07-23T11:06:58.3752573Z
2021-07-23T11:06:58.3771319Z goroutine 7340 [semacquire, 9 minutes]:
2021-07-23T11:06:58.3772048Z sync.runtime_SemacquireMutex(0xc00010e504, 0x0, 0x0)
2021-07-23T11:06:58.3772889Z 	/opt/hostedtoolcache/go/1.14.15/x64/src/runtime/sema.go:71 +0x47
2021-07-23T11:06:58.3773581Z sync.(*RWMutex).RLock(0xc00010e4f8)
2021-07-23T11:06:58.3774310Z 	/opt/hostedtoolcache/go/1.14.15/x64/src/sync/rwmutex.go:50 +0xa4
2021-07-23T11:06:58.3775449Z github.com/nspcc-dev/neo-go/pkg/services/notary.(*Notary).getAccount(0xc00010e480, 0x0)
2021-07-23T11:06:58.3776626Z 	/home/runner/work/neo-go/neo-go/pkg/services/notary/node.go:51 +0x51
2021-07-23T11:06:58.3778270Z github.com/nspcc-dev/neo-go/pkg/services/notary.(*Notary).finalize(0xc00010e480, 0xc0003b2630, 0xa97df1bc78dd5787, 0xcc8a4d69e7f5d62a, 0x1a4d7981bd86b087, 0xbafdb720c93480b3, 0x0, 0x0)
2021-07-23T11:06:58.3779845Z 	/home/runner/work/neo-go/neo-go/pkg/services/notary/notary.go:306 +0x54
2021-07-23T11:06:58.3781022Z github.com/nspcc-dev/neo-go/pkg/services/notary.(*Notary).PostPersist(0xc00010e480)
2021-07-23T11:06:58.3782232Z 	/home/runner/work/neo-go/neo-go/pkg/services/notary/notary.go:297 +0x662
2021-07-23T11:06:58.3782989Z github.com/nspcc-dev/neo-go/pkg/services/notary.(*Notary).Run(0xc00010e480)
2021-07-23T11:06:58.3783941Z 	/home/runner/work/neo-go/neo-go/pkg/services/notary/notary.go:148 +0x3cb
2021-07-23T11:06:58.3784702Z created by github.com/nspcc-dev/neo-go/pkg/core.TestNotary
2021-07-23T11:06:58.3785451Z 	/home/runner/work/neo-go/neo-go/pkg/core/notary_test.go:132 +0x6e0
```

Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-07-23 14:48:00 +03:00
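The deadlock above is a classic inconsistent-lock-ordering situation. Whatever the exact fix in this commit was, the general rule it relies on can be illustrated like this, with purely illustrative names: always acquire the two mutexes in the same order on every path.
```
package main

import (
	"fmt"
	"sync"
)

type notary struct {
	accMtx sync.RWMutex // protects the account
	reqMtx sync.RWMutex // protects the requests map
}

// updateNodes and postPersist both need the two locks; taking them in the
// same fixed order (account first, then requests) on every path removes the
// A->B vs B->A ordering that produces the deadlock from the trace above.
func (n *notary) updateNodes() {
	n.accMtx.Lock()
	defer n.accMtx.Unlock()
	n.reqMtx.Lock()
	defer n.reqMtx.Unlock()
	// ... replace the account, drop stale requests ...
}

func (n *notary) postPersist() {
	n.accMtx.RLock()
	defer n.accMtx.RUnlock()
	n.reqMtx.Lock()
	defer n.reqMtx.Unlock()
	// ... finalize requests using the account ...
}

func main() {
	n := &notary{}
	var wg sync.WaitGroup
	wg.Add(2)
	go func() { defer wg.Done(); n.updateNodes() }()
	go func() { defer wg.Done(); n.postPersist() }()
	wg.Wait()
	fmt.Println("no deadlock")
}
```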
Evgeniy Stratonikov
3507f52c32 notary: process new transactions in a separate goroutine
Related #2063.

Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-07-23 14:48:00 +03:00
Roman Khimov
6103da8d10 context: read item key in LE
Hi, neo-project/neo#938.
2021-07-23 12:43:04 +03:00
Roman Khimov
efb67a0ea3 context: scripts and signatures are base64-encoded in C# now
So use base64 too and add a compatibility test. Unfortunately this breaks
support for old (hex-based) files, but those should have been completed a long
time ago.
2021-07-23 11:57:35 +03:00
Roman Khimov
cbe1eeb08c smartcontract: add support for valueless Parameters
This is fine:
            {
               "type" : "Signature"
            },
2021-07-23 11:57:13 +03:00
Roman Khimov
59b4377f90 context: support Neo.Network.P2P.Payloads.Transaction type
C# now uses this one, so use it by default, but also accept the old one.
2021-07-23 11:33:51 +03:00
Roman Khimov
ad35db66b5
Merge pull request #2091 from nspcc-dev/tune-error-messages
*: simplify some error messages
2021-07-23 10:40:26 +03:00
Roman Khimov
6e2eddbeb9
Merge pull request #2090 from nspcc-dev/new-query-commands
New query commands
2021-07-23 10:39:57 +03:00
Roman Khimov
1d8ad5b84a *: simplify some error messages
Log:
2021-07-23T09:59:18.948+0300    WARN    contract invocation failed      {"tx": "de3e3c1f1d37e4528990f894dea5583fd320485ad3862a95eb0e8823eecf4a5f", "block": 9643, "error": "error encountered at instruction 1 (SYSCALL): error during call from native: error encountered at instruction 745 (CAT): invalid conversion: Map/ByteString"}

The word "error" appears 4 times here.
2021-07-23 10:08:09 +03:00
Roman Khimov
7366d45985 rpc: add GetStateHeight to client 2021-07-22 21:13:44 +03:00
Roman Khimov
a188d20fd1 rpc: fix getstateheight result compatibility
C#:
   "result" : {
      "localrootindex" : 11623,
      "validatedrootindex" : 11623
   }

Go:
   "result" : {
      "blockHeight" : 11627,
      "stateHeight" : 11627
   }
2021-07-22 21:13:44 +03:00
Roman Khimov
6b852fc7b6
Merge pull request #2089 from nspcc-dev/fix-emit
vm/emit: improve error message
2021-07-22 15:17:59 +03:00
Evgeniy Stratonikov
808c30e7d7 vm/emit: improve error message
Show unsupported type instead of value.

Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-07-22 14:23:32 +03:00
Roman Khimov
ede410a4a7 go.mod: update ishell package
It only adds go.mod and changes import path, that's it.
2021-07-21 23:28:26 +03:00
Roman Khimov
002ad9dfee go.mod: update miniredis to 2.15.1
It's only used for testing purposes and this version doesn't change anything
for us, but still better be current.
2021-07-21 23:28:26 +03:00
Roman Khimov
4d1e952be6 go.mod: update go-datastructures to 1.0.53
We're only using the queue library and it didn't change in any way, but 1.0.53
has a proper go.mod, so it's still an improvement.

At the same time it also pulls in some new packages, like x/tools.
2021-07-21 23:28:00 +03:00
Roman Khimov
4d2ecab16f consensus: fix nonce handling
It was broken somewhere between 2f490a3403 and
85ce207f40 leading to panic on watch only node:

2021-07-21T16:21:39.201+0200    INFO    received Commit {"validator": 3}
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x28 pc=0xbcc59e]

goroutine 486 [running]:
github.com/nspcc-dev/neo-go/pkg/consensus.(*service).newBlockFromContext(0xc0001629a0, 0xc000308000, 0xc0010fa000, 0x2cb417800)
        github.com/nspcc-dev/neo-go/pkg/consensus/consensus.go:664 +0xbe
github.com/nspcc-dev/dbft.(*Context).MakeHeader(...)
        github.com/nspcc-dev/dbft@v0.0.0-20210302103605-cc75991b7cfb/context.go:270
github.com/nspcc-dev/dbft.(*DBFT).onCommit(0xc000308000, 0x138c998, 0xc000115110)
        github.com/nspcc-dev/dbft@v0.0.0-20210302103605-cc75991b7cfb/dbft.go:487 +0x575
github.com/nspcc-dev/dbft.(*DBFT).OnReceive(0xc000308000, 0x138c998, 0xc000115110)
        github.com/nspcc-dev/dbft@v0.0.0-20210302103605-cc75991b7cfb/dbft.go:251 +0xef5
github.com/nspcc-dev/neo-go/pkg/consensus.(*service).eventLoop(0xc0001629a0)
        github.com/nspcc-dev/neo-go/pkg/consensus/consensus.go:312 +0x7d6
created by github.com/nspcc-dev/neo-go/pkg/consensus.(*service).Start
        github.com/nspcc-dev/neo-go/pkg/consensus/consensus.go:262 +0xdc

In fact, nonce is correctly provided by dbft library (since Legacy), we just
need to use it here.
2021-07-21 19:06:19 +03:00
Roman Khimov
7d6898677b keys: trivial code simplification 2021-07-21 17:05:49 +03:00
Roman Khimov
e3f19dd242
Merge pull request #2081 from nspcc-dev/mainnet-config-update
config: update mainnet magic
2021-07-21 15:03:58 +03:00
Roman Khimov
df07ba505a config: update mainnet magic
It's NEO3, see neo-project/neo-node#795.
2021-07-21 14:42:26 +03:00
Roman Khimov
5bdcd4c241 client: add GetCandidateRegisterPrice method
It's important for clients.
2021-07-21 12:19:55 +03:00
Roman Khimov
35c2c3ae8e
Merge pull request #2078 from nspcc-dev/configurable-initial-gas
config: add InitialGASSupply, fix #2073
2021-07-20 17:10:25 +03:00
Roman Khimov
36d486a664 config: add InitialGASSupply, fix #2073
We now have 52M by default.
2021-07-20 16:59:54 +03:00
Roman Khimov
caf07c1ee7
Merge pull request #2076 from nspcc-dev/fix-occasional-bolt-test-failures
Improve temp file/dir handling in tests
2021-07-20 16:53:54 +03:00
Roman Khimov
f9a9d15490 config: update testnet magic for RC4
See neo-project/neo-node#798 and https://github.com/neo-project/neo-node/releases/tag/v3.0.0-rc4
2021-07-20 13:16:38 +03:00
Roman Khimov
0583f252ab *: create real temporary dirs and files in tests
Improve reliability.
2021-07-20 12:51:11 +03:00
Roman Khimov
3b19b34122 storage: fix memcached test with boltdb store
Everything was wrong here: the wrong file used, the wrong cleanup procedure;
the net result is this (and some failing tests from time to time):

  $ ls -l /tmp/test_bolt_db* | wc -l
  30939
2021-07-20 12:35:24 +03:00
Roman Khimov
c88ebaede9
Merge pull request #2075 from nspcc-dev/small-refactoring
Array util refactoring and naming improvement
2021-07-20 11:29:59 +03:00
Roman Khimov
7477e2cd9f
Merge pull request #2074 from nspcc-dev/fix-oracle-service-behaviour
Fix oracle service behaviour
2021-07-20 11:29:39 +03:00
Roman Khimov
19717dd9a8 slice: introduce common Copy helper
It's a bit more convenient to use.
2021-07-19 22:57:55 +03:00