Commit graph

5039 commits

Author SHA1 Message Date
Evgeniy Stratonikov
6fe2ace43b cli/smartcontract: refactor contract deploy a bit
Provide cosigners explicitly during deploy and don't read the wallet twice.
This is needed because manifest validation requires a valid sender address.

Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-06 12:05:32 +03:00
Evgeniy Stratonikov
f83395e897 cli/test: move test wallet path to constant
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-06 12:05:32 +03:00
Evgeniy Stratonikov
bd2b1a0521 mpt: add Size method to trie nodes
Knowing the serialized size of a node is useful for
preallocating a byte slice in advance.
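
A minimal sketch of how callers can use such a Size method (the node layout and
names here are hypothetical, not the actual mpt types):

```
package main

import "fmt"

// Hypothetical node type; only the preallocation pattern matters here.
type node struct {
	key, value []byte
}

// Size returns the exact serialized length of the node.
func (n *node) Size() int {
	return 1 + len(n.key) + len(n.value) // type tag + key + value
}

// Bytes serializes the node into a slice allocated once, so append never grows it.
func (n *node) Bytes() []byte {
	buf := make([]byte, 0, n.Size())
	buf = append(buf, 0x01) // type tag
	buf = append(buf, n.key...)
	buf = append(buf, n.value...)
	return buf
}

func main() {
	n := &node{key: []byte{0x0a}, value: []byte("leaf")}
	fmt.Println(len(n.Bytes()) == n.Size()) // true, no reallocation happened
}
```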

Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-06 12:01:16 +03:00
Evgeniy Stratonikov
db80ef28df mpt: move empty hash node into a separate type
We use them quite frequently (consider the children of a new branch
node), and it is better to get rid of unneeded allocations.

Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-06 12:01:16 +03:00
Evgeniy Stratonikov
f02d8b4ec4 stackitem: serialize integers to the pre-allocated slice
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-06 11:59:24 +03:00
Evgeniy Stratonikov
291a29af1e *: do not use WriteArray for frequently used items
`WriteArray` involves reflection, so it makes sense to optimize
serialization of transactions and application logs, which are serialized
constantly. Adding a case to the type switch in `WriteArray` is not an
option because we don't want new dependencies for the `io` package.
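
A rough sketch of the approach (simplified types and length prefix, not the
actual neo-go io API): frequently encoded types write the array length and each
element explicitly instead of going through a reflection-based helper.

```
package main

import (
	"bytes"
	"fmt"
)

// Simplified stand-in for a transaction attribute or application log entry.
type item struct {
	tag  byte
	data []byte
}

// encodeItems avoids reflection entirely: it writes a length prefix and then
// each element field by field.
func encodeItems(w *bytes.Buffer, items []item) {
	w.WriteByte(byte(len(items))) // simplified length prefix
	for _, it := range items {
		w.WriteByte(it.tag)
		w.WriteByte(byte(len(it.data)))
		w.Write(it.data)
	}
}

func main() {
	var b bytes.Buffer
	encodeItems(&b, []item{{tag: 0x20, data: []byte{1, 2, 3}}})
	fmt.Printf("% x\n", b.Bytes())
}
```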

```
name                          old time/op    new time/op    delta
AppExecResult_EncodeBinary-8     852ns ± 3%     656ns ± 2%  -22.94%  (p=0.000 n=10+9)

name                          old alloc/op   new alloc/op   delta
AppExecResult_EncodeBinary-8      448B ± 0%      376B ± 0%  -16.07%  (p=0.000 n=10+10)

name                          old allocs/op  new allocs/op  delta
AppExecResult_EncodeBinary-8      7.00 ± 0%      5.00 ± 0%  -28.57%  (p=0.000 n=10+10)
```

```
name                 old time/op    new time/op    delta
Transaction_Bytes-8    1.29µs ± 3%    0.76µs ± 5%  -41.52%  (p=0.000 n=9+10)

name                 old alloc/op   new alloc/op   delta
Transaction_Bytes-8    1.21kB ± 0%    1.01kB ± 0%  -16.56%  (p=0.000 n=10+10)

name                 old allocs/op  new allocs/op  delta
Transaction_Bytes-8      12.0 ± 0%       7.0 ± 0%  -41.67%  (p=0.000 n=10+10)
```

Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-06 11:59:20 +03:00
Roman Khimov
95e1f5f77b
Merge pull request #2113 from nspcc-dev/optimize-witness-hashing
core: don't recalculate witness script hash
2021-08-06 11:57:54 +03:00
Roman Khimov
79bdf9b98f
Merge pull request #2115 from nspcc-dev/fix-ping-messages
network: fix Ping messages
2021-08-06 11:43:00 +03:00
Roman Khimov
f9663a97a1 network: fix Ping messages
* NewPing() accepts the block index first and the nonce second.
* Block height should be used, it'll be important for state-exchanging nodes.
2021-08-06 11:28:09 +03:00
Roman Khimov
39f874d03f core: don't recalculate witness script hash
We know it already, but with the current loading code the VM will hash it once
more. It doesn't help a lot, but it also costs nothing to avoid this
overhead.

name             old time/op    new time/op    delta
VerifyWitness-8    93.4µs ± 3%    92.7µs ± 2%    ~     (p=0.353 n=10+10)

name             old alloc/op   new alloc/op   delta
VerifyWitness-8    3.43kB ± 0%    3.40kB ± 0%  -0.70%  (p=0.000 n=9+9)

name             old allocs/op  new allocs/op  delta
VerifyWitness-8      67.0 ± 0%      66.0 ± 0%  -1.49%  (p=0.000 n=10+10)
2021-08-06 11:25:09 +03:00
Evgeniy Stratonikov
43ee671f36 mpt: do not allocate NodeObject for serialization
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-06 10:28:19 +03:00
Roman Khimov
e41fc2fd1b
Merge pull request #2111 from nspcc-dev/drop-refuel
native: drop Refuel method from GAS
2021-08-05 16:42:29 +03:00
Roman Khimov
f685c49cb2
Merge pull request #2110 from nspcc-dev/optimize-tx-decoding
Optimize tx decoding
2021-08-05 13:43:11 +03:00
Roman Khimov
d6bd6b6888 native: drop Refuel method from GAS
It can be used to attack the network (DoS amplification), so it's broken
beyond repair. This reverts ac601601c1.

See also neo-project/neo#2560 and neo-project/neo#2561.
2021-08-05 10:27:13 +03:00
Roman Khimov
1b186e046b network: use optimized decoder for transactions
NewTransactionFromBytes() works a bit faster and uses less memory.
2021-08-04 23:49:07 +03:00
Roman Khimov
892c9785ad transaction: don't allocate new buffer to calculate hash
We can write directly to hash.Hash.
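
The idea in standard-library terms (the field layout below is made up for
illustration): hash.Hash implements io.Writer, so the serialized form can be
streamed straight into the hasher without an intermediate buffer.

```
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// Toy transaction; the real field layout differs.
type tx struct {
	version byte
	nonce   uint32
	script  []byte
}

// hash streams the encoded fields directly into the hasher, avoiding a
// temporary byte slice holding the whole serialized transaction.
func (t *tx) hash() []byte {
	h := sha256.New()
	h.Write([]byte{t.version})
	_ = binary.Write(h, binary.LittleEndian, t.nonce)
	h.Write(t.script)
	return h.Sum(nil)
}

func main() {
	t := &tx{version: 0, nonce: 42, script: []byte{0x51}}
	fmt.Printf("%x\n", t.hash())
}
```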

name               old time/op    new time/op    delta
DecodeBinary-8       2.89µs ± 3%    2.82µs ± 5%     ~     (p=0.052 n=10+10)
DecodeJSON-8         13.0µs ± 1%    12.8µs ± 1%   -1.54%  (p=0.002 n=10+8)
DecodeFromBytes-8    2.37µs ± 1%    2.25µs ± 5%   -5.25%  (p=0.000 n=9+10)

name               old alloc/op   new alloc/op   delta
DecodeBinary-8       1.75kB ± 0%    1.53kB ± 0%  -12.79%  (p=0.000 n=10+10)
DecodeJSON-8         3.49kB ± 0%    3.26kB ± 0%   -6.42%  (p=0.000 n=10+10)
DecodeFromBytes-8    1.37kB ± 0%    1.14kB ± 0%  -16.37%  (p=0.000 n=10+10)

name               old allocs/op  new allocs/op  delta
DecodeBinary-8         26.0 ± 0%      23.0 ± 0%  -11.54%  (p=0.000 n=10+10)
DecodeJSON-8           58.0 ± 0%      55.0 ± 0%   -5.17%  (p=0.000 n=10+10)
DecodeFromBytes-8      18.0 ± 0%      15.0 ± 0%  -16.67%  (p=0.000 n=10+10)
2021-08-04 23:43:20 +03:00
Roman Khimov
6d10cdc2f6 transaction: avoid ReadArray()
Reflection adds some real cost to it:

name               old time/op    new time/op    delta
DecodeBinary-8       3.14µs ± 5%    2.89µs ± 3%   -8.19%  (p=0.000 n=10+10)
DecodeJSON-8         12.6µs ± 3%    13.0µs ± 1%   +3.77%  (p=0.000 n=10+10)
DecodeFromBytes-8    2.73µs ± 2%    2.37µs ± 1%  -13.12%  (p=0.000 n=9+9)

name               old alloc/op   new alloc/op   delta
DecodeBinary-8       1.82kB ± 0%    1.75kB ± 0%   -3.95%  (p=0.000 n=10+10)
DecodeJSON-8         3.49kB ± 0%    3.49kB ± 0%     ~     (all equal)
DecodeFromBytes-8    1.44kB ± 0%    1.37kB ± 0%   -5.00%  (p=0.000 n=10+10)

name               old allocs/op  new allocs/op  delta
DecodeBinary-8         29.0 ± 0%      26.0 ± 0%  -10.34%  (p=0.000 n=10+10)
DecodeJSON-8           58.0 ± 0%      58.0 ± 0%     ~     (all equal)
DecodeFromBytes-8      21.0 ± 0%      18.0 ± 0%  -14.29%  (p=0.000 n=10+10)
2021-08-04 23:34:57 +03:00
Roman Khimov
d2732a71d8 transaction: don't overwrite error and witnesses length check
ReadArray() can return an error and we shouldn't overwrite it. At the same
time, limiting ReadArray() to the number of Signers can make it return the
wrong error if the number of witnesses is actually bigger than the number of
signers, so use MaxAttributes.
2021-08-04 23:17:50 +03:00
Roman Khimov
d487b54612 transaction: don't recalculate size when decoding from buffer
name               old time/op    new time/op    delta
DecodeBinary-8       3.17µs ± 6%    3.14µs ± 5%     ~     (p=0.579 n=10+10)
DecodeJSON-8         12.8µs ± 3%    12.6µs ± 3%     ~     (p=0.105 n=10+10)
DecodeFromBytes-8    3.45µs ± 4%    2.73µs ± 2%  -20.70%  (p=0.000 n=10+9)

name               old alloc/op   new alloc/op   delta
DecodeBinary-8       1.82kB ± 0%    1.82kB ± 0%     ~     (all equal)
DecodeJSON-8         3.49kB ± 0%    3.49kB ± 0%     ~     (all equal)
DecodeFromBytes-8    1.82kB ± 0%    1.44kB ± 0%  -21.05%  (p=0.000 n=10+10)

name               old allocs/op  new allocs/op  delta
DecodeBinary-8         29.0 ± 0%      29.0 ± 0%     ~     (all equal)
DecodeJSON-8           58.0 ± 0%      58.0 ± 0%     ~     (all equal)
DecodeFromBytes-8      29.0 ± 0%      21.0 ± 0%  -27.59%  (p=0.000 n=10+10)
2021-08-04 23:13:58 +03:00
Roman Khimov
5e18a6141e
Merge pull request #2106 from nspcc-dev/microopt
Microoptimizations
2021-08-03 21:28:35 +03:00
Roman Khimov
64c780ad7a native: optimize totalSupply operations during token burn/mint
We burn GAS in OnPersist for every transaction so some buffer reuse here is
quite natural.

This also doesn't change a lot in the overall TPS picture, maybe adding some
1%.
2021-08-03 17:59:38 +03:00
Roman Khimov
dede4fa7b1 state: convert NEO balance to stack item directly
Avoid calling Append(), which reallocates the slice; we know the exact length
of the slice.
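
The general pattern (stand-in types, not the actual stackitem API): when the
number of elements is known, allocate the slice with that exact capacity
instead of appending to a growing one.

```
package main

import "fmt"

// item stands in for a stack item.
type item interface{}

// toStackItems builds the result with a known capacity, so no reallocation
// ever happens while filling it.
func toStackItems(balance, height, voteTo item) []item {
	items := make([]item, 0, 3) // exact capacity known up front
	return append(items, balance, height, voteTo)
}

func main() {
	fmt.Println(toStackItems(100, 62800, nil)) // [100 62800 <nil>]
}
```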
2021-08-03 17:59:38 +03:00
Roman Khimov
5c65d33439 native: move required balance check to token contracts
This duplicates the check, but deduplicates the error path. The old check
forced double balance deserialization, which is quite a costly operation, so
we'd better do it only once.

It's hardly noticeable in TPS metrics though, maybe some 1-2%.
2021-08-03 17:59:38 +03:00
Roman Khimov
85936de254 vm: don't create reference counter when it's not needed
* the invocation stack doesn't need refcounting
* the exception stack doesn't need refcounting
* the evaluation stack always has a VM-level refcounter
2021-08-02 22:38:41 +03:00
Roman Khimov
2c2ccdca74 opcode: optimize IsValid
Map access costs much more than array access.
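
A simplified version of the trick (the opcode set below is illustrative):
replace a map keyed by opcode with a flat 256-entry array indexed by the
opcode's byte value, turning the lookup into a single bounds-checked load.

```
package main

import "fmt"

type opcode byte

const (
	pushint8 opcode = 0x00
	ret      opcode = 0x40
	syscall  opcode = 0x41
)

// valid is a 256-entry lookup table: array indexing is much cheaper than a
// map lookup.
var valid [256]bool

func init() {
	for _, op := range []opcode{pushint8, ret, syscall} {
		valid[op] = true
	}
}

// IsValid reports whether the opcode is defined.
func IsValid(op opcode) bool {
	return valid[op]
}

func main() {
	fmt.Println(IsValid(ret), IsValid(0xff)) // true false
}
```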

name       old time/op  new time/op  delta
IsValid-8  17.6ns ± 2%   1.1ns ± 2%  -93.68%  (p=0.008 n=5+5)
2021-08-02 21:46:19 +03:00
Roman Khimov
3c1325035e fee: use array for opcodes
Use less memory and have faster access.

name       old time/op  new time/op  delta
Opcode1-8  22.4ns ± 6%   3.0ns ± 6%  -86.63%  (p=0.000 n=10+10)
2021-08-02 20:18:33 +03:00
Roman Khimov
dfc514eda0
Merge pull request #2102 from nspcc-dev/store4
Improve (*MemCachedStore).Persist
2021-08-02 20:10:05 +03:00
Roman Khimov
024bfee363 README: N3 is stable now 2021-08-02 20:08:39 +03:00
Roman Khimov
07febc10c7 CHANGELOG: release 0.97.0 2021-08-02 19:59:42 +03:00
Roman Khimov
82f481e143
Merge pull request #2105 from nspcc-dev/json-restrict
native/std: restrict amount of items in JSON deserialization
2021-08-02 19:41:54 +03:00
Evgeniy Stratonikov
bdb9748c1b native/std: restrict amount of items in JSON deserialization
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-08-02 18:57:47 +03:00
Roman Khimov
f8174ca64c core: ensure data logged is from persistent store
Using bc.dao here is wrong, it can contain unpersisted data.
2021-08-02 16:33:09 +03:00
Roman Khimov
8277b7a19a core: don't spawn goroutine for persist function
It doesn't make any sense; in some situations it leads to a number of
goroutines being created that Persist one after another (as we can't Persist
concurrently). We can manage this better in a single thread.
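
The shape this takes, as a hedged sketch with made-up names: rather than a
`go persist()` per trigger, a single long-running loop drains a signal channel,
so persists are serialized without piling up goroutines.

```
package main

import (
	"fmt"
	"time"
)

// runPersist is the only goroutine that ever calls persist(); callers just
// signal it. A buffered channel of size 1 coalesces repeated requests.
func runPersist(trigger <-chan struct{}, stop <-chan struct{}, persist func()) {
	for {
		select {
		case <-trigger:
			persist() // persists never overlap, no goroutine pile-up
		case <-stop:
			return
		}
	}
}

func main() {
	trigger := make(chan struct{}, 1)
	stop := make(chan struct{})
	go runPersist(trigger, stop, func() { fmt.Println("persisted") })

	// Request a persist; if one is already pending, the request is coalesced.
	select {
	case trigger <- struct{}{}:
	default:
	}
	time.Sleep(100 * time.Millisecond)
	close(stop)
}
```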

This doesn't change performance in any way, but somewhat reduces resource
consumption. It was tested with neo-bench (single node, 10 workers, LevelDB) on
two machines and with block dump processing (RC4 testnet up to 62800 with
VerifyBlocks set to false) on an i7-8565U.

Reference (b9be892bf9):

Ryzen 9 5950X:
RPS     27747.349 27407.726 27520.210  ≈ 27558   ± 0.63%
TPS     26992.010 26993.468 27010.966  ≈ 26999   ± 0.04%
CPU %      28.928    28.096    29.105  ≈    28.7 ± 1.88%
Mem MB    760.385   726.320   756.118  ≈   748   ± 2.48%

Core i7-8565U:
RPS     7783.229 7628.409 7542.340  ≈ 7651   ± 1.60%
TPS     7708.436 7607.397 7489.459  ≈ 7602   ± 1.44%
CPU %     74.899   71.020   72.697  ≈   72.9 ± 2.67%
Mem MB   438.047  436.967  416.350  ≈  430   ± 2.84%

DB restore:
real    0m20.838s 0m21.895s 0m21.794s  ≈ 21.51 ± 2.71%
user    0m39.091s 0m40.565s 0m41.493s  ≈ 40.38 ± 3.00%
sys      0m3.184s  0m2.923s  0m3.062s  ≈  3.06 ± 4.27%

Patched:

Ryzen 9 5950X:
RPS     27636.957 27246.911 27462.036  ≈ 27449   ±  0.71%  ↓ 0.40%
TPS     27003.672 26993.468 27011.696  ≈ 27003   ±  0.03%  ↑ 0.01%
CPU %      28.562    28.475    28.012  ≈    28.3 ±  1.04%  ↓ 1.39%
Mem MB    627.007   648.110   794.895  ≈   690   ± 13.25%  ↓ 7.75%

Core i7-8565U:
RPS     7497.210 7527.797 7897.532  ≈ 7641   ±  2.92%  ↓ 0.13%
TPS     7461.128 7482.678 7841.723  ≈ 7595   ±  2.81%  ↓ 0.09%
CPU %     71.559   73.423   69.005  ≈   71.3 ±  3.11%  ↓ 2.19%
Mem MB   393.090  395.899  482.264  ≈  424   ± 11.96%  ↓ 1.40%

DB restore:
real    0m20.773s 0m21.583s 0m20.522s  ≈ 20.96 ±  2.65%  ↓ 2.56%
user    0m39.322s 0m42.268s 0m38.626s  ≈ 40.07 ±  4.82%  ↓ 0.77%
sys      0m3.006s  0m3.597s  0m3.042s  ≈  3.22 ± 10.31%  ↑ 5.23%
2021-08-02 16:33:00 +03:00
Roman Khimov
b9be892bf9 storage: allow accessing MemCachedStore during Persist
Persist by definition doesn't change the visible state of MemCachedStore: all
KV pairs that were accessible via it before Persist remain accessible after
Persist. The only thing it does is flush the current set of KV pairs from
memory to the persistent store. To do that it needs read-only access to the
current KV pair set, but technically it then replaces the maps, so we have to
take a full write lock, which makes MemCachedStore inaccessible for the
duration of Persist. And Persist can take a lot of time, as it means disk
access for regular DBs.

What we do here is create new in-memory maps for MemCachedStore before
flushing the old ones to the persistent store. Then a fake persistent store is
created, which actually is a MemCachedStore with the old maps, so it has
exactly the same visible state. This Store is never accessed for writes, so we
can read it without taking any internal locks, and at the same time we no
longer need write locks for the original MemCachedStore, since we're not using
it. All of this makes it possible to use MemCachedStore as usual: reads are
handled going down to whatever level is needed and writes are handled by the
new maps. So while Persist for (*Blockchain).dao does its most time-consuming
work, we can process other blocks (reading data for transactions and
persisting storeBlock caches to (*Blockchain).dao).
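
A condensed sketch of this scheme (simplified to a single map layer and a
sync.Map standing in for the lower persistent store; not the actual
MemCachedStore code): swap the maps under a short write lock, then flush the
detached old map without holding that lock, so readers and writers keep
working during the flush.

```
package main

import (
	"fmt"
	"sync"
)

type memCached struct {
	mu       sync.RWMutex
	mem      map[string][]byte // current in-memory changes
	flushing map[string][]byte // detached, read-only set being persisted
	next     *sync.Map         // stand-in for the persistent store below
}

func (s *memCached) Get(k string) ([]byte, bool) {
	s.mu.RLock()
	if v, ok := s.mem[k]; ok {
		s.mu.RUnlock()
		return v, true
	}
	if v, ok := s.flushing[k]; ok {
		s.mu.RUnlock()
		return v, true
	}
	s.mu.RUnlock()
	if v, ok := s.next.Load(k); ok {
		return v.([]byte), true
	}
	return nil, false
}

func (s *memCached) Put(k string, v []byte) {
	s.mu.Lock()
	s.mem[k] = v
	s.mu.Unlock()
}

// Persist detaches the current map as a read-only "flushing" layer under a
// short write lock, then copies it down with no lock held: readers still see
// every key during the flush and writers are never blocked by it.
func (s *memCached) Persist() int {
	s.mu.Lock()
	flushing := s.mem
	s.flushing = flushing
	s.mem = make(map[string][]byte)
	s.mu.Unlock()

	for k, v := range flushing { // the slow, disk-bound part in a real store
		s.next.Store(k, v)
	}

	s.mu.Lock()
	s.flushing = nil
	s.mu.Unlock()
	return len(flushing)
}

func main() {
	s := &memCached{mem: map[string][]byte{}, next: &sync.Map{}}
	s.Put("a", []byte{1})
	fmt.Println(s.Persist()) // 1
	v, ok := s.Get("a")
	fmt.Println(v, ok) // [1] true
}
```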

The change was tested for performance with neo-bench (single node, 10 workers,
LevelDB) on two machines and with block dump processing (RC4 testnet up to
62800 with VerifyBlocks set to false) on an i7-8565U.

Reference results (bbe4e9cd7b):

Ryzen 9 5950X:
RPS     23616.969 22817.086 23222.378  ≈ 23218   ± 1.72%
TPS     23047.316 22608.578 22735.540  ≈ 22797   ± 0.99%
CPU %      23.434    25.553    23.848  ≈    24.3 ± 4.63%
Mem MB    600.636   503.060   582.043  ≈   562   ± 9.22%

Core i7-8565U:
RPS     6594.007 6499.501 6572.902  ≈ 6555   ± 0.76%
TPS     6561.680 6444.545 6510.120  ≈ 6505   ± 0.90%
CPU %     58.452   60.568   62.474    ≈ 60.5 ± 3.33%
Mem MB   234.893  285.067  269.081   ≈ 263   ± 9.75%

DB restore:
real    0m22.237s 0m23.471s 0m23.409s  ≈ 23.04 ± 3.02%
user    0m35.435s 0m38.943s 0m39.247s  ≈ 37.88 ± 5.59%
sys      0m3.085s  0m3.360s  0m3.144s  ≈  3.20 ± 4.53%

After the change:

Ryzen 9 5950X:
RPS     27747.349 27407.726 27520.210  ≈ 27558   ± 0.63%  ↑ 18.69%
TPS     26992.010 26993.468 27010.966  ≈ 26999   ± 0.04%  ↑ 18.43%
CPU %      28.928    28.096    29.105  ≈    28.7 ± 1.88%  ↑ 18.1%
Mem MB    760.385   726.320   756.118  ≈   748   ± 2.48%  ↑ 33.10%

Core i7-8565U:
RPS     7783.229 7628.409 7542.340  ≈ 7651   ± 1.60%  ↑ 16.72%
TPS     7708.436 7607.397 7489.459  ≈ 7602   ± 1.44%  ↑ 16.85%
CPU %     74.899   71.020   72.697  ≈   72.9 ± 2.67%  ↑ 20.50%
Mem MB   438.047  436.967  416.350  ≈  430   ± 2.84%  ↑ 63.50%

DB restore:
real    0m20.838s 0m21.895s 0m21.794s  ≈ 21.51 ± 2.71%  ↓ 6.64%
user    0m39.091s 0m40.565s 0m41.493s  ≈ 40.38 ± 3.00%  ↑ 6.60%
sys      0m3.184s  0m2.923s  0m3.062s  ≈  3.06 ± 4.27%  ↓ 4.38%

It obviously uses more memory now and utilizes the CPU more aggressively, but
at the same time it improves all relevant metrics and finally reaches the point
where we process 50K transactions in less than a second on the Ryzen 9 5950X
(going higher than 25K TPS). The other observation is a much more stable block
time; on the Ryzen 9 it's as close to 1 second as it can be.
2021-08-02 16:33:00 +03:00
Roman Khimov
5f2e08581f
Merge pull request #2103 from nspcc-dev/mainnet-config
config: add missing mainnet standby committee members
2021-08-02 12:15:40 +03:00
Roman Khimov
63e59accd1 config: add missing mainnet standby committee members 2021-08-02 11:12:48 +03:00
Roman Khimov
bbe4e9cd7b
Merge pull request #2101 from nspcc-dev/goroutiner
Improve big block processing
2021-07-30 19:21:13 +03:00
Roman Khimov
3cebd2b129 interop: use non-Cached wrapped DAO
Cached only caches NEP-17 tracking data now, it makes no sense here.
2021-07-30 15:45:17 +03:00
Roman Khimov
fa7314ea90 dao: drop dropNEP17Cache from Cached
It's not used now.
2021-07-30 15:45:17 +03:00
Roman Khimov
49be753850 core: spread storeBlock actions to three goroutines
Block processing consists of:
 * saving block/transactions to the DB
 * executing blocks/transactions
 * processing notifications/saving AERs
 * updating MPT
 * atomically updating Blockchain state

Of these, the first one is completely independent of the others and can easily
be done in a separate routine. The third one technically depends on the second,
it just doesn't have data until something is executed. At the same time it
doesn't affect future executions in any way, so we can offload
AER/notification processing to a separate goroutine (while the main thread
proceeds with other transactions).

The MPT update depends on all executions, so it can't be offloaded, but it can
be done concurrently with AER processing. And only the last thing actually
needs all previous ones to be finished, so it's a natural synchronization
point.

So we spawn two additional routines and let the main one execute transactions
and update the MPT as fast as it can. While technically all of these routines
could share a single DAO (they work with different KV sets), benchmarking
shows that using separate DAOs and then persisting them to the lower one
actually works about 7-8% better. At the same time we can simplify the DAOs
used: the Cached one is only relevant for AER processing because it caches
NEP-17 tracking data, everything else can do just fine with Simple.
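
A heavily simplified sketch of the split (function names and payloads are
placeholders): block/transaction saving and AER/notification processing run in
their own goroutines, while the main routine executes transactions and updates
the MPT, then waits for both helpers before the final atomic state update.

```
package main

import (
	"fmt"
	"sync"
)

type execResult struct{ txID int }

func storeBlock(txs []int) {
	var wg sync.WaitGroup
	aersCh := make(chan execResult, len(txs))

	// Goroutine 1: save block and transactions to the DB (independent of execution).
	wg.Add(1)
	go func() {
		defer wg.Done()
		fmt.Println("block/txs saved:", len(txs))
	}()

	// Goroutine 2: process notifications and store AERs as executions finish.
	wg.Add(1)
	go func() {
		defer wg.Done()
		for r := range aersCh {
			fmt.Println("AER stored for tx", r.txID)
		}
	}()

	// Main routine: execute transactions and update the MPT as fast as it can.
	for _, tx := range txs {
		aersCh <- execResult{txID: tx} // hand the result off to the AER goroutine
	}
	close(aersCh)
	fmt.Println("MPT updated")

	// Natural synchronization point: everything must finish before the
	// blockchain state is updated atomically.
	wg.Wait()
	fmt.Println("blockchain state updated")
}

func main() {
	storeBlock([]int{1, 2, 3})
}
```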

The change was tested for performance with neo-bench (single node, 10 workers,
LevelDB) on two machines and with block dump processing (RC4 testnet up to
50825 with VerifyBlocks set to false) on an i7-8565U. neo-bench creates huge
blocks with lots of transactions while the RC4 dump mostly consists of empty
blocks.

Reference results (06c3dda5d1):

Ryzen 9 5950X:
RPS ≈ 20059.569   21186.328   20158.983   ≈ 20468   ±  3.05%
TPS ≈ 19544.993   20585.450   19658.338   ≈ 19930   ±  2.86%
CPU ≈    18.682%     23.877%     22.852%  ≈    21.8 ± 12.62%
Mem ≈   618.981MB   559.246MB   541.539MB ≈   573   ±  7.08%

Core i7-8565U:
RPS ≈ 5927.082   6526.739   6372.115   ≈ 6275   ± 4.96%
TPS ≈ 5899.531   6477.187   6329.515   ≈ 6235   ± 4.81%
CPU ≈   56.346%    61.955%    58.125%  ≈   58.8 ± 4.87%
Mem ≈  212.191MB  224.974MB  205.479MB ≈  214   ± 4.62%

DB restore:
real    0m12.683s 0m13.222s 0m13.382s  ≈ 13.096 ±  2.80%
user    0m18.501s 0m19.163s 0m19.489s  ≈ 19.051 ±  2.64%
sys      0m1.404s  0m1.396s  0m1.666s  ≈  1.489 ± 10.32%

After the change:

Ryzen 9 5950X:
RPS ≈ 23056.899   22822.015   23006.543   ≈ 22962   ± 0.54%
TPS ≈ 22594.785   22292.071   22800.857   ≈ 22562   ± 1.13%
CPU ≈    24.262%     23.185%     25.921%  ≈    24.5 ± 5.65%
Mem ≈   614.254MB   613.204MB   555.491MB ≈   594   ± 5.66%

Core i7-8565U:
RPS ≈ 6378.702   6423.927   6363.788      ≈ 6389   ± 0.49%
TPS ≈ 6327.072   6372.552   6311.179      ≈ 6337   ± 0.50%
CPU ≈   57.599%    58.622%    59.737%     ≈   58.7 ± 1.82%
Mem ≈  198.697MB  188.746MB  200.235MB    ≈  196   ± 3.18%

DB restore:
real    0m13.576s 0m13.334s 0m12.757s  ≈  13.222 ±  3.18%
user    0m19.113s 0m19.490s 0m20.197s  ≈  19.600 ±  2.81%
sys      0m2.211s  0m1.558s  0m1.559s  ≈   1.776 ± 21.21%

On the Ryzen 9 we've got 12% better RPS and 13% better TPS with 12% more CPU
and 3% more RAM used. Core i7-8565U changes don't seem to be statistically
significant: 1.8% more RPS and 1.6% more TPS with about the same CPU and 8.5%
less RAM used. It is also 1% worse in DB restore time.

The result is somewhat expected: on a powerful machine with lots of spare
cores we get 10%+ better results, while on an average resource-constrained
laptop it doesn't change much (the machine is already saturated). Overall,
this seems to be worthwhile.
2021-07-30 15:45:17 +03:00
Roman Khimov
06c3dda5d1
Merge pull request #2093 from nspcc-dev/states-exchange/drop-nep17-balance-state
core: implement dynamic NEP17 balances tracking
2021-07-29 19:08:42 +03:00
Roman Khimov
6d5f064fd8
Merge pull request #2097 from nspcc-dev/doc-manifest-permission
docs/compiler.md: document contract configuration
2021-07-29 18:52:29 +03:00
Roman Khimov
ebbb9df91e
Merge pull request #2099 from nspcc-dev/wallet-truncate
wallet: truncate file after writing
2021-07-29 18:52:18 +03:00
Evgeniy Stratonikov
283173bb9d wallet: use named constants in Seek
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-07-29 17:11:50 +03:00
Evgeniy Stratonikov
a429aa3e68 wallet: truncate file when writing
If the wallet size decreases, we need to remove trailing garbage if it
exists. This can happen when removing an account or reading a pretty-printed
wallet. It doesn't affect our CLI (we decode only the file prefix), but
it is nice to always have a valid JSON file.
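
The underlying pattern in standard-library terms (a sketch, not the wallet
package code): after rewriting the file in place, truncate it at the length of
the newly written content so a shorter payload doesn't leave stale JSON bytes
behind.

```
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// saveJSON rewrites path in place and truncates any trailing bytes left over
// from a previously larger (e.g. pretty-printed) version of the file.
func saveJSON(path string, v interface{}) error {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o644)
	if err != nil {
		return err
	}
	defer f.Close()

	data, err := json.Marshal(v) // marshal first, then write in a single call
	if err != nil {
		return err
	}
	if _, err := f.Write(data); err != nil {
		return err
	}
	// Drop whatever is left past the newly written content.
	return f.Truncate(int64(len(data)))
}

func main() {
	const path = "wallet.json"
	_ = saveJSON(path, map[string][]string{"accounts": {"a", "b"}})
	_ = saveJSON(path, map[string][]string{"accounts": {"a"}}) // file shrinks, no garbage remains
	b, _ := os.ReadFile(path)
	fmt.Println(string(b))
}
```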

Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-07-29 17:11:49 +03:00
Evgeniy Stratonikov
619bbb40c4 docs/compiler.md: document contract configuration
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-07-29 16:12:31 +03:00
Evgeniy Stratonikov
8f196c8222 wallet: marshal before writing to file
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
2021-07-29 16:07:36 +03:00
Anna Shaleva
a30e48ff90 core: increment the DB version
DB scheme has been changed.
2021-07-29 10:23:13 +03:00
Anna Shaleva
e8bed184d5 core: implement dynamic NEP17 balances tracking
Request NEP17 balances from a set of NEP17 contracts instead of getting
them from storage. LastUpdatedBlock tracking remains untouched, because
there's no way to retrieve it dynamically.
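
In outline (the contract invocation is abstracted behind a placeholder
function; none of these names come from the actual code): instead of reading
balances from a persisted structure, iterate the tracked contracts and query
each one's balanceOf for the account.

```
package main

import (
	"fmt"
	"math/big"
)

// balanceOf stands in for an actual balanceOf invocation of a NEP17 contract;
// in the real node this is a VM call into the contract.
type balanceOf func(contract, account string) *big.Int

// getNEP17Balances queries each tracked contract directly instead of reading
// a stored balances structure.
func getNEP17Balances(tracked []string, account string, query balanceOf) map[string]*big.Int {
	res := make(map[string]*big.Int, len(tracked))
	for _, c := range tracked {
		res[c] = query(c, account)
	}
	return res
}

func main() {
	fake := func(contract, account string) *big.Int { return big.NewInt(int64(len(contract))) }
	fmt.Println(getNEP17Balances([]string{"NEO", "GAS"}, "NAccountHash", fake))
}
```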
2021-07-29 10:23:01 +03:00
Anna Shaleva
e46d76d7aa core: rename state.NEP17Balances to state.NEP17TransferInfo
Balances are to be removed from state.NEP17TransferInfo, so the remaining
fields are NextTransferBatch, NewBatch and a map of LastUpdatedBlocks. These
fields are more about transfer bookkeeping.

Also rename dao.[Get, Put, put]NEP17Balances and the STNEP17Balances
prefix.

Also rename NEP17TransferInfo.Trackers to LastUpdatedBlockTrackers
because NEP17TransferInfo.Balances are to be removed.
2021-07-28 13:22:53 +03:00