It doesn't cost much, but it's used _a lot_, so optimizing it makes sense.
name old time/op new time/op delta
TxHash-8 4.89ns ± 5% 0.54ns ± 2% -88.86% (p=0.008 n=5+5)
name old alloc/op new alloc/op delta
TxHash-8 0.00B 0.00B ~ (all equal)
name old allocs/op new allocs/op delta
TxHash-8 0.00 0.00 ~ (all equal)
We're likely to have something comparable to the current changeset in the
subsequent one. If it's bigger, no big deal, it'll be reallocated; if it's
smaller, no big deal, the next one will be preallocated smaller.
It's very effective in avoiding allocations for big.Int. We don't have a
microbenchmark for the mempool, but this improves TPS metrics by ~1-2%, so it's
noticeable.
Problem: transactions with wrong hashes are accepted to the chain if
consensus nodes are designated as Oracle nodes. The result is a wrong
MerkleRoot for the accepted block. Consensus nodes get such blocks
right from dbft and store them without errors, but if
non-consensus nodes are present in the network, they just can't accept
these "bad" blocks:
```
2021-11-29T12:56:40.533+0300 WARN blockQueue: failed adding block into the blockchain {"error": "invalid block: MerkleRoot mismatch (expected a866b57ad637934f7a7700e3635a549387e644970b42681d865a54c3b3a46122, calculated d465aafabaf4539a3f619d373d178eeeeab7acb9847e746e398706c8c1582bf8)", "blockHeight": 17, "nextIndex": 18}
```
This problem happens because of transaction hash caching. We can't set the
transaction hash if transaction construction hasn't been completed yet.
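A minimal illustration of how lazy hash caching can go wrong (hypothetical
types, not the actual transaction code): once the hash is computed and cached
during construction, later changes to the transaction no longer affect Hash(),
so a half-built transaction ends up with a stale hash.
```go
package main

import (
    "crypto/sha256"
    "fmt"
)

// Tx is a hypothetical transaction with a cached hash.
type Tx struct {
    Nonce  uint32
    Script []byte

    hashed bool
    hash   [32]byte
}

// Hash computes the hash once and then always returns the cached value.
func (t *Tx) Hash() [32]byte {
    if !t.hashed {
        t.hash = sha256.Sum256(append(t.Script, byte(t.Nonce)))
        t.hashed = true
    }
    return t.hash
}

func main() {
    tx := &Tx{Nonce: 1, Script: []byte{0x01}}
    h1 := tx.Hash()       // hash cached while the transaction is still being built
    tx.Nonce = 2          // construction continues after caching
    h2 := tx.Hash()       // the stale cached value is returned
    fmt.Println(h1 == h2) // true, although the transaction has changed
}
```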
Problem:
```
--- FAIL: TestMemCachedPersist (0.07s)
--- FAIL: TestMemCachedPersist/BoltDBStore (0.07s)
testing.go:894: TempDir RemoveAll cleanup: remove C:\Users\Anna\AppData\Local\Temp\TestMemCachedPersist_BoltDBStore294966711\001\test_bolt_db: The process cannot access the file because it is being used by another process.
```
Solution:
Release the resources occupied by the DB.
We use 2 prefixes for storing items because of state synchronization.
This commit allows parametrizing the DAO with the default prefix.
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
b9be892bf9 has made Persist asynchronous, which
is very effective in allowing the system to continue processing
blocks/transactions while flushing things to disk. At the same time it is very
dangerous: if the disk is slow and it takes a long time to flush the KV set
(more than the persisting interval), there might be an even bigger new KV set in
MemCachedStore by the time it finishes. Even if the system immediately starts
to flush this new data set, it (being bigger) can take more time than the
previous one. And while that's happening, a new data set will appear in memory,
potentially bigger again.
So we can easily end up with the system going out of control, consuming more
and more memory and taking more and more time to persist a single set of
data. To avoid this we need to detect such a condition and just wait for Persist
to really finish its job and release the resources.
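A rough sketch of the "wait for the previous flush" idea (hypothetical names,
not the real MemCachedStore API): if the previous background flush hasn't
finished by the time the next one is due, block until it does instead of piling
up ever-growing changesets.
```go
package main

import (
    "fmt"
    "sync"
    "time"
)

// flusher models the behaviour described above: Persist kicks off an
// asynchronous flush, but waits for a still-running previous flush first.
type flusher struct {
    wg sync.WaitGroup
}

func (f *flusher) Persist(flush func()) {
    f.wg.Wait() // detect an unfinished previous flush and wait for it
    f.wg.Add(1)
    go func() {
        defer f.wg.Done()
        flush()
    }()
}

func main() {
    var f flusher
    for i := 0; i < 3; i++ {
        i := i
        f.Persist(func() {
            time.Sleep(50 * time.Millisecond) // simulate a slow disk
            fmt.Println("flushed changeset", i)
        })
    }
    f.wg.Wait()
}
```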
Everywhere it matters (and that's callExFromNative() now) it's incremented
already, so when we're doing Call() at the same time (and it's done to invoke
the `_initialize` method) we're effectively double-incrementing it.
Standards are NEP-11 and NEP-17, not NEP11, not NEP17, not anything
else. Variable/function names of course can use whatever fits, but documents
and comments should be consistent in this regard.
Oracle responses must use the same set of signers as oracle requests even
though the transaction itself is signed by oracle nodes/contract.
We can probably improve interop.Context by removing the Tx field completely and
adding more functionality to Container, but it's not very convenient for
VerifyWitness and would require adding more stub-like methods for Block, so Tx
is used for now (and we do have it in every relevant case).
I don't think it's possible with regular service functioning, but it happens
during testing because of pointer reuse:
WARNING: DATA RACE
Read at 0x00c003a0e3f0 by goroutine 114:
github.com/nspcc-dev/neo-go/pkg/services/notary.(*Notary).verifyIncompleteWitnesses()
/home/runner/work/neo-go/neo-go/pkg/services/notary/notary.go:441 +0x1dc
github.com/nspcc-dev/neo-go/pkg/services/notary.(*Notary).OnNewRequest()
/home/runner/work/neo-go/neo-go/pkg/services/notary/notary.go:188 +0x205
github.com/nspcc-dev/neo-go/pkg/core.TestNotary.func11()
/home/runner/work/neo-go/neo-go/pkg/core/notary_test.go:347 +0x612
github.com/nspcc-dev/neo-go/pkg/core.TestNotary()
/home/runner/work/neo-go/neo-go/pkg/core/notary_test.go:443 +0xe33
testing.tRunner()
/opt/hostedtoolcache/go/1.16.10/x64/src/testing/testing.go:1193 +0x202
Previous write at 0x00c003a0e3f0 by goroutine 104:
github.com/nspcc-dev/neo-go/pkg/services/notary.(*Notary).finalize()
/home/runner/work/neo-go/neo-go/pkg/services/notary/notary.go:338 +0x50a
github.com/nspcc-dev/neo-go/pkg/services/notary.(*Notary).PostPersist()
/home/runner/work/neo-go/neo-go/pkg/services/notary/notary.go:314 +0x297
github.com/nspcc-dev/neo-go/pkg/services/notary.(*Notary).Run()
/home/runner/work/neo-go/neo-go/pkg/services/notary/notary.go:169 +0x4a7
Serializing/deserializing the payload yields this:
Error: Received unexpected error:
both main and fallback transactions should have the same ValidUntil value
See neo-project/neo#2622. The implementation is somewhat asymmetric (and not
very efficient) for binary/JSON encoding/decoding, but it should be
sufficient.
Eventually this will be replaced by `pkg/neotest` invocations but for
now it allows us to remove NNS constants together with the tests.
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
Use a circular buffer, which is a bit more appropriate. The problem is that
the priority queue accepts and stores equal items, which wastes memory even in
the normal usage scenario, but it's especially dangerous if the node is stuck
for some reason. In this case it'll accept the same blocks from peers again and
again and put them into the queue, leaking memory up to an OOM condition.
Notice that the queue length calculation might be wrong in case the circular
buffer wraps, but that's not very likely to happen (usually blocks not coming
from the queue are added by consensus and it's not very fast in doing so).
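A minimal sketch of the circular-buffer idea (hypothetical types, not the
actual blockQueue): slots are indexed by block height modulo the buffer size,
so a block re-received from several peers overwrites a single slot instead of
growing a queue.
```go
package main

import "fmt"

type block struct {
    Index uint32
}

// blockQueue is a fixed-size circular buffer keyed by block height.
type blockQueue struct {
    queue [8]*block // small capacity just for the sketch
    next  uint32    // next block height expected by the chain
}

func (q *blockQueue) put(b *block) bool {
    if b.Index < q.next || b.Index >= q.next+uint32(len(q.queue)) {
        return false // too old or too far ahead
    }
    q.queue[b.Index%uint32(len(q.queue))] = b // duplicates land in the same slot
    return true
}

func (q *blockQueue) pop() *block {
    i := q.next % uint32(len(q.queue))
    b := q.queue[i]
    if b == nil {
        return nil
    }
    q.queue[i] = nil
    q.next++
    return b
}

func main() {
    q := &blockQueue{next: 1}
    q.put(&block{Index: 2})
    q.put(&block{Index: 2}) // duplicate from another peer, no extra memory used
    q.put(&block{Index: 1})
    for b := q.pop(); b != nil; b = q.pop() {
        fmt.Println("accepted block", b.Index)
    }
}
```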
Notes for witnesses:
* [N sig + M multisig + K contract] combinations are possible where N, M, K >= 0.
* Each verification script should be properly filled in.
* Each invocation script should either be empty or contain exactly one
signature.
Real persistent storage guarantees that the result of Seek is sorted
by keys. The idea of the optimisation is to merge two sorted Seek
results into one (memStore + persistentStore), so that
(*MemCachedStore).Seek will return a sorted list. The only thing
that remains is to sort the items got from (*MemoryStore).Seek.
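A sketch of the merge step (hypothetical KV type and function, not the actual
Seek code): both inputs are already sorted by key, so a single linear pass
produces a sorted result, with the in-memory layer shadowing the persistent one
on equal keys.
```go
package main

import (
    "bytes"
    "fmt"
)

// KV is a hypothetical key-value pair used only for this illustration.
type KV struct {
    Key, Value []byte
}

// mergeSeek merges two key-sorted slices into one sorted slice.
func mergeSeek(mem, ps []KV) []KV {
    res := make([]KV, 0, len(mem)+len(ps))
    for len(mem) > 0 && len(ps) > 0 {
        switch cmp := bytes.Compare(mem[0].Key, ps[0].Key); {
        case cmp < 0:
            res = append(res, mem[0])
            mem = mem[1:]
        case cmp > 0:
            res = append(res, ps[0])
            ps = ps[1:]
        default: // same key: the memory layer wins
            res = append(res, mem[0])
            mem, ps = mem[1:], ps[1:]
        }
    }
    res = append(res, mem...)
    return append(res, ps...)
}

func main() {
    mem := []KV{{[]byte("a"), []byte("new")}, {[]byte("c"), []byte("3")}}
    ps := []KV{{[]byte("a"), []byte("old")}, {[]byte("b"), []byte("2")}}
    for _, kv := range mergeSeek(mem, ps) {
        fmt.Printf("%s=%s\n", kv.Key, kv.Value)
    }
}
```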
MemoryStore is used in a MemCachedStore as a persistent layer in tests.
Further commits suppose that persistent storage returns sorted values
from Seek, so sort the result of MemoryStore.Seek.
Benchmark results for 10000 matching items in MemoryStore compared to
master:
name old time/op new time/op delta
MemorySeek-8 712µs ± 0% 3850µs ± 0% +440.52% (p=0.000 n=8+8)
name old alloc/op new alloc/op delta
MemorySeek-8 160kB ± 0% 2724kB ± 0% +1602.61% (p=0.000 n=10+8)
name old allocs/op new allocs/op delta
MemorySeek-8 10.0k ± 0% 10.0k ± 0% +0.24% (p=0.000 n=10+10)
For details on implementation efficiency see
https://github.com/nspcc-dev/neo-go/pull/2193#discussion_r722993358.
(*Billet).Traverse changes:
1. Get rid of the `offset` argument. We can cut `from` and pass just the
part that remains. This implies that the node with the path matching `from` will
also be included in the result, so an additional check needs to be added to
the callback function.
2. Pass `path` and `from` without search prefix. Append prefix to the
result inside the callback.
3. Remove duplicating code.
(*Trie).Find changes:
1. Properly prepare the `from` argument for the traversing function. It closely
depends on the `path` argument.
Instead of flushing everything to `cache` and then to `bc.dao`, wrap `bc.dao`
directly for block/tx data and AERs and then flush to it. Blocks/transactions
are usually processed more quickly than other components, so they easily end
up in `cache` where they directly affect Seek performance for any executing
transaction.
Simple as it is, this change improves the voter NEO transfer benchmark with 1000
accounts by more than 25%, from ~18500 TPS to ~23500 TPS. It doesn't affect
other cases much.
GAS can only be distributed once per block for a particular address, so it
makes little sense trying to calculate it again and again. This fixes
neo-bench for the NEO voter case, because without it we get ~2500 TPS for the
single-address test, and with it that jumps 13-fold to normal values like
~33500.
We need to store the NEO balance's LastUpdateHeight before the GAS mint,
because mint can call onNEP17Payment, and onNEP17Payment can call a NEO
transfer which also calls GAS mint. Storing the balance height first allows us
to avoid this recursion.
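A very rough sketch of that ordering (all names hypothetical): LastUpdateHeight
is set before minting, so a re-entrant distribution attempt triggered from
onNEP17Payment sees the balance as already updated and becomes a no-op.
```go
package main

import "fmt"

// neoBalance is a hypothetical stand-in for the NEO balance state item.
type neoBalance struct {
    Amount           int64
    LastUpdateHeight uint32
}

// distributeGas stores the height first and only then mints, so a recursive
// call for the same height returns early.
func distributeGas(acc *neoBalance, height uint32, mint func()) {
    if acc.LastUpdateHeight == height {
        return // already distributed in this block
    }
    acc.LastUpdateHeight = height // store the height first...
    mint()                        // ...then mint, which may re-enter via onNEP17Payment
}

func main() {
    acc := &neoBalance{Amount: 100}
    distributeGas(acc, 10, func() {
        // onNEP17Payment could trigger another NEO transfer here; the
        // recursive distributeGas call is now a no-op.
        distributeGas(acc, 10, func() {})
        fmt.Println("GAS minted once at height", acc.LastUpdateHeight)
    })
}
```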
We need to copy the result of the `TryGet` method, otherwise the slice can
be modified inside the `Add` or `Update` methods, which leads to an
inconsistent MPT pool state.
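A small sketch of the fix (hypothetical store type): the slice returned by a
TryGet-like method may alias the store's internal buffer, so it's copied before
any modification.
```go
package main

import "fmt"

// store is a hypothetical map-backed store whose TryGet returns the internal
// slice rather than a copy.
type store map[string][]byte

func (s store) TryGet(key string) []byte {
    return s[key]
}

func main() {
    s := store{"node": {0x01, 0x02}}

    val := s.TryGet("node")
    cp := make([]byte, len(val))
    copy(cp, val) // work on a copy, leave the stored value intact

    cp[0] = 0xff
    fmt.Println(s["node"]) // still [1 2]
}
```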
We need several stages to manage the state jump process in order not to mess
up old and new contract storage items and to be sure that genesis state data
is properly removed from the storage. Other operations do not require a
separate stage and can be performed each time `jumpToStateInternal` is
called.
We don't need this method to be exposed, its only user is the
StateSync module. At the same time the StateSync module manages its state by
itself, which guarantees that (*Blockchain).jumpToState will be called
with the proper StateSync stage.
State jump should be an atomic operation, we can't modify contract
storage items' state on the fly. Thus, store fresh items under a temp
prefix and replace the outdated ones after state sync is completed.
Related:
https://github.com/nspcc-dev/neo-go/pull/2019#discussion_r693350460.
Before state sync process can be started, outdated MPT nodes
should be removed from storage. After state sync is completed,
outdated blocks/transactions/AERs should also be removed.
In this commit:
1. Request unknown MPT nodes from peers. Note that the StateSync module itself
shouldn't be responsible for node requests, that's a server duty.
2. Do not request the same node twice, check if it is in storage
already. If so, then the only thing remaining is to update the refcounter.
The MPT restore process is much simpler than regular MPT maintenance: the trie
has a fixed structure, we don't need to remove or rebuild MPT nodes. The
only thing we should do is replace Hash nodes with their unhashed
counterparts and increment the refcount. It's better not to touch the
regular MPT code and create a separate structure for this.
The C# node does not return an empty proof anymore if the path is bad. The C#
node also throws an exception on a bad Put.
Our node does not return an error on delete if the key is empty.
Allow it for (*Trie).Put. And distinguish between an empty value and a nil value
for (*Trie).PutBatch, because the batch is already capable of handling both nil
and empty values. For (*Trie).PutBatch putting a nil value means deletion,
while putting an empty value means just putting a LeafNode with an empty
value.
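A tiny sketch of these value semantics (hypothetical helper, not the real
(*Trie).PutBatch):
```go
package main

import "fmt"

// describeBatchValue mirrors the rule above: nil means delete, a non-nil
// empty slice means a leaf with an empty value.
func describeBatchValue(value []byte) string {
    switch {
    case value == nil:
        return "delete key"
    case len(value) == 0:
        return "put leaf with empty value"
    default:
        return fmt.Sprintf("put leaf with %d-byte value", len(value))
    }
}

func main() {
    fmt.Println(describeBatchValue(nil))        // delete key
    fmt.Println(describeBatchValue([]byte{}))   // put leaf with empty value
    fmt.Println(describeBatchValue([]byte{1}))  // put leaf with 1-byte value
}
```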
Functions are usually immediately replaced (and it's OK for them to be nil,
searching through an array with length of zero is fine), Notifications are
usually appended to (and are absolutely useless in verification contexts).
* both 'to' and 'from' are either Null or Hash160, there is no other
possibility for a valid NEP-17 contract. So returning util.Uint160{} in case of
a parsing error is wrong.
* but this is what allowed burns/mints to work, at the expense of error
allocation inside of util.Uint160DecodeBytesBE()
* Uint160 can technically fit into a regular VM integer, so even though it'd be
quite surprising to see it there, TryBytes() is more correct (and easier!)
to use
* same thing with `amount`: we have `TryInteger()` that easily covers all
possible cases and does appropriate error checking inside (see the sketch
below)
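A simplified sketch of that parsing approach, with a hypothetical item type
standing in for VM stack items (the real stackitem API may differ): conversion
errors are propagated instead of being silently turned into a zero Uint160,
and the amount goes through an integer conversion that covers all
representations.
```go
package main

import (
    "fmt"
    "math/big"
)

// item is a hypothetical stack item used only for this illustration.
type item struct {
    bytes []byte
    num   *big.Int
}

func (i item) TryBytes() ([]byte, error) {
    if i.bytes == nil {
        return nil, fmt.Errorf("not a byte string")
    }
    return i.bytes, nil
}

func (i item) TryInteger() (*big.Int, error) {
    if i.num == nil {
        return nil, fmt.Errorf("not an integer")
    }
    return i.num, nil
}

// parseTransfer checks the Transfer notification items without swallowing errors.
func parseTransfer(from, to, amount item) error {
    for _, addr := range []item{from, to} {
        b, err := addr.TryBytes()
        if err != nil {
            return err // don't turn this into a zero Uint160
        }
        if len(b) != 0 && len(b) != 20 { // empty stands for Null (mint/burn) here
            return fmt.Errorf("bad address length %d", len(b))
        }
    }
    _, err := amount.TryInteger()
    return err
}

func main() {
    err := parseTransfer(
        item{bytes: []byte{}},         // Null-like 'from' (mint)
        item{bytes: make([]byte, 20)}, // 'to'
        item{num: big.NewInt(100)},    // amount
    )
    fmt.Println("parse error:", err)
}
```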
Squash (*DAO).StoreAsTransaction and
(*DAO).StoreConflictingTransactions. It's better to keep them this way,
because StoreAsTransaction is always followed by
StoreConflictingTransactions, so it's an atomic operation.
The logic wasn't changed.
We're using batches in the wrong way during persist: we already have all changes
accumulated in two maps, then we move them into a batch, and then the batch is
applied. For some DBs like BoltDB this batch is just another MemoryStore, so
we essentially just shuffle the changeset from one map to another; for others
like LevelDB the batch is a serialized set of KV pairs, so it doesn't help much
on subsequent PutBatch, we just duplicate the changeset again.
So introduce PutChangeSet that takes two maps with sets and deletes
directly. It also allows us to simplify the MemCachedStore logic.
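A sketch of the idea with a hypothetical signature (the actual method may
differ): the store consumes the put and delete maps directly instead of copying
them into yet another batch object.
```go
package main

import "fmt"

// memoryStore is a hypothetical map-backed store.
type memoryStore struct {
    data map[string][]byte
}

// PutChangeSet applies already-accumulated puts and deletes without an
// intermediate batch.
func (s *memoryStore) PutChangeSet(puts map[string][]byte, dels map[string]bool) error {
    for k, v := range puts {
        s.data[k] = v
    }
    for k := range dels {
        delete(s.data, k)
    }
    return nil
}

func main() {
    s := &memoryStore{data: map[string][]byte{"old": {1}}}
    err := s.PutChangeSet(
        map[string][]byte{"new": {2}},
        map[string]bool{"old": true},
    )
    fmt.Println(err, s.data)
}
```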
neo-bench for single node with 10 workers, LevelDB:
Reference:
RPS 30189.132 30556.448 30390.482 ≈ 30379 ± 0.61%
TPS 29427.344 29418.687 29434.273 ≈ 29427 ± 0.03%
CPU % 33.304 27.179 33.860 ≈ 31.45 ± 11.79%
Mem MB 800.677 798.389 715.042 ≈ 771 ± 6.33%
Patched:
RPS 30264.326 30386.364 30166.231 ≈ 30272 ± 0.36% ⇅
TPS 29444.673 29407.440 29452.478 ≈ 29435 ± 0.08% ⇅
CPU % 34.012 32.597 33.467 ≈ 33.36 ± 2.14% ⇅
Mem MB 549.126 523.656 517.684 ≈ 530 ± 3.15% ↓ 31.26%
BoltDB:
Reference:
RPS 31937.647 31551.684 31850.408 ≈ 31780 ± 0.64%
TPS 31292.049 30368.368 31307.724 ≈ 30989 ± 1.74%
CPU % 33.792 22.339 35.887 ≈ 30.67 ± 23.78%
Mem MB 1271.687 1254.472 1215.639 ≈ 1247 ± 2.30%
Patched:
RPS 31746.818 30859.485 31689.761 ≈ 31432 ± 1.58% ⇅
TPS 31271.499 30340.726 30342.568 ≈ 30652 ± 1.75% ⇅
CPU % 34.611 34.414 31.553 ≈ 33.53 ± 5.11% ⇅
Mem MB 1262.960 1231.389 1335.569 ≈ 1277 ± 4.18% ⇅
It requires only two methods from Blockchainer: AddBlock and
BlockHeight. The new interface will allow the block queue to be easily reused
for state exchange purposes.
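A minimal sketch of the narrowed dependency (simplified stand-in types and an
illustrative interface name): anything providing these two methods can back the
queue, be it the Blockchain or a state sync module.
```go
package main

import "fmt"

type Block struct {
    Index uint32
}

// blockQueuer is an illustrative name for the small interface the queue needs.
type blockQueuer interface {
    AddBlock(b *Block) error
    BlockHeight() uint32
}

// fakeChain is a trivial implementation for the sketch.
type fakeChain struct {
    height uint32
}

func (c *fakeChain) AddBlock(b *Block) error {
    if b.Index != c.height+1 {
        return fmt.Errorf("unexpected block %d", b.Index)
    }
    c.height = b.Index
    return nil
}

func (c *fakeChain) BlockHeight() uint32 { return c.height }

func main() {
    var bq blockQueuer = &fakeChain{}
    fmt.Println(bq.AddBlock(&Block{Index: 1}), bq.BlockHeight())
}
```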
Do not allocate a separate buffer for the transfer.
```
name old time/op new time/op delta
NEP17TransferLog_Append-8 58.8µs ± 3% 32.1µs ± 1% -45.40% (p=0.000 n=10+9)
name old alloc/op new alloc/op delta
NEP17TransferLog_Append-8 118kB ± 1% 44kB ± 3% -63.00% (p=0.000 n=9+10)
name old allocs/op new allocs/op delta
NEP17TransferLog_Append-8 901 ± 1% 513 ± 3% -43.08% (p=0.000 n=9+8)
```
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
We use them quite frequently (consider children for a new branch
node) and it is better to get rid of unneeded allocations.
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
`WriteArray` involves reflection, so it makes sense to optimize the
serialization of transactions and application logs, which are serialized
constantly. Adding a case to the type switch in `WriteArray` is not an
option because we don't want new dependencies for the `io` package.
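A sketch of the manual approach (simplified types and a plain byte buffer
standing in for the package's writer): the element count and each element are
written explicitly instead of going through a reflection-based WriteArray.
```go
package main

import (
    "encoding/binary"
    "fmt"
)

// signer is a hypothetical fixed-layout element to serialize.
type signer struct {
    Account [20]byte
    Scopes  byte
}

func (s *signer) encodeBinary(buf []byte) []byte {
    buf = append(buf, s.Account[:]...)
    return append(buf, s.Scopes)
}

// encodeSigners writes the array length prefix and each element with an
// explicit loop, no reflection involved.
func encodeSigners(buf []byte, signers []*signer) []byte {
    buf = binary.AppendUvarint(buf, uint64(len(signers)))
    for _, s := range signers {
        buf = s.encodeBinary(buf)
    }
    return buf
}

func main() {
    out := encodeSigners(nil, []*signer{{Scopes: 0x01}})
    fmt.Printf("% x\n", out)
}
```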
```
name old time/op new time/op delta
AppExecResult_EncodeBinary-8 852ns ± 3% 656ns ± 2% -22.94% (p=0.000 n=10+9)
name old alloc/op new alloc/op delta
AppExecResult_EncodeBinary-8 448B ± 0% 376B ± 0% -16.07% (p=0.000 n=10+10)
name old allocs/op new allocs/op delta
AppExecResult_EncodeBinary-8 7.00 ± 0% 5.00 ± 0% -28.57% (p=0.000 n=10+10)
```
```
name old time/op new time/op delta
Transaction_Bytes-8 1.29µs ± 3% 0.76µs ± 5% -41.52% (p=0.000 n=9+10)
name old alloc/op new alloc/op delta
Transaction_Bytes-8 1.21kB ± 0% 1.01kB ± 0% -16.56% (p=0.000 n=10+10)
name old allocs/op new allocs/op delta
Transaction_Bytes-8 12.0 ± 0% 7.0 ± 0% -41.67% (p=0.000 n=10+10)
```
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
We know it already, but with the current loading code the VM will hash it once
more. It doesn't help a lot, but it still costs nothing to avoid this
overhead.
name old time/op new time/op delta
VerifyWitness-8 93.4µs ± 3% 92.7µs ± 2% ~ (p=0.353 n=10+10)
name old alloc/op new alloc/op delta
VerifyWitness-8 3.43kB ± 0% 3.40kB ± 0% -0.70% (p=0.000 n=9+9)
name old allocs/op new allocs/op delta
VerifyWitness-8 67.0 ± 0% 66.0 ± 0% -1.49% (p=0.000 n=10+10)
ReadArray() can return some error and we shouldn't overwrite it. At the same
time, limiting ReadArray() to the number of Signers can make it return a wrong
error if the number of witnesses is actually bigger than the number of
signers, so use MaxAttributes.
We burn GAS in OnPersist for every transaction, so some buffer reuse here is
quite natural.
This also doesn't change a lot in the overall TPS picture, maybe adding some
1%.
This duplicates the check, but deduplicates the error path. The check forced
double balance deserialization, which is quite a costly operation, so we better
do it once.
It's hardly noticeable in TPS metrics though, maybe some 1-2%.
Persist by its definition doesn't change MemCachedStore's visible state: all KV
pairs that were accessible via it before Persist remain accessible after
Persist. The only thing it does is flush the current set of KV pairs
from memory to the persistent store. To do that it needs read-only access to
the current KV pair set, but technically it then replaces the maps, so we have
to use a full write lock, which makes MemCachedStore inaccessible for the
duration of Persist. And Persist can take a lot of time, it's about disk access
for regular DBs.
What we do here is create new in-memory maps for MemCachedStore before
flushing the old ones to the persistent store. Then a fake persistent store is
created, which actually is a MemCachedStore with the old maps, so it has exactly
the same visible state. This Store is never accessed for writes, so we can
read it without taking any internal locks, and at the same time we no longer
need write locks for the original MemCachedStore, since we're not using it. All
of this makes it possible to use MemCachedStore normally: reads are handled
going down to whatever level is needed and writes are handled by the new
maps. So while Persist for (*Blockchain).dao does its most time-consuming work
we can process other blocks (reading data for transactions and persisting
storeBlock caches to (*Blockchain).dao).
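A rough and much simplified sketch of that scheme (hypothetical types, no error
handling): Persist swaps in fresh maps under a short lock, flushes the old ones
without holding it, and then drops the temporary layer.
```go
package main

import (
    "fmt"
    "sync"
)

// memCached is a hypothetical two-layer cached store.
type memCached struct {
    mut sync.RWMutex
    mem map[string][]byte // current changeset
    ps  *memCached        // lower layer (nil for the bottom store)
}

func (s *memCached) get(key string) ([]byte, bool) {
    s.mut.RLock()
    v, ok := s.mem[key]
    ps := s.ps
    s.mut.RUnlock()
    if ok || ps == nil {
        return v, ok
    }
    return ps.get(key) // fall through to the lower layer
}

func (s *memCached) put(key string, value []byte) {
    s.mut.Lock()
    s.mem[key] = value
    s.mut.Unlock()
}

func (s *memCached) persist() {
    s.mut.Lock()
    old := &memCached{mem: s.mem, ps: s.ps} // fake store with the same visible state
    s.mem = make(map[string][]byte)         // new writes land here from now on
    s.ps = old
    s.mut.Unlock()

    // The slow flush runs without holding s.mut, so s stays fully usable.
    for k, v := range old.mem {
        old.ps.put(k, v)
    }

    s.mut.Lock()
    s.ps = old.ps // drop the fake store from the chain once it's flushed
    s.mut.Unlock()
}

func main() {
    bottom := &memCached{mem: map[string][]byte{}}
    top := &memCached{mem: map[string][]byte{}, ps: bottom}
    top.put("a", []byte{1})
    top.persist()
    v, ok := top.get("a")
    fmt.Println(v, ok)
}
```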
The change was tested for performance with neo-bench (single node, 10 workers,
LevelDB) on two machines and block dump processing (RC4 testnet up to 62800
with VerifyBlocks set to false) on i7-8565U.
Reference results (bbe4e9cd7b):
Ryzen 9 5950X:
RPS 23616.969 22817.086 23222.378 ≈ 23218 ± 1.72%
TPS 23047.316 22608.578 22735.540 ≈ 22797 ± 0.99%
CPU % 23.434 25.553 23.848 ≈ 24.3 ± 4.63%
Mem MB 600.636 503.060 582.043 ≈ 562 ± 9.22%
Core i7-8565U:
RPS 6594.007 6499.501 6572.902 ≈ 6555 ± 0.76%
TPS 6561.680 6444.545 6510.120 ≈ 6505 ± 0.90%
CPU % 58.452 60.568 62.474 ≈ 60.5 ± 3.33%
Mem MB 234.893 285.067 269.081 ≈ 263 ± 9.75%
DB restore:
real 0m22.237s 0m23.471s 0m23.409s ≈ 23.04 ± 3.02%
user 0m35.435s 0m38.943s 0m39.247s ≈ 37.88 ± 5.59%
sys 0m3.085s 0m3.360s 0m3.144s ≈ 3.20 ± 4.53%
After the change:
Ryzen 9 5950X:
RPS 27747.349 27407.726 27520.210 ≈ 27558 ± 0.63% ↑ 18.69%
TPS 26992.010 26993.468 27010.966 ≈ 26999 ± 0.04% ↑ 18.43%
CPU % 28.928 28.096 29.105 ≈ 28.7 ± 1.88% ↑ 18.1%
Mem MB 760.385 726.320 756.118 ≈ 748 ± 2.48% ↑ 33.10%
Core i7-8565U:
RPS 7783.229 7628.409 7542.340 ≈ 7651 ± 1.60% ↑ 16.72%
TPS 7708.436 7607.397 7489.459 ≈ 7602 ± 1.44% ↑ 16.85%
CPU % 74.899 71.020 72.697 ≈ 72.9 ± 2.67% ↑ 20.50%
Mem MB 438.047 436.967 416.350 ≈ 430 ± 2.84% ↑ 63.50%
DB restore:
real 0m20.838s 0m21.895s 0m21.794s ≈ 21.51 ± 2.71% ↓ 6.64%
user 0m39.091s 0m40.565s 0m41.493s ≈ 40.38 ± 3.00% ↑ 6.60%
sys 0m3.184s 0m2.923s 0m3.062s ≈ 3.06 ± 4.27% ↓ 4.38%
It obviously uses more memory now and utilizes CPU more aggressively, but at
the same time it improves all relevant metrics and finally reaches a
situation where we process 50K transactions in less than a second on Ryzen 9
5950X (going higher than 25K TPS). The other observation is a much more stable
block time, on Ryzen 9 it's as close to 1 second as it could be.
Block processing consists of:
* saving block/transactions to the DB
* executing blocks/transactions
* processing notifications/saving AERs
* updating MPT
* atomically updating Blockchain state
Of these, the first one is completely independent of the others, so it can
easily be done in a separate routine. The third one technically depends on the
second, it just doesn't have data until something is executed. At the same time
it doesn't affect future executions in any way, so we can offload
AER/notification processing to a separate goroutine (while the main thread
proceeds with other transactions).
The MPT update depends on all executions, so it can't be offloaded, but it can
be done concurrently with AER processing. And only the last thing actually needs
all previous ones to be finished, so it's a natural synchronization point.
So we spawn two additional routines and let the main one execute transactions
and update the MPT as fast as it can. While technically all of these routines
could share a single DAO (they are working with different KV sets), benchmarking
shows that using separate DAOs and then persisting them to the lower one
actually works about 7-8% better. At the same time we can simplify the DAOs
used: the Cached one is only relevant for AER processing because it caches
NEP-17 tracking data, everything else can do just fine with Simple.
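A simplified sketch of that split (hypothetical stage functions, the real ones
work with DAOs and the MPT): block/tx storage and AER processing run in their
own goroutines, the main routine executes transactions and updates the MPT, and
everything joins before the final state update.
```go
package main

import (
    "fmt"
    "sync"
)

func processBlock(txCount int) {
    var wg sync.WaitGroup
    aerCh := make(chan int, txCount) // execution results handed off for AER processing

    wg.Add(1)
    go func() { // stage 1: store block and transactions
        defer wg.Done()
        fmt.Println("block/tx data stored")
    }()

    wg.Add(1)
    go func() { // stage 3: process notifications / save AERs as they arrive
        defer wg.Done()
        for i := range aerCh {
            fmt.Println("AER saved for tx", i)
        }
    }()

    for i := 0; i < txCount; i++ { // stage 2: execute transactions in the main routine
        aerCh <- i
    }
    close(aerCh)
    fmt.Println("MPT updated") // stage 4: depends on executions only, stays here

    wg.Wait() // stage 5: atomic state update needs everything above to be finished
    fmt.Println("blockchain state updated")
}

func main() {
    processBlock(3)
}
```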
The change was tested for performance with neo-bench (single node, 10 workers,
LevelDB) on two machines and block dump processing (RC4 testnet up to 50825
with VerifyBlocks set to false) on i7-8565U. neo-bench creates huge blocks
with lots of transactions while RC4 dump mostly consists of empty blocks.
Reference results (06c3dda5d1):
Ryzen 9 5950X:
RPS ≈ 20059.569 21186.328 20158.983 ≈ 20468 ± 3.05%
TPS ≈ 19544.993 20585.450 19658.338 ≈ 19930 ± 2.86%
CPU ≈ 18.682% 23.877% 22.852% ≈ 21.8 ± 12.62%
Mem ≈ 618.981MB 559.246MB 541.539MB ≈ 573 ± 7.08%
Core i7-8565U:
RPS ≈ 5927.082 6526.739 6372.115 ≈ 6275 ± 4.96%
TPS ≈ 5899.531 6477.187 6329.515 ≈ 6235 ± 4.81%
CPU ≈ 56.346% 61.955% 58.125% ≈ 58.8 ± 4.87%
Mem ≈ 212.191MB 224.974MB 205.479MB ≈ 214 ± 4.62%
DB restore:
real 0m12.683s 0m13.222s 0m13.382s ≈ 13.096 ± 2.80%
user 0m18.501s 0m19.163s 0m19.489s ≈ 19.051 ± 2.64%
sys 0m1.404s 0m1.396s 0m1.666s ≈ 1.489 ± 10.32%
After the change:
Ryzen 9 5950X:
RPS ≈ 23056.899 22822.015 23006.543 ≈ 22962 ± 0.54%
TPS ≈ 22594.785 22292.071 22800.857 ≈ 22562 ± 1.13%
CPU ≈ 24.262% 23.185% 25.921% ≈ 24.5 ± 5.65%
Mem ≈ 614.254MB 613.204MB 555.491MB ≈ 594 ± 5.66%
Core i7-8565U:
RPS ≈ 6378.702 6423.927 6363.788 ≈ 6389 ± 0.49%
TPS ≈ 6327.072 6372.552 6311.179 ≈ 6337 ± 0.50%
CPU ≈ 57.599% 58.622% 59.737% ≈ 58.7 ± 1.82%
Mem ≈ 198.697MB 188.746MB 200.235MB ≈ 196 ± 3.18%
DB restore:
real 0m13.576s 0m13.334s 0m12.757s ≈ 13.222 ± 3.18%
user 0m19.113s 0m19.490s 0m20.197s ≈ 19.600 ± 2.81%
sys 0m2.211s 0m1.558s 0m1.559s ≈ 1.776 ± 21.21%
On Ryzen 9 we've got 12% better RPS and 13% better TPS with 12% more CPU and 3%
more RAM used. Core i7-8565U changes don't seem to be statistically significant:
1.8% more RPS, 1.6% more TPS with about the same CPU and 8.5% less RAM
used. It also is 1% worse in DB restore time.
The result is somewhat expected: on a powerful machine with lots of spare
cores we get 10%+ better results, while on an average resource-constrained
laptop it doesn't change much (the machine is already saturated). Overall, this
seems to be worthwhile.
Request NEP17 balances from a set of NEP17 contracts instead of getting
them from storage. LastUpdatedBlock tracking remains untouched, because
there's no way to retrieve it dynamically.
Balances are to be removed from state.NEP17TransferInfo, so the remaining
fields are NextTransferBatch, NewBatch and a map of LastUpdatedBlocks. These
fields are more bookkeeping-related.
Also rename dao.[Get, Put, put]NEP17Balances and the STNEP17Balances
prefix.
Also rename NEP17TransferInfo.Trackers to LastUpdatedBlockTrackers
because NEP17TransferInfo.Balances are to be removed.