name old time/op new time/op delta
TokenTransferLog_Append-8 93.0µs ±170% 46.8µs ±152% ~ (p=0.053 n=10+9)
name old alloc/op new alloc/op delta
TokenTransferLog_Append-8 53.8kB ± 4% 38.6kB ±39% -28.26% (p=0.004 n=8+10)
name old allocs/op new allocs/op delta
TokenTransferLog_Append-8 384 ± 0% 128 ± 0% -66.67% (p=0.000 n=10+10)
We shouldn't use StoragePrice from Blockchain because its dao doesn't
contain the whole set of changes from previous transactions in the
current block. Instead, we should use an updated storage price for
each transaction and retrieve the price from the cached DAO.
Using the Blockchain's value leads to the same ExecFeeFactor within
a single block. What we need is to update ExecFeeFactor after each
transaction invocation, so the cached DAO should be used as it contains
all relevant changes.
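A minimal sketch of the intended pattern (the names below are hypothetical,
not the actual neo-go API): policy values such as StoragePrice and
ExecFeeFactor are read through the DAO of the current execution, so changes
made by earlier transactions of the same block are visible to later ones.

package main

import "fmt"

// PolicyReader reads policy contract values from some storage layer.
type PolicyReader interface {
    GetStoragePrice() int64
    GetExecFeeFactor() int64
}

// executeTx shows where the values must come from: the block-level cached
// DAO that accumulates changes of already-processed transactions, not the
// Blockchain-level DAO that only reflects the previous block.
func executeTx(cachedDAO PolicyReader, txHash string) {
    price := cachedDAO.GetStoragePrice()   // sees earlier txs of this block
    factor := cachedDAO.GetExecFeeFactor() // refreshed after each invocation
    fmt.Printf("tx %s: storage price %d, exec fee factor %d\n", txHash, price, factor)
}

// stubDAO stands in for the real cached DAO in this sketch.
type stubDAO struct{}

func (stubDAO) GetStoragePrice() int64  { return 100000 }
func (stubDAO) GetExecFeeFactor() int64 { return 30 }

func main() {
    executeTx(stubDAO{}, "0xabc")
}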
We don't need to iterate over them at the moment, but since we're changing
the DB format in the next release anyway, let's add this ability too, just
in case.
It couldn't be done previously with two maps and mixed storage, but now all of
the storage changes are located in a single map, so it's trivial to do exact
slice allocations and avoid string->[]byte conversions.
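A rough illustration of why the single map helps (not the actual code, just
the shape of it, with invented names): the result slice can be allocated to
its exact final size up front, and keys can stay as strings so no extra
conversion is needed while draining the map.

package main

import "fmt"

// batchItem keeps the key as a string, so no string->[]byte conversion is
// needed when draining the change map.
type batchItem struct {
    Key   string
    Value []byte
}

// drain turns the single change map into a slice sized exactly to its final
// length, so append never has to grow and re-copy the backing array.
func drain(changes map[string][]byte) []batchItem {
    res := make([]batchItem, 0, len(changes)) // exact allocation
    for k, v := range changes {
        res = append(res, batchItem{Key: k, Value: v})
    }
    return res
}

func main() {
    m := map[string][]byte{"key1": []byte("a"), "key2": []byte("b")}
    fmt.Println(len(drain(m)))
}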
A private DAO is only used in a single thread, which means we can safely
reuse key/data buffers most of the time and handle it all in the DAO.
Doesn't affect any benchmarks.
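A toy version of the buffer-reuse idea (structure and names are illustrative,
assuming single-goroutine access as described above):

package main

import "fmt"

// dao is a stand-in for a private (single-goroutine) DAO: the key buffer
// lives in the structure and is rewritten on every call instead of being
// allocated anew.
type dao struct {
    keyBuf []byte // safe to reuse, access is single-threaded
    store  map[string][]byte
}

// makeKey rebuilds the storage key in the shared buffer.
func (d *dao) makeKey(prefix byte, item []byte) []byte {
    d.keyBuf = append(d.keyBuf[:0], prefix)
    d.keyBuf = append(d.keyBuf, item...)
    return d.keyBuf
}

func (d *dao) get(prefix byte, item []byte) []byte {
    return d.store[string(d.makeKey(prefix, item))]
}

func main() {
    d := &dao{store: map[string][]byte{"\x01a": []byte("value")}}
    fmt.Printf("%s\n", d.get(0x01, []byte("a")))
}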
Most of the time we don't need locking on the higher-level stores and we drop
them after Persist, so that's what private MemCachedStore is for.
It doesn't improve things in any noticeable way; some ~1% can be observed in
neo-bench under various loads and even less than that in chain processing. But
it seems to be a bit better anyway (fewer allocations, fewer locks).
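A simplified sketch of the idea (not the real MemCachedStore): a cache owned
by a single caller can skip mutex acquisition entirely, while the shared
variant keeps the usual locking.

package main

import (
    "fmt"
    "sync"
)

// memCache is a toy cache; the private variant has no mutex at all, the
// shared one locks as usual.
type memCache struct {
    mut     *sync.RWMutex // nil for private (single-owner) caches
    entries map[string][]byte
}

func newPrivateCache() *memCache {
    return &memCache{entries: make(map[string][]byte)} // no mutex allocated
}

func newSharedCache() *memCache {
    return &memCache{mut: new(sync.RWMutex), entries: make(map[string][]byte)}
}

func (c *memCache) Put(k, v []byte) {
    if c.mut != nil { // private caches skip locking entirely
        c.mut.Lock()
        defer c.mut.Unlock()
    }
    c.entries[string(k)] = v
}

func main() {
    p := newPrivateCache()
    p.Put([]byte("k"), []byte("v"))
    fmt.Println(len(p.entries))
}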
They never return errors, so their interface should reflect that. This allows
us to remove quite a lot of useless and never-tested code.
Notice that Get still returns an error. It could be made not to, but we
usually need to differentiate between successful and unsuccessful accesses
anyway, so that wouldn't help much.
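The shape of the change, with illustrative signatures (not copied from the
codebase): writes can't fail for an in-memory layer, so Put and Delete stop
returning errors, while Get keeps its error to tell missing keys apart from
present ones.

package main

import (
    "errors"
    "fmt"
)

var errKeyNotFound = errors.New("key not found")

// Store is the trimmed-down interface: no error plumbing for writes.
type Store interface {
    Put(key, value []byte)
    Delete(key []byte)
    Get(key []byte) ([]byte, error) // still signals missing keys
}

type memStore map[string][]byte

func (m memStore) Put(key, value []byte) { m[string(key)] = value }
func (m memStore) Delete(key []byte)     { delete(m, string(key)) }
func (m memStore) Get(key []byte) ([]byte, error) {
    if v, ok := m[string(key)]; ok {
        return v, nil
    }
    return nil, errKeyNotFound
}

func main() {
    var s Store = memStore{}
    s.Put([]byte("a"), []byte("1"))
    v, err := s.Get([]byte("a"))
    fmt.Println(string(v), err)
}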
Initially I thought of doing it in the next persist cycle, but testing shows
that it needs just ~2-5% of the time MPT GC does, so doing it in the same
cycle doesn't affect anything.
The key idea here is that even though we can't ensure MPT code won't make the
node active again, we can order the changes made to the persistent store in
such a way that it practically doesn't matter (a condensed sketch follows the
list below). What happens is:
* after persist, if it's time to collect our garbage, we do it synchronously
right in the same thread, working with the underlying persistent store
directly
* all the other node code doesn't see much of it, it works with bc.dao or
layers above it
* if MPT doesn't find some stale deactivated node in the storage, it's OK,
it'll recreate it in bc.dao
* if MPT finds it and activates it, that's OK too, bc.dao will store it
* while GC is being performed nothing else changes the persistent store
* all subsequent bc.dao persists only happen after the GC is completed, which
means that any changes to the (potentially) deleted nodes take priority;
it's OK for GC to delete something that'll be recreated with the next
persist cycle
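The ordering above, condensed into a sketch (all names are invented for
illustration, this is not the node's actual GC code): GC runs synchronously
right after persist, directly against the underlying store, and the next
persist can only start once GC has returned, so later writes win over
deletions.

package main

import "fmt"

// persistentStore stands in for the underlying on-disk store.
type persistentStore struct{ gcRuns int }

// removeStaleMPTNodes walks inactive nodes whose deactivation height is old
// enough and drops them. Anything the MPT re-activates later simply gets
// rewritten by a subsequent persist, so the ordering keeps this safe.
func (s *persistentStore) removeStaleMPTNodes(untilHeight uint32) {
    s.gcRuns++
    _ = untilHeight
}

type blockchain struct {
    store      *persistentStore
    height     uint32
    gcInterval uint32
}

// persistBlock flushes bc.dao to the store and, when it's time, runs GC
// synchronously in the same thread; the next persist only starts after GC
// returns.
func (bc *blockchain) persistBlock() {
    bc.height++
    // ... flush bc.dao changes to bc.store here ...
    if bc.gcInterval != 0 && bc.height%bc.gcInterval == 0 {
        bc.store.removeStaleMPTNodes(bc.height - bc.gcInterval)
    }
}

func main() {
    bc := &blockchain{store: &persistentStore{}, gcInterval: 2}
    for i := 0; i < 4; i++ {
        bc.persistBlock()
    }
    fmt.Println("GC runs:", bc.store.gcRuns)
}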
Otherwise it's a simple scheme with node status/last active height stored in
the value. Preliminary tests show that it works ~18% worse than the simple
KeepOnlyLatest scheme, but this seems to be the best result so far.
Fixes #2095.
Add "active" flag into the node data and make the remainder modal, for active
nodes it's a reference counter, for inactive ones the deactivation height is
stored.
Technically, refcounted chains storing just one trie don't need a flag, but
it's a bit simpler this way.
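A minimal sketch of this modal layout (the real serialization in the node
differs, this only illustrates the idea): one "active" byte followed by
either the reference counter or the deactivation height, depending on the
flag.

package main

import (
    "encoding/binary"
    "fmt"
)

// nodeMeta is the modal part of the node data: when active, the second field
// is a reference counter; when inactive, it is the deactivation height.
type nodeMeta struct {
    active   bool
    refCount uint32 // meaningful only when active
    gcHeight uint32 // deactivation height, only when inactive
}

func encode(m nodeMeta) []byte {
    buf := make([]byte, 5)
    if m.active {
        buf[0] = 1
        binary.LittleEndian.PutUint32(buf[1:], m.refCount)
    } else {
        binary.LittleEndian.PutUint32(buf[1:], m.gcHeight)
    }
    return buf
}

func decode(b []byte) nodeMeta {
    m := nodeMeta{active: b[0] == 1}
    if m.active {
        m.refCount = binary.LittleEndian.Uint32(b[1:])
    } else {
        m.gcHeight = binary.LittleEndian.Uint32(b[1:])
    }
    return m
}

func main() {
    fmt.Printf("%+v\n", decode(encode(nodeMeta{active: true, refCount: 3})))
    fmt.Printf("%+v\n", decode(encode(nodeMeta{active: false, gcHeight: 100})))
}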
They're misleading now that we have a variable number of committee
members/validators. The standby list can be seen in the configuration, and
the appropriate numbers can be derived from it as well.