We shouldn't use StoragePrice from Blockchain, because its DAO doesn't
contain the whole set of changes made by previous transactions in the
current block. Instead, we should use an up-to-date storage price for
each transaction and retrieve it from the cached DAO.
Using the Blockchain's value leads to the same ExecFeeFactor within
a single block. What we need is to update ExecFeeFactor after each
transaction invocation, so the cached DAO should be used, as it contains
all the relevant changes.
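A minimal sketch of the idea, with hypothetical types and names (the real neo-go interop/Policy API differs): the fee factor is read through whatever DAO the executing transaction sees, which inside a block is the cached DAO, not the Blockchain's persisted one.
```
package feedemo

// DAO is a hypothetical stand-in for the cached DAO interface used during
// transaction execution; the real type and method names differ.
type DAO interface {
	GetStorageItem(contractID int32, key []byte) []byte
}

const defaultExecFeeFactor = 30 // assumed default, for illustration only

// execFeeFactor reads the fee factor through the DAO it's given. For a
// transaction inside a block this must be the block's cached DAO (it already
// holds the changes made by previously executed transactions); reading from
// the Blockchain's persisted DAO would give every transaction in the block
// the same ExecFeeFactor.
func execFeeFactor(d DAO, policyID int32, key []byte) uint32 {
	si := d.GetStorageItem(policyID, key)
	if si == nil {
		return defaultExecFeeFactor
	}
	// The value is stored as a little-endian integer in this sketch.
	var v uint32
	for i := len(si) - 1; i >= 0; i-- {
		v = v<<8 | uint32(si[i])
	}
	return v
}
```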
It's not a perfect solution, but neo-debugger just doesn't work at all with
relative paths. Notice that `saveSequencePoint` still used absolute ones,
leading to invalid debug.json data, but fixing it there doesn't help:
neo-debugger can't load source code using relative paths.
Although it's the caller's duty to avoid re-closing WSClient, we
can still handle it.
Fixes the following neofs-node error:
```
panic: close of closed channel
goroutine 98 [running]:
github.com/nspcc-dev/neo-go/pkg/rpc/client.(*WSClient).Close(...)
github.com/nspcc-dev/neo-go@v0.98.3-pre.0.20220321144433-3b639f518ebb/pkg/rpc/client/wsclient.go:120
github.com/nspcc-dev/neofs-node/pkg/morph/subscriber.(*subscriber).Close(0x13)
github.com/nspcc-dev/neofs-node/pkg/morph/subscriber/subscriber.go:108 +0x29
github.com/nspcc-dev/neofs-node/pkg/morph/event.listener.Stop(...)
github.com/nspcc-dev/neofs-node/pkg/morph/event/listener.go:573
created by github.com/nspcc-dev/neofs-node/pkg/innerring.(*Server).Stop
github.com/nspcc-dev/neofs-node/pkg/innerring/innerring.go:285 +0x12f
```
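A common Go way to make Close idempotent is to guard the channel close with sync.Once; the sketch below is an illustrative pattern, not the actual WSClient code.
```
package wsdemo

import "sync"

// closer demonstrates the pattern: sync.Once guarantees the channel is
// closed exactly once, so a repeated Close() call doesn't panic with
// "close of closed channel".
type closer struct {
	closeOnce sync.Once
	done      chan struct{}
}

func (c *closer) Close() {
	c.closeOnce.Do(func() {
		close(c.done)
	})
}
```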
It has existed since the times we used `smartcontract.Parameter` somewhere
in `NotificationEvent`/`ApplicationLog`. `stackitem.ToJSON` handles this
now.
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
CWE-117:
Log entries created from user input
If unsanitized user input is written to a log entry, a malicious user may be able to forge new log entries.
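A minimal sketch of the usual mitigation, assuming nothing about the actual logging code here: escape line breaks in user-controlled values before they reach the log.
```
package logdemo

import (
	"log"
	"strings"
)

// sanitize escapes characters that would let user input start a new log line.
func sanitize(s string) string {
	return strings.NewReplacer("\n", "\\n", "\r", "\\r").Replace(s)
}

func logUserValue(name string) {
	// The user-controlled value can no longer inject line breaks and forge
	// additional log entries.
	log.Printf("got request from %s", sanitize(name))
}
```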
The only user of (*Block).Trim() is in the DAO and it usually already has a
suitable buffer, so creating another one makes no sense. This also simplifies
error handling a lot.
Currently the only known reason this can happen is processing an
ENDFINALLY opcode before the corresponding ENDTRY.
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
As can be seen at https://dotnetfiddle.net/s7eg21, (int) conversions
result in an exception in C# code.
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
Also, do not limit depth. The limit was introduced in e34fa2e915 as a simple
solution to an OOM problem. In this commit we do exactly the refactoring
described there. The maximum size is the same as the stack item size and
can be changed if needed without significant refactoring.
`1 MiB` seems sufficient, though.
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
1. Move the redirection check to the TCP level. Manually resolve the request
address and create a connection to the first suitable resolved address (see
the sketch after this list).
2. Remove URIValidator. Redirection checks are set in the custom HTTP client,
so users should take care of validation themselves when customizing the
client.
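A minimal sketch of that approach, assuming a plain net/http client and a hypothetical `allowed` filter (the real oracle code differs in details):
```
package oracledemo

import (
	"context"
	"errors"
	"net"
	"net/http"
)

// newClient builds an HTTP client whose connections (including those made
// while following redirects) are dialed only to the first resolved address
// accepted by the allowed filter, i.e. the check happens at the TCP level.
func newClient(allowed func(net.IP) bool) *http.Client {
	dialer := &net.Dialer{}
	transport := &http.Transport{
		DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
			host, port, err := net.SplitHostPort(addr)
			if err != nil {
				return nil, err
			}
			ips, err := net.DefaultResolver.LookupIPAddr(ctx, host)
			if err != nil {
				return nil, err
			}
			for _, ip := range ips {
				if !allowed(ip.IP) {
					continue
				}
				// Dial the resolved IP directly, so nothing can bypass
				// the check via a second DNS lookup.
				return dialer.DialContext(ctx, network, net.JoinHostPort(ip.IP.String(), port))
			}
			return nil, errors.New("no suitable address resolved")
		},
	}
	return &http.Client{Transport: transport}
}
```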
Fix the following linter warning:
```
pkg/rpc/client/wsclient.go:99:18 govet copylocks: literal copies lock value from *cl: github.com/nspcc-dev/neo-go/pkg/rpc/client.Client contains sync.RWMutex
```
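The usual fix for this class of warning (illustrative, not the exact WSClient change) is to stop copying the mutex-containing struct by value:
```
package lockdemo

import "sync"

type Client struct {
	mtx sync.RWMutex
	// ...other fields...
}

type WSClient struct {
	Client // embeds a mutex, so WSClient values must not be copied either
	// ...websocket-specific fields...
}

// Bad (what govet copylocks complains about): copying an existing Client
// value, e.g. wsc := &WSClient{Client: *cl}, copies its RWMutex.
//
// Good: construct the WSClient in place (the embedded Client starts
// zero-valued) and set the needed fields instead of copying a live value.
func newWSClient() *WSClient {
	return &WSClient{}
}
```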
1. Keep the initDone check only in the places where the cache is directly
accessed (see the sketch after this list). We don't need to check it anywhere
else, otherwise we end up with a mess of duplicated checks.
2. Fix a bug in the block deserialisation code. There's no magic, so checking
that initialisation is done is not enough for proper block deserialisation;
we need to manually fill the StateRootEnabled field.
3. Since a transaction doesn't need the network magic to compute its hash, we
don't need to perform Client initialisation before transaction-related
requests.
4. Check that the cache is initialised before accessing the network magic.
5. Refactor the way the Policy contract hash is fetched for Client requests.
We don't really need Client initialisation for that, it's OK to fetch the
Policy hash on the fly.
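A minimal sketch of the intended pattern with hypothetical field names (not the real RPC Client internals): only the accessor that actually reads the cache performs the check, its callers don't.
```
package clientdemo

import (
	"errors"
	"sync"
)

// Client here is an illustrative stand-in, not the real RPC client.
type Client struct {
	mtx      sync.RWMutex
	initDone bool
	magic    uint32
}

// getNetworkMagic is the single place that checks initialisation, because it
// is the place that directly accesses the cached value. Callers just use it
// and propagate the error.
func (c *Client) getNetworkMagic() (uint32, error) {
	c.mtx.RLock()
	defer c.mtx.RUnlock()
	if !c.initDone {
		return 0, errors.New("client is not initialised yet")
	}
	return c.magic, nil
}
```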
When execution fails, it's the only way to change the prompt that keeps
showing the last executed instruction. Example:
```
NEO-GO-VM > loadgo examples/engine/engine.go
READY: loaded 117 instructions
NEO-GO-VM 0 > run
Error: at instruction 3 (SYSCALL): failed to invoke syscall 805851437: syscall not found
NEO-GO-VM 8 > parse
Error: missing argument
NEO-GO-VM 8 > parse 123
Integer to Hex 7b
Integer to Base64 ew==
String to Hex 313233
String to Base64 MTIz
NEO-GO-VM 8 > unload
NEO-GO-VM > exit
```
We don't need to iterate over them at the moment, but since we're
changing the DB format in the next release anyway, let's add this ability
too, just in case.
It couldn't be done previously with two maps and mixed storage, but now all
of the storage changes are located in a single map, so it's trivial to do
exact slice allocations and avoid string->[]byte conversions.
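A sketch of what the single changes map makes possible (the map layout and names here are assumptions, not the exact DAO types):
```
package daodemo

import "sort"

type keyValue struct {
	Key   string // stays a string, no []byte conversion per element
	Value []byte
}

// sortedChanges turns the single changes map into an exactly-sized, ordered
// slice, which is all that iterating over uncommitted storage changes needs.
func sortedChanges(changes map[string][]byte) []keyValue {
	res := make([]keyValue, 0, len(changes)) // exact allocation
	for k, v := range changes {
		res = append(res, keyValue{Key: k, Value: v})
	}
	sort.Slice(res, func(i, j int) bool { return res[i].Key < res[j].Key })
	return res
}
```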
The private DAO is only used in a single thread, which means we can safely
reuse key/data buffers most of the time and handle it all in the DAO.
Doesn't affect any benchmarks.
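A minimal sketch of the buffer-reuse idea under that single-thread assumption (the type and field names are hypothetical):
```
package daodemo

// simpleDAO is an illustrative stand-in: since a private DAO is used by a
// single goroutine, one scratch buffer can back every key it builds.
type simpleDAO struct {
	keyBuf []byte
}

// makeKey rebuilds the storage key in the shared buffer instead of
// allocating a fresh slice on every call.
func (d *simpleDAO) makeKey(prefix byte, rest []byte) []byte {
	d.keyBuf = append(d.keyBuf[:0], prefix)
	d.keyBuf = append(d.keyBuf, rest...)
	return d.keyBuf
}
```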
Most of the time we don't need locking on the higher-level stores and we drop
them after Persist, so that's what the private MemCachedStore is for.
It doesn't improve things in any noticeable way, some ~1% improvement can be
observed in neo-bench under various loads and even less than that in chain
processing. But it seems to be a bit better anyway (fewer allocations, fewer
locks).
They never return errors, so their interface should reflect that. This allows
us to remove quite a lot of useless and never-tested code.
Notice that Get still returns an error. It could be made not to, but we
usually need to differentiate between successful and unsuccessful accesses
anyway, so that wouldn't help much.
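A sketch of the kind of interface change described (the method set is illustrative, not the exact store interfaces):
```
package storedemo

// Before: every write returned an error that the in-memory implementation
// could never actually produce, forcing callers to handle it anyway.
type storeBefore interface {
	Put(key, value []byte) error
	Delete(key []byte) error
	Get(key []byte) ([]byte, error)
}

// After: Put/Delete can't fail, while Get keeps its error to distinguish a
// missing key from a present one.
type storeAfter interface {
	Put(key, value []byte)
	Delete(key []byte)
	Get(key []byte) ([]byte, error)
}
```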
Simple and dumb as it is, this allows separating contract storage from other
things and dramatically improves Seek() time over storage (even though it's
still unordered!), which in turn improves block processing speed.
```
         LevelDB            LevelDB (KeepOnlyLatest)   BoltDB             BoltDB (KeepOnlyLatest)
Master   real 16m27,936s    real 10m9,440s             real 16m39,369s    real 8m1,227s
         user 20m12,619s    user 26m13,925s            user 18m9,162s     user 18m5,846s
         sys   2m56,377s    sys   1m32,051s            sys   9m52,576s    sys   2m9,455s
2 maps   real 10m49,495s    real  8m53,342s            real 11m46,204s    real 5m56,043s
         user 14m19,922s    user 24m6,225s             user 13m25,691s    user 15m4,694s
         sys   1m53,021s    sys   1m23,006s            sys   4m31,735s    sys   2m8,714s
```
neo-bench performance is mostly unaffected: ~0.5% for the 1-1 test and 4% for
the 10K-10K test, both within the regular test error range.
Initially I thought of doing it in the next persist cycle, but testing shows
that it needs just ~2-5% of the time MPT GC does, so doing it in the same
cycle doesn't affect anything.
It's a very special, single-purpose thing, but it improves cumulative time spent
in GC by ~10% for LevelDB and by ~36% for BoltDB during 1050K mainnet chain
processing. While the overall chain import time doesn't change in any
noticeable way (~1%), I think it's still worth it, for machines with slower
disks the difference might be more noticeable.
Batch is only relevant in a multithreaded context; internally it does some
magic and uses the same locking/updating that Update does, so it makes little
sense for us. This doesn't change benchmarks in any noticeable way.
The key idea here is that even though we can't ensure MPT code won't make the
node active again we can order the changes made to the persistent store in
such a way that it practically doesn't matter. What happens is:
* after persist, if it's time to collect our garbage, we do it synchronously
right in the same thread, working with the underlying persistent store
directly
* all the other node code doesn't see much of it, it works with bc.dao or
layers above it
* if MPT doesn't find some stale deactivated node in the storage it's OK,
it'll recreate it in bc.dao
* if MPT finds it and activates it, it's OK too, bc.dao will store it
* while GC is being performed nothing else changes the persistent store
* all subsequent bc.dao persists only happen after the GC is completed, which
means that any changes to the (potentially) deleted nodes have priority;
it's OK for GC to delete something that'll be recreated with the next
persist cycle
Otherwise it's a simple scheme with node status/last active height stored in
the value. Preliminary tests show that it works ~18% worse than the simple
KeepOnlyLatest scheme, but this seems to be the best result so far.
Fixes #2095.