b9be892bf9 made Persist asynchronous, which is very effective in allowing the
system to continue processing blocks/transactions while flushing things to
disk. At the same time it is dangerous: if the disk is slow and flushing the
KV set takes longer than the persisting interval, an even bigger new KV set
can accumulate in MemCachedStore by the time the flush finishes. Even if the
system immediately starts to flush this new data set, it (being bigger) can
take more time than the previous one, and meanwhile yet another set, again
potentially bigger, will appear in memory.
So the system can easily go out of control, consuming more and more memory
and taking more and more time to persist a single set of data. To avoid this
we need to detect such a condition and just wait for Persist to really finish
its job and release the resources.
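A minimal sketch of that "wait" behaviour, assuming a hypothetical flusher
interface in place of the real Blockchain/MemCachedStore code (all names here
are illustrative):

```go
// Hypothetical sketch of the "wait for Persist" idea; names are illustrative
// and don't match the real Blockchain/MemCachedStore code.
package persistsketch

import "log"

// flusher stands in for whatever flushes the in-memory KV set to disk.
type flusher interface {
	Persist() (keys int, err error)
}

// persister allows at most one asynchronous flush at a time: a new one can
// only start after the previous one has finished, so the in-memory set can't
// grow without bound while the disk is busy.
type persister struct {
	store flusher
	idle  chan struct{} // holds a token while no flush is running
}

func newPersister(store flusher) *persister {
	p := &persister{store: store, idle: make(chan struct{}, 1)}
	p.idle <- struct{}{} // nothing is being flushed initially
	return p
}

// persistOrWait blocks until the previous flush completes and only then
// starts a new one in the background.
func (p *persister) persistOrWait() {
	<-p.idle // slow disk? wait here instead of accumulating more data in memory
	go func() {
		defer func() { p.idle <- struct{}{} }()
		if _, err := p.store.Persist(); err != nil {
			log.Printf("persist failed: %v", err)
		}
	}()
}
```

The point is simply that the next flush can't be scheduled while the previous
one is still running, so at most one extra KV set accumulates in memory.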
Standards are NEP-11 and NEP-17, not NEP11, not NEP17, not anything
else. Variable/function names can of course use whatever fits, but documents
and comments should be consistent in this respect.
Oracle responses must use the same set of signers as oracle requests even
though the transaction itself is signed by oracle nodes/contract.
We can probably improve interop.Context by removing the Tx field completely and
adding more functionality to Container, but that's not very convenient for
VerifyWitness and would require adding more stub-like methods for Block, so Tx
is used for now (and we do have it in every relevant case).
I don't think it's possible during regular service operation, but it happens
in testing because of pointer reuse:
WARNING: DATA RACE
Read at 0x00c003a0e3f0 by goroutine 114:
github.com/nspcc-dev/neo-go/pkg/services/notary.(*Notary).verifyIncompleteWitnesses()
/home/runner/work/neo-go/neo-go/pkg/services/notary/notary.go:441 +0x1dc
github.com/nspcc-dev/neo-go/pkg/services/notary.(*Notary).OnNewRequest()
/home/runner/work/neo-go/neo-go/pkg/services/notary/notary.go:188 +0x205
github.com/nspcc-dev/neo-go/pkg/core.TestNotary.func11()
/home/runner/work/neo-go/neo-go/pkg/core/notary_test.go:347 +0x612
github.com/nspcc-dev/neo-go/pkg/core.TestNotary()
/home/runner/work/neo-go/neo-go/pkg/core/notary_test.go:443 +0xe33
testing.tRunner()
/opt/hostedtoolcache/go/1.16.10/x64/src/testing/testing.go:1193 +0x202
Previous write at 0x00c003a0e3f0 by goroutine 104:
github.com/nspcc-dev/neo-go/pkg/services/notary.(*Notary).finalize()
/home/runner/work/neo-go/neo-go/pkg/services/notary/notary.go:338 +0x50a
github.com/nspcc-dev/neo-go/pkg/services/notary.(*Notary).PostPersist()
/home/runner/work/neo-go/neo-go/pkg/services/notary/notary.go:314 +0x297
github.com/nspcc-dev/neo-go/pkg/services/notary.(*Notary).Run()
/home/runner/work/neo-go/neo-go/pkg/services/notary/notary.go:169 +0x4a7
Serializing/deserializing the payload yields this:
Error: Received unexpected error:
both main and fallback transactions should have the same ValidUntil value
See neo-project/neo#2622. The implementation is somewhat asymmetric (and not
very efficient) for binary/JSON encoding/decoding, but it should be
sufficient.
Eventually this will be replaced by `pkg/neotest` invocations but for
now it allows us to remove NNS constants together with the tests.
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
Use a circular buffer, which is a bit more appropriate here. The problem is
that the priority queue accepts and stores equal items, which wastes memory
even in the normal usage scenario, but is especially dangerous if the node is
stuck for some reason. In this case it keeps accepting the same blocks from
peers and putting them into the queue again and again, leaking memory up to an
OOM condition.
Notice that the queue length calculation might be wrong if the circular buffer
wraps around, but that's not very likely to happen (usually blocks not coming
from the queue are added by consensus, which is not very fast in doing so).
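A rough sketch of such a buffer, with made-up names and sizes (this is not the
actual neo-go block queue, just the idea): blocks live in a fixed-size ring
indexed by height modulo its size, so re-receiving a block overwrites its own
slot instead of growing the queue.

```go
// Hypothetical height-indexed circular buffer for blocks; not the actual
// neo-go block queue, just an illustration of why duplicates can't pile up:
// a re-received block lands in the very same slot.
package bqueuesketch

const cacheSize = 2000

type block struct {
	Index uint32 // block height
	// other fields omitted
}

type blockQueue struct {
	ring   [cacheSize]*block
	height uint32 // height of the last block handed over to the chain
}

// put stores a block unless it's already processed or too far ahead of the
// current height; putting the same block twice just rewrites the same slot.
func (q *blockQueue) put(b *block) {
	if b.Index <= q.height || b.Index > q.height+cacheSize {
		return
	}
	q.ring[b.Index%cacheSize] = b
}

// next returns the block for the next height if it has already arrived.
func (q *blockQueue) next() *block {
	b := q.ring[(q.height+1)%cacheSize]
	if b == nil || b.Index != q.height+1 {
		return nil
	}
	q.ring[b.Index%cacheSize] = nil
	q.height = b.Index
	return b
}
```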
Notes for witnesses:
* Any [N sig + M multisig + K contract] combination is possible, where N, M, K >= 0.
* Each verification script should be properly filled in.
* Each invocation script should either be empty or contain exactly one
signature.
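A hypothetical check of the last two points, using plain byte slices instead
of the real transaction.Witness type (a single-signature invocation script in
Neo is PUSHDATA1 64 followed by the 64-byte signature):

```go
// Hypothetical check of the last two invariants above, using plain byte
// slices instead of the real transaction.Witness type.
package witnesssketch

import "errors"

// A single-signature invocation script is PUSHDATA1 (0x0C), length 64 and the
// 64-byte signature itself, 66 bytes in total.
func checkWitnessScripts(invocation, verification []byte) error {
	if len(verification) == 0 {
		return errors.New("verification script must be filled in")
	}
	if len(invocation) == 0 {
		return nil // an empty invocation script is fine
	}
	if len(invocation) != 66 || invocation[0] != 0x0C || invocation[1] != 64 {
		return errors.New("invocation script must contain exactly one signature")
	}
	return nil
}
```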
Real persistent storage guarantees that the result of Seek is sorted
by keys. The idea of the optimisation is to merge two sorted seek
results (memStore+persistentStore) into one, so that
(*MemCachedStore).Seek returns a sorted list. The only thing
that remains is to sort the items returned by (*MemoryStore).Seek.
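A simplified sketch of the merge itself, with an illustrative kv type rather
than the real storage API; on equal keys the in-memory item wins because it is
newer than the persisted one:

```go
// Hypothetical sketch of merging two key-sorted Seek results; the kv type and
// the function are illustrative, not the real MemCachedStore code.
package mergesketch

import "bytes"

type kv struct {
	Key, Value []byte
}

// mergeSorted interleaves two sorted slices; on equal keys the in-memory item
// shadows the persisted one because it's newer.
func mergeSorted(mem, persistent []kv) []kv {
	res := make([]kv, 0, len(mem)+len(persistent))
	i, j := 0, 0
	for i < len(mem) && j < len(persistent) {
		switch cmp := bytes.Compare(mem[i].Key, persistent[j].Key); {
		case cmp < 0:
			res = append(res, mem[i])
			i++
		case cmp > 0:
			res = append(res, persistent[j])
			j++
		default:
			res = append(res, mem[i])
			i++
			j++
		}
	}
	res = append(res, mem[i:]...)
	res = append(res, persistent[j:]...)
	return res
}
```

A real implementation also has to skip keys that are marked as deleted in the
memory layer; that's omitted here.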
MemoryStore is used as a persistent layer of MemCachedStore in tests.
Further commits assume that persistent storage returns sorted values
from Seek, so sort the result of MemoryStore.Seek.
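A minimal illustration of what this amounts to for a map-backed store (names
are made up, not the actual MemoryStore API):

```go
// Hypothetical map-backed Seek that sorts matching keys before invoking the
// callback; names don't correspond to the actual MemoryStore API.
package memsortsketch

import (
	"sort"
	"strings"
)

func seekSorted(m map[string][]byte, prefix string, f func(k, v []byte)) {
	keys := make([]string, 0, len(m))
	for k := range m {
		if strings.HasPrefix(k, prefix) {
			keys = append(keys, k)
		}
	}
	sort.Strings(keys) // map iteration order is random, so sort explicitly
	for _, k := range keys {
		f([]byte(k), m[k])
	}
}
```

That extra sort is what the numbers below pay for.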
Benchmark results for 10000 matching items in MemoryStore compared to
master:
name old time/op new time/op delta
MemorySeek-8 712µs ± 0% 3850µs ± 0% +440.52% (p=0.000 n=8+8)
name old alloc/op new alloc/op delta
MemorySeek-8 160kB ± 0% 2724kB ± 0% +1602.61% (p=0.000 n=10+8)
name old allocs/op new allocs/op delta
MemorySeek-8 10.0k ± 0% 10.0k ± 0% +0.24% (p=0.000 n=10+10)
For details on implementation efficiency see
https://github.com/nspcc-dev/neo-go/pull/2193#discussion_r722993358.
(*Billet).Traverse changes:
1. Get rid of the `offset` argument. We can cut `from` and pass just the
part that remains. This implies that the node with the path matching `from`
will also be included in the result, so an additional check needs to be added
to the callback function.
2. Pass `path` and `from` without the search prefix. Append the prefix to the
result inside the callback (see the sketch below).
3. Remove duplicated code.
(*Trie).Find changes:
1. Properly prepare the `from` argument for the traversing function. It closely
depends on the `path` argument.
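A heavily simplified and entirely hypothetical sketch of how the prefix/`from`
bookkeeping from points 1-2 above fits together with Find's `from` preparation
(this is not the actual MPT code, all names are made up):

```go
// Heavily simplified and entirely hypothetical; the real Billet/Trie code is
// much more involved. It only shows the prefix/from bookkeeping.
package findsketch

import "bytes"

// find assumes `from` (if given) starts with `prefix`. The traversal callback
// receives paths relative to the prefix and restores the full key itself.
func find(prefix, from []byte, traverse func(from []byte, process func(relPath, val []byte) bool)) [][]byte {
	rel := from
	if len(from) > 0 {
		rel = from[len(prefix):] // cut the prefix, traverse just the remainder
	}
	var res [][]byte
	traverse(rel, func(relPath, val []byte) bool {
		full := append(append([]byte{}, prefix...), relPath...) // restore the full key
		if len(from) > 0 && bytes.Equal(full, from) {
			return false // the node matching `from` is visited, but excluded from results
		}
		res = append(res, full)
		return false // "false" means "keep going" in this sketch
	})
	return res
}
```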