It's not needed anymore with Go 1.13, as we have error wrapping/unwrapping in the
standard library. All errors.Wrap calls are replaced with fmt.Errorf, and some
error strings are improved along the way.
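For instance, a minimal before/after sketch (the function and message are made up for illustration, not taken from the codebase):
```
package main

import (
	"errors"
	"fmt"
	"os"
)

// loadConfig is a hypothetical example of the errors.Wrap -> fmt.Errorf change.
func loadConfig(path string) error {
	if _, err := os.Stat(path); err != nil {
		// Before (github.com/pkg/errors):
		//   return errors.Wrap(err, "failed to stat config")
		// After (Go 1.13 standard library), %w keeps the error unwrappable:
		return fmt.Errorf("failed to stat config: %w", err)
	}
	return nil
}

func main() {
	err := loadConfig("/nonexistent/config.yml")
	// errors.Is/As from the standard library can still inspect the cause.
	fmt.Println(errors.Is(err, os.ErrNotExist)) // true
}
```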
Since 121c9664b we should take the isValid flag of the NativePolicy contract
into account when retrieving the MaxVerificationGas native policy value.
Otherwise we won't be able to get MaxVerificationGas after the node is
restarted, because this value is not actually stored along with the other
native policy values.
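Here's a minimal sketch of the idea (the type, field and constant are illustrative stand-ins, not the actual native Policy code):
```
// Hypothetical sketch, not the actual neo-go native Policy contract code.
package native

const defaultMaxVerificationGas = 50000000 // illustrative default value

type Policy struct {
	isValid            bool
	maxVerificationGas int64
}

// getMaxVerificationGas trusts the in-memory value only while isValid is set.
// Since MaxVerificationGas is not persisted with the other policy values,
// after a restart the cache holds zero; without the isValid check verification
// would run with a zero gas limit and always fail.
func (p *Policy) getMaxVerificationGas() int64 {
	if p.isValid {
		return p.maxVerificationGas
	}
	return defaultMaxVerificationGas
}
```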
This commit fixes a bug with header verification after restarting the node
with existing storage:
```
2020-08-03T12:52:56.158+0300 WARN failed processing headers {"error": "vm failed to execute the script with error: error encountered at instruction 0 (PUSHDATA1): gas limit is exceeded", "errorVerbose": "vm failed to execute the script with error: error encountered at instruction 0 (PUSHDATA1): gas limit is exceeded\ngithub.com/nspcc-dev/neo-go/pkg/core.(*Blockchain).verifyHashAgainstScript\n\t/home/neospcc/Documents/GitProjects/nspcc-dev/neo-go/pkg/core/blockchain.go:1454\ngithub.com/nspcc-dev/neo-go/pkg/core.(*Blockchain).verifyHeaderWitnesses\n\t/home/neospcc/Documents/GitProjects/nspcc-dev/neo-go/pkg/core/blockchain.go:1517\ngithub.com/nspcc-dev/neo-go/pkg/core.(*Blockchain).verifyHeader\n\t/home/neospcc/Documents/GitProjects/nspcc-dev/neo-go/pkg/core/blockchain.go:1175\ngithub.com/nspcc-dev/neo-go/pkg/core.(*Blockchain).addHeaders\n\t/home/neospcc/Documents/GitProjects/nspcc-dev/neo-go/pkg/core/blockchain.go:484\ngithub.com/nspcc-dev/neo-go/pkg/core.(*Blockchain).AddHeaders\n\t/home/neospcc/Documents/GitProjects/nspcc-dev/neo-go/pkg/core/blockchain.go:453\ngithub.com/nspcc-dev/neo-go/pkg/network.(*Server).handleHeadersCmd\n\t/home/neospcc/Documents/GitProjects/nspcc-dev/neo-go/pkg/network/server.go:454\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1373"}
```
We need to compact our in-memory MPT from time to time, otherwise it quickly
fills up all available memory. This raises two obvious questions: when to do
that and to what level.
As for 'when', I think it's quite easy to use our regular persistence interval
as an anchor (and it also frees up some memory), but we can't do that in the
persistence routine itself because of synchronization issues (adding
synchronization primitives would add some cost that I'd also like to avoid),
so we do it indirectly by comparing persisted and current height in `storeBlock`.
Choosing a proper level is another problem, but if we roughly estimate one
full branch node to use 1K of memory (usually it's way less than that), then we
can easily store 1K of these nodes, and that gives us a depth of 10 for our
trie.
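A rough sketch of the trigger (type and method names are illustrative, not the actual Blockchain code):
```
package core // hypothetical sketch, not the actual neo-go Blockchain code

// trie stands in for the in-memory MPT: Flush persists collected nodes,
// Collapse replaces everything below the given depth with hash nodes.
type trie struct{}

func (t *trie) Flush()             { /* write dirty nodes to the store */ }
func (t *trie) Collapse(depth int) { /* prune nodes below depth */ }

type chain struct {
	persistedHeight uint32 // height the async persistence routine has reached
	stateRoot       *trie
}

// storeBlock avoids synchronizing with the persistence routine directly:
// it compares the persisted and current heights, and once persistence has
// caught up it flushes the accumulated MPT nodes and collapses the in-memory
// trie down to a fixed depth (10, per the estimate above).
func (c *chain) storeBlock(height uint32) {
	// ... apply block changes to c.stateRoot ...
	if c.persistedHeight == height-1 {
		c.stateRoot.Flush()
		c.stateRoot.Collapse(10)
	}
}
```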
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
This differed from the C# notion of PrevHash. It's not a previous root, but
rather a hash of the previous serialized MPTRoot structure (that is to be
signed by CNs).
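A minimal sketch of the semantics (the structure and serialization here are placeholders, not the real binary format):
```
package main

import (
	"crypto/sha256"
	"encoding/json"
	"fmt"
)

// MPTRoot is a simplified stand-in for the state root structure signed by CNs.
type MPTRoot struct {
	Index    uint32 `json:"index"`
	Root     string `json:"root"`
	PrevHash string `json:"prevhash"`
}

// prevHash of the next MPTRoot is the hash of the whole previous serialized
// structure, not just of the previous root value.
func prevHash(prev MPTRoot) string {
	data, _ := json.Marshal(prev) // placeholder serialization for illustration
	h := sha256.Sum256(data)
	return fmt.Sprintf("%x", h)
}

func main() {
	r0 := MPTRoot{Index: 0, Root: "aa"}
	r1 := MPTRoot{Index: 1, Root: "bb", PrevHash: prevHash(r0)}
	fmt.Println(r1.PrevHash)
}
```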
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
Because the trie size is rather big, it can't be stored in memory.
Thus some form of caching should also be implemented. To avoid
marshaling/unmarshaling of items which are close to the root and are used
very frequently, we can save them across persists.
This commit implements pruning items at the specified depth,
replacing them with hash nodes.
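A hedged sketch of how collapsing at a depth could look (node types and hashing are simplified, not the actual mpt package):
```
package mpt // simplified illustration, not the actual neo-go mpt package

import "crypto/sha256"

// Node is a trie node; HashNode is a collapsed placeholder that only keeps
// the hash so the full node can be reloaded from storage when needed.
type Node interface{ Hash() [32]byte }

type HashNode struct{ hash [32]byte }

func (h *HashNode) Hash() [32]byte { return h.hash }

type BranchNode struct {
	Children [16]Node
}

func (b *BranchNode) Hash() [32]byte {
	var buf []byte
	for _, c := range b.Children {
		if c != nil {
			h := c.Hash()
			buf = append(buf, h[:]...)
		}
	}
	return sha256.Sum256(buf)
}

// collapse replaces everything below the given depth with hash nodes,
// keeping frequently used nodes near the root in memory across persists.
func collapse(n Node, depth int) Node {
	if n == nil {
		return nil
	}
	if depth == 0 {
		return &HashNode{hash: n.Hash()}
	}
	if b, ok := n.(*BranchNode); ok {
		for i, c := range b.Children {
			b.Children[i] = collapse(c, depth-1)
		}
	}
	return n
}
```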
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
Because there is no distinct type field in JSONized nodes, the distinction
is made via the payload itself, thus all unmarshaling is done via
NodeObject.
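A rough sketch of the approach (the JSON keys are illustrative, not the exact neo-go layout): a NodeObject-style wrapper probes the raw payload and decides which concrete node it holds.
```
package main

import (
	"encoding/json"
	"fmt"
)

// NodeObject-style wrapper: since serialized nodes carry no explicit type
// field, the concrete type is inferred from the payload shape.
type NodeObject struct {
	Node interface{}
}

func (n *NodeObject) UnmarshalJSON(data []byte) error {
	var m map[string]json.RawMessage
	if err := json.Unmarshal(data, &m); err != nil {
		return err
	}
	switch {
	case m["branch"] != nil: // illustrative keys
		var b []json.RawMessage
		if err := json.Unmarshal(m["branch"], &b); err != nil {
			return err
		}
		n.Node = b
	case m["hash"] != nil:
		var h string
		if err := json.Unmarshal(m["hash"], &h); err != nil {
			return err
		}
		n.Node = h
	default:
		return fmt.Errorf("unknown node payload")
	}
	return nil
}

func main() {
	var n NodeObject
	_ = json.Unmarshal([]byte(`{"hash":"abcd"}`), &n)
	fmt.Printf("%T %v\n", n.Node, n.Node) // string abcd
}
```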
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
MPT is a trie with a branching factor of 16, i.e. it consists of sequences over
a 16-element alphabet.
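For illustration, keys are split into half-byte nibbles before walking the trie (a simple sketch, not the exact neo-go helper):
```
package main

import "fmt"

// toNibbles splits each byte of the key into two 4-bit values, producing the
// path over the 16-element alphabet used by the MPT.
func toNibbles(key []byte) []byte {
	res := make([]byte, 0, len(key)*2)
	for _, b := range key {
		res = append(res, b>>4, b&0x0F)
	}
	return res
}

func main() {
	fmt.Println(toNibbles([]byte{0xAB, 0x01})) // [10 11 0 1]
}
```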
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
Get a pair of contracts, which is useful every time we need to test
a syscall with one contract calling the other.
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
Invoke the `_initialize` method on every call if present.
In NEO3 there is no entry point and methods are invoked by offset,
thus the `Main` function is no longer required.
We still have a special `Main` method in tests to simplify them.
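A hedged sketch of the idea (the manifest type and helper are simplified stand-ins, not the actual loader code):
```
package core // simplified illustration, not the actual neo-go loader

// Method is a stand-in for a manifest method descriptor.
type Method struct {
	Name   string
	Offset int
}

// callOffsets returns the offsets to execute for a call: the target method,
// preceded by _initialize when the contract defines it. With no entry point
// in NEO3, the VM simply loads the script and jumps to these offsets.
func callOffsets(methods []Method, target string) (initOff, methodOff int, hasInit, ok bool) {
	methodOff = -1
	for _, m := range methods {
		switch m.Name {
		case "_initialize":
			initOff, hasInit = m.Offset, true
		case target:
			methodOff, ok = m.Offset, true
		}
	}
	return
}
```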
Allow invoking methods by offset:
1. Every invoked contract must have a manifest.
2. Check the argument count on invocation (see the sketch after this list).
3. Change AppCall to a regular syscall.
4. Add a test suite for `System.Contract.Call`.
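Here's a hedged sketch of checks 1-2 (types and names are illustrative stand-ins, not the actual interop code):
```
package core // simplified illustration

import "errors"

type manifestMethod struct {
	Name   string
	Offset int
	Params int
}

type contractState struct {
	Methods []manifestMethod // nil means the contract has no manifest
}

// checkCall validates a System.Contract.Call-style invocation: the callee
// must have a manifest describing the method, and the number of provided
// arguments must match the method declaration.
func checkCall(cs *contractState, method string, args []interface{}) (int, error) {
	if cs == nil || cs.Methods == nil {
		return 0, errors.New("contract has no manifest")
	}
	for _, m := range cs.Methods {
		if m.Name == method {
			if m.Params != len(args) {
				return 0, errors.New("invalid argument count")
			}
			return m.Offset, nil // the VM jumps to this offset
		}
	}
	return 0, errors.New("method not found")
}
```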
Disallow costly verification methods. We put this limit in the policy
contract as it may be subject to change in the future.
In fact, this value also overrides the gas limit for header verification.
Close #1202.
When providing a public key as a subslice, it can still be
decoded as a valid key, thus the interop will not return an error
but rather push `false` on the stack. This test is about providing an
invalid key, so ensure this by setting an invalid prefix.
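For example (a byte-level sketch with made-up helpers, not the actual test code): compressed public keys start with 0x02 or 0x03, so corrupting the prefix guarantees decoding fails instead of a subslice accidentally decoding as another valid key.
```
package main

import "fmt"

// invalidKey copies valid compressed public key bytes and corrupts the prefix,
// guaranteeing that decoding errors out rather than quietly succeeding.
func invalidKey(valid []byte) []byte {
	bad := append([]byte(nil), valid...)
	bad[0] = 0xFF // valid compressed keys start with 0x02/0x03
	return bad
}

func main() {
	valid := make([]byte, 33) // placeholder for real key bytes
	valid[0] = 0x02
	fmt.Printf("% x\n", invalidKey(valid)[:2])
}
```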
Part of #1055.
It should have the `AllowStates` flag.
Also removed unreachable code: we can't have a situation where the
script container is not a transaction in the scope of the `CheckWitness`
method because:
1. Blocks have their own implementation of CheckWitness for
internal usage (the (bc *Blockchain) verifyHeaderWitnesses method).
2. For outside calls of the System.Runtime.CheckWitness interop (e.g.
calls from a smart contract) the script container is always a transaction.
Part of #1055.
Split the methods, as they have a lot of common code. This also fixes a nil
error in storageGetReadOnlyContext in the case when a contract does not have
storage.
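A hedged sketch of the fix (names are simplified, not the real interop implementation): return an error instead of a nil context when the contract has no storage.
```
package core // simplified illustration

import "errors"

type storageContext struct {
	ID       int32
	ReadOnly bool
}

// getStorageContextInternal is shared by storageGetContext and
// storageGetReadOnlyContext; it errors out when the calling contract
// doesn't use storage instead of returning a nil context.
func getStorageContextInternal(hasStorage bool, id int32, readOnly bool) (*storageContext, error) {
	if !hasStorage {
		return nil, errors.New("contract does not have storage")
	}
	return &storageContext{ID: id, ReadOnly: readOnly}, nil
}
```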
Fixes #1144. It's a quite simple approach: we just update the balance info right
upon contract migration. It will slow down migration transactions, but it
takes about 1-2 seconds to Seek through balances at mainnet's 3.8M, so the
approach should still work well enough. The other idea was to make lazy
updates (maintaining a contract migration map), but it's more complicated to
implement (and implies that a balance get might also do a write).
There is also a concern about memory usage, it can give a spike of some tens
of megabytes, but that is also considered acceptable.
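A hedged sketch of the eager approach (the key layout and store API are purely illustrative):
```
package core // simplified illustration, not the actual neo-go migration code

import "bytes"

// store is a minimal key-value abstraction for the sketch.
type store interface {
	Seek(prefix []byte, f func(k, v []byte))
	Put(k, v []byte)
	Delete(k []byte)
}

// migrateBalances rewrites balance records referencing the old contract hash
// to the new one right at migration time. This is the eager approach: it slows
// the migration transaction down, but keeps balance reads simple.
func migrateBalances(s store, balancePrefix, oldHash, newHash []byte) {
	type kv struct{ k, v []byte }
	var toMove []kv
	s.Seek(balancePrefix, func(k, v []byte) {
		if bytes.Contains(k, oldHash) {
			toMove = append(toMove, kv{append([]byte(nil), k...), append([]byte(nil), v...)})
		}
	})
	for _, p := range toMove {
		s.Delete(p.k)
		s.Put(bytes.Replace(p.k, oldHash, newHash, 1), p.v)
	}
}
```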
Part of #1055.
We should check the contract script hash against the one provided in the
manifest and manifest groups. We shouldn't put anything on the stack after return.
And of course, we must not destroy the old contract at the end, as
`contractDestroy` removes all storage items associated with the
old contract ID (which equals the new contract ID). We just remove
the old contract state, which is enough.
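A hedged sketch of the update flow described above (types and helpers are simplified stand-ins, not the actual interop code):
```
package core // simplified illustration, not the actual neo-go interop

import (
	"bytes"
	"errors"
)

type contract struct {
	ID           int32
	ScriptHash   []byte
	ManifestHash []byte // hash recorded in the manifest; must equal the script hash
}

// updateContract replaces the old contract state with the new one. The new
// script hash must match the manifest, nothing is pushed on the stack after
// return, and contractDestroy is NOT called: the old and new contracts share
// the same ID, so destroying the old one would also remove the new contract's
// storage items. Deleting just the old contract state record is enough.
func updateContract(deleteState func(scriptHash []byte) error,
	putState func(*contract) error, oldCs, newCs *contract) error {
	if !bytes.Equal(newCs.ScriptHash, newCs.ManifestHash) {
		return errors.New("contract hash doesn't match manifest")
	}
	if err := deleteState(oldCs.ScriptHash); err != nil {
		return err
	}
	return putState(newCs)
}
```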