The old implementation could treat a 0x62 byte in
a script as a JMP instruction regardless of whether it is
a real opcode or part of a parameter of another instruction.
In this commit instructions are decoded together with their parameters
during jump label rewriting, so parameter bytes are never misread as opcodes (see the sketch below).
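A minimal sketch of the idea (opcode set and parameter sizes are simplified, and the function names are placeholders rather than the actual neo-go code):

```go
package compiler

// A few NEO 2.0 opcode values used below (JMP family and PUSHDATA1).
const (
	opPUSHDATA1 = 0x4C
	opJMP       = 0x62
	opJMPIF     = 0x63
	opJMPIFNOT  = 0x64
	opCALL      = 0x65
)

// paramSize returns the number of parameter bytes that follow the opcode at
// ip (sketch only; the real decoder handles many more opcodes).
func paramSize(script []byte, ip int) int {
	switch script[ip] {
	case opJMP, opJMPIF, opJMPIFNOT, opCALL:
		return 2 // 16-bit jump offset
	case opPUSHDATA1:
		return 1 + int(script[ip+1]) // length byte plus that many data bytes
	default:
		return 0
	}
}

// rewriteJumps decodes instructions together with their parameters, so a
// 0x62 byte sitting inside a PUSHDATA parameter is never treated as JMP.
func rewriteJumps(script []byte, rewriteOffset func(b []byte)) {
	for ip := 0; ip < len(script); {
		op := script[ip]
		size := 1 + paramSize(script, ip)
		switch op {
		case opJMP, opJMPIF, opJMPIFNOT, opCALL:
			rewriteOffset(script[ip+1 : ip+3])
		}
		ip += size // skip the whole instruction, parameters included
	}
}
```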
Seeing some
blockQueue: failed adding block into the blockchain {"error": "failed to store notifications: not supported", "blockHeight": 713984, "nextIndex": 713985}
in logs is not very helpful.
Recursive execute() calls can affect gas calculation.
This commit ensures that execute() is called only for real opcodes
and moves the duplicated CALL/JMP logic into a separate function.
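A hedged sketch of what that separate function might look like (the type, field names and offset encoding are assumptions made for illustration):

```go
package vm

// context is a minimal stand-in for the VM execution context.
type context struct {
	ip     int // points at the opcode currently being executed
	script []byte
}

// calcJumpOffset resolves a jump/call target relative to the opcode position.
// Both the JMP family and CALL use it directly instead of re-entering
// execute() with a synthetic JMP, so gas is charged exactly once per real
// opcode.
func calcJumpOffset(ctx *context) int {
	offset := int(int16(ctx.script[ctx.ip+1]) | int16(ctx.script[ctx.ip+2])<<8)
	return ctx.ip + offset
}
```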
big.Int's Bytes()/SetBytes() methods are not symmetric.
Moreover, we need to mimic the C# node's behavior (see the sketch after this list):
- if a positive number has its MSB set, a 0x00 byte should be appended
  to distinguish it from negative numbers
- negative numbers should be serialized in two's-complement form
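A minimal sketch of such a conversion (package and function names are placeholders, not the actual neo-go API; the zero case is a convention assumed here):

```go
package bigint

import "math/big"

// reverse flips b in place (big-endian <-> little-endian).
func reverse(b []byte) {
	for i, j := 0, len(b)-1; i < j; i, j = i+1, j-1 {
		b[i], b[j] = b[j], b[i]
	}
}

// ToBytes serializes n as a minimal little-endian two's-complement byte
// string, the way the C# node does, rather than as the unsigned big-endian
// magnitude that big.Int.Bytes() returns.
func ToBytes(n *big.Int) []byte {
	switch n.Sign() {
	case 0:
		return []byte{} // zero: empty representation (assumed convention)
	case 1:
		b := n.Bytes() // big-endian magnitude
		if b[0]&0x80 != 0 {
			// Positive number with MSB set: add a 0x00 byte so it can't be
			// read back as a negative value.
			b = append([]byte{0x00}, b...)
		}
		reverse(b)
		return b
	default:
		abs := new(big.Int).Neg(n)
		b := abs.Bytes()
		// Two's complement over len(b) bytes: 2^(8*len(b)) - |n|.
		tc := new(big.Int).Sub(new(big.Int).Lsh(big.NewInt(1), uint(len(b)*8)), abs)
		res := tc.Bytes()
		// Restore leading zero bytes that big.Int.Bytes() drops.
		for len(res) < len(b) {
			res = append([]byte{0x00}, res...)
		}
		// Negative numbers must have the sign bit set; widen with 0xFF if not.
		if res[0]&0x80 == 0 {
			res = append([]byte{0xFF}, res...)
		}
		reverse(res)
		return res
	}
}
```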
Pre-seed the scriptHash value when we already know it. This eliminates that
particular time waste from the pprof graph, but doesn't really change anything
in the 1.4M -> 1.5M 100K mainnet blocks import test.
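The shape of the change, as a hedged sketch with hypothetical names, assuming NEO's Hash160 (RIPEMD160 over SHA256) for script hashes:

```go
package vm

import (
	"crypto/sha256"

	"golang.org/x/crypto/ripemd160"
)

// scriptContext is a simplified stand-in for the VM execution context.
type scriptContext struct {
	script     []byte
	scriptHash []byte // pre-seeded by the caller when the hash is already known
}

// ScriptHash computes Hash160(script) only when the value wasn't pre-seeded,
// so repeated hashing of the same script disappears from the profile.
func (c *scriptContext) ScriptHash() []byte {
	if c.scriptHash == nil {
		sum := sha256.Sum256(c.script)
		h := ripemd160.New()
		h.Write(sum[:])
		c.scriptHash = h.Sum(nil)
	}
	return c.scriptHash
}
```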
These tests don't belong to the VM package, as they compile some Go code and
run it in a VM. One may call them integration tests, but I prefer to attribute
them to the compiler. Moving these tests into pkg/compiler also allows us to
properly count the compiler coverage they add:
-ok github.com/CityOfZion/neo-go/pkg/compiler (cached) coverage: 69.7% of statements
+ok github.com/CityOfZion/neo-go/pkg/compiler (cached) coverage: 84.2% of statements
This change also fixes the `contant` typo and removes fake packages exposed to
the public by moving foo/bar/foobar into the testdata directory.
This solves two problems:
* adds support for the shortened SYSCALL form that uses IDs (similar to #434, but
  for NEO 2.0, supporting both forms), which is important for compatibility
  with the C# node and the mainnet chain that uses it from some height
* reworks interop plugging to use callbacks rather than appending to the map;
  these map-mangling functions are clearly visible in the VM profiling
  statistics, and we want spawning a VM to be fast, so it makes sense to
  optimize this. This change moves most of the work to the init() phase,
  making VM setup cheaper.
Caveats:
* InteropNameToID accepts `[]byte` because that's the thing we have in
  SYSCALL processing and that's its most frequent use case (sketched below); it
  leads to some conversions in other places, but that's acceptable because
  those are either tests or init()
* the three getInterop functions are `getDefaultVMInterop`, `getSystemInterop`
  and `getNeoInterop`
Our 100K (1.4M->1.5M) block import time improves by ~4% with this change.
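For reference, a sketch of the ID computation mentioned in the caveats; the scheme shown (first four bytes of SHA256 over the name, read as a little-endian uint32) is the usual NEO 2.0 syscall ID derivation, but treat the details as an assumption rather than the exact neo-go code. Handler registration would then happen once, in init(), via callbacks instead of per-VM map copying.

```go
package vm

import (
	"crypto/sha256"
	"encoding/binary"
)

// InteropNameToID maps an interop name to its numeric SYSCALL ID. It takes
// []byte because that's what SYSCALL parameter decoding already has in hand.
func InteropNameToID(name []byte) uint32 {
	h := sha256.Sum256(name)
	return binary.LittleEndian.Uint32(h[:4])
}

// Example (string-typed callers convert once, typically in tests or init()):
//   id := InteropNameToID([]byte("Neo.Runtime.Log"))
```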
Add a `Roll` method to the Stack that doesn't pop and push values, and use it
for ROLL and ROT (sketched after the numbers below).
1.4M->1.5M 100K block import test before:
real 3m44,292s
user 5m43,494s
sys 0m34,741s
After:
real 3m40,449s
user 5m42,701s
sys 0m35,500s
Add a `Swap` method to the Stack and use it for both SWAP and XSWAP, avoiding
element popping and pushing (and the associated accounting costs); see the
sketch after the numbers below.
1.4M->1.5M 100K block import test before:
real 3m51,885s
user 5m54,744s
sys 0m38,444s
After:
real 3m44,292s
user 5m43,494s
sys 0m34,741s
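And the corresponding in-place swap, reusing the sketch Stack type from the Roll example above (names and bounds handling simplified):

```go
// Swap exchanges the elements n1 and n2 positions below the top without any
// popping or pushing; SWAP is Swap(0, 1) and XSWAP n is Swap(0, n).
func (s *Stack) Swap(n1, n2 int) {
	top := len(s.items) - 1
	s.items[top-n1], s.items[top-n2] = s.items[top-n2], s.items[top-n1]
}
```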
Creating a new BinReader for every instruction is a bit too much: it adds
about 1% overhead on block import (and is actually quite visible in the VM
profiling statistics). So use a slightly uglier, but more efficient, method.
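One possible shape of that, shown as a hedged sketch: parameters are sliced straight out of the script bytes instead of going through a freshly allocated reader for every instruction.

```go
package vm

import "encoding/binary"

// paramBytes returns the n parameter bytes of the instruction at ip without
// constructing a new reader; it just reslices the script.
func paramBytes(script []byte, ip, n int) []byte {
	return script[ip+1 : ip+1+n]
}

// jumpOffset decodes a 16-bit jump parameter directly from the script
// (little-endian encoding assumed for the sketch).
func jumpOffset(script []byte, ip int) int16 {
	return int16(binary.LittleEndian.Uint16(paramBytes(script, ip, 2)))
}
```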