It wasn't actually requesting transactions, but rather sending an inventory
message telling everyone that we have them, which is completely wrong and
easily leads to a ChangeView that could be avoided.
When system and network pressure is high, it can be beneficial
to reuse transactions that were already proposed.
The assumption is that they are more likely to be in other
nodes' memory pools.
If the blockchain is not closed, logging in a deferred function can occur
after the test has finished, which leads to a panic with
"Log in goroutine after Test* has completed".
There is no point in encoding the output of this function in WIF format;
most users actually want the real key, and those who need a WIF can
easily get it from the key (which is simpler than getting the key from the
WIF).
It also fixes a severe bug in NEP2Decrypt: base58 decoding errors were not
processed correctly.
An error in Seek means something is terribly wrong (e.g. the DB was not
opened), and dropping the error is not the right thing to do, because the
caller will continue working with the wrong view.
buildMerkleTree() is internal to the hash package, and anyone who calls it
with `len(leaves) == 0` deserves a panic. As that's the only error case in it,
we can remove the error return value from this function and simplify
NewMerkleTree().
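A sketch of the resulting shape (types and hashing simplified; the real code
differs in details such as the tree node structure and hashing scheme):

    package hash

    import (
        "crypto/sha256"
        "errors"
    )

    // Uint256 stands in for the package's real hash type.
    type Uint256 = [32]byte

    // NewMerkleTree keeps the only meaningful check; past this point any
    // failure is programmer error.
    func NewMerkleTree(hashes []Uint256) (Uint256, error) {
        if len(hashes) == 0 {
            return Uint256{}, errors.New("length of the hashes cannot be zero")
        }
        return buildMerkleTree(hashes), nil
    }

    // buildMerkleTree is internal: calling it with len(leaves) == 0
    // deserves a panic, which is its only failure mode.
    func buildMerkleTree(leaves []Uint256) Uint256 {
        if len(leaves) == 0 {
            panic("length of leaves cannot be zero")
        }
        if len(leaves) == 1 {
            return leaves[0]
        }
        parents := make([]Uint256, (len(leaves)+1)/2)
        for i := range parents {
            left := leaves[i*2]
            right := left // an odd leaf is paired with itself
            if i*2+1 < len(leaves) {
                right = leaves[i*2+1]
            }
            parents[i] = sha256.Sum256(append(left[:], right[:]...))
        }
        return buildMerkleTree(parents)
    }

Returning the root hash directly is a simplification here; the point is only
where the zero-length check lives.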
Turns out, our dApps use it a lot, and we were going to the DB to get it,
which is a useless waste of time. Technically we could also remove blockHeight
here, but we're not doing that at the moment as it's more involved.
It eliminates this time waste from the pprof graph, but doesn't change the
1.4M -> 1.5M 100K mainnet block import test case in any noticeable way.
Preseed the scriptHash value when we already know it. This eliminates the
time waste from the pprof graph, but doesn't really change anything in the
1.4M -> 1.5M 100K mainnet block import test.
These don't belong to the vm package as they compile some Go code and run it
in a VM. One may call them integration tests, but I prefer to attribute them
to the compiler. Moving these tests into pkg/compiler also allows us to
properly count the compiler coverage they add:
-ok github.com/CityOfZion/neo-go/pkg/compiler (cached) coverage: 69.7% of statements
+ok github.com/CityOfZion/neo-go/pkg/compiler (cached) coverage: 84.2% of statements
This change also fixes a `contant` typo and removes fake packages exposed to
the public by moving foo/bar/foobar into the testdata directory.
This solves two problems:
* adds support for the shortened SYSCALL form that uses IDs (similar to #434,
  but for NEO 2.0, supporting both forms), which is important for compatibility
  with the C# node and the mainnet chain that uses it from some height
* reworks interop plugging to use callbacks rather than appending to the map;
  these map-mangling functions are clearly visible in the VM profiling
  statistics, and we want spawning a VM to be fast, so it makes sense to
  optimize this. This change moves most of the work to the init() phase,
  making VM setup cheaper.
Caveats:
* InteropNameToID accepts `[]byte` because that's what we have in SYSCALL
  processing and that's its most frequent use case (see the sketch below); it
  leads to some conversions in other places, but that's acceptable because
  those are either tests or init()
* the three getInterop functions are: `getDefaultVMInterop`,
  `getSystemInterop` and `getNeoInterop`
Our 100K (1.4M->1.5M) block import time improves by ~4% with this change.
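For reference, a sketch of the ID computation behind the shortened form: NEO
derives the ID from the first 4 bytes of the SHA-256 hash of the service name
(the little-endian interpretation shown here is an assumption):

    package vm

    import (
        "crypto/sha256"
        "encoding/binary"
    )

    // InteropNameToID converts an interop service name into its 32-bit ID:
    // the first 4 bytes of the SHA-256 hash of the name. Taking []byte fits
    // SYSCALL processing, which already has the name as a byte slice.
    func InteropNameToID(name []byte) uint32 {
        h := sha256.Sum256(name)
        return binary.LittleEndian.Uint32(h[:4])
    }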
Fix duping and add tests.
The C# node actually implements DUP the same way we did, but it creates a new
element when accessing some particular value (like BigInt() or Bytes()), so in
the end this DUP implementation doesn't lead to any visible side effects. In
our case I think it's more appropriate to fix DUP (and its variants) itself,
avoiding useless allocations in the VM.
Add a `Roll` method to the Stack that doesn't pop and push values, and use it
for ROLL and ROT.
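An illustrative slice-backed sketch of the idea (the real Stack is
element-based, but the relocation-in-place is the same):

    package vm

    import "errors"

    // Item stands in for the VM's stack item type.
    type Item = interface{}

    type Stack struct{ items []Item } // top of the stack is the last element

    // Roll moves the element at depth n to the top in place, avoiding the
    // pop/push accounting. ROT is then just Roll(2).
    func (s *Stack) Roll(n int) error {
        l := len(s.items)
        if n < 0 || n >= l {
            return errors.New("bad index")
        }
        if n == 0 {
            return nil
        }
        pos := l - 1 - n
        e := s.items[pos]
        copy(s.items[pos:], s.items[pos+1:]) // shift everything above it down
        s.items[l-1] = e                     // and put it on top
        return nil
    }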
1.4M->1.5M 100K block import test before:
real 3m44,292s
user 5m43,494s
sys 0m34,741s
After:
real 3m40,449s
user 5m42,701s
sys 0m35,500s
Add a `Swap` method to the Stack and use it for both SWAP and XSWAP. This
avoids element popping and pushing (and the associated accounting costs).
1.4M->1.5M 100K block import test before:
real 3m51,885s
user 5m54,744s
sys 0m38,444s
After:
real 3m44,292s
user 5m43,494s
sys 0m34,741s
First of all, it was wrong: it was not really checking for inputs, it compared
tx hashes for some reason; second, when it did compare inputs, it compared
only the PrevIndex part of them, which is also wrong.
Also, there is absolutely no reason to go through GetVerifiedTransactions()
here: we don't need this copy of pointers, and it can also be outdated by the
time we finish our check.
Before:
BenchmarkTXPerformanceTest-4
5000 485506 ns/op 65886 B/op 409 allocs/op
ok github.com/CityOfZion/neo-go/integration 3.212s
After:
BenchmarkTXPerformanceTest-4
5000 371104 ns/op 44367 B/op 408 allocs/op
ok github.com/CityOfZion/neo-go/integration 2.712s
This simple change improves our BenchmarkTXPerformanceTest by 14%, simply
because we don't waste time on reallocations during append().
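The change is the classic preallocation pattern; a hedged sketch with
stand-in types (the real types and loop body differ):

    package core

    // Uint256 and Transaction are stand-ins for the real types.
    type Uint256 [32]byte

    type Transaction struct{ hash Uint256 }

    func (t *Transaction) Hash() Uint256 { return t.hash }

    // Before: every append may reallocate and copy the backing array.
    func hashesBefore(txes []*Transaction) []Uint256 {
        var hashes []Uint256
        for _, tx := range txes {
            hashes = append(hashes, tx.Hash())
        }
        return hashes
    }

    // After: a single allocation, since the final length is known up front.
    func hashesAfter(txes []*Transaction) []Uint256 {
        hashes := make([]Uint256, 0, len(txes))
        for _, tx := range txes {
            hashes = append(hashes, tx.Hash())
        }
        return hashes
    }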
Before:
10000 439754 ns/op 218859 B/op 428 allocs/op
ok github.com/CityOfZion/neo-go/integration 5.423s
After:
10000 369833 ns/op 87209 B/op 412 allocs/op
ok github.com/CityOfZion/neo-go/integration 4.612s
Creating a new BinReader for every instruction is a bit too much, and it adds
about 1% overhead on block import (it's actually quite visible in the VM
profiling statistics). So use a slightly uglier but more efficient method.
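One way to picture the technique (a sketch, not the exact code): keep a
single reader over the whole program and reposition it per instruction,
rather than constructing a fresh reader each time:

    package vm

    import (
        "bytes"
        "io"
    )

    // readOperand repositions one long-lived reader instead of allocating a
    // new one per instruction; the avoided allocation is the point.
    func readOperand(prog *bytes.Reader, ip, n int) ([]byte, error) {
        if _, err := prog.Seek(int64(ip), io.SeekStart); err != nil {
            return nil, err
        }
        buf := make([]byte, n)
        if _, err := io.ReadFull(prog, buf); err != nil {
            return nil, err
        }
        return buf, nil
    }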
It's useless work being done before it's actually needed. These (updated with
new values) are going to be written with some kind of Put anyway, so writing
them here is just a waste of time.
We're spending a lot of time here, 100K blocks import starting at 1.4M, before
this patch:
real 4m17,748s
user 6m23,316s
sys 0m37,866s
After:
real 3m54,968s
user 5m56,547s
sys 0m39,398s
A 9% improvement is substantial enough to justify this change.
Importing 100K blocks starting at 1.4M, before this patch:
real 6m0,356s
user 8m52,293s
sys 0m47,372s
After this patch:
real 4m17,748s
user 6m23,316s
sys 0m37,866s
Almost 30% better.
Do not fill the verification script randomly, as there is some probability
of it being executed successfully.
time="2019-12-12T17:24:22+03:00" level=info msg="blockchain persist completed" blockHeight=0 headerHeight=0 persistedBlocks=0 persistedKeys=15 took="54.474µs"
time="2019-12-12T17:24:23+03:00" level=info msg="blockchain persist completed" blockHeight=0 headerHeight=0 persistedBlocks=0 persistedKeys=15 took="49.312µs"
2019-12-12T17:24:24.026+0300 DEBUG can't verify payload from #%d1 {"module": "dbft"}
--- FAIL: TestPayload_Sign (0.00s)
payload_test.go:302:
Error Trace: payload_test.go:302
Error: Should be false
Test: TestPayload_Sign
FAIL
coverage: 75.8% of statements
FAIL github.com/CityOfZion/neo-go/pkg/consensus 2.145s
It's a getter function, and even though it's quite fancy with its transaction
processing (for consensus operation), it shouldn't ever change the state of
the Blockchain. If we were to change anything here, these changes might
conflict with the actual block processing later, or might lead to a broken
state (if transactions aren't approved for some reason).
go vet is not happy about them:
pkg/io/binaryReader.go:92:21: method ReadByte() byte should have signature ReadByte() (byte, error)
pkg/io/binaryWriter.go:75:21: method WriteByte(u8 byte) should have signature WriteByte(byte) error
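The complaint comes from standard library conventions: these method names are
reserved by io.ByteReader and io.ByteWriter, which prescribe exactly the
signatures go vet asks for:

    // From the standard io package:
    type ByteReader interface {
        ReadByte() (byte, error)
    }

    type ByteWriter interface {
        WriteByte(c byte) error
    }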
This seriously improves the serialization/deserialization performance for
several reasons:
* no time spent in `binary` reflection
* no memory allocations being made on every read/write
* uses fast ReadBytes everywhere it's appropriate
It also makes Fixed8 Serializable just for convenience.
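The gist of the reflection removal, sketched on a hypothetical fixed-layout
struct (the real Serializable types and reader API differ):

    package payload

    import (
        "encoding/binary"
        "io"
    )

    // header is a hypothetical fixed-layout structure for illustration.
    type header struct {
        Magic uint32
        Nonce uint64
    }

    // Before, reflection-based (binary.Read walks the struct via reflect
    // and allocates on every call):
    //
    //     err := binary.Read(r, binary.LittleEndian, h)
    //
    // After: one buffer, explicit offsets, no reflection.
    func (h *header) decodeBinary(r io.Reader) error {
        var buf [12]byte
        if _, err := io.ReadFull(r, buf[:]); err != nil {
            return err
        }
        h.Magic = binary.LittleEndian.Uint32(buf[0:4])
        h.Nonce = binary.LittleEndian.Uint64(buf[4:12])
        return nil
    }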
add dao which takes care of all CRUD operations on storage
remove blockchain state since everything is stored on change
remove storage operations from structs (entities)
move structs to the entities package
This change (closely related to the neo-project/neo#1321 proposal) speeds up
the 1.4M mainnet blocks import by 30%. Basically, we're eliminating key
decoding for the block's multisignature, which has the same keys most of the
time.
Things I don't like about this patch:
* yet another parameter for verifyHashAgainstScript()
* vm keys are not copied in/out
But it's rather simple and solves the problem for this particular case, so I
think it's worth it.
It can't really be solved in many cases (it's used in the P2P protocol, and we
have to follow the usual conventions there), and in most cases we don't care
about the difference between a nil slice and a zero-length slice.
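For reference, the difference being deemed unimportant here:

    package main

    import (
        "bytes"
        "fmt"
        "reflect"
    )

    func main() {
        var nilSlice []byte    // nil slice, len 0
        emptySlice := []byte{} // non-nil slice, len 0

        fmt.Println(nilSlice == nil, emptySlice == nil)      // true false
        fmt.Println(bytes.Equal(nilSlice, emptySlice))       // true: same contents
        fmt.Println(reflect.DeepEqual(nilSlice, emptySlice)) // false: nil-ness differs
    }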
It makes very little sense to have pointers here: these structures MUST have
some kind of key, and this key is not going to wander off somewhere on its
own. Fixes a part of #519.
It reduces heap pressure a little for these elements, as we don't have to
allocate/free them individually. And they're directly tied to a transaction or
block, not being shared or anything like that, so it makes little sense for
them to be pointer-based. It only makes building transactions a little easier,
but that's obviously a minor use case.
reflect.MethodByName is a rather expensive function, especially when
called on a hot path. This became obvious during profiling of DB restore.
This commit replaces reflection with a cast to an interface.
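The shape of the replacement, as a hedged sketch (interface and method names
hypothetical):

    package core

    import "reflect"

    // hashable is the small interface the cast targets.
    type hashable interface {
        Hash() string
    }

    // Before: method lookup through reflection on every call.
    func hashViaReflect(v interface{}) string {
        return reflect.ValueOf(v).MethodByName("Hash").Call(nil)[0].String()
    }

    // After: a plain interface type assertion, dramatically cheaper.
    func hashViaCast(v interface{}) string {
        if h, ok := v.(hashable); ok {
            return h.Hash()
        }
        return ""
    }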
Before this patch, on block import we could easily be spending more than 6
seconds out of 30 in Uint256 encoding for UnspentBalance; now it's completely
off the radar.
This speeds it up at least twofold for a typical 32-byte write (and that's
with a very naïve test that allocates a new BufBinWriter on every iteration):
pkg: github.com/CityOfZion/neo-go/pkg/io
BenchmarkWriteBytes-8 10000000 124 ns/op
BenchmarkWriteBytesOld-8 5000000 251 ns/op
When 74590551 introduced this code we had no proper caching layer, so there
were these strange fallbacks in the code. fc0031e5 should've removed them, but
failed to do so; do it now and fix processing of transactions that touch
storage for the same key (address) in the same block.
To use opcode definitions you have to import the whole vm package, which you
might not care about at all. So this moves opcodes into their own package
under vm, fixes and deduplicates related code, and moves the compiler package
up one level.
Drop wif.GetVerificationScript(), drop
smartcontract.CreateSignatureRedeemScript(), add GetVerificationScript()
directly to the PublicKey and use it everywhere.
This allows easier reuse of opcodes and in some cases makes it possible to
eliminate dependencies on the whole vm package, like in the compiler, which
only needs opcodes and doesn't care about the VM for any other purpose.
And yes, they're opcodes, because an instruction is a whole thing with
operands; that's what context.Next() returns.
Only request headers from the other peer if its height is bigger than
ours. Otherwise we routinely ask 0-height newcomers for some random headers
that they know nothing about.
This one is essential for consensus nodes, as otherwise they won't give out
the blocks they generate, making their generation almost useless. It also
makes our networking part more complete.
We have a race between the reader and writer goroutines for the same
connection that leads to handshake failures when the reader is faster to read
the incoming Version (and tries to reply to it) than the writer is to write
our own Version:
WARN[0000] peer disconnected addr="172.200.0.4:20334" peerCount=5 reason="invalid handshake: tried to send VersionAck, but didn't send Version yet
Fix it by moving Version sending before the reader loop starts.
Commit c80ee952a1 removed the temporary store used
to contain changes of the block being processed. That was wrong in that the
block changes should be applied to the database in a single transaction, so
that no intermediate state can be observed from the outside (which is
possible now). Also, this made committing changes persist them to the
underlying store, effectively making our persist loop a no-op (and not
producing the `persist completed` log lines that we love so much).
Param getters were redone to return errors, because otherwise bad FuncParam
values could lead to a panic. FuncParam itself might not be the most elegant
solution, but it works well enough for now.
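A hypothetical shape of such a getter: the failed type assertion becomes an
error for the caller instead of a panic:

    package rpc

    import "errors"

    // Param is a simplified stand-in for the real parameter type.
    type Param struct {
        Value interface{}
    }

    // GetString returns an error instead of panicking when the underlying
    // value has the wrong type.
    func (p *Param) GetString() (string, error) {
        s, ok := p.Value.(string)
        if !ok {
            return "", errors.New("not a string parameter")
        }
        return s, nil
    }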
This PR does 3 things:
* adds array parameter unmarshalling
* extends Param with convenience methods
* refactors tests to use tables, making it easier to add new tests (a sketch
  of the layout follows below)
(part of the #347 solution)
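The table-driven shape the tests move to, sketched with hypothetical cases
and reusing the Param getter sketched earlier; adding a test is adding a row:

    package rpc

    import "testing"

    func TestParamGetString(t *testing.T) {
        cases := []struct {
            name    string
            in      interface{}
            want    string
            wantErr bool
        }{
            {"string value", "neo", "neo", false},
            {"wrong type", 42, "", true},
        }
        for _, tc := range cases {
            t.Run(tc.name, func(t *testing.T) {
                got, err := (&Param{Value: tc.in}).GetString()
                if (err != nil) != tc.wantErr || got != tc.want {
                    t.Fatalf("got %q, err %v", got, err)
                }
            })
        }
    }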
add processing of validators during block persist;
add validator structure with decoding/encoding;
add validator retrieval from the store;
add EnrollmentTX and StateTX processing;
add pubkey decoding from bytes, plus unique and contains functions;
Fixes a failure to process a transaction from the block when it was relayed
initially:
WARN[0788] blockQueue: failed adding block into the blockchain blockHeight=7270 error="transaction 35088916403e5cf2152e16c3bc6e0fba20c955fba38543b9fa5c50a3d3a4ace5 failed to verify: invalid transaction due to conflicts with the memory pool" nextIndex=7271
WARN[0790] blockQueue: failed adding block into the blockchain blockHeight=7270 error="transaction 35088916403e5cf2152e16c3bc6e0fba20c955fba38543b9fa5c50a3d3a4ace5 failed to verify: invalid transaction due to conflicts with the memory pool" nextIndex=7271
WARN[0790] blockQueue: failed adding block into the blockchain blockHeight=7270 error="transaction 35088916403e5cf2152e16c3bc6e0fba20c955fba38543b9fa5c50a3d3a4ace5 failed to verify: invalid transaction due to conflicts with the memory pool" nextIndex=7271
Right now a message can be written in several Write calls, so
concurrent calls of writeMsg() can in theory interleave.
This commit fixes that, as sketched below.
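A sketch of the fix under assumed names: serialize the whole message into a
buffer first, then emit it with a single Write under a lock, so concurrent
callers can't interleave partial messages on the wire:

    package network

    import (
        "net"
        "sync"
    )

    // encoder is a hypothetical stand-in for the message type.
    type encoder interface {
        Bytes() ([]byte, error)
    }

    type peer struct {
        conn net.Conn
        mu   sync.Mutex
    }

    func (p *peer) writeMsg(msg encoder) error {
        buf, err := msg.Bytes() // full serialization, no I/O yet
        if err != nil {
            return err
        }
        p.mu.Lock()
        defer p.mu.Unlock()
        _, err = p.conn.Write(buf) // one locked write per message
        return err
    }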
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
When encountering an already seen stack item, we should fail
only if it is a collection. Duplicate Integers or ByteArrays are OK,
because they can't lead to recursion.
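A sketch of the rule with assumed item types: only a repeated collection
signals recursion, since scalars can't contain references:

    package vm

    import "errors"

    // StackItem and the collection types below are assumed names.
    type StackItem interface{}

    type (
        ArrayItem  struct{ value []StackItem }
        StructItem struct{ value []StackItem }
        MapItem    struct{ value map[StackItem]StackItem }
    )

    // checkSeen fails only for repeated collections, the only items that
    // can form reference cycles.
    func checkSeen(item StackItem, seen map[StackItem]bool) error {
        switch item.(type) {
        case *ArrayItem, *StructItem, *MapItem:
            if seen[item] {
                return errors.New("recursive item found")
            }
            seen[item] = true
        default:
            // Integers, ByteArrays etc. can't lead to recursion.
        }
        return nil
    }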