It's bogus and no other node implementation has anything like that. It fires
for no good reason when some other node connects to us, since that node
obviously doesn't use its listening port for the connection.
Commit methods duplicated the putSmthIntoStore functions, but we now have
MemCachedStore that can easily substitute for a Batch, especially given that
interop needs something like that for its storage purposes anyway.
This adds the following verifications:
* merkleroot check
* index check
* timestamp check
* witnesses verification
VerifyWitnesses is also renamed to verifyTxWitnesses here so as not to confuse
it with verifyBlockWitnesses and to hide it from external access (it has no
users at the moment).
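For reference, a minimal self-contained sketch of the checks listed above (the types are simplified stand-ins, the merkle tree uses single SHA256 instead of NEO's double SHA256, and witness verification is stubbed out):

```go
package core

import (
	"bytes"
	"crypto/sha256"
	"errors"
)

// Simplified stand-ins for the real Block/Header types.
type Header struct {
	Index      uint32
	Timestamp  uint32
	MerkleRoot [32]byte
}

type Block struct {
	Header
	Transactions [][]byte // serialized transactions, enough for hashing here
}

// computeMerkleRoot builds a pairwise-SHA256 tree over transaction hashes
// (the real thing uses double SHA256; this is just the shape of it).
func (b *Block) computeMerkleRoot() [32]byte {
	hashes := make([][32]byte, len(b.Transactions))
	for i, tx := range b.Transactions {
		hashes[i] = sha256.Sum256(tx)
	}
	for len(hashes) > 1 {
		if len(hashes)%2 == 1 { // odd count: duplicate the last hash
			hashes = append(hashes, hashes[len(hashes)-1])
		}
		next := make([][32]byte, 0, len(hashes)/2)
		for i := 0; i < len(hashes); i += 2 {
			pair := append(hashes[i][:], hashes[i+1][:]...)
			next = append(next, sha256.Sum256(pair))
		}
		hashes = next
	}
	if len(hashes) == 0 {
		return [32]byte{}
	}
	return hashes[0]
}

// verifyBlock runs the listed checks against the previous header.
func verifyBlock(prev *Header, b *Block) error {
	if prev.Index+1 != b.Index {
		return errors.New("wrong block index")
	}
	if b.Timestamp <= prev.Timestamp {
		return errors.New("block is not newer than its predecessor")
	}
	if root := b.computeMerkleRoot(); !bytes.Equal(root[:], b.MerkleRoot[:]) {
		return errors.New("merkleroot mismatch")
	}
	// witnesses verification would execute the witness scripts here
	return nil
}
```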
Linter isn't happy with our recent changes:
pkg/core/contract_state.go:109:1: receiver name cs should be consistent with previous receiver name a for ContractState
pkg/core/contract_state.go:114:1: receiver name cs should be consistent with previous receiver name a for ContractState
pkg/core/contract_state.go:119:1: receiver name cs should be consistent with previous receiver name a for ContractState
But `a` here is most probably a copy-paste from the AssetState methods, so
adjust the old code to match the new one.
Enable transaction verification for privnets and tests; testnet can't
successfully verify block number 316711 with it enabled and mainnet stops at
105829.
We want to get a full block, so it has to have transactions
inside. Unfortunately, our tests relied on this wrong behavior and used
completely bogus transactions without data, which couldn't be persisted, so
fix that too.
PublishTX only had one of these flags, but newer contracts (created via the
interop function) can have more, so these flags are aggregated into one field
that uses the PropertyState enumeration (it's used to publish contracts, so
supposedly it's also a nice choice for contract state storage).
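A sketch of what such an aggregated field looks like; the flag names and values follow NEO's contract properties as I understand them and are not copied from this patch:

```go
package smartcontract

// PropertyState aggregates contract property flags into a single byte.
type PropertyState byte

const (
	NoProperties     PropertyState = 0
	HasStorage       PropertyState = 1 << 0
	HasDynamicInvoke PropertyState = 1 << 1
	IsPayable        PropertyState = 1 << 2
)

// Has reports whether all flags in f are set in p.
func (p PropertyState) Has(f PropertyState) bool {
	return p&f == f
}
```

A contract state can then check something like props.Has(HasStorage) instead of carrying a lone boolean.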
It's used a lot and looks a lot like MemoryStore; it just must not return
errors from Put and Delete, so make it use MemoryStore internally with an
adjusted interface.
Make it look more like a real transaction: put/delete things while taking the
lock just once. Also make a copy of the value in Put, purely for safety; no
one knows how the value slice may be used after the Put.
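Roughly the following shape (illustrative names, not the exact project code):

```go
package storage

import "sync"

type MemoryStore struct {
	mut sync.RWMutex
	mem map[string][]byte
}

func NewMemoryStore() *MemoryStore {
	return &MemoryStore{mem: make(map[string][]byte)}
}

// MemoryBatch just accumulates pending puts and deletes.
type MemoryBatch struct {
	mem     map[string][]byte
	deleted map[string]bool
}

// Put stores a copy of value: the caller may reuse the slice afterwards.
func (s *MemoryStore) Put(key, value []byte) error {
	vcopy := make([]byte, len(value))
	copy(vcopy, value)
	s.mut.Lock()
	s.mem[string(key)] = vcopy
	s.mut.Unlock()
	return nil
}

// PutBatch applies all puts and deletes while holding the lock once,
// which is what makes the batch behave like one transaction.
func (s *MemoryStore) PutBatch(b *MemoryBatch) error {
	s.mut.Lock()
	defer s.mut.Unlock()
	for k := range b.deleted {
		delete(s.mem, k)
	}
	for k, v := range b.mem {
		s.mem[k] = v
	}
	return nil
}
```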
Using pointers is just plain wrong here, because the batch can be updated with
newer values for the same keys.
Also fixes Seek() to use HasPrefix, because that's the intended behavior.
A script can return non-bool results that can still be converted to bool
according to the usual VM rules. Unfortunately, Bool() panics if this
conversion fails, which is OK for things done in vm.execute(), but certainly
not for VerifyWitnesses(), hence the need for a TryBool() that just returns an
error in this case.
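One way to get TryBool() on top of a panicking Bool() is to recover the panic into an error; a sketch (the real code may well do the conversion directly instead):

```go
package vm

import "fmt"

// StackItem is a minimal stand-in for the real interface.
type StackItem interface {
	Bool() bool // panics when the item can't be converted
}

// TryBool converts an item to bool, turning a conversion panic into an
// error, which is what VerifyWitnesses() needs.
func TryBool(item StackItem) (result bool, err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("can't convert to bool: %v", r)
		}
	}()
	return item.Bool(), nil
}
```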
It gives access to the internal value's Value(), which is essential for
interop functions that need to get something out of InteropItems. It also
simplifies some already existing code along the way.
If the block references two outputs in some other transaction, the code failed
to verify it because of a key collision. The C# code implements it properly by
using the full CoinReference type as a key, so let's do it in a similar fashion.
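The gist of the fix, sketched with simplified stand-in types: keying the map by the full (hash, index) pair means two outputs of the same transaction no longer collide.

```go
package core

type Uint256 [32]byte

// CoinReference identifies one particular output of one transaction.
type CoinReference struct {
	PrevHash  Uint256
	PrevIndex uint16
}

type Output struct {
	AssetID Uint256
	Amount  int64
}

// Referencing two outputs of the same transaction now yields two distinct
// keys instead of one colliding hash-only key.
func collectReferences(h Uint256, outs []Output) map[CoinReference]Output {
	refs := make(map[CoinReference]Output, len(outs))
	for i, out := range outs {
		refs[CoinReference{PrevHash: h, PrevIndex: uint16(i)}] = out
	}
	return refs
}
```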
Claim transactions have different logic in the C# node, so we need to
implement it too. It's not the most elegant way to fix it, but let's make it
work first and then refactor if and where needed. Fixes verification of Claim
transactions.
What started as an attempt to fix #366 ended up being quite a substantial refactoring of the Blockchain->Store and Server->Blockchain interactions. As usual, some additional problems were noted and fixed along the way. It also accidentally fixes #410.
In the very specific case when the list of headers received is exactly one
block ahead of the chain of full blocks, requestBlocks() failed to generate a
request for the next full block.
BoltDB doesn't have an internal batching mechanism, so we have a substitute for
it, but this substitute is absolutely identical to MemoryBatch; it's better
to unify them and import the ac5d2f94d3 fix into
MemoryBatch.
Commit 578ac414d4 was wrong in that it saved
only a part of the block, so depending on how you use the blockchain, you might
still see that the block was not really processed properly. To really fix it,
this commit introduces an intermediate storage layer in the form of memStore,
which is actually a MemoryStore that supports the full Store API (thus easily
fitting into the existing code) plus one extension that allows it to flush its
data to some other Store.
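A sketch of that flush extension (heavily simplified; the real memStore supports the whole Store API):

```go
package storage

import "sync"

// Store is trimmed down to what the sketch needs.
type Store interface {
	Put(key, value []byte) error
}

// memStore accumulates writes in memory and can flush them elsewhere.
type memStore struct {
	mut sync.Mutex
	mem map[string][]byte
}

func newMemStore() *memStore {
	return &memStore{mem: make(map[string][]byte)}
}

func (s *memStore) Put(key, value []byte) error {
	s.mut.Lock()
	s.mem[string(key)] = value // defensive copying elided here for brevity
	s.mut.Unlock()
	return nil
}

// Persist moves everything accumulated so far into ps and clears the cache;
// this is what the blockchain's persist() now does with the real Store.
func (s *memStore) Persist(ps Store) (int, error) {
	s.mut.Lock()
	defer s.mut.Unlock()
	for k, v := range s.mem {
		if err := ps.Put([]byte(k), v); err != nil {
			return 0, err
		}
	}
	n := len(s.mem)
	s.mem = make(map[string][]byte)
	return n, nil
}
```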
It also changes AddBlock() semantics: it now only accepts successive blocks,
but when it does, it guarantees that they're properly added into the
Blockchain and can be referred to in any way. Pending block queueing is now
moved into the server (see 8c0c055ac657813fe3ed10257bce199e9527d5ed).
So the only thing persist() does now is move data from memStore to Store,
which probably should've always been the case (note also that previously
headers and some other metadata were written into the Store bypassing the
caching/batching mechanism, leading to some inefficiency).
This one will replace blockCache in Blockchain itself, as it can and should be
external to it. The idea is that we only feed successive blocks into the
Blockchain, and it stores only a valid, proper chain and nothing else.
This changes the Blockchain to also return unpersisted (theoretically, verified
in AddBlock!) blocks and transactions, making the Add/Get interfaces
symmetrical. It allows turning Persist into an internal method again and makes
it possible to enable the transaction check in GetBlock(), thus fixing #366.
It must copy both the value and the key because they can be reused for other
purposes between Put() and PutBatch(). This actually happens with values in
headers processing, leading to wrong data being written into the DB.
Extend the batch test to check for that.
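The extended test is roughly of this shape (illustrative types, not the project's exact ones):

```go
package storage

import "testing"

type keyValue struct{ key, value []byte }

type batch struct{ put []keyValue }

// Put copies both key and value: callers reuse both slices between
// Put() and PutBatch(), as headers processing actually does.
func (b *batch) Put(k, v []byte) {
	kc := make([]byte, len(k))
	copy(kc, k)
	vc := make([]byte, len(v))
	copy(vc, v)
	b.put = append(b.put, keyValue{kc, vc})
}

func TestBatchCopiesKeyAndValue(t *testing.T) {
	b := new(batch)
	k, v := []byte("key"), []byte("value")
	b.Put(k, v)
	k[0], v[0] = 'X', 'X' // mutate after Put, as a reusing caller would
	if string(b.put[0].key) != "key" || string(b.put[0].value) != "value" {
		t.Error("batch stored references instead of copies")
	}
}
```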
For example, at the moment our node can't handle the `consensus` message, so
before this patch it just crashed upon receiving one because of an
uninitialized `p`.
Earlier we had an issue with a failing test in #353 and another one in #305.
These tests were reworked to use an in-memory database. This led to multiple
changes: some functions like Hash and Persist were made public (otherwise
it's not possible to control the state of the blockchain), and the
unit_tests storage package, which was used mainly for leveldb in unit tests,
was removed.
These tests don't look particularly good to me since they resemble e2e tests;
in my opinion they should be run in a separate step against a dockerized
environment, or, if we want to check the RPC handler, we might want to rework
it to have an interface suitable for proper unit tests.
At least this patchset keeps us safe by not removing the previous tests
entirely, and at the same time CircleCI will be happy now.
It's mostly used for Serializable and in other cases where one needs to
estimate the binary-encoded size of a structure. This also simplifies the
future removal of Size() from Serializable.
The logic here is that we'll have all binary encoding/decoding done via our io
package, which simplifies error handling. This functionality doesn't belong in
util, so it's moved.
This also extends BufBinWriter with a Reset() method to fit the needs of the
core package.
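The rough shape of the idea: a buffer-backed writer with a sticky error (so callers check it once at the end) and Reset() for reuse. Names mirror the ones mentioned above; the details are illustrative:

```go
package io

import (
	"bytes"
	"encoding/binary"
)

// BufBinWriter writes binary little-endian data into an in-memory buffer.
type BufBinWriter struct {
	buf bytes.Buffer
	Err error
}

// WriteLE writes v unless a previous write has already failed.
func (w *BufBinWriter) WriteLE(v interface{}) {
	if w.Err != nil {
		return // sticky error: the first failure short-circuits the rest
	}
	w.Err = binary.Write(&w.buf, binary.LittleEndian, v)
}

// Bytes returns everything written so far.
func (w *BufBinWriter) Bytes() []byte {
	return w.buf.Bytes()
}

// Reset clears both the buffer and the error, allowing the core package
// to reuse one writer across many serializations.
func (w *BufBinWriter) Reset() {
	w.buf.Reset()
	w.Err = nil
}
```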
add a Close function to the storage interface
add a common deferred call that closes the DB connection
remove context as soon as it's not needed anymore
update unit tests
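Roughly (the interface is trimmed down and the names are illustrative):

```go
package storage

// Store now includes Close, so every backend can release its connection.
type Store interface {
	Get(key []byte) ([]byte, error)
	Put(key, value []byte) error
	Close() error
}

// useStore shows the common pattern: close the DB when we're done with it.
func useStore(s Store) error {
	defer s.Close() // the common deferred call; error ignored in this sketch
	return s.Put([]byte("key"), []byte("value"))
}
```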
This one fixes #390 and some connected problems. After this patchset the node reconnects to some other nodes if anything goes wrong, and it's better at sensing when something goes wrong. It also fixes some block handling problems based on the testnet connection experience.
...and don't try to connect to the nodes we're already connected to.
Before this change we had a problem of the discoverer throwing away good valid
addresses just because they were already known, which led to pool draining over
time (as address reuse was basically forbidden and getaddr may not return
enough new nodes).
Queuing one message is not reliable enough: the peer that gets it can fail to
actually make the request, so make this queue a bit deeper to have a higher
chance of success.
This makes the writer side handle errors properly and fixes the communication
between the reader and writer goroutines so that the peer is always correctly
unregistered. This is especially important when an error occurs before the
handshake completes, because in that case we don't even have the
startProtocol() goroutine running.
In the unlikely event of an overlapping hash block being written to the DB we
might end up with a wrong hash list. That happened to me for some reason when
synching with the testnet, leading to the following keys with their respective
values:
150000 -> 2000 hashes
152000 -> 2000 hashes
153999 -> 2000 hashes
Reading these back, hashes number 153999 and 154000 got the same values and the
chain couldn't sync correctly.
Same thing as done in a2a8981979 for PUSHBYTES:
failing to read the specified number of bytes should lead to FAULT. This also
makes readUint16() and readUint32() panic, as that's the behavior we want in
these cases. Add some tests along the way.
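The panicking readers can look like this (a sketch; the panic is recovered higher up and turns into FAULT):

```go
package vm

import (
	"bytes"
	"encoding/binary"
	"io"
)

// readUint16 panics on a short read; vm.execute() recovers the panic
// and puts the VM into the FAULT state.
func readUint16(r *bytes.Reader) uint16 {
	var b [2]byte
	if _, err := io.ReadFull(r, b[:]); err != nil {
		panic(err)
	}
	return binary.LittleEndian.Uint16(b[:])
}

// readUint32 behaves the same way for 4-byte operands.
func readUint32(r *bytes.Reader) uint32 {
	var b [4]byte
	if _, err := io.ReadFull(r, b[:]); err != nil {
		panic(err)
	}
	return binary.LittleEndian.Uint32(b[:])
}
```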
Before:
NEO-GO-VM > loadgo h.go
READY: loaded 16 instructions
NEO-GO-VM > ip
instruction pointer at -1 (PUSH0)
After:
NEO-GO-VM > loadgo h.go
READY: loaded 16 instructions
NEO-GO-VM > ip
instruction pointer at -1 (NOP)
I think NOP is a little less scary.
Current NEO documentation lists them:
https://docs.neo.org/docs/en-us/tooldev/advanced/neo_vm.html
CALL_* instructions were left out because of a conflict with golint (but
they're removed in NEO 3.0 anyway, so wasting time on them makes no sense).
Update autogenerated instruction_string.go accordingly.
The code that we have actually implements XTUCK, not TUCK. And it's a bit
broken, so fix it and add some tests. The most interesting one (which required
touching the stack code) is when we have 1 element on the stack and try to
tell XTUCK to push 2 elements deep.
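For reference, XTUCK semantics sketched over a plain slice (top of the stack is the last element; the real stack is a different structure):

```go
package vm

import "errors"

// xtuck inserts a copy of the top item n elements deep. With a 1-element
// stack and n == 2 the insertion point is beyond the stack, which is the
// edge case mentioned above.
func xtuck(stack []int, n int) ([]int, error) {
	if n <= 0 {
		return nil, errors.New("XTUCK: invalid depth")
	}
	if len(stack) == 0 {
		return nil, errors.New("XTUCK: empty stack")
	}
	if n > len(stack) {
		return nil, errors.New("XTUCK: insertion point beyond stack")
	}
	top := stack[len(stack)-1]
	pos := len(stack) - n // where the copy lands; n == 2 matches TUCK
	out := make([]int, 0, len(stack)+1)
	out = append(out, stack[:pos]...)
	out = append(out, top)
	out = append(out, stack[pos:]...)
	return out, nil
}
```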
ANSI X9.62 says that if the x or y coordinate are greater than or equal to
curve.Params().P, the conversion should return an error (see ANSI X9.62:2005
Section A.5.8 Step b, which invokes Section A.5.5, which does the check and
rejects x or y that are too big).
See https://github.com/golang/go/issues/20482 for more details.
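The check itself is small; a sketch of it over crypto/elliptic:

```go
package keys

import (
	"crypto/elliptic"
	"errors"
	"math/big"
)

// checkCoordinates rejects x or y >= P as required by ANSI X9.62:2005
// Section A.5.5.
func checkCoordinates(curve elliptic.Curve, x, y *big.Int) error {
	p := curve.Params().P
	if x.Cmp(p) >= 0 || y.Cmp(p) >= 0 {
		return errors.New("point coordinate exceeds the field prime")
	}
	return nil
}
```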
PublicKey() for PrivateKey now simply can't fail, so it makes no sense to
return an error from it. The same is true for a lot of associated
functionality, so adjust it accordingly and simplify a lot of code.
Public key is just a point, so use the coordinates obtained previously to
initialize the PublicKey structure without jumping through the hoops of
encoding/decoding.
As NEO uses P256, we can use the standard crypto/elliptic library for almost
everything, the only exception being decompression of the Y coordinate. For
some reason the standard library only supports the uncompressed format in its
Marshal()/Unmarshal() functions. elliptic.P256() is known to have a
constant-time implementation, so this fixes #245 (and the decompression using
big.Int operates on the public key, so nobody really cares about that part
being constant-time).
The new decompress function is inspired by
https://stackoverflow.com/questions/46283760, even though the previous one
really did the same thing, just in a slightly less obvious way.
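The idea of that decompression, sketched (yBit is the parity taken from the 0x02/0x03 prefix; big.Int math, hence the non-constant-time caveat above):

```go
package keys

import (
	"crypto/elliptic"
	"math/big"
)

// decompressY recovers Y from X for a curve with a = -3 (like P-256):
// y² = x³ - 3x + b (mod p), then pick the root with the right parity.
func decompressY(curve *elliptic.CurveParams, x *big.Int, yBit uint) *big.Int {
	p := curve.P
	y2 := new(big.Int).Mul(x, x)
	y2.Mul(y2, x) // x³
	threeX := new(big.Int).Lsh(x, 1)
	threeX.Add(threeX, x) // 3x
	y2.Sub(y2, threeX)
	y2.Add(y2, curve.B)
	y2.Mod(y2, p)
	y := new(big.Int).ModSqrt(y2, p)
	if y == nil {
		return nil // x is not on the curve
	}
	if y.Bit(0) != yBit&1 {
		y.Sub(p, y)
	}
	return y
}
```

elliptic.P256().Params() supplies P and B for the curve NEO uses.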
It makes no sense to provide an API for throw-away public keys, so obtain them
via real new keypair generation where appropriate (and that's only needed for
testing).
Golint:
pkg/rpc/rpc.go:15:67: exported method GetBlock returns unexported type *rpc.response, which can be annoying to use
pkg/rpc/rpc.go:82:64: exported method GetRawTransaction returns unexported type *rpc.response, which can be annoying to use
pkg/rpc/rpc.go:97:52: exported method SendRawTransaction returns unexported type *rpc.response, which can be annoying to use
Refs. #213.
pkg/rpc/neoScanBalanceGetter.go:54:56: method parameter assetIdUint should be assetIDUint
pkg/rpc/neoScanBalanceGetter.go:62:3: var assetId should be assetID
pkg/rpc/server_test.go:27:5: var testRpcCases should be testRPCCases
pkg/rpc/txTypes.go:19:3: struct field assetId should be assetID
pkg/rpc/txTypes.go:39:35: interface method parameter assetId should be assetID
pkg/rpc/types.go:115:2: struct field TxId should be TxID
Refs. #213.
pkg/core/transaction/attribute.go:67:14: should omit type uint8 from declaration of var urllen; it will be inferred from the right-hand side
pkg/crypto/keys/publickey.go:184:8: should omit type []byte from declaration of var b; it will be inferred from the right-hand side
pkg/network/payload/version_test.go:15:12: should omit type bool from declaration of var relay; it will be inferred from the right-hand side
Refs. #213.
Golint:
pkg/core/blockchain.go:796:9: if block ends with a return statement, so drop
this else and outdent its block (move short variable declaration to its own
line if necessary)
Refs. #213.
Fixes things like:
* exported type/method/function X should have comment or be unexported
* comment on exported type/method/function X should be of the form "X ..."
(with optional leading article)
Refs. #213.
Fixes one more instruction being run when the VM FAULTs:
NEO-GO-VM > run
NEO-GO-VM > error encountered at instruction 6 (ROLL)
NEO-GO-VM > runtime error: invalid memory address or nil pointer dereference
FAULT
NEO-GO-VM > error encountered at instruction 7 (SETITEM)
NEO-GO-VM > interface conversion: interface {} is []vm.StackItem, not []uint8
Refs. #96.
And drop associated _pkg.dev remnants (refs. #307).
The original `dev` branch had two separate packages for public and private
keys, but those are so intertwined (the `TestHelper` subpackage is proof of
that) that it's better to unite them and all the associated code (like WIF and
NEP-2) in one package. This patch also:
* creates an internal `keytestcases` package to share things with wallet (maybe
  it'll change in the future)
* ports some tests from `dev`
* ports the Verify() method for public keys from `dev`
* expands TestPrivateKey() with a public key check
Simplifies a lot of code and removes some duplication. Unfortunately I had to
move the test_util random functions in the same commit to avoid dependency
cycles. One of these random functions was also used in core/transaction
testing; to simplify things I've just dropped it there and used a static
string (which is nice to have for a test anyway).
There is still sha256 left in wallet (but it needs to pass a Hash structure
into the signing function).
Go's Hash is explicitly specified to never return an error on Write(), and our
own decoding functions only check the length, which will be right in every
case, so it makes no sense to return errors from these functions.
With an associated test; also drop the duplicate Uint160 implementation from
_pkg.dev. It doesn't seem to be used in pkg code at the moment, but it can
still be useful. Refs. #307.
Unfortunately d58fbe0c88 didn't really fix the
problem, because tinfo.Type (the expression's resulting type) actually is a
bool and we need to check its parameters. Also, the NEQ operation needs
fixing.
neo-storm has developed more wrappers for syscall APIs, so they can and should
be used as a drop-in replacement for pkg/vm/api. Moving it out of vm, as it's
not exactly related to the VM itself.
These were interpreted completely wrong: they actually carry two subsequent
bytes indicating an offset. This patch is a quick fix; more work is actually
needed here to properly display various instructions.
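For the record, the encoding as I understand it: the two bytes following the opcode form a little-endian offset relative to the position of the instruction itself.

```go
package vm

import "encoding/binary"

// jumpTarget decodes the two-byte operand of JMP/JMPIF/JMPIFNOT at ip
// and returns the absolute target position within the program.
func jumpTarget(prog []byte, ip int) int {
	offset := int16(binary.LittleEndian.Uint16(prog[ip+1 : ip+3]))
	return ip + int(offset)
}
```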
This is wrong (see issue #294), but it makes our VM tests work (as the VM is
missing an EQUAL implementation), so until #294 is properly resolved we'd
better have this kind of wrong code generation.