We want to get a full block, so it has to have transactions
inside. Unfortunately our tests relied on this wrong behavior and used
completely bogus, data-less transactions that couldn't be persisted, so fix
that as well.
PublishTX only had one of these flags, but newer contracts (created via the
interop function) can have more, and these flags are aggregated into one field
that uses the PropertyState enumeration (it's used to publish contracts, so
supposedly it's also a nice choice for contract state storage).
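As a rough illustration, such a flag set could be expressed as a bit-flag
enumeration along these lines (the names follow NEO's contract property flags,
but the exact neo-go definitions may differ):

```go
// Sketch only: contract feature flags folded into one PropertyState field.
type PropertyState byte

const (
	NoProperties     PropertyState = 0
	HasStorage       PropertyState = 1 << 0
	HasDynamicInvoke PropertyState = 1 << 1
	IsPayable        PropertyState = 1 << 2
)

// HasFlag reports whether all bits of f are set in p.
func (p PropertyState) HasFlag(f PropertyState) bool {
	return p&f == f
}
```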
It's used a lot and it looks a lot like MemoryStore, it just needs not to
return errors from Put and Delete, so make it use MemoryStore internally with
an adjusted interface.
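A minimal sketch of what such a wrapper might look like (the Cache name and
the exact MemoryStore methods here are assumptions, not the real neo-go code):

```go
// Cache is backed by a MemoryStore but hides the (always nil) errors
// from Put and Delete, so callers don't need to handle them.
type Cache struct {
	mem *MemoryStore
}

func NewCache() *Cache {
	return &Cache{mem: NewMemoryStore()}
}

func (c *Cache) Put(key, value []byte) {
	_ = c.mem.Put(key, value) // MemoryStore.Put can't actually fail
}

func (c *Cache) Delete(key []byte) {
	_ = c.mem.Delete(key) // same for Delete
}

// Get keeps returning an error because a missing key is a real condition.
func (c *Cache) Get(key []byte) ([]byte, error) {
	return c.mem.Get(key)
}
```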
Make it look more like a real transaction: put/delete things under a single
lock. Also make a copy of the value in Put, just to be safe, since no one knows
how this value slice may be used after the Put.
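Roughly, that could look like this (field and method names are assumptions
made for the sketch):

```go
// Put copies the value before storing it, so the caller is free to reuse
// the slice afterwards.
func (s *MemoryStore) Put(key, value []byte) error {
	vcopy := make([]byte, len(value))
	copy(vcopy, value)

	s.mut.Lock()
	s.mem[string(key)] = vcopy
	s.mut.Unlock()
	return nil
}

// PutBatch applies all puts and deletes from the batch while holding the
// lock once, so the whole batch behaves like a single transaction.
func (s *MemoryStore) PutBatch(b *MemoryBatch) error {
	s.mut.Lock()
	defer s.mut.Unlock()
	for k := range b.deleted {
		delete(s.mem, k)
	}
	for k, v := range b.mem {
		s.mem[k] = v
	}
	return nil
}
```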
Using pointers is just plain wrong here, because the batch can be updated with
newer values for the same keys.
Also fixes Seek() to use HasPrefix, because that is the intended behavior.
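For reference, a Seek() along these lines only visits keys that actually start
with the given prefix (again, just a sketch with assumed names and a
sync.RWMutex guarding the map):

```go
import "bytes"

// Seek calls f for every key/value pair whose key starts with prefix.
func (s *MemoryStore) Seek(prefix []byte, f func(k, v []byte)) {
	s.mut.RLock()
	defer s.mut.RUnlock()
	for k, v := range s.mem {
		if bytes.HasPrefix([]byte(k), prefix) {
			f([]byte(k), v)
		}
	}
}
```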
Various verification fixes part 2, which focuses on VM improvements
necessary to make tx verification work. It's mostly related to interop
functionality, but doesn't add interops at the moment. Fixes #295 along the way.
A script can return non-bool results that can still be converted to bool
according to the usual VM rules. Unfortunately, Bool() panics if this
conversion fails, which is OK for things done in vm.execute(), but certainly
not for VerifyWitnesses(), thus there is a need for TryBool() that will just
return an error in this case.
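The idea can be sketched like this (the Element type and the conversion rules
shown are simplified assumptions, not the exact VM code):

```go
// TryBool converts the item to bool by the usual VM rules and returns an
// error instead of panicking when the conversion isn't possible.
func (e *Element) TryBool() (bool, error) {
	switch v := e.value.Value().(type) {
	case bool:
		return v, nil
	case *big.Int:
		return v.Sign() != 0, nil
	case []byte:
		for _, b := range v {
			if b != 0 {
				return true, nil
			}
		}
		return false, nil
	default:
		return false, fmt.Errorf("can't convert %T to bool", v)
	}
}

// Bool keeps the old panicking behavior for use inside vm.execute().
func (e *Element) Bool() bool {
	b, err := e.TryBool()
	if err != nil {
		panic(err)
	}
	return b
}
```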
It gives access to the internal value's Value(), which is essential for interop
functions that need to get something from InteropItems. It also simplifies
some already existing code along the way.
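Something along these lines, with the helper below being purely hypothetical:

```go
// Value exposes whatever the underlying stack item wraps.
func (e *Element) Value() interface{} {
	return e.value.Value()
}

// A hypothetical interop handler can then type-assert what it expects
// to find inside an InteropItem pushed onto the stack.
func blockFromElement(e *Element) (*Block, error) {
	block, ok := e.Value().(*Block)
	if !ok {
		return nil, errors.New("the stack item doesn't wrap a block")
	}
	return block, nil
}
```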
This set of fixes allows Testnet synchronization (with verifyBlocks
set to true) to proceed up to block number 4586; block 4587 fails at
APPCALL invocation at the moment, but that's for the next set of fixes.
If the block references two outputs in some other transaction, the code failed
to verify it because of a key collision. C# code implements it properly by using
the full CoinReference type as a key, so let's do it in a similar fashion.
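In Go terms that boils down to using a comparable struct as the map key; a
rough sketch (names here are illustrative, not the exact neo-go types):

```go
// CoinReference identifies a single output of a previous transaction.
type CoinReference struct {
	PrevHash  util.Uint256
	PrevIndex uint16
}

// groupInputs keys seen inputs by the full CoinReference, so two outputs
// of the same transaction no longer collide on PrevHash alone.
func groupInputs(inputs []CoinReference) (map[CoinReference]bool, error) {
	seen := make(map[CoinReference]bool)
	for _, in := range inputs {
		if seen[in] {
			return nil, errors.New("the same input is referenced twice")
		}
		seen[in] = true
	}
	return seen, nil
}
```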
Claim transactions have different logic in the C# node, so we need to
implement it too. It's not the most elegant way to fix it, but let's make it
work first and then refactor if and where needed. Fixes verification of Claim
transactions.
What started as an attempt to fix #366 ended up being quite substantial refactoring of the Blockchain->Store and Server->Blockchain interactions. As usual, some additional problems were noted and fixed along the way. It also accidentally fixes #410.
In the very specific case when the list of headers received is exactly one
block ahead of the chain of full blocks, requestBlocks() failed to generate a
request to get the next full block.
BoltDB doesn't have an internal batching mechanism, thus we have a substitute
for it, but this substitute is absolutely identical to MemoryBatch, so it's
better to unify them and import the ac5d2f94d3 fix into MemoryBatch.
Commit 578ac414d4 was wrong in that it saved
only a part of the block, so depending on how you use the blockchain, you may
still see that the block was not really processed properly. To really fix it,
this commit introduces an intermediate storage layer in the form of memStore,
which actually is a MemoryStore that supports the full Store API (thus easily
fitting into the existing code) plus one extension that allows it to flush its
data to some other Store.
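That flushing extension could look roughly like this (memStore, Persist and
the Batch usage are assumptions for the sketch):

```go
// Persist dumps everything accumulated in the memStore into another Store
// as a single batch and resets the in-memory state afterwards.
func (m *memStore) Persist(to Store) (int, error) {
	m.mut.Lock()
	defer m.mut.Unlock()

	batch := to.Batch()
	for k, v := range m.mem {
		batch.Put([]byte(k), v)
	}
	if err := to.PutBatch(batch); err != nil {
		return 0, err
	}
	count := len(m.mem)
	// Start accumulating from scratch once the data is safely flushed.
	m.mem = make(map[string][]byte)
	return count, nil
}
```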
It also changes AddBlock() semantics in that it now only accepts successive
blocks, but when it does, it guarantees that they're properly added into the
Blockchain and can be referred to in any way. Pending block queuing is now
moved into the server (see 8c0c055ac657813fe3ed10257bce199e9527d5ed).
So the only thing done with persist() now is just a move from memStore to
Store, which probably should've always been the case (notice also that
previously headers and some other metadata were written into the Store
bypassing the caching/batching mechanism, thus leading to some inefficiency).