It can't ever happen. We're guaranteed to have a consistent chain of headers
(we're verifying them above; if we're not verifying, it's not our fault) that
starts at the HeaderHeight that was current when we asked for it previously.
HeaderHeight can only move forward, so if that happened, it would be filtered
out by the condition below and the first case can't happen. Still, to be
absolutely sure, change the second check to only pass "+1" headers (which is
what we want).
It's used in two places now:
* Blockchain.AddBlock()
This one does a transaction duplication check of its own, so doing it in
Verify() is just a waste of time. The Merkle tree root hash check is still
relevant, though.
* Block.DecodeBinary()
We're decoding blocks for the following purposes:
- on restore from dump
The block will be added to the chain via AddBlock() and that will do a
full check of it (if configured to do so)
- on retrieving the block from the DB (DAO)
We trust the DB, if it's gone wild, this check won't really help
- on receiving the block via P2P
It's gonna be put into the block queue and then end up in AddBlock() which
will check it
- on receiving the block via RPC (submitblock)
It is to be passed into AddBlock()
- on receiving the block via RPC in a client
That's probably the only problematic case, but the RPC client has to trust
the server and it can check the signature if it really
cares. Or a separate in-client check might be added.
As we can see, nothing really requires this verification to be done the way
it is now; AddBlock can just have a Merkle check and DecodeBinary can do fine
without it at all.
It's a no-op and there is nothing we can do about it; header contents could
only be checked against chain state, there is nothing to check for internal
consistency.
Now we have VerifyTx() and PoolTx() APIs that either verify a transaction in
isolation or verify it against the mempool (either the primary one or the one
given) and then add it there. There is no way to check against the mempool
without adding the transaction to it, but I doubt we really need that.
It allows removing some duplication between the old PoolTx and verifyTx where
they both tried to check the transaction against the mempool (verifying first
and then adding it). It also saves us the utility token balance check because
it's done by the mempool anyway and we no longer need to do that explicitly
in verifyTx.
It makes AddBlock() and verifyBlock() transaction checks more correct,
because previously they could miss that even though sender S has enough
balance to pay for A, B or C, he can't pay for all of them.
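A minimal sketch of how this split could look (simplified types; the real
neo-go signatures and structures differ):

    package core

    import "errors"

    // Simplified stand-ins for the real structures.
    type Transaction struct{ Hash string }

    type Pool struct{ txes map[string]*Transaction }

    type Blockchain struct{ memPool *Pool }

    var ErrAlreadyInPool = errors.New("transaction already in the mempool")

    // verifyTx stands for the isolated checks (witnesses, fees, size etc.).
    func (bc *Blockchain) verifyTx(t *Transaction, pool *Pool) error { return nil }

    // VerifyTx checks a transaction in isolation, never touching the mempool.
    func (bc *Blockchain) VerifyTx(t *Transaction) error {
        return bc.verifyTx(t, nil)
    }

    // PoolTx verifies t against the given pool (the primary one when pool is
    // nil) and then adds it there, letting the pool account for the sender's
    // utility token balance.
    func (bc *Blockchain) PoolTx(t *Transaction, pool *Pool) error {
        if pool == nil {
            pool = bc.memPool
        }
        if _, ok := pool.txes[t.Hash]; ok {
            return ErrAlreadyInPool
        }
        if err := bc.verifyTx(t, pool); err != nil {
            return err
        }
        pool.txes[t.Hash] = t
        return nil
    }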
Caveats:
* consensus is running concurrently to other processes, so things could
change while verifyBlock() is iterating over transactions, this will be
mitigated in subsequent commits
Improves TPS value for single node by at least 11%.
Fixes #667, fixes #668.
New transactions are added to the chain with blocks. If there is no
transaction X at height N in DAO, it could only be added with block N+1, so
it has to be present there. Therefore we can replace `dao.HasTransaction()`
check with a search through in-block transactions. HasTransaction() is nasty
in that it may add useless load to the DB, and this code is run with the big
Blockchain lock held, so we don't want to be delayed here at all.
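Roughly, the check becomes a scan over the block being stored (a sketch with
simplified types and illustrative names):

    package core

    type Transaction struct{ Hash string }

    type Block struct {
        Index        uint32
        Transactions []*Transaction
    }

    // hasDuplicates replaces the per-transaction dao.HasTransaction() call:
    // a transaction missing from the DAO at height N can only appear with
    // block N+1, so it's enough to look for duplicates within the block.
    func hasDuplicates(b *Block) bool {
        seen := make(map[string]bool, len(b.Transactions))
        for _, tx := range b.Transactions {
            if seen[tx.Hash] {
                return true
            }
            seen[tx.Hash] = true
        }
        return false
    }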
Improves single-node TPS by ~2%.
The end effect is almost as if `VerifyTransactions: false` was set in the
config, but without actually compromising the guarantees provided by it.
It almost doubles performance for single-mode benchmarks and makes block
processing smoother (more, smaller blocks are being produced).
The C# node is quite picky as it expects there to be exactly one value
returned, but our testchain actually adds 4 signatures for multisig cases
instead of 3, which makes it technically incompatible with the C# node.
We were checking blocked accounts twice, which is obviously excessive. We
also have our accounts sorted, so we can rely on that in CheckPolicy(). It
also doesn't make much sense to check MaxBlockSystemFee in Blockchain code,
the policy contract can handle that.
It no longer depends on blockchain state and there can't ever be an error;
in fact, we can always iterate over signers, so copying these hashes doesn't
make much sense, nor does sorting arrays in verifyTxWitnesses (witnesses
order must match signers order).
It's not needed any more with Go 1.13 as we have wrapping/unwrapping in base
packages. All errors.Wrap calls are replaced with fmt.Errorf, some strings are
improved along the way.
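For example (the %w verb keeps the wrapped error reachable via errors.Is and
errors.As):

    package example

    import (
        "errors"
        "fmt"
    )

    var errBadMagic = errors.New("bad magic")

    func parseHeader(data []byte) error {
        if len(data) < 4 {
            return errBadMagic
        }
        return nil
    }

    // Before (github.com/pkg/errors):
    //     return errors.Wrap(err, "failed to decode header")
    // After (standard library only, Go 1.13+):
    func decodeHeader(data []byte) error {
        if err := parseHeader(data); err != nil {
            return fmt.Errorf("failed to decode header: %w", err)
        }
        return nil
    }

    // errors.Is(decodeHeader(nil), errBadMagic) == true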
We need to compact our in-memory MPT from time to time, otherwise it quickly
fills up all available memory. This raises two obvious questions: when to do
that and to what level to do it.
As for 'when', I think it's quite easy to use our regular persistence interval
as an anchor (and it also frees up some memory), but we can't do that in the
persistence routine itself because of synchronization issues (adding some
synchronization primitives would add some cost that I'd also like to avoid),
so do it indirectly by comparing persisted and current height in `storeBlock`.
Choosing the proper level is another problem, but if we roughly estimate one
full branch node to use 1K of memory (usually it's way less than that), then
we can easily store 1K of these nodes and that gives us a depth of 10 for our
trie.
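A sketch of that trigger (the field names and Collapse method below are
illustrative, not the actual neo-go API):

    package core

    // Minimal stand-ins for the real types.
    type Trie struct{}

    // Collapse is assumed to drop nodes below the given depth from memory,
    // keeping only their hashes so they can be reloaded from storage later.
    func (t *Trie) Collapse(depth int) {}

    type Block struct{ Index uint32 }

    type Blockchain struct {
        mpt             *Trie
        persistedHeight uint32 // updated by the persistence routine
        persistInterval uint32
    }

    const mptCompactDepth = 10 // ~1K branch nodes at ~1K each, see above

    // storeBlock triggers compaction itself instead of the persistence
    // routine, using the persistence interval as an anchor and avoiding
    // extra synchronization primitives (persistedHeight would be read
    // atomically in real code).
    func (bc *Blockchain) storeBlock(block *Block) {
        // ... write block data and update the MPT ...
        if block.Index-bc.persistedHeight >= bc.persistInterval {
            bc.mpt.Collapse(mptCompactDepth)
        }
    }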
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
This was different from the C# notion of PrevHash. It's not a previous root, but
rather a hash of the previous serialized MPTRoot structure (that is to be
signed by CNs).
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
Disallow costly verification methods. We put this limit in the policy
contract as it may be subject to change in the future.
In fact this value also overrides the gas limit for header verification.
Close #1202.
We were accepting transactions with zero system fee, but we shouldn't do
that. Also, the transaction's verification execution has to be limited by
the network fee.
GetValidators without a parameter is called upon DBFT initialization and it
should return validators for the next block (the ones that will create it);
parameterized GetValidators is used for NextConsensus calculation where we
need a list for the current state of the chain.
NextBlockValidators are updated before the new block is persisted, so we need
to use GetValidators to get the list corresponding to the current state of
the chain.
part of #904
1. We now have MaxTransactionsPerBlock set in the native Policy contract,
so this value should be used in the (dbft).GetVerified method instead
of passing it as an argument.
2. Removed (dbft).WithTxPerBlock.
3. DBFT API has changed, so update its version.
4. Removed MaxTransactionsPerBlock from node configuration, as we
have it set in the native Policy contract.
There is no such thing as high/low priority transactions, as there are
no free transactions anymore and they are ordered by the fees contained
in the transaction itself.
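An illustrative comparison function for that ordering (the exact tie breaking
used by neo-go may differ):

    package mempool

    type Transaction struct {
        NetworkFee int64
        Size       int
    }

    // lessPriority orders pool items by data taken from the transaction
    // itself: fee per byte first, absolute network fee as a tie breaker.
    func lessPriority(a, b *Transaction) bool {
        aPerByte := a.NetworkFee / int64(a.Size)
        bPerByte := b.NetworkFee / int64(b.Size)
        if aPerByte != bPerByte {
            return aPerByte < bPerByte
        }
        return a.NetworkFee < b.NetworkFee
    }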
Closes #1063.
We make it explicit in the appropriate Block/Transaction structures, not via
a singleton as the C# node does. I think this approach has a bit more
potential and allows better package reuse for different purposes.
Two changes are being done here together, because they require a lot of
updates to tests. Now we're back to version 0 and we only have one type of
transaction.
It also removes GetType and GetScript interops, both are obsolete in Neo 3.
Getting batch, updating Prometheus metrics and pushing events doesn't require
any locking: batch is a local cache batch that no one outside cares about,
Prometheus metrics are not critical to be in perfect sync and events are
asynchronous anyway.
Native contracts also don't require any locks and they should be processed
before dumping storage changes.
Native contracts deployment creates `Transfer` notifications and adds
them into interop context. However, these notifications were not stored
for two reasons:
1. a typo in `Transfer` (so these notifications were not recognised during
processing of the invocation tx in the (*Blockchain).storeBlock(...) method)
2. these notifications have the `from` address set to null, so conversion
to []byte fails. The same thing could happen with `to`.
Related C# issue: https://github.com/neo-project/neo/issues/1646
For now, made both `transfer` and `Transfer` valid.
The notion of NativeContractState shouldn't ever have existed; a native
contract is a contract and its state is saved as regular contract state,
which is critical because we'll have MPT calculations over this state soon.
Initial minting should be done in Neo.Native.Deploy because it generates
notification that should have proper transaction context.
RegisterNative() shouldn't exist as a public method; native contracts are
only registered at block 0 and they can do it internally, no outside user
should be able to mess with it.
Move some structures from the `native` package to `interop` to avoid
circular references, as interop.Context has to have a list of native
contracts (exposing them via Blockchainer is again too dangerous, it's too
powerful a tool).
1. Closes #841.
2. Commented out test cases where binary transactions are used.
These test cases are marked with `TODO NEO3.0: Update binary` and need to be
updated.
3. Updated other tests.
4. Added cache to calculateValidUntilBlock() RPC-client method.
This is an append-only log which is read only during some RPCs.
It is rather slow to get it from the DB every time we need to append to
it. This commit stores all NEP5Transfers in batches, so that
only the last batch needs to be unmarshaled during block processing.
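A sketch of the batched layout (simplified types and keys; the real neo-go
storage format differs):

    package core

    import "fmt"

    const transfersPerBatch = 128

    type Transfer struct {
        Asset  string
        From   string
        To     string
        Amount int64
        Block  uint32
    }

    type TransferBatch struct{ Items []Transfer }

    // dao stands in for the real DB accessors.
    type dao struct {
        lastBatchIndex map[string]uint32
        batches        map[string]*TransferBatch
    }

    func newDAO() *dao {
        return &dao{
            lastBatchIndex: make(map[string]uint32),
            batches:        make(map[string]*TransferBatch),
        }
    }

    // appendTransfer decodes and updates only the last batch for the
    // account instead of the whole transfer log.
    func (d *dao) appendTransfer(acc string, t Transfer) {
        idx := d.lastBatchIndex[acc]
        key := fmt.Sprintf("%s/%d", acc, idx)
        b := d.batches[key]
        if b == nil {
            b = new(TransferBatch)
            d.batches[key] = b
        }
        b.Items = append(b.Items, t)
        if len(b.Items) >= transfersPerBatch {
            d.lastBatchIndex[acc] = idx + 1 // start a new batch next time
        }
    }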
This change reduces pressure on the DB by doing the following things:
* not storing an additional KV pair for SpentCoin
* storing the Output right in the UnspentCoin, thus eliminating the need to
get a full transaction from the DB
At the same time it makes UnspentCoin fatter and hotter, but it should
probably be worth it.
Also drop `GetUnspentCoinStateOrNew` as it shouldn't ever have existed, UTXOs
can't come out of nowhere.
1.5M block import time (VerifyBlocks disabled) on AMD Ryzen 5 1600/16GB/HDD,
before:
real 302m9.895s
user 96m17.200s
sys 13m37.084s
after:
real 159m16.551s
user 69m58.279s
sys 7m34.334s
So it's almost two-fold, which is a great improvement.
They have it specified right in the transaction. Unfortunately, this little
change rendered our RPC test chain invalid, but I think it became even better
after it, especially given that chain generation is a nice test by itself, so
it should be running as a regular test.
prevHash == input.PrevHash, so make fewer DB accesses and do more real work.
Fix some bugs along the way:
* spentCoins structure may already be present in the DB when persisting TX,
there is nothing wrong with that and we shouldn't overwrite it
* it's only used for NEO and only to check for claim validity. Thus, when
processing claim tx the corresponding spentCoins should always be present
in the DB
Everywhere in this code prevHash == input.PrevHash, thus we can easily move
some common code out of the loop saving on DB accesses and
serialization/deserialization.
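A rough sketch of that restructuring (simplified types): inputs with the same
PrevHash share one read-modify-write of the unspent coin state.

    package core

    type Input struct {
        PrevHash  string
        PrevIndex uint16
    }

    type UnspentCoin struct{ Spent map[uint16]bool }

    type store interface {
        GetUnspentCoin(hash string) (*UnspentCoin, error)
        PutUnspentCoin(hash string, u *UnspentCoin) error
    }

    func spendInputs(s store, inputs []Input) error {
        byHash := make(map[string][]Input)
        for _, in := range inputs {
            byHash[in.PrevHash] = append(byHash[in.PrevHash], in)
        }
        for hash, group := range byHash {
            unspent, err := s.GetUnspentCoin(hash) // one read per PrevHash
            if err != nil {
                return err
            }
            for _, in := range group {
                unspent.Spent[in.PrevIndex] = true
            }
            if err := s.PutUnspentCoin(hash, unspent); err != nil { // one write
                return err
            }
        }
        return nil
    }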
Implement mempool and consensus block creation policies, almost the same as
the SimplePolicy plugin for the C# node provides, with two caveats:
* HighPriorityTxType is not configured and hardcoded to ClaimType
* BlockedAccounts are not supported
Other than that it allows us to run successfully as a testnet CN; previously
our proposals were rejected because we were proposing blocks with oversized
transactions (which are rejected by PoolTx() now).
Mainnet and testnet configuration files are updated accordingly, but privnet
is left as is with no limits.
Configuration is currently attached to the Blockchain and so is the code
that does the policing; it may be moved somewhere else in the future, but it
works for now.
Both are very useful outside of the core, this change also makes respective
transactions initialize with the package as they don't depend on any kind of
input and it makes no sense recreating them again and again on every use.
It wasn't sorted when all validators were elected. There is also no need to do
`Unique()` on the result because validators are distinguished by the key, so
no two registered validators can have the same key.
As C# node does it. Technically it's only needed for consensus and could be
implemented in the appropriate package, but for better compatibility with C#
node we're better returning it sorted right here.
We were completely lacking ValidatorsCount which is supposed to track the
number of votes for a particular count of consensus nodes, which in theory
can change the number of active consensus nodes (if it ever exceeds the
number of standby validators), so we were not producing the right count and,
based on that, not giving the right set of validators.
Fixes #512.
Simple as that: UnregisteredAndHasNoVotes != !RegisteredAndHasVotes
Registered validators should stay in the DB, we might be in the process of
updating votes for them and that starts with subtraction.
The lack of a FreeGasLimit setting in privnet leads to exceeding the gas
limit for transactions with a small amount of GAS to be used for the invoke
operation (less than the real cost of the transaction). Solution: fix the
constraint for the case when FreeGasLimit == 0. So now we are able to perform
transactions in privnet with FreeGasLimit = 0 for free.
We not only need to remove transactions stored in the block, but also
invalidate some potential double spends caused by these transactions. Usually
new block contains a substantial number of transactions from the pool, so it's
easier to make one pass over it only keeping valid items rather than remove
them one by one and make an additional pass to recheck inputs/witnesses.
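The single pass could look roughly like this (simplified types, illustrative
names):

    package mempool

    type Item struct{ Hash string }

    type Pool struct {
        verified map[string]*Item
        items    []*Item
    }

    // RemoveStale keeps only the items that are not in the freshly accepted
    // block and still pass the validity check, dropping everything else in
    // one pass instead of delete-then-recheck.
    func (p *Pool) RemoveStale(inBlock map[string]bool, stillValid func(*Item) bool) {
        kept := p.items[:0] // reuse the backing array
        for _, it := range p.items {
            if inBlock[it.Hash] || !stillValid(it) {
                delete(p.verified, it.Hash)
                continue
            }
            kept = append(kept, it)
        }
        p.items = kept
    }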
Eliminate races between tx checks and adding them to the mempool, ensure the
chain doesn't change while we're working with the new tx. Ensure only one
block addition attempt could be in progress.
Turns out, our dApps use it a lot and we were going to the DB to get it which
is a useless waste of time. Technically we could also remove blockHeight here,
but not doing it at the moment as it's more involved.
It eliminates this time waste from the pprof graph, but doesn't change 1.4M ->
1.5M 100K mainnet block import test case in any noticeable way.
This solves two problems:
* adds support for shortened SYSCALL form that uses IDs (similar to #434, but
for NEO 2.0, supporting both forms), which is important for compatibility
with C# node and mainnet chain that uses it from some height
* reworks interop plugging to use callbacks rather than appending to the map,
these map mangling functions are clearly visible in the VM profiling
statistics and we want spawning a VM to be fast, so it makes sense
optimizing it. This change moves most of the work to the init() phase
making VM setup cheaper.
Caveats:
* InteropNameToID accepts `[]byte` because that's the thing we have in
SYSCALL processing and that's the most frequent use case for it; it leads
to some conversions in other places, but that's acceptable because those
are either tests or init() (see the sketch below)
* three getInterop functions are: `getDefaultVMInterop`, `getSystemInterop`
and `getNeoInterop`
Our 100K (1.4M->1.5M) block import time improves by ~4% with this change.
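Roughly, the ID for the shortened SYSCALL form is derived from a hash of the
name (first 4 bytes of SHA-256 read as a little-endian uint32, as far as I
remember):

    package vm

    import (
        "crypto/sha256"
        "encoding/binary"
    )

    // InteropNameToID maps a syscall name to its numeric ID. It takes
    // []byte because that's what SYSCALL processing already has at hand.
    func InteropNameToID(name []byte) uint32 {
        h := sha256.Sum256(name)
        return binary.LittleEndian.Uint32(h[:4])
    }

    // Both forms of the same syscall then resolve to one handler, e.g.:
    //     id := InteropNameToID([]byte("Neo.Runtime.Log"))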
Importing 100K blocks starting at 1.4M, before this patch:
real 6m0,356s
user 8m52,293s
sys 0m47,372s
After this patch:
real 4m17,748s
user 6m23,316s
sys 0m37,866s
Almost 30% better.
It's a getter function and even though it's quite fancy with its transactions
processing (for consensus operation) it shouldn't ever change the state of the
Blockchain. If we're to change anything here, these changes may conflict
with the actual block processing later or may lead to a broken state (if
transactions aren't approved for some reason).
This seriously improves the serialization/deserialization performance for
several reasons:
* no time spent in `binary` reflection
* no memory allocations being made on every read/write
* uses fast ReadBytes everywhere it's appropriate
It also makes Fixed8 Serializable just for convenience.
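A small sketch of the difference (illustrative, not the actual pkg/io API):
every field is read explicitly through a reader that keeps one reusable
buffer, so neither reflection nor per-read allocations are involved.

    package io

    import "encoding/binary"

    // BinReader is a simplified reflection-free reader.
    type BinReader struct {
        data []byte
        off  int
        buf  [8]byte // reused for all fixed-size reads
    }

    func (r *BinReader) ReadBytes(b []byte) {
        copy(b, r.data[r.off:])
        r.off += len(b)
    }

    func (r *BinReader) ReadU32LE() uint32 {
        r.ReadBytes(r.buf[:4])
        return binary.LittleEndian.Uint32(r.buf[:4])
    }

    type Fixed8 int64

    func (r *BinReader) ReadFixed8() Fixed8 {
        r.ReadBytes(r.buf[:8])
        return Fixed8(binary.LittleEndian.Uint64(r.buf[:8]))
    }

    // Output decoded field by field instead of handing the struct to
    // encoding/binary's reflection-based Read:
    type Output struct {
        AssetID [32]byte
        Amount  Fixed8
        Index   uint32
    }

    func (o *Output) DecodeBinary(r *BinReader) {
        r.ReadBytes(o.AssetID[:])
        o.Amount = r.ReadFixed8()
        o.Index = r.ReadU32LE()
    }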
add dao which takes care of all CRUD operations on storage
remove blockchain state since everything is stored on change
remove storage operations from structs(entities)
move structs to entities package
This change (closely related to the neo-project/neo#1321 proposal) speeds up
1.4M mainnet blocks import by 30%. Basically, we're eliminating key decoding
for block's multisignature that has the same keys most of the time.
Things I don't like about this patch:
* yet another parameter for verifyHashAgainstScript()
* vm keys are not copied in/out
But it's rather simple and solves the problem for this particular case, so I
think it's worth it.
It makes very little sense having pointers here, these structures MUST have
some kind of key and this key is not gonna be wandering somewhere on its
own. Fixes a part of #519.
It reduces heap pressure a little for these elements as we don't have to
allocate/free them individually. And they're directly tied to transactions or
block, not being shared or anything like that, so it makes little sense for
them to be pointer-based. It only makes building transactions a little easier,
but that's obviously a minor use case.
add processing of validators during block persist;
add validator structure with decoding/encoding;
add validator get from store;
add EnrollmentTX and StateTX processing;
add pubkey decode bytes, unique and contains functions;
Fixes failure to process transaction from the block when it was relayed
initially:
WARN[0788] blockQueue: failed adding block into the blockchain blockHeight=7270 error="transaction 35088916403e5cf2152e16c3bc6e0fba20c955fba38543b9fa5c50a3d3a4ace5 failed to verify: invalid transaction due to conflicts with the memory pool" nextIndex=7271
WARN[0790] blockQueue: failed adding block into the blockchain blockHeight=7270 error="transaction 35088916403e5cf2152e16c3bc6e0fba20c955fba38543b9fa5c50a3d3a4ace5 failed to verify: invalid transaction due to conflicts with the memory pool" nextIndex=7271
WARN[0790] blockQueue: failed adding block into the blockchain blockHeight=7270 error="transaction 35088916403e5cf2152e16c3bc6e0fba20c955fba38543b9fa5c50a3d3a4ace5 failed to verify: invalid transaction due to conflicts with the memory pool" nextIndex=7271
If we're to receive some 500 headers (less than `headerBatchCount`) and quit
before receiving more of them, we end up with a clean `bc.headerList` that
will be inited going backwards to the `targetHash`, but this code path
doesn't add the `targetHash` itself, which it should do in this particular
case, otherwise we end up with no genesis block hash in the list.
Without it, the following line could fail on subsequent restarts:
currHeaderHeight, currHeaderHash, err := storage.CurrentHeaderHeight(bc.store)
if the node was stopped before any headers had been received.
Extend Blockchainer with one more method to spawn a VM for test runs and use
it to run scripts. Gas consumption is not counted or limited in any way at the
moment (see #424).
The VM should be responsible for code execution, and in case anyone is
interested in additional logging or errors, they can handle them like we do
it in the CLI.
We care about stored values here, so print those, which avoids blocking in
bc.HeaderHeight() and removes duplication between blockHeight and
persistedHeight. Fixes saving the blockchain on exit (deferred function in
Run() blocked in persist()).
Test modification was required because storeBlocks() doesn't actually save
headers and thus TestGetTransaction started to fail on persist().