1. Initialization is performed via `Blockchain` methods.
2. The native Oracle contract updates the list of oracle nodes
and in-flight requests in `PostPersist` (see the sketch after this list).
3. RPC uses the Oracle module directly.
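A minimal sketch of that arrangement, using a hypothetical Service type fed
from `PostPersist`; the names below are illustrative, not the actual neo-go API.

```go
// Sketch only: hypothetical oracle module, not the real neo-go code.
package oracle

import "sync"

// Request mirrors an in-flight oracle request tracked by the module.
type Request struct {
	ID  uint64
	URL string
}

// Service is what the native Oracle contract feeds from PostPersist and
// what RPC code talks to directly.
type Service struct {
	mu       sync.Mutex
	nodes    []string  // current oracle node list
	requests []Request // in-flight requests
}

// UpdateNodes is called from PostPersist when the designated oracle node
// list changes.
func (s *Service) UpdateNodes(nodes []string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.nodes = nodes
}

// AddRequests is called from PostPersist with the requests added in the
// just-persisted block.
func (s *Service) AddRequests(reqs []Request) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.requests = append(s.requests, reqs...)
}
```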
1) It duplicates the registration done in the `version` message handler, and
no valid connection can work without version exchange.
2) On public networks seed nodes are defined by names, so we register
connections to them using these names, but if a connection is dropped we
delist it by its IP:PORT combination. This can lead to a zero PeerCount() with
all seeds still registered as connected in the discovery subsystem and thus no
reconnection attempts being made (see the sketch after this list).
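A rough sketch of the invariant we want, using a hypothetical Pool type rather
than the real discovery code: whatever key was used to register a connection
(the seed's name for seeds) must also be the key used to delist it.

```go
// Sketch only: hypothetical discovery pool, not the actual neo-go type.
package discovery

import "sync"

type Pool struct {
	mu        sync.Mutex
	connected map[string]bool // keyed by the address string used when dialing
}

func NewPool() *Pool {
	return &Pool{connected: make(map[string]bool)}
}

// RegisterConnected marks addr (e.g. "seed1.example.org:10333") as connected.
func (p *Pool) RegisterConnected(addr string) {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.connected[addr] = true
}

// UnregisterConnected must get the same addr that was registered, not the
// resolved IP:PORT of the dropped connection, so the seed becomes eligible
// for reconnection again.
func (p *Pool) UnregisterConnected(addr string) {
	p.mu.Lock()
	defer p.mu.Unlock()
	delete(p.connected, addr)
}
```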
It could be the case that checks are performed simultaneously and the number
of peer connections goes down from 2 to 0. We must take such a case into
account and register the address as good in discovery.
Right now a single slow peer can slow down the whole network.
Do the broadcast in 2 parts (sketched below):
1. Perform a non-blocking send to all peers where possible.
2. Perform blocking sends until the message is sent to 2/3 of good peers.
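A rough sketch of the two-phase broadcast, assuming each peer exposes a
buffered send queue (the real Peer interface in neo-go is different):

```go
// Sketch only: simplified peer with a buffered queue.
package network

type peer struct {
	queue chan []byte
}

// broadcast sends msg in two phases so that one slow peer can't stall the
// whole broadcast: a best-effort non-blocking pass first, then blocking
// sends until at least 2/3 of the peers have the message.
func broadcast(peers []*peer, msg []byte) {
	sent := make(map[*peer]bool)

	// Phase 1: non-blocking, skip peers whose queues are full.
	for _, p := range peers {
		select {
		case p.queue <- msg:
			sent[p] = true
		default:
		}
	}

	// Phase 2: block until the 2/3 threshold is reached.
	need := 2 * len(peers) / 3
	for _, p := range peers {
		if len(sent) >= need {
			break
		}
		if sent[p] {
			continue
		}
		p.queue <- msg // blocking send
		sent[p] = true
	}
}
```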
Prices are defined as coefficients to `BaseExecFee`, which is defined by the
Policy contract (TBD later).
Native method prices are defined without the need to multiply.
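For illustration (hypothetical helpers, not the actual fee code): opcode
prices get multiplied by `BaseExecFee`, while native method prices are used
as-is.

```go
// Sketch only: illustrative helpers, not the real pricing code.
package fees

// opcodePrice: opcode prices are stored as coefficients and multiplied by
// the BaseExecFee value taken from the Policy contract.
func opcodePrice(coefficient, baseExecFee int64) int64 {
	return coefficient * baseExecFee
}

// nativeMethodPrice: native method prices are already absolute, no
// multiplication by BaseExecFee is needed.
func nativeMethodPrice(price int64) int64 {
	return price
}
```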
It happens from time to time in a four-node private network where there are
seeds (aka CNs) and not a lot of other nodes to connect to.
I don't know how to test for an infinite loop that has no side-effects, so no
test added here.
If the node starts with its seeds unavailable, it will try connecting to each
of them three times, blacklist them and then sit forever waiting for
something. That's not good behavior; it should always keep trying to connect
to the seeds if nothing else works (see the sketch below).
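A simplified sketch of the intended behavior, using a hypothetical discoverer
type rather than the real neo-go one: when the address pool runs dry and
nothing is connected, seeds are re-queued instead of staying blacklisted.

```go
// Sketch only: simplified discovery loop.
package discovery

import "time"

type discoverer struct {
	seeds     []string
	pool      chan string // addresses to try
	connected func() int  // number of live connections
	dial      func(addr string) error
}

// run keeps trying addresses from the pool; when the pool is empty and we
// have no connections at all, the seeds are put back into the pool so the
// node never sits forever with every seed given up on.
func (d *discoverer) run() {
	for {
		select {
		case addr := <-d.pool:
			_ = d.dial(addr) // failures are fine, we'll retry later
		default:
			if d.connected() == 0 {
				for _, s := range d.seeds {
					select {
					case d.pool <- s:
					default:
					}
				}
			}
			time.Sleep(time.Second)
		}
	}
}
```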
Now we have VerifyTx() and PoolTx() APIs that either verify a transaction in
isolation or verify it against the mempool (either the primary one or the one
given) and then add it there. There is no way to check against the mempool
without adding the transaction to it, but I doubt we really need that.
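Roughly the intended usage; the types and signatures below are simplified
stand-ins, not the exact neo-go API.

```go
// Sketch only: stand-in types showing the split between isolated
// verification and mempool admission.
package example

import "fmt"

type Transaction struct{ SystemFee, NetworkFee int64 }

type Blockchain interface {
	// VerifyTx checks tx in isolation against chain state, mempool untouched.
	VerifyTx(tx *Transaction) error
	// PoolTx verifies tx against the mempool and adds it there on success.
	PoolTx(tx *Transaction) error
}

func relay(bc Blockchain, tx *Transaction) error {
	if err := bc.VerifyTx(tx); err != nil {
		return fmt.Errorf("invalid transaction: %w", err)
	}
	// The mempool repeats fee/balance checks on its own, so we don't here.
	return bc.PoolTx(tx)
}
```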
It allows us to remove some duplication between the old PoolTx and verifyTx,
where both tried to check the transaction against the mempool (verifying first
and then adding it). It also saves us a utility token balance check, because
the mempool does it anyway and we no longer need to do it explicitly in
verifyTx.
It makes the transaction checks in AddBlock() and verifyBlock() more correct,
because previously they could miss the case where sender S has enough balance
to pay for A, B or C individually, but can't pay for all of them (see the
sketch below).
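A minimal sketch of that per-sender check, with hypothetical names and types:

```go
// Sketch only: hypothetical helper, not the actual neo-go code.
package example

import "errors"

type tx struct {
	sender string
	fee    int64 // system + network fee
}

// checkSenderFees verifies that every sender can pay for all of its
// transactions in the block combined, not just for each one separately.
func checkSenderFees(txs []tx, balanceOf func(string) int64) error {
	total := make(map[string]int64)
	for _, t := range txs {
		total[t.sender] += t.fee
		if total[t.sender] > balanceOf(t.sender) {
			return errors.New("insufficient funds for sender " + t.sender)
		}
	}
	return nil
}
```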
Caveats:
* consensus runs concurrently with other processes, so things could change
while verifyBlock() is iterating over transactions; this will be mitigated
in subsequent commits
Improves the TPS value for a single node by at least 11%.
Fixes #667, fixes #668.
The GetBlockByIndex handler starts sending blocks right from the start index,
so if that index is s.chain.BlockHeight() we're requesting (and receiving) a
block we already have.
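The fix boils down to asking for the first block we don't have yet; a trivial
sketch with a simplified chain interface:

```go
// Sketch only: simplified stand-in, not the real request code.
package example

type chain interface{ BlockHeight() uint32 }

// nextBlocksRequestIndex returns the index to put into a GetBlockByIndex
// request: the first block we do not have yet.
func nextBlocksRequestIndex(c chain) uint32 {
	return c.BlockHeight() + 1
}
```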
It turns out the C# node no longer broadcasts an Inv when it creates a block;
instead it sends a ping, and if we don't pay attention to the height specified
there, we're effectively missing a new block. Of course we'll get it later,
after the ping timer expires and the regular ping/pong sequence runs, but that
delays it for no good reason.
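A simplified sketch of the idea (stand-in types, not the real handler): treat
the height advertised in a ping as a block announcement.

```go
// Sketch only: simplified types.
package example

type pingPayload struct{ LastBlockIndex uint32 }

type node interface {
	BlockHeight() uint32
	RequestBlocks(from uint32)
}

// handlePing asks for blocks right away if the sender is ahead of us,
// instead of waiting for the next ping/pong cycle.
func handlePing(n node, p pingPayload) {
	if p.LastBlockIndex > n.BlockHeight() {
		n.RequestBlocks(n.BlockHeight() + 1)
	}
}
```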
It no longer depends on blockchain state and can never return an error; in
fact, we can always iterate over the signers, so copying these hashes doesn't
make much sense, and neither does sorting the arrays in verifyTxWitnesses
(the witnesses order must match the signers order).
It's not needed any more with Go 1.13, as we have wrapping/unwrapping in the
standard library. All errors.Wrap calls are replaced with fmt.Errorf; some
strings are improved along the way.
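For example (illustrative error strings):

```go
// Sketch of the replacement pattern.
package example

import (
	"errors"
	"fmt"
)

var errTruncated = errors.New("unexpected EOF")

func decode() error {
	// before: return errors.Wrap(errTruncated, "failed to decode block")
	// after (Go 1.13+), %w keeps the cause reachable:
	return fmt.Errorf("failed to decode block: %w", errTruncated)
}

func isTruncated(err error) bool {
	return errors.Is(err, errTruncated) // works thanks to %w
}
```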
Closes #1192
1. We now have CMDGetBlockByIndex, so there's no need to request headers
first when we can just ask for blocks.
2. We don't ask for headers (i.e. we don't send CMDGetHeaders), so
consequently we shouldn't react to CMDHeaders.
3. But we still keep reacting to the CMDGetHeaders command, as there could
be a node that needs headers.
It returned an error if a block wasn't found (which can happen when our chain
is lower than the requester's). Fixed. It also should return all requested
blocks, not just the first one.
We can't lock them (or there will be a deadlock), but we need to fix this:
fatal error: concurrent map iteration and map write
goroutine 1 [running]:
runtime.throw(0xdec086, 0x26)
/usr/lib64/go/1.12/src/runtime/panic.go:617 +0x72 fp=0xc02fec2bf8 sp=0xc02fec2bc8 pc=0x42d932
runtime.mapiternext(0xc02fec2d40)
/usr/lib64/go/1.12/src/runtime/map.go:860 +0x597 fp=0xc02fec2c80 sp=0xc02fec2bf8 pc=0x40efe7
github.com/nspcc-dev/neo-go/pkg/network.(*Server).Shutdown(0xc0000fc160)
/home/rik/dev/neo-go2/pkg/network/server.go:194 +0x238 fp=0xc02fec2db0 sp=0xc02fec2c80 pc=0xa89da8
github.com/nspcc-dev/neo-go/cli/server.startServer(0xc0000fcc60, 0x0, 0x0)
/home/rik/dev/neo-go2/cli/server/server.go:399 +0x7a9 fp=0xc02fec3820 sp=0xc02fec2db0 pc=0xae2079
...
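One standard way out, sketched here with a simplified Server type rather than
the real one, is to copy the peer set while holding the lock and disconnect
outside of it:

```go
// Sketch only: simplified Server, not the actual neo-go type.
package network

import "sync"

type Peer interface{ Disconnect(err error) }

type Server struct {
	lock  sync.RWMutex
	peers map[Peer]bool
}

// Shutdown iterates over a copy of the peer map, so other goroutines can
// keep adding/removing peers without triggering concurrent map access, and
// Disconnect (which unregisters the peer and needs the same lock) is called
// with the lock released to avoid a deadlock.
func (s *Server) Shutdown() {
	s.lock.RLock()
	list := make([]Peer, 0, len(s.peers))
	for p := range s.peers {
		list = append(list, p)
	}
	s.lock.RUnlock()

	for _, p := range list {
		p.Disconnect(nil)
	}
}
```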
GetValidators without a parameter is called upon DBFT initialization and
should return the validators for the next block (the ones that will create
it); the parameterized GetValidators is used for the NextConsensus
calculation, where we need a list for the current state of the chain.
In order to avoid a dependency cycle in the next commits:
imports github.com/nspcc-dev/neo-go/pkg/config
imports github.com/nspcc-dev/neo-go/pkg/wallet
imports github.com/nspcc-dev/neo-go/pkg/vm
imports github.com/nspcc-dev/neo-go/pkg/smartcontract/nef
imports github.com/nspcc-dev/neo-go/pkg/config