This error message makes no sense when shutting down the server:
2020-06-25T19:29:53.251+0300 ERROR failed to start RPC server {"error": "http: Server closed"}
And ListenAndServe is documented to always return a non-nil error, one of which
is http.ErrServerClosed (returned after Shutdown or Close). This should also fix
the following test failure:
==================
WARNING: DATA RACE
Read at 0x00c000254243 by goroutine 49:
  testing.(*common).logDepth()
      /usr/local/go/src/testing/testing.go:665 +0xa1
  testing.(*common).Logf()
      /usr/local/go/src/testing/testing.go:658 +0x8f
  testing.(*T).Logf()
      <autogenerated>:1 +0x75
  go.uber.org/zap/zaptest.testingWriter.Write()
      /go/pkg/mod/go.uber.org/zap@v1.10.0/zaptest/logger.go:130 +0x11f
  go.uber.org/zap/zaptest.(*testingWriter).Write()
      <autogenerated>:1 +0xa9
  go.uber.org/zap/zapcore.(*ioCore).Write()
      /go/pkg/mod/go.uber.org/zap@v1.10.0/zapcore/core.go:90 +0x1c3
  go.uber.org/zap/zapcore.(*CheckedEntry).Write()
      /go/pkg/mod/go.uber.org/zap@v1.10.0/zapcore/entry.go:215 +0x1e7
  go.uber.org/zap.(*Logger).Error()
      /go/pkg/mod/go.uber.org/zap@v1.10.0/logger.go:203 +0x95
  github.com/nspcc-dev/neo-go/pkg/rpc/server.(*Server).Start()
      /go/src/github.com/nspcc-dev/neo-go/pkg/rpc/server/server.go:179 +0x5c5

Previous write at 0x00c000254243 by goroutine 44:
  testing.tRunner.func1()
      /usr/local/go/src/testing/testing.go:900 +0x353
  testing.tRunner()
      /usr/local/go/src/testing/testing.go:913 +0x1bb

Goroutine 49 (running) created at:
  github.com/nspcc-dev/neo-go/pkg/rpc/server.initClearServerWithInMemoryChain()
      /go/src/github.com/nspcc-dev/neo-go/pkg/rpc/server/server_helper_test.go:69 +0x305
  github.com/nspcc-dev/neo-go/pkg/rpc/server.initServerWithInMemoryChain()
      /go/src/github.com/nspcc-dev/neo-go/pkg/rpc/server/server_helper_test.go:78 +0x3c
  github.com/nspcc-dev/neo-go/pkg/rpc/server.testRPCProtocol()
      /go/src/github.com/nspcc-dev/neo-go/pkg/rpc/server/server_test.go:805 +0x53
  github.com/nspcc-dev/neo-go/pkg/rpc/server.TestRPC.func1()
      /go/src/github.com/nspcc-dev/neo-go/pkg/rpc/server/server_test.go:793 +0x44
  testing.tRunner()
      /usr/local/go/src/testing/testing.go:909 +0x199

Goroutine 44 (finished) created at:
  testing.(*T).Run()
      /usr/local/go/src/testing/testing.go:960 +0x651
  github.com/nspcc-dev/neo-go/pkg/rpc/server.TestRPC()
      /go/src/github.com/nspcc-dev/neo-go/pkg/rpc/server/server_test.go:792 +0x5d
  testing.tRunner()
      /usr/local/go/src/testing/testing.go:909 +0x199
==================
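Roughly, the intended check looks like this (a minimal standard-library sketch;
the actual neo-go server wiring and logger differ):

```go
package main

import (
	"errors"
	"log"
	"net/http"
)

func main() {
	srv := &http.Server{Addr: ":10332"}
	go func() {
		// ListenAndServe always returns a non-nil error; ErrServerClosed only
		// signals that Shutdown or Close was called, so it's not a failure.
		if err := srv.ListenAndServe(); err != nil && !errors.Is(err, http.ErrServerClosed) {
			log.Printf("failed to start RPC server: %v", err)
		}
	}()
	// ... later, on shutdown:
	_ = srv.Close()
}
```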
Pick up the following changes:
* recovery message sending fix
* tx re-requests on sendRecoveryRequest
* proposed block checks fix
* timeout calculation fix for InitializeConsensus
* removal of TxPerBlock, which is irrelevant for neo-go (policing is done by
  the node, not the dbft library)
NEO VM does not distinguish between empty and nil slices. Supporting this is
not easy and would require changing lots of other opcodes. Pointers are not
supported anyway, and slices can be checked for emptiness by inspecting their
`len`.
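A hedged illustration of the recommended pattern in contract code (the
identifiers are made up): test emptiness via `len` rather than comparing
against nil, since the VM makes no distinction between the two.

```go
// firstOrZero returns the first element or 0 for an empty input.
func firstOrZero(items []int) int {
	if len(items) == 0 { // true for both nil and empty slices
		return 0
	}
	return items[0]
}
```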
Set default values also for complex (struct, map) types.
Note: because the VM does not distinguish between an empty array and a nil
array, the comparison 'a == nil' can't be handled properly without substantial
effort. This will be fixed in NEO3.
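A hedged example of the intended semantics (the types here are illustrative,
not real contract code): variables of complex types declared without an
initializer get usable default values.

```go
type token struct {
	Name   string
	Amount int
}

func defaults() (token, map[string]int) {
	var t token          // fields start out as their zero values
	var m map[string]int // gets a default value too; still check len(m), not m == nil
	return t, m
}
```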
Using the view number from the recovery message is just plain wrong: it will be
higher than our current view, so these messages will be treated as coming from
the future even though they include their original view number.
Implement (*Param).GetBoolean() for converting a parameter to a bool value.
It is used for the verbosity flag and is false iff the parameter is either a
zero number or an empty string.
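A rough sketch of the conversion rule only (this is not the real Param type or
its method): a zero number or an empty string converts to false, anything else
to true.

```go
// toBoolean mirrors the described rule for untyped parameter values.
func toBoolean(v interface{}) bool {
	switch val := v.(type) {
	case int:
		return val != 0
	case float64:
		return val != 0
	case string:
		return val != ""
	default:
		return true
	}
}
```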
We need to compact our in-memory MPT from time to time, otherwise it quickly
fills up all available memory. This raises two obvious questions: when to do
that and to what level to do it.
As for 'when', I think it's quite easy to use our regular persistence interval
as an anchor (persistence also frees up some memory), but we can't do that in
the persistence routine itself because of synchronization issues (adding
synchronization primitives would add a cost I'd also like to avoid), so do it
indirectly by comparing persisted and current height in `storeBlock`.
Choosing the proper level is another problem, but if we roughly estimate one
full branch node to use 1K of memory (usually it's way less than that), then we
can easily store 1K of these nodes, which gives us a depth of 10 for our trie.
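A hedged sketch of that trigger (all names below are illustrative, not the real
Blockchain/MPT API): when persistence has caught up with the current height,
the older trie nodes are already on disk, so the in-memory MPT can be collapsed
down to a fixed depth.

```go
type trie struct{}

// Collapse drops cached nodes below the given depth (stub for the sketch).
func (t *trie) Collapse(depth int) {}

type chain struct {
	mpt             *trie
	currentHeight   uint32
	persistedHeight uint32 // updated by the persistence routine
}

// Roughly 1K per full branch node, so ~1K nodes (depth 10) keeps the
// retained trie cache around a megabyte.
const mptCompactionDepth = 10

func (bc *chain) storeBlock(index uint32) {
	if bc.persistedHeight == bc.currentHeight {
		// Persistence has caught up; compacting here avoids adding
		// synchronization to the persistence routine itself.
		bc.mpt.Collapse(mptCompactionDepth)
	}
	// ... the rest of block storing ...
	bc.currentHeight = index
}
```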
Items were serialized several times if there were several successful
transactions in a block; prevent that by using the State field as a bitfield
(as it was almost intended to be) and adding one more bit. This also eliminates
useless duplicate MPT traversals.
Confirmed to not break storage changes up to 3.3M on testnet.
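A hedged sketch of the bitfield idea (the flag names are made up, not the real
storage state constants): keep the existing state flags and add one more bit
marking an item as already serialized for the current block, so later
transactions in the same block don't write it (or traverse the MPT for it)
again.

```go
type itemState byte

const (
	stAdded   itemState = 1 << iota // item was added
	stChanged                       // item was changed
	stDeleted                       // item was deleted
	stFlushed                       // the extra bit: already serialized in this block
)

// needsSerialization reports whether the item still has to be written out.
func needsSerialization(s itemState) bool {
	return s&(stAdded|stChanged|stDeleted) != 0 && s&stFlushed == 0
}
```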
This differed from the C# notion of PrevHash. It's not the previous root, but
rather the hash of the previous serialized MPTRoot structure (the one that is
to be signed by CNs).
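A hedged illustration of that relation (the type and hashing below are
stand-ins, not the neo-go API): PrevHash covers the whole previous serialized
MPTRoot structure that CNs sign, not just its Root field.

```go
package sketch

import "crypto/sha256"

type mptRoot struct {
	Root     []byte // state root of the MPT for this block
	PrevHash []byte // hash of the previous serialized mptRoot
}

// link sets PrevHash from the serialized form of the previous structure.
func link(cur *mptRoot, prevSerialized []byte) {
	h := sha256.Sum256(prevSerialized) // stand-in for the real hashing scheme
	cur.PrevHash = h[:]                // NOT: the previous Root field
}
```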