[#2442] English Check

Signed-off-by: Elizaveta Chichindaeva <elizaveta@nspcc.ru>
This commit is contained in:
Elizaveta Chichindaeva 2022-04-20 21:30:09 +03:00
parent 7f8b259994
commit 28908aa3cf
293 changed files with 2222 additions and 2224 deletions


@ -5,9 +5,9 @@ follow the guidelines:
1. Check open [issues](https://github.com/nspcc-dev/neo-go/issues) and
   [pull requests](https://github.com/nspcc-dev/neo-go/pulls) for existing discussions.
1. Open an issue first to discuss a new feature or enhancement.
1. Write tests and make sure the test suite passes locally and on CI.
1. When optimizing something, write benchmarks and attach the results:
   ```
   go test -run - -bench BenchmarkYourFeature -count=10 ./... >old // on master
   go test -run - -bench BenchmarkYourFeature -count=10 ./... >new // on your branch
@ -15,11 +15,11 @@ follow the guidelines:
   ```
   `benchstat` is described at https://godocs.io/golang.org/x/perf/cmd/benchstat.
1. Open a pull request and reference the relevant issue(s).
1. Make sure your commits are logically separated and have good comments
   explaining the details of your change. Add a package/file prefix to your
   commit if that's applicable (like 'vm: fix ADD miscalculation on full
   moon').
1. After receiving feedback, amend your commits or add new ones as
   appropriate.
1. **Have fun!**


@ -59,7 +59,7 @@ The resulting binary is `bin/neo-go`.
#### Building on Windows
To build NeoGo on the Windows platform, we recommend installing `make` from the [MinGW
package](https://osdn.net/projects/mingw/). Then, you can build NeoGo with:
```
make build
@ -77,13 +77,13 @@ is stored in a file and NeoGo allows you to store multiple files in one
directory (`./config` by default) and easily switch between them using network
flags.
To start a Neo node on a private network, use:
```
./bin/neo-go node
```
Or specify a different network with an appropriate flag like this:
```
./bin/neo-go node --mainnet
@ -94,12 +94,12 @@ Available network flags:
- `--privnet, -p`
- `--testnet, -t`
To run a consensus/committee node, refer to [consensus
documentation](docs/consensus.md).
### Docker
By default, the `CMD` is set to run a node on `privnet`, so to do that, simply run:
```bash
docker run -d --name neo-go -p 20332:20332 -p 20331:20331 nspccdev/neo-go
@ -111,8 +111,7 @@ protocol) and `20331` (JSON-RPC server).
### Importing mainnet/testnet dump files
If you want to jump-start your mainnet or testnet node with [chain archives
provided by NGD](https://sync.ngd.network/), follow these instructions:
```
$ wget .../chain.acc.zip # chain dump file
$ unzip chain.acc.zip
@ -120,7 +119,7 @@ $ ./bin/neo-go db restore -m -i chain.acc # for testnet use '-t' flag instead of
```
The process differs from the C# node in that block importing is a separate
mode. After it ends, the node can be started normally.
## Running a private network
@ -131,8 +130,8 @@ Refer to [consensus node documentation](docs/consensus.md).
Please refer to [neo-go smart contract development
workshop](https://github.com/nspcc-dev/neo-go-sc-wrkshp) that shows some
simple contracts that can be compiled/deployed/run using the neo-go compiler, SDK
and a private network. For details on how Go code is translated to Neo VM
bytecode and what you can and cannot do in a smart contract, please refer to the
[compiler documentation](docs/compiler.md).
Refer to [examples](examples/README.md) for more NEO smart contract examples
@ -145,9 +144,9 @@ wallets. NeoGo wallet is just a
[NEP-6](https://github.com/neo-project/proposals/blob/68398d28b6932b8dd2b377d5d51bca7b0442f532/nep-6.mediawiki)
file that is used by CLI commands to sign various things. There is no database
behind it; the blockchain is the database, and CLI commands use RPC to query
data from it. At the same time, it's not required to open a wallet on an RPC
node to perform various actions (unless your node provides some service
for the network like consensus or oracle nodes do).
# Developer notes
Nodes provide features such as [Prometheus](https://prometheus.io/docs/guides/go-application) and
@ -167,7 +166,7 @@ where you can switch on/off and define port. Prometheus is enabled and Pprof is
Feel free to contribute to this project after reading the
[contributing guidelines](CONTRIBUTING.md).
Before starting to work on a certain topic, create a new issue first
describing the feature/topic you are going to implement.
# Contact


@ -1,7 +1,7 @@
# Roadmap for neo-go
This defines an approximate plan of neo-go releases and key features planned for
them. Things can change if there is a need to push a bugfix or some critical
functionality.
## Versions 0.7X.Y (as needed)


@ -10,7 +10,7 @@ import (
	"github.com/urfave/cli"
)
// Address is a wrapper for a Uint160 with flag.Value methods.
type Address struct {
	IsSet bool
	Value util.Uint160
@ -28,12 +28,12 @@ var (
	_ cli.Flag = AddressFlag{}
)
// String implements the fmt.Stringer interface.
func (a Address) String() string {
	return address.Uint160ToString(a.Value)
}
// Set implements the flag.Value interface.
func (a *Address) Set(s string) error {
	addr, err := ParseAddress(s)
	if err != nil {
@ -44,7 +44,7 @@ func (a *Address) Set(s string) error {
	return nil
}
// Uint160 casts an address to Uint160.
func (a *Address) Uint160() (u util.Uint160) {
	if !a.IsSet {
		// It is a programmer error to call this method without
@ -82,7 +82,7 @@ func (f AddressFlag) GetName() string {
	return f.Name
}
// Apply populates the flag given the flag set and environment.
// Ignores errors.
func (f AddressFlag) Apply(set *flag.FlagSet) {
	eachName(f.Name, func(name string) {
@ -90,7 +90,7 @@ func (f AddressFlag) Apply(set *flag.FlagSet) {
	})
}
// ParseAddress parses a Uint160 from either an LE string or an address.
func ParseAddress(s string) (util.Uint160, error) {
	const uint160size = 2 * util.Uint160Size
	switch len(s) {
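The length switch that follows dispatches between the two accepted forms. As a rough sketch of the hex branch, assuming an LE string is the byte-reversed hex encoding of the Uint160 (the base58 address branch and the real helper names are omitted here):

```go
package main

import (
	"encoding/hex"
	"fmt"
)

const uint160Size = 20 // matches util.Uint160Size; the hex form is twice as long

// parseLE decodes a little-endian hex string into big-endian bytes.
func parseLE(s string) ([uint160Size]byte, error) {
	var u [uint160Size]byte
	b, err := hex.DecodeString(s)
	if err != nil || len(b) != uint160Size {
		return u, fmt.Errorf("invalid LE Uint160 string")
	}
	for i, v := range b {
		u[uint160Size-1-i] = v // reverse the byte order
	}
	return u, nil
}

func main() {
	u, err := parseLE("0000000000000000000000000000000000000001")
	fmt.Println(err == nil, u[0]) // prints "true 1"
}
```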


@ -8,7 +8,7 @@ import (
	"github.com/urfave/cli"
)
// Fixed8 is a wrapper for a fixedn.Fixed8 with flag.Value methods.
type Fixed8 struct {
	Value fixedn.Fixed8
}
@ -25,12 +25,12 @@ var (
	_ cli.Flag = Fixed8Flag{}
)
// String implements the fmt.Stringer interface.
func (a Fixed8) String() string {
	return a.Value.String()
}
// Set implements the flag.Value interface.
func (a *Fixed8) Set(s string) error {
	f, err := fixedn.Fixed8FromString(s)
	if err != nil {
@ -40,7 +40,7 @@ func (a *Fixed8) Set(s string) error {
	return nil
}
// Fixed8 returns the underlying fixedn.Fixed8 value.
func (a *Fixed8) Fixed8() fixedn.Fixed8 {
	return a.Value
}
@ -61,7 +61,7 @@ func (f Fixed8Flag) GetName() string {
	return f.Name
}
// Apply populates the flag given the flag set and environment.
// Ignores errors.
func (f Fixed8Flag) Apply(set *flag.FlagSet) {
	eachName(f.Name, func(name string) {
@ -69,7 +69,7 @@ func (f Fixed8Flag) Apply(set *flag.FlagSet) {
	})
}
// Fixed8FromContext returns a parsed fixedn.Fixed8 value from the specified flag.
func Fixed8FromContext(ctx *cli.Context, name string) fixedn.Fixed8 {
	return ctx.Generic(name).(*Fixed8).Value
}
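For background, a Fixed8 is a fixed-point decimal stored as an integer count of 10⁻⁸ units. A minimal sketch of that representation (illustrative helpers, not the real `fixedn` package):

```go
package main

import "fmt"

const decimals = 100000000 // 10^8: Fixed8 keeps 8 decimal places

// fromFloat scales a float into integer 10^-8 units.
func fromFloat(f float64) int64 { return int64(f * decimals) }

// toString renders the integer units back as a decimal string.
func toString(v int64) string { return fmt.Sprintf("%d.%08d", v/decimals, v%decimals) }

func main() {
	v := fromFloat(1.5)
	fmt.Println(v, toString(v)) // prints "150000000 1.50000000"
}
```

A real implementation also handles negative values, overflow and string parsing; this sketch only shows the scaling idea.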


@ -21,7 +21,7 @@ type ReadWriter struct {
	io.Writer
}
// ReadLine reads a line from the input without trailing '\n'.
func ReadLine(prompt string) (string, error) {
	trm := Terminal
	if trm == nil {
@ -46,7 +46,7 @@ func readLine(trm *term.Terminal, prompt string) (string, error) {
	return trm.ReadLine()
}
// ReadPassword reads the user's password with a prompt.
func ReadPassword(prompt string) (string, error) {
	trm := Terminal
	if trm == nil {
@ -60,7 +60,7 @@ func ReadPassword(prompt string) (string, error) {
	return trm.ReadPassword(prompt)
}
// ConfirmTx asks for a confirmation to send the tx.
func ConfirmTx(w io.Writer, tx *transaction.Transaction) error {
	fmt.Fprintf(w, "Network fee: %s\n", fixedn.Fixed8(tx.NetworkFee))
	fmt.Fprintf(w, "System fee: %s\n", fixedn.Fixed8(tx.SystemFee))


@ -16,7 +16,7 @@ import (
// DefaultTimeout is the default timeout used for RPC requests.
const DefaultTimeout = 10 * time.Second
// RPCEndpointFlag is a long flag name for an RPC endpoint. It can be used to
// check for flag presence in the context.
const RPCEndpointFlag = "rpc-endpoint"
@ -60,7 +60,7 @@ func GetNetwork(ctx *cli.Context) netmode.Magic {
	return net
}
// GetTimeoutContext returns a context.Context with the default or a user-set timeout.
func GetTimeoutContext(ctx *cli.Context) (context.Context, func()) {
	dur := ctx.Duration("timeout")
	if dur == 0 {


@ -15,8 +15,8 @@ import (
// validUntilBlockIncrement is the number of extra blocks to add to an exported transaction.
const validUntilBlockIncrement = 50
// InitAndSave creates an incompletely signed transaction which can be used
// as an input to `multisig sign`.
func InitAndSave(net netmode.Magic, tx *transaction.Transaction, acc *wallet.Account, filename string) error {
	// avoid fast transaction expiration
	tx.ValidUntilBlock += validUntilBlockIncrement
@ -34,7 +34,7 @@ func InitAndSave(net netmode.Magic, tx *transaction.Transaction, acc *wallet.Acc
	return Save(scCtx, filename)
}
// Read reads the parameter context from the file.
func Read(filename string) (*context.ParameterContext, error) {
	data, err := os.ReadFile(filename)
	if err != nil {
@ -48,7 +48,7 @@ func Read(filename string) (*context.ParameterContext, error) {
	return c, nil
}
// Save writes the parameter context to the file.
func Save(c *context.ParameterContext, filename string) error {
	if data, err := json.Marshal(c); err != nil {
		return fmt.Errorf("can't marshal transaction: %w", err)


@ -120,8 +120,8 @@ func newGraceContext() context.Context {
	return ctx
}
// getConfigFromContext looks at the path and mode flags in the given cli context and
// returns an appropriate config.
func getConfigFromContext(ctx *cli.Context) (config.Config, error) {
	configPath := "./config"
	if argCp := ctx.String("config-path"); argCp != "" {
@ -131,10 +131,10 @@ func getConfigFromContext(ctx *cli.Context) (config.Config, error) {
}
// handleLoggingParams reads logging parameters.
// If a user selected the debug level -- the function enables it.
// If logPath is configured -- the function creates a dir and a file for logging.
// If logPath is configured on Windows -- the function returns a closer to be
// able to close the sink for the opened log output file.
func handleLoggingParams(ctx *cli.Context, cfg config.ApplicationConfiguration) (*zap.Logger, func() error, error) {
	level := zapcore.InfoLevel
	if ctx.Bool("debug") {


@ -48,14 +48,14 @@ func CheckSenderWitness() {
	}
}
// Update updates the contract with a new one.
func Update(script, manifest []byte) {
	ctx := storage.GetReadOnlyContext()
	mgmt := storage.Get(ctx, mgmtKey).(interop.Hash160)
	contract.Call(mgmt, "update", contract.All, script, manifest)
}
// GetValue returns the stored value.
func GetValue() string {
	ctx := storage.GetReadOnlyContext()
	val1 := storage.Get(ctx, key)
@ -63,7 +63,7 @@ func GetValue() string {
	return val1.(string) + "|" + val2.(string)
}
// GetValueWithKey returns the stored value with the specified key.
func GetValueWithKey(key string) string {
	ctx := storage.GetReadOnlyContext()
	return storage.Get(ctx, key).(string)


@ -1,4 +1,4 @@
// invalid is an example of a contract which doesn't pass the event check.
package invalid1
import (
@ -6,14 +6,14 @@ import (
	"github.com/nspcc-dev/neo-go/pkg/interop/runtime"
)
// Notify1 emits a correctly typed event.
func Notify1() bool {
	runtime.Notify("Event", interop.Hash160{1, 2, 3})
	return true
}
// Notify2 emits an invalid event (ByteString instead of Hash160).
func Notify2() bool {
	runtime.Notify("Event", []byte{1, 2, 3})


@ -1,4 +1,4 @@
// invalid is an example of a contract which doesn't pass the event check.
package invalid2
import (
@ -6,14 +6,14 @@ import (
	"github.com/nspcc-dev/neo-go/pkg/interop/runtime"
)
// Notify1 emits a correctly typed event.
func Notify1() bool {
	runtime.Notify("Event", interop.Hash160{1, 2, 3})
	return true
}
// Notify2 emits an invalid event (extra parameter).
func Notify2() bool {
	runtime.Notify("Event", interop.Hash160{1, 2, 3}, "extra parameter")


@ -1,4 +1,4 @@
// invalid is an example of a contract which doesn't pass the event check.
package invalid3
import (
@ -6,14 +6,14 @@ import (
	"github.com/nspcc-dev/neo-go/pkg/interop/runtime"
)
// Notify1 emits a correctly typed event.
func Notify1() bool {
	runtime.Notify("Event", interop.Hash160{1, 2, 3})
	return true
}
// Notify2 emits an invalid event (missing from manifest).
func Notify2() bool {
	runtime.Notify("AnotherEvent", interop.Hash160{1, 2, 3})


@ -35,8 +35,8 @@ type (
	}
)
// newWalletV2FromFile reads a NEO2 wallet from the file.
// This should be used read-only; no operations are supported on the returned wallet.
func newWalletV2FromFile(path string) (*walletV2, error) {
	file, err := os.OpenFile(path, os.O_RDWR, os.ModeAppend)
	if err != nil {
@ -64,7 +64,7 @@ func (a *accountV2) convert(pass string, scrypt keys.ScryptParams) (*wallet.Acco
	if err != nil {
		return nil, err
	}
	// If it is a simple signature script, newAcc already has it.
	if len(script) != simpleSigLen {
		nsigs, pubs, ok := parseMultisigContract(script)
		if !ok {
@ -112,8 +112,8 @@ func getNumOfThingsFromInstr(script []byte) (int, int, bool) {
const minMultisigLen = 37
// parseMultisigContract accepts a multisig verification script from NEO2
// and returns a list of public keys in the same order as in the script.
func parseMultisigContract(script []byte) (int, keys.PublicKeys, bool) {
	// It should contain at least 1 public key.
	if len(script) < minMultisigLen {


@ -1,10 +1,10 @@
# NeoGo CLI interface
NeoGo CLI provides all functionality from one binary. It's used to run
a node, create/compile/deploy/invoke/debug smart contracts, run the VM and operate
with a wallet. The standard setup assumes that you run a node as a
separate process, and it doesn't provide any CLI of its own. Instead, it just
makes an RPC interface available for you. To perform any actions, you invoke NeoGo
as a client that connects to this RPC node and does things you want it to do
(like transferring some NEP-17 asset).
@ -40,19 +40,19 @@ detailed configuration file description.
### Starting a node
To start a Neo node on a private network, use:
```
./bin/neo-go node
```
Or specify a different network with an appropriate flag like this:
```
./bin/neo-go node --mainnet
```
By default, the node will run in the foreground, using the current standard output for
logging.
@ -78,8 +78,8 @@ signal. List of the services to be restarted on SIGHUP receiving:
### DB import/exports
The node operates using some database as a backend to store blockchain data. NeoGo
allows dumping the chain from the database into a file (when the node is stopped) or
importing blocks from a file into the database (also when the node is stopped). Use
the `db` command for that.
## Smart contracts
@ -101,7 +101,7 @@ special `-` path can be used to read the wallet from the standard input.
#### Create wallet
Use the `wallet init` command to create a new wallet:
```
./bin/neo-go wallet init -w wallet.nep6
@ -121,8 +121,8 @@ Use `wallet init` command to create new wallet:
wallet successfully created, file location is wallet.nep6
```
where "wallet.nep6" is a wallet file name. This wallet will be empty. To
generate a new key pair and add an account for it, use the `-a` option:
```
./bin/neo-go wallet init -w wallet.nep6 -a
Enter the name of the account > Name
@ -163,7 +163,7 @@ Confirm passphrase >
wallet successfully created, file location is wallet.nep6
```
or use the `wallet create` command to create a new account in an existing wallet:
```
./bin/neo-go wallet create -w wallet.nep6
Enter the name of the account > Joe Random
@ -182,7 +182,7 @@ just allows to reuse the old key on N3 network).
```
#### Check wallet contents
`wallet dump` can be used to see wallet contents in a more user-friendly way;
its output is the same NEP-6 JSON, but better formatted. You can also decrypt
keys at the same time with the `-d` option (you'll be prompted for a password):
```
@ -230,7 +230,7 @@ NMe64G6j6nkPZby26JAgpaCNrn1Ee4wW6E (simple signature contract):
```
#### Private key export
`wallet export` allows you to export a private key in NEP-2 encrypted or WIF
(unencrypted) form (`-d` flag).
```
$ ./bin/neo-go wallet export -w wallet.nep6 -d NMe64G6j6nkPZby26JAgpaCNrn1Ee4wW6E
@ -251,8 +251,8 @@ Confirm passphrase >
#### Special accounts
Multisignature accounts can be imported with `wallet import-multisig`; you'll
need all public keys and one private key to do that. Then, you can sign
transactions for this multisignature account with the imported key.
`wallet import-deployed` can be used to create wallet accounts for deployed
contracts. They can also have WIF keys associated with them (in case your
@ -294,8 +294,8 @@ OnChain: true
BlockHash: fabcd46e93b8f4e1bc5689e3e0cc59704320494f7a0265b91ae78b4d747ee93b
Success: true
```
`OnChain` is true if the transaction has been included in a block, and `Success` is true
if it has been executed successfully.
#### Committee members
`query committee` returns a list of current committee members:
@ -353,8 +353,8 @@ Key Votes Com
```
#### Voter data
`query voter` returns additional data about a NEO holder: the amount of NEO they have,
the candidate they voted for (if any) and the block number of the last transactions
involving NEO on this account:
```
$ ./bin/neo-go query voter -r http://localhost:20332 Nj91C8TxQSxW1jCE1ytFre6mg5qxTypg1Y
@ -373,7 +373,7 @@ NEP-17 commands are designed to work with any NEP-17 tokens, but NeoGo needs
some metadata for these tokens to function properly. Native NEO or GAS are
known to NeoGo by default, but other tokens are not. NeoGo can get this
metadata from the specified RPC server, but that's an additional request to
make. So, if you care about command processing delay, you can import token
metadata into the wallet with the `wallet nep17 import` command. It'll be stored
in the `extra` section of the wallet.
```
@@ -391,7 +391,7 @@ Getting balance is easy:
By default, you'll get data for all tokens for the default wallet's
address. You can select a non-default address with the `-a` flag and/or select a token
with the `--token` flag (a token hash or name can be used as the parameter).

#### Transfers
@@ -405,15 +405,15 @@ parties). For example, transferring 100 GAS looks like this:
You can omit the `--from` parameter (the default wallet's address will be used in
this case), and you can add `--gas` for an extra network fee (raising the priority
of your transaction). You can also save the transaction to a file with `--out`
instead of sending it to the network if it needs to be signed by multiple parties.

To add the optional `data` transfer parameter, specify the `data` positional
argument after all required flags. Refer to the `wallet nep17 transfer --help`
command description for details.

One `transfer` invocation creates one transaction. In case you need to do
many transfers, you can save on network fees by doing multiple token moves in
one transaction with the `wallet nep17 multitransfer` command. It can transfer
things from one account to many; its syntax differs from `transfer` in that
you don't have `--token`, `--to` and `--amount` options, but instead you can
@@ -426,7 +426,7 @@ transfer as above can be done with `multitransfer` by doing this:

#### GAS claims
While Neo N3 doesn't have any notion of a "claim transaction" and GAS is
automatically distributed with every NEO transfer for NEO owners, you still
won't get GAS if you don't perform any actions. So, the old `wallet claim` command
was updated to be an easier way to do NEO "flipping", when you send a
transaction that transfers all of your NEO to yourself, thereby triggering GAS
@@ -451,7 +451,7 @@ By default, no token ID specified, i.e. common `balanceOf` method is called.

#### Transfers
Specify the token ID via the `--id` flag to transfer an NEP-11 token. Specify the
amount to transfer a divisible NEP-11 token:
```
@@ -462,7 +462,7 @@ By default, no amount is specified, i.e. the whole token is transferred for
non-divisible tokens and 100% of the token is transferred if there is only one
owner of this token for divisible tokens.

Unlike the NEP-17 token functionality, the `multitransfer` command is currently not
supported for NEP-11 tokens.
#### Tokens Of
@@ -536,7 +536,7 @@ Some basic commands available there:
- `ops` -- show the opcodes of the currently loaded contract
- `run` -- execute the currently loaded contract

Use the `help` command to get more detailed information on all options and
particular commands. Note that this VM is completely disconnected from the
blockchain, so you won't have all the interop functionality available for smart
contracts (use test invocations via RPC for that).


@@ -1,26 +1,26 @@
# NeoGo smart contract compiler

The neo-go compiler compiles Go programs to bytecode that the NEO virtual machine can understand.

## Language compatibility

The compiler is mostly compatible with the regular Go language specification. However,
there are some important deviations you need to be aware of that make it
a dialect of Go rather than a complete port of the language:
* `new()` is not supported; most of the time you can substitute structs with composite literals
* `make()` is supported for maps and slices with elements of basic types
* `copy()` is supported only for byte slices because of the underlying `MEMCPY` opcode
* pointers are supported only for struct literals; one can't take the address
  of an arbitrary variable
* there is no real distinction between different integer types: all of them
  work like Go's big.Int with a limit of 256 bits in width, so you can use
  `int` for just about anything. This is the way integers work in Neo VM, and
  adding proper Go type emulation is considered to be too costly.
* goroutines, channels and garbage collection are not supported and never
  will be, because emulating these aspects of the Go runtime on top of Neo VM is
  close to impossible
* `defer` and `recover` are supported except for the cases where a panic occurs in
  a `return` statement, because handling this would complicate the implementation
  and impose runtime overhead on all contracts. This can easily be mitigated by
  first storing values in variables and then returning the result.
* lambdas are supported, but closures are not.
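To illustrate the `defer`/`recover` mitigation mentioned above (store the result in a variable first, then return it, so no panic happens inside the `return` statement itself), here is a sketch in plain Go. It is runnable with the regular Go toolchain; the names are illustrative and it is not a deployable contract:

```go
package main

import "fmt"

// div recovers from a division-by-zero panic. The panicking expression is
// assigned to the named return variable inside the function body, and the
// return statement only returns that variable, matching the pattern the
// compiler restriction asks for.
func div(a, b int) (result int) {
	defer func() {
		if r := recover(); r != nil {
			result = 0 // fall back to a default value on panic
		}
	}()
	result = a / b // panics when b == 0
	return result
}

func main() {
	fmt.Println(div(10, 2)) // 5
	fmt.Println(div(10, 0)) // 0, recovered
}
```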
@@ -53,8 +53,8 @@ this requires you to set proper `GOROOT` environment variable, like
export GOROOT=/usr/lib64/go/1.15
```
The best way to create a new contract is to use the `contract init` command. This will
create an example source file, a config file and a `go.mod` with the `github.com/nspcc-dev/neo-go/pkg/interop` dependency.
```
$ ./bin/neo-go contract init --name MyAwesomeContract
$ cd MyAwesomeContract
@@ -73,8 +73,8 @@ $ go mod tidy
```
By default, the filename will be the name of your .go file with the .nef
extension, and the file will be located in the same directory as your Go contract.
If you want another location for your compiled contract:
```
./bin/neo-go contract compile -i contract.go --out /Users/foo/bar/contract.nef
@@ -207,14 +207,14 @@ other supported language.

### Deploying
Deploying a contract to the blockchain with neo-go requires both the NEF file and
the JSON manifest generated by the compiler from a configuration file provided in
YAML format. To create the contract manifest, pass a YAML file with the `-c`
parameter and specify the manifest output file with `-m`:
```
./bin/neo-go contract compile -i contract.go -c config.yml -m contract.manifest.json
```
An example of such a YAML file's contents:
```
name: Contract
safemethods: []
@@ -226,14 +226,14 @@ events:
    type: String
```
Then, the manifest can be passed to the `deploy` command via the `-m` option:
```
$ ./bin/neo-go contract deploy -i contract.nef -m contract.manifest.json -r http://localhost:20331 -w wallet.json
```
Deployment works via an RPC server, the address of which is passed via the `-r`
option, and the deployment transaction should be signed using a wallet from the
`-w` option. More details can be found in the `deploy` command help.

#### Config file
@@ -271,7 +271,7 @@ anything else | `Any`
`interop.*` types are defined as aliases in the `github.com/nspcc-dev/neo-go/pkg/interop` module
with the sole purpose of correct manifest generation.

As an example, consider the `Transfer` event from the `NEP-17` standard:
```
- name: Transfer
  parameters:
@@ -285,14 +285,14 @@ As an example consider `Transfer` event from `NEP-17` standard:
By default, the compiler performs some sanity checks. Most of the time
it will report missing events and/or parameter type mismatches.

It isn't prohibited to use a variable as an event name in code, but it will prevent
the compiler from analyzing the event. It is better to use either a constant or a string literal.
The check can be disabled with the `--no-events` flag.
##### Permissions

Each permission specifies the contracts and methods allowed for that permission.
If a contract is not specified in a rule, the specified set of methods can be called on any contract.
By default, no calls are allowed. The simplest permission is to allow everything:
```
- methods: '*'
```
@@ -303,10 +303,10 @@ for most of the NEP-17 token implementations:
- methods: ["onNEP17Payment"]
```
In addition to `methods`, a permission can have one of these fields:
1. `hash` contains a hash and restricts the set of contracts to a single contract.
2. `group` contains a public key and restricts the set of contracts to those that
   have the corresponding group in their manifest.
Consider an example:
```
@@ -322,32 +322,32 @@ This set of permissions allows calling:
- `start` and `stop` methods of the contract with hash `fffdc93764dbaddd97c48f252a53ea4643faa3fd`
- the `update` method of contracts in the group with public key `03184b018d6b2bc093e535519732b3fd3f7551c8cffaf4621dd5a0b89482ca66c9`

Also note that native contracts must be included here too. For example, if your contract
transfers NEO/GAS or gets some info from the `Ledger` contract, all of these
calls must be allowed in the permissions.

The compiler does its best to ensure that correct permissions are specified in the config.
Incorrect permissions will result in runtime invocation failures.
Using either a constant or a literal for the contract hash and method will allow the compiler
to perform more extensive analysis.
This check can be disabled with the `--no-permissions` flag.
##### Overloads

NeoVM allows a contract to have multiple methods with the same name but a
different number of parameters. Go lacks this feature, but it can be circumvented
with the `overloads` section. Essentially, it is a mapping from the default
contract method names to the new ones.
```
- overloads:
    oldName1: newName
    oldName2: newName
```
Since the use case for this is to provide multiple implementations with the same ABI name,
`newName` is required to be already present in the compiled contract.

As an example, consider the [`NEP-11` standard](https://github.com/neo-project/proposals/blob/master/nep-11.mediawiki#transfer).
It requires a divisible NFT contract to have 2 `transfer` methods. To achieve this, we might implement
`Transfer` and `TransferDivisible` and specify the emitted name in the config:
```
- overloads:
    transferDivisible: transfer
@@ -361,15 +361,15 @@ This is achieved with `manifest add-group` command.
./bin/neo-go contract manifest add-group -n contract.nef -m contract.manifest.json --sender <sender> --wallet /path/to/wallet.json --account <account>
```
It accepts the contract `.nef` and manifest files emitted by the `compile` command,
as well as the sender and signer accounts. `--sender` is the account that will send
the deploy transaction later (not necessarily one from the wallet). `--account` is
the wallet account that signs the contract hash using the group private key.
#### Neo Express support

It's possible to deploy contracts written in Go using [Neo
Express](https://github.com/neo-project/neo-express), which is a part of the [Neo
Blockchain
Toolkit](https://github.com/neo-project/neo-blockchain-toolkit/). To do that,
you need to generate a different metadata file using YAML written for
deployment with neo-go. It's done in the same step as compilation via the
`--config` input parameter and the `--abi` output parameter, combined with debug
@@ -380,11 +380,11 @@ $ ./bin/neo-go contract compile -i contract.go --config contract.yml -o contract
```
This file can then be used by the toolkit to deploy the contract the same way
contracts in other languages are deployed.

### Invoking
You can import your contract into a standalone VM and run it there (see the [VM
documentation](vm.md) for more info), but that only works for simple contracts
that don't use the blockchain a lot. For real contracts you need to deploy
them first and then do test invocations and regular invocations with `contract


@@ -1,14 +1,14 @@
# NeoGo consensus node

A NeoGo node can act as a consensus node. A consensus node differs from a regular
one in that it participates in the block acceptance process using the dBFT
protocol. Any committee node can also be elected as a CN; therefore, committee
nodes are expected to follow the same setup process as CNs (to be ready to become
CNs if/when they're elected).

While regular nodes on Neo networks don't need any special keys, CNs always have
one used to sign dBFT messages and blocks. So, the main difference between a
regular node and a consensus/committee node is that the latter should be
configured to use a key from some wallet.
## Running a CN on public networks
@@ -27,7 +27,7 @@ be enough for the first year of blockchain).
NeoGo is a single binary that can be run on any modern GNU/Linux
distribution. We recommend using major well-supported OSes like CentOS, Debian
or Ubuntu; make sure they're updated with the latest security patches.

No additional packages are needed for a NeoGo CN.
@@ -38,9 +38,9 @@ Github](https://github.com/nspcc-dev/neo-go/releases) or use [Docker
image](https://hub.docker.com/r/nspccdev/neo-go). It has everything included;
no additional plugins are needed.

Take an appropriate (mainnet/testnet) configuration [from the
repository](https://github.com/nspcc-dev/neo-go/tree/master/config) and save
it in some directory (we'll assume that it's available in the same directory as
the neo-go binary).
### Configuration and execution
@@ -59,24 +59,24 @@ is a password to your CN key. Run the node in a regular way after that:
$ neo-go node --mainnet --config-path ./
```
where `--mainnet` is your network (can be `--testnet` for testnet) and
`--config-path` is the path to the configuration file directory. If the node starts
fine, it'll be logging events like synchronized blocks. The node doesn't have
any interactive CLI, it only outputs logs, so you can wrap this command in a
systemd service file to run it automatically on system startup.
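As a rough sketch, such a systemd unit might look like this (the binary path, config path and user name are assumptions to adjust for your setup):

```
[Unit]
Description=NeoGo mainnet node
After=network.target

[Service]
# Example locations; use your actual binary and config directory paths.
ExecStart=/usr/local/bin/neo-go node --mainnet --config-path /etc/neo-go
Restart=always
User=neo-go

[Install]
WantedBy=multi-user.target
```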
Notice that the default configuration has RPC and Prometheus services enabled.
You can turn them off for security purposes or restrict access to them with a
firewall. Carefully review all other configuration options to see if they meet
your expectations. Details on various configuration options are provided in the
[node configuration documentation](node-configuration.md); CLI commands are
described in the [CLI documentation](cli.md).
### Registration

To register as a candidate, use neo-go as a CLI command with an external RPC
server for it to connect to (for chain data and transaction submission). You
can use any public RPC server or an RPC server of your own, like the node
started at the previous step. We'll assume that you run the next command on
the same node in the default configuration with the RPC interface available at
port 10332.
@@ -91,15 +91,15 @@ path to NEP-6 wallet file and `http://localhost:10332` is an RPC node to
use.

This command will create and send an appropriate transaction to the network, and
you should then wait for it to settle in a block. If all goes well, it'll end in
the "HALT" state and your registration will be completed. You can use the
`query tx` command to see the transaction status or `query candidates` to see if
your candidate has been added.
### Voting

After registration is completed, if you own some NEO, you can also vote for your
candidate to help it become a CN and receive additional voter GAS. To do that,
you need to know the public key of your candidate, which can either be seen in
the `query candidates` command output or extracted from the wallet with the
`wallet dump-keys` command:
@@ -117,10 +117,10 @@ use:
$ neo-go wallet candidate vote -a NiKEkwz6i9q6gqfCizztDoHQh9r9BtdCNa -w wallet.json -r http://localhost:10332 -c 0363f6678ea4c59e292175c67e2b75c9ba7bb73e47cd97cdf5abaf45de157133f5
```
where `NiKEkwz6i9q6gqfCizztDoHQh9r9BtdCNa` is the voter's address, `wallet.json`
is the NEP-6 wallet file path, `http://localhost:10332` is the RPC node address and
`0363f6678ea4c59e292175c67e2b75c9ba7bb73e47cd97cdf5abaf45de157133f5` is the
public key the voter votes for. This command also returns a transaction hash, and
you need to wait for this transaction to be accepted into one of the subsequent blocks.
## Private NeoGo network
@@ -135,11 +135,11 @@ Four-node setup uses ports 20333-20336 for P2P communication and ports
20001-20004). The single-node setup is on ports 20333/30333/20001 for
P2P/RPC/Prometheus.

The NeoGo default privnet configuration is made to work with four-node consensus;
you have to modify it if you want to use a single consensus node.

Node wallets are located in the `.docker/wallets` directory, where
`wallet1_solo.json` is used for the single-node setup and all the others for the
four-node setup.
#### Prerequisites
@@ -148,7 +148,7 @@ four-node setup.
- `go` compiler

#### Instructions
You can use an existing docker-compose file located in `.docker/docker-compose.yml`:
```bash
make env_image # build image
make env_up    # start containers, use "make env_single" for single CN
@@ -170,12 +170,12 @@ make env_clean

### Start nodes manually
1. Create a separate config directory for every node and
   place the corresponding config named `protocol.privnet.yml` there.
2. Edit the configuration file for every node.
   Examples can be found at `config/protocol.privnet.docker.one.yml` (`two`, `three`, etc.).
   1. Add an `UnlockWallet` section with `Path` and `Password` strings for the NEP-6
      wallet path and the password for the account to be used for the consensus node.
   2. Make sure that your `MinPeers` setting is equal to
      the number of nodes participating in consensus.
      This requirement is needed for nodes to correctly


@@ -1,9 +1,9 @@
# Conventions

This document lists the conventions that this repo should follow. These are
guidelines, and if you believe that one should not be followed, please state
why in your PR. If you believe that a piece of code does not follow one of the
conventions listed, please open an issue before making any changes.

When submitting a new convention, please open an issue for discussion; if
possible, please highlight the parts in the code where this convention could help the


@@ -1,7 +1,7 @@
# NeoGo node configuration file

This section contains a detailed NeoGo node configuration file description,
including default config values and some tips for setting up configurable values.

Each config file contains two sections: `ApplicationConfiguration` describes node-related
settings and `ProtocolConfiguration` contains protocol-related settings. See the
node-related settings described in the table below.
| Section | Type | Default value | Description |
| --- | --- | --- | --- |
| Address | `string` | `127.0.0.1` | Node address that P2P protocol handler binds to. |
| AnnouncedPort | `uint16` | Same as `NodePort` | Node port which should be used to announce node's port on P2P layer, it can differ from the `NodePort` the node is bound to (for example, if your node is behind NAT). |
| AttemptConnPeers | `int` | `20` | Number of connections to try to establish when the connection count drops below the `MinPeers` value. |
| DBConfiguration | [DB Configuration](#DB-Configuration) | | Describes configuration for database. See the [DB Configuration](#DB-Configuration) section for details. |
| DialTimeout | `int64` | `0` | Maximum duration a single dial may take in seconds. |
| ExtensiblePoolSize | `int` | `20` | Maximum amount of the extensible payloads from a single sender stored in a local pool. |
| LogPath | `string` | "", so only console logging | File path where to store node logs. |
| MaxPeers | `int` | `100` | Maximum number of peers that can be connected to the server. |
| MinPeers | `int` | `5` | Minimum number of peers for normal operation; when the node has fewer than this number of peers it tries to connect with some new ones. |
| NodePort | `uint16` | `0`, which is any free port | The actual node port it is bound to. |
| Oracle | [Oracle Configuration](#Oracle-Configuration) | | Oracle module configuration. See the [Oracle Configuration](#Oracle-Configuration) section for details. |
| P2PNotary | [P2P Notary Configuration](#P2P-Notary-Configuration) | | P2P Notary module configuration. See the [P2P Notary Configuration](#P2P-Notary-Configuration) section for details. |
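
For illustration, several of the settings above might combine into an `ApplicationConfiguration` fragment like this (the values are examples, not recommendations):

```yaml
ApplicationConfiguration:
  Address: 127.0.0.1
  NodePort: 20333        # bind to a fixed port instead of a random free one
  AnnouncedPort: 20333   # the port announced to peers; may differ behind NAT
  MinPeers: 5
  MaxPeers: 100
  AttemptConnPeers: 20
  DialTimeout: 10        # seconds
  LogPath: ""            # empty means console logging only
```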
KeyFile: serv.key
```
where:
- `Enabled` denotes whether an RPC server should be started.
- `Address` is an address the RPC server should listen on.
- `EnableCORSWorkaround` enables Cross-Origin Resource Sharing and is useful if
you're accessing the RPC interface from a browser.
protocol-related settings described in the table below.
| Section | Type | Default value | Description | Notes |
| --- | --- | --- | --- | --- |
| CommitteeHistory | map[uint32]int | none | Number of committee members after the given height, for example `{0: 1, 20: 4}` sets up a chain with one committee member since the genesis and then changes the setting to 4 committee members at the height of 20. The `StandbyCommittee` setting must have the number of keys equal to or exceeding the highest value in this option. Block numbers where the change happens must be divisible by the old and by the new values simultaneously. If not set, committee size is derived from the `StandbyCommittee` setting and never changes. |
| GarbageCollectionPeriod | `uint32` | 10000 | Controls MPT garbage collection interval (in blocks) for configurations with `RemoveUntraceableBlocks` enabled and `KeepOnlyLatestState` disabled. In this mode the node stores a number of MPT trees (corresponding to `MaxTraceableBlocks` and `StateSyncInterval`), but the DB needs to be cleaned from old entries from time to time. Doing it too often will cause too much processing overhead, doing it too rarely will leave more useless data in the DB. |
| KeepOnlyLatestState | `bool` | `false` | Specifies if MPT should only store the latest state. If true, DB size will be smaller, but older roots won't be accessible. This value should remain the same for the same database. | Conflicts with `P2PStateExchangeExtensions`. |
| StateSyncInterval | `int` | `40000` | The number of blocks between state heights available for MPT state data synchronization. | `P2PStateExchangeExtensions` should be enabled to use this setting. |
| ValidatorsCount | `int` | `0` | Number of validators set for the whole network lifetime, can't be set if `ValidatorsHistory` setting is used. |
| ValidatorsHistory | map[uint32]int | none | Number of consensus nodes to use after the given height (see `CommitteeHistory` also). Heights where the change occurs must be divisible by the number of committee members at that height. Can't be used with `ValidatorsCount` not equal to zero. |
| VerifyBlocks | `bool` | `false` | Denotes whether to verify the received blocks. |
| VerifyTransactions | `bool` | `false` | Denotes whether to verify transactions in the received blocks. |
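
To illustrate the history settings above, a `ProtocolConfiguration` fragment might look like this (a sketch with placeholder keys, not a working configuration):

```yaml
ProtocolConfiguration:
  StandbyCommittee:   # at least 4 keys, matching the maximum value below
    - <pubkey1>
    - <pubkey2>
    - <pubkey3>
    - <pubkey4>
  CommitteeHistory:
    0: 1     # one committee member since the genesis
    20: 4    # 4 members from height 20 (20 is divisible by both 1 and 4)
  ValidatorsHistory:
    0: 1
    20: 4
```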

The original problem definition:
> any interaction, which is the case for oracle nodes or NeoFS inner ring nodes.
>
> As some of the services using this mechanism can be quite sensitive to the
> latency of their request processing, it should be possible to construct a complete
> transaction within the time frame between two consecutive blocks.
doing the actual work. It uses generic `Conflicts` and `NotValidBefore`
transaction attributes for its purposes as well as an additional special one
(`Notary assisted`).

A new designated role is added, `P2PNotary`. It can have an arbitrary number of
keys associated with it.

To use the service, one should pay some GAS, so below we operate with `FEE` as a unit of cost
for this service. `FEE` is set to be 0.1 GAS.

We'll also use `NKeys` as the number of keys that participate in the
witnesses that's K+N*L.
#### Conflicts

This attribute makes the chain accept only one of the two conflicting transactions
and adds the ability to give a priority to one of the two if needed. This
attribute was originally proposed in
[neo-project/neo#1991](https://github.com/neo-project/neo/issues/1991).

The attribute has Uint256 data inside containing the hash of the conflicting
transaction. It is allowed to have multiple attributes of this type.
#### NotValidBefore

The attribute has uint32 data inside which is the block height starting from
which the transaction is considered to be valid. It can be seen as the opposite
of `ValidUntilBlock`. Using both allows having a window of valid block numbers
that this transaction could be accepted into. Transactions with this attribute
are not accepted into the mempool before the specified block is persisted.

It can be used to create some transactions in advance with a guarantee that they
won't be accepted until the specified block.
#### NotaryAssisted

This attribute holds one byte containing the number of transactions collected
by the service. It could be 0 for a fallback transaction or `NKeys` for a normal
transaction that completed its P2P signature collection. Transactions using this
attribute need to pay an additional network fee of (`NKeys`+1)×`FEE`. This attribute
can only be used by transactions signed by the notary native contract.
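
As a sketch of the arithmetic above (assuming `FEE` of 0.1 GAS expressed in 8-decimal GAS fractions; the helper name is illustrative, not a NeoGo API):

```go
package main

import "fmt"

// notaryFee is 0.1 GAS in its smallest units (GAS has 8 decimals).
const notaryFee = 10_000_000

// notaryAssistedFee returns the extra network fee a transaction with the
// NotaryAssisted attribute has to pay: (NKeys+1)×FEE.
func notaryAssistedFee(nKeys int64) int64 {
	return (nKeys + 1) * notaryFee
}

func main() {
	fmt.Println(notaryAssistedFee(0)) // fallback transaction: 1×FEE = 10000000
	fmt.Println(notaryAssistedFee(3)) // main transaction with 3 keys: 4×FEE = 40000000
}
```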
### Native Notary contract

This payload has two incomplete transactions inside:
than the current chain height and it must have `Conflicts` attribute with the
hash of the main transaction. At the same time it must have `Notary assisted`
attribute with a count of zero.
- *Main tx*. This is the one that actually needs to be completed; it:
1. *either* doesn't have all witnesses attached
2. *or* only has a partial multisignature
3. *or* doesn't have all witnesses attached and some of the present ones are partial multisignatures

This transaction must have `Notary assisted` attribute with a count of `NKeys`
construct and send the payload.
Node module with the designated key monitors the network for `P2PNotaryRequest`
payloads. It maintains a list of current requests grouped by main transaction
hash. When it receives enough requests to correctly construct all transaction
witnesses, it does so, adds a witness of its own (for Notary contract witness) and
sends the resulting transaction to the network.

If the main transaction with all witnesses attached still can't be validated
due to fee (or other) issues, the node waits for the `NotValidBefore` block of
the fallback transaction to be persisted.

If the `NotValidBefore` block is persisted and there are still some signatures
missing (or the resulting transaction is invalid), the module sends all the
associated fallback transactions for the main transaction.

After processing, the service request is deleted from the module.

See the [NeoGo P2P signature extensions](#NeoGo P2P signature extensions) on how
to enable notary-related extensions on chain and
set up Notary service node.
## Environment setup

To run P2P signature collection service on your network, you need to do the following:
* Set up [`P2PSigExtensions`](#NeoGo P2P signature extensions) for all nodes in
the network.
* Set notary node keys in `RoleManagement` native contract.
notary requests to the network.
### NeoGo P2P signature extensions

Since Notary service is an extension of the standard NeoGo node, it should be
enabled and properly configured before usage.
#### Configuration

Notary contract and designate `P2PNotary` node role in RoleManagement native
contract.

If you use custom `NativeActivations` subsection of the `ProtocolConfiguration`
section in your node config, specify the height of the Notary contract
activation, e.g. `0`.
Note that even if the `P2PSigExtensions` config subsection enables notary-related

To enable notary service node functionality, refer to the
### NeoGo Notary service node module

NeoGo node can act as notary service node (the node that accumulates notary
requests, collects signatures and releases fully-signed transactions). It must
have a wallet with a key belonging to one of the network's designated notary nodes
(stored in `RoleManagement` native contract). Also, the node must be connected to
a network with enabled P2P signature extensions, otherwise problems with states
and peer disconnections will occur.

Notary service node doesn't need [RPC service](rpc.md) to be enabled because it
receives notary requests and broadcasts completed transactions via P2P protocol.
However, enabling [RPC service](rpc.md) allows sending notary requests directly
to the notary service node and avoids P2P communication delays.
Below are all the stages each P2P signature collection request goes through. Use
stages 1 and 2 to create, sign and submit a P2P notary request. Stage 3 is
performed by the notary service; it does not require the user's intervention and is given
for informational purposes. Stage 4 contains advice on checking notary request
results.
sender's deposit to the Notary native contract is used. Before the notary request is
submitted, you need to deposit enough GAS to the contract, otherwise the request
won't pass verification.

Notary native contract supports `onNEP17Payment` method. Thus, to deposit funds to
the Notary native contract, transfer the desired amount of GAS to the contract
address. Use
[func (*Client) TransferNEP17](https://pkg.go.dev/github.com/nspcc-dev/neo-go@v0.97.2/pkg/rpc/client#Client.TransferNEP17)
with the `data` parameter matching the following requirements:
- `data` should be an array of two elements: `to` and `till`.
- `to` denotes the receiver of the deposit. It can be nil if `to` equals
the GAS sender.
- `till` denotes the chain's height before which the deposit is locked and can't be
withdrawn. `till` can't be set if you're not the deposit owner. Default `till`
value is the current chain height + 5760. `till` can't be less than the current chain
height. `till` can't be less than the currently set `till` value for that deposit if
the deposit already exists.

Note that the first deposit call for the `to` address can't transfer less than 2×`FEE` GAS.
Deposit is allowed for renewal, i.e. subsequent `deposit` calls for the same `to`
address add the specified amount to the already deposited value.

After the GAS transfer is successfully submitted to the chain, use [Notary native
contract API](#Native Notary contract) to manage your deposit.
Note that the regular operation flow requires the deposited amount of GAS to be
sufficient to pay for *all* fallback transactions that are currently submitted (all
in-flight notary requests). The default deposit sum for one fallback transaction
should be enough to pay the fallback transaction fees, which are the system fee and
the network fee. Fallback network fee includes (`NKeys`+1)×`FEE` = (0+1)×`FEE` = `FEE`
GAS for `NotaryAssisted` attribute usage and a regular fee for the fallback size.
If you need to submit several notary requests, ensure that the deposited amount is
enough to pay for all fallbacks. If the deposited amount is not enough to pay the
fallback fees, an `Insufficient funds` error will be returned from the RPC node
after notary request submission.
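
A rough sufficiency estimate under the rules above might be sketched like this (an illustrative helper, not a NeoGo API; fees are in 8-decimal GAS fractions):

```go
package main

import "fmt"

const fee = 10_000_000 // 0.1 GAS in the smallest GAS units

// requiredDeposit estimates the deposit needed to cover the given in-flight
// fallback transactions, each paying its system fee, its size-based network
// fee and the fixed (0+1)×FEE NotaryAssisted surcharge.
func requiredDeposit(fallbacks []struct{ SystemFee, SizeFee int64 }) int64 {
	var sum int64
	for _, f := range fallbacks {
		sum += f.SystemFee + f.SizeFee + fee
	}
	return sum
}

func main() {
	// Two in-flight notary requests with identical (made-up) fallback fees.
	fallbacks := []struct{ SystemFee, SizeFee int64 }{
		{SystemFee: 1_000_000, SizeFee: 500_000},
		{SystemFee: 1_000_000, SizeFee: 500_000},
	}
	fmt.Println(requiredDeposit(fallbacks)) // 23000000
}
```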
### 2. Request submission

Once several parties want to sign one transaction, each of them should generate
the transaction, wrap it into a `P2PNotaryRequest` payload and send it to the known RPC
server via [`submitnotaryrequest` RPC call](./rpc.md#submitnotaryrequest-call).
Note that all parties must generate the same main transaction while fallbacks
can differ.

To create a notary request, you can use [NeoGo RPC client](./rpc.md#Client). Follow
the steps to create a signature request:
1. Prepare a list of signers with scopes for the main transaction (i.e. the
transaction that signatures are being collected for; that will be the `Signers`
transaction field). Use the following rules to construct the list:
* The first signer is the one who pays the transaction fees.
* Each signer is either a multisignature or a standard signature or a contract
signer.
* Multisignature and signature signers can be combined.
* Contract signer can be combined with any other signer.

Include the Notary native contract in the list of signers with the following
constraints:
* Notary signer hash is the hash of the native Notary contract that can be fetched
from
[func (*Client) GetNativeContractHash](https://pkg.go.dev/github.com/nspcc-dev/neo-go@v0.97.2/pkg/rpc/client#Client.GetNativeContractHash).
* A notary signer must have `None` scope.
* A notary signer shouldn't be placed at the beginning of the signer list
because the Notary contract does not pay main transaction fees. Other positions
in the signer list are available for the Notary signer.
2. Construct a script for the main transaction (that will be the `Script` transaction
field) and calculate system fee using regular rules (that will be the `SystemFee`
transaction field). Probably, you'll perform one of these actions:
1. If the script is a contract method call, use `invokefunction` RPC API
[func (*Client) InvokeFunction](https://pkg.go.dev/github.com/nspcc-dev/neo-go@v0.97.2/pkg/rpc/client#Client.InvokeFunction)
and fetch the script and the gas consumed from the result.
2. If the script is more complicated than just a contract method call,
construct the script manually and use `invokescript` RPC API
[func (*Client) InvokeScript](https://pkg.go.dev/github.com/nspcc-dev/neo-go@v0.97.2/pkg/rpc/client#Client.InvokeScript)
to fetch the gas consumed from the result.
3. Or just construct the script and set system fee manually.
3. Calculate the height the main transaction is valid until (that will be the
`ValidUntilBlock` transaction field). Consider the following rules for `VUB`
value estimation:
* `VUB` value must not be lower than the current chain height.
* The whole notary request (including fallback transaction) is valid until
the same `VUB` height.
* `VUB` value must be lower than the notary deposit expiration height. This
condition guarantees that the deposit won't be withdrawn before notary
service payment.
* All parties must provide the same `VUB` for the main transaction.
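
The `VUB` constraints above can be condensed into a small check (an illustrative sketch, not part of the NeoGo client API):

```go
package main

import "fmt"

// validVUB reports whether a proposed ValidUntilBlock satisfies the rules:
// not below the current chain height and strictly below the height at which
// the notary deposit expires.
func validVUB(vub, currentHeight, depositExpiry uint32) bool {
	return vub >= currentHeight && vub < depositExpiry
}

func main() {
	fmt.Println(validVUB(1200, 1000, 6760)) // true: within the allowed window
	fmt.Println(validVUB(900, 1000, 6760))  // false: below the current height
}
```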
4. Construct the list of main transaction attributes (that will be the `Attributes`
transaction field). The list must include `NotaryAssisted` attribute with
`NKeys` equal to the overall number of keys to be collected excluding notary and
other contract-based witnesses. For an m out of n multisignature request,
`NKeys = n`. For multiple standard signature signers, `NKeys` equals
the standard signature signers count.
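
The `NKeys` rule could be sketched as follows (hypothetical signer descriptions, not NeoGo types):

```go
package main

import "fmt"

// signer is a simplified stand-in for a transaction signer: either a plain
// signature account, an m-out-of-n multisignature account (N > 0), or a
// contract-based signer (which contributes no keys).
type signer struct {
	N        int  // n for an m-out-of-n multisignature, 0 otherwise
	Contract bool // contract-based witness, excluded from NKeys
}

// nKeys sums the keys to be collected: n per multisignature signer, 1 per
// plain signature signer, skipping contract signers (and the notary
// placeholder, which is contract-based).
func nKeys(signers []signer) int {
	total := 0
	for _, s := range signers {
		switch {
		case s.Contract:
			// contract-based witnesses don't add keys
		case s.N > 0:
			total += s.N
		default:
			total++
		}
	}
	return total
}

func main() {
	// One 2-out-of-3 multisig signer plus two plain signature signers.
	fmt.Println(nKeys([]signer{{N: 3}, {}, {}})) // 5
}
```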
5. Construct a list of accounts (`wallet.Account` structure from the `wallet`
package) to calculate network fee for the transaction
using the following rules. This list will be used in the next step.
- The number and the order of the accounts should match the transaction signers
constructed at step 1.
- An account for a contract signer should have the `Contract` field with `Deployed` set
to `true` if the corresponding contract is deployed on chain.
- An account for a signature or a multisignature signer should have the `Contract` field
with `Deployed` set to `false` and `Script` set to the signer's verification
script.
- An account for a notary signer is **just a placeholder** and should have
the `Contract` field with `Deployed` set to `false`, i.e. the default value for
the `Contract` field. That's needed to skip notary verification during regular
network fee calculation at the next step.
7. Calculate the network fee for the transaction (that will be the `NetworkFee`
transaction field). The network fee consists of several parts:
- *Notary network fee.* That's the amount of GAS to be paid for
`NotaryAssisted` attribute usage and for notary contract witness
verification (which is to be added by the notary node at the end of the
signature collection process). Use
[func (*Client) CalculateNotaryFee](https://pkg.go.dev/github.com/nspcc-dev/neo-go@v0.97.2/pkg/rpc/client#Client.CalculateNotaryFee)
to calculate the notary network fee. Use `NKeys` estimated at step 4 as an
argument.
- *Regular network fee.* That's the amount of GAS to be paid for other witnesses
verification. Use
[func (*Client) AddNetworkFee](https://pkg.go.dev/github.com/nspcc-dev/neo-go@v0.97.2/pkg/rpc/client#Client.AddNetworkFee)
to calculate the regular network fee and add it to the transaction. Use the
partially-filled main transaction from the previous steps as the `tx` argument.
Use the notary network fee calculated at the previous substep as the `extraFee`
argument. Use the list of accounts constructed at step 5 as the `accs`
argument.
8. Fill in the main transaction `Nonce` field.
9. Construct a list of main transaction witnesses (that will be the `Scripts`
transaction field). Use the following rules:
- A contract-based witness should have an `Invocation` script that pushes arguments
on the stack (it may be empty) and an empty `Verification` script. If multiple notary
requests provide different `Invocation` scripts, the first one will be used
to construct the contract-based witness.
- A **notary contract witness** (which is also a contract-based witness) should
have an empty `Verification` script. The `Invocation` script should be of the form
[opcode.PUSHDATA1, 64, make([]byte, 64)...], i.e. a placeholder for
the notary contract signature.
- A standard signature witness must have the regular `Verification` script filled
even if the `Invocation` script is to be collected from other notary
requests.
The `Invocation` script should either push signature bytes on the stack **or** (in
case the signature is to be collected) **be empty**.
- A multisignature witness must have the regular `Verification` script filled even
if the `Invocation` script is to be collected from other notary requests.
The `Invocation` script should either push signature bytes on the stack (one
signature at max per request) **or** (in case there's no ability to
provide a proper signature) **be empty**.
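The notary witness placeholder above is easy to build by hand. In neo-go the opcode constant lives in `pkg/vm/opcode` (`opcode.PUSHDATA1`, value `0x0C` in the Neo N3 VM); the sketch below inlines that value so it stays self-contained:

```go
package main

import "fmt"

// pushData1 is the Neo N3 VM PUSHDATA1 opcode (opcode.PUSHDATA1 in neo-go).
const pushData1 = 0x0C

// notaryWitnessPlaceholder builds the Invocation script placeholder for the
// notary contract witness: PUSHDATA1, length 64, then 64 zero bytes that the
// notary node later replaces with its signature.
func notaryWitnessPlaceholder() []byte {
	return append([]byte{pushData1, 64}, make([]byte, 64)...)
}

func main() {
	inv := notaryWitnessPlaceholder()
	fmt.Println(len(inv)) // opcode byte + length byte + 64 zero bytes = 66
}
```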
10. Define the lifetime for the fallback transaction. Let `fallbackValidFor` be
the lifetime. Let `N` be the current chain height and `VUB` be the
`ValidUntilBlock` value estimated at step 3. Then the notary node tries to
collect signatures for the main transaction from `N` up to
`VUB-fallbackValidFor`. If it has failed by the time the `VUB-fallbackValidFor`-th
block is accepted, the notary node abandons attempts to complete the main transaction and
tries to push all associated fallbacks. Use the following rules to define
`fallbackValidFor`:
- `fallbackValidFor` shouldn't be more than the `MaxNotValidBeforeDelta` value.
- Use [func (*Client) GetMaxNotValidBeforeDelta](https://pkg.go.dev/github.com/nspcc-dev/neo-go@v0.97.2/pkg/rpc/client#Client.GetMaxNotValidBeforeDelta)
to check the `MaxNotValidBeforeDelta` value.
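The signature-collection window described above comes down to simple height arithmetic. A hypothetical helper (not a neo-go API) that also applies the `MaxNotValidBeforeDelta` cap:

```go
package main

import "fmt"

// collectionDeadline returns the last height at which the notary node still
// tries to complete the main transaction; after that height is reached, it
// pushes the associated fallbacks. fallbackValidFor is clamped to
// maxNotValidBeforeDelta per the rule above.
func collectionDeadline(vub, fallbackValidFor, maxNotValidBeforeDelta uint32) uint32 {
	if fallbackValidFor > maxNotValidBeforeDelta {
		fallbackValidFor = maxNotValidBeforeDelta
	}
	return vub - fallbackValidFor
}

func main() {
	// With VUB=1000 and fallbackValidFor=20, signatures are collected up to
	// height 980; fallbacks become relevant afterwards.
	fmt.Println(collectionDeadline(1000, 20, 140))
}
```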
11. Construct a script for the fallback transaction. The script may do something useful,
e.g. invoke a method of a contract. However, if you don't need to perform anything
special on fallback invocation, you can use a simple `opcode.RET` script.
12. Sign and submit the P2P notary request. Use
[func (*Client) SignAndPushP2PNotaryRequest](https://pkg.go.dev/github.com/nspcc-dev/neo-go@v0.97.2/pkg/rpc/client#Client.SignAndPushP2PNotaryRequest) for it.
- Use the signed main transaction from step 8 as the `mainTx` argument.
- Use the fallback script from step 11 as the `fallbackScript` argument.
- Use `-1` as the `fallbackSysFee` argument to define the system fee by test
invocation, or provide a custom value.
- Use `0` as the `fallbackNetFee` argument not to add extra network fee to the
fallback.
- Use the `fallbackValidFor` estimated at step 10 as the `fallbackValidFor` argument.
- Use the account you'd like to send the request (and fallback transaction) from
to sign the request (and fallback transaction).
`SignAndPushP2PNotaryRequest` will construct and sign a fallback transaction,
construct and sign a P2PNotaryRequest and submit it to the RPC node. The
resulting notary request and an error are returned.
After P2PNotaryRequests are sent, participants should wait for one of their
transactions (main or fallback) to get accepted into one of the subsequent blocks.
### 3. Signatures collection and transaction release
A valid P2PNotaryRequest payload is distributed via the P2P network using standard
broadcasting mechanisms until it reaches the designated notary nodes that have the
respective node module active. They collect all payloads for the same main
transaction until enough signatures are collected to create proper witnesses for
it. Then they attach all required witnesses, send this transaction as usual
and monitor subsequent blocks for its inclusion.
All the operations leading to successful transaction creation are independent
of the chain and can easily be done within one block interval, so if the
first service request is sent at the current height `H`, the main transaction
is highly likely to be a part of the `H+1` block.
### 4. Results monitoring
Once the P2PNotaryRequest reaches an RPC node, it is added to the notary request pool.
Completed or outdated requests are removed from the pool. Use the
[NeoGo notification subsystem](./notifications.md) to track request addition and
removal:
- Use the RPC `subscribe` method with the `notary_request_event` stream name parameter to
subscribe to `P2PNotaryRequest` payloads that are added to or removed from the
notary request pool.
- Use `sender` or `signer` filters to filter out notary requests with the desired
request senders or main tx signers.
Use the notification subsystem to track that the main or the fallback transaction
is accepted to the chain:
- Use the RPC `subscribe` method with the `transaction_added` stream name parameter to
subscribe to transactions that are accepted to the chain.
- Use the `sender` filter with the Notary native contract hash to filter out fallback
transactions sent by the Notary node. Use the `signer` filter with the notary request
sender address to filter out the fallback transactions sent by the specified
sender.
- Use `sender` or `signer` filters to filter out the main transaction with the desired
sender or signers. You can also filter out the main transaction using the Notary
contract `signer` filter.
- Don't rely on the `sender` and `signer` filters only; also check that the received
transaction has the `NotaryAssisted` attribute with the expected `NKeys` value.
Use the notification subsystem to track main or fallback transaction execution
results.
Several use-cases where the Notary subsystem can be applied are described below.
### Committee-signed transactions
The signature collection problem occurs every time committee participants need
to submit a transaction with an `m out of n` multisignature, e.g.:
- transfer the initial supply of NEO and GAS from a committee multisignature account to
other addresses on new chain start
- tune valuable chain parameters like gas per block, candidate register price,
minimum contract deployment fee, Oracle request price, native Policy values, etc.
- invoke non-native contract methods that require a committee multisignature witness
The current solution supposes off-chain non-P2P signature collection (either manual
or using some additional network connectivity). It has an obvious downside of
reliance on something external to the network. If it's manual, it's slow and
error-prone; if it's automated, it requires an additional protocol for all the
parties involved. For the protocol used by oracle nodes, it also means
explicitly exposing nodes to each other.
With the Notary service, all signature collection logic is unified and is on chain already.
The only thing committee participants should do is create and submit
a P2P notary request (which can be done independently). Once a sufficient number of signatures
is collected by the service, the desired transaction will be applied and pass committee
witness verification.
### NeoFS Inner Ring nodes
Signature collection by the Alphabet nodes of the Inner Ring is a particular case of committee-signed
transactions. The Alphabet nodes multisignature is used for various cases, such as:
- main chain and side chain funds synchronization and withdrawal
- bootstrapping new storage nodes to the network
- network map management and epoch update
A non-notary on-chain solution for Alphabet nodes multisignature forming is
imitated via contracts collecting invocations of their methods signed by a standard
signature of each Alphabet node. Once a sufficient number of invocations is
collected, the invocation is performed.
The described solution has several drawbacks:
be duplicated) because we can't create transactions from transactions (thus
using a proper multisignature account is not possible)
- for `m out of n` multisignature we need at least `m` transactions instead of
the one we really wanted to have; in reality we'll create and process `n` of
them, so this adds substantial overhead to the chain
- some GAS is inevitably wasted because any invocation could either go the easy
path (just adding a signature to the list) or really invoke the function we
The Notary on-chain Alphabet multisignature collection solution
[uses the Notary subsystem](https://github.com/nspcc-dev/neofs-node/pull/404) to
successfully solve these problems, e.g. to precisely calculate the amount of GAS to
pay for a contract invocation witnessed by Alphabet nodes (see
[nspcc-dev/neofs-node#47](https://github.com/nspcc-dev/neofs-node/issues/47)),
to reduce container creation delay
etc.
### Contract-sponsored (free) transactions
The original problem and solution are described in the
[neo-project/neo#2577](https://github.com/neo-project/neo/issues/2577) discussion.
Filters use conjunctional logic.
announcing the block itself
* transaction notifications are only announced for successful transactions
* all announcements are made in the same order they happen on the chain.
First, transaction execution is announced, followed by notifications
generated during this execution, followed by the transaction announcement.
Transaction announcements are ordered the same way they're in the block.
* unsubscription may not cancel pending, but not yet sent events
## Subscription management
To receive events, clients need to subscribe to them first via the `subscribe`
method. Upon successful subscription, clients receive a subscription ID for
subsequent management of this subscription. A subscription is only valid for
the connection lifetime; no long-term client identification is being made.
Recognized stream names:
Filter: `primary` as an integer with the primary (speaker) node index from
ConsensusData.
* `transaction_added`
Filter: `sender` field containing a string with hex-encoded Uint160 (LE
representation) for the transaction's `Sender` and/or `signer` in the same
format for one of the transaction's `Signers`.
* `notification_from_execution`
Filter: `contract` field containing a string with hex-encoded Uint160 (LE
representation) and/or `name` field containing a string with the execution
notification name.
* `transaction_executed`
Filter: `state` field containing a `HALT` or `FAULT` string for successful
and failed executions respectively.
* `notary_request_event`
Filter: `sender` field containing a string with hex-encoded Uint160 (LE
representation) for the notary request's `Sender` and/or `signer` in the same
format for one of the main transaction's `Signers`.
Events are sent as JSON-RPC notifications from the server with the `method` field
being used for notification names. Notification names are identical to the stream
names described for the `subscribe` method with one important addition:
`event_missed` can be sent for any subscription to signify that some
events have not been delivered (usually when a client is unable to keep up with
the event flow).
Verbose responses for various structures like blocks and transactions are used
to simplify working with notifications on the client side. Returned structures
mostly follow the ones used by standard Neo RPC calls but may have some minor
differences.
If a server-side event matches several subscriptions from one client, it's
only sent once.
### `block_added` notification
The first parameter (`params` section) contains a block converted to a JSON
structure, which is similar to a verbose `getblock` response but with the
following differences:
* it doesn't have a `size` field (you can calculate it client-side)
* it doesn't have a `nextblockhash` field (it's supposed to be the latest one
### `transaction_added` notification
The first parameter (`params` section) contains a transaction converted to
JSON, which is similar to a verbose `getrawtransaction` response but with the
following differences:
* block's metadata is missing (`blockhash`, `confirmations`, `blocktime`)
### `transaction_executed` notification
It contains the same result as the `getapplicationlog` method in the first
parameter and no other parameters. The only difference from `getapplicationlog` is
that it always contains zero in the `contract` field.
### `notary_request_event` notification
It contains two parameters: the event type, which can be either "added" or "removed", and
the added (or removed) notary request.
# NeoGo Oracle service
A NeoGo node can act as an oracle service node for https and neofs protocols. It
has to have a wallet with a key belonging to one of the network's designated oracle
nodes (stored in the `RoleManagement` native contract).
It needs the [RPC service](rpc.md) to be enabled and configured properly because
## Configuration
To enable the oracle service, add an `Oracle` subsection to the `ApplicationConfiguration`
section of your node config.
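For illustration, a sketch of such a subsection (the `Enabled` flag, the endpoint addresses and the values are assumptions; the field names follow the parameter descriptions that follow):

```yaml
ApplicationConfiguration:
  Oracle:
    Enabled: true               # assumed enable flag
    AllowPrivateHost: false
    AllowedContentTypes:
      - application/json
    Nodes:                      # hypothetical oracle node RPC endpoints
      - https://oracle1.example.com:10331
      - https://oracle2.example.com:10331
    NeoFS:
      Timeout: 5s
      Nodes:                    # hypothetical NeoFS gRPC endpoints
        - st1.storage.example.com:8080
```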
Parameters:
* `AllowPrivateHost`: boolean value, enables/disables private IPs (like
127.0.0.1 or 192.168.0.1) for https requests. It defaults to false and is
false on public networks, but you can enable it for private ones.
* `AllowedContentTypes`: a list of allowed MIME types. Only `application/json`
is allowed by default. Can be left empty to allow everything.
* `Nodes`: a list of oracle node RPC endpoints, used for oracle node
communication. All oracle nodes should be specified there.
* `NeoFS`: a subsection of its own for NeoFS configuration with two
parameters:
- `Timeout`: request timeout, like "5s"
- `Nodes`: a list of NeoFS nodes (their gRPC interfaces) to get data from;
one node is enough to operate, but they're used in a round-robin fashion,
so you can spread the load by specifying multiple nodes
* `MaxTaskTimeout`: maximum time a request can be active (retried to
## Operation
To run the oracle service on your network, you need to:
* set oracle node keys in the `RoleManagement` contract
* configure and run an appropriate number of oracle nodes with the keys specified in
the `RoleManagement` contract
# Release instructions
This document outlines the neo-go release process. It can be used as a todo
list for a new release.
## Pre-release checks
These should run successfully:
* build
* unit-tests
* lint
Add an entry to CHANGELOG.md following the style established there. Add a
codename, version and release date in the heading. Write a paragraph
describing the most significant changes done in this release. Then add
sections with new features implemented and bugs fixed, describing each change in detail and
with a reference to GitHub issues. Add a generic improvements section for
changes that are not directly visible to the node end-user, such as performance
optimizations, refactoring and API changes. Add a "Behaviour changes" section
if there are any incompatible changes in default settings or the way commands
operate.
@ -34,8 +34,8 @@ Use `vX.Y.Z` tag following the semantic versioning standard.
## Push changes and release tag to Github

This step should bypass the default PR mechanism to get a correct result (so
that releasing requires admin privileges for the project). Both the `master`
branch update and the tag must be pushed simultaneously like this:

$ git push origin master v0.70.1
@ -61,10 +61,10 @@ Copy the github release page link to:
## Deployment

Deploy the updated version to the mainnet/testnet.

## Post-release

The first commit after the release must be tagged with the `X.Y.Z+1-pre` tag
for proper semantic-versioned builds, so it's good to make some minor
documentation update after the release and push it with this new tag.
@ -78,14 +78,14 @@ which would yield the response:
##### `invokefunction`

The neo-go implementation of `invokefunction` does not return the `tx`
field in the answer because that requires signing the transaction with some
key in the server, which doesn't fit the model of our node-client interactions.
Lacking this signature, the transaction is almost useless, so there is no point
in returning it.

It's possible to use `invokefunction` not only with a contract scripthash, but
also with a contract name (for native contracts) or a contract ID (for all
contracts). This feature is not supported by the C# node.
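As an illustration, a minimal `invokefunction` request using a contract scripthash might look like this. The hash, method name and empty argument list are assumptions made for the example, not values from the text above:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "invokefunction",
  "params": [
    "0xd2a4cff31913016155e38e474a2c06d08be276cf",
    "symbol",
    []
  ]
}
```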
##### `getcontractstate`
@ -95,7 +95,7 @@ it only works for native contracts.
##### `getrawtransaction`

VM state is included in the verbose response along with other transaction
fields if the transaction is already on chain.
##### `getstateroot`
@ -107,30 +107,30 @@ where only index is accepted.
This method doesn't work for the Ledger contract; you can get its data via
regular `getblock` and `getrawtransaction` calls. This method is able to get
the storage of a native contract by its name (case-insensitive), unlike the
C# node where it's only possible by index or hash.
#### `getnep11balances` and `getnep17balances`

The neo-go implementation of `getnep11balances` and `getnep17balances` does not
perform tracking of NEP-11 and NEP-17 balances for each account as it is done
in the C# node. Instead, a neo-go node maintains a list of standard-compliant
contracts, i.e. those contracts that have `NEP-11` or `NEP-17` declared in the
supported standards section of the manifest. Each time balances are queried,
the neo-go node asks every NEP-11/NEP-17 contract for the account balance by
invoking the `balanceOf` method with the corresponding args. The invocation
GAS limit is set to 3 GAS. All non-zero balances are included in the RPC call
result.

Thus, if a token contract doesn't have the proper standard declared in the
list of supported standards but emits compliant NEP-11/NEP-17 `Transfer`
notifications, the token balance won't be shown in the list of balances
returned by the neo-go node (unlike the C# node behavior). However, transfer
logs of such tokens are still available via the respective `getnepXXtransfers`
RPC calls.

The behavior of `LastUpdatedBlock` tracking for archival nodes, as well as for
governing token balances, matches the C# node's. For non-archival nodes and
other NEP-11/NEP-17 tokens, if a transfer's `LastUpdatedBlock` is lower than
the latest state synchronization point P the node is working against, then
`LastUpdatedBlock` equals P. For NEP-11 NFTs, `LastUpdatedBlock` is equal for
all tokens of the same asset.
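The `LastUpdatedBlock` rule for non-archival nodes can be sketched as a small helper. This is an illustration of the rule only, not the node's actual code:

```go
package main

import "fmt"

// lastUpdatedBlock illustrates the rule above: for non-archival nodes,
// a transfer's LastUpdatedBlock is never reported below the latest state
// synchronization point P the node is working against.
func lastUpdatedBlock(transferBlock, syncPointP uint32) uint32 {
	if transferBlock < syncPointP {
		return syncPointP
	}
	return transferBlock
}

func main() {
	fmt.Println(lastUpdatedBlock(100, 500)) // older transfer is reported as P
	fmt.Println(lastUpdatedBlock(600, 500)) // newer transfer is reported as-is
}
```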
@ -139,7 +139,7 @@ all tokens of the same asset.
### Unsupported methods

The methods listed below are not going to be supported for various reasons,
and we're not accepting issues related to them.

| Method | Reason |
@ -165,7 +165,7 @@ Some additional extensions are implemented as a part of this RPC server.
This method returns the cumulative system fee for all transactions included
in a block. It can be removed in future versions, but at the moment you can
use it to see how much GAS is burned with a particular block (because system
fees are burned).
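A request sketch for this extension might look like this; the parameter is assumed to be a block index, and the value is illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "getblocksysfee",
  "params": [1000]
}
```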
#### `invokecontractverifyhistoric`, `invokefunctionhistoric` and `invokescripthistoric` calls
@ -198,11 +198,11 @@ payloads to be relayed from RPC to P2P.
#### Limits and paging for getnep11transfers and getnep17transfers

`getnep11transfers` and `getnep17transfers` RPC calls never return more than
1000 results for one request (within the specified time frame). You can pass
your own limit via an additional parameter and then use paging to request the
next batch of transfers.

An example of requesting 10 events for address NbTiM6h8r99kpRtb428XcsUk1TzKed2gTc
within 0-1600094189000 timestamps:

```json
@ -3,11 +3,11 @@
NeoGo supports state validation using N3 stateroots and can also act as a
state validator (run state validation service).

All NeoGo nodes always calculate the MPT root hash for data stored by
contracts. Unlike in Neo Legacy, this behavior can't be turned off. They also
process stateroot messages broadcast through the network and save validated
signatures from them if the state root hash specified there matches the one
signed by validators (or shout loudly in the log if it doesn't, because it
should be the same).
## State validation service
@ -37,7 +37,7 @@ Parameters:
To run the state validation service on your network, you need to:
* set state validation node keys in the `RoleManagement` contract
* configure and run an appropriate number of state validation nodes with the
  keys specified in the `RoleManagement` contract
@ -46,7 +46,7 @@ To run state validation service on your network you need to:
NeoGo also supports a protocol extension to include state root hashes right
into block headers. It's not compatible with the regular Neo N3 state
validation service and it's not compatible with public Neo N3 networks, but
you can use it on private networks if needed.
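On a private network, enabling this extension boils down to one protocol option. A minimal sketch, assuming the standard NeoGo YAML configuration layout (other required protocol settings are omitted here):

```yaml
ProtocolConfiguration:
  # Illustrative fragment: only the option discussed here is shown.
  StateRootInHeader: true
```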
The option is `StateRootInHeader` and it's specified in the
`ProtocolConfiguration` section; set it to true and run your network with it
@ -4,7 +4,7 @@ A cross platform virtual machine implementation for `NEF` compatible programs.
# Installation

The VM is provided as a part of the neo-go binary, so the usual neo-go build
instructions are applicable.

# Running the VM
@ -118,7 +118,7 @@ NEO-GO-VM > run
```

## Running programs with arguments

You can invoke smart contracts with arguments. Take the following ***roll the
dice*** smart contract as an example.

```
package rollthedice
@ -144,9 +144,9 @@ func RollDice(number int) {
To invoke this contract, we need to specify both the method and the arguments.
The first parameter (called method or operation) is always of type
string. Notice that arguments can have different types. They can be inferred
automatically (please refer to the `run` command help), but if you need to
pass a parameter of a specific type, you can specify it in `run`'s arguments:

```
NEO-GO-VM > run rollDice int:1
@ -220,7 +220,7 @@ func TestSetGetRecord(t *testing.T) {
c.Invoke(t, "1.2.3.4", "getRecord", "neo.com", int64(nns.A))

t.Run("SetRecord_compatibility", func(t *testing.T) {
	// tests are taken from the NNS C# implementation and adjusted to the
	// non-native implementation behavior
	testCases := []struct {
		Type nns.RecordType
		Name string
@ -1,7 +1,7 @@
/*
Package nft contains a non-divisible non-fungible NEP-11-compatible token
implementation. This token can be minted with a GAS transfer to the contract
address; it will hash some data (including data provided in the transfer) and
produce a base64-encoded string that is your NFT. Since it's based on hashing
and you basically own a hash, it's HASHY.
*/
@ -54,7 +54,7 @@ func TotalSupply() int {
}

// totalSupply is an internal implementation of TotalSupply operating with
// the given context. The number itself is stored raw in the DB with the
// totalSupplyPrefix key.
func totalSupply(ctx storage.Context) int {
	var res int
@ -66,28 +66,28 @@ func totalSupply(ctx storage.Context) int {
	return res
}

// mkAccountPrefix creates a DB key prefix for the account tokens specified
// by concatenating accountPrefix and the account address.
func mkAccountPrefix(holder interop.Hash160) []byte {
	res := []byte(accountPrefix)
	return append(res, holder...)
}

// mkBalanceKey creates a DB key for the account specified by concatenating
// balancePrefix and the account address.
func mkBalanceKey(holder interop.Hash160) []byte {
	res := []byte(balancePrefix)
	return append(res, holder...)
}

// mkTokenKey creates a DB key for the token specified by concatenating
// tokenPrefix and the token ID.
func mkTokenKey(tokenID []byte) []byte {
	res := []byte(tokenPrefix)
	return append(res, tokenID...)
}

// BalanceOf returns the number of tokens owned by the specified address.
func BalanceOf(holder interop.Hash160) int {
	if len(holder) != 20 {
		panic("bad owner address")
@ -96,7 +96,7 @@ func BalanceOf(holder interop.Hash160) int {
	return getBalanceOf(ctx, mkBalanceKey(holder))
}

// getBalanceOf returns the balance of an account using the database key.
func getBalanceOf(ctx storage.Context, balanceKey []byte) int {
	val := storage.Get(ctx, balanceKey)
	if val != nil {
@ -105,7 +105,7 @@ func getBalanceOf(ctx storage.Context, balanceKey []byte) int {
	return 0
}

// addToBalance adds an amount to the account balance. The amount can be negative.
func addToBalance(ctx storage.Context, holder interop.Hash160, amount int) {
	key := mkBalanceKey(holder)
	old := getBalanceOf(ctx, key)
@ -117,13 +117,13 @@ func addToBalance(ctx storage.Context, holder interop.Hash160, amount int) {
	}
}

// addToken adds a token to the account.
func addToken(ctx storage.Context, holder interop.Hash160, token []byte) {
	key := mkAccountPrefix(holder)
	storage.Put(ctx, append(key, token...), token)
}

// removeToken removes the token from the account.
func removeToken(ctx storage.Context, holder interop.Hash160, token []byte) {
	key := mkAccountPrefix(holder)
	storage.Delete(ctx, append(key, token...))
@ -137,7 +137,7 @@ func Tokens() iterator.Iterator {
	return iter
}

// TokensOf returns an iterator with all tokens held by the specified address.
func TokensOf(holder interop.Hash160) iterator.Iterator {
	if len(holder) != 20 {
		panic("bad owner address")
@ -148,8 +148,8 @@ func TokensOf(holder interop.Hash160) iterator.Iterator {
	return iter
}

// getOwnerOf returns the current owner of the specified token or panics if
// the token ID is invalid. The owner is stored as the value of the token key
// (prefix + token ID).
func getOwnerOf(ctx storage.Context, token []byte) interop.Hash160 {
	key := mkTokenKey(token)
	val := storage.Get(ctx, key)
@ -159,13 +159,13 @@ func getOwnerOf(ctx storage.Context, token []byte) interop.Hash160 {
	return val.(interop.Hash160)
}

// setOwnerOf writes the current owner of the specified token into the DB.
func setOwnerOf(ctx storage.Context, token []byte, holder interop.Hash160) {
	key := mkTokenKey(token)
	storage.Put(ctx, key, holder)
}

// OwnerOf returns the owner of the specified token.
func OwnerOf(token []byte) interop.Hash160 {
	ctx := storage.GetReadOnlyContext()
	return getOwnerOf(ctx, token)
@ -248,14 +248,14 @@ func OnNEP17Payment(from interop.Hash160, amount int, data interface{}) {
	postTransfer(nil, from, []byte(token), nil) // no `data` during minting
}

// Verify allows the owner to manage the contract's address, including
// transferring earned GAS from the contract's address to somewhere else. It
// just checks that the transaction is also signed by the contract owner, so
// the contract's witness should be empty.
func Verify() bool {
	return runtime.CheckWitness(contractOwner)
}

// Destroy destroys the contract; only its owner can do that.
func Destroy() {
	if !Verify() {
		panic("only owner can destroy")
@ -263,7 +263,7 @@ func Destroy() {
	management.Destroy()
}

// Update updates the contract; only its owner can do that.
func Update(nef, manifest []byte) {
	if !Verify() {
		panic("only owner can update")
@ -40,7 +40,7 @@ func CheckWitness() bool {
	return false
}

// Log logs the given message.
func Log(message string) {
	runtime.Log(message)
}
@ -50,12 +50,12 @@ func Notify(event interface{}) {
	runtime.Notify("Event", event)
}

// Verify method is used when the contract is being used as a signer of a
// transaction. It can have parameters (that then need to be present in the
// invocation script) and it returns a simple pass/fail result. This
// implementation just checks for the owner's signature presence.
func Verify() bool {
	// Technically, this restriction is not needed, but you can see the difference
	// between the invokefunction and invokecontractverify RPC methods with it.
	if runtime.GetTrigger() != runtime.Verification {
		return false
@ -63,7 +63,7 @@ func Verify() bool {
	return CheckWitness()
}

// Destroy destroys the contract; only the owner can do that.
func Destroy() {
	if !Verify() {
		panic("only owner can destroy")
@ -71,7 +71,7 @@ func Destroy() {
	management.Destroy()
}

// Update updates the contract; only the owner can do that. _deploy will be
// called after the update.
func Update(nef, manifest []byte) {
	if !Verify() {
@ -16,19 +16,19 @@ func init() {
	ctx = storage.GetContext()
}

// Put puts the value at the key.
func Put(key, value []byte) []byte {
	storage.Put(ctx, key, value)
	return key
}

// PutDefault puts the value to the default key.
func PutDefault(value []byte) []byte {
	storage.Put(ctx, defaultKey, value)
	return defaultKey
}

// Get returns the value at the passed key.
func Get(key []byte) interface{} {
	return storage.Get(ctx, key)
}
@ -38,13 +38,13 @@ func GetDefault() interface{} {
	return storage.Get(ctx, defaultKey)
}

// Delete deletes the value at the passed key.
func Delete(key []byte) bool {
	storage.Delete(ctx, key)
	return true
}

// Find returns an array of key-value pairs whose keys match the passed value.
func Find(value []byte) []string {
	iter := storage.Find(ctx, value, storage.None)
	result := []string{}
@ -18,7 +18,7 @@ var (
	ctx storage.Context
)

// init initializes the Token Interface and storage context for the Smart
// Contract to operate with.
func init() {
	token = nep17.Token{
@ -26,7 +26,7 @@ var (
)

// GetTestContractState reads 2 pre-compiled contracts generated by
// TestGenerateHelperContracts, the second of which is allowed to call the first.
func GetTestContractState(t *testing.T, pathToInternalContracts string, id1, id2 int32, sender2 util.Uint160) (*state.Contract, *state.Contract) {
	errNotFound := errors.New("auto-generated oracle contract is not found, use TestGenerateHelperContracts to regenerate")
	neBytes, err := os.ReadFile(filepath.Join(pathToInternalContracts, helper1ContractNEFPath))
@ -36,9 +36,9 @@ func TestGenerateHelperContracts(t *testing.T) {
	require.False(t, saveState)
}

// generateOracleContract generates a helper contract that is able to call
// the native Oracle contract and has a callback method. It uses testchain to
// define Oracle and StdLib native hashes and saves the generated NEF and
// manifest to the `oracle_contract` folder. Set the `saveState` flag to true
// and run the test to rewrite the NEF and manifest files.
func generateOracleContract(t *testing.T, saveState bool) {
	bc, validator, committee := chain.NewMultiWithCustomConfig(t, func(c *config.ProtocolConfiguration) {
@ -131,9 +131,9 @@ func generateOracleContract(t *testing.T, saveState bool) {
	}
}

// generateManagementHelperContracts generates 2 helper contracts, the second
// of which is allowed to call the first. It uses testchain to define
// Management and StdLib native hashes and saves the generated NEF and
// manifest to the `management_contract` folder. Set the `saveState` flag to
// true and run the test to rewrite the NEF and manifest files.
func generateManagementHelperContracts(t *testing.T, saveState bool) {
	bc, validator, committee := chain.NewMultiWithCustomConfig(t, func(c *config.ProtocolConfiguration) {
@ -25,7 +25,7 @@ import (
	uatomic "go.uber.org/atomic"
)

// FakeChain implements the Blockchainer interface, but does not provide real functionality.
type FakeChain struct {
	config.ProtocolConfiguration
	*mempool.Pool
@ -44,7 +44,7 @@ type FakeChain struct {
	UtilityTokenBalance *big.Int
}

// FakeStateSync implements the StateSync interface.
type FakeStateSync struct {
	IsActiveFlag      uatomic.Bool
	IsInitializedFlag uatomic.Bool
@ -54,12 +54,12 @@ type FakeStateSync struct {
	AddMPTNodesFunc func(nodes [][]byte) error
}

// NewFakeChain returns a new FakeChain structure.
func NewFakeChain() *FakeChain {
	return NewFakeChainWithCustomCfg(nil)
}

// NewFakeChainWithCustomCfg returns a new FakeChain structure with the specified protocol configuration.
func NewFakeChainWithCustomCfg(protocolCfg func(c *config.ProtocolConfiguration)) *FakeChain {
	cfg := config.ProtocolConfiguration{Magic: netmode.UnitTestNet, P2PNotaryRequestPayloadPoolSize: 10}
	if protocolCfg != nil {
@ -76,29 +76,29 @@ func NewFakeChainWithCustomCfg(protocolCfg func(c *config.ProtocolConfiguration)
	}
}

// PutBlock implements the Blockchainer interface.
func (chain *FakeChain) PutBlock(b *block.Block) {
	chain.blocks[b.Hash()] = b
	chain.hdrHashes[b.Index] = b.Hash()
	atomic.StoreUint32(&chain.Blockheight, b.Index)
}

// PutHeader implements the Blockchainer interface.
func (chain *FakeChain) PutHeader(b *block.Block) {
	chain.hdrHashes[b.Index] = b.Hash()
}

// PutTx implements the Blockchainer interface.
func (chain *FakeChain) PutTx(tx *transaction.Transaction) {
	chain.txs[tx.Hash()] = tx
}

// ApplyPolicyToTxSet implements the Blockchainer interface.
func (chain *FakeChain) ApplyPolicyToTxSet([]*transaction.Transaction) []*transaction.Transaction {
	panic("TODO")
}

// IsTxStillRelevant implements the Blockchainer interface.
func (chain *FakeChain) IsTxStillRelevant(t *transaction.Transaction, txpool *mempool.Pool, isPartialTx bool) bool {
	panic("TODO")
}
@ -108,17 +108,17 @@ func (chain *FakeChain) InitVerificationContext(ic *interop.Context, hash util.U
	panic("TODO")
}
// IsExtensibleAllowed implements the Blockchainer interface.
func (*FakeChain) IsExtensibleAllowed(uint160 util.Uint160) bool {
	return true
}

// GetNatives implements the blockchainer.Blockchainer interface.
func (*FakeChain) GetNatives() []state.NativeContract {
	panic("TODO")
}

// GetNotaryDepositExpiration implements the Blockchainer interface.
func (chain *FakeChain) GetNotaryDepositExpiration(acc util.Uint160) uint32 {
	if chain.NotaryDepositExpiration != 0 {
		return chain.NotaryDepositExpiration
@ -126,7 +126,7 @@ func (chain *FakeChain) GetNotaryDepositExpiration(acc util.Uint160) uint32 {
	panic("TODO")
}
// GetNotaryContractScriptHash implements the Blockchainer interface.
func (chain *FakeChain) GetNotaryContractScriptHash() util.Uint160 {
	if !chain.NotaryContractScriptHash.Equals(util.Uint160{}) {
		return chain.NotaryContractScriptHash
@ -134,27 +134,27 @@ func (chain *FakeChain) GetNotaryContractScriptHash() util.Uint160 {
	panic("TODO")
}
// GetNotaryBalance implements the Blockchainer interface.
func (chain *FakeChain) GetNotaryBalance(acc util.Uint160) *big.Int {
	panic("TODO")
}

// GetNotaryServiceFeePerKey implements the Blockchainer interface.
func (chain *FakeChain) GetNotaryServiceFeePerKey() int64 {
	panic("TODO")
}

// GetBaseExecFee implements the Policer interface.
func (chain *FakeChain) GetBaseExecFee() int64 {
	return interop.DefaultBaseExecFee
}

// GetStoragePrice implements the Policer interface.
func (chain *FakeChain) GetStoragePrice() int64 {
	return native.DefaultStoragePrice
}

// GetMaxVerificationGAS implements the Policer interface.
func (chain *FakeChain) GetMaxVerificationGAS() int64 {
	if chain.MaxVerificationGAS != 0 {
		return chain.MaxVerificationGAS
@ -162,22 +162,22 @@ func (chain *FakeChain) GetMaxVerificationGAS() int64 {
	panic("TODO")
}
// PoolTxWithData implements the Blockchainer interface.
func (chain *FakeChain) PoolTxWithData(t *transaction.Transaction, data interface{}, mp *mempool.Pool, feer mempool.Feer, verificationFunction func(t *transaction.Transaction, data interface{}) error) error {
	return chain.poolTxWithData(t, data, mp)
}

// RegisterPostBlock implements the Blockchainer interface.
func (chain *FakeChain) RegisterPostBlock(f func(func(*transaction.Transaction, *mempool.Pool, bool) bool, *mempool.Pool, *block.Block)) {
	chain.PostBlock = append(chain.PostBlock, f)
}

// GetConfig implements the Blockchainer interface.
func (chain *FakeChain) GetConfig() config.ProtocolConfiguration {
	return chain.ProtocolConfiguration
}

// CalculateClaimable implements the Blockchainer interface.
func (chain *FakeChain) CalculateClaimable(util.Uint160, uint32) (*big.Int, error) {
	panic("TODO")
}
@ -192,12 +192,12 @@ func (chain *FakeChain) P2PSigExtensionsEnabled() bool {
	return true
}
// AddHeaders implements the Blockchainer interface.
func (chain *FakeChain) AddHeaders(...*block.Header) error {
	panic("TODO")
}

// AddBlock implements the Blockchainer interface.
func (chain *FakeChain) AddBlock(block *block.Block) error {
	if block.Index == atomic.LoadUint32(&chain.Blockheight)+1 {
		chain.PutBlock(block)
@ -205,27 +205,27 @@ func (chain *FakeChain) AddBlock(block *block.Block) error {
	return nil
}
// BlockHeight implements the Feer interface.
func (chain *FakeChain) BlockHeight() uint32 {
	return atomic.LoadUint32(&chain.Blockheight)
}

// Close implements the Blockchainer interface.
func (chain *FakeChain) Close() {
	panic("TODO")
}

// HeaderHeight implements the Blockchainer interface.
func (chain *FakeChain) HeaderHeight() uint32 {
	return atomic.LoadUint32(&chain.Blockheight)
}

// GetAppExecResults implements the Blockchainer interface.
func (chain *FakeChain) GetAppExecResults(hash util.Uint256, trig trigger.Type) ([]state.AppExecResult, error) {
	panic("TODO")
}

// GetBlock implements the Blockchainer interface.
func (chain *FakeChain) GetBlock(hash util.Uint256) (*block.Block, error) {
	if b, ok := chain.blocks[hash]; ok {
		return b, nil
@ -233,27 +233,27 @@ func (chain *FakeChain) GetBlock(hash util.Uint256) (*block.Block, error) {
	return nil, errors.New("not found")
}
// GetCommittee implements the Blockchainer interface.
func (chain *FakeChain) GetCommittee() (keys.PublicKeys, error) {
	panic("TODO")
}

// GetContractState implements the Blockchainer interface.
func (chain *FakeChain) GetContractState(hash util.Uint160) *state.Contract {
	panic("TODO")
}

// GetContractScriptHash implements the Blockchainer interface.
func (chain *FakeChain) GetContractScriptHash(id int32) (util.Uint160, error) {
	panic("TODO")
}

// GetNativeContractScriptHash implements the Blockchainer interface.
func (chain *FakeChain) GetNativeContractScriptHash(name string) (util.Uint160, error) {
	panic("TODO")
}

// GetHeaderHash implements the Blockchainer interface.
func (chain *FakeChain) GetHeaderHash(n int) util.Uint256 {
	if n < 0 || n > math.MaxUint32 {
		return util.Uint256{}
@ -261,7 +261,7 @@ func (chain *FakeChain) GetHeaderHash(n int) util.Uint256 {
	return chain.hdrHashes[uint32(n)]
}
// GetHeader implements the Blockchainer interface.
func (chain *FakeChain) GetHeader(hash util.Uint256) (*block.Header, error) {
	b, err := chain.GetBlock(hash)
	if err != nil {
@ -270,84 +270,84 @@ func (chain *FakeChain) GetHeader(hash util.Uint256) (*block.Header, error) {
	return &b.Header, nil
}
// GetNextBlockValidators implements the Blockchainer interface.
func (chain *FakeChain) GetNextBlockValidators() ([]*keys.PublicKey, error) {
	panic("TODO")
}

// GetNEP11Contracts implements the Blockchainer interface.
func (chain *FakeChain) GetNEP11Contracts() []util.Uint160 {
	panic("TODO")
}

// GetNEP17Contracts implements the Blockchainer interface.
func (chain *FakeChain) GetNEP17Contracts() []util.Uint160 {
	panic("TODO")
}

// GetTokenLastUpdated implements the Blockchainer interface.
func (chain *FakeChain) GetTokenLastUpdated(acc util.Uint160) (map[int32]uint32, error) {
	panic("TODO")
}

// ForEachNEP11Transfer implements the Blockchainer interface.
func (chain *FakeChain) ForEachNEP11Transfer(util.Uint160, uint64, func(*state.NEP11Transfer) (bool, error)) error {
	panic("TODO")
}

// ForEachNEP17Transfer implements the Blockchainer interface.
func (chain *FakeChain) ForEachNEP17Transfer(util.Uint160, uint64, func(*state.NEP17Transfer) (bool, error)) error {
	panic("TODO")
}
// GetValidators implements the Blockchainer interface.
func (chain *FakeChain) GetValidators() ([]*keys.PublicKey, error) {
	panic("TODO")
}

// GetEnrollments implements the Blockchainer interface.
func (chain *FakeChain) GetEnrollments() ([]state.Validator, error) {
	panic("TODO")
}

// GetStateModule implements the Blockchainer interface.
func (chain *FakeChain) GetStateModule() blockchainer.StateRoot {
	return nil
}

// GetStorageItem implements the Blockchainer interface.
func (chain *FakeChain) GetStorageItem(id int32, key []byte) state.StorageItem {
	panic("TODO")
}

// GetTestVM implements the Blockchainer interface.
func (chain *FakeChain) GetTestVM(t trigger.Type, tx *transaction.Transaction, b *block.Block) *interop.Context {
	panic("TODO")
}
// CurrentHeaderHash implements the Blockchainer interface.
func (chain *FakeChain) CurrentHeaderHash() util.Uint256 {
	return util.Uint256{}
}

// CurrentBlockHash implements the Blockchainer interface.
func (chain *FakeChain) CurrentBlockHash() util.Uint256 {
	return util.Uint256{}
}

// HasBlock implements the Blockchainer interface.
func (chain *FakeChain) HasBlock(h util.Uint256) bool {
	_, ok := chain.blocks[h]
	return ok
}

// HasTransaction implements the Blockchainer interface.
func (chain *FakeChain) HasTransaction(h util.Uint256) bool {
	_, ok := chain.txs[h]
	return ok
}

// GetTransaction implements the Blockchainer interface.
func (chain *FakeChain) GetTransaction(h util.Uint256) (*transaction.Transaction, uint32, error) {
	if tx, ok := chain.txs[h]; ok {
		return tx, 1, nil
@ -355,12 +355,12 @@ func (chain *FakeChain) GetTransaction(h util.Uint256) (*transaction.Transaction
	return nil, 0, errors.New("not found")
}
// GetMemPool implements the Blockchainer interface.
func (chain *FakeChain) GetMemPool() *mempool.Pool {
	return chain.Pool
}

// GetGoverningTokenBalance implements the Blockchainer interface.
func (chain *FakeChain) GetGoverningTokenBalance(acc util.Uint160) (*big.Int, uint32) {
	panic("TODO")
}
@ -373,52 +373,52 @@ func (chain *FakeChain) GetUtilityTokenBalance(uint160 util.Uint160) *big.Int {
	panic("TODO")
}
// ManagementContractHash implements the Blockchainer interface.
func (chain FakeChain) ManagementContractHash() util.Uint160 {
	panic("TODO")
}

// PoolTx implements the Blockchainer interface.
func (chain *FakeChain) PoolTx(tx *transaction.Transaction, _ ...*mempool.Pool) error {
	return chain.PoolTxF(tx)
}

// SetOracle implements the Blockchainer interface.
func (chain FakeChain) SetOracle(services.Oracle) {
	panic("TODO")
}

// SetNotary implements the Blockchainer interface.
func (chain *FakeChain) SetNotary(notary services.Notary) {
	panic("TODO")
}

// SubscribeForBlocks implements the Blockchainer interface.
func (chain *FakeChain) SubscribeForBlocks(ch chan<- *block.Block) {
	chain.blocksCh = append(chain.blocksCh, ch)
}

// SubscribeForExecutions implements the Blockchainer interface.
func (chain *FakeChain) SubscribeForExecutions(ch chan<- *state.AppExecResult) {
	panic("TODO")
}

// SubscribeForNotifications implements the Blockchainer interface.
func (chain *FakeChain) SubscribeForNotifications(ch chan<- *subscriptions.NotificationEvent) {
	panic("TODO")
}

// SubscribeForTransactions implements the Blockchainer interface.
func (chain *FakeChain) SubscribeForTransactions(ch chan<- *transaction.Transaction) {
	panic("TODO")
}

// VerifyTx implements the Blockchainer interface.
func (chain *FakeChain) VerifyTx(*transaction.Transaction) error {
	panic("TODO")
}

// VerifyWitness implements the Blockchainer interface.
func (chain *FakeChain) VerifyWitness(util.Uint160, hash.Hashable, *transaction.Witness, int64) (int64, error) {
	if chain.VerifyWitnessF != nil {
		return chain.VerifyWitnessF()
@ -426,7 +426,7 @@ func (chain *FakeChain) VerifyWitness(util.Uint160, hash.Hashable, *transaction.
	panic("TODO")
}
// UnsubscribeFromBlocks implements the Blockchainer interface.
func (chain *FakeChain) UnsubscribeFromBlocks(ch chan<- *block.Block) {
	for i, c := range chain.blocksCh {
		if c == ch {
@ -438,32 +438,32 @@ func (chain *FakeChain) UnsubscribeFromBlocks(ch chan<- *block.Block) {
	}
}
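`UnsubscribeFromBlocks` scans the subscriber slice for the matching channel (channels are comparable in Go) and removes it. A self-contained sketch of that removal idiom, using `int` channels instead of the real `*block.Block` ones:

```go
package main

import "fmt"

// unsubscribe removes ch from subs, preserving order. Channel values are
// comparable, so a plain equality scan finds the subscriber. Note that
// append(subs[:i], subs[i+1:]...) reuses the backing array, which is fine
// here because the original slice is replaced by the result.
func unsubscribe(subs []chan<- int, ch chan<- int) []chan<- int {
	for i, c := range subs {
		if c == ch {
			return append(subs[:i], subs[i+1:]...)
		}
	}
	return subs // not found: leave the subscriber list unchanged
}

func main() {
	a, b, c := make(chan int), make(chan int), make(chan int)
	subs := []chan<- int{a, b, c}
	subs = unsubscribe(subs, b)
	fmt.Println(len(subs), subs[0] == a, subs[1] == c) // prints: 2 true true
}
```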
// UnsubscribeFromExecutions implements the Blockchainer interface.
func (chain *FakeChain) UnsubscribeFromExecutions(ch chan<- *state.AppExecResult) {
	panic("TODO")
}

// UnsubscribeFromNotifications implements the Blockchainer interface.
func (chain *FakeChain) UnsubscribeFromNotifications(ch chan<- *subscriptions.NotificationEvent) {
	panic("TODO")
}

// UnsubscribeFromTransactions implements the Blockchainer interface.
func (chain *FakeChain) UnsubscribeFromTransactions(ch chan<- *transaction.Transaction) {
	panic("TODO")
}

// AddBlock implements the StateSync interface.
func (s *FakeStateSync) AddBlock(block *block.Block) error {
	panic("TODO")
}

// AddHeaders implements the StateSync interface.
func (s *FakeStateSync) AddHeaders(...*block.Header) error {
	panic("TODO")
}

// AddMPTNodes implements the StateSync interface.
func (s *FakeStateSync) AddMPTNodes(nodes [][]byte) error {
	if s.AddMPTNodesFunc != nil {
		return s.AddMPTNodesFunc(nodes)
@ -471,20 +471,20 @@ func (s *FakeStateSync) AddMPTNodes(nodes [][]byte) error {
	panic("TODO")
}
// BlockHeight implements the StateSync interface.
func (s *FakeStateSync) BlockHeight() uint32 {
	return 0
}

// IsActive implements the StateSync interface.
func (s *FakeStateSync) IsActive() bool { return s.IsActiveFlag.Load() }

// IsInitialized implements the StateSync interface.
func (s *FakeStateSync) IsInitialized() bool {
	return s.IsInitializedFlag.Load()
}

// Init implements the StateSync interface.
func (s *FakeStateSync) Init(currChainHeight uint32) error {
	if s.InitFunc != nil {
		return s.InitFunc(currChainHeight)
@ -492,15 +492,15 @@ func (s *FakeStateSync) Init(currChainHeight uint32) error {
	panic("TODO")
}
// NeedHeaders implements the StateSync interface.
func (s *FakeStateSync) NeedHeaders() bool { return s.RequestHeaders.Load() }

// NeedMPTNodes implements the StateSync interface.
func (s *FakeStateSync) NeedMPTNodes() bool {
	panic("TODO")
}

// Traverse implements the StateSync interface.
func (s *FakeStateSync) Traverse(root util.Uint256, process func(node mpt.Node, nodeBytes []byte) bool) error {
	if s.TraverseFunc != nil {
		return s.TraverseFunc(root, process)
@ -508,7 +508,7 @@ func (s *FakeStateSync) Traverse(root util.Uint256, process func(node mpt.Node,
	panic("TODO")
}
// GetUnknownMPTNodesBatch implements the StateSync interface.
func (s *FakeStateSync) GetUnknownMPTNodesBatch(limit int) []util.Uint256 {
	panic("TODO")
}


@ -24,20 +24,20 @@ var privNetKeys = []string{
	"KxyjQ8eUa4FHt3Gvioyt1Wz29cTUrE4eTqX3yFSk1YFCsPL8uNsY",
	"L2oEXKRAAMiPEZukwR5ho2S6SMeQLhcK9mF71ZnF7GvT8dU4Kkgz",
	// Provide 2 committee extra members so that the committee address differs from
	// the validators one.
	"L1Tr1iq5oz1jaFaMXP21sHDkJYDDkuLtpvQ4wRf1cjKvJYvnvpAb",
	"Kz6XTUrExy78q8f4MjDHnwz8fYYyUE8iPXwPRAkHa3qN2JcHYm7e",
}

// ValidatorsCount returns the number of validators in the testchain.
const ValidatorsCount = 4
var (
	// ids maps the validator order by public key sorting to the validator ID,
	// which is the order of the validator in the StandByValidators list.
	ids = []int{1, 3, 0, 2, 4, 5}
	// orders maps the validator ID to its order by public key sorting.
	orders = []int{2, 0, 3, 1, 4, 5}
)
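Since `ids` maps sort order to validator ID and `orders` maps ID back to sort order, the two tables must be inverse permutations of each other. A standalone check of that property, with the tables copied from the source above:

```go
package main

import "fmt"

// ids and orders are copied verbatim from the testchain package: ids maps
// a public-key sort order to a validator ID, and orders is its inverse.
var (
	ids    = []int{1, 3, 0, 2, 4, 5}
	orders = []int{2, 0, 3, 1, 4, 5}
)

func main() {
	// Taking a validator ID to its order and back must be the identity,
	// otherwise one of the tables is wrong.
	ok := true
	for id := range orders {
		if ids[orders[id]] != id {
			ok = false
		}
	}
	fmt.Println(ok) // prints: true
}
```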
@ -56,12 +56,12 @@ func IDToOrder(id int) int {
	return orders[id]
}
// WIF returns the unencrypted WIF of the specified validator.
func WIF(i int) string {
	return privNetKeys[i]
}

// PrivateKey returns the private key of node #i.
func PrivateKey(i int) *keys.PrivateKey {
	wif := WIF(i)
	priv, err := keys.NewPrivateKeyFromWIF(wif)
@ -154,7 +154,7 @@ func SignCommittee(h hash.Hashable) []byte {
	return buf.Bytes()
}
// NewBlock creates a new block for the given blockchain with the given offset
// (usually 1), primary node index and transactions.
func NewBlock(t *testing.T, bc blockchainer.Blockchainer, offset uint32, primary uint32, txs ...*transaction.Transaction) *block.Block {
	witness := transaction.Witness{VerificationScript: MultisigVerificationScript()}


@ -2,7 +2,7 @@ package testchain
import "github.com/nspcc-dev/neo-go/pkg/config/netmode"

// Network returns the testchain network's magic number.
func Network() netmode.Magic {
	return netmode.UnitTestNet
}


@ -28,7 +28,7 @@ var (
	ownerScript = MultisigVerificationScript()
)

// NewTransferFromOwner returns a transaction transferring funds from the NEO and GAS owner.
func NewTransferFromOwner(bc blockchainer.Blockchainer, contractHash, to util.Uint160, amount int64,
	nonce, validUntil uint32) (*transaction.Transaction, error) {
	w := io.NewBufBinWriter()
@ -51,8 +51,8 @@ func NewTransferFromOwner(bc blockchainer.Blockchainer, contractHash, to util.Ui
	return tx, SignTx(bc, tx)
}
// NewDeployTx returns a new deployment transaction for a contract with the source from r and a name equal to
// the filename without the '.go' suffix.
func NewDeployTx(bc blockchainer.Blockchainer, name string, sender util.Uint160, r gio.Reader, confFile *string) (*transaction.Transaction, util.Uint160, []byte, error) {
	// nef.NewFile() cares about version a lot.
	config.Version = "0.90.0-test"
@ -110,7 +110,7 @@ func NewDeployTx(bc blockchainer.Blockchainer, name string, sender util.Uint160,
	return tx, h, ne.Script, nil
}
// SignTx signs the provided transactions with validator keys.
func SignTx(bc blockchainer.Blockchainer, txs ...*transaction.Transaction) error {
	signTxGeneric(bc, Sign, ownerScript, txs...)
	return nil


@ -9,7 +9,7 @@ import (
	"github.com/stretchr/testify/require"
)

// MarshalUnmarshalJSON checks if the expected value stays the same after
// a marshal/unmarshal round-trip via JSON.
func MarshalUnmarshalJSON(t *testing.T, expected, actual interface{}) {
	data, err := json.Marshal(expected)
@ -18,7 +18,7 @@ func MarshalUnmarshalJSON(t *testing.T, expected, actual interface{}) {
	require.Equal(t, expected, actual)
}
// EncodeDecodeBinary checks if the expected value stays the same after
// serializing/deserializing via io.Serializable methods.
func EncodeDecodeBinary(t *testing.T, expected, actual io.Serializable) {
	data, err := EncodeBinary(expected)
@ -27,7 +27,7 @@ func EncodeDecodeBinary(t *testing.T, expected, actual io.Serializable) {
require.Equal(t, expected, actual) require.Equal(t, expected, actual)
} }
// ToFromStackItem checks if expected stays the same after converting to/from // ToFromStackItem checks if the expected stays the same after converting to/from
// StackItem. // StackItem.
func ToFromStackItem(t *testing.T, expected, actual stackitem.Convertible) { func ToFromStackItem(t *testing.T, expected, actual stackitem.Convertible) {
item, err := expected.ToStackItem() item, err := expected.ToStackItem()
@ -58,7 +58,7 @@ type encodable interface {
Decode(*io.BinReader) error Decode(*io.BinReader) error
} }
// EncodeDecode checks if expected stays the same after // EncodeDecode checks if the expected stays the same after
// serializing/deserializing via encodable methods. // serializing/deserializing via encodable methods.
func EncodeDecode(t *testing.T, expected, actual encodable) { func EncodeDecode(t *testing.T, expected, actual encodable) {
data, err := Encode(expected) data, err := Encode(expected)


@ -21,13 +21,13 @@ var (
	}
)
// newGlobal creates a new global variable.
func (c *codegen) newGlobal(pkg string, name string) {
	name = c.getIdentName(pkg, name)
	c.globals[name] = len(c.globals)
}
// getIdentName returns a fully-qualified name for a variable.
func (c *codegen) getIdentName(pkg string, name string) string {
	if fullName, ok := c.importMap[pkg]; ok {
		pkg = fullName
@ -92,7 +92,7 @@ func (c *codegen) traverseGlobals() bool {
		}
	}
	// because we reuse `convertFuncDecl` for init funcs,
	// we need to clear the scope so that global variables
	// encountered afterwards will be recognized as globals.
	c.scope = nil
})
@ -133,7 +133,7 @@ func (c *codegen) traverseGlobals() bool {
// countGlobals counts the global variables in the program to add
// them to the stack size of the function.
// The second return value is the number of global constants.
func countGlobals(f ast.Node) (int, int) {
	var numVar, numConst int
	ast.Inspect(f, func(node ast.Node) bool {
@ -141,7 +141,7 @@ func countGlobals(f ast.Node) (int, int) {
		// Skip all function declarations if we have already encountered `defer`.
		case *ast.FuncDecl:
			return false
		// After skipping all funcDecls, we are sure that each value spec
		// is a globally declared variable or constant.
		case *ast.GenDecl:
			isVar := n.Tok == token.VAR
@ -172,7 +172,7 @@ func isExprNil(e ast.Expr) bool {
}
// indexOfStruct returns the index of the given field inside that struct.
// If the struct does not contain that field, it returns -1.
func indexOfStruct(strct *types.Struct, fldName string) int {
	for i := 0; i < strct.NumFields(); i++ {
		if strct.Field(i).Name() == fldName {
@ -189,7 +189,7 @@ func (f funcUsage) funcUsed(name string) bool {
	return ok
}
// lastStmtIsReturn checks if the last statement of the declaration is a return statement.
func lastStmtIsReturn(body *ast.BlockStmt) (b bool) {
	if l := len(body.List); l != 0 {
		switch inner := body.List[l-1].(type) {
@ -240,11 +240,11 @@ func (c *codegen) fillDocumentInfo() {
	})
}
// analyzeFuncUsage traverses all code and returns a map with functions
// which should be present in the emitted code.
// This is done using BFS starting from exported functions or
// functions used in variable declarations (a graph edge corresponds to
// a function being called in a declaration).
func (c *codegen) analyzeFuncUsage() funcUsage {
	type declPair struct {
		decl *ast.FuncDecl
@ -376,8 +376,8 @@ func canConvert(s string) bool {
	return true
}
// canInline returns true if the function is to be inlined.
// Currently, there is a static list of functions which are inlined;
// this may change in the future.
func canInline(s string, name string) bool {
	if strings.HasPrefix(s, "github.com/nspcc-dev/neo-go/pkg/compiler/testdata/inline") {


@ -35,7 +35,7 @@ type codegen struct {
	// Type information.
	typeInfo *types.Info
	// pkgInfoInline is a stack of type information for packages containing inline functions.
	pkgInfoInline []*packages.Package
	// A mapping of func identifiers with their scope.
@ -63,9 +63,9 @@ type codegen struct {
	// A list of nested label names together with evaluation stack depth.
	labelList []labelWithStackSize
	// inlineLabelOffsets contains the size of labelList at the start of inline call processing.
	// For such calls, we need to drop only the newly created part of the stack.
	inlineLabelOffsets []int
	// globalInlineCount contains the number of auxiliary variables introduced by
	// function inlining during global variable initialization.
	globalInlineCount int
@ -76,7 +76,7 @@ type codegen struct {
	// A label to be used in the next statement.
	nextLabel string
	// sequencePoints is a mapping from the method name to a slice
	// containing info about the mapping from an opcode's offset
	// to a text span in the source file.
	sequencePoints map[string][]DebugSeqPoint
@ -92,25 +92,25 @@ type codegen struct {
	// constMap contains constants from foreign packages.
	constMap map[string]types.TypeAndValue
	// currPkg is the current package being processed.
	currPkg *packages.Package
	// mainPkg is the main package metadata.
	mainPkg *packages.Package
	// packages contains packages in the order they were loaded.
	packages []string
	packageCache map[string]*packages.Package
	// exceptionIndex is the index of the static slot where the exception is stored.
	exceptionIndex int
	// documents contains paths to all files used by the program.
	documents []string
	// docIndex maps the file path to the index in the documents array.
	docIndex map[string]int
	// emittedEvents contains all events emitted by the contract.
	emittedEvents map[string][][]string
	// invokedContracts contains invoked methods of other contracts.
@ -166,7 +166,7 @@ func (c *codegen) newLabel() (l uint16) {
	return
}
// newNamedLabel creates a new label with the specified name.
func (c *codegen) newNamedLabel(typ labelOffsetType, name string) (l uint16) {
	l = c.newLabel()
	lt := labelWithType{name: name, typ: typ}
@ -223,8 +223,8 @@ func (c *codegen) emitStoreStructField(i int) {
	emit.Opcodes(c.prog.BinWriter, opcode.ROT, opcode.SETITEM)
}
// getVarIndex returns the variable type and position in the corresponding slot,
// according to the current scope.
func (c *codegen) getVarIndex(pkg string, name string) *varInfo {
	if pkg == "" {
		if c.scope != nil {
@ -255,7 +255,7 @@ func getBaseOpcode(t varType) (opcode.Opcode, opcode.Opcode) {
	}
}
// emitLoadVar loads the specified variable onto the evaluation stack.
func (c *codegen) emitLoadVar(pkg string, name string) {
	vi := c.getVarIndex(pkg, name)
	if vi.ctx != nil && c.typeAndValueOf(vi.ctx.expr).Value != nil {
@ -284,7 +284,7 @@ func (c *codegen) emitLoadVar(pkg string, name string) {
	c.emitLoadByIndex(vi.refType, vi.index)
}
// emitLoadByIndex loads the specified variable type with index i.
func (c *codegen) emitLoadByIndex(t varType, i int) {
	base, _ := getBaseOpcode(t)
	if i < 7 {
@ -341,7 +341,7 @@ func (c *codegen) emitDefault(t types.Type) {
}
// convertGlobals traverses the AST and only converts global declarations.
// If we call this in convertFuncDecl, it will load all global variables
// into the scope of the function.
func (c *codegen) convertGlobals(f *ast.File, _ *types.Package) {
	ast.Inspect(f, func(node ast.Node) bool {
@ -375,7 +375,7 @@ func (c *codegen) clearSlots(n int) {
}
// convertInitFuncs converts `init()` functions in file f and returns
// the number of locals in the last processed definition as well as the maximum number of locals encountered.
func (c *codegen) convertInitFuncs(f *ast.File, pkg *types.Package, lastCount int) (int, int) {
	maxCount := -1
	ast.Inspect(f, func(node ast.Node) bool {
@ -479,10 +479,10 @@ func (c *codegen) convertFuncDecl(file ast.Node, decl *ast.FuncDecl, pkg *types.
	defer f.vars.dropScope()
	// We need to handle methods, which in Go are just syntactic sugar.
	// The method receiver will be passed in as the first argument.
	// We check if this declaration has a receiver and load it into the scope.
	//
	// FIXME: For now, we will hard cast this to a struct. We can later fine-tune this
	// to support other types.
	if decl.Recv != nil {
		for _, arg := range decl.Recv.List {
@ -915,12 +915,12 @@ func (c *codegen) Visit(node ast.Node) ast.Visitor {
	}
	case *ast.SelectorExpr:
		// If this is a method call, we need to walk the AST to load the struct locally.
		// Otherwise, this is a function call from an imported package and we can call it
		// directly.
		name, isMethod := c.getFuncNameFromSelector(fun)
		if isMethod {
			ast.Walk(c, fun.X)
			// Don't forget to add 1 extra argument when it's a method.
			numArgs++
		}
@ -983,7 +983,7 @@ func (c *codegen) Visit(node ast.Node) ast.Visitor {
		// We can be sure builtins are of type *ast.Ident.
		c.convertBuiltin(n)
	case name != "":
		// Function was not found, thus it can only be an invocation of a func-typed variable or type conversion.
		// We care only about string conversions because all others are effectively no-ops in NeoVM.
		// E.g. one cannot write `bool(int(a))`, only `int32(int(a))`.
		if isString(c.typeOf(n.Fun)) {
@ -1096,7 +1096,7 @@ func (c *codegen) Visit(node ast.Node) ast.Visitor {
		ast.Walk(c, n.X)
		c.emitToken(n.Tok, c.typeOf(n.X))
		// For now, only identifiers are supported for (post) for stmts.
		// for i := 0; i < 10; i++ {}
		// Where the post stmt is ( i++ )
		if ident, ok := n.X.(*ast.Ident); ok {
@ -1218,8 +1218,8 @@ func (c *codegen) Visit(node ast.Node) ast.Visitor {
		ast.Walk(c, n.X)
		// Implementation is a bit different for slices and maps:
		// For slices, we iterate through indices from 0 to len-1, storing array, len and index on the stack.
		// For maps, we iterate through indices from 0 to len-1, storing map, keyarray, size and index on the stack.
		_, isMap := c.typeOf(n.X).Underlying().(*types.Map)
		emit.Opcodes(c.prog.BinWriter, opcode.DUP)
		if isMap {
@ -1281,10 +1281,10 @@ func (c *codegen) Visit(node ast.Node) ast.Visitor {
		return nil
	// We don't really care about assertions for the core logic.
	// The only thing we need is to please the compiler type checking.
	// For this to work properly, we only need to walk the expression,
	// not the assertion type.
	case *ast.TypeAssertExpr:
		ast.Walk(c, n.X)
		if c.isCallExprSyscall(n.X) {
@ -1302,7 +1302,7 @@ func (c *codegen) Visit(node ast.Node) ast.Visitor {
}
// packVarArgs packs variadic arguments into an array
// and returns the number of arguments packed.
func (c *codegen) packVarArgs(n *ast.CallExpr, typ *types.Signature) int {
	varSize := len(n.Args) - typ.Params().Len() + 1
	c.emitReverse(varSize)
@ -1332,12 +1332,12 @@ func (c *codegen) isCallExprSyscall(e ast.Expr) bool {
// Go `defer` statements are a bit different:
// 1. `defer` is always executed regardless of whether an exception has occurred.
// 2. `recover` may or may not handle a possible exception.
// Thus, we use the following approach:
// 1. A thrown exception is saved in a static field X, and a static field Y is set to true.
// 2. For each `defer` there is a dedicated local variable which is set to 1 if the `defer` statement
// is encountered during an actual execution.
// 3. CATCH and FINALLY blocks are the same, and both contain the same CALLs.
// 4. Right before the CATCH block, check a variable from (2). If it is null, jump to the end of the CATCH+FINALLY block.
// 5. In the CATCH block, we set Y to true and emit default return values if it is the last defer.
// 6. Execute the FINALLY block only if Y is false.
func (c *codegen) processDefers() {
@ -1386,7 +1386,7 @@ func (c *codegen) processDefers() {
// emitExplicitConvert handles `someType(someValue)` conversions between string/[]byte.
// Rules for conversion:
// 1. interop.* types are converted to ByteArray if not already.
// 2. Otherwise, convert between ByteArray/Buffer.
// 3. Rules for types which are not string/[]byte should already
// be enforced by the Go parser.
func (c *codegen) emitExplicitConvert(from, to types.Type) {
@ -1847,8 +1847,8 @@ func (c *codegen) convertBuiltin(expr *ast.CallExpr) {
// There are special cases for builtins:
// 1. With FromAddress, parameter conversion happens at compile time,
// so there is no need to push parameters on the stack and perform an actual call.
// 2. With panic, the generated code depends on whether the argument was nil or a string,
// so it should be handled accordingly.
func transformArgs(fs *funcScope, fun ast.Expr, args []ast.Expr) []ast.Expr {
	switch f := fun.(type) {
	case *ast.SelectorExpr:
@ -1868,7 +1868,7 @@ func transformArgs(fs *funcScope, fun ast.Expr, args []ast.Expr) []ast.Expr {
	return args
}
// emitConvert converts the top stack item to the specified type.
func (c *codegen) emitConvert(typ stackitem.Type) {
	emit.Opcodes(c.prog.BinWriter, opcode.DUP)
	emit.Instruction(c.prog.BinWriter, opcode.ISTYPE, []byte{byte(typ)})
@ -2297,7 +2297,7 @@ func (c *codegen) replaceLabelWithOffset(ip int, arg []byte) (int, error) {
// By pure coincidence, this is also the size of the `INITSLOT` instruction.
const longToShortRemoveCount = 3
// shortenJumps converts b to a program where all long JMP*/CALL* specified by absolute offsets
// are replaced with their corresponding short counterparts. It panics if either b or offsets are invalid.
// This is done in 2 passes:
// 1. Alter jump offsets taking into account the parts to be removed.


@ -24,7 +24,7 @@ import (
const fileExt = "nef"
// Options contains all the parameters that affect the behavior of the compiler.
type Options struct {
	// The extension of the output file; default is .nef.
	Ext string
@ -51,10 +51,10 @@ type Options struct {
	// This setting has an effect only if the manifest is emitted.
	NoPermissionsCheck bool
	// Name is a contract's name to be written to the manifest.
	Name string
	// SourceURL is a contract's source URL to be written to the manifest.
	SourceURL string
	// Runtime notifications.
@ -63,10 +63,10 @@ type Options struct {
	// The list of standards supported by the contract.
	ContractSupportedStandards []string
	// SafeMethods contains a list of methods which will be marked as safe in the manifest.
	SafeMethods []string
	// Overloads contains a mapping from the compiled method name to the name emitted in the manifest.
	// It can be used to provide method overloads as Go doesn't have such a capability.
	Overloads map[string]string
@ -94,7 +94,7 @@ func (c *codegen) ForEachPackage(fn func(*packages.Package)) {
	}
}
// ForEachFile executes fn on each file used in the current program.
func (c *codegen) ForEachFile(fn func(*ast.File, *types.Package)) {
	c.ForEachPackage(func(pkg *packages.Package) {
		for _, f := range pkg.Syntax {
@ -173,7 +173,7 @@ func getBuildInfo(name string, src interface{}) (*buildInfo, error) {
	conf.ParseFile = func(fset *token.FileSet, filename string, src []byte) (*ast.File, error) {
		// When compiling a single file, we may or may not load other files from the same package.
		// Here we chose the latter, which is consistent with `go run` behavior.
		// Other dependencies should still be processed.
		if singleFile && filepath.Dir(filename) == filepath.Dir(absName) && filename != absName {
			return nil, nil
@ -196,9 +196,9 @@ func getBuildInfo(name string, src interface{}) (*buildInfo, error) {
	}, nil
}
// Compile compiles a Go program into bytecode that can run on the NEO virtual machine.
// If `r != nil`, `name` is interpreted as a filename, and `r` as file contents.
// Otherwise, `name` is either a file name or the name of the directory containing source files.
func Compile(name string, r io.Reader) ([]byte, error) {
	f, _, err := CompileWithOptions(name, r, nil)
	if err != nil {
@ -208,7 +208,7 @@ func Compile(name string, r io.Reader) ([]byte, error) {
	return f.Script, nil
}
// CompileWithOptions compiles a Go program into bytecode with the provided compiler options.
func CompileWithOptions(name string, r io.Reader, o *Options) (*nef.File, *DebugInfo, error) {
	ctx, err := getBuildInfo(name, r)
	if err != nil {


@ -28,7 +28,7 @@ type compilerTestCase struct {
}
func TestCompiler(t *testing.T) {
	// CompileAndSave uses config.Version for proper .nef generation.
	config.Version = "0.90.0-test"
	testCases := []compilerTestCase{
		{
@ -53,7 +53,7 @@ func TestCompiler(t *testing.T) {
	for _, info := range infos {
		if !info.IsDir() {
			// example smart contracts are located in the `examplePath` subdirectories, but
			// there are also a couple of files inside the `examplePath` that don't need to be compiled
			continue
		}


@ -31,7 +31,7 @@ type DebugInfo struct {
	EmittedEvents map[string][][]string `json:"-"`
	// InvokedContracts contains foreign contract invocations.
	InvokedContracts map[util.Uint160][]string `json:"-"`
	// StaticVariables contains a list of static variable names and types.
	StaticVariables []string `json:"static-variables"`
}
@ -43,19 +43,19 @@ type MethodDebugInfo struct {
	// together with the namespace it belongs to. We need to keep the first letter
	// lowercased to match manifest standards.
	Name DebugMethodName `json:"name"`
	// IsExported defines whether the method is exported.
	IsExported bool `json:"-"`
	// IsFunction defines whether the method has no receiver.
	IsFunction bool `json:"-"`
	// Range is the range of the smart contract's opcodes corresponding to the method.
	Range DebugRange `json:"range"`
	// Parameters is a list of the method's parameters.
	Parameters []DebugParam `json:"params"`
	// ReturnType is the method's return type.
	ReturnType string `json:"return"`
	// ReturnTypeReal is the method's return type as specified in Go code.
	ReturnTypeReal binding.Override `json:"-"`
	// ReturnTypeSC is the return type to use in the manifest.
	ReturnTypeSC smartcontract.ParamType `json:"-"`
	Variables []string `json:"variables"`
	// SeqPoints is a map between source lines and byte-code instruction offsets.
@ -92,13 +92,13 @@ type DebugSeqPoint struct {
EndCol int EndCol int
} }
// DebugRange represents method's section in bytecode. // DebugRange represents the method's section in bytecode.
type DebugRange struct { type DebugRange struct {
Start uint16 Start uint16
End uint16 End uint16
} }
// DebugParam represents variables's name and type. // DebugParam represents the variables's name and type.
type DebugParam struct { type DebugParam struct {
Name string `json:"name"` Name string `json:"name"`
Type string `json:"type"` Type string `json:"type"`
@@ -362,13 +362,13 @@ func (c *codegen) scAndVMTypeFromType(t types.Type) (smartcontract.ParamType, st
}
}
// MarshalJSON implements the json.Marshaler interface.
func (d *DebugRange) MarshalJSON() ([]byte, error) {
return []byte(`"` + strconv.FormatUint(uint64(d.Start), 10) + `-` +
strconv.FormatUint(uint64(d.End), 10) + `"`), nil
}
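The two methods above encode a `DebugRange` as a single quoted `start-end` string rather than a JSON object. A self-contained sketch of that round-trip (the `debugRange` mirror type and its hand-rolled parser are illustrative, not the package's actual code, which delegates to `parsePairJSON`):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// debugRange mirrors DebugRange for illustration only.
type debugRange struct {
	Start, End uint16
}

// MarshalJSON encodes the range as a quoted "start-end" string.
func (d debugRange) MarshalJSON() ([]byte, error) {
	return []byte(`"` + strconv.FormatUint(uint64(d.Start), 10) + `-` +
		strconv.FormatUint(uint64(d.End), 10) + `"`), nil
}

// UnmarshalJSON parses the quoted "start-end" form back into the struct.
func (d *debugRange) UnmarshalJSON(data []byte) error {
	s := strings.Trim(string(data), `"`)
	parts := strings.SplitN(s, "-", 2)
	if len(parts) != 2 {
		return fmt.Errorf("invalid range: %s", s)
	}
	start, err := strconv.ParseUint(parts[0], 10, 16)
	if err != nil {
		return err
	}
	end, err := strconv.ParseUint(parts[1], 10, 16)
	if err != nil {
		return err
	}
	d.Start, d.End = uint16(start), uint16(end)
	return nil
}

func main() {
	r := debugRange{Start: 10, End: 42}
	b, _ := r.MarshalJSON()
	fmt.Println(string(b)) // "10-42"

	var back debugRange
	if err := back.UnmarshalJSON(b); err != nil {
		panic(err)
	}
	fmt.Println(back.Start, back.End) // 10 42
}
```

The compact string form keeps the debug-info JSON small, since every method carries a range.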
// UnmarshalJSON implements the json.Unmarshaler interface.
func (d *DebugRange) UnmarshalJSON(data []byte) error {
startS, endS, err := parsePairJSON(data, "-")
if err != nil {
@@ -389,12 +389,12 @@ func (d *DebugRange) UnmarshalJSON(data []byte) error {
return nil
}
// MarshalJSON implements the json.Marshaler interface.
func (d *DebugParam) MarshalJSON() ([]byte, error) {
return []byte(`"` + d.Name + `,` + d.Type + `"`), nil
}
// UnmarshalJSON implements the json.Unmarshaler interface.
func (d *DebugParam) UnmarshalJSON(data []byte) error {
startS, endS, err := parsePairJSON(data, ",")
if err != nil {
@@ -431,12 +431,12 @@ func (m *MethodDebugInfo) ToManifestMethod() manifest.Method {
return result
}
// MarshalJSON implements the json.Marshaler interface.
func (d *DebugMethodName) MarshalJSON() ([]byte, error) {
return []byte(`"` + d.Namespace + `,` + d.Name + `"`), nil
}
// UnmarshalJSON implements the json.Unmarshaler interface.
func (d *DebugMethodName) UnmarshalJSON(data []byte) error {
startS, endS, err := parsePairJSON(data, ",")
if err != nil {
@@ -449,14 +449,14 @@ func (d *DebugMethodName) UnmarshalJSON(data []byte) error {
return nil
}
// MarshalJSON implements the json.Marshaler interface.
func (d *DebugSeqPoint) MarshalJSON() ([]byte, error) {
s := fmt.Sprintf("%d[%d]%d:%d-%d:%d", d.Opcode, d.Document,
d.StartLine, d.StartCol, d.EndLine, d.EndCol)
return []byte(`"` + s + `"`), nil
}
// UnmarshalJSON implements the json.Unmarshaler interface.
func (d *DebugSeqPoint) UnmarshalJSON(data []byte) error {
_, err := fmt.Sscanf(string(data), `"%d[%d]%d:%d-%d:%d"`,
&d.Opcode, &d.Document, &d.StartLine, &d.StartCol, &d.EndLine, &d.EndCol)
@@ -475,7 +475,7 @@ func parsePairJSON(data []byte, sep string) (string, string, error) {
return ss[0], ss[1], nil
}
// ConvertToManifest converts a contract to the manifest.Manifest struct for the debugger.
// Note: the manifest is taken from an external source; however, it can be generated ad-hoc. See #1038.
func (di *DebugInfo) ConvertToManifest(o *Options) (*manifest.Manifest, error) {
methods := make([]manifest.Method, 0)


@@ -6,7 +6,7 @@ import (
)
// A funcScope represents the scope within the function context.
// It holds all the local variables along with the initialized struct positions.
type funcScope struct {
// Identifier of the function.
name string


@@ -50,8 +50,8 @@ type syscallTestCase struct {
isVoid bool
}
// This test ensures that our wrappers have the necessary number of parameters
// and execute the appropriate syscall. Because of the lack of typing (compared to native contracts),
// parameter types can't be checked.
func TestSyscallExecution(t *testing.T) {
b := `[]byte{1}`


@@ -11,7 +11,7 @@ import (
"github.com/stretchr/testify/require"
)
// In this test, we only check that the needed interop
// is called with the provided arguments in the right order.
func TestVerifyGood(t *testing.T) {
msg := []byte("test message")


@@ -18,7 +18,7 @@ const (
UserAgentFormat = UserAgentWrapper + UserAgentPrefix + "%s" + UserAgentWrapper
)
// Version is the version of the node, set at build time.
var Version string
// Config is the top-level struct representing the config.
@@ -28,7 +28,7 @@ type Config struct {
ApplicationConfiguration ApplicationConfiguration `yaml:"ApplicationConfiguration"`
}
// GenerateUserAgent creates a user agent string based on the build-time environment.
func (c Config) GenerateUserAgent() string {
return fmt.Sprintf(UserAgentFormat, Version)
}


@@ -28,7 +28,7 @@ type (
// P2PNotaryRequestPayloadPoolSize specifies the memory pool size for P2PNotaryRequestPayloads.
// It is valid only if P2PSigExtensions are enabled.
P2PNotaryRequestPayloadPoolSize int `yaml:"P2PNotaryRequestPayloadPoolSize"`
// KeepOnlyLatestState specifies if MPT should only store the latest state.
// If true, DB size will be smaller, but older roots won't be accessible.
// This value should remain the same for the same database.
KeepOnlyLatestState bool `yaml:"KeepOnlyLatestState"`
@@ -46,7 +46,7 @@ type (
// exceeding that a transaction should fail validation. It is set to the estimated daily number
// of blocks with a 15s interval.
MaxValidUntilBlockIncrement uint32 `yaml:"MaxValidUntilBlockIncrement"`
// NativeUpdateHistories is a list of histories of native contract updates.
NativeUpdateHistories map[string][]uint32 `yaml:"NativeActivations"`
// P2PSigExtensions enables additional signature-related logic.
P2PSigExtensions bool `yaml:"P2PSigExtensions"`
@@ -69,7 +69,7 @@ type (
ValidatorsHistory map[uint32]int `yaml:"ValidatorsHistory"`
// Whether to verify received blocks.
VerifyBlocks bool `yaml:"VerifyBlocks"`
// Whether to verify transactions in the received blocks.
VerifyTransactions bool `yaml:"VerifyTransactions"`
}
)
@@ -81,7 +81,7 @@ type heightNumber struct {
}
// Validate checks ProtocolConfiguration for internal consistency and returns
// an error if anything inappropriate is found. Other methods can rely on protocol
// validity after this.
func (p *ProtocolConfiguration) Validate() error {
var err error


@@ -11,7 +11,7 @@ import (
"github.com/nspcc-dev/neo-go/pkg/util"
)
// neoBlock is a wrapper of a core.Block which implements
// methods necessary for the dBFT library.
type neoBlock struct {
coreb.Block
@@ -22,7 +22,7 @@ type neoBlock struct {
var _ block.Block = (*neoBlock)(nil)
// Sign implements the block.Block interface.
func (n *neoBlock) Sign(key crypto.PrivateKey) error {
k := key.(*privateKey)
sig := k.PrivateKey.SignHashable(uint32(n.network), &n.Block)
@@ -30,7 +30,7 @@ func (n *neoBlock) Sign(key crypto.PrivateKey) error {
return nil
}
// Verify implements the block.Block interface.
func (n *neoBlock) Verify(key crypto.PublicKey, sign []byte) error {
k := key.(*publicKey)
if k.PublicKey.VerifyHashable(sign, uint32(n.network), &n.Block) {
@@ -39,7 +39,7 @@ func (n *neoBlock) Verify(key crypto.PublicKey, sign []byte) error {
return errors.New("verification failed")
}
// Transactions implements the block.Block interface.
func (n *neoBlock) Transactions() []block.Transaction {
txes := make([]block.Transaction, len(n.Block.Transactions))
for i, tx := range n.Block.Transactions {
@@ -49,7 +49,7 @@ func (n *neoBlock) Transactions() []block.Transaction {
return txes
}
// SetTransactions implements the block.Block interface.
func (n *neoBlock) SetTransactions(txes []block.Transaction) {
n.Block.Transactions = make([]*transaction.Transaction, len(txes))
for i, tx := range txes {
@@ -57,26 +57,26 @@ func (n *neoBlock) SetTransactions(txes []block.Transaction) {
}
}
// Version implements the block.Block interface.
func (n *neoBlock) Version() uint32 { return n.Block.Version }
// PrevHash implements the block.Block interface.
func (n *neoBlock) PrevHash() util.Uint256 { return n.Block.PrevHash }
// MerkleRoot implements the block.Block interface.
func (n *neoBlock) MerkleRoot() util.Uint256 { return n.Block.MerkleRoot }
// Timestamp implements the block.Block interface.
func (n *neoBlock) Timestamp() uint64 { return n.Block.Timestamp * nsInMs }
// Index implements the block.Block interface.
func (n *neoBlock) Index() uint32 { return n.Block.Index }
// ConsensusData implements the block.Block interface.
func (n *neoBlock) ConsensusData() uint64 { return n.Block.Nonce }
// NextConsensus implements the block.Block interface.
func (n *neoBlock) NextConsensus() util.Uint160 { return n.Block.NextConsensus }
// Signature implements the block.Block interface.
func (n *neoBlock) Signature() []byte { return n.signature }


@@ -7,7 +7,7 @@ import (
"github.com/nspcc-dev/neo-go/pkg/util"
)
// relayCache is a payload cache which is used to store
// last consensus payloads.
type relayCache struct {
*sync.RWMutex
@@ -17,7 +17,7 @@ type relayCache struct {
queue *list.List
}
// hashable is the type of items which can be stored in the relayCache.
type hashable interface {
Hash() util.Uint256
}
@@ -32,7 +32,7 @@ func newFIFOCache(capacity int) *relayCache {
}
}
// Add adds the payload into the cache if it doesn't already exist there.
func (c *relayCache) Add(p hashable) {
c.Lock()
defer c.Unlock()
@@ -52,7 +52,7 @@ func (c *relayCache) Add(p hashable) {
c.elems[h] = e
}
// Has checks if the item is already in the cache.
func (c *relayCache) Has(h util.Uint256) bool {
c.RLock()
defer c.RUnlock()
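The cache above pairs a hash map with a `container/list` queue so that lookups stay O(1) while the oldest payload is evicted first. A simplified sketch of the same FIFO-eviction idea (string keys instead of `util.Uint256`, a plain mutex field instead of the embedded one; illustrative, not the package's actual code):

```go
package main

import (
	"container/list"
	"fmt"
	"sync"
)

// fifoCache stores up to capacity keys and evicts the oldest when full.
type fifoCache struct {
	mu     sync.RWMutex
	maxCap int
	elems  map[string]*list.Element
	queue  *list.List // front = oldest, back = newest
}

func newFIFOCache(capacity int) *fifoCache {
	return &fifoCache{
		maxCap: capacity,
		elems:  make(map[string]*list.Element),
		queue:  list.New(),
	}
}

// Add inserts the key if it isn't cached yet, evicting the oldest
// entry when the cache is at capacity.
func (c *fifoCache) Add(key string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if _, ok := c.elems[key]; ok {
		return
	}
	if c.queue.Len() >= c.maxCap {
		oldest := c.queue.Remove(c.queue.Front()).(string)
		delete(c.elems, oldest)
	}
	c.elems[key] = c.queue.PushBack(key)
}

// Has reports whether the key is currently cached.
func (c *fifoCache) Has(key string) bool {
	c.mu.RLock()
	defer c.mu.RUnlock()
	_, ok := c.elems[key]
	return ok
}

func main() {
	c := newFIFOCache(2)
	c.Add("a")
	c.Add("b")
	c.Add("c") // evicts "a"
	fmt.Println(c.Has("a"), c.Has("b"), c.Has("c")) // false true true
}
```

FIFO (rather than LRU) is enough here because consensus payloads for old rounds stop being requested on their own.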


@@ -14,32 +14,32 @@ type changeView struct {
var _ payload.ChangeView = (*changeView)(nil)
// EncodeBinary implements the io.Serializable interface.
func (c *changeView) EncodeBinary(w *io.BinWriter) {
w.WriteU64LE(c.timestamp)
w.WriteB(byte(c.reason))
}
// DecodeBinary implements the io.Serializable interface.
func (c *changeView) DecodeBinary(r *io.BinReader) {
c.timestamp = r.ReadU64LE()
c.reason = payload.ChangeViewReason(r.ReadB())
}
// NewViewNumber implements the payload.ChangeView interface.
func (c changeView) NewViewNumber() byte { return c.newViewNumber }
// SetNewViewNumber implements the payload.ChangeView interface.
func (c *changeView) SetNewViewNumber(view byte) { c.newViewNumber = view }
// Timestamp implements the payload.ChangeView interface.
func (c changeView) Timestamp() uint64 { return c.timestamp * nsInMs }
// SetTimestamp implements the payload.ChangeView interface.
func (c *changeView) SetTimestamp(ts uint64) { c.timestamp = ts / nsInMs }
// Reason implements the payload.ChangeView interface.
func (c changeView) Reason() payload.ChangeViewReason { return c.reason }
// SetReason implements the payload.ChangeView interface.
func (c *changeView) SetReason(reason payload.ChangeViewReason) { c.reason = reason }


@@ -11,25 +11,25 @@ type commit struct {
}
// signatureSize is an rfc6989 signature size in bytes
// without a leading byte (0x04, uncompressed).
const signatureSize = 64
var _ payload.Commit = (*commit)(nil)
// EncodeBinary implements the io.Serializable interface.
func (c *commit) EncodeBinary(w *io.BinWriter) {
w.WriteBytes(c.signature[:])
}
// DecodeBinary implements the io.Serializable interface.
func (c *commit) DecodeBinary(r *io.BinReader) {
r.ReadBytes(c.signature[:])
}
// Signature implements the payload.Commit interface.
func (c commit) Signature() []byte { return c.signature[:] }
// SetSignature implements the payload.Commit interface.
func (c *commit) SetSignature(signature []byte) {
copy(c.signature[:], signature)
}


@@ -40,7 +40,7 @@ const defaultTimePerBlock = 15 * time.Second
// Number of nanoseconds in a millisecond.
const nsInMs = 1000000
// Category is a message category for extensible payloads.
const Category = "dBFT"
// Ledger is the interface to Blockchain sufficient for Service.
@@ -61,19 +61,19 @@ type Ledger interface {
mempool.Feer
}
// Service represents a consensus instance.
type Service interface {
// Name returns the service name.
Name() string
// Start initializes dBFT and starts the event loop for the consensus service.
// It must be called only when a sufficient number of peers are connected.
Start()
// Shutdown stops the dBFT event loop.
Shutdown()
// OnPayload is a callback to notify the Service about a newly received payload.
OnPayload(p *npayload.Extensible) error
// OnTransaction is a callback to notify the Service about a newly received transaction.
OnTransaction(tx *transaction.Transaction)
}
@@ -100,8 +100,8 @@ type service struct {
finished chan struct{}
// lastTimestamp contains the timestamp of the last processed block.
// We can't rely on the timestamp from the dbft context because it is changed
// before the block is accepted. So, in case of a change view, it will contain
// an updated value.
lastTimestamp uint64
}
@@ -109,23 +109,23 @@ type service struct {
type Config struct {
// Logger is a logger instance.
Logger *zap.Logger
// Broadcast is a callback which is called to notify the server
// about a new consensus payload to be sent.
Broadcast func(p *npayload.Extensible)
// Chain is a Ledger instance.
Chain Ledger
// ProtocolConfiguration contains protocol settings.
ProtocolConfiguration config.ProtocolConfiguration
// RequestTx is a callback which will be called
// when a node lacks transactions present in the block.
RequestTx func(h ...util.Uint256)
// TimePerBlock is the minimal time that should pass before the next block is accepted.
TimePerBlock time.Duration
// Wallet is a local-node wallet configuration.
Wallet *config.Wallet
}
// NewService returns a new consensus.Service instance.
func NewService(cfg Config) (Service, error) {
if cfg.TimePerBlock <= 0 {
cfg.TimePerBlock = defaultTimePerBlock
@@ -155,7 +155,7 @@ func NewService(cfg Config) (Service, error) {
return nil, err
}
// Check that the wallet password is correct for at least one account.
var ok bool
for _, acc := range srv.wallet.Accounts {
err := acc.Decrypt(srv.Config.Wallet.Password, srv.wallet.Scrypt)
@@ -213,7 +213,7 @@ var (
_ block.Block = (*neoBlock)(nil)
)
// NewPayload creates a new consensus payload for the provided network.
func NewPayload(m netmode.Magic, stateRootEnabled bool) *Payload {
return &Payload{
Extensible: npayload.Extensible{
@@ -272,7 +272,7 @@ func (s *service) Start() {
}
}
// Shutdown implements the Service interface.
func (s *service) Shutdown() {
if s.started.Load() {
close(s.quit)


@@ -8,44 +8,44 @@ import (
)
// privateKey is a wrapper around keys.PrivateKey
// which implements the crypto.PrivateKey interface.
type privateKey struct {
*keys.PrivateKey
}
// MarshalBinary implements the encoding.BinaryMarshaler interface.
func (p privateKey) MarshalBinary() (data []byte, err error) {
return p.PrivateKey.Bytes(), nil
}
// UnmarshalBinary implements the encoding.BinaryUnmarshaler interface.
func (p *privateKey) UnmarshalBinary(data []byte) (err error) {
p.PrivateKey, err = keys.NewPrivateKeyFromBytes(data)
return
}
// Sign implements dbft's crypto.PrivateKey interface.
func (p *privateKey) Sign(data []byte) ([]byte, error) {
return p.PrivateKey.Sign(data), nil
}
// publicKey is a wrapper around keys.PublicKey
// which implements the crypto.PublicKey interface.
type publicKey struct {
*keys.PublicKey
}
// MarshalBinary implements the encoding.BinaryMarshaler interface.
func (p publicKey) MarshalBinary() (data []byte, err error) {
return p.PublicKey.Bytes(), nil
}
// UnmarshalBinary implements the encoding.BinaryUnmarshaler interface.
func (p *publicKey) UnmarshalBinary(data []byte) error {
return p.PublicKey.DecodeBytes(data)
}
// Verify implements the crypto.PublicKey interface.
func (p publicKey) Verify(msg, sig []byte) error {
hash := sha256.Sum256(msg)
if p.PublicKey.Verify(sig, hash[:]) {


@@ -44,83 +44,83 @@ const (
payloadGasLimit = 2000000 // 0.02 GAS
)
// ViewNumber implements the payload.ConsensusPayload interface.
func (p Payload) ViewNumber() byte {
return p.message.ViewNumber
}
// SetViewNumber implements the payload.ConsensusPayload interface.
func (p *Payload) SetViewNumber(view byte) {
p.message.ViewNumber = view
}
// Type implements the payload.ConsensusPayload interface.
func (p Payload) Type() payload.MessageType {
return payload.MessageType(p.message.Type)
}
// SetType implements the payload.ConsensusPayload interface.
func (p *Payload) SetType(t payload.MessageType) {
p.message.Type = messageType(t)
}
// Payload implements the payload.ConsensusPayload interface.
func (p Payload) Payload() interface{} {
return p.payload
}
// SetPayload implements the payload.ConsensusPayload interface.
func (p *Payload) SetPayload(pl interface{}) {
p.payload = pl.(io.Serializable)
}
// GetChangeView implements the payload.ConsensusPayload interface.
func (p Payload) GetChangeView() payload.ChangeView { return p.payload.(payload.ChangeView) }
// GetPrepareRequest implements the payload.ConsensusPayload interface.
func (p Payload) GetPrepareRequest() payload.PrepareRequest {
return p.payload.(payload.PrepareRequest)
}
// GetPrepareResponse implements the payload.ConsensusPayload interface.
func (p Payload) GetPrepareResponse() payload.PrepareResponse {
return p.payload.(payload.PrepareResponse)
}
// GetCommit implements the payload.ConsensusPayload interface.
func (p Payload) GetCommit() payload.Commit { return p.payload.(payload.Commit) }
// GetRecoveryRequest implements the payload.ConsensusPayload interface.
func (p Payload) GetRecoveryRequest() payload.RecoveryRequest {
return p.payload.(payload.RecoveryRequest)
}
// GetRecoveryMessage implements the payload.ConsensusPayload interface.
func (p Payload) GetRecoveryMessage() payload.RecoveryMessage {
return p.payload.(payload.RecoveryMessage)
}
// ValidatorIndex implements the payload.ConsensusPayload interface.
func (p Payload) ValidatorIndex() uint16 {
return uint16(p.message.ValidatorIndex)
}
// SetValidatorIndex implements the payload.ConsensusPayload interface.
func (p *Payload) SetValidatorIndex(i uint16) {
p.message.ValidatorIndex = byte(i)
}
// Height implements the payload.ConsensusPayload interface.
func (p Payload) Height() uint32 {
return p.message.BlockIndex
}
// SetHeight implements the payload.ConsensusPayload interface.
func (p *Payload) SetHeight(h uint32) {
p.message.BlockIndex = h
}
// EncodeBinary implements the io.Serializable interface.
func (p *Payload) EncodeBinary(w *io.BinWriter) {
p.encodeData()
p.Extensible.EncodeBinary(w)
@@ -140,7 +140,7 @@ func (p *Payload) Sign(key *privateKey) error {
return nil
}
// Hash implements the payload.ConsensusPayload interface.
func (p *Payload) Hash() util.Uint256 {
if p.Extensible.Data == nil {
p.encodeData()
@@ -148,7 +148,7 @@ func (p *Payload) Hash() util.Uint256 {
return p.Extensible.Hash()
}
// DecodeBinary implements the io.Serializable interface.
func (p *Payload) DecodeBinary(r *io.BinReader) {
p.Extensible.DecodeBinary(r)
if r.Err == nil {
@@ -156,7 +156,7 @@ func (p *Payload) DecodeBinary(r *io.BinReader) {
}
}
// EncodeBinary implements the io.Serializable interface.
func (m *message) EncodeBinary(w *io.BinWriter) {
w.WriteB(byte(m.Type))
w.WriteU32LE(m.BlockIndex)
@@ -165,7 +165,7 @@ func (m *message) EncodeBinary(w *io.BinWriter) {
m.payload.EncodeBinary(w)
}
// DecodeBinary implements io.Serializable interface. // DecodeBinary implements the io.Serializable interface.
func (m *message) DecodeBinary(r *io.BinReader) { func (m *message) DecodeBinary(r *io.BinReader) {
m.Type = messageType(r.ReadB()) m.Type = messageType(r.ReadB())
m.BlockIndex = r.ReadU32LE() m.BlockIndex = r.ReadU32LE()


@@ -20,7 +20,7 @@ type prepareRequest struct {
 var _ payload.PrepareRequest = (*prepareRequest)(nil)

-// EncodeBinary implements io.Serializable interface.
+// EncodeBinary implements the io.Serializable interface.
 func (p *prepareRequest) EncodeBinary(w *io.BinWriter) {
 	w.WriteU32LE(p.version)
 	w.WriteBytes(p.prevHash[:])
@@ -32,7 +32,7 @@ func (p *prepareRequest) EncodeBinary(w *io.BinWriter) {
 	}
 }

-// DecodeBinary implements io.Serializable interface.
+// DecodeBinary implements the io.Serializable interface.
 func (p *prepareRequest) DecodeBinary(r *io.BinReader) {
 	p.version = r.ReadU32LE()
 	r.ReadBytes(p.prevHash[:])
@@ -44,46 +44,46 @@ func (p *prepareRequest) DecodeBinary(r *io.BinReader) {
 	}
 }

-// Version implements payload.PrepareRequest interface.
+// Version implements the payload.PrepareRequest interface.
 func (p prepareRequest) Version() uint32 {
 	return p.version
 }

-// SetVersion implements payload.PrepareRequest interface.
+// SetVersion implements the payload.PrepareRequest interface.
 func (p *prepareRequest) SetVersion(v uint32) {
 	p.version = v
 }

-// PrevHash implements payload.PrepareRequest interface.
+// PrevHash implements the payload.PrepareRequest interface.
 func (p prepareRequest) PrevHash() util.Uint256 {
 	return p.prevHash
 }

-// SetPrevHash implements payload.PrepareRequest interface.
+// SetPrevHash implements the payload.PrepareRequest interface.
 func (p *prepareRequest) SetPrevHash(h util.Uint256) {
 	p.prevHash = h
 }

-// Timestamp implements payload.PrepareRequest interface.
+// Timestamp implements the payload.PrepareRequest interface.
 func (p *prepareRequest) Timestamp() uint64 { return p.timestamp * nsInMs }

-// SetTimestamp implements payload.PrepareRequest interface.
+// SetTimestamp implements the payload.PrepareRequest interface.
 func (p *prepareRequest) SetTimestamp(ts uint64) { p.timestamp = ts / nsInMs }

-// Nonce implements payload.PrepareRequest interface.
+// Nonce implements the payload.PrepareRequest interface.
 func (p *prepareRequest) Nonce() uint64 { return p.nonce }

-// SetNonce implements payload.PrepareRequest interface.
+// SetNonce implements the payload.PrepareRequest interface.
 func (p *prepareRequest) SetNonce(nonce uint64) { p.nonce = nonce }

-// TransactionHashes implements payload.PrepareRequest interface.
+// TransactionHashes implements the payload.PrepareRequest interface.
 func (p *prepareRequest) TransactionHashes() []util.Uint256 { return p.transactionHashes }

-// SetTransactionHashes implements payload.PrepareRequest interface.
+// SetTransactionHashes implements the payload.PrepareRequest interface.
 func (p *prepareRequest) SetTransactionHashes(hs []util.Uint256) { p.transactionHashes = hs }

-// NextConsensus implements payload.PrepareRequest interface.
+// NextConsensus implements the payload.PrepareRequest interface.
 func (p *prepareRequest) NextConsensus() util.Uint160 { return util.Uint160{} }

-// SetNextConsensus implements payload.PrepareRequest interface.
+// SetNextConsensus implements the payload.PrepareRequest interface.
 func (p *prepareRequest) SetNextConsensus(_ util.Uint160) {}


@@ -13,18 +13,18 @@ type prepareResponse struct {
 var _ payload.PrepareResponse = (*prepareResponse)(nil)

-// EncodeBinary implements io.Serializable interface.
+// EncodeBinary implements the io.Serializable interface.
 func (p *prepareResponse) EncodeBinary(w *io.BinWriter) {
 	w.WriteBytes(p.preparationHash[:])
 }

-// DecodeBinary implements io.Serializable interface.
+// DecodeBinary implements the io.Serializable interface.
 func (p *prepareResponse) DecodeBinary(r *io.BinReader) {
 	r.ReadBytes(p.preparationHash[:])
 }

-// PreparationHash implements payload.PrepareResponse interface.
+// PreparationHash implements the payload.PrepareResponse interface.
 func (p *prepareResponse) PreparationHash() util.Uint256 { return p.preparationHash }

-// SetPreparationHash implements payload.PrepareResponse interface.
+// SetPreparationHash implements the payload.PrepareResponse interface.
 func (p *prepareResponse) SetPreparationHash(h util.Uint256) { p.preparationHash = h }


@@ -43,7 +43,7 @@ type (
 var _ payload.RecoveryMessage = (*recoveryMessage)(nil)

-// DecodeBinary implements io.Serializable interface.
+// DecodeBinary implements the io.Serializable interface.
 func (m *recoveryMessage) DecodeBinary(r *io.BinReader) {
 	r.ReadArray(&m.changeViewPayloads)
@@ -73,7 +73,7 @@ func (m *recoveryMessage) DecodeBinary(r *io.BinReader) {
 	r.ReadArray(&m.commitPayloads)
 }

-// EncodeBinary implements io.Serializable interface.
+// EncodeBinary implements the io.Serializable interface.
 func (m *recoveryMessage) EncodeBinary(w *io.BinWriter) {
 	w.WriteArray(m.changeViewPayloads)
@@ -94,7 +94,7 @@ func (m *recoveryMessage) EncodeBinary(w *io.BinWriter) {
 	w.WriteArray(m.commitPayloads)
 }

-// DecodeBinary implements io.Serializable interface.
+// DecodeBinary implements the io.Serializable interface.
 func (p *changeViewCompact) DecodeBinary(r *io.BinReader) {
 	p.ValidatorIndex = r.ReadB()
 	p.OriginalViewNumber = r.ReadB()
@@ -102,7 +102,7 @@ func (p *changeViewCompact) DecodeBinary(r *io.BinReader) {
 	p.InvocationScript = r.ReadVarBytes(1024)
 }

-// EncodeBinary implements io.Serializable interface.
+// EncodeBinary implements the io.Serializable interface.
 func (p *changeViewCompact) EncodeBinary(w *io.BinWriter) {
 	w.WriteB(p.ValidatorIndex)
 	w.WriteB(p.OriginalViewNumber)
@@ -110,7 +110,7 @@ func (p *changeViewCompact) EncodeBinary(w *io.BinWriter) {
 	w.WriteVarBytes(p.InvocationScript)
 }

-// DecodeBinary implements io.Serializable interface.
+// DecodeBinary implements the io.Serializable interface.
 func (p *commitCompact) DecodeBinary(r *io.BinReader) {
 	p.ViewNumber = r.ReadB()
 	p.ValidatorIndex = r.ReadB()
@@ -118,7 +118,7 @@ func (p *commitCompact) DecodeBinary(r *io.BinReader) {
 	p.InvocationScript = r.ReadVarBytes(1024)
 }

-// EncodeBinary implements io.Serializable interface.
+// EncodeBinary implements the io.Serializable interface.
 func (p *commitCompact) EncodeBinary(w *io.BinWriter) {
 	w.WriteB(p.ViewNumber)
 	w.WriteB(p.ValidatorIndex)
@@ -126,19 +126,19 @@ func (p *commitCompact) EncodeBinary(w *io.BinWriter) {
 	w.WriteVarBytes(p.InvocationScript)
 }

-// DecodeBinary implements io.Serializable interface.
+// DecodeBinary implements the io.Serializable interface.
 func (p *preparationCompact) DecodeBinary(r *io.BinReader) {
 	p.ValidatorIndex = r.ReadB()
 	p.InvocationScript = r.ReadVarBytes(1024)
 }

-// EncodeBinary implements io.Serializable interface.
+// EncodeBinary implements the io.Serializable interface.
 func (p *preparationCompact) EncodeBinary(w *io.BinWriter) {
 	w.WriteB(p.ValidatorIndex)
 	w.WriteVarBytes(p.InvocationScript)
 }

-// AddPayload implements payload.RecoveryMessage interface.
+// AddPayload implements the payload.RecoveryMessage interface.
 func (m *recoveryMessage) AddPayload(p payload.ConsensusPayload) {
 	validator := uint8(p.ValidatorIndex())
@@ -183,7 +183,7 @@ func (m *recoveryMessage) AddPayload(p payload.ConsensusPayload) {
 	}
 }

-// GetPrepareRequest implements payload.RecoveryMessage interface.
+// GetPrepareRequest implements the payload.RecoveryMessage interface.
 func (m *recoveryMessage) GetPrepareRequest(p payload.ConsensusPayload, validators []crypto.PublicKey, primary uint16) payload.ConsensusPayload {
 	if m.prepareRequest == nil {
 		return nil
@@ -210,7 +210,7 @@ func (m *recoveryMessage) GetPrepareRequest(p payload.ConsensusPayload, validato
 	return req
 }

-// GetPrepareResponses implements payload.RecoveryMessage interface.
+// GetPrepareResponses implements the payload.RecoveryMessage interface.
 func (m *recoveryMessage) GetPrepareResponses(p payload.ConsensusPayload, validators []crypto.PublicKey) []payload.ConsensusPayload {
 	if m.preparationHash == nil {
 		return nil
@@ -233,7 +233,7 @@ func (m *recoveryMessage) GetPrepareResponses(p payload.ConsensusPayload, valida
 	return ps
 }

-// GetChangeViews implements payload.RecoveryMessage interface.
+// GetChangeViews implements the payload.RecoveryMessage interface.
 func (m *recoveryMessage) GetChangeViews(p payload.ConsensusPayload, validators []crypto.PublicKey) []payload.ConsensusPayload {
 	ps := make([]payload.ConsensusPayload, len(m.changeViewPayloads))
@@ -254,7 +254,7 @@ func (m *recoveryMessage) GetChangeViews(p payload.ConsensusPayload, validators
 	return ps
 }

-// GetCommits implements payload.RecoveryMessage interface.
+// GetCommits implements the payload.RecoveryMessage interface.
 func (m *recoveryMessage) GetCommits(p payload.ConsensusPayload, validators []crypto.PublicKey) []payload.ConsensusPayload {
 	ps := make([]payload.ConsensusPayload, len(m.commitPayloads))
@@ -271,12 +271,12 @@ func (m *recoveryMessage) GetCommits(p payload.ConsensusPayload, validators []cr
 	return ps
 }

-// PreparationHash implements payload.RecoveryMessage interface.
+// PreparationHash implements the payload.RecoveryMessage interface.
 func (m *recoveryMessage) PreparationHash() *util.Uint256 {
 	return m.preparationHash
 }

-// SetPreparationHash implements payload.RecoveryMessage interface.
+// SetPreparationHash implements the payload.RecoveryMessage interface.
 func (m *recoveryMessage) SetPreparationHash(h *util.Uint256) {
 	m.preparationHash = h
 }


@@ -12,18 +12,18 @@ type recoveryRequest struct {
 var _ payload.RecoveryRequest = (*recoveryRequest)(nil)

-// DecodeBinary implements io.Serializable interface.
+// DecodeBinary implements the io.Serializable interface.
 func (m *recoveryRequest) DecodeBinary(r *io.BinReader) {
 	m.timestamp = r.ReadU64LE()
 }

-// EncodeBinary implements io.Serializable interface.
+// EncodeBinary implements the io.Serializable interface.
 func (m *recoveryRequest) EncodeBinary(w *io.BinWriter) {
 	w.WriteU64LE(m.timestamp)
 }

-// Timestamp implements payload.RecoveryRequest interface.
+// Timestamp implements the payload.RecoveryRequest interface.
 func (m *recoveryRequest) Timestamp() uint64 { return m.timestamp * nsInMs }

-// SetTimestamp implements payload.RecoveryRequest interface.
+// SetTimestamp implements the payload.RecoveryRequest interface.
 func (m *recoveryRequest) SetTimestamp(ts uint64) { m.timestamp = ts / nsInMs }


@@ -143,7 +143,7 @@ func (b *Block) EncodeBinary(bw *io.BinWriter) {
 	}
 }

-// MarshalJSON implements json.Marshaler interface.
+// MarshalJSON implements the json.Marshaler interface.
 func (b Block) MarshalJSON() ([]byte, error) {
 	auxb, err := json.Marshal(auxBlockOut{
 		Transactions: b.Transactions,
@@ -165,7 +165,7 @@ func (b Block) MarshalJSON() ([]byte, error) {
 	return baseBytes, nil
 }

-// UnmarshalJSON implements json.Unmarshaler interface.
+// UnmarshalJSON implements the json.Unmarshaler interface.
 func (b *Block) UnmarshalJSON(data []byte) error {
 	// As Base and auxb are at the same level in json,
 	// do unmarshalling separately for both structs.
@@ -192,7 +192,7 @@ func (b *Block) UnmarshalJSON(data []byte) error {
 	return nil
 }

-// GetExpectedBlockSize returns expected block size which should be equal to io.GetVarSize(b).
+// GetExpectedBlockSize returns the expected block size which should be equal to io.GetVarSize(b).
 func (b *Block) GetExpectedBlockSize() int {
 	var transactionsSize int
 	for _, tx := range b.Transactions {
@@ -201,7 +201,7 @@ func (b *Block) GetExpectedBlockSize() int {
 	return b.GetExpectedBlockSizeWithoutTransactions(len(b.Transactions)) + transactionsSize
 }

-// GetExpectedBlockSizeWithoutTransactions returns expected block size without transactions size.
+// GetExpectedBlockSizeWithoutTransactions returns the expected block size without transactions size.
 func (b *Block) GetExpectedBlockSizeWithoutTransactions(txCount int) int {
 	size := expectedHeaderSizeWithEmptyWitness - 1 - 1 + // 1 is for the zero-length (new(Header)).Script.Invocation/Verification
 		io.GetVarSize(&b.Script) +
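GetExpectedBlockSize sums a fixed header size with the var-size of each serialized piece. The var-length size rules behind io.GetVarSize can be sketched as follows; the boundaries follow the Bitcoin-style varint encoding NEO uses, and the function names here are illustrative, not the library's:

```go
package main

import "fmt"

// varUintSize returns the number of bytes the NEO-style var-length
// encoding uses for an unsigned value: 1 byte below 0xFD, then a
// 1-byte marker plus 2, 4 or 8 bytes of payload.
func varUintSize(v uint64) int {
	switch {
	case v < 0xFD:
		return 1
	case v <= 0xFFFF:
		return 3
	case v <= 0xFFFFFFFF:
		return 5
	default:
		return 9
	}
}

// varBytesSize is the serialized size of a length-prefixed byte slice:
// the size of the length prefix plus the data itself.
func varBytesSize(b []byte) int {
	return varUintSize(uint64(len(b))) + len(b)
}

func main() {
	fmt.Println(varUintSize(0xFC), varUintSize(0xFD), varUintSize(0x10000))
	fmt.Println(varBytesSize(make([]byte, 300)))
}
```

A 300-byte script therefore contributes 303 bytes to the expected size: a 3-byte length prefix plus the data.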


@@ -23,7 +23,7 @@ func trim0x(value interface{}) string {
 	return strings.TrimPrefix(s, "0x")
 }

-// Test blocks are blocks from testnet with their corresponding index.
+// Test blocks are blocks from testnet with their corresponding indices.
 func TestDecodeBlock1(t *testing.T) {
 	data, err := getBlockData(1)
 	require.NoError(t, err)
@@ -126,12 +126,12 @@ func TestBinBlockDecodeEncode(t *testing.T) {
 	assert.Equal(t, len(expected), len(hashes))

-	// changes value in map to true, if hash found
+	// changes value in map to true, if hash is found
 	for _, hash := range hashes {
 		expected[hash] = true
 	}

-	// iterate map; all vlaues should be true
+	// iterate map; all values should be true
 	val := true
 	for _, v := range expected {
 		if v == false {
@@ -151,7 +151,7 @@ func TestBlockSizeCalculation(t *testing.T) {
 	// block taken from C# privnet: 02d7c7801742cd404eb178780c840477f1eef4a771ecc8cc9434640fe8f2bb09
 	// The Size in golang is given by counting the number of bytes of an object. (len(Bytes))
 	// its implementation is different from the corresponding C# and python implementations. But the result should
-	// should be the same.In this test we provide more details then necessary because in case of failure we can easily debug the
-	// root cause of the size calculation missmatch.
+	// be the same. In this test we provide more details than necessary because in case of failure we can easily debug the
+	// root cause of the size calculation mismatch.
 	rawBlock := "AAAAAAwIVa2D6Yha3tArd5XnwkAf7deJBsdyyvpYb2xMZGBbkOUNHAsfre0rKA/F+Ox05/bQSXmcRZnzK3M6Z+/TxJUh0MNFeAEAAAAAAAAAAAAAAQAAAADe7nnBifMAmLC6ai65CzqSWKbH/wHGDEDgwCcXkcaFw5MGOp1cpkgApzDTX2/RxKlmPeXTgWYtfEA8g9svUSbZA4TeoGyWvX8LiN0tJKrzajdMGvTVGqVmDEDp6PBmZmRx9CxswtLht6oWa2Uq4rl5diPsLtqXZeZepMlxUSbaCdlFTB7iWQG9yKXWR5hc0sScevvuVwwsUYdlDEDwlhwZrP07E5fEQKttVMYAiL7edd/eW2yoMGZe6Q95g7yXQ69edVHfQb61fBw3DjCpMDZ5lsxp3BgzXglJwMSKkxMMIQIQOn990BZVhZf3lg0nxRakOU/ZaLnmUVXrSwE+QEBAbgwhAqe8Vf6GhOARl2jRBLoweVvcyGYZ6GSt0mFWcj7Rhc1iDCECs2Ir9AF73+MXxYrtX0x1PyBrfbiWBG+n13S7xL9/jcIMIQPZDAffY+aQzneRLhCrUazJRLZoYCN7YIxPj4MJ5x7mmRRBe85spQIAWNC7C8DYpwAAAAAAIKpEAAAAAADoAwAAAd7uecGJ8wCYsLpqLrkLOpJYpsf/AQBbCwIA4fUFDBSAzse29bVvUFePc38WLTqxTUZlDQwU3u55wYnzAJiwumouuQs6klimx/8UwB8MCHRyYW5zZmVyDBT1Y+pAvCg9TQ4FxI6jBbPyoHNA70FifVtSOQHGDEC4UIzT61GYPx0LdksrF6C2ioYai6fbwpjv3BGAqiyagxiomYGZRLeXZyD67O5FJ86pXRFtSbVYu2YDG+T5ICIgDEDzm/wl+BnHvQXaHQ1rGLtdUMc41wN6I48kPPM7F23gL9sVxGziQIMRLnpTbWHrnzaU9Sy0fXkvIrdJy1KABkSQDEDBwuBuVK+nsZvn1oAscPj6d3FJiUGK9xiHpX9Ipp/5jTnXRBAyzyGc8IZMBVql4WS8kwFe6ojA/9BvFb5eWXnEkxMMIQIQOn990BZVhZf3lg0nxRakOU/ZaLnmUVXrSwE+QEBAbgwhAqe8Vf6GhOARl2jRBLoweVvcyGYZ6GSt0mFWcj7Rhc1iDCECs2Ir9AF73+MXxYrtX0x1PyBrfbiWBG+n13S7xL9/jcIMIQPZDAffY+aQzneRLhCrUazJRLZoYCN7YIxPj4MJ5x7mmRRBe85spQDYJLwZwNinAAAAAAAgqkQAAAAAAOgDAAAB3u55wYnzAJiwumouuQs6klimx/8BAF8LAwBA2d2ITQoADBSAzse29bVvUFePc38WLTqxTUZlDQwU3u55wYnzAJiwumouuQs6klimx/8UwB8MCHRyYW5zZmVyDBTPduKL0AYsSkeO41VhARMZ88+k0kFifVtSOQHGDEDWn0D7z2ELqpN8ghcM/PtfFwo56/BfEasfHuSKECJMYxvU47r2ZtSihg59lGxSZzHsvxTy6nsyvJ22ycNhINdJDECl61cg937N/HujKsLMu2wJMS7C54bzJ3q22Czqllvw3Yp809USgKDs+W+3QD7rI+SFs0OhIn0gooCUU6f/13WjDEDr9XdeT5CGTO8CL0JigzcTcucs0GBcqHs8fToO6zPuuCfS7Wh6dyxSCijT4A4S+7BUdW3dsO7828ke1fj8oNxmkxMMIQIQOn990BZVhZf3lg0nxRakOU/ZaLnmUVXrSwE+QEBAbgwhAqe8Vf6GhOARl2jRBLoweVvcyGYZ6GSt0mFWcj7Rhc1iDCECs2Ir9AF73+MXxYrtX0x1PyBrfbiWBG+n13S7xL9/jcIMIQPZDAffY+aQzneRLhCrUazJRLZoYCN7YIxPj4MJ5x7mmRRBe85spQ=="


@@ -25,8 +25,8 @@ type Header struct {
 	MerkleRoot util.Uint256

 	// Timestamp is a millisecond-precision timestamp.
-	// The time stamp of each block must be later than previous block's time stamp.
-	// Generally the difference of two block's time stamp is about 15 seconds and imprecision is allowed.
+	// The time stamp of each block must be later than the previous block's time stamp.
+	// Generally, the difference between two blocks' time stamps is about 15 seconds and imprecision is allowed.
 	// The height of the block must be exactly equal to the height of the previous block plus 1.
 	Timestamp uint64
@@ -42,11 +42,11 @@ type Header struct {
 	// Script used to validate the block
 	Script transaction.Witness

-	// StateRootEnabled specifies if header contains state root.
+	// StateRootEnabled specifies if the header contains a state root.
 	StateRootEnabled bool
-	// PrevStateRoot is state root of the previous block.
+	// PrevStateRoot is the state root of the previous block.
 	PrevStateRoot util.Uint256
-	// PrimaryIndex is the index of primary consensus node for this block.
+	// PrimaryIndex is the index of the primary consensus node for this block.
 	PrimaryIndex byte

 	// Hash of this block, created when binary encoded (double SHA256).
@@ -78,7 +78,7 @@ func (b *Header) Hash() util.Uint256 {
 	return b.hash
 }

-// DecodeBinary implements Serializable interface.
+// DecodeBinary implements the Serializable interface.
 func (b *Header) DecodeBinary(br *io.BinReader) {
 	b.decodeHashableFields(br)
 	witnessCount := br.ReadVarUint()
@@ -90,7 +90,7 @@ func (b *Header) DecodeBinary(br *io.BinReader) {
 	b.Script.DecodeBinary(br)
 }

-// EncodeBinary implements Serializable interface.
+// EncodeBinary implements the Serializable interface.
 func (b *Header) EncodeBinary(bw *io.BinWriter) {
 	b.encodeHashableFields(bw)
 	bw.WriteVarUint(1)
@@ -98,11 +98,12 @@ func (b *Header) EncodeBinary(bw *io.BinWriter) {
 }

 // createHash creates the hash of the block.
-// When calculating the hash value of the block, instead of calculating the entire block,
-// only first seven fields in the block head will be calculated, which are
-// version, PrevBlock, MerkleRoot, timestamp, and height, the nonce, NextMiner.
-// Since MerkleRoot already contains the hash value of all transactions,
-// the modification of transaction will influence the hash value of the block.
+// When calculating the hash value of the block, instead of processing the entire block,
+// only the header (without the signatures) is added as an input for the hash. It differs
+// from the complete block only in that it doesn't contain transactions, but their hashes
+// are used for MerkleRoot hash calculation. Therefore, adding/removing/changing any
+// transaction affects the header hash and there is no need to use the complete block for
+// hash calculation.
 func (b *Header) createHash() {
 	buf := io.NewBufBinWriter()
 	// No error can occur while encoding hashable fields.
@@ -149,7 +150,7 @@ func (b *Header) decodeHashableFields(br *io.BinReader) {
 	}
 }

-// MarshalJSON implements json.Marshaler interface.
+// MarshalJSON implements the json.Marshaler interface.
 func (b Header) MarshalJSON() ([]byte, error) {
 	aux := baseAux{
 		Hash: b.Hash(),
@@ -169,7 +170,7 @@ func (b Header) MarshalJSON() ([]byte, error) {
 	return json.Marshal(aux)
 }

-// UnmarshalJSON implements json.Unmarshaler interface.
+// UnmarshalJSON implements the json.Unmarshaler interface.
 func (b *Header) UnmarshalJSON(data []byte) error {
 	var aux = new(baseAux)
 	var nextC util.Uint160
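The createHash comment describes hashing only the serialized header fields (double SHA256), with MerkleRoot standing in for the transactions. A minimal standalone sketch of that idea — the field set and layout here are simplified illustrations, not the real Header serialization:

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// headerHash sketches createHash: serialize a few header fields in
// little-endian order and take double SHA256 of the result. Since the
// Merkle root covers all transactions, any transaction change alters
// merkleRoot and thus the header hash, without hashing the whole block.
func headerHash(version uint32, prevHash, merkleRoot [32]byte, timestamp uint64, index uint32) [32]byte {
	buf := make([]byte, 0, 80)
	buf = binary.LittleEndian.AppendUint32(buf, version)
	buf = append(buf, prevHash[:]...)
	buf = append(buf, merkleRoot[:]...)
	buf = binary.LittleEndian.AppendUint64(buf, timestamp)
	buf = binary.LittleEndian.AppendUint32(buf, index)
	first := sha256.Sum256(buf)
	return sha256.Sum256(first[:]) // double SHA256
}

func main() {
	h := headerHash(0, [32]byte{}, [32]byte{1}, 1600000000000, 1)
	fmt.Printf("%x\n", h)
}
```

Flipping a single bit of merkleRoot produces a completely different hash, which is the property the comment relies on.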


@@ -439,7 +439,7 @@ func (bc *Blockchain) init() error {
 	}

 	// Check autogenerated native contracts' manifests and NEFs against the stored ones.
-	// Need to be done after native Management cache initialisation to be able to get
+	// Need to be done after native Management cache initialization to be able to get
 	// contract state from DAO via high-level bc API.
 	for _, c := range bc.contracts.Contracts {
 		md := c.Metadata()


@@ -17,7 +17,7 @@ import (
 	"github.com/nspcc-dev/neo-go/pkg/util"
 )

-// Blockchainer is an interface that abstract the implementation
+// Blockchainer is an interface that abstracts the implementation
 // of the blockchain.
 type Blockchainer interface {
 	ApplyPolicyToTxSet([]*transaction.Transaction) []*transaction.Transaction


@@ -9,7 +9,7 @@ import (
 	"github.com/nspcc-dev/neo-go/pkg/util"
 )

-// DumperRestorer in the interface to get/add blocks from/to.
+// DumperRestorer is an interface to get/add blocks from/to.
 type DumperRestorer interface {
 	AddBlock(block *block.Block) error
 	GetBlock(hash util.Uint256) (*block.Block, error)
@@ -18,7 +18,7 @@ type DumperRestorer interface {
 }

 // Dump writes count blocks from start to the provided writer.
-// Note: header needs to be written separately by client.
+// Note: header needs to be written separately by a client.
 func Dump(bc DumperRestorer, w *io.BinWriter, start, count uint32) error {
 	for i := start; i < start+count; i++ {
 		bh := bc.GetHeaderHash(int(i))
@@ -38,7 +38,7 @@ func Dump(bc DumperRestorer, w *io.BinWriter, start, count uint32) error {
 	return nil
 }

-// Restore restores blocks from provided reader.
+// Restore restores blocks from the provided reader.
 // f is called after addition of every block.
 func Restore(bc DumperRestorer, r *io.BinReader, skip, count uint32, f func(b *block.Block) error) error {
 	readBlock := func(r *io.BinReader) ([]byte, error) {


@ -21,12 +21,12 @@ import (
// HasTransaction errors. // HasTransaction errors.
var ( var (
// ErrAlreadyExists is returned when transaction exists in dao. // ErrAlreadyExists is returned when the transaction exists in dao.
ErrAlreadyExists = errors.New("transaction already exists") ErrAlreadyExists = errors.New("transaction already exists")
// ErrHasConflicts is returned when transaction is in the list of conflicting // ErrHasConflicts is returned when the transaction is in the list of conflicting
// transactions which are already in dao. // transactions which are already in dao.
ErrHasConflicts = errors.New("transaction has conflicts") ErrHasConflicts = errors.New("transaction has conflicts")
// ErrInternalDBInconsistency is returned when the format of retrieved DAO // ErrInternalDBInconsistency is returned when the format of the retrieved DAO
// record is unexpected. // record is unexpected.
ErrInternalDBInconsistency = errors.New("internal DB inconsistency") ErrInternalDBInconsistency = errors.New("internal DB inconsistency")
) )
@ -57,7 +57,7 @@ type NativeContractCache interface {
Copy() NativeContractCache Copy() NativeContractCache
} }
// NewSimple creates new simple dao using provided backend store. // NewSimple creates a new simple dao using the provided backend store.
func NewSimple(backend storage.Store, stateRootInHeader bool, p2pSigExtensions bool) *Simple { func NewSimple(backend storage.Store, stateRootInHeader bool, p2pSigExtensions bool) *Simple {
st := storage.NewMemCachedStore(backend) st := storage.NewMemCachedStore(backend)
return newSimple(st, stateRootInHeader, p2pSigExtensions) return newSimple(st, stateRootInHeader, p2pSigExtensions)
@ -75,12 +75,12 @@ func newSimple(st *storage.MemCachedStore, stateRootInHeader bool, p2pSigExtensi
} }
} }
// GetBatch returns currently accumulated DB changeset. // GetBatch returns the currently accumulated DB changeset.
func (dao *Simple) GetBatch() *storage.MemBatch { func (dao *Simple) GetBatch() *storage.MemBatch {
return dao.Store.GetBatch() return dao.Store.GetBatch()
} }
// GetWrapped returns new DAO instance with another layer of wrapped // GetWrapped returns a new DAO instance with another layer of wrapped
// MemCachedStore around the current DAO Store. // MemCachedStore around the current DAO Store.
func (dao *Simple) GetWrapped() *Simple { func (dao *Simple) GetWrapped() *Simple {
d := NewSimple(dao.Store, dao.Version.StateRootInHeader, dao.Version.P2PSigExtensions) d := NewSimple(dao.Store, dao.Version.StateRootInHeader, dao.Version.P2PSigExtensions)
@ -89,7 +89,7 @@ func (dao *Simple) GetWrapped() *Simple {
return d return d
} }
// GetPrivate returns new DAO instance with another layer of private // GetPrivate returns a new DAO instance with another layer of private
// MemCachedStore around the current DAO Store. // MemCachedStore around the current DAO Store.
func (dao *Simple) GetPrivate() *Simple { func (dao *Simple) GetPrivate() *Simple {
d := &Simple{ d := &Simple{
@ -142,12 +142,12 @@ func (dao *Simple) DeleteContractID(id int32) {
dao.Store.Delete(dao.makeContractIDKey(id)) dao.Store.Delete(dao.makeContractIDKey(id))
} }
// PutContractID adds a mapping from contract's ID to its hash. // PutContractID adds a mapping from a contract's ID to its hash.
func (dao *Simple) PutContractID(id int32, hash util.Uint160) { func (dao *Simple) PutContractID(id int32, hash util.Uint160) {
dao.Store.Put(dao.makeContractIDKey(id), hash.BytesBE()) dao.Store.Put(dao.makeContractIDKey(id), hash.BytesBE())
} }
// GetContractScriptHash retrieves contract's hash given its ID. // GetContractScriptHash retrieves the contract's hash given its ID.
func (dao *Simple) GetContractScriptHash(id int32) (util.Uint160, error) { func (dao *Simple) GetContractScriptHash(id int32) (util.Uint160, error) {
var data = new(util.Uint160) var data = new(util.Uint160)
if err := dao.GetAndDecode(data, dao.makeContractIDKey(id)); err != nil { if err := dao.GetAndDecode(data, dao.makeContractIDKey(id)); err != nil {
@ -259,7 +259,7 @@ func (dao *Simple) GetTokenTransferLog(acc util.Uint160, newestTimestamp uint64,
return &state.TokenTransferLog{Raw: value}, nil return &state.TokenTransferLog{Raw: value}, nil
} }
// PutTokenTransferLog saves given transfer log in the cache. // PutTokenTransferLog saves the given transfer log in the cache.
func (dao *Simple) PutTokenTransferLog(acc util.Uint160, start uint64, index uint32, isNEP11 bool, lg *state.TokenTransferLog) { func (dao *Simple) PutTokenTransferLog(acc util.Uint160, start uint64, index uint32, isNEP11 bool, lg *state.TokenTransferLog) {
key := dao.getTokenTransferLogKey(acc, start, index, isNEP11) key := dao.getTokenTransferLogKey(acc, start, index, isNEP11)
dao.Store.Put(key, lg.Raw) dao.Store.Put(key, lg.Raw)
@ -377,22 +377,22 @@ func (dao *Simple) GetStorageItem(id int32, key []byte) state.StorageItem {
return b return b
} }
// PutStorageItem puts given StorageItem for given id with given // PutStorageItem puts the given StorageItem for the given id with the given
// key into the given store. // key into the given store.
func (dao *Simple) PutStorageItem(id int32, key []byte, si state.StorageItem) { func (dao *Simple) PutStorageItem(id int32, key []byte, si state.StorageItem) {
stKey := dao.makeStorageItemKey(id, key) stKey := dao.makeStorageItemKey(id, key)
dao.Store.Put(stKey, si) dao.Store.Put(stKey, si)
} }
// DeleteStorageItem drops storage item for the given id with the // DeleteStorageItem drops a storage item for the given id with the
// given key from the store. // given key from the store.
func (dao *Simple) DeleteStorageItem(id int32, key []byte) { func (dao *Simple) DeleteStorageItem(id int32, key []byte) {
stKey := dao.makeStorageItemKey(id, key) stKey := dao.makeStorageItemKey(id, key)
dao.Store.Delete(stKey) dao.Store.Delete(stKey)
} }
// Seek executes f for all storage items matching a given `rng` (matching given prefix and // Seek executes f for all storage items matching the given `rng` (matching the given prefix and
// starting from the point specified). If key or value is to be used outside of f, they // starting from the point specified). If the key or the value is to be used outside of f, they
// may not be copied. Seek continues iterating until false is returned from f. // may not be copied. Seek continues iterating until false is returned from f.
func (dao *Simple) Seek(id int32, rng storage.SeekRange, f func(k, v []byte) bool) { func (dao *Simple) Seek(id int32, rng storage.SeekRange, f func(k, v []byte) bool) {
rng.Prefix = slice.Copy(dao.makeStorageItemKey(id, rng.Prefix)) // f() can use dao too. rng.Prefix = slice.Copy(dao.makeStorageItemKey(id, rng.Prefix)) // f() can use dao too.
@ -401,7 +401,7 @@ func (dao *Simple) Seek(id int32, rng storage.SeekRange, f func(k, v []byte) boo
}) })
} }
// SeekAsync sends all storage items matching a given `rng` (matching given prefix and // SeekAsync sends all storage items matching the given `rng` (matching the given prefix and
// starting from the point specified) to a channel and returns the channel. // starting from the point specified) to a channel and returns the channel.
// Resulting keys and values may not be copied. // Resulting keys and values may not be copied.
func (dao *Simple) SeekAsync(ctx context.Context, id int32, rng storage.SeekRange) chan storage.KeyValue { func (dao *Simple) SeekAsync(ctx context.Context, id int32, rng storage.SeekRange) chan storage.KeyValue {
@ -409,7 +409,7 @@ func (dao *Simple) SeekAsync(ctx context.Context, id int32, rng storage.SeekRang
return dao.Store.SeekAsync(ctx, rng, true) return dao.Store.SeekAsync(ctx, rng, true)
} }
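The Seek contract documented above (iterate prefix-matched items in order, stop when the callback returns false) can be sketched with an in-memory map standing in for the backing MemCachedStore; the `seek` helper here is an assumption-level illustration, not the dao code itself:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// seek calls f for every key with the given prefix, in key order, and stops
// as soon as f returns false -- the same contract dao.Simple.Seek documents.
func seek(store map[string]string, prefix string, f func(k, v string) bool) {
	keys := make([]string, 0, len(store))
	for k := range store {
		if strings.HasPrefix(k, prefix) {
			keys = append(keys, k)
		}
	}
	sort.Strings(keys)
	for _, k := range keys {
		if !f(k, store[k]) {
			return // callback asked to stop iterating
		}
	}
}

func main() {
	store := map[string]string{"st/1": "a", "st/2": "b", "st/3": "c", "tx/1": "d"}
	var got []string
	seek(store, "st/", func(k, v string) bool {
		got = append(got, k)
		return len(got) < 2 // stop early after two matches
	})
	fmt.Println(got)
}
```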
// makeStorageItemKey returns a key used to store StorageItem in the DB. // makeStorageItemKey returns the key used to store the StorageItem in the DB.
func (dao *Simple) makeStorageItemKey(id int32, key []byte) []byte { func (dao *Simple) makeStorageItemKey(id int32, key []byte) []byte {
// 1 for prefix + 4 for Uint32 + len(key) for key // 1 for prefix + 4 for Uint32 + len(key) for key
buf := dao.getKeyBuf(5 + len(key)) buf := dao.getKeyBuf(5 + len(key))
@ -446,7 +446,7 @@ func (dao *Simple) getBlock(key []byte) (*block.Block, error) {
return block, nil return block, nil
} }
// Version represents current dao version. // Version represents the current dao version.
type Version struct { type Version struct {
StoragePrefix storage.KeyPrefix StoragePrefix storage.KeyPrefix
StateRootInHeader bool StateRootInHeader bool
@ -549,7 +549,7 @@ func (dao *Simple) GetCurrentHeaderHeight() (i uint32, h util.Uint256, err error
return return
} }
// GetStateSyncPoint returns current state synchronisation point P. // GetStateSyncPoint returns the current state synchronization point P.
func (dao *Simple) GetStateSyncPoint() (uint32, error) { func (dao *Simple) GetStateSyncPoint() (uint32, error) {
b, err := dao.Store.Get(dao.mkKeyPrefix(storage.SYSStateSyncPoint)) b, err := dao.Store.Get(dao.mkKeyPrefix(storage.SYSStateSyncPoint))
if err != nil { if err != nil {
@ -558,8 +558,8 @@ func (dao *Simple) GetStateSyncPoint() (uint32, error) {
return binary.LittleEndian.Uint32(b), nil return binary.LittleEndian.Uint32(b), nil
} }
// GetStateSyncCurrentBlockHeight returns current block height stored during state // GetStateSyncCurrentBlockHeight returns the current block height stored during state
// synchronisation process. // synchronization process.
func (dao *Simple) GetStateSyncCurrentBlockHeight() (uint32, error) { func (dao *Simple) GetStateSyncCurrentBlockHeight() (uint32, error) {
b, err := dao.Store.Get(dao.mkKeyPrefix(storage.SYSStateSyncCurrentBlockHeight)) b, err := dao.Store.Get(dao.mkKeyPrefix(storage.SYSStateSyncCurrentBlockHeight))
if err != nil { if err != nil {
@ -627,7 +627,7 @@ func (dao *Simple) PutVersion(v Version) {
dao.Store.Put(dao.mkKeyPrefix(storage.SYSVersion), v.Bytes()) dao.Store.Put(dao.mkKeyPrefix(storage.SYSVersion), v.Bytes())
} }
// PutCurrentHeader stores current header. // PutCurrentHeader stores the current header.
func (dao *Simple) PutCurrentHeader(h util.Uint256, index uint32) { func (dao *Simple) PutCurrentHeader(h util.Uint256, index uint32) {
buf := dao.getDataBuf() buf := dao.getDataBuf()
buf.WriteBytes(h.BytesLE()) buf.WriteBytes(h.BytesLE())
@ -635,14 +635,14 @@ func (dao *Simple) PutCurrentHeader(h util.Uint256, index uint32) {
dao.Store.Put(dao.mkKeyPrefix(storage.SYSCurrentHeader), buf.Bytes()) dao.Store.Put(dao.mkKeyPrefix(storage.SYSCurrentHeader), buf.Bytes())
} }
// PutStateSyncPoint stores current state synchronisation point P. // PutStateSyncPoint stores the current state synchronization point P.
func (dao *Simple) PutStateSyncPoint(p uint32) { func (dao *Simple) PutStateSyncPoint(p uint32) {
buf := dao.getDataBuf() buf := dao.getDataBuf()
buf.WriteU32LE(p) buf.WriteU32LE(p)
dao.Store.Put(dao.mkKeyPrefix(storage.SYSStateSyncPoint), buf.Bytes()) dao.Store.Put(dao.mkKeyPrefix(storage.SYSStateSyncPoint), buf.Bytes())
} }
// PutStateSyncCurrentBlockHeight stores current block height during state synchronisation process. // PutStateSyncCurrentBlockHeight stores the current block height during the state synchronization process.
func (dao *Simple) PutStateSyncCurrentBlockHeight(h uint32) { func (dao *Simple) PutStateSyncCurrentBlockHeight(h uint32) {
buf := dao.getDataBuf() buf := dao.getDataBuf()
buf.WriteU32LE(h) buf.WriteU32LE(h)
@ -682,7 +682,7 @@ func (dao *Simple) StoreHeaderHashes(hashes []util.Uint256, height uint32) error
} }
// HasTransaction returns nil if the given store does not contain the given // HasTransaction returns nil if the given store does not contain the given
// Transaction hash. It returns an error in case if transaction is in chain // Transaction hash. It returns an error in case the transaction is in chain
// or in the list of conflicting transactions. // or in the list of conflicting transactions.
func (dao *Simple) HasTransaction(hash util.Uint256) error { func (dao *Simple) HasTransaction(hash util.Uint256) error {
key := dao.makeExecutableKey(hash) key := dao.makeExecutableKey(hash)
@ -722,7 +722,7 @@ func (dao *Simple) StoreAsBlock(block *block.Block, aer1 *state.AppExecResult, a
return nil return nil
} }
// DeleteBlock removes block from dao. It's not atomic, so make sure you're // DeleteBlock removes the block from dao. It's not atomic, so make sure you're
// using private MemCached instance here. // using private MemCached instance here.
func (dao *Simple) DeleteBlock(h util.Uint256) error { func (dao *Simple) DeleteBlock(h util.Uint256) error {
key := dao.makeExecutableKey(h) key := dao.makeExecutableKey(h)
@ -752,7 +752,7 @@ func (dao *Simple) DeleteBlock(h util.Uint256) error {
return nil return nil
} }
// StoreHeader saves block header into the store. // StoreHeader saves the block header into the store.
func (dao *Simple) StoreHeader(h *block.Header) error { func (dao *Simple) StoreHeader(h *block.Header) error {
return dao.storeHeader(dao.makeExecutableKey(h.Hash()), h) return dao.storeHeader(dao.makeExecutableKey(h.Hash()), h)
} }
@ -769,9 +769,8 @@ func (dao *Simple) storeHeader(key []byte, h *block.Header) error {
return nil return nil
} }
// StoreAsCurrentBlock stores a hash of the given block with prefix // StoreAsCurrentBlock stores the hash of the given block with prefix
// SYSCurrentBlock. It can reuse given buffer for the purpose of value // SYSCurrentBlock.
// serialization.
func (dao *Simple) StoreAsCurrentBlock(block *block.Block) { func (dao *Simple) StoreAsCurrentBlock(block *block.Block) {
buf := dao.getDataBuf() buf := dao.getDataBuf()
h := block.Hash() h := block.Hash()
@ -780,8 +779,8 @@ func (dao *Simple) StoreAsCurrentBlock(block *block.Block) {
dao.Store.Put(dao.mkKeyPrefix(storage.SYSCurrentBlock), buf.Bytes()) dao.Store.Put(dao.mkKeyPrefix(storage.SYSCurrentBlock), buf.Bytes())
} }
// StoreAsTransaction stores given TX as DataTransaction. It also stores transactions // StoreAsTransaction stores the given TX as DataTransaction. It also stores transactions
// given tx has conflicts with as DataTransaction with dummy version. It can reuse given // the given tx has conflicts with as DataTransaction with dummy version. It can reuse the given
// buffer for the purpose of value serialization. // buffer for the purpose of value serialization.
func (dao *Simple) StoreAsTransaction(tx *transaction.Transaction, index uint32, aer *state.AppExecResult) error { func (dao *Simple) StoreAsTransaction(tx *transaction.Transaction, index uint32, aer *state.AppExecResult) error {
key := dao.makeExecutableKey(tx.Hash()) key := dao.makeExecutableKey(tx.Hash())


@ -10,7 +10,7 @@ import (
// ECDSAVerifyPrice is a gas price of a single verification. // ECDSAVerifyPrice is a gas price of a single verification.
const ECDSAVerifyPrice = 1 << 15 const ECDSAVerifyPrice = 1 << 15
// Calculate returns network fee for transaction. // Calculate returns network fee for a transaction.
func Calculate(base int64, script []byte) (int64, int) { func Calculate(base int64, script []byte) (int64, int) {
var ( var (
netFee int64 netFee int64


@ -4,7 +4,7 @@ import (
"github.com/nspcc-dev/neo-go/pkg/vm/opcode" "github.com/nspcc-dev/neo-go/pkg/vm/opcode"
) )
// Opcode returns the deployment coefficients of specified opcodes. // Opcode returns the deployment coefficients of the specified opcodes.
func Opcode(base int64, opcodes ...opcode.Opcode) int64 { func Opcode(base int64, opcodes ...opcode.Opcode) int64 {
var result int64 var result int64
for _, op := range opcodes { for _, op := range opcodes {


@ -8,7 +8,7 @@ import (
const feeFactor = 30 const feeFactor = 30
// The most common Opcode() use case is to get price for single opcode. // The most common Opcode() use case is to get price for a single opcode.
func BenchmarkOpcode1(t *testing.B) { func BenchmarkOpcode1(t *testing.B) {
// Just so that we don't always test the same opcode. // Just so that we don't always test the same opcode.
script := []opcode.Opcode{opcode.NOP, opcode.ADD, opcode.SYSCALL, opcode.APPEND} script := []opcode.Opcode{opcode.NOP, opcode.ADD, opcode.SYSCALL, opcode.APPEND}


@ -30,7 +30,7 @@ import (
) )
const ( const (
// DefaultBaseExecFee specifies default multiplier for opcode and syscall prices. // DefaultBaseExecFee specifies the default multiplier for opcode and syscall prices.
DefaultBaseExecFee = 30 DefaultBaseExecFee = 30
) )
@ -104,7 +104,7 @@ func (ic *Context) UseSigners(s []transaction.Signer) {
ic.signers = s ic.signers = s
} }
// Signers returns signers witnessing current execution context. // Signers returns signers witnessing the current execution context.
func (ic *Context) Signers() []transaction.Signer { func (ic *Context) Signers() []transaction.Signer {
if ic.signers != nil { if ic.signers != nil {
return ic.signers return ic.signers
@ -115,7 +115,7 @@ func (ic *Context) Signers() []transaction.Signer {
return nil return nil
} }
// Function binds function name, id with the function itself and price, // Function binds function name, id with the function itself and the price,
// it's supposed to be inited once for all interopContexts, so it doesn't use // it's supposed to be inited once for all interopContexts, so it doesn't use
// vm.InteropFuncPrice directly. // vm.InteropFuncPrice directly.
type Function struct { type Function struct {
@ -151,7 +151,7 @@ type Contract interface {
PostPersist(*Context) error PostPersist(*Context) error
} }
// ContractMD represents native contract instance. // ContractMD represents a native contract instance.
type ContractMD struct { type ContractMD struct {
state.NativeContract state.NativeContract
Name string Name string
@ -164,8 +164,8 @@ func NewContractMD(name string, id int32) *ContractMD {
c.ID = id c.ID = id
// NEF is now stored in contract state and affects state dump. // NEF is now stored in the contract state and affects state dump.
// Therefore values are taken from C# node. // Therefore, values are taken from C# node.
c.NEF.Header.Compiler = "neo-core-v3.0" c.NEF.Header.Compiler = "neo-core-v3.0"
c.NEF.Header.Magic = nef.Magic c.NEF.Header.Magic = nef.Magic
c.NEF.Tokens = []nef.MethodToken{} // avoid `nil` result during JSON marshalling c.NEF.Tokens = []nef.MethodToken{} // avoid `nil` result during JSON marshalling
@ -175,7 +175,7 @@ func NewContractMD(name string, id int32) *ContractMD {
return c return c
} }
// UpdateHash creates native contract script and updates hash. // UpdateHash creates a native contract script and updates hash.
func (c *ContractMD) UpdateHash() { func (c *ContractMD) UpdateHash() {
w := io.NewBufBinWriter() w := io.NewBufBinWriter()
for i := range c.Methods { for i := range c.Methods {
@ -195,7 +195,7 @@ func (c *ContractMD) UpdateHash() {
c.NEF.Checksum = c.NEF.CalculateChecksum() c.NEF.Checksum = c.NEF.CalculateChecksum()
} }
// AddMethod adds new method to a native contract. // AddMethod adds a new method to a native contract.
func (c *ContractMD) AddMethod(md *MethodAndPrice, desc *manifest.Method) { func (c *ContractMD) AddMethod(md *MethodAndPrice, desc *manifest.Method) {
md.MD = desc md.MD = desc
desc.Safe = md.RequiredFlags&(callflag.All^callflag.ReadOnly) == 0 desc.Safe = md.RequiredFlags&(callflag.All^callflag.ReadOnly) == 0
@ -217,7 +217,7 @@ func (c *ContractMD) AddMethod(md *MethodAndPrice, desc *manifest.Method) {
c.Methods[index] = *md c.Methods[index] = *md
} }
// GetMethodByOffset returns with the provided offset. // GetMethodByOffset returns the method with the provided offset.
// Offset is offset of `System.Contract.CallNative` syscall. // Offset is offset of `System.Contract.CallNative` syscall.
func (c *ContractMD) GetMethodByOffset(offset int) (MethodAndPrice, bool) { func (c *ContractMD) GetMethodByOffset(offset int) (MethodAndPrice, bool) {
for k := range c.Methods { for k := range c.Methods {
@ -228,7 +228,7 @@ func (c *ContractMD) GetMethodByOffset(offset int) (MethodAndPrice, bool) {
return MethodAndPrice{}, false return MethodAndPrice{}, false
} }
// GetMethod returns method `name` with specified number of parameters. // GetMethod returns method `name` with the specified number of parameters.
func (c *ContractMD) GetMethod(name string, paramCount int) (MethodAndPrice, bool) { func (c *ContractMD) GetMethod(name string, paramCount int) (MethodAndPrice, bool) {
index := sort.Search(len(c.Methods), func(i int) bool { index := sort.Search(len(c.Methods), func(i int) bool {
md := c.Methods[i] md := c.Methods[i]
@ -249,7 +249,7 @@ func (c *ContractMD) GetMethod(name string, paramCount int) (MethodAndPrice, boo
return MethodAndPrice{}, false return MethodAndPrice{}, false
} }
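GetMethod above locates a method by name and parameter count via `sort.Search` over a slice kept sorted by (name, params). A self-contained sketch of that lookup; the `method` type and ordering predicate are illustrative assumptions, not the ContractMD internals:

```go
package main

import (
	"fmt"
	"sort"
)

type method struct {
	name   string
	params int
}

// getMethod binary-searches methods (sorted by name, then params) for an
// exact (name, paramCount) match, mirroring ContractMD.GetMethod.
func getMethod(methods []method, name string, paramCount int) (method, bool) {
	i := sort.Search(len(methods), func(i int) bool {
		md := methods[i]
		return md.name > name || (md.name == name && md.params >= paramCount)
	})
	if i < len(methods) && methods[i].name == name && methods[i].params == paramCount {
		return methods[i], true
	}
	return method{}, false
}

func main() {
	methods := []method{
		{"balanceOf", 1}, {"transfer", 4}, {"transfer", 5},
	}
	_, ok := getMethod(methods, "transfer", 4)
	fmt.Println(ok) // exact match found
	_, ok = getMethod(methods, "transfer", 3)
	fmt.Println(ok) // no method with that arity
}
```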
// AddEvent adds new event to a native contract. // AddEvent adds a new event to the native contract.
func (c *ContractMD) AddEvent(name string, ps ...manifest.Parameter) { func (c *ContractMD) AddEvent(name string, ps ...manifest.Parameter) {
c.Manifest.ABI.Events = append(c.Manifest.ABI.Events, manifest.Event{ c.Manifest.ABI.Events = append(c.Manifest.ABI.Events, manifest.Event{
Name: name, Name: name,
@ -257,7 +257,7 @@ func (c *ContractMD) AddEvent(name string, ps ...manifest.Parameter) {
}) })
} }
// IsActive returns true iff the contract was deployed by the specified height. // IsActive returns true if the contract was deployed by the specified height.
func (c *ContractMD) IsActive(height uint32) bool { func (c *ContractMD) IsActive(height uint32) bool {
history := c.UpdateHistory history := c.UpdateHistory
return len(history) != 0 && history[0] <= height return len(history) != 0 && history[0] <= height
@ -268,7 +268,7 @@ func Sort(fs []Function) {
sort.Slice(fs, func(i, j int) bool { return fs[i].ID < fs[j].ID }) sort.Slice(fs, func(i, j int) bool { return fs[i].ID < fs[j].ID })
} }
// GetContract returns contract by its hash in current interop context. // GetContract returns a contract by its hash in the current interop context.
func (ic *Context) GetContract(hash util.Uint160) (*state.Contract, error) { func (ic *Context) GetContract(hash util.Uint160) (*state.Contract, error) {
return ic.getContract(ic.DAO, hash) return ic.getContract(ic.DAO, hash)
} }
@ -310,7 +310,7 @@ func (ic *Context) SyscallHandler(_ *vm.VM, id uint32) error {
return f.Func(ic) return f.Func(ic)
} }
// SpawnVM spawns new VM with the specified gas limit and set context.VM field. // SpawnVM spawns a new VM with the specified gas limit and set context.VM field.
func (ic *Context) SpawnVM() *vm.VM { func (ic *Context) SpawnVM() *vm.VM {
v := vm.NewWithTrigger(ic.Trigger) v := vm.NewWithTrigger(ic.Trigger)
v.GasLimit = -1 v.GasLimit = -1
@ -319,7 +319,7 @@ func (ic *Context) SpawnVM() *vm.VM {
return v return v
} }
// RegisterCancelFunc adds given function to the list of functions to be called after VM // RegisterCancelFunc adds the given function to the list of functions to be called after the VM
// finishes script execution. // finishes script execution.
func (ic *Context) RegisterCancelFunc(f context.CancelFunc) { func (ic *Context) RegisterCancelFunc(f context.CancelFunc) {
if f != nil { if f != nil {


@ -21,7 +21,7 @@ type policyChecker interface {
IsBlocked(*dao.Simple, util.Uint160) bool IsBlocked(*dao.Simple, util.Uint160) bool
} }
// LoadToken calls method specified by token id. // LoadToken calls method specified by the token id.
func LoadToken(ic *interop.Context) func(id int32) error { func LoadToken(ic *interop.Context) func(id int32) error {
return func(id int32) error { return func(id int32) error {
ctx := ic.VM.Context() ctx := ic.VM.Context()
@ -91,7 +91,7 @@ func callInternal(ic *interop.Context, cs *state.Contract, name string, f callfl
return callExFromNative(ic, ic.VM.GetCurrentScriptHash(), cs, name, args, f, hasReturn) return callExFromNative(ic, ic.VM.GetCurrentScriptHash(), cs, name, args, f, hasReturn)
} }
// callExFromNative calls a contract with flags using provided calling hash. // callExFromNative calls a contract with flags using the provided calling hash.
func callExFromNative(ic *interop.Context, caller util.Uint160, cs *state.Contract, func callExFromNative(ic *interop.Context, caller util.Uint160, cs *state.Contract,
name string, args []stackitem.Item, f callflag.CallFlag, hasReturn bool) error { name string, args []stackitem.Item, f callflag.CallFlag, hasReturn bool) error {
for _, nc := range ic.Natives { for _, nc := range ic.Natives {


@ -15,7 +15,7 @@ import (
"github.com/twmb/murmur3" "github.com/twmb/murmur3"
) )
// GasLeft returns remaining amount of GAS. // GasLeft returns the remaining amount of GAS.
func GasLeft(ic *interop.Context) error { func GasLeft(ic *interop.Context) error {
if ic.VM.GasLimit == -1 { if ic.VM.GasLimit == -1 {
ic.VM.Estack().PushItem(stackitem.NewBigInteger(big.NewInt(ic.VM.GasLimit))) ic.VM.Estack().PushItem(stackitem.NewBigInteger(big.NewInt(ic.VM.GasLimit)))
@ -25,7 +25,7 @@ func GasLeft(ic *interop.Context) error {
return nil return nil
} }
// GetNotifications returns notifications emitted by current contract execution. // GetNotifications returns notifications emitted in the current execution context.
func GetNotifications(ic *interop.Context) error { func GetNotifications(ic *interop.Context) error {
item := ic.VM.Estack().Pop().Item() item := ic.VM.Estack().Pop().Item()
notifications := ic.Notifications notifications := ic.Notifications
@ -61,7 +61,7 @@ func GetNotifications(ic *interop.Context) error {
return nil return nil
} }
// GetInvocationCounter returns how many times current contract was invoked during current tx execution. // GetInvocationCounter returns how many times the current contract has been invoked during the current tx execution.
func GetInvocationCounter(ic *interop.Context) error { func GetInvocationCounter(ic *interop.Context) error {
currentScriptHash := ic.VM.GetCurrentScriptHash() currentScriptHash := ic.VM.GetCurrentScriptHash()
count, ok := ic.Invocations[currentScriptHash] count, ok := ic.Invocations[currentScriptHash]


@ -15,7 +15,7 @@ import (
"github.com/nspcc-dev/neo-go/pkg/vm/stackitem" "github.com/nspcc-dev/neo-go/pkg/vm/stackitem"
) )
// CheckHashedWitness checks given hash against current list of script hashes // CheckHashedWitness checks the given hash against the current list of script hashes
// for verifying in the interop context. // for verifying in the interop context.
func CheckHashedWitness(ic *interop.Context, hash util.Uint160) (bool, error) { func CheckHashedWitness(ic *interop.Context, hash util.Uint160) (bool, error) {
callingSH := ic.VM.GetCallingScriptHash() callingSH := ic.VM.GetCallingScriptHash()
@ -113,8 +113,8 @@ func checkScope(ic *interop.Context, hash util.Uint160) (bool, error) {
return false, nil return false, nil
} }
// CheckKeyedWitness checks hash of signature check contract with a given public // CheckKeyedWitness checks the hash of the signature check contract with the given public
// key against current list of script hashes for verifying in the interop context. // key against the current list of script hashes for verifying in the interop context.
func CheckKeyedWitness(ic *interop.Context, key *keys.PublicKey) (bool, error) { func CheckKeyedWitness(ic *interop.Context, key *keys.PublicKey) (bool, error) {
return CheckHashedWitness(ic, key.GetScriptHash()) return CheckHashedWitness(ic, key.GetScriptHash())
} }


@ -29,7 +29,7 @@ type Iterator struct {
prefix []byte prefix []byte
} }
// NewIterator creates a new Iterator with given options for a given channel of store.Seek results. // NewIterator creates a new Iterator with the given options for the given channel of store.Seek results.
func NewIterator(seekCh chan storage.KeyValue, prefix []byte, opts int64) *Iterator { func NewIterator(seekCh chan storage.KeyValue, prefix []byte, opts int64) *Iterator {
return &Iterator{ return &Iterator{
seekCh: seekCh, seekCh: seekCh,


@ -6,7 +6,7 @@ import (
"github.com/nspcc-dev/neo-go/pkg/util" "github.com/nspcc-dev/neo-go/pkg/util"
) )
// Feer is an interface that abstract the implementation of the fee calculation. // Feer is an interface that abstracts the implementation of the fee calculation.
type Feer interface { type Feer interface {
FeePerByte() int64 FeePerByte() int64
GetUtilityTokenBalance(util.Uint160) *big.Int GetUtilityTokenBalance(util.Uint160) *big.Int


@ -15,24 +15,24 @@ import (
) )
var ( var (
// ErrInsufficientFunds is returned when Sender is not able to pay for // ErrInsufficientFunds is returned when the Sender is not able to pay for
// transaction being added irrespective of the other contents of the // the transaction being added irrespective of the other contents of the
// pool. // pool.
ErrInsufficientFunds = errors.New("insufficient funds") ErrInsufficientFunds = errors.New("insufficient funds")
// ErrConflict is returned when transaction being added is incompatible // ErrConflict is returned when the transaction being added is incompatible
// with the contents of the memory pool (Sender doesn't have enough GAS // with the contents of the memory pool (Sender doesn't have enough GAS
// to pay for all transactions in the pool). // to pay for all transactions in the pool).
ErrConflict = errors.New("conflicts: insufficient funds for all pooled tx") ErrConflict = errors.New("conflicts: insufficient funds for all pooled tx")
// ErrDup is returned when transaction being added is already present // ErrDup is returned when the transaction being added is already present
// in the memory pool. // in the memory pool.
ErrDup = errors.New("already in the memory pool") ErrDup = errors.New("already in the memory pool")
// ErrOOM is returned when transaction just doesn't fit in the memory // ErrOOM is returned when the transaction just doesn't fit in the memory
// pool because of its capacity constraints. // pool because of its capacity constraints.
ErrOOM = errors.New("out of memory") ErrOOM = errors.New("out of memory")
// ErrConflictsAttribute is returned when transaction conflicts with other transactions // ErrConflictsAttribute is returned when the transaction conflicts with other transactions
// due to its (or theirs) Conflicts attributes. // due to its (or theirs) Conflicts attributes.
ErrConflictsAttribute = errors.New("conflicts with memory pool due to Conflicts attribute") ErrConflictsAttribute = errors.New("conflicts with memory pool due to Conflicts attribute")
// ErrOracleResponse is returned when mempool already contains transaction // ErrOracleResponse is returned when the mempool already contains a transaction
// with the same oracle response ID and higher network fee. // with the same oracle response ID and higher network fee.
ErrOracleResponse = errors.New("conflicts with memory pool due to OracleResponse attribute") ErrOracleResponse = errors.New("conflicts with memory pool due to OracleResponse attribute")
) )
@ -44,25 +44,25 @@ type item struct {
data interface{} data interface{}
} }
// items is a slice of item. // items is a slice of an item.
type items []item type items []item
// utilityBalanceAndFees stores sender's balance and overall fees of // utilityBalanceAndFees stores the sender's balance and overall fees of
// sender's transactions which are currently in mempool. // the sender's transactions which are currently in the mempool.
type utilityBalanceAndFees struct { type utilityBalanceAndFees struct {
balance uint256.Int balance uint256.Int
feeSum uint256.Int feeSum uint256.Int
} }
// Pool stores the unconfirms transactions. // Pool stores the unconfirmed transactions.
type Pool struct { type Pool struct {
lock sync.RWMutex lock sync.RWMutex
verifiedMap map[util.Uint256]*transaction.Transaction verifiedMap map[util.Uint256]*transaction.Transaction
verifiedTxes items verifiedTxes items
fees map[util.Uint160]utilityBalanceAndFees fees map[util.Uint160]utilityBalanceAndFees
// conflicts is a map of hashes of transactions which are conflicting with the mempooled ones. // conflicts is a map of the hashes of the transactions which are conflicting with the mempooled ones.
conflicts map[util.Uint256][]util.Uint256 conflicts map[util.Uint256][]util.Uint256
// oracleResp contains ids of oracle responses for tx in pool. // oracleResp contains the ids of oracle responses for the tx in the pool.
oracleResp map[uint64]util.Uint256 oracleResp map[uint64]util.Uint256
capacity int capacity int
@ -106,7 +106,7 @@ func (p item) CompareTo(otherP item) int {
return int(p.txn.NetworkFee - otherP.txn.NetworkFee) return int(p.txn.NetworkFee - otherP.txn.NetworkFee)
} }
// Count returns the total number of uncofirm transactions. // Count returns the total number of unconfirmed transactions.
func (mp *Pool) Count() int { func (mp *Pool) Count() int {
mp.lock.RLock() mp.lock.RLock()
defer mp.lock.RUnlock() defer mp.lock.RUnlock()
@ -118,7 +118,7 @@ func (mp *Pool) count() int {
return len(mp.verifiedTxes) return len(mp.verifiedTxes)
} }
// ContainsKey checks if a transactions hash is in the Pool. // ContainsKey checks if the given transaction hash is in the Pool.
func (mp *Pool) ContainsKey(hash util.Uint256) bool { func (mp *Pool) ContainsKey(hash util.Uint256) bool {
mp.lock.RLock() mp.lock.RLock()
defer mp.lock.RUnlock() defer mp.lock.RUnlock()
@ -135,8 +135,8 @@ func (mp *Pool) containsKey(hash util.Uint256) bool {
return false return false
} }
// HasConflicts returns true if transaction is already in pool or in the Conflicts attributes // HasConflicts returns true if the transaction is already in the pool or in the Conflicts attributes
// of pooled transactions or has Conflicts attributes for pooled transactions. // of the pooled transactions or has Conflicts attributes against the pooled transactions.
func (mp *Pool) HasConflicts(t *transaction.Transaction, fee Feer) bool { func (mp *Pool) HasConflicts(t *transaction.Transaction, fee Feer) bool {
mp.lock.RLock() mp.lock.RLock()
defer mp.lock.RUnlock() defer mp.lock.RUnlock()
@ -158,8 +158,8 @@ func (mp *Pool) HasConflicts(t *transaction.Transaction, fee Feer) bool {
return false return false
} }
// tryAddSendersFee tries to add system fee and network fee to the total sender`s fee in mempool // tryAddSendersFee tries to add system fee and network fee to the total sender's fee in the mempool
// and returns false if both balance check is required and sender has not enough GAS to pay. // and returns false if both balance check is required and the sender does not have enough GAS to pay.
func (mp *Pool) tryAddSendersFee(tx *transaction.Transaction, feer Feer, needCheck bool) bool { func (mp *Pool) tryAddSendersFee(tx *transaction.Transaction, feer Feer, needCheck bool) bool {
payer := tx.Signers[mp.payerIndex].Account payer := tx.Signers[mp.payerIndex].Account
senderFee, ok := mp.fees[payer] senderFee, ok := mp.fees[payer]
@ -180,8 +180,8 @@ func (mp *Pool) tryAddSendersFee(tx *transaction.Transaction, feer Feer, needChe
return true return true
} }
// checkBalance returns new cumulative fee balance for account or an error in // checkBalance returns a new cumulative fee balance for the account or an error in
// case sender doesn't have enough GAS to pay for the transaction. // case the sender doesn't have enough GAS to pay for the transaction.
func checkBalance(tx *transaction.Transaction, balance utilityBalanceAndFees) (uint256.Int, error) { func checkBalance(tx *transaction.Transaction, balance utilityBalanceAndFees) (uint256.Int, error) {
var txFee uint256.Int var txFee uint256.Int
@ -196,7 +196,7 @@ func checkBalance(tx *transaction.Transaction, balance utilityBalanceAndFees) (u
return txFee, nil return txFee, nil
} }
// Add tries to add given transaction to the Pool. // Add tries to add the given transaction to the Pool.
func (mp *Pool) Add(t *transaction.Transaction, fee Feer, data ...interface{}) error { func (mp *Pool) Add(t *transaction.Transaction, fee Feer, data ...interface{}) error {
var pItem = item{ var pItem = item{
txn: t, txn: t,
@ -234,9 +234,9 @@ func (mp *Pool) Add(t *transaction.Transaction, fee Feer, data ...interface{}) e
mp.removeInternal(conflictingTx.Hash(), fee) mp.removeInternal(conflictingTx.Hash(), fee)
} }
} }
// Insert into sorted array (from max to min, that could also be done // Insert into a sorted array (from max to min; that could also be done
// using sort.Sort(sort.Reverse()), but it incurs more overhead. Notice // using sort.Sort(sort.Reverse()), but it incurs more overhead). Notice
// also that we're searching for position that is strictly more // also that we're searching for a position that is strictly more
// prioritized than our new item because we do expect a lot of // prioritized than our new item because we do expect a lot of
// transactions with the same priority and appending to the end of the // transactions with the same priority and appending to the end of the
// slice is always more efficient. // slice is always more efficient.
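The descending-order insertion described in this comment can be sketched with `sort.Search`. This is a simplified model, not the mempool's real code: the `item` type here carries only a fee, and `insertSorted` is a hypothetical helper name.

```go
package main

import (
	"fmt"
	"sort"
)

// item models a mempool entry; a higher fee means higher priority.
type item struct{ fee int64 }

// insertSorted keeps the slice ordered from max to min fee. The predicate
// stops at the first element that is NOT strictly more prioritized than the
// new one, so equal-priority items land after their peers — cheap for the
// common case of many same-fee transactions arriving one after another.
func insertSorted(items []item, it item) []item {
	n := sort.Search(len(items), func(i int) bool {
		return items[i].fee <= it.fee // first non-greater element
	})
	items = append(items, item{}) // grow by one
	copy(items[n+1:], items[n:])  // shift the tail right
	items[n] = it
	return items
}

func main() {
	var pool []item
	for _, f := range []int64{5, 9, 1, 5} {
		pool = insertSorted(pool, item{fee: f})
	}
	fmt.Println(pool) // [{9} {5} {5} {1}]
}
```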
@ -299,7 +299,7 @@ func (mp *Pool) Add(t *transaction.Transaction, fee Feer, data ...interface{}) e
return nil return nil
} }
// Remove removes an item from the mempool, if it exists there (and does // Remove removes an item from the mempool if it exists there (and does
// nothing if it doesn't). // nothing if it doesn't).
func (mp *Pool) Remove(hash util.Uint256, feer Feer) { func (mp *Pool) Remove(hash util.Uint256, feer Feer) {
mp.lock.Lock() mp.lock.Lock()
@ -346,8 +346,8 @@ func (mp *Pool) removeInternal(hash util.Uint256, feer Feer) {
} }
// RemoveStale filters verified transactions through the given function keeping // RemoveStale filters verified transactions through the given function keeping
// only the transactions for which it returns a true result. It's used to quickly // only the transactions for which it returns a true result. It's used to quickly
// drop part of the mempool that is now invalid after the block acceptance. // drop a part of the mempool that is now invalid after the block acceptance.
func (mp *Pool) RemoveStale(isOK func(*transaction.Transaction) bool, feer Feer) { func (mp *Pool) RemoveStale(isOK func(*transaction.Transaction) bool, feer Feer) {
mp.lock.Lock() mp.lock.Lock()
policyChanged := mp.loadPolicy(feer) policyChanged := mp.loadPolicy(feer)
@ -372,7 +372,7 @@ func (mp *Pool) RemoveStale(isOK func(*transaction.Transaction) bool, feer Feer)
} }
} }
if mp.resendThreshold != 0 { if mp.resendThreshold != 0 {
// item is resend at resendThreshold, 2*resendThreshold, 4*resendThreshold ... // item is resent at resendThreshold, 2*resendThreshold, 4*resendThreshold ...
// so quotient must be a power of two. // so quotient must be a power of two.
diff := (height - itm.blockStamp) diff := (height - itm.blockStamp)
if diff%mp.resendThreshold == 0 && bits.OnesCount32(diff/mp.resendThreshold) == 1 { if diff%mp.resendThreshold == 0 && bits.OnesCount32(diff/mp.resendThreshold) == 1 {
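The power-of-two retry schedule used in this check can be sketched in isolation (the function name `shouldResend` is illustrative, not the pool's API):

```go
package main

import (
	"fmt"
	"math/bits"
)

// shouldResend reports whether a transaction stamped at blockStamp is due for
// retransmission at the given height, with retries at threshold, 2*threshold,
// 4*threshold, ... after the stamp: diff must be a multiple of threshold and
// the quotient must have exactly one set bit (i.e. be a power of two).
func shouldResend(height, blockStamp, threshold uint32) bool {
	diff := height - blockStamp
	return diff%threshold == 0 && bits.OnesCount32(diff/threshold) == 1
}

func main() {
	for _, h := range []uint32{4, 8, 12, 16, 20, 32} {
		fmt.Println(h, shouldResend(h, 0, 4))
	}
	// resends at 4, 8, 16 and 32, but not at 12 or 20
}
```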
@ -400,7 +400,7 @@ func (mp *Pool) RemoveStale(isOK func(*transaction.Transaction) bool, feer Feer)
mp.lock.Unlock() mp.lock.Unlock()
} }
// loadPolicy updates feePerByte field and returns whether policy has been // loadPolicy updates feePerByte field and returns whether the policy has been
// changed. // changed.
func (mp *Pool) loadPolicy(feer Feer) bool { func (mp *Pool) loadPolicy(feer Feer) bool {
newFeePerByte := feer.FeePerByte() newFeePerByte := feer.FeePerByte()
@ -411,7 +411,7 @@ func (mp *Pool) loadPolicy(feer Feer) bool {
return false return false
} }
// checkPolicy checks whether transaction fits policy. // checkPolicy checks whether the transaction fits the policy.
func (mp *Pool) checkPolicy(tx *transaction.Transaction, policyChanged bool) bool { func (mp *Pool) checkPolicy(tx *transaction.Transaction, policyChanged bool) bool {
if !policyChanged || tx.FeePerByte() >= mp.feePerByte { if !policyChanged || tx.FeePerByte() >= mp.feePerByte {
return true return true
@ -439,7 +439,7 @@ func New(capacity int, payerIndex int, enableSubscriptions bool) *Pool {
return mp return mp
} }
// SetResendThreshold sets threshold after which transaction will be considered stale // SetResendThreshold sets a threshold after which the transaction will be considered stale
// and returned for retransmission by `GetStaleTransactions`. // and returned for retransmission by `GetStaleTransactions`.
func (mp *Pool) SetResendThreshold(h uint32, f func(*transaction.Transaction, interface{})) { func (mp *Pool) SetResendThreshold(h uint32, f func(*transaction.Transaction, interface{})) {
mp.lock.Lock() mp.lock.Lock()
@ -555,10 +555,10 @@ func (mp *Pool) checkTxConflicts(tx *transaction.Transaction, fee Feer) ([]*tran
return conflictsToBeRemoved, err return conflictsToBeRemoved, err
} }
// Verify checks if a Sender of tx is able to pay for it (and all the other // Verify checks if the Sender of the tx is able to pay for it (and all the other
// transactions in the pool). If yes, the transaction tx is a valid // transactions in the pool). If yes, the transaction tx is a valid
// transaction and the function returns true. If no, the transaction tx is // transaction and the function returns true. If no, the transaction tx is
// considered to be invalid the function returns false. // considered to be invalid and the function returns false.
func (mp *Pool) Verify(tx *transaction.Transaction, feer Feer) bool { func (mp *Pool) Verify(tx *transaction.Transaction, feer Feer) bool {
mp.lock.RLock() mp.lock.RLock()
defer mp.lock.RUnlock() defer mp.lock.RUnlock()
@ -566,7 +566,7 @@ func (mp *Pool) Verify(tx *transaction.Transaction, feer Feer) bool {
return err == nil return err == nil
} }
// removeConflictsOf removes hash of the given transaction from the conflicts list // removeConflictsOf removes the hash of the given transaction from the conflicts list
// for each Conflicts attribute. // for each Conflicts attribute.
func (mp *Pool) removeConflictsOf(tx *transaction.Transaction) { func (mp *Pool) removeConflictsOf(tx *transaction.Transaction) {
// remove all conflicting hashes from mp.conflicts list // remove all conflicting hashes from mp.conflicts list

View file

@ -25,16 +25,16 @@ func (mp *Pool) StopSubscriptions() {
} }
} }
// SubscribeForTransactions adds given channel to new mempool event broadcasting, so when // SubscribeForTransactions adds the given channel to the new mempool event broadcasting, so when
// there is a new transactions added to mempool or an existing transaction removed from // there is a new transaction added to the mempool or an existing transaction removed from
// mempool you'll receive it via this channel. // the mempool, you'll receive it via this channel.
func (mp *Pool) SubscribeForTransactions(ch chan<- mempoolevent.Event) { func (mp *Pool) SubscribeForTransactions(ch chan<- mempoolevent.Event) {
if mp.subscriptionsOn.Load() { if mp.subscriptionsOn.Load() {
mp.subCh <- ch mp.subCh <- ch
} }
} }
// UnsubscribeFromTransactions unsubscribes given channel from new mempool notifications, // UnsubscribeFromTransactions unsubscribes the given channel from new mempool notifications,
// you can close it afterwards. Passing non-subscribed channel is a no-op. // you can close it afterwards. Passing a non-subscribed channel is a no-op.
func (mp *Pool) UnsubscribeFromTransactions(ch chan<- mempoolevent.Event) { func (mp *Pool) UnsubscribeFromTransactions(ch chan<- mempoolevent.Event) {
if mp.subscriptionsOn.Load() { if mp.subscriptionsOn.Load() {

View file

@ -17,7 +17,7 @@ const (
TransactionRemoved Type = 0x02 TransactionRemoved Type = 0x02
) )
// Event represents one of mempool events: transaction was added or removed from mempool. // Event represents one of mempool events: transaction was added or removed from the mempool.
type Event struct { type Event struct {
Type Type Type Type
Tx *transaction.Transaction Tx *transaction.Transaction
@ -36,7 +36,7 @@ func (e Type) String() string {
} }
} }
// GetEventTypeFromString converts input string into an Type if it's possible. // GetEventTypeFromString converts the input string into the Type if it's possible.
func GetEventTypeFromString(s string) (Type, error) { func GetEventTypeFromString(s string) (Type, error) {
switch s { switch s {
case "added": case "added":
@ -48,12 +48,12 @@ func GetEventTypeFromString(s string) (Type, error) {
} }
} }
// MarshalJSON implements json.Marshaler interface. // MarshalJSON implements the json.Marshaler interface.
func (e Type) MarshalJSON() ([]byte, error) { func (e Type) MarshalJSON() ([]byte, error) {
return json.Marshal(e.String()) return json.Marshal(e.String())
} }
// UnmarshalJSON implements json.Unmarshaler interface. // UnmarshalJSON implements the json.Unmarshaler interface.
func (e *Type) UnmarshalJSON(b []byte) error { func (e *Type) UnmarshalJSON(b []byte) error {
var s string var s string

View file

@ -36,7 +36,7 @@ func (b *BaseNode) setCache(bs []byte, h util.Uint256) {
b.hashValid = true b.hashValid = true
} }
// getHash returns a hash of this BaseNode. // getHash returns the hash of this BaseNode.
func (b *BaseNode) getHash(n Node) util.Uint256 { func (b *BaseNode) getHash(n Node) util.Uint256 {
if !b.hashValid { if !b.hashValid {
b.updateHash(n) b.updateHash(n)
@ -52,7 +52,7 @@ func (b *BaseNode) getBytes(n Node) []byte {
return b.bytes return b.bytes
} }
// updateHash updates hash field for this BaseNode. // updateHash updates the hash field for this BaseNode.
func (b *BaseNode) updateHash(n Node) { func (b *BaseNode) updateHash(n Node) {
if n.Type() == HashT || n.Type() == EmptyT { if n.Type() == HashT || n.Type() == EmptyT {
panic("can't update hash for empty or hash node") panic("can't update hash for empty or hash node")
@ -61,7 +61,7 @@ func (b *BaseNode) updateHash(n Node) {
b.hashValid = true b.hashValid = true
} }
// updateCache updates hash and bytes fields for this BaseNode. // updateBytes updates the hash and bytes fields for this BaseNode.
func (b *BaseNode) updateBytes(n Node) { func (b *BaseNode) updateBytes(n Node) {
bw := io.NewBufBinWriter() bw := io.NewBufBinWriter()
bw.Grow(1 + n.Size()) bw.Grow(1 + n.Size())
@ -85,13 +85,13 @@ func encodeBinaryAsChild(n Node, w *io.BinWriter) {
w.WriteBytes(n.Hash().BytesBE()) w.WriteBytes(n.Hash().BytesBE())
} }
// encodeNodeWithType encodes node together with it's type. // encodeNodeWithType encodes the node together with its type.
func encodeNodeWithType(n Node, w *io.BinWriter) { func encodeNodeWithType(n Node, w *io.BinWriter) {
w.WriteB(byte(n.Type())) w.WriteB(byte(n.Type()))
n.EncodeBinary(w) n.EncodeBinary(w)
} }
// DecodeNodeWithType decodes node together with it's type. // DecodeNodeWithType decodes the node together with its type.
func DecodeNodeWithType(r *io.BinReader) Node { func DecodeNodeWithType(r *io.BinReader) Node {
if r.Err != nil { if r.Err != nil {
return nil return nil

View file

@ -5,7 +5,7 @@ import (
"sort" "sort"
) )
// Batch is batch of storage changes. // Batch is a batch of storage changes.
// It stores key-value pairs in a sorted state. // It stores key-value pairs in a sorted state.
type Batch struct { type Batch struct {
kv []keyValue kv []keyValue
@ -16,7 +16,7 @@ type keyValue struct {
value []byte value []byte
} }
// MapToMPTBatch makes a Batch from unordered set of storage changes. // MapToMPTBatch makes a Batch from an unordered set of storage changes.
func MapToMPTBatch(m map[string][]byte) Batch { func MapToMPTBatch(m map[string][]byte) Batch {
var b Batch var b Batch
@ -31,13 +31,13 @@ func MapToMPTBatch(m map[string][]byte) Batch {
return b return b
} }
// PutBatch puts batch to trie. // PutBatch puts a batch to a trie.
// It is not atomic (and probably cannot be without substantial slow-down) // It is not atomic (and probably cannot be without substantial slow-down)
// and returns number of elements processed. // and returns the number of elements processed.
// If an error is returned, the trie may be in the inconsistent state in case of storage failures. // If an error is returned, the trie may be in the inconsistent state in case of storage failures.
// This is due to the fact that we can remove multiple children from the branch node simultaneously // This is due to the fact that we can remove multiple children from the branch node simultaneously
// and won't strip the resulting branch node. // and won't strip the resulting branch node.
// However it is used mostly after the block processing to update MPT and error is not expected. // However, it is used mostly after block processing to update MPT, and an error is not expected.
func (t *Trie) PutBatch(b Batch) (int, error) { func (t *Trie) PutBatch(b Batch) (int, error) {
if len(b.kv) == 0 { if len(b.kv) == 0 {
return 0, nil return 0, nil
@ -150,13 +150,13 @@ func (t *Trie) addToBranch(b *BranchNode, kv []keyValue, inTrie bool) (Node, int
t.removeRef(b.Hash(), b.bytes) t.removeRef(b.Hash(), b.bytes)
} }
// Error during iterate means some storage failure (i.e. some hash node cannot be // An error during iteration means some storage failure (i.e. some hash node cannot be
// retrieved from storage). This can leave trie in inconsistent state, because // retrieved from storage). This can leave the trie in an inconsistent state because
// it can be impossible to strip branch node after it has been changed. // it can be impossible to strip the branch node after it has been changed.
// Consider a branch with 10 children, first 9 of which are deleted and the remaining one // Consider a branch with 10 children, first 9 of which are deleted and the remaining one
// is a leaf node replaced by a hash node missing from storage. // is a leaf node replaced by a hash node missing from the storage.
// This can't be fixed easily because we need to _revert_ changes in reference counts // This can't be fixed easily because we need to _revert_ changes in the reference counts
// for children which were updated successfully. But storage access errors means we are // for children which have been updated successfully. But storage access errors mean we are
// in a bad state anyway. // in a bad state anyway.
n, err := t.iterateBatch(kv, func(c byte, kv []keyValue) (int, error) { n, err := t.iterateBatch(kv, func(c byte, kv []keyValue) (int, error) {
child, n, err := t.putBatchIntoNode(b.Children[c], kv) child, n, err := t.putBatchIntoNode(b.Children[c], kv)
@ -167,8 +167,8 @@ func (t *Trie) addToBranch(b *BranchNode, kv []keyValue, inTrie bool) (Node, int
b.invalidateCache() b.invalidateCache()
} }
// Even if some of the children can't be put, we need to try to strip branch // Even if some of the children can't be put, we need to try to strip the branch
// and possibly update refcounts. // and possibly update the refcounts.
nd, bErr := t.stripBranch(b) nd, bErr := t.stripBranch(b)
if err == nil { if err == nil {
err = bErr err = bErr
@ -176,8 +176,8 @@ func (t *Trie) addToBranch(b *BranchNode, kv []keyValue, inTrie bool) (Node, int
return nd, n, err return nd, n, err
} }
// stripsBranch strips branch node after incomplete batch put. // stripBranch strips the branch node after an incomplete batch put.
// It assumes there is no reference to b in trie. // It assumes there is no reference to b in the trie.
func (t *Trie) stripBranch(b *BranchNode) (Node, error) { func (t *Trie) stripBranch(b *BranchNode) (Node, error) {
var n int var n int
var lastIndex byte var lastIndex byte
@ -232,12 +232,12 @@ func (t *Trie) putBatchIntoHash(curr *HashNode, kv []keyValue) (Node, int, error
return t.putBatchIntoNode(result, kv) return t.putBatchIntoNode(result, kv)
} }
// Creates new subtrie from provided key-value pairs. // Creates a new subtrie from the provided key-value pairs.
// Items in kv must have no common prefix. // Items in kv must have no common prefix.
// If there are any deletions in kv, return error. // If there are any deletions in kv, an error is returned.
// kv is not empty. // kv is not empty.
// kv is sorted by key. // kv is sorted by key.
// value is current value stored by prefix. // value is the current value stored by prefix.
func (t *Trie) newSubTrieMany(prefix []byte, kv []keyValue, value []byte) (Node, int, error) { func (t *Trie) newSubTrieMany(prefix []byte, kv []keyValue, value []byte) (Node, int, error) {
if len(kv[0].key) == 0 { if len(kv[0].key) == 0 {
if kv[0].value == nil { if kv[0].value == nil {

View file

@ -19,13 +19,13 @@ var (
errStop = errors.New("stop condition is met") errStop = errors.New("stop condition is met")
) )
// Billet is a part of MPT trie with missing hash nodes that need to be restored. // Billet is a part of an MPT trie with missing hash nodes that need to be restored.
// Billet is based on the following assumptions: // Billet is based on the following assumptions:
// 1. Refcount can only be incremented (we don't change MPT structure during restore, // 1. Refcount can only be incremented (we don't change the MPT structure during restore,
// thus don't need to decrease refcount). // thus don't need to decrease refcount).
// 2. Each time the part of Billet is completely restored, it is collapsed into // 2. Each time a part of a Billet is completely restored, it is collapsed into
// HashNode. // HashNode.
// 3. Pair (node, path) must be restored only once. It's a duty of MPT pool to manage // 3. Any pair (node, path) must be restored only once. It's a duty of an MPT pool to manage
// MPT paths in order to provide this assumption. // MPT paths in order to provide this assumption.
type Billet struct { type Billet struct {
TempStoragePrefix storage.KeyPrefix TempStoragePrefix storage.KeyPrefix
@ -35,9 +35,9 @@ type Billet struct {
mode TrieMode mode TrieMode
} }
// NewBillet returns new billet for MPT trie restoring. It accepts a MemCachedStore // NewBillet returns a new billet for MPT trie restoring. It accepts a MemCachedStore
// to decouple storage errors from logic errors so that all storage errors are // to decouple storage errors from logic errors so that all storage errors are
// processed during `store.Persist()` at the caller. This also has the benefit, // processed during `store.Persist()` at the caller. Another benefit is
// that every `Put` can be considered an atomic operation. // that every `Put` can be considered an atomic operation.
func NewBillet(rootHash util.Uint256, mode TrieMode, prefix storage.KeyPrefix, store *storage.MemCachedStore) *Billet { func NewBillet(rootHash util.Uint256, mode TrieMode, prefix storage.KeyPrefix, store *storage.MemCachedStore) *Billet {
return &Billet{ return &Billet{
@ -49,8 +49,8 @@ func NewBillet(rootHash util.Uint256, mode TrieMode, prefix storage.KeyPrefix, s
} }
// RestoreHashNode replaces HashNode located at the provided path by the specified Node // RestoreHashNode replaces HashNode located at the provided path by the specified Node
// and stores it. It also maintains MPT as small as possible by collapsing those parts // and stores it. It also maintains the MPT as small as possible by collapsing those parts
// of MPT that have been completely restored. // of the MPT that have been completely restored.
func (b *Billet) RestoreHashNode(path []byte, node Node) error { func (b *Billet) RestoreHashNode(path []byte, node Node) error {
if _, ok := node.(*HashNode); ok { if _, ok := node.(*HashNode); ok {
return fmt.Errorf("%w: unable to restore node into HashNode", ErrRestoreFailed) return fmt.Errorf("%w: unable to restore node into HashNode", ErrRestoreFailed)
@ -75,7 +75,7 @@ func (b *Billet) RestoreHashNode(path []byte, node Node) error {
return nil return nil
} }
// putIntoNode puts val with provided path inside curr and returns updated node. // putIntoNode puts val with the provided path inside curr and returns an updated node.
// Reference counters are updated for both curr and returned value. // Reference counters are updated for both curr and returned value.
func (b *Billet) putIntoNode(curr Node, path []byte, val Node) (Node, error) { func (b *Billet) putIntoNode(curr Node, path []byte, val Node) (Node, error) {
switch n := curr.(type) { switch n := curr.(type) {
@ -102,7 +102,7 @@ func (b *Billet) putIntoLeaf(curr *LeafNode, path []byte, val Node) (Node, error
return nil, fmt.Errorf("%w: bad Leaf node hash: expected %s, got %s", ErrRestoreFailed, curr.Hash().StringBE(), val.Hash().StringBE()) return nil, fmt.Errorf("%w: bad Leaf node hash: expected %s, got %s", ErrRestoreFailed, curr.Hash().StringBE(), val.Hash().StringBE())
} }
// Once Leaf node is restored, it will be collapsed into HashNode forever, so // Once Leaf node is restored, it will be collapsed into HashNode forever, so
// there shouldn't be such situation when we try to restore Leaf node. // there shouldn't be such a situation where we try to restore a Leaf node.
panic("bug: can't restore LeafNode") panic("bug: can't restore LeafNode")
} }
@ -143,15 +143,15 @@ func (b *Billet) putIntoExtension(curr *ExtensionNode, path []byte, val Node) (N
} }
func (b *Billet) putIntoHash(curr *HashNode, path []byte, val Node) (Node, error) { func (b *Billet) putIntoHash(curr *HashNode, path []byte, val Node) (Node, error) {
// Once a part of MPT Billet is completely restored, it will be collapsed forever, so // Once a part of the MPT Billet is completely restored, it will be collapsed forever, so
// it's an MPT pool duty to avoid duplicating restore requests. // it's an MPT pool duty to avoid duplicating restore requests.
if len(path) != 0 { if len(path) != 0 {
return nil, fmt.Errorf("%w: node has already been collapsed", ErrRestoreFailed) return nil, fmt.Errorf("%w: node has already been collapsed", ErrRestoreFailed)
} }
// `curr` hash node can be either of // `curr` hash node can be either of
// 1) saved in storage (i.g. if we've already restored node with the same hash from the // 1) saved in the storage (e.g. if we've already restored a node with the same hash from the
// other part of MPT), so just add it to local in-memory MPT. // other part of the MPT), so just add it to the local in-memory MPT.
// 2) missing from the storage. It's OK because we're syncing MPT state, and the purpose // 2) missing from the storage. It's OK because we're syncing MPT state, and the purpose
// is to store missing hash nodes. // is to store missing hash nodes.
// both cases are OK, but we still need to validate `val` against `curr`. // both cases are OK, but we still need to validate `val` against `curr`.

View file

@ -9,13 +9,13 @@ import (
) )
const ( const (
// childrenCount represents a number of children of a branch node. // childrenCount represents the number of children of a branch node.
childrenCount = 17 childrenCount = 17
// lastChild is the index of the last child. // lastChild is the index of the last child.
lastChild = childrenCount - 1 lastChild = childrenCount - 1
) )
// BranchNode represents MPT's branch node. // BranchNode represents an MPT's branch node.
type BranchNode struct { type BranchNode struct {
BaseNode BaseNode
Children [childrenCount]Node Children [childrenCount]Node
@ -23,7 +23,7 @@ type BranchNode struct {
var _ Node = (*BranchNode)(nil) var _ Node = (*BranchNode)(nil)
// NewBranchNode returns new branch node. // NewBranchNode returns a new branch node.
func NewBranchNode() *BranchNode { func NewBranchNode() *BranchNode {
b := new(BranchNode) b := new(BranchNode)
for i := 0; i < childrenCount; i++ { for i := 0; i < childrenCount; i++ {
@ -32,20 +32,20 @@ func NewBranchNode() *BranchNode {
return b return b
} }
// Type implements Node interface. // Type implements the Node interface.
func (b *BranchNode) Type() NodeType { return BranchT } func (b *BranchNode) Type() NodeType { return BranchT }
// Hash implements BaseNode interface. // Hash implements the BaseNode interface.
func (b *BranchNode) Hash() util.Uint256 { func (b *BranchNode) Hash() util.Uint256 {
return b.getHash(b) return b.getHash(b)
} }
// Bytes implements BaseNode interface. // Bytes implements the BaseNode interface.
func (b *BranchNode) Bytes() []byte { func (b *BranchNode) Bytes() []byte {
return b.getBytes(b) return b.getBytes(b)
} }
// Size implements Node interface. // Size implements the Node interface.
func (b *BranchNode) Size() int { func (b *BranchNode) Size() int {
sz := childrenCount sz := childrenCount
for i := range b.Children { for i := range b.Children {
@ -72,12 +72,12 @@ func (b *BranchNode) DecodeBinary(r *io.BinReader) {
} }
} }
// MarshalJSON implements json.Marshaler. // MarshalJSON implements the json.Marshaler interface.
func (b *BranchNode) MarshalJSON() ([]byte, error) { func (b *BranchNode) MarshalJSON() ([]byte, error) {
return json.Marshal(b.Children) return json.Marshal(b.Children)
} }
// UnmarshalJSON implements json.Unmarshaler. // UnmarshalJSON implements the json.Unmarshaler interface.
func (b *BranchNode) UnmarshalJSON(data []byte) error { func (b *BranchNode) UnmarshalJSON(data []byte) error {
var obj NodeObject var obj NodeObject
if err := obj.UnmarshalJSON(data); err != nil { if err := obj.UnmarshalJSON(data); err != nil {

View file

@ -37,15 +37,15 @@ func prepareMPTCompat() *Trie {
// TestCompatibility contains tests present in C# implementation. // TestCompatibility contains tests present in C# implementation.
// https://github.com/neo-project/neo-modules/blob/master/tests/Neo.Plugins.StateService.Tests/MPT/UT_MPTTrie.cs // https://github.com/neo-project/neo-modules/blob/master/tests/Neo.Plugins.StateService.Tests/MPT/UT_MPTTrie.cs
// There are some differences, though: // There are some differences, though:
// 1. In our implementation delete is silent, i.e. we do not return an error is the key is missing or empty. // 1. In our implementation, delete is silent, i.e. we do not return an error if the key is missing or empty.
// However, we do return error when contents of hash node are missing from the store // However, we do return an error when the contents of the hash node are missing from the store
// (corresponds to exception in C# implementation). However, if the key is too big, an error is returned // (corresponds to exception in C# implementation). However, if the key is too big, an error is returned
// (corresponds to exception in C# implementation). // (corresponds to exception in C# implementation).
// 2. In our implementation put returns error if something goes wrong, while C# implementation throws // 2. In our implementation, put returns an error if something goes wrong, while C# implementation throws
// an exception and returns nothing. // an exception and returns nothing.
// 3. In our implementation get does not immediately return error in case of an empty key. An error is returned // 3. In our implementation, get does not immediately return any error in case of an empty key. An error is returned
// only if value is missing from the storage. C# implementation checks that key is not empty and throws an error // only if the value is missing from the storage. C# implementation checks that the key is not empty and throws an error
// otherwice. However, if the key is too big, an error is returned (corresponds to exception in C# implementation). // otherwise. However, if the key is too big, an error is returned (corresponds to exception in C# implementation).
func TestCompatibility(t *testing.T) { func TestCompatibility(t *testing.T) {
mainTrie := prepareMPTCompat() mainTrie := prepareMPTCompat()

View file

@ -1,14 +1,14 @@
/* /*
Package mpt implements MPT (Merkle-Patricia Tree). Package mpt implements MPT (Merkle-Patricia Trie).
MPT stores key-value pairs and is a trie over 16-symbol alphabet. https://en.wikipedia.org/wiki/Trie An MPT stores key-value pairs and is a trie over a 16-symbol alphabet. https://en.wikipedia.org/wiki/Trie
Trie is a tree where values are stored in leafs and keys are paths from root to the leaf node. A trie is a tree where values are stored in leaves and keys are paths from the root to the leaf node.
MPT consists of 4 type of nodes: An MPT consists of 4 types of nodes:
- Leaf node contains only value. - Leaf node only contains a value.
- Extension node contains both key and value. - Extension node contains both a key and a value.
- Branch node contains 2 or more children. - Branch node contains 2 or more children.
- Hash node is a compressed node and contains only actual node's hash. - Hash node is a compressed node and only contains the actual node's hash.
The actual node must be retrieved from storage or over the network. The actual node must be retrieved from the storage or over the network.
As an example here is a trie containing 3 pairs: As an example here is a trie containing 3 pairs:
- 0x1201 -> val1 - 0x1201 -> val1
@ -31,15 +31,15 @@ BranchNode [0, 1, 2, ...], Last -> Leaf(val4)
There are 3 invariants that this implementation has: There are 3 invariants that this implementation has:
- Branch node cannot have <= 1 children - Branch node cannot have <= 1 children
- Extension node cannot have zero-length key - Extension node cannot have a zero-length key
- Extension node cannot have another Extension node in it's next field - Extension node cannot have another Extension node in its next field
Thank to these restrictions, there is a single root hash for every set of key-value pairs Thanks to these restrictions, there is a single root hash for every set of key-value pairs
irregardless of the order they were added/removed with. regardless of the order they were added/removed in.
The actual trie structure can vary because of node -> HashNode compressing. The actual trie structure can vary because of node -> HashNode compressing.
There is also one optimization which cost us almost nothing in terms of complexity but is very beneficial: There is also one optimization which costs us almost nothing in terms of complexity but is quite beneficial:
When we perform get/put/delete on a speficic path, every Hash node which was retreived from storage is When we perform get/put/delete on a specific path, every Hash node which was retrieved from the storage is
replaced by its uncompressed form, so that subsequent hits of this not don't use storage. replaced by its uncompressed form, so that subsequent hits of this node don't need to access the storage.
*/ */
package mpt package mpt
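The four node kinds described in the doc comment can be sketched as a small Go sum type. This is an illustrative model only: the names and field sets below are assumptions for the sketch, not the package's actual types.

```go
package main

import "fmt"

// Node is a stand-in for the package's node interface; the concrete types
// below are illustrative and do not mirror the real field sets.
type Node interface{ kind() string }

type Leaf struct{ Value []byte } // stores only a value
type Extension struct {          // stores a nibble key plus one child
	Key  []byte
	Next Node
}
type Branch struct{ Children [17]Node } // 16 nibble slots plus a terminal slot
type Hash struct{ Hash [32]byte }       // compressed node: only the actual node's hash

func (Leaf) kind() string      { return "leaf" }
func (Extension) kind() string { return "extension" }
func (Branch) kind() string    { return "branch" }
func (Hash) kind() string      { return "hash" }

func main() {
	// In the 3-pair example above, 0x1201 and 0x1202 share the nibble
	// prefix [1 2 0], so an Extension with that key points at a Branch
	// that splits on the last nibble.
	root := Extension{Key: []byte{1, 2, 0}, Next: Branch{}}
	fmt.Println(root.kind(), "->", root.Next.kind())
}
```

The 17th branch slot models the "Last" child from the example, which holds values whose path ends exactly at the branch.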

View file

@ -8,14 +8,14 @@ import (
"github.com/nspcc-dev/neo-go/pkg/util" "github.com/nspcc-dev/neo-go/pkg/util"
) )
// EmptyNode represents empty node. // EmptyNode represents an empty node.
type EmptyNode struct{} type EmptyNode struct{}
// DecodeBinary implements io.Serializable interface. // DecodeBinary implements the io.Serializable interface.
func (e EmptyNode) DecodeBinary(*io.BinReader) { func (e EmptyNode) DecodeBinary(*io.BinReader) {
} }
// EncodeBinary implements io.Serializable interface. // EncodeBinary implements the io.Serializable interface.
func (e EmptyNode) EncodeBinary(*io.BinWriter) { func (e EmptyNode) EncodeBinary(*io.BinWriter) {
} }

View file

@ -15,12 +15,12 @@ const (
// maxPathLength is the max length of the extension node key. // maxPathLength is the max length of the extension node key.
maxPathLength = (storage.MaxStorageKeyLen + 4) * 2 maxPathLength = (storage.MaxStorageKeyLen + 4) * 2
// MaxKeyLength is the max length of the key to put in trie // MaxKeyLength is the max length of the key to put in the trie
// before transforming to nibbles. // before transforming to nibbles.
MaxKeyLength = maxPathLength / 2 MaxKeyLength = maxPathLength / 2
) )
// ExtensionNode represents MPT's extension node. // ExtensionNode represents an MPT's extension node.
type ExtensionNode struct { type ExtensionNode struct {
BaseNode BaseNode
key []byte key []byte
@ -29,8 +29,8 @@ type ExtensionNode struct {
var _ Node = (*ExtensionNode)(nil) var _ Node = (*ExtensionNode)(nil)
// NewExtensionNode returns hash node with the specified key and next node. // NewExtensionNode returns a hash node with the specified key and the next node.
// Note: because it is a part of Trie, key must be mangled, i.e. must contain only bytes with high half = 0. // Note: since it is a part of a Trie, the key must be mangled, i.e. must contain only bytes with high half = 0.
func NewExtensionNode(key []byte, next Node) *ExtensionNode { func NewExtensionNode(key []byte, next Node) *ExtensionNode {
return &ExtensionNode{ return &ExtensionNode{
key: key, key: key,
@ -78,7 +78,7 @@ func (e *ExtensionNode) Size() int {
1 + util.Uint256Size // e.next is never empty 1 + util.Uint256Size // e.next is never empty
} }
// MarshalJSON implements json.Marshaler. // MarshalJSON implements the json.Marshaler interface.
func (e *ExtensionNode) MarshalJSON() ([]byte, error) { func (e *ExtensionNode) MarshalJSON() ([]byte, error) {
m := map[string]interface{}{ m := map[string]interface{}{
"key": hex.EncodeToString(e.key), "key": hex.EncodeToString(e.key),
@ -87,7 +87,7 @@ func (e *ExtensionNode) MarshalJSON() ([]byte, error) {
return json.Marshal(m) return json.Marshal(m)
} }
// UnmarshalJSON implements json.Unmarshaler. // UnmarshalJSON implements the json.Unmarshaler interface.
func (e *ExtensionNode) UnmarshalJSON(data []byte) error { func (e *ExtensionNode) UnmarshalJSON(data []byte) error {
var obj NodeObject var obj NodeObject
if err := obj.UnmarshalJSON(data); err != nil { if err := obj.UnmarshalJSON(data); err != nil {

View file

@ -7,7 +7,7 @@ import (
"github.com/nspcc-dev/neo-go/pkg/util" "github.com/nspcc-dev/neo-go/pkg/util"
) )
// HashNode represents MPT's hash node. // HashNode represents an MPT's hash node.
type HashNode struct { type HashNode struct {
BaseNode BaseNode
Collapsed bool Collapsed bool
@ -15,7 +15,7 @@ type HashNode struct {
var _ Node = (*HashNode)(nil) var _ Node = (*HashNode)(nil)
// NewHashNode returns hash node with the specified hash. // NewHashNode returns a hash node with the specified hash.
func NewHashNode(h util.Uint256) *HashNode { func NewHashNode(h util.Uint256) *HashNode {
return &HashNode{ return &HashNode{
BaseNode: BaseNode{ BaseNode: BaseNode{
@ -61,12 +61,12 @@ func (h HashNode) EncodeBinary(w *io.BinWriter) {
w.WriteBytes(h.hash[:]) w.WriteBytes(h.hash[:])
} }
// MarshalJSON implements json.Marshaler. // MarshalJSON implements the json.Marshaler interface.
func (h *HashNode) MarshalJSON() ([]byte, error) { func (h *HashNode) MarshalJSON() ([]byte, error) {
return []byte(`{"hash":"` + h.hash.StringLE() + `"}`), nil return []byte(`{"hash":"` + h.hash.StringLE() + `"}`), nil
} }
// UnmarshalJSON implements json.Unmarshaler. // UnmarshalJSON implements the json.Unmarshaler interface.
func (h *HashNode) UnmarshalJSON(data []byte) error { func (h *HashNode) UnmarshalJSON(data []byte) error {
var obj NodeObject var obj NodeObject
if err := obj.UnmarshalJSON(data); err != nil { if err := obj.UnmarshalJSON(data); err != nil {

View file

@ -2,7 +2,7 @@ package mpt
import "github.com/nspcc-dev/neo-go/pkg/util" import "github.com/nspcc-dev/neo-go/pkg/util"
// lcp returns longest common prefix of a and b. // lcp returns the longest common prefix of a and b.
// Note: it does no allocations. // Note: it does no allocations.
func lcp(a, b []byte) []byte { func lcp(a, b []byte) []byte {
if len(a) < len(b) { if len(a) < len(b) {
@ -33,7 +33,7 @@ func lcpMany(kv []keyValue) []byte {
return p return p
} }
// toNibbles mangles path by splitting every byte into 2 containing low- and high- 4-byte part. // toNibbles mangles the path by splitting every byte into 2 containing low- and high- 4-byte part.
func toNibbles(path []byte) []byte { func toNibbles(path []byte) []byte {
result := make([]byte, len(path)*2) result := make([]byte, len(path)*2)
for i := range path { for i := range path {
@ -43,7 +43,7 @@ func toNibbles(path []byte) []byte {
return result return result
} }
// strToNibbles mangles path by splitting every byte into 2 containing low- and high- 4-byte part, // strToNibbles mangles the path by splitting every byte into 2 containing low- and high- 4-byte part,
// ignoring the first byte (prefix). // ignoring the first byte (prefix).
func strToNibbles(path string) []byte { func strToNibbles(path string) []byte {
result := make([]byte, (len(path)-1)*2) result := make([]byte, (len(path)-1)*2)
@ -54,7 +54,7 @@ func strToNibbles(path string) []byte {
return result return result
} }
// fromNibbles performs operation opposite to toNibbles and does no path validity checks. // fromNibbles performs an operation opposite to toNibbles and runs no path validity checks.
func fromNibbles(path []byte) []byte { func fromNibbles(path []byte) []byte {
result := make([]byte, len(path)/2) result := make([]byte, len(path)/2)
for i := range result { for i := range result {
@ -63,7 +63,7 @@ func fromNibbles(path []byte) []byte {
return result return result
} }
// GetChildrenPaths returns a set of paths to node's children who are non-empty HashNodes // GetChildrenPaths returns a set of paths to the node's children that are non-empty HashNodes
// based on the node's path. // based on the node's path.
func GetChildrenPaths(path []byte, node Node) map[util.Uint256][][]byte { func GetChildrenPaths(path []byte, node Node) map[util.Uint256][][]byte {
res := make(map[util.Uint256][][]byte) res := make(map[util.Uint256][][]byte)

View file

@ -10,10 +10,10 @@ import (
"github.com/nspcc-dev/neo-go/pkg/util" "github.com/nspcc-dev/neo-go/pkg/util"
) )
// MaxValueLength is a max length of a leaf node value. // MaxValueLength is the max length of a leaf node value.
const MaxValueLength = 3 + storage.MaxStorageValueLen + 1 const MaxValueLength = 3 + storage.MaxStorageValueLen + 1
// LeafNode represents MPT's leaf node. // LeafNode represents an MPT's leaf node.
type LeafNode struct { type LeafNode struct {
BaseNode BaseNode
value []byte value []byte
@ -21,7 +21,7 @@ type LeafNode struct {
var _ Node = (*LeafNode)(nil) var _ Node = (*LeafNode)(nil)
// NewLeafNode returns hash node with the specified value. // NewLeafNode returns a hash node with the specified value.
func NewLeafNode(value []byte) *LeafNode { func NewLeafNode(value []byte) *LeafNode {
return &LeafNode{value: value} return &LeafNode{value: value}
} }
@ -61,12 +61,12 @@ func (n *LeafNode) Size() int {
return io.GetVarSize(len(n.value)) + len(n.value) return io.GetVarSize(len(n.value)) + len(n.value)
} }
// MarshalJSON implements json.Marshaler. // MarshalJSON implements the json.Marshaler interface.
func (n *LeafNode) MarshalJSON() ([]byte, error) { func (n *LeafNode) MarshalJSON() ([]byte, error) {
return []byte(`{"value":"` + hex.EncodeToString(n.value) + `"}`), nil return []byte(`{"value":"` + hex.EncodeToString(n.value) + `"}`), nil
} }
// UnmarshalJSON implements json.Unmarshaler. // UnmarshalJSON implements the json.Unmarshaler interface.
func (n *LeafNode) UnmarshalJSON(data []byte) error { func (n *LeafNode) UnmarshalJSON(data []byte) error {
var obj NodeObject var obj NodeObject
if err := obj.UnmarshalJSON(data); err != nil { if err := obj.UnmarshalJSON(data); err != nil {

View file

@ -9,7 +9,7 @@ import (
"github.com/nspcc-dev/neo-go/pkg/util" "github.com/nspcc-dev/neo-go/pkg/util"
) )
// NodeType represents node type.. // NodeType represents a node type.
type NodeType byte type NodeType byte
// Node types definitions. // Node types definitions.
@ -21,14 +21,14 @@ const (
EmptyT NodeType = 0x04 EmptyT NodeType = 0x04
) )
// NodeObject represents Node together with it's type. // NodeObject represents a Node together with its type.
// It is used for serialization/deserialization where type info // It is used for serialization/deserialization where type info
// is also expected. // is also expected.
type NodeObject struct { type NodeObject struct {
Node Node
} }
// Node represents common interface of all MPT nodes. // Node represents a common interface of all MPT nodes.
type Node interface { type Node interface {
io.Serializable io.Serializable
json.Marshaler json.Marshaler
@ -48,7 +48,7 @@ func (n *NodeObject) DecodeBinary(r *io.BinReader) {
n.Node = DecodeNodeWithType(r) n.Node = DecodeNodeWithType(r)
} }
// UnmarshalJSON implements json.Unmarshaler. // UnmarshalJSON implements the json.Unmarshaler interface.
func (n *NodeObject) UnmarshalJSON(data []byte) error { func (n *NodeObject) UnmarshalJSON(data []byte) error {
var m map[string]json.RawMessage var m map[string]json.RawMessage
err := json.Unmarshal(data, &m) err := json.Unmarshal(data, &m)

View file

@ -10,8 +10,8 @@ import (
"github.com/nspcc-dev/neo-go/pkg/util/slice" "github.com/nspcc-dev/neo-go/pkg/util/slice"
) )
// GetProof returns a proof that key belongs to t. // GetProof returns a proof that the key belongs to t.
// Proof consist of serialized nodes occurring on path from the root to the leaf of key. // The proof consists of serialized nodes occurring on the path from the root to the leaf of the key.
func (t *Trie) GetProof(key []byte) ([][]byte, error) { func (t *Trie) GetProof(key []byte) ([][]byte, error) {
var proof [][]byte var proof [][]byte
if len(key) > MaxKeyLength { if len(key) > MaxKeyLength {
@ -63,7 +63,7 @@ func (t *Trie) getProof(curr Node, path []byte, proofs *[][]byte) (Node, error)
} }
// VerifyProof verifies that path indeed belongs to a MPT with the specified root hash. // VerifyProof verifies that the path indeed belongs to an MPT with the specified root hash.
// It also returns value for the key. // It also returns the value for the key.
func VerifyProof(rh util.Uint256, key []byte, proofs [][]byte) ([]byte, bool) { func VerifyProof(rh util.Uint256, key []byte, proofs [][]byte) ([]byte, bool) {
path := toNibbles(key) path := toNibbles(key)
tr := NewTrie(NewHashNode(rh), ModeAll, storage.NewMemCachedStore(storage.NewMemoryStore())) tr := NewTrie(NewHashNode(rh), ModeAll, storage.NewMemCachedStore(storage.NewMemoryStore()))

View file

@ -12,10 +12,10 @@ import (
"github.com/nspcc-dev/neo-go/pkg/util/slice" "github.com/nspcc-dev/neo-go/pkg/util/slice"
) )
// TrieMode is the storage mode of trie, it affects the DB scheme. // TrieMode is the storage mode of a trie; it affects the DB scheme.
type TrieMode byte type TrieMode byte
// TrieMode is the storage mode of trie. // TrieMode is the storage mode of a trie.
const ( const (
// ModeAll is used to store everything. // ModeAll is used to store everything.
ModeAll TrieMode = 0 ModeAll TrieMode = 0
@ -43,7 +43,7 @@ type cachedNode struct {
refcount int32 refcount int32
} }
// ErrNotFound is returned when requested trie item is missing. // ErrNotFound is returned when the requested trie item is missing.
var ErrNotFound = errors.New("item not found") var ErrNotFound = errors.New("item not found")
// RC returns true when reference counting is enabled. // RC returns true when reference counting is enabled.
@ -56,9 +56,9 @@ func (m TrieMode) GC() bool {
return m&ModeGCFlag != 0 return m&ModeGCFlag != 0
} }
// NewTrie returns new MPT trie. It accepts a MemCachedStore to decouple storage errors from logic errors // NewTrie returns a new MPT trie. It accepts a MemCachedStore to decouple storage errors from logic errors,
// so that all storage errors are processed during `store.Persist()` at the caller. // so that all storage errors are processed during `store.Persist()` at the caller.
// This also has the benefit, that every `Put` can be considered an atomic operation. // Another benefit is that every `Put` can be considered an atomic operation.
func NewTrie(root Node, mode TrieMode, store *storage.MemCachedStore) *Trie { func NewTrie(root Node, mode TrieMode, store *storage.MemCachedStore) *Trie {
if root == nil { if root == nil {
root = EmptyNode{} root = EmptyNode{}
@ -73,7 +73,7 @@ func NewTrie(root Node, mode TrieMode, store *storage.MemCachedStore) *Trie {
} }
} }
// Get returns value for the provided key in t. // Get returns the value for the provided key in t.
func (t *Trie) Get(key []byte) ([]byte, error) { func (t *Trie) Get(key []byte) ([]byte, error) {
if len(key) > MaxKeyLength { if len(key) > MaxKeyLength {
return nil, errors.New("key is too big") return nil, errors.New("key is too big")
@ -87,11 +87,11 @@ func (t *Trie) Get(key []byte) ([]byte, error) {
return slice.Copy(leaf.(*LeafNode).value), nil return slice.Copy(leaf.(*LeafNode).value), nil
} }
// getWithPath returns a current node with all hash nodes along the path replaced // getWithPath returns the current node with all hash nodes along the path replaced
// to their "unhashed" counterparts. It also returns node the provided path in a // with their "unhashed" counterparts. It also returns the node which the provided path in a
// subtrie rooting in curr points to. In case of `strict` set to `false` the // subtrie rooted in curr points to. In case of `strict` set to `false`, the
// provided path can be incomplete, so it also returns full path that points to // provided path can be incomplete, so it also returns the full path that points to
// the node found at the specified incomplete path. In case of `strict` set to `true` // the node found at the specified incomplete path. In case of `strict` set to `true`,
// the resulting path matches the provided one. // the resulting path matches the provided one.
func (t *Trie) getWithPath(curr Node, path []byte, strict bool) (Node, Node, []byte, error) { func (t *Trie) getWithPath(curr Node, path []byte, strict bool) (Node, Node, []byte, error) {
switch n := curr.(type) { switch n := curr.(type) {
@ -159,8 +159,8 @@ func (t *Trie) Put(key, value []byte) error {
return nil return nil
} }
// putIntoLeaf puts val to trie if current node is a Leaf. // putIntoLeaf puts the val to the trie if the current node is a Leaf.
// It returns Node if curr needs to be replaced and error if any. // It returns a Node if curr needs to be replaced and an error, if any.
func (t *Trie) putIntoLeaf(curr *LeafNode, path []byte, val Node) (Node, error) { func (t *Trie) putIntoLeaf(curr *LeafNode, path []byte, val Node) (Node, error) {
v := val.(*LeafNode) v := val.(*LeafNode)
if len(path) == 0 { if len(path) == 0 {
@ -176,8 +176,8 @@ func (t *Trie) putIntoLeaf(curr *LeafNode, path []byte, val Node) (Node, error)
return b, nil return b, nil
} }
// putIntoBranch puts val to trie if current node is a Branch. // putIntoBranch puts the val to the trie if the current node is a Branch.
// It returns Node if curr needs to be replaced and error if any. // It returns the Node if curr needs to be replaced and an error, if any.
func (t *Trie) putIntoBranch(curr *BranchNode, path []byte, val Node) (Node, error) { func (t *Trie) putIntoBranch(curr *BranchNode, path []byte, val Node) (Node, error) {
i, path := splitPath(path) i, path := splitPath(path)
t.removeRef(curr.Hash(), curr.bytes) t.removeRef(curr.Hash(), curr.bytes)
@ -191,8 +191,8 @@ func (t *Trie) putIntoBranch(curr *BranchNode, path []byte, val Node) (Node, err
return curr, nil return curr, nil
} }
// putIntoExtension puts val to trie if current node is an Extension. // putIntoExtension puts the val to the trie if the current node is an Extension.
// It returns Node if curr needs to be replaced and error if any. // It returns the Node if curr needs to be replaced and an error, if any.
func (t *Trie) putIntoExtension(curr *ExtensionNode, path []byte, val Node) (Node, error) { func (t *Trie) putIntoExtension(curr *ExtensionNode, path []byte, val Node) (Node, error) {
t.removeRef(curr.Hash(), curr.bytes) t.removeRef(curr.Hash(), curr.bytes)
if bytes.HasPrefix(path, curr.key) { if bytes.HasPrefix(path, curr.key) {
@ -232,8 +232,8 @@ func (t *Trie) putIntoEmpty(path []byte, val Node) (Node, error) {
return t.newSubTrie(path, val, true), nil return t.newSubTrie(path, val, true), nil
} }
// putIntoHash puts val to trie if current node is a HashNode. // putIntoHash puts the val to the trie if the current node is a HashNode.
// It returns Node if curr needs to be replaced and error if any. // It returns the Node if curr needs to be replaced and an error, if any.
func (t *Trie) putIntoHash(curr *HashNode, path []byte, val Node) (Node, error) { func (t *Trie) putIntoHash(curr *HashNode, path []byte, val Node) (Node, error) {
result, err := t.getFromStore(curr.hash) result, err := t.getFromStore(curr.hash)
if err != nil { if err != nil {
@ -242,7 +242,7 @@ func (t *Trie) putIntoHash(curr *HashNode, path []byte, val Node) (Node, error)
return t.putIntoNode(result, path, val) return t.putIntoNode(result, path, val)
} }
// newSubTrie create new trie containing node at provided path. // newSubTrie creates a new trie containing the node at the provided path.
func (t *Trie) newSubTrie(path []byte, val Node, newVal bool) Node { func (t *Trie) newSubTrie(path []byte, val Node, newVal bool) Node {
if newVal { if newVal {
t.addRef(val.Hash(), val.Bytes()) t.addRef(val.Hash(), val.Bytes())
@ -255,7 +255,7 @@ func (t *Trie) newSubTrie(path []byte, val Node, newVal bool) Node {
return e return e
} }
// putIntoNode puts val with provided path inside curr and returns updated node. // putIntoNode puts the val with the provided path inside curr and returns an updated node.
// Reference counters are updated for both curr and returned value. // Reference counters are updated for both curr and returned value.
func (t *Trie) putIntoNode(curr Node, path []byte, val Node) (Node, error) { func (t *Trie) putIntoNode(curr Node, path []byte, val Node) (Node, error) {
switch n := curr.(type) { switch n := curr.(type) {
@ -274,8 +274,8 @@ func (t *Trie) putIntoNode(curr Node, path []byte, val Node) (Node, error) {
} }
} }
// Delete removes key from trie. // Delete removes the key from the trie.
// It returns no error on missing key. // It returns no error on a missing key.
func (t *Trie) Delete(key []byte) error { func (t *Trie) Delete(key []byte) error {
if len(key) > MaxKeyLength { if len(key) > MaxKeyLength {
return errors.New("key is too big") return errors.New("key is too big")
@ -363,7 +363,7 @@ func (t *Trie) deleteFromExtension(n *ExtensionNode, path []byte) (Node, error)
return n, nil return n, nil
} }
// deleteFromNode removes value with provided path from curr and returns an updated node. // deleteFromNode removes the value with the provided path from curr and returns an updated node.
// Reference counters are updated for both curr and returned value. // Reference counters are updated for both curr and returned value.
func (t *Trie) deleteFromNode(curr Node, path []byte) (Node, error) { func (t *Trie) deleteFromNode(curr Node, path []byte) (Node, error) {
switch n := curr.(type) { switch n := curr.(type) {
@ -402,9 +402,9 @@ func makeStorageKey(mptKey util.Uint256) []byte {
return append([]byte{byte(storage.DataMPT)}, mptKey[:]...) return append([]byte{byte(storage.DataMPT)}, mptKey[:]...)
} }
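The storage-key scheme shown in `makeStorageKey` is a one-byte table prefix followed by the node hash. A minimal sketch; the concrete prefix value below is an illustrative placeholder, not necessarily the real `storage.DataMPT` byte:

```go
package main

import "fmt"

// dataMPT is an assumed placeholder for the storage.DataMPT prefix byte.
const dataMPT = 0x03

// makeStorageKey prepends the MPT table prefix to the node hash,
// yielding a 33-byte storage key.
func makeStorageKey(mptKey [32]byte) []byte {
	return append([]byte{dataMPT}, mptKey[:]...)
}

func main() {
	var h [32]byte
	h[0] = 0xAB
	k := makeStorageKey(h)
	fmt.Println(len(k), k[0], k[1]) // 33 3 171
}
```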
// Flush puts every node in the trie except Hash ones to the storage. // Flush puts every node (except Hash ones) in the trie to the storage.
// Because we care only about block-level changes, there is no need to put every // Because we care about block-level changes only, there is no need to put every
// new node to storage. Normally, flush should be called with every StateRoot persist, i.e. // new node to the storage. Normally, flush should be called with every StateRoot persist, i.e.
// after every block. // after every block.
func (t *Trie) Flush(index uint32) { func (t *Trie) Flush(index uint32) {
key := makeStorageKey(util.Uint256{}) key := makeStorageKey(util.Uint256{})
@ -571,7 +571,7 @@ func collapse(depth int, node Node) Node {
return node return node
} }
// Find returns list of storage key-value pairs whose key is prefixed by the specified // Find returns a list of storage key-value pairs whose key is prefixed by the specified
// prefix starting from the specified `prefix`+`from` path (not including the item at // prefix starting from the specified `prefix`+`from` path (not including the item at
// the specified `prefix`+`from` path if so). The `max` number of elements is returned at max. // the specified `prefix`+`from` path if so). At most `max` elements are returned.
func (t *Trie) Find(prefix, from []byte, max int) ([]storage.KeyValue, error) { func (t *Trie) Find(prefix, from []byte, max int) ([]storage.KeyValue, error) {

View file

@ -27,13 +27,13 @@ type Contracts struct {
Crypto *Crypto Crypto *Crypto
Std *Std Std *Std
Contracts []interop.Contract Contracts []interop.Contract
// persistScript is vm script which executes "onPersist" method of every native contract. // persistScript is a vm script which executes the "onPersist" method of every native contract.
persistScript []byte persistScript []byte
// postPersistScript is vm script which executes "postPersist" method of every native contract. // postPersistScript is a vm script which executes the "postPersist" method of every native contract.
postPersistScript []byte postPersistScript []byte
} }
// ByHash returns native contract with the specified hash. // ByHash returns a native contract with the specified hash.
func (cs *Contracts) ByHash(h util.Uint160) interop.Contract { func (cs *Contracts) ByHash(h util.Uint160) interop.Contract {
for _, ctr := range cs.Contracts { for _, ctr := range cs.Contracts {
if ctr.Metadata().Hash.Equals(h) { if ctr.Metadata().Hash.Equals(h) {
@ -43,7 +43,7 @@ func (cs *Contracts) ByHash(h util.Uint160) interop.Contract {
return nil return nil
} }
// ByName returns native contract with the specified name. // ByName returns a native contract with the specified name.
func (cs *Contracts) ByName(name string) interop.Contract { func (cs *Contracts) ByName(name string) interop.Contract {
name = strings.ToLower(name) name = strings.ToLower(name)
for _, ctr := range cs.Contracts { for _, ctr := range cs.Contracts {
@ -54,7 +54,7 @@ func (cs *Contracts) ByName(name string) interop.Contract {
return nil return nil
} }
// NewContracts returns new set of native contracts with new GAS, NEO, Policy, Oracle, // NewContracts returns a new set of native contracts with new GAS, NEO, Policy, Oracle,
// Designate and (optional) Notary contracts. // Designate and (optional) Notary contracts.
func NewContracts(cfg config.ProtocolConfiguration) *Contracts { func NewContracts(cfg config.ProtocolConfiguration) *Contracts {
cs := new(Contracts) cs := new(Contracts)
@ -122,7 +122,7 @@ func NewContracts(cfg config.ProtocolConfiguration) *Contracts {
return cs return cs
} }
// GetPersistScript returns VM script calling "onPersist" syscall for native contracts. // GetPersistScript returns a VM script calling the "onPersist" syscall for native contracts.
func (cs *Contracts) GetPersistScript() []byte { func (cs *Contracts) GetPersistScript() []byte {
if cs.persistScript != nil { if cs.persistScript != nil {
return cs.persistScript return cs.persistScript
@ -133,7 +133,7 @@ func (cs *Contracts) GetPersistScript() []byte {
return cs.persistScript return cs.persistScript
} }
// GetPostPersistScript returns VM script calling "postPersist" syscall for native contracts. // GetPostPersistScript returns a VM script calling the "postPersist" syscall for native contracts.
func (cs *Contracts) GetPostPersistScript() []byte { func (cs *Contracts) GetPostPersistScript() []byte {
if cs.postPersistScript != nil { if cs.postPersistScript != nil {
return cs.postPersistScript return cs.postPersistScript

View file

@ -137,22 +137,22 @@ func curveFromStackitem(si stackitem.Item) (elliptic.Curve, error) {
} }
} }
// Metadata implements Contract interface. // Metadata implements the Contract interface.
func (c *Crypto) Metadata() *interop.ContractMD { func (c *Crypto) Metadata() *interop.ContractMD {
return &c.ContractMD return &c.ContractMD
} }
// Initialize implements Contract interface. // Initialize implements the Contract interface.
func (c *Crypto) Initialize(ic *interop.Context) error { func (c *Crypto) Initialize(ic *interop.Context) error {
return nil return nil
} }
// OnPersist implements Contract interface. // OnPersist implements the Contract interface.
func (c *Crypto) OnPersist(ic *interop.Context) error { func (c *Crypto) OnPersist(ic *interop.Context) error {
return nil return nil
} }
// PostPersist implements Contract interface. // PostPersist implements the Contract interface.
func (c *Crypto) PostPersist(ic *interop.Context) error { func (c *Crypto) PostPersist(ic *interop.Context) error {
return nil return nil
} }

View file

@ -27,7 +27,7 @@ import (
"github.com/nspcc-dev/neo-go/pkg/vm/stackitem" "github.com/nspcc-dev/neo-go/pkg/vm/stackitem"
) )
// Designate represents designation contract. // Designate represents a designation contract.
type Designate struct { type Designate struct {
interop.ContractMD interop.ContractMD
NEO *NEO NEO *NEO
@ -36,9 +36,9 @@ type Designate struct {
p2pSigExtensionsEnabled bool p2pSigExtensionsEnabled bool
OracleService atomic.Value OracleService atomic.Value
// NotaryService represents Notary node module. // NotaryService represents a Notary node module.
NotaryService atomic.Value NotaryService atomic.Value
// StateRootService represents StateRoot node module. // StateRootService represents a StateRoot node module.
StateRootService *stateroot.Module StateRootService *stateroot.Module
} }
@ -64,7 +64,7 @@ const (
// maxNodeCount is the maximum number of nodes to set the role for. // maxNodeCount is the maximum number of nodes to set the role for.
maxNodeCount = 32 maxNodeCount = 32
// DesignationEventName is the name of a designation event. // DesignationEventName is the name of the designation event.
DesignationEventName = "Designation" DesignationEventName = "Designation"
) )
@ -150,12 +150,12 @@ func (s *Designate) InitializeCache(d *dao.Simple) error {
return nil return nil
} }
// OnPersist implements Contract interface. // OnPersist implements the Contract interface.
func (s *Designate) OnPersist(ic *interop.Context) error { func (s *Designate) OnPersist(ic *interop.Context) error {
return nil return nil
} }
// PostPersist implements Contract interface. // PostPersist implements the Contract interface.
func (s *Designate) PostPersist(ic *interop.Context) error { func (s *Designate) PostPersist(ic *interop.Context) error {
cache := ic.DAO.GetRWCache(s.ID).(*DesignationCache) cache := ic.DAO.GetRWCache(s.ID).(*DesignationCache)
if !cache.rolesChangedFlag { if !cache.rolesChangedFlag {
@ -268,7 +268,7 @@ func getCachedRoleData(cache *DesignationCache, r noderoles.Role) *roleData {
return nil return nil
} }
// GetLastDesignatedHash returns last designated hash of a given role. // GetLastDesignatedHash returns the last designated hash of the given role.
func (s *Designate) GetLastDesignatedHash(d *dao.Simple, r noderoles.Role) (util.Uint160, error) { func (s *Designate) GetLastDesignatedHash(d *dao.Simple, r noderoles.Role) (util.Uint160, error) {
if !s.isValidRole(r) { if !s.isValidRole(r) {
return util.Uint160{}, ErrInvalidRole return util.Uint160{}, ErrInvalidRole

View file

@ -10,7 +10,7 @@ import (
"github.com/nspcc-dev/neo-go/pkg/vm/stackitem" "github.com/nspcc-dev/neo-go/pkg/vm/stackitem"
) )
// Call calls specified native contract method. // Call calls the specified native contract method.
func Call(ic *interop.Context) error { func Call(ic *interop.Context) error {
version := ic.VM.Estack().Pop().BigInt().Int64() version := ic.VM.Estack().Pop().BigInt().Int64()
if version != 0 { if version != 0 {

View file

@@ -28,7 +28,7 @@ type Ledger struct {
 const ledgerContractID = -4
 
-// newLedger creates new Ledger native contract.
+// newLedger creates a new Ledger native contract.
 func newLedger() *Ledger {
     var l = &Ledger{
         ContractMD: *interop.NewContractMD(nativenames.Ledger, ledgerContractID),
@@ -77,17 +77,17 @@ func newLedger() *Ledger {
     return l
 }
 
-// Metadata implements Contract interface.
+// Metadata implements the Contract interface.
 func (l *Ledger) Metadata() *interop.ContractMD {
     return &l.ContractMD
 }
 
-// Initialize implements Contract interface.
+// Initialize implements the Contract interface.
 func (l *Ledger) Initialize(ic *interop.Context) error {
     return nil
 }
 
-// OnPersist implements Contract interface.
+// OnPersist implements the Contract interface.
 func (l *Ledger) OnPersist(ic *interop.Context) error {
     // Actual block/tx processing is done in Blockchain.storeBlock().
     // Even though C# node add them to storage here, they're not
@@ -96,7 +96,7 @@ func (l *Ledger) OnPersist(ic *interop.Context) error {
     return nil
 }
 
-// PostPersist implements Contract interface.
+// PostPersist implements the Contract interface.
 func (l *Ledger) PostPersist(ic *interop.Context) error {
     return nil // Actual block/tx processing is done in Blockchain.storeBlock().
 }
@@ -139,8 +139,8 @@ func (l *Ledger) getTransactionHeight(ic *interop.Context, params []stackitem.It
     return stackitem.Make(h)
 }
 
-// getTransactionFromBlock returns transaction with the given index from the
-// block with height or hash specified.
+// getTransactionFromBlock returns a transaction with the given index from the
+// block with the height or hash specified.
 func (l *Ledger) getTransactionFromBlock(ic *interop.Context, params []stackitem.Item) stackitem.Item {
     hash := getBlockHashFromItem(ic, params[0])
     index := toUint32(params[1])
@@ -177,14 +177,14 @@ func (l *Ledger) getTransactionVMState(ic *interop.Context, params []stackitem.I
 }
 
 // isTraceableBlock defines whether we're able to give information about
-// the block with index specified.
+// the block with the index specified.
 func isTraceableBlock(ic *interop.Context, index uint32) bool {
     height := ic.BlockHeight()
     MaxTraceableBlocks := ic.Chain.GetConfig().MaxTraceableBlocks
     return index <= height && index+MaxTraceableBlocks > height
 }
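The traceability predicate from this hunk is simple enough to sketch standalone, outside the interop context. A minimal illustration (the `MaxTraceableBlocks` value of 2102400 is an assumption here, taken from typical N3 configuration, not from this diff):

```go
package main

import "fmt"

// isTraceable mirrors the predicate in isTraceableBlock: a block is
// traceable if it is not in the future (index <= height) and still
// within the last maxTraceable blocks of the chain.
func isTraceable(index, height, maxTraceable uint32) bool {
	return index <= height && index+maxTraceable > height
}

func main() {
	const mtb = 2102400 // assumed mainnet MaxTraceableBlocks
	fmt.Println(isTraceable(100, 200, mtb)) // recent block: traceable
	fmt.Println(isTraceable(300, 200, mtb)) // future block: not traceable
	fmt.Println(isTraceable(0, mtb, mtb))   // exactly aged out: not traceable
}
```

Note the boundary: a block at `index` stays traceable while `index+maxTraceable` strictly exceeds the current height, so the window covers exactly `maxTraceable` blocks ending at the tip.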
-// getBlockHashFromItem converts given stackitem.Item to block hash using given
+// getBlockHashFromItem converts the given stackitem.Item to a block hash using the given
 // Ledger if needed. Interop functions accept both block numbers and
 // block hashes as parameters, thus this function is needed. It's supposed to
 // be called within VM context, so it panics if anything goes wrong.
@@ -219,7 +219,7 @@ func getUint256FromItem(item stackitem.Item) (util.Uint256, error) {
     return hash, nil
 }
 
-// getTransactionAndHeight returns transaction and its height if it's present
+// getTransactionAndHeight returns a transaction and its height if it's present
 // on the chain. It panics if anything goes wrong.
 func getTransactionAndHeight(d *dao.Simple, item stackitem.Item) (*transaction.Transaction, uint32, error) {
     hash, err := getUint256FromItem(item)
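The comment on `getBlockHashFromItem` describes dual-form parameters: interop calls may pass either a block index or a block hash, and the helper normalizes both to a hash. A rough standalone sketch of that pattern (the chain lookup is mocked with a map; names and types here are illustrative, not the real neo-go API):

```go
package main

import (
	"errors"
	"fmt"
)

// blockHashFromParam normalizes a parameter that may be either a block
// index (uint32) or an already-computed block hash (string). Indices are
// resolved through a mock chain lookup; hashes pass through unchanged.
func blockHashFromParam(param interface{}, hashByIndex map[uint32]string) (string, error) {
	switch v := param.(type) {
	case uint32: // block number: resolve via the chain
		h, ok := hashByIndex[v]
		if !ok {
			return "", errors.New("no block with the given index")
		}
		return h, nil
	case string: // already a hash: use as is
		return v, nil
	default:
		return "", errors.New("unsupported parameter type")
	}
}

func main() {
	chain := map[uint32]string{0: "0xgenesis"}
	h, _ := blockHashFromParam(uint32(0), chain)
	fmt.Println(h)
	h2, _ := blockHashFromParam("0xabcdef", chain)
	fmt.Println(h2)
}
```

The real helper panics rather than returning an error because, as the comment notes, it runs inside VM context where the fault handling is done by the VM itself.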
@@ -25,7 +25,7 @@ import (
     "github.com/nspcc-dev/neo-go/pkg/vm/stackitem"
 )
 
-// Management is contract-managing native contract.
+// Management is a contract-managing native contract.
 type Management struct {
     interop.ContractMD
     NEO *NEO
@@ -84,12 +84,12 @@ func (c *ManagementCache) Copy() dao.NativeContractCache {
     return cp
 }
 
-// MakeContractKey creates a key from account script hash.
+// MakeContractKey creates a key from the account script hash.
 func MakeContractKey(h util.Uint160) []byte {
     return makeUint160Key(prefixContract, h)
 }
 
-// newManagement creates new Management native contract.
+// newManagement creates a new Management native contract.
 func newManagement() *Management {
     var m = &Management{
         ContractMD: *interop.NewContractMD(nativenames.Management, ManagementContractID),
@@ -168,7 +168,7 @@ func (m *Management) getContract(ic *interop.Context, args []stackitem.Item) sta
     return contractToStack(ctr)
 }
 
-// GetContract returns contract with given hash from given DAO.
+// GetContract returns a contract with the given hash from the given DAO.
 func (m *Management) GetContract(d *dao.Simple, hash util.Uint160) (*state.Contract, error) {
     cache := d.GetROCache(m.ID).(*ManagementCache)
     cs, ok := cache.contracts[hash]
@@ -198,7 +198,7 @@ func getLimitedSlice(arg stackitem.Item, max int) ([]byte, error) {
 }
 
 // getNefAndManifestFromItems converts input arguments into NEF and manifest
-// adding appropriate deployment GAS price and sanitizing inputs.
+// adding an appropriate deployment GAS price and sanitizing inputs.
 func (m *Management) getNefAndManifestFromItems(ic *interop.Context, args []stackitem.Item, isDeploy bool) (*nef.File, *manifest.Manifest, error) {
     nefBytes, err := getLimitedSlice(args[0], math.MaxInt32) // Upper limits are checked during NEF deserialization.
     if err != nil {
@@ -282,7 +282,7 @@ func (m *Management) markUpdated(d *dao.Simple, hash util.Uint160, cs *state.Con
     updateContractCache(cache, cs)
 }
 
-// Deploy creates contract's hash/ID and saves new contract into the given DAO.
+// Deploy creates a contract's hash/ID and saves a new contract into the given DAO.
 // It doesn't run _deploy method and doesn't emit notification.
 func (m *Management) Deploy(d *dao.Simple, sender util.Uint160, neff *nef.File, manif *manifest.Manifest) (*state.Contract, error) {
     h := state.CreateContractHash(sender, neff.Checksum, manif.Name)
@@ -390,7 +390,7 @@ func (m *Management) destroy(ic *interop.Context, sis []stackitem.Item) stackite
     return stackitem.Null{}
 }
 
-// Destroy drops given contract from DAO along with its storage. It doesn't emit notification.
+// Destroy drops the given contract from DAO along with its storage. It doesn't emit notification.
 func (m *Management) Destroy(d *dao.Simple, hash util.Uint160) error {
     contract, err := m.GetContract(d, hash)
     if err != nil {
@@ -448,12 +448,12 @@ func contractToStack(cs *state.Contract) stackitem.Item {
     return si
 }
 
-// Metadata implements Contract interface.
+// Metadata implements the Contract interface.
 func (m *Management) Metadata() *interop.ContractMD {
     return &m.ContractMD
 }
 
-// updateContractCache saves contract in the common and NEP-related caches. It's
+// updateContractCache saves the contract in the common and NEP-related caches. It's
 // an internal method that must be called with m.mtx lock taken.
 func updateContractCache(cache *ManagementCache, cs *state.Contract) {
     cache.contracts[cs.Hash] = cs
@@ -465,7 +465,7 @@ func updateContractCache(cache *ManagementCache, cs *state.Contract) {
     }
 }
 
-// OnPersist implements Contract interface.
+// OnPersist implements the Contract interface.
 func (m *Management) OnPersist(ic *interop.Context) error {
     var cache *ManagementCache
     for _, native := range ic.Natives {
@@ -495,7 +495,7 @@ func (m *Management) OnPersist(ic *interop.Context) error {
 }
 
 // InitializeCache initializes contract cache with the proper values from storage.
-// Cache initialisation should be done apart from Initialize because Initialize is
+// Cache initialization should be done apart from Initialize because Initialize is
 // called only when deploying native contracts.
 func (m *Management) InitializeCache(d *dao.Simple) error {
     cache := &ManagementCache{
@@ -521,7 +521,7 @@ func (m *Management) InitializeCache(d *dao.Simple) error {
     return nil
 }
 
-// PostPersist implements Contract interface.
+// PostPersist implements the Contract interface.
 func (m *Management) PostPersist(ic *interop.Context) error {
     return nil
 }
@@ -550,7 +550,7 @@ func (m *Management) GetNEP17Contracts(d *dao.Simple) []util.Uint160 {
     return result
 }
 
-// Initialize implements Contract interface.
+// Initialize implements the Contract interface.
 func (m *Management) Initialize(ic *interop.Context) error {
     setIntWithKey(m.ID, ic.DAO, keyMinimumDeploymentFee, defaultMinimumDeploymentFee)
     setIntWithKey(m.ID, ic.DAO, keyNextAvailableID, 1)
@@ -82,7 +82,7 @@ func (g *GAS) balanceFromBytes(si *state.StorageItem) (*big.Int, error) {
     return &acc.Balance, err
 }
 
-// Initialize initializes GAS contract.
+// Initialize initializes a GAS contract.
 func (g *GAS) Initialize(ic *interop.Context) error {
     if err := g.nep17TokenNative.Initialize(ic); err != nil {
         return err
@@ -99,7 +99,7 @@ func (g *GAS) Initialize(ic *interop.Context) error {
     return nil
 }
 
-// OnPersist implements Contract interface.
+// OnPersist implements the Contract interface.
 func (g *GAS) OnPersist(ic *interop.Context) error {
     if len(ic.Block.Transactions) == 0 {
         return nil
@@ -127,7 +127,7 @@ func (g *GAS) OnPersist(ic *interop.Context) error {
     return nil
 }
 
-// PostPersist implements Contract interface.
+// PostPersist implements the Contract interface.
 func (g *GAS) PostPersist(ic *interop.Context) error {
     return nil
 }
@@ -52,13 +52,13 @@ type NeoCache struct {
     // committee contains cached committee members and their votes.
     // It is updated once in a while depending on committee size
     // (every 28 blocks for mainnet). It's value
-    // is always equal to value stored by `prefixCommittee`.
+    // is always equal to the value stored by `prefixCommittee`.
     committee keysWithVotes
-    // committeeHash contains script hash of the committee.
+    // committeeHash contains the script hash of the committee.
     committeeHash util.Uint160
-    // gasPerVoteCache contains last updated value of GAS per vote reward for candidates.
-    // It is set in state-modifying methods only and read in `PostPersist` thus is not protected
+    // gasPerVoteCache contains the last updated value of GAS per vote reward for candidates.
+    // It is set in state-modifying methods only and read in `PostPersist`, thus is not protected
     // by any mutex.
     gasPerVoteCache map[string]big.Int
 }
@@ -67,7 +67,7 @@ const (
     neoContractID = -5
     // NEOTotalSupply is the total amount of NEO in the system.
     NEOTotalSupply = 100000000
-    // DefaultRegisterPrice is default price for candidate register.
+    // DefaultRegisterPrice is the default price for candidate register.
     DefaultRegisterPrice = 1000 * GASFactor
     // prefixCandidate is a prefix used to store validator's data.
     prefixCandidate = 33
@@ -139,7 +139,7 @@ func copyNeoCache(src, dst *NeoCache) {
     }
 }
 
-// makeValidatorKey creates a key from account script hash.
+// makeValidatorKey creates a key from the account script hash.
 func makeValidatorKey(key *keys.PublicKey) []byte {
     b := key.Bytes()
     // Don't create a new buffer.
@@ -228,7 +228,7 @@ func newNEO(cfg config.ProtocolConfiguration) *NEO {
     return n
 }
 
-// Initialize initializes NEO contract.
+// Initialize initializes a NEO contract.
 func (n *NEO) Initialize(ic *interop.Context) error {
     if err := n.nep17TokenNative.Initialize(ic); err != nil {
         return err
@@ -276,8 +276,8 @@ func (n *NEO) Initialize(ic *interop.Context) error {
     return nil
 }
 
-// InitializeCache initializes all NEO cache with the proper values from storage.
-// Cache initialisation should be done apart from Initialize because Initialize is
+// InitializeCache initializes all NEO cache with the proper values from the storage.
+// Cache initialization should be done apart from Initialize because Initialize is
 // called only when deploying native contracts.
 func (n *NEO) InitializeCache(blockHeight uint32, d *dao.Simple) error {
     cache := &NeoCache{
@@ -344,7 +344,7 @@ func (n *NEO) updateCommittee(cache *NeoCache, ic *interop.Context) error {
     return nil
 }
 
-// OnPersist implements Contract interface.
+// OnPersist implements the Contract interface.
 func (n *NEO) OnPersist(ic *interop.Context) error {
     if n.cfg.ShouldUpdateCommitteeAt(ic.Block.Index) {
         cache := ic.DAO.GetRWCache(n.ID).(*NeoCache)
@@ -361,7 +361,7 @@ func (n *NEO) OnPersist(ic *interop.Context) error {
     return nil
 }
 
-// PostPersist implements Contract interface.
+// PostPersist implements the Contract interface.
 func (n *NEO) PostPersist(ic *interop.Context) error {
     gas := n.GetGASPerBlock(ic.DAO, ic.Block.Index)
     cache := ic.DAO.GetROCache(n.ID).(*NeoCache)

Some files were not shown because too many files have changed in this diff.