[#2442] English Check

Signed-off-by: Elizaveta Chichindaeva <elizaveta@nspcc.ru>
This commit is contained in:
Elizaveta Chichindaeva 2022-04-20 21:30:09 +03:00
parent 7f8b259994
commit 28908aa3cf
293 changed files with 2222 additions and 2224 deletions

View file

@ -5,9 +5,9 @@ follow the guidelines:
1. Check open [issues](https://github.com/nspcc-dev/neo-go/issues) and
[pull requests](https://github.com/nspcc-dev/neo-go/pulls) for existing discussions.
1. Open an issue first, to discuss a new feature or enhancement.
1. Write tests, and make sure the test suite passes locally and on CI.
1. When optimizing something, write benchmarks and attach results:
1. Open an issue first to discuss a new feature or enhancement.
1. Write tests and make sure the test suite passes locally and on CI.
1. When optimizing something, write benchmarks and attach the results:
```
go test -run - -bench BenchmarkYourFeature -count=10 ./... >old // on master
go test -run - -bench BenchmarkYourFeature -count=10 ./... >new // on your branch
@ -15,11 +15,11 @@ follow the guidelines:
```
`benchstat` is described here https://godocs.io/golang.org/x/perf/cmd/benchstat.
1. Open a pull request, and reference the relevant issue(s).
1. Open a pull request and reference the relevant issue(s).
1. Make sure your commits are logically separated and have good comments
explaining the details of your change. Add a package/file prefix to your
commit if that's applicable (like 'vm: fix ADD miscalculation on full
moon').
1. After receiving feedback, amend your commits or add new ones as
1. After receiving feedback, amend your commits or add new ones as
appropriate.
1. **Have fun!**

View file

@ -59,7 +59,7 @@ The resulting binary is `bin/neo-go`.
#### Building on Windows
To build NeoGo on the Windows platform, we recommend installing `make` from [MinGW
package](https://osdn.net/projects/mingw/). Then you can build NeoGo with:
package](https://osdn.net/projects/mingw/). Then, you can build NeoGo with:
```
make build
@ -77,13 +77,13 @@ is stored in a file and NeoGo allows you to store multiple files in one
directory (`./config` by default) and easily switch between them using network
flags.
To start Neo node on private network use:
To start a Neo node on a private network, use:
```
./bin/neo-go node
```
Or specify a different network with appropriate flag like this:
Or specify a different network with an appropriate flag like this:
```
./bin/neo-go node --mainnet
@ -94,12 +94,12 @@ Available network flags:
- `--privnet, -p`
- `--testnet, -t`
To run a consensus/committee node refer to [consensus
To run a consensus/committee node, refer to [consensus
documentation](docs/consensus.md).
### Docker
By default the `CMD` is set to run a node on `privnet`, so to do this simply run:
By default, the `CMD` is set to run a node on `privnet`, so to do that, simply run:
```bash
docker run -d --name neo-go -p 20332:20332 -p 20331:20331 nspccdev/neo-go
@ -111,8 +111,7 @@ protocol) and `20331` (JSON-RPC server).
### Importing mainnet/testnet dump files
If you want to jump-start your mainnet or testnet node with [chain archives
provided by NGD](https://sync.ngd.network/) follow these instructions (when
they'd be available for 3.0 networks):
provided by NGD](https://sync.ngd.network/), follow these instructions:
```
$ wget .../chain.acc.zip # chain dump file
$ unzip chain.acc.zip
@ -120,7 +119,7 @@ $ ./bin/neo-go db restore -m -i chain.acc # for testnet use '-t' flag instead of
```
The process differs from the C# node in that block importing is a separate
mode, after it ends the node can be started normally.
mode. After it ends, the node can be started normally.
## Running a private network
@ -131,8 +130,8 @@ Refer to [consensus node documentation](docs/consensus.md).
Please refer to [neo-go smart contract development
workshop](https://github.com/nspcc-dev/neo-go-sc-wrkshp) that shows some
simple contracts that can be compiled/deployed/run using neo-go compiler, SDK
and private network. For details on how Go code is translated to Neo VM
bytecode and what you can and can not do in smart contract please refer to the
and a private network. For details on how Go code is translated to Neo VM
bytecode and what you can and cannot do in a smart contract, please refer to the
[compiler documentation](docs/compiler.md).
Refer to [examples](examples/README.md) for more NEO smart contract examples
@ -145,9 +144,9 @@ wallets. NeoGo wallet is just a
[NEP-6](https://github.com/neo-project/proposals/blob/68398d28b6932b8dd2b377d5d51bca7b0442f532/nep-6.mediawiki)
file that is used by CLI commands to sign various things. There is no database
behind it; the blockchain is the database and CLI commands use RPC to query
data from it. At the same time it's not required to open the wallet on RPC
node to perform various actions (unless your node is providing some service
for the network like consensus or oracle nodes).
data from it. At the same time, it's not required to open a wallet on an RPC
node to perform various actions (unless your node provides some service
for the network like consensus or oracle nodes do).
# Developer notes
Nodes have features such as [Prometheus](https://prometheus.io/docs/guides/go-application) and
@ -167,7 +166,7 @@ where you can switch on/off and define port. Prometheus is enabled and Pprof is
Feel free to contribute to this project after reading the
[contributing guidelines](CONTRIBUTING.md).
Before starting to work on a certain topic, create an new issue first,
Before starting to work on a certain topic, create a new issue first
describing the feature/topic you are going to implement.
# Contact

View file

@ -1,7 +1,7 @@
# Roadmap for neo-go
This defines an approximate plan of neo-go releases and key features planned for
them. Things can change if there a need to push a bugfix or some critical
them. Things can change if there is a need to push a bugfix or some critical
functionality.
## Versions 0.7X.Y (as needed)

View file

@ -10,7 +10,7 @@ import (
"github.com/urfave/cli"
)
// Address is a wrapper for Uint160 with flag.Value methods.
// Address is a wrapper for a Uint160 with flag.Value methods.
type Address struct {
IsSet bool
Value util.Uint160
@ -28,12 +28,12 @@ var (
_ cli.Flag = AddressFlag{}
)
// String implements fmt.Stringer interface.
// String implements the fmt.Stringer interface.
func (a Address) String() string {
return address.Uint160ToString(a.Value)
}
// Set implements flag.Value interface.
// Set implements the flag.Value interface.
func (a *Address) Set(s string) error {
addr, err := ParseAddress(s)
if err != nil {
@ -44,7 +44,7 @@ func (a *Address) Set(s string) error {
return nil
}
// Uint160 casts address to Uint160.
// Uint160 casts an address to Uint160.
func (a *Address) Uint160() (u util.Uint160) {
if !a.IsSet {
// It is a programmer error to call this method without
@ -82,7 +82,7 @@ func (f AddressFlag) GetName() string {
return f.Name
}
// Apply populates the flag given the flag set and environment
// Apply populates the flag given the flag set and environment.
// Ignores errors.
func (f AddressFlag) Apply(set *flag.FlagSet) {
eachName(f.Name, func(name string) {
@ -90,7 +90,7 @@ func (f AddressFlag) Apply(set *flag.FlagSet) {
})
}
// ParseAddress parses Uint160 form either LE string or address.
// ParseAddress parses a Uint160 from either an LE string or an address.
func ParseAddress(s string) (util.Uint160, error) {
const uint160size = 2 * util.Uint160Size
switch len(s) {

View file

@ -8,7 +8,7 @@ import (
"github.com/urfave/cli"
)
// Fixed8 is a wrapper for Uint160 with flag.Value methods.
// Fixed8 is a wrapper for a fixedn.Fixed8 with flag.Value methods.
type Fixed8 struct {
Value fixedn.Fixed8
}
@ -25,12 +25,12 @@ var (
_ cli.Flag = Fixed8Flag{}
)
// String implements fmt.Stringer interface.
// String implements the fmt.Stringer interface.
func (a Fixed8) String() string {
return a.Value.String()
}
// Set implements flag.Value interface.
// Set implements the flag.Value interface.
func (a *Fixed8) Set(s string) error {
f, err := fixedn.Fixed8FromString(s)
if err != nil {
@ -40,7 +40,7 @@ func (a *Fixed8) Set(s string) error {
return nil
}
// Fixed8 casts address to util.Fixed8.
// Fixed8 returns the wrapped util.Fixed8 value.
func (a *Fixed8) Fixed8() fixedn.Fixed8 {
return a.Value
}
@ -61,7 +61,7 @@ func (f Fixed8Flag) GetName() string {
return f.Name
}
// Apply populates the flag given the flag set and environment
// Apply populates the flag given the flag set and environment.
// Ignores errors.
func (f Fixed8Flag) Apply(set *flag.FlagSet) {
eachName(f.Name, func(name string) {
@ -69,7 +69,7 @@ func (f Fixed8Flag) Apply(set *flag.FlagSet) {
})
}
// Fixed8FromContext returns parsed util.Fixed8 value provided flag name.
// Fixed8FromContext returns a parsed util.Fixed8 value for the provided flag name.
func Fixed8FromContext(ctx *cli.Context, name string) fixedn.Fixed8 {
return ctx.Generic(name).(*Fixed8).Value
}

View file

@ -21,7 +21,7 @@ type ReadWriter struct {
io.Writer
}
// ReadLine reads line from the input without trailing '\n'.
// ReadLine reads a line from the input without trailing '\n'.
func ReadLine(prompt string) (string, error) {
trm := Terminal
if trm == nil {
@ -46,7 +46,7 @@ func readLine(trm *term.Terminal, prompt string) (string, error) {
return trm.ReadLine()
}
// ReadPassword reads user password with prompt.
// ReadPassword reads the user's password with prompt.
func ReadPassword(prompt string) (string, error) {
trm := Terminal
if trm == nil {
@ -60,7 +60,7 @@ func ReadPassword(prompt string) (string, error) {
return trm.ReadPassword(prompt)
}
// ConfirmTx asks for a confirmation to send tx.
// ConfirmTx asks for a confirmation to send the tx.
func ConfirmTx(w io.Writer, tx *transaction.Transaction) error {
fmt.Fprintf(w, "Network fee: %s\n", fixedn.Fixed8(tx.NetworkFee))
fmt.Fprintf(w, "System fee: %s\n", fixedn.Fixed8(tx.SystemFee))

View file

@ -16,7 +16,7 @@ import (
// DefaultTimeout is the default timeout used for RPC requests.
const DefaultTimeout = 10 * time.Second
// RPCEndpointFlag is a long flag name for RPC endpoint. It can be used to
// RPCEndpointFlag is a long flag name for an RPC endpoint. It can be used to
// check for flag presence in the context.
const RPCEndpointFlag = "rpc-endpoint"
@ -60,7 +60,7 @@ func GetNetwork(ctx *cli.Context) netmode.Magic {
return net
}
// GetTimeoutContext returns a context.Context with default of user-set timeout.
// GetTimeoutContext returns a context.Context with the default or a user-set timeout.
func GetTimeoutContext(ctx *cli.Context) (context.Context, func()) {
dur := ctx.Duration("timeout")
if dur == 0 {

View file

@ -15,8 +15,8 @@ import (
// validUntilBlockIncrement is the number of extra blocks to add to an exported transaction.
const validUntilBlockIncrement = 50
// InitAndSave creates incompletely signed transaction which can used
// as input to `multisig sign`.
// InitAndSave creates an incompletely signed transaction which can be used
// as an input to `multisig sign`.
func InitAndSave(net netmode.Magic, tx *transaction.Transaction, acc *wallet.Account, filename string) error {
// avoid fast transaction expiration
tx.ValidUntilBlock += validUntilBlockIncrement
@ -34,7 +34,7 @@ func InitAndSave(net netmode.Magic, tx *transaction.Transaction, acc *wallet.Acc
return Save(scCtx, filename)
}
// Read reads parameter context from file.
// Read reads the parameter context from the file.
func Read(filename string) (*context.ParameterContext, error) {
data, err := os.ReadFile(filename)
if err != nil {
@ -48,7 +48,7 @@ func Read(filename string) (*context.ParameterContext, error) {
return c, nil
}
// Save writes parameter context to file.
// Save writes the parameter context to the file.
func Save(c *context.ParameterContext, filename string) error {
if data, err := json.Marshal(c); err != nil {
return fmt.Errorf("can't marshal transaction: %w", err)

View file

@ -120,8 +120,8 @@ func newGraceContext() context.Context {
return ctx
}
// getConfigFromContext looks at path and mode flags in the given config and
// returns appropriate config.
// getConfigFromContext looks at the path and the mode flags in the given config and
// returns an appropriate config.
func getConfigFromContext(ctx *cli.Context) (config.Config, error) {
configPath := "./config"
if argCp := ctx.String("config-path"); argCp != "" {
@ -131,10 +131,10 @@ func getConfigFromContext(ctx *cli.Context) (config.Config, error) {
}
// handleLoggingParams reads logging parameters.
// If user selected debug level -- function enables it.
// If logPath is configured -- function creates dir and file for logging.
// If a user selected the debug level -- the function enables it.
// If logPath is configured -- the function creates a dir and a file for logging.
// If logPath is configured on Windows -- function returns closer to be
// able to close sink for opened log output file.
// able to close the sink for the opened log output file.
func handleLoggingParams(ctx *cli.Context, cfg config.ApplicationConfiguration) (*zap.Logger, func() error, error) {
level := zapcore.InfoLevel
if ctx.Bool("debug") {

View file

@ -48,14 +48,14 @@ func CheckSenderWitness() {
}
}
// Update updates contract with the new one.
// Update updates the contract with a new one.
func Update(script, manifest []byte) {
ctx := storage.GetReadOnlyContext()
mgmt := storage.Get(ctx, mgmtKey).(interop.Hash160)
contract.Call(mgmt, "update", contract.All, script, manifest)
}
// GetValue returns stored value.
// GetValue returns the stored value.
func GetValue() string {
ctx := storage.GetReadOnlyContext()
val1 := storage.Get(ctx, key)
@ -63,7 +63,7 @@ func GetValue() string {
return val1.(string) + "|" + val2.(string)
}
// GetValueWithKey returns stored value with the specified key.
// GetValueWithKey returns the stored value with the specified key.
func GetValueWithKey(key string) string {
ctx := storage.GetReadOnlyContext()
return storage.Get(ctx, key).(string)

View file

@ -1,4 +1,4 @@
// invalid is an example of contract which doesn't pass event check.
// invalid is an example of a contract which doesn't pass event check.
package invalid1
import (
@ -6,14 +6,14 @@ import (
"github.com/nspcc-dev/neo-go/pkg/interop/runtime"
)
// Notify1 emits correctly typed event.
// Notify1 emits a correctly typed event.
func Notify1() bool {
runtime.Notify("Event", interop.Hash160{1, 2, 3})
return true
}
// Notify2 emits invalid event (ByteString instead of Hash160).
// Notify2 emits an invalid event (ByteString instead of Hash160).
func Notify2() bool {
runtime.Notify("Event", []byte{1, 2, 3})

View file

@ -1,4 +1,4 @@
// invalid is an example of contract which doesn't pass event check.
// invalid is an example of a contract which doesn't pass event check.
package invalid2
import (
@ -6,14 +6,14 @@ import (
"github.com/nspcc-dev/neo-go/pkg/interop/runtime"
)
// Notify1 emits correctly typed event.
// Notify1 emits a correctly typed event.
func Notify1() bool {
runtime.Notify("Event", interop.Hash160{1, 2, 3})
return true
}
// Notify2 emits invalid event (extra parameter).
// Notify2 emits an invalid event (extra parameter).
func Notify2() bool {
runtime.Notify("Event", interop.Hash160{1, 2, 3}, "extra parameter")

View file

@ -1,4 +1,4 @@
// invalid is an example of contract which doesn't pass event check.
// invalid is an example of a contract which doesn't pass event check.
package invalid3
import (
@ -6,14 +6,14 @@ import (
"github.com/nspcc-dev/neo-go/pkg/interop/runtime"
)
// Notify1 emits correctly typed event.
// Notify1 emits a correctly typed event.
func Notify1() bool {
runtime.Notify("Event", interop.Hash160{1, 2, 3})
return true
}
// Notify2 emits invalid event (missing from manifest).
// Notify2 emits an invalid event (missing from manifest).
func Notify2() bool {
runtime.Notify("AnotherEvent", interop.Hash160{1, 2, 3})

View file

@ -35,8 +35,8 @@ type (
}
)
// newWalletV2FromFile reads NEO2 wallet from file.
// This should be used read-only, no operations are supported on returned wallet.
// newWalletV2FromFile reads a NEO2 wallet from the file.
// This should be used read-only, no operations are supported on the returned wallet.
func newWalletV2FromFile(path string) (*walletV2, error) {
file, err := os.OpenFile(path, os.O_RDWR, os.ModeAppend)
if err != nil {
@ -64,7 +64,7 @@ func (a *accountV2) convert(pass string, scrypt keys.ScryptParams) (*wallet.Acco
if err != nil {
return nil, err
}
// If it is simple signature script, newAcc does already have it.
// If it is a simple signature script, newAcc already has it.
if len(script) != simpleSigLen {
nsigs, pubs, ok := parseMultisigContract(script)
if !ok {
@ -112,8 +112,8 @@ func getNumOfThingsFromInstr(script []byte) (int, int, bool) {
const minMultisigLen = 37
// parseMultisigContract accepts multisig verification script from NEO2
// and returns list of public keys in the same order as in script..
// parseMultisigContract accepts a multisig verification script from NEO2
// and returns a list of public keys in the same order as in the script.
func parseMultisigContract(script []byte) (int, keys.PublicKeys, bool) {
// It should contain at least 1 public key.
if len(script) < minMultisigLen {

View file

@ -1,10 +1,10 @@
# NeoGo CLI interface
NeoGo CLI provides all functionality from one binary, so it's used to run
node, create/compile/deploy/invoke/debug smart contracts, run vm and operate
with the wallet. The standard setup assumes that you're running a node as a
separate process, and it doesn't provide any CLI of its own, instead it just
makes RPC interface available for you. To perform any actions you invoke NeoGo
NeoGo CLI provides all functionality from one binary. It's used to run
a node, create/compile/deploy/invoke/debug smart contracts, run the VM and
manage wallets. The standard setup assumes that you run a node as a
separate process, and it doesn't provide any CLI of its own. Instead, it just
makes an RPC interface available for you. To perform any actions, you invoke NeoGo
as a client that connects to this RPC node and does things you want it to do
(like transferring some NEP-17 asset).
@ -40,19 +40,19 @@ detailed configuration file description.
### Starting a node
To start Neo node on private network use:
To start a Neo node on a private network, use:
```
./bin/neo-go node
```
Or specify a different network with appropriate flag like this:
Or specify a different network with an appropriate flag like this:
```
./bin/neo-go node --mainnet
```
By default, the node will run in foreground using current standard output for
By default, the node will run in the foreground using the current standard output for
logging.
@ -78,8 +78,8 @@ signal. List of the services to be restarted on SIGHUP receiving:
### DB import/exports
Node operates using some database as a backend to store blockchain data. NeoGo
allows to dump chain into file from the database (when node is stopped) or to
import blocks from file into the database (also when node is stopped). Use
allows you to dump the chain into a file from the database (when the node is stopped) or to
import blocks from a file into the database (also when the node is stopped). Use
`db` command for that.
## Smart contracts
@ -101,7 +101,7 @@ special `-` path can be used to read the wallet from the standard input.
#### Create wallet
Use `wallet init` command to create new wallet:
Use the `wallet init` command to create a new wallet:
```
./bin/neo-go wallet init -w wallet.nep6
@ -121,8 +121,8 @@ Use `wallet init` command to create new wallet:
wallet successfully created, file location is wallet.nep6
```
where "wallet.nep6" is a wallet file name. This wallet will be empty, to
generate a new key pair and add an account for it use `-a` option:
where "wallet.nep6" is a wallet file name. This wallet will be empty. To
generate a new key pair and add an account for it, use the `-a` option:
```
./bin/neo-go wallet init -w wallet.nep6 -a
Enter the name of the account > Name
@ -163,7 +163,7 @@ Confirm passphrase >
wallet successfully created, file location is wallet.nep6
```
or use `wallet create` command to create new account in existing wallet:
or use the `wallet create` command to create a new account in an existing wallet:
```
./bin/neo-go wallet create -w wallet.nep6
Enter the name of the account > Joe Random
@ -182,7 +182,7 @@ just allows to reuse the old key on N3 network).
```
#### Check wallet contents
`wallet dump` can be used to see wallet contents in more user-friendly way,
`wallet dump` can be used to see wallet contents in a more user-friendly way;
its output is the same NEP-6 JSON, but better formatted. You can also decrypt
keys at the same time with the `-d` option (you'll be prompted for a password):
```
@ -230,7 +230,7 @@ NMe64G6j6nkPZby26JAgpaCNrn1Ee4wW6E (simple signature contract):
```
#### Private key export
`wallet export` allows you to export private key in NEP-2 encrypted or WIF
`wallet export` allows you to export a private key in NEP-2 encrypted or WIF
(unencrypted) form (`-d` flag).
```
$ ./bin/neo-go wallet export -w wallet.nep6 -d NMe64G6j6nkPZby26JAgpaCNrn1Ee4wW6E
@ -251,8 +251,8 @@ Confirm passphrase >
#### Special accounts
Multisignature accounts can be imported with `wallet import-multisig`; you'll
need all public keys and one private key to do that. Then you could sign
transactions for this multisignature account with imported key.
need all public keys and one private key to do that. Then, you could sign
transactions for this multisignature account with the imported key.
`wallet import-deployed` can be used to create wallet accounts for deployed
contracts. They can also have WIF keys associated with them (in case your
@ -294,8 +294,8 @@ OnChain: true
BlockHash: fabcd46e93b8f4e1bc5689e3e0cc59704320494f7a0265b91ae78b4d747ee93b
Success: true
```
`OnChain` is true if transaction was included in block and `Success` is true
if it was executed successfully.
`OnChain` is true if the transaction has been included in a block, and `Success` is true
if it has been executed successfully.
#### Committee members
`query committee` returns a list of current committee members:
@ -353,8 +353,8 @@ Key Votes Com
```
#### Voter data
`query voter` returns additional data about NEO holder: amount of NEO he has,
candidate he voted for (if any) and block number of the last transactions
`query voter` returns additional data about a NEO holder: the amount of NEO they have,
the candidate they voted for (if any) and the block number of the last transactions
involving NEO on this account:
```
$ ./bin/neo-go query voter -r http://localhost:20332 Nj91C8TxQSxW1jCE1ytFre6mg5qxTypg1Y
@ -373,7 +373,7 @@ NEP-17 commands are designed to work with any NEP-17 tokens, but NeoGo needs
some metadata for these tokens to function properly. Native NEO or GAS are
known to NeoGo by default, but other tokens are not. NeoGo can get this
metadata from the specified RPC server, but that's an additional request to
make, so if you care about command processing delay you can import token
make. So, if you care about command processing delay, you can import token
metadata into the wallet with `wallet nep17 import` command. It'll be stored
in the `extra` section of the wallet.
```
@ -391,7 +391,7 @@ Getting balance is easy:
By default, you'll get data for all tokens for the default wallet's
address. You can select a non-default address with the `-a` flag and/or select a token
with `--token` flag (token hash or name can be used as parameter)
with the `--token` flag (a token hash or name can be used as the parameter).
#### Transfers
@ -405,15 +405,15 @@ parties). For example, transferring 100 GAS looks like this:
You can omit the `--from` parameter (the default wallet's address will be used in this
case) and you can add `--gas` for an extra network fee (raising the priority of your
transaction). And you can save transaction to file with `--out` instead of
transaction). And you can save the transaction to a file with `--out` instead of
sending it to the network if it needs to be signed by multiple parties.
To add optional `data` transfer parameter specify `data` positional argument
To add an optional `data` transfer parameter, specify the `data` positional argument
after all required flags. Refer to `wallet nep17 transfer --help` command
description for details.
One `transfer` invocation creates one transaction, but in case you need to do
many transfers you can save on network fees by doing multiple token moves with
One `transfer` invocation creates one transaction. In case you need to do
many transfers, you can save on network fees by doing multiple token moves with
one transaction by using `wallet nep17 multitransfer` command. It can transfer
things from one account to many. Its syntax differs from `transfer` in that
you don't have `--token`, `--to` and `--amount` options, but instead you can
@ -426,7 +426,7 @@ transfer as above can be done with `multitransfer` by doing this:
#### GAS claims
While Neo N3 doesn't have any notion of "claim transaction" and has GAS
automatically distributed with every NEO transfer for NEO owners you still
automatically distributed with every NEO transfer for NEO owners, you still
won't get GAS if you don't do any actions. So the old `wallet claim` command
was updated to be an easier way to do NEO "flipping" when you send a
transaction that transfers all of your NEO to yourself thereby triggering GAS
@ -451,7 +451,7 @@ By default, no token ID specified, i.e. common `balanceOf` method is called.
#### Transfers
Specify token ID via `--id` flag to transfer NEP-11 token. Specify amount to
Specify the token ID via the `--id` flag to transfer an NEP-11 token. Specify the amount to
transfer divisible NEP-11 token:
```
@ -462,7 +462,7 @@ By default, no amount is specified, i.e. the whole token is transferred for
non-divisible tokens and 100% of the token is transferred if there is only one
owner of this token for divisible tokens.
Unlike NEP-17 tokens functionality, `multitransfer` command currently not
Unlike NEP-17 tokens functionality, `multitransfer` command is currently not
supported on NEP-11 tokens.
#### Tokens Of
@ -536,7 +536,7 @@ Some basic commands available there:
- `ops` -- show the opcodes of the currently loaded contract
- `run` -- execute the currently loaded contract
Use `help` command to get more detailed information on all possibilities and
Use the `help` command to get more detailed information on all options and
particular commands. Note that this VM is completely disconnected from the
blockchain, so you won't have all interop functionality available for smart
contracts (use test invocations via RPC for that).

View file

@ -1,26 +1,26 @@
# NeoGo smart contract compiler
The neo-go compiler compiles Go programs to bytecode that the NEO virtual machine can understand.
The neo-go compiler compiles Go programs to bytecode that the NEO virtual machine can understand.
## Language compatibility
The compiler is mostly compatible with regular Go language specification, but
The compiler is mostly compatible with the regular Go language specification. However,
there are some important deviations that you need to be aware of that make it
a dialect of Go rather than a complete port of the language:
* `new()` is not supported; most of the time you can substitute structs with composite literals
* `make()` is supported for maps and slices with elements of basic types
* `copy()` is supported only for byte slices, because of underlying `MEMCPY` opcode
* `copy()` is supported only for byte slices because of the underlying `MEMCPY` opcode
* pointers are supported only for struct literals; one can't take an address
of an arbitrary variable
* there is no real distinction between different integer types; all of them
work as big.Int in Go with a limit of 256 bit in width, so you can use
work as big.Int in Go with a limit of 256 bits in width, so you can use
`int` for just about anything. This is the way integers work in Neo VM and
adding proper Go types emulation is considered to be too costly.
* goroutines, channels and garbage collection are not supported and will
never be because emulating those aspects of the Go runtime on top of Neo VM is
close to impossible
* `defer` and `recover` are supported except for cases where panic occurs in
`return` statement, because this complicates implementation and imposes runtime
* `defer` and `recover` are supported except for the cases where panic occurs in a
`return` statement because this complicates implementation and imposes runtime
overhead for all contracts. This can easily be mitigated by first storing values
in variables and returning the result.
* lambdas are supported, but closures are not.
@ -53,8 +53,8 @@ this requires you to set proper `GOROOT` environment variable, like
export GOROOT=/usr/lib64/go/1.15
```
The best way to create a new contract is using `contract init` command. This will
create an example source file, config file and `go.mod` with `github.com/nspcc-dev/neo-go/pkg/interop` dependency.
The best way to create a new contract is to use the `contract init` command. This will
create an example source file, a config file and `go.mod` with the `github.com/nspcc-dev/neo-go/pkg/interop` dependency.
```
$ ./bin/neo-go contract init --name MyAwesomeContract
$ cd MyAwesomeContract
@ -73,8 +73,8 @@ $ go mod tidy
```
By default, the filename will be the name of your .go file with the .nef
extension, the file will be located in the same directory where your Go contract
is. If you want another location for your compiled contract:
extension, and the file will be located in the same directory as your Go contract.
If you want another location for your compiled contract:
```
./bin/neo-go contract compile -i contract.go --out /Users/foo/bar/contract.nef
@ -207,14 +207,14 @@ other supported language.
### Deploying
Deploying a contract to blockchain with neo-go requires both NEF and JSON
manifest generated by the compiler from configuration file provided in YAML
format. To create contract manifest pass YAML file with `-c` parameter and
specify manifest output file with `-m`:
manifest generated by the compiler from a configuration file provided in YAML
format. To create a contract manifest, pass a YAML file with the `-c` parameter and
specify the manifest output file with `-m`:
```
./bin/neo-go contract compile -i contract.go -c config.yml -m contract.manifest.json
```
Example YAML file contents:
Example contents of such a YAML file:
```
name: Contract
safemethods: []
@ -226,14 +226,14 @@ events:
type: String
```
Then the manifest can be passed to the `deploy` command via `-m` option:
Then, the manifest can be passed to the `deploy` command via the `-m` option:
```
$ ./bin/neo-go contract deploy -i contract.nef -m contract.manifest.json -r http://localhost:20331 -w wallet.json
```
Deployment works via an RPC server, an address of which is passed via `-r`
option and should be signed using a wallet from `-w` option. More details can
option, and the deployment transaction should be signed using a wallet from the `-w` option. More details can
be found in `deploy` command help.
#### Config file
@ -271,7 +271,7 @@ anything else | `Any`
`interop.*` types are defined as aliases in `github.com/nspcc-dev/neo-go/pkg/interop` module
with the sole purpose of correct manifest generation.
As an example consider `Transfer` event from `NEP-17` standard:
As an example, consider the `Transfer` event from the `NEP-17` standard:
```
- name: Transfer
parameters:
@ -285,14 +285,14 @@ As an example consider `Transfer` event from `NEP-17` standard:
By default, the compiler performs some sanity checks. Most of the time
it will report missing events and/or parameter type mismatch.
Using variable as an event name in code isn't prohibited but will prevent
compiler from analyzing an event. It is better to use either constant or string literal.
It isn't prohibited to use a variable as an event name in code, but it will prevent
the compiler from analyzing the event. It is better to use either a constant or a string literal.
The check can be disabled with `--no-events` flag.
##### Permissions
Each permission specifies contracts and methods allowed for this permission.
If a contract is not specified in a rule, the specified set of methods can be called on any contract.
By default, no calls are allowed. The simplest permission is to allow everything:
```
- methods: '*'
```
for most of the NEP-17 token implementations:
```
- methods: ["onNEP17Payment"]
```
In addition to `methods`, a permission can have one of these fields:
1. `hash` contains a hash and restricts the set of contracts to a single contract.
2. `group` contains a public key and restricts the set of contracts to those that
have the corresponding group in their manifest.
Consider an example:
```
...
```

This set of permissions allows calling:
- `start` and `stop` methods of contract with hash `fffdc93764dbaddd97c48f252a53ea4643faa3fd`
- `update` method of contract in group with public key `03184b018d6b2bc093e535519732b3fd3f7551c8cffaf4621dd5a0b89482ca66c9`
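A permissions section consistent with the calls listed above might look like this (a sketch reconstructing the elided example; the exact entries are assumptions):

```yaml
- hash: fffdc93764dbaddd97c48f252a53ea4643faa3fd
  methods: ["start", "stop"]
- group: 03184b018d6b2bc093e535519732b3fd3f7551c8cffaf4621dd5a0b89482ca66c9
  methods: ["update"]
```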
Also note that a native contract must be included here too. For example, if your contract
transfers NEO/GAS or gets some info from the `Ledger` contract, all of these
calls must be allowed in permissions.
The compiler does its best to ensure that correct permissions are specified in the config.
Incorrect permissions will result in runtime invocation failures.
Using either a constant or a literal for the contract hash and method will allow the compiler
to perform more extensive analysis.
This check can be disabled with the `--no-permissions` flag.
##### Overloads
NeoVM allows a contract to have multiple methods with the same name
but a different number of parameters. Go lacks this feature, but it can be circumvented
with the `overloads` section. Essentially, it is a mapping from default contract method names
to the new ones.
```
- overloads:
oldName1: newName
oldName2: newName
```
Since the use case for this is to provide multiple implementations with the same ABI name,
`newName` is required to be already present in the compiled contract.
As an example, consider the [`NEP-11` standard](https://github.com/neo-project/proposals/blob/master/nep-11.mediawiki#transfer).
It requires a divisible NFT contract to have 2 `transfer` methods. To achieve this, we might implement
`Transfer` and `TransferDivisible` and specify the emitted name in the config:
```
- overloads:
transferDivisible: transfer
```

This is achieved with the `manifest add-group` command:
```
./bin/neo-go contract manifest add-group -n contract.nef -m contract.manifest.json --sender <sender> --wallet /path/to/wallet.json --account <account>
```
It accepts contract `.nef` and manifest files emitted by `compile` command as well as
sender and signer accounts. `--sender` is the account that will send the deploy transaction later (not necessarily in the wallet).
`--account` is the wallet account which signs the contract hash using the group private key.
#### Neo Express support
It's possible to deploy contracts written in Go using [Neo
Express](https://github.com/neo-project/neo-express), which is a part of [Neo
Blockchain
Toolkit](https://github.com/neo-project/neo-blockchain-toolkit/). To do that,
you need to generate a different metadata file using YAML written for
deployment with neo-go. It's done in the same step with compilation via
`--config` input parameter and `--abi` output parameter, combined with debug
```
$ ./bin/neo-go contract compile -i contract.go --config contract.yml -o contract
```
This file can then be used by toolkit to deploy contract the same way
contracts in other languages are deployed.
### Invoking
You can import your contract into a standalone VM and run it there (see [VM
documentation](vm.md) for more info), but that only works for simple contracts
that don't use the blockchain a lot. For more realistic contracts, you need to deploy
them first and then do test invocations and regular invocations with `contract

# NeoGo consensus node
NeoGo node can act as a consensus node. A consensus node differs from a regular
one in that it participates in the block acceptance process using the dBFT
protocol. Any committee node can also be elected as a CN; therefore, they're
expected to follow the same setup process as CNs (to be ready to become CNs
if/when they're elected).
While regular nodes on the Neo network don't need any special keys, CNs always have
one used to sign dBFT messages and blocks. So, the main difference between
a regular node and a consensus/committee node is that the latter should be configured to
use some key from some wallet.
## Running a CN on public networks
be enough for the first year of blockchain).
NeoGo is a single binary that can be run on any modern GNU/Linux
distribution. We recommend using major well-supported OSes like CentOS, Debian
or Ubuntu. Make sure they're updated with the latest security patches.
No additional packages are needed for NeoGo CN.
Github](https://github.com/nspcc-dev/neo-go/releases) or use [Docker
image](https://hub.docker.com/r/nspccdev/neo-go). It has everything included,
no additional plugins needed.
Take an appropriate (mainnet/testnet) configuration [from the
repository](https://github.com/nspcc-dev/neo-go/tree/master/config) and save
it in some directory (we'll assume that it's available in the same directory as
the neo-go binary).
### Configuration and execution
is a password to your CN key. Run the node in a regular way after that:
```
$ neo-go node --mainnet --config-path ./
```
where `--mainnet` is your network (can be `--testnet` for testnet) and
`--config-path` is a path to the configuration file directory. If the node starts
fine, it'll be logging events like synchronized blocks. The node doesn't have
any interactive CLI; it only outputs logs, so you can wrap this command in a
systemd service file to run automatically on system startup.
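For example, a minimal systemd unit could look like this (a sketch; the unit name, paths and user are assumptions to adapt to your setup):

```ini
# /etc/systemd/system/neo-go.service -- hypothetical locations
[Unit]
Description=NeoGo mainnet node
After=network.target

[Service]
ExecStart=/usr/local/bin/neo-go node --mainnet --config-path /etc/neo-go
Restart=on-failure
User=neogo

[Install]
WantedBy=multi-user.target
```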
Notice that the default configuration has RPC and Prometheus services enabled.
You can turn them off for security purposes or restrict access to them with a
firewall. Carefully review all other configuration options to see if they meet
your expectations. Details on various configuration options are provided in the
[node configuration documentation](node-configuration.md); CLI commands are
provided in the [CLI documentation](cli.md).
### Registration
To register as a candidate, use neo-go as a CLI command with an external RPC
server for it to connect to (for chain data and transaction submission). You
can use any public RPC server or an RPC server of your own like the node
started at the previous step. We'll assume that you run the next command on
the same node in default configuration with RPC interface available at port
10332.
path to the NEP-6 wallet file and `http://localhost:10332` is an RPC node to
use.
This command will create and send an appropriate transaction to the network;
you should then wait for it to settle in a block. If all goes well, it'll end
with the "HALT" state and your registration will be completed. You can use the
`query tx` command to see the transaction status or `query candidates` to see if
your candidate has been added.
### Voting
After registration is completed, if you own some NEO, you can also vote for your
candidate to help it become a CN and receive additional voter GAS. To do that,
you need to know the public key of your candidate, which can either be seen in
the `query candidates` command output or extracted from the wallet with the `wallet dump-keys`
command:
use:
```
$ neo-go wallet candidate vote -a NiKEkwz6i9q6gqfCizztDoHQh9r9BtdCNa -w wallet.json -r http://localhost:10332 -c 0363f6678ea4c59e292175c67e2b75c9ba7bb73e47cd97cdf5abaf45de157133f5
```
where `NiKEkwz6i9q6gqfCizztDoHQh9r9BtdCNa` is the voter's address, `wallet.json`
is the NEP-6 wallet file path, `http://localhost:10332` is the RPC node address and
`0363f6678ea4c59e292175c67e2b75c9ba7bb73e47cd97cdf5abaf45de157133f5` is the
public key the voter votes for. This command also returns the transaction hash, and you
need to wait for this transaction to be accepted into one of the subsequent blocks.
## Private NeoGo network
Four-node setup uses ports 20333-20336 for P2P communication and ports
20001-20004). Single-node is on ports 20333/30333/20001 for
P2P/RPC/Prometheus.
NeoGo default privnet configuration is made to work with four-node consensus;
you have to modify it if you're to use a single consensus node.
Node wallets are located in the `.docker/wallets` directory where
`wallet1_solo.json` is used for the single-node setup and all others for
four-node setup.
#### Prerequisites
- `go` compiler
#### Instructions
You can use an existing docker-compose file located in `.docker/docker-compose.yml`:
```bash
make env_image # build image
make env_up # start containers, use "make env_single" for single CN
make env_clean
```
### Start nodes manually
1. Create a separate config directory for every node and
place the corresponding config named `protocol.privnet.yml` there.
2. Edit the configuration file for every node.
Examples can be found at `config/protocol.privnet.docker.one.yml` (`two`, `three` etc.).
1. Add `UnlockWallet` section with `Path` and `Password` strings for NEP-6
wallet path and the password for the account to be used for the consensus node.
2. Make sure that your `MinPeers` setting is equal to
the number of nodes participating in consensus.
This requirement is needed for nodes to correctly
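The `UnlockWallet` settings from step 1 can be sketched as follows (assuming the standard `ApplicationConfiguration` layout; path and password are placeholders):

```yaml
ApplicationConfiguration:
  UnlockWallet:
    Path: "wallet1.json"
    Password: "one"
```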

# Conventions
This document lists the conventions that this repo should follow. These are
guidelines, and if you believe that one should not be followed, please state
why in your PR. If you believe that a piece of code does not follow one of the
conventions listed, please open an issue before making any changes.
When submitting a new convention, please open an issue for discussion. If
possible, please highlight the parts in the code where this convention could help the

# NeoGo node configuration file
This section contains a detailed NeoGo node configuration file description,
including default config values and some tips to set up configurable values.
Each config file contains two sections. `ApplicationConfiguration` describes node-related
settings and `ProtocolConfiguration` contains protocol-related settings. See the
node-related settings described in the table below.
| Section | Type | Default value | Description |
| --- | --- | --- | --- |
| Address | `string` | `127.0.0.1` | Node address that P2P protocol handler binds to. |
| AnnouncedPort | `uint16` | Same as `NodePort` | Node port which should be used to announce the node's port on the P2P layer; it can differ from the `NodePort` the node is bound to (for example, if your node is behind NAT). |
| AttemptConnPeers | `int` | `20` | Number of connections to try to establish when the connection count drops below the `MinPeers` value.|
| DBConfiguration | [DB Configuration](#DB-Configuration) | | Describes the database configuration. See the [DB Configuration](#DB-Configuration) section for details. |
| DialTimeout | `int64` | `0` | Maximum duration a single dial may take in seconds. |
| ExtensiblePoolSize | `int` | `20` | Maximum amount of the extensible payloads from a single sender stored in a local pool. |
| LogPath | `string` | "", so only console logging | File path where to store node logs. |
| MaxPeers | `int` | `100` | Maximum number of peers that can be connected to the server. |
| MinPeers | `int` | `5` | Minimum number of peers for normal operation; when the node has fewer than this number of peers, it tries to connect with some new ones. |
| NodePort | `uint16` | `0`, which is any free port | The actual node port it is bound to. |
| Oracle | [Oracle Configuration](#Oracle-Configuration) | | Oracle module configuration. See the [Oracle Configuration](#Oracle-Configuration) section for details. |
| P2PNotary | [P2P Notary Configuration](#P2P-Notary-Configuration) | | P2P Notary module configuration. See the [P2P Notary Configuration](#P2P-Notary-Configuration) section for details. |
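A few of the settings above can be sketched in a config file like this (values mostly mirror the defaults from the table; `NodePort` is set explicitly for illustration):

```yaml
ApplicationConfiguration:
  Address: 127.0.0.1
  NodePort: 20333
  MinPeers: 5
  AttemptConnPeers: 20
  MaxPeers: 100
  DialTimeout: 3
```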
```
RPC:
  ...
  KeyFile: serv.key
```
where:
- `Enabled` denotes whether an RPC server should be started.
- `Address` is the address the RPC server listens on.
- `EnableCORSWorkaround` enables Cross-Origin Resource Sharing and is useful if
you're accessing the RPC interface from the browser.
protocol-related settings described in the table below.
| Section | Type | Default value | Description | Notes |
| --- | --- | --- | --- | --- |
| CommitteeHistory | map[uint32]int | none | Number of committee members after the given height, for example `{0: 1, 20: 4}` sets up a chain with one committee member since the genesis and then changes the setting to 4 committee members at the height of 20. The `StandbyCommittee` setting must have the number of keys equal to or exceeding the highest value in this option. Block numbers where the change happens must be divisible by the old and by the new values simultaneously. If not set, committee size is derived from the `StandbyCommittee` setting and never changes. |
| GarbageCollectionPeriod | `uint32` | 10000 | Controls the MPT garbage collection interval (in blocks) for configurations with `RemoveUntraceableBlocks` enabled and `KeepOnlyLatestState` disabled. In this mode the node stores a number of MPT trees (corresponding to `MaxTraceableBlocks` and `StateSyncInterval`), but the DB needs to be cleaned of old entries from time to time. Doing it too often will cause too much processing overhead, doing it too rarely will leave more useless data in the DB. |
| KeepOnlyLatestState | `bool` | `false` | Specifies if MPT should only store the latest state. If true, DB size will be smaller, but older roots won't be accessible. This value should remain the same for the same database. | Conflicts with `P2PStateExchangeExtensions`. |
| StateSyncInterval | `int` | `40000` | The number of blocks between state heights available for MPT state data synchronization. | `P2PStateExchangeExtensions` should be enabled to use this setting. |
| ValidatorsCount | `int` | `0` | Number of validators set for the whole network lifetime; can't be set if the `ValidatorsHistory` setting is used. |
| ValidatorsHistory | map[uint32]int | none | Number of consensus nodes to use after the given height (see also `CommitteeHistory`). Heights where the change occurs must be divisible by the number of committee members at that height. Can't be used with `ValidatorsCount` not equal to zero. |
| VerifyBlocks | `bool` | `false` | Denotes whether to verify the received blocks. |
| VerifyTransactions | `bool` | `false` | Denotes whether to verify transactions in the received blocks. |
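Similarly, a protocol-section sketch using some of the settings above (values are illustrative, not recommendations):

```yaml
ProtocolConfiguration:
  ValidatorsCount: 7
  StateSyncInterval: 40000
  VerifyBlocks: true
  VerifyTransactions: false
```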

The original problem definition:
> any interaction, which is the case for oracle nodes or NeoFS inner ring nodes.
>
> As some of the services using this mechanism can be quite sensitive to the
> latency of their requests processing, it should be possible to construct a complete
> transaction within the time frame between two consecutive blocks.
doing the actual work. It uses generic `Conflicts` and `NotValidBefore`
transaction attributes for its purposes as well as an additional special one
(`Notary assisted`).
A new designated role is added, `P2PNotary`. It can have an arbitrary number of
keys associated with it.
To use the service, one should pay some GAS, so below we operate with `FEE` as a unit of cost
for this service. `FEE` is set to be 0.1 GAS.
We'll also use the `NKeys` definition as the number of keys that participate in the
witnesses that's K+N*L.
#### Conflicts
This attribute makes the chain accept only one of the two conflicting transactions
and adds an ability to give a priority to any of the two if needed. This
attribute was originally proposed in
[neo-project/neo#1991](https://github.com/neo-project/neo/issues/1991).
The attribute has Uint256 data inside containing the hash of the conflicting
transaction. It is allowed to have multiple attributes of this type.
#### NotValidBefore
was originally proposed in
The attribute has uint32 data inside which is the block height starting from
which the transaction is considered to be valid. It can be seen as the opposite
of `ValidUntilBlock`. Using both allows having a window of valid block numbers
that this transaction could be accepted into. Transactions with this attribute
are not accepted into the mempool before the specified block is persisted.
It can be used to create some transactions in advance with a guarantee that they
won't be accepted until the specified block.
#### NotaryAssisted
This attribute holds one byte containing the number of transactions collected
by the service. It could be 0 for a fallback transaction or `NKeys` for a normal
transaction that completed its P2P signature collection. Transactions using this
attribute need to pay an additional network fee of (`NKeys`+1)×`FEE`. This attribute
can only be used by transactions signed by the notary native contract.
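The fee arithmetic above can be sketched as follows (a minimal illustration, assuming GAS's 8 decimal places so that `FEE` = 0.1 GAS = 10^7 in GAS's smallest unit; `notaryAttrFee` is a hypothetical helper, not part of the neo-go API):

```go
package main

import "fmt"

// FEE is the Notary service fee unit: 0.1 GAS expressed in GAS's
// smallest unit (GAS uses 8 decimal places, so 0.1 GAS = 10^7).
const FEE = 10_000_000

// notaryAttrFee computes the additional network fee paid by a
// transaction carrying the NotaryAssisted attribute: (NKeys+1)×FEE.
func notaryAttrFee(nKeys uint8) int64 {
	return (int64(nKeys) + 1) * FEE
}

func main() {
	fmt.Println(notaryAttrFee(0)) // fallback transaction: 1×FEE
	fmt.Println(notaryAttrFee(3)) // main transaction with 3 keys: 4×FEE
}
```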
### Native Notary contract
This payload has two incomplete transactions inside:
than the current chain height, and it must have a `Conflicts` attribute with the
hash of the main transaction. At the same time, it must have a `Notary assisted`
attribute with a count of zero.
- *Main tx*. This is the one that actually needs to be completed; it:
1. *either* doesn't have all witnesses attached
2. *or* only has a partial multisignature
3. *or* doesn't have all witnesses attached and some of the rest are partial multisignatures
This transaction must have `Notary assisted` attribute with a count of `NKeys`
construct and send the payload.
Node module with the designated key monitors the network for `P2PNotaryRequest`
payloads. It maintains a list of current requests grouped by main transaction
hash. When it receives enough requests to correctly construct all transaction
witnesses, it does so, adds a witness of its own (for the Notary contract witness) and
sends the resulting transaction to the network.
If the main transaction with all witnesses attached still can't be validated
due to any fee (or other) issues, the node waits for the `NotValidBefore` block of
the fallback transaction to be persisted.
If the `NotValidBefore` block is persisted and there are still some signatures
missing (or the resulting transaction is invalid), the module sends all the
associated fallback transactions for the main transaction.
After processing, the service request is deleted from the module.
See the [NeoGo P2P signature extensions](#NeoGo P2P signature extensions) on how
to enable notary-related extensions on chain and
set up Notary service node.
## Environment setup
To run the P2P signature collection service on your network, you need to:
* Set up [`P2PSigExtensions`](#NeoGo P2P signature extensions) for all nodes in
the network.
* Set notary node keys in `RoleManagement` native contract.
notary requests to the network.
### NeoGo P2P signature extensions
Since the Notary service is an extension of the standard NeoGo node, it should be
enabled and properly configured before usage.
#### Configuration
Notary contract and designate `P2PNotary` node role in RoleManagement native
contract.
If you use a custom `NativeActivations` subsection of the `ProtocolConfiguration`
section in your node config, specify the height of the Notary contract
activation, e.g. `0`.
Note that even if the `P2PSigExtensions` config subsection enables notary-related
To enable notary service node functionality, refer to the
### NeoGo Notary service node module
NeoGo node can act as a notary service node (the node that accumulates notary
requests, collects signatures and releases fully-signed transactions). It must
have a wallet with a key belonging to one of the network's designated notary nodes
(stored in the `RoleManagement` native contract). Also, the node must be connected to
a network with enabled P2P signature extensions, otherwise problems with states
and peer disconnections will occur.
The notary service node doesn't need [RPC service](rpc.md) to be enabled because it
receives notary requests and broadcasts completed transactions via P2P protocol.
However, enabling [RPC service](rpc.md) allows sending notary requests directly
to the notary service node and avoid P2P communication delays.
```
P2PNotary:
  ...
```
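The truncated `P2PNotary` configuration fragment above might be fleshed out like this (a sketch; the `UnlockWallet` subsection layout is assumed by analogy with other node services, and the path/password are placeholders):

```yaml
P2PNotary:
  Enabled: true
  UnlockWallet:
    Path: "notary_wallet.json"
    Password: "pass"
```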
Below are all the stages each P2P signature collection request goes through. Use
stages 1 and 2 to create, sign and submit a P2P notary request. Stage 3 is
performed by the notary service; it does not require the user's intervention and is given
for informational purposes. Stage 4 contains advice to check for notary request
results.
sender's deposit to the Notary native contract is used. Before the notary request is
submitted, you need to deposit enough GAS to the contract; otherwise, the request
won't pass verification.
The Notary native contract supports the `onNEP17Payment` method. Thus, to deposit funds to
the Notary native contract, transfer the desired amount of GAS to the contract
address. Use
[func (*Client) TransferNEP17](https://pkg.go.dev/github.com/nspcc-dev/neo-go@v0.97.2/pkg/rpc/client#Client.TransferNEP17)
with the `data` parameter matching the following requirements:
- `data` should be an array of two elements: `to` and `till`.
- `to` denotes the receiver of the deposit. It can be nil in case `to` equals
  the GAS sender.
- `till` denotes the chain's height before which the deposit is locked and can't be
withdrawn. `till` can't be set if you're not the deposit owner. The default `till`
value is the current chain height + 5760. `till` can't be less than the current chain
height. `till` can't be less than the currently set `till` value for that deposit if
the deposit already exists.
Note that the first deposit call for the `to` address can't transfer less than 2×`FEE` GAS.
Deposit is allowed for renewal, i.e. subsequent `deposit` calls for the same `to`
address add the specified amount to the already deposited value.
After the GAS transfer is successfully submitted to the chain, use the [Notary native
contract API](#Native Notary contract) to manage your deposit.
Note that the regular operation flow requires the deposited amount of GAS to be
sufficient to pay for *all* fallback transactions that are currently submitted (all
in-flight notary requests). The default deposit sum for one fallback transaction
should be enough to pay the fallback transaction fees, which are the system fee and
network fee. Fallback network fee includes (`NKeys`+1)×`FEE` = (0+1)×`FEE` = `FEE`
GAS for `NotaryAssisted` attribute usage and regular fee for the fallback size.
If you need to submit several notary requests, ensure that the deposited amount is
enough to pay for all fallbacks. If the deposited amount is not enough to pay the
fallback fees, an `Insufficient funds` error will be returned from the RPC node
after notary request submission.
### 2. Request submission
Once several parties want to sign one transaction, each of them should generate
the transaction, wrap it into a `P2PNotaryRequest` payload and send it to the known RPC
server via [`submitnotaryrequest` RPC call](./rpc.md#submitnotaryrequest-call).
Note that all parties must generate the same main transaction, while fallbacks
can differ.
To create a notary request, you can use the [NeoGo RPC client](./rpc.md#Client). Follow
the steps to create a signature request:
1. Prepare a list of signers with scopes for the main transaction (i.e. the
transaction that signatures are being collected for; that will be the `Signers`
transaction field). Use the following rules to construct the list:
* The first signer is the one who pays the transaction fees.
* Each signer is either a multisignature or a standard signature or a contract
signer.
* Multisignature and signature signers can be combined.
* Contract signer can be combined with any other signer.
Include Notary native contract in the list of signers with the following
constraints:
* Notary signer hash is the hash of a native Notary contract that can be fetched
from
[func (*Client) GetNativeContractHash](https://pkg.go.dev/github.com/nspcc-dev/neo-go@v0.97.2/pkg/rpc/client#Client.GetNativeContractHash).
* A notary signer must have `None` scope.
* A notary signer shouldn't be placed at the beginning of the signer list
because Notary contract does not pay main transaction fees. Other positions
in the signer list are available for a Notary signer.
2. Construct a script for the main transaction (that will be `Script` transaction
field) and calculate system fee using regular rules (that will be `SystemFee`
transaction field). Probably, you'll perform one of these actions:
1. If the script is a contract method call, use `invokefunction` RPC API
[func (*Client) InvokeFunction](https://pkg.go.dev/github.com/nspcc-dev/neo-go@v0.97.2/pkg/rpc/client#Client.InvokeFunction)
and fetch the script and the gas consumed from the result.
2. If the script is more complicated than just a contract method call,
construct the script manually and use `invokescript` RPC API
[func (*Client) InvokeScript](https://pkg.go.dev/github.com/nspcc-dev/neo-go@v0.97.2/pkg/rpc/client#Client.InvokeScript)
to fetch the gas consumed from the result.
3. Or just construct the script and set system fee manually.
3. Calculate the height main transaction is valid until (that will be
`ValidUntilBlock` transaction field). Consider the following rules for `VUB`
value estimation:
* `VUB` value must not be lower than the current chain height.
* The whole notary request (including fallback transaction) is valid until
the same `VUB` height.
* `VUB` value must be lower than notary deposit expiration height. This
condition guarantees that the deposit won't be withdrawn before notary
service payment.
* All parties must provide the same `VUB` for the main transaction.
4. Construct the list of main transaction attributes (that will be `Attributes`
transaction field). The list must include `NotaryAssisted` attribute with
`NKeys` equals the overall number of the keys to be collected excluding notary and
other contract-based witnesses. For m out of n multisignature request
`NKeys = n`. For multiple standard signature request signers, `NKeys` equals
the standard signature signers count.
5. Construct a list of accounts (`wallet.Account` structure from the `wallet`
package) to calculate network fee for the transaction
using the following rules. This list will be used in the next step.
- The number and the order of the accounts should match the transaction signers
constructed at step 1.
- An account for a contract signer should have `Contract` field with `Deployed` set
to `true` if the corresponding contract is deployed on chain.
- An account for a signature or a multisignature signer should have `Contract` field
with `Deployed` set to `false` and `Script` set to the signer's verification
script.
- An account for a notary signer is **just a placeholder** and should have
`Contract` field with `Deployed` set to `false`, i.e. the default value for
`Contract` field. That's needed to skip notary verification during regular
network fee calculation at the next step.
7. Calculate network fee for the transaction (that will be `NetworkFee`
transaction field). Network fee consists of several parts:
- *Notary network fee.* That's the amount of GAS needed to be paid for
`NotaryAssisted` attribute usage and for notary contract witness
verification (that is to be added by the notary node in the end of
signature collection process). Use
[func (*Client) CalculateNotaryFee](https://pkg.go.dev/github.com/nspcc-dev/neo-go@v0.97.2/pkg/rpc/client#Client.CalculateNotaryFee)
to calculate notary network fee. Use `NKeys` estimated at step 4 as an
argument.
- *Regular network fee.* That's the amount of GAS to be paid for other witnesses
verification. Use
[func (*Client) AddNetworkFee](https://pkg.go.dev/github.com/nspcc-dev/neo-go@v0.97.2/pkg/rpc/client#Client.AddNetworkFee)
to calculate regular network fee and add it to the transaction. Use
partially-filled main transaction from the previous steps as `tx` argument.
Use notary network fee calculated at the previous substep as `extraFee`
argument. Use the list of accounts constructed at step 5 as `accs`
argument.
8. Fill in the main transaction `Nonce` field.
9. Construct a list of the main transaction's witnesses (that will be `Scripts`
transaction field). Use the following rules:
- A contract-based witness should have `Invocation` script that pushes arguments
on stack (it may be empty) and empty `Verification` script. If multiple notary
requests provide different `Invocation` scripts, the first one will be used
to construct contract-based witness.
- A **Notary contract witness** (which is also a contract-based witness) should
have empty `Verification` script. `Invocation` script should be of the form
[opcode.PUSHDATA1, 64, make([]byte, 64)...], i.e. to be a placeholder for
a notary contract signature.
- A standard signature witness must have regular `Verification` script filled
even if the `Invocation` script is to be collected from other notary
requests.
`Invocation` script either should push signature bytes on stack **or** (in
case the signature is to be collected) **should be empty**.
- A multisignature witness must have regular `Verification` script filled even
if `Invocation` script is to be collected from other notary requests.
`Invocation` script either should push on stack signature bytes (one
signature at max per one request) **or** (in case there's no ability to
provide proper signature) **should be empty**.
10. Define lifetime for the fallback transaction. Let the `fallbackValidFor` be
the lifetime. Let `N` be the current chain's height and `VUB` be
`ValidUntilBlock` value estimated at step 3. Then, the notary node is trying to
collect signatures for the main transaction from `N` up to
`VUB-fallbackValidFor`. In case of failure after `VUB-fallbackValidFor`-th
block is accepted, the notary node abandons attempts to complete the main transaction and
tries to push all associated fallbacks. Use the following rules to define
`fallbackValidFor`:
- `fallbackValidFor` shouldn't be more than `MaxNotValidBeforeDelta` value.
- Use [func (*Client) GetMaxNotValidBeforeDelta](https://pkg.go.dev/github.com/nspcc-dev/neo-go@v0.97.2/pkg/rpc/client#Client.GetMaxNotValidBeforeDelta)
to check the `MaxNotValidBeforeDelta` value.
11. Construct a script for the fallback transaction. The script may do something useful,
e.g. invoke a method of a contract. However, if you don't need to perform anything
special on fallback invocation, you can use simple `opcode.RET` script.
12. Sign and submit P2P notary request. Use
[func (*Client) SignAndPushP2PNotaryRequest](https://pkg.go.dev/github.com/nspcc-dev/neo-go@v0.97.2/pkg/rpc/client#Client.SignAndPushP2PNotaryRequest) for it.
- Use the signed main transaction from step 8 as `mainTx` argument.
- Use the fallback script from step 10 as `fallbackScript` argument.
- Use `-1` as `fallbackSysFee` argument to define system fee by test
invocation or provide any custom value.
- Use `0` as `fallbackNetFee` argument not to add extra network fee to the
fallback.
- Use the `fallbackValidFor` estimated at step 9 as `fallbackValidFor` argument.
- Use the account you'd like to send the request (and fallback transaction) from
to sign the request (and fallback transaction).
`SignAndPushP2PNotaryRequest` will construct and sign a fallback transaction,
construct and sign a P2PNotaryRequest and submit it to the RPC node. The
resulting notary request and an error are returned.
After P2PNotaryRequests are sent, participants should wait for one of their
transactions (main or fallback) to get accepted into one of subsequent blocks.
### 3. Signatures collection and transaction release
A valid P2PNotaryRequest payload is distributed via P2P network using standard
broadcasting mechanisms until it reaches the designated notary nodes that have the
respective node module active. They collect all payloads for the same main
transaction until enough signatures are collected to create proper witnesses for
it. Then, they attach all witnesses required and send this transaction as usual
and monitor subsequent blocks for its inclusion.
All the operations leading to successful transaction creation are independent
of the chain and could easily be done within one block interval. So, if the
first service request is sent at the current height `H`, the main transaction
is highly likely to be a part of `H+1` block.
### 4. Results monitoring
Once the P2PNotaryRequest reaches RPC node, it is added to the notary request pool.
Completed or outdated requests are removed from the pool. Use
[NeoGo notification subsystem](./notifications.md) to track request addition and
removal:
- Use RPC `subscribe` method with `notary_request_event` stream name parameter to
subscribe to `P2PNotaryRequest` payloads that are added or removed from the
notary request pool.
- Use `sender` or `signer` filters to filter out a notary request with the desired
request senders or main tx signers.
Use the notification subsystem to track that the main or the fallback transaction
is accepted to the chain:
- Use RPC `subscribe` method with `transaction_added` stream name parameter to
subscribe to transactions that are accepted to the chain.
- Use `sender` filter with the Notary native contract hash to filter out fallback
transactions sent by the Notary node. Use `signer` filter with the notary request
sender address to filter out the fallback transactions sent by the specified
sender.
- Use `sender` or `signer` filters to filter out the main transaction with the desired
sender or signers. You can also filter out the main transaction using Notary
contract `signer` filter.
- Don't rely on `sender` and `signer` filters only, also check that the received
transaction has `NotaryAssisted` attribute with the expected `NKeys` value.
Use the notification subsystem to track main or fallback transaction execution
results.
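As an illustration of the monitoring setup above, a `subscribe` call for notary request events with a `signer` filter might look like this (the hash below is a random placeholder, not a real account):

```json
{
  "jsonrpc": "2.0",
  "method": "subscribe",
  "params": ["notary_request_event", {"signer": "cdbcaaa8237f96e9c7c90998db88eb7c47bb45f2"}],
  "id": 1
}
```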
Several use-cases where Notary subsystem can be applied are described below.
### Committee-signed transactions
The signature collection problem occurs every time committee participants need
to submit a transaction with `m out of n` multisignature, e.g.:
- transfer initial supply of NEO and GAS from a committee multisignature account to
other addresses on new chain start
- tune valuable chain parameters like gas per block, candidate register price,
minimum contract deployment fee, Oracle request price, native Policy values etc
- invoke non-native contract methods that require committee multisignature witness
Current solution offers off-chain non-P2P signature collection (either manual
or using some additional network connectivity). It has an obvious downside of
reliance on something external to the network. If it's manual, it's slow and
error-prone; if it's automated, it requires additional protocol for all the
parties involved. For the protocol used by oracle nodes, it also means
explicitly exposing nodes to each other.
With the Notary service all signature collection logic is unified and is on chain already.
The only thing that committee participants should perform is to create and submit
a P2P notary request (can be done independently). Once a sufficient number of signatures
is collected by the service, the desired transaction will be applied and pass committee
witness verification.
### NeoFS Inner Ring nodes
Signature collection for Alphabet nodes of the Inner Ring is a particular case of committee-signed
transactions. Alphabet nodes multisignature is used for various cases, such as:
- main chain and side chain funds synchronization and withdrawal
- bootstrapping new storage nodes to the network
- network map management and epoch update
Non-notary on-chain solution for Alphabet nodes multisignature forming is
imitated via contracts collecting invocations of their methods signed by standard
signature of each Alphabet node. Once a sufficient number of invocations is
collected, the invocation is performed.
The described solution has several drawbacks:
be duplicated) because we can't create transactions from transactions (thus
using proper multisignature account is not possible)
- for `m out of n` multisignature we need at least `m` transactions instead of
one we really wanted to have; in practice we'll create and process `n` of
them, so this adds substantial overhead to the chain
- some GAS is inevitably wasted because any invocation could either go the easy
path (just adding a signature to the list) or really invoke the function we
Notary on-chain Alphabet multisignature collection solution
[uses Notary subsystem](https://github.com/nspcc-dev/neofs-node/pull/404) to
successfully solve these problems, e.g. to calculate precisely the amount of GAS to
pay for contract invocation witnessed by Alphabet nodes (see
[nspcc-dev/neofs-node#47](https://github.com/nspcc-dev/neofs-node/issues/47)),
to reduce container creation delay
### Contract-sponsored (free) transactions
The original problem and solution are described in
[neo-project/neo#2577](https://github.com/neo-project/neo/issues/2577) discussion.


Filters use conjunctional logic.
announcing the block itself
* transaction notifications are only announced for successful transactions
* all announcements are being done in the same order they happen on the chain
First, transaction execution is announced. It is then followed by notifications
generated during this execution. Next, follows the transaction announcement.
Transaction announcements are ordered the same way they're in the block.
* unsubscription may not cancel events that are pending but not yet sent
## Subscription management
To receive events, clients need to subscribe to them first via `subscribe`
method. Upon successful subscription, clients receive subscription ID for
subsequent management of this subscription. Subscription is only valid for
connection lifetime, no long-term client identification is being made.
Recognized stream names:
Filter: `primary` as an integer with primary (speaker) node index from
ConsensusData.
* `transaction_added`
Filter: `sender` field containing a string with hex-encoded Uint160 (LE
representation) for transaction's `Sender` and/or `signer` in the same
format for one of transaction's `Signers`.
* `notification_from_execution`
Filter: `contract` field containing a string with hex-encoded Uint160 (LE
representation) and/or `name` field containing a string with execution
notification name.
* `transaction_executed`
Filter: `state` field containing `HALT` or `FAULT` string for successful
and failed executions respectively.
* `notary_request_event`
Filter: `sender` field containing a string with hex-encoded Uint160 (LE
representation) for notary request's `Sender` and/or `signer` in the same
format for one of main transaction's `Signers`.
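As an illustration of the filter format described above, a `transaction_added` subscription with a `sender` filter could be sent like this (the Uint160 value is a placeholder):

```json
{
  "jsonrpc": "2.0",
  "method": "subscribe",
  "params": ["transaction_added", {"sender": "e28dda0d0df1b0c10c8f030eb26e357a5a833e03"}],
  "id": 2
}
```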
Events are sent as JSON-RPC notifications from the server with `method` field
being used for notification names. Notification names are identical to stream
names described for `subscribe` method with one important addition for
`event_missed`, which can be sent for any subscription to signify that some
events have not been delivered (usually when a client is unable to keep up with
the event flow).
Verbose responses for various structures like blocks and transactions are used
to simplify working with notifications on the client side. Returned structures
mostly follow the one used by standard Neo RPC calls but may have some minor
differences.
If a server-side event matches several subscriptions from one client, it's
only sent once.
### `block_added` notification
The first parameter (`params` section) contains a block converted to a JSON
structure, which is similar to a verbose `getblock` response but with the
following differences:
* it doesn't have `size` field (you can calculate it client-side)
* it doesn't have `nextblockhash` field (it's supposed to be the latest one
### `transaction_added` notification
The first parameter (`params` section) contains a transaction converted to
JSON, which is similar to a verbose `getrawtransaction` response, but with the
following differences:
* block's metadata is missing (`blockhash`, `confirmations`, `blocktime`)
### `transaction_executed` notification
It contains the same result as from `getapplicationlog` method in the first
parameter and no other parameters. The only difference from `getapplicationlog` is
that it always contains zero in the `contract` field.
Example:
### `notary_request_event` notification
It contains two parameters: event type, which could be one of "added" or "removed", and
added (or removed) notary request.
Example:


# NeoGo Oracle service
NeoGo node can act as an oracle service node for https and neofs protocols. It
has to have a wallet with a key belonging to one of the network's designated oracle
nodes (stored in `RoleManagement` native contract).
It needs [RPC service](rpc.md) to be enabled and configured properly because
## Configuration
To enable oracle service, add `Oracle` subsection to `ApplicationConfiguration`
section of your node config.
Parameters:
* `AllowPrivateHost`: boolean value, enables/disables private IPs (like
127.0.0.1 or 192.168.0.1) for https requests, it defaults to false and it's
false on public networks, but you can enable it for private ones.
* `AllowedContentTypes`: a list of allowed MIME types. Only `application/json`
is allowed by default. Can be left empty to allow everything.
* `Nodes`: a list of oracle node RPC endpoints, it's used for oracle node
communication. All oracle nodes should be specified there.
* `NeoFS`: a subsection of its own for NeoFS configuration with two
parameters:
- `Timeout`: request timeout, like "5s"
- `Nodes`: a list of NeoFS nodes (their gRPC interfaces) to get data from,
one node is enough to operate, but they're used in round-robin fashion,
so you can spread the load by specifying multiple nodes
* `MaxTaskTimeout`: maximum time a request can be active (retried to
## Operation
To run oracle service on your network, you need to:
* set oracle node keys in `RoleManagement` contract
* configure and run an appropriate number of oracle nodes with keys specified in
`RoleManagement` contract
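Putting the parameters described above together, an `Oracle` configuration fragment could look like the sketch below. All endpoints, the wallet path and the password are placeholder values, and the `Enabled`/`UnlockWallet` keys are assumed names for the enabling flag and the wallet subsection:

```yaml
ApplicationConfiguration:
  Oracle:
    Enabled: true
    AllowPrivateHost: false
    AllowedContentTypes:
      - application/json
    Nodes:
      - https://oracle1.example.com:10331
      - https://oracle2.example.com:10331
    NeoFS:
      Timeout: 5s
      Nodes:
        - st1.storage.example.com:8080
    MaxTaskTimeout: 3240s
    UnlockWallet:
      Path: "/path/to/oracle-wallet.json"
      Password: "pass"
```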


# Release instructions
This document outlines the neo-go release process. It can be used as a todo
list for a new release.
## Pre-release checks
These should run successfully:
* build
* unit-tests
* lint
Add an entry to the CHANGELOG.md following the style established there. Add a
codename, version and release date in the heading. Write a paragraph
describing the most significant changes done in this release. Then, add
sections with new features implemented and bugs fixed describing each change in detail and
with a reference to Github issues. Add generic improvements section for
changes that are not directly visible to the node end-user, such as performance
optimizations, refactoring and API changes. Add a "Behaviour changes" section
if there are any incompatible changes in default settings or the way commands
operate.
Use `vX.Y.Z` tag following the semantic versioning standard.
## Push changes and release tag to Github
This step should bypass the default PR mechanism to get a correct result (so
that releasing requires admin privileges for the project). Both the `master`
branch update and the tag must be pushed simultaneously like this:
$ git push origin master v0.70.1
## Deployment
Deploy the updated version to the mainnet/testnet.
## Post-release
The first commit after the release must be tagged with `X.Y.Z+1-pre` tag for
proper semantic-versioned builds. So, it's good to make some minor
documentation update after the release and push it with this new tag.


##### `invokefunction`
The neo-go implementation of `invokefunction` does not return `tx`
field in the answer because that requires signing the transaction with some
key in the server, which doesn't fit the model of our node-client interactions.
If this signature is lacking, the transaction is almost useless, so there is no point
in returning it.
It's possible to use `invokefunction` not only with a contract scripthash, but also
with a contract name (for native contracts) or a contract ID (for all contracts). This
feature is not supported by the C# node.
##### `getcontractstate`
##### `getrawtransaction`
VM state is included into verbose response along with other transaction fields if
the transaction is already on chain.
##### `getstateroot`
This method doesn't work for the Ledger contract, you can get data via regular
`getblock` and `getrawtransaction` calls. This method is able to get storage of
a native contract by its name (case-insensitive), unlike the C# node where
it's only possible for index or hash.
#### `getnep11balances` and `getnep17balances`
The neo-go implementation of `getnep11balances` and `getnep17balances` does not
perform tracking of NEP-11 and NEP-17 balances for each account as it is done
in the C# node. Instead, a neo-go node maintains a list of standard-compliant
contracts, i.e. those contracts that have `NEP-11` or `NEP-17` declared in the
supported standards section of the manifest. Each time balances are queried,
the neo-go node asks every NEP-11/NEP-17 contract for the account balance by
invoking `balanceOf` method with the corresponding args. Invocation GAS limit
is set to be 3 GAS. All non-zero balances are included in the RPC call result.
Thus, if a token contract doesn't have proper standard declared in the list of
supported standards but emits compliant NEP-11/NEP-17 `Transfer`
notifications, the token balance won't be shown in the list of balances
returned by the neo-go node (unlike the C# node behavior). However, transfer
logs of such tokens are still available via respective `getnepXXtransfers` RPC
calls.
The behavior of the `LastUpdatedBlock` tracking for archival nodes as far as for
governing token balances matches the C# node's one. For non-archival nodes and
other NEP-11/NEP-17 tokens, if the transfer's `LastUpdatedBlock` is lower than the
latest state synchronization point P the node is working against, then
`LastUpdatedBlock` equals P. For NEP-11 NFTs `LastUpdatedBlock` is equal for
all tokens of the same asset.
@ -139,7 +139,7 @@ all tokens of the same asset.
### Unsupported methods
Methods listed down below are not going to be supported for various reasons
Methods listed below are not going to be supported for various reasons
and we're not accepting issues related to them.
| Method | Reason |
@ -165,7 +165,7 @@ Some additional extensions are implemented as a part of this RPC server.
This method returns the cumulative system fee for all transactions included in a
block. It can be removed in future versions, but at the moment you can use it
to see how much GAS is burned with particular block (because system fees are
to see how much GAS is burned with a particular block (because system fees are
burned).
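
A sketch of such a request (the block index is illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "getblocksysfee",
  "params": [1005434]
}
```

The result is the total system fee of all transactions in the requested block.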
#### `invokecontractverifyhistoric`, `invokefunctionhistoric` and `invokescripthistoric` calls
@ -198,11 +198,11 @@ payloads to be relayed from RPC to P2P.
#### Limits and paging for getnep11transfers and getnep17transfers
`getnep11transfers` and `getnep17transfers` RPC calls never return more than
1000 results for one request (within specified time frame). You can pass your
1000 results for one request (within the specified time frame). You can pass your
own limit via an additional parameter and then use paging to request the next
batch of transfers.
Example requesting 10 events for address NbTiM6h8r99kpRtb428XcsUk1TzKed2gTc
An example of requesting 10 events for address NbTiM6h8r99kpRtb428XcsUk1TzKed2gTc
within 0-1600094189000 timestamps:
```json

View file

@ -3,11 +3,11 @@
NeoGo supports state validation using N3 stateroots and can also act as state
validator (run state validation service).
All NeoGo nodes always calculate MPT root hash for data stored by contracts,
unlike in Neo Legacy this behavior can't be turned off. They also process
All NeoGo nodes always calculate MPT root hash for data stored by contracts.
Unlike in Neo Legacy, this behavior can't be turned off. They also process
stateroot messages broadcasted through the network and save validated
signatures from them if state root hash specified there matches the one signed
by validators (or shouts loud in the log if it doesn't, because it should be
signatures from them if the state root hash specified there matches the one signed
by validators (or shouts loud in the log if it doesn't because it should be
the same).
## State validation service
@ -37,7 +37,7 @@ Parameters:
To run the state validation service on your network, you need to:
* set state validation node keys in `RoleManagement` contract
* configure and run appropriate number of state validation nodes with keys
* configure and run an appropriate number of state validation nodes with the keys
specified in `RoleManagement` contract
@ -46,7 +46,7 @@ To run state validation service on your network you need to:
NeoGo also supports a protocol extension to include state root hashes right into
block headers. It's not compatible with the regular Neo N3 state validation
service and it's not compatible with public Neo N3 networks, but you can use
it on private networks if there is a need to.
it on private networks if needed.
The option is `StateRootInHeader` and it's specified in the
`ProtocolConfiguration` section; set it to true and run your network with it
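
A minimal sketch of the relevant configuration fragment (assuming the usual
NeoGo YAML configuration layout):

```yaml
ProtocolConfiguration:
  StateRootInHeader: true
```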

View file

@ -4,7 +4,7 @@ A cross platform virtual machine implementation for `NEF` compatible programs.
# Installation
VM is provided as part of neo-go binary, so usual neo-go build instructions
VM is provided as a part of neo-go binary, so usual neo-go build instructions
are applicable.
# Running the VM
@ -118,7 +118,7 @@ NEO-GO-VM > run
```
## Running programs with arguments
You can invoke smart contracts with arguments. Take the following ***roll the dice*** smartcontract as example.
You can invoke smart contracts with arguments. Take the following ***roll the dice*** smart contract as an example.
```
package rollthedice
@ -144,9 +144,9 @@ func RollDice(number int) {
To invoke this contract we need to specify both the method and the arguments.
The first parameter (called method or operation) is always of type
string. Notice that arguments can have different types, they can inferred
automatically (please refer to the `run` command help), but in you need to
pass parameter of specific type you can specify it in `run`'s arguments:
string. Notice that arguments can have different types. They can be inferred
automatically (please refer to the `run` command help), but if you need to
pass a parameter of a specific type you can specify it in `run`'s arguments:
```
NEO-GO-VM > run rollDice int:1

View file

@ -220,7 +220,7 @@ func TestSetGetRecord(t *testing.T) {
c.Invoke(t, "1.2.3.4", "getRecord", "neo.com", int64(nns.A))
t.Run("SetRecord_compatibility", func(t *testing.T) {
// tests are got from the NNS C# implementation and changed accordingly to non-native implementation behaviour
// tests are taken from the NNS C# implementation and changed according to the non-native implementation behavior
testCases := []struct {
Type nns.RecordType
Name string

View file

@ -1,7 +1,7 @@
/*
Package nft contains non-divisible non-fungible NEP-11-compatible token
implementation. This token can be minted with GAS transfer to contract address,
it will hash some data (including data provided in transfer) and produce
it will hash some data (including data provided in transfer) and produce a
base64-encoded string that is your NFT. Since it's based on hashing and basically
you own a hash it's HASHY.
*/
@ -54,7 +54,7 @@ func TotalSupply() int {
}
// totalSupply is an internal implementation of TotalSupply operating with
// given context. The number itself is stored raw in the DB with totalSupplyPrefix
// the given context. The number itself is stored raw in the DB with totalSupplyPrefix
// key.
func totalSupply(ctx storage.Context) int {
var res int
@ -66,28 +66,28 @@ func totalSupply(ctx storage.Context) int {
return res
}
// mkAccountPrefix creates DB key-prefix for account tokens specified
// mkAccountPrefix creates DB key-prefix for the account tokens specified
// by concatenating accountPrefix and account address.
func mkAccountPrefix(holder interop.Hash160) []byte {
res := []byte(accountPrefix)
return append(res, holder...)
}
// mkBalanceKey creates DB key for account specified by concatenating balancePrefix
// mkBalanceKey creates DB key for the account specified by concatenating balancePrefix
// and account address.
func mkBalanceKey(holder interop.Hash160) []byte {
res := []byte(balancePrefix)
return append(res, holder...)
}
// mkTokenKey creates DB key for token specified by concatenating tokenPrefix
// mkTokenKey creates DB key for the token specified by concatenating tokenPrefix
// and token ID.
func mkTokenKey(tokenID []byte) []byte {
res := []byte(tokenPrefix)
return append(res, tokenID...)
}
// BalanceOf returns the number of tokens owned by specified address.
// BalanceOf returns the number of tokens owned by the specified address.
func BalanceOf(holder interop.Hash160) int {
if len(holder) != 20 {
panic("bad owner address")
@ -96,7 +96,7 @@ func BalanceOf(holder interop.Hash160) int {
return getBalanceOf(ctx, mkBalanceKey(holder))
}
// getBalanceOf returns balance of the account using database key.
// getBalanceOf returns the balance of an account using database key.
func getBalanceOf(ctx storage.Context, balanceKey []byte) int {
val := storage.Get(ctx, balanceKey)
if val != nil {
@ -105,7 +105,7 @@ func getBalanceOf(ctx storage.Context, balanceKey []byte) int {
return 0
}
// addToBalance adds amount to the account balance. Amount can be negative.
// addToBalance adds an amount to the account balance. Amount can be negative.
func addToBalance(ctx storage.Context, holder interop.Hash160, amount int) {
key := mkBalanceKey(holder)
old := getBalanceOf(ctx, key)
@ -117,13 +117,13 @@ func addToBalance(ctx storage.Context, holder interop.Hash160, amount int) {
}
}
// addToken adds token to the account.
// addToken adds a token to the account.
func addToken(ctx storage.Context, holder interop.Hash160, token []byte) {
key := mkAccountPrefix(holder)
storage.Put(ctx, append(key, token...), token)
}
// removeToken removes token from the account.
// removeToken removes the token from the account.
func removeToken(ctx storage.Context, holder interop.Hash160, token []byte) {
key := mkAccountPrefix(holder)
storage.Delete(ctx, append(key, token...))
@ -137,7 +137,7 @@ func Tokens() iterator.Iterator {
return iter
}
// TokensOf returns an iterator with all tokens held by specified address.
// TokensOf returns an iterator with all tokens held by the specified address.
func TokensOf(holder interop.Hash160) iterator.Iterator {
if len(holder) != 20 {
panic("bad owner address")
@ -148,8 +148,8 @@ func TokensOf(holder interop.Hash160) iterator.Iterator {
return iter
}
// getOwnerOf returns current owner of the specified token or panics if token
// ID is invalid. Owner is stored as value of the token key (prefix + token ID).
// getOwnerOf returns the current owner of the specified token or panics if token
// ID is invalid. The owner is stored as a value of the token key (prefix + token ID).
func getOwnerOf(ctx storage.Context, token []byte) interop.Hash160 {
key := mkTokenKey(token)
val := storage.Get(ctx, key)
@ -159,13 +159,13 @@ func getOwnerOf(ctx storage.Context, token []byte) interop.Hash160 {
return val.(interop.Hash160)
}
// setOwnerOf writes current owner of the specified token into the DB.
// setOwnerOf writes the current owner of the specified token into the DB.
func setOwnerOf(ctx storage.Context, token []byte, holder interop.Hash160) {
key := mkTokenKey(token)
storage.Put(ctx, key, holder)
}
// OwnerOf returns owner of specified token.
// OwnerOf returns the owner of the specified token.
func OwnerOf(token []byte) interop.Hash160 {
ctx := storage.GetReadOnlyContext()
return getOwnerOf(ctx, token)
@ -248,14 +248,14 @@ func OnNEP17Payment(from interop.Hash160, amount int, data interface{}) {
postTransfer(nil, from, []byte(token), nil) // no `data` during minting
}
// Verify allows owner to manage contract's address, including earned GAS
// transfer from contract's address to somewhere else. It just checks for transaction
// to also be signed by contract owner, so contract's witness should be empty.
// Verify allows an owner to manage a contract's address, including earned GAS
// transfer from the contract's address to somewhere else. It just checks for the transaction
// to also be signed by the contract owner, so contract's witness should be empty.
func Verify() bool {
return runtime.CheckWitness(contractOwner)
}
// Destroy destroys the contract, only owner can do that.
// Destroy destroys the contract, only its owner can do that.
func Destroy() {
if !Verify() {
panic("only owner can destroy")
@ -263,7 +263,7 @@ func Destroy() {
management.Destroy()
}
// Update updates the contract, only owner can do that.
// Update updates the contract, only its owner can do that.
func Update(nef, manifest []byte) {
if !Verify() {
panic("only owner can update")

View file

@ -40,7 +40,7 @@ func CheckWitness() bool {
return false
}
// Log logs given message.
// Log logs the given message.
func Log(message string) {
runtime.Log(message)
}
@ -50,12 +50,12 @@ func Notify(event interface{}) {
runtime.Notify("Event", event)
}
// Verify method is used when contract is being used as a signer of transaction,
// Verify method is used when the contract is being used as a signer of transaction,
// it can have parameters (that then need to be present in the invocation script)
// and it returns a simple pass/fail result. This implementation just checks for
// owner's signature presence.
// the owner's signature presence.
func Verify() bool {
// Technically this restriction is not needed, but you can see the difference
// Technically, this restriction is not needed, but you can see the difference
// between invokefunction and invokecontractverify RPC methods with it.
if runtime.GetTrigger() != runtime.Verification {
return false
@ -63,7 +63,7 @@ func Verify() bool {
return CheckWitness()
}
// Destroy destroys the contract, only owner can do that.
// Destroy destroys the contract, only the owner can do that.
func Destroy() {
if !Verify() {
panic("only owner can destroy")
@ -71,7 +71,7 @@ func Destroy() {
management.Destroy()
}
// Update updates the contract, only owner can do that. _deploy will be called
// Update updates the contract, only the owner can do that. _deploy will be called
// after update.
func Update(nef, manifest []byte) {
if !Verify() {

View file

@ -16,19 +16,19 @@ func init() {
ctx = storage.GetContext()
}
// Put puts value at key.
// Put puts the value at the key.
func Put(key, value []byte) []byte {
storage.Put(ctx, key, value)
return key
}
// PutDefault puts value to the default key.
// PutDefault puts the value to the default key.
func PutDefault(value []byte) []byte {
storage.Put(ctx, defaultKey, value)
return defaultKey
}
// Get returns the value at passed key.
// Get returns the value at the passed key.
func Get(key []byte) interface{} {
return storage.Get(ctx, key)
}
@ -38,13 +38,13 @@ func GetDefault() interface{} {
return storage.Get(ctx, defaultKey)
}
// Delete deletes the value at passed key.
// Delete deletes the value at the passed key.
func Delete(key []byte) bool {
storage.Delete(ctx, key)
return true
}
// Find returns an array of key-value pairs with key that matched the passed value
// Find returns an array of key-value pairs with the key that matches the passed value.
func Find(value []byte) []string {
iter := storage.Find(ctx, value, storage.None)
result := []string{}

View file

@ -18,7 +18,7 @@ var (
ctx storage.Context
)
// init initializes the Token Interface and storage context for the Smart
// init initializes Token Interface and storage context for the Smart
// Contract to operate with
func init() {
token = nep17.Token{

View file

@ -26,7 +26,7 @@ var (
)
// GetTestContractState reads 2 pre-compiled contracts generated by
// TestGenerateHelperContracts second of which is allowed to call the first.
// TestGenerateHelperContracts, the second of which is allowed to call the first.
func GetTestContractState(t *testing.T, pathToInternalContracts string, id1, id2 int32, sender2 util.Uint160) (*state.Contract, *state.Contract) {
errNotFound := errors.New("auto-generated oracle contract is not found, use TestGenerateHelperContracts to regenerate")
neBytes, err := os.ReadFile(filepath.Join(pathToInternalContracts, helper1ContractNEFPath))

View file

@ -36,9 +36,9 @@ func TestGenerateHelperContracts(t *testing.T) {
require.False(t, saveState)
}
// generateOracleContract generates helper contract that is able to call
// native Oracle contract and has callback method. It uses test chain to define
// Oracle and StdLib native hashes and saves generated NEF and manifest to `oracle_contract` folder.
// generateOracleContract generates a helper contract that is able to call
// the native Oracle contract and has a callback method. It uses testchain to define
// Oracle and StdLib native hashes and saves the generated NEF and manifest to `oracle_contract` folder.
// Set `saveState` flag to true and run the test to rewrite NEF and manifest files.
func generateOracleContract(t *testing.T, saveState bool) {
bc, validator, committee := chain.NewMultiWithCustomConfig(t, func(c *config.ProtocolConfiguration) {
@ -131,9 +131,9 @@ func generateOracleContract(t *testing.T, saveState bool) {
}
}
// generateManagementHelperContracts generates 2 helper contracts second of which is
// generateManagementHelperContracts generates 2 helper contracts, the second of which is
// allowed to call the first. It uses testchain to define Management and StdLib
// native hashes and saves generated NEF and manifest to `management_contract` folder.
// native hashes and saves the generated NEF and manifest to `management_contract` folder.
// Set `saveState` flag to true and run the test to rewrite NEF and manifest files.
func generateManagementHelperContracts(t *testing.T, saveState bool) {
bc, validator, committee := chain.NewMultiWithCustomConfig(t, func(c *config.ProtocolConfiguration) {

View file

@ -25,7 +25,7 @@ import (
uatomic "go.uber.org/atomic"
)
// FakeChain implements Blockchainer interface, but does not provide real functionality.
// FakeChain implements the Blockchainer interface, but does not provide real functionality.
type FakeChain struct {
config.ProtocolConfiguration
*mempool.Pool
@ -44,7 +44,7 @@ type FakeChain struct {
UtilityTokenBalance *big.Int
}
// FakeStateSync implements StateSync interface.
// FakeStateSync implements the StateSync interface.
type FakeStateSync struct {
IsActiveFlag uatomic.Bool
IsInitializedFlag uatomic.Bool
@ -54,12 +54,12 @@ type FakeStateSync struct {
AddMPTNodesFunc func(nodes [][]byte) error
}
// NewFakeChain returns new FakeChain structure.
// NewFakeChain returns a new FakeChain structure.
func NewFakeChain() *FakeChain {
return NewFakeChainWithCustomCfg(nil)
}
// NewFakeChainWithCustomCfg returns new FakeChain structure with specified protocol configuration.
// NewFakeChainWithCustomCfg returns a new FakeChain structure with the specified protocol configuration.
func NewFakeChainWithCustomCfg(protocolCfg func(c *config.ProtocolConfiguration)) *FakeChain {
cfg := config.ProtocolConfiguration{Magic: netmode.UnitTestNet, P2PNotaryRequestPayloadPoolSize: 10}
if protocolCfg != nil {
@ -76,29 +76,29 @@ func NewFakeChainWithCustomCfg(protocolCfg func(c *config.ProtocolConfiguration)
}
}
// PutBlock implements Blockchainer interface.
// PutBlock implements the Blockchainer interface.
func (chain *FakeChain) PutBlock(b *block.Block) {
chain.blocks[b.Hash()] = b
chain.hdrHashes[b.Index] = b.Hash()
atomic.StoreUint32(&chain.Blockheight, b.Index)
}
// PutHeader implements Blockchainer interface.
// PutHeader implements the Blockchainer interface.
func (chain *FakeChain) PutHeader(b *block.Block) {
chain.hdrHashes[b.Index] = b.Hash()
}
// PutTx implements Blockchainer interface.
// PutTx implements the Blockchainer interface.
func (chain *FakeChain) PutTx(tx *transaction.Transaction) {
chain.txs[tx.Hash()] = tx
}
// ApplyPolicyToTxSet implements Blockchainer interface.
// ApplyPolicyToTxSet implements the Blockchainer interface.
func (chain *FakeChain) ApplyPolicyToTxSet([]*transaction.Transaction) []*transaction.Transaction {
panic("TODO")
}
// IsTxStillRelevant implements Blockchainer interface.
// IsTxStillRelevant implements the Blockchainer interface.
func (chain *FakeChain) IsTxStillRelevant(t *transaction.Transaction, txpool *mempool.Pool, isPartialTx bool) bool {
panic("TODO")
}
@ -108,17 +108,17 @@ func (chain *FakeChain) InitVerificationContext(ic *interop.Context, hash util.U
panic("TODO")
}
// IsExtensibleAllowed implements Blockchainer interface.
// IsExtensibleAllowed implements the Blockchainer interface.
func (*FakeChain) IsExtensibleAllowed(uint160 util.Uint160) bool {
return true
}
// GetNatives implements blockchainer.Blockchainer interface.
// GetNatives implements the blockchainer.Blockchainer interface.
func (*FakeChain) GetNatives() []state.NativeContract {
panic("TODO")
}
// GetNotaryDepositExpiration implements Blockchainer interface.
// GetNotaryDepositExpiration implements the Blockchainer interface.
func (chain *FakeChain) GetNotaryDepositExpiration(acc util.Uint160) uint32 {
if chain.NotaryDepositExpiration != 0 {
return chain.NotaryDepositExpiration
@ -126,7 +126,7 @@ func (chain *FakeChain) GetNotaryDepositExpiration(acc util.Uint160) uint32 {
panic("TODO")
}
// GetNotaryContractScriptHash implements Blockchainer interface.
// GetNotaryContractScriptHash implements the Blockchainer interface.
func (chain *FakeChain) GetNotaryContractScriptHash() util.Uint160 {
if !chain.NotaryContractScriptHash.Equals(util.Uint160{}) {
return chain.NotaryContractScriptHash
@ -134,27 +134,27 @@ func (chain *FakeChain) GetNotaryContractScriptHash() util.Uint160 {
panic("TODO")
}
// GetNotaryBalance implements Blockchainer interface.
// GetNotaryBalance implements the Blockchainer interface.
func (chain *FakeChain) GetNotaryBalance(acc util.Uint160) *big.Int {
panic("TODO")
}
// GetNotaryServiceFeePerKey implements Blockchainer interface.
// GetNotaryServiceFeePerKey implements the Blockchainer interface.
func (chain *FakeChain) GetNotaryServiceFeePerKey() int64 {
panic("TODO")
}
// GetBaseExecFee implements Policer interface.
// GetBaseExecFee implements the Policer interface.
func (chain *FakeChain) GetBaseExecFee() int64 {
return interop.DefaultBaseExecFee
}
// GetStoragePrice implements Policer interface.
// GetStoragePrice implements the Policer interface.
func (chain *FakeChain) GetStoragePrice() int64 {
return native.DefaultStoragePrice
}
// GetMaxVerificationGAS implements Policer interface.
// GetMaxVerificationGAS implements the Policer interface.
func (chain *FakeChain) GetMaxVerificationGAS() int64 {
if chain.MaxVerificationGAS != 0 {
return chain.MaxVerificationGAS
@ -162,22 +162,22 @@ func (chain *FakeChain) GetMaxVerificationGAS() int64 {
panic("TODO")
}
// PoolTxWithData implements Blockchainer interface.
// PoolTxWithData implements the Blockchainer interface.
func (chain *FakeChain) PoolTxWithData(t *transaction.Transaction, data interface{}, mp *mempool.Pool, feer mempool.Feer, verificationFunction func(t *transaction.Transaction, data interface{}) error) error {
return chain.poolTxWithData(t, data, mp)
}
// RegisterPostBlock implements Blockchainer interface.
// RegisterPostBlock implements the Blockchainer interface.
func (chain *FakeChain) RegisterPostBlock(f func(func(*transaction.Transaction, *mempool.Pool, bool) bool, *mempool.Pool, *block.Block)) {
chain.PostBlock = append(chain.PostBlock, f)
}
// GetConfig implements Blockchainer interface.
// GetConfig implements the Blockchainer interface.
func (chain *FakeChain) GetConfig() config.ProtocolConfiguration {
return chain.ProtocolConfiguration
}
// CalculateClaimable implements Blockchainer interface.
// CalculateClaimable implements the Blockchainer interface.
func (chain *FakeChain) CalculateClaimable(util.Uint160, uint32) (*big.Int, error) {
panic("TODO")
}
@ -192,12 +192,12 @@ func (chain *FakeChain) P2PSigExtensionsEnabled() bool {
return true
}
// AddHeaders implements Blockchainer interface.
// AddHeaders implements the Blockchainer interface.
func (chain *FakeChain) AddHeaders(...*block.Header) error {
panic("TODO")
}
// AddBlock implements Blockchainer interface.
// AddBlock implements the Blockchainer interface.
func (chain *FakeChain) AddBlock(block *block.Block) error {
if block.Index == atomic.LoadUint32(&chain.Blockheight)+1 {
chain.PutBlock(block)
@ -205,27 +205,27 @@ func (chain *FakeChain) AddBlock(block *block.Block) error {
return nil
}
// BlockHeight implements Feer interface.
// BlockHeight implements the Feer interface.
func (chain *FakeChain) BlockHeight() uint32 {
return atomic.LoadUint32(&chain.Blockheight)
}
// Close implements Blockchainer interface.
// Close implements the Blockchainer interface.
func (chain *FakeChain) Close() {
panic("TODO")
}
// HeaderHeight implements Blockchainer interface.
// HeaderHeight implements the Blockchainer interface.
func (chain *FakeChain) HeaderHeight() uint32 {
return atomic.LoadUint32(&chain.Blockheight)
}
// GetAppExecResults implements Blockchainer interface.
// GetAppExecResults implements the Blockchainer interface.
func (chain *FakeChain) GetAppExecResults(hash util.Uint256, trig trigger.Type) ([]state.AppExecResult, error) {
panic("TODO")
}
// GetBlock implements Blockchainer interface.
// GetBlock implements the Blockchainer interface.
func (chain *FakeChain) GetBlock(hash util.Uint256) (*block.Block, error) {
if b, ok := chain.blocks[hash]; ok {
return b, nil
@ -233,27 +233,27 @@ func (chain *FakeChain) GetBlock(hash util.Uint256) (*block.Block, error) {
return nil, errors.New("not found")
}
// GetCommittee implements Blockchainer interface.
// GetCommittee implements the Blockchainer interface.
func (chain *FakeChain) GetCommittee() (keys.PublicKeys, error) {
panic("TODO")
}
// GetContractState implements Blockchainer interface.
// GetContractState implements the Blockchainer interface.
func (chain *FakeChain) GetContractState(hash util.Uint160) *state.Contract {
panic("TODO")
}
// GetContractScriptHash implements Blockchainer interface.
// GetContractScriptHash implements the Blockchainer interface.
func (chain *FakeChain) GetContractScriptHash(id int32) (util.Uint160, error) {
panic("TODO")
}
// GetNativeContractScriptHash implements Blockchainer interface.
// GetNativeContractScriptHash implements the Blockchainer interface.
func (chain *FakeChain) GetNativeContractScriptHash(name string) (util.Uint160, error) {
panic("TODO")
}
// GetHeaderHash implements Blockchainer interface.
// GetHeaderHash implements the Blockchainer interface.
func (chain *FakeChain) GetHeaderHash(n int) util.Uint256 {
if n < 0 || n > math.MaxUint32 {
return util.Uint256{}
@ -261,7 +261,7 @@ func (chain *FakeChain) GetHeaderHash(n int) util.Uint256 {
return chain.hdrHashes[uint32(n)]
}
// GetHeader implements Blockchainer interface.
// GetHeader implements the Blockchainer interface.
func (chain *FakeChain) GetHeader(hash util.Uint256) (*block.Header, error) {
b, err := chain.GetBlock(hash)
if err != nil {
@ -270,84 +270,84 @@ func (chain *FakeChain) GetHeader(hash util.Uint256) (*block.Header, error) {
return &b.Header, nil
}
// GetNextBlockValidators implements Blockchainer interface.
// GetNextBlockValidators implements the Blockchainer interface.
func (chain *FakeChain) GetNextBlockValidators() ([]*keys.PublicKey, error) {
panic("TODO")
}
// GetNEP17Contracts implements Blockchainer interface.
// GetNEP11Contracts implements the Blockchainer interface.
func (chain *FakeChain) GetNEP11Contracts() []util.Uint160 {
panic("TODO")
}
// GetNEP17Contracts implements Blockchainer interface.
// GetNEP17Contracts implements the Blockchainer interface.
func (chain *FakeChain) GetNEP17Contracts() []util.Uint160 {
panic("TODO")
}
// GetNEP17LastUpdated implements Blockchainer interface.
// GetTokenLastUpdated implements the Blockchainer interface.
func (chain *FakeChain) GetTokenLastUpdated(acc util.Uint160) (map[int32]uint32, error) {
panic("TODO")
}
// ForEachNEP17Transfer implements Blockchainer interface.
// ForEachNEP11Transfer implements the Blockchainer interface.
func (chain *FakeChain) ForEachNEP11Transfer(util.Uint160, uint64, func(*state.NEP11Transfer) (bool, error)) error {
panic("TODO")
}
// ForEachNEP17Transfer implements Blockchainer interface.
// ForEachNEP17Transfer implements the Blockchainer interface.
func (chain *FakeChain) ForEachNEP17Transfer(util.Uint160, uint64, func(*state.NEP17Transfer) (bool, error)) error {
panic("TODO")
}
// GetValidators implements Blockchainer interface.
// GetValidators implements the Blockchainer interface.
func (chain *FakeChain) GetValidators() ([]*keys.PublicKey, error) {
panic("TODO")
}
// GetEnrollments implements Blockchainer interface.
// GetEnrollments implements the Blockchainer interface.
func (chain *FakeChain) GetEnrollments() ([]state.Validator, error) {
panic("TODO")
}
// GetStateModule implements Blockchainer interface.
// GetStateModule implements the Blockchainer interface.
func (chain *FakeChain) GetStateModule() blockchainer.StateRoot {
return nil
}
// GetStorageItem implements Blockchainer interface.
// GetStorageItem implements the Blockchainer interface.
func (chain *FakeChain) GetStorageItem(id int32, key []byte) state.StorageItem {
panic("TODO")
}
// GetTestVM implements Blockchainer interface.
// GetTestVM implements the Blockchainer interface.
func (chain *FakeChain) GetTestVM(t trigger.Type, tx *transaction.Transaction, b *block.Block) *interop.Context {
panic("TODO")
}
// CurrentHeaderHash implements Blockchainer interface.
// CurrentHeaderHash implements the Blockchainer interface.
func (chain *FakeChain) CurrentHeaderHash() util.Uint256 {
return util.Uint256{}
}
// CurrentBlockHash implements Blockchainer interface.
// CurrentBlockHash implements the Blockchainer interface.
func (chain *FakeChain) CurrentBlockHash() util.Uint256 {
return util.Uint256{}
}
// HasBlock implements Blockchainer interface.
// HasBlock implements the Blockchainer interface.
func (chain *FakeChain) HasBlock(h util.Uint256) bool {
_, ok := chain.blocks[h]
return ok
}
// HasTransaction implements Blockchainer interface.
// HasTransaction implements the Blockchainer interface.
func (chain *FakeChain) HasTransaction(h util.Uint256) bool {
_, ok := chain.txs[h]
return ok
}
// GetTransaction implements Blockchainer interface.
// GetTransaction implements the Blockchainer interface.
func (chain *FakeChain) GetTransaction(h util.Uint256) (*transaction.Transaction, uint32, error) {
if tx, ok := chain.txs[h]; ok {
return tx, 1, nil
@ -355,12 +355,12 @@ func (chain *FakeChain) GetTransaction(h util.Uint256) (*transaction.Transaction
return nil, 0, errors.New("not found")
}
// GetMemPool implements Blockchainer interface.
// GetMemPool implements the Blockchainer interface.
func (chain *FakeChain) GetMemPool() *mempool.Pool {
return chain.Pool
}
// GetGoverningTokenBalance implements Blockchainer interface.
// GetGoverningTokenBalance implements the Blockchainer interface.
func (chain *FakeChain) GetGoverningTokenBalance(acc util.Uint160) (*big.Int, uint32) {
panic("TODO")
}
@ -373,52 +373,52 @@ func (chain *FakeChain) GetUtilityTokenBalance(uint160 util.Uint160) *big.Int {
panic("TODO")
}
// ManagementContractHash implements Blockchainer interface.
// ManagementContractHash implements the Blockchainer interface.
func (chain FakeChain) ManagementContractHash() util.Uint160 {
panic("TODO")
}
// PoolTx implements Blockchainer interface.
// PoolTx implements the Blockchainer interface.
func (chain *FakeChain) PoolTx(tx *transaction.Transaction, _ ...*mempool.Pool) error {
return chain.PoolTxF(tx)
}
// SetOracle implements Blockchainer interface.
// SetOracle implements the Blockchainer interface.
func (chain FakeChain) SetOracle(services.Oracle) {
panic("TODO")
}
// SetNotary implements Blockchainer interface.
// SetNotary implements the Blockchainer interface.
func (chain *FakeChain) SetNotary(notary services.Notary) {
panic("TODO")
}
// SubscribeForBlocks implements Blockchainer interface.
// SubscribeForBlocks implements the Blockchainer interface.
func (chain *FakeChain) SubscribeForBlocks(ch chan<- *block.Block) {
chain.blocksCh = append(chain.blocksCh, ch)
}
// SubscribeForExecutions implements Blockchainer interface.
// SubscribeForExecutions implements the Blockchainer interface.
func (chain *FakeChain) SubscribeForExecutions(ch chan<- *state.AppExecResult) {
panic("TODO")
}
// SubscribeForNotifications implements Blockchainer interface.
// SubscribeForNotifications implements the Blockchainer interface.
func (chain *FakeChain) SubscribeForNotifications(ch chan<- *subscriptions.NotificationEvent) {
panic("TODO")
}
// SubscribeForTransactions implements Blockchainer interface.
// SubscribeForTransactions implements the Blockchainer interface.
func (chain *FakeChain) SubscribeForTransactions(ch chan<- *transaction.Transaction) {
panic("TODO")
}
// VerifyTx implements Blockchainer interface.
// VerifyTx implements the Blockchainer interface.
func (chain *FakeChain) VerifyTx(*transaction.Transaction) error {
panic("TODO")
}
// VerifyWitness implements Blockchainer interface.
// VerifyWitness implements the Blockchainer interface.
func (chain *FakeChain) VerifyWitness(util.Uint160, hash.Hashable, *transaction.Witness, int64) (int64, error) {
if chain.VerifyWitnessF != nil {
return chain.VerifyWitnessF()
@ -426,7 +426,7 @@ func (chain *FakeChain) VerifyWitness(util.Uint160, hash.Hashable, *transaction.
panic("TODO")
}
// UnsubscribeFromBlocks implements Blockchainer interface.
// UnsubscribeFromBlocks implements the Blockchainer interface.
func (chain *FakeChain) UnsubscribeFromBlocks(ch chan<- *block.Block) {
for i, c := range chain.blocksCh {
if c == ch {
@ -438,32 +438,32 @@ func (chain *FakeChain) UnsubscribeFromBlocks(ch chan<- *block.Block) {
}
}
// UnsubscribeFromExecutions implements Blockchainer interface.
// UnsubscribeFromExecutions implements the Blockchainer interface.
func (chain *FakeChain) UnsubscribeFromExecutions(ch chan<- *state.AppExecResult) {
panic("TODO")
}
// UnsubscribeFromNotifications implements Blockchainer interface.
// UnsubscribeFromNotifications implements the Blockchainer interface.
func (chain *FakeChain) UnsubscribeFromNotifications(ch chan<- *subscriptions.NotificationEvent) {
panic("TODO")
}
// UnsubscribeFromTransactions implements Blockchainer interface.
// UnsubscribeFromTransactions implements the Blockchainer interface.
func (chain *FakeChain) UnsubscribeFromTransactions(ch chan<- *transaction.Transaction) {
panic("TODO")
}
// AddBlock implements StateSync interface.
// AddBlock implements the StateSync interface.
func (s *FakeStateSync) AddBlock(block *block.Block) error {
panic("TODO")
}
// AddHeaders implements StateSync interface.
// AddHeaders implements the StateSync interface.
func (s *FakeStateSync) AddHeaders(...*block.Header) error {
panic("TODO")
}
// AddMPTNodes implements StateSync interface.
// AddMPTNodes implements the StateSync interface.
func (s *FakeStateSync) AddMPTNodes(nodes [][]byte) error {
if s.AddMPTNodesFunc != nil {
return s.AddMPTNodesFunc(nodes)
@ -471,20 +471,20 @@ func (s *FakeStateSync) AddMPTNodes(nodes [][]byte) error {
panic("TODO")
}
// BlockHeight implements StateSync interface.
// BlockHeight implements the StateSync interface.
func (s *FakeStateSync) BlockHeight() uint32 {
return 0
}
// IsActive implements StateSync interface.
// IsActive implements the StateSync interface.
func (s *FakeStateSync) IsActive() bool { return s.IsActiveFlag.Load() }
// IsInitialized implements StateSync interface.
// IsInitialized implements the StateSync interface.
func (s *FakeStateSync) IsInitialized() bool {
return s.IsInitializedFlag.Load()
}
// Init implements StateSync interface.
// Init implements the StateSync interface.
func (s *FakeStateSync) Init(currChainHeight uint32) error {
if s.InitFunc != nil {
return s.InitFunc(currChainHeight)
@ -492,15 +492,15 @@ func (s *FakeStateSync) Init(currChainHeight uint32) error {
panic("TODO")
}
// NeedHeaders implements StateSync interface.
// NeedHeaders implements the StateSync interface.
func (s *FakeStateSync) NeedHeaders() bool { return s.RequestHeaders.Load() }
// NeedMPTNodes implements StateSync interface.
// NeedMPTNodes implements the StateSync interface.
func (s *FakeStateSync) NeedMPTNodes() bool {
panic("TODO")
}
// Traverse implements StateSync interface.
// Traverse implements the StateSync interface.
func (s *FakeStateSync) Traverse(root util.Uint256, process func(node mpt.Node, nodeBytes []byte) bool) error {
if s.TraverseFunc != nil {
return s.TraverseFunc(root, process)
@ -508,7 +508,7 @@ func (s *FakeStateSync) Traverse(root util.Uint256, process func(node mpt.Node,
panic("TODO")
}
// GetUnknownMPTNodesBatch implements StateSync interface.
// GetUnknownMPTNodesBatch implements the StateSync interface.
func (s *FakeStateSync) GetUnknownMPTNodesBatch(limit int) []util.Uint256 {
panic("TODO")
}

View file

@ -24,20 +24,20 @@ var privNetKeys = []string{
"KxyjQ8eUa4FHt3Gvioyt1Wz29cTUrE4eTqX3yFSk1YFCsPL8uNsY",
"L2oEXKRAAMiPEZukwR5ho2S6SMeQLhcK9mF71ZnF7GvT8dU4Kkgz",
// Provide 2 committee extra members so that committee address differs from
// Provide 2 extra committee members so that the committee address differs from
// the validators one.
"L1Tr1iq5oz1jaFaMXP21sHDkJYDDkuLtpvQ4wRf1cjKvJYvnvpAb",
"Kz6XTUrExy78q8f4MjDHnwz8fYYyUE8iPXwPRAkHa3qN2JcHYm7e",
}
// ValidatorsCount returns number of validators in the testchain.
// ValidatorsCount returns the number of validators in the testchain.
const ValidatorsCount = 4
var (
// ids maps validator order by public key sorting to validator ID.
// which is an order of the validator in the StandByValidators list.
// That is the order of the validator in the StandByValidators list.
ids = []int{1, 3, 0, 2, 4, 5}
// orders maps to validators id to it's order by public key sorting.
// orders maps validator id to its order by public key sorting.
orders = []int{2, 0, 3, 1, 4, 5}
)
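As the comments describe, `ids` and `orders` are inverse permutations of each other; a standalone sketch (with local copies of the two tables, not the package variables themselves) can verify that:

```go
package main

import "fmt"

func main() {
	// Local copies of the package-level tables, for illustration only.
	ids := []int{1, 3, 0, 2, 4, 5}    // order by public key sorting -> validator id
	orders := []int{2, 0, 3, 1, 4, 5} // validator id -> order by public key sorting

	inverse := true
	for order, id := range ids {
		if orders[id] != order {
			inverse = false
		}
	}
	fmt.Println(inverse) // true
}
```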
@ -56,12 +56,12 @@ func IDToOrder(id int) int {
return orders[id]
}
// WIF returns unencrypted wif of the specified validator.
// WIF returns the unencrypted wif of the specified validator.
func WIF(i int) string {
return privNetKeys[i]
}
// PrivateKey returns private key of node #i.
// PrivateKey returns the private key of node #i.
func PrivateKey(i int) *keys.PrivateKey {
wif := WIF(i)
priv, err := keys.NewPrivateKeyFromWIF(wif)
@ -154,7 +154,7 @@ func SignCommittee(h hash.Hashable) []byte {
return buf.Bytes()
}
// NewBlock creates new block for the given blockchain with the given offset
// NewBlock creates a new block for the given blockchain with the given offset
// (usually, 1), primary node index and transactions.
func NewBlock(t *testing.T, bc blockchainer.Blockchainer, offset uint32, primary uint32, txs ...*transaction.Transaction) *block.Block {
witness := transaction.Witness{VerificationScript: MultisigVerificationScript()}

View file

@ -28,7 +28,7 @@ var (
ownerScript = MultisigVerificationScript()
)
// NewTransferFromOwner returns transaction transferring funds from NEO and GAS owner.
// NewTransferFromOwner returns a transaction transferring funds from the NEO and GAS owner.
func NewTransferFromOwner(bc blockchainer.Blockchainer, contractHash, to util.Uint160, amount int64,
nonce, validUntil uint32) (*transaction.Transaction, error) {
w := io.NewBufBinWriter()
@ -51,8 +51,8 @@ func NewTransferFromOwner(bc blockchainer.Blockchainer, contractHash, to util.Ui
return tx, SignTx(bc, tx)
}
// NewDeployTx returns new deployment for contract with source from r and name equal to
// filename without '.go' suffix.
// NewDeployTx returns a new deployment transaction for a contract with the source from r and a name equal to
// the filename without '.go' suffix.
func NewDeployTx(bc blockchainer.Blockchainer, name string, sender util.Uint160, r gio.Reader, confFile *string) (*transaction.Transaction, util.Uint160, []byte, error) {
// nef.NewFile() cares about version a lot.
config.Version = "0.90.0-test"
@ -110,7 +110,7 @@ func NewDeployTx(bc blockchainer.Blockchainer, name string, sender util.Uint160,
return tx, h, ne.Script, nil
}
// SignTx signs provided transactions with validator keys.
// SignTx signs the provided transactions with validator keys.
func SignTx(bc blockchainer.Blockchainer, txs ...*transaction.Transaction) error {
signTxGeneric(bc, Sign, ownerScript, txs...)
return nil

View file

@ -9,7 +9,7 @@ import (
"github.com/stretchr/testify/require"
)
// MarshalUnmarshalJSON checks if expected stays the same after
// MarshalUnmarshalJSON checks if the expected value stays the same after
// marshal/unmarshal via JSON.
func MarshalUnmarshalJSON(t *testing.T, expected, actual interface{}) {
data, err := json.Marshal(expected)
@ -18,7 +18,7 @@ func MarshalUnmarshalJSON(t *testing.T, expected, actual interface{}) {
require.Equal(t, expected, actual)
}
// EncodeDecodeBinary checks if expected stays the same after
// EncodeDecodeBinary checks if the expected value stays the same after
// serializing/deserializing via io.Serializable methods.
func EncodeDecodeBinary(t *testing.T, expected, actual io.Serializable) {
data, err := EncodeBinary(expected)
@ -27,7 +27,7 @@ func EncodeDecodeBinary(t *testing.T, expected, actual io.Serializable) {
require.Equal(t, expected, actual)
}
// ToFromStackItem checks if expected stays the same after converting to/from
// ToFromStackItem checks if the expected value stays the same after converting to/from
// StackItem.
func ToFromStackItem(t *testing.T, expected, actual stackitem.Convertible) {
item, err := expected.ToStackItem()
@ -58,7 +58,7 @@ type encodable interface {
Decode(*io.BinReader) error
}
// EncodeDecode checks if expected stays the same after
// EncodeDecode checks if the expected value stays the same after
// serializing/deserializing via encodable methods.
func EncodeDecode(t *testing.T, expected, actual encodable) {
data, err := Encode(expected)

View file

@ -21,13 +21,13 @@ var (
}
)
// newGlobal creates new global variable.
// newGlobal creates a new global variable.
func (c *codegen) newGlobal(pkg string, name string) {
name = c.getIdentName(pkg, name)
c.globals[name] = len(c.globals)
}
// getIdentName returns fully-qualified name for a variable.
// getIdentName returns a fully-qualified name for a variable.
func (c *codegen) getIdentName(pkg string, name string) string {
if fullName, ok := c.importMap[pkg]; ok {
pkg = fullName
@ -92,7 +92,7 @@ func (c *codegen) traverseGlobals() bool {
}
}
// because we reuse `convertFuncDecl` for init funcs,
// we need to cleare scope, so that global variables
// we need to clear the scope so that global variables
// encountered after it will be recognized as globals.
c.scope = nil
})
@ -133,7 +133,7 @@ func (c *codegen) traverseGlobals() bool {
// countGlobals counts the global variables in the program to add
// them with the stack size of the function.
// Second returned argument contains amount of global constants.
// The second return value contains the number of global constants.
func countGlobals(f ast.Node) (int, int) {
var numVar, numConst int
ast.Inspect(f, func(node ast.Node) bool {
@ -141,7 +141,7 @@ func countGlobals(f ast.Node) (int, int) {
// Skip all function declarations if we have already encountered `defer`.
case *ast.FuncDecl:
return false
// After skipping all funcDecls we are sure that each value spec
// After skipping all funcDecls, we are sure that each value spec
// is a global declared variable or constant.
case *ast.GenDecl:
isVar := n.Tok == token.VAR
@ -172,7 +172,7 @@ func isExprNil(e ast.Expr) bool {
}
// indexOfStruct returns the index of the given field inside that struct.
// If the struct does not contain that field it will return -1.
// If the struct does not contain that field, it will return -1.
func indexOfStruct(strct *types.Struct, fldName string) int {
for i := 0; i < strct.NumFields(); i++ {
if strct.Field(i).Name() == fldName {
@ -189,7 +189,7 @@ func (f funcUsage) funcUsed(name string) bool {
return ok
}
// lastStmtIsReturn checks if last statement of the declaration was return statement..
// lastStmtIsReturn checks if the last statement of the declaration was a return statement.
func lastStmtIsReturn(body *ast.BlockStmt) (b bool) {
if l := len(body.List); l != 0 {
switch inner := body.List[l-1].(type) {
@ -240,11 +240,11 @@ func (c *codegen) fillDocumentInfo() {
})
}
// analyzeFuncUsage traverses all code and returns map with functions
// analyzeFuncUsage traverses all code and returns a map with functions
// which should be present in the emitted code.
// This is done using BFS starting from exported functions or
// function used in variable declarations (graph edge corresponds to
// function being called in declaration).
// functions used in variable declarations (a graph edge corresponds to
// the function being called in a declaration).
func (c *codegen) analyzeFuncUsage() funcUsage {
type declPair struct {
decl *ast.FuncDecl
@ -376,8 +376,8 @@ func canConvert(s string) bool {
return true
}
// canInline returns true if function is to be inlined.
// Currently there is a static list of function which are inlined,
// canInline returns true if the function is to be inlined.
// Currently, there is a static list of functions which are inlined,
// this may change in the future.
func canInline(s string, name string) bool {
if strings.HasPrefix(s, "github.com/nspcc-dev/neo-go/pkg/compiler/testdata/inline") {

View file

@ -35,7 +35,7 @@ type codegen struct {
// Type information.
typeInfo *types.Info
// pkgInfoInline is stack of type information for packages containing inline functions.
// pkgInfoInline is a stack of type information for packages containing inline functions.
pkgInfoInline []*packages.Package
// A mapping of func identifiers with their scope.
@ -63,9 +63,9 @@ type codegen struct {
// A list of nested label names together with evaluation stack depth.
labelList []labelWithStackSize
// inlineLabelOffsets contains size of labelList at the start of inline call processing.
// For such calls we need to drop only newly created part of stack.
// For such calls, we need to drop only the newly created part of the stack.
inlineLabelOffsets []int
// globalInlineCount contains amount of auxiliary variables introduced by
// globalInlineCount contains the number of auxiliary variables introduced by
// function inlining during global variables initialization.
globalInlineCount int
@ -76,7 +76,7 @@ type codegen struct {
// A label to be used in the next statement.
nextLabel string
// sequencePoints is mapping from method name to a slice
// sequencePoints is a mapping from the method name to a slice
// containing info about mapping from opcode's offset
// to a text span in the source file.
sequencePoints map[string][]DebugSeqPoint
@ -92,25 +92,25 @@ type codegen struct {
// constMap contains constants from foreign packages.
constMap map[string]types.TypeAndValue
// currPkg is current package being processed.
// currPkg is the current package being processed.
currPkg *packages.Package
// mainPkg is a main package metadata.
// mainPkg is the main package metadata.
mainPkg *packages.Package
// packages contains packages in the order they were loaded.
packages []string
packageCache map[string]*packages.Package
// exceptionIndex is the index of static slot where exception is stored.
// exceptionIndex is the index of the static slot where the exception is stored.
exceptionIndex int
// documents contains paths to all files used by the program.
documents []string
// docIndex maps file path to an index in documents array.
// docIndex maps the file path to the index in the documents array.
docIndex map[string]int
// emittedEvents contains all events emitted by contract.
// emittedEvents contains all events emitted by the contract.
emittedEvents map[string][][]string
// invokedContracts contains invoked methods of other contracts.
@ -166,7 +166,7 @@ func (c *codegen) newLabel() (l uint16) {
return
}
// newNamedLabel creates a new label with a specified name.
// newNamedLabel creates a new label with the specified name.
func (c *codegen) newNamedLabel(typ labelOffsetType, name string) (l uint16) {
l = c.newLabel()
lt := labelWithType{name: name, typ: typ}
@ -223,8 +223,8 @@ func (c *codegen) emitStoreStructField(i int) {
emit.Opcodes(c.prog.BinWriter, opcode.ROT, opcode.SETITEM)
}
// getVarIndex returns variable type and position in corresponding slot,
// according to current scope.
// getVarIndex returns variable type and position in the corresponding slot,
// according to the current scope.
func (c *codegen) getVarIndex(pkg string, name string) *varInfo {
if pkg == "" {
if c.scope != nil {
@ -255,7 +255,7 @@ func getBaseOpcode(t varType) (opcode.Opcode, opcode.Opcode) {
}
}
// emitLoadVar loads specified variable to the evaluation stack.
// emitLoadVar loads the specified variable to the evaluation stack.
func (c *codegen) emitLoadVar(pkg string, name string) {
vi := c.getVarIndex(pkg, name)
if vi.ctx != nil && c.typeAndValueOf(vi.ctx.expr).Value != nil {
@ -284,7 +284,7 @@ func (c *codegen) emitLoadVar(pkg string, name string) {
c.emitLoadByIndex(vi.refType, vi.index)
}
// emitLoadByIndex loads specified variable type with index i.
// emitLoadByIndex loads the specified variable type with index i.
func (c *codegen) emitLoadByIndex(t varType, i int) {
base, _ := getBaseOpcode(t)
if i < 7 {
@ -341,7 +341,7 @@ func (c *codegen) emitDefault(t types.Type) {
}
// convertGlobals traverses the AST and only converts global declarations.
// If we call this in convertFuncDecl then it will load all global variables
// If we call this in convertFuncDecl, it will load all global variables
// into the scope of the function.
func (c *codegen) convertGlobals(f *ast.File, _ *types.Package) {
ast.Inspect(f, func(node ast.Node) bool {
@ -375,7 +375,7 @@ func (c *codegen) clearSlots(n int) {
}
// convertInitFuncs converts `init()` functions in file f and returns
// number of locals in last processed definition as well as maximum locals number encountered.
// the number of locals in the last processed definition as well as the maximum number of locals encountered.
func (c *codegen) convertInitFuncs(f *ast.File, pkg *types.Package, lastCount int) (int, int) {
maxCount := -1
ast.Inspect(f, func(node ast.Node) bool {
@ -479,10 +479,10 @@ func (c *codegen) convertFuncDecl(file ast.Node, decl *ast.FuncDecl, pkg *types.
defer f.vars.dropScope()
// We need to handle methods, which in Go are just syntactic sugar.
// The method receiver will be passed in as first argument.
// We check if this declaration has a receiver and load it into scope.
// The method receiver will be passed in as the first argument.
// We check if this declaration has a receiver and load it into the scope.
//
// FIXME: For now we will hard cast this to a struct. We can later fine tune this
// FIXME: For now, we will hard cast this to a struct. We can later fine tune this
// to support other types.
if decl.Recv != nil {
for _, arg := range decl.Recv.List {
@ -915,12 +915,12 @@ func (c *codegen) Visit(node ast.Node) ast.Visitor {
}
case *ast.SelectorExpr:
// If this is a method call, we need to walk the AST to load the struct locally.
// Otherwise this is a function call from a imported package and we can call it
// Otherwise, this is a function call from an imported package and we can call it
// directly.
name, isMethod := c.getFuncNameFromSelector(fun)
if isMethod {
ast.Walk(c, fun.X)
// Dont forget to add 1 extra argument when its a method.
// Don't forget to add 1 extra argument when it's a method.
numArgs++
}
@ -983,7 +983,7 @@ func (c *codegen) Visit(node ast.Node) ast.Visitor {
// We can be sure builtins are of type *ast.Ident.
c.convertBuiltin(n)
case name != "":
// Function was not found thus is can be only an invocation of func-typed variable or type conversion.
// Function was not found, thus it can only be an invocation of a func-typed variable or type conversion.
// We care only about string conversions because all others are effectively no-op in NeoVM.
// E.g. one cannot write `bool(int(a))`, only `int32(int(a))`.
if isString(c.typeOf(n.Fun)) {
@ -1096,7 +1096,7 @@ func (c *codegen) Visit(node ast.Node) ast.Visitor {
ast.Walk(c, n.X)
c.emitToken(n.Tok, c.typeOf(n.X))
// For now only identifiers are supported for (post) for stmts.
// For now, only identifiers are supported for (post) for stmts.
// for i := 0; i < 10; i++ {}
// Where the post stmt is ( i++ )
if ident, ok := n.X.(*ast.Ident); ok {
@ -1218,8 +1218,8 @@ func (c *codegen) Visit(node ast.Node) ast.Visitor {
ast.Walk(c, n.X)
// Implementation is a bit different for slices and maps:
// For slices we iterate index from 0 to len-1, storing array, len and index on stack.
// For maps we iterate index from 0 to len-1, storing map, keyarray, size and index on stack.
// For slices, we iterate through indices from 0 to len-1, storing the array, len and index on the stack.
// For maps, we iterate through indices from 0 to len-1, storing the map, keyarray, size and index on the stack.
_, isMap := c.typeOf(n.X).Underlying().(*types.Map)
emit.Opcodes(c.prog.BinWriter, opcode.DUP)
if isMap {
@ -1281,10 +1281,10 @@ func (c *codegen) Visit(node ast.Node) ast.Visitor {
return nil
// We dont really care about assertions for the core logic.
// We don't really care about assertions for the core logic.
// The only thing we need is to please the compiler type checking.
// For this to work properly, we only need to walk the expression
// not the assertion type.
// and not the assertion type.
case *ast.TypeAssertExpr:
ast.Walk(c, n.X)
if c.isCallExprSyscall(n.X) {
@ -1302,7 +1302,7 @@ func (c *codegen) Visit(node ast.Node) ast.Visitor {
}
// packVarArgs packs variadic arguments into an array
// and returns amount of arguments packed.
// and returns the number of arguments packed.
func (c *codegen) packVarArgs(n *ast.CallExpr, typ *types.Signature) int {
varSize := len(n.Args) - typ.Params().Len() + 1
c.emitReverse(varSize)
@ -1332,12 +1332,12 @@ func (c *codegen) isCallExprSyscall(e ast.Expr) bool {
// Go `defer` statements are a bit different:
// 1. `defer` is always executed regardless of whether an exception has occurred.
// 2. `recover` can or can not handle a possible exception.
// Thus we use the following approach:
// 1. Throwed exception is saved in a static field X, static fields Y and is set to true.
// Thus, we use the following approach:
// 1. A thrown exception is saved in a static field X, and a static field Y is set to true.
// 2. For each defer, there is a dedicated local variable which is set to 1 if a `defer` statement
// is encountered during actual execution.
// 3. CATCH and FINALLY blocks are the same, and both contain the same CALLs.
// 4. Right before the CATCH block check a variable from (2). If it is null, jump to the end of CATCH+FINALLY block.
// 4. Right before the CATCH block, check a variable from (2). If it is null, jump to the end of the CATCH+FINALLY block.
// 5. In the CATCH block, we set Y to true and emit default return values if it is the last defer.
// 6. Execute FINALLY block only if Y is false.
func (c *codegen) processDefers() {
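The language semantics this lowering has to preserve can be demonstrated with a plain Go snippet, independent of the compiler internals:

```go
package main

import "fmt"

// result shows the two properties listed above: the deferred function
// runs even though the body panics, and recover converts the panic
// into a normal return value.
func result() (s string) {
	defer func() {
		if r := recover(); r != nil {
			s = fmt.Sprintf("recovered: %v", r)
		}
	}()
	panic("boom")
}

func main() {
	fmt.Println(result()) // recovered: boom
}
```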
@ -1386,7 +1386,7 @@ func (c *codegen) processDefers() {
// emitExplicitConvert handles `someType(someValue)` conversions between string/[]byte.
// Rules for conversion:
// 1. interop.* types are converted to ByteArray if not already.
// 2. Otherwise convert between ByteArray/Buffer.
// 2. Otherwise, convert between ByteArray/Buffer.
// 3. Rules for types which are not string/[]byte should already
// be enforced by the Go parser.
func (c *codegen) emitExplicitConvert(from, to types.Type) {
@ -1847,8 +1847,8 @@ func (c *codegen) convertBuiltin(expr *ast.CallExpr) {
// There are special cases for builtins:
// 1. With FromAddress, parameter conversion happens at compile time,
// so there is no need to push parameters on the stack and perform an actual call
// 2. With panic, generated code depends on if argument was nil or a string so
// it should be handled accordingly.
// 2. With panic, the generated code depends on whether the argument was nil or a string,
// so it should be handled accordingly.
func transformArgs(fs *funcScope, fun ast.Expr, args []ast.Expr) []ast.Expr {
switch f := fun.(type) {
case *ast.SelectorExpr:
@ -1868,7 +1868,7 @@ func transformArgs(fs *funcScope, fun ast.Expr, args []ast.Expr) []ast.Expr {
return args
}
// emitConvert converts top stack item to the specified type.
// emitConvert converts the top stack item to the specified type.
func (c *codegen) emitConvert(typ stackitem.Type) {
emit.Opcodes(c.prog.BinWriter, opcode.DUP)
emit.Instruction(c.prog.BinWriter, opcode.ISTYPE, []byte{byte(typ)})
@ -2297,7 +2297,7 @@ func (c *codegen) replaceLabelWithOffset(ip int, arg []byte) (int, error) {
// By pure coincidence, this is also the size of the `INITSLOT` instruction.
const longToShortRemoveCount = 3
// shortenJumps returns converts b to a program where all long JMP*/CALL* specified by absolute offsets,
// shortenJumps converts b to a program where all long JMP*/CALL* specified by absolute offsets
// are replaced with their corresponding short counterparts. It panics if either b or offsets are invalid.
// This is done in 2 passes:
// 1. Alter jump offsets taking into account parts to be removed.

View file

@ -24,7 +24,7 @@ import (
const fileExt = "nef"
// Options contains all the parameters that affect the behaviour of the compiler.
// Options contains all the parameters that affect the behavior of the compiler.
type Options struct {
// The extension of the output file; set to .nef by default.
Ext string
@ -51,10 +51,10 @@ type Options struct {
// This setting has an effect only if the manifest is emitted.
NoPermissionsCheck bool
// Name is contract's name to be written to manifest.
// Name is the contract's name to be written to the manifest.
Name string
// SourceURL is contract's source URL to be written to manifest.
// SourceURL is the contract's source URL to be written to the manifest.
SourceURL string
// Runtime notifications.
@ -63,10 +63,10 @@ type Options struct {
// The list of standards supported by the contract.
ContractSupportedStandards []string
// SafeMethods contains list of methods which will be marked as safe in manifest.
// SafeMethods contains a list of methods which will be marked as safe in manifest.
SafeMethods []string
// Overloads contains mapping from compiled method name to the name emitted in manifest.
// Overloads contains a mapping from the compiled method name to the name emitted in the manifest.
// It can be used to provide method overloads as Go doesn't have such a capability.
Overloads map[string]string
@ -94,7 +94,7 @@ func (c *codegen) ForEachPackage(fn func(*packages.Package)) {
}
}
// ForEachFile executes fn on each file used in current program.
// ForEachFile executes fn on each file used in the current program.
func (c *codegen) ForEachFile(fn func(*ast.File, *types.Package)) {
c.ForEachPackage(func(pkg *packages.Package) {
for _, f := range pkg.Syntax {
@ -173,7 +173,7 @@ func getBuildInfo(name string, src interface{}) (*buildInfo, error) {
conf.ParseFile = func(fset *token.FileSet, filename string, src []byte) (*ast.File, error) {
// When compiling a single file, we may or may not load other files from the same package.
// Here we chose the latter which is consistent with `go run` behaviour.
// Here we chose the latter, which is consistent with `go run` behavior.
// Other dependencies should still be processed.
if singleFile && filepath.Dir(filename) == filepath.Dir(absName) && filename != absName {
return nil, nil
@ -196,9 +196,9 @@ func getBuildInfo(name string, src interface{}) (*buildInfo, error) {
}, nil
}
// Compile compiles a Go program into bytecode that can run on the NEO virtual machine.
// Compile compiles a Go program into bytecode that can run on the NEO virtual machine.
// If `r != nil`, `name` is interpreted as a filename, and `r` as file contents.
// Otherwise `name` is either file name or name of the directory containing source files.
// Otherwise, `name` is either a file name or the name of the directory containing source files.
func Compile(name string, r io.Reader) ([]byte, error) {
f, _, err := CompileWithOptions(name, r, nil)
if err != nil {
@ -208,7 +208,7 @@ func Compile(name string, r io.Reader) ([]byte, error) {
return f.Script, nil
}
// CompileWithOptions compiles a Go program into bytecode with provided compiler options.
// CompileWithOptions compiles a Go program into bytecode with the provided compiler options.
func CompileWithOptions(name string, r io.Reader, o *Options) (*nef.File, *DebugInfo, error) {
ctx, err := getBuildInfo(name, r)
if err != nil {

View file

@ -28,7 +28,7 @@ type compilerTestCase struct {
}
func TestCompiler(t *testing.T) {
// CompileAndSave use config.Version for proper .nef generation.
// CompileAndSave uses config.Version for proper .nef generation.
config.Version = "0.90.0-test"
testCases := []compilerTestCase{
{
@ -53,7 +53,7 @@ func TestCompiler(t *testing.T) {
for _, info := range infos {
if !info.IsDir() {
// example smart contracts are located in the `examplePath` subdirectories, but
// there are also a couple of files inside the `examplePath` which doesn't need to be compiled
// there are also a couple of files inside the `examplePath` which don't need to be compiled
continue
}

View file

@ -31,7 +31,7 @@ type DebugInfo struct {
EmittedEvents map[string][][]string `json:"-"`
// InvokedContracts contains foreign contract invocations.
InvokedContracts map[util.Uint160][]string `json:"-"`
// StaticVariables contains list of static variable names and types.
// StaticVariables contains a list of static variable names and types.
StaticVariables []string `json:"static-variables"`
}
@ -43,19 +43,19 @@ type MethodDebugInfo struct {
// together with the namespace it belongs to. We need to keep the first letter
// lowercased to match manifest standards.
Name DebugMethodName `json:"name"`
// IsExported defines whether method is exported.
// IsExported defines whether the method is exported.
IsExported bool `json:"-"`
// IsFunction defines whether method has no receiver.
// IsFunction defines whether the method has no receiver.
IsFunction bool `json:"-"`
// Range is the range of smart-contract's opcodes corresponding to the method.
Range DebugRange `json:"range"`
// Parameters is a list of method's parameters.
// Parameters is a list of the method's parameters.
Parameters []DebugParam `json:"params"`
// ReturnType is method's return type.
// ReturnType is the method's return type.
ReturnType string `json:"return"`
// ReturnTypeReal is method's return type as specified in Go code.
// ReturnTypeReal is the method's return type as specified in Go code.
ReturnTypeReal binding.Override `json:"-"`
// ReturnTypeSC is return type to use in manifest.
// ReturnTypeSC is a return type to use in manifest.
ReturnTypeSC smartcontract.ParamType `json:"-"`
Variables []string `json:"variables"`
// SeqPoints is a map between source lines and byte-code instruction offsets.
@ -92,13 +92,13 @@ type DebugSeqPoint struct {
EndCol int
}
// DebugRange represents method's section in bytecode.
// DebugRange represents the method's section in bytecode.
type DebugRange struct {
Start uint16
End uint16
}
// DebugParam represents variables's name and type.
// DebugParam represents the variable's name and type.
type DebugParam struct {
Name string `json:"name"`
Type string `json:"type"`
@ -362,13 +362,13 @@ func (c *codegen) scAndVMTypeFromType(t types.Type) (smartcontract.ParamType, st
}
}
// MarshalJSON implements json.Marshaler interface.
// MarshalJSON implements the json.Marshaler interface.
func (d *DebugRange) MarshalJSON() ([]byte, error) {
return []byte(`"` + strconv.FormatUint(uint64(d.Start), 10) + `-` +
strconv.FormatUint(uint64(d.End), 10) + `"`), nil
}
// UnmarshalJSON implements json.Unmarshaler interface.
// UnmarshalJSON implements the json.Unmarshaler interface.
func (d *DebugRange) UnmarshalJSON(data []byte) error {
startS, endS, err := parsePairJSON(data, "-")
if err != nil {
@ -389,12 +389,12 @@ func (d *DebugRange) UnmarshalJSON(data []byte) error {
return nil
}
// MarshalJSON implements json.Marshaler interface.
// MarshalJSON implements the json.Marshaler interface.
func (d *DebugParam) MarshalJSON() ([]byte, error) {
return []byte(`"` + d.Name + `,` + d.Type + `"`), nil
}
// UnmarshalJSON implements json.Unmarshaler interface.
// UnmarshalJSON implements the json.Unmarshaler interface.
func (d *DebugParam) UnmarshalJSON(data []byte) error {
startS, endS, err := parsePairJSON(data, ",")
if err != nil {
@ -431,12 +431,12 @@ func (m *MethodDebugInfo) ToManifestMethod() manifest.Method {
return result
}
// MarshalJSON implements json.Marshaler interface.
// MarshalJSON implements the json.Marshaler interface.
func (d *DebugMethodName) MarshalJSON() ([]byte, error) {
return []byte(`"` + d.Namespace + `,` + d.Name + `"`), nil
}
// UnmarshalJSON implements json.Unmarshaler interface.
// UnmarshalJSON implements the json.Unmarshaler interface.
func (d *DebugMethodName) UnmarshalJSON(data []byte) error {
startS, endS, err := parsePairJSON(data, ",")
if err != nil {
@ -449,14 +449,14 @@ func (d *DebugMethodName) UnmarshalJSON(data []byte) error {
return nil
}
// MarshalJSON implements json.Marshaler interface.
// MarshalJSON implements the json.Marshaler interface.
func (d *DebugSeqPoint) MarshalJSON() ([]byte, error) {
s := fmt.Sprintf("%d[%d]%d:%d-%d:%d", d.Opcode, d.Document,
d.StartLine, d.StartCol, d.EndLine, d.EndCol)
return []byte(`"` + s + `"`), nil
}
// UnmarshalJSON implements json.Unmarshaler interface.
// UnmarshalJSON implements the json.Unmarshaler interface.
func (d *DebugSeqPoint) UnmarshalJSON(data []byte) error {
_, err := fmt.Sscanf(string(data), `"%d[%d]%d:%d-%d:%d"`,
&d.Opcode, &d.Document, &d.StartLine, &d.StartCol, &d.EndLine, &d.EndCol)
@ -475,7 +475,7 @@ func parsePairJSON(data []byte, sep string) (string, string, error) {
return ss[0], ss[1], nil
}
// ConvertToManifest converts contract to the manifest.Manifest struct for debugger.
// ConvertToManifest converts a contract to the manifest.Manifest struct for the debugger.
// Note: the manifest is taken from an external source; however, it can be generated ad-hoc. See #1038.
func (di *DebugInfo) ConvertToManifest(o *Options) (*manifest.Manifest, error) {
methods := make([]manifest.Method, 0)

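The DebugRange and DebugParam unmarshalers above both delegate to parsePairJSON to split a quoted `left<sep>right` string. A minimal sketch of such a helper, assuming a plain two-way split (the actual neo-go implementation may differ in details):

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// parsePairJSON splits a quoted "left<sep>right" JSON string literal into its
// two halves, as used by DebugRange ("start-end") and DebugParam ("name,type").
// This is an illustrative reconstruction, not the exact neo-go code.
func parsePairJSON(data []byte, sep string) (string, string, error) {
	s := strings.Trim(string(data), `"`)
	ss := strings.SplitN(s, sep, 2)
	if len(ss) != 2 {
		return "", "", errors.New("malformed pair")
	}
	return ss[0], ss[1], nil
}

func main() {
	start, end, err := parsePairJSON([]byte(`"12-34"`), "-")
	fmt.Println(start, end, err)
}
```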
View file

@ -6,7 +6,7 @@ import (
)
// A funcScope represents the scope within the function context.
// It holds al the local variables along with the initialized struct positions.
// It holds all the local variables along with the initialized struct positions.
type funcScope struct {
// Identifier of the function.
name string

View file

@ -50,8 +50,8 @@ type syscallTestCase struct {
isVoid bool
}
// This test ensures that our wrappers have necessary number of parameters
// and execute needed syscall. Because of lack of typing (compared to native contracts)
// This test ensures that our wrappers have the necessary number of parameters
// and execute the appropriate syscall. Because of the lack of typing (compared to native contracts),
// parameter types can't be checked.
func TestSyscallExecution(t *testing.T) {
b := `[]byte{1}`

View file

@ -11,7 +11,7 @@ import (
"github.com/stretchr/testify/require"
)
// In this test we only check that needed interop
// In this test, we only check that the needed interop
// is called with the provided arguments in the right order.
func TestVerifyGood(t *testing.T) {
msg := []byte("test message")

View file

@ -18,7 +18,7 @@ const (
UserAgentFormat = UserAgentWrapper + UserAgentPrefix + "%s" + UserAgentWrapper
)
// Version the version of the node, set at build time.
// Version is the version of the node, set at build time.
var Version string
// Config is a top-level struct representing the config
@ -28,7 +28,7 @@ type Config struct {
ApplicationConfiguration ApplicationConfiguration `yaml:"ApplicationConfiguration"`
}
// GenerateUserAgent creates user agent string based on build time environment.
// GenerateUserAgent creates a user agent string based on the build time environment.
func (c Config) GenerateUserAgent() string {
return fmt.Sprintf(UserAgentFormat, Version)
}

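GenerateUserAgent above is just a Sprintf over the UserAgentFormat constant assembled from the wrapper and prefix. A self-contained sketch of that assembly; the wrapper/prefix values and the example version below are assumptions for illustration (Version is normally injected at build time via linker flags):

```go
package main

import "fmt"

// Assumed constant values mirroring the UserAgentFormat layout described
// above; the real values live in the config package.
const (
	userAgentWrapper = "/"
	userAgentPrefix  = "NEO-GO:"
	userAgentFormat  = userAgentWrapper + userAgentPrefix + "%s" + userAgentWrapper
)

// version stands in for config.Version; real builds set it with -ldflags.
var version = "0.99.0"

// generateUserAgent formats the node's user agent string.
func generateUserAgent() string {
	return fmt.Sprintf(userAgentFormat, version)
}

func main() {
	fmt.Println(generateUserAgent())
}
```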
View file

@ -28,7 +28,7 @@ type (
// P2PNotaryRequestPayloadPoolSize specifies the memory pool size for P2PNotaryRequestPayloads.
// It is valid only if P2PSigExtensions are enabled.
P2PNotaryRequestPayloadPoolSize int `yaml:"P2PNotaryRequestPayloadPoolSize"`
// KeepOnlyLatestState specifies if MPT should only store latest state.
// KeepOnlyLatestState specifies if MPT should only store the latest state.
// If true, DB size will be smaller, but older roots won't be accessible.
// This value should remain the same for the same database.
KeepOnlyLatestState bool `yaml:"KeepOnlyLatestState"`
@ -46,7 +46,7 @@ type (
// exceeding that a transaction should fail validation. It is set to the estimated daily number
// of blocks with a 15s interval.
MaxValidUntilBlockIncrement uint32 `yaml:"MaxValidUntilBlockIncrement"`
// NativeUpdateHistories is the list of histories of native contracts updates.
// NativeUpdateHistories is a list of histories of native contracts updates.
NativeUpdateHistories map[string][]uint32 `yaml:"NativeActivations"`
// P2PSigExtensions enables additional signature-related logic.
P2PSigExtensions bool `yaml:"P2PSigExtensions"`
@ -69,7 +69,7 @@ type (
ValidatorsHistory map[uint32]int `yaml:"ValidatorsHistory"`
// Whether to verify received blocks.
VerifyBlocks bool `yaml:"VerifyBlocks"`
// Whether to verify transactions in received blocks.
// Whether to verify transactions in the received blocks.
VerifyTransactions bool `yaml:"VerifyTransactions"`
}
)
@ -81,7 +81,7 @@ type heightNumber struct {
}
// Validate checks ProtocolConfiguration for internal consistency and returns
// error if anything inappropriate found. Other methods can rely on protocol
// an error if anything inappropriate is found. Other methods can rely on protocol
// validity after this.
func (p *ProtocolConfiguration) Validate() error {
var err error

View file

@ -11,7 +11,7 @@ import (
"github.com/nspcc-dev/neo-go/pkg/util"
)
// neoBlock is a wrapper of core.Block which implements
// neoBlock is a wrapper of a core.Block which implements
// methods necessary for dBFT library.
type neoBlock struct {
coreb.Block
@ -22,7 +22,7 @@ type neoBlock struct {
var _ block.Block = (*neoBlock)(nil)
// Sign implements block.Block interface.
// Sign implements the block.Block interface.
func (n *neoBlock) Sign(key crypto.PrivateKey) error {
k := key.(*privateKey)
sig := k.PrivateKey.SignHashable(uint32(n.network), &n.Block)
@ -30,7 +30,7 @@ func (n *neoBlock) Sign(key crypto.PrivateKey) error {
return nil
}
// Verify implements block.Block interface.
// Verify implements the block.Block interface.
func (n *neoBlock) Verify(key crypto.PublicKey, sign []byte) error {
k := key.(*publicKey)
if k.PublicKey.VerifyHashable(sign, uint32(n.network), &n.Block) {
@ -39,7 +39,7 @@ func (n *neoBlock) Verify(key crypto.PublicKey, sign []byte) error {
return errors.New("verification failed")
}
// Transactions implements block.Block interface.
// Transactions implements the block.Block interface.
func (n *neoBlock) Transactions() []block.Transaction {
txes := make([]block.Transaction, len(n.Block.Transactions))
for i, tx := range n.Block.Transactions {
@ -49,7 +49,7 @@ func (n *neoBlock) Transactions() []block.Transaction {
return txes
}
// SetTransactions implements block.Block interface.
// SetTransactions implements the block.Block interface.
func (n *neoBlock) SetTransactions(txes []block.Transaction) {
n.Block.Transactions = make([]*transaction.Transaction, len(txes))
for i, tx := range txes {
@ -57,26 +57,26 @@ func (n *neoBlock) SetTransactions(txes []block.Transaction) {
}
}
// Version implements block.Block interface.
// Version implements the block.Block interface.
func (n *neoBlock) Version() uint32 { return n.Block.Version }
// PrevHash implements block.Block interface.
// PrevHash implements the block.Block interface.
func (n *neoBlock) PrevHash() util.Uint256 { return n.Block.PrevHash }
// MerkleRoot implements block.Block interface.
// MerkleRoot implements the block.Block interface.
func (n *neoBlock) MerkleRoot() util.Uint256 { return n.Block.MerkleRoot }
// Timestamp implements block.Block interface.
// Timestamp implements the block.Block interface.
func (n *neoBlock) Timestamp() uint64 { return n.Block.Timestamp * nsInMs }
// Index implements block.Block interface.
// Index implements the block.Block interface.
func (n *neoBlock) Index() uint32 { return n.Block.Index }
// ConsensusData implements block.Block interface.
// ConsensusData implements the block.Block interface.
func (n *neoBlock) ConsensusData() uint64 { return n.Block.Nonce }
// NextConsensus implements block.Block interface.
// NextConsensus implements the block.Block interface.
func (n *neoBlock) NextConsensus() util.Uint160 { return n.Block.NextConsensus }
// Signature implements block.Block interface.
// Signature implements the block.Block interface.
func (n *neoBlock) Signature() []byte { return n.signature }

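The neoBlock methods above all follow one pattern: embed the core type and add thin accessor methods so the wrapper satisfies the external dBFT interface. A compact illustration of that wrapper pattern with invented names (none of these types are from neo-go):

```go
package main

import "fmt"

// coreBlock plays the role of coreb.Block: a concrete type with plain fields.
type coreBlock struct {
	Version uint32
	Index   uint32
}

// dbftBlock plays the role of the external block.Block interface.
type dbftBlock interface {
	BlockVersion() uint32
	BlockIndex() uint32
}

// wrappedBlock embeds the core type and adds the methods the interface wants.
type wrappedBlock struct {
	coreBlock
}

func (w *wrappedBlock) BlockVersion() uint32 { return w.coreBlock.Version }
func (w *wrappedBlock) BlockIndex() uint32   { return w.coreBlock.Index }

func main() {
	var b dbftBlock = &wrappedBlock{coreBlock{Version: 0, Index: 42}}
	fmt.Println(b.BlockVersion(), b.BlockIndex())
}
```

The compile-time check `var _ block.Block = (*neoBlock)(nil)` seen in the diff is the idiomatic way to assert that the wrapper really implements the interface.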
View file

@ -7,7 +7,7 @@ import (
"github.com/nspcc-dev/neo-go/pkg/util"
)
// relayCache is a payload cache which is used to store
// relayCache is a payload cache which is used to store
// last consensus payloads.
type relayCache struct {
*sync.RWMutex
@ -17,7 +17,7 @@ type relayCache struct {
queue *list.List
}
// hashable is a type of items which can be stored in the relayCache.
// hashable is the type of items which can be stored in the relayCache.
type hashable interface {
Hash() util.Uint256
}
@ -32,7 +32,7 @@ func newFIFOCache(capacity int) *relayCache {
}
}
// Add adds payload into a cache if it doesn't already exist.
// Add adds a payload into the cache if it doesn't already exist there.
func (c *relayCache) Add(p hashable) {
c.Lock()
defer c.Unlock()
@ -52,7 +52,7 @@ func (c *relayCache) Add(p hashable) {
c.elems[h] = e
}
// Has checks if an item is already in cache.
// Has checks if the item is already in the cache.
func (c *relayCache) Has(h util.Uint256) bool {
c.RLock()
defer c.RUnlock()

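The relayCache hunk above describes a fixed-capacity FIFO: a mutex-guarded map of elements plus a list tracking insertion order, so Add evicts the oldest entry when full. A minimal sketch of that idea using string keys (names and simplifications are ours, not neo-go's):

```go
package main

import (
	"container/list"
	"fmt"
	"sync"
)

// fifoCache keeps at most capacity keys, evicting the oldest on overflow.
type fifoCache struct {
	mu       sync.RWMutex
	capacity int
	elems    map[string]*list.Element
	queue    *list.List // insertion order, front = oldest
}

func newFIFOCache(capacity int) *fifoCache {
	return &fifoCache{
		capacity: capacity,
		elems:    make(map[string]*list.Element),
		queue:    list.New(),
	}
}

// Add inserts key unless it is already cached, evicting the oldest if full.
func (c *fifoCache) Add(key string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if _, ok := c.elems[key]; ok {
		return // already cached
	}
	if c.queue.Len() >= c.capacity {
		oldest := c.queue.Remove(c.queue.Front()).(string)
		delete(c.elems, oldest)
	}
	c.elems[key] = c.queue.PushBack(key)
}

// Has reports whether key is currently cached.
func (c *fifoCache) Has(key string) bool {
	c.mu.RLock()
	defer c.mu.RUnlock()
	_, ok := c.elems[key]
	return ok
}

func main() {
	c := newFIFOCache(2)
	c.Add("a")
	c.Add("b")
	c.Add("c") // evicts "a"
	fmt.Println(c.Has("a"), c.Has("c"))
}
```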
View file

@ -14,32 +14,32 @@ type changeView struct {
var _ payload.ChangeView = (*changeView)(nil)
// EncodeBinary implements io.Serializable interface.
// EncodeBinary implements the io.Serializable interface.
func (c *changeView) EncodeBinary(w *io.BinWriter) {
w.WriteU64LE(c.timestamp)
w.WriteB(byte(c.reason))
}
// DecodeBinary implements io.Serializable interface.
// DecodeBinary implements the io.Serializable interface.
func (c *changeView) DecodeBinary(r *io.BinReader) {
c.timestamp = r.ReadU64LE()
c.reason = payload.ChangeViewReason(r.ReadB())
}
// NewViewNumber implements payload.ChangeView interface.
// NewViewNumber implements the payload.ChangeView interface.
func (c changeView) NewViewNumber() byte { return c.newViewNumber }
// SetNewViewNumber implements payload.ChangeView interface.
// SetNewViewNumber implements the payload.ChangeView interface.
func (c *changeView) SetNewViewNumber(view byte) { c.newViewNumber = view }
// Timestamp implements payload.ChangeView interface.
// Timestamp implements the payload.ChangeView interface.
func (c changeView) Timestamp() uint64 { return c.timestamp * nsInMs }
// SetTimestamp implements payload.ChangeView interface.
// SetTimestamp implements the payload.ChangeView interface.
func (c *changeView) SetTimestamp(ts uint64) { c.timestamp = ts / nsInMs }
// Reason implements payload.ChangeView interface.
// Reason implements the payload.ChangeView interface.
func (c changeView) Reason() payload.ChangeViewReason { return c.reason }
// SetReason implements payload.ChangeView interface.
// SetReason implements the payload.ChangeView interface.
func (c *changeView) SetReason(reason payload.ChangeViewReason) { c.reason = reason }

View file

@ -11,25 +11,25 @@ type commit struct {
}
// signatureSize is an RFC 6979 signature size in bytes
// without leading byte (0x04, uncompressed).
// without a leading byte (0x04, uncompressed).
const signatureSize = 64
var _ payload.Commit = (*commit)(nil)
// EncodeBinary implements io.Serializable interface.
// EncodeBinary implements the io.Serializable interface.
func (c *commit) EncodeBinary(w *io.BinWriter) {
w.WriteBytes(c.signature[:])
}
// DecodeBinary implements io.Serializable interface.
// DecodeBinary implements the io.Serializable interface.
func (c *commit) DecodeBinary(r *io.BinReader) {
r.ReadBytes(c.signature[:])
}
// Signature implements payload.Commit interface.
// Signature implements the payload.Commit interface.
func (c commit) Signature() []byte { return c.signature[:] }
// SetSignature implements payload.Commit interface.
// SetSignature implements the payload.Commit interface.
func (c *commit) SetSignature(signature []byte) {
copy(c.signature[:], signature)
}

View file

@ -40,7 +40,7 @@ const defaultTimePerBlock = 15 * time.Second
// Number of nanoseconds in a millisecond.
const nsInMs = 1000000
// Category is message category for extensible payloads.
// Category is a message category for extensible payloads.
const Category = "dBFT"
// Ledger is the interface to Blockchain sufficient for Service.
@ -61,19 +61,19 @@ type Ledger interface {
mempool.Feer
}
// Service represents consensus instance.
// Service represents a consensus instance.
type Service interface {
// Name returns service name.
Name() string
// Start initializes dBFT and starts event loop for consensus service.
// It must be called only when sufficient amount of peers are connected.
// It must be called only when a sufficient number of peers are connected.
Start()
// Shutdown stops dBFT event loop.
Shutdown()
// OnPayload is a callback to notify Service about new received payload.
// OnPayload is a callback to notify the Service about a newly received payload.
OnPayload(p *npayload.Extensible) error
// OnTransaction is a callback to notify Service about new received transaction.
// OnTransaction is a callback to notify the Service about a newly received transaction.
OnTransaction(tx *transaction.Transaction)
}
@ -100,8 +100,8 @@ type service struct {
finished chan struct{}
// lastTimestamp contains the timestamp for the last processed block.
// We can't rely on the timestamp from the dbft context because it is changed
// before block is accepted, so in case of change view it will contain
// updated value.
// before the block is accepted. So, in case of change view, it will contain
// an updated value.
lastTimestamp uint64
}
@ -109,23 +109,23 @@ type service struct {
type Config struct {
// Logger is a logger instance.
Logger *zap.Logger
// Broadcast is a callback which is called to notify server
// about new consensus payload to sent.
// Broadcast is a callback which is called to notify the server
// about a new consensus payload to be sent.
Broadcast func(p *npayload.Extensible)
// Chain is a Ledger instance.
Chain Ledger
// ProtocolConfiguration contains protocol settings.
ProtocolConfiguration config.ProtocolConfiguration
// RequestTx is a callback which will be called
// when a node lacks transactions present in a block.
// when a node lacks transactions present in the block.
RequestTx func(h ...util.Uint256)
// TimePerBlock minimal time that should pass before next block is accepted.
// TimePerBlock is the minimal time that should pass before the next block is accepted.
TimePerBlock time.Duration
// Wallet is a local-node wallet configuration.
Wallet *config.Wallet
}
// NewService returns new consensus.Service instance.
// NewService returns a new consensus.Service instance.
func NewService(cfg Config) (Service, error) {
if cfg.TimePerBlock <= 0 {
cfg.TimePerBlock = defaultTimePerBlock
@ -155,7 +155,7 @@ func NewService(cfg Config) (Service, error) {
return nil, err
}
// Check that wallet password is correct for at least one account.
// Check that the wallet password is correct for at least one account.
var ok bool
for _, acc := range srv.wallet.Accounts {
err := acc.Decrypt(srv.Config.Wallet.Password, srv.wallet.Scrypt)
@ -213,7 +213,7 @@ var (
_ block.Block = (*neoBlock)(nil)
)
// NewPayload creates new consensus payload for the provided network.
// NewPayload creates a new consensus payload for the provided network.
func NewPayload(m netmode.Magic, stateRootEnabled bool) *Payload {
return &Payload{
Extensible: npayload.Extensible{
@ -272,7 +272,7 @@ func (s *service) Start() {
}
}
// Shutdown implements Service interface.
// Shutdown implements the Service interface.
func (s *service) Shutdown() {
if s.started.Load() {
close(s.quit)

View file

@ -8,44 +8,44 @@ import (
)
// privateKey is a wrapper around keys.PrivateKey
// which implements crypto.PrivateKey interface.
// which implements the crypto.PrivateKey interface.
type privateKey struct {
*keys.PrivateKey
}
// MarshalBinary implements encoding.BinaryMarshaler interface.
// MarshalBinary implements the encoding.BinaryMarshaler interface.
func (p privateKey) MarshalBinary() (data []byte, err error) {
return p.PrivateKey.Bytes(), nil
}
// UnmarshalBinary implements encoding.BinaryUnmarshaler interface.
// UnmarshalBinary implements the encoding.BinaryUnmarshaler interface.
func (p *privateKey) UnmarshalBinary(data []byte) (err error) {
p.PrivateKey, err = keys.NewPrivateKeyFromBytes(data)
return
}
// Sign implements dbft's crypto.PrivateKey interface.
// Sign implements dbft's crypto.PrivateKey interface.
func (p *privateKey) Sign(data []byte) ([]byte, error) {
return p.PrivateKey.Sign(data), nil
}
// publicKey is a wrapper around keys.PublicKey
// which implements crypto.PublicKey interface.
// which implements the crypto.PublicKey interface.
type publicKey struct {
*keys.PublicKey
}
// MarshalBinary implements encoding.BinaryMarshaler interface.
// MarshalBinary implements the encoding.BinaryMarshaler interface.
func (p publicKey) MarshalBinary() (data []byte, err error) {
return p.PublicKey.Bytes(), nil
}
// UnmarshalBinary implements encoding.BinaryUnmarshaler interface.
// UnmarshalBinary implements the encoding.BinaryUnmarshaler interface.
func (p *publicKey) UnmarshalBinary(data []byte) error {
return p.PublicKey.DecodeBytes(data)
}
// Verify implements crypto.PublicKey interface.
// Verify implements the crypto.PublicKey interface.
func (p publicKey) Verify(msg, sig []byte) error {
hash := sha256.Sum256(msg)
if p.PublicKey.Verify(sig, hash[:]) {

View file

@ -44,83 +44,83 @@ const (
payloadGasLimit = 2000000 // 0.02 GAS
)
// ViewNumber implements payload.ConsensusPayload interface.
// ViewNumber implements the payload.ConsensusPayload interface.
func (p Payload) ViewNumber() byte {
return p.message.ViewNumber
}
// SetViewNumber implements payload.ConsensusPayload interface.
// SetViewNumber implements the payload.ConsensusPayload interface.
func (p *Payload) SetViewNumber(view byte) {
p.message.ViewNumber = view
}
// Type implements payload.ConsensusPayload interface.
// Type implements the payload.ConsensusPayload interface.
func (p Payload) Type() payload.MessageType {
return payload.MessageType(p.message.Type)
}
// SetType implements payload.ConsensusPayload interface.
// SetType implements the payload.ConsensusPayload interface.
func (p *Payload) SetType(t payload.MessageType) {
p.message.Type = messageType(t)
}
// Payload implements payload.ConsensusPayload interface.
// Payload implements the payload.ConsensusPayload interface.
func (p Payload) Payload() interface{} {
return p.payload
}
// SetPayload implements payload.ConsensusPayload interface.
// SetPayload implements the payload.ConsensusPayload interface.
func (p *Payload) SetPayload(pl interface{}) {
p.payload = pl.(io.Serializable)
}
// GetChangeView implements payload.ConsensusPayload interface.
// GetChangeView implements the payload.ConsensusPayload interface.
func (p Payload) GetChangeView() payload.ChangeView { return p.payload.(payload.ChangeView) }
// GetPrepareRequest implements payload.ConsensusPayload interface.
// GetPrepareRequest implements the payload.ConsensusPayload interface.
func (p Payload) GetPrepareRequest() payload.PrepareRequest {
return p.payload.(payload.PrepareRequest)
}
// GetPrepareResponse implements payload.ConsensusPayload interface.
// GetPrepareResponse implements the payload.ConsensusPayload interface.
func (p Payload) GetPrepareResponse() payload.PrepareResponse {
return p.payload.(payload.PrepareResponse)
}
// GetCommit implements payload.ConsensusPayload interface.
// GetCommit implements the payload.ConsensusPayload interface.
func (p Payload) GetCommit() payload.Commit { return p.payload.(payload.Commit) }
// GetRecoveryRequest implements payload.ConsensusPayload interface.
// GetRecoveryRequest implements the payload.ConsensusPayload interface.
func (p Payload) GetRecoveryRequest() payload.RecoveryRequest {
return p.payload.(payload.RecoveryRequest)
}
// GetRecoveryMessage implements payload.ConsensusPayload interface.
// GetRecoveryMessage implements the payload.ConsensusPayload interface.
func (p Payload) GetRecoveryMessage() payload.RecoveryMessage {
return p.payload.(payload.RecoveryMessage)
}
// ValidatorIndex implements payload.ConsensusPayload interface.
// ValidatorIndex implements the payload.ConsensusPayload interface.
func (p Payload) ValidatorIndex() uint16 {
return uint16(p.message.ValidatorIndex)
}
// SetValidatorIndex implements payload.ConsensusPayload interface.
// SetValidatorIndex implements the payload.ConsensusPayload interface.
func (p *Payload) SetValidatorIndex(i uint16) {
p.message.ValidatorIndex = byte(i)
}
// Height implements payload.ConsensusPayload interface.
// Height implements the payload.ConsensusPayload interface.
func (p Payload) Height() uint32 {
return p.message.BlockIndex
}
// SetHeight implements payload.ConsensusPayload interface.
// SetHeight implements the payload.ConsensusPayload interface.
func (p *Payload) SetHeight(h uint32) {
p.message.BlockIndex = h
}
// EncodeBinary implements io.Serializable interface.
// EncodeBinary implements the io.Serializable interface.
func (p *Payload) EncodeBinary(w *io.BinWriter) {
p.encodeData()
p.Extensible.EncodeBinary(w)
@ -140,7 +140,7 @@ func (p *Payload) Sign(key *privateKey) error {
return nil
}
// Hash implements payload.ConsensusPayload interface.
// Hash implements the payload.ConsensusPayload interface.
func (p *Payload) Hash() util.Uint256 {
if p.Extensible.Data == nil {
p.encodeData()
@ -148,7 +148,7 @@ func (p *Payload) Hash() util.Uint256 {
return p.Extensible.Hash()
}
// DecodeBinary implements io.Serializable interface.
// DecodeBinary implements the io.Serializable interface.
func (p *Payload) DecodeBinary(r *io.BinReader) {
p.Extensible.DecodeBinary(r)
if r.Err == nil {
@ -156,7 +156,7 @@ func (p *Payload) DecodeBinary(r *io.BinReader) {
}
}
// EncodeBinary implements io.Serializable interface.
// EncodeBinary implements the io.Serializable interface.
func (m *message) EncodeBinary(w *io.BinWriter) {
w.WriteB(byte(m.Type))
w.WriteU32LE(m.BlockIndex)
@ -165,7 +165,7 @@ func (m *message) EncodeBinary(w *io.BinWriter) {
m.payload.EncodeBinary(w)
}
// DecodeBinary implements io.Serializable interface.
// DecodeBinary implements the io.Serializable interface.
func (m *message) DecodeBinary(r *io.BinReader) {
m.Type = messageType(r.ReadB())
m.BlockIndex = r.ReadU32LE()

View file

@ -20,7 +20,7 @@ type prepareRequest struct {
var _ payload.PrepareRequest = (*prepareRequest)(nil)
// EncodeBinary implements io.Serializable interface.
// EncodeBinary implements the io.Serializable interface.
func (p *prepareRequest) EncodeBinary(w *io.BinWriter) {
w.WriteU32LE(p.version)
w.WriteBytes(p.prevHash[:])
@ -32,7 +32,7 @@ func (p *prepareRequest) EncodeBinary(w *io.BinWriter) {
}
}
// DecodeBinary implements io.Serializable interface.
// DecodeBinary implements the io.Serializable interface.
func (p *prepareRequest) DecodeBinary(r *io.BinReader) {
p.version = r.ReadU32LE()
r.ReadBytes(p.prevHash[:])
@ -44,46 +44,46 @@ func (p *prepareRequest) DecodeBinary(r *io.BinReader) {
}
}
// Version implements payload.PrepareRequest interface.
// Version implements the payload.PrepareRequest interface.
func (p prepareRequest) Version() uint32 {
return p.version
}
// SetVersion implements payload.PrepareRequest interface.
// SetVersion implements the payload.PrepareRequest interface.
func (p *prepareRequest) SetVersion(v uint32) {
p.version = v
}
// PrevHash implements payload.PrepareRequest interface.
// PrevHash implements the payload.PrepareRequest interface.
func (p prepareRequest) PrevHash() util.Uint256 {
return p.prevHash
}
// SetPrevHash implements payload.PrepareRequest interface.
// SetPrevHash implements the payload.PrepareRequest interface.
func (p *prepareRequest) SetPrevHash(h util.Uint256) {
p.prevHash = h
}
// Timestamp implements payload.PrepareRequest interface.
// Timestamp implements the payload.PrepareRequest interface.
func (p *prepareRequest) Timestamp() uint64 { return p.timestamp * nsInMs }
// SetTimestamp implements payload.PrepareRequest interface.
// SetTimestamp implements the payload.PrepareRequest interface.
func (p *prepareRequest) SetTimestamp(ts uint64) { p.timestamp = ts / nsInMs }
// Nonce implements payload.PrepareRequest interface.
// Nonce implements the payload.PrepareRequest interface.
func (p *prepareRequest) Nonce() uint64 { return p.nonce }
// SetNonce implements payload.PrepareRequest interface.
// SetNonce implements the payload.PrepareRequest interface.
func (p *prepareRequest) SetNonce(nonce uint64) { p.nonce = nonce }
// TransactionHashes implements payload.PrepareRequest interface.
// TransactionHashes implements the payload.PrepareRequest interface.
func (p *prepareRequest) TransactionHashes() []util.Uint256 { return p.transactionHashes }
// SetTransactionHashes implements payload.PrepareRequest interface.
// SetTransactionHashes implements the payload.PrepareRequest interface.
func (p *prepareRequest) SetTransactionHashes(hs []util.Uint256) { p.transactionHashes = hs }
// NextConsensus implements payload.PrepareRequest interface.
// NextConsensus implements the payload.PrepareRequest interface.
func (p *prepareRequest) NextConsensus() util.Uint160 { return util.Uint160{} }
// SetNextConsensus implements payload.PrepareRequest interface.
// SetNextConsensus implements the payload.PrepareRequest interface.
func (p *prepareRequest) SetNextConsensus(_ util.Uint160) {}

View file

@ -13,18 +13,18 @@ type prepareResponse struct {
var _ payload.PrepareResponse = (*prepareResponse)(nil)
// EncodeBinary implements io.Serializable interface.
// EncodeBinary implements the io.Serializable interface.
func (p *prepareResponse) EncodeBinary(w *io.BinWriter) {
w.WriteBytes(p.preparationHash[:])
}
// DecodeBinary implements io.Serializable interface.
// DecodeBinary implements the io.Serializable interface.
func (p *prepareResponse) DecodeBinary(r *io.BinReader) {
r.ReadBytes(p.preparationHash[:])
}
// PreparationHash implements payload.PrepareResponse interface.
// PreparationHash implements the payload.PrepareResponse interface.
func (p *prepareResponse) PreparationHash() util.Uint256 { return p.preparationHash }
// SetPreparationHash implements payload.PrepareResponse interface.
// SetPreparationHash implements the payload.PrepareResponse interface.
func (p *prepareResponse) SetPreparationHash(h util.Uint256) { p.preparationHash = h }

View file

@ -43,7 +43,7 @@ type (
var _ payload.RecoveryMessage = (*recoveryMessage)(nil)
// DecodeBinary implements io.Serializable interface.
// DecodeBinary implements the io.Serializable interface.
func (m *recoveryMessage) DecodeBinary(r *io.BinReader) {
r.ReadArray(&m.changeViewPayloads)
@ -73,7 +73,7 @@ func (m *recoveryMessage) DecodeBinary(r *io.BinReader) {
r.ReadArray(&m.commitPayloads)
}
// EncodeBinary implements io.Serializable interface.
// EncodeBinary implements the io.Serializable interface.
func (m *recoveryMessage) EncodeBinary(w *io.BinWriter) {
w.WriteArray(m.changeViewPayloads)
@ -94,7 +94,7 @@ func (m *recoveryMessage) EncodeBinary(w *io.BinWriter) {
w.WriteArray(m.commitPayloads)
}
// DecodeBinary implements io.Serializable interface.
// DecodeBinary implements the io.Serializable interface.
func (p *changeViewCompact) DecodeBinary(r *io.BinReader) {
p.ValidatorIndex = r.ReadB()
p.OriginalViewNumber = r.ReadB()
@ -102,7 +102,7 @@ func (p *changeViewCompact) DecodeBinary(r *io.BinReader) {
p.InvocationScript = r.ReadVarBytes(1024)
}
// EncodeBinary implements io.Serializable interface.
// EncodeBinary implements the io.Serializable interface.
func (p *changeViewCompact) EncodeBinary(w *io.BinWriter) {
w.WriteB(p.ValidatorIndex)
w.WriteB(p.OriginalViewNumber)
@ -110,7 +110,7 @@ func (p *changeViewCompact) EncodeBinary(w *io.BinWriter) {
w.WriteVarBytes(p.InvocationScript)
}
// DecodeBinary implements io.Serializable interface.
// DecodeBinary implements the io.Serializable interface.
func (p *commitCompact) DecodeBinary(r *io.BinReader) {
p.ViewNumber = r.ReadB()
p.ValidatorIndex = r.ReadB()
@ -118,7 +118,7 @@ func (p *commitCompact) DecodeBinary(r *io.BinReader) {
p.InvocationScript = r.ReadVarBytes(1024)
}
// EncodeBinary implements io.Serializable interface.
// EncodeBinary implements the io.Serializable interface.
func (p *commitCompact) EncodeBinary(w *io.BinWriter) {
w.WriteB(p.ViewNumber)
w.WriteB(p.ValidatorIndex)
@ -126,19 +126,19 @@ func (p *commitCompact) EncodeBinary(w *io.BinWriter) {
w.WriteVarBytes(p.InvocationScript)
}
// DecodeBinary implements io.Serializable interface.
// DecodeBinary implements the io.Serializable interface.
func (p *preparationCompact) DecodeBinary(r *io.BinReader) {
p.ValidatorIndex = r.ReadB()
p.InvocationScript = r.ReadVarBytes(1024)
}
// EncodeBinary implements io.Serializable interface.
// EncodeBinary implements the io.Serializable interface.
func (p *preparationCompact) EncodeBinary(w *io.BinWriter) {
w.WriteB(p.ValidatorIndex)
w.WriteVarBytes(p.InvocationScript)
}
// AddPayload implements payload.RecoveryMessage interface.
// AddPayload implements the payload.RecoveryMessage interface.
func (m *recoveryMessage) AddPayload(p payload.ConsensusPayload) {
validator := uint8(p.ValidatorIndex())
@ -183,7 +183,7 @@ func (m *recoveryMessage) AddPayload(p payload.ConsensusPayload) {
}
}
// GetPrepareRequest implements payload.RecoveryMessage interface.
// GetPrepareRequest implements the payload.RecoveryMessage interface.
func (m *recoveryMessage) GetPrepareRequest(p payload.ConsensusPayload, validators []crypto.PublicKey, primary uint16) payload.ConsensusPayload {
if m.prepareRequest == nil {
return nil
@ -210,7 +210,7 @@ func (m *recoveryMessage) GetPrepareRequest(p payload.ConsensusPayload, validato
return req
}
// GetPrepareResponses implements payload.RecoveryMessage interface.
// GetPrepareResponses implements the payload.RecoveryMessage interface.
func (m *recoveryMessage) GetPrepareResponses(p payload.ConsensusPayload, validators []crypto.PublicKey) []payload.ConsensusPayload {
if m.preparationHash == nil {
return nil
@ -233,7 +233,7 @@ func (m *recoveryMessage) GetPrepareResponses(p payload.ConsensusPayload, valida
return ps
}
// GetChangeViews implements payload.RecoveryMessage interface.
// GetChangeViews implements the payload.RecoveryMessage interface.
func (m *recoveryMessage) GetChangeViews(p payload.ConsensusPayload, validators []crypto.PublicKey) []payload.ConsensusPayload {
ps := make([]payload.ConsensusPayload, len(m.changeViewPayloads))
@ -254,7 +254,7 @@ func (m *recoveryMessage) GetChangeViews(p payload.ConsensusPayload, validators
return ps
}
// GetCommits implements payload.RecoveryMessage interface.
// GetCommits implements the payload.RecoveryMessage interface.
func (m *recoveryMessage) GetCommits(p payload.ConsensusPayload, validators []crypto.PublicKey) []payload.ConsensusPayload {
ps := make([]payload.ConsensusPayload, len(m.commitPayloads))
@ -271,12 +271,12 @@ func (m *recoveryMessage) GetCommits(p payload.ConsensusPayload, validators []cr
return ps
}
// PreparationHash implements payload.RecoveryMessage interface.
// PreparationHash implements the payload.RecoveryMessage interface.
func (m *recoveryMessage) PreparationHash() *util.Uint256 {
return m.preparationHash
}
// SetPreparationHash implements payload.RecoveryMessage interface.
// SetPreparationHash implements the payload.RecoveryMessage interface.
func (m *recoveryMessage) SetPreparationHash(h *util.Uint256) {
m.preparationHash = h
}

View file

@ -12,18 +12,18 @@ type recoveryRequest struct {
var _ payload.RecoveryRequest = (*recoveryRequest)(nil)
// DecodeBinary implements io.Serializable interface.
// DecodeBinary implements the io.Serializable interface.
func (m *recoveryRequest) DecodeBinary(r *io.BinReader) {
m.timestamp = r.ReadU64LE()
}
// EncodeBinary implements io.Serializable interface.
// EncodeBinary implements the io.Serializable interface.
func (m *recoveryRequest) EncodeBinary(w *io.BinWriter) {
w.WriteU64LE(m.timestamp)
}
// Timestamp implements payload.RecoveryRequest interface.
// Timestamp implements the payload.RecoveryRequest interface.
func (m *recoveryRequest) Timestamp() uint64 { return m.timestamp * nsInMs }
// SetTimestamp implements payload.RecoveryRequest interface.
// SetTimestamp implements the payload.RecoveryRequest interface.
func (m *recoveryRequest) SetTimestamp(ts uint64) { m.timestamp = ts / nsInMs }

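The Timestamp/SetTimestamp pairs throughout these payload files convert between neo-go's millisecond-based timestamps and dBFT's nanosecond ones via the nsInMs constant. A tiny round-trip of that conversion (helper names are ours):

```go
package main

import "fmt"

// nsInMs matches the constant defined in the consensus package above.
const nsInMs = 1000000

// toNs converts a millisecond timestamp to nanoseconds, as Timestamp() does.
func toNs(ms uint64) uint64 { return ms * nsInMs }

// toMs converts back, as SetTimestamp() does; sub-millisecond precision is lost.
func toMs(ns uint64) uint64 { return ns / nsInMs }

func main() {
	ms := uint64(1650000000123)
	fmt.Println(toMs(toNs(ms)) == ms)
}
```

Note the conversion is lossy in one direction only: nanosecond remainders are truncated, which is harmless here since the stored value is always millisecond-aligned.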
View file

@ -143,7 +143,7 @@ func (b *Block) EncodeBinary(bw *io.BinWriter) {
}
}
// MarshalJSON implements json.Marshaler interface.
// MarshalJSON implements the json.Marshaler interface.
func (b Block) MarshalJSON() ([]byte, error) {
auxb, err := json.Marshal(auxBlockOut{
Transactions: b.Transactions,
@ -165,7 +165,7 @@ func (b Block) MarshalJSON() ([]byte, error) {
return baseBytes, nil
}
// UnmarshalJSON implements json.Unmarshaler interface.
// UnmarshalJSON implements the json.Unmarshaler interface.
func (b *Block) UnmarshalJSON(data []byte) error {
// As Base and auxb are at the same level in json,
// do unmarshalling separately for both structs.
@ -192,7 +192,7 @@ func (b *Block) UnmarshalJSON(data []byte) error {
return nil
}
// GetExpectedBlockSize returns expected block size which should be equal to io.GetVarSize(b).
// GetExpectedBlockSize returns the expected block size, which should be equal to io.GetVarSize(b).
func (b *Block) GetExpectedBlockSize() int {
var transactionsSize int
for _, tx := range b.Transactions {
@ -201,7 +201,7 @@ func (b *Block) GetExpectedBlockSize() int {
return b.GetExpectedBlockSizeWithoutTransactions(len(b.Transactions)) + transactionsSize
}
// GetExpectedBlockSizeWithoutTransactions returns expected block size without transactions size.
// GetExpectedBlockSizeWithoutTransactions returns the expected block size without transactions size.
func (b *Block) GetExpectedBlockSizeWithoutTransactions(txCount int) int {
size := expectedHeaderSizeWithEmptyWitness - 1 - 1 + // 1 is for the zero-length (new(Header)).Script.Invocation/Verification
io.GetVarSize(&b.Script) +

View file

@ -23,7 +23,7 @@ func trim0x(value interface{}) string {
return strings.TrimPrefix(s, "0x")
}
// Test blocks are blocks from testnet with their corresponding index.
// Test blocks are blocks from testnet with their corresponding indices.
func TestDecodeBlock1(t *testing.T) {
data, err := getBlockData(1)
require.NoError(t, err)
@ -126,12 +126,12 @@ func TestBinBlockDecodeEncode(t *testing.T) {
assert.Equal(t, len(expected), len(hashes))
// changes value in map to true, if hash found
// changes the value in the map to true if the hash is found
for _, hash := range hashes {
expected[hash] = true
}
// iterate map; all vlaues should be true
// iterate map; all values should be true
val := true
for _, v := range expected {
if v == false {
@ -151,7 +151,7 @@ func TestBlockSizeCalculation(t *testing.T) {
// block taken from C# privnet: 02d7c7801742cd404eb178780c840477f1eef4a771ecc8cc9434640fe8f2bb09
// The Size in golang is given by counting the number of bytes of an object. (len(Bytes))
// its implementation is different from the corresponding C# and python implementations. But the result should
// should be the same.In this test we provide more details then necessary because in case of failure we can easily debug the
// be the same. In this test we provide more details than necessary because in case of failure we can easily debug the
// root cause of the size calculation mismatch.
rawBlock := "AAAAAAwIVa2D6Yha3tArd5XnwkAf7deJBsdyyvpYb2xMZGBbkOUNHAsfre0rKA/F+Ox05/bQSXmcRZnzK3M6Z+/TxJUh0MNFeAEAAAAAAAAAAAAAAQAAAADe7nnBifMAmLC6ai65CzqSWKbH/wHGDEDgwCcXkcaFw5MGOp1cpkgApzDTX2/RxKlmPeXTgWYtfEA8g9svUSbZA4TeoGyWvX8LiN0tJKrzajdMGvTVGqVmDEDp6PBmZmRx9CxswtLht6oWa2Uq4rl5diPsLtqXZeZepMlxUSbaCdlFTB7iWQG9yKXWR5hc0sScevvuVwwsUYdlDEDwlhwZrP07E5fEQKttVMYAiL7edd/eW2yoMGZe6Q95g7yXQ69edVHfQb61fBw3DjCpMDZ5lsxp3BgzXglJwMSKkxMMIQIQOn990BZVhZf3lg0nxRakOU/ZaLnmUVXrSwE+QEBAbgwhAqe8Vf6GhOARl2jRBLoweVvcyGYZ6GSt0mFWcj7Rhc1iDCECs2Ir9AF73+MXxYrtX0x1PyBrfbiWBG+n13S7xL9/jcIMIQPZDAffY+aQzneRLhCrUazJRLZoYCN7YIxPj4MJ5x7mmRRBe85spQIAWNC7C8DYpwAAAAAAIKpEAAAAAADoAwAAAd7uecGJ8wCYsLpqLrkLOpJYpsf/AQBbCwIA4fUFDBSAzse29bVvUFePc38WLTqxTUZlDQwU3u55wYnzAJiwumouuQs6klimx/8UwB8MCHRyYW5zZmVyDBT1Y+pAvCg9TQ4FxI6jBbPyoHNA70FifVtSOQHGDEC4UIzT61GYPx0LdksrF6C2ioYai6fbwpjv3BGAqiyagxiomYGZRLeXZyD67O5FJ86pXRFtSbVYu2YDG+T5ICIgDEDzm/wl+BnHvQXaHQ1rGLtdUMc41wN6I48kPPM7F23gL9sVxGziQIMRLnpTbWHrnzaU9Sy0fXkvIrdJy1KABkSQDEDBwuBuVK+nsZvn1oAscPj6d3FJiUGK9xiHpX9Ipp/5jTnXRBAyzyGc8IZMBVql4WS8kwFe6ojA/9BvFb5eWXnEkxMMIQIQOn990BZVhZf3lg0nxRakOU/ZaLnmUVXrSwE+QEBAbgwhAqe8Vf6GhOARl2jRBLoweVvcyGYZ6GSt0mFWcj7Rhc1iDCECs2Ir9AF73+MXxYrtX0x1PyBrfbiWBG+n13S7xL9/jcIMIQPZDAffY+aQzneRLhCrUazJRLZoYCN7YIxPj4MJ5x7mmRRBe85spQDYJLwZwNinAAAAAAAgqkQAAAAAAOgDAAAB3u55wYnzAJiwumouuQs6klimx/8BAF8LAwBA2d2ITQoADBSAzse29bVvUFePc38WLTqxTUZlDQwU3u55wYnzAJiwumouuQs6klimx/8UwB8MCHRyYW5zZmVyDBTPduKL0AYsSkeO41VhARMZ88+k0kFifVtSOQHGDEDWn0D7z2ELqpN8ghcM/PtfFwo56/BfEasfHuSKECJMYxvU47r2ZtSihg59lGxSZzHsvxTy6nsyvJ22ycNhINdJDECl61cg937N/HujKsLMu2wJMS7C54bzJ3q22Czqllvw3Yp809USgKDs+W+3QD7rI+SFs0OhIn0gooCUU6f/13WjDEDr9XdeT5CGTO8CL0JigzcTcucs0GBcqHs8fToO6zPuuCfS7Wh6dyxSCijT4A4S+7BUdW3dsO7828ke1fj8oNxmkxMMIQIQOn990BZVhZf3lg0nxRakOU/ZaLnmUVXrSwE+QEBAbgwhAqe8Vf6GhOARl2jRBLoweVvcyGYZ6GSt0mFWcj7Rhc1iDCECs2Ir9AF73+MXxYrtX0x1PyBrfbiWBG+n13S7xL9/jcIMIQPZDAffY+aQzneRLhCrUazJRLZoYCN7YIxPj4MJ5x7mmRRBe85spQ=="

View file

@ -25,8 +25,8 @@ type Header struct {
MerkleRoot util.Uint256
// Timestamp is a millisecond-precision timestamp.
// The time stamp of each block must be later than previous block's time stamp.
// Generally the difference of two block's time stamp is about 15 seconds and imprecision is allowed.
// The time stamp of each block must be later than the previous block's time stamp.
// Generally, the difference between two blocks' time stamps is about 15 seconds and imprecision is allowed.
// The height of the block must be exactly equal to the height of the previous block plus 1.
Timestamp uint64
@ -42,11 +42,11 @@ type Header struct {
// Script used to validate the block
Script transaction.Witness
// StateRootEnabled specifies if header contains state root.
// StateRootEnabled specifies if the header contains state root.
StateRootEnabled bool
// PrevStateRoot is state root of the previous block.
// PrevStateRoot is the state root of the previous block.
PrevStateRoot util.Uint256
// PrimaryIndex is the index of primary consensus node for this block.
// PrimaryIndex is the index of the primary consensus node for this block.
PrimaryIndex byte
// Hash of this block, created when binary encoded (double SHA256).
@ -78,7 +78,7 @@ func (b *Header) Hash() util.Uint256 {
return b.hash
}
// DecodeBinary implements Serializable interface.
// DecodeBinary implements the Serializable interface.
func (b *Header) DecodeBinary(br *io.BinReader) {
b.decodeHashableFields(br)
witnessCount := br.ReadVarUint()
@ -90,7 +90,7 @@ func (b *Header) DecodeBinary(br *io.BinReader) {
b.Script.DecodeBinary(br)
}
// EncodeBinary implements Serializable interface.
// EncodeBinary implements the Serializable interface.
func (b *Header) EncodeBinary(bw *io.BinWriter) {
b.encodeHashableFields(bw)
bw.WriteVarUint(1)
@ -98,11 +98,12 @@ func (b *Header) EncodeBinary(bw *io.BinWriter) {
}
// createHash creates the hash of the block.
// When calculating the hash value of the block, instead of calculating the entire block,
// only first seven fields in the block head will be calculated, which are
// version, PrevBlock, MerkleRoot, timestamp, and height, the nonce, NextMiner.
// Since MerkleRoot already contains the hash value of all transactions,
// the modification of transaction will influence the hash value of the block.
// When calculating the hash value of the block, instead of processing the entire block,
// only the header (without the signatures) is added as an input for the hash. It differs
// from the complete block only in that it doesn't contain transactions, but their hashes
// are used for MerkleRoot hash calculation. Therefore, adding/removing/changing any
// transaction affects the header hash and there is no need to use the complete block for
// hash calculation.
func (b *Header) createHash() {
buf := io.NewBufBinWriter()
// No error can occur while encoding hashable fields.
@ -149,7 +150,7 @@ func (b *Header) decodeHashableFields(br *io.BinReader) {
}
}
// MarshalJSON implements json.Marshaler interface.
// MarshalJSON implements the json.Marshaler interface.
func (b Header) MarshalJSON() ([]byte, error) {
aux := baseAux{
Hash: b.Hash(),
@ -169,7 +170,7 @@ func (b Header) MarshalJSON() ([]byte, error) {
return json.Marshal(aux)
}
// UnmarshalJSON implements json.Unmarshaler interface.
// UnmarshalJSON implements the json.Unmarshaler interface.
func (b *Header) UnmarshalJSON(data []byte) error {
var aux = new(baseAux)
var nextC util.Uint160

View file
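The rewritten createHash comment above says the block hash is a double SHA256 over the serialized header fields only. The hashing step itself can be sketched in isolation (field serialization is simplified to an opaque byte slice here):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// hashHeader applies SHA256 twice to already-serialized header fields,
// mirroring the "double SHA256" scheme described above. The input is a
// stand-in for the encoded hashable header fields, not the real encoding.
func hashHeader(encodedFields []byte) [32]byte {
	first := sha256.Sum256(encodedFields)
	return sha256.Sum256(first[:])
}

func main() {
	h := hashHeader([]byte("header fields"))
	fmt.Printf("%x\n", h[:4]) // first bytes of the resulting Uint256-like hash
}
```

Because MerkleRoot is one of the hashed fields, any transaction change propagates into this hash without feeding whole transactions to the hasher.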

@ -439,7 +439,7 @@ func (bc *Blockchain) init() error {
}
// Check autogenerated native contracts' manifests and NEFs against the stored ones.
// Need to be done after native Management cache initialisation to be able to get
// Need to be done after native Management cache initialization to be able to get
// contract state from DAO via high-level bc API.
for _, c := range bc.contracts.Contracts {
md := c.Metadata()

View file

@ -17,7 +17,7 @@ import (
"github.com/nspcc-dev/neo-go/pkg/util"
)
// Blockchainer is an interface that abstract the implementation
// Blockchainer is an interface that abstracts the implementation
// of the blockchain.
type Blockchainer interface {
ApplyPolicyToTxSet([]*transaction.Transaction) []*transaction.Transaction

View file

@ -9,7 +9,7 @@ import (
"github.com/nspcc-dev/neo-go/pkg/util"
)
// DumperRestorer in the interface to get/add blocks from/to.
// DumperRestorer is an interface to get/add blocks from/to.
type DumperRestorer interface {
AddBlock(block *block.Block) error
GetBlock(hash util.Uint256) (*block.Block, error)
@ -18,7 +18,7 @@ type DumperRestorer interface {
}
// Dump writes count blocks from start to the provided writer.
// Note: header needs to be written separately by client.
// Note: the header needs to be written separately by a client.
func Dump(bc DumperRestorer, w *io.BinWriter, start, count uint32) error {
for i := start; i < start+count; i++ {
bh := bc.GetHeaderHash(int(i))
@ -38,7 +38,7 @@ func Dump(bc DumperRestorer, w *io.BinWriter, start, count uint32) error {
return nil
}
// Restore restores blocks from provided reader.
// Restore restores blocks from the provided reader.
// f is called after addition of every block.
func Restore(bc DumperRestorer, r *io.BinReader, skip, count uint32, f func(b *block.Block) error) error {
readBlock := func(r *io.BinReader) ([]byte, error) {

View file
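The Dump loop above iterates block indices from start to start+count and writes each serialized block out. A hedged sketch of that loop shape (the interface is trimmed to what the loop needs and all names here are ours, not chaindump's; the real code writes through io.BinWriter):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// blockSource is a trimmed stand-in for DumperRestorer: anything that
// can hand out serialized blocks by index.
type blockSource interface {
	BlockBytes(index uint32) ([]byte, error)
}

// dump appends count length-prefixed blocks starting at start.
func dump(src blockSource, out *[]byte, start, count uint32) error {
	for i := start; i < start+count; i++ {
		b, err := src.BlockBytes(i)
		if err != nil {
			return err
		}
		var sz [4]byte
		binary.LittleEndian.PutUint32(sz[:], uint32(len(b)))
		*out = append(*out, sz[:]...)
		*out = append(*out, b...)
	}
	return nil
}

// fakeChain returns a one-byte "block" equal to its index.
type fakeChain struct{}

func (fakeChain) BlockBytes(i uint32) ([]byte, error) { return []byte{byte(i)}, nil }

func main() {
	var buf []byte
	if err := dump(fakeChain{}, &buf, 0, 2); err != nil {
		panic(err)
	}
	fmt.Println(len(buf)) // 2 blocks, each a 4-byte size prefix plus 1 data byte
}
```

Restore is the mirror image: read a length, read that many bytes, decode a block, call f, repeat.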

@ -21,12 +21,12 @@ import (
// HasTransaction errors.
var (
// ErrAlreadyExists is returned when transaction exists in dao.
// ErrAlreadyExists is returned when the transaction exists in dao.
ErrAlreadyExists = errors.New("transaction already exists")
// ErrHasConflicts is returned when transaction is in the list of conflicting
// ErrHasConflicts is returned when the transaction is in the list of conflicting
// transactions which are already in dao.
ErrHasConflicts = errors.New("transaction has conflicts")
// ErrInternalDBInconsistency is returned when the format of retrieved DAO
// ErrInternalDBInconsistency is returned when the format of the retrieved DAO
// record is unexpected.
ErrInternalDBInconsistency = errors.New("internal DB inconsistency")
)
@ -57,7 +57,7 @@ type NativeContractCache interface {
Copy() NativeContractCache
}
// NewSimple creates new simple dao using provided backend store.
// NewSimple creates a new simple dao using the provided backend store.
func NewSimple(backend storage.Store, stateRootInHeader bool, p2pSigExtensions bool) *Simple {
st := storage.NewMemCachedStore(backend)
return newSimple(st, stateRootInHeader, p2pSigExtensions)
@ -75,12 +75,12 @@ func newSimple(st *storage.MemCachedStore, stateRootInHeader bool, p2pSigExtensi
}
}
// GetBatch returns currently accumulated DB changeset.
// GetBatch returns the currently accumulated DB changeset.
func (dao *Simple) GetBatch() *storage.MemBatch {
return dao.Store.GetBatch()
}
// GetWrapped returns new DAO instance with another layer of wrapped
// GetWrapped returns a new DAO instance with another layer of wrapped
// MemCachedStore around the current DAO Store.
func (dao *Simple) GetWrapped() *Simple {
d := NewSimple(dao.Store, dao.Version.StateRootInHeader, dao.Version.P2PSigExtensions)
@ -89,7 +89,7 @@ func (dao *Simple) GetWrapped() *Simple {
return d
}
// GetPrivate returns new DAO instance with another layer of private
// GetPrivate returns a new DAO instance with another layer of private
// MemCachedStore around the current DAO Store.
func (dao *Simple) GetPrivate() *Simple {
d := &Simple{
@ -142,12 +142,12 @@ func (dao *Simple) DeleteContractID(id int32) {
dao.Store.Delete(dao.makeContractIDKey(id))
}
// PutContractID adds a mapping from contract's ID to its hash.
// PutContractID adds a mapping from a contract's ID to its hash.
func (dao *Simple) PutContractID(id int32, hash util.Uint160) {
dao.Store.Put(dao.makeContractIDKey(id), hash.BytesBE())
}
// GetContractScriptHash retrieves contract's hash given its ID.
// GetContractScriptHash retrieves the contract's hash given its ID.
func (dao *Simple) GetContractScriptHash(id int32) (util.Uint160, error) {
var data = new(util.Uint160)
if err := dao.GetAndDecode(data, dao.makeContractIDKey(id)); err != nil {
@ -259,7 +259,7 @@ func (dao *Simple) GetTokenTransferLog(acc util.Uint160, newestTimestamp uint64,
return &state.TokenTransferLog{Raw: value}, nil
}
// PutTokenTransferLog saves given transfer log in the cache.
// PutTokenTransferLog saves the given transfer log in the cache.
func (dao *Simple) PutTokenTransferLog(acc util.Uint160, start uint64, index uint32, isNEP11 bool, lg *state.TokenTransferLog) {
key := dao.getTokenTransferLogKey(acc, start, index, isNEP11)
dao.Store.Put(key, lg.Raw)
@ -377,22 +377,22 @@ func (dao *Simple) GetStorageItem(id int32, key []byte) state.StorageItem {
return b
}
// PutStorageItem puts given StorageItem for given id with given
// PutStorageItem puts the given StorageItem for the given id with the given
// key into the given store.
func (dao *Simple) PutStorageItem(id int32, key []byte, si state.StorageItem) {
stKey := dao.makeStorageItemKey(id, key)
dao.Store.Put(stKey, si)
}
// DeleteStorageItem drops storage item for the given id with the
// DeleteStorageItem drops a storage item for the given id with the
// given key from the store.
func (dao *Simple) DeleteStorageItem(id int32, key []byte) {
stKey := dao.makeStorageItemKey(id, key)
dao.Store.Delete(stKey)
}
// Seek executes f for all storage items matching a given `rng` (matching given prefix and
// starting from the point specified). If key or value is to be used outside of f, they
// Seek executes f for all storage items matching the given `rng` (matching the given prefix and
// starting from the point specified). If the key or the value is to be used outside of f, they
// may not be copied. Seek continues iterating until false is returned from f.
func (dao *Simple) Seek(id int32, rng storage.SeekRange, f func(k, v []byte) bool) {
rng.Prefix = slice.Copy(dao.makeStorageItemKey(id, rng.Prefix)) // f() can use dao too.
@ -401,7 +401,7 @@ func (dao *Simple) Seek(id int32, rng storage.SeekRange, f func(k, v []byte) boo
})
}
// SeekAsync sends all storage items matching a given `rng` (matching given prefix and
// SeekAsync sends all storage items matching the given `rng` (matching the given prefix and
// starting from the point specified) to a channel and returns the channel.
// Resulting keys and values may not be copied.
func (dao *Simple) SeekAsync(ctx context.Context, id int32, rng storage.SeekRange) chan storage.KeyValue {
@ -409,7 +409,7 @@ func (dao *Simple) SeekAsync(ctx context.Context, id int32, rng storage.SeekRang
return dao.Store.SeekAsync(ctx, rng, true)
}
// makeStorageItemKey returns a key used to store StorageItem in the DB.
// makeStorageItemKey returns the key used to store the StorageItem in the DB.
func (dao *Simple) makeStorageItemKey(id int32, key []byte) []byte {
// 1 for prefix + 4 for Uint32 + len(key) for key
buf := dao.getKeyBuf(5 + len(key))
@ -446,7 +446,7 @@ func (dao *Simple) getBlock(key []byte) (*block.Block, error) {
return block, nil
}
// Version represents current dao version.
// Version represents the current dao version.
type Version struct {
StoragePrefix storage.KeyPrefix
StateRootInHeader bool
@ -549,7 +549,7 @@ func (dao *Simple) GetCurrentHeaderHeight() (i uint32, h util.Uint256, err error
return
}
// GetStateSyncPoint returns current state synchronisation point P.
// GetStateSyncPoint returns current state synchronization point P.
func (dao *Simple) GetStateSyncPoint() (uint32, error) {
b, err := dao.Store.Get(dao.mkKeyPrefix(storage.SYSStateSyncPoint))
if err != nil {
@ -558,8 +558,8 @@ func (dao *Simple) GetStateSyncPoint() (uint32, error) {
return binary.LittleEndian.Uint32(b), nil
}
// GetStateSyncCurrentBlockHeight returns current block height stored during state
// synchronisation process.
// GetStateSyncCurrentBlockHeight returns the current block height stored during state
// synchronization process.
func (dao *Simple) GetStateSyncCurrentBlockHeight() (uint32, error) {
b, err := dao.Store.Get(dao.mkKeyPrefix(storage.SYSStateSyncCurrentBlockHeight))
if err != nil {
@ -627,7 +627,7 @@ func (dao *Simple) PutVersion(v Version) {
dao.Store.Put(dao.mkKeyPrefix(storage.SYSVersion), v.Bytes())
}
// PutCurrentHeader stores current header.
// PutCurrentHeader stores the current header.
func (dao *Simple) PutCurrentHeader(h util.Uint256, index uint32) {
buf := dao.getDataBuf()
buf.WriteBytes(h.BytesLE())
@ -635,14 +635,14 @@ func (dao *Simple) PutCurrentHeader(h util.Uint256, index uint32) {
dao.Store.Put(dao.mkKeyPrefix(storage.SYSCurrentHeader), buf.Bytes())
}
// PutStateSyncPoint stores current state synchronisation point P.
// PutStateSyncPoint stores the current state synchronization point P.
func (dao *Simple) PutStateSyncPoint(p uint32) {
buf := dao.getDataBuf()
buf.WriteU32LE(p)
dao.Store.Put(dao.mkKeyPrefix(storage.SYSStateSyncPoint), buf.Bytes())
}
// PutStateSyncCurrentBlockHeight stores current block height during state synchronisation process.
// PutStateSyncCurrentBlockHeight stores the current block height during state synchronization process.
func (dao *Simple) PutStateSyncCurrentBlockHeight(h uint32) {
buf := dao.getDataBuf()
buf.WriteU32LE(h)
@ -682,7 +682,7 @@ func (dao *Simple) StoreHeaderHashes(hashes []util.Uint256, height uint32) error
}
// HasTransaction returns nil if the given store does not contain the given
// Transaction hash. It returns an error in case if transaction is in chain
// Transaction hash. It returns an error in case the transaction is in the chain
// or in the list of conflicting transactions.
func (dao *Simple) HasTransaction(hash util.Uint256) error {
key := dao.makeExecutableKey(hash)
@ -722,7 +722,7 @@ func (dao *Simple) StoreAsBlock(block *block.Block, aer1 *state.AppExecResult, a
return nil
}
// DeleteBlock removes block from dao. It's not atomic, so make sure you're
// DeleteBlock removes the block from dao. It's not atomic, so make sure you're
// using private MemCached instance here.
func (dao *Simple) DeleteBlock(h util.Uint256) error {
key := dao.makeExecutableKey(h)
@ -752,7 +752,7 @@ func (dao *Simple) DeleteBlock(h util.Uint256) error {
return nil
}
// StoreHeader saves block header into the store.
// StoreHeader saves the block header into the store.
func (dao *Simple) StoreHeader(h *block.Header) error {
return dao.storeHeader(dao.makeExecutableKey(h.Hash()), h)
}
@ -769,9 +769,8 @@ func (dao *Simple) storeHeader(key []byte, h *block.Header) error {
return nil
}
// StoreAsCurrentBlock stores a hash of the given block with prefix
// SYSCurrentBlock. It can reuse given buffer for the purpose of value
// serialization.
// StoreAsCurrentBlock stores the hash of the given block with prefix
// SYSCurrentBlock.
func (dao *Simple) StoreAsCurrentBlock(block *block.Block) {
buf := dao.getDataBuf()
h := block.Hash()
@ -780,8 +779,8 @@ func (dao *Simple) StoreAsCurrentBlock(block *block.Block) {
dao.Store.Put(dao.mkKeyPrefix(storage.SYSCurrentBlock), buf.Bytes())
}
// StoreAsTransaction stores given TX as DataTransaction. It also stores transactions
// given tx has conflicts with as DataTransaction with dummy version. It can reuse given
// StoreAsTransaction stores the given TX as DataTransaction. It also stores transactions
// the given tx has conflicts with as DataTransaction with dummy version. It can reuse the given
// buffer for the purpose of value serialization.
func (dao *Simple) StoreAsTransaction(tx *transaction.Transaction, index uint32, aer *state.AppExecResult) error {
key := dao.makeExecutableKey(tx.Hash())

View file

@ -10,7 +10,7 @@ import (
// ECDSAVerifyPrice is a gas price of a single verification.
const ECDSAVerifyPrice = 1 << 15
// Calculate returns network fee for transaction.
// Calculate returns network fee for a transaction.
func Calculate(base int64, script []byte) (int64, int) {
var (
netFee int64

View file

@ -4,7 +4,7 @@ import (
"github.com/nspcc-dev/neo-go/pkg/vm/opcode"
)
// Opcode returns the deployment coefficients of specified opcodes.
// Opcode returns the deployment coefficients of the specified opcodes.
func Opcode(base int64, opcodes ...opcode.Opcode) int64 {
var result int64
for _, op := range opcodes {

View file

@ -8,7 +8,7 @@ import (
const feeFactor = 30
// The most common Opcode() use case is to get price for single opcode.
// The most common Opcode() use case is to get the price for a single opcode.
func BenchmarkOpcode1(t *testing.B) {
// Just so that we don't always test the same opcode.
script := []opcode.Opcode{opcode.NOP, opcode.ADD, opcode.SYSCALL, opcode.APPEND}

View file

@ -30,7 +30,7 @@ import (
)
const (
// DefaultBaseExecFee specifies default multiplier for opcode and syscall prices.
// DefaultBaseExecFee specifies the default multiplier for opcode and syscall prices.
DefaultBaseExecFee = 30
)
@ -104,7 +104,7 @@ func (ic *Context) UseSigners(s []transaction.Signer) {
ic.signers = s
}
// Signers returns signers witnessing current execution context.
// Signers returns signers witnessing the current execution context.
func (ic *Context) Signers() []transaction.Signer {
if ic.signers != nil {
return ic.signers
@ -115,7 +115,7 @@ func (ic *Context) Signers() []transaction.Signer {
return nil
}
// Function binds function name, id with the function itself and price,
// Function binds a function name and id with the function itself and the price;
// it's supposed to be initialized once for all interopContexts, so it doesn't use
// vm.InteropFuncPrice directly.
type Function struct {
@ -151,7 +151,7 @@ type Contract interface {
PostPersist(*Context) error
}
// ContractMD represents native contract instance.
// ContractMD represents a native contract instance.
type ContractMD struct {
state.NativeContract
Name string
@ -164,8 +164,8 @@ func NewContractMD(name string, id int32) *ContractMD {
c.ID = id
// NEF is now stored in contract state and affects state dump.
// Therefore values are taken from C# node.
// NEF is now stored in the contract state and affects state dump.
// Therefore, values are taken from the C# node.
c.NEF.Header.Compiler = "neo-core-v3.0"
c.NEF.Header.Magic = nef.Magic
c.NEF.Tokens = []nef.MethodToken{} // avoid `nil` result during JSON marshalling
@ -175,7 +175,7 @@ func NewContractMD(name string, id int32) *ContractMD {
return c
}
// UpdateHash creates native contract script and updates hash.
// UpdateHash creates a native contract script and updates hash.
func (c *ContractMD) UpdateHash() {
w := io.NewBufBinWriter()
for i := range c.Methods {
@ -195,7 +195,7 @@ func (c *ContractMD) UpdateHash() {
c.NEF.Checksum = c.NEF.CalculateChecksum()
}
// AddMethod adds new method to a native contract.
// AddMethod adds a new method to a native contract.
func (c *ContractMD) AddMethod(md *MethodAndPrice, desc *manifest.Method) {
md.MD = desc
desc.Safe = md.RequiredFlags&(callflag.All^callflag.ReadOnly) == 0
@ -217,7 +217,7 @@ func (c *ContractMD) AddMethod(md *MethodAndPrice, desc *manifest.Method) {
c.Methods[index] = *md
}
// GetMethodByOffset returns with the provided offset.
// GetMethodByOffset returns the method with the provided offset.
// Offset is offset of `System.Contract.CallNative` syscall.
func (c *ContractMD) GetMethodByOffset(offset int) (MethodAndPrice, bool) {
for k := range c.Methods {
@ -228,7 +228,7 @@ func (c *ContractMD) GetMethodByOffset(offset int) (MethodAndPrice, bool) {
return MethodAndPrice{}, false
}
// GetMethod returns method `name` with specified number of parameters.
// GetMethod returns method `name` with the specified number of parameters.
func (c *ContractMD) GetMethod(name string, paramCount int) (MethodAndPrice, bool) {
index := sort.Search(len(c.Methods), func(i int) bool {
md := c.Methods[i]
@ -249,7 +249,7 @@ func (c *ContractMD) GetMethod(name string, paramCount int) (MethodAndPrice, boo
return MethodAndPrice{}, false
}
// AddEvent adds new event to a native contract.
// AddEvent adds a new event to the native contract.
func (c *ContractMD) AddEvent(name string, ps ...manifest.Parameter) {
c.Manifest.ABI.Events = append(c.Manifest.ABI.Events, manifest.Event{
Name: name,
@ -257,7 +257,7 @@ func (c *ContractMD) AddEvent(name string, ps ...manifest.Parameter) {
})
}
// IsActive returns true iff the contract was deployed by the specified height.
// IsActive returns true if the contract was deployed by the specified height.
func (c *ContractMD) IsActive(height uint32) bool {
history := c.UpdateHistory
return len(history) != 0 && history[0] <= height
@ -268,7 +268,7 @@ func Sort(fs []Function) {
sort.Slice(fs, func(i, j int) bool { return fs[i].ID < fs[j].ID })
}
// GetContract returns contract by its hash in current interop context.
// GetContract returns a contract by its hash in the current interop context.
func (ic *Context) GetContract(hash util.Uint160) (*state.Contract, error) {
return ic.getContract(ic.DAO, hash)
}
@ -310,7 +310,7 @@ func (ic *Context) SyscallHandler(_ *vm.VM, id uint32) error {
return f.Func(ic)
}
// SpawnVM spawns new VM with the specified gas limit and set context.VM field.
// SpawnVM spawns a new VM with the specified gas limit and sets the context.VM field.
func (ic *Context) SpawnVM() *vm.VM {
v := vm.NewWithTrigger(ic.Trigger)
v.GasLimit = -1
@ -319,7 +319,7 @@ func (ic *Context) SpawnVM() *vm.VM {
return v
}
// RegisterCancelFunc adds given function to the list of functions to be called after VM
// RegisterCancelFunc adds the given function to the list of functions to be called after the VM
// finishes script execution.
func (ic *Context) RegisterCancelFunc(f context.CancelFunc) {
if f != nil {

View file
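GetMethod above relies on the Methods slice being kept sorted so that sort.Search can locate a (name, parameter count) pair in logarithmic time. The lookup idea in isolation (fields and ordering simplified; `method` and `getMethod` are our stand-ins, not the real MethodAndPrice API):

```go
package main

import (
	"fmt"
	"sort"
)

// method is a trimmed stand-in for MethodAndPrice; the slice is kept
// sorted by (name, params) so binary search can find an exact pair.
type method struct {
	name   string
	params int
}

func getMethod(ms []method, name string, params int) (method, bool) {
	i := sort.Search(len(ms), func(i int) bool {
		if ms[i].name != name {
			return ms[i].name >= name
		}
		return ms[i].params >= params
	})
	if i < len(ms) && ms[i].name == name && ms[i].params == params {
		return ms[i], true
	}
	return method{}, false
}

func main() {
	ms := []method{{"balanceOf", 1}, {"transfer", 4}, {"transfer", 5}}
	_, ok := getMethod(ms, "transfer", 4)
	fmt.Println(ok)
}
```

sort.Search returns the smallest index where the predicate holds, so a final equality check is still required to distinguish an exact hit from an insertion point.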

@ -21,7 +21,7 @@ type policyChecker interface {
IsBlocked(*dao.Simple, util.Uint160) bool
}
// LoadToken calls method specified by token id.
// LoadToken calls method specified by the token id.
func LoadToken(ic *interop.Context) func(id int32) error {
return func(id int32) error {
ctx := ic.VM.Context()
@ -91,7 +91,7 @@ func callInternal(ic *interop.Context, cs *state.Contract, name string, f callfl
return callExFromNative(ic, ic.VM.GetCurrentScriptHash(), cs, name, args, f, hasReturn)
}
// callExFromNative calls a contract with flags using provided calling hash.
// callExFromNative calls a contract with flags using the provided calling hash.
func callExFromNative(ic *interop.Context, caller util.Uint160, cs *state.Contract,
name string, args []stackitem.Item, f callflag.CallFlag, hasReturn bool) error {
for _, nc := range ic.Natives {

View file

@ -15,7 +15,7 @@ import (
"github.com/twmb/murmur3"
)
// GasLeft returns remaining amount of GAS.
// GasLeft returns the remaining amount of GAS.
func GasLeft(ic *interop.Context) error {
if ic.VM.GasLimit == -1 {
ic.VM.Estack().PushItem(stackitem.NewBigInteger(big.NewInt(ic.VM.GasLimit)))
@ -25,7 +25,7 @@ func GasLeft(ic *interop.Context) error {
return nil
}
// GetNotifications returns notifications emitted by current contract execution.
// GetNotifications returns notifications emitted in the current execution context.
func GetNotifications(ic *interop.Context) error {
item := ic.VM.Estack().Pop().Item()
notifications := ic.Notifications
@ -61,7 +61,7 @@ func GetNotifications(ic *interop.Context) error {
return nil
}
// GetInvocationCounter returns how many times current contract was invoked during current tx execution.
// GetInvocationCounter returns how many times the current contract has been invoked during the current tx execution.
func GetInvocationCounter(ic *interop.Context) error {
currentScriptHash := ic.VM.GetCurrentScriptHash()
count, ok := ic.Invocations[currentScriptHash]

View file

@ -15,7 +15,7 @@ import (
"github.com/nspcc-dev/neo-go/pkg/vm/stackitem"
)
// CheckHashedWitness checks given hash against current list of script hashes
// CheckHashedWitness checks the given hash against the current list of script hashes
// for verifying in the interop context.
func CheckHashedWitness(ic *interop.Context, hash util.Uint160) (bool, error) {
callingSH := ic.VM.GetCallingScriptHash()
@ -113,8 +113,8 @@ func checkScope(ic *interop.Context, hash util.Uint160) (bool, error) {
return false, nil
}
// CheckKeyedWitness checks hash of signature check contract with a given public
// key against current list of script hashes for verifying in the interop context.
// CheckKeyedWitness checks the hash of the signature check contract with the given public
// key against the current list of script hashes for verifying in the interop context.
func CheckKeyedWitness(ic *interop.Context, key *keys.PublicKey) (bool, error) {
return CheckHashedWitness(ic, key.GetScriptHash())
}

View file

@ -29,7 +29,7 @@ type Iterator struct {
prefix []byte
}
// NewIterator creates a new Iterator with given options for a given channel of store.Seek results.
// NewIterator creates a new Iterator with the given options for the given channel of store.Seek results.
func NewIterator(seekCh chan storage.KeyValue, prefix []byte, opts int64) *Iterator {
return &Iterator{
seekCh: seekCh,

View file

@ -6,7 +6,7 @@ import (
"github.com/nspcc-dev/neo-go/pkg/util"
)
// Feer is an interface that abstract the implementation of the fee calculation.
// Feer is an interface that abstracts the implementation of the fee calculation.
type Feer interface {
FeePerByte() int64
GetUtilityTokenBalance(util.Uint160) *big.Int

View file

@ -15,24 +15,24 @@ import (
)
var (
// ErrInsufficientFunds is returned when Sender is not able to pay for
// transaction being added irrespective of the other contents of the
// ErrInsufficientFunds is returned when the Sender is not able to pay for
// the transaction being added irrespective of the other contents of the
// pool.
ErrInsufficientFunds = errors.New("insufficient funds")
// ErrConflict is returned when transaction being added is incompatible
// ErrConflict is returned when the transaction being added is incompatible
// with the contents of the memory pool (Sender doesn't have enough GAS
// to pay for all transactions in the pool).
ErrConflict = errors.New("conflicts: insufficient funds for all pooled tx")
// ErrDup is returned when transaction being added is already present
// ErrDup is returned when the transaction being added is already present
// in the memory pool.
ErrDup = errors.New("already in the memory pool")
// ErrOOM is returned when transaction just doesn't fit in the memory
// ErrOOM is returned when the transaction just doesn't fit in the memory
// pool because of its capacity constraints.
ErrOOM = errors.New("out of memory")
// ErrConflictsAttribute is returned when transaction conflicts with other transactions
// ErrConflictsAttribute is returned when the transaction conflicts with other transactions
// due to its (or theirs) Conflicts attributes.
ErrConflictsAttribute = errors.New("conflicts with memory pool due to Conflicts attribute")
// ErrOracleResponse is returned when mempool already contains transaction
// ErrOracleResponse is returned when the mempool already contains a transaction
// with the same oracle response ID and higher network fee.
ErrOracleResponse = errors.New("conflicts with memory pool due to OracleResponse attribute")
)
@ -44,25 +44,25 @@ type item struct {
data interface{}
}
// items is a slice of item.
// items is a slice of item elements.
type items []item
// utilityBalanceAndFees stores sender's balance and overall fees of
// sender's transactions which are currently in mempool.
// utilityBalanceAndFees stores the sender's balance and overall fees of
// the sender's transactions which are currently in the mempool.
type utilityBalanceAndFees struct {
balance uint256.Int
feeSum uint256.Int
}
// Pool stores the unconfirms transactions.
// Pool stores the unconfirmed transactions.
type Pool struct {
lock sync.RWMutex
verifiedMap map[util.Uint256]*transaction.Transaction
verifiedTxes items
fees map[util.Uint160]utilityBalanceAndFees
// conflicts is a map of hashes of transactions which are conflicting with the mempooled ones.
// conflicts is a map of the hashes of the transactions which are conflicting with the mempooled ones.
conflicts map[util.Uint256][]util.Uint256
// oracleResp contains ids of oracle responses for tx in pool.
// oracleResp contains the ids of oracle responses for the tx in the pool.
oracleResp map[uint64]util.Uint256
capacity int
@ -106,7 +106,7 @@ func (p item) CompareTo(otherP item) int {
return int(p.txn.NetworkFee - otherP.txn.NetworkFee)
}
// Count returns the total number of uncofirm transactions.
// Count returns the total number of unconfirmed transactions.
func (mp *Pool) Count() int {
mp.lock.RLock()
defer mp.lock.RUnlock()
@ -118,7 +118,7 @@ func (mp *Pool) count() int {
return len(mp.verifiedTxes)
}
// ContainsKey checks if a transactions hash is in the Pool.
// ContainsKey checks if the given transaction hash is in the Pool.
func (mp *Pool) ContainsKey(hash util.Uint256) bool {
mp.lock.RLock()
defer mp.lock.RUnlock()
@ -135,8 +135,8 @@ func (mp *Pool) containsKey(hash util.Uint256) bool {
return false
}
// HasConflicts returns true if transaction is already in pool or in the Conflicts attributes
// of pooled transactions or has Conflicts attributes for pooled transactions.
// HasConflicts returns true if the transaction is already in the pool or in the Conflicts attributes
// of the pooled transactions or has Conflicts attributes against the pooled transactions.
func (mp *Pool) HasConflicts(t *transaction.Transaction, fee Feer) bool {
mp.lock.RLock()
defer mp.lock.RUnlock()
@ -158,8 +158,8 @@ func (mp *Pool) HasConflicts(t *transaction.Transaction, fee Feer) bool {
return false
}
// tryAddSendersFee tries to add system fee and network fee to the total sender`s fee in mempool
// and returns false if both balance check is required and sender has not enough GAS to pay.
// tryAddSendersFee tries to add system fee and network fee to the total sender's fee in the mempool
// and returns false if both balance check is required and the sender does not have enough GAS to pay.
func (mp *Pool) tryAddSendersFee(tx *transaction.Transaction, feer Feer, needCheck bool) bool {
payer := tx.Signers[mp.payerIndex].Account
senderFee, ok := mp.fees[payer]
@ -180,8 +180,8 @@ func (mp *Pool) tryAddSendersFee(tx *transaction.Transaction, feer Feer, needChe
return true
}
// checkBalance returns new cumulative fee balance for account or an error in
// case sender doesn't have enough GAS to pay for the transaction.
// checkBalance returns a new cumulative fee balance for the account or an error in
// case the sender doesn't have enough GAS to pay for the transaction.
func checkBalance(tx *transaction.Transaction, balance utilityBalanceAndFees) (uint256.Int, error) {
var txFee uint256.Int
@ -196,7 +196,7 @@ func checkBalance(tx *transaction.Transaction, balance utilityBalanceAndFees) (u
return txFee, nil
}
// Add tries to add given transaction to the Pool.
// Add tries to add the given transaction to the Pool.
func (mp *Pool) Add(t *transaction.Transaction, fee Feer, data ...interface{}) error {
var pItem = item{
txn: t,
@ -234,9 +234,9 @@ func (mp *Pool) Add(t *transaction.Transaction, fee Feer, data ...interface{}) e
mp.removeInternal(conflictingTx.Hash(), fee)
}
}
// Insert into sorted array (from max to min, that could also be done
// Insert into a sorted array (from max to min, that could also be done
// using sort.Sort(sort.Reverse()), but it incurs more overhead). Notice
// also that we're searching for position that is strictly more
// also that we're searching for a position that is strictly more
// prioritized than our new item because we do expect a lot of
// transactions with the same priority and appending to the end of the
// slice is always more efficient.
@ -299,7 +299,7 @@ func (mp *Pool) Add(t *transaction.Transaction, fee Feer, data ...interface{}) e
return nil
}
// Remove removes an item from the mempool, if it exists there (and does
// Remove removes an item from the mempool if it exists there (and does
// nothing if it doesn't).
func (mp *Pool) Remove(hash util.Uint256, feer Feer) {
mp.lock.Lock()
@ -346,8 +346,8 @@ func (mp *Pool) removeInternal(hash util.Uint256, feer Feer) {
}
// RemoveStale filters verified transactions through the given function keeping
// only the transactions for which it returns a true result. It's used to quickly
// drop part of the mempool that is now invalid after the block acceptance.
// only the transactions for which it returns a true result. It's used to quickly
// drop a part of the mempool that is now invalid after the block acceptance.
func (mp *Pool) RemoveStale(isOK func(*transaction.Transaction) bool, feer Feer) {
mp.lock.Lock()
policyChanged := mp.loadPolicy(feer)
@ -372,7 +372,7 @@ func (mp *Pool) RemoveStale(isOK func(*transaction.Transaction) bool, feer Feer)
}
}
if mp.resendThreshold != 0 {
// item is resend at resendThreshold, 2*resendThreshold, 4*resendThreshold ...
// item is resent at resendThreshold, 2*resendThreshold, 4*resendThreshold ...
// so quotient must be a power of two.
diff := (height - itm.blockStamp)
if diff%mp.resendThreshold == 0 && bits.OnesCount32(diff/mp.resendThreshold) == 1 {
@ -400,7 +400,7 @@ func (mp *Pool) RemoveStale(isOK func(*transaction.Transaction) bool, feer Feer)
mp.lock.Unlock()
}
// loadPolicy updates feePerByte field and returns whether policy has been
// loadPolicy updates the feePerByte field and returns whether the policy has been
// changed.
func (mp *Pool) loadPolicy(feer Feer) bool {
newFeePerByte := feer.FeePerByte()
@ -411,7 +411,7 @@ func (mp *Pool) loadPolicy(feer Feer) bool {
return false
}
// checkPolicy checks whether transaction fits policy.
// checkPolicy checks whether the transaction fits the policy.
func (mp *Pool) checkPolicy(tx *transaction.Transaction, policyChanged bool) bool {
if !policyChanged || tx.FeePerByte() >= mp.feePerByte {
return true
@ -439,7 +439,7 @@ func New(capacity int, payerIndex int, enableSubscriptions bool) *Pool {
return mp
}
// SetResendThreshold sets threshold after which transaction will be considered stale
// SetResendThreshold sets a threshold after which the transaction will be considered stale
// and returned for retransmission by `GetStaleTransactions`.
func (mp *Pool) SetResendThreshold(h uint32, f func(*transaction.Transaction, interface{})) {
mp.lock.Lock()
@ -555,10 +555,10 @@ func (mp *Pool) checkTxConflicts(tx *transaction.Transaction, fee Feer) ([]*tran
return conflictsToBeRemoved, err
}
// Verify checks if a Sender of tx is able to pay for it (and all the other
// Verify checks if the Sender of the tx is able to pay for it (and all the other
// transactions in the pool). If yes, the transaction tx is a valid
// transaction and the function returns true. If no, the transaction tx is
// considered to be invalid the function returns false.
// considered to be invalid and the function returns false.
func (mp *Pool) Verify(tx *transaction.Transaction, feer Feer) bool {
mp.lock.RLock()
defer mp.lock.RUnlock()
@ -566,7 +566,7 @@ func (mp *Pool) Verify(tx *transaction.Transaction, feer Feer) bool {
return err == nil
}
// removeConflictsOf removes hash of the given transaction from the conflicts list
// removeConflictsOf removes the hash of the given transaction from the conflicts list
// for each Conflicts attribute.
func (mp *Pool) removeConflictsOf(tx *transaction.Transaction) {
// remove all conflicting hashes from mp.conflicts list

View file

@ -25,16 +25,16 @@ func (mp *Pool) StopSubscriptions() {
}
}
// SubscribeForTransactions adds given channel to new mempool event broadcasting, so when
// there is a new transactions added to mempool or an existing transaction removed from
// mempool you'll receive it via this channel.
// SubscribeForTransactions adds the given channel to the new mempool event broadcasting, so when
// there is a new transaction added to the mempool or an existing transaction removed from
// the mempool, you'll receive it via this channel.
func (mp *Pool) SubscribeForTransactions(ch chan<- mempoolevent.Event) {
if mp.subscriptionsOn.Load() {
mp.subCh <- ch
}
}
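The subscription mechanism boils down to a fan-out over registered channels; a toy sketch under that assumption (`broadcaster` and `event` are illustrative types, not the pool's, and the real pool serializes subscribe/publish through its own goroutine rather than a bare map):

```go
package main

import "fmt"

// event is a stand-in for mempoolevent.Event.
type event struct{ kind string }

// broadcaster fans events out to every subscribed channel.
type broadcaster struct {
	subs map[chan<- event]bool
}

func (b *broadcaster) subscribe(ch chan<- event)   { b.subs[ch] = true }
func (b *broadcaster) unsubscribe(ch chan<- event) { delete(b.subs, ch) }
func (b *broadcaster) publish(e event) {
	for ch := range b.subs {
		ch <- e
	}
}

func main() {
	b := &broadcaster{subs: make(map[chan<- event]bool)}
	ch := make(chan event, 1) // buffered so publish doesn't block here
	b.subscribe(ch)
	b.publish(event{kind: "added"})
	fmt.Println((<-ch).kind) // added
}
```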
// UnsubscribeFromTransactions unsubscribes given channel from new mempool notifications,
// UnsubscribeFromTransactions unsubscribes the given channel from new mempool notifications,
// you can close it afterwards. Passing a non-subscribed channel is a no-op.
func (mp *Pool) UnsubscribeFromTransactions(ch chan<- mempoolevent.Event) {
if mp.subscriptionsOn.Load() {

View file

@ -17,7 +17,7 @@ const (
TransactionRemoved Type = 0x02
)
// Event represents one of mempool events: transaction was added or removed from mempool.
// Event represents one of the mempool events: a transaction was added to or removed from the mempool.
type Event struct {
Type Type
Tx *transaction.Transaction
@ -36,7 +36,7 @@ func (e Type) String() string {
}
}
// GetEventTypeFromString converts input string into an Type if it's possible.
// GetEventTypeFromString converts the input string into a Type if possible.
func GetEventTypeFromString(s string) (Type, error) {
switch s {
case "added":
@ -48,12 +48,12 @@ func GetEventTypeFromString(s string) (Type, error) {
}
}
// MarshalJSON implements json.Marshaler interface.
// MarshalJSON implements the json.Marshaler interface.
func (e Type) MarshalJSON() ([]byte, error) {
return json.Marshal(e.String())
}
// UnmarshalJSON implements json.Unmarshaler interface.
// UnmarshalJSON implements the json.Unmarshaler interface.
func (e *Type) UnmarshalJSON(b []byte) error {
var s string

View file

@ -36,7 +36,7 @@ func (b *BaseNode) setCache(bs []byte, h util.Uint256) {
b.hashValid = true
}
// getHash returns a hash of this BaseNode.
// getHash returns the hash of this BaseNode.
func (b *BaseNode) getHash(n Node) util.Uint256 {
if !b.hashValid {
b.updateHash(n)
@ -52,7 +52,7 @@ func (b *BaseNode) getBytes(n Node) []byte {
return b.bytes
}
// updateHash updates hash field for this BaseNode.
// updateHash updates the hash field for this BaseNode.
func (b *BaseNode) updateHash(n Node) {
if n.Type() == HashT || n.Type() == EmptyT {
panic("can't update hash for empty or hash node")
@ -61,7 +61,7 @@ func (b *BaseNode) updateHash(n Node) {
b.hashValid = true
}
// updateCache updates hash and bytes fields for this BaseNode.
// updateBytes updates the hash and bytes fields for this BaseNode.
func (b *BaseNode) updateBytes(n Node) {
bw := io.NewBufBinWriter()
bw.Grow(1 + n.Size())
@ -85,13 +85,13 @@ func encodeBinaryAsChild(n Node, w *io.BinWriter) {
w.WriteBytes(n.Hash().BytesBE())
}
// encodeNodeWithType encodes node together with it's type.
// encodeNodeWithType encodes the node together with its type.
func encodeNodeWithType(n Node, w *io.BinWriter) {
w.WriteB(byte(n.Type()))
n.EncodeBinary(w)
}
// DecodeNodeWithType decodes node together with it's type.
// DecodeNodeWithType decodes the node together with its type.
func DecodeNodeWithType(r *io.BinReader) Node {
if r.Err != nil {
return nil

View file

@ -5,7 +5,7 @@ import (
"sort"
)
// Batch is batch of storage changes.
// Batch is a batch of storage changes.
// It stores key-value pairs in a sorted state.
type Batch struct {
kv []keyValue
@ -16,7 +16,7 @@ type keyValue struct {
value []byte
}
// MapToMPTBatch makes a Batch from unordered set of storage changes.
// MapToMPTBatch makes a Batch from an unordered set of storage changes.
func MapToMPTBatch(m map[string][]byte) Batch {
var b Batch
@ -31,13 +31,13 @@ func MapToMPTBatch(m map[string][]byte) Batch {
return b
}
// PutBatch puts batch to trie.
// PutBatch puts a batch into the trie.
// It is not atomic (and probably cannot be without substantial slow-down)
// and returns number of elements processed.
// and returns the number of elements processed.
// If an error is returned, the trie may be in an inconsistent state in case of storage failures.
// This is due to the fact that we can remove multiple children from the branch node simultaneously
// and won't strip the resulting branch node.
// However it is used mostly after the block processing to update MPT and error is not expected.
// However, it is used mostly after block processing to update the MPT, and errors are not expected.
func (t *Trie) PutBatch(b Batch) (int, error) {
if len(b.kv) == 0 {
return 0, nil
@ -150,13 +150,13 @@ func (t *Trie) addToBranch(b *BranchNode, kv []keyValue, inTrie bool) (Node, int
t.removeRef(b.Hash(), b.bytes)
}
// Error during iterate means some storage failure (i.e. some hash node cannot be
// retrieved from storage). This can leave trie in inconsistent state, because
// it can be impossible to strip branch node after it has been changed.
// An error during iteration means some storage failure (i.e. some hash node cannot be
// retrieved from storage). This can leave the trie in an inconsistent state because
// it can be impossible to strip the branch node after it has been changed.
// Consider a branch with 10 children, first 9 of which are deleted and the remaining one
// is a leaf node replaced by a hash node missing from storage.
// This can't be fixed easily because we need to _revert_ changes in reference counts
// for children which were updated successfully. But storage access errors means we are
// is a leaf node replaced by a hash node missing from the storage.
// This can't be fixed easily because we need to _revert_ changes in the reference counts
// for children which have been updated successfully. But storage access errors mean we are
// in a bad state anyway.
n, err := t.iterateBatch(kv, func(c byte, kv []keyValue) (int, error) {
child, n, err := t.putBatchIntoNode(b.Children[c], kv)
@ -167,8 +167,8 @@ func (t *Trie) addToBranch(b *BranchNode, kv []keyValue, inTrie bool) (Node, int
b.invalidateCache()
}
// Even if some of the children can't be put, we need to try to strip branch
// and possibly update refcounts.
// Even if some of the children can't be put, we need to try to strip the branch
// and possibly update the refcounts.
nd, bErr := t.stripBranch(b)
if err == nil {
err = bErr
@ -176,8 +176,8 @@ func (t *Trie) addToBranch(b *BranchNode, kv []keyValue, inTrie bool) (Node, int
return nd, n, err
}
// stripsBranch strips branch node after incomplete batch put.
// It assumes there is no reference to b in trie.
// stripBranch strips the branch node after an incomplete batch put.
// It assumes there is no reference to b in the trie.
func (t *Trie) stripBranch(b *BranchNode) (Node, error) {
var n int
var lastIndex byte
@ -232,12 +232,12 @@ func (t *Trie) putBatchIntoHash(curr *HashNode, kv []keyValue) (Node, int, error
return t.putBatchIntoNode(result, kv)
}
// Creates new subtrie from provided key-value pairs.
// Creates a new subtrie from the provided key-value pairs.
// Items in kv must have no common prefix.
// If there are any deletions in kv, return error.
// If there are any deletions in kv, an error is returned.
// kv is not empty.
// kv is sorted by key.
// value is current value stored by prefix.
// value is the current value stored by prefix.
func (t *Trie) newSubTrieMany(prefix []byte, kv []keyValue, value []byte) (Node, int, error) {
if len(kv[0].key) == 0 {
if kv[0].value == nil {

View file

@ -19,13 +19,13 @@ var (
errStop = errors.New("stop condition is met")
)
// Billet is a part of MPT trie with missing hash nodes that need to be restored.
// Billet is a part of an MPT trie with missing hash nodes that need to be restored.
// Billet is based on the following assumptions:
// 1. Refcount can only be incremented (we don't change MPT structure during restore,
// 1. Refcount can only be incremented (we don't change the MPT structure during restore,
// thus don't need to decrease refcount).
// 2. Each time the part of Billet is completely restored, it is collapsed into
// 2. Each time a part of a Billet is completely restored, it is collapsed into
// HashNode.
// 3. Pair (node, path) must be restored only once. It's a duty of MPT pool to manage
// 3. Any pair (node, path) must be restored only once. It's the duty of the MPT pool to manage
// MPT paths in order to provide this assumption.
type Billet struct {
TempStoragePrefix storage.KeyPrefix
@ -35,9 +35,9 @@ type Billet struct {
mode TrieMode
}
// NewBillet returns new billet for MPT trie restoring. It accepts a MemCachedStore
// NewBillet returns a new billet for MPT trie restoring. It accepts a MemCachedStore
// to decouple storage errors from logic errors so that all storage errors are
// processed during `store.Persist()` at the caller. This also has the benefit,
// processed during `store.Persist()` at the caller. Another benefit is
// that every `Put` can be considered an atomic operation.
func NewBillet(rootHash util.Uint256, mode TrieMode, prefix storage.KeyPrefix, store *storage.MemCachedStore) *Billet {
return &Billet{
@ -49,8 +49,8 @@ func NewBillet(rootHash util.Uint256, mode TrieMode, prefix storage.KeyPrefix, s
}
// RestoreHashNode replaces HashNode located at the provided path by the specified Node
// and stores it. It also maintains MPT as small as possible by collapsing those parts
// of MPT that have been completely restored.
// and stores it. It also keeps the MPT as small as possible by collapsing those parts
// of the MPT that have been completely restored.
func (b *Billet) RestoreHashNode(path []byte, node Node) error {
if _, ok := node.(*HashNode); ok {
return fmt.Errorf("%w: unable to restore node into HashNode", ErrRestoreFailed)
@ -75,7 +75,7 @@ func (b *Billet) RestoreHashNode(path []byte, node Node) error {
return nil
}
// putIntoNode puts val with provided path inside curr and returns updated node.
// putIntoNode puts val with the provided path inside curr and returns an updated node.
// Reference counters are updated for both curr and returned value.
func (b *Billet) putIntoNode(curr Node, path []byte, val Node) (Node, error) {
switch n := curr.(type) {
@ -102,7 +102,7 @@ func (b *Billet) putIntoLeaf(curr *LeafNode, path []byte, val Node) (Node, error
return nil, fmt.Errorf("%w: bad Leaf node hash: expected %s, got %s", ErrRestoreFailed, curr.Hash().StringBE(), val.Hash().StringBE())
}
// Once Leaf node is restored, it will be collapsed into HashNode forever, so
// there shouldn't be such situation when we try to restore Leaf node.
// there shouldn't be such a situation when we try to restore a Leaf node.
panic("bug: can't restore LeafNode")
}
@ -143,15 +143,15 @@ func (b *Billet) putIntoExtension(curr *ExtensionNode, path []byte, val Node) (N
}
func (b *Billet) putIntoHash(curr *HashNode, path []byte, val Node) (Node, error) {
// Once a part of MPT Billet is completely restored, it will be collapsed forever, so
// Once a part of the MPT Billet is completely restored, it will be collapsed forever, so
// it's an MPT pool duty to avoid duplicating restore requests.
if len(path) != 0 {
return nil, fmt.Errorf("%w: node has already been collapsed", ErrRestoreFailed)
}
// `curr` hash node can be either of
// 1) saved in storage (i.g. if we've already restored node with the same hash from the
// other part of MPT), so just add it to local in-memory MPT.
// 1) saved in the storage (e.g. if we've already restored a node with the same hash from the
// other part of the MPT), so just add it to the local in-memory MPT.
// 2) missing from the storage. It's OK because we're syncing MPT state, and the purpose
// is to store missing hash nodes.
// Both cases are OK, but we still need to validate `val` against `curr`.

View file

@ -9,13 +9,13 @@ import (
)
const (
// childrenCount represents a number of children of a branch node.
// childrenCount represents the number of children of a branch node.
childrenCount = 17
// lastChild is the index of the last child.
lastChild = childrenCount - 1
)
// BranchNode represents MPT's branch node.
// BranchNode represents an MPT's branch node.
type BranchNode struct {
BaseNode
Children [childrenCount]Node
@ -23,7 +23,7 @@ type BranchNode struct {
var _ Node = (*BranchNode)(nil)
// NewBranchNode returns new branch node.
// NewBranchNode returns a new branch node.
func NewBranchNode() *BranchNode {
b := new(BranchNode)
for i := 0; i < childrenCount; i++ {
@ -32,20 +32,20 @@ func NewBranchNode() *BranchNode {
return b
}
// Type implements Node interface.
// Type implements the Node interface.
func (b *BranchNode) Type() NodeType { return BranchT }
// Hash implements BaseNode interface.
// Hash implements the BaseNode interface.
func (b *BranchNode) Hash() util.Uint256 {
return b.getHash(b)
}
// Bytes implements BaseNode interface.
// Bytes implements the BaseNode interface.
func (b *BranchNode) Bytes() []byte {
return b.getBytes(b)
}
// Size implements Node interface.
// Size implements the Node interface.
func (b *BranchNode) Size() int {
sz := childrenCount
for i := range b.Children {
@ -72,12 +72,12 @@ func (b *BranchNode) DecodeBinary(r *io.BinReader) {
}
}
// MarshalJSON implements json.Marshaler.
// MarshalJSON implements the json.Marshaler interface.
func (b *BranchNode) MarshalJSON() ([]byte, error) {
return json.Marshal(b.Children)
}
// UnmarshalJSON implements json.Unmarshaler.
// UnmarshalJSON implements the json.Unmarshaler interface.
func (b *BranchNode) UnmarshalJSON(data []byte) error {
var obj NodeObject
if err := obj.UnmarshalJSON(data); err != nil {

View file

@ -37,15 +37,15 @@ func prepareMPTCompat() *Trie {
// TestCompatibility contains tests present in C# implementation.
// https://github.com/neo-project/neo-modules/blob/master/tests/Neo.Plugins.StateService.Tests/MPT/UT_MPTTrie.cs
// There are some differences, though:
// 1. In our implementation delete is silent, i.e. we do not return an error is the key is missing or empty.
// However, we do return error when contents of hash node are missing from the store
// 1. In our implementation, delete is silent, i.e. we do not return an error if the key is missing or empty.
// However, we do return an error when the contents of the hash node are missing from the store
// (corresponds to an exception in the C# implementation). However, if the key is too big, an error is returned
// (corresponds to an exception in the C# implementation).
// 2. In our implementation put returns error if something goes wrong, while C# implementation throws
// 2. In our implementation, put returns an error if something goes wrong, while the C# implementation throws
// an exception and returns nothing.
// 3. In our implementation get does not immediately return error in case of an empty key. An error is returned
// only if value is missing from the storage. C# implementation checks that key is not empty and throws an error
// otherwice. However, if the key is too big, an error is returned (corresponds to exception in C# implementation).
// 3. In our implementation, get does not immediately return any error in case of an empty key. An error is returned
// only if the value is missing from the storage. The C# implementation checks that the key is not empty and throws an error
// otherwise. However, if the key is too big, an error is returned (corresponds to an exception in the C# implementation).
func TestCompatibility(t *testing.T) {
mainTrie := prepareMPTCompat()

View file

@ -1,14 +1,14 @@
/*
Package mpt implements MPT (Merkle-Patricia Tree).
Package mpt implements MPT (Merkle-Patricia Trie).
MPT stores key-value pairs and is a trie over 16-symbol alphabet. https://en.wikipedia.org/wiki/Trie
Trie is a tree where values are stored in leafs and keys are paths from root to the leaf node.
MPT consists of 4 type of nodes:
- Leaf node contains only value.
- Extension node contains both key and value.
An MPT stores key-value pairs and is a trie over a 16-symbol alphabet. https://en.wikipedia.org/wiki/Trie
A trie is a tree where values are stored in leaves and keys are paths from the root to the leaf node.
An MPT consists of 4 types of nodes:
- Leaf node only contains a value.
- Extension node contains both a key and a value.
- Branch node contains 2 or more children.
- Hash node is a compressed node and contains only actual node's hash.
The actual node must be retrieved from storage or over the network.
- Hash node is a compressed node and only contains the actual node's hash.
The actual node must be retrieved from the storage or over the network.
As an example here is a trie containing 3 pairs:
- 0x1201 -> val1
@ -31,15 +31,15 @@ BranchNode [0, 1, 2, ...], Last -> Leaf(val4)
There are 3 invariants that this implementation has:
- Branch node cannot have <= 1 child
- Extension node cannot have zero-length key
- Extension node cannot have another Extension node in it's next field
- Extension node cannot have a zero-length key
- Extension node cannot have another Extension node in its next field
Thank to these restrictions, there is a single root hash for every set of key-value pairs
irregardless of the order they were added/removed with.
Thanks to these restrictions, there is a single root hash for every set of key-value pairs
regardless of the order they were added/removed in.
The actual trie structure can vary because of node -> HashNode compressing.
There is also one optimization which cost us almost nothing in terms of complexity but is very beneficial:
When we perform get/put/delete on a speficic path, every Hash node which was retreived from storage is
replaced by its uncompressed form, so that subsequent hits of this not don't use storage.
There is also one optimization which costs us almost nothing in terms of complexity but is quite beneficial:
When we perform get/put/delete on a specific path, every Hash node which was retrieved from the storage is
replaced by its uncompressed form, so that subsequent hits of this node don't need to access the storage.
*/
package mpt

View file

@ -8,14 +8,14 @@ import (
"github.com/nspcc-dev/neo-go/pkg/util"
)
// EmptyNode represents empty node.
// EmptyNode represents an empty node.
type EmptyNode struct{}
// DecodeBinary implements io.Serializable interface.
// DecodeBinary implements the io.Serializable interface.
func (e EmptyNode) DecodeBinary(*io.BinReader) {
}
// EncodeBinary implements io.Serializable interface.
// EncodeBinary implements the io.Serializable interface.
func (e EmptyNode) EncodeBinary(*io.BinWriter) {
}

View file

@ -15,12 +15,12 @@ const (
// maxPathLength is the max length of the extension node key.
maxPathLength = (storage.MaxStorageKeyLen + 4) * 2
// MaxKeyLength is the max length of the key to put in trie
// MaxKeyLength is the max length of the key to put in the trie
// before transforming to nibbles.
MaxKeyLength = maxPathLength / 2
)
// ExtensionNode represents MPT's extension node.
// ExtensionNode represents an MPT's extension node.
type ExtensionNode struct {
BaseNode
key []byte
@ -29,8 +29,8 @@ type ExtensionNode struct {
var _ Node = (*ExtensionNode)(nil)
// NewExtensionNode returns hash node with the specified key and next node.
// Note: because it is a part of Trie, key must be mangled, i.e. must contain only bytes with high half = 0.
// NewExtensionNode returns a new extension node with the specified key and the next node.
// Note: since it is a part of a Trie, the key must be mangled, i.e. must contain only bytes with high half = 0.
func NewExtensionNode(key []byte, next Node) *ExtensionNode {
return &ExtensionNode{
key: key,
@ -78,7 +78,7 @@ func (e *ExtensionNode) Size() int {
1 + util.Uint256Size // e.next is never empty
}
// MarshalJSON implements json.Marshaler.
// MarshalJSON implements the json.Marshaler interface.
func (e *ExtensionNode) MarshalJSON() ([]byte, error) {
m := map[string]interface{}{
"key": hex.EncodeToString(e.key),
@ -87,7 +87,7 @@ func (e *ExtensionNode) MarshalJSON() ([]byte, error) {
return json.Marshal(m)
}
// UnmarshalJSON implements json.Unmarshaler.
// UnmarshalJSON implements the json.Unmarshaler interface.
func (e *ExtensionNode) UnmarshalJSON(data []byte) error {
var obj NodeObject
if err := obj.UnmarshalJSON(data); err != nil {

View file

@ -7,7 +7,7 @@ import (
"github.com/nspcc-dev/neo-go/pkg/util"
)
// HashNode represents MPT's hash node.
// HashNode represents an MPT's hash node.
type HashNode struct {
BaseNode
Collapsed bool
@ -15,7 +15,7 @@ type HashNode struct {
var _ Node = (*HashNode)(nil)
// NewHashNode returns hash node with the specified hash.
// NewHashNode returns a hash node with the specified hash.
func NewHashNode(h util.Uint256) *HashNode {
return &HashNode{
BaseNode: BaseNode{
@ -61,12 +61,12 @@ func (h HashNode) EncodeBinary(w *io.BinWriter) {
w.WriteBytes(h.hash[:])
}
// MarshalJSON implements json.Marshaler.
// MarshalJSON implements the json.Marshaler interface.
func (h *HashNode) MarshalJSON() ([]byte, error) {
return []byte(`{"hash":"` + h.hash.StringLE() + `"}`), nil
}
// UnmarshalJSON implements json.Unmarshaler.
// UnmarshalJSON implements the json.Unmarshaler interface.
func (h *HashNode) UnmarshalJSON(data []byte) error {
var obj NodeObject
if err := obj.UnmarshalJSON(data); err != nil {

View file

@ -2,7 +2,7 @@ package mpt
import "github.com/nspcc-dev/neo-go/pkg/util"
// lcp returns longest common prefix of a and b.
// lcp returns the longest common prefix of a and b.
// Note: it does no allocations.
func lcp(a, b []byte) []byte {
if len(a) < len(b) {
@ -33,7 +33,7 @@ func lcpMany(kv []keyValue) []byte {
return p
}
// toNibbles mangles path by splitting every byte into 2 containing low- and high- 4-byte part.
// toNibbles mangles the path by splitting every byte into 2 nibbles containing its low and high 4-bit halves.
func toNibbles(path []byte) []byte {
result := make([]byte, len(path)*2)
for i := range path {
@ -43,7 +43,7 @@ func toNibbles(path []byte) []byte {
return result
}
// strToNibbles mangles path by splitting every byte into 2 containing low- and high- 4-byte part,
// strToNibbles mangles the path by splitting every byte into 2 nibbles containing its low and high 4-bit halves,
// ignoring the first byte (prefix).
func strToNibbles(path string) []byte {
result := make([]byte, (len(path)-1)*2)
@ -54,7 +54,7 @@ func strToNibbles(path string) []byte {
return result
}
// fromNibbles performs operation opposite to toNibbles and does no path validity checks.
// fromNibbles performs the operation opposite to toNibbles and does no path validity checks.
func fromNibbles(path []byte) []byte {
result := make([]byte, len(path)/2)
for i := range result {
@ -63,7 +63,7 @@ func fromNibbles(path []byte) []byte {
return result
}
// GetChildrenPaths returns a set of paths to node's children who are non-empty HashNodes
// GetChildrenPaths returns a set of paths to the node's children which are non-empty HashNodes
// based on the node's path.
func GetChildrenPaths(path []byte, node Node) map[util.Uint256][][]byte {
res := make(map[util.Uint256][][]byte)

View file

@ -10,10 +10,10 @@ import (
"github.com/nspcc-dev/neo-go/pkg/util"
)
// MaxValueLength is a max length of a leaf node value.
// MaxValueLength is the max length of a leaf node value.
const MaxValueLength = 3 + storage.MaxStorageValueLen + 1
// LeafNode represents MPT's leaf node.
// LeafNode represents an MPT's leaf node.
type LeafNode struct {
BaseNode
value []byte
@ -21,7 +21,7 @@ type LeafNode struct {
var _ Node = (*LeafNode)(nil)
// NewLeafNode returns hash node with the specified value.
// NewLeafNode returns a new leaf node with the specified value.
func NewLeafNode(value []byte) *LeafNode {
return &LeafNode{value: value}
}
@ -61,12 +61,12 @@ func (n *LeafNode) Size() int {
return io.GetVarSize(len(n.value)) + len(n.value)
}
// MarshalJSON implements json.Marshaler.
// MarshalJSON implements the json.Marshaler interface.
func (n *LeafNode) MarshalJSON() ([]byte, error) {
return []byte(`{"value":"` + hex.EncodeToString(n.value) + `"}`), nil
}
// UnmarshalJSON implements json.Unmarshaler.
// UnmarshalJSON implements the json.Unmarshaler interface.
func (n *LeafNode) UnmarshalJSON(data []byte) error {
var obj NodeObject
if err := obj.UnmarshalJSON(data); err != nil {


@ -9,7 +9,7 @@ import (
"github.com/nspcc-dev/neo-go/pkg/util"
)
// NodeType represents node type..
// NodeType represents a node type.
type NodeType byte
// Node types definitions.
@ -21,14 +21,14 @@ const (
EmptyT NodeType = 0x04
)
// NodeObject represents Node together with it's type.
// NodeObject represents a Node together with its type.
// It is used for serialization/deserialization where type info
// is also expected.
type NodeObject struct {
Node
}
// Node represents common interface of all MPT nodes.
// Node represents a common interface of all MPT nodes.
type Node interface {
io.Serializable
json.Marshaler
@ -48,7 +48,7 @@ func (n *NodeObject) DecodeBinary(r *io.BinReader) {
n.Node = DecodeNodeWithType(r)
}
// UnmarshalJSON implements json.Unmarshaler.
// UnmarshalJSON implements the json.Unmarshaler interface.
func (n *NodeObject) UnmarshalJSON(data []byte) error {
var m map[string]json.RawMessage
err := json.Unmarshal(data, &m)


@ -10,8 +10,8 @@ import (
"github.com/nspcc-dev/neo-go/pkg/util/slice"
)
// GetProof returns a proof that key belongs to t.
// Proof consist of serialized nodes occurring on path from the root to the leaf of key.
// GetProof returns a proof that the key belongs to t.
// The proof consists of serialized nodes occurring on the path from the root to the leaf of key.
func (t *Trie) GetProof(key []byte) ([][]byte, error) {
var proof [][]byte
if len(key) > MaxKeyLength {
@ -63,7 +63,7 @@ func (t *Trie) getProof(curr Node, path []byte, proofs *[][]byte) (Node, error)
}
// VerifyProof verifies that path indeed belongs to a MPT with the specified root hash.
// It also returns value for the key.
// It also returns the value for the key.
func VerifyProof(rh util.Uint256, key []byte, proofs [][]byte) ([]byte, bool) {
path := toNibbles(key)
tr := NewTrie(NewHashNode(rh), ModeAll, storage.NewMemCachedStore(storage.NewMemoryStore()))


@ -12,10 +12,10 @@ import (
"github.com/nspcc-dev/neo-go/pkg/util/slice"
)
// TrieMode is the storage mode of trie, it affects the DB scheme.
// TrieMode is the storage mode of a trie; it affects the DB scheme.
type TrieMode byte
// TrieMode is the storage mode of trie.
// TrieMode is the storage mode of a trie.
const (
// ModeAll is used to store everything.
ModeAll TrieMode = 0
@ -43,7 +43,7 @@ type cachedNode struct {
refcount int32
}
// ErrNotFound is returned when requested trie item is missing.
// ErrNotFound is returned when the requested trie item is missing.
var ErrNotFound = errors.New("item not found")
// RC returns true when reference counting is enabled.
@ -56,9 +56,9 @@ func (m TrieMode) GC() bool {
return m&ModeGCFlag != 0
}
// NewTrie returns new MPT trie. It accepts a MemCachedStore to decouple storage errors from logic errors
// NewTrie returns a new MPT trie. It accepts a MemCachedStore to decouple storage errors from logic errors,
// so that all storage errors are processed during `store.Persist()` at the caller.
// This also has the benefit, that every `Put` can be considered an atomic operation.
// Another benefit is that every `Put` can be considered an atomic operation.
func NewTrie(root Node, mode TrieMode, store *storage.MemCachedStore) *Trie {
if root == nil {
root = EmptyNode{}
@ -73,7 +73,7 @@ func NewTrie(root Node, mode TrieMode, store *storage.MemCachedStore) *Trie {
}
}
// Get returns value for the provided key in t.
// Get returns the value for the provided key in t.
func (t *Trie) Get(key []byte) ([]byte, error) {
if len(key) > MaxKeyLength {
return nil, errors.New("key is too big")
@ -87,11 +87,11 @@ func (t *Trie) Get(key []byte) ([]byte, error) {
return slice.Copy(leaf.(*LeafNode).value), nil
}
// getWithPath returns a current node with all hash nodes along the path replaced
// to their "unhashed" counterparts. It also returns node the provided path in a
// subtrie rooting in curr points to. In case of `strict` set to `false` the
// provided path can be incomplete, so it also returns full path that points to
// the node found at the specified incomplete path. In case of `strict` set to `true`
// getWithPath returns the current node with all hash nodes along the path replaced
// with their "unhashed" counterparts. It also returns the node that the provided path in a
// subtrie rooted in curr points to. In case of `strict` set to `false`, the
// provided path can be incomplete, so it also returns the full path that points to
// the node found at the specified incomplete path. In case of `strict` set to `true`,
// the resulting path matches the provided one.
func (t *Trie) getWithPath(curr Node, path []byte, strict bool) (Node, Node, []byte, error) {
switch n := curr.(type) {
@ -159,8 +159,8 @@ func (t *Trie) Put(key, value []byte) error {
return nil
}
// putIntoLeaf puts val to trie if current node is a Leaf.
// It returns Node if curr needs to be replaced and error if any.
// putIntoLeaf puts the val to the trie if the current node is a Leaf.
// It returns a Node if curr needs to be replaced and an error, if any.
func (t *Trie) putIntoLeaf(curr *LeafNode, path []byte, val Node) (Node, error) {
v := val.(*LeafNode)
if len(path) == 0 {
@ -176,8 +176,8 @@ func (t *Trie) putIntoLeaf(curr *LeafNode, path []byte, val Node) (Node, error)
return b, nil
}
// putIntoBranch puts val to trie if current node is a Branch.
// It returns Node if curr needs to be replaced and error if any.
// putIntoBranch puts the val to the trie if the current node is a Branch.
// It returns the Node if curr needs to be replaced and an error, if any.
func (t *Trie) putIntoBranch(curr *BranchNode, path []byte, val Node) (Node, error) {
i, path := splitPath(path)
t.removeRef(curr.Hash(), curr.bytes)
@ -191,8 +191,8 @@ func (t *Trie) putIntoBranch(curr *BranchNode, path []byte, val Node) (Node, err
return curr, nil
}
// putIntoExtension puts val to trie if current node is an Extension.
// It returns Node if curr needs to be replaced and error if any.
// putIntoExtension puts the val to the trie if the current node is an Extension.
// It returns the Node if curr needs to be replaced and an error, if any.
func (t *Trie) putIntoExtension(curr *ExtensionNode, path []byte, val Node) (Node, error) {
t.removeRef(curr.Hash(), curr.bytes)
if bytes.HasPrefix(path, curr.key) {
@ -232,8 +232,8 @@ func (t *Trie) putIntoEmpty(path []byte, val Node) (Node, error) {
return t.newSubTrie(path, val, true), nil
}
// putIntoHash puts val to trie if current node is a HashNode.
// It returns Node if curr needs to be replaced and error if any.
// putIntoHash puts the val to the trie if the current node is a HashNode.
// It returns the Node if curr needs to be replaced and an error, if any.
func (t *Trie) putIntoHash(curr *HashNode, path []byte, val Node) (Node, error) {
result, err := t.getFromStore(curr.hash)
if err != nil {
@ -242,7 +242,7 @@ func (t *Trie) putIntoHash(curr *HashNode, path []byte, val Node) (Node, error)
return t.putIntoNode(result, path, val)
}
// newSubTrie create new trie containing node at provided path.
// newSubTrie creates a new trie containing the node at the provided path.
func (t *Trie) newSubTrie(path []byte, val Node, newVal bool) Node {
if newVal {
t.addRef(val.Hash(), val.Bytes())
@ -255,7 +255,7 @@ func (t *Trie) newSubTrie(path []byte, val Node, newVal bool) Node {
return e
}
// putIntoNode puts val with provided path inside curr and returns updated node.
// putIntoNode puts the val with the provided path inside curr and returns an updated node.
// Reference counters are updated for both curr and the returned value.
func (t *Trie) putIntoNode(curr Node, path []byte, val Node) (Node, error) {
switch n := curr.(type) {
@ -274,8 +274,8 @@ func (t *Trie) putIntoNode(curr Node, path []byte, val Node) (Node, error) {
}
}
// Delete removes key from trie.
// It returns no error on missing key.
// Delete removes the key from the trie.
// It returns no error on a missing key.
func (t *Trie) Delete(key []byte) error {
if len(key) > MaxKeyLength {
return errors.New("key is too big")
@ -363,7 +363,7 @@ func (t *Trie) deleteFromExtension(n *ExtensionNode, path []byte) (Node, error)
return n, nil
}
// deleteFromNode removes value with provided path from curr and returns an updated node.
// deleteFromNode removes the value with the provided path from curr and returns an updated node.
// Reference counters are updated for both curr and the returned value.
func (t *Trie) deleteFromNode(curr Node, path []byte) (Node, error) {
switch n := curr.(type) {
@ -402,9 +402,9 @@ func makeStorageKey(mptKey util.Uint256) []byte {
return append([]byte{byte(storage.DataMPT)}, mptKey[:]...)
}
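The key prefixing done by makeStorageKey can be shown with a minimal standalone sketch. The dataMPT value and the 4-byte hash length here are assumptions made for illustration, not the actual storage.DataMPT constant or util.Uint256:

```go
package main

import "fmt"

// dataMPT is a hypothetical stand-in for the storage.DataMPT prefix byte.
const dataMPT byte = 0x03

// makeStorageKey prepends the MPT data prefix to a node hash, mirroring the
// helper above; the hash length is shortened for readability.
func makeStorageKey(hash [4]byte) []byte {
	return append([]byte{dataMPT}, hash[:]...)
}

func main() {
	k := makeStorageKey([4]byte{0xDE, 0xAD, 0xBE, 0xEF})
	fmt.Printf("% x\n", k) // 03 de ad be ef
}
```

The single-byte prefix keeps all MPT nodes in their own key range of the underlying store, separate from other data kinds.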
// Flush puts every node in the trie except Hash ones to the storage.
// Because we care only about block-level changes, there is no need to put every
// new node to storage. Normally, flush should be called with every StateRoot persist, i.e.
// Flush puts every node (except Hash ones) in the trie to the storage.
// Because we care about block-level changes only, there is no need to put every
// new node to the storage. Normally, flush should be called with every StateRoot persist, i.e.
// after every block.
func (t *Trie) Flush(index uint32) {
key := makeStorageKey(util.Uint256{})
@ -571,7 +571,7 @@ func collapse(depth int, node Node) Node {
return node
}
// Find returns list of storage key-value pairs whose key is prefixed by the specified
// Find returns a list of storage key-value pairs whose key is prefixed by the specified
// prefix starting from the specified `prefix`+`from` path (not including the item at
// the specified `prefix`+`from` path, if present). At most `max` elements are returned.
func (t *Trie) Find(prefix, from []byte, max int) ([]storage.KeyValue, error) {


@ -27,13 +27,13 @@ type Contracts struct {
Crypto *Crypto
Std *Std
Contracts []interop.Contract
// persistScript is vm script which executes "onPersist" method of every native contract.
// persistScript is a vm script which executes the "onPersist" method of every native contract.
persistScript []byte
// postPersistScript is vm script which executes "postPersist" method of every native contract.
// postPersistScript is a vm script which executes the "postPersist" method of every native contract.
postPersistScript []byte
}
// ByHash returns native contract with the specified hash.
// ByHash returns a native contract with the specified hash.
func (cs *Contracts) ByHash(h util.Uint160) interop.Contract {
for _, ctr := range cs.Contracts {
if ctr.Metadata().Hash.Equals(h) {
@ -43,7 +43,7 @@ func (cs *Contracts) ByHash(h util.Uint160) interop.Contract {
return nil
}
// ByName returns native contract with the specified name.
// ByName returns a native contract with the specified name.
func (cs *Contracts) ByName(name string) interop.Contract {
name = strings.ToLower(name)
for _, ctr := range cs.Contracts {
@ -54,7 +54,7 @@ func (cs *Contracts) ByName(name string) interop.Contract {
return nil
}
// NewContracts returns new set of native contracts with new GAS, NEO, Policy, Oracle,
// NewContracts returns a new set of native contracts with new GAS, NEO, Policy, Oracle,
// Designate and (optional) Notary contracts.
func NewContracts(cfg config.ProtocolConfiguration) *Contracts {
cs := new(Contracts)
@ -122,7 +122,7 @@ func NewContracts(cfg config.ProtocolConfiguration) *Contracts {
return cs
}
// GetPersistScript returns VM script calling "onPersist" syscall for native contracts.
// GetPersistScript returns a VM script calling "onPersist" syscall for native contracts.
func (cs *Contracts) GetPersistScript() []byte {
if cs.persistScript != nil {
return cs.persistScript
@ -133,7 +133,7 @@ func (cs *Contracts) GetPersistScript() []byte {
return cs.persistScript
}
// GetPostPersistScript returns VM script calling "postPersist" syscall for native contracts.
// GetPostPersistScript returns a VM script calling "postPersist" syscall for native contracts.
func (cs *Contracts) GetPostPersistScript() []byte {
if cs.postPersistScript != nil {
return cs.postPersistScript


@ -137,22 +137,22 @@ func curveFromStackitem(si stackitem.Item) (elliptic.Curve, error) {
}
}
// Metadata implements Contract interface.
// Metadata implements the Contract interface.
func (c *Crypto) Metadata() *interop.ContractMD {
return &c.ContractMD
}
// Initialize implements Contract interface.
// Initialize implements the Contract interface.
func (c *Crypto) Initialize(ic *interop.Context) error {
return nil
}
// OnPersist implements Contract interface.
// OnPersist implements the Contract interface.
func (c *Crypto) OnPersist(ic *interop.Context) error {
return nil
}
// PostPersist implements Contract interface.
// PostPersist implements the Contract interface.
func (c *Crypto) PostPersist(ic *interop.Context) error {
return nil
}


@ -27,7 +27,7 @@ import (
"github.com/nspcc-dev/neo-go/pkg/vm/stackitem"
)
// Designate represents designation contract.
// Designate represents a designation contract.
type Designate struct {
interop.ContractMD
NEO *NEO
@ -36,9 +36,9 @@ type Designate struct {
p2pSigExtensionsEnabled bool
OracleService atomic.Value
// NotaryService represents Notary node module.
// NotaryService represents a Notary node module.
NotaryService atomic.Value
// StateRootService represents StateRoot node module.
// StateRootService represents a StateRoot node module.
StateRootService *stateroot.Module
}
@ -64,7 +64,7 @@ const (
// maxNodeCount is the maximum number of nodes to set the role for.
maxNodeCount = 32
// DesignationEventName is the name of a designation event.
// DesignationEventName is the name of the designation event.
DesignationEventName = "Designation"
)
@ -150,12 +150,12 @@ func (s *Designate) InitializeCache(d *dao.Simple) error {
return nil
}
// OnPersist implements Contract interface.
// OnPersist implements the Contract interface.
func (s *Designate) OnPersist(ic *interop.Context) error {
return nil
}
// PostPersist implements Contract interface.
// PostPersist implements the Contract interface.
func (s *Designate) PostPersist(ic *interop.Context) error {
cache := ic.DAO.GetRWCache(s.ID).(*DesignationCache)
if !cache.rolesChangedFlag {
@ -268,7 +268,7 @@ func getCachedRoleData(cache *DesignationCache, r noderoles.Role) *roleData {
return nil
}
// GetLastDesignatedHash returns last designated hash of a given role.
// GetLastDesignatedHash returns the last designated hash of the given role.
func (s *Designate) GetLastDesignatedHash(d *dao.Simple, r noderoles.Role) (util.Uint160, error) {
if !s.isValidRole(r) {
return util.Uint160{}, ErrInvalidRole


@ -10,7 +10,7 @@ import (
"github.com/nspcc-dev/neo-go/pkg/vm/stackitem"
)
// Call calls specified native contract method.
// Call calls the specified native contract method.
func Call(ic *interop.Context) error {
version := ic.VM.Estack().Pop().BigInt().Int64()
if version != 0 {


@ -28,7 +28,7 @@ type Ledger struct {
const ledgerContractID = -4
// newLedger creates new Ledger native contract.
// newLedger creates a new Ledger native contract.
func newLedger() *Ledger {
var l = &Ledger{
ContractMD: *interop.NewContractMD(nativenames.Ledger, ledgerContractID),
@ -77,17 +77,17 @@ func newLedger() *Ledger {
return l
}
// Metadata implements Contract interface.
// Metadata implements the Contract interface.
func (l *Ledger) Metadata() *interop.ContractMD {
return &l.ContractMD
}
// Initialize implements Contract interface.
// Initialize implements the Contract interface.
func (l *Ledger) Initialize(ic *interop.Context) error {
return nil
}
// OnPersist implements Contract interface.
// OnPersist implements the Contract interface.
func (l *Ledger) OnPersist(ic *interop.Context) error {
// Actual block/tx processing is done in Blockchain.storeBlock().
// Even though the C# node adds them to storage here, they're not
@ -96,7 +96,7 @@ func (l *Ledger) OnPersist(ic *interop.Context) error {
return nil
}
// PostPersist implements Contract interface.
// PostPersist implements the Contract interface.
func (l *Ledger) PostPersist(ic *interop.Context) error {
return nil // Actual block/tx processing is done in Blockchain.storeBlock().
}
@ -139,8 +139,8 @@ func (l *Ledger) getTransactionHeight(ic *interop.Context, params []stackitem.It
return stackitem.Make(h)
}
// getTransactionFromBlock returns transaction with the given index from the
// block with height or hash specified.
// getTransactionFromBlock returns a transaction with the given index from the
// block with the height or hash specified.
func (l *Ledger) getTransactionFromBlock(ic *interop.Context, params []stackitem.Item) stackitem.Item {
hash := getBlockHashFromItem(ic, params[0])
index := toUint32(params[1])
@ -177,14 +177,14 @@ func (l *Ledger) getTransactionVMState(ic *interop.Context, params []stackitem.I
}
// isTraceableBlock defines whether we're able to give information about
// the block with index specified.
// the block with the index specified.
func isTraceableBlock(ic *interop.Context, index uint32) bool {
height := ic.BlockHeight()
MaxTraceableBlocks := ic.Chain.GetConfig().MaxTraceableBlocks
return index <= height && index+MaxTraceableBlocks > height
}
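The traceability predicate documented above (a block is addressable only if it is not in the future and lies within the last MaxTraceableBlocks blocks) can be restated as a tiny standalone sketch. The function name and signature are illustrative, not the actual unexported helper:

```go
package main

import "fmt"

// isTraceable reports whether a block with the given index can still be
// queried at the given chain height, per the predicate in the comment above.
func isTraceable(index, height, maxTraceableBlocks uint32) bool {
	return index <= height && index+maxTraceableBlocks > height
}

func main() {
	const mtb = 2102400 // mainnet-like MaxTraceableBlocks value (assumed)
	fmt.Println(isTraceable(100, 1000, mtb))  // true: recent block
	fmt.Println(isTraceable(1001, 1000, mtb)) // false: future block
	fmt.Println(isTraceable(0, 3000000, mtb)) // false: fell out of the window
}
```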
// getBlockHashFromItem converts given stackitem.Item to block hash using given
// getBlockHashFromItem converts the given stackitem.Item to a block hash using the given
// Ledger if needed. Interop functions accept both block numbers and
// block hashes as parameters, thus this function is needed. It's supposed to
// be called within VM context, so it panics if anything goes wrong.
@ -219,7 +219,7 @@ func getUint256FromItem(item stackitem.Item) (util.Uint256, error) {
return hash, nil
}
// getTransactionAndHeight returns transaction and its height if it's present
// getTransactionAndHeight returns a transaction and its height if it's present
// on the chain. It panics if anything goes wrong.
func getTransactionAndHeight(d *dao.Simple, item stackitem.Item) (*transaction.Transaction, uint32, error) {
hash, err := getUint256FromItem(item)


@ -25,7 +25,7 @@ import (
"github.com/nspcc-dev/neo-go/pkg/vm/stackitem"
)
// Management is contract-managing native contract.
// Management is a contract-managing native contract.
type Management struct {
interop.ContractMD
NEO *NEO
@ -84,12 +84,12 @@ func (c *ManagementCache) Copy() dao.NativeContractCache {
return cp
}
// MakeContractKey creates a key from account script hash.
// MakeContractKey creates a key from the account script hash.
func MakeContractKey(h util.Uint160) []byte {
return makeUint160Key(prefixContract, h)
}
// newManagement creates new Management native contract.
// newManagement creates a new Management native contract.
func newManagement() *Management {
var m = &Management{
ContractMD: *interop.NewContractMD(nativenames.Management, ManagementContractID),
@ -168,7 +168,7 @@ func (m *Management) getContract(ic *interop.Context, args []stackitem.Item) sta
return contractToStack(ctr)
}
// GetContract returns contract with given hash from given DAO.
// GetContract returns a contract with the given hash from the given DAO.
func (m *Management) GetContract(d *dao.Simple, hash util.Uint160) (*state.Contract, error) {
cache := d.GetROCache(m.ID).(*ManagementCache)
cs, ok := cache.contracts[hash]
@ -198,7 +198,7 @@ func getLimitedSlice(arg stackitem.Item, max int) ([]byte, error) {
}
// getNefAndManifestFromItems converts input arguments into NEF and manifest
// adding appropriate deployment GAS price and sanitizing inputs.
// adding an appropriate deployment GAS price and sanitizing inputs.
func (m *Management) getNefAndManifestFromItems(ic *interop.Context, args []stackitem.Item, isDeploy bool) (*nef.File, *manifest.Manifest, error) {
nefBytes, err := getLimitedSlice(args[0], math.MaxInt32) // Upper limits are checked during NEF deserialization.
if err != nil {
@ -282,7 +282,7 @@ func (m *Management) markUpdated(d *dao.Simple, hash util.Uint160, cs *state.Con
updateContractCache(cache, cs)
}
// Deploy creates contract's hash/ID and saves new contract into the given DAO.
// Deploy creates a contract's hash/ID and saves a new contract into the given DAO.
// It doesn't run the _deploy method and doesn't emit a notification.
func (m *Management) Deploy(d *dao.Simple, sender util.Uint160, neff *nef.File, manif *manifest.Manifest) (*state.Contract, error) {
h := state.CreateContractHash(sender, neff.Checksum, manif.Name)
@ -390,7 +390,7 @@ func (m *Management) destroy(ic *interop.Context, sis []stackitem.Item) stackite
return stackitem.Null{}
}
// Destroy drops given contract from DAO along with its storage. It doesn't emit notification.
// Destroy drops the given contract from the DAO along with its storage. It doesn't emit a notification.
func (m *Management) Destroy(d *dao.Simple, hash util.Uint160) error {
contract, err := m.GetContract(d, hash)
if err != nil {
@ -448,12 +448,12 @@ func contractToStack(cs *state.Contract) stackitem.Item {
return si
}
// Metadata implements Contract interface.
// Metadata implements the Contract interface.
func (m *Management) Metadata() *interop.ContractMD {
return &m.ContractMD
}
// updateContractCache saves contract in the common and NEP-related caches. It's
// updateContractCache saves the contract in the common and NEP-related caches. It's
// an internal method that must be called with m.mtx lock taken.
func updateContractCache(cache *ManagementCache, cs *state.Contract) {
cache.contracts[cs.Hash] = cs
@ -465,7 +465,7 @@ func updateContractCache(cache *ManagementCache, cs *state.Contract) {
}
}
// OnPersist implements Contract interface.
// OnPersist implements the Contract interface.
func (m *Management) OnPersist(ic *interop.Context) error {
var cache *ManagementCache
for _, native := range ic.Natives {
@ -495,7 +495,7 @@ func (m *Management) OnPersist(ic *interop.Context) error {
}
// InitializeCache initializes the contract cache with the proper values from storage.
// Cache initialisation should be done apart from Initialize because Initialize is
// Cache initialization should be done apart from Initialize because Initialize is
// called only when deploying native contracts.
func (m *Management) InitializeCache(d *dao.Simple) error {
cache := &ManagementCache{
@ -521,7 +521,7 @@ func (m *Management) InitializeCache(d *dao.Simple) error {
return nil
}
// PostPersist implements Contract interface.
// PostPersist implements the Contract interface.
func (m *Management) PostPersist(ic *interop.Context) error {
return nil
}
@ -550,7 +550,7 @@ func (m *Management) GetNEP17Contracts(d *dao.Simple) []util.Uint160 {
return result
}
// Initialize implements Contract interface.
// Initialize implements the Contract interface.
func (m *Management) Initialize(ic *interop.Context) error {
setIntWithKey(m.ID, ic.DAO, keyMinimumDeploymentFee, defaultMinimumDeploymentFee)
setIntWithKey(m.ID, ic.DAO, keyNextAvailableID, 1)


@ -82,7 +82,7 @@ func (g *GAS) balanceFromBytes(si *state.StorageItem) (*big.Int, error) {
return &acc.Balance, err
}
// Initialize initializes GAS contract.
// Initialize initializes a GAS contract.
func (g *GAS) Initialize(ic *interop.Context) error {
if err := g.nep17TokenNative.Initialize(ic); err != nil {
return err
@ -99,7 +99,7 @@ func (g *GAS) Initialize(ic *interop.Context) error {
return nil
}
// OnPersist implements Contract interface.
// OnPersist implements the Contract interface.
func (g *GAS) OnPersist(ic *interop.Context) error {
if len(ic.Block.Transactions) == 0 {
return nil
@ -127,7 +127,7 @@ func (g *GAS) OnPersist(ic *interop.Context) error {
return nil
}
// PostPersist implements Contract interface.
// PostPersist implements the Contract interface.
func (g *GAS) PostPersist(ic *interop.Context) error {
return nil
}


@ -52,13 +52,13 @@ type NeoCache struct {
// committee contains cached committee members and their votes.
// It is updated once in a while depending on committee size
// (every 28 blocks for mainnet). Its value
// is always equal to value stored by `prefixCommittee`.
// is always equal to the value stored by `prefixCommittee`.
committee keysWithVotes
// committeeHash contains script hash of the committee.
// committeeHash contains the script hash of the committee.
committeeHash util.Uint160
// gasPerVoteCache contains last updated value of GAS per vote reward for candidates.
// It is set in state-modifying methods only and read in `PostPersist` thus is not protected
// gasPerVoteCache contains the last updated value of GAS per vote reward for candidates.
// It is set in state-modifying methods only and read in `PostPersist`, thus is not protected
// by any mutex.
gasPerVoteCache map[string]big.Int
}
@ -67,7 +67,7 @@ const (
neoContractID = -5
// NEOTotalSupply is the total amount of NEO in the system.
NEOTotalSupply = 100000000
// DefaultRegisterPrice is default price for candidate register.
// DefaultRegisterPrice is the default price for candidate registration.
DefaultRegisterPrice = 1000 * GASFactor
// prefixCandidate is a prefix used to store validator's data.
prefixCandidate = 33
@ -139,7 +139,7 @@ func copyNeoCache(src, dst *NeoCache) {
}
}
// makeValidatorKey creates a key from account script hash.
// makeValidatorKey creates a key from the account script hash.
func makeValidatorKey(key *keys.PublicKey) []byte {
b := key.Bytes()
// Don't create a new buffer.
@ -228,7 +228,7 @@ func newNEO(cfg config.ProtocolConfiguration) *NEO {
return n
}
// Initialize initializes NEO contract.
// Initialize initializes a NEO contract.
func (n *NEO) Initialize(ic *interop.Context) error {
if err := n.nep17TokenNative.Initialize(ic); err != nil {
return err
@ -276,8 +276,8 @@ func (n *NEO) Initialize(ic *interop.Context) error {
return nil
}
// InitializeCache initializes all NEO cache with the proper values from storage.
// Cache initialisation should be done apart from Initialize because Initialize is
// InitializeCache initializes the NEO cache with the proper values from the storage.
// Cache initialization should be done apart from Initialize because Initialize is
// called only when deploying native contracts.
func (n *NEO) InitializeCache(blockHeight uint32, d *dao.Simple) error {
cache := &NeoCache{
@ -344,7 +344,7 @@ func (n *NEO) updateCommittee(cache *NeoCache, ic *interop.Context) error {
return nil
}
// OnPersist implements Contract interface.
// OnPersist implements the Contract interface.
func (n *NEO) OnPersist(ic *interop.Context) error {
if n.cfg.ShouldUpdateCommitteeAt(ic.Block.Index) {
cache := ic.DAO.GetRWCache(n.ID).(*NeoCache)
@ -361,7 +361,7 @@ func (n *NEO) OnPersist(ic *interop.Context) error {
return nil
}
// PostPersist implements Contract interface.
// PostPersist implements the Contract interface.
func (n *NEO) PostPersist(ic *interop.Context) error {
gas := n.GetGASPerBlock(ic.DAO, ic.Block.Index)
cache := ic.DAO.GetROCache(n.ID).(*NeoCache)


@ -11,7 +11,7 @@ type candidate struct {
Votes big.Int
}
// FromBytes unmarshals candidate from byte array.
// FromBytes unmarshals a candidate from the byte array.
func (c *candidate) FromBytes(data []byte) *candidate {
err := stackitem.DeserializeConvertible(data, c)
if err != nil {

Some files were not shown because too many files have changed in this diff.