Commit graph

26 commits

Author SHA1 Message Date
Roman Khimov
2591c39500 rpcsrv: make websocket client limit configurable 2022-11-23 12:19:49 +03:00
Roman Khimov
3247aa40a7 rpcsrv: allow any Origin in WS connections if EnableCORSWorkaround
Break origin checks even more. Alternative to #2772.
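A minimal sketch of what this amounts to, assuming a gorilla/websocket-style
upgrader (the helper and parameter names below are illustrative, not the actual
rpcsrv code): with the workaround on, the Origin header is simply not checked,
so a page served from any site may open a WS connection to the node.
```
package rpcsrv

import (
	"net/http"

	"github.com/gorilla/websocket"
)

// newUpgrader is a hypothetical helper, not the real server wiring.
func newUpgrader(enableCORSWorkaround bool) websocket.Upgrader {
	u := websocket.Upgrader{}
	if enableCORSWorkaround {
		// Accept any Origin instead of gorilla's default same-host check.
		u.CheckOrigin = func(r *http.Request) bool { return true }
	}
	return u
}
```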
2022-11-09 09:28:09 +03:00
Roman Khimov
c17b2afab5 network: add BroadcastFactor to control gossip, fix #2678 2022-10-14 15:53:32 +03:00
Anna Shaleva
70e59d83c9 docs: fix supported database types 2022-10-07 15:56:34 +03:00
Anna Shaleva
2f5137e9b7 core: allow RO mode for Bolt and Level 2022-10-07 15:56:29 +03:00
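For reference, a hedged sketch of the library-level switches that make read-only
mode possible (go.etcd.io/bbolt and goleveldb options; the real neo-go storage
wiring differs and the function below is made up):
```
package storage

import (
	"time"

	"github.com/syndtr/goleveldb/leveldb"
	"github.com/syndtr/goleveldb/leveldb/opt"
	bolt "go.etcd.io/bbolt"
)

// openReadOnly opens both backends without allowing any writes.
func openReadOnly(boltPath, levelPath string) error {
	bdb, err := bolt.Open(boltPath, 0o600, &bolt.Options{
		ReadOnly: true,        // no write transactions allowed
		Timeout:  time.Second, // don't hang on a locked file
	})
	if err != nil {
		return err
	}
	defer bdb.Close()

	ldb, err := leveldb.OpenFile(levelPath, &opt.Options{
		ReadOnly: true, // writes and compactions are rejected
	})
	if err != nil {
		return err
	}
	return ldb.Close()
}
```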
Anna Shaleva
23795ab6e0 docs: add MaxValidUntilBlockIncrement to config docs 2022-10-03 13:13:20 +03:00
Roman Khimov
e46ec978d3 docs: improve some phrasing, fix spelling 2022-07-15 12:52:21 +03:00
Anna Shaleva
1ae601787d network: allow handling GetMPTData with KeepOnlyLatestState on
And adjust documentation along the way.
2022-07-14 14:33:20 +03:00
Anna Shaleva
445cca114a rpc: restrict the amount of concurrently running iterator sessions 2022-07-08 17:05:18 +03:00
Anna Shaleva
47ffc1f3e8 rpc: restrict default SessionExpirationTime 2022-07-08 17:05:18 +03:00
Anna Shaleva
b5d39a3ffd rpc: add configuration extension for MPT-backed iterator sessions
Add the ability to switch between the current blockchain storage and MPT-backed
storage for the iterator traversal process. It may be useful because the
iterator implementation traverses the underlying backing storage (BoltDB,
LevelDB) inside the DB's Seek, which is a blocking operation for BoltDB:
```
Opening a read transaction and a write transaction in the same goroutine
can cause the writer to deadlock because the database periodically needs
to re-mmap itself as it grows and it cannot do that while a read transaction
is open.

If a long running read transaction (for example, a snapshot transaction)
is needed, you might want to set DB.InitialMmapSize to a large enough
value to avoid potential blocking of write transaction.
```
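Below is a minimal, self-contained sketch of that pattern using go.etcd.io/bbolt
directly (bucket and key names are made up): a long-lived read transaction, which
is what a server-side iterator session effectively keeps, is held open while the
persist loop keeps writing; once the data file needs to grow and re-mmap, the
writer stalls until the reader is done, and a large InitialMmapSize postpones that.
```
package main

import (
	"log"
	"time"

	bolt "go.etcd.io/bbolt"
)

func main() {
	db, err := bolt.Open("chain.bolt", 0o600, &bolt.Options{
		InitialMmapSize: 1 << 30, // pre-mmap 1 GiB to delay re-mmapping under load
	})
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Seed the bucket so the "iterator" below has something to walk over.
	if err := db.Update(func(tx *bolt.Tx) error {
		b, err := tx.CreateBucketIfNotExists([]byte("storage"))
		if err != nil {
			return err
		}
		return b.Put([]byte("prefix-0"), []byte{0})
	}); err != nil {
		log.Fatal(err)
	}

	go func() { // "iterator session": a read tx kept open for a long time
		_ = db.View(func(tx *bolt.Tx) error {
			c := tx.Bucket([]byte("storage")).Cursor()
			for k, _ := c.Seek([]byte("prefix")); k != nil; k, _ = c.Next() {
				time.Sleep(time.Minute) // a client slowly paging through results
			}
			return nil
		})
	}()
	time.Sleep(100 * time.Millisecond) // let the read tx actually open

	for i := 0; i < 1000; i++ { // "persist": write txs that keep growing the file
		start := time.Now()
		err := db.Update(func(tx *bolt.Tx) error {
			return tx.Bucket([]byte("storage")).Put(
				[]byte(time.Now().String()), make([]byte, 1<<20))
		})
		log.Println("persist took", time.Since(start), "err:", err)
	}
}
```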

So during bbolt re-mmapping, standard blockchain DB operations (i.e. persist)
can be blocked until the iterator's resources are released. The described behaviour
was tested and confirmed on a four-node privnet with BoltDB and
`SessionExpirationTime` set to 180 seconds. After a new iterator session
was added to the server, the subsequent persist took ~5m21s, see the log
record at `2022-06-17T18:58:21.561+0300`:

```
anna@kiwi:~/Documents/GitProjects/nspcc-dev/neo-go$ ./bin/neo-go node -p
2022-06-17T18:52:21.535+0300	INFO	initial gas supply is not set or wrong, setting default value	{"InitialGASSupply": "52000000"}
2022-06-17T18:52:21.535+0300	INFO	MaxBlockSize is not set or wrong, setting default value	{"MaxBlockSize": 262144}
2022-06-17T18:52:21.535+0300	INFO	MaxBlockSystemFee is not set or wrong, setting default value	{"MaxBlockSystemFee": 900000000000}
2022-06-17T18:52:21.535+0300	INFO	MaxTransactionsPerBlock is not set or wrong, using default value	{"MaxTransactionsPerBlock": 512}
2022-06-17T18:52:21.535+0300	INFO	MaxValidUntilBlockIncrement is not set or wrong, using default value	{"MaxValidUntilBlockIncrement": 5760}
2022-06-17T18:52:21.535+0300	INFO	Hardforks are not set, using default value
2022-06-17T18:52:21.543+0300	INFO	no storage version found! creating genesis block
2022-06-17T18:52:21.546+0300	INFO	ExtensiblePoolSize is not set or wrong, using default value	{"ExtensiblePoolSize": 20}
2022-06-17T18:52:21.546+0300	INFO	service is running	{"service": "Prometheus", "endpoint": ":2112"}
2022-06-17T18:52:21.547+0300	INFO	starting rpc-server	{"endpoint": ":20331"}
2022-06-17T18:52:21.547+0300	INFO	rpc-server iterator sessions are enabled
2022-06-17T18:52:21.547+0300	INFO	service hasn't started since it's disabled	{"service": "Pprof"}
2022-06-17T18:52:21.547+0300	INFO	node started	{"blockHeight": 0, "headerHeight": 0}

    _   ____________        __________
   / | / / ____/ __ \      / ____/ __ \
  /  |/ / __/ / / / /_____/ / __/ / / /
 / /|  / /___/ /_/ /_____/ /_/ / /_/ /
/_/ |_/_____/\____/      \____/\____/

/NEO-GO:0.99.1-pre-53-g7ccb646e/

2022-06-17T18:52:21.548+0300	INFO	new peer connected	{"addr": "127.0.0.1:20336", "peerCount": 1}
2022-06-17T18:52:21.550+0300	INFO	started protocol	{"addr": "127.0.0.1:20336", "userAgent": "/NEO-GO:0.99.1-pre-53-g7ccb646e/", "startHeight": 65, "id": 1475228436}
2022-06-17T18:52:22.575+0300	INFO	persisted to disk	{"blocks": 65, "keys": 1410, "headerHeight": 65, "blockHeight": 65, "took": "28.193409ms"}
2022-06-17T18:52:24.548+0300	INFO	new peer connected	{"addr": "127.0.0.1:20333", "peerCount": 2}
2022-06-17T18:52:24.548+0300	INFO	new peer connected	{"addr": "127.0.0.1:20336", "peerCount": 3}
2022-06-17T18:52:24.548+0300	INFO	new peer connected	{"addr": "127.0.0.1:20334", "peerCount": 4}
2022-06-17T18:52:24.549+0300	INFO	new peer connected	{"addr": "127.0.0.1:20335", "peerCount": 5}
2022-06-17T18:52:24.549+0300	INFO	new peer connected	{"addr": "127.0.0.1:20335", "peerCount": 6}
2022-06-17T18:52:24.549+0300	INFO	started protocol	{"addr": "127.0.0.1:20333", "userAgent": "/NEO-GO:0.99.1-pre-53-g7ccb646e/", "startHeight": 65, "id": 3444438498}
2022-06-17T18:52:24.549+0300	INFO	new peer connected	{"addr": "127.0.0.1:20334", "peerCount": 7}
2022-06-17T18:52:24.549+0300	INFO	new peer connected	{"addr": "127.0.0.1:20333", "peerCount": 8}
2022-06-17T18:52:24.550+0300	INFO	node reached synchronized state, starting services
2022-06-17T18:52:24.550+0300	INFO	started protocol	{"addr": "127.0.0.1:20334", "userAgent": "/NEO-GO:0.99.1-pre-53-g7ccb646e/", "startHeight": 65, "id": 2435677826}
2022-06-17T18:52:24.550+0300	INFO	starting state validation service
2022-06-17T18:52:24.550+0300	INFO	RPC server already started
2022-06-17T18:52:24.550+0300	INFO	new peer connected	{"addr": "127.0.0.1:20335", "peerCount": 9}
2022-06-17T18:52:24.550+0300	INFO	new peer connected	{"addr": "127.0.0.1:20335", "peerCount": 10}
2022-06-17T18:52:24.550+0300	WARN	peer disconnected	{"addr": "127.0.0.1:20334", "error": "already connected", "peerCount": 9}
2022-06-17T18:52:24.550+0300	WARN	peer disconnected	{"addr": "127.0.0.1:20336", "error": "already connected", "peerCount": 8}
2022-06-17T18:52:24.550+0300	WARN	peer disconnected	{"addr": "127.0.0.1:20333", "error": "already connected", "peerCount": 7}
2022-06-17T18:52:24.550+0300	WARN	peer disconnected	{"addr": "127.0.0.1:20335", "error": "unexpected empty payload: CMDVersion", "peerCount": 6}
2022-06-17T18:52:24.550+0300	INFO	started protocol	{"addr": "127.0.0.1:20335", "userAgent": "/NEO-GO:0.99.1-pre-53-g7ccb646e/", "startHeight": 65, "id": 970555896}
2022-06-17T18:52:24.551+0300	INFO	new peer connected	{"addr": "127.0.0.1:20334", "peerCount": 7}
2022-06-17T18:52:24.551+0300	WARN	peer disconnected	{"addr": "127.0.0.1:20335", "error": "unexpected empty payload: CMDVersion", "peerCount": 6}
2022-06-17T18:52:24.551+0300	WARN	peer disconnected	{"addr": "127.0.0.1:20335", "error": "unexpected empty payload: CMDVersion", "peerCount": 5}
2022-06-17T18:52:24.551+0300	WARN	peer disconnected	{"addr": "127.0.0.1:20334", "error": "already connected", "peerCount": 4}
2022-06-17T18:52:29.564+0300	INFO	persisted to disk	{"blocks": 1, "keys": 19, "headerHeight": 66, "blockHeight": 66, "took": "12.51808ms"}
2022-06-17T18:52:44.558+0300	INFO	persisted to disk	{"blocks": 1, "keys": 19, "headerHeight": 67, "blockHeight": 67, "took": "1.563137ms"}
2022-06-17T18:55:21.549+0300	WARN	peer disconnected	{"addr": "127.0.0.1:20335", "error": "ping/pong timeout", "peerCount": 3}
2022-06-17T18:55:21.550+0300	WARN	peer disconnected	{"addr": "127.0.0.1:20333", "error": "ping/pong timeout", "peerCount": 2}
2022-06-17T18:55:21.550+0300	WARN	peer disconnected	{"addr": "127.0.0.1:20334", "error": "ping/pong timeout", "peerCount": 1}
2022-06-17T18:55:21.550+0300	WARN	peer disconnected	{"addr": "127.0.0.1:20336", "error": "ping/pong timeout", "peerCount": 0}
2022-06-17T18:55:21.553+0300	INFO	new peer connected	{"addr": "127.0.0.1:20335", "peerCount": 1}
2022-06-17T18:55:21.554+0300	INFO	started protocol	{"addr": "127.0.0.1:20335", "userAgent": "/NEO-GO:0.99.1-pre-53-g7ccb646e/", "startHeight": 77, "id": 970555896}
2022-06-17T18:55:24.554+0300	INFO	new peer connected	{"addr": "172.200.0.4:20333", "peerCount": 2}
2022-06-17T18:55:24.555+0300	INFO	new peer connected	{"addr": "172.200.0.3:20334", "peerCount": 3}
2022-06-17T18:55:24.555+0300	INFO	new peer connected	{"addr": "10.78.13.84:59876", "peerCount": 4}
2022-06-17T18:55:24.555+0300	INFO	new peer connected	{"addr": "127.0.0.1:20335", "peerCount": 5}
2022-06-17T18:55:24.556+0300	INFO	new peer connected	{"addr": "172.200.0.254:20332", "peerCount": 6}
2022-06-17T18:55:24.556+0300	INFO	new peer connected	{"addr": "127.0.0.1:20336", "peerCount": 7}
2022-06-17T18:55:24.556+0300	INFO	started protocol	{"addr": "172.200.0.4:20333", "userAgent": "/NEO-GO:0.99.1-pre-53-g7ccb646e/", "startHeight": 76, "id": 3444438498}
2022-06-17T18:55:24.556+0300	INFO	new peer connected	{"addr": "172.200.0.1:20335", "peerCount": 8}
2022-06-17T18:55:24.558+0300	INFO	started protocol	{"addr": "127.0.0.1:20336", "userAgent": "/NEO-GO:0.99.1-pre-53-g7ccb646e/", "startHeight": 77, "id": 1475228436}
2022-06-17T18:55:24.559+0300	INFO	new peer connected	{"addr": "127.0.0.1:20334", "peerCount": 9}
2022-06-17T18:55:24.558+0300	INFO	started protocol	{"addr": "172.200.0.3:20334", "userAgent": "/NEO-GO:0.99.1-pre-53-g7ccb646e/", "startHeight": 77, "id": 2435677826}
2022-06-17T18:55:24.559+0300	INFO	new peer connected	{"addr": "127.0.0.1:20336", "peerCount": 10}
2022-06-17T18:55:24.559+0300	WARN	peer disconnected	{"addr": "172.200.0.1:20335", "error": "unexpected empty payload: CMDVersion", "peerCount": 9}
2022-06-17T18:55:24.559+0300	INFO	new peer connected	{"addr": "127.0.0.1:20333", "peerCount": 10}
2022-06-17T18:55:24.560+0300	INFO	new peer connected	{"addr": "172.200.0.2:20336", "peerCount": 11}
2022-06-17T18:55:24.560+0300	WARN	peer disconnected	{"addr": "172.200.0.254:20332", "error": "identical node id", "peerCount": 10}
2022-06-17T18:55:24.561+0300	WARN	peer disconnected	{"addr": "127.0.0.1:20335", "error": "already connected", "peerCount": 9}
2022-06-17T18:55:24.561+0300	INFO	new peer connected	{"addr": "127.0.0.1:20334", "peerCount": 10}
2022-06-17T18:55:24.561+0300	WARN	peer disconnected	{"addr": "10.78.13.84:59876", "error": "unexpected empty payload: CMDVersion", "peerCount": 9}
2022-06-17T18:55:24.561+0300	WARN	peer disconnected	{"addr": "127.0.0.1:20336", "error": "already connected", "peerCount": 8}
2022-06-17T18:55:24.561+0300	INFO	new peer connected	{"addr": "127.0.0.1:20335", "peerCount": 9}
2022-06-17T18:55:24.561+0300	WARN	peer disconnected	{"addr": "127.0.0.1:20333", "error": "unexpected empty payload: CMDVersion", "peerCount": 8}
2022-06-17T18:55:24.561+0300	INFO	new peer connected	{"addr": "127.0.0.1:20333", "peerCount": 9}
2022-06-17T18:55:24.561+0300	WARN	peer disconnected	{"addr": "127.0.0.1:20334", "error": "unexpected empty payload: CMDVersion", "peerCount": 8}
2022-06-17T18:55:24.561+0300	WARN	peer disconnected	{"addr": "172.200.0.2:20336", "error": "unexpected empty payload: CMDVersion", "peerCount": 7}
2022-06-17T18:55:24.561+0300	INFO	new peer connected	{"addr": "127.0.0.1:20336", "peerCount": 8}
2022-06-17T18:55:24.561+0300	INFO	new peer connected	{"addr": "127.0.0.1:20333", "peerCount": 9}
2022-06-17T18:55:24.561+0300	WARN	peer disconnected	{"addr": "127.0.0.1:20336", "error": "already connected", "peerCount": 8}
2022-06-17T18:55:24.561+0300	INFO	new peer connected	{"addr": "127.0.0.1:20336", "peerCount": 9}
2022-06-17T18:55:24.561+0300	INFO	new peer connected	{"addr": "127.0.0.1:20334", "peerCount": 10}
2022-06-17T18:55:24.561+0300	INFO	new peer connected	{"addr": "127.0.0.1:20334", "peerCount": 11}
2022-06-17T18:55:24.561+0300	INFO	new peer connected	{"addr": "127.0.0.1:20333", "peerCount": 12}
2022-06-17T18:55:24.562+0300	WARN	peer disconnected	{"addr": "127.0.0.1:20335", "error": "already connected", "peerCount": 11}
2022-06-17T18:55:24.562+0300	INFO	new peer connected	{"addr": "127.0.0.1:20333", "peerCount": 12}
2022-06-17T18:55:24.562+0300	INFO	new peer connected	{"addr": "127.0.0.1:20334", "peerCount": 13}
2022-06-17T18:55:24.562+0300	WARN	peer disconnected	{"addr": "127.0.0.1:20333", "error": "already connected", "peerCount": 12}
2022-06-17T18:55:24.562+0300	WARN	peer disconnected	{"addr": "127.0.0.1:20336", "error": "already connected", "peerCount": 11}
2022-06-17T18:55:24.562+0300	WARN	peer disconnected	{"addr": "127.0.0.1:20334", "error": "already connected", "peerCount": 10}
2022-06-17T18:55:24.562+0300	WARN	peer disconnected	{"addr": "127.0.0.1:20333", "error": "unexpected empty payload: CMDVersion", "peerCount": 9}
2022-06-17T18:55:24.563+0300	INFO	new peer connected	{"addr": "127.0.0.1:20336", "peerCount": 10}
2022-06-17T18:55:24.563+0300	WARN	peer disconnected	{"addr": "127.0.0.1:20334", "error": "already connected", "peerCount": 9}
2022-06-17T18:55:24.563+0300	WARN	peer disconnected	{"addr": "127.0.0.1:20334", "error": "unexpected empty payload: CMDVersion", "peerCount": 8}
2022-06-17T18:55:24.563+0300	WARN	peer disconnected	{"addr": "127.0.0.1:20333", "error": "already connected", "peerCount": 7}
2022-06-17T18:55:24.563+0300	WARN	peer disconnected	{"addr": "127.0.0.1:20335", "error": "max peers reached", "peerCount": 6}
2022-06-17T18:55:24.563+0300	WARN	peer disconnected	{"addr": "127.0.0.1:20333", "error": "already connected", "peerCount": 5}
2022-06-17T18:55:24.563+0300	WARN	peer disconnected	{"addr": "127.0.0.1:20334", "error": "max peers reached", "peerCount": 4}
2022-06-17T18:55:24.563+0300	WARN	peer disconnected	{"addr": "127.0.0.1:20336", "error": "already connected", "peerCount": 3}
2022-06-17T18:57:21.551+0300	WARN	peer disconnected	{"addr": "172.200.0.4:20333", "error": "ping/pong timeout", "peerCount": 2}
2022-06-17T18:57:21.552+0300	WARN	peer disconnected	{"addr": "172.200.0.3:20334", "error": "ping/pong timeout", "peerCount": 1}
2022-06-17T18:57:21.552+0300	WARN	peer disconnected	{"addr": "127.0.0.1:20336", "error": "ping/pong timeout", "peerCount": 0}
2022-06-17T18:57:21.553+0300	INFO	new peer connected	{"addr": "172.200.0.4:20333", "peerCount": 1}
2022-06-17T18:57:21.554+0300	INFO	new peer connected	{"addr": "10.78.13.84:20332", "peerCount": 2}
2022-06-17T18:57:21.555+0300	INFO	started protocol	{"addr": "172.200.0.4:20333", "userAgent": "/NEO-GO:0.99.1-pre-53-g7ccb646e/", "startHeight": 82, "id": 3444438498}
2022-06-17T18:57:21.556+0300	INFO	new peer connected	{"addr": "127.0.0.1:20334", "peerCount": 3}
2022-06-17T18:57:21.556+0300	INFO	new peer connected	{"addr": "10.78.13.84:46076", "peerCount": 4}
2022-06-17T18:57:21.556+0300	INFO	new peer connected	{"addr": "172.200.0.1:20335", "peerCount": 5}
2022-06-17T18:57:21.556+0300	INFO	new peer connected	{"addr": "172.200.0.254:20332", "peerCount": 6}
2022-06-17T18:57:21.556+0300	INFO	new peer connected	{"addr": "10.78.13.84:59972", "peerCount": 7}
2022-06-17T18:57:21.557+0300	INFO	new peer connected	{"addr": "127.0.0.1:20333", "peerCount": 8}
2022-06-17T18:57:21.557+0300	INFO	new peer connected	{"addr": "127.0.0.1:20335", "peerCount": 9}
2022-06-17T18:57:21.557+0300	INFO	new peer connected	{"addr": "172.200.0.2:20336", "peerCount": 10}
2022-06-17T18:57:21.557+0300	INFO	new peer connected	{"addr": "127.0.0.1:20333", "peerCount": 11}
2022-06-17T18:57:21.557+0300	INFO	new peer connected	{"addr": "127.0.0.1:20334", "peerCount": 12}
2022-06-17T18:57:21.557+0300	INFO	new peer connected	{"addr": "172.200.0.3:20334", "peerCount": 13}
2022-06-17T18:57:21.557+0300	INFO	new peer connected	{"addr": "127.0.0.1:20336", "peerCount": 14}
2022-06-17T18:57:21.557+0300	INFO	started protocol	{"addr": "127.0.0.1:20334", "userAgent": "/NEO-GO:0.99.1-pre-53-g7ccb646e/", "startHeight": 82, "id": 2435677826}
2022-06-17T18:57:21.557+0300	WARN	peer disconnected	{"addr": "172.200.0.2:20336", "error": "max peers reached", "peerCount": 13}
2022-06-17T18:57:21.557+0300	INFO	new peer connected	{"addr": "127.0.0.1:20335", "peerCount": 14}
2022-06-17T18:57:21.558+0300	INFO	started protocol	{"addr": "172.200.0.1:20335", "userAgent": "/NEO-GO:0.99.1-pre-53-g7ccb646e/", "startHeight": 82, "id": 970555896}
2022-06-17T18:57:21.558+0300	WARN	peer disconnected	{"addr": "172.200.0.254:20332", "error": "identical node id", "peerCount": 13}
2022-06-17T18:57:21.558+0300	INFO	new peer connected	{"addr": "127.0.0.1:20334", "peerCount": 14}
2022-06-17T18:57:21.558+0300	WARN	peer disconnected	{"addr": "127.0.0.1:20335", "error": "max peers reached", "peerCount": 13}
2022-06-17T18:57:21.558+0300	WARN	peer disconnected	{"addr": "10.78.13.84:46076", "error": "identical node id", "peerCount": 12}
2022-06-17T18:57:21.558+0300	INFO	new peer connected	{"addr": "127.0.0.1:20333", "peerCount": 13}
2022-06-17T18:57:21.558+0300	INFO	new peer connected	{"addr": "127.0.0.1:20335", "peerCount": 14}
2022-06-17T18:57:21.558+0300	INFO	new peer connected	{"addr": "127.0.0.1:20336", "peerCount": 15}
2022-06-17T18:57:21.558+0300	WARN	peer disconnected	{"addr": "10.78.13.84:59972", "error": "identical node id", "peerCount": 14}
2022-06-17T18:57:21.558+0300	WARN	peer disconnected	{"addr": "127.0.0.1:20334", "error": "already connected", "peerCount": 13}
2022-06-17T18:57:21.559+0300	WARN	peer disconnected	{"addr": "10.78.13.84:20332", "error": "unexpected empty payload: CMDVersion", "peerCount": 12}
2022-06-17T18:57:21.559+0300	WARN	peer disconnected	{"addr": "127.0.0.1:20333", "error": "already connected", "peerCount": 11}
2022-06-17T18:57:21.559+0300	WARN	peer disconnected	{"addr": "172.200.0.3:20334", "error": "unexpected empty payload: CMDVersion", "peerCount": 10}
2022-06-17T18:57:21.559+0300	WARN	peer disconnected	{"addr": "127.0.0.1:20335", "error": "unexpected empty payload: CMDVersion", "peerCount": 9}
2022-06-17T18:57:21.559+0300	WARN	peer disconnected	{"addr": "127.0.0.1:20334", "error": "already connected", "peerCount": 8}
2022-06-17T18:57:21.559+0300	WARN	peer disconnected	{"addr": "127.0.0.1:20333", "error": "unexpected empty payload: CMDVersion", "peerCount": 7}
2022-06-17T18:57:21.559+0300	INFO	started protocol	{"addr": "127.0.0.1:20336", "userAgent": "/NEO-GO:0.99.1-pre-53-g7ccb646e/", "startHeight": 82, "id": 1475228436}
2022-06-17T18:57:21.559+0300	WARN	peer disconnected	{"addr": "127.0.0.1:20333", "error": "already connected", "peerCount": 6}
2022-06-17T18:57:21.559+0300	WARN	peer disconnected	{"addr": "127.0.0.1:20335", "error": "already connected", "peerCount": 5}
2022-06-17T18:57:21.559+0300	WARN	peer disconnected	{"addr": "127.0.0.1:20336", "error": "already connected", "peerCount": 4}
2022-06-17T18:58:21.561+0300	INFO	persisted to disk	{"blocks": 1, "keys": 20, "headerHeight": 68, "blockHeight": 68, "took": "5m21.993873018s"}
2022-06-17T18:58:21.563+0300	INFO	persisted to disk	{"blocks": 8, "keys": 111, "headerHeight": 76, "blockHeight": 76, "took": "2.243347ms"}
2022-06-17T18:58:22.567+0300	INFO	persisted to disk	{"blocks": 10, "keys": 135, "headerHeight": 86, "blockHeight": 86, "took": "5.637669ms"}
2022-06-17T18:58:25.565+0300	INFO	persisted to disk	{"blocks": 1, "keys": 19, "headerHeight": 87, "blockHeight": 87, "took": "1.879912ms"}
2022-06-17T18:58:40.572+0300	INFO	persisted to disk	{"blocks": 1, "keys": 20, "headerHeight": 88, "blockHeight": 88, "took": "1.560317ms"}
2022-06-17T18:58:55.579+0300	INFO	persisted to disk	{"blocks": 1, "keys": 19, "headerHeight": 89, "blockHeight": 89, "took": "1.925225ms"}
2022-06-17T18:59:10.587+0300	INFO	persisted to disk	{"blocks": 1, "keys": 19, "headerHeight": 90, "blockHeight": 90, "took": "3.118073ms"}
2022-06-17T18:59:25.592+0300	INFO	persisted to disk	{"blocks": 1, "keys": 19, "headerHeight": 91, "blockHeight": 91, "took": "1.607248ms"}
2022-06-17T18:59:40.600+0300	INFO	persisted to disk	{"blocks": 1, "keys": 20, "headerHeight": 92, "blockHeight": 92, "took": "931.806µs"}
2022-06-17T18:59:55.610+0300	INFO	persisted to disk	{"blocks": 1, "keys": 19, "headerHeight": 93, "blockHeight": 93, "took": "2.019041ms"}

```
2022-07-08 17:05:18 +03:00
Anna Shaleva
cbd20eb959 rpc: implement iterator sessions 2022-07-08 17:05:18 +03:00
Roman Khimov
a15a9577f0 docs: fix wrong default address value mentioned
We're listening on all addresses by default.
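A tiny illustration of the difference being documented: an address without a host
part binds to all interfaces (the default), while a concrete host restricts the
listener. The port below is just an example.
```
package main

import (
	"log"
	"net"
)

func main() {
	// ":20331" means all addresses; "127.0.0.1:20331" would be loopback only.
	ln, err := net.Listen("tcp", ":20331")
	if err != nil {
		log.Fatal(err)
	}
	defer ln.Close()
	log.Println("listening on", ln.Addr())
}
```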
2022-06-30 15:41:44 +03:00
Roman Khimov
b6829f36fd config: s/HF_Aspidochelone/Aspidochelone/
HF_ prefix makes zero sense to me. If it's "hardfork", then it's in the
"Hardforks" section already. If it's "hotfix", then it made some sense back
when it was HF_2712_FixSyscallFees, but now it's codenamed anyway. So we can
drop it and have a cleaner config.
2022-06-03 11:53:18 +03:00
Anna Shaleva
8055952bbc core: rename hardfork HF_2712_FixSyscallFees
Fantastic Beasts and Where to Find Them
2022-05-26 14:20:48 +03:00
Anna Shaleva
4d4f616b54 docs: add Hardforks configuration section 2022-05-12 13:14:28 +03:00
Elizaveta Chichindaeva
28908aa3cf [#2442] English Check
Signed-off-by: Elizaveta Chichindaeva <elizaveta@nspcc.ru>
2022-05-04 19:48:27 +03:00
Roman Khimov
887fe0634d rpc: add StartWhenSynchronized option, fix #2433 2022-04-26 00:31:48 +03:00
Roman Khimov
373fce54e6 config: conflict P2PStateExchangeExtensions/KeepOnlyLatestState
They don't make sense together: for P2P state exchange to be possible, we need
a set of MPTs.
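A hedged sketch of the kind of check this introduces; the struct is trimmed to
the two options in question and the error text is illustrative, not the actual
neo-go code:
```
package config

import "errors"

// ProtocolConfiguration is reduced to the two flags discussed above.
type ProtocolConfiguration struct {
	KeepOnlyLatestState        bool
	P2PStateExchangeExtensions bool
}

// Validate rejects the combination: state exchange needs historical MPTs,
// while KeepOnlyLatestState drops them.
func (p ProtocolConfiguration) Validate() error {
	if p.P2PStateExchangeExtensions && p.KeepOnlyLatestState {
		return errors.New("P2PStateExchangeExtensions can't be enabled together with KeepOnlyLatestState")
	}
	return nil
}
```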
2022-02-11 14:19:54 +03:00
Roman Khimov
423c7883b8 core: implement basic GC for value-based storage scheme
The key idea here is that even though we can't ensure the MPT code won't make the
node active again, we can order the changes made to the persistent store in
such a way that it practically doesn't matter. What happens is:
 * after persist, if it's time to collect our garbage, we do it synchronously
   right in the same thread, working with the underlying persistent store directly
 * all the other node code doesn't see much of it; it works with bc.dao or
   layers above it
 * if MPT doesn't find some stale deactivated node in the storage, it's OK,
   it'll recreate it in bc.dao
 * if MPT finds it and activates it, it's OK too, bc.dao will store it
 * while GC is being performed, nothing else changes the persistent store
 * all subsequent bc.dao persists only happen after the GC is completed, which
   means that any changes to the (potentially) deleted nodes have priority;
   it's OK for GC to delete something that'll be recreated with the next
   persist cycle

Otherwise it's a simple scheme with node status/last active height stored in
the value. Preliminary tests show that it works ~18% worse than the simple
KeepOnlyLatest scheme, but this seems to be the best result so far.
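A toy, self-contained sketch of that ordering with invented types (the real code
works with MPT node records and the DAO layers, not in-memory maps): persist goes
first, then GC runs synchronously in the same thread against the lowest-level
store, so nothing can interleave with it.
```
package main

import "fmt"

// record keeps a value together with the height at which it was last active.
type record struct {
	value      []byte
	lastActive uint32
}

type chain struct {
	pending map[string]record // the "bc.dao" layer: changes waiting to be persisted
	store   map[string]record // the underlying persistent store
}

func (c *chain) persist() {
	for k, r := range c.pending {
		c.store[k] = r
	}
	c.pending = map[string]record{}
}

// persistAndCollect mirrors the described scheme: persist first, then, if it's
// time, collect garbage before anything else touches the store. Deleting a node
// that's still needed is harmless, the next persist cycle recreates it.
func (c *chain) persistAndCollect(height, gcPeriod, retain uint32) {
	c.persist()
	if gcPeriod == 0 || height%gcPeriod != 0 {
		return
	}
	for k, r := range c.store {
		if r.lastActive+retain < height {
			delete(c.store, k)
		}
	}
}

func main() {
	c := &chain{pending: map[string]record{}, store: map[string]record{}}
	c.pending["mpt:abc"] = record{value: []byte{1}, lastActive: 10}
	c.persistAndCollect(100, 10, 50) // the stale node is collected at height 100
	fmt.Println(len(c.store))        // 0
}
```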

Fixes #2095.
2022-02-11 14:19:54 +03:00
Roman Khimov
e621f746a7 config/core: allow changing the number of validators
Fixes #2320.
2022-01-31 23:14:38 +03:00
Roman Khimov
7f48653e66 rpc: add server-side NEP-11 tracking API 2021-11-19 12:58:46 +03:00
Roman Khimov
1144a03486 storage: drop RedisDB, close #2130 2021-10-27 17:32:25 +03:00
Roman Khimov
fb4b87bb96 storage: drop BadgerDB support, close #2130 2021-10-27 17:31:55 +03:00
Anna Shaleva
43ac4e1517 rpc: implement findstates RPC handler 2021-10-13 11:41:05 +03:00
Anna Shaleva
cbc75afd4d docs: refactor documentation
CLI:
* Typos are fixed
* Documentation on NEP-11 tokens is added
* NeoGo node configuration is moved to a separate file

Compiler:
* Typos and indentations are fixed
* Ops dump example is updated

Consensus:
* Typos are fixed
* Links are fixed

Notifications:
* Minor adjustments

RPC:
* `getversion` response is updated
* `getunclaimedgas` comment is removed (not valid since
https://github.com/neo-project/neo-modules/pull/243)

VM:
* Update help message
* `load*` command adjustments
* `astack` command removal
2021-09-08 17:52:46 +03:00