Behaviour change.
`terminatesession` returns ErrUnknownSession when the session can't be
found; previously there was a no-error response with a `false` result.
`traverseIterator` returns ErrUnknownSession when the session can't be found
and ErrUnknownIterator when the iterator can't be found; previously there
was a no-error response with a default result, and neither of these errors
existed.
According to the proposal:
https://github.com/neo-project/proposals/pull/156
Also adding a description of `traverseIterator` to docs/rpc.md.
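To illustrate the difference, a minimal server-side sketch (simplified,
not the actual neo-go handler code; names are illustrative):

    package main

    import (
        "errors"
        "fmt"
    )

    var ErrUnknownSession = errors.New("unknown session")

    type sessions map[string]struct{}

    // terminateSession reports a missing session as an error now,
    // instead of returning a successful response with a `false` result.
    func (s sessions) terminateSession(id string) (bool, error) {
        if _, ok := s[id]; !ok {
            return false, ErrUnknownSession // was: return false, nil
        }
        delete(s, id)
        return true, nil
    }

    func main() {
        _, err := sessions{}.terminateSession("missing")
        fmt.Println(err) // unknown session
    }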
Signed-off-by: Tatiana Nesterenko <tatiana@nspcc.io>
This change makes the code incompatible with the C# node,
which currently returns no error on an invalid proof.
According to the proposal:
https://github.com/neo-project/proposals/pull/156
Also adding a `verifyProof` description to docs/rpc.md.
Signed-off-by: Tatiana Nesterenko <tatiana@nspcc.io>
And simplify the atomic service value stored by the RPC server. The Oracle
service can either be an untyped nil or a proper non-nil *oracle.Oracle.
Otherwise the `submitoracleresponse` RPC handler doesn't work properly.
Signed-off-by: Anna Shaleva <shaleva.ann@nspcc.ru>
Ensure that Scopes can be properly parsed not only from the string
representation, but also from a single byte. transaction.Signer
is not affected (checked against the C# implementation), only
RPC-related signer scopes are allowed to be unmarshalled from a byte.
Close #3059.
Signed-off-by: Anna Shaleva <shaleva.ann@nspcc.ru>
go.uber.org/atomic deprecated CAS methods in version 1.10 (that introduced
CompareAndSwap), so we need to fix it.
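A minimal before/after sketch of the rename (go.uber.org/atomic v1.10+
provides CompareAndSwap mirroring the sync/atomic naming):

    package main

    import (
        "fmt"

        "go.uber.org/atomic"
    )

    func main() {
        v := atomic.NewInt64(1)
        // Deprecated since go.uber.org/atomic v1.10:
        //   v.CAS(1, 2)
        // The replacement:
        swapped := v.CompareAndSwap(1, 2)
        fmt.Println(swapped, v.Load()) // true 2
    }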
Signed-off-by: Roman Khimov <roman@nspcc.ru>
Previously the RPC server shutdown procedure listened to the execution
channel and stopped at the first element that arrived in the queue. This
could lead to the following problems:
* the stopper could steal an execution result from a subscriber
* the stopper didn't wait for other subscription actions to complete
Add a dedicated channel to `Server` for the subscription routine. Close the
channel on `handleSubEvents` return and wait for its signal in `Shutdown`.
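A sketch of the pattern with hypothetical names (the routine owns a
completion channel which it closes on return; `Shutdown` waits on that
channel instead of consuming queue elements):

    package main

    type Server struct {
        subEvents     chan int      // queue consumed by the subscription routine
        subEventsDone chan struct{} // closed when handleSubEvents returns
    }

    func (s *Server) handleSubEvents() {
        defer close(s.subEventsDone) // signal completion to Shutdown
        for range s.subEvents {
            // dispatch the event to subscribers
        }
    }

    func (s *Server) Shutdown() {
        close(s.subEvents) // let the routine drain the queue and return
        <-s.subEventsDone  // wait without stealing any execution results
    }

    func main() {
        s := &Server{subEvents: make(chan int), subEventsDone: make(chan struct{})}
        go s.handleSubEvents()
        s.Shutdown()
    }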
Signed-off-by: Leonard Lyubich <leonard@morphbits.io>
Previously the RPC server could never be shut down completely after some
start precondition failure (in particular, an inability to serve HTTP
on any configured endpoint). The problem was caused by the following facts:
* the start method ran the subscription routine only after HTTP init succeeded
* the stop method blocked waiting for the subscription routine to return
Run the `handleSubEvents` routine unconditionally on fresh `Start`. With
this change, the `Shutdown` method won't deadlock since `handleSubEvents`
closes the wait channel.
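Continuing the hypothetical sketch above, the ordering fix looks roughly
like this: the completion channel is armed before any fallible init, so
`Shutdown` can always wait on it:

    package main

    import (
        "errors"
        "fmt"
    )

    type Server struct {
        subEvents     chan int
        subEventsDone chan struct{}
    }

    func (s *Server) handleSubEvents() {
        defer close(s.subEventsDone)
        for range s.subEvents {
        }
    }

    func (s *Server) Start() error {
        go s.handleSubEvents() // unconditionally, before any listener init
        if err := listenHTTP(); err != nil {
            return err // Shutdown won't deadlock even after this failure
        }
        return nil
    }

    func (s *Server) Shutdown() {
        close(s.subEvents)
        <-s.subEventsDone
    }

    func listenHTTP() error { return errors.New("can't serve HTTP") }

    func main() {
        s := &Server{subEvents: make(chan int), subEventsDone: make(chan struct{})}
        fmt.Println(s.Start()) // fails
        s.Shutdown()           // still returns
    }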
Refs #2896.
Signed-off-by: Leonard Lyubich <leonard@morphbits.io>
There is an existing problem with RPC server shutdown freezing after a start
failure caused by some init actions (at least HTTP listen), described in
#2896.
Add a dedicated unit test which checks that `Shutdown` returns within 5s
after the `Start` method encounters internal problems.
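The shape of such a test, sketched against a toy server rather than the
real rpcsrv.Server:

    package rpcsrv

    import (
        "errors"
        "testing"
        "time"
    )

    type failingServer struct{ done chan struct{} }

    func (s *failingServer) Start() error {
        go func() { close(s.done) }() // subscription routine still signals completion
        return errors.New("HTTP listen failure")
    }

    func (s *failingServer) Shutdown() { <-s.done }

    func TestShutdownAfterFailedStart(t *testing.T) {
        s := &failingServer{done: make(chan struct{})}
        _ = s.Start() // fails on a start precondition

        finished := make(chan struct{})
        go func() {
            s.Shutdown()
            close(finished)
        }()
        select {
        case <-finished:
        case <-time.After(5 * time.Second):
            t.Fatal("Shutdown hasn't returned within 5s after failed Start")
        }
    }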
Signed-off-by: Leonard Lyubich <leonard@morphbits.io>
Everywhere, including examples, external interop APIs, bindings generator
code and other valuable places. A couple of `interface{}` usages are
intentionally left in CHANGELOG.md, documentation and tests.
It has a special `requestF` and a special initialization function, but other
than that it's an absolutely regular WSClient. Can be used to call, can be
used to subscribe. Fixes #2909.
According to the docs, `Server` uses the provided error channel only to
write encountered errors to it. In this case, there is no need to accept a
read-write channel to create a `Server` instance. Strengthening the type to
write-only allows the caller to keep control of reading errors from the
provided channel.
The change is backward compatible since any `chan` is `chan<-`.
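A minimal illustration of that compatibility claim (hypothetical
constructor; the point is the implicit conversion at the call site):

    package main

    import (
        "errors"
        "fmt"
    )

    // New accepts a send-only channel, documenting that it only writes to it.
    func New(errCh chan<- error) {
        errCh <- errors.New("encountered error")
    }

    func main() {
        ch := make(chan error, 1) // plain bidirectional channel
        New(ch)                   // implicitly converts to chan<- error
        fmt.Println(<-ch)         // the caller keeps the read end
    }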
Signed-off-by: Leonard Lyubich <ctulhurider@gmail.com>
We have both from and to here, so technically we can either drop the neg/neg
trick from processTokenTransfer() or drop one field from the structure
(the other side is a part of the key). Drop the field since this can make the
DB a bit more compact. Change Amount to be a pointer along the way since
that's the "native" thing for big.Int, we've used non-pointer field
specifically to avoid Neg/Neg problems, but it looks like this is not
necessary.
This structure is only used by the RPC server and I doubt anyone uses it via
the *Blockchain.
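A sketch of the resulting layout (hypothetical types; one side of the
transfer lives in the DB key, and the sign of Amount encodes direction):

    package main

    import (
        "fmt"
        "math/big"
    )

    // TransferRecord is stored per account (the account is part of the key),
    // so only the counterparty needs a field.
    type TransferRecord struct {
        Counterparty string
        Amount       *big.Int // negative for outgoing transfers
    }

    func recordTransfer(db map[string][]TransferRecord, from, to string, amount *big.Int) {
        db[from] = append(db[from], TransferRecord{to, new(big.Int).Neg(amount)})
        db[to] = append(db[to], TransferRecord{from, new(big.Int).Set(amount)})
    }

    func main() {
        db := make(map[string][]TransferRecord)
        recordTransfer(db, "alice", "bob", big.NewInt(42))
        fmt.Println(db["alice"], db["bob"]) // [{bob -42}] [{alice 42}]
    }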
And include some node-specific configurations there with backwards
compatibility. Note that in the future we'll remove Ledger's
fields from the ProtocolConfiguration and it'll be possible to access them in
Blockchain directly (not via .Ledger).
The other option tried was using two separate configuration types, but that
incurs more changes to the codebase; a single structure that behaves almost
like the old one is better for backwards compatibility.
Fixes #2676.
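A sketch of the single-structure approach (field names are illustrative,
not the actual neo-go set):

    package config

    // Ledger groups node-specific settings.
    type Ledger struct {
        SkipBlockVerification bool
    }

    // ProtocolConfiguration embeds Ledger, so old-style access like
    // cfg.SkipBlockVerification keeps working through field promotion.
    type ProtocolConfiguration struct {
        Magic uint32
        Ledger
    }

Embedding is what makes the single structure behave almost like the old
one: existing code reads the promoted fields without knowing about the
split.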
It doesn't store id->hash mappings for native contracts. We need blockchain's
GetContractScriptHash to serve both anyway, so it was changed a bit. The only
other direct user of native.GetContractScriptHash is the VM CLI, but I doubt
anyone will use it for native contracts (they have ~zero VM code anyway).
There are no changes visible from the user side (at least for those
users who don't put Prometheus's or pprof's port in quotes), just
internal refactoring. From now on, the BasicService configuration is
used by the RPC server config, TLS for the RPC server, pprof and Prometheus.
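A sketch of what the shared section might look like (illustrative fields,
not the exact neo-go struct):

    package config

    // BasicService is the common part of the RPC, TLS, pprof and
    // Prometheus service configurations.
    type BasicService struct {
        Enabled bool   `yaml:"Enabled"`
        Address string `yaml:"Address"`
        Port    uint16 `yaml:"Port"` // numeric, so a quoted port fails to parse
    }

The quotes caveat above follows from Port being numeric: a YAML value like
"2112" unmarshals into a string field but not into a uint16.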
It's more generic and convenient than MillisecondsPerBlock. This setting is
made in a backwards-compatible fashion, but it'll override SecondsPerBlock
if both are used. Configurations are specifically not changed here; it's
important to check compatibility.
Fixes #2675.
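A sketch of the override rule (assuming the new option is a duration named
TimePerBlock; names here are illustrative):

    package config

    import "time"

    type ProtocolConfiguration struct {
        SecondsPerBlock int           `yaml:"SecondsPerBlock"`
        TimePerBlock    time.Duration `yaml:"TimePerBlock"`
    }

    // EffectiveTimePerBlock resolves both settings, the new one wins.
    func (p ProtocolConfiguration) EffectiveTimePerBlock() time.Duration {
        if p.TimePerBlock != 0 {
            return p.TimePerBlock
        }
        return time.Duration(p.SecondsPerBlock) * time.Second
    }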
Follow neo-project/neo#2807. Notice that this data is not cached, but our
previous implementation didn't cache it either and it shouldn't be a
problem (it's not on the hot path).
Blockchain's subscriptions, unsubscriptions and notifications are
handled by a single notificationDispatcher routine. Thus, while trying
to send a subsequent event to Blockchain's subscribers, the dispatcher
can't handle subscriptions/unsubscriptions. Make subscription and
unsubscription non-blocking operations for the blockchain on the
server side, otherwise they may cause dispatcher locks.
To achieve this, use a separate lock for the code that calls the
blockchain's subscription API and for the subscription counters on
the server side.
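A sketch of the two-lock split (hypothetical names; delivery never waits
behind a pending (un)subscription call into the blockchain):

    package main

    import "sync"

    type Server struct {
        subsLock sync.Mutex // guards blockchain (un)subscription calls and counters
        subCount int

        feedLock    sync.RWMutex // guards only the subscriber set used for delivery
        subscribers map[chan<- int]struct{}
    }

    func (s *Server) subscribe(ch chan<- int) {
        s.subsLock.Lock() // may call into the blockchain, can be slow
        defer s.subsLock.Unlock()
        s.subCount++

        s.feedLock.Lock()
        s.subscribers[ch] = struct{}{}
        s.feedLock.Unlock()
    }

    func (s *Server) deliver(ev int) {
        s.feedLock.RLock() // never blocked by a pending (un)subscription
        defer s.feedLock.RUnlock()
        for ch := range s.subscribers {
            ch <- ev
        }
    }

    func main() {
        s := &Server{subscribers: make(map[chan<- int]struct{})}
        ch := make(chan int, 1)
        s.subscribe(ch)
        s.deliver(42)
    }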
If the VUB-th block is received, we still can't guarantee that the
transaction wasn't accepted to the chain. Handle this situation by falling
back to a poll-based waiter.
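Roughly, the waiter's decision looks like this (hypothetical helpers):

    package main

    import (
        "context"
        "errors"
    )

    var errNotAccepted = errors.New("transaction not accepted before VUB")

    // awaitTx trusts events up to the VUB block; after that only polling
    // can tell whether the transaction actually made it into the chain.
    func awaitTx(ctx context.Context, accepted, vubReached <-chan struct{},
        poll func() bool) error {
        select {
        case <-accepted:
            return nil
        case <-vubReached:
            if poll() { // fall back to the poll-based waiter
                return nil
            }
            return errNotAccepted
        case <-ctx.Done():
            return ctx.Err()
        }
    }

    func main() {
        vub := make(chan struct{})
        close(vub)
        _ = awaitTx(context.Background(), nil, vub, func() bool { return true })
    }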
Do not block subscribers until the unsubscription request to the RPC server
is completed. Otherwise, another notification may be received from the
RPC server and block the unsubscription process.
At the same time, fix the event-based waiter. We must not block the
receiver channel during unsubscription because there's a chance that a
subsequent event will be sent by the server. We need to read this event in
order not to block the WSClient's readloop.
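A client-side sketch of the drain-while-unsubscribing idea (hypothetical
names):

    package main

    func unsubscribe(events <-chan int, unsubDone <-chan struct{}) {
        for {
            select {
            case <-events:
                // keep reading so the server (and the WSClient readloop)
                // is never blocked on our receiver channel
            case <-unsubDone:
                return
            }
        }
    }

    func main() {
        events := make(chan int, 1)
        events <- 42 // an event racing with the unsubscription
        unsubDone := make(chan struct{})
        close(unsubDone) // pretend the server confirmed unsubscription
        unsubscribe(events, unsubDone)
    }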
client_test.go:1935:
Error Trace: /home/rik/dev/neo-go/pkg/services/rpcsrv/client_test.go:1935
Error: Should NOT be empty, but was 00000000-0000-0000-0000-000000000000
Test: TestClient_Iterator_SessionConfigVariations/sessions_disabled
It's obviously empty since we have sessions disabled, but it was not
considered empty by testify 1.7.0; now it is, see 840cb80149.
NEP-6 has a notion of locked accounts and SignTx must respect this user's
choice. For some reason this setting was inappropriately used by our RPC
client tests (probably a different kind of lock was meant).
calculatenetworkfee MUST calculate the complete proper network fee; if we
have some extensions enabled and some attributes have to be paid for,
they're a part of the equation too.
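A sketch of the attribute part of that equation (illustrative types and
fees, not the actual neo-go code):

    package main

    import "fmt"

    type AttrType int

    const (
        Plain AttrType = iota
        Paid           // an extension attribute that has to be paid for
    )

    // networkFee adds per-attribute costs on top of the base fee.
    func networkFee(base int64, attrs []AttrType, paidAttrFee int64) int64 {
        fee := base
        for _, a := range attrs {
            if a == Paid {
                fee += paidAttrFee
            }
        }
        return fee
    }

    func main() {
        fmt.Println(networkFee(100, []AttrType{Plain, Paid}, 50)) // 150
    }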
We're dealing with a transaction here and it can't be decoded successfully
unless it has an appropriate number of witness scripts (matching the number
of signers) with appropriate hashes (matching the signers). So these
iterations make no sense at all; we know exactly where to look for the
verification/invocation scripts.
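A minimal illustration of that invariant (simplified types; Scripts[i]
always belongs to Signers[i] in a successfully decoded transaction):

    package main

    import "fmt"

    type Signer struct{ Account string }
    type Witness struct{ Invocation, Verification string }
    type Transaction struct {
        Signers []Signer
        Scripts []Witness
    }

    func main() {
        tx := Transaction{
            Signers: []Signer{{"alice"}, {"bob"}},
            Scripts: []Witness{{"inv-a", "ver-a"}, {"inv-b", "ver-b"}},
        }
        // No need to search by hash: decoding guarantees index alignment.
        for i, s := range tx.Signers {
            fmt.Println(s.Account, "->", tx.Scripts[i].Verification)
        }
    }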
Blockchain's notificationDispatcher sends events to channels and these
channels must be read from. Unfortunately, regular service shutdown procedure
does unsubscription first (outside of the read loop) and only then drains the
channel. While it waits for unsubscription request to be accepted
notificationDispatcher can try pushing more data into the same channel which
will lead to a deadlock. Reading in the same method solves this: any number
of events can be pushed until the unsub channel accepts the data.
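The fix boils down to this select shape (hypothetical names):

    package main

    func shutdown(events <-chan int, unsubReq chan<- struct{}) {
        for {
            select {
            case unsubReq <- struct{}{}:
                return // unsubscription accepted
            case <-events:
                // keep draining so notificationDispatcher can always
                // push pending events without deadlocking us
            }
        }
    }

    func main() {
        events := make(chan int, 1)
        events <- 42 // a notification racing with shutdown
        unsubReq := make(chan struct{}, 1)
        shutdown(events, unsubReq)
    }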
Unfortunately, Go doesn't allow to easily reuse readers in full across
packages; still, we can have this wrapper with a little overhead (the
alternative is to move specific methods into types of their own, but I'm
not sure how that would be accepted user-side).