Cache objects that are being processed. This prevents concurrent handling
of the same object when there are only a few objects and handling one takes
longer than the interval after which the policer would start handling it
again.
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
If the placement contains two vectors with intersecting nodes, it was
possible to send the object to such nodes twice.
Also optimize requests: do not ask the same node twice whether it stores
the object.
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
The node does not support asynchronous object replication anymore, so it
does not need to have replicator worker, channel and `AddTask` function.
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
When there are lots of objects in a single container, `GetContainerNodes`
invoked indirectly by the policer becomes noticeable in pprof profiles.
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
In the previous implementation the `verifySignature` method of the container
processor worked incorrectly for operations without a key but with a
session: the processor tried to verify the signature with one of the bound
owner keys instead of the session one.
Use the `VerifySessionDataSignature` method to check the signature if a
session is used. Refactor `verifySignature` a bit, highlighting the session
check for readability.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
In order to extend a container's ACL, the `F` bit must be set in the basic
ACL. Make the `Container` contract processor deny eACL tables bound to
non-extendable containers.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Log errors for network operations. The only places where we are not
interested in errors are `Submit` in pool and unmarshaling.
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
The node shouldn't perform eACL verification during GET/HEAD request
processing until the full object header is received. Otherwise, for some
eACL tables the request may be falsely rejected.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Scenario:
* HEAD request of some object
* 1st eACL record allows op for objects with specific user attribute
* 2nd eACL record forbids op by object ID
* node doesn't store the requested object locally
In this scenario the node shouldn't deny the request.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
It is redundant to process object headers in responses w/o the object field
since the result will be the same.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Request processing should not be interrupted in case of local storage
failure since errors are normal for relay nodes.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
ACL service should not deny request on local storage failure since in
this case relay nodes won't be able to continue the operation.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
It is a useless process since the subnet owner is able to delete a subnet
without Alphabet approval. The Alphabet should only validate the netmap
state after removal:
1. Update nodes' attributes if they were included in the deleted subnet;
2. Remove nodes that no longer belong to any subnet.
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
`log.With` is suitable during initialization, but in other places it induces
some overhead, even when branches with logging are not taken.
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
If we should process an address based on some condition, there is no need
to read the file content into memory.
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
Currently we use `(*bbolt.Bucket).Stats().KeyN` for estimating database
size. However, it iterates over all pages in bucket and thus heavily
depends on the bucket size. This commit replaces initial size estimation
with a single `os.Stat` call.
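To illustrate, a minimal sketch of such an `os.Stat`-based estimation (the
helper name and error handling are illustrative, not the actual code):

```go
import "os"

// estimateSize reads the database size from file metadata, replacing the
// per-bucket KeyN iteration.
func estimateSize(path string) (uint64, error) {
	fi, err := os.Stat(path)
	if err != nil {
		return 0, err
	}
	return uint64(fi.Size()), nil
}
```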
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
Core changes:
* avoid package-colliding variable naming
* avoid using pointers to IDs where unnecessary
* avoid using `idSDK` import alias pattern
* use `EncodeToString` for protocol string calculation and `String` for
printing
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Helper function `client.IsErrObjectNotFound` doesn't support error
unwrapping, so we need to do it on caller side.
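A hedged sketch of such caller-side unwrapping; `isObjectNotFound` is a
hypothetical helper, not the actual code:

```go
import (
	"errors"

	"github.com/nspcc-dev/neofs-sdk-go/client"
)

// isObjectNotFound unwraps err layer by layer because
// client.IsErrObjectNotFound checks only the top-level value.
func isObjectNotFound(err error) bool {
	for ; err != nil; err = errors.Unwrap(err) {
		if client.IsErrObjectNotFound(err) {
			return true
		}
	}
	return false
}
```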
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
`Policer` should pass list of selected candidates into `WithNodes`
method of `replicator.Task`. In previous implementation `processNodes`
method passed an opposite list: failed nodes and/or the local one.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Not all NeoFS requests must contain an OID in their bodies (and some must
NOT contain one at all). Do not pass the object address to helper functions;
pass CID and OID separately instead.
Also, fix an NPE in the ACL service: the updated SDK library brought errors
when working with `Put` and `Search` requests without OID fields.
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
In previous implementation `Policer` considered local object copy as
redundant on processing single placement vector.
Make `Policer` to call redundant copy callback after full placement
processing. Also fix 404 error parsing.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Call the `UpdateStateIR` and `AddPeerIR` methods instead of `UpdateState`
and `AddPeer` if the calling client is configured as Alphabet in a notary
enabled environment.
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
Parse all headers beforehand and reject invalid requests.
Another approach would be to remember the error and check
it after `CalculateAction`, which is a bit faster.
The rule of thumb here is "first validate, then use".
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
It is nice to have different paths for different components and also
check that the information returned is different for different shards.
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
In previous implementation NeoFS CLI app used `network.Address.HostAddr`
as a server URI, which caused scheme loss since host address doesn't
contain it.
Rename `HostAddr` to `URIAddr` and make it return a URI address with the
`grpcs` scheme if TLS is enabled. Make `TLSEnabled` unexported since it
was used only to provide the default `tls.Config` (which is used by default
in the SDK).
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
- Delete objects physically on tombstone's arrival;
- Store information about tombstones in the Graveyard;
- Clear Graveyard every epoch based on the information about TS in the
network.
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
The service fetches tombstones from the network via the object service.
Every request is handled in the following order (sketched below):
1. checks local LRU cache;
2. checks local storage engine;
3. tries to find object in the placement nodes.
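A rough sketch of that order; all names here are illustrative, not the real
code (the LRU import is an assumption as well):

```go
import lru "github.com/hashicorp/golang-lru"

// objectGetter abstracts a tombstone lookup; addr is a serialized address.
type objectGetter interface {
	Get(addr string) ([]byte, error)
}

type tombstoneSource struct {
	cache   *lru.Cache   // 1. local LRU cache
	local   objectGetter // 2. local storage engine
	network objectGetter // 3. placement nodes via the object service
}

func (s *tombstoneSource) tombstone(addr string) ([]byte, error) {
	if ts, ok := s.cache.Get(addr); ok {
		return ts.([]byte), nil
	}
	if ts, err := s.local.Get(addr); err == nil {
		s.cache.Add(addr, ts)
		return ts, nil
	}
	ts, err := s.network.Get(addr)
	if err != nil {
		return nil, err
	}
	s.cache.Add(addr, ts)
	return ts, nil
}
```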
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
Add offset element to the iterations over deleted objects (both the
Graveyard and the Garbage buckets).
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
It allows storing information about an object in both ways at the same time:
1. Metabase should know if an object is covered by a tombstone (that is
not expired yet);
2. It should be possible to physically delete objects covered by a
tombstone immediately (mark with GC) but keep tombstone knowledge.
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
Morph "NewEpoch" event handling was registered in a closure over
`addNewEpochNotificationHandler` func. That may lead to the data race:
if a shard was initialized before the event registration, everything works
as planned, but if registration was made earlier, it was not able to
include GC handlers since a shard has not called `eventChanInit` yet and,
therefore, it has not registered handler yet.
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
Add a worker pool to the listener to prevent blocking. It is used only for
notary notifications and new block event handling since their handling
performs RPC calls. Without it a deadlock is possible: neo-go cannot send
an RPC response until the notification channel is read, but the notification
channel cannot be read until the neo-go client finishes sending the RPC.
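An illustrative sketch of handing work off to the pool instead of handling
it on the reader goroutine (an ants-style pool is assumed here; the function
and handler names are not the actual code):

```go
import (
	"log"

	"github.com/panjf2000/ants/v2"
)

// handleNotaryEvent hands the event off to the worker pool so that the
// goroutine reading the notification channel is never blocked on RPC.
func handleNotaryEvent(pool *ants.Pool, ev interface{}, handle func(interface{})) {
	if err := pool.Submit(func() { handle(ev) }); err != nil {
		log.Printf("worker pool is drained, skipping event: %v", err)
	}
}
```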
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
Updated client now supports subscription to chain notifications and RPC
switch between provided RPC endpoints.
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
Provide current heights as an argument to ticker.
Zero height disables any checks, thus corresponding to the old
behaviour. If non-zero height is used, ignore the tick if the height
is less than the timer tick state.
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
`Listen` has a value receiver and calls `listen`, which has a pointer
receiver, leading to an ineffectual assignment to the `started` field.
Always use `listener` by pointer to avoid similar bugs.
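The bug in miniature (types here are illustrative, not the actual code):

```go
type listener struct {
	started bool
}

// Listen has a value receiver, so it operates on a copy of the listener.
func (l listener) Listen() {
	l.listen()
}

// listen sets started on the copy created for Listen; the change is lost
// as soon as Listen returns, which is the ineffectual assignment.
func (l *listener) listen() {
	l.started = true
}
```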
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
When an IR node is configured without the main chain, both
`morphListener` and `mainnetListener` point to a single listener
component. We should not call `Stop()` twice, because it may trigger
channel closing in neo-go or other components and can cause a panic.
Signed-off-by: Alex Vanin <alexey@nspcc.ru>
Prevent attempts to connect to the failing endpoint multiple times.
This speeds up all invocations a bit and removes excessive error logs.
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
Also, remove optimization comments:
1. Having to maintain and execute the same logic for headers as for
objects is quite inefficient, as it increases the memory footprint.
2. Unmarshaling an object is a cheap operation if the data slice is in memory.
3. For unmarshaling the header only, I think we need SDK support.
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
`Degraded` mode is set automatically after error counter is over the
threshold. `ReadOnly` mode can still be set by an administrator.
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
Add a `WithEncryption` option that passes an ECDSA key to the persistent
session storage. It uses 32 bytes of the marshalled ECDSA key in ASN.1 DER
form as the key for AES-256 encryption in Galois/Counter Mode.
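A hedged sketch of the setup; which 32 bytes of the DER form are used is an
assumption here, not the actual implementation:

```go
import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/ecdsa"
	"crypto/x509"
)

// newAEAD derives an AES-256-GCM cipher from a marshalled ECDSA key.
func newAEAD(key *ecdsa.PrivateKey) (cipher.AEAD, error) {
	der, err := x509.MarshalECPrivateKey(key) // ASN.1 DER form of the key
	if err != nil {
		return nil, err
	}
	block, err := aes.NewCipher(der[:32]) // 32 bytes -> AES-256 key
	if err != nil {
		return nil, err
	}
	return cipher.NewGCM(block) // Galois/Counter Mode
}
```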
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
Move the in-memory session storage to a separate `storage` directory. It is
done for future support of different kinds of session storage.
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
`Batch` can execute the function multiple times, leading to multiple
increases of the size approximation.
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
Alphabet nodes in a notary enabled environment cannot call the `UpdateState`
method to remove unwanted storage nodes from the network map,
because this method checks the witness of the storage node.
To force a storage node state update, alphabet nodes should invoke the
new method `UpdateStateIR`, which is similar to `AddPeerIR`.
A state update initiated by the storage node itself is processed
the same way as before -- alphabet nodes re-sign such a transaction.
Signed-off-by: Alex Vanin <alexey@nspcc.ru>
`Register` was renamed to `AddPeerIR` for consistency with the
`UpdateState` changes in
https://github.com/nspcc-dev/neofs-contract/pull/227
This is a protocol-breaking change for notary enabled environments.
Luckily, there are no notary enabled environments anywhere except
neofs-dev-env, so we can do such a thing. We should avoid such
changes in the future, though.
Signed-off-by: Alex Vanin <alexey@nspcc.ru>
The `apistatus` package provides types which implement the built-in `error`
interface. Add the `error of type` pattern when documenting these errors in
order to clarify how these errors should be handled (e.g. `errors.Is` is
not suitable).
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Return `apistatus.ObjectAccessDenied` error on access violation from ACL
service. Write reason in format of the errors from the previous
implementation. These errors are returned by storage node's server as
NeoFS API statuses.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Replace `core/container.ErrNotFound` error returned by `Source.Get`
interface method with `apistatus.ContainerNotFound` status error. This
error is returned by storage node's server as NeoFS API statuses.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Replace `ErrNotFound`/`ErrAlreadyRemoved` error from
`pkg/core/object` package with `ObjectNotFound`/`ObjectAlreadyRemoved`
one from `apistatus` package. These errors are returned by storage
node's server as NeoFS API statuses.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Return `SessionTokenExpired`/`SessionTokenNotFound` error from
`apistatus` package if private session token is expired/missing. These
errors are returned by storage node's server as NeoFS API statuses.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Make all epochs independent in the reputation process. Do not reset any
timers related to reputation. Make it possible to finish an iteration after
an unexpected `NewEpoch` event.
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
There is a need to process expired `LOCK` objects similar to `TOMBSTONE`
ones: we collect them on `Shard`, notify all other shards about
expiration so they could unlock the objects, and only after that mark
lockers as garbage.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
In the previous implementation `SignService` converted all `error` values to
an `INTERNAL` server failure status. That was done for simplification only.
There is a need to transmit status errors as corresponding status
messages.
Make `SignService` unwrap errors and convert them to status messages
while writing the response. Non-status errors are converted to
`INTERNAL` server failures. Status errors can also be wrapped in the
depths of the executable code, so `SignService` tries to unwrap them.
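A rough illustration of the unwrapping step; the `statusError` interface and
the returned string are hypothetical stand-ins for the real status types:

```go
import "errors"

// statusError is a hypothetical interface implemented by status errors.
type statusError interface {
	error
	Status() string // stand-in for the real protocol status message
}

// responseStatus unwraps err until a status error is found; any other
// error becomes an INTERNAL server failure.
func responseStatus(err error) string {
	var se statusError
	if errors.As(err, &se) {
		return se.Status()
	}
	return "INTERNAL"
}
```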
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Make `FormatValidator.ValidateContent` verify the payload of `LOCK`
objects. Pass locked objects to the `Locker` interface. Require
`Locker.Lock` to return an `apistatus.IrregularObjectLock` error on the
corresponding condition.
Also add an error return to the `DeleteHandler.DeleteObjects` method.
Require the method to return an `apistatus.ObjectLocked` error on the
corresponding condition. Adapt implementations.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
`Inhume` operation can potentially mark lockers as garbage. There is a
need to update locker list in locked bucket.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Make `StorageEngine.Delete` to forward first encountered
`apistatus.ObjectLocked` error during shard processing.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Make `StorageEngine.Inhume` to forward first encountered
`apistatus.ObjectLocked` error during shard processing.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Implement `StorageEngine.Lock` method which works similar to `Inhume`
but calls `Lock` on the processing shards.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Make `DB.Lock` to return `apistatus.IrregularObjectLock` if at least one
of the locked objects is irregular (not of type REGULAR).
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Make `DB.Inhume` to return `apistatus.ObjectLocked` if at least one of
the inhumed objects is locked.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Make `DB.IterateCoveredByTombstones` to not pass locked objects to the
handler. The method is used by GC, therefore it will not consider locked
objects as candidates for deletion even if their tombstone is expired.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Make `DB.IterateExpired` to not pass locked objects to the handler. The
method is used by GC, therefore it will not consider them as candidates
for deletion.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
After the introduction of LOCK objects (of type `TypeLock`), the metabase
extends its behavior:
* create `lockers` container bucket (LCB) during PUT;
* remove object from LCB during DELETE;
* look up object in LCB during EXISTS;
* get object from LCB during GET;
* list objects from LCB during LIST with cursor;
* select objects from LCB during SELECT with '*'.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Implement `DB.Lock` method which marks list of the objects as locked by
another object. Only regular objects can be locked.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Create class of container buckets with `LOCKED` suffix. Put identifiers
of the objects of type `LOCK` to these buckets during `DB.Put`.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
After `putsvc.Service` started to support additional container broadcast
of saved objects, there is no more need to perform broadcast of the
tombstone object in `deletesvc.Service`.
Make `putsvc.Service` perform additional broadcast of `TOMBSTONE`
objects. Remove the `broadcastTombstone` stage from `deletesvc.execCtx`;
from now on it is encapsulated in the `saveTombstone` stage. Remove the no
longer needed `putsvc.PutInitPrm.WithTraverseOption` method.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
There are several cases when we need to spread the object around the
container after its primary placement (e.g. objects of type TOMBSTONE).
It'd be convenient to support this feature in `putsvc.Service`.
Add additional stage of container broadcast after the object is stored.
This stage is carried out no more than once and does not affect the
outcome of the main PUT operation.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Make all operations related to the `neofs-api-go` library be placed in `v2`
packages. They parse all v2-versioned structs into `neofs-sdk-go`
abstractions and pass them to the corresponding `acl`/`eacl` packages. The
`v2` packages are the only packages that import the `neofs-api-go` library.
`eacl` and `acl` provide public functions that accept only `sdk` structures.
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
Remove `Object` and `RawObject` types from `pkg/core/object` package.
Use `Object` type from NeoFS SDK Go library everywhere. Avoid using the
deprecated elements.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Similarly to `Get`. Also fix a bug where `ErrNotFound` is returned
instead of `ErrRangeOutOfBounds`.
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
Metabase is expected to contain actual information about objects stored
in shard. If the object is present in metabase but is missing from
blobstor, perform an additional attempt to fetch it directly without
consulting metabase. Such a situation is unexpected, so error counter
is increased for the shard which has the object in the metabase. We
don't increase error counter for the shard which has the object in
blobstor, because some garbage can be expected there. In this
implementation there is no overhead for objects which are really
missing, i.e. are not present in any metabase.
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
There are certain errors which are not expected during usual node
operation and which tell us that something is wrong with the shard.
To prevent possible data corruption, move the shard to read-only mode after
the number of errors exceeds a threshold. By default no actions are performed.
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
In some places we panic, in some we return an error, in some (audit) we just
return a client. However, in all of these places the static client is
created immediately before the sugared-client creation.
This commit makes all constructors just return a client for the sake
of code simplification and unification.
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
We don't use custom names and the only place where custom method option
is used it provides the default name and can be omitted.
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
If a pre-existing blobovnicza is initialized, its size should be updated
even if all buckets are in place.
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
It was added back in 2fb379b7 when we had many shard modes. Now we have
only two and comments for constants are rather descriptive.
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
In previous implementation IR incorrectly verified `SetEACL` event of
`Container` contract. The incorrect behavior could be reproduced in two
ways:
1. Create container using session, and perform `SetEACL` operation
with a key that is different from the session one.
2. Create container using session, and perform `SetEACL` w/o a
session, but sign it using session key from the `Put` operation.
The problem was in the `checkSetEACL` validation method of IR container
processor. It always used session token used for container creation
during session ownership check.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
We could also ignore errors during evacuation, but this requires
unmarshaling objects first, which slows the process down considerably.
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
In read-only mode, modifying operations immediately return an error and
all background operations are suspended.
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
Dump contains magic and a list of objects prefixed by object size in bytes.
We can't use proto-marshaled list because this requires having all dump
in memory. Using TAR induces 512 byte overhead for each object which can
be a problem in some cases.
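A sketch of the resulting layout (the magic value, prefix width and
endianness are assumptions, not the actual format):

```go
import (
	"encoding/binary"
	"io"
)

// writeDump writes the magic followed by each object prefixed with its size.
func writeDump(w io.Writer, magic []byte, objects [][]byte) error {
	if _, err := w.Write(magic); err != nil {
		return err
	}
	for _, data := range objects {
		var size [4]byte
		binary.LittleEndian.PutUint32(size[:], uint32(len(data))) // size prefix
		if _, err := w.Write(size[:]); err != nil {
			return err
		}
		if _, err := w.Write(data); err != nil { // raw object bytes
			return err
		}
	}
	return nil
}
```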
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
- Update `neofs-sdk-go`:
v0.0.0-20211230072947-1fe37df88f80 => v0.0.0-20220113123743-7f3162110659
- Add client interface that duplicates SDK's client behaviour and new
`MultiAddressClient` interface that has method that iterates over wrapped
clients.
- Also start using simple client mode that does not require parsing statuses
outside the SDK library.
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
For some data, compression makes little sense, as it is already compressed.
This commit allows leaving such data unchanged based on the `Content-Type`
attribute. Currently exact, prefix and suffix matching are supported.
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
Reverse payload overtake triggers direct payload overtake that
sets status and error. We should not override that.
Signed-off-by: Alex Vanin <alexey@nspcc.ru>
It is much more convenient to skip source creation.
Also fix some bugs (see the sketch after this list):
1. `cryptoSource.Int63()` now returns a number in [0, 1<<63) as required
by the `rand.Source` interface.
2. Replace `cryptoSource.Uint63()` with `cryptoSource.Uint64` to allow
generating uint64 numbers directly (see rand.Source64 docs).
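A hedged sketch of the resulting source (the real implementation may differ
in details):

```go
import (
	crand "crypto/rand"
	"encoding/binary"
)

// cryptoSource implements math/rand.Source64 on top of crypto/rand.
type cryptoSource struct{}

func (cryptoSource) Seed(int64) {} // a crypto source needs no seeding

func (cryptoSource) Uint64() uint64 {
	var buf [8]byte
	if _, err := crand.Read(buf[:]); err != nil {
		panic(err)
	}
	return binary.BigEndian.Uint64(buf[:])
}

// Int63 drops the top bit to stay in [0, 1<<63) as rand.Source requires.
func (s cryptoSource) Int63() int64 {
	return int64(s.Uint64() >> 1)
}
```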
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
Indent sections corresponding to different shells for faster navigation
and remove `#` at the line beginning.
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
If the mainchain is disabled in the IR config, then the node should read the
inner ring list via the role management contract.
Use `NeoFSAlphabetList` method of morph client as IR lister if
`withoutMainNet` flag is set in IR app.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
In previous implementation `Search` method of transport splitter skipped
responses with empty ID list.
Replace while-loop with do-while one in `TransportSplitter.Search`
method implementation in order to send responses with empty result too.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Provide shard mode information via `DumpInfo()`. Delete atomic field from
Shard structure since it duplicates new field.
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
The shard's mode was not used in the node, so only two modes with clear
roles were added. More modes will be added in the future.
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
Without sanity check, container service provides successful response,
even though such request will never be approved by Alphabet nodes.
Signed-off-by: Alex Vanin <alexey@nspcc.ru>
When application is being terminated, replicator routine
might be on the object picking phase. Storage is terminated
asynchronously, thus `Select()` may return corresponding
error. If we don't process `context.Done()` in this case,
then application freezes on shutdown.
Signed-off-by: Alex Vanin <alexey@nspcc.ru>
Make `flushBigObjects` routine to mark objects which are written to
`BlobStor`. This prevents already flushed objects from being written on
the next iterator tick.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
For fullness estimation of `Blobovnicza` we use the number of objects stored
in each size bucket. In the previous implementation we multiplied that
number by the difference in bucket boundaries. This expression rather
estimated the minimum volume (and for the smallest bucket, the maximum)
of objects in the bucket.
Multiply the number of objects by the mean bucket size instead.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Make `syncFullnessCounter` to accept `bbolt.Tx` argument of Bolt
transaction within which counter should be synchronized. Pass
corresponding transaction during `Init`.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
In notary disabled environment, approval of container creation with nice
name attribute takes much more additional GAS than other operations
(due to NNS invocation).
Morph library changes:
* add the ability to specify per-op fees using `StaticClient` options;
* add the ability to customize fee for `Put` operation with named
container in container morph client.
Inner Ring changes:
* add `fee.named_container_register` config value which specifies
additional GAS fee for the approvals of the named container
registrations;
* pass the config value to `WithCustomFeeForNamedPut` option of
container morph client.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
According to BoltDB documentation bucket `value is only valid for the
life of the transaction`.
Make `DB.IsSmall` copy value slice in order to prevent potential memory
corruptions (e.g. `runtime.stringtobyteslice` cast).
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
After https://github.com/nspcc-dev/neofs-contract/issues/154 alphabet
nodes should call `Register` method for approval of the notary
notifications spawned by `AddPeer` method.
Call `register` method for peer approval in Netmap processor.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
A notary event name equals the name of the method which throws the
event.
Define a name constant for notary subnet creation.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Use persistent flags on parent command in order to inherit flags in
sub-commands. Turn on notary mode of morph client in `subnet` command of
admin utility for notary environments.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Make `Client.EnableNotarySupport` method to call `NNSContractAddress`
for proxy contract if it is not specified in corresponding option.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Add `subnet` command which contains all subnet-related commands. Add
sub-commands:
* `create` for creation;
* `remove` for removal;
* `get` for reading;
* `admin` for admin management;
* `client` for client management.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Define notification events, implement parsers. Add morph client of
Subnet contract. Listen, verify and approve events in Inner Ring app.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Replication is now executed in a separate pool of goroutines,
so there is no need for a worker to handle replication tasks
asynchronously.
Signed-off-by: Alex Vanin <alexey@nspcc.ru>
Upgrade NeoFS API Go library to version with status returns. Make all API
clients to pull out and return errors from failed statuses. Make signature
service to respond with status if client version supports it.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
ListWithCursor allows listing physically stored objects
from the metabase in small chunks. The cursor tracks the last
processed object, therefore new chunks are returned
on each request.
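An illustrative paging loop; the chunk size, cursor and address types, and
the end-of-listing sentinel are assumptions, not the actual API:

```go
import "errors"

// listAll pages through the metabase using ListWithCursor (illustrative only).
func listAll(db *DB, process func([]Address)) error {
	var cursor *Cursor // nil on the first request
	for {
		addrs, next, err := db.ListWithCursor(1000, cursor)
		if errors.Is(err, ErrEndOfListing) { // hypothetical "nothing left" sentinel
			return nil
		}
		if err != nil {
			return err
		}
		process(addrs) // handle the next chunk of physically stored objects
		cursor = next  // the cursor remembers the last processed object
	}
}
```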
Signed-off-by: Alex Vanin <alexey@nspcc.ru>
Add hash of the TX that generated notification
to neofsid event structures. Adapt all
neofsid wrapper calls to new structures.
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
Add hash of the TX that generated notification
to neofs/netmap event structures. Adapt all
neofs/netmap wrapper calls to new structures.
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
Add optional parameters to the client call
signature. Group parameters of a client call
into struct to improve future codebase
support.
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
Add optional parameters to the client call
signature. Group parameters of a client call
into struct to improve future codebase
support.
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
Parsers should have the original notification
structure to be able to construct an internal
event structure that contains the information
necessary for unique nonce calculation.
So notification parsers take the raw notification
structure instead of a slice of stack items.
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
Make `BlockExecution` / `ResumeExecution` not release the per-shard worker
pools. Make `StorageEngine.Close` block these methods and any
data-related operations. `Close` still releases the pools.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Use `sync.Once` to prevent deadlocks when stopping the GC. It also allows
`Shard.Close` to be called safely multiple times.
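A minimal sketch of the guard, with illustrative field names:

```go
import "sync"

type gc struct {
	stopOnce sync.Once
	stopCh   chan struct{}
}

// stop is safe to call multiple times: the channel is closed exactly once,
// so repeated Shard.Close calls do not panic or block.
func (g *gc) stop() {
	g.stopOnce.Do(func() {
		close(g.stopCh)
	})
}
```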
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Add `MAINTENANCE` value to `NetmapStatus` enum in Control API. The status is
going to be used to toggle maintenance mode of the storage node.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
There is a need to disable execution of local data operations on the storage
engine at runtime. If storage engine ops are blocked, the node will act as
usual but all local object operations will be denied.
Implement `BlockExecution` / `ResumeExecution` methods on `StorageEngine`
which block / resume the execution of data ops and wait for the completion
of all operations executed at the time of the call. Return the error passed
to `BlockExecution` from all data-related methods until the `ResumeExecution`
call. Make `Close` block operations as well.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
In the previous implementation the IR handler of the `AddPeer` notification
didn't send a registration to the contract if an existing peer had changed
its information. As a consequence, network map members could not update
their information without going offline.
Change the `processAddPeer` handler to check if
* the candidate is brand new to the network map
* or information about the network map member has changed
and call the `AddPeer` method if so.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
All objects in NeoFS must have owner ID. In previous implementation Object
Delete service handler set owner ID from request session token. If removal
was executed w/o a session, object with tombstone was prepared incorrectly.
In order to fix this node should set its own ID and become an owner of the
tombstone object.
Extend `NetworkInfo` interface required by Object.Delete handler with
`LocalNodeID` method which returns `owner.ID` of the local node. Implement
the method on `networkState` component of storage node application which is
updated on each node state change in NeoFS network map. Set owner returned
by `LocalNodeID` call as tombstone object's owner in Delete handler.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
In the previous implementation the node returned "access denied" on
Object.Put with an object whose owner is unset. Although the object owner
must be set, its absence should not be considered an access error. The same
applies to the sender key.
Check owner ID and public key emptiness only if the sticky bit is set.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Also, add an ErrNNSRecordNotFound error that
indicates that the required hash is not present
in the `NNS` contract.
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
The client needs of the Object service are limited and do not change often.
Interface changes of the client library should not affect the operation of
various service packages, if they do not change their requirements for
the provided functionality. To localize the use of the base client and
facilitate further support, an auxiliary package is implemented that will
only be used by the Object service.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Timer is not suitable for notary deposits because it can never fire
in case of desynchronization or external epoch changes. Notary deposits
must be handled on new epoch event.
Signed-off-by: Alex Vanin <alexey@nspcc.ru>
a1696a8 introduced some logic which in some situations prevented big objects
from being persisted in FSTree. This commit refactors it with the
goal of simplifying the code and also checking issue #866.
1. Split a monstrous function into multiple simple ones: memory objects
can only be small and for writing through the cache we can do a dispatch
in `Put` itself.
2. Determine objects to be put in database before the actual update
as setting up a transaction has non-zero overhead.
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
The client needs of the IR application are very limited and rarely change.
Interface changes of the client library should not affect the operation of
various application packages, if they do not change their requirements for
the provided functionality. To localize the use of the base client and
facilitate further support, an auxiliary package is implemented that will
only be used by the IR application.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
There is no point to pass key storage in parameters because
it can be defined on the service level of application.
Signed-off-by: Alex Vanin <alexey@nspcc.ru>
The `CommonPrm` structure has a private key for remote operations.
It is obtained at the beginning of request processing. However,
not every operation triggers remote calls, therefore the key
might not be used. It is important to avoid early key fetching
because `TokenStore` now returns an error if the session token does not
exist. This is a valid case when container nodes receive a request with
a session token (for ACL pass) and they should process the request locally.
Signed-off-by: Alex Vanin <alexey@nspcc.ru>
This is an invalid operation for storage nodes that receive a part of a
split object. While the object is signed with the session token, the message
itself should be signed by the node key.
Signed-off-by: Alex Vanin <alexey@nspcc.ru>
Container listing should not ignore tombstone and
storage group objects which are not stored in
primary buckets.
Signed-off-by: Alex Vanin <alexey@nspcc.ru>
Do not set `till` to some constant:
use the maximum of two values instead (see the sketch below):
1. currentDepositTill;
2. currentHeight+epochDuration+constant.
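A worked sketch of that rule (identifier names are illustrative):

```go
// newTill picks the later of the current deposit expiration and the
// freshly computed height-based bound.
func newTill(currentDepositTill, currentHeight, epochDuration, extra uint32) uint32 {
	till := currentHeight + epochDuration + extra
	if currentDepositTill > till {
		till = currentDepositTill // keep the maximum of the two values
	}
	return till
}
```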
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
Some of the pools are initialized during config initialization,
so it isn't possible currently to release them in one place.
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
Add name and zone arguments to `Put` method of wrapper over the Container
contract client. Pass result of `container.GetNativeNameWithZone` function
to the method in `Put` helper function. Due to this, the storage node will
call the method depending on the presence of the container name in the
attributes.
Make IR to listen `putNamed` notification event. The event is processed like
`put` event, but with sanity check of the container attributes.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Add `PutArgs.SetNativeNameWithZone` method which sets native name and zone
for container. Call `putNamed` method of Container contract if name is set,
otherwise call `put` method.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Implement `ParsePutNamedNotary` function which parses `PutNamed` structure
from `event.NotaryEvent`. Share common code with `ParsePutNotary` function.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Define `PutNamed` structure of notary notification from `putNamed` method of
Container contract. Embed `Put` type in order to inherit methods and
parsing of common parts with `put` method.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Implement a `StringFromOpcode` function that tries to retrieve a `string`
from an `Op`. Add a comment about the neo-go source code that is used for
the implementation of converters.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
`WSClient` of Neo Go v0.97.3 sets value of notification with
`NotificationEventID` to `subscriptions.NotificationEvent` type which wraps
previously used `state.NotificationEvent`.
Change type cast and pull `state.NotificationEvent` structure from new type.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Make the implementation of network info source (Netmap V2 service
dependency) to read MillisecondsPerBlock sidechain parameter and NeoFS
network parameters depending on the client version.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Implement `Client.ListConfig` method which calls `listConfig` method of
Netmap contract. Implement `Wrapper.IterateConfigParameters` method which
uses previous one. Implement `wrapper.WriteConfig` helper function which
allows to interpret parameters by names.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
IR tries to keep a 1:3 proportion of GAS and
notary balances respectively. If that proportion
has been violated (meaning the notary balance is
lower than required) it sends half of its
GAS balance to the notary service.
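A rough sketch of that check; the exact comparison and names are assumptions
derived from the description above:

```go
// checkNotaryProportion deposits half of the GAS balance if the notary
// balance falls below the 1:3 GAS-to-notary proportion.
func checkNotaryProportion(gasBalance, notaryBalance int64, deposit func(int64)) {
	if notaryBalance < 3*gasBalance { // proportion violated: notary side is too low
		deposit(gasBalance / 2) // send half of the GAS balance to the notary service
	}
}
```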
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
After the storage engine started to limit the number of PUT operations,
there is no need for a limited worker pool in the Object Put service.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Make `StorageEngine` use non-blocking worker pools with the same
(configurable) size for the PUT operation. This allows switching to freer
shards when others are overloaded, thereby distributing the write load more
evenly.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Convert minutes of received coordinates
to decimal parts of a degree, and do not
use the decimal part of the float as storage for
minutes: "5915N 01806E" is
59.25N 18.10E, not 59.15N 18.06E.
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
A session token can be present in both the object header and the
request meta header. They are the same during initial object
placement.
During object replication, the storage node puts the object without
any session token attached to the request. If the container's eACL
denies object.Put for the USER role (use bearer to upload), then
replication might fail on objects with session tokens
signed by the container owner. It is incorrect, so use the session
token directly from the request meta header.
Signed-off-by: Alex Vanin <alexey@nspcc.ru>
We should be able to read whatever we have written earlier.
Compression setting applies only to the new objects.
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
Do not log in options constructors. Also failure to
initialize compression module (possibly due to invalid options) is
certainly an error deserving proper treatment.
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
There is a need to list addresses of the small objects stored in the
WriteCache database.
Implement an `IterateDB` function which accepts a BoltDB instance, iterates
over all saved objects and passes their addresses to the handler.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
There is a need to open Blobovnicza instances in read-only mode in some
cases.
Add a `ReadOnly` option. Do not create the dir path in RO mode. Open the
underlying BoltDB instance with the ReadOnly flag. Document that all writing
operations must not be called in RO mode (otherwise BoltDB txs fail).
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
`Blobovnicza` can be initialized with any number of range buckets, and
reconstructed with different size limit. In previous implementation
`Iterate` could miss some stored objects if we construct `Blobovnicza` with
smaller number of ranges.
Make `Iterate` to traverse all buckets regardless of current instance
bounds.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
In previous implementation `Blobovnicza.Iterate` op decoded object data only
and passed it to the handler. There is a need to iterate over all addresses
of the stored objects.
Add `DecodeAddresses` and `WithoutData` methods to the `IteratePrm` type. Add
an `Address` method to the `IterationElement` type. Make `Iterate` decode
object addresses if `DecodeAddresses` was called and not read the data if
`WithoutData` was called. Implement an `IterateAddresses` helper function to
simplify the code.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
There is a need to check if public key in the RPC response matches the
public key of the related storage node declared in network map.
Define `ErrWrongPublicKey` error. Implement RPC response handler's
constructor `AssertKeyResponseCallback` which checks public key. Construct
handler and pass it to client's option `WithResponseInfoHandler`.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
There is a need to process announced public keys of storage nodes for
intra-container communication.
Implement `PublicKey` / `SetPublicKey` methods of `client.NodeInfo` type.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
There is a need to have the ability to expand the data needed for client
construction.
Replace `network.AddressGroup` parameter of client cache interfaces with
`client.NodeInfo`.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
After recent changes morph client wrappers provide contract address getter.
It can be used to compose notification parsers and handlers.
Use `ContractAddress` method in constructors of notification parsers and
handlers. Remove no longer used script hash parameters of event processors.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
There is a need to inherit some methods of the `StaticClient` type. In order
not to inherit all methods via type embedding, we can group a sub-set of
methods and inherit it.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Accept notification endpoints as string slice from config. Work with the
first successfully initialized WSClient.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
In previous implementation Object PUT used single pool of workers for local
and remote ops, but these ops are heterogeneous.
Use remote/local pool for remote/local operations in PUT service. At first
the pools are configured with the same size.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
In previous implementation `distributedTarget` didn't check if next node is
local. This check was performed by the handlers (target initializer and
relay func).
Make `distributedTarget` to calculate node's locality. Pass locality flag to
the handlers.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Each object in the graveyard has a tombstone or a GC mark. If an object has
a tombstone, the metabase should return `ErrAlreadyRemoved` on object
requests. This is the case when the user explicitly removed the object from
the container. GC marks are used for physical removal, which can happen even
if the object is still present in the container (Control service, Policer
job, etc.). In this case the metabase should return a 404 error on object
requests.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
`List` method of `Shard` must return only physically stored objects.
Use `AddPhyFilter` to select only phy objects.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Forwarding mechanism resends original request. During split object chain traversal,
storage node performs multiple `object.Head` requests on each child. If request
forwarding happens, then `object.Head` returns object ID of the original request.
This produces infinite assembly loop.
Signed-off-by: Alex Vanin <alexey@nspcc.ru>
Set `Curve` field in `ecdsa.PublicKey` instance from `keys.PublicKey` one in
`checkKeyOwnership` method of container processor.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Tombstone and "alive" objects can be both stored in BlobStor. They can
appear during iterating in different order. Metabase returns
`ErrAlreadyRemoved` error if object is inhumed.
Ignore `object.ErrAlreadyRemoved` errors of `metabase.Put` in Shard's
`refillMetabase` operation.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
There is a need to refill Metabase data with the objects from BlobStor.
Implement `refillMetabase` method which iterates over all objects from
BlobStor and saves them in Metabase.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
There is a need to be able to process all objects saved in `BlobStor`.
Implement `BlobStor.Iterate` method which iterates over all objects.
Implement `IterateBinaryObjects` and `IterateObjects` helper functions to
simplify the code.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
There is a need to be able to process all stored objects saved in
`Blobovnicza`.
Implement `Blobovnicza.Iterate` method which iterates over all objects.
Implement `IterateObjects` helper function to simplify the code.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
In the previous implementation of the metabase, there was no possibility of
reinitializing the metabase: clearing information about existing objects and
bringing it back to its initial state. This operation can be useful in
cases when the stored metadata about objects has lost (or may have lost)
relevance, and you need to generate data from scratch. Also, static
resources of the base (container-independent buckets) were not created at
the initialization stage.
Make the `Metabase.Init` method allocate graveyard, container-size and
to-move-it buckets in the underlying BoltDB instance. Implement the
`Metabase.Reset` method: it works like `Init` but cleans up all static
buckets and removes the other ones. Due to the logical similarity, the
methods share a single piece of code.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
There is a need to limit the disk space used by the write-cache. It is
almost impossible to calculate the value exactly. It is proposed to estimate
the size of the cache by the number of objects stored in it.
Track the numbers of objects saved in the DB and FSTree separately. To do
this, an `ObjectCounters` interface is defined. It is generalized to a store
of numbers that can be made persistent (new option `WithObjectCounters`). By
default the DB number is calculated as the number of keys in the default
bucket, and the FS number is set equal to the DB one since it is currently
hard to read the actual value from the `FSTree` instance. Each PUT/DELETE
operation on DB or FS increases/decreases the corresponding counter. Before
each PUT op an overflow check is performed with the following formula for
evaluating the occupied space: `NumDB * MaxDBSize + NumFS * MaxFSSize`. If
the next PUT can cause a write-cache overflow, the object is written to the
main storage.
By default the maximum write-cache size is set to 1GB.
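A sketch of the pre-PUT check with the formula above (all names are
illustrative):

```go
// wouldOverflow reports whether the estimated occupied space
// NumDB * MaxDBSize + NumFS * MaxFSSize already reaches the cache limit;
// if so, the next object goes to the main storage instead.
func wouldOverflow(numDB, numFS, maxDBSize, maxFSSize, maxCacheSize uint64) bool {
	occupied := numDB*maxDBSize + numFS*maxFSSize
	return occupied >= maxCacheSize
}
```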
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
There is a need to keep track of each local storage change. Log messages are
the most convenient way to do it.
Implement function which writes log message about the completed writing
operation in storage engine.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Do not pass high level `PACK` opcode to
notary parsers. Add opcode amount check.
Delete `PACK` cases in notary parsers.
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
Make `errIncompletePut` to be a structure which wraps single client error.
Wrap error of the last client into `errIncompletePut` during placement
execution.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
In previous implementation Object service's handler returned const error in
case of failure (full or partial) of PUT operation. This did not even allow
us to roughly guess what the reason is. Not as a complete solution, but to
alleviate some cases where all nodes in a container return the same error,
it is suggested to return the error of the last server that responded.
Return latest server error from placement loop of `iteratePlacement` method
of `distributedTarget` type.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
In previous implementation sticky bit could disrupt access of container
nodes to replication. According to NeoFS specification sticky bit should not
affect the requests sent by nodes from SYSTEM group.
Add role check to `stickyBitCheck`.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
A remote RPC node might be in a transition state and produce
events from the past. We should avoid listening to such nodes.
To do that, the subscriber component can await a minimal height of
the remote node. If the minimal height is not reached, an error is
returned.
Signed-off-by: Alex Vanin <alexey@nspcc.ru>
Implement `NotaryContractProcessor` by IR
container processor. Add support for notary
`put` container operation. Do not parse `put`
non-notary notifications in notary enabled
environment.
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
Add a `NotarySignAndInvokeTX` method to the morph
client. This function allows invoking a notary
request with the passed main TX (not creating a
new one). It signs the passed main TX with the
client's key.
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
Add `NotaryInvokeNotAlpha` to the low-level
client. It creates and sends a notary request
that must be signed by Alphabet nodes, but
does not sign it with the current node's private
key.
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
Add a preparator for notary requests. It parses
raw notary requests and checks whether they should
be handled by the Alphabet node. If handling is
required, it returns a `NotaryEvent` that contains
information about the contract scripthash, method
name and arguments of the call.
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
Prepare all listening structures for notary events:
rename (add the prefix/suffix 'notification' to) all
notification-specific handlers/parsers.
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
`fallbackTime` is the delta between `ValidUntilBlock` of
the main transaction and the block at which the `fallback`
transaction is sent.
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
In recent changes, the locality criterion for a node has been changed to
compare public keys.
Remove no longer used `IsLocalAddress` function and `LocalAddressSource`
interface.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Some software components regulate the way of working with placement arrays
when a local node enters it. In the previous implementation, the locality
criterion was the correspondence between the announced network address
(group) and the address with which the node was configured. However, by
design, network addresses are not unique identifiers of storage nodes in the
system.
Change comparisons by network addresses to comparisons by keys in all
packages with the logic described above. Implement `netmap.AnnouncedKeys`
interface on `cfg` type in the storage node application.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Announced keys of storage nodes are required for many application components
to function.
Define a unified interface for the utility for working with public keys of
nodes. Add a method to check if the key has been advertised by the local
node in the application context.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
There is a need to process public keys of the placement result.
Implement `Node.PublicKey` method which returns storage node's key announced
in netmap.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
In previous implementation `placement.Traverser.Next` method returned slice
of `network.AddressGroup` elements. There is a need to process keys of
storage nodes besides network addresses for intra-container communication.
Wrap `network.AddressGroup` in a new type `placement.Node` that summarizes
the storage node information required for communication. Return slice of
`Node` instances from `Traverser.Next` method. Fix compilation breaks in
dependent packages.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
IDs come from NeoFS contract in big endian, but it is customary to write in
the node logs in little endian.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Shard should try to read object headers from write-cache if it is enabled.
Extend `writecache.Cache` interface with `Head` method. Call the method in
`Shard.Head` if `Shard.hasWriteCache` returns true.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Write cache should be able to execute HEAD operations according to spec.
Add simple implementation of `Head` method through the `Get` one. Leave
notes for future optimization.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Caching is performed inside `GetNativeContractHash` method of neo-go client,
so the additional cache level is redundant.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
There is no need to continue iterating over Neo RPC endpoints in case of
some address-independent error (e.g. NeoFS logic error).
Unwrap and immediately return `neofsError` errors from loop in
`iterateClients`.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Implement `error` interface on new `neofsError` type which is a wrapper over
NeoFS-specific error. Wrap all `Client` errors except neo-go client API ones
into `neofsError`. Wrapped errors are going to be used for multi-endpoint
loop control.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Since morph `Client` works in multi-client mode, there is an error case when
we can not get network magic when all endpoints are unavailable.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
There is a need to work with a set of Neo RPC nodes in order not to depend
on the failure of some nodes while others are active.
Support "multi-client" mode of morph `Client` entity. If instance is not
"multi-client", it works as before. Constructor `New` creates multi-client,
and each method performs iterating over the fixed set of endpoints until
success. Opened client connections are cached (without eviction for now).
Storage (as earlier) and IR (from now) nodes can be configured with multiple
Neo endpoints. As above, `New` creates multi-client instance, so we don't
need initialization changes on app-side.
`Wait` and `GetDesignateHash` methods of `Client` return an error from now
to detect connection errors. `NotaryEnabled` method is removed as unused.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
As soon as resulting list and cache operate with the attribute pointer,
we can add attribute structure immediately and set restored key and
value of the attribute later.
Signed-off-by: Alex Vanin <alexey@nspcc.ru>
Attributes are linked to each other through parents, so they can
be returned in any order. However, it will be better to return
the list in consistent order to reduce entropy.
Signed-off-by: Alex Vanin <alexey@nspcc.ru>
Parser should reuse existing attributes from the cache to update list
of the parent links in it. Parent links should be unique.
Signed-off-by: Alex Vanin <alexey@nspcc.ru>
In previous implementation Container service handlers didn't cache the
results of `Get` / `GetEACL` / `List` operations. As a consequence of this,
high load on the service caused neo-go client's connection errors. To avoid
this there is a need to use cache. Object service already uses `Get` and
`GetEACL` caches.
Implement a cache of `List` results. Share the already implemented cache of
the Object service with the Container one. Provide a new instance of
read-only container storage (defined as an interface) to the morph
executor's constructor on which the container service is based. Write
operations remain unchanged.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>