Add an offset element to the iterations over deleted objects (both the
Graveyard and the Garbage buckets).
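For illustration, resuming such an iteration from an offset with a bbolt
cursor could look roughly like this (function and parameter names are
assumptions, not the actual metabase API):

```go
package example

import (
	"bytes"

	bbolt "go.etcd.io/bbolt"
)

// iterateFrom walks a bucket of deleted objects starting right after the
// given offset key; a nil offset starts from the beginning.
func iterateFrom(bkt *bbolt.Bucket, offset []byte, handler func(k, v []byte) error) error {
	c := bkt.Cursor()

	var k, v []byte
	if offset == nil {
		k, v = c.First()
	} else {
		// Seek places the cursor at the offset key (or the next key if it
		// is absent); skip the offset element itself to avoid re-processing.
		k, v = c.Seek(offset)
		if bytes.Equal(k, offset) {
			k, v = c.Next()
		}
	}

	for ; k != nil; k, v = c.Next() {
		if err := handler(k, v); err != nil {
			return err
		}
	}

	return nil
}
```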
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
It allows storing information about an object in both ways at the same
time; a sketch of such a record follows the list:
1. The metabase should know if an object is covered by a tombstone (that
has not expired yet);
2. It should be possible to physically delete objects covered by a
tombstone immediately (mark them for GC) but keep the tombstone knowledge.
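A sketch of a graveyard record keeping both facts; the field names are
hypothetical, not the actual metabase schema:

```go
// grave sketches a deleted-object record that carries both kinds of knowledge.
type grave struct {
	gcMark    bool   // GC may physically remove the object
	tombstone []byte // address of the covering tombstone, while it has not expired
}
```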
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
Morph "NewEpoch" event handling was registered in a closure over
`addNewEpochNotificationHandler` func. That may lead to the data race:
if a shard was initialized before the event registration, everything works
as planned, but if registration was made earlier, it was not able to
include GC handlers since a shard has not called `eventChanInit` yet and,
therefore, it has not registered handler yet.
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
Add a worker pool to the listener to prevent blocking. It is used only
for notary notifications and new block event handling, since those
handlers make RPC calls. Otherwise a deadlock may occur: neo-go cannot
send an RPC until the notification channel is read, but the notification
channel cannot be read since the neo-go client cannot send the RPC.
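A minimal sketch of the idea: handlers that make RPC calls are submitted
to a fixed-size goroutine pool instead of running synchronously in the
notification-reading loop, so that loop is never blocked. The pool below
is illustrative, not the actual listener code.

```go
package example

import "log"

// pool is a fixed-size worker pool. Submit never blocks the caller: if all
// workers are busy, the task is dropped with a log record instead of
// stalling the loop that reads the notification channel.
type pool struct {
	tasks chan func()
}

func newPool(workers int) *pool {
	p := &pool{tasks: make(chan func(), workers)}
	for i := 0; i < workers; i++ {
		go func() {
			for task := range p.tasks {
				task()
			}
		}()
	}
	return p
}

func (p *pool) Submit(task func()) {
	select {
	case p.tasks <- task:
	default:
		log.Println("worker pool is full, dropping handler task")
	}
}
```

In the listener loop, `p.Submit(func() { handler(ev) })` would replace the
direct handler call, so an RPC made inside the handler can no longer
prevent the loop from reading the next notification.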
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
The updated client now supports subscriptions to chain notifications and
switching between the provided RPC endpoints.
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
Provide the current height as an argument to the ticker.
A zero height disables any checks, thus corresponding to the old
behaviour. If a non-zero height is used, the tick is ignored if the
height is less than the timer tick state.
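A rough sketch of the check, assuming the timer remembers the height of
its last accepted tick (names are illustrative):

```go
package example

import "sync"

type blockTimer struct {
	mtx        sync.Mutex
	lastHeight uint32 // height of the last accepted tick
	tick       func() // timer logic to run on each accepted tick
}

// Tick processes a tick at the given chain height. A zero height disables
// the check (old behaviour); otherwise a tick below the recorded state is
// ignored as stale.
func (t *blockTimer) Tick(height uint32) {
	t.mtx.Lock()
	defer t.mtx.Unlock()

	if height != 0 {
		if height < t.lastHeight {
			return // stale tick
		}
		t.lastHeight = height
	}

	t.tick()
}
```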
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
`Listen` has a value receiver and calls `listen`, which has a pointer
receiver, leading to an ineffectual assignment to the `started` field.
Always use `listener` by pointer to avoid similar bugs.
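The bug class in isolation (a generic Go example, not the listener code
itself):

```go
type listener struct {
	started bool
}

// Buggy: the value receiver makes `l` a copy, so the pointer method below
// mutates that copy and the caller never observes started == true.
func (l listener) Listen() {
	l.listen()
}

func (l *listener) listen() {
	l.started = true // ineffectual when reached through a value receiver
}
```

Declaring `Listen` (and every other method) on `*listener` and passing the
listener around by pointer removes the copy and the lost write.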
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
When an IR node is configured without the main chain, both
`morphListener` and `mainnetListener` point to a single listener
component. We should not call `Stop()` twice, because it may trigger
channel closing in neo-go or other components and cause a panic.
Signed-off-by: Alex Vanin <alexey@nspcc.ru>
Prevent attempts to connect to the failing endpoint multiple times.
This speeds up all invocations a bit and removes excessive error logs.
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
Also, remove optimization comments:
1. Having to maintain and execute the same logic for headers as for
objects is quite inefficient, as it increases the memory footprint.
2. Unmarshaling an object is a cheap operation if the data slice is in
memory.
3. For unmarshaling the header only, I think we need SDK support.
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
`Degraded` mode is set automatically after the error counter exceeds the
threshold. `ReadOnly` mode can still be set by an administrator.
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
Add a `WithEncryption` option that passes an ECDSA key to the persistent
session storage. It uses 32 bytes of the ECDSA key marshalled in ASN.1
DER form as the key for AES-256 encryption in Galois/Counter Mode (GCM).
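A hedged sketch of that scheme: take 32 bytes from the DER-encoded ECDSA
private key as the AES-256 key and seal data with GCM. Which 32 bytes of
the DER encoding the storage actually uses is not restated here, so the
slice below is only an assumption.

```go
package example

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/ecdsa"
	"crypto/rand"
	"crypto/x509"
	"fmt"
)

// encrypt seals data with AES-256-GCM using 32 bytes taken from the ASN.1
// DER form of the ECDSA private key. Taking the last 32 bytes is purely
// illustrative; the real storage may pick a different slice.
func encrypt(key *ecdsa.PrivateKey, data []byte) ([]byte, error) {
	der, err := x509.MarshalECPrivateKey(key)
	if err != nil {
		return nil, fmt.Errorf("marshal key: %w", err)
	}

	block, err := aes.NewCipher(der[len(der)-32:]) // 32-byte key -> AES-256
	if err != nil {
		return nil, fmt.Errorf("create cipher: %w", err)
	}

	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, fmt.Errorf("create GCM: %w", err)
	}

	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, fmt.Errorf("generate nonce: %w", err)
	}

	// Prepend the nonce so it can be recovered on decryption.
	return gcm.Seal(nonce, nonce, data, nil), nil
}
```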
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
Move the in-memory session storage to a separate `storage` directory.
This is done to support different kinds of session storage in the future.
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
`Batch` can execute the function multiple times, leading to multiple
increases of the size approximation.
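Assuming `Batch` here refers to `bbolt.DB.Batch`, which may retry the
supplied function after a failed group commit, any side effect such as a
size counter must not live inside it. A sketch of the pitfall and a safe
variant (the counter is hypothetical):

```go
package example

import (
	"sync/atomic"

	bbolt "go.etcd.io/bbolt"
)

var sizeEstimate int64 // hypothetical size approximation counter

// Buggy: Batch may call the function several times, so the counter can
// grow more than once per stored object.
func putBuggy(db *bbolt.DB, bktName, key, value []byte) error {
	return db.Batch(func(tx *bbolt.Tx) error {
		atomic.AddInt64(&sizeEstimate, int64(len(value))) // side effect inside Batch
		b, err := tx.CreateBucketIfNotExists(bktName)
		if err != nil {
			return err
		}
		return b.Put(key, value)
	})
}

// Safe: update the approximation only once, after Batch reports success.
func putSafe(db *bbolt.DB, bktName, key, value []byte) error {
	err := db.Batch(func(tx *bbolt.Tx) error {
		b, err := tx.CreateBucketIfNotExists(bktName)
		if err != nil {
			return err
		}
		return b.Put(key, value)
	})
	if err == nil {
		atomic.AddInt64(&sizeEstimate, int64(len(value)))
	}
	return err
}
```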
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
Alphabet nodes in a notary-enabled environment cannot call the
`UpdateState` method to remove unwanted storage nodes from the network
map, because this method checks the witness of the storage node.
To force a storage node state update, alphabet nodes should invoke the
new `UpdateStateIR` method, which is similar to `AddPeerIR`.
A state update initiated by the storage node itself is processed the
same way as before -- alphabet nodes re-sign such transactions.
Signed-off-by: Alex Vanin <alexey@nspcc.ru>
`Register` was renamed to `AddPeerIR` for consistency with the
`UpdateState` changes in
https://github.com/nspcc-dev/neofs-contract/pull/227
This is a protocol-breaking change for notary-enabled environments.
Luckily, there are no notary-enabled environments anywhere except
neofs-dev-env, so we can do such a thing. We should avoid such changes
in the future, though.
Signed-off-by: Alex Vanin <alexey@nspcc.ru>
The `apistatus` package provides types which implement the built-in
`error` interface. Add the `error of type` pattern when documenting these
errors in order to clarify how they should be handled (e.g. `errors.Is`
is not suitable).
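For example, a caller handling an error documented as "error of type
`apistatus.ObjectNotFound`" would match on the type with `errors.As`
rather than compare against a sentinel value; the status type below is a
local stand-in, not the real `apistatus` type.

```go
package example

import "errors"

// objectNotFound stands in for one of the apistatus error types.
type objectNotFound struct{}

func (objectNotFound) Error() string { return "status: object not found" }

func isObjectMissing(err error) bool {
	// Status errors are concrete types, not sentinel values, so the type
	// is matched with errors.As instead of comparing with errors.Is.
	var notFound objectNotFound
	return errors.As(err, &notFound)
}
```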
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Return an `apistatus.ObjectAccessDenied` error on access violations from
the ACL service. Write the reason in the format of the errors from the
previous implementation. These errors are returned by the storage node's
server as NeoFS API statuses.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Replace the `core/container.ErrNotFound` error returned by the
`Source.Get` interface method with the `apistatus.ContainerNotFound`
status error. This error is returned by the storage node's server as a
NeoFS API status.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Replace the `ErrNotFound`/`ErrAlreadyRemoved` errors from the
`pkg/core/object` package with the `ObjectNotFound`/`ObjectAlreadyRemoved`
ones from the `apistatus` package. These errors are returned by the
storage node's server as NeoFS API statuses.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Return a `SessionTokenExpired`/`SessionTokenNotFound` error from the
`apistatus` package if the private session token is expired/missing.
These errors are returned by the storage node's server as NeoFS API
statuses.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Make all epochs independent in the reputation process. Do not reset any
timers related to reputation. Make it possible to finish an iteration
after an unexpected `NewEpoch` event.
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
There is a need to process expired `LOCK` objects similarly to
`TOMBSTONE` ones: we collect them on the `Shard`, notify all other shards
about the expiration so they can unlock the objects, and only after that
mark the lockers as garbage.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
In the previous implementation, `SignService` converted all `error`
values to the `INTERNAL` server failure status. That was done for
simplification only. There is a need to transmit status errors as the
corresponding status messages.
Make `SignService` unwrap errors and convert them to status messages when
writing the response. Non-status errors are converted to `INTERNAL`
server failures. Status errors can also be wrapped deep in the executed
code, so `SignService` tries to unwrap them.
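A hedged sketch of that unwrapping step: walk the error chain looking for
anything that can be expressed as a status, and fall back to an internal
server failure otherwise. The `statusError` interface and the string
result are illustrative, not the actual service code.

```go
package example

import "errors"

// statusError stands in for any error that can be expressed as a NeoFS API
// status message; the interface shape is an assumption.
type statusError interface {
	error
	StatusMessage() string
}

// toStatus converts an error from the executed code into a response status.
// The error is unwrapped because status errors may be wrapped deep inside.
func toStatus(err error) string {
	var st statusError
	if errors.As(err, &st) {
		return st.StatusMessage()
	}
	// Anything else still becomes an INTERNAL server failure.
	return "INTERNAL: " + err.Error()
}
```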
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Make `FormatValidator.ValidateContent` verify the payload of `LOCK`
objects. Pass locked objects to the `Locker` interface. Require
`Locker.Lock` to return an `apistatus.IrregularObjectLock` error under
the corresponding condition.
Also add an error return to the `DeleteHandler.DeleteObjects` method.
Require the method to return an `apistatus.ObjectLocked` error under the
corresponding condition. Adapt the implementations.
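Sketched as the two interfaces could look after the change (the exact
signatures and the placeholder types are assumptions):

```go
// ContainerID, ObjectID and Address stand in for the real SDK types.
type (
	ContainerID struct{}
	ObjectID    struct{}
	Address     struct{}
)

// Locker is notified about members locked by a verified LOCK object.
// Lock is expected to return an error of type apistatus.IrregularObjectLock
// if any member is not a regular object.
type Locker interface {
	Lock(cnr ContainerID, locker ObjectID, locked []ObjectID) error
}

// DeleteHandler processes objects listed in a verified TOMBSTONE object.
// DeleteObjects now returns an error and is expected to return an error of
// type apistatus.ObjectLocked if any target is protected by an active lock.
type DeleteHandler interface {
	DeleteObjects(tombstone Address, toDelete ...Address) error
}
```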
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
The `Inhume` operation can potentially mark lockers as garbage. There is
a need to update the locker list in the locked bucket.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Make `StorageEngine.Delete` forward the first `apistatus.ObjectLocked`
error encountered during shard processing.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Make `StorageEngine.Inhume` forward the first `apistatus.ObjectLocked`
error encountered during shard processing.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Implement the `StorageEngine.Lock` method, which works similarly to
`Inhume` but calls `Lock` on the processed shards.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Make `DB.Lock` return `apistatus.IrregularObjectLock` if at least one of
the locked objects is irregular (not of type REGULAR).
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Make `DB.Inhume` return `apistatus.ObjectLocked` if at least one of the
inhumed objects is locked.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Make `DB.IterateCoveredByTombstones` not pass locked objects to the
handler. The method is used by the GC, so it will not consider locked
objects as candidates for deletion even if their tombstone has expired.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Make `DB.IterateExpired` not pass locked objects to the handler. The
method is used by the GC, so it will not consider them as candidates for
deletion.
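A sketch of the skip shared by both iterators; the lock-check helper and
the address type are assumptions:

```go
// Address stands in for the object address type.
type Address struct{}

// objectLocked is a stand-in for the metabase lookup in the locked bucket.
func objectLocked(addr Address) bool { return false }

// skipLocked decorates an iteration handler so that locked objects never
// reach the GC: they are silently skipped instead of being offered as
// deletion candidates.
func skipLocked(h func(Address) error) func(Address) error {
	return func(addr Address) error {
		if objectLocked(addr) {
			return nil // protected by an active lock
		}
		return h(addr)
	}
}
```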
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
The introduction of LOCK objects (of type `TypeLock`) extended the
metabase behavior; a rough sketch of the first two points follows the
list:
* create the `lockers` container bucket (LCB) during PUT;
* remove the object from the LCB during DELETE;
* look up the object in the LCB during EXISTS;
* get the object from the LCB during GET;
* list objects from the LCB during LIST with cursor;
* select objects from the LCB during SELECT with '*'.
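The bucket naming and transaction wiring below are assumptions, with
bbolt standing in for the underlying store:

```go
package example

import bbolt "go.etcd.io/bbolt"

// lockersBucketName builds an assumed name for the per-container bucket of
// LOCK objects (LCB).
func lockersBucketName(cnr []byte) []byte {
	return append(append([]byte{}, cnr...), "_lockers"...)
}

// onPutLock mirrors the PUT step: the identifier of a LOCK object lands in the LCB.
func onPutLock(tx *bbolt.Tx, cnr, objID []byte) error {
	bkt, err := tx.CreateBucketIfNotExists(lockersBucketName(cnr))
	if err != nil {
		return err
	}
	return bkt.Put(objID, []byte{})
}

// onDeleteLock mirrors the DELETE step: the identifier is removed from the LCB.
func onDeleteLock(tx *bbolt.Tx, cnr, objID []byte) error {
	bkt := tx.Bucket(lockersBucketName(cnr))
	if bkt == nil {
		return nil
	}
	return bkt.Delete(objID)
}
```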
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Implement the `DB.Lock` method which marks a list of objects as locked
by another object. Only regular objects can be locked.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Create a class of container buckets with the `LOCKED` suffix. Put
identifiers of objects of type `LOCK` into these buckets during `DB.Put`.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>