Node shouldn't perform eACL verification during GET/HEAD request
processing until the full object header is received. Otherwise, for
some eACL tables the request may be falsely rejected.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Scenario:
* HEAD request of some object
* 1st eACL record allows op for objects with a specific user attribute
* 2nd eACL record forbids op by object ID
* node doesn't store the requested object locally
In this scenario the node shouldn't deny the request.
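A minimal sketch (with ad-hoc types, not the real SDK eACL API) of why the
full header matters here: records are evaluated in order and the first match
wins, so when the attribute filter of the first record cannot be checked,
evaluation falls through to the deny-by-ID record.

```go
// Ad-hoc illustration of first-match eACL evaluation; types are invented
// for the example and do not reflect the neofs-sdk-go eacl package.
package main

import "fmt"

type action int

const (
	actionUndefined action = iota
	actionAllow
	actionDeny
)

type record struct {
	match func(attrs map[string]string, oid string) bool
	act   action
}

// calculateAction returns the action of the first matching record.
func calculateAction(records []record, attrs map[string]string, oid string) action {
	for _, r := range records {
		if r.match(attrs, oid) {
			return r.act
		}
	}
	return actionUndefined
}

func main() {
	records := []record{
		// 1st record: allow the op for objects with a specific attribute.
		{act: actionAllow, match: func(attrs map[string]string, _ string) bool {
			return attrs["color"] == "red"
		}},
		// 2nd record: forbid the op for a specific object ID.
		{act: actionDeny, match: func(_ map[string]string, oid string) bool {
			return oid == "obj-1"
		}},
	}

	// Header not received yet (object is not stored locally): the allow
	// record cannot match, so the deny record falsely wins.
	fmt.Println(calculateAction(records, nil, "obj-1")) // actionDeny

	// Full header fetched from the network: the allow record matches first.
	fmt.Println(calculateAction(records, map[string]string{"color": "red"}, "obj-1")) // actionAllow
}
```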
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
It is redundant to process object headers in responses without an
object field since the result will be the same.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Request processing should not be interrupted in case of local storage
failure since the error case is normal for relay nodes.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
ACL service should not deny the request on local storage failure since
in this case relay nodes won't be able to continue the operation.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
It is a useless process since the subnet owner is able to delete a
subnet without Alphabet approval. The Alphabet should only validate the
netmap state after removal:
1. Update nodes' attributes if they were included in the deleted subnet;
2. Remove nodes that do not belong to any subnet.
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
`log.With` is suitable during initialization, but in other places it induces
some overhead, even when branches with logging are not taken.
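A rough illustration with `go.uber.org/zap` (assuming that is the logger in
question): `With` builds a child logger eagerly, which is fine once at
initialization but adds allocations on hot paths even when the entry is
ultimately filtered out.

```go
package main

import "go.uber.org/zap"

func main() {
	base, _ := zap.NewProduction()
	defer base.Sync()

	// Fine at initialization: the child logger is built once and reused.
	componentLog := base.With(zap.String("component", "policer"))
	componentLog.Info("started")

	// On hot paths prefer per-call fields: if the level is disabled, no
	// child logger is allocated for an entry that is never written.
	base.Debug("processing object", zap.String("addr", "<cid>/<oid>"))
}
```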
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
If we only process an address when some condition holds, there is no
need to read the file content into memory.
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
Currently we use `(*bbolt.Bucket).Stats().KeyN` for estimating database
size. However, it iterates over all pages in the bucket and thus heavily
depends on the bucket size. This commit replaces the initial size
estimation with a single `os.Stat` call.
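A simplified comparison, assuming a bbolt database at a known path and a
hypothetical `objects` bucket:

```go
package main

import (
	"fmt"
	"os"

	"go.etcd.io/bbolt"
)

func main() {
	const path = "/tmp/meta.db" // hypothetical database location

	db, err := bbolt.Open(path, 0600, nil)
	if err != nil {
		panic(err)
	}
	defer db.Close()

	// Old approach: walks the bucket pages, so the cost grows with its size.
	_ = db.View(func(tx *bbolt.Tx) error {
		if b := tx.Bucket([]byte("objects")); b != nil {
			fmt.Println("keys:", b.Stats().KeyN)
		}
		return nil
	})

	// New approach: a single stat call, independent of the contents.
	if fi, err := os.Stat(path); err == nil {
		fmt.Println("size estimate (bytes):", fi.Size())
	}
}
```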
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
Core changes:
* avoid package-colliding variable naming
* avoid using pointers to IDs where unnecessary
* avoid using `idSDK` import alias pattern
* use `EncodeToString` for protocol string calculation and `String` for
printing
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Helper function `client.IsErrObjectNotFound` doesn't support error
unwrapping, so we need to do it on the caller side.
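A possible caller-side pattern, assuming the helper from `neofs-sdk-go`'s
`client` package inspects only the error value it receives: walk the chain
with `errors.Unwrap` until it matches or the chain ends.

```go
package util

import (
	"errors"

	"github.com/nspcc-dev/neofs-sdk-go/client"
)

// isObjectNotFound unwraps err step by step because the helper does not
// look past the outermost error value.
func isObjectNotFound(err error) bool {
	for e := err; e != nil; e = errors.Unwrap(e) {
		if client.IsErrObjectNotFound(e) {
			return true
		}
	}
	return false
}
```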
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
`Policer` should pass the list of selected candidates into the
`WithNodes` method of `replicator.Task`. In the previous implementation
the `processNodes` method passed the opposite list: failed nodes and/or
the local one.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Not all NeoFS requests must contain an OID in their bodies (some must
NOT contain it at all). Do not pass the object address to helper
functions; pass CID and OID separately instead.
Also, fix an NPE in the ACL service: the updated SDK library brought
errors when working with `Put` and `Search` requests without OID fields.
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
In the previous implementation `Policer` considered the local object
copy redundant while processing a single placement vector.
Make `Policer` call the redundant copy callback only after full
placement processing. Also fix 404 error parsing.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Call the `UpdateStateIR` and `AddPeerIR` methods instead of
`UpdateState` and `AddPeer` if the calling client is configured as
Alphabet in a notary-enabled environment.
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
Parse all headers beforehand and reject invalid requests.
Another approach would be to remember the error and check
it after `CalculateAction`, which is a bit faster.
The rule of thumb here is "first validate, then use".
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
It is nice to have different paths for different components and also
check that the information returned is different for different shards.
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
In the previous implementation the NeoFS CLI app used
`network.Address.HostAddr` as a server URI, which caused scheme loss
since the host address doesn't contain it.
Rename `HostAddr` to `URIAddr` and make it return the URI address with
the `grpcs` scheme if TLS is enabled. Make `TLSEnabled` unexported since
it was used only to provide the default `tls.Config` (it is used by
default in the SDK).
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
- Delete objects physically on a tombstone's arrival;
- Store information about tombstones in the Graveyard;
- Clear the Graveyard every epoch based on the information about
  tombstones in the network.
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
The service fetches tombstones from the network via the object service;
every request is handled in the following order:
1. checks the local LRU cache;
2. checks the local storage engine;
3. tries to find the object on placement nodes.
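A minimal sketch of that lookup order with invented types (not the actual
service code): each source is tried in turn, and a hit from the slower
sources is put into the cache.

```go
package tombstonesvc

import "errors"

type tombstone struct{}

// source abstracts a place a tombstone can be fetched from.
type source interface {
	get(addr string) (*tombstone, error)
}

type checker struct {
	cache     map[string]*tombstone // stands in for the LRU cache
	local     source                // local storage engine
	placement source                // placement (container) nodes
}

var errNotFound = errors.New("tombstone not found")

func (c *checker) get(addr string) (*tombstone, error) {
	// 1. local LRU cache
	if ts, ok := c.cache[addr]; ok {
		return ts, nil
	}
	// 2. local storage engine, 3. placement nodes
	for _, src := range []source{c.local, c.placement} {
		if ts, err := src.get(addr); err == nil {
			c.cache[addr] = ts
			return ts, nil
		}
	}
	return nil, errNotFound
}
```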
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
Add an offset element to the iterations over deleted objects (both the
Graveyard and the Garbage buckets).
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
It allows storing information about an object in both ways at the same time:
1. The metabase should know if an object is covered by a tombstone (that is
not expired yet);
2. It should be possible to physically delete objects covered by a
tombstone immediately (mark them for GC) but keep the tombstone knowledge.
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
Morph "NewEpoch" event handling was registered in a closure over the
`addNewEpochNotificationHandler` func. That may lead to a data race:
if a shard was initialized before the event registration, everything
works as planned, but if the registration was made earlier, it could not
include the GC handlers since the shard had not called `eventChanInit`
yet and, therefore, had not registered its handler yet.
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
Add a worker pool to the listener to prevent blocking. It is used only
for notary notifications and new block event handling since their
handlers make RPC calls. Otherwise a deadlock may occur: neo-go cannot
send an RPC until the notification channel is read, but the notification
channel cannot be read while its handler is blocked on that RPC call.
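A generic sketch of the pattern (not the actual listener code): handlers
that make RPC calls run on a bounded worker pool, so the goroutine reading
the notification channel never blocks on them; here an exhausted pool drops
the task instead of blocking, which is one possible policy.

```go
package listener

type notification struct{ name string }

// pool is a trivial bounded worker pool.
type pool struct{ sem chan struct{} }

func newPool(size int) *pool { return &pool{sem: make(chan struct{}, size)} }

// submit runs the task on a free worker; if the pool is exhausted it
// reports false instead of blocking the caller.
func (p *pool) submit(task func()) bool {
	select {
	case p.sem <- struct{}{}:
		go func() {
			defer func() { <-p.sem }()
			task()
		}()
		return true
	default:
		return false
	}
}

// listen drains the notification channel without ever blocking on a
// handler, so the RPC client can always deliver the next notification.
func listen(ch <-chan notification, p *pool, handle func(notification)) {
	for n := range ch {
		n := n
		_ = p.submit(func() { handle(n) })
	}
}
```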
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
The updated client now supports subscription to chain notifications and
switching between the provided RPC endpoints.
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
Provide the current height as an argument to the ticker.
Zero height disables any checks, thus corresponding to the old
behaviour. If a non-zero height is used, the tick is ignored if the
height is less than the timer tick state.
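A condensed sketch of the described check, with invented field names:

```go
package timer

type blockTicker struct {
	lastHeight uint32 // height recorded on the previously accepted tick
	tick       func()
}

// Tick fires the timer unless the provided height is stale. Zero height
// skips the check entirely, matching the old behaviour.
func (t *blockTicker) Tick(height uint32) {
	if height != 0 {
		if height < t.lastHeight {
			return // stale tick: ignore it
		}
		t.lastHeight = height
	}
	t.tick()
}
```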
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
`Listen` has a value receiver and calls `listen`, which has a pointer
receiver, leading to an ineffectual assignment to the `started` field.
Always use `listener` by pointer to avoid similar bugs.
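A stripped-down reproduction of this bug class:

```go
package main

import "fmt"

type listener struct{ started bool }

// Listen has a value receiver, so it operates on a copy of the listener.
func (l listener) Listen() { l.listen() }

// listen has a pointer receiver, but the pointer refers to the copy above.
func (l *listener) listen() { l.started = true }

func main() {
	l := &listener{}
	l.Listen()
	fmt.Println(l.started) // false: the assignment was ineffectual
}
```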
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
When the IR node is configured without a main chain, both
`morphListener` and `mainnetListener` point to a single listener
component. We should not call `Stop()` twice, because it may trigger
channel closing in neo-go or other components and cause a panic.
Signed-off-by: Alex Vanin <alexey@nspcc.ru>
Prevent attempts to connect to the failing endpoint multiple times.
This speeds up all invocations a bit and removes excessive error logs.
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
Also, remove optimization comments:
1. Having to maintain and execute the same logic for headers as for
objects is quite inefficient, as it increases the memory footprint.
2. Unmarshaling an object is a cheap operation if the data slice is in
memory.
3. For unmarshaling the header only, I think we need SDK support.
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
`Degraded` mode is set automatically after the error counter exceeds
the threshold. `ReadOnly` mode can still be set by an administrator.
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
Add a `WithEncryption` option that passes an ECDSA key to the
persistent session storage. It uses 32 bytes of the ECDSA key marshalled
in ASN.1 DER form as the key for AES-256 encryption in Galois/Counter
Mode.
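A hedged sketch of the AES-256-GCM part; which 32 bytes of the DER encoding
are actually used is not reproduced here, so the P-256 private scalar padded
to 32 bytes stands in for the key material.

```go
package session

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/ecdsa"
	"crypto/rand"
)

// encrypt seals data with AES-256-GCM using a 32-byte key taken from the
// ECDSA private key; the nonce is prepended to the ciphertext. Assumes a
// P-256 key so that the scalar fits into 32 bytes.
func encrypt(key *ecdsa.PrivateKey, data []byte) ([]byte, error) {
	k := key.D.FillBytes(make([]byte, 32)) // stand-in for the DER-based key

	block, err := aes.NewCipher(k) // a 32-byte key selects AES-256
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	return gcm.Seal(nonce, nonce, data, nil), nil
}
```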
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
Move the in-memory session storage to a separate `storage` directory.
It is done for future support of different kinds of session storage.
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
`Batch` can execute the function multiple times, leading to multiple
increases of the size approximation.
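A sketch of the pitfall with `go.etcd.io/bbolt`: the function passed to
`Batch` may run more than once on retry, so side effects such as the size
counter must stay outside the callback.

```go
package example

import (
	"sync/atomic"

	"go.etcd.io/bbolt"
)

func put(db *bbolt.DB, size *int64, bucket, key, value []byte) error {
	err := db.Batch(func(tx *bbolt.Tx) error {
		b, err := tx.CreateBucketIfNotExists(bucket)
		if err != nil {
			return err
		}
		// Incrementing the size approximation here would count retries too.
		return b.Put(key, value)
	})
	if err == nil {
		// Update the approximation once, after the batch has committed.
		atomic.AddInt64(size, int64(len(value)))
	}
	return err
}
```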
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
Alphabet nodes in a notary-enabled environment cannot call the
`UpdateState` method to remove unwanted storage nodes from the network
map, because this method checks the witness of the storage node.
To force a storage node state update, alphabet nodes should invoke the
new method `UpdateStateIR`, which is similar to `AddPeerIR`.
A state update initiated by the storage node itself is processed the
same way as before -- alphabet nodes re-sign such a transaction.
Signed-off-by: Alex Vanin <alexey@nspcc.ru>
`Register` was renamed to `AddPeerIR` for consistency with the
`UpdateState` changes in
https://github.com/nspcc-dev/neofs-contract/pull/227
This is a protocol-breaking change for notary-enabled environments.
Luckily, there are no notary-enabled environments anywhere except
neofs-dev-env, so we can do such a thing. We should avoid such changes
in the future, though.
Signed-off-by: Alex Vanin <alexey@nspcc.ru>
The `apistatus` package provides types which implement the built-in
`error` interface. Add the `error of type` pattern when documenting
these errors in order to clarify how they should be handled (e.g.
`errors.Is` is not suitable).
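An illustration with an invented status type (not the real `apistatus`
declarations) of the intended handling: such errors are matched by type,
e.g. with `errors.As`, rather than compared as sentinel values.

```go
package example

import "errors"

// objectNotFound stands in for an apistatus-like error type carrying details.
type objectNotFound struct{ reason string }

func (e objectNotFound) Error() string { return "object not found: " + e.reason }

// isNotFound matches the error by type, which works through wrapping and
// does not rely on comparing a particular sentinel value.
func isNotFound(err error) bool {
	var target objectNotFound
	return errors.As(err, &target)
}
```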
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Return the `apistatus.ObjectAccessDenied` error on access violation
from the ACL service. Write the reason in the format of the errors from
the previous implementation. These errors are returned by the storage
node's server as NeoFS API statuses.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Replace the `core/container.ErrNotFound` error returned by the
`Source.Get` interface method with the `apistatus.ContainerNotFound`
status error. This error is returned by the storage node's server as a
NeoFS API status.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Replace the `ErrNotFound`/`ErrAlreadyRemoved` errors from the
`pkg/core/object` package with the `ObjectNotFound`/`ObjectAlreadyRemoved`
ones from the `apistatus` package. These errors are returned by the
storage node's server as NeoFS API statuses.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Return a `SessionTokenExpired`/`SessionTokenNotFound` error from the
`apistatus` package if the private session token is expired/missing.
These errors are returned by the storage node's server as NeoFS API
statuses.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Make every epoch independent in the reputation process. Do not reset
any timers related to reputation. Make it possible to finish an
iteration after an unexpected `NewEpoch` event.
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
There is a need to process expired `LOCK` objects similarly to
`TOMBSTONE` ones: we collect them on the `Shard`, notify all other
shards about the expiration so they can unlock the objects, and only
after that mark the lockers as garbage.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
In the previous implementation `SignService` converted all `error`
values to an `INTERNAL` server failure status. That was done for
simplification only. There is a need to transmit status errors as the
corresponding status messages.
Make `SignService` unwrap errors and convert them to status messages
when writing the response. Non-status errors are converted to `INTERNAL`
server failures. Status errors can also be wrapped in the depths of the
executable code, so `SignService` tries to unwrap them.
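A schematic version of that conversion, with invented
`statusError`/`internalError` types rather than the real service code:

```go
package example

import "errors"

// statusError stands in for an error that maps to a NeoFS API status.
type statusError interface {
	error
	ToStatus() string
}

type internalError struct{ cause error }

func (e internalError) Error() string    { return e.cause.Error() }
func (e internalError) ToStatus() string { return "INTERNAL" }

// toStatus looks for a status error anywhere in the chain; everything else
// becomes an INTERNAL server failure.
func toStatus(err error) statusError {
	var st statusError
	if errors.As(err, &st) {
		return st
	}
	return internalError{cause: err}
}
```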
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Make `FormatValidator.ValidateContent` verify the payload of `LOCK`
objects. Pass locked objects to the `Locker` interface. Require
`Locker.Lock` to return the `apistatus.IrregularObjectLock` error on a
corresponding condition.
Also add an error return to the `DeleteHandler.DeleteObjects` method.
Require the method to return the `apistatus.ObjectLocked` error on a
corresponding condition. Adopt the implementations.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
The `Inhume` operation can potentially mark lockers as garbage. There
is a need to update the locker list in the locked bucket.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Make `StorageEngine.Delete` forward the first encountered
`apistatus.ObjectLocked` error during shard processing.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Make `StorageEngine.Inhume` forward the first encountered
`apistatus.ObjectLocked` error during shard processing.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Implement the `StorageEngine.Lock` method which works similarly to
`Inhume` but calls `Lock` on the processing shards.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Make `DB.Lock` return `apistatus.IrregularObjectLock` if at least one
of the locked objects is irregular (not of type REGULAR).
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Make `DB.Inhume` return `apistatus.ObjectLocked` if at least one of the
inhumed objects is locked.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Make `DB.IterateCoveredByTombstones` not pass locked objects to the
handler. The method is used by GC, so GC will not consider locked
objects as candidates for deletion even if their tombstone has expired.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Make `DB.IterateExpired` not pass locked objects to the handler. The
method is used by GC, so GC will not consider them as candidates for
deletion.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
The introduction of LOCK objects (of type `TypeLock`) complicated and
extended its behavior:
* create the `lockers` container bucket (LCB) during PUT;
* remove the object from the LCB during DELETE;
* look up the object in the LCB during EXISTS;
* get the object from the LCB during GET;
* list objects from the LCB during LIST with cursor;
* select objects from the LCB during SELECT with '*'.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Implement the `DB.Lock` method which marks a list of objects as locked
by another object. Only regular objects can be locked.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Create a class of container buckets with the `LOCKED` suffix. Put
identifiers of objects of type `LOCK` into these buckets during
`DB.Put`.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
After `putsvc.Service` started to support additional container
broadcast of saved objects, there is no longer a need to broadcast the
tombstone object in `deletesvc.Service`.
Make `putsvc.Service` perform the additional broadcast of `TOMBSTONE`
objects. Remove the `broadcastTombstone` stage from `deletesvc.execCtx`;
from now on it is encapsulated in the `saveTombstone` stage. Remove the
no longer needed `putsvc.PutInitPrm.WithTraverseOption` method.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
There are several cases when we need to spread the object around the
container after its primary placement (e.g. objects of type TOMBSTONE).
It'd be convenient to support this feature in `putsvc.Service`.
Add an additional stage of container broadcast after the object is stored.
This stage is carried out no more than once and does not affect the
outcome of the main PUT operation.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Place all operations related to the `neofs-api-go` library in `v2`
packages. They parse all v2-versioned structs into `neofs-sdk-go`
abstractions and pass them to the corresponding `acl`/`eacl` packages.
The `v2` packages are the only packages that import the `neofs-api-go`
library. `eacl` and `acl` provide public functions that accept only SDK
structures.
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
Remove the `Object` and `RawObject` types from the `pkg/core/object`
package. Use the `Object` type from the NeoFS SDK Go library everywhere.
Avoid using the deprecated elements.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Similarly to `Get`. Also fix a bug where `ErrNotFound` is returned
instead of `ErrRangeOutOfBounds`.
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
Metabase is expected to contain actual information about objects stored
in the shard. If an object is present in the metabase but is missing
from the blobstor, perform an additional attempt to fetch it directly
without consulting the metabase. Such a situation is unexpected, so the
error counter is increased for the shard which has the object in the
metabase. We don't increase the error counter for the shard which has
the object in the blobstor, because some garbage can be expected there.
In this implementation there is no overhead for objects which are really
missing, i.e. are not present in any metabase.
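A simplified outline of the fallback with invented shard accessors: the
direct blobstor pass runs only when some metabase claimed the object exists,
so truly missing objects cost nothing extra.

```go
package example

import "errors"

type object struct{}

type shard interface {
	get(addr string) (*object, error)         // metabase lookup + blobstor read
	getSkipMeta(addr string) (*object, error) // blobstor read, metabase ignored
	reportError()                             // bump the shard error counter
}

var (
	errNotFound = errors.New("object not found")
	errMetaOnly = errors.New("present in metabase, missing in blobstor")
)

func engineGet(shards []shard, addr string) (*object, error) {
	metaHit := false
	for _, sh := range shards {
		obj, err := sh.get(addr)
		if err == nil {
			return obj, nil
		}
		if errors.Is(err, errMetaOnly) {
			// This shard's metabase is out of sync with its blobstor.
			sh.reportError()
			metaHit = true
		}
	}
	if !metaHit {
		return nil, errNotFound // no metabase knows the object: stop here
	}
	// Second pass: the object may physically sit on a shard whose metabase
	// does not know it, so read blobstor directly. No error counter is
	// touched here, since some garbage in blobstor is expected.
	for _, sh := range shards {
		if obj, err := sh.getSkipMeta(addr); err == nil {
			return obj, nil
		}
	}
	return nil, errNotFound
}
```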
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>