Implement `NetworkInfo` calls on the full stack of Netmap services. The current
epoch is read from the node's local state, and the magic number is read via the
`MagicNumber` call of the morph client.
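A minimal sketch of how such a response can be composed; the `epochState` and
`magicSource` interfaces and the `NetworkInfo` struct are illustrative stand-ins,
not the real node types:

```go
package netinfo

// epochState and magicSource are assumed abstractions over the node's
// local state and the morph client; the real interfaces may differ.
type epochState interface {
	CurrentEpoch() uint64
}

type magicSource interface {
	MagicNumber() (uint64, error)
}

// NetworkInfo is a simplified stand-in for the NeoFS API structure.
type NetworkInfo struct {
	CurrentEpoch uint64
	MagicNumber  uint64
}

// makeNetworkInfo composes the response from the two sources.
func makeNetworkInfo(state epochState, morph magicSource) (*NetworkInfo, error) {
	magic, err := morph.MagicNumber()
	if err != nil {
		return nil, err
	}

	return &NetworkInfo{
		CurrentEpoch: state.CurrentEpoch(),
		MagicNumber:  magic,
	}, nil
}
```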
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Implement the `Client.MagicNumber` method that returns the magic number of the
network to which the underlying RPC node client is connected. The network magic
value is received via the `GetNetwork` method.
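A rough sketch of such a wrapper, assuming the underlying RPC client exposes a
`GetNetwork` call returning the magic as an integer; the `rpcNode` interface,
the caching field and the exact signatures are assumptions for illustration:

```go
package client

// rpcNode abstracts the underlying RPC node client; the GetNetwork
// signature here is an assumption for illustration.
type rpcNode interface {
	GetNetwork() (uint32, error)
}

// Client is a simplified stand-in for the morph client wrapper.
type Client struct {
	rpc   rpcNode
	magic uint64 // cached network magic, 0 means "not fetched yet"
}

// MagicNumber returns the magic number of the network to which the
// underlying RPC node client is connected.
func (c *Client) MagicNumber() (uint64, error) {
	if c.magic == 0 {
		m, err := c.rpc.GetNetwork()
		if err != nil {
			return 0, err
		}

		c.magic = uint64(m)
	}

	return c.magic, nil
}
```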
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
The CLI `storagegroup put` command collects information about SG members via the
NeoFS API `ObjectService.Head` RPC in order to compose the SG structure. The
bearer token attached to the command was not used in that communication, which
could lead to data access problems. These changes fix the described problem.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
According to nspcc-dev/neofs-api#136, the tombstone body should store the same
expiration attribute as the object header. If they differ, the check fails
with `errTombstoneExpiration`.
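A hedged sketch of such a check; the attribute key, the accessor shape and the
map-based header representation are assumptions for illustration only:

```go
package deletesvc

import (
	"errors"
	"strconv"
)

// errTombstoneExpiration is returned when the expiration epoch recorded
// in the tombstone body differs from the one in the object header.
var errTombstoneExpiration = errors.New("tombstone and header expirations differ")

// attrExpirationEpoch is the assumed well-known expiration attribute key.
const attrExpirationEpoch = "__NEOFS__EXPIRATION_EPOCH"

// checkTombstoneExpiration compares the expiration epoch stored in the
// tombstone body with the expiration attribute of the object header.
func checkTombstoneExpiration(headerAttrs map[string]string, tombstoneExp uint64) error {
	v, ok := headerAttrs[attrExpirationEpoch]
	if !ok {
		return errTombstoneExpiration
	}

	headerExp, err := strconv.ParseUint(v, 10, 64)
	if err != nil || headerExp != tombstoneExp {
		return errTombstoneExpiration
	}

	return nil
}
```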
Signed-off-by: Alex Vanin <alexey@nspcc.ru>
Creating tombstones for tombstones is prohibited in the NeoFS system. The
metabase graveyard contains records of the form {address: address}: the key is
the address of the inhumed object, the value is the address of the tombstone.
To prevent creation of tombstones for tombstones, the metabase must control
incoming Inhume calls (see the sketch below):
* if the Inhume target is a tombstone, no "grave" should be added;
* if an {a1: a2} "grave" was created earlier and an {a2: a3} "grave" comes
  later, the first "grave" must be removed as a tomb-on-tomb record.
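A simplified sketch of these rules over an in-memory map; the real metabase
works over a BoltDB bucket, and how a tombstone is detected is assumed here:

```go
package meta

// graveyard maps inhumed object address -> tombstone address
// (string keys stand in for serialized addresses).
type graveyard map[string]string

// inhume buries target under tomb, applying the tomb-on-tomb rules.
// isTombstone is an assumed predicate telling whether an address points
// to a tombstone object.
func (g graveyard) inhume(target, tomb string, isTombstone func(string) bool) {
	// an earlier grave {a1: target} means target acted as a tombstone;
	// such records are dropped as tomb-on-tomb
	for buried, t := range g {
		if t == target {
			delete(g, buried)
		}
	}

	// no grave is created for a tombstone object itself
	if isTombstone(target) {
		return
	}

	g[target] = tomb
}
```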
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Change the Shard's garbage remover to interrupt iteration over the metabase
graveyard when the buffer is filled to its maximum size (the
`WithRemoverBatchSize` Shard option).
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Make `DB.IterateOverGraveyard` immediately return nil if `GraveHandler`
returns `ErrInterruptIterator`.
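A minimal sketch of this contract between the remover and the iterator; the
`Grave`, `GraveHandler` and `DB` types are simplified in-memory stand-ins:

```go
package meta

import "errors"

// ErrInterruptIterator is returned by a GraveHandler to stop iteration early.
var ErrInterruptIterator = errors.New("iterator is interrupted")

// Grave is a simplified graveyard record descriptor.
type Grave struct {
	Address string
}

// GraveHandler processes a single grave.
type GraveHandler func(Grave) error

// DB is a simplified in-memory stand-in for the metabase.
type DB struct {
	graves []Grave
}

// IterateOverGraveyard walks the graves; ErrInterruptIterator from the
// handler is treated as a clean stop and converted to nil.
func (db *DB) IterateOverGraveyard(h GraveHandler) error {
	for _, g := range db.graves {
		if err := h(g); err != nil {
			if errors.Is(err, ErrInterruptIterator) {
				return nil
			}
			return err
		}
	}

	return nil
}

// collectGarbage shows the remover side: it fills a buffer up to batchSize
// and interrupts the iteration once the buffer is full.
func collectGarbage(db *DB, batchSize int) []string {
	buf := make([]string, 0, batchSize)

	_ = db.IterateOverGraveyard(func(g Grave) error {
		buf = append(buf, g.Address)
		if len(buf) == batchSize {
			return ErrInterruptIterator
		}
		return nil
	})

	return buf
}
```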
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Add the `TOMBSTONE_LIFETIME` node configuration value, which is measured in
NeoFS epochs and is set to 5 by default.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Make the object Delete service use network information to calculate and set the
expiration of the created tombstone.
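A sketch of the expiration calculation; the `netInfo` interface is an assumed
abstraction over the network information available to the Delete service:

```go
package deletesvc

// netInfo is an assumed source of network information for the Delete service.
type netInfo interface {
	CurrentEpoch() uint64
	TombstoneLifetime() uint64
}

// tombstoneExpiration returns the epoch at which a tombstone created now
// should expire: the current epoch plus the configured lifetime.
func tombstoneExpiration(n netInfo) uint64 {
	return n.CurrentEpoch() + n.TombstoneLifetime()
}
```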
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Add a new-epoch event handler to GC that finds all expired tombstones and marks
them and the objects underneath them for removal. The Shard uses callbacks
provided by the storage engine to mark the underlying objects.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Implement the `DB.IterateCoveredByTombstones` method that iterates over graves
and handles all objects buried under one of the given tombstones.
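A rough sketch of how the new-epoch handler and `DB.IterateCoveredByTombstones`
could fit together; the `metaBase` interface, its simplified signatures and the
`expired` predicate are illustrative assumptions:

```go
package shard

// metaBase abstracts the two metabase calls used by the handler;
// signatures are simplified for illustration.
type metaBase interface {
	// IterateOverGraveyard passes every grave (buried object, tombstone) to h.
	IterateOverGraveyard(h func(object, tombstone string) error) error
	// IterateCoveredByTombstones passes every object buried under one of
	// the given tombstones to h.
	IterateCoveredByTombstones(tombstones map[string]struct{}, h func(object string) error) error
}

// collectExpiredTombstoneVictims returns the expired tombstones together with
// the objects buried under them, all of which are to be marked for removal.
func collectExpiredTombstoneVictims(db metaBase, expired func(tombstone string) bool) ([]string, error) {
	expiredTombs := make(map[string]struct{})

	err := db.IterateOverGraveyard(func(_, tombstone string) error {
		if expired(tombstone) {
			expiredTombs[tombstone] = struct{}{}
		}
		return nil
	})
	if err != nil {
		return nil, err
	}

	victims := make([]string, 0, len(expiredTombs))
	for t := range expiredTombs {
		victims = append(victims, t)
	}

	err = db.IterateCoveredByTombstones(expiredTombs, func(object string) error {
		victims = append(victims, object)
		return nil
	})

	return victims, err
}
```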
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Add a new-epoch event handler to GC that finds all expired non-tombstone
objects and marks them for removal.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Implement the `DB.IterateExpired` method that iterates over the objects in the
metabase that are expired at a particular epoch.
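A simplified sketch of such an iterator over an in-memory index; the real
method works over metabase indexes, and whether the boundary epoch counts as
expired is an assumption here:

```go
package meta

// ExpiredObjectHandler handles one expired object address.
type ExpiredObjectHandler func(addr string) error

// DB is a simplified stand-in keeping an address -> expiration epoch index.
type DB struct {
	expirationIndex map[string]uint64
}

// IterateExpired passes to h every object that is expired at the given epoch,
// i.e. whose expiration epoch is strictly less than epoch (assumed rule).
func (db *DB) IterateExpired(epoch uint64, h ExpiredObjectHandler) error {
	for addr, exp := range db.expirationIndex {
		if exp < epoch {
			if err := h(addr); err != nil {
				return err
			}
		}
	}

	return nil
}
```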
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Group the handlers of a particular event into a WaitGroup and wait for it
before handling the next event. This ensures that all handlers complete and
prevents potential conflicts between past and present jobs.
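A minimal sketch of this pattern with `sync.WaitGroup`; the handler and event
types are placeholders:

```go
package shard

import "sync"

// handleEvent runs all handlers of a single event concurrently and waits
// for every one of them to complete before returning, so handling of the
// next event cannot overlap with the previous one.
func handleEvent(handlers []func(interface{}), ev interface{}) {
	var wg sync.WaitGroup

	for _, h := range handlers {
		wg.Add(1)

		go func(handler func(interface{})) {
			defer wg.Done()
			handler(ev)
		}(h)
	}

	// wait for all handlers of this event before the next one is processed
	wg.Wait()
}
```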
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
The `Shard.Init` method creates a new GC instance from the shard configuration
and starts the GC's workers through an `init` call. In the initial
implementation GC routines run indefinitely and can be stopped only by
application shutdown.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
The Shard's GC component consists of:
* an asynchronous remover that periodically wakes up, removes all garbage
  objects from the shard, and goes back to sleep for a particular time interval;
* an external event listener that distributes jobs between workers;
* a group of workers, each of which can handle a single job related to a
  particular external event.
The remover and the event listener are goroutines started by the `init` method
(called from `Shard.Init`). In the initial version all event handlers are
interruptible: the next event of the same type interrupts the previous handling
and starts a new one.
GC is fully encapsulated in the Shard. All GC settings are reflected in the
Shard's configuration.
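A condensed sketch of this layout; field and channel names are illustrative,
and the interruption of previous handlers is omitted for brevity:

```go
package shard

import (
	"sync"
	"time"
)

// gc is a simplified model of the Shard's garbage collector.
type gc struct {
	removerInterval time.Duration
	eventChan       chan interface{} // external events (e.g. new epoch)
	workersCount    int
	remove          func()            // removes collected garbage objects
	handleEvent     func(interface{}) // handles a single external event
}

// init starts the remover and the event listener goroutines
// (called from Shard.Init).
func (gc *gc) init() {
	go gc.removerLoop()
	go gc.listenEvents()
}

// removerLoop periodically wakes up, removes garbage and goes back to sleep.
func (gc *gc) removerLoop() {
	for {
		gc.remove()
		time.Sleep(gc.removerInterval)
	}
}

// listenEvents distributes incoming events between a fixed group of workers.
func (gc *gc) listenEvents() {
	jobs := make(chan interface{})

	var wg sync.WaitGroup
	for i := 0; i < gc.workersCount; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for ev := range jobs {
				gc.handleEvent(ev)
			}
		}()
	}

	for ev := range gc.eventChan {
		jobs <- ev
	}

	close(jobs)
	wg.Wait()
}
```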
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Implement the `DB.IterateOverGraveyard` method that iterates over all graves
and passes their descriptors (new type `Grave`) to a handler (new type
`GraveHandler`). `Grave` currently holds the buried object's address and a
garbage flag.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Replace the single target address in `InhumePrm` with a list of addresses.
Change the corresponding parameter of the `WithTarget` and `MarkAsGarbage`
methods to variadic.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Replace the single target address in `InhumePrm` with a list of addresses.
Rename the `WithAddress` method to `WithAddresses` and change its parameter to
variadic.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Make `StorageEngine.Delete` execute the `Inhume` operation with the
`MarkAsGarbage` parameter on the `Shard` that holds the object. The particular
shard is found by iterating over HRW-sorted shards.
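A sketch of the shard lookup and the garbage marking; the `shard` interface
and its method names are simplified assumptions:

```go
package engine

// shard abstracts the two Shard calls used by Delete; signatures are simplified.
type shard interface {
	Exists(addr string) (bool, error)
	InhumeAsGarbage(addr string) error
}

// deleteObject marks the object as garbage on the shard that holds it.
// sortedShards is assumed to be already HRW-sorted for the address.
func deleteObject(sortedShards []shard, addr string) error {
	for _, sh := range sortedShards {
		exists, err := sh.Exists(addr)
		if err != nil || !exists {
			continue // try the next shard in HRW order
		}

		return sh.InhumeAsGarbage(addr)
	}

	return nil // object is not stored on any shard
}
```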
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Implement the `InhumePrm.MarkAsGarbage` method that leads to marking the object
as garbage in the metabase. Update the `InhumePrm.WithTarget` doc to indicate a
conflict with the new method.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Implement the `InhumePrm.WithGCMark` method that marks the object as garbage in
the graveyard. Update the `InhumePrm.WithTombstoneAddress` doc to indicate a
conflict with the new method. Update the `Inhume` function doc regarding the
tombstone address parameter.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
The Delete operation of the metabase is performed on a group of objects. The
set being removed can contain descendants of a common parent. When all
descendants of a parent object are deleted, the parent must also be deleted
from the metabase. In the previous implementation this was not done because of
the chosen approach to counting references to the parent.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
The lifetime of an object can be limited by specifying a corresponding
well-known attribute. The node should refuse to save expired objects.
Object checking in `FormatValidator` is extended with an expiration attribute
parsing step.
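A hedged sketch of the added validation step; the attribute key, the map-based
header representation and the exact expiration rule are assumptions:

```go
package object

import (
	"fmt"
	"strconv"
)

// attrExpirationEpoch is the assumed well-known expiration attribute key.
const attrExpirationEpoch = "__NEOFS__EXPIRATION_EPOCH"

// checkExpiration parses the expiration attribute (if present) and rejects
// objects that are already expired at the current epoch.
func checkExpiration(attrs map[string]string, currentEpoch uint64) error {
	v, ok := attrs[attrExpirationEpoch]
	if !ok {
		return nil // lifetime is not limited
	}

	exp, err := strconv.ParseUint(v, 10, 64)
	if err != nil {
		return fmt.Errorf("invalid expiration attribute: %w", err)
	}

	if exp < currentEpoch {
		return fmt.Errorf("object is expired: epoch %d < current %d", exp, currentEpoch)
	}

	return nil
}
```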
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
In the previous implementation the StorageEngine.Inhume operation forced a
Shard.Inhume call on all internal shards. There is a need to inhume the object
in a single shard. To achieve this, the Inhume operation is performed in the
following steps (see the sketch below):
1. iterate over sorted shards and check object presence through an Exists call;
2. if the object exists on some shard in step 1 => inhume it there and return
   on success;
3. if no shard contains the object => iterate over sorted shards again and try
   to inhume the object on the first possible shard;
4. if all Inhume calls fail => return an error.
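A sketch of these steps over a simplified shard interface; names and
signatures are illustrative assumptions:

```go
package engine

import "errors"

// inhumeShard abstracts the Shard calls used by Inhume; signatures are simplified.
type inhumeShard interface {
	Exists(addr string) (bool, error)
	Inhume(addr, tombstone string) error
}

// inhume buries addr under tombstone on a single shard, following the steps
// described above; shards are assumed to be HRW-sorted already.
func inhume(shards []inhumeShard, addr, tombstone string) error {
	// steps 1-2: find the shard that already stores the object
	for _, sh := range shards {
		exists, err := sh.Exists(addr)
		if err != nil || !exists {
			continue
		}

		return sh.Inhume(addr, tombstone)
	}

	// step 3: no shard stores the object, inhume it on the first shard possible
	lastErr := errors.New("could not inhume object on any shard")

	for _, sh := range shards {
		if err := sh.Inhume(addr, tombstone); err != nil {
			lastErr = err
			continue
		}
		return nil
	}

	// step 4: every Inhume call failed
	return lastErr
}
```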
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Node info attribute transformations can't guarantee the order of attributes.
However, the order must be consistent; otherwise the smart contract won't be
able to collect signatures and approve the transaction.
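A small sketch of one way to make the order deterministic, sorting attributes
by key (and value) before signing; the `Attribute` type and the sorting choice
are illustrative, not necessarily the actual fix:

```go
package innerring

import "sort"

// Attribute is a simplified node info attribute.
type Attribute struct {
	Key   string
	Value string
}

// sortAttributes makes the attribute order deterministic so that every
// inner ring node signs exactly the same node info binary.
func sortAttributes(attrs []Attribute) {
	sort.Slice(attrs, func(i, j int) bool {
		if attrs[i].Key != attrs[j].Key {
			return attrs[i].Key < attrs[j].Key
		}
		return attrs[i].Value < attrs[j].Value
	})
}
```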
Signed-off-by: Alex Vanin <alexey@nspcc.ru>
The node info validator may change node attributes, e.g. extend them with
human-readable location attributes based on LOCODE. Therefore the inner ring
node should provide the new node info binary to the smart contract.
Signed-off-by: Alex Vanin <alexey@nspcc.ru>
Treat a single-word search filter expression as a path to a file with protobuf
JSON filters. Decode the filters from the file and add them to the rest.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Make the `--filters` flag repeatable. Define a new filter expression format:
* `<key> <binary_op> <value>` for binary filters; supported binary ops: `EQ` (`STRING_EQUAL`), `NE` (`STRING_NOT_EQUAL`);
* `<key> <unary_op>` for unary filters; supported unary ops: `NOPRESENT` (`NOT_PRESENT`).
Any other string expression is considered invalid.
Additionally, support the shorthand flag `-f`.
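A rough sketch of parsing a single expression of this format; the `filter`
struct and the string names of the match types are simplified placeholders:

```go
package cmd

import (
	"fmt"
	"strings"
)

// filter is a simplified representation of one search filter.
type filter struct {
	Key   string
	Op    string // STRING_EQUAL, STRING_NOT_EQUAL, NOT_PRESENT
	Value string
}

// parseFilterExpression parses "<key> <binary_op> <value>" or "<key> <unary_op>".
func parseFilterExpression(expr string) (filter, error) {
	words := strings.Fields(expr)

	switch len(words) {
	case 2: // unary filter
		if words[1] != "NOPRESENT" {
			return filter{}, fmt.Errorf("unsupported unary op: %s", words[1])
		}
		return filter{Key: words[0], Op: "NOT_PRESENT"}, nil
	case 3: // binary filter
		switch words[1] {
		case "EQ":
			return filter{Key: words[0], Op: "STRING_EQUAL", Value: words[2]}, nil
		case "NE":
			return filter{Key: words[0], Op: "STRING_NOT_EQUAL", Value: words[2]}, nil
		default:
			return filter{}, fmt.Errorf("unsupported binary op: %s", words[1])
		}
	default:
		return filter{}, fmt.Errorf("invalid filter expression: %q", expr)
	}
}
```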
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Scanning subdivision csv-table entries one by one takes significant time and
system resources. To speed up random access to table records, the table is
loaded into memory (a map) on the first call. Subsequent calls perform no I/O
operations.
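A sketch of this lazy caching pattern with `sync.Once`; the record type, the
key column and the CSV layout are placeholders:

```go
package csvlocode

import (
	"encoding/csv"
	"os"
	"sync"
)

// Table lazily loads a csv table into memory on first access.
type Table struct {
	path string

	once sync.Once
	recs map[string][]string // first column -> full record
	err  error
}

// record returns the table record by key, reading the file only once.
func (t *Table) record(key string) ([]string, error) {
	t.once.Do(func() {
		f, err := os.Open(t.path)
		if err != nil {
			t.err = err
			return
		}
		defer f.Close()

		rows, err := csv.NewReader(f).ReadAll()
		if err != nil {
			t.err = err
			return
		}

		t.recs = make(map[string][]string, len(rows))
		for _, row := range rows {
			if len(row) == 0 {
				continue
			}
			t.recs[row[0]] = row
		}
	})

	if t.err != nil {
		return nil, t.err
	}

	return t.recs[key], nil
}
```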
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Scanning csv-table entries one by one takes significant time and system
resources. To speed up random access to table records, the table is loaded
into memory (a map) on the first call. Subsequent calls perform no I/O
operations.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Split the attributes into those that must be explicitly set in the
configuration and those that, if absent, are assigned a default value. Support
this logic in the `addWellKnownAttributes` function. If a mandatory attribute
is not set explicitly, the application panics.
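A sketch of such a split; the concrete attribute names, default values and the
map-based configuration accessor are illustrative assumptions:

```go
package main

// mandatoryAttrs must be set explicitly in the configuration (illustrative).
var mandatoryAttrs = []string{"UN-LOCODE"}

// defaultAttrs get a default value when absent from the configuration (illustrative).
var defaultAttrs = map[string]string{
	"Capacity": "0",
}

// addWellKnownAttributes fills the attribute set according to the split above
// and panics when a mandatory attribute is missing from the configuration.
func addWellKnownAttributes(cfgAttrs map[string]string) map[string]string {
	attrs := make(map[string]string)

	for _, key := range mandatoryAttrs {
		v, ok := cfgAttrs[key]
		if !ok {
			panic("missing mandatory attribute: " + key)
		}
		attrs[key] = v
	}

	for key, def := range defaultAttrs {
		if v, ok := cfgAttrs[key]; ok {
			attrs[key] = v
		} else {
			attrs[key] = def
		}
	}

	return attrs
}
```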
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>