IR tries to keep a 1:3 proportion of GAS and notary balances
respectively. If that proportion is violated (meaning the notary
balance is lower than required), it sends half of its GAS balance
to the notary service.
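A minimal sketch of the check, assuming illustrative names
(`depositNotary` and the balance arguments are not the actual IR API):

    // target proportion: the notary balance should be at least
    // three times the GAS balance
    const notaryRatio = 3

    func (s *Server) normalizeBalances(gas, notary int64) {
        if notary < gas*notaryRatio {
            // proportion is violated: top up the notary deposit
            // with half of the current GAS balance
            s.depositNotary(gas / 2)
        }
    }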
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
After the storage engine started to limit the number of PUT operations,
there is no need to limit the worker pool in the Object Put service.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Make `StorageEngine` use non-blocking worker pools of the same
(configurable) size for the PUT operation. This allows switching to
more free shards when others are overloaded, thereby distributing the
write load more evenly.
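A sketch of the per-shard submission, assuming the `ants` worker pool
library; shard and task names are illustrative:

    import "github.com/panjf2000/ants/v2"

    // each shard gets its own non-blocking pool of the same size
    pool, err := ants.NewPool(poolSize, ants.WithNonblocking(true))

    // on PUT, an overloaded shard is skipped instead of blocking on it
    for _, sh := range shards {
        if err := sh.pool.Submit(putTask); err != nil {
            continue // pool is full, try a more free shard
        }
        break
    }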
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Convert minutes of received coordinates to decimal fractions of a
degree, and do not use the decimal part of a float as storage for
minutes: "5915N 01806E" (i.e. 59°15'N 18°06'E) is
59.25N 18.10E, not 59.15N 18.06E.
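A worked example of the conversion (hypothetical helper):

    // 15' is a quarter of a degree: 15 / 60 = 0.25
    func toDecimalDegrees(deg, min float64) float64 {
        return deg + min/60
    }

    toDecimalDegrees(59, 15) // 59.25, not 59.15
    toDecimalDegrees(18, 6)  // 18.10, not 18.06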
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
A session token can be present in both the object header and the
request meta header. They are the same during initial object
placement.
During object replication, the storage node puts the object without
any session token attached to the request. If the container's eACL
denies object.Put for the USER role (use bearer token to upload),
then replication might fail on objects whose headers carry session
tokens signed by the container owner. This is incorrect, so use the
session token directly from the request meta header.
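A sketch of the intended token source (the accessor chain follows the
NeoFS API style, treat it as an assumption):

    // trust only the token attached to the request itself: replication
    // requests carry none, so session-based eACL rules are not applied
    tok := req.GetMetaHeader().GetSessionToken()
    if tok == nil {
        // system operation (e.g. replication), no USER session checks
    }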
Signed-off-by: Alex Vanin <alexey@nspcc.ru>
We should be able to read whatever we have written earlier.
The compression setting applies only to new objects.
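A sketch of the read path under this rule, assuming zstd compression
and its standard frame magic (decoder setup omitted):

    import (
        "bytes"

        "github.com/klauspost/compress/zstd"
    )

    var zstdFrameMagic = []byte{0x28, 0xb5, 0x2f, 0xfd}

    // objects written before compression was enabled are stored raw,
    // so detect the format instead of trusting the current setting
    func decompress(dec *zstd.Decoder, data []byte) ([]byte, error) {
        if !bytes.HasPrefix(data, zstdFrameMagic) {
            return data, nil
        }
        return dec.DecodeAll(data, nil)
    }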
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
Do not log in options constructors. Also, failure to initialize the
compression module (possibly due to invalid options) is certainly an
error deserving proper treatment.
Signed-off-by: Evgenii Stratonikov <evgeniy@nspcc.ru>
There is a need to list addresses of the small objects stored in the
WriteCache database.
Implement `IterateDB` function which accepts a BoltDB instance,
iterates over all saved objects and passes their addresses to the
handler.
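A sketch of the iterator, assuming addresses are stored as keys of a
single bucket (bucket name and address parsing are illustrative):

    import bolt "go.etcd.io/bbolt"

    func IterateDB(db *bolt.DB, f func(*object.Address) error) error {
        return db.View(func(tx *bolt.Tx) error {
            return tx.Bucket(defaultBucket).ForEach(func(k, _ []byte) error {
                addr := object.NewAddress()
                if err := addr.Parse(string(k)); err != nil {
                    return err
                }
                return f(addr) // pass the address to the handler
            })
        })
    }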
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
There is a need to open Blobovnicza instances in read-only mode in
some cases.
Add `ReadOnly` option. Do not create the dir path in RO mode. Open the
underlying BoltDB instance with the ReadOnly flag. Document that all
writing operations must not be called in RO mode (otherwise BoltDB txs
fail).
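A sketch of the option and its effect; the config field is
illustrative, the BoltDB flag is real:

    // ReadOnly returns an option to open Blobovnicza in read-only mode.
    func ReadOnly() Option {
        return func(c *cfg) {
            c.readOnly = true
        }
    }

    // in Open: skip os.MkdirAll in RO mode and pass the flag to BoltDB
    db, err := bolt.Open(path, 0600, &bolt.Options{ReadOnly: c.readOnly})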
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
`Blobovnicza` can be initialized with any number of range buckets and
reconstructed with a different size limit. In the previous
implementation `Iterate` could miss some stored objects if the
`Blobovnicza` is constructed with a smaller number of ranges.
Make `Iterate` traverse all buckets regardless of the current
instance's bounds.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
In the previous implementation the `Blobovnicza.Iterate` op decoded
object data only and passed it to the handler. There is a need to
iterate over all addresses of the stored objects.
Add `DecodeAddresses` and `WithoutData` methods to the `IteratePrm`
type. Add `Address` method to the `IterationElement` type. Make
`Iterate` decode object addresses if `DecodeAddresses` was called, and
skip reading the data if `WithoutData` was called. Implement
`IterateAddresses` helper function to simplify the code.
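A sketch of the helper on top of the extended `Iterate` (the
handler-setting method name is an assumption):

    func IterateAddresses(blz *Blobovnicza, f func(*object.Address) error) error {
        prm := IteratePrm{}
        prm.DecodeAddresses()
        prm.WithoutData()
        prm.SetHandler(func(elem IterationElement) error {
            return f(elem.Address())
        })

        _, err := blz.Iterate(prm)
        return err
    }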
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
There is a need to check if the public key in the RPC response matches
the public key of the related storage node declared in the network
map.
Define `ErrWrongPublicKey` error. Implement the RPC response handler's
constructor `AssertKeyResponseCallback` which checks the public key.
Construct the handler and pass it to the client's option
`WithResponseInfoHandler`.
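A sketch of the constructor; the `ResponseMetaInfo` accessor is an
assumption:

    import (
        "bytes"
        "errors"
    )

    var ErrWrongPublicKey = errors.New("public key is different from the key in the network map")

    func AssertKeyResponseCallback(expectedKey []byte) func(ResponseMetaInfo) error {
        return func(info ResponseMetaInfo) error {
            if !bytes.Equal(info.ResponseKey(), expectedKey) {
                return ErrWrongPublicKey
            }
            return nil
        }
    }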
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
There is a need to process announced public keys of storage nodes for
intra-container communication.
Implement `PublicKey` / `SetPublicKey` methods of `client.NodeInfo` type.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
There is a need to have the ability to expand the data needed for client
construction.
Replace `network.AddressGroup` parameter of client cache interfaces with
`client.NodeInfo`.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
After recent changes, morph client wrappers provide a contract address
getter. It can be used to compose notification parsers and handlers.
Use the `ContractAddress` method in constructors of notification
parsers and handlers. Remove the no longer used script hash parameters
of event processors.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
There is a need to inherit some methods of the `StaticClient` type. In
order not to inherit all methods via type embedding, we can group a
sub-set of methods and inherit that group.
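A minimal sketch of the pattern (type names are illustrative):

    // the shared sub-set of methods lives in its own type...
    type staticOpts struct {
        tryNotary bool
    }

    func (o *staticOpts) TryNotary() { o.tryNotary = true }

    // ...so a wrapper can embed just that group
    // instead of the whole StaticClient
    type StaticClient struct {
        staticOpts
        // methods that should not be inherited
    }

    type Wrapper struct {
        staticOpts
    }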
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Accept notification endpoints as a string slice from config. Work with
the first successfully initialized WSClient.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
In the previous implementation Object PUT used a single pool of
workers for local and remote ops, but these ops are heterogeneous.
Use a remote/local pool for remote/local operations in the PUT
service. Initially the pools are configured with the same size.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
In the previous implementation `distributedTarget` didn't check if the
next node is local. This check was performed by the handlers (target
initializer and relay func).
Make `distributedTarget` calculate the node's locality. Pass the
locality flag to the handlers.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Each object in the graveyard has either a tombstone or a GC mark. If
the object has a tombstone, the metabase should return
`ErrAlreadyRemoved` on object requests. This is the case when the user
explicitly removed the object from the container. GC marks are used
for physical removal, which can happen even if the object is still
present in the container (Control service, Policer job, etc.). In this
case the metabase should return a 404 error on object requests.
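A sketch of the resulting dispatch (helper names are illustrative):

    // the graveyard record type decides the error
    switch {
    case hasTombstone(addr):
        // the user removed the object from the container
        return object.ErrAlreadyRemoved
    case hasGCMark(addr):
        // physical removal only: plain "not found"
        return ErrNotFound
    }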
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
`List` method of `Shard` must return only physically stored objects.
Use `AddPhyFilter` to select only phy objects.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
The forwarding mechanism resends the original request. During split
object chain traversal, the storage node performs multiple
`object.Head` requests on each child. If request forwarding happens,
then `object.Head` returns the object ID of the original request. This
produces an infinite assembly loop.
Signed-off-by: Alex Vanin <alexey@nspcc.ru>
Set `Curve` field in `ecdsa.PublicKey` instance from `keys.PublicKey` one in
`checkKeyOwnership` method of container processor.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Tombstone and "alive" objects can be both stored in BlobStor. They can
appear during iterating in different order. Metabase returns
`ErrAlreadyRemoved` error if object is inhumed.
Ignore `object.ErrAlreadyRemoved` errors of `metabase.Put`in Shard's
`refillMetabase` operation.
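A sketch of the ignore rule (a tombstone may mark its victims before
they are re-registered):

    import "errors"

    if err := metaPut(obj); err != nil &&
        !errors.Is(err, object.ErrAlreadyRemoved) {
        return err // ErrAlreadyRemoved is expected and skipped
    }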
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
There is a need to refill Metabase data with the objects from BlobStor.
Implement `refillMetabase` method which iterates over all objects from
BlobStor and saves them in Metabase.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
There is a need to be able to process all objects saved in `BlobStor`.
Implement `BlobStor.Iterate` method which iterates over all objects.
Implement `IterateBinaryObjects` and `IterateObjects` helper functions to
simplify the code.
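A sketch of the helper layering; the unmarshaling calls follow the
NeoFS object package style, treat them as assumptions:

    // IterateObjects decodes each stored binary into an object
    // and passes it to the handler.
    func IterateObjects(b *BlobStor, f func(*object.Object) error) error {
        return IterateBinaryObjects(b, func(data []byte) error {
            obj := object.New()
            if err := obj.Unmarshal(data); err != nil {
                return err
            }
            return f(obj)
        })
    }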
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
There is a need to be able to process all stored objects saved in
`Blobovnicza`.
Implement `Blobovnicza.Iterate` method which iterates over all objects.
Implement `IterateObjects` helper function to simplify the code.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
In the previous implementation of the metabase, there was no
possibility of reinitializing the metabase: clearing information about
existing objects and bringing it back to its initial state. This
operation can be useful when the stored metadata about objects has
lost (or may have lost) relevance and the data needs to be generated
from scratch. Also, static resources of the base - container-independent
buckets - were not created at the initialization stage.
Make `Metabase.Init` allocate graveyard, container-size and to-move-it
buckets in the underlying BoltDB instance. Implement `Metabase.Reset`
method: it works like `Init` but also cleans up all static buckets and
removes the other ones. Due to the logical similarity, the methods
share a single piece of code.
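A sketch of the shared code path; bucket names are illustrative, the
BoltDB calls are real:

    import bolt "go.etcd.io/bbolt"

    func (db *DB) Init() error  { return db.init(false) }
    func (db *DB) Reset() error { return db.init(true) }

    func (db *DB) init(reset bool) error {
        static := [][]byte{graveyardBucket, containerVolumeBucket, toMoveItBucket}

        return db.boltDB.Update(func(tx *bolt.Tx) error {
            if reset {
                // drop all buckets; static ones are recreated below
                if err := dropAllBuckets(tx); err != nil {
                    return err
                }
            }
            for _, name := range static {
                if _, err := tx.CreateBucketIfNotExists(name); err != nil {
                    return err
                }
            }
            return nil
        })
    }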
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
There is a need to limit the disk space used by the write-cache. It is
almost impossible to calculate the value exactly, so it is proposed to
estimate the size of the cache by the number of objects stored in it.
Track the numbers of objects saved in the DB and the FSTree
separately. To do this, the `ObjectCounters` interface is defined. It
is generalized to a store of numbers that can be made persistent (new
option `WithObjectCounters`). By default the DB number is calculated
as the number of keys in the default bucket, and the FS number is set
equal to the DB one, since it is currently hard to read the actual
value from the `FSTree` instance. Each PUT/DELETE operation on the DB
or FS increases/decreases the corresponding counter. Before each PUT
op an overflow check is performed with the following formula for
evaluating the occupied space: `NumDB * MaxDBSize + NumFS * MaxFSSize`.
If the next PUT can cause a write-cache overflow, the object is
written to the main storage.
By default the maximum write-cache size is set to 1 GB.
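A sketch of the estimate and the check (field names are illustrative):

    // occupied space is estimated, not measured
    func (c *cache) estimatedSize() uint64 {
        return c.numDB*c.maxDBSize + c.numFS*c.maxFSSize
    }

    // before each PUT: if the next object can overflow the cache,
    // it goes directly to the main storage
    if c.estimatedSize() >= c.maxCacheSize { // default: 1 GB
        return c.mainStorage.Put(obj)
    }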
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
There is a need to keep track of each local storage change. Log
messages are the most convenient way to do it.
Implement a function which writes a log message about each completed
writing operation in the storage engine.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Do not pass the high-level `PACK` opcode to notary parsers. Add an
opcode count check. Delete `PACK` cases in notary parsers.
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
Make `errIncompletePut` a structure which wraps a single client error.
Wrap the error of the last client into `errIncompletePut` during
placement execution.
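A sketch of the structure (message text is illustrative):

    import "fmt"

    // errIncompletePut wraps the error of the last client
    // that failed during placement.
    type errIncompletePut struct {
        singleErr error
    }

    func (e errIncompletePut) Error() string {
        return fmt.Sprintf("incomplete object PUT: %v", e.singleErr)
    }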
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
In the previous implementation the Object service's handler returned a
constant error in case of failure (full or partial) of the PUT
operation. This did not even allow us to roughly guess what the reason
was. Not as a complete solution, but to alleviate cases where all
nodes in a container return the same error, it is suggested to return
the error of the last server that responded.
Return the latest server error from the placement loop of the
`iteratePlacement` method of the `distributedTarget` type.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
In the previous implementation the sticky bit could disrupt container
nodes' access to replication. According to the NeoFS specification,
the sticky bit should not affect requests sent by nodes from the
SYSTEM group.
Add a role check to `stickyBitCheck`.
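A sketch of the added check (signature is illustrative):

    func stickyBitCheck(role eacl.Role, ownerMatches bool) bool {
        // the sticky bit must not affect requests sent by nodes
        // from the SYSTEM group (e.g. replication)
        if role == eacl.RoleSystem {
            return true
        }
        return ownerMatches
    }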
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
A remote RPC node might be in a transition state and produce events
from the past. We should avoid listening to such nodes. To do that,
the subscriber component can await a minimal height of the remote
node. If the minimal height is not reached, an error is returned.
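A sketch of the height check, assuming the neo-go `GetBlockCount` RPC
call:

    import "fmt"

    height, err := cli.GetBlockCount()
    if err != nil {
        return nil, err
    }
    if height < minimalHeight {
        return nil, fmt.Errorf("remote node is behind: %d < %d",
            height, minimalHeight)
    }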
Signed-off-by: Alex Vanin <alexey@nspcc.ru>
Implement `NotaryContractProcessor` in the IR container processor. Add
support for the notary `put` container operation. Do not parse
non-notary `put` notifications in a notary-enabled environment.
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
Add `NotarySignAndInvokeTX` method to the morph client. This function
allows invoking a notary request with the passed main TX (not creating
a new one). It signs the passed main TX with the client's key.
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
Add `NotaryInvokeNotAlpha` to the low-level client. It creates and
sends a notary request that must be signed by Alphabet nodes, but does
not sign it with the current node's private key.
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
Add a preparator for notary requests. It parses raw notary requests
and checks whether they should be handled by the Alphabet node. If
handling is required, it returns a `NotaryEvent` that contains
information about the contract script hash, method name and arguments
of the call.
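A sketch of the event's shape; the method set follows the description,
the exact types are assumptions:

    // NotaryEvent is produced by the preparator for requests
    // that the Alphabet node should handle.
    type NotaryEvent interface {
        ScriptHash() util.Uint160 // contract being called
        Type() NotaryType         // method name of the call
        Params() []Op             // arguments of the call
    }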
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
Prepare all listening structures for notary events: rename (add a
'notification' prefix/suffix to) all notification-specific
handlers/parsers.
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
`fallbackTime` is the delta between `ValidUntilBlock` of the main
transaction and the block at which the `fallback` transaction is sent.
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
In recent changes, the locality criterion for a node was changed to
public key comparison.
Remove no longer used `IsLocalAddress` function and `LocalAddressSource`
interface.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Some software components regulate the way of working with placement
arrays when the local node enters them. In the previous
implementation, the locality criterion was the correspondence between
the announced network address (group) and the address with which the
node was configured. However, by design, network addresses are not
unique identifiers of storage nodes in the system.
Change comparisons by network addresses to comparisons by keys in all
packages with the logic described above. Implement `netmap.AnnouncedKeys`
interface on `cfg` type in the storage node application.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Announced keys of storage nodes are required by many application
components.
Define a unified interface for working with public keys of nodes. Add
a method to check whether the key has been announced by the local node
in the application context.
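A sketch of the interface, close to the description (the method name
is an assumption):

    // AnnouncedKeys is an interface of utilities for working
    // with announced public keys of storage nodes.
    type AnnouncedKeys interface {
        // IsLocalKey checks if the key was announced
        // by the local node.
        IsLocalKey(key []byte) bool
    }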
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
There is a need to process public keys of the placement result.
Implement `Node.PublicKey` method which returns storage node's key announced
in netmap.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
In the previous implementation the `placement.Traverser.Next` method
returned a slice of `network.AddressGroup` elements. There is a need
to process keys of storage nodes besides network addresses for
intra-container communication.
Wrap `network.AddressGroup` in a new type `placement.Node` that groups
the storage node information required for communication. Return a
slice of `Node` instances from the `Traverser.Next` method. Fix
compilation breaks in dependent packages.
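A sketch of the wrapper (field layout is illustrative):

    // Node groups the storage node information
    // required for communication.
    type Node struct {
        addresses network.AddressGroup
        key       []byte
    }

    func (x Node) Addresses() network.AddressGroup { return x.addresses }

    // PublicKey returns the key announced by the node in the netmap.
    func (x Node) PublicKey() []byte { return x.key }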
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
IDs come from the NeoFS contract in big-endian, but it is customary to
write them to the node logs in little-endian.
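A sketch of the conversion done before logging:

    // reverse a copy of the big-endian ID for log output
    func reversed(id []byte) []byte {
        res := make([]byte, len(id))
        for i := range id {
            res[i] = id[len(id)-1-i]
        }
        return res
    }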
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Shard should try to read object headers from write-cache if it is enabled.
Extend `writecache.Cache` interface with `Head` method. Call the method in
`Shard.Head` if `Shard.hasWriteCache` returns true.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Write cache should be able to execute HEAD operations according to spec.
Add simple implementation of `Head` method through the `Get` one. Leave
notes for future optimization.
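A sketch of the naive implementation; `CutPayload` is assumed to
return a payload-free view of the object:

    // Head reads the whole object and cuts the payload; reading
    // only the header from the underlying storage is left
    // as a future optimization
    func (c *cache) Head(addr *object.Address) (*object.Object, error) {
        obj, err := c.Get(addr)
        if err != nil {
            return nil, err
        }
        return obj.CutPayload(), nil
    }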
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Caching is performed inside `GetNativeContractHash` method of neo-go client,
so the additional cache level is redundant.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
There is no need to continue iterating over Neo RPC endpoints in case
of an address-independent error (e.g. a NeoFS logic error).
Unwrap and immediately return `neofsError` errors from the loop in
`iterateClients`.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Implement the `error` interface on the new `neofsError` type, which is
a wrapper over NeoFS-specific errors. Wrap all `Client` errors except
neo-go client API ones into `neofsError`. The wrapped errors are going
to be used for multi-endpoint loop control.
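A sketch of the wrapper and the loop control it enables (illustrative):

    import (
        "errors"
        "fmt"
    )

    type neofsError struct {
        err error
    }

    func (e neofsError) Error() string {
        return fmt.Sprintf("neofs error: %v", e.err)
    }

    func (e neofsError) Unwrap() error { return e.err }

    // in iterateClients: a NeoFS-specific error does not depend
    // on the endpoint, so trying other ones makes no sense
    var nErr neofsError
    if errors.As(err, &nErr) {
        return nErr.err
    }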
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
Since the morph `Client` works in multi-client mode, there is an error
case in which we cannot get the network magic when all endpoints are
unavailable.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
There is a need to work with a set of Neo RPC nodes in order not to
depend on the failure of some nodes while others are active.
Support "multi-client" mode of the morph `Client` entity. If an
instance is not a "multi-client", it works as before. The constructor
`New` creates a multi-client, and each method iterates over the fixed
set of endpoints until success. Opened client connections are cached
(without eviction for now). Storage (as earlier) and IR (from now on)
nodes can be configured with multiple Neo endpoints. As above, `New`
creates a multi-client instance, so no initialization changes are
needed on the app side.
`Wait` and `GetDesignateHash` methods of `Client` now return an error
in order to detect connection errors. The `NotaryEnabled` method is
removed as unused.
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>
As soon as the resulting list and the cache operate with the attribute
pointer, we can add the attribute structure immediately and set the
restored key and value of the attribute later.
Signed-off-by: Alex Vanin <alexey@nspcc.ru>
Attributes are linked to each other through parents, so they can be
returned in any order. However, it is better to return the list in a
consistent order to reduce entropy.
Signed-off-by: Alex Vanin <alexey@nspcc.ru>
The parser should reuse existing attributes from the cache to update
the list of parent links in it. Parent links should be unique.
Signed-off-by: Alex Vanin <alexey@nspcc.ru>
In the previous implementation Container service handlers didn't cache
the results of `Get` / `GetEACL` / `List` operations. As a
consequence, high load on the service caused neo-go client connection
errors. To avoid this there is a need to use a cache. The Object
service already uses `Get` and `GetEACL` caches.
Implement a cache of `List` results. Share the already implemented
cache of the Object service with the Container one. Provide a new
instance of read-only container storage (defined as an interface) to
the morph executor's constructor on which the container service is
based. Write operations remain unchanged.
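A sketch of the read-only storage interface handed to the executor
(method set is illustrative):

    // Reader is a read-only container storage; both the direct morph
    // implementation and the shared cache can satisfy it.
    type Reader interface {
        Get(*cid.ID) (*container.Container, error)
        GetEACL(*cid.ID) (*eacl.Table, error)
        List(*owner.ID) ([]*cid.ID, error)
    }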
Signed-off-by: Leonard Lyubich <leonard@nspcc.ru>