These errors are fired when enough chunks have been retrieved and the
error group cancels the remaining requests.
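For illustration, a minimal sketch of the pattern in question (the chunk
helper, counter, and sentinel names are assumptions, not the actual
frostfs-node code): once enough chunks are collected, one goroutine returns
a sentinel error, the errgroup cancels its context, and the remaining
requests fail with context.Canceled, which is exactly the kind of error
that should not be reported as a failure.
```go
package example

import (
	"context"
	"errors"
	"sync/atomic"

	"golang.org/x/sync/errgroup"
)

var errEnoughChunks = errors.New("enough chunks retrieved")

// fetchChunk stands in for a network request that respects ctx cancellation.
func fetchChunk(ctx context.Context, idx int) ([]byte, error) {
	select {
	case <-ctx.Done():
		return nil, ctx.Err()
	default:
		return []byte{byte(idx)}, nil
	}
}

// assemble fetches chunks concurrently and stops as soon as `required`
// of them have been retrieved.
func assemble(ctx context.Context, total, required int) error {
	eg, egCtx := errgroup.WithContext(ctx)
	var retrieved atomic.Int64

	for i := 0; i < total; i++ {
		i := i
		eg.Go(func() error {
			if _, err := fetchChunk(egCtx, i); err != nil {
				return err // context.Canceled from a sibling ends up here
			}
			if retrieved.Add(1) >= int64(required) {
				// Returning an error cancels egCtx, stopping the other requests.
				return errEnoughChunks
			}
			return nil
		})
	}

	if err := eg.Wait(); err != nil && !errors.Is(err, errEnoughChunks) {
		return err
	}
	return nil
}
```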
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
The placement vector may contain fewer nodes than required by the policy
due to an outage of one of the nodes.
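An illustrative guard only (names are assumptions): code that consumes a
placement vector should not assume it holds exactly as many nodes as the
policy requests.
```go
package example

type NodeInfo struct{ Addr string }

// selectForPolicy tolerates a short placement vector instead of indexing
// past its end when a node is missing due to an outage.
func selectForPolicy(vector []NodeInfo, required int) []NodeInfo {
	if len(vector) < required {
		required = len(vector)
	}
	return vector[:required]
}
```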
Signed-off-by: Anton Nikiforov <an.nikiforov@yadro.com>
Concurrent Apply can lead to a child node being applied before its parent,
so undo/redo operations are performed. This causes performance degradation
for trees with many sublevels.
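For context, a simplified sketch (not the real pilorama code) of why late
delivery is expensive in a timestamp-ordered operation log: an operation
that arrives "in the past", e.g. a parent delivered after its child, forces
every later, already-applied operation to be undone and redone, which is
costly for deep trees.
```go
package example

import "slices"

type Move struct {
	Time   uint64 // logical timestamp
	Child  uint64
	Parent uint64
}

type tree struct {
	log []Move // applied operations, ordered by Time
}

// Apply inserts op at its timestamp position; every already-applied
// operation with a later timestamp is undone and then redone.
func (t *tree) Apply(op Move) {
	i := len(t.log)
	for i > 0 && t.log[i-1].Time > op.Time {
		undo(t.log[i-1]) // roll back later operations...
		i--
	}
	do(op)
	for _, m := range t.log[i:] {
		do(m) // ...and replay them after the late operation
	}
	t.log = slices.Insert(t.log, i, op)
}

func do(Move)   {}
func undo(Move) {}
```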
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
The previous release was EACL-compatible. Starting from this release,
all EACL rules should already have been migrated to APE chains.
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
* Update the version in go.mod;
* Replace usages of the deprecated frostfs-api-go/v2 package with
frostfs-sdk-go/api.
Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
Initially, this test checked that only a container node can assemble an
EC object, but its implementation was wrong.
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
Nodes from the cache could be changed by the traverser if no objectID is
specified, so a copy of the cached slice must be returned.
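A minimal sketch of this fix pattern (the type and field names are
simplified stand-ins, not the real frostfs-node API): the cache hands out a
copy of its slice, so the traverser may reorder or trim it without
corrupting the cached placement.
```go
package example

import (
	"slices"
	"sync"
)

type NodeInfo struct{ Addr string }

type placementCache struct {
	mtx   sync.RWMutex
	nodes []NodeInfo
}

// Nodes returns a copy, so callers cannot mutate the cached slice.
func (c *placementCache) Nodes() []NodeInfo {
	c.mtx.RLock()
	defer c.mtx.RUnlock()
	return slices.Clone(c.nodes)
}
```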
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
Since an EC put request may be processed only by a container node, sign
such requests with the current node's private key so that APE checks are
not performed.
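A hypothetical illustration of the signing choice above (the helper and
parameter names are not the real frostfs-node API): EC put requests relayed
between container nodes are signed with the node's own key, so the receiver
treats them as internal traffic and skips APE checks.
```go
package example

import "crypto/ecdsa"

// keyForECPut picks the key used to sign an outgoing put request.
func keyForECPut(relayedByContainerNode bool, nodeKey, requestKey *ecdsa.PrivateKey) *ecdsa.PrivateKey {
	if relayedByContainerNode {
		return nodeKey // internal traffic between container nodes
	}
	return requestKey
}
```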
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
`slices.SortFunc` doesn't use reflection and is a bit faster.
I have done some micro-benchmarks for `[]NodeInfo`:
```
$ benchstat -col "/func" out
goos: linux
goarch: amd64
pkg: git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/pilorama
cpu: 11th Gen Intel(R) Core(TM) i5-1135G7 @ 2.40GHz
│ sort.Slice │ slices.SortFunc │
│ sec/op │ sec/op vs base │
Sort-8 2.130µ ± 2% 1.253µ ± 2% -41.20% (p=0.000 n=10)
```
I haven't included them, though, as I don't see them being used a lot.
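For reference, a sketch of the change being measured (`NodeInfo` and the
comparison by `Key` are simplified stand-ins for the pilorama code):
```go
package example

import (
	"bytes"
	"slices"
)

type NodeInfo struct{ Key []byte }

// Before: reflection-based sorting.
//   sort.Slice(nodes, func(i, j int) bool {
//       return bytes.Compare(nodes[i].Key, nodes[j].Key) < 0
//   })

// After: generic, reflection-free sorting.
func sortNodes(nodes []NodeInfo) {
	slices.SortFunc(nodes, func(a, b NodeInfo) int {
		return bytes.Compare(a.Key, b.Key)
	})
}
```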
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
When getting a role in the APE checker for the container services,
an error may be returned if network maps of the previous two epochs
don't have enough nodes to fulfil a container placement policy.
It's a logical error, so we should ignore it.
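A hedged sketch of the idea (the error and helper names are assumptions,
not the real frostfs-node API): when resolving the requester's role, a
failure to build the container placement from the previous network maps is
treated as an expected condition rather than a request error.
```go
package example

import "errors"

var errNotEnoughNodes = errors.New("not enough nodes to fulfil placement policy")

func roleFromPlacement(isContainerNode func() (bool, error)) (string, error) {
	in, err := isContainerNode()
	switch {
	case err == nil && in:
		return "container", nil
	case err == nil:
		return "others", nil
	case errors.Is(err, errNotEnoughNodes):
		// Logical error: the policy cannot be fulfilled by the previous
		// two network maps, so fall back to the non-container role.
		return "others", nil
	default:
		return "", err
	}
}
```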
Signed-off-by: Aleksey Savchuk <a.savchuk@yadro.com>
* The `SignRequestPrivateKey` field should be initialized within either
`newUntrustedTarget` or `newTrustedTarget`. Otherwise, all requests are
signed with the local node key, which makes it impossible to perform a
patch on a non-container node (see the sketch below).
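A hypothetical sketch of the initialization described above (the type and
which key goes where are simplified stand-ins, not the real frostfs-node
API): the point is that each constructor sets `SignRequestPrivateKey`
itself rather than relying on a shared default of the local node key.
```go
package example

import "crypto/ecdsa"

// Simplified stand-in for the real target type.
type target struct {
	SignRequestPrivateKey *ecdsa.PrivateKey
}

// Each constructor picks its own signing key instead of inheriting the
// local node key everywhere.
func newTrustedTarget(sessionKey *ecdsa.PrivateKey) *target {
	return &target{SignRequestPrivateKey: sessionKey}
}

func newUntrustedTarget(nodeKey *ecdsa.PrivateKey) *target {
	return &target{SignRequestPrivateKey: nodeKey}
}
```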
Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
Consider the following operations ordering:
1. Inhume(with tombstone A) --> add tombstone mark for an object
2. --> new epoch arrives
3. --> GCMark is added for a tombstone A, because it is unavailable
4. Put(A) --> return error, because the object already has a GCMark
It is possible, and I have successfully reproduced it with a test at the
shard level. However, the error is related to the specific _ordering_ of
operations within the engine, and triggering race conditions like this is
currently only possible at the shard level, so no tests are written.
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
There might be a situation where the context is canceled before the
traverser moves on to another part of the nodes. To avoid this, we need to
wait for the results of the concurrent puts at each traverser iteration.
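A minimal sketch of the iteration order described above (the traverser and
put helpers are hypothetical stand-ins): all puts started for the current
portion of nodes are awaited before the traverser advances, so the outcome
of the current portion is known before deciding whether to try more nodes.
```go
package example

import (
	"context"

	"golang.org/x/sync/errgroup"
)

type Node struct{ Addr string }

// Traverser yields portions of nodes; an empty portion means we are done.
type Traverser interface {
	Next() []Node
}

func putToNode(ctx context.Context, n Node) error { return nil }

// iteratePut waits for every put of the current portion before asking the
// traverser for the next one.
func iteratePut(ctx context.Context, t Traverser) error {
	for {
		nodes := t.Next()
		if len(nodes) == 0 {
			return nil
		}

		eg, egCtx := errgroup.WithContext(ctx)
		for _, n := range nodes {
			n := n
			eg.Go(func() error { return putToNode(egCtx, n) })
		}
		if err := eg.Wait(); err != nil {
			return err
		}
	}
}
```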
Signed-off-by: Anton Nikiforov <an.nikiforov@yadro.com>