Flushing data from the writecache to the main storage concurrently with
deletion can lead to a partial deletion.
As a solution, deletion is allowed only when the object is already in the main
storage, because the object will be removed from the writecache by the flush goroutine.
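A minimal sketch of the intended check, using hypothetical `cache`,
`inMainStorage` and `deleteLocal` names rather than the real writecache API:
```
package writecache

import "errors"

// errNotFlushed is returned when deletion is requested for an object that
// has not yet been flushed to the main storage.
var errNotFlushed = errors.New("object is not yet flushed to the main storage")

// cache is a hypothetical writecache wrapper used for illustration only.
type cache struct {
	inMainStorage func(addr string) bool  // reports whether the main storage has the object
	deleteLocal   func(addr string) error // removes the object from the writecache
}

// Delete removes the object from the writecache only when the main storage
// already holds it; otherwise the flush goroutine will remove it later,
// which avoids a partial deletion.
func (c *cache) Delete(addr string) error {
	if !c.inMainStorage(addr) {
		return errNotFlushed
	}
	return c.deleteLocal(addr)
}
```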
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
Supposedly, this was added to allow creating 2 different shards without a
subtest. Now we use t.TempDir() everywhere, so this should not be a
problem.
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
newCustomShard() has many parameters but only the first is obligatory.
`enableWriteCache` is left as-is, because it directly affects the
functionality.
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
This was set in #348 to speed up tests.
It seems 100ms doesn't increase overall test time,
but it reduces the amount of logs by a factor of 100.
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
In some places we have debug=false, in others debug=true.
Let's be consistent.
Semantic patch:
```
@@
@@
-test.NewLogger(..., false)
+test.NewLogger(..., true)
```
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
Protect test metric store fields with a mutex. Probably not every field
needs to be protected, but better safe than sorry.
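A minimal illustration of the pattern, using a hypothetical `testStore`
rather than the actual test metric store fields:
```
package metrics

import "sync"

// testStore is a hypothetical test metrics collector; its counters are
// guarded by mu because tests may update them from multiple goroutines.
type testStore struct {
	mu       sync.Mutex
	putCount int
}

// AddPut increments the put counter under the mutex.
func (s *testStore) AddPut() {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.putCount++
}

// Puts reads the put counter under the mutex.
func (s *testStore) Puts() int {
	s.mu.Lock()
	defer s.mu.Unlock()
	return s.putCount
}
```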
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
Semantic patch:
```
@@
var f expression
var t expression
var a expression
@@
f(
...,
- zap.String(t, a.String()),
+ zap.Stringer(t, a),
...,
)
```
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
Concurrent initialization during metabase resync leads to
high memory consumption and potential OOM.
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
Now it is possible to run shard evacuation asynchronously.
Also, only one evacuation process can be in progress at a time.
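A rough sketch of such a single-run guard, using an illustrative
`evacuationGuard` type rather than the real engine code:
```
package shard

import (
	"errors"
	"sync/atomic"
)

var errEvacuationInProgress = errors.New("evacuation is already in progress")

// evacuationGuard lets only one asynchronous evacuation run at a time.
type evacuationGuard struct {
	running atomic.Bool
}

// Start launches evacuate in a goroutine if no other evacuation is running.
func (g *evacuationGuard) Start(evacuate func()) error {
	if !g.running.CompareAndSwap(false, true) {
		return errEvacuationInProgress
	}
	go func() {
		defer g.running.Store(false)
		evacuate()
	}()
	return nil
}
```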
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
All operations must ensure the shard is not in a degraded mode.
Write operations must also ensure the shard is not in a read-only mode.
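A minimal sketch of that gating, with hypothetical mode constants and a
standalone `checkMode` helper rather than the real shard API:
```
package shard

import "errors"

var (
	errDegraded = errors.New("shard is in degraded mode")
	errReadOnly = errors.New("shard is in read-only mode")
)

// mode and its constants are hypothetical stand-ins for the real shard modes.
type mode int

const (
	modeReadWrite mode = iota
	modeReadOnly
	modeDegraded
)

// checkMode rejects any operation on a degraded shard and additionally
// rejects write operations on a read-only shard.
func checkMode(m mode, write bool) error {
	if m == modeDegraded {
		return errDegraded
	}
	if write && m == modeReadOnly {
		return errReadOnly
	}
	return nil
}
```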
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
Initially it was there to check whether an update is being initiated by
a proper node. It is now obsolete for 2 reasons:
1. Background synchronization fetches all operations from a single node.
2. There are a lot more problems with trust in the tree service, and it is
   only used in controlled environments.
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
Previously, GC deleted expired locks and objects sequentially. Expired locks
and objects are now deleted concurrently in batches. A config parameter that
controls the number of concurrent workers and the batch size has been added.
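An illustrative sketch of the batched concurrent deletion, with a
hypothetical `Config` and `deleteExpired` helper rather than the actual GC
worker code:
```
package gc

import "sync"

// Config holds hypothetical GC tuning knobs mirroring the new parameters.
type Config struct {
	Workers   int // number of concurrent delete workers
	BatchSize int // number of addresses deleted per batch
}

// deleteExpired splits the expired addresses into batches and deletes the
// batches concurrently with at most cfg.Workers goroutines.
func deleteExpired(cfg Config, expired []string, del func(batch []string)) {
	sem := make(chan struct{}, cfg.Workers)
	var wg sync.WaitGroup
	for start := 0; start < len(expired); start += cfg.BatchSize {
		end := start + cfg.BatchSize
		if end > len(expired) {
			end = len(expired)
		}
		wg.Add(1)
		sem <- struct{}{}
		go func(b []string) {
			defer wg.Done()
			defer func() { <-sem }()
			del(b)
		}(expired[start:end])
	}
	wg.Wait()
}
```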
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
Currently we track based on `PayloadSize`, because it is already stored
in the metabase and it is easier to calculate without slowing down the
whole system.
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
Signed-off-by: Pavel Karpy <p.karpy@yadro.com>
Currently the only way to tell whether `evacuate/set-mode` is finished
is to set a very big timeout and _hope_ that the operation will finish.
In this commit we add INFO logs for such operations, which should
simplify the life of an administrator.
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
In the previous implementation, any non-nil error that preceded fetching an
object from the blobstor led to iterating over every storage (in other words,
no storage ID information was taken into account). Now the storage ID is
skipped only if the metabase (the storage ID source) returns an error.
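A simplified sketch of the new lookup order, with illustrative function
parameters instead of the real engine API:
```
package blobstor

// get illustrates the changed lookup order: the storage ID from the
// metabase is ignored only when the metabase itself failed; any other
// earlier error no longer forces a scan of every substorage.
func get(addr string,
	metabaseStorageID func(addr string) (string, error),
	getFromStorage func(addr, storageID string) ([]byte, error),
	getFromAll func(addr string) ([]byte, error),
) ([]byte, error) {
	id, err := metabaseStorageID(addr)
	if err != nil {
		// Only a metabase failure falls back to iterating over every storage.
		return getFromAll(addr)
	}
	return getFromStorage(addr, id)
}
```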
Signed-off-by: Pavel Karpy <p.karpy@yadro.com>
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
Because synchronization will _most likely_ apply already existing
operations, it is much faster to check their presence in a read
transaction first. However, always doing this would degrade the performance
of a normal `Apply`. And, let's be honest, it is already not good.
Thus we add a separate parameter which specifies whether this logic is
enabled.
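A rough sketch of the optional fast path, using illustrative names
(`fromSync`, `existsRO`, `applyRW`) rather than the real pilorama API:
```
package pilorama

// applyBatch shows the idea: when fromSync is true, operations are first
// filtered with a cheap read-only existence check, so already-applied ones
// never open a write transaction.
func applyBatch(ops []uint64, fromSync bool,
	existsRO func(op uint64) bool,
	applyRW func(op uint64) error,
) error {
	for _, op := range ops {
		if fromSync && existsRO(op) {
			continue // already applied, skip the write transaction
		}
		if err := applyRW(op); err != nil {
			return err
		}
	}
	return nil
}
```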
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
Both `meta` and `write-cache` are expected to have a fast underlying disk,
so it does not seem like an optimisation. Moreover, `write-cache`'s `Head`
is a `Get` with payload cutting, so it _must_ use more memory for no reason
(`meta` was created for such requests). Also, `write-cache` does not allow
performing any "meta" relation checks (such as locking or tombstoning).
Signed-off-by: Pavel Karpy <p.karpy@yadro.com>
Includes extending listing methods in the Storage Engine with object types.
It allows tuning replication/policer algorithms: container nodes do
not remove `LOCK` objects as redundant and try to fulfill `LOCK` placement
on the other container nodes.
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
It was needed before we started to flush during the transition to
`degraded` mode. Now it is confusing.
Signed-off-by: Evgenii Stratonikov <evgeniy@morphbits.ru>
Currently there is a possibility for modifying operations to fail
because of I/O errors, and then a new tree is created on another shard.
This commit adds an existence check for modifying operations.
Read operations remain as they are, so as not to slow things down.
`TreeDrop` is an exception, because it is a tree removal and trying
multiple shards is not an unwanted behaviour.
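A simplified sketch of the added guard, with illustrative names rather than
the real tree service API:
```
package engine

import "errors"

var errTreeNotFound = errors.New("tree not found on any shard")

// treeApply applies a modifying tree operation only on a shard where the
// tree already exists, so an I/O error cannot silently create a second
// copy of the tree on another shard.
func treeApply(shards []string,
	treeExists func(shard string) (bool, error),
	apply func(shard string) error,
) error {
	for _, sh := range shards {
		ok, err := treeExists(sh)
		if err != nil || !ok {
			continue // never create a new tree here as a side effect
		}
		return apply(sh)
	}
	return errTreeNotFound
}
```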
Signed-off-by: Evgenii Stratonikov <evgeniy@morphbits.ru>