metabase.Open() now reports the metabase mode metric. shard.UpdateID() needs to read the shard ID from the metabase and therefore has to open it, which caused 'shard undefined' metrics to be reported. To avoid reporting wrong metrics, metabase.GetShardID() was added: it also opens the metabase but does not report metrics.
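A minimal sketch of the idea with made-up names (not the actual frostfs-node API): the shard ID is read through a dedicated method that opens the database but deliberately skips metric reporting.
```
package main

import "fmt"

type modeMetrics interface{ SetMode(shardID, mode string) }

type noopMetrics struct{}

func (noopMetrics) SetMode(string, string) {}

// DB is a stand-in for the metabase.
type DB struct {
	metrics modeMetrics
	shardID string
	mode    string
}

// Open reports the mode metric; calling it before the shard ID is known
// would export the mode under an "undefined" shard label.
func (db *DB) Open() error {
	db.metrics.SetMode(db.shardID, db.mode)
	return nil
}

// GetShardID opens the underlying database just long enough to read the
// stored shard ID and reports no metrics at all.
func (db *DB) GetShardID() (string, error) {
	// ... open bbolt, read the shard ID system key, close ...
	return db.shardID, nil
}

func main() {
	db := &DB{metrics: noopMetrics{}, shardID: "shard-1", mode: "read-write"}
	id, _ := db.GetShardID()
	fmt.Println("shard id:", id)
}
```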
Signed-off-by: Ekaterina Lebedeva <ekaterina.lebedeva@yadro.com>
The metric used to always show CLOSED regardless of the actual mode. Now it represents the actual metabase mode of operation.
Signed-off-by: Ekaterina Lebedeva <ekaterina.lebedeva@yadro.com>
`fmt.Errorf can be replaced with errors.New` and `fmt.Sprintf can be replaced with string addition`
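These read like linter suggestions; a tiny illustration of both replacements (names are made up):
```
package main

import (
	"errors"
	"fmt"
)

// was: fmt.Errorf("shard is in read-only mode") -- no formatting verbs needed
var errReadOnly = errors.New("shard is in read-only mode")

// was: fmt.Sprintf("container_%s", cid) -- a single %s is just concatenation
func bucketKey(cid string) string {
	return "container_" + cid
}

func main() {
	fmt.Println(errReadOnly, bucketKey("C1"))
}
```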
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
There may be a race condition between putting an object and flushing the
writecache (illustrated by the toy sketch below):
1. Put the object to the writecache
2. The writecache flushes the object to the blobstore and sets the blobstore's
storageID
3. Put the object to the metabase, setting the writecache's storageID
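A toy illustration (not frostfs-node code): whoever writes the storage ID last wins, so the Put from step 3 can overwrite the fresher blobstore ID set by the flush in step 2.
```
package main

import "fmt"

func main() {
	storageID := map[string]string{} // stand-in for the metabase record

	// step 2: the flush finishes first and records the blobstore storage ID
	storageID["OID"] = "blobstore"

	// step 3: the original Put lands afterwards and clobbers it
	storageID["OID"] = "writecache"

	// the metabase now points at the writecache even though the object
	// already lives in the blobstore
	fmt.Println(storageID["OID"])
}
```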
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
Most of the time the bucket already exists, e.g. when it is per-container and
used on each object PUT. The bbolt implementation first tries to create the
bucket and then returns it if it already exists. The create operation uses a
cursor and thus is not very lightweight, so we can avoid it, as sketched below.
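A sketch of the optimization with go.etcd.io/bbolt (bucket and key names are illustrative): look the bucket up first and fall back to the heavier create path only when it is missing.
```
package main

import (
	"log"

	"go.etcd.io/bbolt"
)

func putToBucket(db *bbolt.DB, name, key, value []byte) error {
	return db.Update(func(tx *bbolt.Tx) error {
		b := tx.Bucket(name) // cheap lookup, hits the common case
		if b == nil {
			var err error
			b, err = tx.CreateBucket(name) // rare path: first object in the container
			if err != nil {
				return err
			}
		}
		return b.Put(key, value)
	})
}

func main() {
	db, err := bbolt.Open("meta.db", 0o600, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if err := putToBucket(db, []byte("container"), []byte("oid"), []byte("meta")); err != nil {
		log.Fatal(err)
	}
}
```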
```
goos: linux
goarch: amd64
pkg: git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/metabase
cpu: 11th Gen Intel(R) Core(TM) i5-1135G7 @ 2.40GHz
│ old │ new │
│ sec/op │ sec/op vs base │
Put/parallel-8 174.4µ ± 3% 163.3µ ± 3% -6.39% (p=0.000 n=10)
Put/sequential-8 263.3µ ± 2% 259.0µ ± 1% -1.64% (p=0.000 n=10)
geomean 214.3µ 205.6µ -4.05%
│ old │ new │
│ B/op │ B/op vs base │
Put/parallel-8 275.3Ki ± 3% 281.1Ki ± 4% ~ (p=0.063 n=10)
Put/sequential-8 413.0Ki ± 2% 426.6Ki ± 2% +3.29% (p=0.003 n=10)
geomean 337.2Ki 346.3Ki +2.70%
│ old │ new │
│ allocs/op │ allocs/op vs base │
Put/parallel-8 678.0 ± 1% 524.5 ± 2% -22.64% (p=0.000 n=10)
Put/sequential-8 1.329k ± 0% 1.183k ± 0% -10.91% (p=0.000 n=10)
geomean 949.1 787.9 -16.98%
```
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
GC inhumes expired locks and tombstones on all the shards, so there may be a
GC mark without the object itself.
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
Rename Available -> Logic and Raw -> Phy for delete/inhume results.
Use a single counter instead of vectors.
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
Now UpdateStorageID doesn't return an error in case of a logical error.
If the object is in the graveyard or has a GC mark, its storage ID still
needs to be updated.
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
Previously, a node could get an "infinite" small object: it could be expired
and thus could not be flushed (have its storage ID updated) in the metabase =>
could not be marked as flushed => the node never removed such an object and
repeated the whole cycle again. If the object exists and is not marked with GC
(meta returns `ErrObjectIsExpired`, not `ObjectNotFound` and not
`ObjectAlreadyRemoved`), its ID is safe to update _in the same_ bbolt
transaction.
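A sketch of that decision (the error names follow the commit text; the helper itself is made up):
```
package main

import (
	"errors"
	"fmt"
)

var (
	ErrObjectIsExpired      = errors.New("object is expired")
	ErrObjectNotFound       = errors.New("object not found")
	ErrObjectAlreadyRemoved = errors.New("object already removed")
)

// shouldUpdateStorageID tells the flush loop whether it may update the
// object's storage ID in the same bbolt transaction.
func shouldUpdateStorageID(metaErr error) bool {
	switch {
	case metaErr == nil, errors.Is(metaErr, ErrObjectIsExpired):
		return true // object exists (possibly expired): safe to update
	case errors.Is(metaErr, ErrObjectNotFound), errors.Is(metaErr, ErrObjectAlreadyRemoved):
		return false // nothing to flush, GC takes care of it
	default:
		return false // some other error: do not touch the ID here
	}
}

func main() {
	fmt.Println(shouldUpdateStorageID(ErrObjectIsExpired))      // true
	fmt.Println(shouldUpdateStorageID(ErrObjectAlreadyRemoved)) // false
}
```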
Signed-off-by: Pavel Karpy <p.karpy@yadro.com>
Currently, tracking is based on `PayloadSize`, because it is already stored
in the metabase and is easier to compute without slowing down the whole system.
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
Signed-off-by: Pavel Karpy <p.karpy@yadro.com>
Includes:
1. taking a mode-change read lock in every exported method that reads or
writes the underlying database;
2. returning the `ErrDegradedMode` logical error if any exported method is
called in degraded (without a metabase) mode; see the sketch below.
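A minimal sketch of the pattern with illustrative names: every exported method takes the mode mutex for reading and refuses to touch the database in degraded mode.
```
package main

import (
	"errors"
	"fmt"
	"sync"
)

var ErrDegradedMode = errors.New("metabase is in a degraded mode")

type mode uint32

const (
	modeReadWrite mode = iota
	modeDegraded
)

// DB is a stand-in for the metabase.
type DB struct {
	modeMtx sync.RWMutex
	mode    mode
}

// Get is representative of every exported method that reads or writes bbolt:
// it holds the mode read lock for the whole call, so a mode change cannot
// happen mid-operation.
func (db *DB) Get(addr string) ([]byte, error) {
	db.modeMtx.RLock()
	defer db.modeMtx.RUnlock()

	if db.mode == modeDegraded {
		return nil, ErrDegradedMode // logical error: no database access happens
	}
	// ... the actual bbolt lookup would go here ...
	return nil, nil
}

func main() {
	db := &DB{mode: modeDegraded}
	_, err := db.Get("CID/OID")
	fmt.Println(errors.Is(err, ErrDegradedMode)) // true

	db.mode = modeReadWrite
	_, err = db.Get("CID/OID")
	fmt.Println(err) // <nil>
}
```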
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
Includes extending the listing methods in the Storage Engine with object types.
It allows tuning replication/policer algorithms: container nodes do not
remove `LOCK` objects as redundant and try to fulfill `LOCK` placement
on the other container nodes.
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
It is not an error: removing a virtual object is expected and should simply be
skipped. Getting a virtual object with the `raw` flag is considered an
impossible action; all virtual object removals will be handled implicitly via
their children's removals.
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
By default the writecache puts the whole object just to update the storage ID.
This logic comes from the times when the writecache itself needed to put
objects in the metabase. Now this is done by the blobstor, and unmarshaling
objects during flush only to update the storage ID is overkill.
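A sketch of the change with made-up interfaces (not the real writecache/metabase API): during flush only the storage ID needs to be repointed, so the raw payload is never unmarshaled.
```
package main

import "fmt"

type metabase interface {
	Put(addr string, obj []byte, storageID string) error // old path: needs the whole object
	UpdateStorageID(addr, storageID string) error        // new path: address + ID are enough
}

type mapMeta map[string]string

func (m mapMeta) Put(addr string, _ []byte, storageID string) error {
	m[addr] = storageID
	return nil
}

func (m mapMeta) UpdateStorageID(addr, storageID string) error {
	m[addr] = storageID
	return nil
}

// flushOne records where the blobstor stored the object without touching the
// object payload at all.
func flushOne(meta metabase, addr, blobstorID string) error {
	return meta.UpdateStorageID(addr, blobstorID)
}

func main() {
	meta := mapMeta{}
	_ = flushOne(meta, "CID/OID", "blobstor:0/1")
	fmt.Println(meta["CID/OID"])
}
```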
Signed-off-by: Evgenii Stratonikov <evgeniy@morphbits.ru>
All logic errors are wrapped in the `logicerr.Logical` type and do not
affect the shard error counter.
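A minimal sketch of the mechanism with its own types (the real `logicerr` package lives in the repo; this is only an illustration): logical errors are wrapped in a dedicated type, and the counter is bumped only for errors that are not of that type.
```
package main

import (
	"errors"
	"fmt"
)

// Logical marks errors caused by the request itself, not by storage failures.
type Logical struct{ err error }

func (e Logical) Error() string { return e.err.Error() }
func (e Logical) Unwrap() error { return e.err }

func Wrap(err error) error { return Logical{err: err} }

var errorCounter int

func reportShardError(err error) {
	var l Logical
	if errors.As(err, &l) {
		return // logical error: do not touch the counter
	}
	errorCounter++
}

func main() {
	reportShardError(Wrap(errors.New("object not found"))) // counter stays at 0
	reportShardError(errors.New("bbolt: database I/O error"))
	fmt.Println(errorCounter) // 1
}
```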
Signed-off-by: Evgenii Stratonikov <evgeniy@morphbits.ru>
From the `Bucket.ForEach` doc:
```
The provided function must not modify the bucket; this will result in undefined behavior.
```
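A sketch of the safe pattern with go.etcd.io/bbolt: collect the keys while iterating and modify the bucket only after `ForEach` returns (or iterate with a cursor instead). The bucket name is illustrative.
```
package main

import (
	"bytes"
	"log"

	"go.etcd.io/bbolt"
)

func dropAll(db *bbolt.DB, bucket []byte) error {
	return db.Update(func(tx *bbolt.Tx) error {
		b := tx.Bucket(bucket)
		if b == nil {
			return nil
		}

		var keys [][]byte
		if err := b.ForEach(func(k, _ []byte) error {
			keys = append(keys, bytes.Clone(k)) // no Delete/Put here: the doc forbids it
			return nil
		}); err != nil {
			return err
		}

		for _, k := range keys {
			if err := b.Delete(k); err != nil {
				return err
			}
		}
		return nil
	})
}

func main() {
	db, err := bbolt.Open("meta.db", 0o600, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if err := dropAll(db, []byte("graveyard")); err != nil {
		log.Fatal(err)
	}
}
```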
Signed-off-by: Evgenii Stratonikov <evgeniy@morphbits.ru>
In the 2nd version, there was a database format change: buckets changed their
keys, so it became impossible to check the version in 1 -> 2+ migrations
because the version info is stored in different buckets.
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>
Make it store its internal `zap.Logger`'s level. Also, make all the
components accept the internal `logger.Logger` instead of `zap.Logger`; it
will simplify future refactoring.
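A sketch of such a wrapper with illustrative field names, using the stock zap API: the wrapper keeps its own atomic level, so callers can change verbosity without touching `*zap.Logger` directly.
```
package main

import (
	"go.uber.org/zap"
	"go.uber.org/zap/zapcore"
)

// Logger embeds *zap.Logger and remembers the level it was built with.
type Logger struct {
	*zap.Logger
	lvl zap.AtomicLevel
}

func NewLogger() (*Logger, error) {
	lvl := zap.NewAtomicLevelAt(zapcore.InfoLevel)

	cfg := zap.NewProductionConfig()
	cfg.Level = lvl

	z, err := cfg.Build()
	if err != nil {
		return nil, err
	}
	return &Logger{Logger: z, lvl: lvl}, nil
}

// SetLevel adjusts the stored level; every core built from it follows.
func (l *Logger) SetLevel(level zapcore.Level) { l.lvl.SetLevel(level) }

func main() {
	log, err := NewLogger()
	if err != nil {
		panic(err)
	}
	log.Info("components accept this wrapper instead of *zap.Logger")
	log.SetLevel(zapcore.DebugLevel)
	log.Debug("level changed at runtime")
}
```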
Signed-off-by: Pavel Karpy <carpawell@nspcc.ru>