Allow to seal writecache after flush #886

Merged
fyrchik merged 1 commit from dstepanov-yadro/frostfs-node:feat/flush_and_disable_writecache into master 2024-09-04 19:51:05 +00:00
  1. Added a new flag `--seal` for the command `frostfs-cli control shards flush-cache` that changes the writecache mode to read-only after flush.

  2. Fixed manual flush: objects are now deleted from the DB too; previously only fstree objects were deleted.

  3. Fixed manual flush (2): it now takes an exclusive lock on `modeMtx`, so manual flush doesn't conflict with background flush.

  4. The `Put` and `Delete` methods now return an error immediately if the writecache is changing mode, closing, or flushing objects via manual invocation.

To check:

  1. Start k6 to write data.

  2. Grafana shows that the writecache stores objects.

  3. Run `frostfs-cli control shards flush-cache --id <shard_id> --endpoint ... --wallet ... --seal`.

  4. Grafana shows that the writecache has stopped storing objects, all objects were flushed, and the mode was changed:
    (screenshot)

UPD:

Also added `frostfs-cli control shards writecache seal`, which does almost the same as `frostfs-cli control shards flush-cache --seal` but moves the writecache to degraded read-only mode.

Output example:

$ frostfs-cli control shards writecache seal --all --endpoint ... --wallet ...
Enter password > 
Shard G9yBpG87G28VxvUdTY8e9M: failed with error "shard is in read-only mode"
Shard LhMy32HTvHAC8eSWArCwoh: OK
Total: 1 success, 1 failed
$ frostfs-cli control shards writecache seal --all --endpoint ... --wallet ....
Enter password > 
Shard NrZrnj9UreF2kZXX96AHLo: OK
Shard L6se643BehHdVpLZe444Hc: OK
Total: 2 success, 0 failed
$ frostfs-cli control shards writecache seal --id 6688MpNXDTdTMjgtcJEEtR --endpoint ... --wallet ...
Enter password > 
Shard 6688MpNXDTdTMjgtcJEEtR: OK
Total: 1 success, 0 failed

Relates #569

dstepanov-yadro force-pushed feat/flush_and_disable_writecache from 33790b2d94 to 58702a57e7 2023-12-22 06:57:42 +00:00 Compare
dstepanov-yadro reviewed 2023-12-22 07:01:31 +00:00
@ -239,3 +239,3 @@
// Write-cache must be in readonly mode to ensure correctness of an operation and
// to prevent interference with background flush workers.
func (c *cache) Flush(ctx context.Context, ignoreErrors bool) error {
func (c *cache) Flush(ctx context.Context, ignoreErrors, _ bool) error {
Author
Member

Decided to drop the badger implementation, so no change here.
Owner

Don't understand -- if we dropped badger and rebased, why this change?
Owner

Oh, I see now.
fyrchik marked this conversation as resolved
dstepanov-yadro reviewed 2023-12-22 07:02:56 +00:00
@ -34,3 +35,3 @@
}()
c.modeMtx.RLock()
if !c.modeMtx.TryRLock() {
Author
Member

The writecache is closing, changing mode, or flushing, so there is no need to wait; this step will be skipped.
dstepanov-yadro reviewed 2023-12-22 07:03:45 +00:00
@ -136,13 +139,11 @@ func (c *cache) flushSmallObjects(ctx context.Context) {
}
}
c.modeMtx.RUnlock()
Author
Member

Refactoring.
dstepanov-yadro reviewed 2023-12-22 07:05:46 +00:00
@ -293,2 +301,3 @@
return c.db.View(func(tx *bbolt.Tx) error {
for {
batch, err := c.readNextDBBatch(ignoreErrors)
Author
Member

Batching added to avoid storing all objects in a single slice for deletion.
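The batching idea can be sketched without bbolt (the real code walks a bbolt bucket cursor inside `db.View`; `readNextBatch` and `batchSize` here are hypothetical stand-ins): read at most `batchSize` keys per pass, remembering where the previous pass stopped.

```go
package main

import (
	"fmt"
	"sort"
)

const batchSize = 2

// readNextBatch returns up to batchSize keys greater than after, in order.
// A sorted slice stands in for the bbolt cursor used in the real writecache.
func readNextBatch(keys []string, after string) []string {
	sort.Strings(keys)
	var batch []string
	for _, k := range keys {
		if after != "" && k <= after {
			continue // already processed in an earlier batch
		}
		batch = append(batch, k)
		if len(batch) == batchSize {
			break
		}
	}
	return batch
}

func main() {
	keys := []string{"a", "b", "c", "d", "e"}
	last := ""
	for {
		batch := readNextBatch(keys, last)
		if len(batch) == 0 {
			break
		}
		fmt.Println(batch) // flush and delete this batch, then continue
		last = batch[len(batch)-1]
	}
}
```

This keeps memory bounded by the batch size instead of holding every key in one slice for deletion.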
dstepanov-yadro reviewed 2023-12-22 07:06:37 +00:00
@ -44,3 +44,3 @@
}()
c.modeMtx.RLock()
if !c.modeMtx.TryRLock() {
Author
Member

The writecache is closing, changing mode, or flushing, so there is no need to wait; this step will be skipped.
Owner

It seems like a separate change in behavior; it deserves a separate commit.
Author
Member

Done
dstepanov-yadro reviewed 2023-12-22 07:08:05 +00:00
@ -69,3 +69,3 @@
}
func (c *cache) deleteFromDB(key string) {
func (c *cache) deleteFromDB(key string, batched bool) {
Author
Member

When flush is started by an external command, there is no need for batching, because deletion happens in a single goroutine.
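The choice the `batched` flag makes can be sketched against a stand-in for bbolt's DB (in bbolt, `Batch` coalesces concurrent write transactions while `Update` runs one immediately; `txRunner` and `fakeDB` here are illustrative):

```go
package main

import "fmt"

// txRunner abstracts the two bbolt write-transaction styles.
type txRunner interface {
	Batch(func() error) error
	Update(func() error) error
}

// fakeDB records which style was used, standing in for *bbolt.DB.
type fakeDB struct{ calls []string }

func (d *fakeDB) Batch(fn func() error) error  { d.calls = append(d.calls, "batch"); return fn() }
func (d *fakeDB) Update(fn func() error) error { d.calls = append(d.calls, "update"); return fn() }

// deleteFromDB picks the transaction style: background flush workers run
// concurrently and benefit from Batch coalescing; manual flush deletes from
// a single goroutine, so a plain Update avoids the batching latency.
func deleteFromDB(db txRunner, key string, batched bool) error {
	del := func() error {
		// tx.Bucket(...).Delete([]byte(key)) in the real code
		return nil
	}
	if batched {
		return db.Batch(del)
	}
	return db.Update(del)
}

func main() {
	db := &fakeDB{}
	deleteFromDB(db, "k1", true)  // background flush path
	deleteFromDB(db, "k2", false) // manual flush path
	fmt.Println(db.calls)
}
```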
dstepanov-yadro requested review from storage-core-committers 2023-12-22 07:09:53 +00:00
dstepanov-yadro requested review from storage-core-developers 2023-12-22 07:09:55 +00:00
acid-ant requested review from anikeev-yadro 2023-12-22 07:17:05 +00:00
anikeev-yadro approved these changes 2023-12-22 07:28:33 +00:00
dstepanov-yadro changed title from Allow to seal writecache after flush to WIP: Allow to seal writecache after flush 2023-12-22 07:46:11 +00:00
dstepanov-yadro force-pushed feat/flush_and_disable_writecache from 58702a57e7 to 0605da30de 2023-12-22 08:14:02 +00:00 Compare
dstepanov-yadro reviewed 2023-12-22 08:17:01 +00:00
@ -68,4 +68,2 @@
require.Error(t, wc.SetMode(mode.Degraded))
// First move to read-only mode to close background workers.
require.NoError(t, wc.SetMode(mode.ReadOnly))
Author
Member

It was used to stop background workers. Now background workers are disabled with a writecache option. Also, setting read-only mode makes it impossible to delete objects from the DB.
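A writecache option that skips starting the flush workers can be sketched with the functional-options pattern (`WithDisabledBackgroundFlush` and the surrounding names are hypothetical, not the actual frostfs-node option):

```go
package main

import "fmt"

type options struct{ disableBackgroundFlush bool }

// Option configures the cache at construction time.
type Option func(*options)

// WithDisabledBackgroundFlush is an illustrative test-only switch: the cache
// starts no background flush goroutines, so tests control flushing directly.
func WithDisabledBackgroundFlush() Option {
	return func(o *options) { o.disableBackgroundFlush = true }
}

func newCache(opts ...Option) *options {
	o := &options{}
	for _, opt := range opts {
		opt(o)
	}
	if !o.disableBackgroundFlush {
		// start the flush worker goroutines here in the real cache
	}
	return o
}

func main() {
	c := newCache(WithDisabledBackgroundFlush())
	fmt.Println(c.disableBackgroundFlush)
}
```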
dstepanov-yadro changed title from WIP: Allow to seal writecache after flush to Allow to seal writecache after flush 2023-12-22 08:36:05 +00:00
acid-ant approved these changes 2023-12-22 10:42:10 +00:00
acid-ant approved these changes 2023-12-22 10:50:00 +00:00
dstepanov-yadro force-pushed feat/flush_and_disable_writecache from 0605da30de to f49fefe3ad 2023-12-22 11:39:29 +00:00 Compare
elebedeva approved these changes 2023-12-25 15:19:38 +00:00
fyrchik reviewed 2023-12-26 08:40:24 +00:00
@ -33,3 +34,3 @@
}()
c.modeMtx.RLock()
if !c.modeMtx.TryRLock() {
Owner

If writecache is doing something, why don't we just hang here? Anyway, if it is not related to sealing, how about doing it in a separate commit or before your changes -- it deserves a separate description.
Author
Member

The writecache is not mandatory, but it can take a lot of time to flush, for example. Also, if `modeMtx` is locked, the writecache will most likely be unavailable for writes anyway, so put or delete can be processed by blobstor directly.
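The fallback the author describes can be sketched at the shard level (names like `shard` and `put` are illustrative): if the writecache's mode mutex is busy, the object goes to blobstor directly instead of waiting.

```go
package main

import (
	"fmt"
	"sync"
)

type shard struct {
	wcMtx sync.RWMutex // stands in for the writecache's modeMtx
}

// put tries the writecache first, but the writecache is optional: if its
// mode mutex is held (flush, seal, close), the object is written to
// blobstor directly rather than blocking on the lock.
func (s *shard) put(obj string) string {
	if s.wcMtx.TryRLock() {
		defer s.wcMtx.RUnlock()
		return "writecache"
	}
	return "blobstor"
}

func main() {
	s := &shard{}
	fmt.Println(s.put("obj1")) // writecache available

	s.wcMtx.Lock() // manual flush / seal in progress
	fmt.Println(s.put("obj2"))
	s.wcMtx.Unlock()
}
```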
@ -135,13 +138,11 @@ func (c *cache) flushSmallObjects(ctx context.Context) {
}
}
c.modeMtx.RUnlock()
Owner

Pure refactoring; please make it a separate commit.
Author
Member

Done.
fyrchik marked this conversation as resolved
@ -281,3 +281,2 @@
c.modeMtx.RLock()
defer c.modeMtx.RUnlock()
c.modeMtx.Lock() // exclusive lock to not to conflict with background flush
Owner

What do you mean by "conflict with background flush"?
Both flushes do the same; here we just need to return _after_ everything is flushed.
Author
Member

Background flush acquires RLock, so only one flush will be in progress: background or manual.
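The locking interplay can be sketched with the standard `sync.RWMutex` (method names are illustrative): background workers need at least a shared lock, so a manual flush holding the exclusive lock cannot overlap with them.

```go
package main

import (
	"fmt"
	"sync"
)

type cache struct{ modeMtx sync.RWMutex }

// manualFlush holds the exclusive lock for its whole duration, so no
// background worker (which needs an RLock) can flush concurrently.
func (c *cache) manualFlush(during func()) {
	c.modeMtx.Lock()
	defer c.modeMtx.Unlock()
	during() // flush everything, then return
}

// backgroundTick is one background-flush attempt; it skips the pass when the
// exclusive lock is held, mirroring the TryRLock change in Put/Delete.
func (c *cache) backgroundTick() bool {
	if !c.modeMtx.TryRLock() {
		return false
	}
	defer c.modeMtx.RUnlock()
	// ... flush a portion of objects here ...
	return true
}

func main() {
	c := &cache{}
	fmt.Println(c.backgroundTick()) // nothing holds the lock: the pass runs
	c.manualFlush(func() {
		fmt.Println(c.backgroundTick()) // manual flush is exclusive: skipped
	})
	fmt.Println(c.backgroundTick()) // lock released: passes run again
}
```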
@ -77,1 +72,3 @@
})
var err error
if batched {
err = c.db.Batch(func(tx *bbolt.Tx) error {
Owner

Again, an optimization; please make it a separate commit.
Author
Member

No, it isn't an optimization: for manual flush we don't need batching.
Owner

Why is manual flush single-threaded then?
Author
Member

To reduce complexity. There is no need to make manual flush as fast as possible.
Owner

On the contrary, 8 hours vs 1 hour can make a difference -- this is all done by a human.
Author
Member

It looks like those values were not taken from real life: flush works pretty fast now.
fyrchik marked this conversation as resolved
fyrchik reviewed 2023-12-26 10:28:03 +00:00
@ -284,2 +283,3 @@
defer c.modeMtx.Unlock()
return c.flush(ctx, ignoreErrors)
if err := c.flush(ctx, ignoreErrors); err != nil {
Owner

There are 2 cases to consider:

  1. DEGRADED shard mode.
  2. READONLY shard mode.

What is the behaviour in these situations? It looks like we should fail in both:

  1. In DEGRADED writecache is not opened and SSD can be missing.
  2. In READONLY blobstor is in READONLY too (likely for a reason, like dead HDD), so flushing won't be done.
Author
Member

And we will fail: https://git.frostfs.info/TrueCloudLab/frostfs-node/src/commit/8180a0664f05b5adedf399cbd8ad90f0f37115aa/pkg/local_object_storage/shard/writecache.go#L27

func (s *Shard) FlushWriteCache(ctx context.Context, p FlushWriteCachePrm) error {
fyrchik marked this conversation as resolved
dstepanov-yadro force-pushed feat/flush_and_disable_writecache from f49fefe3ad to a1c438acd4 2023-12-27 05:22:13 +00:00 Compare
dstepanov-yadro force-pushed feat/flush_and_disable_writecache from a1c438acd4 to 20e9670f9a 2023-12-27 08:41:42 +00:00 Compare
dstepanov-yadro force-pushed feat/flush_and_disable_writecache from 98fd3bb184 to 5a9f171804 2023-12-27 12:00:47 +00:00 Compare
dstepanov-yadro force-pushed feat/flush_and_disable_writecache from 5a9f171804 to 11de6264e0 2023-12-28 08:28:13 +00:00 Compare
fyrchik reviewed 2023-12-29 09:56:31 +00:00
@ -16,0 +15,4 @@
Short: "Flush objects from the write-cache to the main storage",
Long: "Flush objects from the write-cache to the main storage",
Run: flushCache,
Deprecated: "Flushing objects from writecache to the main storage performs by writecache automatically. To flush and seal writecache use `frostfs-cli control shards writecache seal`.",
Owner

s/performs/is performed by/
Author
Member

fixed
fyrchik marked this conversation as resolved
@ -0,0 +53,4 @@
cmd.Printf("Shard %s: OK\n", base58.Encode(res.GetShard_ID()))
} else {
failed++
cmd.Printf("Shard %s: failed with error \"%s\"\n", base58.Encode(res.GetShard_ID()), res.GetError())
Owner

\"%s\" -> %q?

`\"%s\"` -> `%q`?
Author
Member

fixed
fyrchik marked this conversation as resolved
@ -0,0 +29,4 @@
for _, r := range res.ShardResults {
if r.Success {
resp.Body.Results = append(resp.GetBody().GetResults(), &control.SealWriteCacheResponse_Body_Status{
Shard_ID: r.ShardID.Bytes(),
Owner

Doesn't `*r.ShardID` work? We already have similar code in other parts.
Author
Member

fixed
fyrchik marked this conversation as resolved
dstepanov-yadro force-pushed feat/flush_and_disable_writecache from 11de6264e0 to 5afad725d3 2023-12-29 12:00:24 +00:00 Compare
dstepanov-yadro force-pushed feat/flush_and_disable_writecache from 5afad725d3 to 581887148a 2023-12-29 13:06:02 +00:00 Compare
fyrchik approved these changes 2023-12-29 14:14:57 +00:00
fyrchik merged commit 581887148a into master 2023-12-29 14:33:13 +00:00
Reference: TrueCloudLab/frostfs-node#886