node: Stop flushing big object when termination signal received #379
Reference: TrueCloudLab/frostfs-node#379
Close #364
Signed-off-by: Anton Nikiforov <an.nikiforov@yadro.com>
Changed title from "node: Stop flushing big object when termination signal received" to "WIP: node: Stop flushing big object when termination signal received"

Force-pushed from 27f76ee280 to a525fe391b

Changed title from "WIP: node: Stop flushing big object when termination signal received" to "node: Stop flushing big object when termination signal received"

@@ -86,3 +86,3 @@
```bash
frostfs-cli control shards evacuation start --endpoint s01.frostfs.devenv:8081 --wallet ./../frostfs-dev-env/services/storage/wallet01.json --id 54Y8aot9uc7BSadw2XtYr3 --await --no-progress
Enter password >
Enter password >
```
What a beautiful separate commit!
@@ -210,6 +210,7 @@ func (c *cache) flushFSTree(ctx context.Context, ignoreErrors bool) error {
```go
		c.deleteFromDisk(ctx, []string{sAddr})
		return nil
	}
	prm.CloseCh = c.closeCh
```
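For context, here is a minimal sketch of how a `CloseCh` field like this is typically consumed inside the storage iterator; the types and error value below are illustrative assumptions, not the exact frostfs-node API:

```go
package main

import (
	"errors"
	"fmt"
)

type object struct{ addr string }

// iteratePrm mimics the shape of a CloseCh-carrying parameter struct;
// the real common.IteratePrm in frostfs-node differs.
type iteratePrm struct {
	CloseCh <-chan struct{}
	Handler func(object) error
}

var errIterationAborted = errors.New("iteration aborted by close channel")

// iterate visits objects but bails out as soon as CloseCh is closed,
// which is how a flush loop can stop mid-stream on shutdown.
func iterate(prm iteratePrm, objs []object) error {
	for _, obj := range objs {
		select {
		case <-prm.CloseCh:
			return errIterationAborted
		default:
		}
		if err := prm.Handler(obj); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	closeCh := make(chan struct{})
	close(closeCh) // simulate a termination signal already delivered
	err := iterate(iteratePrm{
		CloseCh: closeCh,
		Handler: func(o object) error { fmt.Println("flush", o.addr); return nil },
	}, []object{{addr: "obj1"}, {addr: "obj2"}})
	fmt.Println(err) // iteration aborted by close channel
}
```

Checking the channel once per object keeps the abort latency bounded by the cost of flushing a single object, which is exactly what matters for big objects.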
Have you run tests with the `-race` flag? `c.closeCh` can be unsafe to access.

Yes, and they have passed. We set `closeCh` to nil in `Close()`, and only after `c.wg.Wait()` has completed.

Also, what about providing a context, like it is done for the other methods of the `common.Storage` interface?

I thought about this, that was the first implementation. But it takes a lot of changes, not only in `Init()` for `writecache`: `SetMode()` and `Open()` also need to use the context from main. We would also need to redesign the main shutdown function: use the context from main everywhere, create a child context for each component, and close the main one only when all children are already closed, to keep the shutdown process manageable.

We discussed this with @dstepanov-yadro, and it looks like Context is more about requests than about controlling the application lifecycle. @fyrchik, what do you think?
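For illustration, a minimal sketch of the shutdown pattern described above, with assumed field names rather than the verbatim writecache code: the worker selects on `closeCh`, and `Close()` nils the field only after `wg.Wait()`, which provides the happens-before edge that keeps `-race` quiet.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// cache is a minimal stand-in for the writecache type; field names
// are assumptions for illustration.
type cache struct {
	wg      sync.WaitGroup
	closeCh chan struct{}
}

// runFlushLoop starts a worker that flushes periodically and exits
// once closeCh is closed.
func (c *cache) runFlushLoop() {
	c.wg.Add(1)
	go func() {
		defer c.wg.Done()
		t := time.NewTicker(10 * time.Millisecond)
		defer t.Stop()
		for {
			select {
			case <-c.closeCh:
				return // termination signal: stop flushing
			case <-t.C:
				// flush the next batch of objects here
			}
		}
	}()
}

// Close signals the workers, waits until every one of them has
// observed the closed channel, and only then nils the field;
// wg.Wait() orders the nil write after all worker reads.
func (c *cache) Close() {
	close(c.closeCh)
	c.wg.Wait()
	c.closeCh = nil
}

func main() {
	c := &cache{closeCh: make(chan struct{})}
	c.runFlushLoop()
	time.Sleep(30 * time.Millisecond)
	c.Close()
	fmt.Println("shut down cleanly")
}
```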
Force-pushed from a525fe391b to 75c9e88972
As discussed with @acid-ant, we can use such a trick to pass ctx to the flush sub-calls (see the sketch below).
It is more idiomatic Go, I think.
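The snippet originally attached to this comment is not preserved here; a minimal sketch of one such trick, assuming it derives a context from the cache's close channel so that context-aware sub-calls can be reused (the helper name is hypothetical):

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// bridgeCloseCh derives a context that is cancelled when closeCh is
// closed, so context-aware flush sub-calls can be reused without
// threading a bare channel through every signature.
func bridgeCloseCh(parent context.Context, closeCh <-chan struct{}) (context.Context, context.CancelFunc) {
	ctx, cancel := context.WithCancel(parent)
	go func() {
		select {
		case <-closeCh:
			cancel()
		case <-ctx.Done():
		}
	}()
	return ctx, cancel
}

func main() {
	closeCh := make(chan struct{})
	ctx, cancel := bridgeCloseCh(context.Background(), closeCh)
	defer cancel()

	go func() {
		time.Sleep(50 * time.Millisecond)
		close(closeCh) // simulate the termination signal
	}()

	<-ctx.Done() // a flush sub-call would select on this
	fmt.Println("flush aborted:", ctx.Err())
}
```

The bridging goroutine exits via either branch of its select, so it does not leak once cancel is called.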
@acid-ant The @dstepanov-yadro suggestion looks fine to me.
Force-pushed from 75c9e88972 to 2575f6c00d
Updated, @fyrchik and @dstepanov-yadro please review.
@@ -12,3 +13,3 @@
```diff
 // Iterate iterates over all objects in b.
-func (b *Blobovniczas) Iterate(prm common.IteratePrm) (common.IterateRes, error) {
+func (b *Blobovniczas) Iterate(_ context.Context, prm common.IteratePrm) (common.IterateRes, error) {
```
Shouldn't it respect context too?
I prefer to do this in a separate PR, since another bunch of files needs refactoring. Created #394 for tracking.
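For reference, the refactoring tracked by #394 boils down to checking the context between objects instead of ignoring the argument; a self-contained sketch with illustrative stand-ins for the `common.IteratePrm`/`common.IterateRes` types:

```go
package main

import (
	"context"
	"fmt"
)

// Illustrative stand-ins for common.IteratePrm and common.IterateRes.
type iteratePrm struct{ handler func(addr string) error }
type iterateRes struct{}

// iterate honours ctx cancellation between objects rather than
// discarding the context parameter.
func iterate(ctx context.Context, prm iteratePrm, addrs []string) (iterateRes, error) {
	for _, addr := range addrs {
		if err := ctx.Err(); err != nil {
			return iterateRes{}, err // stop as soon as ctx is cancelled
		}
		if err := prm.handler(addr); err != nil {
			return iterateRes{}, err
		}
	}
	return iterateRes{}, nil
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	cancel() // already cancelled: iterate should visit nothing
	_, err := iterate(ctx, iteratePrm{handler: func(a string) error {
		fmt.Println("visit", a)
		return nil
	}}, []string{"obj1", "obj2"})
	fmt.Println(err) // context canceled
}
```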
Force-pushed from 2575f6c00d to 41cb98110b

Referenced: Context in Blobovniczas.Iterate() #394

Force-pushed from 41cb98110b to 86a52e4de9

Force-pushed from 86a52e4de9 to 802168c0c6