node: Add ability to evacuate objects from REP 1 only #1350
Reference: TrueCloudLab/frostfs-node#1350
No description provided.
Signed-off-by: Anton Nikiforov <an.nikiforov@yadro.com>
force-pushed from 26a007a017 to d64712f25a
force-pushed from d64712f25a to 99441bf077
force-pushed from 99441bf077 to 0574590325
force-pushed from 0574590325 to 37ceddedc4
force-pushed from 37ceddedc4 to aa6083f90c
force-pushed from aa6083f90c to a6986dfcf3
changed title from "WIP: node: Add ability to evacuate objects from REP 1 only" to "node: Add ability to evacuate objects from REP 1 only"

@@ -654,3 +656,3 @@
}
addr := toEvacuate[i].Address
if prm.RepOneOnly {
Could you clarify, please, what problem this PR solves? Is there a specific case that requires evacuating only REP 1 containers? Could it be that we'll also need to evacuate only REP 2, REP 3, etc.?

Reply: Just to do it quicker. It is not a recommended way, but it may be helpful in an emergency, when a disk is about to die.
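For orientation, here is a minimal sketch of how such a filter could sit in the per-object evacuation loop. It is a fragment, not the PR's actual code: only `addr := toEvacuate[i].Address` and `prm.RepOneOnly` come from the diff context above, and the helper name `isNotRepOne` from a later hunk in this review; the surrounding loop shape and error handling are assumptions.

```go
// Sketch: skip objects of non-"REP 1" containers when the caller asked for
// REP 1-only evacuation. Everything not visible in the diff hunks of this
// review is assumed for illustration.
for i := range toEvacuate {
	addr := toEvacuate[i].Address
	if prm.RepOneOnly {
		notRepOne, err := e.isNotRepOne(addr.Container())
		if err != nil {
			return err
		}
		if notRepOne {
			continue // leave objects of non-REP 1 containers on the shard
		}
	}
	// ... evacuate the object at addr as before
}
```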
@@ -706,0 +720,4 @@
return false, err
}
p := c.Value.PlacementPolicy()
for i := range p.NumberOfReplicas() {
Looks like REP 1 SELECT 1 FROM X1 REP 1 SELECT 1 FROM X2 will be defined as repOne, right? I think that condition should be strict and simple; otherwise we need to check whether the current node should store the REP 1 copy or not. Should we do that check instead of the currently implemented one?

Reply: I don't know. But I think there is no difference between REP 1 SELECT 1 FROM X1 REP 1 SELECT 1 FROM X2 and REP 2 SELECT 2 FROM X policies, as both of them require 2 object instances.

I was wrong: a REP 1 SELECT 1 FROM X1 REP 1 SELECT 1 FROM X2 policy may result in both copies being stored on the same node (without UNIQUE). Updated, please review. Also added a fix for the tests.
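For reference, the loose interpretation discussed here, where any replica vector keeping a single copy makes the container count as repOne, could be sketched as below. The import path and the ReplicaNumberByIndex accessor are assumptions about the SDK, not code from this PR.

```go
package engine

// Assumed SDK import path; the PR works with netmap.PlacementPolicy as seen
// in the hunk above.
import "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/netmap"

// hasSingleCopyVector reports whether any replica vector of the policy keeps
// exactly one object copy. This is the loose "repOne" reading discussed above;
// it also matches REP 1 SELECT 1 FROM X1 REP 1 SELECT 1 FROM X2.
func hasSingleCopyVector(p netmap.PlacementPolicy) bool {
	for i := range p.NumberOfReplicas() {
		if p.ReplicaNumberByIndex(i) == 1 { // accessor name is an assumption
			return true
		}
	}
	return false
}
```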
force-pushed from a6986dfcf3 to e281d31fbc
changed title from "node: Add ability to evacuate objects from REP 1 only" to "WIP: node: Add ability to evacuate objects from REP 1 only"
force-pushed from e281d31fbc to 69c75a660b
force-pushed from 69c75a660b to 94e8629888
force-pushed from 94e8629888 to 763a7cef1c
force-pushed from 763a7cef1c to bcc0fed99c
force-pushed from 2218e42a10 to 417fa6d4cb
force-pushed from 417fa6d4cb to fb3d0e23eb
force-pushed from fb3d0e23eb to c2a5a987ee
force-pushed from c2a5a987ee to b76a7f125a
force-pushed from b76a7f125a to 9fea4f65b2
changed title from "WIP: node: Add ability to evacuate objects from REP 1 only" to "node: Add ability to evacuate objects from REP 1 only"
force-pushed from 9fea4f65b2 to 3ec9900790
@@ -784,0 +804,4 @@
return false, err
}
p := c.Value.PlacementPolicy()
for i := range p.NumberOfReplicas() {
You'd already had this discussion with @dstepanov-yadro. I believe your PR solves the problem of evacuating objects that have a single instance across all nodes. But if the placement policy has several replica vectors (REP 1 ... REP 1), it seems the object does have a replicated copy. Could you explain, then, why you do NOT check that p.NumberOfReplicas() == 1?

Reply: I have only one argument for this: network connectivity between the nodes of the two REP 1 vectors may be bad. I see that it is not a strong argument, and since you don't see any problems, I'll add this check.

@@ -781,6 +798,20 @@ func (e *StorageEngine) evacuateObject(ctx context.Context, shardID string, objI
return nil
}
func (e *StorageEngine) isNotRepOne(cid cid.ID) (bool, error) {
Looks like the case of a deleted container is missing. It could happen that there are objects from an already deleted container.

Also, moving objects to another node should be checked for the case of a deleted container.

Reply: Now all objects are skipped for evacuation when their container has already been removed.
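Putting the two review points together (the strict single-vector condition and the deleted-container handling), the helper could end up shaped roughly as below. This is a hedged sketch, not the merged code: the container source, the not-found matching, the return-value convention, and the accessor names are all assumptions.

```go
// Sketch only. Reports "not REP 1" (i.e. skip during REP 1-only evacuation)
// for containers that either no longer exist or keep more than one copy.
func (e *StorageEngine) isNotRepOne(cnr cid.ID) (bool, error) {
	c, err := e.containerSource.Get(cnr) // assumed lookup by container ID
	if err != nil {
		if isContainerNotFound(err) { // assumed helper matching the SDK "container not found" status
			// The container was already removed: skip its objects.
			return true, nil
		}
		return false, err
	}
	p := c.Value.PlacementPolicy()
	// Strict condition agreed on above: a single replica vector that keeps a
	// single copy.
	if p.NumberOfReplicas() == 1 && p.ReplicaNumberByIndex(0) == 1 {
		return false, nil
	}
	return true, nil
}
```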
force-pushed from 517d9bdbbe to fe2dda1caa
New commits pushed, approval review dismissed automatically according to repository settings
force-pushed from fe2dda1caa to a5b83d3ba8
Added check for container existence.
force-pushed from a5b83d3ba8 to 318ac167ba
force-pushed from 318ac167ba to 0230e37bda
New commits pushed, approval review dismissed automatically according to repository settings
force-pushed from 0230e37bda to d0ed29b3c7