Fix maintenance node processing in policer #1604
There is a minor refactoring (i.e. a simplification) in progress, but I may do it in another PR.

Consider a `REP 1 REP 1` placement (selects/filters are omitted). The placement is `[1, 2], [1, 0]`, and we are the 0-th node. Node 1 is under maintenance, so we do not replicate the object to node 2. In the second replication group node 1 is also under maintenance, but the current caching logic considers it a "replica holder" and removes the local copy. Voilà, we have data loss (DL) if the object is missing from node 1.
TBD: write testing scenario for QA
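A minimal sketch of the failure mode, assuming a simplified stand-in for the policer (the `node` type and `holdersInGroup` function below are hypothetical, not the actual frostfs-node API): counting a maintenance node as a confirmed replica holder makes both replication groups look satisfied, so the local copy on node 0 is dropped.

```go
package main

import "fmt"

// Hypothetical, simplified model of a replication group check; the real
// frostfs-node policer types and control flow differ.
type node struct {
	id          int
	maintenance bool
	hasObject   bool
}

// holdersInGroup shows the buggy counting: a node under maintenance is
// treated as a "replica holder" even though the object may be absent there.
func holdersInGroup(group []node) int {
	holders := 0
	for _, n := range group {
		if n.maintenance || n.hasObject {
			holders++ // bug: maintenance counts as a confirmed copy
		}
	}
	return holders
}

func main() {
	// REP 1 REP 1 placement [1, 2], [1, 0]; we are the 0-th node.
	n0 := node{id: 0, hasObject: true} // our local copy
	n1 := node{id: 1, maintenance: true}
	n2 := node{id: 2}

	const rep = 1
	g1 := holdersInGroup([]node{n1, n2}) // first group [1, 2]
	g2 := holdersInGroup([]node{n1, n0}) // second group [1, 0]

	// First group: node 1 "holds" the object, so we skip replicating to node 2.
	fmt.Printf("group [1 2]: holders=%d, replicate to node 2: %v\n", g1, g1 < rep)
	// Second group: node 1 plus the local copy exceed REP 1, so the policer
	// considers the local copy redundant and removes it. If node 1 does not
	// actually hold the object, it is now lost.
	fmt.Printf("group [1 0]: holders=%d, remove local copy: %v\n", g2, g2 > rep)
}
```

Run as written, this reports that replication to node 2 is skipped and the local copy is removed, which is exactly the data-loss window described above.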
Force-pushed from c47b639fe3 to 0ab7818c81
Force-pushed from 0ab7818c81 to 57efa0bc8e
@@ -127,7 +128,7 @@ func TestProcessObject(t *testing.T) {
nodeCount: 2,
policy: `REP 2 REP 2`,
placement: [][]int{{0, 1}, {0, 1}},
wantReplicateTo: []int{1, 1}, // is this actually good?
@ale64bit's question was finally answered: no, it is not :)