pool: Immediately mark maintenance node as unhealthy #283
Expected Behavior
When the active client receives a maintenance mode error from the object service, the pool should immediately switch to another healthy client for the next request.
Current Behavior
A maintenance mode error is treated like any other network error: it only increments the error threshold counter by one.
Possible Solution
On a maintenance mode error, increment the error counter straight to the threshold limit value, so it immediately overflows the limit and the client is marked unhealthy.
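A minimal sketch of how this could look in the pool's per-client error accounting, assuming a consecutive error counter compared against a threshold. The names `clientStatus`, `errorCount`, `errorThreshold`, and `handleError` are hypothetical and do not mirror the actual pool internals, and the `apistatus.NodeUnderMaintenance` check (including the import path) is an assumption about how the SDK surfaces the MAINTENANCE status:

```go
package pool

import (
	"errors"

	apistatus "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client/status"
)

// clientStatus is a simplified stand-in for the pool's per-client health state.
type clientStatus struct {
	errorCount     uint32 // consecutive error counter
	errorThreshold uint32 // limit after which the client is considered unhealthy
	healthy        bool
}

// handleError is called after a failed request to update the client's health.
func (c *clientStatus) handleError(err error) {
	// Assumption: the object service's maintenance status is surfaced as an
	// apistatus.NodeUnderMaintenance error value.
	var maintenance apistatus.NodeUnderMaintenance
	if errors.As(err, &maintenance) {
		// Proposed behavior: treat maintenance as a definitive signal and
		// saturate the counter so the threshold is exceeded immediately.
		c.errorCount = c.errorThreshold
	} else {
		// Current behavior for ordinary network errors: count them one by one.
		c.errorCount++
	}

	if c.errorCount >= c.errorThreshold {
		// The next request will be rebalanced to another healthy client.
		c.healthy = false
	}
}
```

Saturating the existing counter, rather than adding a separate "unhealthy on maintenance" flag, keeps the threshold and rebalance logic untouched, which is the point of the "overflow the limit" idea above.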
Steps to Reproduce (for bugs)
Context
This is similar to #278.
A maintenance mode error is not an accidental error, so we can rely on this status and switch clients right away.
Otherwise we pay a performance penalty: the SDK client has to decline multiple requests until the threshold value is reached or the rebalance interval has passed.
Regression
No
Your Environment
SDK Pool from frostfs-s3-gw v0.31.0-rc.2 (1b67ab9608)