Possible panics when client has zero free space on disk #125
Reference: TrueCloudLab/frostfs-sdk-go#125
Not sure this is an issue we have to solve, but it would be nice to look into it anyway.

When the client's hard disk is out of space, the S3 gateway may produce such a panic.
@alexvanin Could you elaborate a bit on how free disk space is connected to this panic?
The panic happened when the disk was 100% full; that is the only connection. It may be pure coincidence.
Couldn't reproduce it for a while; closing.
Given that stdlib is well tested, the behavior could be the result of a data race.
`func (p *innerPool) connection() (client, error) {`
Why do we have `RLock` here, not `Lock`?

We don't change the `p.sampler` field (we only read it); that's why we are using `RLock`.