[#114] pool: Support client cut with memory limiter #120
close #114
This PR targets the pool-client-cut branch. After #115, pool-client-cut will be merged to master.
This PR adds the ability to use client cut for Pool.PutObject. To allocate buffers more efficiently we use sync.Pool.
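Roughly, the buffer pool looks like this (a simplified sketch; the method names and implementation details below are illustrative and may differ from the actual code in this PR):

```go
package pool

import "sync"

// PartsBufferPool reuses MaxObjectSize-sized part buffers to cut down on
// allocations while splitting an object on the client side.
type PartsBufferPool struct {
	maxObjectSize uint64
	inner         sync.Pool
}

func NewPartsBufferPool(maxObjectSize uint64) *PartsBufferPool {
	return &PartsBufferPool{
		maxObjectSize: maxObjectSize,
		inner: sync.Pool{
			New: func() any {
				// Each miss costs one MaxObjectSize-sized allocation.
				buff := make([]byte, maxObjectSize)
				// Keep a pointer to avoid an extra allocation on Put (staticcheck SA6002).
				return &buff
			},
		},
	}
}

// GetBuffer returns a MaxObjectSize-sized buffer, reusing a pooled one when possible.
func (p *PartsBufferPool) GetBuffer() *[]byte {
	return p.inner.Get().(*[]byte)
}

// FreeBuffer returns a buffer to the pool so concurrent writers can reuse it.
func (p *PartsBufferPool) FreeBuffer(buff *[]byte) {
	p.inner.Put(buff)
}
```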
The graph below shows memory consumption for the different allocation approaches. From left to right: arena (a go1.20 experimental feature), sync.Pool (the final solution), plain make([]byte, size). In this scenario there were 5 concurrent writers, object size 128 MB, MaxObjectSize 64 MB.
The next graph uses 10 concurrent writers, object size 128 MB, MaxObjectSize 64 MB. From left to right: arena (a go1.20 experimental feature), sync.Pool (the final solution), plain make([]byte, size).
WIP: [#114] pool: Support client cut with memory limiter → [#114] pool: Support client cut with memory limiter

@@ -0,0 +28,4 @@
// We have to use a pointer (even for slices), see https://staticcheck.dev/docs/checks/#SA6002
// That check is based on the interface implementation from 2016, so maybe something has changed since then.
// We could avoid the pointer for multi-kilobyte slices though: https://github.com/golang/go/issues/16323#issuecomment-254401036
buff := make([]byte, maxObjectSize)
Am I right that 64MB is allocated here, even for small objects? It seems like too much.
Yes, but otherwise we cannot use sync.Pool at all if I understand it correctly
Using sync.Pool for byte arrays is tricky. See, for example, https://github.com/golang/go/issues/23199
frostfs-node has similar sync.Pool usage for put objects: https://git.frostfs.info/TrueCloudLab/frostfs-node/src/branch/master/pkg/services/object/put/pool.go
Will this reuse only buffers with size <= 128 KB? Such an approach doesn't seem quite appropriate for streaming big objects with retries.
Probably we could use a more complex condition for dropping big buffers: https://github.com/golang/go/issues/27735#issuecomment-739169121
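For example, a possible shape of such a condition (a sketch only; the thresholds are made up and not taken from this PR, frostfs-node, or the linked issue):

```go
package pool

import "sync"

// Illustrative thresholds for deciding whether a buffer is worth keeping.
const (
	maxPooledCap   = 1 << 20 // drop buffers larger than 1 MiB outright
	minUtilization = 4       // drop buffers that used less than 1/4 of their capacity
)

var bufPool = sync.Pool{
	New: func() any {
		buff := make([]byte, 0, 64*1024)
		return &buff
	},
}

// putBuffer returns a buffer to the pool only when reuse is likely to pay off;
// used is the number of bytes that were actually written into the buffer.
func putBuffer(buff *[]byte, used int) {
	c := cap(*buff)
	if c > maxPooledCap {
		// Too big: let the GC reclaim it instead of pinning memory in the pool.
		return
	}
	if used > 0 && c/used >= minUtilization {
		// Poorly utilized: a small object landed in a big buffer, don't keep it.
		return
	}
	*buff = (*buff)[:0]
	bufPool.Put(buff)
}
```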
@@ -0,0 +59,4 @@
defer p.mu.Unlock()
used := p.limit - p.available
if buff.len > used {
I think it's clearer this way:
buff.len + p.available > p.limit
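For illustration, a limiter written around that form of the check might look like this (field names are chosen for readability and do not match the PR code; the semantics are an assumption):

```go
package pool

import (
	"errors"
	"sync"
)

var errMemoryLimitExceeded = errors.New("client cut memory limit exceeded")

// memoryLimiter tracks how many bytes of part buffers are currently handed out.
type memoryLimiter struct {
	mu    sync.Mutex
	limit uint64 // total budget for part buffers
	inUse uint64 // bytes currently in use
}

func (m *memoryLimiter) acquire(size uint64) error {
	m.mu.Lock()
	defer m.mu.Unlock()
	// Equivalent to size > m.limit-m.inUse, but reads as "would this exceed the limit?".
	if size+m.inUse > m.limit {
		return errMemoryLimitExceeded
	}
	m.inUse += size
	return nil
}

func (m *memoryLimiter) release(size uint64) {
	m.mu.Lock()
	defer m.mu.Unlock()
	if size > m.inUse {
		size = m.inUse // be defensive against a double release
	}
	m.inUse -= size
}
```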
@@ -1603,0 +1717,4 @@
// We cannot initialize partsBufferPool in the NewPool function,
// so we have to save the maxClientCutMemory param for later initialization in Dial.
maxClientCutMemory uint64
partsBufferPool *PartsBufferPool
This forces the user to use the pool even if they don't want to. I suggest storing the pool as an interface; the user will then be able to plug in an implementation that simply allocates memory without locks.
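For example, the suggestion could look roughly like this (all names are illustrative, not from the PR): an allocator interface with a sync.Pool-backed implementation and a trivial one that just allocates.

```go
package pool

import "sync"

// BufferAllocator abstracts how part buffers are obtained, so a caller that
// doesn't want pooling can plug in a trivial allocator instead.
type BufferAllocator interface {
	GetBuffer() *[]byte
	FreeBuffer(*[]byte)
}

// pooledAllocator reuses buffers via sync.Pool.
type pooledAllocator struct {
	inner sync.Pool
}

func newPooledAllocator(size uint64) *pooledAllocator {
	return &pooledAllocator{
		inner: sync.Pool{
			New: func() any {
				buff := make([]byte, size)
				return &buff
			},
		},
	}
}

func (p *pooledAllocator) GetBuffer() *[]byte { return p.inner.Get().(*[]byte) }

func (p *pooledAllocator) FreeBuffer(buff *[]byte) { p.inner.Put(buff) }

// plainAllocator allocates a fresh buffer every time and lets the GC reclaim it,
// so it needs no locking at all.
type plainAllocator struct {
	size uint64
}

func (p plainAllocator) GetBuffer() *[]byte {
	buff := make([]byte, p.size)
	return &buff
}

func (p plainAllocator) FreeBuffer(*[]byte) {}
```

The Pool could then accept a BufferAllocator option and default to the pooled implementation.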
We continue development in the pool-client-cut branch, so some of the comments may be addressed later.