[#114] pool: Support client cut with memory limiter #120

Merged

close #114

This PR targets the [pool-client-cut branch](https://git.frostfs.info/TrueCloudLab/frostfs-sdk-go/src/branch/pool-client-cut). After #115, `pool-client-cut` will be merged into `master`.

This PR adds the ability to use client cut for `Pool.PutObject`.
To allocate buffers more efficiently, we use `sync.Pool`.
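Roughly, the `sync.Pool`-based reuse can be sketched like this (a minimal illustration; the identifiers below are not the PR's actual names, and the fixed buffer size mirrors the `MaxObjectSize` parameter used in the benchmarks):

```go
package main

import "sync"

// Illustrative constant matching the MaxObjectSize parameter below.
const maxObjectSize = 64 << 20 // 64 MB

// partPool hands out fixed-size part buffers. The *[]byte indirection
// avoids the extra allocation on Put flagged by staticcheck SA6002.
var partPool = sync.Pool{
	New: func() any {
		buff := make([]byte, maxObjectSize)
		return &buff
	},
}

// acquireBuffer and releaseBuffer wrap the pool for callers.
func acquireBuffer() *[]byte  { return partPool.Get().(*[]byte) }
func releaseBuffer(b *[]byte) { partPool.Put(b) }
```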

The graph below shows memory consumption for the different allocation approaches.
From left to right: arena (Go 1.20 experimental feature), `sync.Pool` (final solution), plain `make([]byte, size)`.
In this scenario there were 5 concurrent writers, object size: 128 MB, `MaxObjectSize` param: 64 MB.

![image](/attachments/5107175a-7609-47ec-95ff-4a1a720012f4)

The following graph uses 10 concurrent writers, object size: 128 MB, `MaxObjectSize` param: 64 MB.
From left to right: arena (Go 1.20 experimental feature), `sync.Pool` (final solution), plain `make([]byte, size)`.

![image](/attachments/1cb1a3db-495b-483e-be06-c6f75be41457)

dkirillov self-assigned this 2023-07-14 08:12:20 +00:00
dkirillov added 1 commit 2023-07-14 08:12:23 +00:00
[#114] pool: Support client cut with memory limiter
Some checks failed
/ DCO (pull_request) Successful in 1m12s
/ Lint (pull_request) Failing after 2m14s
/ Tests (1.19) (pull_request) Successful in 4m2s
/ Tests (1.20) (pull_request) Successful in 1m50s
9bc2f62d97
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
dkirillov added 1 commit 2023-07-14 09:47:13 +00:00
[#114] pool: Fix linter errors
All checks were successful
/ DCO (pull_request) Successful in 1m19s
/ Lint (pull_request) Successful in 2m6s
/ Tests (1.19) (pull_request) Successful in 5m52s
/ Tests (1.20) (pull_request) Successful in 6m35s
2bdbfdb23e
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
dkirillov changed title from WIP: [#114] pool: Support client cut with memory limiter to [#114] pool: Support client cut with memory limiter 2023-07-14 09:51:29 +00:00
dkirillov requested review from storage-services-developers 2023-07-14 09:51:37 +00:00
dkirillov requested review from storage-services-committers 2023-07-14 09:51:37 +00:00
dkirillov requested review from storage-core-developers 2023-07-14 09:51:37 +00:00
dkirillov requested review from storage-core-committers 2023-07-14 09:51:37 +00:00
dstepanov-yadro reviewed 2023-07-14 11:14:29 +00:00
@ -0,0 +28,4 @@
// We have to use pointer (even for slices), see https://staticcheck.dev/docs/checks/#SA6002
// It's based on interfaces implementation in 2016, so maybe something has changed since then.
// We can use no pointer for multi-kilobyte slices though https://github.com/golang/go/issues/16323#issuecomment-254401036
buff := make([]byte, maxObjectSize)

Am I right that 64MB is allocated here, even for small objects? It seems like too much.

Author
Member

Yes, but otherwise we cannot use `sync.Pool` at all, if I understand it correctly.

Using `sync.Pool` for byte arrays is tricky, see for example https://github.com/golang/go/issues/23199

frostfs-node uses `sync.Pool` for put objects this way: https://git.frostfs.info/TrueCloudLab/frostfs-node/src/branch/master/pkg/services/object/put/pool.go
Author
Member

> Using sync.Pool for byte arrays is tricky. For example https://github.com/golang/go/issues/23199

Will this reuse only buffers with size <= 128 KB? Such an approach doesn't seem quite appropriate for streaming big objects with retrying.
Perhaps we can use a more complex condition for dropping big buffers:
https://github.com/golang/go/issues/27735#issuecomment-739169121
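The linked suggestion could look something like this sketch (the 1/4 utilization threshold and the names are assumptions for illustration, not from the PR):

```go
package main

import "sync"

// shouldKeep reports whether a buffer is worth returning to the pool:
// only when the used portion is a large enough fraction of its capacity,
// so the pool doesn't pin huge buffers that mostly served small writes.
func shouldKeep(capacity, used int) bool {
	const minUtilizationDiv = 4 // keep if used >= cap/4; threshold is illustrative
	return capacity > 0 && used >= capacity/minUtilizationDiv
}

// maybePut drops under-utilized buffers instead of pooling them.
func maybePut(p *sync.Pool, buf []byte, used int) {
	if shouldKeep(cap(buf), used) {
		p.Put(&buf)
	}
	// otherwise let the GC reclaim the buffer
}
```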
@ -0,0 +59,4 @@
defer p.mu.Unlock()
used := p.limit - p.available
if buff.len > used {

I think it's clearer this way: `buff.len + p.available > p.limit`
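For context, a limiter with the suggested form of the check might look like this sketch (field and method names are assumptions, not the PR's code):

```go
package main

import "sync"

// partsLimiter caps the total memory handed out for part buffers.
type partsLimiter struct {
	mu    sync.Mutex
	limit uint64 // total budget
	used  uint64 // bytes currently handed out
}

// acquire reserves n bytes, failing if it would exceed the limit.
func (p *partsLimiter) acquire(n uint64) bool {
	p.mu.Lock()
	defer p.mu.Unlock()
	// equivalent to the suggested `buff.len + p.available > p.limit`
	if n+p.used > p.limit {
		return false
	}
	p.used += n
	return true
}

// release returns n bytes to the budget.
func (p *partsLimiter) release(n uint64) {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.used -= n
}
```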
@ -1603,0 +1717,4 @@
// we cannot initialize partBufferPool in NewPool function,
// so we have to save maxClientCutMemory param for further initialization in Dial.
maxClientCutMemory uint64
partsBufferPool *PartsBufferPool

This forces the user to use the pool even if they don't want to. I suggest storing the pool as an interface. The user will then be able to use an implementation that simply allocates memory without locks.
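The suggestion could be sketched as an interface with a lock-free, allocate-only implementation (all names here are hypothetical):

```go
package main

// bufferPool abstracts how part buffers are obtained, so a user who
// doesn't want pooling can plug in a plain allocator.
type bufferPool interface {
	Get() *[]byte
	Put(*[]byte)
}

// plainAllocator just allocates on Get and does nothing on Put:
// no locks, no reuse, the GC frees the buffers.
type plainAllocator struct{ size int }

func (a plainAllocator) Get() *[]byte {
	b := make([]byte, a.size)
	return &b
}

func (plainAllocator) Put(*[]byte) {}
```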
dkirillov added 1 commit 2023-07-14 12:23:54 +00:00
[#114] pool: Don't use part buffers when client cut is off
All checks were successful
/ DCO (pull_request) Successful in 1m21s
/ Lint (pull_request) Successful in 2m11s
/ Tests (1.19) (pull_request) Successful in 6m5s
/ Tests (1.20) (pull_request) Successful in 6m49s
44ae28d1df
Signed-off-by: Denis Kirillov <d.kirillov@yadro.com>
dkirillov changed target branch from master to pool-client-cut 2023-07-14 12:25:11 +00:00
aarifullin approved these changes 2023-07-20 10:15:25 +00:00
alexvanin approved these changes 2023-07-21 07:39:45 +00:00
alexvanin left a comment
Owner

We continue development in the `pool-client-cut` branch, so some comments may be addressed later.
alexvanin merged commit 44ae28d1df into pool-client-cut 2023-07-21 07:39:57 +00:00
alexvanin deleted branch feature/114-preparion_objects_on_client 2023-07-21 07:39:59 +00:00
Reference: TrueCloudLab/frostfs-sdk-go#120