Investigate performance for bigger gRPC message sizes #77
Reference: TrueCloudLab/frostfs-node#77
By default the gRPC message size limit is 4 MiB. This leads to big objects being split into 4 MiB chunks, so for MaxObjectSize = 64 MiB we send at least 16 messages (not counting signing and verification overhead). For custom deployments where we have full control over the network, we could set this size depending on MaxObjectSize, on both the client and the server.
In this task: set grpc.MaxRecvMsgSize in the node to some high value (70 MiB). In theory this enables future optimizations, such as being able to replicate an object from the blobstor without unmarshaling. (In this case, also check that validation is done when the object is received; we don't want to propagate possibly corrupted data across the cluster, see https://www.usenix.org/system/files/conference/fast17/fast17-ganesan.pdf.)
Somewhat related: https://github.com/TrueCloudLab/frostfs-api/issues/9
What did I do: ran the load with placement policy REP 1 SELECT 1 and the load size from the previous point. It turned out that on my PC the optimal chunk size is 60 MB (+/-); past that the speed no longer increases and even drops a little.
If we compare the download speed of a 60 MB object with a 60 MB chunk and with a 3 MB chunk, we get 36 MB/s and 27 MB/s, respectively.
I think we can do the following:
Added an argument for xk6; testing required.