feature/add_multipart #80
Reference: TrueCloudLab/xk6-frostfs#80
No description provided.
Force-pushed from d598c35eaf to d786edd986
Force-pushed from d786edd986 to 91f8a2e9bd
Changed title from "WIP: feature/add_multipart" to "feature/add_multipart"
Force-pushed from 91f8a2e9bd to ecca1386b4
@@ -0,0 +34,4 @@
var res PreGenerateInfo
bucketMap := make(map[string]struct{})
count, err := o.selector.Count()
As I understand, the registry DB can be quite big (on the order of GiB, even more as JSON). What is the motivation for exporting everything to JSON?
The initial goal was to form a pregen JSON file that includes the objects created during multipart upload.
But after #25 this will probably be unnecessary.
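As an aside on the size concern: streaming objects to the encoder keeps memory bounded regardless of registry size. A minimal sketch, assuming a hypothetical ObjSelector iterator and ObjectInfo type (the real selector API in the PR may differ):

    package export

    import (
        "encoding/json"
        "fmt"
        "io"
    )

    // ObjectInfo and ObjSelector are hypothetical stand-ins for the
    // registry types under review; the real API may differ.
    type ObjectInfo struct {
        Bucket string `json:"bucket"`
        Object string `json:"object"`
    }

    type ObjSelector interface {
        Next() *ObjectInfo // returns nil when the registry is exhausted
    }

    // exportJSON writes objects one per line (JSON Lines), so memory
    // stays bounded even for a multi-GiB registry DB.
    func exportJSON(out io.Writer, sel ObjSelector) error {
        enc := json.NewEncoder(out)
        for obj := sel.Next(); obj != nil; obj = sel.Next() {
            if err := enc.Encode(obj); err != nil {
                return fmt.Errorf("encode object: %w", err)
            }
        }
        return nil
    }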
@@ -0,0 +33,4 @@
const obj_to_exporter = registry.getExporter(obj_to_export_selector);
export function obj_registry_export() {
Why did you decide to write a separate scenario rather than a small command-line tool?
It was done the same way as verification. But actually, there is no significant reason to do it this way.
I'll redo this as a command-line tool then.
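For illustration, the command-line variant could be a small cobra subcommand along these lines (the command and flag names here are made up, not necessarily what landed in the PR):

    package main

    import (
        "fmt"
        "os"

        "github.com/spf13/cobra"
    )

    func main() {
        var out string

        // Hypothetical "export" subcommand; the actual command in the
        // repo may be named and wired differently.
        cmd := &cobra.Command{
            Use:   "export",
            Short: "Export pre-generated object info from the registry DB",
            RunE: func(cmd *cobra.Command, args []string) error {
                fmt.Printf("exporting registry to %s\n", out)
                // ... open the registry DB and stream objects as JSON here ...
                return nil
            },
        }
        cmd.Flags().StringVar(&out, "out", "pregen.json", "output file")

        if err := cmd.Execute(); err != nil {
            os.Exit(1)
        }
    }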
@@ -134,6 +134,8 @@ Options (in addition to the common options):
* `DELETE_AGE` - age of an object in seconds before which it cannot be deleted. This parameter can be used to control how many objects we have in the system under load.
* `SLEEP_DELETE` - time interval (in seconds) between deleting VU iterations.
* `OBJ_NAME` - if specified, this name will be used for all write operations instead of random generation.
* `WRITERS_MULTIPART` - number of VUs performing multipart upload operations.
Is the goal here to have both "simple" and multipart writers in one scenario?
While multipart upload is sequential, it is very similar to a simple put. But I'll separate it, because I want to support parallel multipart upload in this PR.
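For context, these options are passed through k6's -e flag, so a run combining regular and multipart writers might look like this (the scenario path and values are illustrative):

    k6 run -e WRITERS=4 -e WRITERS_MULTIPART=2 -e DURATION=60 scenarios/s3.js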
Changed title from "feature/add_multipart" to "WIP: feature/add_multipart"
Mark as WIP to support parallel multipart
Force-pushed from ecca1386b4 to 2a84d844ac
Changed title from "WIP: feature/add_multipart" to "feature/add_multipart"
@@ -0,0 +59,4 @@
return fmt.Errorf("get '%s' flag: %w", formatFlag, err)
}
if format != jsonFormat {
return fmt.Errorf("unknown format '%s', only '%s' is supported", format, jsonFormat)
If only one format is supported, why have this parameter at all? I think it creates unnecessary complexity.
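If the flag is kept for forward compatibility, one middle ground is defaulting it to the only supported value, so callers never have to pass it. A sketch using the standard flag package for brevity (the PR itself wires the flag differently):

    package main

    import (
        "flag"
        "fmt"
        "os"
    )

    const jsonFormat = "json"

    func main() {
        // Defaulting to the only supported value keeps the flag optional
        // today while leaving room for other formats later.
        format := flag.String("format", jsonFormat, "export format")
        flag.Parse()

        if *format != jsonFormat {
            fmt.Fprintf(os.Stderr, "unknown format '%s', only '%s' is supported\n", *format, jsonFormat)
            os.Exit(1)
        }
        // ... proceed with the JSON export ...
    }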
Force-pushed from 2a84d844ac to 7c27a5056e
@@ -0,0 +1,6 @@
package version
var (
// Version is the RBAC sync module version.
RBAC?
Whoops
@@ -0,0 +38,4 @@
const write_multipart_vu_count = parseInt(__ENV.WRITERS_MULTIPART || '0');
if (write_multipart_vu_count < 1) {
throw 'number of parts (env WRITERS_MULTIPART) to upload in parallel should be greater than 0';
So we explicitly prohibit having "no multipart" scenarios?
Yes, it's a separate scenario
Force-pushed from 7c27a5056e to 50e2f55362