[#466] Implement PATCH for multipart objects
Reference: TrueCloudLab/frostfs-s3-gw#466
No description provided.
Delete branch "mbiryukova/frostfs-s3-gw:feature/patch_multipart"
Force-pushed from 324665e77a to 2d6a8516b6
Changed title from "WIP: [#xxx] Implement PATCH for multipart objects" to "WIP: [#466] Implement PATCH for multipart objects"
Force-pushed from 2d6a8516b6 to c01772149e
Changed title from "WIP: [#466] Implement PATCH for multipart objects" to "[#466] Implement PATCH for multipart objects"
@ -604,0 +624,4 @@
}
allPatchLen := p.Range.End - p.Range.Start + 1
patchReader := newPartialReader(p.NewBytes, 64*1024)
Should we make the buffer size configurable?
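A minimal sketch of making it configurable with a default fallback; the `Config` type and `PatchBufferSize` field are hypothetical, not the actual frostfs-s3-gw settings:

// Hypothetical config knob; 64 KiB stays the default when nothing is set.
const defaultPatchBufferSize = 64 * 1024

type Config struct {
	PatchBufferSize int // e.g. taken from the gateway configuration file
}

func patchBufferSize(cfg *Config) int {
	if cfg != nil && cfg.PatchBufferSize > 0 {
		return cfg.PatchBufferSize
	}
	return defaultPatchBufferSize
}

// illustrative usage:
//   patchReader := newPartialReader(p.NewBytes, patchBufferSize(cfg))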
@ -604,0 +641,4 @@
prmPatch.Object = part.OID
prmPatch.ObjectSize = part.Size
prmPatch.Range = *p.Range
Can we use a brand-new range parameter for each part? It's not easy to catch the idea with `p.Range.Start -= part.Size` and the like.
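A sketch of that idea, not code from this PR: compute each part's sub-range from the original request range so that `p.Range` is never mutated. `RangeParams` with uint64 `Start`/`End` fields, inclusive offsets, and the `partStart` offset of the part within the whole object are assumptions here (min/max are the Go 1.21 builtins).

// partRange translates the requested patch range into part-local coordinates;
// it returns false when the range does not touch this part at all.
func partRange(rng RangeParams, partStart, partSize uint64) (RangeParams, bool) {
	partEnd := partStart + partSize - 1
	if rng.End < partStart || rng.Start > partEnd {
		return RangeParams{}, false
	}
	return RangeParams{
		Start: max(rng.Start, partStart) - partStart,
		End:   min(rng.End, partEnd) - partStart,
	}, true
}

The loop would then assign `prmPatch.Range` from the returned value instead of adjusting `p.Range` in place.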
@ -604,0 +646,4 @@
var patchLen uint64
if i != len(parts)-1 {
if prmPatch.Range.End >= part.Size-1 {
prmPatch.Range.End = part.Size - 1
If we have to patch the whole part, it's better to use a single `put` rather than `patch + head`.
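A small predicate would make that branch explicit; this is a sketch of the suggestion, using the same part-local `RangeParams` assumption as above:

// coversWholePart reports whether the patch range, already translated to
// part-local coordinates, replaces the part entirely. In that case a single
// put of the new bytes can replace the patch + head pair for this part.
func coversWholePart(rng RangeParams, partSize uint64) bool {
	return rng.Start == 0 && rng.End == partSize-1
}

The caller branches on it: put the new payload as the whole part when it returns true, and fall back to `PatchObject` otherwise.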
@ -604,0 +652,4 @@
} else {
patchLen = allPatchLen - patchedLen
}
prmPatch.Payload = patchReader.partReader(patchLen)
Can we use `io.LimitReader(p.NewBytes, patchLen)`?
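For reference, the standard-library form the reviewer means; it needs the `io` import and an int64 conversion, since `patchLen` is a uint64:

// io.LimitReader stops after patchLen bytes, which is what the custom
// partReader provides here.
prmPatch.Payload = io.LimitReader(p.NewBytes, int64(patchLen))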
@ -604,0 +655,4 @@
prmPatch.Payload = patchReader.partReader(patchLen)
patchedLen += patchLen
objID, err := n.frostFS.PatchObject(ctx, prmPatch)
Can we move patching a single part into a separate method?
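A sketch of such a helper; the receiver and parameter types (`Layer`, `Part`, `RangeParams`, `PrmObjectPatch`) are illustrative, while the field names and the `PatchObject` call mirror the quoted diff:

// patchPart applies a part-local range and payload to a single part.
func (n *Layer) patchPart(ctx context.Context, prmPatch PrmObjectPatch, part Part, rng RangeParams, payload io.Reader, length uint64) (oid.ID, error) {
	prmPatch.Object = part.OID
	prmPatch.ObjectSize = part.Size
	prmPatch.Range = rng
	prmPatch.Payload = io.LimitReader(payload, int64(length))
	return n.frostFS.PatchObject(ctx, prmPatch)
}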
@ -604,0 +712,4 @@
a.SetValue(headerParts.String())
prmPatch.Attributes = append(prmPatch.Attributes, a)
objID, err := n.frostFS.PatchObject(ctx, prmPatch)
Why do we use patch to update the "combined" object? It seems we could use a regular put.
@ -604,0 +733,4 @@
Size: obj.PayloadSize(),
Created: &p.Object.Created,
Owner: &n.gateOwner,
// TODO: Add creation epoch
Probably we should rebase `feature/patch` before merging this PR?
@ -604,0 +632,4 @@
continue
}
if p.Range.Start > part.Size || (p.Range.Start == part.Size && i != len(parts)-1) {
I don't understand why we need `|| (p.Range.Start == part.Size && i != len(parts)-1)`. If we really need it, let's add a test for this condition. Currently the tests pass without it.
If the range starts after the last part, we have to patch the last part; otherwise we move on to the next part. I will add a test for this condition.
I still can simply delete this condition.
It seems it doesn't affect the result, but it does reduce the number of patch requests. If that's ok, I can count them in the test and compare against the expected number.
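One way to count them in a test, as a sketch: wrap the in-memory FrostFS mock used by the tests and count `PatchObject` calls. The wrapper and the embedded `TestFrostFS` name are illustrative, not existing helpers.

// countingFrostFS counts PatchObject calls while delegating to the mock.
type countingFrostFS struct {
	*TestFrostFS // in-memory test mock, assumed to implement PatchObject
	patchCalls   int
}

func (c *countingFrostFS) PatchObject(ctx context.Context, prm PrmObjectPatch) (oid.ID, error) {
	c.patchCalls++
	return c.TestFrostFS.PatchObject(ctx, prm)
}

// in the test, after patching the multipart object:
//   require.Equal(t, expectedPatchRequests, frostFS.patchCalls)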
Force-pushed from c01772149e to dde5e04461
Force-pushed from dde5e04461 to 48ca6cb146
Force-pushed from 48ca6cb146 to 3c504e4661
Force-pushed from 3c504e4661 to eb80d14923
Force-pushed from eb80d14923 to 9f9ef38fdc
Force-pushed from 9f9ef38fdc to d1a5403b1f
@ -110,0 +115,4 @@
bktName, objName, partSize := "bucket-for-multipart-patch", "object-for-multipart-patch", 5*1024*1024
createTestBucket(tc, bktName)
// patch beginning of the first part
Maybe it's better to use `t.Run` here?
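A sketch of that layout; the test name and the `prepareHandlerContext` helper are assumptions, while `createTestBucket` matches the quoted test code:

func TestPatchMultipartObject(t *testing.T) {
	tc := prepareHandlerContext(t) // assumed existing handler-test helper
	bktName := "bucket-for-multipart-patch"
	createTestBucket(tc, bktName)

	t.Run("patch beginning of the first part", func(t *testing.T) {
		// upload the parts and patch the first bytes, as in the current flat test
	})
	t.Run("patch range spanning several parts", func(t *testing.T) {
		// remaining scenarios, one subtest per case
	})
}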
@ -579,0 +641,4 @@
allPatchLen := p.Range.End - p.Range.Start + 1
var multipartObjectSize, patchedLen uint64
for i, part := range parts {
It's probably a matter of taste, but what do you think about writing it something like this?
Looks good, but in some cases we would make unnecessary requests, which are avoided by the condition discussed in the comment above.
I've updated the diff in the previous comment.
UPD: updated again.
Force-pushed from d1a5403b1f to e8f22ed77e
LGTM