archiver: reduce memory usage for large files
FutureBlob now uses a Take() method as a more memory-efficient way to retrieve the future's result. In addition, futures are now collected while saving the file. As only a limited number of blobs can be queued for uploading, for a large file nearly all FutureBlobs already have their result ready, so each resolved FutureBlob object just consumes memory.
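The idea behind Take() can be sketched as follows. This is a minimal illustration of the pattern described in the commit message, not restic's actual implementation; the type and field names (futureBlob, saveBlobResponse, known) are placeholders chosen to mirror the test diff below. The result is handed over by value from a single-use channel, so a resolved future no longer keeps its payload alive once the caller has taken it.

```go
package main

import (
	"context"
	"fmt"
)

// saveBlobResponse is an illustrative stand-in for the result of saving a blob.
type saveBlobResponse struct {
	known bool
}

// futureBlob delivers exactly one result over a buffered channel.
type futureBlob struct {
	ch chan saveBlobResponse
}

// Take blocks until the result is available (or the context is cancelled)
// and returns it by value. Afterwards the future holds no reference to the
// result, so only the caller's copy remains in memory.
func (f *futureBlob) Take(ctx context.Context) saveBlobResponse {
	select {
	case res := <-f.ch:
		return res
	case <-ctx.Done():
		return saveBlobResponse{}
	}
}

func main() {
	ctx := context.Background()

	// The producer (e.g. an upload worker) sends the result once.
	f := &futureBlob{ch: make(chan saveBlobResponse, 1)}
	f.ch <- saveBlobResponse{known: true}

	// The consumer takes the result when it needs it, instead of keeping a
	// fully resolved future object around while the rest of the file is saved.
	sbr := f.Take(ctx)
	fmt.Println("known:", sbr.known)
}
```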
parent b817681a11
commit 4a10ebed15
5 changed files with 64 additions and 61 deletions
@@ -54,8 +54,8 @@ func TestBlobSaver(t *testing.T) {
 	}

 	for i, blob := range results {
-		blob.Wait(ctx)
-		if blob.Known() {
+		sbr := blob.Take(ctx)
+		if sbr.known {
 			t.Errorf("blob %v is known, that should not be the case", i)
 		}
 	}