Error context cancellation when docker pull image #13
While the docker pull command is running, an error appears in the distribution logs:
At the same time, the request itself completes successfully and there is no negative effect on the result of the pull:
The error occurs because the context is canceled from outside the frostfs driver code.
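A minimal sketch of what this looks like from the driver's side, assuming only that the operation honors the caller's context (doDriverCall is a hypothetical stand-in, not a function from the frostfs driver):

```go
package main

import (
	"context"
	"fmt"
)

// doDriverCall is a hypothetical stand-in for a FrostFS driver operation
// that honors the caller's context: once the context is canceled it fails.
func doDriverCall(ctx context.Context) error {
	if err := ctx.Err(); err != nil {
		return err // "context canceled"
	}
	// ... real work would happen here ...
	return nil
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	// The HTTP server cancels the request context once the client has
	// received the response and closed the connection.
	cancel()
	fmt.Println(doDriverCall(ctx)) // prints "context canceled"
}
```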
Steps to Reproduce (for bugs)
1. Start frostfs-aio.
2. Create a container.
3. Add its ID and the path to the wallet to the distribution config file: ./cmd/registry/config-dev-frostfs.yml
4. Build distribution from the root of the project and launch it.
Your Environment
Distribution commit version: 8ceca80274
Additional details:
This error is reproduced both on release v3.0.0-alpha.1 with FrostFS support and on release v3.0.0-beta.1 with FrostFS support.
The error was reproduced in the following environments:
An important detail: the error is reproduced only if caching is disabled. Caching is disabled by default; details here.
It seems this is valid behavior (I mean the current distribution code leads to this error). We try to do additional things using the request context, but all the data has already been sent to the client, which considers the connection done and closes it.
See the example server below. If we run a client against it (it can be a Go HTTP client or curl; we will use curl), then on the server we will see the context cancellation error.
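A minimal sketch of such a server, assuming a plain net/http handler (the route, port, payload, and sleep duration are illustrative):

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"log"
	"net/http"
	"time"
)

func handler(w http.ResponseWriter, r *http.Request) {
	ctx := r.Context()

	body := []byte("all blob data\n")
	// Declare the full length so the client knows the response is complete
	// as soon as the body arrives.
	w.Header().Set("Content-Length", fmt.Sprint(len(body)))
	w.Write(body)
	if f, ok := w.(http.Flusher); ok {
		f.Flush()
	}

	// "Additional things" done with the request context after all data has
	// already been sent: by now curl has read the response and closed the
	// connection, so the server cancels the request context.
	time.Sleep(500 * time.Millisecond)
	if err := ctx.Err(); errors.Is(err, context.Canceled) {
		log.Println("post-response work:", err) // logs "context canceled"
	}
}

func main() {
	log.Fatal(http.ListenAndServe(":8080", http.HandlerFunc(handler)))
}
```

Running `curl http://localhost:8080/` against it, curl prints the body and exits, and the server log typically shows the context canceled error shortly afterwards.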
It seems like this is indeed the expected behavior. We can close this bug.
Probably we can create an issue in the upstream repository.