Incorrect cache behavior when reloading an image after deletion #7

Closed
opened 2024-02-14 12:53:16 +00:00 by r.loginov · 6 comments
Member

When pushing an image that has previously been deleted, the cache used with the [frostfs driver](https://git.frostfs.info/TrueCloudLab/distribution/src/branch/tcl/master/registry/storage/driver/frostfs) considers the layers of this image to already be in the storage and does not upload them. Exactly the same behavior is observed with the driver for the [local file system](https://git.frostfs.info/TrueCloudLab/distribution/src/branch/tcl/master/registry/storage/driver/filesystem). With the [s3 driver](https://git.frostfs.info/TrueCloudLab/distribution/src/branch/tcl/master/registry/storage/driver/s3-aws), re-pushing the image works; however, pulling the image still fails.

## Expected Behavior

It is expected that the following sequence of operations will work without errors:

  1. push image
  2. delete image
  3. push image
  4. delete image

## Current Behavior

At the moment, when trying the second push (step 3 in the list above), the cache thinks that the image layers already exist and does not upload them:

```
$ docker push localhost:5000/alpine:latest
The push refers to repository [localhost:5000/alpine]
d4fc045c9e3a: Layer already exists
latest: digest: sha256:6457d53fb065d6f250e1504b9bc42d5b6c65941d57532c072d929dd0628977d0 size: 528
```

As a result, we later get an error when trying to pull the image:

```
$ docker pull localhost:5000/alpine:latest
Error response from daemon: manifest for localhost:5000/alpine:latest not found: manifest unknown: manifest unknown
```
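The failure mode can be modeled with a minimal sketch (the types below are illustrative, not the actual registry code): a blob descriptor cache that nothing invalidates on an out-of-band delete keeps answering that the blob exists, so the client skips the upload.

```go
package main

import (
	"errors"
	"fmt"
)

// descriptor is a stand-in for the registry's blob descriptor
// (digest + size); the real type lives in the distribution codebase.
type descriptor struct {
	digest string
	size   int64
}

var errBlobUnknown = errors.New("blob unknown to registry")

// staleCache models an in-memory descriptor cache that is never
// invalidated when blobs are deleted out of band (e.g. by the GC).
type staleCache map[string]descriptor

func (c staleCache) stat(dgst string) (descriptor, error) {
	if d, ok := c[dgst]; ok {
		return d, nil // cache hit: storage is never consulted
	}
	return descriptor{}, errBlobUnknown
}

func main() {
	cache := staleCache{}
	storage := map[string][]byte{}
	const dgst = "sha256:d4fc..." // truncated for the sketch

	// 1. push: the blob lands in storage, its descriptor in the cache.
	storage[dgst] = []byte("layer data")
	cache[dgst] = descriptor{digest: dgst, size: 10}

	// 2. delete + GC: the blob is removed from storage, cache untouched.
	delete(storage, dgst)

	// 3. re-push: the HEAD check hits the stale cache entry, so the
	// client prints "Layer already exists" and skips the upload.
	if _, err := cache.stat(dgst); err == nil {
		fmt.Println("d4fc045c9e3a: Layer already exists (stale cache hit)")
	}
	_, inStorage := storage[dgst]
	fmt.Println("blob actually in storage:", inStorage) // false
}
```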

## Steps to Reproduce (for bugs)

1. Tag the image we want to work with. (We use the `localhost:5000` prefix because the registry is deployed on port 5000 by default.)
```
$ docker tag alpine:latest localhost:5000/alpine:latest
```
2. Push the image
```
$ docker push localhost:5000/alpine:latest
The push refers to repository [localhost:5000/alpine]
d4fc045c9e3a: Pushed
latest: digest: sha256:6457d53fb065d6f250e1504b9bc42d5b6c65941d57532c072d929dd0628977d0 size: 528
```
3. Send an HTTP API request to delete the image (a Go equivalent of this request is sketched after this list)
```
curl -i -X DELETE http://localhost:5000/v2/alpine/manifests/sha256:6457d53fb065d6f250e1504b9bc42d5b6c65941d57532c072d929dd0628977d0
```
4. Run the garbage collector
```
$ ./registry garbage-collect ../cmd/registry/config-dev-frostfs.yml

0 blobs marked, 3 blobs and 0 manifests eligible for deletion
blob eligible for deletion: sha256:6457d53fb065d6f250e1504b9bc42d5b6c65941d57532c072d929dd0628977d0
INFO[0000] Deleting blob: /docker/registry/v2/blobs/sha256/64/6457d53fb065d6f250e1504b9bc42d5b6c65941d57532c072d929dd0628977d0  environment=development go.version=go1.21.4 instance.id=23d8c328-31f9-4f4d-93aa-bdbde422c8ae service=registry
delete | path: /docker/registry/v2/blobs/sha256/64/6457d53fb065d6f250e1504b9bc42d5b6c65941d57532c072d929dd0628977d0
blob eligible for deletion: sha256:05455a08881ea9cf0e752bc48e61bbd71a34c029bb13df01e40e3e70e0d007bd
INFO[0000] Deleting blob: /docker/registry/v2/blobs/sha256/05/05455a08881ea9cf0e752bc48e61bbd71a34c029bb13df01e40e3e70e0d007bd  environment=development go.version=go1.21.4 instance.id=23d8c328-31f9-4f4d-93aa-bdbde422c8ae service=registry
delete | path: /docker/registry/v2/blobs/sha256/05/05455a08881ea9cf0e752bc48e61bbd71a34c029bb13df01e40e3e70e0d007bd
blob eligible for deletion: sha256:4abcf20661432fb2d719aaf90656f55c287f8ca915dc1c92ec14ff61e67fbaf8
INFO[0000] Deleting blob: /docker/registry/v2/blobs/sha256/4a/4abcf20661432fb2d719aaf90656f55c287f8ca915dc1c92ec14ff61e67fbaf8  environment=development go.version=go1.21.4 instance.id=23d8c328-31f9-4f4d-93aa-bdbde422c8ae service=registry
delete | path: /docker/registry/v2/blobs/sha256/4a/4abcf20661432fb2d719aaf90656f55c287f8ca915dc1c92ec14ff61e67fbaf8
```
5. Re-push the same image (the output shows that the layer already exists)
```
$ docker push localhost:5000/alpine:latest
The push refers to repository [localhost:5000/alpine]
d4fc045c9e3a: Layer already exists
latest: digest: sha256:6457d53fb065d6f250e1504b9bc42d5b6c65941d57532c072d929dd0628977d0 size: 528
```
6. Pull the image:
```
$ docker pull localhost:5000/alpine:latest
Error response from daemon: manifest for localhost:5000/alpine:latest not found: manifest unknown: manifest unknown
```
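For step 3, a minimal Go equivalent of the curl call, as a sketch using only the standard library (the address and digest are the ones from this reproduction):

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Manifest deletion by digest, as in step 3 of the reproduction.
	const url = "http://localhost:5000/v2/alpine/manifests/" +
		"sha256:6457d53fb065d6f250e1504b9bc42d5b6c65941d57532c072d929dd0628977d0"

	req, err := http.NewRequest(http.MethodDelete, url, nil)
	if err != nil {
		panic(err)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// The registry answers 202 Accepted on a successful manifest delete.
	fmt.Println("status:", resp.Status)
}
```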
r.loginov added the
bug
label 2024-02-14 12:53:16 +00:00
r.loginov self-assigned this 2024-02-14 12:53:16 +00:00
Author
Member

Configuration and startup instructions

The driver configuration block for running distribution with frostfs (frostfs-dev-env):

```
frostfs:
  wallet:
    path: <wallet_path>
    password: ""
  peers:
    0:
      address: s01.frostfs.devenv:8080
      weight: 1
      priority: 1
    1:
      address: s02.frostfs.devenv:8080
      weight: 1
      priority: 1
    2:
      address: s03.frostfs.devenv:8080
      weight: 1
      priority: 1
    3:
      address: s04.frostfs.devenv:8080
      weight: 1
      priority: 1
  # container can be nicename (rpc_endpoint is required)
  container: <container_id>
  # the following params are optional
  session_expiration_duration: 1000 # in blocks
  connection_timeout: 5s
  request_timeout: 5s
  rebalance_interval: 30s
  rpc_endpoint: http://morph-chain.frostfs.devenv:30333
```

The driver configuration block for running distribution with s3 (frostfs-aio):

```
s3:
  accesskey: <accesskey>
  secretkey: <secretkey>
  region: us-east-1
  regionendpoint: http://localhost:8084
  bucket: <bucket_name>
  encrypt: false
```

The driver configuration block for running distribution with local file system:

```
filesystem:
  rootdirectory: /var/lib/registry
```

How to start the registry:

```
$ make
$ ./bin/registry serve cmd/registry/config-dev-frostfs.yml
```
Author
Member

### Observations that have been identified.

**1. Errors in the logs.**

During the first push of the image, two similar errors always appear in the logs, both when using the frostfs driver and when using the s3 driver. The error looks like this:

```
ERRO[0006] response completed with error                 environment=development err.code="blob unknown" err.detail="sha256:661ff4d9561e3fd050929ee5097067c34bafc523ee60f5294a37fd08056a73ca" err.message="blob unknown to registry" go.version=go1.21.4 http.request.host="localhost:5000" http.request.id=24ebdac3-1e01-46eb-84e0-aa0fb2b83590 http.request.method=HEAD http.request.remoteaddr="127.0.0.1:53436" http.request.uri="/v2/alpine/blobs/sha256:661ff4d9561e3fd050929ee5097067c34bafc523ee60f5294a37fd08056a73ca" http.request.useragent="docker/24.0.5 go/go1.20.6 git-commit/a61e2b4 kernel/6.5.0-15-generic os/linux arch/amd64 UpstreamClient(Docker-Client/24.0.5 \\(linux\\))" http.response.contenttype=application/json http.response.duration=12.440753ms http.response.status=404 http.response.written=157 instance.id=a829bb80-3436-4c4f-924a-f738f311bf5c service=registry vars.digest="sha256:661ff4d9561e3fd050929ee5097067c34bafc523ee60f5294a37fd08056a73ca" vars.name=alpine version=v3.0.0+unknown
127.0.0.1 - - [29/Jan/2024:16:38:42 +0300] "HEAD /v2/alpine/blobs/sha256:661ff4d9561e3fd050929ee5097067c34bafc523ee60f5294a37fd08056a73ca HTTP/1.1" 404 157 "" "docker/24.0.5 go/go1.20.6 git-commit/a61e2b4 kernel/6.5.0-15-generic os/linux arch/amd64 UpstreamClient(Docker-Client/24.0.5 \\(linux\\))"
```

This error occurs because the registry at some point requests a blob that does not exist; the first such request happens at the very beginning of the push operation. Why it does this is unclear. It does not seem to affect further operation.

**2. The state of the objects.**

Each docker image in the distribution repository is a set of files (objects, in frostfs terms). Brief information on the types of these files:

- `.../blobs/sha256/4a/4abcf.../data` - the data for a specific image layer
- `.../_layers/sha256/4abcf.../link` - a link that ties a layer to a specific image or tag in the repository, allowing the relationship between layers and images to be tracked
- `.../_manifests/revisions/sha256/6457d.../link` - a link to a specific revision (version) of the image in the repository
- `.../_manifests/tags/latest/index/sha256/6457d.../link` and `.../_manifests/tags/latest/current/link` - links that point to the current image associated with the `latest` tag

Consider how the state of the objects stored in frostfs changes while pushing and deleting the alpine image:

1. After the image is pushed:
```
.../_layers/sha256/<sha256-hash>/link
.../_layers/sha256/<sha256-hash>/link
.../blobs/sha256/<two-char-hash>/<sha256-hash>/data
.../blobs/sha256/<two-char-hash>/<sha256-hash>/data
.../blobs/sha256/<two-char-hash>/<sha256-hash>/data
.../_manifests/tags/latest/current/link
.../_manifests/revisions/sha256/<sha256-hash>/link
.../_manifests/tags/latest/index/sha256/<sha256-hash>/link
```
2. After sending an image deletion request via the HTTP API:
```
.../_layers/sha256/<sha256-hash>/link
.../_layers/sha256/<sha256-hash>/link
.../blobs/sha256/<two-char-hash>/<sha256-hash>/data
.../blobs/sha256/<two-char-hash>/<sha256-hash>/data
.../blobs/sha256/<two-char-hash>/<sha256-hash>/data
```
3. After running the garbage collector:
```
.../_layers/sha256/<sha256-hash>/link
.../_layers/sha256/<sha256-hash>/link
```
4. After re-pushing the image:
```
.../_layers/sha256/<sha256-hash>/link
.../_layers/sha256/<sha256-hash>/link
.../_manifests/tags/latest/current/link
.../_manifests/revisions/sha256/<sha256-hash>/link
.../_manifests/tags/latest/index/sha256/<sha256-hash>/link
```

Interesting observations and conclusions:

- After deleting the image (step 3), objects (links to layers) still remain in the storage; for some reason the garbage collector does not delete them. The same is true with the s3 driver.
- Confirmation that the problem is in the cache: step 4 shows that when the image is pushed again, the blobs (layer objects) are not uploaded, because the cache believes that they already exist.
- With the s3 driver, during the re-push (step 4) the cache does not claim that the layers already exist and uploads them (but not all!). As a result, there are 7 objects in the storage instead of 5 (two more blobs are added). But this is also invalid, since there should be 8 of them, as in step 1. This suggests that there may also be a bug in the s3 driver (or in the cache implementation).

**3. Why is the problem related to the cache?**

If you disable the cache in the config:

```
storage:
  cache:
    blobdescriptor: null
```

then everything works correctly with both the frostfs driver and the s3 driver.

Note that there is no obvious way to clear the cache other than restarting the registry (since the cache is in-memory).

**4. Assumptions**

During operation, the registry calls the methods of the storage driver interface. One assumption is that the cache misbehaves because one of these methods works incorrectly. The prime suspects are the following driver methods (a small consistency-check sketch follows the list):

- `GetContent`
- `Stat`
- `List`
- `Walk`

A lot depends on the results of these methods, including the operation of the cache. However, no errors were found in them during verification.
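To make this verification repeatable, one can run a small consistency check around a delete. A minimal sketch, assuming a driver-like interface modeled on the methods above (`driver` and `memDriver` here are local, illustrative types, not the actual distribution storagedriver package):

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// driver mirrors the subset of storage-driver methods under suspicion;
// the real interface lives in registry/storage/driver.
type driver interface {
	GetContent(path string) ([]byte, error)
	Stat(path string) (int64, error) // size of the file at path
	List(prefix string) ([]string, error)
	Delete(path string) error
}

var errPathNotFound = errors.New("path not found")

// memDriver is a trivial reference implementation: if a real driver
// disagrees with it after the same sequence of operations, the driver
// (not the cache) is returning stale results.
type memDriver map[string][]byte

func (d memDriver) GetContent(p string) ([]byte, error) {
	b, ok := d[p]
	if !ok {
		return nil, errPathNotFound
	}
	return b, nil
}

func (d memDriver) Stat(p string) (int64, error) {
	b, ok := d[p]
	if !ok {
		return 0, errPathNotFound
	}
	return int64(len(b)), nil
}

func (d memDriver) List(prefix string) ([]string, error) {
	var out []string
	for p := range d {
		if strings.HasPrefix(p, prefix) {
			out = append(out, p)
		}
	}
	return out, nil
}

func (d memDriver) Delete(p string) error {
	if _, ok := d[p]; !ok {
		return errPathNotFound
	}
	delete(d, p)
	return nil
}

// checkDeleteConsistency verifies that Stat and GetContent report the
// path as gone immediately after Delete.
func checkDeleteConsistency(d driver, path string) error {
	if err := d.Delete(path); err != nil {
		return err
	}
	if _, err := d.Stat(path); err == nil {
		return fmt.Errorf("Stat still sees %s after Delete", path)
	}
	if _, err := d.GetContent(path); err == nil {
		return fmt.Errorf("GetContent still sees %s after Delete", path)
	}
	return nil
}

func main() {
	const blob = "/docker/registry/v2/blobs/sha256/64/6457d.../data"
	d := memDriver{blob: []byte("layer data")}
	fmt.Println(checkDeleteConsistency(d, blob)) // <nil> for a correct driver
}
```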
Member
The same issue exists in upstream: https://github.com/distribution/distribution/issues/4269
Member

@r.loginov Could you try instead of

```
curl -i -X DELETE http://localhost:5000/v2/alpine/manifests/sha256:6457d53fb065d6f250e1504b9bc42d5b6c65941d57532c072d929dd0628977d0
```

this

```
curl -i -X DELETE http://localhost:5000/v2/alpine/manifests/latest
```

I get different results when using the tag instead of the digest. Or do we have to use the digest?

Author
Member

Yes, I tried deleting by tag. The result when using the tag is indeed different, but it is also wrong.

Based on the [documentation](https://github.com/distribution/distribution/blob/main/docs/content/spec/api.md#deleting-an-image), deletion by digest should be used to delete the image.

Owner

Summary.

  1. A `registry garbage-collect` call cannot update the in-memory cache of the running registry application, therefore the described scenario **should not** work with any supported driver.

  2. It **does** work for most drivers, except the `inmemory` and `filesystem` drivers. The reason is that these drivers do not support direct HTTP access to blobs ([one](https://github.com/distribution/distribution/blob/0d1792f55f3c5bd0380d6cac781aba75dd5f87c0/registry/storage/driver/filesystem/driver.go#L290), [two](https://github.com/distribution/distribution/blob/0d1792f55f3c5bd0380d6cac781aba75dd5f87c0/registry/storage/driver/inmemory/driver.go#L243)). HTTP access is used during the blob [HEAD request](https://git.frostfs.info/TrueCloudLab/distribution/src/commit/b8de0a6cafdd9b176f477eeeb830f1caf47b1952/registry/storage/blobserver.go#L44); it fails and triggers a blob re-upload (this difference is sketched after the summary).

If the admin disables redirects, all drivers fail the described scenario:

```yaml
storage:
  redirect:
    disable: true
```
  3. In the case of the redis cache, [this fix](https://github.com/distribution/distribution/pull/3323) tries to solve the issue. However, it simply does not work yet.

The only convenient thing to do is to explicitly disable cache usage for the described scenario.
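The second point can be illustrated with a minimal sketch (all types here are local and illustrative, not the actual distribution code): a redirect-capable driver lets the storage backend answer the blob HEAD, so a deleted blob yields 404 and the client re-uploads it, while a driver without redirects leaves the registry answering from its stale cached view.

```go
package main

import (
	"errors"
	"fmt"
)

// ErrUnsupportedMethod stands in for the error a driver returns when it
// cannot hand out a direct-access URL for a blob; the filesystem and
// inmemory drivers behave this way.
var ErrUnsupportedMethod = errors.New("unsupported method")

type blobServer struct {
	// redirectURL models the driver hook consulted before redirecting a
	// blob HEAD/GET to the backend (s3 hands out a presigned URL).
	redirectURL func(path string) (string, error)
	// cachedStat models the descriptor-cache answer, which stays stale
	// after an out-of-band GC has deleted the blob.
	cachedStat func(path string) bool
	// backendHas models what the storage backend really contains.
	backendHas func(path string) bool
}

func (s blobServer) headBlob(path string) string {
	if url, err := s.redirectURL(path); err == nil {
		// Redirect-capable driver: the backend is the authority, so the
		// deleted blob yields 404 and the client re-uploads the layer.
		if !s.backendHas(path) {
			return "redirect to " + url + " -> 404 (client re-uploads)"
		}
		return "redirect to " + url + " -> 200"
	}
	// No redirect: the registry answers from its own cached view and
	// reports the deleted blob as still present.
	if s.cachedStat(path) {
		return "200 (stale: Layer already exists)"
	}
	return "404"
}

func main() {
	gone := func(string) bool { return false } // blob was deleted by GC
	stale := func(string) bool { return true } // cache never invalidated

	fsLike := blobServer{
		redirectURL: func(string) (string, error) { return "", ErrUnsupportedMethod },
		cachedStat:  stale,
		backendHas:  gone,
	}
	s3Like := blobServer{
		redirectURL: func(p string) (string, error) { return "https://s3.example" + p, nil },
		cachedStat:  stale,
		backendHas:  gone,
	}
	fmt.Println("filesystem-like:", fsLike.headBlob("/blobs/sha256/64/..."))
	fmt.Println("s3-like:", s3Like.headBlob("/blobs/sha256/64/..."))
}
```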

Summary. 1. `registry garbage-collect` call cannot update in-memory cache of the registry application, therefore described scenario **should not** work with any supported driver. 2. It **does** work for most drivers, except `inmemory` and `filesystem` drivers. The reason is that these drivers does not support direct http access to blobs ([one](https://github.com/distribution/distribution/blob/0d1792f55f3c5bd0380d6cac781aba75dd5f87c0/registry/storage/driver/filesystem/driver.go#L290), [two](https://github.com/distribution/distribution/blob/0d1792f55f3c5bd0380d6cac781aba75dd5f87c0/registry/storage/driver/inmemory/driver.go#L243)). HTTP access is used during blob [HEAD request](https://git.frostfs.info/TrueCloudLab/distribution/src/commit/b8de0a6cafdd9b176f477eeeb830f1caf47b1952/registry/storage/blobserver.go#L44), it fails and triggers blob re-upload. If admin disables redirects, all drivers going to fail described scenario. ```yaml storage: redirect: disable: true ``` 3. In case of redis cache, [this fix](https://github.com/distribution/distribution/pull/3323) tries to solve the issue. However it simply does not work yet. The only convinient thing to do is to explicitly disable cache usage for described scenario.