xK6 delete S3 job removes all objects in bolt database but one. #153

Open
opened 2024-07-10 12:42:43 +00:00 by a-baranov · 4 comments

**Expected behaviour:**
All objects in the bolt database are deleted.

**Existing behaviour:**
One object in the bolt database is not deleted.

Steps to reproduce:

  • Create Buckets

```
scenarios/preset/preset_s3.py --size 128 --buckets 4 --out /home/service/opt/k6/s3_128kb.json --endpoint https://10.78.131.223 --preload_obj 0 --location default --workers 2 --no-verify-ssl
```

  • Write Objects

```
./k6 run -e DURATION=100 -e STREAM_TIMEOUT=60 -e SLEEP_READ=0.2 -e SLEEP_WRITE=0.2 -e WRITE_OBJ_SIZE=64 -e READERS=0 -e WRITERS=4 -e S3_ENDPOINTS=https://10.78.131.223 -e REGISTRY_FILE=/home/service/opt/k6/s3_128k.bolt -e PREGEN_JSON=/home/service/opt/k6/s3_128kb.json -e NO_VERIFY_SSL=true ./scenarios/s3.js
```

  • Delete Objects

```
./k6 run -e DURATION=240 -e STREAM_TIMEOUT=60 -e SLEEP_READ=0.2 -e SLEEP_WRITE=0.2 -e WRITE_OBJ_SIZE=64 -e READERS=0 -e WRITERS=0 -e DELETERS=2 -e DELETE_AGE=1 -e S3_ENDPOINTS=https://10.78.131.223 -e REGISTRY_FILE=/home/service/opt/k6/s3_128k.bolt -e PREGEN_JSON=/home/service/opt/k6/s3_128kb.json -e NO_VERIFY_SSL=true ./scenarios/s3.js
```

After the delete run, `/home/service/opt/k6/s3_128k.bolt` still contains one object, and the same object is still stored in the bucket:
(screenshots of the boltDB browser and the S3 browser showing the object are attached)

The latest k6 version was used:

```
./k6 version
k6 v0.45.1 ((devel), go1.22.1, linux/amd64)
Extensions:
  git.frostfs.info/TrueCloudLab/xk6-frostfs (devel), k6/x/frostfs [js]
  git.frostfs.info/TrueCloudLab/xk6-frostfs (devel), k6/x/frostfs/datagen [js]
  git.frostfs.info/TrueCloudLab/xk6-frostfs (devel), k6/x/frostfs/env [js]
  git.frostfs.info/TrueCloudLab/xk6-frostfs (devel), k6/x/frostfs/local [js]
  git.frostfs.info/TrueCloudLab/xk6-frostfs (devel), k6/x/frostfs/logging [js]
  git.frostfs.info/TrueCloudLab/xk6-frostfs (devel), k6/x/frostfs/native [js]
  git.frostfs.info/TrueCloudLab/xk6-frostfs (devel), k6/x/frostfs/registry [js]
  git.frostfs.info/TrueCloudLab/xk6-frostfs (devel), k6/x/frostfs/s3 [js]
  git.frostfs.info/TrueCloudLab/xk6-frostfs (devel), k6/x/frostfs/s3local [js]
  git.frostfs.info/TrueCloudLab/xk6-frostfs (devel), k6/x/frostfs/stats [js]
  git.frostfs.info/TrueCloudLab/xk6-frostfs (devel), profile [output]
```
achuprov was assigned by fyrchik 2024-07-10 13:01:59 +00:00
acid-ant was assigned by fyrchik 2024-07-10 14:44:15 +00:00
achuprov was unassigned by fyrchik 2024-07-10 14:44:24 +00:00
Member

How to reproduce the issue:

  • Start up the dev-env
  • Preset k6 loader:

```
echo "Getting S3 Gate public key..."
s3gate_pk=$(neo-go wallet dump-keys -w /home/annikifa/workspace/frostfs-dev-env/services/s3_gate/wallet.json | sed -n 2p)
echo "Got S3 Gate public key: $s3gate_pk"

/home/annikifa/workspace/frostfs-node/bin/frostfs-adm morph ape add-rule-chain \
  -c /home/annikifa/workspace/frostfs-dev-env/frostfs-adm.yml --target-type namespace --target-name="" \
  --rule "allow Container.* *" --rule "allow Object.* *" --chain-id 1112

echo "Issuing S3 Gate secrets..."
echo "Press ENTER for wallet password request->"
/home/annikifa/workspace/frostfs-s3-gw/bin/frostfs-s3-authmate issue-secret --wallet ./scenarios/files/wallet.json \
--peer s01.frostfs.devenv:8080 \
--container-placement-policy "REP 1 IN X CBF 1 SELECT 1 FROM * AS X" \
--container-policy ./scenarios/files/policy.json \
--gate-public-key $s3gate_pk | grep access | awk '{ print $2 }' | tr -d '",' > secrets.txt
echo "S3 Gate secrets were written to secrets.txt"

access=$(cat secrets.txt | sed -n 2p)
secret=$(cat secrets.txt | sed -n 3p)
echo "access key: $access"
echo "secret key: $secret"

echo "Configuring AWS tool..."
aws configure set aws_access_key_id $access \
&& aws configure set aws_secret_access_key $secret \
&& aws configure set region load-1-1
echo "Configuring AWS tool completed"

echo "Generating S3 preset..."
time /home/annikifa/workspace/xk6-frostfs/scenarios/preset/preset_s3.py --size 128 \
--buckets 4 --out s3_preset.json \
--endpoint https://s3.frostfs.devenv:8080/ --preload_obj 0 --workers 2 --location load-1-1 \
--no-verify-ssl
echo "S3 preset generated"

./scenarios/preset/resolve_containers_in_preset.py --endpoint s3.frostfs.devenv:8080 \
--preset_file /home/annikifa/workspace/xk6-frostfs/s3_preset.json
```
  • Generate pressure

```
echo "Starting S3 write test..."
./k6 run -e DURATION=10 -e WRITE_OBJ_SIZE=64 -e READERS=0 -e WRITERS=4 \
-e S3_ENDPOINTS='https://s3.frostfs.devenv:8080/' \
-e REGISTRY_FILE='/home/annikifa/workspace/xk6-frostfs/s3_registry.db' \
-e PREGEN_JSON=/home/annikifa/workspace/xk6-frostfs/s3_preset.json \
/home/annikifa/workspace/xk6-frostfs/scenarios/s3.js
echo "S3 write test completed"
```
  • Remove objects via k6

```
echo "Starting S3 delete test..."
./k6 run -e DURATION=10 -e WRITE_OBJ_SIZE=64 -e READERS=0 -e WRITERS=0 -e DELETERS=2 -e DELETE_AGE=1 \
-e S3_ENDPOINTS='https://s3.frostfs.devenv:8080/' \
-e REGISTRY_FILE='/home/annikifa/workspace/xk6-frostfs/s3_registry.db' \
-e PREGEN_JSON=/home/annikifa/workspace/xk6-frostfs/s3_preset.json \
/home/annikifa/workspace/xk6-frostfs/scenarios/s3.js
echo "S3 delete test completed"
```
  • Check the database file `s3_registry.db` for remaining entries (a Go sketch for doing this programmatically follows).
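
To inspect the registry without a GUI browser, something like the following Go sketch can be used. It assumes only that the registry file is a plain bbolt database (which the `REGISTRY_FILE` is); no bucket layout is assumed, it simply walks every bucket and counts keys:

```go
package main

import (
	"fmt"
	"log"

	"go.etcd.io/bbolt"
)

func main() {
	// Open the k6 registry read-only so a concurrent run is not disturbed.
	db, err := bbolt.Open("s3_registry.db", 0600, &bbolt.Options{ReadOnly: true})
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Walk every bucket and count the keys left after the delete run;
	// a clean run should leave no object entries behind.
	err = db.View(func(tx *bbolt.Tx) error {
		return tx.ForEach(func(name []byte, b *bbolt.Bucket) error {
			n := 0
			if err := b.ForEach(func(k, v []byte) error { n++; return nil }); err != nil {
				return err
			}
			fmt.Printf("bucket %q: %d entries\n", name, n)
			return nil
		})
	})
	if err != nil {
		log.Fatal(err)
	}
}
```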

Looks like we are interested in [OneShortSelector](https://git.frostfs.info/TrueCloudLab/xk6-frostfs/src/commit/9b9db46a07c44739926dc195c2c9df90ed76665f/scenarios/s3.js#L79) creation.
[Here](https://git.frostfs.info/TrueCloudLab/xk6-frostfs/src/commit/9b9db46a07c44739926dc195c2c9df90ed76665f/internal/registry/obj_selector.go#L108) is where we iterate over the registry.
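
For context, here is an illustrative sketch of the one-shot pattern in question (not the actual `obj_selector.go` code): a single goroutine walks the bolt bucket once and then closes a channel, after which every worker asking for the next object receives nil:

```go
package registry

import "go.etcd.io/bbolt"

// ObjSelector is a simplified stand-in for the selector created in s3.js.
type ObjSelector struct {
	ch chan []byte
}

// NewOneshotSelector walks the bucket exactly once; when the walk finishes,
// the channel is closed and the selector is permanently exhausted.
func NewOneshotSelector(db *bbolt.DB, bucket string) *ObjSelector {
	s := &ObjSelector{ch: make(chan []byte)}
	go func() {
		defer close(s.ch) // exhausted: all further NextObject calls return nil
		_ = db.View(func(tx *bbolt.Tx) error {
			b := tx.Bucket([]byte(bucket))
			if b == nil {
				return nil
			}
			// Simplified: blocking channel sends inside a read transaction
			// would stall writers in real code.
			return b.ForEach(func(k, v []byte) error {
				s.ch <- append([]byte(nil), v...) // copy: bbolt values are only valid inside the tx
				return nil
			})
		})
	}()
	return s
}

// NextObject returns the next registry entry, or nil once the single pass is done.
func (s *ObjSelector) NextObject() []byte {
	return <-s.ch
}
```

The property that matters for this issue is the nil return on exhaustion: whichever worker sees it first concludes the test is done.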

acid-ant removed their assignment 2024-07-12 13:16:51 +00:00
achuprov was assigned by acid-ant 2024-07-12 13:16:51 +00:00
Member

~~The [selectLoop](https://git.frostfs.info/TrueCloudLab/xk6-frostfs/src/commit/9b9db46a07c44739926dc195c2c9df90ed76665f/internal/registry/obj_selector.go#L108) seems to be working correctly. I suspect the issue is with k6. After updating to version 0.52.0, all objects were successfully deleted.~~

Update:
Other changes I made along with the k6 update resolved the issue.

Member

The problem occurs when using multiple deleters: the deleter that draws the `last object+1` completes and terminates the test prematurely, while another deleter may still be mid-delete (a minimal sketch of this race follows the list below).
https://github.com/grafana/k6/issues/2804
k6 does not support runner synchronization.
Potential solutions:

  • Implement synchronization using an external source.
  • Add a delay before [executing](https://git.frostfs.info/TrueCloudLab/xk6-frostfs/src/commit/9b9db46a07c44739926dc195c2c9df90ed76665f/scenarios/s3.js#L207) `test.Abort`. This approach mimics the [loop](https://git.frostfs.info/TrueCloudLab/xk6-frostfs/src/commit/9b9db46a07c44739926dc195c2c9df90ed76665f/internal/registry/registry.go#L97) mode.
  • Restrict the use of more than one runner in `Oneshot` mode.
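
A minimal, self-contained Go sketch of that race (hypothetical names; k6 runs VUs rather than goroutines, but the interleaving is the same): the deleter that draws past the last object aborts everything while the other deleter may still sit between the S3 delete and the registry update, which is exactly how one entry survives in the bolt file:

```go
package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

func main() {
	// abort stands in for test.Abort: it kills every worker immediately.
	ctx, abort := context.WithCancel(context.Background())

	objects := make(chan int, 3)
	for i := 1; i <= 3; i++ {
		objects <- i
	}
	close(objects) // one-shot selector: drains once, then yields "no object"

	var wg sync.WaitGroup
	for w := 1; w <= 2; w++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			for {
				obj, ok := <-objects
				if !ok {
					abort() // this deleter saw "last object+1" and ends the test
					return
				}
				fmt.Printf("deleter %d: S3 DeleteObject(%d)\n", id, obj)
				time.Sleep(10 * time.Millisecond) // the abort can land in this window
				select {
				case <-ctx.Done():
					fmt.Printf("deleter %d: aborted before registry update of %d\n", id, obj)
					return
				default:
					fmt.Printf("deleter %d: registry entry %d removed\n", id, obj)
				}
			}
		}(w)
	}
	wg.Wait()
}
```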
Owner

If we know how many workers are using the selector, we can set a shared counter and abort the test once all workers have received an empty pointer from the selector.
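
A sketch of that counter (the names `exhausted`/`done` are hypothetical, not an existing xk6-frostfs API): every worker increments an atomic counter when the selector hands it an empty pointer, and only the worker whose increment reaches the worker total calls `test.Abort`, so no in-flight delete is cut off:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// exhausted returns true only for the worker whose increment makes the
// count reach the number of workers; that worker alone may abort the test.
func exhausted(done *int64, workers int64) bool {
	return atomic.AddInt64(done, 1) == workers
}

func main() {
	const workers = 2
	var done int64
	var wg sync.WaitGroup
	for w := 1; w <= workers; w++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			// ...drain the selector until it returns an empty pointer...
			if exhausted(&done, workers) {
				fmt.Printf("worker %d is last: safe to call test.Abort\n", id)
			}
		}(w)
	}
	wg.Wait()
}
```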
