One node stuck in old epoch after shutdown/start all nodes #448

Closed
opened 2023-06-15 09:21:42 +00:00 by anikeev-yadro (Member) · 1 comment

Autotest testsuites.failover.test_failover_full_shutdown.TestFailoverFullShutdown#test_full_storage_shutdown

Expected Behavior

After a full shutdown and restart, all nodes should be in the same epoch.

Current Behavior

One node is stuck in an old epoch after all nodes are shut down and started again.

Steps to Reproduce (for bugs)

1. Shut down all nodes.
2. Start all nodes.
3. Wait until every node reports the following status:

 Network status: ONLINE
 Health status: READY
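
One way to poll for this status is the storage node control service; a minimal sketch with frostfs-cli, where the control endpoint port and wallet path are assumptions (they are not given in this report):

```
# Hypothetical control endpoints and wallet path; adjust to the test stand.
for node in 10.78.70.135 10.78.70.136 10.78.70.137 10.78.70.138; do
  frostfs-cli control healthcheck \
    --endpoint "${node}:8081" \
    --wallet /path/to/node-wallet.json
done
# Each node is expected to print:
#   Network status: ONLINE
#   Health status: READY
```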

4. Create S3 credentials:

 frostfs-s3-authmate  issue-secret --wallet '/tmp/077de44f-2e0c-45a5-9b99-29bf3d296e99.json' --peer '10.78.70.135:8080' --bearer-rules '/etc/k6/scenarios/files/rules.json' --gate-public-key '03bfd22301f1db77664fd274c6d3429f8a2e0eed020fe09d9914fb06c93f5c48c6' --gate-public-key '02ee932dd39ea747e7a2ff7af1f965573dcdf3f8b7d41b95fa38cf8277ac35f4eb' --gate-public-key '02ef099a17110bbde91872765ea7e32a69c2ce1bf3b0989b9b938c0398b2379d9c' --gate-public-key '0319f871c28e5ff6d6fc87848df7c1c80ed0304395fbb3c959d92932b9cdd6d44e' --container-placement-policy 'REP 2 IN X CBF 2 SELECT 2 FROM * AS X' --container-policy '/etc/k6/scenarios/files/policy.json'

 {
   "access_key_id": "5GtTfkTsEw84DRZVc8RXXfqrLyzwyWvgcwQvD8JY3fFd02x7M3g7VKihB79FXy9qQCaoZk7KnAgPNkrYsDdo1RhYc",
   "secret_access_key": "42571d6ce569b71bc8f748244a633f37052ede787c8736d3a10b33a893ebb73f",
   "owner_private_key": "6e99710ebd940ddc0c921c13ff42184d3f4ce6b197eec4d809e75d4e2e887eff",
   "wallet_public_key": "02520315be9604421cb6ba6628dcb0901d6e4e6d9e86ed667592965f085986d595",
   "container_id": "5GtTfkTsEw84DRZVc8RXXfqrLyzwyWvgcwQvD8JY3fFd"

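The issued credentials can be checked against any of the S3 gateway endpoints with a standard S3 client; a small sketch with the AWS CLI, where the URL scheme and endpoint choice are assumptions and the keys are the ones issued above:

```
export AWS_ACCESS_KEY_ID='5GtTfkTsEw84DRZVc8RXXfqrLyzwyWvgcwQvD8JY3fFd02x7M3g7VKihB79FXy9qQCaoZk7KnAgPNkrYsDdo1RhYc'
export AWS_SECRET_ACCESS_KEY='42571d6ce569b71bc8f748244a633f37052ede787c8736d3a10b33a893ebb73f'
# List buckets through one of the frostfs-s3-gw endpoints used in step 6.
aws --endpoint-url http://10.78.71.136:8084 s3 ls
```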
5. Make a preset in k6:

HOST: 10.78.69.139
COMMAND:
 /etc/k6/scenarios/preset/preset_s3.py --endpoint 10.78.71.136:8084 --size '1024' --preload_obj '10' --out '/var/log/autotests/s3_d523b9a7-8e37-4209-94be-aa997447a328_2_0_prepare.json' --workers '50' --buckets '40' --location 'node-off'

  > Total Buckets has been created: 40.
  > Total Objects has been created: 400.

6. Start k6 with the following environment:

DURATION: 1200
WRITE_OBJ_SIZE: 1024
REGISTRY_FILE: /var/log/autotests/s3_d523b9a7-8e37-4209-94be-aa997447a328_2_0_registry.bolt
K6_SETUP_TIMEOUT: 5s
WRITERS: 50
READERS: 50
DELETERS: 0
PREGEN_JSON: /var/log/autotests/s3_d523b9a7-8e37-4209-94be-aa997447a328_2_0_prepare.json
S3_ENDPOINTS: 10.78.71.136:8084,10.78.71.137:8084,10.78.71.135:8084,10.78.70.136:8084,10.78.71.138:8084,10.78.70.137:8084,10.78.70.138:8084,10.78.70.135:8084
SUMMARY_JSON: /var/log/autotests/s3_d523b9a7-8e37-4209-94be-aa997447a328_2_0_s3_summary.json
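
In xk6-frostfs these values are normally handed to the scenario as k6 environment variables; a sketch of the equivalent invocation, assuming the S3 scenario script lives at /etc/k6/scenarios/s3.js (the script path is not shown in this report):

```
./k6 run \
  -e DURATION=1200 \
  -e WRITE_OBJ_SIZE=1024 \
  -e REGISTRY_FILE=/var/log/autotests/s3_d523b9a7-8e37-4209-94be-aa997447a328_2_0_registry.bolt \
  -e K6_SETUP_TIMEOUT=5s \
  -e WRITERS=50 -e READERS=50 -e DELETERS=0 \
  -e PREGEN_JSON=/var/log/autotests/s3_d523b9a7-8e37-4209-94be-aa997447a328_2_0_prepare.json \
  -e S3_ENDPOINTS=10.78.71.136:8084,10.78.71.137:8084,10.78.71.135:8084,10.78.70.136:8084,10.78.71.138:8084,10.78.70.137:8084,10.78.70.138:8084,10.78.70.135:8084 \
  -e SUMMARY_JSON=/var/log/autotests/s3_d523b9a7-8e37-4209-94be-aa997447a328_2_0_s3_summary.json \
  /etc/k6/scenarios/s3.js
```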

7. k6 finished with the following failed read/write operations:

     aws_obj_get_duration...: avg=164.73ms min=11.87ms  med=28.76ms  max=4.63s p(90)=186.37ms p(95)=301.25ms
     aws_obj_get_fails......: 433542 3326.92318/s
     aws_obj_get_total......: 25890  198.675194/s
     aws_obj_put_duration...: avg=334.39ms min=76.65ms  med=216.95ms max=5.72s p(90)=402.95ms p(95)=485.45ms
     aws_obj_put_fails......: 86429  663.240571/s
     aws_obj_put_total......: 14794  113.52649/s
     data_received..........: 27 GB  209 MB/s
     data_sent..............: 16 GB  119 MB/s
     iteration_duration.....: avg=23.09ms  min=136.31µs med=4.7ms    max=5.73s p(90)=20.76ms  p(95)=30.93ms 
     iterations.............: 560655 4302.365436/s
     vus....................: 100    min=100       max=100
     vus_max................: 100    min=100       max=100

8. All errors from node 10.78.70.138:

ERRO[12:34:32] operation error S3: PutObject, https response error StatusCode: 403, RequestID: 0d680c6a-0595-480e-911b-2fcdc9d66222, HostID: 06bfda75-c8be-4273-8dbf-80443276b175, api error AccessDenied: Access Denied.  bucket=2366bf68-af21-4800-ab31-f3abb790e3b7 endpoint="10.78.70.138:8084" key=6c5308e5-0c1e-4878-b005-f57f5cea25b7

9. In the node log:

service@glagoli:~$ journalctl -S '12:34'|grep 0d680c6a-0595-480e-911b-2fcdc9d66222
Journal file /var/log/journal/acafbee98f204074973ed4c92d4ec3df/system@0005fe01f0a699bd-6a55639aa1eee5fe.journal~ is truncated, ignoring file.
Jun 14 12:34:32 glagoli frostfs-s3-gw[6496]: 2023-06-14T12:34:32.801Z        info        layer/layer.go:703        resolve bucket        {"reqId": "0d680c6a-0595-480e-911b-2fcdc9d66222", "bucket": "2366bf68-af21-4800-ab31-f3abb790e3b7", "cid": "6UcudzzcZ9WptSpJaLBPYp8TycpYooYUGYA4ADf4QFLZ"}
Jun 14 12:34:32 glagoli frostfs-s3-gw[6496]: 2023-06-14T12:34:32.811Z        error        handler/util.go:29        call method        {"status": 403, "request_id": "0d680c6a-0595-480e-911b-2fcdc9d66222", "method": "PutObject", "bucket": "2366bf68-af21-4800-ab31-f3abb790e3b7", "object": "6c5308e5-0c1e-4878-b005-f57f5cea25b7", "description": "could not upload object", "error": "access denied: access to operation OBJECT_PUT is denied by extended ACL check: bearer token has expired", "body close errors": []}

This error indicates that node 138 is in an old epoch and that, from its point of view, the bearer token has expired.
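
One way to confirm which epoch each node is on is to query the current epoch over the storage node RPC endpoints and compare the values; a minimal sketch with frostfs-cli, where the endpoints and wallet path are placeholders for the cluster under test:

```
# Compare the current epoch reported by each storage node; the stuck
# node should return a smaller value than the others.
for node in 10.78.70.135 10.78.70.136 10.78.70.137 10.78.70.138; do
  echo -n "${node}: epoch "
  frostfs-cli netmap epoch \
    --rpc-endpoint "${node}:8080" \
    --wallet /path/to/wallet.json
done
```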

10. The node log also has the following lines about epochs:

service@glagoli:~$ journalctl --utc -S  "2023-06-14 12:20:00" --no-pager|grep -i 'new epoch'
Journal file /var/log/journal/acafbee98f204074973ed4c92d4ec3df/system@0005fe01f0a699bd-6a55639aa1eee5fe.journal~ is truncated, ignoring file.
Jun 14 12:29:16 glagoli neo-go[5681]: 2023-06-14T12:29:16.160Z        INFO        runtime log        {"tx": "6b64c9b8df7c4a4f35ddbafb8edfe51cddd525381c124dc12706f941c1e43028", "script": "97f9d078e65e0c3a68f3f87cf91a10891c799feb", "msg": "process new epoch"}
Jun 14 12:29:32 glagoli neo-go[5681]: 2023-06-14T12:29:32.188Z        INFO        runtime log        {"tx": "70e77692a0cf1703e500572ef8cae0201b014e52c0f05d90a5f687363907fe36", "script": "97f9d078e65e0c3a68f3f87cf91a10891c799feb", "msg": "process new epoch"}
Jun 14 12:29:32 glagoli frostfs-ir[6270]: 2023-06-14T12:29:32.585Z        info        netmap/handlers.go:29        notification        {"type": "new epoch", "value": 906}
Jun 14 12:37:10 glagoli neo-go[5681]: 2023-06-14T12:37:10.033Z        INFO        runtime log        {"tx": "e49feb757e7b737b413a25f877ade9388d7fab2f4c69dec6c94696ccd626f372", "script": "97f9d078e65e0c3a68f3f87cf91a10891c799feb", "msg": "process new epoch"}
Jun 14 12:37:10 glagoli frostfs-node[5687]: 2023-06-14T12:37:10.146Z        info        frostfs-node/morph.go:218        new epoch event from sidechain        {"number": 907}
Jun 14 12:37:10 glagoli frostfs-ir[6270]: 2023-06-14T12:37:10.146Z        info        netmap/handlers.go:29        notification        {"type": "new epoch", "value": 907}
Jun 14 12:37:10 glagoli frostfs-node[5687]: 2023-06-14T12:37:10.174Z        debug        frostfs-node/reputation.go:108        start reporting reputation on new epoch event

Jun 14 12:35:02 glagoli frostfs-node[5687]: 2023-06-14T12:35:02.815Z        debug        frostfs-node/object.go:508        writing local reputation values        {"epoch": 905, "satisfactory": true}
Jun 14 12:35:02 glagoli frostfs-node[5687]: 2023-06-14T12:35:02.815Z        debug        frostfs-node/object.go:508        writing local reputation values        {"epoch": 905, "satisfactory": true}
Jun 14 12:35:02 glagoli frostfs-node[5687]: 2023-06-14T12:35:02.817Z        debug        frostfs-node/object.go:508        writing local reputation values        {"epoch": 905, "satisfactory": true}
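
If the run needs to continue after such a hang, a possible workaround on the test stand (not a fix for this issue) is to force an extra epoch tick so lagging nodes receive a fresh NewEpoch notification; a sketch, assuming frostfs-adm and its config are available on an inner ring host:

```
# Force the netmap contract to switch to a new epoch; hypothetical config path.
frostfs-adm morph force-new-epoch -c /path/to/frostfs-adm.yml
```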

Context

Test objective: the load should be run before the shutdown and again after the nodes are started.

Regression

yes

Version

FrostFS Storage node
Version: v0.0.1-432-g405e17b2
GoVersion: go1.19.7

Your Environment

HW
4 nodes
Datacenter config: 1+1+1+1
Network profile: bad
Bandwidth: 1Gb
Packet loss: 5%, with 50% dependency on the previous loss (see the netem sketch below)
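
The packet-loss part of this profile can be reproduced with tc/netem; a minimal sketch, assuming the node's interface is eth0 (the interface name and the exact shaping tool used on the stand are not stated in this report):

```
# 5% packet loss, with each loss decision 50% correlated with the previous one.
tc qdisc add dev eth0 root netem loss 5% 50%
```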

anikeev-yadro added the bug, triage labels 2023-06-15 09:21:42 +00:00
fyrchik added this to the v0.38.0 milestone 2023-06-15 11:59:33 +00:00
fyrchik added the frostfs-ir, frostfs-node labels 2023-06-15 11:59:40 +00:00
aarifullin was assigned by fyrchik 2023-08-22 11:18:40 +00:00
anikeev-yadro (Author, Member) commented:
Not reproduced on v0.37.0-rc.1-6-g5acc13fa. Suggest closing.

Reference: TrueCloudLab/frostfs-node#448