Compare commits

...

145 commits

Author SHA1 Message Date
61da7dca24 [#835] node: Fix appCfg concurrent access
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-12-14 16:38:59 +03:00
f4877e7b42 [#835] grpc: Try to reconnect if endpoint listen failed
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-12-14 16:38:59 +03:00
bdd43f6211 [#869] object: Pass just CID to chain router
* Do not convert CID from request to native-schema resource
  format - this step is unnecessary for APE.

Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2023-12-14 11:01:20 +00:00
4a64b07703 [#869] cli: Pass only CID in requests for control API
* Fix add-rule, list-rules, remove-rule, get-rule commands:
  do not convert container ID to native-schema resource format
  and pass it to control API.

Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2023-12-14 11:01:20 +00:00
2d4c0a0f4a [#552] Add systemd notifications to ir service
Signed-off-by: Ekaterina Lebedeva <ekaterina.lebedeva@yadro.com>
2023-12-13 17:51:41 +03:00
ef07c1a3c9 [#552] Add systemd notifications to storage service
Signed-off-by: Ekaterina Lebedeva <ekaterina.lebedeva@yadro.com>
2023-12-13 17:51:41 +03:00
eca7ac9f0d [#552] Add sdnotify package
To avoid using third-party dependencies.

Signed-off-by: Ekaterina Lebedeva <ekaterina.lebedeva@yadro.com>
2023-12-13 17:49:26 +03:00
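
The notification mechanism behind these commits is the plain systemd sd_notify protocol: a short datagram such as READY=1 or STATUS=... written to the unix socket named by NOTIFY_SOCKET. Below is a minimal sketch of that idea in Go; the package, function name, and error handling are illustrative, not the actual frostfs-node sdnotify API.

```
// Minimal sketch of the sd_notify protocol; names are illustrative,
// not the actual frostfs-node sdnotify package API.
package sdnotify

import (
	"errors"
	"net"
	"os"
)

// Send writes a single notification, e.g. "READY=1" or "STATUS=initializing",
// to the datagram socket systemd announces via the NOTIFY_SOCKET variable.
func Send(state string) error {
	socket := os.Getenv("NOTIFY_SOCKET")
	if socket == "" {
		// Not started by systemd with Type=notify; nothing to do.
		return errors.New("NOTIFY_SOCKET is not set")
	}
	conn, err := net.DialUnix("unixgram", nil, &net.UnixAddr{Name: socket, Net: "unixgram"})
	if err != nil {
		return err
	}
	defer conn.Close()
	_, err = conn.Write([]byte(state))
	return err
}
```

Under a Type=notify unit, calling Send("READY=1") once initialization finishes lets systemd mark the service as started.
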
9b2dce5763 [#552] Add status notification to systemd
Signed-off-by: Ekaterina Lebedeva <ekaterina.lebedeva@yadro.com>
2023-12-13 15:02:39 +03:00
05f8f49289 [#552] gofumpt changes
Signed-off-by: Ekaterina Lebedeva <ekaterina.lebedeva@yadro.com>
2023-12-13 15:02:25 +03:00
7eb46404a1 [#863] blobovnicza: Fix counters
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-12-13 13:34:29 +03:00
11add38e87 [#857] golangci: Add protogetter linter
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-12-12 16:27:02 +03:00
94ffe8bb45 [#857] golangci: Add testifylint linter
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-12-12 16:27:02 +03:00
5d7833c89b [#857] golangci: Add perfsprint linter
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-12-12 16:27:02 +03:00
d2746a7d67 [#857] Makefile: Update linter version
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-12-12 16:27:02 +03:00
3b7c0362a8 [#861] shard: Fix Delete object
It is possible that the object doesn't exist in the metabase.

Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-12-12 14:25:40 +03:00
681b2c5fd4 [#825] policer: Do not drop required linking objects
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-12-12 11:04:03 +00:00
a3ef7b58b4 [#755] innerring: Check container owner namespace
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-12-12 12:36:34 +03:00
1cd2bfe51a [#755] morph: Drop FrostFSID contract usage
Unused.

Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-12-12 12:36:34 +03:00
d5c9dd3c83 [#852] ape: Use first match for eACL->APE converter
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-12-11 16:55:32 +03:00
46532fb9ce [#841] doc: Describe epoch
Signed-off-by: Anton Nikiforov <an.nikiforov@yadro.com>
2023-12-11 13:14:41 +00:00
70e0c1e082 [#841] ir: Execute netmap.addPeerIR only for state online
Signed-off-by: Anton Nikiforov <an.nikiforov@yadro.com>
2023-12-11 13:14:41 +00:00
0f45e3d344 [#804] ape: Implement boltdb storage for local overrides
Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2023-12-07 19:08:41 +03:00
e361e017f3 [#842] control: Pass target instead of resource name
* Update policy-engine package version in go.mod, go.sum.
* Refactor CheckIfRequestPermitted: pass container target
  instead of container ID.

Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2023-12-07 14:21:55 +00:00
39060382a1 [#842] control: Receive target in gRPC methods for APE managing
* Introduce Target type and pass it to all gRPC methods
  for APE chain managing instead of CID.

Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2023-12-07 14:21:55 +00:00
db49ad16cc [#826] blobovniczatree: Do not create DBs on init
Blobovniczas will be created on write requests.

Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-12-07 15:37:33 +03:00
ad0697adc4 [#661] blobovnicza: Compute size with record size
To get a more accurate size of blobovnicza, use the record
size (length of key + length of data).

Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-12-07 15:37:33 +03:00
e54dc3dc7c [#698] blobovnicza: Store counter values
Blobovnicza initialization takes a long time because of the bucket
Stat() call, so blobovnicza now stores counters in the META bucket.

Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-12-07 15:37:33 +03:00
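
The two blobovnicza commits above share one idea: account for each stored record as len(key) + len(data) and keep the running total in a metadata bucket, so opening the database does not require a full Stat() scan. A rough bbolt sketch of that bookkeeping follows; the bucket and key names are hypothetical, not the actual blobovnicza schema.

```
// Illustrative bookkeeping only: bucket and key names are hypothetical,
// not the real blobovnicza layout.
package blobovnicza

import (
	"encoding/binary"

	"go.etcd.io/bbolt"
)

var (
	metaBucket = []byte("META")
	sizeKey    = []byte("size")
)

// accountPut adds the full record size (length of key + length of data)
// to a persistent counter, so the total can be read back in O(1) on init
// instead of walking every bucket with Stat().
func accountPut(tx *bbolt.Tx, key, data []byte) error {
	b, err := tx.CreateBucketIfNotExists(metaBucket)
	if err != nil {
		return err
	}
	var size uint64
	if raw := b.Get(sizeKey); len(raw) == 8 {
		size = binary.LittleEndian.Uint64(raw)
	}
	size += uint64(len(key) + len(data))

	buf := make([]byte, 8)
	binary.LittleEndian.PutUint64(buf, size)
	return b.Put(sizeKey, buf)
}
```
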
5e8c08da3e [#661] blobstore: Add address to error logs
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-12-07 15:37:33 +03:00
8911656b1a [#661] metrics: Add rebuild percent metric
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-12-07 15:37:33 +03:00
8bbfb2df43 [#661] blobovniczatree: Pass object size limit from config
If the actual small object size value is lower than the default
object size limit, then unnecessary buckets are created.
If the actual small object size value is greater than the default
object size limit, then an error happens.

Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-12-07 15:37:33 +03:00
2407e5f5ff [#661] blobovniczatree: Do not sort DBs and indices
Put stores an object to the next active DB, so there is no need to sort DBs.
In addition, sorting adds unnecessary DB openings.

Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-12-07 15:37:33 +03:00
c6a739e746 [#661] blobovniczatree: Make Rebuild concurrent for objects
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-12-07 15:37:33 +03:00
f1c7905263 [#661] blobovniczatree: Make Rebuild concurrent
Different DBs can be rebuilt concurrently.

Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-12-07 15:37:33 +03:00
d4d905ecc6 [#661] metrics: Add blobovniczatree rebuild metrics
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-12-07 15:37:33 +03:00
b2769ca3de [#661] blobovniczatree: Make Rebuild failover safe
Move info is now stored in blobovnicza, so in case of failover
rebuild completes the previous operation first.

Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-12-07 15:37:33 +03:00
da4fee2d0b [#698] blobovniczatree: Init blobovniczas concurrently
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-12-07 15:37:33 +03:00
422226da18 [#661] blobovniczatree: Add Rebuild implementation
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-12-07 15:37:32 +03:00
a531eaf8bc [#661] blobstor: Add Rebuild implementation
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-12-07 15:37:32 +03:00
c1667a11d2 [#661] blobovniczatree: Allow to change depth or width
Now it is possible to change the depth or width of blobovniczatree.

Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-12-07 15:37:32 +03:00
484eb59893 [#661] blobovniczatree: Use .db extension for db files
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-12-07 15:37:32 +03:00
44552a849b [#661] shard: Add blobstor rebuilder
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-12-07 15:37:32 +03:00
a478050639 [#838] metabase: Resolve funlen linter
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-12-06 15:44:21 +03:00
d30ab5f29e [#838] metabase: Count user objects
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-12-06 15:44:21 +03:00
f314da4af3 [#838] metabase: Add user object type counter
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-12-06 15:44:21 +03:00
29550fe600 [#838] shard: Refactor updateMetrics method
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-12-06 15:44:21 +03:00
b892feeaf6 [#845] adm: Relax notary-enabled check
Starting from v0.104.0 `NativeActivations` config field is no longer
present and Notary activation height is always 0.

https://github.com/nspcc-dev/neo-go/pull/3212/
TrueCloudLab/frostfs-dev-env#59

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2023-12-06 11:08:04 +00:00
6bb27f98dd [#837] .pre-commit: Update hook versions
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2023-12-06 11:07:10 +00:00
44806aa9f1 [#837] go.mod: Update dependencies
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2023-12-06 11:07:10 +00:00
f1db468d48 [#840] adm: Update FrostFS ID deploy arguments
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2023-12-04 17:39:41 +03:00
b2c63e57ba [#651] engine/test: Speedup StorageEngine_Inhume
Signed-off-by: Alexander Chuprov <a.chuprov@yadro.com>
2023-11-30 13:19:43 +00:00
445ebcc0e7 [#651] shard/test: Speedup Shard_Delete
Signed-off-by: Alexander Chuprov <a.chuprov@yadro.com>
2023-11-30 13:19:43 +00:00
2302e5d342 [#651] shard/test: Refactor Shard_Delete
Signed-off-by: Alexander Chuprov <a.chuprov@yadro.com>
2023-11-30 13:19:43 +00:00
a982c3df18 [#824] cli: Support passing chain ID in add-rule command
Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2023-11-30 13:13:46 +00:00
7f6852bbd2 [#639] node: Refactor TTL cache
Migrate from internal to external TTL implementation

Signed-off-by: Alexander Chuprov <a.chuprov@yadro.com>
2023-11-30 12:54:51 +00:00
26e4f7005c [#741] treesvc: Refactor tree sync
Fix linter issues.
Add error logging.

Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-11-30 12:45:02 +00:00
b21be1abdd [#741] treesvc: Do not update sync height if some node is unavailable
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-11-30 12:45:02 +00:00
b215817e14 [#741] treesvc: Remove unused height variables
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-11-30 12:45:02 +00:00
306f12e6c5 [#828] adm: Fix policy contract deploy
Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2023-11-29 06:23:56 +00:00
5521737f0b [#808] cli: Use EnableTraverseRunHooks in cobra
Adopt EnableTraverseRunHooks to get rid of tracing boilerplate in
multiple commands. Now adding the `--trace` flag is sufficient for a command
to support tracing. Finally, it looks the way it _should_.

Refs TrueCloudLab/frostfs-node#406

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2023-11-27 09:58:19 +00:00
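
For reference, cobra.EnableTraverseRunHooks is a package-level switch in cobra; with it set, persistent hooks of parent commands also run for subcommands, so a single root-level hook can react to --trace. The sketch below only illustrates the mechanism and is not the frostfs-cli wiring itself.

```
package main

import (
	"fmt"

	"github.com/spf13/cobra"
)

func main() {
	// With this switch on, persistent hooks of parent commands run for
	// subcommands too, so tracing setup lives only on the root command.
	cobra.EnableTraverseRunHooks = true

	root := &cobra.Command{
		Use: "example-cli",
		PersistentPreRun: func(cmd *cobra.Command, _ []string) {
			if ok, _ := cmd.Flags().GetBool("trace"); ok {
				fmt.Println("tracing enabled") // a real CLI would start a tracer here
			}
		},
	}
	root.PersistentFlags().Bool("trace", false, "enable tracing")

	sub := &cobra.Command{
		Use: "do",
		// No per-command hook is needed: the root hook still runs.
		Run: func(*cobra.Command, []string) { fmt.Println("doing work") },
	}
	root.AddCommand(sub)

	_ = root.Execute()
}
```
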
e81a58b8da [#808] cli: Use MarkFlagsOneRequired after cobra update
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2023-11-27 09:58:19 +00:00
5048236441 [#808] go.mod: Update dependencies
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2023-11-27 09:58:19 +00:00
c516c7c5f4 [#821] node: Pass user.ID by value
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-11-23 10:21:07 +03:00
c99157f0b2 [#821] go.mod: Update SDK-Go version
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-11-23 10:21:06 +03:00
07390ad4e3 [#715] node: Unify config parameter names
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-11-22 17:13:50 +03:00
8d18fa159e [#667] writecache: Fix flush test
Allow to disable background flusher for testing purposes.

Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-11-17 17:45:43 +03:00
02454df14a [#809] client: Refactor PrmInit, PrmDial usage
Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2023-11-17 13:37:03 +00:00
76ff26039c [#96] node: Drop neo-go's slices package
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-11-17 13:24:04 +03:00
47286ebf32 [#805] pilorama: Fix TreeDrop
* If treeID is empty, the cursor over the buckets being deleted may get
  invalidated, so buckets should be gathered before deleting.

Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2023-11-17 10:21:35 +00:00
5cfb758e4e [#806] morph: Remove container list cache
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-11-17 10:40:08 +03:00
29fe8c41f3 [#655] storage: Drop ErrorHandler
Its only usage was for logging, and logging
is now performed by the storage anyway.

Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-11-16 17:27:38 +03:00
137e987a4e [#655] storage: Drop LazyHandler
LazyHandler is implemented and used incorrectly.

Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-11-16 17:27:38 +03:00
4d5be5ccb5 [#811] ape: Update policy-engine module version and rebase
Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2023-11-16 11:31:37 +03:00
fd9128d051 [#800] node: eACL -> APE converter
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-11-15 11:55:55 +03:00
364f835b7e [#740] logs: Add Loki
Signed-off-by: Alexander Chuprov <a.chuprov@yadro.com>
2023-11-14 19:01:05 +00:00
c1ec6e33b4 [#793] adm: Always use committee as FrostFS ID owner
The committee should be able to authorize everything; there are no other
use cases for frostfs-adm currently. It also somewhat eases
configuration, because the committee hash depends on the protocol
configuration.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2023-11-14 19:00:32 +00:00
f871f5cc6c [#793] adm: Support new FrostFS ID contract
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2023-11-14 19:00:32 +00:00
b62008daca [#506] node: Invalidate list cache after container add/removal
`update` already has problems mentioned in its doc-comment and the code
itself is not straightforward. Invalidating cache altogether seems like
a better option because we don't construct cache output ourselves (thus, no
"impossible" results).

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2023-11-14 19:00:11 +00:00
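
The design choice here is invalidate-on-write rather than patch-in-place. A schematic sketch of that approach (types and names are illustrative, not the node's cache implementation):

```
// Schematic only: the real node caches container lists differently.
package cache

import "sync"

type listCache struct {
	mu    sync.Mutex
	lists map[string][]string // owner -> cached container IDs (illustrative types)
}

// onContainerEvent is called on container creation or removal.
// Instead of patching the cached slice, the entry is dropped, so the
// next read rebuilds it from the source of truth and cannot contain
// "impossible" results.
func (c *listCache) onContainerEvent(owner string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	delete(c.lists, owner)
}
```
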
f13f5d3b0f [#506] node: Use DeletionInfo() method to get deleted container owner
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2023-11-14 19:00:11 +00:00
a952a406a2 [#506] container: Use uint64 for epoch type
It is `uint64` in netmap source interfaces and other code.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2023-11-14 19:00:11 +00:00
f04806ccd3 [#506] container: Use user.ID in DeletionInfo response
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2023-11-14 19:00:11 +00:00
8088063195 [#787] netmap: Refactor NewEpoch method
Split for user and control methods.

Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-11-13 17:22:31 +03:00
c8a62ffedd [#787] morph: Calculate VUB and nonce when hash is nil
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-11-13 17:13:03 +03:00
2393d13e4d [#787] morph: Return VUB for IR service calls
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-11-13 17:13:03 +03:00
518f3baf41 [#787] morph: Return VUB from Invoke method
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-11-13 17:13:03 +03:00
5466e88444 [#787] cli: Add vub for control ir commands
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-11-13 17:13:03 +03:00
bdfa523487 [#787] proto: Add VUB field for IR service
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-11-13 17:13:03 +03:00
78cfb6aea8 [#796] cli: Fix object nodes command
Tombstone objects must be present on all container nodes.

Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-11-09 13:41:36 +03:00
0f75e48138 [#796] policer: Fix tombstone objects replication
Tombstone objects must be replicated to all container nodes.

Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-11-09 13:39:33 +03:00
7cdae4f660 [#792] proto: Regenerate with fixed version
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-11-09 10:09:13 +00:00
1bca8f118f [#792] makefile: Fix protoc and staticcheck versions
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-11-09 10:09:13 +00:00
9133b4389e [#788] objectsvc: Fix formatting (gofumpt)
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-11-09 10:27:32 +03:00
1b22801eed [#788] engine: Fix flaky tests
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-11-09 10:25:46 +03:00
3534d6d05b [#794] objectsvc: Return accidentally removed acl checks for Head
Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2023-11-08 17:13:58 +03:00
3ed3e2715b [#xx] adm: Drop notaryDisabled deploy parameter
Refs TrueCloudLab/frostfs-contract#50

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2023-11-08 13:38:04 +00:00
66848d3288 [#770] cli: Add methods to work with APE rules via control svc
* Add methods to frostfs-cli
* Implement rpc in control service

Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2023-11-08 13:34:03 +00:00
8e11ef46b8 [#770] object: Introduce ape chain checker for object svc
* Introduce Request type converted from RequestInfo type
  to implement policy-engine's Request interface
* Implement basic APE checker to check if a request is
  permitted to be performed
* Make put handlers use APE checker instead of EACL

Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2023-11-08 13:34:03 +00:00
5ec73fe8a0 [#770] node: Introduce ape chain source
* Provide methods to access rule chains with access
  policy engine (APE) chain source
* Initialize apeChainSource within object service
  initialization
* Share apeChainSource with control service
* Implement dummy apeChainSource instance based on
  in-memory implementation

Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2023-11-08 13:34:03 +00:00
3a2c319b87 [#770] control: Generate gRPC methods to manipulate APE chains
* Define new types and gRPC methods to manipulate APE chains
  in control service.
* Stub gRPC handlers for the generated methods.

Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2023-11-08 13:34:03 +00:00
70ab1ebd54 [#763] metrics: Add container_objects_total metric
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-11-08 12:30:57 +03:00
9c98fa6152 [#763] metabase: Add container objects counter
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-11-08 12:30:57 +03:00
226e84d782 [#684] node: Add skipped objects count to evacuation result
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-11-07 12:17:11 +00:00
1e21733ea5 [#684] cli: Add skipped count to evacuation status output
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-11-07 12:17:11 +00:00
523fb3ca51 [#684] proto: Add skipped count to evacuation status response
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-11-07 12:17:11 +00:00
74c91eeef5 [#777] client: Refactor PrmContainerList, PrmObjectSearch usage
Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2023-11-06 06:50:11 +00:00
cae50ecb21 [#716] adm: Add dump policy
Signed-off-by: Alexander Chuprov <a.chuprov@yadro.com>
2023-11-06 06:43:56 +00:00
20d6132f31 [#531] signSvc: Add SetMarshaledData method call
To reduce memory allocations, add a `SetMarshaledData` method call
so that already marshalled data is returned by subsequent `StableMarshal` calls.

Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-11-02 17:34:33 +03:00
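
The general pattern behind this change is caching the serialized form once it is known, so later marshal calls return the stored bytes instead of re-encoding. The snippet below only illustrates that pattern with made-up types; it is not the frostfs-api-go SetMarshaledData implementation.

```
// Generic illustration with made-up types; not the frostfs-api-go API.
package message

type Marshaler interface {
	StableMarshal(buf []byte) []byte
}

type cached struct {
	body      Marshaler
	marshaled []byte // filled by SetMarshaledData after the first encoding
}

// SetMarshaledData stores bytes that are already a valid encoding of the body.
func (c *cached) SetMarshaledData(data []byte) { c.marshaled = data }

// StableMarshal returns the cached bytes when available, avoiding a second
// encoding pass and its allocations.
func (c *cached) StableMarshal(buf []byte) []byte {
	if c.marshaled != nil {
		return c.marshaled
	}
	c.marshaled = c.body.StableMarshal(buf)
	return c.marshaled
}
```
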
7b1eda5107 [#531] go.mod: Update api-go version
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-11-02 17:32:41 +03:00
7c8591f83b [#779] go.mod: Update frostfs-contract
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2023-11-02 09:23:41 +00:00
1ab567870a [#779] adm: Support deploying policy contract
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2023-11-02 09:23:41 +00:00
0b0e5dab24 [#756] adm: Add polling interval increase
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-11-01 14:24:28 +03:00
c7a7229484 [#764] metrics: Fix epoch metric
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-11-01 10:57:31 +00:00
a26483fc30 [#749] morph: Fix panic when closing morph client
Signed-off-by: Anton Nikiforov <an.nikiforov@yadro.com>
2023-11-01 10:48:10 +00:00
c80b46fad3 [#754] blobstor: Estimate compressibility
Now it is possible to enable compressibility estimation.
If data is likely incompressible, this should reduce CPU time and memory usage.

Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-11-01 11:24:32 +03:00
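
Conceptually, compressibility estimation means compressing a small sample of the payload and skipping compression when the savings are negligible. The following is an illustrative estimator using only the standard library, not the blobstor implementation or its configuration knob.

```
// Illustrative estimator, not the blobstor implementation: compress a small
// prefix of the payload and check whether the output is meaningfully smaller.
package compressestimate

import (
	"bytes"
	"compress/flate"
)

func likelyCompressible(data []byte) bool {
	const sampleSize = 4 << 10 // 4 KiB is enough for a rough estimate
	sample := data
	if len(sample) > sampleSize {
		sample = sample[:sampleSize]
	}

	var buf bytes.Buffer
	w, err := flate.NewWriter(&buf, flate.BestSpeed)
	if err != nil {
		return true // fall back to compressing
	}
	_, _ = w.Write(sample)
	_ = w.Close()

	// Require at least ~10% savings on the sample to bother compressing.
	return buf.Len() < len(sample)*9/10
}
```
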
05b508f79a [#772] proto: Fix file ending
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-10-31 17:03:04 +03:00
8a82335b5c [#772] pre-commit: Add gofumpt check
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-10-31 17:03:04 +03:00
990f9f2d2b [#772] makefile: Replace gofmt with gofumpt
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-10-31 17:03:04 +03:00
79088baa06 [#772] node: Apply gofumpt
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-10-31 17:03:03 +03:00
00aa6d9749 [#633] shard/test: Fix TestCounters()
Introduced in 362f24953a; it was forgotten to be changed because the test
generator didn't provide a payload size.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2023-10-31 12:53:28 +00:00
b8f79f4227 [#633] shard/test: Fix race conditions in TestCounters()
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2023-10-31 12:53:28 +00:00
261d281154 [#762] go.mod: Update sdk-go
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2023-10-31 11:22:29 +00:00
869518be0a [#728] writecache: Fix Badger writecache race.
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-10-30 18:36:41 +03:00
d4b6ebe7e7 [#725] writecache: Fix metric values
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-10-27 12:22:29 +03:00
121f5c4dd8 [#757] ir: Do not exclude node in maintenance mode from netmap
Signed-off-by: Anton Nikiforov <an.nikiforov@yadro.com>
2023-10-26 10:50:32 +03:00
9f7c2d8810 [#752] innerring: Simplify keyPosition()
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2023-10-25 16:06:44 +03:00
cddc58ace2 [#752] innerring: Optimize keyPosition()
```
goos: linux
goarch: amd64
pkg: git.frostfs.info/TrueCloudLab/frostfs-node/pkg/innerring
cpu: 11th Gen Intel(R) Core(TM) i5-1135G7 @ 2.40GHz
              │      old       │                 new                 │
              │     sec/op     │   sec/op     vs base                │
KeyPosition-8   2771.50n ± 10%   40.32n ± 4%  -98.55% (p=0.000 n=10)

              │     old      │                  new                  │
              │     B/op     │     B/op      vs base                 │
KeyPosition-8   1.531Ki ± 0%   0.000Ki ± 0%  -100.00% (p=0.000 n=10)

              │    old     │                new                 │
              │ allocs/op  │ allocs/op  vs base                 │
KeyPosition-8   21.00 ± 0%   0.00 ± 0%  -100.00% (p=0.000 n=10)
```

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2023-10-25 16:06:44 +03:00
0a9830564f [#752] morph: Adopt neo-go RPC statuses
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2023-10-25 16:06:44 +03:00
6950312967 [#752] morph: Drop loop copy kludges
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2023-10-25 16:06:44 +03:00
4f62fded01 [#752] go.mod: Update dependencies
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2023-10-25 16:06:44 +03:00
2dbf5c612a [#752] go.mod: Update neo-go to v0.103.0
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2023-10-25 16:06:44 +03:00
4239f1e817 [#750] adm: Drop deprecated rpcclient.TransferTarget
We do not use `nep17` wrapper, because transfers of different tokens are
possible in a single transaction.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2023-10-25 07:57:05 +00:00
7f35f2fb1d [#750] adm: Drop deprecated CreateTxFromScript()
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2023-10-25 07:57:05 +00:00
b0d303f3ed [#750] adm: Drop unused methods from Client
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2023-10-25 07:57:05 +00:00
a788c24e6d [#750] adm: Drop deprecated AddNetworkFee()
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2023-10-25 07:57:05 +00:00
4368243bed [#750] adm: Drop deprecated NEP17BalanceOf()
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2023-10-25 07:57:05 +00:00
00a0045d9a [#750] adm: Drop deprecated GetContract*()
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2023-10-25 07:57:05 +00:00
7f8ccc105b [#750] adm: Drop deprecated GetNetwork()
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2023-10-25 07:57:05 +00:00
efb37b0e65 [#750] adm: Fix invalid tests
Introduced in a9d04ba86f.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2023-10-25 07:57:05 +00:00
fe1acf9e9a [#750] morph: Remove deprecated channel use
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2023-10-25 07:57:05 +00:00
559ad58ab1 [#642] writecache: Remove usage of close channel in bbolt
Signed-off-by: Anton Nikiforov <an.nikiforov@yadro.com>
2023-10-24 15:57:50 +00:00
c0b86f2d93 [#642] writecache: Remove usage of close channel in badger
Signed-off-by: Anton Nikiforov <an.nikiforov@yadro.com>
2023-10-24 15:57:50 +00:00
b0cf100427 [#49] node: React on SIGHUP only when node in READY state
Add more info in logs when the node is going to shut down
but the initialization process is still in progress.

Signed-off-by: Anton Nikiforov <an.nikiforov@yadro.com>
2023-10-24 15:55:29 +00:00
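
The idea is to gate the SIGHUP handler on the node's lifecycle state: while initialization is still running, the signal is logged and ignored instead of triggering a reload. A self-contained sketch (state constants and messages are hypothetical):

```
// Self-contained sketch; state constants and log messages are hypothetical.
package main

import (
	"log"
	"os"
	"os/signal"
	"sync/atomic"
	"syscall"
	"time"
)

const (
	stateStarting int32 = iota
	stateReady
)

func main() {
	var nodeState atomic.Int32 // starts as stateStarting

	sighup := make(chan os.Signal, 1)
	signal.Notify(sighup, syscall.SIGHUP)

	go func() {
		for range sighup {
			if nodeState.Load() != stateReady {
				// Node is still initializing: report it and skip the reload.
				log.Println("SIGHUP received, but the node is not READY yet; ignoring")
				continue
			}
			log.Println("SIGHUP received, reloading configuration")
		}
	}()

	time.Sleep(2 * time.Second) // stand-in for the real initialization work
	nodeState.Store(stateReady)

	select {} // keep serving; a real node would block on its lifecycle here
}
```
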
58b6224dd8 [#747] client: Refactor PrmObjectPutInit usage
Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2023-10-20 11:55:40 +00:00
12b7cf2533 [#747] client: Refactor PrmObjectPutSingle usage
Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2023-10-20 11:55:40 +00:00
dc4d27201b [#733] morph: Fix delete container signature check
An invalid condition was committed; it was just for debugging.

Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-10-19 18:07:37 +03:00
189dbb01be [#733] frostfs-cli: Add control ir remove-container
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-10-19 16:22:18 +03:00
397 changed files with 12238 additions and 4694 deletions

View file

@@ -39,7 +39,7 @@ linters-settings:
       alias: objectSDK
   custom:
     truecloudlab-linters:
-      path: bin/external_linters.so
+      path: bin/linters/external_linters.so
       original-url: git.frostfs.info/TrueCloudLab/linters.git
       settings:
         noliteral:
@@ -79,5 +79,8 @@ linters:
     - contextcheck
     - importas
     - truecloudlab-linters
+    - perfsprint
+    - testifylint
+    - protogetter
   disable-all: true
   fast: false

View file

@@ -10,7 +10,7 @@ repos:
       - id: gitlint-ci
   - repo: https://github.com/pre-commit/pre-commit-hooks
-    rev: v4.4.0
+    rev: v4.5.0
     hooks:
       - id: check-added-large-files
       - id: check-case-conflict
@@ -26,7 +26,7 @@ repos:
         exclude: ".key$"
   - repo: https://github.com/shellcheck-py/shellcheck-py
-    rev: v0.9.0.5
+    rev: v0.9.0.6
     hooks:
       - id: shellcheck
@@ -47,6 +47,15 @@ repos:
         types: [go]
         language: system
+  - repo: local
+    hooks:
+      - id: gofumpt
+        name: gofumpt
+        entry: make fumpt
+        pass_filenames: false
+        types: [go]
+        language: system
   - repo: https://github.com/TekWizely/pre-commit-golang
     rev: v1.0.0-rc.1
     hooks:

View file

@@ -8,8 +8,16 @@ HUB_IMAGE ?= truecloudlab/frostfs
 HUB_TAG ?= "$(shell echo ${VERSION} | sed 's/^v//')"
 GO_VERSION ?= 1.21
-LINT_VERSION ?= 1.54.0
-TRUECLOUDLAB_LINT_VERSION ?= 0.0.2
+LINT_VERSION ?= 1.55.2
+TRUECLOUDLAB_LINT_VERSION ?= 0.0.3
+PROTOC_VERSION ?= 25.0
+PROTOC_GEN_GO_VERSION ?= $(shell go list -f '{{.Version}}' -m google.golang.org/protobuf)
+PROTOGEN_FROSTFS_VERSION ?= $(shell go list -f '{{.Version}}' -m git.frostfs.info/TrueCloudLab/frostfs-api-go/v2)
+PROTOC_OS_VERSION=osx-x86_64
+ifeq ($(shell uname), Linux)
+  PROTOC_OS_VERSION=linux-x86_64
+endif
+STATICCHECK_VERSION ?= 2023.1.6
 ARCH = amd64
 BIN = bin
@@ -26,11 +34,17 @@ PKG_VERSION ?= $(shell echo $(VERSION) | sed "s/^v//" | \
     sed -E "s/(.*)-(g[a-fA-F0-9]{6,8})(.*)/\1\3~\2/" | \
     sed "s/-/~/")-${OS_RELEASE}
-OUTPUT_LINT_DIR ?= $(shell pwd)/bin
+OUTPUT_LINT_DIR ?= $(abspath $(BIN))/linters
 LINT_DIR = $(OUTPUT_LINT_DIR)/golangci-lint-$(LINT_VERSION)-v$(TRUECLOUDLAB_LINT_VERSION)
 TMP_DIR := .cache
+PROTOBUF_DIR ?= $(abspath $(BIN))/protobuf
+PROTOC_DIR ?= $(PROTOBUF_DIR)/protoc-v$(PROTOC_VERSION)
+PROTOC_GEN_GO_DIR ?= $(PROTOBUF_DIR)/protoc-gen-go-$(PROTOC_GEN_GO_VERSION)
+PROTOGEN_FROSTFS_DIR ?= $(PROTOBUF_DIR)/protogen-$(PROTOGEN_FROSTFS_VERSION)
+STATICCHECK_DIR ?= $(abspath $(BIN))/staticcheck
+STATICCHECK_VERSION_DIR ?= $(STATICCHECK_DIR)/$(STATICCHECK_VERSION)
-.PHONY: help all images dep clean fmts fmt imports test lint docker/lint
+.PHONY: help all images dep clean fmts fumpt imports test lint docker/lint
     prepare-release debpackage pre-commit unpre-commit
 # To build a specific binary, use it's name prefix with bin/ as a target
@@ -78,22 +92,32 @@ export-metrics: dep
 # Regenerate proto files:
 protoc:
-    @GOPRIVATE=github.com/TrueCloudLab go mod vendor
-    # Install specific version for protobuf lib
-    @go list -f '{{.Path}}/...@{{.Version}}' -m github.com/golang/protobuf | xargs go install -v
-    @GOBIN=$(abspath $(BIN)) go install -mod=mod -v git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/util/protogen
-    # Protoc generate
-    @for f in `find . -type f -name '*.proto' -not -path './vendor/*'`; do \
+    @if [ ! -d "$(PROTOC_DIR)" ] || [ ! -d "$(PROTOC_GEN_GO_DIR)" ] || [ ! -d "$(PROTOGEN_FROSTFS_DIR)" ]; then \
+        make protoc-install; \
+    fi
+    @for f in `find . -type f -name '*.proto' -not -path './bin/*'`; do \
         echo "⇒ Processing $$f "; \
-        protoc \
-            --proto_path=.:./vendor:/usr/local/include \
-            --plugin=protoc-gen-go-frostfs=$(BIN)/protogen \
+        $(PROTOC_DIR)/bin/protoc \
+            --proto_path=.:$(PROTOC_DIR)/include:/usr/local/include \
+            --plugin=protoc-gen-go=$(PROTOC_GEN_GO_DIR)/protoc-gen-go \
+            --plugin=protoc-gen-go-frostfs=$(PROTOGEN_FROSTFS_DIR)/protogen \
             --go-frostfs_out=. --go-frostfs_opt=paths=source_relative \
            --go_out=. --go_opt=paths=source_relative \
            --go-grpc_opt=require_unimplemented_servers=false \
            --go-grpc_out=. --go-grpc_opt=paths=source_relative $$f; \
     done
-    rm -rf vendor
+
+protoc-install:
+    @rm -rf $(PROTOBUF_DIR)
+    @mkdir $(PROTOBUF_DIR)
+    @echo "⇒ Installing protoc... "
+    @wget -q -O $(PROTOBUF_DIR)/protoc-$(PROTOC_VERSION).zip 'https://github.com/protocolbuffers/protobuf/releases/download/v$(PROTOC_VERSION)/protoc-$(PROTOC_VERSION)-$(PROTOC_OS_VERSION).zip'
+    @unzip -q -o $(PROTOBUF_DIR)/protoc-$(PROTOC_VERSION).zip -d $(PROTOC_DIR)
+    @rm $(PROTOBUF_DIR)/protoc-$(PROTOC_VERSION).zip
+    @echo "⇒ Installing protoc-gen-go..."
+    @GOBIN=$(PROTOC_GEN_GO_DIR) go install -v google.golang.org/protobuf/...@$(PROTOC_GEN_GO_VERSION)
+    @echo "⇒ Instaling protogen FrostFS plugin..."
+    @GOBIN=$(PROTOGEN_FROSTFS_DIR) go install -mod=mod -v git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/util/protogen@$(PROTOGEN_FROSTFS_VERSION)
 # Build FrostFS component's docker image
 image-%:
@@ -122,18 +146,17 @@ docker/%:
 # Run all code formatters
-fmts: fmt imports
-
-# Reformat code
-fmt:
-    @echo "⇒ Processing gofmt check"
-    @gofmt -s -w cmd/ pkg/ misc/
+fmts: fumpt imports
 # Reformat imports
 imports:
     @echo "⇒ Processing goimports check"
     @goimports -w cmd/ pkg/ misc/
+fumpt:
+    @echo "⇒ Processing gofumpt check"
+    @gofumpt -l -w cmd/ pkg/ misc/
+
 # Run Unit Test with go test
 test:
     @echo "⇒ Running go test"
@@ -144,6 +167,8 @@ pre-commit-run:
 # Install linters
 lint-install:
+    @rm -rf $(OUTPUT_LINT_DIR)
+    @mkdir $(OUTPUT_LINT_DIR)
     @mkdir -p $(TMP_DIR)
     @rm -rf $(TMP_DIR)/linters
     @git -c advice.detachedHead=false clone --branch v$(TRUECLOUDLAB_LINT_VERSION) https://git.frostfs.info/TrueCloudLab/linters.git $(TMP_DIR)/linters
@@ -155,18 +180,22 @@ lint-install:
 # Run linters
 lint:
     @if [ ! -d "$(LINT_DIR)" ]; then \
-        echo "Run make lint-install"; \
-        exit 1; \
+        make lint-install; \
     fi
     $(LINT_DIR)/golangci-lint run
 # Install staticcheck
 staticcheck-install:
-    @go install honnef.co/go/tools/cmd/staticcheck@latest
+    @rm -rf $(STATICCHECK_DIR)
+    @mkdir $(STATICCHECK_DIR)
+    @GOBIN=$(STATICCHECK_VERSION_DIR) go install honnef.co/go/tools/cmd/staticcheck@$(STATICCHECK_VERSION)
 # Run staticcheck
 staticcheck-run:
-    @staticcheck ./...
+    @if [ ! -d "$(STATICCHECK_VERSION_DIR)" ]; then \
+        make staticcheck-install; \
+    fi
+    @$(STATICCHECK_VERSION_DIR)/staticcheck ./...
 # Run linters in Docker
 docker/lint:
@@ -190,7 +219,6 @@ version:
 # Delete built artifacts
 clean:
-    rm -rf vendor
     rm -rf .cache
     rm -rf $(BIN)
    rm -rf $(RELEASE)

View file

@@ -50,12 +50,12 @@ func initConfig(cmd *cobra.Command, _ []string) error {
     }
     pathDir := filepath.Dir(configPath)
-    err = os.MkdirAll(pathDir, 0700)
+    err = os.MkdirAll(pathDir, 0o700)
     if err != nil {
         return fmt.Errorf("create dir %s: %w", pathDir, err)
     }
-    f, err := os.OpenFile(configPath, os.O_RDWR|os.O_CREATE|os.O_TRUNC|os.O_SYNC, 0600)
+    f, err := os.OpenFile(configPath, os.O_RDWR|os.O_CREATE|os.O_TRUNC|os.O_SYNC, 0o600)
     if err != nil {
         return fmt.Errorf("open %s: %w", configPath, err)
     }

View file

@@ -16,6 +16,7 @@ import (
     "github.com/nspcc-dev/neo-go/pkg/io"
     "github.com/nspcc-dev/neo-go/pkg/rpcclient/gas"
     "github.com/nspcc-dev/neo-go/pkg/rpcclient/invoker"
+    "github.com/nspcc-dev/neo-go/pkg/rpcclient/management"
     "github.com/nspcc-dev/neo-go/pkg/rpcclient/rolemgmt"
     "github.com/nspcc-dev/neo-go/pkg/rpcclient/unwrap"
     "github.com/nspcc-dev/neo-go/pkg/smartcontract/callflag"
@@ -56,7 +57,8 @@ func dumpBalances(cmd *cobra.Command, _ []string) error {
     inv := invoker.New(c, nil)
     if dumpStorage || dumpAlphabet || dumpProxy {
-        nnsCs, err = c.GetContractStateByID(1)
+        r := management.NewReader(inv)
+        nnsCs, err = r.GetContractByID(1)
         if err != nil {
             return fmt.Errorf("can't get NNS contract info: %w", err)
         }

View file

@@ -13,6 +13,7 @@ import (
     "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client/netmap"
     "github.com/nspcc-dev/neo-go/pkg/io"
     "github.com/nspcc-dev/neo-go/pkg/rpcclient/invoker"
+    "github.com/nspcc-dev/neo-go/pkg/rpcclient/management"
     "github.com/nspcc-dev/neo-go/pkg/rpcclient/unwrap"
     "github.com/nspcc-dev/neo-go/pkg/smartcontract/callflag"
     "github.com/nspcc-dev/neo-go/pkg/vm/emit"
@@ -29,8 +30,9 @@ func dumpNetworkConfig(cmd *cobra.Command, _ []string) error {
     }
     inv := invoker.New(c, nil)
+    r := management.NewReader(inv)
-    cs, err := c.GetContractStateByID(1)
+    cs, err := r.GetContractByID(1)
     if err != nil {
         return fmt.Errorf("can't get NNS contract info: %w", err)
     }
@@ -87,7 +89,8 @@ func setConfigCmd(cmd *cobra.Command, args []string) error {
         return fmt.Errorf("can't initialize context: %w", err)
     }
-    cs, err := wCtx.Client.GetContractStateByID(1)
+    r := management.NewReader(wCtx.ReadOnlyInvoker)
+    cs, err := r.GetContractByID(1)
     if err != nil {
         return fmt.Errorf("can't get NNS contract info: %w", err)
     }

View file

@@ -11,6 +11,7 @@ import (
     "github.com/nspcc-dev/neo-go/pkg/crypto/hash"
     "github.com/nspcc-dev/neo-go/pkg/io"
     "github.com/nspcc-dev/neo-go/pkg/rpcclient/invoker"
+    "github.com/nspcc-dev/neo-go/pkg/rpcclient/management"
     "github.com/nspcc-dev/neo-go/pkg/rpcclient/unwrap"
     "github.com/nspcc-dev/neo-go/pkg/smartcontract/callflag"
     "github.com/nspcc-dev/neo-go/pkg/util"
@@ -22,14 +23,15 @@ import (
 var errInvalidContainerResponse = errors.New("invalid response from container contract")
-func getContainerContractHash(cmd *cobra.Command, inv *invoker.Invoker, c Client) (util.Uint160, error) {
+func getContainerContractHash(cmd *cobra.Command, inv *invoker.Invoker) (util.Uint160, error) {
     s, err := cmd.Flags().GetString(containerContractFlag)
     var ch util.Uint160
     if err == nil {
         ch, err = util.Uint160DecodeStringLE(s)
     }
     if err != nil {
-        nnsCs, err := c.GetContractStateByID(1)
+        r := management.NewReader(inv)
+        nnsCs, err := r.GetContractByID(1)
         if err != nil {
             return util.Uint160{}, fmt.Errorf("can't get NNS contract state: %w", err)
         }
@@ -78,7 +80,7 @@ func dumpContainers(cmd *cobra.Command, _ []string) error {
     inv := invoker.New(c, nil)
-    ch, err := getContainerContractHash(cmd, inv, c)
+    ch, err := getContainerContractHash(cmd, inv)
     if err != nil {
         return fmt.Errorf("unable to get contaract hash: %w", err)
     }
@@ -168,7 +170,7 @@ func listContainers(cmd *cobra.Command, _ []string) error {
     inv := invoker.New(c, nil)
-    ch, err := getContainerContractHash(cmd, inv, c)
+    ch, err := getContainerContractHash(cmd, inv)
     if err != nil {
         return fmt.Errorf("unable to get contaract hash: %w", err)
     }
@@ -298,7 +300,8 @@ func parseContainers(filename string) ([]Container, error) {
     }
 func fetchContainerContractHash(wCtx *initializeContext) (util.Uint160, error) {
-    nnsCs, err := wCtx.Client.GetContractStateByID(1)
+    r := management.NewReader(wCtx.ReadOnlyInvoker)
+    nnsCs, err := r.GetContractByID(1)
     if err != nil {
         return util.Uint160{}, fmt.Errorf("can't get NNS contract state: %w", err)
     }

View file

@@ -76,7 +76,8 @@ func deployContractCmd(cmd *cobra.Command, args []string) error {
         return err
     }
-    nnsCs, err := c.Client.GetContractStateByID(1)
+    r := management.NewReader(c.ReadOnlyInvoker)
+    nnsCs, err := r.GetContractByID(1)
     if err != nil {
         return fmt.Errorf("can't fetch NNS contract state: %w", err)
     }

View file

@ -11,6 +11,7 @@ import (
morphClient "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client" morphClient "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client"
"github.com/nspcc-dev/neo-go/pkg/io" "github.com/nspcc-dev/neo-go/pkg/io"
"github.com/nspcc-dev/neo-go/pkg/rpcclient/invoker" "github.com/nspcc-dev/neo-go/pkg/rpcclient/invoker"
"github.com/nspcc-dev/neo-go/pkg/rpcclient/management"
"github.com/nspcc-dev/neo-go/pkg/rpcclient/unwrap" "github.com/nspcc-dev/neo-go/pkg/rpcclient/unwrap"
"github.com/nspcc-dev/neo-go/pkg/smartcontract/callflag" "github.com/nspcc-dev/neo-go/pkg/smartcontract/callflag"
"github.com/nspcc-dev/neo-go/pkg/util" "github.com/nspcc-dev/neo-go/pkg/util"
@ -36,7 +37,8 @@ func dumpContractHashes(cmd *cobra.Command, _ []string) error {
return fmt.Errorf("can't create N3 client: %w", err) return fmt.Errorf("can't create N3 client: %w", err)
} }
cs, err := c.GetContractStateByID(1) r := management.NewReader(invoker.New(c, nil))
cs, err := r.GetContractByID(1)
if err != nil { if err != nil {
return err return err
} }

View file

@@ -6,6 +6,7 @@ import (
     "strings"
     "github.com/nspcc-dev/neo-go/pkg/io"
+    "github.com/nspcc-dev/neo-go/pkg/rpcclient/management"
     "github.com/nspcc-dev/neo-go/pkg/rpcclient/unwrap"
     "github.com/nspcc-dev/neo-go/pkg/smartcontract/callflag"
     "github.com/nspcc-dev/neo-go/pkg/util"
@@ -20,7 +21,8 @@ func forceNewEpochCmd(cmd *cobra.Command, _ []string) error {
         return fmt.Errorf("can't to initialize context: %w", err)
     }
-    cs, err := wCtx.Client.GetContractStateByID(1)
+    r := management.NewReader(wCtx.ReadOnlyInvoker)
+    cs, err := r.GetContractByID(1)
     if err != nil {
         return fmt.Errorf("can't get NNS contract info: %w", err)
     }

View file

@@ -0,0 +1,33 @@
+package morph
+
+import (
+    "fmt"
+
+    "github.com/nspcc-dev/neo-go/pkg/crypto/keys"
+    "github.com/nspcc-dev/neo-go/pkg/encoding/address"
+    "github.com/nspcc-dev/neo-go/pkg/util"
+    "github.com/spf13/viper"
+)
+
+func getFrostfsIDAdmin(v *viper.Viper) (util.Uint160, bool, error) {
+    admin := v.GetString(frostfsIDAdminConfigKey)
+    if admin == "" {
+        return util.Uint160{}, false, nil
+    }
+
+    h, err := address.StringToUint160(admin)
+    if err == nil {
+        return h, true, nil
+    }
+
+    h, err = util.Uint160DecodeStringLE(admin)
+    if err == nil {
+        return h, true, nil
+    }
+
+    pk, err := keys.NewPublicKeyFromString(admin)
+    if err == nil {
+        return pk.GetScriptHash(), true, nil
+    }
+    return util.Uint160{}, true, fmt.Errorf("frostfsid: admin is invalid: '%s'", admin)
+}

View file

@@ -0,0 +1,53 @@
+package morph
+
+import (
+    "encoding/hex"
+    "testing"
+
+    "github.com/nspcc-dev/neo-go/pkg/crypto/keys"
+    "github.com/nspcc-dev/neo-go/pkg/encoding/address"
+    "github.com/spf13/viper"
+    "github.com/stretchr/testify/require"
+)
+
+func TestFrostfsIDConfig(t *testing.T) {
+    pks := make([]*keys.PrivateKey, 4)
+    for i := range pks {
+        pk, err := keys.NewPrivateKey()
+        require.NoError(t, err)
+        pks[i] = pk
+    }
+
+    fmts := []string{
+        pks[0].GetScriptHash().StringLE(),
+        address.Uint160ToString(pks[1].GetScriptHash()),
+        hex.EncodeToString(pks[2].PublicKey().UncompressedBytes()),
+        hex.EncodeToString(pks[3].PublicKey().Bytes()),
+    }
+
+    for i := range fmts {
+        v := viper.New()
+        v.Set("frostfsid.admin", fmts[i])
+
+        actual, found, err := getFrostfsIDAdmin(v)
+        require.NoError(t, err)
+        require.True(t, found)
+        require.Equal(t, pks[i].GetScriptHash(), actual)
+    }
+
+    t.Run("bad key", func(t *testing.T) {
+        v := viper.New()
+        v.Set("frostfsid.admin", "abc")
+
+        _, found, err := getFrostfsIDAdmin(v)
+        require.Error(t, err)
+        require.True(t, found)
+    })
+
+    t.Run("missing key", func(t *testing.T) {
+        v := viper.New()
+
+        _, found, err := getFrostfsIDAdmin(v)
+        require.NoError(t, err)
+        require.False(t, found)
+    })
+}

View file

@@ -76,7 +76,7 @@ func initializeWallets(v *viper.Viper, walletDir string, size int) ([]string, er
     }
     p := filepath.Join(walletDir, innerring.GlagoliticLetter(i).String()+".json")
-    f, err := os.OpenFile(p, os.O_CREATE, 0644)
+    f, err := os.OpenFile(p, os.O_CREATE, 0o644)
     if err != nil {
         return nil, fmt.Errorf("can't create wallet file: %w", err)
     }

View file

@@ -7,6 +7,7 @@ import (
     "path/filepath"
     "time"
+    "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/commonflags"
     "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/modules/config"
     "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/innerring"
     morphClient "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client"
@@ -15,6 +16,7 @@ import (
     "github.com/nspcc-dev/neo-go/pkg/core/transaction"
     "github.com/nspcc-dev/neo-go/pkg/crypto/keys"
     "github.com/nspcc-dev/neo-go/pkg/rpcclient/actor"
+    "github.com/nspcc-dev/neo-go/pkg/rpcclient/management"
     "github.com/nspcc-dev/neo-go/pkg/smartcontract/trigger"
     "github.com/nspcc-dev/neo-go/pkg/util"
     "github.com/nspcc-dev/neo-go/pkg/vm/vmstate"
@@ -312,7 +314,8 @@ func (c *initializeContext) nnsContractState() (*state.Contract, error) {
         return c.nnsCs, nil
     }
-    cs, err := c.Client.GetContractStateByID(1)
+    r := management.NewReader(c.ReadOnlyInvoker)
+    cs, err := r.GetContractByID(1)
     if err != nil {
         return nil, err
     }
@@ -374,36 +377,18 @@ func (c *clientContext) awaitTx(cmd *cobra.Command) error {
 func awaitTx(cmd *cobra.Command, c Client, txs []hashVUBPair) error {
     cmd.Println("Waiting for transactions to persist...")
-    const pollInterval = time.Second
-
-    tick := time.NewTicker(pollInterval)
-    defer tick.Stop()
-
     at := trigger.Application
     var retErr error
-    currBlock, err := c.GetBlockCount()
-    if err != nil {
-        return fmt.Errorf("can't fetch current block height: %w", err)
-    }
-
 loop:
     for i := range txs {
-        res, err := c.GetApplicationLog(txs[i].hash, &at)
-        if err == nil {
-            if retErr == nil && len(res.Executions) > 0 && res.Executions[0].VMState != vmstate.Halt {
-                retErr = fmt.Errorf("tx %d persisted in %s state: %s",
-                    i, res.Executions[0].VMState, res.Executions[0].FaultException)
-            }
-            continue loop
-        }
-        if txs[i].vub < currBlock {
-            return fmt.Errorf("tx was not persisted: vub=%d, height=%d", txs[i].vub, currBlock)
-        }
-        for range tick.C {
+        var it int
+        var pollInterval time.Duration
+        var pollIntervalChanged bool
+        for {
             // We must fetch current height before application log, to avoid race condition.
-            currBlock, err = c.GetBlockCount()
+            currBlock, err := c.GetBlockCount()
             if err != nil {
                 return fmt.Errorf("can't fetch current block height: %w", err)
             }
@@ -418,12 +403,43 @@
             if txs[i].vub < currBlock {
                 return fmt.Errorf("tx was not persisted: vub=%d, height=%d", txs[i].vub, currBlock)
             }
+
+            pollInterval, pollIntervalChanged = nextPollInterval(it, pollInterval)
+            if pollIntervalChanged && viper.GetBool(commonflags.Verbose) {
+                cmd.Printf("Pool interval to check transaction persistence changed: %s\n", pollInterval.String())
+            }
+
+            timer := time.NewTimer(pollInterval)
+            select {
+            case <-cmd.Context().Done():
+                return cmd.Context().Err()
+            case <-timer.C:
+            }
+
+            it++
         }
     }
     return retErr
 }
+
+func nextPollInterval(it int, previous time.Duration) (time.Duration, bool) {
+    const minPollInterval = 1 * time.Second
+    const maxPollInterval = 16 * time.Second
+    const changeAfter = 5
+    if it == 0 {
+        return minPollInterval, true
+    }
+    if it%changeAfter != 0 {
+        return previous, false
+    }
+    nextInterval := previous * 2
+    if nextInterval > maxPollInterval {
+        return maxPollInterval, previous != maxPollInterval
+    }
+    return nextInterval, true
+}
+
 // sendCommitteeTx creates transaction from script, signs it by committee nodes and sends it to RPC.
 // If tryGroup is false, global scope is used for the signer (useful when
 // working with native contracts).
@@ -503,7 +519,7 @@ func checkNotaryEnabled(c Client) error {
     nativeHashes := make(map[string]util.Uint160, len(ns))
     for i := range ns {
         if ns[i].Manifest.Name == nativenames.Notary {
-            notaryEnabled = len(ns[i].UpdateHistory) > 0
+            notaryEnabled = true
         }
         nativeHashes[ns[i].Manifest.Name] = ns[i].Hash
     }

View file

@ -18,10 +18,8 @@ import (
morphClient "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client" morphClient "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client/netmap" "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client/netmap"
"github.com/nspcc-dev/neo-go/pkg/core/state" "github.com/nspcc-dev/neo-go/pkg/core/state"
"github.com/nspcc-dev/neo-go/pkg/core/transaction"
"github.com/nspcc-dev/neo-go/pkg/encoding/address" "github.com/nspcc-dev/neo-go/pkg/encoding/address"
io2 "github.com/nspcc-dev/neo-go/pkg/io" io2 "github.com/nspcc-dev/neo-go/pkg/io"
"github.com/nspcc-dev/neo-go/pkg/rpcclient"
"github.com/nspcc-dev/neo-go/pkg/rpcclient/actor" "github.com/nspcc-dev/neo-go/pkg/rpcclient/actor"
"github.com/nspcc-dev/neo-go/pkg/rpcclient/management" "github.com/nspcc-dev/neo-go/pkg/rpcclient/management"
"github.com/nspcc-dev/neo-go/pkg/rpcclient/unwrap" "github.com/nspcc-dev/neo-go/pkg/rpcclient/unwrap"
@ -33,7 +31,7 @@ import (
"github.com/nspcc-dev/neo-go/pkg/vm/emit" "github.com/nspcc-dev/neo-go/pkg/vm/emit"
"github.com/nspcc-dev/neo-go/pkg/vm/opcode" "github.com/nspcc-dev/neo-go/pkg/vm/opcode"
"github.com/nspcc-dev/neo-go/pkg/vm/stackitem" "github.com/nspcc-dev/neo-go/pkg/vm/stackitem"
"github.com/nspcc-dev/neo-go/pkg/vm/vmstate" "github.com/spf13/viper"
) )
const ( const (
@ -45,15 +43,19 @@ const (
containerContract = "container" containerContract = "container"
frostfsIDContract = "frostfsid" frostfsIDContract = "frostfsid"
netmapContract = "netmap" netmapContract = "netmap"
policyContract = "policy"
proxyContract = "proxy" proxyContract = "proxy"
) )
const frostfsIDAdminConfigKey = "frostfsid.admin"
var ( var (
contractList = []string{ contractList = []string{
balanceContract, balanceContract,
containerContract, containerContract,
frostfsIDContract, frostfsIDContract,
netmapContract, netmapContract,
policyContract,
proxyContract, proxyContract,
} }
@ -94,7 +96,10 @@ func (c *initializeContext) deployNNS(method string) error {
h := cs.Hash h := cs.Hash
nnsCs, err := c.nnsContractState() nnsCs, err := c.nnsContractState()
if err == nil { if err != nil {
return err
}
if nnsCs != nil {
if nnsCs.NEF.Checksum == cs.NEF.Checksum { if nnsCs.NEF.Checksum == cs.NEF.Checksum {
if method == deployMethodName { if method == deployMethodName {
c.Command.Println("NNS contract is already deployed.") c.Command.Println("NNS contract is already deployed.")
@ -112,28 +117,13 @@ func (c *initializeContext) deployNNS(method string) error {
} }
params := getContractDeployParameters(cs, nil) params := getContractDeployParameters(cs, nil)
signer := transaction.Signer{
Account: c.CommitteeAcc.Contract.ScriptHash(),
Scopes: transaction.CalledByEntry,
}
invokeHash := management.Hash invokeHash := management.Hash
if method == updateMethodName { if method == updateMethodName {
invokeHash = nnsCs.Hash invokeHash = nnsCs.Hash
} }
res, err := invokeFunction(c.Client, invokeHash, method, params, []transaction.Signer{signer}) tx, err := c.CommitteeAct.MakeCall(invokeHash, method, params...)
if err != nil {
return fmt.Errorf("can't deploy NNS contract: %w", err)
}
if res.State != vmstate.Halt.String() {
return fmt.Errorf("can't deploy NNS contract: %s", res.FaultException)
}
tx, err := c.Client.CreateTxFromScript(res.Script, c.CommitteeAcc, res.GasConsumed, 0, []rpcclient.SignerAccount{{
Signer: signer,
Account: c.CommitteeAcc,
}})
if err != nil { if err != nil {
return fmt.Errorf("failed to create deploy tx for %s: %w", nnsContract, err) return fmt.Errorf("failed to create deploy tx for %s: %w", nnsContract, err)
} }
@ -366,8 +356,9 @@ func (c *initializeContext) deployContracts() error {
} }
func (c *initializeContext) isUpdated(ctrHash util.Uint160, cs *contractState) bool { func (c *initializeContext) isUpdated(ctrHash util.Uint160, cs *contractState) bool {
realCs, err := c.Client.GetContractStateByHash(ctrHash) r := management.NewReader(c.ReadOnlyInvoker)
return err == nil && realCs.NEF.Checksum == cs.NEF.Checksum realCs, err := r.GetContract(ctrHash)
return err == nil && realCs != nil && realCs.NEF.Checksum == cs.NEF.Checksum
} }
func (c *initializeContext) getContract(ctrName string) *contractState { func (c *initializeContext) getContract(ctrName string) *contractState {
@ -517,8 +508,7 @@ func getContractDeployParameters(cs *contractState, deployData []any) []any {
} }
func (c *initializeContext) getContractDeployData(ctrName string, keysParam []any, method string) []any { func (c *initializeContext) getContractDeployData(ctrName string, keysParam []any, method string) []any {
items := make([]any, 1, 6) items := make([]any, 0, 6)
items[0] = false // notaryDisabled is false
switch ctrName { switch ctrName {
case frostfsContract: case frostfsContract:
@ -536,7 +526,8 @@ func (c *initializeContext) getContractDeployData(ctrName string, keysParam []an
case containerContract: case containerContract:
// In case if NNS is updated multiple times, we can't calculate // In case if NNS is updated multiple times, we can't calculate
// it's actual hash based on local data, thus query chain. // it's actual hash based on local data, thus query chain.
nnsCs, err := c.Client.GetContractStateByID(1) r := management.NewReader(c.ReadOnlyInvoker)
nnsCs, err := r.GetContractByID(1)
if err != nil { if err != nil {
panic("NNS is not yet deployed") panic("NNS is not yet deployed")
} }
@ -547,9 +538,16 @@ func (c *initializeContext) getContractDeployData(ctrName string, keysParam []an
nnsCs.Hash, nnsCs.Hash,
"container") "container")
case frostfsIDContract: case frostfsIDContract:
items = append(items, h, found, err := getFrostfsIDAdmin(viper.GetViper())
c.Contracts[netmapContract].Hash, if err != nil {
c.Contracts[containerContract].Hash) panic(err)
}
if found {
items = append(items, h)
} else {
items = append(items, nil)
}
case netmapContract: case netmapContract:
md := getDefaultNetmapContractConfigMap() md := getDefaultNetmapContractConfigMap()
if method == updateMethodName { if method == updateMethodName {
@ -583,6 +581,8 @@ func (c *initializeContext) getContractDeployData(ctrName string, keysParam []an
configParam) configParam)
case proxyContract: case proxyContract:
items = nil items = nil
case policyContract:
items = []any{nil}
default: default:
panic(fmt.Sprintf("invalid contract name: %s", ctrName)) panic(fmt.Sprintf("invalid contract name: %s", ctrName))
} }
@ -590,7 +590,8 @@ func (c *initializeContext) getContractDeployData(ctrName string, keysParam []an
}

func (c *initializeContext) getNetConfigFromNetmapContract() ([]stackitem.Item, error) {
-	cs, err := c.Client.GetContractStateByID(1)
+	r := management.NewReader(c.ReadOnlyInvoker)
+	cs, err := r.GetContractByID(1)
	if err != nil {
		return nil, fmt.Errorf("NNS is not yet deployed: %w", err)
	}
@ -606,12 +607,11 @@ func (c *initializeContext) getNetConfigFromNetmapContract() ([]stackitem.Item,
}

func (c *initializeContext) getAlphabetDeployItems(i, n int) []any {
-	items := make([]any, 6)
-	items[0] = false
-	items[1] = c.Contracts[netmapContract].Hash
-	items[2] = c.Contracts[proxyContract].Hash
-	items[3] = innerring.GlagoliticLetter(i).String()
-	items[4] = int64(i)
-	items[5] = int64(n)
+	items := make([]any, 5)
+	items[0] = c.Contracts[netmapContract].Hash
+	items[1] = c.Contracts[proxyContract].Hash
+	items[2] = innerring.GlagoliticLetter(i).String()
+	items[3] = int64(i)
+	items[4] = int64(n)
	return items
}


@ -15,6 +15,7 @@ import (
"github.com/nspcc-dev/neo-go/pkg/io" "github.com/nspcc-dev/neo-go/pkg/io"
"github.com/nspcc-dev/neo-go/pkg/rpcclient" "github.com/nspcc-dev/neo-go/pkg/rpcclient"
"github.com/nspcc-dev/neo-go/pkg/rpcclient/invoker" "github.com/nspcc-dev/neo-go/pkg/rpcclient/invoker"
"github.com/nspcc-dev/neo-go/pkg/rpcclient/management"
nnsClient "github.com/nspcc-dev/neo-go/pkg/rpcclient/nns" nnsClient "github.com/nspcc-dev/neo-go/pkg/rpcclient/nns"
"github.com/nspcc-dev/neo-go/pkg/rpcclient/unwrap" "github.com/nspcc-dev/neo-go/pkg/rpcclient/unwrap"
"github.com/nspcc-dev/neo-go/pkg/smartcontract/callflag" "github.com/nspcc-dev/neo-go/pkg/smartcontract/callflag"
@ -30,7 +31,8 @@ const defaultExpirationTime = 10 * 365 * 24 * time.Hour / time.Second
const frostfsOpsEmail = "ops@frostfs.info"

func (c *initializeContext) setNNS() error {
-	nnsCs, err := c.Client.GetContractStateByID(1)
+	r := management.NewReader(c.ReadOnlyInvoker)
+	nnsCs, err := r.GetContractByID(1)
	if err != nil {
		return err
	}


@ -3,14 +3,17 @@ package morph
import (
	"errors"
	"fmt"
+	"math/big"

	"github.com/nspcc-dev/neo-go/pkg/core/native"
	"github.com/nspcc-dev/neo-go/pkg/core/state"
	"github.com/nspcc-dev/neo-go/pkg/core/transaction"
	"github.com/nspcc-dev/neo-go/pkg/io"
	"github.com/nspcc-dev/neo-go/pkg/rpcclient"
+	"github.com/nspcc-dev/neo-go/pkg/rpcclient/actor"
	"github.com/nspcc-dev/neo-go/pkg/rpcclient/invoker"
	"github.com/nspcc-dev/neo-go/pkg/rpcclient/neo"
+	"github.com/nspcc-dev/neo-go/pkg/rpcclient/nep17"
	"github.com/nspcc-dev/neo-go/pkg/rpcclient/unwrap"
	"github.com/nspcc-dev/neo-go/pkg/smartcontract/callflag"
	"github.com/nspcc-dev/neo-go/pkg/util"
@ -41,12 +44,12 @@ func (c *initializeContext) registerCandidateRange(start, end int) error {
panic(fmt.Sprintf("BUG: %v", w.Err)) panic(fmt.Sprintf("BUG: %v", w.Err))
} }
signers := []rpcclient.SignerAccount{{ signers := []actor.SignerAccount{{
Signer: c.getSigner(false, c.CommitteeAcc), Signer: c.getSigner(false, c.CommitteeAcc),
Account: c.CommitteeAcc, Account: c.CommitteeAcc,
}} }}
for _, acc := range c.Accounts[start:end] { for _, acc := range c.Accounts[start:end] {
signers = append(signers, rpcclient.SignerAccount{ signers = append(signers, actor.SignerAccount{
Signer: transaction.Signer{ Signer: transaction.Signer{
Account: acc.Contract.ScriptHash(), Account: acc.Contract.ScriptHash(),
Scopes: transaction.CustomContracts, Scopes: transaction.CustomContracts,
@ -56,7 +59,11 @@ func (c *initializeContext) registerCandidateRange(start, end int) error {
		})
	}

-	tx, err := c.Client.CreateTxFromScript(w.Bytes(), c.CommitteeAcc, -1, 0, signers)
+	act, err := actor.New(c.Client, signers)
+	if err != nil {
+		return fmt.Errorf("can't create actor: %w", err)
+	}
+
+	tx, err := act.MakeRun(w.Bytes())
	if err != nil {
		return fmt.Errorf("can't create tx: %w", err)
	}
@ -134,8 +141,9 @@ func (c *initializeContext) transferNEOToAlphabetContracts() error {
}

func (c *initializeContext) transferNEOFinished(neoHash util.Uint160) (bool, error) {
-	bal, err := c.Client.NEP17BalanceOf(neoHash, c.CommitteeAcc.Contract.ScriptHash())
-	return bal < native.NEOTotalSupply, err
+	r := nep17.NewReader(c.ReadOnlyInvoker, neoHash)
+	bal, err := r.BalanceOf(c.CommitteeAcc.Contract.ScriptHash())
+	return bal.Cmp(big.NewInt(native.NEOTotalSupply)) == -1, err
}

var errGetPriceInvalid = errors.New("`getRegisterPrice`: invalid response")
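NEP17BalanceOf used to return int64; the nep17 token reader returns *big.Int, so thresholds are compared with Cmp. A small helper sketch (the owner hash and threshold are arbitrary):

```go
package example

import (
	"math/big"

	"github.com/nspcc-dev/neo-go/pkg/rpcclient/gas"
	"github.com/nspcc-dev/neo-go/pkg/rpcclient/invoker"
	"github.com/nspcc-dev/neo-go/pkg/rpcclient/nep17"
	"github.com/nspcc-dev/neo-go/pkg/util"
)

// gasBalanceExceeds reports whether owner holds strictly more than threshold GAS units.
func gasBalanceExceeds(inv *invoker.Invoker, owner util.Uint160, threshold int64) (bool, error) {
	r := nep17.NewReader(inv, gas.Hash)
	bal, err := r.BalanceOf(owner) // *big.Int now, not int64
	if err != nil {
		return false, err
	}
	// Cmp returns -1, 0 or 1; "strictly greater" is == 1, "strictly less" is == -1.
	return bal.Cmp(big.NewInt(threshold)) == 1, nil
}
```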


@ -20,7 +20,7 @@ import (
)

const (
-	contractsPath = "../../../../../../frostfs-contract/frostfs-contract-v0.16.0.tar.gz"
+	contractsPath = "../../../../../../contract/frostfs-contract-v0.18.0.tar.gz"
	protoFileName = "proto.yml"
)
@ -58,7 +58,9 @@ func testInitialize(t *testing.T, committeeSize int) {
	// Set to the path or remove the next statement to download from the network.
	require.NoError(t, initCmd.Flags().Set(contractsInitFlag, contractsPath))
-	v.Set(localDumpFlag, filepath.Join(testdataDir, "out"))
+	dumpPath := filepath.Join(testdataDir, "out")
+	require.NoError(t, initCmd.Flags().Set(localDumpFlag, dumpPath))
	v.Set(alphabetWalletsFlag, testdataDir)
	v.Set(epochDurationInitFlag, 1)
	v.Set(maxObjectSizeInitFlag, 1024)
@ -67,12 +69,15 @@ func testInitialize(t *testing.T, committeeSize int) {
	require.NoError(t, initializeSideChainCmd(initCmd, nil))

	t.Run("force-new-epoch", func(t *testing.T) {
+		require.NoError(t, forceNewEpoch.Flags().Set(localDumpFlag, dumpPath))
		require.NoError(t, forceNewEpochCmd(forceNewEpoch, nil))
	})
	t.Run("set-config", func(t *testing.T) {
+		require.NoError(t, setConfig.Flags().Set(localDumpFlag, dumpPath))
		require.NoError(t, setConfigCmd(setConfig, []string{"MaintenanceModeAllowed=true"}))
	})
	t.Run("set-policy", func(t *testing.T) {
+		require.NoError(t, setPolicy.Flags().Set(localDumpFlag, dumpPath))
		require.NoError(t, setPolicyCmd(setPolicy, []string{"ExecFeeFactor=1"}))
	})
	t.Run("remove-node", func(t *testing.T) {
@ -80,6 +85,7 @@ func testInitialize(t *testing.T, committeeSize int) {
		require.NoError(t, err)
		pub := hex.EncodeToString(pk.PublicKey().Bytes())

+		require.NoError(t, removeNodes.Flags().Set(localDumpFlag, dumpPath))
		require.NoError(t, removeNodesCmd(removeNodes, []string{pub}))
	})
}
@ -139,3 +145,38 @@ func setTestCredentials(v *viper.Viper, size int) {
	}
	v.Set("credentials.contract", testContractPassword)
}
func TestNextPollInterval(t *testing.T) {
var pollInterval time.Duration
var iteration int
pollInterval, hasChanged := nextPollInterval(iteration, pollInterval)
require.True(t, hasChanged)
require.Equal(t, time.Second, pollInterval)
iteration = 4
pollInterval, hasChanged = nextPollInterval(iteration, pollInterval)
require.False(t, hasChanged)
require.Equal(t, time.Second, pollInterval)
iteration = 5
pollInterval, hasChanged = nextPollInterval(iteration, pollInterval)
require.True(t, hasChanged)
require.Equal(t, 2*time.Second, pollInterval)
iteration = 10
pollInterval, hasChanged = nextPollInterval(iteration, pollInterval)
require.True(t, hasChanged)
require.Equal(t, 4*time.Second, pollInterval)
iteration = 20
pollInterval = 32 * time.Second
pollInterval, hasChanged = nextPollInterval(iteration, pollInterval)
require.True(t, hasChanged) // from 32s to 16s
require.Equal(t, 16*time.Second, pollInterval)
pollInterval = 16 * time.Second
pollInterval, hasChanged = nextPollInterval(iteration, pollInterval)
require.False(t, hasChanged)
require.Equal(t, 16*time.Second, pollInterval)
}
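The new test pins the transaction-await polling backoff: 1s to start, doubling every fifth iteration, capped at 16s (and clamped back down if the stored interval somehow exceeds the cap). The helper itself is not shown in this diff; the sketch below is reconstructed purely from the assertions above and is not the actual implementation:

```go
package morph

import "time"

// nextPollInterval is a reconstruction from the test expectations only.
func nextPollInterval(iteration int, current time.Duration) (time.Duration, bool) {
	const (
		minPollInterval = time.Second
		maxPollInterval = 16 * time.Second
	)
	if current == 0 {
		return minPollInterval, true // first call: start at 1s
	}
	if current > maxPollInterval {
		return maxPollInterval, true // clamp back to the cap (32s -> 16s in the test)
	}
	if iteration > 0 && iteration%5 == 0 {
		next := current * 2
		if next > maxPollInterval {
			next = maxPollInterval
		}
		return next, next != current // double every fifth iteration
	}
	return current, false
}
```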


@ -2,15 +2,18 @@ package morph
import (
	"fmt"
+	"math/big"

	"github.com/nspcc-dev/neo-go/pkg/core/native"
	"github.com/nspcc-dev/neo-go/pkg/core/transaction"
	"github.com/nspcc-dev/neo-go/pkg/io"
-	"github.com/nspcc-dev/neo-go/pkg/rpcclient"
+	"github.com/nspcc-dev/neo-go/pkg/rpcclient/actor"
	"github.com/nspcc-dev/neo-go/pkg/rpcclient/gas"
	"github.com/nspcc-dev/neo-go/pkg/rpcclient/neo"
+	"github.com/nspcc-dev/neo-go/pkg/rpcclient/nep17"
	"github.com/nspcc-dev/neo-go/pkg/smartcontract/callflag"
	scContext "github.com/nspcc-dev/neo-go/pkg/smartcontract/context"
+	"github.com/nspcc-dev/neo-go/pkg/util"
	"github.com/nspcc-dev/neo-go/pkg/vm/emit"
	"github.com/nspcc-dev/neo-go/pkg/vm/opcode"
	"github.com/nspcc-dev/neo-go/pkg/wallet"
@ -33,11 +36,11 @@ func (c *initializeContext) transferFunds() error {
		return err
	}

-	var transfers []rpcclient.TransferTarget
+	var transfers []transferTarget
	for _, acc := range c.Accounts {
		to := acc.Contract.ScriptHash()
		transfers = append(transfers,
-			rpcclient.TransferTarget{
+			transferTarget{
				Token:   gas.Hash,
				Address: to,
				Amount:  initialAlphabetGASAmount,
@ -47,25 +50,19 @@ func (c *initializeContext) transferFunds() error {
	// It is convenient to have all funds at the committee account.
	transfers = append(transfers,
-		rpcclient.TransferTarget{
+		transferTarget{
			Token:   gas.Hash,
			Address: c.CommitteeAcc.Contract.ScriptHash(),
			Amount:  (gasInitialTotalSupply - initialAlphabetGASAmount*int64(len(c.Wallets))) / 2,
		},
-		rpcclient.TransferTarget{
+		transferTarget{
			Token:   neo.Hash,
			Address: c.CommitteeAcc.Contract.ScriptHash(),
			Amount:  native.NEOTotalSupply,
		},
	)

-	tx, err := createNEP17MultiTransferTx(c.Client, c.ConsensusAcc, 0, transfers, []rpcclient.SignerAccount{{
-		Signer: transaction.Signer{
-			Account: c.ConsensusAcc.Contract.ScriptHash(),
-			Scopes:  transaction.CalledByEntry,
-		},
-		Account: c.ConsensusAcc,
-	}})
+	tx, err := createNEP17MultiTransferTx(c.Client, c.ConsensusAcc, transfers)
	if err != nil {
		return fmt.Errorf("can't create transfer transaction: %w", err)
	}
@ -80,8 +77,9 @@ func (c *initializeContext) transferFunds() error {
func (c *initializeContext) transferFundsFinished() (bool, error) {
	acc := c.Accounts[0]

-	res, err := c.Client.NEP17BalanceOf(gas.Hash, acc.Contract.ScriptHash())
-	return res > initialAlphabetGASAmount/2, err
+	r := nep17.NewReader(c.ReadOnlyInvoker, gas.Hash)
+	res, err := r.BalanceOf(acc.Contract.ScriptHash())
+	return res.Cmp(big.NewInt(initialAlphabetGASAmount/2)) == 1, err
}

func (c *initializeContext) multiSignAndSend(tx *transaction.Transaction, accType string) error {
@ -93,12 +91,13 @@ func (c *initializeContext) multiSignAndSend(tx *transaction.Transaction, accTyp
}

func (c *initializeContext) multiSign(tx *transaction.Transaction, accType string) error {
-	network, err := c.Client.GetNetwork()
+	version, err := c.Client.GetVersion()
	if err != nil {
		// error appears only if client
		// has not been initialized
		panic(err)
	}
+	network := version.Protocol.Network

	// Use parameter context to avoid dealing with signature order.
	pc := scContext.NewParameterContext("", network, tx)
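GetNetwork is dropped from the Client interface; the network magic now comes from the version handshake instead. A minimal sketch (placeholder endpoint):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/nspcc-dev/neo-go/pkg/rpcclient"
)

func main() {
	c, err := rpcclient.New(context.Background(), "http://localhost:30333", rpcclient.Options{})
	if err != nil {
		log.Fatal(err)
	}
	if err := c.Init(); err != nil {
		log.Fatal(err)
	}

	version, err := c.GetVersion() // one RPC call instead of the removed GetNetwork
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("network magic:", version.Protocol.Network)
}
```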
@ -146,16 +145,17 @@ func (c *initializeContext) multiSign(tx *transaction.Transaction, accType strin
func (c *initializeContext) transferGASToProxy() error {
	proxyCs := c.getContract(proxyContract)

-	bal, err := c.Client.NEP17BalanceOf(gas.Hash, proxyCs.Hash)
-	if err != nil || bal > 0 {
+	r := nep17.NewReader(c.ReadOnlyInvoker, gas.Hash)
+	bal, err := r.BalanceOf(proxyCs.Hash)
+	if err != nil || bal.Sign() > 0 {
		return err
	}

-	tx, err := createNEP17MultiTransferTx(c.Client, c.CommitteeAcc, 0, []rpcclient.TransferTarget{{
+	tx, err := createNEP17MultiTransferTx(c.Client, c.CommitteeAcc, []transferTarget{{
		Token:   gas.Hash,
		Address: proxyCs.Hash,
		Amount:  initialProxyGASAmount,
-	}}, nil)
+	}})
	if err != nil {
		return err
	}
@ -167,8 +167,14 @@ func (c *initializeContext) transferGASToProxy() error {
	return c.awaitTx()
}

-func createNEP17MultiTransferTx(c Client, acc *wallet.Account, netFee int64,
-	recipients []rpcclient.TransferTarget, cosigners []rpcclient.SignerAccount) (*transaction.Transaction, error) {
+type transferTarget struct {
+	Token   util.Uint160
+	Address util.Uint160
+	Amount  int64
+	Data    any
+}
+
+func createNEP17MultiTransferTx(c Client, acc *wallet.Account, recipients []transferTarget) (*transaction.Transaction, error) {
	from := acc.Contract.ScriptHash()

	w := io.NewBufBinWriter()
@ -180,11 +186,18 @@ func createNEP17MultiTransferTx(c Client, acc *wallet.Account, netFee int64,
	if w.Err != nil {
		return nil, fmt.Errorf("failed to create transfer script: %w", w.Err)
	}
-	return c.CreateTxFromScript(w.Bytes(), acc, -1, netFee, append([]rpcclient.SignerAccount{{
+
+	signers := []actor.SignerAccount{{
		Signer: transaction.Signer{
-			Account: from,
+			Account: acc.Contract.ScriptHash(),
			Scopes:  transaction.CalledByEntry,
		},
		Account: acc,
-	}}, cosigners...))
+	}}
+
+	act, err := actor.New(c, signers)
+	if err != nil {
+		return nil, fmt.Errorf("can't create actor: %w", err)
+	}
+	return act.MakeRun(w.Bytes())
}


@ -10,33 +10,28 @@ import (
"github.com/google/uuid" "github.com/google/uuid"
"github.com/nspcc-dev/neo-go/pkg/config" "github.com/nspcc-dev/neo-go/pkg/config"
"github.com/nspcc-dev/neo-go/pkg/config/netmode"
"github.com/nspcc-dev/neo-go/pkg/core" "github.com/nspcc-dev/neo-go/pkg/core"
"github.com/nspcc-dev/neo-go/pkg/core/block" "github.com/nspcc-dev/neo-go/pkg/core/block"
"github.com/nspcc-dev/neo-go/pkg/core/chaindump" "github.com/nspcc-dev/neo-go/pkg/core/chaindump"
"github.com/nspcc-dev/neo-go/pkg/core/fee"
"github.com/nspcc-dev/neo-go/pkg/core/native/noderoles" "github.com/nspcc-dev/neo-go/pkg/core/native/noderoles"
"github.com/nspcc-dev/neo-go/pkg/core/state" "github.com/nspcc-dev/neo-go/pkg/core/state"
"github.com/nspcc-dev/neo-go/pkg/core/storage" "github.com/nspcc-dev/neo-go/pkg/core/storage"
"github.com/nspcc-dev/neo-go/pkg/core/transaction" "github.com/nspcc-dev/neo-go/pkg/core/transaction"
"github.com/nspcc-dev/neo-go/pkg/crypto/hash"
"github.com/nspcc-dev/neo-go/pkg/crypto/keys" "github.com/nspcc-dev/neo-go/pkg/crypto/keys"
"github.com/nspcc-dev/neo-go/pkg/encoding/address" "github.com/nspcc-dev/neo-go/pkg/encoding/address"
"github.com/nspcc-dev/neo-go/pkg/encoding/fixedn"
"github.com/nspcc-dev/neo-go/pkg/io" "github.com/nspcc-dev/neo-go/pkg/io"
"github.com/nspcc-dev/neo-go/pkg/neorpc/result" "github.com/nspcc-dev/neo-go/pkg/neorpc/result"
"github.com/nspcc-dev/neo-go/pkg/network/payload"
"github.com/nspcc-dev/neo-go/pkg/rpcclient"
"github.com/nspcc-dev/neo-go/pkg/rpcclient/invoker" "github.com/nspcc-dev/neo-go/pkg/rpcclient/invoker"
"github.com/nspcc-dev/neo-go/pkg/rpcclient/unwrap" "github.com/nspcc-dev/neo-go/pkg/rpcclient/unwrap"
"github.com/nspcc-dev/neo-go/pkg/smartcontract" "github.com/nspcc-dev/neo-go/pkg/smartcontract"
"github.com/nspcc-dev/neo-go/pkg/smartcontract/callflag" "github.com/nspcc-dev/neo-go/pkg/smartcontract/callflag"
"github.com/nspcc-dev/neo-go/pkg/smartcontract/manifest"
"github.com/nspcc-dev/neo-go/pkg/smartcontract/trigger" "github.com/nspcc-dev/neo-go/pkg/smartcontract/trigger"
"github.com/nspcc-dev/neo-go/pkg/util" "github.com/nspcc-dev/neo-go/pkg/util"
"github.com/nspcc-dev/neo-go/pkg/vm"
"github.com/nspcc-dev/neo-go/pkg/vm/emit" "github.com/nspcc-dev/neo-go/pkg/vm/emit"
"github.com/nspcc-dev/neo-go/pkg/vm/opcode" "github.com/nspcc-dev/neo-go/pkg/vm/opcode"
"github.com/nspcc-dev/neo-go/pkg/vm/stackitem" "github.com/nspcc-dev/neo-go/pkg/vm/stackitem"
"github.com/nspcc-dev/neo-go/pkg/vm/vmstate"
"github.com/nspcc-dev/neo-go/pkg/wallet" "github.com/nspcc-dev/neo-go/pkg/wallet"
"github.com/spf13/cobra" "github.com/spf13/cobra"
"github.com/spf13/viper" "github.com/spf13/viper"
@ -88,7 +83,7 @@ func newLocalClient(cmd *cobra.Command, v *viper.Viper, wallets []*wallet.Wallet
	go bc.Run()

	if cmd.Name() != "init" {
-		f, err := os.OpenFile(dumpPath, os.O_RDONLY, 0600)
+		f, err := os.OpenFile(dumpPath, os.O_RDONLY, 0o600)
		if err != nil {
			return nil, fmt.Errorf("can't open local dump: %w", err)
		}
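The 0600/0644 to 0o600/0o644 changes here and in the CLI files below are purely cosmetic gofumpt rewrites: the values are identical, only the explicit octal prefix is new. A two-line check:

```go
package main

import "fmt"

func main() {
	fmt.Println(0600 == 0o600, 0644 == 0o644) // true true: same values, clearer notation
}
```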
@ -119,29 +114,10 @@ func (l *localClient) GetBlockCount() (uint32, error) {
	return l.bc.BlockHeight(), nil
}

-func (l *localClient) GetContractStateByID(id int32) (*state.Contract, error) {
-	h, err := l.bc.GetContractScriptHash(id)
-	if err != nil {
-		return nil, err
-	}
-	return l.GetContractStateByHash(h)
-}
-
-func (l *localClient) GetContractStateByHash(h util.Uint160) (*state.Contract, error) {
-	if cs := l.bc.GetContractState(h); cs != nil {
-		return cs, nil
-	}
-	return nil, storage.ErrKeyNotFound
-}
-
func (l *localClient) GetNativeContracts() ([]state.NativeContract, error) {
	return l.bc.GetNatives(), nil
}

-func (l *localClient) GetNetwork() (netmode.Magic, error) {
-	return l.bc.GetConfig().Magic, nil
-}
-
func (l *localClient) GetApplicationLog(h util.Uint256, t *trigger.Type) (*result.ApplicationLog, error) {
	aer, err := l.bc.GetAppExecResults(h, *t)
	if err != nil {
@ -152,34 +128,6 @@ func (l *localClient) GetApplicationLog(h util.Uint256, t *trigger.Type) (*resul
return &a, nil return &a, nil
} }
func (l *localClient) CreateTxFromScript(script []byte, acc *wallet.Account, sysFee int64, netFee int64, cosigners []rpcclient.SignerAccount) (*transaction.Transaction, error) {
signers, accounts, err := getSigners(acc, cosigners)
if err != nil {
return nil, fmt.Errorf("failed to construct tx signers: %w", err)
}
if sysFee < 0 {
res, err := l.InvokeScript(script, signers)
if err != nil {
return nil, fmt.Errorf("can't add system fee to transaction: %w", err)
}
if res.State != "HALT" {
return nil, fmt.Errorf("can't add system fee to transaction: bad vm state: %s due to an error: %s", res.State, res.FaultException)
}
sysFee = res.GasConsumed
}
tx := transaction.New(script, sysFee)
tx.Signers = signers
tx.ValidUntilBlock = l.bc.BlockHeight() + 2
err = l.AddNetworkFee(tx, netFee, accounts...)
if err != nil {
return nil, fmt.Errorf("failed to add network fee: %w", err)
}
return tx, nil
}
func (l *localClient) GetCommittee() (keys.PublicKeys, error) { func (l *localClient) GetCommittee() (keys.PublicKeys, error) {
// not used by `morph init` command // not used by `morph init` command
panic("unexpected call") panic("unexpected call")
@ -200,21 +148,6 @@ func (l *localClient) InvokeFunction(h util.Uint160, method string, sPrm []smart
	return invokeFunction(l, h, method, pp, ss)
}

-func (l *localClient) CalculateNotaryFee(_ uint8) (int64, error) {
-	// not used by `morph init` command
-	panic("unexpected call")
-}
-
-func (l *localClient) SignAndPushP2PNotaryRequest(_ *transaction.Transaction, _ []byte, _ int64, _ int64, _ uint32, _ *wallet.Account) (*payload.P2PNotaryRequest, error) {
-	// not used by `morph init` command
-	panic("unexpected call")
-}
-
-func (l *localClient) SignAndPushInvocationTx(_ []byte, _ *wallet.Account, _ int64, _ fixedn.Fixed8, _ []rpcclient.SignerAccount) (util.Uint256, error) {
-	// not used by `morph init` command
-	panic("unexpected call")
-}
-
func (l *localClient) TerminateSession(_ uuid.UUID) (bool, error) {
	// not used by `morph init` command
	panic("unexpected call")
@ -254,110 +187,76 @@ func (l *localClient) InvokeContractVerify(util.Uint160, []smartcontract.Paramet
// CalculateNetworkFee calculates network fee for the given transaction.
// Copied from neo-go with minor corrections (no need to support non-notary mode):
-// https://github.com/nspcc-dev/neo-go/blob/v0.99.2/pkg/services/rpcsrv/server.go#L744
+// https://github.com/nspcc-dev/neo-go/blob/v0.103.0/pkg/services/rpcsrv/server.go#L911
func (l *localClient) CalculateNetworkFee(tx *transaction.Transaction) (int64, error) {
hashablePart, err := tx.EncodeHashableFields() // Avoid setting hash for this tx: server code doesn't touch client transaction.
if err != nil { data := tx.Bytes()
return 0, fmt.Errorf("failed to compute tx size: %w", err) tx, err := transaction.NewTransactionFromBytes(data)
}
size := len(hashablePart) + io.GetVarSize(len(tx.Signers))
ef := l.bc.GetBaseExecFee()
var netFee int64
for i, signer := range tx.Signers {
var verificationScript []byte
for _, w := range tx.Scripts {
if w.VerificationScript != nil && hash.Hash160(w.VerificationScript).Equals(signer.Account) {
verificationScript = w.VerificationScript
break
}
}
if verificationScript == nil {
gasConsumed, err := l.bc.VerifyWitness(signer.Account, tx, &tx.Scripts[i], l.maxGasInvoke)
if err != nil {
return 0, fmt.Errorf("invalid signature: %w", err)
}
netFee += gasConsumed
size += io.GetVarSize([]byte{}) + io.GetVarSize(tx.Scripts[i].InvocationScript)
continue
}
fee, sizeDelta := fee.Calculate(ef, verificationScript)
netFee += fee
size += sizeDelta
}
fee := l.bc.FeePerByte()
netFee += int64(size) * fee
return netFee, nil
}
// AddNetworkFee adds network fee for each witness script and optional extra
// network fee to transaction. `accs` is an array signer's accounts.
// Copied from neo-go with minor corrections (no need to support contract signers):
// https://github.com/nspcc-dev/neo-go/blob/6ff11baa1b9e4c71ef0d1de43b92a8c541ca732c/pkg/rpc/client/rpc.go#L960
func (l *localClient) AddNetworkFee(tx *transaction.Transaction, extraFee int64, accs ...*wallet.Account) error {
if len(tx.Signers) != len(accs) {
return errors.New("number of signers must match number of scripts")
}
size := io.GetVarSize(tx)
ef := l.bc.GetBaseExecFee()
for i := range tx.Signers {
netFee, sizeDelta := fee.Calculate(ef, accs[i].Contract.Script)
tx.NetworkFee += netFee
size += sizeDelta
}
tx.NetworkFee += int64(size)*l.bc.FeePerByte() + extraFee
return nil
}
// getSigners returns an array of transaction signers and corresponding accounts from
// given sender and cosigners. If cosigners list already contains sender, the sender
// will be placed at the start of the list.
// Copied from neo-go with minor corrections:
// https://github.com/nspcc-dev/neo-go/blob/6ff11baa1b9e4c71ef0d1de43b92a8c541ca732c/pkg/rpc/client/rpc.go#L735
func getSigners(sender *wallet.Account, cosigners []rpcclient.SignerAccount) ([]transaction.Signer, []*wallet.Account, error) {
var (
signers []transaction.Signer
accounts []*wallet.Account
)
from := sender.Contract.ScriptHash()
s := transaction.Signer{
Account: from,
Scopes: transaction.None,
}
for _, c := range cosigners {
if c.Signer.Account == from {
s = c.Signer
continue
}
signers = append(signers, c.Signer)
accounts = append(accounts, c.Account)
}
signers = append([]transaction.Signer{s}, signers...)
accounts = append([]*wallet.Account{sender}, accounts...)
return signers, accounts, nil
}
func (l *localClient) NEP17BalanceOf(h util.Uint160, acc util.Uint160) (int64, error) {
res, err := invokeFunction(l, h, "balanceOf", []any{acc}, nil)
if err != nil { if err != nil {
return 0, err return 0, err
} }
if res.State != vmstate.Halt.String() || len(res.Stack) == 0 {
return 0, fmt.Errorf("`balance`: invalid response (empty: %t): %s", hashablePart, err := tx.EncodeHashableFields()
len(res.Stack) == 0, res.FaultException) if err != nil {
return 0, err
} }
bi, err := res.Stack[0].TryInteger() size := len(hashablePart) + io.GetVarSize(len(tx.Signers))
if err != nil || !bi.IsInt64() { var (
return 0, fmt.Errorf("`balance`: invalid response") netFee int64
// Verification GAS cost can't exceed this policy.
gasLimit = l.bc.GetMaxVerificationGAS()
)
for i, signer := range tx.Signers {
w := tx.Scripts[i]
if len(w.InvocationScript) == 0 { // No invocation provided, try to infer one.
var paramz []manifest.Parameter
if len(w.VerificationScript) == 0 { // Contract-based verification
cs := l.bc.GetContractState(signer.Account)
if cs == nil {
return 0, fmt.Errorf("signer %d has no verification script and no deployed contract", i)
}
md := cs.Manifest.ABI.GetMethod(manifest.MethodVerify, -1)
if md == nil || md.ReturnType != smartcontract.BoolType {
return 0, fmt.Errorf("signer %d has no verify method in deployed contract", i)
}
paramz = md.Parameters // Might as well have none params and it's OK.
} else { // Regular signature verification.
if vm.IsSignatureContract(w.VerificationScript) {
paramz = []manifest.Parameter{{Type: smartcontract.SignatureType}}
} else if nSigs, _, ok := vm.ParseMultiSigContract(w.VerificationScript); ok {
paramz = make([]manifest.Parameter, nSigs)
for j := 0; j < nSigs; j++ {
paramz[j] = manifest.Parameter{Type: smartcontract.SignatureType}
}
}
}
inv := io.NewBufBinWriter()
for _, p := range paramz {
p.Type.EncodeDefaultValue(inv.BinWriter)
}
if inv.Err != nil {
return 0, fmt.Errorf("failed to create dummy invocation script (signer %d): %s", i, inv.Err.Error())
}
w.InvocationScript = inv.Bytes()
}
gasConsumed, err := l.bc.VerifyWitness(signer.Account, tx, &w, gasLimit)
if err != nil && !errors.Is(err, core.ErrInvalidSignature) {
return 0, err
}
gasLimit -= gasConsumed
netFee += gasConsumed
size += io.GetVarSize(w.VerificationScript) + io.GetVarSize(w.InvocationScript)
} }
return bi.Int64(), nil if l.bc.P2PSigExtensionsEnabled() {
attrs := tx.GetAttributes(transaction.NotaryAssistedT)
if len(attrs) != 0 {
na := attrs[0].Value.(*transaction.NotaryAssisted)
netFee += (int64(na.NKeys) + 1) * l.bc.GetNotaryServiceFeePerKey()
}
}
fee := l.bc.FeePerByte()
netFee += int64(size) * fee
return netFee, nil
} }
func (l *localClient) InvokeScript(script []byte, signers []transaction.Signer) (*result.Invoke, error) { func (l *localClient) InvokeScript(script []byte, signers []transaction.Signer) (*result.Invoke, error) {


@ -6,13 +6,10 @@ import (
"fmt" "fmt"
"time" "time"
"github.com/nspcc-dev/neo-go/pkg/config/netmode"
"github.com/nspcc-dev/neo-go/pkg/core/state" "github.com/nspcc-dev/neo-go/pkg/core/state"
"github.com/nspcc-dev/neo-go/pkg/core/transaction" "github.com/nspcc-dev/neo-go/pkg/core/transaction"
"github.com/nspcc-dev/neo-go/pkg/crypto/keys" "github.com/nspcc-dev/neo-go/pkg/crypto/keys"
"github.com/nspcc-dev/neo-go/pkg/encoding/fixedn"
"github.com/nspcc-dev/neo-go/pkg/neorpc/result" "github.com/nspcc-dev/neo-go/pkg/neorpc/result"
"github.com/nspcc-dev/neo-go/pkg/network/payload"
"github.com/nspcc-dev/neo-go/pkg/rpcclient" "github.com/nspcc-dev/neo-go/pkg/rpcclient"
"github.com/nspcc-dev/neo-go/pkg/rpcclient/actor" "github.com/nspcc-dev/neo-go/pkg/rpcclient/actor"
"github.com/nspcc-dev/neo-go/pkg/rpcclient/invoker" "github.com/nspcc-dev/neo-go/pkg/rpcclient/invoker"
@ -29,21 +26,12 @@ type Client interface {
	invoker.RPCInvoke

	GetBlockCount() (uint32, error)
-	GetContractStateByID(int32) (*state.Contract, error)
-	GetContractStateByHash(util.Uint160) (*state.Contract, error)
	GetNativeContracts() ([]state.NativeContract, error)
-	GetNetwork() (netmode.Magic, error)
	GetApplicationLog(util.Uint256, *trigger.Type) (*result.ApplicationLog, error)
	GetVersion() (*result.Version, error)
-	CreateTxFromScript([]byte, *wallet.Account, int64, int64, []rpcclient.SignerAccount) (*transaction.Transaction, error)
-	NEP17BalanceOf(util.Uint160, util.Uint160) (int64, error)
	SendRawTransaction(*transaction.Transaction) (util.Uint256, error)
	GetCommittee() (keys.PublicKeys, error)
-	CalculateNotaryFee(uint8) (int64, error)
	CalculateNetworkFee(tx *transaction.Transaction) (int64, error)
-	AddNetworkFee(*transaction.Transaction, int64, ...*wallet.Account) error
-	SignAndPushInvocationTx([]byte, *wallet.Account, int64, fixedn.Fixed8, []rpcclient.SignerAccount) (util.Uint256, error)
-	SignAndPushP2PNotaryRequest(*transaction.Transaction, []byte, int64, int64, uint32, *wallet.Account) (*payload.P2PNotaryRequest, error)
}
type hashVUBPair struct { type hashVUBPair struct {


@ -5,6 +5,7 @@ import (
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common" commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client/netmap" "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client/netmap"
"github.com/nspcc-dev/neo-go/pkg/rpcclient/invoker" "github.com/nspcc-dev/neo-go/pkg/rpcclient/invoker"
"github.com/nspcc-dev/neo-go/pkg/rpcclient/management"
"github.com/spf13/cobra" "github.com/spf13/cobra"
"github.com/spf13/viper" "github.com/spf13/viper"
) )
@ -14,8 +15,9 @@ func listNetmapCandidatesNodes(cmd *cobra.Command, _ []string) {
commonCmd.ExitOnErr(cmd, "can't create N3 client: %w", err) commonCmd.ExitOnErr(cmd, "can't create N3 client: %w", err)
inv := invoker.New(c, nil) inv := invoker.New(c, nil)
r := management.NewReader(inv)
cs, err := c.GetContractStateByID(1) cs, err := r.GetContractByID(1)
commonCmd.ExitOnErr(cmd, "can't get NNS contract info: %w", err) commonCmd.ExitOnErr(cmd, "can't get NNS contract info: %w", err)
nmHash, err := nnsResolveHash(inv, cs.Hash, netmapContract+".frostfs") nmHash, err := nnsResolveHash(inv, cs.Hash, netmapContract+".frostfs")


@ -1,11 +1,15 @@
package morph

import (
+	"bytes"
	"fmt"
	"strconv"
	"strings"
+	"text/tabwriter"

+	commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
	"github.com/nspcc-dev/neo-go/pkg/io"
+	"github.com/nspcc-dev/neo-go/pkg/rpcclient/invoker"
	"github.com/nspcc-dev/neo-go/pkg/rpcclient/policy"
	"github.com/nspcc-dev/neo-go/pkg/smartcontract/callflag"
	"github.com/nspcc-dev/neo-go/pkg/vm/emit"
@ -52,3 +56,32 @@ func setPolicyCmd(cmd *cobra.Command, args []string) error {
	return wCtx.awaitTx()
}
func dumpPolicyCmd(cmd *cobra.Command, _ []string) error {
c, err := getN3Client(viper.GetViper())
commonCmd.ExitOnErr(cmd, "can't create N3 client:", err)
inv := invoker.New(c, nil)
policyContract := policy.NewReader(inv)
execFee, err := policyContract.GetExecFeeFactor()
commonCmd.ExitOnErr(cmd, "can't get execution fee factor:", err)
feePerByte, err := policyContract.GetFeePerByte()
commonCmd.ExitOnErr(cmd, "can't get fee per byte:", err)
storagePrice, err := policyContract.GetStoragePrice()
commonCmd.ExitOnErr(cmd, "can't get storage price:", err)
buf := bytes.NewBuffer(nil)
tw := tabwriter.NewWriter(buf, 0, 2, 2, ' ', 0)
_, _ = tw.Write([]byte(fmt.Sprintf("Execution Fee Factor:\t%d (int)\n", execFee)))
_, _ = tw.Write([]byte(fmt.Sprintf("Fee Per Byte:\t%d (int)\n", feePerByte)))
_, _ = tw.Write([]byte(fmt.Sprintf("Storage Price:\t%d (int)\n", storagePrice)))
_ = tw.Flush()
cmd.Print(buf.String())
return nil
}
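The new dump-policy command reads three Policy contract parameters through neo-go's policy reader and aligns them with text/tabwriter. A standalone sketch of the same flow (endpoint is a placeholder):

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"text/tabwriter"

	"github.com/nspcc-dev/neo-go/pkg/rpcclient"
	"github.com/nspcc-dev/neo-go/pkg/rpcclient/invoker"
	"github.com/nspcc-dev/neo-go/pkg/rpcclient/policy"
)

func main() {
	c, err := rpcclient.New(context.Background(), "http://localhost:30333", rpcclient.Options{})
	if err != nil {
		log.Fatal(err)
	}
	if err := c.Init(); err != nil {
		log.Fatal(err)
	}

	p := policy.NewReader(invoker.New(c, nil)) // read-only wrapper over the native Policy contract

	execFee, err := p.GetExecFeeFactor()
	if err != nil {
		log.Fatal(err)
	}
	feePerByte, err := p.GetFeePerByte()
	if err != nil {
		log.Fatal(err)
	}
	storagePrice, err := p.GetStoragePrice()
	if err != nil {
		log.Fatal(err)
	}

	// tabwriter lines the values up the same way the command output does.
	tw := tabwriter.NewWriter(os.Stdout, 0, 2, 2, ' ', 0)
	fmt.Fprintf(tw, "Execution Fee Factor:\t%d (int)\n", execFee)
	fmt.Fprintf(tw, "Fee Per Byte:\t%d (int)\n", feePerByte)
	fmt.Fprintf(tw, "Storage Price:\t%d (int)\n", storagePrice)
	_ = tw.Flush()
}
```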


@ -7,6 +7,7 @@ import (
netmapcontract "git.frostfs.info/TrueCloudLab/frostfs-contract/netmap" netmapcontract "git.frostfs.info/TrueCloudLab/frostfs-contract/netmap"
"github.com/nspcc-dev/neo-go/pkg/crypto/keys" "github.com/nspcc-dev/neo-go/pkg/crypto/keys"
"github.com/nspcc-dev/neo-go/pkg/io" "github.com/nspcc-dev/neo-go/pkg/io"
"github.com/nspcc-dev/neo-go/pkg/rpcclient/management"
"github.com/nspcc-dev/neo-go/pkg/smartcontract/callflag" "github.com/nspcc-dev/neo-go/pkg/smartcontract/callflag"
"github.com/nspcc-dev/neo-go/pkg/vm/emit" "github.com/nspcc-dev/neo-go/pkg/vm/emit"
"github.com/spf13/cobra" "github.com/spf13/cobra"
@ -33,7 +34,8 @@ func removeNodesCmd(cmd *cobra.Command, args []string) error {
	}
	defer wCtx.close()

-	cs, err := wCtx.Client.GetContractStateByID(1)
+	r := management.NewReader(wCtx.ReadOnlyInvoker)
+	cs, err := r.GetContractByID(1)
	if err != nil {
		return fmt.Errorf("can't get NNS contract info: %w", err)
	}


@ -146,6 +146,15 @@ var (
		},
	}

+	dumpPolicy = &cobra.Command{
+		Use:   "dump-policy",
+		Short: "Dump FrostFS policy",
+		PreRun: func(cmd *cobra.Command, _ []string) {
+			_ = viper.BindPFlag(endpointFlag, cmd.Flags().Lookup(endpointFlag))
+		},
+		RunE: dumpPolicyCmd,
+	}
+
	dumpContractHashesCmd = &cobra.Command{
		Use:   "dump-hashes",
		Short: "Dump deployed contract hashes",
@ -239,6 +248,7 @@ func init() {
	initForceNewEpochCmd()
	initRemoveNodesCmd()
	initSetPolicyCmd()
+	initDumpPolicyCmd()
	initDumpContractHashesCmd()
	initDumpNetworkConfigCmd()
	initSetConfigCmd()
@ -320,6 +330,7 @@ func initSetConfigCmd() {
	setConfig.Flags().String(alphabetWalletsFlag, "", "Path to alphabet wallets dir")
	setConfig.Flags().StringP(endpointFlag, "r", "", "N3 RPC node endpoint")
	setConfig.Flags().Bool(forceConfigSet, false, "Force setting not well-known configuration key")
+	setConfig.Flags().String(localDumpFlag, "", "Path to the blocks dump file")
}

func initDumpNetworkConfigCmd() {
@ -337,18 +348,26 @@ func initSetPolicyCmd() {
	RootCmd.AddCommand(setPolicy)
	setPolicy.Flags().String(alphabetWalletsFlag, "", "Path to alphabet wallets dir")
	setPolicy.Flags().StringP(endpointFlag, "r", "", "N3 RPC node endpoint")
+	setPolicy.Flags().String(localDumpFlag, "", "Path to the blocks dump file")
+}
+
+func initDumpPolicyCmd() {
+	RootCmd.AddCommand(dumpPolicy)
+	dumpPolicy.Flags().StringP(endpointFlag, "r", "", "N3 RPC node endpoint")
}

func initRemoveNodesCmd() {
	RootCmd.AddCommand(removeNodes)
	removeNodes.Flags().String(alphabetWalletsFlag, "", "Path to alphabet wallets dir")
	removeNodes.Flags().StringP(endpointFlag, "r", "", "N3 RPC node endpoint")
+	removeNodes.Flags().String(localDumpFlag, "", "Path to the blocks dump file")
}

func initForceNewEpochCmd() {
	RootCmd.AddCommand(forceNewEpoch)
	forceNewEpoch.Flags().String(alphabetWalletsFlag, "", "Path to alphabet wallets dir")
	forceNewEpoch.Flags().StringP(endpointFlag, "r", "", "N3 RPC node endpoint")
+	forceNewEpoch.Flags().String(localDumpFlag, "", "Path to the blocks dump file")
}

func initGenerateStorageCmd() {


@ -15,16 +15,14 @@ import (
"github.com/spf13/viper" "github.com/spf13/viper"
) )
var ( var rootCmd = &cobra.Command{
rootCmd = &cobra.Command{ Use: "frostfs-adm",
Use: "frostfs-adm", Short: "FrostFS Administrative Tool",
Short: "FrostFS Administrative Tool", Long: `FrostFS Administrative Tool provides functions to setup and
Long: `FrostFS Administrative Tool provides functions to setup and
manage FrostFS network deployment.`, manage FrostFS network deployment.`,
RunE: entryPoint, RunE: entryPoint,
SilenceUsage: true, SilenceUsage: true,
} }
)
func init() { func init() {
cobra.OnInitialize(func() { initConfig(rootCmd) }) cobra.OnInitialize(func() { initConfig(rootCmd) })


@ -145,7 +145,7 @@ func storageConfig(cmd *cobra.Command, args []string) {
	}

	out := applyTemplate(c)
-	fatalOnErr(os.WriteFile(outPath, out, 0644))
+	fatalOnErr(os.WriteFile(outPath, out, 0o644))

	cmd.Println("Node is ready for work! Run `frostfs-node -config " + outPath + "`")
} }


@ -387,30 +387,23 @@ func (x *PutObjectPrm) PrepareLocally() {
}

func (x *PutObjectPrm) convertToSDKPrm(ctx context.Context) (client.PrmObjectPutInit, error) {
-	var putPrm client.PrmObjectPutInit
-	if !x.prepareLocally && x.sessionToken != nil {
-		putPrm.WithinSession(*x.sessionToken)
-	}
-
-	if x.bearerToken != nil {
-		putPrm.WithBearerToken(*x.bearerToken)
-	}
-
-	if x.local {
-		putPrm.MarkLocal()
-	}
-
-	putPrm.WithXHeaders(x.xHeaders...)
-	putPrm.SetCopiesNumberByVectors(x.copyNum)
+	putPrm := client.PrmObjectPutInit{
+		XHeaders:     x.xHeaders,
+		BearerToken:  x.bearerToken,
+		Local:        x.local,
+		CopiesNumber: x.copyNum,
+	}

	if x.prepareLocally {
		res, err := x.cli.NetworkInfo(ctx, client.PrmNetworkInfo{})
		if err != nil {
			return client.PrmObjectPutInit{}, err
		}
-		putPrm.WithObjectMaxSize(res.Info().MaxObjectSize())
-		putPrm.WithEpochSource(epochSource(res.Info().CurrentEpoch()))
-		putPrm.WithoutHomomorphicHash(res.Info().HomomorphicHashingDisabled())
+		putPrm.MaxSize = res.Info().MaxObjectSize()
+		putPrm.EpochSource = epochSource(res.Info().CurrentEpoch())
+		putPrm.WithoutHomomorphHash = res.Info().HomomorphicHashingDisabled()
+	} else {
+		putPrm.Session = x.sessionToken
	}

	return putPrm, nil
}
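The SDK options move from With*/Set* mutators to plain struct fields in these hunks. A sketch of the new style; only fields visible in the diff above are set, and the parameter types are inferred from how the CLI passes its own values, so treat them as assumptions rather than the SDK's documented signatures:

```go
package example

import (
	"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/bearer"
	"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client"
	"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/session"
)

// buildPutPrm shows the struct-literal configuration style; the field set
// and the argument types are assumptions based on the CLI code above.
func buildPutPrm(xhdrs []string, btok *bearer.Token, sess *session.Object, local bool, copies []uint32) client.PrmObjectPutInit {
	return client.PrmObjectPutInit{
		XHeaders:     xhdrs,
		BearerToken:  btok,
		Session:      sess,
		Local:        local,
		CopiesNumber: copies,
	}
}
```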
@ -696,24 +689,15 @@ func (x SearchObjectsRes) IDList() []oid.ID {
//
// Returns any error which prevented the operation from completing correctly in error return.
func SearchObjects(ctx context.Context, prm SearchObjectsPrm) (*SearchObjectsRes, error) {
-	var cliPrm client.PrmObjectSearch
-	cliPrm.InContainer(prm.cnrID)
-	cliPrm.SetFilters(prm.filters)
-
-	if prm.sessionToken != nil {
-		cliPrm.WithinSession(*prm.sessionToken)
-	}
-
-	if prm.bearerToken != nil {
-		cliPrm.WithBearerToken(*prm.bearerToken)
-	}
-
-	if prm.local {
-		cliPrm.MarkLocal()
-	}
-
-	cliPrm.WithXHeaders(prm.xHeaders...)
+	cliPrm := client.PrmObjectSearch{
+		XHeaders:    prm.xHeaders,
+		Local:       prm.local,
+		BearerToken: prm.bearerToken,
+		Session:     prm.sessionToken,
+		ContainerID: &prm.cnrID,
+		Filters:     prm.filters,
+	}

	rdr, err := prm.cli.ObjectSearchInit(ctx, cliPrm)
	if err != nil {
		return nil, fmt.Errorf("init object search: %w", err)


@ -43,27 +43,28 @@ func getSDKClientByFlag(cmd *cobra.Command, key *ecdsa.PrivateKey, endpointFlag
// GetSDKClient returns default frostfs-sdk-go client.
func GetSDKClient(ctx context.Context, cmd *cobra.Command, key *ecdsa.PrivateKey, addr network.Address) (*client.Client, error) {
-	var (
-		c       client.Client
-		prmInit client.PrmInit
-		prmDial client.PrmDial
-	)
-
-	prmInit.SetDefaultPrivateKey(*key)
-	prmInit.ResolveFrostFSFailures()
-	prmDial.SetServerURI(addr.URIAddr())
+	var c client.Client
+
+	prmInit := client.PrmInit{
+		Key: *key,
+	}
+	prmDial := client.PrmDial{
+		Endpoint: addr.URIAddr(),
+		GRPCDialOptions: []grpc.DialOption{
+			grpc.WithChainUnaryInterceptor(tracing.NewUnaryClientInteceptor()),
+			grpc.WithChainStreamInterceptor(tracing.NewStreamClientInterceptor()),
+		},
+	}
	if timeout := viper.GetDuration(commonflags.Timeout); timeout > 0 {
		// In CLI we can only set a timeout for the whole operation.
		// By also setting stream timeout we ensure that no operation hands
		// for too long.
-		prmDial.SetTimeout(timeout)
-		prmDial.SetStreamTimeout(timeout)
+		prmDial.DialTimeout = timeout
+		prmDial.StreamTimeout = timeout

		common.PrintVerbose(cmd, "Set request timeout to %s.", timeout)
	}
-	prmDial.SetGRPCDialOptions(
-		grpc.WithChainUnaryInterceptor(tracing.NewUnaryClientInteceptor()),
-		grpc.WithChainStreamInterceptor(tracing.NewStreamClientInterceptor()))

	c.Init(prmInit)


@ -39,7 +39,7 @@ var accountingBalanceCmd = &cobra.Command{
		var prm internalclient.BalanceOfPrm
		prm.SetClient(cli)
-		prm.SetAccount(idUser)
+		prm.Account = idUser

		res, err := internalclient.BalanceOf(cmd.Context(), prm)
		commonCmd.ExitOnErr(cmd, "rpc error: %w", err)


@ -1,7 +1,6 @@
package accounting

import (
-	"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/common"
	"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
	"github.com/spf13/cobra"
	"github.com/spf13/viper"
@ -18,9 +17,7 @@ var Cmd = &cobra.Command{
		_ = viper.BindPFlag(commonflags.WalletPath, flags.Lookup(commonflags.WalletPath))
		_ = viper.BindPFlag(commonflags.Account, flags.Lookup(commonflags.Account))
		_ = viper.BindPFlag(commonflags.RPC, flags.Lookup(commonflags.RPC))
-
-		common.StartClientCommandSpan(cmd)
	},
-	PersistentPostRun: common.StopClientCommandSpan,
}
func init() { func init() {


@ -106,7 +106,7 @@ func createEACL(cmd *cobra.Command, _ []string) {
		return
	}

-	err = os.WriteFile(outArg, buf.Bytes(), 0644)
+	err = os.WriteFile(outArg, buf.Bytes(), 0o644)
	if err != nil {
		cmd.PrintErrln(err)
		os.Exit(1)


@ -130,6 +130,6 @@ func createToken(cmd *cobra.Command, _ []string) {
	}
	out, _ := cmd.Flags().GetString(outFlag)

-	err = os.WriteFile(out, data, 0644)
+	err = os.WriteFile(out, data, 0o644)
	commonCmd.ExitOnErr(cmd, "can't write token to file: %w", err)
} }


@ -51,7 +51,7 @@ var getContainerInfoCmd = &cobra.Command{
			data = cnr.Marshal()
		}

-		err = os.WriteFile(containerPathTo, data, 0644)
+		err = os.WriteFile(containerPathTo, data, 0o644)
		commonCmd.ExitOnErr(cmd, "can't write container to file: %w", err)
	}
}, },


@ -52,7 +52,7 @@ var getExtendedACLCmd = &cobra.Command{
cmd.Println("dumping data to file:", containerPathTo) cmd.Println("dumping data to file:", containerPathTo)
err = os.WriteFile(containerPathTo, data, 0644) err = os.WriteFile(containerPathTo, data, 0o644)
commonCmd.ExitOnErr(cmd, "could not write eACL to file: %w", err) commonCmd.ExitOnErr(cmd, "could not write eACL to file: %w", err)
}, },
} }


@ -47,7 +47,7 @@ var listContainersCmd = &cobra.Command{
		var prm internalclient.ListContainersPrm
		prm.SetClient(cli)
-		prm.SetAccount(idUser)
+		prm.Account = idUser

		res, err := internalclient.ListContainers(cmd.Context(), prm)
		commonCmd.ExitOnErr(cmd, "rpc error: %w", err)


@ -20,7 +20,7 @@ var containerNodesCmd = &cobra.Command{
Short: "Show nodes for container", Short: "Show nodes for container",
Long: "Show nodes taking part in a container at the current epoch.", Long: "Show nodes taking part in a container at the current epoch.",
Run: func(cmd *cobra.Command, args []string) { Run: func(cmd *cobra.Command, args []string) {
var cnr, pkey = getContainer(cmd) cnr, pkey := getContainer(cmd)
if pkey == nil { if pkey == nil {
pkey = key.GetOrGenerate(cmd) pkey = key.GetOrGenerate(cmd)


@ -1,7 +1,6 @@
package container

import (
-	"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/common"
	"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
	"github.com/spf13/cobra"
)
@ -16,9 +15,7 @@ var Cmd = &cobra.Command{
		// the viper before execution
		commonflags.Bind(cmd)
		commonflags.BindAPI(cmd)
-
-		common.StartClientCommandSpan(cmd)
	},
-	PersistentPostRun: common.StopClientCommandSpan,
}
func init() { func init() {


@ -0,0 +1,97 @@
package control
import (
"bytes"
"crypto/sha256"
"encoding/json"
"git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/rpc/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/modules/util"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
apechain "git.frostfs.info/TrueCloudLab/policy-engine/pkg/chain"
"github.com/spf13/cobra"
)
const (
ruleFlag = "rule"
)
var addRuleCmd = &cobra.Command{
Use: "add-rule",
Short: "Add local override",
Long: "Add local APE rule to a node with following format:\n<action>[:action_detail] <operation> [<condition1> ...] <resource>",
Example: `allow Object.Get *
deny Object.Get EbxzAdz5LB4uqxuz6crWKAumBNtZyK2rKsqQP7TdZvwr/*
deny:QuotaLimitReached Object.Put Object.Resource:Department=HR *
`,
Run: addRule,
}
func prettyJSONFormat(cmd *cobra.Command, serializedChain []byte) string {
wr := bytes.NewBufferString("")
err := json.Indent(wr, serializedChain, "", " ")
commonCmd.ExitOnErr(cmd, "%w", err)
return wr.String()
}
func addRule(cmd *cobra.Command, _ []string) {
pk := key.Get(cmd)
chainID, _ := cmd.Flags().GetString(chainIDFlag)
var cnr cid.ID
cidStr, _ := cmd.Flags().GetString(commonflags.CIDFlag)
commonCmd.ExitOnErr(cmd, "can't decode container ID: %w", cnr.DecodeString(cidStr))
rawCID := make([]byte, sha256.Size)
cnr.Encode(rawCID)
rule, _ := cmd.Flags().GetString(ruleFlag)
chain := new(apechain.Chain)
commonCmd.ExitOnErr(cmd, "parser error: %w", util.ParseAPEChain(chain, []string{rule}))
chain.ID = apechain.ID(chainID)
serializedChain := chain.Bytes()
cmd.Println("Container ID: " + cidStr)
cmd.Println("Parsed chain:\n" + prettyJSONFormat(cmd, serializedChain))
req := &control.AddChainLocalOverrideRequest{
Body: &control.AddChainLocalOverrideRequest_Body{
Target: &control.ChainTarget{
Type: control.ChainTarget_CONTAINER,
Name: cidStr,
},
Chain: serializedChain,
},
}
signRequest(cmd, pk, req)
cli := getClient(cmd, pk)
var resp *control.AddChainLocalOverrideResponse
var err error
err = cli.ExecRaw(func(client *client.Client) error {
resp, err = control.AddChainLocalOverride(client, req)
return err
})
commonCmd.ExitOnErr(cmd, "rpc error: %w", err)
verifyResponse(cmd, resp.GetSignature(), resp.GetBody())
cmd.Println("Rule has been added. Chain id: ", resp.GetBody().GetChainId())
}
func initControlAddRuleCmd() {
initControlFlags(addRuleCmd)
ff := addRuleCmd.Flags()
ff.String(commonflags.CIDFlag, "", commonflags.CIDFlagUsage)
ff.String(ruleFlag, "", "Rule statement")
ff.String(chainIDFlag, "", "Assign ID to the parsed chain")
}


@ -220,12 +220,12 @@ func appendEstimation(sb *strings.Builder, resp *control.GetShardEvacuationStatu
	if resp.GetBody().GetStatus() != control.GetShardEvacuationStatusResponse_Body_RUNNING ||
		resp.GetBody().GetDuration() == nil ||
		resp.GetBody().GetTotal() == 0 ||
-		resp.GetBody().GetEvacuated()+resp.GetBody().GetFailed() == 0 {
+		resp.GetBody().GetEvacuated()+resp.GetBody().GetFailed()+resp.GetBody().GetSkipped() == 0 {
		return
	}

	durationSeconds := float64(resp.GetBody().GetDuration().GetSeconds())
-	evacuated := float64(resp.GetBody().GetEvacuated() + resp.GetBody().GetFailed())
+	evacuated := float64(resp.GetBody().GetEvacuated() + resp.GetBody().GetFailed() + resp.GetBody().GetSkipped())

	avgObjEvacuationTimeSeconds := durationSeconds / evacuated
	objectsLeft := float64(resp.GetBody().GetTotal()) - evacuated
	leftSeconds := avgObjEvacuationTimeSeconds * objectsLeft
@ -252,8 +252,8 @@ func appendStartedAt(sb *strings.Builder, resp *control.GetShardEvacuationStatus
}

func appendError(sb *strings.Builder, resp *control.GetShardEvacuationStatusResponse) {
-	if len(resp.Body.GetErrorMessage()) > 0 {
-		sb.WriteString(fmt.Sprintf(" Error: %s.", resp.Body.GetErrorMessage()))
+	if len(resp.GetBody().GetErrorMessage()) > 0 {
+		sb.WriteString(fmt.Sprintf(" Error: %s.", resp.GetBody().GetErrorMessage()))
	}
}
@ -285,10 +285,11 @@ func appendShardIDs(sb *strings.Builder, resp *control.GetShardEvacuationStatusR
}

func appendCounts(sb *strings.Builder, resp *control.GetShardEvacuationStatusResponse) {
-	sb.WriteString(fmt.Sprintf(" Evacuated %d object out of %d, failed to evacuate %d objects.",
+	sb.WriteString(fmt.Sprintf(" Evacuated %d objects out of %d, failed to evacuate: %d, skipped: %d.",
		resp.GetBody().GetEvacuated(),
-		resp.Body.GetTotal(),
-		resp.Body.GetFailed()))
+		resp.GetBody().GetTotal(),
+		resp.GetBody().GetFailed(),
+		resp.GetBody().GetSkipped()))
}
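With skipped objects now counted as processed, the ETA produced by appendEstimation shifts accordingly. A worked example with made-up numbers, following the formula visible above:

```go
package main

import "fmt"

func main() {
	// Hypothetical status: 120s elapsed, 40 evacuated, 5 failed, 5 skipped, 100 total.
	duration := 120.0
	processed := 40.0 + 5.0 + 5.0 // evacuated + failed + skipped
	total := 100.0

	avg := duration / processed       // 2.4 s per object
	left := (total - processed) * avg // 50 objects * 2.4 s = 120 s remaining
	fmt.Printf("avg %.1f s/object, ~%.0f s left\n", avg, left)
}
```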
func initControlEvacuationShardCmd() { func initControlEvacuationShardCmd() {


@ -0,0 +1,72 @@
package control
import (
"crypto/sha256"
"git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/rpc/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
apechain "git.frostfs.info/TrueCloudLab/policy-engine/pkg/chain"
"github.com/spf13/cobra"
)
var getRuleCmd = &cobra.Command{
Use: "get-rule",
Short: "Get local override",
Long: "Get local APE override of the node",
Run: getRule,
}
func getRule(cmd *cobra.Command, _ []string) {
pk := key.Get(cmd)
var cnr cid.ID
cidStr, _ := cmd.Flags().GetString(commonflags.CIDFlag)
commonCmd.ExitOnErr(cmd, "can't decode container ID: %w", cnr.DecodeString(cidStr))
rawCID := make([]byte, sha256.Size)
cnr.Encode(rawCID)
chainID, _ := cmd.Flags().GetString(chainIDFlag)
req := &control.GetChainLocalOverrideRequest{
Body: &control.GetChainLocalOverrideRequest_Body{
Target: &control.ChainTarget{
Name: cidStr,
Type: control.ChainTarget_CONTAINER,
},
ChainId: chainID,
},
}
signRequest(cmd, pk, req)
cli := getClient(cmd, pk)
var resp *control.GetChainLocalOverrideResponse
var err error
err = cli.ExecRaw(func(client *client.Client) error {
resp, err = control.GetChainLocalOverride(client, req)
return err
})
commonCmd.ExitOnErr(cmd, "rpc error: %w", err)
verifyResponse(cmd, resp.GetSignature(), resp.GetBody())
var chain apechain.Chain
commonCmd.ExitOnErr(cmd, "decode error: %w", chain.DecodeBytes(resp.GetBody().GetChain()))
// TODO (aarifullin): make pretty-formatted output for chains.
cmd.Println("Parsed chain:\n" + prettyJSONFormat(cmd, chain.Bytes()))
}
func initControGetRuleCmd() {
initControlFlags(getRuleCmd)
ff := getRuleCmd.Flags()
ff.String(commonflags.CIDFlag, "", commonflags.CIDFlagUsage)
ff.String(chainIDFlag, "", "Chain id")
}


@ -1,6 +1,9 @@
package control

-import "github.com/spf13/cobra"
+import (
+	commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
+	"github.com/spf13/cobra"
+)

var irCmd = &cobra.Command{
	Use: "ir",
@ -12,8 +15,20 @@ func initControlIRCmd() {
	irCmd.AddCommand(tickEpochCmd)
	irCmd.AddCommand(removeNodeCmd)
	irCmd.AddCommand(irHealthCheckCmd)
+	irCmd.AddCommand(removeContainerCmd)

	initControlIRTickEpochCmd()
	initControlIRRemoveNodeCmd()
	initControlIRHealthCheckCmd()
+	initControlIRRemoveContainerCmd()
+}
+
+func printVUB(cmd *cobra.Command, vub uint32) {
+	cmd.Printf("Transaction's valid until block is %d\n", vub)
+}
+
+func parseVUB(cmd *cobra.Command) uint32 {
+	vub, err := cmd.Flags().GetUint32(irFlagNameVUB)
+	commonCmd.ExitOnErr(cmd, "invalid valid until block value: %w", err)
+	return vub
}


@ -0,0 +1,93 @@
package control
import (
"git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/refs"
rawclient "git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/rpc/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
ircontrol "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control/ir"
ircontrolsrv "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control/ir/server"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/user"
"github.com/spf13/cobra"
)
const (
ownerFlag = "owner"
)
var removeContainerCmd = &cobra.Command{
Use: "remove-container",
Short: "Schedules a container removal",
Long: `Schedules a container removal via a notary request.
Container data will be deleted asynchronously by policer.
To check removal status "frostfs-cli container list" command can be used.`,
Run: removeContainer,
}
func initControlIRRemoveContainerCmd() {
initControlIRFlags(removeContainerCmd)
flags := removeContainerCmd.Flags()
flags.String(commonflags.CIDFlag, "", commonflags.CIDFlagUsage)
flags.String(ownerFlag, "", "Container owner's wallet address.")
removeContainerCmd.MarkFlagsMutuallyExclusive(commonflags.CIDFlag, ownerFlag)
removeContainerCmd.MarkFlagsOneRequired(commonflags.CIDFlag, ownerFlag)
}
func removeContainer(cmd *cobra.Command, _ []string) {
req := prepareRemoveContainerRequest(cmd)
pk := key.Get(cmd)
c := getClient(cmd, pk)
commonCmd.ExitOnErr(cmd, "could not sign request: %w", ircontrolsrv.SignMessage(pk, req))
var resp *ircontrol.RemoveContainerResponse
err := c.ExecRaw(func(client *rawclient.Client) error {
var err error
resp, err = ircontrol.RemoveContainer(client, req)
return err
})
commonCmd.ExitOnErr(cmd, "failed to execute request: %w", err)
verifyResponse(cmd, resp.GetSignature(), resp.GetBody())
if len(req.GetBody().GetContainerId()) > 0 {
cmd.Println("Container scheduled to removal")
} else {
cmd.Println("User containers sheduled to removal")
}
printVUB(cmd, resp.GetBody().GetVub())
}
func prepareRemoveContainerRequest(cmd *cobra.Command) *ircontrol.RemoveContainerRequest {
req := &ircontrol.RemoveContainerRequest{
Body: &ircontrol.RemoveContainerRequest_Body{},
}
cidStr, err := cmd.Flags().GetString(commonflags.CIDFlag)
commonCmd.ExitOnErr(cmd, "failed to get cid: ", err)
ownerStr, err := cmd.Flags().GetString(ownerFlag)
commonCmd.ExitOnErr(cmd, "failed to get owner: ", err)
if len(ownerStr) > 0 {
var owner user.ID
commonCmd.ExitOnErr(cmd, "invalid owner ID: %w", owner.DecodeString(ownerStr))
var ownerID refs.OwnerID
owner.WriteToV2(&ownerID)
req.Body.Owner = ownerID.StableMarshal(nil)
}
if len(cidStr) > 0 {
var containerID cid.ID
commonCmd.ExitOnErr(cmd, "invalid container ID: %w", containerID.DecodeString(cidStr))
req.Body.ContainerId = containerID[:]
}
req.Body.Vub = parseVUB(cmd)
return req
}


@ -20,7 +20,7 @@ var removeNodeCmd = &cobra.Command{
} }
func initControlIRRemoveNodeCmd() { func initControlIRRemoveNodeCmd() {
initControlFlags(removeNodeCmd) initControlIRFlags(removeNodeCmd)
flags := removeNodeCmd.Flags() flags := removeNodeCmd.Flags()
flags.String("node", "", "Node public key as a hex string") flags.String("node", "", "Node public key as a hex string")
@ -41,6 +41,7 @@ func removeNode(cmd *cobra.Command, _ []string) {
req := new(ircontrol.RemoveNodeRequest) req := new(ircontrol.RemoveNodeRequest)
req.SetBody(&ircontrol.RemoveNodeRequest_Body{ req.SetBody(&ircontrol.RemoveNodeRequest_Body{
Key: nodeKey, Key: nodeKey,
Vub: parseVUB(cmd),
}) })
commonCmd.ExitOnErr(cmd, "could not sign request: %w", ircontrolsrv.SignMessage(pk, req)) commonCmd.ExitOnErr(cmd, "could not sign request: %w", ircontrolsrv.SignMessage(pk, req))
@ -55,4 +56,5 @@ func removeNode(cmd *cobra.Command, _ []string) {
verifyResponse(cmd, resp.GetSignature(), resp.GetBody()) verifyResponse(cmd, resp.GetSignature(), resp.GetBody())
cmd.Println("Node removed") cmd.Println("Node removed")
printVUB(cmd, resp.GetBody().GetVub())
} }


@ -17,7 +17,7 @@ var tickEpochCmd = &cobra.Command{
} }
func initControlIRTickEpochCmd() { func initControlIRTickEpochCmd() {
initControlFlags(tickEpochCmd) initControlIRFlags(tickEpochCmd)
} }
func tickEpoch(cmd *cobra.Command, _ []string) { func tickEpoch(cmd *cobra.Command, _ []string) {
@ -25,7 +25,9 @@ func tickEpoch(cmd *cobra.Command, _ []string) {
c := getClient(cmd, pk) c := getClient(cmd, pk)
req := new(ircontrol.TickEpochRequest) req := new(ircontrol.TickEpochRequest)
req.SetBody(new(ircontrol.TickEpochRequest_Body)) req.SetBody(&ircontrol.TickEpochRequest_Body{
Vub: parseVUB(cmd),
})
err := ircontrolsrv.SignMessage(pk, req) err := ircontrolsrv.SignMessage(pk, req)
commonCmd.ExitOnErr(cmd, "could not sign request: %w", err) commonCmd.ExitOnErr(cmd, "could not sign request: %w", err)
@ -40,4 +42,5 @@ func tickEpoch(cmd *cobra.Command, _ []string) {
verifyResponse(cmd, resp.GetSignature(), resp.GetBody()) verifyResponse(cmd, resp.GetSignature(), resp.GetBody())
cmd.Println("Epoch tick requested") cmd.Println("Epoch tick requested")
printVUB(cmd, resp.GetBody().GetVub())
} }


@ -0,0 +1,75 @@
package control
import (
"crypto/sha256"
"git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/rpc/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
apechain "git.frostfs.info/TrueCloudLab/policy-engine/pkg/chain"
"github.com/spf13/cobra"
)
var listRulesCmd = &cobra.Command{
Use: "list-rules",
Short: "List local overrides",
Long: "List local APE overrides of the node",
Run: listRules,
}
func listRules(cmd *cobra.Command, _ []string) {
pk := key.Get(cmd)
var cnr cid.ID
cidStr, _ := cmd.Flags().GetString(commonflags.CIDFlag)
commonCmd.ExitOnErr(cmd, "can't decode container ID: %w", cnr.DecodeString(cidStr))
rawCID := make([]byte, sha256.Size)
cnr.Encode(rawCID)
req := &control.ListChainLocalOverridesRequest{
Body: &control.ListChainLocalOverridesRequest_Body{
Target: &control.ChainTarget{
Name: cidStr,
Type: control.ChainTarget_CONTAINER,
},
},
}
signRequest(cmd, pk, req)
cli := getClient(cmd, pk)
var resp *control.ListChainLocalOverridesResponse
var err error
err = cli.ExecRaw(func(client *client.Client) error {
resp, err = control.ListChainLocalOverrides(client, req)
return err
})
commonCmd.ExitOnErr(cmd, "rpc error: %w", err)
verifyResponse(cmd, resp.GetSignature(), resp.GetBody())
chains := resp.GetBody().GetChains()
if len(chains) == 0 {
cmd.Println("Local overrides are not defined for the container.")
return
}
for _, c := range chains {
// TODO (aarifullin): make pretty-formatted output for chains.
var chain apechain.Chain
commonCmd.ExitOnErr(cmd, "decode error: %w", chain.DecodeBytes(c))
cmd.Println("Parsed chain:\n" + prettyJSONFormat(cmd, chain.Bytes()))
}
}
func initControlListRulesCmd() {
initControlFlags(listRulesCmd)
ff := listRulesCmd.Flags()
ff.String(commonflags.CIDFlag, "", commonflags.CIDFlagUsage)
}


@ -0,0 +1,75 @@
package control
import (
"crypto/sha256"
"git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/rpc/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
"github.com/spf13/cobra"
)
const (
chainIDFlag = "chain-id"
)
var removeRuleCmd = &cobra.Command{
Use: "remove-rule",
Short: "Remove local override",
Long: "Remove local APE override of the node",
Run: removeRule,
}
func removeRule(cmd *cobra.Command, _ []string) {
pk := key.Get(cmd)
var cnr cid.ID
cidStr, _ := cmd.Flags().GetString(commonflags.CIDFlag)
commonCmd.ExitOnErr(cmd, "can't decode container ID: %w", cnr.DecodeString(cidStr))
rawCID := make([]byte, sha256.Size)
cnr.Encode(rawCID)
chainID, _ := cmd.Flags().GetString(chainIDFlag)
req := &control.RemoveChainLocalOverrideRequest{
Body: &control.RemoveChainLocalOverrideRequest_Body{
Target: &control.ChainTarget{
Name: cidStr,
Type: control.ChainTarget_CONTAINER,
},
ChainId: chainID,
},
}
signRequest(cmd, pk, req)
cli := getClient(cmd, pk)
var resp *control.RemoveChainLocalOverrideResponse
var err error
err = cli.ExecRaw(func(client *client.Client) error {
resp, err = control.RemoveChainLocalOverride(client, req)
return err
})
commonCmd.ExitOnErr(cmd, "rpc error: %w", err)
verifyResponse(cmd, resp.GetSignature(), resp.GetBody())
if resp.GetBody().GetRemoved() {
cmd.Println("Rule has been removed.")
} else {
cmd.Println("Rule has not been removed.")
}
}
func initControlRemoveRuleCmd() {
initControlFlags(removeRuleCmd)
ff := removeRuleCmd.Flags()
ff.String(commonflags.CIDFlag, "", commonflags.CIDFlagUsage)
ff.String(chainIDFlag, "", "Chain id")
}


@ -34,6 +34,10 @@ func init() {
shardsCmd, shardsCmd,
synchronizeTreeCmd, synchronizeTreeCmd,
irCmd, irCmd,
addRuleCmd,
removeRuleCmd,
listRulesCmd,
getRuleCmd,
) )
initControlHealthCheckCmd() initControlHealthCheckCmd()
@ -42,4 +46,8 @@ func init() {
initControlShardsCmd() initControlShardsCmd()
initControlSynchronizeTreeCmd() initControlSynchronizeTreeCmd()
initControlIRCmd() initControlIRCmd()
initControlAddRuleCmd()
initControlRemoveRuleCmd()
initControlListRulesCmd()
initControGetRuleCmd()
} }


@ -65,7 +65,7 @@ func prettyPrintShardsJSON(cmd *cobra.Command, ii []*control.ShardInfo) {
out := make([]map[string]any, 0, len(ii)) out := make([]map[string]any, 0, len(ii))
for _, i := range ii { for _, i := range ii {
out = append(out, map[string]any{ out = append(out, map[string]any{
"shard_id": base58.Encode(i.Shard_ID), "shard_id": base58.Encode(i.GetShard_ID()),
"mode": shardModeToString(i.GetMode()), "mode": shardModeToString(i.GetMode()),
"metabase": i.GetMetabasePath(), "metabase": i.GetMetabasePath(),
"blobstor": i.GetBlobstor(), "blobstor": i.GetBlobstor(),
@ -105,7 +105,7 @@ func prettyPrintShards(cmd *cobra.Command, ii []*control.ShardInfo) {
pathPrinter("Write-cache", i.GetWritecachePath())+ pathPrinter("Write-cache", i.GetWritecachePath())+
pathPrinter("Pilorama", i.GetPiloramaPath())+ pathPrinter("Pilorama", i.GetPiloramaPath())+
fmt.Sprintf("Error count: %d\n", i.GetErrorCount()), fmt.Sprintf("Error count: %d\n", i.GetErrorCount()),
base58.Encode(i.Shard_ID), base58.Encode(i.GetShard_ID()),
shardModeToString(i.GetMode()), shardModeToString(i.GetMode()),
) )
} }
@ -122,6 +122,6 @@ func shardModeToString(m control.ShardMode) string {
func sortShardsByID(ii []*control.ShardInfo) { func sortShardsByID(ii []*control.ShardInfo) {
sort.Slice(ii, func(i, j int) bool { sort.Slice(ii, func(i, j int) bool {
return bytes.Compare(ii[i].Shard_ID, ii[j].Shard_ID) < 0 return bytes.Compare(ii[i].GetShard_ID(), ii[j].GetShard_ID()) < 0
}) })
} }


@ -14,6 +14,10 @@ import (
"github.com/spf13/cobra" "github.com/spf13/cobra"
) )
const (
irFlagNameVUB = "vub"
)
func initControlFlags(cmd *cobra.Command) { func initControlFlags(cmd *cobra.Command) {
ff := cmd.Flags() ff := cmd.Flags()
ff.StringP(commonflags.WalletPath, commonflags.WalletPathShorthand, commonflags.WalletPathDefault, commonflags.WalletPathUsage) ff.StringP(commonflags.WalletPath, commonflags.WalletPathShorthand, commonflags.WalletPathDefault, commonflags.WalletPathUsage)
@ -22,6 +26,13 @@ func initControlFlags(cmd *cobra.Command) {
ff.DurationP(commonflags.Timeout, commonflags.TimeoutShorthand, commonflags.TimeoutDefault, commonflags.TimeoutUsage) ff.DurationP(commonflags.Timeout, commonflags.TimeoutShorthand, commonflags.TimeoutDefault, commonflags.TimeoutUsage)
} }
func initControlIRFlags(cmd *cobra.Command) {
initControlFlags(cmd)
ff := cmd.Flags()
ff.Uint32(irFlagNameVUB, 0, "Valid until block value for notary transaction")
}
func signRequest(cmd *cobra.Command, pk *ecdsa.PrivateKey, req controlSvc.SignedMessage) { func signRequest(cmd *cobra.Command, pk *ecdsa.PrivateKey, req controlSvc.SignedMessage) {
err := controlSvc.SignMessage(pk, req) err := controlSvc.SignMessage(pk, req)
commonCmd.ExitOnErr(cmd, "could not sign request: %w", err) commonCmd.ExitOnErr(cmd, "could not sign request: %w", err)


@ -1,7 +1,6 @@
package netmap package netmap
import ( import (
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags" "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
"github.com/spf13/cobra" "github.com/spf13/cobra"
) )
@ -15,9 +14,7 @@ var Cmd = &cobra.Command{
// the viper before execution // the viper before execution
commonflags.Bind(cmd) commonflags.Bind(cmd)
commonflags.BindAPI(cmd) commonflags.BindAPI(cmd)
common.StartClientCommandSpan(cmd)
}, },
PersistentPostRun: common.StopClientCommandSpan,
} }
func init() { func init() {


@ -132,7 +132,7 @@ func createOutWriter(cmd *cobra.Command, filename string) (out io.Writer, closer
out = os.Stdout out = os.Stdout
closer = func() {} closer = func() {}
} else { } else {
f, err := os.OpenFile(filename, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0644) f, err := os.OpenFile(filename, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o644)
if err != nil { if err != nil {
commonCmd.ExitOnErr(cmd, "", fmt.Errorf("can't open file '%s': %w", filename, err)) commonCmd.ExitOnErr(cmd, "", fmt.Errorf("can't open file '%s': %w", filename, err))
} }


@ -94,7 +94,7 @@ var objectLockCmd = &cobra.Command{
obj := objectSDK.New() obj := objectSDK.New()
obj.SetContainerID(cnr) obj.SetContainerID(cnr)
obj.SetOwnerID(&idOwner) obj.SetOwnerID(idOwner)
obj.SetType(objectSDK.TypeLock) obj.SetType(objectSDK.TypeLock)
obj.SetAttributes(expirationAttr) obj.SetAttributes(expirationAttr)
obj.SetPayload(lock.Marshal()) obj.SetPayload(lock.Marshal())


@ -31,10 +31,10 @@ const (
) )
type objectNodesInfo struct { type objectNodesInfo struct {
containerID cid.ID containerID cid.ID
objectID oid.ID objectID oid.ID
relatedObjectIDs []oid.ID relatedObjectIDs []oid.ID
isLock bool isLockOrTombstone bool
} }
type boolError struct { type boolError struct {
@ -101,9 +101,9 @@ func getObjectInfo(cmd *cobra.Command, cnrID cid.ID, objID oid.ID, cli *client.C
res, err := internalclient.HeadObject(cmd.Context(), prmHead) res, err := internalclient.HeadObject(cmd.Context(), prmHead)
if err == nil { if err == nil {
return &objectNodesInfo{ return &objectNodesInfo{
containerID: cnrID, containerID: cnrID,
objectID: objID, objectID: objID,
isLock: res.Header().Type() == objectSDK.TypeLock, isLockOrTombstone: res.Header().Type() == objectSDK.TypeLock || res.Header().Type() == objectSDK.TypeTombstone,
} }
} }
@ -191,7 +191,7 @@ func getRequiredPlacement(cmd *cobra.Command, objInfo *objectNodesInfo, placemen
numOfReplicas := placementPolicy.ReplicaNumberByIndex(repIdx) numOfReplicas := placementPolicy.ReplicaNumberByIndex(repIdx)
var nodeIdx uint32 var nodeIdx uint32
for _, n := range rep { for _, n := range rep {
if !objInfo.isLock && nodeIdx == numOfReplicas { //lock object should be on all container nodes if !objInfo.isLockOrTombstone && nodeIdx == numOfReplicas { // lock and tombstone objects should be on all container nodes
break break
} }
nodes[n.Hash()] = n nodes[n.Hash()] = n
@ -213,7 +213,8 @@ func getRequiredPlacement(cmd *cobra.Command, objInfo *objectNodesInfo, placemen
} }
func getActualPlacement(cmd *cobra.Command, netmap *netmapSDK.NetMap, requiredPlacement map[uint64]netmapSDK.NodeInfo, func getActualPlacement(cmd *cobra.Command, netmap *netmapSDK.NetMap, requiredPlacement map[uint64]netmapSDK.NodeInfo,
pk *ecdsa.PrivateKey, objInfo *objectNodesInfo) map[uint64]boolError { pk *ecdsa.PrivateKey, objInfo *objectNodesInfo,
) map[uint64]boolError {
result := make(map[uint64]boolError) result := make(map[uint64]boolError)
resultMtx := &sync.Mutex{} resultMtx := &sync.Mutex{}


@ -93,7 +93,7 @@ func putObject(cmd *cobra.Command, _ []string) {
attrs := getAllObjectAttributes(cmd) attrs := getAllObjectAttributes(cmd)
obj.SetContainerID(cnr) obj.SetContainerID(cnr)
obj.SetOwnerID(&ownerID) obj.SetOwnerID(ownerID)
obj.SetAttributes(attrs...) obj.SetAttributes(attrs...)
notificationInfo, err := parseObjectNotifications(cmd) notificationInfo, err := parseObjectNotifications(cmd)
@ -160,7 +160,7 @@ func readFilePayload(filename string, cmd *cobra.Command) (io.Reader, cid.ID, us
commonCmd.ExitOnErr(cmd, "can't unmarshal object from given file: %w", objTemp.Unmarshal(buf)) commonCmd.ExitOnErr(cmd, "can't unmarshal object from given file: %w", objTemp.Unmarshal(buf))
payloadReader := bytes.NewReader(objTemp.Payload()) payloadReader := bytes.NewReader(objTemp.Payload())
cnr, _ := objTemp.ContainerID() cnr, _ := objTemp.ContainerID()
ownerID := *objTemp.OwnerID() ownerID := objTemp.OwnerID()
return payloadReader, cnr, ownerID return payloadReader, cnr, ownerID
} }


@ -1,7 +1,6 @@
package object package object
import ( import (
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags" "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
"github.com/spf13/cobra" "github.com/spf13/cobra"
) )
@ -16,9 +15,7 @@ var Cmd = &cobra.Command{
// the viper before execution // the viper before execution
commonflags.Bind(cmd) commonflags.Bind(cmd)
commonflags.BindAPI(cmd) commonflags.BindAPI(cmd)
common.StartClientCommandSpan(cmd)
}, },
PersistentPostRun: common.StopClientCommandSpan,
} }
func init() { func init() {
@ -31,7 +28,8 @@ func init() {
objectHashCmd, objectHashCmd,
objectRangeCmd, objectRangeCmd,
objectLockCmd, objectLockCmd,
objectNodesCmd} objectNodesCmd,
}
Cmd.AddCommand(objectChildCommands...) Cmd.AddCommand(objectChildCommands...)


@ -46,6 +46,10 @@ of frostfs-api and some useful utilities for compiling ACL rules from JSON
notation, managing container access through protocol gates, querying network map notation, managing container access through protocol gates, querying network map
and much more!`, and much more!`,
Run: entryPoint, Run: entryPoint,
PersistentPreRun: func(cmd *cobra.Command, _ []string) {
common.StartClientCommandSpan(cmd)
},
PersistentPostRun: common.StopClientCommandSpan,
} }
// Execute adds all child commands to the root command and sets flags appropriately. // Execute adds all child commands to the root command and sets flags appropriately.
@ -57,6 +61,7 @@ func Execute() {
func init() { func init() {
cobra.OnInitialize(initConfig) cobra.OnInitialize(initConfig)
cobra.EnableTraverseRunHooks = true
// use stdout as default output for cmd.Print() // use stdout as default output for cmd.Print()
rootCmd.SetOut(os.Stdout) rootCmd.SetOut(os.Stdout)


@ -6,7 +6,6 @@ import (
"os" "os"
internalclient "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/client" internalclient "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags" "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key" "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common" commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
@ -33,9 +32,7 @@ var createCmd = &cobra.Command{
PersistentPreRun: func(cmd *cobra.Command, args []string) { PersistentPreRun: func(cmd *cobra.Command, args []string) {
_ = viper.BindPFlag(commonflags.WalletPath, cmd.Flags().Lookup(commonflags.WalletPath)) _ = viper.BindPFlag(commonflags.WalletPath, cmd.Flags().Lookup(commonflags.WalletPath))
_ = viper.BindPFlag(commonflags.Account, cmd.Flags().Lookup(commonflags.Account)) _ = viper.BindPFlag(commonflags.Account, cmd.Flags().Lookup(commonflags.Account))
common.StartClientCommandSpan(cmd)
}, },
PersistentPostRun: common.StopClientCommandSpan,
} }
func init() { func init() {
@ -81,7 +78,7 @@ func createSession(cmd *cobra.Command, _ []string) {
} }
filename, _ := cmd.Flags().GetString(outFlag) filename, _ := cmd.Flags().GetString(outFlag)
err = os.WriteFile(filename, data, 0644) err = os.WriteFile(filename, data, 0o644)
commonCmd.ExitOnErr(cmd, "can't write token to file: %w", err) commonCmd.ExitOnErr(cmd, "can't write token to file: %w", err)
} }


@ -74,7 +74,7 @@ func add(cmd *cobra.Command, _ []string) {
resp, err := cli.Add(ctx, req) resp, err := cli.Add(ctx, req)
commonCmd.ExitOnErr(cmd, "failed to cal add: %w", err) commonCmd.ExitOnErr(cmd, "failed to cal add: %w", err)
cmd.Println("Node ID: ", resp.Body.NodeId) cmd.Println("Node ID: ", resp.GetBody().GetNodeId())
} }
func parseMeta(cmd *cobra.Command) ([]*tree.KeyValue, error) { func parseMeta(cmd *cobra.Command) ([]*tree.KeyValue, error) {


@ -0,0 +1,181 @@
package util
import (
"errors"
"fmt"
"strings"
apechain "git.frostfs.info/TrueCloudLab/policy-engine/pkg/chain"
nativeschema "git.frostfs.info/TrueCloudLab/policy-engine/schema/native"
"github.com/flynn-archive/go-shlex"
)
var (
errInvalidStatementFormat = errors.New("invalid statement format")
errInvalidConditionFormat = errors.New("invalid condition format")
errUnknownAction = errors.New("action is not recognized")
errUnknownOperation = errors.New("operation is not recognized")
errUnknownActionDetail = errors.New("action detail is not recognized")
errUnknownBinaryOperator = errors.New("binary operator is not recognized")
errUnknownCondObjectType = errors.New("condition object type is not recognized")
)
// ParseAPEChain parses APE chain rules.
func ParseAPEChain(chain *apechain.Chain, rules []string) error {
if len(rules) == 0 {
return errors.New("no APE rules provided")
}
for _, rule := range rules {
r := new(apechain.Rule)
if err := ParseAPERule(r, rule); err != nil {
return err
}
chain.Rules = append(chain.Rules, *r)
}
return nil
}
// ParseAPERule parses access-policy-engine statement from the following form:
// <action>[:action_detail] <operation> [<condition1> ...] <resource>
//
// Examples:
// deny Object.Put *
// deny:QuotaLimitReached Object.Put *
// allow Object.Put *
// allow Object.Get Object.Resource:Department=HR Object.Request:Actor=ownerA *
//
//nolint:godot
func ParseAPERule(r *apechain.Rule, rule string) error {
lexemes, err := shlex.Split(rule)
if err != nil {
return fmt.Errorf("can't parse rule '%s': %v", rule, err)
}
return parseRuleLexemes(r, lexemes)
}
func parseRuleLexemes(r *apechain.Rule, lexemes []string) error {
if len(lexemes) < 2 {
return errInvalidStatementFormat
}
var err error
r.Status, err = parseStatus(lexemes[0])
if err != nil {
return err
}
r.Actions, err = parseAction(lexemes[1])
if err != nil {
return err
}
r.Condition, err = parseConditions(lexemes[2 : len(lexemes)-1])
if err != nil {
return err
}
r.Resources, err = parseResource(lexemes[len(lexemes)-1])
return err
}
func parseStatus(lexeme string) (apechain.Status, error) {
action, expression, found := strings.Cut(lexeme, ":")
switch action = strings.ToLower(action); action {
case "deny":
if !found {
return apechain.AccessDenied, nil
} else if strings.EqualFold(expression, "QuotaLimitReached") {
return apechain.QuotaLimitReached, nil
} else {
return 0, fmt.Errorf("%w: %s", errUnknownActionDetail, expression)
}
case "allow":
if found {
return 0, errUnknownActionDetail
}
return apechain.Allow, nil
default:
return 0, errUnknownAction
}
}
func parseAction(lexeme string) (apechain.Actions, error) {
switch strings.ToLower(lexeme) {
case "object.put":
return apechain.Actions{Names: []string{nativeschema.MethodPutObject}}, nil
case "object.get":
return apechain.Actions{Names: []string{nativeschema.MethodGetObject}}, nil
case "object.head":
return apechain.Actions{Names: []string{nativeschema.MethodHeadObject}}, nil
case "object.delete":
return apechain.Actions{Names: []string{nativeschema.MethodDeleteObject}}, nil
case "object.search":
return apechain.Actions{Names: []string{nativeschema.MethodSearchObject}}, nil
case "object.range":
return apechain.Actions{Names: []string{nativeschema.MethodRangeObject}}, nil
case "object.hash":
return apechain.Actions{Names: []string{nativeschema.MethodHashObject}}, nil
default:
}
return apechain.Actions{}, fmt.Errorf("%w: %s", errUnknownOperation, lexeme)
}
func parseResource(lexeme string) (apechain.Resources, error) {
if lexeme == "*" {
return apechain.Resources{Names: []string{nativeschema.ResourceFormatRootObjects}}, nil
}
return apechain.Resources{Names: []string{fmt.Sprintf(nativeschema.ResourceFormatRootContainerObjects, lexeme)}}, nil
}
const (
ObjectResource = "object.resource"
ObjectRequest = "object.request"
)
var typeToCondObject = map[string]apechain.ObjectType{
ObjectResource: apechain.ObjectResource,
ObjectRequest: apechain.ObjectRequest,
}
func parseConditions(lexemes []string) ([]apechain.Condition, error) {
conds := make([]apechain.Condition, 0)
for _, lexeme := range lexemes {
typ, expression, found := strings.Cut(lexeme, ":")
typ = strings.ToLower(typ)
objType, ok := typeToCondObject[typ]
if ok {
if !found {
return nil, fmt.Errorf("%w: %s", errInvalidConditionFormat, lexeme)
}
var lhs, rhs string
var binExpFound bool
var cond apechain.Condition
cond.Object = objType
lhs, rhs, binExpFound = strings.Cut(expression, "!=")
if !binExpFound {
lhs, rhs, binExpFound = strings.Cut(expression, "=")
if !binExpFound {
return nil, fmt.Errorf("%w: %s", errUnknownBinaryOperator, expression)
}
cond.Op = apechain.CondStringEquals
} else {
cond.Op = apechain.CondStringNotEquals
}
cond.Key, cond.Value = lhs, rhs
conds = append(conds, cond)
} else {
return nil, fmt.Errorf("%w: %s", errUnknownCondObjectType, typ)
}
}
return conds, nil
}

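For reference, a minimal sketch of how the new parser might be driven. The main wrapper and the import path of the util package are assumptions for illustration; only ParseAPEChain, the apechain.Chain type, and the rule grammar come from the file above.

package main

import (
	"fmt"

	apechain "git.frostfs.info/TrueCloudLab/policy-engine/pkg/chain"

	// assumed location of the parser package shown above
	"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/modules/util"
)

func main() {
	var chain apechain.Chain
	rules := []string{
		"deny:QuotaLimitReached Object.Put *",
		"allow Object.Get Object.Resource:Department=HR Object.Request:Actor!=ownerA *",
	}
	// ParseAPEChain appends one apechain.Rule per statement, or returns the
	// first invalid-statement/unknown-token error it encounters.
	if err := util.ParseAPEChain(&chain, rules); err != nil {
		fmt.Println("parse error:", err)
		return
	}
	fmt.Printf("parsed %d rules\n", len(chain.Rules))
}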

@ -0,0 +1,130 @@
package util
import (
"testing"
policyengine "git.frostfs.info/TrueCloudLab/policy-engine/pkg/chain"
nativeschema "git.frostfs.info/TrueCloudLab/policy-engine/schema/native"
"github.com/stretchr/testify/require"
)
func TestParseAPERule(t *testing.T) {
tests := [...]struct {
name string
rule string
expectErr error
expectRule policyengine.Rule
}{
{
name: "Valid allow rule",
rule: "allow Object.Put *",
expectRule: policyengine.Rule{
Status: policyengine.Allow,
Actions: policyengine.Actions{Names: []string{nativeschema.MethodPutObject}},
Resources: policyengine.Resources{Names: []string{nativeschema.ResourceFormatRootObjects}},
Condition: []policyengine.Condition{},
},
},
{
name: "Valid deny rule",
rule: "deny Object.Put *",
expectRule: policyengine.Rule{
Status: policyengine.AccessDenied,
Actions: policyengine.Actions{Names: []string{nativeschema.MethodPutObject}},
Resources: policyengine.Resources{Names: []string{nativeschema.ResourceFormatRootObjects}},
Condition: []policyengine.Condition{},
},
},
{
name: "Valid deny rule with action detail",
rule: "deny:QuotaLimitReached Object.Put *",
expectRule: policyengine.Rule{
Status: policyengine.QuotaLimitReached,
Actions: policyengine.Actions{Names: []string{nativeschema.MethodPutObject}},
Resources: policyengine.Resources{Names: []string{nativeschema.ResourceFormatRootObjects}},
Condition: []policyengine.Condition{},
},
},
{
name: "Valid allow rule with conditions",
rule: "allow Object.Get Object.Resource:Department=HR Object.Request:Actor!=ownerA *",
expectRule: policyengine.Rule{
Status: policyengine.Allow,
Actions: policyengine.Actions{Names: []string{nativeschema.MethodGetObject}},
Resources: policyengine.Resources{Names: []string{nativeschema.ResourceFormatRootObjects}},
Condition: []policyengine.Condition{
{
Op: policyengine.CondStringEquals,
Object: policyengine.ObjectResource,
Key: "Department",
Value: "HR",
},
{
Op: policyengine.CondStringNotEquals,
Object: policyengine.ObjectRequest,
Key: "Actor",
Value: "ownerA",
},
},
},
},
{
name: "Valid rule with conditions with action detail",
rule: "deny:QuotaLimitReached Object.Get Object.Resource:Department=HR Object.Request:Actor!=ownerA *",
expectRule: policyengine.Rule{
Status: policyengine.QuotaLimitReached,
Actions: policyengine.Actions{Names: []string{nativeschema.MethodGetObject}},
Resources: policyengine.Resources{Names: []string{nativeschema.ResourceFormatRootObjects}},
Condition: []policyengine.Condition{
{
Op: policyengine.CondStringEquals,
Object: policyengine.ObjectResource,
Key: "Department",
Value: "HR",
},
{
Op: policyengine.CondStringNotEquals,
Object: policyengine.ObjectRequest,
Key: "Actor",
Value: "ownerA",
},
},
},
},
{
name: "Invalid rule with unknown action",
rule: "permit Object.Put *",
expectErr: errUnknownAction,
},
{
name: "Invalid rule with unknown operation",
rule: "allow Object.PutOut *",
expectErr: errUnknownOperation,
},
{
name: "Invalid rule with unknown action detail",
rule: "deny:UnknownActionDetail Object.Put *",
expectErr: errUnknownActionDetail,
},
{
name: "Invalid rule with unknown condition binary operator",
rule: "deny Object.Put Object.Resource:Department<HR *",
expectErr: errUnknownBinaryOperator,
},
{
name: "Invalid rule with unknown condition object type",
rule: "deny Object.Put Object.ResourZe:Department=HR *",
expectErr: errUnknownCondObjectType,
},
}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
r := new(policyengine.Rule)
err := ParseAPERule(r, test.rule)
require.ErrorIs(t, err, test.expectErr)
if test.expectErr == nil {
require.Equal(t, test.expectRule, *r)
}
})
}
}


@ -48,7 +48,7 @@ func convertEACLTable(cmd *cobra.Command, _ []string) {
return return
} }
err = os.WriteFile(to, data, 0644) err = os.WriteFile(to, data, 0o644)
commonCmd.ExitOnErr(cmd, "can't write exteded ACL table to file: %w", err) commonCmd.ExitOnErr(cmd, "can't write exteded ACL table to file: %w", err)
cmd.Printf("extended ACL table was successfully dumped to %s\n", to) cmd.Printf("extended ACL table was successfully dumped to %s\n", to)


@ -78,7 +78,7 @@ func keyerGenerate(filename string, d *keyer.Dashboard) error {
} }
if filename != "" { if filename != "" {
return os.WriteFile(filename, key, 0600) return os.WriteFile(filename, key, 0o600)
} }
return nil return nil


@ -56,7 +56,7 @@ func signBearerToken(cmd *cobra.Command, _ []string) {
return return
} }
err = os.WriteFile(to, data, 0644) err = os.WriteFile(to, data, 0o644)
commonCmd.ExitOnErr(cmd, "can't write signed bearer token to file: %w", err) commonCmd.ExitOnErr(cmd, "can't write signed bearer token to file: %w", err)
cmd.Printf("signed bearer token was successfully dumped to %s\n", to) cmd.Printf("signed bearer token was successfully dumped to %s\n", to)


@ -76,7 +76,7 @@ func signSessionToken(cmd *cobra.Command, _ []string) {
return return
} }
err = os.WriteFile(to, data, 0644) err = os.WriteFile(to, data, 0o644)
if err != nil { if err != nil {
commonCmd.ExitOnErr(cmd, "", fmt.Errorf("can't write signed session token to %s: %w", to, err)) commonCmd.ExitOnErr(cmd, "", fmt.Errorf("can't write signed session token to %s: %w", to, err))
} }


@ -13,7 +13,7 @@ import (
func newConfig() (*viper.Viper, error) { func newConfig() (*viper.Viper, error) {
var err error var err error
var dv = viper.New() dv := viper.New()
defaultConfiguration(dv) defaultConfiguration(dv)


@ -5,7 +5,7 @@ import (
"fmt" "fmt"
common "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-lens/internal" common "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-lens/internal"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobovnicza" "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/blobovniczatree"
meta "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/metabase" meta "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/metabase"
objectSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object" objectSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id" oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
@ -40,7 +40,7 @@ func inspectFunc(cmd *cobra.Command, _ []string) {
common.ExitOnErr(cmd, common.Errf("could not check if the obj is small: %w", err)) common.ExitOnErr(cmd, common.Errf("could not check if the obj is small: %w", err))
if id := resStorageID.StorageID(); id != nil { if id := resStorageID.StorageID(); id != nil {
cmd.Printf("Object storageID: %s\n\n", blobovnicza.NewIDFromBytes(id).String()) cmd.Printf("Object storageID: %s\n\n", blobovniczatree.NewIDFromBytes(id).Path())
} else { } else {
cmd.Printf("Object does not contain storageID\n\n") cmd.Printf("Object does not contain storageID\n\n")
} }


@ -59,7 +59,7 @@ func WriteObjectToFile(cmd *cobra.Command, path string, data []byte) {
} }
ExitOnErr(cmd, Errf("could not write file: %w", ExitOnErr(cmd, Errf("could not write file: %w",
os.WriteFile(path, data, 0644))) os.WriteFile(path, data, 0o644)))
cmd.Printf("\nSaved payload to '%s' file\n", path) cmd.Printf("\nSaved payload to '%s' file\n", path)
} }


@ -2,12 +2,14 @@ package main
import ( import (
"context" "context"
"net"
accountingGRPC "git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/accounting/grpc" accountingGRPC "git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/accounting/grpc"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client/balance" "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client/balance"
accountingTransportGRPC "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/network/transport/accounting/grpc" accountingTransportGRPC "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/network/transport/accounting/grpc"
accountingService "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/accounting" accountingService "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/accounting"
accounting "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/accounting/morph" accounting "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/accounting/morph"
"google.golang.org/grpc"
) )
func initAccountingService(ctx context.Context, c *cfg) { func initAccountingService(ctx context.Context, c *cfg) {
@ -28,7 +30,7 @@ func initAccountingService(ctx context.Context, c *cfg) {
), ),
) )
for _, srv := range c.cfgGRPC.servers { c.cfgGRPC.performAndSave(func(_ string, _ net.Listener, s *grpc.Server) {
accountingGRPC.RegisterAccountingServiceServer(srv, server) accountingGRPC.RegisterAccountingServiceServer(s, server)
} })
} }


@ -6,46 +6,35 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/core/container" "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/core/container"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/core/netmap" "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/core/netmap"
cntClient "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client/container"
putsvc "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/object/put" putsvc "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/object/put"
utilSync "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/sync" utilSync "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/sync"
apistatus "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client/status" apistatus "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client/status"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id" cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
netmapSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/netmap" netmapSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/netmap"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/user"
lru "github.com/hashicorp/golang-lru/v2" lru "github.com/hashicorp/golang-lru/v2"
"github.com/hashicorp/golang-lru/v2/expirable"
) )
type netValueReader[K any, V any] func(K) (V, error) type netValueReader[K any, V any] func(K) (V, error)
type valueWithTime[V any] struct { type valueWithError[V any] struct {
v V v V
t time.Time
// cached error in order to not repeat failed request for some time // cached error in order to not repeat failed request for some time
e error e error
} }
// entity that provides TTL cache interface. // entity that provides TTL cache interface.
type ttlNetCache[K comparable, V any] struct { type ttlNetCache[K comparable, V any] struct {
ttl time.Duration cache *expirable.LRU[K, *valueWithError[V]]
netRdr netValueReader[K, V]
sz int
cache *lru.Cache[K, *valueWithTime[V]]
netRdr netValueReader[K, V]
keyLocker *utilSync.KeyLocker[K] keyLocker *utilSync.KeyLocker[K]
} }
// wraps netValueReader with a TTL caching mechanism. // wraps netValueReader with a TTL caching mechanism.
func newNetworkTTLCache[K comparable, V any](sz int, ttl time.Duration, netRdr netValueReader[K, V]) *ttlNetCache[K, V] { func newNetworkTTLCache[K comparable, V any](sz int, ttl time.Duration, netRdr netValueReader[K, V]) *ttlNetCache[K, V] {
cache, err := lru.New[K, *valueWithTime[V]](sz) cache := expirable.NewLRU[K, *valueWithError[V]](sz, nil, ttl)
fatalOnErr(err)
return &ttlNetCache[K, V]{ return &ttlNetCache[K, V]{
ttl: ttl,
sz: sz,
cache: cache, cache: cache,
netRdr: netRdr, netRdr: netRdr,
keyLocker: utilSync.NewKeyLocker[K](), keyLocker: utilSync.NewKeyLocker[K](),
@ -59,7 +48,7 @@ func newNetworkTTLCache[K comparable, V any](sz int, ttl time.Duration, netRdr n
// returned value should not be modified. // returned value should not be modified.
func (c *ttlNetCache[K, V]) get(key K) (V, error) { func (c *ttlNetCache[K, V]) get(key K) (V, error) {
val, ok := c.cache.Peek(key) val, ok := c.cache.Peek(key)
if ok && time.Since(val.t) < c.ttl { if ok {
return val.v, val.e return val.v, val.e
} }
@ -67,15 +56,14 @@ func (c *ttlNetCache[K, V]) get(key K) (V, error) {
defer c.keyLocker.Unlock(key) defer c.keyLocker.Unlock(key)
val, ok = c.cache.Peek(key) val, ok = c.cache.Peek(key)
if ok && time.Since(val.t) < c.ttl { if ok {
return val.v, val.e return val.v, val.e
} }
v, err := c.netRdr(key) v, err := c.netRdr(key)
c.cache.Add(key, &valueWithTime[V]{ c.cache.Add(key, &valueWithError[V]{
v: v, v: v,
t: time.Now(),
e: err, e: err,
}) })
@ -86,9 +74,8 @@ func (c *ttlNetCache[K, V]) set(k K, v V, e error) {
c.keyLocker.Lock(k) c.keyLocker.Lock(k)
defer c.keyLocker.Unlock(k) defer c.keyLocker.Unlock(k)
c.cache.Add(k, &valueWithTime[V]{ c.cache.Add(k, &valueWithError[V]{
v: v, v: v,
t: time.Now(),
e: e, e: e,
}) })
} }
@ -244,117 +231,6 @@ func (s *lruNetmapSource) Epoch() (uint64, error) {
return s.netState.CurrentEpoch(), nil return s.netState.CurrentEpoch(), nil
} }
// wrapper over TTL cache of values read from the network
// that implements container lister.
type ttlContainerLister struct {
inner *ttlNetCache[string, *cacheItemContainerList]
client *cntClient.Client
}
// value type for ttlNetCache used by ttlContainerLister.
type cacheItemContainerList struct {
// protects list from concurrent add/remove ops
mtx sync.RWMutex
// actual list of containers owner by the particular user
list []cid.ID
}
func newCachedContainerLister(c *cntClient.Client, ttl time.Duration) ttlContainerLister {
const containerListerCacheSize = 100
lruCnrListerCache := newNetworkTTLCache(containerListerCacheSize, ttl, func(strID string) (*cacheItemContainerList, error) {
var id *user.ID
if strID != "" {
id = new(user.ID)
err := id.DecodeString(strID)
if err != nil {
return nil, err
}
}
list, err := c.ContainersOf(id)
if err != nil {
return nil, err
}
return &cacheItemContainerList{
list: list,
}, nil
})
return ttlContainerLister{inner: lruCnrListerCache, client: c}
}
// List returns list of container IDs from the cache. If list is missing in the
// cache or expired, then it returns container IDs from side chain and updates
// the cache.
func (s ttlContainerLister) List(id *user.ID) ([]cid.ID, error) {
if id == nil {
return s.client.ContainersOf(nil)
}
item, err := s.inner.get(id.EncodeToString())
if err != nil {
return nil, err
}
item.mtx.RLock()
res := make([]cid.ID, len(item.list))
copy(res, item.list)
item.mtx.RUnlock()
return res, nil
}
// updates cached list of owner's containers: cnr is added if flag is true, otherwise it's removed.
// Concurrent calls can lead to some races:
// - two parallel additions to missing owner's cache can lead to only one container to be cached
// - async cache value eviction can lead to idle addition
//
// All described race cases aren't critical since cache values expire anyway, we just try
// to increase cache actuality w/o huge overhead on synchronization.
func (s *ttlContainerLister) update(owner user.ID, cnr cid.ID, add bool) {
strOwner := owner.EncodeToString()
val, ok := s.inner.cache.Peek(strOwner)
if !ok {
// we could cache the single cnr but in this case we will disperse
// with the Sidechain a lot
return
}
if s.inner.ttl <= time.Since(val.t) {
return
}
item := val.v
item.mtx.Lock()
{
found := false
for i := range item.list {
if found = item.list[i].Equals(cnr); found {
if !add {
item.list = append(item.list[:i], item.list[i+1:]...)
// if list became empty we don't remove the value from the cache
// since empty list is a correct value, and we don't want to insta
// re-request it from the Sidechain
}
break
}
}
if add && !found {
item.list = append(item.list, cnr)
}
}
item.mtx.Unlock()
}
type cachedIRFetcher struct { type cachedIRFetcher struct {
*ttlNetCache[struct{}, [][]byte] *ttlNetCache[struct{}, [][]byte]
} }

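The refactor above delegates expiry to hashicorp's expirable LRU instead of storing a timestamp per entry. A minimal sketch of that behaviour, assuming the golang-lru/v2 expirable API referenced in the new imports:

package main

import (
	"fmt"
	"time"

	"github.com/hashicorp/golang-lru/v2/expirable"
)

func main() {
	// Entries are dropped ttl after insertion, so callers no longer need the
	// old valueWithTime.t / time.Since comparison.
	cache := expirable.NewLRU[string, int](10, nil, 50*time.Millisecond)
	cache.Add("epoch", 42)

	if v, ok := cache.Get("epoch"); ok {
		fmt.Println("fresh value:", v) // 42
	}

	time.Sleep(100 * time.Millisecond)
	if _, ok := cache.Get("epoch"); !ok {
		fmt.Println("entry expired") // printed once the TTL has elapsed
	}
}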

@ -0,0 +1,56 @@
package main
import (
"errors"
"testing"
"time"
"github.com/stretchr/testify/require"
)
func TestTTLNetCache(t *testing.T) {
ttlDuration := time.Millisecond * 50
cache := newNetworkTTLCache[string, time.Time](10, ttlDuration, testNetValueReader)
key := "key"
t.Run("Test Add and Get", func(t *testing.T) {
ti := time.Now()
cache.set(key, ti, nil)
val, err := cache.get(key)
require.NoError(t, err)
require.Equal(t, ti, val)
})
t.Run("Test TTL", func(t *testing.T) {
ti := time.Now()
cache.set(key, ti, nil)
time.Sleep(2 * ttlDuration)
val, err := cache.get(key)
require.NoError(t, err)
require.NotEqual(t, val, ti)
})
t.Run("Test Remove", func(t *testing.T) {
ti := time.Now()
cache.set(key, ti, nil)
cache.remove(key)
val, err := cache.get(key)
require.NoError(t, err)
require.NotEqual(t, val, ti)
})
t.Run("Test Cache Error", func(t *testing.T) {
cache.set("error", time.Now(), errors.New("mock error"))
_, err := cache.get("error")
require.Error(t, err)
require.Equal(t, "mock error", err.Error())
})
}
func testNetValueReader(key string) (time.Time, error) {
if key == "error" {
return time.Now(), errors.New("mock error")
}
return time.Now(), nil
}


@ -29,6 +29,7 @@ import (
replicatorconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/replicator" replicatorconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/replicator"
tracingconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/tracing" tracingconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/tracing"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs" "git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/ape/chainbase"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/core/container" "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/core/container"
netmapCore "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/core/netmap" netmapCore "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/core/netmap"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor" "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor"
@ -61,17 +62,21 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/util/response" "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/util/response"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util" "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/logger" "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/logger"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/sdnotify"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/state" "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/state"
"git.frostfs.info/TrueCloudLab/frostfs-observability/logging/lokicore"
"git.frostfs.info/TrueCloudLab/frostfs-observability/tracing" "git.frostfs.info/TrueCloudLab/frostfs-observability/tracing"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/netmap" "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/netmap"
objectSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object" objectSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/user" "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/user"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/version" "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/version"
"git.frostfs.info/TrueCloudLab/policy-engine/pkg/engine/inmemory"
"github.com/nspcc-dev/neo-go/pkg/crypto/keys" "github.com/nspcc-dev/neo-go/pkg/crypto/keys"
neogoutil "github.com/nspcc-dev/neo-go/pkg/util" neogoutil "github.com/nspcc-dev/neo-go/pkg/util"
"github.com/panjf2000/ants/v2" "github.com/panjf2000/ants/v2"
"go.etcd.io/bbolt" "go.etcd.io/bbolt"
"go.uber.org/zap" "go.uber.org/zap"
"go.uber.org/zap/zapcore"
"google.golang.org/grpc" "google.golang.org/grpc"
) )
@ -101,11 +106,15 @@ type applicationConfiguration struct {
shardPoolSize uint32 shardPoolSize uint32
shards []shardCfg shards []shardCfg
lowMem bool lowMem bool
rebuildWorkers uint32
} }
} }
type shardCfg struct { type shardCfg struct {
compress bool compress bool
estimateCompressibility bool
estimateCompressibilityThreshold float64
smallSizeObjectLimit uint64 smallSizeObjectLimit uint64
uncompressableContentType []string uncompressableContentType []string
refillMetabase bool refillMetabase bool
@ -121,10 +130,10 @@ type shardCfg struct {
subStorages []subStorageCfg subStorages []subStorageCfg
gcCfg struct { gcCfg struct {
removerBatchSize int removerBatchSize int
removerSleepInterval time.Duration removerSleepInterval time.Duration
expiredCollectorBatchSize int expiredCollectorBatchSize int
expiredCollectorWorkersCount int expiredCollectorWorkerCount int
} }
writecacheCfg struct { writecacheCfg struct {
@ -176,6 +185,8 @@ type subStorageCfg struct {
width uint64 width uint64
leafWidth uint64 leafWidth uint64
openedCacheSize int openedCacheSize int
initWorkerCount int
initInAdvance bool
} }
// readConfig fills applicationConfiguration with raw configuration values // readConfig fills applicationConfiguration with raw configuration values
@ -207,6 +218,7 @@ func (a *applicationConfiguration) readConfig(c *config.Config) error {
a.EngineCfg.errorThreshold = engineconfig.ShardErrorThreshold(c) a.EngineCfg.errorThreshold = engineconfig.ShardErrorThreshold(c)
a.EngineCfg.shardPoolSize = engineconfig.ShardPoolSize(c) a.EngineCfg.shardPoolSize = engineconfig.ShardPoolSize(c)
a.EngineCfg.lowMem = engineconfig.EngineLowMemoryConsumption(c) a.EngineCfg.lowMem = engineconfig.EngineLowMemoryConsumption(c)
a.EngineCfg.rebuildWorkers = engineconfig.EngineRebuildWorkersCount(c)
return engineconfig.IterateShards(c, false, func(sc *shardconfig.Config) error { return a.updateShardConfig(c, sc) }) return engineconfig.IterateShards(c, false, func(sc *shardconfig.Config) error { return a.updateShardConfig(c, sc) })
} }
@ -217,6 +229,8 @@ func (a *applicationConfiguration) updateShardConfig(c *config.Config, oldConfig
newConfig.refillMetabase = oldConfig.RefillMetabase() newConfig.refillMetabase = oldConfig.RefillMetabase()
newConfig.mode = oldConfig.Mode() newConfig.mode = oldConfig.Mode()
newConfig.compress = oldConfig.Compress() newConfig.compress = oldConfig.Compress()
newConfig.estimateCompressibility = oldConfig.EstimateCompressibility()
newConfig.estimateCompressibilityThreshold = oldConfig.EstimateCompressibilityThreshold()
newConfig.uncompressableContentType = oldConfig.UncompressableContentTypes() newConfig.uncompressableContentType = oldConfig.UncompressableContentTypes()
newConfig.smallSizeObjectLimit = oldConfig.SmallSizeLimit() newConfig.smallSizeObjectLimit = oldConfig.SmallSizeLimit()
@ -249,7 +263,7 @@ func (a *applicationConfiguration) setShardWriteCacheConfig(newConfig *shardCfg,
wc.maxBatchDelay = writeCacheCfg.BoltDB().MaxBatchDelay() wc.maxBatchDelay = writeCacheCfg.BoltDB().MaxBatchDelay()
wc.maxObjSize = writeCacheCfg.MaxObjectSize() wc.maxObjSize = writeCacheCfg.MaxObjectSize()
wc.smallObjectSize = writeCacheCfg.SmallObjectSize() wc.smallObjectSize = writeCacheCfg.SmallObjectSize()
wc.flushWorkerCount = writeCacheCfg.WorkersNumber() wc.flushWorkerCount = writeCacheCfg.WorkerCount()
wc.sizeLimit = writeCacheCfg.SizeLimit() wc.sizeLimit = writeCacheCfg.SizeLimit()
wc.noSync = writeCacheCfg.NoSync() wc.noSync = writeCacheCfg.NoSync()
wc.gcInterval = writeCacheCfg.GCInterval() wc.gcInterval = writeCacheCfg.GCInterval()
@ -291,6 +305,8 @@ func (a *applicationConfiguration) setShardStorageConfig(newConfig *shardCfg, ol
sCfg.width = sub.ShallowWidth() sCfg.width = sub.ShallowWidth()
sCfg.leafWidth = sub.LeafWidth() sCfg.leafWidth = sub.LeafWidth()
sCfg.openedCacheSize = sub.OpenedCacheSize() sCfg.openedCacheSize = sub.OpenedCacheSize()
sCfg.initWorkerCount = sub.InitWorkerCount()
sCfg.initInAdvance = sub.InitInAdvance()
case fstree.Type: case fstree.Type:
sub := fstreeconfig.From((*config.Config)(storagesCfg[i])) sub := fstreeconfig.From((*config.Config)(storagesCfg[i]))
sCfg.depth = sub.Depth() sCfg.depth = sub.Depth()
@ -321,7 +337,7 @@ func (a *applicationConfiguration) setGCConfig(newConfig *shardCfg, oldConfig *s
newConfig.gcCfg.removerBatchSize = gcCfg.RemoverBatchSize() newConfig.gcCfg.removerBatchSize = gcCfg.RemoverBatchSize()
newConfig.gcCfg.removerSleepInterval = gcCfg.RemoverSleepInterval() newConfig.gcCfg.removerSleepInterval = gcCfg.RemoverSleepInterval()
newConfig.gcCfg.expiredCollectorBatchSize = gcCfg.ExpiredCollectorBatchSize() newConfig.gcCfg.expiredCollectorBatchSize = gcCfg.ExpiredCollectorBatchSize()
newConfig.gcCfg.expiredCollectorWorkersCount = gcCfg.ExpiredCollectorWorkersCount() newConfig.gcCfg.expiredCollectorWorkerCount = gcCfg.ExpiredCollectorWorkerCount()
} }
// internals contains application-specific internals that are created // internals contains application-specific internals that are created
@ -346,6 +362,8 @@ type internals struct {
healthStatus *atomic.Int32 healthStatus *atomic.Int32
// is node under maintenance // is node under maintenance
isMaintenance atomic.Bool isMaintenance atomic.Bool
sdNotify bool
} }
// starts node's maintenance. // starts node's maintenance.
@ -408,11 +426,26 @@ type dynamicConfiguration struct {
metrics *httpComponent metrics *httpComponent
} }
type appConfigGuard struct {
mtx sync.RWMutex
}
func (g *appConfigGuard) LockAppConfigShared() func() {
g.mtx.RLock()
return func() { g.mtx.RUnlock() }
}
func (g *appConfigGuard) LockAppConfigExclusive() func() {
g.mtx.Lock()
return func() { g.mtx.Unlock() }
}
type cfg struct { type cfg struct {
applicationConfiguration applicationConfiguration
internals internals
shared shared
dynamicConfiguration dynamicConfiguration
appConfigGuard
// configuration of the internal // configuration of the internal
// services // services
@ -442,16 +475,79 @@ func (c *cfg) ReadCurrentNetMap(msg *netmapV2.NetMap) error {
return nil return nil
} }
type grpcServer struct {
Listener net.Listener
Server *grpc.Server
Endpoint string
}
type cfgGRPC struct { type cfgGRPC struct {
listeners []net.Listener // guard protects connections and handlers
guard sync.RWMutex
// servers must be protected with guard
servers []grpcServer
// handlers must be protected with guard
handlers []func(e string, l net.Listener, s *grpc.Server)
servers []*grpc.Server maxChunkSize uint64
maxAddrAmount uint64
reconnectTimeout time.Duration
}
endpoints []string func (c *cfgGRPC) append(e string, l net.Listener, s *grpc.Server) {
c.guard.Lock()
defer c.guard.Unlock()
maxChunkSize uint64 c.servers = append(c.servers, grpcServer{
Listener: l,
Server: s,
Endpoint: e,
})
}
maxAddrAmount uint64 func (c *cfgGRPC) appendAndHandle(e string, l net.Listener, s *grpc.Server) {
c.guard.Lock()
defer c.guard.Unlock()
c.servers = append(c.servers, grpcServer{
Listener: l,
Server: s,
Endpoint: e,
})
for _, h := range c.handlers {
h(e, l, s)
}
}
func (c *cfgGRPC) performAndSave(handler func(e string, l net.Listener, s *grpc.Server)) {
c.guard.Lock()
defer c.guard.Unlock()
for _, conn := range c.servers {
handler(conn.Endpoint, conn.Listener, conn.Server)
}
c.handlers = append(c.handlers, handler)
}
func (c *cfgGRPC) dropConnection(endpoint string) {
c.guard.Lock()
defer c.guard.Unlock()
pos := -1
for idx, srv := range c.servers {
if srv.Endpoint == endpoint {
pos = idx
break
}
}
if pos < 0 {
return
}
c.servers[pos].Server.Stop() // closes listener
c.servers = append(c.servers[0:pos], c.servers[pos+1:]...)
} }
type cfgMorph struct { type cfgMorph struct {
@ -505,6 +601,8 @@ type cfgObject struct {
eaclSource container.EACLSource eaclSource container.EACLSource
cfgAccessPolicyEngine cfgAccessPolicyEngine
pool cfgObjectRoutines pool cfgObjectRoutines
cfgLocalStorage cfgLocalStorage cfgLocalStorage cfgLocalStorage
@ -524,6 +622,10 @@ type cfgLocalStorage struct {
localStorage *engine.StorageEngine localStorage *engine.StorageEngine
} }
type cfgAccessPolicyEngine struct {
accessPolicyEngine *accessPolicyEngine
}
type cfgObjectRoutines struct { type cfgObjectRoutines struct {
putRemote *ants.Pool putRemote *ants.Pool
@ -557,15 +659,22 @@ func initCfg(appCfg *config.Config) *cfg {
relayOnly := nodeconfig.Relay(appCfg) relayOnly := nodeconfig.Relay(appCfg)
netState := newNetworkState() netState := newNetworkState()
netState.metrics = c.metricsCollector
c.shared = initShared(appCfg, key, netState, relayOnly) c.shared = initShared(appCfg, key, netState, relayOnly)
netState.metrics = c.metricsCollector
logPrm, err := c.loggerPrm() logPrm, err := c.loggerPrm()
fatalOnErr(err) fatalOnErr(err)
logPrm.SamplingHook = c.metricsCollector.LogMetrics().GetSamplingHook() logPrm.SamplingHook = c.metricsCollector.LogMetrics().GetSamplingHook()
log, err := logger.NewLogger(logPrm) log, err := logger.NewLogger(logPrm)
fatalOnErr(err) fatalOnErr(err)
if loggerconfig.ToLokiConfig(appCfg).Enabled {
log.Logger = log.Logger.WithOptions(zap.WrapCore(func(core zapcore.Core) zapcore.Core {
lokiCore := lokicore.New(core, loggerconfig.ToLokiConfig(appCfg))
return lokiCore
}))
}
c.internals = initInternals(appCfg, log) c.internals = initInternals(appCfg, log)
@ -604,9 +713,18 @@ func initInternals(appCfg *config.Config, log *logger.Logger) internals {
log: log, log: log,
apiVersion: version.Current(), apiVersion: version.Current(),
healthStatus: &healthStatus, healthStatus: &healthStatus,
sdNotify: initSdNotify(appCfg),
} }
} }
func initSdNotify(appCfg *config.Config) bool {
if config.BoolSafe(appCfg.Sub("systemdnotify"), "enabled") {
fatalOnErr(sdnotify.InitSocket())
return true
}
return false
}
func initShared(appCfg *config.Config, key *keys.PrivateKey, netState *networkState, relayOnly bool) shared {
var netAddr network.AddressGroup

@@ -682,13 +800,14 @@ func initCfgObject(appCfg *config.Config) cfgObject {
}
func (c *cfg) engineOpts() []engine.Option {
-opts := make([]engine.Option, 0, 4)
var opts []engine.Option
opts = append(opts,
engine.WithShardPoolSize(c.EngineCfg.shardPoolSize),
engine.WithErrorThreshold(c.EngineCfg.errorThreshold),
engine.WithLogger(c.log),
engine.WithLowMemoryConsumption(c.EngineCfg.lowMem),
engine.WithRebuildWorkersCount(c.EngineCfg.rebuildWorkers),
)
if c.metricsCollector != nil {

@@ -777,7 +896,10 @@ func (c *cfg) getSubstorageOpts(shCfg shardCfg) []blobstor.SubStorage {
blobovniczatree.WithBlobovniczaShallowWidth(sRead.width),
blobovniczatree.WithBlobovniczaLeafWidth(sRead.leafWidth),
blobovniczatree.WithOpenedCacheSize(sRead.openedCacheSize),
blobovniczatree.WithInitWorkerCount(sRead.initWorkerCount),
blobovniczatree.WithInitInAdvance(sRead.initInAdvance),
blobovniczatree.WithLogger(c.log),
blobovniczatree.WithObjectSizeLimit(shCfg.smallSizeObjectLimit),
}
if c.metricsCollector != nil {

@@ -799,6 +921,7 @@ func (c *cfg) getSubstorageOpts(shCfg shardCfg) []blobstor.SubStorage {
fstree.WithPerm(sRead.perm),
fstree.WithDepth(sRead.depth),
fstree.WithNoSync(sRead.noSync),
fstree.WithLogger(c.log),
}
if c.metricsCollector != nil {
fstreeOpts = append(fstreeOpts,

@@ -830,6 +953,8 @@ func (c *cfg) getShardOpts(shCfg shardCfg) shardOptsWithID {
blobstoreOpts := []blobstor.Option{
blobstor.WithCompressObjects(shCfg.compress),
blobstor.WithUncompressableContentTypes(shCfg.uncompressableContentType),
blobstor.WithCompressibilityEstimate(shCfg.estimateCompressibility),
blobstor.WithCompressibilityEstimateThreshold(shCfg.estimateCompressibilityThreshold),
blobstor.WithStorages(ss),
blobstor.WithLogger(c.log),
}

@@ -866,7 +991,7 @@ func (c *cfg) getShardOpts(shCfg shardCfg) shardOptsWithID {
shard.WithRemoverBatchSize(shCfg.gcCfg.removerBatchSize),
shard.WithGCRemoverSleepInterval(shCfg.gcCfg.removerSleepInterval),
shard.WithExpiredCollectorBatchSize(shCfg.gcCfg.expiredCollectorBatchSize),
-shard.WithExpiredCollectorWorkersCount(shCfg.gcCfg.expiredCollectorWorkersCount),
shard.WithExpiredCollectorWorkerCount(shCfg.gcCfg.expiredCollectorWorkerCount),
shard.WithGCWorkerPoolInitializer(func(sz int) util.WorkerPool {
pool, err := ants.NewPool(sz)
fatalOnErr(err)

@@ -938,6 +1063,34 @@ func initLocalStorage(ctx context.Context, c *cfg) {
})
}
func initAccessPolicyEngine(_ context.Context, c *cfg) {
var localOverrideDB chainbase.LocalOverrideDatabase
if nodeconfig.PersistentPolicyRules(c.appCfg).Path() == "" {
c.log.Warn(logs.FrostFSNodePersistentRuleStorageDBPathIsNotSetInmemoryWillBeUsed)
localOverrideDB = chainbase.NewInmemoryLocalOverrideDatabase()
} else {
localOverrideDB = chainbase.NewBoltLocalOverrideDatabase(
chainbase.WithLogger(c.log),
chainbase.WithPath(nodeconfig.PersistentPolicyRules(c.appCfg).Path()),
chainbase.WithPerm(nodeconfig.PersistentPolicyRules(c.appCfg).Perm()),
chainbase.WithNoSync(nodeconfig.PersistentPolicyRules(c.appCfg).NoSync()),
)
}
morphRuleStorage := inmemory.NewInmemoryMorphRuleChainStorage()
ape := newAccessPolicyEngine(morphRuleStorage, localOverrideDB)
c.cfgObject.cfgAccessPolicyEngine.accessPolicyEngine = ape
c.onShutdown(func() {
if err := ape.LocalOverrideDatabaseCore().Close(); err != nil {
c.log.Warn(logs.FrostFSNodeAccessPolicyEngineClosingFailure,
zap.Error(err),
)
}
})
}
func initObjectPool(cfg *config.Config) (pool cfgObjectRoutines) {
var err error

@@ -1040,7 +1193,6 @@ func (c *cfg) signalWatcher(ctx context.Context) {
c.reloadConfig(ctx)
case syscall.SIGTERM, syscall.SIGINT:
c.log.Info(logs.FrostFSNodeTerminationSignalHasBeenReceivedStopping)
-// TODO (@acid-ant): #49 need to cover case when stuck at the middle(node health UNDEFINED or STARTING)
c.shutdown()

@@ -1062,7 +1214,13 @@ func (c *cfg) signalWatcher(ctx context.Context) {
func (c *cfg) reloadConfig(ctx context.Context) {
c.log.Info(logs.FrostFSNodeSIGHUPHasBeenReceivedRereadingConfiguration)
-err := c.readConfig(c.appCfg)
if !c.compareAndSwapHealthStatus(control.HealthStatus_READY, control.HealthStatus_RECONFIGURING) {
c.log.Info(logs.FrostFSNodeSIGHUPSkip)
return
}
defer c.compareAndSwapHealthStatus(control.HealthStatus_RECONFIGURING, control.HealthStatus_READY)
err := c.reloadAppConfig()
if err != nil {
c.log.Error(logs.FrostFSNodeConfigurationReading, zap.Error(err))
return

@@ -1129,6 +1287,13 @@ func (c *cfg) reloadConfig(ctx context.Context) {
c.log.Info(logs.FrostFSNodeConfigurationHasBeenReloadedSuccessfully)
}
func (c *cfg) reloadAppConfig() error {
unlock := c.LockAppConfigExclusive()
defer unlock()
return c.readConfig(c.appCfg)
}
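
reloadAppConfig takes an exclusive lock around re-reading appCfg, while readers of the configuration are expected to take a shared lock. The guard itself is not visible in this part of the diff; a plausible sketch of LockAppConfigExclusive, assuming an ordinary sync.RWMutex (field and method placement are assumptions):

package cfgsketch

import "sync"

// appCfgGuard is a hypothetical stand-in for the lock protecting appCfg.
type appCfgGuard struct {
    mtx sync.RWMutex
}

// LockAppConfigShared is what concurrent config readers would call.
func (g *appCfgGuard) LockAppConfigShared() (unlock func()) {
    g.mtx.RLock()
    return g.mtx.RUnlock
}

// LockAppConfigExclusive is what reloadAppConfig calls before readConfig.
func (g *appCfgGuard) LockAppConfigExclusive() (unlock func()) {
    g.mtx.Lock()
    return g.mtx.Unlock
}
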
func (c *cfg) createTombstoneSource() *tombstone.ExpirationChecker {
var tssPrm tsourse.TombstoneSourcePrm
tssPrm.SetGetService(c.cfgObject.getSvc)

@@ -1142,10 +1307,17 @@ func (c *cfg) createTombstoneSource() *tombstone.ExpirationChecker {
}
func (c *cfg) shutdown() {
-c.setHealthStatus(control.HealthStatus_SHUTTING_DOWN)
old := c.swapHealthStatus(control.HealthStatus_SHUTTING_DOWN)
if old == control.HealthStatus_SHUTTING_DOWN {
c.log.Info(logs.FrostFSNodeShutdownSkip)
return
}
if old == control.HealthStatus_STARTING {
c.log.Warn(logs.FrostFSNodeShutdownWhenNotReady)
}
c.ctxCancel()
-c.done <- struct{}{}
close(c.done)
for i := range c.closers {
c.closers[len(c.closers)-1-i].fn()
}
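
Both reloadConfig and shutdown now branch on atomic transitions of the node's health status (swapHealthStatus, compareAndSwapHealthStatus), so a SIGHUP during reconfiguration and a repeated termination signal become no-ops. Those helpers are not shown here; conceptually they reduce to an atomic int32 holding the control.HealthStatus value, roughly as in this sketch (the real implementation may also report the transition to metrics):

package healthsketch

import "sync/atomic"

// healthState sketches the primitive behind the calls above. Protobuf enums
// such as control.HealthStatus are int32-backed, so one atomic value suffices.
type healthState struct {
    status atomic.Int32
}

// swapHealthStatus unconditionally sets the new status and returns the previous one.
func (h *healthState) swapHealthStatus(st int32) (old int32) {
    return h.status.Swap(st)
}

// compareAndSwapHealthStatus changes the status only if it still equals oldSt.
func (h *healthState) compareAndSwapHealthStatus(oldSt, newSt int32) bool {
    return h.status.CompareAndSwap(oldSt, newSt)
}
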

@@ -22,7 +22,7 @@ func TestApiclientSection(t *testing.T) {
const path = "../../../../config/example/node"
-var fileConfigTest = func(c *config.Config) {
fileConfigTest := func(c *config.Config) {
require.Equal(t, 15*time.Second, apiclientconfig.DialTimeout(c))
require.Equal(t, 20*time.Second, apiclientconfig.StreamTimeout(c))
require.Equal(t, 30*time.Second, apiclientconfig.ReconnectTimeout(c))

@@ -223,3 +223,15 @@ func parseSizeInBytes(sizeStr string) uint64 {
size := cast.ToFloat64(sizeStr)
return safeMul(size, multiplier)
}
// FloatOrDefault reads a configuration value
// from c by name and casts it to float64.
//
// Returns defaultValue if the value cannot be cast.
func FloatOrDefault(c *Config, name string, defaultValue float64) float64 {
v, err := cast.ToFloat64E(c.Value(name))
if err != nil {
return defaultValue
}
return v
}
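
A typical call site for the new helper looks like the compressibility-threshold accessor further down in this diff; a minimal, hypothetical example (the subsection and parameter names are made up):

package example

import "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config"

// sampleRatio falls back to 0.2 when "sample_ratio" is missing or cannot be
// cast to float64; the "estimation" subsection is hypothetical.
func sampleRatio(c *config.Config) float64 {
    return config.FloatOrDefault(c.Sub("estimation"), "sample_ratio", 0.2)
}
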

@@ -38,7 +38,6 @@ func New(configFile, configDir, envPrefix string) *Config {
configViper.WithConfigFile(configFile),
configViper.WithConfigDir(configDir),
configViper.WithEnvPrefix(envPrefix))
if err != nil {
panic(err)
}

@@ -15,8 +15,8 @@ func TestConfigDir(t *testing.T) {
cfgFileName0 := path.Join(dir, "cfg_00.json")
cfgFileName1 := path.Join(dir, "cfg_01.yml")
-require.NoError(t, os.WriteFile(cfgFileName0, []byte(`{"storage":{"shard_pool_size":15}}`), 0777))
-require.NoError(t, os.WriteFile(cfgFileName1, []byte("logger:\n level: debug"), 0777))
require.NoError(t, os.WriteFile(cfgFileName0, []byte(`{"storage":{"shard_pool_size":15}}`), 0o777))
require.NoError(t, os.WriteFile(cfgFileName1, []byte("logger:\n level: debug"), 0o777))
c := New("", dir, "")
require.Equal(t, "debug", cast.ToString(c.Sub("logger").Value("level")))

@@ -35,7 +35,7 @@ func TestContractsSection(t *testing.T) {
expProxy, err := util.Uint160DecodeStringLE("ad7c6b55b737b696e5c82c85445040964a03e97f")
require.NoError(t, err)
-var fileConfigTest = func(c *config.Config) {
fileConfigTest := func(c *config.Config) {
balance := contractsconfig.Balance(c)
container := contractsconfig.Container(c)
netmap := contractsconfig.Netmap(c)

@@ -24,7 +24,7 @@ func TestControlSection(t *testing.T) {
pubs[0], _ = keys.NewPublicKeyFromString("035839e45d472a3b7769a2a1bd7d54c4ccd4943c3b40f547870e83a8fcbfb3ce11")
pubs[1], _ = keys.NewPublicKeyFromString("028f42cfcb74499d7b15b35d9bff260a1c8d27de4f446a627406a382d8961486d6")
-var fileConfigTest = func(c *config.Config) {
fileConfigTest := func(c *config.Config) {
require.Equal(t, pubs, controlconfig.AuthorizedKeys(c))
require.Equal(t, "localhost:8090", controlconfig.GRPC(c).Endpoint())
}

@@ -15,6 +15,9 @@ const (
// ShardPoolSizeDefault is a default value of routine pool size per-shard to
// process object PUT operations in a storage engine.
ShardPoolSizeDefault = 20
// RebuildWorkersCountDefault is a default value of the workers count to
// process storage rebuild operations in a storage engine.
RebuildWorkersCountDefault = 100
)
// ErrNoShardConfigured is returned when at least 1 shard is required but none are found.

@@ -88,3 +91,11 @@ func ShardErrorThreshold(c *config.Config) uint32 {
func EngineLowMemoryConsumption(c *config.Config) bool {
return config.BoolSafe(c.Sub(subsection), "low_mem")
}
// EngineRebuildWorkersCount returns the value of "rebuild_workers_count" config parameter from "storage" section.
func EngineRebuildWorkersCount(c *config.Config) uint32 {
if v := config.Uint32Safe(c.Sub(subsection), "rebuild_workers_count"); v > 0 {
return v
}
return RebuildWorkersCountDefault
}

@@ -38,15 +38,17 @@ func TestEngineSection(t *testing.T) {
require.EqualValues(t, 0, engineconfig.ShardErrorThreshold(empty))
require.EqualValues(t, engineconfig.ShardPoolSizeDefault, engineconfig.ShardPoolSize(empty))
require.EqualValues(t, mode.ReadWrite, shardconfig.From(empty).Mode())
require.EqualValues(t, engineconfig.RebuildWorkersCountDefault, engineconfig.EngineRebuildWorkersCount(empty))
})
const path = "../../../../config/example/node"
-var fileConfigTest = func(c *config.Config) {
fileConfigTest := func(c *config.Config) {
num := 0
require.EqualValues(t, 100, engineconfig.ShardErrorThreshold(c))
require.EqualValues(t, 15, engineconfig.ShardPoolSize(c))
require.EqualValues(t, uint32(1000), engineconfig.EngineRebuildWorkersCount(c))
err := engineconfig.IterateShards(c, true, func(sc *shardconfig.Config) error {
defer func() {

@@ -74,30 +76,34 @@ func TestEngineSection(t *testing.T) {
require.Equal(t, "tmp/0/cache", wc.Path())
require.EqualValues(t, 16384, wc.SmallObjectSize())
require.EqualValues(t, 134217728, wc.MaxObjectSize())
-require.EqualValues(t, 30, wc.WorkersNumber())
require.EqualValues(t, 30, wc.WorkerCount())
require.EqualValues(t, 3221225472, wc.SizeLimit())
require.Equal(t, "tmp/0/meta", meta.Path())
-require.Equal(t, fs.FileMode(0644), meta.BoltDB().Perm())
require.Equal(t, fs.FileMode(0o644), meta.BoltDB().Perm())
require.Equal(t, 100, meta.BoltDB().MaxBatchSize())
require.Equal(t, 10*time.Millisecond, meta.BoltDB().MaxBatchDelay())
require.Equal(t, true, sc.Compress())
require.Equal(t, []string{"audio/*", "video/*"}, sc.UncompressableContentTypes())
require.Equal(t, true, sc.EstimateCompressibility())
require.Equal(t, float64(0.7), sc.EstimateCompressibilityThreshold())
require.EqualValues(t, 102400, sc.SmallSizeLimit())
require.Equal(t, 2, len(ss))
blz := blobovniczaconfig.From((*config.Config)(ss[0]))
require.Equal(t, "tmp/0/blob/blobovnicza", ss[0].Path())
-require.EqualValues(t, 0644, blz.BoltDB().Perm())
require.EqualValues(t, 0o644, blz.BoltDB().Perm())
require.EqualValues(t, 4194304, blz.Size())
require.EqualValues(t, 1, blz.ShallowDepth())
require.EqualValues(t, 4, blz.ShallowWidth())
require.EqualValues(t, 50, blz.OpenedCacheSize())
require.EqualValues(t, 10, blz.LeafWidth())
require.EqualValues(t, 10, blz.InitWorkerCount())
require.EqualValues(t, true, blz.InitInAdvance())
require.Equal(t, "tmp/0/blob", ss[1].Path())
-require.EqualValues(t, 0644, ss[1].Perm())
require.EqualValues(t, 0o644, ss[1].Perm())
fst := fstreeconfig.From((*config.Config)(ss[1]))
require.EqualValues(t, 5, fst.Depth())

@@ -106,13 +112,13 @@ func TestEngineSection(t *testing.T) {
require.EqualValues(t, 150, gc.RemoverBatchSize())
require.Equal(t, 2*time.Minute, gc.RemoverSleepInterval())
require.Equal(t, 1500, gc.ExpiredCollectorBatchSize())
-require.Equal(t, 15, gc.ExpiredCollectorWorkersCount())
require.Equal(t, 15, gc.ExpiredCollectorWorkerCount())
require.Equal(t, false, sc.RefillMetabase())
require.Equal(t, mode.ReadOnly, sc.Mode())
case 1:
require.Equal(t, "tmp/1/blob/pilorama.db", pl.Path())
-require.Equal(t, fs.FileMode(0644), pl.Perm())
require.Equal(t, fs.FileMode(0o644), pl.Perm())
require.True(t, pl.NoSync())
require.Equal(t, 5*time.Millisecond, pl.MaxBatchDelay())
require.Equal(t, 100, pl.MaxBatchSize())

@@ -123,11 +129,11 @@ func TestEngineSection(t *testing.T) {
require.Equal(t, "tmp/1/cache", wc.Path())
require.EqualValues(t, 16384, wc.SmallObjectSize())
require.EqualValues(t, 134217728, wc.MaxObjectSize())
-require.EqualValues(t, 30, wc.WorkersNumber())
require.EqualValues(t, 30, wc.WorkerCount())
require.EqualValues(t, 4294967296, wc.SizeLimit())
require.Equal(t, "tmp/1/meta", meta.Path())
-require.Equal(t, fs.FileMode(0644), meta.BoltDB().Perm())
require.Equal(t, fs.FileMode(0o644), meta.BoltDB().Perm())
require.Equal(t, 200, meta.BoltDB().MaxBatchSize())
require.Equal(t, 20*time.Millisecond, meta.BoltDB().MaxBatchDelay())

@@ -144,9 +150,10 @@ func TestEngineSection(t *testing.T) {
require.EqualValues(t, 4, blz.ShallowWidth())
require.EqualValues(t, 50, blz.OpenedCacheSize())
require.EqualValues(t, 10, blz.LeafWidth())
require.EqualValues(t, blobovniczaconfig.InitWorkerCountDefault, blz.InitWorkerCount())
require.Equal(t, "tmp/1/blob", ss[1].Path())
-require.EqualValues(t, 0644, ss[1].Perm())
require.EqualValues(t, 0o644, ss[1].Perm())
fst := fstreeconfig.From((*config.Config)(ss[1]))
require.EqualValues(t, 5, fst.Depth())

@@ -155,7 +162,7 @@ func TestEngineSection(t *testing.T) {
require.EqualValues(t, 200, gc.RemoverBatchSize())
require.Equal(t, 5*time.Minute, gc.RemoverSleepInterval())
require.Equal(t, gcconfig.ExpiredCollectorBatchSizeDefault, gc.ExpiredCollectorBatchSize())
-require.Equal(t, gcconfig.ExpiredCollectorWorkersCountDefault, gc.ExpiredCollectorWorkersCount())
require.Equal(t, gcconfig.ExpiredCollectorWorkersCountDefault, gc.ExpiredCollectorWorkerCount())
require.Equal(t, true, sc.RefillMetabase())
require.Equal(t, mode.ReadWrite, sc.Mode())

@@ -22,6 +22,9 @@ const (
// OpenedCacheSizeDefault is a default cache size of opened Blobovnicza's.
OpenedCacheSizeDefault = 16
// InitWorkerCountDefault is a default workers count to initialize Blobovnicza's.
InitWorkerCountDefault = 5
)
// From wraps config section into Config.

@@ -112,3 +115,29 @@ func (x *Config) LeafWidth() uint64 {
"leaf_width",
)
}
// InitWorkerCount returns the value of "init_worker_count" config parameter.
//
// Returns InitWorkerCountDefault if the value is not a positive number.
func (x *Config) InitWorkerCount() int {
d := config.IntSafe(
(*config.Config)(x),
"init_worker_count",
)
if d > 0 {
return int(d)
}
return InitWorkerCountDefault
}
// InitInAdvance returns the value of "init_in_advance" config parameter.
//
// Returns false if the value is not defined or invalid.
func (x *Config) InitInAdvance() bool {
return config.BoolSafe(
(*config.Config)(x),
"init_in_advance",
)
}
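
init_worker_count bounds how many Blobovnicza databases are opened (and, with init_in_advance, fully initialized) in parallel when a shard starts. The blobovniczatree code that consumes these values is not part of this diff; the general pattern is a bounded worker group, roughly:

package blzsketch

import (
    "context"

    "golang.org/x/sync/errgroup"
)

// initAll sketches the effect of init_worker_count: open many databases with
// bounded concurrency. The real blobovniczatree initialization is more involved.
func initAll(ctx context.Context, paths []string, workerCount int, open func(string) error) error {
    eg, _ := errgroup.WithContext(ctx)
    eg.SetLimit(workerCount)
    for _, p := range paths {
        p := p
        eg.Go(func() error { return open(p) })
    }
    return eg.Wait()
}
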

@@ -9,7 +9,7 @@ import (
type Config config.Config
// PermDefault are default permission bits for BlobStor data.
-const PermDefault = 0660
const PermDefault = 0o660
func From(x *config.Config) *Config {
return (*Config)(x)

@@ -13,7 +13,7 @@ type Config config.Config
const (
// PermDefault is a default permission bits for metabase file.
-PermDefault = 0660
PermDefault = 0o660
)
// Perm returns the value of "perm" config parameter as a fs.FileMode.

@@ -16,8 +16,11 @@ import (
// which provides access to Shard configurations.
type Config config.Config
-// SmallSizeLimitDefault is a default limit of small objects payload in bytes.
-const SmallSizeLimitDefault = 1 << 20
const (
// SmallSizeLimitDefault is a default limit of small objects payload in bytes.
SmallSizeLimitDefault = 1 << 20
EstimateCompressibilityThresholdDefault = 0.1
)
// From wraps config section into Config.
func From(c *config.Config) *Config {

@@ -43,6 +46,30 @@ func (x *Config) UncompressableContentTypes() []string {
"compression_exclude_content_types")
}
// EstimateCompressibility returns the value of "estimate_compressibility" config parameter.
//
// Returns false if the value is not a valid bool.
func (x *Config) EstimateCompressibility() bool {
return config.BoolSafe(
(*config.Config)(x),
"compression_estimate_compressibility",
)
}
// EstimateCompressibilityThreshold returns the value of "estimate_compressibility_threshold" config parameter.
//
// Returns EstimateCompressibilityThresholdDefault if the value is not defined, not valid float or not in range [0.0; 1.0].
func (x *Config) EstimateCompressibilityThreshold() float64 {
v := config.FloatOrDefault(
(*config.Config)(x),
"compression_estimate_compressibility_threshold",
EstimateCompressibilityThresholdDefault)
if v < 0.0 || v > 1.0 {
return EstimateCompressibilityThresholdDefault
}
return v
}
// SmallSizeLimit returns the value of "small_object_size" config parameter.
//
// Returns SmallSizeLimitDefault if the value is not a positive number.
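
The threshold is a ratio in [0.0; 1.0] that tells the blobstor when compression is worth keeping. The decision logic itself lives outside this diff; one plausible reading, sketched with a zstd encoder, is "store compressed only if the sample shrinks enough" (names and the exact rule are illustrative, not the actual blobstor policy):

package compresssketch

import "github.com/klauspost/compress/zstd"

// compressionRatio probes how well data compresses: compressed size divided
// by original size, so smaller means more compressible.
func compressionRatio(enc *zstd.Encoder, data []byte) float64 {
    if len(data) == 0 {
        return 1
    }
    return float64(len(enc.EncodeAll(data, nil))) / float64(len(data))
}

// worthCompressing is an illustrative policy: keep the compressed form only
// if it saves at least `threshold` of the size (with the 0.1 default above,
// at least 10%).
func worthCompressing(enc *zstd.Encoder, data []byte, threshold float64) bool {
    return compressionRatio(enc, data) <= 1.0-threshold
}
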

@@ -63,14 +63,14 @@ func (x *Config) RemoverSleepInterval() time.Duration {
return RemoverSleepIntervalDefault
}
-// ExpiredCollectorWorkersCount returns the value of "expired_collector_workers_count"
// ExpiredCollectorWorkerCount returns the value of "expired_collector_worker_count"
// config parameter.
//
// Returns ExpiredCollectorWorkersCountDefault if the value is not a positive number.
-func (x *Config) ExpiredCollectorWorkersCount() int {
func (x *Config) ExpiredCollectorWorkerCount() int {
s := config.IntSafe(
(*config.Config)(x),
-"expired_collector_workers_count",
"expired_collector_worker_count",
)
if s > 0 {

@@ -13,7 +13,7 @@ type Config config.Config
const (
// PermDefault is a default permission bits for metabase file.
-PermDefault = 0660
PermDefault = 0o660
)
// From wraps config section into Config.

@@ -106,13 +106,13 @@ func (x *Config) MaxObjectSize() uint64 {
return MaxSizeDefault
}
-// WorkersNumber returns the value of "workers_number" config parameter.
// WorkerCount returns the value of "flush_worker_count" config parameter.
//
// Returns WorkersNumberDefault if the value is not a positive number.
-func (x *Config) WorkersNumber() int {
func (x *Config) WorkerCount() int {
c := config.IntSafe(
(*config.Config)(x),
-"workers_number",
"flush_worker_count",
)
if c > 0 {
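
flush_worker_count (renamed from workers_number) sets how many goroutines drain the write-cache into the main storage. The flush loop itself is not part of this diff; the shape of such a worker pool is simply:

package wcsketch

// runFlushWorkers sketches what a flush_worker_count value typically controls:
// a fixed number of goroutines draining a queue of cached objects. The real
// writecache flush loop is more involved.
func runFlushWorkers(count int, queue <-chan []byte, flush func([]byte)) {
    for i := 0; i < count; i++ {
        go func() {
            for obj := range queue {
                flush(obj)
            }
        }()
    }
}
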

@@ -3,6 +3,7 @@ package grpcconfig
import (
"errors"
"strconv"
"time"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config"
)

@@ -109,3 +110,17 @@ func IterateEndpoints(c *config.Config, f func(*Config)) {
panic("no gRPC server configured")
}
}
const DefaultReconnectInterval = time.Minute
// ReconnectTimeout returns the value of "reconnect_interval" gRPC config parameter.
//
// Returns DefaultReconnectInterval if value is not defined or invalid.
func ReconnectTimeout(c *config.Config) time.Duration {
grpcConf := c.Sub("grpc")
ri := config.DurationSafe(grpcConf, "reconnect_interval")
if ri > 0 {
return ri
}
return DefaultReconnectInterval
}
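
reconnect_interval gives the pause between attempts to bring a failed gRPC endpoint back up. The server-side retry loop is not shown in this part of the diff; the way such an interval is typically consumed looks like this sketch (the helper name and the plain net.Listen call are assumptions):

package grpcretry

import (
    "context"
    "net"
    "time"
)

// listenWithRetry keeps trying to listen on endpoint, waiting reconnectInterval
// between attempts, until it succeeds or ctx is cancelled.
func listenWithRetry(ctx context.Context, endpoint string, reconnectInterval time.Duration) (net.Listener, error) {
    for {
        lis, err := net.Listen("tcp", endpoint)
        if err == nil {
            return lis, nil
        }
        select {
        case <-ctx.Done():
            return nil, ctx.Err()
        case <-time.After(reconnectInterval):
        }
    }
}
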

@@ -17,7 +17,7 @@ func TestGRPCSection(t *testing.T) {
const path = "../../../../config/example/node"
-var fileConfigTest = func(c *config.Config) {
fileConfigTest := func(c *config.Config) {
num := 0
IterateEndpoints(c, func(sc *Config) {

@@ -1,12 +1,21 @@
package loggerconfig
import (
"os"
"time"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config"
"git.frostfs.info/TrueCloudLab/frostfs-observability/logging/lokicore/loki"
)
const (
// LevelDefault is a default logger level.
LevelDefault = "info"
subsection = "logger"
lokiSubsection = "loki"
AddressDefault = "localhost:3100"
BatchEntriesNumberDefault = 100
BatchWaitDefault = time.Second
)
// Level returns the value of "level" config parameter

@@ -15,7 +24,7 @@ const (
// Returns LevelDefault if the value is not a non-empty string.
func Level(c *config.Config) string {
v := config.StringSafe(
-c.Sub("logger"),
c.Sub(subsection),
"level",
)
if v != "" {

@@ -24,3 +33,44 @@ func Level(c *config.Config) string {
return LevelDefault
}
// ToLokiConfig extracts loki config.
func ToLokiConfig(c *config.Config) loki.Config {
hostname, _ := os.Hostname()
return loki.Config{
Enabled: config.BoolSafe(c.Sub(subsection).Sub(lokiSubsection), "enabled"),
BatchWait: getBatchWait(c),
BatchEntriesNumber: getBatchEntriesNumber(c),
Endpoint: getEndpoint(c),
Labels: map[string]string{
"hostname": hostname,
},
}
}
func getBatchWait(c *config.Config) time.Duration {
v := config.DurationSafe(c.Sub(subsection).Sub(lokiSubsection), "max_batch_delay")
if v > 0 {
return v
}
return BatchWaitDefault
}
func getBatchEntriesNumber(c *config.Config) int {
v := config.IntSafe(c.Sub(subsection).Sub(lokiSubsection), "max_batch_size")
if v > 0 {
return int(v)
}
return BatchEntriesNumberDefault
}
func getEndpoint(c *config.Config) string {
v := config.StringSafe(c.Sub(subsection).Sub(lokiSubsection), "endpoint")
if v != "" {
return v
}
return AddressDefault
}

@@ -17,7 +17,7 @@ func TestLoggerSection_Level(t *testing.T) {
const path = "../../../../config/example/node"
-var fileConfigTest = func(c *config.Config) {
fileConfigTest := func(c *config.Config) {
v := loggerconfig.Level(c)
require.Equal(t, "debug", v)
}

@@ -22,7 +22,7 @@ func TestMetricsSection(t *testing.T) {
const path = "../../../../config/example/node"
-var fileConfigTest = func(c *config.Config) {
fileConfigTest := func(c *config.Config) {
to := metricsconfig.ShutdownTimeout(c)
addr := metricsconfig.Address(c)

@@ -23,14 +23,12 @@ func TestMorphSection(t *testing.T) {
const path = "../../../../config/example/node"
-var (
-rpcs = []client.Endpoint{
-{"wss://rpc1.morph.frostfs.info:40341/ws", 1},
-{"wss://rpc2.morph.frostfs.info:40341/ws", 2},
-}
-)
rpcs := []client.Endpoint{
{"wss://rpc1.morph.frostfs.info:40341/ws", 1},
{"wss://rpc2.morph.frostfs.info:40341/ws", 2},
}
-var fileConfigTest = func(c *config.Config) {
fileConfigTest := func(c *config.Config) {
require.Equal(t, rpcs, morphconfig.RPCEndpoint(c))
require.Equal(t, 30*time.Second, morphconfig.DialTimeout(c))
require.Equal(t, 15*time.Second, morphconfig.CacheTTL(c))

@@ -2,6 +2,7 @@ package nodeconfig
import (
"fmt"
"io/fs"
"os"
"strconv"
"time"

@@ -30,11 +31,18 @@ type NotificationConfig struct {
cfg *config.Config
}
// PersistentPolicyRulesConfig is a wrapper over "persistent_policy_rules" config section
// which provides access to persistent policy rules storage configuration of node.
type PersistentPolicyRulesConfig struct {
cfg *config.Config
}
const (
subsection = "node"
persistentSessionsSubsection = "persistent_sessions"
persistentStateSubsection = "persistent_state"
notificationSubsection = "notification"
persistentPolicyRulesSubsection = "persistent_policy_rules"
attributePrefix = "attribute"

@@ -245,3 +253,42 @@ func (n NotificationConfig) KeyPath() string {
func (n NotificationConfig) CAPath() string {
return config.StringSafe(n.cfg, "ca")
}
const (
// PermDefault is a default permission bits for local override storage file.
PermDefault = 0o644
)
// PersistentPolicyRules returns structure that provides access to "persistent_policy_rules"
// subsection of "node" section.
func PersistentPolicyRules(c *config.Config) PersistentPolicyRulesConfig {
return PersistentPolicyRulesConfig{
c.Sub(subsection).Sub(persistentPolicyRulesSubsection),
}
}
// Path returns the value of "path" config parameter.
//
// Returns empty string if missing, for compatibility with older configurations.
func (l PersistentPolicyRulesConfig) Path() string {
return config.StringSafe(l.cfg, "path")
}
// Perm returns the value of "perm" config parameter as a fs.FileMode.
//
// Returns PermDefault if the value is not a positive number.
func (l PersistentPolicyRulesConfig) Perm() fs.FileMode {
p := config.UintSafe((*config.Config)(l.cfg), "perm")
if p == 0 {
p = PermDefault
}
return fs.FileMode(p)
}
// NoSync returns the value of "no_sync" config parameter as a bool value.
//
// Returns false if the value is not a boolean.
func (l PersistentPolicyRulesConfig) NoSync() bool {
return config.BoolSafe((*config.Config)(l.cfg), "no_sync")
}

@@ -56,7 +56,7 @@ func TestNodeSection(t *testing.T) {
const path = "../../../../config/example/node"
-var fileConfigTest = func(c *config.Config) {
fileConfigTest := func(c *config.Config) {
key := Key(c)
addrs := BootstrapAddresses(c)
attributes := Attributes(c)

@@ -28,11 +28,11 @@ func Put(c *config.Config) PutConfig {
}
}
-// PoolSizeRemote returns the value of "pool_size_remote" config parameter.
// PoolSizeRemote returns the value of "remote_pool_size" config parameter.
//
// Returns PutPoolSizeDefault if the value is not a positive number.
func (g PutConfig) PoolSizeRemote() int {
-v := config.Int(g.cfg, "pool_size_remote")
v := config.Int(g.cfg, "remote_pool_size")
if v > 0 {
return int(v)
}

@@ -40,11 +40,11 @@ func (g PutConfig) PoolSizeRemote() int {
return PutPoolSizeDefault
}
-// PoolSizeLocal returns the value of "pool_size_local" config parameter.
// PoolSizeLocal returns the value of "local_pool_size" config parameter.
//
// Returns PutPoolSizeDefault if the value is not a positive number.
func (g PutConfig) PoolSizeLocal() int {
-v := config.Int(g.cfg, "pool_size_local")
v := config.Int(g.cfg, "local_pool_size")
if v > 0 {
return int(v)
}
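
remote_pool_size and local_pool_size (renamed from pool_size_remote and pool_size_local) size the two worker pools used for object PUT: one for remote targets and one for the local node. A sketch of how such sizes can be turned into ants pools (the helper name is illustrative; initObjectPool itself is only partially visible above):

package poolsketch

import "github.com/panjf2000/ants/v2"

// newPutPools builds one worker pool for remote puts and one for local puts.
func newPutPools(remoteSize, localSize int) (remote, local *ants.Pool, err error) {
    remote, err = ants.NewPool(remoteSize, ants.WithNonblocking(true))
    if err != nil {
        return nil, nil, err
    }
    local, err = ants.NewPool(localSize, ants.WithNonblocking(true))
    if err != nil {
        return nil, nil, err
    }
    return remote, local, nil
}
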

@@ -21,7 +21,7 @@ func TestObjectSection(t *testing.T) {
const path = "../../../../config/example/node"
-var fileConfigTest = func(c *config.Config) {
fileConfigTest := func(c *config.Config) {
require.Equal(t, 100, objectconfig.Put(c).PoolSizeRemote())
require.Equal(t, 200, objectconfig.Put(c).PoolSizeLocal())
require.EqualValues(t, 10, objectconfig.TombstoneLifetime(c))

@@ -19,7 +19,7 @@ func TestPolicerSection(t *testing.T) {
const path = "../../../../config/example/node"
-var fileConfigTest = func(c *config.Config) {
fileConfigTest := func(c *config.Config) {
require.Equal(t, 15*time.Second, policerconfig.HeadTimeout(c))
}

@@ -25,7 +25,7 @@ func TestProfilerSection(t *testing.T) {
const path = "../../../../config/example/node"
-var fileConfigTest = func(c *config.Config) {
fileConfigTest := func(c *config.Config) {
to := profilerconfig.ShutdownTimeout(c)
addr := profilerconfig.Address(c)
View file

@ -20,7 +20,7 @@ func TestReplicatorSection(t *testing.T) {
const path = "../../../../config/example/node" const path = "../../../../config/example/node"
var fileConfigTest = func(c *config.Config) { fileConfigTest := func(c *config.Config) {
require.Equal(t, 15*time.Second, replicatorconfig.PutTimeout(c)) require.Equal(t, 15*time.Second, replicatorconfig.PutTimeout(c))
require.Equal(t, 10, replicatorconfig.PoolSize(c)) require.Equal(t, 10, replicatorconfig.PoolSize(c))
} }

Some files were not shown because too many files have changed in this diff.