Compare commits

...

79 commits

Author SHA1 Message Date
6e1576cfdb [#1656] qos: Add tests for AdjustOutgoingIOTag Interceptors
Change-Id: If534e756b26cf7f202039d48ecdf554b4283728b
Signed-off-by: Ekaterina Lebedeva <ekaterina.lebedeva@yadro.com>
2025-04-01 11:55:15 +03:00
a5bae6c5af [#1699] qos: Allow to prohibit operations for IO tag
Change-Id: I2bee26885244e241d224860978b6de3526527e96
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2025-04-01 10:08:03 +03:00
5a13830a94 [#1699] mod: Bump frostfs-qos version
Change-Id: Ie5e708c0ca653596c6e3346aa286618868a5aee8
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2025-04-01 10:08:03 +03:00
dcb2b23a7d [#1656] qos: Add test for SetCriticalIOTag Interceptor
Change-Id: I4a55fcb84e6f65408a1c0120ac917e49e23354a1
Signed-off-by: Ekaterina Lebedeva <ekaterina.lebedeva@yadro.com>
2025-03-31 18:21:48 +03:00
115aae7c34 [#1656] qos: Add tests for MaxActiveRPCLimiter Interceptors
Change-Id: Ib65890ae5aec34c34e15d4ec1f05952f74f1ad26
Signed-off-by: Ekaterina Lebedeva <ekaterina.lebedeva@yadro.com>
2025-03-31 18:21:46 +03:00
12a0537a7a [#1689] ci: Add commit checker to Jenkinsfile
- Commit checker image is built from dco-go:
  TrueCloudLab/dco-go#14
- 'pull_request_target' branch is defined in Jenkins job:
  TrueCloudLab/jenkins#10
  TrueCloudLab/jenkins#11

Change-Id: Ib86c5749f9e084d736b868240c4b47014b02ba8d
Signed-off-by: Vitaliy Potyarkin <v.potyarkin@yadro.com>
2025-03-31 15:08:59 +00:00
30d4692c3e [#1640] go.mod: Bump version for frostfs-locode-db
Change-Id: Ic45ae77d6209c0097575fc8f89b076b22d50d149
Signed-off-by: Anton Nikiforov <an.nikiforov@yadro.com>
2025-03-31 10:29:41 +00:00
2254c8aff5 [#1689] go.mod: Bump SDK version
Change-Id: Ic946aa68c3d6da9e7d54363f8e9141c6547707d6
Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2025-03-31 11:55:29 +03:00
d432bebef4 [#1689] go.mod: Bump frostfs-qos version
Change-Id: Iaa28da1a1e7b2f4ab7fd8ed661939eb38f4c7782
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2025-03-28 12:40:05 +00:00
d144abc977 [#1692] metabase: Remove useless count variable
It is always equal to `len(to)`.

Change-Id: Id7a4c26e9711216b78f96e6b2511efa0773e3471
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2025-03-28 12:16:01 +00:00
a2053870e2 [#1692] metabase: Use bucket cache in ListWithCursor()
No change in speed, but a unified approach:
```
goos: linux
goarch: amd64
pkg: git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/metabase
cpu: 11th Gen Intel(R) Core(TM) i5-1135G7 @ 2.40GHz
                           │    master    │                 new                 │
                           │    sec/op    │    sec/op     vs base               │
ListWithCursor/1_item-8      6.067µ ±  8%   5.731µ ± 10%       ~ (p=0.052 n=10)
ListWithCursor/10_items-8    25.40µ ± 11%   26.12µ ± 13%       ~ (p=0.971 n=10)
ListWithCursor/100_items-8   210.7µ ±  9%   203.2µ ±  6%       ~ (p=0.280 n=10)
geomean                      31.90µ         31.22µ        -2.16%

                           │    master    │                  new                  │
                           │     B/op     │     B/op      vs base                 │
ListWithCursor/1_item-8      3.287Ki ± 0%   3.287Ki ± 0%       ~ (p=1.000 n=10) ¹
ListWithCursor/10_items-8    15.63Ki ± 0%   15.62Ki ± 0%       ~ (p=0.328 n=10)
ListWithCursor/100_items-8   138.1Ki ± 0%   138.1Ki ± 0%       ~ (p=0.340 n=10)
geomean                      19.21Ki        19.21Ki       -0.00%
¹ all samples are equal

                           │   master    │                 new                  │
                           │  allocs/op  │  allocs/op   vs base                 │
ListWithCursor/1_item-8       109.0 ± 0%    109.0 ± 0%       ~ (p=1.000 n=10) ¹
ListWithCursor/10_items-8     380.0 ± 0%    380.0 ± 0%       ~ (p=1.000 n=10) ¹
ListWithCursor/100_items-8   3.082k ± 0%   3.082k ± 0%       ~ (p=1.000 n=10) ¹
geomean                       503.5         503.5       +0.00%
¹ all samples are equal
```

Change-Id: Ic11673427615053656b2a60068a6d4dbd27af2cb
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2025-03-28 12:16:01 +00:00
d00c606fee [#652] adm: Group independent stages in batches
Each stage waits until its transaction persists. This is needed to ensure
the next stage sees the result of the previous one. However, some of
the stages do not depend on one another, so we may execute them in
parallel.

The `AwaitDisabled` flag is used to localize this batching at the code
level. We could have removed `AwaitTx()` from the respective stages, but
that seems more error-prone.

Close #652.

Change-Id: Ib9c6f6cd5e0db0f31aa1cda8e127b1fad5166336
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2025-03-28 12:15:21 +00:00
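
For illustration, a minimal Go sketch of the batching pattern described in the commit above; `initCtx`, `sentTxs` and `runBatch` are simplified placeholders, not the actual frostfs-adm types:

```go
package main

// Stages still call AwaitTx(), but while AwaitDisabled is set it is a no-op,
// and a single await runs at the end of the batch.
type initCtx struct {
	AwaitDisabled bool
	sentTxs       []string // hashes of transactions sent but not yet awaited
}

func (c *initCtx) AwaitTx() error {
	if len(c.sentTxs) == 0 || c.AwaitDisabled {
		return nil
	}
	// ... poll the chain until every transaction in sentTxs persists ...
	c.sentTxs = c.sentTxs[:0]
	return nil
}

func runBatch(c *initCtx, stages ...func(*initCtx) error) error {
	c.AwaitDisabled = true
	for _, stage := range stages {
		if err := stage(c); err != nil {
			return err
		}
	}
	c.AwaitDisabled = false
	return c.AwaitTx() // one await for all independent stages
}
```
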
60446bb668 [#1689] adm/helper: Use proper nns bindings import
The one in `neo-go` is for another contract.

Change-Id: Ia1ac2da5e419b48801afdb26df72892d77344e0d
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2025-03-28 12:15:21 +00:00
bd8ab2d84a [#1689] adm: Remove useless switch in NNSIsAvailable()
After all the refactorings, there is no need for custom behaviour for
the local client.

Change-Id: I99e297cdeffff979524b3f89d3580ab5780e7681
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2025-03-28 12:15:21 +00:00
bce2f7bef0 [#1689] adm: Reuse neo.NewReader helper in transferNEOFinished()
Change-Id: I27980ed87436958cb4d27278e30e05da021d1506
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2025-03-28 12:10:37 +00:00
c2c05e2228 [#1689] adm: Reuse ReadOnlyInvoker in registerCandidateRange()
Change-Id: I544d10340825494b45a62700fa247404c18f746a
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2025-03-28 12:10:37 +00:00
0a38571a10 [#1689] adm: Simplify getCandidateRegisterPrice()
After all the refactoring, there is no longer a need for a custom branch
for the local client.

Change-Id: I274305b0c390578fb4583759135d3e7ce58873dc
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2025-03-28 12:10:31 +00:00
632bd8e38d [#1696] qos: Fix internal tag adjust
If a request has no tag, but the request's public key is the netmap node's
key or one of the allowed internal tag keys from the config, then the
request must use the internal IO tag.

Change-Id: Iff93b626941a81b088d8999b3f2947f9501dcdf8
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2025-03-28 07:47:12 +00:00
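
A minimal sketch of the rule above, assuming hypothetical helpers (`isNetmapNodeKey`, `allowedInternalKeys`) in place of the real netmap and config lookups:

```go
package qos

import "bytes"

// adjustIOTag assigns the internal tag to untagged requests coming from
// the node itself or from explicitly allowed internal keys.
func adjustIOTag(tag string, senderKey []byte, isNetmapNodeKey func([]byte) bool, allowedInternalKeys [][]byte) string {
	if tag != "" {
		return tag // request already carries an IO tag
	}
	if isNetmapNodeKey(senderKey) {
		return "internal"
	}
	for _, k := range allowedInternalKeys {
		if bytes.Equal(k, senderKey) {
			return "internal"
		}
	}
	return "client" // default for untagged external requests
}
```
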
3bbee1b554 [#1619] logger: Allow to set options for zap.Logger via logger.Prm
Change-Id: I8eed951c25d1ecf18b0aea62c6825be65a450085
Signed-off-by: Anton Nikiforov <an.nikiforov@yadro.com>
2025-03-27 15:41:37 +03:00
9358938222 [#1633] go.mod: Bump frostfs-sdk-go
Change-Id: I50c1a0d5b88e307402a5b1b2883bb9b9a357a2c7
Signed-off-by: Anton Nikiforov <an.nikiforov@yadro.com>
2025-03-26 15:03:17 +00:00
5470b205fd [#1619] gc: Fix metric frostfs_node.garbage_collector.marking_duration_seconds
Change-Id: I957f930d1babf179d0fb6de624a90f4fe9977862
Signed-off-by: Anton Nikiforov <an.nikiforov@yadro.com>
2025-03-26 14:14:26 +03:00
163e2e9f83 ci: Cache pre-commit installations on Jenkins Agent
This change introduces a custom helper from our shared library [0]
that defines an ad-hoc container environment to execute CI steps in.

The pre-commit installation will now be cached on the Jenkins Agent: builds
will tolerate network hiccups better, and we will also save some run time
(although on a non-critical path of a parallel process).

[0]: TrueCloudLab/jenkins#8

Change-Id: I93b01f169c457aa35f4d8bc5b90f31b31e2bd8b2
Signed-off-by: Vitaliy Potyarkin <v.potyarkin@yadro.com>
2025-03-24 17:04:24 +03:00
0c664fa804 [#1689] qos: Fix metric description
Change-Id: I460fdd3713e765d57ef3ff2945b9b3776f46c164
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2025-03-24 09:42:45 +00:00
0a9d139e20 [#1689] morph/client: Remove notary hash field from notaryInfo
Notary contract hash is constant.

Change-Id: I7935580acbced5c9d567875ea75daa57cc259a3c
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2025-03-24 09:20:19 +00:00
3bb1fb744a [#1689] morph/client: Reuse auto-generated wrappers for NNS
Make code simpler, remove unused methods.

Change-Id: I18807f2c14b5a96e533e5e3fc153e23c742c66c1
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2025-03-24 09:20:19 +00:00
ccdd6cb767 [#1052] object: Nuke out acl middleware
* Remove `acl` package as it's no longer used;
* Remove `RequestContext`;
* Fix `cmd/frostfs-node`.

Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2025-03-23 06:39:32 +00:00
73e35bc885 [#1052] object: Make ape middleware form request info
* Move some helpers from the `acl/v2` package to `ape`. Also move errors;
* Introduce `Metadata`, `RequestInfo` types;
* Introduce the `RequestInfoExtractor` interface and its implementation.
  The extractor's purpose is to extract request info based on request
  metadata. It also validates the session token;
* Refactor the ape service - each handler forms request info and passes
  the necessary fields to the checker.

Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2025-03-23 06:39:32 +00:00
eed0824590 go.mod: Bump frostfs-qos version
Change-Id: I8bc045b509ee1259cfad288477a0b7d045683f10
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2025-03-21 16:36:03 +00:00
a4da1da767 [#905] morph/client: Fetch NNS hash once on init
The NNS contract hash is taken from the contract with ID=1.
Because the morph client is expected to work with the same chain,
and because the contract hash doesn't change on update, there is no need
to fetch it from each new endpoint.

Change-Id: Ic6dc18283789da076d6a0b3701139b97037714cc
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2025-03-21 15:13:54 +00:00
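
A hedged sketch of the caching idea; `contractHashByID` stands in for whatever RPC call returns the contract with ID=1 and is not the actual morph client API:

```go
package morph

import (
	"fmt"

	"github.com/nspcc-dev/neo-go/pkg/util"
)

// client resolves the NNS hash once at construction and caches it,
// instead of re-fetching it for every endpoint.
type client struct {
	nnsHash util.Uint160 // constant for the chain; survives contract updates
}

func newClient(contractHashByID func(id int32) (util.Uint160, error)) (*client, error) {
	h, err := contractHashByID(1) // NNS is the contract with ID=1
	if err != nil {
		return nil, fmt.Errorf("resolve NNS contract hash: %w", err)
	}
	return &client{nnsHash: h}, nil
}
```
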
30099194ba [#1689] config: Remove testnet and mainnet configs
They are invalid and unsupported.
There is neither mainnet nor testnet currently.

Change-Id: I520363e2de0c22a584238accc253248be3eefea5
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2025-03-21 15:07:02 +00:00
e7e91ef634 [#1689] adm: Remove storagecfg subcommand
It has been unused and unsupported for a long time.

Change-Id: I570567db4e8cb202e41286064406ad85cd0e7a39
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2025-03-21 15:05:02 +00:00
4919b6a206 [#1689] node/config: Allow zero max_ops in RPC limits config
The limiter allows zeros for limits, meaning "this operation is
disabled".  However, the config didn't allow zero due to the lack of
distinction between "no value" and "zero" - cast functions read both
`nil` and zero as zero.

Now, the config allows a zero limit. Added tests.

Managing such cases should be easier after #1610.

Change-Id: Ifc840732390b2feb975f230573b34bf479406e05
Signed-off-by: Aleksey Savchuk <a.savchuk@yadro.com>
2025-03-21 17:00:40 +03:00
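
A minimal sketch of the nil-vs-zero distinction described above, using a plain map instead of the real config tree; the helper name is illustrative only:

```go
package config

// maxOpsLimit reports whether the key was set at all, so an explicit 0
// ("operation disabled") is not confused with "no value".
func maxOpsLimit(section map[string]any, key string) (int64, bool) {
	v, ok := section[key]
	if !ok || v == nil {
		return 0, false // no value: caller falls back to the default (no limit)
	}
	n, ok := v.(int64)
	if !ok {
		return 0, false
	}
	return n, true // zero is a valid limit meaning "prohibit this operation"
}
```
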
d951289131 [#1294] docs: Fix description of shard switching mode
Signed-off-by: Alexander Chuprov <a.chuprov@yadro.com>
2025-03-21 12:13:03 +00:00
016f2e11e3 [#1689] Makefile: Add more restricted .SHELLFLAGS
Catch more errors immediately.

Change-Id: I576f1b394a2b167c78c693a794ab8cca3ac1013b
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2025-03-21 11:49:49 +00:00
9aa486c9d8 [#1689] Makefile: Create dirs with -p flag
On CI there is no `bin` directory initially, so an error occurs
```
mkdir: cannot create directory '/var/cache/jenkins-agent/workspace/gerrit/frostfs-node#55-488b12-8ac3c/bin/gofumpt': No such file or directory
```

Change-Id: I43895c8f5ed7cc5c71c8025228710279f9e75e9c
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2025-03-21 11:49:42 +00:00
af76350bfb [#1695] qos: Sort tags by asc
Change-Id: Ia23e392bb49d2536096de2ba07fc6f8fb7ac0489
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2025-03-21 11:25:33 +00:00
3fa5c22ddf [#1689] Makefile: Add default reviewers via --push-option
Gerrit doesn't provide an easy way to have default reviewers assigned to
new change requests. However, we can use `--push-option` and mention all
people from the storage-core-developers group.

Change-Id: Ia01f8a3c5c8eb8a1dca6efb66fbe07018f6a42c9
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2025-03-21 11:25:21 +00:00
5385f9994f [#1695] mod: Bump frostfs-observability version
Change-Id: Id362b71f743ff70c8cd374030c9fa67e2566022f
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2025-03-21 13:28:02 +03:00
eea46a599d [#1695] qos: Add treesync tag
Tree sync differs too much from GC and rebuild to share the same tag with them.

Change-Id: Ib44d5fa9a88daff507d759d0b0410cc9272e236f
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2025-03-21 13:28:02 +03:00
049a650b89 [#1619] logger: Simplify logger config reloading
Change-Id: Ide892b250304b8cdb6c279f5f728c3b35f05df54
Signed-off-by: Anton Nikiforov <an.nikiforov@yadro.com>
2025-03-21 09:03:54 +00:00
3f4717a37f [#1692] metabase: Do not allocate map in cache unless needed
Change-Id: I8b1015a8c7c3df4153a08fdb788117d9f0d6c333
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2025-03-21 08:56:32 +00:00
60cea8c714 [#1692] metabase/test: Fix end of iteration error check
This is not good:
```
BenchmarkListWithCursor/1_item-8                --- FAIL: BenchmarkListWithCursor/1_item-8
    list_test.go:63: error: end of object listing
```

Change-Id: I61b70937ce30fefaf16ebeb0cdb51bdd39096061
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2025-03-21 08:56:18 +00:00
7df2912a83 [#1689] Makefile: Create prepare-commit-msg hook too
`commit-msg` is ignored when the `--no-verify` option is used, so there is
no way to skip `pre-commit` while retaining the `commit-msg` hook.
Skipping pre-commit is useful, though, so we might add the Change-Id in the
`prepare-commit-msg` hook instead. It accepts more parameters, but the
first one is a file with the commit message, so we may reuse the
`commit-msg` hook.

Change-Id: I4edb79810bbe38a5dcf7f4f07535f34c6bda0da3
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2025-03-20 18:01:53 +03:00
affab25512 Makefile: Add Gerrit-related targets
This commit adds helper targets to easily set up an existing repo for
work with Gerrit.

Change-Id: I0696eb8ea84cc16a9482be6a2fb0382fe624bb96
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2025-03-20 16:48:04 +03:00
45b7796151 [#1689] ci: Reimplement CI tasks in Jenkinsfile
This commit introduces a Jenkins pipeline that duplicates the features of
the existing Forgejo Actions workflows.

Change-Id: I657a6c27373a1ed4736ae27b4fb660e0ac86012d
Signed-off-by: Vitaliy Potyarkin <v.potyarkin@yadro.com>
2025-03-20 12:55:30 +00:00
e8801dbf49 [#1691] metabase: Move cheaper conditions to the front in ListWithCursor()
The `objectLocked` call is expensive: it does IO. We may omit it if the
object is not expired.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2025-03-20 12:52:36 +00:00
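
The change amounts to short-circuiting: the cheap expiration check runs before the IO-bound lock check. A sketch with hypothetical helpers (`isExpired`, `objectLocked`):

```go
package metabase

// objectInfo is a placeholder for whatever listing works with.
type objectInfo struct{ addr string }

// skip reports whether an object can be skipped during listing.
// && short-circuits: objectLocked (disk IO) never runs for non-expired objects.
func skip(obj objectInfo, isExpired func(objectInfo) bool, objectLocked func(objectInfo) bool) bool {
	return isExpired(obj) && !objectLocked(obj)
}
```
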
eb9df85b98 [#1685] metabase: Cache primary bucket
```
goos: linux
goarch: amd64
pkg: git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/metabase
cpu: 11th Gen Intel(R) Core(TM) i5-1135G7 @ 2.40GHz
                          │   expired    │               primary               │
                          │    sec/op    │    sec/op     vs base               │
Select/string_equal-8       3.529m ± 11%   3.689m ±  7%  +4.55% (p=0.023 n=10)
Select/string_not_equal-8   3.440m ±  7%   3.543m ± 13%       ~ (p=0.190 n=10)
Select/common_prefix-8      3.240m ±  6%   3.050m ±  5%  -5.85% (p=0.005 n=10)
Select/unknown-8            3.198m ±  6%   2.928m ±  8%  -8.44% (p=0.003 n=10)
geomean                     3.349m         3.287m        -1.84%

                          │   expired    │               primary               │
                          │     B/op     │     B/op      vs base               │
Select/string_equal-8       1.885Mi ± 0%   1.786Mi ± 0%  -5.23% (p=0.000 n=10)
Select/string_not_equal-8   1.885Mi ± 0%   1.786Mi ± 0%  -5.23% (p=0.000 n=10)
Select/common_prefix-8      1.885Mi ± 0%   1.786Mi ± 0%  -5.23% (p=0.000 n=10)
Select/unknown-8            1.877Mi ± 0%   1.779Mi ± 0%  -5.26% (p=0.000 n=10)
geomean                     1.883Mi        1.784Mi       -5.24%

                          │   expired   │              primary               │
                          │  allocs/op  │  allocs/op   vs base               │
Select/string_equal-8       46.04k ± 0%   43.04k ± 0%  -6.50% (p=0.000 n=10)
Select/string_not_equal-8   46.04k ± 0%   43.04k ± 0%  -6.50% (p=0.000 n=10)
Select/common_prefix-8      46.04k ± 0%   43.04k ± 0%  -6.50% (p=0.000 n=10)
Select/unknown-8            45.05k ± 0%   42.05k ± 0%  -6.65% (p=0.000 n=10)
geomean                     45.79k        42.79k       -6.54%
```

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2025-03-20 12:52:01 +00:00
21bed3362c [#1685] metabase: Cache expired bucket
```
goos: linux
goarch: amd64
pkg: git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/metabase
cpu: 11th Gen Intel(R) Core(TM) i5-1135G7 @ 2.40GHz
                          │    master    │               expired                │
                          │    sec/op    │    sec/op     vs base                │
Select/string_equal-8       4.007m ± 10%   3.529m ± 11%  -11.94% (p=0.000 n=10)
Select/string_not_equal-8   3.834m ± 12%   3.440m ±  7%  -10.29% (p=0.029 n=10)
Select/common_prefix-8      3.470m ±  9%   3.240m ±  6%        ~ (p=0.105 n=10)
Select/unknown-8            3.156m ±  3%   3.198m ±  6%        ~ (p=0.631 n=10)
geomean                     3.602m         3.349m         -7.03%

                          │    master    │               expired               │
                          │     B/op     │     B/op      vs base               │
Select/string_equal-8       1.907Mi ± 0%   1.885Mi ± 0%  -1.18% (p=0.000 n=10)
Select/string_not_equal-8   1.907Mi ± 0%   1.885Mi ± 0%  -1.18% (p=0.000 n=10)
Select/common_prefix-8      1.907Mi ± 0%   1.885Mi ± 0%  -1.18% (p=0.000 n=10)
Select/unknown-8            1.900Mi ± 0%   1.877Mi ± 0%  -1.18% (p=0.000 n=10)
geomean                     1.905Mi        1.883Mi       -1.18%

                          │   master    │              expired               │
                          │  allocs/op  │  allocs/op   vs base               │
Select/string_equal-8       47.03k ± 0%   46.04k ± 0%  -2.12% (p=0.000 n=10)
Select/string_not_equal-8   47.03k ± 0%   46.04k ± 0%  -2.12% (p=0.000 n=10)
Select/common_prefix-8      47.03k ± 0%   46.04k ± 0%  -2.12% (p=0.000 n=10)
Select/unknown-8            46.04k ± 0%   45.05k ± 0%  -2.16% (p=0.000 n=10)
geomean                     46.78k        45.79k       -2.13%
```

Change-Id: I9c7a5e1f5c8b9eb3f25a563fd74c6ad2a9d1b92e
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2025-03-20 12:52:01 +00:00
af5b3575d0 [#1690] qos: Do not export zero metrics counters
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2025-03-20 14:42:35 +03:00
a49f0717b3 [#1685] metabase: Cache frequently accessed singleton buckets
There are some buckets we access almost always to check whether an
object is alive. During search we also iterate over lots of objects, and
`tx.Bucket()` shows up a lot in pprof.
```
goos: linux
goarch: amd64
pkg: git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/metabase
cpu: 11th Gen Intel(R) Core(TM) i5-1135G7 @ 2.40GHz
                          │      1      │                  2                   │
                          │   sec/op    │    sec/op     vs base                │
Select/string_equal-8       4.753m ± 6%   3.969m ± 14%  -16.50% (p=0.000 n=10)
Select/string_not_equal-8   4.247m ± 9%   3.486m ± 11%  -17.93% (p=0.000 n=10)
Select/common_prefix-8      4.163m ± 5%   3.323m ±  5%  -20.18% (p=0.000 n=10)
Select/unknown-8            3.557m ± 3%   3.064m ±  8%  -13.85% (p=0.001 n=10)
geomean                     4.158m        3.445m        -17.15%

                          │      1       │                  2                   │
                          │     B/op     │     B/op      vs base                │
Select/string_equal-8       2.250Mi ± 0%   1.907Mi ± 0%  -15.24% (p=0.000 n=10)
Select/string_not_equal-8   2.250Mi ± 0%   1.907Mi ± 0%  -15.24% (p=0.000 n=10)
Select/common_prefix-8      2.250Mi ± 0%   1.907Mi ± 0%  -15.24% (p=0.000 n=10)
Select/unknown-8            2.243Mi ± 0%   1.900Mi ± 0%  -15.29% (p=0.000 n=10)
geomean                     2.248Mi        1.905Mi       -15.26%

                          │      1      │                  2                  │
                          │  allocs/op  │  allocs/op   vs base                │
Select/string_equal-8       56.02k ± 0%   47.03k ± 0%  -16.05% (p=0.000 n=10)
Select/string_not_equal-8   56.02k ± 0%   47.03k ± 0%  -16.05% (p=0.000 n=10)
Select/common_prefix-8      56.02k ± 0%   47.03k ± 0%  -16.05% (p=0.000 n=10)
Select/unknown-8            55.03k ± 0%   46.04k ± 0%  -16.34% (p=0.000 n=10)
geomean                     55.78k        46.78k       -16.12%
```

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2025-03-20 10:17:42 +00:00
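
A hedged sketch of the bucket-cache idea against the bbolt API; the type and field names are illustrative, not the metabase's actual ones:

```go
package metabase

import bolt "go.etcd.io/bbolt"

// bucketCache memoizes singleton buckets for the lifetime of one transaction,
// so hot paths do not call tx.Bucket() repeatedly.
type bucketCache struct {
	tx      *bolt.Tx
	buckets map[string]*bolt.Bucket
}

func newBucketCache(tx *bolt.Tx) *bucketCache {
	return &bucketCache{tx: tx, buckets: make(map[string]*bolt.Bucket)}
}

func (c *bucketCache) bucket(name []byte) *bolt.Bucket {
	if b, ok := c.buckets[string(name)]; ok {
		return b
	}
	b := c.tx.Bucket(name) // nil if the bucket does not exist; cached as well
	c.buckets[string(name)] = b
	return b
}
```
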
a7ac30da9c [#1642] tree: Refactor getSortedSubTree
* Reuse `item` as result for `forest.TreeSortedByFilename`
  invocation.

Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2025-03-20 10:12:49 +00:00
39f549a7ab [#1642] tree: Introduce a helper LastChild
Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2025-03-20 10:12:49 +00:00
760b6a44ea [#1642] tree: Fix sorted getSubtree for multiversion filenames
Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2025-03-20 10:12:49 +00:00
a11b2d27e4 [#1642] tree: Introduce Cursor type
* Use `Cursor` as parameter for `TreeSortedByFilename`

Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2025-03-20 10:12:49 +00:00
a405fb1f39 [#1683] metabase: Check object status once in Select()
objectStatus() is called twice for the same object:
first, in selectObject() to filter removed objects;
then, again, in getObjectForSlowFilters() via db.get().
The second call will return the same result, so remove the useless branch.

```
goos: linux
goarch: amd64
pkg: git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/metabase
cpu: 11th Gen Intel(R) Core(TM) i5-1135G7 @ 2.40GHz
                          │     old     │                status                │
                          │   sec/op    │    sec/op     vs base                │
Select/string_equal-8       5.022m ± 7%   3.968m ±  8%  -20.98% (p=0.000 n=10)
Select/string_not_equal-8   4.953m ± 9%   3.990m ± 10%  -19.44% (p=0.000 n=10)
Select/common_prefix-8      4.962m ± 8%   3.971m ±  9%  -19.98% (p=0.000 n=10)
Select/unknown-8            5.246m ± 9%   3.548m ±  5%  -32.37% (p=0.000 n=10)
geomean                     5.045m        3.865m        -23.39%

                          │     old      │                status                │
                          │     B/op     │     B/op      vs base                │
Select/string_equal-8       2.685Mi ± 0%   2.250Mi ± 0%  -16.20% (p=0.000 n=10)
Select/string_not_equal-8   2.685Mi ± 0%   2.250Mi ± 0%  -16.20% (p=0.000 n=10)
Select/common_prefix-8      2.685Mi ± 0%   2.250Mi ± 0%  -16.20% (p=0.000 n=10)
Select/unknown-8            2.677Mi ± 0%   2.243Mi ± 0%  -16.24% (p=0.000 n=10)
geomean                     2.683Mi        2.248Mi       -16.21%

                          │     old     │               status                │
                          │  allocs/op  │  allocs/op   vs base                │
Select/string_equal-8       69.03k ± 0%   56.02k ± 0%  -18.84% (p=0.000 n=10)
Select/string_not_equal-8   69.03k ± 0%   56.02k ± 0%  -18.84% (p=0.000 n=10)
Select/common_prefix-8      69.03k ± 0%   56.02k ± 0%  -18.84% (p=0.000 n=10)
Select/unknown-8            68.03k ± 0%   55.03k ± 0%  -19.11% (p=0.000 n=10)
geomean                     68.78k        55.77k       -18.90%
```

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2025-03-18 11:48:51 +00:00
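
A minimal sketch of the deduplication: compute the status once in the caller and pass it down, so the slow-filter path does not repeat the lookup; all names here are illustrative:

```go
package metabase

type objectStatus int

const (
	statusAvailable objectStatus = iota
	statusRemoved
)

// selectOne computes the status once and threads it through.
func selectOne(addr string, status func(string) objectStatus, slowFilters func(string, objectStatus) bool) bool {
	st := status(addr)
	if st == statusRemoved {
		return false // filtered out early, as before
	}
	return slowFilters(addr, st) // reuses st instead of calling status(addr) again
}
```
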
a7319bc979 [#1683] metabase/test: Report allocs in benchmarkSelect()
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2025-03-18 11:48:51 +00:00
fc743cc537 [#1684] cli: Correct description of control shards writecache seal
Signed-off-by: Alexander Chuprov <a.chuprov@yadro.com>
2025-03-18 10:56:30 +03:00
54ef71a92f [#1680] Update staticcheck to 2025.1.1
Change-Id: Ie851e714afebf171c4d42d4c49b42379c2665113
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2025-03-17 10:02:53 +00:00
91c7b39232 [#1680] go.mod: Bump go version to 1.23
Change-Id: I77f908924f675e676f0db6a57204d7c1e0df219a
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2025-03-17 10:02:53 +00:00
ef6ac751df [#1671] Update gopls to v0.17.1
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2025-03-17 10:02:53 +00:00
fde2649e60 [#1678] adm: Fix frostfs-adm morph list-subjects & list-group-subjects
`include-names` for `list-subjects` returns the error `invalid response subject struct`
because `ListSubjects` returns only subject addresses (see frostfs-contract).

Replace `include-names` with `extended`, as all subject info is now printed.

Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2025-03-14 13:42:29 +00:00
07a660fbc4 [#1677] writecache: Add QoS limiter usage
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2025-03-14 16:23:33 +03:00
7893d763d1 [#1673] logger: Add sampling for journald logger
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2025-03-13 14:52:03 +03:00
ff4e9b6ae1 [#1673] logger: Drop unused fields
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2025-03-13 14:52:02 +03:00
997759994a [#1676] golangci: Enable gci linter
Signed-off-by: Alexander Chuprov <a.chuprov@yadro.com>
2025-03-13 12:04:01 +03:00
ecb6b0793c [#1671] Use slices.ContainsFunc() where possible
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2025-03-13 08:12:20 +00:00
460e5cbccf [#1671] Use slices.Delete() where possible
gopatch is missing for this one because of
https://github.com/uber-go/gopatch/issues/179

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2025-03-13 08:12:20 +00:00
155d3ddb6e [#1671] Use min builtin where possible
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2025-03-13 08:12:20 +00:00
40536d8a06 [#1671] Use fmt.Appendf where warranted
Fix gopls warnings:
```
cmd/frostfs-adm/internal/modules/morph/config/config.go:68:20-64: Replace []byte(fmt.Sprintf...) with fmt.Appendf
```

gopatch:
```
@@
var f expression
@@
-[]byte(fmt.Sprintf(f, ...))
+fmt.Appendf(nil, f, ...)
```

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2025-03-13 08:12:20 +00:00
d66bffb191 [#1671] Use max builtin where possible
gopatch:
```
@@
var d, a expression
@@
-if d < a {
-    d = a
-}
-return d
+return max(d, a)

@@
var d, a expression
@@
-if d <= a {
-    d = a
-}
-return d
+return max(d, a)
```

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2025-03-13 08:12:20 +00:00
bcc84c85a0 [#1671] Replace interface{} with any
gopatch:
```
@@
@@
-interface{}
+any
```

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2025-03-13 08:12:20 +00:00
737788b35f [#1669] go.mod: Bump frostfs-qos version
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2025-03-12 09:23:53 +03:00
2005fdda09 [#1667] shard: Drop shard pool
After adding an ops limiter, shard's `put` pool is redundant.

Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2025-03-11 13:59:51 +03:00
597bce7a87 [#1653] treeSvc: Add operations by IO tag metric
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2025-03-11 10:57:47 +00:00
4ed2bbdb0f [#1653] objectSvc: Add operations by IO tag metric
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2025-03-11 10:57:47 +00:00
3727d60331 [#1653] qos: Add metrics
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2025-03-11 10:57:47 +00:00
d36afa31c7 [#1653] qos: Fix logging
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2025-03-11 10:57:47 +00:00
8643e0abc5 [#1668] writecache: Use object size to check free space
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2025-03-10 17:52:57 +03:00
bd61f7bf0a [#1666] audit: Fix duplicated log in Patch method
When we do `object patch` with audit enabled, we get several
duplicated entries in the logs.

An `object patch` request is logged in 2 places:
1. `(*auditPatchStream) CloseAndRecv()` - when the client closes
   the request stream or when the stream gets aborted.
2. `(*auditPatchStream) Send()` - when the stream was NOT aborted.

`Send()` doesn't check if `err != nil` before logging.
This led to logging on every `Send()` call.

Signed-off-by: Ekaterina Lebedeva <ekaterina.lebedeva@yadro.com>
2025-03-07 13:27:07 +00:00
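
A sketch of the fix pattern described above, with a simplified stand-in for the real audit stream wrapper:

```go
package audit

// patchStream checks the error before logging in Send, so a request is no
// longer logged on every Send call.
type patchStream struct {
	send func() error  // underlying stream Send
	log  func(ok bool) // writes an audit log entry
}

func (s *patchStream) Send() error {
	err := s.send()
	if err != nil {
		s.log(false) // log only when this Send call fails
	}
	return err
}

func (s *patchStream) CloseAndRecv() error {
	s.log(true) // normal completion is logged once, here
	return nil
}
```
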
151 changed files with 2089 additions and 3647 deletions

.ci/Jenkinsfile vendored Normal file
View file

@ -0,0 +1,87 @@
def golang = ['1.23', '1.24']
def golangDefault = "golang:${golang.last()}"
async {
for (version in golang) {
def go = version
task("test/go${go}") {
container("golang:${go}") {
sh 'make test'
}
}
task("build/go${go}") {
container("golang:${go}") {
for (app in ['cli', 'node', 'ir', 'adm', 'lens']) {
sh """
make bin/frostfs-${app}
bin/frostfs-${app} --version
"""
}
}
}
}
task('test/race') {
container(golangDefault) {
sh 'make test GOFLAGS="-count=1 -race"'
}
}
task('lint') {
container(golangDefault) {
sh 'make lint-install lint'
}
}
task('staticcheck') {
container(golangDefault) {
sh 'make staticcheck-install staticcheck-run'
}
}
task('gopls') {
container(golangDefault) {
sh 'make gopls-install gopls-run'
}
}
task('gofumpt') {
container(golangDefault) {
sh '''
make fumpt-install
make fumpt
git diff --exit-code --quiet
'''
}
}
task('vulncheck') {
container(golangDefault) {
sh '''
go install golang.org/x/vuln/cmd/govulncheck@latest
govulncheck ./...
'''
}
}
task('pre-commit') {
dockerfile("""
FROM ${golangDefault}
RUN apt update && \
apt install -y --no-install-recommends pre-commit
""") {
withEnv(['SKIP=make-lint,go-staticcheck-repo-mod,go-unit-tests,gofumpt']) {
sh 'pre-commit run --color=always --hook-stage=manual --all-files'
}
}
}
task('dco') {
container('git.frostfs.info/truecloudlab/commit-check:master') {
sh 'FROM=pull_request_target commit-check'
}
}
}

View file

@ -12,7 +12,7 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
go_versions: [ '1.22', '1.23' ]
go_versions: [ '1.23', '1.24' ]
steps:
- uses: actions/checkout@v3

View file

@ -13,7 +13,7 @@ jobs:
- name: Setup Go
uses: actions/setup-go@v3
with:
go-version: '1.22'
go-version: '1.24'
- name: Run commit format checker
uses: https://git.frostfs.info/TrueCloudLab/dco-go@v3

View file

@ -21,7 +21,7 @@ jobs:
- name: Set up Go
uses: actions/setup-go@v3
with:
go-version: 1.23
go-version: 1.24
- name: Set up Python
run: |
apt update

View file

@ -16,7 +16,7 @@ jobs:
- name: Set up Go
uses: actions/setup-go@v3
with:
go-version: '1.23'
go-version: '1.24'
cache: true
- name: Install linters
@ -30,7 +30,7 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
go_versions: [ '1.22', '1.23' ]
go_versions: [ '1.23', '1.24' ]
fail-fast: false
steps:
- uses: actions/checkout@v3
@ -53,7 +53,7 @@ jobs:
- name: Set up Go
uses: actions/setup-go@v3
with:
go-version: '1.22'
go-version: '1.24'
cache: true
- name: Run tests
@ -68,7 +68,7 @@ jobs:
- name: Set up Go
uses: actions/setup-go@v3
with:
go-version: '1.23'
go-version: '1.24'
cache: true
- name: Install staticcheck
@ -104,7 +104,7 @@ jobs:
- name: Set up Go
uses: actions/setup-go@v3
with:
go-version: '1.23'
go-version: '1.24'
cache: true
- name: Install gofumpt

View file

@ -18,7 +18,7 @@ jobs:
- name: Setup Go
uses: actions/setup-go@v3
with:
go-version: '1.23'
go-version: '1.24'
check-latest: true
- name: Install govulncheck

View file

@ -22,6 +22,11 @@ linters-settings:
# 'default' case is present, even if all enum members aren't listed in the
# switch
default-signifies-exhaustive: true
gci:
sections:
- standard
- default
custom-order: true
govet:
# report about shadowed variables
check-shadowing: false
@ -72,6 +77,7 @@ linters:
- durationcheck
- exhaustive
- copyloopvar
- gci
- gofmt
- goimports
- misspell

View file

@ -1,5 +1,6 @@
#!/usr/bin/make -f
SHELL = bash
.SHELLFLAGS = -euo pipefail -c
REPO ?= $(shell go list -m)
VERSION ?= $(shell git describe --tags --dirty --match "v*" --always --abbrev=8 2>/dev/null || cat VERSION 2>/dev/null || echo "develop")
@ -7,7 +8,7 @@ VERSION ?= $(shell git describe --tags --dirty --match "v*" --always --abbrev=8
HUB_IMAGE ?= git.frostfs.info/truecloudlab/frostfs
HUB_TAG ?= "$(shell echo ${VERSION} | sed 's/^v//')"
GO_VERSION ?= 1.22
GO_VERSION ?= 1.23
LINT_VERSION ?= 1.62.2
TRUECLOUDLAB_LINT_VERSION ?= 0.0.8
PROTOC_VERSION ?= 25.0
@ -16,7 +17,7 @@ PROTOC_OS_VERSION=osx-x86_64
ifeq ($(shell uname), Linux)
PROTOC_OS_VERSION=linux-x86_64
endif
STATICCHECK_VERSION ?= 2024.1.1
STATICCHECK_VERSION ?= 2025.1.1
ARCH = amd64
BIN = bin
@ -42,7 +43,7 @@ GOFUMPT_VERSION ?= v0.7.0
GOFUMPT_DIR ?= $(abspath $(BIN))/gofumpt
GOFUMPT_VERSION_DIR ?= $(GOFUMPT_DIR)/$(GOFUMPT_VERSION)
GOPLS_VERSION ?= v0.15.1
GOPLS_VERSION ?= v0.17.1
GOPLS_DIR ?= $(abspath $(BIN))/gopls
GOPLS_VERSION_DIR ?= $(GOPLS_DIR)/$(GOPLS_VERSION)
GOPLS_TEMP_FILE := $(shell mktemp)
@ -115,7 +116,7 @@ protoc:
# Install protoc
protoc-install:
@rm -rf $(PROTOBUF_DIR)
@mkdir $(PROTOBUF_DIR)
@mkdir -p $(PROTOBUF_DIR)
@echo "⇒ Installing protoc... "
@wget -q -O $(PROTOBUF_DIR)/protoc-$(PROTOC_VERSION).zip 'https://github.com/protocolbuffers/protobuf/releases/download/v$(PROTOC_VERSION)/protoc-$(PROTOC_VERSION)-$(PROTOC_OS_VERSION).zip'
@unzip -q -o $(PROTOBUF_DIR)/protoc-$(PROTOC_VERSION).zip -d $(PROTOC_DIR)
@ -169,7 +170,7 @@ imports:
# Install gofumpt
fumpt-install:
@rm -rf $(GOFUMPT_DIR)
@mkdir $(GOFUMPT_DIR)
@mkdir -p $(GOFUMPT_DIR)
@GOBIN=$(GOFUMPT_VERSION_DIR) go install mvdan.cc/gofumpt@$(GOFUMPT_VERSION)
# Run gofumpt
@ -186,14 +187,37 @@ test:
@echo "⇒ Running go test"
@GOFLAGS="$(GOFLAGS)" go test ./...
# Install Gerrit commit-msg hook
review-install: GIT_HOOK_DIR := $(shell git rev-parse --git-dir)/hooks
review-install:
@git config remote.review.url \
|| git remote add review ssh://review.frostfs.info:2222/TrueCloudLab/frostfs-node
@mkdir -p $(GIT_HOOK_DIR)/
@curl -Lo $(GIT_HOOK_DIR)/commit-msg https://review.frostfs.info/tools/hooks/commit-msg
@chmod +x $(GIT_HOOK_DIR)/commit-msg
@echo -e '#!/bin/sh\n"$$(git rev-parse --git-path hooks)"/commit-msg "$$1"' >$(GIT_HOOK_DIR)/prepare-commit-msg
@chmod +x $(GIT_HOOK_DIR)/prepare-commit-msg
# Create a PR in Gerrit
review: BRANCH ?= master
review:
@git push review HEAD:refs/for/$(BRANCH) \
--push-option r=e.stratonikov@yadro.com \
--push-option r=d.stepanov@yadro.com \
--push-option r=an.nikiforov@yadro.com \
--push-option r=a.arifullin@yadro.com \
--push-option r=ekaterina.lebedeva@yadro.com \
--push-option r=a.savchuk@yadro.com \
--push-option r=a.chuprov@yadro.com
# Run pre-commit
pre-commit-run:
@pre-commit run -a --hook-stage manual
# Install linters
lint-install:
lint-install: $(BIN)
@rm -rf $(OUTPUT_LINT_DIR)
@mkdir $(OUTPUT_LINT_DIR)
@mkdir -p $(OUTPUT_LINT_DIR)
@mkdir -p $(TMP_DIR)
@rm -rf $(TMP_DIR)/linters
@git -c advice.detachedHead=false clone --branch v$(TRUECLOUDLAB_LINT_VERSION) https://git.frostfs.info/TrueCloudLab/linters.git $(TMP_DIR)/linters
@ -212,7 +236,7 @@ lint:
# Install staticcheck
staticcheck-install:
@rm -rf $(STATICCHECK_DIR)
@mkdir $(STATICCHECK_DIR)
@mkdir -p $(STATICCHECK_DIR)
@GOBIN=$(STATICCHECK_VERSION_DIR) go install honnef.co/go/tools/cmd/staticcheck@$(STATICCHECK_VERSION)
# Run staticcheck
@ -225,7 +249,7 @@ staticcheck-run:
# Install gopls
gopls-install:
@rm -rf $(GOPLS_DIR)
@mkdir $(GOPLS_DIR)
@mkdir -p $(GOPLS_DIR)
@GOBIN=$(GOPLS_VERSION_DIR) go install golang.org/x/tools/gopls@$(GOPLS_VERSION)
# Run gopls

View file

@ -65,14 +65,14 @@ func dumpNetworkConfig(cmd *cobra.Command, _ []string) error {
nbuf := make([]byte, 8)
copy(nbuf[:], v)
n := binary.LittleEndian.Uint64(nbuf)
_, _ = tw.Write([]byte(fmt.Sprintf("%s:\t%d (int)\n", k, n)))
_, _ = tw.Write(fmt.Appendf(nil, "%s:\t%d (int)\n", k, n))
case netmap.HomomorphicHashingDisabledKey, netmap.MaintenanceModeAllowedConfig:
if len(v) == 0 || len(v) > 1 {
return helper.InvalidConfigValueErr(k)
}
_, _ = tw.Write([]byte(fmt.Sprintf("%s:\t%t (bool)\n", k, v[0] == 1)))
_, _ = tw.Write(fmt.Appendf(nil, "%s:\t%t (bool)\n", k, v[0] == 1))
default:
_, _ = tw.Write([]byte(fmt.Sprintf("%s:\t%s (hex)\n", k, hex.EncodeToString(v))))
_, _ = tw.Write(fmt.Appendf(nil, "%s:\t%s (hex)\n", k, hex.EncodeToString(v)))
}
}

View file

@ -219,8 +219,8 @@ func printContractInfo(cmd *cobra.Command, infos []contractDumpInfo) {
if info.version == "" {
info.version = "unknown"
}
_, _ = tw.Write([]byte(fmt.Sprintf("%s\t(%s):\t%s\n",
info.name, info.version, info.hash.StringLE())))
_, _ = tw.Write(fmt.Appendf(nil, "%s\t(%s):\t%s\n",
info.name, info.version, info.hash.StringLE()))
}
_ = tw.Flush()

View file

@ -34,7 +34,7 @@ const (
subjectNameFlag = "subject-name"
subjectKeyFlag = "subject-key"
subjectAddressFlag = "subject-address"
includeNamesFlag = "include-names"
extendedFlag = "extended"
groupNameFlag = "group-name"
groupIDFlag = "group-id"
@ -209,7 +209,7 @@ func initFrostfsIDListSubjectsCmd() {
Cmd.AddCommand(frostfsidListSubjectsCmd)
frostfsidListSubjectsCmd.Flags().StringP(commonflags.EndpointFlag, commonflags.EndpointFlagShort, "", commonflags.EndpointFlagDesc)
frostfsidListSubjectsCmd.Flags().String(namespaceFlag, "", "Namespace to list subjects")
frostfsidListSubjectsCmd.Flags().Bool(includeNamesFlag, false, "Whether include subject name (require additional requests)")
frostfsidListSubjectsCmd.Flags().Bool(extendedFlag, false, "Whether include subject info (require additional requests)")
}
func initFrostfsIDCreateGroupCmd() {
@ -256,7 +256,7 @@ func initFrostfsIDListGroupSubjectsCmd() {
frostfsidListGroupSubjectsCmd.Flags().StringP(commonflags.EndpointFlag, commonflags.EndpointFlagShort, "", commonflags.EndpointFlagDesc)
frostfsidListGroupSubjectsCmd.Flags().String(namespaceFlag, "", "Namespace name")
frostfsidListGroupSubjectsCmd.Flags().Int64(groupIDFlag, 0, "Group id")
frostfsidListGroupSubjectsCmd.Flags().Bool(includeNamesFlag, false, "Whether include subject name (require additional requests)")
frostfsidListGroupSubjectsCmd.Flags().Bool(extendedFlag, false, "Whether include subject info (require additional requests)")
}
func initFrostfsIDSetKVCmd() {
@ -336,7 +336,7 @@ func frostfsidDeleteSubject(cmd *cobra.Command, _ []string) {
}
func frostfsidListSubjects(cmd *cobra.Command, _ []string) {
includeNames, _ := cmd.Flags().GetBool(includeNamesFlag)
extended, _ := cmd.Flags().GetBool(extendedFlag)
ns := getFrostfsIDNamespace(cmd)
inv, _, hash := initInvoker(cmd)
reader := frostfsidrpclient.NewReader(inv, hash)
@ -349,21 +349,19 @@ func frostfsidListSubjects(cmd *cobra.Command, _ []string) {
sort.Slice(subAddresses, func(i, j int) bool { return subAddresses[i].Less(subAddresses[j]) })
for _, addr := range subAddresses {
if !includeNames {
if !extended {
cmd.Println(address.Uint160ToString(addr))
continue
}
sessionID, it, err := reader.ListSubjects()
items, err := reader.GetSubject(addr)
commonCmd.ExitOnErr(cmd, "can't get subject: %w", err)
items, err := readIterator(inv, &it, sessionID)
commonCmd.ExitOnErr(cmd, "can't read iterator: %w", err)
subj, err := frostfsidclient.ParseSubject(items)
commonCmd.ExitOnErr(cmd, "can't parse subject: %w", err)
cmd.Printf("%s (%s)\n", address.Uint160ToString(addr), subj.Name)
printSubjectInfo(cmd, addr, subj)
cmd.Println()
}
}
@ -483,7 +481,7 @@ func frostfsidDeleteKV(cmd *cobra.Command, _ []string) {
func frostfsidListGroupSubjects(cmd *cobra.Command, _ []string) {
ns := getFrostfsIDNamespace(cmd)
groupID := getFrostfsIDGroupID(cmd)
includeNames, _ := cmd.Flags().GetBool(includeNamesFlag)
extended, _ := cmd.Flags().GetBool(extendedFlag)
inv, cs, hash := initInvoker(cmd)
_, err := helper.NNSResolveHash(inv, cs.Hash, helper.DomainOf(constants.FrostfsIDContract))
commonCmd.ExitOnErr(cmd, "can't get netmap contract hash: %w", err)
@ -501,7 +499,7 @@ func frostfsidListGroupSubjects(cmd *cobra.Command, _ []string) {
sort.Slice(subjects, func(i, j int) bool { return subjects[i].Less(subjects[j]) })
for _, subjAddr := range subjects {
if !includeNames {
if !extended {
cmd.Println(address.Uint160ToString(subjAddr))
continue
}
@ -510,7 +508,8 @@ func frostfsidListGroupSubjects(cmd *cobra.Command, _ []string) {
commonCmd.ExitOnErr(cmd, "can't get subject: %w", err)
subj, err := frostfsidclient.ParseSubject(items)
commonCmd.ExitOnErr(cmd, "can't parse subject: %w", err)
cmd.Printf("%s (%s)\n", address.Uint160ToString(subjAddr), subj.Name)
printSubjectInfo(cmd, subjAddr, subj)
cmd.Println()
}
}
@ -600,3 +599,30 @@ func initInvoker(cmd *cobra.Command) (*invoker.Invoker, *state.Contract, util.Ui
return inv, cs, nmHash
}
func printSubjectInfo(cmd *cobra.Command, addr util.Uint160, subj *frostfsidclient.Subject) {
cmd.Printf("Address: %s\n", address.Uint160ToString(addr))
pk := "<nil>"
if subj.PrimaryKey != nil {
pk = subj.PrimaryKey.String()
}
cmd.Printf("Primary key: %s\n", pk)
cmd.Printf("Name: %s\n", subj.Name)
cmd.Printf("Namespace: %s\n", subj.Namespace)
if len(subj.AdditionalKeys) > 0 {
cmd.Printf("Additional keys:\n")
for _, key := range subj.AdditionalKeys {
k := "<nil>"
if key != nil {
k = key.String()
}
cmd.Printf("- %s\n", k)
}
}
if len(subj.KV) > 0 {
cmd.Printf("KV:\n")
for k, v := range subj.KV {
cmd.Printf("- %s: %s\n", k, v)
}
}
}

View file

@ -6,6 +6,7 @@ import (
"time"
"git.frostfs.info/TrueCloudLab/frostfs-contract/nns"
nns2 "git.frostfs.info/TrueCloudLab/frostfs-contract/rpcclient/nns"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/modules/config"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/modules/morph/constants"
@ -13,9 +14,7 @@ import (
"github.com/nspcc-dev/neo-go/pkg/core/native/nativenames"
"github.com/nspcc-dev/neo-go/pkg/crypto/keys"
"github.com/nspcc-dev/neo-go/pkg/encoding/address"
"github.com/nspcc-dev/neo-go/pkg/rpcclient"
"github.com/nspcc-dev/neo-go/pkg/rpcclient/invoker"
nns2 "github.com/nspcc-dev/neo-go/pkg/rpcclient/nns"
"github.com/nspcc-dev/neo-go/pkg/rpcclient/unwrap"
"github.com/nspcc-dev/neo-go/pkg/smartcontract/trigger"
"github.com/nspcc-dev/neo-go/pkg/util"
@ -187,19 +186,9 @@ func NNSResolveKey(inv *invoker.Invoker, nnsHash util.Uint160, domain string) (*
}
func NNSIsAvailable(c Client, nnsHash util.Uint160, name string) (bool, error) {
switch c.(type) {
case *rpcclient.Client:
inv := invoker.New(c, nil)
reader := nns2.NewReader(inv, nnsHash)
return reader.IsAvailable(name)
default:
b, err := unwrap.Bool(InvokeFunction(c, nnsHash, "isAvailable", []any{name}, nil))
if err != nil {
return false, fmt.Errorf("`isAvailable`: invalid response: %w", err)
}
return b, nil
}
inv := invoker.New(c, nil)
reader := nns2.NewReader(inv, nnsHash)
return reader.IsAvailable(name)
}
func CheckNotaryEnabled(c Client) error {

View file

@ -40,6 +40,8 @@ type ClientContext struct {
CommitteeAct *actor.Actor // committee actor with the Global witness scope
ReadOnlyInvoker *invoker.Invoker // R/O contract invoker, does not contain any signer
SentTxs []HashVUBPair
AwaitDisabled bool
}
func NewRemoteClient(v *viper.Viper) (Client, error) {
@ -120,7 +122,7 @@ func (c *ClientContext) SendTx(tx *transaction.Transaction, cmd *cobra.Command,
}
func (c *ClientContext) AwaitTx(cmd *cobra.Command) error {
if len(c.SentTxs) == 0 {
if len(c.SentTxs) == 0 || c.AwaitDisabled {
return nil
}

View file

@ -3,6 +3,7 @@ package helper
import (
"errors"
"fmt"
"slices"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/modules/morph/constants"
@ -118,11 +119,8 @@ func MergeNetmapConfig(roInvoker *invoker.Invoker, md map[string]any) error {
return err
}
for k, v := range m {
for _, key := range NetmapConfigKeys {
if k == key {
md[k] = v
break
}
if slices.Contains(NetmapConfigKeys, k) {
md[k] = v
}
}
return nil

View file

@ -39,6 +39,7 @@ func initializeSideChainCmd(cmd *cobra.Command, _ []string) error {
return err
}
initCtx.AwaitDisabled = true
cmd.Println("Stage 4.1: Transfer GAS to proxy contract.")
if err := transferGASToProxy(initCtx); err != nil {
return err
@ -55,5 +56,10 @@ func initializeSideChainCmd(cmd *cobra.Command, _ []string) error {
}
cmd.Println("Stage 7: set addresses in NNS.")
return setNNS(initCtx)
if err := setNNS(initCtx); err != nil {
return err
}
initCtx.AwaitDisabled = false
return initCtx.AwaitTx()
}

View file

@ -1,7 +1,6 @@
package initialize
import (
"errors"
"fmt"
"math/big"
@ -11,11 +10,8 @@ import (
"github.com/nspcc-dev/neo-go/pkg/core/state"
"github.com/nspcc-dev/neo-go/pkg/core/transaction"
"github.com/nspcc-dev/neo-go/pkg/io"
"github.com/nspcc-dev/neo-go/pkg/rpcclient"
"github.com/nspcc-dev/neo-go/pkg/rpcclient/actor"
"github.com/nspcc-dev/neo-go/pkg/rpcclient/invoker"
"github.com/nspcc-dev/neo-go/pkg/rpcclient/neo"
"github.com/nspcc-dev/neo-go/pkg/rpcclient/nep17"
"github.com/nspcc-dev/neo-go/pkg/rpcclient/unwrap"
"github.com/nspcc-dev/neo-go/pkg/smartcontract/callflag"
"github.com/nspcc-dev/neo-go/pkg/util"
@ -30,7 +26,8 @@ const (
)
func registerCandidateRange(c *helper.InitializeContext, start, end int) error {
regPrice, err := getCandidateRegisterPrice(c)
reader := neo.NewReader(c.ReadOnlyInvoker)
regPrice, err := reader.GetRegisterPrice()
if err != nil {
return fmt.Errorf("can't fetch registration price: %w", err)
}
@ -116,7 +113,7 @@ func registerCandidates(c *helper.InitializeContext) error {
func transferNEOToAlphabetContracts(c *helper.InitializeContext) error {
neoHash := neo.Hash
ok, err := transferNEOFinished(c, neoHash)
ok, err := transferNEOFinished(c)
if ok || err != nil {
return err
}
@ -139,33 +136,8 @@ func transferNEOToAlphabetContracts(c *helper.InitializeContext) error {
return c.AwaitTx()
}
func transferNEOFinished(c *helper.InitializeContext, neoHash util.Uint160) (bool, error) {
r := nep17.NewReader(c.ReadOnlyInvoker, neoHash)
func transferNEOFinished(c *helper.InitializeContext) (bool, error) {
r := neo.NewReader(c.ReadOnlyInvoker)
bal, err := r.BalanceOf(c.CommitteeAcc.Contract.ScriptHash())
return bal.Cmp(big.NewInt(native.NEOTotalSupply)) == -1, err
}
var errGetPriceInvalid = errors.New("`getRegisterPrice`: invalid response")
func getCandidateRegisterPrice(c *helper.InitializeContext) (int64, error) {
switch c.Client.(type) {
case *rpcclient.Client:
inv := invoker.New(c.Client, nil)
reader := neo.NewReader(inv)
return reader.GetRegisterPrice()
default:
neoHash := neo.Hash
res, err := helper.InvokeFunction(c.Client, neoHash, "getRegisterPrice", nil, nil)
if err != nil {
return 0, err
}
if len(res.Stack) == 0 {
return 0, errGetPriceInvalid
}
bi, err := res.Stack[0].TryInteger()
if err != nil || !bi.IsInt64() {
return 0, errGetPriceInvalid
}
return bi.Int64(), nil
}
}

View file

@ -80,9 +80,9 @@ func dumpPolicyCmd(cmd *cobra.Command, _ []string) error {
buf := bytes.NewBuffer(nil)
tw := tabwriter.NewWriter(buf, 0, 2, 2, ' ', 0)
_, _ = tw.Write([]byte(fmt.Sprintf("Execution Fee Factor:\t%d (int)\n", execFee)))
_, _ = tw.Write([]byte(fmt.Sprintf("Fee Per Byte:\t%d (int)\n", feePerByte)))
_, _ = tw.Write([]byte(fmt.Sprintf("Storage Price:\t%d (int)\n", storagePrice)))
_, _ = tw.Write(fmt.Appendf(nil, "Execution Fee Factor:\t%d (int)\n", execFee))
_, _ = tw.Write(fmt.Appendf(nil, "Fee Per Byte:\t%d (int)\n", feePerByte))
_, _ = tw.Write(fmt.Appendf(nil, "Storage Price:\t%d (int)\n", storagePrice))
_ = tw.Flush()
cmd.Print(buf.String())

View file

@ -7,7 +7,6 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/modules/config"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/modules/metabase"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/modules/morph"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/modules/storagecfg"
"git.frostfs.info/TrueCloudLab/frostfs-node/misc"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/autocomplete"
utilConfig "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/config"
@ -41,7 +40,6 @@ func init() {
rootCmd.AddCommand(config.RootCmd)
rootCmd.AddCommand(morph.RootCmd)
rootCmd.AddCommand(storagecfg.RootCmd)
rootCmd.AddCommand(metabase.RootCmd)
rootCmd.AddCommand(autocomplete.Command("frostfs-adm"))

View file

@ -1,137 +0,0 @@
package storagecfg
const configTemplate = `logger:
level: info # logger level: one of "debug", "info" (default), "warn", "error", "dpanic", "panic", "fatal"
node:
wallet:
path: {{ .Wallet.Path }} # path to a NEO wallet; ignored if key is presented
address: {{ .Wallet.Account }} # address of a NEO account in the wallet; ignored if key is presented
password: {{ .Wallet.Password }} # password for a NEO account in the wallet; ignored if key is presented
addresses: # list of addresses announced by Storage node in the Network map
- {{ .AnnouncedAddress }}
attribute_0: UN-LOCODE:{{ .Attribute.Locode }}
relay: {{ .Relay }} # start Storage node in relay mode without bootstrapping into the Network map
grpc:
num: 1 # total number of listener endpoints
0:
endpoint: {{ .Endpoint }} # endpoint for gRPC server
tls:{{if .TLSCert}}
enabled: true # enable TLS for a gRPC connection (min version is TLS 1.2)
certificate: {{ .TLSCert }} # path to TLS certificate
key: {{ .TLSKey }} # path to TLS key
{{- else }}
enabled: false # disable TLS for a gRPC connection
{{- end}}
control:
authorized_keys: # list of hex-encoded public keys that have rights to use the Control Service
{{- range .AuthorizedKeys }}
- {{.}}{{end}}
grpc:
endpoint: {{.ControlEndpoint}} # endpoint that is listened by the Control Service
morph:
dial_timeout: 20s # timeout for side chain NEO RPC client connection
cache_ttl: 15s # use TTL cache for side chain GET operations
rpc_endpoint: # side chain N3 RPC endpoints
{{- range .MorphRPC }}
- address: wss://{{.}}/ws{{end}}
{{if not .Relay }}
storage:
shard_pool_size: 15 # size of per-shard worker pools used for PUT operations
shard:
default: # section with the default shard parameters
metabase:
perm: 0644 # permissions for metabase files(directories: +x for current user and group)
blobstor:
perm: 0644 # permissions for blobstor files(directories: +x for current user and group)
depth: 2 # max depth of object tree storage in FS
small_object_size: 102400 # 100KiB, size threshold for "small" objects which are stored in key-value DB, not in FS, bytes
compress: true # turn on/off Zstandard compression (level 3) of stored objects
compression_exclude_content_types:
- audio/*
- video/*
blobovnicza:
size: 1073741824 # approximate size limit of single blobovnicza instance, total size will be: size*width^(depth+1), bytes
depth: 1 # max depth of object tree storage in key-value DB
width: 4 # max width of object tree storage in key-value DB
opened_cache_capacity: 50 # maximum number of opened database files
opened_cache_ttl: 5m # ttl for opened database file
opened_cache_exp_interval: 15s # cache cleanup interval for expired blobovnicza's
gc:
remover_batch_size: 200 # number of objects to be removed by the garbage collector
remover_sleep_interval: 5m # frequency of the garbage collector invocation
0:
mode: "read-write" # mode of the shard, must be one of the: "read-write" (default), "read-only"
metabase:
path: {{ .MetabasePath }} # path to the metabase
blobstor:
path: {{ .BlobstorPath }} # path to the blobstor
{{end}}`
const (
neofsMainnetAddress = "2cafa46838e8b564468ebd868dcafdd99dce6221"
balanceMainnetAddress = "dc1ec98d9d0c5f9dfade16144defe08cffc5ca55"
neofsTestnetAddress = "b65d8243ac63983206d17e5221af0653a7266fa1"
balanceTestnetAddress = "e0420c216003747626670d1424569c17c79015bf"
)
var n3config = map[string]struct {
MorphRPC []string
RPC []string
NeoFSContract string
BalanceContract string
}{
"testnet": {
MorphRPC: []string{
"rpc01.morph.testnet.fs.neo.org:51331",
"rpc02.morph.testnet.fs.neo.org:51331",
"rpc03.morph.testnet.fs.neo.org:51331",
"rpc04.morph.testnet.fs.neo.org:51331",
"rpc05.morph.testnet.fs.neo.org:51331",
"rpc06.morph.testnet.fs.neo.org:51331",
"rpc07.morph.testnet.fs.neo.org:51331",
},
RPC: []string{
"rpc01.testnet.n3.nspcc.ru:21331",
"rpc02.testnet.n3.nspcc.ru:21331",
"rpc03.testnet.n3.nspcc.ru:21331",
"rpc04.testnet.n3.nspcc.ru:21331",
"rpc05.testnet.n3.nspcc.ru:21331",
"rpc06.testnet.n3.nspcc.ru:21331",
"rpc07.testnet.n3.nspcc.ru:21331",
},
NeoFSContract: neofsTestnetAddress,
BalanceContract: balanceTestnetAddress,
},
"mainnet": {
MorphRPC: []string{
"rpc1.morph.fs.neo.org:40341",
"rpc2.morph.fs.neo.org:40341",
"rpc3.morph.fs.neo.org:40341",
"rpc4.morph.fs.neo.org:40341",
"rpc5.morph.fs.neo.org:40341",
"rpc6.morph.fs.neo.org:40341",
"rpc7.morph.fs.neo.org:40341",
},
RPC: []string{
"rpc1.n3.nspcc.ru:10331",
"rpc2.n3.nspcc.ru:10331",
"rpc3.n3.nspcc.ru:10331",
"rpc4.n3.nspcc.ru:10331",
"rpc5.n3.nspcc.ru:10331",
"rpc6.n3.nspcc.ru:10331",
"rpc7.n3.nspcc.ru:10331",
},
NeoFSContract: neofsMainnetAddress,
BalanceContract: balanceMainnetAddress,
},
}

View file

@ -1,433 +0,0 @@
package storagecfg
import (
"bytes"
"context"
"encoding/hex"
"errors"
"fmt"
"math/rand"
"net"
"net/url"
"os"
"path/filepath"
"slices"
"strconv"
"strings"
"text/template"
"time"
netutil "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/network"
"github.com/chzyer/readline"
"github.com/nspcc-dev/neo-go/cli/flags"
"github.com/nspcc-dev/neo-go/cli/input"
"github.com/nspcc-dev/neo-go/pkg/crypto/keys"
"github.com/nspcc-dev/neo-go/pkg/encoding/address"
"github.com/nspcc-dev/neo-go/pkg/encoding/fixedn"
"github.com/nspcc-dev/neo-go/pkg/rpcclient"
"github.com/nspcc-dev/neo-go/pkg/rpcclient/actor"
"github.com/nspcc-dev/neo-go/pkg/rpcclient/gas"
"github.com/nspcc-dev/neo-go/pkg/rpcclient/nep17"
"github.com/nspcc-dev/neo-go/pkg/smartcontract/trigger"
"github.com/nspcc-dev/neo-go/pkg/util"
"github.com/nspcc-dev/neo-go/pkg/wallet"
"github.com/spf13/cobra"
)
const (
walletFlag = "wallet"
accountFlag = "account"
)
const (
defaultControlEndpoint = "localhost:8090"
defaultDataEndpoint = "localhost"
)
// RootCmd is a root command of config section.
var RootCmd = &cobra.Command{
Use: "storage-config [-w wallet] [-a acccount] [<path-to-config>]",
Short: "Section for storage node configuration commands",
Run: storageConfig,
}
func init() {
fs := RootCmd.Flags()
fs.StringP(walletFlag, "w", "", "Path to wallet")
fs.StringP(accountFlag, "a", "", "Wallet account")
}
type config struct {
AnnouncedAddress string
AuthorizedKeys []string
ControlEndpoint string
Endpoint string
TLSCert string
TLSKey string
MorphRPC []string
Attribute struct {
Locode string
}
Wallet struct {
Path string
Account string
Password string
}
Relay bool
BlobstorPath string
MetabasePath string
}
func storageConfig(cmd *cobra.Command, args []string) {
outPath := getOutputPath(args)
historyPath := filepath.Join(os.TempDir(), "frostfs-adm.history")
readline.SetHistoryPath(historyPath)
var c config
c.Wallet.Path, _ = cmd.Flags().GetString(walletFlag)
if c.Wallet.Path == "" {
c.Wallet.Path = getPath("Path to the storage node wallet: ")
}
w, err := wallet.NewWalletFromFile(c.Wallet.Path)
fatalOnErr(err)
fillWalletAccount(cmd, &c, w)
accH, err := flags.ParseAddress(c.Wallet.Account)
fatalOnErr(err)
acc := w.GetAccount(accH)
if acc == nil {
fatalOnErr(errors.New("can't find account in wallet"))
}
c.Wallet.Password, err = input.ReadPassword(fmt.Sprintf("Enter password for %s > ", c.Wallet.Account))
fatalOnErr(err)
err = acc.Decrypt(c.Wallet.Password, keys.NEP2ScryptParams())
fatalOnErr(err)
c.AuthorizedKeys = append(c.AuthorizedKeys, hex.EncodeToString(acc.PrivateKey().PublicKey().Bytes()))
network := readNetwork(cmd)
c.MorphRPC = n3config[network].MorphRPC
depositGas(cmd, acc, network)
c.Attribute.Locode = getString("UN-LOCODE attribute in [XX YYY] format: ")
endpoint := getDefaultEndpoint(cmd, &c)
c.Endpoint = getString(fmt.Sprintf("Listening address [%s]: ", endpoint))
if c.Endpoint == "" {
c.Endpoint = endpoint
}
c.ControlEndpoint = getString(fmt.Sprintf("Listening address (control endpoint) [%s]: ", defaultControlEndpoint))
if c.ControlEndpoint == "" {
c.ControlEndpoint = defaultControlEndpoint
}
c.TLSCert = getPath("TLS Certificate (optional): ")
if c.TLSCert != "" {
c.TLSKey = getPath("TLS Key: ")
}
c.Relay = getConfirmation(false, "Use node as a relay? yes/[no]: ")
if !c.Relay {
p := getPath("Path to the storage directory (all available storage will be used): ")
c.BlobstorPath = filepath.Join(p, "blob")
c.MetabasePath = filepath.Join(p, "meta")
}
out := applyTemplate(c)
fatalOnErr(os.WriteFile(outPath, out, 0o644))
cmd.Println("Node is ready for work! Run `frostfs-node -config " + outPath + "`")
}
func getDefaultEndpoint(cmd *cobra.Command, c *config) string {
var addr, port string
for {
c.AnnouncedAddress = getString("Publicly announced address: ")
validator := netutil.Address{}
err := validator.FromString(c.AnnouncedAddress)
if err != nil {
cmd.Println("Incorrect address format. See https://git.frostfs.info/TrueCloudLab/frostfs-node/src/branch/master/pkg/network/address.go for details.")
continue
}
uriAddr, err := url.Parse(validator.URIAddr())
if err != nil {
panic(fmt.Errorf("unexpected error: %w", err))
}
addr = uriAddr.Hostname()
port = uriAddr.Port()
ip, err := net.ResolveIPAddr("ip", addr)
if err != nil {
cmd.Printf("Can't resolve IP address %s: %v\n", addr, err)
continue
}
if !ip.IP.IsGlobalUnicast() {
cmd.Println("IP must be global unicast.")
continue
}
cmd.Printf("Resolved IP address: %s\n", ip.String())
_, err = strconv.ParseUint(port, 10, 16)
if err != nil {
cmd.Println("Port must be an integer.")
continue
}
break
}
return net.JoinHostPort(defaultDataEndpoint, port)
}
func fillWalletAccount(cmd *cobra.Command, c *config, w *wallet.Wallet) {
c.Wallet.Account, _ = cmd.Flags().GetString(accountFlag)
if c.Wallet.Account == "" {
addr := address.Uint160ToString(w.GetChangeAddress())
c.Wallet.Account = getWalletAccount(w, fmt.Sprintf("Wallet account [%s]: ", addr))
if c.Wallet.Account == "" {
c.Wallet.Account = addr
}
}
}
func readNetwork(cmd *cobra.Command) string {
var network string
for {
network = getString("Choose network [mainnet]/testnet: ")
switch network {
case "":
network = "mainnet"
case "testnet", "mainnet":
default:
cmd.Println(`Network must be either "mainnet" or "testnet"`)
continue
}
break
}
return network
}
func getOutputPath(args []string) string {
if len(args) != 0 {
return args[0]
}
outPath := getPath("File to write config at [./config.yml]: ")
if outPath == "" {
outPath = "./config.yml"
}
return outPath
}
func getWalletAccount(w *wallet.Wallet, prompt string) string {
addrs := make([]readline.PrefixCompleterInterface, len(w.Accounts))
for i := range w.Accounts {
addrs[i] = readline.PcItem(w.Accounts[i].Address)
}
readline.SetAutoComplete(readline.NewPrefixCompleter(addrs...))
defer readline.SetAutoComplete(nil)
s, err := readline.Line(prompt)
fatalOnErr(err)
return strings.TrimSpace(s) // autocompleter can return a string with a trailing space
}
func getString(prompt string) string {
s, err := readline.Line(prompt)
fatalOnErr(err)
if s != "" {
_ = readline.AddHistory(s)
}
return s
}
type filenameCompleter struct{}
func (filenameCompleter) Do(line []rune, pos int) (newLine [][]rune, length int) {
prefix := string(line[:pos])
dir := filepath.Dir(prefix)
de, err := os.ReadDir(dir)
if err != nil {
return nil, 0
}
for i := range de {
name := filepath.Join(dir, de[i].Name())
if strings.HasPrefix(name, prefix) {
tail := []rune(strings.TrimPrefix(name, prefix))
if de[i].IsDir() {
tail = append(tail, filepath.Separator)
}
newLine = append(newLine, tail)
}
}
if pos != 0 {
return newLine, pos - len([]rune(dir))
}
return newLine, 0
}
func getPath(prompt string) string {
readline.SetAutoComplete(filenameCompleter{})
defer readline.SetAutoComplete(nil)
p, err := readline.Line(prompt)
fatalOnErr(err)
if p == "" {
return p
}
_ = readline.AddHistory(p)
abs, err := filepath.Abs(p)
if err != nil {
fatalOnErr(fmt.Errorf("can't create an absolute path: %w", err))
}
return abs
}
func getConfirmation(def bool, prompt string) bool {
for {
s, err := readline.Line(prompt)
fatalOnErr(err)
switch strings.ToLower(s) {
case "y", "yes":
return true
case "n", "no":
return false
default:
if len(s) == 0 {
return def
}
}
}
}
func applyTemplate(c config) []byte {
tmpl, err := template.New("config").Parse(configTemplate)
fatalOnErr(err)
b := bytes.NewBuffer(nil)
fatalOnErr(tmpl.Execute(b, c))
return b.Bytes()
}
func fatalOnErr(err error) {
if err != nil {
_, _ = fmt.Fprintf(os.Stderr, "Error: %v\n", err)
os.Exit(1)
}
}
func depositGas(cmd *cobra.Command, acc *wallet.Account, network string) {
sideClient := initClient(n3config[network].MorphRPC)
balanceHash, _ := util.Uint160DecodeStringLE(n3config[network].BalanceContract)
sideActor, err := actor.NewSimple(sideClient, acc)
if err != nil {
fatalOnErr(fmt.Errorf("creating actor over side chain client: %w", err))
}
sideGas := nep17.NewReader(sideActor, balanceHash)
accSH := acc.Contract.ScriptHash()
balance, err := sideGas.BalanceOf(accSH)
if err != nil {
fatalOnErr(fmt.Errorf("side chain balance: %w", err))
}
ok := getConfirmation(false, fmt.Sprintf("Current NeoFS balance is %s, make a deposit? y/[n]: ",
fixedn.ToString(balance, 12)))
if !ok {
return
}
amountStr := getString("Enter amount in GAS: ")
amount, err := fixedn.FromString(amountStr, 8)
if err != nil {
fatalOnErr(fmt.Errorf("invalid amount: %w", err))
}
mainClient := initClient(n3config[network].RPC)
neofsHash, _ := util.Uint160DecodeStringLE(n3config[network].NeoFSContract)
mainActor, err := actor.NewSimple(mainClient, acc)
if err != nil {
fatalOnErr(fmt.Errorf("creating actor over main chain client: %w", err))
}
mainGas := nep17.New(mainActor, gas.Hash)
txHash, _, err := mainGas.Transfer(accSH, neofsHash, amount, nil)
if err != nil {
fatalOnErr(fmt.Errorf("sending TX to the NeoFS contract: %w", err))
}
cmd.Print("Waiting for transactions to persist.")
tick := time.NewTicker(time.Second / 2)
defer tick.Stop()
timer := time.NewTimer(time.Second * 20)
defer timer.Stop()
at := trigger.Application
loop:
for {
select {
case <-tick.C:
_, err := mainClient.GetApplicationLog(txHash, &at)
if err == nil {
cmd.Print("\n")
break loop
}
cmd.Print(".")
case <-timer.C:
cmd.Printf("\nTimeout while waiting for transaction to persist.\n")
if getConfirmation(false, "Continue configuration? yes/[no]: ") {
return
}
os.Exit(1)
}
}
}
func initClient(rpc []string) *rpcclient.Client {
var c *rpcclient.Client
var err error
shuffled := slices.Clone(rpc)
rand.Shuffle(len(shuffled), func(i, j int) { shuffled[i], shuffled[j] = shuffled[j], shuffled[i] })
for _, endpoint := range shuffled {
c, err = rpcclient.New(context.Background(), "https://"+endpoint, rpcclient.Options{
DialTimeout: time.Second * 2,
RequestTimeout: time.Second * 5,
})
if err != nil {
continue
}
if err = c.Init(); err != nil {
continue
}
return c
}
fatalOnErr(fmt.Errorf("can't create N3 client: %w", err))
panic("unreachable")
}
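
For reference, `applyTemplate` renders `configTemplate` (not included in this diff) with the answers collected above. A minimal sketch of the kind of YAML the wizard could emit, assuming a layout shaped like the example configs elsewhere in this change set; the real template and key names may differ:

```yaml
node:
  wallet:
    path: /path/to/wallet.json
    address: <account-address>
    password: ""
  addresses:
    - frostfs.example.org:8080      # publicly announced address
  attribute_0: UN-LOCODE:RU LED
  relay: false
grpc:
  num: 1
  0:
    endpoint: localhost:8080        # listening address
    tls:
      enabled: false
control:
  grpc:
    endpoint: localhost:8090        # control endpoint (assumed key name)
morph:
  rpc_endpoint:
    - wss://rpc1.morph.frostfs.info:40341/ws
storage:
  shard:
    0:
      metabase:
        path: /storage/meta
      blobstor:
        - path: /storage/blob
          type: fstree
```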

View file

@ -56,7 +56,7 @@ func GetSDKClient(ctx context.Context, cmd *cobra.Command, key *ecdsa.PrivateKey
prmDial := client.PrmDial{
Endpoint: addr.URIAddr(),
GRPCDialOptions: []grpc.DialOption{
grpc.WithChainUnaryInterceptor(tracing.NewUnaryClientInteceptor()),
grpc.WithChainUnaryInterceptor(tracing.NewUnaryClientInterceptor()),
grpc.WithChainStreamInterceptor(tracing.NewStreamClientInterceptor()),
grpc.WithDefaultCallOptions(grpc.WaitForReady(true)),
},

View file

@ -62,7 +62,7 @@ func listTargets(cmd *cobra.Command, _ []string) {
tw := tabwriter.NewWriter(buf, 0, 2, 2, ' ', 0)
_, _ = tw.Write([]byte("#\tName\tType\n"))
for i, t := range targets {
_, _ = tw.Write([]byte(fmt.Sprintf("%s\t%s\t%s\n", strconv.Itoa(i), t.GetName(), t.GetType())))
_, _ = tw.Write(fmt.Appendf(nil, "%s\t%s\t%s\n", strconv.Itoa(i), t.GetName(), t.GetType()))
}
_ = tw.Flush()
cmd.Print(buf.String())

View file

@ -11,7 +11,6 @@ import (
rawclient "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/rpc/client"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
"github.com/mr-tron/base58"
"github.com/spf13/cobra"
)

View file

@ -24,7 +24,7 @@ var writecacheShardCmd = &cobra.Command{
var sealWritecacheShardCmd = &cobra.Command{
Use: "seal",
Short: "Flush objects from write-cache and move write-cache to degraded read only mode.",
Long: "Flush all the objects from the write-cache to the main storage and move the write-cache to the degraded read only mode: write-cache will be empty and no objects will be put in it.",
Long: "Flush all the objects from the write-cache to the main storage and move the write-cache to the 'CLOSED' mode: write-cache will be empty and no objects will be put in it.",
Run: sealWritecache,
}

View file

@ -33,7 +33,7 @@ func _client() (tree.TreeServiceClient, error) {
opts := []grpc.DialOption{
grpc.WithChainUnaryInterceptor(
tracing.NewUnaryClientInteceptor(),
tracing.NewUnaryClientInterceptor(),
),
grpc.WithChainStreamInterceptor(
tracing.NewStreamClientInterceptor(),

View file

@ -9,6 +9,7 @@ import (
configViper "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common/config"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs"
control "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control/ir"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/logger"
"github.com/spf13/viper"
"go.uber.org/zap"
)
@ -38,13 +39,14 @@ func reloadConfig() error {
}
cmode.Store(cfg.GetBool("node.kludge_compatibility_mode"))
audit.Store(cfg.GetBool("audit.enabled"))
var logPrm logger.Prm
err = logPrm.SetLevelString(cfg.GetString("logger.level"))
if err != nil {
return err
}
logPrm.PrependTimestamp = cfg.GetBool("logger.timestamp")
log.Reload(logPrm)
return logPrm.Reload()
return nil
}
func watchForSignal(ctx context.Context, cancel func()) {

View file

@ -31,7 +31,6 @@ const (
var (
wg = new(sync.WaitGroup)
intErr = make(chan error) // internal inner ring errors
logPrm = new(logger.Prm)
innerRing *innerring.Server
pprofCmp *pprofComponent
metricsCmp *httpComponent
@ -70,6 +69,7 @@ func main() {
metrics := irMetrics.NewInnerRingMetrics()
var logPrm logger.Prm
err = logPrm.SetLevelString(
cfg.GetString("logger.level"),
)

View file

@ -1,6 +1,8 @@
package tui
import (
"slices"
"github.com/gdamore/tcell/v2"
"github.com/rivo/tview"
)
@ -26,7 +28,7 @@ func (f *InputFieldWithHistory) AddToHistory(s string) {
// Used history data for search prompt, so just make that data recent.
if f.historyPointer != len(f.history) && s == f.history[f.historyPointer] {
f.history = append(f.history[:f.historyPointer], f.history[f.historyPointer+1:]...)
f.history = slices.Delete(f.history, f.historyPointer, f.historyPointer+1)
f.history = append(f.history, s)
}

View file

@ -108,6 +108,7 @@ type applicationConfiguration struct {
level string
destination string
timestamp bool
options []zap.Option
}
ObjectCfg struct {
@ -117,7 +118,6 @@ type applicationConfiguration struct {
EngineCfg struct {
errorThreshold uint32
shardPoolSize uint32
shards []shardCfg
lowMem bool
}
@ -233,6 +233,14 @@ func (a *applicationConfiguration) readConfig(c *config.Config) error {
a.LoggerCfg.level = loggerconfig.Level(c)
a.LoggerCfg.destination = loggerconfig.Destination(c)
a.LoggerCfg.timestamp = loggerconfig.Timestamp(c)
var opts []zap.Option
if loggerconfig.ToLokiConfig(c).Enabled {
opts = []zap.Option{zap.WrapCore(func(core zapcore.Core) zapcore.Core {
lokiCore := lokicore.New(core, loggerconfig.ToLokiConfig(c))
return lokiCore
})}
}
a.LoggerCfg.options = opts
// Object
@ -250,7 +258,6 @@ func (a *applicationConfiguration) readConfig(c *config.Config) error {
// Storage Engine
a.EngineCfg.errorThreshold = engineconfig.ShardErrorThreshold(c)
a.EngineCfg.shardPoolSize = engineconfig.ShardPoolSize(c)
a.EngineCfg.lowMem = engineconfig.EngineLowMemoryConsumption(c)
return engineconfig.IterateShards(c, false, func(sc *shardconfig.Config) error { return a.updateShardConfig(c, sc) })
@ -475,7 +482,6 @@ type shared struct {
// dynamicConfiguration stores parameters of the
// components that supports runtime reconfigurations.
type dynamicConfiguration struct {
logger *logger.Prm
pprof *httpComponent
metrics *httpComponent
}
@ -716,16 +722,11 @@ func initCfg(appCfg *config.Config) *cfg {
netState.metrics = c.metricsCollector
logPrm := c.loggerPrm()
logPrm, err := c.loggerPrm()
fatalOnErr(err)
logPrm.SamplingHook = c.metricsCollector.LogMetrics().GetSamplingHook()
log, err := logger.NewLogger(logPrm)
fatalOnErr(err)
if loggerconfig.ToLokiConfig(appCfg).Enabled {
log.WithOptions(zap.WrapCore(func(core zapcore.Core) zapcore.Core {
lokiCore := lokicore.New(core, loggerconfig.ToLokiConfig(appCfg))
return lokiCore
}))
}
c.internals = initInternals(appCfg, log)
@ -893,7 +894,6 @@ func (c *cfg) engineOpts() []engine.Option {
var opts []engine.Option
opts = append(opts,
engine.WithShardPoolSize(c.EngineCfg.shardPoolSize),
engine.WithErrorThreshold(c.EngineCfg.errorThreshold),
engine.WithLogger(c.log),
engine.WithLowMemoryConsumption(c.EngineCfg.lowMem),
@ -933,6 +933,7 @@ func (c *cfg) getWriteCacheOpts(shCfg shardCfg) []writecache.Option {
writecache.WithMaxCacheCount(wcRead.countLimit),
writecache.WithNoSync(wcRead.noSync),
writecache.WithLogger(c.log),
writecache.WithQoSLimiter(shCfg.limiter),
)
}
return writeCacheOpts
@ -1048,6 +1049,7 @@ func (c *cfg) getShardOpts(ctx context.Context, shCfg shardCfg) shardOptsWithID
}
if c.metricsCollector != nil {
mbOptions = append(mbOptions, meta.WithMetrics(lsmetrics.NewMetabaseMetrics(shCfg.metaCfg.path, c.metricsCollector.MetabaseMetrics())))
shCfg.limiter.SetMetrics(c.metricsCollector.QoSMetrics())
}
var sh shardOptsWithID
@ -1077,26 +1079,23 @@ func (c *cfg) getShardOpts(ctx context.Context, shCfg shardCfg) shardOptsWithID
return sh
}
func (c *cfg) loggerPrm() *logger.Prm {
// check if it has been inited before
if c.dynamicConfiguration.logger == nil {
c.dynamicConfiguration.logger = new(logger.Prm)
}
func (c *cfg) loggerPrm() (logger.Prm, error) {
var prm logger.Prm
// (re)init read configuration
err := c.dynamicConfiguration.logger.SetLevelString(c.LoggerCfg.level)
err := prm.SetLevelString(c.LoggerCfg.level)
if err != nil {
// not expected since validation should be performed before
panic("incorrect log level format: " + c.LoggerCfg.level)
return logger.Prm{}, errors.New("incorrect log level format: " + c.LoggerCfg.level)
}
err = c.dynamicConfiguration.logger.SetDestination(c.LoggerCfg.destination)
err = prm.SetDestination(c.LoggerCfg.destination)
if err != nil {
// not expected since validation should be performed before
panic("incorrect log destination format: " + c.LoggerCfg.destination)
return logger.Prm{}, errors.New("incorrect log destination format: " + c.LoggerCfg.destination)
}
c.dynamicConfiguration.logger.PrependTimestamp = c.LoggerCfg.timestamp
prm.PrependTimestamp = c.LoggerCfg.timestamp
prm.Options = c.LoggerCfg.options
return c.dynamicConfiguration.logger
return prm, nil
}
func (c *cfg) LocalAddress() network.AddressGroup {
@ -1336,11 +1335,7 @@ func (c *cfg) reloadConfig(ctx context.Context) {
// all the components are expected to support
// Logger's dynamic reconfiguration approach
// Logger
logPrm := c.loggerPrm()
components := c.getComponents(ctx, logPrm)
components := c.getComponents(ctx)
// Object
c.cfgObject.tombstoneLifetime.Store(c.ObjectCfg.tombstoneLifetime)
@ -1378,10 +1373,17 @@ func (c *cfg) reloadConfig(ctx context.Context) {
c.log.Info(ctx, logs.FrostFSNodeConfigurationHasBeenReloadedSuccessfully)
}
func (c *cfg) getComponents(ctx context.Context, logPrm *logger.Prm) []dCmp {
func (c *cfg) getComponents(ctx context.Context) []dCmp {
var components []dCmp
components = append(components, dCmp{"logger", logPrm.Reload})
components = append(components, dCmp{"logger", func() error {
prm, err := c.loggerPrm()
if err != nil {
return err
}
c.log.Reload(prm)
return nil
}})
components = append(components, dCmp{"runtime", func() error {
setRuntimeParameters(ctx, c)
return nil

View file

@ -12,13 +12,10 @@ import (
func TestConfigDir(t *testing.T) {
dir := t.TempDir()
cfgFileName0 := path.Join(dir, "cfg_00.json")
cfgFileName1 := path.Join(dir, "cfg_01.yml")
cfgFileName := path.Join(dir, "cfg_01.yml")
require.NoError(t, os.WriteFile(cfgFileName0, []byte(`{"storage":{"shard_pool_size":15}}`), 0o777))
require.NoError(t, os.WriteFile(cfgFileName1, []byte("logger:\n level: debug"), 0o777))
require.NoError(t, os.WriteFile(cfgFileName, []byte("logger:\n level: debug"), 0o777))
c := New("", dir, "")
require.Equal(t, "debug", cast.ToString(c.Sub("logger").Value("level")))
require.EqualValues(t, 15, cast.ToUint32(c.Sub("storage").Value("shard_pool_size")))
}

View file

@ -11,10 +11,6 @@ import (
const (
subsection = "storage"
// ShardPoolSizeDefault is a default value of routine pool size per-shard to
// process object PUT operations in a storage engine.
ShardPoolSizeDefault = 20
)
// ErrNoShardConfigured is returned when at least 1 shard is required but none are found.
@ -65,18 +61,6 @@ func IterateShards(c *config.Config, required bool, f func(*shardconfig.Config)
return nil
}
// ShardPoolSize returns the value of "shard_pool_size" config parameter from "storage" section.
//
// Returns ShardPoolSizeDefault if the value is not a positive number.
func ShardPoolSize(c *config.Config) uint32 {
v := config.Uint32Safe(c.Sub(subsection), "shard_pool_size")
if v > 0 {
return v
}
return ShardPoolSizeDefault
}
// ShardErrorThreshold returns the value of "shard_ro_error_threshold" config parameter from "storage" section.
//
// Returns 0 if the value is missing.

View file

@ -54,7 +54,6 @@ func TestEngineSection(t *testing.T) {
require.False(t, handlerCalled)
require.EqualValues(t, 0, engineconfig.ShardErrorThreshold(empty))
require.EqualValues(t, engineconfig.ShardPoolSizeDefault, engineconfig.ShardPoolSize(empty))
require.EqualValues(t, mode.ReadWrite, shardconfig.From(empty).Mode())
})
@ -64,7 +63,6 @@ func TestEngineSection(t *testing.T) {
num := 0
require.EqualValues(t, 100, engineconfig.ShardErrorThreshold(c))
require.EqualValues(t, 15, engineconfig.ShardPoolSize(c))
err := engineconfig.IterateShards(c, true, func(sc *shardconfig.Config) error {
defer func() {
@ -170,9 +168,10 @@ func TestEngineSection(t *testing.T) {
LimitOps: toPtr(25000),
},
{
Tag: "policer",
Weight: toPtr(5),
LimitOps: toPtr(25000),
Tag: "policer",
Weight: toPtr(5),
LimitOps: toPtr(25000),
Prohibited: true,
},
})
require.ElementsMatch(t, writeLimits.Tags,

View file

@ -37,10 +37,7 @@ func (x *Config) Perm() fs.FileMode {
// Returns 0 if the value is not a positive number.
func (x *Config) MaxBatchDelay() time.Duration {
d := config.DurationSafe((*config.Config)(x), "max_batch_delay")
if d < 0 {
d = 0
}
return d
return max(d, 0)
}
// MaxBatchSize returns the value of "max_batch_size" config parameter.
@ -48,10 +45,7 @@ func (x *Config) MaxBatchDelay() time.Duration {
// Returns 0 if the value is not a positive number.
func (x *Config) MaxBatchSize() int {
s := int(config.IntSafe((*config.Config)(x), "max_batch_size"))
if s < 0 {
s = 0
}
return s
return max(s, 0)
}
// NoSync returns the value of "no_sync" config parameter.
@ -66,8 +60,5 @@ func (x *Config) NoSync() bool {
// Returns 0 if the value is not a positive number.
func (x *Config) PageSize() int {
s := int(config.SizeInBytesSafe((*config.Config)(x), "page_size"))
if s < 0 {
s = 0
}
return s
return max(s, 0)
}
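
These getters now rely on the generic `max` builtin (available since Go 1.21, consistent with the `go 1.23` bump in this change set) instead of an explicit clamp. A minimal standalone illustration of the same pattern:

```go
package main

import (
	"fmt"
	"time"
)

// clampNonNegative mirrors the refactored config getters: a negative
// configured value is treated as "unset" and collapsed to zero.
func clampNonNegative(d time.Duration) time.Duration {
	return max(d, 0)
}

func main() {
	fmt.Println(clampNonNegative(-5 * time.Second)) // 0s
	fmt.Println(clampNonNegative(2 * time.Second))  // 2s
}
```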

View file

@ -84,6 +84,7 @@ type IOTagConfig struct {
Weight *float64
LimitOps *float64
ReservedOps *float64
Prohibited bool
}
func tags(c *config.Config) []IOTagConfig {
@ -119,6 +120,13 @@ func tags(c *config.Config) []IOTagConfig {
tagConfig.ReservedOps = &r
}
v = c.Value(strconv.Itoa(i) + ".prohibited")
if v != nil {
r, err := cast.ToBoolE(v)
panicOnErr(err)
tagConfig.Prohibited = r
}
result = append(result, tagConfig)
}
}

View file

@ -52,10 +52,7 @@ func (x *Config) NoSync() bool {
// Returns 0 if the value is not a positive number.
func (x *Config) MaxBatchDelay() time.Duration {
d := config.DurationSafe((*config.Config)(x), "max_batch_delay")
if d <= 0 {
d = 0
}
return d
return max(d, 0)
}
// MaxBatchSize returns the value of "max_batch_size" config parameter.
@ -63,8 +60,5 @@ func (x *Config) MaxBatchDelay() time.Duration {
// Returns 0 if the value is not a positive number.
func (x *Config) MaxBatchSize() int {
s := int(config.IntSafe((*config.Config)(x), "max_batch_size"))
if s <= 0 {
s = 0
}
return s
return max(s, 0)
}

View file

@ -31,12 +31,11 @@ func Limits(c *config.Config) []LimitConfig {
break
}
maxOps := config.IntSafe(sc, "max_ops")
if maxOps == 0 {
if sc.Value("max_ops") == nil {
panic("no max operations for method group")
}
limits = append(limits, LimitConfig{methods, maxOps})
limits = append(limits, LimitConfig{methods, config.IntSafe(sc, "max_ops")})
}
return limits

View file

@ -38,7 +38,7 @@ func TestRPCSection(t *testing.T) {
})
t.Run("no max operations", func(t *testing.T) {
const path = "testdata/node"
const path = "testdata/no_max_ops"
fileConfigTest := func(c *config.Config) {
require.Panics(t, func() { _ = Limits(c) })
@ -50,4 +50,28 @@ func TestRPCSection(t *testing.T) {
configtest.ForEnvFileType(t, path, fileConfigTest)
})
})
t.Run("zero max operations", func(t *testing.T) {
const path = "testdata/zero_max_ops"
fileConfigTest := func(c *config.Config) {
limits := Limits(c)
require.Len(t, limits, 2)
limit0 := limits[0]
limit1 := limits[1]
require.ElementsMatch(t, limit0.Methods, []string{"/neo.fs.v2.object.ObjectService/PutSingle", "/neo.fs.v2.object.ObjectService/Put"})
require.Equal(t, limit0.MaxOps, int64(0))
require.ElementsMatch(t, limit1.Methods, []string{"/neo.fs.v2.object.ObjectService/Get"})
require.Equal(t, limit1.MaxOps, int64(10000))
}
configtest.ForEachFileType(path, fileConfigTest)
t.Run("ENV", func(t *testing.T) {
configtest.ForEnvFileType(t, path, fileConfigTest)
})
})
}

View file

@ -0,0 +1,4 @@
FROSTFS_RPC_LIMITS_0_METHODS="/neo.fs.v2.object.ObjectService/PutSingle /neo.fs.v2.object.ObjectService/Put"
FROSTFS_RPC_LIMITS_0_MAX_OPS=0
FROSTFS_RPC_LIMITS_1_METHODS="/neo.fs.v2.object.ObjectService/Get"
FROSTFS_RPC_LIMITS_1_MAX_OPS=10000

View file

@ -0,0 +1,19 @@
{
"rpc": {
"limits": [
{
"methods": [
"/neo.fs.v2.object.ObjectService/PutSingle",
"/neo.fs.v2.object.ObjectService/Put"
],
"max_ops": 0
},
{
"methods": [
"/neo.fs.v2.object.ObjectService/Get"
],
"max_ops": 10000
}
]
}
}

View file

@ -0,0 +1,9 @@
rpc:
limits:
- methods:
- /neo.fs.v2.object.ObjectService/PutSingle
- /neo.fs.v2.object.ObjectService/Put
max_ops: 0
- methods:
- /neo.fs.v2.object.ObjectService/Get
max_ops: 10000

View file

@ -16,7 +16,6 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/network/cache"
objectTransportGRPC "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/network/transport/object/grpc"
objectService "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/object"
v2 "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/object/acl/v2"
objectAPE "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/object/ape"
objectwriter "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/object/common/writer"
deletesvc "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/object/delete"
@ -172,12 +171,10 @@ func initObjectService(c *cfg) {
splitSvc := createSplitService(c, sPutV2, sGetV2, sSearchV2, sDeleteV2, sPatch)
apeSvc := createAPEService(c, splitSvc)
aclSvc := createACLServiceV2(c, apeSvc, &irFetcher)
apeSvc := createAPEService(c, &irFetcher, splitSvc)
var commonSvc objectService.Common
commonSvc.Init(&c.internals, aclSvc)
commonSvc.Init(&c.internals, apeSvc)
respSvc := objectService.NewResponseService(
&commonSvc,
@ -284,7 +281,7 @@ func addPolicer(c *cfg, keyStorage *util.KeyStorage, clientConstructor *cache.Cl
})
}
func createInnerRingFetcher(c *cfg) v2.InnerRingFetcher {
func createInnerRingFetcher(c *cfg) objectAPE.InnerRingFetcher {
return &innerRingFetcherWithNotary{
sidechain: c.cfgMorph.client,
}
@ -429,17 +426,7 @@ func createSplitService(c *cfg, sPutV2 *putsvcV2.Service, sGetV2 *getsvcV2.Servi
)
}
func createACLServiceV2(c *cfg, apeSvc *objectAPE.Service, irFetcher *cachedIRFetcher) v2.Service {
return v2.New(
apeSvc,
c.netMapSource,
irFetcher,
c.cfgObject.cnrSource,
v2.WithLogger(c.log),
)
}
func createAPEService(c *cfg, splitSvc *objectService.TransportSplitter) *objectAPE.Service {
func createAPEService(c *cfg, irFetcher *cachedIRFetcher, splitSvc *objectService.TransportSplitter) *objectAPE.Service {
return objectAPE.NewService(
objectAPE.NewChecker(
c.cfgObject.cfgAccessPolicyEngine.accessPolicyEngine.LocalStorage(),
@ -451,6 +438,7 @@ func createAPEService(c *cfg, splitSvc *objectService.TransportSplitter) *object
c.cfgObject.cnrSource,
c.binPublicKey,
),
objectAPE.NewRequestInfoExtractor(c.log, c.cfgObject.cnrSource, irFetcher, c.netMapSource),
splitSvc,
)
}

View file

@ -43,11 +43,14 @@ func initQoSService(c *cfg) {
func (s *cfgQoSService) AdjustIncomingTag(ctx context.Context, requestSignPublicKey []byte) context.Context {
rawTag, defined := qosTagging.IOTagFromContext(ctx)
if !defined {
if s.isInternalIOTagPublicKey(ctx, requestSignPublicKey) {
return qosTagging.ContextWithIOTag(ctx, qos.IOTagInternal.String())
}
return qosTagging.ContextWithIOTag(ctx, qos.IOTagClient.String())
}
ioTag, err := qos.FromRawString(rawTag)
if err != nil {
s.logger.Warn(ctx, logs.FailedToParseIncomingIOTag, zap.Error(err))
s.logger.Debug(ctx, logs.FailedToParseIncomingIOTag, zap.Error(err))
return qosTagging.ContextWithIOTag(ctx, qos.IOTagClient.String())
}
@ -70,26 +73,36 @@ func (s *cfgQoSService) AdjustIncomingTag(ctx context.Context, requestSignPublic
return ctx
}
}
s.logger.Debug(ctx, logs.FailedToValidateIncomingIOTag)
return qosTagging.ContextWithIOTag(ctx, qos.IOTagClient.String())
case qos.IOTagInternal:
for _, pk := range s.allowedInternalPubs {
if bytes.Equal(pk, requestSignPublicKey) {
return ctx
}
}
nm, err := s.netmapSource.GetNetMap(ctx, 0)
if err != nil {
s.logger.Debug(ctx, logs.FailedToGetNetmapToAdjustIOTag, zap.Error(err))
return qosTagging.ContextWithIOTag(ctx, qos.IOTagClient.String())
}
for _, node := range nm.Nodes() {
if bytes.Equal(node.PublicKey(), requestSignPublicKey) {
return ctx
}
if s.isInternalIOTagPublicKey(ctx, requestSignPublicKey) {
return ctx
}
s.logger.Debug(ctx, logs.FailedToValidateIncomingIOTag)
return qosTagging.ContextWithIOTag(ctx, qos.IOTagClient.String())
default:
s.logger.Warn(ctx, logs.NotSupportedIncomingIOTagReplacedWithClient, zap.Stringer("io_tag", ioTag))
s.logger.Debug(ctx, logs.NotSupportedIncomingIOTagReplacedWithClient, zap.Stringer("io_tag", ioTag))
return qosTagging.ContextWithIOTag(ctx, qos.IOTagClient.String())
}
}
func (s *cfgQoSService) isInternalIOTagPublicKey(ctx context.Context, publicKey []byte) bool {
for _, pk := range s.allowedInternalPubs {
if bytes.Equal(pk, publicKey) {
return true
}
}
nm, err := s.netmapSource.GetNetMap(ctx, 0)
if err != nil {
s.logger.Debug(ctx, logs.FailedToGetNetmapToAdjustIOTag, zap.Error(err))
return false
}
for _, node := range nm.Nodes() {
if bytes.Equal(node.PublicKey(), publicKey) {
return true
}
}
return false
}

View file

@ -0,0 +1,226 @@
package main
import (
"context"
"testing"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/qos"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/logger/test"
utilTesting "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/testing"
"git.frostfs.info/TrueCloudLab/frostfs-qos/tagging"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/netmap"
"github.com/nspcc-dev/neo-go/pkg/crypto/keys"
"github.com/stretchr/testify/require"
)
func TestQoSService_Client(t *testing.T) {
t.Parallel()
s, pk := testQoSServicePrepare(t)
t.Run("IO tag client defined", func(t *testing.T) {
ctx := tagging.ContextWithIOTag(context.Background(), qos.IOTagClient.String())
ctx = s.AdjustIncomingTag(ctx, pk.Request)
tag, ok := tagging.IOTagFromContext(ctx)
require.True(t, ok)
require.Equal(t, qos.IOTagClient.String(), tag)
})
t.Run("no IO tag defined, signed with unknown key", func(t *testing.T) {
ctx := s.AdjustIncomingTag(context.Background(), pk.Request)
tag, ok := tagging.IOTagFromContext(ctx)
require.True(t, ok)
require.Equal(t, qos.IOTagClient.String(), tag)
})
t.Run("no IO tag defined, signed with allowed critical key", func(t *testing.T) {
ctx := s.AdjustIncomingTag(context.Background(), pk.Critical)
tag, ok := tagging.IOTagFromContext(ctx)
require.True(t, ok)
require.Equal(t, qos.IOTagClient.String(), tag)
})
t.Run("unknown IO tag, signed with unknown key", func(t *testing.T) {
ctx := tagging.ContextWithIOTag(context.Background(), "some IO tag we don't know")
ctx = s.AdjustIncomingTag(ctx, pk.Request)
tag, ok := tagging.IOTagFromContext(ctx)
require.True(t, ok)
require.Equal(t, qos.IOTagClient.String(), tag)
})
t.Run("unknown IO tag, signed with netmap key", func(t *testing.T) {
ctx := tagging.ContextWithIOTag(context.Background(), "some IO tag we don't know")
ctx = s.AdjustIncomingTag(ctx, pk.NetmapNode)
tag, ok := tagging.IOTagFromContext(ctx)
require.True(t, ok)
require.Equal(t, qos.IOTagClient.String(), tag)
})
t.Run("unknown IO tag, signed with allowed internal key", func(t *testing.T) {
ctx := tagging.ContextWithIOTag(context.Background(), "some IO tag we don't know")
ctx = s.AdjustIncomingTag(ctx, pk.Internal)
tag, ok := tagging.IOTagFromContext(ctx)
require.True(t, ok)
require.Equal(t, qos.IOTagClient.String(), tag)
})
t.Run("unknown IO tag, signed with allowed critical key", func(t *testing.T) {
ctx := tagging.ContextWithIOTag(context.Background(), "some IO tag we don't know")
ctx = s.AdjustIncomingTag(ctx, pk.Critical)
tag, ok := tagging.IOTagFromContext(ctx)
require.True(t, ok)
require.Equal(t, qos.IOTagClient.String(), tag)
})
t.Run("IO tag internal defined, signed with unknown key", func(t *testing.T) {
ctx := tagging.ContextWithIOTag(context.Background(), qos.IOTagInternal.String())
ctx = s.AdjustIncomingTag(ctx, pk.Request)
tag, ok := tagging.IOTagFromContext(ctx)
require.True(t, ok)
require.Equal(t, qos.IOTagClient.String(), tag)
})
t.Run("IO tag internal defined, signed with allowed critical key", func(t *testing.T) {
ctx := tagging.ContextWithIOTag(context.Background(), qos.IOTagInternal.String())
ctx = s.AdjustIncomingTag(ctx, pk.Critical)
tag, ok := tagging.IOTagFromContext(ctx)
require.True(t, ok)
require.Equal(t, qos.IOTagClient.String(), tag)
})
t.Run("IO tag critical defined, signed with unknown key", func(t *testing.T) {
ctx := tagging.ContextWithIOTag(context.Background(), qos.IOTagCritical.String())
ctx = s.AdjustIncomingTag(ctx, pk.Request)
tag, ok := tagging.IOTagFromContext(ctx)
require.True(t, ok)
require.Equal(t, qos.IOTagClient.String(), tag)
})
t.Run("IO tag critical defined, signed with allowed internal key", func(t *testing.T) {
ctx := tagging.ContextWithIOTag(context.Background(), qos.IOTagCritical.String())
ctx = s.AdjustIncomingTag(ctx, pk.Internal)
tag, ok := tagging.IOTagFromContext(ctx)
require.True(t, ok)
require.Equal(t, qos.IOTagClient.String(), tag)
})
}
func TestQoSService_Internal(t *testing.T) {
t.Parallel()
s, pk := testQoSServicePrepare(t)
t.Run("IO tag internal defined, signed with netmap key", func(t *testing.T) {
ctx := tagging.ContextWithIOTag(context.Background(), qos.IOTagInternal.String())
ctx = s.AdjustIncomingTag(ctx, pk.NetmapNode)
tag, ok := tagging.IOTagFromContext(ctx)
require.True(t, ok)
require.Equal(t, qos.IOTagInternal.String(), tag)
})
t.Run("IO tag internal defined, signed with allowed internal key", func(t *testing.T) {
ctx := tagging.ContextWithIOTag(context.Background(), qos.IOTagInternal.String())
ctx = s.AdjustIncomingTag(ctx, pk.Internal)
tag, ok := tagging.IOTagFromContext(ctx)
require.True(t, ok)
require.Equal(t, qos.IOTagInternal.String(), tag)
})
t.Run("no IO tag defined, signed with netmap key", func(t *testing.T) {
ctx := s.AdjustIncomingTag(context.Background(), pk.NetmapNode)
tag, ok := tagging.IOTagFromContext(ctx)
require.True(t, ok)
require.Equal(t, qos.IOTagInternal.String(), tag)
})
t.Run("no IO tag defined, signed with allowed internal key", func(t *testing.T) {
ctx := s.AdjustIncomingTag(context.Background(), pk.Internal)
tag, ok := tagging.IOTagFromContext(ctx)
require.True(t, ok)
require.Equal(t, qos.IOTagInternal.String(), tag)
})
}
func TestQoSService_Critical(t *testing.T) {
t.Parallel()
s, pk := testQoSServicePrepare(t)
t.Run("IO tag critical defined, signed with netmap key", func(t *testing.T) {
ctx := tagging.ContextWithIOTag(context.Background(), qos.IOTagCritical.String())
ctx = s.AdjustIncomingTag(ctx, pk.NetmapNode)
tag, ok := tagging.IOTagFromContext(ctx)
require.True(t, ok)
require.Equal(t, qos.IOTagCritical.String(), tag)
})
t.Run("IO tag critical defined, signed with allowed critical key", func(t *testing.T) {
ctx := tagging.ContextWithIOTag(context.Background(), qos.IOTagCritical.String())
ctx = s.AdjustIncomingTag(ctx, pk.Critical)
tag, ok := tagging.IOTagFromContext(ctx)
require.True(t, ok)
require.Equal(t, qos.IOTagCritical.String(), tag)
})
}
func TestQoSService_NetmapGetError(t *testing.T) {
t.Parallel()
s, pk := testQoSServicePrepare(t)
s.netmapSource = &utilTesting.TestNetmapSource{}
t.Run("IO tag internal defined, signed with netmap key", func(t *testing.T) {
ctx := tagging.ContextWithIOTag(context.Background(), qos.IOTagInternal.String())
ctx = s.AdjustIncomingTag(ctx, pk.NetmapNode)
tag, ok := tagging.IOTagFromContext(ctx)
require.True(t, ok)
require.Equal(t, qos.IOTagClient.String(), tag)
})
t.Run("IO tag critical defined, signed with netmap key", func(t *testing.T) {
ctx := tagging.ContextWithIOTag(context.Background(), qos.IOTagCritical.String())
ctx = s.AdjustIncomingTag(ctx, pk.NetmapNode)
tag, ok := tagging.IOTagFromContext(ctx)
require.True(t, ok)
require.Equal(t, qos.IOTagClient.String(), tag)
})
t.Run("no IO tag defined, signed with netmap key", func(t *testing.T) {
ctx := s.AdjustIncomingTag(context.Background(), pk.NetmapNode)
tag, ok := tagging.IOTagFromContext(ctx)
require.True(t, ok)
require.Equal(t, qos.IOTagClient.String(), tag)
})
t.Run("unknown IO tag, signed with netmap key", func(t *testing.T) {
ctx := tagging.ContextWithIOTag(context.Background(), "some IO tag we don't know")
ctx = s.AdjustIncomingTag(ctx, pk.NetmapNode)
tag, ok := tagging.IOTagFromContext(ctx)
require.True(t, ok)
require.Equal(t, qos.IOTagClient.String(), tag)
})
}
func testQoSServicePrepare(t *testing.T) (*cfgQoSService, *testQoSServicePublicKeys) {
nmSigner, err := keys.NewPrivateKey()
require.NoError(t, err)
reqSigner, err := keys.NewPrivateKey()
require.NoError(t, err)
allowedCritSigner, err := keys.NewPrivateKey()
require.NoError(t, err)
allowedIntSigner, err := keys.NewPrivateKey()
require.NoError(t, err)
var node netmap.NodeInfo
node.SetPublicKey(nmSigner.PublicKey().Bytes())
nm := &netmap.NetMap{}
nm.SetEpoch(100)
nm.SetNodes([]netmap.NodeInfo{node})
return &cfgQoSService{
logger: test.NewLogger(t),
netmapSource: &utilTesting.TestNetmapSource{
Netmaps: map[uint64]*netmap.NetMap{
100: nm,
},
CurrentEpoch: 100,
},
allowedCriticalPubs: [][]byte{
allowedCritSigner.PublicKey().Bytes(),
},
allowedInternalPubs: [][]byte{
allowedIntSigner.PublicKey().Bytes(),
},
},
&testQoSServicePublicKeys{
NetmapNode: nmSigner.PublicKey().Bytes(),
Request: reqSigner.PublicKey().Bytes(),
Internal: allowedIntSigner.PublicKey().Bytes(),
Critical: allowedCritSigner.PublicKey().Bytes(),
}
}
type testQoSServicePublicKeys struct {
NetmapNode []byte
Request []byte
Internal []byte
Critical []byte
}

View file

@ -1,7 +1,6 @@
package main
import (
"os"
"path/filepath"
"testing"
@ -22,17 +21,4 @@ func TestValidate(t *testing.T) {
require.NoError(t, err)
})
})
t.Run("mainnet", func(t *testing.T) {
os.Clearenv() // ENVs have priority over config files, so we do this in tests
p := filepath.Join(exampleConfigPrefix, "mainnet/config.yml")
c := config.New(p, "", config.EnvPrefix)
require.NoError(t, validateConfig(c))
})
t.Run("testnet", func(t *testing.T) {
os.Clearenv() // ENVs have priority over config files, so we do this in tests
p := filepath.Join(exampleConfigPrefix, "testnet/config.yml")
c := config.New(p, "", config.EnvPrefix)
require.NoError(t, validateConfig(c))
})
}

View file

@ -97,7 +97,6 @@ FROSTFS_RPC_LIMITS_1_METHODS="/neo.fs.v2.object.ObjectService/Get"
FROSTFS_RPC_LIMITS_1_MAX_OPS=10000
# Storage engine section
FROSTFS_STORAGE_SHARD_POOL_SIZE=15
FROSTFS_STORAGE_SHARD_RO_ERROR_THRESHOLD=100
## 0 shard
### Flag to refill Metabase from BlobStor
@ -181,6 +180,7 @@ FROSTFS_STORAGE_SHARD_0_LIMITS_READ_TAGS_3_LIMIT_OPS=25000
FROSTFS_STORAGE_SHARD_0_LIMITS_READ_TAGS_4_TAG=policer
FROSTFS_STORAGE_SHARD_0_LIMITS_READ_TAGS_4_WEIGHT=5
FROSTFS_STORAGE_SHARD_0_LIMITS_READ_TAGS_4_LIMIT_OPS=25000
FROSTFS_STORAGE_SHARD_0_LIMITS_READ_TAGS_4_PROHIBITED=true
FROSTFS_STORAGE_SHARD_0_LIMITS_WRITE_TAGS_0_TAG=internal
FROSTFS_STORAGE_SHARD_0_LIMITS_WRITE_TAGS_0_WEIGHT=200
FROSTFS_STORAGE_SHARD_0_LIMITS_WRITE_TAGS_0_LIMIT_OPS=0

View file

@ -158,7 +158,6 @@
]
},
"storage": {
"shard_pool_size": 15,
"shard_ro_error_threshold": 100,
"shard": {
"0": {
@ -253,7 +252,8 @@
{
"tag": "policer",
"weight": 5,
"limit_ops": 25000
"limit_ops": 25000,
"prohibited": true
}
]
},

View file

@ -135,7 +135,6 @@ rpc:
storage:
# note: shard configuration can be omitted for relay node (see `node.relay`)
shard_pool_size: 15 # size of per-shard worker pools used for PUT operations
shard_ro_error_threshold: 100 # amount of errors to occur before shard is made read-only (default: 0, ignore errors)
shard:
@ -250,6 +249,7 @@ storage:
- tag: policer
weight: 5
limit_ops: 25000
prohibited: true
write:
max_running_ops: 1000
max_waiting_ops: 100

View file

@ -1,28 +0,0 @@
# N3 Mainnet Storage node configuration
Here is a template for simple storage node configuration in N3 Mainnet.
Make sure to specify correct values instead of `<...>` placeholders.
Do not change `contracts` section. Run the latest frostfs-node release with
the fixed config `frostfs-node -c config.yml`
To use NeoFS in the Mainnet, you need to deposit assets to NeoFS contract.
The contract sript hash is `2cafa46838e8b564468ebd868dcafdd99dce6221`
(N3 address `NNxVrKjLsRkWsmGgmuNXLcMswtxTGaNQLk`)
## Tips
Use `grpcs://` scheme in the announced address if you enable TLS in grpc server.
```yaml
node:
addresses:
- grpcs://frostfs.my.org:8080
grpc:
num: 1
0:
endpoint: frostfs.my.org:8080
tls:
enabled: true
certificate: /path/to/cert
key: /path/to/key
```

View file

@ -1,70 +0,0 @@
node:
wallet:
path: <path/to/wallet>
address: <address-in-wallet>
password: <password>
addresses:
- <announced.address:port>
attribute_0: UN-LOCODE:<XX YYY>
attribute_1: Price:100000
attribute_2: User-Agent:FrostFS\/0.9999
grpc:
num: 1
0:
endpoint: <listen.local.address:port>
tls:
enabled: false
storage:
shard_num: 1
shard:
0:
metabase:
path: /storage/path/metabase
perm: 0600
blobstor:
- path: /storage/path/blobovnicza
type: blobovnicza
perm: 0600
opened_cache_capacity: 32
depth: 1
width: 1
- path: /storage/path/fstree
type: fstree
perm: 0600
depth: 4
writecache:
enabled: false
gc:
remover_batch_size: 100
remover_sleep_interval: 1m
logger:
level: info
prometheus:
enabled: true
address: localhost:9090
shutdown_timeout: 15s
object:
put:
remote_pool_size: 100
local_pool_size: 100
morph:
rpc_endpoint:
- wss://rpc1.morph.frostfs.info:40341/ws
- wss://rpc2.morph.frostfs.info:40341/ws
- wss://rpc3.morph.frostfs.info:40341/ws
- wss://rpc4.morph.frostfs.info:40341/ws
- wss://rpc5.morph.frostfs.info:40341/ws
- wss://rpc6.morph.frostfs.info:40341/ws
- wss://rpc7.morph.frostfs.info:40341/ws
dial_timeout: 20s
contracts:
balance: dc1ec98d9d0c5f9dfade16144defe08cffc5ca55
container: 1b6e68d299b570e1cb7e86eadfdc06aa2e8e0cc5
netmap: 7c5bdb23e36cc7cce95bf42f3ab9e452c2501df1

View file

@ -1,129 +0,0 @@
# N3 Testnet Storage node configuration
There is a prepared configuration for NeoFS Storage Node deployment in
N3 Testnet. The easiest way to deploy a Storage Node is to use the prepared
docker image and run it with docker-compose.
## Build image
Prepared **frostfs-storage-testnet** image is available at Docker Hub.
However, if you need to rebuild it for some reason, run
`make image-storage-testnet` command.
```
$ make image-storage-testnet
...
Successfully built ab0557117b02
Successfully tagged nspccdev/neofs-storage-testnet:0.25.1
```
## Deploy node
To run a storage node in N3 Testnet environment, you should deposit GAS assets,
update docker-compose file and start the node.
### Deposit
The Storage Node owner should deposit GAS to NeoFS smart contract. It generates a
bit of sidechain GAS in the node's wallet. Sidechain GAS is used to send bootstrap tx.
First, obtain GAS in N3 Testnet chain. You can do that with
[faucet](https://neowish.ngd.network) service.
Then, make a deposit by transferring GAS to NeoFS contract in N3 Testnet.
You can provide scripthash in the `data` argument of transfer tx to make a
deposit to a specified account. Otherwise, deposit is made to the tx sender.
NeoFS contract scripthash in N3 Testnet is `b65d8243ac63983206d17e5221af0653a7266fa1`,
so the address is `NadZ8YfvkddivcFFkztZgfwxZyKf1acpRF`.
See a deposit example with `neo-go`.
```
neo-go wallet nep17 transfer -w wallet.json -r https://rpc01.testnet.n3.nspcc.ru:21331 \
--from NXxRAFPqPstaPByndKMHuC8iGcaHgtRY3m \
--to NadZ8YfvkddivcFFkztZgfwxZyKf1acpRF \
--token GAS \
--amount 1
```
### Configure
Next, configure `node_config.env` file. Change endpoints values. Both
should contain your **public** IP.
```
NEOFS_GRPC_0_ENDPOINT=65.52.183.157:36512
NEOFS_NODE_ADDRESSES=65.52.183.157:36512
```
Set up your [UN/LOCODE](https://unece.org/trade/cefact/unlocode-code-list-country-and-territory)
attribute.
```
NEOFS_GRPC_0_ENDPOINT=65.52.183.157:36512
NEOFS_NODE_ADDRESSES=65.52.183.157:36512
NEOFS_NODE_ATTRIBUTE_2=UN-LOCODE:RU LED
```
You can validate UN/LOCODE attribute in
[NeoFS LOCODE database](https://git.frostfs.info/TrueCloudLab/frostfs-locode-db/releases/tag/v0.4.0)
with frostfs-cli.
```
$ frostfs-cli util locode info --db ./locode_db --locode 'RU LED'
Country: Russia
Location: Saint Petersburg (ex Leningrad)
Continent: Europe
Subdivision: [SPE] Sankt-Peterburg
Coordinates: 59.53, 30.15
```
It is recommended to pass the node's key as a file. To do so, convert your wallet
WIF to 32-byte hex (via `frostfs-cli` for example) and save it to a file.
```
// Print WIF in a 32-byte hex format
$ frostfs-cli util keyer Kwp4Q933QujZLUCcn39tzY94itNQJS4EjTp28oAMzuxMwabm3p1s
PrivateKey 11ab917cd99170cb8d0d48e78fca317564e6b3aaff7f7058952d6175cdca0f56
PublicKey 02be8b2e837cab232168f5c3303f1b985818b7583682fb49026b8d2f43df7c1059
WIF Kwp4Q933QujZLUCcn39tzY94itNQJS4EjTp28oAMzuxMwabm3p1s
Wallet3.0 Nfzmk7FAZmEHDhLePdgysQL2FgkJbaEMpQ
ScriptHash3.0 dffe39998f50d42f2e06807866161cd0440b4bdc
ScriptHash3.0BE dc4b0b44d01c16667880062e2fd4508f9939fedf
// Save 32-byte hex into a file
$ echo '11ab917cd99170cb8d0d48e78fca317564e6b3aaff7f7058952d6175cdca0f56' | xxd -r -p > my_wallet.key
```
Then, specify the path to this file in `docker-compose.yml`
```yaml
volumes:
- frostfs_storage:/storage
- ./my_wallet.key:/node.key
```
NeoFS objects will be stored on your machine. By default, docker-compose
is configured to store objects in named docker volume `frostfs_storage`. You can
specify a directory on the filesystem to store objects there.
```yaml
volumes:
- /home/username/frostfs/rc3/storage:/storage
- ./my_wallet.key:/node.key
```
### Start
Run the node with `docker-compose up` command and stop it with `docker-compose down`.
### Debug
To print node logs, use `docker logs frostfs-testnet`. To print debug messages in
log, set up log level to debug with this env:
```yaml
environment:
- NEOFS_LOGGER_LEVEL=debug
```

View file

@ -1,52 +0,0 @@
logger:
level: info
morph:
rpc_endpoint:
- wss://rpc01.morph.testnet.frostfs.info:51331/ws
- wss://rpc02.morph.testnet.frostfs.info:51331/ws
- wss://rpc03.morph.testnet.frostfs.info:51331/ws
- wss://rpc04.morph.testnet.frostfs.info:51331/ws
- wss://rpc05.morph.testnet.frostfs.info:51331/ws
- wss://rpc06.morph.testnet.frostfs.info:51331/ws
- wss://rpc07.morph.testnet.frostfs.info:51331/ws
dial_timeout: 20s
contracts:
balance: e0420c216003747626670d1424569c17c79015bf
container: 9dbd2b5e67568ed285c3d6f96bac4edf5e1efba0
netmap: d4b331639799e2958d4bc5b711b469d79de94e01
node:
key: /node.key
attribute_0: Deployed:SelfHosted
attribute_1: User-Agent:FrostFS\/0.9999
prometheus:
enabled: true
address: localhost:9090
shutdown_timeout: 15s
storage:
shard_num: 1
shard:
0:
metabase:
path: /storage/metabase
perm: 0777
blobstor:
- path: /storage/path/blobovnicza
type: blobovnicza
perm: 0600
opened_cache_capacity: 32
depth: 1
width: 1
- path: /storage/path/fstree
type: fstree
perm: 0600
depth: 4
writecache:
enabled: false
gc:
remover_batch_size: 100
remover_sleep_interval: 1m

View file

@ -51,10 +51,7 @@ However, all mode changing operations are idempotent.
## Automatic mode changes
Shard can automatically switch to a `degraded-read-only` mode in 3 cases:
1. If the metabase was not available or couldn't be opened/initialized during shard startup.
2. If shard error counter exceeds threshold.
3. If the metabase couldn't be reopened during SIGHUP handling.
A shard can automatically switch to `read-only` mode if its error counter exceeds the threshold.
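
The threshold that drives this automatic switch is configured per engine via `shard_ro_error_threshold`, as in the storage section of the example config in this change set:

```yaml
storage:
  # shard is switched to read-only after this many errors (default 0: errors are ignored)
  shard_ro_error_threshold: 100
```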
# Detach shard

View file

@ -170,7 +170,6 @@ Local storage engine configuration.
| Parameter | Type | Default value | Description |
|----------------------------|-----------------------------------|---------------|------------------------------------------------------------------------------------------------------------------|
| `shard_pool_size` | `int` | `20` | Pool size for shard workers. Limits the amount of concurrent `PUT` operations on each shard. |
| `shard_ro_error_threshold` | `int` | `0` | Maximum amount of storage errors to encounter before shard automatically moves to `Degraded` or `ReadOnly` mode. |
| `low_mem` | `bool` | `false` | Reduce memory consumption at the cost of performance. |
| `shard` | [Shard config](#shard-subsection) | | Configuration for separate shards. |
@ -360,6 +359,7 @@ limits:
| `tag.weight` | `float` | 0 (no weight) | Weight for queries with the specified tag. Weights must be specified for all tags or not specified for any one. |
| `tag.limit_ops` | `float` | 0 (no limit) | Operations per second rate limit for queries with the specified tag. |
| `tag.reserved_ops` | `float` | 0 (no reserve) | Reserved operations per second rate for queries with the specified tag. |
| `tag.prohibited` | `bool` | false | If true, operations with the specified tag are prohibited. |
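
A shard-level fragment using the new flag, matching the YAML example earlier in this change set:

```yaml
limits:
  read:
    tags:
      - tag: policer
        weight: 5
        limit_ops: 25000
        prohibited: true  # requests tagged `policer` are rejected on this shard
```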
# `node` section

10
go.mod
View file

@ -1,15 +1,15 @@
module git.frostfs.info/TrueCloudLab/frostfs-node
go 1.22
go 1.23
require (
code.gitea.io/sdk/gitea v0.17.1
git.frostfs.info/TrueCloudLab/frostfs-contract v0.21.1
git.frostfs.info/TrueCloudLab/frostfs-crypto v0.6.0
git.frostfs.info/TrueCloudLab/frostfs-locode-db v0.4.1-0.20240710074952-65761deb5c0d
git.frostfs.info/TrueCloudLab/frostfs-observability v0.0.0-20250212111929-d34e1329c824
git.frostfs.info/TrueCloudLab/frostfs-qos v0.0.0-20250227072915-25102d1e1aa3
git.frostfs.info/TrueCloudLab/frostfs-sdk-go v0.0.0-20250306092416-69b0711d12d9
git.frostfs.info/TrueCloudLab/frostfs-locode-db v0.5.2
git.frostfs.info/TrueCloudLab/frostfs-observability v0.0.0-20250321063246-93b681a20248
git.frostfs.info/TrueCloudLab/frostfs-qos v0.0.0-20250331080422-b5ed0b6eff47
git.frostfs.info/TrueCloudLab/frostfs-sdk-go v0.0.0-20250326101739-4d36a49d3945
git.frostfs.info/TrueCloudLab/hrw v1.2.1
git.frostfs.info/TrueCloudLab/multinet v0.0.0-20241015075604-6cb0d80e0972
git.frostfs.info/TrueCloudLab/policy-engine v0.0.0-20240822104152-a3bc3099bd5b

16
go.sum
View file

@ -4,14 +4,14 @@ git.frostfs.info/TrueCloudLab/frostfs-contract v0.21.1 h1:k1Qw8dWUQczfo0eVXlhrq9
git.frostfs.info/TrueCloudLab/frostfs-contract v0.21.1/go.mod h1:5fSm/l5xSjGWqsPUffSdboiGFUHa7y/1S0fvxzQowN8=
git.frostfs.info/TrueCloudLab/frostfs-crypto v0.6.0 h1:FxqFDhQYYgpe41qsIHVOcdzSVCB8JNSfPG7Uk4r2oSk=
git.frostfs.info/TrueCloudLab/frostfs-crypto v0.6.0/go.mod h1:RUIKZATQLJ+TaYQa60X2fTDwfuhMfm8Ar60bQ5fr+vU=
git.frostfs.info/TrueCloudLab/frostfs-locode-db v0.4.1-0.20240710074952-65761deb5c0d h1:uJ/wvuMdepbkaV8XMS5uN9B0FQWMep0CttSuDZiDhq0=
git.frostfs.info/TrueCloudLab/frostfs-locode-db v0.4.1-0.20240710074952-65761deb5c0d/go.mod h1:7ZZq8iguY7qFsXajdHGmZd2AW4QbucyrJwhbsRfOfek=
git.frostfs.info/TrueCloudLab/frostfs-observability v0.0.0-20250212111929-d34e1329c824 h1:Mxw1c/8t96vFIUOffl28lFaHKi413oCBfLMGJmF9cFA=
git.frostfs.info/TrueCloudLab/frostfs-observability v0.0.0-20250212111929-d34e1329c824/go.mod h1:kbwB4v2o6RyOfCo9kEFeUDZIX3LKhmS0yXPrtvzkQ1g=
git.frostfs.info/TrueCloudLab/frostfs-qos v0.0.0-20250227072915-25102d1e1aa3 h1:QnAt5b2R6+hQthMOIn5ECfLAlVD8IAE5JRm1NCCOmuE=
git.frostfs.info/TrueCloudLab/frostfs-qos v0.0.0-20250227072915-25102d1e1aa3/go.mod h1:PCijYq4oa8vKtIEcUX6jRiszI6XAW+nBwU+T1kB4d1U=
git.frostfs.info/TrueCloudLab/frostfs-sdk-go v0.0.0-20250306092416-69b0711d12d9 h1:svCl6NDAPZ/KuQPjdVKo74RkCIANesxUPM45zQZDhSw=
git.frostfs.info/TrueCloudLab/frostfs-sdk-go v0.0.0-20250306092416-69b0711d12d9/go.mod h1:aQpPWfG8oyfJ2X+FenPTJpSRWZjwcP5/RAtkW+/VEX8=
git.frostfs.info/TrueCloudLab/frostfs-locode-db v0.5.2 h1:AovQs7bea0fLnYfldCZB88FkUgRj0QaHkJEbcWfgzvY=
git.frostfs.info/TrueCloudLab/frostfs-locode-db v0.5.2/go.mod h1:7ZZq8iguY7qFsXajdHGmZd2AW4QbucyrJwhbsRfOfek=
git.frostfs.info/TrueCloudLab/frostfs-observability v0.0.0-20250321063246-93b681a20248 h1:fluzML8BIIabd07LyPSjc0JAV2qymWkPiFaLrXdALLA=
git.frostfs.info/TrueCloudLab/frostfs-observability v0.0.0-20250321063246-93b681a20248/go.mod h1:kbwB4v2o6RyOfCo9kEFeUDZIX3LKhmS0yXPrtvzkQ1g=
git.frostfs.info/TrueCloudLab/frostfs-qos v0.0.0-20250331080422-b5ed0b6eff47 h1:O2c3VOlaGZ862hf2ZPLBMdTG6vGJzhIgDvFEFGfntzU=
git.frostfs.info/TrueCloudLab/frostfs-qos v0.0.0-20250331080422-b5ed0b6eff47/go.mod h1:PCijYq4oa8vKtIEcUX6jRiszI6XAW+nBwU+T1kB4d1U=
git.frostfs.info/TrueCloudLab/frostfs-sdk-go v0.0.0-20250326101739-4d36a49d3945 h1:zM2l316J55h9p30snl6vHBI/h0xmnuqZjnxIjRDtJZw=
git.frostfs.info/TrueCloudLab/frostfs-sdk-go v0.0.0-20250326101739-4d36a49d3945/go.mod h1:aQpPWfG8oyfJ2X+FenPTJpSRWZjwcP5/RAtkW+/VEX8=
git.frostfs.info/TrueCloudLab/hrw v1.2.1 h1:ccBRK21rFvY5R1WotI6LNoPlizk7qSvdfD8lNIRudVc=
git.frostfs.info/TrueCloudLab/hrw v1.2.1/go.mod h1:C1Ygde2n843yTZEQ0FP69jYiuaYV0kriLvP4zm8JuvM=
git.frostfs.info/TrueCloudLab/multinet v0.0.0-20241015075604-6cb0d80e0972 h1:/960fWeyn2AFHwQUwDsWB3sbP6lTEnFnMzLMM6tx6N8=

View file

@ -512,5 +512,7 @@ const (
FailedToUpdateMultinetConfiguration = "failed to update multinet configuration"
FailedToParseIncomingIOTag = "failed to parse incoming IO tag"
NotSupportedIncomingIOTagReplacedWithClient = "incoming IO tag is not supported, replaced with `client`"
FailedToGetNetmapToAdjustIOTag = "failed to get netmap to adjust IO tag, replaced with `client`"
FailedToGetNetmapToAdjustIOTag = "failed to get netmap to adjust IO tag"
FailedToValidateIncomingIOTag = "failed to validate incoming IO tag, replaced with `client`"
WriteCacheFailedToAcquireRPSQuota = "writecache failed to acquire RPS quota to flush object"
)

View file

@ -23,6 +23,7 @@ const (
policerSubsystem = "policer"
commonCacheSubsystem = "common_cache"
multinetSubsystem = "multinet"
qosSubsystem = "qos"
successLabel = "success"
shardIDLabel = "shard_id"
@ -43,6 +44,7 @@ const (
hitLabel = "hit"
cacheLabel = "cache"
sourceIPLabel = "source_ip"
ioTagLabel = "io_tag"
readWriteMode = "READ_WRITE"
readOnlyMode = "READ_ONLY"

View file

@ -26,6 +26,7 @@ type NodeMetrics struct {
morphCache *morphCacheMetrics
log logger.LogMetrics
multinet *multinetMetrics
qos *QoSMetrics
// nolint: unused
appInfo *ApplicationInfo
}
@ -55,6 +56,7 @@ func NewNodeMetrics() *NodeMetrics {
log: logger.NewLogMetrics(namespace),
appInfo: NewApplicationInfo(misc.Version),
multinet: newMultinetMetrics(namespace),
qos: newQoSMetrics(),
}
}
@ -126,3 +128,7 @@ func (m *NodeMetrics) LogMetrics() logger.LogMetrics {
func (m *NodeMetrics) MultinetMetrics() MultinetMetrics {
return m.multinet
}
func (m *NodeMetrics) QoSMetrics() *QoSMetrics {
return m.qos
}

View file

@ -9,13 +9,14 @@ import (
)
type ObjectServiceMetrics interface {
AddRequestDuration(method string, d time.Duration, success bool)
AddRequestDuration(method string, d time.Duration, success bool, ioTag string)
AddPayloadSize(method string, size int)
}
type objectServiceMetrics struct {
methodDuration *prometheus.HistogramVec
payloadCounter *prometheus.CounterVec
methodDuration *prometheus.HistogramVec
payloadCounter *prometheus.CounterVec
ioTagOpsCounter *prometheus.CounterVec
}
func newObjectServiceMetrics() *objectServiceMetrics {
@ -32,14 +33,24 @@ func newObjectServiceMetrics() *objectServiceMetrics {
Name: "request_payload_bytes",
Help: "Object Service request payload",
}, []string{methodLabel}),
ioTagOpsCounter: metrics.NewCounterVec(prometheus.CounterOpts{
Namespace: namespace,
Subsystem: objectSubsystem,
Name: "requests_total",
Help: "Count of requests for each IO tag",
}, []string{methodLabel, ioTagLabel}),
}
}
func (m *objectServiceMetrics) AddRequestDuration(method string, d time.Duration, success bool) {
func (m *objectServiceMetrics) AddRequestDuration(method string, d time.Duration, success bool, ioTag string) {
m.methodDuration.With(prometheus.Labels{
methodLabel: method,
successLabel: strconv.FormatBool(success),
}).Observe(d.Seconds())
m.ioTagOpsCounter.With(prometheus.Labels{
ioTagLabel: ioTag,
methodLabel: method,
}).Inc()
}
func (m *objectServiceMetrics) AddPayloadSize(method string, size int) {

52
internal/metrics/qos.go Normal file
View file

@ -0,0 +1,52 @@
package metrics
import (
"git.frostfs.info/TrueCloudLab/frostfs-observability/metrics"
"github.com/prometheus/client_golang/prometheus"
)
type QoSMetrics struct {
opsCounter *prometheus.GaugeVec
}
func newQoSMetrics() *QoSMetrics {
return &QoSMetrics{
opsCounter: metrics.NewGaugeVec(prometheus.GaugeOpts{
Namespace: namespace,
Subsystem: qosSubsystem,
Name: "operations_total",
Help: "Count of pending, in progress, completed and failed due of resource exhausted error operations for each shard",
}, []string{shardIDLabel, operationLabel, ioTagLabel, typeLabel}),
}
}
func (m *QoSMetrics) SetOperationTagCounters(shardID, operation, tag string, pending, inProgress, completed, resourceExhausted uint64) {
m.opsCounter.With(prometheus.Labels{
shardIDLabel: shardID,
operationLabel: operation,
ioTagLabel: tag,
typeLabel: "pending",
}).Set(float64(pending))
m.opsCounter.With(prometheus.Labels{
shardIDLabel: shardID,
operationLabel: operation,
ioTagLabel: tag,
typeLabel: "in_progress",
}).Set(float64(inProgress))
m.opsCounter.With(prometheus.Labels{
shardIDLabel: shardID,
operationLabel: operation,
ioTagLabel: tag,
typeLabel: "completed",
}).Set(float64(completed))
m.opsCounter.With(prometheus.Labels{
shardIDLabel: shardID,
operationLabel: operation,
ioTagLabel: tag,
typeLabel: "resource_exhausted",
}).Set(float64(resourceExhausted))
}
func (m *QoSMetrics) Close(shardID string) {
m.opsCounter.DeletePartialMatch(prometheus.Labels{shardIDLabel: shardID})
}

View file

@ -12,12 +12,14 @@ type TreeMetricsRegister interface {
AddReplicateTaskDuration(time.Duration, bool)
AddReplicateWaitDuration(time.Duration, bool)
AddSyncDuration(time.Duration, bool)
AddOperation(string, string)
}
type treeServiceMetrics struct {
replicateTaskDuration *prometheus.HistogramVec
replicateWaitDuration *prometheus.HistogramVec
syncOpDuration *prometheus.HistogramVec
ioTagOpsCounter *prometheus.CounterVec
}
var _ TreeMetricsRegister = (*treeServiceMetrics)(nil)
@ -42,6 +44,12 @@ func newTreeServiceMetrics() *treeServiceMetrics {
Name: "sync_duration_seconds",
Help: "Duration of synchronization operations",
}, []string{successLabel}),
ioTagOpsCounter: metrics.NewCounterVec(prometheus.CounterOpts{
Namespace: namespace,
Subsystem: treeServiceSubsystem,
Name: "requests_total",
Help: "Count of requests for each IO tag",
}, []string{methodLabel, ioTagLabel}),
}
}
@ -62,3 +70,10 @@ func (m *treeServiceMetrics) AddSyncDuration(d time.Duration, success bool) {
successLabel: strconv.FormatBool(success),
}).Observe(d.Seconds())
}
func (m *treeServiceMetrics) AddOperation(op string, ioTag string) {
m.ioTagOpsCounter.With(prometheus.Labels{
ioTagLabel: ioTag,
methodLabel: op,
}).Inc()
}

View file

@ -26,7 +26,7 @@ func NewAdjustOutgoingIOTagUnaryClientInterceptor() grpc.UnaryClientInterceptor
if err != nil {
tag = IOTagClient
}
if tag == IOTagBackground || tag == IOTagPolicer || tag == IOTagWritecache {
if tag.IsLocal() {
tag = IOTagInternal
}
ctx = tagging.ContextWithIOTag(ctx, tag.String())
@ -44,7 +44,7 @@ func NewAdjustOutgoingIOTagStreamClientInterceptor() grpc.StreamClientIntercepto
if err != nil {
tag = IOTagClient
}
if tag == IOTagBackground || tag == IOTagPolicer || tag == IOTagWritecache {
if tag.IsLocal() {
tag = IOTagInternal
}
ctx = tagging.ContextWithIOTag(ctx, tag.String())
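
`IsLocal()` itself is not shown in this excerpt; a plausible sketch, assuming it covers the tags previously checked inline plus `IOTagTreeSync`, which the tag list in the new tests handles the same way:

```go
// Sketch only: the actual implementation is not part of this diff and may differ.
func (t IOTag) IsLocal() bool {
	return t == IOTagBackground || t == IOTagPolicer || t == IOTagWritecache || t == IOTagTreeSync
}
```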

219
internal/qos/grpc_test.go Normal file
View file

@ -0,0 +1,219 @@
package qos_test
import (
"context"
"errors"
"fmt"
"testing"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/qos"
"git.frostfs.info/TrueCloudLab/frostfs-qos/limiting"
"git.frostfs.info/TrueCloudLab/frostfs-qos/tagging"
apistatus "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client/status"
"github.com/stretchr/testify/require"
"google.golang.org/grpc"
)
const (
okKey = "ok"
)
var (
errTest = errors.New("mock")
errWrongTag = errors.New("wrong tag")
errNoTag = errors.New("failed to get tag from context")
errResExhausted *apistatus.ResourceExhausted
tags = []qos.IOTag{qos.IOTagBackground, qos.IOTagWritecache, qos.IOTagPolicer, qos.IOTagTreeSync}
)
type mockGRPCServerStream struct {
grpc.ServerStream
ctx context.Context
}
func (m *mockGRPCServerStream) Context() context.Context {
return m.ctx
}
type limiter struct {
acquired bool
released bool
}
func (l *limiter) Acquire(key string) (limiting.ReleaseFunc, bool) {
l.acquired = true
if key != okKey {
return nil, false
}
return func() { l.released = true }, true
}
func unaryMaxActiveRPCLimiter(ctx context.Context, lim *limiter, methodName string) error {
interceptor := qos.NewMaxActiveRPCLimiterUnaryServerInterceptor(func() limiting.Limiter { return lim })
handler := func(ctx context.Context, req any) (any, error) {
return nil, errTest
}
_, err := interceptor(ctx, nil, &grpc.UnaryServerInfo{FullMethod: methodName}, handler)
return err
}
func streamMaxActiveRPCLimiter(ctx context.Context, lim *limiter, methodName string) error {
interceptor := qos.NewMaxActiveRPCLimiterStreamServerInterceptor(func() limiting.Limiter { return lim })
handler := func(srv any, stream grpc.ServerStream) error {
return errTest
}
err := interceptor(nil, &mockGRPCServerStream{ctx: ctx}, &grpc.StreamServerInfo{
FullMethod: methodName,
}, handler)
return err
}
func Test_MaxActiveRPCLimiter(t *testing.T) {
// UnaryServerInterceptor
t.Run("unary fail", func(t *testing.T) {
var lim limiter
err := unaryMaxActiveRPCLimiter(context.Background(), &lim, "")
require.ErrorAs(t, err, &errResExhausted)
require.True(t, lim.acquired)
require.False(t, lim.released)
})
t.Run("unary pass critical", func(t *testing.T) {
var lim limiter
ctx := tagging.ContextWithIOTag(context.Background(), qos.IOTagCritical.String())
err := unaryMaxActiveRPCLimiter(ctx, &lim, "")
require.ErrorIs(t, err, errTest)
require.False(t, lim.acquired)
require.False(t, lim.released)
})
t.Run("unary pass", func(t *testing.T) {
var lim limiter
err := unaryMaxActiveRPCLimiter(context.Background(), &lim, okKey)
require.ErrorIs(t, err, errTest)
require.True(t, lim.acquired)
require.True(t, lim.released)
})
// StreamServerInterceptor
t.Run("stream fail", func(t *testing.T) {
var lim limiter
err := streamMaxActiveRPCLimiter(context.Background(), &lim, "")
require.ErrorAs(t, err, &errResExhausted)
require.True(t, lim.acquired)
require.False(t, lim.released)
})
t.Run("stream pass critical", func(t *testing.T) {
var lim limiter
ctx := tagging.ContextWithIOTag(context.Background(), qos.IOTagCritical.String())
err := streamMaxActiveRPCLimiter(ctx, &lim, "")
require.ErrorIs(t, err, errTest)
require.False(t, lim.acquired)
require.False(t, lim.released)
})
t.Run("stream pass", func(t *testing.T) {
var lim limiter
err := streamMaxActiveRPCLimiter(context.Background(), &lim, okKey)
require.ErrorIs(t, err, errTest)
require.True(t, lim.acquired)
require.True(t, lim.released)
})
}
func TestSetCriticalIOTagUnaryServerInterceptor_Pass(t *testing.T) {
interceptor := qos.NewSetCriticalIOTagUnaryServerInterceptor()
called := false
handler := func(ctx context.Context, req any) (any, error) {
called = true
if tag, ok := tagging.IOTagFromContext(ctx); ok && tag == qos.IOTagCritical.String() {
return nil, nil
}
return nil, errWrongTag
}
_, err := interceptor(context.Background(), nil, nil, handler)
require.NoError(t, err)
require.True(t, called)
}
func TestAdjustOutgoingIOTagUnaryClientInterceptor(t *testing.T) {
interceptor := qos.NewAdjustOutgoingIOTagUnaryClientInterceptor()
// check context with no value
called := false
invoker := func(ctx context.Context, method string, req, reply any, cc *grpc.ClientConn, opts ...grpc.CallOption) error {
called = true
if _, ok := tagging.IOTagFromContext(ctx); ok {
return fmt.Errorf("%v: expected no IO tags", errWrongTag)
}
return nil
}
require.NoError(t, interceptor(context.Background(), "", nil, nil, nil, invoker, nil))
require.True(t, called)
// check context for internal tag
targetTag := qos.IOTagInternal.String()
invoker = func(ctx context.Context, method string, req, reply any, cc *grpc.ClientConn, opts ...grpc.CallOption) error {
raw, ok := tagging.IOTagFromContext(ctx)
if !ok {
return errNoTag
}
if raw != targetTag {
return errWrongTag
}
return nil
}
for _, tag := range tags {
ctx := tagging.ContextWithIOTag(context.Background(), tag.String())
require.NoError(t, interceptor(ctx, "", nil, nil, nil, invoker, nil))
}
// check context for client tag
ctx := tagging.ContextWithIOTag(context.Background(), "")
targetTag = qos.IOTagClient.String()
require.NoError(t, interceptor(ctx, "", nil, nil, nil, invoker, nil))
}
func TestAdjustOutgoingIOTagStreamClientInterceptor(t *testing.T) {
interceptor := qos.NewAdjustOutgoingIOTagStreamClientInterceptor()
// check context with no value
called := false
streamer := func(ctx context.Context, desc *grpc.StreamDesc, cc *grpc.ClientConn, method string, opts ...grpc.CallOption) (grpc.ClientStream, error) {
called = true
if _, ok := tagging.IOTagFromContext(ctx); ok {
return nil, fmt.Errorf("%v: expected no IO tags", errWrongTag)
}
return nil, nil
}
_, err := interceptor(context.Background(), nil, nil, "", streamer, nil)
require.True(t, called)
require.NoError(t, err)
// check context for internal tag
targetTag := qos.IOTagInternal.String()
streamer = func(ctx context.Context, desc *grpc.StreamDesc, cc *grpc.ClientConn, method string, opts ...grpc.CallOption) (grpc.ClientStream, error) {
raw, ok := tagging.IOTagFromContext(ctx)
if !ok {
return nil, errNoTag
}
if raw != targetTag {
return nil, errWrongTag
}
return nil, nil
}
for _, tag := range tags {
ctx := tagging.ContextWithIOTag(context.Background(), tag.String())
_, err := interceptor(ctx, nil, nil, "", streamer, nil)
require.NoError(t, err)
}
// check context for client tag
ctx := tagging.ContextWithIOTag(context.Background(), "")
targetTag = qos.IOTagClient.String()
_, err = interceptor(ctx, nil, nil, "", streamer, nil)
require.NoError(t, err)
}
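The tests above call the server interceptors directly; as a hedged sketch only, they could be chained on a gRPC server as shown below. The `unlimited` limiter is a stand-in for a real `limiting.Limiter`, and the listen address is arbitrary:

```go
package main

import (
	"log"
	"net"

	"git.frostfs.info/TrueCloudLab/frostfs-node/internal/qos"
	"git.frostfs.info/TrueCloudLab/frostfs-qos/limiting"
	"google.golang.org/grpc"
)

// unlimited is a stand-in limiting.Limiter that never rejects a request.
type unlimited struct{}

func (unlimited) Acquire(string) (limiting.ReleaseFunc, bool) { return func() {}, true }

func main() {
	srv := grpc.NewServer(
		// Requests already tagged "critical" bypass the limiter
		// (see the "pass critical" test cases above).
		grpc.ChainUnaryInterceptor(
			qos.NewMaxActiveRPCLimiterUnaryServerInterceptor(func() limiting.Limiter { return unlimited{} }),
		),
		grpc.ChainStreamInterceptor(
			qos.NewMaxActiveRPCLimiterStreamServerInterceptor(func() limiting.Limiter { return unlimited{} }),
		),
	)
	lis, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		log.Fatal(err)
	}
	log.Fatal(srv.Serve(lis))
}
```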


@ -4,6 +4,8 @@ import (
"context"
"errors"
"fmt"
"sync"
"sync/atomic"
"time"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/engine/shard/limits"
@ -15,6 +17,9 @@ import (
const (
defaultIdleTimeout time.Duration = 0
defaultShare float64 = 1.0
minusOne = ^uint64(0)
defaultMetricsCollectTimeout = 5 * time.Second
)
type ReleaseFunc scheduling.ReleaseFunc
@ -22,6 +27,8 @@ type ReleaseFunc scheduling.ReleaseFunc
type Limiter interface {
ReadRequest(context.Context) (ReleaseFunc, error)
WriteRequest(context.Context) (ReleaseFunc, error)
SetParentID(string)
SetMetrics(Metrics)
Close()
}
@ -34,10 +41,6 @@ func NewLimiter(c *limits.Config) (Limiter, error) {
if err := validateConfig(c); err != nil {
return nil, err
}
read, write := c.Read(), c.Write()
if isNoop(read, write) {
return noopLimiterInstance, nil
}
readScheduler, err := createScheduler(c.Read())
if err != nil {
return nil, fmt.Errorf("create read scheduler: %w", err)
@ -46,10 +49,18 @@ func NewLimiter(c *limits.Config) (Limiter, error) {
if err != nil {
return nil, fmt.Errorf("create write scheduler: %w", err)
}
return &mClockLimiter{
l := &mClockLimiter{
readScheduler: readScheduler,
writeScheduler: writeScheduler,
}, nil
closeCh: make(chan struct{}),
wg: &sync.WaitGroup{},
readStats: createStats(),
writeStats: createStats(),
}
l.shardID.Store(&shardID{})
l.metrics.Store(&metricsHolder{metrics: &noopMetrics{}})
l.startMetricsCollect()
return l, nil
}
func createScheduler(config limits.OpConfig) (scheduler, error) {
@ -63,7 +74,7 @@ func createScheduler(config limits.OpConfig) (scheduler, error) {
func converToSchedulingTags(limits []limits.IOTagConfig) map[string]scheduling.TagInfo {
result := make(map[string]scheduling.TagInfo)
for _, tag := range []IOTag{IOTagClient, IOTagBackground, IOTagInternal, IOTagPolicer, IOTagWritecache} {
for _, tag := range []IOTag{IOTagBackground, IOTagClient, IOTagInternal, IOTagPolicer, IOTagTreeSync, IOTagWritecache} {
result[tag.String()] = scheduling.TagInfo{
Share: defaultShare,
}
@ -79,6 +90,7 @@ func converToSchedulingTags(limits []limits.IOTagConfig) map[string]scheduling.T
if l.ReservedOps != nil && *l.ReservedOps != 0 {
v.ReservedIOPS = l.ReservedOps
}
v.Prohibited = l.Prohibited
result[l.Tag] = v
}
return result
@ -91,7 +103,7 @@ var (
)
func NewNoopLimiter() Limiter {
return &noopLimiter{}
return noopLimiterInstance
}
type noopLimiter struct{}
@ -104,43 +116,127 @@ func (n *noopLimiter) WriteRequest(context.Context) (ReleaseFunc, error) {
return releaseStub, nil
}
func (n *noopLimiter) SetParentID(string) {}
func (n *noopLimiter) Close() {}
func (n *noopLimiter) SetMetrics(Metrics) {}
var _ Limiter = (*mClockLimiter)(nil)
type shardID struct {
id string
}
type mClockLimiter struct {
readScheduler scheduler
writeScheduler scheduler
readStats map[string]*stat
writeStats map[string]*stat
shardID atomic.Pointer[shardID]
metrics atomic.Pointer[metricsHolder]
closeCh chan struct{}
wg *sync.WaitGroup
}
func (n *mClockLimiter) ReadRequest(ctx context.Context) (ReleaseFunc, error) {
return requestArrival(ctx, n.readScheduler)
return requestArrival(ctx, n.readScheduler, n.readStats)
}
func (n *mClockLimiter) WriteRequest(ctx context.Context) (ReleaseFunc, error) {
return requestArrival(ctx, n.writeScheduler)
return requestArrival(ctx, n.writeScheduler, n.writeStats)
}
func requestArrival(ctx context.Context, s scheduler) (ReleaseFunc, error) {
func requestArrival(ctx context.Context, s scheduler, stats map[string]*stat) (ReleaseFunc, error) {
tag, ok := tagging.IOTagFromContext(ctx)
if !ok {
tag = IOTagClient.String()
}
stat := getStat(tag, stats)
stat.pending.Add(1)
if tag == IOTagCritical.String() {
return releaseStub, nil
stat.inProgress.Add(1)
return func() {
stat.completed.Add(1)
}, nil
}
rel, err := s.RequestArrival(ctx, tag)
stat.inProgress.Add(1)
if err != nil {
if errors.Is(err, scheduling.ErrMClockSchedulerRequestLimitExceeded) ||
errors.Is(err, errSemaphoreLimitExceeded) {
if isResourceExhaustedErr(err) {
stat.resourceExhausted.Add(1)
return nil, &apistatus.ResourceExhausted{}
}
stat.completed.Add(1)
return nil, err
}
return ReleaseFunc(rel), nil
return func() {
rel()
stat.completed.Add(1)
}, nil
}
func (n *mClockLimiter) Close() {
n.readScheduler.Close()
n.writeScheduler.Close()
close(n.closeCh)
n.wg.Wait()
n.metrics.Load().metrics.Close(n.shardID.Load().id)
}
func (n *mClockLimiter) SetParentID(parentID string) {
n.shardID.Store(&shardID{id: parentID})
}
func (n *mClockLimiter) SetMetrics(m Metrics) {
n.metrics.Store(&metricsHolder{metrics: m})
}
func (n *mClockLimiter) startMetricsCollect() {
n.wg.Add(1)
go func() {
defer n.wg.Done()
ticker := time.NewTicker(defaultMetricsCollectTimeout)
defer ticker.Stop()
for {
select {
case <-n.closeCh:
return
case <-ticker.C:
shardID := n.shardID.Load().id
if shardID == "" {
continue
}
metrics := n.metrics.Load().metrics
exportMetrics(metrics, n.readStats, shardID, "read")
exportMetrics(metrics, n.writeStats, shardID, "write")
}
}
}()
}
func exportMetrics(metrics Metrics, stats map[string]*stat, shardID, operation string) {
var pending uint64
var inProgress uint64
var completed uint64
var resExh uint64
for tag, s := range stats {
pending = s.pending.Load()
inProgress = s.inProgress.Load()
completed = s.completed.Load()
resExh = s.resourceExhausted.Load()
if pending == 0 && inProgress == 0 && completed == 0 && resExh == 0 {
continue
}
metrics.SetOperationTagCounters(shardID, operation, tag, pending, inProgress, completed, resExh)
}
}
func isResourceExhaustedErr(err error) bool {
return errors.Is(err, scheduling.ErrMClockSchedulerRequestLimitExceeded) ||
errors.Is(err, errSemaphoreLimitExceeded) ||
errors.Is(err, scheduling.ErrTagRequestsProhibited)
}

internal/qos/metrics.go (new file, 31 lines)

@ -0,0 +1,31 @@
package qos
import "sync/atomic"
type Metrics interface {
SetOperationTagCounters(shardID, operation, tag string, pending, inProgress, completed, resourceExhausted uint64)
Close(shardID string)
}
var _ Metrics = (*noopMetrics)(nil)
type noopMetrics struct{}
func (n *noopMetrics) SetOperationTagCounters(string, string, string, uint64, uint64, uint64, uint64) {
}
func (n *noopMetrics) Close(string) {}
// stat presents limiter statistics cumulative counters.
//
// Each operation changes its status as follows: `pending` -> `in_progress` -> `completed` or `resource_exhausted`.
type stat struct {
completed atomic.Uint64
pending atomic.Uint64
resourceExhausted atomic.Uint64
inProgress atomic.Uint64
}
type metricsHolder struct {
metrics Metrics
}

internal/qos/stats.go (new file, 29 lines)

@ -0,0 +1,29 @@
package qos
const unknownStatsTag = "unknown"
var statTags = map[string]struct{}{
IOTagBackground.String(): {},
IOTagClient.String(): {},
IOTagCritical.String(): {},
IOTagInternal.String(): {},
IOTagPolicer.String(): {},
IOTagTreeSync.String(): {},
IOTagWritecache.String(): {},
unknownStatsTag: {},
}
func createStats() map[string]*stat {
result := make(map[string]*stat)
for tag := range statTags {
result[tag] = &stat{}
}
return result
}
func getStat(tag string, stats map[string]*stat) *stat {
if v, ok := stats[tag]; ok {
return v
}
return stats[unknownStatsTag]
}
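Statistics buckets are pre-created for every known tag plus an `unknown` fallback, so `getStat` never returns nil. A quick sketch of the intended fallback, written as a package-internal test (not taken from the repository):

```go
package qos

import "testing"

func TestGetStatFallsBackToUnknown(t *testing.T) {
	stats := createStats()
	// A tag outside statTags is accounted under the "unknown" bucket.
	if getStat("no-such-tag", stats) != stats[unknownStatsTag] {
		t.Fatal("expected the unknown-tag bucket")
	}
	// Known tags get their own pre-created bucket.
	if getStat(IOTagClient.String(), stats) != stats[IOTagClient.String()] {
		t.Fatal("expected the per-tag bucket")
	}
}
```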


@ -1,34 +1,42 @@
package qos
import "fmt"
import (
"context"
"fmt"
"git.frostfs.info/TrueCloudLab/frostfs-qos/tagging"
)
type IOTag string
const (
IOTagClient IOTag = "client"
IOTagInternal IOTag = "internal"
IOTagBackground IOTag = "background"
IOTagWritecache IOTag = "writecache"
IOTagPolicer IOTag = "policer"
IOTagClient IOTag = "client"
IOTagCritical IOTag = "critical"
IOTagInternal IOTag = "internal"
IOTagPolicer IOTag = "policer"
IOTagTreeSync IOTag = "treesync"
IOTagWritecache IOTag = "writecache"
ioTagUnknown IOTag = ""
)
func FromRawString(s string) (IOTag, error) {
switch s {
case string(IOTagCritical):
return IOTagCritical, nil
case string(IOTagClient):
return IOTagClient, nil
case string(IOTagInternal):
return IOTagInternal, nil
case string(IOTagBackground):
return IOTagBackground, nil
case string(IOTagWritecache):
return IOTagWritecache, nil
case string(IOTagClient):
return IOTagClient, nil
case string(IOTagCritical):
return IOTagCritical, nil
case string(IOTagInternal):
return IOTagInternal, nil
case string(IOTagPolicer):
return IOTagPolicer, nil
case string(IOTagTreeSync):
return IOTagTreeSync, nil
case string(IOTagWritecache):
return IOTagWritecache, nil
default:
return ioTagUnknown, fmt.Errorf("unknown tag %s", s)
}
@ -37,3 +45,15 @@ func FromRawString(s string) (IOTag, error) {
func (t IOTag) String() string {
return string(t)
}
func IOTagFromContext(ctx context.Context) string {
tag, ok := tagging.IOTagFromContext(ctx)
if !ok {
tag = "undefined"
}
return tag
}
func (t IOTag) IsLocal() bool {
return t == IOTagBackground || t == IOTagPolicer || t == IOTagWritecache || t == IOTagTreeSync
}
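The tag constants are now listed alphabetically and gain `treesync`; `IsLocal` groups the node-originated tags that the outgoing interceptors rewrite to `internal`. A small illustrative snippet:

```go
package main

import (
	"fmt"

	"git.frostfs.info/TrueCloudLab/frostfs-node/internal/qos"
)

func main() {
	for _, raw := range []string{"policer", "treesync", "client", "bogus"} {
		tag, err := qos.FromRawString(raw)
		if err != nil {
			fmt.Println(raw, "-> unknown tag") // "bogus" ends up here
			continue
		}
		fmt.Println(raw, "local:", tag.IsLocal()) // policer/treesync: true, client: false
	}
}
```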


@ -42,11 +42,12 @@ func validateOpConfig(c limits.OpConfig) error {
func validateTags(configTags []limits.IOTagConfig) error {
tags := map[IOTag]tagConfig{
IOTagBackground: {},
IOTagClient: {},
IOTagInternal: {},
IOTagBackground: {},
IOTagWritecache: {},
IOTagPolicer: {},
IOTagTreeSync: {},
IOTagWritecache: {},
}
for _, t := range configTags {
tag, err := FromRawString(t.Tag)
@ -90,12 +91,3 @@ func float64Value(f *float64) float64 {
}
return *f
}
func isNoop(read, write limits.OpConfig) bool {
return read.MaxRunningOps == limits.NoLimit &&
read.MaxWaitingOps == limits.NoLimit &&
write.MaxRunningOps == limits.NoLimit &&
write.MaxWaitingOps == limits.NoLimit &&
len(read.Tags) == 0 &&
len(write.Tags) == 0
}


@ -9,6 +9,7 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/core/container"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/logger"
utilTesting "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/testing"
objectV2 "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/object"
containerSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
@ -410,11 +411,11 @@ func TestFormatValidator_ValidateTokenIssuer(t *testing.T) {
},
),
WithNetmapSource(
&testNetmapSource{
netmaps: map[uint64]*netmap.NetMap{
&utilTesting.TestNetmapSource{
Netmaps: map[uint64]*netmap.NetMap{
curEpoch: currentEpochNM,
},
currentEpoch: curEpoch,
CurrentEpoch: curEpoch,
},
),
WithLogger(logger.NewLoggerWrapper(zaptest.NewLogger(t))),
@ -483,12 +484,12 @@ func TestFormatValidator_ValidateTokenIssuer(t *testing.T) {
},
),
WithNetmapSource(
&testNetmapSource{
netmaps: map[uint64]*netmap.NetMap{
&utilTesting.TestNetmapSource{
Netmaps: map[uint64]*netmap.NetMap{
curEpoch: currentEpochNM,
curEpoch - 1: previousEpochNM,
},
currentEpoch: curEpoch,
CurrentEpoch: curEpoch,
},
),
WithLogger(logger.NewLoggerWrapper(zaptest.NewLogger(t))),
@ -559,12 +560,12 @@ func TestFormatValidator_ValidateTokenIssuer(t *testing.T) {
},
),
WithNetmapSource(
&testNetmapSource{
netmaps: map[uint64]*netmap.NetMap{
&utilTesting.TestNetmapSource{
Netmaps: map[uint64]*netmap.NetMap{
curEpoch: currentEpochNM,
curEpoch - 1: previousEpochNM,
},
currentEpoch: curEpoch,
CurrentEpoch: curEpoch,
},
),
WithLogger(logger.NewLoggerWrapper(zaptest.NewLogger(t))),
@ -596,26 +597,3 @@ func (s *testContainerSource) Get(ctx context.Context, cnrID cid.ID) (*container
func (s *testContainerSource) DeletionInfo(context.Context, cid.ID) (*container.DelInfo, error) {
return nil, nil
}
type testNetmapSource struct {
netmaps map[uint64]*netmap.NetMap
currentEpoch uint64
}
func (s *testNetmapSource) GetNetMap(ctx context.Context, diff uint64) (*netmap.NetMap, error) {
if diff >= s.currentEpoch {
return nil, fmt.Errorf("invalid diff")
}
return s.GetNetMapByEpoch(ctx, s.currentEpoch-diff)
}
func (s *testNetmapSource) GetNetMapByEpoch(ctx context.Context, epoch uint64) (*netmap.NetMap, error) {
if nm, found := s.netmaps[epoch]; found {
return nm, nil
}
return nil, fmt.Errorf("netmap not found")
}
func (s *testNetmapSource) Epoch(ctx context.Context) (uint64, error) {
return s.currentEpoch, nil
}


@ -3,6 +3,7 @@ package blobstortest
import (
"context"
"errors"
"slices"
"testing"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/common"
@ -26,7 +27,7 @@ func TestIterate(t *testing.T, cons Constructor, minSize, maxSize uint64) {
_, err := s.Delete(context.Background(), delPrm)
require.NoError(t, err)
objects = append(objects[:delID], objects[delID+1:]...)
objects = slices.Delete(objects, delID, delID+1)
runTestNormalHandler(t, s, objects)


@ -153,16 +153,10 @@ func (e *StorageEngine) Close(ctx context.Context) error {
}
// closes all shards. Never returns an error, shard errors are logged.
func (e *StorageEngine) close(ctx context.Context, releasePools bool) error {
func (e *StorageEngine) close(ctx context.Context) error {
e.mtx.RLock()
defer e.mtx.RUnlock()
if releasePools {
for _, p := range e.shardPools {
p.Release()
}
}
for id, sh := range e.shards {
if err := sh.Close(ctx); err != nil {
e.log.Debug(ctx, logs.EngineCouldNotCloseShard,
@ -213,7 +207,7 @@ func (e *StorageEngine) setBlockExecErr(ctx context.Context, err error) error {
return e.open(ctx)
}
} else if prevErr == nil { // ok -> block
return e.close(ctx, errors.Is(err, errClosed))
return e.close(ctx)
}
// otherwise do nothing


@ -245,7 +245,6 @@ func TestReload(t *testing.T) {
// no new paths => no new shards
require.Equal(t, shardNum, len(e.shards))
require.Equal(t, shardNum, len(e.shardPools))
newMeta := filepath.Join(addPath, fmt.Sprintf("%d.metabase", shardNum))
@ -257,7 +256,6 @@ func TestReload(t *testing.T) {
require.NoError(t, e.Reload(context.Background(), rcfg))
require.Equal(t, shardNum+1, len(e.shards))
require.Equal(t, shardNum+1, len(e.shardPools))
require.NoError(t, e.Close(context.Background()))
})
@ -277,7 +275,6 @@ func TestReload(t *testing.T) {
// removed one
require.Equal(t, shardNum-1, len(e.shards))
require.Equal(t, shardNum-1, len(e.shardPools))
require.NoError(t, e.Close(context.Background()))
})
@ -311,7 +308,6 @@ func engineWithShards(t *testing.T, path string, num int) (*StorageEngine, []str
}
require.Equal(t, num, len(e.shards))
require.Equal(t, num, len(e.shardPools))
return e, currShards
}


@ -12,7 +12,6 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/shard"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/shard/mode"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/util/logicerr"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/logger"
apistatus "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client/status"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
@ -29,8 +28,6 @@ type StorageEngine struct {
shards map[string]hashedShard
shardPools map[string]util.WorkerPool
closeCh chan struct{}
setModeCh chan setModeRequest
wg sync.WaitGroup
@ -193,8 +190,6 @@ type cfg struct {
metrics MetricRegister
shardPoolSize uint32
lowMem bool
containerSource atomic.Pointer[containerSource]
@ -202,9 +197,8 @@ type cfg struct {
func defaultCfg() *cfg {
res := &cfg{
log: logger.NewLoggerWrapper(zap.L()),
shardPoolSize: 20,
metrics: noopMetrics{},
log: logger.NewLoggerWrapper(zap.L()),
metrics: noopMetrics{},
}
res.containerSource.Store(&containerSource{})
return res
@ -221,7 +215,6 @@ func New(opts ...Option) *StorageEngine {
return &StorageEngine{
cfg: c,
shards: make(map[string]hashedShard),
shardPools: make(map[string]util.WorkerPool),
closeCh: make(chan struct{}),
setModeCh: make(chan setModeRequest),
evacuateLimiter: &evacuationLimiter{},
@ -241,13 +234,6 @@ func WithMetrics(v MetricRegister) Option {
}
}
// WithShardPoolSize returns option to specify size of worker pool for each shard.
func WithShardPoolSize(sz uint32) Option {
return func(c *cfg) {
c.shardPoolSize = sz
}
}
// WithErrorThreshold returns an option to specify size amount of errors after which
// shard is moved to read-only mode.
func WithErrorThreshold(sz uint32) Option {


@ -57,7 +57,6 @@ func (te *testEngineWrapper) setShardsNumOpts(
te.shardIDs[i] = shard.ID()
}
require.Len(t, te.engine.shards, num)
require.Len(t, te.engine.shardPools, num)
return te
}
@ -163,6 +162,8 @@ type testQoSLimiter struct {
write atomic.Int64
}
func (t *testQoSLimiter) SetMetrics(qos.Metrics) {}
func (t *testQoSLimiter) Close() {
require.Equal(t.t, int64(0), t.read.Load(), "read requests count after limiter close must be 0")
require.Equal(t.t, int64(0), t.write.Load(), "write requests count after limiter close must be 0")
@ -177,3 +178,5 @@ func (t *testQoSLimiter) WriteRequest(context.Context) (qos.ReleaseFunc, error)
t.write.Add(1)
return func() { t.write.Add(-1) }, nil
}
func (t *testQoSLimiter) SetParentID(string) {}


@ -46,7 +46,6 @@ func newEngineWithErrorThreshold(t testing.TB, dir string, errThreshold uint32)
var testShards [2]*testShard
te := testNewEngine(t,
WithShardPoolSize(1),
WithErrorThreshold(errThreshold),
).
setShardsNumOpts(t, 2, func(id int) []shard.Option {


@ -15,7 +15,6 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/pilorama"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/shard"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/util/logicerr"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util"
"git.frostfs.info/TrueCloudLab/frostfs-observability/tracing"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client"
containerSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container"
@ -201,11 +200,6 @@ func (p *EvacuateShardRes) DeepCopy() *EvacuateShardRes {
return res
}
type pooledShard struct {
hashedShard
pool util.WorkerPool
}
var errMustHaveTwoShards = errors.New("must have at least 1 spare shard")
// Evacuate moves data from one shard to the others.
@ -252,7 +246,7 @@ func (e *StorageEngine) Evacuate(ctx context.Context, prm EvacuateShardPrm) erro
}
var mtx sync.RWMutex
copyShards := func() []pooledShard {
copyShards := func() []hashedShard {
mtx.RLock()
defer mtx.RUnlock()
t := slices.Clone(shards)
@ -266,7 +260,7 @@ func (e *StorageEngine) Evacuate(ctx context.Context, prm EvacuateShardPrm) erro
}
func (e *StorageEngine) evacuateShards(ctx context.Context, shardIDs []string, prm EvacuateShardPrm, res *EvacuateShardRes,
shards func() []pooledShard, shardsToEvacuate map[string]*shard.Shard,
shards func() []hashedShard, shardsToEvacuate map[string]*shard.Shard,
) error {
var err error
ctx, span := tracing.StartSpanFromContext(ctx, "StorageEngine.evacuateShards",
@ -388,7 +382,7 @@ func (e *StorageEngine) getTotals(ctx context.Context, prm EvacuateShardPrm, sha
}
func (e *StorageEngine) evacuateShard(ctx context.Context, cancel context.CancelCauseFunc, shardID string, prm EvacuateShardPrm, res *EvacuateShardRes,
shards func() []pooledShard, shardsToEvacuate map[string]*shard.Shard,
shards func() []hashedShard, shardsToEvacuate map[string]*shard.Shard,
egContainer *errgroup.Group, egObject *errgroup.Group,
) error {
ctx, span := tracing.StartSpanFromContext(ctx, "StorageEngine.evacuateShard",
@ -412,7 +406,7 @@ func (e *StorageEngine) evacuateShard(ctx context.Context, cancel context.Cancel
}
func (e *StorageEngine) evacuateShardObjects(ctx context.Context, cancel context.CancelCauseFunc, shardID string, prm EvacuateShardPrm, res *EvacuateShardRes,
shards func() []pooledShard, shardsToEvacuate map[string]*shard.Shard,
shards func() []hashedShard, shardsToEvacuate map[string]*shard.Shard,
egContainer *errgroup.Group, egObject *errgroup.Group,
) error {
sh := shardsToEvacuate[shardID]
@ -485,7 +479,7 @@ func (e *StorageEngine) evacuateShardObjects(ctx context.Context, cancel context
}
func (e *StorageEngine) evacuateShardTrees(ctx context.Context, shardID string, prm EvacuateShardPrm, res *EvacuateShardRes,
getShards func() []pooledShard, shardsToEvacuate map[string]*shard.Shard,
getShards func() []hashedShard, shardsToEvacuate map[string]*shard.Shard,
) error {
sh := shardsToEvacuate[shardID]
shards := getShards()
@ -515,7 +509,7 @@ func (e *StorageEngine) evacuateShardTrees(ctx context.Context, shardID string,
}
func (e *StorageEngine) evacuateTrees(ctx context.Context, sh *shard.Shard, trees []pilorama.ContainerIDTreeID,
prm EvacuateShardPrm, res *EvacuateShardRes, shards []pooledShard, shardsToEvacuate map[string]*shard.Shard,
prm EvacuateShardPrm, res *EvacuateShardRes, shards []hashedShard, shardsToEvacuate map[string]*shard.Shard,
) error {
ctx, span := tracing.StartSpanFromContext(ctx, "StorageEngine.evacuateTrees",
trace.WithAttributes(
@ -583,7 +577,7 @@ func (e *StorageEngine) evacuateTreeToOtherNode(ctx context.Context, sh *shard.S
}
func (e *StorageEngine) tryEvacuateTreeLocal(ctx context.Context, sh *shard.Shard, tree pilorama.ContainerIDTreeID,
prm EvacuateShardPrm, shards []pooledShard, shardsToEvacuate map[string]*shard.Shard,
prm EvacuateShardPrm, shards []hashedShard, shardsToEvacuate map[string]*shard.Shard,
) (bool, string, error) {
target, found, err := e.findShardToEvacuateTree(ctx, tree, shards, shardsToEvacuate)
if err != nil {
@ -653,15 +647,15 @@ func (e *StorageEngine) tryEvacuateTreeLocal(ctx context.Context, sh *shard.Shar
// findShardToEvacuateTree returns first shard according HRW or first shard with tree exists.
func (e *StorageEngine) findShardToEvacuateTree(ctx context.Context, tree pilorama.ContainerIDTreeID,
shards []pooledShard, shardsToEvacuate map[string]*shard.Shard,
) (pooledShard, bool, error) {
shards []hashedShard, shardsToEvacuate map[string]*shard.Shard,
) (hashedShard, bool, error) {
hrw.SortHasherSliceByValue(shards, hrw.StringHash(tree.CID.EncodeToString()))
var result pooledShard
var result hashedShard
var found bool
for _, target := range shards {
select {
case <-ctx.Done():
return pooledShard{}, false, ctx.Err()
return hashedShard{}, false, ctx.Err()
default:
}
@ -689,7 +683,7 @@ func (e *StorageEngine) findShardToEvacuateTree(ctx context.Context, tree pilora
return result, found, nil
}
func (e *StorageEngine) getActualShards(shardIDs []string, prm EvacuateShardPrm) ([]pooledShard, error) {
func (e *StorageEngine) getActualShards(shardIDs []string, prm EvacuateShardPrm) ([]hashedShard, error) {
e.mtx.RLock()
defer e.mtx.RUnlock()
@ -719,18 +713,15 @@ func (e *StorageEngine) getActualShards(shardIDs []string, prm EvacuateShardPrm)
// We must have all shards, to have correct information about their
// indexes in a sorted slice and set appropriate marks in the metabase.
// Evacuated shard is skipped during put.
shards := make([]pooledShard, 0, len(e.shards))
shards := make([]hashedShard, 0, len(e.shards))
for id := range e.shards {
shards = append(shards, pooledShard{
hashedShard: e.shards[id],
pool: e.shardPools[id],
})
shards = append(shards, e.shards[id])
}
return shards, nil
}
func (e *StorageEngine) evacuateObject(ctx context.Context, shardID string, objInfo *object.Info, prm EvacuateShardPrm, res *EvacuateShardRes,
getShards func() []pooledShard, shardsToEvacuate map[string]*shard.Shard, cnr containerSDK.Container,
getShards func() []hashedShard, shardsToEvacuate map[string]*shard.Shard, cnr containerSDK.Container,
) error {
ctx, span := tracing.StartSpanFromContext(ctx, "StorageEngine.evacuateObjects")
defer span.End()
@ -800,7 +791,7 @@ func (e *StorageEngine) isNotRepOne(c *container.Container) bool {
}
func (e *StorageEngine) tryEvacuateObjectLocal(ctx context.Context, addr oid.Address, object *objectSDK.Object, sh *shard.Shard,
shards []pooledShard, shardsToEvacuate map[string]*shard.Shard, res *EvacuateShardRes, cnr containerSDK.Container,
shards []hashedShard, shardsToEvacuate map[string]*shard.Shard, res *EvacuateShardRes, cnr containerSDK.Container,
) (bool, error) {
hrw.SortHasherSliceByValue(shards, hrw.StringHash(addr.EncodeToString()))
for j := range shards {
@ -813,7 +804,7 @@ func (e *StorageEngine) tryEvacuateObjectLocal(ctx context.Context, addr oid.Add
if _, ok := shardsToEvacuate[shards[j].ID().String()]; ok {
continue
}
switch e.putToShard(ctx, shards[j].hashedShard, shards[j].pool, addr, object, container.IsIndexedContainer(cnr)).status {
switch e.putToShard(ctx, shards[j], addr, object, container.IsIndexedContainer(cnr)).status {
case putToShardSuccess:
res.objEvacuated.Add(1)
e.log.Debug(ctx, logs.EngineObjectIsMovedToAnotherShard,


@ -196,7 +196,6 @@ func TestEvacuateShardObjects(t *testing.T) {
e.mtx.Lock()
delete(e.shards, evacuateShardID)
delete(e.shardPools, evacuateShardID)
e.mtx.Unlock()
checkHasObjects(t)
@ -405,8 +404,8 @@ func TestEvacuateSingleProcess(t *testing.T) {
require.NoError(t, e.shards[ids[0].String()].SetMode(context.Background(), mode.ReadOnly))
require.NoError(t, e.shards[ids[1].String()].SetMode(context.Background(), mode.ReadOnly))
blocker := make(chan interface{})
running := make(chan interface{})
blocker := make(chan any)
running := make(chan any)
var prm EvacuateShardPrm
prm.ShardID = ids[1:2]
@ -447,8 +446,8 @@ func TestEvacuateObjectsAsync(t *testing.T) {
require.NoError(t, e.shards[ids[0].String()].SetMode(context.Background(), mode.ReadOnly))
require.NoError(t, e.shards[ids[1].String()].SetMode(context.Background(), mode.ReadOnly))
blocker := make(chan interface{})
running := make(chan interface{})
blocker := make(chan any)
running := make(chan any)
var prm EvacuateShardPrm
prm.ShardID = ids[1:2]


@ -205,7 +205,7 @@ func BenchmarkInhumeMultipart(b *testing.B) {
func benchmarkInhumeMultipart(b *testing.B, numShards, numObjects int) {
b.StopTimer()
engine := testNewEngine(b, WithShardPoolSize(uint32(numObjects))).
engine := testNewEngine(b).
setShardsNum(b, numShards).prepare(b).engine
defer func() { require.NoError(b, engine.Close(context.Background())) }()


@ -9,7 +9,6 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/shard"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util"
"git.frostfs.info/TrueCloudLab/frostfs-observability/tracing"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client"
objectSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
@ -99,13 +98,13 @@ func (e *StorageEngine) put(ctx context.Context, prm PutPrm) error {
var shRes putToShardRes
e.iterateOverSortedShards(addr, func(_ int, sh hashedShard) (stop bool) {
e.mtx.RLock()
pool, ok := e.shardPools[sh.ID().String()]
_, ok := e.shards[sh.ID().String()]
e.mtx.RUnlock()
if !ok {
// Shard was concurrently removed, skip.
return false
}
shRes = e.putToShard(ctx, sh, pool, addr, prm.Object, prm.IsIndexedContainer)
shRes = e.putToShard(ctx, sh, addr, prm.Object, prm.IsIndexedContainer)
return shRes.status != putToShardUnknown
})
switch shRes.status {
@ -122,70 +121,59 @@ func (e *StorageEngine) put(ctx context.Context, prm PutPrm) error {
// putToShard puts object to sh.
// Return putToShardStatus and error if it is necessary to propagate an error upper.
func (e *StorageEngine) putToShard(ctx context.Context, sh hashedShard, pool util.WorkerPool,
func (e *StorageEngine) putToShard(ctx context.Context, sh hashedShard,
addr oid.Address, obj *objectSDK.Object, isIndexedContainer bool,
) (res putToShardRes) {
exitCh := make(chan struct{})
var existPrm shard.ExistsPrm
existPrm.Address = addr
if err := pool.Submit(func() {
defer close(exitCh)
var existPrm shard.ExistsPrm
existPrm.Address = addr
exists, err := sh.Exists(ctx, existPrm)
if err != nil {
if shard.IsErrObjectExpired(err) {
// object is already found but
// expired => do nothing with it
res.status = putToShardExists
} else {
e.log.Warn(ctx, logs.EngineCouldNotCheckObjectExistence,
zap.Stringer("shard_id", sh.ID()),
zap.Error(err))
}
return // this is not ErrAlreadyRemoved error so we can go to the next shard
}
if exists.Exists() {
exists, err := sh.Exists(ctx, existPrm)
if err != nil {
if shard.IsErrObjectExpired(err) {
// object is already found but
// expired => do nothing with it
res.status = putToShardExists
return
} else {
e.log.Warn(ctx, logs.EngineCouldNotCheckObjectExistence,
zap.Stringer("shard_id", sh.ID()),
zap.Error(err))
}
var putPrm shard.PutPrm
putPrm.SetObject(obj)
putPrm.SetIndexAttributes(isIndexedContainer)
_, err = sh.Put(ctx, putPrm)
if err != nil {
if errors.Is(err, shard.ErrReadOnlyMode) || errors.Is(err, blobstor.ErrNoPlaceFound) ||
errors.Is(err, common.ErrReadOnly) || errors.Is(err, common.ErrNoSpace) {
e.log.Warn(ctx, logs.EngineCouldNotPutObjectToShard,
zap.Stringer("shard_id", sh.ID()),
zap.Error(err))
return
}
if client.IsErrObjectAlreadyRemoved(err) {
e.log.Warn(ctx, logs.EngineCouldNotPutObjectToShard,
zap.Stringer("shard_id", sh.ID()),
zap.Error(err))
res.status = putToShardRemoved
res.err = err
return
}
e.reportShardError(ctx, sh, "could not put object to shard", err, zap.Stringer("address", addr))
return
}
res.status = putToShardSuccess
}); err != nil {
e.log.Warn(ctx, logs.EngineCouldNotPutObjectToShard, zap.Error(err))
close(exitCh)
return // this is not ErrAlreadyRemoved error so we can go to the next shard
}
<-exitCh
if exists.Exists() {
res.status = putToShardExists
return
}
var putPrm shard.PutPrm
putPrm.SetObject(obj)
putPrm.SetIndexAttributes(isIndexedContainer)
_, err = sh.Put(ctx, putPrm)
if err != nil {
if errors.Is(err, shard.ErrReadOnlyMode) || errors.Is(err, blobstor.ErrNoPlaceFound) ||
errors.Is(err, common.ErrReadOnly) || errors.Is(err, common.ErrNoSpace) {
e.log.Warn(ctx, logs.EngineCouldNotPutObjectToShard,
zap.Stringer("shard_id", sh.ID()),
zap.Error(err))
return
}
if client.IsErrObjectAlreadyRemoved(err) {
e.log.Warn(ctx, logs.EngineCouldNotPutObjectToShard,
zap.Stringer("shard_id", sh.ID()),
zap.Error(err))
res.status = putToShardRemoved
res.err = err
return
}
e.reportShardError(ctx, sh, "could not put object to shard", err, zap.Stringer("address", addr))
return
}
res.status = putToShardSuccess
return
}


@ -17,7 +17,6 @@ import (
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
"git.frostfs.info/TrueCloudLab/hrw"
"github.com/google/uuid"
"github.com/panjf2000/ants/v2"
"go.uber.org/zap"
"golang.org/x/sync/errgroup"
)
@ -181,11 +180,6 @@ func (e *StorageEngine) addShard(sh *shard.Shard) error {
e.mtx.Lock()
defer e.mtx.Unlock()
pool, err := ants.NewPool(int(e.shardPoolSize), ants.WithNonblocking(true))
if err != nil {
return fmt.Errorf("create pool: %w", err)
}
strID := sh.ID().String()
if _, ok := e.shards[strID]; ok {
return fmt.Errorf("shard with id %s was already added", strID)
@ -199,8 +193,6 @@ func (e *StorageEngine) addShard(sh *shard.Shard) error {
hash: hrw.StringHash(strID),
}
e.shardPools[strID] = pool
return nil
}
@ -225,12 +217,6 @@ func (e *StorageEngine) removeShards(ctx context.Context, ids ...string) {
ss = append(ss, sh)
delete(e.shards, id)
pool, ok := e.shardPools[id]
if ok {
pool.Release()
delete(e.shardPools, id)
}
e.log.Info(ctx, logs.EngineShardHasBeenRemoved,
zap.String("id", id))
}
@ -429,12 +415,6 @@ func (e *StorageEngine) deleteShards(ctx context.Context, ids []*shard.ID) ([]ha
delete(e.shards, idStr)
pool, ok := e.shardPools[idStr]
if ok {
pool.Release()
delete(e.shardPools, idStr)
}
e.log.Info(ctx, logs.EngineShardHasBeenRemoved,
zap.String("id", idStr))
}


@ -17,7 +17,6 @@ func TestRemoveShard(t *testing.T) {
e, ids := te.engine, te.shardIDs
defer func() { require.NoError(t, e.Close(context.Background())) }()
require.Equal(t, numOfShards, len(e.shardPools))
require.Equal(t, numOfShards, len(e.shards))
removedNum := numOfShards / 2
@ -37,7 +36,6 @@ func TestRemoveShard(t *testing.T) {
}
}
require.Equal(t, numOfShards-removedNum, len(e.shardPools))
require.Equal(t, numOfShards-removedNum, len(e.shards))
for id, removed := range mSh {


@ -230,7 +230,7 @@ func (e *StorageEngine) TreeGetChildren(ctx context.Context, cid cidSDK.ID, tree
}
// TreeSortedByFilename implements the pilorama.Forest interface.
func (e *StorageEngine) TreeSortedByFilename(ctx context.Context, cid cidSDK.ID, treeID string, nodeID pilorama.MultiNode, last *string, count int) ([]pilorama.MultiNodeInfo, *string, error) {
func (e *StorageEngine) TreeSortedByFilename(ctx context.Context, cid cidSDK.ID, treeID string, nodeID pilorama.MultiNode, last *pilorama.Cursor, count int) ([]pilorama.MultiNodeInfo, *pilorama.Cursor, error) {
ctx, span := tracing.StartSpanFromContext(ctx, "StorageEngine.TreeSortedByFilename",
trace.WithAttributes(
attribute.String("container_id", cid.EncodeToString()),
@ -241,7 +241,7 @@ func (e *StorageEngine) TreeSortedByFilename(ctx context.Context, cid cidSDK.ID,
var err error
var nodes []pilorama.MultiNodeInfo
var cursor *string
var cursor *pilorama.Cursor
for _, sh := range e.sortShards(cid) {
nodes, cursor, err = sh.TreeSortedByFilename(ctx, cid, treeID, nodeID, last, count)
if err != nil {


@ -0,0 +1,82 @@
package meta
import (
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
"go.etcd.io/bbolt"
)
type bucketCache struct {
locked *bbolt.Bucket
graveyard *bbolt.Bucket
garbage *bbolt.Bucket
expired map[cid.ID]*bbolt.Bucket
primary map[cid.ID]*bbolt.Bucket
}
func newBucketCache() *bucketCache {
return &bucketCache{}
}
func getLockedBucket(bc *bucketCache, tx *bbolt.Tx) *bbolt.Bucket {
if bc == nil {
return tx.Bucket(bucketNameLocked)
}
return getBucket(&bc.locked, tx, bucketNameLocked)
}
func getGraveyardBucket(bc *bucketCache, tx *bbolt.Tx) *bbolt.Bucket {
if bc == nil {
return tx.Bucket(graveyardBucketName)
}
return getBucket(&bc.graveyard, tx, graveyardBucketName)
}
func getGarbageBucket(bc *bucketCache, tx *bbolt.Tx) *bbolt.Bucket {
if bc == nil {
return tx.Bucket(garbageBucketName)
}
return getBucket(&bc.garbage, tx, garbageBucketName)
}
func getBucket(cache **bbolt.Bucket, tx *bbolt.Tx, name []byte) *bbolt.Bucket {
if *cache != nil {
return *cache
}
*cache = tx.Bucket(name)
return *cache
}
func getExpiredBucket(bc *bucketCache, tx *bbolt.Tx, cnr cid.ID) *bbolt.Bucket {
if bc == nil {
bucketName := make([]byte, bucketKeySize)
bucketName = objectToExpirationEpochBucketName(cnr, bucketName)
return tx.Bucket(bucketName)
}
return getMappedBucket(&bc.expired, tx, objectToExpirationEpochBucketName, cnr)
}
func getPrimaryBucket(bc *bucketCache, tx *bbolt.Tx, cnr cid.ID) *bbolt.Bucket {
if bc == nil {
bucketName := make([]byte, bucketKeySize)
bucketName = primaryBucketName(cnr, bucketName)
return tx.Bucket(bucketName)
}
return getMappedBucket(&bc.primary, tx, primaryBucketName, cnr)
}
func getMappedBucket(m *map[cid.ID]*bbolt.Bucket, tx *bbolt.Tx, nameFunc func(cid.ID, []byte) []byte, cnr cid.ID) *bbolt.Bucket {
value, ok := (*m)[cnr]
if ok {
return value
}
if *m == nil {
*m = make(map[cid.ID]*bbolt.Bucket, 1)
}
bucketName := make([]byte, bucketKeySize)
bucketName = nameFunc(cnr, bucketName)
(*m)[cnr] = getBucket(&value, tx, bucketName)
return value
}


@ -153,12 +153,16 @@ func (db *DB) exists(tx *bbolt.Tx, addr oid.Address, ecParent oid.Address, currE
// - 2 if object is covered with tombstone;
// - 3 if object is expired.
func objectStatus(tx *bbolt.Tx, addr oid.Address, currEpoch uint64) (uint8, error) {
return objectStatusWithCache(nil, tx, addr, currEpoch)
}
func objectStatusWithCache(bc *bucketCache, tx *bbolt.Tx, addr oid.Address, currEpoch uint64) (uint8, error) {
// locked object could not be removed/marked with GC/expired
if objectLocked(tx, addr.Container(), addr.Object()) {
if objectLockedWithCache(bc, tx, addr.Container(), addr.Object()) {
return 0, nil
}
expired, err := isExpired(tx, addr, currEpoch)
expired, err := isExpiredWithCache(bc, tx, addr, currEpoch)
if err != nil {
return 0, err
}
@ -167,8 +171,8 @@ func objectStatus(tx *bbolt.Tx, addr oid.Address, currEpoch uint64) (uint8, erro
return 3, nil
}
graveyardBkt := tx.Bucket(graveyardBucketName)
garbageBkt := tx.Bucket(garbageBucketName)
graveyardBkt := getGraveyardBucket(bc, tx)
garbageBkt := getGarbageBucket(bc, tx)
addrKey := addressKey(addr, make([]byte, addressKeySize))
return inGraveyardWithKey(addrKey, graveyardBkt, garbageBkt), nil
}
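The new bucket cache lazily memoizes bbolt bucket handles per transaction, so repeated status checks stop paying for `tx.Bucket` lookups. A minimal sketch, assuming it lives inside package `meta` and runs within one open read transaction (the helper name is hypothetical):

```go
package meta

import (
	oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
	"go.etcd.io/bbolt"
)

// objectStatuses resolves the status of several addresses while sharing
// one bucketCache, so graveyard/garbage/expiration buckets are looked up once.
func objectStatuses(tx *bbolt.Tx, addrs []oid.Address, currEpoch uint64) ([]uint8, error) {
	bc := newBucketCache() // one cache per transaction, never shared across them
	res := make([]uint8, 0, len(addrs))
	for _, addr := range addrs {
		st, err := objectStatusWithCache(bc, tx, addr, currEpoch)
		if err != nil {
			return nil, err
		}
		res = append(res, st)
	}
	return res, nil
}
```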


@ -74,9 +74,11 @@ func (db *DB) FilterExpired(ctx context.Context, epoch uint64, addresses []oid.A
}
func isExpired(tx *bbolt.Tx, addr oid.Address, currEpoch uint64) (bool, error) {
bucketName := make([]byte, bucketKeySize)
bucketName = objectToExpirationEpochBucketName(addr.Container(), bucketName)
b := tx.Bucket(bucketName)
return isExpiredWithCache(nil, tx, addr, currEpoch)
}
func isExpiredWithCache(bc *bucketCache, tx *bbolt.Tx, addr oid.Address, currEpoch uint64) (bool, error) {
b := getExpiredBucket(bc, tx, addr.Container())
if b == nil {
return false, nil
}


@ -88,8 +88,12 @@ func (db *DB) Get(ctx context.Context, prm GetPrm) (res GetRes, err error) {
}
func (db *DB) get(tx *bbolt.Tx, addr oid.Address, key []byte, checkStatus, raw bool, currEpoch uint64) (*objectSDK.Object, error) {
return db.getWithCache(nil, tx, addr, key, checkStatus, raw, currEpoch)
}
func (db *DB) getWithCache(bc *bucketCache, tx *bbolt.Tx, addr oid.Address, key []byte, checkStatus, raw bool, currEpoch uint64) (*objectSDK.Object, error) {
if checkStatus {
st, err := objectStatus(tx, addr, currEpoch)
st, err := objectStatusWithCache(bc, tx, addr, currEpoch)
if err != nil {
return nil, err
}
@ -109,12 +113,13 @@ func (db *DB) get(tx *bbolt.Tx, addr oid.Address, key []byte, checkStatus, raw b
bucketName := make([]byte, bucketKeySize)
// check in primary index
data := getFromBucket(tx, primaryBucketName(cnr, bucketName), key)
if len(data) != 0 {
return obj, obj.Unmarshal(data)
if b := getPrimaryBucket(bc, tx, cnr); b != nil {
if data := b.Get(key); len(data) != 0 {
return obj, obj.Unmarshal(data)
}
}
data = getFromBucket(tx, ecInfoBucketName(cnr, bucketName), key)
data := getFromBucket(tx, ecInfoBucketName(cnr, bucketName), key)
if len(data) != 0 {
return nil, getECInfoError(tx, cnr, data)
}


@ -139,8 +139,7 @@ func (db *DB) listWithCursor(tx *bbolt.Tx, result []objectcore.Info, count int,
var containerID cid.ID
var offset []byte
graveyardBkt := tx.Bucket(graveyardBucketName)
garbageBkt := tx.Bucket(garbageBucketName)
bc := newBucketCache()
rawAddr := make([]byte, cidSize, addressKeySize)
@ -169,7 +168,7 @@ loop:
bkt := tx.Bucket(name)
if bkt != nil {
copy(rawAddr, cidRaw)
result, offset, cursor, err = selectNFromBucket(bkt, objType, graveyardBkt, garbageBkt, rawAddr, containerID,
result, offset, cursor, err = selectNFromBucket(bc, bkt, objType, rawAddr, containerID,
result, count, cursor, threshold, currEpoch)
if err != nil {
return nil, nil, err
@ -204,9 +203,10 @@ loop:
// selectNFromBucket similar to selectAllFromBucket but uses cursor to find
// object to start selecting from. Ignores inhumed objects.
func selectNFromBucket(bkt *bbolt.Bucket, // main bucket
func selectNFromBucket(
bc *bucketCache,
bkt *bbolt.Bucket, // main bucket
objType objectSDK.Type, // type of the objects stored in the main bucket
graveyardBkt, garbageBkt *bbolt.Bucket, // cached graveyard buckets
cidRaw []byte, // container ID prefix, optimization
cnt cid.ID, // container ID
to []objectcore.Info, // listing result
@ -219,7 +219,6 @@ func selectNFromBucket(bkt *bbolt.Bucket, // main bucket
cursor = new(Cursor)
}
count := len(to)
c := bkt.Cursor()
k, v := c.First()
@ -231,7 +230,7 @@ func selectNFromBucket(bkt *bbolt.Bucket, // main bucket
}
for ; k != nil; k, v = c.Next() {
if count >= limit {
if len(to) >= limit {
break
}
@ -241,6 +240,8 @@ func selectNFromBucket(bkt *bbolt.Bucket, // main bucket
}
offset = k
graveyardBkt := getGraveyardBucket(bc, bkt.Tx())
garbageBkt := getGarbageBucket(bc, bkt.Tx())
if inGraveyardWithKey(append(cidRaw, k...), graveyardBkt, garbageBkt) > 0 {
continue
}
@ -251,7 +252,7 @@ func selectNFromBucket(bkt *bbolt.Bucket, // main bucket
}
expEpoch, hasExpEpoch := hasExpirationEpoch(&o)
if !objectLocked(bkt.Tx(), cnt, obj) && hasExpEpoch && expEpoch < currEpoch {
if hasExpEpoch && expEpoch < currEpoch && !objectLockedWithCache(bc, bkt.Tx(), cnt, obj) {
continue
}
@ -273,7 +274,6 @@ func selectNFromBucket(bkt *bbolt.Bucket, // main bucket
a.SetContainer(cnt)
a.SetObject(obj)
to = append(to, objectcore.Info{Address: a, Type: objType, IsLinkingObject: isLinkingObj, ECInfo: ecInfo})
count++
}
return to, offset, cursor, nil


@ -59,7 +59,7 @@ func benchmarkListWithCursor(b *testing.B, db *meta.DB, batchSize int) {
for range b.N {
res, err := db.ListWithCursor(context.Background(), prm)
if err != nil {
if errors.Is(err, meta.ErrEndOfListing) {
if !errors.Is(err, meta.ErrEndOfListing) {
b.Fatalf("error: %v", err)
}
prm.SetCursor(nil)


@ -4,6 +4,7 @@ import (
"bytes"
"context"
"fmt"
"slices"
"time"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/internal/metaerr"
@ -162,7 +163,11 @@ func (db *DB) FreeLockedBy(lockers []oid.Address) ([]oid.Address, error) {
// checks if specified object is locked in the specified container.
func objectLocked(tx *bbolt.Tx, idCnr cid.ID, idObj oid.ID) bool {
bucketLocked := tx.Bucket(bucketNameLocked)
return objectLockedWithCache(nil, tx, idCnr, idObj)
}
func objectLockedWithCache(bc *bucketCache, tx *bbolt.Tx, idCnr cid.ID, idObj oid.ID) bool {
bucketLocked := getLockedBucket(bc, tx)
if bucketLocked != nil {
key := make([]byte, cidSize)
idCnr.Encode(key)
@ -250,7 +255,7 @@ func freePotentialLocks(tx *bbolt.Tx, idCnr cid.ID, locker oid.ID) ([]oid.Addres
unlockedObjects = append(unlockedObjects, addr)
} else {
// exclude locker
keyLockers = append(keyLockers[:i], keyLockers[i+1:]...)
keyLockers = slices.Delete(keyLockers, i, i+1)
v, err = encodeList(keyLockers)
if err != nil {


@ -37,7 +37,7 @@ func TestResetDropsContainerBuckets(t *testing.T) {
for idx := range 100 {
var putPrm PutPrm
putPrm.SetObject(testutil.GenerateObject())
putPrm.SetStorageID([]byte(fmt.Sprintf("0/%d", idx)))
putPrm.SetStorageID(fmt.Appendf(nil, "0/%d", idx))
_, err := db.Put(context.Background(), putPrm)
require.NoError(t, err)
}


@ -131,6 +131,7 @@ func (db *DB) selectObjects(tx *bbolt.Tx, cnr cid.ID, fs objectSDK.SearchFilters
res := make([]oid.Address, 0, len(mAddr))
bc := newBucketCache()
for a, ind := range mAddr {
if ind != expLen {
continue // ignore objects with unmatched fast filters
@ -145,7 +146,7 @@ func (db *DB) selectObjects(tx *bbolt.Tx, cnr cid.ID, fs objectSDK.SearchFilters
var addr oid.Address
addr.SetContainer(cnr)
addr.SetObject(id)
st, err := objectStatus(tx, addr, currEpoch)
st, err := objectStatusWithCache(bc, tx, addr, currEpoch)
if err != nil {
return nil, err
}
@ -153,7 +154,7 @@ func (db *DB) selectObjects(tx *bbolt.Tx, cnr cid.ID, fs objectSDK.SearchFilters
continue // ignore removed objects
}
addr, match := db.matchSlowFilters(tx, addr, group.slowFilters, currEpoch)
addr, match := db.matchSlowFilters(bc, tx, addr, group.slowFilters, currEpoch)
if !match {
continue // ignore objects with unmatched slow filters
}
@ -451,13 +452,13 @@ func (db *DB) selectObjectID(
}
// matchSlowFilters return true if object header is matched by all slow filters.
func (db *DB) matchSlowFilters(tx *bbolt.Tx, addr oid.Address, f objectSDK.SearchFilters, currEpoch uint64) (oid.Address, bool) {
func (db *DB) matchSlowFilters(bc *bucketCache, tx *bbolt.Tx, addr oid.Address, f objectSDK.SearchFilters, currEpoch uint64) (oid.Address, bool) {
result := addr
if len(f) == 0 {
return result, true
}
obj, isECChunk, err := db.getObjectForSlowFilters(tx, addr, currEpoch)
obj, isECChunk, err := db.getObjectForSlowFilters(bc, tx, addr, currEpoch)
if err != nil {
return result, false
}
@ -515,9 +516,9 @@ func (db *DB) matchSlowFilters(tx *bbolt.Tx, addr oid.Address, f objectSDK.Searc
return result, true
}
func (db *DB) getObjectForSlowFilters(tx *bbolt.Tx, addr oid.Address, currEpoch uint64) (*objectSDK.Object, bool, error) {
func (db *DB) getObjectForSlowFilters(bc *bucketCache, tx *bbolt.Tx, addr oid.Address, currEpoch uint64) (*objectSDK.Object, bool, error) {
buf := make([]byte, addressKeySize)
obj, err := db.get(tx, addr, buf, true, false, currEpoch)
obj, err := db.getWithCache(bc, tx, addr, buf, false, false, currEpoch)
if err != nil {
var ecInfoError *objectSDK.ECInfoError
if errors.As(err, &ecInfoError) {
@ -527,7 +528,7 @@ func (db *DB) getObjectForSlowFilters(tx *bbolt.Tx, addr oid.Address, currEpoch
continue
}
addr.SetObject(objID)
obj, err = db.get(tx, addr, buf, true, false, currEpoch)
obj, err = db.getWithCache(bc, tx, addr, buf, true, false, currEpoch)
if err == nil {
return obj, true, nil
}


@ -1216,6 +1216,8 @@ func TestExpiredObjects(t *testing.T) {
}
func benchmarkSelect(b *testing.B, db *meta.DB, cid cidSDK.ID, fs objectSDK.SearchFilters, expected int) {
b.ReportAllocs()
var prm meta.SelectPrm
prm.SetContainerID(cid)
prm.SetFilters(fs)


@ -1077,7 +1077,7 @@ func (t *boltForest) hasFewChildren(b *bbolt.Bucket, nodeIDs MultiNode, threshol
}
// TreeSortedByFilename implements the Forest interface.
func (t *boltForest) TreeSortedByFilename(ctx context.Context, cid cidSDK.ID, treeID string, nodeIDs MultiNode, last *string, count int) ([]MultiNodeInfo, *string, error) {
func (t *boltForest) TreeSortedByFilename(ctx context.Context, cid cidSDK.ID, treeID string, nodeIDs MultiNode, last *Cursor, count int) ([]MultiNodeInfo, *Cursor, error) {
var (
startedAt = time.Now()
success = false
@ -1155,7 +1155,7 @@ func (t *boltForest) TreeSortedByFilename(ctx context.Context, cid cidSDK.ID, tr
}
if len(res) != 0 {
s := string(findAttr(res[len(res)-1].Meta, AttributeFilename))
last = &s
last = NewCursor(s, res[len(res)-1].LastChild())
}
return res, last, metaerr.Wrap(err)
}
@ -1166,10 +1166,10 @@ func sortByFilename(nodes []NodeInfo) {
})
}
func sortAndCut(result []NodeInfo, last *string) []NodeInfo {
func sortAndCut(result []NodeInfo, last *Cursor) []NodeInfo {
var lastBytes []byte
if last != nil {
lastBytes = []byte(*last)
lastBytes = []byte(last.GetFilename())
}
sortByFilename(result)


@ -164,7 +164,7 @@ func (f *memoryForest) TreeGetMeta(_ context.Context, cid cid.ID, treeID string,
}
// TreeSortedByFilename implements the Forest interface.
func (f *memoryForest) TreeSortedByFilename(_ context.Context, cid cid.ID, treeID string, nodeIDs MultiNode, start *string, count int) ([]MultiNodeInfo, *string, error) {
func (f *memoryForest) TreeSortedByFilename(_ context.Context, cid cid.ID, treeID string, nodeIDs MultiNode, start *Cursor, count int) ([]MultiNodeInfo, *Cursor, error) {
fullID := cid.String() + "/" + treeID
s, ok := f.treeMap[fullID]
if !ok {
@ -204,17 +204,14 @@ func (f *memoryForest) TreeSortedByFilename(_ context.Context, cid cid.ID, treeI
r := mergeNodeInfos(res)
for i := range r {
if start == nil || string(findAttr(r[i].Meta, AttributeFilename)) > *start {
finish := i + count
if len(res) < finish {
finish = len(res)
}
if start == nil || string(findAttr(r[i].Meta, AttributeFilename)) > start.GetFilename() {
finish := min(len(res), i+count)
last := string(findAttr(r[finish-1].Meta, AttributeFilename))
return r[i:finish], &last, nil
return r[i:finish], NewCursor(last, 0), nil
}
}
last := string(res[len(res)-1].Meta.GetAttr(AttributeFilename))
return nil, &last, nil
return nil, NewCursor(last, 0), nil
}
// TreeGetChildren implements the Forest interface.


@ -273,7 +273,7 @@ func testForestTreeSortedIterationBugWithSkip(t *testing.T, s ForestStorage) {
}
var result []MultiNodeInfo
treeAppend := func(t *testing.T, last *string, count int) *string {
treeAppend := func(t *testing.T, last *Cursor, count int) *Cursor {
res, cursor, err := s.TreeSortedByFilename(context.Background(), d.CID, treeID, MultiNode{RootID}, last, count)
require.NoError(t, err)
result = append(result, res...)
@ -328,7 +328,7 @@ func testForestTreeSortedIteration(t *testing.T, s ForestStorage) {
}
var result []MultiNodeInfo
treeAppend := func(t *testing.T, last *string, count int) *string {
treeAppend := func(t *testing.T, last *Cursor, count int) *Cursor {
res, cursor, err := s.TreeSortedByFilename(context.Background(), d.CID, treeID, MultiNode{RootID}, last, count)
require.NoError(t, err)
result = append(result, res...)


@ -30,13 +30,13 @@ func (h *filenameHeap) Pop() any {
// fixedHeap maintains a fixed number of smallest elements started at some point.
type fixedHeap struct {
start *string
start *Cursor
sorted bool
count int
h *filenameHeap
}
func newHeap(start *string, count int) *fixedHeap {
func newHeap(start *Cursor, count int) *fixedHeap {
h := new(filenameHeap)
heap.Init(h)
@ -50,8 +50,19 @@ func newHeap(start *string, count int) *fixedHeap {
const amortizationMultiplier = 5
func (h *fixedHeap) push(id MultiNode, filename string) bool {
if h.start != nil && filename <= *h.start {
return false
if h.start != nil {
if filename < h.start.GetFilename() {
return false
} else if filename == h.start.GetFilename() {
// A tree may have a lot of nodes with the same filename but different versions so that
// len(nodes) > batch_size. The cut nodes should be pushed into the result on repeated call
// with the same filename.
pos := slices.Index(id, h.start.GetNode())
if pos == -1 || pos+1 >= len(id) {
return false
}
id = id[pos+1:]
}
}
*h.h = append(*h.h, heapInfo{id: id, filename: filename})


@ -37,7 +37,7 @@ type Forest interface {
TreeGetChildren(ctx context.Context, cid cidSDK.ID, treeID string, nodeID Node) ([]NodeInfo, error)
// TreeSortedByFilename returns children of the node with the specified ID. The nodes are sorted by the filename attribute..
// Should return ErrTreeNotFound if the tree is not found, and empty result if the node is not in the tree.
TreeSortedByFilename(ctx context.Context, cid cidSDK.ID, treeID string, nodeID MultiNode, last *string, count int) ([]MultiNodeInfo, *string, error)
TreeSortedByFilename(ctx context.Context, cid cidSDK.ID, treeID string, nodeID MultiNode, last *Cursor, count int) ([]MultiNodeInfo, *Cursor, error)
// TreeGetOpLog returns first log operation stored at or above the height.
// In case no such operation is found, empty Move and nil error should be returned.
TreeGetOpLog(ctx context.Context, cid cidSDK.ID, treeID string, height uint64) (Move, error)
@ -79,6 +79,38 @@ const (
AttributeVersion = "Version"
)
// Cursor keeps state between function calls for traversing nodes.
// It stores the attributes associated with a previous call, allowing subsequent operations
// to resume traversal from this point rather than starting from the beginning.
type Cursor struct {
// Last traversed filename.
filename string
// Last traversed node.
node Node
}
func NewCursor(filename string, node Node) *Cursor {
return &Cursor{
filename: filename,
node: node,
}
}
func (c *Cursor) GetFilename() string {
if c == nil {
return ""
}
return c.filename
}
func (c *Cursor) GetNode() Node {
if c == nil {
return Node(0)
}
return c.node
}
// CIDDescriptor contains container ID and information about the node position
// in the list of container nodes.
type CIDDescriptor struct {
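With this signature change, callers page through sorted listings by passing the returned `*Cursor` back in instead of a `*string` filename; the extra node component lets iteration resume inside a run of equal filenames. A minimal sketch, assuming a `pilorama.Forest` value and container/tree identifiers obtained elsewhere (the root-node usage follows the tests above):

```go
package example

import (
	"context"

	"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/pilorama"
	cidSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
)

func listSorted(ctx context.Context, f pilorama.Forest, cnr cidSDK.ID, treeID string) ([]pilorama.MultiNodeInfo, error) {
	var (
		all    []pilorama.MultiNodeInfo
		cursor *pilorama.Cursor // nil means "start from the beginning"
	)
	for {
		batch, next, err := f.TreeSortedByFilename(ctx, cnr, treeID, pilorama.MultiNode{pilorama.RootID}, cursor, 100)
		if err != nil {
			return nil, err
		}
		if len(batch) == 0 { // nothing more to return
			return all, nil
		}
		all = append(all, batch...)
		cursor = next // resume after the last (filename, node) pair returned
	}
}
```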

Some files were not shown because too many files have changed in this diff.