Compare commits

...

38 commits

Author SHA1 Message Date
c0135f8a65 tree: Use pairing heap for listing
When we have N items, sorting and then iterating gives `O(n log n)` latency
in the worst case (a flat bucket), because items within a level must be
returned in sorted order. Some heap implementations offer O(1) insertion
and O(log n) dequeue, which lets us reduce the latency of the first
returned item to O(log n), albeit with a slight increase in the total time.
A pairing heap was chosen as one of the simplest implementations.

```
goos: linux
goarch: amd64
pkg: git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/tree
cpu: 11th Gen Intel(R) Core(TM) i5-1135G7 @ 2.40GHz
                        │     old      │                 new                  │
                        │    sec/op    │    sec/op     vs base                │
GetSubTree/latency-8      5.034m ± 23%   1.110m ± 22%  -77.95% (p=0.000 n=10)
GetSubTree/total_time-8   81.03m ±  1%   95.02m ± 14%  +17.26% (p=0.000 n=10)
geomean                   20.20m         10.27m        -49.15%

                        │     old      │                 new                  │
                        │     B/op     │     B/op      vs base                │
GetSubTree/latency-8      32.14Mi ± 0%   37.49Mi ± 0%  +16.63% (p=0.000 n=10)
GetSubTree/total_time-8   32.14Mi ± 0%   37.49Mi ± 0%  +16.63% (p=0.000 n=10)
geomean                   32.14Mi        37.49Mi       +16.63%

                        │     old     │                new                 │
                        │  allocs/op  │  allocs/op   vs base               │
GetSubTree/latency-8      400.0k ± 0%   400.0k ± 0%  +0.00% (p=0.000 n=10)
GetSubTree/total_time-8   400.0k ± 0%   400.0k ± 0%  +0.00% (p=0.000 n=10)
geomean                   400.0k        400.0k       +0.00%
```
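
For illustration, here is a minimal pairing-heap sketch in Go. It is not the actual frostfs-node implementation (the node layout and string keys are made up); it only demonstrates the property the commit relies on: O(1) insertion and O(log n) amortized pop-min, so the first item of a level can be returned before the whole level is sorted.

```go
package main

import "fmt"

type node struct {
	key      string
	children []*node
}

type pairingHeap struct {
	root *node
}

// meld merges two heaps in O(1): the larger root becomes a child of the smaller.
func meld(a, b *node) *node {
	if a == nil {
		return b
	}
	if b == nil {
		return a
	}
	if a.key <= b.key {
		a.children = append(a.children, b)
		return a
	}
	b.children = append(b.children, a)
	return b
}

// Insert is O(1): meld a single-node heap with the current root.
func (h *pairingHeap) Insert(key string) {
	h.root = meld(h.root, &node{key: key})
}

// PopMin removes the minimum in O(log n) amortized time via two-pass pairing.
func (h *pairingHeap) PopMin() (string, bool) {
	if h.root == nil {
		return "", false
	}
	min := h.root.key
	children := h.root.children

	// First pass: meld children pairwise; second pass: meld results right to left.
	var merged []*node
	for i := 0; i+1 < len(children); i += 2 {
		merged = append(merged, meld(children[i], children[i+1]))
	}
	if len(children)%2 == 1 {
		merged = append(merged, children[len(children)-1])
	}
	var root *node
	for i := len(merged) - 1; i >= 0; i-- {
		root = meld(root, merged[i])
	}
	h.root = root
	return min, true
}

func main() {
	h := &pairingHeap{}
	for _, k := range []string{"b", "d", "a", "c"} {
		h.Insert(k) // O(1) per item instead of sorting the whole level up front
	}
	for k, ok := h.PopMin(); ok; k, ok = h.PopMin() {
		fmt.Println(k) // items come out in sorted order: a b c d
	}
}
```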

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-01-30 12:36:26 +03:00
2bfaa65455 [#879] node: Drain internal error's channel
This fixes a shutdown panic (see the sketch below):
1. Some morph connection gets an error and passes it to the internalErr channel.
2. The storage node starts to shut down and closes the internalErr channel.
3. Another morph connection gets an error and tries to pass it to the already closed internalErr channel.
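
A minimal sketch of the pattern behind the fix, assuming the real code differs in details (the type and channel names here are illustrative): senders select on a done channel so they never write into a closed channel, and shutdown closes done and drains internalErr instead of closing it.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

func main() {
	internalErr := make(chan error, 10)
	done := make(chan struct{})

	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func(i int) { // morph connections reporting errors
			defer wg.Done()
			select {
			case internalErr <- fmt.Errorf("connection %d: %w", i, errors.New("ws closed")):
			case <-done: // shutdown already started: drop the error instead of panicking
			}
		}(i)
	}

	time.Sleep(10 * time.Millisecond)
	close(done) // signal shutdown; internalErr is deliberately never closed
	wg.Wait()

	// Drain whatever was reported before shutdown.
	for {
		select {
		case err := <-internalErr:
			fmt.Println("drained:", err)
		default:
			return
		}
	}
}
```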

Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-12-19 08:26:27 +03:00
e39db63827 [#863] blobovnicza: Fix counters
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-12-13 13:57:42 +03:00
c8683554cd [#861] shard: Fix Delete object
It is possible that the object doesn't exist in the metabase.

Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-12-12 16:50:44 +03:00
9436fcfc70 [#858] morph: Disable kludge by default
Signed-off-by: Anton Nikiforov <an.nikiforov@yadro.com>
2023-12-12 12:01:10 +03:00
7d9fe03f28 [#776] morph: Cleanup InvocationScript before sign
This is necessary to skip the check on the neo-go side.
It is a kludge purely to make the update work.

Signed-off-by: Anton Nikiforov <an.nikiforov@yadro.com>
2023-12-11 12:21:15 +03:00
08fdab7bc7 [#796] cli: Fix object nodes command
Tombstone objects must be present on all container nodes.

Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>

# Conflicts:
#	cmd/frostfs-cli/modules/object/nodes.go
2023-12-07 15:00:09 +00:00
8abe47d316 [#796] policer: Fix tombstone objects replication
Tombstone objects must be replicated to all container nodes.

Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-12-07 15:00:09 +00:00
12ca452638 [#806] morph: Remove container list cache
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-12-07 16:47:36 +03:00
02450a9a16 [#661] blobovniczatree: Do not create DB's on init
Blobovniczas will be created on write requests.

Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-12-07 15:36:05 +03:00
9de91542fe [#661] blobovnicza: Compute size with record size
To get a more accurate size of a blobovnicza, use the record size
(length of key + length of data).
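
A tiny sketch of the accounting described above (hypothetical helper names, not the actual frostfs-node code): each record contributes the length of its key plus the length of its value to the estimated size.

```go
package main

import "fmt"

// recordSize is an illustrative helper: one stored record is counted as
// len(key) + len(value), per the commit message.
func recordSize(key, value []byte) uint64 {
	return uint64(len(key)) + uint64(len(value))
}

func main() {
	records := map[string][]byte{
		"object-address-1": make([]byte, 4096),
		"object-address-2": make([]byte, 128),
	}
	var total uint64
	for k, v := range records {
		total += recordSize([]byte(k), v)
	}
	fmt.Println("estimated blobovnicza size:", total) // data bytes plus key lengths
}
```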

Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-12-07 15:36:05 +03:00
e0cb29cb6e [#698] blobovnicza: Store counter values
Blobovnicza initialization takes a long time because of the bucket Stat()
call, so now blobovnicza stores the counters in a META bucket.
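
A minimal sketch of the idea, assuming a bbolt-backed blobovnicza (the bucket and key names here are illustrative, not the real schema): persist the counter on update so that opening the DB can read a single key instead of calling Stat() over the data buckets.

```go
package main

import (
	"encoding/binary"
	"log"

	bolt "go.etcd.io/bbolt"
)

var (
	metaBucket = []byte("META")
	countKey   = []byte("item_count")
)

// saveCounter writes the current item counter into the meta bucket.
func saveCounter(db *bolt.DB, count uint64) error {
	return db.Update(func(tx *bolt.Tx) error {
		b, err := tx.CreateBucketIfNotExists(metaBucket)
		if err != nil {
			return err
		}
		buf := make([]byte, 8)
		binary.LittleEndian.PutUint64(buf, count)
		return b.Put(countKey, buf)
	})
}

// loadCounter reads the stored counter; found=false means an old DB without
// stored counters, where the caller would fall back to a full scan.
func loadCounter(db *bolt.DB) (count uint64, found bool, err error) {
	err = db.View(func(tx *bolt.Tx) error {
		b := tx.Bucket(metaBucket)
		if b == nil {
			return nil
		}
		if v := b.Get(countKey); len(v) == 8 {
			count = binary.LittleEndian.Uint64(v)
			found = true
		}
		return nil
	})
	return
}

func main() {
	db, err := bolt.Open("blobovnicza.db", 0o644, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if err := saveCounter(db, 42); err != nil {
		log.Fatal(err)
	}
	n, ok, err := loadCounter(db)
	log.Println(n, ok, err)
}
```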

Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-12-07 15:36:05 +03:00
736ec847b7 [#661] metabase: Update storage ID in case of logical errors
If an object was GC-marked, deleted, or expired, it is still required
to update its storageID so that the object can be physically deleted.

Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-12-07 15:36:05 +03:00
4b20f846af [#661] metrics: Add rebuild percent metric
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-12-07 15:36:05 +03:00
04b530197e [#661] blobovniczatree: Pass object size limit from config
If the actual small object size value is lower than the default object
size limit, unnecessary buckets are created.
If the actual small object size value is greater than the default object
size limit, an error happens.

Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-12-07 15:36:05 +03:00
f8606db7f2 [#661] blobovniczatree: Do not sort DBs and indices
Put stores an object in the next active DB, so there is no need to sort DBs.
Sorting also adds unnecessary DB openings.

Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-12-07 15:36:05 +03:00
52c625df0f [#661] blobovniczatree: Make Rebuild concurrent for objects
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-12-07 15:36:05 +03:00
888f966eb4 [#661] blobovniczatree: Make Rebuild concurrent
Different DBs can be rebuilt concurrently.
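
A generic sketch of the concurrency pattern (not the actual Rebuild code; function names and the worker limit are illustrative): independent databases are rebuilt in parallel through a bounded errgroup worker pool.

```go
package main

import (
	"context"
	"fmt"

	"golang.org/x/sync/errgroup"
)

// rebuildAll rebuilds independent DBs concurrently, with at most `workers`
// rebuilds in flight at once.
func rebuildAll(ctx context.Context, dbPaths []string, workers int) error {
	eg, ctx := errgroup.WithContext(ctx)
	eg.SetLimit(workers)
	for _, path := range dbPaths {
		path := path
		eg.Go(func() error {
			return rebuildOne(ctx, path)
		})
	}
	return eg.Wait()
}

func rebuildOne(ctx context.Context, path string) error {
	// Placeholder for moving objects out of a DB with an outdated depth/width.
	fmt.Println("rebuilding", path)
	return ctx.Err()
}

func main() {
	_ = rebuildAll(context.Background(), []string{"0/0.db", "0/1.db", "1/0.db"}, 2)
}
```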

Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-12-07 15:36:04 +03:00
f325b9557c [#661] metrics: Add blobovniczatree rebuild metrics
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-12-07 15:36:04 +03:00
4f7f67d136 [#661] blobovniczatree: Make Rebuild failover safe
Now the move info is stored in the blobovnicza, so in case of failover
the rebuild completes the previous operation first.

Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-12-07 15:36:04 +03:00
8afb42aae8 [#698] blobovniczatree: Init blobovniczas concurrently
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-12-07 15:36:04 +03:00
71fac2b9c3 [#661] shard: Fix Delete method
Due to data being flushed from the writecache to the main storage
concurrently with deletion, partial deletion is possible.
As a solution, deletion is allowed only when the object is in the main
storage, because the object will be deleted from the writecache by the flush goroutine.

Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-12-07 15:36:04 +03:00
9ef4a885de [#661] blobovniczatree: Add Rebuild implementation
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-12-07 15:36:04 +03:00
b22b703325 [#661] blobstor: Add Rebuild implementation
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-12-07 15:36:04 +03:00
784efb6155 [#661] blobovniczatree: Allow to change depth or width
Now it is possible to change the depth or width of a blobovniczatree.

Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-12-07 15:36:04 +03:00
df11b40dfd [#661] blobovniczatree: Use .db extension for db files
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-12-07 15:36:04 +03:00
4e05ce1c3a [#661] shard: Add blobstor rebuilder
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-12-07 15:36:04 +03:00
1f07e8b375 Release v0.37.0
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2023-12-07 13:36:34 +03:00
2be938f3cd [#823] go.mod: Update frostfs-api-go
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2023-11-21 14:46:50 +03:00
eac61fe9b2 [#733] frostfs-cli: Add control ir remove-container
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-11-03 15:42:58 +03:00
dc81b4b50c [#725] writecache: Fix metric values
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-10-27 16:25:04 +03:00
98da032324 [#758] ir: Do not exclude node in maintenance mode from netmap
Signed-off-by: Anton Nikiforov <an.nikiforov@yadro.com>
2023-10-26 16:05:02 +03:00
5acc13fa94 [#732] containersvc: Remove load announcement
IR code was removed in 8879c6ea.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2023-10-09 15:17:10 +03:00
e0f0b93b5e [#723] netmap: Drop already bootstrapped check
Because of this check, under certain conditions the node could be removed
from the network map even though it was functioning normally.

Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-10-05 15:31:18 +03:00
eb5248621a [#723] netmap: Send bootstrap at each epoch tick
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-10-05 15:31:10 +03:00
02be6a4341 [#702] node: Update SDK version
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2023-09-29 18:46:06 +03:00
368774be95 [#691] node: Compare node info during initial bootstrap properly
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2023-09-18 07:30:15 +00:00
b9ef294b99 [#692] go.mod: Update sdk-go
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2023-09-15 16:54:15 +03:00
118 changed files with 2852 additions and 2285 deletions

View file

@@ -1,7 +1,7 @@
 # Changelog
 Changelog for FrostFS Node
-## [Unreleased]
+## [v0.37.0] - 2023-12-07 - Academy of Sciences
 ### Added
 - Support impersonate bearer token (#229)
@@ -11,9 +11,12 @@ Changelog for FrostFS Node
 - Set extra wallets on SIGHUP for ir (#125)
 - Writecache metrics (#312)
 - Add tree service metrics (#370)
+- Add `frostfs-cli control ir remove-container` (#733)
 ### Changed
 - `frostfs-cli util locode generate` is now much faster (#309)
+- Send bootstrap query every epoch in `frostfs-node` (#691)
 ### Fixed
 - Take network settings into account during netmap contract update (#100)
 - Read config files from dir even if config file not provided via `--config` for node (#238)
@@ -21,6 +24,8 @@ Changelog for FrostFS Node
 - Tree service panic in its internal client cache (#322)
 - Iterate over endpoints when create ws client in morph's constructor (#304)
 - Delete complex objects with GC (#332)
+- Connection leaks after netmap address updates (#674)
+- Count writecache metrics properly (#725)
 ### Removed
 ### Updated

View file

@@ -1 +1 @@
-v0.36.0
+v0.37.0

View file

@@ -12,8 +12,10 @@ func initControlIRCmd() {
 	irCmd.AddCommand(tickEpochCmd)
 	irCmd.AddCommand(removeNodeCmd)
 	irCmd.AddCommand(irHealthCheckCmd)
+	irCmd.AddCommand(removeContainerCmd)
 	initControlIRTickEpochCmd()
 	initControlIRRemoveNodeCmd()
 	initControlIRHealthCheckCmd()
+	initControlIRRemoveContainerCmd()
 }

View file

@ -0,0 +1,94 @@
package control
import (
"errors"
"git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/refs"
rawclient "git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/rpc/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
ircontrol "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control/ir"
ircontrolsrv "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control/ir/server"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/user"
"github.com/spf13/cobra"
)
const (
ownerFlag = "owner"
)
var removeContainerCmd = &cobra.Command{
Use: "remove-container",
Short: "Schedules a container removal",
Long: `Schedules a container removal via a notary request.
Container data will be deleted asynchronously by policer.
To check removal status "frostfs-cli container list" command can be used.`,
Run: removeContainer,
}
func initControlIRRemoveContainerCmd() {
initControlFlags(removeContainerCmd)
flags := removeContainerCmd.Flags()
flags.String(commonflags.CIDFlag, "", commonflags.CIDFlagUsage)
flags.String(ownerFlag, "", "Container owner's wallet address.")
removeContainerCmd.MarkFlagsMutuallyExclusive(commonflags.CIDFlag, ownerFlag)
}
func removeContainer(cmd *cobra.Command, _ []string) {
req := prepareRemoveContainerRequest(cmd)
pk := key.Get(cmd)
c := getClient(cmd, pk)
commonCmd.ExitOnErr(cmd, "could not sign request: %w", ircontrolsrv.SignMessage(pk, req))
var resp *ircontrol.RemoveContainerResponse
err := c.ExecRaw(func(client *rawclient.Client) error {
var err error
resp, err = ircontrol.RemoveContainer(client, req)
return err
})
commonCmd.ExitOnErr(cmd, "failed to execute request: %w", err)
verifyResponse(cmd, resp.GetSignature(), resp.GetBody())
if len(req.GetBody().GetContainerId()) > 0 {
cmd.Println("Container scheduled to removal")
} else {
cmd.Println("User containers sheduled to removal")
}
}
func prepareRemoveContainerRequest(cmd *cobra.Command) *ircontrol.RemoveContainerRequest {
req := &ircontrol.RemoveContainerRequest{
Body: &ircontrol.RemoveContainerRequest_Body{},
}
cidStr, err := cmd.Flags().GetString(commonflags.CIDFlag)
commonCmd.ExitOnErr(cmd, "failed to get cid: ", err)
ownerStr, err := cmd.Flags().GetString(ownerFlag)
commonCmd.ExitOnErr(cmd, "failed to get owner: ", err)
if len(ownerStr) == 0 && len(cidStr) == 0 {
commonCmd.ExitOnErr(cmd, "invalid usage: %w", errors.New("neither owner's wallet address nor container ID are specified"))
}
if len(ownerStr) > 0 {
var owner user.ID
commonCmd.ExitOnErr(cmd, "invalid owner ID: %w", owner.DecodeString(ownerStr))
var ownerID refs.OwnerID
owner.WriteToV2(&ownerID)
req.Body.Owner = ownerID.StableMarshal(nil)
}
if len(cidStr) > 0 {
var containerID cid.ID
commonCmd.ExitOnErr(cmd, "invalid container ID: %w", containerID.DecodeString(cidStr))
req.Body.ContainerId = containerID[:]
}
return req
}

View file

@@ -31,10 +31,10 @@ const (
 )
 type objectNodesInfo struct {
 	containerID       cid.ID
 	objectID          oid.ID
 	relatedObjectIDs  []oid.ID
-	isLock            bool
+	isLockOrTombstone bool
 }
 type boolError struct {
@@ -101,9 +101,9 @@ func getObjectInfo(cmd *cobra.Command, cnrID cid.ID, objID oid.ID, cli *client.C
 	res, err := internalclient.HeadObject(cmd.Context(), prmHead)
 	if err == nil {
 		return &objectNodesInfo{
 			containerID:       cnrID,
 			objectID:          objID,
-			isLock:            res.Header().Type() == objectSDK.TypeLock,
+			isLockOrTombstone: res.Header().Type() == objectSDK.TypeLock || res.Header().Type() == objectSDK.TypeTombstone,
 		}
 	}
@@ -191,7 +191,7 @@ func getRequiredPlacement(cmd *cobra.Command, objInfo *objectNodesInfo, placemen
 	numOfReplicas := placementPolicy.ReplicaNumberByIndex(repIdx)
 	var nodeIdx uint32
 	for _, n := range rep {
-		if !objInfo.isLock && nodeIdx == numOfReplicas { //lock object should be on all container nodes
+		if !objInfo.isLockOrTombstone && nodeIdx == numOfReplicas { // lock and tombstone objects should be on all container nodes
 			break
 		}
 		nodes[n.Hash()] = n
@@ -213,7 +213,8 @@
 }
 func getActualPlacement(cmd *cobra.Command, netmap *netmapSDK.NetMap, requiredPlacement map[uint64]netmapSDK.NodeInfo,
-	pk *ecdsa.PrivateKey, objInfo *objectNodesInfo) map[uint64]boolError {
+	pk *ecdsa.PrivateKey, objInfo *objectNodesInfo,
+) map[uint64]boolError {
 	result := make(map[uint64]boolError)
 	resultMtx := &sync.Mutex{}

View file

@ -7,6 +7,7 @@ import (
configViper "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common/config" configViper "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common/config"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs" "git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client"
"github.com/spf13/viper" "github.com/spf13/viper"
"go.uber.org/zap" "go.uber.org/zap"
) )
@ -67,6 +68,7 @@ func watchForSignal(cancel func()) {
if err != nil { if err != nil {
log.Error(logs.FrostFSNodeConfigurationReading, zap.Error(err)) log.Error(logs.FrostFSNodeConfigurationReading, zap.Error(err))
} }
client.CleanInvScript = cfg.GetBool("morph.cleaninvscript")
log.Info(logs.FrostFSNodeConfigurationHasBeenReloadedSuccessfully) log.Info(logs.FrostFSNodeConfigurationHasBeenReloadedSuccessfully)
case syscall.SIGTERM, syscall.SIGINT: case syscall.SIGTERM, syscall.SIGINT:
log.Info(logs.FrostFSNodeTerminationSignalHasBeenReceivedStopping) log.Info(logs.FrostFSNodeTerminationSignalHasBeenReceivedStopping)

View file

@ -10,6 +10,7 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs" "git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-node/misc" "git.frostfs.info/TrueCloudLab/frostfs-node/misc"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/innerring" "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/innerring"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/logger" "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/logger"
"github.com/spf13/viper" "github.com/spf13/viper"
"go.uber.org/zap" "go.uber.org/zap"
@ -60,6 +61,7 @@ func main() {
var err error var err error
cfg, err = newConfig() cfg, err = newConfig()
exitErr(err) exitErr(err)
client.CleanInvScript = cfg.GetBool("morph.cleaninvscript")
logPrm.MetricsNamespace = "frostfs_ir" logPrm.MetricsNamespace = "frostfs_ir"
err = logPrm.SetLevelString( err = logPrm.SetLevelString(

View file

@ -5,7 +5,7 @@ import (
"fmt" "fmt"
common "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-lens/internal" common "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-lens/internal"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobovnicza" "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/blobovniczatree"
meta "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/metabase" meta "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/metabase"
objectSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object" objectSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id" oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
@ -40,7 +40,7 @@ func inspectFunc(cmd *cobra.Command, _ []string) {
common.ExitOnErr(cmd, common.Errf("could not check if the obj is small: %w", err)) common.ExitOnErr(cmd, common.Errf("could not check if the obj is small: %w", err))
if id := resStorageID.StorageID(); id != nil { if id := resStorageID.StorageID(); id != nil {
cmd.Printf("Object storageID: %s\n\n", blobovnicza.NewIDFromBytes(id).String()) cmd.Printf("Object storageID: %s\n\n", blobovniczatree.NewIDFromBytes(id).Path())
} else { } else {
cmd.Printf("Object does not contain storageID\n\n") cmd.Printf("Object does not contain storageID\n\n")
} }

View file

@ -6,13 +6,11 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/core/container" "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/core/container"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/core/netmap" "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/core/netmap"
cntClient "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client/container"
putsvc "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/object/put" putsvc "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/object/put"
utilSync "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/sync" utilSync "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/sync"
apistatus "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client/status" apistatus "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client/status"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id" cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
netmapSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/netmap" netmapSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/netmap"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/user"
lru "github.com/hashicorp/golang-lru/v2" lru "github.com/hashicorp/golang-lru/v2"
) )
@ -244,117 +242,6 @@ func (s *lruNetmapSource) Epoch() (uint64, error) {
return s.netState.CurrentEpoch(), nil return s.netState.CurrentEpoch(), nil
} }
// wrapper over TTL cache of values read from the network
// that implements container lister.
type ttlContainerLister struct {
inner *ttlNetCache[string, *cacheItemContainerList]
client *cntClient.Client
}
// value type for ttlNetCache used by ttlContainerLister.
type cacheItemContainerList struct {
// protects list from concurrent add/remove ops
mtx sync.RWMutex
// actual list of containers owner by the particular user
list []cid.ID
}
func newCachedContainerLister(c *cntClient.Client, ttl time.Duration) ttlContainerLister {
const containerListerCacheSize = 100
lruCnrListerCache := newNetworkTTLCache(containerListerCacheSize, ttl, func(strID string) (*cacheItemContainerList, error) {
var id *user.ID
if strID != "" {
id = new(user.ID)
err := id.DecodeString(strID)
if err != nil {
return nil, err
}
}
list, err := c.ContainersOf(id)
if err != nil {
return nil, err
}
return &cacheItemContainerList{
list: list,
}, nil
})
return ttlContainerLister{inner: lruCnrListerCache, client: c}
}
// List returns list of container IDs from the cache. If list is missing in the
// cache or expired, then it returns container IDs from side chain and updates
// the cache.
func (s ttlContainerLister) List(id *user.ID) ([]cid.ID, error) {
if id == nil {
return s.client.ContainersOf(nil)
}
item, err := s.inner.get(id.EncodeToString())
if err != nil {
return nil, err
}
item.mtx.RLock()
res := make([]cid.ID, len(item.list))
copy(res, item.list)
item.mtx.RUnlock()
return res, nil
}
// updates cached list of owner's containers: cnr is added if flag is true, otherwise it's removed.
// Concurrent calls can lead to some races:
// - two parallel additions to missing owner's cache can lead to only one container to be cached
// - async cache value eviction can lead to idle addition
//
// All described race cases aren't critical since cache values expire anyway, we just try
// to increase cache actuality w/o huge overhead on synchronization.
func (s *ttlContainerLister) update(owner user.ID, cnr cid.ID, add bool) {
strOwner := owner.EncodeToString()
val, ok := s.inner.cache.Peek(strOwner)
if !ok {
// we could cache the single cnr but in this case we will disperse
// with the Sidechain a lot
return
}
if s.inner.ttl <= time.Since(val.t) {
return
}
item := val.v
item.mtx.Lock()
{
found := false
for i := range item.list {
if found = item.list[i].Equals(cnr); found {
if !add {
item.list = append(item.list[:i], item.list[i+1:]...)
// if list became empty we don't remove the value from the cache
// since empty list is a correct value, and we don't want to insta
// re-request it from the Sidechain
}
break
}
}
if add && !found {
item.list = append(item.list, cnr)
}
}
item.mtx.Unlock()
}
type cachedIRFetcher struct { type cachedIRFetcher struct {
*ttlNetCache[struct{}, [][]byte] *ttlNetCache[struct{}, [][]byte]
} }

View file

@ -24,6 +24,7 @@ import (
blobovniczaconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/engine/shard/blobstor/blobovnicza" blobovniczaconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/engine/shard/blobstor/blobovnicza"
fstreeconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/engine/shard/blobstor/fstree" fstreeconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/engine/shard/blobstor/fstree"
loggerconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/logger" loggerconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/logger"
morphconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/morph"
nodeconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/node" nodeconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/node"
objectconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/object" objectconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/object"
replicatorconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/replicator" replicatorconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/replicator"
@ -101,6 +102,7 @@ type applicationConfiguration struct {
shardPoolSize uint32 shardPoolSize uint32
shards []shardCfg shards []shardCfg
lowMem bool lowMem bool
rebuildWorkers uint32
} }
} }
@ -176,6 +178,8 @@ type subStorageCfg struct {
width uint64 width uint64
leafWidth uint64 leafWidth uint64
openedCacheSize int openedCacheSize int
initWorkerCount int
initInAdvance bool
} }
// readConfig fills applicationConfiguration with raw configuration values // readConfig fills applicationConfiguration with raw configuration values
@ -207,6 +211,10 @@ func (a *applicationConfiguration) readConfig(c *config.Config) error {
a.EngineCfg.errorThreshold = engineconfig.ShardErrorThreshold(c) a.EngineCfg.errorThreshold = engineconfig.ShardErrorThreshold(c)
a.EngineCfg.shardPoolSize = engineconfig.ShardPoolSize(c) a.EngineCfg.shardPoolSize = engineconfig.ShardPoolSize(c)
a.EngineCfg.lowMem = engineconfig.EngineLowMemoryConsumption(c) a.EngineCfg.lowMem = engineconfig.EngineLowMemoryConsumption(c)
a.EngineCfg.rebuildWorkers = engineconfig.EngineRebuildWorkersCount(c)
// Kludge
client.CleanInvScript = morphconfig.CleanInvocationScript(c)
return engineconfig.IterateShards(c, false, func(sc *shardconfig.Config) error { return a.updateShardConfig(c, sc) }) return engineconfig.IterateShards(c, false, func(sc *shardconfig.Config) error { return a.updateShardConfig(c, sc) })
} }
@ -291,6 +299,8 @@ func (a *applicationConfiguration) setShardStorageConfig(newConfig *shardCfg, ol
sCfg.width = sub.ShallowWidth() sCfg.width = sub.ShallowWidth()
sCfg.leafWidth = sub.LeafWidth() sCfg.leafWidth = sub.LeafWidth()
sCfg.openedCacheSize = sub.OpenedCacheSize() sCfg.openedCacheSize = sub.OpenedCacheSize()
sCfg.initWorkerCount = sub.InitWorkerCount()
sCfg.initInAdvance = sub.InitInAdvance()
case fstree.Type: case fstree.Type:
sub := fstreeconfig.From((*config.Config)(storagesCfg[i])) sub := fstreeconfig.From((*config.Config)(storagesCfg[i]))
sCfg.depth = sub.Depth() sCfg.depth = sub.Depth()
@ -345,8 +355,7 @@ type internals struct {
apiVersion version.Version apiVersion version.Version
healthStatus *atomic.Int32 healthStatus *atomic.Int32
// is node under maintenance // is node under maintenance
isMaintenance atomic.Bool isMaintenance atomic.Bool
alreadyBootstraped bool
} }
// starts node's maintenance. // starts node's maintenance.
@ -686,13 +695,14 @@ func initCfgObject(appCfg *config.Config) cfgObject {
} }
func (c *cfg) engineOpts() []engine.Option { func (c *cfg) engineOpts() []engine.Option {
opts := make([]engine.Option, 0, 4) var opts []engine.Option
opts = append(opts, opts = append(opts,
engine.WithShardPoolSize(c.EngineCfg.shardPoolSize), engine.WithShardPoolSize(c.EngineCfg.shardPoolSize),
engine.WithErrorThreshold(c.EngineCfg.errorThreshold), engine.WithErrorThreshold(c.EngineCfg.errorThreshold),
engine.WithLogger(c.log), engine.WithLogger(c.log),
engine.WithLowMemoryConsumption(c.EngineCfg.lowMem), engine.WithLowMemoryConsumption(c.EngineCfg.lowMem),
engine.WithRebuildWorkersCount(c.EngineCfg.rebuildWorkers),
) )
if c.metricsCollector != nil { if c.metricsCollector != nil {
@ -781,7 +791,10 @@ func (c *cfg) getSubstorageOpts(shCfg shardCfg) []blobstor.SubStorage {
blobovniczatree.WithBlobovniczaShallowWidth(sRead.width), blobovniczatree.WithBlobovniczaShallowWidth(sRead.width),
blobovniczatree.WithBlobovniczaLeafWidth(sRead.leafWidth), blobovniczatree.WithBlobovniczaLeafWidth(sRead.leafWidth),
blobovniczatree.WithOpenedCacheSize(sRead.openedCacheSize), blobovniczatree.WithOpenedCacheSize(sRead.openedCacheSize),
blobovniczatree.WithInitWorkerCount(sRead.initWorkerCount),
blobovniczatree.WithInitInAdvance(sRead.initInAdvance),
blobovniczatree.WithLogger(c.log), blobovniczatree.WithLogger(c.log),
blobovniczatree.WithObjectSizeLimit(shCfg.smallSizeObjectLimit),
} }
if c.metricsCollector != nil { if c.metricsCollector != nil {
@@ -1149,9 +1162,8 @@ func (c *cfg) shutdown() {
 	c.setHealthStatus(control.HealthStatus_SHUTTING_DOWN)
 	c.ctxCancel()
-	c.done <- struct{}{}
+	close(c.done)
 	for i := range c.closers {
 		c.closers[len(c.closers)-1-i].fn()
 	}
-	close(c.internalErr)
 }

View file

@ -15,6 +15,9 @@ const (
// ShardPoolSizeDefault is a default value of routine pool size per-shard to // ShardPoolSizeDefault is a default value of routine pool size per-shard to
// process object PUT operations in a storage engine. // process object PUT operations in a storage engine.
ShardPoolSizeDefault = 20 ShardPoolSizeDefault = 20
// RebuildWorkersCountDefault is a default value of the workers count to
// process storage rebuild operations in a storage engine.
RebuildWorkersCountDefault = 100
) )
// ErrNoShardConfigured is returned when at least 1 shard is required but none are found. // ErrNoShardConfigured is returned when at least 1 shard is required but none are found.
@ -88,3 +91,11 @@ func ShardErrorThreshold(c *config.Config) uint32 {
func EngineLowMemoryConsumption(c *config.Config) bool { func EngineLowMemoryConsumption(c *config.Config) bool {
return config.BoolSafe(c.Sub(subsection), "low_mem") return config.BoolSafe(c.Sub(subsection), "low_mem")
} }
// EngineRebuildWorkersCount returns value of "rebuild_workers_count" config parmeter from "storage" section.
func EngineRebuildWorkersCount(c *config.Config) uint32 {
if v := config.Uint32Safe(c.Sub(subsection), "rebuild_workers_count"); v > 0 {
return v
}
return RebuildWorkersCountDefault
}

View file

@ -38,15 +38,17 @@ func TestEngineSection(t *testing.T) {
require.EqualValues(t, 0, engineconfig.ShardErrorThreshold(empty)) require.EqualValues(t, 0, engineconfig.ShardErrorThreshold(empty))
require.EqualValues(t, engineconfig.ShardPoolSizeDefault, engineconfig.ShardPoolSize(empty)) require.EqualValues(t, engineconfig.ShardPoolSizeDefault, engineconfig.ShardPoolSize(empty))
require.EqualValues(t, mode.ReadWrite, shardconfig.From(empty).Mode()) require.EqualValues(t, mode.ReadWrite, shardconfig.From(empty).Mode())
require.EqualValues(t, engineconfig.RebuildWorkersCountDefault, engineconfig.EngineRebuildWorkersCount(empty))
}) })
const path = "../../../../config/example/node" const path = "../../../../config/example/node"
var fileConfigTest = func(c *config.Config) { fileConfigTest := func(c *config.Config) {
num := 0 num := 0
require.EqualValues(t, 100, engineconfig.ShardErrorThreshold(c)) require.EqualValues(t, 100, engineconfig.ShardErrorThreshold(c))
require.EqualValues(t, 15, engineconfig.ShardPoolSize(c)) require.EqualValues(t, 15, engineconfig.ShardPoolSize(c))
require.EqualValues(t, uint32(1000), engineconfig.EngineRebuildWorkersCount(c))
err := engineconfig.IterateShards(c, true, func(sc *shardconfig.Config) error { err := engineconfig.IterateShards(c, true, func(sc *shardconfig.Config) error {
defer func() { defer func() {
@ -78,7 +80,7 @@ func TestEngineSection(t *testing.T) {
require.EqualValues(t, 3221225472, wc.SizeLimit()) require.EqualValues(t, 3221225472, wc.SizeLimit())
require.Equal(t, "tmp/0/meta", meta.Path()) require.Equal(t, "tmp/0/meta", meta.Path())
require.Equal(t, fs.FileMode(0644), meta.BoltDB().Perm()) require.Equal(t, fs.FileMode(0o644), meta.BoltDB().Perm())
require.Equal(t, 100, meta.BoltDB().MaxBatchSize()) require.Equal(t, 100, meta.BoltDB().MaxBatchSize())
require.Equal(t, 10*time.Millisecond, meta.BoltDB().MaxBatchDelay()) require.Equal(t, 10*time.Millisecond, meta.BoltDB().MaxBatchDelay())
@ -89,15 +91,17 @@ func TestEngineSection(t *testing.T) {
require.Equal(t, 2, len(ss)) require.Equal(t, 2, len(ss))
blz := blobovniczaconfig.From((*config.Config)(ss[0])) blz := blobovniczaconfig.From((*config.Config)(ss[0]))
require.Equal(t, "tmp/0/blob/blobovnicza", ss[0].Path()) require.Equal(t, "tmp/0/blob/blobovnicza", ss[0].Path())
require.EqualValues(t, 0644, blz.BoltDB().Perm()) require.EqualValues(t, 0o644, blz.BoltDB().Perm())
require.EqualValues(t, 4194304, blz.Size()) require.EqualValues(t, 4194304, blz.Size())
require.EqualValues(t, 1, blz.ShallowDepth()) require.EqualValues(t, 1, blz.ShallowDepth())
require.EqualValues(t, 4, blz.ShallowWidth()) require.EqualValues(t, 4, blz.ShallowWidth())
require.EqualValues(t, 50, blz.OpenedCacheSize()) require.EqualValues(t, 50, blz.OpenedCacheSize())
require.EqualValues(t, 10, blz.LeafWidth()) require.EqualValues(t, 10, blz.LeafWidth())
require.EqualValues(t, 10, blz.InitWorkerCount())
require.EqualValues(t, true, blz.InitInAdvance())
require.Equal(t, "tmp/0/blob", ss[1].Path()) require.Equal(t, "tmp/0/blob", ss[1].Path())
require.EqualValues(t, 0644, ss[1].Perm()) require.EqualValues(t, 0o644, ss[1].Perm())
fst := fstreeconfig.From((*config.Config)(ss[1])) fst := fstreeconfig.From((*config.Config)(ss[1]))
require.EqualValues(t, 5, fst.Depth()) require.EqualValues(t, 5, fst.Depth())
@ -112,7 +116,7 @@ func TestEngineSection(t *testing.T) {
require.Equal(t, mode.ReadOnly, sc.Mode()) require.Equal(t, mode.ReadOnly, sc.Mode())
case 1: case 1:
require.Equal(t, "tmp/1/blob/pilorama.db", pl.Path()) require.Equal(t, "tmp/1/blob/pilorama.db", pl.Path())
require.Equal(t, fs.FileMode(0644), pl.Perm()) require.Equal(t, fs.FileMode(0o644), pl.Perm())
require.True(t, pl.NoSync()) require.True(t, pl.NoSync())
require.Equal(t, 5*time.Millisecond, pl.MaxBatchDelay()) require.Equal(t, 5*time.Millisecond, pl.MaxBatchDelay())
require.Equal(t, 100, pl.MaxBatchSize()) require.Equal(t, 100, pl.MaxBatchSize())
@ -127,7 +131,7 @@ func TestEngineSection(t *testing.T) {
require.EqualValues(t, 4294967296, wc.SizeLimit()) require.EqualValues(t, 4294967296, wc.SizeLimit())
require.Equal(t, "tmp/1/meta", meta.Path()) require.Equal(t, "tmp/1/meta", meta.Path())
require.Equal(t, fs.FileMode(0644), meta.BoltDB().Perm()) require.Equal(t, fs.FileMode(0o644), meta.BoltDB().Perm())
require.Equal(t, 200, meta.BoltDB().MaxBatchSize()) require.Equal(t, 200, meta.BoltDB().MaxBatchSize())
require.Equal(t, 20*time.Millisecond, meta.BoltDB().MaxBatchDelay()) require.Equal(t, 20*time.Millisecond, meta.BoltDB().MaxBatchDelay())
@ -144,9 +148,10 @@ func TestEngineSection(t *testing.T) {
require.EqualValues(t, 4, blz.ShallowWidth()) require.EqualValues(t, 4, blz.ShallowWidth())
require.EqualValues(t, 50, blz.OpenedCacheSize()) require.EqualValues(t, 50, blz.OpenedCacheSize())
require.EqualValues(t, 10, blz.LeafWidth()) require.EqualValues(t, 10, blz.LeafWidth())
require.EqualValues(t, blobovniczaconfig.InitWorkerCountDefault, blz.InitWorkerCount())
require.Equal(t, "tmp/1/blob", ss[1].Path()) require.Equal(t, "tmp/1/blob", ss[1].Path())
require.EqualValues(t, 0644, ss[1].Perm()) require.EqualValues(t, 0o644, ss[1].Perm())
fst := fstreeconfig.From((*config.Config)(ss[1])) fst := fstreeconfig.From((*config.Config)(ss[1]))
require.EqualValues(t, 5, fst.Depth()) require.EqualValues(t, 5, fst.Depth())

View file

@ -22,6 +22,9 @@ const (
// OpenedCacheSizeDefault is a default cache size of opened Blobovnicza's. // OpenedCacheSizeDefault is a default cache size of opened Blobovnicza's.
OpenedCacheSizeDefault = 16 OpenedCacheSizeDefault = 16
// InitWorkerCountDefault is a default workers count to initialize Blobovnicza's.
InitWorkerCountDefault = 5
) )
// From wraps config section into Config. // From wraps config section into Config.
@ -112,3 +115,29 @@ func (x *Config) LeafWidth() uint64 {
"leaf_width", "leaf_width",
) )
} }
// InitWorkersCount returns the value of "init_worker_count" config parameter.
//
// Returns InitWorkerCountDefault if the value is not a positive number.
func (x *Config) InitWorkerCount() int {
d := config.IntSafe(
(*config.Config)(x),
"init_worker_count",
)
if d > 0 {
return int(d)
}
return InitWorkerCountDefault
}
// InitInAdvance returns the value of "init_in_advance" config parameter.
//
// Returns False if the value is not defined or invalid.
func (x *Config) InitInAdvance() bool {
return config.BoolSafe(
(*config.Config)(x),
"init_in_advance",
)
}

View file

@@ -97,3 +97,8 @@ func SwitchInterval(c *config.Config) time.Duration {
 	return SwitchIntervalDefault
 }
+
+// CleanInvocationScript this is a kludge purely for update to work.
+func CleanInvocationScript(c *config.Config) bool {
+	return config.Bool(c.Sub(subsection), "cleaninvscript")
+}

View file

@ -3,44 +3,22 @@ package main
import ( import (
"bytes" "bytes"
"context" "context"
"crypto/ecdsa"
"crypto/sha256"
"errors"
"fmt"
"strconv"
containerV2 "git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/container"
containerGRPC "git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/container/grpc" containerGRPC "git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/container/grpc"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs" "git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/core/client"
containerCore "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/core/container" containerCore "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/core/container"
netmapCore "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/core/netmap"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/engine"
cntClient "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client/container" cntClient "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client/container"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/event" "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/event"
containerEvent "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/event/container" containerEvent "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/event/container"
containerTransportGRPC "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/network/transport/container/grpc" containerTransportGRPC "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/network/transport/container/grpc"
containerService "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/container" containerService "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/container"
loadcontroller "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/container/announcement/load/controller"
loadroute "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/container/announcement/load/route"
placementrouter "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/container/announcement/load/route/placement"
loadstorage "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/container/announcement/load/storage"
containerMorph "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/container/morph" containerMorph "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/container/morph"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/logger"
apiClient "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client"
containerSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id" cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/netmap"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/user" "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/user"
"go.uber.org/zap" "go.uber.org/zap"
) )
const ( func initContainerService(_ context.Context, c *cfg) {
startEstimationNotifyEvent = "StartEstimation"
stopEstimationNotifyEvent = "StopEstimation"
)
func initContainerService(ctx context.Context, c *cfg) {
// container wrapper that tries to invoke notary // container wrapper that tries to invoke notary
// requests if chain is configured so // requests if chain is configured so
wrap, err := cntClient.NewFromMorph(c.cfgMorph.client, c.cfgContainer.scriptHash, 0, cntClient.TryNotary()) wrap, err := cntClient.NewFromMorph(c.cfgMorph.client, c.cfgContainer.scriptHash, 0, cntClient.TryNotary())
@ -52,44 +30,10 @@ func initContainerService(ctx context.Context, c *cfg) {
cnrRdr, cnrWrt := configureEACLAndContainerSources(c, wrap, cnrSrc) cnrRdr, cnrWrt := configureEACLAndContainerSources(c, wrap, cnrSrc)
loadAccumulator := loadstorage.New(loadstorage.Prm{})
loadPlacementBuilder := &loadPlacementBuilder{
log: c.log,
nmSrc: c.netMapSource,
cnrSrc: cnrSrc,
}
routeBuilder := placementrouter.New(placementrouter.Prm{
PlacementBuilder: loadPlacementBuilder,
})
loadRouter := loadroute.New(
loadroute.Prm{
LocalServerInfo: c,
RemoteWriterProvider: &remoteLoadAnnounceProvider{
key: &c.key.PrivateKey,
netmapKeys: c,
clientCache: c.bgClientCache,
deadEndProvider: loadcontroller.SimpleWriterProvider(loadAccumulator),
},
Builder: routeBuilder,
},
loadroute.WithLogger(c.log),
)
setLoadController(ctx, c, loadRouter, loadAccumulator)
server := containerTransportGRPC.New( server := containerTransportGRPC.New(
containerService.NewSignService( containerService.NewSignService(
&c.key.PrivateKey, &c.key.PrivateKey,
&usedSpaceService{ containerService.NewExecutionService(containerMorph.NewExecutor(cnrRdr, cnrWrt), c.respSvc),
Server: containerService.NewExecutionService(containerMorph.NewExecutor(cnrRdr, cnrWrt), c.respSvc),
loadWriterProvider: loadRouter,
loadPlacementBuilder: loadPlacementBuilder,
routeBuilder: routeBuilder,
cfg: c,
},
), ),
) )
@ -119,7 +63,6 @@ func configureEACLAndContainerSources(c *cfg, client *cntClient.Client, cnrSrc c
// use RPC node as source of Container contract items (with caching) // use RPC node as source of Container contract items (with caching)
cachedContainerStorage := newCachedContainerStorage(cnrSrc, c.cfgMorph.cacheTTL) cachedContainerStorage := newCachedContainerStorage(cnrSrc, c.cfgMorph.cacheTTL)
cachedEACLStorage := newCachedEACLStorage(eACLFetcher, c.cfgMorph.cacheTTL) cachedEACLStorage := newCachedEACLStorage(eACLFetcher, c.cfgMorph.cacheTTL)
cachedContainerLister := newCachedContainerLister(client, c.cfgMorph.cacheTTL)
subscribeToContainerCreation(c, func(e event.Event) { subscribeToContainerCreation(c, func(e event.Event) {
ev := e.(containerEvent.PutSuccess) ev := e.(containerEvent.PutSuccess)
@ -130,7 +73,6 @@ func configureEACLAndContainerSources(c *cfg, client *cntClient.Client, cnrSrc c
// creation success are most commonly tracked by polling GET op. // creation success are most commonly tracked by polling GET op.
cnr, err := cnrSrc.Get(ev.ID) cnr, err := cnrSrc.Get(ev.ID)
if err == nil { if err == nil {
cachedContainerLister.update(cnr.Value.Owner(), ev.ID, true)
cachedContainerStorage.containerCache.set(ev.ID, cnr, nil) cachedContainerStorage.containerCache.set(ev.ID, cnr, nil)
} else { } else {
// unlike removal, we expect successful receive of the container // unlike removal, we expect successful receive of the container
@ -149,15 +91,6 @@ func configureEACLAndContainerSources(c *cfg, client *cntClient.Client, cnrSrc c
subscribeToContainerRemoval(c, func(e event.Event) { subscribeToContainerRemoval(c, func(e event.Event) {
ev := e.(containerEvent.DeleteSuccess) ev := e.(containerEvent.DeleteSuccess)
// read owner of the removed container in order to update the listing cache.
// It's strange to read already removed container, but we can successfully hit
// the cache.
// TODO: use owner directly from the event after neofs-contract#256 will become resolved
cnr, err := cachedContainerStorage.Get(ev.ID)
if err == nil {
cachedContainerLister.update(cnr.Value.Owner(), ev.ID, false)
}
cachedContainerStorage.handleRemoval(ev.ID) cachedContainerStorage.handleRemoval(ev.ID)
c.log.Debug(logs.FrostFSNodeContainerRemovalEventsReceipt, c.log.Debug(logs.FrostFSNodeContainerRemovalEventsReceipt,
zap.Stringer("id", ev.ID), zap.Stringer("id", ev.ID),
@ -167,7 +100,7 @@ func configureEACLAndContainerSources(c *cfg, client *cntClient.Client, cnrSrc c
c.cfgObject.eaclSource = cachedEACLStorage c.cfgObject.eaclSource = cachedEACLStorage
c.cfgObject.cnrSource = cachedContainerStorage c.cfgObject.cnrSource = cachedContainerStorage
cnrRdr.lister = cachedContainerLister cnrRdr.lister = client
cnrRdr.eacl = c.cfgObject.eaclSource cnrRdr.eacl = c.cfgObject.eaclSource
cnrRdr.src = c.cfgObject.cnrSource cnrRdr.src = c.cfgObject.cnrSource
@ -178,50 +111,6 @@ func configureEACLAndContainerSources(c *cfg, client *cntClient.Client, cnrSrc c
return cnrRdr, cnrWrt return cnrRdr, cnrWrt
} }
func setLoadController(ctx context.Context, c *cfg, loadRouter *loadroute.Router, loadAccumulator *loadstorage.Storage) {
pubKey := c.key.PublicKey().Bytes()
// container wrapper that always sends non-notary
// requests
wrapperNoNotary, err := cntClient.NewFromMorph(c.cfgMorph.client, c.cfgContainer.scriptHash, 0)
fatalOnErr(err)
resultWriter := &morphLoadWriter{
log: c.log,
cnrMorphClient: wrapperNoNotary,
key: pubKey,
}
localMetrics := &localStorageLoad{
log: c.log,
engine: c.cfgObject.cfgLocalStorage.localStorage,
}
ctrl := loadcontroller.New(
loadcontroller.Prm{
LocalMetrics: loadcontroller.SimpleIteratorProvider(localMetrics),
AnnouncementAccumulator: loadcontroller.SimpleIteratorProvider(loadAccumulator),
LocalAnnouncementTarget: loadRouter,
ResultReceiver: loadcontroller.SimpleWriterProvider(resultWriter),
},
loadcontroller.WithLogger(c.log),
)
setContainerNotificationParser(c, startEstimationNotifyEvent, containerEvent.ParseStartEstimation)
addContainerAsyncNotificationHandler(c, startEstimationNotifyEvent, func(ev event.Event) {
ctrl.Start(ctx, loadcontroller.StartPrm{
Epoch: ev.(containerEvent.StartEstimation).Epoch(),
})
})
setContainerNotificationParser(c, stopEstimationNotifyEvent, containerEvent.ParseStopEstimation)
addContainerAsyncNotificationHandler(c, stopEstimationNotifyEvent, func(ev event.Event) {
ctrl.Stop(ctx, loadcontroller.StopPrm{
Epoch: ev.(containerEvent.StopEstimation).Epoch(),
})
})
}
// addContainerNotificationHandler adds handler that will be executed synchronously. // addContainerNotificationHandler adds handler that will be executed synchronously.
func addContainerNotificationHandler(c *cfg, sTyp string, h event.Handler) { func addContainerNotificationHandler(c *cfg, sTyp string, h event.Handler) {
typ := event.TypeFromString(sTyp) typ := event.TypeFromString(sTyp)
@ -284,219 +173,6 @@ func setContainerNotificationParser(c *cfg, sTyp string, p event.NotificationPar
c.cfgContainer.parsers[typ] = p c.cfgContainer.parsers[typ] = p
} }
type morphLoadWriter struct {
log *logger.Logger
cnrMorphClient *cntClient.Client
key []byte
}
func (w *morphLoadWriter) Put(a containerSDK.SizeEstimation) error {
w.log.Debug(logs.FrostFSNodeSaveUsedSpaceAnnouncementInContract,
zap.Uint64("epoch", a.Epoch()),
zap.Stringer("cid", a.Container()),
zap.Uint64("size", a.Value()),
)
prm := cntClient.AnnounceLoadPrm{}
prm.SetAnnouncement(a)
prm.SetReporter(w.key)
return w.cnrMorphClient.AnnounceLoad(prm)
}
func (*morphLoadWriter) Close(context.Context) error {
return nil
}
type nopLoadWriter struct{}
func (nopLoadWriter) Put(containerSDK.SizeEstimation) error {
return nil
}
func (nopLoadWriter) Close(context.Context) error {
return nil
}
type remoteLoadAnnounceProvider struct {
key *ecdsa.PrivateKey
netmapKeys netmapCore.AnnouncedKeys
clientCache interface {
Get(client.NodeInfo) (client.MultiAddressClient, error)
}
deadEndProvider loadcontroller.WriterProvider
}
func (r *remoteLoadAnnounceProvider) InitRemote(srv loadcontroller.ServerInfo) (loadcontroller.WriterProvider, error) {
if srv == nil {
return r.deadEndProvider, nil
}
if r.netmapKeys.IsLocalKey(srv.PublicKey()) {
// if local => return no-op writer
return loadcontroller.SimpleWriterProvider(new(nopLoadWriter)), nil
}
var info client.NodeInfo
err := client.NodeInfoFromRawNetmapElement(&info, srv)
if err != nil {
return nil, fmt.Errorf("parse client node info: %w", err)
}
c, err := r.clientCache.Get(info)
if err != nil {
return nil, fmt.Errorf("could not initialize API client: %w", err)
}
return &remoteLoadAnnounceWriterProvider{
client: c,
}, nil
}
type remoteLoadAnnounceWriterProvider struct {
client client.Client
}
func (p *remoteLoadAnnounceWriterProvider) InitWriter([]loadcontroller.ServerInfo) (loadcontroller.Writer, error) {
return &remoteLoadAnnounceWriter{
client: p.client,
}, nil
}
type remoteLoadAnnounceWriter struct {
client client.Client
buf []containerSDK.SizeEstimation
}
func (r *remoteLoadAnnounceWriter) Put(a containerSDK.SizeEstimation) error {
r.buf = append(r.buf, a)
return nil
}
func (r *remoteLoadAnnounceWriter) Close(ctx context.Context) error {
cliPrm := apiClient.PrmAnnounceSpace{
Announcements: r.buf,
}
_, err := r.client.ContainerAnnounceUsedSpace(ctx, cliPrm)
return err
}
type loadPlacementBuilder struct {
log *logger.Logger
nmSrc netmapCore.Source
cnrSrc containerCore.Source
}
func (l *loadPlacementBuilder) BuildPlacement(epoch uint64, cnr cid.ID) ([][]netmap.NodeInfo, error) {
cnrNodes, nm, err := l.buildPlacement(epoch, cnr)
if err != nil {
return nil, err
}
const pivotPrefix = "load_announcement_"
pivot := []byte(
pivotPrefix + strconv.FormatUint(epoch, 10),
)
placement, err := nm.PlacementVectors(cnrNodes, pivot)
if err != nil {
return nil, fmt.Errorf("could not build placement vectors: %w", err)
}
return placement, nil
}
func (l *loadPlacementBuilder) buildPlacement(epoch uint64, idCnr cid.ID) ([][]netmap.NodeInfo, *netmap.NetMap, error) {
cnr, err := l.cnrSrc.Get(idCnr)
if err != nil {
return nil, nil, err
}
nm, err := l.nmSrc.GetNetMapByEpoch(epoch)
if err != nil {
return nil, nil, fmt.Errorf("could not get network map: %w", err)
}
binCnr := make([]byte, sha256.Size)
idCnr.Encode(binCnr)
cnrNodes, err := nm.ContainerNodes(cnr.Value.PlacementPolicy(), binCnr)
if err != nil {
return nil, nil, fmt.Errorf("could not build container nodes: %w", err)
}
return cnrNodes, nm, nil
}
type localStorageLoad struct {
log *logger.Logger
engine *engine.StorageEngine
}
func (d *localStorageLoad) Iterate(f loadcontroller.UsedSpaceFilter, h loadcontroller.UsedSpaceHandler) error {
idList, err := engine.ListContainers(context.TODO(), d.engine)
if err != nil {
return fmt.Errorf("list containers on engine failure: %w", err)
}
for i := range idList {
sz, err := engine.ContainerSize(d.engine, idList[i])
if err != nil {
d.log.Debug(logs.FrostFSNodeFailedToCalculateContainerSizeInStorageEngine,
zap.Stringer("cid", idList[i]),
zap.String("error", err.Error()),
)
continue
}
d.log.Debug(logs.FrostFSNodeContainerSizeInStorageEngineCalculatedSuccessfully,
zap.Uint64("size", sz),
zap.Stringer("cid", idList[i]),
)
var a containerSDK.SizeEstimation
a.SetContainer(idList[i])
a.SetValue(sz)
if f != nil && !f(a) {
continue
}
if err := h(a); err != nil {
return err
}
}
return nil
}
type usedSpaceService struct {
containerService.Server
loadWriterProvider loadcontroller.WriterProvider
loadPlacementBuilder *loadPlacementBuilder
routeBuilder loadroute.Builder
cfg *cfg
}
func (c *cfg) PublicKey() []byte { func (c *cfg) PublicKey() []byte {
return nodeKeyFromNetmap(c) return nodeKeyFromNetmap(c)
} }
@ -517,125 +193,6 @@ func (c *cfg) ExternalAddresses() []string {
return c.cfgNodeInfo.localInfo.ExternalAddresses() return c.cfgNodeInfo.localInfo.ExternalAddresses()
} }
func (c *usedSpaceService) PublicKey() []byte {
return nodeKeyFromNetmap(c.cfg)
}
func (c *usedSpaceService) IterateAddresses(f func(string) bool) {
c.cfg.iterateNetworkAddresses(f)
}
func (c *usedSpaceService) NumberOfAddresses() int {
return c.cfg.addressNum()
}
func (c *usedSpaceService) ExternalAddresses() []string {
return c.cfg.ExternalAddresses()
}
func (c *usedSpaceService) AnnounceUsedSpace(ctx context.Context, req *containerV2.AnnounceUsedSpaceRequest) (*containerV2.AnnounceUsedSpaceResponse, error) {
var passedRoute []loadcontroller.ServerInfo
for hdr := req.GetVerificationHeader(); hdr != nil; hdr = hdr.GetOrigin() {
passedRoute = append(passedRoute, &containerOnlyKeyRemoteServerInfo{
key: hdr.GetBodySignature().GetKey(),
})
}
for left, right := 0, len(passedRoute)-1; left < right; left, right = left+1, right-1 {
passedRoute[left], passedRoute[right] = passedRoute[right], passedRoute[left]
}
passedRoute = append(passedRoute, c)
w, err := c.loadWriterProvider.InitWriter(passedRoute)
if err != nil {
return nil, fmt.Errorf("could not initialize container's used space writer: %w", err)
}
var est containerSDK.SizeEstimation
for _, aV2 := range req.GetBody().GetAnnouncements() {
err = est.ReadFromV2(aV2)
if err != nil {
return nil, fmt.Errorf("invalid size announcement: %w", err)
}
if err := c.processLoadValue(ctx, est, passedRoute, w); err != nil {
return nil, err
}
}
respBody := new(containerV2.AnnounceUsedSpaceResponseBody)
resp := new(containerV2.AnnounceUsedSpaceResponse)
resp.SetBody(respBody)
c.cfg.respSvc.SetMeta(resp)
return resp, nil
}
var errNodeOutsideContainer = errors.New("node outside the container")
type containerOnlyKeyRemoteServerInfo struct {
key []byte
}
func (i *containerOnlyKeyRemoteServerInfo) PublicKey() []byte {
return i.key
}
func (*containerOnlyKeyRemoteServerInfo) IterateAddresses(func(string) bool) {
}
func (*containerOnlyKeyRemoteServerInfo) NumberOfAddresses() int {
return 0
}
func (*containerOnlyKeyRemoteServerInfo) ExternalAddresses() []string {
return nil
}
func (l *loadPlacementBuilder) isNodeFromContainerKey(epoch uint64, cnr cid.ID, key []byte) (bool, error) {
cnrNodes, _, err := l.buildPlacement(epoch, cnr)
if err != nil {
return false, err
}
for i := range cnrNodes {
for j := range cnrNodes[i] {
if bytes.Equal(cnrNodes[i][j].PublicKey(), key) {
return true, nil
}
}
}
return false, nil
}
func (c *usedSpaceService) processLoadValue(_ context.Context, a containerSDK.SizeEstimation,
route []loadcontroller.ServerInfo, w loadcontroller.Writer) error {
fromCnr, err := c.loadPlacementBuilder.isNodeFromContainerKey(a.Epoch(), a.Container(), route[0].PublicKey())
if err != nil {
return fmt.Errorf("could not verify that the sender belongs to the container: %w", err)
} else if !fromCnr {
return errNodeOutsideContainer
}
err = loadroute.CheckRoute(c.routeBuilder, a, route)
if err != nil {
return fmt.Errorf("wrong route of container's used space value: %w", err)
}
err = w.Put(a)
if err != nil {
return fmt.Errorf("could not write container's used space value: %w", err)
}
return nil
}
// implements interface required by container service provided by morph executor. // implements interface required by container service provided by morph executor.
type morphContainerReader struct { type morphContainerReader struct {
eacl containerCore.EACLSource eacl containerCore.EACLSource

View file

@ -6,6 +6,7 @@ import (
"fmt" "fmt"
"log" "log"
"os" "os"
"sync"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config" "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs" "git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs"
@ -148,9 +149,22 @@ func wait(c *cfg) {
<-c.done // graceful shutdown <-c.done // graceful shutdown
drain := &sync.WaitGroup{}
drain.Add(1)
go func() {
defer drain.Done()
for err := range c.internalErr {
c.log.Warn(logs.FrostFSNodeInternalApplicationError,
zap.String("message", err.Error()))
}
}()
c.log.Debug(logs.FrostFSNodeWaitingForAllProcessesToStop) c.log.Debug(logs.FrostFSNodeWaitingForAllProcessesToStop)
c.wg.Wait() c.wg.Wait()
close(c.internalErr)
drain.Wait()
} }
func (c *cfg) onShutdown(f func()) { func (c *cfg) onShutdown(f func()) {
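The shutdown fix above works because the internal error channel is closed only after every producer has returned (c.wg.Wait()), while a dedicated goroutine keeps draining the channel so late senders never block or hit a closed channel. A minimal, self-contained sketch of that drain pattern; the worker goroutines here are made up and stand in for the morph listeners:

```
package main

import (
	"fmt"
	"sync"
)

func main() {
	internalErr := make(chan error)
	var workers sync.WaitGroup

	// Producers may report errors at any point before they finish.
	for i := 0; i < 3; i++ {
		workers.Add(1)
		go func(i int) {
			defer workers.Done()
			internalErr <- fmt.Errorf("worker %d failed", i)
		}(i)
	}

	// Drain goroutine: reads until the channel is closed, so no late
	// sender ever blocks or writes to an already closed channel.
	drain := &sync.WaitGroup{}
	drain.Add(1)
	go func() {
		defer drain.Done()
		for err := range internalErr {
			fmt.Println("internal error:", err)
		}
	}()

	workers.Wait()     // all senders have returned
	close(internalErr) // only now is closing safe
	drain.Wait()       // let the drain loop observe the close
}
```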

View file

@ -179,15 +179,8 @@ func addNewEpochNotificationHandlers(c *cfg) {
return return
} }
n := ev.(netmapEvent.NewEpoch).EpochNumber() if err := c.bootstrap(); err != nil {
c.log.Warn(logs.FrostFSNodeCantSendRebootstrapTx, zap.Error(err))
const reBootstrapInterval = 2
if (n-c.cfgNetmap.startEpoch)%reBootstrapInterval == 0 {
err := c.bootstrap()
if err != nil {
c.log.Warn(logs.FrostFSNodeCantSendRebootstrapTx, zap.Error(err))
}
} }
}) })
@ -227,10 +220,6 @@ func bootstrapNode(c *cfg) {
c.log.Info(logs.FrostFSNodeNodeIsUnderMaintenanceSkipInitialBootstrap) c.log.Info(logs.FrostFSNodeNodeIsUnderMaintenanceSkipInitialBootstrap)
return return
} }
if c.alreadyBootstraped {
c.log.Info(logs.NetmapNodeAlreadyInCandidateListOnlineSkipInitialBootstrap)
return
}
err := c.bootstrap() err := c.bootstrap()
fatalOnErrDetails("bootstrap error", err) fatalOnErrDetails("bootstrap error", err)
} }
@ -263,7 +252,7 @@ func initNetmapState(c *cfg) {
fatalOnErrDetails("could not initialize current epoch number", err) fatalOnErrDetails("could not initialize current epoch number", err)
var ni *netmapSDK.NodeInfo var ni *netmapSDK.NodeInfo
ni, c.alreadyBootstraped, err = c.netmapInitLocalNodeState(epoch) ni, err = c.netmapInitLocalNodeState(epoch)
fatalOnErrDetails("could not init network state", err) fatalOnErrDetails("could not init network state", err)
stateWord := nodeState(ni) stateWord := nodeState(ni)
@ -282,13 +271,6 @@ func initNetmapState(c *cfg) {
c.handleLocalNodeInfo(ni) c.handleLocalNodeInfo(ni)
} }
func sameNodeInfo(a, b *netmapSDK.NodeInfo) bool {
// Suboptimal, but we do this once on the node startup.
rawA := a.Marshal()
rawB := b.Marshal()
return bytes.Equal(rawA, rawB)
}
func nodeState(ni *netmapSDK.NodeInfo) string { func nodeState(ni *netmapSDK.NodeInfo) string {
if ni != nil { if ni != nil {
switch { switch {
@ -303,29 +285,27 @@ func nodeState(ni *netmapSDK.NodeInfo) string {
return "undefined" return "undefined"
} }
func (c *cfg) netmapInitLocalNodeState(epoch uint64) (*netmapSDK.NodeInfo, bool, error) { func (c *cfg) netmapInitLocalNodeState(epoch uint64) (*netmapSDK.NodeInfo, error) {
nmNodes, err := c.cfgNetmap.wrapper.GetCandidates() nmNodes, err := c.cfgNetmap.wrapper.GetCandidates()
if err != nil { if err != nil {
return nil, false, err return nil, err
} }
var candidate *netmapSDK.NodeInfo var candidate *netmapSDK.NodeInfo
alreadyBootstraped := false
for i := range nmNodes { for i := range nmNodes {
if bytes.Equal(nmNodes[i].PublicKey(), c.binPublicKey) { if bytes.Equal(nmNodes[i].PublicKey(), c.binPublicKey) {
candidate = &nmNodes[i] candidate = &nmNodes[i]
alreadyBootstraped = candidate.IsOnline() && sameNodeInfo(&c.cfgNodeInfo.localInfo, candidate)
break break
} }
} }
node, err := c.netmapLocalNodeState(epoch) node, err := c.netmapLocalNodeState(epoch)
if err != nil { if err != nil {
return nil, false, err return nil, err
} }
if candidate == nil { if candidate == nil {
return node, false, nil return node, nil
} }
nmState := nodeState(node) nmState := nodeState(node)
@ -337,7 +317,7 @@ func (c *cfg) netmapInitLocalNodeState(epoch uint64) (*netmapSDK.NodeInfo, bool,
zap.String("netmap", nmState), zap.String("netmap", nmState),
zap.String("candidate", candidateState)) zap.String("candidate", candidateState))
} }
return candidate, alreadyBootstraped, nil return candidate, nil
} }
func (c *cfg) netmapLocalNodeState(epoch uint64) (*netmapSDK.NodeInfo, error) { func (c *cfg) netmapLocalNodeState(epoch uint64) (*netmapSDK.NodeInfo, error) {
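With the alreadyBootstraped flag gone, netmapInitLocalNodeState above reduces to finding the local node among the netmap candidates by public key and comparing its state with the current netmap. A simplified sketch of that lookup; NodeInfo here is a stand-in struct for this example, not the SDK type:

```
package main

import (
	"bytes"
	"fmt"
)

type NodeInfo struct {
	PublicKey []byte
	Online    bool
}

// findCandidate returns the candidate whose public key matches the local one,
// or nil when the node has not registered as a candidate yet.
func findCandidate(candidates []NodeInfo, localKey []byte) *NodeInfo {
	for i := range candidates {
		if bytes.Equal(candidates[i].PublicKey, localKey) {
			return &candidates[i]
		}
	}
	return nil
}

func main() {
	local := []byte{0x02, 0xaa}
	nodes := []NodeInfo{
		{PublicKey: []byte{0x03, 0xbb}, Online: true},
		{PublicKey: local, Online: true},
	}
	if c := findCandidate(nodes, local); c != nil {
		fmt.Println("local node is already a netmap candidate, online:", c.Online)
	}
}
```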

View file

@ -92,6 +92,7 @@ FROSTFS_OBJECT_DELETE_TOMBSTONE_LIFETIME=10
# Storage engine section # Storage engine section
FROSTFS_STORAGE_SHARD_POOL_SIZE=15 FROSTFS_STORAGE_SHARD_POOL_SIZE=15
FROSTFS_STORAGE_SHARD_RO_ERROR_THRESHOLD=100 FROSTFS_STORAGE_SHARD_RO_ERROR_THRESHOLD=100
FROSTFS_STORAGE_REBUILD_WORKERS_COUNT=1000
## 0 shard ## 0 shard
### Flag to refill Metabase from BlobStor ### Flag to refill Metabase from BlobStor
FROSTFS_STORAGE_SHARD_0_RESYNC_METABASE=false FROSTFS_STORAGE_SHARD_0_RESYNC_METABASE=false
@ -123,6 +124,8 @@ FROSTFS_STORAGE_SHARD_0_BLOBSTOR_0_DEPTH=1
FROSTFS_STORAGE_SHARD_0_BLOBSTOR_0_WIDTH=4 FROSTFS_STORAGE_SHARD_0_BLOBSTOR_0_WIDTH=4
FROSTFS_STORAGE_SHARD_0_BLOBSTOR_0_OPENED_CACHE_CAPACITY=50 FROSTFS_STORAGE_SHARD_0_BLOBSTOR_0_OPENED_CACHE_CAPACITY=50
FROSTFS_STORAGE_SHARD_0_BLOBSTOR_0_LEAF_WIDTH=10 FROSTFS_STORAGE_SHARD_0_BLOBSTOR_0_LEAF_WIDTH=10
FROSTFS_STORAGE_SHARD_0_BLOBSTOR_0_INIT_WORKER_COUNT=10
FROSTFS_STORAGE_SHARD_0_BLOBSTOR_0_INIT_IN_ADVANCE=TRUE
### FSTree config ### FSTree config
FROSTFS_STORAGE_SHARD_0_BLOBSTOR_1_TYPE=fstree FROSTFS_STORAGE_SHARD_0_BLOBSTOR_1_TYPE=fstree
FROSTFS_STORAGE_SHARD_0_BLOBSTOR_1_PATH=tmp/0/blob FROSTFS_STORAGE_SHARD_0_BLOBSTOR_1_PATH=tmp/0/blob

View file

@ -137,6 +137,7 @@
"storage": { "storage": {
"shard_pool_size": 15, "shard_pool_size": 15,
"shard_ro_error_threshold": 100, "shard_ro_error_threshold": 100,
"rebuild_workers_count": 1000,
"shard": { "shard": {
"0": { "0": {
"mode": "read-only", "mode": "read-only",
@ -170,7 +171,9 @@
"depth": 1, "depth": 1,
"width": 4, "width": 4,
"opened_cache_capacity": 50, "opened_cache_capacity": 50,
"leaf_width": 10 "leaf_width": 10,
"init_worker_count": 10,
"init_in_advance": true
}, },
{ {
"type": "fstree", "type": "fstree",

View file

@ -116,6 +116,7 @@ storage:
# note: shard configuration can be omitted for relay node (see `node.relay`) # note: shard configuration can be omitted for relay node (see `node.relay`)
shard_pool_size: 15 # size of per-shard worker pools used for PUT operations shard_pool_size: 15 # size of per-shard worker pools used for PUT operations
shard_ro_error_threshold: 100 # amount of errors to occur before shard is made read-only (default: 0, ignore errors) shard_ro_error_threshold: 100 # amount of errors to occur before shard is made read-only (default: 0, ignore errors)
rebuild_workers_count: 1000 # number of concurrent workers used for storage rebuild
shard: shard:
default: # section with the default shard parameters default: # section with the default shard parameters
@ -182,6 +183,8 @@ storage:
blobstor: blobstor:
- type: blobovnicza - type: blobovnicza
path: tmp/0/blob/blobovnicza path: tmp/0/blob/blobovnicza
init_worker_count: 10 # number of workers used to initialize blobovniczas
init_in_advance: true
- type: fstree - type: fstree
path: tmp/0/blob # blobstor path path: tmp/0/blob # blobstor path

go.mod
View file

@ -3,10 +3,10 @@ module git.frostfs.info/TrueCloudLab/frostfs-node
go 1.20 go 1.20
require ( require (
git.frostfs.info/TrueCloudLab/frostfs-api-go/v2 v2.16.0 git.frostfs.info/TrueCloudLab/frostfs-api-go/v2 v2.16.1-0.20231121085847-241a9f1ad0a4
git.frostfs.info/TrueCloudLab/frostfs-contract v0.18.0 git.frostfs.info/TrueCloudLab/frostfs-contract v0.18.0
git.frostfs.info/TrueCloudLab/frostfs-observability v0.0.0-20230531082742-c97d21411eb6 git.frostfs.info/TrueCloudLab/frostfs-observability v0.0.0-20230531082742-c97d21411eb6
git.frostfs.info/TrueCloudLab/frostfs-sdk-go v0.0.0-20230911122224-ac8fc6d4400c git.frostfs.info/TrueCloudLab/frostfs-sdk-go v0.0.0-20230928142024-84b9d29fc98c
git.frostfs.info/TrueCloudLab/hrw v1.2.1 git.frostfs.info/TrueCloudLab/hrw v1.2.1
git.frostfs.info/TrueCloudLab/tzhash v1.8.0 git.frostfs.info/TrueCloudLab/tzhash v1.8.0
github.com/cheggaaa/pb v1.0.29 github.com/cheggaaa/pb v1.0.29

go.sum
View file

@ -385,16 +385,16 @@ cloud.google.com/go/workflows v1.7.0/go.mod h1:JhSrZuVZWuiDfKEFxU0/F1PQjmpnpcoIS
cloud.google.com/go/workflows v1.8.0/go.mod h1:ysGhmEajwZxGn1OhGOGKsTXc5PyxOc0vfKf5Af+to4M= cloud.google.com/go/workflows v1.8.0/go.mod h1:ysGhmEajwZxGn1OhGOGKsTXc5PyxOc0vfKf5Af+to4M=
cloud.google.com/go/workflows v1.9.0/go.mod h1:ZGkj1aFIOd9c8Gerkjjq7OW7I5+l6cSvT3ujaO/WwSA= cloud.google.com/go/workflows v1.9.0/go.mod h1:ZGkj1aFIOd9c8Gerkjjq7OW7I5+l6cSvT3ujaO/WwSA=
dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU= dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
git.frostfs.info/TrueCloudLab/frostfs-api-go/v2 v2.16.0 h1:vy6leTEGcKVrLmKfeK5pGGIi3D1vn6rX32Hy4gIOWto= git.frostfs.info/TrueCloudLab/frostfs-api-go/v2 v2.16.1-0.20231121085847-241a9f1ad0a4 h1:wjLfZ3WCt7qNGsQv+Jl0TXnmtg0uVk/jToKPFTBc/jo=
git.frostfs.info/TrueCloudLab/frostfs-api-go/v2 v2.16.0/go.mod h1:uY0AYmCznjZdghDnAk7THFIe1Vlg531IxUcus7ZfUJI= git.frostfs.info/TrueCloudLab/frostfs-api-go/v2 v2.16.1-0.20231121085847-241a9f1ad0a4/go.mod h1:uY0AYmCznjZdghDnAk7THFIe1Vlg531IxUcus7ZfUJI=
git.frostfs.info/TrueCloudLab/frostfs-contract v0.18.0 h1:9ahw69njrwf2legz5xNVTA+4A6UwcBzy9oBdbJmoBxU= git.frostfs.info/TrueCloudLab/frostfs-contract v0.18.0 h1:9ahw69njrwf2legz5xNVTA+4A6UwcBzy9oBdbJmoBxU=
git.frostfs.info/TrueCloudLab/frostfs-contract v0.18.0/go.mod h1:3V8FyzpbIIxzpgfUaSlOJBAT11IzhZzkQnGpYvRQR5E= git.frostfs.info/TrueCloudLab/frostfs-contract v0.18.0/go.mod h1:3V8FyzpbIIxzpgfUaSlOJBAT11IzhZzkQnGpYvRQR5E=
git.frostfs.info/TrueCloudLab/frostfs-crypto v0.6.0 h1:FxqFDhQYYgpe41qsIHVOcdzSVCB8JNSfPG7Uk4r2oSk= git.frostfs.info/TrueCloudLab/frostfs-crypto v0.6.0 h1:FxqFDhQYYgpe41qsIHVOcdzSVCB8JNSfPG7Uk4r2oSk=
git.frostfs.info/TrueCloudLab/frostfs-crypto v0.6.0/go.mod h1:RUIKZATQLJ+TaYQa60X2fTDwfuhMfm8Ar60bQ5fr+vU= git.frostfs.info/TrueCloudLab/frostfs-crypto v0.6.0/go.mod h1:RUIKZATQLJ+TaYQa60X2fTDwfuhMfm8Ar60bQ5fr+vU=
git.frostfs.info/TrueCloudLab/frostfs-observability v0.0.0-20230531082742-c97d21411eb6 h1:aGQ6QaAnTerQ5Dq5b2/f9DUQtSqPkZZ/bkMx/HKuLCo= git.frostfs.info/TrueCloudLab/frostfs-observability v0.0.0-20230531082742-c97d21411eb6 h1:aGQ6QaAnTerQ5Dq5b2/f9DUQtSqPkZZ/bkMx/HKuLCo=
git.frostfs.info/TrueCloudLab/frostfs-observability v0.0.0-20230531082742-c97d21411eb6/go.mod h1:W8Nn08/l6aQ7UlIbpF7FsQou7TVpcRD1ZT1KG4TrFhE= git.frostfs.info/TrueCloudLab/frostfs-observability v0.0.0-20230531082742-c97d21411eb6/go.mod h1:W8Nn08/l6aQ7UlIbpF7FsQou7TVpcRD1ZT1KG4TrFhE=
git.frostfs.info/TrueCloudLab/frostfs-sdk-go v0.0.0-20230911122224-ac8fc6d4400c h1:gD+dj5IZx9jlniDu8TlLQdRGCd8KIOzYjjdDd1KcVdI= git.frostfs.info/TrueCloudLab/frostfs-sdk-go v0.0.0-20230928142024-84b9d29fc98c h1:c8mduKlc8Zioppz5o06QRYS5KYX3BFRO+NgKj2q6kD8=
git.frostfs.info/TrueCloudLab/frostfs-sdk-go v0.0.0-20230911122224-ac8fc6d4400c/go.mod h1:t1akKcUH7iBrFHX8rSXScYMP17k2kYQXMbZooiL5Juw= git.frostfs.info/TrueCloudLab/frostfs-sdk-go v0.0.0-20230928142024-84b9d29fc98c/go.mod h1:t1akKcUH7iBrFHX8rSXScYMP17k2kYQXMbZooiL5Juw=
git.frostfs.info/TrueCloudLab/hrw v1.2.1 h1:ccBRK21rFvY5R1WotI6LNoPlizk7qSvdfD8lNIRudVc= git.frostfs.info/TrueCloudLab/hrw v1.2.1 h1:ccBRK21rFvY5R1WotI6LNoPlizk7qSvdfD8lNIRudVc=
git.frostfs.info/TrueCloudLab/hrw v1.2.1/go.mod h1:C1Ygde2n843yTZEQ0FP69jYiuaYV0kriLvP4zm8JuvM= git.frostfs.info/TrueCloudLab/hrw v1.2.1/go.mod h1:C1Ygde2n843yTZEQ0FP69jYiuaYV0kriLvP4zm8JuvM=
git.frostfs.info/TrueCloudLab/rfc6979 v0.4.0 h1:M2KR3iBj7WpY3hP10IevfIB9MURr4O9mwVfJ+SjT3HA= git.frostfs.info/TrueCloudLab/rfc6979 v0.4.0 h1:M2KR3iBj7WpY3hP10IevfIB9MURr4O9mwVfJ+SjT3HA=

View file

@ -512,4 +512,37 @@ const (
RuntimeSoftMemoryDefinedWithGOMEMLIMIT = "soft runtime memory defined with GOMEMLIMIT environment variable, config value skipped" RuntimeSoftMemoryDefinedWithGOMEMLIMIT = "soft runtime memory defined with GOMEMLIMIT environment variable, config value skipped"
FailedToCountWritecacheItems = "failed to count writecache items" FailedToCountWritecacheItems = "failed to count writecache items"
AttemtToCloseAlreadyClosedBlobovnicza = "attempt to close an already closed blobovnicza" AttemtToCloseAlreadyClosedBlobovnicza = "attempt to close an already closed blobovnicza"
FailedToRebuildBlobstore = "failed to rebuild blobstore"
BlobstoreRebuildStarted = "blobstore rebuild started"
BlobstoreRebuildCompletedSuccessfully = "blobstore rebuild completed successfully"
BlobstoreRebuildStopped = "blobstore rebuild stopped"
BlobovniczaTreeFixingFileExtensions = "fixing blobovnicza tree file extensions..."
BlobovniczaTreeFixingFileExtensionsCompletedSuccessfully = "fixing blobovnicza tree file extensions completed successfully"
BlobovniczaTreeFixingFileExtensionsFailed = "failed to fix blobovnicza tree file extensions"
BlobovniczaTreeFixingFileExtensionForFile = "fixing blobovnicza file extension..."
BlobovniczaTreeFixingFileExtensionCompletedSuccessfully = "fixing blobovnicza file extension completed successfully"
BlobovniczaTreeFixingFileExtensionFailed = "failed to fix blobovnicza file extension"
BlobstorRebuildFailedToRebuildStorages = "failed to rebuild storages"
BlobstorRebuildRebuildStoragesCompleted = "storages rebuild completed"
BlobovniczaTreeCollectingDBToRebuild = "collecting blobovniczas to rebuild..."
BlobovniczaTreeCollectingDBToRebuildFailed = "collecting blobovniczas to rebuild failed"
BlobovniczaTreeCollectingDBToRebuildSuccess = "collecting blobovniczas to rebuild completed successfully"
BlobovniczaTreeRebuildingBlobovnicza = "rebuilding blobovnicza..."
BlobovniczaTreeRebuildingBlobovniczaFailed = "rebuilding blobovnicza failed"
BlobovniczaTreeRebuildingBlobovniczaSuccess = "rebuilding blobovnicza completed successfully"
BlobovniczatreeCouldNotPutMoveInfoToSourceBlobovnicza = "could not put move info to source blobovnicza"
BlobovniczatreeCouldNotUpdateStorageID = "could not update storage ID"
BlobovniczatreeCouldNotDropMoveInfo = "could not drop move info from source blobovnicza"
BlobovniczatreeCouldNotDeleteFromSource = "could not delete object from source blobovnicza"
BlobovniczaTreeCompletingPreviousRebuild = "completing previous rebuild if failed..."
BlobovniczaTreeCompletedPreviousRebuildSuccess = "previous rebuild completed successfully"
BlobovniczaTreeCompletedPreviousRebuildFailed = "failed to complete previous rebuild"
BlobovniczatreeCouldNotCheckExistenceInSourceDB = "could not check object existence in source blobovnicza"
BlobovniczatreeCouldNotCheckExistenceInTargetDB = "could not check object existence in target blobovnicza"
BlobovniczatreeCouldNotGetObjectFromSourceDB = "could not get object from source blobovnicza"
BlobovniczatreeCouldNotPutObjectToTargetDB = "could not put object to target blobovnicza"
BlobovniczaSavingCountersToMeta = "saving counters to blobovnicza's meta..."
BlobovniczaSavingCountersToMetaSuccess = "saving counters to blobovnicza's meta completed successfully"
BlobovniczaSavingCountersToMetaFailed = "saving counters to blobovnicza's meta failed"
ObjectRemovalFailureExistsInWritecache = "can't remove object: object must be flushed from writecache"
) )

View file

@ -343,7 +343,7 @@ func (s *Server) initGRPCServer(cfg *viper.Viper) error {
p.SetPrivateKey(*s.key) p.SetPrivateKey(*s.key)
p.SetHealthChecker(s) p.SetHealthChecker(s)
controlSvc := controlsrv.New(p, s.netmapClient, controlSvc := controlsrv.New(p, s.netmapClient, s.containerClient,
controlsrv.WithAllowedKeys(authKeys), controlsrv.WithAllowedKeys(authKeys),
) )
@ -389,6 +389,7 @@ func (s *Server) initClientsFromMorph() (*serverMorphClients, error) {
if err != nil { if err != nil {
return nil, err return nil, err
} }
s.containerClient = result.CnrClient
s.netmapClient, err = nmClient.NewFromMorph(s.morphClient, s.contracts.netmap, fee, nmClient.TryNotary(), nmClient.AsAlphabet()) s.netmapClient, err = nmClient.NewFromMorph(s.morphClient, s.contracts.netmap, fee, nmClient.TryNotary(), nmClient.AsAlphabet())
if err != nil { if err != nil {

View file

@ -16,6 +16,7 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/metrics" "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/metrics"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client" "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client"
balanceClient "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client/balance" balanceClient "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client/balance"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client/container"
nmClient "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client/netmap" nmClient "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client/netmap"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/event" "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/event"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/subscriber" "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/subscriber"
@ -46,16 +47,17 @@ type (
epochTimer *timer.BlockTimer epochTimer *timer.BlockTimer
// global state // global state
morphClient *client.Client morphClient *client.Client
mainnetClient *client.Client mainnetClient *client.Client
epochCounter atomic.Uint64 epochCounter atomic.Uint64
epochDuration atomic.Uint64 epochDuration atomic.Uint64
statusIndex *innerRingIndexer statusIndex *innerRingIndexer
precision precision.Fixed8Converter precision precision.Fixed8Converter
healthStatus atomic.Int32 healthStatus atomic.Int32
balanceClient *balanceClient.Client balanceClient *balanceClient.Client
netmapClient *nmClient.Client netmapClient *nmClient.Client
persistate *state.PersistentStorage persistate *state.PersistentStorage
containerClient *container.Client
// metrics // metrics
irMetrics *metrics.InnerRingServiceMetrics irMetrics *metrics.InnerRingServiceMetrics

View file

@ -24,6 +24,8 @@ type (
epochStamp epochStamp
binNodeInfo []byte binNodeInfo []byte
maintenance bool
} }
) )
@ -58,6 +60,7 @@ func (c *cleanupTable) update(snapshot netmap.NetMap, now uint64) {
} }
access.binNodeInfo = binNodeInfo access.binNodeInfo = binNodeInfo
access.maintenance = nmNodes[i].IsMaintenance()
newMap[keyString] = access newMap[keyString] = access
} }
@ -105,7 +108,7 @@ func (c *cleanupTable) forEachRemoveCandidate(epoch uint64, f func(string) error
defer c.Unlock() defer c.Unlock()
for keyString, access := range c.lastAccess { for keyString, access := range c.lastAccess {
if epoch-access.epoch > c.threshold { if !access.maintenance && epoch-access.epoch > c.threshold {
access.removeFlag = true // set remove flag access.removeFlag = true // set remove flag
c.lastAccess[keyString] = access c.lastAccess[keyString] = access
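The condition change above means nodes flagged as under maintenance are never proposed for removal, no matter how long ago they were last seen. A condensed sketch of that rule; the access struct and the threshold value are simplified stand-ins for the cleanup table internals:

```
package main

import "fmt"

type access struct {
	epoch       uint64
	maintenance bool
}

// removeCandidates returns the keys of nodes that have been silent longer
// than threshold epochs, skipping nodes that are under maintenance.
func removeCandidates(lastAccess map[string]access, epoch, threshold uint64) []string {
	var out []string
	for key, a := range lastAccess {
		if !a.maintenance && epoch-a.epoch > threshold {
			out = append(out, key)
		}
	}
	return out
}

func main() {
	la := map[string]access{
		"node-a": {epoch: 1},
		"node-b": {epoch: 1, maintenance: true},
	}
	fmt.Println(removeCandidates(la, 10, 3)) // only node-a is a candidate
}
```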

View file

@ -124,6 +124,21 @@ func TestCleanupTable(t *testing.T) {
})) }))
require.EqualValues(t, len(infos)-1, cnt) require.EqualValues(t, len(infos)-1, cnt)
}) })
t.Run("skip maintenance nodes", func(t *testing.T) {
cnt := 0
infos[1].SetMaintenance()
key := netmap.StringifyPublicKey(infos[1])
c.update(networkMap, 5)
require.NoError(t,
c.forEachRemoveCandidate(5, func(s string) error {
cnt++
require.NotEqual(t, s, key)
return nil
}))
require.EqualValues(t, len(infos)-1, cnt)
})
}) })
} }

View file

@ -104,17 +104,42 @@ func (b *Blobovnicza) Init() error {
func (b *Blobovnicza) initializeCounters() error { func (b *Blobovnicza) initializeCounters() error {
var size uint64 var size uint64
var items uint64 var items uint64
var sizeExists bool
var itemsCountExists bool
err := b.boltDB.View(func(tx *bbolt.Tx) error { err := b.boltDB.View(func(tx *bbolt.Tx) error {
return b.iterateAllBuckets(tx, func(lower, upper uint64, b *bbolt.Bucket) (bool, error) { size, sizeExists = hasDataSize(tx)
keysN := uint64(b.Stats().KeyN) items, itemsCountExists = hasItemsCount(tx)
size += keysN * upper
items += keysN if sizeExists && itemsCountExists {
return false, nil return nil
}
return b.iterateAllDataBuckets(tx, func(lower, upper uint64, b *bbolt.Bucket) (bool, error) {
return false, b.ForEach(func(k, v []byte) error {
size += uint64(len(k) + len(v))
items++
return nil
})
}) })
}) })
if err != nil { if err != nil {
return fmt.Errorf("can't determine DB size: %w", err) return fmt.Errorf("can't determine DB size: %w", err)
} }
if (!sizeExists || !itemsCountExists) && !b.boltOptions.ReadOnly {
b.log.Debug(logs.BlobovniczaSavingCountersToMeta, zap.Uint64("size", size), zap.Uint64("items", items))
if err := b.boltDB.Update(func(tx *bbolt.Tx) error {
if err := saveDataSize(tx, size); err != nil {
return err
}
return saveItemsCount(tx, items)
}); err != nil {
b.log.Debug(logs.BlobovniczaSavingCountersToMetaFailed, zap.Uint64("size", size), zap.Uint64("items", items))
return fmt.Errorf("can't save blobovnicza's size and items count: %w", err)
}
b.log.Debug(logs.BlobovniczaSavingCountersToMetaSuccess, zap.Uint64("size", size), zap.Uint64("items", items))
}
b.dataSize.Store(size) b.dataSize.Store(size)
b.itemsCount.Store(items) b.itemsCount.Store(items)
b.metrics.AddOpenBlobovniczaSize(size) b.metrics.AddOpenBlobovniczaSize(size)
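The rewritten initializeCounters avoids a full key scan on every open: it first looks for cached totals in the meta bucket and only falls back to iterating the data buckets when they are missing, persisting the result afterwards. A standalone sketch of that compute-once-then-cache idea with bbolt; the bucket and key names mirror the ones added in meta.go further down, but loadOrCountItems itself is illustrative and not part of the repository:

```
package main

import (
	"encoding/binary"
	"fmt"
	"log"

	"go.etcd.io/bbolt"
)

var (
	metaBucketName = []byte("META")
	itemsCountKey  = []byte("items_count")
)

// loadOrCountItems returns the cached item counter if present, otherwise it
// performs a full scan and stores the result so that the next open is cheap.
func loadOrCountItems(db *bbolt.DB) (uint64, error) {
	var items uint64
	var cached bool

	err := db.View(func(tx *bbolt.Tx) error {
		if b := tx.Bucket(metaBucketName); b != nil {
			if v := b.Get(itemsCountKey); len(v) == 8 {
				items = binary.LittleEndian.Uint64(v)
				cached = true
			}
		}
		return nil
	})
	if err != nil || cached {
		return items, err
	}

	// Slow path: count every key outside the meta bucket, then persist.
	return items, db.Update(func(tx *bbolt.Tx) error {
		err := tx.ForEach(func(name []byte, b *bbolt.Bucket) error {
			if string(name) != string(metaBucketName) {
				items += uint64(b.Stats().KeyN)
			}
			return nil
		})
		if err != nil {
			return err
		}
		meta, err := tx.CreateBucketIfNotExists(metaBucketName)
		if err != nil {
			return err
		}
		buf := make([]byte, 8)
		binary.LittleEndian.PutUint64(buf, items)
		return meta.Put(itemsCountKey, buf)
	})
}

func main() {
	db, err := bbolt.Open("example.db", 0o600, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	n, err := loadOrCountItems(db)
	fmt.Println("items:", n, "err:", err)
}
```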

View file

@ -49,9 +49,10 @@ func (b *Blobovnicza) Delete(ctx context.Context, prm DeletePrm) (DeleteRes, err
var sizeUpperBound uint64 var sizeUpperBound uint64
var sizeLowerBound uint64 var sizeLowerBound uint64
var dataSize uint64 var dataSize uint64
var recordSize uint64
err := b.boltDB.Update(func(tx *bbolt.Tx) error { err := b.boltDB.Update(func(tx *bbolt.Tx) error {
return b.iterateAllBuckets(tx, func(lower, upper uint64, buck *bbolt.Bucket) (bool, error) { err := b.iterateAllDataBuckets(tx, func(lower, upper uint64, buck *bbolt.Bucket) (bool, error) {
objData := buck.Get(addrKey) objData := buck.Get(addrKey)
if objData == nil { if objData == nil {
// object is not in bucket => continue iterating // object is not in bucket => continue iterating
@ -60,9 +61,27 @@ func (b *Blobovnicza) Delete(ctx context.Context, prm DeletePrm) (DeleteRes, err
dataSize = uint64(len(objData)) dataSize = uint64(len(objData))
sizeLowerBound = lower sizeLowerBound = lower
sizeUpperBound = upper sizeUpperBound = upper
recordSize = dataSize + uint64(len(addrKey))
found = true found = true
return true, buck.Delete(addrKey) return true, buck.Delete(addrKey)
}) })
if err != nil {
return err
}
if found {
return updateMeta(tx, func(count, size uint64) (uint64, uint64) {
if count > 0 {
count--
}
if size >= recordSize {
size -= recordSize
} else {
size = 0
}
return count, size
})
}
return nil
}) })
if err == nil && !found { if err == nil && !found {
@ -74,7 +93,7 @@ func (b *Blobovnicza) Delete(ctx context.Context, prm DeletePrm) (DeleteRes, err
zap.String("binary size", stringifyByteSize(dataSize)), zap.String("binary size", stringifyByteSize(dataSize)),
zap.String("range", stringifyBounds(sizeLowerBound, sizeUpperBound)), zap.String("range", stringifyBounds(sizeLowerBound, sizeUpperBound)),
) )
b.itemDeleted(sizeUpperBound) b.itemDeleted(recordSize)
} }
return DeleteRes{}, err return DeleteRes{}, err
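One detail of the delete path above: the persisted counters are unsigned and may be stale (for example, right after the counters were first introduced), so the update clamps at zero instead of wrapping around. The guard boils down to a saturating subtraction:

```
package main

import "fmt"

// subClamped decrements a counter without letting it wrap below zero,
// matching the guarded count/size updates in the delete transaction above.
func subClamped(value, delta uint64) uint64 {
	if value >= delta {
		return value - delta
	}
	return 0
}

func main() {
	fmt.Println(subClamped(10, 3)) // 7
	fmt.Println(subClamped(2, 5))  // 0, never underflows
}
```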

View file

@ -26,7 +26,10 @@ func (b *Blobovnicza) Exists(ctx context.Context, addr oid.Address) (bool, error
addrKey := addressKey(addr) addrKey := addressKey(addr)
err := b.boltDB.View(func(tx *bbolt.Tx) error { err := b.boltDB.View(func(tx *bbolt.Tx) error {
return tx.ForEach(func(_ []byte, buck *bbolt.Bucket) error { return tx.ForEach(func(bucketName []byte, buck *bbolt.Bucket) error {
if isNonDataBucket(bucketName) {
return nil
}
exists = buck.Get(addrKey) != nil exists = buck.Get(addrKey) != nil
if exists { if exists {
return errInterruptForEach return errInterruptForEach

View file

@ -57,7 +57,11 @@ func (b *Blobovnicza) Get(ctx context.Context, prm GetPrm) (GetRes, error) {
) )
if err := b.boltDB.View(func(tx *bbolt.Tx) error { if err := b.boltDB.View(func(tx *bbolt.Tx) error {
return tx.ForEach(func(_ []byte, buck *bbolt.Bucket) error { return tx.ForEach(func(bucketName []byte, buck *bbolt.Bucket) error {
if isNonDataBucket(bucketName) {
return nil
}
data = buck.Get(addrKey) data = buck.Get(addrKey)
if data == nil { if data == nil {
return nil return nil

View file

@ -12,11 +12,11 @@ import (
"go.opentelemetry.io/otel/trace" "go.opentelemetry.io/otel/trace"
) )
// iterateAllBuckets iterates all buckets in db // iterateAllDataBuckets iterates all buckets in db
// //
// If the maximum size of the object (b.objSizeLimit) has been changed to lower value, // If the maximum size of the object (b.objSizeLimit) has been changed to lower value,
// then there may be more buckets than the current limit of the object size. // then there may be more buckets than the current limit of the object size.
func (b *Blobovnicza) iterateAllBuckets(tx *bbolt.Tx, f func(uint64, uint64, *bbolt.Bucket) (bool, error)) error { func (b *Blobovnicza) iterateAllDataBuckets(tx *bbolt.Tx, f func(uint64, uint64, *bbolt.Bucket) (bool, error)) error {
return b.iterateBucketKeys(false, func(lower uint64, upper uint64, key []byte) (bool, error) { return b.iterateBucketKeys(false, func(lower uint64, upper uint64, key []byte) (bool, error) {
buck := tx.Bucket(key) buck := tx.Bucket(key)
if buck == nil { if buck == nil {
@ -139,7 +139,10 @@ func (b *Blobovnicza) Iterate(ctx context.Context, prm IteratePrm) (IterateRes,
var elem IterationElement var elem IterationElement
if err := b.boltDB.View(func(tx *bbolt.Tx) error { if err := b.boltDB.View(func(tx *bbolt.Tx) error {
return tx.ForEach(func(name []byte, buck *bbolt.Bucket) error { return tx.ForEach(func(bucketName []byte, buck *bbolt.Bucket) error {
if isNonDataBucket(bucketName) {
return nil
}
return buck.ForEach(func(k, v []byte) error { return buck.ForEach(func(k, v []byte) error {
select { select {
case <-ctx.Done(): case <-ctx.Done():
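Exists, Get and Iterate above now share the same filter: bookkeeping buckets (META, INCOMPLETED_MOVE) must be skipped when walking object data. A small self-contained sketch of that filtering with bbolt; countDataKeys is an illustrative helper written for this example, not a function from the repository:

```
package main

import (
	"bytes"
	"fmt"
	"log"

	"go.etcd.io/bbolt"
)

var serviceBuckets = [][]byte{[]byte("META"), []byte("INCOMPLETED_MOVE")}

func isServiceBucket(name []byte) bool {
	for _, b := range serviceBuckets {
		if bytes.Equal(name, b) {
			return true
		}
	}
	return false
}

// countDataKeys walks every bucket but ignores the bookkeeping ones,
// so service records are never mistaken for stored objects.
func countDataKeys(db *bbolt.DB) (n int, err error) {
	err = db.View(func(tx *bbolt.Tx) error {
		return tx.ForEach(func(name []byte, b *bbolt.Bucket) error {
			if isServiceBucket(name) {
				return nil
			}
			n += b.Stats().KeyN
			return nil
		})
	})
	return n, err
}

func main() {
	db, err := bbolt.Open("example.db", 0o600, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	fmt.Println(countDataKeys(db))
}
```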

View file

@ -0,0 +1,104 @@
package blobovnicza
import (
"bytes"
"encoding/binary"
"go.etcd.io/bbolt"
)
const (
dataSizeAndItemsCountBufLength = 8
)
var (
metaBucketName = []byte("META")
dataSizeKey = []byte("data_size")
itemsCountKey = []byte("items_count")
)
func isNonDataBucket(bucketName []byte) bool {
return bytes.Equal(bucketName, incompletedMoveBucketName) || bytes.Equal(bucketName, metaBucketName)
}
func hasDataSize(tx *bbolt.Tx) (uint64, bool) {
b := tx.Bucket(metaBucketName)
if b == nil {
return 0, false
}
v := b.Get(dataSizeKey)
if v == nil {
return 0, false
}
if len(v) != dataSizeAndItemsCountBufLength {
return 0, false
}
return binary.LittleEndian.Uint64(v), true
}
func hasItemsCount(tx *bbolt.Tx) (uint64, bool) {
b := tx.Bucket(metaBucketName)
if b == nil {
return 0, false
}
v := b.Get(itemsCountKey)
if v == nil {
return 0, false
}
if len(v) != dataSizeAndItemsCountBufLength {
return 0, false
}
return binary.LittleEndian.Uint64(v), true
}
func saveDataSize(tx *bbolt.Tx, size uint64) error {
b, err := tx.CreateBucketIfNotExists(metaBucketName)
if err != nil {
return err
}
buf := make([]byte, dataSizeAndItemsCountBufLength)
binary.LittleEndian.PutUint64(buf, size)
return b.Put(dataSizeKey, buf)
}
func saveItemsCount(tx *bbolt.Tx, count uint64) error {
b, err := tx.CreateBucketIfNotExists(metaBucketName)
if err != nil {
return err
}
buf := make([]byte, dataSizeAndItemsCountBufLength)
binary.LittleEndian.PutUint64(buf, count)
return b.Put(itemsCountKey, buf)
}
func updateMeta(tx *bbolt.Tx, updateValues func(count, size uint64) (uint64, uint64)) error {
b, err := tx.CreateBucketIfNotExists(metaBucketName)
if err != nil {
return err
}
var count uint64
var size uint64
v := b.Get(itemsCountKey)
if v != nil {
count = binary.LittleEndian.Uint64(v)
}
v = b.Get(dataSizeKey)
if v != nil {
size = binary.LittleEndian.Uint64(v)
}
count, size = updateValues(count, size)
sizeBuf := make([]byte, dataSizeAndItemsCountBufLength)
binary.LittleEndian.PutUint64(sizeBuf, size)
if err := b.Put(dataSizeKey, sizeBuf); err != nil {
return err
}
countBuf := make([]byte, dataSizeAndItemsCountBufLength)
binary.LittleEndian.PutUint64(countBuf, count)
return b.Put(itemsCountKey, countBuf)
}

View file

@ -0,0 +1,108 @@
package blobovnicza
import (
"context"
"fmt"
"git.frostfs.info/TrueCloudLab/frostfs-observability/tracing"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
"go.etcd.io/bbolt"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/trace"
)
var incompletedMoveBucketName = []byte("INCOMPLETED_MOVE")
type MoveInfo struct {
Address oid.Address
TargetStorageID []byte
}
func (b *Blobovnicza) PutMoveInfo(ctx context.Context, prm MoveInfo) error {
_, span := tracing.StartSpanFromContext(ctx, "Blobovnicza.PutMoveInfo",
trace.WithAttributes(
attribute.String("path", b.path),
attribute.String("address", prm.Address.EncodeToString()),
attribute.String("target_storage_id", string(prm.TargetStorageID)),
))
defer span.End()
key := addressKey(prm.Address)
return b.boltDB.Update(func(tx *bbolt.Tx) error {
bucket, err := tx.CreateBucketIfNotExists(incompletedMoveBucketName)
if err != nil {
return err
}
if err := bucket.Put(key, prm.TargetStorageID); err != nil {
return fmt.Errorf("(%T) failed to save move info: %w", b, err)
}
return nil
})
}
func (b *Blobovnicza) DropMoveInfo(ctx context.Context, address oid.Address) error {
_, span := tracing.StartSpanFromContext(ctx, "Blobovnicza.DropMoveInfo",
trace.WithAttributes(
attribute.String("path", b.path),
attribute.String("address", address.EncodeToString()),
))
defer span.End()
key := addressKey(address)
return b.boltDB.Update(func(tx *bbolt.Tx) error {
bucket := tx.Bucket(incompletedMoveBucketName)
if bucket == nil {
return nil
}
if err := bucket.Delete(key); err != nil {
return fmt.Errorf("(%T) failed to drop move info: %w", b, err)
}
c := bucket.Cursor()
k, v := c.First()
bucketEmpty := k == nil && v == nil
if bucketEmpty {
return tx.DeleteBucket(incompletedMoveBucketName)
}
return nil
})
}
func (b *Blobovnicza) ListMoveInfo(ctx context.Context) ([]MoveInfo, error) {
_, span := tracing.StartSpanFromContext(ctx, "Blobovnicza.ListMoveInfo",
trace.WithAttributes(
attribute.String("path", b.path),
))
defer span.End()
var result []MoveInfo
if err := b.boltDB.View(func(tx *bbolt.Tx) error {
bucket := tx.Bucket(incompletedMoveBucketName)
if bucket == nil {
return nil
}
return bucket.ForEach(func(k, v []byte) error {
var addr oid.Address
storageID := make([]byte, len(v))
if err := addressFromKey(&addr, k); err != nil {
return err
}
copy(storageID, v)
result = append(result, MoveInfo{
Address: addr,
TargetStorageID: storageID,
})
return nil
})
}); err != nil {
return nil, err
}
return result, nil
}
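PutMoveInfo, DropMoveInfo and ListMoveInfo above give the rebuild process a crash-safe intent log: the move record is written before the object is copied, and dropped only after the move has fully completed, so an interrupted move can be finished (and its object protected from deletion) on the next start. A toy outline of that flow; memDB, copyFn and moveObject are made-up stand-ins for this sketch, not the rebuild code itself:

```
package main

import "fmt"

// memDB is a toy stand-in for a blobovnicza: it only tracks move records.
type memDB struct{ moves map[string][]byte }

func (m *memDB) PutMoveInfo(addr string, target []byte) { m.moves[addr] = target }
func (m *memDB) DropMoveInfo(addr string)               { delete(m.moves, addr) }
func (m *memDB) ListMoveInfo() []string {
	var out []string
	for a := range m.moves {
		out = append(out, a)
	}
	return out
}

// moveObject records the intent first, so that after a crash the address
// still shows up in ListMoveInfo and can be completed or rolled back later.
func moveObject(src *memDB, addr string, targetID []byte, copyFn func() error) error {
	src.PutMoveInfo(addr, targetID)
	if err := copyFn(); err != nil {
		return err // keep the record: a restart will finish the move
	}
	src.DropMoveInfo(addr) // move fully applied, forget the intent
	return nil
}

func main() {
	db := &memDB{moves: map[string][]byte{}}
	_ = moveObject(db, "cnr/obj-1", []byte("target-id"), func() error { return nil })
	fmt.Println("unfinished moves:", db.ListMoveInfo()) // empty on success
}
```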

View file

@ -57,8 +57,9 @@ func (b *Blobovnicza) Put(ctx context.Context, prm PutPrm) (PutRes, error) {
defer span.End() defer span.End()
sz := uint64(len(prm.objData)) sz := uint64(len(prm.objData))
bucketName, upperBound := bucketForSize(sz) bucketName := bucketForSize(sz)
key := addressKey(prm.addr) key := addressKey(prm.addr)
recordSize := sz + uint64(len(key))
err := b.boltDB.Batch(func(tx *bbolt.Tx) error { err := b.boltDB.Batch(func(tx *bbolt.Tx) error {
buck := tx.Bucket(bucketName) buck := tx.Bucket(bucketName)
@ -74,10 +75,12 @@ func (b *Blobovnicza) Put(ctx context.Context, prm PutPrm) (PutRes, error) {
return fmt.Errorf("(%T) could not save object in bucket: %w", b, err) return fmt.Errorf("(%T) could not save object in bucket: %w", b, err)
} }
return nil return updateMeta(tx, func(count, size uint64) (uint64, uint64) {
return count + 1, size + recordSize
})
}) })
if err == nil { if err == nil {
b.itemAdded(upperBound) b.itemAdded(recordSize)
} }
return PutRes{}, err return PutRes{}, err

View file

@ -29,9 +29,8 @@ func bucketKeyFromBounds(upperBound uint64) []byte {
return buf[:ln] return buf[:ln]
} }
func bucketForSize(sz uint64) ([]byte, uint64) { func bucketForSize(sz uint64) []byte {
upperBound := upperPowerOfTwo(sz) return bucketKeyFromBounds(upperPowerOfTwo(sz))
return bucketKeyFromBounds(upperBound), upperBound
} }
func upperPowerOfTwo(v uint64) uint64 { func upperPowerOfTwo(v uint64) uint64 {

View file

@ -34,7 +34,7 @@ func TestSizes(t *testing.T) {
upperBound: 4 * firstBucketBound, upperBound: 4 * firstBucketBound,
}, },
} { } {
key, _ := bucketForSize(item.sz) key := bucketForSize(item.sz)
require.Equal(t, bucketKeyFromBounds(item.upperBound), key) require.Equal(t, bucketKeyFromBounds(item.upperBound), key)
} }
} }

View file

@ -21,8 +21,8 @@ func (db *activeDB) Close() {
db.shDB.Close() db.shDB.Close()
} }
func (db *activeDB) Path() string { func (db *activeDB) SystemPath() string {
return db.shDB.Path() return db.shDB.SystemPath()
} }
// activeDBManager manages active blobovnicza instances (that is, those that are being used for Put). // activeDBManager manages active blobovnicza instances (that is, those that are being used for Put).
@ -154,7 +154,7 @@ func (m *activeDBManager) getNextSharedDB(lvlPath string) (*sharedDB, error) {
var next *sharedDB var next *sharedDB
for iterCount < m.leafWidth { for iterCount < m.leafWidth {
path := filepath.Join(lvlPath, u64ToHexString(idx)) path := filepath.Join(lvlPath, u64ToHexStringExt(idx))
shDB := m.dbManager.GetByPath(path) shDB := m.dbManager.GetByPath(path)
db, err := shDB.Open() //open db to hold active DB open, will be closed if db is full, after m.replace or by activeDBManager.Close() db, err := shDB.Open() //open db to hold active DB open, will be closed if db is full, after m.replace or by activeDBManager.Close()
if err != nil { if err != nil {
@ -192,7 +192,7 @@ func (m *activeDBManager) hasActiveDB(lvlPath string) (bool, uint64) {
if !ok { if !ok {
return false, 0 return false, 0
} }
return true, u64FromHexString(filepath.Base(db.Path())) return true, u64FromHexString(filepath.Base(db.SystemPath()))
} }
func (m *activeDBManager) replace(lvlPath string, shDB *sharedDB) (*sharedDB, bool) { func (m *activeDBManager) replace(lvlPath string, shDB *sharedDB) (*sharedDB, bool) {

View file

@ -4,6 +4,8 @@ import (
"errors" "errors"
"fmt" "fmt"
"strconv" "strconv"
"strings"
"sync"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/common" "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/compression" "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/compression"
@ -54,15 +56,22 @@ import (
type Blobovniczas struct { type Blobovniczas struct {
cfg cfg
commondbManager *dbManager commondbManager *dbManager
activeDBManager *activeDBManager activeDBManager *activeDBManager
dbCache *dbCache dbCache *dbCache
deleteProtectedObjects *addressMap
dbFilesGuard *sync.RWMutex
rebuildGuard *sync.RWMutex
} }
var _ common.Storage = (*Blobovniczas)(nil) var _ common.Storage = (*Blobovniczas)(nil)
var errPutFailed = errors.New("could not save the object in any blobovnicza") var errPutFailed = errors.New("could not save the object in any blobovnicza")
const (
dbExtension = ".db"
)
// NewBlobovniczaTree returns new instance of blobovniczas tree. // NewBlobovniczaTree returns new instance of blobovniczas tree.
func NewBlobovniczaTree(opts ...Option) (blz *Blobovniczas) { func NewBlobovniczaTree(opts ...Option) (blz *Blobovniczas) {
blz = new(Blobovniczas) blz = new(Blobovniczas)
@ -76,9 +85,12 @@ func NewBlobovniczaTree(opts ...Option) (blz *Blobovniczas) {
blz.blzLeafWidth = blz.blzShallowWidth blz.blzLeafWidth = blz.blzShallowWidth
} }
blz.commondbManager = newDBManager(blz.rootPath, blz.blzOpts, blz.blzLeafWidth, blz.readOnly, blz.metrics.Blobovnicza(), blz.log) blz.commondbManager = newDBManager(blz.rootPath, blz.blzOpts, blz.readOnly, blz.metrics.Blobovnicza(), blz.log)
blz.activeDBManager = newActiveDBManager(blz.commondbManager, blz.blzLeafWidth) blz.activeDBManager = newActiveDBManager(blz.commondbManager, blz.blzLeafWidth)
blz.dbCache = newDBCache(blz.openedCacheSize, blz.commondbManager) blz.dbCache = newDBCache(blz.openedCacheSize, blz.commondbManager)
blz.deleteProtectedObjects = newAddressMap()
blz.dbFilesGuard = &sync.RWMutex{}
blz.rebuildGuard = &sync.RWMutex{}
return blz return blz
} }
@ -94,14 +106,16 @@ func addressHash(addr *oid.Address, path string) uint64 {
return hrw.StringHash(a + path) return hrw.StringHash(a + path)
} }
// converts uint64 to hex string.
func u64ToHexString(ind uint64) string { func u64ToHexString(ind uint64) string {
return strconv.FormatUint(ind, 16) return strconv.FormatUint(ind, 16)
} }
// converts uint64 hex string to uint64. func u64ToHexStringExt(ind uint64) string {
return strconv.FormatUint(ind, 16) + dbExtension
}
func u64FromHexString(str string) uint64 { func u64FromHexString(str string) uint64 {
v, err := strconv.ParseUint(str, 16, 64) v, err := strconv.ParseUint(strings.TrimSuffix(str, dbExtension), 16, 64)
if err != nil { if err != nil {
panic(fmt.Sprintf("blobovnicza name is not an index %s", str)) panic(fmt.Sprintf("blobovnicza name is not an index %s", str))
} }
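The blobovnicza files now carry an explicit .db extension, and u64FromHexString accepts both the new and the legacy names by trimming the suffix before parsing. A tiny sketch of that index-to-filename mapping, mirroring u64ToHexStringExt / u64FromHexString above:

```
package main

import (
	"fmt"
	"strconv"
	"strings"
)

const dbExtension = ".db"

// indexToName builds the on-disk name for a blobovnicza index.
func indexToName(i uint64) string { return strconv.FormatUint(i, 16) + dbExtension }

// nameToIndex parses both "1a.db" and the legacy "1a" form.
func nameToIndex(name string) (uint64, error) {
	return strconv.ParseUint(strings.TrimSuffix(name, dbExtension), 16, 64)
}

func main() {
	name := indexToName(26)
	i, _ := nameToIndex(name)
	fmt.Println(name, i) // 1a.db 26
}
```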

View file

@ -15,8 +15,9 @@ import (
type dbCache struct { type dbCache struct {
cacheGuard *sync.RWMutex cacheGuard *sync.RWMutex
cache simplelru.LRUCache[string, *sharedDB] cache simplelru.LRUCache[string, *sharedDB]
pathLock *utilSync.KeyLocker[string] pathLock *utilSync.KeyLocker[string] // the order of locks is important: pathLock first, cacheGuard second
closed bool closed bool
nonCached map[string]struct{}
dbManager *dbManager dbManager *dbManager
} }
@ -34,6 +35,7 @@ func newDBCache(size int, dbManager *dbManager) *dbCache {
cache: cache, cache: cache,
dbManager: dbManager, dbManager: dbManager,
pathLock: utilSync.NewKeyLocker[string](), pathLock: utilSync.NewKeyLocker[string](),
nonCached: make(map[string]struct{}),
} }
} }
@ -59,6 +61,27 @@ func (c *dbCache) GetOrCreate(path string) *sharedDB {
return c.create(path) return c.create(path)
} }
func (c *dbCache) EvictAndMarkNonCached(path string) {
c.pathLock.Lock(path)
defer c.pathLock.Unlock(path)
c.cacheGuard.Lock()
defer c.cacheGuard.Unlock()
c.cache.Remove(path)
c.nonCached[path] = struct{}{}
}
func (c *dbCache) RemoveFromNonCached(path string) {
c.pathLock.Lock(path)
defer c.pathLock.Unlock(path)
c.cacheGuard.Lock()
defer c.cacheGuard.Unlock()
delete(c.nonCached, path)
}
func (c *dbCache) getExisted(path string) *sharedDB { func (c *dbCache) getExisted(path string) *sharedDB {
c.cacheGuard.Lock() c.cacheGuard.Lock()
defer c.cacheGuard.Unlock() defer c.cacheGuard.Unlock()
@ -94,7 +117,9 @@ func (c *dbCache) put(path string, db *sharedDB) bool {
c.cacheGuard.Lock() c.cacheGuard.Lock()
defer c.cacheGuard.Unlock() defer c.cacheGuard.Unlock()
if !c.closed { _, isNonCached := c.nonCached[path]
if !isNonCached && !c.closed {
c.cache.Add(path, db) c.cache.Add(path, db)
return true return true
} }
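The cache change above lets the rebuild evict a database and keep it out of the cache until the rebuild is done: paths recorded in nonCached are simply refused by put. A minimal sketch of that evict-and-block idea using a single mutex; the real code additionally takes the per-path lock first, as the comment in the hunk notes:

```
package main

import (
	"fmt"
	"sync"
)

type cache struct {
	mu        sync.Mutex
	entries   map[string]string
	nonCached map[string]struct{}
}

func newCache() *cache {
	return &cache{entries: map[string]string{}, nonCached: map[string]struct{}{}}
}

// Put refuses to cache paths that are currently marked as non-cached.
func (c *cache) Put(key, val string) bool {
	c.mu.Lock()
	defer c.mu.Unlock()
	if _, blocked := c.nonCached[key]; blocked {
		return false
	}
	c.entries[key] = val
	return true
}

func (c *cache) EvictAndMarkNonCached(key string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	delete(c.entries, key)
	c.nonCached[key] = struct{}{}
}

func (c *cache) RemoveFromNonCached(key string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	delete(c.nonCached, key)
}

func main() {
	c := newCache()
	c.EvictAndMarkNonCached("0/1/2.db")
	fmt.Println(c.Put("0/1/2.db", "handle")) // false while the rebuild runs
	c.RemoveFromNonCached("0/1/2.db")
	fmt.Println(c.Put("0/1/2.db", "handle")) // true again
}
```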

View file

@ -2,15 +2,24 @@ package blobovniczatree
import ( import (
"context" "context"
"errors"
"os"
"path/filepath"
"strings"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs" "git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util"
"go.uber.org/zap" "go.uber.org/zap"
"golang.org/x/sync/errgroup"
) )
var errFailedToChangeExtensionReadOnly = errors.New("failed to change blobovnicza extension: read only mode")
// Open opens blobovnicza tree. // Open opens blobovnicza tree.
func (b *Blobovniczas) Open(readOnly bool) error { func (b *Blobovniczas) Open(readOnly bool) error {
b.readOnly = readOnly b.readOnly = readOnly
b.metrics.SetMode(readOnly) b.metrics.SetMode(readOnly)
b.metrics.SetRebuildStatus(rebuildStatusNotStarted)
b.openManagers() b.openManagers()
return nil return nil
} }
@ -21,33 +30,95 @@ func (b *Blobovniczas) Open(readOnly bool) error {
func (b *Blobovniczas) Init() error { func (b *Blobovniczas) Init() error {
b.log.Debug(logs.BlobovniczatreeInitializingBlobovniczas) b.log.Debug(logs.BlobovniczatreeInitializingBlobovniczas)
b.log.Debug(logs.BlobovniczaTreeFixingFileExtensions)
if err := b.addDBExtensionToDBs(b.rootPath, 0); err != nil {
b.log.Error(logs.BlobovniczaTreeFixingFileExtensionsFailed, zap.Error(err))
return err
}
b.log.Debug(logs.BlobovniczaTreeFixingFileExtensionsCompletedSuccessfully)
if b.readOnly { if b.readOnly {
b.log.Debug(logs.BlobovniczatreeReadonlyModeSkipBlobovniczasInitialization) b.log.Debug(logs.BlobovniczatreeReadonlyModeSkipBlobovniczasInitialization)
return nil return nil
} }
return b.iterateLeaves(context.TODO(), func(p string) (bool, error) { return b.initializeDBs(context.TODO())
shBlz := b.getBlobovniczaWithoutCaching(p) }
_, err := shBlz.Open()
if err != nil {
return true, err
}
defer shBlz.Close()
b.log.Debug(logs.BlobovniczatreeBlobovniczaSuccessfullyInitializedClosing, zap.String("id", p)) func (b *Blobovniczas) initializeDBs(ctx context.Context) error {
err := util.MkdirAllX(b.rootPath, b.perm)
if err != nil {
return err
}
eg, egCtx := errgroup.WithContext(ctx)
eg.SetLimit(b.blzInitWorkerCount)
visited := make(map[string]struct{})
err = b.iterateExistingDBPaths(egCtx, func(p string) (bool, error) {
visited[p] = struct{}{}
eg.Go(func() error {
shBlz := b.getBlobovniczaWithoutCaching(p)
blz, err := shBlz.Open()
if err != nil {
return err
}
defer shBlz.Close()
moveInfo, err := blz.ListMoveInfo(egCtx)
if err != nil {
return err
}
for _, move := range moveInfo {
b.deleteProtectedObjects.Add(move.Address)
}
b.log.Debug(logs.BlobovniczatreeBlobovniczaSuccessfullyInitializedClosing, zap.String("id", p))
return nil
})
return false, nil return false, nil
}) })
if err != nil {
_ = eg.Wait()
return err
}
if b.createDBInAdvance {
err = b.iterateSortedLeaves(egCtx, nil, func(p string) (bool, error) {
if _, found := visited[p]; found {
return false, nil
}
eg.Go(func() error {
shBlz := b.getBlobovniczaWithoutCaching(p)
_, err := shBlz.Open()
if err != nil {
return err
}
defer shBlz.Close()
b.log.Debug(logs.BlobovniczatreeBlobovniczaSuccessfullyInitializedClosing, zap.String("id", p))
return nil
})
return false, nil
})
if err != nil {
_ = eg.Wait()
return err
}
}
return eg.Wait()
} }
func (b *Blobovniczas) openManagers() { func (b *Blobovniczas) openManagers() {
b.commondbManager.Open() //order important b.commondbManager.Open() // order important
b.activeDBManager.Open() b.activeDBManager.Open()
b.dbCache.Open() b.dbCache.Open()
} }
// Close implements common.Storage. // Close implements common.Storage.
func (b *Blobovniczas) Close() error { func (b *Blobovniczas) Close() error {
b.dbCache.Close() //order important b.dbCache.Close() // order important
b.activeDBManager.Close() b.activeDBManager.Close()
b.commondbManager.Close() b.commondbManager.Close()
@ -64,3 +135,37 @@ func (b *Blobovniczas) getBlobovnicza(p string) *sharedDB {
func (b *Blobovniczas) getBlobovniczaWithoutCaching(p string) *sharedDB { func (b *Blobovniczas) getBlobovniczaWithoutCaching(p string) *sharedDB {
return b.commondbManager.GetByPath(p) return b.commondbManager.GetByPath(p)
} }
func (b *Blobovniczas) addDBExtensionToDBs(path string, depth uint64) error {
entries, err := os.ReadDir(path)
if os.IsNotExist(err) && depth == 0 {
return nil
}
for _, entry := range entries {
if entry.IsDir() {
if err := b.addDBExtensionToDBs(filepath.Join(path, entry.Name()), depth+1); err != nil {
return err
}
continue
}
if strings.HasSuffix(entry.Name(), dbExtension) {
continue
}
if b.readOnly {
return errFailedToChangeExtensionReadOnly
}
sourcePath := filepath.Join(path, entry.Name())
targetPath := filepath.Join(path, entry.Name()+dbExtension)
b.log.Debug(logs.BlobovniczaTreeFixingFileExtensionForFile, zap.String("source", sourcePath), zap.String("target", targetPath))
if err := os.Rename(sourcePath, targetPath); err != nil {
b.log.Error(logs.BlobovniczaTreeFixingFileExtensionFailed, zap.String("source", sourcePath), zap.String("target", targetPath), zap.Error(err))
return err
}
b.log.Debug(logs.BlobovniczaTreeFixingFileExtensionCompletedSuccessfully, zap.String("source", sourcePath), zap.String("target", targetPath))
}
return nil
}
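initializeDBs above opens the existing databases in parallel but caps the concurrency with errgroup.SetLimit, which is what the new init_worker_count option controls. A self-contained sketch of that bounded-parallel pattern; openDB and the paths are placeholders for the real open-and-init work:

```
package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/sync/errgroup"
)

func openDB(ctx context.Context, path string) error {
	select {
	case <-ctx.Done():
		return ctx.Err()
	case <-time.After(10 * time.Millisecond): // stand-in for open + init
		fmt.Println("initialized", path)
		return nil
	}
}

func main() {
	paths := []string{"0/0.db", "0/1.db", "1/0.db", "1/1.db"}
	const initWorkerCount = 2 // at most two databases are opened at once

	eg, ctx := errgroup.WithContext(context.Background())
	eg.SetLimit(initWorkerCount)

	for _, p := range paths {
		p := p
		eg.Go(func() error {
			return openDB(ctx, p)
		})
	}
	if err := eg.Wait(); err != nil {
		fmt.Println("init failed:", err)
		return
	}
	fmt.Println("all databases initialized")
}
```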

View file

@ -0,0 +1,189 @@
package blobovniczatree
import (
"context"
"os"
"path/filepath"
"strings"
"testing"
objectCore "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/core/object"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobovnicza"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/internal/blobstortest"
"github.com/stretchr/testify/require"
)
func TestDBExtensionFix(t *testing.T) {
root := t.TempDir()
createTestTree(t, 0, 2, 3, root)
t.Run("adds suffix if not exists", func(t *testing.T) {
openAndCloseTestTree(t, 2, 3, root)
validateTestTree(t, root)
})
t.Run("not adds second suffix if exists", func(t *testing.T) {
openAndCloseTestTree(t, 2, 3, root)
validateTestTree(t, root)
})
}
func createTestTree(t *testing.T, currentDepth, depth, width uint64, path string) {
if currentDepth == depth {
var w uint64
for ; w < width; w++ {
dbPath := filepath.Join(path, u64ToHexString(w))
b := blobovnicza.New(blobovnicza.WithPath(dbPath))
require.NoError(t, b.Open())
require.NoError(t, b.Init())
require.NoError(t, b.Close())
}
return
}
var w uint64
for ; w < width; w++ {
createTestTree(t, currentDepth+1, depth, width, filepath.Join(path, u64ToHexString(w)))
}
}
func openAndCloseTestTree(t *testing.T, depth, width uint64, path string) {
blz := NewBlobovniczaTree(
WithBlobovniczaShallowDepth(depth),
WithBlobovniczaShallowWidth(width),
WithRootPath(path),
)
require.NoError(t, blz.Open(false))
require.NoError(t, blz.Init())
require.NoError(t, blz.Close())
}
func validateTestTree(t *testing.T, path string) {
entries, err := os.ReadDir(path)
require.NoError(t, err)
for _, entry := range entries {
if entry.IsDir() {
validateTestTree(t, filepath.Join(path, entry.Name()))
} else {
require.True(t, strings.HasSuffix(entry.Name(), dbExtension))
require.False(t, strings.HasSuffix(strings.TrimSuffix(entry.Name(), dbExtension), dbExtension))
}
}
}
func TestObjectsAvailableAfterDepthAndWidthEdit(t *testing.T) {
t.Parallel()
rootDir := t.TempDir()
blz := NewBlobovniczaTree(
WithBlobovniczaShallowDepth(3),
WithBlobovniczaShallowWidth(5),
WithRootPath(rootDir),
)
require.NoError(t, blz.Open(false))
require.NoError(t, blz.Init())
obj35 := blobstortest.NewObject(10 * 1024)
addr35 := objectCore.AddressOf(obj35)
raw, err := obj35.Marshal()
require.NoError(t, err)
pRes35, err := blz.Put(context.Background(), common.PutPrm{
Address: addr35,
Object: obj35,
RawData: raw,
})
require.NoError(t, err)
gRes, err := blz.Get(context.Background(), common.GetPrm{
Address: addr35,
StorageID: pRes35.StorageID,
})
require.NoError(t, err)
require.EqualValues(t, obj35, gRes.Object)
gRes, err = blz.Get(context.Background(), common.GetPrm{
Address: addr35,
})
require.NoError(t, err)
require.EqualValues(t, obj35, gRes.Object)
require.NoError(t, blz.Close())
// change depth and width
blz = NewBlobovniczaTree(
WithBlobovniczaShallowDepth(5),
WithBlobovniczaShallowWidth(2),
WithRootPath(rootDir),
)
require.NoError(t, blz.Open(false))
require.NoError(t, blz.Init())
gRes, err = blz.Get(context.Background(), common.GetPrm{
Address: addr35,
StorageID: pRes35.StorageID,
})
require.NoError(t, err)
require.EqualValues(t, obj35, gRes.Object)
gRes, err = blz.Get(context.Background(), common.GetPrm{
Address: addr35,
})
require.NoError(t, err)
require.EqualValues(t, obj35, gRes.Object)
obj52 := blobstortest.NewObject(10 * 1024)
addr52 := objectCore.AddressOf(obj52)
raw, err = obj52.Marshal()
require.NoError(t, err)
pRes52, err := blz.Put(context.Background(), common.PutPrm{
Address: addr52,
Object: obj52,
RawData: raw,
})
require.NoError(t, err)
require.NoError(t, blz.Close())
// change depth and width back
blz = NewBlobovniczaTree(
WithBlobovniczaShallowDepth(3),
WithBlobovniczaShallowWidth(5),
WithRootPath(rootDir),
)
require.NoError(t, blz.Open(false))
require.NoError(t, blz.Init())
gRes, err = blz.Get(context.Background(), common.GetPrm{
Address: addr35,
StorageID: pRes35.StorageID,
})
require.NoError(t, err)
require.EqualValues(t, obj35, gRes.Object)
gRes, err = blz.Get(context.Background(), common.GetPrm{
Address: addr35,
})
require.NoError(t, err)
require.EqualValues(t, obj35, gRes.Object)
gRes, err = blz.Get(context.Background(), common.GetPrm{
Address: addr52,
StorageID: pRes52.StorageID,
})
require.NoError(t, err)
require.EqualValues(t, obj52, gRes.Object)
gRes, err = blz.Get(context.Background(), common.GetPrm{
Address: addr52,
})
require.NoError(t, err)
require.EqualValues(t, obj52, gRes.Object)
require.NoError(t, blz.Close())
}

View file

@ -3,6 +3,7 @@ package blobovniczatree
import ( import (
"context" "context"
"encoding/hex" "encoding/hex"
"errors"
"time" "time"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs" "git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs"
@ -17,6 +18,8 @@ import (
"go.uber.org/zap" "go.uber.org/zap"
) )
var errObjectIsDeleteProtected = errors.New("object is delete protected")
// Delete deletes object from blobovnicza tree. // Delete deletes object from blobovnicza tree.
// //
// If blobovnicza ID is specified, only this blobovnicza is processed. // If blobovnicza ID is specified, only this blobovnicza is processed.
@ -42,12 +45,22 @@ func (b *Blobovniczas) Delete(ctx context.Context, prm common.DeletePrm) (res co
return common.DeleteRes{}, common.ErrReadOnly return common.DeleteRes{}, common.ErrReadOnly
} }
if b.rebuildGuard.TryRLock() {
defer b.rebuildGuard.RUnlock()
} else {
return common.DeleteRes{}, errRebuildInProgress
}
if b.deleteProtectedObjects.Contains(prm.Address) {
return common.DeleteRes{}, errObjectIsDeleteProtected
}
var bPrm blobovnicza.DeletePrm var bPrm blobovnicza.DeletePrm
bPrm.SetAddress(prm.Address) bPrm.SetAddress(prm.Address)
if prm.StorageID != nil { if prm.StorageID != nil {
id := blobovnicza.NewIDFromBytes(prm.StorageID) id := NewIDFromBytes(prm.StorageID)
shBlz := b.getBlobovnicza(id.String()) shBlz := b.getBlobovnicza(id.Path())
blz, err := shBlz.Open() blz, err := shBlz.Open()
if err != nil { if err != nil {
return res, err return res, err
@ -62,7 +75,7 @@ func (b *Blobovniczas) Delete(ctx context.Context, prm common.DeletePrm) (res co
objectFound := false objectFound := false
err = b.iterateSortedLeaves(ctx, &prm.Address, func(p string) (bool, error) { err = b.iterateSortedDBPaths(ctx, prm.Address, func(p string) (bool, error) {
res, err = b.deleteObjectFromLevel(ctx, bPrm, p) res, err = b.deleteObjectFromLevel(ctx, bPrm, p)
if err != nil { if err != nil {
if !client.IsErrObjectNotFound(err) { if !client.IsErrObjectNotFound(err) {

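The delete path above now fails fast instead of blocking while a rebuild holds the guard, and refuses to delete objects that are mid-move. A self-contained sketch of that `TryRLock` gate; the names other than the `sync.RWMutex` methods are illustrative, not the actual frostfs code:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// rebuildGuard plays the role of Blobovniczas.rebuildGuard above.
var rebuildGuard sync.RWMutex

// errRebuildInProgress mirrors the sentinel returned by Delete above.
var errRebuildInProgress = errors.New("rebuild is in progress, the operation cannot be performed")

// guardedDelete fails fast instead of blocking when a rebuild holds the
// guard exclusively.
func guardedDelete() error {
	if !rebuildGuard.TryRLock() {
		return errRebuildInProgress
	}
	defer rebuildGuard.RUnlock()
	// ... perform the delete while the rebuild is excluded ...
	return nil
}

func main() {
	fmt.Println(guardedDelete()) // <nil>: no rebuild is running

	rebuildGuard.Lock() // simulate a rebuild in progress
	fmt.Println(guardedDelete()) // rebuild is in progress, ...
	rebuildGuard.Unlock()
}
```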
View file

@ -35,8 +35,8 @@ func (b *Blobovniczas) Exists(ctx context.Context, prm common.ExistsPrm) (common
defer span.End() defer span.End()
if prm.StorageID != nil { if prm.StorageID != nil {
id := blobovnicza.NewIDFromBytes(prm.StorageID) id := NewIDFromBytes(prm.StorageID)
shBlz := b.getBlobovnicza(id.String()) shBlz := b.getBlobovnicza(id.Path())
blz, err := shBlz.Open() blz, err := shBlz.Open()
if err != nil { if err != nil {
return common.ExistsRes{}, err return common.ExistsRes{}, err
@ -50,7 +50,7 @@ func (b *Blobovniczas) Exists(ctx context.Context, prm common.ExistsPrm) (common
var gPrm blobovnicza.GetPrm var gPrm blobovnicza.GetPrm
gPrm.SetAddress(prm.Address) gPrm.SetAddress(prm.Address)
err := b.iterateSortedLeaves(ctx, &prm.Address, func(p string) (bool, error) { err := b.iterateSortedDBPaths(ctx, prm.Address, func(p string) (bool, error) {
_, err := b.getObjectFromLevel(ctx, gPrm, p) _, err := b.getObjectFromLevel(ctx, gPrm, p)
if err != nil { if err != nil {
if !client.IsErrObjectNotFound(err) { if !client.IsErrObjectNotFound(err) {

View file

@ -55,7 +55,7 @@ func TestExistsInvalidStorageID(t *testing.T) {
// An invalid boltdb file is created so that it returns an error when opened // An invalid boltdb file is created so that it returns an error when opened
require.NoError(t, os.MkdirAll(filepath.Join(dir, relBadFileDir), os.ModePerm)) require.NoError(t, os.MkdirAll(filepath.Join(dir, relBadFileDir), os.ModePerm))
require.NoError(t, os.WriteFile(filepath.Join(dir, relBadFileDir, badFileName), []byte("not a boltdb file content"), 0777)) require.NoError(t, os.WriteFile(filepath.Join(dir, relBadFileDir, badFileName+".db"), []byte("not a boltdb file content"), 0777))
res, err := b.Exists(context.Background(), common.ExistsPrm{Address: addr, StorageID: []byte(filepath.Join(relBadFileDir, badFileName))}) res, err := b.Exists(context.Background(), common.ExistsPrm{Address: addr, StorageID: []byte(filepath.Join(relBadFileDir, badFileName))})
require.Error(t, err) require.Error(t, err)

View file

@ -46,8 +46,8 @@ func (b *Blobovniczas) Get(ctx context.Context, prm common.GetPrm) (res common.G
bPrm.SetAddress(prm.Address) bPrm.SetAddress(prm.Address)
if prm.StorageID != nil { if prm.StorageID != nil {
id := blobovnicza.NewIDFromBytes(prm.StorageID) id := NewIDFromBytes(prm.StorageID)
shBlz := b.getBlobovnicza(id.String()) shBlz := b.getBlobovnicza(id.Path())
blz, err := shBlz.Open() blz, err := shBlz.Open()
if err != nil { if err != nil {
return res, err return res, err
@ -62,7 +62,7 @@ func (b *Blobovniczas) Get(ctx context.Context, prm common.GetPrm) (res common.G
return res, err return res, err
} }
err = b.iterateSortedLeaves(ctx, &prm.Address, func(p string) (bool, error) { err = b.iterateSortedDBPaths(ctx, prm.Address, func(p string) (bool, error) {
res, err = b.getObjectFromLevel(ctx, bPrm, p) res, err = b.getObjectFromLevel(ctx, bPrm, p)
if err != nil { if err != nil {
if !client.IsErrObjectNotFound(err) { if !client.IsErrObjectNotFound(err) {

View file

@ -45,8 +45,8 @@ func (b *Blobovniczas) GetRange(ctx context.Context, prm common.GetRangePrm) (re
defer span.End() defer span.End()
if prm.StorageID != nil { if prm.StorageID != nil {
id := blobovnicza.NewIDFromBytes(prm.StorageID) id := NewIDFromBytes(prm.StorageID)
shBlz := b.getBlobovnicza(id.String()) shBlz := b.getBlobovnicza(id.Path())
blz, err := shBlz.Open() blz, err := shBlz.Open()
if err != nil { if err != nil {
return common.GetRangeRes{}, err return common.GetRangeRes{}, err
@ -63,7 +63,7 @@ func (b *Blobovniczas) GetRange(ctx context.Context, prm common.GetRangePrm) (re
objectFound := false objectFound := false
err = b.iterateSortedLeaves(ctx, &prm.Address, func(p string) (bool, error) { err = b.iterateSortedDBPaths(ctx, prm.Address, func(p string) (bool, error) {
res, err = b.getRangeFromLevel(ctx, prm, p) res, err = b.getRangeFromLevel(ctx, prm, p)
if err != nil { if err != nil {
outOfBounds := isErrOutOfRange(err) outOfBounds := isErrOutOfRange(err)

View file

@ -1,4 +1,4 @@
package blobovnicza package blobovniczatree
// ID represents Blobovnicza identifier. // ID represents Blobovnicza identifier.
type ID []byte type ID []byte
@ -8,8 +8,8 @@ func NewIDFromBytes(v []byte) *ID {
return (*ID)(&v) return (*ID)(&v)
} }
func (id ID) String() string { func (id ID) Path() string {
return string(id) return string(id) + dbExtension
} }
func (id ID) Bytes() []byte { func (id ID) Bytes() []byte {

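With `ID` moved into `blobovniczatree`, a storage ID recorded in the metabase now resolves to a database file path rather than a plain directory index. An illustration of the mapping, assuming `dbExtension` is `".db"` (as the `0.db`/`1.db` fixtures used by the tests in this change suggest); this fragment compiles only inside the `blobovniczatree` package:

```go
// Illustration only: how a storage ID kept in the metabase resolves to a
// blobovnicza file on disk after this change.
storageID := []byte("0/0/1") // value stored in the metabase, no extension
id := NewIDFromBytes(storageID)

fmt.Println(id.Path())          // "0/0/1.db" - relative path of the DB file
fmt.Println(string(id.Bytes())) // "0/0/1"    - raw bytes, still extension-free
```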
View file

@ -3,7 +3,9 @@ package blobovniczatree
import ( import (
"context" "context"
"fmt" "fmt"
"os"
"path/filepath" "path/filepath"
"strings"
"time" "time"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobovnicza" "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobovnicza"
@ -50,7 +52,7 @@ func (b *Blobovniczas) Iterate(ctx context.Context, prm common.IteratePrm) (comm
return prm.Handler(common.IterationElement{ return prm.Handler(common.IterationElement{
Address: elem.Address(), Address: elem.Address(),
ObjectData: data, ObjectData: data,
StorageID: []byte(p), StorageID: []byte(strings.TrimSuffix(p, dbExtension)),
}) })
} }
return prm.LazyHandler(elem.Address(), func() ([]byte, error) { return prm.LazyHandler(elem.Address(), func() ([]byte, error) {
@ -67,7 +69,7 @@ func (b *Blobovniczas) Iterate(ctx context.Context, prm common.IteratePrm) (comm
// iterator over all Blobovniczas in unsorted order. Break on f's error return. // iterator over all Blobovniczas in unsorted order. Break on f's error return.
func (b *Blobovniczas) iterateBlobovniczas(ctx context.Context, ignoreErrors bool, f func(string, *blobovnicza.Blobovnicza) error) error { func (b *Blobovniczas) iterateBlobovniczas(ctx context.Context, ignoreErrors bool, f func(string, *blobovnicza.Blobovnicza) error) error {
return b.iterateLeaves(ctx, func(p string) (bool, error) { return b.iterateExistingDBPaths(ctx, func(p string) (bool, error) {
shBlz := b.getBlobovnicza(p) shBlz := b.getBlobovnicza(p)
blz, err := shBlz.Open() blz, err := shBlz.Open()
if err != nil { if err != nil {
@ -84,7 +86,9 @@ func (b *Blobovniczas) iterateBlobovniczas(ctx context.Context, ignoreErrors boo
}) })
} }
// iterator over the paths of Blobovniczas sorted by weight. // iterateSortedLeaves iterates over the paths of Blobovniczas sorted by weight.
//
// Uses depth, width and leaf width for iteration.
func (b *Blobovniczas) iterateSortedLeaves(ctx context.Context, addr *oid.Address, f func(string) (bool, error)) error { func (b *Blobovniczas) iterateSortedLeaves(ctx context.Context, addr *oid.Address, f func(string) (bool, error)) error {
_, err := b.iterateSorted( _, err := b.iterateSorted(
ctx, ctx,
@ -124,7 +128,9 @@ func (b *Blobovniczas) iterateSorted(ctx context.Context, addr *oid.Address, cur
} }
indices := indexSlice(levelWidth) indices := indexSlice(levelWidth)
hrw.SortSliceByValue(indices, addressHash(addr, filepath.Join(curPath...))) if !isLeafLevel {
hrw.SortSliceByValue(indices, addressHash(addr, filepath.Join(curPath...)))
}
exec := uint64(len(curPath)) == execDepth exec := uint64(len(curPath)) == execDepth
@ -134,10 +140,16 @@ func (b *Blobovniczas) iterateSorted(ctx context.Context, addr *oid.Address, cur
return false, ctx.Err() return false, ctx.Err()
default: default:
} }
lastPart := u64ToHexString(indices[i])
if isLeafLevel {
lastPart = u64ToHexStringExt(indices[i])
}
if i == 0 { if i == 0 {
curPath = append(curPath, u64ToHexString(indices[i])) curPath = append(curPath, lastPart)
} else { } else {
curPath[len(curPath)-1] = u64ToHexString(indices[i]) curPath[len(curPath)-1] = lastPart
} }
if exec { if exec {
@ -156,9 +168,110 @@ func (b *Blobovniczas) iterateSorted(ctx context.Context, addr *oid.Address, cur
return false, nil return false, nil
} }
// iterator over the paths of Blobovniczas in random order. // iterateExistingDBPaths iterates over the paths of Blobovniczas without any order.
func (b *Blobovniczas) iterateLeaves(ctx context.Context, f func(string) (bool, error)) error { //
return b.iterateSortedLeaves(ctx, nil, f) // Uses existing blobovnicza files for iteration.
func (b *Blobovniczas) iterateExistingDBPaths(ctx context.Context, f func(string) (bool, error)) error {
b.dbFilesGuard.RLock()
defer b.dbFilesGuard.RUnlock()
_, err := b.iterateExistingDBPathsDFS(ctx, "", f)
return err
}
func (b *Blobovniczas) iterateExistingDBPathsDFS(ctx context.Context, path string, f func(string) (bool, error)) (bool, error) {
sysPath := filepath.Join(b.rootPath, path)
entries, err := os.ReadDir(sysPath)
if os.IsNotExist(err) && b.readOnly && path == "" { // non-initialized tree in read-only mode
return false, nil
}
if err != nil {
return false, err
}
for _, entry := range entries {
select {
case <-ctx.Done():
return false, ctx.Err()
default:
}
if entry.IsDir() {
stop, err := b.iterateExistingDBPathsDFS(ctx, filepath.Join(path, entry.Name()), f)
if err != nil {
return false, err
}
if stop {
return true, nil
}
} else {
stop, err := f(filepath.Join(path, entry.Name()))
if err != nil {
return false, err
}
if stop {
return true, nil
}
}
}
return false, nil
}
func (b *Blobovniczas) iterateSortedDBPaths(ctx context.Context, addr oid.Address, f func(string) (bool, error)) error {
b.dbFilesGuard.RLock()
defer b.dbFilesGuard.RUnlock()
_, err := b.iterateSortedDBPathsInternal(ctx, "", addr, f)
return err
}
func (b *Blobovniczas) iterateSortedDBPathsInternal(ctx context.Context, path string, addr oid.Address, f func(string) (bool, error)) (bool, error) {
sysPath := filepath.Join(b.rootPath, path)
entries, err := os.ReadDir(sysPath)
if os.IsNotExist(err) && b.readOnly && path == "" { // non-initialized tree in read-only mode
return false, nil
}
if err != nil {
return false, err
}
var dbIdxs []uint64
var dirIdxs []uint64
for _, entry := range entries {
idx := u64FromHexString(entry.Name())
if entry.IsDir() {
dirIdxs = append(dirIdxs, idx)
} else {
dbIdxs = append(dbIdxs, idx)
}
}
if len(dbIdxs) > 0 {
for _, dbIdx := range dbIdxs {
dbPath := filepath.Join(path, u64ToHexStringExt(dbIdx))
stop, err := f(dbPath)
if err != nil {
return false, err
}
if stop {
return true, nil
}
}
}
if len(dirIdxs) > 0 {
hrw.SortSliceByValue(dirIdxs, addressHash(&addr, path))
for _, dirIdx := range dirIdxs {
dirPath := filepath.Join(path, u64ToHexString(dirIdx))
stop, err := b.iterateSortedDBPathsInternal(ctx, dirPath, addr, f)
if err != nil {
return false, err
}
if stop {
return true, nil
}
}
}
return false, nil
} }
// makes slice of uint64 values from 0 to number-1. // makes slice of uint64 values from 0 to number-1.

View file

@ -0,0 +1,42 @@
package blobovniczatree
import (
"context"
"testing"
oidtest "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id/test"
"github.com/stretchr/testify/require"
)
func TestIterateSortedLeavesAndDBPathsAreSame(t *testing.T) {
t.Parallel()
blz := NewBlobovniczaTree(
WithBlobovniczaShallowDepth(3),
WithBlobovniczaShallowWidth(5),
WithRootPath(t.TempDir()),
)
blz.createDBInAdvance = true
require.NoError(t, blz.Open(false))
require.NoError(t, blz.Init())
defer func() {
require.NoError(t, blz.Close())
}()
addr := oidtest.Address()
var leaves []string
var dbPaths []string
blz.iterateSortedLeaves(context.Background(), &addr, func(s string) (bool, error) {
leaves = append(leaves, s)
return false, nil
})
blz.iterateSortedDBPaths(context.Background(), addr, func(s string) (bool, error) {
dbPaths = append(dbPaths, s)
return false, nil
})
require.Equal(t, leaves, dbPaths)
}

View file

@ -1,7 +1,9 @@
package blobovniczatree package blobovniczatree
import ( import (
"errors"
"fmt" "fmt"
"os"
"path/filepath" "path/filepath"
"sync" "sync"
"sync/atomic" "sync/atomic"
@ -12,9 +14,13 @@ import (
"go.uber.org/zap" "go.uber.org/zap"
) )
var (
errClosingClosedBlobovnicza = errors.New("closing closed blobovnicza is not allowed")
)
// sharedDB is responsible for opening and closing a file of a single blobovnicza. // sharedDB is responsible for opening and closing a file of a single blobovnicza.
type sharedDB struct { type sharedDB struct {
guard *sync.RWMutex cond *sync.Cond
blcza *blobovnicza.Blobovnicza blcza *blobovnicza.Blobovnicza
refCount uint32 refCount uint32
@ -30,8 +36,9 @@ type sharedDB struct {
func newSharedDB(options []blobovnicza.Option, path string, readOnly bool, func newSharedDB(options []blobovnicza.Option, path string, readOnly bool,
metrics blobovnicza.Metrics, openDBCounter *openDBCounter, closedFlag *atomic.Bool, log *logger.Logger) *sharedDB { metrics blobovnicza.Metrics, openDBCounter *openDBCounter, closedFlag *atomic.Bool, log *logger.Logger) *sharedDB {
return &sharedDB{ return &sharedDB{
guard: &sync.RWMutex{}, cond: &sync.Cond{
L: &sync.RWMutex{},
},
options: options, options: options,
path: path, path: path,
readOnly: readOnly, readOnly: readOnly,
@ -47,8 +54,8 @@ func (b *sharedDB) Open() (*blobovnicza.Blobovnicza, error) {
return nil, errClosed return nil, errClosed
} }
b.guard.Lock() b.cond.L.Lock()
defer b.guard.Unlock() defer b.cond.L.Unlock()
if b.refCount > 0 { if b.refCount > 0 {
b.refCount++ b.refCount++
@ -76,11 +83,12 @@ func (b *sharedDB) Open() (*blobovnicza.Blobovnicza, error) {
} }
func (b *sharedDB) Close() { func (b *sharedDB) Close() {
b.guard.Lock() b.cond.L.Lock()
defer b.guard.Unlock() defer b.cond.L.Unlock()
if b.refCount == 0 { if b.refCount == 0 {
b.log.Error(logs.AttemtToCloseAlreadyClosedBlobovnicza, zap.String("id", b.path)) b.log.Error(logs.AttemtToCloseAlreadyClosedBlobovnicza, zap.String("id", b.path))
b.cond.Broadcast()
return return
} }
@ -98,32 +106,108 @@ func (b *sharedDB) Close() {
} }
b.refCount-- b.refCount--
if b.refCount == 1 {
b.cond.Broadcast()
}
} }
func (b *sharedDB) Path() string { func (b *sharedDB) CloseAndRemoveFile() error {
b.cond.L.Lock()
if b.refCount > 1 {
b.cond.Wait()
}
defer b.cond.L.Unlock()
if b.refCount == 0 {
return errClosingClosedBlobovnicza
}
if err := b.blcza.Close(); err != nil {
b.log.Error(logs.BlobovniczatreeCouldNotCloseBlobovnicza,
zap.String("id", b.path),
zap.String("error", err.Error()),
)
return fmt.Errorf("failed to close blobovnicza (path = %s): %w", b.path, err)
}
b.refCount = 0
b.blcza = nil
b.openDBCounter.Dec()
return os.Remove(b.path)
}
func (b *sharedDB) SystemPath() string {
return b.path return b.path
} }
// levelDbManager stores pointers to the sharedDBs for the leaf directory of the blobovnicza tree. // levelDbManager stores pointers to the sharedDBs for the leaf directory of the blobovnicza tree.
type levelDbManager struct { type levelDbManager struct {
databases []*sharedDB dbMtx *sync.RWMutex
databases map[uint64]*sharedDB
options []blobovnicza.Option
path string
readOnly bool
metrics blobovnicza.Metrics
openDBCounter *openDBCounter
closedFlag *atomic.Bool
log *logger.Logger
} }
func newLevelDBManager(width uint64, options []blobovnicza.Option, rootPath string, lvlPath string, func newLevelDBManager(options []blobovnicza.Option, rootPath string, lvlPath string,
readOnly bool, metrics blobovnicza.Metrics, openDBCounter *openDBCounter, closedFlog *atomic.Bool, log *logger.Logger) *levelDbManager { readOnly bool, metrics blobovnicza.Metrics, openDBCounter *openDBCounter, closedFlag *atomic.Bool, log *logger.Logger) *levelDbManager {
result := &levelDbManager{ result := &levelDbManager{
databases: make([]*sharedDB, width), databases: make(map[uint64]*sharedDB),
} dbMtx: &sync.RWMutex{},
for idx := uint64(0); idx < width; idx++ {
result.databases[idx] = newSharedDB(options, filepath.Join(rootPath, lvlPath, u64ToHexString(idx)), readOnly, metrics, openDBCounter, closedFlog, log) options: options,
path: filepath.Join(rootPath, lvlPath),
readOnly: readOnly,
metrics: metrics,
openDBCounter: openDBCounter,
closedFlag: closedFlag,
log: log,
} }
return result return result
} }
func (m *levelDbManager) GetByIndex(idx uint64) *sharedDB { func (m *levelDbManager) GetByIndex(idx uint64) *sharedDB {
res := m.getDBIfExists(idx)
if res != nil {
return res
}
return m.getOrCreateDB(idx)
}
func (m *levelDbManager) getDBIfExists(idx uint64) *sharedDB {
m.dbMtx.RLock()
defer m.dbMtx.RUnlock()
return m.databases[idx] return m.databases[idx]
} }
func (m *levelDbManager) getOrCreateDB(idx uint64) *sharedDB {
m.dbMtx.Lock()
defer m.dbMtx.Unlock()
db := m.databases[idx]
if db != nil {
return db
}
db = newSharedDB(m.options, filepath.Join(m.path, u64ToHexStringExt(idx)), m.readOnly, m.metrics, m.openDBCounter, m.closedFlag, m.log)
m.databases[idx] = db
return db
}
func (m *levelDbManager) hasAnyDB() bool {
m.dbMtx.RLock()
defer m.dbMtx.RUnlock()
return len(m.databases) > 0
}
// dbManager manages the opening and closing of blobovnicza instances. // dbManager manages the opening and closing of blobovnicza instances.
// //
// The blobovnicza opens at the first request, closes after the last request. // The blobovnicza opens at the first request, closes after the last request.
@ -133,21 +217,19 @@ type dbManager struct {
closedFlag *atomic.Bool closedFlag *atomic.Bool
dbCounter *openDBCounter dbCounter *openDBCounter
rootPath string rootPath string
options []blobovnicza.Option options []blobovnicza.Option
readOnly bool readOnly bool
metrics blobovnicza.Metrics metrics blobovnicza.Metrics
leafWidth uint64 log *logger.Logger
log *logger.Logger
} }
func newDBManager(rootPath string, options []blobovnicza.Option, leafWidth uint64, readOnly bool, metrics blobovnicza.Metrics, log *logger.Logger) *dbManager { func newDBManager(rootPath string, options []blobovnicza.Option, readOnly bool, metrics blobovnicza.Metrics, log *logger.Logger) *dbManager {
return &dbManager{ return &dbManager{
rootPath: rootPath, rootPath: rootPath,
options: options, options: options,
readOnly: readOnly, readOnly: readOnly,
metrics: metrics, metrics: metrics,
leafWidth: leafWidth,
levelToManager: make(map[string]*levelDbManager), levelToManager: make(map[string]*levelDbManager),
levelToManagerGuard: &sync.RWMutex{}, levelToManagerGuard: &sync.RWMutex{},
log: log, log: log,
@ -163,6 +245,17 @@ func (m *dbManager) GetByPath(path string) *sharedDB {
return levelManager.GetByIndex(curIndex) return levelManager.GetByIndex(curIndex)
} }
func (m *dbManager) CleanResources(path string) {
lvlPath := filepath.Dir(path)
m.levelToManagerGuard.Lock()
defer m.levelToManagerGuard.Unlock()
if result, ok := m.levelToManager[lvlPath]; ok && !result.hasAnyDB() {
delete(m.levelToManager, lvlPath)
}
}
func (m *dbManager) Open() { func (m *dbManager) Open() {
m.closedFlag.Store(false) m.closedFlag.Store(false)
} }
@ -195,7 +288,7 @@ func (m *dbManager) getOrCreateLevelManager(lvlPath string) *levelDbManager {
return result return result
} }
result := newLevelDBManager(m.leafWidth, m.options, m.rootPath, lvlPath, m.readOnly, m.metrics, m.dbCounter, m.closedFlag, m.log) result := newLevelDBManager(m.options, m.rootPath, lvlPath, m.readOnly, m.metrics, m.dbCounter, m.closedFlag, m.log)
m.levelToManager[lvlPath] = result m.levelToManager[lvlPath] = result
return result return result
} }

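`sharedDB` above pairs its reference count with a `sync.Cond` so that `CloseAndRemoveFile` can wait for in-flight readers to release the database before removing the file. For reference, a self-contained sketch of that gate pattern; the `refGate` type and its method names are illustrative, not the frostfs code:

```go
package main

import "sync"

// refGate sketches the pattern used by sharedDB above: readers bump a
// counter, and a closer waits on a sync.Cond until the counter drains
// before tearing the resource down.
type refGate struct {
	cond *sync.Cond
	refs int
}

func newRefGate() *refGate { return &refGate{cond: sync.NewCond(&sync.Mutex{})} }

func (g *refGate) acquire() {
	g.cond.L.Lock()
	g.refs++
	g.cond.L.Unlock()
}

func (g *refGate) release() {
	g.cond.L.Lock()
	g.refs--
	if g.refs == 0 {
		g.cond.Broadcast() // wake a waiting closer
	}
	g.cond.L.Unlock()
}

// closeWhenIdle blocks until all references are released.
func (g *refGate) closeWhenIdle() {
	g.cond.L.Lock()
	for g.refs > 0 { // Wait can wake spuriously, so re-check in a loop
		g.cond.Wait()
	}
	g.cond.L.Unlock()
	// the underlying resource can be removed safely here
}

func main() {
	g := newRefGate()
	g.acquire()
	go g.release()
	g.closeWhenIdle()
}
```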
View file

@ -6,6 +6,13 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobovnicza" "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobovnicza"
) )
const (
rebuildStatusNotStarted = "not_started"
rebuildStatusRunning = "running"
rebuildStatusCompleted = "completed"
rebuildStatusFailed = "failed"
)
type Metrics interface { type Metrics interface {
Blobovnicza() blobovnicza.Metrics Blobovnicza() blobovnicza.Metrics
@ -14,6 +21,10 @@ type Metrics interface {
SetMode(readOnly bool) SetMode(readOnly bool)
Close() Close()
SetRebuildStatus(status string)
ObjectMoved(d time.Duration)
SetRebuildPercent(value uint32)
Delete(d time.Duration, success, withStorageID bool) Delete(d time.Duration, success, withStorageID bool)
Exists(d time.Duration, success, withStorageID bool) Exists(d time.Duration, success, withStorageID bool)
GetRange(d time.Duration, size int, success, withStorageID bool) GetRange(d time.Duration, size int, success, withStorageID bool)
@ -27,6 +38,9 @@ type noopMetrics struct{}
func (m *noopMetrics) SetParentID(string) {} func (m *noopMetrics) SetParentID(string) {}
func (m *noopMetrics) SetMode(bool) {} func (m *noopMetrics) SetMode(bool) {}
func (m *noopMetrics) Close() {} func (m *noopMetrics) Close() {}
func (m *noopMetrics) SetRebuildStatus(string) {}
func (m *noopMetrics) SetRebuildPercent(uint32) {}
func (m *noopMetrics) ObjectMoved(time.Duration) {}
func (m *noopMetrics) Delete(time.Duration, bool, bool) {} func (m *noopMetrics) Delete(time.Duration, bool, bool) {}
func (m *noopMetrics) Exists(time.Duration, bool, bool) {} func (m *noopMetrics) Exists(time.Duration, bool, bool) {}
func (m *noopMetrics) GetRange(time.Duration, int, bool, bool) {} func (m *noopMetrics) GetRange(time.Duration, int, bool, bool) {}

View file

@ -2,6 +2,7 @@ package blobovniczatree
import ( import (
"io/fs" "io/fs"
"time"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobovnicza" "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobovnicza"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/compression" "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/compression"
@ -10,39 +11,48 @@ import (
) )
type cfg struct { type cfg struct {
log *logger.Logger log *logger.Logger
perm fs.FileMode perm fs.FileMode
readOnly bool readOnly bool
rootPath string rootPath string
openedCacheSize int openedCacheSize int
blzShallowDepth uint64 blzShallowDepth uint64
blzShallowWidth uint64 blzShallowWidth uint64
blzLeafWidth uint64 blzLeafWidth uint64
compression *compression.Config compression *compression.Config
blzOpts []blobovnicza.Option blzOpts []blobovnicza.Option
// reportError is the function called when encountering disk errors. reportError func(string, error) // reportError is the function called when encountering disk errors.
reportError func(string, error) metrics Metrics
metrics Metrics waitBeforeDropDB time.Duration
blzInitWorkerCount int
blzMoveBatchSize int
createDBInAdvance bool
} }
type Option func(*cfg) type Option func(*cfg)
const ( const (
defaultPerm = 0700 defaultPerm = 0o700
defaultOpenedCacheSize = 50 defaultOpenedCacheSize = 50
defaultBlzShallowDepth = 2 defaultBlzShallowDepth = 2
defaultBlzShallowWidth = 16 defaultBlzShallowWidth = 16
defaultWaitBeforeDropDB = 10 * time.Second
defaultBlzInitWorkerCount = 5
defaultBlzMoveBatchSize = 10000
) )
func initConfig(c *cfg) { func initConfig(c *cfg) {
*c = cfg{ *c = cfg{
log: &logger.Logger{Logger: zap.L()}, log: &logger.Logger{Logger: zap.L()},
perm: defaultPerm, perm: defaultPerm,
openedCacheSize: defaultOpenedCacheSize, openedCacheSize: defaultOpenedCacheSize,
blzShallowDepth: defaultBlzShallowDepth, blzShallowDepth: defaultBlzShallowDepth,
blzShallowWidth: defaultBlzShallowWidth, blzShallowWidth: defaultBlzShallowWidth,
reportError: func(string, error) {}, reportError: func(string, error) {},
metrics: &noopMetrics{}, metrics: &noopMetrics{},
waitBeforeDropDB: defaultWaitBeforeDropDB,
blzInitWorkerCount: defaultBlzInitWorkerCount,
blzMoveBatchSize: defaultBlzMoveBatchSize,
} }
} }
@ -106,3 +116,34 @@ func WithMetrics(m Metrics) Option {
c.metrics = m c.metrics = m
} }
} }
func WithWaitBeforeDropDB(t time.Duration) Option {
return func(c *cfg) {
c.waitBeforeDropDB = t
}
}
func WithMoveBatchSize(v int) Option {
return func(c *cfg) {
c.blzMoveBatchSize = v
}
}
// WithInitWorkerCount sets the maximum number of workers used to init the blobovnicza tree.
//
// Negative or zero value means no limit.
func WithInitWorkerCount(v int) Option {
if v <= 0 {
v = -1
}
return func(c *cfg) {
c.blzInitWorkerCount = v
}
}
// WithInitInAdvance returns an option to create blobovnicza tree DBs in advance.
func WithInitInAdvance(v bool) Option {
return func(c *cfg) {
c.createDBInAdvance = v
}
}

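Taken together, the new options configure the rebuild behaviour of a tree. A configuration sketch combining them; the root path and numeric values are placeholders rather than recommendations, and the fragment compiles only inside the `blobovniczatree` package (it also assumes the `time` import already present above):

```go
// Sketch: constructing a tree with the rebuild-related options added above.
blz := NewBlobovniczaTree(
	WithRootPath("/srv/frostfs/blobovnicza"), // placeholder path
	WithBlobovniczaShallowDepth(2),
	WithBlobovniczaShallowWidth(16),
	WithWaitBeforeDropDB(10*time.Second), // grace period before a rebuilt DB file is dropped
	WithMoveBatchSize(10000),             // objects moved per rebuild iteration
	WithInitWorkerCount(5),               // workers opening DBs during Init
	WithInitInAdvance(false),             // do not pre-create leaf DB files
)
_ = blz
```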
View file

@ -69,7 +69,7 @@ func (b *Blobovniczas) Put(ctx context.Context, prm common.PutPrm) (common.PutRe
type putIterator struct { type putIterator struct {
B *Blobovniczas B *Blobovniczas
ID *blobovnicza.ID ID *ID
AllFull bool AllFull bool
PutPrm blobovnicza.PutPrm PutPrm blobovnicza.PutPrm
} }
@ -101,15 +101,15 @@ func (i *putIterator) iterate(ctx context.Context, lvlPath string) (bool, error)
i.B.reportError(logs.BlobovniczatreeCouldNotPutObjectToActiveBlobovnicza, err) i.B.reportError(logs.BlobovniczatreeCouldNotPutObjectToActiveBlobovnicza, err)
} else { } else {
i.B.log.Debug(logs.BlobovniczatreeCouldNotPutObjectToActiveBlobovnicza, i.B.log.Debug(logs.BlobovniczatreeCouldNotPutObjectToActiveBlobovnicza,
zap.String("path", active.Path()), zap.String("path", active.SystemPath()),
zap.String("error", err.Error())) zap.String("error", err.Error()))
} }
return false, nil return false, nil
} }
idx := u64FromHexString(filepath.Base(active.Path())) idx := u64FromHexString(filepath.Base(active.SystemPath()))
i.ID = blobovnicza.NewIDFromBytes([]byte(filepath.Join(lvlPath, u64ToHexString(idx)))) i.ID = NewIDFromBytes([]byte(filepath.Join(lvlPath, u64ToHexString(idx))))
return true, nil return true, nil
} }

View file

@ -0,0 +1,478 @@
package blobovniczatree
import (
"bytes"
"context"
"errors"
"os"
"path/filepath"
"sync"
"sync/atomic"
"time"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobovnicza"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/common"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
"go.uber.org/zap"
"golang.org/x/sync/errgroup"
)
var (
errRebuildInProgress = errors.New("rebuild is in progress, the operation cannot be performed")
errBatchFull = errors.New("batch full")
)
func (b *Blobovniczas) Rebuild(ctx context.Context, prm common.RebuildPrm) (common.RebuildRes, error) {
if b.readOnly {
return common.RebuildRes{}, common.ErrReadOnly
}
b.metrics.SetRebuildStatus(rebuildStatusRunning)
b.metrics.SetRebuildPercent(0)
success := true
defer func() {
if success {
b.metrics.SetRebuildStatus(rebuildStatusCompleted)
} else {
b.metrics.SetRebuildStatus(rebuildStatusFailed)
}
}()
b.rebuildGuard.Lock()
defer b.rebuildGuard.Unlock()
var res common.RebuildRes
b.log.Debug(logs.BlobovniczaTreeCompletingPreviousRebuild)
completedPreviousMoves, err := b.completeIncompletedMove(ctx, prm.MetaStorage)
res.ObjectsMoved += completedPreviousMoves
if err != nil {
b.log.Warn(logs.BlobovniczaTreeCompletedPreviousRebuildFailed, zap.Error(err))
success = false
return res, err
}
b.log.Debug(logs.BlobovniczaTreeCompletedPreviousRebuildSuccess)
b.log.Debug(logs.BlobovniczaTreeCollectingDBToRebuild)
dbsToMigrate, err := b.getDBsToRebuild(ctx)
if err != nil {
b.log.Warn(logs.BlobovniczaTreeCollectingDBToRebuildFailed, zap.Error(err))
success = false
return res, err
}
b.log.Info(logs.BlobovniczaTreeCollectingDBToRebuildSuccess, zap.Int("blobovniczas_to_rebuild", len(dbsToMigrate)))
res, err = b.migrateDBs(ctx, dbsToMigrate, prm, res)
if err != nil {
success = false
}
return res, err
}
func (b *Blobovniczas) migrateDBs(ctx context.Context, dbs []string, prm common.RebuildPrm, res common.RebuildRes) (common.RebuildRes, error) {
var completedDBCount uint32
for _, db := range dbs {
b.log.Debug(logs.BlobovniczaTreeRebuildingBlobovnicza, zap.String("path", db))
movedObjects, err := b.rebuildDB(ctx, db, prm.MetaStorage, prm.WorkerLimiter)
res.ObjectsMoved += movedObjects
if err != nil {
b.log.Warn(logs.BlobovniczaTreeRebuildingBlobovniczaFailed, zap.String("path", db), zap.Uint64("moved_objects_count", movedObjects), zap.Error(err))
return res, err
}
b.log.Debug(logs.BlobovniczaTreeRebuildingBlobovniczaSuccess, zap.String("path", db), zap.Uint64("moved_objects_count", movedObjects))
res.FilesRemoved++
completedDBCount++
b.metrics.SetRebuildPercent((100 * completedDBCount) / uint32(len(dbs)))
}
b.metrics.SetRebuildPercent(100)
return res, nil
}
func (b *Blobovniczas) getDBsToRebuild(ctx context.Context) ([]string, error) {
dbsToMigrate := make(map[string]struct{})
if err := b.iterateExistingDBPaths(ctx, func(s string) (bool, error) {
dbsToMigrate[s] = struct{}{}
return false, nil
}); err != nil {
return nil, err
}
if err := b.iterateSortedLeaves(ctx, nil, func(s string) (bool, error) {
delete(dbsToMigrate, s)
return false, nil
}); err != nil {
return nil, err
}
result := make([]string, 0, len(dbsToMigrate))
for db := range dbsToMigrate {
result = append(result, db)
}
return result, nil
}
func (b *Blobovniczas) rebuildDB(ctx context.Context, path string, meta common.MetaStorage, limiter common.ConcurrentWorkersLimiter) (uint64, error) {
shDB := b.getBlobovnicza(path)
blz, err := shDB.Open()
if err != nil {
return 0, err
}
shDBClosed := false
defer func() {
if shDBClosed {
return
}
shDB.Close()
}()
migratedObjects, err := b.moveObjects(ctx, blz, shDB.SystemPath(), meta, limiter)
if err != nil {
return migratedObjects, err
}
shDBClosed, err = b.dropDB(ctx, path, shDB)
return migratedObjects, err
}
func (b *Blobovniczas) moveObjects(ctx context.Context, blz *blobovnicza.Blobovnicza, blzPath string, meta common.MetaStorage, limiter common.ConcurrentWorkersLimiter) (uint64, error) {
var result atomic.Uint64
batch := make(map[oid.Address][]byte)
var prm blobovnicza.IteratePrm
prm.DecodeAddresses()
prm.SetHandler(func(ie blobovnicza.IterationElement) error {
batch[ie.Address()] = bytes.Clone(ie.ObjectData())
if len(batch) == b.blzMoveBatchSize {
return errBatchFull
}
return nil
})
for {
_, err := blz.Iterate(ctx, prm)
if err != nil && !errors.Is(err, errBatchFull) {
return result.Load(), err
}
if len(batch) == 0 {
break
}
eg, egCtx := errgroup.WithContext(ctx)
for addr, data := range batch {
addr := addr
data := data
if err := limiter.AcquireWorkSlot(egCtx); err != nil {
_ = eg.Wait()
return result.Load(), err
}
eg.Go(func() error {
defer limiter.ReleaseWorkSlot()
err := b.moveObject(egCtx, blz, blzPath, addr, data, meta)
if err == nil {
result.Add(1)
}
return err
})
}
if err := eg.Wait(); err != nil {
return result.Load(), err
}
batch = make(map[oid.Address][]byte)
}
return result.Load(), nil
}
func (b *Blobovniczas) moveObject(ctx context.Context, source *blobovnicza.Blobovnicza, sourcePath string,
addr oid.Address, data []byte, metaStore common.MetaStorage) error {
startedAt := time.Now()
defer func() {
b.metrics.ObjectMoved(time.Since(startedAt))
}()
it := &moveIterator{
B: b,
ID: nil,
AllFull: true,
Address: addr,
ObjectData: data,
MetaStore: metaStore,
Source: source,
SourceSysPath: sourcePath,
}
if err := b.iterateDeepest(ctx, addr, func(lvlPath string) (bool, error) { return it.tryMoveToLvl(ctx, lvlPath) }); err != nil {
return err
} else if it.ID == nil {
if it.AllFull {
return common.ErrNoSpace
}
return errPutFailed
}
return nil
}
func (b *Blobovniczas) dropDB(ctx context.Context, path string, shDb *sharedDB) (bool, error) {
select {
case <-ctx.Done():
return false, ctx.Err()
case <-time.After(b.waitBeforeDropDB): // to complete requests with old storage ID
}
b.dbCache.EvictAndMarkNonCached(path)
defer b.dbCache.RemoveFromNonCached(path)
b.dbFilesGuard.Lock()
defer b.dbFilesGuard.Unlock()
if err := shDb.CloseAndRemoveFile(); err != nil {
return false, err
}
b.commondbManager.CleanResources(path)
if err := b.dropDirectoryIfEmpty(filepath.Dir(path)); err != nil {
return true, err
}
return true, nil
}
func (b *Blobovniczas) dropDirectoryIfEmpty(path string) error {
if path == "." {
return nil
}
sysPath := filepath.Join(b.rootPath, path)
entries, err := os.ReadDir(sysPath)
if err != nil {
return err
}
if len(entries) > 0 {
return nil
}
if err := os.Remove(sysPath); err != nil {
return err
}
return b.dropDirectoryIfEmpty(filepath.Dir(path))
}
func (b *Blobovniczas) completeIncompletedMove(ctx context.Context, metaStore common.MetaStorage) (uint64, error) {
var count uint64
return count, b.iterateExistingDBPaths(ctx, func(s string) (bool, error) {
shDB := b.getBlobovnicza(s)
blz, err := shDB.Open()
if err != nil {
return true, err
}
defer shDB.Close()
incompletedMoves, err := blz.ListMoveInfo(ctx)
if err != nil {
return true, err
}
for _, move := range incompletedMoves {
if err := b.performMove(ctx, blz, shDB.SystemPath(), move, metaStore); err != nil {
return true, err
}
count++
}
return false, nil
})
}
func (b *Blobovniczas) performMove(ctx context.Context, source *blobovnicza.Blobovnicza, sourcePath string,
move blobovnicza.MoveInfo, metaStore common.MetaStorage) error {
targetDB := b.getBlobovnicza(NewIDFromBytes(move.TargetStorageID).Path())
target, err := targetDB.Open()
if err != nil {
return err
}
defer targetDB.Close()
existsInSource := true
var gPrm blobovnicza.GetPrm
gPrm.SetAddress(move.Address)
gRes, err := source.Get(ctx, gPrm)
if err != nil {
if client.IsErrObjectNotFound(err) {
existsInSource = false
} else {
b.log.Warn(logs.BlobovniczatreeCouldNotCheckExistenceInTargetDB, zap.Error(err))
return err
}
}
if !existsInSource { // object was deleted by Rebuild, need to delete move info
if err = source.DropMoveInfo(ctx, move.Address); err != nil {
b.log.Warn(logs.BlobovniczatreeCouldNotDropMoveInfo, zap.String("path", sourcePath), zap.Error(err))
return err
}
b.deleteProtectedObjects.Delete(move.Address)
return nil
}
existsInTarget, err := target.Exists(ctx, move.Address)
if err != nil {
b.log.Warn(logs.BlobovniczatreeCouldNotCheckExistenceInTargetDB, zap.Error(err))
return err
}
if !existsInTarget {
var putPrm blobovnicza.PutPrm
putPrm.SetAddress(move.Address)
putPrm.SetMarshaledObject(gRes.Object())
_, err = target.Put(ctx, putPrm)
if err != nil {
b.log.Warn(logs.BlobovniczatreeCouldNotPutObjectToTargetDB, zap.String("path", targetDB.SystemPath()), zap.Error(err))
return err
}
}
if err = metaStore.UpdateStorageID(ctx, move.Address, move.TargetStorageID); err != nil {
b.log.Warn(logs.BlobovniczatreeCouldNotUpdateStorageID, zap.Error(err), zap.Stringer("address", move.Address))
if !client.IsErrObjectNotFound(err) {
return err
}
}
var deletePrm blobovnicza.DeletePrm
deletePrm.SetAddress(move.Address)
if _, err = source.Delete(ctx, deletePrm); err != nil {
b.log.Warn(logs.BlobovniczatreeCouldNotDeleteFromSource, zap.String("path", sourcePath), zap.Error(err))
return err
}
if err = source.DropMoveInfo(ctx, move.Address); err != nil {
b.log.Warn(logs.BlobovniczatreeCouldNotDropMoveInfo, zap.String("path", sourcePath), zap.Error(err))
return err
}
b.deleteProtectedObjects.Delete(move.Address)
return nil
}
type moveIterator struct {
B *Blobovniczas
ID *ID
AllFull bool
Address oid.Address
ObjectData []byte
MetaStore common.MetaStorage
Source *blobovnicza.Blobovnicza
SourceSysPath string
}
func (i *moveIterator) tryMoveToLvl(ctx context.Context, lvlPath string) (bool, error) {
target, err := i.B.activeDBManager.GetOpenedActiveDBForLevel(lvlPath)
if err != nil {
if !isLogical(err) {
i.B.reportError(logs.BlobovniczatreeCouldNotGetActiveBlobovnicza, err)
} else {
i.B.log.Warn(logs.BlobovniczatreeCouldNotGetActiveBlobovnicza, zap.Error(err))
}
return false, nil
}
if target == nil {
i.B.log.Warn(logs.BlobovniczatreeBlobovniczaOverflowed, zap.String("level", lvlPath))
return false, nil
}
defer target.Close()
i.AllFull = false
targetIDx := u64FromHexString(filepath.Base(target.SystemPath()))
targetStorageID := NewIDFromBytes([]byte(filepath.Join(lvlPath, u64ToHexString(targetIDx))))
if err = i.Source.PutMoveInfo(ctx, blobovnicza.MoveInfo{
Address: i.Address,
TargetStorageID: targetStorageID.Bytes(),
}); err != nil {
if !isLogical(err) {
i.B.reportError(logs.BlobovniczatreeCouldNotPutMoveInfoToSourceBlobovnicza, err)
} else {
i.B.log.Warn(logs.BlobovniczatreeCouldNotPutMoveInfoToSourceBlobovnicza, zap.String("path", i.SourceSysPath), zap.Error(err))
}
return true, nil
}
i.B.deleteProtectedObjects.Add(i.Address)
var putPrm blobovnicza.PutPrm
putPrm.SetAddress(i.Address)
putPrm.SetMarshaledObject(i.ObjectData)
_, err = target.Blobovnicza().Put(ctx, putPrm)
if err != nil {
if !isLogical(err) {
i.B.reportError(logs.BlobovniczatreeCouldNotPutObjectToActiveBlobovnicza, err)
} else {
i.B.log.Warn(logs.BlobovniczatreeCouldNotPutObjectToActiveBlobovnicza, zap.String("path", target.SystemPath()), zap.Error(err))
}
return true, nil
}
if err = i.MetaStore.UpdateStorageID(ctx, i.Address, targetStorageID.Bytes()); err != nil {
i.B.log.Warn(logs.BlobovniczatreeCouldNotUpdateStorageID, zap.Error(err), zap.Stringer("address", i.Address))
return true, nil
}
var deletePrm blobovnicza.DeletePrm
deletePrm.SetAddress(i.Address)
if _, err = i.Source.Delete(ctx, deletePrm); err != nil {
if !isLogical(err) {
i.B.reportError(logs.BlobovniczatreeCouldNotDeleteFromSource, err)
} else {
i.B.log.Warn(logs.BlobovniczatreeCouldNotDeleteFromSource, zap.String("path", i.SourceSysPath), zap.Error(err))
}
return true, nil
}
if err = i.Source.DropMoveInfo(ctx, i.Address); err != nil {
if !isLogical(err) {
i.B.reportError(logs.BlobovniczatreeCouldNotDropMoveInfo, err)
} else {
i.B.log.Warn(logs.BlobovniczatreeCouldNotDropMoveInfo, zap.String("path", i.SourceSysPath), zap.Error(err))
}
return true, nil
}
i.B.deleteProtectedObjects.Delete(i.Address)
i.ID = targetStorageID
return true, nil
}
type addressMap struct {
data map[oid.Address]struct{}
guard *sync.RWMutex
}
func newAddressMap() *addressMap {
return &addressMap{
data: make(map[oid.Address]struct{}),
guard: &sync.RWMutex{},
}
}
func (m *addressMap) Add(address oid.Address) {
m.guard.Lock()
defer m.guard.Unlock()
m.data[address] = struct{}{}
}
func (m *addressMap) Delete(address oid.Address) {
m.guard.Lock()
defer m.guard.Unlock()
delete(m.data, address)
}
func (m *addressMap) Contains(address oid.Address) bool {
m.guard.RLock()
defer m.guard.RUnlock()
_, contains := m.data[address]
return contains
}

View file

@ -0,0 +1,195 @@
package blobovniczatree
import (
"bytes"
"context"
"path/filepath"
"sync"
"testing"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/core/object"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobovnicza"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/internal/blobstortest"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/logger/test"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client"
objectSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
"github.com/stretchr/testify/require"
)
func TestRebuildFailover(t *testing.T) {
t.Parallel()
t.Run("only move info saved", testRebuildFailoverOnlyMoveInfoSaved)
t.Run("object saved to target", testRebuildFailoverObjectSavedToTarget)
t.Run("object deleted from source", testRebuildFailoverObjectDeletedFromSource)
}
func testRebuildFailoverOnlyMoveInfoSaved(t *testing.T) {
t.Parallel()
dir := t.TempDir()
blz := blobovnicza.New(blobovnicza.WithPath(filepath.Join(dir, "0", "0", "1.db")))
require.NoError(t, blz.Open())
require.NoError(t, blz.Init())
obj := blobstortest.NewObject(1024)
data, err := obj.Marshal()
require.NoError(t, err)
var pPrm blobovnicza.PutPrm
pPrm.SetAddress(object.AddressOf(obj))
pPrm.SetMarshaledObject(data)
_, err = blz.Put(context.Background(), pPrm)
require.NoError(t, err)
require.NoError(t, blz.PutMoveInfo(context.Background(), blobovnicza.MoveInfo{
Address: object.AddressOf(obj),
TargetStorageID: []byte("0/0/0"),
}))
require.NoError(t, blz.Close())
testRebuildFailoverValidate(t, dir, obj, true)
}
func testRebuildFailoverObjectSavedToTarget(t *testing.T) {
t.Parallel()
dir := t.TempDir()
blz := blobovnicza.New(blobovnicza.WithPath(filepath.Join(dir, "0", "0", "1.db")))
require.NoError(t, blz.Open())
require.NoError(t, blz.Init())
obj := blobstortest.NewObject(1024)
data, err := obj.Marshal()
require.NoError(t, err)
var pPrm blobovnicza.PutPrm
pPrm.SetAddress(object.AddressOf(obj))
pPrm.SetMarshaledObject(data)
_, err = blz.Put(context.Background(), pPrm)
require.NoError(t, err)
require.NoError(t, blz.PutMoveInfo(context.Background(), blobovnicza.MoveInfo{
Address: object.AddressOf(obj),
TargetStorageID: []byte("0/0/0"),
}))
require.NoError(t, blz.Close())
blz = blobovnicza.New(blobovnicza.WithPath(filepath.Join(dir, "0", "0", "0.db")))
require.NoError(t, blz.Open())
require.NoError(t, blz.Init())
_, err = blz.Put(context.Background(), pPrm)
require.NoError(t, err)
require.NoError(t, blz.Close())
testRebuildFailoverValidate(t, dir, obj, true)
}
func testRebuildFailoverObjectDeletedFromSource(t *testing.T) {
t.Parallel()
dir := t.TempDir()
blz := blobovnicza.New(blobovnicza.WithPath(filepath.Join(dir, "0", "0", "1.db")))
require.NoError(t, blz.Open())
require.NoError(t, blz.Init())
obj := blobstortest.NewObject(1024)
data, err := obj.Marshal()
require.NoError(t, err)
require.NoError(t, blz.PutMoveInfo(context.Background(), blobovnicza.MoveInfo{
Address: object.AddressOf(obj),
TargetStorageID: []byte("0/0/0"),
}))
require.NoError(t, blz.Close())
blz = blobovnicza.New(blobovnicza.WithPath(filepath.Join(dir, "0", "0", "0.db")))
require.NoError(t, blz.Open())
require.NoError(t, blz.Init())
var pPrm blobovnicza.PutPrm
pPrm.SetAddress(object.AddressOf(obj))
pPrm.SetMarshaledObject(data)
_, err = blz.Put(context.Background(), pPrm)
require.NoError(t, err)
require.NoError(t, blz.Close())
testRebuildFailoverValidate(t, dir, obj, false)
}
func testRebuildFailoverValidate(t *testing.T, dir string, obj *objectSDK.Object, mustUpdateStorageID bool) {
b := NewBlobovniczaTree(
WithLogger(test.NewLogger(t, true)),
WithObjectSizeLimit(2048),
WithBlobovniczaShallowWidth(2),
WithBlobovniczaShallowDepth(2),
WithRootPath(dir),
WithBlobovniczaSize(100*1024*1024),
WithWaitBeforeDropDB(0),
WithOpenedCacheSize(1000))
require.NoError(t, b.Open(false))
require.NoError(t, b.Init())
var dPrm common.DeletePrm
dPrm.Address = object.AddressOf(obj)
dPrm.StorageID = []byte("0/0/1")
_, err := b.Delete(context.Background(), dPrm)
require.ErrorIs(t, err, errObjectIsDeleteProtected)
metaStub := &storageIDUpdateStub{
storageIDs: make(map[oid.Address][]byte),
guard: &sync.Mutex{},
}
rRes, err := b.Rebuild(context.Background(), common.RebuildPrm{
MetaStorage: metaStub,
WorkerLimiter: &rebuildLimiterStub{},
})
require.NoError(t, err)
require.Equal(t, uint64(1), rRes.ObjectsMoved)
require.Equal(t, uint64(0), rRes.FilesRemoved)
require.NoError(t, b.Close())
blz := blobovnicza.New(blobovnicza.WithPath(filepath.Join(dir, "0", "0", "1.db")))
require.NoError(t, blz.Open())
require.NoError(t, blz.Init())
moveInfo, err := blz.ListMoveInfo(context.Background())
require.NoError(t, err)
require.Equal(t, 0, len(moveInfo))
var gPrm blobovnicza.GetPrm
gPrm.SetAddress(object.AddressOf(obj))
_, err = blz.Get(context.Background(), gPrm)
require.True(t, client.IsErrObjectNotFound(err))
require.NoError(t, blz.Close())
blz = blobovnicza.New(blobovnicza.WithPath(filepath.Join(dir, "0", "0", "0.db")))
require.NoError(t, blz.Open())
require.NoError(t, blz.Init())
moveInfo, err = blz.ListMoveInfo(context.Background())
require.NoError(t, err)
require.Equal(t, 0, len(moveInfo))
gRes, err := blz.Get(context.Background(), gPrm)
require.NoError(t, err)
require.True(t, len(gRes.Object()) > 0)
if mustUpdateStorageID {
require.True(t, bytes.Equal([]byte("0/0/0"), metaStub.storageIDs[object.AddressOf(obj)]))
}
require.NoError(t, blz.Close())
}

View file

@ -0,0 +1,145 @@
package blobovniczatree
import (
"context"
"sync"
"testing"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/core/object"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/internal/blobstortest"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/logger/test"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
"github.com/stretchr/testify/require"
"golang.org/x/sync/errgroup"
)
func TestBlobovniczaTreeRebuild(t *testing.T) {
t.Parallel()
t.Run("width increased", func(t *testing.T) {
t.Parallel()
testBlobovniczaTreeRebuildHelper(t, 2, 2, 2, 3, false)
})
t.Run("width reduced", func(t *testing.T) {
t.Parallel()
testBlobovniczaTreeRebuildHelper(t, 2, 2, 2, 1, true)
})
t.Run("depth increased", func(t *testing.T) {
t.Parallel()
testBlobovniczaTreeRebuildHelper(t, 1, 2, 2, 2, true)
})
t.Run("depth reduced", func(t *testing.T) {
t.Parallel()
testBlobovniczaTreeRebuildHelper(t, 2, 2, 1, 2, true)
})
}
func testBlobovniczaTreeRebuildHelper(t *testing.T, sourceDepth, sourceWidth, targetDepth, targetWidth uint64, shouldMigrate bool) {
dir := t.TempDir()
b := NewBlobovniczaTree(
WithLogger(test.NewLogger(t, true)),
WithObjectSizeLimit(2048),
WithBlobovniczaShallowWidth(sourceWidth),
WithBlobovniczaShallowDepth(sourceDepth),
WithRootPath(dir),
WithBlobovniczaSize(100*1024*1024),
WithWaitBeforeDropDB(0),
WithOpenedCacheSize(1000),
WithMoveBatchSize(3))
require.NoError(t, b.Open(false))
require.NoError(t, b.Init())
eg, egCtx := errgroup.WithContext(context.Background())
storageIDs := make(map[oid.Address][]byte)
storageIDsGuard := &sync.Mutex{}
for i := 0; i < 1000; i++ {
eg.Go(func() error {
obj := blobstortest.NewObject(1024)
data, err := obj.Marshal()
if err != nil {
return err
}
var prm common.PutPrm
prm.Address = object.AddressOf(obj)
prm.RawData = data
res, err := b.Put(egCtx, prm)
if err != nil {
return err
}
storageIDsGuard.Lock()
storageIDs[prm.Address] = res.StorageID
storageIDsGuard.Unlock()
return nil
})
}
require.NoError(t, eg.Wait())
require.NoError(t, b.Close())
b = NewBlobovniczaTree(
WithLogger(test.NewLogger(t, true)),
WithObjectSizeLimit(2048),
WithBlobovniczaShallowWidth(targetWidth),
WithBlobovniczaShallowDepth(targetDepth),
WithRootPath(dir),
WithBlobovniczaSize(100*1024*1024),
WithWaitBeforeDropDB(0),
WithOpenedCacheSize(1000),
WithMoveBatchSize(3))
require.NoError(t, b.Open(false))
require.NoError(t, b.Init())
for addr, storageID := range storageIDs {
var gPrm common.GetPrm
gPrm.Address = addr
gPrm.StorageID = storageID
_, err := b.Get(context.Background(), gPrm)
require.NoError(t, err)
}
metaStub := &storageIDUpdateStub{
storageIDs: storageIDs,
guard: &sync.Mutex{},
}
var rPrm common.RebuildPrm
rPrm.MetaStorage = metaStub
rPrm.WorkerLimiter = &rebuildLimiterStub{}
rRes, err := b.Rebuild(context.Background(), rPrm)
require.NoError(t, err)
dataMigrated := rRes.ObjectsMoved > 0 || rRes.FilesRemoved > 0 || metaStub.updatedCount > 0
require.Equal(t, shouldMigrate, dataMigrated)
for addr, storageID := range storageIDs {
var gPrm common.GetPrm
gPrm.Address = addr
gPrm.StorageID = storageID
_, err := b.Get(context.Background(), gPrm)
require.NoError(t, err)
}
require.NoError(t, b.Close())
}
type storageIDUpdateStub struct {
guard *sync.Mutex
storageIDs map[oid.Address][]byte
updatedCount uint64
}
func (s *storageIDUpdateStub) UpdateStorageID(ctx context.Context, addr oid.Address, storageID []byte) error {
s.guard.Lock()
defer s.guard.Unlock()
s.storageIDs[addr] = storageID
s.updatedCount++
return nil
}
type rebuildLimiterStub struct{}
func (s *rebuildLimiterStub) AcquireWorkSlot(context.Context) error { return nil }
func (s *rebuildLimiterStub) ReleaseWorkSlot() {}

View file

@ -0,0 +1,26 @@
package common
import (
"context"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
)
type RebuildRes struct {
ObjectsMoved uint64
FilesRemoved uint64
}
type RebuildPrm struct {
MetaStorage MetaStorage
WorkerLimiter ConcurrentWorkersLimiter
}
type MetaStorage interface {
UpdateStorageID(ctx context.Context, addr oid.Address, storageID []byte) error
}
type ConcurrentWorkersLimiter interface {
AcquireWorkSlot(ctx context.Context) error
ReleaseWorkSlot()
}

View file

@ -30,4 +30,5 @@ type Storage interface {
Put(context.Context, PutPrm) (PutRes, error) Put(context.Context, PutPrm) (PutRes, error)
Delete(context.Context, DeletePrm) (DeleteRes, error) Delete(context.Context, DeletePrm) (DeleteRes, error)
Iterate(context.Context, IteratePrm) (IterateRes, error) Iterate(context.Context, IteratePrm) (IterateRes, error)
Rebuild(context.Context, RebuildPrm) (RebuildRes, error)
} }

View file

@ -570,3 +570,7 @@ func (t *FSTree) SetReportErrorFunc(_ func(string, error)) {
func (t *FSTree) SetParentID(parentID string) { func (t *FSTree) SetParentID(parentID string) {
t.metrics.SetParentID(parentID) t.metrics.SetParentID(parentID)
} }
func (t *FSTree) Rebuild(_ context.Context, _ common.RebuildPrm) (common.RebuildRes, error) {
return common.RebuildRes{}, nil
}

View file

@ -166,3 +166,7 @@ func (s *memstoreImpl) Iterate(_ context.Context, req common.IteratePrm) (common
} }
return common.IterateRes{}, nil return common.IterateRes{}, nil
} }
func (s *memstoreImpl) Rebuild(_ context.Context, _ common.RebuildPrm) (common.RebuildRes, error) {
return common.RebuildRes{}, nil
}

View file

@ -0,0 +1,45 @@
package blobstor
import (
"context"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/common"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
"go.uber.org/zap"
)
type StorageIDUpdate interface {
UpdateStorageID(ctx context.Context, addr oid.Address, storageID []byte) error
}
type ConcurrentWorkersLimiter interface {
AcquireWorkSlot(ctx context.Context) error
ReleaseWorkSlot()
}
func (b *BlobStor) Rebuild(ctx context.Context, upd StorageIDUpdate, limiter ConcurrentWorkersLimiter) error {
var summary common.RebuildRes
var rErr error
for _, storage := range b.storage {
res, err := storage.Storage.Rebuild(ctx, common.RebuildPrm{
MetaStorage: upd,
WorkerLimiter: limiter,
})
summary.FilesRemoved += res.FilesRemoved
summary.ObjectsMoved += res.ObjectsMoved
if err != nil {
b.log.Error(logs.BlobstorRebuildFailedToRebuildStorages,
zap.String("failed_storage_path", storage.Storage.Path()),
zap.String("failed_storage_type", storage.Storage.Type()),
zap.Error(err))
rErr = err
break
}
}
b.log.Info(logs.BlobstorRebuildRebuildStoragesCompleted,
zap.Bool("success", rErr == nil),
zap.Uint64("total_files_removed", summary.FilesRemoved),
zap.Uint64("total_objects_moved", summary.ObjectsMoved))
return rErr
}

View file

@ -229,3 +229,7 @@ func (s *TestStore) Iterate(ctx context.Context, req common.IteratePrm) (common.
} }
func (s *TestStore) SetParentID(string) {} func (s *TestStore) SetParentID(string) {}
func (s *TestStore) Rebuild(_ context.Context, _ common.RebuildPrm) (common.RebuildRes, error) {
return common.RebuildRes{}, nil
}

View file

@ -38,6 +38,7 @@ type StorageEngine struct {
err error err error
} }
evacuateLimiter *evacuationLimiter evacuateLimiter *evacuationLimiter
rebuildLimiter *rebuildLimiter
} }
type shardWrapper struct { type shardWrapper struct {
@ -213,13 +214,15 @@ type cfg struct {
shardPoolSize uint32 shardPoolSize uint32
lowMem bool lowMem bool
rebuildWorkersCount uint32
} }
func defaultCfg() *cfg { func defaultCfg() *cfg {
return &cfg{ return &cfg{
log: &logger.Logger{Logger: zap.L()}, log: &logger.Logger{Logger: zap.L()},
shardPoolSize: 20,
shardPoolSize: 20, rebuildWorkersCount: 100,
} }
} }
@ -238,6 +241,7 @@ func New(opts ...Option) *StorageEngine {
closeCh: make(chan struct{}), closeCh: make(chan struct{}),
setModeCh: make(chan setModeRequest), setModeCh: make(chan setModeRequest),
evacuateLimiter: &evacuationLimiter{}, evacuateLimiter: &evacuationLimiter{},
rebuildLimiter: newRebuildLimiter(c.rebuildWorkersCount),
} }
} }
@ -275,3 +279,10 @@ func WithLowMemoryConsumption(lowMemCons bool) Option {
c.lowMem = lowMemCons c.lowMem = lowMemCons
} }
} }
// WithRebuildWorkersCount returns an option to set the count of concurrent rebuild workers.
func WithRebuildWorkersCount(count uint32) Option {
return func(c *cfg) {
c.rebuildWorkersCount = count
}
}

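A small sketch of passing the new limit when constructing the engine; the value is a placeholder (the default set above is 100), and the fragment compiles only inside the `engine` package:

```go
// Sketch: wiring the rebuild worker limit into the storage engine.
e := New(
	WithLowMemoryConsumption(false),
	WithRebuildWorkersCount(50), // placeholder; defaults to 100
)
_ = e
```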
View file

@ -0,0 +1,26 @@
package engine
import "context"
type rebuildLimiter struct {
semaphore chan struct{}
}
func newRebuildLimiter(workersCount uint32) *rebuildLimiter {
return &rebuildLimiter{
semaphore: make(chan struct{}, workersCount),
}
}
func (l *rebuildLimiter) AcquireWorkSlot(ctx context.Context) error {
select {
case l.semaphore <- struct{}{}:
return nil
case <-ctx.Done():
return ctx.Err()
}
}
func (l *rebuildLimiter) ReleaseWorkSlot() {
<-l.semaphore
}

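The limiter is a plain channel semaphore; a usage sketch showing how a caller bounds concurrent goroutines with it, mirroring the `moveObjects` loop earlier in this change. `items` and `doWork` are placeholders, and the fragment compiles only inside the `engine` package together with `golang.org/x/sync/errgroup`:

```go
// Sketch: bounding concurrent work with the semaphore-style limiter above.
func runBounded(ctx context.Context, items []string, doWork func(context.Context, string) error) error {
	limiter := newRebuildLimiter(4) // at most 4 goroutines in flight
	eg, egCtx := errgroup.WithContext(ctx)
	for _, item := range items {
		item := item
		if err := limiter.AcquireWorkSlot(egCtx); err != nil {
			_ = eg.Wait() // drain already started workers before returning
			return err
		}
		eg.Go(func() error {
			defer limiter.ReleaseWorkSlot()
			return doWork(egCtx, item)
		})
	}
	return eg.Wait()
}
```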
View file

@ -110,6 +110,7 @@ func (e *StorageEngine) createShard(ctx context.Context, opts []shard.Option) (*
shard.WithExpiredLocksCallback(e.processExpiredLocks), shard.WithExpiredLocksCallback(e.processExpiredLocks),
shard.WithDeletedLockCallback(e.processDeletedLocks), shard.WithDeletedLockCallback(e.processDeletedLocks),
shard.WithReportErrorFunc(e.reportShardErrorBackground), shard.WithReportErrorFunc(e.reportShardErrorBackground),
shard.WithRebuildWorkerLimiter(e.rebuildLimiter),
)...) )...)
if err := sh.UpdateID(ctx); err != nil { if err := sh.UpdateID(ctx); err != nil {

View file

@ -68,16 +68,10 @@ func (m *writeCacheMetrics) Get(d time.Duration, success bool, st writecache.Sto
func (m *writeCacheMetrics) Delete(d time.Duration, success bool, st writecache.StorageType) { func (m *writeCacheMetrics) Delete(d time.Duration, success bool, st writecache.StorageType) {
m.metrics.AddMethodDuration(m.shardID, "Delete", success, d, st.String()) m.metrics.AddMethodDuration(m.shardID, "Delete", success, d, st.String())
if success {
m.metrics.DecActualCount(m.shardID, st.String())
}
} }
func (m *writeCacheMetrics) Put(d time.Duration, success bool, st writecache.StorageType) { func (m *writeCacheMetrics) Put(d time.Duration, success bool, st writecache.StorageType) {
m.metrics.AddMethodDuration(m.shardID, "Put", success, d, st.String()) m.metrics.AddMethodDuration(m.shardID, "Put", success, d, st.String())
if success {
m.metrics.IncActualCount(m.shardID, st.String())
}
} }
func (m *writeCacheMetrics) SetEstimateSize(db, fstree uint64) { func (m *writeCacheMetrics) SetEstimateSize(db, fstree uint64) {
@ -99,7 +93,6 @@ func (m *writeCacheMetrics) Flush(success bool, st writecache.StorageType) {
} }
func (m *writeCacheMetrics) Evict(st writecache.StorageType) { func (m *writeCacheMetrics) Evict(st writecache.StorageType) {
m.metrics.DecActualCount(m.shardID, st.String())
m.metrics.IncOperationCounter(m.shardID, "Evict", metrics.NullBool{}, st.String()) m.metrics.IncOperationCounter(m.shardID, "Evict", metrics.NullBool{}, st.String())
} }

View file

@ -5,6 +5,7 @@ import (
"errors" "errors"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/internal/metaerr" "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/internal/metaerr"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/util/logicerr"
"git.frostfs.info/TrueCloudLab/frostfs-observability/tracing" "git.frostfs.info/TrueCloudLab/frostfs-observability/tracing"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id" oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
"github.com/nspcc-dev/neo-go/pkg/util/slice" "github.com/nspcc-dev/neo-go/pkg/util/slice"
@ -107,7 +108,7 @@ func (db *DB) UpdateStorageID(prm UpdateStorageIDPrm) (res UpdateStorageIDRes, e
err = db.boltDB.Batch(func(tx *bbolt.Tx) error { err = db.boltDB.Batch(func(tx *bbolt.Tx) error {
exists, err := db.exists(tx, prm.addr, currEpoch) exists, err := db.exists(tx, prm.addr, currEpoch)
if err == nil && exists || errors.Is(err, ErrObjectIsExpired) { if err == nil && exists || errors.As(err, new(logicerr.Logical)) {
err = updateStorageID(tx, prm.addr, prm.id) err = updateStorageID(tx, prm.addr, prm.id)
} }

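The switch from `errors.Is(err, ErrObjectIsExpired)` to `errors.As(err, new(logicerr.Logical))` widens the match from one sentinel value to any error in the chain whose concrete type is `logicerr.Logical`. A self-contained toy illustration of the `errors.As(err, new(T))` idiom; the `expiredError` type is invented for the example.

```
package main

import (
	"errors"
	"fmt"
)

// expiredError stands in for a "logical" error type such as logicerr.Logical.
type expiredError struct{ msg string }

func (e expiredError) Error() string { return e.msg }

func main() {
	err := fmt.Errorf("lookup failed: %w", expiredError{msg: "object is expired"})

	// new(expiredError) is a *expiredError target, so errors.As reports whether
	// any error in the chain has that concrete type, regardless of its value.
	fmt.Println(errors.As(err, new(expiredError))) // true
	// errors.Is compares against one specific value and misses other instances.
	fmt.Println(errors.Is(err, expiredError{msg: "other"})) // false
}
```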

@ -42,6 +42,18 @@ func (m *blobovniczaTreeMetrics) Close() {
m.m.CloseBlobobvnizcaTree(m.shardID, m.path)
}
+ func (m *blobovniczaTreeMetrics) SetRebuildStatus(status string) {
+ m.m.BlobovniczaTreeRebuildStatus(m.shardID, m.path, status)
+ }
+ func (m *blobovniczaTreeMetrics) SetRebuildPercent(value uint32) {
+ m.m.BlobovniczaTreeRebuildPercent(m.shardID, m.path, value)
+ }
+ func (m *blobovniczaTreeMetrics) ObjectMoved(d time.Duration) {
+ m.m.BlobovniczaTreeObjectMoved(m.shardID, m.path, d)
+ }
func (m *blobovniczaTreeMetrics) Delete(d time.Duration, success, withStorageID bool) {
m.m.BlobobvnizcaTreeMethodDuration(m.shardID, m.path, "Delete", d, success, metrics_impl.NullBool{Valid: true, Bool: withStorageID})
}


@ -162,6 +162,9 @@ func (s *Shard) Init(ctx context.Context) error {
s.gc.init(ctx)
+ s.rb = newRebuilder(s.rebuildLimiter)
+ s.rb.Start(ctx, s.blobStor, s.metaBase, s.log)
return nil
}
@ -266,6 +269,9 @@ func (s *Shard) refillTombstoneObject(ctx context.Context, obj *objectSDK.Object
// Close releases all Shard's components.
func (s *Shard) Close() error {
+ if s.rb != nil {
+ s.rb.Stop(s.log)
+ }
components := []interface{ Close() error }{}
if s.pilorama != nil {
@ -310,6 +316,11 @@ func (s *Shard) Reload(ctx context.Context, opts ...Option) error {
unlock := s.lockExclusive()
defer unlock()
+ s.rb.Stop(s.log)
+ defer func() {
+ s.rb.Start(ctx, s.blobStor, s.metaBase, s.log)
+ }()
ok, err := s.metaBase.Reload(c.metaOpts...)
if err != nil {
if errors.Is(err, meta.ErrDegradedMode) {


@ -2,12 +2,11 @@ package shard
import (
"context"
- "errors"
+ "fmt"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/common"
meta "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/metabase"
- "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/writecache"
"git.frostfs.info/TrueCloudLab/frostfs-observability/tracing"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
@ -33,8 +32,7 @@ func (p *DeletePrm) SetAddresses(addr ...oid.Address) {
p.addr = append(p.addr, addr...)
}
- // Delete removes data from the shard's writeCache, metaBase and
- // blobStor.
+ // Delete removes data from the shard's metaBase and blobStor.
func (s *Shard) Delete(ctx context.Context, prm DeletePrm) (DeleteRes, error) {
ctx, span := tracing.StartSpanFromContext(ctx, "Shard.Delete",
trace.WithAttributes(
@ -46,10 +44,10 @@ func (s *Shard) Delete(ctx context.Context, prm DeletePrm) (DeleteRes, error) {
s.m.RLock()
defer s.m.RUnlock()
- return s.delete(ctx, prm)
+ return s.delete(ctx, prm, false)
}
- func (s *Shard) delete(ctx context.Context, prm DeletePrm) (DeleteRes, error) {
+ func (s *Shard) delete(ctx context.Context, prm DeletePrm, skipFailed bool) (DeleteRes, error) {
if s.info.Mode.ReadOnly() {
return DeleteRes{}, ErrReadOnlyMode
} else if s.info.Mode.NoMetabase() {
@ -64,12 +62,25 @@ func (s *Shard) delete(ctx context.Context, prm DeletePrm) (DeleteRes, error) {
default:
}
- s.deleteObjectFromWriteCacheSafe(ctx, addr)
- s.deleteFromBlobstorSafe(ctx, addr)
+ if err := s.validateWritecacheDoesntContainObject(ctx, addr); err != nil {
+ if skipFailed {
+ continue
+ }
+ return result, err
+ }
+ if err := s.deleteFromBlobstor(ctx, addr); err != nil {
+ if skipFailed {
+ continue
+ }
+ return result, err
+ }
if err := s.deleteFromMetabase(ctx, addr); err != nil {
- return result, err // stop on metabase error ?
+ if skipFailed {
+ continue
+ }
+ return result, err
}
result.deleted++
}
@ -77,16 +88,22 @@ func (s *Shard) delete(ctx context.Context, prm DeletePrm) (DeleteRes, error) {
return result, nil
}
- func (s *Shard) deleteObjectFromWriteCacheSafe(ctx context.Context, addr oid.Address) {
- if s.hasWriteCache() {
- err := s.writeCache.Delete(ctx, addr)
- if err != nil && !client.IsErrObjectNotFound(err) && !errors.Is(err, writecache.ErrReadOnly) {
- s.log.Warn(logs.ShardCantDeleteObjectFromWriteCache, zap.Error(err))
- }
- }
+ func (s *Shard) validateWritecacheDoesntContainObject(ctx context.Context, addr oid.Address) error {
+ if !s.hasWriteCache() {
+ return nil
+ }
+ _, err := s.writeCache.Head(ctx, addr)
+ if err == nil {
+ s.log.Warn(logs.ObjectRemovalFailureExistsInWritecache, zap.Stringer("object_address", addr))
+ return fmt.Errorf("object %s must be flushed from writecache", addr)
+ }
+ if client.IsErrObjectNotFound(err) {
+ return nil
+ }
+ return err
}
- func (s *Shard) deleteFromBlobstorSafe(ctx context.Context, addr oid.Address) {
+ func (s *Shard) deleteFromBlobstor(ctx context.Context, addr oid.Address) error {
var sPrm meta.StorageIDPrm
sPrm.SetAddress(addr)
@ -95,6 +112,7 @@ func (s *Shard) deleteFromBlobstorSafe(ctx context.Context, addr oid.Address) {
s.log.Debug(logs.StorageIDRetrievalFailure,
zap.Stringer("object", addr),
zap.String("error", err.Error()))
+ return err
}
storageID := res.StorageID()
@ -103,11 +121,13 @@ func (s *Shard) deleteFromBlobstorSafe(ctx context.Context, addr oid.Address) {
delPrm.StorageID = storageID
_, err = s.blobStor.Delete(ctx, delPrm)
- if err != nil {
+ if err != nil && !client.IsErrObjectNotFound(err) {
s.log.Debug(logs.ObjectRemovalFailureBlobStor,
zap.Stringer("object_address", addr),
zap.String("error", err.Error()))
+ return err
}
+ return nil
}
func (s *Shard) deleteFromMetabase(ctx context.Context, addr oid.Address) error { func (s *Shard) deleteFromMetabase(ctx context.Context, addr oid.Address) error {


@ -52,13 +52,18 @@ func testShardDelete(t *testing.T, hasWriteCache bool) {
_, err = testGet(t, sh, getPrm, hasWriteCache)
require.NoError(t, err)
- _, err = sh.Delete(context.TODO(), delPrm)
- require.NoError(t, err)
+ if hasWriteCache {
+ require.Eventually(t, func() bool {
+ _, err = sh.Delete(context.Background(), delPrm)
+ return err == nil
+ }, 30*time.Second, 100*time.Millisecond)
+ } else {
+ _, err = sh.Delete(context.Background(), delPrm)
+ require.NoError(t, err)
+ }
- require.Eventually(t, func() bool {
- _, err = sh.Get(context.Background(), getPrm)
- return client.IsErrObjectNotFound(err)
- }, time.Second, 50*time.Millisecond)
+ _, err = sh.Get(context.Background(), getPrm)
+ require.True(t, client.IsErrObjectNotFound(err))
})
t.Run("small object", func(t *testing.T) {
@ -78,12 +83,17 @@ func testShardDelete(t *testing.T, hasWriteCache bool) {
_, err = sh.Get(context.Background(), getPrm)
require.NoError(t, err)
- _, err = sh.Delete(context.Background(), delPrm)
- require.NoError(t, err)
+ if hasWriteCache {
+ require.Eventually(t, func() bool {
+ _, err = sh.Delete(context.Background(), delPrm)
+ return err == nil
+ }, 10*time.Second, 100*time.Millisecond)
+ } else {
+ _, err = sh.Delete(context.Background(), delPrm)
+ require.NoError(t, err)
+ }
- require.Eventually(t, func() bool {
- _, err = sh.Get(context.Background(), getPrm)
- return client.IsErrObjectNotFound(err)
- }, time.Second, 50*time.Millisecond)
+ _, err = sh.Get(context.Background(), getPrm)
+ require.True(t, client.IsErrObjectNotFound(err))
})
}


@ -297,7 +297,7 @@ func (s *Shard) removeGarbage(pctx context.Context) (result gcRunResult) {
deletePrm.SetAddresses(buf...)
// delete accumulated objects
- res, err := s.delete(ctx, deletePrm)
+ res, err := s.delete(ctx, deletePrm, true)
result.deleted = res.deleted
result.failedToDelete = uint64(len(buf)) - res.deleted


@ -0,0 +1,13 @@
package shard
import "context"
type RebuildWorkerLimiter interface {
AcquireWorkSlot(ctx context.Context) error
ReleaseWorkSlot()
}
type noopRebuildLimiter struct{}
func (l *noopRebuildLimiter) AcquireWorkSlot(context.Context) error { return nil }
func (l *noopRebuildLimiter) ReleaseWorkSlot() {}
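Because the limiter is hidden behind this two-method interface, other policies can be dropped in without touching the shard. A hedged sketch of an alternative implementation backed by `golang.org/x/sync/semaphore`; it is not part of this change set and is purely illustrative.

```
package shard

import (
	"context"

	"golang.org/x/sync/semaphore"
)

// weightedRebuildLimiter is a hypothetical RebuildWorkerLimiter built on a
// weighted semaphore instead of a buffered channel.
type weightedRebuildLimiter struct {
	sem *semaphore.Weighted
}

func newWeightedRebuildLimiter(workersCount int64) *weightedRebuildLimiter {
	return &weightedRebuildLimiter{sem: semaphore.NewWeighted(workersCount)}
}

// AcquireWorkSlot blocks until a slot is free or the context is cancelled.
func (l *weightedRebuildLimiter) AcquireWorkSlot(ctx context.Context) error {
	return l.sem.Acquire(ctx, 1)
}

func (l *weightedRebuildLimiter) ReleaseWorkSlot() {
	l.sem.Release(1)
}
```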


@ -0,0 +1,98 @@
package shard
import (
"context"
"errors"
"sync"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor"
meta "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/metabase"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/logger"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
"go.uber.org/zap"
)
type rebuilder struct {
mtx *sync.Mutex
wg *sync.WaitGroup
cancel func()
limiter RebuildWorkerLimiter
}
func newRebuilder(l RebuildWorkerLimiter) *rebuilder {
return &rebuilder{
mtx: &sync.Mutex{},
wg: &sync.WaitGroup{},
cancel: nil,
limiter: l,
}
}
func (r *rebuilder) Start(ctx context.Context, bs *blobstor.BlobStor, mb *meta.DB, log *logger.Logger) {
r.mtx.Lock()
defer r.mtx.Unlock()
r.start(ctx, bs, mb, log)
}
func (r *rebuilder) start(ctx context.Context, bs *blobstor.BlobStor, mb *meta.DB, log *logger.Logger) {
if r.cancel != nil {
r.stop(log)
}
ctx, cancel := context.WithCancel(ctx)
r.cancel = cancel
r.wg.Add(1)
go func() {
defer r.wg.Done()
log.Info(logs.BlobstoreRebuildStarted)
if err := bs.Rebuild(ctx, &mbStorageIDUpdate{mb: mb}, r.limiter); err != nil {
log.Warn(logs.FailedToRebuildBlobstore, zap.Error(err))
} else {
log.Info(logs.BlobstoreRebuildCompletedSuccessfully)
}
}()
}
func (r *rebuilder) Stop(log *logger.Logger) {
r.mtx.Lock()
defer r.mtx.Unlock()
r.stop(log)
}
func (r *rebuilder) stop(log *logger.Logger) {
if r.cancel == nil {
return
}
r.cancel()
r.wg.Wait()
r.cancel = nil
log.Info(logs.BlobstoreRebuildStopped)
}
var errMBIsNotAvailable = errors.New("metabase is not available")
type mbStorageIDUpdate struct {
mb *meta.DB
}
func (u *mbStorageIDUpdate) UpdateStorageID(ctx context.Context, addr oid.Address, storageID []byte) error {
select {
case <-ctx.Done():
return ctx.Err()
default:
}
if u.mb == nil {
return errMBIsNotAvailable
}
var prm meta.UpdateStorageIDPrm
prm.SetAddress(addr)
prm.SetStorageID(storageID)
_, err := u.mb.UpdateStorageID(prm)
return err
}


@ -38,6 +38,8 @@ type Shard struct {
tsSource TombstoneSource
+ rb *rebuilder
gcCancel atomic.Value
setModeRequested atomic.Bool
}
@ -121,6 +123,8 @@ type cfg struct {
metricsWriter MetricsWriter
reportErrorFunc func(selfID string, message string, err error)
+ rebuildLimiter RebuildWorkerLimiter
}
func defaultCfg() *cfg {
@ -129,6 +133,7 @@ func defaultCfg() *cfg {
log: &logger.Logger{Logger: zap.L()},
gcCfg: defaultGCCfg(),
reportErrorFunc: func(string, string, error) {},
+ rebuildLimiter: &noopRebuildLimiter{},
}
}
@ -366,6 +371,14 @@ func WithExpiredCollectorWorkersCount(count int) Option {
}
}
+ // WithRebuildWorkerLimiter return option to set concurrent
+ // workers count of storage rebuild operation.
+ func WithRebuildWorkerLimiter(l RebuildWorkerLimiter) Option {
+ return func(c *cfg) {
+ c.rebuildLimiter = l
+ }
+ }
func (s *Shard) fillInfo() {
s.cfg.info.MetaBaseInfo = s.metaBase.DumpInfo()
s.cfg.info.BlobStorInfo = s.blobStor.DumpInfo()


@ -59,7 +59,7 @@ func (c *cache) Delete(ctx context.Context, addr oid.Address) error {
storagelog.OpField("db DELETE"), storagelog.OpField("db DELETE"),
) )
deleted = true deleted = true
c.objCounters.DecDB() c.decDB()
} }
return err return err
} }


@ -76,7 +76,7 @@ func (c *cache) put(obj objectInfo) error {
storagelog.StorageTypeField(wcStorageType),
storagelog.OpField("db PUT"),
)
- c.objCounters.IncDB()
+ c.incDB()
}
return err
}


@ -55,3 +55,13 @@ func (c *cache) initCounters() error {
return nil
}
+ func (c *cache) incDB() {
+ c.objCounters.IncDB()
+ c.metrics.SetActualCounters(c.objCounters.DB(), 0)
+ }
+ func (c *cache) decDB() {
+ c.objCounters.DecDB()
+ c.metrics.SetActualCounters(c.objCounters.DB(), 0)
+ }


@ -73,7 +73,7 @@ func (c *cache) deleteFromDB(keys []internalKey) []internalKey {
}
for i := 0; i < errorIndex; i++ {
- c.objCounters.DecDB()
+ c.decDB()
c.metrics.Evict(writecache.StorageTypeDB)
storagelog.Write(c.log,
storagelog.AddressField(keys[i]),


@ -83,6 +83,7 @@ func (c *cache) Delete(ctx context.Context, addr oid.Address) error {
storagelog.OpField("fstree DELETE"), storagelog.OpField("fstree DELETE"),
) )
deleted = true deleted = true
// counter changed by fstree
c.estimateCacheSize() c.estimateCacheSize()
} }
return metaerr.Wrap(err) return metaerr.Wrap(err)


@ -70,6 +70,7 @@ func (c *cache) runFlushLoop() {
case <-tt.C:
c.flushSmallObjects()
tt.Reset(defaultFlushInterval)
+ c.estimateCacheSize()
case <-c.closeCh:
return
}


@ -131,6 +131,7 @@ func (c *cache) putBig(ctx context.Context, addr string, prm common.PutPrm) erro
storagelog.StorageTypeField(wcStorageType),
storagelog.OpField("fstree PUT"),
)
+ // counter changed by fstree
c.estimateCacheSize()
return nil


@ -72,5 +72,6 @@ func (c *cache) initCounters() error {
return fmt.Errorf("could not read write-cache DB counter: %w", err)
}
c.objCounters.cDB.Store(inDB)
+ c.estimateCacheSize()
return nil
}


@ -73,7 +73,7 @@ func (c *cache) deleteFromDB(key string) {
err := c.db.Batch(func(tx *bbolt.Tx) error {
b := tx.Bucket(defaultBucket)
key := []byte(key)
- recordDeleted = !recordDeleted && b.Get(key) != nil
+ recordDeleted = b.Get(key) != nil
return b.Delete(key)
})
@ -122,6 +122,7 @@ func (c *cache) deleteFromDisk(ctx context.Context, keys []string) []string {
storagelog.OpField("fstree DELETE"),
)
c.metrics.Evict(writecache.StorageTypeFSTree)
+ // counter changed by fstree
c.estimateCacheSize()
}
}


@ -23,16 +23,23 @@ type BlobobvnizcaMetrics interface {
IncOpenBlobovniczaCount(shardID, path string)
DecOpenBlobovniczaCount(shardID, path string)
+ BlobovniczaTreeRebuildStatus(shardID, path, status string)
+ BlobovniczaTreeRebuildPercent(shardID, path string, value uint32)
+ BlobovniczaTreeObjectMoved(shardID, path string, d time.Duration)
}
type blobovnicza struct {
treeMode *shardIDPathModeValue
treeReqDuration *prometheus.HistogramVec
treePut *prometheus.CounterVec
treeGet *prometheus.CounterVec
treeOpenSize *prometheus.GaugeVec
treeOpenItems *prometheus.GaugeVec
treeOpenCounter *prometheus.GaugeVec
+ treeObjectMoveDuration *prometheus.HistogramVec
+ treeRebuildStatus *shardIDPathModeValue
+ treeRebuildPercent *prometheus.GaugeVec
}
func newBlobovnicza() *blobovnicza {
@ -75,6 +82,19 @@ func newBlobovnicza() *blobovnicza {
Name: "open_blobovnicza_count",
Help: "Count of opened blobovniczas of Blobovnicza tree",
}, []string{shardIDLabel, pathLabel}),
+ treeObjectMoveDuration: metrics.NewHistogramVec(prometheus.HistogramOpts{
+ Namespace: namespace,
+ Subsystem: blobovniczaTreeSubSystem,
+ Name: "object_move_duration_seconds",
+ Help: "Accumulated Blobovnicza tree object move duration",
+ }, []string{shardIDLabel, pathLabel}),
+ treeRebuildStatus: newShardIDPathMode(blobovniczaTreeSubSystem, "rebuild_status", "Blobovnicza tree rebuild status"),
+ treeRebuildPercent: metrics.NewGaugeVec(prometheus.GaugeOpts{
+ Namespace: namespace,
+ Subsystem: blobovniczaTreeSubSystem,
+ Name: "rebuild_complete_percent",
+ Help: "Percent of rebuild completeness",
+ }, []string{shardIDLabel, pathLabel}),
}
}
@ -96,6 +116,15 @@ func (b *blobovnicza) CloseBlobobvnizcaTree(shardID, path string) {
shardIDLabel: shardID,
pathLabel: path,
})
+ b.treeObjectMoveDuration.DeletePartialMatch(prometheus.Labels{
+ shardIDLabel: shardID,
+ pathLabel: path,
+ })
+ b.treeRebuildPercent.DeletePartialMatch(prometheus.Labels{
+ shardIDLabel: shardID,
+ pathLabel: path,
+ })
+ b.treeRebuildStatus.SetMode(shardID, path, undefinedStatus)
}
func (b *blobovnicza) BlobobvnizcaTreeMethodDuration(shardID, path string, method string, d time.Duration, success bool, withStorageID NullBool) {
@ -163,3 +192,21 @@ func (b *blobovnicza) SubOpenBlobovniczaItems(shardID, path string, items uint64
pathLabel: path,
}).Sub(float64(items))
}
+ func (b *blobovnicza) BlobovniczaTreeRebuildStatus(shardID, path, status string) {
+ b.treeRebuildStatus.SetMode(shardID, path, status)
+ }
+ func (b *blobovnicza) BlobovniczaTreeObjectMoved(shardID, path string, d time.Duration) {
+ b.treeObjectMoveDuration.With(prometheus.Labels{
+ shardIDLabel: shardID,
+ pathLabel: path,
+ }).Observe(d.Seconds())
+ }
+ func (b *blobovnicza) BlobovniczaTreeRebuildPercent(shardID, path string, value uint32) {
+ b.treeRebuildPercent.With(prometheus.Labels{
+ shardIDLabel: shardID,
+ pathLabel: path,
+ }).Set(float64(value))
+ }


@ -44,4 +44,5 @@ const (
failedToDeleteStatus = "failed_to_delete"
deletedStatus = "deleted"
+ undefinedStatus = "undefined"
)


@ -10,16 +10,10 @@ import (
type WriteCacheMetrics interface {
AddMethodDuration(shardID string, method string, success bool, d time.Duration, storageType string)
- IncActualCount(shardID string, storageType string)
- DecActualCount(shardID string, storageType string)
SetActualCount(shardID string, count uint64, storageType string)
SetEstimateSize(shardID string, size uint64, storageType string)
SetMode(shardID string, mode string)
IncOperationCounter(shardID string, operation string, success NullBool, storageType string)
Close(shardID string)
}
@ -65,20 +59,6 @@ func (m *writeCacheMetrics) AddMethodDuration(shardID string, method string, suc
).Observe(d.Seconds())
}
- func (m *writeCacheMetrics) IncActualCount(shardID string, storageType string) {
- m.actualCount.With(prometheus.Labels{
- shardIDLabel: shardID,
- storageLabel: storageType,
- }).Inc()
- }
- func (m *writeCacheMetrics) DecActualCount(shardID string, storageType string) {
- m.actualCount.With(prometheus.Labels{
- shardIDLabel: shardID,
- storageLabel: storageType,
- }).Dec()
- }
func (m *writeCacheMetrics) SetActualCount(shardID string, count uint64, storageType string) {
m.actualCount.With(prometheus.Labels{
shardIDLabel: shardID,


@ -67,7 +67,7 @@ func (d *DeletePrm) SetKey(key []byte) {
//
// If TryNotary is provided, calls notary contract.
func (c *Client) Delete(p DeletePrm) error {
- if len(p.signature) == 0 {
+ if len(p.signature) == 0 && !p.IsControl() {
return errNilArgument
}


@ -67,6 +67,9 @@ const (
var errUnexpectedItems = errors.New("invalid number of NEO VM arguments on stack")
+ // This is a kludge purely for update to work.
+ var CleanInvScript bool
func defaultNotaryConfig(c *Client) *notaryCfg {
return &notaryCfg{
txValidTime: defaultNotaryValidTime,
@ -418,6 +421,20 @@ func (c *Client) NotarySignAndInvokeTX(mainTx *transaction.Transaction) error {
return err
}
+ if CleanInvScript {
+ // This is necessary to suppress this check on neo-go side:
+ // https://github.com/nspcc-dev/neo-go/blob/8ed6d97085d3229d4faf56a47bbd6cf73c132a76/pkg/services/notary/notary.go#L538
+ // This is a kludge purely for update to work.
+ for i := range mainTx.Signers {
+ for _, sig := range cosigners {
+ if mainTx.Signers[i].Account.Equals(sig.Account.ScriptHash()) {
+ mainTx.Scripts[i].InvocationScript = []byte{}
+ break
+ }
+ }
+ }
+ }
// Sign exactly the same transaction we've got from the received Notary request.
err = nAct.Sign(mainTx)
if err != nil {


@ -115,6 +115,11 @@ func (i *InvokePrmOptional) SetControlTX(b bool) {
i.controlTX = b
}
+ // IsControl gets whether a control transaction will be used.
+ func (i *InvokePrmOptional) IsControl() bool {
+ return i.controlTX
+ }
// Invoke calls Invoke method of Client with static internal script hash and fee.
// Supported args types are the same as in Client.
//


@ -1,307 +0,0 @@
package loadcontroller
import (
"context"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/logger"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container"
"go.uber.org/zap"
)
// StartPrm groups the required parameters of the Controller.Start method.
type StartPrm struct {
// Epoch number by which you want to select
// the values of the used space of containers.
Epoch uint64
}
type commonContext struct {
epoch uint64
ctrl *Controller
log *logger.Logger
}
type announcer struct {
commonContext
}
// Start starts the processing of container.SizeEstimation values.
//
// Single Start operation overtakes all data from LocalMetrics to
// LocalAnnouncementTarget (Controller's parameters).
// No filter by epoch is used for the iterator, since it is expected
// that the source of metrics does not track the change of epochs.
//
// Each call acquires an announcement context for an Epoch parameter.
// At the very end of the operation, the context is released.
func (c *Controller) Start(ctx context.Context, prm StartPrm) {
var announcer *announcer
// acquire announcement
ctx, announcer = c.acquireAnnouncement(ctx, prm)
if announcer == nil {
return
}
// finally stop and free the announcement
defer announcer.freeAnnouncement()
// announce local values
announcer.announce(ctx)
}
func (c *announcer) announce(ctx context.Context) {
c.log.Debug(logs.ControllerStartingToAnnounceTheValuesOfTheMetrics)
var (
metricsIterator Iterator
err error
)
// initialize iterator over locally collected metrics
metricsIterator, err = c.ctrl.prm.LocalMetrics.InitIterator()
if err != nil {
c.log.Debug(logs.ControllerCouldNotInitializeIteratorOverLocallyCollectedMetrics,
zap.String("error", err.Error()),
)
return
}
// initialize target of local announcements
targetWriter, err := c.ctrl.prm.LocalAnnouncementTarget.InitWriter(nil)
if err != nil {
c.log.Debug(logs.ControllerCouldNotInitializeAnnouncementAccumulator,
zap.String("error", err.Error()),
)
return
}
// iterate over all collected metrics and write them to the target
err = metricsIterator.Iterate(
func(container.SizeEstimation) bool {
return true // local metrics don't know about epochs
},
func(a container.SizeEstimation) error {
a.SetEpoch(c.epoch) // set epoch explicitly
return targetWriter.Put(a)
},
)
if err != nil {
c.log.Debug(logs.ControllerIteratorOverLocallyCollectedMetricsAborted,
zap.String("error", err.Error()),
)
return
}
// finish writing
err = targetWriter.Close(ctx)
if err != nil {
c.log.Debug(logs.ControllerCouldNotFinishWritingLocalAnnouncements,
zap.String("error", err.Error()),
)
return
}
c.log.Debug(logs.ControllerTrustAnnouncementSuccessfullyFinished)
}
func (c *Controller) acquireAnnouncement(ctx context.Context, prm StartPrm) (context.Context, *announcer) {
started := true
c.announceMtx.Lock()
{
if cancel := c.mAnnounceCtx[prm.Epoch]; cancel == nil {
ctx, cancel = context.WithCancel(ctx)
c.mAnnounceCtx[prm.Epoch] = cancel
started = false
}
}
c.announceMtx.Unlock()
log := &logger.Logger{Logger: c.opts.log.With(
zap.Uint64("epoch", prm.Epoch),
)}
if started {
log.Debug(logs.ControllerAnnouncementIsAlreadyStarted)
return ctx, nil
}
return ctx, &announcer{
commonContext: commonContext{
epoch: prm.Epoch,
ctrl: c,
log: log,
},
}
}
func (c *commonContext) freeAnnouncement() {
var stopped bool
c.ctrl.announceMtx.Lock()
{
var cancel context.CancelFunc
cancel, stopped = c.ctrl.mAnnounceCtx[c.epoch]
if stopped {
cancel()
delete(c.ctrl.mAnnounceCtx, c.epoch)
}
}
c.ctrl.announceMtx.Unlock()
if stopped {
c.log.Debug(logs.ControllerAnnouncementSuccessfullyInterrupted)
} else {
c.log.Debug(logs.ControllerAnnouncementIsNotStartedOrAlreadyInterrupted)
}
}
// StopPrm groups the required parameters of the Controller.Stop method.
type StopPrm struct {
// Epoch number the analysis of the values of which must be interrupted.
Epoch uint64
}
type reporter struct {
commonContext
}
// Stop interrupts the processing of container.SizeEstimation values.
//
// Single Stop operation releases an announcement context and overtakes
// all data from AnnouncementAccumulator to ResultReceiver (Controller's
// parameters). Only values for the specified Epoch parameter are processed.
//
// Each call acquires a report context for an Epoch parameter.
// At the very end of the operation, the context is released.
func (c *Controller) Stop(ctx context.Context, prm StopPrm) {
var reporter *reporter
ctx, reporter = c.acquireReport(ctx, prm)
if reporter == nil {
return
}
// finally stop and free reporting
defer reporter.freeReport()
// interrupt announcement
reporter.freeAnnouncement()
// report the estimations
reporter.report(ctx)
}
func (c *Controller) acquireReport(ctx context.Context, prm StopPrm) (context.Context, *reporter) {
started := true
c.reportMtx.Lock()
{
if cancel := c.mReportCtx[prm.Epoch]; cancel == nil {
ctx, cancel = context.WithCancel(ctx)
c.mReportCtx[prm.Epoch] = cancel
started = false
}
}
c.reportMtx.Unlock()
log := &logger.Logger{Logger: c.opts.log.With(
zap.Uint64("epoch", prm.Epoch),
)}
if started {
log.Debug(logs.ControllerReportIsAlreadyStarted)
return ctx, nil
}
return ctx, &reporter{
commonContext: commonContext{
epoch: prm.Epoch,
ctrl: c,
log: log,
},
}
}
func (c *commonContext) freeReport() {
var stopped bool
c.ctrl.reportMtx.Lock()
{
var cancel context.CancelFunc
cancel, stopped = c.ctrl.mReportCtx[c.epoch]
if stopped {
cancel()
delete(c.ctrl.mReportCtx, c.epoch)
}
}
c.ctrl.reportMtx.Unlock()
if stopped {
c.log.Debug(logs.ControllerAnnouncementSuccessfullyInterrupted)
} else {
c.log.Debug(logs.ControllerAnnouncementIsNotStartedOrAlreadyInterrupted)
}
}
func (c *reporter) report(ctx context.Context) {
var (
localIterator Iterator
err error
)
// initialize iterator over locally accumulated announcements
localIterator, err = c.ctrl.prm.AnnouncementAccumulator.InitIterator()
if err != nil {
c.log.Debug(logs.ControllerCouldNotInitializeIteratorOverLocallyAccumulatedAnnouncements,
zap.String("error", err.Error()),
)
return
}
// initialize final destination of load estimations
resultWriter, err := c.ctrl.prm.ResultReceiver.InitWriter(nil)
if err != nil {
c.log.Debug(logs.ControllerCouldNotInitializeResultTarget,
zap.String("error", err.Error()),
)
return
}
// iterate over all accumulated announcements and write them to the target
err = localIterator.Iterate(
usedSpaceFilterEpochEQ(c.epoch),
resultWriter.Put,
)
if err != nil {
c.log.Debug(logs.ControllerIteratorOverLocalAnnouncementsAborted,
zap.String("error", err.Error()),
)
return
}
// finish writing
err = resultWriter.Close(ctx)
if err != nil {
c.log.Debug(logs.ControllerCouldNotFinishWritingLoadEstimations,
zap.String("error", err.Error()),
)
}
}


@ -1,192 +0,0 @@
package loadcontroller_test
import (
"context"
"math/rand"
"sync"
"testing"
loadcontroller "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/container/announcement/load/controller"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container"
cidtest "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id/test"
"github.com/stretchr/testify/require"
)
type testAnnouncementStorage struct {
w loadcontroller.Writer
i loadcontroller.Iterator
mtx sync.RWMutex
m map[uint64][]container.SizeEstimation
}
func newTestStorage() *testAnnouncementStorage {
return &testAnnouncementStorage{
m: make(map[uint64][]container.SizeEstimation),
}
}
func (s *testAnnouncementStorage) InitIterator() (loadcontroller.Iterator, error) {
if s.i != nil {
return s.i, nil
}
return s, nil
}
func (s *testAnnouncementStorage) Iterate(f loadcontroller.UsedSpaceFilter, h loadcontroller.UsedSpaceHandler) error {
s.mtx.RLock()
defer s.mtx.RUnlock()
for _, v := range s.m {
for _, a := range v {
if f(a) {
if err := h(a); err != nil {
return err
}
}
}
}
return nil
}
func (s *testAnnouncementStorage) InitWriter([]loadcontroller.ServerInfo) (loadcontroller.Writer, error) {
if s.w != nil {
return s.w, nil
}
return s, nil
}
func (s *testAnnouncementStorage) Put(v container.SizeEstimation) error {
s.mtx.Lock()
s.m[v.Epoch()] = append(s.m[v.Epoch()], v)
s.mtx.Unlock()
return nil
}
func (s *testAnnouncementStorage) Close(context.Context) error {
return nil
}
func randAnnouncement() (a container.SizeEstimation) {
a.SetContainer(cidtest.ID())
a.SetValue(rand.Uint64())
return
}
func TestSimpleScenario(t *testing.T) {
// create storage to write final estimations
resultStorage := newTestStorage()
// create storages to accumulate announcements
accumulatingStorageN2 := newTestStorage()
// create storage of local metrics
localStorageN1 := newTestStorage()
localStorageN2 := newTestStorage()
// create 2 controllers: 1st writes announcements to 2nd, 2nd directly to final destination
ctrlN1 := loadcontroller.New(loadcontroller.Prm{
LocalMetrics: localStorageN1,
AnnouncementAccumulator: newTestStorage(),
LocalAnnouncementTarget: &testAnnouncementStorage{
w: accumulatingStorageN2,
},
ResultReceiver: resultStorage,
})
ctrlN2 := loadcontroller.New(loadcontroller.Prm{
LocalMetrics: localStorageN2,
AnnouncementAccumulator: accumulatingStorageN2,
LocalAnnouncementTarget: &testAnnouncementStorage{
w: resultStorage,
},
ResultReceiver: resultStorage,
})
const processEpoch uint64 = 10
const goodNum = 4
// create 2 random values for processing epoch and 1 for some different
announces := make([]container.SizeEstimation, 0, goodNum)
for i := 0; i < goodNum; i++ {
a := randAnnouncement()
a.SetEpoch(processEpoch)
announces = append(announces, a)
}
// store one half of "good" announcements to 1st metrics storage, another - to 2nd
// and "bad" to both
for i := 0; i < goodNum/2; i++ {
require.NoError(t, localStorageN1.Put(announces[i]))
}
for i := goodNum / 2; i < goodNum; i++ {
require.NoError(t, localStorageN2.Put(announces[i]))
}
wg := new(sync.WaitGroup)
wg.Add(2)
startPrm := loadcontroller.StartPrm{
Epoch: processEpoch,
}
// start both controllers
go func() {
ctrlN1.Start(context.Background(), startPrm)
wg.Done()
}()
go func() {
ctrlN2.Start(context.Background(), startPrm)
wg.Done()
}()
wg.Wait()
wg.Add(2)
stopPrm := loadcontroller.StopPrm{
Epoch: processEpoch,
}
// stop both controllers
go func() {
ctrlN1.Stop(context.Background(), stopPrm)
wg.Done()
}()
go func() {
ctrlN2.Stop(context.Background(), stopPrm)
wg.Done()
}()
wg.Wait()
// result target should contain all "good" announcements and shoult not container the "bad" one
var res []container.SizeEstimation
err := resultStorage.Iterate(
func(a container.SizeEstimation) bool {
return true
},
func(a container.SizeEstimation) error {
res = append(res, a)
return nil
},
)
require.NoError(t, err)
for i := range announces {
require.Contains(t, res, announces[i])
}
}


@ -1,94 +0,0 @@
package loadcontroller
import (
"context"
"fmt"
"sync"
)
// Prm groups the required parameters of the Controller's constructor.
//
// All values must comply with the requirements imposed on them.
// Passing incorrect parameter values will result in constructor
// failure (error or panic depending on the implementation).
type Prm struct {
// Iterator over the used space values of the containers
// collected by the node locally.
LocalMetrics IteratorProvider
// Place of recording the local values of
// the used space of containers.
LocalAnnouncementTarget WriterProvider
// Iterator over the summarized used space scores
// from the various network participants.
AnnouncementAccumulator IteratorProvider
// Place of recording the final estimates of
// the used space of containers.
ResultReceiver WriterProvider
}
// Controller represents main handler for starting
// and interrupting container volume estimation.
//
// It binds the interfaces of the local value stores
// to the target storage points. Controller is abstracted
// from the internal storage device and the network location
// of the connecting components. At its core, it is a
// high-level start-stop trigger for calculations.
//
// For correct operation, the controller must be created
// using the constructor (New) based on the required parameters
// and optional components. After successful creation,
// the constructor is immediately ready to work through
// API of external control of calculations and data transfer.
type Controller struct {
prm Prm
opts *options
announceMtx sync.Mutex
mAnnounceCtx map[uint64]context.CancelFunc
reportMtx sync.Mutex
mReportCtx map[uint64]context.CancelFunc
}
const invalidPrmValFmt = "invalid parameter %s (%T):%v"
func panicOnPrmValue(n string, v any) {
panic(fmt.Sprintf(invalidPrmValFmt, n, v, v))
}
// New creates a new instance of the Controller.
//
// Panics if at least one value of the parameters is invalid.
//
// The created Controller does not require additional
// initialization and is completely ready for work.
func New(prm Prm, opts ...Option) *Controller {
switch {
case prm.LocalMetrics == nil:
panicOnPrmValue("LocalMetrics", prm.LocalMetrics)
case prm.AnnouncementAccumulator == nil:
panicOnPrmValue("AnnouncementAccumulator", prm.AnnouncementAccumulator)
case prm.LocalAnnouncementTarget == nil:
panicOnPrmValue("LocalAnnouncementTarget", prm.LocalAnnouncementTarget)
case prm.ResultReceiver == nil:
panicOnPrmValue("ResultReceiver", prm.ResultReceiver)
}
o := defaultOpts()
for _, opt := range opts {
opt(o)
}
return &Controller{
prm: prm,
opts: o,
mAnnounceCtx: make(map[uint64]context.CancelFunc),
mReportCtx: make(map[uint64]context.CancelFunc),
}
}


@ -1,103 +0,0 @@
package loadcontroller
import (
"context"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container"
)
// UsedSpaceHandler describes the signature of the container.SizeEstimation
// value handling function.
//
// Termination of processing without failures is usually signaled
// with a zero error, while a specific value may describe the reason
// for failure.
type UsedSpaceHandler func(container.SizeEstimation) error
// UsedSpaceFilter describes the signature of the function for
// checking whether a value meets a certain criterion.
//
// Return of true means conformity, false - vice versa.
type UsedSpaceFilter func(container.SizeEstimation) bool
// Iterator is a group of methods provided by entity
// which can iterate over a group of container.SizeEstimation values.
type Iterator interface {
// Iterate must start an iterator over values that
// meet the filter criterion (returns true).
// For each such value should call a handler, the error
// of which should be directly returned from the method.
//
// Internal failures of the iterator are also signaled via
// an error. After a successful call to the last value
// handler, nil should be returned.
Iterate(UsedSpaceFilter, UsedSpaceHandler) error
}
// IteratorProvider is a group of methods provided
// by entity which generates iterators over
// container.SizeEstimation values.
type IteratorProvider interface {
// InitIterator should return an initialized Iterator.
//
// Initialization problems are reported via error.
// If no error was returned, then the Iterator must not be nil.
//
// Implementations can have different logic for different
// contexts, so specific ones may document their own behavior.
InitIterator() (Iterator, error)
}
// Writer describes the interface for storing container.SizeEstimation values.
//
// This interface is provided by both local storage
// of values and remote (wrappers over the RPC).
type Writer interface {
// Put performs a write operation of container.SizeEstimation value
// and returns any error encountered.
//
// All values after the Close call must be flushed to the
// physical target. Implementations can cache values before
// Close operation.
//
// Put must not be called after Close.
Put(container.SizeEstimation) error
// Close exits with method-providing Writer.
//
// All cached values must be flushed before
// the Close's return.
//
// Methods must not be called after Close.
Close(ctx context.Context) error
}
// WriterProvider is a group of methods provided
// by entity which generates keepers of
// container.SizeEstimation values.
type WriterProvider interface {
// InitWriter should return an initialized Writer.
//
// Initialization problems are reported via error.
// If no error was returned, then the Writer must not be nil.
InitWriter(route []ServerInfo) (Writer, error)
}
// ServerInfo describes a set of
// characteristics of a point in a route.
type ServerInfo interface {
// PublicKey returns public key of the node
// from the route in a binary representation.
PublicKey() []byte
// Iterates over network addresses of the node
// in the route. Breaks iterating on true return
// of the handler.
IterateAddresses(func(string) bool)
// Returns number of server's network addresses.
NumberOfAddresses() int
// ExternalAddresses returns external node's addresses.
ExternalAddresses() []string
}


@ -1,28 +0,0 @@
package loadcontroller
import (
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/logger"
"go.uber.org/zap"
)
// Option sets an optional parameter of Controller.
type Option func(*options)
type options struct {
log *logger.Logger
}
func defaultOpts() *options {
return &options{
log: &logger.Logger{Logger: zap.L()},
}
}
// WithLogger returns option to specify logging component.
func WithLogger(l *logger.Logger) Option {
return func(o *options) {
if l != nil {
o.log = l
}
}
}


@ -1,36 +0,0 @@
package loadcontroller
import (
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container"
)
func usedSpaceFilterEpochEQ(epoch uint64) UsedSpaceFilter {
return func(a container.SizeEstimation) bool {
return a.Epoch() == epoch
}
}
type storageWrapper struct {
w Writer
i Iterator
}
func (s storageWrapper) InitIterator() (Iterator, error) {
return s.i, nil
}
func (s storageWrapper) InitWriter([]ServerInfo) (Writer, error) {
return s.w, nil
}
func SimpleIteratorProvider(i Iterator) IteratorProvider {
return &storageWrapper{
i: i,
}
}
func SimpleWriterProvider(w Writer) WriterProvider {
return &storageWrapper{
w: w,
}
}


@ -1,145 +0,0 @@
package loadroute
import (
"context"
"encoding/hex"
"sync"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs"
loadcontroller "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/container/announcement/load/controller"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container"
"go.uber.org/zap"
)
// InitWriter initializes and returns Writer that sends each value to its next route point.
//
// If route is present, then it is taken into account,
// and the value will be sent to its continuation. Otherwise, the route will be laid
// from scratch and the value will be sent to its primary point.
//
// After building a list of remote points of the next leg of the route, the value is sent
// sequentially to all of them. If any transmissions (even all) fail, an error will not
// be returned.
//
// Close of the composed Writer calls Close method on each internal Writer generated in
// runtime and never returns an error.
//
// Always returns nil error.
func (r *Router) InitWriter(route []loadcontroller.ServerInfo) (loadcontroller.Writer, error) {
if len(route) == 0 {
route = []loadcontroller.ServerInfo{r.localSrvInfo}
}
return &loadWriter{
router: r,
route: route,
mRoute: make(map[routeKey]*valuesRoute),
mServers: make(map[string]loadcontroller.Writer),
}, nil
}
type routeKey struct {
epoch uint64
cid string
}
type valuesRoute struct {
route []loadcontroller.ServerInfo
values []container.SizeEstimation
}
type loadWriter struct {
router *Router
route []loadcontroller.ServerInfo
routeMtx sync.RWMutex
mRoute map[routeKey]*valuesRoute
mServers map[string]loadcontroller.Writer
}
func (w *loadWriter) Put(a container.SizeEstimation) error {
w.routeMtx.Lock()
defer w.routeMtx.Unlock()
key := routeKey{
epoch: a.Epoch(),
cid: a.Container().EncodeToString(),
}
routeValues, ok := w.mRoute[key]
if !ok {
route, err := w.router.routeBuilder.NextStage(a, w.route)
if err != nil {
return err
} else if len(route) == 0 {
route = []loadcontroller.ServerInfo{nil}
}
routeValues = &valuesRoute{
route: route,
values: []container.SizeEstimation{a},
}
w.mRoute[key] = routeValues
}
for _, remoteInfo := range routeValues.route {
var key string
if remoteInfo != nil {
key = hex.EncodeToString(remoteInfo.PublicKey())
}
remoteWriter, ok := w.mServers[key]
if !ok {
provider, err := w.router.remoteProvider.InitRemote(remoteInfo)
if err != nil {
w.router.log.Debug(logs.RouteCouldNotInitializeWriterProvider,
zap.String("error", err.Error()),
)
continue // best effort
}
remoteWriter, err = provider.InitWriter(w.route)
if err != nil {
w.router.log.Debug(logs.RouteCouldNotInitializeWriter,
zap.String("error", err.Error()),
)
continue // best effort
}
w.mServers[key] = remoteWriter
}
err := remoteWriter.Put(a)
if err != nil {
w.router.log.Debug(logs.RouteCouldNotPutTheValue,
zap.String("error", err.Error()),
)
}
// continue best effort
}
return nil
}
func (w *loadWriter) Close(ctx context.Context) error {
for key, wRemote := range w.mServers {
err := wRemote.Close(ctx)
if err != nil {
w.router.log.Debug(logs.RouteCouldNotCloseRemoteServerWriter,
zap.String("key", key),
zap.String("error", err.Error()),
)
}
}
return nil
}


@ -1,31 +0,0 @@
package loadroute
import (
loadcontroller "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/container/announcement/load/controller"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container"
)
// Builder groups methods to route values in the network.
type Builder interface {
// NextStage must return next group of route points for the value a
// based on the passed route.
//
// Empty passed list means being at the starting point of the route.
//
// Must return empty list and no error if the endpoint of the route is reached.
// If there are more than one point to go and the last passed point is included
// in that list (means that point is the last point in one of the route groups),
// returned route must contain nil point that should be interpreted as signal to,
// among sending to other route points, save the announcement in that point.
NextStage(a container.SizeEstimation, passed []loadcontroller.ServerInfo) ([]loadcontroller.ServerInfo, error)
}
// RemoteWriterProvider describes the component
// for sending values to a fixed route point.
type RemoteWriterProvider interface {
// InitRemote must return WriterProvider to the route point
// corresponding to info.
//
// Nil info matches the end of the route.
InitRemote(info loadcontroller.ServerInfo) (loadcontroller.WriterProvider, error)
}


@ -1,28 +0,0 @@
package loadroute
import (
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/logger"
"go.uber.org/zap"
)
// Option sets an optional parameter of Router.
type Option func(*options)
type options struct {
log *logger.Logger
}
func defaultOpts() *options {
return &options{
log: &logger.Logger{Logger: zap.L()},
}
}
// WithLogger returns Option to specify logging component.
func WithLogger(l *logger.Logger) Option {
return func(o *options) {
if l != nil {
o.log = l
}
}
}

Some files were not shown because too many files have changed in this diff.