Compare commits


42 commits

Author SHA1 Message Date
3cbff57535
[#1718] linter: Enable gocritic linter
See https://go-critic.com/overview#checkers-from-the-diagnostic-group for
the list of checkers enabled by default.
`ifElseChain` is disabled because it generates questionable issues.
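For reference, a minimal sketch (with illustrative names) of the pattern `ifElseChain` flags:

  if x == 1 {
      handleOne()
  } else if x == 2 {
      handleTwo()
  } else {
      handleDefault()
  }
  // gocritic's suggested rewrite:
  switch x {
  case 1:
      handleOne()
  case 2:
      handleTwo()
  default:
      handleDefault()
  }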

Change-Id: I5937b116d9af8b3cdf8b06451c4904d0b3f67f68
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2025-04-16 15:22:54 +03:00
ca8b01667f
[#1718] linter: Resolve gocritic's unlambda linter
See https://go-critic.com/overview#unlambda for details.
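A minimal illustration (hypothetical `double`): unlambda flags a function literal that only forwards its arguments,

  fn := func(x int) int { return double(x) }

and suggests using the function value directly:

  fn := double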

Change-Id: Iccb2d293ce31a302fcbb2c3f9c55c9b3fa554db5
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2025-04-16 15:22:54 +03:00
b88fe8c4a7
[#1718] linter: Resolve gocritic's assignOp linter
See https://go-critic.com/overview#assignop for details.
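Illustration: assignOp suggests the compound assignment form,

  x = x + 1         // flagged
  x++               // suggested
  total = total * 2 // flagged
  total *= 2        // suggested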

Change-Id: I839446846437c8c74c119d8b5669f5b866c247dc
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2025-04-16 15:22:53 +03:00
64900f87e1
[#1718] linter: Resolve gocritic's singleCaseSwitch linter
See https://go-critic.com/overview#singlecaseswitch for details.
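Illustration (hypothetical `use`): a switch with a single case,

  switch v := x.(type) {
  case string:
      use(v)
  }

is simpler as an if with a type assertion:

  if v, ok := x.(string); ok {
      use(v)
  }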

Change-Id: Ied7885f83b4116969771de6f91bc5e1e3b2a4f1e
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2025-04-16 15:22:53 +03:00
8d499f03fe
[#1718] linter: Resolve gocritic's elseif linter
See https://go-critic.com/overview#elseif for details.
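Illustration (hypothetical `run`): a lone if nested inside an else block,

  if done {
      return
  } else {
      if retry { // flagged
          run()
      }
  }

collapses to `} else if retry {`.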

Change-Id: I8fd3edfacaeea2b0a83917575d545af7e7ab4d13
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2025-04-16 15:22:52 +03:00
d2114759aa
[#1718] linter: Resolve gocritic's typeSwitchVar linter
See https://go-critic.com/overview#typeswitchvar for details.
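Illustration: re-asserting the switched value inside a case,

  switch ui.mountedPage.(type) {
  case *RecordsView:
      bucket := ui.mountedPage.(*RecordsView).bucket // flagged
  }

is replaced by binding the variable in the switch header — exactly the rewrite applied in the UI diff further down:

  switch v := ui.mountedPage.(type) {
  case *RecordsView:
      bucket := v.bucket
  }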

Change-Id: Ic29db32c9b080576ab51dd484b4376114e9e775c
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2025-04-16 15:22:52 +03:00
2075e09ced
[#1718] linter: Resolve gocritic's unslice linter
See https://go-critic.com/overview#unslice for details.
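Illustration: when `value` is already a slice, `value[:]` is a no-op,

  r.data = value[:] // flagged
  r.data = value    // suggested

(the same rewrite appears in the DefaultRecordParser diff below).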

Change-Id: I6d21e8ce1c9bae56099dc203f5080b0e3ea0c1ef
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2025-04-16 15:22:51 +03:00
cf48069fd8
[#1718] linter: Resolve gocritic's appendAssign linter
See https://go-critic.com/overview#appendassign for details.
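Illustration (hypothetical slices): appendAssign warns when the result of append is assigned to a different slice than the one appended to, which is usually a bug:

  filtered = append(src, item)      // flagged: appends to src, assigns to filtered
  filtered = append(filtered, item) // usually intended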

Change-Id: I991979ea680af25e2cec9097fa12b1c4eebc6c1d
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2025-04-16 15:22:51 +03:00
86264e4e20 [#1619] logger: Add benchmark
Change-Id: I49e90e8a3689a755755afd0638b327a6b1884795
Signed-off-by: Anton Nikiforov <an.nikiforov@yadro.com>
2025-04-16 13:07:51 +03:00
100eb8b654 [#1619] logger: Set tags for node components
Change-Id: I55ffcce9d2a74fdd47621674739b07f2e20199e3
Signed-off-by: Anton Nikiforov <an.nikiforov@yadro.com>
2025-04-16 13:07:51 +03:00
36fb15b9a4
[#1689] engine: Return error if object is locked during inhume
Return an `object is locked` error if the object doesn't exist but is
locked, since the locked index may be populated even when the object
itself doesn't exist.

Change-Id: If1a145c6efead9873acd33bb4fd22cf6175cbabd
Signed-off-by: Aleksey Savchuk <a.savchuk@yadro.com>
2025-04-16 12:21:13 +03:00
e45382b0c1 [#1693] util: Replace conditional panics with asserts
Change-Id: I13b566cde3e6d43d8a75aa2e9b28e63b597adff9
Signed-off-by: Ekaterina Lebedeva <ekaterina.lebedeva@yadro.com>
2025-04-15 16:54:45 +00:00
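The same pattern recurs in the four commits below. A minimal sketch of the rewrite, assuming an assert helper along the lines of `assert.True(cond, msg)` (the exact helper used in the repo may differ):

  // before: conditional panic
  if size == 0 {
      panic("size must be positive")
  }
  // after
  assert.True(size > 0, "size must be positive")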
5dd8d7e87a [#1693] network: Replace conditional panics with asserts
Change-Id: Icba39aa2ed0048d63c6efed398273627e1e4fbbe
Signed-off-by: Ekaterina Lebedeva <ekaterina.lebedeva@yadro.com>
2025-04-15 16:54:41 +00:00
bc045b29e2 [#1693] services: Replace conditional panics with asserts
Change-Id: Ic79609e6ad867caa88ad245b3014aa7fc32e05a8
Signed-off-by: Ekaterina Lebedeva <ekaterina.lebedeva@yadro.com>
2025-04-15 16:54:36 +00:00
fc6abe30b8 [#1693] storage: Replace conditional panics with asserts
Change-Id: I9d8ccde3c71fca716856c7bfc53da20ee0542f20
Signed-off-by: Ekaterina Lebedeva <ekaterina.lebedeva@yadro.com>
2025-04-15 16:54:31 +00:00
a285d8924f [#1693] node: Replace conditional panics with asserts
Change-Id: I5024705fd1693d00cb9241235030a73984c2a7e1
Signed-off-by: Ekaterina Lebedeva <ekaterina.lebedeva@yadro.com>
2025-04-15 16:28:43 +00:00
410b6f70ba
[#1716] cli: Return trace ID on operation failure
Close #1716

Change-Id: I293d0cc6b7331517e8cde42eae07d65384976da5
Signed-off-by: Aleksey Savchuk <a.savchuk@yadro.com>
2025-04-15 18:38:55 +03:00
0ee7467da5 [#1715] config: Add compression config section
To group all `compression_*` parameters together.

Change-Id: I11ad9600f731903753fef1adfbc0328ef75bbf87
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2025-04-15 15:05:50 +00:00
8c746a914a [#1715] compression: Decouple Config and Compressor
Refactoring.

Change-Id: Ide2e1378f30c39045d4bacd13a902331bd4f764f
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2025-04-15 15:05:45 +00:00
98308d0cad [#1715] blobstor: Allow to specify custom compression level
Change-Id: I140c39b9dceaaeb58767061b131777af22242b19
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2025-04-15 15:05:39 +00:00
2d1232ce6d [#1689] network,core/netmap: Replace Iterate*() functions with iterators
Change-Id: I4842a3160d74c56d99ea9465d4be2f0662080605
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2025-04-15 14:47:32 +00:00
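The rewrite pattern, as it appears in the diffs further down: callback-style iteration is replaced with Go 1.23 range-over-func iterators,

  // before
  node.IterateAttributes(func(k, v string) {
      attrs = append(attrs, fmt.Sprintf("%s:%q", k, v))
  })
  // after
  for k, v := range node.Attributes() {
      attrs = append(attrs, fmt.Sprintf("%s:%q", k, v))
  }

and accumulation into a slice becomes `slices.Collect` / `slices.AppendSeq`.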
e65d578ba9 [#1689] Remove deprecated NodeInfo.IterateAttributes()
Change-Id: Ibd07302079efe148903aa6177759232a28616736
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2025-04-15 14:47:28 +00:00
bf06c4fb4b [#1689] Remove deprecated NodeInfo.IterateNetworkEndpoints()
Change-Id: Ic78f18aed11fab34ee3147ceea657296b89fe60c
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2025-04-15 14:47:22 +00:00
56d09a9957 [#1640] object: Add priority metric based on geo distance
Change-Id: I3a7ea4fc4807392bf50e6ff1389c61367c953074
Signed-off-by: Anton Nikiforov <an.nikiforov@yadro.com>
2025-04-15 13:35:28 +00:00
0712c113de
[#1700] gc: Fix deadlock
`HandleExpiredLocks` takes the read lock, then `shard.Close` tries to acquire
the write lock, but `HandleExpiredLocks` calls `inhumeUnlockedIfExpired` or
`selectExpired`, which try to acquire the read lock again.
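A minimal sketch of the lock ordering (sync.RWMutex blocks new readers once a writer is waiting, so the nested RLock never returns):

  var mu sync.RWMutex

  // goroutine A: HandleExpiredLocks
  mu.RLock() // step 1: outer read lock
  // ... calls selectExpired, which does:
  mu.RLock() // step 3: blocks, because a writer is already waiting

  // goroutine B: shard.Close
  mu.Lock()  // step 2: waits for A's read lock; later RLocks now block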

Change-Id: Ib2ed015e859328045b5a542a4f569e5e0ff8b05b
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2025-04-15 10:06:05 +03:00
48930ec452 [#1703] cli: Allow reading RPC endpoint from config file
Allow reading the RPC endpoint from a configuration file when getting
the current epoch in the `object lock` and `bearer create` commands.

Close #1703

Change-Id: Iea8509dff2893a02cb63f695d7f532eecd743ed8
Signed-off-by: Aleksey Savchuk <a.savchuk@yadro.com>
2025-04-14 15:40:27 +00:00
f37babdc54
[#1700] shard: Lock shard's mode mutex on close
To prevent a race between GC handlers and close.

Change-Id: I06219230964f000f666a56158d3563c760518c3b
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2025-04-14 14:35:16 +03:00
fd37cea443
[#1700] engine: Drop unused block execution methods
`BlockExecution` and `ResumeExecution` were used only by unit tests,
so drop them and simplify the code.

Change-Id: Ib3de324617e8a27fc1f015542ac5e94df5c60a6e
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2025-04-14 14:35:15 +03:00
e80632884a
[#1700] config: Drop redundant check
The target config is created one level above, so the limiter is always nil.

Change-Id: I1896baae5b9ddeed339a7d2b022a9a886589d362
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2025-04-14 14:35:15 +03:00
5aaa3df533
[#1700] config: Move config struct to qos package
Change-Id: Ie642fff5cd1702cda00425628e11f3fd8c514798
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2025-04-14 14:35:14 +03:00
3be33b7117
[#1706] cli/playground: Mention 'help' in error message for invalid commands
Change-Id: Ica1112b907919a6d19fa1bf683f2a952c4c638e4
Signed-off-by: Alexander Chuprov <a.chuprov@yadro.com>
2025-04-14 13:10:34 +03:00
29b4fbe451 [#1332] cli/playground: Add 'netmap-config' flag
Change-Id: I4342fb9a6da2a05c18ae4e0ad9f0c71550efc5ef
Signed-off-by: Alexander Chuprov <a.chuprov@yadro.com>
2025-04-14 07:21:08 +00:00
0d36e93169 [#1332] cli/playground: Move command handler selection to separate function
Change-Id: I2dcbd85e61960c3cf141b815edab174e308ef858
Signed-off-by: Alexander Chuprov <a.chuprov@yadro.com>
2025-04-13 11:49:45 +00:00
8e87cbee17 [#1689] ci: Move commit checker out of Jenkinsfile
Commit checker is now configured globally for all Gerrit repositories:
  TrueCloudLab/jenkins#16

This allows us to run the commit checker independently of the rest of the
CI suite and re-check the commit message format without rerunning other
tests.

Change-Id: Ib8f899b856482a5dc5d03861171585415ff6b452
Signed-off-by: Vitaliy Potyarkin <v.potyarkin@yadro.com>
2025-04-12 15:42:45 +00:00
12fc7850dd [#1619] logger: Set tags for ir components
Change-Id: Ifab575bc2a3cd83c9001cd68fffaf94c91494043
Signed-off-by: Anton Nikiforov <an.nikiforov@yadro.com>
2025-04-11 17:27:27 +03:00
dfe2f9956a [#1619] logger: Filter entries by tags provided in config
Change-Id: Ia2a79d6cb2a5eb263fb2e6db3f9cf9f2a7d57118
Signed-off-by: Anton Nikiforov <an.nikiforov@yadro.com>
2025-04-11 17:27:27 +03:00
e06ecacf57
[#1705] engine: Use condition var for evacuation unit tests
To know exactly when the evacuation has completed,
a condition variable was added.
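A minimal sketch of the pattern (illustrative names): the test waits on a sync.Cond until the evacuation goroutine signals completion:

  var (
      mu   sync.Mutex
      cond = sync.NewCond(&mu)
      done bool
  )

  // evacuation goroutine, on completion:
  mu.Lock()
  done = true
  mu.Unlock()
  cond.Broadcast()

  // test, waiting:
  mu.Lock()
  for !done {
      cond.Wait()
  }
  mu.Unlock()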

Closes #1705

Change-Id: I86f6d7d2ad2b9759905b6b5e9341008cb74f5dfd
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2025-04-11 09:20:46 +03:00
64c1392513 [#1710] object: Sign response even if CloseAndRecv returns error
* The sign service wraps the error with a status and signs the response
  even if `CloseAndRecv` returns an error in the `Put` and `Patch` methods.

Close #1710

Change-Id: I7e1d8fe00db53607fa6e04ebec9a29b87349f8a1
Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2025-04-10 16:31:47 +03:00
dcfd895449 [#1710] object: Implement Unwrap() for errIncompletePut
* When the sign service calls `SignResponse`, it tries to set a v2 status in
  the response by unwrapping the error as deeply as possible. This didn't
  work for `errIncompletePut`, which didn't implement `Unwrap()`, so the
  correct status set in the error could not be found.
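A sketch of the fix (field name illustrative): giving the wrapper an Unwrap method lets errors.As reach the status error stored inside it:

  type errIncompletePut struct {
      singleErr error // status error received from a storage node
  }

  func (e errIncompletePut) Error() string {
      return fmt.Sprintf("incomplete object PUT: %v", e.singleErr)
  }

  func (e errIncompletePut) Unwrap() error {
      return e.singleErr
  }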

Change-Id: I280c1806a008176854c55f13bf8688e5736ef941
Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2025-04-10 16:31:47 +03:00
6730e27ae7
[#1712] adm: Drop rpc-endpoint flag from zombie scan
Morph addresses from the config are used instead.

Change-Id: Id99f91defbbff442c308f30d219b9824b4c871de
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2025-04-10 10:06:31 +03:00
f7779adf71
[#1712] core: Extend object info string with EC header
Closes #1712

Change-Id: Ief4a960f7dece3359763113270d1ff5155f3f19e
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2025-04-10 10:06:31 +03:00
f93b96c601
[#1712] adm: Add maintenance zombie commands
Change-Id: I1b73e561a8daad67d0a8ffc0d293cbdd09aaab6b
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2025-04-10 10:06:30 +03:00
137 changed files with 2520 additions and 745 deletions

.ci/Jenkinsfile vendored

@@ -78,10 +78,4 @@ async {
 }
 }
 }
-task('dco') {
-	container('git.frostfs.info/truecloudlab/commit-check:master') {
-		sh 'FROM=pull_request_target commit-check'
-	}
-}
 }


@@ -18,6 +18,7 @@ linters:
   - exhaustive
   - funlen
   - gocognit
+  - gocritic
   - godot
   - importas
   - ineffassign
@@ -44,6 +45,9 @@ linters:
     statements: 60
   gocognit:
     min-complexity: 40
+  gocritic:
+    disabled-checks:
+      - ifElseChain
   importas:
     alias:
       - pkg: git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object


@@ -0,0 +1,15 @@
package maintenance
import (
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/modules/maintenance/zombie"
"github.com/spf13/cobra"
)
var RootCmd = &cobra.Command{
Use: "maintenance",
Short: "Section for maintenance commands",
}
func init() {
RootCmd.AddCommand(zombie.Cmd)
}


@@ -0,0 +1,70 @@
package zombie
import (
"crypto/ecdsa"
"fmt"
"os"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config"
nodeconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/node"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"github.com/nspcc-dev/neo-go/cli/flags"
"github.com/nspcc-dev/neo-go/cli/input"
"github.com/nspcc-dev/neo-go/pkg/crypto/keys"
"github.com/nspcc-dev/neo-go/pkg/util"
"github.com/nspcc-dev/neo-go/pkg/wallet"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
func getPrivateKey(cmd *cobra.Command, appCfg *config.Config) *ecdsa.PrivateKey {
keyDesc := viper.GetString(walletFlag)
if keyDesc == "" {
return &nodeconfig.Key(appCfg).PrivateKey
}
data, err := os.ReadFile(keyDesc)
commonCmd.ExitOnErr(cmd, "open wallet file: %w", err)
priv, err := keys.NewPrivateKeyFromBytes(data)
if err != nil {
w, err := wallet.NewWalletFromFile(keyDesc)
commonCmd.ExitOnErr(cmd, "provided key is incorrect, only wallet or binary key supported: %w", err)
return fromWallet(cmd, w, viper.GetString(addressFlag))
}
return &priv.PrivateKey
}
func fromWallet(cmd *cobra.Command, w *wallet.Wallet, addrStr string) *ecdsa.PrivateKey {
var (
addr util.Uint160
err error
)
if addrStr == "" {
addr = w.GetChangeAddress()
} else {
addr, err = flags.ParseAddress(addrStr)
commonCmd.ExitOnErr(cmd, "--address option must be specified and valid: %w", err)
}
acc := w.GetAccount(addr)
if acc == nil {
commonCmd.ExitOnErr(cmd, "--address option must be specified and valid: %w", fmt.Errorf("can't find wallet account for %s", addrStr))
}
pass, err := getPassword()
commonCmd.ExitOnErr(cmd, "invalid password for the encrypted key: %w", err)
commonCmd.ExitOnErr(cmd, "can't decrypt account: %w", acc.Decrypt(pass, keys.NEP2ScryptParams()))
return &acc.PrivateKey().PrivateKey
}
func getPassword() (string, error) {
// this check allows empty passwords
if viper.IsSet("password") {
return viper.GetString("password"), nil
}
return input.ReadPassword("Enter password > ")
}


@@ -0,0 +1,31 @@
package zombie
import (
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
"github.com/spf13/cobra"
)
func list(cmd *cobra.Command, _ []string) {
configFile, _ := cmd.Flags().GetString(commonflags.ConfigFlag)
configDir, _ := cmd.Flags().GetString(commonflags.ConfigDirFlag)
appCfg := config.New(configFile, configDir, config.EnvPrefix)
storageEngine := newEngine(cmd, appCfg)
q := createQuarantine(cmd, storageEngine.DumpInfo())
var containerID *cid.ID
if cidStr, _ := cmd.Flags().GetString(cidFlag); cidStr != "" {
containerID = &cid.ID{}
commonCmd.ExitOnErr(cmd, "decode container ID string: %w", containerID.DecodeString(cidStr))
}
commonCmd.ExitOnErr(cmd, "iterate over quarantine: %w", q.Iterate(cmd.Context(), func(a oid.Address) error {
if containerID != nil && a.Container() != *containerID {
return nil
}
cmd.Println(a.EncodeToString())
return nil
}))
}


@@ -0,0 +1,46 @@
package zombie
import (
"errors"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config"
morphconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/morph"
nodeconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/node"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client"
cntClient "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client/container"
netmapClient "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client/netmap"
"github.com/spf13/cobra"
)
func createMorphClient(cmd *cobra.Command, appCfg *config.Config) *client.Client {
addresses := morphconfig.RPCEndpoint(appCfg)
if len(addresses) == 0 {
commonCmd.ExitOnErr(cmd, "create morph client: %w", errors.New("no morph endpoints found"))
}
key := nodeconfig.Key(appCfg)
cli, err := client.New(cmd.Context(),
key,
client.WithDialTimeout(morphconfig.DialTimeout(appCfg)),
client.WithEndpoints(addresses...),
client.WithSwitchInterval(morphconfig.SwitchInterval(appCfg)),
)
commonCmd.ExitOnErr(cmd, "create morph client: %w", err)
return cli
}
func createContainerClient(cmd *cobra.Command, morph *client.Client) *cntClient.Client {
hs, err := morph.NNSContractAddress(client.NNSContainerContractName)
commonCmd.ExitOnErr(cmd, "resolve container contract hash: %w", err)
cc, err := cntClient.NewFromMorph(morph, hs, 0)
commonCmd.ExitOnErr(cmd, "create morph container client: %w", err)
return cc
}
func createNetmapClient(cmd *cobra.Command, morph *client.Client) *netmapClient.Client {
hs, err := morph.NNSContractAddress(client.NNSNetmapContractName)
commonCmd.ExitOnErr(cmd, "resolve netmap contract hash: %w", err)
cli, err := netmapClient.NewFromMorph(morph, hs, 0)
commonCmd.ExitOnErr(cmd, "create morph netmap client: %w", err)
return cli
}


@@ -0,0 +1,154 @@
package zombie
import (
"context"
"fmt"
"math"
"os"
"path/filepath"
"strings"
"sync"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
objectcore "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/core/object"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/fstree"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/engine"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/shard/mode"
apistatus "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client/status"
objectSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
"github.com/spf13/cobra"
)
type quarantine struct {
// mtx protects current field.
mtx sync.Mutex
current int
trees []*fstree.FSTree
}
func createQuarantine(cmd *cobra.Command, engineInfo engine.Info) *quarantine {
var paths []string
for _, sh := range engineInfo.Shards {
var storagePaths []string
for _, st := range sh.BlobStorInfo.SubStorages {
storagePaths = append(storagePaths, st.Path)
}
if len(storagePaths) == 0 {
continue
}
paths = append(paths, filepath.Join(commonPath(storagePaths), "quarantine"))
}
q, err := newQuarantine(paths)
commonCmd.ExitOnErr(cmd, "create quarantine: %w", err)
return q
}
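// commonPath returns the longest common prefix of the given paths.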
func commonPath(paths []string) string {
if len(paths) == 0 {
return ""
}
if len(paths) == 1 {
return paths[0]
}
minLen := math.MaxInt
for _, p := range paths {
if len(p) < minLen {
minLen = len(p)
}
}
var sb strings.Builder
for i := range minLen {
for _, path := range paths[1:] {
if paths[0][i] != path[i] {
return sb.String()
}
}
sb.WriteByte(paths[0][i])
}
return sb.String()
}
func newQuarantine(paths []string) (*quarantine, error) {
var q quarantine
for i := range paths {
f := fstree.New(
fstree.WithDepth(1),
fstree.WithDirNameLen(1),
fstree.WithPath(paths[i]),
fstree.WithPerm(os.ModePerm),
)
if err := f.Open(mode.ComponentReadWrite); err != nil {
return nil, fmt.Errorf("open fstree %s: %w", paths[i], err)
}
if err := f.Init(); err != nil {
return nil, fmt.Errorf("init fstree %s: %w", paths[i], err)
}
q.trees = append(q.trees, f)
}
return &q, nil
}
func (q *quarantine) Get(ctx context.Context, a oid.Address) (*objectSDK.Object, error) {
for i := range q.trees {
res, err := q.trees[i].Get(ctx, common.GetPrm{Address: a})
if err != nil {
continue
}
return res.Object, nil
}
return nil, &apistatus.ObjectNotFound{}
}
func (q *quarantine) Delete(ctx context.Context, a oid.Address) error {
for i := range q.trees {
_, err := q.trees[i].Delete(ctx, common.DeletePrm{Address: a})
if err != nil {
continue
}
return nil
}
return &apistatus.ObjectNotFound{}
}
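// Put stores the object in one of the quarantine trees, chosen round-robin.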
func (q *quarantine) Put(ctx context.Context, obj *objectSDK.Object) error {
data, err := obj.Marshal()
if err != nil {
return err
}
var prm common.PutPrm
prm.Address = objectcore.AddressOf(obj)
prm.Object = obj
prm.RawData = data
q.mtx.Lock()
current := q.current
q.current = (q.current + 1) % len(q.trees)
q.mtx.Unlock()
_, err = q.trees[current].Put(ctx, prm)
return err
}
func (q *quarantine) Iterate(ctx context.Context, f func(oid.Address) error) error {
var prm common.IteratePrm
prm.Handler = func(elem common.IterationElement) error {
return f(elem.Address)
}
for i := range q.trees {
select {
case <-ctx.Done():
return ctx.Err()
default:
}
_, err := q.trees[i].Iterate(ctx, prm)
if err != nil {
return err
}
}
return nil
}


@@ -0,0 +1,55 @@
package zombie
import (
"errors"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
apistatus "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client/status"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
"github.com/spf13/cobra"
)
func remove(cmd *cobra.Command, _ []string) {
configFile, _ := cmd.Flags().GetString(commonflags.ConfigFlag)
configDir, _ := cmd.Flags().GetString(commonflags.ConfigDirFlag)
appCfg := config.New(configFile, configDir, config.EnvPrefix)
storageEngine := newEngine(cmd, appCfg)
q := createQuarantine(cmd, storageEngine.DumpInfo())
var containerID cid.ID
cidStr, _ := cmd.Flags().GetString(cidFlag)
commonCmd.ExitOnErr(cmd, "decode container ID string: %w", containerID.DecodeString(cidStr))
var objectID *oid.ID
oidStr, _ := cmd.Flags().GetString(oidFlag)
if oidStr != "" {
objectID = &oid.ID{}
commonCmd.ExitOnErr(cmd, "decode object ID string: %w", objectID.DecodeString(oidStr))
}
if objectID != nil {
var addr oid.Address
addr.SetContainer(containerID)
addr.SetObject(*objectID)
removeObject(cmd, q, addr)
} else {
commonCmd.ExitOnErr(cmd, "iterate over quarantine: %w", q.Iterate(cmd.Context(), func(addr oid.Address) error {
if addr.Container() != containerID {
return nil
}
removeObject(cmd, q, addr)
return nil
}))
}
}
func removeObject(cmd *cobra.Command, q *quarantine, addr oid.Address) {
err := q.Delete(cmd.Context(), addr)
if errors.Is(err, new(apistatus.ObjectNotFound)) {
return
}
commonCmd.ExitOnErr(cmd, "remove object from quarantine: %w", err)
}


@@ -0,0 +1,69 @@
package zombie
import (
"crypto/sha256"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
containerCore "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/core/container"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/engine"
cntClient "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client/container"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
"github.com/spf13/cobra"
)
func restore(cmd *cobra.Command, _ []string) {
configFile, _ := cmd.Flags().GetString(commonflags.ConfigFlag)
configDir, _ := cmd.Flags().GetString(commonflags.ConfigDirFlag)
appCfg := config.New(configFile, configDir, config.EnvPrefix)
storageEngine := newEngine(cmd, appCfg)
q := createQuarantine(cmd, storageEngine.DumpInfo())
morphClient := createMorphClient(cmd, appCfg)
cnrCli := createContainerClient(cmd, morphClient)
var containerID cid.ID
cidStr, _ := cmd.Flags().GetString(cidFlag)
commonCmd.ExitOnErr(cmd, "decode container ID string: %w", containerID.DecodeString(cidStr))
var objectID *oid.ID
oidStr, _ := cmd.Flags().GetString(oidFlag)
if oidStr != "" {
objectID = &oid.ID{}
commonCmd.ExitOnErr(cmd, "decode object ID string: %w", objectID.DecodeString(oidStr))
}
if objectID != nil {
var addr oid.Address
addr.SetContainer(containerID)
addr.SetObject(*objectID)
restoreObject(cmd, storageEngine, q, addr, cnrCli)
} else {
commonCmd.ExitOnErr(cmd, "iterate over quarantine: %w", q.Iterate(cmd.Context(), func(addr oid.Address) error {
if addr.Container() != containerID {
return nil
}
restoreObject(cmd, storageEngine, q, addr, cnrCli)
return nil
}))
}
}
func restoreObject(cmd *cobra.Command, storageEngine *engine.StorageEngine, q *quarantine, addr oid.Address, cnrCli *cntClient.Client) {
obj, err := q.Get(cmd.Context(), addr)
commonCmd.ExitOnErr(cmd, "get object from quarantine: %w", err)
rawCID := make([]byte, sha256.Size)
cid := addr.Container()
cid.Encode(rawCID)
cnr, err := cnrCli.Get(cmd.Context(), rawCID)
commonCmd.ExitOnErr(cmd, "get container: %w", err)
putPrm := engine.PutPrm{
Object: obj,
IsIndexedContainer: containerCore.IsIndexedContainer(cnr.Value),
}
commonCmd.ExitOnErr(cmd, "put object to storage engine: %w", storageEngine.Put(cmd.Context(), putPrm))
commonCmd.ExitOnErr(cmd, "remove object from quarantine: %w", q.Delete(cmd.Context(), addr))
}


@@ -0,0 +1,123 @@
package zombie
import (
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/commonflags"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
const (
flagBatchSize = "batch-size"
flagBatchSizeUsage = "Objects iteration batch size"
cidFlag = "cid"
cidFlagUsage = "Container ID"
oidFlag = "oid"
oidFlagUsage = "Object ID"
walletFlag = "wallet"
walletFlagShorthand = "w"
walletFlagUsage = "Path to the wallet or binary key"
addressFlag = "address"
addressFlagUsage = "Address of wallet account"
moveFlag = "move"
moveFlagUsage = "Move objects from storage engine to quarantine"
)
var (
Cmd = &cobra.Command{
Use: "zombie",
Short: "Zombie objects related commands",
}
scanCmd = &cobra.Command{
Use: "scan",
Short: "Scan storage engine for zombie objects and move them to quarantine",
Long: "",
PreRun: func(cmd *cobra.Command, _ []string) {
_ = viper.BindPFlag(commonflags.ConfigFlag, cmd.Flags().Lookup(commonflags.ConfigFlag))
_ = viper.BindPFlag(commonflags.ConfigDirFlag, cmd.Flags().Lookup(commonflags.ConfigDirFlag))
_ = viper.BindPFlag(walletFlag, cmd.Flags().Lookup(walletFlag))
_ = viper.BindPFlag(addressFlag, cmd.Flags().Lookup(addressFlag))
_ = viper.BindPFlag(flagBatchSize, cmd.Flags().Lookup(flagBatchSize))
_ = viper.BindPFlag(moveFlag, cmd.Flags().Lookup(moveFlag))
},
Run: scan,
}
listCmd = &cobra.Command{
Use: "list",
Short: "List zombie objects from quarantine",
Long: "",
PreRun: func(cmd *cobra.Command, _ []string) {
_ = viper.BindPFlag(commonflags.ConfigFlag, cmd.Flags().Lookup(commonflags.ConfigFlag))
_ = viper.BindPFlag(commonflags.ConfigDirFlag, cmd.Flags().Lookup(commonflags.ConfigDirFlag))
_ = viper.BindPFlag(cidFlag, cmd.Flags().Lookup(cidFlag))
},
Run: list,
}
restoreCmd = &cobra.Command{
Use: "restore",
Short: "Restore zombie objects from quarantine",
Long: "",
PreRun: func(cmd *cobra.Command, _ []string) {
_ = viper.BindPFlag(commonflags.ConfigFlag, cmd.Flags().Lookup(commonflags.ConfigFlag))
_ = viper.BindPFlag(commonflags.ConfigDirFlag, cmd.Flags().Lookup(commonflags.ConfigDirFlag))
_ = viper.BindPFlag(cidFlag, cmd.Flags().Lookup(cidFlag))
_ = viper.BindPFlag(oidFlag, cmd.Flags().Lookup(oidFlag))
},
Run: restore,
}
removeCmd = &cobra.Command{
Use: "remove",
Short: "Remove zombie objects from quarantine",
Long: "",
PreRun: func(cmd *cobra.Command, _ []string) {
_ = viper.BindPFlag(commonflags.ConfigFlag, cmd.Flags().Lookup(commonflags.ConfigFlag))
_ = viper.BindPFlag(commonflags.ConfigDirFlag, cmd.Flags().Lookup(commonflags.ConfigDirFlag))
_ = viper.BindPFlag(cidFlag, cmd.Flags().Lookup(cidFlag))
_ = viper.BindPFlag(oidFlag, cmd.Flags().Lookup(oidFlag))
},
Run: remove,
}
)
func init() {
initScanCmd()
initListCmd()
initRestoreCmd()
initRemoveCmd()
}
func initScanCmd() {
Cmd.AddCommand(scanCmd)
scanCmd.Flags().StringP(commonflags.ConfigFlag, commonflags.ConfigFlagShorthand, "", commonflags.ConfigFlagUsage)
scanCmd.Flags().String(commonflags.ConfigDirFlag, "", commonflags.ConfigDirFlagUsage)
scanCmd.Flags().Uint32(flagBatchSize, 1000, flagBatchSizeUsage)
scanCmd.Flags().StringP(walletFlag, walletFlagShorthand, "", walletFlagUsage)
scanCmd.Flags().String(addressFlag, "", addressFlagUsage)
scanCmd.Flags().Bool(moveFlag, false, moveFlagUsage)
}
func initListCmd() {
Cmd.AddCommand(listCmd)
listCmd.Flags().StringP(commonflags.ConfigFlag, commonflags.ConfigFlagShorthand, "", commonflags.ConfigFlagUsage)
listCmd.Flags().String(commonflags.ConfigDirFlag, "", commonflags.ConfigDirFlagUsage)
listCmd.Flags().String(cidFlag, "", cidFlagUsage)
}
func initRestoreCmd() {
Cmd.AddCommand(restoreCmd)
restoreCmd.Flags().StringP(commonflags.ConfigFlag, commonflags.ConfigFlagShorthand, "", commonflags.ConfigFlagUsage)
restoreCmd.Flags().String(commonflags.ConfigDirFlag, "", commonflags.ConfigDirFlagUsage)
restoreCmd.Flags().String(cidFlag, "", cidFlagUsage)
restoreCmd.Flags().String(oidFlag, "", oidFlagUsage)
}
func initRemoveCmd() {
Cmd.AddCommand(removeCmd)
removeCmd.Flags().StringP(commonflags.ConfigFlag, commonflags.ConfigFlagShorthand, "", commonflags.ConfigFlagUsage)
removeCmd.Flags().String(commonflags.ConfigDirFlag, "", commonflags.ConfigDirFlagUsage)
removeCmd.Flags().String(cidFlag, "", cidFlagUsage)
removeCmd.Flags().String(oidFlag, "", oidFlagUsage)
}


@@ -0,0 +1,281 @@
package zombie
import (
"context"
"crypto/ecdsa"
"crypto/sha256"
"errors"
"fmt"
"sync"
"time"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config"
apiclientconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/apiclient"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
clientCore "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/core/client"
netmapCore "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/core/netmap"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/core/object"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/engine"
cntClient "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client/container"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client/netmap"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/network/cache"
clientSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client"
apistatus "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client/status"
objectSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
"github.com/spf13/cobra"
"golang.org/x/sync/errgroup"
)
func scan(cmd *cobra.Command, _ []string) {
configFile, _ := cmd.Flags().GetString(commonflags.ConfigFlag)
configDir, _ := cmd.Flags().GetString(commonflags.ConfigDirFlag)
appCfg := config.New(configFile, configDir, config.EnvPrefix)
batchSize, _ := cmd.Flags().GetUint32(flagBatchSize)
if batchSize == 0 {
commonCmd.ExitOnErr(cmd, "invalid batch size: %w", errors.New("batch size must be positive value"))
}
move, _ := cmd.Flags().GetBool(moveFlag)
storageEngine := newEngine(cmd, appCfg)
morphClient := createMorphClient(cmd, appCfg)
cnrCli := createContainerClient(cmd, morphClient)
nmCli := createNetmapClient(cmd, morphClient)
q := createQuarantine(cmd, storageEngine.DumpInfo())
pk := getPrivateKey(cmd, appCfg)
epoch, err := nmCli.Epoch(cmd.Context())
commonCmd.ExitOnErr(cmd, "read epoch from morph: %w", err)
nm, err := nmCli.GetNetMapByEpoch(cmd.Context(), epoch)
commonCmd.ExitOnErr(cmd, "read netmap from morph: %w", err)
cmd.Printf("Epoch: %d\n", nm.Epoch())
cmd.Printf("Nodes in the netmap: %d\n", len(nm.Nodes()))
ps := &processStatus{
statusCount: make(map[status]uint64),
}
stopCh := make(chan struct{})
start := time.Now()
var wg sync.WaitGroup
wg.Add(2)
go func() {
defer wg.Done()
tick := time.NewTicker(time.Second)
defer tick.Stop()
for {
select {
case <-cmd.Context().Done():
return
case <-stopCh:
return
case <-tick.C:
fmt.Printf("Objects processed: %d; Time elapsed: %s\n", ps.total(), time.Since(start))
}
}
}()
go func() {
defer wg.Done()
err = scanStorageEngine(cmd, batchSize, storageEngine, ps, appCfg, cnrCli, nmCli, q, pk, move)
close(stopCh)
}()
wg.Wait()
commonCmd.ExitOnErr(cmd, "scan storage engine for zombie objects: %w", err)
cmd.Println()
cmd.Println("Status description:")
cmd.Println("undefined -- nothing is clear")
cmd.Println("found -- object is found in cluster")
cmd.Println("quarantine -- object is not found in cluster")
cmd.Println()
for status, count := range ps.statusCount {
cmd.Printf("Status: %s, Count: %d\n", status, count)
}
}
type status string
const (
statusUndefined status = "undefined"
statusFound status = "found"
statusQuarantine status = "quarantine"
)
func checkAddr(ctx context.Context, cnrCli *cntClient.Client, nmCli *netmap.Client, cc *cache.ClientCache, obj object.Info) (status, error) {
rawCID := make([]byte, sha256.Size)
cid := obj.Address.Container()
cid.Encode(rawCID)
cnr, err := cnrCli.Get(ctx, rawCID)
if err != nil {
var errContainerNotFound *apistatus.ContainerNotFound
if errors.As(err, &errContainerNotFound) {
// Policer will deal with this object.
return statusFound, nil
}
return statusUndefined, fmt.Errorf("read container %s from morph: %w", cid, err)
}
nm, err := nmCli.NetMap(ctx)
if err != nil {
return statusUndefined, fmt.Errorf("read netmap from morph: %w", err)
}
nodes, err := nm.ContainerNodes(cnr.Value.PlacementPolicy(), rawCID)
if err != nil {
// Not enough nodes, check all netmap nodes.
nodes = append([][]netmap.NodeInfo{}, nm.Nodes())
}
objID := obj.Address.Object()
cnrID := obj.Address.Container()
local := true
raw := false
if obj.ECInfo != nil {
objID = obj.ECInfo.ParentID
local = false
raw = true
}
prm := clientSDK.PrmObjectHead{
ObjectID: &objID,
ContainerID: &cnrID,
Local: local,
Raw: raw,
}
var ni clientCore.NodeInfo
for i := range nodes {
for j := range nodes[i] {
if err := clientCore.NodeInfoFromRawNetmapElement(&ni, netmapCore.Node(nodes[i][j])); err != nil {
return statusUndefined, fmt.Errorf("parse node info: %w", err)
}
c, err := cc.Get(ni)
if err != nil {
continue
}
res, err := c.ObjectHead(ctx, prm)
if err != nil {
var errECInfo *objectSDK.ECInfoError
if raw && errors.As(err, &errECInfo) {
return statusFound, nil
}
continue
}
if err := apistatus.ErrFromStatus(res.Status()); err != nil {
continue
}
return statusFound, nil
}
}
if cnr.Value.PlacementPolicy().NumberOfReplicas() == 1 && cnr.Value.PlacementPolicy().ReplicaDescriptor(0).NumberOfObjects() == 1 {
return statusFound, nil
}
return statusQuarantine, nil
}
func scanStorageEngine(cmd *cobra.Command, batchSize uint32, storageEngine *engine.StorageEngine, ps *processStatus,
appCfg *config.Config, cnrCli *cntClient.Client, nmCli *netmap.Client, q *quarantine, pk *ecdsa.PrivateKey, move bool,
) error {
cc := cache.NewSDKClientCache(cache.ClientCacheOpts{
DialTimeout: apiclientconfig.DialTimeout(appCfg),
StreamTimeout: apiclientconfig.StreamTimeout(appCfg),
ReconnectTimeout: apiclientconfig.ReconnectTimeout(appCfg),
Key: pk,
AllowExternal: apiclientconfig.AllowExternal(appCfg),
})
ctx := cmd.Context()
var cursor *engine.Cursor
for {
select {
case <-ctx.Done():
return ctx.Err()
default:
}
var prm engine.ListWithCursorPrm
prm.WithCursor(cursor)
prm.WithCount(batchSize)
res, err := storageEngine.ListWithCursor(ctx, prm)
if err != nil {
if errors.Is(err, engine.ErrEndOfListing) {
return nil
}
return fmt.Errorf("list with cursor: %w", err)
}
cursor = res.Cursor()
addrList := res.AddressList()
eg, egCtx := errgroup.WithContext(ctx)
eg.SetLimit(int(batchSize))
for i := range addrList {
addr := addrList[i]
eg.Go(func() error {
result, err := checkAddr(egCtx, cnrCli, nmCli, cc, addr)
if err != nil {
return fmt.Errorf("check object %s status: %w", addr.Address, err)
}
ps.add(result)
if !move && result == statusQuarantine {
cmd.Println(addr)
return nil
}
if result == statusQuarantine {
return moveToQuarantine(egCtx, storageEngine, q, addr.Address)
}
return nil
})
}
if err := eg.Wait(); err != nil {
return fmt.Errorf("process objects batch: %w", err)
}
}
}
func moveToQuarantine(ctx context.Context, storageEngine *engine.StorageEngine, q *quarantine, addr oid.Address) error {
var getPrm engine.GetPrm
getPrm.WithAddress(addr)
res, err := storageEngine.Get(ctx, getPrm)
if err != nil {
return fmt.Errorf("get object %s from storage engine: %w", addr, err)
}
if err := q.Put(ctx, res.Object()); err != nil {
return fmt.Errorf("put object %s to quarantine: %w", addr, err)
}
var delPrm engine.DeletePrm
delPrm.WithForceRemoval()
delPrm.WithAddress(addr)
if err = storageEngine.Delete(ctx, delPrm); err != nil {
return fmt.Errorf("delete object %s from storage engine: %w", addr, err)
}
return nil
}
type processStatus struct {
guard sync.RWMutex
statusCount map[status]uint64
count uint64
}
func (s *processStatus) add(st status) {
s.guard.Lock()
defer s.guard.Unlock()
s.statusCount[st]++
s.count++
}
func (s *processStatus) total() uint64 {
s.guard.RLock()
defer s.guard.RUnlock()
return s.count
}


@@ -0,0 +1,201 @@
package zombie
import (
"context"
"time"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config"
engineconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/engine"
shardconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/engine/shard"
blobovniczaconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/engine/shard/blobstor/blobovnicza"
fstreeconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/engine/shard/blobstor/fstree"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/qos"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/blobovniczatree"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/fstree"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/engine"
meta "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/metabase"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/pilorama"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/shard"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/writecache"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/logger"
objectSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
"github.com/panjf2000/ants/v2"
"github.com/spf13/cobra"
"go.etcd.io/bbolt"
"go.uber.org/zap"
)
func newEngine(cmd *cobra.Command, c *config.Config) *engine.StorageEngine {
ngOpts := storageEngineOptions(c)
shardOpts := shardOptions(cmd, c)
e := engine.New(ngOpts...)
for _, opts := range shardOpts {
_, err := e.AddShard(cmd.Context(), opts...)
commonCmd.ExitOnErr(cmd, "iterate shards from config: %w", err)
}
commonCmd.ExitOnErr(cmd, "open storage engine: %w", e.Open(cmd.Context()))
commonCmd.ExitOnErr(cmd, "init storage engine: %w", e.Init(cmd.Context()))
return e
}
func storageEngineOptions(c *config.Config) []engine.Option {
return []engine.Option{
engine.WithErrorThreshold(engineconfig.ShardErrorThreshold(c)),
engine.WithLogger(logger.NewLoggerWrapper(zap.NewNop())),
engine.WithLowMemoryConsumption(engineconfig.EngineLowMemoryConsumption(c)),
}
}
func shardOptions(cmd *cobra.Command, c *config.Config) [][]shard.Option {
var result [][]shard.Option
err := engineconfig.IterateShards(c, false, func(sh *shardconfig.Config) error {
result = append(result, getShardOpts(cmd, c, sh))
return nil
})
commonCmd.ExitOnErr(cmd, "iterate shards from config: %w", err)
return result
}
func getShardOpts(cmd *cobra.Command, c *config.Config, sh *shardconfig.Config) []shard.Option {
wc, wcEnabled := getWriteCacheOpts(sh)
return []shard.Option{
shard.WithLogger(logger.NewLoggerWrapper(zap.NewNop())),
shard.WithRefillMetabase(sh.RefillMetabase()),
shard.WithRefillMetabaseWorkersCount(sh.RefillMetabaseWorkersCount()),
shard.WithMode(sh.Mode()),
shard.WithBlobStorOptions(getBlobstorOpts(cmd.Context(), sh)...),
shard.WithMetaBaseOptions(getMetabaseOpts(sh)...),
shard.WithPiloramaOptions(getPiloramaOpts(c, sh)...),
shard.WithWriteCache(wcEnabled),
shard.WithWriteCacheOptions(wc),
shard.WithRemoverBatchSize(sh.GC().RemoverBatchSize()),
shard.WithGCRemoverSleepInterval(sh.GC().RemoverSleepInterval()),
shard.WithExpiredCollectorBatchSize(sh.GC().ExpiredCollectorBatchSize()),
shard.WithExpiredCollectorWorkerCount(sh.GC().ExpiredCollectorWorkerCount()),
shard.WithGCWorkerPoolInitializer(func(sz int) util.WorkerPool {
pool, err := ants.NewPool(sz)
commonCmd.ExitOnErr(cmd, "init GC pool: %w", err)
return pool
}),
shard.WithLimiter(qos.NewNoopLimiter()),
}
}
func getWriteCacheOpts(sh *shardconfig.Config) ([]writecache.Option, bool) {
if wc := sh.WriteCache(); wc != nil && wc.Enabled() {
var result []writecache.Option
result = append(result,
writecache.WithPath(wc.Path()),
writecache.WithFlushSizeLimit(wc.MaxFlushingObjectsSize()),
writecache.WithMaxObjectSize(wc.MaxObjectSize()),
writecache.WithFlushWorkersCount(wc.WorkerCount()),
writecache.WithMaxCacheSize(wc.SizeLimit()),
writecache.WithMaxCacheCount(wc.CountLimit()),
writecache.WithNoSync(wc.NoSync()),
writecache.WithLogger(logger.NewLoggerWrapper(zap.NewNop())),
writecache.WithQoSLimiter(qos.NewNoopLimiter()),
)
return result, true
}
return nil, false
}
func getPiloramaOpts(c *config.Config, sh *shardconfig.Config) []pilorama.Option {
var piloramaOpts []pilorama.Option
if config.BoolSafe(c.Sub("tree"), "enabled") {
pr := sh.Pilorama()
piloramaOpts = append(piloramaOpts,
pilorama.WithPath(pr.Path()),
pilorama.WithPerm(pr.Perm()),
pilorama.WithNoSync(pr.NoSync()),
pilorama.WithMaxBatchSize(pr.MaxBatchSize()),
pilorama.WithMaxBatchDelay(pr.MaxBatchDelay()),
)
}
return piloramaOpts
}
func getMetabaseOpts(sh *shardconfig.Config) []meta.Option {
return []meta.Option{
meta.WithPath(sh.Metabase().Path()),
meta.WithPermissions(sh.Metabase().BoltDB().Perm()),
meta.WithMaxBatchSize(sh.Metabase().BoltDB().MaxBatchSize()),
meta.WithMaxBatchDelay(sh.Metabase().BoltDB().MaxBatchDelay()),
meta.WithBoltDBOptions(&bbolt.Options{
Timeout: 100 * time.Millisecond,
}),
meta.WithLogger(logger.NewLoggerWrapper(zap.NewNop())),
meta.WithEpochState(&epochState{}),
}
}
func getBlobstorOpts(ctx context.Context, sh *shardconfig.Config) []blobstor.Option {
result := []blobstor.Option{
blobstor.WithCompression(sh.Compression()),
blobstor.WithStorages(getSubStorages(ctx, sh)),
blobstor.WithLogger(logger.NewLoggerWrapper(zap.NewNop())),
}
return result
}
func getSubStorages(ctx context.Context, sh *shardconfig.Config) []blobstor.SubStorage {
var ss []blobstor.SubStorage
for _, storage := range sh.BlobStor().Storages() {
switch storage.Type() {
case blobovniczatree.Type:
sub := blobovniczaconfig.From((*config.Config)(storage))
blobTreeOpts := []blobovniczatree.Option{
blobovniczatree.WithRootPath(storage.Path()),
blobovniczatree.WithPermissions(storage.Perm()),
blobovniczatree.WithBlobovniczaSize(sub.Size()),
blobovniczatree.WithBlobovniczaShallowDepth(sub.ShallowDepth()),
blobovniczatree.WithBlobovniczaShallowWidth(sub.ShallowWidth()),
blobovniczatree.WithOpenedCacheSize(sub.OpenedCacheSize()),
blobovniczatree.WithOpenedCacheTTL(sub.OpenedCacheTTL()),
blobovniczatree.WithOpenedCacheExpInterval(sub.OpenedCacheExpInterval()),
blobovniczatree.WithInitWorkerCount(sub.InitWorkerCount()),
blobovniczatree.WithWaitBeforeDropDB(sub.RebuildDropTimeout()),
blobovniczatree.WithBlobovniczaLogger(logger.NewLoggerWrapper(zap.NewNop())),
blobovniczatree.WithBlobovniczaTreeLogger(logger.NewLoggerWrapper(zap.NewNop())),
blobovniczatree.WithObjectSizeLimit(sh.SmallSizeLimit()),
}
ss = append(ss, blobstor.SubStorage{
Storage: blobovniczatree.NewBlobovniczaTree(ctx, blobTreeOpts...),
Policy: func(_ *objectSDK.Object, data []byte) bool {
return uint64(len(data)) < sh.SmallSizeLimit()
},
})
case fstree.Type:
sub := fstreeconfig.From((*config.Config)(storage))
fstreeOpts := []fstree.Option{
fstree.WithPath(storage.Path()),
fstree.WithPerm(storage.Perm()),
fstree.WithDepth(sub.Depth()),
fstree.WithNoSync(sub.NoSync()),
fstree.WithLogger(logger.NewLoggerWrapper(zap.NewNop())),
}
ss = append(ss, blobstor.SubStorage{
Storage: fstree.New(fstreeOpts...),
Policy: func(_ *objectSDK.Object, _ []byte) bool {
return true
},
})
default:
// should never happen, that has already
// been handled: when the config was read
}
}
return ss
}
type epochState struct{}
func (epochState) CurrentEpoch() uint64 {
return 0
}


@@ -63,7 +63,7 @@ func dumpNetworkConfig(cmd *cobra.Command, _ []string) error {
 		netmap.MaxObjectSizeConfig, netmap.WithdrawFeeConfig,
 		netmap.MaxECDataCountConfig, netmap.MaxECParityCountConfig:
 		nbuf := make([]byte, 8)
-		copy(nbuf[:], v)
+		copy(nbuf, v)
 		n := binary.LittleEndian.Uint64(nbuf)
 		_, _ = tw.Write(fmt.Appendf(nil, "%s:\t%d (int)\n", k, n))
 	case netmap.HomomorphicHashingDisabledKey, netmap.MaintenanceModeAllowedConfig:


@@ -5,6 +5,7 @@ import (
 	"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/commonflags"
 	"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/modules/config"
+	"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/modules/maintenance"
 	"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/modules/metabase"
 	"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/modules/morph"
 	"git.frostfs.info/TrueCloudLab/frostfs-node/misc"
@@ -41,6 +42,7 @@ func init() {
 	rootCmd.AddCommand(config.RootCmd)
 	rootCmd.AddCommand(morph.RootCmd)
 	rootCmd.AddCommand(metabase.RootCmd)
+	rootCmd.AddCommand(maintenance.RootCmd)
 	rootCmd.AddCommand(autocomplete.Command("frostfs-adm"))
 	rootCmd.AddCommand(gendoc.Command(rootCmd, gendoc.Options{}))

@@ -44,6 +44,7 @@ is set to current epoch + n.
 		_ = viper.BindPFlag(commonflags.WalletPath, ff.Lookup(commonflags.WalletPath))
 		_ = viper.BindPFlag(commonflags.Account, ff.Lookup(commonflags.Account))
+		_ = viper.BindPFlag(commonflags.RPC, ff.Lookup(commonflags.RPC))
 	},
 }
@@ -81,7 +82,7 @@ func createToken(cmd *cobra.Command, _ []string) {
 	commonCmd.ExitOnErr(cmd, "can't parse --"+notValidBeforeFlag+" flag: %w", err)
 	if iatRelative || expRelative || nvbRelative {
-		endpoint, _ := cmd.Flags().GetString(commonflags.RPC)
+		endpoint := viper.GetString(commonflags.RPC)
 		if len(endpoint) == 0 {
 			commonCmd.ExitOnErr(cmd, "can't fetch current epoch: %w", fmt.Errorf("'%s' flag value must be specified", commonflags.RPC))
 		}


@@ -40,9 +40,9 @@ func (repl *policyPlaygroundREPL) handleLs(args []string) error {
 	i := 1
 	for id, node := range repl.nodes {
 		var attrs []string
-		node.IterateAttributes(func(k, v string) {
+		for k, v := range node.Attributes() {
 			attrs = append(attrs, fmt.Sprintf("%s:%q", k, v))
-		})
+		}
 		fmt.Fprintf(repl.console, "\t%2d: id=%s attrs={%v}\n", i, id, strings.Join(attrs, " "))
 		i++
 	}
@@ -260,6 +260,28 @@ Example of usage:
 	},
 }
+func (repl *policyPlaygroundREPL) handleCommand(args []string) error {
+	if len(args) == 0 {
+		return nil
+	}
+	switch args[0] {
+	case "list", "ls":
+		return repl.handleLs(args[1:])
+	case "add":
+		return repl.handleAdd(args[1:])
+	case "load":
+		return repl.handleLoad(args[1:])
+	case "remove", "rm":
+		return repl.handleRemove(args[1:])
+	case "eval":
+		return repl.handleEval(args[1:])
+	case "help":
+		return repl.handleHelp(args[1:])
+	}
+	return fmt.Errorf("unknown command %q. See 'help' for assistance", args[0])
+}
 func (repl *policyPlaygroundREPL) run() error {
 	if len(viper.GetString(commonflags.RPC)) > 0 {
 		key := key.GetOrGenerate(repl.cmd)
@@ -277,15 +299,9 @@ func (repl *policyPlaygroundREPL) run() error {
 		}
 	}
-	cmdHandlers := map[string]func([]string) error{
-		"list":   repl.handleLs,
-		"ls":     repl.handleLs,
-		"add":    repl.handleAdd,
-		"load":   repl.handleLoad,
-		"remove": repl.handleRemove,
-		"rm":     repl.handleRemove,
-		"eval":   repl.handleEval,
-		"help":   repl.handleHelp,
+	if len(viper.GetString(netmapConfigPath)) > 0 {
+		err := repl.handleLoad([]string{viper.GetString(netmapConfigPath)})
+		commonCmd.ExitOnErr(repl.cmd, "load netmap config error: %w", err)
 	}
 	var cfgCompleter []readline.PrefixCompleterInterface
@@ -326,18 +342,9 @@ func (repl *policyPlaygroundREPL) run() error {
 		}
 		exit = false
-		parts := strings.Fields(line)
-		if len(parts) == 0 {
-			continue
-		}
-		cmd := parts[0]
-		if handler, exists := cmdHandlers[cmd]; exists {
-			if err := handler(parts[1:]); err != nil {
-				fmt.Fprintf(repl.console, "error: %v\n", err)
-			}
-		} else {
-			fmt.Fprintf(repl.console, "error: unknown command %q\n", cmd)
-		}
+		if err := repl.handleCommand(strings.Fields(line)); err != nil {
+			fmt.Fprintf(repl.console, "error: %v\n", err)
+		}
 	}
 }
@@ -352,6 +359,14 @@ If a wallet and endpoint is provided, the initial netmap data will be loaded fro
 	},
 }
+const (
+	netmapConfigPath  = "netmap-config"
+	netmapConfigUsage = "Path to the netmap configuration file"
+)
 func initContainerPolicyPlaygroundCmd() {
 	commonflags.Init(policyPlaygroundCmd)
+	policyPlaygroundCmd.Flags().String(netmapConfigPath, "", netmapConfigUsage)
+	_ = viper.BindPFlag(netmapConfigPath, policyPlaygroundCmd.Flags().Lookup(netmapConfigPath))
 }


@@ -62,11 +62,11 @@ func prettyPrintNodeInfo(cmd *cobra.Command, i netmap.NodeInfo) {
 	cmd.Println("state:", stateWord)
-	netmap.IterateNetworkEndpoints(i, func(s string) {
+	for s := range i.NetworkEndpoints() {
 		cmd.Println("address:", s)
-	})
+	}
-	i.IterateAttributes(func(key, value string) {
+	for key, value := range i.Attributes() {
 		cmd.Printf("attribute: %s=%s\n", key, value)
-	})
+	}
 }


@@ -18,6 +18,7 @@ import (
 	oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
 	"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/user"
 	"github.com/spf13/cobra"
+	"github.com/spf13/viper"
 )
 // object lock command.
@@ -78,7 +79,7 @@ var objectLockCmd = &cobra.Command{
 		ctx, cancel := context.WithTimeout(context.Background(), time.Second*30)
 		defer cancel()
-		endpoint, _ := cmd.Flags().GetString(commonflags.RPC)
+		endpoint := viper.GetString(commonflags.RPC)
 		currEpoch, err := internalclient.GetCurrentEpoch(ctx, cmd, endpoint)
 		commonCmd.ExitOnErr(cmd, "Request current epoch: %w", err)


@@ -7,6 +7,7 @@ import (
 	"encoding/json"
 	"errors"
 	"fmt"
+	"slices"
 	"sync"
 	internalclient "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/client"
@@ -460,17 +461,11 @@ func createClient(ctx context.Context, cmd *cobra.Command, candidate netmapSDK.N
 	var cli *client.Client
 	var addresses []string
 	if preferInternal, _ := cmd.Flags().GetBool(preferInternalAddressesFlag); preferInternal {
-		candidate.IterateNetworkEndpoints(func(s string) bool {
-			addresses = append(addresses, s)
-			return false
-		})
+		addresses = slices.AppendSeq(addresses, candidate.NetworkEndpoints())
 		addresses = append(addresses, candidate.ExternalAddresses()...)
 	} else {
 		addresses = append(addresses, candidate.ExternalAddresses()...)
-		candidate.IterateNetworkEndpoints(func(s string) bool {
-			addresses = append(addresses, s)
-			return false
-		})
+		addresses = slices.AppendSeq(addresses, candidate.NetworkEndpoints())
 	}
 	var lastErr error
var lastErr error var lastErr error


@@ -262,13 +262,8 @@ func OpenSessionViaClient(cmd *cobra.Command, dst SessionPrm, cli *client.Client
 	if _, ok := dst.(*internal.DeleteObjectPrm); ok {
 		common.PrintVerbose(cmd, "Collecting relatives of the removal object...")
-		rels := collectObjectRelatives(cmd, cli, cnr, *obj)
-		if len(rels) == 0 {
-			objs = []oid.ID{*obj}
-		} else {
-			objs = append(rels, *obj)
-		}
+		objs = collectObjectRelatives(cmd, cli, cnr, *obj)
+		objs = append(objs, *obj)
 	}
 }


@@ -4,12 +4,14 @@ import (
 	"context"
 	"os"
 	"os/signal"
+	"strconv"
 	"syscall"
 	configViper "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common/config"
 	"git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs"
 	control "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control/ir"
 	"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/logger"
+	"github.com/spf13/cast"
 	"github.com/spf13/viper"
 	"go.uber.org/zap"
 )
@@ -44,11 +46,30 @@ func reloadConfig() error {
 	if err != nil {
 		return err
 	}
-	log.Reload(logPrm)
+	err = logPrm.SetTags(loggerTags())
+	if err != nil {
+		return err
+	}
+	logger.UpdateLevelForTags(logPrm)
 	return nil
 }
+func loggerTags() [][]string {
+	var res [][]string
+	for i := 0; ; i++ {
+		var item []string
+		index := strconv.FormatInt(int64(i), 10)
+		names := cast.ToString(cfg.Get("logger.tags." + index + ".names"))
+		if names == "" {
+			break
+		}
+		item = append(item, names, cast.ToString(cfg.Get("logger.tags."+index+".level")))
+		res = append(res, item)
+	}
+	return res
+}
 func watchForSignal(ctx context.Context, cancel func()) {
 	ch := make(chan os.Signal, 1)
 	signal.Notify(ch, syscall.SIGINT, syscall.SIGTERM)


@@ -80,10 +80,14 @@ func main() {
 	exitErr(err)
 	logPrm.SamplingHook = metrics.LogMetrics().GetSamplingHook()
 	logPrm.PrependTimestamp = cfg.GetBool("logger.timestamp")
+	err = logPrm.SetTags(loggerTags())
+	exitErr(err)
 	log, err = logger.NewLogger(logPrm)
 	exitErr(err)
+	logger.UpdateLevelForTags(logPrm)
 	ctx, cancel := context.WithCancel(context.Background())
 	pprofCmp = newPprofComponent()


@@ -57,7 +57,7 @@ func DefaultRecordParser(key, value []byte) (common.SchemaEntry, common.Parser,
 	r.addr.SetContainer(cnr)
 	r.addr.SetObject(obj)
-	r.data = value[:]
+	r.data = value
 	return &r, nil, nil
 }


@@ -460,11 +460,11 @@ func (ui *UI) handleInputOnSearching(event *tcell.EventKey) {
 		return
 	}
-	switch ui.mountedPage.(type) {
+	switch v := ui.mountedPage.(type) {
 	case *BucketsView:
 		ui.moveNextPage(NewBucketsView(ui, res))
 	case *RecordsView:
-		bucket := ui.mountedPage.(*RecordsView).bucket
+		bucket := v.bucket
 		ui.moveNextPage(NewRecordsView(ui, bucket, res))
 	}


@ -235,16 +235,8 @@ func (s *lruNetmapSource) updateCandidates(ctx context.Context, d time.Duration)
 			slices.Compare(n1.ExternalAddresses(), n2.ExternalAddresses()) != 0 {
 			return 1
 		}
-		var ne1 []string
-		n1.IterateNetworkEndpoints(func(s string) bool {
-			ne1 = append(ne1, s)
-			return false
-		})
-		var ne2 []string
-		n2.IterateNetworkEndpoints(func(s string) bool {
-			ne2 = append(ne2, s)
-			return false
-		})
+		ne1 := slices.Collect(n1.NetworkEndpoints())
+		ne2 := slices.Collect(n2.NetworkEndpoints())
 		return slices.Compare(ne1, ne2)
 	})
 	if ret != 0 {
@ -364,15 +356,9 @@ func getNetMapNodesToUpdate(nm *netmapSDK.NetMap, candidates []netmapSDK.NodeInf
 	}
 	nodeEndpoints := make([]string, 0, nm.Nodes()[i].NumberOfNetworkEndpoints())
-	nm.Nodes()[i].IterateNetworkEndpoints(func(s string) bool {
-		nodeEndpoints = append(nodeEndpoints, s)
-		return false
-	})
+	nodeEndpoints = slices.AppendSeq(nodeEndpoints, nm.Nodes()[i].NetworkEndpoints())
 	candidateEndpoints := make([]string, 0, cnd.NumberOfNetworkEndpoints())
-	cnd.IterateNetworkEndpoints(func(s string) bool {
-		candidateEndpoints = append(candidateEndpoints, s)
-		return false
-	})
+	candidateEndpoints = slices.AppendSeq(candidateEndpoints, cnd.NetworkEndpoints())
 	if slices.Compare(nodeEndpoints, candidateEndpoints) != 0 {
 		update = true
 		tmp.endpoints = candidateEndpoints


@ -40,6 +40,7 @@ import (
 	netmapCore "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/core/netmap"
 	"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor"
 	"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/blobovniczatree"
+	"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/compression"
 	"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/fstree"
 	"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/engine"
 	meta "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/metabase"
@ -109,6 +110,7 @@ type applicationConfiguration struct {
 		destination string
 		timestamp   bool
 		options     []zap.Option
+		tags        [][]string
 	}

 	ObjectCfg struct {
@ -127,12 +129,9 @@ type applicationConfiguration struct {
 }

 type shardCfg struct {
-	compress                         bool
-	estimateCompressibility          bool
-	estimateCompressibilityThreshold float64
+	compression compression.Config

 	smallSizeObjectLimit uint64

-	uncompressableContentType []string

 	refillMetabase             bool
 	refillMetabaseWorkersCount int
 	mode                       shardmode.Mode
@ -241,19 +240,21 @@ func (a *applicationConfiguration) readConfig(c *config.Config) error {
 	})}
 	}
 	a.LoggerCfg.options = opts
+	a.LoggerCfg.tags = loggerconfig.Tags(c)

 	// Object

 	a.ObjectCfg.tombstoneLifetime = objectconfig.TombstoneLifetime(c)
-	var pm []placement.Metric
-	for _, raw := range objectconfig.Get(c).Priority() {
-		m, err := placement.ParseMetric(raw)
-		if err != nil {
-			return err
-		}
-		pm = append(pm, m)
-	}
-	a.ObjectCfg.priorityMetrics = pm
+	locodeDBPath := nodeconfig.LocodeDBPath(c)
+	parser, err := placement.NewMetricsParser(locodeDBPath)
+	if err != nil {
+		return fmt.Errorf("metrics parser creation: %w", err)
+	}
+	m, err := parser.ParseMetrics(objectconfig.Get(c).Priority())
+	if err != nil {
+		return fmt.Errorf("parse metrics: %w", err)
+	}
+	a.ObjectCfg.priorityMetrics = m

 	// Storage Engine
@ -269,10 +270,7 @@ func (a *applicationConfiguration) updateShardConfig(c *config.Config, source *s
 	target.refillMetabase = source.RefillMetabase()
 	target.refillMetabaseWorkersCount = source.RefillMetabaseWorkersCount()
 	target.mode = source.Mode()
-	target.compress = source.Compress()
-	target.estimateCompressibility = source.EstimateCompressibility()
-	target.estimateCompressibilityThreshold = source.EstimateCompressibilityThreshold()
-	target.uncompressableContentType = source.UncompressableContentTypes()
+	target.compression = source.Compression()
 	target.smallSizeObjectLimit = source.SmallSizeLimit()

 	a.setShardWriteCacheConfig(&target, source)
@ -383,14 +381,11 @@ func (a *applicationConfiguration) setGCConfig(target *shardCfg, source *shardco
 }

 func (a *applicationConfiguration) setLimiter(target *shardCfg, source *shardconfig.Config) error {
-	limitsConfig := source.Limits()
+	limitsConfig := source.Limits().ToConfig()
 	limiter, err := qos.NewLimiter(limitsConfig)
 	if err != nil {
 		return err
 	}
-	if target.limiter != nil {
-		target.limiter.Close()
-	}
 	target.limiter = limiter
 	return nil
 }
@ -727,6 +722,7 @@ func initCfg(appCfg *config.Config) *cfg {
 	logPrm.SamplingHook = c.metricsCollector.LogMetrics().GetSamplingHook()
 	log, err := logger.NewLogger(logPrm)
 	fatalOnErr(err)
+	logger.UpdateLevelForTags(logPrm)

 	c.internals = initInternals(appCfg, log)
@ -895,7 +891,7 @@ func (c *cfg) engineOpts() []engine.Option {
 	opts = append(opts,
 		engine.WithErrorThreshold(c.EngineCfg.errorThreshold),
-		engine.WithLogger(c.log),
+		engine.WithLogger(c.log.WithTag(logger.TagEngine)),
 		engine.WithLowMemoryConsumption(c.EngineCfg.lowMem),
 	)
@ -932,7 +928,7 @@ func (c *cfg) getWriteCacheOpts(shCfg shardCfg) []writecache.Option {
 		writecache.WithMaxCacheSize(wcRead.sizeLimit),
 		writecache.WithMaxCacheCount(wcRead.countLimit),
 		writecache.WithNoSync(wcRead.noSync),
-		writecache.WithLogger(c.log),
+		writecache.WithLogger(c.log.WithTag(logger.TagWriteCache)),
 		writecache.WithQoSLimiter(shCfg.limiter),
 	)
 }
@ -972,7 +968,8 @@ func (c *cfg) getSubstorageOpts(ctx context.Context, shCfg shardCfg) []blobstor.
 			blobovniczatree.WithOpenedCacheExpInterval(sRead.openedCacheExpInterval),
 			blobovniczatree.WithInitWorkerCount(sRead.initWorkerCount),
 			blobovniczatree.WithWaitBeforeDropDB(sRead.rebuildDropTimeout),
-			blobovniczatree.WithLogger(c.log),
+			blobovniczatree.WithBlobovniczaLogger(c.log.WithTag(logger.TagBlobovnicza)),
+			blobovniczatree.WithBlobovniczaTreeLogger(c.log.WithTag(logger.TagBlobovniczaTree)),
 			blobovniczatree.WithObjectSizeLimit(shCfg.smallSizeObjectLimit),
 		}
@ -995,7 +992,7 @@ func (c *cfg) getSubstorageOpts(ctx context.Context, shCfg shardCfg) []blobstor.
 			fstree.WithPerm(sRead.perm),
 			fstree.WithDepth(sRead.depth),
 			fstree.WithNoSync(sRead.noSync),
-			fstree.WithLogger(c.log),
+			fstree.WithLogger(c.log.WithTag(logger.TagFSTree)),
 		}
 		if c.metricsCollector != nil {
 			fstreeOpts = append(fstreeOpts,
@ -1025,12 +1022,9 @@ func (c *cfg) getShardOpts(ctx context.Context, shCfg shardCfg) shardOptsWithID
 	ss := c.getSubstorageOpts(ctx, shCfg)

 	blobstoreOpts := []blobstor.Option{
-		blobstor.WithCompressObjects(shCfg.compress),
-		blobstor.WithUncompressableContentTypes(shCfg.uncompressableContentType),
-		blobstor.WithCompressibilityEstimate(shCfg.estimateCompressibility),
-		blobstor.WithCompressibilityEstimateThreshold(shCfg.estimateCompressibilityThreshold),
+		blobstor.WithCompression(shCfg.compression),
 		blobstor.WithStorages(ss),
-		blobstor.WithLogger(c.log),
+		blobstor.WithLogger(c.log.WithTag(logger.TagBlobstor)),
 	}
 	if c.metricsCollector != nil {
 		blobstoreOpts = append(blobstoreOpts, blobstor.WithMetrics(lsmetrics.NewBlobstoreMetrics(c.metricsCollector.Blobstore())))
@ -1055,7 +1049,7 @@ func (c *cfg) getShardOpts(ctx context.Context, shCfg shardCfg) shardOptsWithID
 	var sh shardOptsWithID
 	sh.configID = shCfg.id()
 	sh.shOpts = []shard.Option{
-		shard.WithLogger(c.log),
+		shard.WithLogger(c.log.WithTag(logger.TagShard)),
 		shard.WithRefillMetabase(shCfg.refillMetabase),
 		shard.WithRefillMetabaseWorkersCount(shCfg.refillMetabaseWorkersCount),
 		shard.WithMode(shCfg.mode),
@ -1094,6 +1088,11 @@ func (c *cfg) loggerPrm() (logger.Prm, error) {
 	}
 	prm.PrependTimestamp = c.LoggerCfg.timestamp
 	prm.Options = c.LoggerCfg.options
+	err = prm.SetTags(c.LoggerCfg.tags)
+	if err != nil {
+		// not expected since validation should be performed before
+		return logger.Prm{}, errors.New("incorrect allowed tags format: " + c.LoggerCfg.destination)
+	}

 	return prm, nil
 }
@ -1381,7 +1380,7 @@ func (c *cfg) getComponents(ctx context.Context) []dCmp {
 		if err != nil {
 			return err
 		}
-		c.log.Reload(prm)
+		logger.UpdateLevelForTags(prm)
 		return nil
 	}})
 	components = append(components, dCmp{"runtime", func() error {
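The `WithTag` calls threaded through this file give every component its own logger so that `UpdateLevelForTags` can later retune levels per tag. With plain zap the closest analogue is a child logger carrying a component field; this is an illustration of the idea, not the FrostFS `logger` package:

```go
package main

import "go.uber.org/zap"

func main() {
	base, err := zap.NewProduction()
	if err != nil {
		panic(err)
	}
	defer base.Sync()

	// Roughly what c.log.WithTag(logger.TagEngine) achieves: every entry
	// from this child logger is attributable to one component.
	engineLog := base.With(zap.String("tag", "engine"))
	engineLog.Info("shard attached", zap.String("shard_id", "shard-0"))
}
```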


@ -11,10 +11,11 @@ import (
 	blobovniczaconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/engine/shard/blobstor/blobovnicza"
 	fstreeconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/engine/shard/blobstor/fstree"
 	gcconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/engine/shard/gc"
-	limitsconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/engine/shard/limits"
 	piloramaconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/engine/shard/pilorama"
 	writecacheconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/engine/shard/writecache"
 	configtest "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/test"
+	"git.frostfs.info/TrueCloudLab/frostfs-node/internal/qos"
+	"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/compression"
 	"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/shard/mode"
 	"github.com/stretchr/testify/require"
 )
@ -100,10 +101,11 @@ func TestEngineSection(t *testing.T) {
 		require.Equal(t, 100, meta.BoltDB().MaxBatchSize())
 		require.Equal(t, 10*time.Millisecond, meta.BoltDB().MaxBatchDelay())

-		require.Equal(t, true, sc.Compress())
-		require.Equal(t, []string{"audio/*", "video/*"}, sc.UncompressableContentTypes())
-		require.Equal(t, true, sc.EstimateCompressibility())
-		require.Equal(t, float64(0.7), sc.EstimateCompressibilityThreshold())
+		require.Equal(t, true, sc.Compression().Enabled)
+		require.Equal(t, compression.LevelFastest, sc.Compression().Level)
+		require.Equal(t, []string{"audio/*", "video/*"}, sc.Compression().UncompressableContentTypes)
+		require.Equal(t, true, sc.Compression().EstimateCompressibility)
+		require.Equal(t, float64(0.7), sc.Compression().EstimateCompressibilityThreshold)
 		require.EqualValues(t, 102400, sc.SmallSizeLimit())

 		require.Equal(t, 2, len(ss))
@ -135,8 +137,8 @@ func TestEngineSection(t *testing.T) {
 		require.Equal(t, mode.ReadOnly, sc.Mode())
 		require.Equal(t, 100, sc.RefillMetabaseWorkersCount())

-		readLimits := limits.Read()
-		writeLimits := limits.Write()
+		readLimits := limits.ToConfig().Read
+		writeLimits := limits.ToConfig().Write
 		require.Equal(t, 30*time.Second, readLimits.IdleTimeout)
 		require.Equal(t, int64(10_000), readLimits.MaxRunningOps)
 		require.Equal(t, int64(1_000), readLimits.MaxWaitingOps)
@ -144,7 +146,7 @@ func TestEngineSection(t *testing.T) {
 		require.Equal(t, int64(1_000), writeLimits.MaxRunningOps)
 		require.Equal(t, int64(100), writeLimits.MaxWaitingOps)
 		require.ElementsMatch(t, readLimits.Tags,
-			[]limitsconfig.IOTagConfig{
+			[]qos.IOTagConfig{
 				{
 					Tag:    "internal",
 					Weight: toPtr(20),
@ -173,9 +175,14 @@ func TestEngineSection(t *testing.T) {
 					LimitOps:   toPtr(25000),
 					Prohibited: true,
 				},
+				{
+					Tag:      "treesync",
+					Weight:   toPtr(5),
+					LimitOps: toPtr(25),
+				},
 			})
 		require.ElementsMatch(t, writeLimits.Tags,
-			[]limitsconfig.IOTagConfig{
+			[]qos.IOTagConfig{
 				{
 					Tag:    "internal",
 					Weight: toPtr(200),
@ -203,6 +210,11 @@ func TestEngineSection(t *testing.T) {
 					Weight:   toPtr(50),
 					LimitOps: toPtr(2500),
 				},
+				{
+					Tag:      "treesync",
+					Weight:   toPtr(50),
+					LimitOps: toPtr(100),
+				},
 			})
 	case 1:
 		require.Equal(t, "tmp/1/blob/pilorama.db", pl.Path())
@ -226,8 +238,9 @@ func TestEngineSection(t *testing.T) {
 		require.Equal(t, 200, meta.BoltDB().MaxBatchSize())
 		require.Equal(t, 20*time.Millisecond, meta.BoltDB().MaxBatchDelay())

-		require.Equal(t, false, sc.Compress())
-		require.Equal(t, []string(nil), sc.UncompressableContentTypes())
+		require.Equal(t, false, sc.Compression().Enabled)
+		require.Equal(t, compression.LevelDefault, sc.Compression().Level)
+		require.Equal(t, []string(nil), sc.Compression().UncompressableContentTypes)
 		require.EqualValues(t, 102400, sc.SmallSizeLimit())

 		require.Equal(t, 2, len(ss))
@ -259,14 +272,14 @@ func TestEngineSection(t *testing.T) {
 		require.Equal(t, mode.ReadWrite, sc.Mode())
 		require.Equal(t, shardconfig.RefillMetabaseWorkersCountDefault, sc.RefillMetabaseWorkersCount())

-		readLimits := limits.Read()
-		writeLimits := limits.Write()
-		require.Equal(t, limitsconfig.DefaultIdleTimeout, readLimits.IdleTimeout)
-		require.Equal(t, limitsconfig.NoLimit, readLimits.MaxRunningOps)
-		require.Equal(t, limitsconfig.NoLimit, readLimits.MaxWaitingOps)
-		require.Equal(t, limitsconfig.DefaultIdleTimeout, writeLimits.IdleTimeout)
-		require.Equal(t, limitsconfig.NoLimit, writeLimits.MaxRunningOps)
-		require.Equal(t, limitsconfig.NoLimit, writeLimits.MaxWaitingOps)
+		readLimits := limits.ToConfig().Read
+		writeLimits := limits.ToConfig().Write
+		require.Equal(t, qos.DefaultIdleTimeout, readLimits.IdleTimeout)
+		require.Equal(t, qos.NoLimit, readLimits.MaxRunningOps)
+		require.Equal(t, qos.NoLimit, readLimits.MaxWaitingOps)
+		require.Equal(t, qos.DefaultIdleTimeout, writeLimits.IdleTimeout)
+		require.Equal(t, qos.NoLimit, writeLimits.MaxRunningOps)
+		require.Equal(t, qos.NoLimit, writeLimits.MaxWaitingOps)
 		require.Equal(t, 0, len(readLimits.Tags))
 		require.Equal(t, 0, len(writeLimits.Tags))
 	}


@ -8,6 +8,7 @@ import (
 	metabaseconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/engine/shard/metabase"
 	piloramaconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/engine/shard/pilorama"
 	writecacheconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/engine/shard/writecache"
+	"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/compression"
 	"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/shard/mode"
 )
@ -27,42 +28,27 @@ func From(c *config.Config) *Config {
 	return (*Config)(c)
 }

-// Compress returns the value of "compress" config parameter.
-//
-// Returns false if the value is not a valid bool.
-func (x *Config) Compress() bool {
-	return config.BoolSafe(
-		(*config.Config)(x),
-		"compress",
-	)
-}
-
-// UncompressableContentTypes returns the value of "compress_skip_content_types" config parameter.
-//
-// Returns nil if a the value is missing or is invalid.
-func (x *Config) UncompressableContentTypes() []string {
-	return config.StringSliceSafe(
-		(*config.Config)(x),
-		"compression_exclude_content_types")
-}
-
-// EstimateCompressibility returns the value of "estimate_compressibility" config parameter.
-//
-// Returns false if the value is not a valid bool.
-func (x *Config) EstimateCompressibility() bool {
-	return config.BoolSafe(
-		(*config.Config)(x),
-		"compression_estimate_compressibility",
-	)
-}
+func (x *Config) Compression() compression.Config {
+	cc := (*config.Config)(x).Sub("compression")
+	if cc == nil {
+		return compression.Config{}
+	}
+	return compression.Config{
+		Enabled:                          config.BoolSafe(cc, "enabled"),
+		UncompressableContentTypes:       config.StringSliceSafe(cc, "exclude_content_types"),
+		Level:                            compression.Level(config.StringSafe(cc, "level")),
+		EstimateCompressibility:          config.BoolSafe(cc, "estimate_compressibility"),
+		EstimateCompressibilityThreshold: estimateCompressibilityThreshold(cc),
+	}
+}
 // EstimateCompressibilityThreshold returns the value of "estimate_compressibility_threshold" config parameter.
 //
 // Returns EstimateCompressibilityThresholdDefault if the value is not defined, not valid float or not in range [0.0; 1.0].
-func (x *Config) EstimateCompressibilityThreshold() float64 {
+func estimateCompressibilityThreshold(c *config.Config) float64 {
 	v := config.FloatOrDefault(
-		(*config.Config)(x),
-		"compression_estimate_compressibility_threshold",
+		c,
+		"estimate_compressibility_threshold",
 		EstimateCompressibilityThresholdDefault)
 	if v < 0.0 || v > 1.0 {
 		return EstimateCompressibilityThresholdDefault
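The relocated helper keeps its clamping semantics: any value outside `[0.0; 1.0]` falls back to the default (documented as `0.1` further below). A tiny standalone sketch of that guard; the names are illustrative:

```go
package main

import "fmt"

const estimateCompressibilityThresholdDefault = 0.1

// clampThreshold falls back to the default for out-of-range values
// instead of failing, matching the accessor above.
func clampThreshold(v float64) float64 {
	if v < 0.0 || v > 1.0 {
		return estimateCompressibilityThresholdDefault
	}
	return v
}

func main() {
	fmt.Println(clampThreshold(0.7)) // 0.7
	fmt.Println(clampThreshold(1.5)) // 0.1 (out of range)
}
```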


@ -1,19 +1,13 @@
 package limits

 import (
-	"math"
 	"strconv"
-	"time"

 	"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config"
+	"git.frostfs.info/TrueCloudLab/frostfs-node/internal/qos"
 	"github.com/spf13/cast"
 )

-const (
-	NoLimit            int64 = math.MaxInt64
-	DefaultIdleTimeout       = 5 * time.Minute
-)
-
 // From wraps config section into Config.
 func From(c *config.Config) *Config {
 	return (*Config)(c)
@ -23,36 +17,43 @@ func From(c *config.Config) *Config {
 // which provides access to Shard's limits configurations.
 type Config config.Config

-// Read returns the value of "read" limits config section.
-func (x *Config) Read() OpConfig {
+func (x *Config) ToConfig() qos.LimiterConfig {
+	result := qos.LimiterConfig{
+		Read:  x.read(),
+		Write: x.write(),
+	}
+	panicOnErr(result.Validate())
+	return result
+}
+
+func (x *Config) read() qos.OpConfig {
 	return x.parse("read")
 }

-// Write returns the value of "write" limits config section.
-func (x *Config) Write() OpConfig {
+func (x *Config) write() qos.OpConfig {
 	return x.parse("write")
 }

-func (x *Config) parse(sub string) OpConfig {
+func (x *Config) parse(sub string) qos.OpConfig {
 	c := (*config.Config)(x).Sub(sub)
-	var result OpConfig
+	var result qos.OpConfig

 	if s := config.Int(c, "max_waiting_ops"); s > 0 {
 		result.MaxWaitingOps = s
 	} else {
-		result.MaxWaitingOps = NoLimit
+		result.MaxWaitingOps = qos.NoLimit
 	}

 	if s := config.Int(c, "max_running_ops"); s > 0 {
 		result.MaxRunningOps = s
 	} else {
-		result.MaxRunningOps = NoLimit
+		result.MaxRunningOps = qos.NoLimit
 	}

 	if s := config.DurationSafe(c, "idle_timeout"); s > 0 {
 		result.IdleTimeout = s
 	} else {
-		result.IdleTimeout = DefaultIdleTimeout
+		result.IdleTimeout = qos.DefaultIdleTimeout
 	}

 	result.Tags = tags(c)
@ -60,43 +61,16 @@ func (x *Config) parse(sub string) OpConfig {
 	return result
 }

-type OpConfig struct {
-	// MaxWaitingOps returns the value of "max_waiting_ops" config parameter.
-	//
-	// Equals NoLimit if the value is not a positive number.
-	MaxWaitingOps int64
-	// MaxRunningOps returns the value of "max_running_ops" config parameter.
-	//
-	// Equals NoLimit if the value is not a positive number.
-	MaxRunningOps int64
-	// IdleTimeout returns the value of "idle_timeout" config parameter.
-	//
-	// Equals DefaultIdleTimeout if the value is not a valid duration.
-	IdleTimeout time.Duration
-	// Tags returns the value of "tags" config parameter.
-	//
-	// Equals nil if the value is not a valid tags config slice.
-	Tags []IOTagConfig
-}
-
-type IOTagConfig struct {
-	Tag         string
-	Weight      *float64
-	LimitOps    *float64
-	ReservedOps *float64
-	Prohibited  bool
-}
-
-func tags(c *config.Config) []IOTagConfig {
+func tags(c *config.Config) []qos.IOTagConfig {
 	c = c.Sub("tags")
-	var result []IOTagConfig
+	var result []qos.IOTagConfig
 	for i := 0; ; i++ {
 		tag := config.String(c, strconv.Itoa(i)+".tag")
 		if tag == "" {
 			return result
 		}
-		var tagConfig IOTagConfig
+		var tagConfig qos.IOTagConfig
 		tagConfig.Tag = tag

 		v := c.Value(strconv.Itoa(i) + ".weight")


@ -2,6 +2,7 @@ package loggerconfig
 import (
 	"os"
+	"strconv"
 	"time"

 	"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config"
@ -60,6 +61,21 @@ func Timestamp(c *config.Config) bool {
 	return config.BoolSafe(c.Sub(subsection), "timestamp")
 }

+// Tags returns the value of "tags" config parameter from "logger" section.
+func Tags(c *config.Config) [][]string {
+	var res [][]string
+	sub := c.Sub(subsection).Sub("tags")
+	for i := 0; ; i++ {
+		s := sub.Sub(strconv.FormatInt(int64(i), 10))
+		names := config.StringSafe(s, "names")
+		if names == "" {
+			break
+		}
+		res = append(res, []string{names, config.StringSafe(s, "level")})
+	}
+	return res
+}
+
 // ToLokiConfig extracts loki config.
 func ToLokiConfig(c *config.Config) loki.Config {
 	hostname, _ := os.Hostname()


@ -22,6 +22,9 @@ func TestLoggerSection_Level(t *testing.T) {
 		require.Equal(t, "debug", loggerconfig.Level(c))
 		require.Equal(t, "journald", loggerconfig.Destination(c))
 		require.Equal(t, true, loggerconfig.Timestamp(c))
+		tags := loggerconfig.Tags(c)
+		require.Equal(t, "main, morph", tags[0][0])
+		require.Equal(t, "debug", tags[0][1])
 	}

 	configtest.ForEachFileType(path, fileConfigTest)


@ -3,7 +3,9 @@ package nodeconfig
 import (
 	"fmt"
 	"io/fs"
+	"iter"
 	"os"
+	"slices"
 	"strconv"

 	"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config"
@ -88,12 +90,8 @@ func Wallet(c *config.Config) *keys.PrivateKey {
 type stringAddressGroup []string

-func (x stringAddressGroup) IterateAddresses(f func(string) bool) {
-	for i := range x {
-		if f(x[i]) {
-			break
-		}
-	}
+func (x stringAddressGroup) Addresses() iter.Seq[string] {
+	return slices.Values(x)
 }

 func (x stringAddressGroup) NumberOfAddresses() int {
@ -217,3 +215,8 @@ func (l PersistentPolicyRulesConfig) NoSync() bool {
 func CompatibilityMode(c *config.Config) bool {
 	return config.BoolSafe(c.Sub(subsection), "kludge_compatibility_mode")
 }
+
+// LocodeDBPath returns path to LOCODE database.
+func LocodeDBPath(c *config.Config) string {
+	return config.String(c.Sub(subsection), "locode_db_path")
+}


@ -14,6 +14,7 @@ import (
 	"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/event"
 	netmapEvent "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/event/netmap"
 	"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/subscriber"
+	"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/logger"
 	"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/rand"
 	"github.com/nspcc-dev/neo-go/pkg/core/block"
 	"github.com/nspcc-dev/neo-go/pkg/core/state"
@ -84,7 +85,7 @@ func initMorphClient(ctx context.Context, c *cfg) {
 	cli, err := client.New(ctx,
 		c.key,
 		client.WithDialTimeout(morphconfig.DialTimeout(c.appCfg)),
-		client.WithLogger(c.log),
+		client.WithLogger(c.log.WithTag(logger.TagMorph)),
 		client.WithMetrics(c.metricsCollector.MorphClientMetrics()),
 		client.WithEndpoints(addresses...),
 		client.WithConnLostCallback(func() {
@ -165,6 +166,7 @@ func listenMorphNotifications(ctx context.Context, c *cfg) {
 		err  error
 		subs subscriber.Subscriber
 	)
+	log := c.log.WithTag(logger.TagMorph)

 	fromSideChainBlock, err := c.persistate.UInt32(persistateSideChainLastBlockKey)
 	if err != nil {
@ -173,14 +175,14 @@ func listenMorphNotifications(ctx context.Context, c *cfg) {
 	}

 	subs, err = subscriber.New(ctx, &subscriber.Params{
-		Log:            c.log,
+		Log:            log,
 		StartFromBlock: fromSideChainBlock,
 		Client:         c.cfgMorph.client,
 	})
 	fatalOnErr(err)

 	lis, err := event.NewListener(event.ListenerParams{
-		Logger:     c.log,
+		Logger:     log,
 		Subscriber: subs,
 	})
 	fatalOnErr(err)
@ -198,7 +200,7 @@ func listenMorphNotifications(ctx context.Context, c *cfg) {
 	setNetmapNotificationParser(c, newEpochNotification, func(src *state.ContainedNotificationEvent) (event.Event, error) {
 		res, err := netmapEvent.ParseNewEpoch(src)
 		if err == nil {
-			c.log.Info(ctx, logs.FrostFSNodeNewEpochEventFromSidechain,
+			log.Info(ctx, logs.FrostFSNodeNewEpochEventFromSidechain,
 				zap.Uint64("number", res.(netmapEvent.NewEpoch).EpochNumber()),
 			)
 		}
@ -209,11 +211,11 @@ func listenMorphNotifications(ctx context.Context, c *cfg) {
 	registerNotificationHandlers(c.cfgContainer.scriptHash, lis, c.cfgContainer.parsers, c.cfgContainer.subscribers)

 	registerBlockHandler(lis, func(ctx context.Context, block *block.Block) {
-		c.log.Debug(ctx, logs.FrostFSNodeNewBlock, zap.Uint32("index", block.Index))
+		log.Debug(ctx, logs.FrostFSNodeNewBlock, zap.Uint32("index", block.Index))

 		err = c.persistate.SetUInt32(persistateSideChainLastBlockKey, block.Index)
 		if err != nil {
-			c.log.Warn(ctx, logs.FrostFSNodeCantUpdatePersistentState,
+			log.Warn(ctx, logs.FrostFSNodeCantUpdatePersistentState,
 				zap.String("chain", "side"),
 				zap.Uint32("block_index", block.Index))
 		}


@ -8,6 +8,7 @@ import (
 	"net"
 	"sync/atomic"

+	"git.frostfs.info/TrueCloudLab/frostfs-node/internal/assert"
 	"git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs"
 	"git.frostfs.info/TrueCloudLab/frostfs-node/internal/metrics"
 	"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/core/netmap"
@ -104,9 +105,7 @@ func (s *networkState) getNodeInfo() (res netmapSDK.NodeInfo, ok bool) {
 	v := s.nodeInfo.Load()
 	if v != nil {
 		res, ok = v.(netmapSDK.NodeInfo)
-		if !ok {
-			panic(fmt.Sprintf("unexpected value in atomic node info state: %T", v))
-		}
+		assert.True(ok, fmt.Sprintf("unexpected value in atomic node info state: %T", v))
 	}

 	return
@ -124,7 +123,11 @@ func nodeKeyFromNetmap(c *cfg) []byte {
 func (c *cfg) iterateNetworkAddresses(f func(string) bool) {
 	ni, ok := c.cfgNetmap.state.getNodeInfo()
 	if ok {
-		ni.IterateNetworkEndpoints(f)
+		for s := range ni.NetworkEndpoints() {
+			if f(s) {
+				return
+			}
+		}
 	}
 }
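`assert.True` collapses the old `if !ok { panic(...) }` pattern into one call. A minimal sketch of such a helper and a call site, assuming the package shape shown in these diffs:

```go
package main

import (
	"fmt"
	"strings"
)

// True panics with the joined details when ok is false, in the spirit
// of the internal/assert helper used above.
func True(ok bool, details ...string) {
	if !ok {
		panic(strings.Join(details, " "))
	}
}

func main() {
	var v any = 42
	res, ok := v.(int)
	True(ok, fmt.Sprintf("unexpected value in atomic node info state: %T", v))
	fmt.Println(res)
}
```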


@ -31,6 +31,7 @@ import (
 	"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/object_manager/placement"
 	"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/policer"
 	"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/replicator"
+	"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/logger"
 	"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/object"
 	objectGRPC "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/object/grpc"
 	netmapSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/netmap"
@ -217,9 +218,8 @@ func addPolicer(c *cfg, keyStorage *util.KeyStorage, clientConstructor *cache.Cl
 	}

 	remoteReader := objectService.NewRemoteReader(keyStorage, clientConstructor)

 	pol := policer.New(
-		policer.WithLogger(c.log),
+		policer.WithLogger(c.log.WithTag(logger.TagPolicer)),
 		policer.WithKeySpaceIterator(&keySpaceIterator{ng: ls}),
 		policer.WithBuryFunc(buryFn),
 		policer.WithContainerSource(c.cfgObject.cnrSource),
@ -291,7 +291,7 @@ func createReplicator(c *cfg, keyStorage *util.KeyStorage, cache *cache.ClientCa
 	ls := c.cfgObject.cfgLocalStorage.localStorage

 	return replicator.New(
-		replicator.WithLogger(c.log),
+		replicator.WithLogger(c.log.WithTag(logger.TagReplicator)),
 		replicator.WithPutTimeout(
 			replicatorconfig.PutTimeout(c.appCfg),
 		),
@ -348,7 +348,7 @@ func createSearchSvc(c *cfg, keyStorage *util.KeyStorage, traverseGen *util.Trav
 		c.netMapSource,
 		keyStorage,
 		containerSource,
-		searchsvc.WithLogger(c.log),
+		searchsvc.WithLogger(c.log.WithTag(logger.TagSearchSvc)),
 	)
 }
@ -374,7 +374,7 @@ func createGetService(c *cfg, keyStorage *util.KeyStorage, traverseGen *util.Tra
 		),
 		coreConstructor,
 		containerSource,
-		getsvc.WithLogger(c.log))
+		getsvc.WithLogger(c.log.WithTag(logger.TagGetSvc)))
 }

 func createGetServiceV2(c *cfg, sGet *getsvc.Service, keyStorage *util.KeyStorage) *getsvcV2.Service {
@ -385,7 +385,7 @@ func createGetServiceV2(c *cfg, sGet *getsvc.Service, keyStorage *util.KeyStorag
 		c.netMapSource,
 		c,
 		c.cfgObject.cnrSource,
-		getsvcV2.WithLogger(c.log),
+		getsvcV2.WithLogger(c.log.WithTag(logger.TagGetSvc)),
 	)
 }
@ -402,7 +402,7 @@ func createDeleteService(c *cfg, keyStorage *util.KeyStorage, sGet *getsvc.Servi
 			cfg: c,
 		},
 		keyStorage,
-		deletesvc.WithLogger(c.log),
+		deletesvc.WithLogger(c.log.WithTag(logger.TagDeleteSvc)),
 	)
 }


@ -14,6 +14,7 @@ import (
 	"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/session/storage"
 	"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/session/storage/persistent"
 	"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/session/storage/temporary"
+	"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/logger"
 	"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/session"
 	sessionGRPC "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/session/grpc"
 	"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/user"
@ -55,7 +56,7 @@ func initSessionService(c *cfg) {
 	server := sessionTransportGRPC.New(
 		sessionSvc.NewSignService(
 			&c.key.PrivateKey,
-			sessionSvc.NewExecutionService(c.privateTokenStore, c.respSvc, c.log),
+			sessionSvc.NewExecutionService(c.privateTokenStore, c.respSvc, c.log.WithTag(logger.TagSessionSvc)),
 		),
 	)


@ -14,6 +14,7 @@ import (
 	"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/event"
 	containerEvent "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/event/container"
 	"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/tree"
+	"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/logger"
 	cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
 	"go.uber.org/zap"
 	"google.golang.org/grpc"
@ -56,7 +57,7 @@ func initTreeService(c *cfg) {
 		tree.WithFrostfsidSubjectProvider(c.frostfsidClient),
 		tree.WithNetmapSource(c.netMapSource),
 		tree.WithPrivateKey(&c.key.PrivateKey),
-		tree.WithLogger(c.log),
+		tree.WithLogger(c.log.WithTag(logger.TagTreeSvc)),
 		tree.WithStorage(c.cfgObject.cfgLocalStorage.localStorage),
 		tree.WithContainerCacheSize(treeConfig.CacheSize()),
 		tree.WithReplicationTimeout(treeConfig.ReplicationTimeout()),


@ -30,6 +30,11 @@ func validateConfig(c *config.Config) error {
 		return fmt.Errorf("invalid logger destination: %w", err)
 	}

+	err = loggerPrm.SetTags(loggerconfig.Tags(c))
+	if err != nil {
+		return fmt.Errorf("invalid list of allowed tags: %w", err)
+	}

 	// shard configuration validation

 	shardNum := 0


@ -51,8 +51,13 @@ func ExitOnErr(cmd *cobra.Command, errFmt string, err error) {
 	}

 	cmd.PrintErrln(err)
-	if cmd.PersistentPostRun != nil {
-		cmd.PersistentPostRun(cmd, nil)
+	for p := cmd; p != nil; p = p.Parent() {
+		if p.PersistentPostRun != nil {
+			p.PersistentPostRun(cmd, nil)
+			if !cobra.EnableTraverseRunHooks {
+				break
+			}
+		}
 	}
 	os.Exit(code)
 }
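The fix walks up the command tree instead of firing only the failing command's own hook, and honors cobra's `EnableTraverseRunHooks` switch: with it set, every ancestor's `PersistentPostRun` runs; without it, only the nearest one. A reduced sketch with hypothetical commands:

```go
package main

import (
	"fmt"

	"github.com/spf13/cobra"
)

// runPersistentPostHooks mirrors the loop above: find the nearest
// ancestor with a PersistentPostRun, and keep climbing when traverse
// run hooks are enabled.
func runPersistentPostHooks(cmd *cobra.Command) {
	for p := cmd; p != nil; p = p.Parent() {
		if p.PersistentPostRun != nil {
			p.PersistentPostRun(cmd, nil)
			if !cobra.EnableTraverseRunHooks {
				break
			}
		}
	}
}

func main() {
	root := &cobra.Command{
		Use: "root",
		PersistentPostRun: func(*cobra.Command, []string) {
			fmt.Println("root cleanup")
		},
	}
	child := &cobra.Command{Use: "child"}
	root.AddCommand(child)

	runPersistentPostHooks(child) // prints "root cleanup"
}
```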


@ -27,15 +27,15 @@ func PrettyPrintNodeInfo(cmd *cobra.Command, node netmap.NodeInfo,
 	cmd.Printf("%sNode %d: %s %s ", indent, index+1, hex.EncodeToString(node.PublicKey()), strState)

-	netmap.IterateNetworkEndpoints(node, func(endpoint string) {
+	for endpoint := range node.NetworkEndpoints() {
 		cmd.Printf("%s ", endpoint)
-	})
+	}
 	cmd.Println()

 	if !short {
-		node.IterateAttributes(func(key, value string) {
+		for key, value := range node.Attributes() {
 			cmd.Printf("%s\t%s: %s\n", indent, key, value)
-		})
+		}
 	}
 }
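`Attributes()` here yields key/value pairs, i.e. an `iter.Seq2[string, string]`, which `for key, value := range ...` consumes directly. A standalone sketch using only the standard library; the sample attribute is illustrative:

```go
package main

import (
	"fmt"
	"maps"
)

func main() {
	// maps.All produces an iter.Seq2[string, string], the same shape a
	// two-value Attributes() iterator is assumed to have.
	attrs := maps.All(map[string]string{"UN-LOCODE": "RU MSK"})

	for key, value := range attrs {
		fmt.Printf("\t%s: %s\n", key, value)
	}
}
```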


@ -1,5 +1,7 @@
 FROSTFS_IR_LOGGER_LEVEL=info
 FROSTFS_IR_LOGGER_TIMESTAMP=true
+FROSTFS_IR_LOGGER_TAGS_0_NAMES="main, morph"
+FROSTFS_IR_LOGGER_TAGS_0_LEVEL="debug"

 FROSTFS_IR_WALLET_PATH=/path/to/wallet.json
 FROSTFS_IR_WALLET_ADDRESS=NUHtW3eM6a4mmFCgyyr4rj4wygsTKB88XX


@ -3,6 +3,9 @@
 logger:
   level: info # Logger level: one of "debug", "info" (default), "warn", "error", "dpanic", "panic", "fatal"
   timestamp: true
+  tags:
+    - names: "main, morph" # Possible values: `main`, `morph`, `grpc_svc`, `ir`, `processor`.
+      level: debug

 wallet:
   path: /path/to/wallet.json # Path to NEP-6 NEO wallet file


@ -1,6 +1,8 @@
 FROSTFS_LOGGER_LEVEL=debug
 FROSTFS_LOGGER_DESTINATION=journald
 FROSTFS_LOGGER_TIMESTAMP=true
+FROSTFS_LOGGER_TAGS_0_NAMES="main, morph"
+FROSTFS_LOGGER_TAGS_0_LEVEL="debug"

 FROSTFS_PPROF_ENABLED=true
 FROSTFS_PPROF_ADDRESS=localhost:6060
@ -23,6 +25,7 @@ FROSTFS_NODE_ATTRIBUTE_1="UN-LOCODE:RU MSK"
 FROSTFS_NODE_RELAY=true
 FROSTFS_NODE_PERSISTENT_SESSIONS_PATH=/sessions
 FROSTFS_NODE_PERSISTENT_STATE_PATH=/state
+FROSTFS_NODE_LOCODE_DB_PATH=/path/to/locode/db

 # Tree service section
 FROSTFS_TREE_ENABLED=true
@ -121,7 +124,8 @@ FROSTFS_STORAGE_SHARD_0_METABASE_PERM=0644
 FROSTFS_STORAGE_SHARD_0_METABASE_MAX_BATCH_SIZE=100
 FROSTFS_STORAGE_SHARD_0_METABASE_MAX_BATCH_DELAY=10ms
 ### Blobstor config
-FROSTFS_STORAGE_SHARD_0_COMPRESS=true
+FROSTFS_STORAGE_SHARD_0_COMPRESSION_ENABLED=true
+FROSTFS_STORAGE_SHARD_0_COMPRESSION_LEVEL=fastest
 FROSTFS_STORAGE_SHARD_0_COMPRESSION_EXCLUDE_CONTENT_TYPES="audio/* video/*"
 FROSTFS_STORAGE_SHARD_0_COMPRESSION_ESTIMATE_COMPRESSIBILITY=true
 FROSTFS_STORAGE_SHARD_0_COMPRESSION_ESTIMATE_COMPRESSIBILITY_THRESHOLD=0.7
@ -181,6 +185,9 @@ FROSTFS_STORAGE_SHARD_0_LIMITS_READ_TAGS_4_TAG=policer
 FROSTFS_STORAGE_SHARD_0_LIMITS_READ_TAGS_4_WEIGHT=5
 FROSTFS_STORAGE_SHARD_0_LIMITS_READ_TAGS_4_LIMIT_OPS=25000
 FROSTFS_STORAGE_SHARD_0_LIMITS_READ_TAGS_4_PROHIBITED=true
+FROSTFS_STORAGE_SHARD_0_LIMITS_READ_TAGS_5_TAG=treesync
+FROSTFS_STORAGE_SHARD_0_LIMITS_READ_TAGS_5_WEIGHT=5
+FROSTFS_STORAGE_SHARD_0_LIMITS_READ_TAGS_5_LIMIT_OPS=25
 FROSTFS_STORAGE_SHARD_0_LIMITS_WRITE_TAGS_0_TAG=internal
 FROSTFS_STORAGE_SHARD_0_LIMITS_WRITE_TAGS_0_WEIGHT=200
 FROSTFS_STORAGE_SHARD_0_LIMITS_WRITE_TAGS_0_LIMIT_OPS=0
@ -198,6 +205,9 @@ FROSTFS_STORAGE_SHARD_0_LIMITS_WRITE_TAGS_3_LIMIT_OPS=2500
 FROSTFS_STORAGE_SHARD_0_LIMITS_WRITE_TAGS_4_TAG=policer
 FROSTFS_STORAGE_SHARD_0_LIMITS_WRITE_TAGS_4_WEIGHT=50
 FROSTFS_STORAGE_SHARD_0_LIMITS_WRITE_TAGS_4_LIMIT_OPS=2500
+FROSTFS_STORAGE_SHARD_0_LIMITS_WRITE_TAGS_5_TAG=treesync
+FROSTFS_STORAGE_SHARD_0_LIMITS_WRITE_TAGS_5_WEIGHT=50
+FROSTFS_STORAGE_SHARD_0_LIMITS_WRITE_TAGS_5_LIMIT_OPS=100

 ## 1 shard
 ### Flag to refill Metabase from BlobStor


@ -2,7 +2,13 @@
   "logger": {
     "level": "debug",
     "destination": "journald",
-    "timestamp": true
+    "timestamp": true,
+    "tags": [
+      {
+        "names": "main, morph",
+        "level": "debug"
+      }
+    ]
   },
   "pprof": {
     "enabled": true,
@ -37,7 +43,8 @@
     },
     "persistent_state": {
       "path": "/state"
-    }
+    },
+    "locode_db_path": "/path/to/locode/db"
   },
   "grpc": {
     "0": {
@ -182,12 +189,15 @@
         "max_batch_size": 100,
         "max_batch_delay": "10ms"
       },
-      "compress": true,
-      "compression_exclude_content_types": [
-        "audio/*", "video/*"
-      ],
-      "compression_estimate_compressibility": true,
-      "compression_estimate_compressibility_threshold": 0.7,
+      "compression": {
+        "enabled": true,
+        "level": "fastest",
+        "exclude_content_types": [
+          "audio/*", "video/*"
+        ],
+        "estimate_compressibility": true,
+        "estimate_compressibility_threshold": 0.7
+      },
       "small_object_size": 102400,
       "blobstor": [
         {
@ -254,6 +264,11 @@
               "weight": 5,
               "limit_ops": 25000,
               "prohibited": true
+            },
+            {
+              "tag": "treesync",
+              "weight": 5,
+              "limit_ops": 25
             }
           ]
         },
@ -288,6 +303,11 @@
               "tag": "policer",
               "weight": 50,
               "limit_ops": 2500
+            },
+            {
+              "tag": "treesync",
+              "weight": 50,
+              "limit_ops": 100
             }
           ]
         }
@ -311,7 +331,9 @@
         "max_batch_size": 200,
         "max_batch_delay": "20ms"
       },
-      "compress": false,
+      "compression": {
+        "enabled": false
+      },
       "small_object_size": 102400,
       "blobstor": [
         {


@ -2,6 +2,9 @@ logger:
   level: debug # logger level: one of "debug", "info" (default), "warn", "error", "dpanic", "panic", "fatal"
   destination: journald # logger destination: one of "stdout" (default), "journald"
   timestamp: true
+  tags:
+    - names: "main, morph"
+      level: debug

 systemdnotify:
   enabled: true
@ -36,6 +39,7 @@ node:
     path: /sessions # path to persistent session tokens file of Storage node (default: in-memory sessions)
   persistent_state:
     path: /state # path to persistent state file of Storage node
+  locode_db_path: "/path/to/locode/db"

 grpc:
   - endpoint: s01.frostfs.devenv:8080 # endpoint for gRPC server
@ -159,7 +163,8 @@ storage:
     max_batch_delay: 5ms # maximum delay for a batch of operations to be executed
     max_batch_size: 100 # maximum amount of operations in a single batch

-    compress: false # turn on/off zstd(level 3) compression of stored objects
+    compression:
+      enabled: false # turn on/off zstd compression of stored objects
     small_object_size: 100 kb # size threshold for "small" objects which are cached in key-value DB, not in FS, bytes

     blobstor:
@ -201,12 +206,14 @@ storage:
       max_batch_size: 100
       max_batch_delay: 10ms

-    compress: true # turn on/off zstd(level 3) compression of stored objects
-    compression_exclude_content_types:
-      - audio/*
-      - video/*
-    compression_estimate_compressibility: true
-    compression_estimate_compressibility_threshold: 0.7
+    compression:
+      enabled: true # turn on/off zstd compression of stored objects
+      level: fastest
+      exclude_content_types:
+        - audio/*
+        - video/*
+      estimate_compressibility: true
+      estimate_compressibility_threshold: 0.7

     blobstor:
       - type: blobovnicza
@ -253,6 +260,9 @@ storage:
           weight: 5
           limit_ops: 25000
           prohibited: true
+        - tag: treesync
+          weight: 5
+          limit_ops: 25
       write:
         max_running_ops: 1000
         max_waiting_ops: 100
@ -275,6 +285,9 @@ storage:
         - tag: policer
           weight: 50
           limit_ops: 2500
+        - tag: treesync
+          weight: 50
+          limit_ops: 100
   1:
     writecache:


@ -13,7 +13,8 @@ There are some custom types used for brevity:
 # Structure

 | Section      | Description                                              |
-|------------------------|---------------------------------------------------------------------|
+|--------------|----------------------------------------------------------|
+| `node`       | [Node parameters](#node-section)                         |
 | `logger`     | [Logging parameters](#logger-section)                    |
 | `pprof`      | [PProf configuration](#pprof-section)                    |
 | `prometheus` | [Prometheus metrics configuration](#prometheus-section)  |
@ -111,11 +112,21 @@ Contains logger parameters.
 ```yaml
 logger:
   level: info
+  tags:
+    - names: "main, morph"
+      level: debug
 ```

 | Parameter | Type     | Default value | Description                                                                                      |
 |-----------|----------|---------------|--------------------------------------------------------------------------------------------------|
 | `level`   | `string` | `info`        | Logging level.<br/>Possible values: `debug`, `info`, `warn`, `error`, `dpanic`, `panic`, `fatal` |
+| `tags`    | list of [tags descriptions](#tags-subsection) | | Array of tag descriptions. |

+## `tags` subsection
+
+| Parameter | Type     | Default value | Description |
+|-----------|----------|---------------|-------------|
+| `names`   | `string` |               | List of components divided by `,`.<br/>Possible values: `main`, `morph`, `grpcsvc`, `ir`, `processor`, `engine`, `blobovnicza`, `blobstor`, `fstree`, `gc`, `shard`, `writecache`, `deletesvc`, `getsvc`, `searchsvc`, `sessionsvc`, `treesvc`, `policer`, `replicator`. |
+| `level`   | `string` |               | Logging level for the components from `names`; overrides the default logging level. |
 # `contracts` section

 Contains override values for FrostFS side-chain contract hashes. Most of the time contract
@ -185,11 +196,8 @@ Contains configuration for each shard. Keys must be consecutive numbers starting
 The following table describes configuration for each shard.

 | Parameter | Type | Default value | Description |
 |-----------|------|---------------|-------------|
-| `compress` | `bool` | `false` | Flag to enable compression. |
-| `compression_exclude_content_types` | `[]string` | | List of content-types to disable compression for. Content-type is taken from `Content-Type` object attribute. Each element can contain a star `*` as a first (last) character, which matches any prefix (suffix). |
-| `compression_estimate_compressibility` | `bool` | `false` | If `true`, then noramalized compressibility estimation is used to decide compress data or not. |
-| `compression_estimate_compressibility_threshold` | `float` | `0.1` | Normilized compressibility estimate threshold: data will compress if estimation if greater than this value. |
+| `compression` | [Compression config](#compression-subsection) | | Compression config. |
 | `mode` | `string` | `read-write` | Shard Mode.<br/>Possible values: `read-write`, `read-only`, `degraded`, `degraded-read-only`, `disabled` |
 | `resync_metabase` | `bool` | `false` | Flag to enable metabase resync on start. |
 | `resync_metabase_worker_count` | `int` | `1000` | Count of concurrent workers to resync metabase. |
@ -200,6 +208,29 @@ The following table describes configuration for each shard.
 | `gc` | [GC config](#gc-subsection) | | GC configuration. |
 | `limits` | [Shard limits config](#limits-subsection) | | Shard limits configuration. |

+### `compression` subsection
+
+Contains compression config.
+
+```yaml
+compression:
+  enabled: true
+  level: smallest_size
+  exclude_content_types:
+    - audio/*
+    - video/*
+  estimate_compressibility: true
+  estimate_compressibility_threshold: 0.7
+```
+
+| Parameter | Type | Default value | Description |
+|-----------|------|---------------|-------------|
+| `enabled` | `bool` | `false` | Flag to enable compression. |
+| `level` | `string` | `optimal` | Compression level. Available values are `optimal`, `fastest`, `smallest_size`. |
+| `exclude_content_types` | `[]string` | | List of content-types to disable compression for. The content-type is taken from the `Content-Type` object attribute. Each element can contain a star `*` as the first (last) character, which matches any prefix (suffix). |
+| `estimate_compressibility` | `bool` | `false` | If `true`, normalized compressibility estimation is used to decide whether to compress data. |
+| `estimate_compressibility_threshold` | `float` | `0.1` | Normalized compressibility estimate threshold: data is compressed if the estimate is greater than this value. |
### `blobstor` subsection ### `blobstor` subsection
Contains a list of substorages, each with its own type. Contains a list of substorages, each with its own type.
@ -384,10 +415,11 @@ node:
path: /sessions path: /sessions
persistent_state: persistent_state:
path: /state path: /state
locode_db_path: "/path/to/locode/db"
``` ```
| Parameter | Type | Default value | Description | | Parameter | Type | Default value | Description |
|-----------------------|---------------------------------------------------------------|---------------|-------------------------------------------------------------------------| |-----------------------|---------------------------------------------------------------|---------------|-----------------------------------------------------------------------------------------------------|
| `key` | `string` | | Path to the binary-encoded private key. | | `key` | `string` | | Path to the binary-encoded private key. |
| `wallet` | [Wallet config](#wallet-subsection) | | Wallet configuration. Has no effect if `key` is provided. | | `wallet` | [Wallet config](#wallet-subsection) | | Wallet configuration. Has no effect if `key` is provided. |
| `addresses` | `[]string` | | Addresses advertised in the netmap. | | `addresses` | `[]string` | | Addresses advertised in the netmap. |
@ -395,6 +427,7 @@ node:
| `relay` | `bool` | | Enable relay mode. | | `relay` | `bool` | | Enable relay mode. |
| `persistent_sessions` | [Persistent sessions config](#persistent_sessions-subsection) | | Persistent session token store configuration. | | `persistent_sessions` | [Persistent sessions config](#persistent_sessions-subsection) | | Persistent session token store configuration. |
| `persistent_state` | [Persistent state config](#persistent_state-subsection) | | Persistent state configuration. | | `persistent_state` | [Persistent state config](#persistent_state-subsection) | | Persistent state configuration. |
| `locode_db_path` | `string` | empty | Path to UN/LOCODE [database](https://git.frostfs.info/TrueCloudLab/frostfs-locode-db/) for FrostFS. |
## `wallet` subsection ## `wallet` subsection
N3 wallet configuration. N3 wallet configuration.

View file

@ -23,3 +23,7 @@ func NoError(err error, details ...string) {
panic(content) panic(content)
} }
} }
func Fail(details ...string) {
panic(strings.Join(details, " "))
}
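A standalone sketch of how the new `Fail` helper replaces an unreachable-default panic, mirroring its later use in the compression level switch (the caller below is hypothetical; only `Fail` itself comes from this change):

```go
package main

import (
	"fmt"
	"strings"
)

// Fail mirrors the helper added above: it always panics with the joined details.
func Fail(details ...string) {
	panic(strings.Join(details, " "))
}

func speedFor(level string) string {
	switch level {
	case "", "optimal":
		return "default"
	case "fastest":
		return "fastest"
	case "smallest_size":
		return "best"
	default:
		// Unreachable when the configuration was validated beforehand.
		Fail("unknown compression level", level)
		return "default"
	}
}

func main() {
	fmt.Println(speedFor("fastest"))
}
```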

View file

@ -516,4 +516,5 @@ const (
FailedToValidateIncomingIOTag = "failed to validate incoming IO tag, replaced with `client`" FailedToValidateIncomingIOTag = "failed to validate incoming IO tag, replaced with `client`"
WriteCacheFailedToAcquireRPSQuota = "writecache failed to acquire RPS quota to flush object" WriteCacheFailedToAcquireRPSQuota = "writecache failed to acquire RPS quota to flush object"
FailedToUpdateNetmapCandidates = "update netmap candidates failed" FailedToUpdateNetmapCandidates = "update netmap candidates failed"
UnknownCompressionLevelDefaultWillBeUsed = "unknown compression level, 'optimal' will be used"
) )

internal/qos/config.go Normal file
View file

@ -0,0 +1,31 @@
package qos
import (
"math"
"time"
)
const (
NoLimit int64 = math.MaxInt64
DefaultIdleTimeout = 5 * time.Minute
)
type LimiterConfig struct {
Read OpConfig
Write OpConfig
}
type OpConfig struct {
MaxWaitingOps int64
MaxRunningOps int64
IdleTimeout time.Duration
Tags []IOTagConfig
}
type IOTagConfig struct {
Tag string
Weight *float64
LimitOps *float64
ReservedOps *float64
Prohibited bool
}
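A minimal usage sketch for the new config types (assuming the package API exactly as shown above; all values are illustrative): build a `LimiterConfig`, validate it, and hand it to `NewLimiter`. With no tags and `MaxWaitingOps = NoLimit`, `createScheduler` in the next file falls back to a plain semaphore.

```go
package main

import (
	"fmt"

	"git.frostfs.info/TrueCloudLab/frostfs-node/internal/qos"
)

func main() {
	cfg := qos.LimiterConfig{
		Read: qos.OpConfig{
			MaxRunningOps: 128,
			MaxWaitingOps: qos.NoLimit, // unbounded queue: semaphore scheduler is chosen
			IdleTimeout:   qos.DefaultIdleTimeout,
		},
		Write: qos.OpConfig{
			MaxRunningOps: 64,
			MaxWaitingOps: qos.NoLimit,
			IdleTimeout:   qos.DefaultIdleTimeout,
		},
	}
	if err := cfg.Validate(); err != nil {
		panic(err)
	}
	limiter, err := qos.NewLimiter(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println("limiter ready:", limiter != nil)
}
```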

View file

@ -8,7 +8,6 @@ import (
"sync/atomic" "sync/atomic"
"time" "time"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/engine/shard/limits"
"git.frostfs.info/TrueCloudLab/frostfs-qos/scheduling" "git.frostfs.info/TrueCloudLab/frostfs-qos/scheduling"
"git.frostfs.info/TrueCloudLab/frostfs-qos/tagging" "git.frostfs.info/TrueCloudLab/frostfs-qos/tagging"
apistatus "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client/status" apistatus "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client/status"
@ -37,15 +36,15 @@ type scheduler interface {
Close() Close()
} }
func NewLimiter(c *limits.Config) (Limiter, error) { func NewLimiter(c LimiterConfig) (Limiter, error) {
if err := validateConfig(c); err != nil { if err := c.Validate(); err != nil {
return nil, err return nil, err
} }
readScheduler, err := createScheduler(c.Read()) readScheduler, err := createScheduler(c.Read)
if err != nil { if err != nil {
return nil, fmt.Errorf("create read scheduler: %w", err) return nil, fmt.Errorf("create read scheduler: %w", err)
} }
writeScheduler, err := createScheduler(c.Write()) writeScheduler, err := createScheduler(c.Write)
if err != nil { if err != nil {
return nil, fmt.Errorf("create write scheduler: %w", err) return nil, fmt.Errorf("create write scheduler: %w", err)
} }
@ -63,8 +62,8 @@ func NewLimiter(c *limits.Config) (Limiter, error) {
return l, nil return l, nil
} }
func createScheduler(config limits.OpConfig) (scheduler, error) { func createScheduler(config OpConfig) (scheduler, error) {
if len(config.Tags) == 0 && config.MaxWaitingOps == limits.NoLimit { if len(config.Tags) == 0 && config.MaxWaitingOps == NoLimit {
return newSemaphoreScheduler(config.MaxRunningOps), nil return newSemaphoreScheduler(config.MaxRunningOps), nil
} }
return scheduling.NewMClock( return scheduling.NewMClock(
@ -72,7 +71,7 @@ func createScheduler(config limits.OpConfig) (scheduler, error) {
converToSchedulingTags(config.Tags), config.IdleTimeout) converToSchedulingTags(config.Tags), config.IdleTimeout)
} }
func converToSchedulingTags(limits []limits.IOTagConfig) map[string]scheduling.TagInfo { func converToSchedulingTags(limits []IOTagConfig) map[string]scheduling.TagInfo {
result := make(map[string]scheduling.TagInfo) result := make(map[string]scheduling.TagInfo)
for _, tag := range []IOTag{IOTagBackground, IOTagClient, IOTagInternal, IOTagPolicer, IOTagTreeSync, IOTagWritecache} { for _, tag := range []IOTag{IOTagBackground, IOTagClient, IOTagInternal, IOTagPolicer, IOTagTreeSync, IOTagWritecache} {
result[tag.String()] = scheduling.TagInfo{ result[tag.String()] = scheduling.TagInfo{

View file

@ -4,8 +4,6 @@ import (
"errors" "errors"
"fmt" "fmt"
"math" "math"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/engine/shard/limits"
) )
var errWeightsMustBeSpecified = errors.New("invalid weights: weights must be specified for all tags or not specified for any") var errWeightsMustBeSpecified = errors.New("invalid weights: weights must be specified for all tags or not specified for any")
@ -14,17 +12,17 @@ type tagConfig struct {
Shares, Limit, Reserved *float64 Shares, Limit, Reserved *float64
} }
func validateConfig(c *limits.Config) error { func (c *LimiterConfig) Validate() error {
if err := validateOpConfig(c.Read()); err != nil { if err := validateOpConfig(c.Read); err != nil {
return fmt.Errorf("limits 'read' section validation error: %w", err) return fmt.Errorf("limits 'read' section validation error: %w", err)
} }
if err := validateOpConfig(c.Write()); err != nil { if err := validateOpConfig(c.Write); err != nil {
return fmt.Errorf("limits 'write' section validation error: %w", err) return fmt.Errorf("limits 'write' section validation error: %w", err)
} }
return nil return nil
} }
func validateOpConfig(c limits.OpConfig) error { func validateOpConfig(c OpConfig) error {
if c.MaxRunningOps <= 0 { if c.MaxRunningOps <= 0 {
return fmt.Errorf("invalid 'max_running_ops = %d': must be greater than zero", c.MaxRunningOps) return fmt.Errorf("invalid 'max_running_ops = %d': must be greater than zero", c.MaxRunningOps)
} }
@ -40,7 +38,7 @@ func validateOpConfig(c limits.OpConfig) error {
return nil return nil
} }
func validateTags(configTags []limits.IOTagConfig) error { func validateTags(configTags []IOTagConfig) error {
tags := map[IOTag]tagConfig{ tags := map[IOTag]tagConfig{
IOTagBackground: {}, IOTagBackground: {},
IOTagClient: {}, IOTagClient: {},

View file

@ -3,6 +3,7 @@ package client
import ( import (
"bytes" "bytes"
"fmt" "fmt"
"iter"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/network" "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/network"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client" "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client"
@ -19,7 +20,7 @@ func nodeInfoFromKeyAddr(dst *NodeInfo, k []byte, a, external network.AddressGro
// Args must not be nil. // Args must not be nil.
func NodeInfoFromRawNetmapElement(dst *NodeInfo, info interface { func NodeInfoFromRawNetmapElement(dst *NodeInfo, info interface {
PublicKey() []byte PublicKey() []byte
IterateAddresses(func(string) bool) Addresses() iter.Seq[string]
NumberOfAddresses() int NumberOfAddresses() int
ExternalAddresses() []string ExternalAddresses() []string
}, },

View file

@ -1,6 +1,10 @@
package netmap package netmap
import "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/netmap" import (
"iter"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/netmap"
)
// Node is a named type of netmap.NodeInfo which provides interface needed // Node is a named type of netmap.NodeInfo which provides interface needed
// in the current repository. Node is expected to be used everywhere instead // in the current repository. Node is expected to be used everywhere instead
@ -14,10 +18,20 @@ func (x Node) PublicKey() []byte {
return (netmap.NodeInfo)(x).PublicKey() return (netmap.NodeInfo)(x).PublicKey()
} }
// Addresses returns an iterator over all announced network addresses.
func (x Node) Addresses() iter.Seq[string] {
return (netmap.NodeInfo)(x).NetworkEndpoints()
}
// IterateAddresses iterates over all announced network addresses // IterateAddresses iterates over all announced network addresses
// and passes them into f. Handler MUST NOT be nil. // and passes them into f. Handler MUST NOT be nil.
// Deprecated: use [Node.Addresses] instead.
func (x Node) IterateAddresses(f func(string) bool) { func (x Node) IterateAddresses(f func(string) bool) {
(netmap.NodeInfo)(x).IterateNetworkEndpoints(f) for s := range (netmap.NodeInfo)(x).NetworkEndpoints() {
if f(s) {
return
}
}
} }
// NumberOfAddresses returns number of announced network addresses. // NumberOfAddresses returns number of announced network addresses.
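Callers consume the new iterator with Go 1.23 `range`-over-function syntax; an early `break` replaces returning `true` from the old callback. A standalone sketch (the addresses below are made up):

```go
package main

import (
	"fmt"
	"iter"
)

// addresses stands in for (Node).Addresses; the values are illustrative.
func addresses() iter.Seq[string] {
	return func(yield func(string) bool) {
		for _, a := range []string{"/dns4/node1/tcp/8080", "/dns4/node2/tcp/8080"} {
			if !yield(a) {
				return // consumer stopped early
			}
		}
	}
}

func main() {
	for addr := range addresses() {
		fmt.Println(addr)
		// break stops the iterator, like returning true from the old callback.
		break
	}
}
```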

View file

@ -13,6 +13,13 @@ type ECInfo struct {
Total uint32 Total uint32
} }
func (v *ECInfo) String() string {
if v == nil {
return "<nil>"
}
return fmt.Sprintf("parent ID: %s, index: %d, total %d", v.ParentID, v.Index, v.Total)
}
// Info groups object address with its FrostFS // Info groups object address with its FrostFS
// object info. // object info.
type Info struct { type Info struct {
@ -23,5 +30,5 @@ type Info struct {
} }
func (v Info) String() string { func (v Info) String() string {
return fmt.Sprintf("address: %s, type: %s, is linking: %t", v.Address, v.Type, v.IsLinkingObject) return fmt.Sprintf("address: %s, type: %s, is linking: %t, EC header: %s", v.Address, v.Type, v.IsLinkingObject, v.ECInfo)
} }

View file

@ -50,7 +50,7 @@ func (s *Server) initNetmapProcessor(ctx context.Context, cfg *viper.Viper,
var err error var err error
s.netmapProcessor, err = netmap.New(&netmap.Params{ s.netmapProcessor, err = netmap.New(&netmap.Params{
Log: s.log, Log: s.log.WithTag(logger.TagProcessor),
Metrics: s.irMetrics, Metrics: s.irMetrics,
PoolSize: poolSize, PoolSize: poolSize,
NetmapClient: netmap.NewNetmapClient(s.netmapClient), NetmapClient: netmap.NewNetmapClient(s.netmapClient),
@ -159,7 +159,7 @@ func (s *Server) createAlphaSync(cfg *viper.Viper, frostfsCli *frostfsClient.Cli
} else { } else {
// create governance processor // create governance processor
governanceProcessor, err := governance.New(&governance.Params{ governanceProcessor, err := governance.New(&governance.Params{
Log: s.log, Log: s.log.WithTag(logger.TagProcessor),
Metrics: s.irMetrics, Metrics: s.irMetrics,
FrostFSClient: frostfsCli, FrostFSClient: frostfsCli,
AlphabetState: s, AlphabetState: s,
@ -225,7 +225,7 @@ func (s *Server) initAlphabetProcessor(ctx context.Context, cfg *viper.Viper) er
// create alphabet processor // create alphabet processor
s.alphabetProcessor, err = alphabet.New(&alphabet.Params{ s.alphabetProcessor, err = alphabet.New(&alphabet.Params{
ParsedWallets: parsedWallets, ParsedWallets: parsedWallets,
Log: s.log, Log: s.log.WithTag(logger.TagProcessor),
Metrics: s.irMetrics, Metrics: s.irMetrics,
PoolSize: poolSize, PoolSize: poolSize,
AlphabetContracts: s.contracts.alphabet, AlphabetContracts: s.contracts.alphabet,
@ -247,7 +247,7 @@ func (s *Server) initContainerProcessor(ctx context.Context, cfg *viper.Viper, c
s.log.Debug(ctx, logs.ContainerContainerWorkerPool, zap.Int("size", poolSize)) s.log.Debug(ctx, logs.ContainerContainerWorkerPool, zap.Int("size", poolSize))
// container processor // container processor
containerProcessor, err := cont.New(&cont.Params{ containerProcessor, err := cont.New(&cont.Params{
Log: s.log, Log: s.log.WithTag(logger.TagProcessor),
Metrics: s.irMetrics, Metrics: s.irMetrics,
PoolSize: poolSize, PoolSize: poolSize,
AlphabetState: s, AlphabetState: s,
@ -268,7 +268,7 @@ func (s *Server) initBalanceProcessor(ctx context.Context, cfg *viper.Viper, fro
s.log.Debug(ctx, logs.BalanceBalanceWorkerPool, zap.Int("size", poolSize)) s.log.Debug(ctx, logs.BalanceBalanceWorkerPool, zap.Int("size", poolSize))
// create balance processor // create balance processor
balanceProcessor, err := balance.New(&balance.Params{ balanceProcessor, err := balance.New(&balance.Params{
Log: s.log, Log: s.log.WithTag(logger.TagProcessor),
Metrics: s.irMetrics, Metrics: s.irMetrics,
PoolSize: poolSize, PoolSize: poolSize,
FrostFSClient: frostfsCli, FrostFSClient: frostfsCli,
@ -291,7 +291,7 @@ func (s *Server) initFrostFSMainnetProcessor(ctx context.Context, cfg *viper.Vip
s.log.Debug(ctx, logs.FrostFSFrostfsWorkerPool, zap.Int("size", poolSize)) s.log.Debug(ctx, logs.FrostFSFrostfsWorkerPool, zap.Int("size", poolSize))
frostfsProcessor, err := frostfs.New(&frostfs.Params{ frostfsProcessor, err := frostfs.New(&frostfs.Params{
Log: s.log, Log: s.log.WithTag(logger.TagProcessor),
Metrics: s.irMetrics, Metrics: s.irMetrics,
PoolSize: poolSize, PoolSize: poolSize,
FrostFSContract: s.contracts.frostfs, FrostFSContract: s.contracts.frostfs,
@ -342,7 +342,7 @@ func (s *Server) initGRPCServer(ctx context.Context, cfg *viper.Viper, log *logg
controlSvc := controlsrv.NewAuditService(controlsrv.New(p, s.netmapClient, s.containerClient, controlSvc := controlsrv.NewAuditService(controlsrv.New(p, s.netmapClient, s.containerClient,
controlsrv.WithAllowedKeys(authKeys), controlsrv.WithAllowedKeys(authKeys),
), log, audit) ), log.WithTag(logger.TagGrpcSvc), audit)
grpcControlSrv := grpc.NewServer() grpcControlSrv := grpc.NewServer()
control.RegisterControlServiceServer(grpcControlSrv, controlSvc) control.RegisterControlServiceServer(grpcControlSrv, controlSvc)
@ -458,7 +458,7 @@ func (s *Server) initMorph(ctx context.Context, cfg *viper.Viper, errChan chan<-
} }
morphChain := &chainParams{ morphChain := &chainParams{
log: s.log, log: s.log.WithTag(logger.TagMorph),
cfg: cfg, cfg: cfg,
key: s.key, key: s.key,
name: morphPrefix, name: morphPrefix,

View file

@ -339,7 +339,7 @@ func New(ctx context.Context, log *logger.Logger, cfg *viper.Viper, errChan chan
) (*Server, error) { ) (*Server, error) {
var err error var err error
server := &Server{ server := &Server{
log: log, log: log.WithTag(logger.TagIr),
irMetrics: metrics, irMetrics: metrics,
cmode: cmode, cmode: cmode,
} }

View file

@ -110,7 +110,7 @@ func WithFullSizeLimit(lim uint64) Option {
// WithLogger returns an option to specify Blobovnicza's logger. // WithLogger returns an option to specify Blobovnicza's logger.
func WithLogger(l *logger.Logger) Option { func WithLogger(l *logger.Logger) Option {
return func(c *cfg) { return func(c *cfg) {
c.log = l.With(zap.String("component", "Blobovnicza")) c.log = l
} }
} }

View file

@ -158,11 +158,11 @@ func (b *Blobovniczas) Path() string {
} }
// SetCompressor implements common.Storage. // SetCompressor implements common.Storage.
func (b *Blobovniczas) SetCompressor(cc *compression.Config) { func (b *Blobovniczas) SetCompressor(cc *compression.Compressor) {
b.compression = cc b.compression = cc
} }
func (b *Blobovniczas) Compressor() *compression.Config { func (b *Blobovniczas) Compressor() *compression.Compressor {
return b.compression return b.compression
} }

View file

@ -19,7 +19,8 @@ func TestBlobovniczaTree_Concurrency(t *testing.T) {
st := NewBlobovniczaTree( st := NewBlobovniczaTree(
context.Background(), context.Background(),
WithLogger(test.NewLogger(t)), WithBlobovniczaLogger(test.NewLogger(t)),
WithBlobovniczaTreeLogger(test.NewLogger(t)),
WithObjectSizeLimit(1024), WithObjectSizeLimit(1024),
WithBlobovniczaShallowWidth(10), WithBlobovniczaShallowWidth(10),
WithBlobovniczaShallowDepth(1), WithBlobovniczaShallowDepth(1),

View file

@ -19,7 +19,8 @@ func TestExistsInvalidStorageID(t *testing.T) {
dir := t.TempDir() dir := t.TempDir()
b := NewBlobovniczaTree( b := NewBlobovniczaTree(
context.Background(), context.Background(),
WithLogger(test.NewLogger(t)), WithBlobovniczaLogger(test.NewLogger(t)),
WithBlobovniczaTreeLogger(test.NewLogger(t)),
WithObjectSizeLimit(1024), WithObjectSizeLimit(1024),
WithBlobovniczaShallowWidth(2), WithBlobovniczaShallowWidth(2),
WithBlobovniczaShallowDepth(2), WithBlobovniczaShallowDepth(2),

View file

@ -15,7 +15,8 @@ func TestGeneric(t *testing.T) {
helper := func(t *testing.T, dir string) common.Storage { helper := func(t *testing.T, dir string) common.Storage {
return NewBlobovniczaTree( return NewBlobovniczaTree(
context.Background(), context.Background(),
WithLogger(test.NewLogger(t)), WithBlobovniczaLogger(test.NewLogger(t)),
WithBlobovniczaTreeLogger(test.NewLogger(t)),
WithObjectSizeLimit(maxObjectSize), WithObjectSizeLimit(maxObjectSize),
WithBlobovniczaShallowWidth(2), WithBlobovniczaShallowWidth(2),
WithBlobovniczaShallowDepth(2), WithBlobovniczaShallowDepth(2),
@ -43,7 +44,8 @@ func TestControl(t *testing.T) {
newTree := func(t *testing.T) common.Storage { newTree := func(t *testing.T) common.Storage {
return NewBlobovniczaTree( return NewBlobovniczaTree(
context.Background(), context.Background(),
WithLogger(test.NewLogger(t)), WithBlobovniczaLogger(test.NewLogger(t)),
WithBlobovniczaTreeLogger(test.NewLogger(t)),
WithObjectSizeLimit(maxObjectSize), WithObjectSizeLimit(maxObjectSize),
WithBlobovniczaShallowWidth(2), WithBlobovniczaShallowWidth(2),
WithBlobovniczaShallowDepth(2), WithBlobovniczaShallowDepth(2),

View file

@ -19,7 +19,7 @@ type cfg struct {
openedCacheSize int openedCacheSize int
blzShallowDepth uint64 blzShallowDepth uint64
blzShallowWidth uint64 blzShallowWidth uint64
compression *compression.Config compression *compression.Compressor
blzOpts []blobovnicza.Option blzOpts []blobovnicza.Option
reportError func(context.Context, string, error) // reportError is the function called when encountering disk errors. reportError func(context.Context, string, error) // reportError is the function called when encountering disk errors.
metrics Metrics metrics Metrics
@ -63,10 +63,15 @@ func initConfig(c *cfg) {
} }
} }
func WithLogger(l *logger.Logger) Option { func WithBlobovniczaTreeLogger(log *logger.Logger) Option {
return func(c *cfg) { return func(c *cfg) {
c.log = l c.log = log
c.blzOpts = append(c.blzOpts, blobovnicza.WithLogger(l)) }
}
func WithBlobovniczaLogger(log *logger.Logger) Option {
return func(c *cfg) {
c.blzOpts = append(c.blzOpts, blobovnicza.WithLogger(log))
} }
} }

View file

@ -226,7 +226,7 @@ func (b *Blobovniczas) rebuildDB(ctx context.Context, path string, meta common.M
func (b *Blobovniczas) addRebuildTempFile(ctx context.Context, path string) (func(), error) { func (b *Blobovniczas) addRebuildTempFile(ctx context.Context, path string) (func(), error) {
sysPath := filepath.Join(b.rootPath, path) sysPath := filepath.Join(b.rootPath, path)
sysPath = sysPath + rebuildSuffix sysPath += rebuildSuffix
_, err := os.OpenFile(sysPath, os.O_RDWR|os.O_CREATE|os.O_EXCL|os.O_SYNC, b.perm) _, err := os.OpenFile(sysPath, os.O_RDWR|os.O_CREATE|os.O_EXCL|os.O_SYNC, b.perm)
if err != nil { if err != nil {
return nil, err return nil, err

View file

@ -140,7 +140,8 @@ func testRebuildFailoverObjectDeletedFromSource(t *testing.T) {
func testRebuildFailoverValidate(t *testing.T, dir string, obj *objectSDK.Object, mustUpdateStorageID bool) { func testRebuildFailoverValidate(t *testing.T, dir string, obj *objectSDK.Object, mustUpdateStorageID bool) {
b := NewBlobovniczaTree( b := NewBlobovniczaTree(
context.Background(), context.Background(),
WithLogger(test.NewLogger(t)), WithBlobovniczaLogger(test.NewLogger(t)),
WithBlobovniczaTreeLogger(test.NewLogger(t)),
WithObjectSizeLimit(2048), WithObjectSizeLimit(2048),
WithBlobovniczaShallowWidth(2), WithBlobovniczaShallowWidth(2),
WithBlobovniczaShallowDepth(2), WithBlobovniczaShallowDepth(2),

View file

@ -50,7 +50,8 @@ func TestBlobovniczaTreeFillPercentRebuild(t *testing.T) {
dir := t.TempDir() dir := t.TempDir()
b := NewBlobovniczaTree( b := NewBlobovniczaTree(
context.Background(), context.Background(),
WithLogger(test.NewLogger(t)), WithBlobovniczaLogger(test.NewLogger(t)),
WithBlobovniczaTreeLogger(test.NewLogger(t)),
WithObjectSizeLimit(64*1024), WithObjectSizeLimit(64*1024),
WithBlobovniczaShallowWidth(1), // single directory WithBlobovniczaShallowWidth(1), // single directory
WithBlobovniczaShallowDepth(1), WithBlobovniczaShallowDepth(1),
@ -106,7 +107,8 @@ func TestBlobovniczaTreeFillPercentRebuild(t *testing.T) {
dir := t.TempDir() dir := t.TempDir()
b := NewBlobovniczaTree( b := NewBlobovniczaTree(
context.Background(), context.Background(),
WithLogger(test.NewLogger(t)), WithBlobovniczaLogger(test.NewLogger(t)),
WithBlobovniczaTreeLogger(test.NewLogger(t)),
WithObjectSizeLimit(64*1024), WithObjectSizeLimit(64*1024),
WithBlobovniczaShallowWidth(1), // single directory WithBlobovniczaShallowWidth(1), // single directory
WithBlobovniczaShallowDepth(1), WithBlobovniczaShallowDepth(1),
@ -160,7 +162,8 @@ func TestBlobovniczaTreeFillPercentRebuild(t *testing.T) {
dir := t.TempDir() dir := t.TempDir()
b := NewBlobovniczaTree( b := NewBlobovniczaTree(
context.Background(), context.Background(),
WithLogger(test.NewLogger(t)), WithBlobovniczaLogger(test.NewLogger(t)),
WithBlobovniczaTreeLogger(test.NewLogger(t)),
WithObjectSizeLimit(64*1024), WithObjectSizeLimit(64*1024),
WithBlobovniczaShallowWidth(1), // single directory WithBlobovniczaShallowWidth(1), // single directory
WithBlobovniczaShallowDepth(1), WithBlobovniczaShallowDepth(1),
@ -231,7 +234,8 @@ func TestBlobovniczaTreeFillPercentRebuild(t *testing.T) {
dir := t.TempDir() dir := t.TempDir()
b := NewBlobovniczaTree( b := NewBlobovniczaTree(
context.Background(), context.Background(),
WithLogger(test.NewLogger(t)), WithBlobovniczaLogger(test.NewLogger(t)),
WithBlobovniczaTreeLogger(test.NewLogger(t)),
WithObjectSizeLimit(64*1024), WithObjectSizeLimit(64*1024),
WithBlobovniczaShallowWidth(1), // single directory WithBlobovniczaShallowWidth(1), // single directory
WithBlobovniczaShallowDepth(1), WithBlobovniczaShallowDepth(1),
@ -262,7 +266,8 @@ func TestBlobovniczaTreeFillPercentRebuild(t *testing.T) {
require.NoError(t, b.Close(context.Background())) require.NoError(t, b.Close(context.Background()))
b = NewBlobovniczaTree( b = NewBlobovniczaTree(
context.Background(), context.Background(),
WithLogger(test.NewLogger(t)), WithBlobovniczaLogger(test.NewLogger(t)),
WithBlobovniczaTreeLogger(test.NewLogger(t)),
WithObjectSizeLimit(64*1024), WithObjectSizeLimit(64*1024),
WithBlobovniczaShallowWidth(1), WithBlobovniczaShallowWidth(1),
WithBlobovniczaShallowDepth(1), WithBlobovniczaShallowDepth(1),
@ -304,7 +309,8 @@ func TestBlobovniczaTreeRebuildLargeObject(t *testing.T) {
dir := t.TempDir() dir := t.TempDir()
b := NewBlobovniczaTree( b := NewBlobovniczaTree(
context.Background(), context.Background(),
WithLogger(test.NewLogger(t)), WithBlobovniczaLogger(test.NewLogger(t)),
WithBlobovniczaTreeLogger(test.NewLogger(t)),
WithObjectSizeLimit(64*1024), // 64KB object size limit WithObjectSizeLimit(64*1024), // 64KB object size limit
WithBlobovniczaShallowWidth(5), WithBlobovniczaShallowWidth(5),
WithBlobovniczaShallowDepth(2), // depth = 2 WithBlobovniczaShallowDepth(2), // depth = 2
@ -332,7 +338,8 @@ func TestBlobovniczaTreeRebuildLargeObject(t *testing.T) {
b = NewBlobovniczaTree( b = NewBlobovniczaTree(
context.Background(), context.Background(),
WithLogger(test.NewLogger(t)), WithBlobovniczaLogger(test.NewLogger(t)),
WithBlobovniczaTreeLogger(test.NewLogger(t)),
WithObjectSizeLimit(32*1024), // 32KB object size limit WithObjectSizeLimit(32*1024), // 32KB object size limit
WithBlobovniczaShallowWidth(5), WithBlobovniczaShallowWidth(5),
WithBlobovniczaShallowDepth(3), // depth = 3 WithBlobovniczaShallowDepth(3), // depth = 3
@ -374,7 +381,8 @@ func testBlobovniczaTreeRebuildHelper(t *testing.T, sourceDepth, sourceWidth, ta
dir := t.TempDir() dir := t.TempDir()
b := NewBlobovniczaTree( b := NewBlobovniczaTree(
context.Background(), context.Background(),
WithLogger(test.NewLogger(t)), WithBlobovniczaLogger(test.NewLogger(t)),
WithBlobovniczaTreeLogger(test.NewLogger(t)),
WithObjectSizeLimit(2048), WithObjectSizeLimit(2048),
WithBlobovniczaShallowWidth(sourceWidth), WithBlobovniczaShallowWidth(sourceWidth),
WithBlobovniczaShallowDepth(sourceDepth), WithBlobovniczaShallowDepth(sourceDepth),
@ -415,7 +423,8 @@ func testBlobovniczaTreeRebuildHelper(t *testing.T, sourceDepth, sourceWidth, ta
b = NewBlobovniczaTree( b = NewBlobovniczaTree(
context.Background(), context.Background(),
WithLogger(test.NewLogger(t)), WithBlobovniczaLogger(test.NewLogger(t)),
WithBlobovniczaTreeLogger(test.NewLogger(t)),
WithObjectSizeLimit(2048), WithObjectSizeLimit(2048),
WithBlobovniczaShallowWidth(targetWidth), WithBlobovniczaShallowWidth(targetWidth),
WithBlobovniczaShallowDepth(targetDepth), WithBlobovniczaShallowDepth(targetDepth),

View file

@ -41,7 +41,7 @@ type SubStorageInfo struct {
type Option func(*cfg) type Option func(*cfg)
type cfg struct { type cfg struct {
compression compression.Config compression compression.Compressor
log *logger.Logger log *logger.Logger
storage []SubStorage storage []SubStorage
metrics Metrics metrics Metrics
@ -91,50 +91,13 @@ func WithStorages(st []SubStorage) Option {
// WithLogger returns option to specify BlobStor's logger. // WithLogger returns option to specify BlobStor's logger.
func WithLogger(l *logger.Logger) Option { func WithLogger(l *logger.Logger) Option {
return func(c *cfg) { return func(c *cfg) {
c.log = l.With(zap.String("component", "BlobStor")) c.log = l
} }
} }
// WithCompressObjects returns option to toggle func WithCompression(comp compression.Config) Option {
// compression of the stored objects.
//
// If true, Zstandard algorithm is used for data compression.
//
// If compressor (decompressor) creation failed,
// the uncompressed option will be used, and the error
// is recorded in the provided log.
func WithCompressObjects(comp bool) Option {
return func(c *cfg) { return func(c *cfg) {
c.compression.Enabled = comp c.compression.Config = comp
}
}
// WithCompressibilityEstimate returns an option to use
// normalized compressibility estimate to decide whether
// to compress data.
//
// See https://github.com/klauspost/compress/blob/v1.17.2/compressible.go#L5
func WithCompressibilityEstimate(v bool) Option {
return func(c *cfg) {
c.compression.UseCompressEstimation = v
}
}
// WithCompressibilityEstimateThreshold returns an option to set
// normalized compressibility estimate threshold.
//
// See https://github.com/klauspost/compress/blob/v1.17.2/compressible.go#L5
func WithCompressibilityEstimateThreshold(threshold float64) Option {
return func(c *cfg) {
c.compression.CompressEstimationThreshold = threshold
}
}
// WithUncompressableContentTypes returns option to disable decompression
// for specific content types as seen by object.AttributeContentType attribute.
func WithUncompressableContentTypes(values []string) Option {
return func(c *cfg) {
c.compression.UncompressableContentTypes = values
} }
} }
@ -152,6 +115,6 @@ func WithMetrics(m Metrics) Option {
} }
} }
func (b *BlobStor) Compressor() *compression.Config { func (b *BlobStor) Compressor() *compression.Compressor {
return &b.compression return &b.compression
} }

View file

@ -9,6 +9,7 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/core/object" "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/core/object"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/blobovniczatree" "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/blobovniczatree"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/common" "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/compression"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/fstree" "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/fstree"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/teststore" "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/teststore"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/shard/mode" "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/shard/mode"
@ -51,7 +52,9 @@ func TestCompression(t *testing.T) {
newBlobStor := func(t *testing.T, compress bool) *BlobStor { newBlobStor := func(t *testing.T, compress bool) *BlobStor {
bs := New( bs := New(
WithCompressObjects(compress), WithCompression(compression.Config{
Enabled: compress,
}),
WithStorages(defaultStorages(dir, smallSizeLimit))) WithStorages(defaultStorages(dir, smallSizeLimit)))
require.NoError(t, bs.Open(context.Background(), mode.ReadWrite)) require.NoError(t, bs.Open(context.Background(), mode.ReadWrite))
require.NoError(t, bs.Init(context.Background())) require.NoError(t, bs.Init(context.Background()))
@ -113,8 +116,10 @@ func TestBlobstor_needsCompression(t *testing.T) {
dir := t.TempDir() dir := t.TempDir()
bs := New( bs := New(
WithCompressObjects(compress), WithCompression(compression.Config{
WithUncompressableContentTypes(ct), Enabled: compress,
UncompressableContentTypes: ct,
}),
WithStorages([]SubStorage{ WithStorages([]SubStorage{
{ {
Storage: blobovniczatree.NewBlobovniczaTree( Storage: blobovniczatree.NewBlobovniczaTree(

View file

@ -18,8 +18,8 @@ type Storage interface {
Path() string Path() string
ObjectsCount(ctx context.Context) (uint64, error) ObjectsCount(ctx context.Context) (uint64, error)
SetCompressor(cc *compression.Config) SetCompressor(cc *compression.Compressor)
Compressor() *compression.Config Compressor() *compression.Compressor
// SetReportErrorFunc allows to provide a function to be called on disk errors. // SetReportErrorFunc allows to provide a function to be called on disk errors.
// This function MUST be called before Open. // This function MUST be called before Open.

View file

@ -11,7 +11,7 @@ import (
) )
func BenchmarkCompression(b *testing.B) { func BenchmarkCompression(b *testing.B) {
c := Config{Enabled: true} c := Compressor{Config: Config{Enabled: true}}
require.NoError(b, c.Init()) require.NoError(b, c.Init())
for _, size := range []int{128, 1024, 32 * 1024, 32 * 1024 * 1024} { for _, size := range []int{128, 1024, 32 * 1024, 32 * 1024 * 1024} {
@ -33,7 +33,7 @@ func BenchmarkCompression(b *testing.B) {
} }
} }
func benchWith(b *testing.B, c Config, data []byte) { func benchWith(b *testing.B, c Compressor, data []byte) {
b.ResetTimer() b.ResetTimer()
b.ReportAllocs() b.ReportAllocs()
for range b.N { for range b.N {
@ -56,8 +56,10 @@ func BenchmarkCompressionRealVSEstimate(b *testing.B) {
b.Run("estimate", func(b *testing.B) { b.Run("estimate", func(b *testing.B) {
b.ResetTimer() b.ResetTimer()
c := &Config{ c := &Compressor{
Config: Config{
Enabled: true, Enabled: true,
},
} }
require.NoError(b, c.Init()) require.NoError(b, c.Init())
@ -76,8 +78,10 @@ func BenchmarkCompressionRealVSEstimate(b *testing.B) {
b.Run("compress", func(b *testing.B) { b.Run("compress", func(b *testing.B) {
b.ResetTimer() b.ResetTimer()
c := &Config{ c := &Compressor{
Config: Config{
Enabled: true, Enabled: true,
},
} }
require.NoError(b, c.Init()) require.NoError(b, c.Init())

View file

@ -4,21 +4,36 @@ import (
"bytes" "bytes"
"strings" "strings"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/assert"
objectSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object" objectSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
"github.com/klauspost/compress" "github.com/klauspost/compress"
"github.com/klauspost/compress/zstd" "github.com/klauspost/compress/zstd"
) )
type Level string
const (
LevelDefault Level = ""
LevelOptimal Level = "optimal"
LevelFastest Level = "fastest"
LevelSmallestSize Level = "smallest_size"
)
type Compressor struct {
Config
encoder *zstd.Encoder
decoder *zstd.Decoder
}
// Config represents common compression-related configuration. // Config represents common compression-related configuration.
type Config struct { type Config struct {
Enabled bool Enabled bool
UncompressableContentTypes []string UncompressableContentTypes []string
Level Level
UseCompressEstimation bool EstimateCompressibility bool
CompressEstimationThreshold float64 EstimateCompressibilityThreshold float64
encoder *zstd.Encoder
decoder *zstd.Decoder
} }
// zstdFrameMagic contains first 4 bytes of any compressed object // zstdFrameMagic contains first 4 bytes of any compressed object
@ -26,11 +41,11 @@ type Config struct {
var zstdFrameMagic = []byte{0x28, 0xb5, 0x2f, 0xfd} var zstdFrameMagic = []byte{0x28, 0xb5, 0x2f, 0xfd}
// Init initializes compression routines. // Init initializes compression routines.
func (c *Config) Init() error { func (c *Compressor) Init() error {
var err error var err error
if c.Enabled { if c.Enabled {
c.encoder, err = zstd.NewWriter(nil) c.encoder, err = zstd.NewWriter(nil, zstd.WithEncoderLevel(c.compressionLevel()))
if err != nil { if err != nil {
return err return err
} }
@ -73,7 +88,7 @@ func (c *Config) NeedsCompression(obj *objectSDK.Object) bool {
// Decompress decompresses data if it starts with the magic // Decompress decompresses data if it starts with the magic
// and returns data untouched otherwise. // and returns data untouched otherwise.
func (c *Config) Decompress(data []byte) ([]byte, error) { func (c *Compressor) Decompress(data []byte) ([]byte, error) {
if len(data) < 4 || !bytes.Equal(data[:4], zstdFrameMagic) { if len(data) < 4 || !bytes.Equal(data[:4], zstdFrameMagic) {
return data, nil return data, nil
} }
@ -82,13 +97,13 @@ func (c *Config) Decompress(data []byte) ([]byte, error) {
// Compress compresses data if compression is enabled // Compress compresses data if compression is enabled
// and returns data untouched otherwise. // and returns data untouched otherwise.
func (c *Config) Compress(data []byte) []byte { func (c *Compressor) Compress(data []byte) []byte {
if c == nil || !c.Enabled { if c == nil || !c.Enabled {
return data return data
} }
if c.UseCompressEstimation { if c.EstimateCompressibility {
estimated := compress.Estimate(data) estimated := compress.Estimate(data)
if estimated >= c.CompressEstimationThreshold { if estimated >= c.EstimateCompressibilityThreshold {
return c.compress(data) return c.compress(data)
} }
return data return data
@ -96,7 +111,7 @@ func (c *Config) Compress(data []byte) []byte {
return c.compress(data) return c.compress(data)
} }
func (c *Config) compress(data []byte) []byte { func (c *Compressor) compress(data []byte) []byte {
maxSize := c.encoder.MaxEncodedSize(len(data)) maxSize := c.encoder.MaxEncodedSize(len(data))
compressed := c.encoder.EncodeAll(data, make([]byte, 0, maxSize)) compressed := c.encoder.EncodeAll(data, make([]byte, 0, maxSize))
if len(data) < len(compressed) { if len(data) < len(compressed) {
@ -106,7 +121,7 @@ func (c *Config) compress(data []byte) []byte {
} }
// Close closes encoder and decoder, returns any error occurred. // Close closes encoder and decoder, returns any error occurred.
func (c *Config) Close() error { func (c *Compressor) Close() error {
var err error var err error
if c.encoder != nil { if c.encoder != nil {
err = c.encoder.Close() err = c.encoder.Close()
@ -116,3 +131,24 @@ func (c *Config) Close() error {
} }
return err return err
} }
func (c *Config) HasValidCompressionLevel() bool {
return c.Level == LevelDefault ||
c.Level == LevelOptimal ||
c.Level == LevelFastest ||
c.Level == LevelSmallestSize
}
func (c *Compressor) compressionLevel() zstd.EncoderLevel {
switch c.Level {
case LevelDefault, LevelOptimal:
return zstd.SpeedDefault
case LevelFastest:
return zstd.SpeedFastest
case LevelSmallestSize:
return zstd.SpeedBestCompression
default:
assert.Fail("unknown compression level", string(c.Level))
return zstd.SpeedDefault
}
}
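A minimal round-trip sketch using the reworked API (names as introduced above; assumes use inside this module):

```go
package main

import (
	"bytes"
	"fmt"

	"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/compression"
)

func main() {
	c := &compression.Compressor{
		Config: compression.Config{
			Enabled: true,
			Level:   compression.LevelFastest,
		},
	}
	if err := c.Init(); err != nil {
		panic(err)
	}
	defer func() { _ = c.Close() }()

	data := bytes.Repeat([]byte("frostfs"), 1024)
	compressed := c.Compress(data)
	restored, err := c.Decompress(compressed)
	if err != nil {
		panic(err)
	}
	fmt.Println(len(data), "->", len(compressed), "restored ok:", bytes.Equal(data, restored))
}
```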

View file

@ -6,6 +6,7 @@ import (
"fmt" "fmt"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs" "git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/compression"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/shard/mode" "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/shard/mode"
"go.uber.org/zap" "go.uber.org/zap"
) )
@ -53,6 +54,10 @@ var ErrInitBlobovniczas = errors.New("failure on blobovnicza initialization stag
func (b *BlobStor) Init(ctx context.Context) error { func (b *BlobStor) Init(ctx context.Context) error {
b.log.Debug(ctx, logs.BlobstorInitializing) b.log.Debug(ctx, logs.BlobstorInitializing)
if !b.compression.HasValidCompressionLevel() {
b.log.Warn(ctx, logs.UnknownCompressionLevelDefaultWillBeUsed, zap.String("level", string(b.compression.Level)))
b.compression.Level = compression.LevelDefault
}
if err := b.compression.Init(); err != nil { if err := b.compression.Init(); err != nil {
return err return err
} }

View file

@ -2,6 +2,8 @@ package fstree
import ( import (
"sync" "sync"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/assert"
) )
// FileCounter used to count files in FSTree. The implementation must be thread-safe. // FileCounter used to count files in FSTree. The implementation must be thread-safe.
@ -52,16 +54,11 @@ func (c *SimpleCounter) Dec(size uint64) {
c.mtx.Lock() c.mtx.Lock()
defer c.mtx.Unlock() defer c.mtx.Unlock()
if c.count > 0 { assert.True(c.count > 0, "fstree.SimpleCounter: invalid count")
c.count-- c.count--
} else {
panic("fstree.SimpleCounter: invalid count") assert.True(c.size >= size, "fstree.SimpleCounter: invalid size")
}
if c.size >= size {
c.size -= size c.size -= size
} else {
panic("fstree.SimpleCounter: invalid size")
}
} }
func (c *SimpleCounter) CountSize() (uint64, uint64) { func (c *SimpleCounter) CountSize() (uint64, uint64) {

View file

@ -45,7 +45,7 @@ type FSTree struct {
log *logger.Logger log *logger.Logger
*compression.Config compressor *compression.Compressor
Depth uint64 Depth uint64
DirNameLen int DirNameLen int
@ -82,7 +82,7 @@ func New(opts ...Option) *FSTree {
Permissions: 0o700, Permissions: 0o700,
RootPath: "./", RootPath: "./",
}, },
Config: nil, compressor: nil,
Depth: 4, Depth: 4,
DirNameLen: DirNameLen, DirNameLen: DirNameLen,
metrics: &noopMetrics{}, metrics: &noopMetrics{},
@ -196,7 +196,7 @@ func (t *FSTree) iterate(ctx context.Context, depth uint64, curPath []string, pr
} }
if err == nil { if err == nil {
data, err = t.Decompress(data) data, err = t.compressor.Decompress(data)
} }
if err != nil { if err != nil {
if prm.IgnoreErrors { if prm.IgnoreErrors {
@ -405,7 +405,7 @@ func (t *FSTree) Put(ctx context.Context, prm common.PutPrm) (common.PutRes, err
return common.PutRes{}, err return common.PutRes{}, err
} }
if !prm.DontCompress { if !prm.DontCompress {
prm.RawData = t.Compress(prm.RawData) prm.RawData = t.compressor.Compress(prm.RawData)
} }
size = len(prm.RawData) size = len(prm.RawData)
@ -448,7 +448,7 @@ func (t *FSTree) Get(ctx context.Context, prm common.GetPrm) (common.GetRes, err
} }
} }
data, err = t.Decompress(data) data, err = t.compressor.Decompress(data)
if err != nil { if err != nil {
return common.GetRes{}, err return common.GetRes{}, err
} }
@ -597,12 +597,12 @@ func (t *FSTree) Path() string {
} }
// SetCompressor implements common.Storage. // SetCompressor implements common.Storage.
func (t *FSTree) SetCompressor(cc *compression.Config) { func (t *FSTree) SetCompressor(cc *compression.Compressor) {
t.Config = cc t.compressor = cc
} }
func (t *FSTree) Compressor() *compression.Config { func (t *FSTree) Compressor() *compression.Compressor {
return t.Config return t.compressor
} }
// SetReportErrorFunc implements common.Storage. // SetReportErrorFunc implements common.Storage.

View file

@ -67,13 +67,10 @@ func (w *genericWriter) writeAndRename(tmpPath, p string, data []byte) error {
err := w.writeFile(tmpPath, data) err := w.writeFile(tmpPath, data)
if err != nil { if err != nil {
var pe *fs.PathError var pe *fs.PathError
if errors.As(err, &pe) { if errors.As(err, &pe) && errors.Is(pe.Err, syscall.ENOSPC) {
switch pe.Err {
case syscall.ENOSPC:
err = common.ErrNoSpace err = common.ErrNoSpace
_ = os.RemoveAll(tmpPath) _ = os.RemoveAll(tmpPath)
} }
}
return err return err
} }
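The flattened condition also matches wrapped errors, since `errors.Is` unwraps where the old `switch pe.Err` compared directly. A standalone sketch of the pattern:

```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"syscall"
)

// classify unwraps a *fs.PathError and matches ENOSPC in a single condition.
func classify(err error) error {
	var pe *fs.PathError
	if errors.As(err, &pe) && errors.Is(pe.Err, syscall.ENOSPC) {
		return fmt.Errorf("no space left: %w", err)
	}
	return err
}

func main() {
	err := &fs.PathError{Op: "write", Path: "/tmp/x", Err: syscall.ENOSPC}
	fmt.Println(classify(err))
}
```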

View file

@ -4,7 +4,6 @@ import (
"io/fs" "io/fs"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/logger" "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/logger"
"go.uber.org/zap"
) )
type Option func(*FSTree) type Option func(*FSTree)
@ -53,6 +52,6 @@ func WithFileCounter(c FileCounter) Option {
func WithLogger(l *logger.Logger) Option { func WithLogger(l *logger.Logger) Option {
return func(f *FSTree) { return func(f *FSTree) {
f.log = l.With(zap.String("component", "FSTree")) f.log = l
} }
} }

View file

@ -8,6 +8,7 @@ import (
"testing" "testing"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/common" "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/compression"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/memstore" "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/memstore"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/teststore" "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/teststore"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/shard/mode" "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/shard/mode"
@ -24,7 +25,9 @@ func TestIterateObjects(t *testing.T) {
// create BlobStor instance // create BlobStor instance
blobStor := New( blobStor := New(
WithStorages(defaultStorages(p, smalSz)), WithStorages(defaultStorages(p, smalSz)),
WithCompressObjects(true), WithCompression(compression.Config{
Enabled: true,
}),
) )
defer os.RemoveAll(p) defer os.RemoveAll(p)

View file

@ -16,7 +16,7 @@ func (s *memstoreImpl) Init() error
func (s *memstoreImpl) Close(context.Context) error { return nil } func (s *memstoreImpl) Close(context.Context) error { return nil }
func (s *memstoreImpl) Type() string { return Type } func (s *memstoreImpl) Type() string { return Type }
func (s *memstoreImpl) Path() string { return s.rootPath } func (s *memstoreImpl) Path() string { return s.rootPath }
func (s *memstoreImpl) SetCompressor(cc *compression.Config) { s.compression = cc } func (s *memstoreImpl) SetCompressor(cc *compression.Compressor) { s.compression = cc }
func (s *memstoreImpl) Compressor() *compression.Config { return s.compression } func (s *memstoreImpl) Compressor() *compression.Compressor { return s.compression }
func (s *memstoreImpl) SetReportErrorFunc(func(context.Context, string, error)) {} func (s *memstoreImpl) SetReportErrorFunc(func(context.Context, string, error)) {}
func (s *memstoreImpl) SetParentID(string) {} func (s *memstoreImpl) SetParentID(string) {}

View file

@ -7,7 +7,7 @@ import (
type cfg struct { type cfg struct {
rootPath string rootPath string
readOnly bool readOnly bool
compression *compression.Config compression *compression.Compressor
} }
func defaultConfig() *cfg { func defaultConfig() *cfg {

View file

@ -17,8 +17,8 @@ type cfg struct {
Type func() string Type func() string
Path func() string Path func() string
SetCompressor func(cc *compression.Config) SetCompressor func(cc *compression.Compressor)
Compressor func() *compression.Config Compressor func() *compression.Compressor
SetReportErrorFunc func(f func(context.Context, string, error)) SetReportErrorFunc func(f func(context.Context, string, error))
Get func(common.GetPrm) (common.GetRes, error) Get func(common.GetPrm) (common.GetRes, error)
@ -45,11 +45,11 @@ func WithClose(f func() error) Option { return func(c *cfg) { c
func WithType(f func() string) Option { return func(c *cfg) { c.overrides.Type = f } } func WithType(f func() string) Option { return func(c *cfg) { c.overrides.Type = f } }
func WithPath(f func() string) Option { return func(c *cfg) { c.overrides.Path = f } } func WithPath(f func() string) Option { return func(c *cfg) { c.overrides.Path = f } }
func WithSetCompressor(f func(*compression.Config)) Option { func WithSetCompressor(f func(*compression.Compressor)) Option {
return func(c *cfg) { c.overrides.SetCompressor = f } return func(c *cfg) { c.overrides.SetCompressor = f }
} }
func WithCompressor(f func() *compression.Config) Option { func WithCompressor(f func() *compression.Compressor) Option {
return func(c *cfg) { c.overrides.Compressor = f } return func(c *cfg) { c.overrides.Compressor = f }
} }

View file

@ -116,7 +116,7 @@ func (s *TestStore) Path() string {
} }
} }
func (s *TestStore) SetCompressor(cc *compression.Config) { func (s *TestStore) SetCompressor(cc *compression.Compressor) {
s.mu.RLock() s.mu.RLock()
defer s.mu.RUnlock() defer s.mu.RUnlock()
switch { switch {
@ -129,7 +129,7 @@ func (s *TestStore) SetCompressor(cc *compression.Config) {
} }
} }
func (s *TestStore) Compressor() *compression.Config { func (s *TestStore) Compressor() *compression.Compressor {
s.mu.RLock() s.mu.RLock()
defer s.mu.RUnlock() defer s.mu.RUnlock()
switch { switch {

View file

@ -22,10 +22,6 @@ type shardInitError struct {
// Open opens all StorageEngine's components. // Open opens all StorageEngine's components.
func (e *StorageEngine) Open(ctx context.Context) error { func (e *StorageEngine) Open(ctx context.Context) error {
return e.open(ctx)
}
func (e *StorageEngine) open(ctx context.Context) error {
e.mtx.Lock() e.mtx.Lock()
defer e.mtx.Unlock() defer e.mtx.Unlock()
@ -149,11 +145,11 @@ var errClosed = errors.New("storage engine is closed")
func (e *StorageEngine) Close(ctx context.Context) error { func (e *StorageEngine) Close(ctx context.Context) error {
close(e.closeCh) close(e.closeCh)
defer e.wg.Wait() defer e.wg.Wait()
return e.setBlockExecErr(ctx, errClosed) return e.closeEngine(ctx)
} }
// closes all shards. Never returns an error, shard errors are logged. // closes all shards. Never returns an error, shard errors are logged.
func (e *StorageEngine) close(ctx context.Context) error { func (e *StorageEngine) closeAllShards(ctx context.Context) error {
e.mtx.RLock() e.mtx.RLock()
defer e.mtx.RUnlock() defer e.mtx.RUnlock()
@ -176,70 +172,23 @@ func (e *StorageEngine) execIfNotBlocked(op func() error) error {
e.blockExec.mtx.RLock() e.blockExec.mtx.RLock()
defer e.blockExec.mtx.RUnlock() defer e.blockExec.mtx.RUnlock()
if e.blockExec.err != nil { if e.blockExec.closed {
return e.blockExec.err return errClosed
} }
return op() return op()
} }
// sets the flag of blocking execution of all data operations according to err: func (e *StorageEngine) closeEngine(ctx context.Context) error {
// - err != nil, then blocks the execution. If exec wasn't blocked, calls close method
// (if err == errClosed => additionally releases pools and does not allow to resume executions).
// - otherwise, resumes execution. If exec was blocked, calls open method.
//
// Can be called concurrently with exec. In this case it waits for all executions to complete.
func (e *StorageEngine) setBlockExecErr(ctx context.Context, err error) error {
e.blockExec.mtx.Lock() e.blockExec.mtx.Lock()
defer e.blockExec.mtx.Unlock() defer e.blockExec.mtx.Unlock()
prevErr := e.blockExec.err if e.blockExec.closed {
wasClosed := errors.Is(prevErr, errClosed)
if wasClosed {
return errClosed return errClosed
} }
e.blockExec.err = err e.blockExec.closed = true
return e.closeAllShards(ctx)
if err == nil {
if prevErr != nil { // block -> ok
return e.open(ctx)
}
} else if prevErr == nil { // ok -> block
return e.close(ctx)
}
// otherwise do nothing
return nil
}
// BlockExecution blocks the execution of any data-related operation. All blocked ops will return err.
// To resume the execution, use ResumeExecution method.
//
// Can be called regardless of the fact of the previous blocking. If execution wasn't blocked, releases all resources
// similar to Close. Can be called concurrently with Close and any data related method (waits for all executions
// to complete). Returns error if any Close has been called before.
//
// Must not be called concurrently with either Open or Init.
//
// Note: technically passing nil error will resume the execution, otherwise, it is recommended to call ResumeExecution
// for this.
func (e *StorageEngine) BlockExecution(err error) error {
return e.setBlockExecErr(context.Background(), err)
}
// ResumeExecution resumes the execution of any data-related operation.
// To block the execution, use BlockExecution method.
//
// Can be called regardless of the fact of the previous blocking. If execution was blocked, prepares all resources
// similar to Open. Can be called concurrently with Close and any data related method (waits for all executions
// to complete). Returns error if any Close has been called before.
//
// Must not be called concurrently with either Open or Init.
func (e *StorageEngine) ResumeExecution() error {
return e.setBlockExecErr(context.Background(), nil)
} }
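With `BlockExecution`/`ResumeExecution` gone, the gate reduces to a closed flag behind an RWMutex. A standalone sketch of the resulting lifecycle (illustrative, not the engine itself):

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

var errClosed = errors.New("storage engine is closed")

type engine struct {
	mtx    sync.RWMutex
	closed bool
}

// execIfNotBlocked runs op unless the engine was closed, like the simplified check above.
func (e *engine) execIfNotBlocked(op func() error) error {
	e.mtx.RLock()
	defer e.mtx.RUnlock()
	if e.closed {
		return errClosed
	}
	return op()
}

func (e *engine) close() {
	e.mtx.Lock()
	defer e.mtx.Unlock()
	e.closed = true
}

func main() {
	e := &engine{}
	fmt.Println(e.execIfNotBlocked(func() error { return nil })) // <nil>
	e.close()
	fmt.Println(e.execIfNotBlocked(func() error { return nil })) // storage engine is closed
}
```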
type ReConfiguration struct { type ReConfiguration struct {

View file

@ -2,7 +2,6 @@ package engine
import ( import (
"context" "context"
"errors"
"fmt" "fmt"
"io/fs" "io/fs"
"os" "os"
@ -12,17 +11,14 @@ import (
"testing" "testing"
"time" "time"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/core/object"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor" "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/teststore" "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/teststore"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/internal/testutil"
meta "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/metabase" meta "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/metabase"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/pilorama" "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/pilorama"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/shard" "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/shard"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/shard/mode" "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/shard/mode"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/writecache" "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/writecache"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/logger/test" "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/logger/test"
cidtest "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id/test"
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
"go.etcd.io/bbolt" "go.etcd.io/bbolt"
) )
@ -163,42 +159,6 @@ func testEngineFailInitAndReload(t *testing.T, degradedMode bool, opts []shard.O
require.Equal(t, 1, shardCount) require.Equal(t, 1, shardCount)
} }
func TestExecBlocks(t *testing.T) {
e := testNewEngine(t).setShardsNum(t, 2).prepare(t).engine // number doesn't matter in this test, 2 is several but not many
// put some object
obj := testutil.GenerateObjectWithCID(cidtest.ID())
addr := object.AddressOf(obj)
require.NoError(t, Put(context.Background(), e, obj, false))
// block executions
errBlock := errors.New("block exec err")
require.NoError(t, e.BlockExecution(errBlock))
// try to exec some op
_, err := Head(context.Background(), e, addr)
require.ErrorIs(t, err, errBlock)
// resume executions
require.NoError(t, e.ResumeExecution())
_, err = Head(context.Background(), e, addr) // can be any data-related op
require.NoError(t, err)
// close
require.NoError(t, e.Close(context.Background()))
// try exec after close
_, err = Head(context.Background(), e, addr)
require.Error(t, err)
// try to resume
require.Error(t, e.ResumeExecution())
}
func TestPersistentShardID(t *testing.T) { func TestPersistentShardID(t *testing.T) {
dir := t.TempDir() dir := t.TempDir()

View file

@ -34,8 +34,7 @@ type StorageEngine struct {
blockExec struct { blockExec struct {
mtx sync.RWMutex mtx sync.RWMutex
closed bool
err error
} }
evacuateLimiter *evacuationLimiter evacuateLimiter *evacuationLimiter
} }
@ -212,12 +211,18 @@ func New(opts ...Option) *StorageEngine {
opts[i](c) opts[i](c)
} }
evLimMtx := &sync.RWMutex{}
evLimCond := sync.NewCond(evLimMtx)
return &StorageEngine{ return &StorageEngine{
cfg: c, cfg: c,
shards: make(map[string]hashedShard), shards: make(map[string]hashedShard),
closeCh: make(chan struct{}), closeCh: make(chan struct{}),
setModeCh: make(chan setModeRequest), setModeCh: make(chan setModeRequest),
evacuateLimiter: &evacuationLimiter{}, evacuateLimiter: &evacuationLimiter{
guard: evLimMtx,
statusCond: evLimCond,
},
} }
} }

View file

@ -116,7 +116,8 @@ func newStorages(t testing.TB, root string, smallSize uint64) []blobstor.SubStor
blobovniczatree.WithBlobovniczaShallowDepth(1), blobovniczatree.WithBlobovniczaShallowDepth(1),
blobovniczatree.WithBlobovniczaShallowWidth(1), blobovniczatree.WithBlobovniczaShallowWidth(1),
blobovniczatree.WithPermissions(0o700), blobovniczatree.WithPermissions(0o700),
blobovniczatree.WithLogger(test.NewLogger(t))), blobovniczatree.WithBlobovniczaLogger(test.NewLogger(t)),
blobovniczatree.WithBlobovniczaTreeLogger(test.NewLogger(t))),
Policy: func(_ *objectSDK.Object, data []byte) bool { Policy: func(_ *objectSDK.Object, data []byte) bool {
return uint64(len(data)) < smallSize return uint64(len(data)) < smallSize
}, },

View file

@ -139,7 +139,8 @@ type evacuationLimiter struct {
eg *errgroup.Group eg *errgroup.Group
cancel context.CancelFunc cancel context.CancelFunc
guard sync.RWMutex guard *sync.RWMutex
statusCond *sync.Cond // used in unit tests
} }
func (l *evacuationLimiter) TryStart(ctx context.Context, shardIDs []string, result *EvacuateShardRes) (*errgroup.Group, context.Context, error) { func (l *evacuationLimiter) TryStart(ctx context.Context, shardIDs []string, result *EvacuateShardRes) (*errgroup.Group, context.Context, error) {
@ -165,6 +166,7 @@ func (l *evacuationLimiter) TryStart(ctx context.Context, shardIDs []string, res
startedAt: time.Now().UTC(), startedAt: time.Now().UTC(),
result: result, result: result,
} }
l.statusCond.Broadcast()
return l.eg, egCtx, nil return l.eg, egCtx, nil
} }
@ -180,6 +182,7 @@ func (l *evacuationLimiter) Complete(err error) {
l.state.processState = EvacuateProcessStateCompleted l.state.processState = EvacuateProcessStateCompleted
l.state.errMessage = errMsq l.state.errMessage = errMsq
l.state.finishedAt = time.Now().UTC() l.state.finishedAt = time.Now().UTC()
l.statusCond.Broadcast()
l.eg = nil l.eg = nil
} }
@ -214,6 +217,7 @@ func (l *evacuationLimiter) ResetEvacuationStatus() error {
l.state = EvacuationState{} l.state = EvacuationState{}
l.eg = nil l.eg = nil
l.cancel = nil l.cancel = nil
l.statusCond.Broadcast()
return nil return nil
} }

View file

@@ -204,11 +204,10 @@ func TestEvacuateShardObjects(t *testing.T) {
 func testWaitForEvacuationCompleted(t *testing.T, e *StorageEngine) *EvacuationState {
 	var st *EvacuationState
 	var err error
-	require.Eventually(t, func() bool {
-		st, err = e.GetEvacuationState(context.Background())
-		require.NoError(t, err)
-		return st.ProcessingStatus() == EvacuateProcessStateCompleted
-	}, 6*time.Second, 10*time.Millisecond)
+	e.evacuateLimiter.waitForCompleted()
+	st, err = e.GetEvacuationState(context.Background())
+	require.NoError(t, err)
+	require.Equal(t, EvacuateProcessStateCompleted, st.ProcessingStatus())
 	return st
 }
@@ -817,3 +816,12 @@ func TestEvacuateShardObjectsRepOneOnlyBench(t *testing.T) {
 	t.Logf("evacuate took %v\n", time.Since(start))
 	require.NoError(t, err)
 }
+
+func (l *evacuationLimiter) waitForCompleted() {
+	l.guard.Lock()
+	defer l.guard.Unlock()
+	for l.state.processState != EvacuateProcessStateCompleted {
+		l.statusCond.Wait()
+	}
+}
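Replacing the polling require.Eventually with a condition variable makes the test event-driven: every writer broadcasts after a state transition, and the waiter re-checks its predicate in a loop, as sync.Cond requires (wakeups may be spurious or stale). A self-contained sketch of the same idiom, with hypothetical names rather than the engine's types:

package main

import (
	"fmt"
	"sync"
)

type watcher struct {
	mu   sync.Mutex
	cond *sync.Cond
	done bool
}

func newWatcher() *watcher {
	w := &watcher{}
	w.cond = sync.NewCond(&w.mu)
	return w
}

func (w *watcher) complete() {
	w.mu.Lock()
	w.done = true
	w.mu.Unlock()
	w.cond.Broadcast() // wake all waiters after the state change
}

func (w *watcher) waitForCompleted() {
	w.mu.Lock()
	defer w.mu.Unlock()
	for !w.done { // loop, not if: re-check the predicate after each wakeup
		w.cond.Wait() // atomically unlocks mu and blocks; relocks on wakeup
	}
}

func main() {
	w := newWatcher()
	go w.complete()
	w.waitForCompleted()
	fmt.Println("completed")
}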


@@ -186,10 +186,6 @@ func (e *StorageEngine) findShards(ctx context.Context, addr oid.Address, checkL
 		default:
 		}

-		if !objectExists {
-			return
-		}
-
 		if checkLocked {
 			if isLocked, err := sh.IsLocked(ctx, addr); err != nil {
 				e.log.Warn(ctx, logs.EngineRemovingAnObjectWithoutFullLockingCheck,
@@ -202,6 +198,13 @@ func (e *StorageEngine) findShards(ctx context.Context, addr oid.Address, checkL
 			}
 		}

+		// This exit point must come after checking if the object is locked,
+		// since the locked index may be populated even if the object doesn't
+		// exist.
+		if !objectExists {
+			return
+		}
+
 		ids = append(ids, sh.ID().String())

 		// Continue if it's a root object.

@@ -242,3 +242,58 @@ func benchmarkInhumeMultipart(b *testing.B, numShards, numObjects int) {
 		b.StopTimer()
 	}
 }
+
+func TestInhumeIfObjectDoesntExist(t *testing.T) {
+	t.Run("object is locked", func(t *testing.T) {
+		t.Run("inhume without tombstone", func(t *testing.T) {
+			testInhumeLockedIfObjectDoesntExist(t, false, false)
+		})
+		t.Run("inhume with tombstone", func(t *testing.T) {
+			testInhumeLockedIfObjectDoesntExist(t, true, false)
+		})
+		t.Run("force inhume without tombstone", func(t *testing.T) {
+			testInhumeLockedIfObjectDoesntExist(t, false, true)
+		})
+		t.Run("force inhume with tombstone", func(t *testing.T) {
+			testInhumeLockedIfObjectDoesntExist(t, true, true)
+		})
+	})
+}
+
+func testInhumeLockedIfObjectDoesntExist(t *testing.T, withTombstone, withForce bool) {
+	t.Parallel()
+
+	var (
+		errLocked *apistatus.ObjectLocked
+		inhumePrm InhumePrm
+
+		ctx       = context.Background()
+		container = cidtest.ID()
+		object    = oidtest.Address()
+		lock      = oidtest.ID()
+		tombstone = oidtest.Address()
+	)
+	object.SetContainer(container)
+	tombstone.SetContainer(container)
+
+	engine := testNewEngine(t).setShardsNum(t, 4).prepare(t).engine
+	defer func() { require.NoError(t, engine.Close(ctx)) }()
+
+	err := engine.Lock(ctx, container, lock, []oid.ID{object.Object()})
+	require.NoError(t, err)
+
+	if withTombstone {
+		inhumePrm.WithTarget(tombstone, object)
+	} else {
+		inhumePrm.MarkAsGarbage(object)
+	}
+	if withForce {
+		inhumePrm.WithForceRemoval()
+	}
+	err = engine.Inhume(ctx, inhumePrm)
+	if withForce {
+		require.NoError(t, err)
+	} else {
+		require.ErrorAs(t, err, &errLocked)
+	}
+}


@@ -363,12 +363,12 @@ func (db *DB) deleteObject(
 func parentLength(tx *bbolt.Tx, addr oid.Address) int {
 	bucketName := make([]byte, bucketKeySize)

-	bkt := tx.Bucket(parentBucketName(addr.Container(), bucketName[:]))
+	bkt := tx.Bucket(parentBucketName(addr.Container(), bucketName))
 	if bkt == nil {
 		return 0
 	}

-	lst, err := decodeList(bkt.Get(objectKey(addr.Object(), bucketName[:])))
+	lst, err := decodeList(bkt.Get(objectKey(addr.Object(), bucketName)))
 	if err != nil {
 		return 0
 	}


@@ -7,6 +7,7 @@ import (
 	"slices"
 	"time"

+	"git.frostfs.info/TrueCloudLab/frostfs-node/internal/assert"
 	"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/internal/metaerr"
 	"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/util/logicerr"
 	"git.frostfs.info/TrueCloudLab/frostfs-observability/tracing"
@@ -63,9 +64,7 @@ func (db *DB) Lock(ctx context.Context, cnr cid.ID, locker oid.ID, locked []oid.
 		return ErrReadOnlyMode
 	}

-	if len(locked) == 0 {
-		panic("empty locked list")
-	}
+	assert.False(len(locked) == 0, "empty locked list")

 	err := db.lockInternal(locked, cnr, locker)
 	success = err == nil


@@ -6,6 +6,7 @@ import (
 	"errors"
 	"fmt"

+	"git.frostfs.info/TrueCloudLab/frostfs-node/internal/assert"
 	cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
 	objectSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
 	oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
@@ -278,9 +279,7 @@ func objectKey(obj oid.ID, key []byte) []byte {
 //
 // firstIrregularObjectType(tx, cnr, obj) usage allows getting object type.
 func firstIrregularObjectType(tx *bbolt.Tx, idCnr cid.ID, objs ...[]byte) objectSDK.Type {
-	if len(objs) == 0 {
-		panic("empty object list in firstIrregularObjectType")
-	}
+	assert.False(len(objs) == 0, "empty object list in firstIrregularObjectType")

 	var keys [2][1 + cidSize]byte


@@ -363,6 +363,7 @@ func (s *Shard) refillTombstoneObject(ctx context.Context, obj *objectSDK.Object
 // Close releases all Shard's components.
 func (s *Shard) Close(ctx context.Context) error {
+	unlock := s.lockExclusive()
 	if s.rb != nil {
 		s.rb.Stop(ctx, s.log)
 	}
@@ -388,15 +389,19 @@ func (s *Shard) Close(ctx context.Context) error {
 		}
 	}

+	if s.opsLimiter != nil {
+		s.opsLimiter.Close()
+	}
+
+	unlock()
+
+	// GC waits for handlers and remover to complete. Handlers may try to lock shard's lock.
+	// So to prevent deadlock GC stopping is outside of exclusive lock.
 	// If Init/Open was unsuccessful gc can be nil.
 	if s.gc != nil {
 		s.gc.stop(ctx)
 	}

-	if s.opsLimiter != nil {
-		s.opsLimiter.Close()
-	}
-
 	return lastErr
 }
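The reordering matters: GC handlers may themselves acquire the shard lock, so stopping GC while holding the exclusive lock can deadlock. A minimal reproduction of the rule being applied, release the lock before joining goroutines that take it, with illustrative types only:

package main

import (
	"fmt"
	"sync"
)

type component struct {
	mu   sync.RWMutex
	quit chan struct{}
	done chan struct{}
}

func (c *component) worker() {
	defer close(c.done)
	for {
		select {
		case <-c.quit:
			return
		default:
		}
		c.mu.RLock() // the worker takes the shared lock, like GC handlers do
		// ... do periodic work under the lock ...
		c.mu.RUnlock()
	}
}

func (c *component) Close() {
	c.mu.Lock()
	// ... shut down state that must be protected by the lock ...
	c.mu.Unlock() // release BEFORE joining the worker, or it may block forever

	close(c.quit)
	<-c.done // safe: the worker can still acquire mu while draining
}

func main() {
	c := &component{quit: make(chan struct{}), done: make(chan struct{})}
	go c.worker()
	c.Close()
	fmt.Println("closed without deadlock")
}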


@@ -391,6 +391,16 @@ func (s *Shard) handleExpiredObjects(ctx context.Context, expired []oid.Address)
 		return
 	}

+	s.handleExpiredObjectsUnsafe(ctx, expired)
+}
+
+func (s *Shard) handleExpiredObjectsUnsafe(ctx context.Context, expired []oid.Address) {
+	select {
+	case <-ctx.Done():
+		return
+	default:
+	}
+
 	expired, err := s.getExpiredWithLinked(ctx, expired)
 	if err != nil {
 		s.log.Warn(ctx, logs.ShardGCFailedToGetExpiredWithLinked, zap.Error(err))
@@ -611,13 +621,6 @@ func (s *Shard) getExpiredObjects(ctx context.Context, epoch uint64, onExpiredFo
 }

 func (s *Shard) selectExpired(ctx context.Context, epoch uint64, addresses []oid.Address) ([]oid.Address, error) {
-	s.m.RLock()
-	defer s.m.RUnlock()
-
-	if s.info.Mode.NoMetabase() {
-		return nil, ErrDegradedMode
-	}
-
 	release, err := s.opsLimiter.ReadRequest(ctx)
 	if err != nil {
 		return nil, err
@@ -728,7 +731,7 @@ func (s *Shard) inhumeUnlockedIfExpired(ctx context.Context, epoch uint64, unloc
 		return
 	}

-	s.handleExpiredObjects(ctx, expiredUnlocked)
+	s.handleExpiredObjectsUnsafe(ctx, expiredUnlocked)
 }

 // HandleDeletedLocks unlocks all objects which were locked by lockers.
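The split follows a common Go locking convention: a public entry point takes the lock and performs mode checks, then delegates to an *Unsafe variant that assumes its preconditions already hold, so call paths that have done those checks can reuse the body without double-locking. Roughly, with illustrative types:

package main

import (
	"fmt"
	"sync"
)

type store struct {
	mu   sync.RWMutex
	data map[string]int
}

// Get takes the lock itself; safe for external callers.
func (s *store) Get(k string) int {
	s.mu.RLock()
	defer s.mu.RUnlock()
	return s.getUnsafe(k)
}

// getUnsafe assumes s.mu is already held by the caller, so internal
// paths that hold the lock can reuse it without self-deadlocking.
func (s *store) getUnsafe(k string) int {
	return s.data[k]
}

func main() {
	s := &store{data: map[string]int{"a": 1}}
	fmt.Println(s.Get("a"))
}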


@@ -37,7 +37,8 @@ func Test_ObjectNotFoundIfNotDeletedFromMetabase(t *testing.T) {
 		{
 			Storage: blobovniczatree.NewBlobovniczaTree(
 				context.Background(),
-				blobovniczatree.WithLogger(test.NewLogger(t)),
+				blobovniczatree.WithBlobovniczaLogger(test.NewLogger(t)),
+				blobovniczatree.WithBlobovniczaTreeLogger(test.NewLogger(t)),
 				blobovniczatree.WithRootPath(filepath.Join(rootPath, "blob", "blobovnicza")),
 				blobovniczatree.WithBlobovniczaShallowDepth(1),
 				blobovniczatree.WithBlobovniczaShallowWidth(1)),


@@ -28,9 +28,10 @@ func TestShard_Lock(t *testing.T) {
 	var sh *Shard

 	rootPath := t.TempDir()
+	l := logger.NewLoggerWrapper(zap.NewNop())
 	opts := []Option{
 		WithID(NewIDFromBytes([]byte{})),
-		WithLogger(logger.NewLoggerWrapper(zap.NewNop())),
+		WithLogger(l),
 		WithBlobStorOptions(
 			blobstor.WithStorages([]blobstor.SubStorage{
 				{


@@ -79,7 +79,8 @@ func testShardGetRange(t *testing.T, hasWriteCache bool) {
 		{
 			Storage: blobovniczatree.NewBlobovniczaTree(
 				context.Background(),
-				blobovniczatree.WithLogger(test.NewLogger(t)),
+				blobovniczatree.WithBlobovniczaLogger(test.NewLogger(t)),
+				blobovniczatree.WithBlobovniczaTreeLogger(test.NewLogger(t)),
 				blobovniczatree.WithRootPath(filepath.Join(t.TempDir(), "blob", "blobovnicza")),
 				blobovniczatree.WithBlobovniczaShallowDepth(1),
 				blobovniczatree.WithBlobovniczaShallowWidth(1)),


@@ -205,7 +205,7 @@ func WithPiloramaOptions(opts ...pilorama.Option) Option {
 func WithLogger(l *logger.Logger) Option {
 	return func(c *cfg) {
 		c.log = l
-		c.gcCfg.log = l
+		c.gcCfg.log = l.WithTag(logger.TagGC)
 	}
 }
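WithTag here derives a child logger bound to the GC component, part of the component-tagging work in this change set. The underlying pattern, a cheap copy carrying a component tag so per-component output can be distinguished and tuned, can be sketched as follows (simplified, hypothetical types, not the frostfs logger API):

package main

import "fmt"

type Logger struct {
	tag string
}

// WithTag returns a shallow copy bound to a component tag,
// leaving the parent logger untouched.
func (l *Logger) WithTag(tag string) *Logger {
	cp := *l
	cp.tag = tag
	return &cp
}

func (l *Logger) Info(msg string) {
	fmt.Printf("[%s] %s\n", l.tag, msg)
}

func main() {
	root := &Logger{tag: "main"}
	gcLog := root.WithTag("gc")
	root.Info("engine started")
	gcLog.Info("gc cycle finished")
}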


@@ -60,7 +60,8 @@ func newCustomShard(t testing.TB, enableWriteCache bool, o shardOptions) *Shard
 		{
 			Storage: blobovniczatree.NewBlobovniczaTree(
 				context.Background(),
-				blobovniczatree.WithLogger(test.NewLogger(t)),
+				blobovniczatree.WithBlobovniczaLogger(test.NewLogger(t)),
+				blobovniczatree.WithBlobovniczaTreeLogger(test.NewLogger(t)),
 				blobovniczatree.WithRootPath(filepath.Join(o.rootPath, "blob", "blobovnicza")),
 				blobovniczatree.WithBlobovniczaShallowDepth(1),
 				blobovniczatree.WithBlobovniczaShallowWidth(1)),


@@ -3,6 +3,8 @@ package writecache
 import (
 	"errors"
 	"sync"
+
+	"git.frostfs.info/TrueCloudLab/frostfs-node/internal/assert"
 )

 var errLimiterClosed = errors.New("acquire failed: limiter closed")
@@ -45,17 +47,11 @@ func (l *flushLimiter) release(size uint64) {
 	l.cond.L.Lock()
 	defer l.cond.L.Unlock()

-	if l.size >= size {
-		l.size -= size
-	} else {
-		panic("flushLimiter: invalid size")
-	}
+	assert.True(l.size >= size, "flushLimiter: invalid size")
+	l.size -= size

-	if l.count > 0 {
-		l.count--
-	} else {
-		panic("flushLimiter: invalid count")
-	}
+	assert.True(l.count > 0, "flushLimiter: invalid count")
+	l.count--

 	l.cond.Broadcast()
 }


@@ -5,7 +5,6 @@ import (
 	"git.frostfs.info/TrueCloudLab/frostfs-node/internal/qos"
 	"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/logger"
-	"go.uber.org/zap"
 )

 // Option represents write-cache configuration option.
@@ -46,7 +45,7 @@ type options struct {
 // WithLogger sets logger.
 func WithLogger(log *logger.Logger) Option {
 	return func(o *options) {
-		o.log = log.With(zap.String("component", "WriteCache"))
+		o.log = log
 	}
 }

Some files were not shown because too many files have changed in this diff.