Compare commits

31 commits

Author SHA1 Message Date
e45382b0c1 [#1693] util: Replace conditional panics with asserts
Change-Id: I13b566cde3e6d43d8a75aa2e9b28e63b597adff9
Signed-off-by: Ekaterina Lebedeva <ekaterina.lebedeva@yadro.com>
2025-04-15 16:54:45 +00:00
5dd8d7e87a [#1693] network: Replace conditional panics with asserts
Change-Id: Icba39aa2ed0048d63c6efed398273627e1e4fbbe
Signed-off-by: Ekaterina Lebedeva <ekaterina.lebedeva@yadro.com>
2025-04-15 16:54:41 +00:00
bc045b29e2 [#1693] services: Replace conditional panics with asserts
Change-Id: Ic79609e6ad867caa88ad245b3014aa7fc32e05a8
Signed-off-by: Ekaterina Lebedeva <ekaterina.lebedeva@yadro.com>
2025-04-15 16:54:36 +00:00
fc6abe30b8 [#1693] storage: Replace conditional panics with asserts
Change-Id: I9d8ccde3c71fca716856c7bfc53da20ee0542f20
Signed-off-by: Ekaterina Lebedeva <ekaterina.lebedeva@yadro.com>
2025-04-15 16:54:31 +00:00
a285d8924f [#1693] node: Replace conditional panics with asserts
Change-Id: I5024705fd1693d00cb9241235030a73984c2a7e1
Signed-off-by: Ekaterina Lebedeva <ekaterina.lebedeva@yadro.com>
2025-04-15 16:28:43 +00:00
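
These five commits apply the same mechanical change across packages: an explicit `if ... { panic(...) }` check becomes a call to an assert helper. A minimal sketch of the pattern, with `assertTrue` as a hypothetical stand-in for the repository's actual helper:

```go
package main

import "fmt"

// assertTrue is a hypothetical stand-in for the repository's assert helper;
// the real package and function names may differ.
func assertTrue(cond bool, msg string) {
	if !cond {
		panic(msg)
	}
}

func processAll(done, total int) {
	// Before: the invariant was checked with an explicit conditional panic:
	//
	//	if done != total {
	//		panic("not all requests were processed")
	//	}
	//
	// After: the same invariant is expressed with a single assert call.
	assertTrue(done == total, "not all requests were processed")
	fmt.Println("all requests processed")
}

func main() {
	processAll(3, 3)
}
```
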
410b6f70ba
[#1716] cli: Return trace ID on operation failure
Close #1716

Change-Id: I293d0cc6b7331517e8cde42eae07d65384976da5
Signed-off-by: Aleksey Savchuk <a.savchuk@yadro.com>
2025-04-15 18:38:55 +03:00
0ee7467da5 [#1715] config: Add compression config section
To group all `compression_*` parameters together.

Change-Id: I11ad9600f731903753fef1adfbc0328ef75bbf87
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2025-04-15 15:05:50 +00:00
8c746a914a [#1715] compression: Decouple Config and Compressor
Refactoring.

Change-Id: Ide2e1378f30c39045d4bacd13a902331bd4f764f
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2025-04-15 15:05:45 +00:00
98308d0cad [#1715] blobstor: Allow to specify custom compression level
Change-Id: I140c39b9dceaaeb58767061b131777af22242b19
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2025-04-15 15:05:39 +00:00
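
Taken together, these three commits move the scattered `compress*` shard parameters into one `compression` section backed by a dedicated config type. Judging by the fields exercised in the config tests later in this comparison, the grouped settings map onto a struct roughly like the sketch below; the field types and the string values of the level constants are assumptions:

```go
package compression

// Level selects the compression level; the test diff references
// LevelDefault and LevelFastest, the string values here are assumed.
type Level string

const (
	LevelDefault Level = ""
	LevelFastest Level = "fastest"
)

// Config groups the former compress/compression_* shard parameters.
type Config struct {
	Enabled                          bool
	Level                            Level
	UncompressableContentTypes       []string
	EstimateCompressibility          bool
	EstimateCompressibilityThreshold float64
}
```
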
2d1232ce6d [#1689] network,core/netmap: Replace Iterate*() functions with iterators
Change-Id: I4842a3160d74c56d99ea9465d4be2f0662080605
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2025-04-15 14:47:32 +00:00
e65d578ba9 [#1689] Remove deprecated NodeInfo.IterateAttributes()
Change-Id: Ibd07302079efe148903aa6177759232a28616736
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2025-04-15 14:47:28 +00:00
bf06c4fb4b [#1689] Remove deprecated NodeInfo.IterateNetworkEndpoints()
Change-Id: Ic78f18aed11fab34ee3147ceea657296b89fe60c
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2025-04-15 14:47:22 +00:00
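
These commits replace the callback-based `Iterate*()` methods with Go 1.23 range-over-func iterators, as seen in the CLI and netmap diffs below. A small self-contained sketch of the migration (the `nodeInfo` type here only mimics the SDK's `NodeInfo`):

```go
package main

import (
	"fmt"
	"iter"
	"slices"
)

// nodeInfo only mimics the SDK's NodeInfo for illustration: the callback-style
// Iterate* methods are replaced by range-over-func iterators.
type nodeInfo struct {
	endpoints []string
}

// NetworkEndpoints returns an iterator, the style these commits migrate to.
func (n nodeInfo) NetworkEndpoints() iter.Seq[string] {
	return slices.Values(n.endpoints)
}

func main() {
	n := nodeInfo{endpoints: []string{"/dns4/s01/tcp/8080", "/dns4/s02/tcp/8080"}}

	// Old style (removed by these commits):
	//	n.IterateNetworkEndpoints(func(s string) bool {
	//		addresses = append(addresses, s)
	//		return false
	//	})

	// New style: range over the iterator, or collect it in one call.
	for s := range n.NetworkEndpoints() {
		fmt.Println("address:", s)
	}
	addresses := slices.Collect(n.NetworkEndpoints())
	fmt.Println(addresses)
}
```
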
56d09a9957 [#1640] object: Add priority metric based on geo distance
Change-Id: I3a7ea4fc4807392bf50e6ff1389c61367c953074
Signed-off-by: Anton Nikiforov <an.nikiforov@yadro.com>
2025-04-15 13:35:28 +00:00
0712c113de
[#1700] gc: Fix deadlock
`HandleExpiredLocks` acquires a read lock, then `shard.Close` tries to
acquire the write lock, and `HandleExpiredLocks` then calls
`inhumeUnlockedIfExpired` or `selectExpired`, which try to acquire the
read lock again while the writer is waiting.

Change-Id: Ib2ed015e859328045b5a542a4f569e5e0ff8b05b
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2025-04-15 10:06:05 +03:00
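
The deadlock follows from `sync.RWMutex` semantics: once a writer is blocked waiting for the lock, new readers also block, so re-acquiring a read lock while already holding one deadlocks if a writer queued in between. A self-contained reproduction with illustrative names (not the actual shard code):

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	var m sync.RWMutex
	firstReadAcquired := make(chan struct{})
	done := make(chan struct{})

	// "HandleExpiredLocks": takes a read lock, then re-enters for a second
	// read lock via a helper ("inhumeUnlockedIfExpired"/"selectExpired").
	go func() {
		m.RLock()
		close(firstReadAcquired)
		time.Sleep(100 * time.Millisecond) // give the writer time to queue up
		m.RLock()                          // blocks: a writer is already waiting
		m.RUnlock()
		m.RUnlock()
		close(done)
	}()

	// "shard.Close": requests the write lock while the read lock is held.
	go func() {
		<-firstReadAcquired
		m.Lock() // queued behind the first read lock
		m.Unlock()
	}()

	select {
	case <-done:
		fmt.Println("no deadlock")
	case <-time.After(time.Second):
		fmt.Println("deadlock: recursive RLock with a pending writer")
	}
}
```
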
48930ec452 [#1703] cli: Allow reading RPC endpoint from config file
Allowed reading an RPC endpoint from a configuration file when
getting the current epoch in the `object lock` and `bearer create`
commands.

Close #1703

Change-Id: Iea8509dff2893a02cb63f695d7f532eecd743ed8
Signed-off-by: Aleksey Savchuk <a.savchuk@yadro.com>
2025-04-14 15:40:27 +00:00
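
The diff below switches these commands from `cmd.Flags().GetString(commonflags.RPC)` to `viper.GetString(commonflags.RPC)` after binding the flag, so a value from the configuration file is used when the flag is absent. A minimal sketch of that binding pattern (flag and key names are illustrative):

```go
package main

import (
	"fmt"

	"github.com/spf13/cobra"
	"github.com/spf13/viper"
)

func main() {
	cmd := &cobra.Command{
		Use: "demo",
		Run: func(cmd *cobra.Command, _ []string) {
			// viper resolves the value from the bound flag first and falls
			// back to the config file / environment when the flag is unset.
			endpoint := viper.GetString("rpc-endpoint")
			fmt.Println("endpoint:", endpoint)
		},
	}
	cmd.Flags().String("rpc-endpoint", "", "RPC endpoint")
	_ = viper.BindPFlag("rpc-endpoint", cmd.Flags().Lookup("rpc-endpoint"))
	_ = cmd.Execute()
}
```
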
f37babdc54
[#1700] shard: Lock shard's mode mutex on close
To prevent race between GC handlers and close.

Change-Id: I06219230964f000f666a56158d3563c760518c3b
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2025-04-14 14:35:16 +03:00
fd37cea443
[#1700] engine: Drop unused block execution methods
`BlockExecution` and `ResumeExecution` were used only by unit tests,
so drop them and simplify the code.

Change-Id: Ib3de324617e8a27fc1f015542ac5e94df5c60a6e
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2025-04-14 14:35:15 +03:00
e80632884a
[#1700] config: Drop redundant check
The target config is created one level above, so the limiter is always nil.

Change-Id: I1896baae5b9ddeed339a7d2b022a9a886589d362
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2025-04-14 14:35:15 +03:00
5aaa3df533
[#1700] config: Move config struct to qos package
Change-Id: Ie642fff5cd1702cda00425628e11f3fd8c514798
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2025-04-14 14:35:14 +03:00
3be33b7117
[#1706] cli/playground: Mention 'help' in error message for invalid commands
Change-Id: Ica1112b907919a6d19fa1bf683f2a952c4c638e4
Signed-off-by: Alexander Chuprov <a.chuprov@yadro.com>
2025-04-14 13:10:34 +03:00
29b4fbe451 [#1332] cli/playground: Add 'netmap-config' flag
Change-Id: I4342fb9a6da2a05c18ae4e0ad9f0c71550efc5ef
Signed-off-by: Alexander Chuprov <a.chuprov@yadro.com>
2025-04-14 07:21:08 +00:00
0d36e93169 [#1332] cli/playground: Move command handler selection to separate function
Change-Id: I2dcbd85e61960c3cf141b815edab174e308ef858
Signed-off-by: Alexander Chuprov <a.chuprov@yadro.com>
2025-04-13 11:49:45 +00:00
8e87cbee17 [#1689] ci: Move commit checker out of Jenkinsfile
The commit checker is now configured globally for all Gerrit repositories:
  TrueCloudLab/jenkins#16

This allows us to execute commit-checker independently of the rest of
the CI suite and re-check the commit message format without rerunning
other tests.

Change-Id: Ib8f899b856482a5dc5d03861171585415ff6b452
Signed-off-by: Vitaliy Potyarkin <v.potyarkin@yadro.com>
2025-04-12 15:42:45 +00:00
12fc7850dd [#1619] logger: Set tags for ir components
Change-Id: Ifab575bc2a3cd83c9001cd68fffaf94c91494043
Signed-off-by: Anton Nikiforov <an.nikiforov@yadro.com>
2025-04-11 17:27:27 +03:00
dfe2f9956a [#1619] logger: Filter entries by tags provided in config
Change-Id: Ia2a79d6cb2a5eb263fb2e6db3f9cf9f2a7d57118
Signed-off-by: Anton Nikiforov <an.nikiforov@yadro.com>
2025-04-11 17:27:27 +03:00
e06ecacf57
[#1705] engine: Use condition var for evacuation unit tests
To know exactly when the evacuation has completed, a condition
variable was added.

Closes #1705

Change-Id: I86f6d7d2ad2b9759905b6b5e9341008cb74f5dfd
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2025-04-11 09:20:46 +03:00
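
A minimal sketch of the approach: the test waits on a condition variable that the evacuation path signals when it finishes, so there is no polling or sleeping. All names here are illustrative, not the actual test code:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

type evacuationTracker struct {
	mu   sync.Mutex
	cond *sync.Cond
	done bool
}

func newEvacuationTracker() *evacuationTracker {
	t := &evacuationTracker{}
	t.cond = sync.NewCond(&t.mu)
	return t
}

// markDone is called by the evacuation code when it completes.
func (t *evacuationTracker) markDone() {
	t.mu.Lock()
	t.done = true
	t.mu.Unlock()
	t.cond.Broadcast()
}

// waitDone blocks until markDone has been called.
func (t *evacuationTracker) waitDone() {
	t.mu.Lock()
	for !t.done {
		t.cond.Wait()
	}
	t.mu.Unlock()
}

func main() {
	t := newEvacuationTracker()
	go func() {
		time.Sleep(50 * time.Millisecond) // simulated evacuation work
		t.markDone()
	}()
	t.waitDone() // the test knows exactly when evacuation completed
	fmt.Println("evacuation completed")
}
```
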
64c1392513 [#1710] object: Sign response even if CloseAndRecv returns error
* The sign service wraps the error with a status and signs the response
  even if the error comes from `CloseAndRecv` in the `Put` and `Patch`
  methods.

Close #1710

Change-Id: I7e1d8fe00db53607fa6e04ebec9a29b87349f8a1
Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2025-04-10 16:31:47 +03:00
dcfd895449 [#1710] object: Implement Unwrap() for errIncompletePut
* When the sign service calls `SignResponse`, it tries to set a v2 status
  in the response by unwrapping the error as deeply as possible. This did
  not work for `errIncompletePut` because it did not implement `Unwrap()`,
  so the correct status set inside the error could not be found.

Change-Id: I280c1806a008176854c55f13bf8688e5736ef941
Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2025-04-10 16:31:47 +03:00
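
A small self-contained illustration of why `Unwrap()` matters here: `errors.As` can only reach the status wrapped inside `errIncompletePut` once the wrapper exposes it. Types and messages below are illustrative:

```go
package main

import (
	"errors"
	"fmt"
)

// statusError stands in for an API status error type.
type statusError struct{ msg string }

func (e *statusError) Error() string { return e.msg }

// errIncompletePut mirrors the wrapper mentioned in the commit message.
type errIncompletePut struct {
	singleErr error
}

func (e errIncompletePut) Error() string {
	return fmt.Sprintf("incomplete object PUT: %v", e.singleErr)
}

// Unwrap exposes the wrapped error so errors.As can reach the status inside,
// which is what SignResponse relies on when mapping errors to v2 statuses.
func (e errIncompletePut) Unwrap() error {
	return e.singleErr
}

func main() {
	err := errIncompletePut{singleErr: &statusError{msg: "object already removed"}}

	var st *statusError
	if errors.As(err, &st) { // works only because Unwrap is implemented
		fmt.Println("found status:", st.msg)
	} else {
		fmt.Println("status not found")
	}
}
```
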
6730e27ae7
[#1712] adm: Drop rpc-endpoint flag from zombie scan
Morph addresses from the config are used instead.

Change-Id: Id99f91defbbff442c308f30d219b9824b4c871de
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2025-04-10 10:06:31 +03:00
f7779adf71
[#1712] core: Extend object info string with EC header
Closes #1712

Change-Id: Ief4a960f7dece3359763113270d1ff5155f3f19e
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2025-04-10 10:06:31 +03:00
f93b96c601
[#1712] adm: Add maintenance zombie commands
Change-Id: I1b73e561a8daad67d0a8ffc0d293cbdd09aaab6b
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2025-04-10 10:06:30 +03:00
93 changed files with 2125 additions and 630 deletions

.ci/Jenkinsfile vendored

@ -78,10 +78,4 @@ async {
}
}
}
task('dco') {
container('git.frostfs.info/truecloudlab/commit-check:master') {
sh 'FROM=pull_request_target commit-check'
}
}
}


@ -0,0 +1,15 @@
package maintenance
import (
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/modules/maintenance/zombie"
"github.com/spf13/cobra"
)
var RootCmd = &cobra.Command{
Use: "maintenance",
Short: "Section for maintenance commands",
}
func init() {
RootCmd.AddCommand(zombie.Cmd)
}


@ -0,0 +1,70 @@
package zombie
import (
"crypto/ecdsa"
"fmt"
"os"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config"
nodeconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/node"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"github.com/nspcc-dev/neo-go/cli/flags"
"github.com/nspcc-dev/neo-go/cli/input"
"github.com/nspcc-dev/neo-go/pkg/crypto/keys"
"github.com/nspcc-dev/neo-go/pkg/util"
"github.com/nspcc-dev/neo-go/pkg/wallet"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
func getPrivateKey(cmd *cobra.Command, appCfg *config.Config) *ecdsa.PrivateKey {
keyDesc := viper.GetString(walletFlag)
if keyDesc == "" {
return &nodeconfig.Key(appCfg).PrivateKey
}
data, err := os.ReadFile(keyDesc)
commonCmd.ExitOnErr(cmd, "open wallet file: %w", err)
priv, err := keys.NewPrivateKeyFromBytes(data)
if err != nil {
w, err := wallet.NewWalletFromFile(keyDesc)
commonCmd.ExitOnErr(cmd, "provided key is incorrect, only wallet or binary key supported: %w", err)
return fromWallet(cmd, w, viper.GetString(addressFlag))
}
return &priv.PrivateKey
}
func fromWallet(cmd *cobra.Command, w *wallet.Wallet, addrStr string) *ecdsa.PrivateKey {
var (
addr util.Uint160
err error
)
if addrStr == "" {
addr = w.GetChangeAddress()
} else {
addr, err = flags.ParseAddress(addrStr)
commonCmd.ExitOnErr(cmd, "--address option must be specified and valid: %w", err)
}
acc := w.GetAccount(addr)
if acc == nil {
commonCmd.ExitOnErr(cmd, "--address option must be specified and valid: %w", fmt.Errorf("can't find wallet account for %s", addrStr))
}
pass, err := getPassword()
commonCmd.ExitOnErr(cmd, "invalid password for the encrypted key: %w", err)
commonCmd.ExitOnErr(cmd, "can't decrypt account: %w", acc.Decrypt(pass, keys.NEP2ScryptParams()))
return &acc.PrivateKey().PrivateKey
}
func getPassword() (string, error) {
// this check allows empty passwords
if viper.IsSet("password") {
return viper.GetString("password"), nil
}
return input.ReadPassword("Enter password > ")
}


@ -0,0 +1,31 @@
package zombie
import (
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
"github.com/spf13/cobra"
)
func list(cmd *cobra.Command, _ []string) {
configFile, _ := cmd.Flags().GetString(commonflags.ConfigFlag)
configDir, _ := cmd.Flags().GetString(commonflags.ConfigDirFlag)
appCfg := config.New(configFile, configDir, config.EnvPrefix)
storageEngine := newEngine(cmd, appCfg)
q := createQuarantine(cmd, storageEngine.DumpInfo())
var containerID *cid.ID
if cidStr, _ := cmd.Flags().GetString(cidFlag); cidStr != "" {
containerID = &cid.ID{}
commonCmd.ExitOnErr(cmd, "decode container ID string: %w", containerID.DecodeString(cidStr))
}
commonCmd.ExitOnErr(cmd, "iterate over quarantine: %w", q.Iterate(cmd.Context(), func(a oid.Address) error {
if containerID != nil && a.Container() != *containerID {
return nil
}
cmd.Println(a.EncodeToString())
return nil
}))
}


@ -0,0 +1,46 @@
package zombie
import (
"errors"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config"
morphconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/morph"
nodeconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/node"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client"
cntClient "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client/container"
netmapClient "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client/netmap"
"github.com/spf13/cobra"
)
func createMorphClient(cmd *cobra.Command, appCfg *config.Config) *client.Client {
addresses := morphconfig.RPCEndpoint(appCfg)
if len(addresses) == 0 {
commonCmd.ExitOnErr(cmd, "create morph client: %w", errors.New("no morph endpoints found"))
}
key := nodeconfig.Key(appCfg)
cli, err := client.New(cmd.Context(),
key,
client.WithDialTimeout(morphconfig.DialTimeout(appCfg)),
client.WithEndpoints(addresses...),
client.WithSwitchInterval(morphconfig.SwitchInterval(appCfg)),
)
commonCmd.ExitOnErr(cmd, "create morph client: %w", err)
return cli
}
func createContainerClient(cmd *cobra.Command, morph *client.Client) *cntClient.Client {
hs, err := morph.NNSContractAddress(client.NNSContainerContractName)
commonCmd.ExitOnErr(cmd, "resolve container contract hash: %w", err)
cc, err := cntClient.NewFromMorph(morph, hs, 0)
commonCmd.ExitOnErr(cmd, "create morph container client: %w", err)
return cc
}
func createNetmapClient(cmd *cobra.Command, morph *client.Client) *netmapClient.Client {
hs, err := morph.NNSContractAddress(client.NNSNetmapContractName)
commonCmd.ExitOnErr(cmd, "resolve netmap contract hash: %w", err)
cli, err := netmapClient.NewFromMorph(morph, hs, 0)
commonCmd.ExitOnErr(cmd, "create morph netmap client: %w", err)
return cli
}


@ -0,0 +1,154 @@
package zombie
import (
"context"
"fmt"
"math"
"os"
"path/filepath"
"strings"
"sync"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
objectcore "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/core/object"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/fstree"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/engine"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/shard/mode"
apistatus "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client/status"
objectSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
"github.com/spf13/cobra"
)
type quarantine struct {
// mtx protects current field.
mtx sync.Mutex
current int
trees []*fstree.FSTree
}
func createQuarantine(cmd *cobra.Command, engineInfo engine.Info) *quarantine {
var paths []string
for _, sh := range engineInfo.Shards {
var storagePaths []string
for _, st := range sh.BlobStorInfo.SubStorages {
storagePaths = append(storagePaths, st.Path)
}
if len(storagePaths) == 0 {
continue
}
paths = append(paths, filepath.Join(commonPath(storagePaths), "quarantine"))
}
q, err := newQuarantine(paths)
commonCmd.ExitOnErr(cmd, "create quarantine: %w", err)
return q
}
func commonPath(paths []string) string {
if len(paths) == 0 {
return ""
}
if len(paths) == 1 {
return paths[0]
}
minLen := math.MaxInt
for _, p := range paths {
if len(p) < minLen {
minLen = len(p)
}
}
var sb strings.Builder
for i := range minLen {
for _, path := range paths[1:] {
if paths[0][i] != path[i] {
return sb.String()
}
}
sb.WriteByte(paths[0][i])
}
return sb.String()
}
func newQuarantine(paths []string) (*quarantine, error) {
var q quarantine
for i := range paths {
f := fstree.New(
fstree.WithDepth(1),
fstree.WithDirNameLen(1),
fstree.WithPath(paths[i]),
fstree.WithPerm(os.ModePerm),
)
if err := f.Open(mode.ComponentReadWrite); err != nil {
return nil, fmt.Errorf("open fstree %s: %w", paths[i], err)
}
if err := f.Init(); err != nil {
return nil, fmt.Errorf("init fstree %s: %w", paths[i], err)
}
q.trees = append(q.trees, f)
}
return &q, nil
}
func (q *quarantine) Get(ctx context.Context, a oid.Address) (*objectSDK.Object, error) {
for i := range q.trees {
res, err := q.trees[i].Get(ctx, common.GetPrm{Address: a})
if err != nil {
continue
}
return res.Object, nil
}
return nil, &apistatus.ObjectNotFound{}
}
func (q *quarantine) Delete(ctx context.Context, a oid.Address) error {
for i := range q.trees {
_, err := q.trees[i].Delete(ctx, common.DeletePrm{Address: a})
if err != nil {
continue
}
return nil
}
return &apistatus.ObjectNotFound{}
}
func (q *quarantine) Put(ctx context.Context, obj *objectSDK.Object) error {
data, err := obj.Marshal()
if err != nil {
return err
}
var prm common.PutPrm
prm.Address = objectcore.AddressOf(obj)
prm.Object = obj
prm.RawData = data
q.mtx.Lock()
current := q.current
q.current = (q.current + 1) % len(q.trees)
q.mtx.Unlock()
_, err = q.trees[current].Put(ctx, prm)
return err
}
func (q *quarantine) Iterate(ctx context.Context, f func(oid.Address) error) error {
var prm common.IteratePrm
prm.Handler = func(elem common.IterationElement) error {
return f(elem.Address)
}
for i := range q.trees {
select {
case <-ctx.Done():
return ctx.Err()
default:
}
_, err := q.trees[i].Iterate(ctx, prm)
if err != nil {
return err
}
}
return nil
}


@ -0,0 +1,55 @@
package zombie
import (
"errors"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
apistatus "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client/status"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
"github.com/spf13/cobra"
)
func remove(cmd *cobra.Command, _ []string) {
configFile, _ := cmd.Flags().GetString(commonflags.ConfigFlag)
configDir, _ := cmd.Flags().GetString(commonflags.ConfigDirFlag)
appCfg := config.New(configFile, configDir, config.EnvPrefix)
storageEngine := newEngine(cmd, appCfg)
q := createQuarantine(cmd, storageEngine.DumpInfo())
var containerID cid.ID
cidStr, _ := cmd.Flags().GetString(cidFlag)
commonCmd.ExitOnErr(cmd, "decode container ID string: %w", containerID.DecodeString(cidStr))
var objectID *oid.ID
oidStr, _ := cmd.Flags().GetString(oidFlag)
if oidStr != "" {
objectID = &oid.ID{}
commonCmd.ExitOnErr(cmd, "decode object ID string: %w", objectID.DecodeString(oidStr))
}
if objectID != nil {
var addr oid.Address
addr.SetContainer(containerID)
addr.SetObject(*objectID)
removeObject(cmd, q, addr)
} else {
commonCmd.ExitOnErr(cmd, "iterate over quarantine: %w", q.Iterate(cmd.Context(), func(addr oid.Address) error {
if addr.Container() != containerID {
return nil
}
removeObject(cmd, q, addr)
return nil
}))
}
}
func removeObject(cmd *cobra.Command, q *quarantine, addr oid.Address) {
err := q.Delete(cmd.Context(), addr)
if errors.Is(err, new(apistatus.ObjectNotFound)) {
return
}
commonCmd.ExitOnErr(cmd, "remove object from quarantine: %w", err)
}


@ -0,0 +1,69 @@
package zombie
import (
"crypto/sha256"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
containerCore "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/core/container"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/engine"
cntClient "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client/container"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
"github.com/spf13/cobra"
)
func restore(cmd *cobra.Command, _ []string) {
configFile, _ := cmd.Flags().GetString(commonflags.ConfigFlag)
configDir, _ := cmd.Flags().GetString(commonflags.ConfigDirFlag)
appCfg := config.New(configFile, configDir, config.EnvPrefix)
storageEngine := newEngine(cmd, appCfg)
q := createQuarantine(cmd, storageEngine.DumpInfo())
morphClient := createMorphClient(cmd, appCfg)
cnrCli := createContainerClient(cmd, morphClient)
var containerID cid.ID
cidStr, _ := cmd.Flags().GetString(cidFlag)
commonCmd.ExitOnErr(cmd, "decode container ID string: %w", containerID.DecodeString(cidStr))
var objectID *oid.ID
oidStr, _ := cmd.Flags().GetString(oidFlag)
if oidStr != "" {
objectID = &oid.ID{}
commonCmd.ExitOnErr(cmd, "decode object ID string: %w", objectID.DecodeString(oidStr))
}
if objectID != nil {
var addr oid.Address
addr.SetContainer(containerID)
addr.SetObject(*objectID)
restoreObject(cmd, storageEngine, q, addr, cnrCli)
} else {
commonCmd.ExitOnErr(cmd, "iterate over quarantine: %w", q.Iterate(cmd.Context(), func(addr oid.Address) error {
if addr.Container() != containerID {
return nil
}
restoreObject(cmd, storageEngine, q, addr, cnrCli)
return nil
}))
}
}
func restoreObject(cmd *cobra.Command, storageEngine *engine.StorageEngine, q *quarantine, addr oid.Address, cnrCli *cntClient.Client) {
obj, err := q.Get(cmd.Context(), addr)
commonCmd.ExitOnErr(cmd, "get object from quarantine: %w", err)
rawCID := make([]byte, sha256.Size)
cid := addr.Container()
cid.Encode(rawCID)
cnr, err := cnrCli.Get(cmd.Context(), rawCID)
commonCmd.ExitOnErr(cmd, "get container: %w", err)
putPrm := engine.PutPrm{
Object: obj,
IsIndexedContainer: containerCore.IsIndexedContainer(cnr.Value),
}
commonCmd.ExitOnErr(cmd, "put object to storage engine: %w", storageEngine.Put(cmd.Context(), putPrm))
commonCmd.ExitOnErr(cmd, "remove object from quarantine: %w", q.Delete(cmd.Context(), addr))
}


@ -0,0 +1,123 @@
package zombie
import (
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/commonflags"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
const (
flagBatchSize = "batch-size"
flagBatchSizeUsage = "Objects iteration batch size"
cidFlag = "cid"
cidFlagUsage = "Container ID"
oidFlag = "oid"
oidFlagUsage = "Object ID"
walletFlag = "wallet"
walletFlagShorthand = "w"
walletFlagUsage = "Path to the wallet or binary key"
addressFlag = "address"
addressFlagUsage = "Address of wallet account"
moveFlag = "move"
moveFlagUsage = "Move objects from storage engine to quarantine"
)
var (
Cmd = &cobra.Command{
Use: "zombie",
Short: "Zombie objects related commands",
}
scanCmd = &cobra.Command{
Use: "scan",
Short: "Scan storage engine for zombie objects and move them to quarantine",
Long: "",
PreRun: func(cmd *cobra.Command, _ []string) {
_ = viper.BindPFlag(commonflags.ConfigFlag, cmd.Flags().Lookup(commonflags.ConfigFlag))
_ = viper.BindPFlag(commonflags.ConfigDirFlag, cmd.Flags().Lookup(commonflags.ConfigDirFlag))
_ = viper.BindPFlag(walletFlag, cmd.Flags().Lookup(walletFlag))
_ = viper.BindPFlag(addressFlag, cmd.Flags().Lookup(addressFlag))
_ = viper.BindPFlag(flagBatchSize, cmd.Flags().Lookup(flagBatchSize))
_ = viper.BindPFlag(moveFlag, cmd.Flags().Lookup(moveFlag))
},
Run: scan,
}
listCmd = &cobra.Command{
Use: "list",
Short: "List zombie objects from quarantine",
Long: "",
PreRun: func(cmd *cobra.Command, _ []string) {
_ = viper.BindPFlag(commonflags.ConfigFlag, cmd.Flags().Lookup(commonflags.ConfigFlag))
_ = viper.BindPFlag(commonflags.ConfigDirFlag, cmd.Flags().Lookup(commonflags.ConfigDirFlag))
_ = viper.BindPFlag(cidFlag, cmd.Flags().Lookup(cidFlag))
},
Run: list,
}
restoreCmd = &cobra.Command{
Use: "restore",
Short: "Restore zombie objects from quarantine",
Long: "",
PreRun: func(cmd *cobra.Command, _ []string) {
_ = viper.BindPFlag(commonflags.ConfigFlag, cmd.Flags().Lookup(commonflags.ConfigFlag))
_ = viper.BindPFlag(commonflags.ConfigDirFlag, cmd.Flags().Lookup(commonflags.ConfigDirFlag))
_ = viper.BindPFlag(cidFlag, cmd.Flags().Lookup(cidFlag))
_ = viper.BindPFlag(oidFlag, cmd.Flags().Lookup(oidFlag))
},
Run: restore,
}
removeCmd = &cobra.Command{
Use: "remove",
Short: "Remove zombie objects from quarantine",
Long: "",
PreRun: func(cmd *cobra.Command, _ []string) {
_ = viper.BindPFlag(commonflags.ConfigFlag, cmd.Flags().Lookup(commonflags.ConfigFlag))
_ = viper.BindPFlag(commonflags.ConfigDirFlag, cmd.Flags().Lookup(commonflags.ConfigDirFlag))
_ = viper.BindPFlag(cidFlag, cmd.Flags().Lookup(cidFlag))
_ = viper.BindPFlag(oidFlag, cmd.Flags().Lookup(oidFlag))
},
Run: remove,
}
)
func init() {
initScanCmd()
initListCmd()
initRestoreCmd()
initRemoveCmd()
}
func initScanCmd() {
Cmd.AddCommand(scanCmd)
scanCmd.Flags().StringP(commonflags.ConfigFlag, commonflags.ConfigFlagShorthand, "", commonflags.ConfigFlagUsage)
scanCmd.Flags().String(commonflags.ConfigDirFlag, "", commonflags.ConfigDirFlagUsage)
scanCmd.Flags().Uint32(flagBatchSize, 1000, flagBatchSizeUsage)
scanCmd.Flags().StringP(walletFlag, walletFlagShorthand, "", walletFlagUsage)
scanCmd.Flags().String(addressFlag, "", addressFlagUsage)
scanCmd.Flags().Bool(moveFlag, false, moveFlagUsage)
}
func initListCmd() {
Cmd.AddCommand(listCmd)
listCmd.Flags().StringP(commonflags.ConfigFlag, commonflags.ConfigFlagShorthand, "", commonflags.ConfigFlagUsage)
listCmd.Flags().String(commonflags.ConfigDirFlag, "", commonflags.ConfigDirFlagUsage)
listCmd.Flags().String(cidFlag, "", cidFlagUsage)
}
func initRestoreCmd() {
Cmd.AddCommand(restoreCmd)
restoreCmd.Flags().StringP(commonflags.ConfigFlag, commonflags.ConfigFlagShorthand, "", commonflags.ConfigFlagUsage)
restoreCmd.Flags().String(commonflags.ConfigDirFlag, "", commonflags.ConfigDirFlagUsage)
restoreCmd.Flags().String(cidFlag, "", cidFlagUsage)
restoreCmd.Flags().String(oidFlag, "", oidFlagUsage)
}
func initRemoveCmd() {
Cmd.AddCommand(removeCmd)
removeCmd.Flags().StringP(commonflags.ConfigFlag, commonflags.ConfigFlagShorthand, "", commonflags.ConfigFlagUsage)
removeCmd.Flags().String(commonflags.ConfigDirFlag, "", commonflags.ConfigDirFlagUsage)
removeCmd.Flags().String(cidFlag, "", cidFlagUsage)
removeCmd.Flags().String(oidFlag, "", oidFlagUsage)
}


@ -0,0 +1,281 @@
package zombie
import (
"context"
"crypto/ecdsa"
"crypto/sha256"
"errors"
"fmt"
"sync"
"time"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config"
apiclientconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/apiclient"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
clientCore "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/core/client"
netmapCore "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/core/netmap"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/core/object"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/engine"
cntClient "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client/container"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client/netmap"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/network/cache"
clientSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client"
apistatus "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client/status"
objectSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
"github.com/spf13/cobra"
"golang.org/x/sync/errgroup"
)
func scan(cmd *cobra.Command, _ []string) {
configFile, _ := cmd.Flags().GetString(commonflags.ConfigFlag)
configDir, _ := cmd.Flags().GetString(commonflags.ConfigDirFlag)
appCfg := config.New(configFile, configDir, config.EnvPrefix)
batchSize, _ := cmd.Flags().GetUint32(flagBatchSize)
if batchSize == 0 {
commonCmd.ExitOnErr(cmd, "invalid batch size: %w", errors.New("batch size must be positive value"))
}
move, _ := cmd.Flags().GetBool(moveFlag)
storageEngine := newEngine(cmd, appCfg)
morphClient := createMorphClient(cmd, appCfg)
cnrCli := createContainerClient(cmd, morphClient)
nmCli := createNetmapClient(cmd, morphClient)
q := createQuarantine(cmd, storageEngine.DumpInfo())
pk := getPrivateKey(cmd, appCfg)
epoch, err := nmCli.Epoch(cmd.Context())
commonCmd.ExitOnErr(cmd, "read epoch from morph: %w", err)
nm, err := nmCli.GetNetMapByEpoch(cmd.Context(), epoch)
commonCmd.ExitOnErr(cmd, "read netmap from morph: %w", err)
cmd.Printf("Epoch: %d\n", nm.Epoch())
cmd.Printf("Nodes in the netmap: %d\n", len(nm.Nodes()))
ps := &processStatus{
statusCount: make(map[status]uint64),
}
stopCh := make(chan struct{})
start := time.Now()
var wg sync.WaitGroup
wg.Add(2)
go func() {
defer wg.Done()
tick := time.NewTicker(time.Second)
defer tick.Stop()
for {
select {
case <-cmd.Context().Done():
return
case <-stopCh:
return
case <-tick.C:
fmt.Printf("Objects processed: %d; Time elapsed: %s\n", ps.total(), time.Since(start))
}
}
}()
go func() {
defer wg.Done()
err = scanStorageEngine(cmd, batchSize, storageEngine, ps, appCfg, cnrCli, nmCli, q, pk, move)
close(stopCh)
}()
wg.Wait()
commonCmd.ExitOnErr(cmd, "scan storage engine for zombie objects: %w", err)
cmd.Println()
cmd.Println("Status description:")
cmd.Println("undefined -- nothing is clear")
cmd.Println("found -- object is found in cluster")
cmd.Println("quarantine -- object is not found in cluster")
cmd.Println()
for status, count := range ps.statusCount {
cmd.Printf("Status: %s, Count: %d\n", status, count)
}
}
type status string
const (
statusUndefined status = "undefined"
statusFound status = "found"
statusQuarantine status = "quarantine"
)
func checkAddr(ctx context.Context, cnrCli *cntClient.Client, nmCli *netmap.Client, cc *cache.ClientCache, obj object.Info) (status, error) {
rawCID := make([]byte, sha256.Size)
cid := obj.Address.Container()
cid.Encode(rawCID)
cnr, err := cnrCli.Get(ctx, rawCID)
if err != nil {
var errContainerNotFound *apistatus.ContainerNotFound
if errors.As(err, &errContainerNotFound) {
// Policer will deal with this object.
return statusFound, nil
}
return statusUndefined, fmt.Errorf("read container %s from morph: %w", cid, err)
}
nm, err := nmCli.NetMap(ctx)
if err != nil {
return statusUndefined, fmt.Errorf("read netmap from morph: %w", err)
}
nodes, err := nm.ContainerNodes(cnr.Value.PlacementPolicy(), rawCID)
if err != nil {
// Not enough nodes, check all netmap nodes.
nodes = append([][]netmap.NodeInfo{}, nm.Nodes())
}
objID := obj.Address.Object()
cnrID := obj.Address.Container()
local := true
raw := false
if obj.ECInfo != nil {
objID = obj.ECInfo.ParentID
local = false
raw = true
}
prm := clientSDK.PrmObjectHead{
ObjectID: &objID,
ContainerID: &cnrID,
Local: local,
Raw: raw,
}
var ni clientCore.NodeInfo
for i := range nodes {
for j := range nodes[i] {
if err := clientCore.NodeInfoFromRawNetmapElement(&ni, netmapCore.Node(nodes[i][j])); err != nil {
return statusUndefined, fmt.Errorf("parse node info: %w", err)
}
c, err := cc.Get(ni)
if err != nil {
continue
}
res, err := c.ObjectHead(ctx, prm)
if err != nil {
var errECInfo *objectSDK.ECInfoError
if raw && errors.As(err, &errECInfo) {
return statusFound, nil
}
continue
}
if err := apistatus.ErrFromStatus(res.Status()); err != nil {
continue
}
return statusFound, nil
}
}
if cnr.Value.PlacementPolicy().NumberOfReplicas() == 1 && cnr.Value.PlacementPolicy().ReplicaDescriptor(0).NumberOfObjects() == 1 {
return statusFound, nil
}
return statusQuarantine, nil
}
func scanStorageEngine(cmd *cobra.Command, batchSize uint32, storageEngine *engine.StorageEngine, ps *processStatus,
appCfg *config.Config, cnrCli *cntClient.Client, nmCli *netmap.Client, q *quarantine, pk *ecdsa.PrivateKey, move bool,
) error {
cc := cache.NewSDKClientCache(cache.ClientCacheOpts{
DialTimeout: apiclientconfig.DialTimeout(appCfg),
StreamTimeout: apiclientconfig.StreamTimeout(appCfg),
ReconnectTimeout: apiclientconfig.ReconnectTimeout(appCfg),
Key: pk,
AllowExternal: apiclientconfig.AllowExternal(appCfg),
})
ctx := cmd.Context()
var cursor *engine.Cursor
for {
select {
case <-ctx.Done():
return ctx.Err()
default:
}
var prm engine.ListWithCursorPrm
prm.WithCursor(cursor)
prm.WithCount(batchSize)
res, err := storageEngine.ListWithCursor(ctx, prm)
if err != nil {
if errors.Is(err, engine.ErrEndOfListing) {
return nil
}
return fmt.Errorf("list with cursor: %w", err)
}
cursor = res.Cursor()
addrList := res.AddressList()
eg, egCtx := errgroup.WithContext(ctx)
eg.SetLimit(int(batchSize))
for i := range addrList {
addr := addrList[i]
eg.Go(func() error {
result, err := checkAddr(egCtx, cnrCli, nmCli, cc, addr)
if err != nil {
return fmt.Errorf("check object %s status: %w", addr.Address, err)
}
ps.add(result)
if !move && result == statusQuarantine {
cmd.Println(addr)
return nil
}
if result == statusQuarantine {
return moveToQuarantine(egCtx, storageEngine, q, addr.Address)
}
return nil
})
}
if err := eg.Wait(); err != nil {
return fmt.Errorf("process objects batch: %w", err)
}
}
}
func moveToQuarantine(ctx context.Context, storageEngine *engine.StorageEngine, q *quarantine, addr oid.Address) error {
var getPrm engine.GetPrm
getPrm.WithAddress(addr)
res, err := storageEngine.Get(ctx, getPrm)
if err != nil {
return fmt.Errorf("get object %s from storage engine: %w", addr, err)
}
if err := q.Put(ctx, res.Object()); err != nil {
return fmt.Errorf("put object %s to quarantine: %w", addr, err)
}
var delPrm engine.DeletePrm
delPrm.WithForceRemoval()
delPrm.WithAddress(addr)
if err = storageEngine.Delete(ctx, delPrm); err != nil {
return fmt.Errorf("delete object %s from storage engine: %w", addr, err)
}
return nil
}
type processStatus struct {
guard sync.RWMutex
statusCount map[status]uint64
count uint64
}
func (s *processStatus) add(st status) {
s.guard.Lock()
defer s.guard.Unlock()
s.statusCount[st]++
s.count++
}
func (s *processStatus) total() uint64 {
s.guard.RLock()
defer s.guard.RUnlock()
return s.count
}


@ -0,0 +1,200 @@
package zombie
import (
"context"
"time"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config"
engineconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/engine"
shardconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/engine/shard"
blobovniczaconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/engine/shard/blobstor/blobovnicza"
fstreeconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/engine/shard/blobstor/fstree"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/qos"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/blobovniczatree"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/fstree"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/engine"
meta "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/metabase"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/pilorama"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/shard"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/writecache"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/logger"
objectSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
"github.com/panjf2000/ants/v2"
"github.com/spf13/cobra"
"go.etcd.io/bbolt"
"go.uber.org/zap"
)
func newEngine(cmd *cobra.Command, c *config.Config) *engine.StorageEngine {
ngOpts := storageEngineOptions(c)
shardOpts := shardOptions(cmd, c)
e := engine.New(ngOpts...)
for _, opts := range shardOpts {
_, err := e.AddShard(cmd.Context(), opts...)
commonCmd.ExitOnErr(cmd, "iterate shards from config: %w", err)
}
commonCmd.ExitOnErr(cmd, "open storage engine: %w", e.Open(cmd.Context()))
commonCmd.ExitOnErr(cmd, "init storage engine: %w", e.Init(cmd.Context()))
return e
}
func storageEngineOptions(c *config.Config) []engine.Option {
return []engine.Option{
engine.WithErrorThreshold(engineconfig.ShardErrorThreshold(c)),
engine.WithLogger(logger.NewLoggerWrapper(zap.NewNop())),
engine.WithLowMemoryConsumption(engineconfig.EngineLowMemoryConsumption(c)),
}
}
func shardOptions(cmd *cobra.Command, c *config.Config) [][]shard.Option {
var result [][]shard.Option
err := engineconfig.IterateShards(c, false, func(sh *shardconfig.Config) error {
result = append(result, getShardOpts(cmd, c, sh))
return nil
})
commonCmd.ExitOnErr(cmd, "iterate shards from config: %w", err)
return result
}
func getShardOpts(cmd *cobra.Command, c *config.Config, sh *shardconfig.Config) []shard.Option {
wc, wcEnabled := getWriteCacheOpts(sh)
return []shard.Option{
shard.WithLogger(logger.NewLoggerWrapper(zap.NewNop())),
shard.WithRefillMetabase(sh.RefillMetabase()),
shard.WithRefillMetabaseWorkersCount(sh.RefillMetabaseWorkersCount()),
shard.WithMode(sh.Mode()),
shard.WithBlobStorOptions(getBlobstorOpts(cmd.Context(), sh)...),
shard.WithMetaBaseOptions(getMetabaseOpts(sh)...),
shard.WithPiloramaOptions(getPiloramaOpts(c, sh)...),
shard.WithWriteCache(wcEnabled),
shard.WithWriteCacheOptions(wc),
shard.WithRemoverBatchSize(sh.GC().RemoverBatchSize()),
shard.WithGCRemoverSleepInterval(sh.GC().RemoverSleepInterval()),
shard.WithExpiredCollectorBatchSize(sh.GC().ExpiredCollectorBatchSize()),
shard.WithExpiredCollectorWorkerCount(sh.GC().ExpiredCollectorWorkerCount()),
shard.WithGCWorkerPoolInitializer(func(sz int) util.WorkerPool {
pool, err := ants.NewPool(sz)
commonCmd.ExitOnErr(cmd, "init GC pool: %w", err)
return pool
}),
shard.WithLimiter(qos.NewNoopLimiter()),
}
}
func getWriteCacheOpts(sh *shardconfig.Config) ([]writecache.Option, bool) {
if wc := sh.WriteCache(); wc != nil && wc.Enabled() {
var result []writecache.Option
result = append(result,
writecache.WithPath(wc.Path()),
writecache.WithFlushSizeLimit(wc.MaxFlushingObjectsSize()),
writecache.WithMaxObjectSize(wc.MaxObjectSize()),
writecache.WithFlushWorkersCount(wc.WorkerCount()),
writecache.WithMaxCacheSize(wc.SizeLimit()),
writecache.WithMaxCacheCount(wc.CountLimit()),
writecache.WithNoSync(wc.NoSync()),
writecache.WithLogger(logger.NewLoggerWrapper(zap.NewNop())),
writecache.WithQoSLimiter(qos.NewNoopLimiter()),
)
return result, true
}
return nil, false
}
func getPiloramaOpts(c *config.Config, sh *shardconfig.Config) []pilorama.Option {
var piloramaOpts []pilorama.Option
if config.BoolSafe(c.Sub("tree"), "enabled") {
pr := sh.Pilorama()
piloramaOpts = append(piloramaOpts,
pilorama.WithPath(pr.Path()),
pilorama.WithPerm(pr.Perm()),
pilorama.WithNoSync(pr.NoSync()),
pilorama.WithMaxBatchSize(pr.MaxBatchSize()),
pilorama.WithMaxBatchDelay(pr.MaxBatchDelay()),
)
}
return piloramaOpts
}
func getMetabaseOpts(sh *shardconfig.Config) []meta.Option {
return []meta.Option{
meta.WithPath(sh.Metabase().Path()),
meta.WithPermissions(sh.Metabase().BoltDB().Perm()),
meta.WithMaxBatchSize(sh.Metabase().BoltDB().MaxBatchSize()),
meta.WithMaxBatchDelay(sh.Metabase().BoltDB().MaxBatchDelay()),
meta.WithBoltDBOptions(&bbolt.Options{
Timeout: 100 * time.Millisecond,
}),
meta.WithLogger(logger.NewLoggerWrapper(zap.NewNop())),
meta.WithEpochState(&epochState{}),
}
}
func getBlobstorOpts(ctx context.Context, sh *shardconfig.Config) []blobstor.Option {
result := []blobstor.Option{
blobstor.WithCompression(sh.Compression()),
blobstor.WithStorages(getSubStorages(ctx, sh)),
blobstor.WithLogger(logger.NewLoggerWrapper(zap.NewNop())),
}
return result
}
func getSubStorages(ctx context.Context, sh *shardconfig.Config) []blobstor.SubStorage {
var ss []blobstor.SubStorage
for _, storage := range sh.BlobStor().Storages() {
switch storage.Type() {
case blobovniczatree.Type:
sub := blobovniczaconfig.From((*config.Config)(storage))
blobTreeOpts := []blobovniczatree.Option{
blobovniczatree.WithRootPath(storage.Path()),
blobovniczatree.WithPermissions(storage.Perm()),
blobovniczatree.WithBlobovniczaSize(sub.Size()),
blobovniczatree.WithBlobovniczaShallowDepth(sub.ShallowDepth()),
blobovniczatree.WithBlobovniczaShallowWidth(sub.ShallowWidth()),
blobovniczatree.WithOpenedCacheSize(sub.OpenedCacheSize()),
blobovniczatree.WithOpenedCacheTTL(sub.OpenedCacheTTL()),
blobovniczatree.WithOpenedCacheExpInterval(sub.OpenedCacheExpInterval()),
blobovniczatree.WithInitWorkerCount(sub.InitWorkerCount()),
blobovniczatree.WithWaitBeforeDropDB(sub.RebuildDropTimeout()),
blobovniczatree.WithLogger(logger.NewLoggerWrapper(zap.NewNop())),
blobovniczatree.WithObjectSizeLimit(sh.SmallSizeLimit()),
}
ss = append(ss, blobstor.SubStorage{
Storage: blobovniczatree.NewBlobovniczaTree(ctx, blobTreeOpts...),
Policy: func(_ *objectSDK.Object, data []byte) bool {
return uint64(len(data)) < sh.SmallSizeLimit()
},
})
case fstree.Type:
sub := fstreeconfig.From((*config.Config)(storage))
fstreeOpts := []fstree.Option{
fstree.WithPath(storage.Path()),
fstree.WithPerm(storage.Perm()),
fstree.WithDepth(sub.Depth()),
fstree.WithNoSync(sub.NoSync()),
fstree.WithLogger(logger.NewLoggerWrapper(zap.NewNop())),
}
ss = append(ss, blobstor.SubStorage{
Storage: fstree.New(fstreeOpts...),
Policy: func(_ *objectSDK.Object, _ []byte) bool {
return true
},
})
default:
// should never happen, that has already
// been handled: when the config was read
}
}
return ss
}
type epochState struct{}
func (epochState) CurrentEpoch() uint64 {
return 0
}


@ -5,6 +5,7 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/modules/config"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/modules/maintenance"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/modules/metabase"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/modules/morph"
"git.frostfs.info/TrueCloudLab/frostfs-node/misc"
@ -41,6 +42,7 @@ func init() {
rootCmd.AddCommand(config.RootCmd)
rootCmd.AddCommand(morph.RootCmd)
rootCmd.AddCommand(metabase.RootCmd)
rootCmd.AddCommand(maintenance.RootCmd)
rootCmd.AddCommand(autocomplete.Command("frostfs-adm"))
rootCmd.AddCommand(gendoc.Command(rootCmd, gendoc.Options{}))


@ -44,6 +44,7 @@ is set to current epoch + n.
_ = viper.BindPFlag(commonflags.WalletPath, ff.Lookup(commonflags.WalletPath))
_ = viper.BindPFlag(commonflags.Account, ff.Lookup(commonflags.Account))
_ = viper.BindPFlag(commonflags.RPC, ff.Lookup(commonflags.RPC))
},
}
@ -81,7 +82,7 @@ func createToken(cmd *cobra.Command, _ []string) {
commonCmd.ExitOnErr(cmd, "can't parse --"+notValidBeforeFlag+" flag: %w", err)
if iatRelative || expRelative || nvbRelative {
endpoint, _ := cmd.Flags().GetString(commonflags.RPC)
endpoint := viper.GetString(commonflags.RPC)
if len(endpoint) == 0 {
commonCmd.ExitOnErr(cmd, "can't fetch current epoch: %w", fmt.Errorf("'%s' flag value must be specified", commonflags.RPC))
}


@ -40,9 +40,9 @@ func (repl *policyPlaygroundREPL) handleLs(args []string) error {
i := 1
for id, node := range repl.nodes {
var attrs []string
node.IterateAttributes(func(k, v string) {
for k, v := range node.Attributes() {
attrs = append(attrs, fmt.Sprintf("%s:%q", k, v))
})
}
fmt.Fprintf(repl.console, "\t%2d: id=%s attrs={%v}\n", i, id, strings.Join(attrs, " "))
i++
}
@ -260,6 +260,28 @@ Example of usage:
},
}
func (repl *policyPlaygroundREPL) handleCommand(args []string) error {
if len(args) == 0 {
return nil
}
switch args[0] {
case "list", "ls":
return repl.handleLs(args[1:])
case "add":
return repl.handleAdd(args[1:])
case "load":
return repl.handleLoad(args[1:])
case "remove", "rm":
return repl.handleRemove(args[1:])
case "eval":
return repl.handleEval(args[1:])
case "help":
return repl.handleHelp(args[1:])
}
return fmt.Errorf("unknown command %q. See 'help' for assistance", args[0])
}
func (repl *policyPlaygroundREPL) run() error {
if len(viper.GetString(commonflags.RPC)) > 0 {
key := key.GetOrGenerate(repl.cmd)
@ -277,15 +299,9 @@ func (repl *policyPlaygroundREPL) run() error {
}
}
cmdHandlers := map[string]func([]string) error{
"list": repl.handleLs,
"ls": repl.handleLs,
"add": repl.handleAdd,
"load": repl.handleLoad,
"remove": repl.handleRemove,
"rm": repl.handleRemove,
"eval": repl.handleEval,
"help": repl.handleHelp,
if len(viper.GetString(netmapConfigPath)) > 0 {
err := repl.handleLoad([]string{viper.GetString(netmapConfigPath)})
commonCmd.ExitOnErr(repl.cmd, "load netmap config error: %w", err)
}
var cfgCompleter []readline.PrefixCompleterInterface
@ -326,17 +342,8 @@ func (repl *policyPlaygroundREPL) run() error {
}
exit = false
parts := strings.Fields(line)
if len(parts) == 0 {
continue
}
cmd := parts[0]
if handler, exists := cmdHandlers[cmd]; exists {
if err := handler(parts[1:]); err != nil {
fmt.Fprintf(repl.console, "error: %v\n", err)
}
} else {
fmt.Fprintf(repl.console, "error: unknown command %q\n", cmd)
if err := repl.handleCommand(strings.Fields(line)); err != nil {
fmt.Fprintf(repl.console, "error: %v\n", err)
}
}
}
@ -352,6 +359,14 @@ If a wallet and endpoint is provided, the initial netmap data will be loaded fro
},
}
const (
netmapConfigPath = "netmap-config"
netmapConfigUsage = "Path to the netmap configuration file"
)
func initContainerPolicyPlaygroundCmd() {
commonflags.Init(policyPlaygroundCmd)
policyPlaygroundCmd.Flags().String(netmapConfigPath, "", netmapConfigUsage)
_ = viper.BindPFlag(netmapConfigPath, policyPlaygroundCmd.Flags().Lookup(netmapConfigPath))
}


@ -62,11 +62,11 @@ func prettyPrintNodeInfo(cmd *cobra.Command, i netmap.NodeInfo) {
cmd.Println("state:", stateWord)
netmap.IterateNetworkEndpoints(i, func(s string) {
for s := range i.NetworkEndpoints() {
cmd.Println("address:", s)
})
}
i.IterateAttributes(func(key, value string) {
for key, value := range i.Attributes() {
cmd.Printf("attribute: %s=%s\n", key, value)
})
}
}


@ -18,6 +18,7 @@ import (
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/user"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
// object lock command.
@ -78,7 +79,7 @@ var objectLockCmd = &cobra.Command{
ctx, cancel := context.WithTimeout(context.Background(), time.Second*30)
defer cancel()
endpoint, _ := cmd.Flags().GetString(commonflags.RPC)
endpoint := viper.GetString(commonflags.RPC)
currEpoch, err := internalclient.GetCurrentEpoch(ctx, cmd, endpoint)
commonCmd.ExitOnErr(cmd, "Request current epoch: %w", err)


@ -7,6 +7,7 @@ import (
"encoding/json"
"errors"
"fmt"
"slices"
"sync"
internalclient "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/client"
@ -460,17 +461,11 @@ func createClient(ctx context.Context, cmd *cobra.Command, candidate netmapSDK.N
var cli *client.Client
var addresses []string
if preferInternal, _ := cmd.Flags().GetBool(preferInternalAddressesFlag); preferInternal {
candidate.IterateNetworkEndpoints(func(s string) bool {
addresses = append(addresses, s)
return false
})
addresses = slices.AppendSeq(addresses, candidate.NetworkEndpoints())
addresses = append(addresses, candidate.ExternalAddresses()...)
} else {
addresses = append(addresses, candidate.ExternalAddresses()...)
candidate.IterateNetworkEndpoints(func(s string) bool {
addresses = append(addresses, s)
return false
})
addresses = slices.AppendSeq(addresses, candidate.NetworkEndpoints())
}
var lastErr error


@ -4,12 +4,14 @@ import (
"context"
"os"
"os/signal"
"strconv"
"syscall"
configViper "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common/config"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs"
control "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control/ir"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/logger"
"github.com/spf13/cast"
"github.com/spf13/viper"
"go.uber.org/zap"
)
@ -44,11 +46,30 @@ func reloadConfig() error {
if err != nil {
return err
}
log.Reload(logPrm)
err = logPrm.SetTags(loggerTags())
if err != nil {
return err
}
logger.UpdateLevelForTags(logPrm)
return nil
}
func loggerTags() [][]string {
var res [][]string
for i := 0; ; i++ {
var item []string
index := strconv.FormatInt(int64(i), 10)
names := cast.ToString(cfg.Get("logger.tags." + index + ".names"))
if names == "" {
break
}
item = append(item, names, cast.ToString(cfg.Get("logger.tags."+index+".level")))
res = append(res, item)
}
return res
}
func watchForSignal(ctx context.Context, cancel func()) {
ch := make(chan os.Signal, 1)
signal.Notify(ch, syscall.SIGINT, syscall.SIGTERM)


@ -80,10 +80,14 @@ func main() {
exitErr(err)
logPrm.SamplingHook = metrics.LogMetrics().GetSamplingHook()
logPrm.PrependTimestamp = cfg.GetBool("logger.timestamp")
err = logPrm.SetTags(loggerTags())
exitErr(err)
log, err = logger.NewLogger(logPrm)
exitErr(err)
logger.UpdateLevelForTags(logPrm)
ctx, cancel := context.WithCancel(context.Background())
pprofCmp = newPprofComponent()


@ -235,16 +235,8 @@ func (s *lruNetmapSource) updateCandidates(ctx context.Context, d time.Duration)
slices.Compare(n1.ExternalAddresses(), n2.ExternalAddresses()) != 0 {
return 1
}
var ne1 []string
n1.IterateNetworkEndpoints(func(s string) bool {
ne1 = append(ne1, s)
return false
})
var ne2 []string
n2.IterateNetworkEndpoints(func(s string) bool {
ne2 = append(ne2, s)
return false
})
ne1 := slices.Collect(n1.NetworkEndpoints())
ne2 := slices.Collect(n2.NetworkEndpoints())
return slices.Compare(ne1, ne2)
})
if ret != 0 {
@ -364,15 +356,9 @@ func getNetMapNodesToUpdate(nm *netmapSDK.NetMap, candidates []netmapSDK.NodeInf
}
nodeEndpoints := make([]string, 0, nm.Nodes()[i].NumberOfNetworkEndpoints())
nm.Nodes()[i].IterateNetworkEndpoints(func(s string) bool {
nodeEndpoints = append(nodeEndpoints, s)
return false
})
nodeEndpoints = slices.AppendSeq(nodeEndpoints, nm.Nodes()[i].NetworkEndpoints())
candidateEndpoints := make([]string, 0, cnd.NumberOfNetworkEndpoints())
cnd.IterateNetworkEndpoints(func(s string) bool {
candidateEndpoints = append(candidateEndpoints, s)
return false
})
candidateEndpoints = slices.AppendSeq(candidateEndpoints, cnd.NetworkEndpoints())
if slices.Compare(nodeEndpoints, candidateEndpoints) != 0 {
update = true
tmp.endpoints = candidateEndpoints


@ -40,6 +40,7 @@ import (
netmapCore "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/core/netmap"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/blobovniczatree"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/compression"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/fstree"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/engine"
meta "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/metabase"
@ -109,6 +110,7 @@ type applicationConfiguration struct {
destination string
timestamp bool
options []zap.Option
tags [][]string
}
ObjectCfg struct {
@ -127,12 +129,9 @@ type applicationConfiguration struct {
}
type shardCfg struct {
compress bool
estimateCompressibility bool
estimateCompressibilityThreshold float64
compression compression.Config
smallSizeObjectLimit uint64
uncompressableContentType []string
refillMetabase bool
refillMetabaseWorkersCount int
mode shardmode.Mode
@ -241,19 +240,21 @@ func (a *applicationConfiguration) readConfig(c *config.Config) error {
})}
}
a.LoggerCfg.options = opts
a.LoggerCfg.tags = loggerconfig.Tags(c)
// Object
a.ObjectCfg.tombstoneLifetime = objectconfig.TombstoneLifetime(c)
var pm []placement.Metric
for _, raw := range objectconfig.Get(c).Priority() {
m, err := placement.ParseMetric(raw)
if err != nil {
return err
}
pm = append(pm, m)
locodeDBPath := nodeconfig.LocodeDBPath(c)
parser, err := placement.NewMetricsParser(locodeDBPath)
if err != nil {
return fmt.Errorf("metrics parser creation: %w", err)
}
a.ObjectCfg.priorityMetrics = pm
m, err := parser.ParseMetrics(objectconfig.Get(c).Priority())
if err != nil {
return fmt.Errorf("parse metrics: %w", err)
}
a.ObjectCfg.priorityMetrics = m
// Storage Engine
@ -269,10 +270,7 @@ func (a *applicationConfiguration) updateShardConfig(c *config.Config, source *s
target.refillMetabase = source.RefillMetabase()
target.refillMetabaseWorkersCount = source.RefillMetabaseWorkersCount()
target.mode = source.Mode()
target.compress = source.Compress()
target.estimateCompressibility = source.EstimateCompressibility()
target.estimateCompressibilityThreshold = source.EstimateCompressibilityThreshold()
target.uncompressableContentType = source.UncompressableContentTypes()
target.compression = source.Compression()
target.smallSizeObjectLimit = source.SmallSizeLimit()
a.setShardWriteCacheConfig(&target, source)
@ -383,14 +381,11 @@ func (a *applicationConfiguration) setGCConfig(target *shardCfg, source *shardco
}
func (a *applicationConfiguration) setLimiter(target *shardCfg, source *shardconfig.Config) error {
limitsConfig := source.Limits()
limitsConfig := source.Limits().ToConfig()
limiter, err := qos.NewLimiter(limitsConfig)
if err != nil {
return err
}
if target.limiter != nil {
target.limiter.Close()
}
target.limiter = limiter
return nil
}
@ -727,6 +722,7 @@ func initCfg(appCfg *config.Config) *cfg {
logPrm.SamplingHook = c.metricsCollector.LogMetrics().GetSamplingHook()
log, err := logger.NewLogger(logPrm)
fatalOnErr(err)
logger.UpdateLevelForTags(logPrm)
c.internals = initInternals(appCfg, log)
@ -1025,10 +1021,7 @@ func (c *cfg) getShardOpts(ctx context.Context, shCfg shardCfg) shardOptsWithID
ss := c.getSubstorageOpts(ctx, shCfg)
blobstoreOpts := []blobstor.Option{
blobstor.WithCompressObjects(shCfg.compress),
blobstor.WithUncompressableContentTypes(shCfg.uncompressableContentType),
blobstor.WithCompressibilityEstimate(shCfg.estimateCompressibility),
blobstor.WithCompressibilityEstimateThreshold(shCfg.estimateCompressibilityThreshold),
blobstor.WithCompression(shCfg.compression),
blobstor.WithStorages(ss),
blobstor.WithLogger(c.log),
}
@ -1094,6 +1087,11 @@ func (c *cfg) loggerPrm() (logger.Prm, error) {
}
prm.PrependTimestamp = c.LoggerCfg.timestamp
prm.Options = c.LoggerCfg.options
err = prm.SetTags(c.LoggerCfg.tags)
if err != nil {
// not expected since validation should be performed before
return logger.Prm{}, errors.New("incorrect allowed tags format: " + c.LoggerCfg.destination)
}
return prm, nil
}
@ -1381,7 +1379,7 @@ func (c *cfg) getComponents(ctx context.Context) []dCmp {
if err != nil {
return err
}
c.log.Reload(prm)
logger.UpdateLevelForTags(prm)
return nil
}})
components = append(components, dCmp{"runtime", func() error {


@ -11,10 +11,11 @@ import (
blobovniczaconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/engine/shard/blobstor/blobovnicza"
fstreeconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/engine/shard/blobstor/fstree"
gcconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/engine/shard/gc"
limitsconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/engine/shard/limits"
piloramaconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/engine/shard/pilorama"
writecacheconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/engine/shard/writecache"
configtest "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/test"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/qos"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/compression"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/shard/mode"
"github.com/stretchr/testify/require"
)
@ -100,10 +101,11 @@ func TestEngineSection(t *testing.T) {
require.Equal(t, 100, meta.BoltDB().MaxBatchSize())
require.Equal(t, 10*time.Millisecond, meta.BoltDB().MaxBatchDelay())
require.Equal(t, true, sc.Compress())
require.Equal(t, []string{"audio/*", "video/*"}, sc.UncompressableContentTypes())
require.Equal(t, true, sc.EstimateCompressibility())
require.Equal(t, float64(0.7), sc.EstimateCompressibilityThreshold())
require.Equal(t, true, sc.Compression().Enabled)
require.Equal(t, compression.LevelFastest, sc.Compression().Level)
require.Equal(t, []string{"audio/*", "video/*"}, sc.Compression().UncompressableContentTypes)
require.Equal(t, true, sc.Compression().EstimateCompressibility)
require.Equal(t, float64(0.7), sc.Compression().EstimateCompressibilityThreshold)
require.EqualValues(t, 102400, sc.SmallSizeLimit())
require.Equal(t, 2, len(ss))
@ -135,8 +137,8 @@ func TestEngineSection(t *testing.T) {
require.Equal(t, mode.ReadOnly, sc.Mode())
require.Equal(t, 100, sc.RefillMetabaseWorkersCount())
readLimits := limits.Read()
writeLimits := limits.Write()
readLimits := limits.ToConfig().Read
writeLimits := limits.ToConfig().Write
require.Equal(t, 30*time.Second, readLimits.IdleTimeout)
require.Equal(t, int64(10_000), readLimits.MaxRunningOps)
require.Equal(t, int64(1_000), readLimits.MaxWaitingOps)
@ -144,7 +146,7 @@ func TestEngineSection(t *testing.T) {
require.Equal(t, int64(1_000), writeLimits.MaxRunningOps)
require.Equal(t, int64(100), writeLimits.MaxWaitingOps)
require.ElementsMatch(t, readLimits.Tags,
[]limitsconfig.IOTagConfig{
[]qos.IOTagConfig{
{
Tag: "internal",
Weight: toPtr(20),
@ -173,9 +175,14 @@ func TestEngineSection(t *testing.T) {
LimitOps: toPtr(25000),
Prohibited: true,
},
{
Tag: "treesync",
Weight: toPtr(5),
LimitOps: toPtr(25),
},
})
require.ElementsMatch(t, writeLimits.Tags,
[]limitsconfig.IOTagConfig{
[]qos.IOTagConfig{
{
Tag: "internal",
Weight: toPtr(200),
@ -203,6 +210,11 @@ func TestEngineSection(t *testing.T) {
Weight: toPtr(50),
LimitOps: toPtr(2500),
},
{
Tag: "treesync",
Weight: toPtr(50),
LimitOps: toPtr(100),
},
})
case 1:
require.Equal(t, "tmp/1/blob/pilorama.db", pl.Path())
@ -226,8 +238,9 @@ func TestEngineSection(t *testing.T) {
require.Equal(t, 200, meta.BoltDB().MaxBatchSize())
require.Equal(t, 20*time.Millisecond, meta.BoltDB().MaxBatchDelay())
require.Equal(t, false, sc.Compress())
require.Equal(t, []string(nil), sc.UncompressableContentTypes())
require.Equal(t, false, sc.Compression().Enabled)
require.Equal(t, compression.LevelDefault, sc.Compression().Level)
require.Equal(t, []string(nil), sc.Compression().UncompressableContentTypes)
require.EqualValues(t, 102400, sc.SmallSizeLimit())
require.Equal(t, 2, len(ss))
@ -259,14 +272,14 @@ func TestEngineSection(t *testing.T) {
require.Equal(t, mode.ReadWrite, sc.Mode())
require.Equal(t, shardconfig.RefillMetabaseWorkersCountDefault, sc.RefillMetabaseWorkersCount())
readLimits := limits.Read()
writeLimits := limits.Write()
require.Equal(t, limitsconfig.DefaultIdleTimeout, readLimits.IdleTimeout)
require.Equal(t, limitsconfig.NoLimit, readLimits.MaxRunningOps)
require.Equal(t, limitsconfig.NoLimit, readLimits.MaxWaitingOps)
require.Equal(t, limitsconfig.DefaultIdleTimeout, writeLimits.IdleTimeout)
require.Equal(t, limitsconfig.NoLimit, writeLimits.MaxRunningOps)
require.Equal(t, limitsconfig.NoLimit, writeLimits.MaxWaitingOps)
readLimits := limits.ToConfig().Read
writeLimits := limits.ToConfig().Write
require.Equal(t, qos.DefaultIdleTimeout, readLimits.IdleTimeout)
require.Equal(t, qos.NoLimit, readLimits.MaxRunningOps)
require.Equal(t, qos.NoLimit, readLimits.MaxWaitingOps)
require.Equal(t, qos.DefaultIdleTimeout, writeLimits.IdleTimeout)
require.Equal(t, qos.NoLimit, writeLimits.MaxRunningOps)
require.Equal(t, qos.NoLimit, writeLimits.MaxWaitingOps)
require.Equal(t, 0, len(readLimits.Tags))
require.Equal(t, 0, len(writeLimits.Tags))
}

View file

@ -8,6 +8,7 @@ import (
metabaseconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/engine/shard/metabase"
piloramaconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/engine/shard/pilorama"
writecacheconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/engine/shard/writecache"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/compression"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/shard/mode"
)
@ -27,42 +28,27 @@ func From(c *config.Config) *Config {
return (*Config)(c)
}
// Compress returns the value of "compress" config parameter.
//
// Returns false if the value is not a valid bool.
func (x *Config) Compress() bool {
return config.BoolSafe(
(*config.Config)(x),
"compress",
)
}
// UncompressableContentTypes returns the value of "compress_skip_content_types" config parameter.
//
// Returns nil if the value is missing or invalid.
func (x *Config) UncompressableContentTypes() []string {
return config.StringSliceSafe(
(*config.Config)(x),
"compression_exclude_content_types")
}
// EstimateCompressibility returns the value of "estimate_compressibility" config parameter.
//
// Returns false if the value is not a valid bool.
func (x *Config) EstimateCompressibility() bool {
return config.BoolSafe(
(*config.Config)(x),
"compression_estimate_compressibility",
)
func (x *Config) Compression() compression.Config {
cc := (*config.Config)(x).Sub("compression")
if cc == nil {
return compression.Config{}
}
return compression.Config{
Enabled: config.BoolSafe(cc, "enabled"),
UncompressableContentTypes: config.StringSliceSafe(cc, "exclude_content_types"),
Level: compression.Level(config.StringSafe(cc, "level")),
EstimateCompressibility: config.BoolSafe(cc, "estimate_compressibility"),
EstimateCompressibilityThreshold: estimateCompressibilityThreshold(cc),
}
}
// EstimateCompressibilityThreshold returns the value of "estimate_compressibility_threshold" config parameter.
//
// Returns EstimateCompressibilityThresholdDefault if the value is not defined, not a valid float, or not in range [0.0; 1.0].
func (x *Config) EstimateCompressibilityThreshold() float64 {
func estimateCompressibilityThreshold(c *config.Config) float64 {
v := config.FloatOrDefault(
(*config.Config)(x),
"compression_estimate_compressibility_threshold",
c,
"estimate_compressibility_threshold",
EstimateCompressibilityThresholdDefault)
if v < 0.0 || v > 1.0 {
return EstimateCompressibilityThresholdDefault

View file

@ -1,19 +1,13 @@
package limits
import (
"math"
"strconv"
"time"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/qos"
"github.com/spf13/cast"
)
const (
NoLimit int64 = math.MaxInt64
DefaultIdleTimeout = 5 * time.Minute
)
// From wraps config section into Config.
func From(c *config.Config) *Config {
return (*Config)(c)
@ -23,36 +17,43 @@ func From(c *config.Config) *Config {
// which provides access to Shard's limits configurations.
type Config config.Config
// Read returns the value of "read" limits config section.
func (x *Config) Read() OpConfig {
func (x *Config) ToConfig() qos.LimiterConfig {
result := qos.LimiterConfig{
Read: x.read(),
Write: x.write(),
}
panicOnErr(result.Validate())
return result
}
func (x *Config) read() qos.OpConfig {
return x.parse("read")
}
// Write returns the value of "write" limits config section.
func (x *Config) Write() OpConfig {
func (x *Config) write() qos.OpConfig {
return x.parse("write")
}
func (x *Config) parse(sub string) OpConfig {
func (x *Config) parse(sub string) qos.OpConfig {
c := (*config.Config)(x).Sub(sub)
var result OpConfig
var result qos.OpConfig
if s := config.Int(c, "max_waiting_ops"); s > 0 {
result.MaxWaitingOps = s
} else {
result.MaxWaitingOps = NoLimit
result.MaxWaitingOps = qos.NoLimit
}
if s := config.Int(c, "max_running_ops"); s > 0 {
result.MaxRunningOps = s
} else {
result.MaxRunningOps = NoLimit
result.MaxRunningOps = qos.NoLimit
}
if s := config.DurationSafe(c, "idle_timeout"); s > 0 {
result.IdleTimeout = s
} else {
result.IdleTimeout = DefaultIdleTimeout
result.IdleTimeout = qos.DefaultIdleTimeout
}
result.Tags = tags(c)
@ -60,43 +61,16 @@ func (x *Config) parse(sub string) OpConfig {
return result
}
type OpConfig struct {
// MaxWaitingOps returns the value of "max_waiting_ops" config parameter.
//
// Equals NoLimit if the value is not a positive number.
MaxWaitingOps int64
// MaxRunningOps returns the value of "max_running_ops" config parameter.
//
// Equals NoLimit if the value is not a positive number.
MaxRunningOps int64
// IdleTimeout returns the value of "idle_timeout" config parameter.
//
// Equals DefaultIdleTimeout if the value is not a valid duration.
IdleTimeout time.Duration
// Tags returns the value of "tags" config parameter.
//
// Equals nil if the value is not a valid tags config slice.
Tags []IOTagConfig
}
type IOTagConfig struct {
Tag string
Weight *float64
LimitOps *float64
ReservedOps *float64
Prohibited bool
}
func tags(c *config.Config) []IOTagConfig {
func tags(c *config.Config) []qos.IOTagConfig {
c = c.Sub("tags")
var result []IOTagConfig
var result []qos.IOTagConfig
for i := 0; ; i++ {
tag := config.String(c, strconv.Itoa(i)+".tag")
if tag == "" {
return result
}
var tagConfig IOTagConfig
var tagConfig qos.IOTagConfig
tagConfig.Tag = tag
v := c.Value(strconv.Itoa(i) + ".weight")

View file

@ -2,6 +2,7 @@ package loggerconfig
import (
"os"
"strconv"
"time"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config"
@ -60,6 +61,21 @@ func Timestamp(c *config.Config) bool {
return config.BoolSafe(c.Sub(subsection), "timestamp")
}
// Tags returns the value of "tags" config parameter from "logger" section.
func Tags(c *config.Config) [][]string {
var res [][]string
sub := c.Sub(subsection).Sub("tags")
for i := 0; ; i++ {
s := sub.Sub(strconv.FormatInt(int64(i), 10))
names := config.StringSafe(s, "names")
if names == "" {
break
}
res = append(res, []string{names, config.StringSafe(s, "level")})
}
return res
}
// ToLokiConfig extracts loki config.
func ToLokiConfig(c *config.Config) loki.Config {
hostname, _ := os.Hostname()

View file

@ -3,7 +3,9 @@ package nodeconfig
import (
"fmt"
"io/fs"
"iter"
"os"
"slices"
"strconv"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config"
@ -88,12 +90,8 @@ func Wallet(c *config.Config) *keys.PrivateKey {
type stringAddressGroup []string
func (x stringAddressGroup) IterateAddresses(f func(string) bool) {
for i := range x {
if f(x[i]) {
break
}
}
func (x stringAddressGroup) Addresses() iter.Seq[string] {
return slices.Values(x)
}
func (x stringAddressGroup) NumberOfAddresses() int {
@ -217,3 +215,8 @@ func (l PersistentPolicyRulesConfig) NoSync() bool {
func CompatibilityMode(c *config.Config) bool {
return config.BoolSafe(c.Sub(subsection), "kludge_compatibility_mode")
}
// LocodeDBPath returns path to LOCODE database.
func LocodeDBPath(c *config.Config) string {
return config.String(c.Sub(subsection), "locode_db_path")
}

View file

@ -8,6 +8,7 @@ import (
"net"
"sync/atomic"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/assert"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/metrics"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/core/netmap"
@ -104,9 +105,7 @@ func (s *networkState) getNodeInfo() (res netmapSDK.NodeInfo, ok bool) {
v := s.nodeInfo.Load()
if v != nil {
res, ok = v.(netmapSDK.NodeInfo)
if !ok {
panic(fmt.Sprintf("unexpected value in atomic node info state: %T", v))
}
assert.True(ok, fmt.Sprintf("unexpected value in atomic node info state: %T", v))
}
return
@ -124,7 +123,11 @@ func nodeKeyFromNetmap(c *cfg) []byte {
func (c *cfg) iterateNetworkAddresses(f func(string) bool) {
ni, ok := c.cfgNetmap.state.getNodeInfo()
if ok {
ni.IterateNetworkEndpoints(f)
for s := range ni.NetworkEndpoints() {
if f(s) {
return
}
}
}
}

View file

@ -30,6 +30,11 @@ func validateConfig(c *config.Config) error {
return fmt.Errorf("invalid logger destination: %w", err)
}
err = loggerPrm.SetTags(loggerconfig.Tags(c))
if err != nil {
return fmt.Errorf("invalid list of allowed tags: %w", err)
}
// shard configuration validation
shardNum := 0

View file

@ -51,8 +51,13 @@ func ExitOnErr(cmd *cobra.Command, errFmt string, err error) {
}
cmd.PrintErrln(err)
if cmd.PersistentPostRun != nil {
cmd.PersistentPostRun(cmd, nil)
for p := cmd; p != nil; p = p.Parent() {
if p.PersistentPostRun != nil {
p.PersistentPostRun(cmd, nil)
if !cobra.EnableTraverseRunHooks {
break
}
}
}
os.Exit(code)
}

View file

@ -27,15 +27,15 @@ func PrettyPrintNodeInfo(cmd *cobra.Command, node netmap.NodeInfo,
cmd.Printf("%sNode %d: %s %s ", indent, index+1, hex.EncodeToString(node.PublicKey()), strState)
netmap.IterateNetworkEndpoints(node, func(endpoint string) {
for endpoint := range node.NetworkEndpoints() {
cmd.Printf("%s ", endpoint)
})
}
cmd.Println()
if !short {
node.IterateAttributes(func(key, value string) {
for key, value := range node.Attributes() {
cmd.Printf("%s\t%s: %s\n", indent, key, value)
})
}
}
}

View file

@ -1,5 +1,7 @@
FROSTFS_IR_LOGGER_LEVEL=info
FROSTFS_IR_LOGGER_TIMESTAMP=true
FROSTFS_IR_LOGGER_TAGS_0_NAMES="main, morph"
FROSTFS_IR_LOGGER_TAGS_0_LEVEL="debug"
FROSTFS_IR_WALLET_PATH=/path/to/wallet.json
FROSTFS_IR_WALLET_ADDRESS=NUHtW3eM6a4mmFCgyyr4rj4wygsTKB88XX

View file

@ -3,6 +3,9 @@
logger:
level: info # Logger level: one of "debug", "info" (default), "warn", "error", "dpanic", "panic", "fatal"
timestamp: true
tags:
- names: "main, morph" # Possible values: `main`, `morph`, `grpc_svc`, `ir`, `processor`.
level: debug
wallet:
path: /path/to/wallet.json # Path to NEP-6 NEO wallet file

View file

@ -23,6 +23,7 @@ FROSTFS_NODE_ATTRIBUTE_1="UN-LOCODE:RU MSK"
FROSTFS_NODE_RELAY=true
FROSTFS_NODE_PERSISTENT_SESSIONS_PATH=/sessions
FROSTFS_NODE_PERSISTENT_STATE_PATH=/state
FROSTFS_NODE_LOCODE_DB_PATH=/path/to/locode/db
# Tree service section
FROSTFS_TREE_ENABLED=true
@ -121,7 +122,8 @@ FROSTFS_STORAGE_SHARD_0_METABASE_PERM=0644
FROSTFS_STORAGE_SHARD_0_METABASE_MAX_BATCH_SIZE=100
FROSTFS_STORAGE_SHARD_0_METABASE_MAX_BATCH_DELAY=10ms
### Blobstor config
FROSTFS_STORAGE_SHARD_0_COMPRESS=true
FROSTFS_STORAGE_SHARD_0_COMPRESSION_ENABLED=true
FROSTFS_STORAGE_SHARD_0_COMPRESSION_LEVEL=fastest
FROSTFS_STORAGE_SHARD_0_COMPRESSION_EXCLUDE_CONTENT_TYPES="audio/* video/*"
FROSTFS_STORAGE_SHARD_0_COMPRESSION_ESTIMATE_COMPRESSIBILITY=true
FROSTFS_STORAGE_SHARD_0_COMPRESSION_ESTIMATE_COMPRESSIBILITY_THRESHOLD=0.7
@ -181,6 +183,9 @@ FROSTFS_STORAGE_SHARD_0_LIMITS_READ_TAGS_4_TAG=policer
FROSTFS_STORAGE_SHARD_0_LIMITS_READ_TAGS_4_WEIGHT=5
FROSTFS_STORAGE_SHARD_0_LIMITS_READ_TAGS_4_LIMIT_OPS=25000
FROSTFS_STORAGE_SHARD_0_LIMITS_READ_TAGS_4_PROHIBITED=true
FROSTFS_STORAGE_SHARD_0_LIMITS_READ_TAGS_5_TAG=treesync
FROSTFS_STORAGE_SHARD_0_LIMITS_READ_TAGS_5_WEIGHT=5
FROSTFS_STORAGE_SHARD_0_LIMITS_READ_TAGS_5_LIMIT_OPS=25
FROSTFS_STORAGE_SHARD_0_LIMITS_WRITE_TAGS_0_TAG=internal
FROSTFS_STORAGE_SHARD_0_LIMITS_WRITE_TAGS_0_WEIGHT=200
FROSTFS_STORAGE_SHARD_0_LIMITS_WRITE_TAGS_0_LIMIT_OPS=0
@ -198,6 +203,9 @@ FROSTFS_STORAGE_SHARD_0_LIMITS_WRITE_TAGS_3_LIMIT_OPS=2500
FROSTFS_STORAGE_SHARD_0_LIMITS_WRITE_TAGS_4_TAG=policer
FROSTFS_STORAGE_SHARD_0_LIMITS_WRITE_TAGS_4_WEIGHT=50
FROSTFS_STORAGE_SHARD_0_LIMITS_WRITE_TAGS_4_LIMIT_OPS=2500
FROSTFS_STORAGE_SHARD_0_LIMITS_WRITE_TAGS_5_TAG=treesync
FROSTFS_STORAGE_SHARD_0_LIMITS_WRITE_TAGS_5_WEIGHT=50
FROSTFS_STORAGE_SHARD_0_LIMITS_WRITE_TAGS_5_LIMIT_OPS=100
## 1 shard
### Flag to refill Metabase from BlobStor

View file

@ -37,7 +37,8 @@
},
"persistent_state": {
"path": "/state"
}
},
"locode_db_path": "/path/to/locode/db"
},
"grpc": {
"0": {
@ -182,12 +183,15 @@
"max_batch_size": 100,
"max_batch_delay": "10ms"
},
"compress": true,
"compression_exclude_content_types": [
"audio/*", "video/*"
],
"compression_estimate_compressibility": true,
"compression_estimate_compressibility_threshold": 0.7,
"compression": {
"enabled": true,
"level": "fastest",
"exclude_content_types": [
"audio/*", "video/*"
],
"estimate_compressibility": true,
"estimate_compressibility_threshold": 0.7
},
"small_object_size": 102400,
"blobstor": [
{
@ -254,6 +258,11 @@
"weight": 5,
"limit_ops": 25000,
"prohibited": true
},
{
"tag": "treesync",
"weight": 5,
"limit_ops": 25
}
]
},
@ -288,6 +297,11 @@
"tag": "policer",
"weight": 50,
"limit_ops": 2500
},
{
"tag": "treesync",
"weight": 50,
"limit_ops": 100
}
]
}
@ -311,7 +325,9 @@
"max_batch_size": 200,
"max_batch_delay": "20ms"
},
"compress": false,
"compression": {
"enabled": false
},
"small_object_size": 102400,
"blobstor": [
{

View file

@ -36,6 +36,7 @@ node:
path: /sessions # path to persistent session tokens file of Storage node (default: in-memory sessions)
persistent_state:
path: /state # path to persistent state file of Storage node
"locode_db_path": "/path/to/locode/db"
grpc:
- endpoint: s01.frostfs.devenv:8080 # endpoint for gRPC server
@ -159,7 +160,8 @@ storage:
max_batch_delay: 5ms # maximum delay for a batch of operations to be executed
max_batch_size: 100 # maximum amount of operations in a single batch
compress: false # turn on/off zstd(level 3) compression of stored objects
compression:
enabled: false # turn on/off zstd compression of stored objects
small_object_size: 100 kb # size threshold for "small" objects which are cached in key-value DB, not in FS, bytes
blobstor:
@ -201,12 +203,14 @@ storage:
max_batch_size: 100
max_batch_delay: 10ms
compress: true # turn on/off zstd(level 3) compression of stored objects
compression_exclude_content_types:
- audio/*
- video/*
compression_estimate_compressibility: true
compression_estimate_compressibility_threshold: 0.7
compression:
enabled: true # turn on/off zstd compression of stored objects
level: fastest
exclude_content_types:
- audio/*
- video/*
estimate_compressibility: true
estimate_compressibility_threshold: 0.7
blobstor:
- type: blobovnicza
@ -253,6 +257,9 @@ storage:
weight: 5
limit_ops: 25000
prohibited: true
- tag: treesync
weight: 5
limit_ops: 25
write:
max_running_ops: 1000
max_waiting_ops: 100
@ -275,6 +282,9 @@ storage:
- tag: policer
weight: 50
limit_ops: 2500
- tag: treesync
weight: 50
limit_ops: 100
1:
writecache:

View file

@ -12,22 +12,23 @@ There are some custom types used for brevity:
# Structure
| Section | Description |
|------------------------|---------------------------------------------------------------------|
| `logger` | [Logging parameters](#logger-section) |
| `pprof` | [PProf configuration](#pprof-section) |
| `prometheus` | [Prometheus metrics configuration](#prometheus-section) |
| `control` | [Control service configuration](#control-section) |
| `contracts` | [Override FrostFS contracts hashes](#contracts-section) |
| `morph` | [N3 blockchain client configuration](#morph-section) |
| `apiclient` | [FrostFS API client configuration](#apiclient-section) |
| `policer` | [Policer service configuration](#policer-section) |
| `replicator` | [Replicator service configuration](#replicator-section) |
| `storage` | [Storage engine configuration](#storage-section) |
| `runtime` | [Runtime configuration](#runtime-section) |
| `audit` | [Audit configuration](#audit-section) |
| `multinet` | [Multinet configuration](#multinet-section) |
| `qos` | [QoS configuration](#qos-section) |
| Section | Description |
|--------------|---------------------------------------------------------|
| `node` | [Node parameters](#node-section) |
| `logger` | [Logging parameters](#logger-section) |
| `pprof` | [PProf configuration](#pprof-section) |
| `prometheus` | [Prometheus metrics configuration](#prometheus-section) |
| `control` | [Control service configuration](#control-section) |
| `contracts` | [Override FrostFS contracts hashes](#contracts-section) |
| `morph` | [N3 blockchain client configuration](#morph-section) |
| `apiclient` | [FrostFS API client configuration](#apiclient-section) |
| `policer` | [Policer service configuration](#policer-section) |
| `replicator` | [Replicator service configuration](#replicator-section) |
| `storage` | [Storage engine configuration](#storage-section) |
| `runtime` | [Runtime configuration](#runtime-section) |
| `audit` | [Audit configuration](#audit-section) |
| `multinet` | [Multinet configuration](#multinet-section) |
| `qos` | [QoS configuration](#qos-section) |
# `control` section
```yaml
@ -184,21 +185,41 @@ Contains configuration for each shard. Keys must be consecutive numbers starting
`default` subsection has the same format and specifies defaults for missing values.
The following table describes configuration for each shard.
| Parameter | Type | Default value | Description |
| ------------------------------------------------ | ------------------------------------------- | ------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `compress` | `bool` | `false` | Flag to enable compression. |
| `compression_exclude_content_types` | `[]string` | | List of content-types to disable compression for. Content-type is taken from `Content-Type` object attribute. Each element can contain a star `*` as a first (last) character, which matches any prefix (suffix). |
| `compression_estimate_compressibility` | `bool` | `false` | If `true`, then normalized compressibility estimation is used to decide whether to compress data or not. |
| `compression_estimate_compressibility_threshold` | `float` | `0.1` | Normalized compressibility estimate threshold: data will be compressed if the estimation is greater than this value. |
| `mode` | `string` | `read-write` | Shard Mode.<br/>Possible values: `read-write`, `read-only`, `degraded`, `degraded-read-only`, `disabled` |
| `resync_metabase` | `bool` | `false` | Flag to enable metabase resync on start. |
| `resync_metabase_worker_count` | `int` | `1000` | Count of concurrent workers to resync metabase. |
| `writecache` | [Writecache config](#writecache-subsection) | | Write-cache configuration. |
| `metabase` | [Metabase config](#metabase-subsection) | | Metabase configuration. |
| `blobstor` | [Blobstor config](#blobstor-subsection) | | Blobstor configuration. |
| `small_object_size` | `size` | `1M` | Maximum size of an object stored in blobovnicza tree. |
| `gc` | [GC config](#gc-subsection) | | GC configuration. |
| `limits` | [Shard limits config](#limits-subsection) | | Shard limits configuration. |
| Parameter | Type | Default value | Description |
| ------------------------------ | --------------------------------------------- | ------------- | --------------------------------------------------------------------------------------------------------- |
| `compression` | [Compression config](#compression-subsection) | | Compression config. |
| `mode` | `string` | `read-write` | Shard Mode.<br/>Possible values: `read-write`, `read-only`, `degraded`, `degraded-read-only`, `disabled` |
| `resync_metabase` | `bool` | `false` | Flag to enable metabase resync on start. |
| `resync_metabase_worker_count` | `int` | `1000` | Count of concurrent workers to resync metabase. |
| `writecache` | [Writecache config](#writecache-subsection) | | Write-cache configuration. |
| `metabase` | [Metabase config](#metabase-subsection) | | Metabase configuration. |
| `blobstor` | [Blobstor config](#blobstor-subsection) | | Blobstor configuration. |
| `small_object_size` | `size` | `1M` | Maximum size of an object stored in blobovnicza tree. |
| `gc` | [GC config](#gc-subsection) | | GC configuration. |
| `limits` | [Shard limits config](#limits-subsection) | | Shard limits configuration. |
### `compression` subsection
Contains compression config.
```yaml
compression:
enabled: true
level: smallest_size
exclude_content_types:
- audio/*
- video/*
estimate_compressibility: true
estimate_compressibility_threshold: 0.7
```
| Parameter | Type | Default value | Description |
| ------------------------------------ | ---------- | ------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `enabled` | `bool` | `false` | Flag to enable compression. |
| `level` | `string` | `optimal` | Compression level. Available values are `optimal`, `fastest`, `smallest_size`. |
| `exclude_content_types` | `[]string` | | List of content-types to disable compression for. Content-type is taken from `Content-Type` object attribute. Each element can contain a star `*` as a first (last) character, which matches any prefix (suffix). |
| `estimate_compressibility` | `bool` | `false` | If `true`, then normalized compressibility estimation is used to decide whether to compress data or not. |
| `estimate_compressibility_threshold` | `float` | `0.1` | Normalized compressibility estimate threshold: data will be compressed if the estimation is greater than this value. |
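For illustration, a minimal sketch of how this estimation gate behaves (the helper below is hypothetical; only `compress.Estimate` from `github.com/klauspost/compress` is a real dependency, and it is the same estimator used by the blobstor code in this change):

```go
package main

import (
	"fmt"

	"github.com/klauspost/compress"
)

// compressIfEstimated is a hypothetical helper: when estimation is enabled,
// data is compressed only if its normalized compressibility estimate reaches
// the configured threshold; otherwise it is stored as-is.
func compressIfEstimated(data []byte, threshold float64, doCompress func([]byte) []byte) []byte {
	if compress.Estimate(data) >= threshold {
		return doCompress(data)
	}
	return data
}

func main() {
	payload := []byte("aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa") // highly compressible
	out := compressIfEstimated(payload, 0.7, func(b []byte) []byte {
		return b // a real implementation would call the zstd encoder here
	})
	fmt.Println("stored bytes:", len(out))
}
```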
### `blobstor` subsection
@ -384,17 +405,19 @@ node:
path: /sessions
persistent_state:
path: /state
locode_db_path: "/path/to/locode/db"
```
| Parameter | Type | Default value | Description |
|-----------------------|---------------------------------------------------------------|---------------|-------------------------------------------------------------------------|
| `key` | `string` | | Path to the binary-encoded private key. |
| `wallet` | [Wallet config](#wallet-subsection) | | Wallet configuration. Has no effect if `key` is provided. |
| `addresses` | `[]string` | | Addresses advertised in the netmap. |
| `attribute` | `[]string` | | Node attributes as a list of key-value pairs in `<key>:<value>` format. |
| `relay` | `bool` | | Enable relay mode. |
| `persistent_sessions` | [Persistent sessions config](#persistent_sessions-subsection) | | Persistent session token store configuration. |
| `persistent_state` | [Persistent state config](#persistent_state-subsection) | | Persistent state configuration. |
| Parameter | Type | Default value | Description |
|-----------------------|---------------------------------------------------------------|---------------|-----------------------------------------------------------------------------------------------------|
| `key` | `string` | | Path to the binary-encoded private key. |
| `wallet` | [Wallet config](#wallet-subsection) | | Wallet configuration. Has no effect if `key` is provided. |
| `addresses` | `[]string` | | Addresses advertised in the netmap. |
| `attribute` | `[]string` | | Node attributes as a list of key-value pairs in `<key>:<value>` format. |
| `relay` | `bool` | | Enable relay mode. |
| `persistent_sessions` | [Persistent sessions config](#persistent_sessions-subsection) | | Persistent session token store configuration. |
| `persistent_state` | [Persistent state config](#persistent_state-subsection) | | Persistent state configuration. |
| `locode_db_path` | `string` | empty | Path to UN/LOCODE [database](https://git.frostfs.info/TrueCloudLab/frostfs-locode-db/) for FrostFS. |
## `wallet` subsection
N3 wallet configuration.

View file

@ -23,3 +23,7 @@ func NoError(err error, details ...string) {
panic(content)
}
}
func Fail(details ...string) {
panic(strings.Join(details, " "))
}

View file

@ -516,4 +516,5 @@ const (
FailedToValidateIncomingIOTag = "failed to validate incoming IO tag, replaced with `client`"
WriteCacheFailedToAcquireRPSQuota = "writecache failed to acquire RPS quota to flush object"
FailedToUpdateNetmapCandidates = "update netmap candidates failed"
UnknownCompressionLevelDefaultWillBeUsed = "unknown compression level, 'optimal' will be used"
)

31
internal/qos/config.go Normal file
View file

@ -0,0 +1,31 @@
package qos
import (
"math"
"time"
)
const (
NoLimit int64 = math.MaxInt64
DefaultIdleTimeout = 5 * time.Minute
)
type LimiterConfig struct {
Read OpConfig
Write OpConfig
}
type OpConfig struct {
MaxWaitingOps int64
MaxRunningOps int64
IdleTimeout time.Duration
Tags []IOTagConfig
}
type IOTagConfig struct {
Tag string
Weight *float64
LimitOps *float64
ReservedOps *float64
Prohibited bool
}
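As a usage sketch for the types above (assuming the `Validate` method added later in this change; note that `internal/qos` can only be imported from within the frostfs-node module):

```go
package main

import (
	"fmt"

	"git.frostfs.info/TrueCloudLab/frostfs-node/internal/qos"
)

func main() {
	// Mirror the defaults produced by the shard limits parser when the
	// corresponding config values are absent: unlimited ops, default idle timeout.
	cfg := qos.LimiterConfig{
		Read: qos.OpConfig{
			MaxRunningOps: qos.NoLimit,
			MaxWaitingOps: qos.NoLimit,
			IdleTimeout:   qos.DefaultIdleTimeout,
		},
		Write: qos.OpConfig{
			MaxRunningOps: qos.NoLimit,
			MaxWaitingOps: qos.NoLimit,
			IdleTimeout:   qos.DefaultIdleTimeout,
		},
	}
	if err := cfg.Validate(); err != nil {
		fmt.Println("invalid limiter config:", err)
	}
}
```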

View file

@ -8,7 +8,6 @@ import (
"sync/atomic"
"time"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/engine/shard/limits"
"git.frostfs.info/TrueCloudLab/frostfs-qos/scheduling"
"git.frostfs.info/TrueCloudLab/frostfs-qos/tagging"
apistatus "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client/status"
@ -37,15 +36,15 @@ type scheduler interface {
Close()
}
func NewLimiter(c *limits.Config) (Limiter, error) {
if err := validateConfig(c); err != nil {
func NewLimiter(c LimiterConfig) (Limiter, error) {
if err := c.Validate(); err != nil {
return nil, err
}
readScheduler, err := createScheduler(c.Read())
readScheduler, err := createScheduler(c.Read)
if err != nil {
return nil, fmt.Errorf("create read scheduler: %w", err)
}
writeScheduler, err := createScheduler(c.Write())
writeScheduler, err := createScheduler(c.Write)
if err != nil {
return nil, fmt.Errorf("create write scheduler: %w", err)
}
@ -63,8 +62,8 @@ func NewLimiter(c *limits.Config) (Limiter, error) {
return l, nil
}
func createScheduler(config limits.OpConfig) (scheduler, error) {
if len(config.Tags) == 0 && config.MaxWaitingOps == limits.NoLimit {
func createScheduler(config OpConfig) (scheduler, error) {
if len(config.Tags) == 0 && config.MaxWaitingOps == NoLimit {
return newSemaphoreScheduler(config.MaxRunningOps), nil
}
return scheduling.NewMClock(
@ -72,7 +71,7 @@ func createScheduler(config limits.OpConfig) (scheduler, error) {
converToSchedulingTags(config.Tags), config.IdleTimeout)
}
func converToSchedulingTags(limits []limits.IOTagConfig) map[string]scheduling.TagInfo {
func converToSchedulingTags(limits []IOTagConfig) map[string]scheduling.TagInfo {
result := make(map[string]scheduling.TagInfo)
for _, tag := range []IOTag{IOTagBackground, IOTagClient, IOTagInternal, IOTagPolicer, IOTagTreeSync, IOTagWritecache} {
result[tag.String()] = scheduling.TagInfo{

View file

@ -4,8 +4,6 @@ import (
"errors"
"fmt"
"math"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/engine/shard/limits"
)
var errWeightsMustBeSpecified = errors.New("invalid weights: weights must be specified for all tags or not specified for any")
@ -14,17 +12,17 @@ type tagConfig struct {
Shares, Limit, Reserved *float64
}
func validateConfig(c *limits.Config) error {
if err := validateOpConfig(c.Read()); err != nil {
func (c *LimiterConfig) Validate() error {
if err := validateOpConfig(c.Read); err != nil {
return fmt.Errorf("limits 'read' section validation error: %w", err)
}
if err := validateOpConfig(c.Write()); err != nil {
if err := validateOpConfig(c.Write); err != nil {
return fmt.Errorf("limits 'write' section validation error: %w", err)
}
return nil
}
func validateOpConfig(c limits.OpConfig) error {
func validateOpConfig(c OpConfig) error {
if c.MaxRunningOps <= 0 {
return fmt.Errorf("invalid 'max_running_ops = %d': must be greater than zero", c.MaxRunningOps)
}
@ -40,7 +38,7 @@ func validateOpConfig(c limits.OpConfig) error {
return nil
}
func validateTags(configTags []limits.IOTagConfig) error {
func validateTags(configTags []IOTagConfig) error {
tags := map[IOTag]tagConfig{
IOTagBackground: {},
IOTagClient: {},

View file

@ -3,6 +3,7 @@ package client
import (
"bytes"
"fmt"
"iter"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/network"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client"
@ -19,7 +20,7 @@ func nodeInfoFromKeyAddr(dst *NodeInfo, k []byte, a, external network.AddressGro
// Args must not be nil.
func NodeInfoFromRawNetmapElement(dst *NodeInfo, info interface {
PublicKey() []byte
IterateAddresses(func(string) bool)
Addresses() iter.Seq[string]
NumberOfAddresses() int
ExternalAddresses() []string
},

View file

@ -1,6 +1,10 @@
package netmap
import "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/netmap"
import (
"iter"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/netmap"
)
// Node is a named type of netmap.NodeInfo which provides interface needed
// in the current repository. Node is expected to be used everywhere instead
@ -14,10 +18,20 @@ func (x Node) PublicKey() []byte {
return (netmap.NodeInfo)(x).PublicKey()
}
// Addresses returns an iterator over all announced network addresses.
func (x Node) Addresses() iter.Seq[string] {
return (netmap.NodeInfo)(x).NetworkEndpoints()
}
// IterateAddresses iterates over all announced network addresses
// and passes them into f. Handler MUST NOT be nil.
// Deprecated: use [Node.Addresses] instead.
func (x Node) IterateAddresses(f func(string) bool) {
(netmap.NodeInfo)(x).IterateNetworkEndpoints(f)
for s := range (netmap.NodeInfo)(x).NetworkEndpoints() {
if f(s) {
return
}
}
}
// NumberOfAddresses returns number of announced network addresses.

View file

@ -13,6 +13,13 @@ type ECInfo struct {
Total uint32
}
func (v *ECInfo) String() string {
if v == nil {
return "<nil>"
}
return fmt.Sprintf("parent ID: %s, index: %d, total %d", v.ParentID, v.Index, v.Total)
}
// Info groups object address with its FrostFS
// object info.
type Info struct {
@ -23,5 +30,5 @@ type Info struct {
}
func (v Info) String() string {
return fmt.Sprintf("address: %s, type: %s, is linking: %t", v.Address, v.Type, v.IsLinkingObject)
return fmt.Sprintf("address: %s, type: %s, is linking: %t, EC header: %s", v.Address, v.Type, v.IsLinkingObject, v.ECInfo)
}

View file

@ -50,7 +50,7 @@ func (s *Server) initNetmapProcessor(ctx context.Context, cfg *viper.Viper,
var err error
s.netmapProcessor, err = netmap.New(&netmap.Params{
Log: s.log,
Log: s.log.WithTag(logger.TagProcessor),
Metrics: s.irMetrics,
PoolSize: poolSize,
NetmapClient: netmap.NewNetmapClient(s.netmapClient),
@ -159,7 +159,7 @@ func (s *Server) createAlphaSync(cfg *viper.Viper, frostfsCli *frostfsClient.Cli
} else {
// create governance processor
governanceProcessor, err := governance.New(&governance.Params{
Log: s.log,
Log: s.log.WithTag(logger.TagProcessor),
Metrics: s.irMetrics,
FrostFSClient: frostfsCli,
AlphabetState: s,
@ -225,7 +225,7 @@ func (s *Server) initAlphabetProcessor(ctx context.Context, cfg *viper.Viper) er
// create alphabet processor
s.alphabetProcessor, err = alphabet.New(&alphabet.Params{
ParsedWallets: parsedWallets,
Log: s.log,
Log: s.log.WithTag(logger.TagProcessor),
Metrics: s.irMetrics,
PoolSize: poolSize,
AlphabetContracts: s.contracts.alphabet,
@ -247,7 +247,7 @@ func (s *Server) initContainerProcessor(ctx context.Context, cfg *viper.Viper, c
s.log.Debug(ctx, logs.ContainerContainerWorkerPool, zap.Int("size", poolSize))
// container processor
containerProcessor, err := cont.New(&cont.Params{
Log: s.log,
Log: s.log.WithTag(logger.TagProcessor),
Metrics: s.irMetrics,
PoolSize: poolSize,
AlphabetState: s,
@ -268,7 +268,7 @@ func (s *Server) initBalanceProcessor(ctx context.Context, cfg *viper.Viper, fro
s.log.Debug(ctx, logs.BalanceBalanceWorkerPool, zap.Int("size", poolSize))
// create balance processor
balanceProcessor, err := balance.New(&balance.Params{
Log: s.log,
Log: s.log.WithTag(logger.TagProcessor),
Metrics: s.irMetrics,
PoolSize: poolSize,
FrostFSClient: frostfsCli,
@ -291,7 +291,7 @@ func (s *Server) initFrostFSMainnetProcessor(ctx context.Context, cfg *viper.Vip
s.log.Debug(ctx, logs.FrostFSFrostfsWorkerPool, zap.Int("size", poolSize))
frostfsProcessor, err := frostfs.New(&frostfs.Params{
Log: s.log,
Log: s.log.WithTag(logger.TagProcessor),
Metrics: s.irMetrics,
PoolSize: poolSize,
FrostFSContract: s.contracts.frostfs,
@ -342,7 +342,7 @@ func (s *Server) initGRPCServer(ctx context.Context, cfg *viper.Viper, log *logg
controlSvc := controlsrv.NewAuditService(controlsrv.New(p, s.netmapClient, s.containerClient,
controlsrv.WithAllowedKeys(authKeys),
), log, audit)
), log.WithTag(logger.TagGrpcSvc), audit)
grpcControlSrv := grpc.NewServer()
control.RegisterControlServiceServer(grpcControlSrv, controlSvc)
@ -458,7 +458,7 @@ func (s *Server) initMorph(ctx context.Context, cfg *viper.Viper, errChan chan<-
}
morphChain := &chainParams{
log: s.log,
log: s.log.WithTag(logger.TagMorph),
cfg: cfg,
key: s.key,
name: morphPrefix,

View file

@ -339,7 +339,7 @@ func New(ctx context.Context, log *logger.Logger, cfg *viper.Viper, errChan chan
) (*Server, error) {
var err error
server := &Server{
log: log,
log: log.WithTag(logger.TagIr),
irMetrics: metrics,
cmode: cmode,
}

View file

@ -158,11 +158,11 @@ func (b *Blobovniczas) Path() string {
}
// SetCompressor implements common.Storage.
func (b *Blobovniczas) SetCompressor(cc *compression.Config) {
func (b *Blobovniczas) SetCompressor(cc *compression.Compressor) {
b.compression = cc
}
func (b *Blobovniczas) Compressor() *compression.Config {
func (b *Blobovniczas) Compressor() *compression.Compressor {
return b.compression
}

View file

@ -19,7 +19,7 @@ type cfg struct {
openedCacheSize int
blzShallowDepth uint64
blzShallowWidth uint64
compression *compression.Config
compression *compression.Compressor
blzOpts []blobovnicza.Option
reportError func(context.Context, string, error) // reportError is the function called when encountering disk errors.
metrics Metrics

View file

@ -41,7 +41,7 @@ type SubStorageInfo struct {
type Option func(*cfg)
type cfg struct {
compression compression.Config
compression compression.Compressor
log *logger.Logger
storage []SubStorage
metrics Metrics
@ -95,46 +95,9 @@ func WithLogger(l *logger.Logger) Option {
}
}
// WithCompressObjects returns option to toggle
// compression of the stored objects.
//
// If true, Zstandard algorithm is used for data compression.
//
// If compressor (decompressor) creation failed,
// the uncompressed option will be used, and the error
// is recorded in the provided log.
func WithCompressObjects(comp bool) Option {
func WithCompression(comp compression.Config) Option {
return func(c *cfg) {
c.compression.Enabled = comp
}
}
// WithCompressibilityEstimate returns an option to use
// normalized compressibility estimate to decide whether to
// compress data or not.
//
// See https://github.com/klauspost/compress/blob/v1.17.2/compressible.go#L5
func WithCompressibilityEstimate(v bool) Option {
return func(c *cfg) {
c.compression.UseCompressEstimation = v
}
}
// WithCompressibilityEstimateThreshold returns an option to set
// normalized compressibility estimate threshold.
//
// See https://github.com/klauspost/compress/blob/v1.17.2/compressible.go#L5
func WithCompressibilityEstimateThreshold(threshold float64) Option {
return func(c *cfg) {
c.compression.CompressEstimationThreshold = threshold
}
}
// WithUncompressableContentTypes returns option to disable decompression
// for specific content types as seen by object.AttributeContentType attribute.
func WithUncompressableContentTypes(values []string) Option {
return func(c *cfg) {
c.compression.UncompressableContentTypes = values
c.compression.Config = comp
}
}
@ -152,6 +115,6 @@ func WithMetrics(m Metrics) Option {
}
}
func (b *BlobStor) Compressor() *compression.Config {
func (b *BlobStor) Compressor() *compression.Compressor {
return &b.compression
}

View file

@ -9,6 +9,7 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/core/object"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/blobovniczatree"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/compression"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/fstree"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/teststore"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/shard/mode"
@ -51,7 +52,9 @@ func TestCompression(t *testing.T) {
newBlobStor := func(t *testing.T, compress bool) *BlobStor {
bs := New(
WithCompressObjects(compress),
WithCompression(compression.Config{
Enabled: compress,
}),
WithStorages(defaultStorages(dir, smallSizeLimit)))
require.NoError(t, bs.Open(context.Background(), mode.ReadWrite))
require.NoError(t, bs.Init(context.Background()))
@ -113,8 +116,10 @@ func TestBlobstor_needsCompression(t *testing.T) {
dir := t.TempDir()
bs := New(
WithCompressObjects(compress),
WithUncompressableContentTypes(ct),
WithCompression(compression.Config{
Enabled: compress,
UncompressableContentTypes: ct,
}),
WithStorages([]SubStorage{
{
Storage: blobovniczatree.NewBlobovniczaTree(

View file

@ -18,8 +18,8 @@ type Storage interface {
Path() string
ObjectsCount(ctx context.Context) (uint64, error)
SetCompressor(cc *compression.Config)
Compressor() *compression.Config
SetCompressor(cc *compression.Compressor)
Compressor() *compression.Compressor
// SetReportErrorFunc allows to provide a function to be called on disk errors.
// This function MUST be called before Open.

View file

@ -11,7 +11,7 @@ import (
)
func BenchmarkCompression(b *testing.B) {
c := Config{Enabled: true}
c := Compressor{Config: Config{Enabled: true}}
require.NoError(b, c.Init())
for _, size := range []int{128, 1024, 32 * 1024, 32 * 1024 * 1024} {
@ -33,7 +33,7 @@ func BenchmarkCompression(b *testing.B) {
}
}
func benchWith(b *testing.B, c Config, data []byte) {
func benchWith(b *testing.B, c Compressor, data []byte) {
b.ResetTimer()
b.ReportAllocs()
for range b.N {
@ -56,8 +56,10 @@ func BenchmarkCompressionRealVSEstimate(b *testing.B) {
b.Run("estimate", func(b *testing.B) {
b.ResetTimer()
c := &Config{
Enabled: true,
c := &Compressor{
Config: Config{
Enabled: true,
},
}
require.NoError(b, c.Init())
@ -76,8 +78,10 @@ func BenchmarkCompressionRealVSEstimate(b *testing.B) {
b.Run("compress", func(b *testing.B) {
b.ResetTimer()
c := &Config{
Enabled: true,
c := &Compressor{
Config: Config{
Enabled: true,
},
}
require.NoError(b, c.Init())

View file

@ -4,21 +4,36 @@ import (
"bytes"
"strings"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/assert"
objectSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
"github.com/klauspost/compress"
"github.com/klauspost/compress/zstd"
)
type Level string
const (
LevelDefault Level = ""
LevelOptimal Level = "optimal"
LevelFastest Level = "fastest"
LevelSmallestSize Level = "smallest_size"
)
type Compressor struct {
Config
encoder *zstd.Encoder
decoder *zstd.Decoder
}
// Config represents common compression-related configuration.
type Config struct {
Enabled bool
UncompressableContentTypes []string
Level Level
UseCompressEstimation bool
CompressEstimationThreshold float64
encoder *zstd.Encoder
decoder *zstd.Decoder
EstimateCompressibility bool
EstimateCompressibilityThreshold float64
}
// zstdFrameMagic contains first 4 bytes of any compressed object
@ -26,11 +41,11 @@ type Config struct {
var zstdFrameMagic = []byte{0x28, 0xb5, 0x2f, 0xfd}
// Init initializes compression routines.
func (c *Config) Init() error {
func (c *Compressor) Init() error {
var err error
if c.Enabled {
c.encoder, err = zstd.NewWriter(nil)
c.encoder, err = zstd.NewWriter(nil, zstd.WithEncoderLevel(c.compressionLevel()))
if err != nil {
return err
}
@ -73,7 +88,7 @@ func (c *Config) NeedsCompression(obj *objectSDK.Object) bool {
// Decompress decompresses data if it starts with the magic
// and returns data untouched otherwise.
func (c *Config) Decompress(data []byte) ([]byte, error) {
func (c *Compressor) Decompress(data []byte) ([]byte, error) {
if len(data) < 4 || !bytes.Equal(data[:4], zstdFrameMagic) {
return data, nil
}
@ -82,13 +97,13 @@ func (c *Config) Decompress(data []byte) ([]byte, error) {
// Compress compresses data if compression is enabled
// and returns data untouched otherwise.
func (c *Config) Compress(data []byte) []byte {
func (c *Compressor) Compress(data []byte) []byte {
if c == nil || !c.Enabled {
return data
}
if c.UseCompressEstimation {
if c.EstimateCompressibility {
estimated := compress.Estimate(data)
if estimated >= c.CompressEstimationThreshold {
if estimated >= c.EstimateCompressibilityThreshold {
return c.compress(data)
}
return data
@ -96,7 +111,7 @@ func (c *Config) Compress(data []byte) []byte {
return c.compress(data)
}
func (c *Config) compress(data []byte) []byte {
func (c *Compressor) compress(data []byte) []byte {
maxSize := c.encoder.MaxEncodedSize(len(data))
compressed := c.encoder.EncodeAll(data, make([]byte, 0, maxSize))
if len(data) < len(compressed) {
@ -106,7 +121,7 @@ func (c *Config) compress(data []byte) []byte {
}
// Close closes encoder and decoder and returns any error that occurred.
func (c *Config) Close() error {
func (c *Compressor) Close() error {
var err error
if c.encoder != nil {
err = c.encoder.Close()
@ -116,3 +131,24 @@ func (c *Config) Close() error {
}
return err
}
func (c *Config) HasValidCompressionLevel() bool {
return c.Level == LevelDefault ||
c.Level == LevelOptimal ||
c.Level == LevelFastest ||
c.Level == LevelSmallestSize
}
func (c *Compressor) compressionLevel() zstd.EncoderLevel {
switch c.Level {
case LevelDefault, LevelOptimal:
return zstd.SpeedDefault
case LevelFastest:
return zstd.SpeedFastest
case LevelSmallestSize:
return zstd.SpeedBestCompression
default:
assert.Fail("unknown compression level", string(c.Level))
return zstd.SpeedDefault
}
}

View file

@ -6,6 +6,7 @@ import (
"fmt"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/compression"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/shard/mode"
"go.uber.org/zap"
)
@ -53,6 +54,10 @@ var ErrInitBlobovniczas = errors.New("failure on blobovnicza initialization stag
func (b *BlobStor) Init(ctx context.Context) error {
b.log.Debug(ctx, logs.BlobstorInitializing)
if !b.compression.HasValidCompressionLevel() {
b.log.Warn(ctx, logs.UnknownCompressionLevelDefaultWillBeUsed, zap.String("level", string(b.compression.Level)))
b.compression.Level = compression.LevelDefault
}
if err := b.compression.Init(); err != nil {
return err
}

View file

@ -2,6 +2,8 @@ package fstree
import (
"sync"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/assert"
)
// FileCounter is used to count files in FSTree. The implementation must be thread-safe.
@ -52,16 +54,11 @@ func (c *SimpleCounter) Dec(size uint64) {
c.mtx.Lock()
defer c.mtx.Unlock()
if c.count > 0 {
c.count--
} else {
panic("fstree.SimpleCounter: invalid count")
}
if c.size >= size {
c.size -= size
} else {
panic("fstree.SimpleCounter: invalid size")
}
assert.True(c.count > 0, "fstree.SimpleCounter: invalid count")
c.count--
assert.True(c.size >= size, "fstree.SimpleCounter: invalid size")
c.size -= size
}
func (c *SimpleCounter) CountSize() (uint64, uint64) {

View file

@ -45,7 +45,7 @@ type FSTree struct {
log *logger.Logger
*compression.Config
compressor *compression.Compressor
Depth uint64
DirNameLen int
@ -82,7 +82,7 @@ func New(opts ...Option) *FSTree {
Permissions: 0o700,
RootPath: "./",
},
Config: nil,
compressor: nil,
Depth: 4,
DirNameLen: DirNameLen,
metrics: &noopMetrics{},
@ -196,7 +196,7 @@ func (t *FSTree) iterate(ctx context.Context, depth uint64, curPath []string, pr
}
if err == nil {
data, err = t.Decompress(data)
data, err = t.compressor.Decompress(data)
}
if err != nil {
if prm.IgnoreErrors {
@ -405,7 +405,7 @@ func (t *FSTree) Put(ctx context.Context, prm common.PutPrm) (common.PutRes, err
return common.PutRes{}, err
}
if !prm.DontCompress {
prm.RawData = t.Compress(prm.RawData)
prm.RawData = t.compressor.Compress(prm.RawData)
}
size = len(prm.RawData)
@ -448,7 +448,7 @@ func (t *FSTree) Get(ctx context.Context, prm common.GetPrm) (common.GetRes, err
}
}
data, err = t.Decompress(data)
data, err = t.compressor.Decompress(data)
if err != nil {
return common.GetRes{}, err
}
@ -597,12 +597,12 @@ func (t *FSTree) Path() string {
}
// SetCompressor implements common.Storage.
func (t *FSTree) SetCompressor(cc *compression.Config) {
t.Config = cc
func (t *FSTree) SetCompressor(cc *compression.Compressor) {
t.compressor = cc
}
func (t *FSTree) Compressor() *compression.Config {
return t.Config
func (t *FSTree) Compressor() *compression.Compressor {
return t.compressor
}
// SetReportErrorFunc implements common.Storage.

View file

@ -8,6 +8,7 @@ import (
"testing"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/compression"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/memstore"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/teststore"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/shard/mode"
@ -24,7 +25,9 @@ func TestIterateObjects(t *testing.T) {
// create BlobStor instance
blobStor := New(
WithStorages(defaultStorages(p, smalSz)),
WithCompressObjects(true),
WithCompression(compression.Config{
Enabled: true,
}),
)
defer os.RemoveAll(p)

View file

@ -16,7 +16,7 @@ func (s *memstoreImpl) Init() error
func (s *memstoreImpl) Close(context.Context) error { return nil }
func (s *memstoreImpl) Type() string { return Type }
func (s *memstoreImpl) Path() string { return s.rootPath }
func (s *memstoreImpl) SetCompressor(cc *compression.Config) { s.compression = cc }
func (s *memstoreImpl) Compressor() *compression.Config { return s.compression }
func (s *memstoreImpl) SetCompressor(cc *compression.Compressor) { s.compression = cc }
func (s *memstoreImpl) Compressor() *compression.Compressor { return s.compression }
func (s *memstoreImpl) SetReportErrorFunc(func(context.Context, string, error)) {}
func (s *memstoreImpl) SetParentID(string) {}

View file

@ -7,7 +7,7 @@ import (
type cfg struct {
rootPath string
readOnly bool
compression *compression.Config
compression *compression.Compressor
}
func defaultConfig() *cfg {

View file

@ -17,8 +17,8 @@ type cfg struct {
Type func() string
Path func() string
SetCompressor func(cc *compression.Config)
Compressor func() *compression.Config
SetCompressor func(cc *compression.Compressor)
Compressor func() *compression.Compressor
SetReportErrorFunc func(f func(context.Context, string, error))
Get func(common.GetPrm) (common.GetRes, error)
@ -45,11 +45,11 @@ func WithClose(f func() error) Option { return func(c *cfg) { c
func WithType(f func() string) Option { return func(c *cfg) { c.overrides.Type = f } }
func WithPath(f func() string) Option { return func(c *cfg) { c.overrides.Path = f } }
func WithSetCompressor(f func(*compression.Config)) Option {
func WithSetCompressor(f func(*compression.Compressor)) Option {
return func(c *cfg) { c.overrides.SetCompressor = f }
}
func WithCompressor(f func() *compression.Config) Option {
func WithCompressor(f func() *compression.Compressor) Option {
return func(c *cfg) { c.overrides.Compressor = f }
}

View file

@ -116,7 +116,7 @@ func (s *TestStore) Path() string {
}
}
func (s *TestStore) SetCompressor(cc *compression.Config) {
func (s *TestStore) SetCompressor(cc *compression.Compressor) {
s.mu.RLock()
defer s.mu.RUnlock()
switch {
@ -129,7 +129,7 @@ func (s *TestStore) SetCompressor(cc *compression.Config) {
}
}
func (s *TestStore) Compressor() *compression.Config {
func (s *TestStore) Compressor() *compression.Compressor {
s.mu.RLock()
defer s.mu.RUnlock()
switch {

View file

@ -22,10 +22,6 @@ type shardInitError struct {
// Open opens all StorageEngine's components.
func (e *StorageEngine) Open(ctx context.Context) error {
return e.open(ctx)
}
func (e *StorageEngine) open(ctx context.Context) error {
e.mtx.Lock()
defer e.mtx.Unlock()
@ -149,11 +145,11 @@ var errClosed = errors.New("storage engine is closed")
func (e *StorageEngine) Close(ctx context.Context) error {
close(e.closeCh)
defer e.wg.Wait()
return e.setBlockExecErr(ctx, errClosed)
return e.closeEngine(ctx)
}
// closes all shards. Never returns an error, shard errors are logged.
func (e *StorageEngine) close(ctx context.Context) error {
func (e *StorageEngine) closeAllShards(ctx context.Context) error {
e.mtx.RLock()
defer e.mtx.RUnlock()
@ -176,70 +172,23 @@ func (e *StorageEngine) execIfNotBlocked(op func() error) error {
e.blockExec.mtx.RLock()
defer e.blockExec.mtx.RUnlock()
if e.blockExec.err != nil {
return e.blockExec.err
if e.blockExec.closed {
return errClosed
}
return op()
}
// sets the flag of blocking execution of all data operations according to err:
// - err != nil, then blocks the execution. If exec wasn't blocked, calls close method
// (if err == errClosed => additionally releases pools and does not allow to resume executions).
// - otherwise, resumes execution. If exec was blocked, calls open method.
//
// Can be called concurrently with exec. In this case it waits for all executions to complete.
func (e *StorageEngine) setBlockExecErr(ctx context.Context, err error) error {
func (e *StorageEngine) closeEngine(ctx context.Context) error {
e.blockExec.mtx.Lock()
defer e.blockExec.mtx.Unlock()
prevErr := e.blockExec.err
wasClosed := errors.Is(prevErr, errClosed)
if wasClosed {
if e.blockExec.closed {
return errClosed
}
e.blockExec.err = err
if err == nil {
if prevErr != nil { // block -> ok
return e.open(ctx)
}
} else if prevErr == nil { // ok -> block
return e.close(ctx)
}
// otherwise do nothing
return nil
}
// BlockExecution blocks the execution of any data-related operation. All blocked ops will return err.
// To resume the execution, use ResumeExecution method.
//
// Can be called regardless of the fact of the previous blocking. If execution wasn't blocked, releases all resources
// similar to Close. Can be called concurrently with Close and any data related method (waits for all executions
// to complete). Returns error if any Close has been called before.
//
// Must not be called concurrently with either Open or Init.
//
// Note: technically passing nil error will resume the execution, otherwise, it is recommended to call ResumeExecution
// for this.
func (e *StorageEngine) BlockExecution(err error) error {
return e.setBlockExecErr(context.Background(), err)
}
// ResumeExecution resumes the execution of any data-related operation.
// To block the execution, use BlockExecution method.
//
// Can be called regardless of the fact of the previous blocking. If execution was blocked, prepares all resources
// similar to Open. Can be called concurrently with Close and any data related method (waits for all executions
// to complete). Returns error if any Close has been called before.
//
// Must not be called concurrently with either Open or Init.
func (e *StorageEngine) ResumeExecution() error {
return e.setBlockExecErr(context.Background(), nil)
e.blockExec.closed = true
return e.closeAllShards(ctx)
}
type ReConfiguration struct {

View file

@ -2,7 +2,6 @@ package engine
import (
"context"
"errors"
"fmt"
"io/fs"
"os"
@ -12,17 +11,14 @@ import (
"testing"
"time"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/core/object"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/blobstor/teststore"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/internal/testutil"
meta "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/metabase"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/pilorama"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/shard"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/shard/mode"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/writecache"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/logger/test"
cidtest "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id/test"
"github.com/stretchr/testify/require"
"go.etcd.io/bbolt"
)
@ -163,42 +159,6 @@ func testEngineFailInitAndReload(t *testing.T, degradedMode bool, opts []shard.O
require.Equal(t, 1, shardCount)
}
func TestExecBlocks(t *testing.T) {
e := testNewEngine(t).setShardsNum(t, 2).prepare(t).engine // number doesn't matter in this test, 2 is several but not many
// put some object
obj := testutil.GenerateObjectWithCID(cidtest.ID())
addr := object.AddressOf(obj)
require.NoError(t, Put(context.Background(), e, obj, false))
// block executions
errBlock := errors.New("block exec err")
require.NoError(t, e.BlockExecution(errBlock))
// try to exec some op
_, err := Head(context.Background(), e, addr)
require.ErrorIs(t, err, errBlock)
// resume executions
require.NoError(t, e.ResumeExecution())
_, err = Head(context.Background(), e, addr) // can be any data-related op
require.NoError(t, err)
// close
require.NoError(t, e.Close(context.Background()))
// try exec after close
_, err = Head(context.Background(), e, addr)
require.Error(t, err)
// try to resume
require.Error(t, e.ResumeExecution())
}
func TestPersistentShardID(t *testing.T) {
dir := t.TempDir()

View file

@ -33,9 +33,8 @@ type StorageEngine struct {
wg sync.WaitGroup
blockExec struct {
mtx sync.RWMutex
err error
mtx sync.RWMutex
closed bool
}
evacuateLimiter *evacuationLimiter
}
@ -212,12 +211,18 @@ func New(opts ...Option) *StorageEngine {
opts[i](c)
}
evLimMtx := &sync.RWMutex{}
evLimCond := sync.NewCond(evLimMtx)
return &StorageEngine{
cfg: c,
shards: make(map[string]hashedShard),
closeCh: make(chan struct{}),
setModeCh: make(chan setModeRequest),
evacuateLimiter: &evacuationLimiter{},
cfg: c,
shards: make(map[string]hashedShard),
closeCh: make(chan struct{}),
setModeCh: make(chan setModeRequest),
evacuateLimiter: &evacuationLimiter{
guard: evLimMtx,
statusCond: evLimCond,
},
}
}

View file

@ -139,7 +139,8 @@ type evacuationLimiter struct {
eg *errgroup.Group
cancel context.CancelFunc
guard sync.RWMutex
guard *sync.RWMutex
statusCond *sync.Cond // used in unit tests
}
func (l *evacuationLimiter) TryStart(ctx context.Context, shardIDs []string, result *EvacuateShardRes) (*errgroup.Group, context.Context, error) {
@ -165,6 +166,7 @@ func (l *evacuationLimiter) TryStart(ctx context.Context, shardIDs []string, res
startedAt: time.Now().UTC(),
result: result,
}
l.statusCond.Broadcast()
return l.eg, egCtx, nil
}
@ -180,6 +182,7 @@ func (l *evacuationLimiter) Complete(err error) {
l.state.processState = EvacuateProcessStateCompleted
l.state.errMessage = errMsq
l.state.finishedAt = time.Now().UTC()
l.statusCond.Broadcast()
l.eg = nil
}
@ -214,6 +217,7 @@ func (l *evacuationLimiter) ResetEvacuationStatus() error {
l.state = EvacuationState{}
l.eg = nil
l.cancel = nil
l.statusCond.Broadcast()
return nil
}

View file

@ -204,11 +204,10 @@ func TestEvacuateShardObjects(t *testing.T) {
func testWaitForEvacuationCompleted(t *testing.T, e *StorageEngine) *EvacuationState {
var st *EvacuationState
var err error
require.Eventually(t, func() bool {
st, err = e.GetEvacuationState(context.Background())
require.NoError(t, err)
return st.ProcessingStatus() == EvacuateProcessStateCompleted
}, 6*time.Second, 10*time.Millisecond)
e.evacuateLimiter.waitForCompleted()
st, err = e.GetEvacuationState(context.Background())
require.NoError(t, err)
require.Equal(t, EvacuateProcessStateCompleted, st.ProcessingStatus())
return st
}
@ -817,3 +816,12 @@ func TestEvacuateShardObjectsRepOneOnlyBench(t *testing.T) {
t.Logf("evacuate took %v\n", time.Since(start))
require.NoError(t, err)
}
func (l *evacuationLimiter) waitForCompleted() {
l.guard.Lock()
defer l.guard.Unlock()
for l.state.processState != EvacuateProcessStateCompleted {
l.statusCond.Wait()
}
}

View file

@ -7,6 +7,7 @@ import (
"slices"
"time"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/assert"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/internal/metaerr"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/util/logicerr"
"git.frostfs.info/TrueCloudLab/frostfs-observability/tracing"
@ -63,9 +64,7 @@ func (db *DB) Lock(ctx context.Context, cnr cid.ID, locker oid.ID, locked []oid.
return ErrReadOnlyMode
}
if len(locked) == 0 {
panic("empty locked list")
}
assert.False(len(locked) == 0, "empty locked list")
err := db.lockInternal(locked, cnr, locker)
success = err == nil
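This and several later hunks swap inline `if ... { panic(...) }` checks for the `internal/assert` helpers. The diff shows only the call sites, so the following is a minimal sketch of what such helpers could look like, assuming they simply panic with the joined message; the actual package may differ.
package assert
import "strings"
// True panics with the joined details when cond is false.
func True(cond bool, details ...string) {
	if !cond {
		panic(strings.Join(details, ": "))
	}
}
// False panics with the joined details when cond is true.
func False(cond bool, details ...string) {
	if cond {
		panic(strings.Join(details, ": "))
	}
}
// NoError panics with the details and the error text when err is non-nil.
func NoError(err error, details ...string) {
	if err != nil {
		panic(strings.Join(append(details, err.Error()), ": "))
	}
}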

View file

@ -6,6 +6,7 @@ import (
"errors"
"fmt"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/assert"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
objectSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
@ -278,9 +279,7 @@ func objectKey(obj oid.ID, key []byte) []byte {
//
// firstIrregularObjectType(tx, cnr, obj) usage allows getting object type.
func firstIrregularObjectType(tx *bbolt.Tx, idCnr cid.ID, objs ...[]byte) objectSDK.Type {
if len(objs) == 0 {
panic("empty object list in firstIrregularObjectType")
}
assert.False(len(objs) == 0, "empty object list in firstIrregularObjectType")
var keys [2][1 + cidSize]byte

View file

@ -363,6 +363,7 @@ func (s *Shard) refillTombstoneObject(ctx context.Context, obj *objectSDK.Object
// Close releases all Shard's components.
func (s *Shard) Close(ctx context.Context) error {
unlock := s.lockExclusive()
if s.rb != nil {
s.rb.Stop(ctx, s.log)
}
@ -388,15 +389,19 @@ func (s *Shard) Close(ctx context.Context) error {
}
}
if s.opsLimiter != nil {
s.opsLimiter.Close()
}
unlock()
// GC waits for handlers and remover to complete. Handlers may try to lock shard's lock.
// So to prevent deadlock GC stopping is outside of exclusive lock.
// If Init/Open was unsuccessful gc can be nil.
if s.gc != nil {
s.gc.stop(ctx)
}
if s.opsLimiter != nil {
s.opsLimiter.Close()
}
return lastErr
}

View file

@ -391,6 +391,16 @@ func (s *Shard) handleExpiredObjects(ctx context.Context, expired []oid.Address)
return
}
s.handleExpiredObjectsUnsafe(ctx, expired)
}
func (s *Shard) handleExpiredObjectsUnsafe(ctx context.Context, expired []oid.Address) {
select {
case <-ctx.Done():
return
default:
}
expired, err := s.getExpiredWithLinked(ctx, expired)
if err != nil {
s.log.Warn(ctx, logs.ShardGCFailedToGetExpiredWithLinked, zap.Error(err))
@ -611,13 +621,6 @@ func (s *Shard) getExpiredObjects(ctx context.Context, epoch uint64, onExpiredFo
}
func (s *Shard) selectExpired(ctx context.Context, epoch uint64, addresses []oid.Address) ([]oid.Address, error) {
s.m.RLock()
defer s.m.RUnlock()
if s.info.Mode.NoMetabase() {
return nil, ErrDegradedMode
}
release, err := s.opsLimiter.ReadRequest(ctx)
if err != nil {
return nil, err
@ -728,7 +731,7 @@ func (s *Shard) inhumeUnlockedIfExpired(ctx context.Context, epoch uint64, unloc
return
}
s.handleExpiredObjects(ctx, expiredUnlocked)
s.handleExpiredObjectsUnsafe(ctx, expiredUnlocked)
}
// HandleDeletedLocks unlocks all objects which were locked by lockers.
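The split into a locked entry point and an *Unsafe variant follows a common Go pattern: the entry point takes the shard lock itself, while the Unsafe variant assumes the caller already holds it, so code paths that already run under the shard lock can reuse the logic without re-acquiring `s.m`. A generic sketch of the pattern with hypothetical method names:
func (s *Shard) doWork(ctx context.Context, items []oid.Address) {
	s.m.RLock()
	defer s.m.RUnlock()
	s.doWorkUnsafe(ctx, items)
}
// doWorkUnsafe must be called only while the caller holds s.m.
func (s *Shard) doWorkUnsafe(ctx context.Context, items []oid.Address) {
	// actual processing goes here
}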

View file

@ -3,6 +3,8 @@ package writecache
import (
"errors"
"sync"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/assert"
)
var errLimiterClosed = errors.New("acquire failed: limiter closed")
@ -45,17 +47,11 @@ func (l *flushLimiter) release(size uint64) {
l.cond.L.Lock()
defer l.cond.L.Unlock()
if l.size >= size {
l.size -= size
} else {
panic("flushLimiter: invalid size")
}
assert.True(l.size >= size, "flushLimiter: invalid size")
l.size -= size
if l.count > 0 {
l.count--
} else {
panic("flushLimiter: invalid count")
}
assert.True(l.count > 0, "flushLimiter: invalid count")
l.count--
l.cond.Broadcast()
}

View file

@ -52,7 +52,7 @@ type Cache interface {
// MainStorage is the interface of the underlying storage of Cache implementations.
type MainStorage interface {
Compressor() *compression.Config
Compressor() *compression.Compressor
Exists(context.Context, common.ExistsPrm) (common.ExistsRes, error)
Put(context.Context, common.PutPrm) (common.PutRes, error)
}

View file

@ -2,11 +2,11 @@ package network
import (
"errors"
"fmt"
"net"
"net/url"
"strings"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/assert"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/rpc/client"
"github.com/multiformats/go-multiaddr"
manet "github.com/multiformats/go-multiaddr/net"
@ -44,11 +44,9 @@ func (a Address) equal(addr Address) bool {
// See also FromString.
func (a Address) URIAddr() string {
_, host, err := manet.DialArgs(a.ma)
if err != nil {
// the only correct way to construct Address is AddressFromString
// which makes this error appear unexpected
panic(fmt.Errorf("could not get host addr: %w", err))
}
// the only correct way to construct Address is AddressFromString
// which makes this error appear unexpected
assert.NoError(err, "could not get host addr")
if !a.IsTLSEnabled() {
return host

View file

@ -3,6 +3,7 @@ package network
import (
"errors"
"fmt"
"iter"
"slices"
"sort"
@ -68,9 +69,8 @@ func (x AddressGroup) Swap(i, j int) {
// MultiAddressIterator is an interface of network address group.
type MultiAddressIterator interface {
// IterateAddresses must iterate over network addresses and pass each one
// to the handler until it returns true.
IterateAddresses(func(string) bool)
// Addresses must return an iterator over network addresses.
Addresses() iter.Seq[string]
// NumberOfAddresses must return number of addresses in group.
NumberOfAddresses() int
@ -131,19 +131,19 @@ func (x *AddressGroup) FromIterator(iter MultiAddressIterator) error {
// iterateParsedAddresses parses each address from MultiAddressIterator and passes it to f
// until 1st parsing failure or f's error.
func iterateParsedAddresses(iter MultiAddressIterator, f func(s Address) error) (err error) {
iter.IterateAddresses(func(s string) bool {
for s := range iter.Addresses() {
var a Address
err = a.FromString(s)
if err != nil {
err = fmt.Errorf("could not parse address from string: %w", err)
return true
return fmt.Errorf("could not parse address from string: %w", err)
}
err = f(a)
return err != nil
})
if err != nil {
return err
}
}
return
}
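With the interface returning `iter.Seq[string]`, an implementation can come straight from a slice (as `slices.Values` does in the test below) or be hand-rolled as a range-over-func iterator. A minimal sketch of the latter, using a hypothetical `staticAddrs` type purely for illustration:
type staticAddrs []string
// Addresses yields each stored address until the consumer stops ranging.
func (s staticAddrs) Addresses() iter.Seq[string] {
	return func(yield func(string) bool) {
		for _, a := range s {
			if !yield(a) {
				return // the caller broke out of the range loop
			}
		}
	}
}
func (s staticAddrs) NumberOfAddresses() int { return len(s) }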

View file

@ -1,6 +1,8 @@
package network
import (
"iter"
"slices"
"sort"
"testing"
@ -58,10 +60,8 @@ func TestAddressGroup_FromIterator(t *testing.T) {
type testIterator []string
func (t testIterator) IterateAddresses(f func(string) bool) {
for i := range t {
f(t[i])
}
func (t testIterator) Addresses() iter.Seq[string] {
return slices.Values(t)
}
func (t testIterator) NumberOfAddresses() int {

View file

@ -2,6 +2,7 @@ package network
import (
"errors"
"iter"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/netmap"
)
@ -34,8 +35,8 @@ var (
// MultiAddressIterator.
type NodeEndpointsIterator netmap.NodeInfo
func (x NodeEndpointsIterator) IterateAddresses(f func(string) bool) {
(netmap.NodeInfo)(x).IterateNetworkEndpoints(f)
func (x NodeEndpointsIterator) Addresses() iter.Seq[string] {
return (netmap.NodeInfo)(x).NetworkEndpoints()
}
func (x NodeEndpointsIterator) NumberOfAddresses() int {

View file

@ -5,6 +5,7 @@ import (
"errors"
"fmt"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/assert"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/core/version"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/util/response"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/netmap"
@ -46,10 +47,12 @@ type NetworkInfo interface {
}
func NewExecutionService(s NodeState, v versionsdk.Version, netInfo NetworkInfo, respSvc *response.Service) Server {
if s == nil || netInfo == nil || !version.IsValid(v) || respSvc == nil {
// this should never happen, otherwise it programmers bug
panic("can't create netmap execution service")
}
// this should never happen, otherwise it's a programmer's bug
msg := "BUG: can't create netmap execution service"
assert.False(s == nil, msg, "node state is nil")
assert.False(netInfo == nil, msg, "network info is nil")
assert.False(respSvc == nil, msg, "response service is nil")
assert.True(version.IsValid(v), msg, "invalid version")
res := &executorSvc{
state: s,

View file

@ -95,6 +95,10 @@ func (x errIncompletePut) Error() string {
return commonMsg
}
func (x errIncompletePut) Unwrap() error {
return x.singleErr
}
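Exposing `Unwrap` makes the wrapped cause visible to `errors.Is`/`errors.As`, so callers can match on the underlying error instead of only the incomplete-put wrapper. A rough illustration, assuming it runs inside the same package (the type is unexported) and using `context.DeadlineExceeded` purely as an example cause:
err := errIncompletePut{singleErr: context.DeadlineExceeded}
if errors.Is(err, context.DeadlineExceeded) {
	// the original cause is now reachable through the wrapper
}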
// WriteObject implements the transformer.ObjectWriter interface.
func (t *distributedWriter) WriteObject(ctx context.Context, obj *objectSDK.Object) error {
t.obj = obj

View file

@ -96,7 +96,8 @@ func (s *putStreamSigner) CloseAndRecv(ctx context.Context) (resp *object.PutRes
} else {
resp, err = s.stream.CloseAndRecv(ctx)
if err != nil {
return nil, fmt.Errorf("could not close stream and receive response: %w", err)
err = fmt.Errorf("could not close stream and receive response: %w", err)
resp = new(object.PutResponse)
}
}
@ -132,7 +133,8 @@ func (s *patchStreamSigner) CloseAndRecv(ctx context.Context) (resp *object.Patc
} else {
resp, err = s.stream.CloseAndRecv(ctx)
if err != nil {
return nil, fmt.Errorf("could not close stream and receive response: %w", err)
err = fmt.Errorf("could not close stream and receive response: %w", err)
resp = new(object.PatchResponse)
}
}

View file

@ -2,24 +2,90 @@ package placement
import (
"errors"
"fmt"
"maps"
"math"
"strings"
"sync"
"sync/atomic"
locodedb "git.frostfs.info/TrueCloudLab/frostfs-locode-db/pkg/locode/db"
locodebolt "git.frostfs.info/TrueCloudLab/frostfs-locode-db/pkg/locode/db/boltdb"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/netmap"
)
const (
attrPrefix = "$attribute:"
geoDistance = "$geoDistance"
)
type Metric interface {
CalculateValue(*netmap.NodeInfo, *netmap.NodeInfo) int
}
func ParseMetric(raw string) (Metric, error) {
if attr, found := strings.CutPrefix(raw, attrPrefix); found {
return NewAttributeMetric(attr), nil
type metricsParser struct {
locodeDBPath string
locodes map[string]locodedb.Point
}
type MetricParser interface {
ParseMetrics([]string) ([]Metric, error)
}
func NewMetricsParser(locodeDBPath string) (MetricParser, error) {
return &metricsParser{
locodeDBPath: locodeDBPath,
}, nil
}
func (p *metricsParser) initLocodes() error {
if len(p.locodes) != 0 {
return nil
}
return nil, errors.New("unsupported priority metric")
if len(p.locodeDBPath) > 0 {
p.locodes = make(map[string]locodedb.Point)
locodeDB := locodebolt.New(locodebolt.Prm{
Path: p.locodeDBPath,
},
locodebolt.ReadOnly(),
)
err := locodeDB.Open()
if err != nil {
return err
}
defer locodeDB.Close()
err = locodeDB.IterateOverLocodes(func(k string, v locodedb.Point) {
p.locodes[k] = v
})
if err != nil {
return err
}
return nil
}
return errors.New("set path to locode database")
}
func (p *metricsParser) ParseMetrics(priority []string) ([]Metric, error) {
var metrics []Metric
for _, raw := range priority {
if attr, found := strings.CutPrefix(raw, attrPrefix); found {
metrics = append(metrics, NewAttributeMetric(attr))
} else if raw == geoDistance {
err := p.initLocodes()
if err != nil {
return nil, err
}
if len(p.locodes) == 0 {
return nil, fmt.Errorf("provide locodes database for metric %s", raw)
}
m := NewGeoDistanceMetric(p.locodes)
metrics = append(metrics, m)
} else {
return nil, fmt.Errorf("unsupported priority metric %s", raw)
}
}
return metrics, nil
}
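Putting the parser together, the expected call sequence, mirroring the traverser test further below, looks roughly like this; the database path is a placeholder and `ClusterName` is an arbitrary attribute chosen for illustration:
parser, err := NewMetricsParser("/path/to/locode_db")
if err != nil {
	return err
}
metrics, err := parser.ParseMetrics([]string{"$attribute:ClusterName", "$geoDistance"})
if err != nil {
	return err
}
// metrics can then be passed to the placement traverser via WithPriorityMetrics(metrics).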
// attributeMetric describes priority metric based on attribute.
@ -41,3 +107,79 @@ func (am *attributeMetric) CalculateValue(from *netmap.NodeInfo, to *netmap.Node
func NewAttributeMetric(attr string) Metric {
return &attributeMetric{attribute: attr}
}
// geoDistanceMetric describes a priority metric based on the geographic distance between nodes.
type geoDistanceMetric struct {
locodes map[string]locodedb.Point
distance *atomic.Pointer[map[string]int]
mtx sync.Mutex
}
func NewGeoDistanceMetric(locodes map[string]locodedb.Point) Metric {
d := atomic.Pointer[map[string]int]{}
m := make(map[string]int)
d.Store(&m)
gm := &geoDistanceMetric{
locodes: locodes,
distance: &d,
}
return gm
}
// CalculateValue returns the distance in kilometers between the current node and the provided one
// if coordinates are known for both nodes. Otherwise it returns math.MaxInt.
func (gm *geoDistanceMetric) CalculateValue(from *netmap.NodeInfo, to *netmap.NodeInfo) int {
fl := from.LOCODE()
tl := to.LOCODE()
if fl == tl {
return 0
}
m := gm.distance.Load()
if v, ok := (*m)[fl+tl]; ok {
return v
}
return gm.calculateDistance(fl, tl)
}
func (gm *geoDistanceMetric) calculateDistance(from, to string) int {
gm.mtx.Lock()
defer gm.mtx.Unlock()
od := gm.distance.Load()
if v, ok := (*od)[from+to]; ok {
return v
}
nd := maps.Clone(*od)
var dist int
pointFrom, okFrom := gm.locodes[from]
pointTo, okTo := gm.locodes[to]
if okFrom && okTo {
dist = int(distance(pointFrom.Latitude(), pointFrom.Longitude(), pointTo.Latitude(), pointTo.Longitude()))
} else {
dist = math.MaxInt
}
nd[from+to] = dist
gm.distance.Store(&nd)
return dist
}
// distance returns the distance in kilometers between two points.
// Parameters are the latitude and longitude of points 1 and 2 in decimal degrees.
// The original implementation can be found at https://www.geodatasource.com/developers/go.
func distance(lt1 float64, ln1 float64, lt2 float64, ln2 float64) float64 {
radLat1 := math.Pi * lt1 / 180
radLat2 := math.Pi * lt2 / 180
radTheta := math.Pi * (ln1 - ln2) / 180
dist := math.Sin(radLat1)*math.Sin(radLat2) + math.Cos(radLat1)*math.Cos(radLat2)*math.Cos(radTheta)
if dist > 1 {
dist = 1
}
dist = math.Acos(dist)
dist = dist * 180 / math.Pi
dist = dist * 60 * 1.1515 * 1.609344
return dist
}
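The conversion chain in `distance` is the spherical law of cosines: the central angle in degrees is multiplied by 60 to get minutes of arc (roughly nautical miles), by 1.1515 to convert to statute miles, and by 1.609344 to convert miles to kilometers. A small usage sketch inside the same package (`distance` is unexported), with rounded coordinates for Moscow and Saint Petersburg used only for illustration:
// ~55.75N 37.62E (Moscow) to ~59.93N 30.33E (Saint Petersburg)
km := distance(55.75, 37.62, 59.93, 30.33)
fmt.Printf("distance: %.0f km\n", km) // roughly 630-640 km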

View file

@ -601,4 +601,53 @@ func TestTraverserPriorityMetrics(t *testing.T) {
next = tr.Next()
require.Nil(t, next)
})
t.Run("one rep one geo metric", func(t *testing.T) {
t.Skip()
selectors := []int{2}
replicas := []int{2}
nodes, cnr := testPlacement(selectors, replicas)
// Node_0, PK - ip4/0.0.0.0/tcp/0
nodes[0][0].SetAttribute("UN-LOCODE", "RU MOW")
// Node_1, PK - ip4/0.0.0.0/tcp/1
nodes[0][1].SetAttribute("UN-LOCODE", "RU LED")
sdkNode := testNode(2)
sdkNode.SetAttribute("UN-LOCODE", "FI HEL")
nodesCopy := copyVectors(nodes)
parser, err := NewMetricsParser("/path/to/locode_db")
require.NoError(t, err)
m, err := parser.ParseMetrics([]string{geoDistance})
require.NoError(t, err)
tr, err := NewTraverser(context.Background(),
ForContainer(cnr),
UseBuilder(&testBuilder{
vectors: nodesCopy,
}),
WithoutSuccessTracking(),
WithPriorityMetrics(m),
WithNodeState(&nodeState{
node: &sdkNode,
}),
)
require.NoError(t, err)
// Without priority metric `$geoDistance` the order will be:
// [ {Node_0 RU MOW}, {Node_1 RU LED}]
// With priority metric `$geoDistance` the order should be:
// [ {Node_1 RU LED}, {Node_0 RU MOW}]
next := tr.Next()
require.NotNil(t, next)
require.Equal(t, 2, len(next))
require.Equal(t, "/ip4/0.0.0.0/tcp/1", string(next[0].PublicKey()))
require.Equal(t, "/ip4/0.0.0.0/tcp/0", string(next[1].PublicKey()))
next = tr.Next()
require.Nil(t, next)
})
}

View file

@ -3,6 +3,7 @@ package tombstone
import (
"fmt"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/assert"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/logger"
lru "github.com/hashicorp/golang-lru/v2"
"go.uber.org/zap"
@ -49,9 +50,7 @@ func NewChecker(oo ...Option) *ExpirationChecker {
panicOnNil(cfg.tsSource, "Tombstone source")
cache, err := lru.New[string, uint64](cfg.cacheSize)
if err != nil {
panic(fmt.Errorf("could not create LRU cache with %d size: %w", cfg.cacheSize, err))
}
assert.NoError(err, fmt.Sprintf("could not create LRU cache with %d size", cfg.cacheSize))
return &ExpirationChecker{
cache: cache,

View file

@ -4,6 +4,7 @@ import (
"context"
"fmt"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/assert"
getsvc "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/object/get"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/object/util"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client"
@ -38,9 +39,7 @@ func (s *TombstoneSourcePrm) SetGetService(v *getsvc.Service) {
// Panics if any of the provided options does not allow
// constructing a valid tombstone local Source.
func NewSource(p TombstoneSourcePrm) Source {
if p.s == nil {
panic("Tombstone source: nil object service")
}
assert.False(p.s == nil, "Tombstone source: nil object service")
return Source(p)
}

View file

@ -1,9 +1,11 @@
package policer
import (
"fmt"
"sync"
"time"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/assert"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
lru "github.com/hashicorp/golang-lru/v2"
"go.uber.org/zap"
@ -57,9 +59,7 @@ func New(opts ...Option) *Policer {
c.log = c.log.With(zap.String("component", "Object Policer"))
cache, err := lru.New[oid.Address, time.Time](int(c.cacheSize))
if err != nil {
panic(err)
}
assert.NoError(err, fmt.Sprintf("could not create LRU cache with %d size", c.cacheSize))
return &Policer{
cfg: c,

View file

@ -3,6 +3,7 @@ package replicator
import (
"context"
"errors"
"slices"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs"
containerCore "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/core/container"
@ -42,11 +43,7 @@ func (p *Replicator) HandlePullTask(ctx context.Context, task Task) {
if err == nil {
break
}
var endpoints []string
node.IterateNetworkEndpoints(func(s string) bool {
endpoints = append(endpoints, s)
return false
})
endpoints := slices.Collect(node.NetworkEndpoints())
p.log.Error(ctx, logs.ReplicatorCouldNotGetObjectFromRemoteStorage,
zap.Stringer("object", task.Addr),
zap.Error(err),

View file

@ -41,24 +41,15 @@ func (s *Service) forEachNode(ctx context.Context, cntNodes []netmapSDK.NodeInfo
var called bool
for _, n := range cntNodes {
var stop bool
n.IterateNetworkEndpoints(func(endpoint string) bool {
ctx, span := tracing.StartSpanFromContext(ctx, "TreeService.IterateNetworkEndpoints",
trace.WithAttributes(
attribute.String("endpoint", endpoint),
))
defer span.End()
c, err := s.cache.get(ctx, endpoint)
if err != nil {
return false
for endpoint := range n.NetworkEndpoints() {
stop = s.execOnClient(ctx, endpoint, func(c TreeServiceClient) bool {
called = true
return f(c)
})
if called {
break
}
s.log.Debug(ctx, logs.TreeRedirectingTreeServiceQuery, zap.String("endpoint", endpoint))
called = true
stop = f(c)
return true
})
}
if stop {
return nil
}
@ -68,3 +59,19 @@ func (s *Service) forEachNode(ctx context.Context, cntNodes []netmapSDK.NodeInfo
}
return nil
}
func (s *Service) execOnClient(ctx context.Context, endpoint string, f func(TreeServiceClient) bool) bool {
ctx, span := tracing.StartSpanFromContext(ctx, "TreeService.IterateNetworkEndpoints",
trace.WithAttributes(
attribute.String("endpoint", endpoint),
))
defer span.End()
c, err := s.cache.get(ctx, endpoint)
if err != nil {
return false
}
s.log.Debug(ctx, logs.TreeRedirectingTreeServiceQuery, zap.String("endpoint", endpoint))
return f(c)
}

View file

@ -89,29 +89,13 @@ func (s *Service) ReplicateTreeOp(ctx context.Context, n netmapSDK.NodeInfo, req
var lastErr error
var lastAddr string
n.IterateNetworkEndpoints(func(addr string) bool {
ctx, span := tracing.StartSpanFromContext(ctx, "TreeService.HandleReplicationTaskOnEndpoint",
trace.WithAttributes(
attribute.String("public_key", hex.EncodeToString(n.PublicKey())),
attribute.String("address", addr),
),
)
defer span.End()
for addr := range n.NetworkEndpoints() {
lastAddr = addr
c, err := s.cache.get(ctx, addr)
if err != nil {
lastErr = fmt.Errorf("can't create client: %w", err)
return false
lastErr = s.apply(ctx, n, addr, req)
if lastErr == nil {
break
}
ctx, cancel := context.WithTimeout(ctx, s.replicatorTimeout)
_, lastErr = c.Apply(ctx, req)
cancel()
return lastErr == nil
})
}
if lastErr != nil {
if errors.Is(lastErr, errRecentlyFailed) {
@ -130,6 +114,26 @@ func (s *Service) ReplicateTreeOp(ctx context.Context, n netmapSDK.NodeInfo, req
return nil
}
func (s *Service) apply(ctx context.Context, n netmapSDK.NodeInfo, addr string, req *ApplyRequest) error {
ctx, span := tracing.StartSpanFromContext(ctx, "TreeService.HandleReplicationTaskOnEndpoint",
trace.WithAttributes(
attribute.String("public_key", hex.EncodeToString(n.PublicKey())),
attribute.String("address", addr),
),
)
defer span.End()
c, err := s.cache.get(ctx, addr)
if err != nil {
return fmt.Errorf("can't create client: %w", err)
}
ctx, cancel := context.WithTimeout(ctx, s.replicatorTimeout)
_, err = c.Apply(ctx, req)
cancel()
return err
}
func (s *Service) replicateLoop(ctx context.Context) {
for range s.replicatorWorkerCount {
go s.replicationWorker(ctx)

View file

@ -297,27 +297,27 @@ func (s *Service) synchronizeTree(ctx context.Context, cid cid.ID, from uint64,
for i, n := range nodes {
errGroup.Go(func() error {
var nodeSynced bool
n.IterateNetworkEndpoints(func(addr string) bool {
for addr := range n.NetworkEndpoints() {
var a network.Address
if err := a.FromString(addr); err != nil {
s.log.Warn(ctx, logs.TreeFailedToParseAddressForTreeSynchronization, zap.Error(err), zap.String("address", addr))
return false
continue
}
cc, err := createConnection(a, grpc.WithContextDialer(s.ds.GrpcContextDialer()))
if err != nil {
s.log.Warn(ctx, logs.TreeFailedToConnectForTreeSynchronization, zap.Error(err), zap.String("address", addr))
return false
continue
}
defer cc.Close()
err = s.startStream(egCtx, cid, treeID, from, cc, nodeOperationStreams[i])
if err != nil {
s.log.Warn(ctx, logs.TreeFailedToRunTreeSynchronizationForSpecificNode, zap.Error(err), zap.String("address", addr))
}
nodeSynced = err == nil
return true
})
_ = cc.Close()
break
}
close(nodeOperationStreams[i])
if !nodeSynced {
allNodesSynced.Store(false)

View file

@ -23,12 +23,12 @@ func testAttributeMap(t *testing.T, mSrc, mExp map[string]string) {
mExp = mSrc
}
node.IterateAttributes(func(key, value string) {
for key, value := range node.Attributes() {
v, ok := mExp[key]
require.True(t, ok)
require.Equal(t, value, v)
delete(mExp, key)
})
}
require.Empty(t, mExp)
}

View file

@ -6,6 +6,7 @@ import (
"os"
"text/tabwriter"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/assert"
"github.com/mr-tron/base58"
"github.com/nspcc-dev/neo-go/pkg/crypto/hash"
"github.com/nspcc-dev/neo-go/pkg/crypto/keys"
@ -104,9 +105,7 @@ func (d Dashboard) PrettyPrint(uncompressed, useHex bool) {
func base58ToHex(data string) string {
val, err := base58.Decode(data)
if err != nil {
panic("produced incorrect base58 value")
}
assert.NoError(err, "produced incorrect base58 value")
return hex.EncodeToString(val)
}

View file

@ -13,8 +13,10 @@ import (
// Logger represents a component
// for writing messages to log.
type Logger struct {
z *zap.Logger
lvl zap.AtomicLevel
z *zap.Logger
c zapcore.Core
t Tag
w bool
}
// Prm groups Logger's parameters.
@ -39,6 +41,9 @@ type Prm struct {
// Options for zap.Logger
Options []zap.Option
// tl maps a tag to its log level and overrides lvl for that tag
tl map[Tag]zapcore.Level
}
const (
@ -68,6 +73,12 @@ func (p *Prm) SetDestination(d string) error {
return nil
}
// SetTags parses the list of tags with their log levels.
func (p *Prm) SetTags(tags [][]string) (err error) {
p.tl, err = parseTags(tags)
return err
}
// NewLogger constructs a new zap logger instance. Constructing with nil
// parameters is safe: default values will be used then.
// Passing non-nil parameters after a successful creation (non-error) allows
@ -91,10 +102,8 @@ func NewLogger(prm Prm) (*Logger, error) {
}
func newConsoleLogger(prm Prm) (*Logger, error) {
lvl := zap.NewAtomicLevelAt(prm.level)
c := zap.NewProductionConfig()
c.Level = lvl
c.Level = zap.NewAtomicLevelAt(zap.DebugLevel)
c.Encoding = "console"
if prm.SamplingHook != nil {
c.Sampling.Hook = prm.SamplingHook
@ -115,15 +124,13 @@ func newConsoleLogger(prm Prm) (*Logger, error) {
if err != nil {
return nil, err
}
l := &Logger{z: lZap, lvl: lvl}
l := &Logger{z: lZap, c: lZap.Core()}
l = l.WithTag(TagMain)
return l, nil
}
func newJournaldLogger(prm Prm) (*Logger, error) {
lvl := zap.NewAtomicLevelAt(prm.level)
c := zap.NewProductionConfig()
if prm.SamplingHook != nil {
c.Sampling.Hook = prm.SamplingHook
@ -137,7 +144,7 @@ func newJournaldLogger(prm Prm) (*Logger, error) {
encoder := zapjournald.NewPartialEncoder(zapcore.NewConsoleEncoder(c.EncoderConfig), zapjournald.SyslogFields)
core := zapjournald.NewCore(lvl, encoder, &journald.Journal{}, zapjournald.SyslogFields)
core := zapjournald.NewCore(zap.NewAtomicLevelAt(zap.DebugLevel), encoder, &journald.Journal{}, zapjournald.SyslogFields)
coreWithContext := core.With([]zapcore.Field{
zapjournald.SyslogFacility(zapjournald.LogDaemon),
zapjournald.SyslogIdentifier(),
@ -161,22 +168,75 @@ func newJournaldLogger(prm Prm) (*Logger, error) {
}
opts = append(opts, prm.Options...)
lZap := zap.New(samplingCore, opts...)
l := &Logger{z: lZap, lvl: lvl}
l := &Logger{z: lZap, c: lZap.Core()}
l = l.WithTag(TagMain)
return l, nil
}
func (l *Logger) Reload(prm Prm) {
l.lvl.SetLevel(prm.level)
// With creates a child logger with new fields without affecting the parent.
// Panics if the tag is unset.
func (l *Logger) With(fields ...zap.Field) *Logger {
if l.t == 0 {
panic("tag is unset")
}
c := *l
c.z = l.z.With(fields...)
// With called under the logger
c.w = true
return &c
}
func (l *Logger) With(fields ...zap.Field) *Logger {
return &Logger{z: l.z.With(fields...)}
type core struct {
c zapcore.Core
l zap.AtomicLevel
}
func (c *core) Enabled(lvl zapcore.Level) bool {
return c.l.Enabled(lvl)
}
func (c *core) With(fields []zapcore.Field) zapcore.Core {
clone := *c
clone.c = clone.c.With(fields)
return &clone
}
func (c *core) Check(e zapcore.Entry, ce *zapcore.CheckedEntry) *zapcore.CheckedEntry {
return c.c.Check(e, ce)
}
func (c *core) Write(e zapcore.Entry, fields []zapcore.Field) error {
return c.c.Write(e, fields)
}
func (c *core) Sync() error {
return c.c.Sync()
}
// WithTag is equivalent to calling [NewLogger] with the same parameters for the current logger.
// Panics if an unsupported tag is provided.
func (l *Logger) WithTag(tag Tag) *Logger {
if tag == 0 || tag > Tag(len(_Tag_index)-1) {
panic("unsupported tag " + tag.String())
}
if l.w {
panic("unsupported operation for the logger's state")
}
c := *l
c.t = tag
c.z = l.z.WithOptions(zap.WrapCore(func(zapcore.Core) zapcore.Core {
return &core{
c: l.c.With([]zap.Field{zap.String("tag", tag.String())}),
l: tagToLogLevel[tag],
}
}))
return &c
}
func NewLoggerWrapper(z *zap.Logger) *Logger {
return &Logger{
z: z.WithOptions(zap.AddCallerSkip(1)),
t: TagMain,
}
}
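As far as this diff shows, the intended flow is: fill `Prm`, build the logger, derive per-component children with `WithTag`, and retune per-tag levels at runtime via `UpdateLevelForTags` (defined in pkg/util/logger/tags.go below). A rough sketch, assuming the usual `NewLogger(prm) (*Logger, error)` constructor:
var prm Prm
if err := prm.SetTags([][]string{{"morph,ir", "debug"}}); err != nil {
	return err
}
log, err := NewLogger(prm) // the returned logger is already tagged with TagMain
if err != nil {
	return err
}
morphLog := log.WithTag(TagMorph) // child logger for the morph component
_ = morphLog
// A config reload can later adjust per-tag levels without recreating the loggers:
UpdateLevelForTags(prm)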

View file

@ -0,0 +1,28 @@
// Code generated by "stringer -type Tag -linecomment"; DO NOT EDIT.
package logger
import "strconv"
func _() {
// An "invalid array index" compiler error signifies that the constant values have changed.
// Re-run the stringer command to generate them again.
var x [1]struct{}
_ = x[TagMain-1]
_ = x[TagMorph-2]
_ = x[TagGrpcSvc-3]
_ = x[TagIr-4]
_ = x[TagProcessor-5]
}
const _Tag_name = "mainmorphgrpc_svcirprocessor"
var _Tag_index = [...]uint8{0, 4, 9, 17, 19, 28}
func (i Tag) String() string {
i -= 1
if i >= Tag(len(_Tag_index)-1) {
return "Tag(" + strconv.FormatInt(int64(i+1), 10) + ")"
}
return _Tag_name[_Tag_index[i]:_Tag_index[i+1]]
}

pkg/util/logger/tags.go (new file, 79 lines)
View file

@ -0,0 +1,79 @@
package logger
import (
"fmt"
"strings"
"go.uber.org/zap"
"go.uber.org/zap/zapcore"
)
//go:generate stringer -type Tag -linecomment
type Tag uint8
const (
_ Tag = iota //
TagMain // main
TagMorph // morph
TagGrpcSvc // grpc_svc
TagIr // ir
TagProcessor // processor
defaultLevel = zapcore.InfoLevel
)
var (
tagToLogLevel = map[Tag]zap.AtomicLevel{}
stringToTag = map[string]Tag{}
)
func init() {
for i := TagMain; i <= Tag(len(_Tag_index)-1); i++ {
tagToLogLevel[i] = zap.NewAtomicLevelAt(defaultLevel)
stringToTag[i.String()] = i
}
}
// parseTags returns:
// - a map (always instantiated) from tag to the custom log level for that tag;
// - an error if one occurred (the map is empty in that case).
func parseTags(raw [][]string) (map[Tag]zapcore.Level, error) {
m := make(map[Tag]zapcore.Level)
if len(raw) == 0 {
return m, nil
}
for _, item := range raw {
str, level := item[0], item[1]
if len(level) == 0 {
// It is not necessary to parse tags without level,
// because default log level will be used.
continue
}
var l zapcore.Level
err := l.UnmarshalText([]byte(level))
if err != nil {
return nil, err
}
tmp := strings.Split(str, ",")
for _, tagStr := range tmp {
tag, ok := stringToTag[strings.TrimSpace(tagStr)]
if !ok {
return nil, fmt.Errorf("unsupported tag %s", str)
}
m[tag] = l
}
}
return m, nil
}
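Each item of the raw slice pairs a comma-separated tag list with a level string; entries with an empty level are skipped so the default level applies. An illustrative call (not taken from the repository's configs):
m, err := parseTags([][]string{
	{"main,grpc_svc", "warn"}, // both tags get level warn
	{"processor", ""},         // empty level: keep the default for this tag
})
// on success m contains {TagMain: WarnLevel, TagGrpcSvc: WarnLevel}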
func UpdateLevelForTags(prm Prm) {
for k, v := range tagToLogLevel {
nk, ok := prm.tl[k]
if ok {
v.SetLevel(nk)
} else {
v.SetLevel(prm.level)
}
}
}