Compare commits

...

41 commits

Author SHA1 Message Date
bc0d61977d
[#1450] engine: Make Inhume handle objects in parallel
Add a worker pool for the `Inhume` operation and use it to handle objects
in parallel. Since the metabase `Inhume` uses `bbolt.Batch`, handling many
objects one by one may be inefficient.

Signed-off-by: Aleksey Savchuk <a.savchuk@yadro.com>
2024-11-13 18:23:21 +03:00
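The batching rationale: `bbolt.Batch` coalesces concurrent write transactions into one commit, so it only pays off when several calls are in flight at once. A minimal sketch of the fan-out, assuming an `ants`-style pool (the repo uses `github.com/panjf2000/ants/v2` elsewhere); the function and the per-object callback are illustrative, not the PR's actual code:

```go
package engine

import (
	"context"
	"sync"

	"github.com/panjf2000/ants/v2"
)

// inhumeParallel fans per-address work onto a worker pool so that the
// resulting concurrent metabase Inhume calls can be coalesced by
// bbolt.Batch into few transactions.
func inhumeParallel(ctx context.Context, pool *ants.Pool, addrs []string,
	inhumeOne func(context.Context, string) error,
) error {
	var (
		wg       sync.WaitGroup
		mtx      sync.Mutex
		firstErr error
	)
	setErr := func(err error) {
		mtx.Lock()
		if firstErr == nil {
			firstErr = err
		}
		mtx.Unlock()
	}
	for _, addr := range addrs {
		wg.Add(1)
		addr := addr
		if err := pool.Submit(func() {
			defer wg.Done()
			if err := inhumeOne(ctx, addr); err != nil {
				setErr(err)
			}
		}); err != nil {
			wg.Done() // the task was never queued
			setErr(err)
			break
		}
	}
	wg.Wait()
	return firstErr
}
```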
4f3e3c4aca
[#1450] engine: Add benchmark for Inhume operation
Signed-off-by: Aleksey Savchuk <a.savchuk@yadro.com>
2024-11-13 17:10:51 +03:00
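The general shape of such a benchmark — populate the engine outside the timed region, then measure only the inhume path (both helpers below are hypothetical placeholders, not the PR's actual code):

```go
package engine_test

import (
	"context"
	"testing"
)

func BenchmarkInhume(b *testing.B) {
	for i := 0; i < b.N; i++ {
		b.StopTimer()
		// setupEngineWithObjects is a hypothetical helper: it builds an
		// initialized engine and puts the given number of objects into it.
		e, addrs := setupEngineWithObjects(b, 1000)
		b.StartTimer()

		// inhumeAll is likewise hypothetical; the real benchmark calls the
		// engine's Inhume with its actual parameter types.
		if err := inhumeAll(context.Background(), e, addrs); err != nil {
			b.Fatal(err)
		}
	}
}
```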
7fc6101bec
[#1491] engine/test: Rework engine test utils
- Remove `testNewShard` and `setInitializedShards` because they
violated the default engine workflow. The correct workflow is:
first use `New()`, followed by `Open()`, and then `Init()`. As a
result, adding new logic to `(*StorageEngine).Init` caused several
tests to fail with a panic when attempting to access uninitialized
resources. Now, all engines created with the test utils must be
initialized manually. The new helper method `prepare` can be used
for that purpose.
- Additionally, `setInitializedShards` hardcoded the shard worker
pool size, which prevented it from being configured in tests and
benchmarks. This has been fixed as well.
- Ensure engine initialization is done wherever it was missing.
- Refactor `setShardsNumOpts`, `setShardsNumAdditionalOpts`, and
`setShardsNum`. Make them all depend on `setShardsNumOpts`.

Signed-off-by: Aleksey Savchuk <a.savchuk@yadro.com>
2024-11-13 14:42:53 +03:00
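A sketch of the kind of `prepare` helper described above, following the New → Open → Init workflow this commit enforces (the actual helper in the PR may differ in signature):

```go
package engine_test

import (
	"context"
	"testing"

	"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/engine"
	"github.com/stretchr/testify/require"
)

// prepare runs the full default workflow: New, then Open, then Init.
// Tests that skip Open/Init may panic once Init touches real resources.
func prepare(t testing.TB, opts ...engine.Option) *engine.StorageEngine {
	e := engine.New(opts...)
	require.NoError(t, e.Open(context.Background()))
	require.NoError(t, e.Init(context.Background()))
	t.Cleanup(func() {
		require.NoError(t, e.Close(context.Background()))
	})
	return e
}
```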
7ef36749d0
[#1491] engine/test: Move BenchmarkExists to exists_test.go
Move `BenchmarkExists` from `engine_test.go` to `exists_test.go`
for better organization and clarity.

Signed-off-by: Aleksey Savchuk <a.savchuk@yadro.com>
2024-11-13 14:09:29 +03:00
c6066d6ee4
[#1491] engine/test: Use more suitable testing utils here and there
Use `setShardsNum` instead of `setInitializedShards` wherever possible.

Signed-off-by: Aleksey Savchuk <a.savchuk@yadro.com>
2024-11-13 14:09:29 +03:00
612b34d570
[#1437] logger: Add caller skip to log original caller position
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-11-13 10:36:12 +03:00
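Why a caller skip is needed: when all logging goes through a wrapper type, zap records the wrapper method as the caller unless told to skip that frame. A sketch with an illustrative wrapper type (not the project's actual logger):

```go
package logging

import "go.uber.org/zap"

// Logger is an illustrative wrapper around zap.
type Logger struct {
	z *zap.Logger
}

func NewLogger(z *zap.Logger) *Logger {
	// Skip one stack frame so zap reports the original call site,
	// not (*Logger).Info below.
	return &Logger{z: z.WithOptions(zap.AddCallerSkip(1))}
}

func (l *Logger) Info(msg string, fields ...zap.Field) {
	l.z.Info(msg, fields...)
}
```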
7429553266
[#1437] node: Fix contextcheck linter
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-11-13 10:36:10 +03:00
6921a89061
[#1437] ir: Fix contextcheck linters
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-11-13 10:36:10 +03:00
16598553d9
[#1437] shard: Fix contextcheck linter
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-11-13 10:36:09 +03:00
c139892117
[#1437] ir: Fix contextcheck linter
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-11-13 10:36:09 +03:00
62b5181618
[#1437] blobovnicza: Fix contextcheck linter
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-11-13 10:36:08 +03:00
6db46257c0
[#1437] node: Use ctx for logging
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-11-13 10:36:07 +03:00
c16dae8b4d
[#1437] logger: Use context to log trace id
Signed-off-by: Dmitrii Stepanov
2024-11-13 10:36:07 +03:00
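One common way to derive a trace ID field from a context, assuming OpenTelemetry as the tracing backend (the project's actual helper may differ):

```go
package logging

import (
	"context"

	"go.opentelemetry.io/otel/trace"
	"go.uber.org/zap"
)

// traceIDField returns a zap field carrying the current trace ID, or a
// no-op field when the context has no span.
func traceIDField(ctx context.Context) zap.Field {
	sc := trace.SpanContextFromContext(ctx)
	if !sc.HasTraceID() {
		return zap.Skip()
	}
	return zap.String("trace_id", sc.TraceID().String())
}
```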
fd004add00 [#1492] metabase: Fix import formatting
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-13 07:30:25 +00:00
8ed7a676d5 [#1492] metabase: Ensure Unmarshal() is called on a cloned slice
The slice returned from bucket.Get() is only valid during the tx
lifetime. Cloning it is not necessary everywhere, but better safe than
sorry.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-13 07:30:25 +00:00
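The bbolt contract behind this: byte slices returned by `Bucket.Get` are only valid until the transaction ends, and `Unmarshal` implementations may retain sub-slices of their input. A sketch of the safe pattern using `bytes.Clone` (the record type and the not-found policy are illustrative):

```go
package meta

import (
	"bytes"

	"go.etcd.io/bbolt"
	"google.golang.org/protobuf/proto"
)

func readRecord(db *bbolt.DB, bucket, key []byte, rec proto.Message) error {
	return db.View(func(tx *bbolt.Tx) error {
		b := tx.Bucket(bucket)
		if b == nil {
			return bbolt.ErrBucketNotFound
		}
		data := b.Get(key)
		if data == nil {
			return nil // treat a missing key as an empty record (illustrative)
		}
		// Copy the tx-owned memory before it is invalidated at tx close.
		return proto.Unmarshal(bytes.Clone(data), rec)
	})
}
```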
b451de94c8 [#1492] metabase: Fix typo in objData
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-13 07:30:25 +00:00
f1556e3c42
[#1488] Makefile: Drop all containers created on env-up
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-11-12 17:24:44 +03:00
e122ff6013
[#1488] dev: Add Jaeger image and enable tracing on debug
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-11-12 17:24:44 +03:00
e2658c7519
[#1488] tracing: KV attributes for spans from config
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-11-12 17:24:43 +03:00
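The config shape this implies, judging by the parsing code in the diff further below (indexed `attributes` entries, each with a `key` and a non-empty `value`, duplicate keys rejected); the exact example file in the repo may differ:

```yaml
tracing:
  enabled: true
  exporter: "otlp_grpc"
  endpoint: "localhost"
  attributes:
    - key: cluster
      value: dev
    - key: az
      value: a1
```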
c00f4bab18
[#1488] go.mod: Bump observability version
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-11-12 17:24:43 +03:00
46fef276b4 [#1449] tree: Log tree sync with Info level
Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-11-12 12:11:07 +00:00
9bd05e94c8 [#1449] tree: Add ApplyBatch method
Concurrent `Apply` calls can apply a child node before its parent, which
triggers undo/redo operations. This degrades performance for trees with
many sublevels.

Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-11-12 12:11:07 +00:00
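A sketch of the batching idea: order the operations so parents precede children, then apply them under one critical section. All types and names below are illustrative, not the tree service's actual code:

```go
package tree

import (
	"context"
	"sort"
	"sync"
)

// Move is a simplified tree operation with a logical timestamp.
type Move struct {
	Time  uint64
	Child uint64
}

type Service struct {
	mtx sync.Mutex
}

// applyOne is a hypothetical stand-in for the service's single-op apply.
func (s *Service) applyOne(ctx context.Context, op *Move) error { return nil }

// ApplyBatch sorts operations by logical time and applies them under one
// critical section, so a child is not applied before its parent and the
// undo/redo churn of concurrent single-op Apply calls is avoided.
func (s *Service) ApplyBatch(ctx context.Context, ops []*Move) error {
	sort.Slice(ops, func(i, j int) bool { return ops[i].Time < ops[j].Time })

	s.mtx.Lock()
	defer s.mtx.Unlock()
	for _, op := range ops {
		if err := s.applyOne(ctx, op); err != nil {
			return err
		}
	}
	return nil
}
```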
16830033f8 [#1483] cli: Remove --basic-acl flag
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-12 12:10:51 +00:00
1cf51a8079 [#1483] cli/docs: Remove set-eacl mention
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-12 12:10:51 +00:00
3324c26fd8 [#1483] morph: Remove container.GetEACL()
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-12 12:10:51 +00:00
a692298533 [#1483] node: Remove eACL cache
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-12 12:10:51 +00:00
be2753de00 [#1490] docs: Update description for object.get.priority
Signed-off-by: Anton Nikiforov <an.nikiforov@yadro.com>
2024-11-12 13:35:13 +03:00
b543569c3f [#1486] node: Introduce dual service support
* Register gRPC services under both the neo.fs.v2 and frost.fs namespaces
* This is a temporary solution until all nodes are updated

Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2024-11-12 08:43:47 +00:00
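The mechanism, visible in the node diffs further below: the same server implementation is registered twice — once under the generated descriptor and once under a copy whose service name is rewritten from the legacy namespace to the new one. Reduced to its core (the helper name is illustrative):

```go
package grpcutil

import (
	"strings"

	"google.golang.org/grpc"
)

// registerDual registers one implementation under both the generated
// (legacy) service name and its frost.fs counterpart.
func registerDual(s *grpc.Server, sd grpc.ServiceDesc, impl any) {
	s.RegisterService(&sd, impl) // legacy neo.fs.v2 name

	renamed := sd // shallow copy of the generated descriptor
	renamed.ServiceName = strings.ReplaceAll(sd.ServiceName, "neo.fs.v2", "frost.fs")
	s.RegisterService(&renamed, impl) // new frost.fs name
}
```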
80f8a8fd3a
[#1396] cli/playground: Refactor
Signed-off-by: Alexander Chuprov <a.chuprov@yadro.com>
2024-11-12 11:09:27 +03:00
2f3bc6eb84
[#1396] cli/playground: Improve terminal control key handling
Signed-off-by: Alexander Chuprov <a.chuprov@yadro.com>
2024-11-12 11:07:48 +03:00
8a57c78f5f
[#1484] engine: Fix engine metrics
1. Add missing metrics for client requests
2. Include execIfNotBlocked in the metrics

Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-11-11 12:59:20 +03:00
ad01fb958a [#1474] Stop using obsolete .github directory
This commit is part of a multi-repo cleanup effort:
TrueCloudLab/frostfs-infra#136

Signed-off-by: Vitaliy Potyarkin <v.potyarkin@yadro.com>
2024-11-11 06:44:13 +00:00
d336f2d487 [#1393] adm: Make NewLocalActor receive account name
* Some contract RPC clients require different wallet account types. For
  instance, the `Policy` contract uses `consensus` accounts, while `NNS`
  uses `committee` accounts.

Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2024-11-08 13:57:51 +00:00
c82c753e9f [#1480] objsvc: Remove useless stream wrappers
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-08 12:01:14 +00:00
f666898e5d [#1480] objsvc: Remove EACL checks
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-08 12:01:14 +00:00
b1a31281e4 [#1480] ape: Remove SoftAPECheck flag
The previous release was EACL-compatible.
From now on, all EACLs should have been migrated to APE chains.

Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-08 12:01:14 +00:00
764450d04a [#1479] tree: Regenerate service protobufs
Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2024-11-08 10:43:44 +03:00
755cae3f19 [#1479] control: Regenerate protobufs for service
Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2024-11-08 10:43:44 +03:00
9b13a18aac [#1479] go.mod: Bump frostfs-sdk-go version
* Update the version in go.mod;
* Replace the deprecated frostfs-api-go/v2 package with frostfs-sdk-go/api.

Signed-off-by: Airat Arifullin <a.arifullin@yadro.com>
2024-11-08 10:43:19 +03:00
ef64930fef
[#1477] ape: Fix EC chunk test
Initially, this test checked that only the container node can assemble
an EC object, but the implementation of the test was wrong.

Signed-off-by: Dmitrii Stepanov <d.stepanov@yadro.com>
2024-11-07 16:00:20 +03:00
c8fb154151
[#1475] Remove container estimation code
Signed-off-by: Evgenii Stratonikov <e.stratonikov@yadro.com>
2024-11-06 15:32:33 +03:00
477 changed files with 3098 additions and 3783 deletions

View file

(binary image diff — size 5.5 KiB before and after)

View file

@ -11,7 +11,7 @@ GO_VERSION ?= 1.22
LINT_VERSION ?= 1.61.0
TRUECLOUDLAB_LINT_VERSION ?= 0.0.7
PROTOC_VERSION ?= 25.0
PROTOGEN_FROSTFS_VERSION ?= $(shell go list -f '{{.Version}}' -m git.frostfs.info/TrueCloudLab/frostfs-api-go/v2)
PROTOGEN_FROSTFS_VERSION ?= $(shell go list -f '{{.Version}}' -m git.frostfs.info/TrueCloudLab/frostfs-sdk-go)
PROTOC_OS_VERSION=osx-x86_64
ifeq ($(shell uname), Linux)
PROTOC_OS_VERSION=linux-x86_64
@ -121,7 +121,7 @@ protoc-install:
@unzip -q -o $(PROTOBUF_DIR)/protoc-$(PROTOC_VERSION).zip -d $(PROTOC_DIR)
@rm $(PROTOBUF_DIR)/protoc-$(PROTOC_VERSION).zip
@echo "⇒ Instaling protogen FrostFS plugin..."
@GOBIN=$(PROTOGEN_FROSTFS_DIR) go install -mod=mod -v git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/util/protogen@$(PROTOGEN_FROSTFS_VERSION)
@GOBIN=$(PROTOGEN_FROSTFS_DIR) go install -mod=mod -v git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/util/protogen@$(PROTOGEN_FROSTFS_VERSION)
# Build FrostFS component's docker image
image-%:
@ -282,7 +282,6 @@ env-up: all
# Shutdown dev environment
env-down:
docker compose -f dev/docker-compose.yml down
docker volume rm -f frostfs-node_neo-go
docker compose -f dev/docker-compose.yml down -v
rm -rf ./$(TMP_DIR)/state
rm -rf ./$(TMP_DIR)/storage

View file

@ -1,5 +1,5 @@
<p align="center">
<img src="./.github/logo.svg" width="500px" alt="FrostFS">
<img src="./.forgejo/logo.svg" width="500px" alt="FrostFS">
</p>
<p align="center">
@ -98,7 +98,7 @@ See `frostfs-contract`'s README.md for build instructions.
4. To create container and put object into it run (container and object IDs will be different):
```
./bin/frostfs-cli container create -r 127.0.0.1:8080 --wallet ./dev/wallet.json --policy "REP 1 IN X CBF 1 SELECT 1 FROM * AS X" --basic-acl public-read-write --await
./bin/frostfs-cli container create -r 127.0.0.1:8080 --wallet ./dev/wallet.json --policy "REP 1 IN X CBF 1 SELECT 1 FROM * AS X" --await
Enter password > <- press ENTER, the is no password for wallet
CID: CfPhEuHQ2PRvM4gfBQDC4dWZY3NccovyfcnEdiq2ixju

View file

@ -139,7 +139,7 @@ func newPolicyContractInterface(cmd *cobra.Command) (*morph.ContractStorage, *he
c, err := helper.GetN3Client(viper.GetViper())
commonCmd.ExitOnErr(cmd, "unable to create NEO rpc client: %w", err)
ac, err := helper.NewLocalActor(cmd, c)
ac, err := helper.NewLocalActor(cmd, c, constants.ConsensusAccountName)
commonCmd.ExitOnErr(cmd, "can't create actor: %w", err)
var ch util.Uint160

View file

@ -5,7 +5,6 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/modules/config"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/modules/morph/constants"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"github.com/google/uuid"
"github.com/nspcc-dev/neo-go/pkg/core/state"
@ -31,7 +30,11 @@ type LocalActor struct {
// NewLocalActor create LocalActor with accounts form provided wallets.
// In case of empty wallets provided created actor with dummy account only for read operation.
func NewLocalActor(cmd *cobra.Command, c actor.RPCActor) (*LocalActor, error) {
//
// If wallets are provided, the contract client will use accounts with accName name from these wallets.
// To determine which account name should be used in a contract client, refer to how the contract
// verifies the transaction signature.
func NewLocalActor(cmd *cobra.Command, c actor.RPCActor, accName string) (*LocalActor, error) {
walletDir := config.ResolveHomePath(viper.GetString(commonflags.AlphabetWalletsFlag))
var act *actor.Actor
var accounts []*wallet.Account
@ -53,8 +56,8 @@ func NewLocalActor(cmd *cobra.Command, c actor.RPCActor) (*LocalActor, error) {
commonCmd.ExitOnErr(cmd, "unable to get alphabet wallets: %w", err)
for _, w := range wallets {
acc, err := GetWalletAccount(w, constants.CommitteeAccountName)
commonCmd.ExitOnErr(cmd, "can't find committee account: %w", err)
acc, err := GetWalletAccount(w, accName)
commonCmd.ExitOnErr(cmd, fmt.Sprintf("can't find %s account: %%w", accName), err)
accounts = append(accounts, acc)
}
act, err = actor.New(c, []actor.SignerAccount{{

View file

@ -2,6 +2,7 @@ package nns
import (
client "git.frostfs.info/TrueCloudLab/frostfs-contract/rpcclient/nns"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/modules/morph/constants"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-adm/internal/modules/morph/helper"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"github.com/nspcc-dev/neo-go/pkg/rpcclient/management"
@ -15,7 +16,7 @@ func getRPCClient(cmd *cobra.Command) (*client.Contract, *helper.LocalActor, uti
c, err := helper.GetN3Client(v)
commonCmd.ExitOnErr(cmd, "unable to create NEO rpc client: %w", err)
ac, err := helper.NewLocalActor(cmd, c)
ac, err := helper.NewLocalActor(cmd, c, constants.CommitteeAccountName)
commonCmd.ExitOnErr(cmd, "can't create actor: %w", err)
r := management.NewReader(ac.Invoker)

View file

@ -72,4 +72,3 @@ All other `object` sub-commands support only static sessions (2).
List of commands supporting sessions (static only):
- `create`
- `delete`
- `set-eacl`

View file

@ -7,22 +7,20 @@ import (
"strings"
"time"
containerApi "git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/container"
internalclient "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
containerApi "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/container"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/acl"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/netmap"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/user"
"github.com/spf13/cobra"
)
var (
containerACL string
containerPolicy string
containerAttributes []string
containerAwait bool
@ -89,9 +87,6 @@ It will be stored in sidechain when inner ring will accepts it.`,
err = parseAttributes(&cnr, containerAttributes)
commonCmd.ExitOnErr(cmd, "", err)
var basicACL acl.Basic
commonCmd.ExitOnErr(cmd, "decode basic ACL string: %w", basicACL.DecodeString(containerACL))
tok := getSession(cmd)
if tok != nil {
@ -105,7 +100,6 @@ It will be stored in sidechain when inner ring will accepts it.`,
}
cnr.SetPlacementPolicy(*placementPolicy)
cnr.SetBasicACL(basicACL)
var syncContainerPrm internalclient.SyncContainerPrm
syncContainerPrm.SetClient(cli)
@ -163,10 +157,6 @@ func initContainerCreateCmd() {
flags.DurationP(commonflags.Timeout, commonflags.TimeoutShorthand, commonflags.TimeoutDefault, commonflags.TimeoutUsage)
flags.StringP(commonflags.WalletPath, commonflags.WalletPathShorthand, commonflags.WalletPathDefault, commonflags.WalletPathUsage)
flags.StringP(commonflags.Account, commonflags.AccountShorthand, commonflags.AccountDefault, commonflags.AccountUsage)
flags.StringVar(&containerACL, "basic-acl", acl.NamePrivate, fmt.Sprintf("HEX encoded basic ACL value or keywords like '%s', '%s', '%s'",
acl.NamePublicRW, acl.NamePrivate, acl.NamePublicROExtended,
))
flags.StringVarP(&containerPolicy, "policy", "p", "", "QL-encoded or JSON-encoded placement policy or path to file with it")
flags.StringSliceVarP(&containerAttributes, "attributes", "a", nil, "Comma separated pairs of container attributes in form of Key1=Value1,Key2=Value2")
flags.BoolVar(&containerAwait, "await", false, "Block execution until container is persisted")

View file

@ -1,11 +1,10 @@
package container
import (
"bufio"
"encoding/hex"
"encoding/json"
"errors"
"fmt"
"io"
"os"
"strings"
@ -14,6 +13,7 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/netmap"
"github.com/chzyer/readline"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
@ -163,6 +163,16 @@ func (repl *policyPlaygroundREPL) netMap() netmap.NetMap {
return nm
}
var policyPlaygroundCompleter = readline.NewPrefixCompleter(
readline.PcItem("list"),
readline.PcItem("ls"),
readline.PcItem("add"),
readline.PcItem("load"),
readline.PcItem("remove"),
readline.PcItem("rm"),
readline.PcItem("eval"),
)
func (repl *policyPlaygroundREPL) run() error {
if len(viper.GetString(commonflags.RPC)) > 0 {
key := key.GetOrGenerate(repl.cmd)
@ -189,22 +199,38 @@ func (repl *policyPlaygroundREPL) run() error {
"rm": repl.handleRemove,
"eval": repl.handleEval,
}
for reader := bufio.NewReader(os.Stdin); ; {
fmt.Print("> ")
line, err := reader.ReadString('\n')
rl, err := readline.NewEx(&readline.Config{
Prompt: "> ",
InterruptPrompt: "^C",
AutoComplete: policyPlaygroundCompleter,
})
if err != nil {
return fmt.Errorf("error initializing readline: %w", err)
}
defer rl.Close()
var exit bool
for {
line, err := rl.Readline()
if err != nil {
if err == io.EOF {
return nil
if errors.Is(err, readline.ErrInterrupt) {
if exit {
return nil
}
exit = true
continue
}
return fmt.Errorf("reading line: %v", err)
return fmt.Errorf("reading line: %w", err)
}
exit = false
parts := strings.Fields(line)
if len(parts) == 0 {
continue
}
cmd := parts[0]
handler, exists := cmdHandlers[cmd]
if exists {
if handler, exists := cmdHandlers[cmd]; exists {
if err := handler(parts[1:]); err != nil {
fmt.Printf("error: %v\n", err)
}

View file

@ -4,11 +4,11 @@ import (
"encoding/hex"
"errors"
"git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/rpc/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/modules/util"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/rpc/client"
apechain "git.frostfs.info/TrueCloudLab/policy-engine/pkg/chain"
"github.com/spf13/cobra"
)

View file

@ -1,10 +1,10 @@
package control
import (
rawclient "git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/rpc/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control"
rawclient "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/rpc/client"
"github.com/spf13/cobra"
)

View file

@ -1,10 +1,10 @@
package control
import (
"git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/rpc/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/rpc/client"
"github.com/spf13/cobra"
)

View file

@ -1,10 +1,10 @@
package control
import (
rawclient "git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/rpc/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control"
rawclient "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/rpc/client"
"github.com/spf13/cobra"
)

View file

@ -1,10 +1,10 @@
package control
import (
"git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/rpc/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/rpc/client"
"github.com/spf13/cobra"
)

View file

@ -7,11 +7,11 @@ import (
"sync/atomic"
"time"
"git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/rpc/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/shard"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/rpc/client"
clientSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client"
"github.com/spf13/cobra"
)

View file

@ -1,10 +1,10 @@
package control
import (
"git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/rpc/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/rpc/client"
"github.com/spf13/cobra"
)

View file

@ -3,11 +3,11 @@ package control
import (
"encoding/hex"
"git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/rpc/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/modules/util"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/rpc/client"
apechain "git.frostfs.info/TrueCloudLab/policy-engine/pkg/chain"
"github.com/spf13/cobra"
)

View file

@ -3,11 +3,11 @@ package control
import (
"os"
rawclient "git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/rpc/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control"
rawclient "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/rpc/client"
"github.com/spf13/cobra"
)

View file

@ -3,12 +3,12 @@ package control
import (
"os"
rawclient "git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/rpc/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
ircontrol "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control/ir"
ircontrolsrv "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control/ir/server"
rawclient "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/rpc/client"
"github.com/spf13/cobra"
)

View file

@ -1,13 +1,13 @@
package control
import (
"git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/refs"
rawclient "git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/rpc/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
ircontrol "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control/ir"
ircontrolsrv "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control/ir/server"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/refs"
rawclient "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/rpc/client"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/user"
"github.com/spf13/cobra"

View file

@ -4,11 +4,11 @@ import (
"encoding/hex"
"errors"
rawclient "git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/rpc/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
ircontrol "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control/ir"
ircontrolsrv "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control/ir/server"
rawclient "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/rpc/client"
"github.com/spf13/cobra"
)

View file

@ -1,11 +1,11 @@
package control
import (
rawclient "git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/rpc/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
ircontrol "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control/ir"
ircontrolsrv "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control/ir/server"
rawclient "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/rpc/client"
"github.com/spf13/cobra"
)

View file

@ -5,11 +5,11 @@ import (
"fmt"
"strings"
"git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/rpc/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/modules/util"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/rpc/client"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
apechain "git.frostfs.info/TrueCloudLab/policy-engine/pkg/chain"
"github.com/nspcc-dev/neo-go/cli/input"

View file

@ -7,10 +7,10 @@ import (
"strconv"
"text/tabwriter"
"git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/rpc/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/rpc/client"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
"github.com/spf13/cobra"
"google.golang.org/grpc/codes"

View file

@ -3,10 +3,10 @@ package control
import (
"fmt"
rawclient "git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/rpc/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control"
rawclient "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/rpc/client"
"github.com/mr-tron/base58"
"github.com/spf13/cobra"
)

View file

@ -4,10 +4,10 @@ import (
"encoding/hex"
"errors"
"git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/rpc/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/rpc/client"
"github.com/spf13/cobra"
)

View file

@ -6,12 +6,12 @@ import (
"fmt"
"time"
rawclient "git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/rpc/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control"
rawclient "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/rpc/client"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client"
"github.com/spf13/cobra"
)

View file

@ -7,11 +7,11 @@ import (
"sort"
"strings"
rawclient "git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/rpc/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control"
rawclient "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/rpc/client"
"github.com/mr-tron/base58"
"github.com/spf13/cobra"
)

View file

@ -6,10 +6,10 @@ import (
"slices"
"strings"
rawclient "git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/rpc/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control"
rawclient "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/rpc/client"
"github.com/mr-tron/base58"
"github.com/spf13/cobra"
)

View file

@ -4,12 +4,12 @@ import (
"crypto/sha256"
"errors"
rawclient "git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/rpc/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control/server/ctrlmessage"
rawclient "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/rpc/client"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
"github.com/spf13/cobra"
)

View file

@ -4,11 +4,11 @@ import (
"crypto/ecdsa"
"errors"
"git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/refs"
internalclient "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control/server/ctrlmessage"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/refs"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client"
frostfscrypto "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/crypto"
"github.com/spf13/cobra"

View file

@ -1,10 +1,10 @@
package control
import (
"git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/rpc/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/rpc/client"
"github.com/mr-tron/base58"
"github.com/spf13/cobra"
)

View file

@ -6,12 +6,12 @@ import (
"fmt"
"os"
"git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/refs"
internalclient "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/refs"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
objectSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"

View file

@ -7,12 +7,12 @@ import (
"strconv"
"time"
objectV2 "git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/object"
internalclient "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/common"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
objectV2 "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/object"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
objectSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"

View file

@ -10,11 +10,11 @@ import (
"strings"
"time"
objectV2 "git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/object"
internalclient "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/commonflags"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-cli/internal/key"
commonCmd "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/internal/common"
objectV2 "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/object"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
objectSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/user"

View file

@ -1,6 +1,7 @@
package main
import (
"context"
"os"
"os/signal"
"syscall"
@ -46,7 +47,7 @@ func reloadConfig() error {
return logPrm.Reload()
}
func watchForSignal(cancel func()) {
func watchForSignal(ctx context.Context, cancel func()) {
ch := make(chan os.Signal, 1)
signal.Notify(ch, syscall.SIGINT, syscall.SIGTERM)
@ -58,49 +59,49 @@ func watchForSignal(cancel func()) {
// signals causing application to shut down should have priority over
// reconfiguration signal
case <-ch:
log.Info(logs.FrostFSNodeTerminationSignalHasBeenReceivedStopping)
log.Info(ctx, logs.FrostFSNodeTerminationSignalHasBeenReceivedStopping)
cancel()
shutdown()
log.Info(logs.FrostFSNodeTerminationSignalProcessingIsComplete)
shutdown(ctx)
log.Info(ctx, logs.FrostFSNodeTerminationSignalProcessingIsComplete)
return
case err := <-intErr: // internal application error
log.Info(logs.FrostFSIRInternalError, zap.String("msg", err.Error()))
log.Info(ctx, logs.FrostFSIRInternalError, zap.String("msg", err.Error()))
cancel()
shutdown()
shutdown(ctx)
return
default:
// block until any signal is receieved
select {
case <-ch:
log.Info(logs.FrostFSNodeTerminationSignalHasBeenReceivedStopping)
log.Info(ctx, logs.FrostFSNodeTerminationSignalHasBeenReceivedStopping)
cancel()
shutdown()
log.Info(logs.FrostFSNodeTerminationSignalProcessingIsComplete)
shutdown(ctx)
log.Info(ctx, logs.FrostFSNodeTerminationSignalProcessingIsComplete)
return
case err := <-intErr: // internal application error
log.Info(logs.FrostFSIRInternalError, zap.String("msg", err.Error()))
log.Info(ctx, logs.FrostFSIRInternalError, zap.String("msg", err.Error()))
cancel()
shutdown()
shutdown(ctx)
return
case <-sighupCh:
log.Info(logs.FrostFSNodeSIGHUPHasBeenReceivedRereadingConfiguration)
if !innerRing.CompareAndSwapHealthStatus(control.HealthStatus_READY, control.HealthStatus_RECONFIGURING) {
log.Info(logs.FrostFSNodeSIGHUPSkip)
log.Info(ctx, logs.FrostFSNodeSIGHUPHasBeenReceivedRereadingConfiguration)
if !innerRing.CompareAndSwapHealthStatus(ctx, control.HealthStatus_READY, control.HealthStatus_RECONFIGURING) {
log.Info(ctx, logs.FrostFSNodeSIGHUPSkip)
break
}
err := reloadConfig()
if err != nil {
log.Error(logs.FrostFSNodeConfigurationReading, zap.Error(err))
log.Error(ctx, logs.FrostFSNodeConfigurationReading, zap.Error(err))
}
pprofCmp.reload()
metricsCmp.reload()
log.Info(logs.FrostFSIRReloadExtraWallets)
pprofCmp.reload(ctx)
metricsCmp.reload(ctx)
log.Info(ctx, logs.FrostFSIRReloadExtraWallets)
err = innerRing.SetExtraWallets(cfg)
if err != nil {
log.Error(logs.FrostFSNodeConfigurationReading, zap.Error(err))
log.Error(ctx, logs.FrostFSNodeConfigurationReading, zap.Error(err))
}
innerRing.CompareAndSwapHealthStatus(control.HealthStatus_RECONFIGURING, control.HealthStatus_READY)
log.Info(logs.FrostFSNodeConfigurationHasBeenReloadedSuccessfully)
innerRing.CompareAndSwapHealthStatus(ctx, control.HealthStatus_RECONFIGURING, control.HealthStatus_READY)
log.Info(ctx, logs.FrostFSNodeConfigurationHasBeenReloadedSuccessfully)
}
}
}

View file

@ -1,6 +1,7 @@
package main
import (
"context"
"net/http"
"time"
@ -24,8 +25,8 @@ const (
shutdownTimeoutKeyPostfix = ".shutdown_timeout"
)
func (c *httpComponent) init() {
log.Info("init " + c.name)
func (c *httpComponent) init(ctx context.Context) {
log.Info(ctx, "init "+c.name)
c.enabled = cfg.GetBool(c.name + enabledKeyPostfix)
c.address = cfg.GetString(c.name + addressKeyPostfix)
c.shutdownDur = cfg.GetDuration(c.name + shutdownTimeoutKeyPostfix)
@ -39,14 +40,14 @@ func (c *httpComponent) init() {
httputil.WithShutdownTimeout(c.shutdownDur),
)
} else {
log.Info(c.name + " is disabled, skip")
log.Info(ctx, c.name+" is disabled, skip")
c.srv = nil
}
}
func (c *httpComponent) start() {
func (c *httpComponent) start(ctx context.Context) {
if c.srv != nil {
log.Info("start " + c.name)
log.Info(ctx, "start "+c.name)
wg.Add(1)
go func() {
defer wg.Done()
@ -55,10 +56,10 @@ func (c *httpComponent) start() {
}
}
func (c *httpComponent) shutdown() error {
func (c *httpComponent) shutdown(ctx context.Context) error {
if c.srv != nil {
log.Info("shutdown " + c.name)
return c.srv.Shutdown()
log.Info(ctx, "shutdown "+c.name)
return c.srv.Shutdown(ctx)
}
return nil
}
@ -70,17 +71,17 @@ func (c *httpComponent) needReload() bool {
return enabled != c.enabled || enabled && (address != c.address || dur != c.shutdownDur)
}
func (c *httpComponent) reload() {
log.Info("reload " + c.name)
func (c *httpComponent) reload(ctx context.Context) {
log.Info(ctx, "reload "+c.name)
if c.needReload() {
log.Info(c.name + " config updated")
if err := c.shutdown(); err != nil {
log.Debug(logs.FrostFSIRCouldNotShutdownHTTPServer,
log.Info(ctx, c.name+" config updated")
if err := c.shutdown(ctx); err != nil {
log.Debug(ctx, logs.FrostFSIRCouldNotShutdownHTTPServer,
zap.String("error", err.Error()),
)
} else {
c.init()
c.start()
c.init(ctx)
c.start(ctx)
}
}
}

View file

@ -87,48 +87,48 @@ func main() {
ctx, cancel := context.WithCancel(context.Background())
pprofCmp = newPprofComponent()
pprofCmp.init()
pprofCmp.init(ctx)
metricsCmp = newMetricsComponent()
metricsCmp.init()
metricsCmp.init(ctx)
audit.Store(cfg.GetBool("audit.enabled"))
innerRing, err = innerring.New(ctx, log, cfg, intErr, metrics, cmode, audit)
exitErr(err)
pprofCmp.start()
metricsCmp.start()
pprofCmp.start(ctx)
metricsCmp.start(ctx)
// start inner ring
err = innerRing.Start(ctx, intErr)
exitErr(err)
log.Info(logs.CommonApplicationStarted,
log.Info(ctx, logs.CommonApplicationStarted,
zap.String("version", misc.Version))
watchForSignal(cancel)
watchForSignal(ctx, cancel)
<-ctx.Done() // graceful shutdown
log.Debug(logs.FrostFSNodeWaitingForAllProcessesToStop)
log.Debug(ctx, logs.FrostFSNodeWaitingForAllProcessesToStop)
wg.Wait()
log.Info(logs.FrostFSIRApplicationStopped)
log.Info(ctx, logs.FrostFSIRApplicationStopped)
}
func shutdown() {
innerRing.Stop()
if err := metricsCmp.shutdown(); err != nil {
log.Debug(logs.FrostFSIRCouldNotShutdownHTTPServer,
func shutdown(ctx context.Context) {
innerRing.Stop(ctx)
if err := metricsCmp.shutdown(ctx); err != nil {
log.Debug(ctx, logs.FrostFSIRCouldNotShutdownHTTPServer,
zap.String("error", err.Error()),
)
}
if err := pprofCmp.shutdown(); err != nil {
log.Debug(logs.FrostFSIRCouldNotShutdownHTTPServer,
if err := pprofCmp.shutdown(ctx); err != nil {
log.Debug(ctx, logs.FrostFSIRCouldNotShutdownHTTPServer,
zap.String("error", err.Error()),
)
}
if err := sdnotify.ClearStatus(); err != nil {
log.Error(logs.FailedToReportStatusToSystemd, zap.Error(err))
log.Error(ctx, logs.FailedToReportStatusToSystemd, zap.Error(err))
}
}

View file

@ -1,6 +1,7 @@
package main
import (
"context"
"runtime"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs"
@ -28,8 +29,8 @@ func newPprofComponent() *pprofComponent {
}
}
func (c *pprofComponent) init() {
c.httpComponent.init()
func (c *pprofComponent) init(ctx context.Context) {
c.httpComponent.init(ctx)
if c.enabled {
c.blockRate = cfg.GetInt(pprofBlockRateKey)
@ -51,17 +52,17 @@ func (c *pprofComponent) needReload() bool {
c.enabled && (c.blockRate != blockRate || c.mutexRate != mutexRate)
}
func (c *pprofComponent) reload() {
log.Info("reload " + c.name)
func (c *pprofComponent) reload(ctx context.Context) {
log.Info(ctx, "reload "+c.name)
if c.needReload() {
log.Info(c.name + " config updated")
if err := c.shutdown(); err != nil {
log.Debug(logs.FrostFSIRCouldNotShutdownHTTPServer,
log.Info(ctx, c.name+" config updated")
if err := c.shutdown(ctx); err != nil {
log.Debug(ctx, logs.FrostFSIRCouldNotShutdownHTTPServer,
zap.String("error", err.Error()))
return
}
c.init()
c.start()
c.init(ctx)
c.start(ctx)
}
}

View file

@ -28,7 +28,7 @@ func inspectFunc(cmd *cobra.Command, _ []string) {
common.ExitOnErr(cmd, common.Errf("invalid address argument: %w", err))
blz := openBlobovnicza(cmd)
defer blz.Close()
defer blz.Close(cmd.Context())
var prm blobovnicza.GetPrm
prm.SetAddress(addr)

View file

@ -32,7 +32,7 @@ func listFunc(cmd *cobra.Command, _ []string) {
}
blz := openBlobovnicza(cmd)
defer blz.Close()
defer blz.Close(cmd.Context())
err := blobovnicza.IterateAddresses(context.Background(), blz, wAddr)
common.ExitOnErr(cmd, common.Errf("blobovnicza iterator failure: %w", err))

View file

@ -27,7 +27,7 @@ func openBlobovnicza(cmd *cobra.Command) *blobovnicza.Blobovnicza {
blobovnicza.WithPath(vPath),
blobovnicza.WithReadOnly(true),
)
common.ExitOnErr(cmd, blz.Open())
common.ExitOnErr(cmd, blz.Open(cmd.Context()))
return blz
}

View file

@ -31,7 +31,7 @@ func inspectFunc(cmd *cobra.Command, _ []string) {
common.ExitOnErr(cmd, common.Errf("invalid address argument: %w", err))
db := openMeta(cmd)
defer db.Close()
defer db.Close(cmd.Context())
storageID := meta.StorageIDPrm{}
storageID.SetAddress(addr)

View file

@ -19,7 +19,7 @@ func init() {
func listGarbageFunc(cmd *cobra.Command, _ []string) {
db := openMeta(cmd)
defer db.Close()
defer db.Close(cmd.Context())
var garbPrm meta.GarbageIterationPrm
garbPrm.SetHandler(

View file

@ -19,7 +19,7 @@ func init() {
func listGraveyardFunc(cmd *cobra.Command, _ []string) {
db := openMeta(cmd)
defer db.Close()
defer db.Close(cmd.Context())
var gravePrm meta.GraveyardIterationPrm
gravePrm.SetHandler(

View file

@ -3,12 +3,13 @@ package main
import (
"context"
"net"
"strings"
accountingGRPC "git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/accounting/grpc"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client/balance"
accountingTransportGRPC "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/network/transport/accounting/grpc"
accountingService "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/accounting"
accounting "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/accounting/morph"
accountingGRPC "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/accounting/grpc"
"google.golang.org/grpc"
)
@ -30,5 +31,27 @@ func initAccountingService(ctx context.Context, c *cfg) {
c.cfgGRPC.performAndSave(func(_ string, _ net.Listener, s *grpc.Server) {
accountingGRPC.RegisterAccountingServiceServer(s, server)
// TODO(@aarifullin): #1487 remove the dual service support.
s.RegisterService(frostFSServiceDesc(accountingGRPC.AccountingService_ServiceDesc), server)
})
}
// frostFSServiceDesc creates a service descriptor with the new namespace for dual service support.
func frostFSServiceDesc(sd grpc.ServiceDesc) *grpc.ServiceDesc {
sdLegacy := new(grpc.ServiceDesc)
*sdLegacy = sd
const (
legacyNamespace = "neo.fs.v2"
apemanagerLegacyNamespace = "frostfs.v2"
newNamespace = "frost.fs"
)
if strings.HasPrefix(sd.ServiceName, legacyNamespace) {
sdLegacy.ServiceName = strings.ReplaceAll(sd.ServiceName, legacyNamespace, newNamespace)
} else if strings.HasPrefix(sd.ServiceName, apemanagerLegacyNamespace) {
sdLegacy.ServiceName = strings.ReplaceAll(sd.ServiceName, apemanagerLegacyNamespace, newNamespace)
}
return sdLegacy
}

View file

@ -3,11 +3,11 @@ package main
import (
"net"
apemanager_grpc "git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/apemanager/grpc"
ape_contract "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/ape/contract_storage"
morph "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client"
apemanager_transport "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/network/transport/apemanager/grpc"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/apemanager"
apemanager_grpc "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/apemanager/grpc"
"google.golang.org/grpc"
)
@ -26,5 +26,8 @@ func initAPEManagerService(c *cfg) {
c.cfgGRPC.performAndSave(func(_ string, _ net.Listener, s *grpc.Server) {
apemanager_grpc.RegisterAPEManagerServiceServer(s, server)
// TODO(@aarifullin): #1487 remove the dual service support.
s.RegisterService(frostFSServiceDesc(apemanager_grpc.APEManagerService_ServiceDesc), server)
})
}

View file

@ -196,31 +196,6 @@ func (s ttlContainerStorage) DeletionInfo(cnr cid.ID) (*container.DelInfo, error
return s.delInfoCache.get(cnr)
}
type ttlEACLStorage struct {
*ttlNetCache[cid.ID, *container.EACL]
}
func newCachedEACLStorage(v container.EACLSource, ttl time.Duration) ttlEACLStorage {
const eaclCacheSize = 100
lruCnrCache := newNetworkTTLCache(eaclCacheSize, ttl, func(id cid.ID) (*container.EACL, error) {
return v.GetEACL(id)
}, metrics.NewCacheMetrics("eacl"))
return ttlEACLStorage{lruCnrCache}
}
// GetEACL returns eACL value from the cache. If value is missing in the cache
// or expired, then it returns value from side chain and updates cache.
func (s ttlEACLStorage) GetEACL(cnr cid.ID) (*container.EACL, error) {
return s.get(cnr)
}
// InvalidateEACL removes cached eACL value.
func (s ttlEACLStorage) InvalidateEACL(cnr cid.ID) {
s.remove(cnr)
}
type lruNetmapSource struct {
netState netmap.State

View file

@ -15,7 +15,6 @@ import (
"syscall"
"time"
netmapV2 "git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/netmap"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config"
apiclientconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/apiclient"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/audit"
@ -70,6 +69,7 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/state"
"git.frostfs.info/TrueCloudLab/frostfs-observability/logging/lokicore"
"git.frostfs.info/TrueCloudLab/frostfs-observability/tracing"
netmapV2 "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/netmap"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/netmap"
objectSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/user"
@ -397,16 +397,16 @@ type internals struct {
}
// starts node's maintenance.
func (c *cfg) startMaintenance() {
func (c *cfg) startMaintenance(ctx context.Context) {
c.isMaintenance.Store(true)
c.cfgNetmap.state.setControlNetmapStatus(control.NetmapStatus_MAINTENANCE)
c.log.Info(logs.FrostFSNodeStartedLocalNodesMaintenance)
c.log.Info(ctx, logs.FrostFSNodeStartedLocalNodesMaintenance)
}
// stops node's maintenance.
func (c *internals) stopMaintenance() {
func (c *internals) stopMaintenance(ctx context.Context) {
if c.isMaintenance.CompareAndSwap(true, false) {
c.log.Info(logs.FrostFSNodeStoppedLocalNodesMaintenance)
c.log.Info(ctx, logs.FrostFSNodeStoppedLocalNodesMaintenance)
}
}
@ -642,8 +642,6 @@ type cfgObject struct {
cnrSource container.Source
eaclSource container.EACLSource
cfgAccessPolicyEngine cfgAccessPolicyEngine
pool cfgObjectRoutines
@ -707,7 +705,7 @@ func initCfg(appCfg *config.Config) *cfg {
log, err := logger.NewLogger(logPrm)
fatalOnErr(err)
if loggerconfig.ToLokiConfig(appCfg).Enabled {
log.Logger = log.Logger.WithOptions(zap.WrapCore(func(core zapcore.Core) zapcore.Core {
log.WithOptions(zap.WrapCore(func(core zapcore.Core) zapcore.Core {
lokiCore := lokicore.New(core, loggerconfig.ToLokiConfig(appCfg))
return lokiCore
}))
@ -1091,7 +1089,7 @@ func (c *cfg) LocalAddress() network.AddressGroup {
func initLocalStorage(ctx context.Context, c *cfg) {
ls := engine.New(c.engineOpts()...)
addNewEpochAsyncNotificationHandler(c, func(ev event.Event) {
addNewEpochAsyncNotificationHandler(c, func(ctx context.Context, ev event.Event) {
ls.HandleNewEpoch(ctx, ev.(netmap2.NewEpoch).EpochNumber())
})
@ -1105,10 +1103,10 @@ func initLocalStorage(ctx context.Context, c *cfg) {
shard.WithTombstoneSource(c.createTombstoneSource()),
shard.WithContainerInfoProvider(c.createContainerInfoProvider(ctx)))...)
if err != nil {
c.log.Error(logs.FrostFSNodeFailedToAttachShardToEngine, zap.Error(err))
c.log.Error(ctx, logs.FrostFSNodeFailedToAttachShardToEngine, zap.Error(err))
} else {
shardsAttached++
c.log.Info(logs.FrostFSNodeShardAttachedToEngine, zap.Stringer("id", id))
c.log.Info(ctx, logs.FrostFSNodeShardAttachedToEngine, zap.Stringer("id", id))
}
}
if shardsAttached == 0 {
@ -1118,23 +1116,23 @@ func initLocalStorage(ctx context.Context, c *cfg) {
c.cfgObject.cfgLocalStorage.localStorage = ls
c.onShutdown(func() {
c.log.Info(logs.FrostFSNodeClosingComponentsOfTheStorageEngine)
c.log.Info(ctx, logs.FrostFSNodeClosingComponentsOfTheStorageEngine)
err := ls.Close(context.WithoutCancel(ctx))
if err != nil {
c.log.Info(logs.FrostFSNodeStorageEngineClosingFailure,
c.log.Info(ctx, logs.FrostFSNodeStorageEngineClosingFailure,
zap.String("error", err.Error()),
)
} else {
c.log.Info(logs.FrostFSNodeAllComponentsOfTheStorageEngineClosedSuccessfully)
c.log.Info(ctx, logs.FrostFSNodeAllComponentsOfTheStorageEngineClosedSuccessfully)
}
})
}
func initAccessPolicyEngine(_ context.Context, c *cfg) {
func initAccessPolicyEngine(ctx context.Context, c *cfg) {
var localOverrideDB chainbase.LocalOverrideDatabase
if nodeconfig.PersistentPolicyRules(c.appCfg).Path() == "" {
c.log.Warn(logs.FrostFSNodePersistentRuleStorageDBPathIsNotSetInmemoryWillBeUsed)
c.log.Warn(ctx, logs.FrostFSNodePersistentRuleStorageDBPathIsNotSetInmemoryWillBeUsed)
localOverrideDB = chainbase.NewInmemoryLocalOverrideDatabase()
} else {
localOverrideDB = chainbase.NewBoltLocalOverrideDatabase(
@ -1159,7 +1157,7 @@ func initAccessPolicyEngine(_ context.Context, c *cfg) {
c.onShutdown(func() {
if err := ape.LocalOverrideDatabaseCore().Close(); err != nil {
c.log.Warn(logs.FrostFSNodeAccessPolicyEngineClosingFailure,
c.log.Warn(ctx, logs.FrostFSNodeAccessPolicyEngineClosingFailure,
zap.Error(err),
)
}
@ -1208,10 +1206,10 @@ func (c *cfg) setContractNodeInfo(ni *netmap.NodeInfo) {
c.cfgNetmap.state.setNodeInfo(ni)
}
func (c *cfg) updateContractNodeInfo(epoch uint64) {
func (c *cfg) updateContractNodeInfo(ctx context.Context, epoch uint64) {
ni, err := c.netmapLocalNodeState(epoch)
if err != nil {
c.log.Error(logs.FrostFSNodeCouldNotUpdateNodeStateOnNewEpoch,
c.log.Error(ctx, logs.FrostFSNodeCouldNotUpdateNodeStateOnNewEpoch,
zap.Uint64("epoch", epoch),
zap.String("error", err.Error()))
return
@ -1223,19 +1221,19 @@ func (c *cfg) updateContractNodeInfo(epoch uint64) {
// bootstrapWithState calls "addPeer" method of the Sidechain Netmap contract
// with the binary-encoded information from the current node's configuration.
// The state is set using the provided setter which MUST NOT be nil.
func (c *cfg) bootstrapWithState(stateSetter func(*netmap.NodeInfo)) error {
func (c *cfg) bootstrapWithState(ctx context.Context, stateSetter func(*netmap.NodeInfo)) error {
ni := c.cfgNodeInfo.localInfo
stateSetter(&ni)
prm := nmClient.AddPeerPrm{}
prm.SetNodeInfo(ni)
return c.cfgNetmap.wrapper.AddPeer(prm)
return c.cfgNetmap.wrapper.AddPeer(ctx, prm)
}
// bootstrapOnline calls cfg.bootstrapWithState with "online" state.
func bootstrapOnline(c *cfg) error {
return c.bootstrapWithState(func(ni *netmap.NodeInfo) {
func bootstrapOnline(ctx context.Context, c *cfg) error {
return c.bootstrapWithState(ctx, func(ni *netmap.NodeInfo) {
ni.SetStatus(netmap.Online)
})
}
@ -1243,21 +1241,21 @@ func bootstrapOnline(c *cfg) error {
// bootstrap calls bootstrapWithState with:
// - "maintenance" state if maintenance is in progress on the current node
// - "online", otherwise
func (c *cfg) bootstrap() error {
func (c *cfg) bootstrap(ctx context.Context) error {
// switch to online except when under maintenance
st := c.cfgNetmap.state.controlNetmapStatus()
if st == control.NetmapStatus_MAINTENANCE {
c.log.Info(logs.FrostFSNodeBootstrappingWithTheMaintenanceState)
return c.bootstrapWithState(func(ni *netmap.NodeInfo) {
c.log.Info(ctx, logs.FrostFSNodeBootstrappingWithTheMaintenanceState)
return c.bootstrapWithState(ctx, func(ni *netmap.NodeInfo) {
ni.SetStatus(netmap.Maintenance)
})
}
c.log.Info(logs.FrostFSNodeBootstrappingWithOnlineState,
c.log.Info(ctx, logs.FrostFSNodeBootstrappingWithOnlineState,
zap.Stringer("previous", st),
)
return bootstrapOnline(c)
return bootstrapOnline(ctx, c)
}
// needBootstrap checks if local node should be registered in network on bootup.
@ -1282,19 +1280,19 @@ func (c *cfg) signalWatcher(ctx context.Context) {
// signals causing application to shut down should have priority over
// reconfiguration signal
case <-ch:
c.log.Info(logs.FrostFSNodeTerminationSignalHasBeenReceivedStopping)
c.log.Info(ctx, logs.FrostFSNodeTerminationSignalHasBeenReceivedStopping)
c.shutdown()
c.shutdown(ctx)
c.log.Info(logs.FrostFSNodeTerminationSignalProcessingIsComplete)
c.log.Info(ctx, logs.FrostFSNodeTerminationSignalProcessingIsComplete)
return
case err := <-c.internalErr: // internal application error
c.log.Warn(logs.FrostFSNodeInternalApplicationError,
c.log.Warn(ctx, logs.FrostFSNodeInternalApplicationError,
zap.String("message", err.Error()))
c.shutdown()
c.shutdown(ctx)
c.log.Info(logs.FrostFSNodeInternalErrorProcessingIsComplete)
c.log.Info(ctx, logs.FrostFSNodeInternalErrorProcessingIsComplete)
return
default:
// block until any signal is receieved
@ -1302,19 +1300,19 @@ func (c *cfg) signalWatcher(ctx context.Context) {
case <-sighupCh:
c.reloadConfig(ctx)
case <-ch:
c.log.Info(logs.FrostFSNodeTerminationSignalHasBeenReceivedStopping)
c.log.Info(ctx, logs.FrostFSNodeTerminationSignalHasBeenReceivedStopping)
c.shutdown()
c.shutdown(ctx)
c.log.Info(logs.FrostFSNodeTerminationSignalProcessingIsComplete)
c.log.Info(ctx, logs.FrostFSNodeTerminationSignalProcessingIsComplete)
return
case err := <-c.internalErr: // internal application error
c.log.Warn(logs.FrostFSNodeInternalApplicationError,
c.log.Warn(ctx, logs.FrostFSNodeInternalApplicationError,
zap.String("message", err.Error()))
c.shutdown()
c.shutdown(ctx)
c.log.Info(logs.FrostFSNodeInternalErrorProcessingIsComplete)
c.log.Info(ctx, logs.FrostFSNodeInternalErrorProcessingIsComplete)
return
}
}
@ -1322,17 +1320,17 @@ func (c *cfg) signalWatcher(ctx context.Context) {
}
func (c *cfg) reloadConfig(ctx context.Context) {
c.log.Info(logs.FrostFSNodeSIGHUPHasBeenReceivedRereadingConfiguration)
c.log.Info(ctx, logs.FrostFSNodeSIGHUPHasBeenReceivedRereadingConfiguration)
if !c.compareAndSwapHealthStatus(control.HealthStatus_READY, control.HealthStatus_RECONFIGURING) {
c.log.Info(logs.FrostFSNodeSIGHUPSkip)
if !c.compareAndSwapHealthStatus(ctx, control.HealthStatus_READY, control.HealthStatus_RECONFIGURING) {
c.log.Info(ctx, logs.FrostFSNodeSIGHUPSkip)
return
}
defer c.compareAndSwapHealthStatus(control.HealthStatus_RECONFIGURING, control.HealthStatus_READY)
defer c.compareAndSwapHealthStatus(ctx, control.HealthStatus_RECONFIGURING, control.HealthStatus_READY)
err := c.reloadAppConfig()
if err != nil {
c.log.Error(logs.FrostFSNodeConfigurationReading, zap.Error(err))
c.log.Error(ctx, logs.FrostFSNodeConfigurationReading, zap.Error(err))
return
}
@ -1343,7 +1341,7 @@ func (c *cfg) reloadConfig(ctx context.Context) {
logPrm, err := c.loggerPrm()
if err != nil {
c.log.Error(logs.FrostFSNodeLoggerConfigurationPreparation, zap.Error(err))
c.log.Error(ctx, logs.FrostFSNodeLoggerConfigurationPreparation, zap.Error(err))
return
}
@ -1364,25 +1362,25 @@ func (c *cfg) reloadConfig(ctx context.Context) {
err = c.cfgObject.cfgLocalStorage.localStorage.Reload(ctx, rcfg)
if err != nil {
c.log.Error(logs.FrostFSNodeStorageEngineConfigurationUpdate, zap.Error(err))
c.log.Error(ctx, logs.FrostFSNodeStorageEngineConfigurationUpdate, zap.Error(err))
return
}
for _, component := range components {
err = component.reloadFunc()
if err != nil {
c.log.Error(logs.FrostFSNodeUpdatedConfigurationApplying,
c.log.Error(ctx, logs.FrostFSNodeUpdatedConfigurationApplying,
zap.String("component", component.name),
zap.Error(err))
}
}
if err := c.dialerSource.Update(internalNetConfig(c.appCfg, c.metricsCollector.MultinetMetrics())); err != nil {
c.log.Error(logs.FailedToUpdateMultinetConfiguration, zap.Error(err))
c.log.Error(ctx, logs.FailedToUpdateMultinetConfiguration, zap.Error(err))
return
}
c.log.Info(logs.FrostFSNodeConfigurationHasBeenReloadedSuccessfully)
c.log.Info(ctx, logs.FrostFSNodeConfigurationHasBeenReloadedSuccessfully)
}
func (c *cfg) getComponents(ctx context.Context, logPrm *logger.Prm) []dCmp {
@ -1390,7 +1388,7 @@ func (c *cfg) getComponents(ctx context.Context, logPrm *logger.Prm) []dCmp {
components = append(components, dCmp{"logger", logPrm.Reload})
components = append(components, dCmp{"runtime", func() error {
setRuntimeParameters(c)
setRuntimeParameters(ctx, c)
return nil
}})
components = append(components, dCmp{"audit", func() error {
@ -1405,7 +1403,7 @@ func (c *cfg) getComponents(ctx context.Context, logPrm *logger.Prm) []dCmp {
}
updated, err := tracing.Setup(ctx, *traceConfig)
if updated {
c.log.Info(logs.FrostFSNodeTracingConfigationUpdated)
c.log.Info(ctx, logs.FrostFSNodeTracingConfigationUpdated)
}
return err
}})
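Each entry appended in `getComponents` follows the same shape: a component name plus a zero-argument reload callback. A minimal sketch of registering one more component the same way (the `multinet` name is an assumption for illustration; `dCmp` is assumed to be a `struct{ name string; reloadFunc func() error }`, and the update call mirrors the one issued directly in `reloadConfig` above):
```go
// Sketch only: a hypothetical extra reloadable component.
components = append(components, dCmp{"multinet", func() error {
	// Same update call as performed in reloadConfig above.
	return c.dialerSource.Update(internalNetConfig(c.appCfg, c.metricsCollector.MultinetMetrics()))
}})
```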
@ -1440,7 +1438,7 @@ func (c *cfg) reloadPools() error {
func (c *cfg) reloadPool(p *ants.Pool, newSize int, name string) {
oldSize := p.Cap()
if oldSize != newSize {
c.log.Info(logs.FrostFSNodePoolConfigurationUpdate, zap.String("field", name),
c.log.Info(context.Background(), logs.FrostFSNodePoolConfigurationUpdate, zap.String("field", name),
zap.Int("old", oldSize), zap.Int("new", newSize))
p.Tune(newSize)
}
@ -1476,14 +1474,14 @@ func (c *cfg) createContainerInfoProvider(ctx context.Context) container.InfoPro
})
}
func (c *cfg) shutdown() {
old := c.swapHealthStatus(control.HealthStatus_SHUTTING_DOWN)
func (c *cfg) shutdown(ctx context.Context) {
old := c.swapHealthStatus(ctx, control.HealthStatus_SHUTTING_DOWN)
if old == control.HealthStatus_SHUTTING_DOWN {
c.log.Info(logs.FrostFSNodeShutdownSkip)
c.log.Info(ctx, logs.FrostFSNodeShutdownSkip)
return
}
if old == control.HealthStatus_STARTING {
c.log.Warn(logs.FrostFSNodeShutdownWhenNotReady)
c.log.Warn(ctx, logs.FrostFSNodeShutdownWhenNotReady)
}
c.ctxCancel()
@ -1493,6 +1491,6 @@ func (c *cfg) shutdown() {
}
if err := sdnotify.ClearStatus(); err != nil {
c.log.Error(logs.FailedToReportStatusToSystemd, zap.Error(err))
c.log.Error(ctx, logs.FailedToReportStatusToSystemd, zap.Error(err))
}
}


@ -5,6 +5,7 @@ import (
"errors"
"fmt"
"os"
"strconv"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config"
"git.frostfs.info/TrueCloudLab/frostfs-node/misc"
@ -24,6 +25,7 @@ func ToTracingConfig(c *config.Config) (*tracing.Config, error) {
Service: "frostfs-node",
InstanceID: getInstanceIDOrDefault(c),
Version: misc.Version,
Attributes: make(map[string]string),
}
if trustedCa := config.StringSafe(c.Sub(subsection), "trusted_ca"); trustedCa != "" {
@ -38,11 +40,30 @@ func ToTracingConfig(c *config.Config) (*tracing.Config, error) {
}
conf.ServerCaCertPool = certPool
}
i := uint64(0)
for ; ; i++ {
si := strconv.FormatUint(i, 10)
ac := c.Sub(subsection).Sub("attributes").Sub(si)
k := config.StringSafe(ac, "key")
if k == "" {
break
}
v := config.StringSafe(ac, "value")
if v == "" {
return nil, fmt.Errorf("empty tracing attribute value for key %s", k)
}
if _, ok := conf.Attributes[k]; ok {
return nil, fmt.Errorf("tracing attribute key %s defined more than once", k)
}
conf.Attributes[k] = v
}
return conf, nil
}
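The attribute loop above implies a contiguous, zero-based numbering scheme: scanning stops at the first index whose `key` is empty, so a gap in the numbering silently drops everything after it. A standalone sketch of that contract, with a `getString` callback standing in for the `config.StringSafe`/`Sub` chain (an assumption for illustration, not the package API):
```go
package tracing

import (
	"fmt"
	"strconv"
)

// collectAttributes mirrors the loop above: attributes must be numbered
// 0..N without gaps; the first missing "key" terminates the scan.
func collectAttributes(getString func(section, key string) string) (map[string]string, error) {
	attrs := make(map[string]string)
	for i := uint64(0); ; i++ {
		section := "attributes." + strconv.FormatUint(i, 10)
		k := getString(section, "key")
		if k == "" {
			break
		}
		v := getString(section, "value")
		if v == "" {
			return nil, fmt.Errorf("empty tracing attribute value for key %s", k)
		}
		if _, ok := attrs[k]; ok {
			return nil, fmt.Errorf("tracing attribute key %s defined more than once", k)
		}
		attrs[k] = v
	}
	return attrs, nil
}
```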
func getInstanceIDOrDefault(c *config.Config) string {
s := config.StringSlice(c.Sub("node"), "addresses")
s := config.StringSliceSafe(c.Sub("node"), "addresses")
if len(s) > 0 {
return s[0]
}
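The switch from `config.StringSlice` to `config.StringSliceSafe` is what lets the `defaults` case in the test below pass on an empty config: the `Safe` accessors are assumed to return a zero value (here, a nil slice) instead of panicking when the key is absent or malformed.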


@ -0,0 +1,46 @@
package tracing
import (
"testing"
"git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config"
configtest "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/test"
"git.frostfs.info/TrueCloudLab/frostfs-observability/tracing"
"github.com/stretchr/testify/require"
)
func TestTracingSection(t *testing.T) {
t.Run("defaults", func(t *testing.T) {
tc, err := ToTracingConfig(configtest.EmptyConfig())
require.NoError(t, err)
require.Equal(t, false, tc.Enabled)
require.Equal(t, tracing.Exporter(""), tc.Exporter)
require.Equal(t, "", tc.Endpoint)
require.Equal(t, "frostfs-node", tc.Service)
require.Equal(t, "", tc.InstanceID)
require.Nil(t, tc.ServerCaCertPool)
require.Empty(t, tc.Attributes)
})
const path = "../../../../config/example/node"
fileConfigTest := func(c *config.Config) {
tc, err := ToTracingConfig(c)
require.NoError(t, err)
require.Equal(t, true, tc.Enabled)
require.Equal(t, tracing.OTLPgRPCExporter, tc.Exporter)
require.Equal(t, "localhost", tc.Endpoint)
require.Equal(t, "frostfs-node", tc.Service)
require.Nil(t, tc.ServerCaCertPool)
require.EqualValues(t, map[string]string{
"key0": "value",
"key1": "value",
}, tc.Attributes)
}
configtest.ForEachFileType(path, fileConfigTest)
t.Run("ENV", func(t *testing.T) {
configtest.ForEnvFileType(t, path, fileConfigTest)
})
}


@ -10,6 +10,8 @@ import (
const (
subsection = "tree"
SyncBatchSizeDefault = 1000
)
// TreeConfig is a wrapper over "tree" config section
@ -74,6 +76,17 @@ func (c TreeConfig) SyncInterval() time.Duration {
return config.DurationSafe(c.cfg, "sync_interval")
}
// SyncBatchSize returns the value of "sync_batch_size"
// config parameter from the "tree" section.
//
// Returns `SyncBatchSizeDefault` if config value is not specified.
func (c TreeConfig) SyncBatchSize() int {
if v := config.IntSafe(c.cfg, "sync_batch_size"); v > 0 {
return int(v)
}
return SyncBatchSizeDefault
}
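A minimal usage sketch of the fallback (assuming an already-loaded `*config.Config`; the value 2000 comes from the example configs further below):
```go
// Sketch: non-positive or missing sync_batch_size falls back to SyncBatchSizeDefault.
func syncBatchSize(appCfg *config.Config) int {
	return treeconfig.Tree(appCfg).SyncBatchSize() // 2000 with the example configs; 1000 if unset
}
```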
// AuthorizedKeys parses and returns an array of "authorized_keys" config
// parameter from "tree" section.
//


@ -44,6 +44,7 @@ func TestTreeSection(t *testing.T) {
require.Equal(t, 32, treeSec.ReplicationWorkerCount())
require.Equal(t, 5*time.Second, treeSec.ReplicationTimeout())
require.Equal(t, time.Hour, treeSec.SyncInterval())
require.Equal(t, 2000, treeSec.SyncBatchSize())
require.Equal(t, expectedKeys, treeSec.AuthorizedKeys())
}


@ -5,7 +5,6 @@ import (
"context"
"net"
containerGRPC "git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/container/grpc"
morphconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/morph"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/metrics"
@ -18,6 +17,7 @@ import (
containerTransportGRPC "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/network/transport/container/grpc"
containerService "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/container"
containerMorph "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/container/morph"
containerGRPC "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/container/grpc"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/user"
"git.frostfs.info/TrueCloudLab/policy-engine/pkg/engine"
@ -64,16 +64,15 @@ func initContainerService(_ context.Context, c *cfg) {
c.cfgGRPC.performAndSave(func(_ string, _ net.Listener, s *grpc.Server) {
containerGRPC.RegisterContainerServiceServer(s, server)
// TODO(@aarifullin): #1487 remove the dual service support.
s.RegisterService(frostFSServiceDesc(containerGRPC.ContainerService_ServiceDesc), server)
})
c.cfgObject.cfgLocalStorage.localStorage.SetContainerSource(cnrRdr)
}
func configureEACLAndContainerSources(c *cfg, client *cntClient.Client, cnrSrc containerCore.Source) (*morphContainerReader, *morphContainerWriter) {
eACLFetcher := &morphEACLFetcher{
w: client,
}
cnrRdr := new(morphContainerReader)
cnrWrt := &morphContainerWriter{
@ -81,8 +80,6 @@ func configureEACLAndContainerSources(c *cfg, client *cntClient.Client, cnrSrc c
}
if c.cfgMorph.cacheTTL <= 0 {
c.cfgObject.eaclSource = eACLFetcher
cnrRdr.eacl = eACLFetcher
c.cfgObject.cnrSource = cnrSrc
cnrRdr.src = cnrSrc
cnrRdr.lister = client
@ -92,7 +89,7 @@ func configureEACLAndContainerSources(c *cfg, client *cntClient.Client, cnrSrc c
if c.cfgMorph.containerCacheSize > 0 {
containerCache := newCachedContainerStorage(cnrSrc, c.cfgMorph.cacheTTL, c.cfgMorph.containerCacheSize)
subscribeToContainerCreation(c, func(e event.Event) {
subscribeToContainerCreation(c, func(ctx context.Context, e event.Event) {
ev := e.(containerEvent.PutSuccess)
// read owner of the created container in order to update the reading cache.
@ -105,32 +102,28 @@ func configureEACLAndContainerSources(c *cfg, client *cntClient.Client, cnrSrc c
} else {
// unlike removal, we expect successful receive of the container
// after successful creation, so logging can be useful
c.log.Error(logs.FrostFSNodeReadNewlyCreatedContainerAfterTheNotification,
c.log.Error(ctx, logs.FrostFSNodeReadNewlyCreatedContainerAfterTheNotification,
zap.Stringer("id", ev.ID),
zap.Error(err),
)
}
c.log.Debug(logs.FrostFSNodeContainerCreationEventsReceipt,
c.log.Debug(ctx, logs.FrostFSNodeContainerCreationEventsReceipt,
zap.Stringer("id", ev.ID),
)
})
subscribeToContainerRemoval(c, func(e event.Event) {
subscribeToContainerRemoval(c, func(ctx context.Context, e event.Event) {
ev := e.(containerEvent.DeleteSuccess)
containerCache.handleRemoval(ev.ID)
c.log.Debug(logs.FrostFSNodeContainerRemovalEventsReceipt,
c.log.Debug(ctx, logs.FrostFSNodeContainerRemovalEventsReceipt,
zap.Stringer("id", ev.ID),
)
})
c.cfgObject.cnrSource = containerCache
}
cachedEACLStorage := newCachedEACLStorage(eACLFetcher, c.cfgMorph.cacheTTL)
c.cfgObject.eaclSource = cachedEACLStorage
cnrRdr.lister = client
cnrRdr.eacl = c.cfgObject.eaclSource
cnrRdr.src = c.cfgObject.cnrSource
}
@ -221,8 +214,6 @@ func (c *cfg) ExternalAddresses() []string {
// implements interface required by container service provided by morph executor.
type morphContainerReader struct {
eacl containerCore.EACLSource
src containerCore.Source
lister interface {
@ -238,10 +229,6 @@ func (x *morphContainerReader) DeletionInfo(id cid.ID) (*containerCore.DelInfo,
return x.src.DeletionInfo(id)
}
func (x *morphContainerReader) GetEACL(id cid.ID) (*containerCore.EACL, error) {
return x.eacl.GetEACL(id)
}
func (x *morphContainerReader) ContainersOf(id *user.ID) ([]cid.ID, error) {
return x.lister.ContainersOf(id)
}
@ -250,10 +237,10 @@ type morphContainerWriter struct {
neoClient *cntClient.Client
}
func (m morphContainerWriter) Put(cnr containerCore.Container) (*cid.ID, error) {
return cntClient.Put(m.neoClient, cnr)
func (m morphContainerWriter) Put(ctx context.Context, cnr containerCore.Container) (*cid.ID, error) {
return cntClient.Put(ctx, m.neoClient, cnr)
}
func (m morphContainerWriter) Delete(witness containerCore.RemovalWitness) error {
return cntClient.Delete(m.neoClient, witness)
func (m morphContainerWriter) Delete(ctx context.Context, witness containerCore.RemovalWitness) error {
return cntClient.Delete(ctx, m.neoClient, witness)
}


@ -16,7 +16,7 @@ import (
const serviceNameControl = "control"
func initControlService(c *cfg) {
func initControlService(ctx context.Context, c *cfg) {
endpoint := controlconfig.GRPC(c.appCfg).Endpoint()
if endpoint == controlconfig.GRPCEndpointDefault {
return
@ -46,21 +46,21 @@ func initControlService(c *cfg) {
lis, err := net.Listen("tcp", endpoint)
if err != nil {
c.log.Error(logs.FrostFSNodeCantListenGRPCEndpointControl, zap.Error(err))
c.log.Error(ctx, logs.FrostFSNodeCantListenGRPCEndpointControl, zap.Error(err))
return
}
c.cfgControlService.server = grpc.NewServer()
c.onShutdown(func() {
stopGRPC("FrostFS Control API", c.cfgControlService.server, c.log)
stopGRPC(ctx, "FrostFS Control API", c.cfgControlService.server, c.log)
})
control.RegisterControlServiceServer(c.cfgControlService.server, ctlSvc)
c.workers = append(c.workers, newWorkerFromFunc(func(ctx context.Context) {
runAndLog(ctx, c, serviceNameControl, false, func(context.Context, *cfg) {
c.log.Info(logs.FrostFSNodeStartListeningEndpoint,
c.log.Info(ctx, logs.FrostFSNodeStartListeningEndpoint,
zap.String("service", serviceNameControl),
zap.String("endpoint", endpoint))
fatalOnErr(c.cfgControlService.server.Serve(lis))
@ -72,23 +72,23 @@ func (c *cfg) NetmapStatus() control.NetmapStatus {
return c.cfgNetmap.state.controlNetmapStatus()
}
func (c *cfg) setHealthStatus(st control.HealthStatus) {
c.notifySystemd(st)
func (c *cfg) setHealthStatus(ctx context.Context, st control.HealthStatus) {
c.notifySystemd(ctx, st)
c.healthStatus.Store(int32(st))
c.metricsCollector.State().SetHealth(int32(st))
}
func (c *cfg) compareAndSwapHealthStatus(oldSt, newSt control.HealthStatus) (swapped bool) {
func (c *cfg) compareAndSwapHealthStatus(ctx context.Context, oldSt, newSt control.HealthStatus) (swapped bool) {
if swapped = c.healthStatus.CompareAndSwap(int32(oldSt), int32(newSt)); swapped {
c.notifySystemd(newSt)
c.notifySystemd(ctx, newSt)
c.metricsCollector.State().SetHealth(int32(newSt))
}
return
}
func (c *cfg) swapHealthStatus(st control.HealthStatus) (old control.HealthStatus) {
func (c *cfg) swapHealthStatus(ctx context.Context, st control.HealthStatus) (old control.HealthStatus) {
old = control.HealthStatus(c.healthStatus.Swap(int32(st)))
c.notifySystemd(st)
c.notifySystemd(ctx, st)
c.metricsCollector.State().SetHealth(int32(st))
return
}
@ -97,7 +97,7 @@ func (c *cfg) HealthStatus() control.HealthStatus {
return control.HealthStatus(c.healthStatus.Load())
}
func (c *cfg) notifySystemd(st control.HealthStatus) {
func (c *cfg) notifySystemd(ctx context.Context, st control.HealthStatus) {
if !c.sdNotify {
return
}
@ -113,6 +113,6 @@ func (c *cfg) notifySystemd(st control.HealthStatus) {
err = sdnotify.Status(fmt.Sprintf("%v", st))
}
if err != nil {
c.log.Error(logs.FailedToReportStatusToSystemd, zap.Error(err))
c.log.Error(ctx, logs.FailedToReportStatusToSystemd, zap.Error(err))
}
}


@ -1,6 +1,7 @@
package main
import (
"context"
"crypto/tls"
"errors"
"net"
@ -18,11 +19,11 @@ import (
const maxRecvMsgSize = 256 << 20
func initGRPC(c *cfg) {
func initGRPC(ctx context.Context, c *cfg) {
var endpointsToReconnect []string
var successCount int
grpcconfig.IterateEndpoints(c.appCfg, func(sc *grpcconfig.Config) {
serverOpts, ok := getGrpcServerOpts(c, sc)
serverOpts, ok := getGrpcServerOpts(ctx, c, sc)
if !ok {
return
}
@ -30,7 +31,7 @@ func initGRPC(c *cfg) {
lis, err := net.Listen("tcp", sc.Endpoint())
if err != nil {
c.metricsCollector.GrpcServerMetrics().MarkUnhealthy(sc.Endpoint())
c.log.Error(logs.FrostFSNodeCantListenGRPCEndpoint, zap.Error(err))
c.log.Error(ctx, logs.FrostFSNodeCantListenGRPCEndpoint, zap.Error(err))
endpointsToReconnect = append(endpointsToReconnect, sc.Endpoint())
return
}
@ -39,7 +40,7 @@ func initGRPC(c *cfg) {
srv := grpc.NewServer(serverOpts...)
c.onShutdown(func() {
stopGRPC("FrostFS Public API", srv, c.log)
stopGRPC(ctx, "FrostFS Public API", srv, c.log)
})
c.cfgGRPC.append(sc.Endpoint(), lis, srv)
@ -52,11 +53,11 @@ func initGRPC(c *cfg) {
c.cfgGRPC.reconnectTimeout = grpcconfig.ReconnectTimeout(c.appCfg)
for _, endpoint := range endpointsToReconnect {
scheduleReconnect(endpoint, c)
scheduleReconnect(ctx, endpoint, c)
}
}
func scheduleReconnect(endpoint string, c *cfg) {
func scheduleReconnect(ctx context.Context, endpoint string, c *cfg) {
c.wg.Add(1)
go func() {
defer c.wg.Done()
@ -65,7 +66,7 @@ func scheduleReconnect(endpoint string, c *cfg) {
for {
select {
case <-t.C:
if tryReconnect(endpoint, c) {
if tryReconnect(ctx, endpoint, c) {
return
}
case <-c.done:
@ -75,20 +76,20 @@ func scheduleReconnect(endpoint string, c *cfg) {
}()
}
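The ticker setup between `defer c.wg.Done()` and the `for` loop is elided by the hunk boundary; given the `t.C` receive and the `reconnectTimeout` field, the presumed shape is:
```go
// Presumed elided setup (sketch, not verbatim source): one ticker per
// endpoint, stopped together with the goroutine.
t := time.NewTicker(c.cfgGRPC.reconnectTimeout)
defer t.Stop()
```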
func tryReconnect(endpoint string, c *cfg) bool {
c.log.Info(logs.FrostFSNodeGRPCReconnecting, zap.String("endpoint", endpoint))
func tryReconnect(ctx context.Context, endpoint string, c *cfg) bool {
c.log.Info(ctx, logs.FrostFSNodeGRPCReconnecting, zap.String("endpoint", endpoint))
serverOpts, found := getGRPCEndpointOpts(endpoint, c)
serverOpts, found := getGRPCEndpointOpts(ctx, endpoint, c)
if !found {
c.log.Warn(logs.FrostFSNodeGRPCServerConfigNotFound, zap.String("endpoint", endpoint))
c.log.Warn(ctx, logs.FrostFSNodeGRPCServerConfigNotFound, zap.String("endpoint", endpoint))
return true
}
lis, err := net.Listen("tcp", endpoint)
if err != nil {
c.metricsCollector.GrpcServerMetrics().MarkUnhealthy(endpoint)
c.log.Error(logs.FrostFSNodeCantListenGRPCEndpoint, zap.Error(err))
c.log.Warn(logs.FrostFSNodeGRPCReconnectFailed, zap.Duration("next_try_in", c.cfgGRPC.reconnectTimeout))
c.log.Error(ctx, logs.FrostFSNodeCantListenGRPCEndpoint, zap.Error(err))
c.log.Warn(ctx, logs.FrostFSNodeGRPCReconnectFailed, zap.Duration("next_try_in", c.cfgGRPC.reconnectTimeout))
return false
}
c.metricsCollector.GrpcServerMetrics().MarkHealthy(endpoint)
@ -96,16 +97,16 @@ func tryReconnect(endpoint string, c *cfg) bool {
srv := grpc.NewServer(serverOpts...)
c.onShutdown(func() {
stopGRPC("FrostFS Public API", srv, c.log)
stopGRPC(ctx, "FrostFS Public API", srv, c.log)
})
c.cfgGRPC.appendAndHandle(endpoint, lis, srv)
c.log.Info(logs.FrostFSNodeGRPCReconnectedSuccessfully, zap.String("endpoint", endpoint))
c.log.Info(ctx, logs.FrostFSNodeGRPCReconnectedSuccessfully, zap.String("endpoint", endpoint))
return true
}
func getGRPCEndpointOpts(endpoint string, c *cfg) (result []grpc.ServerOption, found bool) {
func getGRPCEndpointOpts(ctx context.Context, endpoint string, c *cfg) (result []grpc.ServerOption, found bool) {
unlock := c.LockAppConfigShared()
defer unlock()
grpcconfig.IterateEndpoints(c.appCfg, func(sc *grpcconfig.Config) {
@ -116,7 +117,7 @@ func getGRPCEndpointOpts(endpoint string, c *cfg) (result []grpc.ServerOption, f
return
}
var ok bool
result, ok = getGrpcServerOpts(c, sc)
result, ok = getGrpcServerOpts(ctx, c, sc)
if !ok {
return
}
@ -125,7 +126,7 @@ func getGRPCEndpointOpts(endpoint string, c *cfg) (result []grpc.ServerOption, f
return
}
func getGrpcServerOpts(c *cfg, sc *grpcconfig.Config) ([]grpc.ServerOption, bool) {
func getGrpcServerOpts(ctx context.Context, c *cfg, sc *grpcconfig.Config) ([]grpc.ServerOption, bool) {
serverOpts := []grpc.ServerOption{
grpc.MaxRecvMsgSize(maxRecvMsgSize),
grpc.ChainUnaryInterceptor(
@ -143,7 +144,7 @@ func getGrpcServerOpts(c *cfg, sc *grpcconfig.Config) ([]grpc.ServerOption, bool
if tlsCfg != nil {
cert, err := tls.LoadX509KeyPair(tlsCfg.CertificateFile(), tlsCfg.KeyFile())
if err != nil {
c.log.Error(logs.FrostFSNodeCouldNotReadCertificateFromFile, zap.Error(err))
c.log.Error(ctx, logs.FrostFSNodeCouldNotReadCertificateFromFile, zap.Error(err))
return nil, false
}
@ -174,38 +175,38 @@ func getGrpcServerOpts(c *cfg, sc *grpcconfig.Config) ([]grpc.ServerOption, bool
return serverOpts, true
}
func serveGRPC(c *cfg) {
func serveGRPC(ctx context.Context, c *cfg) {
c.cfgGRPC.performAndSave(func(e string, l net.Listener, s *grpc.Server) {
c.wg.Add(1)
go func() {
defer func() {
c.log.Info(logs.FrostFSNodeStopListeningGRPCEndpoint,
c.log.Info(ctx, logs.FrostFSNodeStopListeningGRPCEndpoint,
zap.Stringer("endpoint", l.Addr()),
)
c.wg.Done()
}()
c.log.Info(logs.FrostFSNodeStartListeningEndpoint,
c.log.Info(ctx, logs.FrostFSNodeStartListeningEndpoint,
zap.String("service", "gRPC"),
zap.Stringer("endpoint", l.Addr()),
)
if err := s.Serve(l); err != nil {
c.metricsCollector.GrpcServerMetrics().MarkUnhealthy(e)
c.log.Error(logs.FrostFSNodeGRPCServerError, zap.Error(err))
c.log.Error(ctx, logs.FrostFSNodeGRPCServerError, zap.Error(err))
c.cfgGRPC.dropConnection(e)
scheduleReconnect(e, c)
scheduleReconnect(ctx, e, c)
}
}()
})
}
func stopGRPC(name string, s *grpc.Server, l *logger.Logger) {
l = &logger.Logger{Logger: l.With(zap.String("name", name))}
func stopGRPC(ctx context.Context, name string, s *grpc.Server, l *logger.Logger) {
l = l.With(zap.String("name", name))
l.Info(logs.FrostFSNodeStoppingGRPCServer)
l.Info(ctx, logs.FrostFSNodeStoppingGRPCServer)
// GracefulStop() may freeze forever, see #1270
done := make(chan struct{})
@ -217,9 +218,9 @@ func stopGRPC(name string, s *grpc.Server, l *logger.Logger) {
select {
case <-done:
case <-time.After(1 * time.Minute):
l.Info(logs.FrostFSNodeGRPCCannotShutdownGracefullyForcingStop)
l.Info(ctx, logs.FrostFSNodeGRPCCannotShutdownGracefullyForcingStop)
s.Stop()
}
l.Info(logs.FrostFSNodeGRPCServerStoppedSuccessfully)
l.Info(ctx, logs.FrostFSNodeGRPCServerStoppedSuccessfully)
}
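The goroutine between `done := make(chan struct{})` and the `select` is elided by the hunk boundary; based on the comment about #1270 and the `select` below it, the presumed shape is:
```go
// Presumed elided body (sketch): GracefulStop runs in a goroutine so the
// select can abandon it after one minute and fall back to a hard Stop.
go func() {
	s.GracefulStop()
	close(done)
}()
```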


@ -20,9 +20,9 @@ type httpComponent struct {
preReload func(c *cfg)
}
func (cmp *httpComponent) init(c *cfg) {
func (cmp *httpComponent) init(ctx context.Context, c *cfg) {
if !cmp.enabled {
c.log.Info(cmp.name + " is disabled")
c.log.Info(ctx, cmp.name+" is disabled")
return
}
// Init server with parameters
@ -39,14 +39,14 @@ func (cmp *httpComponent) init(c *cfg) {
go func() {
defer c.wg.Done()
c.log.Info(logs.FrostFSNodeStartListeningEndpoint,
c.log.Info(ctx, logs.FrostFSNodeStartListeningEndpoint,
zap.String("service", cmp.name),
zap.String("endpoint", cmp.address))
fatalOnErr(srv.Serve())
}()
c.closers = append(c.closers, closer{
cmp.name,
func() { stopAndLog(c, cmp.name, srv.Shutdown) },
func() { stopAndLog(ctx, c, cmp.name, srv.Shutdown) },
})
}
@ -62,7 +62,7 @@ func (cmp *httpComponent) reload(ctx context.Context) error {
// Cleanup
delCloser(cmp.cfg, cmp.name)
// Init server with new parameters
cmp.init(cmp.cfg)
cmp.init(ctx, cmp.cfg)
// Start worker
if cmp.enabled {
startWorker(ctx, cmp.cfg, *getWorker(cmp.cfg, cmp.name))


@ -61,21 +61,21 @@ func main() {
var ctx context.Context
ctx, c.ctxCancel = context.WithCancel(context.Background())
c.setHealthStatus(control.HealthStatus_STARTING)
c.setHealthStatus(ctx, control.HealthStatus_STARTING)
initApp(ctx, c)
bootUp(ctx, c)
c.compareAndSwapHealthStatus(control.HealthStatus_STARTING, control.HealthStatus_READY)
c.compareAndSwapHealthStatus(ctx, control.HealthStatus_STARTING, control.HealthStatus_READY)
wait(c)
}
func initAndLog(c *cfg, name string, initializer func(*cfg)) {
c.log.Info(fmt.Sprintf("initializing %s service...", name))
func initAndLog(ctx context.Context, c *cfg, name string, initializer func(*cfg)) {
c.log.Info(ctx, fmt.Sprintf("initializing %s service...", name))
initializer(c)
c.log.Info(name + " service has been successfully initialized")
c.log.Info(ctx, name+" service has been successfully initialized")
}
func initApp(ctx context.Context, c *cfg) {
@ -85,72 +85,72 @@ func initApp(ctx context.Context, c *cfg) {
c.wg.Done()
}()
setRuntimeParameters(c)
setRuntimeParameters(ctx, c)
metrics, _ := metricsComponent(c)
initAndLog(c, "profiler", initProfilerService)
initAndLog(c, metrics.name, metrics.init)
initAndLog(ctx, c, "profiler", func(c *cfg) { initProfilerService(ctx, c) })
initAndLog(ctx, c, metrics.name, func(c *cfg) { metrics.init(ctx, c) })
initAndLog(c, "tracing", func(c *cfg) { initTracing(ctx, c) })
initAndLog(ctx, c, "tracing", func(c *cfg) { initTracing(ctx, c) })
initLocalStorage(ctx, c)
initAndLog(c, "storage engine", func(c *cfg) {
initAndLog(ctx, c, "storage engine", func(c *cfg) {
fatalOnErr(c.cfgObject.cfgLocalStorage.localStorage.Open(ctx))
fatalOnErr(c.cfgObject.cfgLocalStorage.localStorage.Init(ctx))
})
initAndLog(c, "gRPC", initGRPC)
initAndLog(c, "netmap", func(c *cfg) { initNetmapService(ctx, c) })
initAndLog(ctx, c, "gRPC", func(c *cfg) { initGRPC(ctx, c) })
initAndLog(ctx, c, "netmap", func(c *cfg) { initNetmapService(ctx, c) })
initAccessPolicyEngine(ctx, c)
initAndLog(c, "access policy engine", func(c *cfg) {
initAndLog(ctx, c, "access policy engine", func(c *cfg) {
fatalOnErr(c.cfgObject.cfgAccessPolicyEngine.accessPolicyEngine.LocalOverrideDatabaseCore().Open(ctx))
fatalOnErr(c.cfgObject.cfgAccessPolicyEngine.accessPolicyEngine.LocalOverrideDatabaseCore().Init())
})
initAndLog(c, "accounting", func(c *cfg) { initAccountingService(ctx, c) })
initAndLog(c, "container", func(c *cfg) { initContainerService(ctx, c) })
initAndLog(c, "session", initSessionService)
initAndLog(c, "object", initObjectService)
initAndLog(c, "tree", initTreeService)
initAndLog(c, "apemanager", initAPEManagerService)
initAndLog(c, "control", initControlService)
initAndLog(ctx, c, "accounting", func(c *cfg) { initAccountingService(ctx, c) })
initAndLog(ctx, c, "container", func(c *cfg) { initContainerService(ctx, c) })
initAndLog(ctx, c, "session", initSessionService)
initAndLog(ctx, c, "object", initObjectService)
initAndLog(ctx, c, "tree", initTreeService)
initAndLog(ctx, c, "apemanager", initAPEManagerService)
initAndLog(ctx, c, "control", func(c *cfg) { initControlService(ctx, c) })
initAndLog(c, "morph notifications", func(c *cfg) { listenMorphNotifications(ctx, c) })
initAndLog(ctx, c, "morph notifications", func(c *cfg) { listenMorphNotifications(ctx, c) })
}
func runAndLog(ctx context.Context, c *cfg, name string, logSuccess bool, starter func(context.Context, *cfg)) {
c.log.Info(fmt.Sprintf("starting %s service...", name))
c.log.Info(ctx, fmt.Sprintf("starting %s service...", name))
starter(ctx, c)
if logSuccess {
c.log.Info(name + " service started successfully")
c.log.Info(ctx, name+" service started successfully")
}
}
func stopAndLog(c *cfg, name string, stopper func() error) {
c.log.Debug(fmt.Sprintf("shutting down %s service", name))
func stopAndLog(ctx context.Context, c *cfg, name string, stopper func(context.Context) error) {
c.log.Debug(ctx, fmt.Sprintf("shutting down %s service", name))
err := stopper()
err := stopper(ctx)
if err != nil {
c.log.Debug(fmt.Sprintf("could not shutdown %s server", name),
c.log.Debug(ctx, fmt.Sprintf("could not shutdown %s server", name),
zap.String("error", err.Error()),
)
}
c.log.Debug(name + " service has been stopped")
c.log.Debug(ctx, name+" service has been stopped")
}
func bootUp(ctx context.Context, c *cfg) {
runAndLog(ctx, c, "gRPC", false, func(_ context.Context, c *cfg) { serveGRPC(c) })
runAndLog(ctx, c, "gRPC", false, func(_ context.Context, c *cfg) { serveGRPC(ctx, c) })
runAndLog(ctx, c, "notary", true, makeAndWaitNotaryDeposit)
bootstrapNode(c)
bootstrapNode(ctx, c)
startWorkers(ctx, c)
}
func wait(c *cfg) {
c.log.Info(logs.CommonApplicationStarted,
c.log.Info(context.Background(), logs.CommonApplicationStarted,
zap.String("version", misc.Version))
<-c.done // graceful shutdown
@ -160,12 +160,12 @@ func wait(c *cfg) {
go func() {
defer drain.Done()
for err := range c.internalErr {
c.log.Warn(logs.FrostFSNodeInternalApplicationError,
c.log.Warn(context.Background(), logs.FrostFSNodeInternalApplicationError,
zap.String("message", err.Error()))
}
}()
c.log.Debug(logs.FrostFSNodeWaitingForAllProcessesToStop)
c.log.Debug(context.Background(), logs.FrostFSNodeWaitingForAllProcessesToStop)
c.wg.Wait()


@ -48,7 +48,7 @@ func (c *cfg) initMorphComponents(ctx context.Context) {
fatalOnErr(err)
}
c.log.Info(logs.FrostFSNodeNotarySupport,
c.log.Info(ctx, logs.FrostFSNodeNotarySupport,
zap.Bool("sidechain_enabled", c.cfgMorph.notaryEnabled),
)
@ -64,7 +64,7 @@ func (c *cfg) initMorphComponents(ctx context.Context) {
msPerBlock, err := c.cfgMorph.client.MsPerBlock()
fatalOnErr(err)
c.cfgMorph.cacheTTL = time.Duration(msPerBlock) * time.Millisecond
c.log.Debug(logs.FrostFSNodeMorphcacheTTLFetchedFromNetwork, zap.Duration("value", c.cfgMorph.cacheTTL))
c.log.Debug(ctx, logs.FrostFSNodeMorphcacheTTLFetchedFromNetwork, zap.Duration("value", c.cfgMorph.cacheTTL))
}
if c.cfgMorph.cacheTTL < 0 {
@ -102,7 +102,7 @@ func initMorphClient(ctx context.Context, c *cfg) {
client.WithDialerSource(c.dialerSource),
)
if err != nil {
c.log.Info(logs.FrostFSNodeFailedToCreateNeoRPCClient,
c.log.Info(ctx, logs.FrostFSNodeFailedToCreateNeoRPCClient,
zap.Any("endpoints", addresses),
zap.String("error", err.Error()),
)
@ -111,12 +111,12 @@ func initMorphClient(ctx context.Context, c *cfg) {
}
c.onShutdown(func() {
c.log.Info(logs.FrostFSNodeClosingMorphComponents)
c.log.Info(ctx, logs.FrostFSNodeClosingMorphComponents)
cli.Close()
})
if err := cli.SetGroupSignerScope(); err != nil {
c.log.Info(logs.FrostFSNodeFailedToSetGroupSignerScopeContinueWithGlobal, zap.Error(err))
c.log.Info(ctx, logs.FrostFSNodeFailedToSetGroupSignerScopeContinueWithGlobal, zap.Error(err))
}
c.cfgMorph.client = cli
@ -129,14 +129,14 @@ func makeAndWaitNotaryDeposit(ctx context.Context, c *cfg) {
return
}
tx, vub, err := makeNotaryDeposit(c)
tx, vub, err := makeNotaryDeposit(ctx, c)
fatalOnErr(err)
if tx.Equals(util.Uint256{}) {
// non-error deposit with an empty TX hash means
// that the deposit has already been made; no
// need to wait for it.
c.log.Info(logs.FrostFSNodeNotaryDepositHasAlreadyBeenMade)
c.log.Info(ctx, logs.FrostFSNodeNotaryDepositHasAlreadyBeenMade)
return
}
@ -144,7 +144,7 @@ func makeAndWaitNotaryDeposit(ctx context.Context, c *cfg) {
fatalOnErr(err)
}
func makeNotaryDeposit(c *cfg) (util.Uint256, uint32, error) {
func makeNotaryDeposit(ctx context.Context, c *cfg) (util.Uint256, uint32, error) {
const (
// gasMultiplier defines how many times more the notary
// balance must be compared to the GAS balance of the node:
@ -161,7 +161,7 @@ func makeNotaryDeposit(c *cfg) (util.Uint256, uint32, error) {
return util.Uint256{}, 0, fmt.Errorf("could not calculate notary deposit: %w", err)
}
return c.cfgMorph.client.DepositEndlessNotary(depositAmount)
return c.cfgMorph.client.DepositEndlessNotary(ctx, depositAmount)
}
var (
@ -202,7 +202,7 @@ func waitNotaryDeposit(ctx context.Context, c *cfg, tx util.Uint256, vub uint32)
return fmt.Errorf("could not wait for notary deposit persists in chain: %w", err)
}
if res.Execution.VMState.HasFlag(vmstate.Halt) {
c.log.Info(logs.ClientNotaryDepositTransactionWasSuccessfullyPersisted)
c.log.Info(ctx, logs.ClientNotaryDepositTransactionWasSuccessfullyPersisted)
return nil
}
return errNotaryDepositFail
@ -217,7 +217,7 @@ func listenMorphNotifications(ctx context.Context, c *cfg) {
fromSideChainBlock, err := c.persistate.UInt32(persistateSideChainLastBlockKey)
if err != nil {
fromSideChainBlock = 0
c.log.Warn(logs.FrostFSNodeCantGetLastProcessedSideChainBlockNumber, zap.String("error", err.Error()))
c.log.Warn(ctx, logs.FrostFSNodeCantGetLastProcessedSideChainBlockNumber, zap.String("error", err.Error()))
}
subs, err = subscriber.New(ctx, &subscriber.Params{
@ -246,7 +246,7 @@ func listenMorphNotifications(ctx context.Context, c *cfg) {
setNetmapNotificationParser(c, newEpochNotification, func(src *state.ContainedNotificationEvent) (event.Event, error) {
res, err := netmapEvent.ParseNewEpoch(src)
if err == nil {
c.log.Info(logs.FrostFSNodeNewEpochEventFromSidechain,
c.log.Info(ctx, logs.FrostFSNodeNewEpochEventFromSidechain,
zap.Uint64("number", res.(netmapEvent.NewEpoch).EpochNumber()),
)
}
@ -256,12 +256,12 @@ func listenMorphNotifications(ctx context.Context, c *cfg) {
registerNotificationHandlers(c.cfgNetmap.scriptHash, lis, c.cfgNetmap.parsers, c.cfgNetmap.subscribers)
registerNotificationHandlers(c.cfgContainer.scriptHash, lis, c.cfgContainer.parsers, c.cfgContainer.subscribers)
registerBlockHandler(lis, func(block *block.Block) {
c.log.Debug(logs.FrostFSNodeNewBlock, zap.Uint32("index", block.Index))
registerBlockHandler(lis, func(ctx context.Context, block *block.Block) {
c.log.Debug(ctx, logs.FrostFSNodeNewBlock, zap.Uint32("index", block.Index))
err = c.persistate.SetUInt32(persistateSideChainLastBlockKey, block.Index)
if err != nil {
c.log.Warn(logs.FrostFSNodeCantUpdatePersistentState,
c.log.Warn(ctx, logs.FrostFSNodeCantUpdatePersistentState,
zap.String("chain", "side"),
zap.Uint32("block_index", block.Index))
}


@ -8,7 +8,6 @@ import (
"net"
"sync/atomic"
netmapGRPC "git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/netmap/grpc"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/metrics"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/core/netmap"
@ -19,6 +18,7 @@ import (
netmapTransportGRPC "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/network/transport/netmap/grpc"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/control"
netmapService "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/netmap"
netmapGRPC "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/netmap/grpc"
netmapSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/netmap"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/version"
"go.uber.org/zap"
@ -145,7 +145,7 @@ func initNetmapService(ctx context.Context, c *cfg) {
c.initMorphComponents(ctx)
initNetmapState(c)
initNetmapState(ctx, c)
server := netmapTransportGRPC.New(
netmapService.NewSignService(
@ -166,35 +166,38 @@ func initNetmapService(ctx context.Context, c *cfg) {
c.cfgGRPC.performAndSave(func(_ string, _ net.Listener, s *grpc.Server) {
netmapGRPC.RegisterNetmapServiceServer(s, server)
// TODO(@aarifullin): #1487 remove the dual service support.
s.RegisterService(frostFSServiceDesc(netmapGRPC.NetmapService_ServiceDesc), server)
})
addNewEpochNotificationHandlers(c)
}
func addNewEpochNotificationHandlers(c *cfg) {
addNewEpochNotificationHandler(c, func(ev event.Event) {
addNewEpochNotificationHandler(c, func(_ context.Context, ev event.Event) {
c.cfgNetmap.state.setCurrentEpoch(ev.(netmapEvent.NewEpoch).EpochNumber())
})
addNewEpochAsyncNotificationHandler(c, func(ev event.Event) {
addNewEpochAsyncNotificationHandler(c, func(ctx context.Context, ev event.Event) {
e := ev.(netmapEvent.NewEpoch).EpochNumber()
c.updateContractNodeInfo(e)
c.updateContractNodeInfo(ctx, e)
if !c.needBootstrap() || c.cfgNetmap.reBoostrapTurnedOff.Load() { // fixes #470
return
}
if err := c.bootstrap(); err != nil {
c.log.Warn(logs.FrostFSNodeCantSendRebootstrapTx, zap.Error(err))
if err := c.bootstrap(ctx); err != nil {
c.log.Warn(ctx, logs.FrostFSNodeCantSendRebootstrapTx, zap.Error(err))
}
})
if c.cfgMorph.notaryEnabled {
addNewEpochAsyncNotificationHandler(c, func(_ event.Event) {
_, _, err := makeNotaryDeposit(c)
addNewEpochAsyncNotificationHandler(c, func(ctx context.Context, _ event.Event) {
_, _, err := makeNotaryDeposit(ctx, c)
if err != nil {
c.log.Error(logs.FrostFSNodeCouldNotMakeNotaryDeposit,
c.log.Error(ctx, logs.FrostFSNodeCouldNotMakeNotaryDeposit,
zap.String("error", err.Error()),
)
}
@ -204,13 +207,13 @@ func addNewEpochNotificationHandlers(c *cfg) {
// bootstrapNode adds current node to the Network map.
// Must be called after initNetmapService.
func bootstrapNode(c *cfg) {
func bootstrapNode(ctx context.Context, c *cfg) {
if c.needBootstrap() {
if c.IsMaintenance() {
c.log.Info(logs.FrostFSNodeNodeIsUnderMaintenanceSkipInitialBootstrap)
c.log.Info(ctx, logs.FrostFSNodeNodeIsUnderMaintenanceSkipInitialBootstrap)
return
}
err := c.bootstrap()
err := c.bootstrap(ctx)
fatalOnErrDetails("bootstrap error", err)
}
}
@ -237,17 +240,17 @@ func setNetmapNotificationParser(c *cfg, sTyp string, p event.NotificationParser
// initNetmapState inits current Network map state.
// Must be called after Morph components initialization.
func initNetmapState(c *cfg) {
func initNetmapState(ctx context.Context, c *cfg) {
epoch, err := c.cfgNetmap.wrapper.Epoch()
fatalOnErrDetails("could not initialize current epoch number", err)
var ni *netmapSDK.NodeInfo
ni, err = c.netmapInitLocalNodeState(epoch)
ni, err = c.netmapInitLocalNodeState(ctx, epoch)
fatalOnErrDetails("could not init network state", err)
stateWord := nodeState(ni)
c.log.Info(logs.FrostFSNodeInitialNetworkState,
c.log.Info(ctx, logs.FrostFSNodeInitialNetworkState,
zap.Uint64("epoch", epoch),
zap.String("state", stateWord),
)
@ -276,7 +279,7 @@ func nodeState(ni *netmapSDK.NodeInfo) string {
return "undefined"
}
func (c *cfg) netmapInitLocalNodeState(epoch uint64) (*netmapSDK.NodeInfo, error) {
func (c *cfg) netmapInitLocalNodeState(ctx context.Context, epoch uint64) (*netmapSDK.NodeInfo, error) {
nmNodes, err := c.cfgNetmap.wrapper.GetCandidates()
if err != nil {
return nil, err
@ -304,7 +307,7 @@ func (c *cfg) netmapInitLocalNodeState(epoch uint64) (*netmapSDK.NodeInfo, error
if nmState != candidateState {
// This happens when the node was switched to maintenance without epoch tick.
// We expect it to continue staying in maintenance.
c.log.Info(logs.CandidateStatusPriority,
c.log.Info(ctx, logs.CandidateStatusPriority,
zap.String("netmap", nmState),
zap.String("candidate", candidateState))
}
@ -350,16 +353,16 @@ func addNewEpochAsyncNotificationHandler(c *cfg, h event.Handler) {
var errRelayBootstrap = errors.New("setting netmap status is forbidden in relay mode")
func (c *cfg) SetNetmapStatus(st control.NetmapStatus) error {
func (c *cfg) SetNetmapStatus(ctx context.Context, st control.NetmapStatus) error {
switch st {
default:
return fmt.Errorf("unsupported status %v", st)
case control.NetmapStatus_MAINTENANCE:
return c.setMaintenanceStatus(false)
return c.setMaintenanceStatus(ctx, false)
case control.NetmapStatus_ONLINE, control.NetmapStatus_OFFLINE:
}
c.stopMaintenance()
c.stopMaintenance(ctx)
if !c.needBootstrap() {
return errRelayBootstrap
@ -367,12 +370,12 @@ func (c *cfg) SetNetmapStatus(st control.NetmapStatus) error {
if st == control.NetmapStatus_ONLINE {
c.cfgNetmap.reBoostrapTurnedOff.Store(false)
return bootstrapOnline(c)
return bootstrapOnline(ctx, c)
}
c.cfgNetmap.reBoostrapTurnedOff.Store(true)
return c.updateNetMapState(func(*nmClient.UpdatePeerPrm) {})
return c.updateNetMapState(ctx, func(*nmClient.UpdatePeerPrm) {})
}
func (c *cfg) GetNetmapStatus() (control.NetmapStatus, uint64, error) {
@ -384,11 +387,11 @@ func (c *cfg) GetNetmapStatus() (control.NetmapStatus, uint64, error) {
return st, epoch, nil
}
func (c *cfg) ForceMaintenance() error {
return c.setMaintenanceStatus(true)
func (c *cfg) ForceMaintenance(ctx context.Context) error {
return c.setMaintenanceStatus(ctx, true)
}
func (c *cfg) setMaintenanceStatus(force bool) error {
func (c *cfg) setMaintenanceStatus(ctx context.Context, force bool) error {
netSettings, err := c.cfgNetmap.wrapper.ReadNetworkConfiguration()
if err != nil {
err = fmt.Errorf("read network settings to check maintenance allowance: %w", err)
@ -397,10 +400,10 @@ func (c *cfg) setMaintenanceStatus(force bool) error {
}
if err == nil || force {
c.startMaintenance()
c.startMaintenance(ctx)
if err == nil {
err = c.updateNetMapState((*nmClient.UpdatePeerPrm).SetMaintenance)
err = c.updateNetMapState(ctx, (*nmClient.UpdatePeerPrm).SetMaintenance)
}
if err != nil {
@ -413,12 +416,12 @@ func (c *cfg) setMaintenanceStatus(force bool) error {
// calls UpdatePeerState operation of Netmap contract's client for the local node.
// State setter is used to specify node state to switch to.
func (c *cfg) updateNetMapState(stateSetter func(*nmClient.UpdatePeerPrm)) error {
func (c *cfg) updateNetMapState(ctx context.Context, stateSetter func(*nmClient.UpdatePeerPrm)) error {
var prm nmClient.UpdatePeerPrm
prm.SetKey(c.key.PublicKey().Bytes())
stateSetter(&prm)
_, err := c.cfgNetmap.wrapper.UpdatePeerState(prm)
_, err := c.cfgNetmap.wrapper.UpdatePeerState(ctx, prm)
return err
}
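Both call sites above exercise the same Go idiom: the method expression `(*nmClient.UpdatePeerPrm).SetMaintenance` already has type `func(*nmClient.UpdatePeerPrm)`, while a no-op literal leaves the update in its default (online) form. A condensed sketch:
```go
// Sketch of the two stateSetter forms used above.
func exampleStateSetters(ctx context.Context, c *cfg) error {
	// Method expression: marks the local node as under maintenance.
	if err := c.updateNetMapState(ctx, (*nmClient.UpdatePeerPrm).SetMaintenance); err != nil {
		return err
	}
	// No-op setter: plain UpdatePeerState with default flags.
	return c.updateNetMapState(ctx, func(*nmClient.UpdatePeerPrm) {})
}
```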


@ -2,12 +2,9 @@ package main
import (
"context"
"errors"
"fmt"
"net"
"git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/object"
objectGRPC "git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/object/grpc"
metricsconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/metrics"
policerconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/policer"
replicatorconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/replicator"
@ -16,12 +13,10 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/core/netmap"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/local_object_storage/engine"
morphClient "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client"
cntClient "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client/container"
nmClient "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client/netmap"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/network/cache"
objectTransportGRPC "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/network/transport/object/grpc"
objectService "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/object"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/object/acl"
v2 "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/object/acl/v2"
objectAPE "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/object/ape"
objectwriter "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/object/common/writer"
@ -38,8 +33,8 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/object_manager/placement"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/policer"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/replicator"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
eaclSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/eacl"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/object"
objectGRPC "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/object/grpc"
netmapSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/netmap"
objectSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
@ -63,7 +58,7 @@ type objectSvc struct {
func (c *cfg) MaxObjectSize() uint64 {
sz, err := c.cfgNetmap.wrapper.MaxObjectSize()
if err != nil {
c.log.Error(logs.FrostFSNodeCouldNotGetMaxObjectSizeValue,
c.log.Error(context.Background(), logs.FrostFSNodeCouldNotGetMaxObjectSizeValue,
zap.String("error", err.Error()),
)
}
@ -71,11 +66,11 @@ func (c *cfg) MaxObjectSize() uint64 {
return sz
}
func (s *objectSvc) Put() (objectService.PutObjectStream, error) {
func (s *objectSvc) Put(_ context.Context) (objectService.PutObjectStream, error) {
return s.put.Put()
}
func (s *objectSvc) Patch() (objectService.PatchObjectStream, error) {
func (s *objectSvc) Patch(_ context.Context) (objectService.PatchObjectStream, error) {
return s.patch.Patch()
}
@ -220,12 +215,15 @@ func initObjectService(c *cfg) {
c.cfgGRPC.performAndSave(func(_ string, _ net.Listener, s *grpc.Server) {
objectGRPC.RegisterObjectServiceServer(s, server)
// TODO(@aarifullin): #1487 remove the dual service support.
s.RegisterService(frostFSServiceDesc(objectGRPC.ObjectService_ServiceDesc), server)
})
}
func addPolicer(c *cfg, keyStorage *util.KeyStorage, clientConstructor *cache.ClientCache) {
if policerconfig.UnsafeDisable(c.appCfg) {
c.log.Warn(logs.FrostFSNodePolicerIsDisabled)
c.log.Warn(context.Background(), logs.FrostFSNodePolicerIsDisabled)
return
}
@ -289,7 +287,7 @@ func addPolicer(c *cfg, keyStorage *util.KeyStorage, clientConstructor *cache.Cl
_, err := ls.Inhume(ctx, inhumePrm)
if err != nil {
c.log.Warn(logs.FrostFSNodeCouldNotInhumeMarkRedundantCopyAsGarbage,
c.log.Warn(ctx, logs.FrostFSNodeCouldNotInhumeMarkRedundantCopyAsGarbage,
zap.String("error", err.Error()),
)
}
@ -458,17 +456,10 @@ func createSplitService(c *cfg, sPutV2 *putsvcV2.Service, sGetV2 *getsvcV2.Servi
}
func createACLServiceV2(c *cfg, apeSvc *objectAPE.Service, irFetcher *cachedIRFetcher) v2.Service {
ls := c.cfgObject.cfgLocalStorage.localStorage
return v2.New(
apeSvc,
c.netMapSource,
irFetcher,
acl.NewChecker(
c.cfgNetmap.state,
c.cfgObject.eaclSource,
eaclSDK.NewValidator(),
ls),
c.cfgObject.cnrSource,
v2.WithLogger(c.log),
)
@ -490,29 +481,6 @@ func createAPEService(c *cfg, splitSvc *objectService.TransportSplitter) *object
)
}
type morphEACLFetcher struct {
w *cntClient.Client
}
func (s *morphEACLFetcher) GetEACL(cnr cid.ID) (*containercore.EACL, error) {
eaclInfo, err := s.w.GetEACL(cnr)
if err != nil {
return nil, err
}
binTable, err := eaclInfo.Value.Marshal()
if err != nil {
return nil, fmt.Errorf("marshal eACL table: %w", err)
}
if !eaclInfo.Signature.Verify(binTable) {
// TODO(@cthulhu-rider): #468 use "const" error
return nil, errors.New("invalid signature of the eACL table")
}
return eaclInfo, nil
}
type engineWithoutNotifications struct {
engine *engine.StorageEngine
}


@ -1,17 +1,18 @@
package main
import (
"context"
"runtime"
profilerconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/profiler"
httputil "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/http"
)
func initProfilerService(c *cfg) {
func initProfilerService(ctx context.Context, c *cfg) {
tuneProfilers(c)
pprof, _ := pprofComponent(c)
pprof.init(c)
pprof.init(ctx, c)
}
func pprofComponent(c *cfg) (*httpComponent, bool) {


@ -1,6 +1,7 @@
package main
import (
"context"
"os"
"runtime/debug"
@ -9,17 +10,17 @@ import (
"go.uber.org/zap"
)
func setRuntimeParameters(c *cfg) {
func setRuntimeParameters(ctx context.Context, c *cfg) {
if len(os.Getenv("GOMEMLIMIT")) != 0 {
// default limit < yaml limit < app env limit < GOMEMLIMIT
c.log.Warn(logs.RuntimeSoftMemoryDefinedWithGOMEMLIMIT)
c.log.Warn(ctx, logs.RuntimeSoftMemoryDefinedWithGOMEMLIMIT)
return
}
memLimitBytes := runtime.GCMemoryLimitBytes(c.appCfg)
previous := debug.SetMemoryLimit(memLimitBytes)
if memLimitBytes != previous {
c.log.Info(logs.RuntimeSoftMemoryLimitUpdated,
c.log.Info(ctx, logs.RuntimeSoftMemoryLimitUpdated,
zap.Int64("new_value", memLimitBytes),
zap.Int64("old_value", previous))
}
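One property of `debug.SetMemoryLimit` worth noting here: it returns the previously effective limit (which is what makes the `memLimitBytes != previous` comparison above work), and a negative input reads the current limit without changing it:
```go
package main

import (
	"fmt"
	"runtime/debug"
)

func main() {
	// A negative input leaves the limit untouched and only reports it.
	current := debug.SetMemoryLimit(-1)
	fmt.Println("soft memory limit:", current)
}
```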


@ -6,8 +6,6 @@ import (
"net"
"time"
"git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/session"
sessionGRPC "git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/session/grpc"
nodeconfig "git.frostfs.info/TrueCloudLab/frostfs-node/cmd/frostfs-node/config/node"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/event"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/event/netmap"
@ -16,6 +14,8 @@ import (
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/session/storage"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/session/storage/persistent"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/services/session/storage/temporary"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/session"
sessionGRPC "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/session/grpc"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/user"
"google.golang.org/grpc"
)
@ -48,7 +48,7 @@ func initSessionService(c *cfg) {
_ = c.privateTokenStore.Close()
})
addNewEpochNotificationHandler(c, func(ev event.Event) {
addNewEpochNotificationHandler(c, func(_ context.Context, ev event.Event) {
c.privateTokenStore.RemoveOld(ev.(netmap.NewEpoch).EpochNumber())
})
@ -61,5 +61,8 @@ func initSessionService(c *cfg) {
c.cfgGRPC.performAndSave(func(_ string, _ net.Listener, s *grpc.Server) {
sessionGRPC.RegisterSessionServiceServer(s, server)
// TODO(@aarifullin): #1487 remove the dual service support.
s.RegisterService(frostFSServiceDesc(sessionGRPC.SessionService_ServiceDesc), server)
})
}


@ -13,12 +13,12 @@ import (
func initTracing(ctx context.Context, c *cfg) {
conf, err := tracingconfig.ToTracingConfig(c.appCfg)
if err != nil {
c.log.Error(logs.FrostFSNodeFailedInitTracing, zap.Error(err))
c.log.Error(ctx, logs.FrostFSNodeFailedInitTracing, zap.Error(err))
return
}
_, err = tracing.Setup(ctx, *conf)
if err != nil {
c.log.Error(logs.FrostFSNodeFailedInitTracing, zap.Error(err))
c.log.Error(ctx, logs.FrostFSNodeFailedInitTracing, zap.Error(err))
return
}
@ -29,7 +29,7 @@ func initTracing(ctx context.Context, c *cfg) {
defer cancel()
err := tracing.Shutdown(ctx) // cfg context cancels before close
if err != nil {
c.log.Error(logs.FrostFSNodeFailedShutdownTracing, zap.Error(err))
c.log.Error(ctx, logs.FrostFSNodeFailedShutdownTracing, zap.Error(err))
}
},
})


@ -44,7 +44,7 @@ func (c cnrSource) List() ([]cid.ID, error) {
func initTreeService(c *cfg) {
treeConfig := treeconfig.Tree(c.appCfg)
if !treeConfig.Enabled() {
c.log.Info(logs.FrostFSNodeTreeServiceIsNotEnabledSkipInitialization)
c.log.Info(context.Background(), logs.FrostFSNodeTreeServiceIsNotEnabledSkipInitialization)
return
}
@ -62,6 +62,7 @@ func initTreeService(c *cfg) {
tree.WithReplicationTimeout(treeConfig.ReplicationTimeout()),
tree.WithReplicationChannelCapacity(treeConfig.ReplicationChannelCapacity()),
tree.WithReplicationWorkerCount(treeConfig.ReplicationWorkerCount()),
tree.WithSyncBatchSize(treeConfig.SyncBatchSize()),
tree.WithAuthorizedKeys(treeConfig.AuthorizedKeys()),
tree.WithMetrics(c.metricsCollector.TreeService()),
tree.WithAPELocalOverrideStorage(c.cfgObject.cfgAccessPolicyEngine.accessPolicyEngine.LocalStorage()),
@ -79,10 +80,10 @@ func initTreeService(c *cfg) {
}))
if d := treeConfig.SyncInterval(); d == 0 {
addNewEpochNotificationHandler(c, func(_ event.Event) {
addNewEpochNotificationHandler(c, func(ctx context.Context, _ event.Event) {
err := c.treeService.SynchronizeAll()
if err != nil {
c.log.Error(logs.FrostFSNodeCouldNotSynchronizeTreeService, zap.Error(err))
c.log.Error(ctx, logs.FrostFSNodeCouldNotSynchronizeTreeService, zap.Error(err))
}
})
} else {
@ -93,7 +94,7 @@ func initTreeService(c *cfg) {
for range tick.C {
err := c.treeService.SynchronizeAll()
if err != nil {
c.log.Error(logs.FrostFSNodeCouldNotSynchronizeTreeService, zap.Error(err))
c.log.Error(context.Background(), logs.FrostFSNodeCouldNotSynchronizeTreeService, zap.Error(err))
if errors.Is(err, tree.ErrShuttingDown) {
return
}
@ -102,15 +103,15 @@ func initTreeService(c *cfg) {
}()
}
subscribeToContainerRemoval(c, func(e event.Event) {
subscribeToContainerRemoval(c, func(ctx context.Context, e event.Event) {
ev := e.(containerEvent.DeleteSuccess)
// This is executed asynchronously, so we don't care about the operation taking some time.
c.log.Debug(logs.FrostFSNodeRemovingAllTreesForContainer, zap.Stringer("cid", ev.ID))
err := c.treeService.DropTree(context.Background(), ev.ID, "")
c.log.Debug(ctx, logs.FrostFSNodeRemovingAllTreesForContainer, zap.Stringer("cid", ev.ID))
err := c.treeService.DropTree(ctx, ev.ID, "")
if err != nil && !errors.Is(err, pilorama.ErrTreeNotFound) {
// Ignore pilorama.ErrTreeNotFound but other errors, including shard.ErrReadOnly, should be logged.
c.log.Error(logs.FrostFSNodeContainerRemovalEventReceivedButTreesWerentRemoved,
c.log.Error(ctx, logs.FrostFSNodeContainerRemovalEventReceivedButTreesWerentRemoved,
zap.Stringer("cid", ev.ID),
zap.String("error", err.Error()))
}


@ -31,6 +31,7 @@ FROSTFS_TREE_REPLICATION_CHANNEL_CAPACITY=32
FROSTFS_TREE_REPLICATION_WORKER_COUNT=32
FROSTFS_TREE_REPLICATION_TIMEOUT=5s
FROSTFS_TREE_SYNC_INTERVAL=1h
FROSTFS_TREE_SYNC_BATCH_SIZE=2000
FROSTFS_TREE_AUTHORIZED_KEYS="0397d207ea77909f7d66fa6f36d08daae22ace672be7ea4f53513484dde8a142a0 02053819235c20d784132deba10bb3061629e3a5c819a039ef091841d9d35dad56"
# gRPC section
@ -202,6 +203,10 @@ FROSTFS_TRACING_ENABLED=true
FROSTFS_TRACING_ENDPOINT="localhost"
FROSTFS_TRACING_EXPORTER="otlp_grpc"
FROSTFS_TRACING_TRUSTED_CA=""
FROSTFS_TRACING_ATTRIBUTES_0_KEY=key0
FROSTFS_TRACING_ATTRIBUTES_0_VALUE=value
FROSTFS_TRACING_ATTRIBUTES_1_KEY=key1
FROSTFS_TRACING_ATTRIBUTES_1_VALUE=value
FROSTFS_RUNTIME_SOFT_MEMORY_LIMIT=1073741824


@ -69,6 +69,7 @@
"replication_worker_count": 32,
"replication_timeout": "5s",
"sync_interval": "1h",
"sync_batch_size": 2000,
"authorized_keys": [
"0397d207ea77909f7d66fa6f36d08daae22ace672be7ea4f53513484dde8a142a0",
"02053819235c20d784132deba10bb3061629e3a5c819a039ef091841d9d35dad56"
@ -258,9 +259,19 @@
},
"tracing": {
"enabled": true,
"endpoint": "localhost:9090",
"endpoint": "localhost",
"exporter": "otlp_grpc",
"trusted_ca": "/etc/ssl/tracing.pem"
"trusted_ca": "",
"attributes":[
{
"key": "key0",
"value": "value"
},
{
"key": "key1",
"value": "value"
}
]
},
"runtime": {
"soft_memory_limit": 1073741824


@ -59,6 +59,7 @@ tree:
replication_channel_capacity: 32
replication_timeout: 5s
sync_interval: 1h
sync_batch_size: 2000
authorized_keys: # list of hex-encoded public keys that have rights to use the Tree Service with frostfs-cli
- 0397d207ea77909f7d66fa6f36d08daae22ace672be7ea4f53513484dde8a142a0
- 02053819235c20d784132deba10bb3061629e3a5c819a039ef091841d9d35dad56
@ -238,6 +239,11 @@ tracing:
exporter: "otlp_grpc"
endpoint: "localhost"
trusted_ca: ""
attributes:
- key: key0
value: value
- key: key1
value: value
runtime:
soft_memory_limit: 1gb


@ -78,7 +78,12 @@
"FROSTFS_STORAGE_SHARD_1_PILORAMA_PATH":"${workspaceFolder}/.cache/storage/s1/pilorama1",
"FROSTFS_PROMETHEUS_ENABLED":"true",
"FROSTFS_PROMETHEUS_ADDRESS":"127.0.0.1:9090",
"FROSTFS_PROMETHEUS_SHUTDOWN_TIMEOUT":"15s"
"FROSTFS_PROMETHEUS_SHUTDOWN_TIMEOUT":"15s",
"FROSTFS_TRACING_ENABLED":"true",
"FROSTFS_TRACING_EXPORTER":"otlp_grpc",
"FROSTFS_TRACING_ENDPOINT":"127.0.0.1:4317",
"FROSTFS_TRACING_ATTRIBUTES_0_KEY":"host.ip",
"FROSTFS_TRACING_ATTRIBUTES_0_VALUE":"127.0.0.1:8080"
},
"postDebugTask": "env-down"
},
@ -129,7 +134,12 @@
"FROSTFS_STORAGE_SHARD_1_PILORAMA_PATH":"${workspaceFolder}/.cache/storage/s2/pilorama1",
"FROSTFS_PROMETHEUS_ENABLED":"true",
"FROSTFS_PROMETHEUS_ADDRESS":"127.0.0.1:9091",
"FROSTFS_PROMETHEUS_SHUTDOWN_TIMEOUT":"15s"
"FROSTFS_PROMETHEUS_SHUTDOWN_TIMEOUT":"15s",
"FROSTFS_TRACING_ENABLED":"true",
"FROSTFS_TRACING_EXPORTER":"otlp_grpc",
"FROSTFS_TRACING_ENDPOINT":"127.0.0.1:4317",
"FROSTFS_TRACING_ATTRIBUTES_0_KEY":"host.ip",
"FROSTFS_TRACING_ATTRIBUTES_0_VALUE":"127.0.0.1:8082"
},
"postDebugTask": "env-down"
},
@ -180,7 +190,12 @@
"FROSTFS_STORAGE_SHARD_1_PILORAMA_PATH":"${workspaceFolder}/.cache/storage/s3/pilorama1",
"FROSTFS_PROMETHEUS_ENABLED":"true",
"FROSTFS_PROMETHEUS_ADDRESS":"127.0.0.1:9092",
"FROSTFS_PROMETHEUS_SHUTDOWN_TIMEOUT":"15s"
"FROSTFS_PROMETHEUS_SHUTDOWN_TIMEOUT":"15s",
"FROSTFS_TRACING_ENABLED":"true",
"FROSTFS_TRACING_EXPORTER":"otlp_grpc",
"FROSTFS_TRACING_ENDPOINT":"127.0.0.1:4317",
"FROSTFS_TRACING_ATTRIBUTES_0_KEY":"host.ip",
"FROSTFS_TRACING_ATTRIBUTES_0_VALUE":"127.0.0.1:8084"
},
"postDebugTask": "env-down"
},
@ -231,7 +246,12 @@
"FROSTFS_STORAGE_SHARD_1_PILORAMA_PATH":"${workspaceFolder}/.cache/storage/s4/pilorama1",
"FROSTFS_PROMETHEUS_ENABLED":"true",
"FROSTFS_PROMETHEUS_ADDRESS":"127.0.0.1:9093",
"FROSTFS_PROMETHEUS_SHUTDOWN_TIMEOUT":"15s"
"FROSTFS_PROMETHEUS_SHUTDOWN_TIMEOUT":"15s",
"FROSTFS_TRACING_ENABLED":"true",
"FROSTFS_TRACING_EXPORTER":"otlp_grpc",
"FROSTFS_TRACING_ENDPOINT":"127.0.0.1:4317",
"FROSTFS_TRACING_ATTRIBUTES_0_KEY":"host.ip",
"FROSTFS_TRACING_ATTRIBUTES_0_VALUE":"127.0.0.1:8086"
},
"postDebugTask": "env-down"
}


@ -14,3 +14,15 @@ services:
- ./neo-go/node-wallet.json:/wallets/node-wallet.json
- ./neo-go/config.yml:/wallets/config.yml
- ./neo-go/wallet.json:/wallets/wallet.json
jaeger:
image: jaegertracing/all-in-one:latest
container_name: jaeger
ports:
- '4317:4317' #OTLP over gRPC
- '4318:4318' #OTLP over HTTP
- '16686:16686' #frontend
stop_signal: SIGKILL
environment:
- COLLECTOR_OTLP_ENABLED=true
- SPAN_STORAGE_TYPE=badger
- BADGER_EPHEMERAL=true
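With this sidecar running, the tracing settings from the launch configurations above (`FROSTFS_TRACING_EXPORTER=otlp_grpc`, `FROSTFS_TRACING_ENDPOINT=127.0.0.1:4317`) point the node at the OTLP/gRPC port, and collected spans become browsable on the Jaeger frontend at port 16686. With `BADGER_EPHEMERAL=true`, span storage lives in a temporary location and is discarded when the container stops.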


@ -55,7 +55,7 @@ Add an entry to the `CHANGELOG.md` following the style established there.
* update `Unreleased...new` and `new...old` diff-links at the bottom of the file
* add optional codename and release date in the heading
* remove all empty sections such as `Added`, `Removed`, etc.
* make sure all changes have references to GitHub issues in `#123` format (if possible)
* make sure all changes have references to relevant issues in `#123` format (if possible)
* clean up all `Unreleased` sections and leave them empty
### Make release commit
@ -110,9 +110,9 @@ $ docker push truecloudlab/frostfs-cli:${FROSTFS_REVISION}
$ docker push truecloudlab/frostfs-adm:${FROSTFS_REVISION}
```
### Make a proper GitHub release (if not automated)
### Make a proper release (if not automated)
Edit an automatically-created release on GitHub, copy things from `CHANGELOG.md`.
Edit an automatically-created release on git.frostfs.info, copy things from `CHANGELOG.md`.
Build and tar release binaries with `make prepare-release`, attach them to
the release. Publish the release.
@ -121,7 +121,7 @@ the release. Publish the release.
Prepare pull-request in [frostfs-devenv](https://git.frostfs.info/TrueCloudLab/frostfs-dev-env)
with new versions.
### Close GitHub milestone
### Close milestone
Look up [milestones](https://git.frostfs.info/TrueCloudLab/frostfs-node/milestones) and close the release one if exists.


@ -412,12 +412,12 @@ object:
- $attribute:ClusterName
```
| Parameter | Type | Default value | Description |
|-----------------------------|------------|---------------|------------------------------------------------------------------------------------------------------|
| `delete.tombstone_lifetime` | `int` | `5` | Tombstone lifetime for removed objects in epochs. |
| `put.remote_pool_size` | `int` | `10` | Max pool size for performing remote `PUT` operations. Used by Policer and Replicator services. |
| `put.local_pool_size` | `int` | `10` | Max pool size for performing local `PUT` operations. Used by Policer and Replicator services. |
| `get.priority` | `[]string` | | List of metrics of nodes for prioritization. Used for computing response on GET and SEARCH requests. |
| Parameter | Type | Default value | Description |
|-----------------------------|------------|---------------|------------------------------------------------------------------------------------------------|
| `delete.tombstone_lifetime` | `int` | `5` | Tombstone lifetime for removed objects in epochs. |
| `put.remote_pool_size` | `int` | `10` | Max pool size for performing remote `PUT` operations. Used by Policer and Replicator services. |
| `put.local_pool_size` | `int` | `10` | Max pool size for performing local `PUT` operations. Used by Policer and Replicator services. |
| `get.priority` | `[]string` | | List of metrics of nodes for prioritization. Used for computing response on GET requests. |
# `runtime` section
Contains runtime parameters.


@ -7,7 +7,7 @@
## Update CI
Change Golang versions for unit test in CI.
There is `go` section in `.github/workflows/go.yaml` file:
There is `go` section in `.forgejo/workflows/*.yml` files:
```yaml
jobs:
test:

go.mod

@ -4,12 +4,11 @@ go 1.22
require (
code.gitea.io/sdk/gitea v0.17.1
git.frostfs.info/TrueCloudLab/frostfs-api-go/v2 v2.16.1-0.20241007120543-29c522d5d8a3
git.frostfs.info/TrueCloudLab/frostfs-contract v0.20.0
git.frostfs.info/TrueCloudLab/frostfs-crypto v0.6.0
git.frostfs.info/TrueCloudLab/frostfs-locode-db v0.4.1-0.20240710074952-65761deb5c0d
git.frostfs.info/TrueCloudLab/frostfs-observability v0.0.0-20240909114314-666d326cc573
git.frostfs.info/TrueCloudLab/frostfs-sdk-go v0.0.0-20241010110344-99c5c5836509
git.frostfs.info/TrueCloudLab/frostfs-observability v0.0.0-20241112082307-f17779933e88
git.frostfs.info/TrueCloudLab/frostfs-sdk-go v0.0.0-20241107121119-cb813e27a823
git.frostfs.info/TrueCloudLab/hrw v1.2.1
git.frostfs.info/TrueCloudLab/multinet v0.0.0-20241015075604-6cb0d80e0972
git.frostfs.info/TrueCloudLab/policy-engine v0.0.0-20240814080254-96225afacb88

go.sum

Binary file not shown.


@ -4,7 +4,7 @@ import (
"encoding/hex"
"fmt"
v2acl "git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/acl"
v2acl "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/acl"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/eacl"
apechain "git.frostfs.info/TrueCloudLab/policy-engine/pkg/chain"
nativeschema "git.frostfs.info/TrueCloudLab/policy-engine/schema/native"


@ -1,10 +1,12 @@
package audit
import (
"git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/session"
"context"
crypto "git.frostfs.info/TrueCloudLab/frostfs-crypto"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/logger"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/session"
"github.com/nspcc-dev/neo-go/pkg/crypto/keys"
"go.uber.org/zap"
)
@ -17,15 +19,15 @@ type Target interface {
String() string
}
func LogRequest(log *logger.Logger, operation string, req Request, target Target, status bool) {
func LogRequest(ctx context.Context, log *logger.Logger, operation string, req Request, target Target, status bool) {
var key []byte
if req != nil {
key = req.GetVerificationHeader().GetBodySignature().GetKey()
}
LogRequestWithKey(log, operation, key, target, status)
LogRequestWithKey(ctx, log, operation, key, target, status)
}
func LogRequestWithKey(log *logger.Logger, operation string, key []byte, target Target, status bool) {
func LogRequestWithKey(ctx context.Context, log *logger.Logger, operation string, key []byte, target Target, status bool) {
object, subject := NotDefined, NotDefined
publicKey := crypto.UnmarshalPublicKey(key)
@ -37,7 +39,7 @@ func LogRequestWithKey(log *logger.Logger, operation string, key []byte, target
object = target.String()
}
log.Info(logs.AuditEventLogRecord,
log.Info(ctx, logs.AuditEventLogRecord,
zap.String("operation", operation),
zap.String("object", object),
zap.String("subject", subject),


@ -3,7 +3,7 @@ package audit
import (
"strings"
"git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/refs"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/refs"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
)


@ -17,8 +17,6 @@ const (
)
const (
InnerringNonalphabetModeDoNotStopContainerEstimations = "non-alphabet mode, do not stop container estimations"
InnerringCantStopEpochEstimation = "can't stop epoch estimation"
InnerringCantMakeNotaryDepositInMainChain = "can't make notary deposit in main chain"
InnerringCantMakeNotaryDepositInSideChain = "can't make notary deposit in side chain"
InnerringNotaryDepositHasAlreadyBeenMade = "notary deposit has already been made"
@ -343,7 +341,6 @@ const (
NetmapCantGetTransactionHeight = "can't get transaction height"
NetmapCantResetEpochTimer = "can't reset epoch timer"
NetmapCantGetNetmapSnapshotToPerformCleanup = "can't get netmap snapshot to perform cleanup"
NetmapCantStartContainerSizeEstimation = "can't start container size estimation"
NetmapNonAlphabetModeIgnoreNewEpochTick = "non alphabet mode, ignore new epoch tick"
NetmapNextEpoch = "next epoch"
NetmapCantInvokeNetmapNewEpoch = "can't invoke netmap.NewEpoch"


@ -3,8 +3,8 @@ package client
import (
"context"
rawclient "git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/rpc/client"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/network"
rawclient "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/rpc/client"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/client"
)


@ -1,7 +1,7 @@
package container
import (
"git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/refs"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/refs"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/session"
)


@ -58,16 +58,3 @@ type EACL struct {
// Session within which Value was set. Nil means session absence.
Session *session.Container
}
// EACLSource is the interface that wraps
// basic methods of extended ACL table source.
type EACLSource interface {
// GetEACL reads the table from the source by identifier.
// It returns any error encountered.
//
// GetEACL must return exactly one non-nil value.
//
// Must return apistatus.ErrEACLNotFound if requested
// eACL table is not in source.
GetEACL(cid.ID) (*EACL, error)
}


@ -8,11 +8,11 @@ import (
"fmt"
"strconv"
objectV2 "git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/object"
"git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/refs"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/core/container"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/core/netmap"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/logger"
objectV2 "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/object"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/refs"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/acl"
frostfsecdsa "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/crypto/ecdsa"
objectSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
@ -117,7 +117,7 @@ func (v *FormatValidator) Validate(ctx context.Context, obj *objectSDK.Object, u
}
if !unprepared {
if err := v.validateSignatureKey(obj); err != nil {
if err := v.validateSignatureKey(ctx, obj); err != nil {
return fmt.Errorf("(%T) could not validate signature key: %w", v, err)
}
@ -134,7 +134,7 @@ func (v *FormatValidator) Validate(ctx context.Context, obj *objectSDK.Object, u
return nil
}
func (v *FormatValidator) validateSignatureKey(obj *objectSDK.Object) error {
func (v *FormatValidator) validateSignatureKey(ctx context.Context, obj *objectSDK.Object) error {
sig := obj.Signature()
if sig == nil {
return errMissingSignature
@ -156,7 +156,7 @@ func (v *FormatValidator) validateSignatureKey(obj *objectSDK.Object) error {
ownerID := obj.OwnerID()
if token == nil && obj.ECHeader() != nil {
role, err := v.isIROrContainerNode(obj, binKey)
role, err := v.isIROrContainerNode(ctx, obj, binKey)
if err != nil {
return err
}
@ -172,7 +172,7 @@ func (v *FormatValidator) validateSignatureKey(obj *objectSDK.Object) error {
}
if v.verifyTokenIssuer {
role, err := v.isIROrContainerNode(obj, binKey)
role, err := v.isIROrContainerNode(ctx, obj, binKey)
if err != nil {
return err
}
@ -190,7 +190,7 @@ func (v *FormatValidator) validateSignatureKey(obj *objectSDK.Object) error {
return nil
}
func (v *FormatValidator) isIROrContainerNode(obj *objectSDK.Object, signerKey []byte) (acl.Role, error) {
func (v *FormatValidator) isIROrContainerNode(ctx context.Context, obj *objectSDK.Object, signerKey []byte) (acl.Role, error) {
cnrID, containerIDSet := obj.ContainerID()
if !containerIDSet {
return acl.RoleOthers, errNilCID
@ -204,7 +204,7 @@ func (v *FormatValidator) isIROrContainerNode(obj *objectSDK.Object, signerKey [
return acl.RoleOthers, fmt.Errorf("failed to get container (id=%s): %w", cnrID.EncodeToString(), err)
}
res, err := v.senderClassifier.IsInnerRingOrContainerNode(signerKey, cnrID, cnr.Value)
res, err := v.senderClassifier.IsInnerRingOrContainerNode(ctx, signerKey, cnrID, cnr.Value)
if err != nil {
return acl.RoleOthers, err
}


@ -7,9 +7,9 @@ import (
"strconv"
"testing"
objectV2 "git.frostfs.info/TrueCloudLab/frostfs-api-go/v2/object"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/core/container"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/logger"
objectV2 "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/api/object"
containerSDK "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
cidtest "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id/test"
@ -65,7 +65,7 @@ func TestFormatValidator_Validate(t *testing.T) {
epoch: curEpoch,
}),
WithLockSource(ls),
WithLogger(&logger.Logger{Logger: zaptest.NewLogger(t)}),
WithLogger(logger.NewLoggerWrapper(zaptest.NewLogger(t))),
)
ownerKey, err := keys.NewPrivateKey()
@ -290,7 +290,7 @@ func TestFormatValidator_ValidateTokenIssuer(t *testing.T) {
}),
WithLockSource(ls),
WithVerifySessionTokenIssuer(false),
WithLogger(&logger.Logger{Logger: zaptest.NewLogger(t)}),
WithLogger(logger.NewLoggerWrapper(zaptest.NewLogger(t))),
)
tok := sessiontest.Object()
@ -339,7 +339,7 @@ func TestFormatValidator_ValidateTokenIssuer(t *testing.T) {
},
},
),
WithLogger(&logger.Logger{Logger: zaptest.NewLogger(t)}),
WithLogger(logger.NewLoggerWrapper(zaptest.NewLogger(t))),
)
tok := sessiontest.Object()
@ -417,7 +417,7 @@ func TestFormatValidator_ValidateTokenIssuer(t *testing.T) {
currentEpoch: curEpoch,
},
),
WithLogger(&logger.Logger{Logger: zaptest.NewLogger(t)}),
WithLogger(logger.NewLoggerWrapper(zaptest.NewLogger(t))),
)
require.NoError(t, v.Validate(context.Background(), obj, false))
@ -491,7 +491,7 @@ func TestFormatValidator_ValidateTokenIssuer(t *testing.T) {
currentEpoch: curEpoch,
},
),
WithLogger(&logger.Logger{Logger: zaptest.NewLogger(t)}),
WithLogger(logger.NewLoggerWrapper(zaptest.NewLogger(t))),
)
require.NoError(t, v.Validate(context.Background(), obj, false))
@ -567,7 +567,7 @@ func TestFormatValidator_ValidateTokenIssuer(t *testing.T) {
currentEpoch: curEpoch,
},
),
WithLogger(&logger.Logger{Logger: zaptest.NewLogger(t)}),
WithLogger(logger.NewLoggerWrapper(zaptest.NewLogger(t))),
)
require.Error(t, v.Validate(context.Background(), obj, false))


@ -2,6 +2,7 @@ package object
import (
"bytes"
"context"
"crypto/sha256"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs"
@ -40,6 +41,7 @@ type ClassifyResult struct {
}
func (c SenderClassifier) Classify(
ctx context.Context,
ownerID *user.ID,
ownerKey *keys.PublicKey,
idCnr cid.ID,
@ -57,14 +59,14 @@ func (c SenderClassifier) Classify(
}, nil
}
return c.IsInnerRingOrContainerNode(ownerKeyInBytes, idCnr, cnr)
return c.IsInnerRingOrContainerNode(ctx, ownerKeyInBytes, idCnr, cnr)
}
func (c SenderClassifier) IsInnerRingOrContainerNode(ownerKeyInBytes []byte, idCnr cid.ID, cnr container.Container) (*ClassifyResult, error) {
func (c SenderClassifier) IsInnerRingOrContainerNode(ctx context.Context, ownerKeyInBytes []byte, idCnr cid.ID, cnr container.Container) (*ClassifyResult, error) {
isInnerRingNode, err := c.isInnerRingKey(ownerKeyInBytes)
if err != nil {
// do not throw error, try best case matching
c.log.Debug(logs.V2CantCheckIfRequestFromInnerRing,
c.log.Debug(ctx, logs.V2CantCheckIfRequestFromInnerRing,
zap.String("error", err.Error()))
} else if isInnerRingNode {
return &ClassifyResult{
@ -81,7 +83,7 @@ func (c SenderClassifier) IsInnerRingOrContainerNode(ownerKeyInBytes []byte, idC
// error might happen if request has `RoleOther` key and placement
// is not possible for previous epoch, so
// do not throw error, try best case matching
c.log.Debug(logs.V2CantCheckIfRequestFromContainerNode,
c.log.Debug(ctx, logs.V2CantCheckIfRequestFromContainerNode,
zap.String("error", err.Error()))
} else if isContainerNode {
return &ClassifyResult{


@ -3,14 +3,10 @@ package innerring
import (
"context"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/innerring/processors/alphabet"
timerEvent "git.frostfs.info/TrueCloudLab/frostfs-node/pkg/innerring/timers"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client/container"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/timer"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/logger"
"github.com/nspcc-dev/neo-go/pkg/util"
"go.uber.org/zap"
)
type (
@ -19,28 +15,12 @@ type (
EpochDuration() uint64
}
alphaState interface {
IsAlphabet() bool
}
newEpochHandler func()
containerEstimationStopper interface {
StopEstimation(p container.StopEstimationPrm) error
}
epochTimerArgs struct {
l *logger.Logger
alphabetState alphaState
newEpochHandlers []newEpochHandler
cnrWrapper containerEstimationStopper // to invoke stop container estimation
epoch epochState // to specify which epoch to stop, and epoch duration
stopEstimationDMul uint32 // X: X/Y of epoch in blocks
stopEstimationDDiv uint32 // Y: X/Y of epoch in blocks
epoch epochState // to specify which epoch to stop, and epoch duration
}
emitTimerArgs struct {
@ -49,7 +29,7 @@ type (
emitDuration uint32 // in blocks
}
depositor func() (util.Uint256, error)
depositor func(context.Context) (util.Uint256, error)
awaiter func(context.Context, util.Uint256) error
)
@ -74,7 +54,7 @@ func (s *Server) tickTimers(h uint32) {
}
func newEpochTimer(args *epochTimerArgs) *timer.BlockTimer {
epochTimer := timer.NewBlockTimer(
return timer.NewBlockTimer(
func() (uint32, error) {
return uint32(args.epoch.EpochDuration()), nil
},
@ -84,42 +64,13 @@ func newEpochTimer(args *epochTimerArgs) *timer.BlockTimer {
}
},
)
// sub-timer for epoch timer to tick stop container estimation events at
// some block in epoch
epochTimer.OnDelta(
args.stopEstimationDMul,
args.stopEstimationDDiv,
func() {
if !args.alphabetState.IsAlphabet() {
args.l.Debug(logs.InnerringNonalphabetModeDoNotStopContainerEstimations)
return
}
epochN := args.epoch.EpochCounter()
if epochN == 0 { // estimates are invalid in genesis epoch
return
}
prm := container.StopEstimationPrm{}
prm.SetEpoch(epochN - 1)
err := args.cnrWrapper.StopEstimation(prm)
if err != nil {
args.l.Warn(logs.InnerringCantStopEpochEstimation,
zap.Uint64("epoch", epochN),
zap.String("error", err.Error()))
}
})
return epochTimer
}
func newEmissionTimer(args *emitTimerArgs) *timer.BlockTimer {
func newEmissionTimer(ctx context.Context, args *emitTimerArgs) *timer.BlockTimer {
return timer.NewBlockTimer(
timer.StaticBlockMeter(args.emitDuration),
func() {
args.ap.HandleGasEmission(timerEvent.NewAlphabetEmitTick{})
args.ap.HandleGasEmission(ctx, timerEvent.NewAlphabetEmitTick{})
},
)
}
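
Judging from the signatures visible in this diff (`NewBlockTimer`, `StaticBlockMeter`, `Reset`, `Tick`), a block timer invokes its handler once per interval of chain block heights fed to `Tick`. A usage sketch under those assumptions:

```go
package main

import (
	"fmt"

	"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/timer"
)

func main() {
	// Handler fires once every 10 blocks, as reported by the duration callback.
	bt := timer.NewBlockTimer(
		func() (uint32, error) { return 10, nil },
		func() { fmt.Println("new epoch tick") },
	)
	if err := bt.Reset(); err != nil { // arm the timer with the current duration
		panic(err)
	}
	for h := uint32(100); h <= 120; h++ {
		bt.Tick(h) // feed block heights; the handler fires at 10-block boundaries
	}
}
```
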


@ -3,29 +3,20 @@ package innerring
import (
"testing"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/client/container"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/logger/test"
"github.com/stretchr/testify/require"
)
func TestEpochTimer(t *testing.T) {
t.Parallel()
alphaState := &testAlphabetState{isAlphabet: true}
neh := &testNewEpochHandler{}
cnrStopper := &testContainerEstStopper{}
epochState := &testEpochState{
counter: 99,
duration: 10,
}
args := &epochTimerArgs{
l: test.NewLogger(t),
alphabetState: alphaState,
newEpochHandlers: []newEpochHandler{neh.Handle},
cnrWrapper: cnrStopper,
epoch: epochState,
stopEstimationDMul: 2,
stopEstimationDDiv: 10,
newEpochHandlers: []newEpochHandler{neh.Handle},
epoch: epochState,
}
et := newEpochTimer(args)
err := et.Reset()
@ -33,63 +24,43 @@ func TestEpochTimer(t *testing.T) {
et.Tick(100)
require.Equal(t, 0, neh.called, "invalid new epoch handler calls")
require.Equal(t, 0, cnrStopper.called, "invalid container stop handler calls")
et.Tick(101)
require.Equal(t, 0, neh.called, "invalid new epoch handler calls")
require.Equal(t, 1, cnrStopper.called, "invalid container stop handler calls")
et.Tick(102)
require.Equal(t, 0, neh.called, "invalid new epoch handler calls")
require.Equal(t, 1, cnrStopper.called, "invalid container stop handler calls")
et.Tick(103)
require.Equal(t, 0, neh.called, "invalid new epoch handler calls")
require.Equal(t, 1, cnrStopper.called, "invalid container stop handler calls")
var h uint32
for h = 104; h < 109; h++ {
et.Tick(h)
require.Equal(t, 0, neh.called, "invalid new epoch handler calls")
require.Equal(t, 1, cnrStopper.called, "invalid container stop handler calls")
}
et.Tick(109)
require.Equal(t, 1, neh.called, "invalid new epoch handler calls")
require.Equal(t, 1, cnrStopper.called, "invalid container stop handler calls")
et.Tick(110)
require.Equal(t, 1, neh.called, "invalid new epoch handler calls")
require.Equal(t, 1, cnrStopper.called, "invalid container stop handler calls")
et.Tick(111)
require.Equal(t, 1, neh.called, "invalid new epoch handler calls")
require.Equal(t, 2, cnrStopper.called, "invalid container stop handler calls")
et.Tick(112)
require.Equal(t, 1, neh.called, "invalid new epoch handler calls")
require.Equal(t, 2, cnrStopper.called, "invalid container stop handler calls")
et.Tick(113)
require.Equal(t, 1, neh.called, "invalid new epoch handler calls")
require.Equal(t, 2, cnrStopper.called, "invalid container stop handler calls")
for h = 114; h < 119; h++ {
et.Tick(h)
require.Equal(t, 1, neh.called, "invalid new epoch handler calls")
require.Equal(t, 2, cnrStopper.called, "invalid container stop handler calls")
}
et.Tick(120)
require.Equal(t, 2, neh.called, "invalid new epoch handler calls")
require.Equal(t, 2, cnrStopper.called, "invalid container stop handler calls")
}
type testAlphabetState struct {
isAlphabet bool
}
func (s *testAlphabetState) IsAlphabet() bool {
return s.isAlphabet
}
type testNewEpochHandler struct {
@ -100,15 +71,6 @@ func (h *testNewEpochHandler) Handle() {
h.called++
}
type testContainerEstStopper struct {
called int
}
func (s *testContainerEstStopper) StopEstimation(_ container.StopEstimationPrm) error {
s.called++
return nil
}
type testEpochState struct {
counter uint64
duration uint64


@ -35,8 +35,7 @@ import (
"google.golang.org/grpc"
)
func (s *Server) initNetmapProcessor(cfg *viper.Viper,
cnrClient *container.Client,
func (s *Server) initNetmapProcessor(ctx context.Context, cfg *viper.Viper,
alphaSync event.Handler,
) error {
locodeValidator, err := s.newLocodeValidator(cfg)
@ -49,17 +48,19 @@ func (s *Server) initNetmapProcessor(cfg *viper.Viper,
var netMapCandidateStateValidator statevalidation.NetMapCandidateValidator
netMapCandidateStateValidator.SetNetworkSettings(netSettings)
poolSize := cfg.GetInt("workers.netmap")
s.log.Debug(ctx, logs.NetmapNetmapWorkerPool, zap.Int("size", poolSize))
s.netmapProcessor, err = netmap.New(&netmap.Params{
Log: s.log,
Metrics: s.irMetrics,
PoolSize: cfg.GetInt("workers.netmap"),
PoolSize: poolSize,
NetmapClient: netmap.NewNetmapClient(s.netmapClient),
EpochTimer: s,
EpochState: s,
AlphabetState: s,
CleanupEnabled: cfg.GetBool("netmap_cleaner.enabled"),
CleanupThreshold: cfg.GetUint64("netmap_cleaner.threshold"),
ContainerWrapper: cnrClient,
NotaryDepositHandler: s.onlyAlphabetEventHandler(
s.notaryHandler,
),
@ -99,7 +100,7 @@ func (s *Server) initMainnet(ctx context.Context, cfg *viper.Viper, morphChain *
fromMainChainBlock, err := s.persistate.UInt32(persistateMainChainLastBlockKey)
if err != nil {
fromMainChainBlock = 0
s.log.Warn(logs.InnerringCantGetLastProcessedMainChainBlockNumber, zap.String("error", err.Error()))
s.log.Warn(ctx, logs.InnerringCantGetLastProcessedMainChainBlockNumber, zap.String("error", err.Error()))
}
mainnetChain.from = fromMainChainBlock
@ -139,12 +140,12 @@ func (s *Server) enableNotarySupport() error {
return nil
}
func (s *Server) initNotaryConfig() {
func (s *Server) initNotaryConfig(ctx context.Context) {
s.mainNotaryConfig = notaryConfigs(
!s.withoutMainNet && s.mainnetClient.ProbeNotary(), // if mainnet disabled then notary flag must be disabled too
)
s.log.Info(logs.InnerringNotarySupport,
s.log.Info(ctx, logs.InnerringNotarySupport,
zap.Bool("sidechain_enabled", true),
zap.Bool("mainchain_enabled", !s.mainNotaryConfig.disabled),
)
@ -154,8 +155,8 @@ func (s *Server) createAlphaSync(cfg *viper.Viper, frostfsCli *frostfsClient.Cli
var alphaSync event.Handler
if s.withoutMainNet || cfg.GetBool("governance.disable") {
alphaSync = func(event.Event) {
s.log.Debug(logs.InnerringAlphabetKeysSyncIsDisabled)
alphaSync = func(ctx context.Context, _ event.Event) {
s.log.Debug(ctx, logs.InnerringAlphabetKeysSyncIsDisabled)
}
} else {
// create governance processor
@ -198,21 +199,16 @@ func (s *Server) createIRFetcher() irFetcher {
return irf
}
func (s *Server) initTimers(cfg *viper.Viper, morphClients *serverMorphClients) {
func (s *Server) initTimers(ctx context.Context, cfg *viper.Viper) {
s.epochTimer = newEpochTimer(&epochTimerArgs{
l: s.log,
alphabetState: s,
newEpochHandlers: s.newEpochTickHandlers(),
cnrWrapper: morphClients.CnrClient,
epoch: s,
stopEstimationDMul: cfg.GetUint32("timers.stop_estimation.mul"),
stopEstimationDDiv: cfg.GetUint32("timers.stop_estimation.div"),
newEpochHandlers: s.newEpochTickHandlers(ctx),
epoch: s,
})
s.addBlockTimer(s.epochTimer)
// initialize emission timer
emissionTimer := newEmissionTimer(&emitTimerArgs{
emissionTimer := newEmissionTimer(ctx, &emitTimerArgs{
ap: s.alphabetProcessor,
emitDuration: cfg.GetUint32("timers.emit"),
})
@ -220,18 +216,20 @@ func (s *Server) initTimers(cfg *viper.Viper, morphClients *serverMorphClients)
s.addBlockTimer(emissionTimer)
}
func (s *Server) initAlphabetProcessor(cfg *viper.Viper) error {
func (s *Server) initAlphabetProcessor(ctx context.Context, cfg *viper.Viper) error {
parsedWallets, err := parseWalletAddressesFromStrings(cfg.GetStringSlice("emit.extra_wallets"))
if err != nil {
return err
}
poolSize := cfg.GetInt("workers.alphabet")
s.log.Debug(ctx, logs.AlphabetAlphabetWorkerPool, zap.Int("size", poolSize))
// create alphabet processor
s.alphabetProcessor, err = alphabet.New(&alphabet.Params{
ParsedWallets: parsedWallets,
Log: s.log,
Metrics: s.irMetrics,
PoolSize: cfg.GetInt("workers.alphabet"),
PoolSize: poolSize,
AlphabetContracts: s.contracts.alphabet,
NetmapClient: s.netmapClient,
MorphClient: s.morphClient,
@ -246,12 +244,14 @@ func (s *Server) initAlphabetProcessor(cfg *viper.Viper) error {
return err
}
func (s *Server) initContainerProcessor(cfg *viper.Viper, cnrClient *container.Client, frostfsIDClient *frostfsid.Client) error {
func (s *Server) initContainerProcessor(ctx context.Context, cfg *viper.Viper, cnrClient *container.Client, frostfsIDClient *frostfsid.Client) error {
poolSize := cfg.GetInt("workers.container")
s.log.Debug(ctx, logs.ContainerContainerWorkerPool, zap.Int("size", poolSize))
// container processor
containerProcessor, err := cont.New(&cont.Params{
Log: s.log,
Metrics: s.irMetrics,
PoolSize: cfg.GetInt("workers.container"),
PoolSize: poolSize,
AlphabetState: s,
ContainerClient: cnrClient,
MorphClient: cnrClient.Morph(),
@ -265,12 +265,14 @@ func (s *Server) initContainerProcessor(cfg *viper.Viper, cnrClient *container.C
return bindMorphProcessor(containerProcessor, s)
}
func (s *Server) initBalanceProcessor(cfg *viper.Viper, frostfsCli *frostfsClient.Client) error {
func (s *Server) initBalanceProcessor(ctx context.Context, cfg *viper.Viper, frostfsCli *frostfsClient.Client) error {
poolSize := cfg.GetInt("workers.balance")
s.log.Debug(ctx, logs.BalanceBalanceWorkerPool, zap.Int("size", poolSize))
// create balance processor
balanceProcessor, err := balance.New(&balance.Params{
Log: s.log,
Metrics: s.irMetrics,
PoolSize: cfg.GetInt("workers.balance"),
PoolSize: poolSize,
FrostFSClient: frostfsCli,
BalanceSC: s.contracts.balance,
AlphabetState: s,
@ -283,15 +285,17 @@ func (s *Server) initBalanceProcessor(cfg *viper.Viper, frostfsCli *frostfsClien
return bindMorphProcessor(balanceProcessor, s)
}
func (s *Server) initFrostFSMainnetProcessor(cfg *viper.Viper) error {
func (s *Server) initFrostFSMainnetProcessor(ctx context.Context, cfg *viper.Viper) error {
if s.withoutMainNet {
return nil
}
poolSize := cfg.GetInt("workers.frostfs")
s.log.Debug(ctx, logs.FrostFSFrostfsWorkerPool, zap.Int("size", poolSize))
frostfsProcessor, err := frostfs.New(&frostfs.Params{
Log: s.log,
Metrics: s.irMetrics,
PoolSize: cfg.GetInt("workers.frostfs"),
PoolSize: poolSize,
FrostFSContract: s.contracts.frostfs,
BalanceClient: s.balanceClient,
NetmapClient: s.netmapClient,
@ -311,10 +315,10 @@ func (s *Server) initFrostFSMainnetProcessor(cfg *viper.Viper) error {
return bindMainnetProcessor(frostfsProcessor, s)
}
func (s *Server) initGRPCServer(cfg *viper.Viper, log *logger.Logger, audit *atomic.Bool) error {
func (s *Server) initGRPCServer(ctx context.Context, cfg *viper.Viper, log *logger.Logger, audit *atomic.Bool) error {
controlSvcEndpoint := cfg.GetString("control.grpc.endpoint")
if controlSvcEndpoint == "" {
s.log.Info(logs.InnerringNoControlServerEndpointSpecified)
s.log.Info(ctx, logs.InnerringNoControlServerEndpointSpecified)
return nil
}
@ -410,7 +414,7 @@ func (s *Server) initClientsFromMorph() (*serverMorphClients, error) {
return result, nil
}
func (s *Server) initProcessors(cfg *viper.Viper, morphClients *serverMorphClients) error {
func (s *Server) initProcessors(ctx context.Context, cfg *viper.Viper, morphClients *serverMorphClients) error {
irf := s.createIRFetcher()
s.statusIndex = newInnerRingIndexer(
@ -425,27 +429,27 @@ func (s *Server) initProcessors(cfg *viper.Viper, morphClients *serverMorphClien
return err
}
err = s.initNetmapProcessor(cfg, morphClients.CnrClient, alphaSync)
err = s.initNetmapProcessor(ctx, cfg, alphaSync)
if err != nil {
return err
}
err = s.initContainerProcessor(cfg, morphClients.CnrClient, morphClients.FrostFSIDClient)
err = s.initContainerProcessor(ctx, cfg, morphClients.CnrClient, morphClients.FrostFSIDClient)
if err != nil {
return err
}
err = s.initBalanceProcessor(cfg, morphClients.FrostFSClient)
err = s.initBalanceProcessor(ctx, cfg, morphClients.FrostFSClient)
if err != nil {
return err
}
err = s.initFrostFSMainnetProcessor(cfg)
err = s.initFrostFSMainnetProcessor(ctx, cfg)
if err != nil {
return err
}
err = s.initAlphabetProcessor(cfg)
err = s.initAlphabetProcessor(ctx, cfg)
return err
}
@ -453,7 +457,7 @@ func (s *Server) initMorph(ctx context.Context, cfg *viper.Viper, errChan chan<-
fromSideChainBlock, err := s.persistate.UInt32(persistateSideChainLastBlockKey)
if err != nil {
fromSideChainBlock = 0
s.log.Warn(logs.InnerringCantGetLastProcessedSideChainBlockNumber, zap.String("error", err.Error()))
s.log.Warn(ctx, logs.InnerringCantGetLastProcessedSideChainBlockNumber, zap.String("error", err.Error()))
}
morphChain := &chainParams{
@ -478,7 +482,7 @@ func (s *Server) initMorph(ctx context.Context, cfg *viper.Viper, errChan chan<-
return nil, err
}
if err := s.morphClient.SetGroupSignerScope(); err != nil {
morphChain.log.Info(logs.InnerringFailedToSetGroupSignerScope, zap.Error(err))
morphChain.log.Info(ctx, logs.InnerringFailedToSetGroupSignerScope, zap.Error(err))
}
return morphChain, nil


@ -140,10 +140,10 @@ var (
// Start runs all event providers.
func (s *Server) Start(ctx context.Context, intError chan<- error) (err error) {
s.setHealthStatus(control.HealthStatus_STARTING)
s.setHealthStatus(ctx, control.HealthStatus_STARTING)
defer func() {
if err == nil {
s.setHealthStatus(control.HealthStatus_READY)
s.setHealthStatus(ctx, control.HealthStatus_READY)
}
}()
@ -152,12 +152,12 @@ func (s *Server) Start(ctx context.Context, intError chan<- error) (err error) {
return err
}
err = s.initConfigFromBlockchain()
err = s.initConfigFromBlockchain(ctx)
if err != nil {
return err
}
if s.IsAlphabet() {
if s.IsAlphabet(ctx) {
err = s.initMainNotary(ctx)
if err != nil {
return err
@ -173,14 +173,14 @@ func (s *Server) Start(ctx context.Context, intError chan<- error) (err error) {
prm.Validators = s.predefinedValidators
// vote for sidechain validator if it is prepared in config
err = s.voteForSidechainValidator(prm)
err = s.voteForSidechainValidator(ctx, prm)
if err != nil {
// we don't stop inner ring execution on this error
s.log.Warn(logs.InnerringCantVoteForPreparedValidators,
s.log.Warn(ctx, logs.InnerringCantVoteForPreparedValidators,
zap.String("error", err.Error()))
}
s.tickInitialExpoch()
s.tickInitialExpoch(ctx)
morphErr := make(chan error)
mainnnetErr := make(chan error)
@ -217,14 +217,14 @@ func (s *Server) Start(ctx context.Context, intError chan<- error) (err error) {
}
func (s *Server) registerMorphNewBlockEventHandler() {
s.morphListener.RegisterBlockHandler(func(b *block.Block) {
s.log.Debug(logs.InnerringNewBlock,
s.morphListener.RegisterBlockHandler(func(ctx context.Context, b *block.Block) {
s.log.Debug(ctx, logs.InnerringNewBlock,
zap.Uint32("index", b.Index),
)
err := s.persistate.SetUInt32(persistateSideChainLastBlockKey, b.Index)
if err != nil {
s.log.Warn(logs.InnerringCantUpdatePersistentState,
s.log.Warn(ctx, logs.InnerringCantUpdatePersistentState,
zap.String("chain", "side"),
zap.Uint32("block_index", b.Index))
}
@ -235,10 +235,10 @@ func (s *Server) registerMorphNewBlockEventHandler() {
func (s *Server) registerMainnetNewBlockEventHandler() {
if !s.withoutMainNet {
s.mainnetListener.RegisterBlockHandler(func(b *block.Block) {
s.mainnetListener.RegisterBlockHandler(func(ctx context.Context, b *block.Block) {
err := s.persistate.SetUInt32(persistateMainChainLastBlockKey, b.Index)
if err != nil {
s.log.Warn(logs.InnerringCantUpdatePersistentState,
s.log.Warn(ctx, logs.InnerringCantUpdatePersistentState,
zap.String("chain", "main"),
zap.Uint32("block_index", b.Index))
}
@ -283,11 +283,11 @@ func (s *Server) initSideNotary(ctx context.Context) error {
)
}
func (s *Server) tickInitialExpoch() {
func (s *Server) tickInitialExpoch(ctx context.Context) {
initialEpochTicker := timer.NewOneTickTimer(
timer.StaticBlockMeter(s.initialEpochTickDelta),
func() {
s.netmapProcessor.HandleNewEpochTick(timerEvent.NewEpochTick{})
s.netmapProcessor.HandleNewEpochTick(ctx, timerEvent.NewEpochTick{})
})
s.addBlockTimer(initialEpochTicker)
}
@ -299,15 +299,15 @@ func (s *Server) startWorkers(ctx context.Context) {
}
// Stop closes all subscription channels.
func (s *Server) Stop() {
s.setHealthStatus(control.HealthStatus_SHUTTING_DOWN)
func (s *Server) Stop(ctx context.Context) {
s.setHealthStatus(ctx, control.HealthStatus_SHUTTING_DOWN)
go s.morphListener.Stop()
go s.mainnetListener.Stop()
for _, c := range s.closers {
if err := c(); err != nil {
s.log.Warn(logs.InnerringCloserError,
s.log.Warn(ctx, logs.InnerringCloserError,
zap.String("error", err.Error()),
)
}
@ -349,7 +349,7 @@ func New(ctx context.Context, log *logger.Logger, cfg *viper.Viper, errChan chan
return nil, err
}
server.setHealthStatus(control.HealthStatus_HEALTH_STATUS_UNDEFINED)
server.setHealthStatus(ctx, control.HealthStatus_HEALTH_STATUS_UNDEFINED)
// parse notary support
server.feeConfig = config.NewFeeConfig(cfg)
@ -376,7 +376,7 @@ func New(ctx context.Context, log *logger.Logger, cfg *viper.Viper, errChan chan
return nil, err
}
server.initNotaryConfig()
server.initNotaryConfig(ctx)
err = server.initContracts(cfg)
if err != nil {
@ -400,14 +400,14 @@ func New(ctx context.Context, log *logger.Logger, cfg *viper.Viper, errChan chan
return nil, err
}
err = server.initProcessors(cfg, morphClients)
err = server.initProcessors(ctx, cfg, morphClients)
if err != nil {
return nil, err
}
server.initTimers(cfg, morphClients)
server.initTimers(ctx, cfg)
err = server.initGRPCServer(cfg, log, audit)
err = server.initGRPCServer(ctx, cfg, log, audit)
if err != nil {
return nil, err
}
@ -438,7 +438,7 @@ func createListener(ctx context.Context, cli *client.Client, p *chainParams) (ev
}
listener, err := event.NewListener(event.ListenerParams{
Logger: &logger.Logger{Logger: p.log.With(zap.String("chain", p.name))},
Logger: p.log.With(zap.String("chain", p.name)),
Subscriber: sub,
})
if err != nil {
@ -573,7 +573,7 @@ func parseMultinetConfig(cfg *viper.Viper, m metrics.MultinetMetrics) internalNe
return nc
}
func (s *Server) initConfigFromBlockchain() error {
func (s *Server) initConfigFromBlockchain(ctx context.Context) error {
// get current epoch
epoch, err := s.netmapClient.Epoch()
if err != nil {
@ -602,9 +602,9 @@ func (s *Server) initConfigFromBlockchain() error {
return err
}
s.log.Debug(logs.InnerringReadConfigFromBlockchain,
zap.Bool("active", s.IsActive()),
zap.Bool("alphabet", s.IsAlphabet()),
s.log.Debug(ctx, logs.InnerringReadConfigFromBlockchain,
zap.Bool("active", s.IsActive(ctx)),
zap.Bool("alphabet", s.IsAlphabet(ctx)),
zap.Uint64("epoch", epoch),
zap.Uint32("precision", balancePrecision),
zap.Uint32("init_epoch_tick_delta", s.initialEpochTickDelta),
@ -635,17 +635,17 @@ func (s *Server) nextEpochBlockDelta() (uint32, error) {
// onlyAlphabet wrapper around event handler that executes it
// only if inner ring node is alphabet node.
func (s *Server) onlyAlphabetEventHandler(f event.Handler) event.Handler {
return func(ev event.Event) {
if s.IsAlphabet() {
f(ev)
return func(ctx context.Context, ev event.Event) {
if s.IsAlphabet(ctx) {
f(ctx, ev)
}
}
}
func (s *Server) newEpochTickHandlers() []newEpochHandler {
func (s *Server) newEpochTickHandlers(ctx context.Context) []newEpochHandler {
newEpochHandlers := []newEpochHandler{
func() {
s.netmapProcessor.HandleNewEpochTick(timerEvent.NewEpochTick{})
s.netmapProcessor.HandleNewEpochTick(ctx, timerEvent.NewEpochTick{})
},
}


@ -28,38 +28,39 @@ const (
gasDivisor = 2
)
func (s *Server) depositMainNotary() (tx util.Uint256, err error) {
func (s *Server) depositMainNotary(ctx context.Context) (tx util.Uint256, err error) {
depositAmount, err := client.CalculateNotaryDepositAmount(s.mainnetClient, gasMultiplier, gasDivisor)
if err != nil {
return util.Uint256{}, fmt.Errorf("could not calculate main notary deposit amount: %w", err)
}
return s.mainnetClient.DepositNotary(
ctx,
depositAmount,
uint32(s.epochDuration.Load())+notaryExtraBlocks,
)
}
func (s *Server) depositSideNotary() (util.Uint256, error) {
func (s *Server) depositSideNotary(ctx context.Context) (util.Uint256, error) {
depositAmount, err := client.CalculateNotaryDepositAmount(s.morphClient, gasMultiplier, gasDivisor)
if err != nil {
return util.Uint256{}, fmt.Errorf("could not calculate side notary deposit amount: %w", err)
}
tx, _, err := s.morphClient.DepositEndlessNotary(depositAmount)
tx, _, err := s.morphClient.DepositEndlessNotary(ctx, depositAmount)
return tx, err
}
func (s *Server) notaryHandler(_ event.Event) {
func (s *Server) notaryHandler(ctx context.Context, _ event.Event) {
if !s.mainNotaryConfig.disabled {
_, err := s.depositMainNotary()
_, err := s.depositMainNotary(ctx)
if err != nil {
s.log.Error(logs.InnerringCantMakeNotaryDepositInMainChain, zap.Error(err))
s.log.Error(ctx, logs.InnerringCantMakeNotaryDepositInMainChain, zap.Error(err))
}
}
if _, err := s.depositSideNotary(); err != nil {
s.log.Error(logs.InnerringCantMakeNotaryDepositInSideChain, zap.Error(err))
if _, err := s.depositSideNotary(ctx); err != nil {
s.log.Error(ctx, logs.InnerringCantMakeNotaryDepositInSideChain, zap.Error(err))
}
}
@ -72,7 +73,7 @@ func (s *Server) awaitSideNotaryDeposit(ctx context.Context, tx util.Uint256) er
}
func (s *Server) initNotary(ctx context.Context, deposit depositor, await awaiter, msg string) error {
tx, err := deposit()
tx, err := deposit(ctx)
if err != nil {
return err
}
@ -81,11 +82,11 @@ func (s *Server) initNotary(ctx context.Context, deposit depositor, await awaite
// non-error deposit with an empty TX hash means
// that the deposit has already been made; no
// need to wait for it.
s.log.Info(logs.InnerringNotaryDepositHasAlreadyBeenMade)
s.log.Info(ctx, logs.InnerringNotaryDepositHasAlreadyBeenMade)
return nil
}
s.log.Info(msg)
s.log.Info(ctx, msg)
return await(ctx, tx)
}
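
`initNotary` is generic over the `depositor` and `awaiter` function types declared in the timers file: run the deposit, treat an empty transaction hash as an already-made deposit, otherwise wait for the transaction. A self-contained restatement of that flow, with the types redeclared locally for the sketch:

```go
package main

import (
	"context"
	"fmt"

	"github.com/nspcc-dev/neo-go/pkg/util"
)

type (
	depositor func(context.Context) (util.Uint256, error)
	awaiter   func(context.Context, util.Uint256) error
)

// initNotary mirrors the flow above: deposit, then await unless already made.
func initNotary(ctx context.Context, deposit depositor, await awaiter, msg string) error {
	tx, err := deposit(ctx)
	if err != nil {
		return err
	}
	if tx.Equals(util.Uint256{}) {
		fmt.Println("notary deposit has already been made")
		return nil
	}
	fmt.Println(msg)
	return await(ctx, tx)
}
```
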


@ -1,6 +1,8 @@
package alphabet
import (
"context"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/innerring/processors"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/innerring/timers"
@ -8,16 +10,16 @@ import (
"go.uber.org/zap"
)
func (ap *Processor) HandleGasEmission(ev event.Event) {
func (ap *Processor) HandleGasEmission(ctx context.Context, ev event.Event) {
_ = ev.(timers.NewAlphabetEmitTick)
ap.log.Info(logs.AlphabetTick, zap.String("type", "alphabet gas emit"))
ap.log.Info(ctx, logs.AlphabetTick, zap.String("type", "alphabet gas emit"))
// send event to the worker pool
err := processors.SubmitEvent(ap.pool, ap.metrics, "alphabet_emit_gas", ap.processEmit)
err := processors.SubmitEvent(ap.pool, ap.metrics, "alphabet_emit_gas", func() bool { return ap.processEmit(ctx) })
if err != nil {
// the system can be moved into a controlled degradation stage
ap.log.Warn(logs.AlphabetAlphabetProcessorWorkerPoolDrained,
ap.log.Warn(ctx, logs.AlphabetAlphabetProcessorWorkerPoolDrained,
zap.Int("capacity", ap.pool.Cap()))
}
}
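
The handler defers the actual work to a bounded worker pool: if the pool is saturated, the event is dropped with a warning instead of blocking the notification loop. A self-contained sketch of that pattern with a nonblocking `ants` pool; the `submitEvent` helper here is a simplification of the real `processors.SubmitEvent`, which also records metrics:

```go
package main

import (
	"fmt"

	"github.com/panjf2000/ants/v2"
)

// submitEvent schedules handler on the pool; with a nonblocking pool,
// Submit returns an error immediately when all workers are busy.
func submitEvent(pool *ants.Pool, handler func() bool) error {
	return pool.Submit(func() { _ = handler() })
}

func main() {
	pool, err := ants.NewPool(10, ants.WithNonblocking(true))
	if err != nil {
		panic(err)
	}
	defer pool.Release()

	err = submitEvent(pool, func() bool {
		fmt.Println("processing event")
		return true
	})
	if err != nil {
		// pool drained: controlled degradation, the event is dropped
		fmt.Println("worker pool drained, capacity:", pool.Cap())
	}
}
```
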


@ -1,6 +1,7 @@
package alphabet_test
import (
"context"
"testing"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/innerring"
@ -60,7 +61,7 @@ func TestProcessorEmitsGasToNetmapAndAlphabet(t *testing.T) {
processor, err := alphabet.New(params)
require.NoError(t, err, "failed to create processor instance")
processor.HandleGasEmission(timers.NewAlphabetEmitTick{})
processor.HandleGasEmission(context.Background(), timers.NewAlphabetEmitTick{})
processor.WaitPoolRunning()
@ -137,7 +138,7 @@ func TestProcessorEmitsGasToNetmapIfNoParsedWallets(t *testing.T) {
processor, err := alphabet.New(params)
require.NoError(t, err, "failed to create processor instance")
processor.HandleGasEmission(timers.NewAlphabetEmitTick{})
processor.HandleGasEmission(context.Background(), timers.NewAlphabetEmitTick{})
processor.WaitPoolRunning()
@ -198,7 +199,7 @@ func TestProcessorDoesntEmitGasIfNoNetmapOrParsedWallets(t *testing.T) {
processor, err := alphabet.New(params)
require.NoError(t, err, "failed to create processor instance")
processor.HandleGasEmission(timers.NewAlphabetEmitTick{})
processor.HandleGasEmission(context.Background(), timers.NewAlphabetEmitTick{})
processor.WaitPoolRunning()
@ -219,7 +220,7 @@ type testIndexer struct {
index int
}
func (i *testIndexer) AlphabetIndex() int {
func (i *testIndexer) AlphabetIndex(context.Context) int {
return i.index
}
@ -246,7 +247,7 @@ type testMorphClient struct {
batchTransferedGas []batchTransferGas
}
func (c *testMorphClient) Invoke(contract util.Uint160, fee fixedn.Fixed8, method string, args ...any) (uint32, error) {
func (c *testMorphClient) Invoke(_ context.Context, contract util.Uint160, fee fixedn.Fixed8, method string, args ...any) (uint32, error) {
c.invokedMethods = append(c.invokedMethods,
invokedMethod{
contract: contract,


@ -1,6 +1,7 @@
package alphabet
import (
"context"
"crypto/elliptic"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs"
@ -13,39 +14,39 @@ import (
const emitMethod = "emit"
func (ap *Processor) processEmit() bool {
index := ap.irList.AlphabetIndex()
func (ap *Processor) processEmit(ctx context.Context) bool {
index := ap.irList.AlphabetIndex(ctx)
if index < 0 {
ap.log.Info(logs.AlphabetNonAlphabetModeIgnoreGasEmissionEvent)
ap.log.Info(ctx, logs.AlphabetNonAlphabetModeIgnoreGasEmissionEvent)
return true
}
contract, ok := ap.alphabetContracts.GetByIndex(index)
if !ok {
ap.log.Debug(logs.AlphabetNodeIsOutOfAlphabetRangeIgnoreGasEmissionEvent,
ap.log.Debug(ctx, logs.AlphabetNodeIsOutOfAlphabetRangeIgnoreGasEmissionEvent,
zap.Int("index", index))
return false
}
// there is no signature collecting, so we don't need extra fee
_, err := ap.morphClient.Invoke(contract, 0, emitMethod)
_, err := ap.morphClient.Invoke(ctx, contract, 0, emitMethod)
if err != nil {
ap.log.Warn(logs.AlphabetCantInvokeAlphabetEmitMethod, zap.String("error", err.Error()))
ap.log.Warn(ctx, logs.AlphabetCantInvokeAlphabetEmitMethod, zap.String("error", err.Error()))
return false
}
if ap.storageEmission == 0 {
ap.log.Info(logs.AlphabetStorageNodeEmissionIsOff)
ap.log.Info(ctx, logs.AlphabetStorageNodeEmissionIsOff)
return true
}
networkMap, err := ap.netmapClient.NetMap()
if err != nil {
ap.log.Warn(logs.AlphabetCantGetNetmapSnapshotToEmitGasToStorageNodes,
ap.log.Warn(ctx, logs.AlphabetCantGetNetmapSnapshotToEmitGasToStorageNodes,
zap.String("error", err.Error()))
return false
@ -58,7 +59,7 @@ func (ap *Processor) processEmit() bool {
ap.pwLock.RUnlock()
extraLen := len(pw)
ap.log.Debug(logs.AlphabetGasEmission,
ap.log.Debug(ctx, logs.AlphabetGasEmission,
zap.Int("network_map", nmLen),
zap.Int("extra_wallets", extraLen))
@ -68,20 +69,20 @@ func (ap *Processor) processEmit() bool {
gasPerNode := fixedn.Fixed8(ap.storageEmission / uint64(nmLen+extraLen))
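// Even split across all recipients: e.g. with storageEmission = 100000,
// 8 netmap nodes and 2 extra wallets, gasPerNode = fixedn.Fixed8(10000);
// integer division silently drops any remainder.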
ap.transferGasToNetmapNodes(nmNodes, gasPerNode)
ap.transferGasToNetmapNodes(ctx, nmNodes, gasPerNode)
ap.transferGasToExtraNodes(pw, gasPerNode)
ap.transferGasToExtraNodes(ctx, pw, gasPerNode)
return true
}
func (ap *Processor) transferGasToNetmapNodes(nmNodes []netmap.NodeInfo, gasPerNode fixedn.Fixed8) {
func (ap *Processor) transferGasToNetmapNodes(ctx context.Context, nmNodes []netmap.NodeInfo, gasPerNode fixedn.Fixed8) {
for i := range nmNodes {
keyBytes := nmNodes[i].PublicKey()
key, err := keys.NewPublicKeyFromBytes(keyBytes, elliptic.P256())
if err != nil {
ap.log.Warn(logs.AlphabetCantParseNodePublicKey,
ap.log.Warn(ctx, logs.AlphabetCantParseNodePublicKey,
zap.String("error", err.Error()))
continue
@ -89,7 +90,7 @@ func (ap *Processor) transferGasToNetmapNodes(nmNodes []netmap.NodeInfo, gasPerN
err = ap.morphClient.TransferGas(key.GetScriptHash(), gasPerNode)
if err != nil {
ap.log.Warn(logs.AlphabetCantTransferGas,
ap.log.Warn(ctx, logs.AlphabetCantTransferGas,
zap.String("receiver", key.Address()),
zap.Int64("amount", int64(gasPerNode)),
zap.String("error", err.Error()),
@ -98,7 +99,7 @@ func (ap *Processor) transferGasToNetmapNodes(nmNodes []netmap.NodeInfo, gasPerN
}
}
func (ap *Processor) transferGasToExtraNodes(pw []util.Uint160, gasPerNode fixedn.Fixed8) {
func (ap *Processor) transferGasToExtraNodes(ctx context.Context, pw []util.Uint160, gasPerNode fixedn.Fixed8) {
if len(pw) > 0 {
err := ap.morphClient.BatchTransferGas(pw, gasPerNode)
if err != nil {
@ -106,7 +107,7 @@ func (ap *Processor) transferGasToExtraNodes(pw []util.Uint160, gasPerNode fixed
for i, addr := range pw {
receiversLog[i] = addr.StringLE()
}
ap.log.Warn(logs.AlphabetCantTransferGasToWallet,
ap.log.Warn(ctx, logs.AlphabetCantTransferGasToWallet,
zap.Strings("receivers", receiversLog),
zap.Int64("amount", int64(gasPerNode)),
zap.String("error", err.Error()),


@ -1,12 +1,12 @@
package alphabet
import (
"context"
"errors"
"fmt"
"sync"
"time"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/innerring/metrics"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/morph/event"
"git.frostfs.info/TrueCloudLab/frostfs-node/pkg/util/logger"
@ -14,13 +14,12 @@ import (
"github.com/nspcc-dev/neo-go/pkg/encoding/fixedn"
"github.com/nspcc-dev/neo-go/pkg/util"
"github.com/panjf2000/ants/v2"
"go.uber.org/zap"
)
type (
// Indexer is a callback interface for inner ring global state.
Indexer interface {
AlphabetIndex() int
AlphabetIndex(context.Context) int
}
// Contracts is an interface of the storage
@ -40,7 +39,7 @@ type (
}
morphClient interface {
Invoke(contract util.Uint160, fee fixedn.Fixed8, method string, args ...any) (uint32, error)
Invoke(ctx context.Context, contract util.Uint160, fee fixedn.Fixed8, method string, args ...any) (uint32, error)
TransferGas(receiver util.Uint160, amount fixedn.Fixed8) error
BatchTransferGas(receivers []util.Uint160, amount fixedn.Fixed8) error
}
@ -85,8 +84,6 @@ func New(p *Params) (*Processor, error) {
return nil, errors.New("ir/alphabet: global state is not set")
}
p.Log.Debug(logs.AlphabetAlphabetWorkerPool, zap.Int("size", p.PoolSize))
pool, err := ants.NewPool(p.PoolSize, ants.WithNonblocking(true))
if err != nil {
return nil, fmt.Errorf("ir/frostfs: can't create worker pool: %w", err)


@ -1,6 +1,7 @@
package balance
import (
"context"
"encoding/hex"
"git.frostfs.info/TrueCloudLab/frostfs-node/internal/logs"
@ -10,20 +11,20 @@ import (
"go.uber.org/zap"
)
func (bp *Processor) handleLock(ev event.Event) {
func (bp *Processor) handleLock(ctx context.Context, ev event.Event) {
lock := ev.(balanceEvent.Lock)
bp.log.Info(logs.Notification,
bp.log.Info(ctx, logs.Notification,
zap.String("type", "lock"),
zap.String("value", hex.EncodeToString(lock.ID())))
// send an event to the worker pool
err := processors.SubmitEvent(bp.pool, bp.metrics, "lock", func() bool {
return bp.processLock(&lock)
return bp.processLock(ctx, &lock)
})
if err != nil {
// the system can be moved into a controlled degradation stage
bp.log.Warn(logs.BalanceBalanceWorkerPoolDrained,
bp.log.Warn(ctx, logs.BalanceBalanceWorkerPoolDrained,
zap.Int("capacity", bp.pool.Cap()))
}
}


@ -1,6 +1,7 @@
package balance
import (
"context"
"testing"
"time"
@ -30,7 +31,7 @@ func TestProcessorCallsFrostFSContractForLockEvent(t *testing.T) {
})
require.NoError(t, err, "failed to create processor")
processor.handleLock(balanceEvent.Lock{})
processor.handleLock(context.Background(), balanceEvent.Lock{})
for processor.pool.Running() > 0 {
time.Sleep(10 * time.Millisecond)
@ -56,7 +57,7 @@ func TestProcessorDoesntCallFrostFSContractIfNotAlphabet(t *testing.T) {
})
require.NoError(t, err, "failed to create processor")
processor.handleLock(balanceEvent.Lock{})
processor.handleLock(context.Background(), balanceEvent.Lock{})
for processor.pool.Running() > 0 {
time.Sleep(10 * time.Millisecond)
@ -69,7 +70,7 @@ type testAlphabetState struct {
isAlphabet bool
}
func (s *testAlphabetState) IsAlphabet() bool {
func (s *testAlphabetState) IsAlphabet(context.Context) bool {
return s.isAlphabet
}
@ -83,7 +84,7 @@ type testFrostFSContractClient struct {
chequeCalls int
}
func (c *testFrostFSContractClient) Cheque(p frostfscontract.ChequePrm) error {
func (c *testFrostFSContractClient) Cheque(_ context.Context, p frostfscontract.ChequePrm) error {
c.chequeCalls++
return nil
}

Some files were not shown because too many files have changed in this diff.